path (string, lengths 7 to 265) | concatenated_notebook (string, lengths 46 to 17M)
---|---|
Data Analysis/NumPy/numpy-exercises.ipynb | ###Markdown
NumPy Practice
This notebook offers a set of exercises for different tasks with NumPy. Note that there may be more than one way to answer a question or complete an exercise. Exercises are based on (and directly taken from) the quick introduction to NumPy notebook. Different tasks will be detailed by comments or text. For further reference and resources, it's advised to check out the [NumPy documentation](https://numpy.org/devdocs/user/index.html). And if you get stuck, try searching for a question in the following format: "how to do XYZ with numpy", where XYZ is the function you want to leverage from NumPy.
###Code
# Import NumPy as its abbreviation 'np'
import numpy as np
# Create a 1-dimensional NumPy array using np.array()
a1 = np.array([1,2,3])
# Create a 2-dimensional NumPy array using np.array()
a2 = np.array([[1,2,3],
[4,5,6]])
# Create a 3-dimensional Numpy array using np.array()
a3 = np.array([[[1,2,3],
[4,5,6],
[7,8,9]],
[[10,11,12],
[13,14,15],
[16,17,18]],
[[19,20,21],
[22,23,24],
[25,26,27]]])
###Output
_____no_output_____
###Markdown
Now that you've created 3 different arrays, let's find details about them. Find the shape, number of dimensions, data type, size and type of each array.
###Code
# Attributes of 1-dimensional array (shape,
# number of dimensions, data type, size and type)
a1.shape, a1.ndim, a1.dtype, a1.size, type(a1)
# Attributes of 2-dimensional array
a2.shape, a2.ndim, a2.dtype, a2.size, type(a2)
# Attributes of 3-dimensional array
a3.shape, a3.ndim, a3.dtype, a3.size, type(a3)
# Import pandas and create a DataFrame out of one
# of the arrays you've created
import pandas as pd
pd.DataFrame(a2)
# Create an array of shape (10, 2) with only ones
np.ones((10,2))
# Create an array of shape (7, 2, 3) of only zeros
np.zeros((7,2,3))
# Create an array within a range of 0 and 100 with step 3
np.arange(0,100,3)
# Create a random array with numbers between 0 and 10 of size (7, 2)
np.random.randint(0,10,size=(7,2))
# Create a random array of floats between 0 & 1 of shape (3, 5)
np.random.random((3,5))
# Set the random seed to 42
np.random.seed(42)
# Create a random array of numbers between 0 & 10 of size (4, 6)
np.random.randint(0,10,size=(4,6))
###Output
_____no_output_____
###Markdown
Run the cell above again. What happens? Are the numbers in the array different or the same? Why do you think this is?
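For example, re-setting the seed before generating reproduces exactly the same numbers:
```python
np.random.seed(42)
first = np.random.randint(0, 10, size=(4, 6))
np.random.seed(42)
second = np.random.randint(0, 10, size=(4, 6))
print((first == second).all())  # True: the same seed gives the same "random" output
```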
###Code
# Create an array of random numbers between 1 & 10 of size (3, 7)
# and save it to a variable
one_and_10 = np.random.randint(1,10,size=(3,7))
# Find the unique numbers in the array you just created
np.unique(one_and_10)
# Find the 0'th index of the latest array you created
one_and_10[0]
# Get the first 2 rows of latest array you created
one_and_10[:2]
# Get the first 2 values of the first 2 rows of the latest array
one_and_10[:2,:2]
# Create a random array of numbers between 0 & 10 and an array of ones
# both of size (3, 5), save them both to variables
zero_and_10 = np.random.randint(0,10,(3,5))
ones = np.ones((3,5))
# Add the two arrays together
zero_and_10 + ones
# Create another array of ones of shape (5, 3)
ones_t = np.ones((5,3))
# Try add the array of ones and the other most recent array together
ones + ones_t
###Output
_____no_output_____
###Markdown
When you try the last cell, it produces an error. Why do you think this is? How would you fix it?
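One possible fix is to make the shapes match, for example by transposing the (5, 3) array of ones:
```python
# ones has shape (3, 5) and ones_t has shape (5, 3); transposing one of them
# gives two (3, 5) arrays that can be added elementwise.
ones + ones_t.T
```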
###Code
# Create another array of ones of shape (3, 5)
ones = np.ones((3,5))
# Subtract the new array of ones from the other most recent array
zero_and_10 - ones
# Multiply the ones array with the latest array
zero_and_10 * ones
# Take the latest array to the power of 2 using '**'
zero_and_10 ** 2
# Do the same thing with np.square()
np.square(zero_and_10)
# Find the mean of the latest array using np.mean()
np.mean(zero_and_10)
# Find the maximum of the latest array using np.max()
np.max(zero_and_10)
# Find the minimum of the latest array using np.min()
np.min(zero_and_10)
# Find the standard deviation of the latest array
np.std(zero_and_10)
# Find the variance of the latest array
np.var(zero_and_10)
# Reshape the latest array to (3, 5, 1)
zero_and_10.reshape(3,5,1)
# Transpose the latest array
zero_and_10.T
###Output
_____no_output_____
###Markdown
What does the transpose do?
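One quick way to see the effect is to compare the shapes before and after:
```python
print(zero_and_10.shape)    # (3, 5)
print(zero_and_10.T.shape)  # (5, 3): rows and columns are swapped
```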
###Code
# Create two arrays of random integers between 0 to 10
# one of size (3, 3) the other of size (3, 2)
a4 = np.random.randint(0,10,(3,3))
a5 = np.random.randint(0,10,(3,2))
# Perform a dot product on the two newest arrays you created
np.dot(a4,a5)
# Create two arrays of random integers between 0 to 10
# both of size (4, 3)
a6 = np.random.randint(0,10,(4,3))
a7 = np.random.randint(0,10,(4,3))
# Perform a dot product on the two newest arrays you created
np.dot(a6,a7)
###Output
_____no_output_____
###Markdown
It doesn't work. How would you fix it?
###Code
# Take the latest two arrays, perform a transpose on one of them and then perform
# a dot product on them both
np.dot(a6,a7.T)
###Output
_____no_output_____
###Markdown
Notice how performing a transpose allows the dot product to happen. Why is this? Checking out the documentation on [`np.dot()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) may help, as well as reading [Math is Fun's guide on the dot product](https://www.mathsisfun.com/algebra/vectors-dot-product.html). Let's now compare arrays.
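`np.dot` needs the inner dimensions to agree: an (m, n) array times an (n, p) array gives an (m, p) result. For example:
```python
print(a6.shape, a7.T.shape)    # (4, 3) and (3, 4): the inner dimensions match
print(np.dot(a6, a7.T).shape)  # (4, 4)
```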
###Code
# Create two arrays of random integers between 0 & 10 of the same shape
# and save them to variables
a7 = np.random.randint(0,10,(4,3))
a8 = np.random.randint(0,10,(4,3))
# Compare the two arrays with '>'
a7 > a8
###Output
_____no_output_____
###Markdown
What happens when you compare the arrays with `>`?
###Code
# Compare the two arrays with '>='
a7 >= a8
# Find which elements of the first array are greater than 7
a7 > 7
# Which parts of each array are equal? (try using '==')
a7 == a8
# Sort one of the arrays you just created in ascending order
np.sort(a7)
# Sort the indexes of one of the arrays you just created
np.argsort(a7)
# Find the index with the maximum value in one of the arrays you've created
np.argmax(a7)
# Find the index with the minimum value in one of the arrays you've created
np.argmin(a7)
# Find the indexes with the maximum values down the 1st axis (axis=1)
# of one of the arrays you created
np.argmax(a7,axis=1)
# Find the indexes with the minimum values across the 0th axis (axis=0)
# of one of the arrays you created
np.argmin(a7,axis=0)
# Create an array of normally distributed random numbers
np.random.randn(1,10)
# Create an array with 10 evenly spaced numbers between 1 and 100
np.linspace(1,100,10)
###Output
_____no_output_____ |
State_vectors_to_orbital_elements.ipynb | ###Markdown
State vectors to orbital elements
###Code
import numpy as np
##Unit Conversions##
rad_to_deg = 180/np.pi;
##Known values##
mu = 398600;
t = 1000;
###Output
_____no_output_____
###Markdown
Defining the given state vector
###Code
##Given state vector##
x = -13686.889393418738;
y = -13344.772667428870;
z = 10814.629905439588;
s = np.array([x,y,z]);
xdot = 0.88259108105901152;
ydot = 1.9876415852134037;
zdot = 3.4114313525042017;
sdot = np.array([xdot,ydot,zdot]);
###Output
_____no_output_____
###Markdown
Converting the given state vector to orbital elements

$$r = \sqrt{x^2+y^2+z^2}$$
$$v = \sqrt{v_x^2+v_y^2+v_z^2}$$

Using the vis-viva equation,
$$a = \left(\frac{2}{r}-\frac{v^2}{\mu}\right)^{-1}$$

The eccentricity e can be found from:
$$\mu\,\bar e = \left(v^2-\frac{\mu}{r}\right)\bar r - (\bar r \cdot \bar v)\,\bar v$$
$$e = \sqrt{e_x^2+e_y^2+e_z^2}$$

The inclination i can be evaluated from:
$$\cos i = \hat w \cdot \hat k, \qquad \hat w = \frac{\bar r \times \bar v}{\mid \bar r \times \bar v \mid}$$

$\hat N$ is the unit vector along the nodal line:
$$\hat N = \frac{\hat k \times \hat w}{\mid \hat k \times \hat w \mid}$$

$\Omega$ can be computed as follows:
$$\cos \Omega = \hat i \cdot \hat N, \qquad \sin \Omega = (\hat i \times \hat N) \cdot \hat k$$
Both the sine and cosine are computed because $\Omega$, $\omega$, and $\nu$ vary from 0 to 360 degrees.

With $\hat e = \bar e / e$, $\omega$ can be calculated from:
$$\cos \omega = \hat N \cdot \hat e, \qquad \sin \omega = (\hat N \times \hat e) \cdot \hat w$$

The last orbital element, $\nu$, is calculated as follows:
$$\cos \nu = \hat e \cdot \hat r, \qquad \sin \nu = (\hat e \times \hat r) \cdot \hat w$$
###Code
##Converting to orbital elements##
r = np.sqrt((x**2)+(y**2)+(z**2));
v = np.sqrt((xdot**2)+(ydot**2)+(zdot**2));
a = np.power((2/r)-((v**2)/mu),-1);
e_vec = ((((v**2)-(mu/r))*s)-(np.dot
(np.transpose(sdot)
,s)*sdot))/mu;
e = np.sqrt(np.dot(np.transpose(e_vec),e_vec));
w_cap = np.cross(s,sdot)/np.sqrt(np.sum
(np.power(np.cross
(s,sdot),2)));
k_cap = np.array([0,0,1]);
i_cap = np.array([1,0,0]);
j_cap = np.array([0,1,0]);
N_cap = np.cross(k_cap,w_cap)/np.sqrt(np.sum
(np.power(np.cross
(k_cap,w_cap),2)));
cos_nu_0 = np.dot(s/r,np.transpose(e_vec/e));
sin_nu_0 = np.dot(np.cross(e_vec/e,s/r),
np.transpose(w_cap));
if sin_nu_0 >= 0:
nu_0 = np.arccos(cos_nu_0);
if sin_nu_0 < 0:
nu_0 = (2*np.pi)-np.arccos(cos_nu_0);
i = np.arccos(np.dot(w_cap,np.transpose(k_cap)));
cos_Omega = np.dot(i_cap,np.transpose(N_cap));
sin_Omega = np.dot(np.cross(i_cap,N_cap),
np.transpose(k_cap));
if sin_Omega >= 0:
Omega = np.arccos(cos_Omega);
if sin_Omega < 0:
Omega = (2*np.pi)-np.arccos(cos_Omega);
cos_omega = np.dot(N_cap,np.transpose(e_vec/e));
sin_omega = np.dot(np.cross(N_cap,e_vec/e),np.transpose(w_cap));
if sin_omega >= 0:
omega = np.arccos(cos_omega);
if sin_omega < 0:
omega = (2*np.pi)-np.arccos(cos_omega);
print('a =',a);
print('e =',e);
print('i =',i*rad_to_deg);
print('Omega =',Omega*rad_to_deg);
print('omega =',omega*rad_to_deg);
print('nu_0 =',nu_0*rad_to_deg);
###Output
a = 20000.018460650677
e = 0.09999900702499649
i = 100.0
Omega = 230.0
omega = 199.99988817526065
nu_0 = 190.0001118247393
###Markdown
Estimating the true anomaly at t = 1000 seconds. The mean anomaly is propagated forward using the mean motion, Kepler's equation is solved iteratively (Newton-Raphson) for the eccentric anomaly, and the result is converted back to the true anomaly.
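The relations implemented in the next cell are:
$$E_0 = 2\tan^{-1}\left(\sqrt{\frac{1-e}{1+e}}\tan\frac{\nu_0}{2}\right)$$
$$M_0 = E_0 - e\sin E_0,\qquad n = \sqrt{\frac{\mu}{a^3}},\qquad M = M_0 + nt$$
Kepler's equation $M = E - e\sin E$ is solved for $E$ with Newton-Raphson iterations,
$$E_{k+1} = E_k - \frac{E_k - e\sin E_k - M}{1 - e\cos E_k}$$
and the true anomaly follows from
$$\nu = 2\tan^{-1}\left(\sqrt{\frac{1+e}{1-e}}\tan\frac{E}{2}\right)$$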
###Code
##Calculating anomaly at t= 1000 seconds##
n = np.sqrt(mu/(a**3));
E_0 = 2*np.arctan(np.sqrt((1-e)/(1+e))*np.tan(nu_0/2));
M_0 = E_0-(e*np.sin(E_0));
M = M_0+(n*t);
E = 0;
fdotE = 1-(e*np.cos(E))
fE = E-(e*np.sin(E))-M;
epsilon = 0.0000001;
while fE>epsilon or fE<(-1*epsilon):
E = E-(fE/fdotE);
fE = E-(e*np.sin(E))-M;
fdotE = 1-(e*np.cos(E));
nu = 2*np.arctan(np.sqrt((1+e)/(1-e))*np.tan(E/2));
###Output
_____no_output_____
###Markdown
State vector at t = 1000 seconds The state vector after 1000 seconds is displayed as the result of the code.
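The cell below implements the following relations: the position and velocity in the perifocal frame,
$$\bar r_p = r\begin{pmatrix}\cos\nu\\ \sin\nu\\ 0\end{pmatrix},\qquad \bar v_p = \frac{\mu}{H}\begin{pmatrix}-\sin\nu\\ e+\cos\nu\\ 0\end{pmatrix},\qquad r = \frac{a(1-e^2)}{1+e\cos\nu},\qquad H = \sqrt{\mu a(1-e^2)}$$
which are then rotated into the geocentric equatorial frame with
$$R = R_z(\Omega)\,R_x(i)\,R_z(\omega)$$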
###Code
##State vector at t=1000 seconds##
H = np.sqrt(mu*a*(1-(e**2)));
r_1000 = a*(1-(e**2))/(1+(e*np.cos(nu)));
s_p_1000 = np.zeros((3,1));
s_p_1000[0,0] = r_1000*np.cos(nu);
s_p_1000[1,0] = r_1000*np.sin(nu);
R_omega = np.array([[np.cos(omega),-np.sin(omega),0],
[np.sin(omega),np.cos(omega),0],[0,0,1]]);
R_Omega = np.array([[np.cos(Omega),
-np.sin(Omega),0],
[np.sin(Omega),
np.cos(Omega),0],[0,0,1]]);
R_i = np.array([[1,0,0],[0,np.cos(i),-np.sin(i)],
[0,np.sin(i),np.cos(i)]]);
R = np.dot(R_Omega,np.dot(R_i,R_omega));
s_1000 = np.dot(R,s_p_1000);
v_p_1000 = np.zeros((3,1));
v_p_1000[0,0] = -mu*np.sin(nu)/H;
v_p_1000[1,0] = mu*(e+np.cos(nu))/H;
v_1000 = np.dot(R,v_p_1000);
print('X_1000 =',s_1000[0,0],'km');
print('Y_1000 =',s_1000[1,0],'km');
print('Z_1000 =',s_1000[2,0],'km');
print('Xdot_1000 =',v_1000[0,0],'km/s');
print('Ydot_1000 =',v_1000[1,0],'km/s');
print('Zdot_1000 =',v_1000[2,0],'km/s');
###Output
X_1000 = -12552.04150633802 km
Y_1000 = -11118.283622706676 km
Z_1000 = 14000.844806355082 km
Xdot_1000 = 1.3812752457561022 km/s
Ydot_1000 = 2.4525144474178697 km/s
Zdot_1000 = 2.939582308195734 km/s
|
notebooks/query_context/qc3_ctx_household.ipynb | ###Markdown
Query by Household

Overview

Explore the FEC data by specifying SQL predicates identifying "households" (defined based on `indiv` records conjectured to represent real-world people residing at the same physical address). This approach will create the following query contexts:
* `ctx_household`
* `ctx_indiv`
* `ctx_contrib`

Notebook Setup

Configure database connect info/options

Note: the database connect string can be specified on the initial `%sql` command:
```python
database_url = "postgresql+psycopg2://user@localhost/fecdb"
%sql $database_url
```
Or, the connect string is taken from the DATABASE_URL environment variable (if not specified for `%sql`):
```python
%sql
```
###Code
%load_ext sql
%config SqlMagic.autopandas=True
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# connect string taken from DATABASE_URL environment variable
%sql
###Output
_____no_output_____
###Markdown
Set styling
###Code
%%html
<style>
tr, th, td {
text-align: left !important;
}
</style>
###Output
_____no_output_____
###Markdown
Validate Context
###Code
%%sql
select count(*)
from ctx_household
###Output
* postgresql+psycopg2://crash@localhost/fecdb
1 rows affected.
###Markdown
Queries / Use Cases Demographic Summary by State
###Code
%%sql
select hx.state,
count(*)
from ctx_household hx
group by 1
order by 2 desc
###Output
* postgresql+psycopg2://crash@localhost/fecdb
1 rows affected.
###Markdown
Top Contributing Households across Election Cycles Note that we could simplify this query by introducing a `ctx_household_contrib` context view (analogous to `ctx_donor_contrib`). *\[It is actually a somewhat-deliberate design choice not to extend all of the Donor constructs over to Household, even though the two entities have identical underlying structures—we may complete and maintain the analogy later, if analysis and reporting by Household becomes more important and/or interesting\]*
###Code
%%sql
select hx.id as hh_id,
hx.name as hh_name,
count(*) contribs,
sum(ic.transaction_amt) total_amount,
round(avg(ic.transaction_amt), 2) avg_amount,
max(ic.transaction_amt) max_amount,
array_agg(distinct ic.elect_cycle) as elect_cycles,
round(sum(ic.transaction_amt) / count(distinct ic.elect_cycle), 2) avg_cycle_amount
from ctx_household hx
join indiv i on i.hh_indiv_id = hx.id
join indiv_contrib ic on ic.indiv_id = i.id
group by 1, 2
order by 4 desc
limit 50
###Output
* postgresql+psycopg2://crash@localhost/fecdb
1 rows affected.
|
DecisionTree_RF/RegressionTree.ipynb | ###Markdown
Building a regression tree

0. Import libraries
###Code
import pandas as pd
from sklearn import preprocessing
from sklearn import tree
from sklearn.datasets import load_boston
###Output
_____no_output_____
###Markdown
1. Load the data
###Code
boston_house = load_boston()
boston_feature_name = boston_house.feature_names
boston_features = boston_house.data
boston_target = boston_house.target
boston_feature_name
print(boston_house.DESCR)
boston_features[:5,:]
boston_target
###Output
_____no_output_____
###Markdown
Build the model
###Code
rgs = tree.DecisionTreeRegressor(max_depth=4)
rgs = rgs.fit(boston_features, boston_target)
rgs
import pydotplus
from IPython.display import Image, display
dot_data = tree.export_graphviz(rgs,
out_file = None,
feature_names = boston_feature_name,
class_names = boston_target,
filled = True,
rounded = True
)
graph = pydotplus.graph_from_dot_data(dot_data)
display(Image(graph.create_png()))
###Output
_____no_output_____ |
8.Factor in R.ipynb | ###Markdown
What is Factor in R?
Factors are variables in R which take on a limited number of different values; such variables are often referred to as categorical variables. In a dataset, we can distinguish two types of variables: categorical and continuous.
1. In a categorical variable, the value is limited and usually based on a particular finite group. For example, a categorical variable can be countries, year, gender, occupation.
2. A continuous variable, however, can take any value, from integer to decimal. For example, we can have the revenue, the price of a share, etc.
**Categorical Variables**
R stores categorical variables as factors. Let's check the code below to convert a character variable into a factor variable. Characters are not supported by machine learning algorithms, and the only way around this is to convert the strings to integers.
###Code
factor(x = character(), levels, labels = levels, ordered = is.ordered(x))
###Output
_____no_output_____
###Markdown
Arguments:
1. x: A vector of data. Must be a string or integer, not decimal.
2. levels: A vector of possible values taken by x. This argument is optional. The default value is the unique list of items of the vector x.
3. labels: Add a label to the x data. For example, 1 can take the label `male` while 0 takes the label `female`.
4. ordered: Determines whether the levels should be ordered.

**Example:**
Let's create a factor data frame.
###Code
# Create gender vector
gender_vector <- c("Male", "Female", "Female", "Male", "Male")
class(gender_vector)
# Convert gender_vector to a factor
factor_gender_vector <-factor(gender_vector)
class(factor_gender_vector)
###Output
_____no_output_____
###Markdown
It is important to transform strings into factors when we perform machine learning tasks. A categorical variable can be divided into nominal categorical variables and ordinal categorical variables.
**Nominal Categorical Variable**
A nominal categorical variable has several values but the order does not matter. For instance, the male or female categorical variable does not have an ordering.
###Code
# Create a color vector
color_vector <- c('blue', 'red', 'green', 'white', 'black', 'yellow')
# Convert the vector to factor
factor_color <- factor(color_vector)
factor_color
###Output
_____no_output_____
###Markdown
From factor_color, we can't tell any order.

Ordinal Categorical Variable
Ordinal categorical variables do have a natural ordering. We can specify the order, from lowest to highest, by listing the levels in that order and setting order = TRUE; with order = FALSE the levels are left unordered.
**Example:**
We can use summary to count the values for each level.
###Code
# Create Ordinal categorical vector
day_vector <- c('evening', 'morning', 'afternoon', 'midday', 'midnight', 'evening')
# Convert `day_vector` to a factor with ordered level
factor_day <- factor(day_vector, order = TRUE, levels =c('morning', 'midday', 'afternoon', 'evening', 'midnight'))
# Print the new variable
factor_day
## Levels: morning < midday < afternoon < evening < midnight
# Append the line to above code
# Count the number of occurence of each level
summary(factor_day)
###Output
_____no_output_____
###Markdown
Continuous Variables
Continuous variables are the default variable type in R. They are stored as numeric or integer. We can see this from the dataset below. mtcars is a built-in dataset. It gathers information on different types of cars. We can import it by using mtcars and check the class of the variable mpg, miles per gallon. It returns a numeric value, indicating a continuous variable.
###Code
dataset <- mtcars
head(dataset)
class(dataset$mpg)
###Output
_____no_output_____ |
ch05/01_KPE.ipynb | ###Markdown
Keyphrase extraction with textacy
In this notebook we use textacy, a library that can perform a wide variety of natural language processing tasks, to extract keyphrases.

Setup

Installing packages
###Code
!pip install textacy==0.11.0 spacy==3.1.2
###Output
Collecting textacy==0.11.0
Downloading textacy-0.11.0-py3-none-any.whl (200 kB)
[K |████████████████████████████████| 200 kB 5.1 MB/s
[?25hCollecting spacy==3.1.2
Downloading spacy-3.1.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.8 MB)
[K |████████████████████████████████| 5.8 MB 54.2 MB/s
[?25hRequirement already satisfied: requests>=2.10.0 in /usr/local/lib/python3.7/dist-packages (from textacy==0.11.0) (2.23.0)
Collecting jellyfish>=0.8.0
Downloading jellyfish-0.8.8.tar.gz (134 kB)
[K |████████████████████████████████| 134 kB 48.6 MB/s
[?25hRequirement already satisfied: tqdm>=4.19.6 in /usr/local/lib/python3.7/dist-packages (from textacy==0.11.0) (4.62.0)
Requirement already satisfied: scikit-learn>=0.19.0 in /usr/local/lib/python3.7/dist-packages (from textacy==0.11.0) (0.22.2.post1)
Requirement already satisfied: cachetools>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from textacy==0.11.0) (4.2.2)
Requirement already satisfied: networkx>=2.0 in /usr/local/lib/python3.7/dist-packages (from textacy==0.11.0) (2.6.2)
Collecting cytoolz>=0.10.1
Downloading cytoolz-0.11.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.6 MB)
[K |████████████████████████████████| 1.6 MB 65.1 MB/s
[?25hRequirement already satisfied: numpy>=1.17.0 in /usr/local/lib/python3.7/dist-packages (from textacy==0.11.0) (1.19.5)
Requirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.7/dist-packages (from textacy==0.11.0) (1.4.1)
Collecting pyphen>=0.10.0
Downloading pyphen-0.11.0-py3-none-any.whl (2.0 MB)
[K |████████████████████████████████| 2.0 MB 53.0 MB/s
[?25hRequirement already satisfied: joblib>=0.13.0 in /usr/local/lib/python3.7/dist-packages (from textacy==0.11.0) (1.0.1)
Requirement already satisfied: wasabi<1.1.0,>=0.8.1 in /usr/local/lib/python3.7/dist-packages (from spacy==3.1.2) (0.8.2)
Collecting catalogue<2.1.0,>=2.0.4
Downloading catalogue-2.0.6-py3-none-any.whl (17 kB)
Collecting thinc<8.1.0,>=8.0.8
Downloading thinc-8.0.10-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (623 kB)
[K |████████████████████████████████| 623 kB 56.1 MB/s
[?25hRequirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from spacy==3.1.2) (21.0)
Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.7/dist-packages (from spacy==3.1.2) (1.0.5)
Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from spacy==3.1.2) (57.4.0)
Collecting spacy-legacy<3.1.0,>=3.0.7
Downloading spacy_legacy-3.0.8-py2.py3-none-any.whl (14 kB)
Collecting typer<0.4.0,>=0.3.0
Downloading typer-0.3.2-py3-none-any.whl (21 kB)
Requirement already satisfied: blis<0.8.0,>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from spacy==3.1.2) (0.4.1)
Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from spacy==3.1.2) (2.0.5)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.7/dist-packages (from spacy==3.1.2) (2.11.3)
Requirement already satisfied: typing-extensions<4.0.0.0,>=3.7.4 in /usr/local/lib/python3.7/dist-packages (from spacy==3.1.2) (3.7.4.3)
Collecting pydantic!=1.8,!=1.8.1,<1.9.0,>=1.7.4
Downloading pydantic-1.8.2-cp37-cp37m-manylinux2014_x86_64.whl (10.1 MB)
[K |████████████████████████████████| 10.1 MB 35.2 MB/s
[?25hRequirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from spacy==3.1.2) (3.0.5)
Collecting pathy>=0.3.5
Downloading pathy-0.6.0-py3-none-any.whl (42 kB)
[K |████████████████████████████████| 42 kB 1.4 MB/s
[?25hCollecting srsly<3.0.0,>=2.4.1
Downloading srsly-2.4.1-cp37-cp37m-manylinux2014_x86_64.whl (456 kB)
[K |████████████████████████████████| 456 kB 59.5 MB/s
[?25hRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from catalogue<2.1.0,>=2.0.4->spacy==3.1.2) (3.5.0)
Requirement already satisfied: toolz>=0.8.0 in /usr/local/lib/python3.7/dist-packages (from cytoolz>=0.10.1->textacy==0.11.0) (0.11.1)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->spacy==3.1.2) (2.4.7)
Requirement already satisfied: smart-open<6.0.0,>=5.0.0 in /usr/local/lib/python3.7/dist-packages (from pathy>=0.3.5->spacy==3.1.2) (5.1.0)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.10.0->textacy==0.11.0) (2021.5.30)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.10.0->textacy==0.11.0) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests>=2.10.0->textacy==0.11.0) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.10.0->textacy==0.11.0) (3.0.4)
Requirement already satisfied: click<7.2.0,>=7.1.1 in /usr/local/lib/python3.7/dist-packages (from typer<0.4.0,>=0.3.0->spacy==3.1.2) (7.1.2)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from jinja2->spacy==3.1.2) (2.0.1)
Building wheels for collected packages: jellyfish
Building wheel for jellyfish (setup.py) ... [?25l[?25hdone
Created wheel for jellyfish: filename=jellyfish-0.8.8-cp37-cp37m-linux_x86_64.whl size=73228 sha256=5e03a9623b30484a80e1dac3b881d19655e46b198cbb2560075cb3ff7977f819
Stored in directory: /root/.cache/pip/wheels/82/aa/f4/716387e1f167cbbf911488aa056138152f4d8699c9c9b43ea8
Successfully built jellyfish
Installing collected packages: catalogue, typer, srsly, pydantic, thinc, spacy-legacy, pathy, spacy, pyphen, jellyfish, cytoolz, textacy
Attempting uninstall: catalogue
Found existing installation: catalogue 1.0.0
Uninstalling catalogue-1.0.0:
Successfully uninstalled catalogue-1.0.0
Attempting uninstall: srsly
Found existing installation: srsly 1.0.5
Uninstalling srsly-1.0.5:
Successfully uninstalled srsly-1.0.5
Attempting uninstall: thinc
Found existing installation: thinc 7.4.0
Uninstalling thinc-7.4.0:
Successfully uninstalled thinc-7.4.0
Attempting uninstall: spacy
Found existing installation: spacy 2.2.4
Uninstalling spacy-2.2.4:
Successfully uninstalled spacy-2.2.4
Successfully installed catalogue-2.0.6 cytoolz-0.11.0 jellyfish-0.8.8 pathy-0.6.0 pydantic-1.8.2 pyphen-0.11.0 spacy-3.1.2 spacy-legacy-3.0.8 srsly-2.4.1 textacy-0.11.0 thinc-8.0.10 typer-0.3.2
###Markdown
Downloading the model
###Code
!python -m spacy download en_core_web_sm
###Output
Collecting en-core-web-sm==3.1.0
Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.1.0/en_core_web_sm-3.1.0-py3-none-any.whl (13.6 MB)
[K |████████████████████████████████| 13.6 MB 73 kB/s
[?25hRequirement already satisfied: spacy<3.2.0,>=3.1.0 in /usr/local/lib/python3.7/dist-packages (from en-core-web-sm==3.1.0) (3.1.2)
Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (2.0.5)
Requirement already satisfied: typer<0.4.0,>=0.3.0 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (0.3.2)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (21.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (57.4.0)
Requirement already satisfied: tqdm<5.0.0,>=4.38.0 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (4.62.0)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (2.11.3)
Requirement already satisfied: requests<3.0.0,>=2.13.0 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (2.23.0)
Requirement already satisfied: numpy>=1.15.0 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (1.19.5)
Requirement already satisfied: wasabi<1.1.0,>=0.8.1 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (0.8.2)
Requirement already satisfied: pydantic!=1.8,!=1.8.1,<1.9.0,>=1.7.4 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (1.8.2)
Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (1.0.5)
Requirement already satisfied: catalogue<2.1.0,>=2.0.4 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (2.0.6)
Requirement already satisfied: thinc<8.1.0,>=8.0.8 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (8.0.10)
Requirement already satisfied: pathy>=0.3.5 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (0.6.0)
Requirement already satisfied: spacy-legacy<3.1.0,>=3.0.7 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (3.0.8)
Requirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (3.0.5)
Requirement already satisfied: blis<0.8.0,>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (0.4.1)
Requirement already satisfied: typing-extensions<4.0.0.0,>=3.7.4 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (3.7.4.3)
Requirement already satisfied: srsly<3.0.0,>=2.4.1 in /usr/local/lib/python3.7/dist-packages (from spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (2.4.1)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from catalogue<2.1.0,>=2.0.4->spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (3.5.0)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (2.4.7)
Requirement already satisfied: smart-open<6.0.0,>=5.0.0 in /usr/local/lib/python3.7/dist-packages (from pathy>=0.3.5->spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (5.1.0)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.13.0->spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (2021.5.30)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.13.0->spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.13.0->spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.13.0->spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (3.0.4)
Requirement already satisfied: click<7.2.0,>=7.1.1 in /usr/local/lib/python3.7/dist-packages (from typer<0.4.0,>=0.3.0->spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (7.1.2)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from jinja2->spacy<3.2.0,>=3.1.0->en-core-web-sm==3.1.0) (2.0.1)
Installing collected packages: en-core-web-sm
Attempting uninstall: en-core-web-sm
Found existing installation: en-core-web-sm 2.2.5
Uninstalling en-core-web-sm-2.2.5:
Successfully uninstalled en-core-web-sm-2.2.5
Successfully installed en-core-web-sm-3.1.0
[38;5;2m✔ Download and installation successful[0m
You can now load the package via spacy.load('en_core_web_sm')
###Markdown
Imports
###Code
import spacy
import textacy
from textacy import extract
###Output
_____no_output_____
###Markdown
Uploading the data
First, upload the data file. There is a Data folder at the same level as this notebook containing `nlphistory.txt`; upload that file.
###Code
from google.colab import files
uploaded = files.upload()
###Output
_____no_output_____
###Markdown
Loading the data
Load the uploaded data. If you are not using Colab, read the file by specifying `Data/nlphistory.txt` instead.
###Code
mytext = open("nlphistory.txt").read()
mytext
###Output
_____no_output_____
###Markdown
Obtaining a spaCy document
###Code
# Load the spaCy model
en = textacy.load_spacy_lang("en_core_web_sm")
# Convert the text into a spaCy document
doc = textacy.make_spacy_doc(mytext, lang=en)
###Output
_____no_output_____
###Markdown
Keyphrase extraction with TextRank
We use `extract.keyterms.textrank` to extract keyphrases.
###Code
extract.keyterms.textrank(doc, topn=5)
###Output
_____no_output_____
###Markdown
Let's compare the results of TextRank and SGRank.
###Code
kps_textrank = [kps for kps, _ in extract.keyterms.textrank(doc, normalize="lemma", topn=5)]
kps_sgrank = [kps for kps, _ in extract.keyterms.sgrank(doc, topn=5)]
print(f"Textrank output\t: {kps_textrank}")
print(f"SGRank output\t: {kps_sgrank}")
###Output
Textrank output : ['successful natural language processing system', 'statistical machine translation system', 'natural language system', 'statistical natural language processing', 'natural language task']
SGRank output : ['natural language processing system', 'statistical machine translation', 'early', 'research', 'late 1980']
###Markdown
To deal with overlapping keyphrases, textacy provides the `aggregate_term_variants` function. Using it, we can obtain a set of keyphrases with the duplicates merged.
###Code
terms = set([term for term, _ in extract.keyterms.sgrank(doc)])
extract.utils.aggregate_term_variants(terms)
###Output
_____no_output_____
###Markdown
Noun chunks can be considered as keyphrase candidates. The drawbacks of this approach are that it produces a large number of phrases and that there is no way to rank them.
###Code
[chunk for chunk in extract.noun_chunks(doc)]
###Output
_____no_output_____
###Markdown
textacy also provides various other information extraction features, many of which extract expressions such as acronyms and quotations based on regular-expression patterns and heuristics. Beyond these, it can extract matches for regular expressions over part-of-speech tag patterns, sentences containing named entities, subject-verb-object tuples, and more. For details, see the documentation below.
- [textacy: NLP, before and after spaCy](https://textacy.readthedocs.io/en/latest/)
###Code
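# A small sketch of two of the other extraction helpers mentioned above.
# Note: these function names are assumed to be exposed at the top level of
# textacy.extract in textacy 0.11; adjust if your version differs.
print(list(extract.entities(doc))[:5])                     # named entities
print(list(extract.subject_verb_object_triples(doc))[:3])  # (subject, verb, object) triples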
###Output
_____no_output_____ |
train_notebooks/tweet-extraction_train.ipynb | ###Markdown
Data Pre-processing and Transformation
###Code
train = pd.read_csv("/kaggle/input/tweet-sentiment-extraction/train.csv")
test = pd.read_csv("/kaggle/input/tweet-sentiment-extraction/test.csv")
print("Training set has {} data points".format(len(train)))
print("Testing set has {} data points".format(len(test)))
train.head()
test.head()
###Output
_____no_output_____
###Markdown
Check for NaN values
###Code
train.isna().sum()
test.isna().sum()
# Since there is only one NaN value, let's drop it
# Dropping it in the TweetDataset class below
# train = train.dropna(axis=0).reset_index(drop=True)
print("Training set has {} data points".format(len(train)))
print("Testing set has {} data points".format(len(test)))
train.isna().sum()
###Output
Training set has 27481 data points
Testing set has 3534 data points
###Markdown
Removing punctuations & stopwords, or not?
###Code
# Checking if punctuation appears in selected_text
selected_text_has_punctuation = train.selected_text.str.extract(
r'([{}]+)'.format(
re.escape(
string.punctuation)))
# number of selected_text with punctuations
selected_text_has_punctuation.isna().sum()
# observing some tweets whose selected_text contain punctuations
train.loc[selected_text_has_punctuation.dropna().index].head()
###Output
_____no_output_____
###Markdown
- Punctuation appears in quite a lot of our extracted examples, so I'll not remove punctuation for this dataset.
- I also need to preserve stopwords: as can be seen in the `neutral` example above, the tweet *text* has been extracted as-is into the *selected_text*.

Deciding max *text* length
###Code
train.text.str.len().max()
MAX_LEN = 148
###Output
_____no_output_____
###Markdown
Tokenizer The pretrained RoBERTa model and tokenizer are from the huggingface [transformers](https://huggingface.co/transformers/main_classes/model.html?highlight=save_pretrained) library. They can be downloaded by using the `from_pretrained()` method or attached to a Kaggle kernel from [here](https://www.kaggle.com/cdeotte/tf-roberta).
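A minimal sketch of the download route (the `roberta-base` checkpoint name and variable names are illustrative; this notebook itself uses the attached Kaggle files instead):
```python
from transformers import RobertaTokenizerFast, TFRobertaModel

hf_tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
hf_roberta = TFRobertaModel.from_pretrained("roberta-base")
```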
###Code
class TweetDataset:
def __init__(self, data_df, tokenizer, train=True, max_len=96):
self.data = data_df.dropna(axis=0).reset_index(drop=True)
self.is_train = True if train else False
self.sentiment_tokens = {
'positive': tokenizer.encode('positive').ids[0],
'negative': tokenizer.encode('negative').ids[0],
'neutral': tokenizer.encode('neutral').ids[0]
}
self.tokenizer = tokenizer
self.max_len = max_len
def ByteLevelBPEPreprocessor(self, text, selected_text, sentiment):
"""Return Input IDs, Attention Mask, Start/End tokens
This function returns Input IDs and Attention Mask. If this is
training dataset it also return start and end tokens.
"""
text = " " + " ".join(text.split())
enc = self.tokenizer.encode(text)
s_tok = self.sentiment_tokens[sentiment]
# Get InputIDs
input_ids = np.ones((self.max_len),
dtype = 'int32')
input_ids[:len(enc.ids)+5] = [0] + enc.ids + [2,2] + [s_tok] + [2]
# Get Attention mask
attention_mask = np.zeros((self.max_len),
dtype='int32')
attention_mask[:len(enc.ids)+5] = 1
if self.is_train:
selected_text = " ".join(selected_text.split())
idx = text.find(selected_text)
char_tokens = np.zeros((len(text)))
char_tokens[idx:idx+len(selected_text)] = 1
# if text has ' ' prefix
if text[idx-1] == ' ':
char_tokens[idx-1] = 1
# Get start and end token for selected_text in input IDs
start_tokens = np.zeros((self.max_len),
dtype='int32')
end_tokens = np.zeros((self.max_len),
dtype='int32')
ptr_idx = 0
label_idx = list()
for i, enc_id in enumerate(enc.ids):
sub_word = self.tokenizer.decode([enc_id])
if sum(char_tokens[ptr_idx:ptr_idx+len(sub_word)]) > 0:
label_idx.append(i)
ptr_idx += len(sub_word)
if label_idx:
# + 1 as we added prefix before
start_tokens[label_idx[0] + 1] = 1
end_tokens[label_idx[-1] + 1] = 1
return input_ids, attention_mask, start_tokens, end_tokens
return input_ids, attention_mask
def __call__(self):
data_len = len(self.data)
input_ids = np.ones((data_len, self.max_len),
dtype='int32')
attention_mask = np.zeros((data_len, self.max_len),
dtype='int32')
token_type_ids = np.zeros((data_len, self.max_len),
dtype='int32')
if self.is_train:
start_tokens = np.zeros((data_len, self.max_len),
dtype='int32')
end_tokens = np.zeros((data_len, self.max_len),
dtype='int32')
for i, row in tqdm(self.data.iterrows(), total=len(self.data)):
out = self.ByteLevelBPEPreprocessor(
row['text'],
row['selected_text'] if self.is_train else None,
row['sentiment']
)
if self.is_train:
input_ids[i], attention_mask[i], start_tokens[i], end_tokens[i] = out
else:
input_ids[i], attention_mask[i] = out
if self.is_train:
return input_ids, attention_mask, token_type_ids, start_tokens, end_tokens
return input_ids, attention_mask, token_type_ids
class TransformerQA:
def __init__(self, max_len, model_path, tokenizer, fit=True):
self.max_len = max_len
self.model_path = model_path
self.tokenizer = tokenizer
def roberta_model(self):
"""Return RoBERTa base mode with a custom question answering head
"""
input_ids = tf.keras.layers.Input((self.max_len,),
dtype=tf.int32)
attention_mask = tf.keras.layers.Input((self.max_len,),
dtype=tf.int32)
token_type_ids = tf.keras.layers.Input((self.max_len,),
dtype=tf.int32)
config = RobertaConfig.from_pretrained(
os.path.join(self.model_path, 'config-roberta-base.json')
)
roberta_model = TFRobertaModel.from_pretrained(
os.path.join(self.model_path, 'pretrained-roberta-base.h5'),
config=config
)
x = roberta_model(inputs=input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids)
x1 = tf.keras.layers.Dropout(0.1)(x[0])
x1 = tf.keras.layers.Conv1D(1,1)(x1)
x1 = tf.keras.layers.Flatten()(x1)
x1 = tf.keras.layers.Activation('softmax')(x1)
x2 = tf.keras.layers.Dropout(0.1)(x[0])
x2 = tf.keras.layers.Conv1D(1,1)(x2)
x2 = tf.keras.layers.Flatten()(x2)
x2 = tf.keras.layers.Activation('softmax')(x2)
model = tf.keras.models.Model(
inputs=[input_ids, attention_mask, token_type_ids],
outputs=[x1,x2]
)
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
return model
def jaccard(self, str1, str2):
"""Return Jaccard similarity score betweeen two strings
"""
a = set(str1.lower().split())
b = set(str2.lower().split())
if (len(a)==0) & (len(b)==0): return 0.5
c = a.intersection(b)
return float(len(c)) / (len(a) + len(b) - len(c))
def get_model_selected_text(self, data_df, preds_start, preds_end):
"""Return list of 'selected_text using the predicted start/end tokens'
"""
st_list = []
for k in range(len(data_df)):
idx_start = np.argmax(preds_start[k,])
idx_end = np.argmax(preds_end[k,])
if idx_start > idx_end:
st = data_df.loc[k,'text']
# if data_df.loc[k, 'sentiment'] != 'neutral':
# st = st.split()[idx_start]
else:
text = " " + " ".join(data_df.loc[k,'text'].split())
enc = self.tokenizer.encode(text)
st = self.tokenizer.decode(enc.ids[idx_start-1:idx_end])
st_list.append(st)
return st_list
def fit(self, train_df, input_ids, attention_mask,
token_type_ids, start_tokens, end_tokens,
stratify_y, VER='v0', verbose=1):
"""Fit a RoBERTa model with the training dataset
"""
avg_score = []
oof_start = np.zeros((input_ids.shape[0],
self.max_len))
oof_end = np.zeros((input_ids.shape[0],
self.max_len))
skf = StratifiedKFold(n_splits=5,
shuffle=True,
random_state=42)
for fold, (idxT,idxV) in enumerate(skf.split(input_ids,
stratify_y)):
print('Training FOLD {}:'.format(fold+1))
K.clear_session()
model = self.roberta_model()
sv = tf.keras.callbacks.ModelCheckpoint(
'{}-roberta-{}.h5'.format(VER, fold),
monitor='val_loss',
verbose=verbose,
save_best_only=True,
save_weights_only=True,
mode='auto',
save_freq='epoch'
)
model.fit([input_ids[idxT,],
attention_mask[idxT,],
token_type_ids[idxT,]],
[start_tokens[idxT,], end_tokens[idxT,]],
epochs=3,
batch_size=32,
verbose=verbose,
callbacks=[sv],
validation_data=(
[
input_ids[idxV,],
attention_mask[idxV,],
token_type_ids[idxV,]
],
[start_tokens[idxV,], end_tokens[idxV,]]
)
)
# Load best saved model from disk
print('Loading model...')
model.load_weights('{}-roberta-{}.h5'.format(VER, fold))
# Predicting OOF samples
print('Predicting OOF...')
oof_start[idxV,],oof_end[idxV,] = model.predict(
[
input_ids[idxV,],
attention_mask[idxV,],
token_type_ids[idxV,]
],
verbose=verbose
)
pred_df = train_df.loc[idxV].reset_index(drop=True)
pred_df['oof_st'] = self.get_model_selected_text(
data_df=pred_df,
preds_start=oof_start[idxV,],
preds_end=oof_end[idxV,]
)
fold_val_score = pred_df.apply(
lambda x: self.jaccard(x['selected_text'],
x['oof_st']
),
axis=1
).mean()
avg_score.append(fold_val_score)
print('>>>> FOLD {} Jaccard score = {}'.format(fold+1,
fold_val_score))
def predict(self, pred_df, input_ids, attention_mask,
token_type_ids, n_models, VER='v0', verbose=1):
"""Return a list of predicted 'selected_text' by loading saved models
"""
preds_start = np.zeros((input_ids.shape[0],
self.max_len))
preds_end = np.zeros((input_ids.shape[0],
self.max_len))
for i in range(n_models):
K.clear_session()
model = self.roberta_model()
print('Loading model...')
model.load_weights('{}-roberta-{}.h5'.format(VER, i))
preds = model.predict(
[input_ids, attention_mask, token_type_ids],
verbose=verbose
)
preds_start += preds[0]/n_models
preds_end += preds[1]/n_models
test_st = self.get_model_selected_text(
data_df=pred_df,
preds_start=preds_start,
preds_end=preds_end
)
return test_st
PATH = '../input/tf-roberta/'
tokenizer = tokenizers.ByteLevelBPETokenizer(
vocab_file=PATH+'vocab-roberta-base.json',
merges_file=PATH+'merges-roberta-base.txt',
lowercase=True,
add_prefix_space=True
)
# Get pre-processed & transformed training inputs and labels
train_data = TweetDataset(train, tokenizer, train=True, max_len=MAX_LEN)
input_ids, attention_mask, token_type_ids, start_tokens, end_tokens = train_data()
# Get pre-processed & transformed testing inputs
test_data = TweetDataset(test, tokenizer, train=False, max_len=MAX_LEN)
test_input_ids, test_attention_mask, test_token_type_ids = test_data()
QA_model = TransformerQA(max_len=MAX_LEN,
model_path=PATH,
tokenizer=tokenizer)
# Train the RoBERTa model
QA_model.fit(train_df=train_data.data,
input_ids=input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
start_tokens=start_tokens,
end_tokens=end_tokens,
stratify_y=train_data.data.sentiment.values)
# Test the RoBERTa model
test['selected_text'] = QA_model.predict(pred_df=test,
input_ids=test_input_ids,
attention_mask=test_attention_mask,
token_type_ids=test_token_type_ids,
n_models=5)
test[['textID','selected_text']].to_csv('submission.csv',index=False)
# Show 25 random predicted 'selected_text'
test.sample(25)
###Output
_____no_output_____ |
Past/DSS/NLP/3.Scikit_data_creaning.ipynb | ###Markdown
BOW (bag of words)
1. Build a fixed vocabulary covering the whole document collection,
2. compare each individual document against that vocabulary, and
3. record either the counts or the presence/absence of each word.

- `DictVectorizer`: builds a BOW vector from a dictionary of word counts.
- `CountVectorizer`: counts the words in a document collection and builds BOW vectors.
- `TfidfVectorizer`: counts the words in a document collection and builds BOW vectors with the word weights adjusted by TF-IDF.
- `HashingVectorizer`: uses the hashing trick to build BOW vectors quickly.

DictVectorizer
Converts feature information stored in dictionary form into a matrix; the dictionary often holds the usage count of each word in the corpus.
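For example, on a tiny two-document corpus:
```python
from sklearn.feature_extraction.text import CountVectorizer

toy = ["the cat sat", "the cat sat on the mat"]
vec = CountVectorizer().fit(toy)       # step 1: build the fixed vocabulary
print(vec.vocabulary_)                 # mapping from word to column index
print(vec.transform(toy).toarray())    # steps 2-3: per-document word counts
```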
###Code
from sklearn.feature_extraction import DictVectorizer
v = DictVectorizer(sparse=False)
D = [{'A': 1, 'B': 2}, {'B': 3, 'C': 1}]
X = v.fit_transform(D) # fit / transform
X
print(v.feature_names_)
print(v.inverse_transform(X))
print(v.transform({'C': 4, 'D': 3}))
###Output
['A', 'B', 'C']
[{'A': 1.0, 'B': 2.0}, {'B': 3.0, 'C': 1.0}]
[[0. 0. 4.]]
###Markdown
CountVectorizer
`= tokenizing + counting + BOW`
It takes a variety of arguments; the important ones are:
- `stop_words`: string {'english'}, list, or None (default). The stop-word list; 'english' uses the built-in English stop words.
- `analyzer`: string {'word', 'char', 'char_wb'} or callable. Word n-grams, character n-grams, or character n-grams within word boundaries.
- `tokenizer`: callable or None (default). Custom token-generating function.
- `token_pattern`: string. Regular expression defining a token.
- `ngram_range`: (min_n, max_n) tuple. The n-gram range.
- `max_df`: int or float in [0.0, 1.0], default 1. Maximum frequency for a word to be included in the vocabulary.
- `min_df`: int or float in [0.0, 1.0], default 1. Minimum frequency for a word to be included in the vocabulary.
- `vocabulary`: dict or list. The vocabulary.
###Code
from sklearn.feature_extraction.text import CountVectorizer
corpus = [
'This is the first document.',
'This is the second second document.',
'And the third one.',
'Is this the first document?',
'The last document?',
]
vect = CountVectorizer()
vect.fit(corpus)
vect.vocabulary_ # assign an index (number) to each word, building the vocabulary
vect.transform(['This is the second document.']).toarray()
vect.transform(['Something completely new.']).toarray()
vect.transform(corpus).toarray()
###Output
_____no_output_____
###Markdown
Stop Words
Words to ignore when building the vocabulary (essential).
- Too many (overly common) words actually get in the way of discriminating between documents.
###Code
vect = CountVectorizer(stop_words=["and", "is", "the", "this"]).fit(corpus)
vect.vocabulary_
vect = CountVectorizer(stop_words="english").fit(corpus) # built-in English stop word list
vect.vocabulary_
###Output
_____no_output_____
###Markdown
Token
The token generator can be selected with the analyzer, tokenizer, and token_pattern arguments.
- analyzer : string, {'word', 'char', 'char_wb'} or callable
- tokenizer : Override the string tokenization step while preserving the preprocessing and n-grams generation steps.
- token_pattern : customize the regular expression used to define tokens
###Code
vect = CountVectorizer(analyzer="char").fit(corpus)
vect.vocabulary_
vect = CountVectorizer(token_pattern="t\w+").fit(corpus)
vect.vocabulary_
import nltk
vect = CountVectorizer(tokenizer=nltk.word_tokenize).fit(corpus)
vect.vocabulary_
###Output
_____no_output_____
###Markdown
n-gram
Determines the size of the tokens (chunk size) used to build the vocabulary.
###Code
vect = CountVectorizer(ngram_range=(2,2)).fit(corpus)
vect.vocabulary_ # bigrams only
vect = CountVectorizer(ngram_range=(1,2), token_pattern="t\w+").fit(corpus)
vect.vocabulary_ # unigrams + bigrams
###Output
_____no_output_____
###Markdown
Word frequency
Using the `max_df` and `min_df` arguments, the vocabulary can be restricted by how many times a token appears in the documents.
- integer argument value: a count
- float argument value: a proportion

Words that are too rare can cause overfitting, while words that are too common carry little meaning.
###Code
vect = CountVectorizer(max_df=4, min_df=2).fit(corpus)
vect.vocabulary_, vect.stop_words_
vect.transform(corpus).toarray().sum(axis=0)
###Output
_____no_output_____
###Markdown
TF-IDF
Term Frequency - Inverse Document Frequency
- how often a word appears within a document
- how often the word appears across documents

Instead of counting words as-is, words that appear in most documents have their weights reduced (why? because they are judged to have little power to distinguish documents).
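For reference, scikit-learn's `TfidfVectorizer` (with its defaults `smooth_idf=True` and `norm='l2'`) weights a term $t$ in document $d$ as
$$\text{tf-idf}(t,d) = \text{tf}(t,d)\cdot\left(\ln\frac{1+n}{1+\text{df}(t)} + 1\right)$$
where $n$ is the number of documents and $\text{df}(t)$ is the number of documents containing $t$; each document vector is then L2-normalized.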
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
tfidv = TfidfVectorizer().fit(corpus)
tfidv.transform(corpus).toarray()
###Output
_____no_output_____
###Markdown
Hashing Trick
- `CountVectorizer` does all of its work in memory, so as the number of documents grows it becomes slow or even infeasible.
- `HashingVectorizer` uses a hash function to generate the index for each word, which reduces memory usage and execution time.
###Code
from sklearn.datasets import fetch_20newsgroups
twenty = fetch_20newsgroups()
len(twenty.data)
%time CountVectorizer().fit(twenty.data).transform(twenty.data)
from sklearn.feature_extraction.text import HashingVectorizer
hv = HashingVectorizer(n_features=10)
%time hv.transform(twenty.data)
###Output
CPU times: user 4.09 s, sys: 39.7 ms, total: 4.13 s
Wall time: 4.13 s
###Markdown
ex
###Code
from urllib.request import urlopen
import json
import string
from konlpy.utils import pprint
from konlpy.tag import Hannanum
hannanum = Hannanum()
f = urlopen("https://www.datascienceschool.net/download-notebook/708e711429a646818b9dcbb581e0c10a/")
json = json.loads(f.read())
cell = ["\n".join(c["source"]) for c in json["cells"] if c["cell_type"] == "markdown"]
docs = [w for w in hannanum.nouns(" ".join(cell)) if ((not w[0].isnumeric()) and (w[0] not in string.punctuation))]
%matplotlib inline
vect = CountVectorizer().fit(docs)
count = vect.transform(docs).toarray().sum(axis=0)
idx = np.argsort(-count)
count = count[idx]
feature_name = np.array(vect.get_feature_names())[idx]
plt.bar(range(len(count)), count)
plt.show()
pprint(list(zip(feature_name, count)))
###Output
[('컨테이너', 81),
('도커', 41),
('명령', 34),
('이미지', 33),
('사용', 26),
('가동', 14),
('중지', 13),
('mingw64', 13),
('삭제', 12),
('이름', 11),
('아이디', 11),
('다음', 10),
('시작', 9),
('목록', 8),
('옵션', 6),
('a181562ac4d8', 6),
('입력', 6),
('외부', 5),
('출력', 5),
('해당', 5),
('호스트', 5),
('명령어', 5),
('확인', 5),
('경우', 5),
('재시작', 4),
('존재', 4),
('컴퓨터', 4),
('터미널', 4),
('프롬프트', 4),
('포트', 4),
('377ad03459bf', 3),
('가상', 3),
('수행', 3),
('문자열', 3),
('dockeruser', 3),
('항목', 3),
('마찬가지', 3),
('대화형', 3),
('종료', 2),
('상태', 2),
('저장', 2),
('호스트간', 2),
('작업', 2),
('지정', 2),
('생각', 2),
('문헌', 2),
('동작', 2),
('시스템', 2),
('명시해', 2),
('특정', 2),
('관련하', 2),
('이때', 2),
('의미', 2),
('추가', 2),
('조합', 1),
('container', 1),
('폴더', 1),
('a1e4ed2ac65b', 1),
('작동', 1),
('자체', 1),
('자동', 1),
('image', 1),
('정지', 1),
('핵심', 1),
('초간단', 1),
('중복', 1),
('id', 1),
('최소한', 1),
('일부분', 1),
('컨테이', 1),
('daemon', 1),
('컨테이너상', 1),
('한다', 1),
('콜론', 1),
('태그', 1),
('하나', 1),
('툴박스', 1),
('파일', 1),
('포워딩', 1),
('주의해', 1),
('이해', 1),
('누른다', 1),
('이미지는', 1),
('공유', 1),
('브라우저', 1),
('복사', 1),
('문제', 1),
('문자', 1),
('관련', 1),
('명시', 1),
('길벗', 1),
('사용법', 1),
('메시지', 1),
('마지막', 1),
('리눅스', 1),
('나오기', 1),
('도서출판', 1),
('데몬', 1),
('대화적', 1),
('대표적', 1),
('내부', 1),
('머신', 1),
('이재홍', 1),
('사용자', 1),
('생략', 1),
('tag', 1),
('가능', 1),
('의존', 1),
('으로', 1),
('내용', 1),
('원본', 1),
('요약', 1),
('가지', 1),
('사용해', 1),
('오류', 1),
('연결', 1),
('여기', 1),
('개념', 1),
('실행', 1),
('시스템상', 1),
('소개', 1),
('설명', 1),
('생성', 1),
('연습', 1),
('윈도우즈', 1)]
|
.ipynb_checkpoints/SupportVectorMachines6-checkpoint.ipynb | ###Markdown
Support Vector Machine Models
**Support vector machines (SVMs)** are a widely used and powerful category of machine learning algorithms. There are many variations on the basic idea of an SVM. An SVM attempts to **maximally separate** classes by finding the decision boundary, defined by its **support vectors**, with the lowest error rate or maximum separation. SVMs can use many types of **kernel functions**. The most common kernel functions are **linear** and the **radial basis function** or **RBF**. The linear kernel attempts to separate classes by finding hyperplanes in the feature space that maximally separate the classes. The RBF uses a set of local Gaussian-shaped basis kernels to find a nonlinear separation of the classes.

Example: Iris dataset
As a first example you will use SVMs to classify the species of iris flowers. As a first step, execute the code in the cell below to load the required packages to run the rest of this notebook.
###Code
from sklearn import svm, preprocessing
#from statsmodels.api import datasets
from sklearn import datasets ## Get dataset from sklearn
import sklearn.model_selection as ms
import sklearn.metrics as sklm
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import numpy.random as nr
%matplotlib inline
###Output
_____no_output_____
###Markdown
To get a feel for these data, you will now load and plot them. The code in the cell below does the following:
1. Loads the iris data as a Pandas data frame.
2. Adds column names to the data frame.
3. Displays all 4 possible scatter plot views of the data.
Execute this code and examine the results.
###Code
def plot_iris(iris):
'''Function to plot iris data by type'''
setosa = iris[iris['Species'] == 'setosa']
versicolor = iris[iris['Species'] == 'versicolor']
virginica = iris[iris['Species'] == 'virginica']
fig, ax = plt.subplots(2, 2, figsize=(12,12))
x_ax = ['Sepal_Length', 'Sepal_Width']
y_ax = ['Petal_Length', 'Petal_Width']
for i in range(2):
for j in range(2):
ax[i,j].scatter(setosa[x_ax[i]], setosa[y_ax[j]], marker = 'x')
ax[i,j].scatter(versicolor[x_ax[i]], versicolor[y_ax[j]], marker = 'o')
ax[i,j].scatter(virginica[x_ax[i]], virginica[y_ax[j]], marker = '+')
ax[i,j].set_xlabel(x_ax[i])
ax[i,j].set_ylabel(y_ax[j])
## Import the dataset from sklearn.datasets
iris = datasets.load_iris()
## Create a data frame from the dictionary
species = [iris.target_names[x] for x in iris.target]
iris = pd.DataFrame(iris['data'], columns = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width'])
iris['Species'] = species
#print(species)
## Plot views of the iris data
plot_iris(iris)
###Output
_____no_output_____
###Markdown
You can see that Setosa (in blue) is well separated from the other two categories. The Versicolor (in orange) and the Virginica (in green) show considerable overlap. The question is how well our classifier will separate these categories. Scikit Learn classifiers require numerically coded numpy arrays for the features and as a label. The code in the cell below does the following processing:
1. Creates a numpy array of the features.
2. Numerically codes the label using a dictionary lookup, and converts it to a numpy array.
Execute this code.
###Code
Features = np.array(iris[['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width']])
levels = {'setosa':0, 'versicolor':1, 'virginica':2}
Labels = np.array([levels[x] for x in iris['Species']])
###Output
_____no_output_____
###Markdown
Next, execute the code in the cell below to split the dataset into test and training set. Notice that unusually, 100 of the 150 cases are being used as the test dataset.
###Code
## Randomly sample cases to create independent training and test data
nr.seed(1115)
indx = range(Features.shape[0])
indx = ms.train_test_split(indx, test_size = 100)
X_train = Features[indx[0],:]
y_train = np.ravel(Labels[indx[0]])
X_test = Features[indx[1],:]
y_test = np.ravel(Labels[indx[1]])
###Output
_____no_output_____
###Markdown
As is always the case with machine learning, numeric features must be scaled. The code in the cell below performs the following processing:
1. A Z-score scale object is defined using the `StandardScaler` function from the Scikit Learn preprocessing package.
2. The scaler is fit to the training features. Subsequently, this scaler is used to apply the same scaling to the test data and in production.
3. The training features are scaled using the `transform` method.
Execute this code.
###Code
scale = preprocessing.StandardScaler()
scale.fit(X_train)
X_train = scale.transform(X_train)
###Output
_____no_output_____
###Markdown
Now you will define and fit a linear SVM model. The code in the cell below defines a linear SVM object using the `LinearSVC` function from the Scikit Learn SVM package, and then fits the model. Execute this code.
###Code
nr.seed(1115)
svm_mod = svm.LinearSVC()
svm_mod.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Notice that the SVM model object hyperparameters are displayed. Next, the code in the cell below performs the following processing to score the test data subset:
1. The test features are scaled using the scaler computed for the training features.
2. The `predict` method is used to compute the scores from the scaled features.
Execute this code.
###Code
X_test = scale.transform(X_test)
scores = svm_mod.predict(X_test)
###Output
_____no_output_____
###Markdown
It is time to evaluate the model results. Keep in mind that the problem has been made difficult deliberately, by having more test cases than training cases. The iris data has three species categories. Therefore it is necessary to use evaluation code for a three category problem. The function in the cell below extends code from previous labs to deal with a three category problem. Execute this code and examine the results.
###Code
def print_metrics_3(labels, scores):
conf = sklm.confusion_matrix(labels, scores)
print(' Confusion matrix')
print(' Score Setosa Score Versicolor Score Virginica')
print('Actual Setosa %6d' % conf[0,0] + ' %5d' % conf[0,1] + ' %5d' % conf[0,2])
print('Actual Versicolor %6d' % conf[1,0] + ' %5d' % conf[1,1] + ' %5d' % conf[1,2])
print('Actual Vriginica %6d' % conf[2,0] + ' %5d' % conf[2,1] + ' %5d' % conf[2,2])
## Now compute and display the accuracy and metrics
print('')
print('Accuracy %0.2f' % sklm.accuracy_score(labels, scores))
metrics = sklm.precision_recall_fscore_support(labels, scores)
print(' ')
print(' Setosa Versicolor Virginica')
print('Num case %0.2f' % metrics[3][0] + ' %0.2f' % metrics[3][1] + ' %0.2f' % metrics[3][2])
print('Precision %0.2f' % metrics[0][0] + ' %0.2f' % metrics[0][1] + ' %0.2f' % metrics[0][2])
print('Recall %0.2f' % metrics[1][0] + ' %0.2f' % metrics[1][1] + ' %0.2f' % metrics[1][2])
print('F1 %0.2f' % metrics[2][0] + ' %0.2f' % metrics[2][1] + ' %0.2f' % metrics[2][2])
print_metrics_3(y_test, scores)
###Output
Confusion matrix
Score Setosa Score Versicolor Score Virginica
Actual Setosa 34 1 0
Actual Versicolor 0 24 10
Actual Vriginica 0 3 28
Accuracy 0.86
Setosa Versicolor Virginica
Num case 35.00 34.00 31.00
Precision 1.00 0.86 0.74
Recall 0.97 0.71 0.90
F1 0.99 0.77 0.81
###Markdown
Examine these results. Notice the following:1. The confusion matrix has dimension 3X3. You can see that most cases are correctly classified. 2. The overall accuracy is 0.86. Since the classes are roughly balanced, this metric indicates relatively good performance of the classifier, particularly since it was only trained on 50 cases. 3. The precision, recall and F1 for each of the classes are relatively good. Versicolor has the worst metrics since it has the largest number of misclassified cases. To get a better feel for what the classifier is doing, the code in the cell below displays a set of plots showing correctly (as '+') and incorrectly (as 'o') classified cases, with the species color-coded. Execute this code and examine the results.
###Code
def plot_iris_score(iris, y_test, scores):
'''Function to plot iris data by type'''
## Find correctly and incorrectly classified cases
true = np.equal(scores, y_test).astype(int)
print(true)
## Create data frame from the test data
iris = pd.DataFrame(iris)
levels = {0:'setosa', 1:'versicolor', 2:'virginica'}
iris['Species'] = [levels[x] for x in y_test]
iris.columns = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width', 'Species']
## Set up for the plot
fig, ax = plt.subplots(2, 2, figsize=(12,12))
markers = ['o', '+']
x_ax = ['Sepal_Length', 'Sepal_Width']
y_ax = ['Petal_Length', 'Petal_Width']
    for t in range(2): # loop over correct and incorrect classifications
setosa = iris[(iris['Species'] == 'setosa') & (true == t)]
versicolor = iris[(iris['Species'] == 'versicolor') & (true == t)]
virginica = iris[(iris['Species'] == 'virginica') & (true == t)]
# loop over all the dimensions
for i in range(2):
for j in range(2):
ax[i,j].scatter(setosa[x_ax[i]], setosa[y_ax[j]], marker = markers[t], color = 'blue')
ax[i,j].scatter(versicolor[x_ax[i]], versicolor[y_ax[j]], marker = markers[t], color = 'orange')
ax[i,j].scatter(virginica[x_ax[i]], virginica[y_ax[j]], marker = markers[t], color = 'green')
ax[i,j].set_xlabel(x_ax[i])
ax[i,j].set_ylabel(y_ax[j])
plot_iris_score(X_test, y_test, scores)
###Output
[0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1
1 1 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 0 0 1 0 1 1
0 1 1 0 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1]
###Markdown
Examine these plots. You can see how the classifier has divided the feature space between the classes. Notice that most of the errors occur in the overlap region between Virginica and Versicolor. This behavior is to be expected. There is an error in classifying Setosa which is a bit surprising, and which probably arises from the projection of the division between classes. Is it possible that a nonlinear SVM would separate these cases better? The code in the cell below uses the `SVC` function to define a nonlinear model with a radial basis function (RBF) kernel. This model is fit with the training data and its evaluation is displayed. Execute this code, and answer **Question 1** on the course page.
###Code
nr.seed(1115)
svc_mod = svm.SVC()
svc_mod.fit(X_train, y_train)
scores = svc_mod.predict(X_test)  # score the test data with the nonlinear (RBF) model
print_metrics_3(y_test, scores)
plot_iris_score(X_test, y_test, scores)
###Output
Confusion matrix
Score Setosa Score Versicolor Score Virginica
Actual Setosa 34 1 0
Actual Versicolor 0 24 10
Actual Vriginica 0 3 28
Accuracy 0.86
Setosa Versicolor Virginica
Num case 35.00 34.00 31.00
Precision 1.00 0.86 0.74
Recall 0.97 0.71 0.90
F1 0.99 0.77 0.81
[0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1
1 1 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 0 0 1 0 1 1
0 1 1 0 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1]
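###Markdown
As an optional extension (not part of the original lab), one way to dig further into **Question 1** is to tune the `C` and `gamma` hyperparameters of the RBF SVM. The sketch below is only illustrative; it assumes the scaled training and test data, the `print_metrics_3` helper, and the `nr` and `svm` imports already used above.
###Code
## Hedged sketch (addition, not part of the original lab): grid search over the RBF SVM hyperparameters
from sklearn.model_selection import GridSearchCV
nr.seed(1115)
param_grid = {'C': [0.1, 1.0, 10.0, 100.0], 'gamma': [0.01, 0.1, 1.0, 10.0]}
svc_search = GridSearchCV(svm.SVC(), param_grid, cv=5)
svc_search.fit(X_train, y_train)
print(svc_search.best_params_)
print_metrics_3(y_test, svc_search.best_estimator_.predict(X_test))
###Output
_____no_output_____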
|
notebooks/chapter14_graphgeo/03_dag.ipynb | ###Markdown
> This is one of the 100 recipes of the [IPython Cookbook](http://ipython-books.github.io/), the definitive guide to high-performance scientific computing and data science in Python. 14.3. Resolving dependencies in a Directed Acyclic Graph with a topological sort You need the `python-apt` package in order to build the package dependency graph. (https://pypi.python.org/pypi/python-apt/)We also assume that this notebook is executed on a Debian system (like Ubuntu). If you don't have such a system, you can download the *Debian* data directly from the book's website. Extract it in the current directory, and start this notebook directly at step 7. (http://ipython-books.github.io) 1. We import the `apt` module and we build the list of packages.
###Code
import json
import apt
cache = apt.Cache()
###Output
_____no_output_____
###Markdown
2. The `graph` dictionary will contain the adjacency list of a small portion of the dependency graph.
###Code
graph = {}
###Output
_____no_output_____
###Markdown
3. We define a function that returns the list of dependencies of a package.
###Code
def get_dependencies(package):
if package not in cache:
return []
pack = cache[package]
ver = pack.candidate or pack.versions[0]
# We flatten the list of dependencies,
# and we remove the duplicates.
return sorted(set([item.name
for sublist in ver.dependencies
for item in sublist]))
###Output
_____no_output_____
###Markdown
4. We now define a *recursive* function that builds the dependency graph for a particular package. This function updates the `graph` variable.
###Code
def get_dep_recursive(package):
if package not in cache:
return []
if package not in graph:
dep = get_dependencies(package)
graph[package] = dep
for dep in graph[package]:
if dep not in graph:
graph[dep] = get_dep_recursive(dep)
return graph[package]
###Output
_____no_output_____
###Markdown
5. Let's build the dependency graph for IPython.
###Code
get_dep_recursive('ipython');
###Output
_____no_output_____
###Markdown
6. Finally, let's save the adjacency list in JSON.
###Code
with open('data/apt.json', 'w') as f:
json.dump(graph, f, indent=1)
###Output
_____no_output_____
###Markdown
7. We import a few packages.
###Code
import json
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
8. Let's load the adjacency list from the JSON file.
###Code
with open('data/apt.json', 'r') as f:
graph = json.load(f)
###Output
_____no_output_____
###Markdown
9. Now, we create a directed graph (`DiGraph` in NetworkX) from our adjacency list. We reverse the graph to get a more natural ordering.
###Code
g = nx.DiGraph(graph).reverse()
###Output
_____no_output_____
###Markdown
10. A topological sort only exists when the graph is a **directed acyclic graph** (DAG). This means that there is no cycle in the graph, i.e. no circular dependency here. Is our graph a DAG?
###Code
nx.is_directed_acyclic_graph(g)
###Output
_____no_output_____
###Markdown
11. What are the packages responsible for the cycles? We can find it out with the `simple_cycles` function.
###Code
set([cycle[0] for cycle in nx.simple_cycles(g)])
###Output
_____no_output_____
###Markdown
12. Here, we can try to remove these packages. In an actual package manager, these cycles need to be carefully taken into account.
###Code
g.remove_nodes_from(_)  # `_` holds the previous cell's output: the set of packages involved in cycles
nx.is_directed_acyclic_graph(g)
###Output
_____no_output_____
###Markdown
13. The graph is now a DAG. Let's display it first.
###Code
ug = g.to_undirected()
deg = ug.degree()
plt.figure(figsize=(4,4));
# The size of the nodes depends on the number of dependencies.
nx.draw(ug, font_size=6,
node_size=[20*deg[k] for k in ug.nodes()]);
###Output
_____no_output_____
###Markdown
14. Finally, we can perform the topological sort, thereby obtaining a linear installation order satisfying all dependencies.
###Code
nx.topological_sort(g)
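# Notes (added, not part of the original recipe):
# - in NetworkX 2.x this returns a generator, so wrap it in list() to see the full order;
# - because the graph was reverse()d above, edges run from a dependency to the packages
#   that need it, so every package appears after its dependencies. For a toy graph with
#   edges libc -> python -> ipython, list(nx.topological_sort(...)) gives
#   ['libc', 'python', 'ipython'], which is a valid installation order.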
###Output
_____no_output_____ |
final_project/fraud500.ipynb | ###Markdown
Credit card fraud detection Binary classification problem to determine whether a transaction is fraudulent or not. The dataset contains transactions made by credit cards in September 2013 by European cardholders. To guarantee anonymity, all the independent variables are transformed into numerical features using PCA transformations.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn
from sklearn.ensemble import IsolationForest
!pwd
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
fraud = pd.read_csv('/kaggle/input/creditcardfraud/creditcard.csv')
fraud.head(3)
fraud.columns
fraud.info()
fraud.describe()
fraud.shape
fraud.isna().sum().any()
###Output
_____no_output_____
###Markdown
There are no NaN values in the entire dataset.
###Code
fraud['Class'].value_counts()
plt.figure(figsize=(9,4))
plt.bar(['non fraud', 'fraud'], np.log10(fraud['Class'].value_counts().to_numpy()), width=0.3, color=['navy', 'firebrick'], zorder=3)
plt.title('Non fraud/fraud count', fontsize=14)
plt.ylabel('class count (log10 scale)')
plt.xlabel('non fraud/fraud')
plt.grid(color='y', axis='y', linewidth=0.5)
plt.show()
print('Non frauds: {} \nFrauds: {}'.format(fraud['Class'].value_counts()[0], fraud['Class'].value_counts()[1]))
###Output
_____no_output_____
###Markdown
Thus we can see the dataset is highly imbalanced. Data distributions and correlation matrix
###Code
fig, ax = plt.subplots(1, 2, figsize=(18,4))
sns.histplot(data=fraud.loc[fraud['Class'] == 0], x="Time", ax=ax[0], color='navy', kde=True)
ax[0].set_title('Non frauds distribution of transaction time', fontsize=14)
sns.histplot(data=fraud.loc[fraud['Class'] == 1], x="Time", ax=ax[1], color='firebrick', kde=True)
ax[1].set_title('Frauds distribution of transaction time', fontsize=14)
plt.show()
fig, ax = plt.subplots(1, 2, figsize=(18,4))
sns.histplot(data=fraud.loc[(fraud['Class'] == 0) & (fraud['Amount'] <= 1000)], x="Amount", color='navy', ax=ax[0], kde=True)
ax[0].set_title('Non frauds distribution of transaction amount in (0-1000) interval', fontsize=14)
sns.histplot(data=fraud.loc[fraud['Class'] == 1], x="Amount", ax=ax[1], color='firebrick', kde=True)
ax[1].set_title('Frauds distribution of transaction amount', fontsize=14)
plt.show()
plt.subplots(figsize=(11,9))
corr = fraud.corr()
sns.heatmap(corr, cmap='YlOrBr', annot_kws={'size':20})
plt.title("Correlation matrix of whole dataset", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Outlier removal The adopted outlier removal technique is Isolation Forest: it isolates observations by recursive random partitioning, and points that take fewer splits to isolate are flagged as outliers (`fit_predict` returns -1 for them).
###Code
removal_fraud = IsolationForest(max_samples='auto', random_state=150, contamination='auto', n_jobs=-1)
removal_nofraud = IsolationForest(max_samples='auto', random_state=150, contamination='auto', n_jobs=-1)
f = fraud.loc[(fraud['Class'] == 1)]
nof = fraud.loc[(fraud['Class'] == 0)]
mask_f = removal_fraud.fit_predict(f[[col for col in f.columns if 'V' in col]])
mask_nof = removal_nofraud.fit_predict(nof[[col for col in nof.columns if 'V' in col]])
print(f.shape)
print(nof.shape)
(mask_f == -1).sum()
(mask_nof == -1).sum()
fraud_outliers = pd.concat([f.iloc[(mask_f == -1)], nof.iloc[(mask_nof == -1)]])
fraud_clean = pd.concat([f.iloc[~(mask_f == -1)], nof.iloc[~(mask_nof == -1)]])
fraud_outliers['Class'].value_counts()
fraud_clean['Class'].value_counts()
###Output
_____no_output_____
###Markdown
Plot the outlier distributions
###Code
fig, ax = plt.subplots(1, 2, figsize=(18,4))
sns.histplot(data=fraud_outliers.loc[fraud_outliers['Class'] == 0], x="Time", ax=ax[0], color='navy', kde=True)
ax[0].set_title('Non frauds distribution of transaction time', fontsize=14)
sns.histplot(data=fraud_outliers.loc[fraud_outliers['Class'] == 1], x="Time", ax=ax[1], color='firebrick', kde=True)
ax[1].set_title('Frauds distribution of transaction time', fontsize=14)
fig.suptitle('Outlier distribution of Time', fontsize=14)
plt.show()
fig, ax = plt.subplots(1, 2, figsize=(18,4))
sns.histplot(data=fraud_outliers.loc[(fraud_outliers['Class'] == 0)], x="Amount", color='navy', ax=ax[0], kde=True)
ax[0].set_title('Non frauds distribution of transaction amount', fontsize=14)
sns.histplot(data=fraud_outliers.loc[fraud_outliers['Class'] == 1], x="Amount", ax=ax[1], color='firebrick', kde=True)
ax[1].set_title('Frauds distribution of transaction amount', fontsize=14)
fig.suptitle('Outlier distribution of Transaction Amount', fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Plot of distributions of clean data
###Code
fig, ax = plt.subplots(1, 2, figsize=(18,4))
sns.histplot(data=fraud_clean.loc[fraud_clean['Class'] == 0], x="Time", ax=ax[0], color='navy', kde=True)
ax[0].set_title('Non frauds distribution of transaction time', fontsize=14)
sns.histplot(data=fraud_clean.loc[fraud_clean['Class'] == 1], x="Time", ax=ax[1], color='firebrick', kde=True)
ax[1].set_title('Frauds distribution of transaction time', fontsize=14)
fig.suptitle('Clean data distribution of Time', fontsize=14)
plt.show()
fig, ax = plt.subplots(1, 2, figsize=(18,4))
sns.histplot(data=fraud_clean.loc[(fraud_clean['Class'] == 0)], x="Amount", color='navy', ax=ax[0], kde=True)
ax[0].set_title('Non frauds distribution of transaction amount', fontsize=14)
sns.histplot(data=fraud_clean.loc[fraud_clean['Class'] == 1], x="Amount", ax=ax[1], color='firebrick', kde=True)
ax[1].set_title('Frauds distribution of transaction amount', fontsize=14)
fig.suptitle('Clean data distribution of transaction amount', fontsize=14)
plt.show()
###Output
_____no_output_____ |
31-3-graphmatriceadja-incid.ipynb | ###Markdown

###Code
a=matrix([[1,1,1,0,0,0,0],
[1,0,0,1,1,0,0],
[1,0,0,0,0,1,1],
[0,1,0,1,0,1,0],
[0,1,0,0,1,0,1],
[0,0,1,0,1,1,0],
[0,0,1,0,1,1,0]])
%display latex
a
F1=Graph([(1,2,4),(1,3,6),(1,5,7),(2,6,7),(3,4,7),(4,5,6)]);F1
###Output
_____no_output_____ |
Data Science Resources/zero-to-mastery-ml-master/section-2-data-science-and-ml-tools/scikit-learn-exercises-solutions.ipynb | ###Markdown
Scikit-Learn Practice SolutionsThis notebook offers a set of potential solutions to the Scikit-Learn exercise notebook.Exercises are based off (and directly taken from) the quick [introduction to Scikit-Learn notebook](https://github.com/mrdbourke/zero-to-mastery-ml/blob/master/section-2-data-science-and-ml-tools/introduction-to-scikit-learn.ipynb).Different tasks will be detailed by comments or text.For further reference and resources, it's advised to check out the [Scikit-Learn documentation](https://scikit-learn.org/stable/user_guide.html).And if you get stuck, try searching for a question in the following format: "how to do XYZ with Scikit-Learn", where XYZ is the function you want to leverage from Scikit-Learn.Since we'll be working with data, we'll import Scikit-Learn's counterparts, Matplotlib, NumPy and pandas.Let's get started.
###Code
# Setup matplotlib to plot inline (within the notebook)
%matplotlib inline
# Import the pyplot module of Matplotlib as plt
import matplotlib.pyplot as plt
# Import pandas under the abbreviation 'pd'
import pandas as pd
# Import NumPy under the abbreviation 'np'
import numpy as np
###Output
_____no_output_____
###Markdown
End-to-end Scikit-Learn classification workflowLet's start with an end to end Scikit-Learn workflow.More specifically, we'll:1. Get a dataset ready2. Prepare a machine learning model to make predictions3. Fit the model to the data and make a prediction4. Evaluate the model's predictions The data we'll be using is [stored on GitHub](https://github.com/mrdbourke/zero-to-mastery-ml/tree/master/data). We'll start with [`heart-disease.csv`](https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/heart-disease.csv), a dataset which contains anonymous patient data and whether or not they have heart disease.**Note:** When viewing a `.csv` on GitHub, make sure it's in the raw format. For example, the URL should look like: https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/heart-disease.csv 1. Getting a dataset ready
###Code
# Import the heart disease dataset and save it to a variable
# using pandas and read_csv()
# Hint: You can directly pass the URL of a csv to read_csv()
heart_disease = pd.read_csv("https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/heart-disease.csv")
# Check the first 5 rows of the data
heart_disease.head()
###Output
_____no_output_____
###Markdown
Our goal here is to build a machine learning model on all of the columns except `target` to predict `target`.In essence, the `target` column is our **target variable** (also called `y` or `labels`) and the rest of the other columns are our independent variables (also called `data` or `X`).And since our target variable is one thing or another (heart disease or not), we know our problem is a classification problem (classifying whether something is one thing or another).Knowing this, let's create `X` and `y` by splitting our dataframe up.
###Code
# Create X (all columns except target)
X = heart_disease.drop("target", axis=1)
# Create y (only the target column)
y = heart_disease["target"]
###Output
_____no_output_____
###Markdown
Now we've split our data into `X` and `y`, we'll use Scikit-Learn to split it into training and test sets.
###Code
# Import train_test_split from sklearn's model_selection module
from sklearn.model_selection import train_test_split
# Use train_test_split to split X & y into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y)
# View the different shapes of the training and test datasets
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
What do you notice about the different shapes of the data?Since our data is now in training and test sets, we'll build a machine learning model to fit patterns in the training data and then make predictions on the test data.To figure out which machine learning model we should use, you can refer to [Scikit-Learn's machine learning map](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html).After following the map, you decide to use the [`RandomForestClassifier`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html). 2. Preparing a machine learning model
###Code
# Import the RandomForestClassifier from sklearn's ensemble module
from sklearn.ensemble import RandomForestClassifier
# Instantiate an instance of RandomForestClassifier as clf
clf = RandomForestClassifier()
###Output
_____no_output_____
###Markdown
Now you've got a `RandomForestClassifier` instance, let's fit it to the training data.Once it's fit, we'll make predictions on the test data. 3. Fitting a model and making predictions
###Code
# Fit the RandomForestClassifier to the training data
clf.fit(X_train, y_train)
# Use the fitted model to make predictions on the test data and
# save the predictions to a variable called y_preds
y_preds = clf.predict(X_test)
###Output
_____no_output_____
###Markdown
4. Evaluating a model's predictionsEvaluating predictions is as important as making them. Let's check how our model did by calling the `score()` method on it and passing it the training (`X_train, y_train`) and testing data.
###Code
# Evaluate the fitted model on the training set using the score() function
clf.score(X_train, y_train)
# Evaluate the fitted model on the test set using the score() function
clf.score(X_test, y_test)
###Output
_____no_output_____
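###Markdown
As an aside (not part of the original solutions), for classifiers `score()` returns the mean accuracy, which we can confirm against `accuracy_score`:
###Code
# Aside: score() for a classifier is just accuracy on the given data
from sklearn.metrics import accuracy_score
clf.score(X_test, y_test) == accuracy_score(y_test, clf.predict(X_test))
###Output
_____no_output_____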
###Markdown
* How did your model go? * What metric does `score()` return for classifiers? * Did your model do better on the training dataset or test dataset? Experimenting with different classification modelsNow we've quickly covered an end-to-end Scikit-Learn workflow and since experimenting is a large part of machine learning, we'll now try a series of different machine learning models and see which gets the best results on our dataset.Going through the [Scikit-Learn machine learning map](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html), we see there are a number of different classification models we can try (different models are in the green boxes).For this exercise, the models we're going to try and compare are:* [LinearSVC](https://scikit-learn.org/stable/modules/svm.htmlclassification)* [KNeighborsClassifier](https://scikit-learn.org/stable/modules/neighbors.html) (also known as K-Nearest Neighbors or KNN)* [SVC](https://scikit-learn.org/stable/modules/svm.htmlclassification) (also known as support vector classifier, a form of [support vector machine](https://en.wikipedia.org/wiki/Support-vector_machine))* [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) (despite the name, this is actually a classifier)* [RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) (an ensemble method and what we used above)We'll follow the same workflow we used above (except this time for multiple models):1. Import a machine learning model2. Get it ready3. Fit it to the data and make predictions4. Evaluate the fitted model**Note:** Since we've already got the data ready, we can reuse it in this section.
###Code
# Import LinearSVC from sklearn's svm module
from sklearn.svm import LinearSVC
# Import KNeighborsClassifier from sklearn's neighbors module
from sklearn.neighbors import KNeighborsClassifier
# Import SVC from sklearn's svm module
from sklearn.svm import SVC
# Import LogisticRegression from sklearn's linear_model module
from sklearn.linear_model import LogisticRegression
# Note: we don't have to import RandomForestClassifier, since we already have
###Output
_____no_output_____
###Markdown
Thanks to the consistency of Scikit-Learn's API design, we can use virtually the same code to fit, score and make predictions with each of our models.To see which model performs best, we'll do the following:1. Instantiate each model in a dictionary2. Create an empty results dictionary3. Fit each model on the training data4. Score each model on the test data5. Check the resultsIf you're wondering what it means to instantiate each model in a dictionary, see the example below.
###Code
# EXAMPLE: Instantiating a RandomForestClassifier() in a dictionary
example_dict = {"RandomForestClassifier": RandomForestClassifier()}
# Create a dictionary called models which contains all of the classification models we've imported
# Make sure the dictionary is in the same format as example_dict
# The models dictionary should contain 5 models
models = {"LinearSVC": LinearSVC(),
"KNN": KNeighborsClassifier(),
"SVC": SVC(),
"LogisticRegression": LogisticRegression(),
"RandomForestClassifier": RandomForestClassifier()}
# Create an empty dictionary called results
results = {}
###Output
_____no_output_____
###Markdown
Since each model we're using has the same `fit()` and `score()` functions, we can loop through our models dictionary, call `fit()` on the training data and then call `score()` with the test data.
###Code
# EXAMPLE: Looping through example_dict fitting and scoring the model
example_results = {}
for model_name, model in example_dict.items():
model.fit(X_train, y_train)
example_results[model_name] = model.score(X_test, y_test)
example_results
# Loop through the models dictionary items, fitting the model on the training data
# and appending the model name and model score on the test data to the results dictionary
for model_name, model in models.items():
model.fit(X_train, y_train)
results[model_name] = model.score(X_test, y_test)
results
###Output
/Users/daniel/Desktop/ml-course/zero-to-mastery-ml/env/lib/python3.6/site-packages/sklearn/svm/_base.py:947: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/daniel/Desktop/ml-course/zero-to-mastery-ml/env/lib/python3.6/site-packages/sklearn/linear_model/_logistic.py:939: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html.
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
###Markdown
* Which model performed the best? * Do the results change each time you run the cell? * Why do you think this is?Due to the randomness of how each model finds patterns in the data, you might notice different results each time.Without manually setting the random state using the `random_state` parameter of some models or using a NumPy random seed, every time you run the cell, you'll get slightly different results.Let's see this in effect by running the same code as the cell above, except this time setting a [NumPy random seed equal to 42](https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.seed.html).
###Code
# Run the same code as the cell above, except this time set a NumPy random seed
# equal to 42
np.random.seed(42)
for model_name, model in models.items():
model.fit(X_train, y_train)
results[model_name] = model.score(X_test, y_test)
results
###Output
/Users/daniel/Desktop/ml-course/zero-to-mastery-ml/env/lib/python3.6/site-packages/sklearn/svm/_base.py:947: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
/Users/daniel/Desktop/ml-course/zero-to-mastery-ml/env/lib/python3.6/site-packages/sklearn/linear_model/_logistic.py:939: ConvergenceWarning: lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.
Increase the number of iterations (max_iter) or scale the data as shown in:
https://scikit-learn.org/stable/modules/preprocessing.html.
Please also refer to the documentation for alternative solver options:
https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)
###Markdown
* Run the cell above a few times, what do you notice about the results? * Which model performs the best this time?* What happens if you add a NumPy random seed to the cell where you called `train_test_split()` (towards the top of the notebook) and then rerun the cell above?Let's make our results a little more visual.
###Code
# Create a pandas dataframe with the data as the values of the results dictionary,
# the index as the keys of the results dictionary and a single column called accuracy.
# Be sure to save the dataframe to a variable.
results_df = pd.DataFrame(results.values(),
results.keys(),
columns=["Accuracy"])
# Create a bar plot of the results dataframe using plot.bar()
results_df.plot.bar();
###Output
_____no_output_____
###Markdown
Using `np.random.seed(42)` results in the `LogisticRegression` model performing the best (at least on my computer).Let's tune its hyperparameters and see if we can improve it. Hyperparameter TuningRemember, if you're ever trying to tune a machine learning model's hyperparameters and you're not sure where to start, you can always search something like "MODEL_NAME hyperparameter tuning".In the case of LogisticRegression, you might come across articles, such as [Hyperparameter Tuning Using Grid Search by Chris Albon](https://chrisalbon.com/machine_learning/model_selection/hyperparameter_tuning_using_grid_search/).The article uses [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) but we're going to be using [`RandomizedSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html).The different hyperparameters to search over have been set up for you in `log_reg_grid` but feel free to change them.
###Code
# Different LogisticRegression hyperparameters
log_reg_grid = {"C": np.logspace(-4, 4, 20),
"solver": ["liblinear"]}
###Output
_____no_output_____
###Markdown
Since we've got a set of hyperparameters we can import `RandomizedSearchCV`, pass it our dictionary of hyperparameters and let it search for the best combination.
###Code
# Setup np random seed of 42
np.random.seed(42)
# Import RandomizedSearchCV from sklearn's model_selection module
from sklearn.model_selection import RandomizedSearchCV
# Setup an instance of RandomizedSearchCV with a LogisticRegression() estimator,
# our log_reg_grid as the param_distributions, a cv of 5 and n_iter of 5.
rs_log_reg = RandomizedSearchCV(estimator=LogisticRegression(),
param_distributions=log_reg_grid,
cv=5,
n_iter=5,
verbose=True)
# Fit the instance of RandomizedSearchCV
rs_log_reg.fit(X_train, y_train);
###Output
Fitting 5 folds for each of 5 candidates, totalling 25 fits
###Markdown
Once `RandomizedSearchCV` has finished, we can find the best hyperparameters it found using the `best_params_` attribute.
###Code
# Find the best parameters of the RandomizedSearchCV instance using the best_params_ attribute
rs_log_reg.best_params_
# Score the instance of RandomizedSearchCV using the test data
rs_log_reg.score(X_test, y_test)
###Output
_____no_output_____
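###Markdown
As an optional aside (not part of the original solutions), the exhaustive `GridSearchCV` is the other common tuning approach; the sketch below reuses the same `log_reg_grid`:
###Code
# Aside: GridSearchCV tries every combination in log_reg_grid (20 values of C x 1 solver)
from sklearn.model_selection import GridSearchCV
gs_log_reg = GridSearchCV(estimator=LogisticRegression(),
                          param_grid=log_reg_grid,
                          cv=5,
                          verbose=True)
gs_log_reg.fit(X_train, y_train)
gs_log_reg.best_params_, gs_log_reg.score(X_test, y_test)
###Output
_____no_output_____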
###Markdown
After hyperparameter tuning, did the model's score improve? What else could you try to improve it? Are there any other methods of hyperparameter tuning you can find for `LogisticRegression`? Classifier Model EvaluationWe've tried to find the best hyperparameters on our model using `RandomizedSearchCV` and so far we've only been evaluating our model using the `score()` function which returns accuracy. But when it comes to classification, you'll likely want to use a few more evaluation metrics, including:* [**Confusion matrix**](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/) - Compares the predicted values with the true values in a tabular way, if 100% correct, all values in the matrix will be top left to bottom right (diagonal line).* [**Cross-validation**](https://scikit-learn.org/stable/modules/cross_validation.html) - Splits your dataset into multiple parts, trains and tests your model on each part and evaluates performance as an average. * [**Precision**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.htmlsklearn.metrics.precision_score) - Proportion of true positives over the total number of predicted positives (true positives plus false positives). Higher precision leads to fewer false positives.* [**Recall**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.htmlsklearn.metrics.recall_score) - Proportion of true positives over the total number of actual positives (true positives plus false negatives). Higher recall leads to fewer false negatives.* [**F1 score**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.htmlsklearn.metrics.f1_score) - Combines precision and recall into one metric. 1 is best, 0 is worst.* [**Classification report**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) - Sklearn has a built-in function called `classification_report()` which returns some of the main classification metrics such as precision, recall and f1-score.* [**ROC Curve**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_score.html) - [Receiver Operating Characteristic](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) is a plot of true positive rate versus false positive rate.* [**Area Under Curve (AUC)**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) - The area underneath the ROC curve. A perfect model achieves a score of 1.0.Before we get to these, we'll instantiate a new instance of our model using the best hyperparameters found by `RandomizedSearchCV`.
###Code
# Instantiate a LogisticRegression classifier using the best hyperparameters from RandomizedSearchCV
clf = LogisticRegression(solver="liblinear", C=0.23357214690901212)
# Fit the new instance of LogisticRegression with the best hyperparameters on the training data
clf.fit(X_train, y_train);
###Output
_____no_output_____
###Markdown
Now it's time to import the relevant Scikit-Learn methods for each of the classification evaluation metrics we're after.
###Code
# Import confusion_matrix and classification_report from sklearn's metrics module
from sklearn.metrics import confusion_matrix, classification_report
# Import precision_score, recall_score and f1_score from sklearn's metrics module
from sklearn.metrics import precision_score, recall_score, f1_score
# Import plot_roc_curve from sklearn's metrics module
from sklearn.metrics import plot_roc_curve
###Output
_____no_output_____
###Markdown
Evaluation metrics very often compare a model's predictions to some ground truth labels.Let's make some predictions on the test data using our latest model and save them to `y_preds`.
###Code
# Make predictions on test data and save them
y_preds = clf.predict(X_test)
###Output
_____no_output_____
###Markdown
Time to use the predictions our model has made to evaluate it beyond accuracy.
###Code
# Create a confusion matrix using the confusion_matrix function
confusion_matrix(y_test, y_preds)
###Output
_____no_output_____
###Markdown
**Challenge:** The in-built `confusion_matrix` function in Scikit-Learn produces something not too visual, how could you make your confusion matrix more visual?You might want to search something like "how to plot a confusion matrix". Note: There may be more than one way to do this.
###Code
# Import seaborn for improving visualisation of confusion matrix
import seaborn as sns
# Make confusion matrix more visual
def plot_conf_mat(y_test, y_preds):
"""
Plots a confusion matrix using Seaborn's heatmap().
"""
fig, ax = plt.subplots(figsize=(3, 3))
ax = sns.heatmap(confusion_matrix(y_test, y_preds),
annot=True, # Annotate the boxes
cbar=False)
    plt.xlabel("Predicted label")  # columns of confusion_matrix are predictions
    plt.ylabel("True label")       # rows are the actual labels
# Fix the broken annotations (this happened in Matplotlib 3.1.1)
bottom, top = ax.get_ylim()
ax.set_ylim(bottom + 0.5, top - 0.5);
plot_conf_mat(y_test, y_preds)
###Output
_____no_output_____
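###Markdown
Another option (an aside, not part of the original solutions): Scikit-Learn ships its own confusion matrix plot. In the Scikit-Learn version this notebook uses it's `plot_confusion_matrix`; newer releases replace it with `ConfusionMatrixDisplay.from_estimator`.
###Code
# Aside: built-in confusion matrix plot (the exact helper depends on your Scikit-Learn version)
from sklearn.metrics import plot_confusion_matrix
plot_confusion_matrix(clf, X_test, y_test);
###Output
_____no_output_____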
###Markdown
How about a classification report?
###Code
# classification report
print(classification_report(y_test, y_preds))
###Output
precision recall f1-score support
0 0.83 0.69 0.75 35
1 0.77 0.88 0.82 41
accuracy 0.79 76
macro avg 0.80 0.78 0.78 76
weighted avg 0.79 0.79 0.79 76
###Markdown
**Challenge:** Write down what each of the columns in this classification report are.* **Precision** - Indicates the proportion of positive identifications (model predicted class 1) which were actually correct. A model which produces no false positives has a precision of 1.0.* **Recall** - Indicates the proportion of actual positives which were correctly classified. A model which produces no false negatives has a recall of 1.0.* **F1 score** - A combination of precision and recall. A perfect model achieves an F1 score of 1.0.* **Support** - The number of samples each metric was calculated on.* **Accuracy** - The accuracy of the model in decimal form. Perfect accuracy is equal to 1.0.* **Macro avg** - Short for macro average, the average precision, recall and F1 score between classes. Macro avg doesn't take class imbalance into account, so if you do have class imbalances, pay attention to this metric.* **Weighted avg** - Short for weighted average, the weighted average precision, recall and F1 score between classes. Weighted means each metric is calculated with respect to how many samples there are in each class. This metric will favour the majority class (e.g. will give a high value when one class outperforms another due to having more samples).The classification report gives us a range of values for precision, recall and F1 score, time to find these metrics using Scikit-Learn functions.
###Code
# Find the precision score of the model using precision_score()
precision_score(y_test, y_preds)
# Find the recall score
recall_score(y_test, y_preds)
# Find the F1 score
f1_score(y_test, y_preds)
###Output
_____no_output_____
###Markdown
Confusion matrix: done.Classification report: done.ROC (receiver operating characteristic) curve & AUC (area under curve) score: not done.Let's fix this.If you're unfamiliar with what a ROC curve is, that's your first challenge: read up on what one is.In a sentence, a [ROC curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) is a plot of the true positive rate versus the false positive rate.And the AUC score is the area under the ROC curve.Scikit-Learn provides a handy function for creating both of these called [`plot_roc_curve()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_roc_curve.html).
###Code
# Plot a ROC curve using our current machine learning model using plot_roc_curve
plot_roc_curve(clf, X_test, y_test);
###Output
_____no_output_____
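###Markdown
The AUC is shown in the plot's legend; to get the number on its own (an aside, not part of the original solutions), `roc_auc_score` can be computed from the predicted probabilities of the positive class:
###Code
# Aside: AUC as a single number, using the probability of class 1
from sklearn.metrics import roc_auc_score
roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
###Output
_____no_output_____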
###Markdown
Beautiful! We've gone far beyond accuracy with a plethora of extra classification evaluation metrics.If you're not sure about any of these, don't worry, they can take a while to understand. That could be an optional extension, reading up on a classification metric you're not sure of.The thing to note here is all of these metrics have been calculated using a single training set and a single test set. Whilst this is okay, a more robust way is to calculate them using [cross-validation](https://scikit-learn.org/stable/modules/cross_validation.html).We can calculate various evaluation metrics using cross-validation using Scikit-Learn's [`cross_val_score()`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html) function along with the `scoring` parameter.
###Code
# Import cross_val_score from sklearn's model_selection module
from sklearn.model_selection import cross_val_score
# EXAMPLE: By default cross_val_score returns 5 values (cv=5).
cross_val_score(clf,
X,
y,
scoring="accuracy",
cv=5)
# EXAMPLE: Taking the mean of the returned values from cross_val_score
# gives a cross-validated version of the scoring metric.
cross_val_acc = np.mean(cross_val_score(clf,
X,
y,
scoring="accuracy",
cv=5))
cross_val_acc
###Output
_____no_output_____
###Markdown
In the examples, the cross-validated accuracy is found by taking the mean of the array returned by `cross_val_score()`.Now it's time to find the same for precision, recall and F1 score.
###Code
# Find the cross-validated precision
cross_val_precision = np.mean(cross_val_score(clf,
X,
y,
scoring="precision",
cv=5))
cross_val_precision
# Find the cross-validated recall
cross_val_recall = np.mean(cross_val_score(clf,
X,
y,
scoring="recall",
cv=5))
cross_val_recall
# Find the cross-validated F1 score
cross_val_f1 = np.mean(cross_val_score(clf,
X,
y,
scoring="f1",
cv=5))
cross_val_f1
###Output
_____no_output_____
###Markdown
Exporting and importing a trained modelOnce you've trained a model, you may want to export it and save it to file so you can share it or use it elsewhere.One method of exporting and importing models is using the joblib library.In Scikit-Learn, exporting and importing a trained model is known as [model persistence](https://scikit-learn.org/stable/modules/model_persistence.html).
###Code
# Import the dump and load functions from the joblib library
from joblib import dump, load
# Use the dump function to export the trained model to file
dump(clf, "trained-classifier.joblib")
# Use the load function to import the trained model you just exported
# Save it to a different variable name to the origial trained model
loaded_clf = load("trained-classifier.joblib")
# Evaluate the loaded trained model on the test data
loaded_clf.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
What do you notice about the loaded trained model results versus the original (pre-exported) model results? Scikit-Learn Regression PracticeFor the next few exercises, we're going to be working on a regression problem, in other words, using some data to predict a number.Our dataset is a [table of car sales](https://docs.google.com/spreadsheets/d/1LPEIWJdSSJYrfn-P3UQDIXbEn5gg-o6I7ExLrWTTBWs/edit?usp=sharing), containing different car characteristics as well as a sale price.We'll use Scikit-Learn's built-in regression machine learning models to try and learn the patterns in the car characteristics and their prices on a certain group of the dataset before trying to predict the sale price of a group of cars the model has never seen before.To begin, we'll [import the data from GitHub](https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/car-sales-extended-missing-data.csv) into a pandas DataFrame, check out some details about it and try to build a model as soon as possible.
###Code
# Read in the car sales data
car_sales = pd.read_csv("https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/car-sales-extended-missing-data.csv")
# View the first 5 rows of the car sales data
car_sales.head()
# Get information about the car sales DataFrame
car_sales.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000 entries, 0 to 999
Data columns (total 5 columns):
Make 951 non-null object
Colour 950 non-null object
Odometer (KM) 950 non-null float64
Doors 950 non-null float64
Price 950 non-null float64
dtypes: float64(3), object(2)
memory usage: 39.2+ KB
###Markdown
Looking at the output of `info()`,* How many rows are there total?* What datatypes are in each column?* How many missing values are there in each column?
###Code
# Find number of missing values in each column
car_sales.isna().sum()
# Find the datatypes of each column of car_sales
car_sales.dtypes
###Output
_____no_output_____
###Markdown
Knowing this information, what would happen if we tried to model our data as it is?Let's see.
###Code
# EXAMPLE: This doesn't work because our car_sales data isn't all numerical
from sklearn.ensemble import RandomForestRegressor
car_sales_X, car_sales_y = car_sales.drop("Price", axis=1), car_sales.Price
rf_regressor = RandomForestRegressor().fit(car_sales_X, car_sales_y)
###Output
_____no_output_____
###Markdown
As we see, the cell above breaks because our data contains non-numerical values as well as missing data.To take care of some of the missing data, we'll remove the rows which have no labels (all the rows with missing values in the `Price` column).
###Code
# Remove rows with no labels (NaN's in the Price column)
car_sales.dropna(subset=["Price"], inplace=True)
###Output
_____no_output_____
###Markdown
Building a pipelineSince our `car_sales` data has missing values and isn't all numerical, we'll have to fix these things before we can fit a machine learning model on it.There are ways we could do this with pandas but since we're practicing Scikit-Learn, we'll see how we might do it with the [`Pipeline`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) class. Because we're modifying columns in our dataframe (filling missing values, converting non-numerical data to numbers) we'll need the [`ColumnTransformer`](https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html), [`SimpleImputer`](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html) and [`OneHotEncoder`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) classes as well.Finally, because we'll need to split our data into training and test sets, we'll import `train_test_split` as well.
###Code
# Import Pipeline from sklearn's pipeline module
from sklearn.pipeline import Pipeline
# Import ColumnTransformer from sklearn's compose module
from sklearn.compose import ColumnTransformer
# Import SimpleImputer from sklearn's impute module
from sklearn.impute import SimpleImputer
# Import OneHotEncoder from sklearn's preprocessing module
from sklearn.preprocessing import OneHotEncoder
# Import train_test_split from sklearn's model_selection module
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Now we've got the necessary tools we need to create our preprocessing `Pipeline` which fills missing values along with turning all non-numerical data into numbers.Let's start with the categorical features.
###Code
# Define different categorical features
categorical_features = ["Make", "Colour"]
# Create categorical transformer Pipeline
categorical_transformer = Pipeline(steps=[
# Set SimpleImputer strategy to "constant" and fill value to "missing"
("imputer", SimpleImputer(strategy="constant", fill_value="missing")),
# Set OneHotEncoder to ignore the unknowns
("onehot", OneHotEncoder(handle_unknown="ignore"))])
###Output
_____no_output_____
###Markdown
It would be safe to treat `Doors` as a categorical feature as well, however since we know the vast majority of cars have 4 doors, we'll impute the missing `Doors` values as 4.
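###Markdown
A quick check of that claim (an aside, not part of the original solutions):
###Code
# Aside: confirm that 4-door cars dominate before imputing missing Doors values with 4
car_sales["Doors"].value_counts()
###Output
_____no_output_____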
###Code
# Define Doors features
door_feature = ["Doors"]
# Create Doors transformer Pipeline
door_transformer = Pipeline(steps=[
# Set SimpleImputer strategy to "constant" and fill value to 4
("imputer", SimpleImputer(strategy="constant", fill_value=4))])
###Output
_____no_output_____
###Markdown
Now onto the numeric features. In this case, the only numeric feature is the `Odometer (KM)` column. Let's fill its missing values with the median.
###Code
# Define numeric features (only the Odometer (KM) column)
numeric_features = ["Odometer (KM)"]
# Crearte numeric transformer Pipeline
numeric_transformer = Pipeline(steps=[
# Set SimpleImputer strategy to fill missing values with the "Median"
("imputer", SimpleImputer(strategy="median"))])
###Output
_____no_output_____
###Markdown
Time to put all of our individual transformer `Pipeline`'s into a single `ColumnTransformer` instance.
###Code
# Setup preprocessing steps (fill missing values, then convert to numbers)
preprocessor = ColumnTransformer(
transformers=[
# Use the categorical_transformer to transform the categorical_features
("cat", categorical_transformer, categorical_features),
# Use the door_transformer to transform the door_feature
("door", door_transformer, door_feature),
# Use the numeric_transformer to transform the numeric_features
("num", numeric_transformer, numeric_features)])
###Output
_____no_output_____
###Markdown
Boom! Now our `preprocessor` is ready, time to import some regression models to try out.Comparing our data to the [Scikit-Learn machine learning map](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html), we can see there's a handful of different regression models we can try.* [RidgeRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html)* [SVR(kernel="linear")](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html) - short for Support Vector Regressor, a form of support vector machine.* [SVR(kernel="rbf")](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html) - short for Support Vector Regressor, a form of support vector machine.* [RandomForestRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) - the regression version of RandomForestClassifier.
###Code
# Import Ridge from sklearn's linear_model module
from sklearn.linear_model import Ridge
# Import SVR from sklearn's svm module
from sklearn.svm import SVR
# Import RandomForestRegressor from sklearn's ensemble module
from sklearn.ensemble import RandomForestRegressor
###Output
_____no_output_____
###Markdown
Again, thanks to the design of the Scikit-Learn library, we're able to use very similar code for each of these models.To test them all, we'll create a dictionary of regression models and an empty dictionary for regression model results.
###Code
# Create dictionary of model instances, there should be 4 total key, value pairs
# in the form {"model_name": model_instance}.
# Don't forget there's two versions of SVR, one with a "linear" kernel and the
# other with kernel set to "rbf".
regression_models = {"Ridge": Ridge(),
"SVR_linear": SVR(kernel="linear"),
"SVR_rbf": SVR(kernel="rbf"),
"RandomForestRegressor": RandomForestRegressor()}
# Create an empty dictionary for the regression results
regression_results = {}
###Output
_____no_output_____
###Markdown
Our regression model dictionary is prepared as well as an empty dictionary to append results to, time to get the data split into `X` (feature variables) and `y` (target variable) as well as training and test sets.In our car sales problem, we're trying to use the different characteristics of a car (`X`) to predict its sale price (`y`).
###Code
# Create car sales X data (every column of car_sales except Price)
car_sales_X = car_sales.drop("Price", axis=1)
# Create car sales y data (the Price column of car_sales)
car_sales_y = car_sales["Price"]
# Use train_test_split to split the car_sales_X and car_sales_y data into
# training and test sets.
# Give the test set 20% of the data using the test_size parameter.
# For reproducibility set the random_state parameter to 42.
car_X_train, car_X_test, car_y_train, car_y_test = train_test_split(car_sales_X,
car_sales_y,
test_size=0.2,
random_state=42)
# Check the shapes of the training and test datasets
car_X_train.shape, car_X_test.shape, car_y_train.shape, car_y_test.shape
###Output
_____no_output_____
###Markdown
* How many rows are in each set?* How many columns are in each set?Alright, our data is split into training and test sets, time to build a small loop which is going to:1. Go through our `regression_models` dictionary2. Create a `Pipeline` which contains our `preprocessor` as well as one of the models in the dictionary3. Fit the `Pipeline` to the car sales training data4. Evaluate the target model on the car sales test data and append the results to our `regression_results` dictionary
###Code
# Loop through the items in the regression_models dictionary
for model_name, model in regression_models.items():
# Create a model pipeline with a preprocessor step and model step
model_pipeline = Pipeline(steps=[("preprocessor", preprocessor),
("model", model)])
# Fit the model pipeline to the car sales training data
print(f"Fitting {model_name}...")
model_pipeline.fit(car_X_train, car_y_train)
# Score the model pipeline on the test data appending the model_name to the
# results dictionary
print(f"Scoring {model_name}...")
regression_results[model_name] = model_pipeline.score(car_X_test,
car_y_test)
###Output
Fitting Ridge...
Scoring Ridge...
Fitting SVR_linear...
Scoring SVR_linear...
Fitting SVR_rbf...
Scoring SVR_rbf...
Fitting RandomForestRegressor...
Scoring RandomForestRegressor...
###Markdown
Our regression models have been fit, let's see how they did!
###Code
# Check the results of each regression model by printing the regression_results
# dictionary
regression_results
###Output
_____no_output_____
###Markdown
* Which model did the best?* How could you improve its results?* What metric does the `score()` method of a regression model return by default?Since we've fitted some models but only compared them via the default metric contained in the `score()` method (R^2 score or coefficient of determination), let's take the `RidgeRegression` model and evaluate it with a few other [regression metrics](https://scikit-learn.org/stable/modules/model_evaluation.htmlregression-metrics).Specifically, let's find:1. **R^2 (pronounced r-squared) or coefficient of determination** - Compares your model's predictions to the mean of the targets. Values can range from negative infinity (a very poor model) to 1. For example, if all your model does is predict the mean of the targets, its R^2 value would be 0. And if your model perfectly predicts a range of numbers its R^2 value would be 1. 2. **Mean absolute error (MAE)** - The average of the absolute differences between predictions and actual values. It gives you an idea of how wrong your predictions were.3. **Mean squared error (MSE)** - The average squared differences between predictions and actual values. Squaring the errors removes negative errors. It also amplifies outliers (samples which have larger errors).Scikit-Learn has a few classes built-in which are going to help us with these, namely, [`mean_absolute_error`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html), [`mean_squared_error`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html) and [`r2_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html).
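###Markdown
As a small numeric illustration (an aside, not part of the original solutions) of how MSE amplifies large errors compared to MAE:
###Code
# Aside: one badly wrong prediction inflates MSE far more than MAE
y_true_demo = np.array([10, 20, 30, 40])
y_pred_demo = np.array([11, 19, 31, 80])            # last prediction is way off
print(np.abs(y_true_demo - y_pred_demo).mean())     # MAE = 10.75
print(((y_true_demo - y_pred_demo) ** 2).mean())    # MSE = 400.75
###Output
_____no_output_____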
###Code
# Import mean_absolute_error from sklearn's metrics module
from sklearn.metrics import mean_absolute_error
# Import mean_squared_error from sklearn's metrics module
from sklearn.metrics import mean_squared_error
# Import r2_score from sklearn's metrics module
from sklearn.metrics import r2_score
###Output
_____no_output_____
###Markdown
All the evaluation metrics we're concerned with compare a model's predictions with the ground truth labels. Knowing this, we'll have to make some predictions.Let's create a `Pipeline` with the `preprocessor` and a `Ridge()` model, fit it on the car sales training data and then make predictions on the car sales test data.
###Code
# Create RidgeRegression Pipeline with preprocessor as the "preprocessor" and
# Ridge() as the "model".
ridge_pipeline = Pipeline(steps=[("preprocessor", preprocessor),
("model", Ridge())])
# Fit the RidgeRegression Pipeline to the car sales training data
ridge_pipeline.fit(car_X_train, car_y_train)
# Make predictions on the car sales test data using the RidgeRegression Pipeline
car_y_preds = ridge_pipeline.predict(car_X_test)
# View the first 50 predictions
car_y_preds[:50]
###Output
_____no_output_____
###Markdown
Nice! Now we've got some predictions, time to evaluate them. We'll find the mean squared error (MSE), mean absolute error (MAE) and R^2 score (coefficient of determination) of our model.
###Code
# EXAMPLE: Find the MSE by comparing the car sales test labels to the car sales predictions
mse = mean_squared_error(car_y_test, car_y_preds)
# Return the MSE
mse
# Find the MAE by comparing the car sales test labels to the car sales predictions
mae = mean_absolute_error(car_y_test, car_y_preds)
# Return the MAE
mae
# Find the R^2 score by comparing the car sales test labels to the car sales predictions
r2 = r2_score(car_y_test, car_y_preds)
# Return the R^2 score
r2
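# Note (an aside, not part of the original solutions): this is the same value returned by
# ridge_pipeline.score(car_X_test, car_y_test), since score() for regressors defaults to R^2.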
###Output
_____no_output_____ |
MasterIndex/TextProcessingOne13Form_update_sanjiv.ipynb | ###Markdown
Code to process a 13D/F/G filing and pull out required fields
###Code
%pylab inline
import pandas as pd
from bs4 import BeautifulSoup
import requests
import re
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Do all extraction in one block of code - Run this
###Code
%time
# Create an empty data frame. To be populated later with quarterly from info.
form_links = pd.DataFrame(columns = ['CIK', 'Company Name', 'Form Type', 'Date Filed', 'Filename'])
# Function to extract URLs of Form 13*. URLs to be used to access HTML/XML documents of interest
#
# Input: First and last years in range of years we want Form 13s pulled
#
# Output: Dataframe of all Form 13* filed during range of years specified.
def extract_13_url(first, last):
    global form_links  # each quarter's Form 13 rows are appended to the module-level DataFrame above
for yr in range(first, last):
for qr in range(1,5):
#print(yr)
df = pd.read_csv('MasterIndex'+str(yr)+ '_' + str(qr)+'.idx', encoding='latin1', sep = '\t')
data = df.iloc[6:].reset_index(drop=True)
data = data.rename(columns={'Description: Master Index of EDGAR Dissemination Feed': 'messy'})
meep = data.messy.str.split('|', expand = True)
raw = meep.rename(columns={0: 'CIK', 1: 'Company Name', 2: 'Form Type', 3: 'Date Filed', 4: 'Filename'})
#raw = raw[raw['Form Type'].str.contains('13D')| head['Form Type'].str.contains('13F') |\
# raw['Form Type'].str.contains('13G')].reset_index(drop=True)
raw = raw[raw['Form Type'].str.match('.*13D.*|.*13F.*|.*13G.*')].reset_index(drop = True)
form_links = form_links.append(raw)
form_links = form_links.reset_index(drop = True)
#extract_13_url(1997, 2019)
###Output
_____no_output_____
###Markdown
Save the dataframe to a csv file so we only need to load it down the pipeline.
###Code
### DO NOT RUN ###
#form_links.to_csv('form_info.csv', index = False)
### DO NOT RUN ###
###Output
_____no_output_____
###Markdown
Below, test for the first filename to check for single file processing
###Code
df = pd.read_csv('form_info.csv', index_col = False)
df.head()
###Output
_____no_output_____
###Markdown
This is a very clean table. We can likely use the CIK from each row to match with the respective company's CIK in the Compustat table.However, this Form 13* table includes both the target and acquirer's CIK for the same filing (in other words, we have duplicate filings). We will need to figure out how to work around duplicates, or find another way to uniquely identify each filing. Fortunately, we know that each filing only contains the target's CUSIP number. Compustat data also contains a company's CUSIP number (which is unique). Therefore, we will have to extract the CUSIP no. from each filing and append it to the table as a new field.
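One possible way to do that (a sketch under assumptions: the helper name and the regex are illustrative, and the CUSIP layout varies across filings, so this is not a final implementation) is to pull each filing's text and parse the CUSIP off its label line:
###Code
## Hedged sketch (addition): pull a CUSIP number out of one filing's text
import re
import requests
from bs4 import BeautifulSoup

def get_cusip(filename):
    text = BeautifulSoup(requests.get('https://www.sec.gov/Archives/' + filename).content, 'lxml').get_text()
    # CUSIPs are 9 characters, often printed as '783975 10 5' after a 'CUSIP No.' label
    m = re.search(r'CUSIP\s+No\.?\s+([0-9A-Za-z]{6}\s*[0-9A-Za-z]{2}\s*[0-9A-Za-z])', text, re.IGNORECASE)
    return m.group(1).replace(' ', '') if m else None

# e.g. df['CUSIP'] = df['Filename'].apply(get_cusip)  # slow: one HTTP request per filing
###Output
_____no_output_____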
###Code
### DO NOT RUN ###
### Pseudocode for later reference: map a filing's 'Date Filed' to the quarter it falls in ###
# def quarter_of(date_filed, datetbl):
#     for _, row in datetbl.iterrows():
#         if row['last_filing'] <= date_filed <= row['this_filing']:
#             return row['qtr']
#     return None   # otherwise keep checking the next row of datetbl
### DO NOT RUN ###
###Output
_____no_output_____
###Markdown
Test - do not delete
###Code
#Fields we want to collect
fields = ["COMPANY CONFORMED NAME","CENTRAL INDEX KEY","STANDARD INDUSTRIAL CLASSIFICATION","CUSIP NO."]
for x in range(1,10):
url = 'https://www.sec.gov/Archives/' + df.Filename[x]
f = requests.get(url)
BeautifulSoup(f.content,'lxml').get_text()
###Output
_____no_output_____
###Markdown
Content Extraction
###Code
td = df[df['Form Type'].str.match('.*13D')].reset_index(drop = True)
# Specify index of the file we want. Here, we will use index = 1 as a test.
x = 1
url = 'https://www.sec.gov/Archives/' + td.Filename[x]
# Fields we want to collect
fields = ["COMPANY CONFORMED NAME","CENTRAL INDEX KEY","STANDARD INDUSTRIAL CLASSIFICATION","CUSIP No."]
f = requests.get(url)
BeautifulSoup(f.content,'lxml').get_text()
###Output
_____no_output_____
###Markdown
Code separately in the function below:- Item 4- Item 5- (Date of Event Which Requires Filing of this Statement)- SOURCE OF FUNDS*- AGGREGATE AMOUNT BENEFICIALLY OWNED BY EACH REPORTING PERSON- PERCENT OF CLASS REPRESENTED BY AMOUNT IN ROW (11)
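As a rough sketch of how one of the later fields might be handled (illustrative only: `list13d` is the cleaned line list built inside `extract13D`, and the regex is an assumption rather than the final implementation):
###Code
# Sketch: pull "PERCENT OF CLASS REPRESENTED BY AMOUNT IN ROW (11)" as a float
def extract_percent_of_class(list13d):
    for i, line in enumerate(list13d):
        if re.search('PERCENT OF CLASS REPRESENTED', line, re.IGNORECASE):
            m = re.search(r'([0-9]+\.?[0-9]*)\s*%', ' '.join(list13d[i:i+2]))
            if m:
                return float(m.group(1))
    return None
###Output
_____no_output_____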
###Code
# MAIN FUNCTIONS TO DO ALL EXTRACTION
# SEARCH LIST OF TEXT FOR ITEM LINE NUMBER (returns the index of the last line matching text_item)
def findLineNumber(list_of_text, text_item):
    idx = None   # stays None if the item is not found
    res = [j for j in list_of_text if re.search(text_item, j)]
    for item in res:
        idx = list_of_text.index(item)
    return idx

def extract13D(url):
    print("URL: ", url)
    f = requests.get(url)
    text = BeautifulSoup(f.content, 'lxml').get_text()
    # Split into lines
    list13d = text.splitlines()
    # Remove all stuff that starts with \xa0 and then all blank lines
    list13d = [j for j in list13d if j.startswith('\xa0') == False]
    list13d = [j for j in list13d if len(j) > 0]
    for field in fields:
        print('Field: ', field)
        res = [j for j in list13d if re.search(field, j)]
        for r in res:
            x = re.split('[\t]', r)
            y = [j for j in x if len(j) > 0][-1]
            print(y)
        print(' ')
    # PROCESS ALL ITEMS
    # Item 4
    print('Field: Purpose of Transaction')
    start_idx = findLineNumber(list13d, 'Item 4')
    end_idx = findLineNumber(list13d, 'Item 5.')
    print(' '.join(list13d[(start_idx+1):end_idx]))
extract13D(url)
###Output
URL: https://www.sec.gov/Archives/edgar/data/1000079/0000950116-97-000432.txt
Field: COMPANY CONFORMED NAME
SC&T INTERNATIONAL INC
CAPITAL VENTURES INTERNATIONAL /E9/
Field: CENTRAL INDEX KEY
0001000079
0001011712
Field: STANDARD INDUSTRIAL CLASSIFICATION
COMPUTER PERIPHERAL EQUIPMENT, NEC [3577]
[]
Field: CUSIP No.
CUSIP No. 783975 10 5 Page 2 of 5 Pages
CUSIP No. 783975 10 5 Page 3 of 5 Pages
CUSIP No. 783975 10 5 Page 4 of 5 Pages
CUSIP No. 783975 10 5 Page 5 of 5 Pages
Field: Purpose of Transaction
|
codes/tests/Teste_mag_modelo_escada.ipynb | ###Markdown
Step 1: Definition of the observation coordinates:
###Code
# Assumed imports (no import cell is shown in this notebook);
# plot_3D, prism, auxiliars and salve_doc are project-specific modules.
import numpy
import matplotlib.pyplot as plt
from datetime import datetime
import plot_3D, prism, auxiliars, salve_doc

nx = 100 # number of observations in the x direction
ny = 100 # number of observations in the y direction
size = (nx, ny)
xmin = -10000.0 # meters
xmax = +10000.0 # meters
ymin = -10000.0 # meters
ymax = +10000.0 # meters
z = -100.0 # flight height (constant Z), in meters
#zmax = -100.0 # flight height, in meters
dicionario = {'nx': nx,
'ny': ny,
'xmin': xmin,
'xmax': xmax,
'ymin': ymin,
'ymax': ymax,
'z': z,
'color': '.r'}
x, y, X, Y, Z = plot_3D.create_aquisicao(dicionario)
###Output
_____no_output_____
###Markdown
Step 2: Definition of the coordinates of the modeled prisms:
###Code
# coordinates of the prism vertices (corners), in meters:
x1,x2 = (-2000.0, 2000.0)
y1,y2 = (-3000.0, 3000.0)
z1,z2 = (1500.0,2000.0) # z is positive downward!
deltaz = 100.0
deltay = 4000.0
incl = 'positivo'
dic = {'n': 3,
'x': [x1, x2],
'y': [y1, y2],
'z': [z1, z2],
'deltay': deltay,
'deltaz': deltaz,
'incl': 'positivo'}
pointx, pointy, pointz = plot_3D.creat_point(dic)
print(pointx)
print(pointy)
print(pointz)
#%matplotlib notebook
dic1 = {'x': [pointx[0], pointx[1]],
'y': [pointy[0], pointy[1]],
'z': [pointz[0], pointz[1]]}
dic2 = {'x': [pointx[2], pointx[3]],
'y': [pointy[2], pointy[3]],
'z': [pointz[2], pointz[3]]}
dic3 = {'x': [pointx[4], pointx[5]],
'y': [pointy[4], pointy[5]],
'z': [pointz[4], pointz[5]]}
#----------------------------------------------------------------------------------------------------#
vert1 = plot_3D.vert_point(dic1)
vert2 = plot_3D.vert_point(dic2)
vert3 = plot_3D.vert_point(dic3)
#----------------------------------------------------------------------------------------------------#
color = 'b'
size = [9, 10]
view = [210,145]
#----------------------------------------------------------------------------------------------------#
prism_1 = plot_3D.plot_prism(vert1, color)
prism_2 = plot_3D.plot_prism(vert2, color)
prism_3 = plot_3D.plot_prism(vert3, color)
#----------------------------------------------------------------------------------------------------#
prisma = {'n': 3,
'prisma': [prism_1, prism_2,prism_3]}
plot_3D.plot_obs_3d(prisma, size, view, x, y, pointz)
###Output
_____no_output_____
###Markdown
Step 3: Simulation of the main geomagnetic field over the observation region:
###Code
I = -30.0 # inclination of the main field, in degrees
D = -23.0 # declination of the main field, in degrees
Fi = 40000.0 # intensity of the main field (nT)
# Main field varying with position, F(X, Y):
F = Fi + 0.013*X + 0.08*Y # nT
###Output
_____no_output_____
###Markdown
Step 4: Definition of the properties of the crustal sources (vertical prisms):
###Code
# Magnetic properties of the crustal source:
inc = I # purely induced magnetization
dec = -10.0
Mi = 10.0 # magnetization intensity, in A/m
Mi2 = 15.0
Mi3 = 7.0
fonte_crustal_mag1 = [pointx[0], pointx[1],
pointy[0], pointy[1],
pointz[0], pointz[1], Mi]
fonte_crustal_mag2 = [pointx[2], pointx[3],
pointy[2], pointy[3],
pointz[2], pointz[3], Mi2]
fonte_crustal_mag3 = [pointx[4], pointx[5],
pointy[4], pointy[5],
pointz[4], pointz[5], Mi3]
###Output
_____no_output_____
###Markdown
Step 5: Computation of the total-field anomalies via the prism_tf function
###Code
tfa1 = prism.prism_tf(X, Y,z, fonte_crustal_mag1, I, D, inc, dec)
tfa2 = prism.prism_tf(X, Y,z, fonte_crustal_mag2, I, D, inc, dec)
tfa3 = prism.prism_tf(X, Y,z, fonte_crustal_mag3, I, D, inc, dec)
tfa_final = tfa1 + tfa2 + tfa3
###Output
_____no_output_____
###Markdown
Step 6: Addition of noise via the noise_normal_dist function
###Code
mi = 0.0
sigma = 0.1
#ACTn = noise.noise_gaussiana(t, mi, sigma, ACT)
tfa_final1 = auxiliars.noise_normal_dist(tfa_final, mi, sigma)
%matplotlib inline
#xs = [x1, x1, x2, x2, x1]
#ys = [y1, y2, y2, y1, y1]
#xs1 = [pointx[0], pointx[0], pointx[5], pointx[5], pointx[0]]
#ys1 = [pointy[0], pointy[5], pointy[5], pointy[0], pointy[0]]
#flechax = [[numpy.absolute(pointx[0] + pointx[5])], [pointx[5]]]
#flechay = [[numpy.absolute(pointy[0] + pointy[5])], [pointy[5]]]
#origin = [[numpy.absolute(pointx[0] + pointx[5])], [[numpy.absolute(pointy[0] + pointy[5])]]]
#ponta = [[pointx[5]], [pointy[5]]]
#print(ponta)
# plots
plt.close('all')
plt.figure(figsize=(9,10))
#******************************************************
plt.contourf(Y, X, tfa_final1, 20, cmap = plt.cm.RdBu_r)
plt.title('Total Field Anomaly (nT)', fontsize = 20)
plt.xlabel('East (m)', fontsize = 20)
plt.ylabel('North (m)', fontsize = 20)
#corpo, = plt.plot(ys1,xs1,'k-*', label = 'Extensão do Dique')
#plt.plot(ys2,xs2,'k-')
#plt.plot(ys3,xs3,'m-')
#arrow = plt.arrow(2000.0, 0.0, 4500.0, 0.0, width=250, length_includes_head = True, color = 'k')
#first_legend = plt.legend(handles=[corpo], bbox_to_anchor=(1.25, 1), loc='upper left', borderaxespad=0.0, fontsize= 12.0)
#plt.legend([arrow, corpo], ['Direção de mergulho', 'Extensão do Dique'], bbox_to_anchor=(1.25, 1), loc='upper left', borderaxespad=0.0, fontsize= 12.0)
plt.colorbar()
#plt.savefig('prisma_anomalia.pdf', format='pdf')
plt.show()
xs1 = [pointx[0], pointx[0], pointx[1], pointx[1], pointx[0]]
#xs2 = [pointx[2], pointx[2], pointx[3], pointx[3], pointx[2]]
#xs3 = [pointx[4], pointx[4], pointx[5], pointx[5], pointx[4]]
ys1 = [pointy[0], pointy[1], pointy[1], pointy[0], pointy[0]]
#ys2 = [pointy[2], pointy[3], pointy[3], pointy[2], pointy[2]]
#ys3 = [pointy[4], pointy[5], pointy[5], pointy[4], pointy[4]]
# plots
plt.close('all')
plt.figure(figsize=(10,10))
#******************************************************
plt.contourf(Y, X, tfa_final, 20, cmap = plt.cm.RdBu_r)
plt.title('Total Field (nT)', fontsize = 12)
plt.xlabel('East (m)', fontsize = 10)
plt.ylabel('North (m)', fontsize = 10)
plt.plot(ys1,xs1,'g-')
#plt.plot(ys2,xs2,'k-')
#plt.plot(ys3,xs3,'m-')
plt.colorbar()
#plt.savefig('teste_100_40000_D10.png', format='png')
plt.show()
dici1 = {'nx': nx,
'ny': ny,
'X': X,
'Y': Y,
'Z': Z,
'ACTn': tfa_final
}
data_e_hora_atuais = datetime.now()
data_e_hora = data_e_hora_atuais.strftime('%d_%m_%Y_%H_%M')
dicionario = {'Modeling date': data_e_hora,
              'Modeling type': 'Prism modeling',
              'Number of bodies': 3,
              'Coordinates of prism 1 (x1, x2, y1, y2, z1, z2)': [pointx[0], pointx[1], pointy[0], pointy[1], pointz[0], pointz[1]],
              'Coordinates of prism 2 (x1, x2, y1, y2, z1, z2)': [pointx[2], pointx[3], pointy[2], pointy[3], pointz[2], pointz[3]],
              'Coordinates of prism 3 (x1, x2, y1, y2, z1, z2)': [pointx[4], pointx[5], pointy[4], pointy[5], pointz[4], pointz[5]],
              'Inclination': 'positivo',
              'Source information (Mag, Incl, Decl)': [Mi, inc, dec],
              'Regional information (Geomag. field, Incl, Decl)': [Fi, I, D]}
print(dicionario)
Data_f = salve_doc.reshape_matrix(dici1)
Data_f
#salve_doc.create_diretorio(dicionario, Data_f)
###Output
_____no_output_____ |
deep-learning/fastai-docs/fastai_docs-master/dev_nb/snapshot/001a_nn_basics.ipynb | ###Markdown
What is `torch.nn` *really*? A quick journey: from neural net "from scratch", to fully utilizing `torch.nn`, `torch.optim`, `Dataset`, and `DataLoader` *by Jeremy Howard, fast.ai. Thanks to Rachel Thomas and Francisco Ingham.*PyTorch provides the elegantly designed modules and classes `torch.nn`, `torch.optim`, `Dataset`, and `DataLoader` to help you create and train neural networks. In order to fully utilize their power and customize them for your problem, you need to really understand exactly what they're doing. To develop this understanding, we will first train basic neural net on the MNIST data set without using any features from these models; we will initially only use the most basic PyTorch tensor functionality. Then, we will incrementally add one feature from `torch.nn`, `torch.optim`, `Dataset`, or `DataLoader` at a time, showing exactly what each piece does, and how it works to make the code either more concise, or more flexible.**This tutorial assumes you already have PyTorch installed, and are familiar with the basics of tensor operations.** (If you're familiar with Numpy array operations, you'll find the PyTorch tensor operations used here nearly identical). MNIST data setup We will use the classic [MNIST](http://deeplearning.net/data/mnist/) dataset, which consists of black-and-white images of hand-drawn digits (between 0 and 9).We will use [pathlib](https://docs.python.org/3/library/pathlib.html) for dealing with paths (part of the Python 3 standard library), and will download the dataset using [requests](https://docs.python-requests.org/en/master/). We will only import modules when we use them, so you can see exactly what's being used at each point.
###Code
from pathlib import Path
import requests
DATA_PATH = Path('data')
PATH = DATA_PATH/'mnist'
PATH.mkdir(parents=True, exist_ok=True)
URL='http://deeplearning.net/data/mnist/'
FILENAME='mnist.pkl.gz'
if not (PATH/FILENAME).exists():
content = requests.get(URL+FILENAME).content
(PATH/FILENAME).open('wb').write(content)
###Output
_____no_output_____
###Markdown
This dataset is in numpy array format, and has been stored using pickle, a python-specific format for serializing data.
###Code
import pickle, gzip
with gzip.open(PATH/FILENAME, 'rb') as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
###Output
_____no_output_____
###Markdown
Each image is 28 x 28, and is being stored as a flattened row of length 784 (=28x28). Let's take a look at one; we need to reshape it to 2d first.
###Code
%matplotlib inline
from matplotlib import pyplot
import numpy as np
pyplot.imshow(x_train[0].reshape((28,28)), cmap="gray")
x_train.shape
###Output
_____no_output_____
###Markdown
PyTorch uses `torch.tensor`, rather than numpy arrays, so we need to convert our data.
###Code
import torch
x_train,y_train,x_valid,y_valid = map(torch.tensor, (x_train,y_train,x_valid,y_valid))
n,c = x_train.shape
x_train, x_train.shape, y_train.min(), y_train.max()
###Output
_____no_output_____
###Markdown
Neural net from scratch (no `torch.nn`) Let's first create a model using nothing but PyTorch tensor operations. We're assuming you're already familiar with the basics of neural networks. (If you're not, you can learn them at [course.fast.ai](http://course.fast.ai).) PyTorch provides methods to create random or zero-filled tensors, which we will use to create our weights and bias for a simple linear model. These are just regular tensors, with one very special addition: we tell PyTorch that they require a gradient. This causes PyTorch to record all of the operations done on the tensor, so that it can calculate the gradient during back-propagation *automatically*! For the weights, we set `requires_grad` **after** the initialization, since we don't want that step included in the gradient. (Note that a trailing `_` in PyTorch signifies that the operation is performed in-place.) *NB: We are initializing the weights here with [Xavier initialisation](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf) (by multiplying with 1/sqrt(n)).*
###Code
import math
weights = torch.randn(784,10)/math.sqrt(784)
weights.requires_grad_()
bias = torch.zeros(10, requires_grad=True)
###Output
_____no_output_____
###Markdown
Thanks to PyTorch's ability to calculate gradients automatically, we can use any standard Python function (or callable object) as a model! So let's just write a plain matrix multiplication and broadcasted addition to create a simple linear model. We also need an activation function, so we'll write `log_softmax` and use it. Remember: although PyTorch provides lots of pre-written loss functions, activation functions, and so forth, you can easily write your own using plain python. PyTorch will even create fast GPU or vectorized CPU code for your function automatically.
###Code
def log_softmax(x): return x - x.exp().sum(-1).log().unsqueeze(-1)
def model(xb): return log_softmax(xb @ weights + bias)
###Output
_____no_output_____
###Markdown
In the above, the '@' stands for the dot product operation. We will call our function on one batch of data (in this case, 64 images). This is one *forward pass*. Note that our predictions won't be any better than random at this stage, since we start with random weights.
###Code
bs=64 # batch size
xb = x_train[0:bs] # a mini-batch from x
preds = model(xb) # predictions
preds[0], preds.shape
###Output
_____no_output_____
###Markdown
As you see, the `preds` tensor contains not only the tensor values, but also a gradient function. We'll use this later to do backprop.Let's implement negative log-likelihood to use as the loss function (again, we can just use standard Python):
###Code
def nll(input, target): return -input[range(target.shape[0]), target].mean()
loss_func = nll
###Output
_____no_output_____
###Markdown
Let's check our loss with our random model, so we can see if we improve after a backprop pass later.
###Code
yb = y_train[0:bs]
loss_func(preds, yb)
###Output
_____no_output_____
###Markdown
We can now run a training loop. For each iteration, we will:- select a mini-batch of data (of size `bs`)- use the model to make predictions- calculate the loss- `loss.backward()` updates the gradients of the model, in this case, `weights` and `bias`.- We now use these gradients to update the weights and bias. We do this within the `torch.no_grad()` context manager, because we do not want these actions to be recorded for our next calculation of the gradient. You can read more about how PyTorch's Autograd records operations [here](https://pytorch.org/docs/stable/notes/autograd.html).- We then set the gradients to zero, so that we are ready for the next loop. Otherwise, our gradients would record a running tally of all the operations that had happened (i.e. `loss.backward()` *adds* the gradients to whatever is already stored, rather than replacing them). *Handy tip: you can use the standard python debugger to step through PyTorch code, allowing you to check the various variable values at each step. Uncomment `set_trace()` below to try it out.*
###Code
from IPython.core.debugger import set_trace
lr = 0.5 # learning rate
epochs = 2 # how many epochs to train for
for epoch in range(epochs):
for i in range((n-1)//bs + 1):
# set_trace()
start_i = i*bs
end_i = start_i+bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
weights -= weights.grad * lr
bias -= bias.grad * lr
weights.grad.zero_()
bias.grad.zero_()
###Output
_____no_output_____
###Markdown
That's it: we've created and trained a minimal neural network (in this case, a logistic regression, since we have no hidden layers) entirely from scratch!Let's check the loss and compare to what we got earlier. We expect that the loss will have decreased, and it has.
###Code
loss_func(model(xb), yb)
###Output
_____no_output_____
###Markdown
Using `torch.nn.functional` We will now refactor our code, so that it does the same thing as before, only we'll start taking advantage of PyTorch's `nn` classes to make it more concise and flexible. At each step from here, we should be making our code one or more of: shorter, more understandable, and/or more flexible.The first and easiest step is to make our code shorter by replacing our hand-written activation and loss functions with those from `torch.nn.functional` (which is generally imported into the namespace `F` by convention). This module contains all the functions in the `torch.nn` library (whereas other parts of the library contain classes). As well as a wide range of loss and activation functions, you'll also find here some convenient functions for creating neural nets, such as pooling functions. (There are also functions for doing convolutions, linear layers, etc, but as we'll see, these are usually better handled using other parts of the library.)If you're using negative log likelihood loss and log softmax activation, then Pytorch provides a single function `F.cross_entropy` that combines the two. So we can even remove the activation function from our model.
###Code
import torch.nn.functional as F
loss_func = F.cross_entropy
def model(xb): return xb @ weights + bias
###Output
_____no_output_____
###Markdown
Note that we no longer call `log_softmax` in the `model` function. Let's confirm that our loss is the same as before:
###Code
loss_func(model(xb), yb)
###Output
_____no_output_____
###Markdown
Refactor using nn.Module Next up, we'll use `nn.Module` and `nn.Parameter`, for a clearer and more concise training loop. We subclass `nn.Module` (which itself is a class and able to keep track of state). In this case, we want to create a class that holds our weights, bias, and method for the forward step. `nn.Module` has a number of attributes and methods (such as `.parameters()` and `.zero_grad()`) which we will be using.**NB**: `nn.Module` (uppercase M) is a PyTorch specific concept, and is a class we'll be using a lot. `nn.Module` is not to be confused with the Python concept of a (lowercase m) [module](https://docs.python.org/3/tutorial/modules.html), which is a file of Python code that can be imported.
###Code
from torch import nn
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.weights = nn.Parameter(torch.randn(784,10)/math.sqrt(784))
self.bias = nn.Parameter(torch.zeros(10))
def forward(self, xb): return xb @ self.weights + self.bias
###Output
_____no_output_____
###Markdown
Since we're now using an object instead of just using a function, we first have to instantiate our model:
###Code
model = Mnist_Logistic()
###Output
_____no_output_____
###Markdown
Now we can calculate the loss in the same way as before. Note that `nn.Module` objects are used as if they are functions (i.e they are *callable*), but behind the scenes Pytorch will call our `forward` method automatically.
###Code
loss_func(model(xb), yb)
###Output
_____no_output_____
###Markdown
Previously for our training loop we had to update the values for each parameter by name, and manually zero out the grads for each parameter separately, like this:```pythonwith torch.no_grad(): weights -= weights.grad * lr bias -= bias.grad * lr weights.grad.zero_() bias.grad.zero_()```Now we can take advantage of model.parameters() and model.zero_grad() (which are both defined by PyTorch for `nn.Module`) to make those steps more concise and less prone to the error of forgetting some of our parameters, particularly if we had a more complicated model:```pythonwith torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad()```We'll wrap our little training loop in a `fit` function so we can run it again later.
###Code
def fit():
for epoch in range(epochs):
for i in range((n-1)//bs + 1):
start_i = i*bs
end_i = start_i+bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
for p in model.parameters(): p -= p.grad * lr
model.zero_grad()
fit()
###Output
_____no_output_____
###Markdown
Let's double-check that our loss has gone down:
###Code
loss_func(model(xb), yb)
###Output
_____no_output_____
###Markdown
Refactor using nn.Linear We continue to refactor our code. Instead of manually defining and initializing `self.weights` and `self.bias`, and calculating `xb @ self.weights + self.bias`, we will instead use the Pytorch class [nn.Linear](https://pytorch.org/docs/stable/nn.htmllinear-layers) for a linear layer, which does all that for us. Pytorch has many types of predefined layers that can greatly simplify our code, and often makes it faster too.
###Code
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(784,10)
def forward(self, xb): return self.lin(xb)
###Output
_____no_output_____
###Markdown
We instantiate our model and calculate the loss in the same way as before:
###Code
model = Mnist_Logistic()
loss_func(model(xb), yb)
###Output
_____no_output_____
###Markdown
We are still able to use our same `fit` method as before.
###Code
fit()
loss_func(model(xb), yb)
###Output
_____no_output_____
###Markdown
Refactor using optim Pytorch also has a package with various optimization algorithms, `torch.optim`. We can use the `step` method from our optimizer to take a forward step, instead of manually updating each parameter.This will let us replace our previous manually coded optimization step:```pythonwith torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad()```and instead use just:```pythonopt.step()opt.zero_grad()```(`optim.zero_grad()` resets the gradient to 0 and we need to call it before computing the gradient for the next minibatch.)
###Code
from torch import optim
###Output
_____no_output_____
###Markdown
We'll define a little function to create our model and optimizer so we can reuse it in the future.
###Code
def get_model():
model = Mnist_Logistic()
return model, optim.SGD(model.parameters(), lr=lr)
model,opt = get_model()
loss_func(model(xb), yb)
for epoch in range(epochs):
for i in range((n-1)//bs + 1):
start_i = i*bs
end_i = start_i+bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
loss_func(model(xb), yb)
###Output
_____no_output_____
###Markdown
Refactor using Dataset PyTorch has an abstract Dataset class. A Dataset can be anything that has a `__len__` function (called by Python's standard `len` function) and a `__getitem__` function as a way of indexing into it. [This tutorial](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html) walks through a nice example of creating a custom FacialLandmarkDataset class as a subclass of Dataset. PyTorch's [TensorDataset](https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.htmlTensorDataset) is a Dataset wrapping tensors. By defining a length and way of indexing, this also gives us a way to iterate, index, and slice along the first dimension of a tensor. This will make it easier to access both the independent and dependent variables in the same line as we train.
###Code
from torch.utils.data import TensorDataset
###Output
_____no_output_____
###Markdown
Both `x_train` and `y_train` can be combined in a single TensorDataset, which will be easier to iterate over and slice.
###Code
train_ds = TensorDataset(x_train, y_train)
###Output
_____no_output_____
###Markdown
Previously, we had to iterate through minibatches of x and y values separately:```python xb = x_train[start_i:end_i] yb = y_train[start_i:end_i]```Now, we can do these two steps together:```python xb,yb = train_ds[i*bs : i*bs+bs]```
###Code
model,opt = get_model()
for epoch in range(epochs):
for i in range((n-1)//bs + 1):
xb,yb = train_ds[i*bs : i*bs+bs]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
loss_func(model(xb), yb)
###Output
_____no_output_____
###Markdown
Refactor using DataLoader Pytorch's `DataLoader` is responsible for managing batches. You can create a `DataLoader` from any `Dataset`. `DataLoader` makes it easier to iterate over batches. Rather than having to use `train_ds[i*bs : i*bs+bs]`, the DataLoader gives us each minibatch automatically.
###Code
from torch.utils.data import DataLoader
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs)
###Output
_____no_output_____
###Markdown
Previously, our loop iterated over batches (xb, yb) like this:```pythonfor i in range((n-1)//bs + 1): xb,yb = train_ds[i*bs : i*bs+bs] pred = model(xb) ...```Now, our loop is much cleaner, as (xb, yb) are loaded automatically from the data loader:```pythonfor xb,yb in train_dl: pred = model(xb) ...```
###Code
model,opt = get_model()
for epoch in range(epochs):
for xb,yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
loss_func(model(xb), yb)
###Output
_____no_output_____
###Markdown
Thanks to Pytorch's `nn.Module`, `nn.Parameter`, `Dataset`, and `DataLoader`, our training loop is now dramatically smaller and easier to understand. Let's now try to add the basic features necessary to create effective models in practice. Add validation First try In section 1, we were just trying to get a reasonable training loop set up for use on our training data. In reality, you **always** should also have a [validation set](http://www.fast.ai/2017/11/13/validation-sets/), in order to identify if you are overfitting. Shuffling the training data is [important](https://www.quora.com/Does-the-order-of-training-data-matter-when-training-neural-networks) to prevent correlation between batches and overfitting. On the other hand, the validation loss will be identical whether we shuffle the validation set or not. Since shuffling takes extra time, it makes no sense to shuffle the validation data. We'll use a batch size for the validation set that is twice as large as that for the training set. This is because the validation set does not need backpropagation and thus takes less memory (it doesn't need to store the gradients). We take advantage of this to use a larger batch size and compute the loss more quickly.
###Code
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)
valid_ds = TensorDataset(x_valid, y_valid)
valid_dl = DataLoader(valid_ds, batch_size=bs*2)
###Output
_____no_output_____
###Markdown
We will calculate and print the validation loss at the end of each epoch.(Note that we always call `model.train()` before training, and `model.eval()` before inference, because these are used by layers such as `nn.BatchNorm2d` and `nn.Dropout` to ensure appropriate behaviour for these different phases.)
###Code
model,opt = get_model()
for epoch in range(epochs):
model.train()
for xb,yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
model.eval()
with torch.no_grad():
valid_loss = sum(loss_func(model(xb), yb)
for xb,yb in valid_dl)
print(epoch, valid_loss/len(valid_dl))
###Output
0 tensor(0.2969)
1 tensor(0.3138)
###Markdown
Create fit() and get_data() We'll now do a little refactoring of our own. Since we go through a similar process twice of calculating the loss for both the training set and the validation set, let's make that into its own function, "`loss_batch`", which computes the loss for one batch.We pass an optimizer in for the training set, and use it to perform backprop. For the validation set, we don't pass an optimizer, so the method doesn't perform backprop.
###Code
def loss_batch(model, loss_func, xb, yb, opt=None):
loss = loss_func(model(xb), yb)
if opt is not None:
loss.backward()
opt.step()
opt.zero_grad()
return loss.item(), len(xb)
###Output
_____no_output_____
###Markdown
`fit` runs the necessary operations to train our model and compute the training and validation losses for each epoch.
###Code
import numpy as np
def fit(epochs, model, loss_func, opt, train_dl, valid_dl):
for epoch in range(epochs):
model.train()
for xb,yb in train_dl: loss_batch(model, loss_func, xb, yb, opt)
model.eval()
with torch.no_grad():
losses,nums = zip(*[loss_batch(model, loss_func, xb, yb)
for xb,yb in valid_dl])
val_loss = np.sum(np.multiply(losses,nums)) / np.sum(nums)
print(epoch, val_loss)
###Output
_____no_output_____
###Markdown
`get_data` returns dataloaders for the training and validation sets.
###Code
def get_data(train_ds, valid_ds, bs):
return (DataLoader(train_ds, batch_size=bs, shuffle=True),
DataLoader(valid_ds, batch_size=bs*2))
###Output
_____no_output_____
###Markdown
Now, our whole process of obtaining the data loaders and fitting the model can be run in 3 lines of code:
###Code
train_dl,valid_dl = get_data(train_ds, valid_ds, bs)
model,opt = get_model()
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
0 0.36996033158302305
1 0.34184285497665406
###Markdown
You can use these basic 3 lines of code to train a wide variety of models. Let's see if we can use them to train a convolutional neural network (CNN)! Switch to CNN First try We are now going to build our neural network with three convolutional layers. Because none of the functions in the previous section assume anything about the model form, we'll be able to use them to train a CNN without any modification.We will use Pytorch's predefined [Conv2d](https://pytorch.org/docs/stable/nn.htmltorch.nn.Conv2d) class as our convolutional layer. We define a CNN with 3 convolutional layers. Each convolution is followed by a ReLU. At the end, we perform an average pooling.
###Code
class Mnist_CNN(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)
self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1)
def forward(self, xb):
xb = xb.view(-1,1,28,28)
xb = F.relu(self.conv1(xb))
xb = F.relu(self.conv2(xb))
xb = F.relu(self.conv3(xb))
xb = F.avg_pool2d(xb, 4)
return xb.view(-1,xb.size(1))
lr=0.1
###Output
_____no_output_____
###Markdown
[Momentum](http://cs231n.github.io/neural-networks-3/sgd) is a variation on stochastic gradient descent that takes previous updates into account as well and generally leads to faster training.
###Code
model = Mnist_CNN()
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
xb, yb = next(iter(valid_dl))
loss_func(model(xb), yb)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
0 0.3650521045684815
1 0.28984554643630983
###Markdown
nn.Sequential `torch.nn` has another handy class we can use to simplify our code: [`Sequential`](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential). A `Sequential` object runs each of the modules contained within it, in a sequential manner. This is a simpler way of writing our neural network. To take advantage of this, we need to be able to easily define a **custom layer** from a given function. For instance, PyTorch doesn't have a `view` layer (`view` is PyTorch's version of numpy's `reshape`) and we need to create one for our network. `Lambda` will create a layer that we can then use when defining a network with `Sequential`.
###Code
class Lambda(nn.Module):
def __init__(self, func):
super().__init__()
self.func=func
def forward(self, x): return self.func(x)
def preprocess(x): return x.view(-1,1,28,28)
###Output
_____no_output_____
###Markdown
The model created with `Sequential` is simply:
###Code
model = nn.Sequential(
Lambda(preprocess),
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AvgPool2d(4),
Lambda(lambda x: x.view(x.size(0),-1))
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
xb, yb = next(iter(valid_dl))
loss_func(model(xb), yb)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
0 0.3841403673171997
1 0.26314052562713625
###Markdown
Wrapping `DataLoader` Our CNN is fairly concise, but it only works with MNIST, because:- It assumes the input is a 28\*28 long vector- It assumes that the final CNN grid size is 4\*4 (since that's the average pooling kernel size we used) Let's get rid of these two assumptions, so our model works with any 2d single channel image. First, we can remove the initial Lambda layer by moving the data preprocessing into a generator:
###Code
def preprocess(x,y): return x.view(-1,1,28,28),y
class WrappedDataLoader():
def __init__(self, dl, func):
self.dl = dl
self.func = func
def __len__(self): return len(self.dl)
def __iter__(self):
batches = iter(self.dl)
for b in batches: yield(self.func(*b))
train_dl,valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
###Output
_____no_output_____
###Markdown
Next, we can replace `nn.AvgPool2d` with `nn.AdaptiveAvgPool2d`, which allows us to define the size of the *output* tensor we want, rather than the *input* tensor we have. As a result, our model will work with any size input.
###Code
model = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Lambda(lambda x: x.view(x.size(0),-1))
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
###Output
_____no_output_____
###Markdown
Let's try it out:
###Code
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
0 0.3101793124198914
1 0.24328768825531005
###Markdown
Using your GPU If you're lucky enough to have access to a CUDA-capable GPU (you can rent one for about $0.50/hour from most cloud providers) you can use it to speed up your code. First check that your GPU is working in Pytorch:
###Code
torch.cuda.is_available()
###Output
_____no_output_____
###Markdown
And then create a device object for it:
###Code
dev = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
###Output
_____no_output_____
###Markdown
Let's update `preprocess` to move batches to the GPU:
###Code
def preprocess(x,y): return x.view(-1,1,28,28).to(dev),y.to(dev)
train_dl,valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
###Output
_____no_output_____
###Markdown
Finally, we can move our model to the GPU.
###Code
model.to(dev);
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
###Output
_____no_output_____
###Markdown
You should find it runs faster now:
###Code
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
0 0.2065797366142273
1 0.16616964597702027
|
Apriori e sue varianti.ipynb | ###Markdown
Apriori algorithm and its variants A machine learning library that includes Apriori is required. Specifically, we use `mlxtend`, which we install with `pip`.
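If `mlxtend` is not already available in the environment, it can be installed from the notebook itself (a minimal sketch, assuming `pip` is on the path):
###Code
# Install the library providing apriori, fpgrowth, fpmax and association_rules
!pip install mlxtend
###Output
_____no_output_____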
###Code
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, fpmax, fpgrowth, association_rules
# the dataset is a list of transactions, each expressed in turn as a list
dataset = [['Milk', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
['Dill', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
['Milk', 'Apple', 'Kidney Beans', 'Eggs'],
['Milk', 'Unicorn', 'Corn', 'Kidney Beans', 'Yogurt'],
['Corn', 'Onion', 'Onion', 'Kidney Beans', 'Ice cream', 'Eggs']]
# The TransactionEncoder class analyzes the data, which can be in any
# iterable form, and generates an intermediate structure from which
# the dataframe is obtained
te = TransactionEncoder()
te_ary = te.fit(dataset).transform(dataset)
df = pd.DataFrame(te_ary, columns=te.columns_)
df
# Generating the vertical database as tid-lists of the individual items is immediate
df.transpose()
# Apply Apriori (or one of the variants below) with a minimum support of 0.6
#frequent_itemsets = apriori(df, min_support=0.6, use_colnames=True)
frequent_itemsets = fpgrowth(df, min_support=0.6, use_colnames=True)
#frequent_itemsets = fpmax(df, min_support=0.6, use_colnames=True)
frequent_itemsets.sort_values('support',ascending=False)
# Compare the performance of apriori and fpgrowth -- Frequent Pattern Tree Growth
%timeit -n 100 -r 10 apriori(df, min_support=0.6)
%timeit -n 100 -r 10 fpgrowth(df, min_support=0.6)
# compute the rules with minimum support 0.6 and confidence 0.8
association_rules(frequent_itemsets, metric="confidence", min_threshold=0.8)
# The following alternative metrics are reported
# - support(A->C) = support(A+C), range: [0, 1]
# - confidence(A->C) = support(A+C) / support(A), range: [0, 1]
# - lift(A->C) = confidence(A->C) / support(C), range: [0, inf]
# - leverage(A->C) = support(A->C) - support(A)*support(C), range: [-1, 1]
# - conviction = [1 - support(C)] / [1 - confidence(A->C)], range: [0, inf]
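# For example (sketch): recompute the rules ranked by lift, keeping only those with lift above 1.2
rules = association_rules(frequent_itemsets, metric="lift", min_threshold=1.2)
rules.sort_values('lift', ascending=False).head()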
###Output
_____no_output_____ |
bin/MODIS2DL.ipynb | ###Markdown
Ingest MODIS Land Cover Data This notebook will ingest MODIS land cover data onto the DL platform. The MODIS land cover data product is released yearly at a maximum resolution of 500m. The product features five different land cover classification bands. They are quite similar - we'll use the first one, the _Annual International Geosphere-Biosphere Programme (IGBP) classification_. The data are available from a number of US government data services, see https://lpdaac.usgs.gov/products/mcd12q1v006/. The land cover data is available in tiles that follow the MODIS Sinusoidal Grid, a special projection system for MODIS products, see Figure. We'll need to use GDAL to convert the hdf tiles to GeoTiffs. The tiles will be downloaded from NASA's Earthdata, for which a registered account is required. A free account can be created [here](https://urs.earthdata.nasa.gov/home). User credentials should then be stored as a dict in json: `{username:, password:}`.
###Code
import logging, os, sys, json, requests, glob, pickle
from requests.auth import HTTPBasicAuth
import descarteslabs as dl
from descarteslabs.catalog import Product
from descarteslabs.catalog import Image as dl_Image
from descarteslabs.catalog import ClassBand, DataType, Resolution, ResolutionUnit
from bs4 import BeautifulSoup
###Output
_____no_output_____
###Markdown
Approach**Fetch the Data**- Create and store login credentials- For each year of the land cover product: - Parse the website and extract the hdf files - Retrieve the hdf files **Push to DL**- Create the DL product and land cover band- Convert the hdf files to GeoTiff- Upload the GeoTiffs to the DL product
###Code
params = {}
params['modis_path'] = '/home/jovyan/solar-pv-global-inventory/data/MODIS' # path to the geodatabase
params['credentials_path'] = os.path.join(params['modis_path'], 'earthdata.cred')
params['product_params'] = {'_id':'modis-land-cover',
'name':'MODIS land cover product for uploaded MODIS land cover tiles'}
params['year'] = '2014'
params['band_params'] = {'name':'IGBP_class',
'data_range':(0,255),
'display_range':(0,20),
'resolution':500,
'index':0}
###Output
_____no_output_____
###Markdown
Download the Data
###Code
credentials = json.load(open(params['credentials_path'],'r'))
def get_url_paths(url, ext='', params={}):
response = requests.get(url, params=params)
if response.ok:
response_text = response.text
else:
return response.raise_for_status()
soup = BeautifulSoup(response_text, 'html.parser')
parent = [url + node.get('href') for node in soup.find_all('a') if node.get('href').endswith(ext)]
return parent
url = 'https://e4ftl01.cr.usgs.gov/MOTA/MCD12Q1.006/'+params['year']+'.01.01/'
ext = 'hdf'
hdf_urls = get_url_paths(url, ext)
print (len(hdf_urls), hdf_urls[0])
with open(os.path.join(params['modis_path'], 'list.txt'),'w') as f:
f.write('\n'.join(hdf_urls))
!wget --user={credentials["username"]} --password={credentials["password"]} -i {os.path.join(params['modis_path'],'list.txt')} -P {params['modis_path']+'/tmp'} -q
###Output
_____no_output_____
###Markdown
Get Class Labels
###Code
class_labels = {
1 : 'Evergreen Needleleaf Forests: dominated by evergreen conifer trees (canopy >2m). Tree cover >60%.',
2 : 'Evergreen Broadleaf Forests: dominated by evergreen broadleaf and palmate trees (canopy >2m). Tree cover >60%.',
3 : 'Deciduous Needleleaf Forests: dominated by deciduous needleleaf (larch) trees (canopy >2m). Tree cover >60%.',
4 : 'Deciduous Broadleaf Forests: dominated by deciduous broadleaf trees (canopy >2m). Tree cover >60%.',
5 : 'Mixed Forests: dominated by neither deciduous nor evergreen (40-60% of each) tree type (canopy >2m). Tree cover >60%.',
6 : 'Closed Shrublands: dominated by woody perennials (1-2m height) >60% cover.',
7 : 'Open Shrublands: dominated by woody perennials (1-2m height) 10-60% cover.',
8 : 'Woody Savannas: tree cover 30-60% (canopy >2m).',
9 : 'Savannas: tree cover 10-30% (canopy >2m).',
10: 'Grasslands: dominated by herbaceous annuals (<2m).',
11: 'Permanent Wetlands: permanently inundated lands with 30-60% water cover and >10% vegetated cover.',
12: 'Croplands: at least 60% of area is cultivated cropland.',
13: 'Urban and Built-up Lands: at least 30% impervious surface area including building materials, asphalt and vehicles.',
14: 'Cropland/Natural Vegetation Mosaics: mosaics of small-scale cultivation 40-60% with natural tree, shrub, or herbaceous vegetation.',
15: 'Permanent Snow and Ice: at least 60% of area is covered by snow and ice for at least 10 months of the year.',
16: 'Barren: at least 60% of area is non-vegetated barren (sand, rock, soil) areas with less than 10% vegetation.',
17: 'Water Bodies: at least 60% of area is covered by permanent water bodies.',
}
pickle.dump(class_labels, open('./class_labels_MODIS.pkl','wb'))
class_labels = [': '.join([str(kk),vv.split(':')[0]]) for kk,vv in class_labels.items()]
class_labels
###Output
_____no_output_____
###Markdown
Convert MODIS to GeoTiff
###Code
hdf_files = glob.glob(params['modis_path']+'/tmp'+'/*.hdf')
print (len(hdf_files), hdf_files[0])
import subprocess  # needed for the gdal_translate calls below
for f in hdf_files:
    # equivalent CLI call:
    # gdal_translate HDF4_EOS:EOS_GRID:"MCD12Q1.A2018001.h22v04.006.2019200003144.hdf":MCD12Q1:LC_Type1 test.tif
    tifname = f.split('/')[-1].split('.')[2]+'.tif'  # tile id, e.g. 'h22v04.tif'
    subprocess.call(['gdal_translate',
                     'HDF4_EOS:EOS_GRID:"{}":MCD12Q1:LC_Type1'.format(f),
                     os.path.join(params['modis_path'], params['year'], tifname)])
###Output
_____no_output_____
###Markdown
Prep DL Product and Bands
###Code
product = Product.get('oxford-university:modis-land-cover')#(params['product_params']['_id'])
if not product:
product = Product(id=params['product_params']['_id'],
name=params['product_params']['name'])
product.save()
bands = [bb for bb in product.bands().limit(2)]
if not bands:
band = ClassBand(name=params['band_params']['name'], product=product)
band.data_type = DataType.BYTE
band.data_range = params['band_params']['data_range']
band.display_range = params['band_params']['display_range']
band.resolution = Resolution(unit=ResolutionUnit.METERS, value=params['band_params']['resolution'])
band.band_index = params['band_params']['index']
band.class_labels = class_labels
band.save()
### delete the product if it needs to be remade
# status = product.delete_related_objects()
# status
# product.delete()
### add readers
# product.readers = ["email:[email protected]", "email:[email protected]", "email:[email protected]"]
# product.save()
###Output
_____no_output_____
###Markdown
Upload Images
###Code
image_files = glob.glob(os.path.join(params['modis_path'],params['year'],'*.tif'))
print (len(image_files), image_files[0])
uploads = []
for f in image_files:
image = dl_Image(product=product, name=params['year']+'.'+f.split('/')[-1])
image.acquired = params['year']+"-01-01"
image_path = f
uploads.append(image.upload(image_path))
for u in uploads:
print (u.status)
###Output
_____no_output_____ |
Jupyter_Notebooks/Interfaces/.ipynb_checkpoints/w2v_Dashboard-checkpoint.ipynb | ###Markdown
MHS-Word2Vec Dashboard
###Code
import re, json, warnings, gensim
import pandas as pd
import numpy as np
# Primary visualizations
import matplotlib.pyplot as plt
import matplotlib
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
import seaborn as sns
# PCA visualization
from scipy.spatial.distance import cosine
from sklearn.metrics import pairwise
from sklearn.manifold import MDS, TSNE
from mpl_toolkits.mplot3d import Axes3D
from sklearn.decomposition import PCA
# Import (Jupyter) Dash -- App Functionality
import dash
from dash.dependencies import Input, Output, State
import dash_table
import dash_core_components as dcc
import dash_html_components as html
from jupyter_dash import JupyterDash
import io, base64  # used to hand matplotlib figures to the app as base64-encoded PNGs
# Ignore simple warnings.
warnings.simplefilter('ignore', DeprecationWarning)
# Declare directory location to shorten filepaths later.
abs_dir = "/Users/quinn.wi/Documents/"
# Load model.
model = gensim.models.KeyedVectors.load_word2vec_format(abs_dir + 'Data/Output/WordVectors/jqa_w2v.txt')
###Output
_____no_output_____
###Markdown
Functions
###Code
%%time
# https://www.kaggle.com/pierremegret/gensim-word2vec-tutorial
def tsnescatterplot(model, word, list_names):
"""
Plot in seaborn the results from the t-SNE dimensionality reduction algorithm of the vectors of a query word,
its list of most similar words, and a list of words.
"""
arrays = np.empty((0, 100), dtype='f') # 100 == vector size when model was created.
word_labels = [word]
color_list = ['red']
# adds the vector of the query word
arrays = np.append(arrays, model.__getitem__([word]), axis=0)
# gets list of most similar words
close_words = model.most_similar([word])
# adds the vector for each of the closest words to the array
for wrd_score in close_words:
wrd_vector = model.__getitem__([wrd_score[0]])
word_labels.append(wrd_score[0])
color_list.append('blue')
arrays = np.append(arrays, wrd_vector, axis=0)
# adds the vector for each of the words from list_names to the array
for wrd in list_names:
wrd_vector = model.__getitem__([wrd[0]])
word_labels.append(wrd[0])
color_list.append('green')
arrays = np.append(arrays, wrd_vector, axis=0)
# Reduce the dimensionality with PCA before t-SNE; n_components cannot exceed the number of collected word vectors.
reduc = PCA(n_components=min(41, arrays.shape[0])).fit_transform(arrays)
# Finds t-SNE coordinates for 2 dimensions
np.set_printoptions(suppress=True)
Y = TSNE(n_components=2, random_state=0, perplexity=15).fit_transform(reduc)
# Sets everything up to plot
df = pd.DataFrame({'x': [x for x in Y[:, 0]],
'y': [y for y in Y[:, 1]],
'words': word_labels,
'color': color_list})
fig, _ = plt.subplots()
# fig.set_size_inches(9, 9)
# Basic plot
p1 = sns.regplot(data=df,
x="x",
y="y",
fit_reg=False,
marker="o",
scatter_kws={'s': 40,
'facecolors': df['color']
}
)
plt.xticks([])
plt.yticks([])
plt.xlabel("")
plt.ylabel("")
# add annotations one by one with a loop
for line in range(0, df.shape[0]):
p1.text(df['x'][line],
df['y'][line],
' ' + df['words'][line].title(),
horizontalalignment = 'center',
verticalalignment = 'bottom',
size = 'small',
color = 'gray',
weight = 'normal')
plt.xlim(Y[:, 0].min()-50, Y[:, 0].max()+50)
plt.ylim(Y[:, 1].min()-50, Y[:, 1].max()+50)
plt.title('t-SNE visualization for {}'.format(word.title()))
return fig
###Output
CPU times: user 5 µs, sys: 0 ns, total: 5 µs
Wall time: 7.87 µs
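###Markdown
Quick standalone check of the plotting helper before wiring it into the app (a sketch; 'work' is just an example query word assumed to be in the model's vocabulary):
###Code
# Plot the query word (red), its nearest neighbours (blue) and the top-25 neighbour list passed in (green)
tsnescatterplot(model, 'work', model.most_similar(['work'], topn=25))
###Output
_____no_output_____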
###Markdown
App
###Code
%%time
# App configurations
app = JupyterDash(__name__)
app.config.suppress_callback_exceptions = True
# Plot configurations.
sns.set_style("whitegrid", {'axes.grid' : False})
font = {'family' : 'serif',
'weight' : 'normal',
'size' : 18}
matplotlib.rc('font', **font)
palette = sns.color_palette("Set1", 4)
plt.figure(figsize=(25, 12))
# Layout.
app.layout = html.Div(
className = 'wrapper',
children = [
# app-header
html.Header(
className="app-header",
children = [
html.Div('Word2Vec Dashboard', className = "app-header--title")
]),
# content-wrapper
html.Div(
className = 'content-wrapper',
children = [
dcc.Input(id = 'text', type = 'text', placeholder = 'work'),
# dcc.Slider(id = 'slider', min = 5, max = 35, step = 1, value = 20),
html.Img(id = 'text-plot')  # filled by the callback with a base64-encoded PNG of the matplotlib figure
])
])
###########################
######### Callbacks #######
###########################
@app.callback(
    Output('text-plot', 'src'),
    Input('text', 'value'),
    # State('slider', 'value')
)
def update_textPlot(text_value):
    if not text_value:
        text_value = 'work'  # fall back to the placeholder word before the user types anything
    # Assumes the query word is in the model's vocabulary
    fig = tsnescatterplot(model, text_value, model.most_similar([text_value], topn = 25))
    # Encode the matplotlib figure as a base64 PNG data URI for the html.Img component
    buf = io.BytesIO()
    fig.savefig(buf, format='png', bbox_inches='tight')
    buf.seek(0)
    return 'data:image/png;base64,' + base64.b64encode(buf.read()).decode()
if __name__ == "__main__":
# app.run_server(mode = 'inline', debug = True) # mode = 'inline' for JupyterDash
app.run_server(debug = True)
###Output
Dash app running on http://127.0.0.1:8050/
CPU times: user 60 ms, sys: 56.1 ms, total: 116 ms
Wall time: 328 ms
|
ML_Models_BA.ipynb | ###Markdown
Machine Learning to Predict Brittleness from other Geophysical Logs Data: 4 wells from the Appalachian Basin
###Code
import os
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from sklearn import metrics
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.ensemble import GradientBoostingRegressor as gbR, GradientBoostingClassifier as gbC, IsolationForest
from sklearn.svm import SVC, SVR
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.feature_selection import mutual_info_regression
pd.set_option('display.max_columns', None) #to display all the column information
pd.options.display.max_seq_items = 2000
###Output
_____no_output_____
###Markdown
Load data
###Code
file_directory = r"../Thesis work/Thesis work/Well_Data_CSV_Merged" #for macbook google drive
file_name1 = "Poseidon.csv"
file_name2 = "Boggess.csv"
file_name3 = "Mip3h.csv"
file_name4 = "Whipkey.csv"
file_name = [file_name1, file_name2, file_name3, file_name4]
data = []
for i in file_name:
file_path = os.path.join(file_directory,i)
df = pd.read_csv(file_path)
data.append(df)
data_poseidon = data[0]
data_boggess = data[1]
data_mip3h = data[2]
data_whipkey = data[3]
# ## Marcellus Shale interval
# data_poseidon = data_poseidon.loc[(data_poseidon['DEPT'] > 7880) & (data_poseidon['DEPT'] < 8040)]
# data_boggess = data_boggess.loc[(data_boggess['DEPT'] > 7880) & (data_boggess['DEPT'] < 7970)]
# data_mip3h = data_mip3h.loc[(data_mip3h['DEPT'] > 7450) & (data_mip3h['DEPT'] < 7560)]
# data_whipkey = data_whipkey.loc[(data_whipkey['DEPT'] > 7730) & (data_whipkey['DEPT'] < 7840)]
print("The Poseidon data has {} rows".format(data_poseidon.shape[0]))
print("The Boggess data has {} rows".format(data_boggess.shape[0]))
print("The Mip3h data has {} rows".format(data_mip3h.shape[0]))
print("The Whipkey data has {} rows".format(data_whipkey.shape[0]))
###Output
_____no_output_____
###Markdown
Input and Output of the Model Data for Regression task
###Code
features = ['DEPT', 'GR', 'NPHI','RHOZ', 'HCAL', 'DTCO','PEFZ','Brittleness_new'] #list of the features names to select
# features = ['DEPT', 'GR','RHOZ', 'HCAL', 'NPHI','DTCO', 'Brittleness_new'] #list of the features names to select
target = 'Brittleness_new' #name of the output feature
data = pd.concat([data_whipkey,
data_boggess,
data_poseidon], ignore_index=True)
data = data.loc[: ,features]
fig, ax = plt.subplots(1, 2, figsize = (15,6))
m = ax[0].scatter(data_poseidon.PR_DYN, data_poseidon.YME_DYN, c = data_poseidon.Brittleness)
ax[0].set_xlabel("Poisson's ratio", fontsize =15)
ax[0].set_ylabel("Young's modulus", fontsize =15)
ax[0].axhline(y=6, color='r', linestyle='--')
ax[0].axvline(x=0.2, color='r', linestyle='--')
ax[0].text(0.23, 0.6, 'Brittle Region',fontsize=15, horizontalalignment='center', verticalalignment='center', transform=ax[0].transAxes, c='r')
ax[0].text(0.75, 0.08, 'Ductile Region',fontsize=15, horizontalalignment='center', verticalalignment='center', transform=ax[0].transAxes, c='r')
l = ax[1].scatter(data_poseidon.PR_DYN, data_poseidon.YME_DYN, c = data_poseidon.Brittleness_new)
ax[1].set_xlabel("Poisson's ratio", fontsize =15)
ax[1].set_ylabel("Young's modulus", fontsize =15)
ax[1].axhline(y=6, color='r', linestyle='--')
ax[1].axvline(x=0.2, color='r', linestyle='--')
ax[1].text(0.23, 0.6, 'Brittle Region',fontsize=15, horizontalalignment='center', verticalalignment='center', transform=ax[1].transAxes, c='r')
ax[1].text(0.75, 0.08, 'Ductile Region',fontsize=15, horizontalalignment='center', verticalalignment='center', transform=ax[1].transAxes, c='r')
fig.colorbar(l, ax = ax[1])
fig.colorbar(m, ax = ax[0])
# fig.savefig(r'./Images/{}.png'.format('YME-PR plot'), dpi=300)
data[data < 0] = np.nan #remove negative values
data.dropna(inplace = True)
data.shape
data.describe()
#add correlation plot
data.corr(method = 'spearman')
def StatRelat(data, target):
# Mutual information and Pearson's correlation for measuring the dependency between the variables.
"""
function to estimate the mutual information and Pearson's correlation
for measuring the dependency between the variables.
Parameters
----------
data : DataFrame
The data
target: Str
The column name of the target feature
Returns
-------
A histogram of mutual information and heatmap of correlation between features
"""
df2 = data.copy().dropna()
X = df2.drop(['DEPT',target], axis=1)._get_numeric_data() # separate DataFrames for predictor and response features
y = df2.loc[:,[target]]._get_numeric_data()
mi = mutual_info_regression(X,np.ravel(y), random_state=20) # calculate mutual information
mi /= np.max(mi) # calculate relative mutual information
indices = np.argsort(mi)[::-1] # find indicies for descending order
print("Feature ranking:") # write out the feature importances
for f in range(X.shape[1]):
print("%d. feature %s = %f" % (f + 1, X.columns[indices][f], mi[indices[f]]))
fig, ax = plt.subplots(nrows=1,ncols=2,figsize=(15, 7))
# fig.subplots_adjust(left=0.0, bottom=0.0, right=1., top=1., wspace=0.2, hspace=0.2)
ax[0].bar(range(X.shape[1]), mi[indices],color="g", align="center")
ax[0].set_title("Mutual Information")
ax[0].set_xticks(range(X.shape[1]))
ax[0].set_xticklabels(X.columns[indices],rotation=90)
ax[0].set_xlim([-1, X.shape[1]])
cmap = sns.diverging_palette(250, 10, as_cmap=True)
mask = np.zeros_like(df2.drop(['DEPT'], axis=1).corr())
mask[np.triu_indices_from(mask)] = True
with sns.axes_style("white"):
sns.heatmap(df2.drop(['DEPT'], axis=1).corr(), mask=mask,cmap=cmap, vmax=.3, ax=ax[1], square=True, annot = True)
ax[1].set_yticklabels(ax[1].get_yticklabels(), rotation=45)
fig.savefig(r'./Images/{}.png'.format('feature_selection'), dpi=300)
StatRelat(data, target)
data_summary = data.drop(['DEPT'], axis=1).describe().T.round(2)
# data_summary.to_excel(r'./Images/{}.xlsx'.format('data_summary_before_stand'))
data_summary
#range
data_summary['max'] - data_summary['min']
#standard deviation
data.std()
scaler = MinMaxScaler()
data_norm = pd.DataFrame(scaler.fit_transform(data.drop(['DEPT'], axis=1)), columns = data.drop(['DEPT'], axis=1).columns)
data_norm_summary = data_norm.describe().T.round(2)
data_norm_summary
# data_norm_summary.to_excel(r'./Images/{}.xlsx'.format('data_summary_after_minmax'))
scaler = StandardScaler()
data_norm = pd.DataFrame(scaler.fit_transform(data.drop(['DEPT'], axis=1)), columns = data.drop(['DEPT'], axis=1).columns)
data_norm_summary = data_norm.describe().T.round(2)
data_norm_summary
# data_norm_summary.to_excel(r'./Images/{}.xlsx'.format('data_summary_after_standard'))
X = data.drop(['DEPT','RHOZ',target], axis=1)
y = data.loc[:,[target]]
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.2, random_state = 1)
X_train.shape
def box_plot(X_train, save_file_name):
fig, ax = plt.subplots(1,len(X_train.columns), figsize = (15,8))
for i, feature in enumerate(X_train.columns):
ax[i].boxplot(X_train[feature])
ax[i].set_ylabel(feature, fontsize = 20)
right_side = ax[i].spines["right"]
top_side = ax[i].spines["top"]
bottom_side = ax[i].spines["bottom"]
right_side.set_visible(False)
top_side.set_visible(False)
bottom_side.set_visible(False)
ax[i].axes.get_xaxis().set_visible(False)
# fig.savefig(r'./Images/{}.png'.format(save_file_name), dpi=300)
box_plot(X_train, "before_outlier_removal")
# identify outliers in the training dataset
iso = IsolationForest(contamination=0.1)
yhat = iso.fit_predict(X_train)
# select all rows that are not outliers
mask = yhat != -1
X_train, y_train = X_train[mask], y_train[mask]
# summarize the shape of the updated training dataset
print(X_train.shape, y_train.shape)
box_plot(X_train, "after_outlier_removal")
df = data_mip3h.loc[: ,features].dropna()
X_blind = df.drop(['DEPT','RHOZ',target], axis=1)
y_blind = df.loc[:,[target]]
# X_blind = data_boggess.loc[: ,features].drop([target], axis=1)
# y_blind = data_boggess.loc[:,[target]]
X_test.shape
X_blind.shape
###Output
_____no_output_____
###Markdown
Model Building
###Code
def modelfit(X_train, X_test, X_blind, y_train, y_test, y_blind, algorithm, hyper_parameters, scaler, classification,printFeatureImportance=True, cv_folds=3):
"""
function to tune the gradient boosting model and return the optimum
Parameters
----------
X_train : DataFrame
The input features for the training set
X_test : DataFrame
The input features for the testing set
X_blind : DataFrame
The input features for the blind set
y_train : DataFrame
The output feature for the training set
y_test : DataFrame
The output feature for the testing set
y_blind : DataFrame
The output feature for the blind set
algorithm : {'neural','svm','gradientboosting'}
The Machine Learning model
hyper_parameters : dict
A dictionary of the hyperparameters of the models that will be tuned
scaler : {'standard','minmax'}
Scaling technique to employ.
classification : bool
Flag to specify the modeling technique. True for classification and False for regression
printFeatureImportance : bool
Flag to specify if to display the feature importance histogram.
cv_folds : int
Number of cross-validation folds. default is 3.
Returns
-------
    model : the best trained estimator (pipeline), which can be deployed or saved
"""
    # step to assign the selected standardization
if scaler == 'standard':
scaler = StandardScaler()
elif scaler == 'minmax':
scaler =MinMaxScaler()
else:
print("invalid scaler: use 'standard' or 'minmax'")
#step to assign the selected machine learning algorithm
if algorithm == 'svm':
if classification is True:
algo = SVC(random_state=83)
else:
algo = SVR()
elif algorithm == 'neural':
if classification is True:
algo = MLPClassifier(random_state=677)
else:
algo = MLPRegressor(random_state=134)
elif algorithm == 'gradientboosting':
if classification is True:
algo = gbC(random_state=10)
else:
algo = gbR(random_state=824)
else:
        print("invalid algorithm: use 'svm' or 'neural' or 'gradientboosting'")
if classification is True:
pipe = Pipeline(steps=[('scaler', scaler), ('model', algo)])
model = GridSearchCV(estimator = pipe,
param_grid = hyper_parameters,
scoring='accuracy',n_jobs=-1, cv=cv_folds, verbose = 1)
#Fit the model on the data
model.fit(X_train.values, y_train.values.ravel())
#Predict training set:
y_train_pred = model.predict(X_train)
#Predict testing set:
y_test_pred = model.predict(X_test)
#Predict blind set
y_blind_pred = model.predict(X_blind)
#Print model report:
print("Model Report")
print("-------------------------------")
print("The training accuracy : {0:.4g}".format(metrics.accuracy_score(y_train.values, y_train_pred)))
print("The testing accuracy is : {0:.4g}".format(metrics.accuracy_score(y_test.values,y_test_pred)))
print("The blind well accuracy is : {0:.4g}".format(metrics.accuracy_score(y_blind.values,y_blind_pred)))
print("CV best score : {0:.4g}".format(model.best_score_))
print("CV best parameter combinations : {}".format(model.best_params_))
if algorithm == 'gradientboosting':
#Print Feature Importance:
if printFeatureImportance:
feat_imp = pd.Series(model.best_estimator_.named_steps.model.feature_importances_, X_train.columns).sort_values(ascending=False)
feat_imp.plot(kind='barh', title='Feature Importances')
plt.xlabel('Feature Importance Score')
else:
pipe = Pipeline(steps=[('scaler', scaler), ('model', algo)])
model = GridSearchCV(estimator = pipe,
param_grid = hyper_parameters,
scoring='r2',n_jobs=-1,
cv=cv_folds, verbose = 1)
#Fit the model on the data
model.fit(X_train.values, y_train.values.ravel())
#Predict training set:
y_train_pred = model.predict(X_train)
#Predict testing set:
y_test_pred = model.predict(X_test)
#Predict blind set
y_blind_pred = model.predict(X_blind)
#Print model report:
print("Model Report")
print("-------------------------------")
print("The training R2 score : {0:.4g}".format(metrics.r2_score(y_train.values, y_train_pred)))
print("The testing R2 score is : {0:.4g}".format(metrics.r2_score(y_test.values,y_test_pred)))
print("The blind well R2 score is : {0:.4g}".format(metrics.r2_score(y_blind.values,y_blind_pred)))
print("CV best score : {0:.4g}".format(model.best_score_))
print("CV best parameter combinations : {}".format(model.best_params_))
if algorithm == 'gradientboosting':
#Print Feature Importance:
if printFeatureImportance:
feat_imp = pd.Series(model.best_estimator_.named_steps.model.feature_importances_, X_train.columns).sort_values(ascending=False)
feat_imp.plot(kind='barh', title='Feature Importances')
plt.xlabel('Feature Importance Score')
    return model.best_estimator_
# Example hyperparameter grid entries for the gradient boosting model (prefix each parameter with "model__"):
# "model__min_samples_split": [2, 3, 4, 5],
# "model__min_samples_leaf": [1, 2, 3, 4, 5],
# "model__max_depth": range(4, 8, 1),
# "model__n_estimators": range(100, 301, 50)
###Output
_____no_output_____
###Markdown
Training the Gradient Boosting
###Code
#use the documentation of GradientBoostingRegressor() to understand the parameters
#put new parameters in the grid by using "model__" before the parameter name as below
hyper_parameters = {
}
model_gb = modelfit(X_train, X_test, X_blind, y_train, y_test, y_blind, algorithm='gradientboosting',
hyper_parameters=hyper_parameters, scaler='minmax',
classification=False,printFeatureImportance=True, cv_folds=3)
#sample size vs score
m = Pipeline(steps=[('scaler', StandardScaler()), ('model', gbR(max_depth= 7, min_samples_leaf= 1, min_samples_split= 3))])
size = np.arange(500,X_train.shape[0], 500)
train_scores = []
test_scores = []
blind_scores = []
for i in size:
m.fit(X_train.iloc[:i,:].values, y_train.iloc[:i,:].values.ravel())
train_scores.append(metrics.r2_score(y_train.iloc[:i,:].values, m.predict(X_train.iloc[:i,:].values)))
test_scores.append(metrics.r2_score(y_test.values, m.predict(X_test)))
blind_scores.append(metrics.r2_score(y_blind.values, m.predict(X_blind)))
plt.plot(size, train_scores, label = 'train')
plt.plot(size, test_scores, label = 'test')
plt.plot(size, blind_scores, label = 'blind')
plt.legend()
feat_imp = pd.Series(model_gb.named_steps.model.feature_importances_, X_train.columns).sort_values(ascending=False)
feat_imp.plot(kind='barh', title='Feature Importances')
plt.xlabel('Feature Importance Score')
# plt.savefig(r'./Images/{}.png'.format('gb_feature_importance'), dpi=300)
# model2 = modelfit(X_train2, X_test2, X_blind2, y_train2, y_test2, y_blind2, algorithm='gradientboosting',
# hyper_parameters=hyper_parameters, scaler='standard',
# classification=True,printFeatureImportance=True, cv_folds=3)
###Output
_____no_output_____
###Markdown
Training the SVM
###Code
# Example hyperparameter grid entries for the SVR model (prefix each parameter with "model__"):
# 'model__kernel': ['linear', 'poly', 'rbf', 'sigmoid'],
# 'model__gamma': ['scale', 'auto'],
# 'model__C': [1, 10, 100],
#use the documentation of SVR() to understand the parameters
#put new parameters in the grid by using "model__" before the parameter name as below
hyper_parameters = {
'model__epsilon': np.arange(0.01,0.1,0.01)}
model_svm = modelfit(X_train, X_test, X_blind, y_train, y_test, y_blind, algorithm='svm',
hyper_parameters=hyper_parameters, scaler='standard',
classification=False,printFeatureImportance=True, cv_folds=3)
model_svm.named_steps.model.support_vectors_.shape
#sample size vs score
m = Pipeline(steps=[('scaler', StandardScaler()), ('model', SVR(epsilon=0.02))])
size = np.arange(500,X_train.shape[0], 500)
train_scores = []
test_scores = []
blind_scores = []
for i in size:
m.fit(X_train.iloc[:i,:].values, y_train.iloc[:i,:].values.ravel())
train_scores.append(metrics.r2_score(y_train.iloc[:i,:].values, m.predict(X_train.iloc[:i,:].values)))
test_scores.append(metrics.r2_score(y_test.values, m.predict(X_test)))
blind_scores.append(metrics.r2_score(y_blind.values, m.predict(X_blind)))
plt.plot(size, train_scores, label = 'train')
plt.plot(size, test_scores, label = 'test')
plt.plot(size, blind_scores, label = 'blind')
plt.legend()
###Output
_____no_output_____
###Markdown
Training the Neural Network
###Code
#use the documentation of MLPRegressor() to understand the parameters
#put new parameters in the grid by using "model__" before the parameter name as below
hyper_parameters = {'model__hidden_layer_sizes': [(10,10,),(19,19,),(20,),(20,20,)],
'model__tol': [0.0001,0.00001,0.001],
'model__solver': ['lbfgs'],
'model__max_iter': [1000]}
model_nn = modelfit(X_train, X_test, X_blind, y_train, y_test, y_blind, algorithm='neural',
hyper_parameters=hyper_parameters, scaler='minmax',
classification=False,printFeatureImportance=True, cv_folds=3)
#sample size vs score
m = Pipeline(steps=[('scaler', StandardScaler()), ('model', MLPRegressor(hidden_layer_sizes= (19, 19), max_iter= 1000, solver= 'lbfgs', tol= 1e-05))])
size = np.arange(500,X_train.shape[0], 500)
train_scores = []
test_scores = []
blind_scores = []
for i in size:
m.fit(X_train.iloc[:i,:].values, y_train.iloc[:i,:].values.ravel())
train_scores.append(metrics.r2_score(y_train.iloc[:i,:].values, m.predict(X_train.iloc[:i,:].values)))
test_scores.append(metrics.r2_score(y_test.values, m.predict(X_test)))
blind_scores.append(metrics.r2_score(y_blind.values, m.predict(X_blind)))
plt.plot(size, train_scores, label = 'train')
plt.plot(size, test_scores, label = 'test')
plt.plot(size, blind_scores, label = 'blind')
plt.legend()
###Output
_____no_output_____
###Markdown
Visualizing the Result
###Code
#create folder to save images
if os.path.exists(r'./Images'):
pass
else:
os.mkdir(r'./Images')
def plot_logs2(data, well_name, model_gb, model_svm, model_nn, formation):
"""
function to plot the log data and the predictions
Parameters
----------
data : DataFrame
The well data to be plotted
well_name : str
The name of the well being plotted
model:
The trained model used for the prediction
formation : dict
The formation tops ( names as keys and depth interval as the item in a list)
Returns
-------
A plot of the well logs
"""
#assigning the logs to variable names to make the code cleaner and easier to read
MD = data.DEPT
GR = data.GR
RHOB = data.RHOZ
NPHI = data.NPHI
DT= data.DTCO
PEFZ = data.PEFZ
BA = data.Brittleness_new
#creating the figure
fig, ax = plt.subplots(nrows=1, ncols=6,figsize=(15,10), sharey=True, gridspec_kw={'width_ratios': [3,3,3,3,3,3]})
# fig.suptitle("O {}".format(well_name), fontsize=25)
fig.subplots_adjust(top=0.85, wspace=0.2)
# ax[0].set_ylim(formation['Upper Marcellus'][0],formation['Lower Marcellus'][1]) #display only a depth range
ax[0].set_ylim(7600, formation['Lower Marcellus'][1]) #display only a depth range
ax[0].invert_yaxis()
ax[0].set_ylabel('MD (M)',fontsize=20)
ax[0].yaxis.grid(True)
ax[0].get_xaxis().set_visible(False) #removing the x-axis label at the bottom of the fig
##Track 1
##Gamma_ray and PEF
ax_GR = ax[0].twiny() #share the depth axis
ax_GR.set_xlim(0,270)
ax_GR.plot(GR,MD, color='black')
ax_GR.set_xlabel('GR (API)',color='black')
ax_GR.tick_params('x',colors='black') ##change the color of the x-axis tick label
ax[0].get_xaxis().set_visible(False)
ax[0].yaxis.grid(True)
ax_GR.grid(True,alpha=0.5)
#variable colorfill
GR_range = abs(GR.min() - GR.max())
cmap = plt.get_cmap('nipy_spectral') #color map
color_index = np.arange(GR.min(), GR.max(), GR_range / 20)
#loop through each value in the color_index
for index in sorted(color_index):
index_value = (index - GR.min())/GR_range
color = cmap(index_value) #obtain colour for color index value
ax_GR.fill_betweenx(MD, 0 , GR, where = GR >= index, color = color)
ax_PEFZ = ax[0].twiny()
ax_PEFZ.plot(PEFZ,MD, color='red')
ax_PEFZ.set_xlabel('PEFZ',color='red')
ax_PEFZ.tick_params('x',colors='red') ##change the color of the x-axis tick label
ax_PEFZ.spines['top'].set_position(('outward',40)) ##move the x-axis up
ax_PEFZ.spines["top"].set_edgecolor("red")
#Track 2
##NPHI and RHOB
ax_NPHI = ax[1].twiny()
ax_NPHI.set_xlim(-0.1,0.4)
ax_NPHI.invert_xaxis()
ax_NPHI.plot(NPHI, MD, label='NPHI[%]', color='green')
ax_NPHI.spines['top'].set_position(('outward',0))
ax_NPHI.set_xlabel('NPHI[%]', color='green')
ax_NPHI.tick_params(axis='x', colors='green')
ax_NPHI.spines["top"].set_edgecolor("green")
ax_RHOB = ax[1].twiny()
ax_RHOB.set_xlim(1.95,2.95)
ax_RHOB.invert_xaxis()
ax_RHOB.plot(RHOB, MD,label='RHOB[g/cc]', color='red')
ax_RHOB.spines['top'].set_position(('outward',40))
ax_RHOB.set_xlabel('RHOB[g/cc]',color='red')
ax_RHOB.tick_params(axis='x', colors='red')
ax_RHOB.spines["top"].set_edgecolor('red')
ax[1].get_xaxis().set_visible(False)
ax[1].yaxis.grid(True)
ax_RHOB.grid(True,alpha=0.5)
ax[1].axis('off')
# #color fill
# x = np.array(ax_RHOB.get_xlim())
# z = np.array(ax_NPHI.get_xlim())
# nz=((NPHI-np.max(z))/(np.min(z)-np.max(z)))*(np.max(x)-np.min(x))+np.min(x)
# ax_RHOB.fill_betweenx(MD, RHOB, nz, where=RHOB>=nz, interpolate=True, color='green')
# ax_RHOB.fill_betweenx(MD, RHOB, nz, where=RHOB<=nz, interpolate=True, color='yellow')
#Track 3
##Sonic
ax_DT = ax[2].twiny()
ax_DT.grid(True)
ax_DT.set_xlim(100,50)
ax_DT.spines['top'].set_position(('outward',0))
ax_DT.plot(DT, MD, label='DT[us/ft]', color='blue')
ax_DT.set_xlabel('DT[us/ft]', color='blue')
ax_DT.tick_params(axis='x', colors='blue')
ax_DT.spines["top"].set_edgecolor("blue")
ax[2].get_xaxis().set_visible(False)
ax[2].yaxis.grid(True)
ax_DT.grid(True,alpha=0.5)
ax[2].axis('off')
#Track 4
#gb model
ax_BA1 = ax[3].twiny()
ax_BA1.grid(True)
ax_BA1.set_xlim(0,1)
ax_BA1.spines['top'].set_position(('outward',0))
ax_BA1.plot(BA, MD, label='BRITTLENESS ESTIMATE', color='black')
ax_BA1.set_xlabel('BRITTLENESS ESTIMATE', color='black')
ax_BA1.tick_params(axis='x', colors='black')
    ##Plotting the predicted data
###work on this for generalization
ax_pred = ax[3].twiny()
df = data.loc[: , features].dropna()
pred = model_gb.predict(df.drop(['DEPT','RHOZ',target], axis=1))
df['Brittleness_predict'] = pred
ax_BA1.plot(df.Brittleness_predict, df.DEPT, color='red', linestyle='--')
ax_pred.spines['top'].set_position(('outward',40))
ax_pred.set_xlabel('BRITTLENESS (GB)',color='red')
ax_pred.tick_params(axis='x', colors='red')
ax_pred.spines["top"].set_edgecolor('red')
ax[3].get_xaxis().set_visible(False)
ax[3].yaxis.grid(True)
ax[3].axis('off')
ax_BA1.grid(True,alpha=0.5)
    #Track 5
##Brittleness
## nn model
ax_BA2 = ax[4].twiny()
ax_BA2.grid(True)
ax_BA2.set_xlim(0,1)
ax_BA2.spines['top'].set_position(('outward',0))
ax_BA2.plot(BA, MD, label='BRITTLENESS ESTIMATE', color='black')
ax_BA2.set_xlabel('BRITTLENESS ESTIMATE', color='black')
ax_BA2.tick_params(axis='x', colors='black')
    ##Plotting the predicted data
###work on this for generalization
ax_pred = ax[4].twiny()
df = data.loc[: , features].dropna()
pred = model_nn.predict(df.drop(['DEPT','RHOZ', target], axis=1))
df['Brittleness_predict'] = pred
ax_BA2.plot(df.Brittleness_predict, df.DEPT, color='blue', linestyle='--')
ax_pred.spines['top'].set_position(('outward',40))
ax_pred.set_xlabel('BRITTLENESS (NN)',color='blue')
ax_pred.tick_params(axis='x', colors='blue')
ax_pred.spines["top"].set_edgecolor('blue')
ax[4].get_xaxis().set_visible(False)
ax[4].yaxis.grid(True)
ax[4].axis('off')
ax_BA2.grid(True,alpha=0.5)
    #Track 6
##Brittleness
##svm model
ax_BA3 = ax[5].twiny()
ax_BA3.grid(True)
ax_BA3.set_xlim(0,1)
ax_BA3.spines['top'].set_position(('outward',0))
ax_BA3.plot(BA, MD, label='BRITTLENESS ESTIMATE', color='black')
ax_BA3.set_xlabel('BRITTLENESS ESTIMATE', color='black')
ax_BA3.tick_params(axis='x', colors='black')
    ##Plotting the predicted data
###work on this for generalization
ax_pred = ax[5].twiny()
df = data.loc[: , features].dropna()
pred = model_svm.predict(df.drop(['DEPT','RHOZ',target], axis=1))
df['Brittleness_predict'] = pred
ax_BA3.plot(df.Brittleness_predict, df.DEPT, color='purple', linestyle='--')
ax_pred.spines['top'].set_position(('outward',40))
ax_pred.set_xlabel('BRITTLENESS (SVM)',color='purple')
ax_pred.tick_params(axis='x', colors='purple')
ax_pred.spines["top"].set_edgecolor('purple')
ax[5].get_xaxis().set_visible(False)
ax[5].yaxis.grid(True)
ax[5].axis('off')
ax_BA3.grid(True,alpha=0.5)
# #formation top
# ax_top = ax[-1]
# ax[-1].axis('off')
# formation_midpoints = []
# for key, value in formation.items():
# #Calculate mid point of the formation
# formation_midpoints.append(value[0] + (value[1]-value[0])/2)
# zone_colours = ["red", "blue", "green"]
# for ax in [ax_GR, ax_NPHI, ax_BA1, ax_BA2, ax_BA3, ax_top]:
# # loop through the formations dictionary and zone colours
# for depth, colour in zip(formation.values(), zone_colours):
# # use the depths and colours to shade across the subplots
# ax.axhspan(depth[0], depth[1], color=colour, alpha=0.1)
# for label, formation_mid in zip(formation.keys(),
# formation_midpoints):
# ax_top.text(0.5, formation_mid, label, rotation=90,
# verticalalignment='center', horizontalalignment='center', fontweight='bold',
# fontsize='large')
# fig.savefig(r'./Images/{}.png'.format(well_name), dpi=600)
X_train.columns
# formation = {'Tully': [7195,7310],
# 'Mahantango': [7310,7455],
# 'Marcellus': [7455,7560]}
formation = {'Upper Marcellus': [7453,7476],
'Middle Marcellus': [7476,7517],
'Lower Marcellus': [7517,7555]}
plot_logs2(data_mip3h, "MIP3H2", model_gb, model_svm, model_nn, formation)
# formation2 = {'Tully': [7604,7670],
# 'Mahantango': [7670,7882],
# 'Marcellus': [7882,8052]}
formation_pos = {'Upper Marcellus': [7883,7961],
'Middle Marcellus': [7961,8015],
'Lower Marcellus': [8015,8052]}
plot_logs2(data_poseidon, "Poseidon", model_gb, model_svm, model_nn, formation_pos)
formation_bog = {'Upper Marcellus': [7877,7905],
'Middle Marcellus': [7905, 7951],
'Lower Marcellus': [7951,7974]}
plot_logs2(data_boggess, "Boggess", model_gb, model_svm, model_nn, formation_bog)
formation_whip = {'Upper Marcellus': [7736, 7785],
'Middle Marcellus': [7785, 7811],
'Lower Marcellus': [7811, 7835]}
plot_logs2(data_whipkey, "Whipkey", model_gb, model_svm, model_nn, formation_whip)
###Output
_____no_output_____ |
notebooks/Predictive_modeling.ipynb | ###Markdown
Importing the data
###Code
# setting the raw path
processed_data_path = os.path.join(os.path.pardir,"data","processed")
train_file_path = os.path.join(processed_data_path,"train.csv")
test_file_path = os.path.join(processed_data_path,"test.csv")
X = pd.read_csv(train_file_path)
y = pd.read_csv(test_file_path)
X.shape
y.shape
###Output
_____no_output_____
###Markdown
Splitting the data
###Code
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.1, random_state=0)
print("Number transactions X_train dataset: ", X_train.shape)
print("Number transactions y_train dataset: ", y_train.shape)
print("Number transactions X_test dataset: ", X_test.shape)
print("Number transactions y_test dataset: ", y_test.shape)
###Output
Number transactions X_train dataset: (65786, 10)
Number transactions y_train dataset: (65786, 1)
Number transactions X_test dataset: (7310, 10)
Number transactions y_test dataset: (7310, 1)
###Markdown
Dummy model Creating a dummy classifier: A dummy classifier is a type of classifier which does not generate any insight about the data and classifies the given data using only simple rules. The classifier's behavior is completely independent of the training data: the trends in the training data are completely ignored, and instead one of a few fixed strategies is used to predict the class label. It is used only as a simple baseline for the other classifiers, i.e. any other classifier is expected to perform better on the given dataset. It is especially useful for datasets where we are sure of a class imbalance. It is based on the philosophy that any analytic approach to a classification problem should be better than a random guessing approach.
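As a rough illustration (a minimal sketch on made-up toy labels, not our dataset), the 'most_frequent' strategy simply predicts the mode of the training labels:
###Code
# Minimal sketch: 'most_frequent' is equivalent to always predicting the training-label mode.
# The toy labels below are hypothetical and only illustrate the idea.
import numpy as np
from sklearn.dummy import DummyClassifier
y_toy = np.array([0, 0, 0, 1, 1]) # imbalanced toy labels
X_toy = np.zeros((5, 1)) # features are ignored by the dummy classifier
dummy = DummyClassifier(strategy='most_frequent').fit(X_toy, y_toy)
print(dummy.predict(X_toy)) # -> [0 0 0 0 0], i.e. always the majority class
print(dummy.score(X_toy, y_toy)) # -> 0.6, the majority-class fraction
###Output
_____no_output_____
###Markdown
Now comparing the different strategies on our actual split: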
###Code
'''
Below are a few strategies used by the dummy classifier to predict a class label –
Most Frequent: The classifier always predicts the most frequent class label in the training data.
Stratified: It generates predictions by respecting the class distribution of the training data. It is different from the “most frequent” strategy as it instead associates a probability with each data point of being the most frequent class label.
Uniform: It generates predictions uniformly at random.
'''
strategies = ['most_frequent', 'stratified', 'uniform']
test_scores = []
for s in strategies:
#if s =='constant':
# dclf = DummyClassifier(strategy = s, random_state = 0)
#else:
dclf = DummyClassifier(strategy = s, random_state = 0)
dclf.fit(X_train, y_train)
score = dclf.score(X_test, y_test)
test_scores.append(score)
test_scores
# accuracy score
print('accuracy for baseline model : {0:.2f}'.format(accuracy_score(y_test, dclf.predict(X_test))))
ax = sns.stripplot(strategies, test_scores);
ax.set(xlabel ='Strategy', ylabel ='Test Score')
plt.show()
###Output
_____no_output_____
###Markdown
Machine learning models Defining the helper function for k-fold and stratified k-fold cross validation
###Code
# Defining the path to save the figures/plots.
figures_data_path = os.path.join(os.path.pardir, 'reports','figures')
kf = KFold(n_splits = 10, shuffle = True, random_state = 4)
skf = StratifiedKFold(n_splits = 10, shuffle = True, random_state = 4)
def model_classifier(model, X, y, cv):
"""
    Creates folds manually, performs cross-validation with the given model,
    and prints the mean accuracy, classification report, confusion matrix and ROC curve.
"""
scores = []
for train_index,test_index in cv.split(X,y):
X_train,X_test = X.loc[train_index],X.loc[test_index]
y_train,y_test = y.loc[train_index],y.loc[test_index]
# Fit the model on the training data
model_obj = model.fit(X_train, y_train)
y_pred = model_obj.predict(X_test)
y_pred_prob = model_obj.predict_proba(X_test)[:,1]
# Score the model on the validation data
score = accuracy_score(y_test, y_pred)
report = classification_report(y_test, y_pred)
conf_matrix = confusion_matrix(y_test, y_pred)
scores.append(score)
mean_score = np.array(scores).mean()
print('Accuracy scores of the model: {:.2f}'.format(mean_score))
print('\n Classification report of the model')
print('--------------------------------------')
print(report)
print('\n Confusion Matrix of the model')
print('--------------------------------------')
print(conf_matrix)
print("\n ROC Curve")
logit_roc_auc = roc_auc_score(y_test, y_pred)
fpr, tpr, thresholds = roc_curve(y_test, y_pred_prob)
plt.figure()
val_model = input("Enter your model name: ")
plt.plot(fpr, tpr, label= val_model + ' (area = %0.2f)' % logit_roc_auc)
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
my_fig = val_model + '.png'
plt.savefig(os.path.join(figures_data_path, my_fig))
plt.show()
###Output
_____no_output_____
###Markdown
Logistic regression model
###Code
# instantiating the model;
logregression = LogisticRegression()
###Output
_____no_output_____
###Markdown
1. Using k-fold. Overview: The k-fold cross-validation procedure involves splitting the training dataset into k folds. The first k-1 folds are used to train a model, and the holdout k-th fold is used as the test set. This process is repeated and each of the folds is given an opportunity to be used as the holdout test set. A total of k models are fit and evaluated, and the performance of the model is calculated as the mean of these runs.
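To make the fold mechanics concrete, here is a minimal sketch (on a tiny toy array, independent of our data) of how KFold hands out train/validation indices:
###Code
# Minimal sketch of KFold splitting on a toy array (illustration only).
import numpy as np
from sklearn.model_selection import KFold
X_toy = np.arange(6).reshape(6, 1)
kf_demo = KFold(n_splits=3, shuffle=True, random_state=4)
for fold, (train_idx, val_idx) in enumerate(kf_demo.split(X_toy)):
    # each sample appears in exactly one validation fold
    print("fold {}: train={}, validation={}".format(fold, train_idx, val_idx))
###Output
_____no_output_____
###Markdown
The helper defined above repeats exactly this loop on the real data, fitting and scoring the model once per fold.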
###Code
model_classifier(logregression, X, y,kf)
###Output
Accuracy scores of the model: 0.73
Classification report of the model
--------------------------------------
precision recall f1-score support
0 0.70 0.78 0.74 3636
1 0.75 0.67 0.71 3673
accuracy 0.73 7309
macro avg 0.73 0.73 0.72 7309
weighted avg 0.73 0.73 0.72 7309
Confusion Matrix of the model
--------------------------------------
[[2825 811]
[1195 2478]]
ROC Curve
###Markdown
2. Stratified k-fold cross-validation. Overview: This is a modified version of k-fold cross-validation in which the train-test splits preserve the class distribution of the dataset.
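A minimal sketch (toy labels only, not our data) showing that StratifiedKFold keeps the class proportions of y in every fold:
###Code
# Minimal sketch: StratifiedKFold preserves the class distribution in each fold (illustration only).
import numpy as np
from sklearn.model_selection import StratifiedKFold
X_toy = np.zeros((8, 1))
y_toy = np.array([0, 0, 0, 0, 0, 0, 1, 1]) # 75% / 25% class mix
skf_demo = StratifiedKFold(n_splits=2, shuffle=True, random_state=4)
for fold, (train_idx, val_idx) in enumerate(skf_demo.split(X_toy, y_toy)):
    # np.bincount shows the per-class counts in each validation fold
    print("fold {}: class counts in validation fold = {}".format(fold, np.bincount(y_toy[val_idx])))
###Output
_____no_output_____
###Markdown
Each validation fold keeps roughly the same 0/1 ratio as the full label vector, which is what stratification guarantees.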
###Code
model_classifier(logregression, X, y,skf)
###Output
Accuracy scores of the model: 0.73
Classification report of the model
--------------------------------------
precision recall f1-score support
0 0.71 0.79 0.75 3654
1 0.76 0.68 0.72 3655
accuracy 0.73 7309
macro avg 0.74 0.73 0.73 7309
weighted avg 0.74 0.73 0.73 7309
Confusion Matrix of the model
--------------------------------------
[[2873 781]
[1172 2483]]
ROC Curve
###Markdown
Hyperparameter Optimization:logistic regression
###Code
lr_hyp = LogisticRegression()
# regularization penalty space
penalty = ['l1', 'l2']
# regularization hyperparameter space
C = np.logspace(0, 4, 10)
# hyperparameter options
param_grid = dict(C=C, penalty=penalty)
log_reg_cv = GridSearchCV(lr_hyp, param_grid, verbose=0)
model_classifier(log_reg_cv, X, y, skf)
'''
lr_hyp = LogisticRegression()
# regularization penalty space
penalty = ['l1','l2']
solver = ['liblinear', 'saga']
# regularization hyperparameter space
#C = np.logspace(0, 4, 10)
C = np.logspace(0, 4, num=10)
# hyperparameter options
param_grid = dict(C=C, penalty=penalty, solver=solver)
log_reg_cv = RandomizedSearchCV(lr_hyp, param_grid)
model_classifier(log_reg_cv, X, y, kf)
'''
###Output
Accuracy scores of the model: 0.73
Classification report of the model
--------------------------------------
precision recall f1-score support
0 0.71 0.79 0.75 3654
1 0.76 0.68 0.72 3655
accuracy 0.73 7309
macro avg 0.74 0.73 0.73 7309
weighted avg 0.74 0.73 0.73 7309
Confusion Matrix of the model
--------------------------------------
[[2873 781]
[1172 2483]]
ROC Curve
Enter your model name: log_reg_cv
###Markdown
In our case, since we had already tackled class imbalance, there is not much difference between the accuracy scores obtained using k-fold cross-validation and using stratified k-fold cross-validation.
Confusion Matrix: The diagonal elements represent the number of points for which the predicted label is equal to the true label, while off-diagonal elements are those that are mislabeled by the classifier. The higher the diagonal values of the confusion matrix, the better, indicating many correct predictions.
classification_report / F1 score: A measurement that considers both precision and recall to compute the score. The F1 score can be interpreted as a weighted average of the precision and recall values, where an F1 score reaches its best value at 1 and its worst value at 0. Out of all the classes, 72% of our data was predicted correctly.
**ROC Curve:** The ROC curve shows the trade-off between sensitivity (or TPR) and specificity (1 – FPR). Classifiers that give curves closer to the top-left corner indicate a better performance. The closer the curve comes to the 45-degree diagonal of the ROC space, the less accurate the test. In our case it is far from the diagonal, and therefore we can conclude that the test is reasonably accurate. A small numeric cross-check of these metrics, computed from the stratified k-fold confusion matrix above, is sketched below.
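###Code
# Minimal sketch: re-computing accuracy, precision, recall and F1 for class 1
# directly from the stratified k-fold confusion matrix printed above ([[2873, 781], [1172, 2483]]).
import numpy as np
cm = np.array([[2873, 781], [1172, 2483]]) # rows = true class, columns = predicted class
TN, FP, FN, TP = cm.ravel()
accuracy = (TP + TN) / cm.sum()
precision = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)
print("accuracy={:.2f}, precision={:.2f}, recall={:.2f}, F1={:.2f}".format(accuracy, precision, recall, f1))
###Output
_____no_output_____
###Markdown
**XGBoost Model**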
###Code
#instantiating the model
xgb_regressor = XGBClassifier()
###Output
_____no_output_____
###Markdown
XGBoost using k-fold cross validation
###Code
model_classifier(xgb_regressor,X,y,kf)
###Output
Accuracy scores of the model: 0.84
Classification report of the model
--------------------------------------
precision recall f1-score support
0 0.82 0.87 0.85 3636
1 0.87 0.82 0.84 3673
accuracy 0.84 7309
macro avg 0.84 0.84 0.84 7309
weighted avg 0.84 0.84 0.84 7309
Confusion Matrix of the model
--------------------------------------
[[3169 467]
[ 675 2998]]
ROC Curve
###Markdown
XGBoost using stratified k-fold cross validation
###Code
model_classifier(xgb_regressor,X,y,skf)
###Output
Accuracy scores of the model: 0.85
Classification report of the model
--------------------------------------
precision recall f1-score support
0 0.83 0.88 0.85 3654
1 0.87 0.82 0.84 3655
accuracy 0.85 7309
macro avg 0.85 0.85 0.85 7309
weighted avg 0.85 0.85 0.85 7309
Confusion Matrix of the model
--------------------------------------
[[3209 445]
[ 656 2999]]
ROC Curve
###Markdown
XGBoost Hyperparameter tuning
###Code
def timer(start_time=None):
if not start_time:
start_time = datetime.now()
return start_time
elif start_time:
thour, temp_sec = divmod((datetime.now() - start_time).total_seconds(), 3600)
tmin, tsec = divmod(temp_sec, 60)
print('\n Time taken: %i hours %i minutes and %s seconds.' % (thour, tmin, round(tsec, 2)))
# A parameter grid for XGBoost
params = {
'min_child_weight': [1, 5, 10],
'gamma': [0.5, 1, 1.5, 2, 5],
'subsample': [0.6, 0.8, 1.0],
'colsample_bytree': [0.6, 0.8, 1.0],
'max_depth': [3, 4, 5]
}
xgb_hyp = XGBClassifier()
xgb_random_search = RandomizedSearchCV(xgb_hyp, param_distributions=params, n_iter=5, scoring='roc_auc', n_jobs=-1, cv=10, verbose=3)
start_time = timer(None) # timing starts from this point for "start_time" variable
xgb_random_search.fit(X, y)
timer(start_time)
xgb_random_search.best_estimator_
xgb_random_search.best_params_
xgb_tuned = XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytree=0.6, gamma=0.5,
learning_rate=0.1, max_delta_step=0, max_depth=5,
min_child_weight=10, missing=None, n_estimators=100, n_jobs=1,
nthread=None, objective='binary:logistic', random_state=0,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
silent=None, subsample=0.6, verbosity=1)
model_classifier(xgb_tuned,X,y,skf)
###Output
_____no_output_____
###Markdown
MLP Definition - What does Multilayer Perceptron (MLP) mean? A multilayer perceptron (MLP) is a feedforward artificial neural network that generates a set of outputs from a set of inputs. An MLP is characterized by several layers of input nodes connected as a directed graph between the input and output layers. MLP uses backpropagation for training the network. MLP is a deep learning method.
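As a minimal sketch (toy data and hypothetical layer sizes, not our dataset), scikit-learn expresses the "several layers" through the `hidden_layer_sizes` tuple, and the fitted weight matrices show the resulting layer structure:
###Code
# Minimal sketch: an MLP with two hidden layers of 10 units each (hypothetical sizes, toy data only).
import numpy as np
from sklearn.neural_network import MLPClassifier
rng = np.random.RandomState(0)
X_toy = rng.rand(50, 3)
y_toy = (X_toy.sum(axis=1) > 1.5).astype(int)
mlp_demo = MLPClassifier(hidden_layer_sizes=(10, 10), max_iter=2000, random_state=0)
mlp_demo.fit(X_toy, y_toy)
# coefs_ holds one weight matrix per connection: input->hidden1, hidden1->hidden2, hidden2->output
print([w.shape for w in mlp_demo.coefs_]) # expected: [(3, 10), (10, 10), (10, 1)]
###Output
_____no_output_____
###Markdown
The `hidden_layer_sizes` tuple is exactly what gets tuned in the hyperparameter search further below.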
###Code
mlp = MLPClassifier()
###Output
_____no_output_____
###Markdown
MLP using K-fold cv
###Code
model_classifier(mlp,X,y,kf)
###Output
Accuracy scores of the model: 0.75
Classification report of the model
--------------------------------------
precision recall f1-score support
0 0.71 0.85 0.78 3636
1 0.82 0.66 0.73 3673
accuracy 0.75 7309
macro avg 0.76 0.75 0.75 7309
weighted avg 0.76 0.75 0.75 7309
Confusion Matrix of the model
--------------------------------------
[[3097 539]
[1257 2416]]
ROC Curve
###Markdown
MLP using stratified k-fold cv
###Code
model_classifier(mlp,X,y,skf)
###Output
Accuracy scores of the model: 0.75
Classification report of the model
--------------------------------------
precision recall f1-score support
0 0.72 0.85 0.78 3654
1 0.81 0.66 0.73 3655
accuracy 0.76 7309
macro avg 0.76 0.76 0.75 7309
weighted avg 0.76 0.76 0.75 7309
Confusion Matrix of the model
--------------------------------------
[[3095 559]
[1231 2424]]
ROC Curve
###Markdown
Hyperparameter tuning for Multilayer perceptron
###Code
parameter_space = {
'hidden_layer_sizes': [(10,30,10),(20,)],
'activation': ['tanh', 'relu'],
'solver': ['sgd', 'adam'],
'alpha': [0.0001, 0.05],
'learning_rate': ['constant','adaptive'],
}
mlp_hyp = MLPClassifier()
randomized_mlp = RandomizedSearchCV(mlp_hyp, parameter_space, n_jobs=-1, cv=10)
start_time = timer(None) # timing starts from this point for "start_time" variable
randomized_mlp.fit(X, y)
timer(start_time)
randomized_mlp.best_estimator_
randomized_mlp.best_params_
mlp_tuned = MLPClassifier(alpha=0.05, hidden_layer_sizes=(20,), learning_rate='adaptive',
solver='sgd')
model_classifier(mlp_tuned,X,y,skf)
###Output
Accuracy scores of the model: 0.74
Classification report of the model
--------------------------------------
precision recall f1-score support
0 0.71 0.82 0.76 3654
1 0.79 0.66 0.72 3655
accuracy 0.74 7309
macro avg 0.75 0.74 0.74 7309
weighted avg 0.75 0.74 0.74 7309
Confusion Matrix of the model
--------------------------------------
[[3012 642]
[1248 2407]]
ROC Curve
###Markdown
Support Vector Machine Model: The objective of the support vector machine algorithm is to find a hyperplane in an N-dimensional space (where N is the number of features) that distinctly classifies the data points.
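A minimal sketch (toy 2D points only) of the hyperplane idea: for a linear kernel the fitted SVC exposes the hyperplane coefficients w and intercept b, and its decision function is simply w·x + b:
###Code
# Minimal sketch: a linear SVC and its separating hyperplane w.x + b (toy data only).
import numpy as np
from sklearn.svm import SVC
X_toy = np.array([[0, 0], [0, 1], [1, 0], [2, 2], [2, 3], [3, 2]], dtype=float)
y_toy = np.array([0, 0, 0, 1, 1, 1])
svc_demo = SVC(kernel='linear').fit(X_toy, y_toy)
w, b = svc_demo.coef_[0], svc_demo.intercept_[0]
# the sign of w.x + b decides the class; this matches decision_function
print(w, b)
print(np.allclose(X_toy @ w + b, svc_demo.decision_function(X_toy)))
###Output
_____no_output_____
###Markdown
With a non-linear kernel such as 'rbf' the same idea applies in a transformed feature space, which is what is used below.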
###Code
from sklearn.svm import SVC
best_svr = SVC(kernel='rbf', probability=True) # SVC (not SVR): model_classifier needs predict_proba; variable name kept for the calls below
###Output
_____no_output_____
###Markdown
SVM using stratified k-fold cross validation
###Code
model_classifier(best_svr,X,y,skf)
###Output
_____no_output_____
###Markdown
SVM using k-fold cross validation
###Code
model_classifier(best_svr,X,y,kf)
###Output
_____no_output_____ |
docs/notebooks/tweet-ouptut.ipynb | ###Markdown
Tweet output `TweetOutput` is a generic class which can export scraped tweets. Under the hood it has the abstract method `export_tweets(tweets: List[Tweet])`. There are a few implementations of `TweetOutput`:
###Code
import stweet as st
###Output
_____no_output_____
###Markdown
PrintTweetOutput PrintTweetOutput prints all tweets to the console. It does not store tweets anywhere.
###Code
st.PrintTweetOutput();
###Output
_____no_output_____
###Markdown
CollectorTweetOutput CollectorTweetOutput stores all tweets in memory. This solution is the best way when we want to analyse a small number of tweets. To get all collected tweets, run the method `get_scrapped_tweets()`.
###Code
st.CollectorTweetOutput();
###Output
_____no_output_____
###Markdown
CsvTweetOutput `CsvTweetOutput` stores tweets in a csv file. It has two parameters, `file_location` and `add_header_on_start`. When `add_header_on_start` is `True`, the header is added only when the file is empty, so it is possible to continue storing tweets in the same file across subsequent tasks.
###Code
st.CsvTweetOutput(
file_location='my_csv_file.csv',
add_header_on_start=True
);
###Output
_____no_output_____
###Markdown
JsonLineFileTweetOutput `JsonLineFileTweetOutput` stores tweets in a file in JSON Lines format. This works better for large files, where repeatedly saving new tweets into one big JSON document becomes slow and reading it back is also a problem. Using JSON Lines it is possible to read the file line by line, without loading the whole document into memory. The class has only one property, `file_name`, which is the file used to store tweets in JSON Lines format.
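A minimal sketch (standard library only) of how such a file can be read back one tweet at a time:
###Code
# Minimal sketch: reading a .jl file line by line with the standard json module.
# 'my_jl_file.jl' is the file produced below; each line holds one JSON document (one tweet).
import json
import os
if os.path.exists('my_jl_file.jl'):
    with open('my_jl_file.jl') as f:
        for line in f:
            tweet_dict = json.loads(line) # only one tweet is in memory at a time
            print(tweet_dict)
###Output
_____no_output_____
###Markdown
The constructor itself only needs the file name: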
###Code
st.JsonLineFileTweetOutput(
file_name='my_jl_file.jl'
);
###Output
_____no_output_____
###Markdown
PrintEveryNTweetOutput `PrintEveryNTweetOutput` prints every N-th scraped tweet. This is the best way to confirm that new tweets are still being scraped. The class has only one parameter, `each_n`, which is the N value described above.
###Code
st.PrintEveryNTweetOutput(
each_n=1000
);
###Output
_____no_output_____
###Markdown
PrintFirstInRequestTweetOutput PrintFirstInRequestTweetOutput is a debug TweetOutput. It allows tracking every request and shows the first part of the response.
###Code
st.PrintFirstInRequestTweetOutput();
###Output
_____no_output_____ |
04 OOPS-2/4.4 Polymorphism (Method overriding).ipynb | ###Markdown
Polymorphism - the ability to take multiple forms. One entity appearing in many forms is nothing but polymorphism.
###Code
class Vehicle:
def __init__(self, color, maxSpeed):
self.color = color
        self.__maxSpeed = maxSpeed #Making maxSpeed as private using '__' before the member
def getMaxSpeed(self): #get method is a public function
return self.__maxSpeed
def setMaxSpeed(self, maxSpeed): #set method is a public function
self.__maxSpeed = maxSpeed
def print(self): #Another way of accessing printing private members --> printing within a class
print("Color : ",self.color)
print("MaxSpeed :", self.__maxSpeed)
class Car(Vehicle):
def __init__(self, color, maxSpeed, numGears, isConvertible):
super().__init__(color, maxSpeed) #Inheriting from vehicle class
self.numGears = numGears #Passing via arguments in car class
self.isConvertible = isConvertible
def print(self):
#self.print() #Here there is only one print() and we can use self or super in this example.
print("NumGears :", self.numGears)
print("IsConvertible :", self.isConvertible)
c = Car("red", 35, 5, False)
c.print()
###Output
NumGears : 5
IsConvertible : False
###Markdown
** Here the Vehicle and Car classes have the same method name with the same arguments (so called) ---> Method Overriding (in programming). So, c.print() first searches in the Car class: if print() is present there, it is used, else the lookup goes to its parent class (the Vehicle class in the example below). As we removed the print() from class Car, the control goes to the print() of the Vehicle class (parent class). If print() is not available in the parent class either, the control keeps going up to its parent class, and so on, till it reaches a print() (see the sketch of the method resolution order below).
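A minimal sketch of how Python resolves c.print(): the method resolution order (MRO) lists the classes that are searched, child first and then its parents:
###Code
# Minimal sketch: the method resolution order that Python follows for c.print()
print(Car.__mro__) # (Car, Vehicle, object) -> Car is searched first, then Vehicle, then object
###Output
_____no_output_____
###Markdown
Keeping that search order in mind, compare the variants below.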
###Code
class Vehicle:
def __init__(self, color, maxSpeed):
self.color = color
        self.__maxSpeed = maxSpeed #Making maxSpeed as private using '__' before the member
def getMaxSpeed(self): #get method is a public function
return self.__maxSpeed
def setMaxSpeed(self, maxSpeed): #set method is a public function
self.__maxSpeed = maxSpeed
def print(self): #Another way of accessing printing private members --> printing within a class
print("Color : ",self.color)
print("MaxSpeed :", self.__maxSpeed)
class Car(Vehicle):
def __init__(self, color, maxSpeed, numGears, isConvertible):
super().__init__(color, maxSpeed) #Inheriting from vehicle class
self.numGears = numGears #Passing via arguments in car class
self.isConvertible = isConvertible
# def printCar(self):
    # #super().print() #Instead of super() you can also use self.print() because a car inherits all properties and methods from parent class
# self.print() #Here there is only one print() and we can use self or super in this example.
# print("NumGears :", self.numGears)
# print("IsConvertible :", self.isConvertible)
c = Car("red", 35, 5, False)
c.print()
class Vehicle:
def __init__(self, color, maxSpeed):
self.color = color
        self.__maxSpeed = maxSpeed #Making maxSpeed as private using '__' before the member
def getMaxSpeed(self): #get method is a public function
return self.__maxSpeed
def setMaxSpeed(self, maxSpeed): #set method is a public function
self.__maxSpeed = maxSpeed
# def print(self): #Another way of accessing printing private members --> printing within a class
# print("Color : ",self.color)
# print("MaxSpeed :", self.__maxSpeed)
class Car(Vehicle):
def __init__(self, color, maxSpeed, numGears, isConvertible):
super().__init__(color, maxSpeed) #Inheriting from vehicle class
self.numGears = numGears #Passing via arguments in car class
self.isConvertible = isConvertible
# def printCar(self):
    # #super().print() #Instead of super() you can also use self.print() because a car inherits all properties and methods from parent class
# self.print() #Here there is only one print() and we can use self or super in this example.
# print("NumGears :", self.numGears)
# print("IsConvertible :", self.isConvertible)
c = Car("red", 35, 5, False)
c.print()
###Output
_____no_output_____
###Markdown
In the above example, there is no print() method defined anywhere in the class hierarchy, so calling c.print() raises an AttributeError.
###Code
class Vehicle:
def __init__(self, color, maxSpeed):
self.color = color
        self.__maxSpeed = maxSpeed #Making maxSpeed as private using '__' before the member
def getMaxSpeed(self): #get method is a public function
return self.__maxSpeed
def setMaxSpeed(self, maxSpeed): #set method is a public function
self.__maxSpeed = maxSpeed
def print(self): #Another way of accessing printing private members --> printing within a class
print("Color : ",self.color)
print("MaxSpeed :", self.__maxSpeed)
class Car(Vehicle):
def __init__(self, color, maxSpeed, numGears, isConvertible):
super().__init__(color, maxSpeed) #Inheriting from vehicle class
self.numGears = numGears #Passing via arguments in car class
self.isConvertible = isConvertible
def print(self):
super().print() #Here there is only one print() and we can use self or super in this example.
print("NumGears :", self.numGears)
print("IsConvertible :", self.isConvertible)
c = Car("red", 35, 5, False)
c.print()
print("_____________________________")
v = Vehicle("green", 98)
v.print()
#Predict the Output:
class Vehicle:
def __init__(self,color):
self.color = color
def print(self):
print(c.color,end=" ")
class Car(Vehicle):
def __init__(self,color,numGears):
super().__init__(color)
self.numGears = numGears
def print(self):
print(c.color,end=" ")
print(c.numGears)
c = Car("black",5)
c.print()
#Predict the Output:
class Vehicle:
def __init__(self,color):
self.color = color
def print(self):
print(c.color,end=" ")
class Car(Vehicle):
def __init__(self,color,numGears):
super().__init__(color)
self.numGears = numGears
def print(self):
self.print()
print(c.numGears)
c = Car("black",5)
c.print()
###Output
_____no_output_____ |
Week1/.ipynb_checkpoints/BjetSelection-checkpoint.ipynb | ###Markdown
Classification of b-quark jets in the Aleph simulated data
Python macro for selecting b-jets in Aleph Z->qqbar MC in various ways:
* Initially, simply with "if"-statements making requirements on certain variables. This corresponds to selecting "boxes" in the input variable space (typically called "X"). One could also try a Fisher discriminant (linear combination of input variables), which corresponds to a plane in the X-space. But as the problem is non-linear, it is likely to be sub-optimal.
* Next, using Machine Learning (ML) methods. We will try both tree-based and Neural Net (NN) based methods, and see how complicated (or not) it is to get a good solution, and how much better it performs compared to the "classic" selection method.
In the end, this exercise is a simple start on moving into the territory of multidimensional analysis.
Data: The input variables (X) are:
* energy: Measured energy of the jet in GeV. Should be 45 GeV, but fluctuates.
* cTheta: cos(theta), i.e. the polar angle of the jet with respect to the beam axis. The detector works best in the central region (|cTheta| small) and less well in the forward regions.
* phi: The azimuth angle of the jet. As the detector is uniform in phi, this should not matter (much).
* prob_b: Probability of being a b-jet from the pointing of the tracks to the vertex.
* spheri: Sphericity of the event, i.e. how spherical it is.
* pt2rel: The transverse momentum squared of the tracks relative to the jet axis, i.e. width of the jet.
* multip: Multiplicity of the jet (in a relative measure).
* bqvjet: b-quark vertex of the jet, i.e. the probability of a detached vertex.
* ptlrel: Transverse momentum (in GeV) of possible lepton with respect to jet axis (about 0 if no leptons).
The target variable (Y) is:
* isb: 1 if it is from a b-quark and 0, if it is not.
Finally, those before you (the Aleph collaboration in the mid 90'ies) produced a Neural Net based classification variable, which you can compare to (and compete with?):
* nnbjet: Value of original Aleph b-jet tagging algorithm (for reference).
Task: Thus, the task before you is to produce a function (ML algorithm), which given the input variables X provides an output variable estimate, Y_est, which is "closest possible" to the target variable, Y. The "closest possible" is left to the user to define in a _Loss Function_, which we will discuss further. In classification problems (such as this), the typical loss function to use is "Cross Entropy", see https://en.wikipedia.org/wiki/Cross_entropy.
* Author: Troels C. Petersen (NBI)
* Email: [email protected]
* Date: 20th of April 2021
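Since the loss function will matter later, here is a minimal numerical sketch (toy numbers only) of the binary cross entropy mentioned above:
###Code
# Minimal sketch: binary cross entropy for a few toy (target, estimate) pairs.
import numpy as np
y_true = np.array([1.0, 1.0, 0.0, 0.0]) # toy targets (isb-like labels)
y_est = np.array([0.9, 0.6, 0.2, 0.8]) # toy predicted b-jet probabilities
bce = -np.mean(y_true * np.log(y_est) + (1.0 - y_true) * np.log(1.0 - y_est))
print("binary cross entropy = {:.3f}".format(bce)) # lower is better; confident wrong answers are punished hard
###Output
_____no_output_____
###Markdown
With that in mind, the needed packages are imported below.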
###Code
from __future__ import print_function, division # Ensures Python3 printing & division standard
from matplotlib import pyplot as plt
from matplotlib import colors
from matplotlib.colors import LogNorm
import numpy as np
import csv
###Output
_____no_output_____
###Markdown
Possible other packages to consider: cornerplot, seaplot, sklearn.decomposition (PCA)
###Code
r = np.random
r.seed(42)
SavePlots = False
plt.close('all')
###Output
_____no_output_____
###Markdown
Evaluate an attempt at classification:This is made into a function, as this is called many times. It returns a "confusion matrix" and the fraction of wrong classifications.
###Code
def evaluate(bquark) :
N = [[0,0], [0,0]] # Make a list of lists (i.e. matrix) for counting successes/failures.
for i in np.arange(len(isb)):
if (bquark[i] == 0 and isb[i] == 0) : N[0][0] += 1
if (bquark[i] == 0 and isb[i] == 1) : N[0][1] += 1
if (bquark[i] == 1 and isb[i] == 0) : N[1][0] += 1
if (bquark[i] == 1 and isb[i] == 1) : N[1][1] += 1
fracWrong = float(N[0][1]+N[1][0])/float(len(isb))
return N, fracWrong
###Output
_____no_output_____
###Markdown
Main program start:
###Code
# Get data (with this very useful NumPy reader):
data = np.genfromtxt('AlephBtag_MC_small_v2.csv', names=True)
energy = data['energy']
cTheta = data['cTheta']
phi = data['phi']
prob_b = data['prob_b']
spheri = data['spheri']
pt2rel = data['pt2rel']
multip = data['multip']
bqvjet = data['bqvjet']
ptlrel = data['ptlrel']
nnbjet = data['nnbjet']
isb = data['isb']
###Output
_____no_output_____
###Markdown
Produce 1D figures:Define the histogram range and binning (important - MatPlotLib is NOT good at this):
###Code
Nbins = 100
xmin = 0.0
xmax = 1.0
###Output
_____no_output_____
###Markdown
Make new lists selected based on what the jets really are (b-quark jet or light-quark jet):
###Code
prob_b_bjets = []
prob_b_ljets = []
bqvjet_bjets = []
bqvjet_ljets = []
for i in np.arange(len(isb)) :
if (isb[i] == 1) :
prob_b_bjets.append(prob_b[i])
bqvjet_bjets.append(bqvjet[i])
else :
prob_b_ljets.append(prob_b[i])
bqvjet_ljets.append(bqvjet[i])
# Produce the actual figure, here with two histograms in it:
fig, ax = plt.subplots(figsize=(12, 6)) # Create just a single figure and axes (figsize is in inches!)
hist_prob_b_bjets = ax.hist(prob_b_bjets, bins=Nbins, range=(xmin, xmax), histtype='step', linewidth=2, label='prob_b_bjets', color='blue')
hist_prob_b_ljets = ax.hist(prob_b_ljets, bins=Nbins, range=(xmin, xmax), histtype='step', linewidth=2, label='prob_b_ljets', color='red')
ax.set_xlabel("Probability of b-quark based on track impact parameters") # Label of x-axis
ax.set_ylabel("Frequency / 0.01") # Label of y-axis
ax.set_title("Distribution of prob_b") # Title of plot
ax.legend(loc='best') # Legend. Could also be 'upper right'
ax.grid(axis='y')
fig.tight_layout()
fig.show()
if SavePlots :
fig.savefig('Hist_prob_b_and_bqvjet.pdf', dpi=600)
###Output
_____no_output_____
###Markdown
Produce 2D figures:
###Code
# First we try a scatter plot, to see how the individual events distribute themselves:
fig2, ax2 = plt.subplots(figsize=(12, 6))
scat2_prob_b_vs_bqvjet_bjets = ax2.scatter(prob_b_bjets, bqvjet_bjets, label='b-jets', color='blue')
scat2_prob_b_vs_bqvjet_ljets = ax2.scatter(prob_b_ljets, bqvjet_ljets, label='l-jets', color='red')
ax2.legend(loc='best')
fig2.tight_layout()
fig2.show()
if SavePlots :
fig2.savefig('Scatter_prob_b_vs_bqvjet.pdf', dpi=600)
# However, as can be seen in the figure, the overlap between b-jets and light-jets is large,
# and one covers much of the other in a scatter plot, which also does not show the amount of
# statistics in the dense regions. Therefore, we try two separate 2D histograms (zoomed):
fig3, ax3 = plt.subplots(1, 2, figsize=(12, 6))
hist2_prob_b_vs_bqvjet_bjets = ax3[0].hist2d(prob_b_bjets, bqvjet_bjets, bins=[40,40], range=[[0.0, 0.4], [0.0, 0.4]])
hist2_prob_b_vs_bqvjet_ljets = ax3[1].hist2d(prob_b_ljets, bqvjet_ljets, bins=[40,40], range=[[0.0, 0.4], [0.0, 0.4]])
ax3[0].set_title("b-jets")
ax3[1].set_title("light-jets")
fig3.tight_layout()
fig3.show()
if SavePlots :
fig3.savefig('Hist2D_prob_b_vs_bqvjet.pdf', dpi=600)
###Output
_____no_output_____
###Markdown
Selection:
###Code
# I give the selection cuts names, so that they only need to be changed in ONE place (also ensures consistency!):
loose_propb = 0.10
tight_propb = 0.16
loose_bqvjet = 0.12
tight_bqvjet = 0.28
# If either of the variable clearly indicate b-quark, or of both loosely do so, call it a b-quark, otherwise not!
bquark=[]
for i in np.arange(len(prob_b)):
if (prob_b[i] > tight_propb) :
bquark.append(1)
elif (bqvjet[i] > tight_bqvjet) :
bquark.append(1)
elif ((prob_b[i] > loose_propb) and (bqvjet[i] > loose_bqvjet)) :
bquark.append(1)
else :
bquark.append(0)
###Output
_____no_output_____
###Markdown
Evaluate the selection:
###Code
N, fracWrong = evaluate(bquark)
print("\nRESULT OF HUMAN ATTEMPT AT A GOOD SELECTION:")
print(" First number is my estimate, second is the MC truth:")
print(" True-Negative (0,0) = ", N[0][0])
print(" False-Negative (0,1) = ", N[0][1])
print(" False-Positive (1,0) = ", N[1][0])
print(" True-Positive (1,1) = ", N[1][1])
print(" Fraction wrong = ( (0,1) + (1,0) ) / sum = ", fracWrong)
### Compare with NN-approach from 1990'ies:
bquark=[]
for i in np.arange(len(prob_b)):
if (nnbjet[i] > 0.82) : bquark.append(1)
else : bquark.append(0)
N, fracWrong = evaluate(bquark)
print("\nALEPH BJET TAG:")
print(" First number is my estimate, second is the MC truth:")
print(" True-Negative (0,0) = ", N[0][0])
print(" False-Negative (0,1) = ", N[0][1])
print(" False-Positive (1,0) = ", N[1][0])
print(" True-Positive (1,1) = ", N[1][1])
print(" Fraction wrong = ( (0,1) + (1,0) ) / sum = ", fracWrong)
###Output
_____no_output_____ |
ChatBotTraining.ipynb | ###Markdown
https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html https://medium.com/cindicator/building-chatbot-weekend-of-a-data-scientist-8388d99db093
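The corpus file is plain text with fields separated by ' +++$+++ ', and the utterance text is the last field, which is the only part the parsing below relies on. A minimal sketch on a made-up example row:
###Code
# Minimal sketch: extracting the utterance text from one (made-up, illustrative) movie_lines.txt row.
example_row = "L1045 +++$+++ u0 +++$+++ m0 +++$+++ BIANCA +++$+++ They do not!"
utterance = example_row.split(' +++$+++ ')[-1]
print(utterance) # -> "They do not!"
###Output
_____no_output_____
###Markdown
The training pipeline below reads every row this way, tokenizes it, and builds the encoder/decoder vocabularies.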
###Code
from keras.models import Model
from keras.layers.recurrent import LSTM
from keras.layers import Dense, Input, Embedding
from keras.preprocessing.sequence import pad_sequences
from keras.callbacks import ModelCheckpoint, EarlyStopping
from collections import Counter
import nltk
import numpy as np
from sklearn.model_selection import train_test_split
np.random.seed(2018)
BATCH_SIZE = 128
NUM_EPOCHS = 100
HIDDEN_UNITS = 256
MAX_INPUT_SEQ_LENGTH = 20
MAX_TARGET_SEQ_LENGTH = 20
MAX_VOCAB_SIZE = 100
DATA_PATH = 'data/movie_lines.txt'
TENSORBOARD = 'TensorBoard/'
WEIGHT_FILE_PATH = 'data/word-weights.h5'
input_counter = Counter()
target_counter = Counter()
# read the data
with open(DATA_PATH, 'r', encoding="latin-1") as f:
df = f.read()
rows = df.split('\n')
lines = [row.split(' +++$+++ ')[-1] for row in rows]
input_texts = []
target_texts = []
prev_words = []
for line in lines:
next_words = [w.lower() for w in nltk.word_tokenize(line)]
if len(next_words) > MAX_TARGET_SEQ_LENGTH:
next_words = next_words[0:MAX_TARGET_SEQ_LENGTH]
if len(prev_words) > 0:
input_texts.append(prev_words)
for w in prev_words:
input_counter[w] += 1
target_words = next_words[:]
target_words.insert(0, 'START')
target_words.append('END')
for w in target_words:
target_counter[w] += 1
target_texts.append(target_words)
prev_words = next_words
# encode the data
input_word2idx = dict()
target_word2idx = dict()
for idx, word in enumerate(input_counter.most_common(MAX_VOCAB_SIZE)):
input_word2idx[word[0]] = idx + 2
for idx, word in enumerate(target_counter.most_common(MAX_VOCAB_SIZE)):
target_word2idx[word[0]] = idx + 1
input_word2idx['PAD'] = 0
input_word2idx['UNK'] = 1
target_word2idx['UNK'] = 0
input_idx2word = dict([(idx, word) for word, idx in input_word2idx.items()])
target_idx2word = dict([(idx, word) for word, idx in target_word2idx.items()])
num_encoder_tokens = len(input_idx2word)
num_decoder_tokens = len(target_idx2word)
np.save('data/word-input-word2idx.npy', input_word2idx)
np.save('data/word-input-idx2word.npy', input_idx2word)
np.save('data/word-target-word2idx.npy', target_word2idx)
np.save('data/word-target-idx2word.npy', target_idx2word)
encoder_input_data = []
encoder_max_seq_length = 0
decoder_max_seq_length = 0
for input_words, target_words in zip(input_texts, target_texts):
encoder_input_wids = []
for w in input_words:
w2idx = 1
if w in input_word2idx:
w2idx = input_word2idx[w]
encoder_input_wids.append(w2idx)
encoder_input_data.append(encoder_input_wids)
encoder_max_seq_length = max(len(encoder_input_wids), encoder_max_seq_length)
decoder_max_seq_length = max(len(target_words), decoder_max_seq_length)
context = dict()
context['num_encoder_tokens'] = num_encoder_tokens
context['num_decoder_tokens'] = num_decoder_tokens
context['encoder_max_seq_length'] = encoder_max_seq_length
context['decoder_max_seq_length'] = decoder_max_seq_length
np.save('data/word-context.npy', context)
def generate_batch(input_data, output_text_data):
num_batches = len(input_data) // BATCH_SIZE
while True:
for batchIdx in range(0, num_batches):
start = batchIdx * BATCH_SIZE
end = (batchIdx + 1) * BATCH_SIZE
encoder_input_data_batch = pad_sequences(input_data[start:end], encoder_max_seq_length)
decoder_target_data_batch = np.zeros(shape=(BATCH_SIZE, decoder_max_seq_length, num_decoder_tokens))
decoder_input_data_batch = np.zeros(shape=(BATCH_SIZE, decoder_max_seq_length, num_decoder_tokens))
for lineIdx, target_words in enumerate(output_text_data[start:end]):
for idx, w in enumerate(target_words):
w2idx = 0
if w in target_word2idx:
w2idx = target_word2idx[w]
decoder_input_data_batch[lineIdx, idx, w2idx] = 1
if idx > 0:
decoder_target_data_batch[lineIdx, idx - 1, w2idx] = 1
yield [encoder_input_data_batch, decoder_input_data_batch], decoder_target_data_batch
# Compiling and training
encoder_inputs = Input(shape=(None,), name='encoder_inputs')
encoder_embedding = Embedding(input_dim=num_encoder_tokens, output_dim=HIDDEN_UNITS,
input_length=encoder_max_seq_length, name='encoder_embedding')
encoder_lstm = LSTM(units=HIDDEN_UNITS, return_state=True, name='encoder_lstm')
encoder_outputs, encoder_state_h, encoder_state_c = encoder_lstm(encoder_embedding(encoder_inputs))
encoder_states = [encoder_state_h, encoder_state_c]
decoder_inputs = Input(shape=(None, num_decoder_tokens), name='decoder_inputs')
decoder_lstm = LSTM(units=HIDDEN_UNITS, return_state=True, return_sequences=True, name='decoder_lstm')
decoder_outputs, decoder_state_h, decoder_state_c = decoder_lstm(decoder_inputs,
initial_state=encoder_states)
decoder_dense = Dense(units=num_decoder_tokens, activation='softmax', name='decoder_dense')
decoder_outputs = decoder_dense(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(loss='categorical_crossentropy', optimizer='adam')
json = model.to_json()
open('data/word-architecture.json', 'w').write(json)
X_train, X_test, y_train, y_test = train_test_split(encoder_input_data, target_texts, test_size=0.2, random_state=42)
train_gen = generate_batch(X_train, y_train)
test_gen = generate_batch(X_test, y_test)
train_num_batches = len(X_train) // BATCH_SIZE
test_num_batches = len(X_test) // BATCH_SIZE
cb = [EarlyStopping(monitor='val_loss', patience=5, verbose=1, mode='min', min_delta=0.0001),
ModelCheckpoint(filepath=WEIGHT_FILE_PATH, monitor='val_loss', save_best_only=True)]
model.fit_generator(generator=train_gen,
steps_per_epoch=train_num_batches,
epochs=NUM_EPOCHS,
verbose=1,
validation_data=test_gen,
validation_steps=test_num_batches,
callbacks=cb)
model.save_weights(WEIGHT_FILE_PATH)
###Output
/usr/local/lib/python3.7/site-packages/tensorflow_core/python/framework/indexed_slices.py:433: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
|
Modulo 4 - Analisi per regioni/regioni/Calabria/CALABRIA - SARIMA mensile.ipynb | ###Markdown
CREATION OF THE SARIMA MODEL FOR THE CALABRIA REGION
###Code
import pandas as pd
df = pd.read_csv('../../csv/regioni/calabria.csv')
df.head()
df['DATA'] = pd.to_datetime(df['DATA'])
df.info()
df=df.set_index('DATA')
df.head()
###Output
_____no_output_____
###Markdown
Creating the time series of total deaths for the Calabria region
###Code
ts = df.TOTALE
ts.head()
from datetime import datetime
from datetime import timedelta
start_date = datetime(2015,1,1)
end_date = datetime(2020,9,30)
lim_ts = ts[start_date:end_date]
# visualize the plot
import matplotlib.pyplot as plt
plt.figure(figsize=(12,6))
plt.title('Decessi mensili regione Calabria dal 2015 a settembre 2020', size=20)
plt.plot(lim_ts)
for year in range(start_date.year,end_date.year+1):
plt.axvline(pd.to_datetime(str(year)+'-01-01'), color='k', linestyle='--', alpha=0.5)
###Output
_____no_output_____
###Markdown
Decomposition
###Code
from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(ts, period=12, two_sided=True, extrapolate_trend=1, model='multiplicative')
ts_trend = decomposition.trend #trend component
ts_seasonal = decomposition.seasonal #seasonality
ts_residual = decomposition.resid #residual component
plt.subplot(411)
plt.plot(ts,label='original')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(ts_trend,label='trend')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(ts_seasonal,label='seasonality')
plt.legend(loc='best')
plt.subplot(414)
plt.plot(ts_residual,label='residual')
plt.legend(loc='best')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Stationarity test
###Code
from statsmodels.tsa.stattools import adfuller
def test_stationarity(timeseries):
dftest = adfuller(timeseries, autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
critical_value = dftest[4]['5%']
test_statistic = dftest[0]
alpha = 1e-3
pvalue = dftest[1]
if pvalue < alpha and test_statistic < critical_value: # null hypothesis: x is non stationary
print("X is stationary")
return True
else:
print("X is not stationary")
return False
test_stationarity(ts)
###Output
X is not stationary
###Markdown
Train/Test split. Train: January 2015 to October 2019; Test: November 2019 to December 2019.
###Code
from datetime import datetime
train_end = datetime(2019,10,31)
test_end = datetime (2019,12,31)
covid_end = datetime(2020,8,31)
from dateutil.relativedelta import *
tsb = ts[:test_end]
decomposition = seasonal_decompose(tsb, period=12, two_sided=True, extrapolate_trend=1, model='multiplicative')
tsb_trend = decomposition.trend #trend component
tsb_seasonal = decomposition.seasonal #seasonality
tsb_residual = decomposition.resid #residual component
tsb_diff = pd.Series(tsb_trend)
d = 0
while test_stationarity(tsb_diff) is False:
tsb_diff = tsb_diff.diff().dropna()
d = d + 1
print(d)
#TRAIN: from 2015-01-01 to 2019-10-31
train = tsb[:train_end]
#TEST: November and December 2019
test = tsb[train_end + relativedelta(months=+1): test_end]
###Output
X is not stationary
X is not stationary
X is not stationary
X is stationary
3
###Markdown
Autocorrelation and partial autocorrelation plots
###Code
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
plot_acf(ts, lags =12)
plot_pacf(ts, lags =12)
plt.show()
###Output
_____no_output_____
###Markdown
Building the SARIMA model on the training set
###Code
from statsmodels.tsa.statespace.sarimax import SARIMAX
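# Non-seasonal ARIMA order (p, d, q) = (12, 1, 1); no seasonal_order is passed,
# so any monthly seasonality can only be picked up by the 12 autoregressive lags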
model = SARIMAX(train, order=(12,1,1))
model_fit = model.fit()
print(model_fit.summary())
###Output
c:\users\monta\appdata\local\programs\python\python38\lib\site-packages\statsmodels\tsa\base\tsa_model.py:524: ValueWarning: No frequency information was provided, so inferred frequency M will be used.
warnings.warn('No frequency information was'
c:\users\monta\appdata\local\programs\python\python38\lib\site-packages\statsmodels\tsa\base\tsa_model.py:524: ValueWarning: No frequency information was provided, so inferred frequency M will be used.
warnings.warn('No frequency information was'
c:\users\monta\appdata\local\programs\python\python38\lib\site-packages\statsmodels\tsa\statespace\sarimax.py:965: UserWarning: Non-stationary starting autoregressive parameters found. Using zeros as starting parameters.
warn('Non-stationary starting autoregressive parameters'
###Markdown
Checking that the residuals of the fitted model are stationary
###Code
residuals = model_fit.resid
test_stationarity(residuals)
plt.figure(figsize=(12,6))
plt.title('Confronto valori previsti dal modello con valori reali del Train', size=20)
plt.plot (train.iloc[1:], color='red', label='train values')
plt.plot (model_fit.fittedvalues.iloc[1:], color = 'blue', label='model values')
plt.legend()
plt.show()
conf = model_fit.conf_int()
plt.figure(figsize=(12,6))
plt.title('Intervalli di confidenza del modello', size=20)
plt.plot(conf)
plt.xticks(rotation=45)
plt.show()
###Output
_____no_output_____
###Markdown
Model predictions on the test set
###Code
#prediction start and end
pred_start = test.index[0]
pred_end = test.index[-1]
#pred_start= len(train)
#pred_end = len(tsb)
#model predictions on the test set
predictions_test= model_fit.predict(start=pred_start, end=pred_end)
plt.plot(test, color='red', label='actual')
plt.plot(predictions_test, label='prediction' )
plt.xticks(rotation=45)
plt.legend()
plt.show()
print(predictions_test)
# Accuracy metrics
import numpy as np
def forecast_accuracy(forecast, actual):
    mape = np.mean(np.abs(forecast - actual)/np.abs(actual))  # MAPE: mean absolute percentage error
    me = np.mean(forecast - actual)             # ME: mean error
    mae = np.mean(np.abs(forecast - actual))    # MAE: mean absolute error
    mpe = np.mean((forecast - actual)/actual)   # MPE: mean percentage error
    rmse = np.mean((forecast - actual)**2)**.5  # RMSE: root mean squared error
    corr = np.corrcoef(forecast, actual)[0,1]   # corr: correlation between forecast and actual
mins = np.amin(np.hstack([forecast[:,None],
actual[:,None]]), axis=1)
maxs = np.amax(np.hstack([forecast[:,None],
actual[:,None]]), axis=1)
    minmax = 1 - np.mean(mins/maxs)             # minmax: min-max error
return({'mape':mape, 'me':me, 'mae': mae,
'mpe': mpe, 'rmse':rmse,
'corr':corr, 'minmax':minmax})
forecast_accuracy(predictions_test, test)
import numpy as np
from statsmodels.tools.eval_measures import rmse
nrmse = rmse(predictions_test, test)/(np.max(test)-np.min(test))
print('NRMSE: %f'% nrmse)
###Output
NRMSE: 1.833846
###Markdown
Model predictions over the full period, including 2020
###Code
#prediction start and end
start_prediction = ts.index[0]
end_prediction = ts.index[-1]
predictions_tot = model_fit.predict(start=start_prediction, end=end_prediction)
plt.figure(figsize=(12,6))
plt.title('Previsione modello su dati osservati - dal 2015 al 30 settembre 2020', size=20)
plt.plot(ts, color='blue', label='actual')
plt.plot(predictions_tot.iloc[1:], color='red', label='predict')
plt.xticks(rotation=45)
plt.legend(prop={'size': 12})
plt.show()
diff_predictions_tot = (ts - predictions_tot)
plt.figure(figsize=(12,6))
plt.title('Differenza tra i valori osservati e i valori stimati del modello', size=20)
plt.plot(diff_predictions_tot)
plt.show()
diff_predictions_tot['24-02-2020':].sum()
predictions_tot.to_csv('../../csv/pred/predictions_SARIMA_calabria.csv')
###Output
_____no_output_____
###Markdown
Confidence intervals of the full prediction
###Code
forecast = model_fit.get_prediction(start=start_prediction, end=end_prediction)
in_c = forecast.conf_int()
print(forecast.predicted_mean)
print(in_c)
print(forecast.predicted_mean - in_c['lower TOTALE'])
plt.plot(in_c)
plt.show()
upper = in_c['upper TOTALE']
lower = in_c['lower TOTALE']
lower.to_csv('../../csv/lower/predictions_SARIMA_calabria_lower.csv')
upper.to_csv('../../csv/upper/predictions_SARIMA_calabria_upper.csv')
###Output
_____no_output_____ |
Ch12_computational-performance/12-2mx.ipynb | ###Markdown
http://preview.d2l.ai/d2l-en/master/chapter_computational-performance/async-computation.html
###Code
from d2l import mxnet as d2l
import numpy, os, subprocess
from mxnet import autograd, gluon, np, npx
from mxnet.gluon import nn
npx.set_np()
with d2l.Benchmark('numpy'):
for _ in range(10):
a = numpy.random.normal(size=(1000, 1000))
b = numpy.dot(a, a)
with d2l.Benchmark('mxnet.np'):
for _ in range(10):
a = np.random.normal(size=(1000, 1000))
b = np.dot(a, a)
with d2l.Benchmark():
for _ in range(10):
a = np.random.normal(size=(1000, 1000))
b = np.dot(a, a)
npx.waitall()
with d2l.Benchmark():
for _ in range(10):
a = np.random.normal(size=(1000, 1000))
b = np.dot(a, a)
npx.waitall()
x = np.ones((1, 2))
y = np.ones((1, 2))
z = x * y + 2
z
with d2l.Benchmark('waitall'):
b = np.dot(a, a)
npx.waitall()
with d2l.Benchmark('wait_to_read'):
b = np.dot(a, a)
b.wait_to_read()
with d2l.Benchmark('numpy conversion'):
b = np.dot(a, a)
b.asnumpy()
with d2l.Benchmark('scalar conversion'):
b = np.dot(a, a)
b.sum().item()
with d2l.Benchmark('synchronous'):
for _ in range(1000):
y = x + 1
y.wait_to_read()
with d2l.Benchmark('asynchronous'):
for _ in range(1000):
y = x + 1
y.wait_to_read()
def data_iter():
timer = d2l.Timer()
num_batches, batch_size = 150, 1024
for i in range(num_batches):
X = np.random.normal(size=(batch_size, 512))
y = np.ones((batch_size,))
yield X, y
if (i + 1) % 50 == 0:
print(f'batch {i + 1}, time {timer.stop():.4f} sec')
net = nn.Sequential()
net.add(nn.Dense(2048, activation='relu'),
nn.Dense(512, activation='relu'), nn.Dense(1))
net.initialize()
trainer = gluon.Trainer(net.collect_params(), 'sgd')
loss = gluon.loss.L2Loss()
def get_mem():
res = subprocess.check_output(['ps', 'u', '-p', str(os.getpid())])
return int(str(res).split()[15]) / 1e3
for X, y in data_iter():
break
loss(y, net(X)).wait_to_read()
mem = get_mem()
with d2l.Benchmark('time per epoch'):
for X, y in data_iter():
with autograd.record():
l = loss(y, net(X))
l.backward()
trainer.step(X.shape[0])
l.wait_to_read() # Barrier before a new batch
npx.waitall()
print(f'increased memory: {get_mem() - mem:f} MB')
mem = get_mem()
with d2l.Benchmark('time per epoch'):
for X, y in data_iter():
with autograd.record():
l = loss(y, net(X))
l.backward()
trainer.step(X.shape[0])
npx.waitall()
print(f'increased memory: {get_mem() - mem:f} MB')
###Output
_____no_output_____ |
docker/dockerfiles/jupyter/docker/notebooks/PNDA minimal notebook.ipynb | ###Markdown
Minimal PNDA Jupyter notebook. `%matplotlib notebook` must be set before `import matplotlib.pyplot as plt`, or plotting with matplotlib will fail.
###Code
%matplotlib notebook
import matplotlib.pyplot as plt
import sys
import pandas as pd
import matplotlib
print(u'▶ Python version ' + sys.version)
print(u'▶ Pandas version ' + pd.__version__)
print(u'▶ Matplotlib version ' + matplotlib.__version__)
import numpy as np
values = np.random.rand(100)
df = pd.DataFrame(data=values, columns=['RandomValue'])
df.head(10)
df.plot()
###Output
_____no_output_____ |
Reinforcement Learning Summer School 2019 (Lille, France)/practical_rec_systems/practical_rec_systems.ipynb | ###Markdown
Bandits for Recommendation - RLSS 2019. For convenience, the slides used for the presentation are available at https://www.dropbox.com/s/6px7a37qddstgtn/RLSS.pdf?dl=0 The objective of this notebook is to apply bandit algorithms to the recommendation problem using a simulated environment. Although in practice you would also use real data, the complexity of the recommendation problem and the associated algorithmic challenges can already be revealed even in this simple setting. Controlled recommendation environment. In the simulated environment, a user browses a website and might click on recommendations served by a recommendation agent. The goal of the agent is to maximize the number of clicks. The simulation is going to be a little more involved than the ideal situation from the previous TP. In particular, we are going to work on a stream of users who also generate some "organic" observations, meaning that you collect some browsing events and have to keep serving recommendations until each user decides to leave. Define action, observation, reward: * Action -- a recommended item (e.g., in e-commerce it can be a product). Here the simulator will be set to have 100 possible products. * Reward -- user interaction with the recommendation (e.g., a click). * Observation -- user activity (e.g., the list of products that the user has visited during his/her browsing session). Here we report only "organic" events, which are not the recommended item and can occur even if the recommendation was not used.
###Code
from reco_env import RecoEnv, env_1_args
from configuration import Configuration
from agent import Agent, RandomAgent, random_args
# you can overwrite environment arguments here
RND_SEED = 1234
env_1_args['random_seed'] = RND_SEED
# create environment with configuration
env = RecoEnv(Configuration({
**env_1_args,
}))
env.reset()
# random agent
rand_agent = RandomAgent(Configuration({
**random_args,
}))
# counting steps
i = 0
observation, _, _, _ = env.step(None)
reward, done = 0, False
while not done:
# choose action given current observation
action = rand_agent.act(observation)
# execute action in the environment
observation, reward, done, info = env.step(action['a'])
print(f"Step: {i} - Action: {action} - Observation: {observation} - Reward: {reward}")
i += 1
###Output
_____no_output_____
###Markdown
Simulation of user response to recommendation. User response to a recommendation is modeled as a function of (1) the affinity of the user to the recommended product, and (2) a correction due to the recommendation. $\mu(u,p,t) := f(\Lambda(u,p,t) + \epsilon(u,p,t))$, where $f$ is an increasing function, $\Lambda(u,p,t)$ is the log odds of user $u$ being interested in product $p$ at time $t$, and $\epsilon(u,p,t)$ is a zero-mean correction. Assuming a latent space for users and products, let $\omega \in \mathbb{R}^K$ be the latent representation of user $u$ of size $K$ and $\beta \in \mathbb{R}^K$ the latent representation of product $p$; then the user response to a recommendation can be modeled as $\mu(u,p,t) := \text{sigmoid}(\beta^T \omega + \mu_\epsilon)$, where $\omega_i \sim \mathcal{N}(0, \sigma^2_\omega)$, $\beta_i \sim \mathcal{N}(0, 1)$, $\mu_\epsilon \sim \mathcal{N}(0, \sigma^2_\mu)$. In advertising, typical values for $\mu(u,p,t)$ are around $0.02$. Online learning using the recommendation environment. We introduce functions that train, evaluate and plot the evaluation metrics of recommendation agents.
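Before that, the next cell gives a quick, purely illustrative sketch of the user-response model above. The latent dimension `K`, the noise scales and the random seed are arbitrary example values, not the ones used inside the simulator.
###Code
# Illustrative only: draw latent user/product vectors as in the model above and
# compute the click probability mu(u, p) = sigmoid(beta^T omega + mu_eps).
# K, sigma_omega and sigma_mu are arbitrary example values.
import numpy as np

K, sigma_omega, sigma_mu = 5, 1.0, 1.0
rng = np.random.default_rng(0)
omega = rng.normal(0, sigma_omega, size=K)   # latent user representation
beta = rng.normal(0, 1, size=K)              # latent product representation
mu_eps = rng.normal(0, sigma_mu)             # zero-mean correction term

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

mu = sigmoid(beta @ omega + mu_eps)          # probability of clicking the recommendation
print(f"simulated click probability: {mu:.4f}")
###Output
_____no_output_____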
###Code
from train_eval_utils import train_eval_agents, plot_ctr
help(train_eval_agents)
help(plot_ctr)
###Output
_____no_output_____
###Markdown
Bandits. Code the bandit algorithms that you have already seen in class. In this part the simulator is configured in such a way that we are actually facing the stochastic bandit setting: all the users are the same and their preferences are not evolving. UCB algorithm [Auer 2002]. Code the agent that runs the UCB algorithm on the product click-through rate (number of clicks / number of displays). * First, we provide the code for the bound based on Hoeffding's inequality: $I_k = argmax_k \hat{\mu}_{k,n} + \sqrt{\frac{2 \log t}{n}}$ * Then, improve it by tuning $\alpha$: $I_k = argmax_k \hat{\mu}_{k,n} + \alpha \sqrt{\frac{2 \log t}{n}}$ * Finally, use the fact that a click is a Bernoulli random variable to obtain a sharper bound
###Code
import numpy as np
# Implement an Agent interface
class AlphaUCBAgent(Agent):
def __init__(self, config):
super(AlphaUCBAgent, self).__init__(config)
# Init with ones to avoid division by zero
self.product_rewards = np.zeros(self.config.num_products, dtype=np.float32)
self.product_counts = np.ones(self.config.num_products, dtype=np.float32)
# alpha parameter (already tuned)
self.alpha = 0.01
def train(self, observation, action, reward, done):
"""Train from observed data"""
if reward is not None and action is not None:
self.product_rewards[action] += reward
self.product_counts[action] += 1
def act(self, observation):
"""Return an action given current observation"""
t = sum(self.product_counts)
        # UCB index: empirical mean reward of each product plus an exploration bonus
        ucb = self.product_rewards / self.product_counts + self.alpha * np.sqrt(2.0*np.log(t)/self.product_counts)
action = np.argmax(ucb)
return { 'a': action }
# TODO: code the beta bound
def ucb_bound(num_clicks, num_displays):
return ..
class BetaUCBAgent(Agent):
def __init__(self, config):
super(BetaUCBAgent, self).__init__(config)
self.product_rewards = np.zeros(self.config.num_products, dtype=np.float32)
self.product_counts = np.ones(self.config.num_products, dtype=np.float32)
self.ucb_func = np.vectorize(ucb_bound)
def train(self, observation, action, reward, done):
if reward is not None and action is not None:
self.product_rewards[action] += reward
self.product_counts[action] += 1
def act(self, observation):
ucb = self.ucb_func(self.product_rewards, self.product_counts)
action = np.argmax(ucb)
return { 'a': action }
###Output
_____no_output_____
###Markdown
Compare the UCB agents' performance and running time in the stochastic bandit setting. * Train and evaluate UCB with the Hoeffding bound and the exact bound. * Achieve similar performance by tuning the $\alpha$ parameter. * Compare the running time.
###Code
# number of products to recommend
num_products = 10
# number of users for train and evaluation
num_train_users, num_eval_users = 1000, 1000
custom_args = { 'num_products': num_products,
'random_seed': RND_SEED,
}
config = Configuration({
**env_1_args,
**custom_args,
})
alpha_ucb_agent = AlphaUCBAgent(config)
beta_ucb_agent = BetaUCBAgent(config)
rand_agent = RandomAgent(config)
agents = [rand_agent, alpha_ucb_agent, beta_ucb_agent]
stats = train_eval_agents(agents, config, num_train_users, num_eval_users)
print(stats)
plot_ctr(stats)
###Output
_____no_output_____
###Markdown
Exp3 / Boltzmann exploration algorithm. * Code the agent that runs the Exp3 / Boltzmann exploration algorithm. The adversarial setting is going to be described later in the course, but you can find it on Wikipedia: https://en.wikipedia.org/wiki/Multi-armed_bandit (Exp3 section). __Remark:__ *it is possible to obtain an exponential speedup of the sampling by storing the weights in a binary tree containing partial sums, see http://timvieira.github.io/blog/post/2016/11/21/heaps-for-incremental-computation/ ; a minimal sketch of this idea is given in the next cell.* * Tune the temperature parameter
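The sketch below illustrates that remark and is not needed to complete the exercise; the `SumTree` class name and interface are ours, not part of the provided code, and the weights are assumed to be positive (e.g., the exponentiated scores used by Boltzmann exploration).
###Code
# Illustrative only: a sum tree where each internal node stores the sum of its children,
# so updating one weight and sampling an index proportionally to the weights are both O(log K).
import numpy as np

class SumTree:
    def __init__(self, num_items):
        self.n = num_items
        self.tree = np.zeros(2 * num_items)  # leaves live at positions n .. 2n-1

    def update(self, idx, weight):
        i = idx + self.n                     # leaf holding item idx
        self.tree[i] = weight
        i //= 2
        while i >= 1:                        # propagate the new sums up to the root
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]
            i //= 2

    def sample(self):
        u = np.random.rand() * self.tree[1]  # tree[1] holds the total weight
        i = 1
        while i < self.n:                    # descend towards a leaf
            left = 2 * i
            if u <= self.tree[left]:
                i = left
            else:
                u -= self.tree[left]
                i = left + 1
        return i - self.n                    # index of the sampled item

# tiny usage example: 4 items with unnormalised weights 1, 2, 3, 4
st = SumTree(4)
for k, w in enumerate([1.0, 2.0, 3.0, 4.0]):
    st.update(k, w)
print([st.sample() for _ in range(10)])
###Output
_____no_output_____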
###Code
from scipy.special import logsumexp
from numpy.random import choice
class Exp3Agent(Agent):
def __init__(self, config):
super(Exp3Agent, self).__init__(config)
self.product_rewards = np.zeros(self.config.num_products, dtype=np.float32)
self.product_counts = np.ones(self.config.num_products, dtype=np.float32)
# TODO: tune softmax temperature
self.eta = ..
def train(self, observation, action, reward, done):
if reward is not None and action is not None:
self.product_rewards[action] += reward
self.product_counts[action] += 1
def log_softmax(self, vec):
return vec - logsumexp(vec)
def softmax(self, vec):
probs = np.exp(self.log_softmax(vec))
probs /= probs.sum()
return probs
def act(self, observation):
# TODO: compute probability of choosing action using Boltzmann exploration
prob = ..
# TODO: sample an action
action = ..
return { 'a': action, 'ps': prob[action] }
###Output
_____no_output_____
###Markdown
Compare the UCB and Exp3 agents' performance and running time in the stochastic bandit setting. * Train and evaluate the UCB and Exp3 algorithms against random product recommendation. * Increase the number of products to 100 and explain the change.
###Code
# number of products to recommend
num_products = 10
# number of users for train and evaluation
num_train_users, num_eval_users = 1000, 1000
custom_args = { 'num_products': num_products,
'random_seed': RND_SEED,
}
config = Configuration({
**env_1_args,
**custom_args,
})
ucb_agent = AlphaUCBAgent(config)
exp3_agent = Exp3Agent(config)
rand_agent = RandomAgent(config)
agents = [rand_agent, ucb_agent, exp3_agent]
stats = train_eval_agents(agents, config, num_train_users, num_eval_users)
print(stats)
plot_ctr(stats)
###Output
_____no_output_____
###Markdown
Compare the UCB and Exp3 agents in a non-stationary setting. Now we set the simulation to update the users' preferences over time, though they are initialized equally (using the random_seed given in custom_args). This modeling of the user state change leads to a non-stationary user response to a recommendation, and the change depends on what you recommended.
###Code
num_products = 100
num_train_users, num_eval_users = 1000, 1000
# adversarial setting: increase user state change during time
custom_args = { 'num_products': num_products,
'random_seed': RND_SEED,
'sigma_omega': 0.3
}
config = Configuration({
**env_1_args,
**custom_args,
})
ucb_agent = AlphaUCBAgent(config)
exp3_agent = Exp3Agent(config)
rand_agent = RandomAgent(config)
agents = [rand_agent, ucb_agent, exp3_agent]
stats = train_eval_agents(agents, config, num_train_users, num_eval_users)
print(stats)
plot_ctr(stats)
###Output
_____no_output_____
###Markdown
Compare sample complexity. For $\sigma$-subgaussian distributions: * UCB: $R_T = \mathcal{O}(\sum_{i>1} \frac{\log T}{\Delta_i})$ * Exp3: $R_T = \mathcal{O}(\sum_{i>1}\frac{\log^2 (T \Delta_i^2)}{\Delta_i})$ Distribution-independent: * UCB: $R_T = \mathcal{O}(\sqrt{KT\log T})$ * Exp3: $R_T = \mathcal{O}(\sqrt{KT}\log K)$ Using side data: browsing events. To make the algorithms more sample efficient, we will use side data to bootstrap the recommendation. Specifically, we will use the user browsing events (aka "organic" events). The amount of browsing data is typically much larger than the number of clicks on recommendations. Popularity agent. The simplest recommendation agent is based on the total number of views of each product during browsing.
###Code
class PopularityAgent(Agent):
def __init__(self, config):
super(PopularityAgent, self).__init__(config)
# Track number of times each item is viewed during browsing
self.nb_views = np.ones(self.config.num_products)
def train(self, observation, action, reward, done):
if observation:
for view in observation:
# TODO: code the update for nb_views that increments view counts
..
def act(self, observation):
# TODO: compute probability of choosing an action proportionally to the total number of views
prob = ..
# TODO: choose action
action = ..
return { 'a': action, 'ps': prob[action] }
###Output
_____no_output_____
###Markdown
Contextual Bandit. Improve the popularity agent by personalizing the popularity to the user's interest. We represent the user's interest by the last product he/she has seen.
###Code
from scipy.special import logsumexp
class ContextualExp3Agent(Agent):
def __init__(self, config):
super(ContextualExp3Agent, self).__init__(config)
self.product_rewards = np.zeros((self.config.num_products, self.config.num_products))
self.last_product_seen = None
# softmax temperature parameter
self.eta = 0.01
def update_lps(self, observation):
"""Update the last product seen based on the current observation"""
if observation:
self.last_product_seen = observation[-1]
def train(self, observation, action, reward, done):
if observation:
# TODO: code the update for product_rewards matrix
..
def log_softmax(self, vec):
return vec - logsumexp(vec)
def softmax(self, vec):
probs = np.exp(self.log_softmax(vec))
probs /= probs.sum()
return probs
def act(self, observation):
self.update_lps(observation)
# TODO: compute probability of choosing an action given last seen product by the user
prob = ..
# TODO: choose action
action = ..
return { 'a': action, 'ps': prob[action] }
###Output
_____no_output_____
###Markdown
Compare agents that use click events to agents that use view events
###Code
num_products = 100
num_train_users, num_eval_users = 1000, 1000
# increase difference among users
custom_args = { 'num_products': num_products,
'random_seed': RND_SEED,
'sigma_omega_initial': 2.0,
}
config = Configuration({
**env_1_args,
**custom_args,
})
contextual_exp3_agent = ContextualExp3Agent(config)
exp3_agent = Exp3Agent(config)
pop_agent = PopularityAgent(config)
ucb_agent = AlphaUCBAgent(config)
rand_agent = RandomAgent(config)
agents = [pop_agent, contextual_exp3_agent, exp3_agent, ucb_agent, rand_agent]
stats = train_eval_agents(agents, config, num_train_users, num_eval_users)
print(stats)
plot_ctr(stats)
###Output
_____no_output_____ |
05_Duplicate_Questions/02_Duplicate_Questions_FastText.ipynb | ###Markdown
Question Pairs. In the last notebook we used packed padded sequences to do a binary classification on questions: we were classifying whether questions are duplicates or not. In this notebook we are going to create a modified `FastText` model that will do the same task. **Note**: The rest of the notebook will remain unchanged from the previous one. Where there's a change, I will highlight it. Imports
###Code
import time, os, torch, random, math
from prettytable import PrettyTable
import numpy as np
from matplotlib import pyplot as plt
import pandas as pd
import torch, os, random
from torch import nn
import torch.nn.functional as F
torch.__version__
###Output
_____no_output_____
###Markdown
SEEDS
###Code
SEED = 42
np.random.seed(SEED)
random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
###Output
_____no_output_____
###Markdown
Device
###Code
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
###Output
_____no_output_____
###Markdown
Mounting the google drive
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
Paths to data
###Code
base_path = '/content/drive/MyDrive/NLP Data/duplicates-questions'
train_path = 'train.csv'
val_path = 'val.csv'
test_path = 'test.csv'
os.path.exists(base_path)
###Output
_____no_output_____
###Markdown
Data Loading. This is a binary classification task where we are going to predict whether questions are duplicates or not. We are going to have 2 inputs, two different questions, that map to one label: is_duplicate (1) or is_not_duplicate (0). We are going to create the fields of our data. FastText. According to the FastText paper we need to generate bigrams for each question. We are going to create a function called ``generate_bigrams()`` that will generate bigrams for both of these input questions. We will pass this function to the text field as the preprocessing function. What do we have? We have three `csv` files, one for each set, which makes it easy to create the dataset for this task.
###Code
def generate_bigrams(x):
x = [i.lower() for i in x]
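    # pair each token with the next one (the list zipped with itself shifted by one) to form bigrams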
n_grams = set(zip(*[x[i: ] for i in range(2)]))
for n_gram in n_grams:
x.append(' '.join(n_gram))
return x
generate_bigrams(['What', 'is', 'the', 'meaning', "of", "OCR", "in", "python"])
from torchtext.legacy import data
###Output
_____no_output_____
###Markdown
Fields
###Code
TEXT = data.Field(
tokenize = 'spacy',
tokenizer_language = 'en_core_web_sm',
preprocessing = generate_bigrams,
)
LABEL = data.LabelField(dtype = torch.float)
fields = {
"question1": ("qn1", TEXT),
"question2": ("qn2", TEXT),
"is_duplicate": ("label", LABEL),
}
###Output
_____no_output_____
###Markdown
Next we will create our dataset using our favorite class from torchtext, `TabularDataset`. We are going to load the data, which is in `csv` format, as follows.
###Code
train_data, val_data, test_data = data.TabularDataset.splits(
base_path,
train=train_path,
test= test_path,
validation= val_path,
format = "csv",
fields=fields
)
print(vars(train_data.examples[0]))
###Output
{'qn1': ['is', 'it', 'right', 'for', 'a', 'woman', 'to', 'date', 'someone', '2', '-', '3', 'years', 'younger', 'than', 'her', '?', 'is it', 'for a', 'date someone', 'a woman', 'woman to', 'to date', 'someone 2', '2 -', 'younger than', 'it right', 'her ?', '3 years', 'right for', '- 3', 'than her', 'years younger'], 'qn2': ['is', 'it', 'strange', 'to', 'have', 'a', 'crush', 'on', 'someone', 'say', '17', 'years', 'younger', 'than', 'me', '?', 'crush on', 'is it', 'to have', 'have a', 'me ?', 'on someone', 'younger than', 'strange to', 'than me', 'it strange', 'say 17', 'someone say', 'a crush', 'years younger', '17 years'], 'label': '0'}
###Markdown
Next we will build the vocabulary. We are going to use the pretrained `glove.6B.100d` word vectors, which were trained on a corpus of about 6 billion tokens of English text.
###Code
MAX_VOCAB_SIZE = 100_000
TEXT.build_vocab(
train_data,
max_size = MAX_VOCAB_SIZE,
vectors = "glove.6B.100d",
unk_init = torch.Tensor.normal_
)
LABEL.build_vocab(train_data)
LABEL.vocab.stoi
###Output
_____no_output_____
###Markdown
Creating iterators. We are going to use the `BucketIterator` to create iterators for all these sets that we have.
###Code
sort_key = lambda x: len(x.qn1)
BATCH_SIZE = 128
train_iter, val_iter, test_iter = data.BucketIterator.splits(
(train_data, val_data, test_data),
device = device,
batch_size = BATCH_SIZE,
sort_key = sort_key,
sort_within_batch=True
)
###Output
_____no_output_____
###Markdown
Next we are going to create the model. We are going to have two inputs, Question1 and Question2. * Each question is passed through its own embedding layer. * The two embedded sequences are then concatenated, average-pooled, and passed through a linear layer for the prediction.
###Code
class DuplicateQuestionsFastText(nn.Module):
def __init__(self,
vocab_size,
embedding_size,
output_dim,
pad_index,
dropout=.5
):
super(DuplicateQuestionsFastText, self).__init__()
self.embedding_1 = nn.Embedding(
vocab_size,
embedding_size,
padding_idx = pad_index
)
self.embedding_2 = nn.Embedding(
vocab_size,
embedding_size,
padding_idx = pad_index
)
self.out = nn.Linear(
embedding_size,
out_features = output_dim
)
self.dropout = nn.Dropout(dropout)
def forward(self,
question1,
question2,
):
embedded_1 = self.embedding_1(question1).permute(1 ,0, 2)
embedded_2 = self.embedding_2(question2).permute(1 ,0, 2)
embedded = self.dropout(torch.cat((embedded_1, embedded_2), dim=1))
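        # the two embedded sequences are concatenated along the sequence dimension and averaged below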
pooled = F.avg_pool2d(embedded,
(embedded.shape[1], 1)
).squeeze(1)
return self.out(pooled)
###Output
_____no_output_____
###Markdown
Creating the model instance.
###Code
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
OUTPUT_DIM = 1
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
duplicate_questions_model = DuplicateQuestionsFastText(
INPUT_DIM,
EMBEDDING_DIM,
OUTPUT_DIM,
pad_index = PAD_IDX
).to(device)
duplicate_questions_model
###Output
_____no_output_____
###Markdown
Model parameters
###Code
def count_trainable_params(model):
return sum(p.numel() for p in model.parameters()), sum(p.numel() for p in model.parameters() if p.requires_grad)
n_params, trainable_params = count_trainable_params(duplicate_questions_model)
print(f"Total number of paramaters: {n_params:,}\nTotal tainable parameters: {trainable_params:,}")
###Output
Total number of paramaters: 20,000,501
Total tainable parameters: 20,000,501
###Markdown
Loading the pretrained vectors into the `embedding` layers. * Since we now have two embedding layers in the model, we need to copy the word vectors into each embedding layer as follows:
###Code
pretrained_embeddings = TEXT.vocab.vectors
duplicate_questions_model.embedding_1.weight.data.copy_(
pretrained_embeddings
)
duplicate_questions_model.embedding_2.weight.data.copy_(
pretrained_embeddings
)
###Output
_____no_output_____
###Markdown
Zeroing the `<unk>` and `<pad>` tokens. These tokens are not actually needed for training, which is why we zero their embedding vectors. We do this for both embedding layers in the model.
###Code
UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token] or TEXT.vocab.stoi["<unk>"]
duplicate_questions_model.embedding_1.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
duplicate_questions_model.embedding_1.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)
duplicate_questions_model.embedding_2.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
duplicate_questions_model.embedding_2.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)
duplicate_questions_model.embedding_1.weight.data
###Output
_____no_output_____
###Markdown
Loss and optimizer. For the optimizer we are going to use `Adam()` with default parameters, and for the loss function we are going to use `BCEWithLogitsLoss()`, since we are doing binary classification.
###Code
optimizer = torch.optim.Adam(duplicate_questions_model.parameters())
criterion = nn.BCEWithLogitsLoss().to(device)
###Output
_____no_output_____
###Markdown
Accuracy function. For the accuracy we are going to create a `binary_accuracy` function that takes the predicted and actual labels and returns the accuracy as a value between 0 and 1.
###Code
def binary_accuracy(y_preds, y_true):
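    # round the sigmoid of the logits to 0/1 and compare with the ground-truth labels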
rounded_preds = torch.round(torch.sigmoid(y_preds))
correct = (rounded_preds == y_true).float()
return correct.sum() / len(correct)
###Output
_____no_output_____
###Markdown
Train and evaluation functions. This time around we have two inputs, the two question fields. The model expects 2 positional args, which are:
``` question1, question2``` Where are we going to get them? Well, our iterator contains all this information, so we don't have to worry much about that. Let's create the train and evaluation functions.
def train(model, iterator, optimizer, criterion):
epoch_loss,epoch_acc = 0, 0
model.train()
for batch in iterator:
optimizer.zero_grad()
qn1 = batch.qn1
qn2= batch.qn2
predictions = model(qn1, qn2).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
def evaluate(model, iterator, criterion):
epoch_loss,epoch_acc = 0, 0
model.eval()
with torch.no_grad():
for batch in iterator:
qn1 = batch.qn1
qn2 = batch.qn2
predictions = model(qn1, qn2).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
###Output
_____no_output_____
###Markdown
Train Loop. We are going to create some helper functions that will help us visualize every epoch during training. Time to string:
###Code
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return "{}:{:>02}:{:>05.2f}".format(h, m, s)
###Output
_____no_output_____
###Markdown
Tabulate training epoch
###Code
def visualize_training(start, end, train_loss, train_accuracy, val_loss, val_accuracy, title):
data = [
["Training", f'{train_loss:.3f}', f'{train_accuracy:.3f}', f"{hms_string(end - start)}" ],
["Validation", f'{val_loss:.3f}', f'{val_accuracy:.3f}', "" ],
]
table = PrettyTable(["CATEGORY", "LOSS", "ACCURACY", "ETA"])
table.align["CATEGORY"] = 'l'
table.align["LOSS"] = 'r'
table.align["ACCURACY"] = 'r'
table.align["ETA"] = 'r'
table.title = title
for row in data:
table.add_row(row)
print(table)
N_EPOCHS = 10
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start = time.time()
train_loss, train_acc = train(duplicate_questions_model, train_iter, optimizer, criterion)
valid_loss, valid_acc = evaluate(duplicate_questions_model, val_iter, criterion)
title = f"EPOCH: {epoch+1:02}/{N_EPOCHS:02} {'saving best model...' if valid_loss < best_valid_loss else 'not saving...'}"
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(duplicate_questions_model.state_dict(), 'best-model.pt')
end = time.time()
visualize_training(start, end, train_loss, train_acc, valid_loss, valid_acc, title)
###Output
+--------------------------------------------+
| EPOCH: 01/10 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.530 | 0.739 | 0:00:45.04 |
| Validation | 0.492 | 0.765 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 02/10 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.466 | 0.783 | 0:00:44.70 |
| Validation | 0.470 | 0.777 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 03/10 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.435 | 0.800 | 0:00:45.06 |
| Validation | 0.458 | 0.785 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 04/10 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.411 | 0.813 | 0:00:44.78 |
| Validation | 0.451 | 0.789 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 05/10 saving best model... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.393 | 0.823 | 0:00:44.83 |
| Validation | 0.449 | 0.795 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 06/10 not saving... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.377 | 0.831 | 0:00:45.10 |
| Validation | 0.450 | 0.799 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 07/10 not saving... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.365 | 0.837 | 0:00:44.85 |
| Validation | 0.455 | 0.798 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 08/10 not saving... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.353 | 0.843 | 0:00:44.85 |
| Validation | 0.457 | 0.796 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 09/10 not saving... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.344 | 0.848 | 0:00:44.45 |
| Validation | 0.460 | 0.796 | |
+------------+-------+----------+------------+
+--------------------------------------------+
| EPOCH: 10/10 not saving... |
+------------+-------+----------+------------+
| CATEGORY | LOSS | ACCURACY | ETA |
+------------+-------+----------+------------+
| Training | 0.335 | 0.852 | 0:00:44.83 |
| Validation | 0.464 | 0.797 | |
+------------+-------+----------+------------+
###Markdown
Evaluating the best model.
###Code
duplicate_questions_model.load_state_dict(torch.load('best-model.pt'))
test_loss, test_acc = evaluate(duplicate_questions_model, test_iter, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
###Output
Test Loss: 0.450 | Test Acc: 78.80%
###Markdown
Model Inference. Our predict_sentiment function will: * take two questions, tokenize them and convert them to sequences of indices. * pass the questions, converted to tensors, to the model. * apply the sigmoid to turn the model output into a probability.
###Code
import en_core_web_sm
nlp = en_core_web_sm.load()
def predict_sentiment(model, q1, q2):
model.eval()
tokenized_q1 = [tok.text for tok in nlp.tokenizer(q1.lower())]
tokenized_q2 = [tok.text for tok in nlp.tokenizer(q2.lower())]
indexed_1 = [TEXT.vocab.stoi[t] for t in tokenized_q1]
indexed_2 = [TEXT.vocab.stoi[t] for t in tokenized_q2]
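    # unsqueeze(1) adds a batch dimension of 1: the model expects tensors of shape [seq_len, batch]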
tensor_1 = torch.LongTensor(indexed_1).to(device).unsqueeze(1)
tensor_2 = torch.LongTensor(indexed_2).to(device).unsqueeze(1)
prediction = torch.sigmoid(model(tensor_1, tensor_2))
return prediction.item()
###Output
_____no_output_____
###Markdown
Getting questions for testing.
###Code
dataframe = pd.read_csv(os.path.join(
base_path,
test_path
))
qns1 = dataframe.question1.values
qns2 = dataframe.question2.values
true_labels = dataframe.is_duplicate.values
from prettytable import PrettyTable
def tabulate(column_names, data, max_characters:int, title:str):
table = PrettyTable(column_names)
table.align[column_names[0]] = "l"
table.align[column_names[1]] = "l"
table.title = title
table._max_width = {column_names[0] :max_characters, column_names[1] :max_characters}
for row in data:
table.add_row(row)
print(table)
for i, (q1, q2, label) in enumerate(zip(qns1, qns2, true_labels[:10])):
pred = predict_sentiment(duplicate_questions_model, q1, q2)
classes = ["not duplicate", "duplicate"]
probability = pred if pred >=0.5 else 1 - pred
table_headers =["KEY", "VALUE"]
table_data = [
["Question 1", q1],
["Question2", q2],
["PREDICTED CLASS", round(pred)],
["PREDICTED CLASS NAME", classes[round(pred)]],
["REAL CLASS", label],
["REAL CLASS NAME", classes[label]],
["CONFIDENCE OVER OTHER CLASSES", f'{ probability * 100:.2f}%'],
]
title = "Duplicate Questions"
tabulate(table_headers, table_data, 50, title=title)
###Output
+------------------------------------------------------------------------------------+
| Duplicate Questions |
+-------------------------------+----------------------------------------------------+
| KEY | VALUE |
+-------------------------------+----------------------------------------------------+
| Question 1 | Do you watch Korean dramas? |
| Question2 | Is it normal to watch Korean drama if you are a |
| | guy? |
| PREDICTED CLASS | 0 |
| PREDICTED CLASS NAME | not duplicate |
| REAL CLASS | 0 |
| REAL CLASS NAME | not duplicate |
| CONFIDENCE OVER OTHER CLASSES | 89.91% |
+-------------------------------+----------------------------------------------------+
+------------------------------------------------------------------------------------+
| Duplicate Questions |
+-------------------------------+----------------------------------------------------+
| KEY | VALUE |
+-------------------------------+----------------------------------------------------+
| Question 1 | What are some good home remedies for getting rid |
| | of stress bumps on the lips? |
| Question2 | How do I get rid of an acidic tummy and a sore |
| | mouth? Is there any home remedies? |
| PREDICTED CLASS | 0 |
| PREDICTED CLASS NAME | not duplicate |
| REAL CLASS | 0 |
| REAL CLASS NAME | not duplicate |
| CONFIDENCE OVER OTHER CLASSES | 65.72% |
+-------------------------------+----------------------------------------------------+
+------------------------------------------------------------------------------------+
| Duplicate Questions |
+-------------------------------+----------------------------------------------------+
| KEY | VALUE |
+-------------------------------+----------------------------------------------------+
| Question 1 | “Everyone wants to go to Baghdad. Real men want to |
| | go to Tehran.” What does this mean? |
| Question2 | Why do you want to go back to college days? |
| PREDICTED CLASS | 1 |
| PREDICTED CLASS NAME | duplicate |
| REAL CLASS | 0 |
| REAL CLASS NAME | not duplicate |
| CONFIDENCE OVER OTHER CLASSES | 56.61% |
+-------------------------------+----------------------------------------------------+
+-------------------------------------------------------------------+
| Duplicate Questions |
+-------------------------------+-----------------------------------+
| KEY | VALUE |
+-------------------------------+-----------------------------------+
| Question 1 | How can I ask my wife for sex? |
| Question2 | Do I have to ask my wife for sex? |
| PREDICTED CLASS | 1 |
| PREDICTED CLASS NAME | duplicate |
| REAL CLASS | 0 |
| REAL CLASS NAME | not duplicate |
| CONFIDENCE OVER OTHER CLASSES | 93.09% |
+-------------------------------+-----------------------------------+
+------------------------------------------------------------------------------------+
| Duplicate Questions |
+-------------------------------+----------------------------------------------------+
| KEY | VALUE |
+-------------------------------+----------------------------------------------------+
| Question 1 | How do you deal with having a bad reputation in |
| | college? |
| Question2 | How can I deal with bad reputation in college? |
| PREDICTED CLASS | 1 |
| PREDICTED CLASS NAME | duplicate |
| REAL CLASS | 0 |
| REAL CLASS NAME | not duplicate |
| CONFIDENCE OVER OTHER CLASSES | 76.68% |
+-------------------------------+----------------------------------------------------+
+------------------------------------------------------------------------------------+
| Duplicate Questions |
+-------------------------------+----------------------------------------------------+
| KEY | VALUE |
+-------------------------------+----------------------------------------------------+
| Question 1 | What credit card is the one that you pay the |
| | least? |
| Question2 | What are the consiquences of not paying the credit |
| | card? |
| PREDICTED CLASS | 0 |
| PREDICTED CLASS NAME | not duplicate |
| REAL CLASS | 0 |
| REAL CLASS NAME | not duplicate |
| CONFIDENCE OVER OTHER CLASSES | 67.27% |
+-------------------------------+----------------------------------------------------+
+----------------------------------------------------------------------+
| Duplicate Questions |
+-------------------------------+--------------------------------------+
| KEY | VALUE |
+-------------------------------+--------------------------------------+
| Question 1 | Does Elon Musk have a lack of focus? |
| Question2 | What is the origin of the Drama? |
| PREDICTED CLASS | 1 |
| PREDICTED CLASS NAME | duplicate |
| REAL CLASS | 0 |
| REAL CLASS NAME | not duplicate |
| CONFIDENCE OVER OTHER CLASSES | 98.80% |
+-------------------------------+--------------------------------------+
+------------------------------------------------------------------------------------+
| Duplicate Questions |
+-------------------------------+----------------------------------------------------+
| KEY | VALUE |
+-------------------------------+----------------------------------------------------+
| Question 1 | What universities does Investors Real estate |
| | recruit new grads from? What majors are they |
| | looking for? |
| Question2 | What universities does Renaissance Real estate |
| | recruit new grads from? What majors are they |
| | looking for? |
| PREDICTED CLASS | 0 |
| PREDICTED CLASS NAME | not duplicate |
| REAL CLASS | 0 |
| REAL CLASS NAME | not duplicate |
| CONFIDENCE OVER OTHER CLASSES | 99.37% |
+-------------------------------+----------------------------------------------------+
+------------------------------------------------------------------------------------+
| Duplicate Questions |
+-------------------------------+----------------------------------------------------+
| KEY | VALUE |
+-------------------------------+----------------------------------------------------+
| Question 1 | Could God who is truly all powerful create a rock |
| | that he himself could not lift? |
| Question2 | If God is all powerful can he make a rock so heavy |
| | even he cannot lift it? |
| PREDICTED CLASS | 1 |
| PREDICTED CLASS NAME | duplicate |
| REAL CLASS | 1 |
| REAL CLASS NAME | duplicate |
| CONFIDENCE OVER OTHER CLASSES | 99.31% |
+-------------------------------+----------------------------------------------------+
+------------------------------------------------------------------------------------+
| Duplicate Questions |
+-------------------------------+----------------------------------------------------+
| KEY | VALUE |
+-------------------------------+----------------------------------------------------+
| Question 1 | How do I convey my mom (single mother) that i |
| | want/need to get married asap indirectly? Please |
| | help |
| Question2 | How do I convey my parents that they need to let |
| | me take my own decisions? |
| PREDICTED CLASS | 0 |
| PREDICTED CLASS NAME | not duplicate |
| REAL CLASS | 0 |
| REAL CLASS NAME | not duplicate |
| CONFIDENCE OVER OTHER CLASSES | 89.29% |
+-------------------------------+----------------------------------------------------+
###Markdown
Conclusion. We have learned how to create a model that maps two inputs to one output using a modified FastText model. What's next? Next we are going to try to use one embedding layer instead of the two we used in this notebook, while keeping the same `FastText` architecture.
###Code
###Output
_____no_output_____ |
notebooks/feature-importance-clinvar.ipynb | ###Markdown
Is feature order the same for clinvar and panel?
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
panel_file = '../data/interim/panel.dat'
panel_df_pre = pd.read_csv(panel_file, sep='\t')
panel_df = panel_df_pre[['Disease', 'gene']].rename(columns={'Disease':'panel_disease'})
clinvar_file = '../data/interim/clinvar/clinvar.eff.dbnsfp.anno.dat.limit.xls'
clinvar_df_pre = pd.read_csv(clinvar_file, sep='\t')
clinvar_df_pp = pd.merge(panel_df, clinvar_df_pre, on=['gene'], how='left')
clinvar_df = clinvar_df_pp[clinvar_df_pp.Disease=='clinvar_single']
cols = ['ccr', 'fathmm', 'vest', 'missense_badness', 'missense_depletion', 'is_domain']
for disease in set(clinvar_df['panel_disease']):
print(disease)
X = clinvar_df[clinvar_df.panel_disease==disease][cols]
y = clinvar_df[clinvar_df.panel_disease==disease]['y']
# Build a forest and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250,
random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
print("Feature ranking:")
for f in range(X.shape[1]):
print("%s %d. %s (%f)" % (disease, f + 1, cols[indices[f]], importances[indices[f]]))
plt.figure()
plt.title("Feature importances " + disease)
plt.bar(range(X.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
!jupyter nbconvert --to=python feature-importance-clinvar.ipynb --stdout > cv3_tmp.py
###Output
[NbConvertApp] Converting notebook feature-importance-clinvar.ipynb to python
|
Mining OpenStackNova Repo Notebook.ipynb | ###Markdown
Mining Software Repositories: OpenStack Nova Project. Goal. The goal of this tool and analysis is to help in capturing insights from the commits on a project repo, in this case the OpenStack Nova project repo. This will help in understanding the project as well as provide guidance to contributors and maintainers. Objectives. The following questions will be answered: * Which module is the most actively modified? * How many commits occurred during the studied period? * How much churn occurred during the studied period? Churn is defined as the sum of added and removed lines by all commits. **NB**: This workflow is responsible for the pre-processing, analysis, and generation of insight from the collected data. It is assumed that the automated collection of the data via the script accessible in the same folder as this notebook has been completed. The collected data will be loaded here before the other steps in the workflow execute. Required imports:
###Code
# Built-in libraries
import json
import os
# The normal data science ecosystem libraries
# pandas for data wrangling
import pandas as pd
# Plotting modules and libraries required
import matplotlib as mpl
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Required settings:
###Code
# Settings:
# 1. Command needed to make plots appear in the Jupyter Notebook
%matplotlib inline
# 2. Command needed to make plots bigger in the Jupyter Notebook
plt.rcParams['figure.figsize']= (12, 10)
# 3. Command needed to make 'ggplot' styled plots- professional and yet good looking theme.
plt.style.use('ggplot')
# 4. This will make the plot zoomable
# mpld3.enable_notebook()
###Output
_____no_output_____
###Markdown
Other utility functions for data manipulation
###Code
# Utility data manipulation functions
# 1. Extract path parameters from filename
def get_path_parameters(dframe):
filename = os.path.basename(dframe["filename"])
filetype = os.path.splitext(dframe["filename"])[1]
directory = os.path.dirname(dframe["filename"])
return directory, filename, filetype
###Output
_____no_output_____
###Markdown
1. Loading the data
###Code
# Open and load json file
with open('data.json', encoding="utf8") as file:
data = json.load(file)
print("data loaded successfully")
###Output
data loaded successfully
###Markdown
Data normalization. The collected commit data is semi-structured JSON with nested data: each commit record holds commit-level fields plus a `files` list of file objects. The loaded data will be normalized into a flat table using pandas.json_normalize.
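As a small, purely illustrative example (the values below are made up; only the field names mirror the collected data), json_normalize produces one row per file from each commit's nested files list while repeating the commit-level metadata:
###Code
# Illustrative only: a tiny made-up record with the same nesting as the collected data
sample = [{
    "commit_sha": "abc123",
    "commit_node_id": "node1",
    "commit_html_url": "https://example.org/commit/abc123",
    "commit_date": "2022-01-01T00:00:00Z",
    "files": [
        {"filename": "nova/api.py", "additions": 3, "deletions": 1, "changes": 4},
        {"filename": "nova/tests.py", "additions": 10, "deletions": 0, "changes": 10},
    ],
}]
pd.json_normalize(sample, "files", ["commit_node_id", "commit_sha", "commit_html_url", "commit_date"])
###Output
_____no_output_____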
###Code
df = pd.json_normalize(data, "files", ["commit_node_id", "commit_sha", "commit_html_url", "commit_date" ])
###Output
_____no_output_____
###Markdown
2. Displaying current state of the data
###Code
# The first 3 rows
df.head(3)
# The last 3 rows
df.tail(3)
# Summary of the dataframe
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1973 entries, 0 to 1972
Data columns (total 15 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 sha 1973 non-null object
1 filename 1973 non-null object
2 status 1973 non-null object
3 additions 1973 non-null int64
4 deletions 1973 non-null int64
5 changes 1973 non-null int64
6 blob_url 1973 non-null object
7 raw_url 1973 non-null object
8 contents_url 1973 non-null object
9 patch 1965 non-null object
10 previous_filename 8 non-null object
11 commit_node_id 1973 non-null object
12 commit_sha 1973 non-null object
13 commit_html_url 1973 non-null object
14 commit_date 1973 non-null object
dtypes: int64(3), object(12)
memory usage: 231.3+ KB
###Markdown
3. Verify data
###Code
# Let us manually examine at least one commit and see if the present rows are correct.
# We use the most recent commit as at the development time.
# Please note that this commit will not be part of the collected commits 6 months from today, February 17th, 2022
commit = '3a14c1a4277a9f44b67e080138b28b680e5e6824'
df[df["commit_sha"] == commit]
###Output
_____no_output_____
###Markdown
 4. Data cleaning
###Code
# Removing columns not needed for the analysis
columns = ['previous_filename', 'patch', 'contents_url', 'raw_url', 'commit_node_id']
df.drop(columns, inplace=True, axis=1)
# Generating and adding extra columns
df[["directory", "file_name", "file_type"]] = df.apply(lambda x: get_path_parameters(x), axis=1, result_type="expand")
# Delete the previous filename column as it is no longer required
df.drop("filename", inplace=True, axis=1)
# Rename columns
df.rename(columns={"sha": "file_sha", "status": "file_status", "additions":"no_of_additions", "deletions": "no_of_deletions"}, inplace=True)
# Optimising the data frame by correcting the data types.
# This will also make more operations possible on the data frame
df = df.astype({'file_sha': 'str', 'file_status': 'category', 'no_of_additions':'int', 'no_of_deletions':'int', 'changes':'int', 'blob_url':'str', 'commit_sha':'str', 'commit_html_url':'str', 'directory':'str', 'file_name':'str', 'file_type':'category'})
df['commit_date'] = pd.to_datetime(df['commit_date'], infer_datetime_format=True)
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1973 entries, 0 to 1972
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 file_sha 1973 non-null object
1 file_status 1973 non-null category
2 no_of_additions 1973 non-null int32
3 no_of_deletions 1973 non-null int32
4 changes 1973 non-null int32
5 blob_url 1973 non-null object
6 commit_sha 1973 non-null object
7 commit_html_url 1973 non-null object
8 commit_date 1973 non-null datetime64[ns, UTC]
9 directory 1973 non-null object
10 file_name 1973 non-null object
11 file_type 1973 non-null category
dtypes: category(2), datetime64[ns, UTC](1), int32(3), object(6)
memory usage: 135.8+ KB
###Markdown
A. Basic Analysis and Visualization 1. Total number of commits that occurred during the studied period.
###Code
# value_counts returns a series object counting all unique values.
# the 1st value being the most frequently occurring, i.e. the commit with the highest no. of file changes.
df["commit_sha"].value_counts()
print("The total no. of processed commits is: {commits_total}".format(commits_total = len(df["commit_sha"].value_counts())))
###Output
The total no. of processed commits is: 467
###Markdown
2. The 12 most modified files
###Code
df["file_name"].value_counts().head(12)
df["file_name"].value_counts().head(12).sort_values().plot.barh(figsize=(10, 9)); plt.axhline(0, color='k'); plt.title('The 12 most modified files'); plt.xlabel('Total number of changes'); plt.ylabel('File Names')
###Output
_____no_output_____
###Markdown
3. The 12 most modified directories
###Code
# the term directory is used in place of module
df["directory"].value_counts().head(12)
df["directory"].value_counts().head(12).sort_values().plot.pie(autopct='%1.1f%%', figsize=(20,8),
shadow=True, startangle=90, ylabel="");plt.title('The 12 most modified modules')
###Output
_____no_output_____
###Markdown
4. The most modified file types
###Code
df["file_type"].value_counts()
df["file_type"].value_counts().head().sort_values().plot.bar(figsize=(10, 8)); plt.axhline(0, color='k'); plt.title('The most modified file types'); plt.xlabel('File extension'); plt.ylabel('Total number of files')
###Output
_____no_output_____
###Markdown
5. Churn
###Code
# sum of all changes across all directories that occurred during the period
df["changes"].sum()
###Output
_____no_output_____
###Markdown
 B. Exploring activities at module level using aggregation operations 1. The total number of file modifications by directory, i.e. the number of rows per directory. A row in df records one file change and the commit responsible for it.
###Code
# the term directory is used in place of module
# the same files may be modified in a particular directory by different commits
# we can't have rows where a particular commit modifies the same file more than once
# Reveal the total number of changes in each directory, i.e. the number of times the directory was modified.
grp1 = df.groupby(['directory']).size().sort_values(ascending=False).head(12)
grp1
###Output
_____no_output_____
###Markdown
2. The total number of commits per directory
###Code
# split data by column <directory>
# pass column <commit_sha> to indicate we want the number of unique values for that column
# apply .nunique() to count the number of unique values in that column
grp2 = df.groupby('directory')['commit_sha'].nunique().sort_values(ascending=False).head(12)
grp2
grp2.sort_values().plot.barh(figsize=(10, 8)); plt.axhline(0, color='k'); plt.title('Total number of commits by directory'); plt.xlabel('Number of commits'); plt.ylabel('Directories')
###Output
_____no_output_____
###Markdown
3. The number of files by directory
###Code
# The number of unique files per directory
group_by = df.groupby('directory')['file_name'].nunique().sort_values(ascending=False).head(12)
group_by
###Output
_____no_output_____
###Markdown
4. The churn by directory
###Code
df.groupby('directory').sum().sort_values(by='changes', ascending=False).head(12)
###Output
_____no_output_____
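###Markdown
Since only the `changes` column matters for churn, selecting it before aggregating keeps the result to a single churn figure per directory and avoids summing the unrelated numeric columns; a small sketch:
###Code
# churn (total lines changed) per directory, selecting just the relevant column
df.groupby('directory')['changes'].sum().sort_values(ascending=False).head(12)
###Output
_____no_output_____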
###Markdown
5. The churn by file name
###Code
df.groupby('file_name').sum().sort_values(by='changes', ascending=False).head(12)
###Output
_____no_output_____
###Markdown
6. The churn by file type
###Code
df.groupby('file_type').sum().sort_values(by='changes', ascending=False).head(12)
###Output
_____no_output_____ |
chapter_02/l01_fbprophet_hs300.ipynb | ###Markdown
Load the CSI 300 (HS300) market data. At this point in the tutorial QUANTAXIS has not been deployed yet, so there is no way to pull market data directly; for this teaching example we can only use a pre-saved market data file instead. The actual path of the data file depends on your deployment directory, so please edit the path string below yourself.
###Code
# If this example data file cannot be loaded, download the data file to your home directory yourself and change the username in the storage path below
market_df = pd.read_pickle(u'/home/wangdong/Downloads/996Quant/chapter_02/kline_399300_60min_21-01-29_15_00.pickle')
# Daily bars
#market_df = pd.read_pickle(u'C:\\Users\\azai\\OneDrive\\Documents\\kline_399300_day_21-01-29_00_00.pickle')
# Handle both daily and hourly bars. The double-indexed market data structure comes from QUANTAXIS: QADataStruct
if('type' in market_df.columns):
market_df['date'] = pd.to_datetime(market_df.index.get_level_values(level=0))
###Output
_____no_output_____
###Markdown
Prophet requires the two columns to be named 'ds' and 'y', where 'ds' is the timestamp and 'y' is the value of the time series, so the column names of the pd.DataFrame usually need to be changed. If the original two columns were named 'timestamp' and 'value', you would simply write:
###Code
df = market_df.reset_index().rename(columns={'date':'ds', 'close':'y'})
df['y'] = np.log(df['y'])
# Build the model and fit it to the data
model = Prophet()
model.fit(df);
# Predict the next `periods` data points
future = model.make_future_dataframe(periods=22) # forecast 22 periods beyond the last observation
forecast = model.predict(future)
plt.rcParams['font.sans-serif'] = ['Microsoft YaHei']
plt.rcParams['figure.figsize']=(20,10)
plt.style.use('ggplot')
figure = model.plot(forecast)
for changepoint in model.changepoints:
plt.axvline(changepoint,ls='--', lw=1)
two_years = market_df.copy()
code = market_df.index.get_level_values(level=1)[0]
forecast['code'] = code
forecast = forecast.rename(columns={'ds':'date'}).set_index(['date', 'code'])
two_years = two_years.reindex(columns=[*two_years.columns,
*['yhat', 'yhat_upper', 'yhat_lower']])
two_years.loc[:, ['yhat', 'yhat_upper', 'yhat_lower']] = forecast.loc[two_years.index,
['yhat', 'yhat_upper', 'yhat_lower']]
two_years['yhat']=np.exp(two_years.yhat)
two_years['yhat_upper']=np.exp(two_years.yhat_upper)
two_years['yhat_lower']=np.exp(two_years.yhat_lower)
two_years[['close', 'yhat']].plot()
fig, ax1 = plt.subplots()
each_day = market_df.index.get_level_values(level=0)
ax1.plot(each_day, two_years.close)
ax1.plot(each_day, two_years.yhat)
ax1.plot(each_day, two_years.yhat_upper, color='black', linestyle=':', alpha=0.5)
ax1.plot(each_day, two_years.yhat_lower, color='black', linestyle=':', alpha=0.5)
ax1.set_title(u'CSI 300 actual price (orange) and forecast confidence interval (black dashed)')
ax1.set_ylabel('Price')
ax1.set_xlabel('Date')
print(each_day[-1])
model.changepoints
###Output
_____no_output_____ |
IBM_Week_3_Part_1.ipynb | ###Markdown
 IBM Week 3 Capstone Project The code below scrapes the provided Wikipedia page for the raw data and cleans it. The data includes the postal codes, boroughs, and neighborhoods in Toronto.
###Code
import pandas as pd
raw_data = pd.read_html('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M', header=0)
data = pd.DataFrame(raw_data[0]).dropna(subset=['Neighborhood']).reset_index(drop=True)
print(data.head())
###Output
Postal Code Borough Neighborhood
0 M3A North York Parkwoods
1 M4A North York Victoria Village
2 M5A Downtown Toronto Regent Park, Harbourfront
3 M6A North York Lawrence Manor, Lawrence Heights
4 M7A Downtown Toronto Queen's Park, Ontario Provincial Government
###Markdown
The code below prints the shape of the data obtained from wikipedia.
###Code
print(data.shape)
###Output
(103, 3)
|
Day_6/lab6/lab6-drive-ham-rabi-ramsey.ipynb | ###Markdown
by Nick Brønn Treat the transmon as a qubit for simplicity. Then we can describe the dynamics of the qubit with the **Pauli Matrices**:$$\sigma^x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad\sigma^y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \qquad\sigma^z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad$$ They obey the commutator relations$$ [\sigma^x, \sigma^y] = 2i\sigma^z \qquad [\sigma^y, \sigma^z] = 2i\sigma^x \qquad [\sigma^z, \sigma^x] = 2i\sigma^y $$ On its own, the qubit Hamiltonian is$$ \hat{H}_q = -\frac{1}{2} \hbar \omega_q \sigma^z $$ Ground state of the qubit ($|0\rangle$): points in the $+\hat{z}$ direction of the Bloch sphere. Excited state of the qubit ($|1\rangle$): points in the $-\hat{z}$ direction of the Bloch sphere.
###Code
from qiskit.quantum_info import Statevector
from qiskit.visualization import plot_bloch_multivector
excited = Statevector.from_int(1, 2)
plot_bloch_multivector(excited.data)
###Output
_____no_output_____
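###Markdown
As a quick numerical sanity check of the commutator relations quoted above, the Pauli matrices can be written out with NumPy and the commutators evaluated directly (a small sketch, independent of Qiskit):
###Code
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(a, b):
    # matrix commutator [a, b] = ab - ba
    return a @ b - b @ a

# [sigma^x, sigma^y] = 2i sigma^z, and cyclic permutations
print(np.allclose(comm(sx, sy), 2j * sz))
print(np.allclose(comm(sy, sz), 2j * sx))
print(np.allclose(comm(sz, sx), 2j * sy))
###Output
_____no_output_____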
###Markdown
The Pauli matrices also let us define raising and lowering operators$$ \sigma^+ = \frac{1}{2}( \sigma^x - i\sigma^y) \qquad {\rm and} \qquad \sigma^- = \frac{1}{2}( \sigma^x + i\sigma^y)$$ They raise and lower qubit states$$ \sigma^+ |0\rangle = |1\rangle \qquad \sigma^+ |1\rangle = 0\qquad {\rm and} \qquad \sigma^-|1\rangle = |0\rangle \qquad \sigma^-|0\rangle = 0$$ Electric Dipole Interaction The **qubit** behaves as an electric dipole$$\vec{d} = \vec{d}_0 \sigma^+ + \vec{d}_0^* \sigma^-$$ The **drive** behaves as an electric field$$\vec{E} = \vec{E}_0 e^{-i\omega_d t} + \vec{E}_0^* e^{i\omega_d t}$$ The drive Hamiltonian is then$$ \hat{H}_d = -\vec{d} \cdot \vec{E} $$ And now, some math...$$\hat{H}_d = -\left(\vec{d}_0 \sigma^+ + \vec{d}_0^* \sigma^-\right) \cdot \left(\vec{E}_0 e^{-i\omega_d t} + \vec{E}_0^* e^{i\omega_d t}\right) \\= -\left(\vec{d}_0 \cdot \vec{E}_0 e^{-i\omega_d t} + \vec{d}_0 \cdot \vec{E}_0^* e^{i\omega_d t}\right)\sigma^+-\left(\vec{d}_0^* \cdot \vec{E}_0 e^{-i\omega_d t} + \vec{d}_0^* \cdot \vec{E}_0^* e^{i\omega_d t}\right)\sigma^- $$ $$\equiv -\hbar\left(\Omega e^{-i\omega_d t} + \tilde{\Omega} e^{i\omega_d t}\right)\sigma^+-\hbar\left(\tilde{\Omega}^* e^{-i\omega_d t} + \Omega^* e^{i\omega_d t}\right)\sigma^-$$by setting $\Omega \equiv \vec{d}_0 \cdot \vec{E}_0/\hbar$ and $\tilde{\Omega} \equiv \vec{d}_0 \cdot \vec{E}_0^*/\hbar $ Rotating Wave Approximation Move the Hamiltonian to the interaction picture $$\hat{H}_{d,I} = U\hat{H}_dU^\dagger \qquad \qquad ^*{\rm omitting\, terms\, that\, cancel} $$ with $$U = e^{i\hat{H}_q t/\hbar} = e^{-i\omega_q t \sigma^z/2} = I\cos(\omega_q t/2) - i\sigma^z\sin(\omega_q t/2)$$ Calculate the operator terms $$\left(I\cos(\omega_q t/2) - i\sigma^z\sin(\omega_q t/2)\right) \sigma^+ \left(I\cos(\omega_q t/2) + i\sigma^z\sin(\omega_q t/2)\right) = e^{i\omega_q t} \sigma^+ \\\left(I\cos(\omega_q t/2) - i\sigma^z\sin(\omega_q t/2)\right) \sigma^- \left(I\cos(\omega_q t/2) + i\sigma^z\sin(\omega_q t/2)\right) = e^{-i\omega_q t} \sigma^-$$ The transformed Hamiltonian is$$\hat{H}_{d,I} = -\hbar\left(\Omega e^{-i(\omega_q-\omega_d) t} + \tilde{\Omega} e^{i(\omega_q+\omega_d) t}\right) \sigma^+ -\hbar\left(\tilde{\Omega}^* e^{-i(\omega_q+\omega_d) t} + \Omega^* e^{i(\omega_q-\omega_d) t}\right) \sigma^-$$ Rotating Wave Approximation$$\hat{H}_{d,I} = -\hbar\left(\Omega e^{-i(\omega_q-\omega_d) t} + \tilde{\Omega} e^{i(\omega_q+\omega_d) t}\right) \sigma^+ -\hbar\left(\tilde{\Omega}^* e^{-i(\omega_q+\omega_d) t} + \Omega^* e^{i(\omega_q-\omega_d) t}\right) \sigma^-$$ $\omega_q-\omega_d$: slow-rotating terms contribute most of interaction $\omega_q+\omega_d$: fast-rotating terms tend to average out Define $\Delta_q = \omega_q - \omega_d$ and make the RWA$$\hat{H}_{d,I}^{\rm (RWA)} =-\hbar\Omega e^{-i\Delta_q t} \sigma^+ -\hbar \Omega^* e^{i\Delta_q t} \sigma^-$$ Transform Hamiltonian back to the Schrödinger picture$$\hat{H}_{d}^{\rm (RWA)} = -\hbar\Omega e^{-i\omega_d t} \sigma^+ -\hbar\Omega^* e^{i\omega_d t} \sigma^-$$ And the total qubit Hamiltonian is$$ \hat{H}_{\rm tot} = \hat{H}_q + \hat{H}_d^{\rm (RWA)} = -\frac{1}{2} \hbar\omega_q \sigma^z -\hbar\Omega e^{-i\omega_d t} \sigma^+ -\hbar\Omega^* e^{i\omega_d t} \sigma^- $$ Qubit Drive ExampleSet $\Omega^* \equiv \Omega$ and transform into the frame of the drive $$\hat{H}_{\rm eff} = U_d \hat{H}_{\rm tot} U_d^\dagger - i\hbar U_d \dot{U_d}^\dagger$$with $U_d = \exp\{-i\omega_d t\sigma^z/2\}$ In a similar calculation to earlier, the effective Hamiltonian is$$\hat{H}_{\rm eff} = 
-\frac{1}{2}\hbar \Delta_q \sigma^z -\hbar\Omega \sigma^x$$ What this means: on-resonance ($\Delta_q = 0$), the drive rotates the qubit around the $\hat{x}$-axis; off-resonance ($\Delta_q \ne 0$), there is an additional $\hat{z}$-rotation on top of the drive. Qiskit Pulse: On-resonant Drive (Rabi)$$\hat{H}_{\rm eff} = -\hbar\Omega \sigma^x$$ Import Necessary Libraries
###Code
from qiskit.tools.jupyter import *
from qiskit import IBMQ
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q', group='open', project='main')
backend = provider.get_backend('ibmq_armonk')
###Output
ibmqfactory.load_account:WARNING:2020-07-24 16:35:36,355: Credentials are already in use. The existing account in the session will be replaced.
###Markdown
Verify Backend is Pulse-enabled
###Code
backend_config = backend.configuration()
assert backend_config.open_pulse, "Backend doesn't support Pulse"
###Output
_____no_output_____
###Markdown
Qiskit Pulse: On-resonant Drive (Rabi) Take care of some other things
###Code
dt = backend_config.dt
print(f"Sampling time: {dt*1e9} ns")
backend_defaults = backend.defaults()
###Output
Sampling time: 0.2222222222222222 ns
###Markdown
Qiskit Pulse: On-resonant Drive (Rabi)
###Code
import numpy as np
# unit conversion factors -> all backend properties returned in SI (Hz, sec, etc)
GHz = 1.0e9 # Gigahertz
MHz = 1.0e6 # Megahertz
us = 1.0e-6 # Microseconds
ns = 1.0e-9 # Nanoseconds
# We will find the qubit frequency for the following qubit.
qubit = 0
# The Rabi sweep will be at the given qubit frequency.
center_frequency_Hz = backend_defaults.qubit_freq_est[qubit] # The default frequency is given in Hz
# warning: this will change in a future release
print(f"Qubit {qubit} has an estimated frequency of {center_frequency_Hz / GHz} GHz.")
###Output
Qubit 0 has an estimated frequency of 4.974452425330183 GHz.
###Markdown
Qiskit Pulse: On-resonant Drive (Rabi)
###Code
from qiskit import pulse, assemble # This is where we access all of our Pulse features!
from qiskit.pulse import Play
from qiskit.pulse import pulse_lib # This Pulse module helps us build sampled pulses for common pulse shapes
### Collect the necessary channels
drive_chan = pulse.DriveChannel(qubit)
meas_chan = pulse.MeasureChannel(qubit)
acq_chan = pulse.AcquireChannel(qubit)
inst_sched_map = backend_defaults.instruction_schedule_map
measure = inst_sched_map.get('measure', qubits=[0])
###Output
_____no_output_____
###Markdown
Qiskit Pulse: On-resonant Drive (Rabi)
###Code
# Rabi experiment parameters
# Drive amplitude values to iterate over: 50 amplitudes evenly spaced from 0 to 0.75
num_rabi_points = 50
drive_amp_min = 0
drive_amp_max = 0.75
drive_amps = np.linspace(drive_amp_min, drive_amp_max, num_rabi_points)
# drive waveform durations must be in units of 16 (dt)
drive_sigma = 80 # in dt
drive_samples = 8*drive_sigma # in dt
###Output
_____no_output_____
###Markdown
Qiskit Pulse: On-resonant Drive (Rabi)
###Code
# Build the Rabi experiments:
# A drive pulse at the qubit frequency, followed by a measurement,
# where we vary the drive amplitude each time.
rabi_schedules = []
for drive_amp in drive_amps:
rabi_pulse = pulse_lib.gaussian(duration=drive_samples, amp=drive_amp,
sigma=drive_sigma, name=f"Rabi drive amplitude = {drive_amp}")
this_schedule = pulse.Schedule(name=f"Rabi drive amplitude = {drive_amp}")
this_schedule += Play(rabi_pulse, drive_chan)
# The left shift `<<` is special syntax meaning to shift the start time of the schedule by some duration
this_schedule += measure << this_schedule.duration
rabi_schedules.append(this_schedule)
rabi_schedules[-1].draw(label=True, scaling=1.0)
###Output
_____no_output_____
###Markdown
Qiskit Pulse: On-resonant Drive (Rabi)
###Code
# assemble the schedules into a Qobj
num_shots_per_point = 1024
rabi_experiment_program = assemble(rabi_schedules,
backend=backend,
meas_level=1,
meas_return='avg',
shots=num_shots_per_point,
schedule_los=[{drive_chan: center_frequency_Hz}]
* num_rabi_points)
# RUN the job on a real device
#job = backend.run(rabi_experiment_program)
#print(job.job_id())
#from qiskit.tools.monitor import job_monitor
#job_monitor(job)
# OR retrieve the result from a previous run
job = backend.retrieve_job("5ef3bf17dc3044001186c011")
###Output
_____no_output_____
###Markdown
Qiskit Pulse: On-resonant Drive (Rabi)
###Code
rabi_results = job.result()
import matplotlib.pyplot as plt
plt.style.use('dark_background')
scale_factor = 1e-14
# center data around 0
def baseline_remove(values):
return np.array(values) - np.mean(values)
###Output
_____no_output_____
###Markdown
Qiskit Pulse: On-resonant Drive (Rabi)
###Code
rabi_values = []
for i in range(num_rabi_points):
# Get the results for `qubit` from the ith experiment
rabi_values.append(rabi_results.get_memory(i)[qubit]*scale_factor)
rabi_values = np.real(baseline_remove(rabi_values))
plt.xlabel("Drive amp [a.u.]")
plt.ylabel("Measured signal [a.u.]")
plt.scatter(drive_amps, rabi_values, color='white') # plot real part of Rabi values
plt.show()
###Output
_____no_output_____
###Markdown
Qiskit Pulse: On-resonant Drive (Rabi) Define Rabi curve-fitting function
###Code
from scipy.optimize import curve_fit
def fit_function(x_values, y_values, function, init_params):
fitparams, conv = curve_fit(function, x_values, y_values, init_params)
y_fit = function(x_values, *fitparams)
return fitparams, y_fit
fit_params, y_fit = fit_function(drive_amps,
rabi_values,
lambda x, A, B, drive_period, phi: (A*np.cos(2*np.pi*x/drive_period - phi) + B),
[10, 0.1, 0.6, 0])
###Output
_____no_output_____
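###Markdown
`curve_fit` also returns a covariance matrix (discarded by `fit_function` above), from which rough one-standard-deviation uncertainties on the fitted parameters can be read off. A short sketch, reusing `drive_amps` and `rabi_values` from the cells above:
###Code
# repeat the fit directly to keep the covariance matrix and estimate parameter uncertainties
popt, pcov = curve_fit(lambda x, A, B, drive_period, phi: A*np.cos(2*np.pi*x/drive_period - phi) + B,
                       drive_amps, rabi_values, p0=[10, 0.1, 0.6, 0])
perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties on [A, B, drive_period, phi]
print("drive_period = {:.4f} +/- {:.4f}".format(popt[2], perr[2]))
###Output
_____no_output_____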
###Markdown
Qiskit Pulse: On-resonant Drive (Rabi)
###Code
plt.scatter(drive_amps, rabi_values, color='white')
plt.plot(drive_amps, y_fit, color='red')
drive_period = fit_params[2] # get period of rabi oscillation
plt.axvline(drive_period/2, color='red', linestyle='--')
plt.axvline(drive_period, color='red', linestyle='--')
plt.annotate("", xy=(drive_period, 0), xytext=(drive_period/2,0), arrowprops=dict(arrowstyle="<->", color='red'))
plt.xlabel("Drive amp [a.u.]", fontsize=15)
plt.ylabel("Measured signal [a.u.]", fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
Save $\pi/2$ pulse for later
###Code
pi_amp = abs(drive_period / 2)
print(f"Pi Amplitude = {pi_amp}")
# Drive parameters
# The drive amplitude for pi/2 is simply half the amplitude of the pi pulse
drive_amp = pi_amp / 2
# x_90 is a concise way to say pi_over_2; i.e., an X rotation of 90 degrees
x90_pulse = pulse_lib.gaussian(duration=drive_samples,
amp=drive_amp,
sigma=drive_sigma,
name='x90_pulse')
###Output
_____no_output_____
###Markdown
Qiskit Pulse: Off-resonant Drive (Ramsey)$$\hat{H}_{\rm eff} = -\frac{1}{2}\hbar \Delta_q \sigma^z -\hbar\Omega \sigma^x$$
###Code
# Ramsey experiment parameters
time_max_us = 1.8
time_step_us = 0.025
times_us = np.arange(0.1, time_max_us, time_step_us)
# Convert to units of dt
delay_times_dt = times_us * us / dt
# create schedules for Ramsey experiment
ramsey_schedules = []
for delay in delay_times_dt:
this_schedule = pulse.Schedule(name=f"Ramsey delay = {delay * dt / us} us")
this_schedule += Play(x90_pulse, drive_chan)
this_schedule += Play(x90_pulse, drive_chan) << this_schedule.duration + int(delay)
this_schedule += measure << this_schedule.duration
ramsey_schedules.append(this_schedule)
ramsey_schedules[-1].draw(label=True, scaling=1.0)
###Output
_____no_output_____
###Markdown
Qiskit Pulse: Off-resonant Drive (Ramsey)
###Code
# Execution settings
num_shots = 256
detuning_MHz = 2
ramsey_frequency = round(center_frequency_Hz + detuning_MHz * MHz, 6) # need ramsey freq in Hz
ramsey_program = assemble(ramsey_schedules,
backend=backend,
meas_level=1,
meas_return='avg',
shots=num_shots,
schedule_los=[{drive_chan: ramsey_frequency}]*len(ramsey_schedules)
)
# RUN the job on a real device
#job = backend.run(ramsey_experiment_program)
#print(job.job_id())
#from qiskit.tools.monitor import job_monitor
#job_monitor(job)
# OR retrieve the job from a previous run
job = backend.retrieve_job('5ef3ed3a84b1b70012374317')
###Output
_____no_output_____
###Markdown
Qiskit Pulse: Off-resonant Drive (Ramsey) Ramsey curve-fitting function
###Code
ramsey_results = job.result()
ramsey_values = []
for i in range(len(times_us)):
ramsey_values.append(ramsey_results.get_memory(i)[qubit]*scale_factor)
fit_params, y_fit = fit_function(times_us, np.real(ramsey_values),
lambda x, A, del_f_MHz, C, B: (
A * np.cos(2*np.pi*del_f_MHz*x - C) + B
),
[5, 1./0.4, 0, 0.25]
)
###Output
_____no_output_____
###Markdown
Qiskit Pulse: Off-resonant Drive (Ramsey)
###Code
# Off-resonance component
_, del_f_MHz, _, _, = fit_params # freq is MHz since times in us
plt.scatter(times_us, np.real(ramsey_values), color='white')
plt.plot(times_us, y_fit, color='red', label=f"df = {del_f_MHz:.2f} MHz")
plt.xlim(0, np.max(times_us))
plt.xlabel('Delay between X90 pulses [$\mu$s]', fontsize=15)
plt.ylabel('Measured Signal [a.u.]', fontsize=15)
plt.title('Ramsey Experiment', fontsize=15)
plt.legend()
plt.show()
###Output
_____no_output_____ |
homeworks/08/Pandas_exercises_part2.ipynb | ###Markdown
Problem 1 Import the library pandas as pd Import արեք pandas գրադարանը pd անունով
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Import the dataset gender.txt, assign to a variable gender and use user_id as index Import արեք gender.txt dataset-ը, վերագրեք այն gender փոփոխականին ու օգտագործեք user_id սյունը որպես index
###Code
gender = pd.read_csv('data/gender.txt', sep = '|', index_col = 'user_id')
gender
###Output
_____no_output_____
###Markdown
Print the first 20 entries Տպեք առաջին 20 արժեքները
###Code
gender.head(20)
###Output
_____no_output_____
###Markdown
Print the last 10 entries Տպեք վերջին 10 արժեքները
###Code
gender.tail(10)
###Output
_____no_output_____
###Markdown
What is the number of rows in the dataset? Քանի տող կա dataset-ում?
###Code
gender.shape[0]
###Output
_____no_output_____
###Markdown
What is the number of columns in the dataset? Քանի սյուն կա dataset-ում?
###Code
gender.shape[1]
###Output
_____no_output_____
###Markdown
Print the name of all the columns Տպեք սյուների անունները
###Code
gender.columns.to_list()
###Output
_____no_output_____
###Markdown
How is the dataset indexed? Ինչպես է ինդեքսավորված dataset-ը?
###Code
gender.index
# gender.index.to_list()
###Output
_____no_output_____
###Markdown
What is the data type of each column? Ինչ տիպի է ամեն սյունը՞
###Code
gender.dtypes
###Output
_____no_output_____
###Markdown
Print only the occupation column Տպեք միայն occupation սյունը
###Code
gender['occupation']
###Output
_____no_output_____
###Markdown
 How many different occupations are there in this dataset? Քանի հատ տարբեր occupation կա dataset-ում?
###Code
gender['occupation'].nunique()
###Output
_____no_output_____
###Markdown
What is the most frequent occupation? Որն է ամենահաճախ հանդիպող occupation-ը?
###Code
gender['occupation'].mode()
###Output
_____no_output_____
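###Markdown
An equivalent way to get the most frequent occupation (a small sketch using the same gender dataframe) is to count occurrences and take the label with the highest count:
###Code
# alternative to mode(): count occurrences and return the most common label
gender['occupation'].value_counts().idxmax()
###Output
_____no_output_____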
###Markdown
Problem 2 Import the libraries Import արեք pandas գրադարանը pd անունով
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Import the dataset drinks.csv, assign it to a variable called drinks Import արեք drinks.csv dataset-ը, վերագրեք այն drinks փոփոխականին
###Code
drinks = pd.read_csv('data/drinks.csv')
drinks
###Output
_____no_output_____
###Markdown
Which continent drinks more beer on average? Որ continent-ն է միջինում ավելի շատ գարեջուր խմում՞
###Code
drinks.groupby('continent')['beer_servings'].mean().idxmax()
###Output
_____no_output_____
###Markdown
For each continent print the statistics for wine consumption Ամեն continent-ի համար տպեք wine_servings-ի ստատիստիկան (mean, min, max, std, quartiles)
###Code
drinks.groupby('continent')['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Print the mean alcohol consumption per continent for every column Ամեն սյան համար տպեք միջին ալկոհոլի օգտագործման քանակը ամեն continent-ի համար
###Code
drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Print the median alcohol consumption per continent for every column Ամեն սյան համար տպեք մեդիան ալկոհոլի օգտագործման քանակը ամեն continent-ի համար
###Code
drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
 Print the mean, min and max values for spirit consumption. Տպեք spirit-ի օգտագործման mean, min ու max-ը։
###Code
drinks['spirit_servings'].agg(['mean', 'min', 'max'])
###Output
_____no_output_____
###Markdown
Problem 3 Import the libraries Import արեք pandas գրադարանը pd անունով
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Import the dataset baby_names.txt, assign it to a variable called baby_names Import արեք baby_names.txt dataset-ը ու այն վերագրեք baby_names փոփոխականի
###Code
baby_names = pd.read_csv('data/baby_names.csv')
baby_names
###Output
_____no_output_____
###Markdown
See the first 10 entries. Տպեք առաջին 10 արժեքը
###Code
baby_names.head(10)
###Output
_____no_output_____
###Markdown
Delete the columns 'Unnamed: 0' and 'Id' Ջնջեք 'Unnamed: 0' ու 'Id' սյուները
###Code
baby_names = baby_names.drop(['Unnamed: 0', 'Id'], axis = 1)
baby_names
###Output
_____no_output_____
###Markdown
Are there more male or female names in the dataset? Արդյոք dataset-ում ավելի շատ են կանացի թե տղամարդու անունները?
###Code
baby_names.groupby('Gender')['Count'].sum().idxmax()
###Output
_____no_output_____
###Markdown
Delete the column 'Year'. Group the dataset by name and assign to names. Ջնջեք 'Year' սյունը. Խմբավորեք dataset-ը ըստ անունի ու վերագրեք արդյունքը names փոփոխականին
###Code
del baby_names['Year']
baby_names
names = baby_names.groupby('Name')
names
###Output
_____no_output_____
###Markdown
How many different names exist in the dataset? Քանի հատ տարբեր անուն կա dataset-ում?
###Code
len(names)
###Output
_____no_output_____
###Markdown
What is the name with most occurrences? Որն է ամենաշատ հանդիպող անունը?
###Code
names['Count'].sum().idxmax()
###Output
_____no_output_____
###Markdown
What is the median name occurrence? Որն է անունների հանդիպման մեդիանը՞
###Code
names['Count'].sum().median()
###Output
_____no_output_____
###Markdown
Get a summary with the mean, min, max, std and quartiles. Ստացեք dataset-ի ամփոփում mean, min, max, std ու quartile-ներով
###Code
baby_names.describe()
###Output
_____no_output_____ |
Module07_SimpleDynamicalModel/SimpleBucketModel.ipynb | ###Markdown
A Simple Bucket Hydrology Model April 9, 2018
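The model below treats the bucket as a linear reservoir, dW/dt = P - k*W, integrated with an explicit (forward) Euler step. For constant precipitation P the equation has the exact solution W(t) = P/k + (W0 - P/k)*exp(-k*t), which gives a handy check on the numerical scheme. The sketch below assumes constant forcing; the parameter values mirror the ones used later but are otherwise arbitrary:
###Code
# sanity check: forward Euler vs. the exact solution for constant precipitation
import numpy as np

k = 0.02       # drainage coefficient [1/d] (same value as the model below)
W0 = 250.0     # initial storage [mm]
P_const = 4.0  # constant precipitation [mm/d] (assumption for this check only)
dt = 1.0
t = np.arange(0, 100, dt)

W_euler = np.zeros_like(t)
W_euler[0] = W0
for i in range(1, len(t)):
    W_euler[i] = W_euler[i-1] + (P_const - k*W_euler[i-1])*dt

W_exact = P_const/k + (W0 - P_const/k)*np.exp(-k*t)
print("max abs difference [mm]:", np.abs(W_euler - W_exact).max())
###Output
_____no_output_____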
###Code
import numpy as np
import matplotlib.pyplot as plt
Nt = 100
dt = 1.0
P = np.zeros((Nt,1))
P[19:39] = 4.0
print(P.shape)
t = np.arange(1,Nt+1,1)
print(t.shape)
plt.figure(1)
plt.bar(t,P)
plt.ylabel('Precip. [mm/d]')
plt.xlabel('Time [d]')
plt.show()
k1 = 0.02 # Drainage coefficient in units of day^-1
W1_0 = 250.0 # Water storage in units of mm
# Initializing a data container for our water storage at each time step
W1 = np.zeros(t.shape)
# Update initial condition
W1[0] = W1_0
# Initializing a data container for our discharge at each time step
Q = np.zeros(t.shape)
# Update initial condition
Q[0] = k1*W1[0]
# The main loop
for i in np.arange(1,Nt,1):
# Compute the value of the derivatives
dW1dt = P[i-1] - k1*W1[i-1]
# Compute the next value of W
W1[i] = W1[i-1] + dW1dt*dt
# Compute the next value of Q
Q[i] = k1*W1[i]
plt.figure(2)
plt.subplot(311)
plt.bar(t,P)
plt.subplot(312)
plt.plot(t,W1)
plt.subplot(313)
plt.plot(t,Q)
plt.show()
###Output
_____no_output_____ |
4_pose_estimation/4-2_DataLoader.ipynb | ###Markdown
4.2 Creating the DataLoader - In this file we create the Dataset and DataLoader used for pose estimation such as OpenPose, targeting the MS COCO dataset. Learning goals: 1. Understand the mask data 2. Be able to implement the Dataset and DataLoader classes used by OpenPose 3. Understand what the OpenPose preprocessing and data augmentation are doing. Preparation: prepare the data used in this chapter by following the instructions in the book.
###Code
# import the required packages
import json
import os
import os.path as osp
import numpy as np
import cv2
from PIL import Image
from matplotlib import cm
import matplotlib.pyplot as plt
%matplotlib inline
import torch.utils.data as data
###Output
_____no_output_____
###Markdown
Create lists of file paths to the images, mask images, and annotation data
###Code
def make_datapath_list(rootpath):
"""
Create lists of file paths to the training and validation images, annotation data, and mask data.
"""
# load the annotation JSON file
json_path = osp.join(rootpath, 'COCO.json')
with open(json_path) as data_file:
data_this = json.load(data_file)
data_json = data_this['root']
# store the indices
num_samples = len(data_json)
train_indexes = []
val_indexes = []
for count in range(num_samples):
if data_json[count]['isValidation'] != 0.:
val_indexes.append(count)
else:
train_indexes.append(count)
# store the image file paths
train_img_list = list()
val_img_list = list()
for idx in train_indexes:
img_path = os.path.join(rootpath, data_json[idx]['img_paths'])
train_img_list.append(img_path)
for idx in val_indexes:
img_path = os.path.join(rootpath, data_json[idx]['img_paths'])
val_img_list.append(img_path)
# store the mask data paths
train_mask_list = []
val_mask_list = []
for idx in train_indexes:
img_idx = data_json[idx]['img_paths'][-16:-4]
anno_path = "./data/mask/train2014/mask_COCO_tarin2014_" + img_idx+'.jpg'
train_mask_list.append(anno_path)
for idx in val_indexes:
img_idx = data_json[idx]['img_paths'][-16:-4]
anno_path = "./data/mask/val2014/mask_COCO_val2014_" + img_idx+'.jpg'
val_mask_list.append(anno_path)
# store the annotation data
train_meta_list = list()
val_meta_list = list()
for idx in train_indexes:
train_meta_list.append(data_json[idx])
for idx in val_indexes:
val_meta_list.append(data_json[idx])
return train_img_list, train_mask_list, val_img_list, val_mask_list, train_meta_list, val_meta_list
# quick check (takes about 10 seconds to run)
train_img_list, train_mask_list, val_img_list, val_mask_list, train_meta_list, val_meta_list = make_datapath_list(
rootpath="./data/")
val_meta_list[24]
###Output
_____no_output_____
###Markdown
Check how the mask data works
###Code
index = 24
# image
img = cv2.imread(val_img_list[index])
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.show()
# mask
mask_miss = cv2.imread(val_mask_list[index])
mask_miss = cv2.cvtColor(mask_miss, cv2.COLOR_BGR2RGB)
plt.imshow(mask_miss)
plt.show()
# blend
blend_img = cv2.addWeighted(img, 0.4, mask_miss, 0.6, 0)
plt.imshow(blend_img)
plt.show()
###Output
_____no_output_____
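###Markdown
The masked (darkened) regions mark people who appear in the image but have no keypoint annotations. During training such regions are typically excluded from the loss by multiplying both the network output and the ground truth by the mask, so unannotated people are not penalised. A minimal sketch of that idea is shown below; the tensor names and shapes are assumptions, not the book's actual training code.
###Code
# sketch: mask-weighted MSE, assuming heatmap tensors of shape (batch, channels, H, W)
# and a mask of the same spatial size with 1 = use pixel, 0 = ignore pixel
import torch

def masked_mse(pred, target, mask):
    # masked pixels contribute zero to the loss
    return torch.mean(((pred - target) * mask) ** 2)
###Output
_____no_output_____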
###Markdown
Create the image preprocessing
###Code
# import the data-processing and data-augmentation classes
from utils.data_augumentation import Compose, get_anno, add_neck, aug_scale, aug_rotate, aug_croppad, aug_flip, remove_illegal_joint, Normalize_Tensor, no_Normalize_Tensor
class DataTransform():
"""
Preprocessing class for the images, masks, and annotations.
Behaves differently at training time and at inference time.
At training time it applies data augmentation.
"""
def __init__(self):
self.data_transform = {
'train': Compose([
get_anno(), # store the annotations from the JSON in a dictionary
add_neck(), # reorder the annotation data and additionally add an annotation for the neck
aug_scale(), # scaling
aug_rotate(), # rotation
aug_croppad(), # cropping
aug_flip(), # horizontal flip
remove_illegal_joint(), # remove annotations that fall outside the image
# Normalize_Tensor() # colour normalization and conversion to tensors
no_Normalize_Tensor() # for this section only, skip the colour normalization
]),
'val': Compose([
# validation is omitted in this book
])
}
def __call__(self, phase, meta_data, img, mask_miss):
"""
Parameters
----------
phase : 'train' or 'val'
Specifies the preprocessing mode.
"""
meta_data, img, mask_miss = self.data_transform[phase](
meta_data, img, mask_miss)
return meta_data, img, mask_miss
# 動作確認
# 画像読み込み
index = 24
img = cv2.imread(val_img_list[index])
mask_miss = cv2.imread(val_mask_list[index])
meat_data = val_meta_list[index]
# 画像前処理
transform = DataTransform()
meta_data, img, mask_miss = transform("train", meat_data, img, mask_miss)
# 画像表示
img = img.numpy().transpose((1, 2, 0))
plt.imshow(img)
plt.show()
# マスク表示
mask_miss = mask_miss.numpy().transpose((1, 2, 0))
plt.imshow(mask_miss)
plt.show()
# 合成 RGBにそろえてから
img = Image.fromarray(np.uint8(img*255))
img = np.asarray(img.convert('RGB'))
mask_miss = Image.fromarray(np.uint8((mask_miss)))
mask_miss = np.asarray(mask_miss.convert('RGB'))
blend_img = cv2.addWeighted(img, 0.4, mask_miss, 0.6, 0)
plt.imshow(blend_img)
plt.show()
###Output
_____no_output_____
###Markdown
Create the annotation data used as the ground-truth supervision for the training data ※ Issue [142] (https://github.com/YutaroOgawa/pytorch_advanced/issues/142) If an error occurs when plotting below, please change your Matplotlib version to 3.1.3.
###Code
from utils.dataloader import get_ground_truth
# 画像読み込み
index = 24
img = cv2.imread(val_img_list[index])
mask_miss = cv2.imread(val_mask_list[index])
meat_data = val_meta_list[index]
# 画像前処理
meta_data, img, mask_miss = transform("train", meat_data, img, mask_miss)
img = img.numpy().transpose((1, 2, 0))
mask_miss = mask_miss.numpy().transpose((1, 2, 0))
# OpenPoseのアノテーションデータ生成
heat_mask, heatmaps, paf_mask, pafs = get_ground_truth(meta_data, mask_miss)
# 画像表示
plt.imshow(img)
plt.show()
# 左肘のheatmapを確認
# 元画像
img = Image.fromarray(np.uint8(img*255))
img = np.asarray(img.convert('RGB'))
# 左肘
heat_map = heatmaps[:, :, 6] # 6は左肘
heat_map = Image.fromarray(np.uint8(cm.jet(heat_map)*255))
heat_map = np.asarray(heat_map.convert('RGB'))
heat_map = cv2.resize(
heat_map, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_CUBIC)
# 注意:heatmapは画像サイズが1/8になっているので拡大する
# 合成して表示
blend_img = cv2.addWeighted(img, 0.5, heat_map, 0.5, 0)
plt.imshow(blend_img)
plt.show()
# 左手首
heat_map = heatmaps[:, :, 7] # 7は左手首
heat_map = Image.fromarray(np.uint8(cm.jet(heat_map)*255))
heat_map = np.asarray(heat_map.convert('RGB'))
heat_map = cv2.resize(
heat_map, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_CUBIC)
# 合成して表示
blend_img = cv2.addWeighted(img, 0.5, heat_map, 0.5, 0)
plt.imshow(blend_img)
plt.show()
# 左肘と左手首へのPAFを確認
paf = pafs[:, :, 24] # 24は左肘と左手首をつなぐxベクトルのPAF
paf = Image.fromarray(np.uint8((paf)*255))
paf = np.asarray(paf.convert('RGB'))
paf = cv2.resize(
paf, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_CUBIC)
# 合成して表示
blend_img = cv2.addWeighted(img, 0.3, paf, 0.7, 0)
plt.imshow(blend_img)
plt.show()
# PAFのみを表示
paf = pafs[:, :, 24] # 24は左肘と左手首をつなぐxベクトルのPAF
paf = Image.fromarray(np.uint8((paf)*255))
paf = np.asarray(paf.convert('RGB'))
paf = cv2.resize(
paf, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_CUBIC)
plt.imshow(paf)
###Output
_____no_output_____
###Markdown
Create the Dataset
###Code
from utils.dataloader import get_ground_truth
class COCOkeypointsDataset(data.Dataset):
"""
Class that creates the Dataset for MS COCO Cocokeypoints. Inherits from PyTorch's Dataset class.
Attributes
----------
img_list : list
List holding the paths to the images
anno_list : list
List holding the paths to the annotations
phase : 'train' or 'test'
Sets whether this is for training or validation.
transform : object
Instance of the preprocessing class
"""
def __init__(self, img_list, mask_list, meta_list, phase, transform):
self.img_list = img_list
self.mask_list = mask_list
self.meta_list = meta_list
self.phase = phase
self.transform = transform
def __len__(self):
'''Return the number of images'''
return len(self.img_list)
def __getitem__(self, index):
img, heatmaps, heat_mask, pafs, paf_mask = self.pull_item(index)
return img, heatmaps, heat_mask, pafs, paf_mask
def pull_item(self, index):
'''Get the image as Tensor-format data, plus the annotations and masks'''
# 1. 画像読み込み
image_file_path = self.img_list[index]
img = cv2.imread(image_file_path) # [高さ][幅][色BGR]
# 2. マスクとアノテーション読み込み
mask_miss = cv2.imread(self.mask_list[index])
meat_data = self.meta_list[index]
# 3. 画像前処理
meta_data, img, mask_miss = self.transform(
self.phase, meat_data, img, mask_miss)
# 4. 正解アノテーションデータの取得
mask_miss_numpy = mask_miss.numpy().transpose((1, 2, 0))
heat_mask, heatmaps, paf_mask, pafs = get_ground_truth(
meta_data, mask_miss_numpy)
# 5. The mask data has RGB values of (1,1,1) or (0,0,0), so drop the extra dimension
# In the mask data, masked locations have the value 0 and everything else has the value 1
heat_mask = heat_mask[:, :, :, 0]
paf_mask = paf_mask[:, :, :, 0]
# 6. The channel dimension is at the end, so reorder it
# e.g. paf_mask: torch.Size([46, 46, 38])
# → torch.Size([38, 46, 46])
paf_mask = paf_mask.permute(2, 0, 1)
heat_mask = heat_mask.permute(2, 0, 1)
pafs = pafs.permute(2, 0, 1)
heatmaps = heatmaps.permute(2, 0, 1)
return img, heatmaps, heat_mask, pafs, paf_mask
# 動作確認
train_dataset = COCOkeypointsDataset(
val_img_list, val_mask_list, val_meta_list, phase="train", transform=DataTransform())
val_dataset = COCOkeypointsDataset(
val_img_list, val_mask_list, val_meta_list, phase="val", transform=DataTransform())
# データの取り出し例
item = train_dataset.__getitem__(0)
print(item[0].shape) # img
print(item[1].shape) # heatmaps,
print(item[2].shape) # heat_mask
print(item[3].shape) # pafs
print(item[4].shape) # paf_mask
###Output
_____no_output_____
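###Markdown
With the OpenPose COCO configuration assumed by this implementation (19 heatmap channels and 38 PAF channels at a 46×46 resolution — the channel counts are an assumption based on the comments above), the shapes printed by the cell above can also be checked programmatically:
###Code
# quick shape check on one sample; the expected channel counts (19 heatmaps, 38 PAFs) are assumptions
img, heatmaps, heat_mask, pafs, paf_mask = train_dataset[0]
print(heatmaps.shape, pafs.shape)
assert heatmaps.shape[-2:] == (46, 46)
assert pafs.shape[0] == 38
###Output
_____no_output_____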
###Markdown
Create the DataLoader
###Code
# create the DataLoaders
batch_size = 8
train_dataloader = data.DataLoader(
train_dataset, batch_size=batch_size, shuffle=True)
val_dataloader = data.DataLoader(
val_dataset, batch_size=batch_size, shuffle=False)
# gather them into a dictionary
dataloaders_dict = {"train": train_dataloader, "val": val_dataloader}
# quick check of the behaviour
batch_iterator = iter(dataloaders_dict["train"]) # convert to an iterator
item = next(batch_iterator) # retrieve the first element
print(item[0].shape) # img
print(item[1].shape) # heatmaps,
print(item[2].shape) # heat_mask
print(item[3].shape) # pafs
print(item[4].shape) # paf_mask
###Output
_____no_output_____ |
test/eis-metadata-validation/Planon metadata validation5b.ipynb | ###Markdown
EIS metadata validation scriptUsed to validate Planon output with spreadsheet input 1. Data import
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Read data. There are two datasets: Planon and Master. The latter is the EIS data nomenclature that was created. Master is made up of two subsets: loggers and meters. Loggers are sometimes called controllers and meters are sometimes called sensors. In rare cases meters or sensors are also called channels.
###Code
planon=pd.read_excel('EIS Assets v2.xlsx',index_col = 'Code')
#master_loggerscontrollers_old = pd.read_csv('LoggersControllers.csv', index_col = 'Asset Code')
#master_meterssensors_old = pd.read_csv('MetersSensors.csv', encoding = 'macroman', index_col = 'Asset Code')
master='MASTER PlanonLoggersAndMeters 17 10 16.xlsx'
master_loggerscontrollers=pd.read_excel(master,sheetname='Loggers Controllers', index_col = 'Asset Code')
master_meterssensors=pd.read_excel(master,sheetname='Meters Sensors', encoding = 'macroman', index_col = 'Asset Code')
planon['Code']=planon.index
master_loggerscontrollers['Code']=master_loggerscontrollers.index
master_meterssensors['Code']=master_meterssensors.index
set(master_meterssensors['Classification Group'])
set(master_loggerscontrollers['Classification Group'])
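# Normalise the Master meter codes: keep the first '-' (the building separator) and turn any later '-' separators into '/', so meter codes read like BUILDING-LOGGER/CHANNEL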
new_index=[]
for i in master_meterssensors.index:
if '/' not in i:
new_index.append(i[:i.find('-')+1]+i[i.find('-')+1:].replace('-','/'))
else:
new_index.append(i)
master_meterssensors.index=new_index
master_meterssensors['Code']=master_meterssensors.index
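# For meters that sit on a BMS controller, keep only channels whose name does not start with 'N', 'n', 'o' or 'i'; meters on other logger types are all kept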
new_index=[]
for i in master_meterssensors.index:
logger=i[:i.find('/')]
if master_loggerscontrollers.loc[logger]['Classification Group']=='BMS controller':
meter=i[i.find('/')+1:]
if meter[0] not in {'N','n','o','i'}:
new_index.append(i)
else:
new_index.append(i)
len(master_meterssensors)
master_meterssensors=master_meterssensors.loc[new_index]
len(master_meterssensors)
master_meterssensors.to_csv('meterssensors.csv')
master_loggerscontrollers.to_csv('loggerscontrollers.csv')
###Output
_____no_output_____
###Markdown
Unify the index: convert everything to strings and strip trailing spaces.
###Code
planon.index=[str(i).strip() for i in planon.index]
master_loggerscontrollers.index=[str(i).strip() for i in master_loggerscontrollers.index]
master_meterssensors.index=[str(i).strip() for i in master_meterssensors.index]
###Output
_____no_output_____
###Markdown
Drop duplicates (shouldn't be any)
###Code
planon.drop_duplicates(inplace=True)
master_loggerscontrollers.drop_duplicates(inplace=True)
master_meterssensors.drop_duplicates(inplace=True)
###Output
_____no_output_____
###Markdown
Split Planon import into loggers and meters Drop duplicates (shouldn't be any)
###Code
# Split the Planon file into 2, one for loggers & controllers, and one for meters & sensors.
planon_loggerscontrollers = planon.loc[(planon['Classification Group'] == 'EN.EN4 BMS Controller') | (planon['Classification Group'] == 'EN.EN1 Data Logger')]
planon_meterssensors = planon.loc[(planon['Classification Group'] == 'EN.EN2 Energy Meter') | (planon['Classification Group'] == 'EN.EN3 Energy Sensor')]
planon_loggerscontrollers.drop_duplicates(inplace=True)
planon_meterssensors.drop_duplicates(inplace=True)
###Output
C:\Anaconda2\envs\python3\lib\site-packages\pandas\util\decorators.py:91: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
return func(*args, **kwargs)
###Markdown
Is the index unique? Show the number of duplicates in the index.
###Code
len(planon_loggerscontrollers.index[planon_loggerscontrollers.index.duplicated()])
len(planon_meterssensors.index[planon_meterssensors.index.duplicated()])
###Output
_____no_output_____
###Markdown
Meters are not unique. This is because of the spaces served. This is OK for now; we will deal with duplicates at the comparison stage. The same is true for loggers - in the unlikely event that there are duplicates in the future.
###Code
planon_meterssensors.head(3)
###Output
_____no_output_____
###Markdown
2. Validation Create list of all buildings present in Planon export. These are buildings to check the data against from Master.
###Code
buildings=set(planon_meterssensors['BuildingNo.'])
buildings
len(buildings)
###Output
_____no_output_____
###Markdown
 2.1. Meters Create a dataframe slice for validation from `master_meterssensors` in which only the buildings contained in `buildings` are kept. Save this new slice into `master_meterssensors_for_validation`. This is done by creating sub-slices of the dataframe for each building, then concatenating them all together.
###Code
master_meterssensors_for_validation = \
pd.concat([master_meterssensors.loc[master_meterssensors['Building Code'] == building] \
for building in buildings])
master_meterssensors_for_validation.head(2)
#alternative method
master_meterssensors_for_validation2 = \
master_meterssensors[master_meterssensors['Building Code'].isin(buildings)]
master_meterssensors_for_validation2.head(2)
###Output
_____no_output_____
###Markdown
Planon sensors are not unique because of the spaces served convention in the two data architectures. The Planon architecture devotes a new line for each space served - hence the not unique index. The Master architecture lists all the spaces only once, as a list, therefore it has a unique index. We will need to take this into account and create matching dataframe out of planon for comparison, with a unique index.
###Code
len(master_meterssensors_for_validation)
len(planon_meterssensors)-len(planon_meterssensors.index[planon_meterssensors.index.duplicated()])
###Output
_____no_output_____
###Markdown
Sort datasets after index for easier comparison.
###Code
master_meterssensors_for_validation.sort_index(inplace=True)
planon_meterssensors.sort_index(inplace=True)
###Output
C:\Anaconda2\envs\python3\lib\site-packages\ipykernel\__main__.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
from ipykernel import kernelapp as app
###Markdown
2.1.1 Slicing of meters to only certain columns of comparison
###Code
planon_meterssensors.T
master_meterssensors_for_validation.T
###Output
_____no_output_____
###Markdown
Create a dictionary that maps Planon column names onto Master. From Nicola: Code (Asset Code), Description, EIS ID (Channel), Utility Type, Fiscal Meter, Tenant Meter. `Building code` and `Building name` are implicitly included. `Logger Serial Number`, `IP` or `MAC` would be essential to include, as well as `Make` and `Model`. `Additional Location Info` is not essential but would be useful to have. Locations (`Locations.Space.Space number` and `Space Name`) are included in the Planon export - but this is their only viable data source, therefore they are not validated against.
###Code
#Planon:Master
meters_match_dict={
"BuildingNo.":"Building Code",
"Building":"Building Name",
"Description":"Description",
"EIS ID":"Logger Channel",
"Tenant Meter.Name":"Tenant meter",
"Fiscal Meter.Name":"Fiscal meter",
"Code":"Code"
}
###Output
_____no_output_____
###Markdown
Filter both dataframes based on these new columns. Then remove duplicates. Currently, this leads to loss of information of spaces served, but also a unique index for the Planon dataframe, therefore bringing the dataframes closer to each other. When including spaces explicitly in the comparison (if we want to - or just trust the Planon space mapping), this needs to be modified.
###Code
master_meterssensors_for_validation_filtered=master_meterssensors_for_validation[list(meters_match_dict.values())]
planon_meterssensors_filtered=planon_meterssensors[list(meters_match_dict.keys())]
master_meterssensors_for_validation_filtered.head(2)
planon_meterssensors_filtered.head(2)
###Output
_____no_output_____
###Markdown
Unify the headers and drop duplicates (bear in mind the spaces argument; this is where it needs to be brought back in in the future!).
###Code
planon_meterssensors_filtered.columns=[meters_match_dict[i] for i in planon_meterssensors_filtered]
planon_meterssensors_filtered.drop_duplicates(inplace=True)
master_meterssensors_for_validation_filtered.drop_duplicates(inplace=True)
planon_meterssensors_filtered.head(2)
###Output
_____no_output_____
###Markdown
Fiscal/Tenant meter name needs fixing from Yes/No and 1/0.
###Code
planon_meterssensors_filtered['Fiscal meter']=planon_meterssensors_filtered['Fiscal meter'].isin(['Yes'])
planon_meterssensors_filtered['Tenant meter']=planon_meterssensors_filtered['Tenant meter'].isin(['Yes'])
master_meterssensors_for_validation_filtered['Fiscal meter']=master_meterssensors_for_validation_filtered['Fiscal meter'].isin([1])
master_meterssensors_for_validation_filtered['Tenant meter']=master_meterssensors_for_validation_filtered['Tenant meter'].isin([1])
master_meterssensors_for_validation_filtered.head(2)
planon_meterssensors_filtered.head(2)
###Output
_____no_output_____
###Markdown
Cross-check missing meters
###Code
a=np.sort(list(set(planon_meterssensors_filtered.index)))
b=np.sort(list(set(master_meterssensors_for_validation_filtered.index)))
meterssensors_not_in_planon=[]
for i in b:
if i not in a:
print(i+',',end=" "),
meterssensors_not_in_planon.append(i)
print('\n\nMeters in Master, but not in Planon:',
len(meterssensors_not_in_planon),'/',len(b),':',
round(len(meterssensors_not_in_planon)/len(b)*100,3),'%')
(set([i[:5] for i in meterssensors_not_in_planon]))
a=np.sort(list(set(planon_meterssensors_filtered.index)))
b=np.sort(list(set(master_meterssensors_for_validation_filtered.index)))
meterssensors_not_in_master=[]
for i in a:
if i not in b:
print(i+',',end=" "),
meterssensors_not_in_master.append(i)
print('\n\nMeters in Planon, not in Master:',
len(meterssensors_not_in_master),'/',len(a),':',
round(len(meterssensors_not_in_master)/len(a)*100,3),'%')
len(set([i for i in meterssensors_not_in_master]))
set([i[:9] for i in meterssensors_not_in_master])
set([i[:5] for i in meterssensors_not_in_master])
###Output
_____no_output_____
###Markdown
Check for duplicates in index, but not duplicates over the entire row
###Code
print(len(planon_meterssensors_filtered.index))
print(len(set(planon_meterssensors_filtered.index)))
print(len(master_meterssensors_for_validation_filtered.index))
print(len(set(master_meterssensors_for_validation_filtered.index)))
master_meterssensors_for_validation_filtered[master_meterssensors_for_validation_filtered.index.duplicated()]
###Output
_____no_output_____
###Markdown
The duplicates are the `nan`s. Remove these for now. Could revisit later to do an index-less comparison, only over row contents.
###Code
good_index=[i for i in master_meterssensors_for_validation_filtered.index if str(i).lower().strip()!='nan']
master_meterssensors_for_validation_filtered=master_meterssensors_for_validation_filtered.loc[good_index]
master_meterssensors_for_validation_filtered.drop_duplicates(inplace=True)
len(planon_meterssensors_filtered)
len(master_meterssensors_for_validation_filtered)
###Output
_____no_output_____
###Markdown
Do the comparison only on common indices. Need to revisit and identify the cause of the missing meters, both ways (5 Planon->Meters and 30 Meters->Planon in this example).
###Code
comon_index=list(set(master_meterssensors_for_validation_filtered.index).intersection(set(planon_meterssensors_filtered.index)))
len(comon_index)
master_meterssensors_for_validation_intersected=master_meterssensors_for_validation_filtered.loc[comon_index].sort_index()
planon_meterssensors_intersected=planon_meterssensors_filtered.loc[comon_index].sort_index()
len(master_meterssensors_for_validation_intersected)
len(planon_meterssensors_intersected)
###Output
_____no_output_____
###Markdown
Still have duplicate indices. For now we just drop and keep the first.
###Code
master_meterssensors_for_validation_intersected = master_meterssensors_for_validation_intersected[~master_meterssensors_for_validation_intersected.index.duplicated(keep='first')]
master_meterssensors_for_validation_intersected.head(2)
planon_meterssensors_intersected.head(2)
###Output
_____no_output_____
###Markdown
2.1.2. Primitive comparison
###Code
planon_meterssensors_intersected==master_meterssensors_for_validation_intersected
np.all(planon_meterssensors_intersected==master_meterssensors_for_validation_intersected)
###Output
_____no_output_____
###Markdown
2.1.3. Horizontal comparison Number of cells matching
###Code
(planon_meterssensors_intersected==master_meterssensors_for_validation_intersected).sum()
###Output
_____no_output_____
###Markdown
Percentage matching
###Code
(planon_meterssensors_intersected==master_meterssensors_for_validation_intersected).sum()/\
len(planon_meterssensors_intersected)*100
((planon_meterssensors_intersected==master_meterssensors_for_validation_intersected).sum()/\
len(planon_meterssensors_intersected)*100).plot(kind='bar')
###Output
_____no_output_____
###Markdown
2.1.4. Vertical comparison
###Code
df=pd.DataFrame((planon_meterssensors_intersected.T==master_meterssensors_for_validation_intersected.T).sum())
df
df=pd.DataFrame((planon_meterssensors_intersected.T==master_meterssensors_for_validation_intersected.T).sum()/\
len(planon_meterssensors_intersected.T)*100)
df[df[0]<100]
###Output
_____no_output_____
###Markdown
2.1.5. Smart(er) comparison Not all of the dataframe matches. Let us do some basic string formatting, maybe that helps.
###Code
sum(planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description'])
planon_meterssensors_intersected['Description']=[str(s).lower().strip()\
.replace(' ',' ').replace(' ',' ').replace('nan','')\
for s in planon_meterssensors_intersected['Description'].values]
master_meterssensors_for_validation_intersected['Description']=[str(s).lower().strip()\
.replace(' ',' ').replace(' ',' ').replace('nan','')\
for s in master_meterssensors_for_validation_intersected['Description'].values]
sum(planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description'])
###Output
_____no_output_____
###Markdown
Some errors are fixed, some are left. Let's see which ones. These are either: - the wrong duplicate was dropped, - human input errors in the description, or - actual errors somewhere in the indexing.
###Code
for i in planon_meterssensors_intersected[planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description']].index:
print(i,'\t\tPlanon:',planon_meterssensors_intersected.loc[i]['Description'],'\t\tMaster:',master_meterssensors_for_validation_intersected.loc[i]['Description'])
###Output
_____no_output_____
###Markdown
Let us repeat the exercise for `Logger Channel`. Cross-validate, flag as highly likely error where both mismatch.
###Code
sum(planon_meterssensors_intersected['Logger Channel']!=master_meterssensors_for_validation_intersected['Logger Channel'])
planon_meterssensors_intersected['Logger Channel']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in planon_meterssensors_intersected['Logger Channel'].values]
master_meterssensors_for_validation_intersected['Logger Channel']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in master_meterssensors_for_validation_intersected['Logger Channel'].values]
sum(planon_meterssensors_intersected['Logger Channel']!=master_meterssensors_for_validation_intersected['Logger Channel'])
###Output
_____no_output_____
###Markdown
All errors fixed on logger channels.
###Code
for i in planon_meterssensors_intersected[planon_meterssensors_intersected['Logger Channel']!=master_meterssensors_for_validation_intersected['Logger Channel']].index:
print(i,'\t\tPlanon:',planon_meterssensors_intersected.loc[i]['Logger Channel'],'\t\tMaster:',master_meterssensors_for_validation_intersected.loc[i]['Logger Channel'])
###Output
_____no_output_____
###Markdown
New error percentage:
###Code
(planon_meterssensors_intersected!=master_meterssensors_for_validation_intersected).sum()/\
len(planon_meterssensors_intersected)*100
###Output
_____no_output_____
###Markdown
2.2. Loggers
###Code
buildings=set(planon_loggerscontrollers['BuildingNo.'])
buildings
master_loggerscontrollers_for_validation = \
pd.concat([master_loggerscontrollers.loc[master_loggerscontrollers['Building Code'] == building] \
for building in buildings])
master_loggerscontrollers_for_validation.head(2)
len(master_loggerscontrollers_for_validation)
len(planon_loggerscontrollers)-len(planon_loggerscontrollers.index[planon_loggerscontrollers.index.duplicated()])
master_loggerscontrollers_for_validation.sort_index(inplace=True)
planon_loggerscontrollers.sort_index(inplace=True)
planon_loggerscontrollers.T
master_loggerscontrollers_for_validation.T
###Output
_____no_output_____
###Markdown
Create a dictionary that maps Planon column names onto Master. From Nicola: EIS ID (Serial Number), Make, Model, Description, Code (Asset Code), Building Code. `Building code` and `Building name` are implicitly included. `Logger IP` or `MAC` would be essential to include, as well as `Make` and `Model`. `Additional Location Info` is not essential but would be useful to have. Locations (`Locations.Space.Space number` and `Space Name`) are included in the Planon export - but this is their only viable data source, therefore they are not validated against.
###Code
#Planon:Master
loggers_match_dict={
"BuildingNo.":"Building Code",
"Building":"Building Name",
"Description":"Description",
"EIS ID":"Logger Serial Number",
"Make":"Make",
"Model":"Model",
"Code":"Code"
}
master_loggerscontrollers_for_validation_filtered=master_loggerscontrollers_for_validation[list(loggers_match_dict.values())]
planon_loggerscontrollers_filtered=planon_loggerscontrollers[list(loggers_match_dict.keys())]
master_loggerscontrollers_for_validation_filtered.head(2)
planon_loggerscontrollers_filtered.head(2)
planon_loggerscontrollers_filtered.columns=[loggers_match_dict[i] for i in planon_loggerscontrollers_filtered]
planon_loggerscontrollers_filtered.drop_duplicates(inplace=True)
master_loggerscontrollers_for_validation_filtered.drop_duplicates(inplace=True)
planon_loggerscontrollers_filtered.head(2)
master_loggerscontrollers_for_validation_filtered.head(2)
a=np.sort(list(set(planon_loggerscontrollers_filtered.index)))
b=np.sort(list(set(master_loggerscontrollers_for_validation_filtered.index)))
loggerscontrollers_not_in_planon=[]
for i in b:
if i not in a:
print(i+',',end=" "),
loggerscontrollers_not_in_planon.append(i)
print('\n\nLoggers in Master, but not in Planon:',
len(loggerscontrollers_not_in_planon),'/',len(b),':',
round(len(loggerscontrollers_not_in_planon)/len(b)*100,3),'%')
a=np.sort(list(set(planon_loggerscontrollers_filtered.index)))
b=np.sort(list(set(master_loggerscontrollers_for_validation_filtered.index)))
loggerscontrollers_not_in_master=[]
for i in a:
if i not in b:
print(i+',',end=" "),
loggerscontrollers_not_in_master.append(i)
print('\n\nLoggers in Planon, not in Master:',
len(loggerscontrollers_not_in_master),'/',len(a),':',
round(len(loggerscontrollers_not_in_master)/len(a)*100,3),'%')
print(len(planon_loggerscontrollers_filtered.index))
print(len(set(planon_loggerscontrollers_filtered.index)))
print(len(master_loggerscontrollers_for_validation_filtered.index))
print(len(set(master_loggerscontrollers_for_validation_filtered.index)))
master_loggerscontrollers_for_validation_filtered[master_loggerscontrollers_for_validation_filtered.index.duplicated()]
comon_index=list(set(master_loggerscontrollers_for_validation_filtered.index).intersection(set(planon_loggerscontrollers_filtered.index)))
master_loggerscontrollers_for_validation_intersected=master_loggerscontrollers_for_validation_filtered.loc[comon_index].sort_index()
planon_loggerscontrollers_intersected=planon_loggerscontrollers_filtered.loc[comon_index].sort_index()
master_loggerscontrollers_for_validation_intersected.head(2)
planon_loggerscontrollers_intersected.head(2)
planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected
###Output
_____no_output_____
###Markdown
Loggers matching
###Code
(planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected).sum()
###Output
_____no_output_____
###Markdown
Percentage matching
###Code
(planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected).sum()/\
len(planon_loggerscontrollers_intersected)*100
((planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected).sum()/\
len(planon_loggerscontrollers_intersected)*100).plot(kind='bar')
###Output
_____no_output_____
###Markdown
Loggers not matching on `Building Name`.
###Code
sum(planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name'])
planon_loggerscontrollers_intersected['Building Name']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in planon_loggerscontrollers_intersected['Building Name'].values]
master_loggerscontrollers_for_validation_intersected['Building Name']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in master_loggerscontrollers_for_validation_intersected['Building Name'].values]
sum(planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name'])
###Output
_____no_output_____
###Markdown
That didn't help.
###Code
for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name']].index:
print(i,'\t\tPlanon:',planon_loggerscontrollers_intersected.loc[i]['Building Name'],'\t\tMaster:',master_loggerscontrollers_for_validation_intersected.loc[i]['Building Name'])
###Output
EX001-B01 Planon: roads - main campus Master: underpass
MC029-B01 Planon: cetad Master: bowland hall cetad
MC033-L03 Planon: county john creed Master: john creed
MC047-B02 Planon: welcome centre Master: conference centre
MC047-L01 Planon: welcome centre Master: conference centre
MC047-L02 Planon: welcome centre Master: conference centre
MC055-B01 Planon: furness residences Master: furness blocks
MC071-B01 Planon: furness college Master: furness
MC072-B02 Planon: psc Master: psc building
MC072-B04 Planon: psc Master: psc building
MC103-B01 Planon: lancaster house hotel Master: hotel
MC198-B01 Planon: grizedale college - offices, bar & social space Master: grizedale
MC198-B02 Planon: grizedale college - offices, bar & social space Master: grizedale
OC004-B01 Planon: chancellor's wharf, wyre house Master: chancellors wharf
OC005-B01 Planon: chancellor's wharf, lune house Master: chancellors wharf
OC006-B01 Planon: chancellor's wharf, kent house Master: chancellors wharf
###Markdown
Follow up with a lexical (edit) distance comparison; that would flag these pairs as matches.
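A minimal sketch of such a check using only the standard library (the similarity metric and any threshold to apply are assumptions that would need tuning):

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    return SequenceMatcher(None, str(a).lower(), str(b).lower()).ratio()

# pairs taken from the mismatches printed above
for planon_name, master_name in [('furness college', 'furness'),
                                 ('psc', 'psc building'),
                                 ("chancellor's wharf, wyre house", 'chancellors wharf')]:
    print(planon_name, '|', master_name, '->', round(name_similarity(planon_name, master_name), 2))
```

Loggers not matching on `Serial Number`.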
###Code
sum(planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number'])
planon_loggerscontrollers_intersected['Logger Serial Number']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ').replace('{','').replace('}','') for s in planon_loggerscontrollers_intersected['Logger Serial Number'].values]
master_loggerscontrollers_for_validation_intersected['Logger Serial Number']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ').replace('{','').replace('}','') for s in master_loggerscontrollers_for_validation_intersected['Logger Serial Number'].values]
sum(planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number'])
for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number']].index:
print(i,'\t\tPlanon:',planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'],'\t\tMaster:',master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number'])
###Output
MC032-L04 Planon: 50198367 Master: 050198367e00
MC046-L05 Planon: 50198895300 Master: 050198895300
MC063-L01 Planon: 50198829500 Master: 050198829500
MC064-L03 Planon: 50198872600 Master: 050198872600
MC071-L02 Planon: 50201286300 Master: 050201286300
MC071-L05 Planon: 50201221 Master: 050201221e00
MC071-L16 Planon: 50198904000 Master: 050198904000
MC078-L03 Planon: 50198864300 Master: 050198864300
MC102-L01 Planon: 50157909800 Master: 050157909800
###Markdown
Technically the same, but there is a number format error. Compare based on the float value and, if the values match, replace one of them. This needs to be amended, as it will throw a `cannot convert to float` exception if non-numeric strings are left in from the previous step.
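A minimal sketch of how that guard could look (an assumption about the amendment, not the code actually applied below):

```python
def safe_serial_match(a, b):
    # compare as floats only when both values are numeric; otherwise fall back to strings
    try:
        return float(a) == float(b)
    except ValueError:
        return str(a) == str(b)
```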
###Code
z1=[]
z2=[]
for i in planon_loggerscontrollers_intersected.index:
if planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number']:
if float(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])==\
float(master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number']):
z1.append(str(int(float(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number']))))
z2.append(str(int(float(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number']))))
else:
z1.append(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])
z2.append(master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number'])
else:
z1.append(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])
z2.append(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])
planon_loggerscontrollers_intersected['Logger Serial Number']=z1
master_loggerscontrollers_for_validation_intersected['Logger Serial Number']=z2
for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number']].index:
print(i,'\t\tPlanon:',planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'],'\t\tMaster:',master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number'])
###Output
_____no_output_____
###Markdown
New error percentage:
###Code
(planon_loggerscontrollers_intersected!=master_loggerscontrollers_for_validation_intersected).sum()/\
len(planon_loggerscontrollers_intersected)*100
###Output
_____no_output_____
###Markdown
(Bearing in mind the above, this is technically 0)
###Code
a=np.sort(list(set(planon_meterssensors_filtered.index)))
b=np.sort(list(set(master_meterssensors_for_validation_filtered.index)))
meterssensors_not_in_planon=[]
for i in b:
if i not in a:
print(i+',',end=" "),
meterssensors_not_in_planon.append(i)
print('\n\nMeters in Master, but not in Planon:',
len(meterssensors_not_in_planon),'/',len(b),':',
round(len(meterssensors_not_in_planon)/len(b)*100,3),'%')
q1=pd.DataFrame(meterssensors_not_in_planon)
a=np.sort(list(set(planon_meterssensors_filtered.index)))
b=np.sort(list(set(master_meterssensors_for_validation_filtered.index)))
meterssensors_not_in_master=[]
for i in a:
if i not in b:
print(i+',',end=" "),
meterssensors_not_in_master.append(i)
print('\n\nMeters in Planon, not in Master:',
len(meterssensors_not_in_master),'/',len(a),':',
round(len(meterssensors_not_in_master)/len(a)*100,3),'%')
q2=pd.DataFrame(meterssensors_not_in_master)
a=np.sort(list(set(planon_loggerscontrollers_filtered.index)))
b=np.sort(list(set(master_loggerscontrollers_for_validation_filtered.index)))
loggerscontrollers_not_in_planon=[]
for i in b:
if i not in a:
print(i+',',end=" "),
loggerscontrollers_not_in_planon.append(i)
print('\n\nLoggers in Master, but not in Planon:',
len(loggerscontrollers_not_in_planon),'/',len(b),':',
round(len(loggerscontrollers_not_in_planon)/len(b)*100,3),'%')
q3=pd.DataFrame(loggerscontrollers_not_in_planon)
a=np.sort(list(set(planon_loggerscontrollers_filtered.index)))
b=np.sort(list(set(master_loggerscontrollers_for_validation_filtered.index)))
loggerscontrollers_not_in_master=[]
for i in a:
if i not in b:
print(i+',',end=" "),
loggerscontrollers_not_in_master.append(i)
print('\n\nLoggers in Planon, not in Master:',
len(loggerscontrollers_not_in_master),'/',len(a),':',
round(len(loggerscontrollers_not_in_master)/len(a)*100,3),'%')
q4=pd.DataFrame(loggerscontrollers_not_in_master)
q5=pd.DataFrame((planon_meterssensors_intersected!=master_meterssensors_for_validation_intersected).sum()/\
len(planon_meterssensors_intersected)*100)
q6=pd.DataFrame((planon_loggerscontrollers_intersected!=master_loggerscontrollers_for_validation_intersected).sum()/\
len(planon_loggerscontrollers_intersected)*100)
w1=[]
for i in planon_meterssensors_intersected[planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description']].index:
w1.append({"Meter":i,'Planon':planon_meterssensors_intersected.loc[i]['Description'],
'Master':master_meterssensors_for_validation_intersected.loc[i]['Description']})
q7=pd.DataFrame(w1)
w2=[]
for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name']].index:
w2.append({"Logger":i,'Planon':planon_loggerscontrollers_intersected.loc[i]['Building Name'],
'Master':master_loggerscontrollers_for_validation_intersected.loc[i]['Building Name']})
q8=pd.DataFrame(w2)
writer = pd.ExcelWriter('final5b.xlsx')
q1.to_excel(writer,'Meters Master, not Planon')
q2.to_excel(writer,'Meters Planon, not Master')
q3.to_excel(writer,'Loggers Master, not Planon')
q4.to_excel(writer,'Loggers Planon, not Master')
q5.to_excel(writer,'Meters error perc')
q6.to_excel(writer,'Loggers error perc')
q7.to_excel(writer,'Meters naming conflicts')
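q8.to_excel(writer,'Loggers naming conflicts')  # assumption: q8 is built above but was never written out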
q1
q9=[]
try:
for i in q1[0].values:
if i[:i.find('/')] not in set(q3[0].values):
q9.append(i)
except:pass
pd.DataFrame(q9).to_excel(writer,'Meters Master, not Planon, not Logger')
q10=[]
try:
for i in q1[0].values:
if 'L82' not in i:
q10.append(i)
except:pass
pd.DataFrame(q10).to_excel(writer,'Meters Master, not Planon, not L82')
q11=[]
try:
for i in q1[0].values:
if 'MC210' not in i:
q11.append(i)
except:pass
pd.DataFrame(q11).to_excel(writer,'Meters Master, not Planon, not 210')
writer.save()
test=[]
for i in planon_meterssensors_intersected.index:
test.append(i[:9])
planon_meterssensors_intersected['test']=test
planon_meterssensors_intersected.set_index(['test','Code'])
###Output
_____no_output_____ |
hist/Processamento Yelp 2.ipynb | ###Markdown
Integrative Project (Atividade Integradora) Creating the Spark Environment
###Code
# import findspark as fs
from pyspark.sql import SparkSession
from pyspark.sql.types import *
from pyspark.sql import functions as f
from pyspark.sql.window import Window
from pyspark.ml.feature import StopWordsRemover
import pandas as pd
import seaborn as sns
sns.set(style="ticks", palette="pastel")
import os
from wordcloud import WordCloud, ImageColorGenerator
import matplotlib.pyplot as plt
%matplotlib inline
# MAC Local (Viviane)
# spark_location='/Users/vivi/server/spark' # Set your own
# java8_location= '/Library/Java/JavaVirtualMachines/jdk1.8.0_251.jdk/Contents/Home/' # Set your own
# os.environ['JAVA_HOME'] = java8_location
# fs.init(spark_home=spark_location)
datapath = 'C:\\Users\\RuWindows\\Desktop\\PI\\yelp_dataset\\'
#datapath = '../data/yelp'
files = sorted(os.listdir(datapath))
files
#!head data/yelp_academic_dataset_review.json
# Spark Session
spark = SparkSession.builder \
.master('local[*]') \
.appName('Integradora Yelp') \
.config("spark.ui.port", "4060") \
.getOrCreate()
spark
###Output
_____no_output_____
###Markdown
spark = SparkSession.builder \ .master('local[8]') \ .appName('Yelp Integradora') \ .getOrCreate()
###Code
sc = spark.sparkContext
spark#.stop()
###Output
_____no_output_____
###Markdown
Importing the Source Datasets - Raw
###Code
usr_raw = spark.read.json(datapath+'/yelp_academic_dataset_user.json')
rv_raw = spark.read.json(datapath+'/yelp_academic_dataset_review.json')
bz_raw = spark.read.json(datapath+'/yelp_academic_dataset_business.json')
tp_raw = spark.read.json(datapath+'/yelp_academic_dataset_tip.json')
bz_raw.createOrReplaceTempView('bz')
rv_raw.createOrReplaceTempView('rv')
usr_raw.createOrReplaceTempView('usr')
tp_raw.createOrReplaceTempView('tp')
# Inspect the schema structure
bz_raw.printSchema()
# Check the registered SQL tables
print(spark.catalog.listTables())
# bz_raw.columns
# usr_raw.columns
# rv_raw.columns
# tp_raw.columns
###Output
_____no_output_____
###Markdown
Business Dataset
###Code
# Expand the nested attributes into top-level columns
dfs = []
for x in ["hours", "attributes"]:
cols = bz_raw.select(f"{x}.*").columns
for col in cols:
try:
dfs.append(dfs[-1].withColumn(col, f.col(f"{x}.{col}")))
except IndexError:
dfs.append(bz_raw.withColumn(col, f.col(f"{x}.{col}")))
df_final = dfs[-1].drop("hours", "attributes")
df_final.createOrReplaceTempView("df_final")
df_final.printSchema()
###Output
root
|-- address: string (nullable = true)
|-- business_id: string (nullable = true)
|-- categories: string (nullable = true)
|-- city: string (nullable = true)
|-- is_open: long (nullable = true)
|-- latitude: double (nullable = true)
|-- longitude: double (nullable = true)
|-- name: string (nullable = true)
|-- postal_code: string (nullable = true)
|-- review_count: long (nullable = true)
|-- stars: double (nullable = true)
|-- state: string (nullable = true)
|-- Friday: string (nullable = true)
|-- Monday: string (nullable = true)
|-- Saturday: string (nullable = true)
|-- Sunday: string (nullable = true)
|-- Thursday: string (nullable = true)
|-- Tuesday: string (nullable = true)
|-- Wednesday: string (nullable = true)
|-- AcceptsInsurance: string (nullable = true)
|-- AgesAllowed: string (nullable = true)
|-- Alcohol: string (nullable = true)
|-- Ambience: string (nullable = true)
|-- BYOB: string (nullable = true)
|-- BYOBCorkage: string (nullable = true)
|-- BestNights: string (nullable = true)
|-- BikeParking: string (nullable = true)
|-- BusinessAcceptsBitcoin: string (nullable = true)
|-- BusinessAcceptsCreditCards: string (nullable = true)
|-- BusinessParking: string (nullable = true)
|-- ByAppointmentOnly: string (nullable = true)
|-- Caters: string (nullable = true)
|-- CoatCheck: string (nullable = true)
|-- Corkage: string (nullable = true)
|-- DietaryRestrictions: string (nullable = true)
|-- DogsAllowed: string (nullable = true)
|-- DriveThru: string (nullable = true)
|-- GoodForDancing: string (nullable = true)
|-- GoodForKids: string (nullable = true)
|-- GoodForMeal: string (nullable = true)
|-- HairSpecializesIn: string (nullable = true)
|-- HappyHour: string (nullable = true)
|-- HasTV: string (nullable = true)
|-- Music: string (nullable = true)
|-- NoiseLevel: string (nullable = true)
|-- Open24Hours: string (nullable = true)
|-- OutdoorSeating: string (nullable = true)
|-- RestaurantsAttire: string (nullable = true)
|-- RestaurantsCounterService: string (nullable = true)
|-- RestaurantsDelivery: string (nullable = true)
|-- RestaurantsGoodForGroups: string (nullable = true)
|-- RestaurantsPriceRange2: string (nullable = true)
|-- RestaurantsReservations: string (nullable = true)
|-- RestaurantsTableService: string (nullable = true)
|-- RestaurantsTakeOut: string (nullable = true)
|-- Smoking: string (nullable = true)
|-- WheelchairAccessible: string (nullable = true)
|-- WiFi: string (nullable = true)
###Markdown
Road Map* bz: - process the attributes - select the ones that make sense and turn true/false into dummies - run a k-means on the dataset () Creating the Main Unified Dataset Unifying the datasets to build the models - "Joins" Joining the review information, the establishments of the chosen city, and the users who visit those establishments. Reviews + Business
###Code
base = spark.sql("""
SELECT A.business_id,
A.cool AS cool_rv,
A.date AS date_rv,
A.funny AS funny_rv,
A.review_id,
A.stars AS stars_rv,
A.text AS text_rv,
A.useful AS useful_rv,
A.user_id,
B.address AS address_bz,
B.categories AS categories_bz,
B.city AS city_bz,
B.hours AS hours_bz,
B.is_open AS is_open_bz,
B.latitude AS latitude_bz,
B.longitude AS longitude_bz,
B.name AS name_bz,
B.postal_code AS postal_code_bz,
B.review_count AS review_count_bz,
B.stars AS stars_bz,
B.state AS state_bz
FROM rv as A
LEFT JOIN bz as B
ON A.business_id = B.business_id
WHERE B.city = 'Toronto'
AND B.state = 'ON'
AND B.review_count > 20
AND (B.categories like '%Restaurant%' OR B.categories like '%Food%')
""")
# base.show(5)
base.createOrReplaceTempView('base')
###Output
_____no_output_____
###Markdown
- Counting the number of rows to ensure that the integrity of the dataset is maintained throughout the processing.
###Code
# rows in the reviews + business base
# spark.sql('''
# SELECT Count(*)
# FROM base
# ''').show()
###Output
_____no_output_____
###Markdown
(Reviews + Business) + Users
###Code
base1 = spark.sql("""
SELECT A.*,
B.average_stars AS stars_usr,
B.compliment_cool AS compliment_cool_usr,
B.compliment_cute AS compliment_cute_usr,
B.compliment_funny AS compliment_funny_usr,
B.compliment_hot AS compliment_hot_usr,
B.compliment_list AS compliment_list_usr,
B.compliment_more AS compliment_more_usr,
B.compliment_note AS compliment_note_usr,
B.compliment_photos AS compliment_photos_usr,
B.compliment_plain AS compliment_plain_usr,
B.compliment_profile AS compliment_profile_usr,
B.compliment_writer AS compliment_writer_usr,
B.cool AS cool_usr,
B.elite AS elite_usr,
B.fans AS fans_usr,
B.friends AS friends_usr,
B.funny AS funny_usr,
B.name AS name_usr,
B.review_count AS review_count_usr,
B.useful AS useful_usr,
B.yelping_since AS yelping_since_usr
FROM base as A LEFT JOIN usr as B
ON A.user_id = B.user_id
""")
base1.createOrReplaceTempView('base1')
# rows in the reviews + business + users base
# spark.sql('''
# SELECT Count(*)
# FROM base1
# ''').show()
aux = spark.sql('''
SELECT user_id, city_bz, yelping_since_usr,
COUNT(review_id) AS city_review_counter_usr,
review_count_usr
FROM base1
GROUP BY user_id, review_count_usr, city_bz, yelping_since_usr
ORDER BY city_review_counter_usr DESC
''')
aux.createOrReplaceTempView('aux')
# aux.show()
###Output
_____no_output_____
###Markdown
Apparently users review establishments not only in Toronto. To include this information in the model, a variable will be created with the ratio between the user's number of reviews in the city and the user's total number of reviews. - Average reviews per user in the city and overall
###Code
# spark.sql('''
# SELECT AVG(city_review_counter_usr),
# AVG(review_count_usr)
# FROM aux
# ''').show()
###Output
_____no_output_____
###Markdown
- Removing users with only 1 review in the city
###Code
base2 = spark.sql('''
SELECT A.*,
B.city_review_counter_usr,
(B.city_review_counter_usr/B.review_count_usr) AS city_review_ratio_usr
FROM base1 as A
LEFT JOIN aux as B
ON A.user_id = B.user_id
WHERE B.city_review_counter_usr > 1
''')
base2.createOrReplaceTempView('base2')
# rows in the reviews + business + users base
# spark.sql('''
# SELECT Count(*)
# FROM base2
# ''').show()
###Output
_____no_output_____
###Markdown
- Classifying the reviews as Good (1 - 4 stars or more) and Bad or nonexistent (0 - fewer than 4 stars).
###Code
base3 = spark.sql("""
SELECT *,
(CASE WHEN stars_rv >=4 THEN 1 ELSE 0 END) as class_rv,
(CASE WHEN stars_bz >=4 THEN 1 ELSE 0 END) as class_bz,
(CASE WHEN stars_usr >=4 THEN 1 ELSE 0 END) as class_usr
FROM base2
""")
base3.columns
base3.createOrReplaceTempView('base3')
# spark.sql('''
# SELECT Count(*)
# FROM base3
# ''').show()
###Output
_____no_output_____
###Markdown
((Reviews + Business) + Users ) + Tips
###Code
# spark.sql('''
# SELECT business_id, user_id,
# count(text) AS tips_counter,
# sum(compliment_count) as total_compliments
# FROM tp
# GROUP BY business_id, user_id
# ORDER BY total_compliments DESC
# ''').show()
base4 = spark.sql('''
SELECT A.*,
IFNULL(B.compliment_count,0) AS compliment_count_tip,
IFNULL(B.text,'') AS tip
FROM base3 as A
LEFT JOIN tp as B
ON (A.user_id = B.user_id AND A.business_id = B.business_id)
''')
# base4.select('business_id', 'user_id','tip','compliment_count_tip').show()
# base4.select('text_rv','tip').show()
###Output
_____no_output_____
###Markdown
Text Processing
###Code
def word_clean(sdf,col,new_col):
rv1 = sdf.withColumn(new_col,f.regexp_replace(f.col(col), "'d", " would"))
rv2 = rv1.withColumn(new_col,f.regexp_replace(f.col(new_col), "'ve", " have"))
rv3 = rv2.withColumn(new_col,f.regexp_replace(f.col(new_col), "'s", " is"))
rv4 = rv3.withColumn(new_col,f.regexp_replace(f.col(new_col), "'re", " are"))
rv5 = rv4.withColumn(new_col,f.regexp_replace(f.col(new_col), "n't", " not"))
rv6 = rv5.withColumn(new_col,f.regexp_replace(f.col(new_col), '\W+', " "))
rv7 = rv6.withColumn(new_col,f.lower(f.col(new_col)))
return rv7
base5 = word_clean(base4,'text_rv','text_clean')
base6 = word_clean(base5,'tip','tip_clean')
# base6.select('text_clean','tip_clean').show()
###Output
_____no_output_____
###Markdown
- Counting each user's friends
###Code
base7 = base6.withColumn('friends_counter_usr', f.size(f.split(f.col('friends_usr'),',')))
base7.createOrReplaceTempView('base7')
base8 = spark.sql('''
SELECT *,
(CASE WHEN friends_usr = 'None' THEN 0 ELSE friends_counter_usr END) as friends_count_usr
FROM base7
''')
df = base8.select('friends_usr','friends_counter_usr','friends_count_usr').limit(10).toPandas()
df.dtypes
df
# base8.select('friends_usr','friends_counter_usr','friends_count_usr').show()
###Output
_____no_output_____
###Markdown
Concatenating Comments per User - Review + Tips
###Code
base9 = base8.withColumn('rv_tip', f.concat(f.col('text_clean'),f.lit(' '), f.col('tip_clean')))
# base9.select('text_clean','tip_clean','rv_tip','stars_rv','compliment_count_tip','funny_rv','cool_rv').show()
base9.createOrReplaceTempView('base9')
# spark.sql('''
# SELECT stars_rv, count(tip_clean) as tip_counter
# FROM base9
# GROUP BY stars_rv
# ORDER BY tip_counter DESC
# ''').show()
###Output
_____no_output_____
###Markdown
- Removing columns that will not be used in the first modelling step
###Code
base_final = base9.drop('friends_usr','friends_counter_usr','name_usr','city_bz', 'address_bz','state_bz', 'hours_bz','text_rv','tip','tip_clean','elite_usr')#,'review_id')
base_final.columns
###Output
_____no_output_____
###Markdown
Save the analytical dataset to CSV
###Code
base_final.write \
.format('csv') \
.mode('overwrite') \
.option('sep', ',') \
.option('header', True) \
.save('output/yelp.csv')
###Output
_____no_output_____
###Markdown
Dataset for the Topic ModelText information that will be processed with topic models in R
###Code
words = base_final.select('review_id','user_id','business_id','categories_bz','stars_rv','rv_tip')
words2 = words.withColumn('category_bz', f.explode(f.split(f.col('categories_bz'),', ')))
words3 = words2.drop('categories_bz')
# words3.show()
#words4 = words3.withColumn('word', f.explode(f.split(f.col('review_tip'),' ')))
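# a sketch of one possible Spark-side prep step (assumption, not used downstream here):
# tokenize and drop stop words with the StopWordsRemover imported at the top of the notebook
# from pyspark.ml.feature import Tokenizer
# tokens = Tokenizer(inputCol='rv_tip', outputCol='tokens').transform(words)
# no_stop = StopWordsRemover(inputCol='tokens', outputCol='tokens_clean').transform(tokens)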
###Output
_____no_output_____
###Markdown
Save the auxiliary dataset for the topic model - "Reviews + Tips"
###Code
words3.write \
.format('csv') \
.mode('overwrite') \
.option('sep', ',') \
.option('header', True) \
.save('output/yelp_words.csv')
###Output
_____no_output_____
###Markdown
Distance matrixStructuring the data for hierarchical clustering - preparing to build a distance matrix based on the star rating of each review.
###Code
dist1 = base_final.select('user_id','categories_bz','stars_rv')
# dist1.show()
dist2 = dist1.withColumn('category_bz', f.explode(f.split(f.col('categories_bz'),', ')))
# dist2.show()
dist2.createOrReplaceTempView('dist')
###Output
_____no_output_____
###Markdown
- Number of users and establishments
###Code
# spark.sql('''
# SELECT Count(DISTINCT user_id)
# FROM dist
# ''').show()
# spark.sql('''
# SELECT Count(DISTINCT categories_bz)
# FROM dist
# ''').show()
# spark.sql('''
# SELECT Count(DISTINCT category_bz)
# FROM dist
# ''').show()
###Output
_____no_output_____
###Markdown
- Increasing the maximum column limit according to the number of establishments
###Code
#spark.conf.set('spark.sql.pivotMaxValues', u'21000')
dist3 = dist2.groupBy("user_id").pivot("category_bz").mean("stars_rv")
dist4 = dist3.fillna(0)
# dist4.show()
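# a sketch of the follow-up step (assumption: the clustering itself is done later, in R);
# the pivoted user x category matrix could equally feed scipy's pairwise distances here.
# Left commented out because collecting the full matrix with toPandas() can be expensive.
# from scipy.spatial.distance import pdist, squareform
# user_category = dist4.toPandas().set_index('user_id')
# distance_matrix = squareform(pdist(user_category.values, metric='euclidean'))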
###Output
_____no_output_____
###Markdown
Save the auxiliary dataset for the distance matrix - "Category"
###Code
dist4.write \
.format('csv') \
.mode('overwrite') \
.option('sep', ',') \
.option('header', True) \
.save('output/yelp_dist.csv')
###Output
_____no_output_____
###Markdown
Graphical Analysis Heatmap - Creating a heat map of the concentration of reviews
###Code
base_mapas = base_final#.limit(1000)
base_mapas.createOrReplaceTempView('base_mapas')
mapa1 = spark.sql("""
SELECT latitude_bz,
longitude_bz
FROM base_mapas
WHERE latitude_bz is not null
AND longitude_bz is not null
""")
mapa1.show(10)
###Output
+-------------+--------------+
| latitude_bz| longitude_bz|
+-------------+--------------+
| 43.6697687| -79.382838|
|43.6386597113| -79.3806966|
|43.6630940441|-79.3840069721|
| 43.656838| -79.399237|
|43.6599496025| -79.479805281|
| 43.6547562| -79.3874925|
| 43.6376269| -79.393259|
|43.6543411559|-79.4004796073|
|43.6729833023|-79.2866801843|
| 43.655584| -79.3985383|
+-------------+--------------+
only showing top 10 rows
###Markdown
Finding the central latitude and longitude point of the map
###Code
# spark.sql("""
# SELECT avg(latitude) as avg_lat,
# avg(longitude) as avg_long
# FROM base_mapas
# """).show()
import folium
from folium import plugins
mapa = folium.Map(location=[43.6732, -79.3919],
zoom_start=11,
tiles='Stamen Toner')
# OpenStreetMap, Stamen Terrain, Stamen Toner
mapa
lat = mapa1.toPandas()['latitude_bz'].values
lon = mapa1.toPandas()['longitude_bz'].values
coordenadas = []
for la, lo in zip(lat, lon):
coordenadas.append([la,lo])
mapa.add_child(plugins.HeatMap(coordenadas))
lat_lon3 = spark.sql("""
SELECT 'ON' as state,
              (SUM(review_count_bz) / (select SUM(review_count_bz) from base_mapas))*100 as review_perc
       FROM base_mapas
       WHERE latitude_bz is not null
       AND longitude_bz is not null
GROUP BY state
""")
## GeoJSON of Canada - https://geojson-maps.ash.ms/
#url = 'https://raw.githubusercontent.com/AshKyd/geojson-regions/master/countries/110m/'
# state_geo = f'{url}/CAN.geojson'
url = 'https://raw.githubusercontent.com/jasonicarter/toronto-geojson/master/'
state_geo = f'{url}/toronto_crs84.geojson'
df = lat_lon3.toPandas()
m = folium.Map(location=[43, -79], zoom_start=10)
bins = list(df['review_perc'].quantile([0, 0.25, 0.5, 0.75, 1]))
folium.Choropleth(
geo_data=state_geo,
name='choropleth',
data=df,
columns=['state', 'review_perc'],
key_on='feature.properties.name',
fill_color='BuPu',
fill_opacity=0.7,
line_opacity=0.2,
bins=bins,
legend_name='Reviews (%)',
reset=True
).add_to(m)
m
###Output
_____no_output_____ |
preprocessing/CVAT_to_CSV.ipynb | ###Markdown
Example to convert XML annotations from CVAT to a csv format
###Code
# Import statements
import pandas as pd
import numpy as np
import os
import xml.etree.ElementTree as ET
import copy
###Output
_____no_output_____
###Markdown
1. Prepare header of CSV file
###Code
# List of the keypoints
keypoints = ['LFHoof', 'LFAnkle', 'LFKnee', 'RFHoof', 'RFAnkle', 'RFKnee',
'LHHoof', 'LHAnkle', 'LHKnee', 'RHHoof', 'RHAnkle', 'RHKnee',
'Nose', 'HeadTop', 'Spine1', 'Spine2', 'Spine3' ]
# Make header for the CSV file. Here, we have video, frame, and then 3 columns per keypoint: x,y and likelihood.
# Note that I never used the likelihood in my research, but also never really bothered to remove it from my csv files...
header = ['video','frame']
for k in keypoints:
header.append(k+"_x")
header.append(k+"_y")
header.append(k+"_likelihood")
###Output
_____no_output_____
###Markdown
2. Parse the XML file
###Code
def xml_to_csv(save_path, xml_file, header):
"""
Function that parses a CVAT XML file and saves the annotations in a csv format
:param save_path: path of the folder where to save the csv file
:param xml_file: the CVAT xml file containing the annotations. It should be saved as images, and not video format (I think, it was a long time ago)
:param header: the header of the csv file
"""
# Get the parser for the CSV file
tree = ET.parse(xml_file)
root = tree.getroot()
video_name = root.find('meta').find('source').text
print(video_name)
images = root.findall('image')
print(len(images))
#Init dict
video_labels = {}
for h in header:
video_labels[h] = [None] * len(images) # empty list of the number of images
stop_video = False
i = -1
# Loop through images
for j, image in enumerate(images):
points = list(image)
if len(points) == 0: # Get the labels of the videos
for h in video_labels:
video_labels[h].pop()
continue
i += 1
if len(points) != 17: # If more than 17 or less than 17 keypoints then there is a problem with the labels of that frame. you need to check it in CVAT
print(video_name, "frame:", image.attrib['name'], len(points))
stop_video = True
video_labels['video'][i] = video_name
video_labels['frame'][i] = int(image.attrib['name'].split('_')[1]) #"frame_123456"
# print(video_labels['frame'][i])
for point in points: # loop through the keypoints
bodypart = point.attrib['label']
xy = point.attrib['points'].split(',') # [x,y]
attributes = point.findall('attribute') # likelihood
for attr in attributes: # you should probably comment that part if you don't use likelihood
if attr.attrib['name'] == 'likelihood':
like = attr.text
if video_labels[bodypart+'_x'][i] != None:
print(bodypart, 'double keypoint', video_name, "frame:", image.attrib['name'], video_labels[bodypart+'_x'][i], xy[0])
stop_video = True
continue
# check if the keypoints are not too far from the ones in the neighbouring frames (wrong labels) You can comment this out
if i > 0 and video_labels[bodypart+'_x'][i-1] != None:
diff_x = np.abs(float(xy[0]) - float(video_labels[bodypart+'_x'][i-1]))
diff_y = np.abs(float(xy[1]) - float(video_labels[bodypart+'_y'][i-1]))
if diff_x >= 100:
print(bodypart, 'outlier', video_name, "frame:", image.attrib['name'], 'x', diff_x)
stop_video = True
continue
if diff_y >= 30:
print(bodypart, 'outlier', video_name, "frame:", image.attrib['name'], 'y', diff_y)
stop_video = True
continue
video_labels[bodypart+'_x'][i] = xy[0]
video_labels[bodypart+'_y'][i] = xy[1]
video_labels[bodypart+'_likelihood'][i] = like
if stop_video:
print('stop')
else:
df = pd.DataFrame(video_labels)
# df.head()
csv_file = video_name.split('.')[0]+'.csv'
print(os.path.join(save_path, csv_file))
df.to_csv(os.path.join(save_path, csv_file), index=False)
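# example usage (hypothetical paths; point them at your own CVAT export and output folder):
# xml_to_csv(save_path='keypoint_csvs', xml_file='annotations.xml', header=header)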
###Output
_____no_output_____ |
OMP.ipynb | ###Markdown
###Code
import numpy as np
def find_sparse_representation(phi, y):
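    # Orthogonal Matching Pursuit: greedily pick the column of `phi` most correlated
    # with the current residual, add it to the active set, re-fit y on the active
    # columns by least squares, and repeat until the residual is (numerically) zero
    # or every column has been used.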
iteration = 0
max_iteration = phi.shape[1]
epsilon = 1.0e-5
A = np.empty((phi.shape[0], 0))
A_index = np.array([], dtype="int")
sparse = np.zeros(phi.shape[1])
x = np.array([])
residue = y.copy()
while iteration < max_iteration and np.linalg.norm(residue, ord=1) > epsilon:
projection = np.absolute(phi.T @ residue)
argmax_index = np.argmax(projection)
insert_index = np.searchsorted(A_index, argmax_index)
A_index = np.insert(A_index, insert_index, argmax_index, axis=0)
A = np.insert(A, insert_index, phi.T[argmax_index], axis=1)
x = np.linalg.pinv(A) @ y
residue = y - (A @ x)
iteration += 1
for idx, x_val in enumerate(x):
sparse[A_index[idx]] = x_val
return sparse
phi = np.array([[1, 0, 1, 0, 0, 1],
[0, 1, 1, 1, 0, 0],
[1, 0, 0, 1, 1, 0],
[0, 1, 0, 0, 1, 1]])
y = np.array([0, -10, -100, 0])
sparse = find_sparse_representation(phi, y)
print(sparse)
print(phi @ sparse)
###Output
[-45. 0. 45. -55. 0. 0.]
[ 1.42108547e-14 -1.00000000e+01 -1.00000000e+02 0.00000000e+00]
|
demos/NWIS_demo_1.ipynb | ###Markdown
National trends in peak annual streamflow IntroductionThis notebook demonstrates a slightly more advanced application of data_retrieval.nwis: collecting a national dataset of historical peak annual streamflow measurements. The objective is to use a regression of peak annual streamflow against time to identify any trends, not for a single station but across stations nationwide. SetupBefore we begin any analysis, we'll need to set up our environment by importing the required modules.
###Code
from scipy import stats
import pandas as pd
import numpy as np
from mpl_toolkits.basemap import Basemap, cm
import matplotlib.pyplot as plt
from data_retrieval import nwis, utils, codes
###Output
_____no_output_____
###Markdown
Basic usageRecall that the basic way to download data from NWIS is through the `nwis.get_record()` function, which returns a user-specified record as a `pandas` dataframe. The `nwis.get_record()` function is really a facade of sorts that allows the user to download data from various NWIS services through a consistent interface. To get started, we require a few simple parameters: a list of site numbers or state codes, a service, and a start date.
###Code
# download annual peaks from a single site
df = nwis.get_record(sites='03339000', service='peaks', start='1970-01-01')
df.head()
# alternatively, information for the entire state of Illinois can be downloaded using
#df = nwis.get_record(state_cd='il', service='peaks', start='1970-01-01')
###Output
_____no_output_____
###Markdown
Most of the fields are empty, but no matter. All we require are date (`datetime`), site number (`site_no`), and peak streamflow (`peak_va`).Note that when multiple sites are specified, `nwis.get_record()` will combine `datetime` and `site_no` fields to create a multi-index dataframe. Preparing the regressionNext we'll define a function that applies ordinary least squares to peak discharge and time.After grouping the dataset by `site_no`, we will apply the regression on a per-site basis. The results from each site will be returned as a row that includes the slope, y-intercept, r$^2$, p value, and standard error of the regression.
###Code
def peak_trend_regression(df):
"""
"""
#convert datetimes to days for regression
peak_date = df.index
peak_date = pd.to_datetime(df.index.get_level_values(1))
df['peak_d'] = (peak_date - peak_date.min()) / np.timedelta64(1,'D')
#df['peak_d'] = (df['peak_dt'] - df['peak_dt'].min()) / np.timedelta64(1,'D')
#normalize the peak discharge values
df['peak_va'] = (df['peak_va'] - df['peak_va'].mean())/df['peak_va'].std()
slope, intercept, r_value, p_value, std_error = stats.linregress(df['peak_d'], df['peak_va'])
#df_out = pd.DataFrame({'slope':slope,'intercept':intercept,'p_value':p_value},index=df['site_no'].iloc[0])
#return df_out
return pd.Series({'slope':slope,'intercept':intercept,'p_value': p_value,'std_error':std_error})
###Output
_____no_output_____
###Markdown
Preparing the analysis
###Code
def peak_trend_analysis(states, start_date):
"""
states : list
a list containing the two-letter codes for each state to include in the
analysis.
start_date : string
the date to use a the beginning of the analysis.
"""
final_df = pd.DataFrame()
for state in states:
# download annual peak discharge records
df = nwis.get_record(state_cd=state, start=start_date, service='peaks')
# group the data by site and apply our regression
temp = df.groupby('site_no').apply(peak_trend_regression).dropna()
# drop any insignificant results
temp = temp[temp['p_value']<0.05]
# now download metadata for each site, which we'll use later to plot the sites
# on a map
site_df = nwis.get_record(sites=temp.index, service='site')
if final_df.empty:
final_df = pd.merge(site_df, temp, right_index=True, left_on='site_no')
else:
final_df = final_df.append( pd.merge(site_df, temp, right_index=True, left_on='site_no') )
return final_df
###Output
_____no_output_____
###Markdown
To run the analysis for all states since 1970, one would only need to uncomment and run the following lines. However, pulling all that data from NWIS takes time and could put a burden on its resources.
###Code
# Warning these lines will download a large dataset from the web and
# will take a few minutes to run.
#start = '1970-01-01'
#states = codes.state_codes
#final_df = peak_trend_analysis(states=states, start_date=start)
#final_df.to_csv('datasets/peak_discharge_trends.csv')
###Output
_____no_output_____
###Markdown
Instead, let's quickly load some predownloaded data, which I generated using the code above.
###Code
final_df = pd.read_csv('datasets/peak_discharge_trends.csv')
final_df.head()
###Output
_____no_output_____
###Markdown
Notice how the data has been transformed. In addition to statistics about the peak streamflow trends, we've also used the NWIS site service to add latitude and longitude information for each station. Plotting the resultsFinally we'll use `basemap` and `matplotlib`, along with the location information from NWIS, to plot the results on a map (shown below). Stations with increasing peak annual discharge are shown in red, whereas stations with decreasing peaks are blue.
###Code
fig = plt.figure(num=None, figsize=(10, 6) )
# setup a basemap covering the contiguous United States
m = Basemap(width=5500000, height=4000000, resolution='l',
projection='aea', lat_1=36., lat_2=44, lon_0=-100, lat_0=40)
# add coastlines
m.drawcoastlines(linewidth=0.5)
# add parallels and meridians.
m.drawparallels(np.arange(-90.,91.,15.),labels=[True,True,False,False],dashes=[2,2])
m.drawmeridians(np.arange(-180.,181.,15.),labels=[False,False,False,True],dashes=[2,2])
# add boundaries and rivers
m.drawcountries(linewidth=1, linestyle='solid', color='k' )
m.drawstates(linewidth=0.5, linestyle='solid', color='k')
m.drawrivers(linewidth=0.5, linestyle='solid', color='cornflowerblue')
increasing = final_df[final_df['slope'] > 0]
decreasing = final_df[final_df['slope'] < 0]
#x,y = m(lons, lats)
# categorical plots get a little ugly in basemap
m.scatter(increasing['dec_long_va'].tolist(),
increasing['dec_lat_va'].tolist(),
label='increasing', s=2, color='red',
latlon=True)
m.scatter(decreasing['dec_long_va'].tolist(),
decreasing['dec_lat_va'].tolist(),
label='increasing', s=2, color='blue',
latlon=True)
###Output
_____no_output_____ |
casestudy_agriculture.ipynb | ###Markdown
Agriculture Case Study Background During a normal year, sugar cane in Queensland typically flowers early May through June; July to November is typically cane harvesting season. The ProblemWhile sugar is growing, fields may look visually similar, but health or growth rates from these fields can be quite different, leading to variability and unpredictability in revenue. Identifying underperforming crops can have two benefits:- Ability to scout for frost or disease damage.- Ability to investigate poor performing paddocks and undertake management action such as soil testing or targeted fertilising to improve yield. Digital Earth Australia Use CaseSatellite imagery can be used to measure pasture health over time and identify any changes in growth patterns between otherwise similar paddocks.The normalised difference vegetation index (NDVI) describes the difference between visible and near-infrared reflectance of vegetation cover. This index estimates the density of green on an area of land and can be used to track the health and growth of sugar as it matures. Comparing the NDVI of two similar paddocks will help to identify any anomalies in growth patterns. In this example, data from the European Sentinel-2 satellites is used to make a near real-time assessment of crop growing patterns, facilitating management decisions in the field. This data is made available through the Copernicus Regional Data Hub and Digital Earth Australia within 1-2 days of capture.The worked example below takes users through the code required to:- Create a time series data cube over a farming property.- Select multiple paddocks for comparison.- Create graphs to identify crop performance trends over the previous month.- Interpret the results. Technical details**Products used: NDVI** The normalised difference vegetation index is calculated from near infra-red (NIR) and red band measurements. It takes values from -1 to 1, with high values corresponding to dense vegetation. It is calculated as$$\text{NDVI} = \frac{\text{NIR}-\text{Red}}{\text{NIR}+\text{Red}}.$$**Satellite data: Sentinel-2** Near real-time optical data for the last 90 days. Available from the [Amazon S3 dea-public-data](http://dea-public-data.s3-website-ap-southeast-2.amazonaws.com/?prefix=L2/sentinel-2-nrt/S2MSIARD/) bucket. Covers both Sentinel-2a and Sentinel-2b.**App functions:** [casestudy_agriculture_functions](/user/user/edit/examples/utils/casestudy_agriculture_functions.py)* `load_agriculture_data()`: Loads, combines and cleans data from Sentinel-2a and -2b.* `run_agriculture_app()`: Launches an interactive map and plots average NDVI for selected areas. Run this notebook Load the app functionsThe relevant Open Data Cube commands are executed by the two app functions `load_agriculture_data()` and `run_agriculture_app()`. To run the notebook, these need to be imported from `utils.casestudy_agriculture_functions` where they're described.The `%matplotlib notebook` command allows the notebook to contain interactive plots.**To run the following cell, click inside and either press the** `Run` **button on the tool-bar or press** `Shift+Enter` **on the keyboard.**
###Code
%matplotlib notebook
from utils.casestudy_agriculture_functions import load_agriculture_data, run_agriculture_app
###Output
_____no_output_____
###Markdown
Load the dataThe `load_agriculture_data()` command performs several key steps:* identify all available Sentinel-2 near real time data in the case-study area over the last 90 days* remove any bad quality pixels* keep images where more than half of the image contains good quality pixels* collate images from Sentinel-2a and Sentinel-2b into a single data-set* calculate the NDVI from the red and near infrared bands* return the collated data for analysisThe cleaned and collated data is stored in the `dataset_sentinel2` object. As the command runs, feedback will be provided below the cell, including information on the number of cleaned images loaded from each satellite.**To run the following cell, click inside and either press the** `Run` **button on the tool-bar or press** `Shift+Enter` **on the keyboard.**
###Code
dataset_sentinel2 = load_agriculture_data()
###Output
_____no_output_____
###Markdown
Run the agriculture appThe `run_agriculture_app()` command launches an interactive map. Drawing polygons within the boundary (which represents the area covered by the loaded data) will result in plots of the average NDVI in that area.The command works by taking the loaded data `dataset_sentinel2` as an argument. **To run the following cell, click inside and either press the** `Run` **button on the tool-bar or press** `Shift+Enter` **on the keyboard.***Note:* data points will only appear for images where more than 50% of the pixels were classified as good quality. This may cause trend lines on the average NDVI plot to appear disconnected. Available data points will be marked with the `*` symbol.
###Code
run_agriculture_app(dataset_sentinel2)
###Output
_____no_output_____ |
0203_tensor_mani.ipynb | ###Markdown
--- View - changes the shape of a tensor while keeping the number of elements. Very important!!
###Code
t = np.array([[[0, 1, 2],
[3, 4, 5]],
[[6, 7, 8],
[9, 10, 11]]])
ft = torch.FloatTensor(t)
mu.log("t.shape", t.shape)
mu.log("ft.shape", ft.shape)
###Output
t.shape : (2, 2, 3)
ft.shape : torch.Size([2, 2, 3])
###Markdown
--- Reshaping a 3D tensor into a 2D tensor
###Code
mu.log("ft.view([-1, 3])", ft.view([-1, 3]))
mu.log("ft.view([-1, 3]).shape", ft.view([-1, 3]).shape)
###Output
ft.view([-1, 3]) :
torch.Size([4, 3]) tensor([[ 0., 1., 2.],
[ 3., 4., 5.],
[ 6., 7., 8.],
[ 9., 10., 11.]])
ft.view([-1, 3]).shape : torch.Size([4, 3])
###Markdown
--- Changing the shape of a 3D tensor
###Code
mu.log("ft.view([-1, 1, 3])", ft.view([-1, 1, 3]))
mu.log("ft.view([-1, 1, 3]).shape", ft.view([-1, 1, 3]).shape)
###Output
ft.view([-1, 1, 3]) :
torch.Size([4, 1, 3]) tensor([[[ 0., 1., 2.]],
[[ 3., 4., 5.]],
[[ 6., 7., 8.]],
[[ 9., 1 ...
ft.view([-1, 1, 3]).shape : torch.Size([4, 1, 3])
###Markdown
--- Squeeze - removes dimensions whose size is 1.
###Code
ft = torch.FloatTensor([[0], [1], [2]])
mu.log("ft", ft)
mu.log("ft.shape", ft.shape)
###Output
ft :
torch.Size([3, 1]) tensor([[0.],
[1.],
[2.]])
ft.shape : torch.Size([3, 1])
###Markdown
--- Unsqueeze - adds a dimension of size 1 at a specific position.
###Code
ft = torch.Tensor([0, 1, 2])
mu.log("ft", ft)
mu.log("ft.shape", ft.shape)
mu.log("ft.unsqueeze(0)", ft.unsqueeze(0)) # 인덱스가 0부터 시작하므로 0은 첫번째 차원을 의미한다.
mu.log("ft.unsqueeze(0).shape", ft.unsqueeze(0).shape)
mu.log("ft.view(1, -1)", ft.view(1, -1))
mu.log("ft.view(1, -1).shape", ft.view(1, -1).shape)
mu.log("ft.unsqueeze(1)", ft.unsqueeze(1))
mu.log("ft.unsqueeze(1).shape", ft.unsqueeze(1).shape)
mu.log("ft.unsqueeze(-1)", ft.unsqueeze(-1))
mu.log("ft.unsqueeze(-1).shape", ft.unsqueeze(-1).shape)
###Output
ft :
torch.Size([3]) tensor([0., 1., 2.])
ft.shape : torch.Size([3])
ft.unsqueeze(0) :
torch.Size([1, 3]) tensor([[0., 1., 2.]])
ft.unsqueeze(0).shape : torch.Size([1, 3])
ft.view(1, -1) :
torch.Size([1, 3]) tensor([[0., 1., 2.]])
ft.view(1, -1).shape : torch.Size([1, 3])
ft.unsqueeze(1) :
torch.Size([3, 1]) tensor([[0.],
[1.],
[2.]])
ft.unsqueeze(1).shape : torch.Size([3, 1])
ft.unsqueeze(-1) :
torch.Size([3, 1]) tensor([[0.],
[1.],
[2.]])
ft.unsqueeze(-1).shape : torch.Size([3, 1])
###Markdown
--- Type Casting
###Code
lt = torch.LongTensor([1, 2, 3, 4])
mu.log("lt", lt)
mu.log("lt.float()", lt.float())
bt = torch.ByteTensor([True, False, False, True])
mu.log("bt", bt)
mu.log("bt.long()", bt.long())
mu.log("bt.float()", bt.float())
###Output
lt :
torch.Size([4]) tensor([1, 2, 3, 4])
lt.float() :
torch.Size([4]) tensor([1., 2., 3., 4.])
bt :
torch.Size([4]) tensor([1, 0, 0, 1], dtype=torch.uint8)
bt.long() :
torch.Size([4]) tensor([1, 0, 0, 1])
bt.float() :
torch.Size([4]) tensor([1., 0., 0., 1.])
###Markdown
--- Concatenate
###Code
x = torch.FloatTensor([[1, 2], [3, 4]])
y = torch.FloatTensor([[5, 6], [7, 8]])
mu.log("torch.cat([x, y], dim=0)", torch.cat([x, y], dim=0))
mu.log("torch.cat([x, y], dim=1)", torch.cat([x, y], dim=1))
###Output
torch.cat([x, y], dim=0) :
torch.Size([4, 2]) tensor([[1., 2.],
[3., 4.],
[5., 6.],
[7., 8.]])
torch.cat([x, y], dim=1) :
torch.Size([2, 4]) tensor([[1., 2., 5., 6.],
[3., 4., 7., 8.]])
###Markdown
--- Stacking
###Code
x = torch.FloatTensor([1, 4])
y = torch.FloatTensor([2, 5])
z = torch.FloatTensor([3, 6])
mu.log("torch.stack([x, y, z])", torch.stack([x, y, z]))
mu.log("torch.cat([x.unsqueeze(0), y.unsqueeze(0), z.unsqueeze(0)], dim=0)",
torch.cat([x.unsqueeze(0), y.unsqueeze(0), z.unsqueeze(0)], dim=0))
mu.log("torch.stack([x, y, z], dim=1)", torch.stack([x, y, z], dim=1))
###Output
torch.stack([x, y, z]) :
torch.Size([3, 2]) tensor([[1., 4.],
[2., 5.],
[3., 6.]])
torch.cat([x.unsqueeze(0), y.unsqueeze(0), z.unsqueeze(0)], dim=0) :
torch.Size([3, 2]) tensor([[1., 4.],
[2., 5.],
[3., 6.]])
torch.stack([x, y, z], dim=1) :
torch.Size([2, 3]) tensor([[1., 2., 3.],
[4., 5., 6.]])
###Markdown
--- ones_like and zeros_like - tensors filled with ones or zeros
###Code
x = torch.FloatTensor([[0, 1, 2], [2, 1, 0]])
mu.log("x", x)
mu.log("torch.ones_like(x)", torch.ones_like(x))
mu.log("torch.zeros_like(x)", torch.zeros_like(x))
###Output
x :
torch.Size([2, 3]) tensor([[0., 1., 2.],
[2., 1., 0.]])
torch.ones_like(x) :
torch.Size([2, 3]) tensor([[1., 1., 1.],
[1., 1., 1.]])
torch.zeros_like(x) :
torch.Size([2, 3]) tensor([[0., 0., 0.],
[0., 0., 0.]])
###Markdown
--- In-place Operation (overwrite operation)
###Code
x = torch.FloatTensor([[1, 2], [3, 4]])
mu.log("x", x)
mu.log("x.mul(2.)", x.mul(2.)) # 곱하기 2를 수행한 결과를 출력
mu.log("x", x)
mu.log("x.mul_(2.)", x.mul_(2.)) # 곱하기 2를 수행한 결과를 변수 x에 값을 저장하면서 결과를 출력
mu.log("x", x)
###Output
x :
torch.Size([2, 2]) tensor([[1., 2.],
[3., 4.]])
x.mul(2.) :
torch.Size([2, 2]) tensor([[2., 4.],
[6., 8.]])
x :
torch.Size([2, 2]) tensor([[1., 2.],
[3., 4.]])
x.mul_(2.) :
torch.Size([2, 2]) tensor([[2., 4.],
[6., 8.]])
x :
torch.Size([2, 2]) tensor([[2., 4.],
[6., 8.]])
|
3_Query_Diabetic_Patients.ipynb | ###Markdown
After two demos, we now try to pull diabetic patient records Advice: As we move into more complex SQL queries, for efficiency I recommend practising the SQL code in pgAdmin first before loading it into Jupyter Reference * Article: "Improving Patient Cohort Identification Using Natural Language Processing" 10 Sep. 2016, https://link.springer.com/chapter/10.1007/978-3-319-43742-2_28. Accessed 25 Nov. 2018.* Git repository: https://github.com/MIT-LCP/critical-data-book* To understand MIMIC-III tables: https://mimic.physionet.org/mimictables/* To understand ICD-9 codes: http://www.icd9data.com/* SQL query: https://github.com/MIT-LCP/critical-data-book/tree/master/part_iii/chapter_28 Prepare database Import libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import psycopg2
import getpass
%matplotlib inline
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
Create the database connection. You can put your password here just for practice, but it is not recommended to save it into the repository
###Code
user = 'postgres'
password = 'postgres' # note if you want to set password for offline pracise would be fine, but don't post to repository
host = 'localhost'
dbname = 'mimic'
schema = 'mimiciii' # set to your defined schema name, I use mimiciii here
###Output
_____no_output_____
###Markdown
Connect to the database (execute again if the connection is lost)
###Code
con = psycopg2.connect(dbname=dbname, user=user, host=host, password=password)
cur = con.cursor()
cur.execute('SET search_path to {}'.format(schema)) # this is compulsory step to find your database
print('Thread opened' if not cur.closed else 'Thread closed')
# If need password sensitive, use below code
# con = psycopg2.connect(dbname=dbname, user=user, host=host,
# password=getpass.getpass(prompt='Password:'.format(user)))
# cur = con.cursor()
# cur.execute('SET search_path to {}'.format(schema)) # this is compulsory step to find your database
# print('connected' if not cur.connection.closed else 'not connected')
###Output
Password:········
connected
###Markdown
Query using Structured data Structured data means all records are categorized in the table so we can just classify them. Unstructured data means records are in the free-text note area and need text mining to capture them. First, validate whether the tables we understand match what the article describes: * The unstructured clinical notes include: - discharge summaries - nursing progress notes - physician notes - electrocardiogram (ECG) reports - echocardiogram reports - and radiology reports* We excluded clinical notes that were related to any imaging results (ECG_Report, Echo_Report, and Radiology_Report). * We extracted notes from MIMIC-III with the following data elements: - patient identification number (SUBJECT_ID), - hospital admission identification number (HADM_IDs), - intensive care unit stay identification number (ICUSTAY_ID), - note type, - note date/time, - and note text. This is our first time querying the MIMIC-III db, so we will practise a few queries to validate that the procedure works well. The tables that are used in the queries:* admissions: includes subject_id, all patients* diagnoses_icd: includes icd9_code, subject_id, patients under diagnosis (covers all patients)* patients: includes subject_id, dob (covers all patients)* PROCEDURES_ICD: includes subject_id, those who were under procedures (subset of all patients)
###Code
# Find list of categories under noteevents table
query = \
"""
select distinct(category)
from noteevents;
"""
data = pd.read_sql_query(query, con)
data
# Discharge summaries
query = \
"""
select count(distinct(subject_id))
from noteevents
where category like 'Discharge summary%';
"""
data = pd.read_sql_query(query, con)
data
# Nursing/Nursing others
query = \
"""
select count(distinct(subject_id))
from noteevents
where category like 'Nursing%';
"""
data = pd.read_sql_query(query, con)
data
# Physician notes
query = \
"""
select count(distinct(subject_id))
from noteevents
where category like 'Physician%';
"""
data = pd.read_sql_query(query, con)
data
# ECG reports
query = \
"""
select count(distinct(subject_id))
from noteevents
where category like 'ECG%';
"""
data = pd.read_sql_query(query, con)
data
# Echocardiogram reports
query = \
"""
select count(distinct(subject_id))
from noteevents
where category like 'Echo%';
"""
data = pd.read_sql_query(query, con)
data
# Radiology reports
query = \
"""
select count(distinct(subject_id))
from noteevents
where category like 'Radiology%';
"""
data = pd.read_sql_query(query, con)
data
###Output
_____no_output_____
###Markdown
Now query Diabetes `structured` data Diabetes types in ICD9 code* Diabetes mellitus * 249 secondary diabetes mellitus (includes the following codes: 249, 249.0, 249.00, 249.01, 249.1, 249.10, 249.11, 249.2, 249.20, 249.21, 249.3, 249.30, 249.31, 249.4, 249.40, 249.41, 249.5, 249.50, 249.51, 249.6, 249.60, 249.61, 249.7, 249.70, 249.71, 249.8, 249.80, 249.81, 249.9, 249.90, 249.91) * 250 diabetes mellitus (includes the following codes: 250, 250.0, 250.00, 250.01, 250.02, 250.03, 250.1, 250.10, 250.11, 250.12, 250.13, 250.2, 250.20, 250.21, 250.22, 250.23, 250.3, 250.30, 250.31, 250.32, 250.33, 250.4, 250.40, 250.41, 250.42, 250.43, 250.5, 250.50, 250.51, 250.52, 250.53, 250.6, 250.60, 250.61, 250.62, 250.63, 250.7, 250.70, 250.71, 250.72, 250.73, 250.8, 250.80, 250.81, 250.82, 250.83, 250.9, 250.90, 250.91, 250.92, 250.93)* Hemodialysis - 585.6 end stage renal disease (requiring chronic dialysis) - 996.1 mechanical complication of other vascular device, implant, and graft - 996.73 other complications due to renal dialysis device, implant, and graft - E879.1 kidney dialysis as the cause of abnormal reaction of patient, or of later complication, without mention of misadventure at time of procedure - V45.1 postsurgical renal dialysis status - V56.0 encounter for extracorporeal dialysis - V56.1 fitting and adjustment of extracorporeal dialysis catheter * Procedure codes - 38.95 venous catheterization for renal dialysis - 39.27 arteriovenostomy for renal dialysis - 39.42 revision of arteriovenous shunt for renal dialysis - 39.43 removal of arteriovenous shunt for renal dialysis - 39.95 hemodialysis
###Code
# Total number of patients in the MIMIC-III db is 46520, with 58976 admission records (some patients have multiple records)
# Note: connecting to diagnoses_icd table won't impact the query result so it has full record as admissions
query = \
"""
select subject_id, hadm_id
from admissions;
"""
data = pd.read_sql_query(query, con)
print('Total', data.shape[0], ' lines of record')
print('Totally', data.subject_id.unique().shape[0], 'unique patients')
# Total number of patients who are Diabetes mellitus is 10403
# Note: connecting to patients table won't impact the query result so it has full records
query = \
"""
select count(distinct(a.subject_id))
from diagnoses_icd di, admissions a
where di.subject_id = a.subject_id
and (
di.ICD9_CODE like '249%' -- secondary diabetes mellitus
or di.ICD9_CODE like '250%' -- diabetes mellitus
);
"""
data = pd.read_sql_query(query, con)
data
# Total number of patients who are Diabetes mellitus and older than 18 is 10397
query = \
"""
select count(distinct(a.subject_id))
from diagnoses_icd di, admissions a, patients p
where di.subject_id = a.subject_id and a.subject_id = p.subject_id
and (
di.ICD9_CODE like '249%' -- secondary diabetes mellitus
or di.ICD9_CODE like '250%' -- diabetes mellitus
)
and (
(cast(a.ADMITTIME as date) - cast(p.DOB as date))/365.242 >= 18
);
"""
data = pd.read_sql_query(query, con)
data
# Total number of patients who are Diabetes mellitus, older than 18, and also received procedures is 9460
query = \
"""
select count(distinct(a.subject_id))
from diagnoses_icd di, admissions a, patients p, procedures_icd pi
where di.subject_id = a.subject_id
and a.subject_id = p.subject_id -- we have patient here for p.DOB information
and pi.subject_id = a.subject_id
and (
di.ICD9_CODE like '249%' -- secondary diabetes mellitus
or di.ICD9_CODE like '250%' -- diabetes mellitus
)
and (
(cast(a.ADMITTIME as date) - cast(p.DOB as date))/365.242 >= 18
);
"""
data = pd.read_sql_query(query, con)
data
# Compared to all patients who received procdures as Hemodialysis is 1316
query = \
"""
select count(distinct(subject_id))
from diagnoses_icd
where ICD9_CODE in ('5856','9961','99673','E8791','V451','V560','V561');
"""
data = pd.read_sql_query(query, con)
data
# Patients who are Diabetes mellitus, older than 18, and received procedures as Hemodialysis is 718
# Note: diagnoses_icd also include Hemodialysis ICD9 code
# We want patients who have both Diabetes mellitus amd Hemodialysis, e.g. di.ICD9_CODE of one patient have both '249x' and '5856'
query = \
"""
with diab as (
select distinct(a.subject_id) -- second diabtes adults who under procedures is 9460
from diagnoses_icd di, admissions a, patients p, procedures_icd pi
where di.subject_id = a.subject_id
and a.subject_id = p.subject_id -- we have patient here for p.DOB information
and pi.subject_id = a.subject_id
and (
di.ICD9_CODE like '249%' -- secondary diabetes mellitus
or di.ICD9_CODE like '250%' -- diabetes mellitus
)
and ((cast(a.ADMITTIME as date) - cast(p.DOB as date))/365.242 >= 18) -- adults
)
select distinct(di.subject_id) -- second diabetes adults under hemodialysis procedures is 718
from diagnoses_icd di, diab
where di.subject_id = diab.subject_id
and di.ICD9_CODE in ('5856','9961','99673','E8791','V451','V560','V561'); -- Hemodialysis
"""
data = pd.read_sql_query(query, con)
print('There are ', len(data), 'patients with diabetes mellitus, adults and received Hemodialysis')
###Output
There are 718 patients with diabetes mellitus, adults and received Hemodialysis
###Markdown
Remember to close the thread after all queries
###Code
cur.close()
print('cursor still open ...' if not cur.closed else 'cursor closed ...')
del cur
print('cursor deleted from instance ...')
con.close()
print('connection still open ...' if not con.closed else 'connection closed')
###Output
cursor closed ...
cursor deleted from instance ...
connection closed
|
Modulo3/Ejercicios/Problema2.ipynb | ###Markdown
Background Theory-------------------------- In this exercise you are going to work with the concepts of points, coordinates and vectors on the Cartesian plane, and with how Object-Oriented Programming can be an excellent ally for working with them. It is not meant for you to do any kind of calculation, but to practise automating tasks. The Cartesian plane It represents a two-dimensional space, formed by two perpendicular lines, one horizontal and one vertical, which intersect at a point. The horizontal line is called the abscissa axis or **X axis**, while the vertical one is called the ordinate axis or simply **Y axis**. The point where they intersect is known as the **origin point O**. It is important to note that the plane is divided into 4 quadrants: Points and coordinates The goal of all this is to describe the position of **points** on the plane in the form of **coordinates**, which are formed by pairing the value on the X axis (horizontal) with the value on the Y axis (vertical).The representation of a point is simple: **P(X,Y)**, where X and Y are the horizontal (left or right) and vertical (up or down) distances respectively, using the origin point (0,0), right at the centre of the plane, as the reference. Vectors on the plane Finally, a vector on the plane refers to an oriented segment generated from two distinct points. For practical purposes it is simply a line drawn from an initial point towards a final point, so a vector is understood to have length and direction/sense. In this figure, we can see two points A and B that we could define as follows:- **A(x1, y1) => A(2, 3)**- **B(x2, y2) => B(5, 5)**And the vector would be represented as the difference between the coordinates of the second point with respect to the first (the second minus the first):- **AB = (x2 - x1, y2 - y1) => (5-2, 5-3) => (3,2)**Which, in the end, simply means: 3 to the right and 2 up. And with this we finish this mini review. Exercise-------------------------- - Create a class called **Punto** (Point) with its two coordinates X and Y.- Add a **constructor** method to create points easily. If a coordinate is not received, its value will be zero.- Override the **string** method so that when a point is printed it appears in the format (X,Y)- Add a method called **cuadrante** (quadrant) that indicates which quadrant the point belongs to, keeping in mind that if X == 0 and Y != 0 it lies on the Y axis, if X != 0 and Y == 0 it lies on the X axis, and if X == 0 and Y == 0 it is at the origin.- Add a method called **vector** that takes another point and computes the resulting vector between the two points.- **(Optional)** Add a method called **distancia** (distance) that takes another point, computes the distance between the two points and prints it. The formula is the following: $$ d = \sqrt{ (x_{2} - x_{1})^{2} + (y_{2} - y_{1})^{2} }$$ - Create a class called **Rectangulo** (Rectangle) with two points (initial and final) that will form the diagonal of the rectangle.- Add a **constructor** method to create both points easily; if none are provided, two points at the origin will be created by default.- Add to the rectangle a method called **base** that shows the base.- Add to the rectangle a method called **altura** (height) that shows the height.- Add to the rectangle a method called **area** that shows the area. Remember
###Code
import math
# square root
print(math.sqrt(9))
# absolute value
print(abs(-4))
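# --- Added illustration (not part of the original exercise statement) ---
# One possible sketch of the Punto class described above; the method names follow
# the exercise, but the implementation details are just one option among many.
class Punto:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y
    def __str__(self):
        return "({},{})".format(self.x, self.y)
    def cuadrante(self):
        if self.x == 0 and self.y == 0:
            return "At the origin"
        if self.x == 0:
            return "On the Y axis"
        if self.y == 0:
            return "On the X axis"
        if self.x > 0:
            return "First quadrant" if self.y > 0 else "Fourth quadrant"
        return "Second quadrant" if self.y > 0 else "Third quadrant"
    def vector(self, otro):
        return Punto(otro.x - self.x, otro.y - self.y)
    def distancia(self, otro):
        return math.sqrt((otro.x - self.x)**2 + (otro.y - self.y)**2)
# Worked example from the theory above: A(2, 3), B(5, 5) -> AB = (3,2),
# distance = sqrt(3**2 + 2**2) ~= 3.61. The Rectangulo class can be built analogously.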
###Output
3.0
4
|
Misc/mosaicking_and_masking.ipynb | ###Markdown
Creating a composite image from multiple PlanetScope scenes In this exercise, you'll learn how to create a composite image (or mosaic) from multiple PlanetScope satellite images that cover an area of interest (AOI). We'll use `rasterio`, along with its vector-data counterpart `fiona`, to do this. Step 1. Acquiring Imagery In order to visually search for imagery in our AOI, we'll use [Planet Explorer](https://www.planet.com/explorer/). For this exercise, we're going to visit Yosemite National Park. In the screenshot below you'll see an AOI drawn around [Mount Dana](https://en.wikipedia.org/wiki/Mount_Dana) on the eastern border of Yosemite. You can use [data/mt-dana-small.geojson](data/mt-dana-small.geojson) to search for this same AOI yourself. Here we want an image that depicts the mountain on a clear summer day, so for this data search in Planet Explorer we'll set the filters to show only scenes with less than 5% cloud cover, and narrow down the date range to images captured between July 1-July 31, 2017. Since we're only interested in PlanetScope data, and we're creating a visual - not analytic - product, we'll set the Source to `3-band PlanetScope Scene`. Finally, since we want to create a mosaic that includes our entire AOI, we'll set the Area coverage to full coverage. As you can see in the animated gif above, this search yields multiple days within July 2017 that match our filters. After previewing a few days, I decided I like the look of July 21, 2017. After selecting a single day, you can roll over the individual images to preview their coverage. In the gif above, you'll notice that it takes three individual images to completely cover our AOI. In this instance, as I roll over each item in Planet Explorer I can see that the scenes' rectangular footprints extend far beyond Mount Dana. All three scenes overlap slightly, and one scene touches only a small section at the bottom of the AOI. Still, taken together the images provide 100% coverage, so we'll go ahead and place an order for the Visual imagery products for these three scenes. Once the order is ready, download the images, extract them from the .zip, and move them into the `data/` directory adjacent to this Notebook. Step 2. Inspecting Imagery
###Code
# Load our 3 images using rasterio
import rasterio
img1 = rasterio.open('data/20170721_175836_103c_3B_Visual.tif')
img2 = rasterio.open('data/20170721_175837_103c_3B_Visual.tif')
img3 = rasterio.open('data/20170721_175838_103c_3B_Visual.tif')
###Output
_____no_output_____
###Markdown
At this point we can use `rasterio` to inspect the metadata of these three images. Specifically, in order to create a composite from these images, we want to verify that all three images have the same data type, the same coordinate reference systems and the same band count:
###Code
print(img1.meta['dtype'], img1.meta['crs'], img1.meta['count'])
print(img2.meta['dtype'], img2.meta['crs'], img2.meta['count'])
print(img3.meta['dtype'], img3.meta['crs'], img3.meta['count'])
###Output
uint8 EPSG:32611 4
uint8 EPSG:32611 4
uint8 EPSG:32611 4
###Markdown
Success - they do! But wait, I thought we were using a "Visual" image, and expecting only 3 bands of information (RGB)?Let's take a closer look at what these bands contain:
###Code
# Read in color interpretations of each band in img1 - here we'll assume img2 and img3 have the same values
colors = [img1.colorinterp[band] for band in range(img1.count)]
# take a look at img1's band types:
for color in colors:
print(color.name)
###Output
red
green
blue
alpha
###Markdown
The fourth channel is actually a binary alpha mask: this is common in satellite color models, and can be confirmed in Planet's [documentation on the PSSCene3Band product](https://developers.planet.com/docs/api/psscene3band/).Now that we've verified all three satellite images have the same critical metadata, we can safely use `rasterio.merge` to stitch them together. Step 3. Creating the Mosaic
###Code
from rasterio.merge import merge
# merge returns the mosaic & coordinate transformation information
(mosaic, transform) = merge([img1, img2, img3])
###Output
_____no_output_____
###Markdown
Once that process is complete, take a moment to congratulate yourself. At this stage you've successfully acquired adjacent imagery, inspected metadata, and performed a compositing process in order to generate a new mosaic. Well done!Before we go further, let's use `rasterio.plot` (a matplotlib interface) to preview the results of our mosaic. This will just give us a quick-and-dirty visual representation of the results, but it can be useful to verify the compositing did what we expected.
###Code
from rasterio.plot import show
show(mosaic)
###Output
_____no_output_____
###Markdown
At this point we're ready to write our mosaic out to a new GeoTIFF file. To do this, we'll want to grab the geospatial metadata from one of our original images (again, here we'll use img1 to represent the metadata of all 3 input images).
###Code
# Grab a copy of our source metadata, using img1
meta = img1.meta
# Update the original metadata to reflect the specifics of our new mosaic
meta.update({"transform": transform,
"height":mosaic.shape[1],
"width":mosaic.shape[2]})
with rasterio.open('data/mosaic.tif', 'w', **meta) as dst:
dst.write(mosaic)
###Output
_____no_output_____
###Markdown
Step 4. Clip the Mosaic to AOI Boundaries Now that we've successfully created a composite mosaic of three input images, the final step is to clip that mosaic to our area of interest. To do that, we'll create a mask for our mosaic based on the AOI boundaries, and crop the mosaic to the extents of that mask.You'll recall that we used Explorer to search for Mount Dana, in Yosemite National Park. The GeoJSON file we used for that search can also be used here, to provide a mask outline for our mosaic.For this step we're going to do a couple things: first, we'll use rasterio's sister-library `fiona` to read in the GeoJSON file. Just as `rasterio` is used to manipulate raster data, `fiona` works similarly on vector data. Where `rasterio` represents raster imagery as numpy arrays, `fiona` represents vector data as GeoJSON-like Python dicts. You can learn [more about fiona here](http://toblerity.org/fiona/manual.html).After reading in the GeoJSON you'll want to extract the _geometry_ of the AOI (_hint:_ `geometry` will be the dict key). A note about Coordinate Reference SystemsIf you attempt to apply the AOI to the mosaic imagery now, rasterio will throw errors, telling you that the two datasets do not overlap. This is because the Coordinate Reference System (CRS) used by each dataset do not match. You can verify this by reading the `crs` attribute of the Collection object generated by `fiona.open()`._Hint: the CRS of mt-dana-small.geojson is:_ `'epsg:4326'`You'll recall that earlier we validated the metadata of the original input imagery, and learned it had a CRS of `'epsg:32611'`. Before the clip can be applied, you will need to to transform the geometry of the AOI to match the CRS of the imagery. Luckily, fiona is smart enough to apply the necessary mathematical transformation to a set of coordinates in order to convert them to new values: apply `fiona.transform.transform_geom` to the AOI geometry to do this, specifying the GeoJSON's CRS as the source CRS, and the imagery's CRS as the destination CRS.
###Code
# use rasterio's sister-library for working with vector data
import fiona
# use fiona to open our original AOI GeoJSON
with fiona.open('data/mt-dana-small.geojson') as mt:
aoi = [feature["geometry"] for feature in mt]
# transform AOI to match mosaic CRS
from fiona.transform import transform_geom
transformed_coords = transform_geom('EPSG:4326', 'EPSG:32611', aoi[0])
aoi = [transformed_coords]
###Output
_____no_output_____
###Markdown
At this stage you have read in the AOI geometry and transformed its coordinates to match the mosaic. We're now ready to use `rasterio.mask.mask` to create a mask over our mosaic, using the AOI geometry as the mask line. Passing `crop=True` to the mask function will automatically crop the bits of our mosaic that fall outside the mask boundary: you can think of it as applying the AOI as a cookie cutter to the mosaic.
###Code
# import rasterio's mask tool
from rasterio.mask import mask
# apply mask with crop=True to cut to boundary
with rasterio.open('data/mosaic.tif') as mosaic:
clipped, transform = mask(mosaic, aoi, crop=True)
# See the results!
show(clipped)
###Output
_____no_output_____
###Markdown
Congratulations! You've created a clipped mosaic, showing only the imagery that falls within our area of interest.From here, the only thing left to do is save our results to a final output GeoTIFF.
###Code
# save the output to a final GeoTIFF
# use the metadata from our original mosaic
meta = mosaic.meta.copy()
# update metadata with new, clipped mosaic's boundaries
meta.update({"transform": transform,
"height":clipped.shape[1],
"width":clipped.shape[2]})
# write the output to a GeoTIFF
with rasterio.open('data/clipped_mosaic.tif', 'w', **meta) as dst:
dst.write(clipped)
###Output
_____no_output_____ |
content/courses/ml_intro/17_big_data_spark/01-Introduction to Spark and Python.ipynb | ###Markdown
Introduction to Spark and PythonLet's learn how to use Spark with Python by using the pyspark library! Make sure to view the video lecture explaining Spark and RDDs before continuing on with this code.This notebook will serve as reference code for the Big Data section of the course involving Amazon Web Services. The video will provide fuller explanations for what the code is doing. Creating a SparkContextFirst we need to create a SparkContext. We will import this from pyspark:
###Code
from pyspark import SparkContext
###Output
_____no_output_____
###Markdown
Now create the SparkContext,A SparkContext represents the connection to a Spark cluster, and can be used to create an RDD and broadcast variables on that cluster.*Note! You can only have one SparkContext at a time the way we are running things here.*
###Code
sc = SparkContext()
###Output
_____no_output_____
###Markdown
Basic OperationsWe're going to start with a 'hello world' example, which is just reading a text file. First let's create a text file.___ Let's write an example text file to read, we'll use some special jupyter notebook commands for this, but feel free to use any .txt file:
###Code
%%writefile example.txt
first line
second line
third line
fourth line
###Output
Overwriting example.txt
###Markdown
Creating the RDD Now we can take in the textfile using the **textFile** method off of the SparkContext we created. This method will read a text file from HDFS, a local file system (available on allnodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings.
###Code
textFile = sc.textFile('example.txt')
###Output
_____no_output_____
###Markdown
Spark’s primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from Hadoop InputFormats (such as HDFS files) or by transforming other RDDs. ActionsWe have just created an RDD using the textFile method and can perform operations on this object, such as counting the rows.RDDs have actions, which return values, and transformations, which return pointers to new RDDs. Let’s start with a few actions:
###Code
textFile.count()
textFile.first()
###Output
_____no_output_____
###Markdown
TransformationsNow we can use transformations, for example the filter transformation will return a new RDD with a subset of items in the file. Let's create a sample transformation using the filter() method. This method (just like Python's own filter function) will only return elements that satisfy the condition. Let's try looking for lines that contain the word 'second'. In which case, there should only be one line that has that.
###Code
secfind = textFile.filter(lambda line: 'second' in line)
# RDD
secfind
# Perform action on transformation
secfind.collect()
# Perform action on transformation
secfind.count()
###Output
_____no_output_____ |
Second Meetup/Binary Classification.ipynb | ###Markdown
Content* Libraries* Introduction to Problem * Loading Dataset * Visualizing Raw Dataset * Preprocessing * Visualizing Proprocessed Dataset* Logistic Regression with numpy * Forward Propagation * Backward Propagation * Complete Propagation * Combining All Together * Training * Prediction * Evaluation* Logistic Regression with tensorflow * Just Forward Propagation * Training and Evaluation* Logistic Regression with keras * Just Layer Description * Training * Evaluation Libraries
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
from sklearn import datasets
from sklearn.model_selection import train_test_split
from utils import plot_prediction
###Output
_____no_output_____
###Markdown
Introduction to Problem Given a set of inputs X (flower features), we want to assign each one to one of two possible categories, 0 or 1, indicating which type of flower it is. To solve this problem we use logistic regression, because it models the probability that each input belongs to a particular category. Loading Dataset How many features does this dataset have? What is the label for this problem?
###Code
iris = pd.read_csv('./data/Iris.csv')
iris.sample(5)
print("Number of Flowers: {}".format(iris.shape[0]))
###Output
Number of Flowers: 150
###Markdown
Visualizing Dataset
###Code
sepalPlt = sb.FacetGrid(iris, hue="Species", size=6).map(plt.scatter, "SepalLengthCm", "SepalWidthCm")
plt.legend(loc='upper left')
###Output
C:\Users\Erfan\Anaconda3\envs\tf\lib\site-packages\seaborn\axisgrid.py:230: UserWarning: The `size` paramter has been renamed to `height`; please update your code.
warnings.warn(msg, UserWarning)
###Markdown
Preprocessing In this step we will simplify the problem into a binary classification problem with just 2 features.
###Code
X = iris.iloc[:, 1:3]
Y = (iris['Species'] != 'Iris-setosa') * 1
###Output
_____no_output_____
###Markdown
We divide the data into train and test sets:
###Code
X_train, X_test, y_train, y_test = train_test_split(X, Y)
plt.figure(figsize=(10, 6))
plt.scatter(X_train[y_train == 0].iloc[:, 0], X_train[y_train == 0].iloc[:, 1], color='b', label='Iris-setosa train')
plt.scatter(X_train[y_train == 1].iloc[:, 0], X_train[y_train == 1].iloc[:, 1], color='r', label='Others train')
plt.scatter(X_test[y_test == 0].iloc[:, 0], X_test[y_test == 0].iloc[:, 1], color='b', label='Iris-setosa test', marker='+', s=150)
plt.scatter(X_test[y_test == 1].iloc[:, 0], X_test[y_test == 1].iloc[:, 1], color='r', label='Others test', marker='+', s=150)
plt.xlabel('SepalLengthCm')
plt.ylabel('SepalWidthCm')
plt.legend()
###Output
_____no_output_____
###Markdown
Questions Logistic Regression Logistic Regression is a Machine Learning classification algorithm that is used to predict the probability of a categorical dependent variable. In logistic regression, the dependent variable is a binary variable that contains data coded as 1 (yes, success, etc.) or 0 (no, failure, etc.). In other words, the logistic regression model predicts P(Y=1) as a function of X. \begin{align}Z = w_1x_1+w_2x_2+\dots+w_nx_n + b\end{align} Forward Propagation First we will implement linear multiplication: $$W=\begin{bmatrix}w_1, & w_2, & \dots & w_n \end{bmatrix}$$$$X=\begin{bmatrix}x_1, & x_2, & \dots & x_n \end{bmatrix}$$ \begin{align}Z = W^TX + b\end{align}
###Code
def linear_mult(X, w, b):
return np.dot(w.T, X) + b
###Output
_____no_output_____
###Markdown
Now we implement a function that generates W and b for us:
###Code
def initialize_with_zeros(dim):
"""
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
"""
w = np.zeros((dim,1))
b = 0
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
###Output
_____no_output_____
###Markdown
Next we will implement the sigmoid function to map the calculated value to a probability: \begin{align}A = \sigma(Z) = \frac{1}{1 + e^{-Z}}\end{align}
###Code
def sigmoid(z):
return 1 / (1 + np.exp(-z))
###Output
_____no_output_____
###Markdown
Now we implement the cost function; the cost function represents the difference between our predictions and the actual labels (y is the actual label and a is our predicted label): \begin{align}J = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log(A^{(i)})+(1-y^{(i)})\log(1-A^{(i)})\right]\end{align}
###Code
def cost_function(y, a):
return -np.mean(y*np.log(a) + (1-y)*np.log(1-a))
###Output
_____no_output_____
###Markdown
Now we implement the whole forward propagation, which will calculate the cost and the predicted value for each data point:
###Code
def forward_propagate(w, b, X, Y):
m = X.shape[1]
Z = linear_mult(X, w, b)
A = sigmoid(Z)
cost = cost_function(Y, A)
cost = np.squeeze(cost)
assert(cost.shape == ())
back_require = {
'A': A
}
return back_require, cost
###Output
_____no_output_____
###Markdown
Backward Propagation Now we calculate the derivatives with respect to W and b as follows:$$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
###Code
def backward_propagate(w, b, X, Y, back_require):
m = X.shape[1]
A = back_require['A']
dw = (1/m) * np.dot(X,(A-Y).T)
db = (1/m) * np.sum(A - Y)
assert(dw.shape == w.shape)
assert(db.dtype == float)
grads = {"dw": dw,
"db": db}
return grads
###Output
_____no_output_____
###Markdown
Complete Propagation
###Code
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
"""
# FORWARD PROPAGATION
back_require, cost = forward_propagate(w, b, X, Y)
# BACKWARD PROPAGATION
grads = backward_propagate(w, b, X, Y, back_require)
return grads, cost
###Output
_____no_output_____
###Markdown
Combining All Together Now we combine all our implemented functions to create an optimizer which finds a linear boundary dividing the zero-labeled data from the one-labeled data by optimizing W and b as follows:$$W=W−\alpha{dw}$$$$b=b−\alpha{db}$$where $\alpha$ is the learning rate.
###Code
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
"""
costs = []
for i in range(num_iterations):
grads, cost = propagate(w,b,X,Y)
dw = grads["dw"]
db = grads["db"]
w -= learning_rate*dw
b -= learning_rate*db
# Record the costs
costs.append(cost)
# Print the cost every 100 training iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
###Output
_____no_output_____
###Markdown
Training
###Code
%%time
X_train_t, y_train_t = np.array(X_train.T), np.array(y_train.T)
w, b = initialize_with_zeros(2)
params, grads, costs = optimize(w, b, X_train_t, y_train_t, num_iterations= 800, learning_rate = 0.1, print_cost = False)
plt.plot(range(len(costs)),costs)
plt.xlabel('Iterations')
plt.ylabel('Cost(loss) value')
###Output
_____no_output_____
###Markdown
Prediction
###Code
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
Z = linear_mult(X, w, b)
A = sigmoid(Z)
for i in range(m):
Y_prediction[0][i] = 1 if A[0][i] > .5 else 0
assert(Y_prediction.shape == (1, m))
return Y_prediction
###Output
_____no_output_____
###Markdown
Evaluation
###Code
preds = predict(params['w'], params['b'], X_train_t)
print('Accuracy on training set: %{}'.format((preds[0] == y_train).mean()*100))
preds = predict(params['w'], params['b'], X_train_t)
print('Accuracy on training set: %{}'.format((preds[0] == y_train).mean()*100))
preds = predict(params['w'], params['b'], np.array(X_test.T))
print('Accuracy on test set: %{}'.format((preds[0] == y_test).mean()*100))
plot_prediction(X_train, y_train, X_test, y_test, predict, params)
###Output
_____no_output_____
###Markdown
Logistic Regression with tensorflow Just Forward Propagation
###Code
y_test = np.array(y_test).astype(np.float32).reshape(-1,1)
y_train = np.array(y_train).astype(np.float32).reshape(-1,1)
graph = tf.Graph()
with graph.as_default():
with tf.device("/cpu:0"):
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
X_data = tf.placeholder(tf.float32, shape=(None, 2))
y_data = tf.placeholder(tf.float32 , shape=(None, 1))
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(tf.truncated_normal([2, 1]))
biases = tf.Variable(tf.zeros([1]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
        # the sigmoid cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(X_data, weights) + biases
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y_data, logits=logits))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
prediction = tf.nn.sigmoid(logits)
###Output
_____no_output_____
###Markdown
Training and Evaluation
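The training loop below also calls an `accuracy` helper that is not defined in any of the cells captured here; a minimal definition consistent with how it is used (predicted probabilities compared against 0/1 labels, returned as a percentage) might look like this sketch:
```python
# Assumed helper (not shown in the captured cells): percentage of predictions whose
# thresholded probability matches the corresponding 0/1 label.
def accuracy(predictions, labels):
    return 100.0 * np.mean((predictions > 0.5).astype(np.float32) == labels)
```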
###Code
%%time
num_steps = 800
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.global_variables_initializer().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
d, l, predictions = session.run([optimizer, loss, prediction],
feed_dict={X_data: X_train.astype(np.float32), y_data: y_train})
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(predictions, y_train))
        # Calling .eval() on prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Test accuracy: %.1f%%' % accuracy(
prediction.eval(feed_dict={X_data: X_test.astype(np.float32)}), y_test))
plot_prediction(X_train, y_train, X_test, y_test, prediction, params, fm_type='tensorflow') #TODO
###Output
Initialized
Loss at step 0: 2.653436
Training accuracy: 33.0%
Loss at step 100: 0.112133
Training accuracy: 99.1%
Loss at step 200: 0.086459
Training accuracy: 99.1%
Loss at step 300: 0.074192
Training accuracy: 99.1%
Loss at step 400: 0.066845
Training accuracy: 99.1%
Loss at step 500: 0.061882
Training accuracy: 99.1%
Loss at step 600: 0.058268
Training accuracy: 99.1%
Loss at step 700: 0.055495
Training accuracy: 99.1%
Test accuracy: 100.0%
Wall time: 647 ms
###Markdown
Keras Just Layer Description
###Code
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(1, activation=tf.nn.sigmoid, input_shape=(2,))
])
model.compile(optimizer='sgd',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Training
###Code
%%time
history = model.fit(X_train, y_train, epochs=800, verbose=0)
plt.plot(range(len(history.history['acc'])),history.history['acc'])
plt.xlabel('Iterations')
plt.ylabel('Accuracy value')
###Output
_____no_output_____
###Markdown
Evaluation
###Code
print('Test accuracy: %.1f%%' % accuracy(
model.predict(X_train.astype(np.float32)), y_train))
print('Test accuracy: %.1f%%' % accuracy(
model.predict(X_test.astype(np.float32)), y_test))
plot_prediction(X_train, y_train, X_test, y_test, model.predict, params, fm_type='keras')
###Output
_____no_output_____ |
build/workspace/dividedby.ipynb | ###Markdown
Divided by: no decimals
###Code
# initialising spark context
from pyspark import SparkContext, SparkConf, SQLContext
conf = SparkConf().setAppName("pysparkApp")
sc = SparkContext(conf=conf)
series = sc.parallelize(range(0, 100))
divided_by = 5
numbers = series.map(lambda number: (number%divided_by, 1)).reduceByKey(lambda x, y : x + y)
ordered = numbers.map(lambda x:(x[1], x[0])).sortBy(lambda x: x[1])
ordered.take(1)
sc.stop()
###Output
_____no_output_____ |
DAY 001 ~ 100/DAY078_[BaekJoon] 가장 긴 감소하는 부분 수열 (Python).ipynb | ###Markdown
Friday, 24 April 2020 BaekJoon - Problem 11722: Longest Decreasing Subsequence (Python) Problem: https://www.acmicpc.net/problem/11722 Blog: https://somjang.tistory.com/entry/BaekJoon-11722%EB%B2%88-%EA%B0%80%EC%9E%A5-%EA%B8%B4-%EA%B0%90%EC%86%8C%ED%95%98%EB%8A%94-%EB%B6%80%EB%B6%84-%EC%88%98%EC%97%B4-Python First attempt
###Code
inputNum = int(input())
inputNums = input()
inputNums = inputNums.split()
inputNums = [int(num) for num in inputNums]
nc = [0] * (inputNum)
maxNum = 0
for i in range(0, inputNum):
minNum = 0
for j in range(0, i):
if inputNums[i] < inputNums[j]:
if minNum < nc[j]:
minNum = nc[j]
#print(inputNums, nc)
nc[i] = minNum + 1
if maxNum < nc[i]:
maxNum = nc[i]
print(maxNum)
###Output
_____no_output_____ |
Scikit-Learn/5) Improve a Model(Hyperparameter Tuning)/HyperParameterTuning_explanation.ipynb | ###Markdown
5. IMPROVING a Model (Hyperparameter Tuning) The **first predictions** you make with a model are generally referred to as **baseline predictions**, and the same goes for the first evaluation metrics you get: these are generally referred to as baseline metrics. So, **first model = baseline model**. Your next goal is to improve upon these baseline metrics. Two of the main ways to improve baseline metrics are from a data perspective and from a model perspective. From a data perspective, ask:* Could we collect more data? In machine learning, more data is generally better, as it gives a model more opportunities to learn patterns.* Could we improve our data? This could mean filling in missing values or finding a better encoding (turning things into numbers) strategy. From a model perspective, ask:* Is there a better model we could use? If you've started out with a simple model, could you use a more complex one? (we saw an example of this when looking at the [Scikit-Learn machine learning map](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html); ensemble methods are generally considered more complex models)* Could we improve the current model? If the model you're using performs well straight out of the box, can the hyperparameters be tuned to make it even better? **Note**: Patterns in data are also often referred to as data parameters. The difference between parameters and hyperparameters is that a machine learning model seeks to find parameters in the data on its own, whereas hyperparameters are settings on a model which a user (you) can adjust. Hyperparameters vs Parameters* Parameters = patterns the model finds in the data on its own* Hyperparameters = settings on a model you can adjust to (potentially) improve its ability to find patterns.
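As a concrete (and purely illustrative) sketch of what adjusting those settings can look like with scikit-learn — the specific values below are assumptions for the example, not recommendations, and `X`, `y` stand for whatever features and labels you are working with:
```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

baseline = RandomForestClassifier()                 # all hyperparameters left at their defaults
tuned = RandomForestClassifier(n_estimators=200,    # try more trees...
                               max_depth=10,        # ...that are individually shallower
                               min_samples_leaf=2)

# Compare both settings with cross-validation before trusting any "improvement":
# baseline_score = cross_val_score(baseline, X, y, cv=5).mean()
# tuned_score = cross_val_score(tuned, X, y, cv=5).mean()
```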
###Code
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier()
clf.get_params() # get the parameters of a model
###Output
_____no_output_____ |
2_Ejercicios/Primera_parte/Entregables/2. Ejercios_Mod1/2020_12_21/web_scraping_delivery.ipynb | ###Markdown
1. From HTML*Using only Beautiful Soup, no regex*Save in a dataframe the following information using web scraping. Each row of the dataframe must have, in different columns:- The name of the title- The id of the div where the value was scraped. If there is no id, then the value must be numpy.nan- The name of the tag where the value was scraped.- The following scraped values, one per row: - The value "Este es el segundo párrafo" --> Row 1 - The url https://pagina1.xyz/ --> Row 2 - The url https://pagina4.xyz/ --> Row 3 - The url https://pagina5.xyz/ --> Row 4 - The value "links footer-links" --> Row 5 - The value "Este párrafo está en el footer" --> Row 6
###Code
html = """
<html lang="es">
<head>
<meta charset="UTF-8">
<title>Página de prueba</title>
</head>
<body>
<div id="main" class="full-width">
<h1>El título de la página</h1>
<p>Este es el primer párrafo</p>
<p>Este es el segundo párrafo</p>
<div id="innerDiv">
<div class="links">
<a href="https://pagina1.xyz/">Enlace 1</a>
<a href="https://pagina2.xyz/">Enlace 2</a>
</div>
<div class="right">
<div class="links">
<a href="https://pagina3.xyz/">Enlace 3</a>
<a href="https://pagina4.xyz/">Enlace 4</a>
</div>
</div>
</div>
<div id="footer">
<!-- El footer -->
<p>Este párrafo está en el footer</p>
<div class="links footer-links">
<a href="https://pagina5.xyz/">Enlace 5</a>
</div>
</div>
</div>
</body>
</html>"""
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
soup = BeautifulSoup(html, 'html.parser')
type(soup)
ids = soup.find_all(id=True)  # every tag that defines an id attribute
[tag.get('id') for tag in ids]
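# --- One possible way to build the requested dataframe (a sketch, not the only
# --- valid answer): each row stores the page title, the id of the enclosing div
# --- (numpy.nan when it has none), the tag name and the scraped value itself.
title = soup.title.text

def enclosing_div_id(tag):
    div = tag.find_parent('div')
    return div.get('id', np.nan) if div is not None else np.nan

second_p = soup.find(id='main').find_all('p')[1]
link1 = soup.find('a', href='https://pagina1.xyz/')
link4 = soup.find('a', href='https://pagina4.xyz/')
link5 = soup.find('a', href='https://pagina5.xyz/')
footer_links_div = soup.find('div', class_='footer-links')
footer_p = soup.find(id='footer').find('p')

rows = [(title, enclosing_div_id(t), t.name, value) for t, value in [
    (second_p, second_p.text),
    (link1, link1['href']),
    (link4, link4['href']),
    (link5, link5['href']),
    (footer_links_div, ' '.join(footer_links_div['class'])),
    (footer_p, footer_p.text),
]]
pd.DataFrame(rows, columns=['title', 'div_id', 'tag', 'value'])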
###Output
_____no_output_____ |
Example_Notebooks/Window_Wall_Ratio_pix4d_HQ.ipynb | ###Markdown
First, we set the image and parameter directories, as well as the merged polygons file path. We load the merged polygons and also initialize a dictionary of Cameras. The Camera class stores all information related to a camera, i.e. its intrinsic and extrinsic parameters.
###Code
#Example file
filename = "DJI_0033.JPG"
#directory = "../data/Drone_Flight/"
facade_file = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/Data_results/merged_polygons.txt"#"../data/Drone_Flight/merged.txt"
image_dir = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/images/" #directory + "RGB/"
param_dir = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/pix4d_HQ/1_initial/params/" #directory + "params/"
#predictions_dir = directory + "predictions/"
offset = np.loadtxt(param_dir + "pix4d_HQ_offset.xyz",usecols=range(3)) #offset = np.loadtxt(param_dir + "offset.txt",usecols=range(3))
#Initializes a dictionary of Camera classes. See utils.py for more information.
camera_dict = utils.create_camera_dict(param_dir, filename="pix4d_HQ_calibrated_camera_parameters.txt", offset=offset)
#Loads pmatrices and image filenamees
p_matrices = np.loadtxt(param_dir + 'pix4d_HQ_pmatrix.txt', usecols=range(1,13)) # p_matrices = np.loadtxt(param_dir + 'pmatrix.txt', usecols=range(1,13))
#Loads the merged polygons, as well as a list of facade types (i.e. roof, wall, or floor)
merged_polygons, facade_type_list, file_format = fw.load_merged_polygon_facades(filename=facade_file)
#Offset adjustment parameter
# height_adj = np.array([0.0, 0.0, 108])
# offset = offset + height_adj
polygon_offset = np.loadtxt("/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/Data_results/polygon_offset.txt", delimiter=",")
#Adjust height if necessary for the camera images
height_adj = -polygon_offset[2] #108.0#108.0
offset_adj = np.array([0.0, 0.0, height_adj])
offset = offset + offset_adj
###Output
_____no_output_____
###Markdown
Next, we extract the contours for the window predictions, by taking the window prediction points and using them to create a shapely polygon.
###Code
window_file = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/masks_2/DJI_0033.png"
# window_file ='/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/mask_GT/DJI_0033_pink.png'
print("Window predictions: ")
image = cv2.imread(window_file)
plt.imshow(image)
plt.show()
#Extract the contours of the window file
contours = contour_extraction.extract_contours(window_file)
#Create polygons from the window contours
window_polygons = utils.convert_polygons_shapely(contours)
def plot_shapely_polys(image_file, polys):
for poly in polys:
s = poly
s = poly.simplify(0.1, preserve_topology=True)
x,y = s.exterior.xy
plt.plot(x,y)
plt.show()
print("Extracted contours: ")
plt.imshow(image)
plot_shapely_polys(window_file, window_polygons)
###Output
Window predictions:
###Markdown
Finally, for each window point, we obtain its 3D coordinates and use them to calculate the window to wall ratio.
###Code
camera = camera_dict[filename]
pmatrix = camera.calc_pmatrix()
# print(pmatrix)
image_file = utils.load_image(image_dir + filename)
# print(image_dir + filename)
#Projects the merged polygon facades onto the camera image
projected_facades, projective_distances = extract_window_wall_ratio.project_merged_polygons(
merged_polygons, offset, pmatrix)
# print(projected_facades)
#Creates a dictionary mapping the facade to the windows contained within them, keyed by facade index
facade_window_map = extract_window_wall_ratio.get_facade_window_map(
window_polygons, projected_facades, projective_distances)
# print(facade_window_map)
#Creates a list of all the facades in the merged polygon
facades = []
for poly in merged_polygons:
facades = facades + poly
facade_indices = list(facade_window_map.keys())
# print(facade_indices)
for i in facade_indices:
#Computes window to wall ratio
win_wall_ratio = extract_window_wall_ratio.get_window_wall_ratio(
projected_facades[i], facades[i], facade_window_map[i])
#Output printing:
print("Facade index: " + str(i))
print("Window-to-wall ratio: " + str(win_wall_ratio))
#Uncomment this line to plot the windows and facades on the image
# extract_window_wall_ratio.plot_windows_facade(projected_facades[i], facade_window_map[i], image_file)
###Output
facade_area: 538.634465447367
None
window_area: 115.04176815306703
None
Facade index: 2
Window-to-wall ratio: 0.2135804066260747
facade_area: 911.3013916015625
None
window_area: 0.573974609375
None
Facade index: 5
Window-to-wall ratio: 0.000629840593534342
###Markdown
convert json to mask (ground truth)
###Code
import json
import pandas as pd
import numpy as np
from PIL import Image, ImageDraw, ImageChops
data = json.load(open('/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/mask_GT/DJI_0033.json'))
nested_lst_of_tuples = []
for i in range(len(data.get("objects"))):
objects = data.get("objects")[i]
points_lab = objects['points']
nested_lists = points_lab['exterior']
nested_lst_of_tuples0 = [tuple(l) for l in nested_lists]
nested_lst_of_tuples.append(nested_lst_of_tuples0)
def getMask(original,polygon):
#Returns the mask of the polygon
mask = Image.new('L', original.size, 0)
mask_draw = ImageDraw.Draw(mask)
for i in range(len(data.get("objects"))):
mask_draw.polygon(polygon[i], outline=1, fill=1)
return np.array(mask)
original=Image.open(image_dir + filename)
mask = getMask(original,nested_lst_of_tuples)
plt.figure()
plt.imshow(mask)
import matplotlib
matplotlib.image.imsave('/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/mask_GT/DJI_0033.png', mask)
###Output
_____no_output_____ |
Bloque 1 - Ramp-Up/05_Python/03_Funciones/04_RESU_Ejercicios funciones.ipynb | ###Markdown
Functions exercises Exercise 1: Write a function that converts numbers from 1 to 7 into the names of the days of the week. The function takes a single numeric argument and returns a string.
###Code
def dia_semana(dia_num):
if dia_num == 1:
return "Lunes"
elif dia_num == 2:
return "Martes"
elif dia_num == 3:
return "Miércoles"
elif dia_num == 4:
return "Jueves"
elif dia_num == 5:
return "Viernes"
elif dia_num == 6:
return "Sábado"
elif dia_num == 7:
return "Domingo"
else:
return "No es un dia de la semana"
print(dia_semana(1))
print(dia_semana(7))
def day_of_week():
day_num = int(input("Introduzca un número de día de la semana"))
if day_num == 1:
day = "Lunes"
elif day_num == 2:
day = "Martes"
elif day_num == 3:
day = "Miercoles"
elif day_num == 4:
day = "Jueves"
elif day_num == 5:
day = "Viernes"
elif day_num == 6:
day = "Sabado"
elif day_num == 7:
day = "Domingo"
    else:
        print("Error al introducir el número, vuelva a intentarlo")
        return day_of_week()  # ask again when the input is not a valid day number
    return day
day_of_week()
###Output
Introduzca un número de día de la semana 10
###Markdown
Exercise 2: In exercise 8 of the loops section, we built an inverted pyramid whose number of levels was determined by user input. Create a function that replicates the behaviour of the pyramid, and use a single input parameter of the function to determine the number of rows of the pyramid, i.e. remove the input statement.
###Code
def piramide(filas):
for i in range(filas):
out = ""
for j in range(filas-i):
out = out + " " + str(j + 1)
print(out)
piramide(4)
piramide(6)
# The same loop outside the function, with a fixed number of rows for illustration
filas = 4
for i in range(filas):
    out = ""
    for j in range(filas-i):
        out = out + " " + str(j + 1)
    print(out)
###Output
_____no_output_____
###Markdown
Exercise 3: Write a function that compares two numbers. The function takes two arguments and there are three possible outputs: they are equal, the first is greater than the second, or the second is greater than the first.
###Code
def compare_fun(num_1, num_2):
if num_1 == num_2:
return "Son iguales"
elif num_1 > num_2:
return str(num_1) + " es mayor que " + str(num_2)
else:
return str(num_2) + " es mayor que " + str(num_1)
print(compare_fun(3,4))
print(compare_fun(4,4))
###Output
4 es mayor que 3
Son iguales
###Markdown
Exercise 4: Write a function that works as a letter counter. The first argument takes a text, and the second the letter to count. The function must return an integer with the number of times that letter appears, counting both uppercase and lowercase.
###Code
def letter_count(texto, letra):
texto = texto.lower()
return texto.count(letra.lower())
letter_count("En esta clase aprenderemos mucho Python", "C")
###Output
_____no_output_____
###Markdown
Exercise 5: Write a function that takes a single argument, a string. The output of the function must be a dictionary with the count of every letter in that string.
###Code
def letter_count_dic(texto):
all_letters = {}
for i in texto.lower():
if i in all_letters:
all_letters[i] = all_letters[i] + 1
else:
all_letters[i] = 1
return all_letters
letter_count_dic("En esta clase aprenderemos mucho Python")
###Output
_____no_output_____
###Markdown
Exercise 6: Write a function that adds or removes elements in a list. The function needs the following arguments:* lista: the list where the elements will be added or removed* comando: "add" or "remove"* elemento: defaults to None.
###Code
def listas_program(lista, comando, elemento = None):
if comando == "add" and elemento != None:
lista.append(elemento)
return lista
elif comando == "add" and elemento == None:
lista.append(9999)
return lista
elif comando == "remove":
try:
lista.remove(elemento)
return lista
except:
print("El elemento", elemento, "no existe en la lista")
print(listas_program([1,2,3], "add", 9))
print(listas_program([1,2,3], "add"))
print(listas_program([1,2,3], "remove", 2))
print(listas_program([1,2,3], "remove", 9))
###Output
[1, 2, 3, 9]
[1, 2, 3, 9999]
[1, 3]
El elemento 9 no existe en la lista
None
###Markdown
Exercise 7: Create a function that receives an arbitrary number of words and returns a complete sentence, separating the words with spaces.
###Code
def junta_palabras(*args):
return " ".join(args)
junta_palabras("Hola", "me", "llamo", "Dani")
###Output
_____no_output_____
###Markdown
Exercise 8: Write a program that computes the [Fibonacci sequence](https://es.wikipedia.org/wiki/Sucesi%C3%B3n_de_Fibonacci)
###Code
def fibonacci(n):
if n == 1 or n == 2:
return 1
else:
return (fibonacci(n - 1) + (fibonacci(n - 2)))
print(fibonacci(6))
###Output
8
###Markdown
Exercise 9: Define the following functions in a single cell:* A function that computes the area of a square* A function that computes the area of a triangle* A function that computes the area of a circleIn another cell, compute the area of:* Two circles of radius 10 + a triangle with base 3 and height 7* A square with side = 10 + 3 circles (one of radius = 4 and the other two of radius = 6) + 5 triangles with base = 2 and height = 4
###Code
def cuadrado(lado):
return lado * lado
def triangulo(base, altura):
return (base * altura)/2
def circulo(radio):
import math
return math.pi * radio**2
circ1 = 2 * circulo(10)
tria1 = triangulo(3,7)
print(circ1 + tria1)
cuadr2 = cuadrado(10)
circ2 = circulo(4) + 2 * circulo(6)
trian2 = 5 * triangulo(2,4)
print(cuadr2 + circ2 + trian2)
###Output
638.8185307179587
396.46015351590177
|
notebooks/Cardano_Price.ipynb | ###Markdown
[DATA SCIENCE CHALLENGE SCL WEEK 1]() [Predict Cardano Price](https://github.com/jesussantana/Predict-Cardano-Price) Models*************[](https://www.python.org/) [](https://jupyter.org/try) [](https://www.linkedin.com/in/chus-santana/) [](https://github.com/jesussantana)
###Code
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import numpy as np
# ^^^ pyforest auto-imports - don't write above this line
# ==============================================================================
# Auto Import Dependencies
# ==============================================================================
# pyforest imports dependencies according to use in the notebook
# ==============================================================================
#!pip install mplfinance
#import mplfinance as mpf
#!pip install keras
#!pip install tensorflow
#!pip install imblearn
# Dependencies not Included in Auto Import*
# ==============================================================================
import time
import keras
from tensorflow import keras as ks
from keras.layers import TimeDistributed
from keras.models import load_model
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from keras.utils.np_utils import to_categorical
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import TimeDistributed
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.callbacks import ModelCheckpoint
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import MinMaxScaler
# Pandas configuration
# ==============================================================================
pd.set_option('display.max_columns', None)
# # Graphics
# ==============================================================================
import matplotlib.ticker as ticker
from matplotlib import style
import matplotlib.pyplot as plotter
import plotly.express as px
# Matplotlib configuration
# ==============================================================================
plt.rcParams['image.cmap'] = "bwr"
#plt.rcParams['figure.dpi'] = "100"
plt.rcParams['savefig.bbox'] = "tight"
style.use('ggplot') or plt.style.use('ggplot')
%matplotlib inline
# Seaborn configuration
# ==============================================================================
sns.set_theme(style='darkgrid', palette='deep')
dims = (20, 16)
# Warnings configuration
# ==============================================================================
import warnings
warnings.filterwarnings('ignore')
# Folder configuration
# ==============================================================================
from os import path
import sys
new_path = '../scripts/'
if new_path not in sys.path:
sys.path.append(new_path)
path = "../data/"
###Output
_____no_output_____
###Markdown
Explore Dataset**********
###Code
df_train = pd.read_csv(path + 'raw/train.csv')
df_test = pd.read_csv(path + 'raw/test_predictors.csv')
df_train.head()
df_train.info()
df_test.head()
df_test.head().info()
def convert_time(df):
df = df.sort_values('Open_time')
df['Year'] = [int(x[:4]) for x in df['Open_time'] ]
df['Month'] = [int(x[5:7]) for x in df['Open_time'] ]
df['Day'] = [int(x[8:11]) for x in df['Open_time'] ]
df['Hour'] = [int(x[11:13]) for x in df['Open_time'] ]
df['Minute'] = [int(x[14:16]) for x in df['Open_time'] ]
df['Open_time'] = [time.mktime(time.strptime(x, "%Y-%m-%d %H:%M:%S")) for x in df['Open_time'] ]
return df
df_train = convert_time(df_train)
df_test = convert_time(df_test)
###Output
_____no_output_____
###Markdown
Train Test**********
###Code
cols = df_train.columns
cols = cols.drop("target")
X_train = df_train[cols]
y_train = df_train["target"]
# Transform Dataset
# ==============================================================================
# strategy = {0:15000, 1:10000, 2:10000}
strategy = {0:20000, 1:5000, 2:5000}
oversample = SMOTE(sampling_strategy=strategy)
# X_train_over, y_train_over = X_train, y_train
X_train_over, y_train_over = oversample.fit_resample(X_train, y_train)
y_train_over = to_categorical(y_train_over, num_classes=3)
scaler = MinMaxScaler(feature_range=(0, 1))
# scaler = StandardScaler()
X_train_over = scaler.fit_transform(X_train_over)
# X_train_over = X_train_over.copy()
X_test = scaler.fit_transform(df_test)
pd.DataFrame(y_train_over).value_counts()
# [samples, timesteps, features].
timesteps = 20
samples = int(X_train_over.shape[0]/timesteps)
features = len(cols)
X_train_reshape = np.array(X_train_over).reshape(samples, timesteps, features)
y_train_reshape = np.array(y_train_over).reshape(samples, timesteps, 3)
X_train_reshape.shape
y_train_reshape
###Output
_____no_output_____
###Markdown
Model************
###Code
model = Sequential()
model.add(LSTM(500, dropout=0.25, input_shape=( timesteps, features ),return_sequences=True))
model.add(Dense(3,activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam', metrics=['accuracy'])
model.summary()
checkpoint_filepath = 'model.h5'
model_checkpoint_callback = keras.callbacks.ModelCheckpoint(
filepath=checkpoint_filepath,
monitor='val_accuracy',
mode='max',
save_best_only=True)
X_train_orig = scaler.fit_transform(X_train[:-1])
y_train_orig = to_categorical(y_train[:-1], num_classes=3)
timesteps = 20
samples = int(X_train_orig.shape[0]/timesteps)
features = len(cols)
X_train_orig_reshape = np.array(X_train_orig).reshape(samples, timesteps, features)
y_train_orig_reshape = np.array(y_train_orig).reshape(samples, timesteps, 3)
X_train_orig_reshape.shape
history = model.fit(X_train_reshape, y_train_reshape, epochs=3000, batch_size=512, verbose=0, callbacks=[model_checkpoint_callback],
validation_data=(X_train_orig_reshape, y_train_orig_reshape))
fig, ax = plt.subplots(2,1,figsize=(10,10))
ax[0].set_title('Cross Entropy Loss', fontsize = 20)
ax[0].plot(history.history['loss'], color='blue', label='Train')
ax[0].plot(history.history['val_loss'], color='orange', label='Validation')
ax[0].set_ylabel('Cross Entropy Loss', fontsize = 16)
ax[0].legend(fontsize = 16)
ax[1].set_title('Classification Accuracy', fontsize = 20)
ax[1].plot(history.history['accuracy'], color='blue', label='Train')
ax[1].plot(history.history['val_accuracy'], color='orange', label='Validation')
ax[1].set_ylabel('Classification Accuracy', fontsize = 16)
ax[1].set_xlabel('Epochs', fontsize = 16)
ax[1].legend(fontsize = 16)
plt.show()
###Output
_____no_output_____
###Markdown
Analysis************
###Code
best_model = load_model('model.h5')
best_model.summary()
p = pd.DataFrame(best_model.predict(X_train_reshape).reshape(X_train_reshape.shape[0]*X_train_reshape.shape[1],3))
p.idxmax(axis=1).value_counts()
p['predicted'] = p.idxmax(axis=1)
p['real'] = y_train
p
accuracy = (p['predicted'] == p['real']).sum()
print('Accuracy: ',accuracy/p.shape[0])
###Output
Accuracy: 0.40996666666666665
###Markdown
Prediction***********
###Code
X_test.shape
X_test_orig = X_test[:-3]
timesteps = 20
samples = int(X_test_orig.shape[0]/timesteps)
features = len(cols)
X_test_orig_reshape = np.array(X_test_orig).reshape(samples, timesteps, features)
res = pd.DataFrame(best_model.predict(X_test_orig_reshape).reshape(X_test_orig_reshape.shape[0]*X_test_orig_reshape.shape[1],3))
res.idxmax(axis=1).value_counts()
result = list(res.idxmax(axis=1))
result.append(0)
result.append(0)
result.append(0)
pd.DataFrame(result).shape
pd.DataFrame(result).to_csv('result.csv')
res
###Output
_____no_output_____ |
notebooks/03_spurious_antisense.ipynb | ###Markdown
Detecting Spurious Antisense:
###Code
import sys
import os
from glob import glob
from collections import defaultdict, Counter, namedtuple
import itertools as it
import random
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from matplotlib import cm
import seaborn as sns
import pysam
## Default plotting params
%matplotlib inline
sns.set(font='Arial')
plt.rcParams['svg.fonttype'] = 'none'
style = sns.axes_style('white')
style.update(sns.axes_style('ticks'))
style['xtick.major.size'] = 2
style['ytick.major.size'] = 2
sns.set(font_scale=2, style=style)
pal = sns.color_palette(['#0072b2', '#d55e00', '#009e73', '#f0e442', '#cc79a7', '#88828c'])
cmap = ListedColormap(pal.as_hex()[:2])
sns.set_palette(pal)
sns.palplot(pal)
plt.show()
###Output
_____no_output_____
###Markdown
ERCC antisense reads: We can simply count the number of reads that map antisense to the ERCC spike-in controls to estimate spurious antisense here.
###Code
ercc_bams = [
'/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180201_1617_20180201_FAH45730_WT_Col0_2916_regular_seq/aligned_data/ERCC92/201901_col0_2916.bam',
'/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180413_1558_20180413_FAH77434_mRNA_WT_Col0_2917/aligned_data/ERCC92/201901_col0_2917.bam',
'/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180416_1534_20180415_FAH83697_mRNA_WT_Col0_2918/aligned_data/ERCC92/201901_col0_2918.bam',
'/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180418_1428_20180418_FAH83552_mRNA_WT_Col0_2919/aligned_data/ERCC92/201901_col0_2919.bam',
'/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180405_FAH59362_WT_Col0_2917/aligned_data/ERCC92/201903_col0_2917_exp2.bam',
]
def get_antisense(bam_fn):
antisense = 0
with pysam.AlignmentFile(bam_fn) as bam:
mapped = bam.mapped
for aln in bam.fetch():
if aln.is_reverse:
antisense += 1
return antisense, mapped
total_mapped = 0
total_antisense = 0
for bam in ercc_bams:
a, t = get_antisense(bam)
total_mapped += t
total_antisense += a
print(total_antisense, total_antisense / total_mapped * 100)
###Output
2 0.021175224986765485
###Markdown
Antisense at RCA: As an example of a highly expressed gene with no genuine antisense annotations, we use RCA.
###Code
rca_locus = ['2', 16_570_746, 16_573_692]
def intersect(inv_a, inv_b):
a_start, a_end = inv_a
b_start, b_end = inv_b
if a_end < b_start or a_start > b_end:
return 0
else:
s = max(a_start, b_start)
e = min(a_end, b_end)
return e - s
def intersect_spliced_invs(invs_a, invs_b):
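    # Two-pointer sweep over two sorted, non-overlapping interval lists (e.g. exon
    # blocks): always advance the interval that ends first and accumulate the
    # overlap of the current pair along the way.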
score = 0
invs_a = iter(invs_a)
invs_b = iter(invs_b)
a_start, a_end = next(invs_a)
b_start, b_end = next(invs_b)
while True:
if a_end < b_start:
try:
a_start, a_end = next(invs_a)
except StopIteration:
break
elif a_start > b_end:
try:
b_start, b_end = next(invs_b)
except StopIteration:
break
else:
score += intersect([a_start, a_end], [b_start, b_end])
if a_end > b_end:
try:
b_start, b_end = next(invs_b)
except StopIteration:
break
else:
try:
a_start, a_end = next(invs_a)
except StopIteration:
break
return score
def bam_cigar_to_invs(aln):
invs = []
start = aln.reference_start
end = aln.reference_end
strand = '-' if aln.is_reverse else '+'
left = start
right = left
for op, ln in aln.cigar:
if op in (1, 4, 5):
# does not consume reference
continue
elif op in (0, 2, 7, 8):
# consume reference but do not add to invs yet
right += ln
elif op == 3:
invs.append([left, right])
left = right + ln
right = left
if right > left:
invs.append([left, right])
assert invs[0][0] == start
assert invs[-1][1] == end
return start, end, strand, np.array(invs)
PARSED_ALN = namedtuple('Aln', 'chrom start end read_id strand invs')
def parse_pysam_aln(aln):
chrom = aln.reference_name
read_id = aln.query_name
start, end, strand, invs = bam_cigar_to_invs(
aln)
return PARSED_ALN(chrom, start, end, read_id, strand, invs)
counts = Counter()
with pysam.AlignmentFile('/cluster/ggs_lab/mtparker/analysis_notebooks/chimeric_transcripts/vir1_vs_col0/aligned_data/col0.merged.bam') as bam:
for aln in bam.fetch(*rca_locus):
aln = parse_pysam_aln(aln)
overlap = intersect_spliced_invs([rca_locus[1:]], aln.invs)
aln_len = sum([e - s for s, e in aln.invs])
if overlap / aln_len > 0.1:
counts[aln.strand] += 1
counts
###Output
_____no_output_____ |
ECE365/machine learning/lab3_vvv_2021/.ipynb_checkpoints/ass-checkpoint.ipynb | ###Markdown
Lab 3: Classification (Part 2) and Model Selection Name: Your Name Here (Your netid here) Due Feburary 16th, 2021 11:59 PM**Logistics and Lab Submission**See the [course website](https://courses.engr.illinois.edu/ece365/fa2019/logisticsvvv.html). Remember that all labs count equally, despite the labs being graded from a different number of total points). **What You Will Need To Know For This Lab**This lab covers a few more basic classifiers which can be used for M-ary classification:- Naive Bayes- Logistic Regression- Support Vector Machinesas well as cross-validation, a tool for model selection and assessment. The submission procedure is provided below:- You will be provided with a template Python script (main.py) for this lab where you need to implement the provided functions as needed for each question. Follow the instructions provided in this Jupyter Notebook (.ipynb) to implement the required functions. **Do not change the file name or the function headers!**- Upload only your Python script (.py file) on Gradescope. Don't upload your datasets or Jupyter Notebook (.ipynb file).- Your grades and feedbacks will appear on Gradescope. The grading for the programming questions is automated using Gradescope autograder, no partial credits are given. Therefore, if you wish, you will have a chance to re-submit your code **within 72 hours** of receiving your first grade for this lab, only if you have *reasonable* submissions before the deadline (i.e. not an empty script).- If you re-submit, the final grade for the programming part of this lab will be calculated as .4 \* first_grade + .6 \* .9 \* re-submission_grade.- This lab also has Multiple Choice Questions (MCQs) that are needed to be completed on Gradescope **within the deadline**.There are some problems which have short answer questions. They are not graded, but we are free to discuss answers to these problems. **Multiple Choice Questions (MCQs) will be graded on Gradescope!**Remember in many applications, the end goal is not always "run a classifier", like in a homework problem, but is to use the output of the classifier in the context of the problem at hand (e.g. detecting spam, identifying cancer, etc.). Because of this, some of our Engineering Design-type questions are designed to get you to think about the entire design problem at a high level.**Warning: Do not train on your test sets. You will automatically have your score halved for a problem if you train on your test data.** **Preamble (don't change this)**
###Code
%pylab inline
import numpy as np
from sklearn import neighbors
from sklearn import svm
from sklearn import model_selection
from numpy import genfromtxt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import glob
%run main.py
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Problem 1: Spam Detection (70 points)In this problem, you will be constructing a crude spam detector. As you all know, when you receive an e-mail, it can be divided into one of two types: ham (useful mail, label $-1$) and spam (junk mail, label $+1$). In the [olden days](http://www.paulgraham.com/spam.html), people tried writing a bunch of rules to detect spam. However, it was quickly seen that machine learning approaches work fairly well for a little bit of work. You will be designing a spam detector by applying some of the classification techniques you learned in class to a batch of emails used to train and test [SpamAssassin](http://spamassassin.apache.org/), a leading anti-spam software package. Let the *vocabulary* of a dataset be a list of all terms occuring in a data set. So, for example, a vocabulary could be ["cat","dog","chupacabra", "aerospace", ...]. Our features will be based only the frequencies of terms in our vocabulary occuring in the e-mails (such an approach is called a *bag of words* approach, since we ignore the positions of the terms in the emails). The $j$-th feature is the number of times term $j$ in the vocabulary occurs in the email. If you are interested in further details on this model, you can see Chapters 6 and 13 in [Manning's Book](http://nlp.stanford.edu/IR-book/).You will use the following classifiers in this problem:- sklearn.naive_bayes.BernoulliNB (Naive Bayes Classifier with Bernoulli Model)- sklearn.naive_bayes.MultinomialNB (Naive Bayes Classifier with Multinomial Model)- sklearn.svm.LinearSVC (Linear Support Vector Machine)- sklearn.linear_model.LogisticRegression (Logistic Regression)- sklearn.neighbors.KNeighborsClassifier (1-Nearest Neighbor Classifier)In the context of the Bernoulli Model for Naive Bayes, scikit-learn will binarize the features by interpretting the $j$-th feature to be $1$ if the $j$-th term in the vocabulary occurs in the email and $0$ otherwise. This is a categorical Naive Bayes model, with binary features. While we did not discuss the multinomial model in class, it operates directly on the frequencies of terms in the vocabulary, and is discussed in Section 13.2 in [Manning's Book](http://nlp.stanford.edu/IR-book/) (though you do not need to read this reference). Both the Bernoulli and Multinomial models are commonly used for Naive Bayes in text classification. A sample Ham email is: From [email protected] Mon Jun 24 17:06:54 2002 Return-Path: [email protected] Delivery-Date: Tue May 28 02:53:28 2002 Received: from mp.opensrs.net (mp.opensrs.net [216.40.33.45]) by dogma.slashnull.org (8.11.6/8.11.6) with ESMTP id g4S1rSe14718 for ; Tue, 28 May 2002 02:53:28 +0100 Received: (from popensrs@localhost) by mp.opensrs.net (8.9.3/8.9.3) id VAA04361; Mon, 27 May 2002 21:53:26 -0400 Message-Id: Date: Mon, 27 May 2002 21:53:26 -0500 (EST) From: "Starflung NIC" To: Subject: Automated 30 day renewal reminder 2002-05-27 X-Keywords: The following domains that are registered as belonging to you are due to expire within the next 60 days. If you would like to renew them, please contact [email protected]; otherwise they will be deactivated and may be registered by another. 
Domain Name, Expiry Date nutmegclothing.com, 2002-06-26 A sample Spam email is: From [email protected] Fri Aug 23 11:03:31 2002 Return-Path: Delivered-To: [email protected] Received: from localhost (localhost [127.0.0.1]) by phobos.labs.example.com (Postfix) with ESMTP id 478B54415C for ; Fri, 23 Aug 2002 06:02:57 -0400 (EDT) Received: from mail.webnote.net [193.120.211.219] by localhost with POP3 (fetchmail-5.9.0) for zzzz@localhost (single-drop); Fri, 23 Aug 2002 11:02:57 +0100 (IST) Received: from smtp.easydns.com (smtp.easydns.com [205.210.42.30]) by webnote.net (8.9.3/8.9.3) with ESMTP id IAA08912; Fri, 23 Aug 2002 08:13:36 +0100 From: [email protected] Received: from mymail.dk (unknown [61.97.34.233]) by smtp.easydns.com (Postfix) with SMTP id 7484A2F85C; Fri, 23 Aug 2002 03:13:31 -0400 (EDT) Reply-To: Message-ID: To: [email protected] Subject: HELP WANTED. WORK FROM HOME REPS. MiME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" X-Priority: 3 (Normal) X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook, Build 10.0.2616 Importance: Normal Date: Fri, 23 Aug 2002 03:13:31 -0400 (EDT) Content-Transfer-Encoding: 8bit Help wanted. We are a 14 year old fortune 500 company, that is growing at a tremendous rate. We are looking for individuals who want to work from home. This is an opportunity to make an excellent income. No experience is required. We will train you. So if you are looking to be employed from home with a career that has vast opportunities, then go: http://www.basetel.com/wealthnow We are looking for energetic and self motivated people. If that is you than click on the link and fill out the form, and one of our employement specialist will contact you. To be removed from our link simple go to: http://www.basetel.com/remove.html 1349lmrd5-948HyhJ3622xXiM0-290VZdq6044fFvN0-799hUsU07l50 First, we will load the data. Our dataset has a bit over 9000 emails, with about 25% of them being spam. We will use 50% of them as a training set, 25% of them as a validation set and 25% of them as a test set.
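To make the two Naive Bayes feature models concrete, here is a minimal sketch (not part of the graded code; the toy emails are made up) contrasting the raw term counts used by the multinomial model with the presence/absence features that `BernoulliNB` effectively works with. It reuses the same `CountVectorizer` class and `get_feature_names()` call as the cells below; newer scikit-learn versions rename the latter to `get_feature_names_out()`.

```python
from sklearn.feature_extraction.text import CountVectorizer

toy_emails = ["win cash now cash cash",        # spam-like
              "meeting agenda for monday",     # ham-like
              "win a free meeting"]
bow = CountVectorizer()
counts = bow.fit_transform(toy_emails)         # multinomial features: term frequencies
binary = (counts > 0).astype(int)              # Bernoulli features: term occurs or not
print(bow.get_feature_names())
print(counts.toarray())                        # "cash" counts as 3 in the first email
print(binary.toarray())                        # the same entry is just 1
```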
###Code
# Get list of emails
spamfiles=glob.glob('./Data/Spam/*')
hamfiles=glob.glob('./Data/Ham/*')
# First, we will split the files into the training, validation and test sets.
np.random.seed(seed=222017) # seed the RNG for repeatability
fnames=np.asarray(spamfiles+hamfiles)
nfiles=fnames.size
labels=np.ones(nfiles)
labels[len(spamfiles):]=-1
# Randomly permute the files we have
idx=np.random.permutation(nfiles)
fnames=fnames[idx]
labels=labels[idx]
#Split the file names into which set they belong to
tname=fnames[:int(nfiles/2)]
trainlabels=labels[:int(nfiles/2)]
vname=fnames[int(nfiles/2):int(nfiles*3/4)]
vallabels=labels[int(nfiles/2):int(nfiles*3/4)]
tename=fnames[int(3/4*nfiles):]
testlabels=labels[int(3/4*nfiles):]
from sklearn.feature_extraction.text import CountVectorizer
# Get our Bag of Words Features from the data
bow = CountVectorizer(input='filename',encoding='iso-8859-1',binary=False)
traindata=bow.fit_transform(tname)
valdata=bow.transform(vname)
testdata=bow.transform(tename)
###Output
_____no_output_____
###Markdown
The $100$ most and least common terms in the vocabulary are:
###Code
counts=np.reshape(np.asarray(np.argsort(traindata.sum(axis=0))),-1)
vocab=np.reshape(np.asarray(bow.get_feature_names()),-1)
print ("100 most common terms: " , ','.join(str(s) for s in vocab[counts[-100:]]), "\n")
print ("100 least common terms: " , ','.join(str(s) for s in vocab[counts[:100]]))
###Output
100 most common terms: slashnull,dogma,ist,thu,not,lists,cnet,mail,wed,as,html,have,click,jmason,exmh,00,are,align,freshrpms,or,mailman,date,text,mon,message,12,postfix,type,arial,users,bgcolor,ie,rpm,linux,version,22,be,taint,your,mailto,sourceforge,admin,content,20,color,table,jm,on,aug,border,127,example,face,href,this,nbsp,gif,09,subject,10,img,src,sep,it,that,0100,spamassassin,height,esmtp,is,size,xent,fork,you,tr,www,in,list,11,br,width,received,localhost,id,of,and,org,by,with,net,for,td,http,2002,font,from,3d,to,the,com
100 least common terms: g6mn17405760,e17titx,e17tvdy,e17ueb2,e17vjs8,e17vjsf,e17w5r4,e17wchv,e17wcmr,s4tkh2qxhrdntbervcuydvpgt4frugzlf3xwvohcrdtxohcfpaziiaed0ne9lw5,e17wosd,e17wosk,e17wssb,e17titf,e17wsyl,e17xbmd,e17xd4y,e17xlhj,e17yawz,s4lyze220qd,e17yozl,e17ysm1,e17ysna,e17ysox,e17ywux,e17z5re,e17z65d,e17wved,e17tfo0,e17texc,e17stjj,e17kazn,e17kb3f,e17kb3l,e17kba2,e17kcfg,e17kkxb,e17kxx7,e17kxxd,e17lk0h,e17lzkx,e17m2xi,e17mbzo,e17mpr7,e17n4br,e17n8od,e17nmuf,e17oai5,e17owlg,e17owlz,e17pfia,e17pfih,e17r7cf,e17rqza,e17rqzi,e17s52j,e17s6q9,e17sd3a,e17zimu,e17zl6i,e18bs5u,e18ec44161,e1n_n,e1pyognhf88zoewompdrqazaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa,s42bvq,s3zy0uqn9cxgumxzswr1e,e1s_jim_mac_gearailt,e1t,e1xwdo3b1k3wvr1u6cyugmvhm1nnyssndv2knuhw4g,s3wul4rjqofkdbzdhdtzxxnb005aaaaaa,e208716f77,e208e2940b3,e20c8406ff,s3w3ibekx4my0f8afuy,s3ulb6cl,e2178f6d01a70dfbdf9c84c4dcaf58dc,e22,e22432940aa,e224536,e226e294098,e22ab2d42c,e23,e23917,e23a916f1e,s3qjh,e240,e240merc,e241b6184464107168656739bf96c6b9,e242f2940ef,e1l_,e17k4ao,e1l1o9q,e1irt,e18gf17,e18hpmg,e18ifxm,e193416fea,e1amfeffcsliuttecieokbirfye5ds7mqt6dpbmltqjmwz5kzz5qvkvkvknb0i8hihpnwqro1z3a,e1b2916f03,e1bf816efc
###Markdown
We will have our training data in `traindata` (with labels in `trainlabels`), validation data in `valdata` (with labels in `vallabels`) and test data in `testdata` (with labels in `testlabels`). For each of the following classifiers **(10 points each)**:- sklearn.naive_bayes.BernoulliNB (Naive Bayes Classifier with Bernoulli Model)- sklearn.naive_bayes.MultinomialNB (Naive Bayes Classifier with Multinomial Model)- sklearn.svm.LinearSVC (Linear Support Vector Machine)- sklearn.linear_model.LogisticRegression (Logistic Regression)- sklearn.neighbors.KNeighborsClassifier (as a 1-Nearest Neighbor Classifier)In *main.py*, you are required to finish the following:1. Train on the training data in `traindata` with corresponding labels `trainlabels`. Use the default parameters, unless otherwise noted.2. Report Training Error.3. Report Validation Error.4. Report the time it takes to fit the classifier (i.e. time to perform xxx.fit(X,y)).5. Report the time it takes to run the classifier on the validation data (i.e. time to perform xxx.predict(X)).You can ignore all warnings. After you finish all parts above, you can retrieve your results as follows:
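For orientation only, here is a rough sketch of what one of these wrappers might look like (the function name `bernoulli_nb_sketch` is made up; the graded versions must keep the headers provided in main.py). It assumes the return tuple is ordered the way the printing loop below indexes it: the fitted classifier, training error, validation error, fitting time, and predicting time.

```python
import time
import numpy as np
from sklearn.naive_bayes import BernoulliNB

def bernoulli_nb_sketch(traindata, trainlabels, valdata, vallabels):
    clf = BernoulliNB()
    t0 = time.time()
    clf.fit(traindata, trainlabels)                          # timed fit
    fit_time = time.time() - t0
    train_error = 1.0 - clf.score(traindata, trainlabels)    # 0/1 training error
    t0 = time.time()
    val_pred = clf.predict(valdata)                          # timed prediction on validation data
    predict_time = time.time() - t0
    val_error = np.mean(val_pred != vallabels)               # 0/1 validation error
    return clf, train_error, val_error, fit_time, predict_time
```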
###Code
%run main.py
q1 = Question1()
classifier_list = ["BernoulliNB", "MultinomialNB", "LinearSVC", "LogisticRegression", "NN"]
for name in classifier_list:
ret = eval("q1." + name + "_classifier(traindata, trainlabels, valdata, vallabels)")
print(name, "classifier:")
print("The Training Error is: %.3f" % ret[1])
print("The Validation Error is: %.3f" % ret[2])
print("The Fitting Time is: %.5f sec" % ret[3])
print("The Predicting Time is: %.5f sec" % ret[4])
print("-----------------------------------------------------------------")
###Output
BernoulliNB classifier:
The Training Error is: 0.034
The Validation Error is: 0.055
The Fitting Time is: 0.89972 sec
The Predicting Time is: 0.03590 sec
-----------------------------------------------------------------
MultinomialNB classifier:
The Training Error is: 0.019
The Validation Error is: 0.027
The Fitting Time is: 0.01693 sec
The Predicting Time is: 0.00499 sec
-----------------------------------------------------------------
LinearSVC classifier:
The Training Error is: 0.000
The Validation Error is: 0.011
The Fitting Time is: 0.53161 sec
The Predicting Time is: 0.00299 sec
-----------------------------------------------------------------
LogisticRegression classifier:
The Training Error is: 0.000
The Validation Error is: 0.008
The Fitting Time is: 1.38427 sec
The Predicting Time is: 0.00399 sec
-----------------------------------------------------------------
NN classifier:
The Training Error is: 0.000
The Validation Error is: 0.016
The Fitting Time is: 0.00702 sec
The Predicting Time is: 1.77223 sec
-----------------------------------------------------------------
###Markdown
**Extra (not evaluated):** Based on the results of this problem and knowledge of the application at hand (spam filtering), pick one of the classifiers in this problem and describe how you would use it as part of a spam filter for the University of Illinois email system. Sketch out a system design at a very high level -- how you would train the spam filter to deal with new threats, would you filter everyone's email jointly, etc. You may get some inspiration from the [girls and boys](https://gmail.googleblog.com/2007/10/how-our-spam-filter-works.html) at [Gmail](https://gmail.googleblog.com/2015/07/the-mail-you-want-not-spam-you-dont.html), the [chimps at MailChimp](http://kb.mailchimp.com/delivery/spam-filters/about-spam-filters) or other places. Write a function that calculates the confusion matrix (cf. Fig. 2.1 in the notes). You may wish to read Section 2.1.1 in the notes -- it may be helpful, but is not necessary to complete this problem. **(10 points)**Run the classifier you selected in the previous part of the problem on the test data. The following code displays the test error and the output of the function. **(10 points)**
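A minimal sketch of one way to build the confusion matrix is below (illustrative only; the graded version belongs in main.py and should follow Fig. 2.1 in the notes). The indexing convention assumed here matches how the next cell reads the result: rows are the predicted label (+1 then -1) and columns are the true label (+1 then -1).

```python
import numpy as np

def confusion_matrix_sketch(true_labels, predicted_labels):
    cm = np.zeros((2, 2))
    cm[0, 0] = np.sum((predicted_labels == 1) & (true_labels == 1))    # true positives
    cm[0, 1] = np.sum((predicted_labels == 1) & (true_labels == -1))   # false positives
    cm[1, 0] = np.sum((predicted_labels == -1) & (true_labels == 1))   # false negatives
    cm[1, 1] = np.sum((predicted_labels == -1) & (true_labels == -1))  # true negatives
    return cm
```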
###Code
_, testError, cm = q1.classify(traindata, trainlabels, testdata, testlabels)
print("The Test Error is: %3f" % testError)
print("Confusion matrix for test data:")
print ("True Positives:", cm[0,0], "False Positive:", cm[0,1])
print ("False Negative:", cm[1,0], "True Negatives:", cm[1,1])
print ("True Positive Rate : ", cm[0,0]/(cm[0,0] + cm[1,0]))
print ("False Positive Rate: ", cm[0,1]/(cm[0,1] + cm[1,1]))
###Output
The Test Error is: 0.008982
Confusion matrix for test data:
True Positives: 616.0 False Positive: 17.0
False Negative: 4.0 True Negatives: 1701.0
True Positive Rate : 0.9935483870967742
False Positive Rate: 0.00989522700814901
###Markdown
As a sanity check, you should observe that your true positive rate is above 0.95 (i.e. highly sensitive). Problem 2: Cross-Validation (45 Points) Now, we will load some data (acquired from K.P. Murphy's PMTK toolkit).
###Code
problem2_tmp= genfromtxt('Data/p2.csv', delimiter=',')
# Randomly reorder the data
np.random.seed(seed=2217) # seed the RNG for repeatability
idx=np.random.permutation(problem2_tmp.shape[0])
problem2_tmp=problem2_tmp[idx]
#The training data which you will use is called "traindata"
traindata=problem2_tmp[:200,:2]
#The training labels are in "labels"
trainlabels=problem2_tmp[:200,2]
#The test data which you will use is called "testdata" with labels "testlabels"
testdata=problem2_tmp[200:,:2]
testlabels=problem2_tmp[200:,2]
# You should not re-shuffle your data in your functions!
###Output
_____no_output_____
###Markdown
Write a function which implements $5$-fold cross-validation to estimate the error, under the 0,1-loss, of a k-Nearest Neighbors (kNN) classifier. You will be given as input:* A (N,d) numpy.ndarray of training data, *trainData* (with N divisible by 5)* A length $N$ numpy.ndarray of training labels, *trainLabels** A number $k$, for which cross-validated error estimates will be outputted for $1,\ldots,k$Your output will be a vector (represented as a numpy.ndarray) *err*, such that *err[i]* is the cross-validated error estimate using i neighbors (*err* will be of length $k+1$; the zero-th component of the vector will be meaningless). **For this problem, take your folds to be 0:N/5, N/5:2N/5, ..., 4N/5:N for cross-validation (In general, however, the folds should be randomly divided).**Use scikit-learn's sklearn.neighbors.KNeighborsClassifier to perform the training and classification for the kNN models involved. Do not use any other features of scikit-learn, such as things from sklearn.model_selection. (25 points) Write a function that *calls the above function* and returns 1) the output from the previous function, 2) the number of neighbors within $1,\ldots,30$ that minimizes the cross-validation error, and 3) the corresponding minimum error. (10 points) The following code helps you to visualize your result. It plots the cross-validation error with respect to the number of neighbors. Your best number of neighbors should be roughly at the middle of your err array.
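A minimal sketch of the fold structure is shown below (illustrative; the graded function must keep the header given in main.py). It assumes contiguous folds 0:N/5, ..., 4N/5:N as required above and uses only KNeighborsClassifier from scikit-learn.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def cv_knn_error_sketch(trainData, trainLabels, k):
    N = trainData.shape[0]
    fold = N // 5
    err = np.zeros(k + 1)                      # err[0] is meaningless by convention
    for i in range(1, k + 1):
        fold_err = []
        for f in range(5):
            val_idx = np.arange(f * fold, (f + 1) * fold)
            tr_idx = np.setdiff1d(np.arange(N), val_idx)
            clf = KNeighborsClassifier(n_neighbors=i)
            clf.fit(trainData[tr_idx], trainLabels[tr_idx])
            pred = clf.predict(trainData[val_idx])
            fold_err.append(np.mean(pred != trainLabels[val_idx]))
        err[i] = np.mean(fold_err)             # average 0/1 error over the 5 folds
    return err
```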
###Code
%run main.py
q2 = Question2()
k = 30
err, k_min, err_min = q2.minimizer_K(traindata,trainlabels,k)
print(err)
plot(np.arange(1,k+1),err[1:])
xlabel('Number of Neighbors')
ylabel('Cross-validation error')
axis('tight')
print("The best number of neighbors is:", k_min)
print("The corresponding error is:", err_min)
###Output
[1. 0.26 0.245 0.24 0.225 0.215 0.2 0.19 0.2 0.185 0.19 0.185
0.19 0.18 0.175 0.18 0.19 0.195 0.19 0.19 0.195 0.19 0.195 0.21
0.2 0.205 0.2 0.21 0.205 0.2 0.195]
The best number of neighbors is: 14
The corresponding error is: 0.175
###Markdown
Train a kNN model on the whole training data using the number of neighbors you found in the previous part of the question, and apply it to the test data. **(10 points)**
###Code
_, testError = q2.classify(traindata, trainlabels, testdata, testlabels)
print("The test error is:", testError)
###Output
The test error is: 0.214
###Markdown
As a sanity check, the test error should be around 0.2. Problem 3: Detecting Cancer with SVMs and Logistic Regression (35 points) We consider the [Breast Cancer Wisconsin Data Set](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29) from W.N. Street, W.H. Wolberg and O.L. Mangasarian. Nuclear feature extraction for breast tumor diagnosis. IS&T/SPIE 1993 International Symposium on Electronic Imaging: Science and Technology, volume 1905, pages 861-870, San Jose, CA, 1993. The authors diagnosed people by characterizing 3 cell nuclei per person extracted from the breast (pictures [here](http://web.archive.org/web/19970225174429/http://www.cs.wisc.edu/~street/images/)), each with 10 features (for a 30-dimensional feature space):1. radius (mean of distances from center to points on the perimeter) 2. texture (standard deviation of gray-scale values) 3. perimeter 4. area 5. smoothness (local variation in radius lengths) 6. compactness (perimeter^2 / area - 1.0) 7. concavity (severity of concave portions of the contour) 8. concave points (number of concave portions of the contour) 9. symmetry 10. fractal dimension ("coastline approximation" - 1)and classified the sample into one of two classes: Malignant ($+1$) or Benign ($-1$). You can read the original paper for more on what these features mean.You will be attempting to classify if a sample is Malignant or Benign using Support Vector Machines, as well as Logistic Regression. Since we don't have all that much data, we will use 10-fold cross-validation to tune our parameters for our SVMs and Logistic Regression. We use 90% of the data for training, and 10% for testing.You will be experimenting with SVMs using Gaussian RBF kernels (through sklearn.svm.SVC), linear SVMs (through sklearn.svm.LinearSVC), and Logistic Regression (sklearn.linear_model.LogisticRegression). Your model selection will be done with cross-validation via sklearn.model_selection's *cross_val_score*. This returns the accuracy for each fold, i.e. the fraction of samples classified correctly. Thus, the cross-validation error is simply 1-mean(cross_val_score). First, we load the data. We will use scikit-learn's train test split function to split the data. The data is scaled for reasons outlined here. In short, it helps avoid some numerical issues and avoids some problems with certain features which are typically large affecting the SVM optimization problem unfairly compared to features which are typically small.
###Code
cancer = genfromtxt('Data/wdbc.csv', delimiter=',')
np.random.seed(seed=282017) # seed the RNG for repeatability
idx=np.random.permutation(cancer.shape[0])
cancer=cancer[idx]
cancer_features=cancer[:,1:]
cancer_labels=cancer[:,0]
#The training data is in data_train with labels label_train.
# The test data is in data_test with labels label_test.
data_train, data_test, label_train, label_test = train_test_split(cancer_features,cancer_labels,test_size=0.1,random_state=292017)
# Rescale the training data and scale the test data correspondingly
scaler=MinMaxScaler(feature_range=(-1,1))
data_train=scaler.fit_transform(data_train) #Note that the scaling is determined solely via the training data!
data_test=scaler.transform(data_test)
%run main.py
q3 = Question3()
# The following lines ignore the warnings.
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
The soft margin linear SVM is tuned based on a parameter $C$, which controls how much the training points are allowed to violate the margin (this isn't the same $C$ as in the notes, though it serves the same function; see the [scikit-learn documentation](http://scikit-learn.org/stable/modules/svm.htmlsvc) for details). Use cross-validation to select a value of $C$ for a linear SVM (sklearn.svm.LinearSVC) by varying $C$ from $2^{-5},2^{-4},\ldots,2^{15}$. (10 points)
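One possible way to run this search is sketched below (illustrative; the graded code goes in main.py). It assumes the `data_train`/`label_train` arrays from the cell above and uses 10-fold `cross_val_score`, with the cross-validation error taken as one minus the mean accuracy.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

best_C, best_err = None, np.inf
for e in range(-5, 16):
    C = 2.0 ** e
    scores = cross_val_score(LinearSVC(C=C), data_train, label_train, cv=10)
    err = 1.0 - scores.mean()                  # cross-validation error
    if err < best_err:
        best_C, best_err = C, err
print(best_C, best_err)
```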
###Code
C_min, min_err = q3.LinearSVC_crossValidation(data_train, label_train)
print("Soft Margin Linear SVM:")
print("The best C is:", C_min)
print("The corresponding error is:", min_err)
###Output
Soft Margin Linear SVM:
The best C is: 0.125
The corresponding error is: 0.02714932126696823
###Markdown
You will now experiment with using kernels in an SVM, particularly the Gaussian RBF kernel (in sklearn.svm.SVC). The SVM has two parameters to tune in this case: $C$ (as before), and $\gamma$, which is a parameter in the RBF. Use cross-validation to select parameters $(C,\gamma)$ by varying $(C,\gamma)$ over $C=2^{-5},2^{-4},\ldots,2^{15}$ and $\gamma=2^{-15},\ldots,2^{3}$ [So, you will try about 400 parameter choices]. This procedure is known as a **grid search**. Use *GridSearchCV* (see doc [here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html)) to perform a grid search (and you can use *clf.best\_params_* to get the best parameters). Out of these, which $(C,\gamma)$ parameters would you choose? What is the corresponding cross-validation error?We are using a fairly coarse grid for this problem, but in practice one could use a finer grid once the rough range of good parameters is known (rather than starting with a fine grid, which would waste a lot of time). (10 points)
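A minimal `GridSearchCV` sketch is below (illustrative; the graded code goes in main.py). It assumes the `data_train`/`label_train` arrays from above, 10-fold cross-validation, and the grid of exponents stated in the question.

```python
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

param_grid = {"C": [2.0 ** e for e in range(-5, 16)],
              "gamma": [2.0 ** e for e in range(-15, 4)]}
grid = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)
grid.fit(data_train, label_train)
print(grid.best_params_)            # best (C, gamma)
print(1.0 - grid.best_score_)       # corresponding cross-validation error
```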
###Code
C_min, gamma_min, min_err = q3.SVC_crossValidation(data_train, label_train)
print("SVM with RBF kernel:")
print("The best C is:", C_min)
print("The best gamma is:", gamma_min)
print("The corresponding error is:", min_err)
###Output
SVM with RBF kernel:
The best C is: 8
The best gamma is: 0.125
The corresponding error is: 0.01953125
###Markdown
As stated in a footnote in the notes, Logistic Regression normally has a regularizer parameter to promote stability. Scikit-learn calls this parameter $C$ (which is like $\lambda^{-1}$ in the notes); see the [LibLinear](http://www.csie.ntu.edu.tw/~cjlin/papers/liblinear.pdf) documentation for the exact meaning of $C$. Use cross-validation to select a value of $C$ for logistic regression (sklearn.linear_model.LogisticRegression) by varying $C$ from $2^{-14},\ldots,2^{14}$. You may optionally make use of sklearn.model_selection.GridSearchCV, or write the search by hand. **(5 points)**
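If you choose the grid-search route, a sketch analogous to the one above might look like this (illustrative only; assumes `data_train`/`label_train` from the earlier cells):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

param_grid = {"C": [2.0 ** e for e in range(-14, 15)]}
grid = GridSearchCV(LogisticRegression(), param_grid, cv=10)
grid.fit(data_train, label_train)
print(grid.best_params_["C"], 1.0 - grid.best_score_)
```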
###Code
C_min, min_err = q3.LogisticRegression_crossValidation(data_train, label_train)
print("Logistic Regression:")
print("The best C is:", C_min)
print("The corresponding error is:", min_err)
###Output
Logistic Regression:
The best C is: 2
The corresponding error is: 0.02734375
###Markdown
Train the classifier selected above on the whole training set. Then, estimate the prediction error using the test set. What is your estimate of the prediction error? How does it compare to the cross-validation error? (10 points)
###Code
_, error = q3.classify(data_train, label_train, data_test, label_test)
print("The prediction error is:", error)
###Output
The prediction error is: 0.017543859649122806
|
code/post_process_data.ipynb | ###Markdown
Process the raw mooring dataContents:* Raw data reprocessing.* Interpolated data processing.* ADCP processing.* VMP processing.Import the needed libraries.
###Code
import datetime
import glob
import os
import gsw
import numpy as np
import numpy.ma as ma
import scipy.integrate as igr
import scipy.interpolate as itpl
import scipy.io as io
import scipy.signal as sig
import seawater
import xarray as xr
from matplotlib import path
import munch
import load_data
import moorings as moo
import utils
from oceans.sw_extras import gamma_GP_from_SP_pt
# Data directory
data_in = os.path.expanduser("../data")
data_out = data_in
def esum(ea, eb):
    # Combine independent uncertainties in quadrature (error of a sum or difference).
    return np.sqrt(ea ** 2 + eb ** 2)
def emult(a, b, ea, eb):
    # Propagate independent uncertainties through the product a*b via relative errors.
    return np.abs(a * b) * np.sqrt((ea / a) ** 2 + (eb / b) ** 2)
###Output
_____no_output_____
###Markdown
Process raw data into a more convenient formatParameters for raw processing.
###Code
# Corrected levels.
# heights = [-540., -1250., -2100., -3500.]
# Filter cut off (hours)
tc_hrs = 40.0
# Start of time series (matlab datetime)
t_start = 734494.0
# Length of time series
max_len = N_data = 42048
# Data file
raw_data_file = "moorings.mat"
# Index where NaNs start in u and v data from SW mooring
sw_vel_nans = 14027
# Sampling period (minutes)
dt_min = 15.0
# Window length for wave stress quantities and mesoscale strain quantities.
nperseg = 2 ** 9
# Spectra parameters
window = "hanning"
detrend = "constant"
# Extrapolation/interpolation limit above which data will be removed.
dzlim = 100.0
# Integration of spectra parameters. These multiple N and f respectively to set
# the integration limits.
fhi = 1.0
flo = 1.0
flov = 1.0 # When integrating spectra involved in vertical fluxes, get rid of
# the near inertial portion.
# When bandpass filtering windowed data use these params multiplied by f and N
filtlo = 0.9 # times f
filthi = 1.1 # times N
# Interpolation distance that raises flag (m)
zimax = 100.0
dt_sec = dt_min * 60.0 # Sample period in seconds.
dt_day = dt_sec / 86400.0 # Sample period in days.
N_per_day = int(1.0 / dt_day) # Samples per day.
print("RAW DATA")
###############################################################################
# Load w data for cc mooring and chop from text files. I checked and all the
# data has the same start date and the same length
print("Loading vertical velocity data from text files.")
nortek_files = glob.glob(os.path.join(data_in, "cc_1_*.txt"))
depth = []
for file in nortek_files:
with open(file, "r") as f:
content = f.readlines()
depth.append(int(content[3].split("=")[1].split()[0]))
idxs = np.argsort(depth)
w = np.empty((42573, 12))
datenum = np.empty((42573, 12))
for i in idxs:
YY, MM, DD, hh, W = np.genfromtxt(
nortek_files[i], skip_header=12, usecols=(0, 1, 2, 3, 8), unpack=True
)
YY = YY.astype(int)
MM = MM.astype(int)
DD = DD.astype(int)
mm = (60 * (hh % 1)).astype(int)
hh = np.floor(hh).astype(int)
w[:, i] = W / 100
dates = []
for j in range(len(YY)):
dates.append(datetime.datetime(YY[j], MM[j], DD[j], hh[j], mm[j]))
dates = np.asarray(dates)
datenum[:, i] = utils.datetime_to_datenum(dates)
idx_start = np.searchsorted(datenum[:, 0], t_start)
w = w[idx_start : idx_start + max_len]
# Start prepping raw data from the mat file.
print("Loading raw data file.")
data_path = os.path.join(data_in, raw_data_file)
ds = utils.loadmat(data_path)
cc = ds.pop("c")
nw = ds.pop("nw")
ne = ds.pop("ne")
se = ds.pop("se")
sw = ds.pop("sw")
cc["id"] = "cc"
nw["id"] = "nw"
ne["id"] = "ne"
se["id"] = "se"
sw["id"] = "sw"
moorings = [cc, nw, ne, se, sw]
# Useful information
dt_min = 15.0 # Sample period in minutes.
dt_sec = dt_min * 60.0 # Sample period in seconds.
dt_day = dt_sec / 86400.0 # Sample period in days.
print("Chopping time series.")
for m in moorings:
m["idx_start"] = np.searchsorted(m["Dates"], t_start)
for m in moorings:
m["N_data"] = max_len
m["idx_end"] = m["idx_start"] + max_len
# Chop data to start and end dates.
varl = ["Dates", "Temp", "Sal", "u", "v", "Pres"]
for m in moorings:
for var in varl:
m[var] = m[var][m["idx_start"] : m["idx_end"], ...]
print("Renaming variables.")
print("Interpolating negative pressures.")
for m in moorings:
__, N_levels = m["Pres"].shape
m["N_levels"] = N_levels
# Tile time and pressure
m["t"] = np.tile(m.pop("Dates")[:, np.newaxis], (1, N_levels))
# Fix negative pressures by interpolating nearby data.
fix = m["Pres"] < 0.0
if fix.any():
levs = np.argwhere(np.any(fix, axis=0))[0]
for lev in levs:
x = m["t"][fix[:, lev], lev]
xp = m["t"][~fix[:, lev], lev]
fp = m["Pres"][~fix[:, lev], lev]
m["Pres"][fix[:, lev], lev] = np.interp(x, xp, fp)
# Rename variables
m["P"] = m.pop("Pres")
m["u"] = m["u"] / 100.0
m["v"] = m["v"] / 100.0
m["spd"] = np.sqrt(m["u"] ** 2 + m["v"] ** 2)
m["angle"] = np.angle(m["u"] + 1j * m["v"])
m["Sal"][(m["Sal"] < 33.5) | (m["Sal"] > 34.9)] = np.nan
m["S"] = m.pop("Sal")
m["Temp"][m["Temp"] < -2.0] = np.nan
m["T"] = m.pop("Temp")
# Dimensional quantities.
m["f"] = gsw.f(m["lat"])
m["ll"] = np.array([m["lon"], m["lat"]])
m["z"] = gsw.z_from_p(m["P"], m["lat"])
# Estimate thermodynamic quantities.
m["SA"] = gsw.SA_from_SP(m["S"], m["P"], m["lon"], m["lat"])
m["CT"] = gsw.CT_from_t(m["SA"], m["T"], m["P"])
# specvol_anom = gsw.specvol_anom(m['SA'], m['CT'], m['P'])
# m['sva'] = specvol_anom
cc["wr"] = w
print("Calculating thermodynamics.")
print("Excluding bad data using T-S funnel.")
# Chuck out data outside of TS funnel sensible range.
funnel = np.genfromtxt("funnel.txt")
for m in moorings:
S = m["SA"].flatten()
T = m["CT"].flatten()
p = path.Path(funnel)
in_funnel = p.contains_points(np.vstack((S, T)).T)
fix = np.reshape(~in_funnel, m["SA"].shape)
m["in_funnel"] = ~fix
varl = ["S"]
if fix.any():
levs = np.squeeze(np.argwhere(np.any(fix, axis=0)))
for lev in levs:
x = m["t"][fix[:, lev], lev]
xp = m["t"][~fix[:, lev], lev]
for var in varl:
fp = m[var][~fix[:, lev], lev]
m[var][fix[:, lev], lev] = np.interp(x, xp, fp)
# Re-estimate thermodynamic quantities.
m["SA"] = gsw.SA_from_SP(m["S"], m["P"], m["lon"], m["lat"])
m["CT"] = gsw.CT_from_t(m["SA"], m["T"], m["P"])
print("Calculating neutral density.")
# Estimate the neutral density
for m in moorings:
# Compute potential temperature using the 1983 UNESCO EOS.
m["PT0"] = seawater.ptmp(m["S"], m["T"], m["P"])
# Flatten variables for analysis.
lons = m["lon"] * np.ones_like(m["P"])
lats = m["lat"] * np.ones_like(m["P"])
S_ = m["S"].flatten()
T_ = m["PT0"].flatten()
P_ = m["P"].flatten()
LO_ = lons.flatten()
LA_ = lats.flatten()
gamman = gamma_GP_from_SP_pt(S_, T_, P_, LO_, LA_)
m["gamman"] = np.reshape(gamman, m["P"].shape) + 1000.0
print("Calculating slice gradients at C.")
# Want gradient of density/vel to be local, no large central differences.
slices = [slice(0, 4), slice(4, 6), slice(6, 10), slice(10, 12)]
cc["dgdz"] = np.empty((cc["N_data"], cc["N_levels"]))
cc["dTdz"] = np.empty((cc["N_data"], cc["N_levels"]))
cc["dudz"] = np.empty((cc["N_data"], cc["N_levels"]))
cc["dvdz"] = np.empty((cc["N_data"], cc["N_levels"]))
for sl in slices:
z = cc["z"][:, sl]
g = cc["gamman"][:, sl]
T = cc["T"][:, sl]
u = cc["u"][:, sl]
v = cc["v"][:, sl]
cc["dgdz"][:, sl] = np.gradient(g, axis=1) / np.gradient(z, axis=1)
cc["dTdz"][:, sl] = np.gradient(T, axis=1) / np.gradient(z, axis=1)
cc["dudz"][:, sl] = np.gradient(u, axis=1) / np.gradient(z, axis=1)
cc["dvdz"][:, sl] = np.gradient(v, axis=1) / np.gradient(z, axis=1)
print("Filtering data.")
# Low pass filter data.
tc = tc_hrs * 60.0 * 60.0
fc = 1.0 / tc # Cut off frequency.
normal_cutoff = fc * dt_sec * 2.0 # Nyquist frequency is half 1/dt.
b, a = sig.butter(4, normal_cutoff, btype="lowpass")
varl = [
"z",
"P",
"S",
"T",
"u",
"v",
"wr",
"SA",
"CT",
"gamman",
"dgdz",
"dTdz",
"dudz",
"dvdz",
] # sva
for m in moorings:
for var in varl:
try:
data = m[var].copy()
except KeyError:
continue
m[var + "_m"] = np.nanmean(data, axis=0)
# For the purpose of filtering set fill with 0 rather than nan (SW)
nans = np.isnan(data)
if nans.any():
data[nans] = 0.0
datalo = sig.filtfilt(b, a, data, axis=0)
# Then put nans back...
if nans.any():
datalo[nans] = np.nan
namelo = var + "_lo"
m[namelo] = datalo
namehi = var + "_hi"
m[namehi] = m[var] - m[namelo]
m["spd_lo"] = np.sqrt(m["u_lo"] ** 2 + m["v_lo"] ** 2)
m["angle_lo"] = ma.angle(m["u_lo"] + 1j * m["v_lo"])
m["spd_hi"] = np.sqrt(m["u_hi"] ** 2 + m["v_hi"] ** 2)
m["angle_hi"] = ma.angle(m["u_hi"] + 1j * m["v_hi"])
###Output
_____no_output_____
###Markdown
Save the raw data.
###Code
io.savemat(os.path.join(data_out, "C_raw.mat"), cc)
io.savemat(os.path.join(data_out, "NW_raw.mat"), nw)
io.savemat(os.path.join(data_out, "NE_raw.mat"), ne)
io.savemat(os.path.join(data_out, "SE_raw.mat"), se)
io.savemat(os.path.join(data_out, "SW_raw.mat"), sw)
###Output
_____no_output_____
###Markdown
Create virtual mooring 'raw'.
###Code
print("VIRTUAL MOORING")
print("Determine maximum knockdown as a function of z.")
zms = np.hstack([m["z"].max(axis=0) for m in moorings if "se" not in m["id"]])
Dzs = np.hstack(
[m["z"].min(axis=0) - m["z"].max(axis=0) for m in moorings if "se" not in m["id"]]
)
zmax_pfit = np.polyfit(zms, Dzs, 2) # Second order polynomial for max knockdown
np.save(
os.path.join(data_out, "zmax_pfit"), np.polyfit(zms, Dzs, 2), allow_pickle=False
)
# Define the knockdown model:
def zmodel(u, zmax, zmax_pfit):
    # Instrument depth as a function of normalized speed u: the rest depth zmax plus a
    # knockdown proportional to u**3, with amplitude from the quadratic fit zmax_pfit.
    return zmax + np.polyval(zmax_pfit, zmax) * u ** 3
print("Load model data.")
mluv = xr.load_dataset("../data/mooring_locations_uv1.nc")
mluv = mluv.isel(
t=slice(0, np.argwhere(mluv.u[:, 0, 0].data == 0)[0][0])
) # Get rid of end zeros...
mluv = mluv.assign_coords(lon=mluv.lon)
mluv = mluv.assign_coords(id=["cc", "nw", "ne", "se", "sw"])
mluv["spd"] = (mluv.u ** 2 + mluv.v ** 2) ** 0.5
print("Create virtual mooring 'raw' dataset.")
savedict = {
"cc": {"id": "cc"},
"nw": {"id": "nw"},
"ne": {"id": "ne"},
"se": {"id": "se"},
"sw": {"id": "sw"},
}
mids = ["cc", "nw", "ne", "se", "sw"]
def nearidx(a, v):
return np.argmin(np.abs(np.asarray(a) - v))
for idx, mid in enumerate(mids):
savedict[mid]["lon"] = mluv.lon[idx].data
savedict[mid]["lat"] = mluv.lat[idx].data
izs = []
for i in range(moorings[idx]["N_levels"]):
izs.append(nearidx(mluv.z, moorings[idx]["z"][:, i].max()))
spdm = mluv.spd.isel(z=izs, index=idx).mean(dim="z")
spdn = spdm / spdm.max()
zmax = mluv.z[izs]
zk = zmodel(spdn.data[:, np.newaxis], zmax.data[np.newaxis, :], zmax_pfit)
savedict[mid]["z"] = zk
savedict[mid]["t"] = np.tile(
mluv.t.data[:, np.newaxis], (1, moorings[idx]["N_levels"])
)
fu = itpl.RectBivariateSpline(mluv.t.data, -mluv.z.data, mluv.u[..., idx].data)
fv = itpl.RectBivariateSpline(mluv.t.data, -mluv.z.data, mluv.v[..., idx].data)
uk = fu(mluv.t.data[:, np.newaxis], -zk, grid=False)
vk = fv(mluv.t.data[:, np.newaxis], -zk, grid=False)
savedict[mid]["u"] = uk
savedict[mid]["v"] = vk
io.savemat("../data/virtual_mooring_raw.mat", savedict)
###Output
_____no_output_____
###Markdown
Create virtual mooring 'interpolated'.
###Code
# Corrected levels.
# heights = [-540., -1250., -2100., -3500.]
# Filter cut off (hours)
tc_hrs = 40.0
# Start of time series (matlab datetime)
# t_start = 734494.0
# Length of time series
# max_len = N_data = 42048
# Sampling period (minutes)
dt_min = 60.0
dt_sec = dt_min * 60.0 # Sample period in seconds.
dt_day = dt_sec / 86400.0 # Sample period in days.
N_per_day = int(1.0 / dt_day) # Samples per day.
# Window length for wave stress quantities and mesoscale strain quantities.
nperseg = 2 ** 7
# Spectra parameters
window = "hanning"
detrend = "constant"
# Extrapolation/interpolation limit above which data will be removed.
dzlim = 100.0
# Integration of spectra parameters. These multiple N and f respectively to set
# the integration limits.
fhi = 1.0
flo = 1.0
flov = 1.0 # When integrating spectra involved in vertical fluxes, get rid of
# the near inertial portion.
moorings = utils.loadmat("../data/virtual_mooring_raw.mat")
cc = moorings.pop("cc")
nw = moorings.pop("nw")
ne = moorings.pop("ne")
se = moorings.pop("se")
sw = moorings.pop("sw")
moorings = [cc, nw, ne, se, sw]
N_data = cc["t"].shape[0]
###Output
_____no_output_____
###Markdown
Polynomial fits first.
###Code
print("**Generating corrected data**")
# Generate corrected moorings
z = np.concatenate([m["z"].flatten() for m in moorings])
u = np.concatenate([m["u"].flatten() for m in moorings])
v = np.concatenate([m["v"].flatten() for m in moorings])
print("Calculating polynomial coefficients.")
pzu = np.polyfit(z, u, 2)
pzv = np.polyfit(z, v, 2)
# Additional height in m to add to interpolation height.
hoffset = [-25.0, 50.0, -50.0, 100.0]
pi2 = np.pi * 2.0
nfft = nperseg
levis = [(0, 1, 2, 3), (4, 5), (6, 7, 8, 9), (10, 11)]
Nclevels = len(levis)
spec_kwargs = {
"fs": 1.0 / dt_sec,
"window": window,
"nperseg": nperseg,
"nfft": nfft,
"detrend": detrend,
"axis": 0,
}
idx1 = np.arange(nperseg, N_data, nperseg // 2) # Window end index
idx0 = idx1 - nperseg # Window start index
N_windows = len(idx0)
# Initialise the place holder dictionaries.
c12w = {"N_levels": 12} # Dictionary for raw, windowed data from central mooring
c4w = {"N_levels": Nclevels} # Dictionary for processed, windowed data
c4 = {"N_levels": Nclevels} # Dictionary for processed data
# Dictionaries for raw, windowed data from outer moorings
nw5w, ne5w, se5w, sw5w = {"id": "nw"}, {"id": "ne"}, {"id": "se"}, {"id": "sw"}
moorings5w = [nw5w, ne5w, se5w, sw5w]
# Dictionaries for processed, windowed data from outer moorings
nw4w, ne4w, se4w, sw4w = {"id": "nw"}, {"id": "ne"}, {"id": "se"}, {"id": "sw"}
moorings4w = [nw4w, ne4w, se4w, sw4w]
# Initialised the arrays of windowed data
varr = ["t", "z", "u", "v"]
for var in varr:
c12w[var] = np.zeros((nperseg, N_windows, 12))
var4 = [
"t",
"z",
"u",
"v",
"dudx",
"dvdx",
"dudy",
"dvdy",
"dudz",
"dvdz",
"nstrain",
"sstrain",
"vort",
"div",
]
for var in var4:
c4w[var] = np.zeros((nperseg, N_windows, Nclevels))
for var in var4:
c4[var] = np.zeros((N_windows, Nclevels))
# Initialised the arrays of windowed data for outer mooring
varro = ["z", "u", "v"]
for var in varro:
for m5w in moorings5w:
m5w[var] = np.zeros((nperseg, N_windows, 5))
var4o = ["z", "u", "v"]
for var in var4o:
for m4w in moorings4w:
m4w[var] = np.zeros((nperseg, N_windows, Nclevels))
# for var in var4o:
# for m4 in moorings4:
# m4[var] = np.zeros((N_windows, 4))
# Window the raw data.
for i in range(N_windows):
idx = idx0[i]
for var in varr:
c12w[var][:, i, :] = cc[var][idx : idx + nperseg, :]
for i in range(N_windows):
idx = idx0[i]
for var in varro:
for m5w, m in zip(moorings5w, moorings[1:]):
m5w[var][:, i, :] = m[var][idx : idx + nperseg, :]
print("Interpolating properties.")
# Do the interpolation
for i in range(Nclevels):
# THIS hoffset is important!!!
c4["z"][:, i] = np.mean(c12w["z"][..., levis[i]], axis=(0, -1)) + hoffset[i]
for j in range(N_windows):
zr = c12w["z"][:, j, levis[i]]
ur = c12w["u"][:, j, levis[i]]
vr = c12w["v"][:, j, levis[i]]
zi = c4["z"][j, i]
c4w["z"][:, j, i] = np.mean(zr, axis=-1)
c4w["t"][:, j, i] = c12w["t"][:, j, 0]
c4w["u"][:, j, i] = moo.interp_quantity(zr, ur, zi, pzu)
c4w["v"][:, j, i] = moo.interp_quantity(zr, vr, zi, pzv)
dudzr = np.gradient(ur, axis=-1) / np.gradient(zr, axis=-1)
dvdzr = np.gradient(vr, axis=-1) / np.gradient(zr, axis=-1)
# Instead of mean, could moo.interp1d
c4w["dudz"][:, j, i] = np.mean(dudzr, axis=-1)
c4w["dvdz"][:, j, i] = np.mean(dvdzr, axis=-1)
for m5w, m4w in zip(moorings5w, moorings4w):
zr = m5w["z"][:, j, :]
ur = m5w["u"][:, j, :]
vr = m5w["v"][:, j, :]
m4w["z"][:, j, i] = np.full((nperseg), zi)
m4w["u"][:, j, i] = moo.interp_quantity(zr, ur, zi, pzu)
m4w["v"][:, j, i] = moo.interp_quantity(zr, vr, zi, pzv)
print("Filtering windowed data.")
fcorcpd = np.abs(gsw.f(cc["lat"])) * 86400 / pi2
varl = ["u", "v"]
for var in varl:
c4w[var + "_lo"] = utils.butter_filter(
c4w[var], 24 / tc_hrs, fs=N_per_day, btype="low", axis=0
)
c4w[var + "_hi"] = c4w[var] - c4w[var + "_lo"]
varl = ["u", "v"]
for var in varl:
for m4w in moorings4w:
m4w[var + "_lo"] = utils.butter_filter(
m4w[var], 24 / tc_hrs, fs=N_per_day, btype="low", axis=0
)
m4w[var + "_hi"] = m4w[var] - m4w[var + "_lo"]
c4w["zi"] = np.ones_like(c4w["z"]) * c4["z"]
print("Calculating horizontal gradients.")
# Calculate horizontal gradients
for j in range(N_windows):
ll = np.stack(
([m["lon"] for m in moorings[1:]], [m["lat"] for m in moorings[1:]]), axis=1
)
uv = np.stack(
(
[m4w["u_lo"][:, j, :] for m4w in moorings4w],
[m4w["v_lo"][:, j, :] for m4w in moorings4w],
),
axis=1,
)
dudx, dudy, dvdx, dvdy, vort, div = moo.div_vort_4D(ll[:, 0], ll[:, 1], uv)
nstrain = dudx - dvdy
sstrain = dvdx + dudy
c4w["dudx"][:, j, :] = dudx
c4w["dudy"][:, j, :] = dudy
c4w["dvdx"][:, j, :] = dvdx
c4w["dvdy"][:, j, :] = dvdy
c4w["nstrain"][:, j, :] = nstrain
c4w["sstrain"][:, j, :] = sstrain
c4w["vort"][:, j, :] = vort
c4w["div"][:, j, :] = div
for var in var4:
if var == "z": # Keep z as modified by hoffset.
continue
c4[var] = np.mean(c4w[var], axis=0)
freq, c4w["Puu"] = sig.welch(c4w["u_hi"], **spec_kwargs)
_, c4w["Pvv"] = sig.welch(c4w["v_hi"], **spec_kwargs)
_, c4w["Cuv"] = sig.csd(c4w["u_hi"], c4w["v_hi"], **spec_kwargs)
c4w["freq"] = freq.copy()
# Get rid of annoying tiny values.
svarl = ["Puu", "Pvv", "Cuv"]
for var in svarl:
c4w[var][0, ...] = 0.0
c4[var + "_int"] = np.full((N_windows, 4), np.nan)
# Horizontal azimuth according to Jing 2018
c4w["theta"] = np.arctan2(2.0 * c4w["Cuv"].real, (c4w["Puu"] - c4w["Pvv"])) / 2
# Integration #############################################################
print("Integrating power spectra.")
for var in svarl:
c4w[var + "_cint"] = np.full_like(c4w[var], fill_value=np.nan)
fcor = np.abs(gsw.f(cc["lat"])) / pi2
N_freq = len(freq)
freq_ = np.tile(freq[:, np.newaxis, np.newaxis], (1, N_windows, Nclevels))
# ulim = fhi * np.tile(c4["N"][np.newaxis, ...], (N_freq, 1, 1)) / pi2
ulim = 1e9 # Set a huge upper limit since we don't know what N is...
llim = fcor * flo
use = (freq_ < ulim) & (freq_ > llim)
svarl = ["Puu", "Pvv", "Cuv"]
for var in svarl:
c4[var + "_int"] = igr.simps(use * c4w[var].real, freq, axis=0)
c4w[var + "_cint"] = igr.cumtrapz(use * c4w[var].real, freq, axis=0, initial=0.0)
# Change lower integration limits for vertical components...
llim = fcor * flov
use = (freq_ < ulim) & (freq_ > llim)
# Usefull quantities
c4["nstress"] = c4["Puu_int"] - c4["Pvv_int"]
c4["sstress"] = -2.0 * c4["Cuv_int"]
c4["F_horiz"] = (
-0.5 * (c4["Puu_int"] - c4["Pvv_int"]) * c4["nstrain"]
- c4["Cuv_int"] * c4["sstrain"]
)
# ## Now we have to create the model 'truth'...
#
# Load the model data and estimate some gradients.
print("Estimating smoothed gradients (slow).")
mluv = xr.load_dataset("../data/mooring_locations_uv1.nc")
mluv = mluv.isel(
t=slice(0, np.argwhere(mluv.u[:, 0, 0].data == 0)[0][0])
) # Get rid of end zeros...
mluv = mluv.assign_coords(lon=mluv.lon)
mluv = mluv.assign_coords(id=["cc", "nw", "ne", "se", "sw"])
mluv["dudz"] = (["t", "z", "index"], np.gradient(mluv.u, mluv.z, axis=1))
mluv["dvdz"] = (["t", "z", "index"], np.gradient(mluv.v, mluv.z, axis=1))
uv = np.rollaxis(np.stack((mluv.u, mluv.v))[..., 1:], 3, 0)
dudx, dudy, dvdx, dvdy, vort, div = moo.div_vort_4D(mluv.lon[1:], mluv.lat[1:], uv)
nstrain = dudx - dvdy
sstrain = dvdx + dudy
mluv["dudx"] = (["t", "z"], dudx)
mluv["dudy"] = (["t", "z"], dudy)
mluv["dvdx"] = (["t", "z"], dvdx)
mluv["dvdy"] = (["t", "z"], dvdy)
mluv["nstrain"] = (["t", "z"], nstrain)
mluv["sstrain"] = (["t", "z"], sstrain)
mluv["vort"] = (["t", "z"], vort)
mluv["div"] = (["t", "z"], div)
# Smooth the model data in an equivalent way to the real mooring.
dudxs = (
mluv.dudx.rolling(t=nperseg, center=True)
.reduce(np.average, weights=sig.hann(nperseg))
.dropna("t")
)
dvdxs = (
mluv.dvdx.rolling(t=nperseg, center=True)
.reduce(np.average, weights=sig.hann(nperseg))
.dropna("t")
)
dudys = (
mluv.dudy.rolling(t=nperseg, center=True)
.reduce(np.average, weights=sig.hann(nperseg))
.dropna("t")
)
dvdys = (
mluv.dvdy.rolling(t=nperseg, center=True)
.reduce(np.average, weights=sig.hann(nperseg))
.dropna("t")
)
sstrains = (
mluv.sstrain.rolling(t=nperseg, center=True)
.reduce(np.average, weights=sig.hann(nperseg))
.dropna("t")
)
nstrains = (
mluv.nstrain.rolling(t=nperseg, center=True)
.reduce(np.average, weights=sig.hann(nperseg))
.dropna("t")
)
divs = (
mluv.div.rolling(t=nperseg, center=True)
.reduce(np.average, weights=sig.hann(nperseg))
.dropna("t")
)
vorts = (
mluv.vort.rolling(t=nperseg, center=True)
.reduce(np.average, weights=sig.hann(nperseg))
.dropna("t")
)
dudzs = (
mluv.dudz.isel(index=0)
.rolling(t=nperseg, center=True)
.reduce(np.average, weights=sig.hann(nperseg))
.dropna("t")
)
dvdzs = (
mluv.dvdz.isel(index=0)
.rolling(t=nperseg, center=True)
.reduce(np.average, weights=sig.hann(nperseg))
.dropna("t")
)
# Make spline fits.
fdudx = itpl.RectBivariateSpline(dudxs.t.data, -dudxs.z.data, dudxs.data)
fdvdx = itpl.RectBivariateSpline(dvdxs.t.data, -dvdxs.z.data, dvdxs.data)
fdudy = itpl.RectBivariateSpline(dudys.t.data, -dudys.z.data, dudys.data)
fdvdy = itpl.RectBivariateSpline(dvdys.t.data, -dvdys.z.data, dvdys.data)
fsstrain = itpl.RectBivariateSpline(sstrains.t.data, -sstrains.z.data, sstrains.data)
fnstrain = itpl.RectBivariateSpline(nstrains.t.data, -nstrains.z.data, nstrains.data)
fdiv = itpl.RectBivariateSpline(divs.t.data, -divs.z.data, divs.data)
fvort = itpl.RectBivariateSpline(vorts.t.data, -vorts.z.data, vorts.data)
fdudz = itpl.RectBivariateSpline(dudzs.t.data, -dudzs.z.data, dudzs.data)
fdvdz = itpl.RectBivariateSpline(dvdzs.t.data, -dvdzs.z.data, dvdzs.data)
# Interpolate using splines.
dudxt = fdudx(c4["t"], -c4["z"], grid=False)
dvdxt = fdvdx(c4["t"], -c4["z"], grid=False)
dudyt = fdudy(c4["t"], -c4["z"], grid=False)
dvdyt = fdvdy(c4["t"], -c4["z"], grid=False)
sstraint = fsstrain(c4["t"], -c4["z"], grid=False)
nstraint = fnstrain(c4["t"], -c4["z"], grid=False)
divt = fdiv(c4["t"], -c4["z"], grid=False)
vortt = fvort(c4["t"], -c4["z"], grid=False)
dudzt = fdudz(c4["t"], -c4["z"], grid=False)
dvdzt = fdvdz(c4["t"], -c4["z"], grid=False)
c4["dudxt"] = dudxt
c4["dvdxt"] = dvdxt
c4["dudyt"] = dudyt
c4["dvdyt"] = dvdyt
c4["sstraint"] = sstraint
c4["nstraint"] = nstraint
c4["divt"] = divt
c4["vortt"] = vortt
c4["dudzt"] = dudzt
c4["dvdzt"] = dvdzt
io.savemat("../data/virtual_mooring_interpolated.mat", c4)
io.savemat("../data/virtual_mooring_interpolated_windowed.mat", c4w)
###Output
_____no_output_____
###Markdown
Signal to noise ratios.
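Each ratio below compares the variance of a mooring-derived quantity with the variance of its mismatch from the model truth interpolated to the mooring, i.e. $\mathrm{SNR}_x = \operatorname{var}(x)\,/\,\operatorname{var}(x - x_{\mathrm{truth}})$, so values well above one indicate that the mooring estimate is dominated by signal rather than sampling error.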
###Code
print("Estimating signal to noise ratios.")
M = munch.munchify(utils.loadmat('../data/virtual_mooring_interpolated.mat'))
# shear strain
dsstrain = M.sstrain - M.sstraint
SNR_sstrain = M.sstrain.var(axis=0)/dsstrain.var(axis=0)
np.save('../data/SNR_sstrain', SNR_sstrain, allow_pickle=False)
# normal strain
dnstrain = M.nstrain - M.nstraint
SNR_nstrain = M.nstrain.var(axis=0)/dnstrain.var(axis=0)
np.save('../data/SNR_nstrain', SNR_nstrain, allow_pickle=False)
# zonal shear
ddudz = M.dudz - M.dudzt
SNR_dudz = M.dudz.var(axis=0)/ddudz.var(axis=0)
np.save('../data/SNR_dudz', SNR_dudz, allow_pickle=False)
# meridional shear
ddvdz = M.dvdz - M.dvdzt
SNR_dvdz = M.dvdz.var(axis=0)/ddvdz.var(axis=0)
np.save('../data/SNR_dvdz', SNR_dvdz, allow_pickle=False)
# divergence
ddiv = M.div - M.divt
SNR_div = M.div.var(axis=0)/ddiv.var(axis=0)
np.save('../data/SNR_div', SNR_div, allow_pickle=False)
###Output
_____no_output_____
###Markdown
Generate interpolated data.Set parameters again.
###Code
# Corrected levels.
# heights = [-540., -1250., -2100., -3500.]
# Filter cut off (hours)
tc_hrs = 40.0
# Start of time series (matlab datetime)
t_start = 734494.0
# Length of time series
max_len = N_data = 42048
# Data file
raw_data_file = "moorings.mat"
# Index where NaNs start in u and v data from SW mooring
sw_vel_nans = 14027
# Sampling period (minutes)
dt_min = 15.0
# Window length for wave stress quantities and mesoscale strain quantities.
nperseg = 2 ** 9
# Spectra parameters
window = "hanning"
detrend = "constant"
# Extrapolation/interpolation limit above which data will be removed.
dzlim = 100.0
# Integration of spectra parameters. These multiple N and f respectively to set
# the integration limits.
fhi = 1.0
flo = 1.0
flov = 1.0 # When integrating spectra involved in vertical fluxes, get rid of
# the near inertial portion.
# When bandpass filtering windowed data use these params multiplied by f and N
filtlo = 0.9 # times f
filthi = 1.1 # times N
# Interpolation distance that raises flag (m)
zimax = 100.0
dt_sec = dt_min * 60.0 # Sample period in seconds.
dt_day = dt_sec / 86400.0 # Sample period in days.
N_per_day = int(1.0 / dt_day) # Samples per day.
###Output
_____no_output_____
###Markdown
Polynomial fits first.
###Code
print("REAL MOORING INTERPOLATION")
print("**Generating corrected data**")
moorings = load_data.load_my_data()
cc, nw, ne, se, sw = moorings
# Generate corrected moorings
T = np.concatenate([m["T"].flatten() for m in moorings])
S = np.concatenate([m["S"].flatten() for m in moorings])
z = np.concatenate([m["z"].flatten() for m in moorings])
u = np.concatenate([m["u"].flatten() for m in moorings])
v = np.concatenate([m["v"].flatten() for m in moorings])
g = np.concatenate([m["gamman"].flatten() for m in moorings])
# SW problems...
nans = np.isnan(u) | np.isnan(v)
print("Calculating polynomial coefficients.")
pzT = np.polyfit(z[~nans], T[~nans], 3)
pzS = np.polyfit(z[~nans], S[~nans], 3)
pzg = np.polyfit(z[~nans], g[~nans], 3)
pzu = np.polyfit(z[~nans], u[~nans], 2)
pzv = np.polyfit(z[~nans], v[~nans], 2)
# Additional height in m to add to interpolation height.
hoffset = [-25.0, 50.0, -50.0, 100.0]
pi2 = np.pi * 2.0
nfft = nperseg
levis = [(0, 1, 2, 3), (4, 5), (6, 7, 8, 9), (10, 11)]
Nclevels = len(levis)
spec_kwargs = {
"fs": 1.0 / dt_sec,
"window": window,
"nperseg": nperseg,
"nfft": nfft,
"detrend": detrend,
"axis": 0,
}
idx1 = np.arange(nperseg, N_data, nperseg // 2) # Window end index
idx0 = idx1 - nperseg # Window start index
N_windows = len(idx0)
# Initialise the place holder dictionaries.
c12w = {"N_levels": 12} # Dictionary for raw, windowed data from central mooring
c4w = {"N_levels": Nclevels} # Dictionary for processed, windowed data
c4 = {"N_levels": Nclevels} # Dictionary for processed data
# Dictionaries for raw, windowed data from outer moorings
nw5w, ne5w, se5w, sw5w = {"id": "nw"}, {"id": "ne"}, {"id": "se"}, {"id": "sw"}
moorings5w = [nw5w, ne5w, se5w, sw5w]
# Dictionaries for processed, windowed data from outer moorings
nw4w, ne4w, se4w, sw4w = {"id": "nw"}, {"id": "ne"}, {"id": "se"}, {"id": "sw"}
moorings4w = [nw4w, ne4w, se4w, sw4w]
# Initialised the arrays of windowed data
varr = ["t", "z", "u", "v", "gamman", "S", "T", "P"]
for var in varr:
c12w[var] = np.zeros((nperseg, N_windows, cc["N_levels"]))
var4 = [
"t",
"z",
"u",
"v",
"gamman",
"dudx",
"dvdx",
"dudy",
"dvdy",
"dudz",
"dvdz",
"dgdz",
"nstrain",
"sstrain",
"vort",
"N2",
]
for var in var4:
c4w[var] = np.zeros((nperseg, N_windows, Nclevels))
for var in var4:
c4[var] = np.zeros((N_windows, Nclevels))
# Initialised the arrays of windowed data for outer mooring
varro = ["z", "u", "v"]
for var in varro:
for m5w in moorings5w:
m5w[var] = np.zeros((nperseg, N_windows, 5))
var4o = ["z", "u", "v"]
for var in var4o:
for m4w in moorings4w:
m4w[var] = np.zeros((nperseg, N_windows, Nclevels))
# for var in var4o:
# for m4 in moorings4:
# m4[var] = np.zeros((N_windows, 4))
# Window the raw data.
for i in range(N_windows):
idx = idx0[i]
for var in varr:
c12w[var][:, i, :] = cc[var][idx : idx + nperseg, :]
for i in range(N_windows):
idx = idx0[i]
for var in varro:
for m5w, m in zip(moorings5w, moorings[1:]):
m5w[var][:, i, :] = m[var][idx : idx + nperseg, :]
c4["interp_far_flag"] = np.full_like(c4["u"], False, dtype=bool)
print("Interpolating properties.")
# Do the interpolation
for i in range(Nclevels):
# THIS hoffset is important!!!
c4["z"][:, i] = np.mean(c12w["z"][..., levis[i]], axis=(0, -1)) + hoffset[i]
for j in range(N_windows):
zr = c12w["z"][:, j, levis[i]]
ur = c12w["u"][:, j, levis[i]]
vr = c12w["v"][:, j, levis[i]]
gr = c12w["gamman"][:, j, levis[i]]
Sr = c12w["S"][:, j, levis[i]]
Tr = c12w["T"][:, j, levis[i]]
Pr = c12w["P"][:, j, levis[i]]
zi = c4["z"][j, i]
c4["interp_far_flag"][j, i] = np.any(np.min(np.abs(zr - zi), axis=-1) > zimax)
c4w["z"][:, j, i] = np.mean(zr, axis=-1)
c4w["t"][:, j, i] = c12w["t"][:, j, 0]
c4w["u"][:, j, i] = moo.interp_quantity(zr, ur, zi, pzu)
c4w["v"][:, j, i] = moo.interp_quantity(zr, vr, zi, pzv)
c4w["gamman"][:, j, i] = moo.interp_quantity(zr, gr, zi, pzg)
dudzr = np.gradient(ur, axis=-1) / np.gradient(zr, axis=-1)
dvdzr = np.gradient(vr, axis=-1) / np.gradient(zr, axis=-1)
dgdzr = np.gradient(gr, axis=-1) / np.gradient(zr, axis=-1)
N2 = seawater.bfrq(Sr.T, Tr.T, Pr.T, cc["lat"])[0].T
# Instead of mean, could moo.interp1d
c4w["dudz"][:, j, i] = np.mean(dudzr, axis=-1)
c4w["dvdz"][:, j, i] = np.mean(dvdzr, axis=-1)
c4w["dgdz"][:, j, i] = np.mean(dgdzr, axis=-1)
c4w["N2"][:, j, i] = np.mean(N2, axis=-1)
for m5w, m4w in zip(moorings5w, moorings4w):
if (m5w["id"] == "sw") & (
idx1[j] > sw_vel_nans
): # Skip this level because of NaNs
zr = m5w["z"][:, j, (0, 1, 3, 4)]
ur = m5w["u"][:, j, (0, 1, 3, 4)]
vr = m5w["v"][:, j, (0, 1, 3, 4)]
else:
zr = m5w["z"][:, j, :]
ur = m5w["u"][:, j, :]
vr = m5w["v"][:, j, :]
m4w["z"][:, j, i] = np.full((nperseg), zi)
m4w["u"][:, j, i] = moo.interp_quantity(zr, ur, zi, pzu)
m4w["v"][:, j, i] = moo.interp_quantity(zr, vr, zi, pzv)
print("Filtering windowed data.")
fcorcpd = np.abs(cc["f"]) * 86400 / pi2
Nmean = np.sqrt(np.average(c4w["N2"], weights=sig.hann(nperseg), axis=0))
varl = ["u", "v", "gamman"]
for var in varl:
c4w[var + "_hib"] = np.zeros_like(c4w[var])
c4w[var + "_lo"] = utils.butter_filter(
c4w[var], 24 / tc_hrs, fs=N_per_day, btype="low", axis=0
)
c4w[var + "_hi"] = c4w[var] - c4w[var + "_lo"]
for i in range(Nclevels):
for j in range(N_windows):
Nmean_ = Nmean[j, i] * 86400 / pi2
for var in varl:
c4w[var + "_hib"][:, j, i] = utils.butter_filter(
c4w[var][:, j, i],
(filtlo * fcorcpd, filthi * Nmean_),
fs=N_per_day,
btype="band",
)
varl = ["u", "v"]
for var in varl:
for m4w in moorings4w:
m4w[var + "_lo"] = utils.butter_filter(
m4w[var], 24 / tc_hrs, fs=N_per_day, btype="low", axis=0
)
m4w[var + "_hi"] = m4w[var] - m4w[var + "_lo"]
c4w["zi"] = np.ones_like(c4w["z"]) * c4["z"]
print("Calculating horizontal gradients.")
# Calculate horizontal gradients
for j in range(N_windows):
ll = np.stack(
([m["lon"] for m in moorings[1:]], [m["lat"] for m in moorings[1:]]), axis=1
)
uv = np.stack(
(
[m4w["u_lo"][:, j, :] for m4w in moorings4w],
[m4w["v_lo"][:, j, :] for m4w in moorings4w],
),
axis=1,
)
dudx, dudy, dvdx, dvdy, vort, _ = moo.div_vort_4D(ll[:, 0], ll[:, 1], uv)
nstrain = dudx - dvdy
sstrain = dvdx + dudy
c4w["dudx"][:, j, :] = dudx
c4w["dudy"][:, j, :] = dudy
c4w["dvdx"][:, j, :] = dvdx
c4w["dvdy"][:, j, :] = dvdy
c4w["nstrain"][:, j, :] = nstrain
c4w["sstrain"][:, j, :] = sstrain
c4w["vort"][:, j, :] = vort
print("Calculating window averages.")
for var in var4 + ["u_lo", "v_lo", "gamman_lo"]:
if var == "z": # Keep z as modified by hoffset.
continue
c4[var] = np.average(c4w[var], weights=sig.hann(nperseg), axis=0)
print("Estimating w and b.")
om = np.fft.fftfreq(nperseg, 15 * 60)
c4w["w_hi"] = np.fft.ifft(
1j
* pi2
* om[:, np.newaxis, np.newaxis]
* np.fft.fft(-c4w["gamman_hi"] / c4["dgdz"], axis=0),
axis=0,
).real
c4w["w_hib"] = np.fft.ifft(
1j
* pi2
* om[:, np.newaxis, np.newaxis]
* np.fft.fft(-c4w["gamman_hib"] / c4["dgdz"], axis=0),
axis=0,
).real
# Estimate buoyancy variables
c4w["b_hi"] = -gsw.grav(-c4["z"], cc["lat"]) * c4w["gamman_hi"] / c4["gamman_lo"]
c4w["b_hib"] = -gsw.grav(-c4["z"], cc["lat"]) * c4w["gamman_hib"] / c4["gamman_lo"]
c4["N"] = np.sqrt(c4["N2"])
print("Estimating covariance spectra.")
freq, c4w["Puu"] = sig.welch(c4w["u_hi"], **spec_kwargs)
_, c4w["Pvv"] = sig.welch(c4w["v_hi"], **spec_kwargs)
_, c4w["Pww"] = sig.welch(c4w["w_hi"], **spec_kwargs)
_, c4w["Pwwg"] = sig.welch(c4w["gamman_hi"] / c4["dgdz"], **spec_kwargs)
c4w["Pwwg"] *= (pi2 * freq[:, np.newaxis, np.newaxis]) ** 2
_, c4w["Pbb"] = sig.welch(c4w["b_hi"], **spec_kwargs)
_, c4w["Cuv"] = sig.csd(c4w["u_hi"], c4w["v_hi"], **spec_kwargs)
_, c4w["Cuwg"] = sig.csd(c4w["u_hi"], c4w["gamman_hi"] / c4["dgdz"], **spec_kwargs)
c4w["Cuwg"] *= -1j * pi2 * freq[:, np.newaxis, np.newaxis]
_, c4w["Cvwg"] = sig.csd(c4w["v_hi"], c4w["gamman_hi"] / c4["dgdz"], **spec_kwargs)
c4w["Cvwg"] *= -1j * pi2 * freq[:, np.newaxis, np.newaxis]
_, c4w["Cub"] = sig.csd(c4w["u_hi"], c4w["b_hi"], **spec_kwargs)
_, c4w["Cvb"] = sig.csd(c4w["v_hi"], c4w["b_hi"], **spec_kwargs)
print("Estimating covariance matrices.")
def cov(x, y, axis=None):
return np.mean((x - np.mean(x, axis=axis)) * (y - np.mean(y, axis=axis)), axis=axis)
c4["couu"] = cov(c4w["u_hib"], c4w["u_hib"], axis=0)
c4["covv"] = cov(c4w["v_hib"], c4w["v_hib"], axis=0)
c4["coww"] = cov(c4w["w_hib"], c4w["w_hib"], axis=0)
c4["cobb"] = cov(c4w["b_hib"], c4w["b_hib"], axis=0)
c4["couv"] = cov(c4w["u_hib"], c4w["v_hib"], axis=0)
c4["couw"] = cov(c4w["u_hib"], c4w["w_hib"], axis=0)
c4["covw"] = cov(c4w["v_hib"], c4w["w_hib"], axis=0)
c4["coub"] = cov(c4w["u_hib"], c4w["b_hib"], axis=0)
c4["covb"] = cov(c4w["v_hib"], c4w["b_hib"], axis=0)
c4w["freq"] = freq.copy()
# Get rid of annoying tiny values.
svarl = ["Puu", "Pvv", "Pbb", "Cuv", "Cub", "Cvb", "Pwwg", "Cuwg", "Cvwg"]
for var in svarl:
c4w[var][0, ...] = 0.0
c4[var + "_int"] = np.full((N_windows, 4), np.nan)
# Horizontal azimuth according to Jing 2018
c4w["theta"] = np.arctan2(2.0 * c4w["Cuv"].real, (c4w["Puu"] - c4w["Pvv"])) / 2
# Integration #############################################################
print("Integrating power spectra.")
for var in svarl:
c4w[var + "_cint"] = np.full_like(c4w[var], fill_value=np.nan)
fcor = np.abs(cc["f"]) / pi2
N_freq = len(freq)
freq_ = np.tile(freq[:, np.newaxis, np.newaxis], (1, N_windows, Nclevels))
ulim = fhi * np.tile(c4["N"][np.newaxis, ...], (N_freq, 1, 1)) / pi2
llim = fcor * flo
use = (freq_ < ulim) & (freq_ > llim)
svarl = ["Puu", "Pvv", "Pbb", "Cuv", "Pwwg"]
for var in svarl:
c4[var + "_int"] = igr.simps(use * c4w[var].real, freq, axis=0)
c4w[var + "_cint"] = igr.cumtrapz(use * c4w[var].real, freq, axis=0, initial=0.0)
# Change lower integration limits for vertical components...
llim = fcor * flov
use = (freq_ < ulim) & (freq_ > llim)
svarl = ["Cub", "Cvb", "Cuwg", "Cvwg"]
for var in svarl:
c4[var + "_int"] = igr.simps(use * c4w[var].real, freq, axis=0)
c4w[var + "_cint"] = igr.cumtrapz(use * c4w[var].real, freq, axis=0, initial=0.0)
# Ruddick and Joyce effective stress
for var1, var2 in zip(["Tuwg", "Tvwg"], ["Cuwg", "Cvwg"]):
func = use * c4w[var2].real * (1 - fcor ** 2 / freq_ ** 2)
nans = np.isnan(func)
func[nans] = 0.0
c4[var1 + "_int"] = igr.simps(func, freq, axis=0)
func = use * c4w[var2].real * (1 - fcor ** 2 / freq_ ** 2)
nans = np.isnan(func)
func[nans] = 0.0
c4w[var1 + "_cint"] = igr.cumtrapz(func, freq, axis=0, initial=0.0)
# Useful quantities
c4["nstress"] = c4["Puu_int"] - c4["Pvv_int"]
c4["sstress"] = -2.0 * c4["Cuv_int"]
c4["F_horiz"] = (
-0.5 * (c4["Puu_int"] - c4["Pvv_int"]) * c4["nstrain"]
- c4["Cuv_int"] * c4["sstrain"]
)
c4["F_vert"] = (
-(c4["Cuwg_int"] - cc["f"] * c4["Cvb_int"] / c4["N"] ** 2) * c4["dudz"]
- (c4["Cvwg_int"] + cc["f"] * c4["Cub_int"] / c4["N"] ** 2) * c4["dvdz"]
)
c4["F_vert_alt"] = -c4["Tuwg_int"] * c4["dudz"] - c4["Tvwg_int"] * c4["dvdz"]
c4["F_total"] = c4["F_horiz"] + c4["F_vert"]
c4["EPu"] = c4["Cuwg_int"] - cc["f"] * c4["Cvb_int"] / c4["N"] ** 2
c4["EPv"] = c4["Cvwg_int"] + cc["f"] * c4["Cub_int"] / c4["N"] ** 2
##
c4["nstress_cov"] = c4["couu"] - c4["covv"]
c4["sstress_cov"] = -2.0 * c4["couv"]
c4["F_horiz_cov"] = (
-0.5 * (c4["couu"] - c4["covv"]) * c4["nstrain"] - c4["couv"] * c4["sstrain"]
)
c4["F_vert_cov"] = (
-(c4["couw"] - cc["f"] * c4["covb"] / c4["N"] ** 2) * c4["dudz"]
- (c4["covw"] + cc["f"] * c4["coub"] / c4["N"] ** 2) * c4["dvdz"]
)
c4["F_total_cov"] = c4["F_horiz_cov"] + c4["F_vert_cov"]
###Output
_____no_output_____
###Markdown
Estimate standard error on covariances.
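The cell below estimates this by resampling the band-passed records with replacement `bootnum` times and recomputing each covariance. For reference (standard bootstrap reasoning, not text from the original notebook), the standard error of a statistic $\hat{\theta}$ is approximated by the spread of the resampled estimates, $$\mathrm{SE}(\hat{\theta}) \approx \sqrt{\frac{1}{B}\sum_{b=1}^{B}\big(\hat{\theta}^{*}_{b}-\bar{\theta}^{*}\big)^{2}},$$ which is what the `.std(axis=0)` over the `bootnum` replicates computes at the end of the cell.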
###Code
bootnum = 1000
np.random.seed(12341555)
idxs = np.arange(nperseg, dtype="i2")
# def cov1(xy, axis=0):
# x = xy[..., -1]
# y = xy[..., -1]
# return np.mean((x - np.mean(x, axis=axis))*(y - np.mean(y, axis=axis)), axis=axis)
print("Estimating error on covariance using bootstrap (slow).")
euu_ = np.zeros((bootnum, N_windows, Nclevels))
evv_ = np.zeros((bootnum, N_windows, Nclevels))
eww_ = np.zeros((bootnum, N_windows, Nclevels))
ebb_ = np.zeros((bootnum, N_windows, Nclevels))
euv_ = np.zeros((bootnum, N_windows, Nclevels))
euw_ = np.zeros((bootnum, N_windows, Nclevels))
evw_ = np.zeros((bootnum, N_windows, Nclevels))
eub_ = np.zeros((bootnum, N_windows, Nclevels))
evb_ = np.zeros((bootnum, N_windows, Nclevels))
for i in range(bootnum):
idxs_ = np.random.choice(idxs, nperseg)
u_ = c4w["u_hib"][idxs_, ...]
v_ = c4w["v_hib"][idxs_, ...]
w_ = c4w["w_hib"][idxs_, ...]
b_ = c4w["b_hib"][idxs_, ...]
euu_[i, ...] = cov(u_, u_, axis=0)
evv_[i, ...] = cov(v_, v_, axis=0)
eww_[i, ...] = cov(w_, w_, axis=0)
ebb_[i, ...] = cov(b_, b_, axis=0)
euv_[i, ...] = cov(u_, v_, axis=0)
euw_[i, ...] = cov(u_, w_, axis=0)
evw_[i, ...] = cov(v_, w_, axis=0)
eub_[i, ...] = cov(u_, b_, axis=0)
evb_[i, ...] = cov(v_, b_, axis=0)
c4["euu"] = euu_.std(axis=0)
c4["evv"] = evv_.std(axis=0)
c4["eww"] = eww_.std(axis=0)
c4["ebb"] = ebb_.std(axis=0)
c4["euv"] = euv_.std(axis=0)
c4["euw"] = euw_.std(axis=0)
c4["evw"] = evw_.std(axis=0)
c4["eub"] = eub_.std(axis=0)
c4["evb"] = evb_.std(axis=0)
###Output
_____no_output_____
###Markdown
Error on gradients.
###Code
finite_diff_err = 0.06 # Assume 6 percent...
SNR_dudz = np.load("../data/SNR_dudz.npy")
SNR_dvdz = np.load("../data/SNR_dvdz.npy")
SNR_nstrain = np.load("../data/SNR_nstrain.npy")
SNR_sstrain = np.load("../data/SNR_sstrain.npy")
ones = np.ones_like(c4["euu"])
c4["edudz"] = ones * np.sqrt(c4["dudz"].var(axis=0) / SNR_dudz)
c4["edvdz"] = ones * np.sqrt(c4["dvdz"].var(axis=0) / SNR_dvdz)
c4["enstrain"] = esum(
ones * np.sqrt(c4["nstrain"].var(axis=0) / SNR_nstrain),
finite_diff_err * c4["nstrain"],
)
c4["esstrain"] = esum(
ones * np.sqrt(c4["sstrain"].var(axis=0) / SNR_sstrain),
finite_diff_err * c4["sstrain"],
)
###Output
_____no_output_____
###Markdown
Error propagation.
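The `esum` and `emult` helpers used in the next cells are defined earlier in the pipeline and do not appear in this excerpt. As a point of reference only, a minimal sketch consistent with standard error-propagation rules (an assumption about their behaviour, not the original definitions) is:

```python
import numpy as np

def esum(ea, eb):
    # Error of a sum or difference: combine the two uncertainties in quadrature.
    return np.sqrt(ea ** 2 + eb ** 2)

def emult(x, y, ex, ey):
    # Error of a product x * y with absolute uncertainties ex and ey
    # (division-free form of |x*y| * sqrt((ex/x)**2 + (ey/y)**2)).
    return np.sqrt((y * ex) ** 2 + (x * ey) ** 2)
```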
###Code
euumvv = 0.5 * esum(c4["euu"], c4["evv"])
c4["enstress"] = euumvv.copy()
enorm = emult(
-0.5 * (c4["Puu_int"] - c4["Pvv_int"]), c4["nstrain"], euumvv, c4["enstrain"]
)
eshear = emult(c4["Cuv_int"], c4["sstrain"], c4["euv"], c4["esstrain"])
c4["errF_horiz_norm"] = enorm.copy()
c4["errF_horiz_shear"] = eshear.copy()
c4["errF_horiz"] = esum(enorm, eshear)
euumvv = 0.5 * esum(c4["euu"], c4["evv"])
c4["enstress_cov"] = euumvv.copy()
enorm = emult(-0.5 * (c4["couu"] - c4["covv"]), c4["nstrain"], euumvv, c4["enstrain"])
eshear = emult(c4["couv"], c4["sstrain"], c4["euv"], c4["esstrain"])
c4["errF_horiz_norm_cov"] = enorm.copy()
c4["errF_horiz_shear_cov"] = eshear.copy()
c4["errF_horiz_cov"] = esum(enorm, eshear)
euwmvb = esum(c4["euw"], np.abs(cc["f"] / c4["N"] ** 2) * c4["evb"])
evwpub = esum(c4["evw"], np.abs(cc["f"] / c4["N"] ** 2) * c4["eub"])
c4["evstressu"] = euwmvb
c4["evstressv"] = evwpub
edu = emult(
-(c4["Cuwg_int"] - cc["f"] * c4["Cvb_int"] / c4["N"] ** 2),
c4["dudz"],
euwmvb,
c4["edudz"],
)
edv = emult(
-(c4["Cvwg_int"] + cc["f"] * c4["Cub_int"] / c4["N"] ** 2),
c4["dvdz"],
evwpub,
c4["edvdz"],
)
c4["errEPu"] = edu.copy()
c4["errEPv"] = edv.copy()
c4["errF_vert"] = esum(edu, edv)
c4["errEPu_alt"] = emult(-c4["Tuwg_int"], c4["dudz"], c4["euw"], c4["edudz"])
c4["errEPv_alt"] = emult(-c4["Tvwg_int"], c4["dvdz"], c4["evw"], c4["edvdz"])
c4["errF_vert_alt"] = esum(c4["errEPu_alt"], c4["errEPv_alt"])
edu = emult(
-(c4["couw"] - cc["f"] * c4["covb"] / c4["N"] ** 2), c4["dudz"], euwmvb, c4["edudz"]
)
edv = emult(
-(c4["covw"] + cc["f"] * c4["coub"] / c4["N"] ** 2), c4["dvdz"], evwpub, c4["edvdz"]
)
c4["errEPu_cov"] = edu.copy()
c4["errEPv_cov"] = edv.copy()
c4["errF_vert_cov"] = esum(edu, edv)
c4["errF_total"] = esum(c4["errF_vert"], c4["errF_horiz"])
c4["errF_total_cov"] = esum(c4["errF_vert_cov"], c4["errF_horiz_cov"])
###Output
_____no_output_____
###Markdown
Save the interpolated data.
###Code
io.savemat(os.path.join(data_out, "C_alt.mat"), c4)
io.savemat(os.path.join(data_out, "C_altw.mat"), c4w)
###Output
_____no_output_____
###Markdown
ADCP Processing
###Code
print("ADCP PROCESSING")
tf = np.array([16.0, 2.0]) # band pass filter cut off hours
tc_hrs = 40.0 # Low pass cut off (hours)
dt = 0.5 # Data sample period hr
print("Loading ADCP data from file.")
file = os.path.expanduser(os.path.join(data_in, "ladcp_data.mat"))
adcp = utils.loadmat(file)["ladcp2"]
print("Removing all NaN rows.")
varl = ["u", "v", "z"]
for var in varl: # Get rid of the all nan row.
adcp[var] = adcp.pop(var)[:-1, :]
print("Calculating vertical shear.")
z = adcp["z"]
dudz = np.diff(adcp["u"], axis=0) / np.diff(z, axis=0)
dvdz = np.diff(adcp["v"], axis=0) / np.diff(z, axis=0)
nans = np.isnan(dudz) | np.isnan(dvdz)
dudz[nans] = np.nan
dvdz[nans] = np.nan
adcp["zm"] = utils.mid(z, axis=0)
adcp["dudz"] = dudz
adcp["dvdz"] = dvdz
# Low pass filter data.
print("Low pass filtering at {:1.0f} hrs.".format(tc_hrs))
varl = ["u", "v", "dudz", "dvdz"]
for var in varl:
data = adcp[var]
nans = np.isnan(data)
adcp[var + "_m"] = np.nanmean(data, axis=0)
datalo = utils.butter_filter(
utils.interp_nans(adcp["dates"], data, axis=1), 1 / tc_hrs, 1 / dt, btype="low"
)
# Then put nans back...
if nans.any():
datalo[nans] = np.nan
namelo = var + "_lo"
adcp[namelo] = datalo
namehi = var + "_hi"
adcp[namehi] = adcp[var] - adcp[namelo]
# Band pass filter the data.
print("Band pass filtering between {:1.0f} and {:1.0f} hrs.".format(*tf))
varl = ["u", "v", "dudz", "dvdz"]
for var in varl:
data = adcp[var]
nans = np.isnan(data)
databp = utils.butter_filter(
utils.interp_nans(adcp["dates"], data, axis=1), 1 / tf, 1 / dt, btype="band"
)
# Then put nans back...
if nans.any():
databp[nans] = np.nan
namebp = var + "_bp"
adcp[namebp] = databp
io.savemat(os.path.join(data_out, "ADCP.mat"), adcp)
###Output
_____no_output_____
###Markdown
VMP data
###Code
print("VMP PROCESSING")
vmp = utils.loadmat(os.path.join(data_in, "jc054_vmp_cleaned.mat"))["d"]
box = np.array([[-58.0, -58.0, -57.7, -57.7], [-56.15, -55.9, -55.9, -56.15]]).T
p = path.Path(box)
in_box = p.contains_points(np.vstack((vmp["startlon"], vmp["startlat"])).T)
idxs = np.argwhere(in_box).squeeze()
Np = len(idxs)
print("Isolate profiles in a box around the mooring.")
for var in vmp:
ndim = np.ndim(vmp[var])
if ndim == 2:
vmp[var] = vmp[var][:, idxs]
if ndim == 1 and vmp[var].size == 36:
vmp[var] = vmp[var][idxs]
print("Rename variables.")
vmp["P"] = vmp.pop("press")
vmp["T"] = vmp.pop("temp")
vmp["S"] = vmp.pop("salin")
print("Deal with profiles where P[0] != 1.")
P_ = np.arange(1.0, 10000.0)
i0o = np.zeros((Np), dtype=int)
i1o = np.zeros((Np), dtype=int)
i0n = np.zeros((Np), dtype=int)
i1n = np.zeros((Np), dtype=int)
pmax = 0.0
for i in range(Np):
nans = np.isnan(vmp["eps"][:, i])
i0o[i] = i0 = np.where(~nans)[0][0]
i1o[i] = i1 = np.where(~nans)[0][-1]
P0 = vmp["P"][i0, i]
P1 = vmp["P"][i1, i]
i0n[i] = np.searchsorted(P_, P0)
i1n[i] = np.searchsorted(P_, P1)
pmax = max(P1, pmax)
P = np.tile(np.arange(1.0, pmax + 2)[:, np.newaxis], (1, len(idxs)))
eps = np.full_like(P, np.nan)
chi = np.full_like(P, np.nan)
T = np.full_like(P, np.nan)
S = np.full_like(P, np.nan)
for i in range(Np):
eps[i0n[i] : i1n[i] + 1, i] = vmp["eps"][i0o[i] : i1o[i] + 1, i]
chi[i0n[i] : i1n[i] + 1, i] = vmp["chi"][i0o[i] : i1o[i] + 1, i]
T[i0n[i] : i1n[i] + 1, i] = vmp["T"][i0o[i] : i1o[i] + 1, i]
S[i0n[i] : i1n[i] + 1, i] = vmp["S"][i0o[i] : i1o[i] + 1, i]
vmp["P"] = P
vmp["eps"] = eps
vmp["chi"] = chi
vmp["T"] = T
vmp["S"] = S
vmp["z"] = gsw.z_from_p(vmp["P"], vmp["startlat"])
print("Calculate neutral density.")
# Compute potential temperature using the 1983 UNESCO EOS.
vmp["PT0"] = seawater.ptmp(vmp["S"], vmp["T"], vmp["P"])
# Flatten variables for analysis.
lons = np.ones_like(P) * vmp["startlon"]
lats = np.ones_like(P) * vmp["startlat"]
S_ = vmp["S"].flatten()
T_ = vmp["PT0"].flatten()
P_ = vmp["P"].flatten()
LO_ = lons.flatten()
LA_ = lats.flatten()
gamman = gamma_GP_from_SP_pt(S_, T_, P_, LO_, LA_)
vmp["gamman"] = np.reshape(gamman, vmp["P"].shape) + 1000.0
io.savemat(os.path.join(data_out, "VMP.mat"), vmp)
###Output
_____no_output_____ |
tools/4_extract_routines_from_transcripts.ipynb | ###Markdown
This notebook extracts routines of referring expressions that are "fixed", i.e. become shared or established amongst interlocutors.
###Code
import re
import pathlib as pl
import pandas as pd
import numpy as np
from spacy.lang.en import English
from read_utils import read_tables
def natural_sort(l):
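    # Sort strings so that embedded integers compare numerically (e.g. 'team2' before 'team10').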
convert = lambda text: int(text) if text.isdigit() else text.lower()
alphanum_key = lambda key: [ convert(c) for c in re.split('([0-9]+)', key) ]
return sorted(l, key = alphanum_key)
###Output
_____no_output_____
###Markdown
Define paths.
###Code
# Inputs.
data_dir = pl.Path('../data')
transcripts_dir = data_dir.joinpath('transcripts')
dialign_dir = pl.Path('dialign-1.0')
dialign_jar_file = dialign_dir.joinpath('dialign.jar')
output_dir = pl.Path('../outputs')
interm_dir = output_dir.joinpath('intermediate')
task_features_file = interm_dir.joinpath(
'log_features/justhink19_log_features_task_level.csv')
# Outputs.
processed_data_dir = pl.Path('../processed_data')
dialign_inputs_dir = processed_data_dir.joinpath('dialign_inputs')
dialign_outputs_dir = processed_data_dir.joinpath('dialign_outputs')
routines_dir = processed_data_dir.joinpath('routines')
utterances_dir = processed_data_dir.joinpath('utterances')
tokens_dir = processed_data_dir.joinpath('tokens')
dirs = [
dialign_inputs_dir, dialign_outputs_dir,
routines_dir,
utterances_dir, tokens_dir,
]
for d in dirs:
if not d.exists():
d.mkdir()
print('Created {}'.format(d))
synthesis_dep_file = dialign_outputs_dir.joinpath(
'metrics-speaker-dependent.tsv')
synthesis_indep_file = dialign_outputs_dir.joinpath(
    'metrics-speaker-independent.tsv')
###Output
_____no_output_____
###Markdown
Define task-specific referents.
###Code
node_words = {
'basel',
'luzern',
'zurich',
'bern',
'zermatt',
'interlaken',
'montreux',
'neuchatel',
'gallen',
'davos',
}
task_words = node_words
print(len(task_words), sorted(task_words))
###Output
10 ['basel', 'bern', 'davos', 'gallen', 'interlaken', 'luzern', 'montreux', 'neuchatel', 'zermatt', 'zurich']
###Markdown
Load data. Read transcripts.
###Code
transcript_dfs = read_tables(transcripts_dir, form='transcript')
###Output
Reading transcript files from ../data/transcripts.
transcript 10 files found.
File justhink19_transcript_07 belongs to team 7
File justhink19_transcript_08 belongs to team 8
File justhink19_transcript_09 belongs to team 9
File justhink19_transcript_10 belongs to team 10
File justhink19_transcript_11 belongs to team 11
File justhink19_transcript_17 belongs to team 17
File justhink19_transcript_18 belongs to team 18
File justhink19_transcript_20 belongs to team 20
File justhink19_transcript_28 belongs to team 28
File justhink19_transcript_47 belongs to team 47
Transcript of 7 has 639 utterances
Transcript of 8 has 669 utterances
Transcript of 9 has 810 utterances
Transcript of 10 has 469 utterances
Transcript of 11 has 567 utterances
Transcript of 17 has 325 utterances
Transcript of 18 has 359 utterances
Transcript of 20 has 507 utterances
Transcript of 28 has 348 utterances
Transcript of 47 has 396 utterances
###Markdown
Refine task and transcript durations. Compute speaking durations from transcripts.
###Code
end_times = dict()
for team_no, df in transcript_dfs.items():
dff = df[df['utterance'] == '(omitted)']
if len(dff) > 0:
end_time = dff['start'].min()
else:
end_time = df.iloc[-1]['end']
end_times[team_no] = end_time
end_times
###Output
_____no_output_____
###Markdown
Print the total transcribed duration in hours.
###Code
values = [td / 60 / 60 for td in end_times.values()]
sum(values)
###Output
_____no_output_____
###Markdown
Slice the transcripts by their inferred duration. There is sometimes more talk after the task ends, some of which was also transcribed; we omit that. This happens specifically when the team fails, i.e. time is up and we intervene.
###Code
for team_no in transcript_dfs:
df = transcript_dfs[team_no]
df = df[df.end <= end_times[team_no]]
transcript_dfs[team_no] = df
# # A quick check.
# transcript_dfs[7].tail(), end_times[7]
###Output
_____no_output_____
###Markdown
Generate inputs for dialign to extract routines. Define a tokeniser. Create a tokeniser with the default settings for English, including punctuation rules and exceptions.
###Code
nlp = English()
tokeniser = nlp.tokenizer
###Output
_____no_output_____
###Markdown
Define a tokeniser method for dialign (as per dialign input format).
###Code
def tokenise_utterances(df, tokeniser):
df = df.copy()
texts = list()
for u in df['utterance']:
tokens = tokeniser(u)
text = ' '.join([t.text for t in tokens])
texts.append(text)
df['utterance'] = texts
return df
###Output
_____no_output_____
###Markdown
Define an exporter for dialign.
###Code
def export_for_dialign(df, file):
with open(str(file), 'w') as f:
for i, row in df.iterrows():
print('{}\t{}'.format(row['interlocutor'], row['utterance']),
file=f)
###Output
_____no_output_____
###Markdown
Rework the transcripts for dialign: obtain simpler transcripts (tokenised and interlocutors A & B only).
###Code
print('Reworking the transcripts to input into dialign...')
utterance_dfs = dict()
for team_no, df in transcript_dfs.items():
print('Processing Team {}'.format(team_no))
# Filter for interlocutors A and B only.
df = df[df['interlocutor'].isin(['A', 'B'])]
# Tokenise.
df = tokenise_utterances(df, tokeniser)
# Reset the utterance numbers.
df['utterance_no'] = range(len(df))
# Keep.
utterance_dfs[team_no] = df
print('Done!')
###Output
Reworking the transcripts to input into dialign...
Processing Team 7
Processing Team 8
Processing Team 9
Processing Team 10
Processing Team 11
Processing Team 17
Processing Team 18
Processing Team 20
Processing Team 28
Processing Team 47
Done!
###Markdown
Export the transcripts in dialign input format.
###Code
print('Exporting the transcripts in dialign input format...')
for team_no, df in utterance_dfs.items():
# Construct filename.
file = 'justhink19_dialogue_{:02d}.tsv'.format(team_no)
file = dialign_inputs_dir.joinpath(file)
# Export table to file.
export_for_dialign(df, file)
print('Written for team {:2d} to {}'.format(team_no, file))
print('Done!')
###Output
Exporting the transcripts in dialign input format...
Written for team 7 to ../processed_data/dialign_inputs/justhink19_dialogue_07.tsv
Written for team 8 to ../processed_data/dialign_inputs/justhink19_dialogue_08.tsv
Written for team 9 to ../processed_data/dialign_inputs/justhink19_dialogue_09.tsv
Written for team 10 to ../processed_data/dialign_inputs/justhink19_dialogue_10.tsv
Written for team 11 to ../processed_data/dialign_inputs/justhink19_dialogue_11.tsv
Written for team 17 to ../processed_data/dialign_inputs/justhink19_dialogue_17.tsv
Written for team 18 to ../processed_data/dialign_inputs/justhink19_dialogue_18.tsv
Written for team 20 to ../processed_data/dialign_inputs/justhink19_dialogue_20.tsv
Written for team 28 to ../processed_data/dialign_inputs/justhink19_dialogue_28.tsv
Written for team 47 to ../processed_data/dialign_inputs/justhink19_dialogue_47.tsv
Done!
###Markdown
Run dialign.
###Code
cmd = 'java -jar {} -i {} -o {}'.format(
dialign_jar_file.resolve(),
dialign_inputs_dir.resolve(),
dialign_outputs_dir.resolve())
print(cmd)
print('Running for dialogues...')
!$cmd
print('Done!')
###Output
java -jar /home/utku/playground/justhink-alignment-analysis/tools/dialign-1.0/dialign.jar -i /home/utku/playground/justhink-alignment-analysis/processed_data/dialign_inputs -o /home/utku/playground/justhink-alignment-analysis/processed_data/dialign_outputs
Running for dialogues...
Done!
###Markdown
Read routine tables, i.e. shared expression lexicons as termed by dialign.
###Code
routine_dfs = dict()
for team_no in sorted(transcript_dfs):
routine_file = 'justhink19_dialogue_{:02d}_tsv-lexicon.tsv'.format(
team_no)
routine_file = dialign_outputs_dir.joinpath(routine_file)
df = pd.read_csv(str(routine_file), sep='\t')
print('Read for team {:02d}: {} routines'.format(team_no, len(df)))
l = list()
for e in df['Surface Form']:
tokenised_e = [t.text for t in tokeniser(e)]
v = 0
for n in task_words:
if n in tokenised_e:
v += 1
l.append(v)
df.insert(3, 'task_spec_referent_count', l)
df['utterances'] = [[int(n) for n in seq.split(', ')]
for seq in df['Turns']]
routine_dfs[team_no] = df
# # Example/debugging.
# team_no = 18
# routine_dfs[team_no].head(3)
###Output
Read for team 07: 384 routines
Read for team 08: 420 routines
Read for team 09: 533 routines
Read for team 10: 226 routines
Read for team 11: 371 routines
Read for team 17: 149 routines
Read for team 18: 149 routines
Read for team 20: 287 routines
Read for team 28: 194 routines
Read for team 47: 223 routines
###Markdown
Filter for routines with task-specific referents.
###Code
for team_no, df in routine_dfs.items():
df = df[df.task_spec_referent_count > 0]
df = df.drop('task_spec_referent_count', axis=1)
routine_dfs[team_no] = df
###Output
_____no_output_____
###Markdown
Rework routine instances with token positions. Construct token tables.
###Code
token_dfs = dict()
for team_no, df in utterance_dfs.items():
df = df.copy()
# Split the utterances into words, convert to a list.
df['token'] = [u.split() for u in df['utterance']]
# df = df.assign(**{'words': df['object'].str.split()})
# Transform each word to a row, preserving the other values in the row.
df = df.explode('token')
# Assign a subutterance no.
df.insert(2, 'token_no', range(len(df)))
token_dfs[team_no] = df
df = token_dfs[7].copy()
df.head()
def find_sub_list(sl, l):
# allows for multiple matches
# from https://stackoverflow.com/a/17870684
results = []
sll = len(sl)
for ind in (i for i, e in enumerate(l) if e == sl[0]):
if l[ind:ind+sll] == sl:
results.append((ind, ind+sll-1))
return results
# # Try.
# greeting = ['hello', 'my', 'name', 'is', 'bob',
# 'how', 'are', 'you', 'my', 'name', 'is']
# print(find_sub_list(['my', 'name', 'is'], greeting))
###Output
_____no_output_____
###Markdown
Find routine expressions' subutterance numbers from utterance numbers.
###Code
def get_start_indices(subutterance, u_no, u_df):
l = list() # subutterance list to be built.
# Find the utterance (row) with that utterance no.
utterance_row = u_df[u_df.utterance_no == u_no]
# Make sure there is only one such row.
assert len(utterance_row) == 1, print(
'Multiple utterances found at {}'.format(u_no))
# Select the first (and only) row.
utterance_row = utterance_row.iloc[0]
# Get the utterance string at that row.
utterance = utterance_row['utterance']
# Find the occurrences of subutterance routine in the utterance.
indices = find_sub_list(subutterance.split(), utterance.split())
assert len(indices) != 0, print(
'Could not find subutterance "{}" at utterance "{}" ({})'.format(
subutterance, utterance, u_no))
# Get the token offset of the utterance.
offset = t_df[t_df.utterance_no == u_no].iloc[0]['token_no']
# Put the initial positions of the occurrences into a list.
for start, end in indices:
l.append(start + offset)
return l
for team_no, df in routine_dfs.items():
print('Finding routine token indices for team {:2d}'.format(
team_no))
u_df = utterance_dfs[team_no]
t_df = token_dfs[team_no]
tokens_list = list()
establish_list = list()
priming_list = list()
for i, row in df.iterrows():
subutterance = row['Surface Form']
# subutterance list to be built, for each row.
l = list()
for u_no in row['utterances']: # for each utterance no
l += get_start_indices(subutterance, u_no, u_df)
tokens_list.append(l)
# priming token from the priming utterance no.
u_no = row['utterances'][0]
t = get_start_indices(subutterance, u_no, u_df)[0]
priming_list.append(t)
# establishment token from the establishment utterance no.
u_no = row['Establishment turn']
t = get_start_indices(subutterance, u_no, u_df)[0]
establish_list.append(t)
df['tokens'] = tokens_list
df['priming_token'] = priming_list
df['establish_token'] = establish_list
print('Done!')
routine_dfs[7].head()
###Output
Finding routine token indices for team 7
Finding routine token indices for team 8
Finding routine token indices for team 9
Finding routine token indices for team 10
Finding routine token indices for team 11
Finding routine token indices for team 17
Finding routine token indices for team 18
Finding routine token indices for team 20
Finding routine token indices for team 28
Finding routine token indices for team 47
Done!
###Markdown
Export routine tables.
###Code
print('Exporting routine tables...')
for team_no, df in routine_dfs.items():
# Construct filename.
file = 'justhink19_routines_{:02d}.csv'.format(team_no)
file = routines_dir.joinpath(file)
# Write the table to file.
df.to_csv(file, index=False, sep='\t')
print('Exported routines for {:2d} to {}'.format(team_no, file))
print('Done!')
###Output
Exporting routine tables...
Exported routines for 7 to ../processed_data/routines/justhink19_routines_07.csv
Exported routines for 8 to ../processed_data/routines/justhink19_routines_08.csv
Exported routines for 9 to ../processed_data/routines/justhink19_routines_09.csv
Exported routines for 10 to ../processed_data/routines/justhink19_routines_10.csv
Exported routines for 11 to ../processed_data/routines/justhink19_routines_11.csv
Exported routines for 17 to ../processed_data/routines/justhink19_routines_17.csv
Exported routines for 18 to ../processed_data/routines/justhink19_routines_18.csv
Exported routines for 20 to ../processed_data/routines/justhink19_routines_20.csv
Exported routines for 28 to ../processed_data/routines/justhink19_routines_28.csv
Exported routines for 47 to ../processed_data/routines/justhink19_routines_47.csv
Done!
###Markdown
Export the simplified transcripts ("utterances").
###Code
print('Exporting tokenised filtered transcripts (utterances)')
for team_no, df in utterance_dfs.items():
file = 'justhink19_utterances_{:02d}.csv'.format(team_no)
file = utterances_dir.joinpath(file)
# Export table to file.
df.to_csv(file, index=False, float_format='%.3f', sep='\t')
print('Exported utterances for {:2d} to {}'.format(team_no, file))
print('Done!')
###Output
Exporting tokenised filtered transcripts (utterances)
Exported utterances for 7 to ../processed_data/utterances/justhink19_utterances_07.csv
Exported utterances for 8 to ../processed_data/utterances/justhink19_utterances_08.csv
Exported utterances for 9 to ../processed_data/utterances/justhink19_utterances_09.csv
Exported utterances for 10 to ../processed_data/utterances/justhink19_utterances_10.csv
Exported utterances for 11 to ../processed_data/utterances/justhink19_utterances_11.csv
Exported utterances for 17 to ../processed_data/utterances/justhink19_utterances_17.csv
Exported utterances for 18 to ../processed_data/utterances/justhink19_utterances_18.csv
Exported utterances for 20 to ../processed_data/utterances/justhink19_utterances_20.csv
Exported utterances for 28 to ../processed_data/utterances/justhink19_utterances_28.csv
Exported utterances for 47 to ../processed_data/utterances/justhink19_utterances_47.csv
Done!
###Markdown
Export the token tables.
###Code
for team_no, df in token_dfs.items():
file = 'justhink19_tokens_{:02d}.csv'.format(team_no)
file = tokens_dir.joinpath(file)
# Export table to file.
df.to_csv(file, index=False, float_format='%.3f', sep='\t')
print('Exported tokens for {:2d} to {}'.format(team_no, file))
print('Done!')
###Output
Exported tokens for 7 to ../processed_data/tokens/justhink19_tokens_07.csv
Exported tokens for 8 to ../processed_data/tokens/justhink19_tokens_08.csv
Exported tokens for 9 to ../processed_data/tokens/justhink19_tokens_09.csv
Exported tokens for 10 to ../processed_data/tokens/justhink19_tokens_10.csv
Exported tokens for 11 to ../processed_data/tokens/justhink19_tokens_11.csv
Exported tokens for 17 to ../processed_data/tokens/justhink19_tokens_17.csv
Exported tokens for 18 to ../processed_data/tokens/justhink19_tokens_18.csv
Exported tokens for 20 to ../processed_data/tokens/justhink19_tokens_20.csv
Exported tokens for 28 to ../processed_data/tokens/justhink19_tokens_28.csv
Exported tokens for 47 to ../processed_data/tokens/justhink19_tokens_47.csv
Done!
|
KCM/KCM_Keyword_Correlation_Models_from_Open_Corpus.ipynb | ###Markdown
KCM (Keyword Correlation Models) Install packages
###Code
!pip install gdown
###Output
Requirement already satisfied: gdown in /usr/local/lib/python3.7/dist-packages (3.6.4)
Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from gdown) (2.23.0)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from gdown) (1.15.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from gdown) (4.62.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->gdown) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->gdown) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->gdown) (2021.5.30)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->gdown) (1.24.3)
###Markdown
Download the wiki data: a JSON file of 50,000 randomly sampled Wikipedia articles, in the following format:
###Code
# untokenized wiki data: wiki_2021_10_05_50000
!gdown --id 1rQnbaOiqoN40AzHVq_IrRW4ki-rFPRxZ
###Output
Downloading...
From: https://drive.google.com/uc?id=1rQnbaOiqoN40AzHVq_IrRW4ki-rFPRxZ
To: /content/wiki_2021_10_05_50000.json
100% 62.9M/62.9M [00:00<00:00, 87.8MB/s]
###Markdown
Tokenize the data. You can use [Jieba](https://blog.pulipuli.info/2017/11/fasttag-identify-part-of-speech-in.html) for word segmentation. Example after tokenization:```[ { "id":"0", "title":"克拉西瓦亞梅恰河", "token":[ [('克拉西瓦亞梅恰河', 'n'), ('俄羅斯', 'n'), ('河流', 'n')], [('位於', 'v'), ('圖拉州', 'n'), ('利佩茨克州', 'n')] ] }]```
###Code
"""
Tokenize the wiki data you downloaded (an illustrative Jieba sketch follows after this cell).
"""
###Output
_____no_output_____
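The empty cell above is where the tokenisation belongs. A minimal illustrative sketch using Jieba's part-of-speech tokeniser is given below; the input field names (`id`, `title`, `text`) and the sentence split on `。` are assumptions about the downloaded file, not taken from the notebook.

```python
import json
import jieba.posseg as pseg

with open('wiki_2021_10_05_50000.json', encoding='utf-8') as f:
    articles = json.load(f)

tokenized = []
for article in articles:
    # Split each article into rough sentences and POS-tag every token.
    sentences = [s for s in article['text'].split('。') if s]  # 'text' is an assumed field name
    token_lists = [[(w.word, w.flag) for w in pseg.cut(s)] for s in sentences]
    tokenized.append({'id': article['id'], 'title': article['title'], 'token': token_lists})

with open('wiki_tokenize.json', 'w', encoding='utf-8') as f:
    json.dump(tokenized, f, ensure_ascii=False)
```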
###Markdown
KCM-Keyword Correlation Models from Open Corpus
###Code
import json
from tqdm import tqdm
#data have been tokenized
with open('wiki_tokenize.json') as file:
data = json.load(file)
print(len(data))
flag_list = ['n','ng','nr','nrfg','nrt','ns','nt','nz'] #Part of Speech list
kcm_res = {}
#Compute KCM model
###Output
50000
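The `#Compute KCM model` placeholder above is left as an exercise. One plain reading of KCM, sketched here under the assumption that correlation is approximated by counting how often two noun-like tokens (POS tags in `flag_list`) appear in the same tokenised sentence, is a symmetric co-occurrence count over `data`:

```python
from collections import defaultdict

kcm_res = defaultdict(lambda: defaultdict(int))

for article in tqdm(data):
    for sentence in article['token']:
        # Keep only the noun-like tokens of this sentence.
        nouns = [word for word, flag in sentence if flag in flag_list]
        for i, w1 in enumerate(nouns):
            for w2 in nouns[i + 1:]:
                if w1 == w2:
                    continue
                # Count the pair in both directions so lookups work from either keyword.
                kcm_res[w1][w2] += 1
                kcm_res[w2][w1] += 1
```

With this structure, `kcm_res['臺灣']` is a dict of co-occurring words and their counts, which is what the query function below sorts.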
###Markdown
Assignment: you will be given 20 target words and must return the top 10 related words for each, then submit the KCM results to the website. Grading: 20 questions worth 5 points each; an answer earns the points if its top 10 overlaps the reference answer in at least 5 words. Submission: at https://github.com/NCHU-NLP-Lab/110_Advanced-Data-Mining-and-Big-Data-Analysis fill in your group and answers and press submit. The answer format is a single list containing the 20 answers (comma-separated):```Example question_list=['臺灣','美國']Answer=[["日本", "香港", "中國大陸", "分佈", "中國", "中華民國", "日治", "臺北市", "名稱", "臺北"],["非建制地區", "城市", "人口普查", "加拿大", "英國", "地區", "加利福尼亞州", "國家", "伊利諾伊州", "公司"]]```
###Code
def Top_10_Related_Words(QueryTerm):
"""
Return 10 most related keywords from kcm result.
Parameters
----------
    QueryTerm : str
        Only keywords present in the KCM dictionary return a result; otherwise an empty list is returned.
Returns
-------
    res : list of dict
        Each dict maps a related keyword to its co-occurrence count, sorted by descending count.
"""
    # Minimal sketch (assumption): kcm_res maps each keyword to a dict of co-occurring
    # words and their counts; unknown keywords return an empty list.
    related = kcm_res.get(QueryTerm, {})
    res = [{word: count} for word, count in sorted(related.items(), key=lambda kv: kv[1], reverse=True)[:10]]
    return res
test_word_list = ['臺灣', '美國', '大學', '肺炎','天安門']
for word in test_word_list:
Top_10_Related_Words(word)
###Output
_____no_output_____ |
Conditional_Statements.ipynb | ###Markdown
Python Conditional Statements Explained in Minutes - 18 ASA Learning
###Code
# Comparison Operators
# Equals: a == b
# Not Equals: a != b
# Less than: a < b
# Less than or equal to: a <= b
# Greater than: a > b
# Greater than or equal to: a >= b
print('asa' == 'aaa')
#Logical Operators
a = 1
b = 2
c = 3
print(a<b or a>c)
print(not True)
# if-else statement
if a>b:
print('A is greater than B')
else:
print('A is less than B')
# Nested if
if a>b:
print('A is greater than B')
elif a<c:
print('A is less than C')
else:
print('None')
# One-liners
a = 5
print('A is less than 7') if a<7 else print('A is greater than 7')
###Output
A is less than 7
|
src/python_cheatsheet/jupyter_notebooks/19_Context_Manager.ipynb | ###Markdown
Python Cheat SheetBasic cheatsheet for Python mostly based on the book written by Al Sweigart, [Automate the Boring Stuff with Python](https://automatetheboringstuff.com/) under the [Creative Commons license](https://creativecommons.org/licenses/by-nc-sa/3.0/) and many other sources. Read It- [Website](https://www.pythoncheatsheet.org)- [Github](https://github.com/wilfredinni/python-cheatsheet)- [PDF](https://github.com/wilfredinni/Python-cheatsheet/raw/master/python_cheat_sheet.pdf)- [Jupyter Notebook](https://mybinder.org/v2/gh/wilfredinni/python-cheatsheet/master?filepath=jupyter_notebooks) Context ManagerWhile Python's context managers are widely used, few understand the purpose behind their use. These statements, commonly used with reading and writing files, assist the application in conserving system memory and improve resource management by ensuring specific resources are only in use for certain processes. with statementA context manager is an object that is notified when a context (a block of code) starts and ends. You commonly use one with the with statement. It takes care of the notifying.For example, file objects are context managers. When a context ends, the file object is closed automatically:
###Code
with open(filename) as f:
file_contents = f.read()
# the open_file object has automatically been closed.
###Output
_____no_output_____
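For comparison with the file example above, a minimal class-based context manager (an illustrative sketch, not part of the original cheat sheet) makes the protocol explicit: the `with` statement calls `__enter__` on entry and `__exit__` on exit.

```python
class ManagedFile:
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        # Acquire the resource and hand it to the `as` target.
        self.file = open(self.name, 'w')
        return self.file

    def __exit__(self, exc_type, exc_value, traceback):
        # Always release the resource, even if the block raised an exception.
        self.file.close()
        # Returning False (or None) lets any exception propagate normally.
        return False

with ManagedFile('hello.txt') as f:
    f.write('hello, world!')
```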
###Markdown
Anything that ends execution of the block causes the context manager's exit method to be called. This includes exceptions, and can be useful when an error causes you to prematurely exit from an open file or connection. Exiting a script without properly closing files/connections is a bad idea, that may cause data loss or other problems. By using a context manager you can ensure that precautions are always taken to prevent damage or loss in this way. Writing your own contextmanager using generator syntaxIt is also possible to write a context manager using generator syntax thanks to the `contextlib.contextmanager` decorator:
###Code
import contextlib
@contextlib.contextmanager
def context_manager(num):
print('Enter')
yield num + 1
print('Exit')
with context_manager(2) as cm:
# the following instructions are run when the 'yield' point of the context
# manager is reached.
# 'cm' will have the value that was yielded
print('Right in the middle with cm = {}'.format(cm))
###Output
_____no_output_____ |
Sentiment_analysis_NLP.ipynb | ###Markdown
###Code
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 /root/.kaggle/kaggle.json
!kaggle competitions download -c sentiment-analysis-on-movie-reviews
!unzip train.tsv.zip -d ./dataset
!wget http://nlp.stanford.edu/data/glove.6B.zip
!unzip glove.6B.zip -d ./glove/
#paths
data_file = './dataset/train.tsv'
dataset = './dataset'
import pandas as pd
reviews = pd.read_csv(data_file, sep='\t')
reviews.head()
review_phrase = reviews['Phrase']
review_sentiment = reviews['Sentiment']
import numpy as np
Y_dim = (len(review_sentiment), 5)
Y = np.zeros(Y_dim)
for i,sentiment in enumerate(review_sentiment):
    Y[i][sentiment] = 1  # Sentiment labels in this Kaggle competition are 0-4, so no offset is needed
#clean the Phrases
import re
import string
rem = string.punctuation
pattern = r"[{}]".format(rem)
#remove punctuations
review_phrase_cleaned = review_phrase.str.replace(pattern, '')
review_phrase_cleaned  # the dataset is quite noisy
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
max_features = 10000
tokenizer = Tokenizer(num_words = max_features, split=' ')
tokenizer.fit_on_texts(review_phrase_cleaned.values)
X = tokenizer.texts_to_sequences(review_phrase_cleaned.values)
X = pad_sequences(X)
X.shape
#split train and validation data
from sklearn.model_selection import train_test_split
x_train, x_val, y_train, y_val = train_test_split(X,Y, test_size = 0.2, random_state = 33)
print(x_train.shape,y_train.shape)
print(x_val.shape,y_val.shape)
def get_coefs(word, *arr):
return word, np.asarray(arr, dtype='float32')
def get_embed_mat(EMBEDDING_FILE, max_features,embed_dim):
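    # Build the embedding matrix for the tokenizer vocabulary: rows for words found in the
    # GloVe file take the pre-trained vectors, all other rows keep the random-normal init.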
# word vectors
embeddings_index = dict(get_coefs(*o.rstrip().rsplit(' ')) for o in open(EMBEDDING_FILE, encoding='utf8'))
print('Found %s word vectors.' % len(embeddings_index))
# embedding matrix
word_index = tokenizer.word_index
num_words = min(max_features, len(word_index) + 1)
all_embs = np.stack(embeddings_index.values()) #for random init
embedding_matrix = np.random.normal(all_embs.mean(), all_embs.std(),
(num_words, embed_dim))
for word, i in word_index.items():
if i >= max_features:
continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
max_features = embedding_matrix.shape[0]
return embedding_matrix
# embedding matrix
EMBEDDING_FILE = './glove/glove.6B.100d.txt'
embed_dim = 100 #word vector dim
embedding_matrix = get_embed_mat(EMBEDDING_FILE,max_features,embed_dim)
print(embedding_matrix.shape)
from keras.models import Sequential
from keras.layers import Dense, Embedding, GRU, Flatten, Conv1D, MaxPooling1D, Dropout, Bidirectional
from keras.optimizers import Adam
filters = 64
pool_size = 2
kernel_size = 3
gru_out = 128
model = Sequential()
model.add(Embedding(max_features,embed_dim,input_length=X.shape[1], weights=[embedding_matrix], trainable=True))
model.add(Conv1D(filters,kernel_size=kernel_size,padding='same',activation='relu'))
model.add(MaxPooling1D(pool_size=pool_size))
model.add(Dropout(0.25))
model.add(Bidirectional(GRU(gru_out,return_sequences=True)))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(128,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(5,activation='softmax'))
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
from keras.callbacks import ModelCheckpoint
model_name = 'Sentiment_analysis_nlp.h5'
checkpointer = ModelCheckpoint(filepath=model_name,monitor='val_acc', verbose=0, save_best_only=True)
history = model.fit(x_train, y_train,
epochs=5,
batch_size = 128,
callbacks=[checkpointer],
validation_data=(x_val,y_val))
import matplotlib.pyplot as plt
plt.figure(figsize=(15,5))
plt.subplot(1, 2, 1)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.legend(['train', 'test'], loc='upper left')
plt.subplot(1, 2, 2)
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.xlabel('epochs')
plt.ylabel('acc')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
from keras.models import load_model
#retrieving model from the checkpoint
best = load_model(model_name)
_, acc = best.evaluate(x_val, y_val)
print("validation accuracy: ", acc)
###Output
_____no_output_____ |
maps_hash/.ipynb_checkpoints/hash_map-checkpoint.ipynb | ###Markdown
(In addition to having fun) We write programs to solve real world problems. Data structures help us in representing and efficiently manipulating the data associated with these problems. Let us see if we can use any of the data structures that we already know to solve the following problem Problem StatementIn a class of students, store heights for each student. The problem in itself is very simple. We have the data of heights of each student. We want to store it so that next time someone asks for height of a student, we can easily return the value. But how can we store these heights?Obviously we can use a database and store these values. But, let's say we don't want to do that for now. We want to use a data structure to store these values as part of our program. For the sake of simplicity, our problem is limited to storing heights of students. But you can certainly imagine scenarios where you have to store such `key-value` pairs and later on when someone gives you a `key`, you can efficiently return the corrresponding `value`.The class diagram for HashMaps would look something like this.
###Code
class HashMap:
def __init__(self):
self.num_entries = 0
def put(self, key, value):
pass
def get(self, key):
pass
def size(self):
return self.num_entries
###Output
_____no_output_____
###Markdown
ArraysCan we use arrays to store `key-value` pairs?We can certainly use one array to store the names of the students and use another array to store their corresponding heights at the corresponding indices.What will be the time complexity in this scenario?To obtain height of a student, say `Potter, Harry`, we will have to traverse the entire array and check if the value at a particular index matches `Potter, Harry`. Once we find the index in which this value is stored, we can use this index to obtain the height from the second array. Thus, because of this traveral, complexity for `get()` operation becomes $O(n)$. Even if we maintain a sorted array, the operation will not take less than $O(log(n))$ complexity.What happens if a student leaves a class? We will have to delete the entry corresponding to the student from both the arrays. This would require another traversal to find the index. And then we will have to shift our entire array to fill this gap. Again, the time complexity of operation becomes $O(n)$ Linked ListIs it possible to use linked lists for this problem?We can certainly modify our `LinkedListNode` to have two different value attributes - one for name of the student and the other for height. But we again face the same problem. In the worst case, we will have to traverse the entire linked list to find the height of a particular student. Once again, the cost of operation becomes $O(n)$. Stacks and QueuesStacks and Queues are LIFO and FIFO data structures respectively. Can you think why they too do not make a good choice for storing `key-value` pairs? ------------------------------------------------------------------------ Can we do better? Can you think of any data structure that allows for fast `get()` operation? Let us circle back to arrays. When we obtain the element present at a particular index using something like `arr[3]`, the operation takes constant i.e. `O(1)` time. *For review - Does this constant time operation require further explanation?*If we think about `array indices as keys` and the `element present at those indices as values`, we can fairly conclude that at least for non-zero integer `keys`, we can use arrays. However, like our current problem statement, we may not always have integer keys.`If only we had a function that could give us arrays indices for any key value that we gave it!` Hash FunctionsSimply put, hash functions are these really incredible `magic` functions which can map data of any size to a fixed size data. This fixed sized data is often called hash code or hash digest. Let's create our own hash function to store strings
###Code
def hash_function(string):
pass
###Output
_____no_output_____
###Markdown
For a given string, say `abcd`, a very simple hash function can be the sum of the corresponding ASCII values. *Note: you can use `ord(character)` to determine the ASCII value of a particular character, e.g. `ord('a')` will return 97.*
###Code
def hash_function(string):
hash_code = 0
for character in string:
hash_code += ord(character)
return hash_code
hash_code_1 = hash_function("abcd")
print(hash_code_1)
###Output
394
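A quick check, not in the original notebook, shows the weakness discussed next: any permutation of the same letters produces the same sum, so `"bcda"` also hashes to 394.

```python
print(hash_function("bcda"))  # 394, same as "abcd"
print(hash_function("dcba"))  # 394 again
```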
###Markdown
Looks like our hash function is working fine. But is this really a good hash function? For starters, it will return the same value for `abcd` and `bcda`. Do we want that? We can create 24 different permutations for the string `abcd` and each will have the same value. We cannot put 24 values to one index. Obviously, this makes it clear that our hash function must return unique values for unique objects. When two different inputs produce the same output, then we have something called a `collision`. An ideal hash function must be immune from producing collisions. Let's think of something else. Can a product help? We will again run into the same problem. The honest answer is that we have different hash functions for different types of keys. The hash function for integers will be different from the hash function for strings, which again, will be different for some object of a class that you created. However, let's try to continue with our problem and try to come up with a hash function for strings. Hash Function for Strings For a string, say `abcde`, a very effective function is treating it as a number written in a prime base `p`. Let's elaborate this statement. For a number, say `578`, we can represent this number in the base 10 number system as $$5*10^2 + 7*10^1 + 8*10^0$$ Similarly, we can treat `abcde` as $$a * p^4 + b * p^3 + c * p^2 + d * p^1 + e * p^0$$ Here, we replace each character with its corresponding ASCII value. A lot of research goes into figuring out good hash functions and this hash function is one of the most popular functions used for strings. We use prime numbers because they provide a good distribution. The most common prime numbers used for this function are 31 and 37. Thus, using this algorithm, we can get a corresponding integer value for each string key and store it in the array. Note that the array used for this purpose is called a `bucket array`. It is not a special array. We simply choose to give a special name to arrays for this purpose. Each entry in this `bucket array` is called a `bucket` and the index in which we store a bucket is called the `bucket index`. Let's add these details to our class.
###Code
class HashMap:
def __init__(self, initial_size=10):
self.bucket_array = [None for _ in range(initial_size)]
self.p = 37
self.num_entries = 0
def put(self, key, value):
pass
def get(self, key):
pass
def get_bucket_index(self, key):
return self.get_hash_code(key)
def get_hash_code(self, key):
key = str(key)
num_buckets = len(self.bucket_array)
current_coefficient = 1
hash_code = 0
for character in key:
hash_code += ord(character) * current_coefficient
current_coefficient *= self.p
current_coefficient = current_coefficient
return hash_code
hash_map = HashMap()
bucket_index = hash_map.get_bucket_index("abcd")
print(bucket_index)
hash_map = HashMap()
bucket_index = hash_map.get_bucket_index("bcda")
print(bucket_index)
###Output
5054002
###Markdown
Compression FunctionWe now have a good hash function which will return unique values for unique objects. But let's look at the values. These are huge. We cannot create such large arrays. So we use another function, a `compression function`, to compress these values so as to create arrays of reasonable sizes. A very simple, good, and effective compression function can be `hash_code mod len(array)`. The modulo operator `%` returns the remainder of one number when divided by another. So, if we have an array of size 10, we can be sure that the modulo of any number with 10 will be less than 10, allowing it to fit into our bucket array. Because of how the modulo operator works, instead of creating a new function, we can write the logic for the compression function in our `get_hash_code()` function itself. https://www.khanacademy.org/computing/computer-science/cryptography/modarithmetic/a/modular-multiplication
###Code
class HashMap:
def __init__(self, initial_size = 10):
self.bucket_array = [None for _ in range(initial_size)]
self.p = 31
self.num_entries = 0
def put(self, key, value):
pass
def get(self, key):
pass
def get_bucket_index(self, key):
        bucket_index = self.get_hash_code(key)
return bucket_index
def get_hash_code(self, key):
key = str(key)
num_buckets = len(self.bucket_array)
current_coefficient = 1
hash_code = 0
for character in key:
hash_code += ord(character) * current_coefficient
hash_code = hash_code % num_buckets # compress hash_code
current_coefficient *= self.p
current_coefficient = current_coefficient % num_buckets # compress coefficient
return hash_code % num_buckets # one last compression before returning
def size(self):
return self.num_entries
###Output
_____no_output_____
###Markdown
Collision HandlingAs discussed earlier, when two different inputs produce the same output, then we have a collision. Our implementation of the `get_hash_code()` function is satisfactory. However, because we are using a compression function, we are prone to collisions. Consider the following scenario. We have a bucket array of length 10 and we get two different hash codes for two different inputs, say 355 and 1095. Even though the hash codes are different in this case, the bucket index will be the same because of the way we have implemented our compression function. Such scenarios where multiple entries want to go to the same bucket are very common. So, we introduce some logic to handle collisions. There are two popular ways in which we handle collisions: 1. Closed Addressing or Separate chaining 2. Open Addressing. 1. Closed addressing is a clever technique where we use the same bucket to store multiple objects. The bucket in this case will store a linked list of key-value pairs. Every bucket has its own separate chain of linked list nodes. 2. In open addressing, we do the following: * If, after getting the bucket index, the bucket is empty, we store the object in that particular bucket * If the bucket is not empty, we find an alternate bucket index by using another function which modifies the current hash code to give a new code. Separate chaining is a simple and effective technique to handle collisions and that is what we discuss here. Implement the `put` and `get` function using the idea of separate chaining.
###Code
class LinkedListNode:
def __init__(self, key, value):
self.key = key
self.value = value
self.next = None
class HashMap:
def __init__(self, initial_size = 10):
self.bucket_array = [None for _ in range(initial_size)]
self.p = 31
self.num_entries = 0
def put(self, key, value):
bucket_index = self.get_bucket_index(key)
new_node = LinkedListNode(key, value)
head = self.bucket_array[bucket_index]
# check if key is already present in the map, and update it's value
while head is not None:
if head.key == key:
head.value = value
return
head = head.next
# key not found in the chain --> create a new entry and place it at the head of the chain
head = self.bucket_array[bucket_index]
new_node.next = head
self.bucket_array[bucket_index] = new_node
self.num_entries += 1
def get(self, key):
bucket_index = self.get_hash_code(key)
head = self.bucket_array[bucket_index]
while head is not None:
if head.key == key:
return head.value
head = head.next
return None
def get_bucket_index(self, key):
bucket_index = self.get_hash_code(key)
return bucket_index
def get_hash_code(self, key):
key = str(key)
num_buckets = len(self.bucket_array)
current_coefficient = 1
hash_code = 0
for character in key:
hash_code += ord(character) * current_coefficient
hash_code = hash_code % num_buckets # compress hash_code
current_coefficient *= self.p
current_coefficient = current_coefficient % num_buckets # compress coefficient
return hash_code % num_buckets # one last compression before returning
def size(self):
return self.num_entries
hash_map = HashMap()
hash_map.put("one", 1)
hash_map.put("two", 2)
hash_map.put("three", 3)
hash_map.put("neo", 11)
print("size: {}".format(hash_map.size()))
print("one: {}".format(hash_map.get("one")))
print("neo: {}".format(hash_map.get("neo")))
print("three: {}".format(hash_map.get("three")))
print("size: {}".format(hash_map.size()))
###Output
size: 4
one: 1
neo: 11
three: 3
size: 4
###Markdown
Time Complexity and Rehashing We used arrays to implement our hashmaps because arrays offer $O(1)$ time complexity for both put and get operations. *Note: in case of arrays put is simply `arr[i] = 5` and get is `height = arr[5]`* 1. Put Operation* In the put operation, we first figure out the bucket index. Calculating the hash code to figure out the bucket index takes some time.* After that, we go to the bucket index and in the worst case we traverse the linked list to find out if the key is already present or not. This also takes some time.To analyze the time complexity for any algorithm as a function of the input size `n`, we first have to determine what our input is. In this case, we are putting and gettin key value pairs. So, these entries i.e. key-value pairs are our input. Therefore, our `n` is number of such key-value pair entries.*Note: time complexity is always determined in terms of input size and not the actual amount of work that is being done independent of input size. That "independent amount of work" will be constant for every input size so we disregard that.** In case of our hash function, the computation time for hash code depends on the size of each string. Compared to number of entries (which we always consider to be very high e.g. in the order of $10^5$) the length of each string can be considered to be very small. Also, most of the strings will be around the same size when compared to this high number of entries. Hence, we can ignore the hash computation time in our analysis of time complexity.* Now, the entire time complexity essentialy depends on the linked list traversal. In the worst case, all entries would go to the same bucket index and our linked list at that index would be huge. Therefore, the time complexity in that scenario would be $O(n)$. However, hash functions are wisely chosen so that this does not happen. `On average, the distribution of entries is such that if we have n entries and b buckets, then each bucket does not have more than n/b key-value pair entries.` Therefore, because of our choice of hash functions, we can assume that the time complexity is $O(\dfrac{n}{b})$.This number which determines the `load` on our bucket array `n/b` is known as load factor. Generally, we try to keep our load factor around or less than 0.7. This essentially means that if we have a bucket array of size 10, then the number of key-value pair entries will not be more than 7.**What happens when we get more entries and the value of our load factor crosses 0.7?**In that scenario, we must increase the size of our bucket array. Also, we must recalculate the bucket index for each entry in the hashn map.*Note: the hash code for each key present in the bucket array would still be the same. However, because of the compression function, the bucket index will change.* Therefore, we need to `rehash` all the entries in our hash map. This is known as `Rehashing`. 2. Get and Delete operationCan you figure out the time complexity for get and delete operation?*Answer: the solution follows the same logic and the time complexity is O(1). Note that we do not reduce the size of bucket array in delete operation. Rehashing Code
###Code
class LinkedListNode:
def __init__(self, key, value):
self.key = key
self.value = value
self.next = None
class HashMap:
def __init__(self, initial_size = 15):
self.bucket_array = [None for _ in range(initial_size)]
self.p = 31
self.num_entries = 0
self.load_factor = 0.7
def put(self, key, value):
bucket_index = self.get_bucket_index(key)
new_node = LinkedListNode(key, value)
head = self.bucket_array[bucket_index]
# check if key is already present in the map, and update it's value
while head is not None:
if head.key == key:
head.value = value
return
head = head.next
# key not found in the chain --> create a new entry and place it at the head of the chain
head = self.bucket_array[bucket_index]
new_node.next = head
self.bucket_array[bucket_index] = new_node
self.num_entries += 1
# check for load factor
current_load_factor = self.num_entries / len(self.bucket_array)
if current_load_factor > self.load_factor:
self.num_entries = 0
self._rehash()
def get(self, key):
bucket_index = self.get_hash_code(key)
head = self.bucket_array[bucket_index]
while head is not None:
if head.key == key:
return head.value
head = head.next
return None
def get_bucket_index(self, key):
bucket_index = self.get_hash_code(key)
return bucket_index
def get_hash_code(self, key):
key = str(key)
num_buckets = len(self.bucket_array)
current_coefficient = 1
hash_code = 0
for character in key:
hash_code += ord(character) * current_coefficient
hash_code = hash_code % num_buckets # compress hash_code
current_coefficient *= self.p
current_coefficient = current_coefficient % num_buckets # compress coefficient
return hash_code % num_buckets # one last compression before returning
def size(self):
return self.num_entries
def _rehash(self):
old_num_buckets = len(self.bucket_array)
old_bucket_array = self.bucket_array
num_buckets = 2 * old_num_buckets
self.bucket_array = [None for _ in range(num_buckets)]
for head in old_bucket_array:
while head is not None:
key = head.key
value = head.value
self.put(key, value) # we can use our put() method to rehash
head = head.next
hash_map = HashMap(7)
hash_map.put("one", 1)
hash_map.put("two", 2)
hash_map.put("three", 3)
hash_map.put("neo", 11)
print("size: {}".format(hash_map.size()))
print("one: {}".format(hash_map.get("one")))
print("neo: {}".format(hash_map.get("neo")))
print("three: {}".format(hash_map.get("three")))
print("size: {}".format(hash_map.size()))
###Output
size: 4
one: 1
neo: 11
three: 3
size: 4
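As a quick worked check of the load-factor logic (numbers computed here, not printed by the notebook): with `initial_size=7`, the four `put` calls above leave the load factor at 4/7 ≈ 0.57, so no rehash happens; a fifth distinct `put` would raise it to 5/7 ≈ 0.71 > 0.7, doubling the bucket array to 14 and re-inserting every entry via `_rehash()`.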
###Markdown
Delete OperationCan you implement delete operation using all we have learnt so far?
###Code
class LinkedListNode:
def __init__(self, key, value):
self.key = key
self.value = value
self.next = None
class HashMap:
def __init__(self, initial_size = 15):
self.bucket_array = [None for _ in range(initial_size)]
self.p = 31
self.num_entries = 0
self.load_factor = 0.7
def put(self, key, value):
bucket_index = self.get_bucket_index(key)
new_node = LinkedListNode(key, value)
head = self.bucket_array[bucket_index]
# check if key is already present in the map, and update it's value
while head is not None:
if head.key == key:
head.value = value
return
head = head.next
# key not found in the chain --> create a new entry and place it at the head of the chain
head = self.bucket_array[bucket_index]
new_node.next = head
self.bucket_array[bucket_index] = new_node
self.num_entries += 1
# check for load factor
current_load_factor = self.num_entries / len(self.bucket_array)
if current_load_factor > self.load_factor:
self.num_entries = 0
self._rehash()
def get(self, key):
bucket_index = self.get_hash_code(key)
head = self.bucket_array[bucket_index]
while head is not None:
if head.key == key:
return head.value
head = head.next
return None
def get_bucket_index(self, key):
bucket_index = self.get_hash_code(key)
return bucket_index
def get_hash_code(self, key):
key = str(key)
num_buckets = len(self.bucket_array)
current_coefficient = 1
hash_code = 0
for character in key:
hash_code += ord(character) * current_coefficient
hash_code = hash_code % num_buckets # compress hash_code
current_coefficient *= self.p
current_coefficient = current_coefficient % num_buckets # compress coefficient
return hash_code % num_buckets # one last compression before returning
def size(self):
return self.num_entries
def _rehash(self):
old_num_buckets = len(self.bucket_array)
old_bucket_array = self.bucket_array
num_buckets = 2 * old_num_buckets
self.bucket_array = [None for _ in range(num_buckets)]
for head in old_bucket_array:
while head is not None:
key = head.key
value = head.value
self.put(key, value) # we can use our put() method to rehash
head = head.next
def delete(self, key):
bucket_index = self.get_bucket_index(key)
head = self.bucket_array[bucket_index]
previous = None
while head is not None:
if head.key == key:
if previous is None:
self.bucket_array[bucket_index] = head.next
else:
previous.next = head.next
self.num_entries -= 1
return
else:
previous = head
head = head.next
hash_map = HashMap(7)
hash_map.put("one", 1)
hash_map.put("two", 2)
hash_map.put("three", 3)
hash_map.put("neo", 11)
print("size: {}".format(hash_map.size()))
print("one: {}".format(hash_map.get("one")))
print("neo: {}".format(hash_map.get("neo")))
print("three: {}".format(hash_map.get("three")))
print("size: {}".format(hash_map.size()))
hash_map.delete("one")
print(hash_map.get("one"))
print(hash_map.size())
###Output
size: 4
one: 1
neo: 11
three: 3
size: 4
None
3
|
.ipynb_checkpoints/Pre-processing-checkpoint.ipynb | ###Markdown
Pre-processing
###Code
#https://medium.com/@bedigunjit/simple-guide-to-text-classification-nlp-using-svm-and-naive-bayes-with-python-421db3a72d34
import pandas as pd
import numpy as np
from nltk.tokenize import word_tokenize
from nltk import pos_tag
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.preprocessing import LabelEncoder
from collections import defaultdict
from nltk.corpus import wordnet as wn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import model_selection, naive_bayes, svm
from sklearn.metrics import accuracy_score
from collections import Counter
#import dill
if __name__ == "__main__":
# Reproduce the same result every time if the script is kept consistent otherwise each run will produce different results
np.random.seed(500)
#[1] Read the data
Corpus = pd.read_json(r"C:\Users\Panos\Desktop\Dissert\Code\Sample_Video_Games_5.json", lines=True, encoding='latin-1')
Corpus = Corpus[['reviewText','overall']]
# Print some info
Corpus.info()
print(Corpus.overall.value_counts())
#[1.5] Reduce number of classes
for index,entry in enumerate(Corpus['overall']):
if entry == 1.0 or entry == 2.0:
Corpus.loc[index,'overall_final'] = -1
elif entry == 3.0:
Corpus.loc[index,'overall_final'] = 0
elif entry == 4.0 or entry == 5.0:
Corpus.loc[index,'overall_final'] = 1
#[2] Preprocessing
# Step - a : Remove blank rows if any.
Corpus['reviewText'].dropna(inplace=True)
# Step - b : Change all the text to lower case. This is required as python interprets 'dog' and 'DOG' differently
Corpus['reviewText'] = [entry.lower() for entry in Corpus['reviewText']]
# Step - c : Tokenization : In this each entry in the corpus will be broken into set of words
Corpus['reviewText'] = [word_tokenize(entry) for entry in Corpus['reviewText']]
# Step - d : Remove stop words and non-alphabetic tokens, and perform word stemming/lemmatization.
# WordNetLemmatizer requires Pos tags to understand if the word is noun or verb or adjective etc. By default it is set to Noun
tag_map = defaultdict(lambda : wn.NOUN)
tag_map['J'] = wn.ADJ
tag_map['V'] = wn.VERB
tag_map['R'] = wn.ADV
for index,entry in enumerate(Corpus['reviewText']):
# Declaring Empty List to store the words that follow the rules for this step
Final_words = []
# Initializing WordNetLemmatizer()
word_Lemmatized = WordNetLemmatizer()
# pos_tag function below will provide the 'tag' i.e if the word is Noun(N) or Verb(V) or something else.
for word, tag in pos_tag(entry):
# Below condition is to check for Stop words and consider only alphabets
if word not in stopwords.words('english') and word.isalpha():
word_Final = word_Lemmatized.lemmatize(word,tag_map[tag[0]])
Final_words.append(word_Final)
# The final processed set of words for each iteration will be stored in 'text_final'
Corpus.loc[index,'text_final'] = str(Final_words)
#Print the first 3 rows
print(Corpus.iloc[:3])
print("hey yo")
#dill.dump_session('notebook_env.db')
#[3] Prepare Train and Test Data sets
Train_X, Test_X, Train_Y, Test_Y = model_selection.train_test_split(Corpus['text_final'],Corpus['overall_final'],test_size=0.3)
print(Counter(Train_Y).values()) # counts the elements' frequency
#[4] Encoding
Encoder = LabelEncoder()
Train_Y = Encoder.fit_transform(Train_Y)
Test_Y = Encoder.fit_transform(Test_Y)
#[5] Word Vectorization
Tfidf_vect = TfidfVectorizer(max_features=10000)
Tfidf_vect.fit(Corpus['text_final']) # learn the vocabulary and idf weights from the full corpus
Train_X_Tfidf = Tfidf_vect.transform(Train_X)
Test_X_Tfidf = Tfidf_vect.transform(Test_X)
#[6] Resampling the training data with NearMiss (an under-sampling technique, not SMOTE over-sampling)
from imblearn.under_sampling import NearMiss, RandomUnderSampler
nm = NearMiss(ratio='not minority',random_state=777, version=1, n_neighbors=1)
X_nm, y_nm = nm.fit_sample(Train_X_Tfidf, Train_Y)
print(Counter(y_nm).values()) # counts the elements' frequency
# the vocabulary that the vectorizer has learned from the corpus
print(Tfidf_vect.vocabulary_)
# the vectorized data
print(Train_X_Tfidf)
#[7] Use the ML Algorithms to Predict the outcome
# fit the training dataset on the NB classifier
Naive = naive_bayes.MultinomialNB()
Naive.fit(X_nm,y_nm)
# predict the labels on validation dataset
predictions_NB = Naive.predict(Test_X_Tfidf)
# Use accuracy_score function to get the accuracy
print("Naive Bayes Accuracy Score -> ",accuracy_score(predictions_NB, Test_Y)*100)
# Making the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(Test_Y, predictions_NB)
print("-----------------cm------------------")
print(cm)
print("-------------------------------------")
#[8] Support Vector Machine
# Classifier - Algorithm - SVM
# fit the training dataset on the classifier
SVM = svm.SVC(C=1.0, kernel='linear', degree=3, gamma='auto')
SVM.fit(X_nm,y_nm)
# predict the labels on validation dataset
predictions_SVM = SVM.predict(Test_X_Tfidf)
# Use accuracy_score function to get the accuracy
print("SVM Accuracy Score -> ",accuracy_score(predictions_SVM, Test_Y)*100)
#A try to parallelize the for loop
# #https://medium.com/@bedigunjit/simple-guide-to-text-classification-nlp-using-svm-and-naive-bayes-with-python-421db3a72d34
# import pandas as pd
# import numpy as np
# from nltk.tokenize import word_tokenize
# from nltk import pos_tag
# from nltk.corpus import stopwords
# from nltk.stem import WordNetLemmatizer
# from sklearn.preprocessing import LabelEncoder
# from collections import defaultdict
# from nltk.corpus import wordnet as wn
# from sklearn.feature_extraction.text import TfidfVectorizer
# from sklearn import model_selection, naive_bayes, svm
# from sklearn.metrics import accuracy_score
# from collections import Counter
# import multiprocessing
# from joblib import Parallel, delayed
# if __name__ == "__main__":
# # Reproduce the same result every time if the script is kept consistent otherwise each run will produce different results
# np.random.seed(500)
# #[1] Read the data
# Corpus = pd.read_json(r"C:\Users\Panos\Desktop\Dissert\Code\Sample_Video_Games_5.json", lines=True, encoding='latin-1')
# Corpus = Corpus[['reviewText','overall']]
# # Print some info
# Corpus.info()
# print(Corpus.overall.value_counts())
# #https://medium.com/@mjschillawski/quick-and-easy-parallelization-in-python-32cb9027e490
# num_cores = multiprocessing.cpu_count()
# processed_list = Parallel(n_jobs=num_cores)(delayed(my_function(i,parameters)
# for i in enumerate(Corpus['overall'])
# #[2] Preprocessing
# # Step - a : Remove blank rows if any.
# Corpus['reviewText'].dropna(inplace=True)
# # Step - b : Change all the text to lower case. This is required as python interprets 'dog' and 'DOG' differently
# Corpus['reviewText'] = [entry.lower() for entry in Corpus['reviewText']]
# # Step - c : Tokenization : In this each entry in the corpus will be broken into set of words
# Corpus['reviewText'] = [word_tokenize(entry) for entry in Corpus['reviewText']]
# # Step - d : Remove Stop words, Non-Numeric and perfom Word Stemming/Lemmenting.
# # WordNetLemmatizer requires Pos tags to understand if the word is noun or verb or adjective etc. By default it is set to Noun
# tag_map = defaultdict(lambda : wn.NOUN)
# tag_map['J'] = wn.ADJ
# tag_map['V'] = wn.VERB
# tag_map['R'] = wn.ADV
# for index,entry in enumerate(Corpus['reviewText']):
# # Declaring Empty List to store the words that follow the rules for this step
# Final_words = []
# # Initializing WordNetLemmatizer()
# word_Lemmatized = WordNetLemmatizer()
# # pos_tag function below will provide the 'tag' i.e if the word is Noun(N) or Verb(V) or something else.
# for word, tag in pos_tag(entry):
# # Below condition is to check for Stop words and consider only alphabets
# if word not in stopwords.words('english') and word.isalpha():
# word_Final = word_Lemmatized.lemmatize(word,tag_map[tag[0]])
# Final_words.append(word_Final)
# # The final processed set of words for each iteration will be stored in 'text_final'
# Corpus.loc[index,'text_final'] = str(Final_words)
# #Print the first 3 rows
# print(Corpus.iloc[:3])
# print("hey yo")
# def my_function():
# #[1.5] Reduce number of classes
# for index,entry in enumerate(Corpus['overall']):
# if entry == 1.0 or entry == 2.0:
# Corpus.loc[index,'overall_final'] = -1
# elif entry == 3.0:
# Corpus.loc[index,'overall_final'] = 0
# elif entry == 4.0 or entry == 5.0:
# Corpus.loc[index,'overall_final'] = 1
###Output
_____no_output_____ |
01 Prepare point spread data.ipynb | ###Markdown
Load packages
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Load data
###Code
spread_df=[]
years_list = [18,17,16,15,14,13,12,11,10,'09','08','07','06','05','04','03']
year_int = 2019
for year in years_list:
#Note: 18 refers to 2019 and so on
temp_spread_df = pd.read_csv("Data/Point spreads/ncaabb" + str(year) + ".csv")
temp_spread_df['Season'] = year_int
year_last = year
year_int = year_int-1
if year==18:
spread_df = temp_spread_df
else:
spread_df = spread_df.append(temp_spread_df)
spread_df['home'] = spread_df['home'].str.lower()
spread_df['road'] = spread_df['road'].str.lower()
spread_df.head()
###Output
_____no_output_____
###Markdown
Match Team IDs to data
###Code
teams_df = pd.read_csv('Data/Kaggle NCAA/TeamSpellings.csv', sep='\,', engine='python')
teams_df.head()
team_list = ['home','road']
for team in team_list:
spread_df = pd.merge(spread_df, teams_df, left_on=[team], right_on = ['TeamNameSpelling'], how='left')
if team=='home':
spread_df.rename(columns={'TeamID': 'HTeamID'}, inplace=True)
if team=='road':
spread_df.rename(columns={'TeamID': 'RTeamID'}, inplace=True)
spread_df = spread_df.drop(['TeamNameSpelling'], axis=1)
###Output
_____no_output_____
###Markdown
Note: convert the home/road team and score columns into winning/losing team and score columns
###Code
spread_df.loc[spread_df['hscore']>spread_df['rscore'], 'WTeamID'] = spread_df['HTeamID']
spread_df.loc[spread_df['hscore']<spread_df['rscore'], 'LTeamID'] = spread_df['HTeamID']
spread_df.loc[spread_df['hscore']>spread_df['rscore'], 'WScore'] = spread_df['hscore']
spread_df.loc[spread_df['hscore']<spread_df['rscore'], 'LScore'] = spread_df['hscore']
spread_df.loc[spread_df['rscore']>spread_df['hscore'], 'WTeamID'] = spread_df['RTeamID']
spread_df.loc[spread_df['rscore']<spread_df['hscore'], 'LTeamID'] = spread_df['RTeamID']
spread_df.loc[spread_df['rscore']>spread_df['hscore'], 'WScore'] = spread_df['rscore']
spread_df.loc[spread_df['rscore']<spread_df['hscore'], 'LScore'] = spread_df['rscore']
spread_df = spread_df.replace([np.inf, -np.inf], np.nan).dropna(subset=['WTeamID','LTeamID','WScore','LScore','line'])
spread_df = spread_df[spread_df['WScore'] != "."]
spread_df = spread_df[spread_df['LScore'] != "."]
spread_df = spread_df[spread_df['line'] != "."]
spread_df['line'].value_counts(dropna=False)
#drop spreads with bad coverage
spread_df = spread_df.drop(['line7ot','lineargh','lineash','lineashby','linedd','linedunk','lineer','linegreen','linemarkov','linemass','linepib','linepig','linepir','linepiw','linepom','lineprophet','linerpi','lineround','linesauce','lineseven','neutral','lineteamrnks','linetalis','lineespn','linemassey','linedonc','linesaggm','std','linepugh','linefox','linedok','lineopen'], axis=1)
spread_df.tail()
spread_df = spread_df.dropna(subset=['line'])
###Output
_____no_output_____
###Markdown
Write the data to a csv
###Code
spread_df.to_csv('Data/~Created data/spreads_all.csv', index=False)
###Output
_____no_output_____ |
TITANIC-SUBMISION.ipynb | ###Markdown
'''Titanic challenge'''
###Code
#Import the required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
#Importing dataset
titanic=pd.read_csv('D:\\R-Projects\\train.csv')
print()
df=titanic.copy()
#Lets check our columns
df.columns
#Lets list our columns starting with the target column
columns= ['Survived', 'Pclass', 'Sex', 'Age', 'SibSp',
'Parch', 'Ticket', 'Fare', 'Cabin', 'Embarked']
#what's the datatype of our columns
df.dtypes
#Are there null values in our dataset? Let's see by writing a function to check through the set
def isnull():
names=[]
values=[]
for column in columns:
null_values_count=df[column].isnull().sum()
names.append(column)
values.append(null_values_count)
data={'Column':names, 'Null values':values}
data=pd.DataFrame(data=data)
return data
d = isnull()
d
titanic.describe()
###Output
_____no_output_____
###Markdown
In the above describe output, we discover that the dataset contains a total of 891 rows.
###Code
#Lets plot the distribution of each of our variables.
#We define a function first then we will be calling it
def distribution(column, bins=20, figsize=(8,8)):
try:
df[column].hist(bins=bins, figsize=figsize)
except KeyError:
print(f'The column {column} is not part of the dataframe')
sns.set(style='darkgrid')
distribution('Age')
distribution('Fare')
###Output
_____no_output_____
###Markdown
We can only view the distribution of numerical values
###Code
#We have our numerical variables as
numerical_attributes=['Age','Fare']
Categorical_attributes=['Pclass','Sex','SibSp','Parch','Embarked']
#You notice we have ingored the variables (i.e Cabin and Ticket) with string values which need encoding
df['Sex'].value_counts()
###Output
_____no_output_____
###Markdown
We discover that Column 'Sex' has 577 males and 314 females
###Code
#How many males in the first class
males=df[df['Sex']=='male']
males_first_class=males[males['Pclass']==1]
len(males_first_class)
males_survived_first_class=males_first_class[males_first_class['Survived']==1]
len(males_survived_first_class)
###Output
_____no_output_____
###Markdown
This gives the number of male passengers in first class who survived.
###Code
#Distribution of the number of siblings/spouses aboard (SibSp)
df['SibSp'].value_counts()
#We want to check unique values in Embarked column
df['Embarked'].value_counts()
###Output
_____no_output_____
###Markdown
From the above cell we can derive that 644 passengers boarded at S, 168 at C, and 77 at Q.
###Code
#can we plot the number of males and females in the ship? yes
sns.set(style='darkgrid')
sns.countplot (df['Sex'])
sns.countplot(df['Pclass'])
###Output
_____no_output_____
###Markdown
We can see that 2nd class had the fewest passengers, followed by 1st class, with the most passengers in 3rd class.
###Code
#Distribution of the number of parents/children aboard (Parch)
sns.countplot(df['Parch'])
###Output
_____no_output_____
###Markdown
We can see that the vast majority of passengers travelled without parents or children aboard (Parch = 0).
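A quick way to check survival within this group directly (a sketch using the same df; this cell is not from the original notebook):
###Code
# hypothetical check: survival counts (0 = died, 1 = survived) among passengers with Parch == 0
df[df['Parch'] == 0]['Survived'].value_counts()
###Output
_____no_output_____
###Markdown
Next, let's handle missing values.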
###Code
#Let's fill any 0 value with nan and then impute missing values (applied here to the Titanic numeric columns)
from numpy import nan
from numpy import isnan
from sklearn.impute import SimpleImputer
# mark zero fares as missing or NaN
df[['Fare']] = df[['Fare']].replace(0, nan)
# retrieve the numeric columns as a numpy array
values = df[['Age', 'Fare']].values
# define the imputer
imputer = SimpleImputer(missing_values=nan, strategy='mean')
# transform the dataset
transformed_values = imputer.fit_transform(values)
# count the number of NaN values remaining in each column
print('Missing: %d' % isnan(transformed_values).sum())
###Output
_____no_output_____
###Markdown
Now let's start our models..
###Code
#importing all requiered libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings(action='ignore')
%matplotlib inline
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import normalize
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier, RandomForestClassifier
#Now we want to list the names of the models we are going to use in our project
names=['Gaussian NB', 'Logistic Regression', 'Kneighbors Classifier', 'Neural Net',
'Decision Tree', 'Linear SVM', 'AdaBoost', 'Gradient Boosting',
'Random Forest']
#This code now lists the models themselves
models=[GaussianNB(), LogisticRegression(), KNeighborsClassifier(), MLPClassifier(),
DecisionTreeClassifier(), LinearSVC(), AdaBoostClassifier(),
GradientBoostingClassifier(), RandomForestClassifier()]
#Here is the list of all columns in the dataset
all_coulmns =['Survived', 'Pclass', 'Name', 'Sex', 'Age', 'SibSp', 'Parch', 'Ticket',
'Fare', 'Cabin', 'Embarked']
#usable columns are here
columns=['Survived', 'Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
#Some columns in our dset contains categorical values and they are here
categorical_columns=['Pclass', 'Sex', 'SibSp', 'Parch', 'Embarked']
#Lets think of encoding the two usable columns
encoded_columns=['Sex','Embarked']
#We have some columns with continous variables
continous_columns=['Age','Fare']
#Lets view the structure of our dset
df=df[columns]
df.tail(12)
'''Now we want to make a single function to perform the entire
process of preprocessing'''
'''The process will..
1. fill in the missing values
2. Scale the values
3. Encode categorical values'''
class ColumnSelector(BaseEstimator, TransformerMixin):
def __init__(self,columns):
self.columns=columns
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
wanted_columns=X[self.columns]
return wanted_columns
'''
num_pipeline=Pipeline([
('Column selector', ColumnSelector(continuos_columns)),
('imputer', Imputer(strategy='mean')),
('standard scaler', StandardScaler())
])
tranformed_numerical_attributes=num_pipeline.fit_transform(df
'''
#Now we want to encode our string columns in the dataset
encoded_sex=LabelEncoder().fit_transform(df['Sex'])
encoded_pclass=LabelEncoder().fit_transform(df['Pclass'])
embarked=df['Embarked']
embarked_transformed=[]
for value in embarked:
value=str(value)
embarked_transformed.append(value)
encoded_embarked=LabelEncoder().fit_transform(embarked_transformed)
#Now let's generate a clean dataset with this single function
def clean(df):
columns = ['Survived', 'Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare',
'Embarked']
categorical_columns = ['Pclass', 'Sex', 'SibSp', 'Parch', 'Embarked']
encoded_columns = ['Sex', 'Embarked']
continuous_columns = ['Age', 'Fare']
num_pipeline = Pipeline([
('Column selector', ColumnSelector(continuous_columns)),
('imputer', SimpleImputer(strategy='median')),
('standard scaler', StandardScaler())
])
transformed_numerical_attributes = num_pipeline.fit_transform(df)
encoded_sex = LabelEncoder().fit_transform(df['Sex'])
encoded_pclass = LabelEncoder().fit_transform(df['Pclass'])
embarked = df['Embarked']
embarked_transformed = []
for value in embarked:
value = str(value)
embarked_transformed.append(value)
encoded_embarked = LabelEncoder().fit_transform(embarked_transformed)
dataset = np.c_[transformed_numerical_attributes, encoded_sex, encoded_pclass,
encoded_embarked, df['SibSp'], df['Parch']]
return dataset
dataset = clean(df)
dataset.shape
# split features and labels together so that X and y stay aligned
X_train, X_test, y_train, y_test = train_test_split(dataset, df['Survived'], test_size=0.3)
X_train.shape
###Output
_____no_output_____ |
ENPM690 Robot Learning Final Project.ipynb | ###Markdown
Training an Autonomous Cab to pick up and drop passengers using Q-LearningAuthor: Akshitha Pothamshetty UID: 116399326About: M.Eng Robotics 2020, University of Maryland, College Park About ProjectIn this project, I have used Reinforcement Learning to pick up and drop off passengers at the right locations. I have used Q-Learning to train the model. To keep the focus on applying Q-Learning, I have used existing OpenAI GYM environment, Taxi-v3. This task was introduced in [Dietterich2000] to illustrate some issues in hierarchical reinforcement learning. There are 4 locations (labeled by different letters) and our job is to pick up the passenger at one location and drop him off in another. We receive +20 points for a successful dropoff, and lose 1 point for every timestep it takes. There is also a 10 point penalty for illegal pick-up and drop-off actions.In this notebook, I will go through the project step by step. 1: Install and import dependenciesFollowing packages needs to be installed successfully for executing rest of the notebook:1. CMAKE2. Scipy3. Numpy4. Atari Gym
###Code
!pip install cmake matplotlib scipy numpy 'gym[atari]' # Run only once.
import gym # Import OpenAI GYM package
from IPython.display import clear_output, Markdown, display # For visualization
from time import sleep # For visualization
import random # Randomly generating states
import numpy as np # For Q-Table
import matplotlib.pyplot as plt
from scipy.signal import savgol_filter
random.seed(42) # Setting random state for consistent results.
env = gym.make("Taxi-v3").env # Using existing Taxi-V3 environment
# Generate a random environment
env.reset()
env.render()
# Properties of our environment
print("Property: Action Space {}".format(env.action_space))
print("Property: State Space {}".format(env.observation_space))
###Output
+---------+
|[35mR[0m: | : :[34;1m[43mG[0m[0m|
| : | : : |
| : : : : |
| | : | : |
|Y| : |B: |
+---------+
Property: Action Space Discrete(6)
Property: State Space Discrete(500)
###Markdown
2: Explore our environmentOur environment consists of 6 actions:1. Go South2. Go North3. Go East4. Go West5. Pick Up Passenger6. Drop Off a PassengerOur environment consists of 500 possible states:* It is a 5x5 grid, with 4 pick up or drop off location, with one additional state of passenger being inside the taxi:* 5x5 x (4+1) x 4 = 500 states
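As a worked example of how these indices map onto a single state ID (consistent with the encode call in the next cell), the encoding effectively flattens the indices as ((taxi_row * 5 + taxi_col) * 5 + passenger_location) * 4 + destination, so encode(3, 1, 0, 3) = ((3 * 5 + 1) * 5 + 0) * 4 + 3 = 323, which is exactly the state ID printed below.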
###Code
# Drop off and pick up locations are indexed from 0-3 for the four
# different locations.
# Let's initiate a forced state:
# Taxi at: (3,1), Pick Up at R(0)(Blue) and Drop Off at B(3)(Pink)
initialState = env.encode(3, 1, 0, 3)
print "Out of 500 possible states, our unique state ID is:", initialState
env.s = initialState # set the environment
env.render()
# Let's see the possible action space in this state
print "Action: [Probability, NextState, Reward, GoalReached]"
env.P[initialState]
###Output
Out of 500 possible states, our unique state ID is: 323
+---------+
|[34;1mR[0m: | : :G|
| : | : : |
| : : : : |
| |[43m [0m: | : |
|Y| : |[35mB[0m: |
+---------+
Action: [Probability, NextState, Reward, GoalReached]
###Markdown
3: Establish a baseline, using a Brute Force ApproachWe will use a brute-force approach to establish a baseline. We will ask the taxi to reach the goal state with no intelligence at all: it will take random actions until it reaches the goal. We will evaluate how the model performs from here.
###Code
# Create a visualizer function
def print_frames(frames, episode=1, verbose=False):
for i, frame in enumerate(frames):
clear_output(wait=True)
print(frame['frame'])
if (verbose):
print("\nEpisode Number {}".format(episode))
print("Timestep: {}".format(i + 1))
print("State: {}".format(frame['state']))
print("Action: {}".format(frame['action']))
print("Reward: {}".format(frame['reward']))
sleep(.1)
def printmd(string):
display(Markdown(string))
env.s = initialState # Set environment to initial state.
epochs = 0
penalties, reward = 0, 0
frames = [] # for animation
done = False
while not done:
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if reward == -10:
penalties += 1
# Put each rendered frame into dict for animation
frames.append({
'frame': env.render(mode='ansi'),
'state': state,
'action': action,
'reward': reward
}
)
epochs += 1
# Start visualization only if steps taken are less than 1000. Otherwise it takes too long.
if epochs < 1000:
print_frames(frames)
print("\nSteps taken: {}".format(epochs))
print("Penalties incurred: {}".format(penalties))
###Output
+---------+
|R: | : :G|
| : | : : |
| : : : : |
| | : | : |
|Y| : |[35m[34;1m[43mB[0m[0m[0m: |
+---------+
(Dropoff)
Timestep: 508
State: 475
Action: 5
Reward: 20
Steps taken: 508
Penalties incurred: 163
###Markdown
4: Train the model using Reinforcement LearningUsing a Q-Learning approach to train the model. Q-learning lets the agent use the environment's rewards to learn, over time, the best action to take in a given state. For this purpose, we will create a Q-table to store Q-values, that map to a state and corresponding action combination. A Q-value for a particular state-action combination is representative of the "quality" of an action taken from that state. Better Q-values imply better chances of getting greater rewards.For evaluation of the model, We will keep a track of how many timesteps and number of penalties incurred while training the model.
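The update applied at each step of the training loop below is the standard Q-learning rule, written here with learning rate $\alpha$ and discount factor $\gamma$ exactly as it appears in the code:$$Q(s,a) \leftarrow (1-\alpha)\,Q(s,a) + \alpha\,\bigl(r + \gamma \max_{a'} Q(s',a')\bigr)$$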
###Code
%%time
# initialize a q-table to store q-values for each state-action pair.
q_table = np.zeros([env.observation_space.n, env.action_space.n]) # Initialize a q-table with zeros
print"Training the agent .. .. .." # Typically takes around 3m 17seconds around
# Hyperparameters
alpha = 0.1
gamma = 0.6
epsilon = 1.0
min_epsilon = 0.1
max_epsilon = 1.00
# For plotting metrics
all_epochs = []
all_penalties = []
all_epsilons = []
frames = [] # for animation
rewards = [] # for analyzing training.
for episode in range(1, 100001):
state = env.reset()
episode_rewards = []
epochs, penalties, reward = 0, 0, 0
done = False
while not done:
# Explore action space with probability epsilon (decayed at the end of each episode)
if random.uniform(0, 1) < epsilon:
action = env.action_space.sample()
# Exploit learned Values
else:
action = np.argmax(q_table[state])
# Get next state, reward, goal_status
next_state, reward, done, info = env.step(action)
# current q-value
old_value = q_table[state, action]
next_max = np.max(q_table[next_state])
# Learning: update current q-value based on best chosen next step
new_value = (1 - alpha) * old_value + alpha * (reward + gamma * next_max)
q_table[state, action] = new_value
if done:
epsilon = min_epsilon + (max_epsilon - min_epsilon)*np.exp(-0.0001*episode)
all_epsilons.append(epsilon)
# check if penalties incurred
if reward == -10:
penalties += 1
episode_rewards.append(reward)
# update state
state = next_state
epochs += 1
# Put each rendered frame into dict for animation
frames.append({
'frame': env.render(mode='ansi'),
'state': state,
'action': action,
'reward': reward
}
)
rewards.append(np.mean(episode_rewards))
# render training progress
if episode % 100 == 0:
clear_output(wait=True)
print("Training ongoing...\nSuccessfully completed {} Pick-Drop episodes.".format(episode))
print("Training finished.\n")
plt.figure(figsize=(16, 8), dpi= 80, facecolor='w', edgecolor='k')
plt.plot(savgol_filter(rewards, 1001, 2))
plt.title("Smoothened training reward per episode")
plt.xlabel('Episode');
plt.ylabel('Total Reward');
###Output
_____no_output_____
###Markdown
As we can see, rewards become more and more positive as training continues and start to converge once the model is trained. Based on the graph, around 25,000 episodes seem to be enough to train the agent successfully.
###Code
plt.figure(figsize=(16, 8), dpi= 80, facecolor='w', edgecolor='k')
plt.plot(all_epsilons)
plt.title("Epsilon for episode")
plt.xlabel('Episode');
plt.ylabel('Epsilon');
###Output
_____no_output_____
###Markdown
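For reference, the decay schedule implemented in the training loop above and plotted here is $$\epsilon_{\text{episode}} = \epsilon_{\min} + (\epsilon_{\max} - \epsilon_{\min})\,e^{-0.0001\,\cdot\,\text{episode}},$$ which starts near $\epsilon_{\max} = 1.0$ and approaches $\epsilon_{\min} = 0.1$.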
Epsilon is the exploration factor. It shifts the agent between exploration mode and exploitation mode. Exploration means finding new information about the environment, whereas exploitation means using existing information to maximize the reward. Initially, our driving agent knows pretty much nothing about the best set of driving directions for picking up and dropping off passengers. After a good amount of exploration, we want our agent to switch to exploitation mode. Eventually, all choices will be based on what is learned. In this graph, we can see how epsilon decreases over training. Lower values of epsilon switch the agent to exploitation mode. This is the desired trend. 5: Analyze what the model has learnedWe want to see if the model has learned to take the optimal steps from any state to reach the goal position.
###Code
import pprint
# Drop off and pick up locations are indexed from 0-3 for the four
# different locations.
# Let's initiate first forced state:
# Taxi at: (2,0), Passenger inside the taxi and Drop Off at R(0)(Pink)
statesList = [[2,0,4,0], [2,0,4,2], [2,3,4,3], [2,3,4,1]]
Actions = ["South", "North", "East", "West", "PickUp", "DropOff"]
exampleCount = 0
for state in statesList:
exampleCount += 1
exampleState = env.encode(*state)
print"Example {}: Passenger inside taxi - To be dropped at location: {}".format(exampleCount, state[3])
env.s = exampleState # set the environment
env.render()
# Let's see the possible action space in this state
env.P[exampleState]
stateAction = dict(zip(Actions, q_table[exampleState]))
max_key = max(stateAction, key=stateAction.get)
print "Best Action:", max_key, "\nQ-Value:",round(stateAction[max_key], 2)
print "\n--------------------------------------------------------\n"
###Output
Example 1: Passenger inside taxi - To be dropped at location: 0
+---------+
|[35mR[0m: | : :G|
| : | : : |
|[42m_[0m: : : : |
| | : | : |
|Y| : |B: |
+---------+
(Dropoff)
Best Action: North
Q-Value: 5.6
--------------------------------------------------------
Example 2: Passenger inside taxi - To be dropped at location: 2
+---------+
|R: | : :G|
| : | : : |
|[42m_[0m: : : : |
| | : | : |
|[35mY[0m| : |B: |
+---------+
(Dropoff)
Best Action: South
Q-Value: 5.6
--------------------------------------------------------
Example 3: Passenger inside taxi - To be dropped at location: 3
+---------+
|R: | : :G|
| : | : : |
| : : :[42m_[0m: |
| | : | : |
|Y| : |[35mB[0m: |
+---------+
(Dropoff)
Best Action: South
Q-Value: 5.6
--------------------------------------------------------
Example 4: Passenger inside taxi - To be dropped at location: 1
+---------+
|R: | : :[35mG[0m|
| : | : : |
| : : :[42m_[0m: |
| | : | : |
|Y| : |B: |
+---------+
(Dropoff)
Best Action: North
Q-Value: 2.36
--------------------------------------------------------
###Markdown
As we can see after training has completed, the cab agent is able to predict the best step toward the destination accurately.
1. In example 1, the cab with the passenger is at (2,0) and the drop-off destination is R(0). The most sensible step is to move North, and the model predicts this accurately: Best Action = North, Q-Value = 5.6.
2. In example 3, the cab with the passenger is at (2,3) and the drop-off destination is B(3). The most sensible step is to move South, and the model predicts this accurately: Best Action = South, Q-Value = 5.6.
3. Similarly, in example 4, the cab is at (2,3) with the passenger inside and the drop-off destination is G(1). Both North and East have the same Q-value in this case, 2.36. Thus, the model hasn't overfit and predicts both correct paths successfully.
6: Evaluating agent's performance after training.Now we want to test our model on finding the best routes between checkpoints and analyze any penalties it incurs!
###Code
total_epochs, total_penalties = 0, 0
episodes = 100 # Give 100 tests to the agent.
# frames = [] # for animation
testRewards = [] # Analyzing performance
sleep(5)
for episode in range(episodes):
state = env.reset() # Reset the environment to a random new state
epochs, penalties, reward = 0, 0, 0
done = False # Episode completed?
frames = []
episodeRewards = []
while not done: # While not dropped at correct destination
action = np.argmax(q_table[state]) # Choose best action for a given state
state, reward, done, info = env.step(action) # Get reward, new state and drop status
if reward == -10: # Increment penalties if occurred
penalties += 1
epochs += 1 # Keep track of timesteps required to successfully reach the destination.
# Put each rendered frame into dict for animation
frames.append({
'frame': env.render(mode='ansi'),
'state': state,
'action': action,
'reward': reward
}
)
episodeRewards.append(reward)
total_penalties += penalties
total_epochs += epochs
testRewards.append(np.mean(episodeRewards))
# Visualize the current episode.
print_frames(frames, episode+1, True)
sleep(.50)
print"\n----------------- Test Results --------------------- \n"
print("Results after {} episodes:".format(episodes))
print("Average timesteps per episode: {}".format(total_epochs / episodes))
print("Average penalties per episode: {}".format(total_penalties / episodes))
###Output
+---------+
|R: | : :[35m[34;1m[43mG[0m[0m[0m|
| : | : : |
| : : : : |
| | : | : |
|Y| : |B: |
+---------+
(Dropoff)
Episode Number 100
Timestep: 11
State: 85
Action: 5
Reward: 20
----------------- Test Results ---------------------
Results after 100 episodes:
Average timesteps per episode: 13
Average penalties per episode: 0
|
41-problem-solution_introduction_to_machine_learning.ipynb | ###Markdown
Using `DecisionTreeClassifier` fit a decision tree to the data- What are the attributes/features?- What is the target feature?- Interpret the tree
###Code
dtree = tree.DecisionTreeClassifier().fit(
iris[['petalWidth','petalLength','sepalWidth','sepalLength']], iris['species'])
print(export_text(dtree, feature_names=['petalWidth','petalLength','sepalWidth','sepalLength']))
###Output
|--- petalLength <= 2.45
| |--- class: setosa
|--- petalLength > 2.45
| |--- petalWidth <= 1.75
| | |--- petalLength <= 4.95
| | | |--- petalWidth <= 1.65
| | | | |--- class: versicolor
| | | |--- petalWidth > 1.65
| | | | |--- class: virginica
| | |--- petalLength > 4.95
| | | |--- petalWidth <= 1.55
| | | | |--- class: virginica
| | | |--- petalWidth > 1.55
| | | | |--- petalLength <= 5.45
| | | | | |--- class: versicolor
| | | | |--- petalLength > 5.45
| | | | | |--- class: virginica
| |--- petalWidth > 1.75
| | |--- petalLength <= 4.85
| | | |--- sepalWidth <= 3.10
| | | | |--- class: virginica
| | | |--- sepalWidth > 3.10
| | | | |--- class: versicolor
| | |--- petalLength > 4.85
| | | |--- class: virginica
|
tomef/metrics/clustering.py.ipynb | ###Markdown
Clustering ← ↑`Description`--- Setup---
###Code
from __init__ import init_vars
init_vars(vars(), ('info', {}))
from sklearn.metrics.cluster import normalized_mutual_info_score, adjusted_rand_score
import data
import config
from base import nbprint
from util import ProgressIterator
from widgetbase import nbbox
from metrics.widgets import h_mat_picker
from metrics.helper import load_ground_truth_classes, load_class_array_from_h_mat
if RUN_SCRIPT: h_mat_picker(info)
###Output
_____no_output_____
###Markdown
--- NMI---`Definition`
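For reference, the quantity computed by the wrapper below is the mutual information between the two labelings normalized by a mean of their entropies,$$\mathrm{NMI}(U,V) = \frac{I(U;V)}{\mathrm{mean}\bigl(H(U),\,H(V)\bigr)},$$where the exact mean used (arithmetic, geometric, ...) depends on scikit-learn's `average_method` default for the installed version.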
###Code
def nmi(labels_true, labels_pred):
return normalized_mutual_info_score(labels_true, labels_pred)
###Output
_____no_output_____
###Markdown
--- ARI---`Definition`
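For reference, the wrapper below computes the Rand index corrected for chance,$$\mathrm{ARI} = \frac{\mathrm{RI} - \mathbb{E}[\mathrm{RI}]}{\max(\mathrm{RI}) - \mathbb{E}[\mathrm{RI}]},$$so it is close to 0 for random labelings and equals 1 for a perfect match.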
###Code
def ari(labels_true, labels_pred):
return adjusted_rand_score(labels_true, labels_pred)
###Output
_____no_output_____
###Markdown
--- Show all---
###Code
if RUN_SCRIPT:
nbbox(mini=True)
labels_true = load_ground_truth_classes(info)
labels_pred = load_class_array_from_h_mat(info)
nbprint('NMI score: {}'.format(nmi(labels_true, labels_pred)))
nbprint('ARI score: {}'.format(ari(labels_true, labels_pred)))
###Output
_____no_output_____ |
Chapter07/05 - Enabling ML explainability with SageMaker Clarify.ipynb | ###Markdown
Enabling ML explainability with SageMaker Clarify This notebook contains the code to help readers work through one of the recipes of the book [Machine Learning with Amazon SageMaker Cookbook: 80 proven recipes for data scientists and developers to perform ML experiments and deployments](https://www.amazon.com/Machine-Learning-Amazon-SageMaker-Cookbook/dp/1800567030) How to do it...
###Code
%store -r s3_bucket_name
%store -r prefix
%store -r training_data_path
%store -r test_data_path
%store -r model_name
import sagemaker
session = sagemaker.Session()
region = session.boto_region_name
role = sagemaker.get_execution_role()
s3_training_data_path = training_data_path
s3_test_data_path = test_data_path
s3_output_path = f"s3://{s3_bucket_name}/{prefix}/output"
!aws s3 cp {s3_training_data_path} tmp/training_data.csv
!aws s3 cp {s3_test_data_path} tmp/test_data.csv
import pandas as pd
training_data = pd.read_csv("tmp/training_data.csv")
test_data = pd.read_csv("tmp/test_data.csv")
target = test_data['approved']
features = test_data.drop(columns=['approved'])
features.to_csv('tmp/test_features.csv', index=False, header=False)
features
base = f"s3://{s3_bucket_name}/{prefix}/input"
s3_feature_path = f"{base}/test_features.csv"
!aws s3 cp tmp/test_features.csv {s3_feature_path}
from sagemaker.clarify import ModelConfig
model_config = ModelConfig(
model_name=model_name,
instance_type='ml.c5.xlarge',
instance_count=1,
accept_type='text/csv'
)
from sagemaker.clarify import SageMakerClarifyProcessor
processor = SageMakerClarifyProcessor(
role=role,
instance_count=1,
instance_type='ml.m5.large',
sagemaker_session=session
)
baseline = features.iloc[0:200].values.tolist()
baseline
from sagemaker.clarify import SHAPConfig
shap_config = SHAPConfig(
baseline=baseline,
num_samples=50,
agg_method='median'
)
headers = training_data.columns.to_list()
from sagemaker.clarify import DataConfig
data_config = DataConfig(
s3_data_input_path=s3_training_data_path,
s3_output_path=s3_output_path,
label='approved',
headers=headers,
dataset_type='text/csv'
)
%%time
processor.run_explainability(
data_config=data_config,
model_config=model_config,
explainability_config=shap_config
)
output = processor.latest_job.outputs[0]
output_destination = output.destination
output_destination
!aws s3 cp {output_destination}/ tmp/ --recursive
!ls -lahF tmp/
!cat tmp/analysis.json
###Output
_____no_output_____ |
natural-language/text-classification.ipynb | ###Markdown
Contextual Text ClassificationRNN based sentiment analysis on dataset of plain-text IMDB movie reviews.
###Code
import os
import re
import string
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
tf.__version__
###Output
_____no_output_____
###Markdown
Prepare data
###Code
os.listdir("/database/tensorflow-datasets/")
# Load data
dataset, info = tfds.load(
name="imdb_reviews",
with_info=True,
as_supervised=True,
data_dir="/database/tensorflow-datasets/"
)
train_dataset, test_dataset = dataset["train"], dataset["test"]
for review, label in train_dataset.take(1).as_numpy_iterator():
print("Review;", review, "Label:", label)
for review, label in test_dataset.take(1).as_numpy_iterator():
print("Review;", review, "Label:", label)
type(train_dataset), type(test_dataset)
# Create an optimized dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
test_dataset = test_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
###Output
_____no_output_____
###Markdown
Create text encoder
###Code
VOCAB_SIZE = 5000
MAX_SEQUENCE_LENGTH = 512
EMBEDDING_DIM = 100
# Custom text processor
def text_processor(input_data):
lowercase_text = tf.strings.lower(input_data)
stripped_text = tf.strings.regex_replace(lowercase_text, "<br />", " ")
return tf.strings.regex_replace(
stripped_text, "[%s]" % re.escape(string.punctuation), ""
)
# Encoder layer
encoder_layer = tf.keras.layers.TextVectorization(
max_tokens=VOCAB_SIZE,
standardize=text_processor,
split="whitespace",
output_mode="int",
output_sequence_length=MAX_SEQUENCE_LENGTH
)
# Learn the encoder layer
encoder_layer.adapt(train_dataset.map(lambda text, label: text))
vocabulary = np.array(encoder_layer.get_vocabulary())
print("Top 20 vocabulary:", vocabulary[:20]) # Most frequent
print("Bottom 20 vocabulary:", vocabulary[-20:]) # Least frequent
###Output
Bottom 20 vocabulary: ['acid' '35' '1971' 'wouldbe' 'voiced' 'victory' 'uplifting' 'unseen'
'unfair' 'tooth' 'technicolor' 'survivor' 'stunned' 'sounding' 'sid'
'screens' 'rolled' 'resulting' 'reflection' 'ramones']
###Markdown
Implement model architecture
###Code
def get_bilstm_model():
model = tf.keras.Sequential()
model.add(encoder_layer)
model.add(tf.keras.layers.Embedding(VOCAB_SIZE, EMBEDDING_DIM, mask_zero=True))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True)))
model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128)))
model.add(tf.keras.layers.Dense(64, activation="relu"))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(1))
return model
###Output
_____no_output_____
###Markdown
Learn and evaluate model
###Code
# Get model
model = get_bilstm_model()
# Compile
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), optimizer="adam", metrics=["accuracy"])
# Learn
history = model.fit(
train_dataset,
epochs=10,
validation_data=test_dataset,
callbacks=[tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=2)]
)
# Evaluate model
test_loss, test_acc = model.evaluate(test_dataset)
print("Test Loss:", test_loss, "Test Accuracy:", test_acc)
###Output
391/391 [==============================] - 20s 50ms/step - loss: 0.3759 - accuracy: 0.8816
Test Loss: 0.3758942484855652 Test Accuracy: 0.8815600275993347
###Markdown
Plot model performance
###Code
# Model accuracy
plt.plot(history.history["accuracy"])
plt.plot(history.history["val_accuracy"])
plt.title("model accuracy")
plt.ylabel("accuracy")
plt.xlabel("epoch")
plt.legend(["train", "val"], loc="upper left")
plt.show()
# Model loss
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "val"], loc="upper left")
plt.show()
###Output
_____no_output_____
###Markdown
Generate predictions
###Code
sample_text = ("The movie was not good. The animation and the graphics were terrible. I would not recommend this movie.")
predictions = model.predict(np.array([sample_text]))
label_predicted = "Positive" if predictions[0][0] >= 0.0 else "Negative"
print("Review:", sample_text)
print("Predicted Label:", label_predicted)
###Output
Review: The movie was not good. The animation and the graphics were terrible. I would not recommend this movie.
Predicted Label: Negative
|
munge/preprocessing_lda.ipynb | ###Markdown
LDA being a probabilistic graphical model (i.e. dealing with probabilities) only requires raw counts, so we use a CountVectorizer.
###Code
from sklearn.feature_extraction.text import CountVectorizer
def apply_count_vectorizer(df, series, word_appearal_threshold):
vectorizer = CountVectorizer(max_df=word_appearal_threshold,
min_df=2, # words that appear in < x lines will be discarded
token_pattern='\w+|\$[\d\.]+|\S+')
tf = vectorizer.fit_transform(df[series]).toarray()
logging.info(f"applying vectorizer : {vectorizer.get_params()}. \n Matrix shape: {tf.shape}")
logging.info(f"discard words appearing in more than {word_appearal_threshold}% of cases")
# tf_feature_names tells us what word each column in the matric represents
tf_feature_names = vectorizer.get_feature_names()
return tf, tf_feature_names
tf, tf_feature_names = apply_count_vectorizer(df=df, series = 'abstract', word_appearal_threshold=.9)
###Output
2020-26-03: INFO: applying vectorizer : {'analyzer': 'word', 'binary': False, 'decode_error': 'strict', 'dtype': <class 'numpy.int64'>, 'encoding': 'utf-8', 'input': 'content', 'lowercase': True, 'max_df': 0.9, 'max_features': None, 'min_df': 2, 'ngram_range': (1, 1), 'preprocessor': None, 'stop_words': None, 'strip_accents': None, 'token_pattern': '\\w+|\\$[\\d\\.]+|\\S+', 'tokenizer': None, 'vocabulary': None}.
Matrix shape: (803, 6465)
2020-26-03: INFO: discard words appearing in more than 0.9% of cases
###Markdown
LDA with Sklearn
###Code
from sklearn.decomposition import LatentDirichletAllocation
number_of_topics = 10
model = LatentDirichletAllocation(n_components=number_of_topics, random_state=0)
model.fit(tf)
def display_topics(model, feature_names, no_top_words):
topic_dict = {}
for topic_idx, topic in enumerate(model.components_):
topic_dict["Topic %d words" % (topic_idx)]= ['{}'.format(feature_names[i])
for i in topic.argsort()[:-no_top_words - 1:-1]]
topic_dict["Topic %d weights" % (topic_idx)]= ['{:.1f}'.format(topic[i])
for i in topic.argsort()[:-no_top_words - 1:-1]]
return pd.DataFrame(topic_dict)
no_top_words = 10
display_topics(model, tf_feature_names, no_top_words)
###Output
_____no_output_____
###Markdown
LDA with Gensimhttps://www.machinelearningplus.com/nlp/topic-modeling-gensim-python/
###Code
# Gensim
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
# spacy for lemmatization
import spacy
# Plotting tools
import pyLDAvis
import pyLDAvis.gensim
from nltk.tokenize import word_tokenize  # used below but not imported elsewhere in this notebook
def preprocess_news(df, series):
corpus=[]
for item in df[series].dropna()[:5000]:
words=[w for w in word_tokenize(item)]
corpus.append(words)
return corpus
texts = preprocess_news(df, series='abstract')
# Create Dictionary
id2word = corpora.Dictionary(texts)
# Term Document Frequency - TDF
corpus = [id2word.doc2bow(text) for text in texts]
print(f"produced corpus shown above is a mapping of (word_id, word_frequency: {corpus[:1]})")
# Build LDA model
lda_model = gensim.models.ldamodel.LdaModel(corpus=corpus,
id2word=id2word,
num_topics=10,
random_state=100,
update_every=1,
chunksize=100,
passes=10, # training epochs
alpha='auto',
per_word_topics=True)
# Print the Keyword in the 10 topics
import pprint
pprint.pprint(lda_model.print_topics())
doc_lda = lda_model[corpus]
###Output
2020-26-03: INFO: topic #0 (0.159): 0.143*"cov" + 0.136*"sars" + 0.023*"early" + 0.015*"protein" + 0.014*"binding" + 0.013*"severe" + 0.012*"associated" + 0.011*"spike" + 0.011*"acute" + 0.011*"syndrome"
2020-26-03: INFO: topic #1 (0.275): 0.085*"data" + 0.026*"using" + 0.025*"virus" + 0.013*"specific" + 0.013*"reveals" + 0.012*"sequencing" + 0.012*"human" + 0.011*"identification" + 0.011*"influenza" + 0.010*"genome"
2020-26-03: INFO: topic #2 (0.120): 0.100*"infect" + 0.091*"diagnosis" + 0.088*"application" + 0.087*"optimization" + 0.033*"rna" + 0.008*"contact" + 0.008*"highly" + 0.007*"epidem" + 0.007*"different" + 0.007*"detect"
2020-26-03: INFO: topic #3 (0.237): 0.053*"period" + 0.052*"analysis" + 0.050*"incubation" + 0.050*"publicly" + 0.049*"statistical" + 0.049*"available" + 0.049*"truncation" + 0.049*"right" + 0.039*"novel" + 0.009*"immune"
2020-26-03: INFO: topic #4 (0.196): 0.092*"control" + 0.029*"cell" + 0.028*"unknown" + 0.021*"cells" + 0.015*"studi" + 0.015*"single" + 0.014*"expression" + 0.014*"receptor" + 0.011*"molecular" + 0.011*"non"
2020-26-03: INFO: topic #5 (0.544): 0.074*"covid" + 0.069*"coronavirus" + 0.065*"epidemiological" + 0.047*"transmission" + 0.043*"model" + 0.038*"novel" + 0.036*"case" + 0.032*"pcr" + 0.031*"identifying" + 0.031*"characterizing"
2020-26-03: INFO: topic #6 (0.148): 0.040*"virus" + 0.019*"infectious" + 0.018*"host" + 0.016*"author" + 0.015*"zika" + 0.013*"protein" + 0.010*"infection" + 0.009*"activity" + 0.008*"infect" + 0.007*"may"
2020-26-03: INFO: topic #7 (0.181): 0.092*"infections" + 0.089*"strategy" + 0.025*"pneumonia" + 0.019*"prediction" + 0.016*"coronavirus" + 0.013*"evolution" + 0.010*"structure" + 0.009*"health" + 0.008*"diagnostic" + 0.008*"review"
2020-26-03: INFO: topic #8 (0.247): 0.083*"characteristics" + 0.033*"patients" + 0.031*"clinical" + 0.024*"viral" + 0.020*"covid" + 0.013*"dynamics" + 0.012*"study" + 0.012*"disease" + 0.011*"bat" + 0.011*"rna"
2020-26-03: INFO: topic #9 (0.108): 0.017*"gene" + 0.013*"proteins" + 0.013*"social" + 0.013*"epidemics" + 0.012*"evolutionary" + 0.012*"pathogen" + 0.011*"science" + 0.010*"monitoring" + 0.009*"hospitalized" + 0.009*"quantifying"
###Markdown
Compute Model Perplexity and Coherence Score
###Code
# Compute Perplexity: measure of how good the model is. lower the better.
print('\nPerplexity: ', lda_model.log_perplexity(corpus))
# Compute Coherence Score
coherence_model_lda = CoherenceModel(model=lda_model, texts=texts, dictionary=id2word, coherence='c_v')
coherence_lda = coherence_model_lda.get_coherence()
print('\nCoherence Score: ', coherence_lda)
###Output
2020-26-03: INFO: -8.803 per-word bound, 446.5 perplexity estimate based on a held-out corpus of 803 documents with 131769 words
2020-26-03: INFO: using ParallelWordOccurrenceAccumulator(processes=11, batch_size=64) to estimate probabilities from sliding windows
###Markdown
Hyperparameter TuningWe have a baseline coherence score for the default LDA model; let's perform a series of sensitivity tests to help determine the following model hyperparameters: - Number of Topics (K) - Dirichlet hyperparameter alpha: Document-Topic Density - Dirichlet hyperparameter beta: Word-Topic Density. We'll perform these tests in sequence, one parameter at a time while keeping the others constant, and run them over the corpus. We'll use C_v as our metric for performance comparison. The number-of-topics sweep is implemented below; a sketch for sweeping alpha and beta follows it.
###Code
def compute_coherence_values(dictionary, corpus, texts, limit, start=2, step=3):
"""
Compute c_v coherence for various number of topics
Parameters:
----------
dictionary : Gensim dictionary
corpus : Gensim corpus
texts : List of input texts
limit : Max num of topics
Returns:
-------
model_list : List of LDA topic models
coherence_values : Coherence values corresponding to the LDA model with respective number of topics
"""
coherence_values = []
model_list = []
for num_topics in range(start, limit, step):
model = gensim.models.ldamodel.LdaModel(corpus=corpus, num_topics=num_topics, id2word=id2word)
model_list.append(model)
coherencemodel = CoherenceModel(model=model, texts=texts, dictionary=dictionary, coherence='c_v')
coherence_values.append(coherencemodel.get_coherence())
return model_list, coherence_values
model_list, coherence_values = compute_coherence_values(dictionary=id2word,
corpus=corpus,
texts=texts,
start=2, limit=40, step=6)
# Show graph
limit=40; start=2; step=6;
x = range(start, limit, step)
plt.plot(x, coherence_values)
plt.xlabel("Num Topics")
plt.ylabel("Coherence score")
plt.legend(("coherence_values"), loc='best')
plt.show()
# Print the coherence scores
for m, cv in zip(x, coherence_values):
print("Num Topics =", m, " has Coherence Value of", round(cv, 4))
# Select the model and print the topics
import pprint
optimal_model = model_list[5]
model_topics = optimal_model.show_topics(formatted=False)
pprint.pprint(optimal_model.print_topics(num_words=10))
###Output
2020-26-03: INFO: topic #8 (0.031): 0.007*"patients" + 0.006*"covid" + 0.005*"exon" + 0.004*"severe" + 0.004*"study" + 0.004*"virus" + 0.003*"model" + 0.003*"disease" + 0.003*"viral" + 0.003*"infection"
2020-26-03: INFO: topic #14 (0.031): 0.006*"virus" + 0.005*"data" + 0.005*"cases" + 0.005*"disease" + 0.004*"viruses" + 0.004*"viral" + 0.004*"preprint" + 0.004*"rna" + 0.004*"model" + 0.004*"protein"
2020-26-03: INFO: topic #31 (0.031): 0.009*"protein" + 0.008*"rna" + 0.006*"virus" + 0.005*"cells" + 0.004*"infection" + 0.004*"sequence" + 0.004*"preprint" + 0.004*"cases" + 0.004*"host" + 0.003*"proteins"
2020-26-03: INFO: topic #12 (0.031): 0.007*"virus" + 0.006*"viral" + 0.006*"rna" + 0.004*"viruses" + 0.004*"transmission" + 0.004*"protein" + 0.004*"using" + 0.003*"species" + 0.003*"epidemic" + 0.003*"genome"
2020-26-03: INFO: topic #10 (0.031): 0.006*"covid" + 0.006*"transmission" + 0.005*"patients" + 0.005*"cases" + 0.005*"data" + 0.005*"clinical" + 0.004*"human" + 0.004*"preprint" + 0.004*"cov" + 0.004*"viral"
2020-26-03: INFO: topic #25 (0.031): 0.011*"cov" + 0.010*"sars" + 0.010*"virus" + 0.006*"host" + 0.005*"infection" + 0.005*"patients" + 0.005*"coronavirus" + 0.004*"ncov" + 0.004*"rna" + 0.004*"novel"
2020-26-03: INFO: topic #19 (0.031): 0.007*"virus" + 0.005*"patients" + 0.005*"pcr" + 0.005*"viral" + 0.004*"rna" + 0.004*"results" + 0.004*"one" + 0.004*"preprint" + 0.003*"two" + 0.003*"data"
2020-26-03: INFO: topic #20 (0.031): 0.006*"infection" + 0.005*"cells" + 0.004*"viral" + 0.004*"human" + 0.004*"preprint" + 0.004*"using" + 0.004*"time" + 0.003*"virus" + 0.003*"patients" + 0.003*"proteins"
2020-26-03: INFO: topic #7 (0.031): 0.009*"rna" + 0.005*"binding" + 0.005*"transmission" + 0.004*"virus" + 0.004*"protein" + 0.004*"viral" + 0.004*"cov" + 0.004*"two" + 0.003*"preprint" + 0.003*"available"
2020-26-03: INFO: topic #27 (0.031): 0.006*"virus" + 0.006*"model" + 0.006*"data" + 0.006*"patients" + 0.005*"disease" + 0.005*"china" + 0.005*"transmission" + 0.004*"preprint" + 0.004*"epidemic" + 0.004*"infection"
2020-26-03: INFO: topic #15 (0.031): 0.009*"patients" + 0.005*"cases" + 0.005*"transmission" + 0.005*"disease" + 0.005*"data" + 0.004*"infection" + 0.004*"model" + 0.004*"covid" + 0.004*"cells" + 0.004*"preprint"
2020-26-03: INFO: topic #11 (0.031): 0.007*"cases" + 0.006*"transmission" + 0.006*"cov" + 0.006*"sars" + 0.006*"coronavirus" + 0.005*"model" + 0.005*"data" + 0.004*"covid" + 0.004*"disease" + 0.004*"patients"
2020-26-03: INFO: topic #5 (0.031): 0.009*"ncov" + 0.005*"cov" + 0.005*"human" + 0.005*"sars" + 0.005*"coronavirus" + 0.004*"viral" + 0.004*"cases" + 0.004*"patients" + 0.004*"novel" + 0.004*"identified"
2020-26-03: INFO: topic #23 (0.031): 0.015*"sars" + 0.015*"cov" + 0.009*"virus" + 0.009*"protein" + 0.006*"viral" + 0.005*"patients" + 0.005*"infection" + 0.005*"binding" + 0.005*"cells" + 0.005*"human"
2020-26-03: INFO: topic #13 (0.031): 0.008*"cases" + 0.007*"wuhan" + 0.007*"data" + 0.006*"china" + 0.005*"number" + 0.004*"virus" + 0.004*"preprint" + 0.004*"model" + 0.004*"outbreak" + 0.004*"time"
2020-26-03: INFO: topic #1 (0.031): 0.008*"cov" + 0.007*"rna" + 0.007*"sars" + 0.006*"patients" + 0.006*"novel" + 0.005*"human" + 0.005*"virus" + 0.004*"viral" + 0.004*"proteins" + 0.004*"severe"
2020-26-03: INFO: topic #6 (0.031): 0.010*"patients" + 0.006*"covid" + 0.005*"cases" + 0.005*"epidemic" + 0.003*"using" + 0.003*"virus" + 0.003*"study" + 0.003*"preprint" + 0.003*"two" + 0.003*"china"
2020-26-03: INFO: topic #26 (0.031): 0.006*"virus" + 0.006*"cell" + 0.006*"preprint" + 0.005*"viral" + 0.005*"infection" + 0.005*"sars" + 0.004*"study" + 0.004*"infected" + 0.004*"human" + 0.004*"cov"
2020-26-03: INFO: topic #24 (0.031): 0.006*"cases" + 0.005*"cov" + 0.005*"patients" + 0.004*"sars" + 0.004*"human" + 0.004*"model" + 0.004*"preprint" + 0.004*"covid" + 0.004*"data" + 0.004*"disease"
2020-26-03: INFO: topic #4 (0.031): 0.005*"transmission" + 0.005*"human" + 0.004*"disease" + 0.004*"virus" + 0.004*"infection" + 0.004*"protein" + 0.004*"model" + 0.004*"using" + 0.003*"based" + 0.003*"preprint"
###Markdown
Finding the dominant topic in each sentenceOne of the practical applications of topic modeling is to determine what topic a given document is about. To find that, we find the topic number that has the highest percentage contribution in that document. The format_topics_sentences() function below nicely aggregates this information in a presentable table.
###Code
def format_topics_sentences(ldamodel=lda_model, corpus=corpus, texts=df['title']):
# Init output
sent_topics_df = pd.DataFrame()
# Get main topic in each document
for i, row in enumerate(ldamodel[corpus]):
row = sorted(row, key=lambda x: (x[1]), reverse=True)
# Get the Dominant topic, Perc Contribution and Keywords for each document
for j, (topic_num, prop_topic) in enumerate(row):
if j == 0: # => dominant topic
wp = ldamodel.show_topic(topic_num)
topic_keywords = ", ".join([word for word, prop in wp])
sent_topics_df = sent_topics_df.append(pd.Series([int(topic_num), round(prop_topic,4), topic_keywords]), ignore_index=True)
else:
break
sent_topics_df.columns = ['Dominant_Topic', 'Perc_Contribution', 'Topic_Keywords']
# Add original text to the end of the output
contents = pd.Series(texts)
sent_topics_df = pd.concat([sent_topics_df, contents], axis=1)
return(sent_topics_df)
df_topic_sents_keywords = format_topics_sentences(ldamodel=optimal_model,
corpus=corpus,
texts=df['title'])
# Format
df_dominant_topic = df_topic_sents_keywords.reset_index()
df_dominant_topic.columns = ['Document_No', 'Dominant_Topic', 'Topic_Perc_Contrib', 'Keywords', 'Text']
# Show
df_dominant_topic.head(10)
###Output
_____no_output_____
###Markdown
Topic distribution across documents
###Code
# Number of Documents for Each Topic
topic_counts = df_topic_sents_keywords['Dominant_Topic'].value_counts()
# Percentage of Documents for Each Topic
topic_contribution = round(topic_counts/topic_counts.sum(), 4)
# Topic Number and Keywords
topic_num_keywords = df_topic_sents_keywords[['Dominant_Topic', 'Topic_Keywords']]
# Concatenate Column wise
df_dominant_topics = pd.concat([topic_num_keywords, topic_counts, topic_contribution], axis=1)
# Change Column names
df_dominant_topics.columns = ['Dominant_Topic', 'Topic_Keywords', 'Num_Documents', 'Perc_Documents']
# Show
df_dominant_topics
###Output
_____no_output_____ |
assignment/Assignment 4 Week 4.ipynb | ###Markdown
Exercise 4.1
###Code
plt.figure(figsize=(8,8))
x = np.array([[0,1], [0,3], [2,0]])
y = np.array([0, 0, 1]) # 0 is class 1 and 1 is class 2
plt.scatter(x[:,0], x[:,1], marker="x")
plt.plot([2,0], [0,1])
plt.show()
###Output
_____no_output_____
###Markdown
a) The classification boundary is the perpendicular bisector of the line segment between (0,1) and (2,0). There are two support vectors; the quick scikit-learn cross-check below confirms this.
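As a quick cross-check (not part of the original prtools-based exercise), the same three points can be fed to scikit-learn's `SVC` with a large `C` to approximate a hard-margin linear SVM; the reported support vectors should be (0,1) and (2,0):
```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 1], [0, 3], [2, 0]])
y = np.array([0, 0, 1])

# a large C approximates the hard-margin case
clf = SVC(kernel="linear", C=1e6).fit(X, y)
print(clf.support_vectors_)        # expected: [[0, 1], [2, 0]]
print(clf.coef_, clf.intercept_)   # boundary ~ perpendicular bisector of (0,1)-(2,0)
```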
###Code
x = np.array([[0,-1], [0,3], [2,0]])
y = np.array([0, 0, 1])
plt.scatter(x[:,0], x[:,1], marker="x")
plt.plot([2,0], [0,0])
plt.show()
###Output
_____no_output_____
###Markdown
b) The classification boundary now becomes the vertical line through (1,0). All three points become support vectors. Exercise 4.2
###Code
from sklearn.preprocessing import minmax_scale
from sklearn.preprocessing import maxabs_scale
from sklearn.preprocessing import normalize
help (pr.svc)
# Consider (0,0) for one class, (1,1) and (2,0) for the other class,
# and (1,0)as a test point and see
# how the classification of the last point changes with different scalings of the first feature.
plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
x = np.array([[1,1], [2,0], [0,0], [1,0]])
y = np.array([0, 0, 1, 2])
plt.scatter(x[:,0], x[:,1], marker="x", s=100)
train = pr.prdataset(x[:3], y[:3])
c = pr.svc(train, ("linear", 0, 20))
pr.plotc(c)
plt.subplot(1,2,2)
x_rescale = normalize(x)
plt.scatter(x_rescale[:,0], x_rescale[:,1], marker="x", s=100)
train_re = pr.prdataset(x_rescale[:3], y[:3])
c = pr.svc(train_re, ("linear", 0, 20))
pr.plotc(c)
plt.show()
###Output
_____no_output_____
###Markdown
This confirms that the support vector classifier is sensitive to feature scaling. Exercise 4.3 Difference between LDA and SVM
###Code
plt.figure(figsize=(8,8))
x = np.array([[0,1],[2,4],[1,0]])
y = np.array([0,0,1])
data = pr.prdataset(x,y)
w1 = pr.ldc(data,1)
w2 = pr.svc(data,("linear",0, 20))
plt.scatter(x[:,0], x[:,1],marker = "x")
pr.plotc(w1)
pr.plotc(w2)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The two solutions will be the same when the number of support vectors is three in the 2D case. LDA will always have three "support vectors" in this 2D setting. See Exercise 3.17
###Code
data = pr.gendatc(n=(1000,1000),dim=2, mu=0.0)
feature = +data
label = pr.genlab(n=(1000,1000),lab=[-1,1])
noiseFeature = np.hstack((feature,np.random.rand(2000,60)))
noiseData = pr.prdataset(noiseFeature, label)
e_nmc = pr.cleval(noiseData, pr.nmc(), trainsize=[5, 10, 20, 40], nrreps=10) # default
u = pr.svc([],("linear",0,20))
e_svc = pr.cleval(noiseData, u,trainsize=[5, 10, 20, 40], nrreps=10)
plt.title("Learning curves for nmc and svc with gendatc")  # single title: a second plt.title call would simply overwrite it
plt.legend()
help (pr.cleval)
###Output
Help on function cleval in module prtools.prtools:
cleval(a, u, trainsize=[2, 3, 5, 10, 20, 30], nrreps=3, testfunc=<function testc at 0x7fb7bcd8a670>)
Learning curve
E = cleval(A,U,TRAINSIZE,NRREPS)
Estimate the classification error E of (untrained) mapping U on
dataset A for varying training set sizes. Default is
trainsize=[2,3,5,10,20,30].
To get reliable estimates, the train-test split is repeated NRREPS=3
times.
Example:
a = gendatb([100,100])
u = nmc()
e = cleval(a,u,nrreps=10)
###Markdown
Exercise 4.7 svc(a,(kernel type,par,C)) a)
###Code
a=pr.gendatb(n=[20,20], s=1) # a large independent banana test set
plt.figure(figsize=(35,60))
widths = [0.1, 0.25, 0.5, 0.75, 1, 1.5, 2, 5, 7.5, 10]
for i in range(len(widths)):
svc = pr.svc(a, ("rbf", widths[i], 10))
plt.subplot(5,2,i+1)
pr.scatterd(a)
pr.plotc(svc)
plt.title("widths="+str(widths[i]))
plt.show()
###Output
_____no_output_____
###Markdown
When the width increases from 0.1 to 10, the decision boundary becomes smoother and the performance deteriorates. Exercise 4.8 Optimize the hyperparameter of an RBF SVC
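For Exercise 4.8, a scikit-learn sketch of the same cross-validation idea is given below as a cross-check. It assumes the usual $\exp(-\|x-x'\|^2/(2s^2))$ parameterisation of the kernel width, so that a prtools width $s$ roughly corresponds to `gamma` $= 1/(2s^2)$, and uses `make_moons` as a stand-in for the banana set:
```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# make_moons is used here as a stand-in for the prtools banana data
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)

widths = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 7.0, 10.0, 25.0])
for s in widths:
    gamma = 1.0 / (2.0 * s ** 2)   # assumed width-to-gamma mapping
    err = 1.0 - cross_val_score(SVC(kernel="rbf", gamma=gamma, C=10), X, y, cv=10).mean()
    print(f"width {s:5.1f}  (gamma {gamma:.4f})  CV error {err:.3f}")
```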
###Code
help (pr.prcrossval)
a = pr.gendatb(n=[200,200], s=1)
s = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 7.0, 10.0, 25.0])
e = np.zeros(len(s))
for i in range(len(s)):
e[i] = pr.prcrossval(a, pr.svc([],("rbf", s[i], 10)), k=10).mean()
plt.figure(figsize=(15,10))
plt.plot(s, e, "-D")
plt.title("Validation Curve")
plt.xlabel("s")
plt.ylabel("error")
plt.show()
###Output
_____no_output_____ |
GoogleCloud/GoogleTranslation/google-translation.ipynb | ###Markdown
Google Translation Translate text to a specified target language (supported languages here). This notebook is largely identical to the Python implementation in the official documentation. Google Translation Documentation
###Code
from google.cloud import translate
import six
def google_translate(target, text):
translate_client = translate.Client()
if isinstance(text, six.binary_type):
text = text.decode('utf-8')
result = translate_client.translate(
text, target_language=target)
return result['translatedText']
input_text = "ここにgoogle翻訳のための日本語のセンテンスがあります!"
result = google_translate('en', input_text)
# Print translated text
print(result)
###Output
Here is a Japanese sentence for google translation!
|
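###Markdown
Since `google_translate` simply wraps the client, it can be reused for other target languages as well. A minimal usage sketch (assuming the same credentials are configured, e.g. via the `GOOGLE_APPLICATION_CREDENTIALS` environment variable):
```python
# reuse the helper defined above for several target languages
for target in ['en', 'fr', 'de']:
    print(target, '->', google_translate(target, input_text))
```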
code/notebooks/collision_analysis.ipynb | ###Markdown
Norm
###Code
seed = 7
for dataset, features_by_dataset in features.items():
for head_info, f_d_h in features_by_dataset.items():
negs = list(sorted(f_d_h[seed]))
fig, axes = plt.subplots(len(f_d_h[seed]), 1, figsize=(20, 20), sharex=True, sharey=True)
norms = []
norm_max = 0.
norm_min = 10e7
for i, neg in enumerate(negs[::-1]):
# use only use seed value
X_train = f_d_h[seed][neg][0]
norm = np.sqrt(np.sum(X_train ** 2, axis=1))
norms.append(norm)
norm_max = max(norm_max, np.max(norm))
norm_min = min(norm_min, np.min(norm))
for i, norm in enumerate(norms):
num_bins = int(np.log(len(norm)))
sns.histplot(norm, bins=num_bins, stat="probability", ax=axes[i], fill=False, binrange=(norm_min, norm_max))
axes[i].axvline(norm.mean(), color="k", linestyle="dashed", linewidth=2.)
for i, neg in enumerate(negs[::-1]):
axes[i].set_xlabel("")
axes[i].set_ylabel("" + "${}$".format(neg))
axes[-1].set_xlabel("Norm")
fig.text(0.02, 0.5, "\# negative samples $+ 1$", va="center", rotation="vertical")
fname = "../../doc/figs/norm_hist_{}_{}_seed-{}.pdf".format(head_info, dataset.upper().replace("1", "-1"), seed)
plt.savefig(fname)
axes[0].set_title("{} {}".format(head_info, dataset))
plt.show()
###Output
_____no_output_____
###Markdown
Cosine
###Code
seed = 7
# store averaged cosine
cosine_sims = {}
for dataset, f_d in features.items():
cosine_sims[dataset] = {}
for head_info, f_d_h in f_d.items():
cosine_sims[dataset][head_info] = {}
C = min(10, len(id2classes[dataset]))
negs = list(sorted(f_d_h[seed]))
fig, axes = plt.subplots(len(negs), C, figsize=(32, 16), sharex=True, sharey=True)
for i, neg in enumerate(negs[::-1]):
X_train, y_train, X_eval, y_eval = f_d_h[seed][neg]
X_train_normalized = sklearn.preprocessing.normalize(X_train, axis=1)
hist = []
for c in range(C):
X_train_c = X_train_normalized[y_train == c]
cos_sim = X_train_c.dot(X_train_c.T)
cos_sim = cos_sim[np.triu_indices(len(cos_sim), 1)].flatten()
num_bins = int(np.log(len(cos_sim)))
sns.histplot(cos_sim, bins=num_bins, stat="probability", ax=axes[i, c], fill=False, binrange=(-1., 1.))
axes[i, c].axvline(cos_sim.mean(), color="k", linestyle="dashed", linewidth=2.)
for r in range(len(negs)):
for c in range(C):
axes[r, c].set_xlabel("")
axes[r, c].set_ylabel("")
for i, class_name in enumerate(id2classes[dataset][:C]):
axes[-1, i].set_xlabel(class_name.replace("_", " ").capitalize())
for i, neg in enumerate(negs[::-1]):
axes[i, 0].set_ylabel(neg, size="large")
fig.text(0.06, 0.5, "\# negative samples $+ 1$", va="center", rotation="vertical")
fname = "../../doc/figs/cos_hist_{}_{}_seed-{}.pdf".format(head_info, dataset, seed)
plt.savefig(fname)
axes[0, 0].set_title("{} {}".format(head_info, dataset))
plt.show()
###Output
_____no_output_____
###Markdown
Relative change
###Code
def get_value_hist(edges):
results = [(edges[i] + edges[i + 1]) / 2. for i in range(len(edges) - 1)]
return np.array(results)
for dataset, features_by_dataset in features.items():
for head_info, f_d_h in features_by_dataset.items():
fig, axes_ratio = plt.subplots(1, 1, figsize=(16, 9))
cos_w_distance_by_seed = []
norm_w_distance_by_seed = []
for seed, f_d_h_s in f_d_h.items():
negs = np.array(list(sorted(f_d_h_s)))
cosine_hist_by_neg = []
norms_by_neg = []
for neg in negs:
X_train, y_train, X_eval, y_eval = f_d_h_s[neg]
C = len(np.unique(y_train))
# norm
norms_by_neg.append(np.sqrt(np.sum(X_train ** 2, axis=1)))
X_train_normalized = sklearn.preprocessing.normalize(X_train, axis=1)
# histogram for cosine similarity
cos_hist = []
for c in range(C):
X_train_c = X_train_normalized[y_train == c]
cos_sim = X_train_c.dot(X_train_c.T)
cos_sim = cos_sim[np.triu_indices(len(cos_sim), 1)].flatten()
cos_hist.append(
np.histogram(
cos_sim,
bins=int(np.log(len(cos_sim))),
range=(-1.0, 1.0),
)
)
cosine_hist_by_neg.append(cos_hist) # num-negs x C x 2
# cosine W distance per class
w_mean_over_C = []
min_neg = negs[0]
for cos_histogram in cosine_hist_by_neg[1:]:
w_distance = []
for c in range(len(cosine_hist_by_neg[0])):
base_values = get_value_hist(cosine_hist_by_neg[0][c][1])
base_weights = cosine_hist_by_neg[0][c][0] / np.sum(cosine_hist_by_neg[0][c][0])
values = get_value_hist(cos_histogram[c][1])
weights = cos_histogram[c][0] / np.sum(cos_histogram[c][0])
d = scipy.stats.wasserstein_distance(
base_values, values,
base_weights, weights
)
w_distance.append(d)
w_mean_over_C.append(np.mean(w_distance))
cos_w_distance_by_seed.append(w_mean_over_C)
# histgram for norm
# since norm is not bounded, decide the boundaries among k
h_min = 10 ** 9
h_max = 0.
for norm_list in norms_by_neg:
h_min = min(h_min, np.min(norm_list))
h_max = max(h_max, np.max(norm_list))
norms_hist_by_neg = []
for norm_list in norms_by_neg:
norms_hist_by_neg.append(
np.histogram(
norm_list,
bins=int(np.log(len(norm_list))),
range=(h_min, h_max),
)
)
# norm's W distance
# base dist.
norm_w_distance = []
base_weights = norms_hist_by_neg[0][0] / np.sum(norms_hist_by_neg[0][0])
base_values = get_value_hist(norms_hist_by_neg[0][1])
for norm_histogram in norms_hist_by_neg[1:]:
values = get_value_hist(norm_histogram[1])
weights = norm_histogram[0] / np.sum(norm_histogram[0])
d = scipy.stats.wasserstein_distance(
base_values, values,
base_weights, weights
)
norm_w_distance.append(d)
norm_w_distance_by_seed.append(norm_w_distance)
# the smallest pair's distance is removed since the relative change is trivial: 0
cos_w_distance = np.array(cos_w_distance_by_seed).mean(axis=0)
cos_w_distance -= cos_w_distance[0]
cos_w_distance = cos_w_distance[1:]
norm_w_distance = np.array(norm_w_distance_by_seed).mean(axis=0)
norm_w_distance -= norm_w_distance[0]
norm_w_distance = norm_w_distance[1:]
axes_ratio.errorbar(np.arange(len(negs) - 2), cos_w_distance, fmt="o", markersize=20, label="Cosine")
axes_ratio.errorbar(np.arange(len(negs) - 2), norm_w_distance, fmt="v", markersize=20, label="Norm")
axes_ratio.set_xticks(np.arange(len(negs) - 2))
axes_ratio.set_xticklabels(negs[2:])
axes_ratio.set_ylabel("Relative change")
axes_ratio.set_xlabel("\# negative samples $+ 1$")
plt.legend()
fname = "../../doc/figs/wasserstein_distance_{}_{}.pdf".format(head_info, dataset)
plt.savefig(fname)
axes_ratio.set_title(f"{dataset} {head_info}")
plt.show()
###Output
_____no_output_____ |
ukpsummarizer-be/cplex/python/examples/mp/jupyter/tutorials/Linear_Programming.ipynb | ###Markdown
Tutorial: Linear Programming, (CPLEX Part 1)This notebook gives an overview of Linear Programming (or LP). After completing this unit, you should be able to - describe the characteristics of an LP in terms of the objective, decision variables and constraints, - formulate a simple LP model on paper, - conceptually explain some standard terms related to LP, such as dual, feasible region, infeasible, unbounded, slack, reduced cost, and degenerate. You should also be able to describe some of the algorithms used to solve LPs, explain what presolve does, and recognize the elements of an LP in a basic DOcplex model.>This notebook is part of [Prescriptive Analytics for Python](https://rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html).>It requires a valid subscription to **Decision Optimization on Cloud** or a **local installation of CPLEX Optimizers**. Discover us [here](https://developer.ibm.com/docloud).Table of contents:* [Introduction to Linear Programming](Introduction-to-Linear-Programming)* [Example: a production problem](Example:-a-production-problem)* [CPLEX Modeling for Python](Use-IBM-Decision-Optimization-CPLEX-Modeling-for-Python)* [Algorithms for solving LPs](Algorithms-for-solving-LPs)* [Summary](Summary)* [References](References) Introduction to Linear ProgrammingIn this topic, you’ll learn what the basic characteristics of a linear program are. What is Linear Programming?Linear programming deals with the maximization (or minimization) of a linear objective function, subject to linear constraints, where all the decision variables are continuous. That is, no discrete variables are allowed. The linear objective and constraints must consist of linear expressions. What is a linear expression?A linear expression is a scalar product, for example, the expression:$$\sum{a_i x_i}$$where a_i represents constants (that is, data) and x_i represents variables or unknowns.Such an expression can also be written in short form as a vector product:$$^{t}A X$$where $A$ is the vector of constants and $X$ is the vector of variables.*Note*: Nonlinear terms that involve variables (such as x and y) are not allowed in linear expressions. Terms that are not allowed in linear expressions include - multiplication of two or more variables (such as x times y), - quadratic and higher order terms (such as x squared or x cubed), - exponents, - logarithms,- absolute values. What is a linear constraint?A linear constraint is expressed by an equality or inequality as follows:- $linear\_expression = linear\_expression$- $linear\_expression \le linear\_expression$- $linear\_expression \ge linear\_expression$Any linear constraint can be rewritten as one or two expressions of the type linear expression is less than or equal to zero.Note that *strict* inequality operators (that is, $>$ and $<$) are not allowed in linear constraints. What is a continuous variable?A variable (or _decision_ variable) is an unknown of the problem. Continuous variables are variables the set of real numbers (or an interval). Restrictions on their values that create discontinuities, for example a restriction that a variable should take integer values, are not allowed. Symbolic representation of an LPA typical symbolic representation of a Linear Programming is as follows:$minimize \sum c_{i} x_{i}\\\\subject\ to:\\\ a_{11}x_{1} + a_{12} x_{2} ... + a_{1n} x_{n} \ge b_{1}\\\ a_{21}x_{1} + a_{22} x_{2} ... + a_{2n} x_{n} \ge b_{2}\\...\ a_{m1}x_{1} + a_{m2} x_{2} ... 
+ a_{mn} x_{n} \ge b_{m}\\x_{1}, x_{2}...x_{n} \ge 0$This can be written in a concise form using matrices and vectors as:$min\ C^{t}x\\s.\ t.\ Ax \ge B\\x \ge 0$Where $x$ denotes the vector of variables with size $n$, $A$ denotes the matrix of constraint coefficients, with $m$ rows and $n$ columns and $B$ is a vector of numbers with size $m$. Characteristics of a linear program Example: a production problemIn this topic, you’ll analyze a simple production problem in terms of decision variables, the objective function, and constraints. You’ll learn how to write an LP formulation of this problem, and how to construct a graphical representation of the model. You’ll also learn what feasible, optimal, infeasible, and unbounded mean in the context of LP. Problem description: telephone productionA telephone company produces and sells two kinds of telephones, namely desk phones and cellular phones. Each type of phone is assembled and painted by the company. The objective is to maximize profit, and the company has to produce at least 100 of each type of phone.There are limits in terms of the company’s production capacity, and the company has to calculate the optimal number of each type of phone to produce, while not exceeding the capacity of the plant. Writing a descriptive modelIt is good practice to start with a descriptive model before attempting to write a mathematical model. In order to come up with a descriptive model, you should consider what the decision variables, objectives, and constraints for the business problem are, and write these down in words.In order to come up with a descriptive model, consider the following questions:- What are the decision variables? - What is the objective? - What are the constraints? Telephone production: a descriptive modelA possible descriptive model of the telephone production problem is as follows:- Decision variables: - Number of desk phones produced (DeskProduction) - Number of cellular phones produced (CellProduction)- Objective: Maximize profit- Constraints: 1. The DeskProduction should be greater than or equal to 100. 2. The CellProduction should be greater than or equal to 100. 3. The assembly time for DeskProduction plus the assembly time for CellProduction should not exceed 400 hours. 4. The painting time for DeskProduction plus the painting time for CellProduction should not exceed 490 hours. 
Writing a mathematical modelConvert the descriptive model into a mathematical model:- Use the two decision variables DeskProduction and CellProduction- Use the data given in the problem description (remember to convert minutes to hours where appropriate)- Write the objective as a mathematical expression- Write the constraints as mathematical expressions (use “=”, “=”, and name the constraints to describe their purpose)- Define the domain for the decision variables Telephone production: a mathematical modelTo express the last two constraints, we model assembly time and painting time as linear combinations of the two productions, resulting in the following mathematical model:$maximize:\\\ \ 12\ desk\_production + 20\ cell\_production\\subject\ to: \\\ \ desk\_production >= 100 \\\ \ cell\_production >= 100 \\\ \ 0.2\ desk\_production + 0.4\ cell\_production <= 400 \\\ \ 0.5\ desk\_production + 0.4\ cell\_production <= 490 \\$ Using DOcplex to formulate the mathematical model in PythonUse the [DOcplex](https://cdn.rawgit.com/IBMDecisionOptimization/docplex-doc/2.0.15/docs/index.html) Python library to write the mathematical model in Python.This is done in four steps:- create a instance of docplex.mp.Model to hold all model objects- create decision variables,- create linear constraints,- finally, define the objective.But first, we have to import the class `Model` from the docplex module. Use IBM Decision Optimization CPLEX Modeling for PythonLet's use the DOcplex Python library to write the mathematical model in Python. Step 1: Download the libraryFirst install *docplex* if needed.
###Code
import sys
try:
import docplex.mp
except:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install docplex
else:
!pip install --user docplex
###Output
_____no_output_____
###Markdown
Step 2: Set up the prescriptive engine * Subscribe to our private cloud offer or Decision Optimization on Cloud solve service [here](https://developer.ibm.com/docloud) if you do not want to use a local solver. * Get the service URL and your personal API key and enter your credentials here if accurate:
###Code
url = None
key = None
###Output
_____no_output_____
###Markdown
Step 3: Set up the prescriptive model Create the modelAll objects of the model belong to one model instance.
###Code
# first import the Model class from docplex.mp
from docplex.mp.model import Model
# create one model instance, with a name
m = Model(name='telephone_production')
###Output
_____no_output_____
###Markdown
Define the decision variables - The continuous variable `desk` represents the production of desk telephones. - The continuous variable `cell` represents the production of cell phones.
###Code
# by default, all variables in Docplex have a lower bound of 0 and infinite upper bound
desk = m.continuous_var(name='desk')
cell = m.continuous_var(name='cell')
###Output
_____no_output_____
###Markdown
Set up the constraints - Desk and cell phone production must both be greater than 100 - Assembly time is limited - Painting time is limited.
###Code
# write constraints
# constraint #1: desk production is greater than 100
m.add_constraint(desk >= 100)
# constraint #2: cell production is greater than 100
m.add_constraint(cell >= 100)
# constraint #3: assembly time limit
ct_assembly = m.add_constraint( 0.2 * desk + 0.4 * cell <= 400)
# constraint #4: painting time limit
ct_painting = m.add_constraint( 0.5 * desk + 0.4 * cell <= 490)
###Output
_____no_output_____
###Markdown
Express the objectiveWe want to maximize the expected revenue.
###Code
m.maximize(12 * desk + 20 * cell)
###Output
_____no_output_____
###Markdown
A few remarks about how we formulated the mathematical model in Python using DOcplex: - all arithmetic operations (+, \*, \-) are done using Python operators - comparison operators used in writing linear constraints use Python comparison operators too. Print information about the model We can print information about the model to see how many objects of each type it holds:
###Code
m.print_information()
###Output
_____no_output_____
###Markdown
Graphical representation of a Linear ProblemA simple 2-dimensional LP (with 2 decision variables) can be represented graphically using a x- and y-axis. This is often done to demonstrate optimization concepts. To do this, follow these steps:- Assign one variable to the x-axis and the other to the y-axis.- Draw each of the constraints as you would draw any line in 2 dimensions.- Use the signs of the constraints (=, =) to determine which side of each line falls within the feasible region (allowable solutions).- Draw the objective function as you would draw any line in 2 dimensions, by substituting any value for the objective (for example, 12 * DeskProduction + 20 * CellProduction = 4000) Feasible set of solutions This graphic shows the feasible region for the telephone problem. Recall that the feasible region of an LP is the region delimited by the constraints, and it represents all feasible solutions. In this graphic, the variables DeskProduction and CellProduction are abbreviated to be desk and cell instead. Look at this diagram and search intuitively for the optimal solution. That is, which combination of desk and cell phones will yield the highest profit. The optimal solution To find the optimal solution to the LP, you must find values for the decision variables, within the feasible region, that maximize profit as defined by the objective function. In this problem, the objective function is to maximize $$12 * desk + 20 * cell $$To do this, first draw a line representing the objective by substituting a value for the objective. Next move the line up (because this is a maximization problem) to find the point where the line last touches the feasible region. Note that all the solutions on one objective line, such as AB, yield the same objective value. Other values of the objective will be found along parallel lines (such as line CD). In a profit maximizing problem such as this one, these parallel lines are often called isoprofit lines, because all the points along such a line represent the same profit. In a cost minimization problem, they are known as isocost lines. Since all isoprofit lines have the same slope, you can find all other isoprofit lines by pushing the objective value further out, moving in parallel, until the isoprofit lines no longer intersect the feasible region. The last isoprofit line that touches the feasible region defines the largest (therefore maximum) possible value of the objective function. In the case of the telephone production problem, this is found along line EF. The optimal solution of a linear program always belongs to an extreme point of the feasible region (that is, at a vertex or an edge). Solve with the Decision Optimization solve serviceIf url and key are None, the Modeling layer will look for a local runtime, otherwise will use the credentials.Look at the documentation for a good understanding of the various solving/generation modes.If you're using a Community Edition of CPLEX runtimes, depending on the size of the problem, the solve stage may fail and will need a paying subscription or product installation.In any case, `Model.solve()` returns a solution object in Python, containing the optimal values of decision variables, if the solve succeeds, or else it returns `None`.
###Code
s = m.solve(url=url, key=key)
m.print_solution()
###Output
_____no_output_____
###Markdown
In this case, CPLEX has found an optimal solution at (300, 850). You can check that this point is indeed an extreme point of the feasible region. Multiple Optimal Solutions It is possible that an LP has multiple optimal solutions. At least one optimal solution will be at a vertex. By default, the CPLEX® Optimizer reports the first optimal solution found. Example of multiple optimal solutions This graphic shows an example of an LP with multiple optimal solutions. This can happen when the slope of the objective function is the same as the slope of one of the constraints, in this case line AB. All the points on line AB are optimal solutions, with the same objective value, because they are all extreme points within the feasible region. Binding and nonbinding constraints A constraint is binding if the constraint becomes an equality when the solution values are substituted. Graphically, binding constraints are constraints where the optimal solution lies exactly on the line representing that constraint. In the telephone production problem, the constraint limiting time on the assembly machine is binding: $$0.2\ desk + 0.4\ cell \le 400, \quad desk = 300,\ cell = 850: \quad 0.2(300) + 0.4(850) = 400$$ The same is true for the time limit on the painting machine: $$0.5\ desk + 0.4\ cell \le 490: \quad 0.5(300) + 0.4(850) = 490$$ On the other hand, the requirement that at least 100 of each telephone type be produced is nonbinding because the left and right hand sides are not equal: $$desk \ge 100: \quad 300 \neq 100$$ Infeasibility A model is infeasible when no solution exists that satisfies all the constraints. This may be because: the model formulation is incorrect, the data is incorrect, or the model and data are correct but represent a real-world conflict in the system being modeled. When faced with an infeasible model, it's not always easy to identify the source of the infeasibility. DOcplex helps you identify potential causes of infeasibilities, and it will also suggest changes to make the model feasible. An example of an infeasible problem This graphic shows an example of an infeasible constraint set for the telephone production problem. Assume in this case that the person entering data had accidentally entered lower bounds on the production of 1100 instead of 100. The arrows show the direction of the feasible region with respect to each constraint. This data entry error moves the lower bounds on production higher than the upper bounds from the assembly and painting constraints, meaning that the feasible region is empty and there are no possible solutions. Infeasible models in DOcplex Calling `solve()` on an infeasible model returns None. Let's experiment with this in DOcplex. First, we take a copy of our model and add an extra infeasible constraint which states that desk telephone production must be greater than 1100
###Code
# create a new model, copy of m
im = m.copy()
# get the 'desk' variable of the new model from its name
idesk = im.get_var_by_name('desk')
# add a new (infeasible) constraint
im.add_constraint(idesk >= 1100);
# solve the new problem; we expect a result of None as the model is now infeasible
ims = im.solve(url=url, key=key)
if ims is None:
print('- model is infeasible')
###Output
_____no_output_____
###Markdown
Correcting infeasible modelsTo correct an infeasible model, you must use your knowledge of the real-world situation you are modeling.If you know that the model is realizable, you can usually manually construct an example of a feasible solution and use it to determine where your model or data is incorrect. For example, the telephone production manager may input the previous month's production figures as a solution to the model and discover that they violate the erroneously entered bounds of 1100.DOcplex can help perform infeasibility analysis, which can get very complicated in large models. In this analysis, DOcplex may suggest relaxing one or more constraints. Relaxing constraints by changing the modelIn the case of LP models, the term “relaxation” refers to changing the right hand side of the constraint to allow some violation of the original constraint.For example, a relaxation of the assembly time constraint is as follows:$$0.2 \ desk + 0.4\ cell <= 440$$Here, the right hand side has been relaxed from 400 to 440, meaning that you allow more time for assembly than originally planned. Relaxing model by converting hard constraints to soft constraints- A _soft_ constraint is a constraint that can be violated in some circumstances. - A _hard_ constraint cannot be violated under any circumstances. So far, all constraints we have encountered are hard constraints.Converting hard constraints to soft is one way to resolve infeasibilities.The original hard constraint on assembly time is as follows:$$0.2 \ desk + 0.4 \ cell <= 400$$You can turn this into a soft constraint if you know that, for example, an additional 40 hours of overtime are available at an additional cost. First add an overtime term to the right-hand side:$$0.2 \ desk + 0.4 \ cell <= 400 + overtime$$Next, add a hard limit to the amount of overtime available:$$overtime <= 40$$Finally, add an additional cost to the objective to penalize use of overtime. Assume that in this case overtime costs an additional $2/hour, then the new objective becomes:$$maximize\ 12 * desk + 20 * cell — 2 * overtime$$ Implement the soft constraint model using DOcplexFirst and extra variable for overtime, with an upper bound of 100. This suffices to express the hard limit on overtime.
###Code
overtime = m.continuous_var(name='overtime', ub=40)
###Output
_____no_output_____
###Markdown
Modify the assembly time constraint by adding the overtime variable to its right-hand side. *Note*: this operation modifies the model by performing a _side-effect_ on the constraint object. DOcplex allows dynamic editing of model elements.
###Code
ct_assembly.rhs = 400 + overtime
###Output
_____no_output_____
###Markdown
Last, modify the objective expression to add the penalization term. Note that this simply uses the Python subtraction operator.
###Code
m.maximize(12*desk + 20 * cell - 2 * overtime)
###Output
_____no_output_____
###Markdown
And solve again using DOcplex:
###Code
s2 = m.solve(url=url, key=key)
m.print_solution()
###Output
_____no_output_____
###Markdown
Unbounded Variable vs. Unbounded modelA variable is unbounded when one or both of its bounds is infinite. A model is unbounded when its objective value can be increased or decreased without limit. The fact that a variable is unbounded does not necessarily influence the solvability of the model and should not be confused with a model being unbounded. An unbounded model is almost certainly not correctly formulated. While infeasibility implies a model where constraints are too limiting, unboundedness implies a model where an important constraint is either missing or not restrictive enough.By default, DOcplex variables are unbounded: their upper bound is infinite (but their lower bound is zero). Unbounded feasible regionThe telephone production problem would become unbounded if, for example, the constraints on the assembly and painting time were neglected. The feasible region would then look as in this diagram where the objective value can increase without limit, up to infinity, because there is no upper boundary to the region. Algorithms for solving LPsThe IBM® CPLEX® Optimizers to solve LP problems in CPLEX include:- Simplex Optimizer- Dual-simplex Optimizer- Barrier Optimizer The simplex algorithmThe simplex algorithm, developed by George Dantzig in 1947, was the first generalized algorithm for solving LP problems. It is the basis of many optimization algorithms. The simplex method is an iterative method. It starts with an initial feasible solution, and then tests to see if it can improve the result of the objective function. It continues until the objective function cannot be further improved.The following diagram illustrates how the simplex algorithm traverses the boundary of the feasible region for the telephone production problem. The algorithm, starts somewhere along the edge of the shaded feasible region, and advances vertex-by-vertex until arriving at the vertex that also intersects the optimal objective line. Assume it starts at the red dot indicated on the diagam. The revised simplex algorithmTo improve the efficiency of the simplex algorithm, George Dantzig and W. Orchard-Hays revised it in 1953. CPLEX uses the revised simplex algorithm, with a number of improvements. The CPLEX Optimizers are particularly efficient and can solve very large problems rapidly. You can tune some CPLEX Optimizer parameters to change the algorithmic behavior according to your needs. The Dual simple algorithm The dual of a LPThe concept of duality is important in linear programming. Every LP problem has an associated LP problem known as its _dual_. The dual of this associated problem is the original LP problem (known as the primal problem). If the primal problem is a minimization problem, then the dual problem is a maximization problem and vice versa. A primal-dual pair *Primal (P)* -------------------- $max\ z=\sum_{i} c_{i}x_{i}$ *Dual (D)*------------------------------- $min\ w= \sum_{j}b_{j}y_{j}$ - Each constraint in the primal has an associated dual variable, yi.- Any feasible solution to D is an upper bound to P, and any feasible solution to P is a lower bound to D.- In LP, the optimal objective values of D and P are equivalent, and occurs where these bounds meet.- The dual can help solve difficult primal problems by providing a bound that in the best case equals the optimal solution to the primal problem. 
Dual pricesIn any solution to the dual, the values of the dual variables are known as the dual prices, also called shadow prices.For each constraint in the primal problem, its associated dual price indicates how much the dual objective will change with a unit change in the right hand side of the constraint.The dual price of a non-binding constraint is zero. That is, changing the right hand side of the constraint will not affect the objective value.The dual price of a binding constraint can help you make decisions regarding the constraint.For example, the dual price of a binding resource constraint can be used to determine whether more of the resource should be purchased or not. The dual simplex algorithmThe simplex algorithm works by finding a feasible solution and moving progressively toward optimality. The dual simplex algorithm implicitly uses the dual to try and find an optimal solution to the primal as early as it can, and regardless of whether the solution is feasible or not. It then moves from one vertex to another, gradually decreasing the infeasibility while maintaining optimality, until an optimal feasible solution to the primal problem is found. In CPLEX, the Dual-simplex Optimizer is the first choice for most LP problems. Basic solutions and basic variablesYou learned earlier that the simplex algorithm travels from vertex to vertex to search for the optimal solution. A solution at a vertex is known as a _basic_ solution. Without getting into too much detail, it's worth knowing that part of the simplex algorithm involves setting a subset of variables to zero at each iteration. These variables are known as non-basic variables. The remaining variables are the _basic_ variables. The concepts of basic solutions and variables are relevant in the definition of reduced costs that follows next. Reduced CostsThe reduced cost of a variable gives an indication of the amount the objective will change with a unit increase in the variable value.Consider the simplest form of an LP:$minimize\ c^{t}x\\s.t. \\Ax = b \\x \ge 0$If $y$ represents the dual variables for a given basic solution, then the reduced costs are defined as: $$c - y^{t}A$$Such a basic solution is optimal if: $$ c - y^{t}A \ge 0$$If all reduced costs for this LP are non-negative, it follows that the objective value can only increase with a change in the variable value, and therefore the solution (when minimizing) is optimal.DOcplex lets you acces sreduced costs of variable, after a successful solve. Let's experiment with the two decision variables of our problem: Getting reduced cost values with DOcplexDOcplex lets you access reduced costs of variable, after a successful solve. Let's experiment with the two decision variables of our problem:
###Code
print('* desk variable has reduced cost: {0}'.format(desk.reduced_cost))
print('* cell variable has reduced cost: {0}'.format(cell.reduced_cost))
###Output
_____no_output_____
###Markdown
Default optimality criteria for CPLEX optimizerBecause CPLEX Optimizer operates on finite precision computers, it uses an optimality tolerance to test the reduced costs. The default optimality tolerance is –1e-6, with optimality criteria for the simplest form of an LP then being:$$c — y^{t}A> –10^{-6}$$You can adjust this optimality tolerance, for example if the algorithm takes very long to converge and has already achieved a solution sufficiently close to optimality. Reduced Costs and multiple optimal solutionsIn the earlier example you saw how one can visualize multiple optimal solutions for an LP with two variables. For larger LPs, the reduced costs can be used to determine whether multiple optimal solutions exist. Multiple optimal solutions exist when one or more non-basic variables with a zero reduced cost exist in an optimal solution (that is, variable values that can change without affecting the objective value). In order to determine whether multiple optimal solutions exist, you can examine the values of the reduced costs with DOcplex. Slack valuesFor any solution, the difference between the left and right hand sides of a constraint is known as the _slack_ value for that constraint. For example, if a constraint states that f(x) <= 100, and in the solution f(x) = 80, then the slack value of this constraint is 20.In the earlier example, you learned about binding and non-binding constraints. For example, f(x) <= 100 is binding if f(x) = 100, and non-binding if f(x) = 80.The slack value for a binding constraint is always zero, that is, the constraint is met exactly.You can determine which constraints are binding in a solution by examining the slack values with DOcplex. This might help to better interpret the solution and help suggest which constraints may benefit from a change in bounds or a change into a soft constraint. Accessing slack values with DOcplexAs an example, let's examine the slack values of some constraints in our problem, after we revert the change to soft constrants
###Code
# revert soft constraints
ct_assembly.rhs = 440
s3 = m.solve(url=url, key=key)
# now get slack value for assembly constraint: expected value is 40
print('* slack value for assembly time constraint is: {0}'.format(ct_assembly.slack_value))
# get slack value for painting time constraint, expected value is 0.
print('* slack value for painting time constraint is: {0}'.format(ct_painting.slack_value))
###Output
_____no_output_____
###Markdown
DegeneracyIt is possible that multiple non-optimal solutions with the same objective value exist. As the simplex algorithm attempts to move in the direction of an improved objective value, it might happen that the algorithm starts cycling between non-optimal solutions with equivalent objective values. This is known as _degeneracy_. Modern LP solvers, such as CPLEX Simplex Optimizer, have built-in mechanisms to help escape such cycling by using perturbation techniques involving the variable bounds.If the default algorithm does not break the degenerate cycle, it's a good idea to try some other algorithms, for example the Dual-simplex Optimizer. Problem that are primal degenerate, are often not dual degenerate, and vice versa. Setting a LP algorithm with DOcplexUsers can change the algorithm by editing the `lpmethod` parameter of the model.We won't go into details here, it suffices to know this parameter accepts an integer from 0 to 6, where 0 denotes automatic choice of the algorithm, 1 is for primal simplex, 2 is for dual simplex, and 4 is for barrier...For example, choosing the barrier algorithm is done by setting value 4 to this parameter. We access the `parameters` property of the model and from there, assign the `lpmethod` parameter
###Code
m.parameters.lpmethod = 4
m.solve(url=url, key=key, log_output=True)
###Output
_____no_output_____ |
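###Markdown
As an independent sanity check (not part of the original tutorial), the initial telephone production model (before the overtime relaxation) can also be solved with SciPy's `linprog`; it should reproduce the optimum of 300 desk phones and 850 cell phones with a profit of 20600:
```python
from scipy.optimize import linprog

# maximize 12*desk + 20*cell  <=>  minimize -(12*desk + 20*cell)
c = [-12, -20]
A_ub = [[0.2, 0.4],   # assembly time limit
        [0.5, 0.4]]   # painting time limit
b_ub = [400, 490]
bounds = [(100, None), (100, None)]  # at least 100 of each phone type

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)  # expected: [300. 850.] 20600.0
```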
exercises/assignments/dlb-1-neural-networks.ipynb | ###Markdown
[](https://colab.research.google.com/github/JorisRoels/deep-learning-biology/blob/main/exercises/assignments/dlb-1-neural-networks.ipynb) Exercise 1: Neural NetworksIn this notebook, we will be using neural networks to identify enzyme sequences from protein sequences. The structure of these exercises is as follows: 1. [Import libraries and download data](scrollTo=ScagUEMTMjlK)2. [Data pre-processing](scrollTo=ohZHyOTnI35b)3. [Building a neural network with PyTorch](scrollTo=kIry8iFZI35y)4. [Training & validating the network](scrollTo=uXrEb0rTI35-)5. [Improving the model](scrollTo=o76Hxj7-Mst5)6. [Understanding the model](scrollTo=Ult7CTpCMxTi)This notebook is largely based on the research published in: Li, Y., Wang, S., Umarov, R., Xie, B., Fan, M., Li, L., & Gao, X. (2018). DEEPre: Sequence-based enzyme EC number prediction by deep learning. Bioinformatics, 34(5), 760–769. https://doi.org/10.1093/bioinformatics/btx680 1. Import libraries and download dataLet's start with importing the necessary libraries.
###Code
import pickle
import numpy as np
import random
import os
import matplotlib.pyplot as plt
plt.rcdefaults()
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.manifold import TSNE
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from progressbar import ProgressBar, Percentage, Bar, ETA, FileTransferSpeed
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data as data
from torch.utils.data import DataLoader
from torchvision import datasets
import gdown
import zipfile
import os
###Output
_____no_output_____
###Markdown
As you will notice, Colab environments come with quite a large set of libraries pre-installed. If you need to import a module that is not yet specified, you can add it in the previous cell (make sure to run it again). If the module is not installed, you can install it with `pip`. To make your work reproducible, it is advised to initialize all modules that have stochastic functionality with a fixed seed. Re-running this script should then give the same results as long as the seed is fixed.
###Code
# make sure the results are reproducible
seed = 0
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# run all computations on the GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print('Running computations with %s' % torch.device(device))
if torch.cuda.is_available():
print(torch.cuda.get_device_properties(device))
###Output
_____no_output_____
###Markdown
We will now download the required data from a public repository. The data is stored as a zip archive and automatically extracted into the current directory.
###Code
# fields
url = 'http://data.bits.vib.be/pub/trainingen/DeepLearning/data-1.zip'
cmp_data_path = 'data.zip'
# download the compressed data
gdown.download(url, cmp_data_path, quiet=False)
# extract the data
zip = zipfile.ZipFile(cmp_data_path)
zip.extractall('')
# remove the compressed data
os.remove(cmp_data_path)
###Output
_____no_output_____
###Markdown
2. Data pre-processing The data are protein sequences, stored in binary format as pickle files. We encode the data as a binary matrix $X$ where the value at position $(i,j)$ indicates the presence or absence of protein $i$ in sequence $j$: $$X_{i,j}=\left\{ \begin{array}{ll} 1 \text{ (protein } i \text{ is present in sequence } j \text{)}\\ 0 \text{ (protein } i \text{ is not present in sequence } j \text{)} \end{array} \right.$$The corresponding labels $y$ are also binary; they separate the enzyme from the non-enzyme sequences: $$y_{j}=\left\{ \begin{array}{ll} 1 \text{ (sequence } j \text{ is an enzyme)}\\ 0 \text{ (sequence } j \text{ is not an enzyme)} \end{array} \right.$$
###Code
def encode_data(f_name_list, proteins):
with open(f_name_list,'rb') as f:
name_list = pickle.load(f)
encoding = []
widgets = ['Encoding data: ', Percentage(), ' ', Bar(), ' ', ETA()]
pbar = ProgressBar(widgets=widgets, maxval=len(name_list))
pbar.start()
for i in range(len(name_list)):
single_encoding = np.zeros(len(proteins))
if name_list[i] != []:
for protein_name in name_list[i]:
single_encoding[proteins.index(protein_name)] = 1
encoding.append(single_encoding)
pbar.update(i)
pbar.finish()
return np.asarray(encoding, dtype='int8')
# specify where the data is stored
data_dir = 'data-1/'
f_name_list_enzymes = os.path.join(data_dir, 'Pfam_name_list_new_data.pickle')
f_name_list_nonenzyme = os.path.join(data_dir, 'Pfam_name_list_non_enzyme.pickle')
f_protein_names = os.path.join(data_dir, 'Pfam_model_names_list.pickle')
# load the different proteins
with open(f_protein_names,'rb') as f:
proteins = pickle.load(f)
num_proteins = len(proteins)
# encode the sequences to a binary matrix
enzymes = encode_data(f_name_list_enzymes, proteins)
non_enzymes = encode_data(f_name_list_nonenzyme, proteins)
# concatenate everything together
X = np.concatenate([enzymes, non_enzymes], axis=0)
# the labels are binary (1 for enzymes, 0 for non-enzymes) and are one-hot encoded
y = np.concatenate([np.ones([22168,1]), np.zeros([22168,1])], axis=0).flatten()
# print a few statistics
print('There are %d sequences with %d protein measurements' % (X.shape[0], X.shape[1]))
print('There are %d enzyme and %d non-enzyme sequences' % (enzymes.shape[0], non_enzymes.shape[0]))
###Output
_____no_output_____
###Markdown
Here is a quick glimpse in the data. For a random selection of proteins, we plot the amount of times it was counted in the enzyme and non-enzyme sequences.
###Code
# selection of indices for the proteins
inds = np.random.randint(num_proteins, size=20)
proteins_subset = [proteins[i] for i in inds]
# compute the sum over the sequences
enzymes_sum = np.sum(enzymes, axis=1)
non_enzymes_sum = np.sum(non_enzymes, axis=1)
# plot the counts on the subset of proteins
df = pd.DataFrame({'Enzyme': enzymes_sum[inds], 'Non-enzyme': non_enzymes_sum[inds]}, index=proteins_subset)
df.plot.barh()
plt.xlabel('Counts')
plt.ylabel('Protein')
plt.show()
###Output
_____no_output_____
###Markdown
To evaluate our approaches properly, we will split the data into a training and a test set. We will use the training set to train our algorithms and the test set as separate, unseen data to evaluate the performance of our models.
###Code
test_ratio = 0.5 # we will use 50% of the data for testing
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=test_ratio, random_state=seed)
print('%d sequences for training and %d for testing' % (x_train.shape[0], x_test.shape[0]))
###Output
_____no_output_____
###Markdown
3. Building a neural network with PyTorch Now we have to implement the neural network and train it. For this, we will use the high-level deep learning library [PyTorch](https://pytorch.org/). PyTorch is a well-known, open-source machine learning framework that has a comprehensive set of tools and libraries and accelerates research prototyping. It also supports transparent training of machine learning models on GPU devices, which can reduce runtimes significantly. The full documentation can be found [here](https://pytorch.org/docs/stable/index.html). Let's start by defining the architecture of the neural network. **Exercise**: build a network with a single hidden layer in PyTorch: - The first layer will be a [fully connected layer](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html#torch.nn.Linear) with [relu](https://pytorch.org/docs/stable/nn.functional.html?highlight=relu#torch.nn.functional.relu) activation that transforms the input features to a 512-dimensional (hidden) feature vector representation. - The output layer is another [fully connected layer](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html#torch.nn.Linear) that transforms the hidden representation to a class probability distribution. - Print the network architecture to validate your architecture. - Run the network on a random batch of samples. Note that you have to transfer the numpy ndarray type inputs to floating point [PyTorch tensors](https://pytorch.org/docs/stable/tensors.html).
###Code
# define the number of classes
"""
INSERT CODE HERE
"""
# The network will inherit the Module class
class Net(nn.Module):
def __init__(self, n_features=512):
super(Net, self).__init__()
"""
INSERT CODE HERE
"""
def forward(self, x):
"""
INSERT CODE HERE
"""
return x
# initialize the network and print the architecture
"""
INSERT CODE HERE
"""
# run the network on a batch of samples
# note that we have to transfer the numpy ndarray type inputs to float torch tensors
"""
INSERT CODE HERE
"""
###Output
_____no_output_____
###Markdown
**Exercise**: manually compute the number of parameters in this network and verify this with PyTorch; a generic counting idiom is sketched below.
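As a hint for the verification part, PyTorch exposes all trainable tensors through `parameters()`; a generic counting sketch on a toy model (not the exercise network) looks as follows:
```python
import torch.nn as nn

toy = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
n_params = sum(p.numel() for p in toy.parameters() if p.requires_grad)
print(n_params)  # (16*8 + 8) + (8*2 + 2) = 154
```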
###Code
"""
INSERT CODE HERE
"""
print('There are %d trainable parameters in the network' % n_params)
###Output
_____no_output_____
###Markdown
4. Training and validating the networkTo train this network, we still need two things: a loss function and an optimizer. For the loss function, we will use the commonly used cross entropy loss for classification. For the optimizer, we will use stochastic gradient descent (SGD) with a learning rate of 0.1.
###Code
learning_rate = 0.1
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=learning_rate)
###Output
_____no_output_____
###Markdown
Great. Now it's time to train our model and implement backpropagation. Fortunately, PyTorch makes this relatively easy. A single optimization iteration consists of the following steps (a generic sketch follows below): 1. Sample a batch from the training data: we use the convenient [data loading](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html) system provided by PyTorch. You can simply enumerate over the `DataLoader` objects. 2. Set all gradients equal to zero. You can use the [`zero_grad()`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=zero_grad#torch.nn.Module.zero_grad) function. 3. Feed the batch to the network and compute the outputs. 4. Compare the outputs to the labels with the loss function. Note that the loss function itself is a `Module` object as well and thus can be treated in a similar fashion to the network for computing outputs. 5. Backpropagate the gradients w.r.t. the computed loss. You can use the [`backward()`](https://pytorch.org/docs/stable/autograd.html?highlight=backward#torch.autograd.backward) function for this. 6. Apply one step of the optimization (e.g. gradient descent). For this, you will need the optimizer's [`step()`](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer.step) function. **Exercise**: train the model with the following settings: - Train the network for 50 epochs - Use a mini batch size of 1024 - Track the performance of the classifier by additionally providing the test data. We have already provided a validation function that tracks the accuracy. This function expects a network module, a binary matrix $X$ of sequences, their corresponding labels $y$ and the batch size (for efficiency reasons) as inputs.
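Purely as an illustration of the six steps above (on synthetic data, so it does not give away the exercise solution for this dataset), a minimal training loop could look like this:
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# tiny synthetic problem, only to illustrate the six steps
X = torch.randn(256, 16)
y = (X.sum(dim=1) > 0).long()
loader = DataLoader(TensorDataset(X, y), batch_size=32)

model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

for inputs, labels in loader:          # 1. sample a batch
    optimizer.zero_grad()              # 2. reset the gradients
    outputs = model(inputs)            # 3. forward pass
    loss = loss_fn(outputs, labels)    # 4. compute the loss
    loss.backward()                    # 5. backpropagate
    optimizer.step()                   # 6. one optimization step
```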
###Code
# dataset useful for sampling (and many other things)
class ProteinSeqDataset(data.Dataset):
def __init__(self, data, labels):
self.data = data
self.labels = labels
def __getitem__(self, i):
return self.data[i], self.labels[i]
def __len__(self):
return len(self.data)
def validate_accuracy(net, X, y, batch_size=1024):
# evaluation mode
net.eval()
# save predictions
y_preds = np.zeros((len(y)))
for b in range(len(y) // batch_size):
# sample a batch
inputs = X[b*batch_size: (b+1)*batch_size]
# transform to tensors
inputs = torch.from_numpy(inputs).float().to(device)
# forward call
y_pred = net(inputs)
y_pred = F.softmax(y_pred, dim=1)[:, 1] > 0.5
# save predictions
y_preds[b*batch_size: (b+1)*batch_size] = y_pred.detach().cpu().numpy()
# remaining batch
b = len(y) // batch_size
inputs = torch.from_numpy(X[b*batch_size:]).float().to(device)
y_pred = net(inputs)
y_pred = F.softmax(y_pred, dim=1)[:, 1] > 0.5
y_preds[b*batch_size:] = y_pred.detach().cpu().numpy()
# compute accuracy
acc = accuracy_score(y, y_preds)
return acc
# implementation of a single training epoch
def train_epoch(net, loader, loss_fn, optimizer):
"""
INSERT CODE HERE
"""
return -1
# implementation of a single testing epoch
def test_epoch(net, loader, loss_fn):
"""
INSERT CODE HERE
"""
return -1
def train_net(net, train_loader, test_loader, loss_fn, optimizer, epochs):
# transfer the network to the GPU
net = net.to(device)
train_loss = np.zeros((epochs))
test_loss = np.zeros((epochs))
train_acc = np.zeros((epochs))
test_acc = np.zeros((epochs))
for epoch in range(epochs):
# training
train_loss[epoch] = train_epoch(net, train_loader, loss_fn, optimizer)
train_acc[epoch] = validate_accuracy(net, x_train, y_train)
# testing
test_loss[epoch] = test_epoch(net, test_loader, loss_fn)
test_acc[epoch] = validate_accuracy(net, x_test, y_test)
print('Epoch %5d - Train loss: %.6f - Train accuracy: %.6f - Test loss: %.6f - Test accuracy: %.6f'
% (epoch, train_loss[epoch], train_acc[epoch], test_loss[epoch], test_acc[epoch]))
return train_loss, test_loss, train_acc, test_acc
# parameters
"""
INSERT CODE HERE
"""
# build a training and testing dataloader that handles batch sampling
train_data = ProteinSeqDataset(x_train, y_train)
train_loader = DataLoader(train_data, batch_size=batch_size)
test_data = ProteinSeqDataset(x_test, y_test)
test_loader = DataLoader(test_data, batch_size=batch_size)
# start training
train_loss, test_loss, train_acc, test_acc = train_net(net, train_loader, test_loader, loss_fn, optimizer, n_epochs)
###Output
_____no_output_____
###Markdown
The code below visualizes the learning curves: these curves illustrate how the loss on the train and test set decays over time. We also report similar curves for the train and test accuracy, and print the final accuracy.
###Code
def plot_learning_curves(train_loss, test_loss, train_acc, test_acc):
plt.figure(figsize=(11, 4))
plt.subplot(1, 2, 1)
plt.plot(train_loss)
plt.plot(test_loss)
plt.xlabel('epochs')
plt.ylabel('loss')
plt.legend(('Train', 'Test'))
plt.subplot(1, 2, 2)
plt.plot(train_acc)
plt.plot(test_acc)
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.legend(('Train', 'Test'))
plt.show()
# plot the learning curves (i.e. train/test loss and accuracy)
plot_learning_curves(train_loss, test_loss, train_acc, test_acc)
# report final accuracy
print('The model obtains an accuracy of %.2f%%' % (100*test_acc[-1]))
###Output
_____no_output_____
###Markdown
5. Improving the model We will try to improve the model by improving the training time and mitigating the overfitting to some extent. **Exercise**: Improve the model by implementing the following adjustments: - Train the network based on [`Adam`](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam) optimization instead of stochastic gradient descent. The Adam optimizer adapts its learning rate over time and therefore improves convergence significantly. For more details on the algorithm, we refer to the [original published paper](https://arxiv.org/pdf/1412.6980.pdf). You can significantly reduce the learning rate (e.g. 0.0001) and the number of training epochs (e.g. 20). - The first adjustment to avoid overfitting is to reduce the size of the network. At first sight this may seem strange because it reduces the capacity of the network. However, large networks are more likely to focus on details in the training data because of the redundant number of neurons in the hidden layer. Experiment with smaller hidden representations (e.g. 32 or 16). - The second adjustment to mitigate overfitting is [Dropout](https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html). During training, Dropout layers randomly switch off neurons (i.e. their value is temporarily set to zero). This forces the network to use the other neurons to make an appropriate decision. At test time, the dropout layers are ignored and no neurons are switched off. The fraction of neurons that is switched off during training is called the dropout factor (e.g. 0.50). For more details, we refer to the [original published paper](https://jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf).
###Code
# The network will inherit the Module class
class ImprovedNet(nn.Module):
def __init__(self, n_features=512, p=0.5):
super(ImprovedNet, self).__init__()
"""
INSERT CODE HERE
"""
def forward(self, x):
"""
INSERT CODE HERE
"""
return x
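# --- Added example (an illustrative sketch, not the reference solution): a network with a
# --- smaller hidden layer plus Dropout, as suggested above. The layer names, the hidden size
# --- of 32 and the two output classes are assumptions made for this sketch.
class ExampleImprovedNet(nn.Module):
    def __init__(self, n_features=512, n_hidden=32, p=0.5):
        super(ExampleImprovedNet, self).__init__()
        self.hidden = nn.Linear(n_features, n_hidden)
        self.relu = nn.ReLU()
        self.dropout = nn.Dropout(p)   # randomly zeroes activations during training only
        self.output = nn.Linear(n_hidden, 2)
    def forward(self, x):
        x = self.dropout(self.relu(self.hidden(x)))
        return self.output(x)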
# initialize the network and print the architecture
"""
INSERT CODE HERE
"""
# parameters
"""
INSERT CODE HERE
"""
# Adam optimization
"""
INSERT CODE HERE
"""
# start training
train_loss, test_loss, train_acc, test_acc = train_net(improved_net, train_loader, test_loader, loss_fn, optimizer, n_epochs)
# plot the learning curves (i.e. train/test loss and accuracy)
plot_learning_curves(train_loss, test_loss, train_acc, test_acc)
# report final accuracy
print('The model obtains an accuracy of %.2f%%' % (100*test_acc[-1]))
###Output
_____no_output_____
###Markdown
6. Understanding the model To gain more insight into the network, it can be useful to take a look at its hidden representations. To do this, you have to propagate a number of samples through the first hidden layer of the network and visualize them using dimensionality reduction techniques. **Exercise**: Visualize the hidden representations of a batch of samples in 2D to gain more insight into the network's decision process: - Compute the hidden representation of a batch of samples. To do this, you will have to select a batch, transform it into a torch Tensor and apply the hidden and relu layers of the network to the inputs. Since these are also modules, you can use them in the same fashion as the original network. - Extract the outputs of the network as a numpy array and apply dimensionality reduction; a common choice is the t-SNE algorithm. A sketch is included (commented out) in the code cell below.
###Code
# select a batch of samples
"""
INSERT CODE HERE
"""
# compute the hidden representation of the batch
"""
INSERT CODE HERE
"""
# reduce the dimensionality of the hidden representations
"""
INSERT CODE HERE
"""
# visualize the reduced representations and label each sample
"""
INSERT CODE HERE
"""
###Output
_____no_output_____
###Markdown
Another way to analyze the network is by checking which proteins cause the highest hidden activations in enzyme and non-enzyme samples. These features are discriminative for predicting the classes.
###Code
# isolate the positive and negative samples
h_pos = h[batch_labels == 1]
h_neg = h[batch_labels == 0]
# compute the mean activation
h_pos_mean = h_pos.mean(axis=0)
h_neg_mean = h_neg.mean(axis=0)
# sort the mean activations
i_pos = np.argsort(h_pos_mean)
i_neg = np.argsort(h_neg_mean)
# select the highest activations
n = 5
i_pos = i_pos[-n:][::-1]
i_neg = i_neg[-n:][::-1]
print('Discriminative features that result in high activation for enzyme prediction: ')
for i in i_pos:
print(' - %s (mean activation value: %.3f)' % (proteins[i], h_pos_mean[i]))
print('Discriminative features that result in high activation for non-enzyme prediction: ')
for i in i_neg:
print(' - %s (mean activation value: %.3f)' % (proteins[i], h_neg_mean[i]))
###Output
_____no_output_____ |
Model backlog/Deep Learning/[39th] Bi LSTM - Craw, GloVe - Adam.ipynb | ###Markdown
Dependencies
###Code
import gc
import os
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from gensim.models import KeyedVectors
from sklearn import metrics
from sklearn.model_selection import train_test_split
from keras import optimizers
from keras.models import Model
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, LearningRateScheduler
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Input, Embedding, Dropout, Activation, CuDNNGRU, CuDNNLSTM, Conv1D, Bidirectional, GlobalMaxPool1D, GlobalAveragePooling1D, SpatialDropout1D
# Set seeds to make the experiment more reproducible.
from tensorflow import set_random_seed
def seed_everything(seed=0):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
set_random_seed(0)
seed_everything()
%matplotlib inline
sns.set_style("whitegrid")
pd.set_option('display.float_format', lambda x: '%.4f' % x)
warnings.filterwarnings("ignore")
train = pd.read_csv("../input/jigsaw-unintended-bias-in-toxicity-classification/train.csv")
test = pd.read_csv("../input/jigsaw-unintended-bias-in-toxicity-classification/test.csv")
print("Train shape : ", train.shape)
print("Test shape : ", test.shape)
###Output
Train shape : (1804874, 45)
Test shape : (97320, 2)
###Markdown
Preprocess
###Code
train['target'] = np.where(train['target'] >= 0.5, 1, 0)
train['comment_text'] = train['comment_text'].astype(str)
X_test = test['comment_text'].astype(str)
# Lower comments
train['comment_text'] = train['comment_text'].apply(lambda x: x.lower())
X_test = X_test.apply(lambda x: x.lower())
# Mapping Punctuation
def map_punctuation(data):
punct_mapping = {"_":" ", "`":" ",
"‘": "'", "₹": "e", "´": "'", "°": "", "€": "e", "™": "tm", "√": " sqrt ", "×": "x", "²": "2", "—": "-", "–": "-",
"’": "'", "_": "-", "`": "'", '“': '"', '”': '"', '“': '"', "£": "e", '∞': 'infinity', 'θ': 'theta', '÷': '/',
'α': 'alpha', '•': '.', 'à': 'a', '−': '-', 'β': 'beta', '∅': '', '³': '3', 'π': 'pi'}
def clean_special_chars(text, mapping):
for p in mapping:
text = text.replace(p, mapping[p])
return text
return data.apply(lambda x: clean_special_chars(x, punct_mapping))
train['comment_text'] = map_punctuation(train['comment_text'])
X_test = map_punctuation(X_test)
# Removing Punctuation
def remove_punctuation(data):
punct = "/-'?!.,#$%\'()*+-/:;<=>@[\\]^_`{|}~`" + '""“”’' + '∞θ÷α•à−β∅³π‘₹´°£€\×™√²—–&'
def clean_special_chars(text, punct):
for p in punct:
text = text.replace(p, ' ')
return text
return data.apply(lambda x: clean_special_chars(x, punct))
train['comment_text'] = remove_punctuation(train['comment_text'])
X_test = remove_punctuation(X_test)
# Clean contractions
def clean_contractions(text):
specials = ["’", "‘", "´", "`"]
for s in specials:
text = text.replace(s, "'")
return text
train['comment_text'] = train['comment_text'].apply(lambda x: clean_contractions(x))
X_test = X_test.apply(lambda x: clean_contractions(x))
# Mapping contraction
def map_contraction(data):
contraction_mapping = {"trump's": 'trump is', "'cause": 'because', ',cause': 'because', ';cause': 'because', "ain't": 'am not', 'ain,t': 'am not', 'ain;t': 'am not',
'ain´t': 'am not', 'ain’t': 'am not', "aren't": 'are not', 'aren,t': 'are not', 'aren;t': 'are not', 'aren´t': 'are not', 'aren’t': 'are not',
"can't": 'cannot', "can't've": 'cannot have', 'can,t': 'cannot', 'can,t,ve': 'cannot have', 'can;t': 'cannot', 'can;t;ve': 'cannot have',
'can´t': 'cannot', 'can´t´ve': 'cannot have', 'can’t': 'cannot', 'can’t’ve': 'cannot have', "could've": 'could have', 'could,ve': 'could have',
'could;ve': 'could have', "couldn't": 'could not', "couldn't've": 'could not have', 'couldn,t': 'could not', 'couldn,t,ve': 'could not have',
'couldn;t': 'could not', 'couldn;t;ve': 'could not have', 'couldn´t': 'could not', 'couldn´t´ve': 'could not have', 'couldn’t': 'could not',
'couldn’t’ve': 'could not have', 'could´ve': 'could have', 'could’ve': 'could have', "didn't": 'did not', 'didn,t': 'did not', 'didn;t': 'did not',
'didn´t': 'did not', 'didn’t': 'did not', "doesn't": 'does not', 'doesn,t': 'does not', 'doesn;t': 'does not', 'doesn´t': 'does not', 'doesn’t': 'does not',
"don't": 'do not', 'don,t': 'do not', 'don;t': 'do not', 'don´t': 'do not', 'don’t': 'do not', "hadn't": 'had not', "hadn't've": 'had not have',
'hadn,t': 'had not', 'hadn,t,ve': 'had not have', 'hadn;t': 'had not', 'hadn;t;ve': 'had not have', 'hadn´t': 'had not', 'hadn´t´ve': 'had not have',
'hadn’t': 'had not', 'hadn’t’ve': 'had not have', "hasn't": 'has not', 'hasn,t': 'has not', 'hasn;t': 'has not', 'hasn´t': 'has not', 'hasn’t': 'has not',
"haven't": 'have not', 'haven,t': 'have not', 'haven;t': 'have not', 'haven´t': 'have not', 'haven’t': 'have not', "he'd": 'he would', "he'd've": 'he would have',
"he'll": 'he will', "he's": 'he is', 'he,d': 'he would', 'he,d,ve': 'he would have', 'he,ll': 'he will', 'he,s': 'he is', 'he;d': 'he would',
'he;d;ve': 'he would have', 'he;ll': 'he will', 'he;s': 'he is', 'he´d': 'he would', 'he´d´ve': 'he would have', 'he´ll': 'he will', 'he´s': 'he is',
'he’d': 'he would', 'he’d’ve': 'he would have', 'he’ll': 'he will', 'he’s': 'he is', "how'd": 'how did', "how'll": 'how will', "how's": 'how is',
'how,d': 'how did', 'how,ll': 'how will', 'how,s': 'how is', 'how;d': 'how did', 'how;ll': 'how will', 'how;s': 'how is', 'how´d': 'how did', 'how´ll':
'how will', 'how´s': 'how is', 'how’d': 'how did', 'how’ll': 'how will', 'how’s': 'how is', "i'd": 'i would', "i'll": 'i will', "i'm": 'i am', "i've":
'i have', 'i,d': 'i would', 'i,ll': 'i will', 'i,m': 'i am', 'i,ve': 'i have', 'i;d': 'i would', 'i;ll': 'i will', 'i;m': 'i am', 'i;ve': 'i have',
"isn't": 'is not', 'isn,t': 'is not', 'isn;t': 'is not', 'isn´t': 'is not', 'isn’t': 'is not', "it'd": 'it would', "it'll": 'it will', "it's": 'it is',
'it,d': 'it would', 'it,ll': 'it will', 'it,s': 'it is', 'it;d': 'it would', 'it;ll': 'it will', 'it;s': 'it is', 'it´d': 'it would', 'it´ll': 'it will',
'it´s': 'it is', 'it’d': 'it would', 'it’ll': 'it will', 'it’s': 'it is', 'i´d': 'i would', 'i´ll': 'i will', 'i´m': 'i am', 'i´ve': 'i have', 'i’d': 'i would',
'i’ll': 'i will', 'i’m': 'i am', 'i’ve': 'i have', "let's": 'let us', 'let,s': 'let us', 'let;s': 'let us', 'let´s': 'let us', 'let’s': 'let us', "ma'am": 'madam',
'ma,am': 'madam', 'ma;am': 'madam', "mayn't": 'may not', 'mayn,t': 'may not', 'mayn;t': 'may not', 'mayn´t': 'may not', 'mayn’t': 'may not', 'ma´am': 'madam',
'ma’am': 'madam', "might've": 'might have', 'might,ve': 'might have', 'might;ve': 'might have', "mightn't": 'might not', 'mightn,t': 'might not',
'mightn;t': 'might not', 'mightn´t': 'might not', 'mightn’t': 'might not', 'might´ve': 'might have', 'might’ve': 'might have', "must've": 'must have',
'must,ve': 'must have', 'must;ve': 'must have', "mustn't": 'must not', 'mustn,t': 'must not', 'mustn;t': 'must not', 'mustn´t': 'must not',
'mustn’t': 'must not', 'must´ve': 'must have', 'must’ve': 'must have', "needn't": 'need not', 'needn,t': 'need not', 'needn;t': 'need not', 'needn´t': 'need not',
'needn’t': 'need not', "oughtn't": 'ought not', 'oughtn,t': 'ought not', 'oughtn;t': 'ought not', 'oughtn´t': 'ought not', 'oughtn’t': 'ought not',
"sha'n't": 'shall not', 'sha,n,t': 'shall not', 'sha;n;t': 'shall not', "shan't": 'shall not', 'shan,t': 'shall not', 'shan;t': 'shall not', 'shan´t': 'shall not',
'shan’t': 'shall not', 'sha´n´t': 'shall not', 'sha’n’t': 'shall not', "she'd": 'she would', "she'll": 'she will', "she's": 'she is', 'she,d': 'she would',
'she,ll': 'she will', 'she,s': 'she is', 'she;d': 'she would', 'she;ll': 'she will', 'she;s': 'she is', 'she´d': 'she would', 'she´ll': 'she will',
'she´s': 'she is', 'she’d': 'she would', 'she’ll': 'she will', 'she’s': 'she is', "should've": 'should have', 'should,ve': 'should have',
'should;ve': 'should have', "shouldn't": 'should not', 'shouldn,t': 'should not', 'shouldn;t': 'should not', 'shouldn´t': 'should not', 'shouldn’t': 'should not',
'should´ve': 'should have', 'should’ve': 'should have', "that'd": 'that would', "that's": 'that is', 'that,d': 'that would', 'that,s': 'that is',
'that;d': 'that would', 'that;s': 'that is', 'that´d': 'that would', 'that´s': 'that is', 'that’d': 'that would', 'that’s': 'that is', "there'd": 'there had',
"there's": 'there is', 'there,d': 'there had', 'there,s': 'there is', 'there;d': 'there had', 'there;s': 'there is', 'there´d': 'there had', 'there´s': 'there is',
'there’d': 'there had', 'there’s': 'there is', "they'd": 'they would', "they'll": 'they will', "they're": 'they are', "they've": 'they have',
'they,d': 'they would', 'they,ll': 'they will', 'they,re': 'they are', 'they,ve': 'they have', 'they;d': 'they would', 'they;ll': 'they will',
'they;re': 'they are', 'they;ve': 'they have', 'they´d': 'they would', 'they´ll': 'they will', 'they´re': 'they are', 'they´ve': 'they have',
'they’d': 'they would', 'they’ll': 'they will', 'they’re': 'they are', 'they’ve': 'they have', "wasn't": 'was not', 'wasn,t': 'was not', 'wasn;t': 'was not',
'wasn´t': 'was not', 'wasn’t': 'was not', "we'd": 'we would', "we'll": 'we will', "we're": 'we are', "we've": 'we have', 'we,d': 'we would', 'we,ll': 'we will',
'we,re': 'we are', 'we,ve': 'we have', 'we;d': 'we would', 'we;ll': 'we will', 'we;re': 'we are', 'we;ve': 'we have', "weren't": 'were not', 'weren,t': 'were not',
'weren;t': 'were not', 'weren´t': 'were not', 'weren’t': 'were not', 'we´d': 'we would', 'we´ll': 'we will', 'we´re': 'we are', 'we´ve': 'we have',
'we’d': 'we would', 'we’ll': 'we will', 'we’re': 'we are', 'we’ve': 'we have', "what'll": 'what will', "what're": 'what are', "what's": 'what is',
"what've": 'what have', 'what,ll': 'what will', 'what,re': 'what are', 'what,s': 'what is', 'what,ve': 'what have', 'what;ll': 'what will', 'what;re': 'what are',
'what;s': 'what is', 'what;ve': 'what have', 'what´ll': 'what will', 'what´re': 'what are', 'what´s': 'what is', 'what´ve': 'what have', 'what’ll': 'what will',
'what’re': 'what are', 'what’s': 'what is', 'what’ve': 'what have', "where'd": 'where did', "where's": 'where is', 'where,d': 'where did', 'where,s': 'where is',
'where;d': 'where did', 'where;s': 'where is', 'where´d': 'where did', 'where´s': 'where is', 'where’d': 'where did', 'where’s': 'where is', "who'll": 'who will',
"who's": 'who is', 'who,ll': 'who will', 'who,s': 'who is', 'who;ll': 'who will', 'who;s': 'who is', 'who´ll': 'who will', 'who´s': 'who is', 'who’ll': 'who will',
'who’s': 'who is', "won't": 'will not', 'won,t': 'will not', 'won;t': 'will not', 'won´t': 'will not', 'won’t': 'will not', "wouldn't": 'would not',
'wouldn,t': 'would not', 'wouldn;t': 'would not', 'wouldn´t': 'would not', 'wouldn’t': 'would not', "you'd": 'you would', "you'll": 'you will',
"you're": 'you are', 'you,d': 'you would', 'you,ll': 'you will', 'you,re': 'you are', 'you;d': 'you would', 'you;ll': 'you will', 'you;re': 'you are',
'you´d': 'you would', 'you´ll': 'you will', 'you´re': 'you are', 'you’d': 'you would', 'you’ll': 'you will', 'you’re': 'you are', '´cause': 'because',
'’cause': 'because', "you've": 'you have', "could'nt": 'could not', "havn't": 'have not', 'here’s': 'here is', 'i""m': 'i am', "i'am": 'i am', "i'l": 'i will',
"i'v": 'i have', "wan't": 'want', "was'nt": 'was not', "who'd": 'who would', "who're": 'who are', "who've": 'who have', "why'd": 'why would',
"would've": 'would have', "y'all": 'you all', "y'know": 'you know', 'you.i': 'you i', "your'e": 'you are', "arn't": 'are not', "agains't": 'against',
"c'mon": 'common', "doens't": 'does not', 'don""t': 'do not', "dosen't": 'does not', "dosn't": 'does not', "shoudn't": 'should not', "that'll": 'that will',
"there'll": 'there will', "there're": 'there are', "this'll": 'this all', "u're": 'you are', "ya'll": 'you all', "you'r": 'you are', 'you’ve': 'you have',
"d'int": 'did not', "did'nt": 'did not', "din't": 'did not', "dont't": 'do not', "gov't": 'government', "i'ma": 'i am', "is'nt": 'is not', '‘i': 'i',
'ᴀɴᴅ': 'and', 'ᴛʜᴇ': 'the', 'ʜᴏᴍᴇ': 'home', 'ᴜᴘ': 'up', 'ʙʏ': 'by', 'ᴀᴛ': 'at', '…and': 'and', 'civilbeat': 'civil beat', 'trumpcare': 'trump care',
'obamacare': 'obama care', 'ᴄʜᴇᴄᴋ': 'check', 'ғᴏʀ': 'for', 'ᴛʜɪs': 'this', 'ᴄᴏᴍᴘᴜᴛᴇʀ': 'computer', 'ᴍᴏɴᴛʜ': 'month', 'ᴡᴏʀᴋɪɴɢ': 'working', 'ᴊᴏʙ': 'job',
'ғʀᴏᴍ': 'from', 'sᴛᴀʀᴛ': 'start', 'gubmit': 'submit', 'co₂': 'carbon dioxide', 'ғɪʀsᴛ': 'first', 'ᴇɴᴅ': 'end', 'ᴄᴀɴ': 'can', 'ʜᴀᴠᴇ': 'have', 'ᴛᴏ': 'to',
'ʟɪɴᴋ': 'link', 'ᴏғ': 'of', 'ʜᴏᴜʀʟʏ': 'hourly', 'ᴡᴇᴇᴋ': 'week', 'ᴇxᴛʀᴀ': 'extra', 'gʀᴇᴀᴛ': 'great', 'sᴛᴜᴅᴇɴᴛs': 'student', 'sᴛᴀʏ': 'stay', 'ᴍᴏᴍs': 'mother',
'ᴏʀ': 'or', 'ᴀɴʏᴏɴᴇ': 'anyone', 'ɴᴇᴇᴅɪɴɢ': 'needing', 'ᴀɴ': 'an', 'ɪɴᴄᴏᴍᴇ': 'income', 'ʀᴇʟɪᴀʙʟᴇ': 'reliable', 'ʏᴏᴜʀ': 'your', 'sɪɢɴɪɴɢ': 'signing',
'ʙᴏᴛᴛᴏᴍ': 'bottom', 'ғᴏʟʟᴏᴡɪɴɢ': 'following', 'mᴀᴋᴇ': 'make', 'ᴄᴏɴɴᴇᴄᴛɪᴏɴ': 'connection', 'ɪɴᴛᴇʀɴᴇᴛ': 'internet', 'financialpost': 'financial post',
'ʜaᴠᴇ': ' have ', 'ᴄaɴ': ' can ', 'maᴋᴇ': ' make ', 'ʀᴇʟɪaʙʟᴇ': ' reliable ', 'ɴᴇᴇᴅ': ' need ', 'ᴏɴʟʏ': ' only ', 'ᴇxᴛʀa': ' extra ', 'aɴ': ' an ',
'aɴʏᴏɴᴇ': ' anyone ', 'sᴛaʏ': ' stay ', 'sᴛaʀᴛ': ' start', 'shopo': 'shop'}
def clean_special_chars(text, mapping):
for p in mapping:
text = text.replace(p, mapping[p])
return text
return data.apply(lambda x: clean_special_chars(x, contraction_mapping))
train['comment_text'] = map_contraction(train['comment_text'])
X_test = map_contraction(X_test)
# Mapping misspelling
def map_misspelling(data):
misspelling_mapping = {'sb91': 'senate bill', 'trump': 'trump', 'utmterm': 'utm term', 'fakenews': 'fake news', 'gʀᴇat': 'great', 'ʙᴏᴛtoᴍ': 'bottom',
'washingtontimes': 'washington times', 'garycrum': 'gary crum', 'htmlutmterm': 'html utm term', 'rangermc': 'car', 'tfws': 'tuition fee waiver',
'sjws': 'social justice warrior', 'koncerned': 'concerned', 'vinis': 'vinys', 'yᴏᴜ': 'you', 'trumpsters': 'trump', 'trumpian': 'trump', 'bigly': 'big league',
'trumpism': 'trump', 'yoyou': 'you', 'auwe': 'wonder', 'drumpf': 'trump', 'brexit': 'british exit', 'utilitas': 'utilities', 'ᴀ': 'a', '😉': 'wink',
'😂': 'joy', '😀': 'stuck out tongue', 'theguardian': 'the guardian', 'deplorables': 'deplorable', 'theglobeandmail': 'the globe and mail',
'justiciaries': 'justiciary', 'creditdation': 'accreditation', 'doctrne': 'doctrine', 'fentayal': 'fentanyl', 'designation-': 'designation',
'conartist': 'con-artist', 'mutilitated': 'mutilated', 'obumblers': 'bumblers', 'negotiatiations': 'negotiations', 'dood-': 'dood', 'irakis': 'iraki',
'cooerate': 'cooperate', 'cox': 'cox', 'racistcomments': 'racist comments', 'envirnmetalists': 'environmentalists'}
def clean_special_chars(text, mapping):
for p in mapping:
text = text.replace(p, mapping[p])
return text
return data.apply(lambda x: clean_special_chars(x, misspelling_mapping))
train['comment_text'] = map_misspelling(train['comment_text'])
X_test = map_misspelling(X_test)
# Train/validation split
train_ids, val_ids = train_test_split(train['id'], test_size=0.2, random_state=2019)
train_df = pd.merge(train_ids.to_frame(), train)
validate_df = pd.merge(val_ids.to_frame(), train)
Y_train = train_df['target'].values
Y_val = validate_df['target'].values
X_train = train_df['comment_text']
X_val = validate_df['comment_text']
# Hyper parameters
maxlen = 220 # max number of words in a comment to use
embed_size = 250 # how big is each word vector (the concatenated GloVe + Crawl matrix built below is 600-d per word)
max_features = 410047 # how many unique words to use (i.e. num rows in the embedding matrix)
learning_rate = 0.001
epochs = 5
batch_size = 512
es_patience = 3
rlr_patience = 2
decay_factor = 0.25
# Fill missing values
X_train = X_train.fillna("_na_").values
X_val = X_val.fillna("_na_").values
X_test = X_test.fillna("_na_").values
# Tokenize the sentences
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(list(X_train))
X_train = tokenizer.texts_to_sequences(X_train)
X_val = tokenizer.texts_to_sequences(X_val)
X_test = tokenizer.texts_to_sequences(X_test)
# Pad the sentences
X_train = pad_sequences(X_train, maxlen=maxlen)
X_val = pad_sequences(X_val, maxlen=maxlen)
X_test = pad_sequences(X_test, maxlen=maxlen)
###Output
_____no_output_____
###Markdown
Loading Embedding
###Code
def get_coefs(word, *arr):
return word, np.asarray(arr, dtype='float32')
def load_embeddings(path):
emb_arr = KeyedVectors.load(path)
return emb_arr
def build_matrix(word_index, path):
embedding_index = load_embeddings(path)
embedding_matrix = np.zeros((len(word_index) + 1, 300))
unknown_words = []
for word, i in word_index.items():
if i <= max_features:
try:
embedding_matrix[i] = embedding_index[word]
except KeyError:
try:
embedding_matrix[i] = embedding_index[word.lower()]
except KeyError:
try:
embedding_matrix[i] = embedding_index[word.title()]
except KeyError:
unknown_words.append(word)
return embedding_matrix, unknown_words
glove_path = '../input/gensim-embeddings-dataset/glove.840B.300d.gensim'
craw_path = '../input/gensim-embeddings-dataset/crawl-300d-2M.gensim'
glove_embedding_matrix, glove_unknown_words = build_matrix(tokenizer.word_index, glove_path)
print('n unknown words (GloVe): ', len(glove_unknown_words))
craw_embedding_matrix, craw_unknown_words = build_matrix(tokenizer.word_index, craw_path)
print('n unknown words (Crawl): ', len(craw_unknown_words))
embedding_matrix = np.concatenate([glove_embedding_matrix, craw_embedding_matrix], axis=-1)
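# each word is now represented by a 600-d vector: its 300-d GloVe vector concatenated with its 300-d Crawl vector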
del glove_embedding_matrix, craw_embedding_matrix
gc.collect()
###Output
n unknown words (GloVe): 117166
n unknown words (Crawl): 116009
###Markdown
Model
###Code
inp = Input(shape=(maxlen,))
x = Embedding(*embedding_matrix.shape, weights=[embedding_matrix], trainable=False)(inp)
x = SpatialDropout1D(0.3)(x)
x = Bidirectional(CuDNNLSTM(128, return_sequences=True))(x)
x = Bidirectional(CuDNNLSTM(256, return_sequences=True))(x)
# x = GlobalAveragePooling1D()(x)
x = GlobalMaxPool1D()(x)
x = Dense(512, activation="relu")(x)
x = Dense(512, activation="relu")(x)
x = Dense(1, activation="sigmoid")(x)
model = Model(inputs=inp, outputs=x)
optimizer = optimizers.Adam(lr=learning_rate)
model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
model.summary()
es = EarlyStopping(monitor='val_loss', mode='min', patience=es_patience, restore_best_weights=True, verbose=1)
rlrop = ReduceLROnPlateau(monitor='val_loss', mode='min', patience=rlr_patience, factor=decay_factor, min_lr=1e-6, verbose=1)
history = model.fit(X_train, Y_train, batch_size=batch_size, epochs=epochs, validation_data=(X_val, Y_val), callbacks=[es, rlrop])
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(20, 8))
ax1.plot(history.history['acc'], label='Train Accuracy')
ax1.plot(history.history['val_acc'], label='Validation accuracy')
ax1.legend(loc='best')
ax1.set_title('Accuracy')
ax2.plot(history.history['loss'], label='Train loss')
ax2.plot(history.history['val_loss'], label='Validation loss')
ax2.legend(loc='best')
ax2.set_title('Loss')
plt.xlabel('Epochs')
sns.despine()
plt.show()
###Output
_____no_output_____
###Markdown
Model evaluation
###Code
identity_columns = [
'male', 'female', 'homosexual_gay_or_lesbian', 'christian', 'jewish',
'muslim', 'black', 'white', 'psychiatric_or_mental_illness']
# Convert target and identity columns to booleans
def convert_to_bool(df, col_name):
df[col_name] = np.where(df[col_name] >= 0.5, True, False)
def convert_dataframe_to_bool(df):
bool_df = df.copy()
for col in ['target'] + identity_columns:
convert_to_bool(bool_df, col)
return bool_df
SUBGROUP_AUC = 'subgroup_auc'
BPSN_AUC = 'bpsn_auc' # stands for background positive, subgroup negative
BNSP_AUC = 'bnsp_auc' # stands for background negative, subgroup positive
def compute_auc(y_true, y_pred):
try:
return metrics.roc_auc_score(y_true, y_pred)
except ValueError:
return np.nan
def compute_subgroup_auc(df, subgroup, label, model_name):
subgroup_examples = df[df[subgroup]]
return compute_auc(subgroup_examples[label], subgroup_examples[model_name])
def compute_bpsn_auc(df, subgroup, label, model_name):
"""Computes the AUC of the within-subgroup negative examples and the background positive examples."""
subgroup_negative_examples = df[df[subgroup] & ~df[label]]
non_subgroup_positive_examples = df[~df[subgroup] & df[label]]
examples = subgroup_negative_examples.append(non_subgroup_positive_examples)
return compute_auc(examples[label], examples[model_name])
def compute_bnsp_auc(df, subgroup, label, model_name):
"""Computes the AUC of the within-subgroup positive examples and the background negative examples."""
subgroup_positive_examples = df[df[subgroup] & df[label]]
non_subgroup_negative_examples = df[~df[subgroup] & ~df[label]]
examples = subgroup_positive_examples.append(non_subgroup_negative_examples)
return compute_auc(examples[label], examples[model_name])
def compute_bias_metrics_for_model(dataset, subgroups, model, label_col, include_asegs=False):
"""Computes per-subgroup metrics for all subgroups and one model."""
records = []
for subgroup in subgroups:
record = {
'subgroup': subgroup,
'subgroup_size': len(dataset[dataset[subgroup]])
}
record[SUBGROUP_AUC] = compute_subgroup_auc(dataset, subgroup, label_col, model)
record[BPSN_AUC] = compute_bpsn_auc(dataset, subgroup, label_col, model)
record[BNSP_AUC] = compute_bnsp_auc(dataset, subgroup, label_col, model)
records.append(record)
return pd.DataFrame(records).sort_values('subgroup_auc', ascending=True)
# validate_df = pd.merge(val_ids.to_frame(), train)
validate_df['preds'] = model.predict(X_val)
validate_df = convert_dataframe_to_bool(validate_df)
bias_metrics_df = compute_bias_metrics_for_model(validate_df, identity_columns, 'preds', 'target')
print('Validation bias metric by group')
display(bias_metrics_df)
def power_mean(series, p):
total = sum(np.power(series, p))
return np.power(total / len(series), 1 / p)
def get_final_metric(bias_df, overall_auc, POWER=-5, OVERALL_MODEL_WEIGHT=0.25):
bias_score = np.average([
power_mean(bias_df[SUBGROUP_AUC], POWER),
power_mean(bias_df[BPSN_AUC], POWER),
power_mean(bias_df[BNSP_AUC], POWER)
])
return (OVERALL_MODEL_WEIGHT * overall_auc) + ((1 - OVERALL_MODEL_WEIGHT) * bias_score)
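# i.e. final score = 0.25 * overall AUC + 0.75 * average of the p=-5 power means of the
# three per-subgroup bias AUCs (subgroup AUC, BPSN AUC, BNSP AUC)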
# train_df = pd.merge(train_ids.to_frame(), train)
train_df['preds'] = model.predict(X_train)
train_df = convert_dataframe_to_bool(train_df)
print('Train ROC AUC: %.4f' % get_final_metric(bias_metrics_df, metrics.roc_auc_score(train_df['target'].values, train_df['preds'].values)))
print('Validation ROC AUC: %.4f' % get_final_metric(bias_metrics_df, metrics.roc_auc_score(validate_df['target'].values, validate_df['preds'].values)))
###Output
Train ROC AUC: 0.9327
Validation ROC AUC: 0.9296
###Markdown
Predictions
###Code
Y_test = model.predict(X_test)
submission = pd.read_csv('../input/jigsaw-unintended-bias-in-toxicity-classification/sample_submission.csv')
submission['prediction'] = Y_test
submission.to_csv('submission.csv', index=False)
submission.head(10)
###Output
_____no_output_____ |
logi-gsmap/Untitled.ipynb | ###Markdown
Test
###Code
import numpy as np
import netCDF4 as nc4
f = nc4.Dataset('jabar_gsmap_1hr.nc','r')
print(f)
###Output
<class 'netCDF4._netCDF4.Dataset'>
root group (NETCDF3_CLASSIC data model, file format NETCDF3):
dimensions(sizes): lon(48), lat(32), time(122689)
variables(dimensions): float64 lon(lon), float64 lat(lat), float64 time(time), float32 precip(time,lat,lon)
groups:
|
guides/notebooks/categorical-encoding-DEMO.ipynb | ###Markdown
Categorical Encoding Demo and Examples This is a Jupyter notebook for exploring the categorical-encoding library discussed in a [Feature Labs article] I wrote on the topic. To use the library, make sure you have the `categorical-encoding` and `featuretools` libraries installed. Encoder API
###Code
import categorical_encoding as ce
import featuretools as ft
from featuretools.tests.testing_utils import make_ecommerce_entityset
es = make_ecommerce_entityset()
f1 = ft.Feature(es["log"]["product_id"])
f2 = ft.Feature(es["log"]["purchased"])
f3 = ft.Feature(es["log"]["value"])
f4 = ft.Feature(es["log"]["countrycode"])
features = [f1, f2, f3, f4]
ids = [0, 1, 2, 3, 4, 5]
feature_matrix = ft.calculate_feature_matrix(features, es,
instance_ids=ids)
print(feature_matrix)
###Output
product_id purchased value countrycode
id
0 coke zero True 0.0 US
1 coke zero True 5.0 US
2 coke zero True 10.0 US
3 car True 15.0 US
4 car True 20.0 US
5 toothpaste True 0.0 AL
###Markdown
Performing a train-test split is standard in machine learning pipelines. Here, I've just simulated an actual train-test split by randomly picking certain rows to be train or test data.
###Code
train_data = feature_matrix.iloc[[0, 1, 4, 5]]
print(train_data)
test_data = feature_matrix.iloc[[2, 3]]
print(test_data)
###Output
product_id purchased value countrycode
id
2 coke zero True 10.0 US
3 car True 15.0 US
###Markdown
Next up, we initialize and call the encoder on our data.
###Code
enc = ce.Encoder(method='leave_one_out')
train_enc = enc.fit_transform(train_data, features, train_data['value'])
test_enc = enc.transform(test_data)
print(train_enc)
print(test_enc)
###Output
PRODUCT_ID_leave_one_out purchased value COUNTRYCODE_leave_one_out
id
2 2.50 True 10.0 8.333333
3 6.25 True 15.0 8.333333
###Markdown
Note how the encoder only uses the training data to learn its encoding, and the test data is encoded directly using the learned mappings. Now, we typically would have to redo the entire categorical encoding process for the following feature matrix.
###Code
fm2 = ft.calculate_feature_matrix(features, es, instance_ids=[6,7])
print(fm2)
###Output
product_id purchased value countrycode
id
6 toothpaste True 1.0 AL
7 toothpaste True 2.0 AL
###Markdown
However, through the integration with Featuretools, we can generate already-encoded data directly.
###Code
features_encoded = enc.get_features()
fm2_encoded = ft.calculate_feature_matrix(features_encoded, es, instance_ids=[6,7])
print(fm2_encoded)
###Output
PRODUCT_ID_leave_one_out purchased value COUNTRYCODE_leave_one_out
id
6 6.25 True 1.0 6.25
7 6.25 True 2.0 6.25
###Markdown
Encoding Methods Examples For reference, here is our original feature matrix:
###Code
feature_matrix
###Output
_____no_output_____
###Markdown
Classic Encoders
###Code
# Creates a new column for each unique value.
enc_one_hot = ce.Encoder(method='one_hot')
fm_enc_one_hot = enc_one_hot.fit_transform(feature_matrix, features)
fm_enc_one_hot
# Each unique string value is assigned a counting number specific to that value.
enc_ord = ce.Encoder(method='ordinal')
fm_enc_ord = enc_ord.fit_transform(feature_matrix, features)
fm_enc_ord
# The categories' values are first Ordinal encoded,
# the resulting integers are converted to binary,
# then the resulting digits are split into columns.
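# e.g. (illustration) an ordinal value of 3 is written in binary as 011 and stored as three 0/1 columns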
enc_bin = ce.Encoder(method='binary')
fm_enc_bin = enc_bin.fit_transform(feature_matrix, features)
fm_enc_bin
# use a hashing algorithm to map category values to corresponding columns
enc_hash = ce.Encoder(method='hashing')
fm_enc_hash = enc_hash.fit_transform(feature_matrix, features)
fm_enc_hash
###Output
_____no_output_____
###Markdown
Bayesian Encoders
###Code
# replaces each specific category value with a weighted average of the dependent variable.
enc_targ = ce.Encoder(method='target')
fm_enc_targ = enc_targ.fit_transform(feature_matrix, features, feature_matrix['value'])
fm_enc_targ
# identical to target except leaves own row out when calculating average
enc_leave = ce.Encoder(method='leave_one_out')
fm_enc_leave = enc_leave.fit_transform(feature_matrix, features, feature_matrix['value'])
fm_enc_leave
###Output
_____no_output_____ |
thPyCh1.ipynb | ###Markdown
Think Python 2 Outline: Chapter 1 Problem solving: the ability to formulate problems, think creatively about solutions, and express a solution clearly and accurately. The process of learning to program is an excellent opportunity to practice these skills. What is a Program? ****** What is it to you? ________________ - input - output - math - conditional execution - repetition The environment in which you write Python: *** - directly into the interpreter (the REPL) - IDLE - web-based, e.g. PythonAnywhere - a local development environment, like Sublime Text or PyCharm - a local/web hybrid: Jupyter
###Code
#First Program!
print("Hello World!")
#Arithmetic Operators:
40+2
6*7
2**3 #This is exponentiation!
#Fill in your own: ______________
###Output
_____no_output_____ |
DJI_rnn1.ipynb | ###Markdown
[Open in Colab](https://colab.research.google.com/github/lior0110/main/blob/master/DJI_rnn1.ipynb)
###Code
import os
if not os.path.exists('main'): os.system('git clone https://github.com/lior0110/main/')
os.chdir('main')
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
# read the given data
data = pd.read_csv('DJI2.csv', index_col=0, parse_dates=True)
data.head()
data.info()
data['Adj Close'].plot()
# see if 'Adj Close' is the same as 'Close'
# if yes drop 'Adj Close'
test = data['Adj Close'] == data['Close']
if all(data['Adj Close'] == data['Close']):
data = data.drop(columns='Adj Close')
data.head()
data.tail()
data.columns
data.shape
# make data and target arrays
target_style = 'Price' # / 'Price' / 'Change' / 'ChangeP' / 'UD'
window_len = 100 # window length to use for prediction
useVolume = True
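# Build sliding windows: X gets shape (n_samples, window_len, n_features), each sample holding
# window_len consecutive trading days, while y[i] is the quantity to predict for the day that
# follows window i (price, absolute change, percent change, or up/down, depending on target_style)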
y = np.zeros(data.shape[0]-window_len)
if useVolume:
X = np.zeros((data.shape[0]-window_len,window_len,data.shape[1]))
else:
X = np.zeros((data.shape[0]-window_len,window_len,data.shape[1]-1))
for i in range(data.shape[0]-window_len):
if target_style == 'Change':
# target of Change for next day
y[i] = data['Close'][i+window_len] - data['Close'][i+window_len-1]
elif target_style == 'ChangeP':
# target of Change for next day in percentage
y[i] = (data['Close'][i+window_len] - data['Close'][i+window_len-1]) / data['Close'][i+window_len-1]
elif target_style == 'UD':
# target of Up/Down for next day
if (data['Close'][i+window_len] - data['Close'][i+window_len-1]) > 0:
y[i] = 1
elif (data['Close'][i+window_len] - data['Close'][i+window_len-1]) < 0:
y[i] = -1
else:
y[i] = 0
# y[i] = 1 if (data['Close'][i+window_len] - data['Close'][i+window_len-1]) > 0 else -1
else:
# target of Price for next day
y[i] = data['Close'][i+window_len]
if useVolume:
X[i,:,:] = data.iloc[i:i+window_len].values
else:
X[i,:,:] = data.iloc[i:i+window_len,:-1].values
plt.plot(y)
plt.show()
X[-1,-10:,:]
y[-1]
# train test split
y_train = y[:int(0.7*len(y))]
y_valid = y[int(0.7*len(y)):int(0.85*len(y))]
y_test = y[int(0.85*len(y)):]
X_train = X[:int(0.7*len(X)),:,:]
X_valid = X[int(0.7*len(X)):int(0.85*len(X)),:,:]
X_test = X[int(0.85*len(X)):,:,:]
# get the max and min in the train data
maxPrice = np.max(X_train[:,:,:-1])
print('max Price: ',maxPrice)
minPrice = np.min(X_train[:,:,:-1])
print('min Price: ',minPrice)
if useVolume:
maxVolume = np.max(X_train[:,:,-1])
print('max Volume: ',maxVolume)
minVolume = np.min(X_train[:,:,-1])
print('min Volume: ',minVolume)
# data scaling
if useVolume:
X_train[:,:,:-1] = (X_train[:,:,:-1] - minPrice) / (maxPrice - minPrice)
X_train[:,:,-1] = (X_train[:,:,-1] - minVolume) / (maxVolume - minVolume)
X_valid[:,:,:-1] = (X_valid[:,:,:-1] - minPrice) / (maxPrice - minPrice)
X_valid[:,:,-1] = (X_valid[:,:,-1] - minVolume) / (maxVolume - minVolume)
X_test[:,:,:-1] = (X_test[:,:,:-1] - minPrice) / (maxPrice - minPrice)
X_test[:,:,-1] = (X_test[:,:,-1] - minVolume) / (maxVolume - minVolume)
else:
X_train[:,:,:] = (X_train[:,:,:] - minPrice) / (maxPrice - minPrice)
X_valid[:,:,:] = (X_valid[:,:,:] - minPrice) / (maxPrice - minPrice)
X_test[:,:,:] = (X_test[:,:,:] - minPrice) / (maxPrice - minPrice)
# target scaling
if target_style == 'Price':
y_train = (y_train - minPrice) / (maxPrice - minPrice)
plt.plot(y_train)
plt.show()
y_valid = (y_valid - minPrice) / (maxPrice - minPrice)
plt.plot(y_valid)
plt.show()
y_test = (y_test - minPrice) / (maxPrice - minPrice)
plt.plot(y_test)
plt.show()
elif target_style == 'Change':
maxChange = np.max(np.abs(y_train))
y_train = y_train / maxChange
plt.plot(y_train)
plt.show()
y_valid = y_valid / maxChange
plt.plot(y_valid)
plt.show()
y_test = y_test / maxChange
plt.plot(y_test)
plt.show()
# from keras.models import Sequential
# import keras.layers as layers
# from keras.layers import Input, Dense, Dropout, LSTM, CuDNNLSTM, GRU, CuDNNGRU, Bidirectional
# from keras.optimizers import SGD, RMSprop, Adam, Adagrad
# from keras.losses import mean_squared_error
# from keras.models import load_model
# from keras import backend as K
# from keras.callbacks import EarlyStopping
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import backend as K
# The GRU architecture
regressorGRU = tf.keras.Sequential()
# First GRU layer with Dropout regularisation
regressorGRU.add(layers.GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],X_train.shape[2])))
regressorGRU.add(layers.Dropout(0.2))
# Second GRU layer
regressorGRU.add(layers.GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],X_train.shape[2])))
regressorGRU.add(layers.Dropout(0.2))
# Third GRU layer
regressorGRU.add(layers.GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],X_train.shape[2])))
regressorGRU.add(layers.Dropout(0.2))
# Fourth GRU layer
regressorGRU.add(layers.GRU(units=50))
regressorGRU.add(layers.Dropout(0.2))
# The output layer
regressorGRU.add(layers.Dense(units=1))
# The LSTM architecture
regressorLSTM = tf.keras.Sequential()
# First LSTM layer with Dropout regularisation
regressorLSTM.add(layers.LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],X_train.shape[2])))
regressorLSTM.add(layers.Dropout(0.2))
# Second LSTM layer
regressorLSTM.add(layers.LSTM(units=50, return_sequences=True))
regressorLSTM.add(layers.Dropout(0.2))
# Third LSTM layer
regressorLSTM.add(layers.LSTM(units=50, return_sequences=True))
regressorLSTM.add(layers.Dropout(0.2))
# Fourth LSTM layer
regressorLSTM.add(layers.LSTM(units=50))
regressorLSTM.add(layers.Dropout(0.2))
# The output layer
regressorLSTM.add(layers.Dense(units=1))
def r2_score(y_true, y_pred):
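    # NOTE: despite its name, this returns SS_res / SS_tot (i.e. 1 - R^2), so minimizing it as a
    # loss corresponds to maximizing R^2; the commented-out lines below are alternative variants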
SS_res = K.sum(K.square(y_true - y_pred))
SS_tot = K.sum(K.square(y_true - K.mean(y_true)))
SS_reg = K.sum(K.square(y_pred - K.mean(y_true)))
# return ( 1 - SS_res/(SS_tot + K.epsilon()) )
return ( SS_res/(SS_tot + K.epsilon()) )
# return ( SS_reg/(SS_tot + K.epsilon()) )
# regressor = regressorLSTM
regressor = regressorGRU
callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
regressor.compile(optimizer='Adam', loss=r2_score, metrics=['mse',r2_score]) # optimizer='Adam'/'RMSProp'
print(regressor.summary())
hist = regressor.fit(X_train, y_train,epochs = 100, callbacks=[callback], validation_data=(X_valid, y_valid)) # , batch_size=32
hist.history.keys()
plt.plot(hist.history['loss'])
plt.show()
# predict the next step from the most recent window of data
if useVolume:
lest_data = data.iloc[-window_len:].values
lest_data[:,:-1] = (lest_data[:,:-1] - minPrice) / (maxPrice - minPrice)
lest_data[:,-1] = (lest_data[:,-1] - minVolume) / (maxVolume - minVolume)
else:
lest_data = data.iloc[-window_len:,:-1].values
lest_data = (lest_data - minPrice) / (maxPrice - minPrice)
lest_data = lest_data.reshape((1,window_len,X_train.shape[2]))
predicted_stock_price = regressor.predict(lest_data)
# target_style = 'Price' # / 'Price' / 'Change' / 'ChangeP' / 'UD'
if target_style == 'Price':
predicted_stock_price = predicted_stock_price * (maxPrice - minPrice) + minPrice
print("predicted stock price for next step is: ", predicted_stock_price)
elif target_style == 'Change':
predicted_stock_price = predicted_stock_price * maxChange
print("predicted stock price Change for next step is: ", predicted_stock_price)
elif target_style == 'ChangeP':
print("predicted stock price Change percentage for next step is: ", predicted_stock_price)
elif target_style == 'UD':
print("predicted stock price Up Down percentage for next step is: ", predicted_stock_price)
predicted_stock_price = regressor.predict(X_test)
# if target_style == 'Price':
# predicted_stock_price = (predicted_stock_price * (maxPrice - minPrice)) + minPrice
# predicted_stock_price
# Visualising the test results
plt.plot(y_test, color = 'pink', label = 'Real Stock Price')
plt.plot(predicted_stock_price, color = 'blue', label = 'Predicted Stock Price')
plt.title('Stock Price Prediction on test data')
plt.xlabel('Time')
plt.ylabel('Stock Price on test data')
plt.legend()
plt.show()
predicted_stock_price = regressor.predict(X_valid)
# if target_style == 'Price':
# predicted_stock_price = (predicted_stock_price * (maxPrice - minPrice)) + minPrice
# predicted_stock_price
# Visualising the validation results
plt.plot(y_valid, color = 'pink', label = 'Real Stock Price')
plt.plot(predicted_stock_price, color = 'blue', label = 'Predicted Stock Price')
plt.title('Stock Price Prediction on validation data')
plt.xlabel('Time')
plt.ylabel('Stock Price')
plt.legend()
plt.show()
# r2_score(y_valid, predicted_stock_price)
SS_res = np.sum(np.square(y_valid - predicted_stock_price))
print('SS_res = ',SS_res)
SS_tot = np.sum(np.square(y_valid - np.mean(y_valid)))
print('SS_tot = ',SS_tot)
SS_reg = np.sum(np.square(predicted_stock_price - np.mean(y_valid)))
print('SS_reg = ',SS_reg)
r2 = 1 - SS_res/SS_tot
print('r2 (1 - SS_res/SS_tot) = ', r2)
r2 = SS_reg/SS_tot
print('r2 (SS_reg/SS_tot) = ', r2)
mse = np.mean(np.square(y_valid - predicted_stock_price))
print('mse = ',mse)
predicted_stock_price = regressor.predict(X_train)
# if target_style == 'Price':
# predicted_stock_price = (predicted_stock_price * (maxPrice - minPrice)) + minPrice
# predicted_stock_price
# Visualising the train results
plt.plot(y_train, color = 'pink', label = 'Real Stock Price')
plt.plot(predicted_stock_price, color = 'blue', label = 'Predicted Stock Price')
plt.title('Stock Price Prediction on train data')
plt.xlabel('Time')
plt.ylabel('Stock Price')
plt.legend()
plt.show()
plt.close('all')
###Output
_____no_output_____ |
###Markdown
Chapter 6: Support Vector Machines 1. Separating data with the maximum margin: **maximize the distance from the support vectors to the separating hyperplane** **1. Linearly separable SVM:** learns a linear classifier through **hard-margin maximization**, also called a **hard-margin SVM** **2. Linear SVM:** learns a linear classifier through **soft-margin maximization**, also called a **soft-margin SVM** **3. Nonlinear SVM:** learns a nonlinear classifier through the **kernel trick** together with **soft-margin maximization** 2. Finding the maximum margin 3. The SMO efficient optimization algorithm
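For reference (this formulation is added here as a reminder and is not part of the original notes), the standard soft-margin primal problem underlying this chapter is $\min_{w,b,\xi}\ \frac{1}{2}\lVert w\rVert^2 + C\sum_{i=1}^{m}\xi_i$ subject to $y_i(w^\top x_i + b) \ge 1-\xi_i$ and $\xi_i \ge 0$ for all $i$; the constant $C$ trades off margin width against training errors, and SMO optimizes the dual of this problem over the Lagrange multipliers $\alpha_i$.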
###Code
import random
from numpy import *
from time import sleep
# SMO算法中的辅助函数
# 文本处理为数据集
def loadDataSet(fileName):
dataMat = []; labelMat = []
fr = open(fileName)
for line in fr.readlines():
lineArr = line.strip().split('\t')
dataMat.append([float(lineArr[0]), float(lineArr[1])])
labelMat.append(float(lineArr[2]))
return dataMat, labelMat
def selectJrand(i, m):
    j = i # pick a random integer that is not equal to i
while (j == i):
j = int(random.randint(0, m))
return j
# Clip values that are greater than H or less than L
def clipAlpha(aj ,H, L):
if(aj > H):
aj = H
if(L > aj):
aj = L
return aj
dataArr, labelArr = loadDataSet('D:/data/study/AI/ML/MLcode/Ch06/testSet.txt')
print(dataArr, '\n', labelArr)
# Simplified SMO algorithm
def smoSimple(dataMatIn, classLabels, C, toler, maxIter):
dataMatrix = mat(dataMatIn); labelMat = mat(classLabels).transpose()
b = 0
    m, n = shape(dataMatrix) # number of samples and number of features
    alphas = mat(zeros((m, 1))) # initialize the alphas as a zero vector
iter_ = 0
    while(iter_ < maxIter): # outer loop over the iteration count
alphaPairsChanged = 0
        for i in range(m): # inner loop: go through every sample in the data set
fXi = float(multiply(alphas,labelMat).T*(dataMatrix*dataMatrix[i,:].T)) + b
Ei = fXi - float(labelMat[i])
if(((labelMat[i]*Ei < -toler) and (alphas[i] < C)) or ((labelMat[i]*Ei > toler) and (alphas[i] > 0))):
                j = selectJrand(i, m) # randomly select the second alpha
fXj = float(multiply(alphas,labelMat).T*(dataMatrix*dataMatrix[j,:].T)) + b
# labelMat[j] = labelMat[j].astype(float)
Ej = fXj - float(labelMat[j])
alphaIold = alphas[i].copy()
alphaJold = alphas[j].copy()
                if(labelMat[i] != labelMat[j]): # make sure alpha stays between 0 and C
L = max(0, alphas[j] - alphas[i])
H = min(C, C + alphas[j] - alphas[i])
else:
L = max(0, alphas[j] + alphas[i] - C)
H = min(C, alphas[j] + alphas[i])
if(L == H):
print("L==H")
continue
eta = 2.0 * dataMatrix[i,:]*dataMatrix[j,:].T - dataMatrix[i,:]*dataMatrix[i,:].T - dataMatrix[j,:]*dataMatrix[j,:].T
if(eta >= 0):
print("eta>=0")
continue
alphas[j] -= labelMat[j]*(Ei - Ej)/eta
alphas[j] = clipAlpha(alphas[j], H, L)
if (abs(alphas[j] - alphaJold) < 0.00001):
print("j not moving enough")
continue
                alphas[i] += labelMat[j]*labelMat[i]*(alphaJold - alphas[j]) # update i by the same amount as j, but in the opposite direction
b1 = b - Ei- labelMat[i]*(alphas[i]-alphaIold)*dataMatrix[i,:]*dataMatrix[i,:].T \
- labelMat[j]*(alphas[j]-alphaJold)*dataMatrix[i,:]*dataMatrix[j,:].T
b2 = b - Ej- labelMat[i]*(alphas[i]-alphaIold)*dataMatrix[i,:]*dataMatrix[j,:].T \
- labelMat[j]*(alphas[j]-alphaJold)*dataMatrix[j,:]*dataMatrix[j,:].T
if((0 < alphas[i]) and (C > alphas[i])):
b = b1
elif((0 < alphas[j]) and (C > alphas[j])):
b = b2
else:
b = (b1 + b2)/2.0
alphaPairsChanged += 1
print("iter: %d i:%d, pairs changed %d" % (iter_, i, alphaPairsChanged))
if(alphaPairsChanged == 0):
iter_ += 1
else:
iter_ = 0
print("iteration number: %d" % iter_)
return b, alphas
smoSimple(dataArr, labelArr, 0.6, 0.001, 40)
###Output
L==H
iter: 0 i:1, pairs changed 1
L==H
iter: 0 i:4, pairs changed 2
iter: 0 i:5, pairs changed 3
iter: 0 i:8, pairs changed 4
j not moving enough
j not moving enough
L==H
iter: 0 i:17, pairs changed 5
j not moving enough
j not moving enough
iter: 0 i:23, pairs changed 6
...
iteration number: 2
L==H
###Markdown
4. Speeding up optimization with the full Platt SMO algorithm
###Code
# Support functions for the full Platt SMO algorithm
class optStruct:
def __init__(self, dataMatIn, classLabels, C, toler, kTup):
self.X = dataMatIn
self.labelMat = classLabels
self.C = C
self.tol = toler
self.m = shape(dataMatIn)[0]
self.alphas = mat(zeros((self.m, 1)))
self.b = 0
self.eCache = mat(zeros((self.m, 2)))
def calcEk(oS, k):
fXk = float(multiply(oS.alphas,oS.labelMat).T*(oS.X*oS.X[k,:].T)) + oS.b
Ek = fXk - float(oS.labelMat[k])
return Ek
def selectJ(i, oS, Ei):
maxK = -1; maxDeltaE = 0; Ej = 0
oS.eCache[i] = [1, Ei]
validEcacheList = nonzero(oS.eCache[:,0].A)[0]
if((len(validEcacheList)) > 1):
for k in validEcacheList:
if(k == i):
continue
Ek = calcEk(oS, k)
deltaE = abs(Ei - Ek)
if (deltaE > maxDeltaE):
maxK = k; maxDeltaE = deltaE; Ej = Ek
return maxK, Ej
else:
j = selectJrand(i, oS.m)
Ej = calcEk(oS, j)
return j, Ej
def updateEk(oS, k):
Ek = calcEk(oS, k)
oS.eCache[k] = [1, Ek]
# Inner-loop optimization routine of the full Platt SMO algorithm
def innerL(i, oS):
Ei = calcEk(oS, i)
if (((oS.labelMat[i]*Ei < -oS.tol) and (oS.alphas[i] < oS.C)) or ((oS.labelMat[i]*Ei > oS.tol) and (oS.alphas[i] > 0))):
j, Ej = selectJ(i, oS, Ei)
alphaIold = oS.alphas[i].copy(); alphaJold = oS.alphas[j].copy();
if (oS.labelMat[i] != oS.labelMat[j]):
L = max(0, oS.alphas[j] - oS.alphas[i])
H = min(oS.C, oS.C + oS.alphas[j] - oS.alphas[i])
else:
L = max(0, oS.alphas[j] + oS.alphas[i] - oS.C)
H = min(oS.C, oS.alphas[j] + oS.alphas[i])
if(L==H):
print("L==H")
return 0
eta = 2.0 * oS.X[i,:]*oS.X[j,:].T - oS.X[i,:]*oS.X[i,:].T - oS.X[j,:]*oS.X[j,:].T
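        # eta = 2*(x_i.x_j) - x_i.x_i - x_j.x_j is the negative of the curvature along the constraint;
        # the alpha_j update below is only valid when eta < 0, hence the check that follows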
if(eta >= 0):
print("eta>=0")
return 0
oS.alphas[j] -= oS.labelMat[j]*(Ei - Ej)/eta
oS.alphas[j] = clipAlpha(oS.alphas[j], H, L)
updateEk(oS, j)
if(abs(oS.alphas[j] - alphaJold) < 0.00001):
print("j not moving enough")
return 0
oS.alphas[i] += oS.labelMat[j]*oS.labelMat[i]*(alphaJold - oS.alphas[j])
updateEk(oS, i)
b1 = oS.b - Ei- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.X[i,:]*oS.X[i,:].T - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.X[i,:]*oS.X[j,:].T
b2 = oS.b - Ej- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.X[i,:]*oS.X[j,:].T - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.X[j,:]*oS.X[j,:].T
if((0 < oS.alphas[i]) and (oS.C > oS.alphas[i])):
oS.b = b1
elif((0 < oS.alphas[j]) and (oS.C > oS.alphas[j])):
oS.b = b2
else:
oS.b = (b1 + b2)/2.0
return 1
else: return 0
# Outer-loop code of the full Platt SMO algorithm
def smoP(dataMatIn, classLabels, C, toler, maxIter,kTup=('lin', 0)):
oS = optStruct(mat(dataMatIn), mat(classLabels).transpose(), C, toler, kTup)
iter_ = 0
entireSet = True; alphaPairsChanged = 0
while (iter_ < maxIter) and ((alphaPairsChanged > 0) or (entireSet)):
alphaPairsChanged = 0
if entireSet: #go over all
for i in range(oS.m):
alphaPairsChanged += innerL(i,oS)
print("fullSet, iter: %d i:%d, pairs changed %d" % (iter_,i,alphaPairsChanged))
iter_ += 1
else: #go over non-bound (railed) alphas
nonBoundIs = nonzero((oS.alphas.A > 0) * (oS.alphas.A < C))[0]
for i in nonBoundIs:
alphaPairsChanged += innerL(i,oS)
print("non-bound, iter: %d i:%d, pairs changed %d" % (iter_, i, alphaPairsChanged))
iter_ += 1
if entireSet: entireSet = False #toggle entire set loop
elif (alphaPairsChanged == 0): entireSet = True
print("iteration number: %d" % iter_)
return oS.b,oS.alphas
b, alphas = smoP(dataArr, labelArr, 0.6, 0.001, 40)
def calcWs(alphas,dataArr,classLabels):
X = mat(dataArr); labelMat = mat(classLabels).transpose()
m,n = shape(X)
w = zeros((n,1))
for i in range(m):
w += multiply(alphas[i]*labelMat[i], X[i,:].T)
return w
ws = calcWs(alphas, dataArr, labelArr)
print(ws)
datMat = mat(dataArr)
print(datMat[0] * mat(ws) + b)
###Output
[[-0.92555695]]
###Markdown
5. Applying kernel functions to complex data
###Code
# Kernel transformation function
def kernelTrans(X, A, kTup): #calc the kernel or transform data to a higher dimensional space
m,n = shape(X)
K = mat(zeros((m,1)))
if(kTup[0]=='lin'): K = X * A.T #linear kernel
elif(kTup[0]=='rbf'):
for j in range(m):
deltaRow = X[j,:] - A
K[j] = deltaRow * deltaRow.T
K = exp(K/(-1*kTup[1]**2)) #divide in NumPy is element-wise not matrix like Matlab
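        # i.e. K[j] = exp(-||X[j]-A||**2 / kTup[1]**2): the Gaussian (RBF) similarity between each row of X and A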
else: raise NameError('Houston We Have a Problem -- \
That Kernel is not recognized')
return K
class optStruct:
def __init__(self, dataMatIn, classLabels, C, toler, kTup):
self.X = dataMatIn
self.labelMat = classLabels
self.C = C
self.tol = toler
self.m = shape(dataMatIn)[0]
self.alphas = mat(zeros((self.m, 1)))
self.b = 0
self.eCache = mat(zeros((self.m, 2)))
self.K = mat(zeros((self.m, self.m)))
for i in range(self.m):
self.K[:, i] = kernelTrans(self.X, self.X[i, :], kTup)
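        # self.K precomputes the full kernel matrix (K[i, j] = k(x_i, x_j)), so innerL and calcEk
        # below can look kernel values up instead of recomputing inner products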
def innerL(i, oS):
Ei = calcEk(oS, i)
if (((oS.labelMat[i]*Ei < -oS.tol) and (oS.alphas[i] < oS.C)) or ((oS.labelMat[i]*Ei > oS.tol) and (oS.alphas[i] > 0))):
j, Ej = selectJ(i, oS, Ei)
alphaIold = oS.alphas[i].copy(); alphaJold = oS.alphas[j].copy();
if (oS.labelMat[i] != oS.labelMat[j]):
L = max(0, oS.alphas[j] - oS.alphas[i])
H = min(oS.C, oS.C + oS.alphas[j] - oS.alphas[i])
else:
L = max(0, oS.alphas[j] + oS.alphas[i] - oS.C)
H = min(oS.C, oS.alphas[j] + oS.alphas[i])
if(L==H):
print("L==H")
return 0
eta = 2.0 * oS.K[i,j] - oS.K[i,i] - oS.K[j,j]
if(eta >= 0):
print("eta>=0")
return 0
oS.alphas[j] -= oS.labelMat[j]*(Ei - Ej)/eta
oS.alphas[j] = clipAlpha(oS.alphas[j], H, L)
updateEk(oS, j)
if(abs(oS.alphas[j] - alphaJold) < 0.00001):
print("j not moving enough")
return 0
oS.alphas[i] += oS.labelMat[j]*oS.labelMat[i]*(alphaJold - oS.alphas[j])
updateEk(oS, i)
b1 = oS.b - Ei- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.K[i,i] - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.K[i,j]
b2 = oS.b - Ej- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.K[i,j]- oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.K[j,j]
if((0 < oS.alphas[i]) and (oS.C > oS.alphas[i])):
oS.b = b1
elif((0 < oS.alphas[j]) and (oS.C > oS.alphas[j])):
oS.b = b2
else:
oS.b = (b1 + b2)/2.0
return 1
else: return 0
def calcEk(oS, k):
fXk = float(multiply(oS.alphas,oS.labelMat).T*oS.K[:,k] + oS.b)
Ek = fXk - float(oS.labelMat[k])
return Ek
# Using the kernel during testing: radial basis (RBF) test function that classifies with the kernel
def testRbf(k1=1.3):
dataArr,labelArr = loadDataSet('D:/data/study/AI/ML/MLcode/Ch06/testSetRBF.txt')
b,alphas = smoP(dataArr, labelArr, 200, 0.0001, 10000, ('rbf', k1)) # C=200 important
datMat = mat(dataArr); labelMat = mat(labelArr).transpose()
svInd = nonzero(alphas.A>0)[0]
sVs = datMat[svInd] #get matrix of only support vectors
labelSV = labelMat[svInd];
print("there are %d Support Vectors" % shape(sVs)[0])
m, n = shape(datMat)
errorCount = 0
for i in range(m):
kernelEval = kernelTrans(sVs, datMat[i,:], ('rbf', k1))
predict = kernelEval.T * multiply(labelSV, alphas[svInd]) + b
if(sign(predict)!=sign(labelArr[i])): errorCount += 1
print("the training error rate is: %f" % (float(errorCount)/m))
dataArr, labelArr = loadDataSet('D:/data/study/AI/ML/MLcode/Ch06/testSetRBF2.txt')
errorCount = 0
datMat = mat(dataArr); labelMat = mat(labelArr).transpose()
m, n = shape(datMat)
for i in range(m):
kernelEval = kernelTrans(sVs, datMat[i,:], ('rbf', k1))
predict = kernelEval.T * multiply(labelSV, alphas[svInd]) + b
if(sign(predict)!=sign(labelArr[i])): errorCount += 1
print("the test error rate is: %f" % (float(errorCount)/m))
testRbf()
###Output
fullSet, iter: 0 i:0, pairs changed 1
fullSet, iter: 0 i:1, pairs changed 1
fullSet, iter: 0 i:2, pairs changed 2
fullSet, iter: 0 i:3, pairs changed 3
fullSet, iter: 0 i:4, pairs changed 3
fullSet, iter: 0 i:5, pairs changed 4
fullSet, iter: 0 i:6, pairs changed 4
fullSet, iter: 0 i:7, pairs changed 5
fullSet, iter: 0 i:8, pairs changed 5
fullSet, iter: 0 i:9, pairs changed 5
fullSet, iter: 0 i:10, pairs changed 6
fullSet, iter: 0 i:11, pairs changed 7
fullSet, iter: 0 i:12, pairs changed 7
fullSet, iter: 0 i:13, pairs changed 8
fullSet, iter: 0 i:14, pairs changed 9
fullSet, iter: 0 i:15, pairs changed 10
fullSet, iter: 0 i:16, pairs changed 11
fullSet, iter: 0 i:17, pairs changed 12
fullSet, iter: 0 i:18, pairs changed 13
fullSet, iter: 0 i:19, pairs changed 14
fullSet, iter: 0 i:20, pairs changed 14
fullSet, iter: 0 i:21, pairs changed 15
j not moving enough
fullSet, iter: 0 i:22, pairs changed 15
j not moving enough
fullSet, iter: 0 i:23, pairs changed 15
fullSet, iter: 0 i:24, pairs changed 16
fullSet, iter: 0 i:25, pairs changed 16
fullSet, iter: 0 i:26, pairs changed 17
fullSet, iter: 0 i:27, pairs changed 18
fullSet, iter: 0 i:28, pairs changed 19
fullSet, iter: 0 i:29, pairs changed 20
fullSet, iter: 0 i:30, pairs changed 20
fullSet, iter: 0 i:31, pairs changed 21
fullSet, iter: 0 i:32, pairs changed 21
fullSet, iter: 0 i:33, pairs changed 21
fullSet, iter: 0 i:34, pairs changed 21
fullSet, iter: 0 i:35, pairs changed 21
fullSet, iter: 0 i:36, pairs changed 22
fullSet, iter: 0 i:37, pairs changed 22
fullSet, iter: 0 i:38, pairs changed 22
fullSet, iter: 0 i:39, pairs changed 22
j not moving enough
fullSet, iter: 0 i:40, pairs changed 22
fullSet, iter: 0 i:41, pairs changed 23
L==H
fullSet, iter: 0 i:42, pairs changed 23
fullSet, iter: 0 i:43, pairs changed 23
fullSet, iter: 0 i:44, pairs changed 23
fullSet, iter: 0 i:45, pairs changed 24
L==H
fullSet, iter: 0 i:46, pairs changed 24
fullSet, iter: 0 i:47, pairs changed 24
L==H
fullSet, iter: 0 i:48, pairs changed 24
fullSet, iter: 0 i:49, pairs changed 24
fullSet, iter: 0 i:50, pairs changed 25
fullSet, iter: 0 i:51, pairs changed 25
j not moving enough
fullSet, iter: 0 i:52, pairs changed 25
L==H
fullSet, iter: 0 i:53, pairs changed 25
fullSet, iter: 0 i:54, pairs changed 26
fullSet, iter: 0 i:55, pairs changed 26
fullSet, iter: 0 i:56, pairs changed 27
fullSet, iter: 0 i:57, pairs changed 27
fullSet, iter: 0 i:58, pairs changed 27
fullSet, iter: 0 i:59, pairs changed 27
fullSet, iter: 0 i:60, pairs changed 27
fullSet, iter: 0 i:61, pairs changed 27
fullSet, iter: 0 i:62, pairs changed 28
fullSet, iter: 0 i:63, pairs changed 28
fullSet, iter: 0 i:64, pairs changed 28
fullSet, iter: 0 i:65, pairs changed 28
fullSet, iter: 0 i:66, pairs changed 28
fullSet, iter: 0 i:67, pairs changed 28
fullSet, iter: 0 i:68, pairs changed 28
fullSet, iter: 0 i:69, pairs changed 28
fullSet, iter: 0 i:70, pairs changed 28
fullSet, iter: 0 i:71, pairs changed 28
fullSet, iter: 0 i:72, pairs changed 28
fullSet, iter: 0 i:73, pairs changed 28
j not moving enough
fullSet, iter: 0 i:74, pairs changed 28
fullSet, iter: 0 i:75, pairs changed 28
L==H
fullSet, iter: 0 i:76, pairs changed 28
fullSet, iter: 0 i:77, pairs changed 28
L==H
fullSet, iter: 0 i:78, pairs changed 28
fullSet, iter: 0 i:79, pairs changed 28
fullSet, iter: 0 i:80, pairs changed 28
L==H
fullSet, iter: 0 i:81, pairs changed 28
fullSet, iter: 0 i:82, pairs changed 28
fullSet, iter: 0 i:83, pairs changed 28
fullSet, iter: 0 i:84, pairs changed 28
L==H
fullSet, iter: 0 i:85, pairs changed 28
fullSet, iter: 0 i:86, pairs changed 28
L==H
fullSet, iter: 0 i:87, pairs changed 28
fullSet, iter: 0 i:88, pairs changed 28
fullSet, iter: 0 i:89, pairs changed 28
L==H
fullSet, iter: 0 i:90, pairs changed 28
fullSet, iter: 0 i:91, pairs changed 28
L==H
fullSet, iter: 0 i:92, pairs changed 28
fullSet, iter: 0 i:93, pairs changed 28
fullSet, iter: 0 i:94, pairs changed 28
fullSet, iter: 0 i:95, pairs changed 28
fullSet, iter: 0 i:96, pairs changed 28
fullSet, iter: 0 i:97, pairs changed 28
fullSet, iter: 0 i:98, pairs changed 28
fullSet, iter: 0 i:99, pairs changed 28
iteration number: 1
j not moving enough
non-bound, iter: 1 i:0, pairs changed 0
j not moving enough
non-bound, iter: 1 i:1, pairs changed 0
j not moving enough
non-bound, iter: 1 i:3, pairs changed 0
j not moving enough
non-bound, iter: 1 i:10, pairs changed 0
j not moving enough
non-bound, iter: 1 i:11, pairs changed 0
j not moving enough
non-bound, iter: 1 i:13, pairs changed 0
j not moving enough
non-bound, iter: 1 i:14, pairs changed 0
j not moving enough
non-bound, iter: 1 i:15, pairs changed 0
j not moving enough
non-bound, iter: 1 i:16, pairs changed 0
j not moving enough
non-bound, iter: 1 i:17, pairs changed 0
j not moving enough
non-bound, iter: 1 i:18, pairs changed 0
j not moving enough
non-bound, iter: 1 i:19, pairs changed 0
non-bound, iter: 1 i:21, pairs changed 0
j not moving enough
non-bound, iter: 1 i:24, pairs changed 0
non-bound, iter: 1 i:26, pairs changed 1
non-bound, iter: 1 i:27, pairs changed 2
j not moving enough
non-bound, iter: 1 i:28, pairs changed 2
j not moving enough
non-bound, iter: 1 i:29, pairs changed 2
j not moving enough
non-bound, iter: 1 i:31, pairs changed 2
j not moving enough
non-bound, iter: 1 i:36, pairs changed 2
j not moving enough
non-bound, iter: 1 i:41, pairs changed 2
j not moving enough
non-bound, iter: 1 i:42, pairs changed 2
non-bound, iter: 1 i:45, pairs changed 3
j not moving enough
non-bound, iter: 1 i:50, pairs changed 3
j not moving enough
non-bound, iter: 1 i:54, pairs changed 3
j not moving enough
non-bound, iter: 1 i:56, pairs changed 3
j not moving enough
non-bound, iter: 1 i:62, pairs changed 3
iteration number: 2
j not moving enough
non-bound, iter: 2 i:0, pairs changed 0
j not moving enough
non-bound, iter: 2 i:1, pairs changed 0
j not moving enough
non-bound, iter: 2 i:3, pairs changed 0
j not moving enough
non-bound, iter: 2 i:10, pairs changed 0
j not moving enough
non-bound, iter: 2 i:11, pairs changed 0
j not moving enough
non-bound, iter: 2 i:13, pairs changed 0
j not moving enough
non-bound, iter: 2 i:14, pairs changed 0
non-bound, iter: 2 i:15, pairs changed 1
j not moving enough
non-bound, iter: 2 i:16, pairs changed 1
j not moving enough
non-bound, iter: 2 i:17, pairs changed 1
j not moving enough
non-bound, iter: 2 i:18, pairs changed 1
j not moving enough
non-bound, iter: 2 i:19, pairs changed 1
j not moving enough
non-bound, iter: 2 i:21, pairs changed 1
j not moving enough
non-bound, iter: 2 i:26, pairs changed 1
j not moving enough
non-bound, iter: 2 i:27, pairs changed 1
j not moving enough
non-bound, iter: 2 i:28, pairs changed 1
j not moving enough
non-bound, iter: 2 i:29, pairs changed 1
j not moving enough
non-bound, iter: 2 i:31, pairs changed 1
j not moving enough
non-bound, iter: 2 i:36, pairs changed 1
non-bound, iter: 2 i:41, pairs changed 1
j not moving enough
non-bound, iter: 2 i:42, pairs changed 1
non-bound, iter: 2 i:45, pairs changed 2
j not moving enough
non-bound, iter: 2 i:50, pairs changed 2
j not moving enough
non-bound, iter: 2 i:54, pairs changed 2
non-bound, iter: 2 i:56, pairs changed 3
j not moving enough
non-bound, iter: 2 i:62, pairs changed 3
j not moving enough
non-bound, iter: 2 i:76, pairs changed 3
iteration number: 3
non-bound, iter: 3 i:0, pairs changed 1
j not moving enough
non-bound, iter: 3 i:1, pairs changed 1
j not moving enough
non-bound, iter: 3 i:3, pairs changed 1
j not moving enough
non-bound, iter: 3 i:10, pairs changed 1
non-bound, iter: 3 i:11, pairs changed 2
j not moving enough
non-bound, iter: 3 i:13, pairs changed 2
j not moving enough
non-bound, iter: 3 i:14, pairs changed 2
j not moving enough
non-bound, iter: 3 i:16, pairs changed 2
j not moving enough
non-bound, iter: 3 i:17, pairs changed 2
j not moving enough
non-bound, iter: 3 i:18, pairs changed 2
j not moving enough
non-bound, iter: 3 i:19, pairs changed 2
j not moving enough
non-bound, iter: 3 i:21, pairs changed 2
j not moving enough
non-bound, iter: 3 i:26, pairs changed 2
j not moving enough
non-bound, iter: 3 i:27, pairs changed 2
j not moving enough
non-bound, iter: 3 i:28, pairs changed 2
j not moving enough
non-bound, iter: 3 i:29, pairs changed 2
j not moving enough
non-bound, iter: 3 i:31, pairs changed 2
non-bound, iter: 3 i:36, pairs changed 2
j not moving enough
non-bound, iter: 3 i:41, pairs changed 2
j not moving enough
non-bound, iter: 3 i:42, pairs changed 2
j not moving enough
non-bound, iter: 3 i:45, pairs changed 2
j not moving enough
non-bound, iter: 3 i:50, pairs changed 2
j not moving enough
non-bound, iter: 3 i:54, pairs changed 2
j not moving enough
non-bound, iter: 3 i:56, pairs changed 2
j not moving enough
non-bound, iter: 3 i:62, pairs changed 2
j not moving enough
non-bound, iter: 3 i:76, pairs changed 2
j not moving enough
non-bound, iter: 3 i:87, pairs changed 2
iteration number: 4
j not moving enough
non-bound, iter: 4 i:0, pairs changed 0
j not moving enough
non-bound, iter: 4 i:1, pairs changed 0
j not moving enough
non-bound, iter: 4 i:3, pairs changed 0
j not moving enough
non-bound, iter: 4 i:10, pairs changed 0
j not moving enough
non-bound, iter: 4 i:13, pairs changed 0
j not moving enough
non-bound, iter: 4 i:14, pairs changed 0
j not moving enough
non-bound, iter: 4 i:16, pairs changed 0
j not moving enough
non-bound, iter: 4 i:17, pairs changed 0
j not moving enough
non-bound, iter: 4 i:18, pairs changed 0
j not moving enough
non-bound, iter: 4 i:19, pairs changed 0
j not moving enough
non-bound, iter: 4 i:21, pairs changed 0
j not moving enough
non-bound, iter: 4 i:26, pairs changed 0
j not moving enough
non-bound, iter: 4 i:27, pairs changed 0
j not moving enough
non-bound, iter: 4 i:28, pairs changed 0
j not moving enough
non-bound, iter: 4 i:29, pairs changed 0
j not moving enough
non-bound, iter: 4 i:31, pairs changed 0
non-bound, iter: 4 i:36, pairs changed 0
j not moving enough
non-bound, iter: 4 i:41, pairs changed 0
j not moving enough
non-bound, iter: 4 i:42, pairs changed 0
j not moving enough
non-bound, iter: 4 i:45, pairs changed 0
j not moving enough
non-bound, iter: 4 i:50, pairs changed 0
j not moving enough
non-bound, iter: 4 i:54, pairs changed 0
j not moving enough
non-bound, iter: 4 i:56, pairs changed 0
###Markdown
6. Revisiting the handwritten digit recognition problem
###Code
def img2vector(filename):
returnVect = zeros((1,1024))
fr = open(filename)
for i in range(32):
lineStr = fr.readline()
for j in range(32):
returnVect[0,32*i+j] = int(lineStr[j])
return returnVect
# SVM-based handwritten digit recognition
def loadImages(dirName):
from os import listdir
hwLabels = []
trainingFileList = listdir(dirName) #load the training set
m = len(trainingFileList)
trainingMat = zeros((m,1024))
for i in range(m):
fileNameStr = trainingFileList[i]
fileStr = fileNameStr.split('.')[0] #take off .txt
classNumStr = int(fileStr.split('_')[0])
if classNumStr == 9: hwLabels.append(-1)
else: hwLabels.append(1)
trainingMat[i,:] = img2vector('%s/%s' % (dirName, fileNameStr))
return trainingMat, hwLabels
def testDigits(kTup = ('rbf', 10)):
dataArr,labelArr = loadImages('D:/data/study/AI/ML/MLcode/Ch02/trainingDigits')
b, alphas = smoP(dataArr, labelArr, 200, 0.0001, 10000, kTup)
datMat = mat(dataArr); labelMat = mat(labelArr).transpose()
svInd = nonzero(alphas.A > 0)[0]
sVs = datMat[svInd]
labelSV = labelMat[svInd];
print("there are %d Support Vectors" % shape(sVs)[0])
m, n = shape(datMat)
errorCount = 0
for i in range(m):
kernelEval = kernelTrans(sVs, datMat[i,:], kTup)
        predict = kernelEval.T * multiply(labelSV, alphas[svInd]) + b
if(sign(predict)!=sign(labelArr[i])): errorCount += 1
print("the training error rate is: %f" % (float(errorCount)/m))
dataArr, labelArr = loadImages('D:/data/study/AI/ML/MLcode/Ch02/testDigits')
errorCount = 0
datMat = mat(dataArr); labelMat = mat(labelArr).transpose()
m, n = shape(datMat)
for i in range(m):
kernelEval = kernelTrans(sVs, datMat[i,:], kTup)
predict = kernelEval.T * multiply(labelSV, alphas[svInd]) + b
if(sign(predict)!=sign(labelArr[i])): errorCount += 1
print("the test error rate is: %f" % (float(errorCount)/m))
testDigits(('rbf', 20))
###Output
L==H
fullSet, iter: 0 i:0, pairs changed 0
fullSet, iter: 0 i:1, pairs changed 1
fullSet, iter: 0 i:2, pairs changed 2
fullSet, iter: 0 i:3, pairs changed 3
fullSet, iter: 0 i:4, pairs changed 4
fullSet, iter: 0 i:5, pairs changed 4
fullSet, iter: 0 i:6, pairs changed 4
fullSet, iter: 0 i:7, pairs changed 5
fullSet, iter: 0 i:8, pairs changed 6
fullSet, iter: 0 i:9, pairs changed 7
fullSet, iter: 0 i:10, pairs changed 7
fullSet, iter: 0 i:11, pairs changed 7
fullSet, iter: 0 i:12, pairs changed 7
fullSet, iter: 0 i:13, pairs changed 8
fullSet, iter: 0 i:14, pairs changed 9
fullSet, iter: 0 i:15, pairs changed 9
fullSet, iter: 0 i:16, pairs changed 9
fullSet, iter: 0 i:17, pairs changed 10
fullSet, iter: 0 i:18, pairs changed 11
fullSet, iter: 0 i:19, pairs changed 11
j not moving enough
fullSet, iter: 0 i:20, pairs changed 11
L==H
fullSet, iter: 0 i:21, pairs changed 11
L==H
fullSet, iter: 0 i:22, pairs changed 11
fullSet, iter: 0 i:23, pairs changed 11
fullSet, iter: 0 i:24, pairs changed 11
L==H
fullSet, iter: 0 i:25, pairs changed 11
L==H
fullSet, iter: 0 i:26, pairs changed 11
L==H
fullSet, iter: 0 i:27, pairs changed 11
L==H
fullSet, iter: 0 i:28, pairs changed 11
L==H
fullSet, iter: 0 i:29, pairs changed 11
L==H
fullSet, iter: 0 i:30, pairs changed 11
fullSet, iter: 0 i:31, pairs changed 11
L==H
fullSet, iter: 0 i:32, pairs changed 11
fullSet, iter: 0 i:33, pairs changed 11
L==H
fullSet, iter: 0 i:34, pairs changed 11
L==H
fullSet, iter: 0 i:35, pairs changed 11
L==H
fullSet, iter: 0 i:36, pairs changed 11
fullSet, iter: 0 i:37, pairs changed 11
L==H
fullSet, iter: 0 i:38, pairs changed 11
L==H
fullSet, iter: 0 i:39, pairs changed 11
L==H
fullSet, iter: 0 i:40, pairs changed 11
fullSet, iter: 0 i:41, pairs changed 11
L==H
fullSet, iter: 0 i:42, pairs changed 11
fullSet, iter: 0 i:43, pairs changed 11
fullSet, iter: 0 i:44, pairs changed 11
fullSet, iter: 0 i:45, pairs changed 11
L==H
fullSet, iter: 0 i:46, pairs changed 11
L==H
fullSet, iter: 0 i:47, pairs changed 11
L==H
fullSet, iter: 0 i:48, pairs changed 11
L==H
fullSet, iter: 0 i:49, pairs changed 11
L==H
fullSet, iter: 0 i:50, pairs changed 11
L==H
fullSet, iter: 0 i:51, pairs changed 11
L==H
fullSet, iter: 0 i:52, pairs changed 11
L==H
fullSet, iter: 0 i:53, pairs changed 11
fullSet, iter: 0 i:54, pairs changed 11
fullSet, iter: 0 i:55, pairs changed 11
L==H
fullSet, iter: 0 i:56, pairs changed 11
fullSet, iter: 0 i:57, pairs changed 11
fullSet, iter: 0 i:58, pairs changed 11
fullSet, iter: 0 i:59, pairs changed 11
fullSet, iter: 0 i:60, pairs changed 11
fullSet, iter: 0 i:61, pairs changed 11
fullSet, iter: 0 i:62, pairs changed 11
fullSet, iter: 0 i:63, pairs changed 11
fullSet, iter: 0 i:64, pairs changed 11
L==H
fullSet, iter: 0 i:65, pairs changed 11
L==H
fullSet, iter: 0 i:66, pairs changed 11
L==H
fullSet, iter: 0 i:67, pairs changed 11
L==H
fullSet, iter: 0 i:68, pairs changed 11
L==H
fullSet, iter: 0 i:69, pairs changed 11
L==H
fullSet, iter: 0 i:70, pairs changed 11
L==H
fullSet, iter: 0 i:71, pairs changed 11
L==H
fullSet, iter: 0 i:72, pairs changed 11
L==H
fullSet, iter: 0 i:73, pairs changed 11
L==H
fullSet, iter: 0 i:74, pairs changed 11
L==H
fullSet, iter: 0 i:75, pairs changed 11
L==H
fullSet, iter: 0 i:76, pairs changed 11
fullSet, iter: 0 i:77, pairs changed 11
L==H
fullSet, iter: 0 i:78, pairs changed 11
L==H
fullSet, iter: 0 i:79, pairs changed 11
fullSet, iter: 0 i:80, pairs changed 11
L==H
fullSet, iter: 0 i:81, pairs changed 11
L==H
fullSet, iter: 0 i:82, pairs changed 11
L==H
fullSet, iter: 0 i:83, pairs changed 11
L==H
fullSet, iter: 0 i:84, pairs changed 11
L==H
fullSet, iter: 0 i:85, pairs changed 11
fullSet, iter: 0 i:86, pairs changed 11
fullSet, iter: 0 i:87, pairs changed 11
fullSet, iter: 0 i:88, pairs changed 11
L==H
fullSet, iter: 0 i:89, pairs changed 11
L==H
fullSet, iter: 0 i:90, pairs changed 11
fullSet, iter: 0 i:91, pairs changed 11
L==H
fullSet, iter: 0 i:92, pairs changed 11
fullSet, iter: 0 i:93, pairs changed 11
L==H
fullSet, iter: 0 i:94, pairs changed 11
fullSet, iter: 0 i:95, pairs changed 11
L==H
fullSet, iter: 0 i:96, pairs changed 11
fullSet, iter: 0 i:97, pairs changed 11
L==H
fullSet, iter: 0 i:98, pairs changed 11
L==H
fullSet, iter: 0 i:99, pairs changed 11
L==H
fullSet, iter: 0 i:100, pairs changed 11
L==H
fullSet, iter: 0 i:101, pairs changed 11
fullSet, iter: 0 i:102, pairs changed 11
L==H
fullSet, iter: 0 i:103, pairs changed 11
L==H
fullSet, iter: 0 i:104, pairs changed 11
L==H
fullSet, iter: 0 i:105, pairs changed 11
fullSet, iter: 0 i:106, pairs changed 11
fullSet, iter: 0 i:107, pairs changed 11
L==H
fullSet, iter: 0 i:108, pairs changed 11
L==H
fullSet, iter: 0 i:109, pairs changed 11
L==H
fullSet, iter: 0 i:110, pairs changed 11
L==H
fullSet, iter: 0 i:111, pairs changed 11
fullSet, iter: 0 i:112, pairs changed 11
L==H
fullSet, iter: 0 i:113, pairs changed 11
L==H
fullSet, iter: 0 i:114, pairs changed 11
L==H
fullSet, iter: 0 i:115, pairs changed 11
fullSet, iter: 0 i:116, pairs changed 11
L==H
fullSet, iter: 0 i:117, pairs changed 11
L==H
fullSet, iter: 0 i:118, pairs changed 11
L==H
fullSet, iter: 0 i:119, pairs changed 11
fullSet, iter: 0 i:120, pairs changed 11
L==H
fullSet, iter: 0 i:121, pairs changed 11
L==H
fullSet, iter: 0 i:122, pairs changed 11
L==H
fullSet, iter: 0 i:123, pairs changed 11
L==H
fullSet, iter: 0 i:124, pairs changed 11
L==H
fullSet, iter: 0 i:125, pairs changed 11
L==H
fullSet, iter: 0 i:126, pairs changed 11
L==H
fullSet, iter: 0 i:127, pairs changed 11
L==H
fullSet, iter: 0 i:128, pairs changed 11
L==H
fullSet, iter: 0 i:129, pairs changed 11
L==H
fullSet, iter: 0 i:130, pairs changed 11
L==H
fullSet, iter: 0 i:131, pairs changed 11
fullSet, iter: 0 i:132, pairs changed 11
L==H
fullSet, iter: 0 i:133, pairs changed 11
fullSet, iter: 0 i:134, pairs changed 11
L==H
fullSet, iter: 0 i:135, pairs changed 11
L==H
fullSet, iter: 0 i:136, pairs changed 11
fullSet, iter: 0 i:137, pairs changed 11
L==H
fullSet, iter: 0 i:138, pairs changed 11
fullSet, iter: 0 i:139, pairs changed 11
L==H
fullSet, iter: 0 i:140, pairs changed 11
fullSet, iter: 0 i:141, pairs changed 11
L==H
fullSet, iter: 0 i:142, pairs changed 11
L==H
fullSet, iter: 0 i:143, pairs changed 11
L==H
fullSet, iter: 0 i:144, pairs changed 11
fullSet, iter: 0 i:145, pairs changed 11
L==H
fullSet, iter: 0 i:146, pairs changed 11
fullSet, iter: 0 i:147, pairs changed 11
L==H
fullSet, iter: 0 i:148, pairs changed 11
L==H
fullSet, iter: 0 i:149, pairs changed 11
L==H
fullSet, iter: 0 i:150, pairs changed 11
L==H
fullSet, iter: 0 i:151, pairs changed 11
L==H
fullSet, iter: 0 i:152, pairs changed 11
L==H
fullSet, iter: 0 i:153, pairs changed 11
L==H
fullSet, iter: 0 i:154, pairs changed 11
L==H
fullSet, iter: 0 i:155, pairs changed 11
fullSet, iter: 0 i:156, pairs changed 11
L==H
fullSet, iter: 0 i:157, pairs changed 11
fullSet, iter: 0 i:158, pairs changed 11
L==H
fullSet, iter: 0 i:159, pairs changed 11
L==H
fullSet, iter: 0 i:160, pairs changed 11
L==H
fullSet, iter: 0 i:161, pairs changed 11
L==H
fullSet, iter: 0 i:162, pairs changed 11
L==H
fullSet, iter: 0 i:163, pairs changed 11
L==H
fullSet, iter: 0 i:164, pairs changed 11
L==H
fullSet, iter: 0 i:165, pairs changed 11
L==H
fullSet, iter: 0 i:166, pairs changed 11
L==H
fullSet, iter: 0 i:167, pairs changed 11
L==H
fullSet, iter: 0 i:168, pairs changed 11
L==H
fullSet, iter: 0 i:169, pairs changed 11
L==H
fullSet, iter: 0 i:170, pairs changed 11
L==H
fullSet, iter: 0 i:171, pairs changed 11
L==H
fullSet, iter: 0 i:172, pairs changed 11
L==H
fullSet, iter: 0 i:173, pairs changed 11
L==H
fullSet, iter: 0 i:174, pairs changed 11
L==H
fullSet, iter: 0 i:175, pairs changed 11
L==H
fullSet, iter: 0 i:176, pairs changed 11
L==H
fullSet, iter: 0 i:177, pairs changed 11
L==H
fullSet, iter: 0 i:178, pairs changed 11
L==H
fullSet, iter: 0 i:179, pairs changed 11
L==H
fullSet, iter: 0 i:180, pairs changed 11
L==H
fullSet, iter: 0 i:181, pairs changed 11
fullSet, iter: 0 i:182, pairs changed 11
L==H
fullSet, iter: 0 i:183, pairs changed 11
L==H
fullSet, iter: 0 i:184, pairs changed 11
L==H
fullSet, iter: 0 i:185, pairs changed 11
L==H
fullSet, iter: 0 i:186, pairs changed 11
L==H
fullSet, iter: 0 i:187, pairs changed 11
L==H
fullSet, iter: 0 i:188, pairs changed 11
L==H
fullSet, iter: 0 i:189, pairs changed 11
L==H
fullSet, iter: 0 i:190, pairs changed 11
|
|
Webscraping/digit.in/GamingLaptops_Webscraping.ipynb | ###Markdown
7. Write a program to scrape all the available details of the top 10 gaming laptops from digit.in.
###Code
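# Imports assumed by the cells below (they may also have been run in an earlier, omitted cell):
# selenium for browser automation and pandas for building the final DataFrame.
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
import pandas as pd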
#Connect to web driver
driver=webdriver.Chrome(r"D://chromedriver.exe") #r converts string to raw string
#If not r, we can use executable_path = "C:/path name"
#Getting the website to driver
driver.get('https://www.digit.in/')
#When we run this line, automatically the webpage will be opened
#Clicking on top 10 option
top10=driver.find_element_by_xpath("//div[@class='menu']/ul/li[4]/a")
top10.click()
#Clicking on best gaming laptops option
best_gl=driver.find_element_by_xpath("//div[@class='listing_container']/ul/li[26]/a")
best_gl.click()
#Specifying the url of the webpage to be scraped
url="https://www.digit.in/top-products/best-gaming-laptops-40.html"
driver.get(url)
#Extracting the tags having the laptop name
name=driver.find_elements_by_xpath("//div[@class='right-container']/div/a/h3")
name
#Extracting the text from the tags
prod_name=[] #Empty list
#As we need to scrap data for all the products, we are running a for loop for extracting all data
for i in name:
prod_name.append(i.text)
prod_name
#Extracting the tags having the OS type
OS_type=driver.find_elements_by_xpath("//div[@class='product-detail']/div/ul/li[1]/div/div")
OS_type
#Extracting the text from the tags
OS=[] #Empty list
#As we need to scrap data for all the products, we are running a for loop for extracting all data
for i in OS_type:
OS.append(i.text)
OS
#Extracting the tags having display details
display=driver.find_elements_by_xpath("//div[@class='product-detail']/div/ul/li[2]/div/div")
display
#Extracting the text from the tags
display_specs=[] #Empty list
#As we need to scrap data for all the products, we are running a for loop for extracting all data
for i in display:
display_specs.append(i.text)
display_specs
#Extracting the tags having processor details
processor=driver.find_elements_by_xpath("//div[@class='product-detail']/div/ul/li[3]/div/div")
processor
#Extracting the text from the tags
processor_specs=[] #Empty list
#As we need to scrap data for all the products, we are running a for loop for extracting all data
for i in processor:
processor_specs.append(i.text)
processor_specs
#Extracting the tags having memory specs
#List of specification names
memory=driver.find_elements_by_xpath("//div[@class='Spcs-details'][1]/table/tbody/tr[6]/td[1]")
#Value of the specifications
memory_specs=driver.find_elements_by_xpath("//div[@class='Spcs-details'][1]/table/tbody/tr[6]/td[3]")
#Now we will separate HDD and RAM text from memory specs tags
HDD=[]
RAM=[] #Empty lists
for i in range(len(memory)):
if memory[i].text=='Memory':
HDD.append(memory_specs[i].text.split('/')[0])
RAM.append(memory_specs[i].text.split('/')[1])
else:
HDD.append('Not Available')
RAM.append('Not Available')
print('HDD:',HDD)
print('RAM:',RAM)
#Extracting the tags having weight
#List of specifications name
weights=driver.find_elements_by_xpath("//div[@class='Spcs-details']/table/tbody/tr/td[1]")
#Value of the specifications
weights_specs=driver.find_elements_by_xpath("//div[@class='Spcs-details']/table/tbody/tr/td[3]")
#Now we will separate weight text from tags
weight=[] #Empty list
for i in range(len(weights)):
if weights[i].text=='Weight':
weight.append(weights_specs[i].text)
weight
#Extracting the tags having dimensions
#List of specifications name
dims=driver.find_elements_by_xpath("//div[@class='Spcs-details']/table/tbody/tr/td[1]")
#Value of the specifications
dims_specs=driver.find_elements_by_xpath("//div[@class='Spcs-details']/table/tbody/tr/td[3]")
#Now we will separate dimensions text from tags
dimension=[] #Empty list
for i in range(len(dims)):
if dims[i].text=='Dimension':
dimension.append(dims_specs[i].text)
dimension
#Extracting the tags having Graphics Processor
#List of specifications name
GPs=driver.find_elements_by_xpath("//div[@class='Spcs-details']/table/tbody/tr/td[1]")
#Value of the specifications
GPs_specs=driver.find_elements_by_xpath("//div[@class='Spcs-details']/table/tbody/tr/td[3]")
#Now we will separate GPs text from tags
GPU=[] #Empty list
for i in range(len(GPs)):
if GPs[i].text=='Graphics Processor':
GPU.append(GPs_specs[i].text)
GPU
#Extracting the tags having the price
#As there are some prices in the main url, we will go to full specs and scrape the prices
#First we will extract the urls of all laptop's full specs
full_specs=[] #Empty list
urls=driver.find_elements_by_xpath("//div[@class='full-specs']/span")
#Running a for loop for extraction of text from tags
for i in urls:
if i.get_attribute('data-href'):
full_specs.append(i.get_attribute('data-href'))
full_specs
#Now we will extract price by iterating full_specs
Price=[] #Empty list
for i in full_specs:
driver.get(i)
try:
prices=driver.find_element_by_xpath("//div[@class='Block-price']/b") #Getting price tags
Price.append(prices.text) #Extracting text
except NoSuchElementException as e: #Running an exception if the price is not available
Price.append("Not Available") #Message to be printed in places where the price is not available
Price #Checking the extracted prices
#Creating a dataframe for saving our extracted data
laptops=pd.DataFrame({'Product Name':prod_name,'OS':OS,'Display':display_specs,'Processor':processor_specs,'HDD':HDD,'RAM':RAM,
'Weight':weight,'Dimension':dimension,'Graphic Processor':GPU,'Price':Price})
laptops
#Saving the data into a csv file
laptops.to_csv('Gaming_Laptops.csv')
#Closing the driver
driver.close()
###Output
_____no_output_____ |
figurasaula2.ipynb | ###Markdown
Lines in $V_n$Let's use these exercises to talk a little about *matplotlib*, a plotting library used with Python. It is quite complete, and anyone interested can see more on the [official matplotlib site](https://matplotlib.org).First we load the library into our environment so we can use its functions.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np # another important library for the numerical part
###Output
_____no_output_____
###Markdown
Exercise 1: A line passes through the points $(-3,1)$ and $(1,1)$:
###Code
plt.plot([-3 , 1], [1, 1])
plt.plot([0,0,1,2,-2],[0,1,2,1,1], "ro")
plt.plot([-3 , 1], [1, 1])
###Output
_____no_output_____
###Markdown
Exercise 2: Let's check whether the points $P=(2,1,1)$, $Q=(4,1,-1)$ and $R=(3,-1,1)$ lie on the same line
###Code
from mpl_toolkits import mplot3d
fig = plt.figure()
ax = plt.axes(projection='3d')
xs=[2,4,3]
ys=[1,1,-1]
zs=[1,-1,1]
ax.plot3D(xs,ys,zs, 'ro')
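# Illustrative check (added for clarity): P, Q and R are collinear exactly when the vectors
# Q-P and R-P are parallel, i.e. their cross product is the zero vector.
P = np.array([2, 1, 1])
Q = np.array([4, 1, -1])
R = np.array([3, -1, 1])
cross = np.cross(Q - P, R - P)
# `cross` is the zero vector only if the three points lie on the same line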
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['legend.fontsize'] = 10
fig = plt.figure()
ax = fig.gca(projection='3d')
# Prepare arrays x, y, z
t = np.linspace(0, 6, 100)
s = np.linspace(0, 1.5, 100)
x = 1+t
y = 1+2*t
z = 1+3*t
x1= 2+3*s
y1= 1+8*s
z1= 13*s
ax.plot(x, y, z, label="L1")
ax.plot(x1,y1,z1, label="L2")
ax.legend()
###Output
_____no_output_____ |
Lab 4 - Exploratory Data Analysis.ipynb | ###Markdown
Assignment: SQL Notebook for Peer AssignmentEstimated time needed: **60** minutes. IntroductionUsing this Python notebook you will:1. Understand the Spacex DataSet2. Load the dataset into the corresponding table in a Db2 database3. Execute SQL queries to answer assignment questions Overview of the DataSetSpaceX has gained worldwide attention for a series of historic milestones.It is the only private company ever to return a spacecraft from low-earth orbit, which it first accomplished in December 2010.SpaceX advertises Falcon 9 rocket launches on its website with a cost of 62 million dollars, whereas other providers cost upward of 165 million dollars each; much of the savings is because SpaceX can reuse the first stage.Therefore if we can determine if the first stage will land, we can determine the cost of a launch.This information can be used if an alternate company wants to bid against SpaceX for a rocket launch.This dataset includes a record for each payload carried during a SpaceX mission into outer space. Download the datasetsThis assignment requires you to load the spacex dataset.In many cases the dataset to be analyzed is available as a .CSV (comma separated values) file, perhaps on the internet. Click on the link below to download and save the dataset (.CSV file):Spacex DataSet Store the dataset in database table**it is highly recommended to manually load the table using the database console LOAD tool in DB2**.Now open the Db2 console, open the LOAD tool, Select / Drag the .CSV file for the dataset, Next create a New Table, and then follow the on-screen instructions to load the data. Name the new table as follows:**SPACEXDATASET****Follow these steps while using the old DB2 UI, which has the Open Console screen****Note: While loading the Spacex dataset, ensure that detect datatypes is disabled. Later click on the pencil icon (edit option).**1. Change the Date Format by manually typing DD-MM-YYYY and the timestamp format as DD-MM-YYYY HH\:MM:SS. Here you should place the cursor at the Date field and manually type DD-MM-YYYY.2. Change the PAYLOAD_MASS\_\_KG\_ datatype to INTEGER. **Changes to be considered when having a DB2 instance with the new UI having the Go to UI screen*** Refer to this instruction in this link for viewing the new Go to UI screen.* Later click on **Data link (below SQL)** in the Go to UI screen and click on the **Load Data** tab.* Later browse for the downloaded spacex file.* Once done, select the schema and load the file.
###Code
!pip install sqlalchemy==1.3.9
!pip install ibm_db_sa
!pip install ipython-sql
###Output
Collecting sqlalchemy==1.3.9
Downloading SQLAlchemy-1.3.9.tar.gz (6.0 MB)
[K |████████████████████████████████| 6.0 MB 28.4 MB/s eta 0:00:01
[?25hBuilding wheels for collected packages: sqlalchemy
Building wheel for sqlalchemy (setup.py) ... [?25ldone
[?25h Created wheel for sqlalchemy: filename=SQLAlchemy-1.3.9-cp38-cp38-linux_x86_64.whl size=1209520 sha256=0d3446be09ff9f7f7645d278253310253f852059d91f70ded7ad2fe33c11467d
Stored in directory: /tmp/wsuser/.cache/pip/wheels/cb/43/46/fa638f2422554332b7865d600275b24568bf60e76104a94bb4
Successfully built sqlalchemy
Installing collected packages: sqlalchemy
Attempting uninstall: sqlalchemy
Found existing installation: SQLAlchemy 1.4.22
Uninstalling SQLAlchemy-1.4.22:
Successfully uninstalled SQLAlchemy-1.4.22
Successfully installed sqlalchemy-1.3.9
Requirement already satisfied: ibm_db_sa in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (0.3.7)
Requirement already satisfied: ibm-db>=2.0.0 in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from ibm_db_sa) (3.0.4)
Requirement already satisfied: sqlalchemy>=0.7.3 in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from ibm_db_sa) (1.3.9)
Collecting ipython-sql
Downloading ipython_sql-0.4.0-py3-none-any.whl (19 kB)
Requirement already satisfied: ipython>=1.0 in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from ipython-sql) (7.27.0)
Collecting prettytable<1
Downloading prettytable-0.7.2.zip (28 kB)
Requirement already satisfied: six in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from ipython-sql) (1.15.0)
Requirement already satisfied: sqlalchemy>=0.6.7 in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from ipython-sql) (1.3.9)
Requirement already satisfied: ipython-genutils>=0.1.0 in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from ipython-sql) (0.2.0)
Collecting sqlparse
Downloading sqlparse-0.4.2-py3-none-any.whl (42 kB)
[K |████████████████████████████████| 42 kB 3.5 MB/s eta 0:00:01
[?25hRequirement already satisfied: matplotlib-inline in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from ipython>=1.0->ipython-sql) (0.1.2)
Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from ipython>=1.0->ipython-sql) (3.0.20)
Requirement already satisfied: pickleshare in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from ipython>=1.0->ipython-sql) (0.7.5)
Requirement already satisfied: traitlets>=4.2 in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from ipython>=1.0->ipython-sql) (5.0.5)
Requirement already satisfied: pexpect>4.3 in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from ipython>=1.0->ipython-sql) (4.8.0)
Requirement already satisfied: backcall in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from ipython>=1.0->ipython-sql) (0.2.0)
Requirement already satisfied: decorator in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from ipython>=1.0->ipython-sql) (5.0.9)
Requirement already satisfied: pygments in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from ipython>=1.0->ipython-sql) (2.9.0)
Requirement already satisfied: jedi>=0.16 in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from ipython>=1.0->ipython-sql) (0.17.2)
Requirement already satisfied: setuptools>=18.5 in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from ipython>=1.0->ipython-sql) (52.0.0.post20211006)
Requirement already satisfied: parso<0.8.0,>=0.7.0 in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from jedi>=0.16->ipython>=1.0->ipython-sql) (0.7.0)
Requirement already satisfied: ptyprocess>=0.5 in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from pexpect>4.3->ipython>=1.0->ipython-sql) (0.7.0)
Requirement already satisfied: wcwidth in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython>=1.0->ipython-sql) (0.2.5)
Building wheels for collected packages: prettytable
Building wheel for prettytable (setup.py) ... [?25ldone
[?25h Created wheel for prettytable: filename=prettytable-0.7.2-py3-none-any.whl size=13700 sha256=306fb1b9a5a5d3ed4f545c2d12061e1305c1897c13a1005301f5c763b6d7bdb6
Stored in directory: /tmp/wsuser/.cache/pip/wheels/48/6d/77/9517cb933af254f51a446f1a5ec9c2be3e45f17384940bce68
Successfully built prettytable
Installing collected packages: sqlparse, prettytable, ipython-sql
Successfully installed ipython-sql-0.4.0 prettytable-0.7.2 sqlparse-0.4.2
###Markdown
Connect to the databaseLet us first load the SQL extension and establish a connection with the database
###Code
%load_ext sql
###Output
_____no_output_____
###Markdown
**DB2 magic in case of old UI service credentials.**In the next cell enter your db2 connection string. Recall you created Service Credentials for your Db2 instance before. From the **uri** field of your Db2 service credentials copy everything after db2:// (except the double quote at the end) and paste it in the cell below after ibm_db_sa://in the following format**%sql ibm_db_sa://my-username:my-password\@my-hostname:my-port/my-db-name****DB2 magic in case of new UI service credentials.** * Use the following format.* Add security=SSL at the end**%sql ibm_db_sa://my-username:my-password\@my-hostname:my-port/my-db-name?security=SSL**
###Code
# The code was removed by Watson Studio for sharing.
###Output
_____no_output_____
###Markdown
TasksNow write and execute SQL queries to solve the assignment tasks. Task 1 Display the names of the unique launch sites in the space mission
###Code
%sql select distinct(LAUNCH_SITE) from SPACEXDATASET;
###Output
* ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.
###Markdown
Task 2 Display 5 records where launch sites begin with the string 'CCA'
###Code
%sql select * from SPACEXDATASET where LAUNCH_SITE like 'CCA%' limit 5;
###Output
* ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.
###Markdown
Task 3 Display the total payload mass carried by boosters launched by NASA (CRS)
###Code
%sql select sum(PAYLOAD_MASS__KG_) from SPACEXDATASET where CUSTOMER = 'NASA (CRS)';
###Output
* ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.
###Markdown
Task 4 Display average payload mass carried by booster version F9 v1.1
###Code
%sql select avg(PAYLOAD_MASS__KG_) from SPACEXDATASET where BOOSTER_VERSION = 'F9 v1.1';
###Output
* ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.
###Markdown
Task 5 List the date when the first successful landing outcome on ground pad was achieved.*Hint: Use the min function*
###Code
%%sql
select DATE
from SPACEXDATASET
where LANDING__OUTCOME = 'Success (ground pad)'
order by DATE asc
limit 1;
###Output
* ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.
###Markdown
Task 6 List the names of the boosters which have success in drone ship and have payload mass greater than 4000 but less than 6000
###Code
%%sql
select distinct(BOOSTER_VERSION)
from SPACEXDATASET
where LANDING__OUTCOME = 'Success (drone ship)' and (PAYLOAD_MASS__KG_ between 4000 and 6000);
###Output
* ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.
###Markdown
Task 7 List the total number of successful and failure mission outcomes
###Code
%%sql
select MISSION_OUTCOME, count(*) as COUNT
from SPACEXDATASET
group by MISSION_OUTCOME;
###Output
* ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.
###Markdown
Task 8 List the names of the booster_versions which have carried the maximum payload mass. Use a subquery
###Code
%%sql
select distinct(BOOSTER_VERSION)
from SPACEXDATASET
where PAYLOAD_MASS__KG_ = (select max(PAYLOAD_MASS__KG_) from SPACEXDATASET);
###Output
* ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.
###Markdown
Task 9 List the failed landing_outcomes in drone ship, their booster versions, and launch site names in year 2015
###Code
%%sql
select BOOSTER_VERSION, LAUNCH_SITE, LANDING__OUTCOME
from SPACEXDATASET
where LANDING__OUTCOME = 'Failure (drone ship)' and YEAR(DATE) = '2015';
###Output
* ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.
###Markdown
Task 10 Rank the count of landing outcomes (such as Failure (drone ship) or Success (ground pad)) between the date 2010-06-04 and 2017-03-20, in descending order
###Code
%%sql
select LANDING__OUTCOME, count(*) as COUNT
from SPACEXDATASET
where (DATE between '2010-06-04' and '2017-03-20')
group by LANDING__OUTCOME
order by count(*) desc;
###Output
* ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.
|
PyCitySchools/PyCitySchools_JiKim.ipynb | ###Markdown
District Summary* Calculate the total number of schools* Calculate the total number of students* Calculate the total budget* Calculate the average math score * Calculate the average reading score* Calculate the overall passing rate (overall average score), i.e. (avg. math score + avg. reading score)/2* Calculate the percentage of students with a passing math score (70 or greater)* Calculate the percentage of students with a passing reading score (70 or greater)* Create a dataframe to hold the above results* Optional: give the displayed data cleaner formatting
###Code
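# Note: this excerpt assumes the usual setup cells already ran earlier in the notebook,
# i.e. pandas imported as `pd` and `school_data_complete` built by merging the school and student CSVs.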
school_data_complete.head()
school_data_complete["average_score"] = (school_data_complete["reading_score"]+school_data_complete["math_score"])/2
total_school_number = school_data_complete["school_name"].nunique()
total_student_number = school_data_complete["Student ID"].count()
school_budgets = school_data_complete["budget"].unique()
total_budget = school_budgets.sum()
average_math_score = school_data_complete["math_score"].mean()
average_reading_score = school_data_complete["reading_score"].mean()
#Bin students into pass (>=70) and fail (<70)
bins = [0, 70, 101]  # upper edge above 100 so a perfect score still falls in the passing bin (pd.cut below uses right=False)
school_data_complete["Math Pass"] = pd.cut(school_data_complete["math_score"], bins, labels=False, right=False)
school_data_complete["Reading Pass"] = pd.cut(school_data_complete["reading_score"], bins, labels=False, right=False)
school_data_complete["Overall Pass"] = pd.cut(school_data_complete["average_score"], bins, labels=False, right=False)
#Calculate % of students that passed
percent_passing_math = school_data_complete["Math Pass"].sum()/total_student_number*100
percent_passing_reading = school_data_complete["Reading Pass"].sum()/total_student_number*100
percent_passing_overall = (percent_passing_math+percent_passing_reading)/2
#Create Summary of District
district_summary_df = pd.DataFrame([{"Total Schools":total_school_number,
"Total Students":"{:,}".format(total_student_number),
"Total Budget":"${:,.2f}".format(total_budget),
"Average Math Score": average_math_score,
"Average Reading Score": average_reading_score,
"% Passing Math": percent_passing_math,
"% Passing Reading": percent_passing_reading,
"% Overall Passing Rate": percent_passing_overall}])
district_summary_df
###Output
_____no_output_____
###Markdown
School Summary * Create an overview table that summarizes key metrics about each school, including: * School Name * School Type * Total Students * Total School Budget * Per Student Budget * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two) * Create a dataframe to hold the above results Top Performing Schools (By Passing Rate) * Sort and display the top five schools in overall passing rate
###Code
#Group complete data by Schools
grouped_school_data = school_data_complete.groupby(["school_name"])
#Calculate per school numbers
school_total_student_number = grouped_school_data["Student ID"].count()
school_total_budget = grouped_school_data["budget"].sum()/school_total_student_number
school_per_student_budget = school_total_budget/school_total_student_number
school_average_math_score = grouped_school_data["math_score"].mean()
school_average_reading_score = grouped_school_data["reading_score"].mean()
school_type = grouped_school_data["type"].unique().apply(lambda x: "%s".join(x))
school_percent_passing_math = grouped_school_data["Math Pass"].sum()/school_total_student_number*100
school_percent_passing_reading = grouped_school_data["Reading Pass"].sum()/school_total_student_number*100
school_percent_passing_overall = (school_percent_passing_math+school_percent_passing_reading)/2
#Create school summary dataframe
school_summary_df = pd.DataFrame({"School Type":school_type,
"Total Students":school_total_student_number,
"Total Budget":school_total_budget,
"Per Student Budget":school_per_student_budget,
"Average Math Score":school_average_math_score,
"Average Reading Score":school_average_reading_score,
"% Passing Math":school_percent_passing_math,
"% Passing Reading":school_percent_passing_reading,
"% Overall Passing Rate":school_percent_passing_overall})
school_summary_df["Total Students"] = school_summary_df["Total Students"].map("{:,}".format)
school_summary_df["Total Budget"] = school_summary_df["Total Budget"].map("${:,.2f}".format)
school_summary_df["Per Student Budget"] = school_summary_df["Per Student Budget"].map("${:.2f}".format)
#sort school data by overall passing rate (best to worst)
sorted_df = school_summary_df.sort_values(by="% Overall Passing Rate", ascending = False)
sorted_df = sorted_df.rename_axis(None)
sorted_df[0:5]
###Output
_____no_output_____
###Markdown
Bottom Performing Schools (By Passing Rate) * Sort and display the five worst-performing schools
###Code
#sort school data by overall passing rate (worst to best)
sorted_df = school_summary_df.sort_values(by="% Overall Passing Rate", ascending = True)
sorted_df = sorted_df.rename_axis(None)
sorted_df[0:5]
###Output
_____no_output_____
###Markdown
Math Scores by Grade * Create a table that lists the average Reading Score for students of each grade level (9th, 10th, 11th, 12th) at each school. * Create a pandas series for each grade. Hint: use a conditional statement. * Group each series by school * Combine the series into a dataframe * Optional: give the displayed data cleaner formatting
###Code
#Create series for each grade
only_9th = school_data_complete.loc[school_data_complete['grade'] == '9th',:]
only_10th = school_data_complete.loc[school_data_complete['grade'] == '10th',:]
only_11th = school_data_complete.loc[school_data_complete['grade'] == '11th',:]
only_12th = school_data_complete.loc[school_data_complete['grade'] == '12th',:]
#Group data by school name
grouped_9th = only_9th.groupby('school_name')
grouped_10th = only_10th.groupby('school_name')
grouped_11th = only_11th.groupby('school_name')
grouped_12th = only_12th.groupby('school_name')
#Calculate average math scores per grade
average_math_9th = grouped_9th["math_score"].mean()
average_math_10th = grouped_10th["math_score"].mean()
average_math_11th = grouped_11th["math_score"].mean()
average_math_12th = grouped_12th["math_score"].mean()
#Create new dataframe with average math scores broken down by school and grade
math_scores_df = pd.DataFrame({"9th":average_math_9th,
"10th":average_math_10th,
"11th":average_math_11th,
"12th":average_math_12th})
math_scores_df = math_scores_df.rename_axis(None)
math_scores_df
###Output
_____no_output_____
###Markdown
Reading Score by Grade * Perform the same operations as above for reading scores
###Code
#Calculate average reading scores per grade
average_reading_9th = grouped_9th["reading_score"].mean()
average_reading_10th = grouped_10th["reading_score"].mean()
average_reading_11th = grouped_11th["reading_score"].mean()
average_reading_12th = grouped_12th["reading_score"].mean()
#Create new dataframe with average reading scores broken down by school and grade
reading_scores_df = pd.DataFrame({"9th":average_reading_9th,
"10th":average_reading_10th,
"11th":average_reading_11th,
"12th":average_reading_12th})
reading_scores_df = reading_scores_df.rename_axis(None)
reading_scores_df
###Output
_____no_output_____
###Markdown
Scores by School Spending * Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following: * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two)
###Code
# create bins based on budget per student
spending_bins = [0, 585, 615, 645, 675]
group_names = ["<$585", "$585-615", "$615-645", "$645-675"]
#put data into bins based on spending ranges
school_summary_df["Per Student Budget"]=school_summary_df["Per Student Budget"].replace('[^.0-9]','',regex=True).astype(float)
school_summary_df["Spending Ranges (Per Student)"]=pd.cut(school_summary_df["Per Student Budget"], spending_bins, labels=group_names, include_lowest=True)
#group data by Spending Ranges
df = school_summary_df.groupby("Spending Ranges (Per Student)")
binned_average_math_score = df["Average Math Score"].mean()
binned_average_reading_score = df["Average Reading Score"].mean()
binned_average_percent_passing_math = df["% Passing Math"].mean()
binned_average_percent_passing_reading = df["% Passing Reading"].mean()
binned_average_overall_passing_rate = df["% Overall Passing Rate"].mean()
display_table = pd.DataFrame({"Average Math Score":binned_average_math_score,
"Average Reading Score":binned_average_reading_score,
"% Passing Math":binned_average_percent_passing_math,
"% Passing Reading":binned_average_percent_passing_reading,
"% Overall Passing Rate":binned_average_overall_passing_rate})
display_table
###Output
_____no_output_____
###Markdown
Scores by School Size * Perform the same operations as above, based on school size.
###Code
# create bins based on budget per student
size_bins = [0, 1000, 2000, 5000]
group_names2 = ["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"]
#put data into bins based on School Size
school_summary_df["Total Students"]=school_summary_df["Total Students"].replace('[^.0-9]','',regex=True).astype(float)
school_summary_df["School Size"]=pd.cut(school_summary_df["Total Students"], size_bins, labels=group_names2, include_lowest=True)
#group data by School Size
df = school_summary_df.groupby("School Size")
binned_average_math_score = df["Average Math Score"].mean()
binned_average_reading_score = df["Average Reading Score"].mean()
binned_average_percent_passing_math = df["% Passing Math"].mean()
binned_average_percent_passing_reading = df["% Passing Reading"].mean()
binned_average_overall_passing_rate = df["% Overall Passing Rate"].mean()
display_table = pd.DataFrame({"Average Math Score":binned_average_math_score,
"Average Reading Score":binned_average_reading_score,
"% Passing Math":binned_average_percent_passing_math,
"% Passing Reading":binned_average_percent_passing_reading,
"% Overall Passing Rate":binned_average_overall_passing_rate})
display_table
###Output
_____no_output_____
###Markdown
Scores by School Type * Perform the same operations as above, based on school type.
###Code
#group data by School Type
df = school_summary_df.groupby("School Type")
binned_average_math_score = df["Average Math Score"].mean()
binned_average_reading_score = df["Average Reading Score"].mean()
binned_average_percent_passing_math = df["% Passing Math"].mean()
binned_average_percent_passing_reading = df["% Passing Reading"].mean()
binned_average_overall_passing_rate = df["% Overall Passing Rate"].mean()
display_table = pd.DataFrame({"Average Math Score":binned_average_math_score,
"Average Reading Score":binned_average_reading_score,
"% Passing Math":binned_average_percent_passing_math,
"% Passing Reading":binned_average_percent_passing_reading,
"% Overall Passing Rate":binned_average_overall_passing_rate})
display_table
###Output
_____no_output_____ |
Union Fold/0912/547. Friend Circles.ipynb | ###Markdown
Description: There are N students in a class. Some of them are friends, while some are not. Their friendship is transitive in nature. For example, if A is a direct friend of B, and B is a direct friend of C, then A is an indirect friend of C. We define a friend circle as a group of students who are direct or indirect friends. You are given an N * N matrix M representing the friend relationships between the students in the class: if M[i][j] = 1, then the i-th and j-th students are direct friends with each other, otherwise they are not. You must output the total number of friend circles among all the students. Example 1: Input: [[1,1,0], [1,1,0], [0,0,1]] Output: 2 Explanation: The 0th and 1st students are direct friends, so they are in a friend circle. The 2nd student himself is in a friend circle. So return 2. Example 2: Input: [[1,1,0], [1,1,1], [0,1,1]] Output: 1 Explanation: The 0th and 1st students are direct friends, the 1st and 2nd students are direct friends, so the 0th and 2nd students are indirect friends. All of them are in the same friend circle, so return 1. Constraints: 1. 1 <= N <= 200 2. M[i][i] == 1 3. M[i][j] == M[j][i]
###Code
class Solution:
def findCircleNum(self, M) -> int:
def findFather(x):
if father[x] != x:
father[x] = findFather(father[x])
return father[x]
def union(a, b):
x = father[a]
y = father[b]
father[x] = y
father = dict()
N = len(M)
for i in range(N):
father[i] = i
for i in range(N):
for j in range(N):
if i != j and M[i][j] == 1:
                    # if i and j are connected but do not yet share a common ancestor,
                    # merge their ancestors so they form a single friend circle
if findFather(i) != findFather(j):
union(i, j)
ancestors = set()
for i in range(N):
ancestors.add(findFather(i))
return len(ancestors)
M_ = [[1,1,0],
[1,1,1],
[0,1,1]]
solution = Solution()
solution.findCircleNum(M_)
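
# (Added note) For Example 1 from the problem statement the expected answer is 2:
# students 0 and 1 end up sharing a root while student 2 remains its own root.
# Uncomment to check:
# print(Solution().findCircleNum([[1, 1, 0], [1, 1, 0], [0, 0, 1]]))  # -> 2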
###Output
{0: 0, 1: 1, 2: 2}
{0: 1, 1: 1, 2: 2}
|
homage_to_alignment algorithms.ipynb | ###Markdown
Sequence Alignment Algorithms- Sequence alignment algorithms are ways to **arrange two or more biological sequences** - They **identify regions of similarity** that may indicate functional, structural, or evolutionary relationships between the sequences. Biological information in arrays Data science in biology originated from the biological data stored in sequences 3 main biological data types *Image source: Shutter Stock* (adapted) Central dogma of molecular biology *Image source: Shutter Stock* (adapted) ... explained in data terms *Image source: Shutter Stock* (adapted) More about Proteins- Proteins are a class of chemicals in our body that **make our body function**. - In a healthy individual, proteins enable them to **see, listen, walk and talk and think, process information**, and control the immune response. - In **diseases**, some of these proteins are **dysregulated**. *Image source: [loxooncology](https://www.loxooncology.com/genomically-defined-cancers)* Protein composition- Proteins are made up of different combinations of 20 amino acids. - Based on how they are arranged, their characteristics and functions are determined. Image credit: *Wikimedia commons by LadyofHats* **Each amino acid (aa) is represented as a letter:**
###Code
amino_acid_list = ('A', 'C', 'D', 'E', 'F',
'G', 'H', 'I', 'K', 'L',
'M', 'N', 'P', 'Q', 'R',
'S', 'T', 'V', 'W', 'Y')
###Output
_____no_output_____
###Markdown
1. **Structural components**: Several amino acid in proteins are arranged closely, forming compact structures.2. **Disorder region**: Several amino acid don't form structures and exist as disorder region.Image credit: *Wikimedia commons by LadyofHats* **Role of protein composition**- Protein composition **determine their physics and chemical nature**.- It makes them function as **machines in different part of body** where it’s needed. Evolutionary biology- Study of evolutionary processes that **produced the diversity of life on Earth**, starting from a single common ancestor. - **natural selection**, common descent, and speciation (origin of species). Evolution of proteins- Process of change in the sequence composition- Studied by **comparing the sequences and structures** of proteins (*homologs*) from other organisms Example Use of such comparative studies- observe pattern of conservation- identify common region that are present in both sequences- transfer functions, understand origin of these sequences etc. Complexity increases when comparing proteins with longer aa chains**Full length protein sequence of human P53**```>P53_HUMAN Cellular tumor antigen p53 OS=Homo sapiens length=393MEEPQSDPSVEPPLSQETFSDLWKLLPENNVLSPLPSQAMDDLMLSPDDIEQWFTEDPGPDEAPRMPEAAPPVAPAPAAPTPAAPAPAPSWPLSSSVPSQKTYQGSYGFRLGFLHSGTAKSVTCTYSPALNKMFCQLAKTCPVQLWVDSTPPPGTRVRAMAIYKQSQHMTEVVRRCPHHERCSDSDGLAPPQHLIRVEGNLRVEYLDDRNTFRHSVVVPYEPPEVGSDCTTIHYNYMCNSSCMGGMNRRPILTIITLEDSSGNLLGRNSFEVRVCACPGRDRRTEEENLRKKGEPHHELPPGSTKRALPNNTSSSPQPKKKPLDGEYFTLQIRGRERFEMFRELNEALELKDAQAGKEPGGSRAHSSHLKSKKGQSTSRHKKLMFKTEGPDSD```**Full length protein sequence of mouse P53**```>P53_MOUSE Cellular tumor antigen p53 OS=Mus musculus length=390MTAMEESQSDISLELPLSQETFSGLWKLLPPEDILPSPHCMDDLLLPQDVEEFFEGPSEALRVSGAPAAQDPVTETPGPVAPAPATPWPLSSFVPSQKTYQGNYGFHLGFLQSGTAKSVMCTYSPPLNKLFCQLAKTCPVQLWVSATPPAGSRVRAMAIYKKSQHMTEVVRRCPHHERCSDGDGLAPPQHLIRVEGNLYPEYLEDRQTFRHSVVVPYEPPEAGSEYTTIHYKYMCNSSCMGGMNRRPILTIITLEDSSGNLLGRDSFEVRVCACPGRDRRTEEENFRKKEVLCPELPPGSAKRALPTCTSASPPQKKKPLDGEYFTLKIRGRKRFEMFRELNEALELKDAHATEESGDSRAHSSYLKTKKGQSTSRHKKTMVKKVGPDSD```
###Code
# Example protein: P53: acts as a tumor suppressor in many cancers
# Source: UniProt: https://www.uniprot.org/uniprot/P04637
human_p53 = 'TFSDLWKLLPENNV' # 14-aa fragment of the full-length protein (393 aa)
mouse_p53 = 'SQETFSGLWKLLPP' # 14-aa fragment of the full-length protein (390 aa)
###Output
_____no_output_____
###Markdown
Protein Similarity and Alignment Algorithms Protein sequences are aligned and their similarity is scored. Pairwise Alignments **Global alignment: Needleman-Wunsch Algorithm**- Assigns a score to every possible alignment- Finds the alignment with the highest score. It was the first application of **dynamic programming** to biological sequence comparison - simplifies a decision by breaking it down into smaller problems (all alignments)- finds the optimal solution (the best alignment)**Local alignment (Smith-Waterman algorithm)**
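For reference, the dynamic-programming recurrence that the code below implements can be written as $F(i,j) = \max\big(F(i-1,j-1) + s(x_i, y_j),\; F(i-1,j) + g,\; F(i,j-1) + g\big)$, where $s(x_i, y_j)$ is the substitution score for residues $x_i$ and $y_j$ and $g$ is the (linear) gap penalty; this is a reconstructed summary added here for clarity, not text from the original notebook.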
###Code
human_p53 = 'TFSDLWKLLPENNV'
mouse_p53 = 'SQETFSGLWKLLPP'
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
def create_substution_matrix(residue_list, match_score=1,
mismatch_score=-1):
'''
    This function creates a substitution matrix for residues:
Arguments:
- residue_list: A list of amino acid or dna/rna residues.
- match_score: An integer number indicating match score. (default score = 1)
- mismatch_score: An integer number indicating mismatch score. (default score = -1)
'''
scoring_matrix = pd.DataFrame(index=residue_list, columns=residue_list)
scoring_matrix = scoring_matrix.fillna(0)
for residue_col in residue_list:
for residue_row in residue_list:
if residue_col == residue_row:
scoring_matrix.loc[residue_col, residue_row] = match_score
else:
scoring_matrix.loc[residue_col, residue_row] = mismatch_score
return scoring_matrix
scoring_matrix = create_substution_matrix(amino_acid_list)
def create_heatmap_from_matrix(matrix_name, filename='default', color='Blues'):
sns.set(rc={"figure.figsize":(12,8)})
df = pd.DataFrame(matrix_name)
g = sns.heatmap(df, annot=True, fmt='g', cmap=color)
    if not filename == 'default':
        # `os`/`file_path` were never defined in this notebook, so save directly to the given filename
        g.get_figure().savefig(filename)
create_heatmap_from_matrix(scoring_matrix)
def create_dynamic_prog_matrix(seq1, seq2, scoring_matrix,
gap_penalty=-1):
'''
    This function creates a scoring matrix for two
sequences using dynamic programming:
Arguments:
- seq1: First amino acid sequence
- seq2: Second amino acid sequence
- scoring_matrix: Scoring matrix for amino acid
- gap_penalty: An integer number indicating gap
penalty/score. (default value = -1)
'''
index_list = [0]+list(seq1)
column_list = [0]+list(seq2)
dp_matrix = pd.DataFrame(index=index_list, columns=column_list)
dp_matrix = dp_matrix.fillna(0)
for i, residue_seq1 in enumerate(list(seq1)):
dp_matrix.iloc[i+1, 0] = (i+1)*-1
for j, residue_seq2 in enumerate(list(seq2)):
dp_matrix.iloc[0, j+1] = (j+1)*-1
dp_matrix.loc[residue_seq1,
residue_seq2] = scoring_matrix.loc[
residue_seq1, residue_seq2]
scored_dp_matrix = _calculate_alignment_scores(dp_matrix, gap_penalty)
return scored_dp_matrix
def _calculate_alignment_scores(dp_matrix, gap_penalty):
for i, rows in enumerate(dp_matrix.index.values):
if not rows == 0:
for j, cols in enumerate(dp_matrix.columns.values):
if not cols == 0:
current_score = dp_matrix.iloc[i, j]
left_score = dp_matrix.iloc[i, j-1] + gap_penalty
up_score = dp_matrix.iloc[i-1, j] + gap_penalty
diag_score = dp_matrix.iloc[i-1, j-1] + current_score
high_score = max([left_score, up_score, diag_score])
dp_matrix.iloc[i, j] = high_score
return(dp_matrix)
scored_dp_matrix = create_dynamic_prog_matrix(human_p53, mouse_p53,
scoring_matrix)
print(scored_dp_matrix)
def trace_best_alignment(scored_dp_matrix, match_score=1,
mismatch_score=-1, gap_penalty=-1):
'''
This function traces back the best alignment.
Diagonal arrow is a match or mismatch. Horizontal arrows introduce
gap ("-") in the row and vertical arrows introduce gaps in the column.
Arguments:
- scored_dp_matrix: scored matrix for two sequences.
- match_score: An integer number indicating match score.
(default score = 1)
- mismatch_score: An integer number indicating mismatch score.
(default score = -1)
- gap_penalty: An integer number indicating gap penalty/score.
(default score = -1)
'''
i = len(scored_dp_matrix.index.values)-1
j = len(scored_dp_matrix.columns.values)-1
row_residue_list = []
col_residue_list = []
match_positions = []
print("Trackback type:\n")
while i > 0 and j > 0:
current_score = scored_dp_matrix.iloc[i, j]
left_score = scored_dp_matrix.iloc[i, j-1]
up_score = scored_dp_matrix.iloc[i-1, j]
diag_score = scored_dp_matrix.iloc[i-1, j-1]
row_val = scored_dp_matrix.index.values[i]
col_val = scored_dp_matrix.columns.values[j]
trackback_type = ""
if i > 1 and j > 1 and (current_score == diag_score + match_score and row_val == col_val):
trackback_type = "diagonal_match"
row_val = scored_dp_matrix.index.values[i]
col_val = scored_dp_matrix.columns.values[j]
i -= 1
j -= 1
match_positions.append(row_val)
elif i > 1 and j > 1 and (current_score == diag_score + mismatch_score and row_val != col_val):
trackback_type = "diagonal_mismatch"
row_val = scored_dp_matrix.index.values[i]
col_val = scored_dp_matrix.columns.values[j]
i -= 1
j -= 1
match_positions.append(row_val)
elif i > 0 and (current_score == up_score + gap_penalty):
trackback_type = "up"
row_val = scored_dp_matrix.index.values[i]
col_val = '-'
i -= 1
# match_Score -= 1
elif j > 0 and (current_score == left_score + gap_penalty):
trackback_type = "left"
col_val = scored_dp_matrix.columns.values[j]
row_val = '-'
j -= 1
# match_Score -= 1
else:
row_val = scored_dp_matrix.index.values[i]
col_val = scored_dp_matrix.columns.values[j]
i -= 1
j -= 1
match_positions.append(row_val)
print(trackback_type)
row_residue_list.append(row_val)
col_residue_list.append(col_val)
print("Total aligned positions: {}".format(len(match_positions)))
col_seq = ''.join(map(str, col_residue_list[::-1]))
row_seq = ''.join(map(str, row_residue_list[::-1]))
return col_seq, row_seq
aligned_seq1, aligned_seq2 = trace_best_alignment(scored_dp_matrix)
print('Optimal global alignment of the given sequences is:\n{}\n{}'.format(
aligned_seq1, aligned_seq2))
create_heatmap_from_matrix(scored_dp_matrix)
###Output
Optimal global alignment of the given sequences is:
TFSGLWKLLPP---
TFSDLWKLLPENNV
###Markdown
Substitution Matrix for Proteins A substitution matrix used for protein sequence alignment scores alignments between evolutionarily divergent protein sequences. One of the most popular matrices is BLOSUM (BLOcks SUbstitution Matrix), [provided by NCBI](https://www.ncbi.nlm.nih.gov/Class/BLAST/BLOSUM62.txt). This matrix gives a similarity score so that similar (even when not identical) amino acids can be aligned appropriately, based on their bio-chemical properties.
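As a reminder (added note, not from the original notebook), each BLOSUM entry is a scaled log-odds score, $s(a,b) = \frac{1}{\lambda}\log\frac{p_{ab}}{q_a\,q_b}$, where $p_{ab}$ is the observed frequency with which residues $a$ and $b$ are aligned in trusted sequence blocks and $q_a$, $q_b$ are their background frequencies; positive scores mark substitutions seen more often than expected by chance.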
###Code
with open('blosum62.txt', 'r') as in_fh:
print(in_fh.read())
# Format BLOSUM substitution file to a pandas matrix
def read_blosum_file_to_matrix(blosum_file):
'''
Creates a matrix from NCBI BLOSUM file.
Arguments:
- blosum_file: provide local file BLOSUM substitution matrix.
(Download the current version:
https://www.ncbi.nlm.nih.gov/Class/BLAST/BLOSUM62.txt)
'''
header_list= ['A', 'R', 'N', 'D', 'C', 'Q',
'E', 'G', 'H', 'I', 'L', 'K',
'M', 'F', 'P', 'S', 'T', 'W',
'Y', 'V', 'B', 'Z', 'X', '*'] # * is gap
blosum = pd.read_csv(blosum_file, skiprows=6, delim_whitespace=True)
blosum = blosum.replace('NaN', 0)
blosum.columns = header_list
blosum.index = header_list
return blosum
blosum = read_blosum_file_to_matrix('blosum62.txt')
print(blosum)
## Calling the function create_dynamic_prog_matrix
# with blosum substituion matrix and gap penalty -4
scored_dp_matrix_blosum = create_dynamic_prog_matrix(
human_p53, mouse_p53,
blosum, gap_penalty=-4)
print(scored_dp_matrix_blosum)
## Edited the previous function trace_best_alignment
## by removing match and mismatch score
def trace_best_alignment_with_blosum(scored_dp_matrix, gap_penalty=-4):
'''
This function traces back the best alignment.
Diagonal arrow is a match or mismatch. Horizontal arrows introduce
gap ("-") in the row and vertical arrows introduce gaps in the column.
Arguments:
- scored_dp_matrix: scored matrix for two sequences.
- gap_penalty: An integer number indicating gap penalty/score.
(default gap penalty in NCBI BLOSUM62 = -4)
'''
i = len(scored_dp_matrix.index.values)-1
j = len(scored_dp_matrix.columns.values)-1
row_residue_list = []
col_residue_list = []
match_positions = []
print("\nTrackback type:")
while i > 0 and j > 0:
current_score = scored_dp_matrix.iloc[i, j]
left_score = scored_dp_matrix.iloc[i, j-1]
up_score = scored_dp_matrix.iloc[i-1, j]
diag_score = scored_dp_matrix.iloc[i-1, j-1]
row_val = scored_dp_matrix.index.values[i]
col_val = scored_dp_matrix.columns.values[j]
trackback_type = ""
if i > 1 and j > 1 and current_score == diag_score:
trackback_type = "diagonal_match"
row_val = scored_dp_matrix.index.values[i]
col_val = scored_dp_matrix.columns.values[j]
i -= 1
j -= 1
match_positions.append(row_val)
elif i > 0 and (current_score == up_score + gap_penalty):
trackback_type = "up"
row_val = scored_dp_matrix.index.values[i]
col_val = '-'
i -= 1
elif j > 0 and (current_score == left_score + gap_penalty):
trackback_type = "left"
col_val = scored_dp_matrix.columns.values[j]
row_val = '-'
j -= 1
else:
trackback_type = "diagonal_match"
row_val = scored_dp_matrix.index.values[i]
col_val = scored_dp_matrix.columns.values[j]
i -= 1
j -= 1
match_positions.append(row_val)
print(trackback_type)
row_residue_list.append(row_val)
col_residue_list.append(col_val)
print("\nTotal aligned positions: {}".format(len(match_positions)))
col_seq = ''.join(map(str, col_residue_list[::-1]))
row_seq = ''.join(map(str, row_residue_list[::-1]))
return col_seq, row_seq
print('Input sequences:\n{}\n{}'.format(mouse_p53, human_p53))
aligned_seq1, aligned_seq2 = trace_best_alignment_with_blosum(scored_dp_matrix_blosum)
print('Optimal global alignment of the given sequences using BLOSUM62 is:\n{}\n{}'.format(
aligned_seq1, aligned_seq2))
create_heatmap_from_matrix(scored_dp_matrix_blosum)
###Output
Input sequences:
SQETFSGLWKLLPP
TFSDLWKLLPENNV
Trackback type:
up
up
up
diagonal_match
diagonal_match
diagonal_match
diagonal_match
diagonal_match
diagonal_match
diagonal_match
diagonal_match
diagonal_match
diagonal_match
diagonal_match
Total aligned positions: 11
Optimal global alignment of the given sequences using BLOSUM62 is:
TFSGLWKLLPP---
TFSDLWKLLPENNV
###Markdown
Local Alignment: Smith–Waterman Algorithm A local alignment, instead of a global alignment, can be carried out to **find subsequences** (rather than the full length) that align best. The **Smith–Waterman algorithm** compares segments of all possible lengths and optimizes the similarity score rather than comparing the entire length. - It is a dynamic programming algorithm, and a **variation of the Needleman-Wunsch** (global) alignment algorithm.- It **sets negative scoring matrix cells to zero**, which makes local alignments visible.- **Traceback starts at the highest-scoring matrix** cell and proceeds until a cell with score zero is encountered. - It has a much higher complexity in time and space, and often cannot be applied to large-scale problems. The Biopython Community Thanks to the very active bioinformatics community of volunteers who have developed, improved and maintained the freely available Python package [Biopython](https://biopython.org/) ([License](https://github.com/biopython/biopython/blob/master/LICENSE.rst)). The Biopython tools for biological computation (including the different algorithms for handling biological sequences) allow results obtained with different software packages to be reproduced.
###Code
from Bio import pairwise2
alignments = pairwise2.align.localxx(human_p53, mouse_p53)
alignments
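
# (Added sketch, not part of the original notebook) pairwise2 can also use a
# substitution matrix and affine gap penalties for local alignment; the gap
# scores -10 / -0.5 below are illustrative assumptions, not values from the
# original analysis, and require a Biopython version providing
# Bio.Align.substitution_matrices.
from Bio.pairwise2 import format_alignment
from Bio.Align import substitution_matrices

blosum62_matrix = substitution_matrices.load("BLOSUM62")
local_alignments = pairwise2.align.localds(human_p53, mouse_p53,
                                           blosum62_matrix, -10, -0.5)
print(format_alignment(*local_alignments[0]))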
###Output
_____no_output_____ |
Quarterly_to_Monthly.ipynb | ###Markdown
###Code
!pip install datetime-quarter
!pip install numpy
import pandas as pd
import numpy as np
import datetime as dt
from datetime import datetime
import time
import pytz, tzlocal
import string
import plotly.graph_objects as go
from datequarter import DateQuarter
from scipy.interpolate import CubicSpline
# read and parse part of "Summary" worksheet
def get_sum_region(start, size):
emp = pd.DataFrame()
for i in range(size):
df1 = df_sum.iloc[5:26, start+i]
emp = pd.concat([emp, df1], axis=0)
emp.reset_index(drop = True, inplace = True)
emp.columns = ['emp']
emp_p = pd.DataFrame()
for i in range(size):
df1 = df_sum.iloc[28:49, start+i]
emp_p = pd.concat([emp_p, df1], axis=0)
emp_p.reset_index(drop = True, inplace = True)
emp_p.columns = ['emp_percent']
gdp = pd.DataFrame()
for i in range(size):
df1 = df_sum.iloc[53:74, start+i]
gdp = pd.concat([gdp, df1], axis=0)
gdp.reset_index(drop = True, inplace = True)
gdp.columns = ['gdp']
gdp_p = pd.DataFrame()
for i in range(size):
df1 = df_sum.iloc[76:97, start+i]
gdp_p = pd.concat([gdp_p, df1], axis=0)
gdp_p.reset_index(drop = True, inplace = True)
gdp_p.columns = ['gdp_percent']
pop = pd.DataFrame()
for i in range(size):
df1 = df_sum.iloc[101:122, start+i]
pop = pd.concat([pop, df1], axis=0)
pop.reset_index(drop = True, inplace = True)
pop.columns = ['pop']
pop_p = pd.DataFrame()
for i in range(size):
df1 = df_sum.iloc[124:145, start+i]
pop_p = pd.concat([pop_p, df1], axis=0)
pop_p.reset_index(drop = True, inplace = True)
pop_p.columns = ['pop_percent']
return(pd.concat([emp, emp_p, gdp, gdp_p, pop, pop_p], axis=1))
# Employment and GDP by sector for the region (df = US, NYPA, NYCY, etc)
def get_sector(start, size, df):
emp = pd.DataFrame()
for i in range(size):
df1 = df.iloc[5:26, start+i]
emp = pd.concat([emp, df1], axis=0)
emp.reset_index(drop = True, inplace = True)
emp.columns = ['emp']
gdp = pd.DataFrame()
for i in range(size):
df1 = df.iloc[29:50, start+i]
gdp = pd.concat([gdp, df1], axis=0)
gdp.reset_index(drop = True, inplace = True)
gdp.columns = ['gdp']
return(pd.concat([emp, gdp], axis=1))
# Imcome info by region (df = US, NYPA, NYCY, etc)
def get_income(begin, end, df):
# average income from employment (thousands US$)
avg_income = pd.DataFrame(df.iloc[52, begin:end])
avg_income.columns = ['avg_income']
avg_income.reset_index(drop=True, inplace=True)
# Personal disposable income (million US$)
disp_income = pd.DataFrame(df.iloc[53, begin:end])
disp_income.columns = ['disp_income']
disp_income.reset_index(drop=True, inplace=True)
# Real personal disposable income (million US$, constant 2012 prices)
rdisp_income = pd.DataFrame(df.iloc[54, begin:end])
rdisp_income.columns = ['rdisp_income']
rdisp_income.reset_index(drop=True, inplace=True)
# Retail sales (millions US$)
retail = pd.DataFrame(df.iloc[55, begin:end])
retail.columns = ['retail']
retail.reset_index(drop=True, inplace=True)
# Real retail sales (millions US$, constant 2012 prices)
r_retail = pd.DataFrame(df.iloc[56, begin:end])
r_retail.columns = ['r_retail']
r_retail.reset_index(drop=True, inplace=True)
return(pd.concat([avg_income, disp_income, rdisp_income, retail, r_retail], axis=1))
# Read in the Oxford County dataset
# create functions to automate the data processing
file_path = '/content/drive/MyDrive/Oxford_2021Q2/County dataset.xlsm'
xls = pd.ExcelFile(file_path)
# to read all sheets to a map
sheet_to_df_map = {}
for sheet_name in xls.sheet_names:
sheet_to_df_map[sheet_name] = xls.parse(sheet_name)
# Get the "Summary" worksheet
df_sum = sheet_to_df_map['Summary']
# create time and region columns
# Year column
year = pd.DataFrame(np.arange(1980,2036), columns = ['year'])
year.reset_index(drop=True, inplace=True)
# Quarter column, repeat quarter 1-4 for each year
q = pd.DataFrame(np.arange(1, 5), columns = ['quarter'])
q.reset_index(drop=True, inplace=True)
quarter = pd.concat([q]*56, ignore_index=True)
# Year column for quarterly report, i.e., repeat every year 4 times
year_q = pd.DataFrame(np.repeat(year.values,4,axis=0), columns = ['year'])
# Get the 20 regions and US as region_id=0 (total 21 regions)
region = pd.DataFrame(df_sum.iloc[5:26, 0])
region.reset_index(drop = True, inplace = True)
region.columns = ['regions']
region['region_id'] = np.arange(len(region))
# repeat regions for each year: 21*56
regions = pd.concat([region]*56, ignore_index=True)
# repeat year for each region
year_region = pd.DataFrame(np.repeat(year.values,21,axis=0), columns = ['year'])
# repeat 21 region list for each quarter
regions_q = pd.concat([region]*224, ignore_index=True)
# repeat each year 84 times for each region-quarter combination
year_region_q = pd.DataFrame(np.repeat(year.values,84,axis=0), columns = ['year'])
# repeat quarter 1-4 and year combination for each region
quarter_region = pd.DataFrame(np.repeat(quarter.values,21,axis=0), columns = ['quarter'])
# read "Summary", start from column 3, from year 1980 to 2035
df_data = get_sum_region(3, 56)
sum_region = pd.concat([year_region, regions, df_data], axis=1)
# read "Summary", Quarterly data, start from column 60, from year 1980Q1 to 2035Q4
df_data = get_sum_region(60, 224)
sum_region_q = pd.concat([year_region_q, quarter_region, regions_q, df_data], axis=1)
# Get the "US" worksheet
df_us = sheet_to_df_map['US']
# read in the list of sectiors, 21 including "Total" as sector 0
sector = pd.DataFrame(df_us.iloc[5:26, 0])
sector.reset_index(drop = True, inplace = True)
sector.columns = ['sectors']
sector['sector_id'] = np.arange(len(sector))
# sector set repeat for each year
sectors = pd.concat([sector]*56, ignore_index=True)
year_sector = pd.DataFrame(np.repeat(year.values,21,axis=0), columns = ['year'])
# sector set repeat for each year-quarter combination
sectors_q = pd.concat([sector]*224, ignore_index=True)
# repeat each year 84 times for each quarter-sector combination
year_sector_q = pd.DataFrame(np.repeat(year.values,84,axis=0), columns = ['year'])
# repeat each quarter 21 times for each sector
quarter_sector = pd.DataFrame(np.repeat(quarter.values,21,axis=0), columns = ['quarter'])
# "US", "NYPA", "NYCT", "NY01", ... all have the same data structure
region_list = xls.sheet_names
sector_region = pd.DataFrame()
sector_region_q = pd.DataFrame()
income_region = pd.DataFrame()
income_region_q = pd.DataFrame()
for r in region_list[2:23]:
df_r = sheet_to_df_map[r]
df_data = get_sector(3, 56, df_r)
df1 = pd.concat([year_sector, sectors, df_data], axis=1)
df1['region']= r
sector_region = pd.concat([sector_region, df1], axis=0)
df_data = get_sector(60, 224, df_r)
df2 = pd.concat([year_sector_q, quarter_sector, sectors_q, df_data], axis=1)
df2['region']= r
sector_region_q = pd.concat([sector_region_q, df2], axis=0)
df_data = get_income(3, 59, df_r)
df3 = pd.concat([year, df_data], axis=1)
df3['region'] = r
income_region = pd.concat([income_region, df3], axis=0)
df_data = get_income(60, 284, df_r)
df4 = pd.concat([year_q, quarter, df_data], axis=1)
df4['region'] = r
income_region_q = pd.concat([income_region_q, df4], axis=0)
## Explore variables
## region and region ID
#region.head()
## list of region worksheet names
#region_list
## sector and sector ID
#sector.head()
sum_region_q.head() # Summary sheet
## Employment/GDP by sector by region, in US, NYPA NYCY and all the region sheets
#sector_region_q.head()
## Incomes and Spending by region, in US, NYPA NYCY and all the region sheets
#income_region_q.head()
# functions used in interpolation method
def get_ym(i, start_year):
year = int(i/12)
month = int(i%12+1)
if (month == 0 and year>0):
year = year -1
year = start_year + year
ym = str(year)+'-'+str(month).zfill(2)
return(ym)
def get_ym_next(i, start_year):
i = i+1
year = int(i/12)
month = int(i%12+1)
if (month == 0 and year>0):
year = year -1
year = start_year + year
ym = str(year)+'-'+str(month).zfill(2)
return(ym)
def interpolate_sum(df, var, start_year):
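    # (Added annotation) Approach used below: the quarterly flow values are
    # cumulated, a natural cubic spline is fitted to the cumulative series at
    # quarter-end month indices, the spline is sampled at every month, and
    # first differences of the samples recover monthly flows whose quarterly
    # sums match the original quarterly totals.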
series2 = df.loc[:, ['q_date', var]]
series2.columns = ['datetime', var]
series2[var] = pd.to_numeric(series2[var])
series2.loc[-1] = [str(start_year-1)+"-12-31", 0] # adding a row
series2.index = series2.index + 1 # shifting index
series2 = series2.sort_index() # sorting by index
series2['datetime'] = pd.to_datetime(series2['datetime'])
# variable value for each quarter is incremental values for each quarter
# Generate cumulative series
series2[var] = series2[var].cumsum()
x = []
for i in range(len(series2.datetime)):
x.append((series2.datetime[i].year-start_year)*12 + series2.datetime[i].month)
y = np.array(series2[var], dtype = 'float')
f = CubicSpline(x, y, bc_type='natural')
num_years = 2035 - start_year + 1
x_new = np.linspace(0, num_years*12, num_years*12+1)
y_new = f(x_new)
x_ym = []
for i in range(len(x_new)):
x_ym.append(get_ym(x_new[i], start_year))
output_variable = pd.concat([pd.DataFrame(x_new), pd.DataFrame(x_ym), pd.DataFrame(y_new)], axis = 1)
output_variable.columns = ['timeid', 'datetime',var]
output_variable['datetime'] = pd.to_datetime(output_variable['datetime'])
output_variable['datetime'] = output_variable['datetime'] - dt.timedelta(days = 1)
output_variable[var] = output_variable[var].diff()
output_variable = output_variable.loc[output_variable["timeid"] > 0, ['datetime', var]]
return(output_variable)
def interpolate_average(df, var, start_year, factor):
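    # (Added annotation) Approach used below: the quarterly averages are
    # converted to cumulative totals, a natural cubic spline is fitted to that
    # cumulative series, and the spline's first derivative (scaled back by
    # `factor`) is evaluated at each month to give an instantaneous monthly rate.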
series3 = df.loc[:, ['month_id', var]]
series3.columns = ['datetime', var]
series3[var] = pd.to_numeric(series3[var]*3/factor)
series3.loc[-1] = [0, 0] # adding a row
series3.index = series3.index + 1 # shifting index
series3= series3.sort_index() # sorting by index
series3[var] = series3[var].cumsum()
x = []
x = np.array(series3['datetime'], dtype = 'int')
y = np.array(series3[var], dtype = 'float')
f = CubicSpline(x, y, bc_type='natural')
num_years = 2035 - start_year + 1
x_new = np.linspace(0, num_years*12, num_years*12+1)
y_dev = f(x_new, 1)*factor
x_datetime = []
for i in range(len(x_new)):
x_datetime.append(get_ym_next(x_new[i], start_year))
monthly_avg = pd.concat([pd.DataFrame(x_new), pd.DataFrame(x_datetime), pd.DataFrame(y_dev)], axis = 1)
monthly_avg.columns = ['time_id','datetime', var]
monthly_avg['datetime'] = pd.to_datetime(monthly_avg['datetime'])
monthly_avg['datetime'] = monthly_avg['datetime'] - dt.timedelta(days = 1)
return(monthly_avg)
def monthly_average(df, var, start_year, factor):
series3 = df.loc[:, ['month_id', var]]
series3.columns = ['datetime', var]
series3[var] = pd.to_numeric(series3[var]*3/factor)
series3.loc[-1] = [0, 0] # adding a row
series3.index = series3.index + 1 # shifting index
series3= series3.sort_index() # sorting by index
series3[var] = series3[var].cumsum()
x = []
x = np.array(series3['datetime'], dtype = 'int')
y = np.array(series3[var], dtype = 'float')
f = CubicSpline(x, y, bc_type='natural')
num_years = 2035 - start_year + 1
x_new = np.linspace(0, num_years*12, num_years*12+1)
x_datetime = []
y_month = []
for m in range(len(x_new)):
y_month.append(np.mean(f(np.linspace(m, m+1, 1001), 1))*factor)
x_datetime.append(get_ym_next(x_new[m], start_year))
monthly_avg = pd.concat([pd.DataFrame(x_new), pd.DataFrame(x_datetime), pd.DataFrame(y_month)], axis = 1)
monthly_avg.columns = ['time_id','datetime', var]
monthly_avg['datetime'] = pd.to_datetime(monthly_avg['datetime'])
monthly_avg['datetime'] = monthly_avg['datetime'] - dt.timedelta(days = 1)
return(monthly_avg)
# GDP by region
# the year the data became available
start_year = 2001
# region ID
region_ID = 2
# select variable for interpolation
var = "gdp"
df2 = sum_region_q.loc[(sum_region_q["region_id"] == region_ID) & (sum_region_q["year"]>=start_year), ["year", "quarter", "gdp", "region_id"]]
df2.reset_index(drop = True, inplace = True)
end_date = []
for i in range(len(df2)):
end_date.append(DateQuarter(df2.iloc[i].year, df2.iloc[i].quarter).end_date())
df2['q_date'] = pd.DataFrame(end_date)
output_variable = interpolate_sum(df2, var, start_year )
output_variable.head()
# GDP by region by sector: US, NYPA, NYCY and other regions worksheets
# the year the data became available
start_year = 1980
# region worksheet name
region_worksheet = 'US'
# select sector
sector_ID = 1
# select the varialbe to be interpolated
var = "gdp"
# select the dataset
input_variable = sector_region_q
df2 = input_variable.loc[(input_variable["region"] == region_worksheet) &
(input_variable["year"]>=start_year) &
(input_variable["sector_id"]==sector_ID),
["year", "quarter", var]]
df2.reset_index(drop = True, inplace = True)
end_date = []
for i in range(len(df2)):
end_date.append(DateQuarter(df2.iloc[i].year, df2.iloc[i].quarter).end_date())
df2['q_date'] = pd.DataFrame(end_date)
output_variable = interpolate_sum(df2, var, start_year)
output_variable.head()
# Incomes and Spending
# By region by sector: US, NYPA, NYCY and other regions worksheets
# the year the data became available
start_year = 1992
# region worksheet name
region_worksheet = 'US'
# select the dataset
input_variable = income_region_q
# select variable: avg_income disp_income rdisp_income retail r_retail
var = "retail"
df2 = input_variable.loc[(input_variable["region"] == region_worksheet) &
(input_variable["year"]>=start_year),
["year", "quarter", var]]
df2.reset_index(drop = True, inplace = True)
end_date = []
for i in range(len(df2)):
end_date.append(DateQuarter(df2.iloc[i].year, df2.iloc[i].quarter).end_date())
df2['q_date'] = pd.DataFrame(end_date)
output_variable = interpolate_sum(df2, var, start_year)
output_variable.head()
# using interpolate_average & monthly_average function
# Select the variable: emp or pop
var = "emp"
region_ID = 0
# the year the data became available
if region_ID == 0:
start_year = 1980
else:
start_year = 1990
df3 = sum_region_q.loc[(sum_region_q["region_id"] == region_ID) &
(sum_region_q["year"]>=start_year),
["year", "quarter", var]]
df3.reset_index(drop = True, inplace = True)
end_date = []
month_id = []
for i in range(len(df3)):
month_id.append((df3.iloc[i].year-start_year)*12+(df3.iloc[i].quarter)*3)
end_date.append(DateQuarter(df3.iloc[i].year, df3.iloc[i].quarter).end_date())
df3['month_id'] = pd.DataFrame(month_id)
df3['datetime'] = pd.DataFrame(end_date)
df3['datetime'] = pd.to_datetime(df3['datetime'])
factor = 10000
interpolated_monthly = monthly_average(df3, var, start_year, factor)
interpolated_quarterly = interpolate_average(df3, var, start_year, factor)
print(interpolated_monthly.head())
print(interpolated_quarterly.head())
# Plotting the results
# remove scrolling output window feature
from IPython.core.display import display, HTML
display(HTML("<style>div.output_scroll { height: 44em; }</style>"))
import plotly.graph_objects as go
# set report variable
report_variable = "Employment"
# set plot title
plot_title = "Monthly "+report_variable
# Create traces
fig = go.Figure()
fig.add_trace(go.Scatter(x=interpolated_quarterly.time_id[0:72], y=interpolated_quarterly[var][0:72],
mode='lines', line_shape = 'spline',
name='smooth'))
fig.add_trace(go.Scatter(x=interpolated_monthly.time_id[0:72], y=interpolated_monthly[var][0:72],
mode='lines', line_shape = 'hv',
name='monthly'))
fig.add_trace(go.Scatter(x=df3.month_id[0:24], y=df3[var][0:24],
mode='lines', line_shape = 'vh',
name='quarterly'))
fig.update_layout(
title = plot_title,
xaxis_title="Time",
yaxis_title=report_variable,
font=dict(
family="Courier New, monospace",
size=12,
color="RebeccaPurple"
)
)
fig.show()
# Plotting the results
# remove scrolling output window feature
from IPython.core.display import display, HTML
display(HTML("<style>div.output_scroll { height: 44em; }</style>"))
import plotly.graph_objects as go
# set report variable
report_variable = "Employment"
# set plot title
plot_title = region.iloc[region_ID].regions + " Monthly "+ report_variable
fig = go.Figure()
# Create traces
fig = go.Figure()
fig.add_trace(go.Scatter(x=interpolated_quarterly.time_id, y=interpolated_quarterly.emp,
mode='lines', line_shape = 'spline',
name='smooth'))
fig.add_trace(go.Scatter(x=interpolated_monthly.time_id, y=interpolated_monthly.emp,
mode='lines', line_shape = 'hv',
name='monthly'))
fig.add_trace(go.Scatter(x=df3.month_id, y=df3.emp,
mode='lines', line_shape = 'vh',
name='quarterly'))
fig.update_layout(
title = plot_title,
xaxis_title="Time",
yaxis_title=report_variable,
font=dict(
family="Courier New, monospace",
size=12,
color="RebeccaPurple"
)
)
fig.show()
def interpolate_average2(df, var, start_year, end_year, factor):
series3 = df.loc[:, ['month_id', var]]
series3.columns = ['datetime', var]
series3[var] = pd.to_numeric(series3[var]*3/factor)
series3.loc[-1] = [0, 0] # adding a row
series3.index = series3.index + 1 # shifting index
series3= series3.sort_index() # sorting by index
series3[var] = series3[var].cumsum()
x = []
x = np.array(series3['datetime'], dtype = 'int')
y = np.array(series3[var], dtype = 'float')
f = CubicSpline(x, y, bc_type='natural')
num_years = end_year - start_year + 1
x_new = np.linspace(0, num_years*12, num_years*12+1)
y_dev = f(x_new, 1)*factor
x_datetime = []
for i in range(len(x_new)):
x_datetime.append(get_ym_next(x_new[i], start_year))
monthly_avg = pd.concat([pd.DataFrame(x_new), pd.DataFrame(x_datetime), pd.DataFrame(y_dev)], axis = 1)
monthly_avg.columns = ['time_id','datetime', var]
monthly_avg['datetime'] = pd.to_datetime(monthly_avg['datetime'])
monthly_avg['datetime'] = monthly_avg['datetime'] - dt.timedelta(days = 1)
return(monthly_avg)
def monthly_average2(df, var, start_year, end_year, factor):
series3 = df.loc[:, ['month_id', var]]
series3.columns = ['datetime', var]
series3[var] = pd.to_numeric(series3[var]*3/factor)
series3.loc[-1] = [0, 0] # adding a row
series3.index = series3.index + 1 # shifting index
series3= series3.sort_index() # sorting by index
series3[var] = series3[var].cumsum()
x = []
x = np.array(series3['datetime'], dtype = 'int')
y = np.array(series3[var], dtype = 'float')
f = CubicSpline(x, y, bc_type='natural')
num_years = end_year - start_year + 1
x_new = np.linspace(0, num_years*12, num_years*12+1)
x_datetime = []
y_month = []
for m in range(len(x_new)):
y_month.append(np.mean(f(np.linspace(m, m+1, 1001), 1))*factor)
x_datetime.append(get_ym_next(x_new[m], start_year))
monthly_avg = pd.concat([pd.DataFrame(x_new), pd.DataFrame(x_datetime), pd.DataFrame(y_month)], axis = 1)
monthly_avg.columns = ['time_id','datetime', var]
monthly_avg['datetime'] = pd.to_datetime(monthly_avg['datetime'])
monthly_avg['datetime'] = monthly_avg['datetime'] - dt.timedelta(days = 1)
return(monthly_avg)
file_path = '/content/drive/MyDrive/Interpolate/BLS_employment.xlsx'
xls = pd.ExcelFile(file_path)
# to read all sheets to a map
sheet_to_df_map = {}
for sheet_name in xls.sheet_names:
sheet_to_df_map[sheet_name] = xls.parse(sheet_name)
# Get the "Summary" worksheet
df_quarter = sheet_to_df_map['Quarterly']
df_quarter.reset_index(drop=True, inplace=True)
df_month = sheet_to_df_map['Monthly']
df_month.reset_index(drop=True, inplace=True)
year = pd.DataFrame(np.arange(1990,2021), columns = ['year'])
year.reset_index(drop=True, inplace=True)
# Quarter column, repeat quarter 1-4 for each year
q = pd.DataFrame(np.arange(1, 5), columns = ['quarter'])
q.reset_index(drop=True, inplace=True)
quarter = pd.concat([q]*31, ignore_index=True)
# Year column for quarterly report, i.e., repeat every year 4 times
year_q = pd.DataFrame(np.repeat(year.values,4,axis=0), columns = ['year'])
year_q.reset_index(drop=True, inplace=True)
# read Quarterly data
quarterly_us = pd.concat([year_q, quarter, df_quarter['US'][0:124]], axis=1)
quarterly_us.columns = ['year', 'quarter', 'US']
quarterly_us.reset_index(drop=True, inplace=True)
quarterly_us = quarterly_us.astype({"year": int, "quarter":int, "US":object}).copy()
end_date = []
month_id = []
start_year = 1990
end_year = 2020
for i in range(len(quarterly_us)):
month_id.append((quarterly_us.iloc[i].year-start_year)*12+(quarterly_us.iloc[i].quarter)*3)
end_date.append(DateQuarter(quarterly_us.iloc[i].year, quarterly_us.iloc[i].quarter).end_date())
quarterly_us['month_id'] = pd.DataFrame(month_id)
quarterly_us['datetime'] = pd.DataFrame(end_date)
quarterly_us['datetime'] = pd.to_datetime(quarterly_us['datetime'])
factor = 10000
var = "US"
interpolated_monthly = monthly_average2(quarterly_us, var, start_year, end_year, factor)
interpolated_quarterly = interpolate_average2(quarterly_us, var, start_year, end_year, factor)
print(interpolated_monthly.head())
print(interpolated_quarterly.head())
# Plotting the results
# remove scrolling output window feature
from IPython.core.display import display, HTML
display(HTML("<style>div.output_scroll { height: 44em; }</style>"))
import plotly.graph_objects as go
# set report variable
report_variable = "Employment"
# set plot title
plot_title = " Monthly "+ report_variable
fig = go.Figure()
# Create traces
fig = go.Figure()
fig.add_trace(go.Scatter(x=interpolated_quarterly.time_id, y=interpolated_quarterly["US"],
mode='lines', line_shape = 'spline',
name='smooth'))
fig.add_trace(go.Scatter(x=interpolated_monthly.time_id, y=interpolated_monthly["US"],
mode='lines', line_shape = 'hv',
name='monthly'))
fig.add_trace(go.Scatter(x=quarterly_us.month_id, y=quarterly_us["US"],
mode='lines', line_shape = 'vh',
name='quarterly'))
fig.update_layout(
title = plot_title,
xaxis_title="Time",
yaxis_title=report_variable,
font=dict(
family="Courier New, monospace",
size=12,
color="RebeccaPurple"
)
)
fig.show()
# Plotting the results
# remove scrolling output window feature
from IPython.core.display import display, HTML
display(HTML("<style>div.output_scroll { height: 44em; }</style>"))
import plotly.graph_objects as go
# set report variable
report_variable = "Employment"
# set plot title
plot_title = " Monthly "+ report_variable
fig = go.Figure()
# Create traces
fig = go.Figure()
fig.add_trace(go.Scatter(x=interpolated_monthly.time_id, y=df_month["US"],
mode='lines', line_shape = 'spline',
name='true monthly'))
fig.add_trace(go.Scatter(x=interpolated_monthly.time_id, y=interpolated_monthly["US"],
mode='lines', line_shape = 'spline',
name='estimated monthly'))
fig.update_layout(
title = plot_title,
xaxis_title="Time",
yaxis_title=report_variable,
font=dict(
family="Courier New, monospace",
size=12,
color="RebeccaPurple"
)
)
fig.show()
from google.colab import files
#output_file_name = input('Output File Name: ')
#interpolated_monthly.to_csv(output_file_name+'.csv')
#files.download(output_file_name+'.csv')
###Output
_____no_output_____ |
Audio_dispersion.ipynb | ###Markdown
Audio Dispersion In this project I am going to experiment with audio dispersion using the Phyphox app. I am interested in a simple task: using the FFT to clean up audio data by removing low-power frequency components from the signal, in a Jupyter notebook and in Phyphox. Phyphox is an app that uses the sensors in a smartphone to perform physics experiments. The app is developed by RWTH Aachen University, Germany. More about phyphox can be found on the [PhyPhox](http://phyphox.org) website.\ Audio dispersion can also be measured with the Broadband Transmission Method. That method requires the measurement of a reference velocity to obtain values for the acoustic dispersion in different media. In our case we will do it simply by taking the Fourier transform in a Jupyter notebook. The goal is to clean out a noisy audio signal. We will record noisy audio data with Phyphox and then, by using the Fourier transform, we will be able to remove some noisy frequency components. Waves and Dispersion: A wave is a disturbance in a medium (like waves in water) that propagates through the medium without the medium itself moving along. A travelling wave can be described by the following equation:\ $ y(x,t) = a\sin(kx−ωt) $ \Here,\y(x,t) : The height of the wave at position x and time t \a : The amplitude of the wave\k : The wave number\ω : The angular frequency. The speed at which the wave propagates is given by $v = ω/k$. More about travelling waves can be found [here](https://openstax.org/books/university-physics-volume-1/pages/16-1-traveling-waves). If multiple waves such as\$ y_1 = a_1\sin(k_1x−ω_1t)$,\$y_2 = a_2\sin(k_2x−ω_2t)$\............ \ travel together, the equation of the resultant wave is given by their sum:\$ y_{sum} = a_1\sin(k_1x−ω_1t)+a_2\sin(k_2x−ω_2t)+......$ The amplitude of the resultant wave depends on the frequency of each wave. For $ω_1$ and $ω_2$: if $ ω_1/k_1 = ω_2/k_2 $, both waves propagate at the same speed, there will be no dispersion, and the shape of the resultant function does not change as it moves forward. But when waves of different frequencies propagate at different speeds, this causes dispersion, and the shape of the resultant wave changes as it moves through the medium. The plots of the wave fronts of the above waves are given below: Plot of $ y_1 $  Plot of $y_2$  Plot of $ y_{sum} $ Algorithm: In this project, I am going to use the Fourier Transform to study the dispersion of a sound wave. The Fourier transform will allow us to convert the audio signal from the time domain to the frequency domain. The Fourier transform is a means of mapping a signal, in the time or space domain, into its spectrum in the frequency domain. The time and frequency domains are just alternative ways of representing signals, and the Fourier transform is the mathematical relationship between the two representations. A change of the signal in one domain also affects the signal in the other domain, but not necessarily in the same way. Conversion from time domain to frequency domain by FT. Fig: [Fourier Transform](https://towardsdatascience.com/understanding-audio-data-fourier-transform-fft-spectrogram-and-speech-recognition-a4072d228520) DFT vs FFT---The Discrete Fourier Transform (DFT) is a transform like the Fourier transform, used with digitized signals. As the name suggests, it is the discrete version of the FT that views both the time domain and the frequency domain as periodic. The Fast Fourier Transform, or FFT, is a computational algorithm that reduces the computing time and complexity of large transforms. FFT is just an algorithm used for fast computation of the DFT.
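A minimal sketch (added; not part of the original notebook) that visualizes two travelling waves and their superposition at $t=0$, loosely standing in for the missing wave-front figures above; the amplitudes and wave numbers are arbitrary illustrative choices.
###Code
# Added sketch: plot y1, y2 and y_sum = y1 + y2 at t = 0.
# The values of a1, a2, k1, k2 are assumptions chosen only for illustration.
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 4 * np.pi, 1000)
y1 = 1.0 * np.sin(2 * x)  # a1*sin(k1*x - w1*t) with t = 0
y2 = 0.5 * np.sin(5 * x)  # a2*sin(k2*x - w2*t) with t = 0
fig, axs = plt.subplots(3, 1, figsize=(8, 6), sharex=True)
for ax, y, label in zip(axs, [y1, y2, y1 + y2], ['$y_1$', '$y_2$', '$y_{sum}$']):
    ax.plot(x, y)
    ax.set_ylabel(label)
axs[-1].set_xlabel('x')
plt.show()
###Output
_____no_output_____
###Markdown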
Fig: [DFT](https://en.wikipedia.org/wiki/Discrete_Fourier_transform) A DFT can be computed directly in O($N^2$) time, whereas the FFT reduces the time complexity to O($N\log N$). The DFT is used in many digital processing systems across a variety of applications, such as calculating a signal's frequency spectrum, solving partial differential equations, detecting targets from radar echoes, correlation analysis, computing polynomial multiplication, spectral analysis, and more. The FFT has been widely used for acoustic measurements in churches and concert halls. Other applications of the FFT include spectral analysis in analog video measurements, large integer and polynomial multiplication, filtering algorithms, computing isotopic distributions, calculating Fourier series coefficients, calculating convolutions, generating low frequency noise, designing kinoforms, working with dense structured matrices, image processing, and more. The inverse Fourier transform will allow us to reverse the process by converting the signal from the frequency domain back to the time domain. Taking audio data with the Phyphox app: Using the Phyphox app, I have recorded audio data from the sound of a waterfall (from a tap). From this audio data, I am going to denoise the audio signal using the FFT. Phyphox uses the cellphone's microphone to record the audio amplitudes. For this experiment I have recorded the sound of the waterfall for 1 minute. The screenshot of the data visualization (generated by the app) is shown below:  Importing necessary libraries For the data visualization, I am going to use Pandas; more about pandas can be found [here](https://pandas.pydata.org/docs/index.html). I am going to use numpy and matplotlib to visualize the data. I am also going to use scipy to take the Fourier transform of the audio data.
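To make the DFT-versus-FFT point concrete, here is a small added sketch (not from the original notebook) comparing a naive $O(N^2)$ DFT with numpy's $O(N \log N)$ FFT on a short random signal.
###Code
# Added sketch: a direct O(N^2) DFT versus numpy's FFT; the two results agree
# to floating-point precision on the same input.
import numpy as np

def naive_dft(signal):
    # X[k] = sum_n x[n] * exp(-2j*pi*k*n/N), evaluated directly
    n_samples = len(signal)
    n = np.arange(n_samples)
    k = n.reshape((n_samples, 1))
    twiddle = np.exp(-2j * np.pi * k * n / n_samples)
    return twiddle @ signal

test_signal = np.random.randn(256)
print(np.allclose(naive_dft(test_signal), np.fft.fft(test_signal)))  # expect True
###Output
_____no_output_____
###Markdown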
###Code
import numpy as np
import pandas as pd
# Import ploting tool
import matplotlib.pyplot as plt
# Linting tool
%load_ext pycodestyle_magic
%pycodestyle_on
###Output
The pycodestyle_magic extension is already loaded. To reload it, use:
%reload_ext pycodestyle_magic
###Markdown
Loading audio data in jupyter notebook. From the phyphox app, we can export the audio data as a ".csv" file which we will use in the following analysis.
###Code
# reading csv file
df = pd.read_csv('Data/Amplitudes.csv', sep=',')
# Reading csv file as dataframe and assigning them as data series.
amplitude = df["Sound pressure level (dB)"]
time = df["Time (s)"]
###Output
_____no_output_____
###Markdown
Visualization of audio data in the time domain If we plot the amplitude against time, the visual information we get is shown here:
###Code
plt.figure(figsize=[12, 6])
plt.rcParams["figure.dpi"] = 100
plt.plot(time, amplitude)
plt.xlabel("Time (seconds)")
plt.ylabel("Sound Pressure Level (dB)")
plt.show()
###Output
_____no_output_____
###Markdown
This is the same plot as the one shown by the Phyphox app. Converting to the frequency domain by Fast Fourier Transform
###Code
# Necessary imports for FFT
from scipy.fft import fft, fftfreq, rfft, rfftfreq
# from scipy import signal
# Number of samples in normalized_tone
sampling_rate = round(len(amplitude)/time[len(time)-1])
N = sampling_rate * time[len(time)-1]
amplitude_fft = fft(amplitude.to_numpy())
psd = amplitude_fft * np.conj(amplitude_fft) / len(time)
# Getting frequency
freqs = fftfreq(psd.shape[0], 1 / sampling_rate)
plt.figure(figsize=[12, 6])
# Converting y axis to log scale
plt.yscale('log')
plt.ylabel("Power spectrul density")
plt.xlabel("Frequency (Hz)")
plt.plot(freqs, np.abs(psd), color='red')
plt.show()
###Output
_____no_output_____
###Markdown
Denoising the signal Let's say I am going to neglect all signal components with a PSD below 0.2, so I will set a threshold at 0.2 to filter out all frequency components below this value.
###Code
# Our target frequency is filtering_threshold
filtering_threshold = 0.2
indices = psd > filtering_threshold
# cleaning out the noise below the threshold power spectrul of frequencies
psd_clean = psd * indices
# ploting the clean signal PSD on top of noisy signal PSD
plt.figure(figsize=[12, 6])
plt.yscale('log')
plt.ylabel("Power spectrul density")
plt.xlabel("Frequency (Hz)")
plt.axhline(y=filtering_threshold, color='blue', linestyle='--',
label="PSD threshold")
plt.plot(freqs, np.abs(psd), color="red", label="Noisy PSD")
plt.plot(freqs, np.abs(psd_clean), color='green', label="Clean PSD")
plt.legend(loc="upper right")
plt.show()
###Output
_____no_output_____
###Markdown
Plot after cleaning out noisy frequencies below the threshold:
###Code
# Plot of cleaned psd
plt.title('Cleaned PSD')
plt.plot(freqs, np.abs(psd_clean), color='green', label="Clean PSD")
# Conversion to Log scale
plt.yscale('log')
plt.ylabel("Power spectrul density")
plt.xlabel("Frequency (Hz)")
###Output
_____no_output_____
###Markdown
Applying the Inverse FFT to get back to the original signal after cleaning out noise:
###Code
# Imports
from scipy.fft import ifft
# zero out the FFT coefficients whose PSD fell below the threshold,
# then invert the transform to recover the denoised time-domain signal
amplitude_fft_clean = amplitude_fft * indices
amplitude_clean = ifft(amplitude_fft_clean)
###Output
_____no_output_____
###Markdown
Comparing the cleaned Audio signal with noisy Audio signal:
###Code
# ploting the noisy signal and clean signal
fig, axs = plt.subplots(1, 2, figsize=(18, 6))
# Plot Noisy Signal
axs[0].set_title('Noisy signal')
axs[0].plot(np.abs(time), np.abs(amplitude), color="red", label="Noisy signal")
axs[0].set_xlabel("Amplitude")
axs[0].set_xlabel("Time (seconds)")
axs[0].set_xlim([5, 30])
# Plot cleaned signal
axs[1].set_title('Cleaned out signal')
axs[1].plot(np.abs(time), np.abs(amplitude_clean), color='green',
label="Clean signal")
# axs[1].set_ylim([-0.50, 0.50])  # fixed y-limits removed so the denoised signal's scale stays visible
axs[1].set_xlim([5, 30])
axs[1].set_xlabel("Amplitude")
axs[1].set_xlabel("Time (seconds)")
###Output
_____no_output_____ |
scatterplot_practice.ipynb | ###Markdown
In this workspace, you'll make use of this data set describing various car attributes, such as fuel efficiency. The cars in this dataset represent about 3900 sedans tested by the EPA from 2013 to 2018. This dataset is a trimmed-down version of the data found [here](https://catalog.data.gov/dataset/fuel-economy-data).
###Code
# imports needed by the cells below (added; the original import cell was missing).
# The workspace-specific helpers scatterplot_solution_1/2 are assumed to be
# provided by the course environment.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb

fuel_econ = pd.read_csv('./data/fuel_econ.csv')
fuel_econ.head()
###Output
_____no_output_____
###Markdown
**TO DO 1**: Let's look at the relationship between fuel mileage ratings for city vs. highway driving, as stored in the 'city' and 'highway' variables (in miles per gallon, or mpg). **Use a _scatter plot_ to depict the data.**1. What is the general relationship between these variables? 2. Are there any points that appear unusual against these trends?
###Code
sb.regplot(data=fuel_econ, x='city', y='highway', scatter_kws={'alpha': 1/8})
plt.plot([10,60], [10,60])
plt.xlabel('City (mpg)')
plt.ylabel('Highway (mpg)');
###Output
_____no_output_____
###Markdown
Expected Output
###Code
# run this cell to check your work against ours
scatterplot_solution_1()
###Output
Most of the data falls in a large blob between 10 and 30 mpg city and 20 to 40 mpg highway. Some transparency is added via 'alpha' to show the concentration of data. Interestingly, for most cars highway mileage is clearly higher than city mileage, but for those cars with city mileage above about 30 mpg, the distinction is less pronounced. In fact, most cars above 45 mpg city have better city mileage than highway mileage, contrary to the main trend. It might be good to call out this trend by adding a diagonal line to the figure using the `plot` function. (See the solution file for that code!)
###Markdown
**TO DO 2**: Let's look at the relationship between two other numeric variables. How does the engine size relate to a car's CO2 footprint? The 'displ' variable has the former (in liters), while the 'co2' variable has the latter (in grams per mile). **Use a heat map to depict the data.** How strong is this trend?
###Code
plt.hist2d(data=fuel_econ, x='displ', y='co2', cmin=0.5, cmap='viridis_r')
plt.colorbar()
plt.xlabel('Displacement (l)')
plt.ylabel('CO2 (gpm)');
fuel_econ[['displ', 'co2']].describe()
bins_x = np.arange(0.6, 7+0.4, 0.4)
bins_y = np.arange(29, 692+50, 50)
plt.hist2d(data=fuel_econ, x='displ', y='co2', cmin=0.5, cmap='viridis_r', bins=[bins_x, bins_y])
plt.colorbar()
plt.xlabel('Displacement (l)')
plt.ylabel('CO2 (gpm)');
###Output
_____no_output_____
###Markdown
Expected Output
###Code
# run this cell to check your work against ours
scatterplot_solution_2()
###Output
In the heat map, I've set up a color map that goes from light to dark, and made it so that any cells without count don't get colored in. The visualization shows that most cars fall in a line where larger engine sizes correlate with higher emissions. The trend is somewhat broken by those cars with the lowest emissions, which still have engine sizes shared by most cars (between 1 and 3 liters).
|
notebooks/opt-cts/opt-cts.ipynb | ###Markdown
OptimizationIn this notebook, we explore various algorithms for solving x* = argmin_{x in R^D} f(x), where f(x) is a differentiable cost function. TOC* [Automatic differentiation](AD)* [Second-order full-batch optimization](second)* [Stochastic gradient descent](SGD)* [TF2 tutorial by Mukesh Mithrakumar](https://nbviewer.jupyter.org/github/adhiraiyan/DeepLearningWithTF2.0/blob/master/notebooks/04.00-Numerical-Computation.ipynb)
###Code
import sklearn
import scipy
import scipy.optimize
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from mpl_toolkits.mplot3d import Axes3D
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
import itertools
import time
from functools import partial
import os
import numpy as np
from scipy.special import logsumexp
np.set_printoptions(precision=3)
# We make some wrappers around random number generation
# so it works even if we switch from numpy to JAX
import numpy as onp # original numpy
def set_seed(seed):
onp.random.seed(seed)
def randn(*args):
return onp.random.randn(*args)
def randperm(args):
return onp.random.permutation(args)
import torch
import torchvision
print("torch version {}".format(torch.__version__))
if torch.cuda.is_available():
print(torch.cuda.get_device_name(0))
print("current device {}".format(torch.cuda.current_device()))
else:
print("Torch cannot find GPU")
def set_seed(seed):
onp.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
use_cuda = torch.cuda.is_available()
device = torch.device("cuda:0" if use_cuda else "cpu")
#torch.backends.cudnn.benchmark = True
# Tensorflow 2.0
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow import keras
print("tf version {}".format(tf.__version__))
if tf.test.is_gpu_available():
print(tf.test.gpu_device_name())
else:
print("TF cannot find GPU")
# JAX (https://github.com/google/jax)
!pip install --upgrade -q https://storage.googleapis.com/jax-releases/cuda$(echo $CUDA_VERSION | sed -e 's/\.//' -e 's/\..*//')/jaxlib-$(pip search jaxlib | grep -oP '[0-9\.]+' | head -n 1)-cp36-none-linux_x86_64.whl
!pip install --upgrade -q jax
import jax
import jax.numpy as np
import numpy as onp
from jax.scipy.special import logsumexp
from jax import grad, hessian, jacfwd, jacrev, jit, vmap
from jax.experimental import optimizers
print("jax version {}".format(jax.__version__))
###Output
jax version 0.1.43
###Markdown
Automatic differentiation In this section we illustrate various AD libraries by using them to derive the gradient of the negative log likelihood for binary logistic regression applied to the Iris dataset. We compare to the manual numpy implementation. As a minor detail, we evaluate the gradient of the NLL of the test data with the parameters set to their training MLE, in order to get an interesting signal; using a random weight vector makes the dynamic range of the output harder to see.
###Code
# Fit the model to a dataset, so we have an "interesting" parameter vector to use.
import sklearn.datasets
from sklearn.model_selection import train_test_split
iris = sklearn.datasets.load_iris()
X = iris["data"]
y = (iris["target"] == 2).astype(onp.int) # 1 if Iris-Virginica, else 0'
N, D = X.shape # 150, 4
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.33, random_state=42)
from sklearn.linear_model import LogisticRegression
# We set C to a large number to turn off regularization.
# We don't fit the bias term to simplify the comparison below.
log_reg = LogisticRegression(solver="lbfgs", C=1e5, fit_intercept=False)
log_reg.fit(X_train, y_train)
w_mle_sklearn = np.ravel(log_reg.coef_)
w = w_mle_sklearn
## Compute gradient of loss "by hand" using numpy
def BCE_with_logits(logits, targets):
N = logits.shape[0]
logits = logits.reshape(N,1)
logits_plus = np.hstack([np.zeros((N,1)), logits]) # e^0=1
logits_minus = np.hstack([np.zeros((N,1)), -logits])
logp1 = -logsumexp(logits_minus, axis=1)
logp0 = -logsumexp(logits_plus, axis=1)
logprobs = logp1 * targets + logp0 * (1-targets)
return -np.sum(logprobs)/N
# Compute using numpy
def sigmoid(x): return 0.5 * (np.tanh(x / 2.) + 1)
def predict_logit(weights, inputs):
return np.dot(inputs, weights) # Already vectorized
def predict_prob(weights, inputs):
return sigmoid(predict_logit(weights, inputs))
def NLL(weights, batch):
X, y = batch
logits = predict_logit(weights, X)
return BCE_with_logits(logits, y)
def NLL_grad(weights, batch):
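    # (Added note) gradient of the mean NLL for logistic regression:
    # (1/N) * X^T (mu - y), where mu = sigmoid(X w)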
X, y = batch
N = X.shape[0]
mu = predict_prob(weights, X)
g = np.sum(np.dot(np.diag(mu - y), X), axis=0)/N
return g
y_pred = predict_prob(w, X_test)
loss = NLL(w, (X_test, y_test))
grad_np = NLL_grad(w, (X_test, y_test))
print("params {}".format(w))
#print("pred {}".format(y_pred))
print("loss {}".format(loss))
print("grad {}".format(grad_np))
###Output
params [-4.414 -9.111 6.539 12.686]
loss 0.11824002861976624
grad [-0.235 -0.122 -0.198 -0.064]
###Markdown
AD in JAX Below we use JAX to compute the gradient of the NLL for binary logistic regression.For some examples of using JAX to compute the gradients, Jacobians and Hessians of simple linear and quadratic functions,see [this notebook](https://github.com/probml/pyprobml/blob/master/notebooks/linear_algebra.ipynbAD-jax).More details on JAX's autodiff can be found in the official [autodiff cookbook](https://github.com/google/jax/blob/master/notebooks/autodiff_cookbook.ipynb).
###Code
grad_jax = grad(NLL)(w, (X_test, y_test))
print("grad {}".format(grad_jax))
assert np.allclose(grad_np, grad_jax)
###Output
grad [-0.235 -0.122 -0.198 -0.064]
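`hessian` was imported above but not exercised in this section; as an illustrative extra (not part of the original notebook), the same `NLL` can be pushed through it to obtain the D x D Hessian at the MLE:

```python
# Hessian of the NLL with respect to the weights, evaluated at the MLE.
# For this convex loss it should come out symmetric (up to float32 noise).
H = hessian(NLL)(w, (X_test, y_test))
print(H.shape)                          # (4, 4)
print(onp.allclose(H, H.T, atol=1e-5))  # should print True
```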
###Markdown
AD in Tensorflow We just wrap the relevant forward computations inside GradientTape(), and then call tape.gradient(objective, [variables]).
###Code
w_tf = tf.Variable(np.reshape(w, (D,1)))
x_test_tf = tf.convert_to_tensor(X_test, dtype=np.float64)
y_test_tf = tf.convert_to_tensor(np.reshape(y_test, (-1,1)), dtype=np.float64)
with tf.GradientTape() as tape:
logits = tf.linalg.matmul(x_test_tf, w_tf)
y_pred = tf.math.sigmoid(logits)
loss_batch = tf.nn.sigmoid_cross_entropy_with_logits(y_test_tf, logits)
loss_tf = tf.reduce_mean(loss_batch, axis=0)
grad_tf = tape.gradient(loss_tf, [w_tf])
grad_tf = grad_tf[0][:,0].numpy()
assert np.allclose(grad_np, grad_tf)
print("params {}".format(w_tf))
#print("pred {}".format(y_pred))
print("loss {}".format(loss_tf))
print("grad {}".format(grad_tf))
###Output
WARNING: Logging before flag parsing goes to stderr.
W0826 04:28:46.621946 140039241475968 deprecation.py:323] From /tensorflow-2.0.0b1/python3.6/tensorflow/python/ops/nn_impl.py:182: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
###Markdown
AD in PyTorch We just compute the objective, call backward() on it, and then look up variable.grad. However, we have to specify the requires_grad=True attribute on the variable before computing the objective, so that Torch knows to record its values on its tape.
###Code
w_torch = torch.Tensor(np.reshape(w, [D, 1])).to(device)
w_torch.requires_grad_()
x_test_tensor = torch.Tensor(X_test).to(device)
y_test_tensor = torch.Tensor(y_test).to(device)
y_pred = torch.sigmoid(torch.matmul(x_test_tensor, w_torch))[:,0]
criterion = torch.nn.BCELoss(reduction='mean')
loss_torch = criterion(y_pred, y_test_tensor)
loss_torch.backward()
grad_torch = w_torch.grad[:,0].cpu().numpy()
assert np.allclose(grad_np, grad_torch)
print("params {}".format(w_torch))
#print("pred {}".format(y_pred))
print("loss {}".format(loss_torch))
print("grad {}".format(grad_torch))
###Output
params tensor([[-4.4138],
[-9.1106],
[ 6.5387],
[12.6857]], device='cuda:0', requires_grad=True)
loss 0.11824004352092743
grad [-0.235 -0.122 -0.198 -0.064]
###Markdown
Second-order, full-batch optimization The "gold standard" of optimization is second-order methods, which leverage Hessian information. Since the Hessian has O(D^2) parameters, such methods do not scale to high-dimensional problems. However, we can sometimes approximate the Hessian using low-rank or diagonal approximations. Below we illustrate the BFGS method, which maintains a dense approximation to the (inverse) Hessian using O(D^2) space and time per step, and the limited-memory version (L-BFGS), which uses O(D H) space and time per step, where H is the history length. In general, second-order methods also require exact (rather than noisy) gradients. In the context of ML, this means they are "full batch" methods, since computing the exact gradient requires evaluating the loss on all the datapoints. However, for small data problems, this is feasible (and advisable). Below we illustrate how to use LBFGS as implemented in various libraries. Other second-order optimizers have a similar API. We use the same binary logistic regression problem as above.
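As a reminder (added here for context, not from the original text), the Newton step and the BFGS approximation it relies on are

$$\theta_{k+1} = \theta_k - \eta_k B_k^{-1} g_k, \qquad B_{k+1} = B_k + \frac{y_k y_k^\top}{y_k^\top s_k} - \frac{B_k s_k s_k^\top B_k}{s_k^\top B_k s_k},$$

where $s_k = \theta_{k+1} - \theta_k$, $y_k = g_{k+1} - g_k$, and $B_k \approx \nabla^2 f(\theta_k)$. L-BFGS never forms the dense $D \times D$ matrix; it reconstructs the product $B_k^{-1} g_k$ from the last $H$ pairs $(s_k, y_k)$.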
###Code
# Repeat relevant code from AD section above, for convenience.
# Dataset
import sklearn.datasets
from sklearn.model_selection import train_test_split
iris = sklearn.datasets.load_iris()
X = iris["data"]
y = (iris["target"] == 2).astype(onp.int) # 1 if Iris-Virginica, else 0'
N, D = X.shape # 150, 4
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.33, random_state=42)
# Sklearn estimate
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression(solver="lbfgs", C=1e5, fit_intercept=False)
log_reg.fit(X_train, y_train)
w_mle_sklearn = np.ravel(log_reg.coef_)
w = w_mle_sklearn
# Define Model and binary cross entropy loss
def BCE_with_logits(logits, targets):
N = logits.shape[0]
logits = logits.reshape(N,1)
logits_plus = np.hstack([np.zeros((N,1)), logits]) # e^0=1
logits_minus = np.hstack([np.zeros((N,1)), -logits])
logp1 = -logsumexp(logits_minus, axis=1)
logp0 = -logsumexp(logits_plus, axis=1)
logprobs = logp1 * targets + logp0 * (1-targets)
return -np.sum(logprobs)/N
def sigmoid(x): return 0.5 * (np.tanh(x / 2.) + 1)
def predict_logit(weights, inputs):
return np.dot(inputs, weights) # Already vectorized
def predict_prob(weights, inputs):
return sigmoid(predict_logit(weights, inputs))
def NLL(weights, batch):
X, y = batch
logits = predict_logit(weights, X)
return BCE_with_logits(logits, y)
###Output
_____no_output_____
###Markdown
Scipy version We show how to use the implementation from [scipy.optimize.minimize](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize)
###Code
import scipy.optimize
# We manually compute gradients, but could use Jax instead
def NLL_grad(weights, batch):
X, y = batch
N = X.shape[0]
mu = predict_prob(weights, X)
g = np.sum(np.dot(np.diag(mu - y), X), axis=0)/N
return g
def training_loss(w):
return NLL(w, (X_train, y_train))
def training_grad(w):
return NLL_grad(w, (X_train, y_train))
set_seed(0)
w_init = randn(D)
options={'disp': None, 'maxfun': 1000, 'maxiter': 1000}
method = 'BFGS'
w_mle_scipy = scipy.optimize.minimize(
training_loss, w_init, jac=training_grad,
method=method, options=options).x
print("parameters from sklearn {}".format(w_mle_sklearn))
print("parameters from scipy-bfgs {}".format(w_mle_scipy))
# Limited memory version requires that we work with 64bit, since implemented in Fortran.
def training_loss2(w):
l = NLL(w, (X_train, y_train))
return onp.float64(l)
def training_grad2(w):
g = NLL_grad(w, (X_train, y_train))
return onp.asarray(g, dtype=onp.float64)
set_seed(0)
w_init = randn(D)
memory = 10
options={'disp': None, 'maxcor': memory, 'maxfun': 1000, 'maxiter': 1000}
# The code also handles bound constraints, hence the name
method = 'L-BFGS-B'
w_mle_scipy = scipy.optimize.minimize(training_loss2, w_init, jac=training_grad2, method=method, options=options).x
print("parameters from sklearn {}".format(w_mle_sklearn))
print("parameters from scipy-lbfgs {}".format(w_mle_scipy))
###Output
parameters from sklearn [-4.414 -9.111 6.539 12.686]
parameters from scipy-lbfgs [-4.415 -9.114 6.54 12.691]
###Markdown
PyTorch version We show how to use the version from [torch.optim.LBFGS](https://github.com/pytorch/pytorch/blob/master/torch/optim/lbfgs.py).
###Code
# Put data into PyTorch format.
import torch
from torch.utils.data import DataLoader, TensorDataset
N, D = X_train.shape
x_train_tensor = torch.Tensor(X_train)
y_train_tensor = torch.Tensor(y_train)
data_set = TensorDataset(x_train_tensor, y_train_tensor)
# Define model and loss.
class Model(torch.nn.Module):
def __init__(self):
super(Model, self).__init__()
self.linear = torch.nn.Linear(D, 1, bias=False)
def forward(self, x):
y_pred = torch.sigmoid(self.linear(x))
return y_pred
set_seed(0)
model = Model()
criterion = torch.nn.BCELoss(reduction='mean')
optimizer = torch.optim.LBFGS(model.parameters(), history_size=10)
def closure():
optimizer.zero_grad()
y_pred = model(x_train_tensor)
loss = criterion(y_pred, y_train_tensor)
#print('loss:', loss.item())
loss.backward()
return loss
max_iter = 10
for i in range(max_iter):
loss = optimizer.step(closure)
params = list(model.parameters())
w_torch_bfgs = params[0][0].detach().numpy() #(D,) vector
print("parameters from sklearn {}".format(w_mle_sklearn))
print("parameters from torch-bfgs {}".format(w_torch_bfgs))
###Output
parameters from sklearn [-4.414 -9.111 6.539 12.686]
parameters from torch-bfgs [-4.415 -9.114 6.54 12.691]
###Markdown
TF version There is also a version of [LBFGS in TF](https://www.tensorflow.org/probability/api_docs/python/tfp/optimizer/lbfgs_minimize); a hedged sketch of calling it on this problem is included just below. Stochastic gradient descent In this section we illustrate how to implement SGD. We apply it to a simple convex problem, namely MLE for binary logistic regression on the small iris dataset, so we can compare to the exact batch methods we illustrated above. Numpy version We show a minimal implementation of SGD using vanilla numpy. For convenience, we use TFDS to create a stream of mini-batches. We compute gradients by hand, but could use any AD library.
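Before moving on to the numpy SGD code, here is the sketch referenced in the "TF version" note above. It is not part of the original notebook and assumes `tensorflow_probability` is installed; treat it as an illustration of the `tfp.optimizer.lbfgs_minimize` interface rather than tested code.

```python
import tensorflow as tf
import tensorflow_probability as tfp

X_tf = tf.constant(X_train, dtype=tf.float64)
y_tf = tf.constant(y_train.astype(onp.float64))

def nll_value_and_grad(w):
    # tfp.optimizer.lbfgs_minimize expects a function returning (loss, gradient).
    with tf.GradientTape() as tape:
        tape.watch(w)
        logits = tf.linalg.matvec(X_tf, w)
        loss = tf.reduce_mean(
            tf.nn.sigmoid_cross_entropy_with_logits(labels=y_tf, logits=logits))
    return loss, tape.gradient(loss, w)

results = tfp.optimizer.lbfgs_minimize(
    nll_value_and_grad,
    initial_position=tf.constant(onp.random.randn(D)),
    max_iterations=1000)
print("parameters from tfp-lbfgs {}".format(results.position.numpy()))
```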
###Code
import tensorflow_datasets as tfds
def make_batcher(batch_size, X, y):
def get_batches():
# Convert numpy arrays to tfds
ds = tf.data.Dataset.from_tensor_slices({"X": X, "y": y})
ds = ds.batch(batch_size)
# convert tfds into an iterable of dict of NumPy arrays
return tfds.as_numpy(ds)
return get_batches
batcher = make_batcher(2, X_train, y_train)
for epoch in range(2):
print('epoch {}'.format(epoch))
for batch in batcher():
x, y = batch["X"], batch["y"]
#print(x.shape)
def sgd(params, loss_fn, grad_loss_fn, get_batches_as_dict, max_epochs, lr):
print_every = max(1, int(0.1*max_epochs))
for epoch in range(max_epochs):
epoch_loss = 0.0
for batch_dict in get_batches_as_dict():
x, y = batch_dict["X"], batch_dict["y"]
batch = (x, y)
batch_grad = grad_loss_fn(params, batch)
params = params - lr*batch_grad
batch_loss = loss_fn(params, batch) # Average loss within this batch
epoch_loss += batch_loss
if epoch % print_every == 0:
print('Epoch {}, Loss {}'.format(epoch, epoch_loss))
return params,
set_seed(0)
D = X_train.shape[1]
w_init = onp.random.randn(D)
def training_loss2(w):
l = NLL(w, (X_train, y_train))
return onp.float64(l)
def training_grad2(w):
g = NLL_grad(w, (X_train, y_train))
return onp.asarray(g, dtype=onp.float64)
max_epochs = 5
lr = 0.1
batch_size = 10
batcher = make_batcher(batch_size, X_train, y_train)
w_mle_sgd = sgd(w_init, NLL, NLL_grad, batcher, max_epochs, lr)
print(w_mle_sgd)
###Output
Epoch 0, Loss 21.775604248046875
Epoch 1, Loss 3.2622179985046387
Epoch 2, Loss 3.1074540615081787
Epoch 3, Loss 2.9816956520080566
Epoch 4, Loss 2.875518798828125
(DeviceArray([-0.399, -0.919, 0.311, 2.174], dtype=float32),)
###Markdown
Jax version JAX has a small optimization library focused on stochastic first-order optimizers. Every optimizer is modeled as an (`init_fun`, `update_fun`, `get_params`) triple of functions. The `init_fun` is used to initialize the optimizer state, which could include things like momentum variables, and the `update_fun` accepts a gradient and an optimizer state to produce a new optimizer state. The `get_params` function extracts the current iterate (i.e. the current parameters) from the optimizer state. The parameters being optimized can be ndarrays or arbitrarily-nested list/tuple/dict structures, so you can store your parameters however you’d like. Below we show how to reproduce our numpy code using this library.
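A minimal illustration of that triple, shown here as an added example before the full training loop in the next cell:

```python
opt_init, opt_update, get_params = optimizers.sgd(step_size=0.1)
opt_state = opt_init(w_init)                               # wrap params in optimizer state
g = grad(NLL)(get_params(opt_state), (X_train, y_train))   # full-batch gradient
opt_state = opt_update(0, g, opt_state)                    # one SGD step
print(get_params(opt_state))                               # updated parameters
```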
###Code
# Version that uses JAX optimization library
#@jit
def sgd_jax(params, loss_fn, get_batches, max_epochs, opt_init, opt_update, get_params):
loss_history = []
opt_state = opt_init(params)
#@jit
def update(i, opt_state, batch):
params = get_params(opt_state)
g = grad(loss_fn)(params, batch)
return opt_update(i, g, opt_state)
print_every = max(1, int(0.1*max_epochs))
total_steps = 0
for epoch in range(max_epochs):
epoch_loss = 0.0
for batch_dict in get_batches():
X, y = batch_dict["X"], batch_dict["y"]
batch = (X, y)
total_steps += 1
opt_state = update(total_steps, opt_state, batch)
params = get_params(opt_state)
train_loss = onp.float(loss_fn(params, batch))
loss_history.append(train_loss)
if epoch % print_every == 0:
print('Epoch {}, train NLL {}'.format(epoch, train_loss))
return params, loss_history
b=list(batcher())
X, y = b[0]["X"], b[0]["y"]
X.shape
batch = (X, y)
params= w_init
onp.float(NLL(params, batch))
g = grad(NLL)(params, batch)
# JAX with constant LR should match our minimal version of SGD
schedule = optimizers.constant(step_size=lr)
opt_init, opt_update, get_params = optimizers.sgd(step_size=schedule)
w_mle_sgd2, history = sgd_jax(w_init, NLL, batcher, max_epochs,
opt_init, opt_update, get_params)
print(w_mle_sgd2)
print(history)
###Output
_____no_output_____ |
Iris_dataset_problem/encoding_categorical_variables.ipynb | ###Markdown
Imputing missing values
###Code
import numpy as np
import pandas as pd

URL = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
df = pd.read_csv(URL, names=['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'class'])
# Randomly select 10 rows
random_index = np.random.choice(df.index, replace=False, size=10)
# Set the sepal_length values of the rows to be None
df.loc[random_index, 'sepal_length'] = None
df.isnull().any()
# dropping the rows with missing values from the data frame
print("Number of rows before deleting: %d" %(df.shape[0]))
df2 = df.dropna()
print("Number of rows after deleting: %d" %(df2.shape[0]))
# imputing missing value using mean
df.sepal_length = df.sepal_length.fillna(df.sepal_length.mean())
df.isnull().any()
###Output
_____no_output_____ |
module_1/Movies_IMBD_v4.1.ipynb | ###Markdown
Preprocessing
###Code
import calendar
import pandas as pd
from collections import Counter
from IPython.display import display

# NOTE: `data` is assumed to have been loaded earlier in the notebook as a
# DataFrame with columns such as budget, revenue, genres, cast, overview,
# production_companies, release_date and release_year.
answers = {}
data['profit'] = data['revenue'] - data['budget']
data['genres_array'] = data['genres'].str.split('|')
data['overview_len'] = data['overview'].str.split().str.len()
data['cast_array'] = data['cast'].str.split('|')
data['production_array'] = data['production_companies'].str.split('|')
data['release_date'] = data['release_date'].apply(pd.to_datetime)
def movie_title(row):
clean_title = row['original_title'].to_string(index=False).strip(' ')
clean_imdb_id = row['imdb_id'].to_string(index=False).strip(' ')
return f'{clean_title} ({clean_imdb_id})'
###Output
_____no_output_____
###Markdown
1. Which film in the list has the largest budget?
###Code
row = data[data['budget'] == data['budget'].max()][['imdb_id', 'original_title']]
answers['1'] = movie_title(row)
print(answers['1'])
###Output
Pirates of the Caribbean: On Stranger Tides (tt1298650)
###Markdown
2. Which film is the longest (in minutes)?
###Code
row = data[data['runtime'] == data['runtime'].max()][['imdb_id', 'original_title']]
answers['2'] = movie_title(row)
print(answers['2'])
###Output
Gods and Generals (tt0279111)
###Markdown
3. Which film is the shortest (in minutes)?
###Code
row = data[data['runtime'] == data['runtime'].min()][['imdb_id', 'original_title']]
answers['3'] = movie_title(row)
print(answers['3'])
###Output
Winnie the Pooh (tt1449283)
###Markdown
4. What is the average runtime of the films?
###Code
answers['4'] = round(data['runtime'].mean())
print(answers['4'])
###Output
110
###Markdown
5. What is the median runtime of the films?
###Code
answers['5'] = round(data['runtime'].median())
print(answers['5'])
###Output
107
###Markdown
6. Which film is the most profitable? Note: here and below, "profit" or "loss" means the difference between a film's revenue and its budget (profit = revenue - budget), which in our dataset is the profit column (profit = revenue - budget)
###Code
row = data[data['profit'] == data['profit'].max()][['imdb_id', 'original_title', 'profit']]
answers['6'] = movie_title(row)
print(answers['6'])
###Output
Avatar (tt0499549)
###Markdown
7. Which film had the biggest loss?
###Code
row = data[data['profit'] == data['profit'].min()][['imdb_id', 'original_title', 'profit']]
answers['7'] = movie_title(row)
print(answers['7'])
###Output
The Lone Ranger (tt1210819)
###Markdown
8. For how many films in the dataset did revenue exceed the budget?
###Code
answers['8'] = data[data['profit'] > 0]['imdb_id'].count()
print(answers['8'])
###Output
1478
###Markdown
9. Which film had the highest box-office revenue in 2008?
###Code
max_profit = data[data['release_year'] == 2008]['profit'].max()
row = data[(data['release_year'] == 2008) & (data['profit'] == max_profit)][['imdb_id', 'original_title', 'profit']]
answers['9'] = movie_title(row)
print(answers['9'])
###Output
The Dark Knight (tt0468569)
###Markdown
10. The film with the biggest loss in the period from 2012 to 2014 (inclusive)?
###Code
min_profit = data[data['release_year'].between(2012, 2014)]['profit'].min()
row = data[(data['release_year'].between(2012, 2014)) & (data['profit'] == min_profit)]
answers['10'] = movie_title(row)
print(answers['10'])
###Output
The Lone Ranger (tt1210819)
###Markdown
11. Which genre has the most films?
###Code
genres = data.explode('genres_array').groupby('genres_array')['imdb_id'].count().sort_values(ascending=False)
answers['11'] = genres.index[0]
print(answers['11'])
###Output
Drama
###Markdown
12. Films of which genre most often turn a profit?
###Code
with_profit = data[data['profit'] > 0]
genres = with_profit.explode('genres_array').groupby('genres_array')['imdb_id'].count().sort_values(ascending=False)
answers['12'] = genres.index[0]
print(answers['12'])
###Output
Drama
###Markdown
13. Which director has the largest total box-office revenue?
###Code
answers['13'] = data.groupby('director').sum()['revenue'].sort_values(ascending=False).index[0]
print(answers['13'])
###Output
Peter Jackson
###Markdown
14. Which director has made the most Action films?
###Code
variants = ['Ridley Scott', 'Guy Ritchie', 'Robert Rodriguez', 'Quentin Tarantino', 'Tony Scott']
action_directors = data[data['director'].isin(variants)]
action_directors = action_directors[action_directors['genres'].str.contains('Action')] \
.groupby('director')['imdb_id'] \
.count() \
.sort_values(ascending=False)
answers['14'] = action_directors.index[0]
print(answers['14'])
###Output
Robert Rodriguez
###Markdown
15. Films with which actor brought in the highest box-office revenue in 2012?
###Code
data_2012 = data[data['release_year'] == 2012]
revenue = data_2012.explode('cast_array').groupby('cast_array')['revenue'].sum().sort_values(ascending=False)
answers['15'] = revenue.index[0]
print(answers['15'])
###Output
Chris Hemsworth
###Markdown
16. Which actor has appeared in the most high-budget films?
###Code
cast_budget = Counter()
def count_cast_budget(row):
for cast in row['cast_array']:
cast_budget[cast] += 1
data[data['budget'] > data['budget'].median()].apply(count_cast_budget, axis='columns')
answers['16'], _ = cast_budget.most_common(1)[0]
print(answers['16'])
high_budget = data[data['budget'] > data['budget'].median()]
cast_hist = high_budget.explode('cast_array').groupby('cast_array')['imdb_id'].count().sort_values(ascending=False)
answers['16'] = cast_hist.index[0]
print(answers['16'])
###Output
Matt Damon
###Markdown
17. In films of which genre has Nicolas Cage appeared most often?
###Code
cage = data[data['cast'].str.contains('Nicolas Cage')]
genres = cage.explode('genres_array').groupby('genres_array')['imdb_id'].count().sort_values(ascending=False)
answers['17'] = genres.index[0]
print(answers['17'])
###Output
Action
###Markdown
18. The Paramount Pictures film with the biggest loss
###Code
paramount = data[data['production_companies'].str.contains('Paramount Pictures')]
answers['18'] = movie_title(paramount.sort_values(by='profit').head(1))
print(answers['18'])
###Output
K-19: The Widowmaker (tt0267626)
###Markdown
19. Which year was the most successful by total box-office revenue?
###Code
answers['19'] = data.groupby('release_year')['revenue'].sum().sort_values(ascending=False).index[0]
print(answers['19'])
###Output
2015
###Markdown
20. Which year was the most profitable for the Warner Bros studio?
###Code
warner = data[data['production_companies'].str.contains('Warner Bros')]
answers['20'] = warner.groupby('release_year')['revenue'].sum().sort_values(ascending=False).index[0]
print(answers['20'])
###Output
2014
###Markdown
21. In which month, summed over all years, were the most films released?
###Code
answers['21'] = data.groupby(by=data['release_date'].dt.month)['imdb_id'].count().sort_values(ascending=False).index[0]
print(calendar.month_name[answers['21']])
###Output
September
###Markdown
22. How many films in total were released in summer (June, July, August)?
###Code
answers['22'] = data[data['release_date'].dt.month.between(6,8)]['imdb_id'].count()
print(answers['22'])
###Output
450
###Markdown
23. For which director is winter the most productive season of the year?
###Code
directors = data[data['release_date'].dt.month.isin([12,1,2])].groupby('director')['imdb_id'].count().sort_values(ascending=False)
answers['23'] = directors.index[0]
print(answers['23'])
###Output
Peter Jackson
###Markdown
24. Which studio gives its films the longest titles, by character count?
###Code
title_len = {}
def title_count(row):
for p in row['production_array']:
title_len.setdefault(p, [])
title_len[p].append(len(row['original_title']))
data.apply(title_count, axis='columns')
result = []
for p in title_len:
result.append( (sum(title_len[p]) / len(title_len[p]), p) )
_, answers['24'] = sorted(result, reverse=True)[0]
print(answers['24'])
with_len = data.explode('production_array')[['production_array', 'original_title']]
with_len['original_title_len'] = with_len['original_title'].apply(len)
answers['24'] = with_len.groupby('production_array')['original_title_len'] \
.mean() \
.sort_values(ascending=False) \
.index[0]
print(answers['24'])
###Output
Four By Two Productions
###Markdown
25. Which studio's film overviews are on average the longest, by word count?
###Code
answers['25'] = data.explode('production_array') \
.groupby('production_array')['overview_len'] \
.mean() \
.sort_values(ascending=False) \
.index[0]
print(answers['25'])
###Output
Midnight Picture Show
###Markdown
26. Which films are in the top 1 percent by rating (vote_average)?
###Code
variants = [
['Inside Out', 'The Dark Knight', '12 Years a Slave'],
['BloodRayne', 'The Adventures of Rocky & Bullwinkle'],
['Batman Begins', 'The Lord of the Rings: The Return of the King', 'Upside Down'],
['300', 'Lucky Number Slevin', 'Kill Bill: Vol. 1'],
['Upside Down', 'Inside Out', 'Iron Man'],
]
best_99 = data[data['vote_average'] > data['vote_average'].quantile(.99)]['original_title']
best_set = set(best_99.unique())
answers['26'] = 'unknown'
for variant in variants:
variant_set = set(variant)
if set(variant).issubset(best_set):
answers['26'] = ', '.join(variant)
break
print(answers['26'])
###Output
Inside Out, The Dark Knight, 12 Years a Slave
###Markdown
27. Which actors most often appear together in the same film?
###Code
pairs = Counter()
def count_pairs(row):
for i, cast in enumerate(row['cast_array']):
for cast2 in row['cast_array'][i:]:
if cast != cast2:
pairs[', '.join(sorted([cast, cast2]))] += 1
data.apply(count_pairs, axis='columns')
answers['27'], _ = pairs.most_common(1)[0]
print(answers['27'])
###Output
Daniel Radcliffe, Rupert Grint
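An equivalent, slightly more compact way to count co-starring pairs (an alternative sketch, not part of the original solution) uses `itertools.combinations` instead of the explicit double loop:

```python
from itertools import combinations

pair_counts = Counter(
    ', '.join(pair)
    for cast_list in data['cast_array']
    for pair in combinations(sorted(set(cast_list)), 2)
)
print(pair_counts.most_common(1))
```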
###Markdown
Submission
###Code
display(answers)
# and make sure nothing was missed
print(len(answers))
###Output
27
|
1.Neural-Network.ipynb | ###Markdown
Basics on how to build a simple Neural Network 0. Imports
###Code
import torch
import torch.nn as nn # Network Modules
import torch.optim as optim # Gradient Descent, SGD, Adam, ...
import torch.nn.functional as F # Activation functions
# The Data Loader gives us easier data set management
# allowing us to create mini batches and this kind of things easily
from torch.utils.data import DataLoader
# Datasets from torchvision: https://pytorch.org/vision/stable/datasets.html
import torchvision.datasets as datasets
# Transformations to perform on our data set (for data augmentation, for example)
import torchvision.transforms as transforms
# Already implemented & pre-trained models from torchvision: https://pytorch.org/vision/stable/models.html
import torchvision.models
from tqdm import tqdm # progress bar
###Output
_____no_output_____
###Markdown
1. Create a Fully Connected Network Model of the neural network:
###Code
class NN(nn.Module):
def __init__(self, input_size, num_classes):
# call the initialization of the nn.Module
super(NN, self).__init__()
# create here the NN modules that are going to be used
self.fc1 = nn.Linear(input_size, 50)
self.fc2 = nn.Linear(50, num_classes)
def forward(self, x):
# assembly the modules that participate on the forward propagation part
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
###Output
_____no_output_____
###Markdown
To import the CNN:
###Code
from models.SimpleCNN import CNN
# # to make sure it runs correctly (should output torch.Size([64, 10])):
# model = CNN()
# x = torch.randn(64, 1, 28, 28)
# print(model(x).shape)
###Output
_____no_output_____
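The `models/SimpleCNN.py` file itself is not reproduced in this notebook; a minimal hypothetical sketch of a CNN with the same interface (two conv/pool blocks feeding a linear layer, so that a `[64, 1, 28, 28]` batch maps to `[64, 10]`) might look like this:

```python
import torch.nn as nn
import torch.nn.functional as F

class CNN(nn.Module):
    def __init__(self, in_channels=1, num_classes=10):
        super(CNN, self).__init__()
        # 'same' convolutions followed by 2x2 max pooling: 28x28 -> 14x14 -> 7x7
        self.conv1 = nn.Conv2d(in_channels, 8, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(8, 16, kernel_size=3, stride=1, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.fc1 = nn.Linear(16 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.reshape(x.shape[0], -1)
        return self.fc1(x)
```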
###Markdown
2. Set Device
###Code
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
###Output
_____no_output_____
###Markdown
3. Hyperparameters
###Code
INPUT_SIZE = 784
INPUT_CHANNELS = 1
NUM_CLASSES = 10
LEARNING_RATE = 0.001
BATCH_SIZE = 64
NUM_EPOCHS = 3
LOAD_MODEL = True
CHECKPOINT_NAME = "checkpoints/my_checkpoint.pth.tar"
###Output
_____no_output_____
###Markdown
4. Load Data - `root`: Where the dataset is going to be downloaded.- `train`: If True: download the training set. If False: Download the test set.- `transform`: Transformations to perform on the dataset (from NumPy to Tensor to be run on PyTorch).
###Code
train_ds = datasets.MNIST(root='data', train=True, transform=transforms.ToTensor(), download=True)
test_ds = datasets.MNIST(root='data', train=False, transform=transforms.ToTensor(), download=True)
###Output
_____no_output_____
###Markdown
⚠️ Be careful not to shuffle the data if it has to follow a specific order, like in some NLP cases.
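For example (purely illustrative; MNIST classification does not need it), an order-sensitive dataset would be wrapped without shuffling:

```python
ordered_loader = DataLoader(dataset=train_ds, batch_size=BATCH_SIZE, shuffle=False)
```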
###Code
train_loader = DataLoader(dataset=train_ds, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(dataset=test_ds, batch_size=BATCH_SIZE, shuffle=True)
###Output
_____no_output_____
###Markdown
5. Initialize network > To choose the model, just uncomment it and comment out the rest Simple neural network (NN):
###Code
# model = NN(input_size=INPUT_SIZE, num_classes=NUM_CLASSES).to(DEVICE)
###Output
_____no_output_____
###Markdown
Convolutional neural network (CNN):
###Code
model = CNN(in_channels=INPUT_CHANNELS, num_classes=NUM_CLASSES).to(DEVICE)
###Output
_____no_output_____
###Markdown
VGG16: Initial model summary:```VGG( (features): Sequential( (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU(inplace=True) (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): ReLU(inplace=True) (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) ... (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (29): ReLU(inplace=True) (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) ) (avgpool): AdaptiveAvgPool2d(output_size=(7, 7)) (classifier): Sequential( (0): Linear(in_features=25088, out_features=4096, bias=True) (1): ReLU(inplace=True) (2): Dropout(p=0.5, inplace=False) (3): Linear(in_features=4096, out_features=4096, bias=True) (4): ReLU(inplace=True) (5): Dropout(p=0.5, inplace=False) (6): Linear(in_features=4096, out_features=1000, bias=True) ))``` We don't want to perform any operation as avgpool. Therefore, we're going to create an Identity module that will leave the input as it is:
###Code
class Identity(nn.Module):
def __init__(self):
super(Identity, self).__init__()
def forward(self, x):
return x
###Output
_____no_output_____
###Markdown
We also don't want the classifier part to have an output of 1000 features, so we are gonna say that the classifier is just a Linear module with an output of 10:
###Code
def load_vgg16_model():
model = torchvision.models.vgg16(pretrained=True)
# We just want to perform backpropagation on the last layers. Therefore,
# we're going to freeze the gradients of the parameters defined so far.
# This will make the training much faster as it will only train the new
# added layers!
for param in model.parameters():
param.requires_grad = False
model.avgpool = Identity()
# if we look at line 28 of the summary, we can see that there are 512 output_channels
model.classifier = nn.Sequential(nn.Linear(512, 100),
nn.ReLU(),
nn.Linear(100, NUM_CLASSES))
return model
# model = load_vgg16_model()
# print(model) # model summary
###Output
_____no_output_____
###Markdown
Summary:```VGG( (features): Sequential( (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU(inplace=True) (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): ReLU(inplace=True) (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) ... (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (29): ReLU(inplace=True) (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) ) (avgpool): Identity() (classifier): Sequential( (0): Linear(in_features=512, out_features=100, bias=True) (1): ReLU() (2): Linear(in_features=100, out_features=10, bias=True) ))``` 6. Loss & Optimizer
###Code
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
###Output
_____no_output_____
###Markdown
7. Checkpoints & Model Loading
###Code
def save_checkpoint(state, filename="checkpoints/my_checkpoint.pth.tar"):
print("=> Saving checkpoint")
torch.save(state, filename)
def load_checkpoint(checkpoint):
print("=> Loading checkpoint")
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
###Output
_____no_output_____
###Markdown
8. Train Network
###Code
if LOAD_MODEL:
try:
load_checkpoint(torch.load(CHECKPOINT_NAME))
except:
raise FileNotFoundError("No previous checkpoints were found.")
print("Checkpoint has been loaded correctly!")
def reshape_if_simple_nn(data):
if isinstance(model, NN):
# Get to correct shape for the simple neural network: [64, 1, 28, 28] -> [64, 784]
# - The Linear layer expects one input per neuron, therefore,
# we cannot introduce an array per neuron. We've first to convert it to only 1 value.
data = data.reshape(data.shape[0], -1) # -1 flatten all the following layers
return data
for epoch in range(NUM_EPOCHS):
losses = []
loop = tqdm(enumerate(train_loader), total=len(train_loader), leave=False)
if epoch % 2 == 0: # save a checkpoint every two epochs
checkpoint = {'state_dict': model.state_dict(), 'optimizer': optimizer.state_dict()}
save_checkpoint(checkpoint, CHECKPOINT_NAME)
for batch_idx, (data, targets) in loop:
# Carry data to CUDA if possible
data = data.to(device=DEVICE)
targets = targets.to(device=DEVICE)
data = reshape_if_simple_nn(data)
### Forward ###
scores = model(data)
loss = criterion(scores, targets)
losses.append(loss.item())
### Backward ###
# For each batch, set all the gradients to 0 to avoid using previous gradients
# on a new batch and run through new problems
optimizer.zero_grad()
loss.backward()
# perform the optimization
optimizer.step()
# update progress bar
loop.set_description(f"Epoch [{epoch}/{NUM_EPOCHS}]")
loop.set_postfix(loss = loss.item())
mean_loss = sum(losses) / len(losses)
print(f"Loss at epoch {epoch} was {mean_loss:.5f}")
###Output
=> Saving checkpoint
Loss at epoch 0 was 0.25962
Loss at epoch 1 was 0.07248
=> Saving checkpoint
Loss at epoch 2 was 0.05244
###Markdown
9. Accuracy & Test
###Code
def check_accuracy(loader, model):
dataset_type = "training" if loader.dataset.train else 'test'
print(f"Checking accuracy on {dataset_type} data")
num_correct = 0
num_samples = 0
model.eval() # in other cases, it'll disable dropout and this kind of layers
# with torch.no_grad() we avoid computing the gradients in the calculations
with torch.no_grad():
for x, y in loader:
x = x.to(device=DEVICE)
y = y.to(device=DEVICE)
x = reshape_if_simple_nn(x)
scores = model(x)
# Remember we said that the output shape is gonna be nn.Linear(50, 10)
# We want to take the greatest value, so just apply argmax
predictions = scores.argmax(dim=1)
# Remember, x, predictions & y are batches of 64 elements.
# if we perform (predictions == y), we'll obtain a tensor like the following one:
# tensor([True, False, True, True]).sum() = 4
num_correct += (predictions == y).sum()
num_samples += predictions.size(0)
acc = (float(num_correct) / float(num_samples)) * 100
print(f"Got {num_correct} / {num_samples} with accuracy {acc:.2f}")
model.train() # to remove the model.eval() part
return acc
check_accuracy(train_loader, model)
check_accuracy(test_loader, model)
###Output
Checking accuracy on training data
Got 59258 / 60000 with accuracy 98.76
Checking accuracy on test data
Got 9854 / 10000 with accuracy 98.54
|
EPM_datos_Uraba/daniel-new.ipynb | ###Markdown
Text analysis RepairCode First we extract the text and clean it. To do that, we remove stopwords and the tokens NAN, RAMAL, RAMALES
###Code
import re
import unicodedata
from collections import Counter

import nltk
import numpy as np
from nltk import ngrams
from nltk.corpus import stopwords

# nltk.download('stopwords'); nltk.download('punkt')  # may be needed on a first run
# NOTE: `df` is assumed to have been loaded earlier and to contain a
# "RepairCodeString" column with the raw repair-code text.
es_stopwords = [str(x).upper() for x in stopwords.words("spanish")]
def remove_accents(input_str):
nfkd_form = unicodedata.normalize('NFKD', input_str)
return u"".join([c for c in nfkd_form if not unicodedata.combining(c)])
es_stopwords_na = [remove_accents(x) for x in es_stopwords]
es_stopwords_na.extend(["NAN", "RAMAL", "RAMALES"])
def clean_text(text):
# Remove non-word characters using pattern = r"[^\w]" as seen in class
pattern = r"[^\w]"
ret = re.sub(pattern, " ", text)
ret = remove_accents(ret)
for bad in es_stopwords_na:
to_replace = " " + bad + " " if bad != "NAN" else bad
ret = ret.replace(to_replace, " ")
return ret
# Create clean column
df["RepairCodeStringClean"] = df["RepairCodeString"].apply(clean_text)
all_reviews_text = ' '.join(df["RepairCodeString"])
all_reviews_text = clean_text(all_reviews_text)
print(all_reviews_text)
# Get tokens
tokenized_words = nltk.word_tokenize(all_reviews_text)
# remove length smaller than 2
tokenized_words = [each.strip() for each in tokenized_words if len(each.lower()) > 2]
word_freq = Counter(tokenized_words)
ten_pct =round(len(word_freq)*0.1)
## Top 10%
word_freq.most_common(ten_pct)
## Similarly, bottom 10%
word_freq.most_common()[-ten_pct:-1]
df["RepairCodeStringClean"].apply(lambda x: np.nan if str(x).strip() == "" else x).dropna().head()
## First 5 repair codes n-grams
# first_5_revs = AllRCs[0:5]
# word_tokens = nltk.word_tokenize(''.join(first_5_revs))
# list(ngrams(word_tokens, 3)) #ngrams(word_tokens,n) gives the n-grams.
###Output
_____no_output_____
###Markdown
N-Grams RepairCode
###Code
def top_k_ngrams(word_tokens,n,k):
## Getting them as n-grams
n_gram_list = list(ngrams(word_tokens, n))
### Getting each n-gram as a separate string
n_gram_strings = [' '.join(each) for each in n_gram_list]
n_gram_counter = Counter(n_gram_strings)
most_common_k = n_gram_counter.most_common(k)
print(most_common_k)
top_k_ngrams(tokenized_words, 1, 10)
top_k_ngrams(tokenized_words, 2, 10)
top_k_ngrams(tokenized_words, 3, 10)
top_k_ngrams(tokenized_words, 4, 10)
# nltk.pos_tag(tokenized_words)
# import spaghetti as sgt
# sent1 = 'Mi colega me ayuda a programar cosas .'.split()
# sent2 = 'Está embarazada .'.split()
# test_sents = [sent1, sent2]
# # Default Spaghetti tagger.
# print (sgt.pos_tag(test_sent))
# # Tag multiple sentences.
# print (sgt.pos_tag_sents(test_sents))
# spa_tagger = sgt.CESSTagger()
# # POS tagger trained on unigrams of CESS corpus.
# spa_unigram_tagger = spa_tagger.uni
# print (spa_unigram_tagger.tag(sent1))
# # POS tagger traned on bigrams of CESS corpus.
# spa_bigram_tagger = spa_tagger.bi
# print (spa_bigram_tagger.tag(sent2))
# print (spa_bigram_tagger.tag_sents(test_sents))
# # Now lets PoD tag everything
# etiquetador=StanfordPOSTagger(tagger,jar)
# etiquetas=etiquetador.tag(tokenized_words)
# etiquetas
###Output
_____no_output_____ |