path | concatenated_notebook
---|---|
chap3/bayes.ipynb | ###Markdown
Bayes' decision rule Let's classify the MNIST digits 0 and 1 using Bayes' decision rule. First, download the MNIST data. Most machine learning libraries make the well-known datasets easy to download; here we use scikit-learn. For other libraries, see [here](https://kakedashi-engineer.appspot.com/2020/02/10/mnist/).
###Code
from sklearn.datasets import fetch_openml
X, y = fetch_openml("mnist_784", return_X_y=True)
###Output
_____no_output_____
###Markdown
There are 70,000 samples.
- X: the pixels of each 28x28 image, flattened into one row.
- y: the label of each image (as str)
###Code
print(X.shape, y.shape)
###Output
(70000, 784) (70000,)
###Markdown
This time we only use the digits 0 and 1, so everything else can be discarded. We use NumPy.
###Code
import numpy as np
IMG = np.empty((0, 28, 28), float)
LABEL = []
for img, label in zip(X, y):
if label == "0" or label == "1":
# note that reshape does not modify the array in place
img = img.reshape((1, 28, 28))
IMG = np.append(IMG, img, axis=0)
LABEL.append(label)
###Output
_____no_output_____
###Markdown
`IMG` now holds the image data and `LABEL` the corresponding labels. Let's visualize them with matplotlib.
###Code
from matplotlib import pyplot as plt
print("label:", LABEL[0])
print("image:")
plt.imshow(IMG[0])
cnt = 0
for label in LABEL:
if label == '0':
cnt += 1
print("label 0: ", cnt)
print("label 1: ", len(LABEL) - cnt)
###Output
label 0: 6903
label 1: 7877
###Markdown
This time we 1. trim 5 px from each edge of the 28x28 images to get 18x18 images, 2. reduce them to 3x3 images, 3. convert them to a binary representation, and 4. flatten the 3x3 images, and then apply Bayes' decision rule. Organizing the data as in Table 3.1 on p. 23 of 「初めてのパターン認識」 (First Pattern Recognition) gives a table like the one below (the numbers are made up).

||samples|feature 0 == 0|feature 1 == 0|...|feature 8 == 0|
|---|---|---|---|---|---|
|label 0|6903|0|5|...|8|
|label 1|7877|0|65|...|7|

1. First, trim 5 px from each edge
###Code
IMG_18x18 = IMG[:, 5:-5, 5:-5]
plt.imshow(IMG_18x18[0])
###Output
_____no_output_____
###Markdown
Next, reduce the images to 3x3. Here we form 6x6 blocks and take the mean of each block.
###Code
IMG_3x3 = np.empty((IMG_18x18.shape[0], 3, 3))
for i in range(3):
for j in range(3):
IMG_3x3[:, i, j] = np.mean(IMG_18x18[:, 6*i: 6*(i+1), 6*j: 6*(j+1)], axis=(1,2))
plt.imshow(IMG_3x3[0])
###Output
_____no_output_____
###Markdown
Next, binarize the images.
###Code
IMG_3x3_BIN = np.empty(IMG_3x3.shape)
threshold = 32.0
IMG_3x3_BIN[IMG_3x3 > threshold] = 1.0
IMG_3x3_BIN[IMG_3x3 <= threshold] = 0.0
plt.imshow(IMG_3x3_BIN[0])
###Output
_____no_output_____
###Markdown
Flatten the images.
###Code
IMG_FLATTENED = IMG_3x3_BIN.reshape(-1, 9)
print(IMG_FLATTENED.shape)
print(IMG_FLATTENED[0])
###Output
(14780, 9)
[0. 1. 1. 1. 0. 1. 1. 1. 1.]
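###Markdown
A quick sanity check on the sizes: trimming gives $28 - 2 \cdot 5 = 18$, the $6 \times 6$ blocks give $18 / 6 = 3$, and a binary $3 \times 3$ image can take $2^9 = 512$ different patterns, which is why the probability tables built below have $2^9$ entries per label.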
###Markdown
Before applying Bayes' decision rule, split the data into training and test sets.
###Code
from sklearn.model_selection import train_test_split
# 8 : 2
LABEL = np.array(LABEL)
X_train, X_test, y_train, y_test = train_test_split(
IMG_FLATTENED, LABEL, test_size=0.2)
###Output
_____no_output_____
###Markdown
Summarize the counts in a table. We count as shown below and record the results.

||samples|feature 0 == 0|feature 1 == 0|...|feature 8 == 0|
|---|---|---|---|---|---|
|label 0|6903|0|5|...|8|
|label 1|7877|0|65|...|7|
###Code
table0 = np.sum(X_train[y_train=="0"], axis=0)
table1 = np.sum(X_train[y_train=="1"], axis=0)
print("0: ", table0)
print("1: ", table1)
width = 0.3
plt.bar([i + width for i in range(9)],
table0,
color='r',
label="0",
align="center",
width=width)
plt.bar([i for i in range(9)],
table1,
color='b',
label="1",
align="center",
width=width)
plt.legend(loc=2)
plt.xticks([i + width/2 for i in range(9)], [str(i) for i in range(9)])
###Output
0: [2651. 5482. 5428. 5429. 2076. 5327. 5451. 5263. 5124.]
1: [ 69. 5362. 2699. 44. 6308. 274. 1626. 6238. 234.]
###Markdown
Judging from the bar chart, the two classes look separable by probability. From Bayes' rule $ P(label|feature_0,feature_1,...,feature_8) = {{P(feature_0,feature_1,...,feature_8|label)P(label)} \over P(feature_0,feature_1,...,feature_8)}$, computing the left-hand side lets us classify. First, find $ P(label) $.
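Since only the two labels occur, the two posteriors sum to one, so the decision rule applied later in this notebook reduces to a simple threshold: $$ \text{predict } 0 \iff P(label=0|feature_0,...,feature_8) > 0.5 $$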
###Code
from IPython.display import display, Markdown
# number of samples with label == 0
data_size0 = len(X_train[y_train == "0"])
# number of samples with label == 1
data_size1 = len(X_train[y_train == "1"])
md = f'''
||samples|{'|'.join([str(i) for i in range(9)])}|
|---|---|{'|'.join(['---' for _ in range(9)])}|
|label 0|{data_size0}|{'|'.join([str(d) for d in table0])}|
|label 1|{data_size1}|{'|'.join([str(d) for d in table1])}|
'''
display(Markdown(md))
###Output
_____no_output_____
###Markdown
Since $ P(label=0) = {DataSize_0 \over DataSize_0 + DataSize_1} $ and $ P(label=1) = {DataSize_1 \over DataSize_0 + DataSize_1} $, the two values come out as:
###Code
from IPython.display import Latex
p_label = np.array([
data_size0/(data_size0 + data_size1), # P(label=0)
data_size1/(data_size0 + data_size1), # P(label=1)
])
tex = f'''
$$ P(label=0) = {{DataSize_0 \over DataSize_0 + DataSize_1}} = {"{:.3f}".format(p_label[0])} $$
$$ P(label=1) = {{DataSize_1 \over DataSize_0 + DataSize_1}} = {"{:.3f}".format(p_label[1])} $$
'''
display(Latex(tex))
###Output
_____no_output_____
###Markdown
Next we want $ P(feature_0,feature_1,...,feature_8|label) $. For this the following factorization (assuming the features are conditionally independent given the label) can be used: $ P(feature_0,feature_1,...,feature_8|label)=\prod_{i=0}^8 P(feature_i|label) $. So we tabulate the conditional probability of each of the nine features given the label, i.e. $ P(feature|label) $.
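As implemented in the next cell, each entry is simply a relative frequency computed from the training counts, for example $$ P(feature_i=1|label=0) = \frac{table0_i}{DataSize_0}, \qquad P(feature_i=0|label=0) = \frac{DataSize_0 - table0_i}{DataSize_0} $$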
###Code
prob_table = np.array([
[(data_size0 - d) / data_size0 for d in table0], # P(f_i=0|label=0)
[(data_size1 - d) / data_size1 for d in table1], # P(f_i=0|label=1)
[d / data_size0 for d in table0], # P(f_i=1|label=0)
[d / data_size1 for d in table1], # P(f_i=1|label=1)
])
md = f'''
||{'|'.join([f"$feature_{i}$" for i in range(9)])}|
|---|{'|'.join(['---' for _ in range(9)])}|
|$P(feature_i=0|label=0)$|{'|'.join(["{:.6f}".format(p) for p in prob_table[0]])}|
|$P(feature_i=0|label=1)$|{'|'.join(["{:.6f}".format(p) for p in prob_table[1]])}|
|$P(feature_i=1|label=0)$|{'|'.join(["{:.6f}".format(p) for p in prob_table[2]])}|
|$P(feature_i=1|label=1)$|{'|'.join(["{:.6f}".format(p) for p in prob_table[3]])}|
'''
display(Markdown(md))
###Output
_____no_output_____
###Markdown
Compute $ P(feature_0,feature_1,...,feature_8|label) $ and store the result in a NumPy array. `p_features_cond_label` is a 10-dimensional array: axis 0 indexes the label and the remaining nine axes index the corresponding feature values.
###Code
from itertools import product
from functools import reduce
p_features_cond_label = np.ones((2,) * (1 + 9))
for idx in product((0, 1), repeat=1+9):
label = idx[0]
for i, b in enumerate(idx[1:]):
p_features_cond_label[idx] *= prob_table[2 * b + label, i]
md = """|label|features|probability|
|---|---|---|
"""
for idx in product((0,1), repeat=1+9):
md += f"|{idx[0]}|{idx[1:]}|{p_features_cond_label[idx]}|\n"
display(Markdown(md))
###Output
_____no_output_____
###Markdown
So far we have obtained $ P(feature_0,feature_1,...,feature_8|label) $ and $ P(label) $. What we originally wanted is the left-hand side of $ P(label|feature_0,feature_1,...,feature_8) = {{P(feature_0,feature_1,...,feature_8|label)P(label)} \over P(feature_0,feature_1,...,feature_8)}$. The only piece still missing is $ P(feature_0,feature_1,...,feature_8) $. It follows from
$ P(feature_0,feature_1,...,feature_8)=\sum_{i=0,1} P(feature_0,feature_1,...,feature_8,label=i) $ ... (1)
$ P(feature_0,feature_1,...,feature_8,label)=P(feature_0,feature_1,...,feature_8|label)P(label) $ ... (2)
Substituting (2) into (1) gives
$ P(feature_0,feature_1,...,feature_8)=\sum_{i=0,1} P(feature_0,...,feature_8|label=i)P(label=i) $ ... (3)
which is what the next cell computes.
###Code
p_features = np.empty((2,) * (9))
for idx in product((0, 1), repeat=9):
p_features[idx] = p_features_cond_label[(0, *idx)]*p_label[0] + p_features_cond_label[(1, *idx)]*p_label[1]
md = """|features|probability|
|---|---|
"""
for idx in product((0,1), repeat=9):
md += f"|{idx}|{p_features[idx]}|\n"
display(Markdown(md))
###Output
_____no_output_____
###Markdown
Everything is in place, so substitute and compute $ P(label|feature_0,feature_1,...,feature_8) = {{P(feature_0,feature_1,...,feature_8|label)P(label)} \over P(feature_0,feature_1,...,feature_8)}$.
###Code
# P(label=0|feature0, ..., features8)
p_label0_cond_features = np.empty((2,)*9)
for idx in product((0, 1), repeat=9):
p_label0_cond_features[idx] = p_features_cond_label[(0, *idx)]*p_label[0]/p_features[idx]
md = """|features|probability|result|
|---|---|---|
"""
for idx in product((0,1), repeat=9):
md += f"|{idx}|{p_label0_cond_features[idx]}|{p_label0_cond_features[idx] > 0.5}|\n"
display(Markdown(md))
###Output
_____no_output_____
###Markdown
Now that the probability for every feature pattern is known, let's apply it to the test data.
###Code
test_size = len(X_test)
err = 0
for x, y in zip(X_test, y_test):
x = tuple(x.astype(int))
if p_label0_cond_features[x] > 0.5: # predict 0
if y != "0":
err += 1
else: # predict 1
if y != "1":
err += 1
print(f"accuracy: {test_size - err}/{test_size} = {(test_size - err)/test_size} = {100*(test_size - err)/test_size}%")
###Output
accuracy: 2938/2956 = 0.9939106901217862 = 99.39106901217862%
|
EDA_Assignments/A_08_FeatureEngineeringPart2_en_SerhanOnerAksakal.ipynb | ###Markdown
Assignments for "Feature Engineering - Part 2" In this assignment, you are going to use a dataset related to the US education system. Please download the ([dataset](https://www.kaggle.com/noriuk/us-education-datasets-unification-project/home)) from Kaggle. You are going to use `states_all.csv` within this dataset. To complete this assignment, submit the GitHub link of the Jupyter notebook file containing solutions to the questions below. You can talk to your mentor one-on-one or ask on Slack during office hours. **(1)** Create a variable that contains the weighted average of the grades in the dataset. The number of students in the fourth grade is different from that of the eighth grade, so you will need a weighted average!
###Code
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
import seaborn as sns
import scipy.stats as stats
from scipy.stats.mstats import winsorize
import warnings
warnings.filterwarnings('ignore')
sns.set(style="whitegrid")
states = pd.read_csv('states_all.csv')
states
states.info()
states.describe()
states['GRADES_4_G_non_null'] = states.GRADES_4_G.dropna()
states['GRADES_4_G_non_null'].dropna()[98]
grades_4th = []
grades_8th = []
grades_12th = []
averages = []
for value in range(0,1632):
grades_4th.append(states.GRADES_4_G.dropna()[value])
print(len(grades_4th))
for value in range(0,1632):
grades_8th.append(states.GRADES_8_G.dropna()[value])
print(len(grades_8th))
for value in range(0,1632):
grades_12th.append(states.GRADES_12_G.dropna()[value])
print(len(grades_12th))
for value in range(0,1632):
averages.append( (grades_4th[value] + grades_8th[value] + grades_12th[value] ) / 3 )
print(len(averages))
averages = pd.DataFrame(averages)
averages.columns = ['AVG_OF_3_GRADES']
states['AVG_OF_3_GRADES'] = averages['AVG_OF_3_GRADES']
states['AVG_OF_3_GRADES']
###Output
_____no_output_____
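###Markdown
Note that the solution above takes a plain mean of the three `GRADES_*_G` columns (student counts). A hedged sketch of an explicitly weighted variant, which weights grade-level average scores by the number of students in each grade, could look like the cell below; the `AVG_MATH_*_SCORE` column names are an assumption and may need adapting to the file.
###Code
# hypothetical weighted average (the AVG_MATH_*_SCORE column names are assumptions):
# grade-level math scores weighted by the number of students per grade
w4, w8 = states['GRADES_4_G'], states['GRADES_8_G']
s4, s8 = states['AVG_MATH_4_SCORE'], states['AVG_MATH_8_SCORE']
states['WEIGHTED_AVG_SCORE'] = (w4 * s4 + w8 * s8) / (w4 + w8)
###Output
_____no_output_____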
###Markdown
**(2)** What is the correlation between the variable you just created and the types of expenditures? Which expenditure item has more correlation than others?
###Code
correlation = states.corr()
plt.figure(figsize=(12,12))
sns.heatmap(correlation, annot=True, fmt='.2f', annot_kws={"size": 8}, linewidths=.5, vmin=0, vmax=1, cmap='viridis')
plt.title("Correlation Matrix")
plt.show()
###Output
_____no_output_____
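###Markdown
To read the answer off numerically rather than from the heatmap, one can also inspect the relevant column of the correlation matrix directly (a small sketch using the objects defined above):
###Code
expenditure_cols = ['TOTAL_EXPENDITURE', 'INSTRUCTION_EXPENDITURE',
                    'SUPPORT_SERVICES_EXPENDITURE', 'OTHER_EXPENDITURE']
# correlation of the new average-grades variable with each expenditure item
correlation.loc[expenditure_cols, 'AVG_OF_3_GRADES'].sort_values(ascending=False)
###Output
_____no_output_____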
###Markdown
So OTHER_EXPENDITURE has the highest correlation with our new variable among the expenditure items. **(3)** Now apply Principal Component Analysis (PCA) to the four expenditure items! How much of the total variance can be explained by the first component?
###Code
states_df = pd.DataFrame()
states_df['TOTAL_EXPENDITURE'] = states['TOTAL_EXPENDITURE']
states_df['INSTRUCTION_EXPENDITURE'] = states['INSTRUCTION_EXPENDITURE']
states_df['SUPPORT_SERVICES_EXPENDITURE'] = states['SUPPORT_SERVICES_EXPENDITURE']
states_df['OTHER_EXPENDITURE'] = states['OTHER_EXPENDITURE']
states_df.info()
states_df=states_df.select_dtypes(exclude='object')
states_df.dropna(inplace=True)
states_df.info()
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
X = states_df.values
X = StandardScaler().fit_transform(states_df)
pca = PCA(n_components=4)
principalComponents=pca.fit_transform(X)
exp_var= pca.explained_variance_ratio_
cumsum_var=np.cumsum(np.round(pca.explained_variance_ratio_, decimals=4)*100)
print(exp_var)
print(cumsum_var)
plt.plot(cumsum_var)
plt.xlabel('Number of Components')
plt.ylabel('% of Variance Explained')
# Scree Plot
percent_variance = np.round(pca.explained_variance_ratio_* 100, decimals =2)
columns = ['PC1', 'PC2','PC3','PC4']
plt.bar(x= range(1,5), height=percent_variance, tick_label=columns)
plt.ylabel('Percentate of Variance Explained')
plt.xlabel('Principal Component')
plt.title('PCA Scree Plot')
plt.show()
###Output
_____no_output_____
###Markdown
Hence, the first principal component, which mainly reflects the total expenditure, explains 97.14% of the variance.
###Code
X = pd.DataFrame(X, columns=states_df.columns)
def myplot(score,coeff,labels=None):
xs = score[:,0]
ys = score[:,1]
n = coeff.shape[0]
scalex = 1.0/(xs.max() - xs.min())
scaley = 1.0/(ys.max() - ys.min())
fig = plt.figure(figsize=(12,6), dpi=100)
plt.scatter(xs * scalex,ys * scaley,s=5)
for i in range(n):
plt.arrow(0, 0, coeff[i,0], coeff[i,1],color = 'r',alpha = 0.5)
if labels is None:
plt.text(coeff[i,0]* 1.15, coeff[i,1] * 1.15, "Var"+str(i+1), color = 'green', ha = 'center', va = 'center')
else:
plt.text(coeff[i,0]* 1.15, coeff[i,1] * 1.15, labels[i], color = 'g', ha = 'center', va = 'center')
plt.xlabel("PC{}".format(1))
plt.ylabel("PC{}".format(2))
plt.grid()
myplot(np.array(X)[:,0:2],np.transpose(pca.components_[0:2, :]),list(X.columns))
plt.show()
###Output
_____no_output_____
###Markdown
**(4)** What is the correlation between the GPA you created and the first principal component?
###Code
states_df['GPA'] = averages['AVG_OF_3_GRADES']
states_df.corr()
###Output
_____no_output_____
###Markdown
The correlation between the GPA and TOTAL_EXPENDITURE is 0.92, which is quite high. **(5)** When you need to choose the most appropriate variables for your model, would you prefer the first basic variables instead of the expenditure items? Why?
###Code
# Since the first principal component explains 97.14% of the variance,
# it is much easier to work with it than with the original basic variables.
###Output
_____no_output_____ |
quetzal_germany-master/notebooks/prep20_cluster.ipynb | ###Markdown
Preparation of the transport network. Saves the aggregated bus and short-distance rail network. Requires the PT networks.
###Code
input_path = '../input_static/'
output_path = '../output/'
model_path = '../model/'
# Loading StepModel with PT networks...
sm = stepmodel.read_json(input_path + 'de_pt_network')
bus = stepmodel.read_json(input_path + 'de_pt_network_bus')
# Make sure to use the right zones
z = stepmodel.read_json(model_path + 'de_zones')
z.zones['FID'] = z.zones['NUTS_ID']
sm.zones = z.zones
bus.zones = z.zones
###Output
_____no_output_____
###Markdown
Test links and nodes for network integrity. Necessary for any further steps.
###Code
# Check nodeset integrity for later steps to work
try:
sm.integrity_test_nodeset_consistency()
except AssertionError:
print('Found {} orphan nodes'.format(len(sm.orphan_nodes)))
sm.nodes.drop(sm.orphan_nodes, inplace=True)
# Test integrity again
sm.integrity_test_nodeset_consistency()
# Test sequences
# Use an own function because quetzal's takes ages
def test_sequences(trip):
assert len(trip)==trip['link_sequence'].max(), \
'broken sequence in trip {}'.format(trip['trip_id'].unique()[0])
# Fix sequences
# Use an own function because quetzal's takes ages
def fix_sequences(trip):
trip = trip.sort_values('link_sequence')
# Check link succession
ind = list(trip.index)
for i in range(len(trip.index) - 1):
try:
assert trip.loc[ind[i], 'b'] == trip.loc[ind[i+1], 'a'], \
'broken trip {}: stop {} has no successor link'.format(
trip['trip_id'].unique()[0], trip.loc[ind[i], 'b'])
except AssertionError:
trip.loc[ind[i+1]:ind[-1], 'trip_id'] = \
trip.loc[ind[i+1]:ind[-1], 'trip_id'] + '_' + str(i)
# Repair sequences
if len(trip) != trip['link_sequence'].max():
trip['link_sequence'] = trip.groupby('trip_id')['link_sequence'].apply(
lambda t: [j for j in range(1, len(t.index)+1)]).sum()
return trip
# Test and save broken sequences
def test_sequences_save(trip):
if len(trip)!=trip['link_sequence'].max():
return list(trip.index)
tqdm.pandas()
try:
sm.links.groupby('trip_id').progress_apply(test_sequences)
except AssertionError:
links = sm.links.groupby('trip_id').progress_apply(fix_sequences).reset_index(level=0, drop=True)
links.groupby('trip_id').progress_apply(test_sequences)
sm.links = links
broken_seqs = bus.links.groupby('trip_id').progress_apply(test_sequences_save)
broken_seqs.loc[broken_seqs.notna()]
links = bus.links.loc[broken_seqs.loc[broken_seqs.notna()].sum()
].groupby('trip_id').progress_apply(fix_sequences)
links.sample()
links.shape
links.reset_index(level=0, drop=True, inplace=True)
links.groupby('trip_id').progress_apply(test_sequences)
bus.links = bus.links.drop(broken_seqs.loc[broken_seqs.notna()].sum()).append(links)
bus.links.sample()
bus.nodes.sample()
sm.links.loc[sm.links.duplicated(['a', 'b', 'trip_id'], keep=False)]
###Output
_____no_output_____
###Markdown
Map nodes to zones. Needed for clustering.
###Code
# Nodes must be a GeoDataFrame
sm.nodes = gpd.GeoDataFrame(sm.nodes, crs=sm.epsg)
shapely.speedups.enable()
sm.nodes['FID'] = np.nan
for _, zone in tqdm(sm.zones.iterrows(), total=sm.zones.shape[0]):
sm.nodes.loc[sm.nodes['geometry'].within(zone['geometry']), 'FID'] = zone['FID']
bus.nodes = gpd.GeoDataFrame(bus.nodes, crs=sm.epsg)
bus.nodes['FID'] = np.nan
for _, zone in tqdm(bus.zones.iterrows(), total=bus.zones.shape[0]):
bus.nodes.loc[bus.nodes['geometry'].within(zone['geometry']), 'FID'] = zone['FID']
# Drop nodes outside of zones
sm.nodes = sm.nodes[sm.nodes['FID'].notna()]
bus.nodes = bus.nodes[bus.nodes['FID'].notna()]
print(sm.nodes.shape)
print(bus.nodes.shape)
###Output
(15376, 4)
(413611, 4)
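###Markdown
Side note: the explicit point-in-polygon loops above could also be expressed as a spatial join, which is usually faster for large node sets. A rough sketch, not taken from the original repository; it assumes a GeoPandas version with the `predicate` keyword (older versions call it `op`) and non-overlapping zones:
###Code
# spatial join as an alternative to the loops above (sketch)
joined = gpd.sjoin(sm.nodes.drop(columns='FID'), sm.zones[['FID', 'geometry']],
                   how='left', predicate='within')
sm.nodes['FID'] = joined['FID']
###Output
_____no_output_____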
###Markdown
Divide links and nodes. Long-distance nodes don't need to be clustered due to their relatively small number, while bus and short-distance rail services have a high geographic coverage.
###Code
# Divide nodes
print(sm.nodes.shape)
disagg_nodes = sm.nodes.loc[sm.nodes['route_type']=='rail_short_distance'].append(bus.nodes)
sm.nodes = sm.nodes.loc[sm.nodes['route_type']!='rail_short_distance']
print(disagg_nodes.shape)
# Divide links
print(sm.links.shape)
disagg_links = sm.links.loc[sm.links['route_type']=='rail_short_distance'].append(bus.links)
sm.links = sm.links.loc[sm.links['route_type']!='rail_short_distance']
print(disagg_links.shape)
# Number of trips
len(disagg_links['trip_id'].unique())
###Output
_____no_output_____
###Markdown
Cluster stops. Bus and short-distance rail services in the GTFS feeds contain a lot of intermediate stops, which can be clustered.
###Code
# Cluster nodes by zone and route_type
nodes = []
parenthood = []
clusters = []
links = []
for z in tqdm(disagg_nodes['FID'].unique(), total=len(sm.zones)):
for t in disagg_nodes.loc[disagg_nodes['FID']==z, 'route_type'].unique():
n = disagg_nodes.loc[(disagg_nodes['route_type']==t) & (disagg_nodes['FID']==z)]
n_clusters = min(5, max(1, len(sm.nodes.loc[ # Max 5, min 1 cluster
(sm.nodes['FID']==z) & (sm.nodes['route_type'].isin(['rail_long_distance', 'air']))])))
while n_clusters > len(n):
n_clusters -= 1
if n_clusters < 1:
continue
l, centroids, node_clusters, node_parenthood = connectivity.node_clustering(
disagg_links.loc[(disagg_links['a'].isin(n.index)) &
(disagg_links['b'].isin(n.index))], n, n_clusters)
nodes.append(centroids)
parenthood.append(node_parenthood)
clusters.append(node_clusters)
links.append(l)
# Coverage
gpd.GeoDataFrame(pd.concat(clusters), crs=sm.epsg).plot()
# Number of zones with short-distance PT connection
len(pd.concat(parenthood)['FID'].unique())
# Merge node clusters by group (zone and route_type)
agg_nodes = []
for i in range(len(nodes)):
agg = gpd.GeoDataFrame(nodes[i], crs=sm.epsg)
agg.index = agg.index.astype(int)
agg = agg.merge(parenthood[i][['FID', 'route_type', 'cluster']].drop_duplicates(),
how='left', left_index=True, right_on='cluster')
agg.index = agg['FID'] + '_' + agg['route_type'] + '_' + agg['cluster'].astype(str)
agg_nodes.append(agg)
agg_nodes = gpd.GeoDataFrame(pd.concat(agg_nodes), crs=sm.epsg)
agg_nodes.sample(2)
print(agg_nodes.loc[agg_nodes['route_type']=='bus'].shape)
print(agg_nodes.loc[agg_nodes['route_type']!='bus'].shape)
###Output
(865, 4)
(849, 4)
###Markdown
Aggregate links within the clustered trips
###Code
# Example: rail short-distance around Aachen
gpd.GeoDataFrame(links[0]).plot(alpha=.1,ax=gpd.GeoDataFrame(nodes[0]).plot(
color='r', ax=gpd.GeoDataFrame(sm.zones.loc['DEA2D']).T.plot(color='g', alpha=.2)))
def aggregate_links(links, parents):
links = links.sort_values(['trip_id', 'link_sequence'])
l = links.loc[links['a']!=links['b']].copy()
if len(l) == 0:
return # Cluster is fully irrelevant
# Correct geometry
c_dict = parents.groupby('cluster').head(1).set_index(
'cluster')['geometry_centroid'].to_dict()
l['geometry'] = [geometry.LineString([c_dict[int(a)], c_dict[int(b)]])
for a, b in zip(l['a'], l['b'])]
# Set a distance
l['length'] = l['geometry'].apply(lambda l: int(geodesic(l.coords[0], l.coords[-1]).m))
# Aggregate time
def within_time(trip):
ident = trip['trip_id'].unique()[0]
times = []
prev_seq = 0
for seq in trip['link_sequence']:
times.append(int(links.loc[
(links['link_sequence']>prev_seq) &
(links['link_sequence']<=seq) &
(links['trip_id']==ident), 'time'].sum()))
prev_seq = seq
l.loc[l['trip_id']==ident, 'time'] = times
l.groupby('trip_id').apply(within_time)
# Rename a and b as in agg_nodes
prefix = parents['FID'].unique()[0] + '_' + parents['route_type'].unique()[0] + '_'
l['a'] = prefix + l['a']
l['b'] = prefix + l['b']
return l
agg_links = []
for i in tqdm(range(len(links)), total=len(links)):
l = aggregate_links(links[i], parenthood[i])
if l is not None:
agg_links.append(l)
# Example: rail short-distance around Aachen
gpd.GeoDataFrame(agg_links[0]).plot(
alpha=.1,ax=gpd.GeoDataFrame(nodes[0]).plot(
color='r', ax=gpd.GeoDataFrame(sm.zones.loc['DEA2D']).T.plot(color='g', alpha=.2)))
agg_links = pd.concat(agg_links)
agg_links.drop(['disaggregated_a', 'disaggregated_b'], axis=1, inplace=True)
agg_links.sample(2)
len(agg_links)
###Output
_____no_output_____
###Markdown
Prepare the inter-zonal links
###Code
# Filter inter-zonal links
links = pd.concat(links)
parenthood = pd.concat(parenthood)
disagg_links = disagg_links.drop(links.index)
len(disagg_links)
disagg_links = disagg_links.loc[(disagg_links['a'].isin(parenthood.index)) &
(disagg_links['b'].isin(parenthood.index))]
len(disagg_links)
# Generate dictionary with old and new node indexes
parenthood['node_name'] = parenthood['FID'] + '_' + \
parenthood['route_type'] + '_' + parenthood['cluster'].astype(str)
node_dict = parenthood['node_name'].to_dict()
# Replace old node indexes with new ones
disagg_links['a'] = disagg_links['a'].map(node_dict)
disagg_links['b'] = disagg_links['b'].map(node_dict)
# Any issues?
disagg_links.loc[disagg_links['a']==disagg_links['b']]
# Correct geometry
geo_dict = parenthood.groupby('node_name').head(1).set_index(
'node_name')['geometry_centroid'].to_dict()
disagg_links['geometry'] = [geometry.LineString([geo_dict[a], geo_dict[b]])
for a,b in zip(disagg_links['a'], disagg_links['b'])]
# Add length
disagg_links['length'] = disagg_links['geometry'].apply(
lambda l: int(geodesic(l.coords[0], l.coords[-1]).m))
disagg_links.sample()
# Example: rail short-distance around Aachen
gpd.GeoDataFrame(disagg_links.loc[disagg_links['b'].isin(agg_nodes.loc[agg_nodes['FID']=='DEA2D'].index)]).plot(
ax=gpd.GeoDataFrame(agg_links.loc[agg_links['a'].isin(agg_nodes.loc[agg_nodes['FID']=='DEA2D'].index)]).plot(
alpha=.1,ax=gpd.GeoDataFrame(nodes[0]).plot(
color='r', ax=gpd.GeoDataFrame(sm.zones.loc['DEA2D']).T.plot(color='g', alpha=.2))))
# Concat links
agg_links = pd.concat([agg_links, disagg_links])
len(agg_links)
# Re-index sequence numbers
agg_links = agg_links.sort_values(['trip_id', 'link_sequence'])
agg_links['link_sequence'] = sum([list(range(1, count+1)) for count in
agg_links.groupby('trip_id').count()['a']], [])
###Output
_____no_output_____
###Markdown
Merge aggregated links and nodes with the model
###Code
# Generate length for long-distance nodes
sm.links['length'] = sm.links['geometry'].apply(
lambda l: int(geodesic(l.coords[0], l.coords[-1]).m))
# Re-add links to model
sm.links = sm.links.append(agg_links)
sm.links.shape
# Re-add nodes to the model
sm.nodes = sm.nodes.append(agg_nodes)
sm.nodes.shape
# Drop cluster column
sm.nodes.drop('cluster', axis=1, inplace=True)
try:
sm.integrity_test_nodeset_consistency()
except AssertionError:
print('Number of orphan nodes: {}'.format(
len(sm.orphan_nodes)))
print('Number of missing nodes: {}'.format(
len(sm.missing_nodes)))
sm.links.groupby('trip_id').apply(test_sequences)
len(sm.links.loc[sm.links.isna().any(axis=1)])
# Drop frequencies of unused nodes
sm.frequencies['stop_id'] = sm.frequencies['stop_id'].map(node_dict)
bus.frequencies['stop_id'] = bus.frequencies['stop_id'].map(node_dict)
sm.frequencies = sm.frequencies.loc[
sm.frequencies['stop_id'].isin(sm.nodes.index)
].append(bus.frequencies.loc[
bus.frequencies['stop_id'].isin(sm.nodes.index)
]).reset_index()
sm.frequencies = sm.frequencies.groupby(['hour', 'stop_id']).agg(
{'trip_id': 'sum'}).reset_index()
sm.frequencies[['trip_id', 'hour']] = sm.frequencies[['trip_id', 'hour']].astype(int)
sm.frequencies.shape
# Drop routes of unused links
sm.pt_routes = sm.pt_routes.loc[sm.links['route_id'].unique()]
sm.pt_routes.shape
# Drop routes of unused bus links
bus.pt_routes = bus.pt_routes.loc[sm.links['route_id'].unique()]
bus.pt_routes.shape
###Output
_____no_output_____
###Markdown
Save model
###Code
# Add bus service to ancilliary
sm.agencies = sm.agencies.append(bus.agencies).reset_index(drop=True)
sm.pt_routes = sm.pt_routes.append(bus.pt_routes).reset_index(drop=True)
# Now, we have bus services in the same tables
sm.pt_route_types.append('bus')
# Reduce file size by shortening node index names
sm.nodes['index'] = [i.replace('rail_short_distance', 'r_s') for i in sm.nodes.index]
sm.nodes.set_index('index', drop=True, inplace=True)
sm.links['a'] = sm.links['a'].apply(lambda n: n.replace('rail_short_distance', 'r_s'))
sm.links['b'] = sm.links['b'].apply(lambda n: n.replace('rail_short_distance', 'r_s'))
sm.nodes['index'] = [i.replace('rail_long_node', 'r_l_n') for i in sm.nodes.index]
sm.nodes.set_index('index', drop=True, inplace=True)
sm.links['a'] = sm.links['a'].apply(lambda n: n.replace('rail_long_node', 'r_l_n'))
sm.links['b'] = sm.links['b'].apply(lambda n: n.replace('rail_long_node', 'r_l_n'))
# Shorten link index names
sm.links['index'] = [i.replace('rail_long', 'r_l').replace('rail_short', 'r_s')
for i in sm.links.index]
sm.links.set_index('index', drop=True, inplace=True)
# Shorten route type names
type_dict = {'rail_short_distance': 'rail_short', 'rail_long_distance': 'rail_long'}
sm.links['route_type'] = sm.links['route_type'].replace(type_dict)
sm.nodes['route_type'] = sm.nodes['route_type'].replace(type_dict)
sm.pt_route_types = [t.replace('_distance', '') for t in sm.pt_route_types]
sm.links.loc[sm.links['route_type']=='rail_short'].sample()
# Split links in graph and auxiliary information
# for file sizes being compatible with github's size limit
cols = ['link_sequence', 'route_id', 'time', 'trip_id']
auxiliary = sm.links[cols]
sm.links.drop(cols, axis=1, inplace=True)
sm.links.shape
# Saving model...
sm.to_json(model_path + 'de_pt_network_agg',
only_attributes=['zones', 'links', 'nodes', 'pt_route_types'],
encoding='utf-8')
sm.to_json(model_path + 'de_pt_network_ancillary',
only_attributes=['agencies', 'pt_routes', 'frequencies'],
encoding='utf-8')
# Save auxiliary information seperately
auxiliary['index'] = auxiliary.index
auxiliary.reset_index(drop=True, inplace=True)
auxiliary.to_json(model_path + 'de_pt_network_agg/links_quetzaldata.json')
###Output
_____no_output_____ |
DoitPandas_Resource/notebook/02_done.ipynb | ###Markdown
Try it yourself! Loading the Gapminder dataset (p. 28)
###Code
import pandas
df = pandas.read_csv('../data/gapminder.tsv', sep='\t')
import pandas as pd
df = pd.read_csv('../data/gapminder.tsv', sep='\t')
###Output
_____no_output_____
###Markdown
Try it yourself! Exploring the loaded dataset (p. 29)
###Code
print(df.head())
print(type(df))
print(df.shape)
# print(df.shape[0])
# print(df.shape[1])
print(df.columns)
print(df.dtypes)
print(df.info())
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1704 entries, 0 to 1703
Data columns (total 6 columns):
country 1704 non-null object
continent 1704 non-null object
year 1704 non-null int64
lifeExp 1704 non-null float64
pop 1704 non-null int64
gdpPercap 1704 non-null float64
dtypes: float64(2), int64(2), object(2)
memory usage: 80.0+ KB
None
###Markdown
Try it yourself! Extracting data column by column (p. 32)
###Code
country_df = df['country']
print(type(country_df))
print(country_df.head())
print(country_df.tail())
subset = df[['country', 'continent', 'year']]
print(type(subset))
print(subset.head())
print(subset.tail())
###Output
country continent year
1699 Zimbabwe Africa 1987
1700 Zimbabwe Africa 1992
1701 Zimbabwe Africa 1997
1702 Zimbabwe Africa 2002
1703 Zimbabwe Africa 2007
###Markdown
Try it yourself! Extracting rows with the loc attribute (p. 35)
###Code
print(df.loc[0])
print(df.loc[99])
print(df.loc[-1])
number_of_rows = df.shape[0]
last_row_index = number_of_rows - 1
print(df.loc[last_row_index])
print(df.tail(n=1))
print(df.tail(n=2))
print(df.loc[[0, 99, 999]])
###Output
country continent year lifeExp pop gdpPercap
0 Afghanistan Asia 1952 28.801 8425333 779.445314
99 Bangladesh Asia 1967 43.453 62821884 721.186086
999 Mongolia Asia 1967 51.253 1149500 1226.041130
###Markdown
Good to know! The tail method and the loc attribute return different data types! (p. 37)
###Code
subset_loc = df.loc[0]
subset_tail = df.tail(n=1)
print(type(subset_loc))
print(type(subset_tail))
###Output
<class 'pandas.core.series.Series'>
<class 'pandas.core.frame.DataFrame'>
###Markdown
Try it yourself! Extracting row data with the iloc attribute (p. 37)
###Code
print(df.iloc[1])
print(df.iloc[99])
print(df.iloc[-1])
print(df.iloc[1710])
print(df.iloc[[0, 99, 999]])
###Output
country continent year lifeExp pop gdpPercap
0 Afghanistan Asia 1952 28.801 8425333 779.445314
99 Bangladesh Asia 1967 43.453 62821884 721.186086
999 Mongolia Asia 1967 51.253 1149500 1226.041130
###Markdown
Try it yourself! Extracting data: slicing syntax and the range method (p. 39) 1. Extracting data with slicing syntax
###Code
subset = df.loc[:, ['year', 'pop']]
print(subset.head())
subset = df.iloc[:, [2, 4, -1]]
print(subset.head())
###Output
year pop gdpPercap
0 1952 8425333 779.445314
1 1957 9240934 820.853030
2 1962 10267083 853.100710
3 1967 11537966 836.197138
4 1972 13079460 739.981106
###Markdown
2. Extracting the data you want with the range method
###Code
small_range = list(range(5))
print(small_range)
print(type(small_range))
subset = df.iloc[:, small_range]
print(subset.head())
small_range = list(range(3, 6))
print(small_range)
subset = df.iloc[:, small_range]
print(subset.head())
small_range = list(range(0, 6, 2))
subset = df.iloc[:, small_range]
print(subset.head())
###Output
country year pop
0 Afghanistan 1952 8425333
1 Afghanistan 1957 9240934
2 Afghanistan 1962 10267083
3 Afghanistan 1967 11537966
4 Afghanistan 1972 13079460
###Markdown
4. Comparing slicing and the range method
###Code
subset = df.iloc[:, :3]
print(subset.head())
subset = df.iloc[:, 0:6:2]
print(subset.head())
###Output
country year pop
0 Afghanistan 1952 8425333
1 Afghanistan 1957 9240934
2 Afghanistan 1962 10267083
3 Afghanistan 1967 11537966
4 Afghanistan 1972 13079460
###Markdown
5. Using the loc and iloc attributes freely
###Code
print(df.iloc[[0, 99, 999], [0, 3, 5]])
print(df.loc[[0, 99, 999], ['country', 'lifeExp', 'gdpPercap']])
print(df.loc[10:13, ['country', 'lifeExp', 'gdpPercap']])
###Output
country lifeExp gdpPercap
10 Afghanistan 42.129 726.734055
11 Afghanistan 43.828 974.580338
12 Albania 55.230 1601.056136
13 Albania 59.280 1942.284244
###Markdown
Try it yourself! Computing the mean of grouped data (p. 44) 1. Grouping the lifeExp column by year and computing the mean
###Code
# print(df.head(n=10))
print(df.groupby('year')['lifeExp'].mean())
grouped_year_df = df.groupby('year')
print(type(grouped_year_df))
print(grouped_year_df)
grouped_year_df_lifeExp = grouped_year_df['lifeExp']
print(type(grouped_year_df_lifeExp))
mean_lifeExp_by_year = grouped_year_df_lifeExp.mean()
print(mean_lifeExp_by_year)
###Output
year
1952 49.057620
1957 51.507401
1962 53.609249
1967 55.678290
1972 57.647386
1977 59.570157
1982 61.533197
1987 63.212613
1992 64.160338
1997 65.014676
2002 65.694923
2007 67.007423
Name: lifeExp, dtype: float64
###Markdown
6. Computing the means of the lifeExp and gdpPercap columns grouped by year and continent in one go
###Code
multi_group_var = df.groupby(['year', 'continent'])[['lifeExp', 'gdpPercap']].mean()
print(multi_group_var)
print(type(multi_group_var))
###Output
<class 'pandas.core.frame.DataFrame'>
###Markdown
7. Counting grouped data
###Code
print(df.groupby('continent')['country'].nunique())
###Output
continent
Africa 52
Americas 25
Asia 33
Europe 30
Oceania 2
Name: country, dtype: int64
###Markdown
Try it yourself! Drawing a graph (p. 48)
###Code
%matplotlib inline
import matplotlib.pyplot as plt
global_yearly_life_expectancy = df.groupby('year')['lifeExp'].mean()
print(global_yearly_life_expectancy)
global_yearly_life_expectancy.plot()
###Output
_____no_output_____ |
notebooks/Module2-Python-Data-Structures/PY0101EN-2.3_notebook_quizz_dictioary.ipynb | ###Markdown
You will need the dictionary D:
###Code
D={'a':0,'b':1,'c':2}
###Output
_____no_output_____
###Markdown
Find Value of a Key Find the value for the key 'a'
###Code
D["a"]
###Output
_____no_output_____
###Markdown
Keys Find the keys of the dictionary D
###Code
D.keys()
###Output
_____no_output_____ |
Notebooks/008_Custom_Layer.ipynb | ###Markdown
3 Minutes Machine Learning Episode 8: Custom Layers Marco Sanguineti, 2021 --- Welcome to 3 Minutes Machine Learning! Reference: https://archive.ics.uci.edu/ml/datasets/Airfoil+Self-Noise
###Code
import tensorflow as tf
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
print(tf.__version__)
def loadThumb(path):
# Let's import this video thumbnail!
myThumb = plt.imread(path)
fig, ax = plt.subplots(figsize=(15, 10))
plt.axis('off')
ax.imshow(myThumb)
plt.show()
# loadThumb('/tmp/yt_thumb_008.png')
###Output
_____no_output_____
###Markdown
Video Topics
> 1. Load the dataset from UCI.edu
> 2. Create a model with the Keras API with a custom layer and activation
> 3. Train the model and check the results
> 4. See you in the next video!
Load the dataset___
###Code
URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/00291/airfoil_self_noise.dat"
cols = ['Frequency',
'Angle of Attack',
'Chord length',
'Free-stream velocity',
'Suction side displacement thickness',
'Sound Pressure']
dataset = pd.read_table(URL, names=cols, dtype='float32')
dataset
dataset.describe().T
# sns.pairplot(dataset)
# plt.show()
###Output
_____no_output_____
###Markdown
Create the model___
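The custom layer defined below computes, for an input row $x$, a quadratic transform followed by the activation $g$: $$ y = g\left(x^{2} A + x B + c\right) $$ where $x^{2}$ is the elementwise square of the input, $A$ and $B$ are trainable weight matrices and $c$ is a trainable bias. This restates what the `call` method implements with `tf.matmul` on the squared and raw inputs.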
###Code
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import Layer
# Let's create a custom quadratic layer
class myDenseLayer(Layer):
def __init__(self, units=32, activation=None):
super(myDenseLayer, self).__init__()
self.units = units
self.activation = tf.keras.activations.get(activation)
def build(self, input_shape):
a_init = tf.random_normal_initializer()
self.a = tf.Variable(name='a',
initial_value=a_init(shape=(input_shape[-1], self.units)), dtype='float32',
trainable=True)
self.b = tf.Variable(name='b',
initial_value=a_init(shape=(input_shape[-1], self.units)), dtype='float32',
trainable=True)
c_init = tf.zeros_initializer()
self.c = tf.Variable(name='c',
initial_value=c_init(shape=(self.units)), dtype='float32',
trainable=True)
def call(self, inputs):
return self.activation(tf.matmul(tf.math.square(inputs), self.a)+tf.matmul(inputs, self.b) + self.c)
myLayer = myDenseLayer(units=16, activation='relu')
input_data = Input(shape=(5), name='Input')
customDense = myLayer(input_data)
output = Dense(1, name='output')(customDense)
model = Model(input_data, output)
model.compile(optimizer=Adam(learning_rate=0.001), loss='mse', metrics=['mae', 'mse'])
model.summary()
tf.keras.utils.plot_model(
model, to_file='model.png', show_shapes=True, show_dtype=True,
show_layer_names=True, rankdir='TB', expand_nested=False, dpi=96
)
def separate(df):
return df[['Sound Pressure']].to_numpy(), df.drop(df[['Sound Pressure']], axis=1).to_numpy()
min_max_scaler = preprocessing.MinMaxScaler()
df_normed = pd.DataFrame(min_max_scaler.fit_transform(dataset))
df_normed.columns = list(dataset.columns)
train_set, test_set = train_test_split(df_normed)
train_labels, train_features = separate(train_set)
test_labels, test_features = separate(test_set)
###Output
_____no_output_____
###Markdown
Train and check the results___
###Code
myLayer.variables
history = model.fit(
train_features,
train_labels,
batch_size = 32,
epochs=1000,
validation_data=(test_features,
test_labels)
)
myLayer.variables
loss = history.history['loss']
val_loss = history.history['val_loss']
fig, ax = plt.subplots(figsize=(8, 6))
plt.plot(loss)
plt.plot(val_loss)
plt.grid('both')
plt.xlabel('x')
plt.ylabel('Loss Function')
plt.title('Loss Function trend')
plt.show()
fig, ax = plt.subplots(1, 2, figsize=(12, 6), sharey=True)
ax[0].axis('equal')
ax[0].scatter(train_labels[:, 0], model.predict(train_features)[:, 0], marker='^',
color='r', edgecolor='k')
ax[0].plot([0, 1], [0, 1], c='k')
ax[0].plot([0, 1], [0.2, 1.2],'--', c='orange')
ax[0].plot([0, 1], [-0.2, 0.8],'--', c='orange')
ax[0].plot([0, 1], [0.1, 1.1],'--', c='pink')
ax[0].plot([0, 1], [-0.1, 0.9],'--', c='pink')
ax[0].set_title('Training Set - Y1')
ax[0].set_ylim(0, 1)
ax[0].grid(which='both', alpha=0.8, c='white')
ax[0].set_facecolor('#eaeaf2')
ax[0].spines['bottom'].set_color('white')
ax[0].spines['top'].set_color('white')
ax[0].spines['right'].set_color('white')
ax[0].spines['left'].set_color('white')
ax[1].axis('equal')
ax[1].scatter(test_labels[:, 0], model.predict(test_features)[:, 0], marker='^',
color='g', edgecolor='k')
ax[1].plot([0, 1], [0, 1], c='k')
ax[1].plot([0, 1], [0.2, 1.2],'--', c='orange')
ax[1].plot([0, 1], [-0.2, 0.8],'--', c='orange')
ax[1].plot([0, 1], [0.1, 1.1],'--', c='pink')
ax[1].plot([0, 1], [-0.1, 0.9],'--', c='pink')
ax[1].set_title('Validation Set - Y1')
ax[1].set_ylim(0, 1)
ax[1].grid(which='both', alpha=0.8, c='white')
ax[1].set_facecolor('#eaeaf2')
ax[1].spines['bottom'].set_color('white')
ax[1].spines['top'].set_color('white')
ax[1].spines['right'].set_color('white')
ax[1].spines['left'].set_color('white')
import numpy as np
from sklearn.metrics import r2_score
from scipy.stats import pearsonr
for i in range(np.shape(train_labels)[1]):
metrics= {
'mae-train': np.mean(np.abs(train_labels[:, i] - model.predict(train_features)[:, i])),
'mse-train': np.mean(np.square(train_labels[:, i] - model.predict(train_features)[:, i])),
'r2-train': r2_score(train_labels[:, i], model.predict(train_features)[:, i]),
'pearson-train': pearsonr(train_labels[:, i], model.predict(train_features)[:, i])[0],
'mae-test': np.mean(np.abs(test_labels[:, i] - model.predict(test_features)[:, i])),
'mse-test': np.mean(np.square(test_labels[:, i] - model.predict(test_features)[:, i])),
'r2-test': r2_score(test_labels[:, i] ,model.predict(test_features)[:, i]),
'pearson-test': pearsonr(test_labels[:, i], model.predict(test_features)[:, i])[0]
}
blue = lambda x: '\033[94m' + x + '\033[0m'
yellow = lambda x: '\033[93m' + x + '\033[0m'
for key in metrics:
if 'train' in key:
print(f'Y{i} - {blue(key)} - {str(metrics[key])[:7]}')
else:
print(f'Y{i} - {yellow(key)} - {str(metrics[key])[:7]}')
###Output
Y0 - [94mmae-train[0m - 0.05067
Y0 - [94mmse-train[0m - 0.00441
Y0 - [94mr2-train[0m - 0.87064
Y0 - [94mpearson-train[0m - 0.93566
Y0 - [93mmae-test[0m - 0.04748
Y0 - [93mmse-test[0m - 0.00402
Y0 - [93mr2-test[0m - 0.87471
Y0 - [93mpearson-test[0m - 0.93871
###Markdown
Greetings---
###Code
!pip install art
from art import tprint, aprint
tprint('See you on next videos!')
def subscribe():
"""
Attractive subscription form
"""
aprint("giveme", number=5)
print(f'\n\tLike and subscribe to support this work!\n')
aprint("giveme", number=5)
subscribe()
###Output
Requirement already satisfied: art in /usr/local/lib/python3.7/dist-packages (5.2)
____ _ _ _ _
/ ___| ___ ___ _ _ ___ _ _ ___ _ __ _ __ ___ __ __| |_ __ __(_) __| | ___ ___ ___ | |
\___ \ / _ \ / _ \ | | | | / _ \ | | | | / _ \ | '_ \ | '_ \ / _ \\ \/ /| __| \ \ / /| | / _` | / _ \ / _ \ / __|| |
___) || __/| __/ | |_| || (_) || |_| | | (_) || | | | | | | || __/ > < | |_ \ V / | || (_| || __/| (_) |\__ \|_|
|____/ \___| \___| \__, | \___/ \__,_| \___/ |_| |_| |_| |_| \___|/_/\_\ \__| \_/ |_| \__,_| \___| \___/ |___/(_)
|___/
༼ つ ◕_◕ ༽つ ༼ つ ◕_◕ ༽つ ༼ つ ◕_◕ ༽つ ༼ つ ◕_◕ ༽つ ༼ つ ◕_◕ ༽つ
Like and subscribe to support this work!
༼ つ ◕_◕ ༽つ ༼ つ ◕_◕ ༽つ ༼ つ ◕_◕ ༽つ ༼ つ ◕_◕ ༽つ ༼ つ ◕_◕ ༽つ
|
felis_python2/lectures/04_Scipy.ipynb | ###Markdown
04 Scipy **For a better understanding of this notebook, the 03 Scipy notebook should be known.** This script goes beyond the descriptive statistics shown in the previous notebook. One main purpose is the filling of gaps in data sets: first, by applying some easy (and not so easy) interpolation algorithms; second, by setting up a linear regression to another data set and using other data to fill the gap (which will be done as a project). Lastly, finding derivatives and antiderivatives will be introduced.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import requests
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Interpolation 1D interpolation The Scipy module offers the interpolate submodule, that can be used for interpolation. Procedural functions are offered along with high level interpolator classes, that offer moethods beyond simple gap filling. From simple linear interpolations to piecewise cubic splines most common interpolation algorithms can be found. In order to illustrate the algorithm itself, the interpolation will be performed on generated data.
###Code
from scipy.interpolate import interp1d
interp1d??
# generate an example data set
x = np.linspace(0, 10, num=11)
y = np.sin(-x**2/13.)
# interpolate
f = interp1d(x,y, kind='linear')
xi = np.linspace(0, 10, num=100)
# plot
fig, ax = plt.subplots(1,1)
ax.plot(x,y,'ob')
ax.plot(xi, f(xi), '-r', label='linear')
plt.legend(loc='upper left')
fq = interp1d(x,y,kind='quadratic')
fc = interp1d(x,y,kind='cubic')
fn = interp1d(x,y,kind='nearest')
# plot
fig, ax = plt.subplots(1,1)
ax.plot(x,y,'ob')
ax.plot(xi, fq(xi), '-r', label='quadratic')
ax.plot(xi, fc(xi), '-g', label='cubic')
ax.plot(xi, fn(xi), '--k', label='nearest')
plt.legend(loc='upper left')
###Output
_____no_output_____
###Markdown
In terms of filling gaps in environmental data, the univariate linear, quadratic and cubic functions, along with the nearest-neighbour interpolator, should fit most requirements. Although it might not seem very useful in the given example, the nearest-neighbour algorithm is especially useful in case the set of observed values shall be preserved and no new values shall be added. This is almost always the case for classified data. The cubic and quadratic interpolations are in fact spline interpolators. interp1d can also apply higher-order splines by giving the order instead of the method as the *kind* argument.
###Code
x = np.linspace(0,10, num=15)
np.random.seed(938)
y = np.random.gamma(1.5, 7, size=15)
# plot
fig, ax = plt.subplots(1,1)
ax.plot(x,y, 'ob')
xi = np.linspace(0,10, num=100)
# plot
fig, axes = plt.subplots(3,2, figsize=(12,9))
for i,deg in zip(range(6), [1,2,3,5,7,9]):
f = interp1d(x,y,kind=deg)
axes.flatten()[i].plot(x,y,'ob')
axes.flatten()[i].plot(xi, f(xi), '-g')
axes.flatten()[i].set_title('%d splines' % deg)
###Output
_____no_output_____
###Markdown
Linear Regression In contrast to the interpolation algorithms, a linear regression will find the most suitable parameter set to fit a predefined function to the set of data. In most implementations the *best* fit will be evaluated on the basis of either the *least squares* method or a *maximum likelihood* approach. This should not be confused with an interpolation, where **unknown** values are guessed based on the known values. A short example shall illustrate this. We will use the scipy module to fit different functions to the observations from above; then the difference should be clear. The first example is a linear model, then polynomial expressions.
###Code
from scipy.optimize import curve_fit
# define the linear model
lin = lambda x,m,b: m*x + b
# fit
cof, cov = curve_fit(lin,x,y)
# plot
fig, ax = plt.subplots(1,1)
ax.plot(x,y, 'ob')
ax.plot(xi,[lin(_, *cof) for _ in xi], '--g', lw=2)
# prepare the functions
poly2 = lambda x,a,b,c: a*x**2 + b*x + c
poly3 = lambda x,a,b,c,d: a*x**3 + b*x**2 + c*x + d
poly4 = lambda x,a,b,c,d,e: a*x**4 + b*x**3 + c*x**2 + d*x + e
# plot
fig, ax = plt.subplots(1,1)
ax.plot(x,y,'ob')
for model, c in zip((poly2, poly3, poly4), ('b', 'g', 'k')):
cof, cov = curve_fit(model, x, y)
ax.plot(xi, [model(_, *cof) for _ in xi], '--%s' % c, lw=2)
# poly 7
poly7 = lambda x,a,b,c,d,e,f,g,h: a*x**7 + b*x**6 + c*x**5 + d*x**4 + e*x**3 + f*x**2 + g*x + h
# fit
cof, cov = curve_fit(poly7, x, y)
# plot
fig, ax = plt.subplots(1,1)
ax.plot(x,y,'ob')
ax.plot(xi, [poly7(_, *cof) for _ in xi], '-g', lw=2)
###Output
_____no_output_____
###Markdown
2D interpolation The Scipy stack includes a number of approaches, functions and classes intended for multivariate interpolation, with an even stronger subset for 2D interpolation. Most of these classes need an in-depth understanding of the math behind the algorithm in order to avoid misusage and errors. These kinds of algorithms and approaches go beyond the scope of this notebook and will not be introduced. However, the usage of the respective classes and functions is very similar to the ones introduced here. Therefore, let's generate a continuous field.
###Code
# define a field
rf = lambda x, y: np.sin(x) + np.sin(y)
###Output
_____no_output_____
###Markdown
At first, we need a regularly arranged grid where we can apply the field. This grid will represent the *truth* of our observation. In an application of these techniques, this field is what we are actually interested in. This example will illustrate how close we can get using the interpolators for this very easy, mathematically describable field.
###Code
# generate a meshgrid with range()
t = np.linspace(-3, 3, 100)
xx,yy = np.meshgrid(t,t)
#xx, yy = np.mgrid[-3:3:100j, -3:3:100j]
zz = rf(xx,yy)
fig, ax = plt.subplots(1,1, figsize=(6,6))
ax.pcolor(xx,yy,zz)
# add contour lines
contour = ax.contour(xx,yy,zz, colors='k')
ax.clabel(contour)
###Output
_____no_output_____
###Markdown
Next, we will randomly select some samples from this environment and check, how different interpolations perform here.
###Code
# select sample points
n = 30
sample = np.random.rand(n,2) * 6 - 3
# unstack
xi = sample[:, 0]
yi = sample[:, 1]
zi = rf(xi, yi)
fig, ax = plt.subplots(1,1, figsize=(6,6))
ax.pcolor(xx,yy,zz)
ax.plot(xi, yi, 'or')
contour = ax.contour(xx,yy,zz, colors='k')
ax.clabel(contour)
###Output
_____no_output_____
###Markdown
Similar to interp1d, the scipy package offers an interp2d algorithm, which in fact does not interpolate but approximates. The actual interpolation function is called *griddata*, which can be used instead. It has to be mentioned that this is a black-box function that just takes the input data.
###Code
from scipy.interpolate import griddata, SmoothBivariateSpline
# interpolate using the nearest neighbors
Zn = griddata(sample, zi, (xx,yy), method='nearest')
Zl = griddata(sample, zi, (xx,yy), method='linear')
Zc = griddata(sample, zi, (xx,yy), method='cubic')
fig, axes = plt.subplots(1,3, figsize=(12,4))
for ax, Z in zip(axes, (Zn, Zl, Zc)):
# plot
ax.matshow(Z, origin='lower', cmap='RdYlBu')
ax.scatter((xi + 3) / 6 * 100, (yi + 3) / 6 * 100,25,(1,0,0))
griddata(sample, zi, (xx,yy), method='nearest')
###Output
_____no_output_____
###Markdown
As a last interpolation example, one of the classes is used: SmoothBivariateSpline. This class is quite handy, as the order of the splines can be set for each variable independently, e.g. you might use cubic splines in the x-direction but a linear interpolation on the y-axis (a short sketch of such a mixed-order setup follows after the plots below).
###Code
# use a spline of order 3
interpolant = SmoothBivariateSpline(xi, yi, zi, kx=3, ky=3)
fig, ax = plt.subplots(1,1, figsize=(6,6))
ax.pcolor(xx,yy,interpolant(t, t))
# add contour lines
contour = ax.contour(xx,yy, interpolant(t, t), colors='k')
ax.clabel(contour)
# add the sample
ax.scatter(xi, yi, 25, zi)
plt.xlim(-3, 3)
plt.ylim(-3, 3)
# calculate the residuals
residuals = np.abs(zz - interpolant(t,t))
fig, ax = plt.subplots(1,1)
ax.pcolor(xx,yy,residuals, cmap='RdBu_r')
ax.scatter(xi, yi, 25, (1,0,0))
plt.xlim(-3, 3)
plt.ylim(-3, 3)
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
# calc the colormap
cmap = cm.jet(interpolant(t,t) / np.amax(interpolant(t,t)))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(xx,yy,interpolant(t,t), alpha=0.5, facecolors=cmap)
###Output
_____no_output_____
###Markdown
Integration Integration is a common tool in several environmental models and can be used to evaluate the integrative information of a function, especially in non-linear cases. In principle, scipy offers different approaches for solving an integration problem: numerical integration and symbolic integration.
* **numerical integration:** these algorithms will find a numerical approximation of the integral. This is kind of *always working*, but these are approximations and will introduce an error to the result. This error might or might not be significant.
* **symbolic integration:** by symbolic integration scipy will try to solve the integral analytically. Obviously this will only work if there is an analytical solution for the problem and a solver implemented. If available, this is the most correct solution as no approximations are used.
* **class specific integration:** some of scipy's interpolation classes also implement methods for finding derivatives and integrals. These methods are very specific and therefore are usually the best choice.
polynomial integration For polynomials, the numpy package offers a really fast and exact solver based on the Fundamental Theorem of Calculus. This is a method of the polynomial object. Let's solve a rather easy example: $$ f(x) = -x^2 + 5 $$
###Code
# create the polynom
p = np.poly1d([-1, 0, 5])
#p = np.poly1d([2])
# plot
x = np.linspace(-3,3, 100)
fig, ax = plt.subplots(1, 1)
ax.plot(x, p(x), '-b', lw=2)
plt.grid()
# get the integral coefficent
print(p.integ())
# get the integral on [0,2]
_x = np.linspace(0, 2, 100)
# plot
fig, ax = plt.subplots(1, 1)
ax.plot(x, p(x), '-b', lw=2)
ax.fill_between(_x, 0, p(_x), facecolor='yellow', alpha=0.5)
plt.grid()
p.integ()(2)
###Output
_____no_output_____
###Markdown
sympy **sympy** is an important package for **sym**bolic **py**thon that can solve a lot of problems related to symbolic mathematics. One application is solving symbolic integration problems. One of the main advantages (like with the polynomial's integ method) is the construction of the antiderivative. Unlike the polynomial class from numpy, the sympy functions can also be used to solve non-polynomial functions. Let's illustrate this with two easy examples. \begin{equation} f(x) = -x^2 + 5\end{equation} \begin{equation} f(x) = sin\left(\frac{-x^2}{13}\right)\end{equation}
###Code
from sympy import latex,pprint, integrate, symbols, sin as Sin
###Output
_____no_output_____
###Markdown
In order to make a variable *symbolic* and therefore usable for sympy, these variables have to be instantiated as *Symbol* class instances. The *symbols* class factory can be used for this. The integrate method can then return the antiderivative and, if an interval is given, the definite solution of the antiderivative on this interval.
###Code
# make z a symbol
z = symbols('z', real=True)
#print the antiderivate
print('Antiderivative:')
pprint(integrate(-z**2 + 5))
# solve for [0:3]
print('Solution [0,3):', integrate(-z**2 + 5, (z, 0,3)))
integrate(-z**2 + 5)
###Output
_____no_output_____
###Markdown
The
###Code
from sympy.plotting import plot as sym_plot
# make z2 an symbol
z = symbols('z', real=True)
# print the antiderivative
print('Antiderivative:')
pprint(integrate(-Sin(z**2 / 13.)))
sym_plot(-Sin(z**2 / 13.))
# solve for [-5,5]
print('Solution [-5,5):', integrate(-Sin(z**2 / 13.), (z, -5, 5)))
integrate(-Sin(z**2 / 13.))
###Output
_____no_output_____
###Markdown
numerical integration Any type of numerical solution for an integration problem is found by approximation. Obviously there are tons of solutions with different advantages and disadvantages. Some approximations are fast, others are almost impossible to solve within a reasonable time span. The scipy.integrate package has two implementations that will fit almost all requirements of environmental scientists when it comes to integration: the *cumtrapz* function is an implementation of the trapezoidal rule and the *simps* function is an implementation of Simpson's rule.
###Code
from scipy.integrate import cumtrapz, simps
# setup the function
f = lambda x : -np.sin(-x**2 / 13.)
x = np.linspace(-5,5, num=100)
print('trazezodial:\t', cumtrapz(f(x), x)[-1])
print('Simpson\'s:\t', simps(f(x), x))
# performance
f = lambda x : x**3 + x**2
z = symbols('z', real=True)
%timeit integrate(z**3+z**2, (z, -5, 5))
%timeit cumtrapz(f(x), x)
%timeit simps(f(x), x)
###Output
The slowest run took 12.69 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 739 µs per loop
10000 loops, best of 3: 29.9 µs per loop
10000 loops, best of 3: 95.7 µs per loop
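###Markdown
The *class specific integration* mentioned above can look like this in practice: many of scipy's spline classes expose an `integral` method that integrates the fitted spline directly. A small sketch (this class is not used elsewhere in this notebook):
###Code
from scipy.interpolate import InterpolatedUnivariateSpline

# fit a cubic spline to the sine-based function from above
g = lambda x: -np.sin(-x**2 / 13.)
xs = np.linspace(-5, 5, num=100)
spline = InterpolatedUnivariateSpline(xs, g(xs), k=3)
# integral of the fitted spline on [-5, 5]
print('spline integral [-5,5]:', spline.integral(-5, 5))
###Output
_____no_output_____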
###Markdown
Test scenario for integration precision using polynomials $$ p(x) = 5x^5 + x^4 + 3x^3 + \frac{1}{3}x^2 - \frac{3}{4}x + 6 $$
###Code
fp = lambda x: 5*x**5 + x**4 + 3*x**3 + (1/3) *x**2 - (3/4)*x + 6
nodes = np.linspace(0,3, num=10)
p = np.poly1d([5, 1, 3, (1/3), -(3/4), 6])
print('Real: ',p.integ()(3))
print('Trpz: ', cumtrapz(fp(nodes), nodes)[-1])
print('Simps:', simps(fp(nodes), nodes))
# plot
%matplotlib inline
fig, ax = plt.subplots(1,1)
tests = np.logspace(1,5,num=30)
align = lambda n: np.linspace(0,3,num=n)
ax.axhline(y=734.475, color='r', lw=2)
ax.semilogx(tests, [cumtrapz(fp(align(n)), align(n))[-1] for n in tests], '.-b')
plt.grid(which='both')
###Output
/home/mirko/anaconda3/envs/py3-dev/lib/python3.5/site-packages/ipykernel/__main__.py:6: DeprecationWarning: object of type <class 'numpy.float64'> cannot be safely interpreted as an integer.
|
notebooks/01_dataset_builder.ipynb | ###Markdown
Dataset Builder
> Builds a number of types of datasets that are augmented by the `wikidata` knowledge base
Dataset Variations
Keywords can be supplied to the `build` function through the `dataset_type` keyword argument.
Description
`DatasetBuilder.build(ds, dataset_type='description')`
Augments the given dataset with the description of the `keyword`.
*Example*
`Stephen Curry is my favorite basketball player. {Stephen Curry: {Description: American basketball player}}`
###Code
#export
class DatasetBuilder():
"Build a dataset using `get_entities_in_text`"
def __init__(self):
# self.db = WikiDatabase()
self.nlp = spacy.load("en_core_web_sm", disable=["tok2vec", "tagger", "parser", "attribute_ruler", "lemmatizer"])
# module_url = "https://tfhub.dev/google/universal-sentence-encoder/4" #@param ["https://tfhub.dev/google/universal-sentence-encoder/4", "https://tfhub.dev/google/universal-sentence-encoder-large/5"]
# self.encoder = hub.load(module_url)
def build(self, ds, dataset_type='random'):
"Build a database based a given dataset"
if dataset_type == 'random':
return ds.map(self.random, batched=False)
elif dataset_type == 'description':
return ds.map(self.description, batched=False)
elif dataset_type == 'relevant':
pass
def build_knowledge_entities(self, ds, split):
ds = ds.map(self.get_entities, batched=True, num_proc=4)
ds.save_to_disk('data/augmented_datasets/entities/' + split + '/')
def build_csv(self, ds, split):
ds = ds.map(self.retrieve_knowledge, batched=False)
ds.save_to_disk('data/augmented_datasets/')
def get_entities(self, batch):
# with batched=True, batch['text'] is a list of strings
docs = list(self.nlp.pipe(batch['text']))
batch['entities'] = [[ent.text for ent in doc.ents] for doc in docs]
return batch
def retrieve_knowledge(self, sequence):
text = sequence['text']
entities = self.get_entities_in_text(text)
knowledge = self.add_associations(entities)
sequence['knowledge'] = knowledge
return sequence
def add_associations(self, entities):
"Returns list of entity/association dictionaries"
associations = []
for e in entities:
a = self.get_entity_associations(e)
k = {e[1]: a}
associations.append(k)
return associations
def _get_json(self, item):
"""Return JSON version of list object"""
d = {"label": None, "description": None}
d['label'] = item[1]
d['description'] = item[2]
return json.dumps(d)
def get_entities_in_text(self, text):
"Returns entities found in the sentence `text`"
doc = self.nlp(text)
entities = []
spacy_entities = doc.ents
for entity in spacy_entities:
entity = self.db.get_entity_by_label(entity.text)
if entity:
entities.append(entity)
return entities
def get_entity_associations(self, entity):
"""
Given an `entity_id` return a dictionary containing all the associated properties.
"""
entity_id = entity[0]
entity_associations_dict = {'id':entity_id, 'description':entity[2]}
# Remove all None values from list
associations = self.db.get_entity_associations(entity_id)
if not associations:
return None
for property_id, related_entity_id in associations:
property_name, related_entity_label = self.db.get_property_string(property_id, related_entity_id)
entity_associations_dict[property_name] = related_entity_label
return entity_associations_dict
###Output
_____no_output_____
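###Markdown
Note that `build` dispatches to `self.random` and `self.description`, which are not defined in the cell above. Below is a minimal sketch of what a `description`-style mapper could look like; the `knowledge` field name is an assumption, and the `[id, label, description]` entity layout follows the assertions in the testing cells further down.
###Code
# Illustrative sketch only, not the original implementation:
# attach each entity's description to the example, mirroring the format in the markdown above,
# e.g. {Stephen Curry: {Description: American basketball player}}
def description_sketch(builder, sequence):
    entities = builder.get_entities_in_text(sequence['text'])
    sequence['knowledge'] = {e[1]: {'Description': e[2]} for e in entities}
    return sequence
###Output
_____no_output_____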
###Markdown
Testing
###Code
# Build description dataset
from kirby.run_params import RunParams
from kirby.data_manager import DataManager
from datasets import load_dataset
run_params = RunParams(debug=True)
data_manager = DataManager(run_params)
ds_builder = DatasetBuilder()
split = 'train'
ds = data_manager.load(split)
ds_builder.build_knowledge_entities(ds, split)
split = 'valid'
ds = data_manager.load(split)
ds_builder.build_knowledge_entities(ds, split)
ds
# creation
ds_builder = DatasetBuilder()
assert isinstance(ds_builder, DatasetBuilder)
# Test ranked phrases
x = "Stephen Curry is my favorite basketball player."
ds_builder.rake.extract_keywords_from_text(x)
ranked_phrases = ds_builder.rake.get_ranked_phrases()
assert ranked_phrases == ['favorite basketball player', 'stephen curry'], "RAKE failed"
print(ds_builder.db.get_entity_by_label('Cristiano Ronaldo'))
assert ds_builder.db.get_entity_by_label('Cristiano Ronaldo') == ['Q11571', 'Cristiano Ronaldo', 'Portuguese association football player'], 'ERROR in `database_proxy`'
# Get Entities from the sentence
x = "Microsoft has bought Bethesda"
entities = ds_builder.get_entities_in_text(x)
print(entities)
assert entities == [['Q2283', 'Microsoft', 'American multinational technology corporation'],\
['Q224892', 'Bethesda', 'Wikimedia disambiguation page']],\
'Error in `dataset_builder.get_entities_in_text()`'
# Get associations from an entity
associations = ds_builder.get_entity_associations(entities[0][0])
assert associations == {"topic's main Wikimedia portal": 'Portal:Microsoft',
'founded by': 'Bill Gates',
'country': 'United States of America',
'instance of': 'software company',
'headquarters location': 'Redmond',
'stock exchange': 'NASDAQ',
'chief executive officer': 'Steve Ballmer',
"topic's main category": 'Category:Microsoft',
'subsidiary': 'Xbox Game Studios',
'described by source': 'Lentapedia (full versions)',
'industry': 'technology industry',
'product or material produced': 'Microsoft Windows',
'legal form': 'Washington corporation',
'business division': 'Microsoft Research',
'history of topic': 'history of Microsoft',
'member of': 'Alliance for Open Media',
'permanent duplicated item': None,
'part of': 'NASDAQ-100',
'award received': 'Big Brother Awards',
'owner of': 'Microsoft TechNet',
'owned by': 'BlackRock',
'board member': 'Reid Hoffman',
'chairperson': 'John W. Thompson',
'location of formation': 'Albuquerque',
'director / manager': 'Satya Nadella',
'external auditor': 'Deloitte & Touche LLP',
'partnership with': 'ID2020'}, 'Error in `dataset_builder.get_entity_associations()`'
# Description
text = "Darth Vader cut off Luke Skywalker's hand"
ds_builder.description(text)
###Output
_____no_output_____ |
365 DS/4_4_working-with-arrays-exercise.ipynb | ###Markdown
Working with Arrays 1. Run the following cells:
###Code
import numpy as np
array_1D = np.array([10,11,12,13, 14])
array_1D
array_2D = np.array([[20,30,40,50,60], [43,54,65,76,87], [11,22,33,44,55]])
array_2D
array_3D = np.array([[[1,2,3,4,5], [11,21,31,41,51]], [[11,12,13,14,15], [51,52,53,54,5]]])
array_3D
###Output
_____no_output_____ |
Tasks/Task1/Part2/Viajeros-BarChar.ipynb | ###Markdown
Data preprocessing
###Code
import json
import numpy as np
import csv
import sys
import locale
locale.setlocale(locale.LC_ALL, 'en_US')
#Data from: Instituto Nacional de Estadística www.ine.es
f = open("./sources/viajeros.txt", "r")
reader = csv.reader(f)
max_value = -1
names = []
viaj2000_list = []
viaj2016_list = []
for row in reader:
tokens=row[0].split(";")
name=tokens[0]
if name in ["Eslovaquia", "Hungría", "Islandia"]:
continue
names.append(name)
tokens[1]=tokens[1].split(".")[0]
tokens[2]=tokens[2].split(".")[0]
viaj2000=int(tokens[1])
if(max_value < viaj2000):
max_value = viaj2000
viaj2000_list.append(viaj2000)
viaj2016=int(tokens[2])
if(max_value < viaj2016):
max_value = viaj2016
viaj2016_list.append(viaj2016)
f.close()
###Output
_____no_output_____
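###Markdown
An optional sketch assuming the same file layout the loop above parses (semicolon-separated rows of `name;2000;2016`): the manual `csv.reader` loop can also be expressed with pandas, keeping the filtering and the truncation-at-the-dot conversion in a few vectorised steps. Column names here are illustrative assumptions.
###Code
# Hedged alternative sketch using pandas (result not used below)
import pandas as pd
viaj = pd.read_csv("./sources/viajeros.txt", sep=";", header=None,
                   names=["name", "y2000", "y2016"], dtype=str)
viaj = viaj[~viaj["name"].isin(["Eslovaquia", "Hungría", "Islandia"])]
# same truncation as the loop above: keep the part before the first '.' and cast to int
viaj["y2000"] = viaj["y2000"].str.split(".").str[0].astype(int)
viaj["y2016"] = viaj["y2016"].str.split(".").str[0].astype(int)
###Output
_____no_output_____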
###Markdown
Creating the data source
###Code
from bokeh.models import ColumnDataSource
rang_list = list(range(0, len(names[1:])))
rang1 = [x + 0.3 for x in rang_list]
rang2 = [x + 0.8 for x in rang_list]
total_2000 = np.sum(viaj2000_list[1:])
total_2016 = np.sum(viaj2016_list[1:])
perc_viaj2000_list = [x * 100.0/total_2000 if x != 0 else 0 for x in viaj2000_list[1:]]
perc_viaj2016_list = [x * 100.0/total_2016 if x != 0 else 0 for x in viaj2016_list[1:]]
viaj2000ListLocale= [locale.format('%d', x, grouping=True) for x in viaj2000_list[1:]]
viaj2016ListLocale= [locale.format('%d', x, grouping=True) for x in viaj2016_list[1:]]
source1 = ColumnDataSource(
data=dict(
namesAtt=names[1:],
v2000Att=viaj2000ListLocale,
v2016Att=viaj2016ListLocale,
v2000AttPerc=perc_viaj2000_list,
v2016AttPerc=perc_viaj2016_list,
rang1att=rang1,
rang2att=rang2,
),
column_names=["Name", "Travelers_2000", "Travelers_2016", "Travelers_2000_perc", "Travelers_2016_perc", "rang1", "rang2s"]
)
###Output
_____no_output_____
###Markdown
Data visualization
###Code
import numpy
from bokeh.palettes import PuBu
from bokeh.io import show, output_notebook, output_file
from bokeh.models import ColumnDataSource, ranges, HoverTool
from bokeh.plotting import figure
output_notebook()
x_label = "Países"
y_label = "Visitantes de otros países (%)"
title = "Visitantes por país de residencia (2000 vs 2016)"
plot = figure(plot_width=700, plot_height=400, tools="save",
x_axis_label = x_label,
y_axis_label = y_label,
title=title,
x_range = names[1:])
bar1 = plot.vbar(source=source1, x='rang1att', top='v2000AttPerc', bottom=0,width=0.4,color='#AE9E59', legend='2000')
bar2 = plot.vbar(source=source1, x='rang2att', top='v2016AttPerc', bottom=0,width=0.4,color='#4F4478', legend='2016')
plot.xaxis.major_label_orientation = 45
plot.add_tools(HoverTool(renderers=[bar1], tooltips=[("Año", "2000"), ("Porcentaje: ", "@v2000AttPerc"), ("Núm.Personas : ", "@v2000Att")]))
plot.add_tools(HoverTool(renderers=[bar2], tooltips=[("Año", "2016"), ("Porcentaje: ", "@v2016AttPerc"), ("Núm.Personas : ", "@v2016Att")]))
output_file("../../../site/Travelings.html", title="Viajeros entrados por país de residencia")
#plot.add_layout(labels)
show(plot)
###Output
_____no_output_____ |
Examples-Yuan/project.ipynb | ###Markdown
New project with climate indices
###Code
# this is a practice
import numpy as np  # needed for dir(np) below
dir()
dir(np)
###Output
_____no_output_____ |
Vitae.ipynb | ###Markdown
Examples using `vitae`
###Code
import vitae
import os
help(vitae.formatted_bibs)
help(vitae.makemycv)
# I need to know where your home directory is
from pathlib import Path
home = str(Path.home())
# Here is my bib file for you to mess around with.
filename = vitae.__path__[0] + '/data/bibs.bib'
vitae_out = home + '/Desktop/Vitae_test'
try:
os.mkdir(vitae_out)
except OSError:
print ("Creation of the directory %s failed because it probably already exists!" % vitae_out)
else:
print ("Successfully created the directory %s " % vitae_out)
# I don't need output to a variable.
# This notebook is in the same folder as my cv.tex file, next to my cv.bib file.
# My cv.bib file includes the lines per the readme file.
_ = vitae.makemycv(filename = filename, outpath = home + '/Desktop/Vitae_test', author = 'Slater')#, silent = True)
# If you want a bunch of your papers formatted and put into a particular format, try this:
help(vitae.write_bibs)
help(vitae.write_bibs)
vitae.write_bibs(bibfile = filename,
bibliographystyle='apalike',
outfile_name=home + '/Desktop/Vitae_test/try.html',
since_year=2008)
###Output
try.html moved to try_old.html
###Markdown
Experimentation with fuzzywuzzy for name matching.
###Code
from fuzzywuzzy import fuzz
fuzz.ratio('Slater, Joseph C.','Slater, J')
fuzz.ratio('Slater, J', 'Slater, Joseph C.')
authorname = 'Slater, Joseph C.'
###Output
_____no_output_____
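###Markdown
A quick hedged sketch: `fuzz.ratio` is purely character-based and order-sensitive, so for "Last, First" name variants the `partial_ratio` and `token_sort_ratio` scorers are often a better fit.
###Code
# Sketch of alternative fuzzywuzzy scorers for name matching
print(fuzz.partial_ratio('Slater, Joseph C.', 'Slater, J'))
print(fuzz.token_sort_ratio('Slater, Joseph C.', 'Joseph C. Slater'))
###Output
_____no_output_____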
###Markdown
Preference file
###Code
from appdirs import *
appname = 'vitae'
appauthor = "Joseph C. Slater"
homedir = user_data_dir(appname, appauthor)
homedir = homedir.split('/')
if os.name =='posix':
homedir = r'/' + homedir[1] + r'/' + homedir[2] + r'/.login/share/'
homedir
user_data_dir
dir()
AppDirs
os
os.name
import platform
platform
None + '/Users/jslater/.login'
import os
not os.path.isdir('/Users/jslater')
import bibtexparser
import tempfile
from bibtexparser.bparser import BibTexParser
from bibtexparser.customization import homogenize_latex_encoding
import os
from pathlib import Path
filename = vitae.__path__[0] + '/data/bibs.bib'
parser = BibTexParser()
parser.customization = homogenize_latex_encoding
parser.ignore_nonstandard_types = False
with open(filename) as bibtex_file:
bib_database = bibtexparser.load(bibtex_file, parser)
bibs = bib_database.entries
for bib in bibs:
print(bib['ID'], bib['year'])
###Output
Yuan:2018 2018
Slater:2008 2008
Scott_Emuakpor_2015 2015
20133516678254 2013
20171603573539 2017
20133416631683 2012
20131516189974 2013
20140917412531 2014
20133316620350 2013
20153001076728 2015
20161602256927 2015
20153301168495 2014
Gillaugh:2017 2017
20150600501309 2014
20133316620621 2013
20153601228715 2015
20150200408648 2015
allemang2006complete 2006
20133316600712 2012
Beck:2013a 2013
ISI:000337747800001 2014
ISI:000332960100017 2014
ISI:000326022600004 2013
Maple:2002 2002
Beck:2013 2013
Shiryayev2005gf 2005
pettit.slater.ea:measurements 2004
Carlson2000 2000
Eddy_current 1996
Carlson1996 1996
Chase1996 1996
Nayfeh1995 1995
HOLLKAMP1994lg 1994
HOLLKAMP1994dn 1994
Ross 1985
Phillips1969 1969
Rabinow1948a 1948
Shiryayev:2010a 2010
Doebling:1998 1998
Alvin:1998 1998
Peterson:1997 1997
Doebling:1996 1996
ALVIN:1995 1995
ALVIN:1995a 1995
ALVIN:1995b 1995
Mead:2011 2011
Instruments:1977 1977
Orbit:2010 2010
Shiryayev:2010 2010
SUNDERMEYER:1995 1995
BERAN:2004 2005
Reiser:2010fk 2010
Snapper:2006fk 2006
Chondros_etal_JSV2001 2001
Chu_AIAAJ1992 1992
SHM:CardenSHM2004 2004
SHM:WordenSHM04 2004
SHM:DoeblingSVD98 1998
Shiryayev:2008fk 2008
SHM:LosAlamos96 1996
Meier:2009lr 2009
Shiryayev:2009fk 2009
Allen:2008pb 2008
Shiryayev:2005bs 2005
Jolly1998 1998
Valevate2004gr 2004
Winslow1949 1949
LMD1999 1999
WBMRF241ES2003 2003
Rabinow1948b 1948
Anon1995 1995
Ferguson:1992gf 1992
Billington:1996ly 1996
Vos:2007hw 2007
Vos:2007ad 2007
Barrett:2006lp 2006
Pornsin-sirirak:2001yb 2001
Mueller:2007ee 2007
Panasonic:2001gd 2001
Hou:2004qp 2004
Khechfe:2002tk 2002
Saadat:2004hh 2004
Vestroni:2002qr 2002
Li:2004qd 2004
Li:2004wf 2004
Masuda:2002ca 2002
Saadat:2004ys 2004
Furukawa:2005sy 2005
AVMAP:2008eu 2008
WeberGenesisManual 2009
Unknown:2005zl 2005
Ernst:2008vl 2008 May
Parhi:2009fv 2009
Shiryayev:2006fy 2006
Balagangadhar:2008it 2008
Black:2001mz 2001
Nintendo:2008zl 2008
Hasbro:2006xy 2006
SHM:Shiryayev_JSCHM2007 2008
Nashif:2008it 2008
Torvik:2007gd 2007
Torvik:2007ph 2007
Torvik:2003cs 2003
Liebst:1996pi 1996
Singh:2003ao 2003
Hollkamp:2005zl 2005
Hollkamp:2001yg 2001
Hollkamp:1999fv 1999
Turcotte:1998eu 1998
Hollkamp:1996lq 1996
Palm:2005kl 2005
Riccar:2008kx 2008
Yang:2001ay 2001
Meng:2000ss 2000
Bently:1997yo 1997
IMAM:1989qc 1989
Lin:2004fc 2004
Ishida:2006dw 2006
Galvanetto:2008ty 2008
Hobby:2008fu 2008
Motorola:2006ph 2006
SHM:LANL2007 2007
Nokia:2007rz 2007
Brown:2008qv 2008
Campbell:1924kk 1924
Campbell:1924bs 1924
Campbell:1924fv 1924
Banks:1992it 1992
Banks:1993ph 1993
Cheong:2005kk 2005
Wu:2008fv 2008
Ho:2003fy 2003
Lopez1:2008xy 2008
unknown:2004rm 2004
Warminski:2003tw 2003
cobramicrotalk 2008
unknown:2006lr 2006
Kaminski:2005ts 2005
Nelson:2007yb 2007
bellman60 1960
unknown:2005dz 2005
cite-key 2006-2007
Penmetsa:1rr 2002
HOLLKAMP:1994vn 1994
Kim:2005sj 2005
Tang:2003xd 2003
Morgan:2002fi 2002
Tang:1999le 1999
Morgan:1998bx 1998
Tsai:1999wb 1999
Tsai:1996ct 1996
Alvin:2003qr 2003
Park:2003mb 2003
Park:2003dp 2003
Park:2004tg 2004
Choi:2007ai 2007
CRAWLEY:1987rw 1987
Ozer:2003qv 2003
Rizai:1983fj 1983
Reeves:1997fk 1997
Whitley:2006lr 2006
Unknown:2007qy 2007
Brown:2003lr 2003
Slater2002 2002
Shiryayev:2003vn 2003
Choi:2007yq 2007
Cobb:2004fj 2004
Schulz:1995uq 1995
Shiryayev:2006qy 2006
Page:2006fk 2006
Shiryayev:2007lr 2007
Shiryayev:2007fk 2007
Hendrich:2005lr 2005
Nicholson:2007lr 2007
Mikulas:1970lr 1970
GOPINATHAN:1999qy 1999
QURESHI:2001fk 2001
Maddux:2006lr 2006
Collins-Sussman:2007lr 2007
Lu:2007lr 2007
Smallwood:2003fk 2003
rathinam:1893 2003
Malik:2007qy 2007
Malik:2007lr 2007
Unknown:2007lr 2007
Kim1997 1997
Heathcote2007 2007
Panasonic2005lr 2005
ALYANAK2006uq 2006
SHIRYAYEV2003qy 2003
Valevate2004fk 2004
Page2006lr 2006
Shiryayev2006ls 2006
Shiryayev2006lr 2006
Marcoa2006lr 2006
Unknown2006uq 2006
Unknown2006fk 2006
Whirlpool2005fk 2005
Balagangadhar1997fk 1997
unknown2005fka 2005
Anderson2000fk 2000
Thorup2006uq 2006
Dominik2005fk 2005
Remington2005fk 2005
DAILY2006fk 2006
Unknown2005fk 2005
Dennison2006fk 2006
Shiryayev2006fk 2006
Himelblau2001vn 2001
Barrett1970ys 1970
Kacena-III1970uq 1970
Wong2004 2004
Brandis1979 1979
Ryland1980 1980
Meirovitch1984 1984
Varley1985 1985
Meirovitch1985 1985
Oberst1952fk 1952
spectraquest96 1996
Rahtz2004fk 2004
Unkown2001fk 2001
Unknown2001fk 2001
Parallels2006fk 2006
Kukreja2006 2006
Samsung2002ye 2002
Ericsson2005ov 2005
Zimmerman1992ek 1992
ANDERSON1994 1994
HAGOOD1991 1991
SONY2000ws 2000
Rathinam2003 2003
Mikulas2005zy 2005
El-Ashry2005dx 2005
2006ze 2006
Avitabile2003tn 2003
OCallahan2000gg 2000
Avitabile1991wl 1991
Avitabile1991km 1991
Avitabile1990qd 1990
OCallahan2002lg 2002
OCallahan1991nw 1991
Desforges1995fa 1995
juang94 1994
Asmussen1996am 1996
HaddaraOMAE92 1992
YangJERT84 1984
Caldwell1978ko 1978
ChoKPSEM01 2001
SpanosJVA98 1998
Tsai1985ly 1985
Worden2000xc 2000
YangSVB75 1975
TiwariJSV95 1995
LiCS04 2004
RobertsB90 1990
Huang99MSSP 1999
Ibrahim87MSSP 1987
Zubaydi2000lj 2000
Cole68 1968
Kay1988nn 1988
HamiltonTSA 1994
Cole71pat 1971
Lathi98 1998
Cole71nasa 1971
Cole1973nf 1973
Stry1991fl 1991
Asmussen1999fj 1999
Ibrahim1977wz 1977
Asmussen1997oi 1997
unknown2005th 2005
Page2004ee 2004
Page2005wy 2005
Daly2003cc 2003
Milanese2005ri 2005
Markey2005sq 2005
Slater:1999uq 1999
Slater1997 1997
Slater1996yj 1996
Slater:2001ys 2001
Epureanu2002 2002
Hall2002 2002
Thomas2002 2002
Dowell2001 2001
Epureanu2001 2001
Clark2000 2000
Epureanu2000 2000
Fraser1999 1999
Tang1999 1999
Florea1998 1998
Lorence1996 1996
LORENCE1995 1995
CIZMAS1995 1995
HALL1994 1994
HALL1993a 1993
HALL1993 1993
Thomas2005 2005
JOHNSON1995 1995
ADDINGTON1995 1995
JOHNSON1982 1982
KOO1993 1993
Tsai2001 2001
Austin2000 2000
Saravanan2000 2000
Lin1996 1996
Zambrano1996 1996
McDaniel1996 1996
DAVIS1995 1995
HU1995 1995
Landi2002 2002
Williams2004ij 2004
Pai1999qr 1999
Hedgepeth1981co 1981
NetgearMR814 2002
IbrahimIJMA86 1986
YaffeeTS 2000
ewins92 1992
Papoulis02 2002
Ljung87 1987
Vandiver82 1982
Quinn2005ml 2005
Deshmukh2005lu 2005
SCHULZ1995 1995
BANKS1994 1994
SLATER1995 1995
Mortara2004 2004
Geiannakis2001op 2001
Huang2004 2004
Huang1996 1996
Valevate2004cm 2004
Ong1998da 1998
Zi2005 2005
PESEK1995 1995
Al-Bedoor2004 2004
Al-Bedoor2001fq 2001
Adewusi2001 2001
Adewusi2002 2002
Al-Bedoor1999 1999
Al-Bedoor1999ja 1999
Al-Bedoor2000 2000
Al-Bedoor2000zy 2000
Al-Bedoor2000ro 2000
Al-Bedoor2001hi 2001
Al-Bedoor2001nz 2001
Al-Bedoor2002 2002
Al-Bedoor2005 2005
Al-Nassar2003 2003
Al-Qaisia2000 2000
Al-Qaisia2005 2005
Hamdan2001a 2001
Hamdan2001 2001
Chong2001 2001
Sanliturk2001 2001
Lee2005 2005
Sanliturk1996 1996
WSUMEBylaws 2003
Greschik2005xx 2005
Greschik2002 2002
Murphy2005jb 2005
Williams2005yu 2005
NCEES2000ji 2000
Zhou2001 2001
Castanier2005ve 2005
Pappa2002qj 2002
Beheim2001sp 2001
otnes72 1972
Kobayashi2000vt 2000
Beran2004hl 2004
Pettit2004zm 2004
Bae2004qh 2004
Crassidis2004sv 2004
von-Groll2000bs 2000
Dosch1992ce 1992
Fleming2002cz 2002
avitabile.ocallahan.ea:comparison 1989
Csaba1998ur 1998
strum89 1989
balmes:high 1993
jackson89 1989
basu.griffin:effect 1986
bendiksen:flutter 1984
harris1995xw 1995
bendiksen:mode-localization 1986
Campbell:1924dz 1924
maia98 1998
hartog84 1985
castanier.ottarsson.ea:reduced 1997
chen94 1994
castanier.pierre:consideration 1997
bendat98 1998
craig:structural 1981
bendat00 2000
craig.chung:generalized 1982
craig:review 1987
bendat90 1990
hamming83 1983
craig.hale:block-krylov 1988
dimarogonas96 1996
crawley.mokadam:stagger 1984
crawley:aeroelastic 1988
adams2001my 2001
dye.henry:vibration 1969
Allemang2001sb 2001
ewins:effect 1969
rao99 1999
ewins:vibration 1973
cook89 1989
ewins.han:resonant 1984
mcconnell95 1995
ewins.imregun:vibration 1984
shih1988qo 1988
fabunmi:forced 1980
shih1988ms 1988
kelly93 1993
gawronski.williams:model 1991
gordon.hollkamp:experimental 1998
childs93 1993
vankarsen1984fp 1984
griffin:friction 1980
rao00 2000
griffin.hoosac:model 1984
griffin:on 1988
dippery1996pu 1996
weaver90 1990
happawana.bajaj.ea:singular 1991
hill:high 1998
allemang198og 1998
hodges:confinement 1982
meirovitch97 1997
fladung2003at 2003
hurty:dynamic 1965
wirsching95 1995
irretier:spectral 1983
phillips2003sg 2003
jones:vibrating 1975
jones:literature 1997
goldman91 1991
kaza.kielb:effects 1982
bathe96 1996
kaza.kielb:flutter 1984
ruotolo2001jy 2001
hermans1998eq 1998
kenyon.rabe.ea:aerodynamic 1998
horn99 1999
kenyon.minkiewicz:mistuning 1998
krauss1999jk 1999
kielb.kaza:aeroelastic 1983
bathe76 1976
welch1967st 1967
kienholz.smith:admittance 1994
golub85 1985
kruse.pierre:forced 1996
lane:system 1956
zienkiewicz89 1989
liessa.macbain.ea:vibrations 1984
alvin1933fk 1995
crassidis1994oe 1994
lin.mignolet:effects 1996
inman00 2000
macneal:hybrid 1971
duhamel34 1834
mead:wave 1975
ocallahan1988da 1988
menq.griffin.ea:forced 1986
menq.griffin.ea:influence 1986
newland93 1993
OCallahan1990sn 1990
menq.chidamparam.ea:friction 1991
mignolet.hu:direct 1997
caughey65 1965
mignolet.lin:effects 1997
alvin1997nx 1997
mignolet.lin:identification 1997
mottershead1993ge 1993
minas.kodiyalam:vibration 1995
reid83 1983
inman89 1989
minkiewicz.russler:dynamic 1997
murthy.pierre.ea:efficient 1994
vance1988hc 1988
muszynska.jones:parametric 1983
nashif85 1985
natsiavas:steady 1992
pappa1981yu 1981
natsiavas:mode 1993
ibrahim1973zd 1973
ocallahan:procedure 1989
chen84 1984
ottarsson.pierre:transfer 1993
richardson1985ot 1985
aoki68 1968
petrov:large-scale 1993
kailath80 1980
petrov:analysis 1994
richardson1982xt 1982
pierre.dowell:localization 1987
pierre:mode 1988
kalman65 1966
pierre.murthy:aeroelastic 1992
ibrahim1977zy 1977
juang86 1986
pierre.smith.ea:localization 1994
prohl:method 1958
rao:turbomachine 1991
webster 1981
rzadkowski:general 1994
rzadkowski:resonance 1997
imregun1995rb 1995
sanliturk.imregun:vibration 1994
ibrahim1978yd 1978
hamdan89e 1989
sanliturk.ewins:modelling 1996
hamdan89 1989
sanliturk.imregun.ea:harmonic 1997
shapiro:solving 1999
ibrahim1976ie 1976
skelton1988hp 1988
sinha:computation 1997
srinivasan:vibrations 1984
srinivasan.cutts:measurement 1984
lembregts1986oj 1986
ibrahim1996dc 1996
su.craig:krylov 1991
heylen1998jm 1998
swaminadham.soni.ea:on 1987
lembregts1989fz 1989
triantafyllou.triantafyllou:frequency 1991
vakakis:non-similar 1992
craig1990tj 1990
valero.bendiksen:vibration 1986
van-der-auweraer1987kf 1987
verdon:review 1993
zhang1986fj 1986
zhang1985uf 1985
wang.chen:investigation 1993
torvik2005ft 2005
weaver.prohl:high-frequency 1958
rogers2004lk 2004
wei.pierre:localization 1988
whitehead:effect 1966
whitehead:effects 1976
whitehead:maximum 1998
wu:letter 1993
wu.wickert.ea:modal 1995
yang.griffin:reduced 1995
yang.griffin:exploring 1995
yang.menq:contact 1997
yang.menq:stick-slip-separation 1998
ahl.allen:hierarchy 1996
allen.maute:reliability-based 2002
ayyub:elicitation 2001
balas.ea:-analysis 2001
beran.lucia.ea:reduced 2002
beran.pettit:unpublished 2001
beuker.ea:robust 2002
bismarck-nasr:structural 1999
bisplinghoff.ashley.ea:aeroelasticity 1996
borglund:robust 2002
chavez.schmidt:systems 2001
cherki.ea:fuzzy 2000
dorato:introduction 1987
research.development:dot 1995
dowell.edwards.ea:nonlinear 2003
dyess:future 2001
ghanem.spanos:stochastic 1991
hall.thomas.ea:proper 2000
hanss.oexl.ea:identification 2002
harris.starnes.ea:design 2002
hartwigsen:dynamics 2002
hazelrigg:systems 1996
hoblit:gust 1988
hodges.pierce:introduction 2002
holmes.lumley.ea:turbulence 1996
huttsell:private 2002
huyse.walters:random 2001
ibrahim.pettit:uncertainties 2003
karpel.moulin.ea:aeroservoelastic 2000
kuttenkeuler.ringertz:aeroelastic 1998
lazzarin.milani.ea:scatter 1997
lee.jiang.ea:flutter 1998
le-maitre.knio.ea:stochastic 2001
liaw.yang:reliability 1993
lind:private 2003
lind:match-point 2002
lind.brenner:flight 2002
lind.brenner:flutterometer 2000
lind.brenner:robust 1999
lind.brenner:analyzing 1998
lind.brenner:wavelet 1998
lindsley.beran.ea:effects 2002
livne:aeroelasticity 2001
ma.bergman.ea:identification 2001
marczyk:beyond 2002
marczyk:principles 1999
melchers:structural 1999
morgan.henrion:uncertainty 1990
mortagua.lind:accurate 2003
nair.keane:stochastic 2002
packard.doyle:complex 1993
pendleton.bessette.ea:active 2000
pettit:uncertainty 2003
pettit.grandhi:optimization 2003
pettit.beran:effects 2003
pettit.canfield.ea:stochastic 2002
pettit.beran:application 2002
pettit.grandhi:reliability 2001
pettit.grandhi:multidisciplinary 2000
poiron:on 2000
poirion:impact 1995
potter.lind:developing 2001
prazenica.lind.ea:uncertainty 2002
ray.stengel:application 1991
segalman:status 2002
segalman:four-parameter 2002
stark.woods:probability 1994
:systems 2000
veley.pettit.ea:increased 2001
walters:towards 2003
walters.huyse:uncertainty 2002
wang.grandhi:improved 1995
weishaupl.laschka:euler 1999
wilks:statistical 1995
xiu.karniadakis:modeling 2003
xiu.karniadakis:wiener-askey 2002
xiu.lucor.ea:stochastic 2002
yang.nikolaidis:design 1991
yang.nikolaidis.ea:design 1990
yue:development 2002
hartwigsen.song.ea:experimental 2004
mook.stry:analog 1992
technosoft:adaptive 2003
apostolakis:how 2003
beran.silva:reduced 2001
bullock:identification 1996
wit.ollson.ea:new 1995
crawley.aubert:identification 1986
cycorp:cyc 2003
dahl:solid 1968
dowell.hall:modeling 2001
administration:federal 2002
iwan:distributed-element 1966
iwan:on 1967
group:janes 2003
li.ghanem:adaptive 1997
masri.caughey:nonparametric 1979
masri.chassiakos.ea:identification 1993
millman.king.ea:stochastic 2003
moorhouse:detailed 2002
park.fellipa:flexibility-based 1998
pettit.veley:risk 2003
pettit.beran:influence 2001
ren.beards:identification 1998
sakamoto.ghanem:polynomial 2002
sivia:data 1996
song.hartwigsen.ea:simulation 2004
university:systems 2000
thilmany:too 2003
wen:method 1976
:abaqusstandard 2002
demuth.beal:neural 2003
boivin.pierre.ea:nonlinear 1995
bolotin:nonconservative 1963
selvam:computer 1998
selvam.visbal:computation 1998
slater.agnes:structural 1999
Slater1997ey 1997
shaw.pierre.ea:modal 1999
geering:continuous 1976
mook.junkins:minimum 1988
mook.lew:multiple 1991
mook.stry:correlation 1991
newland:introduction 1980
witte:statistics 1980
rao1989nr 1989
:aeroelasticity 1975
dowell:nonlinear 1966
beran.huttsel.ea:computational 1999
inman:engineering 1996
romanowski:reduced 1996
rand:nonlinear 1971
rosenberg:normal 1962
rosenberg:advances 1966
gopinathan:robust 1999
astrom.wittenmark:computer-controlled 1984
astrom.wittenmark:adaptive 1995
balakrishnan.frey.ea:framework 1997
ball.hoyt:adaptive 1990
bakic.mutka.ea:brisk 1998
bellenot:state 1992
cavitt.overstreet.ea:performance 1996
chetlur.abu-ghazaleh.ea:optimizing 1998
eisenhauer.gu.ea:online 1997
ferscha:probabilistic 1995
fleischmann.wilsey:comparative 1995
frey.radhakrishnan.ea:optimistic 1998
fujimoto:parallel 1990
w-gu.vetter:falcon 1995
hamnes.tripathi:investigations 1994
hollingsworth:critical 1998
ivan-rosu.schwan:improving 1996
jefferson:virtual 1985
lamport:time 1978
lin.preiss.ea:selecting 1993
lin:estimating 1996
ludwig.wismuller:omis-2-0 1997
martin.mcbrayer.ea:time 1995
martin.mcbrayer.ea:warped 1996
mcbrayer.wilsey:process 1995
meyers:att 1990
hollingsworth.miller:dynamic 1993
hollingsworth.miller.ea:dynamic 1994
miller.callaghan.ea:paradyn 1995
palaniswamy.wilsey:adaptive 1993
palaniswamy:dynamic 1994
palaniswamy.wilsey:scheduling 1994
palaniswamy.wilsey:parameterized 1996
preiss.macintrye.ea:on 1992
quaglia:combining 1999
rajan.wilsey:dynamically 1995
rajan:cancellation 1996
rajan.radhakrishnan.ea:dynamic 1999
radhakrishnan.moore.ea:external 1997
radhakrishnan.abu-ghazaleh.ea:on-line 1998
radhakrishnan.martin.ea:object-oriented 1998
radhakrishnan.wilsey:software 2002
reed.elford.ea:next 1996
reed.aydt.ea:performance 1998
reiher.wieland.ea:limitation 1989
reiher.fujimoto.ea:cancellation 1990
ribler.vetter.ea:autopilot 1998
shaw.garlan:software 1996
singh.weber.ea:splash 1992
steinman:discrete-event 1996
vetter.schwan:high 1997
young.wilsey:distributed 1996
young.wilsey:optimistic 1996
young.abu-ghazaleh.ea:ofc 1998
sinha.griffin:friction 1982
ferri:investigation 1987
johnson:self-tuning 1982
hrovat:application 1993
karnop.trikha:comparative 1969
hanagud.obal.ea:electronic 1985
lane.ferri.ea:vibration 1992
watkins.yurkovich:vibration 1992
singh.matheu:active 1997
dupont.dasturi:semi-active 1997
dowell.schwartz:forced 1983
mcclamroch.gavin.ea:electrorheological 1994
wang.kim.ea:structural 1994
junjiro.hyun.ea:semi-active 1997
symans.constantinou.ea:semi-active 1994
crawley.lazarus.ea:embedded 1989
sprangler.hall:piezoelectric 1990
hall.prechtl:development 1994
forward.swigert:electronic 1981
fred:active 1995
chango.youdam.ea:optimal 1995
lin.wen:smart 1996
gregory.larry:uniform 1996
karnopp:design 1990
moline.floyd.ea:simulation 1994
masao.tomohiro:vibration 1997
whiteman.ferri:multi-mode 1997
dupont.bapna:stability 1994
griffin.menq:friction 1991
beards.woowat:control 1985
gaul.nitsche:friction 2000
colin.scott:active 1997
ahmadian.mantena:modal 1996
guyan:reduction 1965
paz:dynamic 1984
kammer:test-analysis 1987
kammer:hybrid 1991
ben-israel.greville:generalized 1974
wang.slater:comparison 1998
armstrong-helouvry.dupond.ea:survey 1994
gere.timoshenko:mechanics 1984
lindfield.penny:numerical 1999
masri.sassi.ea:nonparametric 1982
balachandran.nayfeh.ea:identification 1994
bauchau.nikishkov:aeroelastic 1999
beran.morton:continuation 1997
dowell:eigenmode 1996
dowell.hall.ea:eigenmode 1997
edwards:transonic 1996
gopinathan.mortara.ea:limit 2000
gordnier.melville:accuracy 1998
guruswamy.tu:navier-stokes 1996
hall.thomas.ea:reduced-order 1999
harten:high 1983
kurdila.prazenica.ea:multiresolution 1999
masri.h.ea:nonparametric 1982
mortara.slater.ea:proper 2000
morton.rizzetta.ea:numerical 1998
morton.beran:hopf-bifurcation 1999
bisplinghoff.ashley:principles 1962
pettit.beran:reduced-order 2000
rediniotis.ko.ea:synthetic 1999
sankar.ruo.ea:application 1986
schuster.vadyak.ea:static 1990
slater.pettit.ea:in-situ 2002
slater.pettit.ea:subspace 2001
volterra:theory 1959
wiener:nonlinear 1958
yee:class 1989
lee.schetzen:measurement 1965
caughey:nonlinear 1975
richardson.formenti:parameter 1982
shih:investigation 1989
vold:eleventh 1986
tang.kholodar.ea:system 2001
van-der-auweraer.snoeys.ea:global 1986
pellicano.vakakis:normal 2001
yabuno.nayfeh:nonlinear 2001
nayfeh.chin.ea:nonlinear 1995
mann.khamis:outreaches 1994
slater:in-situ 2000
balagangadhar:on 1997
carr:applications 1981
king.vakakis:energy-based 1996
manevich.mikhlin:on 1972
meirovitch:analytical 1967
nayfeh.nayfeh:on 1994
shaw:invariant 1994
shaw.pierre:on 1992
shaw.pierre:normal 1993
slater:nonlinear 1993
slater.agnes:nonlinear 1998
vakakis.rand:normal 1992
slater:final 2004
yae.inman:dynamic 1999
eager.lasowska.ea:adaptive 1986
foster.roy.ea:quality 2000
Lim2001vc 2001
hac:loadbalancing 1989
casavant.kuhl:taxonomy 1988
strang.tomaro.ea:defining 1999
krypis.kumar:parallel 1996
griffen.anderson.ea:computational 1978
morgan.visbal.ea:chimera-based 2001
lane.ferre.ea:vibration 1992
castanier:models 1999
griffin.yang:new 1999
hollkamp.gordon:modal 1999
inman:vibration 1989
kielb.macri.ea:advanced 1999
kosmatka.mehmed:experimental 1999
slater.belvin.ea:survey 1993
slater.minkiewicz.ea:forced 1999
romanowski1996ga 1996
bagley.torvik:fractional 1983
banks.fabiano.ea:spatial 1988
christensen:theory 1982
cudney.inman:determining 1989
medaglia.stahle:smrd 1980
morgenthaler:practical 1991
slater:modeling 1992
soedel:vibration 1981
segalman:initial 2001
anttonen:pod-based 2002
chatterjee:introduction 2000
cizmas:proper 2003
hall:reduced 1999
holmes.berkooz:turbulence 1996
lumley:stochastic 1970
lumley:atmospheric 1967
mittelbach:latex 2004
sirovich:turbulence 1987
slater:potential 2004
stewart:introduction 1973
mortara:advanced 2002
welch:use 1967
the0-1inmathworks0-1ininc:matlab 2002
shiryayev.slater:panel 2004
billings:identification 1980
2002xn 2002
jolly.bender.ea:properties 1998
juang.pappa:eigensystem 1985
carlson:implementation 2000
gorman:free 1982
cadwell:magnetic 1996
winslow:induced 1949
carlson.catanzarite.ea:commercial 1996
division:designing 1999
2003jp 2003
rabinow:magnetic 1948
anon:brake 1995
chase:cutting 1996
phillips:engineering 1969
nayfeh.mook:nonlinear 1995
salama.kuo.ea:simulation 1999
haug.protard.ea:numerical 1991
clem.smith.ea:pressurized 2000
szyszkowski1991nw 1991
atkinson1 1965
banks2 1992
bellman1 1960
bhu1 1992
biot1 1955
blaquiere1 1966
garcia1 1989
ghm1 1985
greenburg1 1971
hamdan1 1989
hughes_skelton1 1980
inman1 1989
inman3 1994
king1 1993
kryloff1 1943
mathe 1991
mathematica 1991
minorsky1 1969
nayfeh1 1979
nayfeh3 1973
palmiii 1983
rand2 1971
rand3 1974
reism1 1988
slate2 1992
slate3 1991
slate4 1991
slate5 1992
slate6 1993
slate8 1993
slot1 1991
szempliska1 1990
vakakis1 1990
yen1 1974
Friswell1995hf 1995
|
archive/10xpbmc_rna_variable_genes.ipynb | ###Markdown
preprocessing
###Code
adata_CG = si.read_h5ad("./input/data_processed/rna/rna_seq.h5ad")
adata_CG
# si.pp.filter_cells_rna(adata,min_n_genes=100)
si.pp.filter_genes(adata_CG,min_n_cells=3)
si.pp.cal_qc_rna(adata_CG)
si.pl.violin(adata_CG,list_obs=['n_counts','n_genes','pct_mt'])
si.pp.normalize(adata_CG,method='lib_size')
si.pp.log_transform(adata_CG)
si.pp.select_variable_genes(adata_CG)
si.pl.variable_genes(adata_CG,show_texts=True)
si.pl.variable_genes(adata_CG,show_texts=False, save_fig=True)
###Output
_____no_output_____
###Markdown
discretize RNA expression
###Code
si.tl.discretize(adata_CG,n_bins=5)
si.pl.discretize(adata_CG,kde=False)
###Output
[0.48992336 1.5519998 2.1158605 2.934613 3.9790492 7.4695992 ]
###Markdown
Generate Graph
###Code
si.tl.gen_graph(list_CG=[adata_CG],
copy=False,
dirname='graph0')
###Output
/data/pinello/SHARED_SOFTWARE/anaconda_latest/envs/hc_simba/lib/python3.7/site-packages/pandas/core/arrays/categorical.py:2487: FutureWarning: The `inplace` parameter in pandas.Categorical.remove_unused_categories is deprecated and will be removed in a future version.
res = method(*args, **kwargs)
###Markdown
PBG training
###Code
si.settings.pbg_params
dict_config = si.settings.pbg_params.copy()
## start training
dict_config['wd_interval'] = 10
si.tl.pbg_train(pbg_params = dict_config, auto_wd=True, output='model')
si.settings.pbg_params = dict_config.copy()
si.pl.pbg_metrics(fig_ncol=1)
si.pl.pbg_metrics(fig_ncol=1,save_fig=True,fig_name='graph0_model.pdf')
###Output
_____no_output_____
###Markdown
Post-training Analysis
###Code
dict_adata = si.read_embedding()
dict_adata
adata_C = dict_adata['C'] # embeddings for cells
adata_G = dict_adata['G'] # embeddings for genes
adata_C
adata_G
palette_celltype={'B':'#1f77b4',
'CD4 T':'#ff7f0e',
'CD8 T':'#279e68',
'Dendritic':"#aa40fc",
'CD14 Monocytes':'#d62728',
'FCGR3A Monocytes':'#8c564b',
'Megakaryocytes':'#e377c2',
'NK':'#b5bd61'}
###Output
_____no_output_____
###Markdown
visualize embeddings of cells
###Code
## Add annotation of celltypes (optional)
adata_C.obs['celltype'] = adata_CG[adata_C.obs_names,:].obs['celltype'].copy()
adata_C
si.tl.umap(adata_C,n_neighbors=15,n_components=2)
adata_C
si.pl.umap(adata_C,color=['celltype'],dict_palette={'celltype': palette_celltype},fig_size=(6,4),
drawing_order='random')
si.pl.umap(adata_C,color=['celltype'],dict_palette={'celltype': palette_celltype},fig_size=(6,4),
drawing_order='random',
save_fig=True,
fig_name='umap_graph0_model.pdf')
###Output
_____no_output_____
###Markdown
visualize embeddings of cells and genes: SIMBA embeds genes into the same UMAP space
###Code
adata_all = si.tl.embed(adata_ref=adata_C,list_adata_query=[adata_G])
adata_all.obs
## add annotations of cells and genes
adata_all.obs['entity_anno'] = ""
adata_all.obs.loc[adata_C.obs_names, 'entity_anno'] = adata_all.obs.loc[adata_C.obs_names, 'celltype']
adata_all.obs.loc[adata_G.obs_names, 'entity_anno'] = 'gene'
adata_all.obs
si.tl.umap(adata_all,n_neighbors=15,n_components=2)
palette_entity_anno = palette_celltype.copy()
palette_entity_anno['gene'] = "#607e95"
si.pl.umap(adata_all,color=['id_dataset','entity_anno'],dict_palette={'entity_anno': palette_entity_anno},
drawing_order='original',
fig_size=(6,5))
si.pl.umap(adata_all,color=['entity_anno'],dict_palette={'entity_anno': palette_entity_anno},
drawing_order='original',
texts=['CST3', 'NKG7', 'PPBP'],
show_texts=True,
fig_size=(8,6))
si.pl.umap(adata_all[::-1,],color=['entity_anno'],dict_palette={'entity_anno': palette_entity_anno},
drawing_order='original',
texts=['CST3', 'NKG7', 'PPBP','FCER1A'],
show_texts=True,
fig_size=(8,6))
###Output
_____no_output_____
###Markdown
Other ways to explore entities
###Code
adata_cmp = si.tl.compare_entities(adata_ref=adata_C, adata_query=adata_G)
adata_cmp
si.pl.entity_metrics(adata_cmp,x='max',y='gini',
show_texts=True,show_cutoff=False,
cutoff_x=1.5,cutoff_y=8.5)
si.pl.entity_metrics(adata_cmp,x='max',y='gini',
texts=['CST3', 'NKG7', 'PPBP'],
show_texts=True,show_cutoff=False,
cutoff_x=1.5,cutoff_y=8.5)
si.pl.entity_metrics(adata_cmp,x='max',y='entropy',
show_texts=True,show_cutoff=False,
cutoff_x=1.5,cutoff_y=8.5)
si.pl.entity_metrics(adata_cmp,x='max',y='entropy',
texts=['CST3', 'NKG7', 'PPBP'],
show_texts=True,show_cutoff=False,
cutoff_x=1.5,cutoff_y=8.5)
adata_cmp.obs['celltype'] = adata_CG.obs.loc[adata_cmp.obs_names,'celltype']
entities = ['CST3', 'NKG7', 'PPBP']
si.pl.entity_barcode(adata_cmp,
layer='softmax',
entities=entities,
anno_ref='celltype',
palette=palette_celltype)
si.pl.entity_barcode(adata_cmp,
layer='norm',
entities=entities,
anno_ref='celltype',
palette=palette_celltype)
adata_CG.write(os.path.join(workdir, 'adata_CG.h5ad'))
adata_C.write(os.path.join(workdir, 'adata_C.h5ad'))
adata_G.write(os.path.join(workdir, 'adata_G.h5ad'))
adata_all.write(os.path.join(workdir, 'adata_all.h5ad'))
adata_cmp.write(os.path.join(workdir, 'adata_cmp.h5ad'))
###Output
... storing 'pbg_id' as categorical
... storing 'celltype' as categorical
... storing 'id_dataset' as categorical
... storing 'entity_anno' as categorical
|
JupyterNotebooks/Part 2 - SupportVectorMachine.ipynb | ###Markdown
Drop the columns which are not required and not useful for predictions
###Code
drop_cols = ['brand','categories','categories','dateAdded','dateUpdated','keys','manufacturer','name','reviewsdate','dateSeen','sourceURLs','text','title','userCity','upc','userProvince']
df = df.drop(drop_cols,axis=1)
df.head()
###Output
_____no_output_____
###Markdown
Fill the NaNs with suitable values
###Code
df['didPurchase'].fillna(True, inplace=True)
df['doRecommend'].fillna(True, inplace=True)
###Output
_____no_output_____
###Markdown
Convert boolean values to binary values i.e. True to 1 and False to 0
###Code
df.didPurchase = (df.didPurchase)*1
df.doRecommend = (df.doRecommend)*1
df.fillna(0, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Convert string values to integer values by hashing the column values
###Code
def get_hash(x):
    # Python's built-in hash() is salted per process for strings, so these IDs
    # are not reproducible across runs unless PYTHONHASHSEED is fixed
    return abs(hash(x)) % 10**9
df['username'] = df['username'].apply(get_hash)
df['id'] = df['id'].apply(get_hash)
df.head()
df.groupby('doRecommend').count()
df.describe()
df.groupby('doRecommend').median()
df.groupby('doRecommend').mean()
###Output
_____no_output_____
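###Markdown
A hedged alternative sketch: a digest-based hash such as `hashlib.md5` is deterministic across runs, which would make the encoded `username`/`id` columns reproducible.
###Code
# Sketch: a deterministic replacement for get_hash (not applied below)
import hashlib
def get_stable_hash(x):
    # md5 of the UTF-8 string, reduced to the same 0..10**9 range as get_hash
    return int(hashlib.md5(str(x).encode("utf-8")).hexdigest(), 16) % 10**9
###Output
_____no_output_____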
###Markdown
Scale the column values
###Code
def scaled_df(df):
scaled = pd.DataFrame()
for item in df:
        if item in df.select_dtypes(include=['float64']):  # np.float was removed in newer NumPy releases
scaled[item] = ((df[item] - df[item].min()) /
(df[item].max() - df[item].min()))
else:
scaled[item] = df[item]
return scaled
df_scaled = scaled_df(df)
f, ax = plt.subplots(figsize=(11, 15))
ax.set_facecolor('#FFFFFF')  # set_axis_bgcolor was removed from Matplotlib
plt.title("Box Plot Product Data Unscaled")
ax.set(xlim=(-.05, 1.05))
ax = sns.boxplot(data = df[:22],
orient = 'h',
palette = 'Set3')
f, ax = plt.subplots(figsize=(11, 15))
ax.set_facecolor('#FFFFFF')  # set_axis_bgcolor was removed from Matplotlib
plt.title("Box Plot Product Data Scaled")
ax.set(xlim=(-.05, 1.05))
ax = sns.boxplot(data = df_scaled[:22],
orient = 'h',
palette = 'Set3')
df.dtypes
df.head()
###Output
_____no_output_____
###Markdown
Set predictor columns to determine the results
###Code
predictor_names=['id','didPurchase','username','rating']
predictor_names
###Output
_____no_output_____
###Markdown
Find Rank for each of the predictor columns
###Code
def rank_predictors(dat,l,f='doRecommend'):
rank={}
max_vals=dat.max()
median_vals=dat.groupby(f).median() # We are using the median as the mean is sensitive to outliers
for p in l:
score=np.abs((median_vals[p][1]-median_vals[p][0])/max_vals[p])
rank[p]=score
return rank
cat_rank=rank_predictors(df,predictor_names)
cat_rank
###Output
_____no_output_____
###Markdown
Sort the predictors by rank
###Code
cat_rank=sorted(cat_rank.items(), key=lambda x: x[1])
cat_rank
###Output
_____no_output_____
###Markdown
Take the top predictors based on median difference
###Code
ranked_predictors=[]
for f in cat_rank[1:]:
ranked_predictors.append(f[0])
ranked_predictors
###Output
_____no_output_____
###Markdown
Predicting if the product will be recommended or not using the predictor columns
###Code
X = df_scaled[predictor_names]
#setting target
y = df_scaled['doRecommend']
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.2)  # split X, not the full df, so the target column is not leaked into the features
###Output
_____no_output_____
###Markdown
Find the accuracy score of the SVM classifier using different kernels (RBF, polynomial, and sigmoid)
###Code
print("---------------------------------------------")
print("RBF Kernel")
svc = svm.SVC(kernel='rbf', C=1).fit(X, y)
print("KfoldCrossVal mean score using SVM is %s" %cross_val_score(svc,X,y,cv=10).mean())
#SVM metrics
sm = svc.fit(X_train, y_train)
y_pred = sm.predict(X_test)
print("Accuracy score using SVM is %s" %metrics.accuracy_score(y_test, y_pred))
print("---------------------------------------------")
print("RBF Kernel")
svc = svm.SVC(kernel='rbf', C=10).fit(X, y)
print("KfoldCrossVal mean score using SVM is %s" %cross_val_score(svc,X,y,cv=10).mean())
#SVM metrics
sm = svc.fit(X_train, y_train)
y_pred = sm.predict(X_test)
print("Accuracy score using SVM is %s" %metrics.accuracy_score(y_test, y_pred))
print("---------------------------------------------")
print("Poly Kernel")
svc = svm.SVC(kernel='poly', C=1).fit(X, y)
print("KfoldCrossVal mean score using SVM is %s" %cross_val_score(svc,X,y,cv=10).mean())
#SVM metrics
sm = svc.fit(X_train, y_train)
y_pred = sm.predict(X_test)
print("Accuracy score using SVM is %s" %metrics.accuracy_score(y_test, y_pred))
print("---------------------------------------------")
print("Sigmoid Kernel")
svc = svm.SVC(kernel='sigmoid', C=1, gamma=0.001).fit(X, y)
print("KfoldCrossVal mean score using SVM is %s" %cross_val_score(svc,X,y,cv=10).mean())
#SVM metrics
sm = svc.fit(X_train, y_train)
y_pred = sm.predict(X_test)
print("Accuracy score using SVM is %s" %metrics.accuracy_score(y_test, y_pred))
###Output
---------------------------------------------
RBF Kernel
KfoldCrossVal mean score using SVM is 0.7535714285714286
Accuracy score using SVM is 0.8461538461538461
---------------------------------------------
RBF Kernel
KfoldCrossVal mean score using SVM is 0.7535714285714286
Accuracy score using SVM is 0.8461538461538461
---------------------------------------------
Poly Kernel
KfoldCrossVal mean score using SVM is 0.7392857142857143
Accuracy score using SVM is 0.8461538461538461
---------------------------------------------
Sigmoid Kernel
KfoldCrossVal mean score using SVM is 0.7392857142857143
Accuracy score using SVM is 0.8461538461538461
###Markdown
Changing hyper-parameter values does not change the accuracy score of predictions.
###Code
#setting svm classifier
svc = svm.SVC(kernel='rbf', C=1).fit(X, y)
print("KfoldCrossVal mean score using SVM is %s" %cross_val_score(svc,X,y,cv=10).mean())
#SVM metrics
sm = svc.fit(X_train, y_train)
y_pred = sm.predict(X_test)
print("Accuracy score using SVM is %s" %metrics.accuracy_score(y_test, y_pred))
###Output
KfoldCrossVal mean score using SVM is 0.7535714285714286
Accuracy score using SVM is 0.8461538461538461
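###Markdown
A hedged follow-up sketch: rather than comparing a handful of hand-picked values of `C`, `gamma`, and the kernel, a small cross-validated grid search checks the same claim systematically (using the `X` and `y` defined above).
###Code
# Sketch: systematic hyper-parameter search over the SVC settings tried above
from sklearn.model_selection import GridSearchCV
param_grid = {'kernel': ['rbf', 'poly', 'sigmoid'],
              'C': [1, 10],
              'gamma': ['scale', 0.001]}
grid = GridSearchCV(svm.SVC(), param_grid, cv=10)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
###Output
_____no_output_____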
|
data_analysis/participant_matcher.ipynb | ###Markdown
Participant Matcher: This notebook selects the timestamps from the ibex farm participants and matches them up with the corresponding participant on pavlovia
###Code
#Import Statements
import pandas as pd
import math
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams["font.family"] = "sans-serif"
import seaborn as sns
import datetime
sns.set(palette="bright")
sns.set_style("whitegrid")
sns.set_context("paper", font_scale=1.2)
## Data in
#read in source csv (Pavlovia)
df_p = pd.read_csv(r'data/results_pavlovia.csv', encoding='utf-8-sig',
low_memory=False
)
# Exclude Participants
p_exclude= [5, 15, 20, 22, 36, 37, 42, 45, 47, 49, 51, 53, 56, 79, 90, 167, 168, 64, 180, 99, 167,
71,121, 57,77,97, 197,182, 63,73,88,103,113,138,163,183
] # 5 was me, 47 performed bad
for p in p_exclude:
df_p = df_p[df_p['participant'] != p]
# now the actual 50 participants are left
# display column names
print(list(df_p.keys()))
#get timestamps for each participant
df_p.groupby(['participant'])['__datetime'].unique()
# drop ms and add a timestamp column
df_p["timestamp"] = [datetime.datetime.strptime(i[:i.rfind(".")], "%Y-%m-%d_%Hh%M.%S") for i in df_p["__datetime"]]
#read in questionnaire data
df_q = pd.read_csv('data/results_audiovisual_exp.txt',
sep = ',',
comment='#',
header=None,
names = ['time','hash', 'controller','item','element', 'type', 'group','petype','pename','parameter', 'value', 'event_time', 'comments'],
engine = 'python')
# reformat unix time into timestamp and add as column
df_q['timestamp'] = [datetime.datetime.fromtimestamp(i) for i in df_q['time']]
# remove duplicates and get hash values, which are the ip address, so there will still be duplicates
# also change timezone if necessary
df_q_ts = df_q.drop_duplicates(subset=["timestamp"])
df_q_ts.loc[:,"timestamp"] = df_q_ts.loc[:,"timestamp"] + datetime.timedelta(hours=0)
df_q_ts = df_q_ts.set_index("timestamp")
df_q_ts
# set timestamp as index so we can search by index
df_p_participants = df_p.drop_duplicates(subset=["timestamp"])[["participant", "timestamp"]].set_index("participant")
df_p_participants
# for each participant in the questionnaire, find the next matching timestamp on pavlovia, adjust tolerance if necessary
participant_hash = {}
print('part ID', ' ibex timestamp', ' pavlovia timestamp', ' hash id ibex')
for participant, timestamp in list(zip(df_p_participants.index, df_p_participants["timestamp"])):
try:
index = df_q_ts.index.get_loc(timestamp, method='ffill', tolerance=datetime.timedelta(minutes=40))
except KeyError:
print(participant, " | ", ' nothing found | ', timestamp, " | ", "no match")
else:
hashcode = df_q_ts.iloc[index]["hash"]
q_timestamp = df_q_ts.iloc[index].name
print(participant, " | ", q_timestamp," | ", timestamp, " | ", hashcode)
participant_hash[participant] = hashcode
print(len(participant_hash))
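# Hedged alternative sketch (not used below): pandas.merge_asof performs the same nearest-earlier
# timestamp matching as the loop above in one call; direction='backward' mirrors method='ffill',
# with the same 40-minute tolerance.
matched = pd.merge_asof(
    df_p_participants.reset_index().sort_values("timestamp"),
    df_q_ts.reset_index().sort_values("timestamp")[["timestamp", "hash"]],
    on="timestamp", direction="backward",
    tolerance=datetime.timedelta(minutes=40))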
df = df_q[df_q["hash"].isin(participant_hash.values())]
#display all the genders
list(df[df['parameter'] == 'gender']['value'])
fig = plt.gcf()
fig.set_size_inches(10, 8)
sns.set(font_scale=1.5)
ax = sns.histplot(data = df[df['parameter'] == 'gender'],x = 'value')
for p in ax.patches:
ax.annotate(format(p.get_height(), '.0f'),
(p.get_x() + p.get_width() / 2., p.get_height()),
ha = 'center', va = 'center',
xytext = (0, 9),
textcoords = 'offset points')
plt.title('Distribution of gender')
plt.savefig("figures/gender_dist.png", dpi=192, bbox_inches='tight')
ages = df[df['parameter'] == 'age'].sort_values(by='value')
ages = ages[ages['value'] != '333']
ages = ages[ages['value'] != '14']
ages = ages[ages['value'] != '11']
agelist = list(ages['value'])
agelist = list(map(int, agelist))
mean = np.mean(agelist)
print(agelist)
print(mean)
#display all the ages
print(list(ages))
fig = plt.gcf()
fig.set_size_inches(10, 8)
sns.set(font_scale=1.5)
ax = sns.histplot(x = ages['value'],
kde = True)
for p in ax.patches:
ax.annotate(format(p.get_height(), '.0f'),
(p.get_x() + p.get_width() / 2., p.get_height()),
ha = 'center', va = 'center',
xytext = (0, 9),
textcoords = 'offset points')
plt.title('Distribution of age')
plt.savefig("figures/age_dist.png", dpi=192, bbox_inches='tight')
fig = plt.gcf()
fig.set_size_inches(6, 8)
sns.set(font_scale=1.5)
ax = sns.histplot(data = df[df['parameter'] == 'handedness'],x = 'value')
for p in ax.patches:
ax.annotate(format(p.get_height(), '.0f'),
(p.get_x() + p.get_width() / 2., p.get_height()),
ha = 'center', va = 'center',
xytext = (0, 9),
textcoords = 'offset points')
plt.title('Distribution of handedness')
plt.savefig("figures/handedness_dist.png", dpi=192, bbox_inches='tight')
fig = plt.gcf()
fig.set_size_inches(10, 8)
sns.set(font_scale=1.5)
ax = sns.histplot(data = df[df['parameter'] == 'vision'],x = 'value')
for p in ax.patches:
ax.annotate(format(p.get_height(), '.0f'),
(p.get_x() + p.get_width() / 2., p.get_height()),
ha = 'center', va = 'center',
xytext = (0, 9),
textcoords = 'offset points')
plt.title('Do you have vision problems? ')
plt.savefig("figures/vision_dist.png", dpi=192, bbox_inches='tight')
###Output
_____no_output_____ |
.ipynb_checkpoints/SalesAnalysis-checkpoint.ipynb | ###Markdown
Clean up the data: drop rows of NaN
###Code
nan_df= all_data [all_data.isna().any(axis=1)]
nan_df.head()
all_data = all_data.dropna(how='all')
all_data.head()
###Output
_____no_output_____
###Markdown
find 'OR' and delete it
###Code
all_data= all_data[all_data['Order Date'].str[0:2]!= 'Or']
###Output
_____no_output_____
###Markdown
Convert columns to the correct type
###Code
all_data['Quantity Ordered']= pd.to_numeric(all_data['Quantity Ordered']) #make int
all_data['Price Each'] = pd.to_numeric(all_data['Price Each'])#make float
all_data.head()
###Output
_____no_output_____
###Markdown
Augment data with additional columns: adding a Month column
###Code
all_data['Month']= all_data['Order Date'].str[0:2]
all_data['Month']= all_data['Month'].astype('int32')
all_data.head()
###Output
_____no_output_____
###Markdown
add Sales column
###Code
all_data['Sales']= all_data['Quantity Ordered'] * all_data['Price Each']
all_data.head()
###Output
_____no_output_____
###Markdown
The best month for sales
###Code
results= all_data.groupby('Month').sum()
months = range(1,13)
plt.bar(months,results['Sales'])
plt.xticks(months)
plt.ylabel('Sales in USD ($)')
plt.xlabel('Month number')
plt.show()
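# Small follow-up sketch: name the best month explicitly instead of reading it off the chart;
# idxmax() returns the Month index with the largest total sales.
print('Best month for sales:', results['Sales'].idxmax())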
###Output
_____no_output_____ |
02_soilmap_update_Ksat_AWC_USLE-K.ipynb | ###Markdown
K_SAT update: loading the Rosetta-predicted K_sat values (from an external Python 2.7 env) into the lookup tables for each texture class
###Code
the_soilmap_dataset = 'data/eesti_soil_red1_texture_fix_geo_drain_soc_bd.shp'
shape_export_texture_values = 'data/eesti_soil_red1_texture_overview.shp'
eesti_soil_red1_soc_bd = gpd.read_file(the_soilmap_dataset, encoding='utf-8')
eesti_soil_red1_soc_bd.drop(columns=['upd_siffer', 'Loimis1',
'loimis_rec', 'EST_TXT1', 'LXTYPE1', 'EST_TXT2', 'LXTYPE2', 'EST_TXT3',
'LXTYPE3', 'EST_TXT4', 'LXTYPE4'], inplace=True)
eesti_soil_red1_textures = gpd.read_file(shape_export_texture_values, encoding='utf-8')
eesti_soil_red1_textures = eesti_soil_red1_textures[['orig_fid', 'upd_siffer', 'Loimis1',
'loimis_rec', 'EST_TXT1', 'LXTYPE1', 'EST_TXT2', 'LXTYPE2', 'EST_TXT3',
'LXTYPE3', 'EST_TXT4', 'LXTYPE4']]
display(eesti_soil_red1_soc_bd.sample(10))
display(eesti_soil_red1_soc_bd.dtypes)
display(eesti_soil_red1_textures.sample(10))
display(eesti_soil_red1_textures.dtypes)
eesti_soil_red1 = pd.merge(left=eesti_soil_red1_soc_bd, right=eesti_soil_red1_textures,
left_on='orig_fid', right_on='orig_fid', how='left')
del(eesti_soil_red1_soc_bd)
del(eesti_soil_red1_textures)
display(eesti_soil_red1.sample(10))
display(eesti_soil_red1.dtypes)
# actually the same as from the loimis lookups, but have to be filled up now with the Rosetta Ksat values
texture_rules = {
# l - liiv, en: sand
'l': {'sand': 90, 'silt': 5, 'clay': 5, 'lxtype': 'S'},
'l1': {'sand': 95, 'silt': 5, 'clay': 0, 'lxtype': 'S'},
'l2': {'sand': 90, 'silt': 3, 'clay': 7, 'lxtype': 'S'},
'l3': {'sand': 90, 'silt': 3, 'clay': 7, 'lxtype': 'S'},
'l4': {'sand': 90, 'silt': 3, 'clay': 7, 'lxtype': 'S'},
'l5': {'sand': 90, 'silt': 3, 'clay': 7, 'lxtype': 'S'},
# pl - peenliiv, en: fine sand (täiendina peenliivakas)
'pl': {'sand': 90, 'silt': 3, 'clay': 7, 'lxtype': 'S'},
# plsl - peenliivakas saviliiv, en: fine clayey sand
'plsl': {'sand': 82, 'silt': 9, 'clay': 9, 'lxtype': 'LS'},
# sl - saviliiv, en: clayey sand
'sl': {'sand': 82, 'silt': 9, 'clay': 9, 'lxtype': 'LS'},
'sl1': {'sand': 82, 'silt': 9, 'clay': 9, 'lxtype': 'LS'},
'sl2': {'sand': 82, 'silt': 9, 'clay': 9, 'lxtype': 'LS'},
'sl3': {'sand': 82, 'silt': 9, 'clay': 9, 'lxtype': 'LS'},
'sl4': {'sand': 82, 'silt': 9, 'clay': 9, 'lxtype': 'LS'},
# tsl - tolmjas saviliiv, en: dusty clayey sand
'tsl': {'sand': 80, 'silt': 14, 'clay': 6, 'lxtype': 'LS'},
'tsl1': {'sand': 80, 'silt': 14, 'clay': 6, 'lxtype': 'LS'},
# dk - liivakivirähk, en: sandstone grit
'dk': {'sand': 90, 'silt': 3, 'clay': 7, 'lxtype': 'S'},
# ls - liivsavi
'ls': {'sand': 55, 'silt': 30, 'clay': 15, 'lxtype': 'L'},
# ls₁ - kerge liivsavi, en: light sandy clay
'ls1': {'sand': 65, 'silt': 20, 'clay': 15, 'lxtype': 'SL'},
# ls₂ - keskmine liivsavi, en: medium sandy clay
'ls2': {'sand': 55, 'silt': 30, 'clay': 15, 'lxtype': 'L'},
# ls₃ - raske liivsavi, en: heavy sandy clay
'ls3': {'sand': 50, 'silt': 15, 'clay': 35, 'lxtype': 'CL'},
'ls4': {'sand': 50, 'silt': 15, 'clay': 35, 'lxtype': 'CL'},
'ls5': {'sand': 50, 'silt': 15, 'clay': 35, 'lxtype': 'CL'},
# tls - tolmjas liivsavi, en: dusty sandy clay
'tls': {'sand': 35, 'silt': 50, 'clay': 15, 'lxtype': 'SiL'},
'tls1': {'sand': 40, 'silt': 45, 'clay': 15, 'lxtype': 'L'},
'tls2': {'sand': 35, 'silt': 50, 'clay': 15, 'lxtype': 'SiL'},
'tls3': {'sand': 30, 'silt': 40, 'clay': 30, 'lxtype': 'SiCL'},
# s - savi, en: clay
's': {'sand': 25, 'silt': 30, 'clay': 45, 'lxtype': 'C'},
's1': {'sand': 25, 'silt': 30, 'clay': 45, 'lxtype': 'C'},
's2': {'sand': 25, 'silt': 30, 'clay': 45, 'lxtype': 'HC'},
# th 15 või th 15-20 toorhuumusliku horisondi tüsedus, en: raw humus thickness
'th': {'sand': 25, 'silt': 25, 'clay': 50, 'lxtype': 'HUMUS'},
'th3': {'sand': 25, 'silt': 25, 'clay': 50, 'lxtype': 'HUMUS'},
# t - turvas, en: peat, amp is erodeeritud level
't': {'sand': 25, 'silt': 25, 'clay': 50, 'lxtype': 'PEAT'},
# t₁ - halvasti lagunenud, en: (under 20%)
't1': {'sand': 25, 'silt': 25, 'clay': 50, 'lxtype': 'PEAT'},
# t₂ - keskmine lagunenud, en: (20%-40%)
't2': {'sand': 20, 'silt': 20, 'clay': 60, 'lxtype': 'PEAT'},
# t₃ - hästi lagunenud, en: (above 40%)
't3': {'sand': 15, 'silt': 15, 'clay': 70, 'lxtype': 'PEAT'},
# all rocky
# def skeleton_no_amp(): return [ no_info, pk, kr, p, d, lu, ck ]
# r 2 rähk 'r', 'r1', 'r2', 'r3', 'r4', 'r5', 'r⁰', 'r⁰1',
# v 3 paeveeris 'v', 'v1', 'v2', 'v3', 'v4', 'v5'
# v_0 4 raudkiviveeris 'v_0', 'v°1', 'v⁰', 'v⁰1', 'v⁰2', 'v⁰4'
# kb 5 klibu 'kb', 'kb1', 'kb2', 'kb3', 'kb4', 'kb5'
# k 7 paekivid 'k', 'k1', 'k2', 'k3', 'k4', 'k5'
# k_0 8 raudkivid 'k_0', 'k°1', 'k⁰', 'k⁰1', 'k⁰2', 'k⁰3', 'k⁰5'
# kr 1 kruus (no_amp?) 'kr', 'kr1', 'kr5'
# ck 6 kiltkivirähk (no_amp?) 'ck'
# pk 9 paeplaadid (no_amp?) 'pk'
# p 10 paas (no_amp?) 'p'
# d 11 liivakivi (no_amp?) 'd'
# lu 12 lubisetted (no_amp?) 'lu'
# Mahuprotsente ei kasutata korese tüüp 1, 9, 10, 11, ja 12 puhul #
'ck': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'd': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'lu': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'p': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'pk': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'kr': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'kr1': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'kr5': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'k': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'k1': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'k2': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'k3': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'k4': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'k5': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'k_0': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'kb': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'kb1': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'kb2': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'kb3': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'kb4': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'kb5': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'k⁰': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'k°1': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'k⁰1': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'k⁰2': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'k⁰3': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'k⁰4': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'k⁰5': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'r': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'r1': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'r2': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'r3': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'r4': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'r5': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'r⁰': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'r⁰1': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'r⁰2': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'r⁰3': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'r⁰4': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'r⁰5': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'v': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'v1': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'v2': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'v3': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'v4': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'v5': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'v_0': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'v⁰': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'v°1': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'v⁰1': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'v⁰2': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'v⁰3': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'v⁰4': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
'v⁰5': {'sand': 100, 'silt': 0, 'clay': 0, 'lxtype': 'GRAVELS'},
# NaN 0
'no_peenes': {'sand': 60, 'silt': 20, 'clay': 20, 'lxtype': 'no_info'},
'no_info': {'sand': 60, 'silt': 20, 'clay': 20, 'lxtype': 'no_info'}
}
s1 = len(texture_rules.keys())
n_arr = np.loadtxt(r"Rosetta-3.0\output\soil_classes_output_mod2.txt", delimiter=",")
orig_arr = np.loadtxt(r"Rosetta-3.0\output\soil_classes_inp_mod2.txt", delimiter=" ")
counter = 0
new_dict = {}
results = []
def compare_by_ssc(a, b):
if a['sand'] == b['sand'] and a['silt'] == b['silt'] and a['clay'] == b['clay']:
return True
else:
return False
def compare_by_ssc_and_ksat(a, b):
if a['sand'] == b['sand'] and a['silt'] == b['silt'] and a['clay'] == b['clay'] and \
a['k_sat_cm_d'] == b['k_sat_cm_d'] and a['k_sat'] == b['k_sat']:
return True
else:
return False
def comp_ssc_list(ssc_list, a):
is_in = False
for elem in ssc_list:
if compare_by_ssc(elem, a):
return True
return is_in
for counter in range(s1):
# print(k)
sand = orig_arr[counter][0]
silt = orig_arr[counter][1]
clay = orig_arr[counter][2]
k_val_cm_d = n_arr[counter][3]
    k_val = k_val_cm_d * 10 / 24  # convert cm/day to mm/h (the unit used for SOL_K)
k_val_cm_d_std = n_arr[counter][4]
k_val_cm_d_std_log = n_arr[counter][5]
sub_dict = {'sand': int(sand),
'silt': int(silt),
'clay': int(clay),
'k_sat_cm_d': np.round(k_val_cm_d, 3),
'k_sat': np.round(k_val, 3),
'k_val_cm_d_std': np.round(k_val_cm_d_std, 3),
'k_val_cm_d_std_log': np.round(k_val_cm_d_std_log, 3) }
# new_dict.update({ k: sub_dict})
# texture_rules[k]['k_sat'] = np.round(k_val, 2)
if not comp_ssc_list(results, sub_dict):
results.append(sub_dict)
# print(sub_dict)
# print(texture_rules[k])
# print(texture_rules)
# for i in results:
# display(i)
for k in texture_rules.keys():
entry = texture_rules[k]
is_in = False
cp_ksat = {}
for elem in results:
if compare_by_ssc(entry, elem):
is_in = True
cp_ksat = elem
if not is_in:
print(f"not found for {k}")
else:
# texture_rules[k]['k_sat'] = np.round(cp_ksat['k_sat'], 2)
texture_rules[k].update({'k_sat': cp_ksat['k_sat']})
texture_rules[k].update({'k_sat_cm_d': cp_ksat['k_sat_cm_d']})
texture_rules[k].update({'k_val_cm_d_std': cp_ksat['k_val_cm_d_std']})
texture_rules[k].update({'k_val_cm_d_std_log': cp_ksat['k_val_cm_d_std_log']})
print(f"'{k}': {texture_rules[k]},")
import soil_lib
from soil_lib.LoimisLookups import swat_ext_defaults_lookup, texture_rules_ksat_ext
def get_ksat_awc(row):
SOL_K1=0
SOL_K2=0
SOL_K3=0
SOL_K4=0
if row['nlayers'] >= 1:
idx = row['EST_TXT1']
if idx is None and row['LXTYPE1'] == 'GRAVELS':
idx = 'kr'
try:
ksat_values = texture_rules_ksat_ext[idx]
default_values = swat_ext_defaults_lookup(idx)
SOL_K1=ksat_values['k_sat']
except KeyError as ex:
print(ex)
pass
# print(row)
if row['nlayers'] >= 2:
idx = row['EST_TXT2']
if idx is None and row['LXTYPE2'] == 'GRAVELS':
idx = 'kr'
try:
ksat_values = texture_rules_ksat_ext[idx]
default_values = swat_ext_defaults_lookup(idx)
SOL_K2=ksat_values['k_sat']
except KeyError as ex:
print(ex)
pass
# print(row)
if row['nlayers'] >= 3:
idx = row['EST_TXT3']
if idx is None and row['LXTYPE3'] == 'GRAVELS':
idx = 'kr'
try:
ksat_values = texture_rules_ksat_ext[idx]
default_values = swat_ext_defaults_lookup(idx)
SOL_K3=ksat_values['k_sat']
except KeyError as ex:
print(ex)
pass
# print(row)
if row['nlayers'] >= 4:
idx = row['EST_TXT4']
if idx is None and row['LXTYPE4'] == 'GRAVELS':
idx = 'kr'
try:
ksat_values = texture_rules_ksat_ext[idx]
default_values = swat_ext_defaults_lookup(idx)
SOL_K4=ksat_values['k_sat']
except KeyError as ex:
print(ex)
pass
# print(row)
return pd.Series([SOL_K1, SOL_K2, SOL_K3,SOL_K4])
del(eesti_soil_red1_soc_bd)
del(eesti_soil_red1_textures)
eesti_soil_red1[['SOL_K1', 'SOL_K2', 'SOL_K3', 'SOL_K4']] = eesti_soil_red1.apply(get_ksat_awc, axis=1)
display(eesti_soil_red1.sample(10))
display(eesti_soil_red1.dtypes)
eesti_soil_red1 = eesti_soil_red1[['orig_fid', 'upd_siffer', 'WRB_code', 'Boniteet', 'Varv', 'Loimis1',
'loimis_rec', 'nlayers', 'SOL_ZMX', 'SOL_Z1', 'SOL_Z2', 'SOL_Z3',
'SOL_Z4', 'EST_TXT1', 'LXTYPE1', 'EST_TXT2', 'LXTYPE2', 'EST_TXT3',
'LXTYPE3', 'EST_TXT4', 'LXTYPE4', 'SOL_CLAY1', 'SOL_SILT1', 'SOL_SAND1',
'SOL_ROCK1', 'SOL_CLAY2', 'SOL_SILT2', 'SOL_SAND2', 'SOL_ROCK2',
'SOL_CLAY3', 'SOL_SILT3', 'SOL_SAND3', 'SOL_ROCK3', 'SOL_CLAY4',
'SOL_SILT4', 'SOL_SAND4', 'SOL_ROCK4', 'SOL_SOC1', 'SOL_SOC2',
'SOL_SOC3', 'SOL_SOC4', 'SOL_BD1', 'SOL_BD2', 'SOL_BD3', 'SOL_BD4',
'SOL_K1', 'SOL_K2', 'SOL_K3', 'SOL_K4',
'slp_mean', 'slp_median',
'slp_stdev', 'twi_mean', 'twi_median', 'twi_stdev', 'ls_mean',
'ls_median', 'ls_stdev', 'tri_mean', 'tri_median', 'tri_stdev',
'area_drain', 'drain_pct', 'geometry']]
# + SOL_K + SOL_AWC
eesti_soil_red1.to_file('data/eesti_soil_red1_texture_fix_geo_drain_soc_bd_ksat.shp', encoding='utf-8')
###Output
_____no_output_____
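###Markdown
The four nearly identical blocks in `get_ksat_awc` above could also be written as a loop over the layer index. The sketch below is an untested refactoring idea using the same lookup table and the same GRAVELS fall-back, not a verified drop-in replacement.
###Code
def get_ksat_awc_compact(row):
    sol_k = [0, 0, 0, 0]
    for layer in range(1, 5):
        if row['nlayers'] < layer:
            break
        idx = row[f'EST_TXT{layer}']
        if idx is None and row[f'LXTYPE{layer}'] == 'GRAVELS':
            idx = 'kr'
        try:
            # same lookup as above; only k_sat is used here
            sol_k[layer - 1] = texture_rules_ksat_ext[idx]['k_sat']
        except KeyError as ex:
            print(ex)
    return pd.Series(sol_k)
###Output
_____no_output_____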
###Markdown
merge of the external AWC summary
- load result dataset from 05_hydrogrids_extents_and_awc.ipynb
- there we should have AWC for layers 1-4
- merge into main working dataset
###Code
awc_layers = gpd.read_file(f"data/eesti_soil_red1_texture_fix_geo_redo_awc_merged_layers.shp", encoding='utf-8')
columns_to_keep = [ 'orig_fid', 'SOL_Z2', 'SOL_Z3', 'SOL_Z4', 'SOL_AWC1', 'SOL_AWC2', 'SOL_AWC3', 'SOL_AWC4']
awc_update = awc_layers[columns_to_keep].copy()
del(awc_layers)
eesti_soil_red1_awc = gpd.read_file("data/eesti_soil_red1_texture_fix_geo_drain_soc_bd_ksat.shp", encoding='utf-8')
eesti_soil_red1_awc = pd.merge(left=eesti_soil_red1_awc, right=awc_update, left_on='orig_fid', right_on='orig_fid', how='left')
display(eesti_soil_red1_awc.sample(10))
display(eesti_soil_red1_awc.dtypes)
# SOL_AWC1_x float64
# SOL_AWC2_x float64
# SOL_AWC3_x float64
# SOL_AWC4_x
# SOL_AWC1_y float64
# SOL_AWC2_y float64
# SOL_AWC3_y float64
# SOL_AWC4_y
# SOL_Z2_x int64
# SOL_Z3_x int64
# SOL_Z4_x
# SOL_Z2_y float64
# SOL_Z3_y float64
# SOL_Z4_y
eesti_soil_red1_awc.drop( columns = [
"SOL_Z2_x",
"SOL_Z3_x",
"SOL_Z4_x",
"SOL_AWC1_x",
"SOL_AWC2_x",
"SOL_AWC3_x",
"SOL_AWC4_x" ], inplace=True)
eesti_soil_red1_awc.rename(columns={
"SOL_Z2_y" : "SOL_Z2",
"SOL_Z3_y" : "SOL_Z3",
"SOL_Z4_y" : "SOL_Z4",
"SOL_AWC1_y" : "SOL_AWC1",
"SOL_AWC2_y" : "SOL_AWC2",
"SOL_AWC3_y" : "SOL_AWC3",
"SOL_AWC4_y" : "SOL_AWC4" }, inplace=True)
eesti_soil_red1_awc = eesti_soil_red1_awc[['orig_fid', 'upd_siffer', 'WRB_code', 'Boniteet', 'Varv', 'Loimis1',
'loimis_rec', 'nlayers', 'SOL_ZMX', 'SOL_Z1', 'SOL_Z2', 'SOL_Z3',
'SOL_Z4', 'EST_TXT1', 'LXTYPE1', 'EST_TXT2', 'LXTYPE2', 'EST_TXT3',
'LXTYPE3', 'EST_TXT4', 'LXTYPE4', 'SOL_CLAY1', 'SOL_SILT1', 'SOL_SAND1',
'SOL_ROCK1', 'SOL_CLAY2', 'SOL_SILT2', 'SOL_SAND2', 'SOL_ROCK2',
'SOL_CLAY3', 'SOL_SILT3', 'SOL_SAND3', 'SOL_ROCK3', 'SOL_CLAY4',
'SOL_SILT4', 'SOL_SAND4', 'SOL_ROCK4', 'SOL_SOC1', 'SOL_SOC2',
'SOL_SOC3', 'SOL_SOC4', 'SOL_BD1', 'SOL_BD2', 'SOL_BD3', 'SOL_BD4',
'SOL_K1', 'SOL_K2', 'SOL_K3', 'SOL_K4',
'SOL_AWC1', 'SOL_AWC2', 'SOL_AWC3', 'SOL_AWC4',
'slp_mean', 'slp_median',
'slp_stdev', 'twi_mean', 'twi_median', 'twi_stdev', 'ls_mean',
'ls_median', 'ls_stdev', 'tri_mean', 'tri_median', 'tri_stdev',
'area_drain', 'drain_pct', 'geometry']]
display(eesti_soil_red1_awc.sample(10))
display(eesti_soil_red1_awc.dtypes)
eesti_soil_red1_awc.to_file('data/eesti_soil_red1_texture_fix_geo_drain_soc_bd_ksat_awc_final.shp', encoding='utf-8')
###Output
_____no_output_____
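###Markdown
A side note on the `_x`/`_y` handling above: when both frames carry non-key columns with the same name, `pd.merge` suffixes them, which is why the left-hand duplicates are dropped and the right-hand ones renamed. A tiny illustration with made-up frames (not project data):
###Code
import pandas as pd

left = pd.DataFrame({'orig_fid': [1, 2], 'SOL_AWC1': [0.10, 0.12]})
right = pd.DataFrame({'orig_fid': [1, 2], 'SOL_AWC1': [0.15, 0.16]})
merged = pd.merge(left=left, right=right, on='orig_fid', how='left')
# the shared non-key column comes back twice: SOL_AWC1_x (left) and SOL_AWC1_y (right)
print(merged.columns.tolist())
###Output
_____no_output_____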
###Markdown
HYDGRP and USLE_K
- then we need K_SAT, ROCK, EST_TXT, NLAYERS, SAND
- we need to get these 4 A B C D groups for SWAT HYDGRP
- for USLE_K we need 6 groups based on rockiness properties
###Code
# reload_the_latest
# eesti_soil_red1_awc.to_file('data/eesti_soil_red1_texture_fix_geo_drain_soc_bd_ksat_awc_final.shp', encoding='utf-8')
eesti_soil_red1_awc = gpd.read_file(f"data/eesti_soil_red1_texture_fix_geo_drain_soc_bd_ksat_awc_final.shp", encoding='utf-8')
display(eesti_soil_red1_awc.sample(10))
display(eesti_soil_red1_awc.dtypes)
def decide_swat_hydgrp(row):
# hydgrp info
# A sands and gravel that are deep and well draining, high infiltration rates
# B moderate infiltration rates, moderately fine to moderately coarse, moderately deep and well drained
# C slow infiltration rates, moderately fine to fine textures, with a layer that impedes downward movement
# D very slow infiltration, clay soils, clay pan or clay layer near surface, with high swelling potential;
# or high permanent water table; or shallow soil above near impervious materials
# former work
# hydgrp = 'A'
# if lxtype in ['S', 'GRAVELS']:
# hydgrp = 'A'
# elif lxtype in ['L', 'LS', 'SL']:
# hydgrp = 'B'
# elif lxtype in ['HUMUS', 'SiL']:
# hydgrp = 'C'
# else:
# hydgrp = 'D' # 'PEAT', 'SiCL', 'CL', 'C', 'HC'
# return hydgrp
# current approach
# 'S', 'LS', 'L', 'SL', 'PEAT', 'GRAVELS', 'CL', 'C', 'HUMUS', None, 'HC', 'no_info'
layer_classes = set([ row['LXTYPE1'], row['LXTYPE2'], row['LXTYPE3'], row['LXTYPE4'] ])
sandy_t = layer_classes - set(['L', 'SL', 'PEAT', 'GRAVELS', 'CL', 'C', 'HUMUS', None, 'HC', 'no_info'])
hydgrp = 'A'
if 'PEAT' in layer_classes or 'HUMUS' in layer_classes or 'SiL' in layer_classes:
hydgrp = 'C'
elif 'SiCL' in layer_classes or 'CL' in layer_classes or 'C' in layer_classes or 'HC' in layer_classes:
hydgrp = 'D'
elif 'LS' in sandy_t or 'S' in sandy_t:
hydgrp = 'A'
else:
hydgrp = 'B'
return hydgrp
def estimate_soil_permeability_class(row):
# Soil permeability classes and saturated hydraulic conductivity ranges
# estimated from major soil textural classes (p = 1: very rapid, …, p = 6: very slow; Table 2).
# Permeability class (p) Texture Saturated hydraulic conductivity, mm h− 1
# 1 (fast and very fast) Sand > 61.0
# 2 (moderate fast) Loamy sand, sandy loam 20.3–61.0
# 3 (moderate) Loam, silty loam 5.1–20.3
# 4 (moderate low) Sandy clay loam, clay loam 2.0–5.1
# 5 (slow) Silty clay loam, sand clay 1.0–2.0
# 6 (very slow) Silty clay, clay < 1.0
# S
# GRAVELS
# -
# LS
# SL
# -
# L
# SiL
# -
# CL
# -
# SiCL
# -
# C
# HC
# PEAT
# HUMUS
p = 1
# if row['SOL_K1'] >= 61.0:
# p = 1
# elif row['SOL_K1'] > 61.0 >= row['SOL_K1'] >= 20.3:
# p = 2
# elif row['SOL_K1'] > 20.3 >= row['SOL_K1'] >= 5.1:
# p = 3
# elif row['SOL_K1'] > 5.1 >= row['SOL_K1'] >= 2.0:
# p = 4
# elif row['SOL_K1'] > 2.0 >= row['SOL_K1'] >= 1.0:
# p = 5
# else:
# p = 6
if row['LXTYPE1'] in ['S', 'GRAVELS']:
p = 1
elif row['LXTYPE1'] in ['LS', 'SL']:
p = 2
elif row['LXTYPE1'] in ['L', 'SiL']:
p = 3
elif row['LXTYPE1'] in ['CL']:
p = 4
elif row['LXTYPE1'] in ['SiCL']:
p = 5
elif row['LXTYPE1'] in ['PEAT', 'HUMUS', 'C', 'HC']:
p = 6
else:
p = 3 # if unknown
return p
def decide_structural_class(row):
# Structure class (s) European Soil Database
# 1 (very fine granular: 1–2 mm) G (good)
# 2 (fine granular: 2–5 mm) N (normal)
# 3 (medium or coarse granular: 5–10 mm) P (poor)
# 4 (blocky, platy or massive: > 10 mm) H (humic or peaty top soil)
# 'S', 'LS', 'L', 'SL', 'PEAT', 'GRAVELS', 'CL', 'C', 'HUMUS', None, 'HC', 'no_info'
peenes_code = row['LXTYPE1']
rock_content = row['SOL_ROCK1']
s = 1
if peenes_code in ['PEAT','HUMUS']:
s = 4
elif peenes_code in [ 'S', 'LS', 'L', 'SL', 'CL', 'C', 'HC' ] and rock_content <= 10:
s = 1
elif peenes_code in [ 'S', 'LS', 'L', 'SL', 'CL', 'C', 'HC', 'GRAVELS' ] and rock_content <= 15:
s = 2
elif rock_content > 15:
s = 3
# elif rock_content >= 50:
# s = 4
else:
s = 2
return s
def usle_k_calc_for(m_clay, m_silt, m_sand, cbn_om, structural_class, permeability_class):
# m_clay [%] clay fraction content (< 0.002 mm);
# m_silt [%] silt fraction content (0.002–0.05 mm);
# m_sand [%] very fine sand fraction content (0.05–0.1 mm);
# om [%] the organic matter content;
M_txt_factor = (m_silt + m_sand) * (100 - m_clay)
om = 1.27 * cbn_om
# om = cbn_om
p = permeability_class
s = structural_class
K_factor = ( ( (2.1*math.pow(10,-4)) * (math.pow(M_txt_factor,1.14) * (12-om) ) + (3.25 * (s-2) ) + (2.5 * (p-3)) ) / 100)
return K_factor
def func(x):
# fittied for a strong decay function toward 0 for all too large SOC/too small USLE_K equation results
a = 1.32434513
b = 0.4747399
c = 0.00542093
return a * np.exp(-b * x) + c
def apply_hydrgrp_and_usle_k(row):
hydrgrp = decide_swat_hydgrp(row)
structural_class = decide_structural_class(row)
permeability_class = estimate_soil_permeability_class(row)
usle_k = usle_k_calc_for(m_clay = row['SOL_CLAY1'], m_silt = row['SOL_SILT1'], m_sand = row['SOL_SAND1'],
cbn_om = row['SOL_SOC1'], structural_class = structural_class,
permeability_class = permeability_class)
if usle_k < 0:
usle_k = func(row['SOL_SOC1'])
return pd.Series([hydrgrp, usle_k, structural_class, permeability_class])
eesti_soil_red1_awc[['HYDGRP','USLE_K', 'S_TXT_CLASS', 'P_SOIL_CLASS']] = eesti_soil_red1_awc.apply(lambda x: apply_hydrgrp_and_usle_k(x), axis=1)
display(eesti_soil_red1_awc.sample(10))
display(eesti_soil_red1_awc.dtypes)
eesti_soil_red1_awc['SOL_SOC1'].describe()
eesti_soil_red1_awc['USLE_K'].describe()
eesti_soil_red1_awc = eesti_soil_red1_awc[['orig_fid', 'upd_siffer', 'WRB_code', 'Boniteet', 'Varv', 'Loimis1',
'loimis_rec', 'nlayers', 'SOL_ZMX', 'SOL_Z1', 'SOL_Z2', 'SOL_Z3',
'SOL_Z4', 'EST_TXT1', 'LXTYPE1', 'EST_TXT2', 'LXTYPE2', 'EST_TXT3',
'LXTYPE3', 'EST_TXT4', 'LXTYPE4', 'SOL_CLAY1', 'SOL_SILT1', 'SOL_SAND1',
'SOL_ROCK1', 'SOL_CLAY2', 'SOL_SILT2', 'SOL_SAND2', 'SOL_ROCK2',
'SOL_CLAY3', 'SOL_SILT3', 'SOL_SAND3', 'SOL_ROCK3', 'SOL_CLAY4',
'SOL_SILT4', 'SOL_SAND4', 'SOL_ROCK4', 'SOL_SOC1', 'SOL_SOC2',
'SOL_SOC3', 'SOL_SOC4', 'SOL_BD1', 'SOL_BD2', 'SOL_BD3', 'SOL_BD4',
'SOL_K1', 'SOL_K2', 'SOL_K3', 'SOL_K4',
'SOL_AWC1', 'SOL_AWC2', 'SOL_AWC3', 'SOL_AWC4','USLE_K','HYDGRP',
'slp_mean', 'slp_median',
'slp_stdev', 'twi_mean', 'twi_median', 'twi_stdev', 'ls_mean',
'ls_median', 'ls_stdev', 'tri_mean', 'tri_median', 'tri_stdev',
'area_drain', 'drain_pct', 'geometry']]
display(eesti_soil_red1_awc.sample(10))
display(eesti_soil_red1_awc.dtypes)
eesti_soil_red1_awc.to_file('data/eesti_soil_red1_texture_fix_geo_drain_soc_bd_ksat_awc_usle_hydgrp_final.shp', driver='ESRI Shapefile', encoding='utf-8')
eesti_soil_red1_SWAT = eesti_soil_red1_awc[['orig_fid', 'upd_siffer', 'WRB_code', 'Boniteet', 'Varv', 'Loimis1',
'loimis_rec', 'nlayers', 'SOL_ZMX', 'SOL_Z1', 'SOL_Z2', 'SOL_Z3',
'SOL_Z4', 'EST_TXT1', 'LXTYPE1', 'EST_TXT2', 'LXTYPE2', 'EST_TXT3',
'LXTYPE3', 'EST_TXT4', 'LXTYPE4', 'SOL_CLAY1', 'SOL_SILT1', 'SOL_SAND1',
'SOL_ROCK1', 'SOL_CLAY2', 'SOL_SILT2', 'SOL_SAND2', 'SOL_ROCK2',
'SOL_CLAY3', 'SOL_SILT3', 'SOL_SAND3', 'SOL_ROCK3', 'SOL_CLAY4',
'SOL_SILT4', 'SOL_SAND4', 'SOL_ROCK4', 'SOL_SOC1', 'SOL_SOC2',
'SOL_SOC3', 'SOL_SOC4', 'SOL_BD1', 'SOL_BD2', 'SOL_BD3', 'SOL_BD4',
'SOL_K1', 'SOL_K2', 'SOL_K3', 'SOL_K4',
'SOL_AWC1', 'SOL_AWC2', 'SOL_AWC3', 'SOL_AWC4','USLE_K','HYDGRP',
'slp_mean', 'twi_mean', 'ls_mean', 'tri_mean',
'area_drain', 'drain_pct', 'geometry']]
eesti_soil_red1_SWAT.to_file('data/eesti_soil_red1_texture_fix_geo_drain_soc_bd_ksat_awc_usle_hydgrp_final_swat.shp', driver='ESRI Shapefile', encoding='utf-8')
for i in range(1,8):
print(i)
###Output
1
2
3
4
5
6
7
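###Markdown
A quick worked example of the USLE K equation implemented above, using made-up topsoil values (20 % clay, 40 % silt, 10 % very fine sand, 2 % SOC, structure class 2, permeability class 3) just to show the order of magnitude of the result:
###Code
example_k = usle_k_calc_for(m_clay=20, m_silt=40, m_sand=10, cbn_om=2,
                            structural_class=2, permeability_class=3)
# M = (40 + 10) * (100 - 20) = 4000, om = 1.27 * 2, so K works out to roughly 0.25
print(round(example_k, 3))
###Output
_____no_output_____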
|
python/.ipynb_checkpoints/python-exercises-checkpoint.ipynb | ###Markdown
Python Exercises
This notebook is for programming exercises in python using:
* Statistics
* Inbuilt Functions and Libraries
* Pandas
* Numpy
###Code
import math
import numpy as np
import pandas as pd
import re
from operator import itemgetter, attrgetter
###Output
_____no_output_____
###Markdown
Python Statistics
###Code
def median(dataPoints):
"computer median of given data points"
if not dataPoints:
        raise ValueError('no datapoints passed')
sortedpoints=sorted(dataPoints)
mid=len(dataPoints)//2
#even
#print mid , sortedpoints
if len(dataPoints)%2==0:
return (sortedpoints[mid-1] + sortedpoints[mid])/2.0
else:
# odd
return sortedpoints[mid]
def range(dataPoints):
"compute range of given data points"
    if not dataPoints:
        raise ValueError('no datapoints passed')
    # range of a sample is max - min; note this helper shadows the builtin range()
    return max(dataPoints) - min(dataPoints)
def quartiles(dataPoints):
"computer first and last quartile in the datalist"
if not dataPoints:
        raise ValueError('no datapoints passed')
sortedpoints=sorted(dataPoints)
mid=len(dataPoints)//2
#even
if(len(dataPoints)%2==0):
print sortedpoints[:mid]
lowerQ=median(sortedpoints[:mid])
upperQ=median(sortedpoints[mid:])
else:
lowerQ=median(sortedpoints[:mid])
upperQ=median(sortedpoints[mid+1:])
return lowerQ,upperQ
def summary(dataPoints):
"print stat summary of data"
if not dataPoints:
        raise ValueError('no datapoints passed')
print "Summary Statistics:"
print ("Min : " , min(dataPoints))
print ("First Quartile : ",quartiles(dataPoints)[0] )
print ("median : ", median(dataPoints))
print ("Second Quartile : ", quartiles(dataPoints)[1])
print ("max : ", max(dataPoints))
return ""
datapoints=[68, 83, 58, 84, 100, 64]
#quartiles(datapoints)
print summary(datapoints)
###Output
Summary Statistics:
('Min : ', 58)
('First Quartile : ', 64)
('median : ', 75.5)
('Second Quartile : ', 84)
('max : ', 100)
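###Markdown
A quick cross-check of the hand-rolled median against numpy (quartiles are not compared here, because the median-of-halves rule above and numpy's percentile interpolation can give different values):
###Code
# expected: 75.5, matching median(datapoints) above
print(np.median(datapoints))
###Output
_____no_output_____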
###Markdown
Some simpler exercises based on common python function ``Question:Write a program that calculates and prints the value according to the given formula:Q = Square root of [(2 * C * D)/H]Following are the fixed values of C and H:C is 50. H is 30.D is the variable whose values should be input to your program in a comma-separated sequence.ExampleLet us assume the following comma separated input sequence is given to the program:100,150,180The output of the program should be:18,22,24``
###Code
C=50
H=30
def f1(inputList):
answer= [math.sqrt((2*C*num*1.0)/H) for num in inputList]
return ','.join(str (int(round(num))) for num in answer)
string='100,150,180'
nums=[int(num ) for num in string.split(',')]
type(nums)
print f1(nums)
###Output
18,22,24
###Markdown
``Question:Write a program which takes 2 digits, X,Y as input and generates a 2-dimensional array. The element value in the i-th row and j-th column of the array should be i*j.Note: i=0,1.., X-1; j=0,1,¡Y-1.ExampleSuppose the following inputs are given to the program:3,5Then, the output of the program should be:[[0, 0, 0, 0, 0], [0, 1, 2, 3, 4], [0, 2, 4, 6, 8]] ``
###Code
dimensions=[3,5]
rows=dimensions[0]
columns=dimensions[1]
array=np.zeros((rows,columns))
#print array
for row in range(rows):
for column in range(columns):
array[row][column]=row*column
print array
###Output
[[ 0. 0. 0. 0. 0.]
[ 0. 1. 2. 3. 4.]
[ 0. 2. 4. 6. 8.]]
###Markdown
``Question:Write a program that accepts a comma separated sequence of words as input and prints the words in a comma-separated sequence after sorting them alphabetically.Suppose the following input is supplied to the program:without,hello,bag,worldThen, the output should be:bag,hello,without,world``
###Code
string='without,hello,bag,world'
wordList=string.split(',')
wordList.sort()
#print wordList
print ','.join(word for word in wordList)
###Output
bag,hello,without,world
###Markdown
``Question:A website requires the users to input username and password to register. Write a program to check the validity of password input by users.Following are the criteria for checking the password: 1. At least 1 letter between [a-z] 2. At least 1 number between [0-9] 3. At least 1 letter between [A-Z] 4. At least 1 character from [$#@] 5. Minimum length of transaction password: 6 6. Maximum length of transaction password: 12. Your program should accept a sequence of comma separated passwords and will check them according to the above criteria. Passwords that match the criteria are to be printed, each separated by a comma.ExampleIf the following passwords are given as input to the program:ABd1234@1,a F1,2w3E*,2We3345Then, the output of the program should be:ABd1234@1``
###Code
def check_password(items):
values=[]
for string in items:
        if len(string) < 6 or len(string) > 12:
continue
else :
pass
if not re.search('[a-z]',string):
continue
elif not re.search('[0-9]',string):
continue
elif not re.search('[A-Z]',string):
continue
elif not re.search('[$#@]',string):
continue
elif re.search('\s',string):
continue
else :pass
values.append(string)
return ','.join(pwd for pwd in values)
string='ABd1234@1,a F1#,2w3E*,2We3345 '
items=string.split(',')
print check_password(items)
###Output
ABd1234@1
###Markdown
``Question:You are required to write a program to sort the (name, age, height) tuples by ascending order where name is string, age and height are numbers. The tuples are input by console. The sort criteria is:1: Sort based on name;2: Then sort based on age;3: Then sort by score.The priority is that name > age > score.If the following tuples are given as input to the program:Tom,19,80John,20,90Jony,17,91Jony,17,93Json,21,85Then, the output of the program should be:[('John', '20', '90'), ('Jony', '17', '91'), ('Jony', '17', '93'), ('Json', '21', '85'), ('Tom', '19', '80')]``
###Code
string= 'Tom,19,80 John,20,90 Jony,17,91 Jony,17,93 Json,21,85'
items= [ tuple(item.split(',')) for item in string.split(' ')]
print sorted(items, key=itemgetter(0,1,2))
###Output
[('John', '20', '90'), ('Jony', '17', '91'), ('Jony', '17', '93'), ('Json', '21', '85'), ('Tom', '19', '80')]
###Markdown
``Question:Write a program to compute the frequency of the words from the input. The output should output after sorting the key alphanumerically. Suppose the following input is supplied to the program:New to Python or choosing between Python 2 and Python 3? Read Python 2 or Python 3.Then, the output should be:2:23.:13?:1New:1Python:5Read:1and:1between:1choosing:1or:2to:1``
###Code
string='New to Python or choosing between Python 2 and Python 3? Read Python 2 or Python 3.'
freq={}
for word in string.split(' '):
freq[word]=freq.get(word,0)+1
words=freq.keys()
for item in sorted(words):
print "%s:%d" %(item,freq.get(item))
###Output
2:2
3.:1
3?:1
New:1
Python:5
Read:1
and:1
between:1
choosing:1
or:2
to:1
###Markdown
Pandas based exercises
Some exercises related to using pandas for dataframe operations
The source of these exercises is at: https://github.com/ajcr/100-pandas-puzzles/blob/master/100-pandas-puzzles-with-solutions.ipynb
###Code
data = {'animal': ['cat', 'cat', 'snake', 'dog', 'dog', 'cat', 'snake', 'cat', 'dog', 'dog'],
'age': [2.5, 3, 0.5, np.nan, 5, 2, 4.5, np.nan, 7, 3],
'visits': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1],
'priority': ['yes', 'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no']}
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']
# Create a DataFrame df from this dictionary data which has the index labels.
df = pd.DataFrame(data,index=labels)
#display summary of the basic information
df.info()
df.describe()
# return first 3 , last 3 rows of dataframe
print df.head(3)
#df.iloc[:3]
print ' '
print df.iloc[-3:]
#print df.tail(3)
# Select just the 'animal' and 'age' columns from the DataFrame df.
df[['animal','age']]
#df.loc[:,['animal','age']]
#Select the data in rows [3, 4, 8] and in columns ['animal', 'age'].
df.loc[df.index[[3,4,8]], ['animal','age']]
# Select only the rows where the number of visits is greater than 3.
df[df['visits']>3]
# Select the rows where the age is missing, i.e. is NaN.
df[df['age'].isnull()]
#Select the rows where the animal is a cat and the age is less than 3.
df[ (df['animal']=='cat') & (df['age'] <3) ]
#Select the rows the age is between 2 and 4 (inclusive).
df[df['age'].between(2,4)]
#Change the age in row 'f' to 1.5
df.loc['f','age']=1.5
#Calculate the sum of all visits (the total number of visits).
df['visits'].sum()
#Calculate the mean age for each different animal in df.
df.groupby('animal')['age'].mean()
# Append a new row 'k' to df with your choice of values for each column. Then delete that row to return the original DataFrame.
df.loc['k'] = [5.5, 'dog', 'no', 2]
# and then deleting the new row...
df = df.drop('k')
# Count the number of each type of animal in df.
df['animal'].value_counts()
#Sort df first by the values in the 'age' in decending order, then by the value in the 'visit' column in ascending order.
df.sort_values(by=['age','visits'], ascending=[False,True])
# The 'priority' column contains the values 'yes' and 'no'.
#Replace this column with a column of boolean values: 'yes' should be True and 'no' should be False.
df['priority']=df['priority'].map({'yes': True, 'no':False})
# In the 'animal' column, change the 'snake' entries to 'python'.
df['animal']= df['animal'].replace({'snake': 'python'})
# For each animal type and each number of visits, find the mean age.
#In other words, each row is an animal, each column is a number of visits and the values are the mean ages
#(hint: use a pivot table).
df.pivot_table(index='animal', columns='visits', values='age' , aggfunc='mean')
###Output
_____no_output_____
###Markdown
DataFrames: beyond the basics
###Code
# You have a DataFrame df with a column 'A' of integers. For example:
df = pd.DataFrame({'A': [1, 2, 2, 3, 4, 5, 5, 5, 6, 7, 7]})
#How do you filter out rows which contain the same integer as the row immediately above?
df.loc[df['A'].shift() != df['A']]
#Given a DataFrame of numeric values, say
df = pd.DataFrame(np.random.random(size=(5, 3))) # a 5x3 frame of float values
#how do you subtract the row mean from each element in the row?
#print df
# axis=1 means row wise , axis=0 means columnwise
df.sub(df.mean(axis=1), axis=0)
#Suppose you have DataFrame with 10 columns of real numbers, for example:
df = pd.DataFrame(np.random.random(size=(5, 10)), columns=list('abcdefghij'))
#Which column of numbers has the smallest sum? (Find that column's label.)
#print df.sum(axis=0)
df.sum(axis=0).idxmin()
# How do you count how many unique rows a DataFrame has (i.e. ignore all rows that are duplicates)?
len(df) - df.duplicated(keep=False).sum()
# better is
print len(df.drop_duplicates(keep=False))
#You have a DataFrame that consists of 10 columns of floating--point numbers.
#Suppose that exactly 5 entries in each row are NaN values.
#For each row of the DataFrame, find the column which contains the third NaN value.
#(You should return a Series of column labels.)
(df.isnull().cumsum(axis=1)==3).idxmax(axis=1)
# A DataFrame has a column of groups 'grps' and and column of numbers 'vals'. For example:
df = pd.DataFrame({'grps': list('aaabbcaabcccbbc'),
'vals': [12,345,3,1,45,14,4,52,54,23,235,21,57,3,87]})
#For each group, find the sum of the three greatest values.
df.groupby('grps')['vals'].nlargest(3).sum(level=0)
#A DataFrame has two integer columns 'A' and 'B'. The values in 'A' are between 1 and 100 (inclusive).
#For each group of 10 consecutive integers in 'A' (i.e. (0, 10], (10, 20], ...),
#calculate the sum of the corresponding values in column 'B'.
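# A sketch of one possible solution (the original notebook stops at the question above).
# No 'A'/'B' frame exists in this cell, so a small random example frame is built here.
df_ab = pd.DataFrame({'A': np.random.randint(1, 101, size=100),
                      'B': np.random.randint(0, 10, size=100)})
# bin 'A' into (0, 10], (10, 20], ... and sum the matching 'B' values per bin
binned_sums = df_ab.groupby(pd.cut(df_ab['A'], np.arange(0, 101, 10)))['B'].sum()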
###Output
_____no_output_____
###Markdown
Numpy Exercises
The problems have been taken from the following resources:
* http://www.w3resource.com/python-exercises/numpy/index.php
###Code
# 1. Write a Python program to print the NumPy version in your system.
print (np.__version__)
#2. Write a Python program to convert a list of numeric values into a one-dimensional NumPy array.
l = [12.23, 13.32, 100, 36.32]
print 'original list: ' , l
print 'numpy array : ', np.array(l)
#Create a 3x3 matrix with values ranging from 2 to 10.
np.arange(2,11).reshape(3,3)
###Output
_____no_output_____ |
Course 2 - Improving Deep Neural Networks - Hyperparameter tuning, Regularization and Optimization/NoteBooks/Gradient+Checking+v1.ipynb | ###Markdown
Gradient CheckingWelcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking. You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker. But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".Let's do it!
###Code
# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
###Output
_____no_output_____
###Markdown
1) How does gradient checking work?Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$. Let's look back at the definition of a derivative (or gradient):$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."We know the following:- $\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly. - You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct. Lets use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct! 2) 1-dimensional gradient checkingConsider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct. **Figure 1** : **1D linear model** The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation"). **Exercise**: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
###Code
# GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
"""
Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
J -- the value of function J, computed using the formula J(theta) = theta * x
"""
### START CODE HERE ### (approx. 1 line)
J = np.dot(theta, x)
### END CODE HERE ###
return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
###Output
J = 8
###Markdown
**Expected Output**: ** J ** 8 **Exercise**: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.
###Code
# GRADED FUNCTION: backward_propagation
def backward_propagation(x, theta):
"""
Computes the derivative of J with respect to theta (see Figure 1).
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
dtheta -- the gradient of the cost with respect to theta
"""
### START CODE HERE ### (approx. 1 line)
dtheta = x
### END CODE HERE ###
return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
###Output
dtheta = 2
###Markdown
**Expected Output**: ** dtheta ** 2 **Exercise**: To show that the `backward_propagation()` function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.**Instructions**:- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow: 1. $\theta^{+} = \theta + \varepsilon$ 2. $\theta^{-} = \theta - \varepsilon$ 3. $J^{+} = J(\theta^{+})$ 4. $J^{-} = J(\theta^{-})$ 5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$- Then compute the gradient using backward propagation, and store the result in a variable "grad"- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$You will need 3 Steps to compute this formula: - 1'. compute the numerator using np.linalg.norm(...) - 2'. compute the denominator. You will need to call np.linalg.norm(...) twice. - 3'. divide them.- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
###Code
# GRADED FUNCTION: gradient_check
def gradient_check(x, theta, epsilon = 1e-7):
"""
Implement the backward propagation presented in Figure 1.
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
# Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
### START CODE HERE ### (approx. 5 lines)
thetaplus = theta + epsilon # Step 1
thetaminus = theta - epsilon # Step 2
J_plus = forward_propagation(x, thetaplus) # Step 3
J_minus = forward_propagation(x, thetaminus) # Step 4
gradapprox = (J_plus - J_minus) / (2 * epsilon) # Step 5
### END CODE HERE ###
# Check if gradapprox is close enough to the output of backward_propagation()
### START CODE HERE ### (approx. 1 line)
grad = backward_propagation(x, theta)
### END CODE HERE ###
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference < 1e-7:
print ("The gradient is correct!")
else:
print ("The gradient is wrong!")
return difference
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
###Output
The gradient is correct!
difference = 2.91933588329e-10
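###Markdown
A quick illustration (not part of the graded exercise) of how the same check flags an incorrect gradient: if the backward pass returned, say, $3x$ instead of $x$, the relative difference jumps far above the $10^{-7}$ threshold.
###Code
# deliberately wrong backward pass, for demonstration only
def backward_propagation_buggy(x, theta):
    return 3 * x  # the correct derivative of theta * x with respect to theta is x

x, theta = 2, 4
epsilon = 1e-7
gradapprox = (forward_propagation(x, theta + epsilon) - forward_propagation(x, theta - epsilon)) / (2 * epsilon)
grad = backward_propagation_buggy(x, theta)
difference = np.linalg.norm(grad - gradapprox) / (np.linalg.norm(grad) + np.linalg.norm(gradapprox))
print("difference = " + str(difference))  # about 0.5, clearly signalling a bug
###Output
_____no_output_____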
###Markdown
**Expected Output**:The gradient is correct! ** difference ** 2.9193358103083e-10 Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in `backward_propagation()`. Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it! 3) N-dimensional gradient checking The following figure describes the forward and backward propagation of your fraud detection model. **Figure 2** : **deep neural network***LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID*Let's look at your implementations for forward propagation and backward propagation.
###Code
def forward_propagation_n(X, Y, parameters):
"""
Implements the forward propagation (and computes the cost) presented in Figure 3.
Arguments:
X -- training set for m examples
Y -- labels for m examples
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (5, 4)
b1 -- bias vector of shape (5, 1)
W2 -- weight matrix of shape (3, 5)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
Returns:
cost -- the cost function (logistic cost for one example)
"""
# retrieve parameters
m = X.shape[1]
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
# Cost
logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)
cost = 1./m * np.sum(logprobs)
cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
return cost, cache
###Output
_____no_output_____
###Markdown
Now, run backward propagation.
###Code
def backward_propagation_n(X, Y, cache):
"""
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T) * 2
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 4./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
###Output
_____no_output_____
###Markdown
You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct. **How does gradient checking work?**.As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "`dictionary_to_vector()`" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.The inverse function is "`vector_to_dictionary`" which outputs back the "parameters" dictionary. **Figure 2** : **dictionary_to_vector() and vector_to_dictionary()** You will need these functions in gradient_check_n()We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.**Exercise**: Implement gradient_check_n().**Instructions**: Here is pseudo-code that will help you implement the gradient check.For each i in num_parameters:- To compute `J_plus[i]`: 1. Set $\theta^{+}$ to `np.copy(parameters_values)` 2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$ 3. Calculate $J^{+}_i$ using to `forward_propagation_n(x, y, vector_to_dictionary(`$\theta^{+}$ `))`. - To compute `J_minus[i]`: do the same thing with $\theta^{-}$- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to `parameter_values[i]`. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute: $$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$
###Code
# GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7):
"""
Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n
Arguments:
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
x -- input datapoint, of shape (input size, 1)
y -- true "label"
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
# Set-up variables
parameters_values, _ = dictionary_to_vector(parameters)
grad = gradients_to_vector(gradients)
num_parameters = parameters_values.shape[0]
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))
# Compute gradapprox
for i in range(num_parameters):
# Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
# "_" is used because the function you have to outputs two parameters but we only care about the first one
### START CODE HERE ### (approx. 3 lines)
thetaplus = np.copy(parameters_values) # Step 1
thetaplus[i][0] = thetaplus[i][0] + epsilon # Step 2
J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3
### END CODE HERE ###
# Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
### START CODE HERE ### (approx. 3 lines)
thetaminus = np.copy(parameters_values) # Step 1
thetaminus[i][0] = thetaminus[i][0] - epsilon # Step 2
J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3
### END CODE HERE ###
# Compute gradapprox[i]
### START CODE HERE ### (approx. 1 line)
gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)
### END CODE HERE ###
# Compare gradapprox to backward propagation gradients by computing difference.
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference > 1e-7:
print("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
print("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
###Output
[93mThere is a mistake in the backward propagation! difference = 0.285093156654[0m
|
Applications/AutoEncoderExample.ipynb | ###Markdown
AutoEncoder Example using DNN
Demonstrated using MNIST data
###Code
# imports added (missing in the original cell); mnist is assumed to come from Keras,
# and DNN/Dense/Input/copy/pyplot used below are assumed to be imported elsewhere
import numpy as np
from keras.datasets import mnist
(X_train_digits, y_train), (X_test_digits, y_test) = mnist.load_data()
X_train = np.array(list(map(lambda x: x.flatten()/255, X_train_digits)))
X_test = np.array(list(map(lambda x: x.flatten()/255, X_test_digits)))
class AutoEncoder():
"""
An auto encoder is a semi supervised learning algorithm that attempts to reconstruct input using a smaller feature space
Parameters:
X: numpy array(): data matrix
encoder: DNN to reduce dimensions of matrix
decoder: DNN to recreate the original data from the encoded data
full_model: DNN that combines both the encoder and decoder objects, used to train both
"""
def __init__(self,X):
self.X = X
self.encoder = None
self.decoder = None
self.full_model = DNN()
self.full_model.add(Input(X))
self.count = 0
def create_encoder(self,layers=[Dense(32),Dense(512)],encoded_dims=2):
self.count = 0
for layer in layers:
self.full_model.add(layer)
self.count += 1
self.full_model.add(Dense(encoded_dims))
def create_decoder(self,layers=[Dense(32)]):
if len(layers) > 0:
for layer in layers:
self.full_model.add(layer)
self.full_model.add(Dense(self.X.shape[-1]))
def finalize_encoder_decoder(self):
count = 0
layer = self.full_model.head.getNext()
self.encoder = DNN()
self.decoder = DNN()
self.encoder.add(Input(self.X))
while layer != None:
print(layer)
newlay = copy.deepcopy(layer)
if count <= self.count:
self.encoder.add(newlay)
self.encoder.outlayer.update(newlay.getWeights())
                if count == self.count:  # fixed: was a.count, which only worked because the instance happened to be named a
self.encoder.outlayer.next = None
self.decoder.add(Input(self.encoder.outlayer.output))
else:
self.decoder.add(newlay)
self.decoder.outlayer.update(newlay.getWeights())
layer = layer.getNext()
count += 1
def train(self,learning_rate=0.0001,epochs=100,loss="mse"):
self.full_model.fit(self.X,self.X,lr=learning_rate,epochs=epochs,loss=loss)
self.finalize_encoder_decoder()
def predict(self,X):
encoded = self.encoder.predict(X)
decoded = self.decoder.predict(encoded)
return encoded,decoded, self.full_model.predict(X)
a = AutoEncoder(X_train[:200])
a.create_encoder()
a.create_decoder()
prediction = a.full_model.predict(X_train[0].reshape(1,784)).reshape(28,28)
print(prediction)
plt.imshow(prediction)
a.train(epochs=10000,learning_rate=0.001,loss="mse")
a.decoder.forward(a.encoder.forward(X_train[0]))
a.predict(X_train[0])[2]
plt.imshow(a.predict(X_train[0])[2].reshape(28,28))
plt.imshow(X_train[0].reshape(28,28))
###Output
_____no_output_____ |
14_Longest_Collatz_Sequence.ipynb | ###Markdown
The following iterative sequence is defined for the set of positive integers:
$n → n/2$ (n is even)
$n → 3n + 1$ (n is odd)
Using the rule above and starting with 13, we generate the following sequence:
$13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1$
It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1.
Which starting number, under one million, produces the longest chain?
NOTE: Once the chain starts the terms are allowed to go above one million.
###Code
# Initial Solution
def find_value_with_longest_chain(upper_limit=1000000):
longest_chain = 0
for n in range(1, upper_limit):
count = 1
current_value = n
while n != 1:
if n % 2 == 0:
                n = n // 2  # integer division keeps n an int
else:
n = (3*n) + 1
count += 1
if count > longest_chain:
longest_chain = count
longest_value = current_value
return longest_value
print(find_value_with_longest_chain())
%timeit find_value_with_longest_chain()
# Optimized solution trading memory for speed
# the memo dictionary lives at module level so that the recursive helper can see it
counts = {1: 1}

def chain_count(n):
    if n in counts:
        return counts[n]
    if n % 2 == 0:
        counts[n] = 1 + chain_count(n // 2)
    else:
        counts[n] = 1 + chain_count((3*n) + 1)
    return counts[n]

def find_value_with_longest_chain(upper_limit=1000000):
    longest_chain = 0
    longest_value = 1
    for n in range(1, upper_limit):
        if chain_count(n) > longest_chain:
            longest_chain = chain_count(n)
            longest_value = n
    return longest_value
print(find_value_with_longest_chain())
%timeit find_value_with_longest_chain()
###Output
837799
1 loop, best of 3: 383 ms per loop
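###Markdown
An alternative memoization sketch (not from the original notebook; assumes Python 3): functools.lru_cache can replace the hand-rolled dictionary.
###Code
from functools import lru_cache

@lru_cache(maxsize=None)
def collatz_length(n):
    if n == 1:
        return 1
    return 1 + collatz_length(n // 2 if n % 2 == 0 else 3 * n + 1)

# chains for starting values below one million stay around 500 steps,
# comfortably under Python's default recursion limit
print(max(range(1, 1000000), key=collatz_length))  # 837799
###Output
_____no_output_____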
|
Spacy Learning/.ipynb_checkpoints/Chapter2-checkpoint.ipynb | ###Markdown
Readme:
This notebook documents what I learnt from https://course.spacy.io/en/
Special thanks to the content creators and the presenter Ines
If you want to learn more about spaCy, please visit https://spacy.io/ or https://course.spacy.io/en/
Thank you!
Chapter 2: Large-scale data analysis with spaCy
Vocab: stores data shared across multiple documents
- Strings are encoded to hash values
- Strings are only stored once in the StringStore, via nlp.vocab.strings
- The StringStore is a lookup table in both directions
- spaCy internally looks at the hash ID
- Hashes can't be reversed
###Code
import spacy
nlp = spacy.load("en_core_web_sm")
coffee_hash = nlp.vocab.strings["coffee"]
coffee_hash # hash value
coffee_string = nlp.vocab.strings[coffee_hash]
coffee_string # string value
# nlp.vocab.strings[319786]  # looking up a hash that was never stored raises an error - hashes can't be reversed
# lexeme doesn't have context-depended pos, dependencies, or entity labels
from spacy.lang.en import English
nlp = English()
doc = nlp("I have a cat")
# Look up the hash for the word "cat"
cat_hash = nlp.vocab.strings["cat"]
print(cat_hash)
# Look up the cat_hash to get the string
cat_string = nlp.vocab.strings[cat_hash]
print(cat_string)
from spacy.lang.en import English
nlp = English()
doc = nlp("David Bowie is a PERSON")
# Look up the hash for the string label "PERSON"
person_hash = nlp.vocab.strings["PERSON"]
print(person_hash)
# Look up the person_hash to get the string
person_string = nlp.vocab.strings[person_hash]
print(person_string)
# #from spacy.lang.en import English
# from spacy.lang.de import German
# # Create an English and German nlp object
# nlp = English()
# nlp_de = German()
# # Get the ID for the string 'Bowie'
# bowie_id = nlp.vocab.strings["Bowie"]
# print(bowie_id)
# # Look up the ID for "Bowie" in the vocab
# print(nlp_de.vocab.strings[bowie_id])
# errors! : The string "Bowie" isn’t in the German vocab, so the hash can’t be resolved in the string store.
# never see it, errors
# Hashes can’t be reversed. To prevent this problem, add the word to the new vocab by processing a text or looking up the string, or use the same vocab to resolve the hash back to a string.
###Output
_____no_output_____
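###Markdown
A small addition (not from the course) showing how to avoid the error above: add the string to the StringStore first, or test membership before resolving a hash.
###Code
from spacy.lang.en import English

nlp = English()
# adding a string returns its hash and makes the reverse lookup safe
bowie_hash = nlp.vocab.strings.add("Bowie")
print("Bowie" in nlp.vocab.strings)  # True
print(nlp.vocab.strings[bowie_hash])  # 'Bowie'
###Output
_____no_output_____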
###Markdown
Doc and Span are very powerful and hold references and relationships of words and sentences
* Convert results to strings as late as possible
* Use Token attributes if available, such as token.i for the token index
* Don't forget to pass in the shared vocab
###Code
# manually create Doc object
from spacy.lang.en import English
nlp = English()
# Import the Doc class
from spacy.tokens import Doc
# Desired text: "spaCy is cool!"
words = ["spaCy", "is", "cool", "!"]
spaces = [True, True, False, False]
# Create a Doc from the words and spaces
doc = Doc(nlp.vocab, words=words, spaces=spaces)
print(doc.text)
from spacy.lang.en import English
nlp = English()
# Import the Doc class
from spacy.tokens import Doc
# Desired text: "Go, get started!"
words = ["Go", ",", "get", "started", "!"]
spaces = [False, True, True, False, False]
# Create a Doc from the words and spaces
doc = Doc(nlp.vocab, words=words, spaces=spaces)
print(doc.text)
from spacy.lang.en import English
nlp = English()
# Import the Doc class
from spacy.tokens import Doc
# Desired text: "Oh, really?!"
words = ["Oh", ",","really", "?", "!"]
spaces = [False, True, False, False, False]
# Create a Doc from the words and spaces
doc = Doc(nlp.vocab,words=words, spaces=spaces)
print(doc.text)
from spacy.lang.en import English
nlp = English()
# Import the Doc and Span classes
from spacy.tokens import Doc, Span
words = ["I", "like", "David", "Bowie"]
spaces = [True, True, True, False]
# Create a doc from the words and spaces
doc = Doc(nlp.vocab, words=words, spaces=spaces)
print(doc.text)
# Create a span for "David Bowie" from the doc and assign it the label "PERSON"
span = Span(doc, 2, 4, label="PERSON")
print(span.text, span.label_)
# Add the span to the doc's entities
doc.ents = [span]
# Print entities' text and labels
print([(ent.text, ent.label_) for ent in doc.ents])
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Berlin looks like a nice city")
# Get all tokens and part-of-speech tags
token_texts = [token.text for token in doc]
pos_tags = [token.pos_ for token in doc]
for index, pos in enumerate(pos_tags):
# Check if the current token is a proper noun
if pos == "PROPN":
# Check if the next token is a verb
if pos_tags[index + 1] == "VERB":
result = token_texts[index]
print("Found proper noun before a verb:", result)
###Output
Found proper noun before a verb: Berlin
###Markdown
This is not a good code as it only uses lists of strings instead of native token attributes. This is often less efficient, and can't express complex relationships.
###Code
# This is a better way, doc[i] will gives you token, so can just use doc[token.i+1].pos_
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Berlin looks like a nice city")
for token in doc:
if token.pos_ == "PROPN":
if doc[token.i+1].pos_ == "VERB":
print("Found proper noun before a verb:", token.text)
import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Berlin looks like a nice city")
# Iterate over the tokens
for token in doc:
# Check if the current token is a proper noun
if token.pos_ == "PROPN":
# Check if the next token is a verb
if doc[token.i + 1].pos_ == "VERB":
print("Found proper noun before a verb:", token.text)
###Output
Found proper noun before a verb: Berlin
###Markdown
Word vectors and semantic similarity:
* spaCy can compare/predict similarity
* Doc/Span/Token.similarity
* Similarity score of 0 to 1
* Needs a model that has word vectors included
  * en_core_web_md (medium)
  * en_core_web_lg (large)
  * NOT en_core_web_sm (small model)
  * https://stackoverflow.com/questions/50487495/what-is-difference-between-en-core-web-sm-en-core-web-mdand-en-core-web-lg-mod
* Similarity is determined using word vectors
* generated using algorithms like Word2Vec and lots of text
* default: cosine similarity
* Doc and Span vectors default to the average of their token vectors
* short phrases work better than long documents with tons of irrelevant words
* similarity depends on the application context
###Code
import spacy
#!python3 -m spacy download en_core_web_md
nlp = spacy.load("en_core_web_md")
doc1 = nlp("I like cats")
doc2 = nlp("I hate cats")
print(doc1.similarity(doc2))
# both describe sentiments regarding cats, similiar
# but opposite sentiments
import spacy
# Load the en_core_web_md model
nlp = spacy.load("en_core_web_md")
# Process a text
doc = nlp("Two bananas in pyjamas")
# Get the vector for the token "bananas"
bananas_vector = doc[1].vector
print(bananas_vector)
import spacy
nlp = spacy.load("en_core_web_md")
doc1 = nlp("It's a warm summer day")
doc2 = nlp("It's sunny outside")
# Get the similarity of doc1 and doc2
similarity = doc1.similarity(doc2)
print(similarity)
import spacy
nlp = spacy.load("en_core_web_md")
doc = nlp("TV and books")
token1, token2 = doc[0], doc[2]
# Get the similarity of the tokens "TV" and "books"
similarity = token1.similarity(token2)
print(similarity)
import spacy
nlp = spacy.load("en_core_web_md")
doc = nlp("This was a great restaurant. Afterwards, we went to a really nice bar.")
# Create spans for "great restaurant" and "really nice bar"
span1 = doc[3:5]
span2 = doc[12:15]
# Get the similarity of the spans
similarity = span1.similarity(span2)
print(similarity)
print(span1)
print(span2)
###Output
0.75173926
great restaurant
really nice bar
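###Markdown
A quick check (not from the course) of the note above that Doc and Span vectors default to the average of their token vectors:
###Code
import numpy as np

doc = nlp("TV and books")
token_average = np.mean([token.vector for token in doc], axis=0)
print(np.allclose(doc.vector, token_average))  # True with en_core_web_md
###Output
_____no_output_____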
###Markdown
Combining models and rules
* Statistical models
  * use cases that need to generalize
  * entity recognizer, dependency parser, and POS tags
  * product/person names, subject/object relationships
* Rule-based systems
  * https://spacy.io/usage/rule-based-matching
  * dicts with finite examples
  * countries/cities, drug names, dog breeds
  * Matcher/PhraseMatcher/Tokenizer
  * phrase matching is great for matching large word lists
  * LOWER: the lowercase form of the token text (e.g. Silicon -> silicon), useful for comparing
  * the tokenizer already takes care of splitting off whitespace, and each dictionary in a pattern describes one token, so no extra {"TEXT": " "} entry is needed
  * IS_TITLE: whether the token is titlecase (cat -> Cat)
###Code
# example
from spacy.matcher import Matcher
matcher = Matcher(nlp.vocab)
pattern = [{"LEMMA":"love","POS":"VERB"},
{"LOWER":"cats"}]
matcher.add("LOVE_CATS", [pattern])
pattern = [{"TEXT":"very","OP":"+"},
{"TEXT":"happy"}]
matcher.add("VERY_HAPPY", [pattern])
doc = nlp("I love cats and i'm very happy")
matches = matcher(doc)
for match_id, start, end in matches:
span = doc[start:end]
print(span.text, span.root.head.text)
print(doc[start-1].text, doc[start-1].pos_)
import spacy
from spacy.matcher import Matcher
nlp = spacy.load("en_core_web_sm")
doc = nlp(
"Twitch Prime, the perks program for Amazon Prime members offering free "
"loot, games and other benefits, is ditching one of its best features: "
"ad-free viewing. According to an email sent out to Amazon Prime members "
"today, ad-free viewing will no longer be included as a part of Twitch "
"Prime for new members, beginning on September 14. However, members with "
"existing annual subscriptions will be able to continue to enjoy ad-free "
"viewing until their subscription comes up for renewal. Those with "
"monthly subscriptions will have access to ad-free viewing until October 15."
)
# Create the match patterns, follow the orders
pattern1 = [{"LOWER": "amazon"}, {"IS_TITLE": True, "POS": "PROPN"}]
pattern2 = [{"LOWER": "ad"}, {"TEXT": "-"}, {"LOWER": "free"}, {"POS": "NOUN"}]
# Initialize the Matcher and add the patterns
matcher = Matcher(nlp.vocab)
matcher.add("PATTERN1", [pattern1])
matcher.add("PATTERN2", [pattern2])
# Iterate over the matches
for match_id, start, end in matcher(doc):
# Print pattern string name and text of matched span
print(doc.vocab.strings[match_id], doc[start:end].text)
###Output
PATTERN1 Amazon Prime
PATTERN2 ad-free viewing
PATTERN1 Amazon Prime
PATTERN2 ad-free viewing
PATTERN2 ad-free viewing
PATTERN2 ad-free viewing
###Markdown
Sometimes it’s more efficient to match exact strings instead of writing patterns describing the individual tokens. This is especially true for finite categories of things – like all countries of the world. We already have a list of countries, so let’s use this as the basis of our information extraction script. A list of string names is available as the variable COUNTRIES.
###Code
# import json
# from spacy.lang.en import English
# with open("exercises/en/countries.json", encoding="utf8") as f:
# COUNTRIES = json.loads(f.read())
# nlp = English()
# doc = nlp("Czech Republic may help Slovakia protect its airspace")
# # Import the PhraseMatcher and initialize it
# from spacy.matcher import PhraseMatcher
# matcher = PhraseMatcher(nlp.vocab)
# # Create pattern Doc objects and add them to the matcher
# # This is the faster version of: [nlp(country) for country in COUNTRIES]
# patterns = list(nlp.pipe(COUNTRIES))
# matcher.add("COUNTRY", None, *patterns)
# # Call the matcher on the test document and print the result
# matches = matcher(doc)
# print([doc[start:end] for match_id, start, end in matches])
# return [Czech Republic, Slovakia]
import spacy
from spacy.matcher import PhraseMatcher
from spacy.tokens import Span
import json
with open("exercises/en/countries.json", encoding="utf8") as f:
COUNTRIES = json.loads(f.read())
with open("exercises/en/country_text.txt", encoding="utf8") as f:
TEXT = f.read()
nlp = spacy.load("en_core_web_sm")
matcher = PhraseMatcher(nlp.vocab)
patterns = list(nlp.pipe(COUNTRIES))
matcher.add("COUNTRY", None, *patterns)
# Create a doc and reset existing entities
doc = nlp(TEXT)
doc.ents = []
# Iterate over the matches
for match_id, start, end in matcher(doc): # match countries, and label as "GPE", and add to NER lists
# Create a Span with the label for "GPE"
span = Span(doc, start, end, label="GPE")
# Overwrite the doc.ents and add the span
doc.ents = list(doc.ents) + [span]
# Get the span's root head token
span_root_head = span.root.head
# Print the text of the span root's head token and the span text
print(span_root_head.text, "-->", span.text)
# Print the entities in the document
print([(ent.text, ent.label_) for ent in doc.ents if ent.label_ == "GPE"])
###Output
in --> Namibia
in --> South Africa
Africa --> Cambodia
of --> Kuwait
as --> Somalia
Somalia --> Haiti
Haiti --> Mozambique
in --> Somalia
for --> Rwanda
Britain --> Singapore
War --> Sierra Leone
of --> Afghanistan
invaded --> Iraq
in --> Sudan
of --> Congo
earthquake --> Haiti
[('Namibia', 'GPE'), ('South Africa', 'GPE'), ('Cambodia', 'GPE'), ('Kuwait', 'GPE'), ('Somalia', 'GPE'), ('Haiti', 'GPE'), ('Mozambique', 'GPE'), ('Somalia', 'GPE'), ('Rwanda', 'GPE'), ('Singapore', 'GPE'), ('Sierra Leone', 'GPE'), ('Afghanistan', 'GPE'), ('Iraq', 'GPE'), ('Sudan', 'GPE'), ('Congo', 'GPE'), ('Haiti', 'GPE')] |
Python/sort.ipynb | ###Markdown
Basic Python Functions
- `sorted(iterable, *, key=None, reverse=False)`: [ref1: official doc](https://docs.python.org/3/library/functions.html#sorted) | [ref2: Stack Overflow](https://stackoverflow.com/a/4233482/8280662)
- `list.sort(*, key=None, reverse=False)`: [ref](https://docs.python.org/3/library/stdtypes.html#list.sort)

[Sorting HowTo official doc](https://docs.python.org/3/howto/sorting.html#sortinghowto)

`sorted()`: `sorted(iterable, /, *, key=None, reverse=False)`

Sort a tuple by multiple keys. Sort by 2 keys in Python: a key can be a function that returns a `tuple`:
###Code
from collections import namedtuple
student_tuples = (
('john', 'A', 15),
('jane', 'C', 12),
('jane', 'B', 13),
('coco', 'B', 12),
('dave', 'B', 10),
('dave', 'A', 11)
)
# v1
print(student_tuples[0][:3])
sorted(student_tuples, key=lambda x: (x[0], x[1], x[2])) # sort by (name, grade, value) one by one
print(student_tuples[0][::-1])
print()
sorted(student_tuples, key=lambda x: (x[2], x[1], x[0]), reverse=True)
###Output
(15, 'A', 'john')
###Markdown
Call `sorted(iterable, key=None)` with a collection as the iterable. For `key`, pass a lambda function that takes an element as a parameter and returns a 2-tuple of the values to sort by.
###Code
a_list = ["aaa", "cc", "bb"]
new_list = sorted(a_list, key=lambda x: (len(x), x))
new_list
###Output
_____no_output_____
###Markdown
Sort a dictionary by value and key
###Code
from collections import Counter
words = ["i","love","leetcode","i","love","coding"]
counter = Counter(words)
counter
# sort dictionary directly will only sort by keys
sorted(counter)
# use `.items()` to get a list of tuples
sorted_counter = sorted(counter.items(), key=lambda x: (-x[1], x[0]))
sorted_counter
# we can turn it back to dictionary
dict(sorted_counter)
###Output
_____no_output_____
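###Markdown
The `-x[1]` trick above only works when the descending key is numeric. When it is not (for example, a string), a common alternative is to rely on the fact that Python's sort is stable and sort in two passes, secondary key first. A small sketch using the `counter` from the previous cell:
###Code
# sort by count descending, then by word ascending, without negating anything:
# stable sort lets us sort by the secondary key first, then by the primary key
items = list(counter.items())
items.sort(key=lambda x: x[0])                 # secondary key: word, ascending
items.sort(key=lambda x: x[1], reverse=True)   # primary key: count, descending
items
###Output
_____no_output_____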
###Markdown
`list.sort()`: `list.sort(*, key=None, reverse=False)` | [ref](https://docs.python.org/3/library/stdtypes.html#list.sort)
###Code
import random
import numpy as np
input = list(zip(range(10), np.arange(10)*10))
random.seed(2)
random.shuffle(input)
input
from typing import List, Tuple
def my_sort(v: List[Tuple]):
v.sort(reverse=True) # this will change the input inplace
return v
my_sort(input)
input
###Output
_____no_output_____ |
genattack.ipynb | ###Markdown
This is a somewhat modified implementation of the gradient-free black-box adversarial attack approach from https://arxiv.org/abs/1805.11090 (for now the official repository is empty, but here's the link: https://github.com/nesl/adversarial_genattack), prepared for the competition (https://competitions.codalab.org/competitions/19090) organized as part of the Machines Can See 2018 summit (http://machinescansee.com/). You can find the baseline code for the competition here: https://github.com/AlexanderParkin/MCS2018.Baseline. I used some of it for image preprocessing and saving adversarial examples.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import os
import pandas as pd
from PIL import Image
from showprogress import showprogress
import MCS2018
###Output
_____no_output_____
###Markdown
Control GPU usage
###Code
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0"
###Output
_____no_output_____
###Markdown
Load the black box model before importing torch (it doesn't work otherwise for some reason) :C You can find details of the contest here: https://competitions.codalab.org/competitions/19090. We first need to load the black box model.
###Code
# ! wget http://mcs2018-competition.visionlabs.ru/distribs/cuda9/ubuntu/MCS2018.cpython-36m-x86_64-linux-gnu.so
gpu_id = 0
net = MCS2018.Predictor(gpu_id)
import torch
from genattack import *
from torchvision import transforms
###Output
_____no_output_____
###Markdown
Load datalists: then load the data used in the contest
###Code
# ! python downloader.py --root /data --main_imgs --student_model_imgs --submit_list --pairs_list
###Output
_____no_output_____
###Markdown
also please unzip the data if necessary
###Code
img_pairs = pd.read_csv('/data/mcs/pairs_list.csv')
img_pairs[:2]
###Output
_____no_output_____
###Markdown
Define transforms
###Code
MEAN = [0.485, 0.456, 0.406]
STD = [0.229, 0.224, 0.225]
REVERSE_MEAN = [-0.485, -0.456, -0.406]
REVERSE_STD = [1/0.229, 1/0.224, 1/0.225]
ATTACK_DIR = '/data/mcs/attack_imgs/'
transform = transforms.Compose([
transforms.CenterCrop(224),
transforms.Resize((112,112)),
transforms.ToTensor(),
transforms.Normalize(mean=MEAN, std=STD)
])
###Output
_____no_output_____
###Markdown
Hyperparams
###Code
n_imgs = 5
dim = 512 # descriptor dim
nchannels = 3
h = 112
w = 112
N = 6 # size of population to evolve
G = 500 # number of generations to evolve through
p = torch.cuda.FloatTensor([0.005])
alpha = torch.cuda.FloatTensor([1.])
delta = torch.cuda.FloatTensor([0.05])
###Output
_____no_output_____
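###Markdown
These hyperparameters drive a small evolutionary search. For intuition, here is a rough, self-contained toy sketch of the general idea (with a dummy scoring function in place of the black box) — it is only an illustration, not the `attack()` implementation imported from `genattack` and not the paper's exact algorithm.
###Code
import numpy as np

def toy_black_box_score(img, target):
    # stand-in for the real black-box model: lower is better (distance to a target descriptor)
    return float(np.linalg.norm(img.mean(axis=(1, 2)) - target))

def toy_gen_attack(x, target, delta=0.05, N=6, G=50, seed=0):
    rng = np.random.default_rng(seed)
    best = x.copy()
    for _ in range(G):
        # population: N mutated copies of the current best candidate
        population = best + rng.uniform(-delta, delta, size=(N,) + x.shape)
        population = np.clip(population, x - delta, x + delta)  # stay inside the L_inf ball around x
        fitness = [toy_black_box_score(p, target) for p in population]
        best = population[int(np.argmin(fitness))]
    return best

x_toy = np.zeros((3, 8, 8), dtype=np.float32)
adv = toy_gen_attack(x_toy, target=np.array([0.03, -0.02, 0.01]))
print(np.abs(adv - x_toy).max())  # perturbation stays within delta
###Output
_____no_output_____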
###Markdown
The following two functions are taken from the original baseline repo and are used to save adversarial images.
###Code
def reverse_normalize(tensor, mean, std):
    '''reverse normalize to convert tensor -> PIL Image'''
tensor_copy = tensor.clone()
for t, m, s in zip(tensor_copy, mean, std):
t.div_(s).sub_(m)
return tensor_copy
def tensor2img(tensor, on_cuda=True):
tensor = reverse_normalize(tensor, REVERSE_MEAN, REVERSE_STD)
# clipping
tensor[tensor > 1] = 1
tensor[tensor < 0] = 0
tensor = tensor.squeeze(0)
if on_cuda:
tensor = tensor.cpu()
return transforms.ToPILImage()(tensor)
###Output
_____no_output_____
###Markdown
Ok, go!
###Code
low_ssim = [] # for images with low ssim
scores = []
for idx in showprogress(img_pairs.index.values):
try:
pairs = {'source': img_pairs.loc[idx].source_imgs.split('|'),
'target': img_pairs.loc[idx].target_imgs.split('|')}
source_img_names = pairs['source']
target_img_names = pairs['target']
targets = torch.cuda.FloatTensor(n_imgs, dim)
for source_img_name in source_img_names:
source_img_name = os.path.join('/data/mcs/imgs/', source_img_name)
source_img = Image.open(source_img_name)
x = transform(source_img)
            x = x.cuda(non_blocking=True)  # `async` is a reserved word in Python 3.7+; non_blocking is the current kwarg
            tavg = torch.zeros(dim).cuda()  # accumulator must start from zeros (FloatTensor(dim) is uninitialized memory)
# since the task is to confuse black box between two identities,
# each having n_imgs images, we simply take average target descriptors
for i, target_img_name in enumerate(target_img_names):
target_img_name = os.path.join('/data/mcs/imgs/', target_img_name)
target_img = Image.open(target_img_name)
t = transform(target_img).unsqueeze(0).numpy()
targets[i] = torch.cuda.FloatTensor(net.submit(t))
tavg += targets[i]
tavg /= torch.norm(tavg) # make avg descriptor of unit length
Pc = attack(x, tavg, delta, alpha, p, N, G, net)
ssimm = ssim(x.squeeze().permute(1,2,0).cpu().numpy(),
Pc[0].permute(1,2,0).cpu().numpy(),
multichannel=True)
d_adv = net.submit(Pc[0][None, :, :, :].cpu().numpy())
# compute L2 distances between target and adversarial descriptors
for i in range(n_imgs):
scores.append(np.linalg.norm(targets[i].cpu().numpy() - d_adv))
print(sum(scores[-5:]) / 5) # print the mean score for the current source example
if ssimm < 0.95:
print('SSIM low: %s' % source_img_name)
low_ssim.append(source_img_name)
continue # do not save images with low ssim, better retry for them after
# save adversarial example
attack_img = tensor2img(Pc[0], True)
attack_img.save(ATTACK_DIR + os.path.basename(source_img_name).replace('.jpg', '.png'))
except Exception as e:
print(source_img_name)
print(e)
pass
print(np.array(scores).mean())
###Output
_____no_output_____ |
feature-engineering-and-data-transformations/using-toy-exmaples/1_Demo_Data_Explore.ipynb | ###Markdown
Read the dataset
###Code
import pandas as pd
import seaborn as sns
# local helper module with the describe/plotting utilities used below (assumed importable as `explore`)
import explore

use_cols = [
'Pclass', 'Sex', 'Age', 'Fare', 'SibSp',
'Survived'
]
data = pd.read_csv('./data/titanic.csv', usecols=use_cols)
data.head(3)
###Output
_____no_output_____
###Markdown
Get dtypes for each column
###Code
str_var_list, num_var_list, all_var_list = explore.get_dtypes(data=data)
print(str_var_list) # string type
print(num_var_list) # numeric type
print(all_var_list) # all
###Output
['Sex']
['Survived', 'Pclass', 'Age', 'SibSp', 'Fare']
['Sex', 'Survived', 'Pclass', 'Age', 'SibSp', 'Fare']
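###Markdown
`explore` is a project-specific helper module; as a rough sketch of what a dtype split like this can look like with plain pandas (an assumption for illustration, not the module's actual source):
###Code
# hypothetical re-implementation of the same idea using pandas only
str_vars = data.select_dtypes(include="object").columns.tolist()
num_vars = data.select_dtypes(include="number").columns.tolist()
print(str_vars)
print(num_vars)
###Output
_____no_output_____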
###Markdown
General data description
###Code
explore.describe(data=data,output_path=r'./output/')
###Output
result saved at: ./output/describe.csv
###Markdown
Discrete variable barplot: draw the barplot of a discrete variable x against y (the target variable). By default the bar shows the mean value of y.
###Code
explore.discrete_var_barplot(x='Pclass',y='Survived',data=data,output_path='./output/')
###Output
Image saved at
###Markdown
Discrete variable countplot: draw the countplot of a discrete variable x.
###Code
explore.discrete_var_countplot(x='Pclass',data=data,output_path='./output/')
###Output
Image saved at ./output/Countplot_Pclass.png
###Markdown
Discrete variable boxplot: draw the boxplot of a discrete variable x against y.
###Code
explore.discrete_var_boxplot(x='Pclass',y='Fare',data=data,output_path='./output/')
###Output
Image saved at ./output/Boxplot_Pclass_Fare.png
###Markdown
Continuous variable distplot: draw the distplot of a continuous variable x.
###Code
explore.continuous_var_distplot(x=data['Fare'],output_path='./output/')
###Output
C:\Users\yimeng.zhang\AppData\Roaming\Python\Python36\site-packages\scipy\stats\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
Scatter plot: draw the scatter plot of two variables.
###Code
explore.scatter_plot(x=data.Fare,y=data.Pclass,data=data,output_path='./output/')
###Output
Image saved at ./output/Scatter_plot_Fare_Pclass.png
###Markdown
Correlation plot: draw the correlation plot between variables.
###Code
explore.correlation_plot(data=data,output_path='./output/')
###Output
Image saved at ./output/Corr_plot.png
###Markdown
Heatmap
###Code
flights = sns.load_dataset("flights")
print(flights.head(5))
# explore.heatmap(data=data[['Sex','Survived']])
flights = flights.pivot("month", "year", "passengers")
explore.heatmap(data=flights,output_path='./output/')
###Output
year month passengers
0 1949 January 112
1 1949 February 118
2 1949 March 132
3 1949 April 129
4 1949 May 121
Image saved at ./output/Heatmap.png
|
system-modeling/Lab1/generate_distribution.ipynb | ###Markdown
Lab 1+ Author: Роман Кривохижа+ Group: ІС-72+ Instructor: Новікова П.А. ************ Module importing
###Code
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
%matplotlib inline
###Output
_____no_output_____
###Markdown
I. Exponential distribution Generating data with parameter $\lambda$ = 10 1. Generate $N$ random numbers $\epsilon_i$ uniformly distributed on the interval (0, 1)
###Code
n = 10000
lambda_1 = 10
epsilon_uniform = np.random.uniform(low=0, high=1, size=n)
###Output
_____no_output_____
###Markdown
**Let's plot a histogram of this distribution:**
###Code
fig, ax = plt.subplots(1,1, figsize=(15,6))
sns.distplot(epsilon_uniform, ax=ax, color='darkviolet')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'Frequency')
ax.set_title(u'Uniform distribution');
###Output
_____no_output_____
###Markdown
+ the distribution looks uniform+ the data lie in the interval (0, 1) **Let's check the sample mean and variance and compare them with the theoretical values:**$M(\epsilon) = \int_0^1 x \,dx = \frac{1}{2}$$D(\epsilon) = \int_0^1 (x - \frac{1}{2})^2 \,dx = \frac{1}{12}$
###Code
print('M(epsilon) = %s\nD(epsilon) = %s' % (epsilon_uniform.mean(), epsilon_uniform.std(ddof=1)**2))
###Output
M(epsilon) = 0.5022906859077897
D(epsilon) = 0.08371993880498886
###Markdown
+ the sample mean and variance are close to the theoretical values 2. Generate $N$ numbers $x_i$ from the exponential distribution: $x_i = -\frac{1}{\lambda} \ln(\epsilon_i)$
###Code
x_exp = -np.log(epsilon_uniform) / lambda_1
###Output
_____no_output_____
###Markdown
**Let's plot a histogram and a KDE plot of the generated data**
###Code
fig, ax = plt.subplots(1,1, figsize=(15,6))
sns.distplot(x_exp, ax=ax, color='darkviolet', label=f'$\lambda$ = {lambda_1}')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'Frequency')
ax.set_title(u'Histogram of generated exponential distribution');
ax.legend();
###Output
_____no_output_____
###Markdown
+ the data distribution looks exponential **Let's check the sample mean and variance and compare them with the theoretical values:**$\mu = \frac{1}{\lambda}$$D(x) = \sigma^2$, where $\sigma = \frac{1}{\lambda}$
###Code
print('M(x) = %s\nstd(x) = %s\nD(x) = %s' % (x_exp.mean(), x_exp.std(ddof=1), x_exp.std(ddof=1)**2))
###Output
M(x) = 0.09969482379830981
std(x) = 0.10032798023481321
D(x) = 0.010065703617997069
###Markdown
+ the mean is approximately equal to the standard deviation **Let's wrap the generation of this random distribution into a single function:**
###Code
def generate_exp_distr(lambda_val, n=10000):
epsilon_uniform = np.random.uniform(low=0, high=1, size=n)
return -np.log(epsilon_uniform) / lambda_val
###Output
_____no_output_____
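###Markdown
As a quick sanity check of the helper (an illustrative call, not from the original notebook), the sample mean should be close to $1/\lambda$:
###Code
sample = generate_exp_distr(lambda_1)
print(sample.mean(), 1 / lambda_1)
###Output
_____no_output_____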
###Markdown
3. Cumulative distribution function$F(x) = 1 - e^{-\lambda x}$
###Code
def exp_cdf(x, l):
"""
Cumulative distribution function
"""
return 1 - np.exp(-l*x) # F
def exp_pdf(x, l):
"""
Probability density function
"""
return l*np.exp(-l*x) # f
fig, ax = plt.subplots(1,1, figsize=(15,6))
x = np.linspace(0,1,100000)
sns.lineplot(x_exp, exp_cdf(x_exp, 1/((x_exp.mean() + x_exp.std(ddof=1)) / 2)), ax=ax, color='darkviolet', label='$F(x)$')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'F(x)')
ax.set_title(u'Applying exponential cdf to generated data');
ax.legend();
###Output
_____no_output_____
###Markdown
4. Check goodness of fit to the specified distribution law using the $\chi^2$ test $H_0:$ the data follow an exponential distribution with the given parameter $\lambda$$H_1: H_0$ does not hold
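The test statistic is $\chi^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}$, where $O_i$ and $E_i$ are the observed and expected bin counts; under $H_0$ it is approximately $\chi^2$-distributed with $k - 1 - m$ degrees of freedom, where $k$ is the number of bins and $m$ the number of estimated parameters (hence `ddof=1` below for the single estimated $\lambda$). This is exactly what `stats.chisquare` computes.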
###Code
def create_bins_expon(x, l, n_bins=30):
start = x.min()
finish = x.max() + 1e-9
h = (finish - start) / n_bins
n = x.size
obs_freq = {}
exp_freq = {}
current = start
i = 0
while current <= finish:
obs_freq[i] = np.sum((x >= current) & (x < (current+h)))
p_i = np.exp(-l*current) - np.exp(-l*(current+h))
exp_freq[i] = p_i * n
i += 1
current += h
return normilize_bins_expon(obs_freq, exp_freq)
def normilize_bins_expon(obs_freq, exp_freq):
assert len(obs_freq) > 2 or len(exp_freq) > 2
for i in sorted(obs_freq.keys(), reverse=True)[:-1]:
if obs_freq[i] <= 5 or exp_freq[i] <= 5:
obs_freq[i-1] += obs_freq[i]
exp_freq[i-1] += exp_freq[i]
del obs_freq[i], exp_freq[i]
return obs_freq, exp_freq
observed_freq, expected_freq = create_bins_expon(x_exp, 1/((x_exp.mean() + x_exp.std(ddof=1)) / 2))
stat_val, p_value = stats.chisquare(list(observed_freq.values()), list(expected_freq.values()), ddof=1)
###Output
_____no_output_____
###Markdown
**p-value:**+ **the probability of obtaining a test statistic at least as extreme as the one observed, assuming the null hypothesis is true**+ **the lower the p-value, the stronger the evidence against the null hypothesis in favour of the alternative**$p = P(T \geq t|H_0)$
###Code
alpha = 0.05
if p_value < alpha:
print(f'Не можемо прийняти нульову гіпотезу на рівні значемості alpha={alpha}.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
else:
print('Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
###Output
Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.
Значення статистики:
- stat_val = 20.62903
- p-value = 0.41925
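###Markdown
The accept/reject block above is repeated for every test in this lab; a small helper along these lines — an illustrative sketch, not code from the original lab — would remove the duplication:
###Code
def report_chi2(observed, expected, ddof, alpha=0.05):
    # wraps the repeated decision logic around scipy's chi-square test
    stat_val, p_value = stats.chisquare(list(observed.values()), list(expected.values()), ddof=ddof)
    if p_value < alpha:
        print(f'Reject H0 at significance level alpha={alpha}.')
    else:
        print('Cannot reject H0: the data are consistent with the assumed distribution.')
    print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
    return stat_val, p_value
###Output
_____no_output_____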
###Markdown
**Now plug in a wrong value of $\lambda$**:
###Code
observed_freq, expected_freq = create_bins_expon(x_exp, 2)
stat_val, p_value = stats.chisquare(list(observed_freq.values()), list(expected_freq.values()), ddof=1)
alpha = 0.05
if p_value < alpha:
print(f'Не можемо прийняти нульову гіпотезу на рівні значемості alpha={alpha}.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
else:
print('Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
###Output
Не можемо прийняти нульову гіпотезу на рівні значемості alpha=0.05.
Значення статистики:
- stat_val = 16251.51981
- p-value = 0.0
###Markdown
Generating data with parameter $\lambda$ = 1.2 1. Generate $N$ random numbers $\epsilon_i$ uniformly distributed on the interval (0, 1)
###Code
n = 10000
lambda_2 = 1.2
epsilon_uniform = np.random.uniform(low=0, high=1, size=n)
###Output
_____no_output_____
###Markdown
**Let's plot a histogram of this distribution:**
###Code
fig, ax = plt.subplots(1,1, figsize=(15,6))
sns.distplot(epsilon_uniform, ax=ax, color='darkviolet')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'Frequency')
ax.set_title(u'Uniform distribution');
###Output
_____no_output_____
###Markdown
+ the distribution looks uniform+ the data lie in the interval (0, 1) **Let's check the sample mean and variance and compare them with the theoretical values:**$M(\epsilon) = \int_0^1 x \,dx = \frac{1}{2}$$D(\epsilon) = \int_0^1 (x - \frac{1}{2})^2 \,dx = \frac{1}{12}$
###Code
print('M(epsilon) = %s\nD(epsilon) = %s' % (epsilon_uniform.mean(), epsilon_uniform.std(ddof=1)**2))
###Output
M(epsilon) = 0.5002668283560772
D(epsilon) = 0.08381010413052314
###Markdown
+ the sample mean and variance are close to the theoretical values 2. Generate $N$ numbers $x_i$ from the exponential distribution: $x_i = -\frac{1}{\lambda} \ln(\epsilon_i)$
###Code
x_exp = -np.log(epsilon_uniform) / lambda_2
###Output
_____no_output_____
###Markdown
**Let's plot a histogram and a KDE plot of the generated data**
###Code
fig, ax = plt.subplots(1,1, figsize=(15,6))
sns.distplot(x_exp, ax=ax, color='darkviolet', label=f'$\lambda = {lambda_2}$')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'Frequency')
ax.set_title(u'Histogram of generated exponential distribution');
ax.legend();
###Output
_____no_output_____
###Markdown
+ the data distribution looks exponential **Let's check the sample mean and variance and compare them with the theoretical values:**$\mu = \frac{1}{\lambda}$$D(x) = \sigma^2$, where $\sigma = \frac{1}{\lambda}$
###Code
print('M(x) = %s\nstd(x) = %s\nD(x) = %s' % (x_exp.mean(), x_exp.std(ddof=1), x_exp.std(ddof=1)**2))
###Output
M(x) = 0.8329584630527969
std(x) = 0.8283072948153118
D(x) = 0.6860929746442599
###Markdown
+ the mean is approximately equal to the standard deviation 3. Cumulative distribution function: $F(x) = 1 - e^{-\lambda x}$
###Code
fig, ax = plt.subplots(1,1, figsize=(15,6))
x = np.linspace(0,1,100000)
sns.lineplot(x_exp, exp_cdf(x_exp, 1/((x_exp.mean() + x_exp.std(ddof=1)) / 2)), ax=ax, color='darkviolet', label='$F(x)$')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'F(x)')
ax.set_title(u'Applying exponential cdf to generated data');
ax.legend();
###Output
_____no_output_____
###Markdown
4. Check goodness of fit to the specified distribution law using the $\chi^2$ test $H_0:$ the data follow an exponential distribution with the given parameter $\lambda$$H_1: H_0$ does not hold
###Code
observed_freq, expected_freq = create_bins_expon(x_exp, 1/((x_exp.mean() + x_exp.std(ddof=1)) / 2))
stat_val, p_value = stats.chisquare(list(observed_freq.values()), list(expected_freq.values()), ddof=1)
###Output
_____no_output_____
###Markdown
+ **the probability of obtaining a test statistic at least as extreme as the one observed, assuming the null hypothesis is true**+ **the lower the p-value, the stronger the evidence against the null hypothesis in favour of the alternative**$p = P(T \geq t|H_0)$
###Code
alpha = 0.05
if p_value < alpha:
print(f'Не можемо прийняти нульову гіпотезу на рівні значемості alpha={alpha}.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
else:
print('Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
###Output
Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.
Значення статистики:
- stat_val = 19.88334
- p-value = 0.40163
###Markdown
**Now plug in a wrong value of $\lambda$**:
###Code
observed_freq, expected_freq = create_bins_expon(x_exp, 5)
stat_val, p_value = stats.chisquare(list(observed_freq.values()), list(expected_freq.values()), ddof=1)
alpha = 0.05
if p_value < alpha:
print(f'Не можемо прийняти нульову гіпотезу на рівні значемості alpha={alpha}.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
else:
print('Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
###Output
Не можемо прийняти нульову гіпотезу на рівні значемості alpha=0.05.
Значення статистики:
- stat_val = 382842.92575
- p-value = 0.0
###Markdown
************ II. Normal distribution Generating data with parameters $\mu = 0, \sigma = 1$ 1. Generate $N$ random numbers $\epsilon_i$ uniformly distributed on the interval (0, 1)
###Code
n = 10000
mu_1 = 0
sigma_1 = 1
epsilon_uniform = np.random.uniform(low=0, high=1, size=(n, 12))
###Output
_____no_output_____
###Markdown
**Let's plot a histogram of this distribution:**
###Code
fig, ax = plt.subplots(1,1, figsize=(15,6))
sns.distplot(epsilon_uniform, ax=ax, color='darkviolet')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'Frequency')
ax.set_title(u'Uniform distribution');
###Output
_____no_output_____
###Markdown
+ the distribution looks uniform+ the data lie in the interval (0, 1) **Let's check the sample mean and variance and compare them with the theoretical values:**$M(\epsilon) = \int_0^1 x \,dx = \frac{1}{2}$$D(\epsilon) = \int_0^1 (x - \frac{1}{2})^2 \,dx = \frac{1}{12}$
###Code
print('M(epsilon) = %s\nD(epsilon) = %s' % (epsilon_uniform.mean(), epsilon_uniform.std(ddof=1)**2))
###Output
M(epsilon) = 0.5001876147117275
D(epsilon) = 0.08348801337400137
###Markdown
+ the sample mean and variance are close to the theoretical values 2. Generate $N$ numbers $x_i$ from the normal distribution: $x_i = \sigma \mu_i + \alpha$, where $\mu_i = \sum_{i=1}^{12} \epsilon_i - 6$ (the sum of 12 independent $U(0,1)$ values has mean 6 and variance $12 \cdot \frac{1}{12} = 1$, so by the central limit theorem $\mu_i$ is approximately standard normal)
###Code
x_normal = sigma_1 * (epsilon_uniform.sum(axis=1) - 6) + mu_1
###Output
_____no_output_____
###Markdown
**Let's plot a histogram and a KDE plot of the generated data**
###Code
fig, ax = plt.subplots(1,1, figsize=(15,6))
sns.distplot(x_normal, ax=ax, color='darkviolet', label=f'$\mu = {mu_1}$, $\sigma = {sigma_1}$')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'Frequency')
ax.set_title(u'Histogram of generated normal distribution');
ax.legend();
###Output
_____no_output_____
###Markdown
+ the data distribution looks normal with the intended parameters **Let's check the sample mean and variance and compare them with the theoretical values:**$\mu = \mu_1$$D(x) = \sigma^2$, where $\sigma = \sigma_1$
###Code
print('M(x) = %s\nstd(x) = %s\nD(x) = %s' % (x_normal.mean(), x_normal.std(ddof=1), x_normal.std(ddof=1)**2))
###Output
M(x) = 0.00225137654072899
std(x) = 0.9988180394980531
D(x) = 0.9976374760267345
###Markdown
+ the sample mean and standard deviation are close to the values we wanted to obtain **Let's wrap the generation of this random distribution into a single function:**
###Code
def generate_normal_distr(mu, sigma, n=10000):
    epsilon_uniform = np.random.uniform(low=0, high=1, size=(n, 12))
    return sigma * (epsilon_uniform.sum(axis=1) - 6) + mu  # use the arguments, not the global mu_1/sigma_1
###Output
_____no_output_____
###Markdown
3. Probability density function and Cumulative distribution function:$f(x) = \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{(x - a)^2}{2 \sigma^2}}$
###Code
def normal_pdf(x, mu, sigma):
"""
Probability density function
"""
return (1/(sigma*np.sqrt(2*np.pi)))*np.exp(-np.power((x-mu)/sigma, 2)/2) # f
fig, ax = plt.subplots(1,1, figsize=(15,6))
x = np.linspace(0,1,100000)
sns.lineplot(x_normal, normal_pdf(x_normal, x_normal.mean(), x_normal.std(ddof=1)), ax=ax, color='darkviolet', label='$f(x)$')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'f(x)')
ax.set_title(u'Applying normal pdf to generated data');
ax.legend();
fig, ax = plt.subplots(1,1, figsize=(15,6))
x = np.linspace(0,1,100000)
sns.lineplot(x_normal, stats.norm.cdf(x_normal, x_normal.mean(), x_normal.std(ddof=1)), ax=ax, color='darkviolet', label='$F(x)$')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'F(x)')
ax.set_title(u'Applying normal cdf to generated data');
ax.legend();
###Output
_____no_output_____
###Markdown
4. Check goodness of fit to the specified distribution law using the $\chi^2$ test $H_0:$ the data follow a normal distribution with the given parameters $\mu$ and $\sigma$$H_1: H_0$ does not hold
###Code
def create_bins_norm(x, mu, sigma, n_bins=30):
start = x.min()
finish = x.max() + 1e-9
h = (finish - start) / n_bins
n = x.size
obs_freq = {}
exp_freq = {}
current = start
i = 0
while current <= finish:
obs_freq[i] = np.sum((x >= current) & (x < (current+h)))
p_i = np.abs(stats.norm(mu, sigma).cdf(current) - stats.norm(mu, sigma).cdf(current+h))
exp_freq[i] = p_i * n
i += 1
current += h
return normilize_bins_norm(obs_freq, exp_freq)
def normilize_bins_norm(obs_freq, exp_freq):
assert len(obs_freq) > 2 or len(exp_freq) > 2
for i in sorted(obs_freq.keys(), reverse=True)[:-1]:
if obs_freq[i] <= 5 or exp_freq[i] <= 5:
obs_freq[i-1] += obs_freq[i]
exp_freq[i-1] += exp_freq[i]
del obs_freq[i], exp_freq[i]
for i in sorted(obs_freq.keys())[:-1]:
if obs_freq[i] <= 5 or exp_freq[i] <= 5:
j = 1
while not i+j in obs_freq:
j += 1
obs_freq[i+j] += obs_freq[i]
exp_freq[i+j] += exp_freq[i]
del obs_freq[i], exp_freq[i]
return obs_freq, exp_freq
observed_freq, expected_freq = create_bins_norm(x_normal, x_normal.mean(), x_normal.std(ddof=1))
stat_val, p_value = stats.chisquare(list(observed_freq.values()), list(expected_freq.values()), ddof=2)
###Output
_____no_output_____
###Markdown
+ **the probability of obtaining a test statistic at least as extreme as the one observed, assuming the null hypothesis is true**+ **the lower the p-value, the stronger the evidence against the null hypothesis in favour of the alternative**$p = P(T \geq t|H_0)$
###Code
alpha = 0.05
if p_value < alpha:
print(f'Не можемо прийняти нульову гіпотезу на рівні значемості alpha={alpha}.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
else:
print('Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
###Output
Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.
Значення статистики:
- stat_val = 24.40442
- p-value = 0.38171
###Markdown
**Now plug in wrong values of $\mu$ and $\sigma$**:
###Code
observed_freq, expected_freq = create_bins_norm(x_normal, 0, 15)
stat_val, p_value = stats.chisquare(list(observed_freq.values()), list(expected_freq.values()), ddof=2)
alpha = 0.05
if p_value < alpha:
print(f'Не можемо прийняти нульову гіпотезу на рівні значемості alpha={alpha}.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
else:
print('Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
###Output
Не можемо прийняти нульову гіпотезу на рівні значемості alpha=0.05.
Значення статистики:
- stat_val = 88106.76529
- p-value = 0.0
###Markdown
Generating data with parameters $\mu = 12, \sigma = 24$ 1. Generate $N$ random numbers $\epsilon_i$ uniformly distributed on the interval (0, 1)
###Code
n = 10000
mu_2 = 12
sigma_2 = 24
epsilon_uniform = np.random.uniform(low=0, high=1, size=(n, 12))
###Output
_____no_output_____
###Markdown
**Let's plot a histogram of this distribution:**
###Code
fig, ax = plt.subplots(1,1, figsize=(15,6))
sns.distplot(epsilon_uniform, ax=ax, color='darkviolet')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'Frequency')
ax.set_title(u'Uniform distribution');
###Output
_____no_output_____
###Markdown
+ the distribution looks uniform+ the data lie in the interval (0, 1) **Let's check the sample mean and variance and compare them with the theoretical values:**$M(\epsilon) = \int_0^1 x \,dx = \frac{1}{2}$$D(\epsilon) = \int_0^1 (x - \frac{1}{2})^2 \,dx = \frac{1}{12}$
###Code
print('M(epsilon) = %s\nD(epsilon) = %s' % (epsilon_uniform.mean(), epsilon_uniform.std(ddof=1)**2))
###Output
M(epsilon) = 0.4997155535654053
D(epsilon) = 0.08349931003425738
###Markdown
+ the sample mean and variance are close to the theoretical values 2. Generate $N$ numbers $x_i$ from the normal distribution: $x_i = \sigma \mu_i + \alpha$, where $\mu_i = \sum_{i=1}^{12} \epsilon_i - 6$
###Code
x_normal = sigma_2 * (epsilon_uniform.sum(axis=1) - 6) + mu_2
###Output
_____no_output_____
###Markdown
**Let's plot a histogram and a KDE plot of the generated data**
###Code
fig, ax = plt.subplots(1,1, figsize=(15,6))
sns.distplot(x_normal, ax=ax, color='darkviolet', label=f'$\mu = {mu_2}$, $\sigma = {sigma_2}$')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'Frequency')
ax.set_title(u'Histogram of generated normal distribution');
ax.legend();
###Output
_____no_output_____
###Markdown
+ the data distribution looks normal with the intended parameters **Let's check the sample mean and variance and compare them with the theoretical values:**$\mu = \mu_2$$D(x) = \sigma^2$, where $\sigma = \sigma_2$
###Code
print('M(x) = %s\nstd(x) = %s\nD(x) = %s' % (x_normal.mean(), x_normal.std(ddof=1), x_normal.std(ddof=1)**2))
###Output
M(x) = 11.91807942683673
std(x) = 23.90276101765187
D(x) = 571.3419842669779
###Markdown
+ the sample mean and standard deviation are close to the values we wanted to obtain 3. Probability density function and Cumulative distribution function:$f(x) = \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{(x - a)^2}{2 \sigma^2}}$
###Code
fig, ax = plt.subplots(1,1, figsize=(15,6))
x = np.linspace(0,1,100000)
sns.lineplot(x_normal, normal_pdf(x_normal, x_normal.mean(), x_normal.std(ddof=1)), ax=ax, color='darkviolet', label='$f(x)$')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'f(x)')
ax.set_title(u'Applying normal pdf to generated data');
ax.legend();
fig, ax = plt.subplots(1,1, figsize=(15,6))
x = np.linspace(0,1,100000)
sns.lineplot(x_normal, stats.norm.cdf(x_normal, x_normal.mean(), x_normal.std(ddof=1)), ax=ax, color='darkviolet', label='$F(x)$')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'F(x)')
ax.set_title(u'Applying normal cdf to generated data');
ax.legend();
###Output
_____no_output_____
###Markdown
4. Check goodness of fit to the specified distribution law using the $\chi^2$ test $H_0:$ the data follow a normal distribution with the given parameters $\mu$ and $\sigma$$H_1: H_0$ does not hold
###Code
observed_freq, expected_freq = create_bins_norm(x_normal, x_normal.mean(), x_normal.std(ddof=1))
stat_val, p_value = stats.chisquare(list(observed_freq.values()), list(expected_freq.values()), ddof=2)
###Output
_____no_output_____
###Markdown
+ **the probability of obtaining a test statistic at least as extreme as the one observed, assuming the null hypothesis is true**+ **the lower the p-value, the stronger the evidence against the null hypothesis in favour of the alternative**$p = P(T \geq t|H_0)$
###Code
alpha = 0.05
if p_value < alpha:
print(f'Не можемо прийняти нульову гіпотезу на рівні значемості alpha={alpha}.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
else:
print('Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
###Output
Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.
Значення статистики:
- stat_val = 12.59219
- p-value = 0.97238
###Markdown
**Now plug in wrong values of $\mu$ and $\sigma$**:
###Code
observed_freq, expected_freq = create_bins_norm(x_normal, 10, 25)
stat_val, p_value = stats.chisquare(list(observed_freq.values()), list(expected_freq.values()), ddof=2)
alpha = 0.05
if p_value < alpha:
print(f'Не можемо прийняти нульову гіпотезу на рівні значемості alpha={alpha}.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
else:
print('Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
###Output
Не можемо прийняти нульову гіпотезу на рівні значемості alpha=0.05.
Значення статистики:
- stat_val = 101.02813
- p-value = 0.0
###Markdown
************ III. Uniform distribution Generating data with parameters $a = 5^{13}, c = 2^{31}$ 1. Generate $N$ random numbers using the congruential method: $x_{i+1} = z_{i+1} \div c$, where $z_{i+1} = a z_i \pmod{c}$
###Code
n = 10000
z_0 = 9
z = z_0
a = 5 ** 13
c = 2 ** 31
x_uniform = []
for i in range(n):
x = z / c
x_uniform.append(x)
z = (a * z) % c
x_uniform = np.array(x_uniform)
###Output
_____no_output_____
###Markdown
**Let's plot a histogram of this distribution:**
###Code
fig, ax = plt.subplots(1,1, figsize=(15,6))
sns.distplot(x_uniform, ax=ax, color='darkviolet', label='$z_0 = 9$, $a = 5^{13}$, $c = 2^{31}$')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'Frequency')
ax.set_title(u'Uniform distribution');
ax.legend();
###Output
_____no_output_____
###Markdown
+ the distribution looks uniform+ the data lie in the interval (0, 1) **Let's check the sample mean and variance and compare them with the theoretical values:**$M(\epsilon) = \int_0^1 x \,dx = \frac{1}{2}$$D(\epsilon) = \int_0^1 (x - \frac{1}{2})^2 \,dx = \frac{1}{12}$
###Code
print('M(epsilon) = %s\nD(epsilon) = %s' % (x_uniform.mean(), x_uniform.std(ddof=1)**2))
###Output
M(epsilon) = 0.4999288878433406
D(epsilon) = 0.08299691989657997
###Markdown
+ the sample mean and variance are close to the theoretical values **Let's estimate the parameters of this distribution:**
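These estimates follow from the method of moments: for $U(a, b)$, $M(x) = \frac{a + b}{2}$ and $D(x) = \frac{(b - a)^2}{12}$, so $b = M(x) + \sqrt{3}\,\sigma$ and $a = 2 M(x) - b$, which is exactly what the next cell computes.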
###Code
b_obs = x_uniform.mean() + np.sqrt(3)*x_uniform.std(ddof=1)
a_obs = 2*x_uniform.mean() - b_obs
print(f'a_obs = {round(a_obs, 5)}, b_obs = {round(b_obs, 5)}')
###Output
a_obs = 0.00094, b_obs = 0.99892
###Markdown
**Let's wrap the generation of this random distribution into a single function:**
###Code
def generate_uniform_distr(z_0, a, c, n=10000):
    z = z_0  # initialize the generator state from the seed (the original version left z undefined)
    x_uniform = []
    for i in range(n):
        x = z / c
        x_uniform.append(x)
        z = (a * z) % c
    return np.array(x_uniform)
###Output
_____no_output_____
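###Markdown
As a quick illustrative check (not from the original notebook), the helper reproduces the sequence generated above from the same seed:
###Code
regenerated = generate_uniform_distr(z_0=9, a=5 ** 13, c=2 ** 31)
print(regenerated[:3])
###Output
_____no_output_____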
###Markdown
3. Probability density function:$f(x) = \frac{1}{b - a}$, $a \leq x \leq b$
###Code
def uniform_pdf(x, a, b):
"""
Probability density function
"""
return (1/(b-a))*((x >= a) & (x <= b)) # f
fig, ax = plt.subplots(1,1, figsize=(15,6))
x = np.linspace(0,1,100000)
sns.lineplot(x_uniform, uniform_pdf(x_uniform, a_obs, b_obs), ax=ax, color='darkviolet', label='$f(x)$')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'f(x)')
ax.set_title(u'Applying uniform pdf to generated data');
ax.legend();
###Output
_____no_output_____
###Markdown
4. Check goodness of fit to the specified distribution law using the $\chi^2$ test $H_0:$ the data follow a uniform distribution with the given parameters $a$ and $b$$H_1: H_0$ does not hold
###Code
def create_bins_uniform(x, a, b, n_bins=30):
start = x.min()
finish = x.max() + 1e-9
h = (finish - start) / n_bins
n = x.size
obs_freq = {}
exp_freq = {}
current = start
i = 0
while current <= finish:
obs_freq[i] = np.sum((x >= current) & (x < (current+h)))
p_i = np.abs(stats.uniform(a, b).cdf(current) - stats.uniform(a, b).cdf(current+h))
exp_freq[i] = p_i * n
i += 1
current += h
return normilize_bins_uniform(obs_freq, exp_freq)
def normilize_bins_uniform(obs_freq, exp_freq):
assert len(obs_freq) > 2 or len(exp_freq) > 2
for i in sorted(obs_freq.keys(), reverse=True)[:-1]:
if obs_freq[i] <= 5 or exp_freq[i] <= 5:
obs_freq[i-1] += obs_freq[i]
exp_freq[i-1] += exp_freq[i]
del obs_freq[i], exp_freq[i]
for i in sorted(obs_freq.keys())[:-1]:
if obs_freq[i] <= 5 or exp_freq[i] <= 5:
j = 1
while not i+j in obs_freq:
j += 1
obs_freq[i+j] += obs_freq[i]
exp_freq[i+j] += exp_freq[i]
del obs_freq[i], exp_freq[i]
return obs_freq, exp_freq
b = (x_uniform.mean() + np.sqrt(3)*x_uniform.std(ddof=1))  # estimate from the generated sample, not the plotting grid x
a = 2*x_uniform.mean() - b
observed_freq, expected_freq = create_bins_uniform(x_uniform, a, b)
stat_val, p_value = stats.chisquare(list(observed_freq.values()), list(expected_freq.values()), ddof=2)
###Output
_____no_output_____
###Markdown
+ **the probability of obtaining a test statistic at least as extreme as the one observed, assuming the null hypothesis is true**+ **the lower the p-value, the stronger the evidence against the null hypothesis in favour of the alternative**$p = P(T \geq t|H_0)$
###Code
alpha = 0.05
if p_value < alpha:
print(f'Не можемо прийняти нульову гіпотезу на рівні значемості alpha={alpha}.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
else:
print('Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
###Output
Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.
Значення статистики:
- stat_val = 31.61227
- p-value = 0.24675
###Markdown
**Now plug in wrong values of $a$ and $b$**:
###Code
observed_freq, expected_freq = create_bins_uniform(x_uniform, 0, 0.5)
stat_val, p_value = stats.chisquare(list(observed_freq.values()), list(expected_freq.values()), ddof=2)
alpha = 0.05
if p_value < alpha:
print(f'Не можемо прийняти нульову гіпотезу на рівні значемості alpha={alpha}.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
else:
print('Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
###Output
Не можемо прийняти нульову гіпотезу на рівні значемості alpha=0.05.
Значення статистики:
- stat_val = 34972.82104
- p-value = 0.0
###Markdown
Generating data with parameters $a = 5^{19}, c = 2^{63}$ 1. Generate $N$ random numbers using the congruential method: $x_{i+1} = z_{i+1} \div c$, where $z_{i+1} = a z_i \pmod{c}$
###Code
n = 10000
z_0 = 17
z = z_0
a = 5 ** 19
c = 2 ** 63
x_uniform = []
for i in range(n):
x = z / c
x_uniform.append(x)
z = (a * z) % c
x_uniform = np.array(x_uniform)
###Output
_____no_output_____
###Markdown
**Let's plot a histogram of this distribution:**
###Code
fig, ax = plt.subplots(1,1, figsize=(15,6))
sns.distplot(x_uniform, ax=ax, color='darkviolet', label='$z_0 = 17$, $a = 5^{19}$, $c = 2^{63}$')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'Frequency')
ax.set_title(u'Uniform distribution');
ax.legend();
###Output
_____no_output_____
###Markdown
+ the distribution looks uniform+ the data lie in the interval (0, 1) **Let's check the sample mean and variance and compare them with the theoretical values:**$M(\epsilon) = \int_0^1 x \,dx = \frac{1}{2}$$D(\epsilon) = \int_0^1 (x - \frac{1}{2})^2 \,dx = \frac{1}{12}$
###Code
print('M(epsilon) = %s\nD(epsilon) = %s' % (x_uniform.mean(), x_uniform.std(ddof=1)**2))
###Output
M(epsilon) = 0.49829352325505827
D(epsilon) = 0.08385442915042123
###Markdown
+ the sample mean and variance are close to the theoretical values **Let's estimate the parameters of this distribution:**
###Code
b_obs = x_uniform.mean() + np.sqrt(3)*x_uniform.std(ddof=1)
a_obs = 2*x_uniform.mean() - b_obs
print(f'a_obs = {round(a_obs, 5)}, b_obs = {round(b_obs, 5)}')
###Output
a_obs = -0.00327, b_obs = 0.99985
###Markdown
3. Probability density function:$f(x) = \frac{1}{b - a}$, $a \leq x \leq b$
###Code
def uniform_pdf(x, a, b):
"""
Probability density function
"""
return (1/(b-a))*((x >= a) & (x <= b)) # f
fig, ax = plt.subplots(1,1, figsize=(15,6))
x = np.linspace(0,1,100000)
sns.lineplot(x_uniform, uniform_pdf(x_uniform, a_obs, b_obs), ax=ax, color='darkviolet', label='$f(x)$')
ax.set_xlabel(u'Generated data')
# ax.set_xlim(0, 200)
ax.set_ylabel(u'f(x)')
ax.set_title(u'Applying uniform pdf to generated data');
ax.legend();
###Output
_____no_output_____
###Markdown
4. Check goodness of fit to the specified distribution law using the $\chi^2$ test $H_0:$ the data follow a uniform distribution with the given parameters $a$ and $b$$H_1: H_0$ does not hold
###Code
b = (x_uniform.mean() + np.sqrt(3)*x_uniform.std(ddof=1))  # estimate from the generated sample, not the plotting grid x
a = 2*x_uniform.mean() - b
observed_freq, expected_freq = create_bins_uniform(x_uniform, a, b)
stat_val, p_value = stats.chisquare(list(observed_freq.values()), list(expected_freq.values()), ddof=2)
###Output
_____no_output_____
###Markdown
+ **the probability of obtaining a test statistic at least as extreme as the one observed, assuming the null hypothesis is true**+ **the lower the p-value, the stronger the evidence against the null hypothesis in favour of the alternative**$p = P(T \geq t|H_0)$
###Code
alpha = 0.05
if p_value < alpha:
print(f'Не можемо прийняти нульову гіпотезу на рівні значемості alpha={alpha}.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
else:
print('Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
###Output
Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.
Значення статистики:
- stat_val = 31.80644
- p-value = 0.23933
###Markdown
**Now plug in wrong values of $a$ and $b$**:
###Code
observed_freq, expected_freq = create_bins_uniform(x_uniform, 0, 0.9)
stat_val, p_value = stats.chisquare(list(observed_freq.values()), list(expected_freq.values()), ddof=2)
alpha = 0.05
if p_value < alpha:
print(f'Не можемо прийняти нульову гіпотезу на рівні значемості alpha={alpha}.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
else:
print('Можемо прийняти нульову гіпотезу про розподіл данних з заданим параметром.')
print('Значення статистики:')
print('\t- stat_val = %s\n\t- p-value = %s' % (round(stat_val, 5), round(p_value, 5)))
###Output
Не можемо прийняти нульову гіпотезу на рівні значемості alpha=0.05.
Значення статистики:
- stat_val = 2538.04452
- p-value = 0.0
|
labs/Lab-03-Functions/lab3.ipynb | ###Markdown
Lab 3: Functions Please complete this lab by providing answers in cells after the question. Use **Code** cells to write and run any code you need to answer the question and **Markdown** cells to write out answers in words. After you are finished with the assignment, remember to download it as an **HTML file** and submit it in **ELMS**.This assignment is due by **11:59pm on Friday, February 18**.
###Code
import numpy as np
from datascience import *
# These lines set up graphing capabilities.
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
# This is just to make the plots look a certain way
plt.style.use('fivethirtyeight')
###Output
_____no_output_____
###Markdown
Defining functionsLet's write a very simple function that converts a proportion to a percentage by multiplying it by 100. For example, the value of `to_percentage(.5)` should be the number 50 (no percent sign).A function definition has a few parts. `def`It always starts with `def` (short for **def**ine): def NameNext comes the name of the function. Like other names we've defined, it can't start with a number or contain spaces. Let's call our function `to_percentage`: def to_percentage SignatureNext comes something called the *signature* of the function. This tells Python how many arguments your function should have, and what names you'll use to refer to those arguments in the function's code. A function can have any number of arguments (including 0!). `to_percentage` should take one argument, and we'll call that argument `proportion` since it should be a proportion. def to_percentage(proportion) If we want our function to take more than one argument, we add a comma between each argument name.We put a colon after the signature to tell Python it's over. If you're getting a syntax error after defining a function, check to make sure you remembered the colon! def to_percentage(proportion): DocumentationFunctions can do complicated things, so you should write an explanation of what your function does. For small functions, this is less important, but it's a good habit to learn from the start. Conventionally, Python functions are documented by writing an **indented** triple-quoted string: def to_percentage(proportion): """Converts a proportion to a percentage.""" BodyNow we start writing code that runs when the function is called. This is called the *body* of the function and every line **must be indented with a tab**. Any lines that are *not* indented and left-aligned with the def statement is considered outside the function. Some notes about the body of the function:- We can write any code that we would write anywhere else. - We use the arguments defined in the function signature. We can do this because we assume that when we call the function, values are already assigned to those arguments.- We generally avoid referencing variables defined *outside* the function.Now, let's give a name to the number we multiply a proportion by to get a percentage: def to_percentage(proportion): """Converts a proportion to a percentage.""" factor = 100 `return`The special instruction `return` is part of the function's body and tells Python to make the value of the function call equal to whatever comes right after `return`. We want the value of `to_percentage(.5)` to be the proportion .5 times the factor 100, so we write: def to_percentage(proportion): """Converts a proportion to a percentage.""" factor = 100 return proportion * factor `return` only makes sense in the context of a function, and **can never be used outside of a function**. `return` is always the last line of the function because Python stops executing the body of a function once it hits a `return` statement.*Note:* `return` inside a function tells Python what value the function evaluates to. However, there are other functions, like `print`, that have no `return` value. For example, `print` simply prints a certain value out to the console. `return` and `print` are **very** different. **Question 1.Define `to_percentage` in the cell below. Call your function to convert the proportion .2 to a percentage. Name that percentage `twenty_percent`.**
###Code
###Output
_____no_output_____
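###Markdown
For reference, here is the same anatomy applied to a different function, as a worked illustration (not the answer to Question 1): a `def` statement, a signature with one argument, an indented docstring, a body, and a `return`.
###Code
def to_fahrenheit(celsius):
    """Converts a temperature in Celsius to Fahrenheit."""
    factor = 9 / 5
    offset = 32
    return celsius * factor + offset

to_fahrenheit(100)
###Output
_____no_output_____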
###Markdown
Like built-in functions you've used in previous labs (`max`, `abs`, etc.), you can use named values as arguments to your function.**Question 2. Use `to_percentage` again to convert the proportion named `a_proportion` (defined below) to a percentage called `a_percentage`.***Note:* You don't need to define `to_percentage` again! Like other named values, functions stick around after you define them.
###Code
a_proportion = 2**(.5) / 2
###Output
_____no_output_____
###Markdown
Here's something important about functions: the names assigned *within* a function body are only accessible within the function body. Once the function has returned, those names are gone. So even though you defined `factor = 100` inside the body of the `to_percentage` function up above and then called `to_percentage`, you cannot refer to `factor` anywhere except inside the body of `to_percentage`:
###Code
# You should see an error when you run this. (If you don't, you might
# have defined factor somewhere above.)
factor
###Output
_____no_output_____
###Markdown
As we've seen with built-in functions, functions can also take strings (or arrays, or tables) as arguments, and they can return those things, too.**Question 3. Define a function called `disemvowel`. It should take a single string as its argument. (You can call that argument whatever you want.) It should return a copy of that string, but with all the characters that are vowels removed. (In English, the vowels are the characters "a", "e", "i", "o", and "u".)***Hint:* To remove all the "a"s from a string, you can use `.replace("a", "")`. The `.replace` method for strings returns a new string, so you can call `replace` multiple times, one after the other.
###Code
def disemvowel(a_string):
    ...

# An example call to your function. (It's often helpful to run
# an example call from time to time while you're writing a function,
# to see how it currently works.)
disemvowel("Can you read this without vowels?")
###Output
_____no_output_____
###Markdown
Calls on calls on callsJust as you write a series of lines to build up a complex computation, it's useful to define a series of small functions that build on each other. Since you can write any code inside a function's body, you can call other functions you've written.If a function is a like a recipe, defining a function in terms of other functions is like having a recipe for cake telling you to follow another recipe to make the frosting, and another to make the jam filling. This makes the cake recipe shorter and clearer, and it avoids having a bunch of duplicated frosting recipes. It's a foundation of productive programming.For example, suppose you want to count the number of characters *that aren't vowels* in a piece of text. One way to do that is this to remove all the vowels and count the size of the remaining string.**Question 4. Write a function called `num_non_vowels`. It should take a string as its argument and return a number. That number should be the number of characters in the argument string that aren't vowels.***Hint:* The function `len` takes a string as its argument and returns the number of characters in it.
###Code
def num_non_vowels(a_string):
"""The number of characters in a string, minus the vowels."""
...
# Try calling your function yourself to make sure the output is what
# you expect. You can also use the interact function in the next cell if you'd like.
###Output
_____no_output_____
###Markdown
Functions can also encapsulate code that *does an action* rather than computing a value. For example, if you call `print` inside a function, and then call that function, something will get printed.The `movies_by_year` dataset in the textbook has information about movie sales in recent years. Suppose you'd like to display the year with the 5th-highest total gross movie sales, printed in a human-readable way. You might do this:
###Code
movies_by_year = Table.read_table("movies_by_year.csv")
rank = 5
fifth_from_top_movie_year = movies_by_year.sort("Total Gross", descending=True).column("Year").item(rank-1)
print("Year number", rank, "for total gross movie sales was:", fifth_from_top_movie_year)
###Output
_____no_output_____
###Markdown
After writing this, you realize you also wanted to print out the 2nd and 3rd-highest years. Instead of copying your code, you decide to put it in a function. Since the rank varies, you make that an argument to your function.**Question 5. Write a function called `print_kth_top_movie_year`. It should take a single argument, the rank of the year (like 2, 3, or 5 in the above examples). It should print out a message like the one above.***Note:* Your function shouldn't have a `return` statement.
###Code
def print_kth_top_movie_year(k):
...
print(...)
# Example calls to your function:
print_kth_top_movie_year(2)
print_kth_top_movie_year(3)
###Output
_____no_output_____
###Markdown
`print` is not the same as `return`The `print_kth_top_movie_year(k)` function prints the total gross movie sales for the year that was provided! However, since we did not return any value in this function, we can not use it after we call it. Let's look at an example of another function that prints a value but does not return it.
###Code
def print_number_five():
print(5)
print_number_five()
###Output
_____no_output_____
###Markdown
However, if we try to use the output of `print_number_five()`, we see that we get an error when we try to add the number 5 to it!
###Code
print_number_five_output = print_number_five()
print_number_five_output + 5
###Output
_____no_output_____
###Markdown
It may seem that `print_number_five()` is returning a value, 5. In reality, it just displays the number 5 to you without giving you the actual value! If your function prints out a value without returning it and you try to use that value, you will run into errors, so be careful! Practice Writing FunctionsIn this question, we'll look at the 2015 compensation of CEOs at the 100 largest companies in California. The data was compiled from a [Los Angeles Times analysis](http://spreadsheets.latimes.com/california-ceo-compensation/), and ultimately came from [filings](https://www.sec.gov/answers/proxyhtf.htm) mandated by the SEC from all publicly-traded companies. Two companies have two CEOs, so there are 102 CEOs in the dataset.We've copied the raw data from the LA Times page into a file called `raw_compensation.csv`. (The page notes that all dollar amounts are in **millions of dollars**.)
###Code
raw_compensation = Table.read_table('raw_compensation.csv')
raw_compensation.show(5)
###Output
_____no_output_____
###Markdown
We want to compute the average of the CEOs' pay. Try running the cell below.
###Code
np.average(raw_compensation.column("Total Pay"))
###Output
_____no_output_____
###Markdown
You should see an error. Let's examine why this error occurred by looking at the values in the `Total Pay` column.
###Code
raw_compensation.column("Total Pay").item(0)
type(raw_compensation.column("Total Pay").item(0))
###Output
_____no_output_____
###Markdown
It looks like the values in the `Total Pay` column are strings. It doesn't make sense to take the average of string values, so we need to convert them to numbers if we want to do this. Let's extract the first value in `Total Pay`. It's Mark Hurd's pay in 2015, in *millions* of dollars.
###Code
mark_hurd_pay_string = raw_compensation.column("Total Pay").item(0)
mark_hurd_pay_string
###Output
_____no_output_____
###Markdown
**Question 6. Convert `mark_hurd_pay_string` to a number of *dollars*.**Some hints, as this question requires multiple steps:- The string method `strip` will be useful for removing the dollar sign; it removes a specified character from the start or end of a string. For example, the value of `"100%".strip("%")` is the string `"100"`. - You'll also need the function `float`, which converts a string that looks like a number to an actual number. - Finally, remember that the answer should be in dollars, not millions of dollars.
###Code
###Output
_____no_output_____
###Markdown
To compute the average pay, we need to do this for every CEO. But that looks like it would involve copying this code 102 times. This is where functions come in. First, we'll define a new function, giving a name to the expression that converts "total pay" strings to numeric values. In the next section, we'll see the payoff: we can call that function on every pay string in the dataset at once. **Question 7. Copy the expression you used to compute `mark_hurd_pay`, and use it as the return expression of the function below. But make sure you replace the specific `mark_hurd_pay_string` with the generic `pay_string` name specified in the first line in the `def` statement.***Hint*: When dealing with functions, you should generally not be referencing any variable outside of the function. Usually, you want to be working with the arguments that are passed into it, such as `pay_string` for this function. If you're using `mark_hurd_pay_string` within your function, you're referencing an outside variable!
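As a generic illustration of that hint (unrelated to the pay data), a function body should depend only on its own parameter:
```python
# Illustrative only: the body uses the parameter `s`,
# not a specific variable defined outside the function.
def shout(s):
    return s.upper() + "!"

shout("hello")   # -> 'HELLO!'
```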
###Code
def convert_pay_string_to_number(pay_string):
"""Converts a pay string like '$100' (in millions) to a number of dollars."""
return ...
###Output
_____no_output_____
###Markdown
`apply`ing functions Defining a function is a lot like giving a name to a value with `=`. In fact, a function is a value just like the number 1 or the text "the"! For example, we can make a new name for the built-in function `max` if we want:
###Code
our_name_for_max = max
our_name_for_max(2, 6)
###Output
_____no_output_____
###Markdown
The old name for `max` is still around:
###Code
max(2, 6)
###Output
_____no_output_____
###Markdown
Try just writing `max` or `our_name_for_max` (or the name of any other function) in a cell, and run that cell. Python will print out a (very brief) description of the function.
###Code
max
###Output
_____no_output_____
###Markdown
Why is this useful? Since functions are just values, it's possible to pass them as arguments to other functions. This means we can use tools like the table method `apply`. `apply` calls a function many times, once on *each* element in a column of a table. It produces an *array* of the results. Here we use `apply` to convert every CEO's pay to a number, using the function you defined:
###Code
raw_compensation.apply(convert_pay_string_to_number, "Total Pay")
###Output
_____no_output_____
###Markdown
Here's an illustration of what that did: Note that we didn't write something like `convert_pay_string_to_number()` or `convert_pay_string_to_number("Total Pay ($)")`. The job of `apply` is to call the function we give it, so instead of calling `convert_pay_string_to_number` ourselves, we just write its name as an argument to `apply`. **Question 8. Using `apply`, make a table that's a copy of `raw_compensation` with one additional column called `Total Pay ($)`. That column should contain the result of applying `convert_pay_string_to_number` to the `Total Pay` column (as we did above). Call the new table `compensation`.**
###Code
compensation = raw_compensation.with_column(
"Total Pay ($)",
    ...
)
compensation
###Output
_____no_output_____
###Markdown
Now that we have all the pays as numbers, we can learn more about them through computation.**Question 9. Compute the average total pay of the CEOs in the dataset.**
###Code
average_total_pay = ...
average_total_pay
###Output
_____no_output_____
###Markdown
**Question 10. Companies pay executives in a variety of ways: in cash, by granting stock or other equity in the company, or with ancillary benefits (like private jets). Compute the proportion of each CEO's pay that was cash. (Your answer should be an array of numbers, one for each CEO in the dataset.)***Note:* When you answer this question, you'll encounter a red box appearing below your code cell that says something like `RuntimeWarning: invalid value encountered in true_divide`. Don't worry too much about the message. Warnings are raised by Python when it encounters an unusual condition in your code, but the condition is not severe enough to warrant throwing an error. The warning below is Python's cryptic way of telling you that you're dividing a number by zero. If you extract the values in `Total Pay ($)` as an array, you'll see that the last element is 0.
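If you want to see that warning on its own, a tiny NumPy example (unrelated to the dataset) reproduces it; note that the exact wording of the message can differ between NumPy versions:
```python
import numpy as np

cash = np.array([1.0, 0.0])
total = np.array([2.0, 0.0])
cash / total   # second entry is 0/0 -> nan, and NumPy emits a RuntimeWarning about an invalid value
```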
###Code
cash_proportion = ...
cash_proportion
###Output
_____no_output_____
###Markdown
Check out the `% Change` column in `compensation`. It shows the percentage increase in the CEO's pay from the previous year. For CEOs with no previous year on record, it instead says "(No previous year)". The values in this column are *strings*, not numbers, so like the `Total Pay` column, it's not usable without a bit of extra work. Given your current pay and the percentage increase from the previous year, you can compute your previous year's pay. For example, if your pay is $\$100$ this year, and that's an increase of 50% from the previous year, then your previous year's pay was $\frac{\$100}{1 + \frac{50}{100}}$, or around \$66.67.**Question 11. Create a new table called `with_previous_compensation`. It should be a copy of `compensation`, but with the "(No previous year)" CEOs filtered out, and with an extra column called `2014 Total Pay ($)`. That column should have each CEO's pay in 2014.***Hint 1:* You can print out your results after each step to make sure you're on the right track. Split the cell to make this easier to do!*Hint 2:* I've provided a structure that you can use to get to the answer. However, if it's confusing, feel free to delete the current structure and approach the problem your own way!
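As a quick sanity check of that formula in plain Python (using the numbers from the example above):
```python
current_pay = 100                          # dollars, from the example
percent_change = 50                        # percent increase over the previous year
current_pay / (1 + percent_change / 100)   # -> 66.666..., the previous year's pay
```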
###Code
# Definition to turn percent to number
def percent_string_to_num(percent_string):
"""Converts a percentage string to a number."""
return ...
# Compensation table where there is a previous year
having_previous_year = ...
# Get the percent changes as numbers instead of strings
# We're still working off the table having_previous_year
percent_changes = ...
# Calculate the previous year's pay
# We're still working off the table having_previous_year
previous_pay = ...
# Put the previous pay column into the having_previous_year table
with_previous_compensation = ...
with_previous_compensation
###Output
_____no_output_____
###Markdown
**Question 12. What was the average pay of these CEOs in 2014?**
###Code
average_pay_2014 = ...
average_pay_2014
###Output
_____no_output_____ |
fashion_MNIST/tf/simple_DNN.ipynb | ###Markdown
DNN https://www.tensorflow.org/tutorials/keras/classification ___
###Code
%tensorflow_version 2.x
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
%matplotlib inline
%config InlineBackend.figure_formats = {'png', 'retina'}
tf.test.gpu_device_name() # check that a GPU is recognized
###Output
TensorFlow 2.x selected.
###Markdown
Load the data
###Code
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
train_images = train_images / 255.0
test_images = test_images / 255.0
###Output
_____no_output_____
###Markdown
Build the model
###Code
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)), # specify input_shape only for the first layer
keras.layers.Dense(256, activation='relu'), # 28*28 -> 256
keras.layers.Dense(128, activation='relu'), # 256 -> 128
keras.layers.Dense(32, activation='relu'), # 128 -> 32
keras.layers.Dense(16, activation='relu'), # 32 -> 16
keras.layers.Dense(10, activation='softmax') # 16 -> 10
])
model.summary()
from tensorflow.keras import optimizers
model.compile(optimizer=optimizers.Adam(learning_rate=0.002), # str or optimizer instance; a plain optimizer='adam' would leave every parameter at its default
              loss="sparse_categorical_crossentropy", # labels are not one-hot encoded, so use this loss
              metrics=["accuracy"]) # used to monitor the training and test steps
###Output
_____no_output_____
###Markdown
Training
###Code
%%time
model.fit(train_images,
train_labels,
epochs=30)
###Output
Train on 60000 samples
Epoch 1/30
60000/60000 [==============================] - 5s 86us/sample - loss: 0.5160 - accuracy: 0.8151
Epoch 2/30
60000/60000 [==============================] - 4s 74us/sample - loss: 0.3849 - accuracy: 0.8590
Epoch 3/30
60000/60000 [==============================] - 4s 74us/sample - loss: 0.3502 - accuracy: 0.8727
Epoch 4/30
60000/60000 [==============================] - 5s 75us/sample - loss: 0.3271 - accuracy: 0.8790
Epoch 5/30
60000/60000 [==============================] - 5s 76us/sample - loss: 0.3091 - accuracy: 0.8867
Epoch 6/30
60000/60000 [==============================] - 5s 75us/sample - loss: 0.2954 - accuracy: 0.8913
Epoch 7/30
60000/60000 [==============================] - 5s 76us/sample - loss: 0.2852 - accuracy: 0.8942
Epoch 8/30
60000/60000 [==============================] - 5s 76us/sample - loss: 0.2752 - accuracy: 0.8973
Epoch 9/30
60000/60000 [==============================] - 5s 75us/sample - loss: 0.2672 - accuracy: 0.9002
Epoch 10/30
60000/60000 [==============================] - 4s 74us/sample - loss: 0.2578 - accuracy: 0.9044
Epoch 11/30
60000/60000 [==============================] - 4s 74us/sample - loss: 0.2495 - accuracy: 0.9072
Epoch 12/30
60000/60000 [==============================] - 4s 75us/sample - loss: 0.2452 - accuracy: 0.9069
Epoch 13/30
60000/60000 [==============================] - 4s 75us/sample - loss: 0.2382 - accuracy: 0.9107
Epoch 14/30
60000/60000 [==============================] - 5s 76us/sample - loss: 0.2346 - accuracy: 0.9121
Epoch 15/30
60000/60000 [==============================] - 4s 75us/sample - loss: 0.2288 - accuracy: 0.9151
Epoch 16/30
60000/60000 [==============================] - 5s 75us/sample - loss: 0.2218 - accuracy: 0.9172
Epoch 17/30
60000/60000 [==============================] - 5s 75us/sample - loss: 0.2169 - accuracy: 0.9184
Epoch 18/30
60000/60000 [==============================] - 5s 75us/sample - loss: 0.2140 - accuracy: 0.9194
Epoch 19/30
60000/60000 [==============================] - 4s 75us/sample - loss: 0.2103 - accuracy: 0.9215
Epoch 20/30
60000/60000 [==============================] - 5s 75us/sample - loss: 0.2039 - accuracy: 0.9236
Epoch 21/30
60000/60000 [==============================] - 5s 76us/sample - loss: 0.2004 - accuracy: 0.9252
Epoch 22/30
60000/60000 [==============================] - 4s 74us/sample - loss: 0.2000 - accuracy: 0.9254
Epoch 23/30
60000/60000 [==============================] - 5s 75us/sample - loss: 0.1941 - accuracy: 0.9266
Epoch 24/30
60000/60000 [==============================] - 5s 76us/sample - loss: 0.1884 - accuracy: 0.9293
Epoch 25/30
60000/60000 [==============================] - 4s 75us/sample - loss: 0.1901 - accuracy: 0.9290
Epoch 26/30
60000/60000 [==============================] - 5s 75us/sample - loss: 0.1847 - accuracy: 0.9299
Epoch 27/30
60000/60000 [==============================] - 4s 75us/sample - loss: 0.1804 - accuracy: 0.9327
Epoch 28/30
60000/60000 [==============================] - 5s 75us/sample - loss: 0.1823 - accuracy: 0.9322
Epoch 29/30
60000/60000 [==============================] - 4s 74us/sample - loss: 0.1774 - accuracy: 0.9342
Epoch 30/30
60000/60000 [==============================] - 5s 75us/sample - loss: 0.1765 - accuracy: 0.9340
CPU times: user 2min 37s, sys: 13.1 s, total: 2min 51s
Wall time: 2min 15s
###Markdown
Inference
###Code
predictions = model.predict(test_images)
pred_labels = np.argmax(predictions, axis=1)
test_acc = np.mean(pred_labels == test_labels)
# test_loss, test_acc = model.evaluate(test_images, test_labels)
print('\nTest accuracy:', test_acc)
l = 5
h = 6
plt.figure(figsize=(15, 18))
for i, (img , pred_l, test_l) in enumerate(zip(test_images[:l*h], pred_labels[:l*h], test_labels[:l*h]), start=1):
plt.subplot(h, l, i)
plt.imshow(img, cmap="gray")
if pred_l != test_l:
color = "red"
else:
color = "black"
plt.title(f"{class_names[pred_l]}/{class_names[test_l]}", color=color)
plt.axis("off")
plt.show()
###Output
_____no_output_____ |
jupyterhub/notebooks/zz_under_construction/zz_old/TensorFlow/Labs/lstm.ipynb | ###Markdown
The goal of this exercise is to train an LSTM character model over [Text8] data.
###Code
import os
import urllib
url = 'http://mattmahoney.net/dc/'
def maybe_download(filename, expected_bytes):
"""Download a file if not present, and make sure it's the right size."""
if not os.path.exists(filename):
filename, _ = urllib.urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print 'Found and verified', filename
else:
print statinfo.st_size
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
filename = maybe_download('text8.zip', 31344016)
import zipfile
def read_data(filename):
f = zipfile.ZipFile(filename)
for name in f.namelist():
return f.read(name)
f.close()
text = read_data(filename)
print "Data size", len(text)
###Output
Data size 100000000
###Markdown
Create a small validation set.
###Code
valid_size = 1000
valid_text = text[:valid_size]
train_text = text[valid_size:]
train_size = len(train_text)
print train_size, train_text[:64]
print valid_size, valid_text[:64]
###Output
99999000 ons anarchists advocate social relations based upon voluntary as
1000 anarchism originated as a term of abuse first used against earl
###Markdown
Utility functions to map characters to vocabulary IDs and back.
###Code
import string
# Note that `string` here refers to Python's standard string module
vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' '
first_letter = ord(string.ascii_lowercase[0])
print vocabulary_size
print first_letter
## The following two functions map characters to vocabulary IDs and back
def char2id(char):
if char in string.ascii_lowercase:
return ord(char) - first_letter + 1
elif char == ' ':
return 0
else:
print 'Unexpected character:', char
return 0
def id2char(dictid):
if dictid > 0:
return chr(dictid + first_letter - 1)
else:
return ' '
print char2id('a'), char2id('z'), char2id(' '), char2id('ï')
print id2char(1)
print id2char(26)
###Output
1 26 0 Unexpected character: ï
0
a
z
###Markdown
Function to generate a training batch for the LSTM model.
###Code
import os
import numpy as np
import tensorflow as tf
batch_size=64
num_unrollings=10 #number of characters taken each time
#Define a class that has functions
class BatchGenerator(object):
def __init__(self, text, batch_size, num_unrollings):
self._text = text
self._text_size = len(text)
self._batch_size = batch_size
self._num_unrollings = num_unrollings
segment = self._text_size / batch_size
self._cursor = [ offset * segment for offset in xrange(batch_size)]
self._last_batch = self._next_batch()
def _next_batch(self):
"""Generate a single batch from the current cursor position in the data."""
batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float)
for b in xrange(self._batch_size):
batch[b, char2id(self._text[self._cursor[b]])] = 1.0
self._cursor[b] = (self._cursor[b] + 1) % self._text_size
return batch
def next(self):
"""Generate the next array of batches from the data. The array consists of
the last batch of the previous array, followed by num_unrollings new ones.
"""
batches = [self._last_batch]
for step in xrange(self._num_unrollings):
batches.append(self._next_batch())
self._last_batch = batches[-1]
return batches
def characters(probabilities):
"""Turn a 1-hot encoding or a probability distribution over the possible
  characters back into its (most likely) character representation."""
return [id2char(c) for c in np.argmax(probabilities, 1)]
def batches2string(batches):
"""Convert a sequence of batches back into their (most likely) string
representation."""
s = [''] * batches[0].shape[0]
for b in batches:
s = [''.join(x) for x in zip(s, characters(b))]
return s
# Initialize two BatchGenerator objects, each holding its own data and cursor state
train_batches = BatchGenerator(train_text, batch_size, num_unrollings)
print batches2string(train_batches.next())
print batches2string(train_batches.next())
valid_batches = BatchGenerator(valid_text, 1, 1)
print batches2string(valid_batches.next())
print batches2string(valid_batches.next())
import random
"""Log-probability of the true labels in a predicted batch."""
def logprob(predictions, labels):
predictions[predictions < 1e-10] = 1e-10
return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0]
"""Sample one element from a distribution, assumed to be normalized probabilities. """
def sample_distribution(distribution):
r = random.uniform(0, 1)
s = 0
for i in xrange(len(distribution)):
s += distribution[i]
if s >= r:
return i
return len(distribution) - 1
"""Turn a (column) prediction into 1-hot encoded samples."""
def sample(prediction):
p = np.zeros(shape=[1, vocabulary_size], dtype=np.float)
p[0, sample_distribution(prediction[0])] = 1.0
return p
"""Generate a random column of probabilities."""
def random_distribution():
b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size])
return b/np.sum(b, 1)[:,None]
###Output
_____no_output_____
###Markdown
Simple LSTM Model.
###Code
num_nodes = 64
graph = tf.Graph()
with graph.as_default():
# Parameters:
# Input gate: input, previous output, and bias.
ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))
im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))
ib = tf.Variable(tf.zeros([1, num_nodes]))
# Forget gate: input, previous output, and bias.
fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))
fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))
fb = tf.Variable(tf.zeros([1, num_nodes]))
# Memory cell: input, state and bias.
cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))
cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))
cb = tf.Variable(tf.zeros([1, num_nodes]))
# Output gate: input, previous output, and bias.
ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))
om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))
ob = tf.Variable(tf.zeros([1, num_nodes]))
# Variables saving state across unrollings.
saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)
saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)
# Classifier weights and biases.
w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1))
b = tf.Variable(tf.zeros([vocabulary_size]))
# Definition of the cell computation.
def lstm_cell(i, o, state):
"""Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf
Note that in this formulation, we omit the various connections between the
previous state and the gates."""
input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib)
forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb)
update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb
state = forget_gate * state + input_gate * tf.tanh(update)
output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob)
return output_gate * tf.tanh(state), state
# Input data.
train_data = list()
for _ in xrange(num_unrollings + 1):
train_data.append(
tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size]))
train_inputs = train_data[:num_unrollings]
train_labels = train_data[1:] # labels are inputs shifted by one time step.
# Unrolled LSTM loop.
outputs = list()
output = saved_output
state = saved_state
for i in train_inputs:
output, state = lstm_cell(i, output, state)
outputs.append(output)
# State saving across unrollings.
with tf.control_dependencies([saved_output.assign(output),
saved_state.assign(state)]):
# Classifier.
logits = tf.nn.xw_plus_b(tf.concat(0, outputs), w, b)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(
logits, tf.concat(0, train_labels)))
# Optimizer.
global_step = tf.Variable(0)
learning_rate = tf.train.exponential_decay(
10.0, global_step, 5000, 0.1, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
gradients, v = zip(*optimizer.compute_gradients(loss))
gradients, _ = tf.clip_by_global_norm(gradients, 1.25)
optimizer = optimizer.apply_gradients(
zip(gradients, v), global_step=global_step)
# Predictions.
train_prediction = tf.nn.softmax(logits)
# Sampling and validation eval: batch 1, no unrolling.
sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size])
saved_sample_output = tf.Variable(tf.zeros([1, num_nodes]))
saved_sample_state = tf.Variable(tf.zeros([1, num_nodes]))
reset_sample_state = tf.group(
saved_sample_output.assign(tf.zeros([1, num_nodes])),
saved_sample_state.assign(tf.zeros([1, num_nodes])))
sample_output, sample_state = lstm_cell(
sample_input, saved_sample_output, saved_sample_state)
with tf.control_dependencies([saved_sample_output.assign(sample_output),
saved_sample_state.assign(sample_state)]):
sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b))
num_steps = 7001
summary_frequency = 100
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print 'Initialized'
mean_loss = 0
for step in xrange(num_steps):
batches = train_batches.next()
feed_dict = dict()
for i in xrange(num_unrollings + 1):
feed_dict[train_data[i]] = batches[i]
_, l, predictions, lr = session.run(
[optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict)
mean_loss += l
if step % summary_frequency == 0:
if step > 0:
mean_loss = mean_loss / summary_frequency
# The mean loss is an estimate of the loss over the last few batches.
print 'Average loss at step', step, ':', mean_loss, 'learning rate:', lr
mean_loss = 0
labels = np.concatenate(list(batches)[1:])
print 'Minibatch perplexity: %.2f' % float(
np.exp(logprob(predictions, labels)))
if step % (summary_frequency * 10) == 0:
# Generate some samples.
print '=' * 80
for _ in xrange(5):
feed = sample(random_distribution())
sentence = characters(feed)[0]
reset_sample_state.run()
for _ in xrange(79):
prediction = sample_prediction.eval({sample_input: feed})
feed = sample(prediction)
sentence += characters(feed)[0]
print sentence
print '=' * 80
# Measure validation set perplexity.
reset_sample_state.run()
valid_logprob = 0
for _ in xrange(valid_size):
b = valid_batches.next()
predictions = sample_prediction.eval({sample_input: b[0]})
valid_logprob = valid_logprob + logprob(predictions, b[1])
print 'Validation set perplexity: %.2f' % float(np.exp(
valid_logprob / valid_size))
###Output
_____no_output_____ |
remove_scalar_outliers/remove_scalar_outliers.ipynb | ###Markdown
Remove Scalar Outliers
###Code
def remove_scalar_outliers(df, c_std_max=2, verbose=False, file='outliers'):
'''
Remove all scalar outliers of all features in a dataframe.
Outlier condition: A value is considered as an outlier, if the absolute value is bigger than the absolute value of the arithmetic mean of a feature plus the absolute value of the standard deviation of a feature times `c_std_max`.
`c_std_max` (e.g. `1` or `2`) is the amount of standard deviations for each feature value not considered as an outlier.
`c_std_max (default 2)` is set to two by default. The value can be specified as a parameter of the function.
Save an infographic as .png-file with `filename` (default `outliers`) specified under `file`-parameter.
If `verbose=True` (default `false`), show detailed info about outliers removed.
'''
import pandas as pd
import numpy as np
import matplotlib
from IPython.display import display, Markdown, Math
# Helper functions
def latex(string):
display(Markdown(rf"""{string}"""))
def save_fig(fig,file):
fig.savefig(f'{file}.png',bbox_inches='tight')
def plot_barh(X,y,file):
import math
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,int(math.ceil(len(df.columns)/3))))
fig.add_subplot() \
.barh(X,y,color=(0.2, 0.4, 0.6, 0.6))
plt.xticks(rotation=90)
plt.tight_layout()
plt.show()
if file:
save_fig(fig,file)
def plot_histogram(df):
import matplotlib.pyplot as plt
#df = df.loc[df.select_dtypes(include=['number', 'datetime'])]
fig, axes = plt.subplots(len(df.columns)//3, 3, figsize=(12, 48))
i = 0
for triaxis in axes:
for axis in triaxis:
df.hist(column = df.columns[i], bins = 100, ax=axis)
i = i+1
plt.show()
if verbose:
latex('## Feature Histograms (with outliers)')
plot_histogram(df)
latex('## Removing outliers...')
total_outliers_count = 0
outlier_counts = []
outlier_percentages = []
outliers = []
for feature in df.select_dtypes('number'):
max_dev = c_std_max*df[feature].std()
maximum = df[feature].mean() + max_dev
minimum = df[feature].mean() - max_dev
outlier_condition = (df[feature] > maximum) | (df[feature] < minimum)
amount_of_all_values_for_feature = outlier_condition.count()
if outlier_condition.any():
if verbose:
outliers_snapshot = pd.DataFrame([df.loc[
outlier_condition,
feature
].copy().rename(f"removed outliers in feature $$X := $$ {feature}")]).transpose()
outliers_snapshot['$$x_{min \, new}$$'] = minimum
outliers_snapshot['$$x_{max \, new}$$'] = maximum
outliers_snapshot['$$\overline{x} +/- c_{max} * \sigma$$'] = f'{df[feature].mean()} +/- {max_dev}'
outliers_snapshot['$$\overline{x}$$'] = df[feature].mean()
outliers_snapshot['$$c_{{\sigma_{{max}}}}$$'] = c_std_max
outliers_snapshot['$$\sigma$$'] = df[feature].std()
outliers.append(outliers_snapshot)
df.loc[outlier_condition, feature] = np.nan
if verbose:
amount_of_outliers_in_feature = outlier_condition.sum()
percentage = (amount_of_outliers_in_feature/amount_of_all_values_for_feature)*100
total_outliers_count += amount_of_outliers_in_feature
outlier_counts.append(amount_of_outliers_in_feature)
outlier_percentages.append(rf"$\frac{{{amount_of_outliers_in_feature}}}{{{amount_of_all_values_for_feature}}}$ ($\approx {percentage:.2f} \%$) outliers for $c_{{\sigma_{{max}}}}={c_std_max}$")
latex(rf"$\frac{{{amount_of_outliers_in_feature}}}{{{amount_of_all_values_for_feature}}}$ (${percentage} \%$) outliers in feature '{feature}' removed.")
else:
if verbose:
outlier_counts.append(0)
outlier_percentages.append(rf"$0 \%$ outliers for $c_{{\sigma_{{max}}}}={c_std_max}$")
latex(rf"Outliers in feature $X : = $ '{feature}' (dtype={df[feature].dtype}, mean={df[feature].mean()}, std={df[feature].std()}, n={amount_of_all_values_for_feature})")
latex(rf"$\forall x \in X : {minimum} <= x <= {maximum}$ (no outliers)")
else:
pass
if any(count != 0 for count in outlier_counts):
X = [feature + " " + percentage_string for feature, percentage_string in zip(df.select_dtypes('number').columns, outlier_percentages)]
if verbose:
latex('## Percentages of outliers removed as barchart')
plot_barh(X, outlier_counts, file)
latex('## Feature Histograms (without outliers)')
plot_histogram(df)
latex('## All outlier values for all features')
for feature_outliers in outliers:
display(feature_outliers)
return df
###Output
_____no_output_____
###Markdown
Here goes your own df (`pandas.DataFrame`)
###Code
# load example dataset <<heart>> from TensorFlow: https://www.tensorflow.org/tutorials/load_data/pandas_dataframe
import tensorflow as tf
import pandas as pd
csv_file = tf.keras.utils.get_file('heart.csv', 'https://storage.googleapis.com/download.tensorflow.org/data/heart.csv')
df = pd.read_csv(csv_file)
df = remove_scalar_outliers(df=df, c_std_max=2, verbose=True, file='outliers-heart')
###Output
_____no_output_____ |
Deep Learning Specialization/Sequence Models/Week 2/Emoji_v3a.ipynb | ###Markdown
Emojify! Welcome to the second assignment of Week 2! You're going to use word vector representations to build an Emojifier. 🤩 💫 🔥Have you ever wanted to make your text messages more expressive? Your emojifier app will help you do that. Rather than writing:>"Congratulations on the promotion! Let's get coffee and talk. Love you!" The emojifier can automatically turn this into:>"Congratulations on the promotion! 👍 Let's get coffee and talk. ☕️ Love you! ❤️"You'll implement a model which inputs a sentence (such as "Let's go see the baseball game tonight!") and finds the most appropriate emoji to be used with this sentence (⚾️). Using Word Vectors to Improve Emoji Lookups* In many emoji interfaces, you need to remember that ❤️ is the "heart" symbol rather than the "love" symbol. * In other words, you'll have to remember to type "heart" to find the desired emoji, and typing "love" won't bring up that symbol.* You can make a more flexible emoji interface by using word vectors!* When using word vectors, you'll see that even if your training set explicitly relates only a few words to a particular emoji, your algorithm will be able to generalize and associate additional words in the test set to the same emoji. * This works even if those additional words don't even appear in the training set. * This allows you to build an accurate classifier mapping from sentences to emojis, even using a small training set. What you'll build:1. In this exercise, you'll start with a baseline model (Emojifier-V1) using word embeddings.2. Then you will build a more sophisticated model (Emojifier-V2) that further incorporates an LSTM. By the end of this notebook, you'll be able to:* Create an embedding layer in Keras with pre-trained word vectors* Explain the advantages and disadvantages of the GloVe algorithm* Describe how negative sampling learns word vectors more efficiently than other methods* Build a sentiment classifier using word embeddings* Build and train a more sophisticated classifier using an LSTM🏀 👑👆 😎(^^^ Emoji for "skills") Table of Contents- [Packages](0)- [1 - Baseline Model: Emojifier-V1](1) - [1.1 - Dataset EMOJISET](1-1) - [1.2 - Overview of the Emojifier-V1](1-2) - [1.3 - Implementing Emojifier-V1](1-3) - [Exercise 1 - sentence_to_avg](ex-1) - [1.4 - Implement the Model](1-4) - [Exercise 2 - model](ex-2) - [1.5 - Examining Test Set Performance](1-5)- [2 - Emojifier-V2: Using LSTMs in Keras](2) - [2.1 - Model Overview](2-1) - [2.2 Keras and Mini-batching](2-2) - [2.3 - The Embedding Layer](2-3) - [Exercise 3 - sentences_to_indices](ex-3) - [Exercise 4 - pretrained_embedding_layer](ex-4) - [2.4 - Building the Emojifier-V2](2-4) - [Exercise 5 - Emojify_V2](ex-5) - [2.5 - Train the Model](2-5)- [3 - Acknowledgments](3) PackagesLet's get started! Run the following cell to load the packages you're going to use.
###Code
import numpy as np
from emo_utils import *
import emoji
import matplotlib.pyplot as plt
from test_utils import *
%matplotlib inline
###Output
_____no_output_____
###Markdown
1 - Baseline Model: Emojifier-V1 1.1 - Dataset EMOJISETLet's start by building a simple baseline classifier. You have a tiny dataset (X, Y) where:- X contains 127 sentences (strings).- Y contains an integer label between 0 and 4 corresponding to an emoji for each sentence.Figure 1: EMOJISET - a classification problem with 5 classes. A few examples of sentences are given here. Load the dataset using the code below. The dataset is split between training (127 examples) and testing (56 examples).
###Code
X_train, Y_train = read_csv('data/train_emoji.csv')
X_test, Y_test = read_csv('data/tesss.csv')
maxLen = len(max(X_train, key=len).split())
###Output
_____no_output_____
###Markdown
Run the following cell to print sentences from X_train and corresponding labels from Y_train. * Change `idx` to see different examples. * Note that due to the font used by iPython notebook, the heart emoji may be colored black rather than red.
###Code
for idx in range(10):
print(X_train[idx], label_to_emoji(Y_train[idx]))
###Output
never talk to me again 😞
I am proud of your achievements 😄
It is the worst day in my life 😞
Miss you so much ❤️
food is life 🍴
I love you mum ❤️
Stop saying bullshit 😞
congratulations on your acceptance 😄
The assignment is too long 😞
I want to go play ⚾
###Markdown
1.2 - Overview of the Emojifier-V1In this section, you'll implement a baseline model called "Emojifier-v1". Figure 2: Baseline model (Emojifier-V1). Inputs and Outputs* The input of the model is a string corresponding to a sentence (e.g. "I love you"). * The output will be a probability vector of shape (1,5), (indicating that there are 5 emojis to choose from).* The (1,5) probability vector is passed to an argmax layer, which extracts the index of the emoji with the highest probability. One-hot Encoding* To get your labels into a format suitable for training a softmax classifier, convert $Y$ from its current shape $(m, 1)$ into a "one-hot representation" $(m, 5)$, * Each row is a one-hot vector giving the label of one example. * Here, `Y_oh` stands for "Y-one-hot" in the variable names `Y_oh_train` and `Y_oh_test`:
###Code
Y_oh_train = convert_to_one_hot(Y_train, C = 5)
Y_oh_test = convert_to_one_hot(Y_test, C = 5)
###Output
_____no_output_____
###Markdown
Now, see what `convert_to_one_hot()` did. Feel free to change `index` to print out different values.
###Code
idx = 50
print(f"Sentence '{X_train[50]}' has label index {Y_train[idx]}, which is emoji {label_to_emoji(Y_train[idx])}", )
print(f"Label index {Y_train[idx]} in one-hot encoding format is {Y_oh_train[idx]}")
###Output
Sentence 'I missed you' has label index 0, which is emoji ❤️
Label index 0 in one-hot encoding format is [1. 0. 0. 0. 0.]
###Markdown
All the data is now ready to be fed into the Emojify-V1 model. You're ready to implement the model! 1.3 - Implementing Emojifier-V1As shown in Figure 2 (above), the first step is to:* Convert each word in the input sentence into their word vector representations.* Take an average of the word vectors. Similar to this week's previous assignment, you'll use pre-trained 50-dimensional GloVe embeddings. Run the following cell to load the `word_to_vec_map`, which contains all the vector representations.
###Code
word_to_index, index_to_word, word_to_vec_map = read_glove_vecs('data/glove.6B.50d.txt')
###Output
_____no_output_____
###Markdown
You've loaded:- `word_to_index`: dictionary mapping from words to their indices in the vocabulary - (400,001 words, with the valid indices ranging from 0 to 400,000)- `index_to_word`: dictionary mapping from indices to their corresponding words in the vocabulary- `word_to_vec_map`: dictionary mapping words to their GloVe vector representation.Run the following cell to check if it works:
###Code
word = "cucumber"
idx = 289846
print("the index of", word, "in the vocabulary is", word_to_index[word])
print("the", str(idx) + "th word in the vocabulary is", index_to_word[idx])
###Output
the index of cucumber in the vocabulary is 113317
the 289846th word in the vocabulary is potatos
###Markdown
Exercise 1 - sentence_to_avgImplement `sentence_to_avg()` You'll need to carry out two steps:1. Convert every sentence to lower-case, then split the sentence into a list of words. * `X.lower()` and `X.split()` might be useful. 😉2. For each word in the sentence, access its GloVe representation. * Then take the average of all of these word vectors. * You might use `numpy.zeros()`, which you can read more about [here]('https://numpy.org/doc/stable/reference/generated/numpy.zeros.html'). Additional Hints* When creating the `avg` array of zeros, you'll want it to be a vector of the same shape as the other word vectors in the `word_to_vec_map`. * You can choose a word that exists in the `word_to_vec_map` and access its `.shape` field. * Be careful not to hard-code the word that you access. In other words, don't assume that if you see the word 'the' in the `word_to_vec_map` within this notebook, that this word will be in the `word_to_vec_map` when the function is being called by the automatic grader.**Hint**: you can use any one of the word vectors that you retrieved from the input `sentence` to find the shape of a word vector.
###Code
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: sentence_to_avg
def sentence_to_avg(sentence, word_to_vec_map):
"""
Converts a sentence (string) into a list of words (strings). Extracts the GloVe representation of each word
and averages its value into a single vector encoding the meaning of the sentence.
Arguments:
sentence -- string, one training example from X
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
Returns:
avg -- average vector encoding information about the sentence, numpy-array of shape (J,), where J can be any number
"""
# Get a valid word contained in the word_to_vec_map.
any_word = list(word_to_vec_map.keys())[0]
### START CODE HERE ###
# Step 1: Split sentence into list of lower case words (≈ 1 line)
words = list(map(lambda x: x.lower(), sentence.split()))
# Initialize the average word vector, should have the same shape as your word vectors.
avg = np.zeros((len(word_to_vec_map[any_word]),))
# Initialize count to 0
count = 0
# Step 2: average the word vectors. You can loop over the words in the list "words".
for w in words:
# Check that word exists in word_to_vec_map
if w in word_to_vec_map:
avg += word_to_vec_map[w]
# Increment count
count +=1
if count > 0:
# Get the average. But only if count > 0
avg = avg / count
### END CODE HERE ###
return avg
# BEGIN UNIT TEST
avg = sentence_to_avg("Morrocan couscous is my favorite dish", word_to_vec_map)
print("avg = \n", avg)
def sentence_to_avg_test(target):
# Create a controlled word to vec map
word_to_vec_map = {'a': [3, 3], 'synonym_of_a': [3, 3], 'a_nw': [2, 4], 'a_s': [3, 2],
'c': [-2, 1], 'c_n': [-2, 2],'c_ne': [-1, 2], 'c_e': [-1, 1], 'c_se': [-1, 0],
'c_s': [-2, 0], 'c_sw': [-3, 0], 'c_w': [-3, 1], 'c_nw': [-3, 2]
}
# Convert lists to np.arrays
for key in word_to_vec_map.keys():
word_to_vec_map[key] = np.array(word_to_vec_map[key])
avg = target("a a_nw c_w a_s", word_to_vec_map)
assert tuple(avg.shape) == tuple(word_to_vec_map['a'].shape), "Check the shape of your avg array"
assert np.allclose(avg, [1.25, 2.5]), "Check that you are finding the 4 words"
avg = target("love a a_nw c_w a_s", word_to_vec_map)
assert np.allclose(avg, [1.25, 2.5]), "Divide by count, not len(words)"
avg = target("love", word_to_vec_map)
assert np.allclose(avg, [0, 0]), "Average of no words must give an array of zeros"
avg = target("c_se foo a a_nw c_w a_s deeplearning c_nw", word_to_vec_map)
assert np.allclose(avg, [0.1666667, 2.0]), "Debug the last example"
print("\033[92mAll tests passed!")
sentence_to_avg_test(sentence_to_avg)
# END UNIT TEST
###Output
avg =
[-0.008005 0.56370833 -0.50427333 0.258865 0.55131103 0.03104983
-0.21013718 0.16893933 -0.09590267 0.141784 -0.15708967 0.18525867
0.6495785 0.38371117 0.21102167 0.11301667 0.02613967 0.26037767
0.05820667 -0.01578167 -0.12078833 -0.02471267 0.4128455 0.5152061
0.38756167 -0.898661 -0.535145 0.33501167 0.68806933 -0.2156265
1.797155 0.10476933 -0.36775333 0.750785 0.10282583 0.348925
-0.27262833 0.66768 -0.10706167 -0.283635 0.59580117 0.28747333
-0.3366635 0.23393817 0.34349183 0.178405 0.1166155 -0.076433
0.1445417 0.09808667]
[92mAll tests passed!
###Markdown
1.4 - Implement the Model You now have all the pieces to finish implementing the `model()` function! After using `sentence_to_avg()` you need to:* Pass the average through forward propagation* Compute the cost* Backpropagate to update the softmax parameters Exercise 2 - model Implement the `model()` function described in Figure (2). * The equations you need to implement in the forward pass and to compute the cross-entropy cost are below:* The variable $Y_{oh}$ ("Y one hot") is the one-hot encoding of the output labels. $$ z^{(i)} = W \cdot avg^{(i)} + b$$$$ a^{(i)} = softmax(z^{(i)})$$$$ \mathcal{L}^{(i)} = - \sum_{k = 0}^{n_y - 1} Y_{oh,k}^{(i)} * log(a^{(i)}_k)$$**Note**: It is possible to come up with a more efficient vectorized implementation. For now, just use nested for loops to better understand the algorithm, and for easier debugging. The function `softmax()` is provided, and has already been imported.
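Before filling in the loop, it can help to check the three equations on a single toy example with plain NumPy (the numbers below are made up):
```python
import numpy as np

z = np.array([2.0, 1.0, 0.1, -0.5, 0.0])   # pretend this is W . avg + b for one sentence
a = np.exp(z) / np.sum(np.exp(z))           # softmax: entries are positive and sum to 1
y_oh = np.array([1, 0, 0, 0, 0])            # one-hot label for class 0
loss = -np.sum(y_oh * np.log(a))            # cross-entropy loss for this example
```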
###Code
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: model
def model(X, Y, word_to_vec_map, learning_rate=0.01, num_iterations=400):
"""
Model to train word vector representations in numpy.
Arguments:
X -- input data, numpy array of sentences as strings, of shape (m, 1)
Y -- labels, numpy array of integers between 0 and 7, numpy-array of shape (m, 1)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
learning_rate -- learning_rate for the stochastic gradient descent algorithm
num_iterations -- number of iterations
Returns:
pred -- vector of predictions, numpy-array of shape (m, 1)
W -- weight matrix of the softmax layer, of shape (n_y, n_h)
b -- bias of the softmax layer, of shape (n_y,)
"""
# Get a valid word contained in the word_to_vec_map
any_word = list(word_to_vec_map.keys())[0]
# Initialize cost. It is needed during grading
cost = 0
# Define number of training examples
m = Y.shape[0] # number of training examples
n_y = len(np.unique(Y)) # number of classes
n_h = word_to_vec_map[any_word].shape[0] # dimensions of the GloVe vectors
# Initialize parameters using Xavier initialization
W = np.random.randn(n_y, n_h) / np.sqrt(n_h)
b = np.zeros((n_y,))
# Convert Y to Y_onehot with n_y classes
Y_oh = convert_to_one_hot(Y, C=n_y)
# Optimization loop
for t in range(num_iterations): # Loop over the number of iterations
for i in range(m): # Loop over the training examples
### START CODE HERE ### (≈ 4 lines of code)
# Average the word vectors of the words from the i'th training example
avg = sentence_to_avg(X[i], word_to_vec_map)
# Forward propagate the avg through the softmax layer.
# You can use np.dot() to perform the multiplication.
z = np.dot(W, avg) + b
a = softmax(z)
# Compute cost using the i'th training label's one hot representation and "A" (the output of the softmax)
cost = - np.sum(Y_oh[i] * np.log(a))
### END CODE HERE ###
# Compute gradients
dz = a - Y_oh[i]
dW = np.dot(dz.reshape(n_y, 1), avg.reshape(1, n_h))
db = dz
# Update parameters with Stochastic Gradient Descent
W = W - learning_rate * dW
b = b - learning_rate * db
if t % 100 == 0:
print("Epoch: " + str(t) + " --- cost = " + str(cost))
pred = predict(X, Y, W, b, word_to_vec_map) #predict is defined in emo_utils.py
return pred, W, b
# UNIT TEST
def model_test(target):
# Create a controlled word to vec map
word_to_vec_map = {'a': [3, 3], 'synonym_of_a': [3, 3], 'a_nw': [2, 4], 'a_s': [3, 2], 'a_n': [3, 4],
'c': [-2, 1], 'c_n': [-2, 2],'c_ne': [-1, 2], 'c_e': [-1, 1], 'c_se': [-1, 0],
'c_s': [-2, 0], 'c_sw': [-3, 0], 'c_w': [-3, 1], 'c_nw': [-3, 2]
}
# Convert lists to np.arrays
for key in word_to_vec_map.keys():
word_to_vec_map[key] = np.array(word_to_vec_map[key])
# Training set. Sentences composed of a_* words will be of class 0 and sentences composed of c_* words will be of class 1
X = np.asarray(['a a_s synonym_of_a a_n c_sw', 'a a_s a_n c_sw', 'a_s a a_n', 'synonym_of_a a a_s a_n c_sw', " a_s a_n",
" a a_s a_n c ", " a_n a c c c_e",
'c c_nw c_n c c_ne', 'c_e c c_se c_s', 'c_nw c a_s c_e c_e', 'c_e a_nw c_sw', 'c_sw c c_ne c_ne'])
Y = np.asarray([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
np.random.seed(10)
pred, W, b = model(X, Y, word_to_vec_map, 0.0025, 110)
assert W.shape == (2, 2), "W must be of shape 2 x 2"
assert np.allclose(pred.transpose(), Y), "Model must give a perfect accuracy"
assert np.allclose(b[0], -1 * b[1]), "b should be symmetric in this example"
print("\033[92mAll tests passed!")
model_test(model)
###Output
Epoch: 0 --- cost = 0.05105772513207823
Accuracy: 0.9166666666666666
Epoch: 100 --- cost = 0.00970311068897676
Accuracy: 1.0
[92mAll tests passed!
###Markdown
Run the next cell to train your model and learn the softmax parameters (W, b). **The training process will take about 5 minutes**
###Code
np.random.seed(1)
pred, W, b = model(X_train, Y_train, word_to_vec_map)
print(pred)
###Output
Epoch: 0 --- cost = 1.9520498812810076
Accuracy: 0.3484848484848485
Epoch: 100 --- cost = 0.07971818726014807
Accuracy: 0.9318181818181818
Epoch: 200 --- cost = 0.04456369243681402
Accuracy: 0.9545454545454546
Epoch: 300 --- cost = 0.03432267378786059
Accuracy: 0.9696969696969697
[[3.]
[2.]
[3.]
[0.]
[4.]
[0.]
[3.]
[2.]
[3.]
[1.]
[3.]
[3.]
[1.]
[3.]
[2.]
[3.]
[2.]
[3.]
[1.]
[2.]
[3.]
[0.]
[2.]
[2.]
[2.]
[1.]
[4.]
[3.]
[3.]
[4.]
[0.]
[3.]
[4.]
[2.]
[0.]
[3.]
[2.]
[2.]
[3.]
[4.]
[2.]
[2.]
[0.]
[2.]
[3.]
[0.]
[3.]
[2.]
[4.]
[3.]
[0.]
[3.]
[3.]
[3.]
[4.]
[2.]
[1.]
[1.]
[1.]
[2.]
[3.]
[1.]
[0.]
[0.]
[0.]
[3.]
[4.]
[4.]
[2.]
[2.]
[1.]
[2.]
[0.]
[3.]
[2.]
[2.]
[0.]
[3.]
[3.]
[1.]
[2.]
[1.]
[2.]
[2.]
[4.]
[3.]
[3.]
[2.]
[4.]
[0.]
[0.]
[3.]
[3.]
[3.]
[3.]
[2.]
[0.]
[1.]
[2.]
[3.]
[0.]
[2.]
[2.]
[2.]
[3.]
[2.]
[2.]
[2.]
[4.]
[1.]
[1.]
[3.]
[3.]
[4.]
[1.]
[2.]
[1.]
[1.]
[3.]
[1.]
[0.]
[4.]
[0.]
[3.]
[3.]
[4.]
[4.]
[1.]
[4.]
[3.]
[0.]
[2.]]
###Markdown
Great! Your model has pretty high accuracy on the training set. Now see how it does on the test set: 1.5 - Examining Test Set Performance Note that the `predict` function used here is defined in `emo_util.spy`.
###Code
print("Training set:")
pred_train = predict(X_train, Y_train, W, b, word_to_vec_map)
print('Test set:')
pred_test = predict(X_test, Y_test, W, b, word_to_vec_map)
###Output
Training set:
Accuracy: 0.9772727272727273
Test set:
Accuracy: 0.8571428571428571
###Markdown
**Note**:* Random guessing would have had 20% accuracy, given that there are 5 classes. (1/5 = 20%).* This is pretty good performance after training on only 127 examples. The Model Matches Emojis to Relevant WordsIn the training set, the algorithm saw the sentence >"I love you." with the label ❤️. * You can check that the word "adore" does not appear in the training set. * Nonetheless, let's see what happens if you write "I adore you."
###Code
X_my_sentences = np.array(["i adore you", "i love you", "funny lol", "lets play with a ball", "food is ready", "not feeling happy"])
Y_my_labels = np.array([[0], [0], [2], [1], [4],[3]])
pred = predict(X_my_sentences, Y_my_labels , W, b, word_to_vec_map)
print_predictions(X_my_sentences, pred)
###Output
Accuracy: 0.8333333333333334
i adore you ❤️
i love you ❤️
funny lol 😄
lets play with a ball ⚾
food is ready 🍴
not feeling happy 😄
###Markdown
Amazing! * Because *adore* has a similar embedding as *love*, the algorithm has generalized correctly even to a word it has never seen before. * Words such as *heart*, *dear*, *beloved* or *adore* have embedding vectors similar to *love*. * Feel free to modify the inputs above and try out a variety of input sentences. * How well does it work? Word Ordering isn't Considered in this Model* Note that the model doesn't get the following sentence correct:>"not feeling happy" * This algorithm ignores word ordering, so is not good at understanding phrases like "not happy." Confusion Matrix* Printing the confusion matrix can also help understand which classes are more difficult for your model. * A confusion matrix shows how often an example whose label is one class ("actual" class) is mislabeled by the algorithm with a different class ("predicted" class).Print the confusion matrix below:
###Code
# START SKIP FOR GRADING
print(Y_test.shape)
print(' '+ label_to_emoji(0)+ ' ' + label_to_emoji(1) + ' ' + label_to_emoji(2)+ ' ' + label_to_emoji(3)+' ' + label_to_emoji(4))
print(pd.crosstab(Y_test, pred_test.reshape(56,), rownames=['Actual'], colnames=['Predicted'], margins=True))
plot_confusion_matrix(Y_test, pred_test)
# END SKIP FOR GRADING
###Output
(56,)
❤️ ⚾ 😄 😞 🍴
Predicted 0.0 1.0 2.0 3.0 4.0 All
Actual
0 6 0 0 1 0 7
1 0 8 0 0 0 8
2 2 0 16 0 0 18
3 1 1 2 12 0 16
4 0 0 1 0 6 7
All 9 9 19 13 6 56
###Markdown
What you should remember:- Even with a mere 127 training examples, you can get a reasonably good model for Emojifying. - This is due to the generalization power word vectors gives you. - Emojify-V1 will perform poorly on sentences such as *"This movie is not good and not enjoyable"* - It doesn't understand combinations of words. - It just averages all the words' embedding vectors together, without considering the ordering of words. **Not to worry! You will build a better algorithm in the next section!** 2 - Emojifier-V2: Using LSTMs in Keras You're going to build an LSTM model that takes word **sequences** as input! This model will be able to account for word ordering. Emojifier-V2 will continue to use pre-trained word embeddings to represent words. You'll feed word embeddings into an LSTM, and the LSTM will learn to predict the most appropriate emoji. PackagesRun the following cell to load the Keras packages you'll need:
###Code
import numpy as np
import tensorflow
np.random.seed(0)
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input, Dropout, LSTM, Activation
from tensorflow.keras.layers import Embedding
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.initializers import glorot_uniform
np.random.seed(1)
###Output
_____no_output_____
###Markdown
2.1 - Model OverviewHere is the Emojifier-v2 you will implement: Figure 3: Emojifier-V2. A 2-layer LSTM sequence classifier. 2.2 Keras and Mini-batching In this exercise, you want to train Keras using mini-batches. However, most deep learning frameworks require that all sequences in the same mini-batch have the **same length**. This is what allows vectorization to work: If you had a 3-word sentence and a 4-word sentence, then the computations needed for them are different (one takes 3 steps of an LSTM, one takes 4 steps) so it's just not possible to do them both at the same time. Padding Handles Sequences of Varying Length* The common solution to handling sequences of **different length** is to use padding. Specifically: * Set a maximum sequence length * Pad all sequences to have the same length. Example of Padding:* Given a maximum sequence length of 20, you could pad every sentence with "0"s so that each input sentence is of length 20. * Thus, the sentence "I love you" would be represented as $(e_{I}, e_{love}, e_{you}, \vec{0}, \vec{0}, \ldots, \vec{0})$. * In this example, any sentences longer than 20 words would have to be truncated. * One way to choose the maximum sequence length is to just pick the length of the longest sentence in the training set. 2.3 - The Embedding LayerIn Keras, the embedding matrix is represented as a "layer."* The embedding matrix maps word indices to embedding vectors. * The word indices are positive integers. * The embedding vectors are dense vectors of fixed size. * A "dense" vector is the opposite of a sparse vector. It means that most of its values are non-zero. As a counter-example, a one-hot encoded vector is not "dense."* The embedding matrix can be derived in two ways: * Training a model to derive the embeddings from scratch. * Using a pretrained embedding. Using and Updating Pre-trained EmbeddingsIn this section, you'll create an [Embedding()](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer in Keras* You will initialize the Embedding layer with GloVe 50-dimensional vectors. * In the code below, you'll observe how Keras allows you to either train or leave this layer fixed. * Because your training set is quite small, you'll leave the GloVe embeddings fixed instead of updating them. Inputs and Outputs to the Embedding Layer* The `Embedding()` layer's input is an integer matrix of size **(batch size, max input length)**. * This input corresponds to sentences converted into lists of indices (integers). * The largest integer (the highest word index) in the input should be no larger than the vocabulary size.* The embedding layer outputs an array of shape (batch size, max input length, dimension of word vectors).* The figure shows the propagation of two example sentences through the embedding layer. * Both examples have been zero-padded to a length of `max_len=5`. * The word embeddings are 50 units in length. * The final dimension of the representation is `(2,max_len,50)`. Figure 4: Embedding layer Prepare the Input Sentences Exercise 3 - sentences_to_indicesImplement `sentences_to_indices`This function processes an array of sentences X and returns inputs to the embedding layer:* Convert each training sentences into a list of indices (the indices correspond to each word in the sentence)* Zero-pad all these lists so that their length is the length of the longest sentence. 
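Keras also ships a helper that performs this kind of zero-padding; purely as an illustration (the graded function below builds the padded matrix by hand), it behaves like this:
```python
from tensorflow.keras.preprocessing import sequence

# Two index lists of different lengths, zero-padded on the right up to max_len=5
sequence.pad_sequences([[7, 21], [3, 9, 14]], maxlen=5, padding='post')
# -> array([[ 7, 21,  0,  0,  0],
#           [ 3,  9, 14,  0,  0]], dtype=int32)
```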
Additional Hints:* Note that you may have considered using the `enumerate()` function in the for loop, but for the purposes of passing the autograder, please follow the starter code by initializing and incrementing `j` explicitly.
###Code
for idx, val in enumerate(["I", "like", "learning"]):
print(idx, val)
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: sentences_to_indices
def sentences_to_indices(X, word_to_index, max_len):
"""
Converts an array of sentences (strings) into an array of indices corresponding to words in the sentences.
The output shape should be such that it can be given to `Embedding()` (described in Figure 4).
Arguments:
X -- array of sentences (strings), of shape (m, 1)
word_to_index -- a dictionary containing the each word mapped to its index
max_len -- maximum number of words in a sentence. You can assume every sentence in X is no longer than this.
Returns:
X_indices -- array of indices corresponding to words in the sentences from X, of shape (m, max_len)
"""
m = X.shape[0] # number of training examples
### START CODE HERE ###
# Initialize X_indices as a numpy matrix of zeros and the correct shape (≈ 1 line)
X_indices = np.zeros((m, max_len))
for i in range(m): # loop over training examples
        # Convert the ith training sentence to lower case and split it into words. You should get a list of words.
sentence_words = list(map(lambda x: x.lower(), X[i].split()))
# Initialize j to 0
j = 0
# Loop over the words of sentence_words
for w in sentence_words:
# if w exists in the word_to_index dictionary
if w in word_to_index:
# Set the (i,j)th entry of X_indices to the index of the correct word.
X_indices[i, j] = word_to_index[w]
# Increment j to j + 1
j += 1
### END CODE HERE ###
return X_indices
# UNIT TEST
def sentences_to_indices_test(target):
# Create a word_to_index dictionary
word_to_index = {}
for idx, val in enumerate(["i", "like", "learning", "deep", "machine", "love", "smile", '´0.=']):
word_to_index[val] = idx;
max_len = 4
sentences = np.array(["I like deep learning", "deep ´0.= love machine", "machine learning smile"]);
indexes = target(sentences, word_to_index, max_len)
print(indexes)
assert type(indexes) == np.ndarray, "Wrong type. Use np arrays in the function"
    assert indexes.shape == (sentences.shape[0], max_len), "Wrong shape of output matrix"
assert np.allclose(indexes, [[0, 1, 3, 2],
[3, 7, 5, 4],
[4, 2, 6, 0]]), "Wrong values. Debug with the given examples"
print("\033[92mAll tests passed!")
sentences_to_indices_test(sentences_to_indices)
###Output
[[0. 1. 3. 2.]
[3. 7. 5. 4.]
[4. 2. 6. 0.]]
[92mAll tests passed!
###Markdown
**Expected value**```[[0, 1, 3, 2], [3, 7, 5, 4], [4, 2, 6, 0]]``` Run the following cell to check what `sentences_to_indices()` does, and take a look at your results.
###Code
X1 = np.array(["funny lol", "lets play baseball", "food is ready for you"])
X1_indices = sentences_to_indices(X1, word_to_index, max_len=5)
print("X1 =", X1)
print("X1_indices =\n", X1_indices)
###Output
X1 = ['funny lol' 'lets play baseball' 'food is ready for you']
X1_indices =
[[155345. 225122. 0. 0. 0.]
[220930. 286375. 69714. 0. 0.]
[151204. 192973. 302254. 151349. 394475.]]
###Markdown
Build Embedding LayerNow you'll build the `Embedding()` layer in Keras, using pre-trained word vectors. * The embedding layer takes as input a list of word indices. * `sentences_to_indices()` creates these word indices.* The embedding layer will return the word embeddings for a sentence. Exercise 4 - pretrained_embedding_layerImplement `pretrained_embedding_layer()` with these steps:1. Initialize the embedding matrix as a numpy array of zeros. * The embedding matrix has a row for each unique word in the vocabulary. * There is one additional row to handle "unknown" words. * So vocab_size is the number of unique words plus one. * Each row will store the vector representation of one word. * For example, one row may be 50 positions long if using GloVe word vectors. * In the code below, `emb_dim` represents the length of a word embedding.2. Fill in each row of the embedding matrix with the vector representation of a word * Each word in `word_to_index` is a string. * word_to_vec_map is a dictionary where the keys are strings and the values are the word vectors.3. Define the Keras embedding layer. * Use [Embedding()](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding). * The input dimension is equal to the vocabulary length (number of unique words plus one). * The output dimension is equal to the number of positions in a word embedding. * Make this layer's embeddings fixed. * If you were to set `trainable = True`, then it will allow the optimization algorithm to modify the values of the word embeddings. * In this case, you don't want the model to modify the word embeddings.4. Set the embedding weights to be equal to the embedding matrix. * Note that this is part of the code is already completed for you and does not need to be modified!
###Code
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: pretrained_embedding_layer
def pretrained_embedding_layer(word_to_vec_map, word_to_index):
"""
Creates a Keras Embedding() layer and loads in pre-trained GloVe 50-dimensional vectors.
Arguments:
word_to_vec_map -- dictionary mapping words to their GloVe vector representation.
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
embedding_layer -- pretrained layer Keras instance
"""
vocab_size = len(word_to_index) + 1 # adding 1 to fit Keras embedding (requirement)
any_word = list(word_to_vec_map.keys())[0]
emb_dim = word_to_vec_map[any_word].shape[0] # define dimensionality of your GloVe word vectors (= 50)
### START CODE HERE ###
# Step 1
# Initialize the embedding matrix as a numpy array of zeros.
# See instructions above to choose the correct shape.
emb_matrix = np.zeros((vocab_size, emb_dim))
# Step 2
# Set each row "idx" of the embedding matrix to be
# the word vector representation of the idx'th word of the vocabulary
for word, idx in word_to_index.items():
emb_matrix[idx, :] = word_to_vec_map[word]
# Step 3
# Define Keras embedding layer with the correct input and output sizes
# Make it non-trainable.
embedding_layer = Embedding(vocab_size, emb_dim, trainable=False)
### END CODE HERE ###
# Step 4 (already done for you; please do not modify)
# Build the embedding layer, it is required before setting the weights of the embedding layer.
embedding_layer.build((None,)) # Do not modify the "None". This line of code is complete as-is.
# Set the weights of the embedding layer to the embedding matrix. Your layer is now pretrained.
embedding_layer.set_weights([emb_matrix])
return embedding_layer
# UNIT TEST
def pretrained_embedding_layer_test(target):
# Create a controlled word to vec map
word_to_vec_map = {'a': [3, 3], 'synonym_of_a': [3, 3], 'a_nw': [2, 4], 'a_s': [3, 2], 'a_n': [3, 4],
'c': [-2, 1], 'c_n': [-2, 2],'c_ne': [-1, 2], 'c_e': [-1, 1], 'c_se': [-1, 0],
'c_s': [-2, 0], 'c_sw': [-3, 0], 'c_w': [-3, 1], 'c_nw': [-3, 2]
}
# Convert lists to np.arrays
for key in word_to_vec_map.keys():
word_to_vec_map[key] = np.array(word_to_vec_map[key])
# Create a word_to_index dictionary
word_to_index = {}
for idx, val in enumerate(list(word_to_vec_map.keys())):
word_to_index[val] = idx;
np.random.seed(1)
embedding_layer = target(word_to_vec_map, word_to_index)
assert type(embedding_layer) == Embedding, "Wrong type"
assert embedding_layer.input_dim == len(list(word_to_vec_map.keys())) + 1, "Wrong input shape"
assert embedding_layer.output_dim == len(word_to_vec_map['a']), "Wrong output shape"
assert np.allclose(embedding_layer.get_weights(),
[[[ 3, 3], [ 3, 3], [ 2, 4], [ 3, 2], [ 3, 4],
[-2, 1], [-2, 2], [-1, 2], [-1, 1], [-1, 0],
                       [-2, 0], [-3, 0], [-3, 1], [-3, 2], [ 0, 0]]]), "Wrong values"
print("\033[92mAll tests passed!")
pretrained_embedding_layer_test(pretrained_embedding_layer)
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
print("weights[0][1][1] =", embedding_layer.get_weights()[0][1][1])
print("Input_dim", embedding_layer.input_dim)
print("Output_dim",embedding_layer.output_dim)
###Output
weights[0][1][1] = 0.39031
Input_dim 400001
Output_dim 50
###Markdown
2.4 - Building the Emojifier-V2Now you're ready to build the Emojifier-V2 model, in which you feed the embedding layer's output to an LSTM network! Figure 3: Emojifier-v2. A 2-layer LSTM sequence classifier. Exercise 5 - Emojify_V2Implement `Emojify_V2()`This function builds a Keras graph of the architecture shown in Figure (3). * The model takes as input an array of sentences of shape (`m`, `max_len`, ) defined by `input_shape`. * The model outputs a softmax probability vector of shape (`m`, `C = 5`). * You may need to use the following Keras layers: * [Input()](https://www.tensorflow.org/api_docs/python/tf/keras/Input) * Set the `shape` and `dtype` parameters. * The inputs are integers, so you can specify the data type as a string, 'int32'. * [LSTM()](https://www.tensorflow.org/api_docs/python/tf/keras/layers/LSTM) * Set the `units` and `return_sequences` parameters. * [Dropout()](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout) * Set the `rate` parameter. * [Dense()](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) * Set the `units`, * Note that `Dense()` has an `activation` parameter. For the purposes of passing the autograder, please do not set the activation within `Dense()`. Use the separate `Activation` layer to do so. * [Activation()](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Activation) * You can pass in the activation of your choice as a lowercase string. * [Model()](https://www.tensorflow.org/api_docs/python/tf/keras/Model) * Set `inputs` and `outputs`. Additional Hints* Remember that these Keras layers return an object, and you will feed in the outputs of the previous layer as the input arguments to that object. The returned object can be created and called in the same line.```Python How to use Keras layers in two lines of codedense_object = Dense(units = ...)X = dense_object(inputs) How to use Keras layers in one line of codeX = Dense(units = ...)(inputs)```* The `embedding_layer` that is returned by `pretrained_embedding_layer` is a layer object that can be called as a function, passing in a single argument (sentence indices).* Here is some sample code in case you're stuck: 😊```Pythonraw_inputs = Input(shape=(maxLen,), dtype='int32')preprocessed_inputs = ... some pre-processingX = LSTM(units = ..., return_sequences= ...)(processed_inputs)X = Dropout(rate = ..., )(X)...X = Dense(units = ...)(X)X = Activation(...)(X)model = Model(inputs=..., outputs=...)...```
###Code
# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: Emojify_V2
def Emojify_V2(input_shape, word_to_vec_map, word_to_index):
"""
Function creating the Emojify-v2 model's graph.
Arguments:
input_shape -- shape of the input, usually (max_len,)
word_to_vec_map -- dictionary mapping every word in a vocabulary into its 50-dimensional vector representation
word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)
Returns:
model -- a model instance in Keras
"""
### START CODE HERE ###
# Define sentence_indices as the input of the graph.
# It should be of shape input_shape and dtype 'int32' (as it contains indices, which are integers).
sentence_indices = Input(input_shape, dtype='int32')
# Create the embedding layer pretrained with GloVe Vectors (≈1 line)
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
# Propagate sentence_indices through your embedding layer
# (See additional hints in the instructions).
embeddings = embedding_layer(sentence_indices)
# Propagate the embeddings through an LSTM layer with 128-dimensional hidden state
# The returned output should be a batch of sequences.
X = LSTM(128, return_sequences=True)(embeddings)
# Add dropout with a probability of 0.5
X = Dropout(0.5)(X)
# Propagate X trough another LSTM layer with 128-dimensional hidden state
# The returned output should be a single hidden state, not a batch of sequences.
X = LSTM(128)(X)
# Add dropout with a probability of 0.5
X = Dropout(0.5)(X)
# Propagate X through a Dense layer with 5 units
X = Dense(5)(X)
# Add a softmax activation
X = Activation('softmax')(X)
# Create Model instance which converts sentence_indices into X.
model = Model(inputs=sentence_indices, outputs=X)
### END CODE HERE ###
return model
# UNIT TEST
def Emojify_V2_test(target):
# Create a controlled word to vec map
word_to_vec_map = {'a': [3, 3], 'synonym_of_a': [3, 3], 'a_nw': [2, 4], 'a_s': [3, 2], 'a_n': [3, 4],
'c': [-2, 1], 'c_n': [-2, 2],'c_ne': [-1, 2], 'c_e': [-1, 1], 'c_se': [-1, 0],
'c_s': [-2, 0], 'c_sw': [-3, 0], 'c_w': [-3, 1], 'c_nw': [-3, 2]
}
# Convert lists to np.arrays
for key in word_to_vec_map.keys():
word_to_vec_map[key] = np.array(word_to_vec_map[key])
# Create a word_to_index dictionary
word_to_index = {}
for idx, val in enumerate(list(word_to_vec_map.keys())):
word_to_index[val] = idx;
maxLen = 4
model = target((maxLen,), word_to_vec_map, word_to_index)
expectedModel = [['InputLayer', [(None, 4)], 0], ['Embedding', (None, 4, 2), 30], ['LSTM', (None, 4, 128), 67072, (None, 4, 2), 'tanh', True], ['Dropout', (None, 4, 128), 0, 0.5], ['LSTM', (None, 128), 131584, (None, 4, 128), 'tanh', False], ['Dropout', (None, 128), 0, 0.5], ['Dense', (None, 5), 645, 'linear'], ['Activation', (None, 5), 0]]
comparator(summary(model), expectedModel)
Emojify_V2_test(Emojify_V2)
###Output
All tests passed!
###Markdown
Run the following cell to create your model and check its summary. * Because all sentences in the dataset are less than 10 words, `max_len = 10` was chosen. * You should see that your architecture uses 20,223,927 parameters, of which 20,000,050 (the word embeddings) are non-trainable, with the remaining 223,877 being trainable. * Because your vocabulary size has 400,001 words (with valid indices from 0 to 400,000) there are 400,001\*50 = 20,000,050 non-trainable parameters.
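As a sanity check, the counts in that summary can be reproduced by hand. The sketch below is a rough back-of-the-envelope calculation (assuming the 50-dimensional GloVe vectors, the 400,001-word vocabulary and the two 128-unit LSTM layers described above), not part of the assignment:
```python
# Rough parameter arithmetic (assumed sizes: vocab 400,001, emb_dim 50, LSTM units 128, 5 classes)
vocab_size, emb_dim, units, n_classes = 400_001, 50, 128, 5
embedding = vocab_size * emb_dim                  # 20,000,050 non-trainable weights
lstm_1 = 4 * ((emb_dim + units + 1) * units)      # 91,648  (4 gates, input + recurrent + bias)
lstm_2 = 4 * ((units + units + 1) * units)        # 131,584
dense = units * n_classes + n_classes             # 645
print(embedding + lstm_1 + lstm_2 + dense)        # 20,223,927 total parameters
```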
###Code
model = Emojify_V2((maxLen,), word_to_vec_map, word_to_index)
model.summary()
###Output
Model: "functional_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 10)] 0
_________________________________________________________________
embedding_4 (Embedding) (None, 10, 50) 20000050
_________________________________________________________________
lstm_2 (LSTM) (None, 10, 128) 91648
_________________________________________________________________
dropout_2 (Dropout) (None, 10, 128) 0
_________________________________________________________________
lstm_3 (LSTM) (None, 128) 131584
_________________________________________________________________
dropout_3 (Dropout) (None, 128) 0
_________________________________________________________________
dense_1 (Dense) (None, 5) 645
_________________________________________________________________
activation_1 (Activation) (None, 5) 0
=================================================================
Total params: 20,223,927
Trainable params: 223,877
Non-trainable params: 20,000,050
_________________________________________________________________
###Markdown
Compile the Model As usual, after creating your model in Keras, you need to compile it and define what loss, optimizer and metrics you want to use. Compile your model using `categorical_crossentropy` loss, `adam` optimizer and `['accuracy']` metrics:
###Code
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
2.5 - Train the Model It's time to train your model! Your Emojifier-V2 `model` takes as input an array of shape (`m`, `max_len`) and outputs probability vectors of shape (`m`, `number of classes`). Thus, you have to convert X_train (array of sentences as strings) to X_train_indices (array of sentences as list of word indices), and Y_train (labels as indices) to Y_train_oh (labels as one-hot vectors).
###Code
X_train_indices = sentences_to_indices(X_train, word_to_index, maxLen)
Y_train_oh = convert_to_one_hot(Y_train, C = 5)
###Output
_____no_output_____
###Markdown
Fit the Keras model on `X_train_indices` and `Y_train_oh`, using `epochs = 50` and `batch_size = 32`.
###Code
model.fit(X_train_indices, Y_train_oh, epochs = 50, batch_size = 32, shuffle=True)
###Output
Epoch 1/50
5/5 [==============================] - 0s 37ms/step - loss: 1.5575 - accuracy: 0.3258
Epoch 2/50
5/5 [==============================] - 0s 39ms/step - loss: 1.5033 - accuracy: 0.3636
Epoch 3/50
5/5 [==============================] - 0s 37ms/step - loss: 1.4834 - accuracy: 0.3561
Epoch 4/50
5/5 [==============================] - 0s 36ms/step - loss: 1.3908 - accuracy: 0.4394
Epoch 5/50
5/5 [==============================] - 0s 34ms/step - loss: 1.2709 - accuracy: 0.5833
Epoch 6/50
5/5 [==============================] - 0s 23ms/step - loss: 1.1592 - accuracy: 0.5909
Epoch 7/50
5/5 [==============================] - 0s 24ms/step - loss: 1.1091 - accuracy: 0.5227
Epoch 8/50
5/5 [==============================] - 0s 35ms/step - loss: 0.9558 - accuracy: 0.6136
Epoch 9/50
5/5 [==============================] - 0s 34ms/step - loss: 0.8462 - accuracy: 0.6742
Epoch 10/50
5/5 [==============================] - 0s 23ms/step - loss: 0.7825 - accuracy: 0.6970
Epoch 11/50
5/5 [==============================] - 0s 24ms/step - loss: 0.7329 - accuracy: 0.7197
Epoch 12/50
5/5 [==============================] - 0s 35ms/step - loss: 0.7485 - accuracy: 0.7348
Epoch 13/50
5/5 [==============================] - 0s 22ms/step - loss: 0.6112 - accuracy: 0.7727
Epoch 14/50
5/5 [==============================] - 0s 24ms/step - loss: 0.6173 - accuracy: 0.7727
Epoch 15/50
5/5 [==============================] - 0s 24ms/step - loss: 0.4984 - accuracy: 0.8333
Epoch 16/50
5/5 [==============================] - 0s 33ms/step - loss: 0.5170 - accuracy: 0.8106
Epoch 17/50
5/5 [==============================] - 0s 22ms/step - loss: 0.4145 - accuracy: 0.8485
Epoch 18/50
5/5 [==============================] - 0s 24ms/step - loss: 0.3879 - accuracy: 0.8788
Epoch 19/50
5/5 [==============================] - 0s 34ms/step - loss: 0.4071 - accuracy: 0.8636
Epoch 20/50
5/5 [==============================] - 0s 22ms/step - loss: 0.3873 - accuracy: 0.8561
Epoch 21/50
5/5 [==============================] - 0s 24ms/step - loss: 0.3364 - accuracy: 0.8636
Epoch 22/50
5/5 [==============================] - 0s 35ms/step - loss: 0.4438 - accuracy: 0.8258
Epoch 23/50
5/5 [==============================] - 0s 22ms/step - loss: 0.4873 - accuracy: 0.8788
Epoch 24/50
5/5 [==============================] - 0s 23ms/step - loss: 0.2766 - accuracy: 0.9091
Epoch 25/50
5/5 [==============================] - 0s 24ms/step - loss: 0.3825 - accuracy: 0.8409
Epoch 26/50
5/5 [==============================] - 0s 35ms/step - loss: 0.2347 - accuracy: 0.9242
Epoch 27/50
5/5 [==============================] - 0s 22ms/step - loss: 0.2216 - accuracy: 0.9318
Epoch 28/50
5/5 [==============================] - 0s 24ms/step - loss: 0.1907 - accuracy: 0.9394
Epoch 29/50
5/5 [==============================] - 0s 23ms/step - loss: 0.1545 - accuracy: 0.9394
Epoch 30/50
5/5 [==============================] - 0s 34ms/step - loss: 0.1222 - accuracy: 0.9545
Epoch 31/50
5/5 [==============================] - 0s 22ms/step - loss: 0.1229 - accuracy: 0.9621
Epoch 32/50
5/5 [==============================] - 0s 23ms/step - loss: 0.0905 - accuracy: 0.9621
Epoch 33/50
5/5 [==============================] - 0s 34ms/step - loss: 0.0831 - accuracy: 0.9621
Epoch 34/50
5/5 [==============================] - 0s 34ms/step - loss: 0.0772 - accuracy: 0.9848
Epoch 35/50
5/5 [==============================] - 0s 23ms/step - loss: 0.0800 - accuracy: 0.9773
Epoch 36/50
5/5 [==============================] - 0s 23ms/step - loss: 0.1515 - accuracy: 0.9470
Epoch 37/50
5/5 [==============================] - 0s 35ms/step - loss: 0.1038 - accuracy: 0.9621
Epoch 38/50
5/5 [==============================] - 0s 33ms/step - loss: 0.1981 - accuracy: 0.9318
Epoch 39/50
5/5 [==============================] - 0s 23ms/step - loss: 0.1496 - accuracy: 0.9470
Epoch 40/50
5/5 [==============================] - 0s 24ms/step - loss: 0.2673 - accuracy: 0.9091
Epoch 41/50
5/5 [==============================] - 0s 35ms/step - loss: 0.2034 - accuracy: 0.9167
Epoch 42/50
5/5 [==============================] - 0s 23ms/step - loss: 0.2540 - accuracy: 0.9167
Epoch 43/50
5/5 [==============================] - 0s 23ms/step - loss: 0.0868 - accuracy: 0.9773
Epoch 44/50
5/5 [==============================] - 0s 24ms/step - loss: 0.1083 - accuracy: 0.9697
Epoch 45/50
5/5 [==============================] - 0s 34ms/step - loss: 0.1317 - accuracy: 0.9621
Epoch 46/50
5/5 [==============================] - 0s 22ms/step - loss: 0.0907 - accuracy: 0.9773
Epoch 47/50
5/5 [==============================] - 0s 24ms/step - loss: 0.0809 - accuracy: 0.9848
Epoch 48/50
5/5 [==============================] - 0s 34ms/step - loss: 0.0597 - accuracy: 0.9773
Epoch 49/50
5/5 [==============================] - 0s 22ms/step - loss: 0.0341 - accuracy: 1.0000
Epoch 50/50
5/5 [==============================] - 0s 24ms/step - loss: 0.0503 - accuracy: 0.9848
###Markdown
Your model should perform around **90% to 100% accuracy** on the training set. Exact model accuracy may vary! Run the following cell to evaluate your model on the test set:
###Code
X_test_indices = sentences_to_indices(X_test, word_to_index, max_len = maxLen)
Y_test_oh = convert_to_one_hot(Y_test, C = 5)
loss, acc = model.evaluate(X_test_indices, Y_test_oh)
print()
print("Test accuracy = ", acc)
###Output
2/2 [==============================] - 0s 3ms/step - loss: 0.5385 - accuracy: 0.8214
Test accuracy = 0.8214285969734192
###Markdown
You should get a test accuracy between 80% and 95%. Run the cell below to see the mislabelled examples:
###Code
# This code allows you to see the mislabelled examples
C = 5
y_test_oh = np.eye(C)[Y_test.reshape(-1)]
X_test_indices = sentences_to_indices(X_test, word_to_index, maxLen)
pred = model.predict(X_test_indices)
for i in range(len(X_test)):
x = X_test_indices
num = np.argmax(pred[i])
if(num != Y_test[i]):
print('Expected emoji:'+ label_to_emoji(Y_test[i]) + ' prediction: '+ X_test[i] + label_to_emoji(num).strip())
###Output
Expected emoji:😄 prediction: he got a very nice raise ❤️
Expected emoji:😄 prediction: she got me a nice present ❤️
Expected emoji:😞 prediction: work is hard 😄
Expected emoji:😞 prediction: This girl is messing with me ❤️
Expected emoji:😞 prediction: work is horrible 😄
Expected emoji:🍴 prediction: any suggestions for dinner 😄
Expected emoji:😄 prediction: you brighten my day ❤️
Expected emoji:😞 prediction: she is a bully ❤️
Expected emoji:😞 prediction: My life is so boring ❤️
Expected emoji:😄 prediction: will you be my valentine ❤️
###Markdown
Now you can try it on your own example! Write your own sentence below:
###Code
# Change the sentence below to see your prediction. Make sure all the words are in the Glove embeddings.
x_test = np.array(['I cannot play'])
X_test_indices = sentences_to_indices(x_test, word_to_index, maxLen)
print(x_test[0] +' '+ label_to_emoji(np.argmax(model.predict(X_test_indices))))
###Output
I cannot play ⚾
|
text-code-en.ipynb | ###Markdown
Query that returns both text (name) and code (ID) from JSON-stat Example: HS-codes in foreign trade Import libraries Use the [pyjstat](https://pypi.org/project/pyjstat/) library for JSON-stat and pandas. Pandas is loaded as part of pyjstat.
###Code
from pyjstat import pyjstat
import requests
###Output
_____no_output_____
###Markdown
URL with the table's metadata, where we post the query
###Code
tabid = "08799" #
lang = "en" # language code can also be "no"
POST_URL = "https://data.ssb.no/api/v0/" + lang + "/table/" + tabid # 'https://data.ssb.no/api/v0/en/table/08799'
###Output
_____no_output_____
###Markdown
The query can be taken from the console: imports/exports for all commodity codes (HS) with the US in May 2020, ca. 65,000 cells. The max limit for one query in PxWebApi is 800,000 cells, incl. empty cells.
###Code
json_q = {
"query": [
{
"code": "Varekoder",
"selection": {
"filter": "all",
"values": [
"*"
]
}
},
{
"code": "ImpEks",
"selection": {
"filter": "item",
"values": [
"1",
"2"
]
}
},
{
"code": "Land",
"selection": {
"filter": "item",
"values": [
"US"
]
}
},
{
"code": "ContentsCode",
"selection": {
"filter": "item",
"values": [
"Mengde1",
"Verdi",
"Mengde2"
]
}
},
{
"code": "Tid",
"selection": {
"filter": "item",
"values": [
"2020M05"
]
}
}
],
"response": {
"format": "json-stat2"
}
}
###Output
_____no_output_____
###Markdown
Post query
###Code
res = requests.post(POST_URL, json=json_q)
###Output
_____no_output_____
###Markdown
Read the JSON-stat result using the pyjstat library and save it as dataset ds.
###Code
ds = pyjstat.Dataset.read(res.text)
type(ds)
###Output
_____no_output_____
###Markdown
Check dataset ds
###Code
ds
###Output
_____no_output_____
###Markdown
Get some main metadata from the JSON-stat dataset
###Code
title = ds['label']
print(title)
###Output
08799: External trade in goods, by commodity number, imports/exports, country, contents and month
###Markdown
Last update as GMT
###Code
last_update = ds['updated']
print(last_update)
###Output
2021-04-13T22:00:00Z
###Markdown
Get source
###Code
source = ds['source']
print(source)
###Output
Statistics Norway
###Markdown
Role gives some shortcuts
###Code
ds_roles = ds['role']
print(ds_roles)
###Output
OrderedDict([('time', ['Tid']), ('metric', ['ContentsCode']), ('geo', ['Land'])])
###Markdown
Make dataframes We have to make two dataframes, one with text and one with ID. Pyjstat returns 'label' by default.
###Code
hstrade = ds.write('dataframe')
hstrade_id = ds.write('dataframe', naming='id')
hstrade.head()
hstrade_id.head()
###Output
_____no_output_____
###Markdown
Make a new column with ID and label concatenated
###Code
hstrade['hstrade_combi'] = hstrade_id['Varekoder'] + ' ' + hstrade['commodity number']
hstrade.columns
###Output
_____no_output_____
###Markdown
Make a new dataframe with only the columns we want, in a new order. Note the double brackets [[ ]]
###Code
hstrade_new = hstrade[['hstrade_combi', 'imports/exports', 'country', 'contents', 'month',
'value']]
hstrade_new.tail()
###Output
_____no_output_____ |
walkmate-tf-multilabel-paper-boost-sense.ipynb | ###Markdown
Using Tensorflow to implement walkmate coach action
###Code
%matplotlib inline
import numpy as np # linear algebra
import seaborn as sns
sns.set(style='whitegrid')
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import tensorflow as tf
import random
import math
from tensorflow import keras
from tensorflow.keras import layers
from subprocess import check_output
print(check_output(["ls", "./input"]).decode("utf8"))
###Output
_____no_output_____
###Markdown
**Step 1: Read the data**
###Code
walkmate = pd.read_csv('./input/Walkmate_Boost_Sense.csv')
walkmate.shape
walkmate.head()
###Output
_____no_output_____
###Markdown
I want to do a multi-class classification Predicting the coach action to boost: Motivation vs Ability vs Propensity vs Other vs None
###Code
#sns.pairplot(walkmate[['age', 'gender', 'baseline_steps', 'block_action']], diag_kind='kde')
walkmate['step_count_prev'].replace(np.nan, 0, inplace=True)
walkmate['step_count_prev_1'].replace(np.nan, 0, inplace=True)
walkmate['step_count_prev_2'].replace(np.nan, 0, inplace=True)
walkmate['step_count_prev_3'].replace(np.nan, 0, inplace=True)
walkmate['step_count_prev_4'].replace(np.nan, 0, inplace=True)
walkmate.isnull().values.any()
list(walkmate.columns)
###Output
_____no_output_____
###Markdown
**Step 0: Sub-select data for training** * Drop irrelevant rows * Drop irrelevant columns
###Code
walkmate.drop(walkmate[walkmate['coach_type'] == 'ASSISTANT'].index, inplace = True)
walkmate.drop(walkmate[ (walkmate['Attendance'] == 'AS') | (walkmate['Attendance'] == 'A')].index, inplace = True)
walkmate.drop(walkmate[walkmate['block_reward'] == 'Unsuccessful'].index, inplace = True)
walkmate.shape
###Output
_____no_output_____
###Markdown
**Step 3: Split data based on participant_id** * trainset: 80% * testset: 20%
###Code
# set seed for numpy and tensorflow
# set for reproducible results
seed = 5
test_portion = 0.2
titration_portion = 1.0
np.random.seed(seed)
tf.random.set_seed(seed)
all_users = list(walkmate[walkmate['coach_type']=='DIRECT']['participant_id'].unique())
print(all_users)
test_users = random.sample(all_users, int(test_portion * len(all_users)))
print(test_users)
walkmate.drop(labels=[ 'block_reward', 'coach_id', 'coach_type', 'version_id', 'conv_turn', 'duration', 'promptness', 'achieved_step_count', 'difference_achieved_baseline', 'difference_goal_achieved', 'sense_action_str_m', 'sense_action_str_a', 'sense_action_str_t', 'sense_action', 'States', 'unknown', 'Boost_Sense', 'Boost', 'Attendance'], axis=1, inplace = True)
walkmate.describe()
test_dataframe = walkmate.loc[walkmate['participant_id'].isin(test_users)]
train_dataframe = walkmate.drop(test_dataframe.index)
train_dataframe = train_dataframe.drop(train_dataframe.sample(frac=(1-titration_portion)).index)
print(
"Using %d samples for training and %d for validation"
% (len(train_dataframe), len(test_dataframe))
)
print(train_dataframe.columns)
###Output
_____no_output_____
###Markdown
**Step 4: Convert the dataframes into TensorFlow datasets**
###Code
def dataframe_to_dataset(dataframe):
dataframe = dataframe.copy()
dataframe.pop("participant_id")
labels_m = dataframe.pop("sense_action_m")
labels_a = dataframe.pop("sense_action_a")
labels_t = dataframe.pop("sense_action_t")
labels_b = dataframe.pop("boost_action")
labels_o = dataframe.pop("other_action")
labels = pd.concat([labels_m, labels_a, labels_t, labels_b, labels_o], axis = 1)
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
ds = ds.shuffle(buffer_size=len(dataframe))
return ds
train_ds = dataframe_to_dataset(train_dataframe)
test_ds = dataframe_to_dataset(test_dataframe)
for x, y in train_ds.take(1):
print("Input:", x)
print("Target:", y)
###Output
_____no_output_____
###Markdown
**Step 5: Build the model framework**
###Code
TARGET_FEATURE_LABELS = ["a", "m", "n", "o", "b"]
NUMERIC_FEATURE_NAMES = [
"study_day_id",
"block_id",
"step_count_prev",
"step_count_prev_1",
"step_count_prev_2",
"step_count_prev_3",
"step_count_prev_4",
"age_in_years",
"baseline_step",
"goal_steps",
"conv_turn_avg",
"duration_avg",
"week",
"binary_age",
]
CATEGORICAL_FEATURES_WITH_VOCABULARY = {
"user_state_m": list(walkmate["user_state_m"].unique()),
"user_state_a": list(walkmate["user_state_a"].unique()),
"user_state_t": list(walkmate["user_state_t"].unique()),
"perceived_state_a": list(walkmate["perceived_state_a"].unique()),
"perceived_state_m": list(walkmate["perceived_state_m"].unique()),
"Gender": list(walkmate["Gender"].unique()),
}
CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURES_WITH_VOCABULARY.keys())
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES
#COLUMN_DEFAULTS = [
## [0] if feature_name in NUMERIC_FEATURE_NAMES + [TARGET_FEATURE_NAME] else ["NA"]
# for feature_name in CSV_HEADER
#]
NUM_CLASSES = 5
print(len(FEATURE_NAMES))
from sklearn.metrics import roc_auc_score
def auroc(y_true, y_pred):
return tf.numpy_function(roc_auc_score, (y_true, y_pred), tf.double)
learning_rate = 0.1
dropout_rate = 0.01
batch_size = 64
num_epochs = 30
hidden_units = [16, 16]
def custom_loss(y_true, y_pred):
loss_m = tf.keras.losses.binary_crossentropy(y_true[:,0], y_pred[:,0], from_logits=False)
loss_a = tf.keras.losses.binary_crossentropy(y_true[:,1], y_pred[:,1], from_logits=False)
loss_t = tf.keras.losses.binary_crossentropy(y_true[:,2], y_pred[:,2], from_logits=False)
loss_b = tf.keras.losses.binary_crossentropy(y_true[:,3], y_pred[:,3], from_logits=False)
loss_o = tf.keras.losses.binary_crossentropy(y_true[:,4], y_pred[:,4], from_logits=False)
    sense_loss = loss_m + loss_a + loss_t  # tf.add only takes two tensors, so sum the three sense losses directly
non_sense_loss = tf.add(loss_b, loss_o)
return tf.add(sense_loss, non_sense_loss)
def my_multi_label_metric_fn(y_true, y_pred):
y_pred_binary = tf.math.greater(y_pred, tf.constant([0.5]))
y_true_binary = tf.dtypes.cast(y_true, tf.bool)
action_accuracy = tf.math.reduce_any(tf.math.logical_and(y_true_binary, y_pred_binary), axis=1)
non_action_accuracy = tf.math.reduce_all(tf.math.logical_or(tf.math.logical_and(tf.math.logical_not(y_true_binary), tf.math.logical_not(y_pred_binary)), y_true_binary), axis=1)
weights = tf.divide(tf.cast(tf.math.count_nonzero(y_true, axis=1), tf.float32), tf.constant(3.0))
weighted_accuracy = tf.add(tf.math.multiply(tf.cast(action_accuracy, tf.float32), tf.cast(weights, tf.float32)), tf.math.multiply(tf.cast(non_action_accuracy, tf.float32), tf.math.subtract(tf.constant(1.0), tf.cast(weights, tf.float32))))
return tf.reduce_mean(weighted_accuracy) # Note the `axis=-1`
def my_metric_fn(y_true, y_pred):
m = tf.keras.metrics.BinaryAccuracy()
m.update_state(y_true, y_pred)
return tf.reduce_mean(m.result()) # Note the `axis=-1`
def my_metric_motivation_fn(y_true, y_pred):
m_m = tf.keras.metrics.BinaryAccuracy()
m_m.update_state(y_true[:, 0], y_pred[:, 0])
return tf.reduce_mean(m_m.result()) # Note the `axis=-1`
def my_metric_ability_fn(y_true, y_pred):
m_a = tf.keras.metrics.BinaryAccuracy()
m_a.update_state(y_true[:, 1], y_pred[:, 1])
return tf.reduce_mean(m_a.result())
def my_metric_trigger_fn(y_true, y_pred):
m_p = tf.keras.metrics.BinaryAccuracy()
m_p.update_state(y_true[:, 2], y_pred[:, 2])
return tf.reduce_mean(m_p.result())
def my_metric_boost_fn(y_true, y_pred):
m_p = tf.keras.metrics.BinaryAccuracy()
m_p.update_state(y_true[:, 3], y_pred[:, 3])
return tf.reduce_mean(m_p.result())
def my_metric_other_fn(y_true, y_pred):
m_p = tf.keras.metrics.BinaryAccuracy()
m_p.update_state(y_true[:, 4], y_pred[:, 4])
return tf.reduce_mean(m_p.result())
def run_experiment(model):
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
loss=custom_loss,
metrics=[my_metric_fn, my_metric_motivation_fn, my_metric_ability_fn, my_metric_trigger_fn, my_metric_boost_fn, my_metric_other_fn, tf.keras.metrics.AUC(multi_label=True, num_labels=NUM_CLASSES)],
run_eagerly=True,
)
train_dataset = train_ds.batch(batch_size)
test_dataset = test_ds.batch(batch_size)
print("Start training the model...")
history = model.fit(train_dataset, epochs=num_epochs)
print("Model training finished")
#_, train_accuracy = model.evaluate(test_dataset, verbose=0)
_, accuracy, mm, ma, mp, mb, mo, auc = model.evaluate(test_dataset, verbose=0)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
print("Motivation : %f Ability : %f Trigger : %f Boost : %f Others : %f", mm, ma, mp, mb, mo)
print("auc ", auc)
# print(f"Test accuracy: {round(mlc_accuracy * 100, 2)}%")
def create_model_inputs():
inputs = {}
for feature_name in FEATURE_NAMES:
if feature_name in NUMERIC_FEATURE_NAMES:
inputs[feature_name] = layers.Input(
name=feature_name, shape=(), dtype=tf.float32
)
else:
inputs[feature_name] = layers.Input(
name=feature_name, shape=(), dtype=tf.string
)
return inputs
from tensorflow.keras.layers import StringLookup
def encode_inputs(inputs, use_embedding=False):
encoded_features = []
for feature_name in inputs:
if feature_name in CATEGORICAL_FEATURE_NAMES:
vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
# Create a lookup to convert string values to an integer indices.
# Since we are not using a mask token nor expecting any out of vocabulary
# (oov) token, we set mask_token to None and num_oov_indices to 0.
lookup = StringLookup(
vocabulary=vocabulary,
mask_token=None,
num_oov_indices=0,
output_mode="int" if use_embedding else "binary",
)
if use_embedding:
# Convert the string input values into integer indices.
encoded_feature = lookup(inputs[feature_name])
embedding_dims = int(math.sqrt(len(vocabulary)))
# Create an embedding layer with the specified dimensions.
embedding = layers.Embedding(
input_dim=len(vocabulary), output_dim=embedding_dims
)
# Convert the index values to embedding representations.
encoded_feature = embedding(encoded_feature)
else:
# Convert the string input values into a one hot encoding.
encoded_feature = lookup(tf.expand_dims(inputs[feature_name], -1))
else:
# Use the numerical features as-is.
encoded_feature = tf.expand_dims(inputs[feature_name], -1)
encoded_features.append(encoded_feature)
all_features = layers.concatenate(encoded_features)
return all_features
from tensorflow.keras import layers
def create_baseline_lr_model():
inputs = create_model_inputs()
features = encode_inputs(inputs)
outputs = layers.Dense(units=NUM_CLASSES, activation="sigmoid")(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
baseline_lr_model = create_baseline_lr_model()
keras.utils.plot_model(baseline_lr_model, show_shapes=True, rankdir="LR")
run_experiment(baseline_lr_model)
for layer in baseline_lr_model.layers:
print(layer.weights)
from tensorflow.keras import layers
def create_baseline_model():
inputs = create_model_inputs()
features = encode_inputs(inputs)
for units in hidden_units:
features = layers.Dense(units)(features)
features = layers.BatchNormalization()(features)
features = layers.ReLU()(features)
features = layers.Dropout(dropout_rate)(features)
outputs = layers.Dense(units=NUM_CLASSES, activation="sigmoid")(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
baseline_model = create_baseline_model()
#keras.utils.plot_model(baseline_model, show_shapes=True, rankdir="LR")
run_experiment(baseline_model)
# NOTE: this evaluation cell assumes the wide & deep model (defined and trained further below)
# has already been created and fitted.
from sklearn.metrics import roc_curve, auc

ds_test_batch = test_ds.batch(32)
test_dataset = test_ds.batch(batch_size)  # batched test set, also used by predict/evaluate below
pred_array = np.empty((0, 5), np.float32)
truth_array = np.empty((0, 5), np.float32)
for (x, y) in ds_test_batch:
    prediction = wide_and_deep_model.predict(x)           # collect predictions batch by batch
    pred_array = np.append(pred_array, prediction, axis=0)
    truth_array = np.append(truth_array, y, axis=0)
y_pred_keras = wide_and_deep_model.predict(test_dataset, batch_size=300)
print(wide_and_deep_model.evaluate(test_dataset))
fpr_keras_m, tpr_keras_m, thresholds_keras_m = roc_curve(truth_array[:,0], pred_array[:,0])
auc_keras_m = auc(fpr_keras_m , tpr_keras_m )
fpr_keras_a, tpr_keras_a, thresholds_keras_a = roc_curve(truth_array[:,1], pred_array[:,1])
auc_keras_a = auc(fpr_keras_a , tpr_keras_a)
fpr_keras_t, tpr_keras_t, thresholds_keras_t = roc_curve(truth_array[:,2], pred_array[:,2])
auc_keras_t = auc(fpr_keras_t , tpr_keras_t)
fpr_keras_b, tpr_keras_b, thresholds_keras_b = roc_curve(truth_array[:,3], pred_array[:,3])
auc_keras_b = auc(fpr_keras_b, tpr_keras_b)
fpr_keras_o, tpr_keras_o, thresholds_keras_o = roc_curve(truth_array[:,4], pred_array[:,4])
auc_keras_o = auc(fpr_keras_o , tpr_keras_o )
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras_m, tpr_keras_m, label='Sense M (area = {:.3f})'.format(auc_keras_m))
plt.plot(fpr_keras_a, tpr_keras_a, label='Sense A (area = {:.3f})'.format(auc_keras_a))
plt.plot(fpr_keras_t, tpr_keras_t, label='Sense P (area = {:.3f})'.format(auc_keras_t))
plt.plot(fpr_keras_b, tpr_keras_b, label='Boost (area = {:.3f})'.format(auc_keras_b))
plt.plot(fpr_keras_o, tpr_keras_o, label='Others (area = {:.3f})'.format(auc_keras_o))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve for sense action')
plt.legend(loc='best')
plt.savefig('direct_sense_coaching.pdf')
plt.savefig('direct_coaching.png')  # save before plt.show(), which clears the current figure in most backends
plt.show()
import seaborn as sns
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
sns.lineplot(fpr_keras_m, tpr_keras_m, label='Motivation (area = {:.3f})'.format(auc_keras_m))
# plt.plot(fpr_keras_a, tpr_keras_a, label='Ability (area = {:.3f})'.format(auc_keras_a))
# plt.plot(fpr_keras_t, tpr_keras_t, label='Propensity (area = {:.3f})'.format(auc_keras_t))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
def create_wide_and_deep_model():
inputs = create_model_inputs()
wide = encode_inputs(inputs)
wide = layers.BatchNormalization()(wide)
deep = encode_inputs(inputs, use_embedding=True)
for units in hidden_units:
deep = layers.Dense(units)(deep)
deep = layers.BatchNormalization()(deep)
deep = layers.ReLU()(deep)
deep = layers.Dropout(dropout_rate)(deep)
merged = layers.concatenate([wide, deep])
outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(merged)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
wide_and_deep_model = create_wide_and_deep_model()
#keras.utils.plot_model(wide_and_deep_model, show_shapes=True, rankdir="LR")
run_experiment(wide_and_deep_model)
def create_deep_and_cross_model():
inputs = create_model_inputs()
x0 = encode_inputs(inputs, use_embedding=True)
cross = x0
for _ in hidden_units:
units = cross.shape[-1]
x = layers.Dense(units)(cross)
cross = x0 * x + cross
cross = layers.BatchNormalization()(cross)
deep = x0
for units in hidden_units:
deep = layers.Dense(units)(deep)
deep = layers.BatchNormalization()(deep)
deep = layers.ReLU()(deep)
deep = layers.Dropout(dropout_rate)(deep)
merged = layers.concatenate([cross, deep])
outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(merged)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
deep_and_cross_model = create_deep_and_cross_model()
#keras.utils.plot_model(deep_and_cross_model, show_shapes=True, rankdir="LR")
run_experiment(deep_and_cross_model)
GROWING_STRATEGY = "BEST_FIRST_GLOBAL"
NUM_TREES = 250
MIN_EXAMPLES = 6
MAX_DEPTH = 5
SUBSAMPLE = 0.65
SAMPLING_METHOD = "RANDOM"
VALIDATION_RATIO = 0.1
pip install -U tensorflow_decision_forests
import tensorflow_decision_forests as tfdf
def specify_feature_usages(inputs):
feature_usages = []
for feature_name in inputs:
if inputs[feature_name].dtype == tf.dtypes.float32:
feature_usage = tfdf.keras.FeatureUsage(
name=feature_name, semantic=tfdf.keras.FeatureSemantic.NUMERICAL
)
else:
feature_usage = tfdf.keras.FeatureUsage(
name=feature_name, semantic=tfdf.keras.FeatureSemantic.CATEGORICAL
)
feature_usages.append(feature_usage)
return feature_usages
def create_gbt_model():
gbt_model = tfdf.keras.GradientBoostedTreesModel(
features=specify_feature_usages(create_model_inputs()),
exclude_non_specified_features=True,
growing_strategy=GROWING_STRATEGY,
num_trees=NUM_TREES,
max_depth=MAX_DEPTH,
min_examples=MIN_EXAMPLES,
subsample=SUBSAMPLE,
validation_ratio=VALIDATION_RATIO,
task=tfdf.keras.Task.CLASSIFICATION,
loss="DEFAULT",
)
gbt_model.compile(metrics=[keras.metrics.BinaryAccuracy(name="accuracy")])
return gbt_model
def prepare_sample(features, target):
for feature_name in features:
if feature_name in CATEGORICAL_FEATURES_WITH_VOCABULARY:
if features[feature_name].dtype != tf.dtypes.string:
# Convert categorical feature values to string.
features[feature_name] = tf.strings.as_string(features[feature_name])
return features, target
def run_experiment(model, train_data, test_data, num_epochs=1, batch_size=None):
train_dataset = tfdf.keras.pd_dataframe_to_tf_dataset(
train_data, label='block_action_m'
).map(prepare_sample, num_parallel_calls=tf.data.AUTOTUNE)
test_dataset = tfdf.keras.pd_dataframe_to_tf_dataset(
test_data, label='block_action_m'
).map(prepare_sample, num_parallel_calls=tf.data.AUTOTUNE)
#train_tfdf = train_data.map(prepare_sample, num_parallel_calls=tf.data.AUTOTUNE)
#test_tfdf = test_data.map(prepare_sample, num_parallel_calls=tf.data.AUTOTUNE)
model.fit(train_dataset, epochs=num_epochs, batch_size=batch_size, verbose=0)
_, accuracy = model.evaluate(test_dataset, verbose=2)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
gbt_model = create_gbt_model()
#keras.utils.plot_model(gbt_model, show_shapes=True, rankdir="LR")
run_experiment(gbt_model, train_dataframe, test_dataframe)
print(gbt_model.summary())
###Output
_____no_output_____ |
_notebooks/2020-11-15-MNIST.ipynb | ###Markdown
An introduction to Pytorch and Fastai v2 on the MNIST dataset.> Building a digit classifier with deep learning.- toc:true- branch: master- badges: true- comments: true- author: Jonathan Sands- categories: [deep learning, fastai, pytorch, vision, classifier]- image: images/MNIST.jpeg How ?We will build a deep learning model for digit classification on the **MNIST dataset** using the **Pytorch library** first and then using the **fastai library** based on Pytorch to showcase how easy it makes building models. Who does this blog post concern ?This is addressed to people that **have basic knowledge about deep learning and want to start building models**.I will explain some aspects of deep learning but don't expect a full course starting scratch! Type of model builtWe won't create a brand new architecture for our neural net. Actually, in the first part using Pytorch, we will only include **linear layers** with some non-linearity between them. **No convolution** etc.. We aren't aiming at building a state of the art model. Why ?I made this as part of the homework recommendation from the [Deep Learning for Coders with Fastai and PyTorch](https://www.amazon.com/Deep-Learning-Coders-fastai-PyTorch/dp/1492045527) book I am currently reading. **Go check it out !** Downloading the dataThe **fastai library provides handy functionalities** to download data and already has some urls for some famous datasets.
###Code
from fastai.vision.all import *
import torchvision
import torchvision.transforms as transforms
from livelossplot import PlotLosses
URLs.MNIST
###Output
_____no_output_____
###Markdown
Using fastai's `untar_data` procedure, we will **download** and **decompress** the data from the above URL in one go. The data will **only be downloaded the first time**. Take a look at the [documentation](https://docs.fast.ai/data.externaluntar_data) if you want to learn more.
###Code
path = untar_data(URLs.MNIST, dest="/workspace/data")
Path.BASE_PATH = path
path.ls()
###Output
_____no_output_____
###Markdown
As you can see, the data was **already split into a training and testing dataset** for our convenience! Let's take a peek into what is inside.
###Code
(path/"training").ls()
###Output
_____no_output_____
###Markdown
We have **a different directory for every digit**, each of them containing images *(see below)* of their corresponding digit. This makes labeling easy. The **label of each image is the name of its parent directory**!
###Code
(path/"training/1").ls()
###Output
_____no_output_____
###Markdown
For example, the *1* directory contains 6742 images. One is displayed below.
###Code
image = Image.open((path/"training/1").ls()[0])
image
image.size
image.mode
###Output
_____no_output_____
###Markdown
This image and all the others in the data we just downloaded are **28x28 grayscale images** ('L' mode means gray-scale). The pytorch way Data preparation Making pytorch datasets
###Code
transform = transforms.Compose(
[transforms.Grayscale(), transforms.ToTensor(), transforms.Normalize([0.5], [0.5])]
)
###Output
_____no_output_____
###Markdown
Above are the **transformations** we will make to each of the images when creating our Pytorch datasets.**Step 1**: Converting into a **grayscale image**, i.e. fusing the RGB color channels into a grayscale one (from what would be a \[3, 28, 28\] tensor to a \[1, 28, 28\]).> Tip: We need to do this because the `loader` parameter of `ImageFolder` (see next cell) loads 3 channels even if the original image only has one. I couldn't bother creating a custom loader so this does the trick.**Step 2**: Converting the grayscale image (with pixel values in the range \[0, 255\] into a 3 dimensional \[1, 28, 28\] **pytorch tensor** (with values in the range \[0, 1\]).**Step 3**: We normalize with mean = 0.5 and std = 0.5 to get values from pixels in the range \[-1, 1\]. (pixel = (image - mean) / std maps 0 to -1 and 1 to 1).> Note: The argument of centering around 0 is usually held for activation functions inside the network (which we aren't doing because we are using ReLU) but I did it for the input layer because I felt like it. This is **not the same as standardizing** but still gives a zero centered range. You can read [this](https://datascience.stackexchange.com/questions/54296/should-input-images-be-normalized-to-1-to-1-or-0-to-1) and the links given for more info.
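If you want to convince yourself of that mapping, here is a tiny sketch (not part of the original pipeline) that mimics what `ToTensor` and `Normalize([0.5], [0.5])` do to a few raw pixel values:
```python
import torch
# Hypothetical raw pixel intensities in [0, 255]
raw = torch.tensor([0.0, 128.0, 255.0])
scaled = raw / 255.0               # what ToTensor does to the pixel range
normalized = (scaled - 0.5) / 0.5  # what Normalize([0.5], [0.5]) does
print(normalized)                  # tensor([-1.0000, 0.0039, 1.0000])
```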
###Code
# Dataset on all the images from the "training" folder
full_dataset = torchvision.datasets.ImageFolder((path/"training").as_posix(), transform = transform)
# Splitting the above dataset into a training and validation dataset
train_size = int(0.8 * len(full_dataset))
valid_size = len(full_dataset) - train_size
training_set, validation_set = torch.utils.data.random_split(full_dataset, [train_size, valid_size])
# Dataset using the "testing" folder
testing_set = torchvision.datasets.ImageFolder((path/"testing").as_posix(), transform = transform)
###Output
_____no_output_____
###Markdown
We just built 3 datasets. A **training** dataset, a **validation** dataset and a **testing** dataset. Our images in the "training" folder were divided randomly into the training and validation datasets with a ratio of **80%** and **20%** of the images respectively.**Training dataset**: Used to calculate our gradients and update our weights using the loss obtained forwarding the data through the network.**Validation dataset**: Used to assess model performance on unseen data during the training. We tune our hyperparameters (learning rate, batch size, number of epochs, network structure etc.) to improve this performance.> Important: After some hyperparameter tuning, we may be satisfied with our model's performance with the validation dataset. But the thing is, we tuned these hyperparameters to fit the validation data. The measure of performance is biased because we adapted to the validation data.**Testing dataset**: Used to get a final, unbiased, performance assessment. This data wasn't seen during the whole model building process.> Warning: You can't go back and tune hyperparameters to improve this performance, because that would create the same problem as for the validation dataset. From datasets to dataloaders In pytorch, a *"Data loader combines a dataset and a sampler, and provides an iterable over the given dataset"*. Look at [the documentation](https://pytorch.org/docs/stable/data.html) to learn more.
###Code
bs = 64
###Output
_____no_output_____
###Markdown
The `bs` variable above corresponds to the **batch size**. This is the **number of observations forwarded at a time in our neural network** (and used to calculate our mean loss and then our gradients for the training).> Note: I chose batches of **64** because it seemed to work well for this application and my GPU isn't great so I might run out of memory with anything larger.
###Code
train_loader = torch.utils.data.DataLoader(training_set, batch_size=bs, shuffle=True)
validation_loader = torch.utils.data.DataLoader(validation_set, batch_size=bs)
dataloaders = {
"train": train_loader,
"validation": validation_loader
}
###Output
_____no_output_____
###Markdown
We created a training and a validation data loader we will iterate on during our building process. The `shuffle` argument is set to True for the training data loader, meaning we will **reshuffle the data at every epoch**. Training a neural network Deep learning is like making a dish. I like to see the neural network's architecture as the plates / the cutlery / cooking tools, the weights as the ingredients and the hyperparameters as the cooking time / temperature / seasoning etc. Creating the architecture Without the proper tools, it would be impossible to make the dish you want and for it to be good, even if you found all the ingredients that satisfy your needs.
###Code
pytorch_net = nn.Sequential(
nn.Flatten(),
nn.Linear(28*28, 128),
nn.ReLU(),
nn.Linear(128, 50),
nn.ReLU(),
nn.Linear(50,10),
nn.LogSoftmax(dim=1))
###Output
_____no_output_____
###Markdown
Here we chose a simple but good enough network architecture. It may not be state of the art but as you will see, it still performs quite well!**`Flatten`**: flattens our [1,28,28] tensor into a [1,784] tensor. Our model doesn't care if it was a square image to start with, it just sees numbers, and as long as the same pixel in our original image gets mapped to the same input variable (one of the 784 values) each time, our model will be able to learn. We won't be doing any spatial treatment (like convolution, pooling etc.), so we just start by turning our input tensor into a feature vector that will be used by our classifier.**`Linear`**: linear layer **with an additive bias** (`bias` parameter is set to `True` by default).> Important: This layer **can only learn linear relations** and stacking another one doesn't change this fact since a single linear layer is capable of representing any consecutive number of linear layers.**`ReLU`**: stands for *Rectified linear unit* and is an **activation function**, also called a **nonlinearity**. It replaces every negative number with 0 (see plot below). By adding a nonlinear function between each linear layer, they become somewhat **decoupled** from each other and can each do its own useful work. Meaning with nonlinearity between linear layers **we can now learn nonlinear relations**!**`LogSoftmax`**: applies log(Softmax(x)) to the last layer. Softmax maps all the values to [0, 1] so that they add up to 1 (a probability distribution). log(Softmax) maps these values to [-inf, 0].> Note: If you want to know why we use LogSoftmax instead of Softmax, you can read [this](https://deepdatascience.wordpress.com/2020/02/27/log-softmax-vs-softmax/). > Note: According to our above neural network structure, our first linear layer can construct 128 different features (each representing some different mix of pixels) and our second one (decoupled from the first) can learn 50!
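To make the shapes concrete, here is a small sketch (assuming the `pytorch_net` defined above) that pushes a fake batch through the network:
```python
# A fake batch of 64 grayscale 28x28 images
dummy = torch.randn(64, 1, 28, 28)
out = pytorch_net(dummy)
print(out.shape)              # torch.Size([64, 10]): one log-probability per digit class
print(out.exp().sum(dim=1))   # exponentiating gives probabilities that sum to ~1 per row
```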
###Code
#hide
def plot_function(f, tx=None, ty=None, title=None, min=-2, max=2, figsize=(6,4)):
x = torch.linspace(min,max)
fig,ax = plt.subplots(figsize=figsize)
ax.plot(x,f(x))
if tx is not None: ax.set_xlabel(tx)
if ty is not None: ax.set_ylabel(ty)
if title is not None: ax.set_title(title)
#hide_input
plot_function(F.relu, title="ReLU activation")
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
lr = 1e-2
nb_epoch = 77
###Output
_____no_output_____
###Markdown
Before moving on, we should define a bunch of variables.*"A `torch.device` is an object representing the device on which a `torch.Tensor` is or will be allocated"*. Head [here](https://pytorch.org/docs/stable/tensor_attributes.htmltorch.torch.device) for more info.Here we want to perform our computations on a GPU if it is available.`lr` is our **learning rate** hyperparameter representing the size of the step we take when applying SGD.`nb_epoch` is our **number of epochs**, meaning the number of complete passes through the training dataset.> Note: I found the values for these hyperparameters to work well via trial and error. They are probably not optimal
###Code
optimizer = torch.optim.SGD(pytorch_net.parameters(), lr=lr)
###Output
_____no_output_____
###Markdown
The `optimizer` object above will **handle the stochastic gradient descent (SGD) step for us**. We need to pass it our model's parameters (so it can step on them) and a learning rate.
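Under the hood, `optimizer.step()` is just the classic SGD update. A minimal sketch of the equivalent manual code (purely illustrative, and it assumes `loss.backward()` has already populated the gradients):
```python
# Hand-written equivalent of optimizer.step() followed by optimizer.zero_grad()
with torch.no_grad():
    for param in pytorch_net.parameters():
        param -= lr * param.grad   # w <- w - lr * dloss/dw
        param.grad.zero_()
```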
###Code
criterion = nn.NLLLoss()
###Output
_____no_output_____
###Markdown
We chose pytorch's `nn.NLLLoss()` for our **loss function**. It stands for *negative log likelihood loss* and is useful to train a classification problem with more than 2 classes. It **expects log-probabilities as input** for each class, which is our case after applying `LogSoftmax`.> Tip: Instead of applying a `LogSoftmax` layer in the last layer of our network and using `NLLLoss`, we could have used `CrossEntropyLoss` instead which is a loss that combines the two into one single class. Read [the doc](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) for more.
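Here is a tiny sketch with made-up numbers showing that `CrossEntropyLoss` on raw scores gives the same result as `LogSoftmax` followed by `NLLLoss`:
```python
# Made-up scores for a single example with 3 classes, true class = 0
logits = torch.tensor([[2.0, 0.5, -1.0]])
target = torch.tensor([0])
log_probs = nn.LogSoftmax(dim=1)(logits)
print(nn.NLLLoss()(log_probs, target))        # same value...
print(nn.CrossEntropyLoss()(logits, target))  # ...as cross entropy on the raw scores
```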
###Code
def train_model(model, criterion, optimizer, dataloaders, num_epochs=10):
liveloss = PlotLosses() # Live training plot generic API
model = model.to(device) # Moves and/or casts the parameters and buffers to device.
for epoch in range(num_epochs): # Number of passes through the entire training & validation datasets
logs = {}
for phase in ['train', 'validation']: # First train, then validate
if phase == 'train':
model.train() # Set the module in training mode
else:
model.eval() # Set the module in evaluation mode
running_loss = 0.0 # keep track of loss
running_corrects = 0 # count of carrectly classified inputs
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device) # Perform Tensor device conversion
labels = labels.to(device)
outputs = model(inputs) # forward pass through network
loss = criterion(outputs, labels) # Calculate loss
if phase == 'train':
optimizer.zero_grad() # Set all previously calculated gradients to 0
loss.backward() # Calculate gradients
optimizer.step() # Step on the weights using those gradient w -= gradient(w) * lr
_, preds = torch.max(outputs, 1) # Get model's predictions
running_loss += loss.detach() * inputs.size(0) # multiply mean loss by the number of elements
running_corrects += torch.sum(preds == labels.data) # add number of correct predictions to total
epoch_loss = running_loss / len(dataloaders[phase].dataset) # get the "mean" loss for the epoch
epoch_acc = running_corrects.float() / len(dataloaders[phase].dataset) # Get proportion of correct predictions
# Logging
prefix = ''
if phase == 'validation':
prefix = 'val_'
logs[prefix + 'log loss'] = epoch_loss.item()
logs[prefix + 'accuracy'] = epoch_acc.item()
liveloss.update(logs) # Update logs
liveloss.send() # draw, display stuff
###Output
_____no_output_____
###Markdown
We now have everything needed to cook our meal! The actual cooking takes place in the above function, which handles the **training phase** and the **validation phase**.
###Code
#hide
from fastbook import *
###Output
_____no_output_____
###Markdown
 The above graph illustrates what is going on during our **training phase**. We use our model to make **predictions** and calculate our **loss** (`NLLLoss` here) based on the **real labels**, then calculate the **gradients** using `loss.backward()` (computes *dloss/dx* for every parameter *x* which has `requires_grad=True`, which is the case for `nn.Parameters()` that we use under the hood) and **step the weights** with our optimizer before **repeating the process**. The **stop condition** in our case is just the number of epochs.> Note: You can use other stop conditions, such as a minimum accuracy to be obtained on the validation set etc. The **validation phase** is basically the same process without calculating gradients and stepping since we are only interested in measuring model performance.> Tip: Be sure to call model.train() and model.eval() before the corresponding phase. Some layers like `BatchNorm` and `Dropout` have a different behavior during training and evaluation. This is not the case for our model but it is still a good habit.
###Code
train_model(pytorch_net, criterion, optimizer, dataloaders, nb_epoch)
###Output
_____no_output_____
###Markdown
After 80 epochs we get **97.7% accuracy** on the validation data, which is very good for a simple model such as this one!Even though our validation loss and accuracy stabilized themselves after around 50 epochs, I kept going for a couple epochs just in case I could squeeze a bit more out.> Note: Notice how training loss is close to 0 and accuracy nearly at 100%. This means our network nearly perfectly memorized the entire training data.> Tip: Plotting metrics such as accuracy and loss lets you visualize the process and helps with parameter tuning (what could be causing spikes? Is the accuracy going up / down or has it reached a plateau? etc.)> Warning: If validation loss started to go up and validation accuracy started to go down after some time, that would be signs of overfitting! Models using architectures with more layers take longer to train, and are more prone to overfitting. Training on small amount of data makes memorizing easier and can also lead to overfitting. **Be careful!**
###Code
torch.save(pytorch_net, 'models/pytorch-97.7acc.pt')
###Output
_____no_output_____
###Markdown
Let's **save** our trained model for inference using `torch.save`.> Warning: This saves the entire module using Python's [pickle](https://docs.python.org/3/library/pickle.html) module. There are disadvantages to this approach. I used it here because it is more intuitive and is not the focus of this blog post, but feel free to read [this official pytorch document](https://pytorch.org/tutorials/beginner/saving_loading_models.html) to learn more about use cases regarding the saving and loading of Pytorch models. The Fastai way*"fastai is a deep learning library which provides practitioners with high-level components that can quickly and easily provide state-of-the-art results in standard deep learning domains, and provides researchers with low-level components that can be mixed and matched to build new approaches."*. Read [the docs](https://docs.fast.ai) to learn more! Data preparation
###Code
block = DataBlock(
blocks=(ImageBlock, CategoryBlock),
get_items=get_image_files,
splitter=RandomSplitter(valid_pct=0.2, seed=42),
get_y=parent_label,
batch_tfms=aug_transforms(mult=2., do_flip=False))
###Output
_____no_output_____
###Markdown
The `DataBlock` class is a *"generic container to quickly build `Datasets` and `DataLoaders`"*.- **`blocks`**: This is the way of telling the API that our inputs are images and our targets are categories. Types are represented by blocks, here we use `ImageBlock` and `CategoryBlock` for inputs and targets respectively.- **`get_items`**: expects a function to assemble our items inside the data block. `get_image_files` searches subfolders for all image filenames recursively.- **`splitter`**: Controls how our validation set is created. `RandomSplitter` splits items between training and validation (with `valid_pct` portion in validation) randomly.- **`get_y`**: expects a function to label data according to file name. `parent_label` labels items with the parent folder name.- **`batch_tfms`**: These are transformations applied to batched data samples on the GPU. `aug_transforms` is a *"utility function to create a list of flip, rotate, zoom, warp and lighting transforms"*. (Here we disabled flipping because we don't want to train on mirrored images and use twice the amount of augmentation compared to the default.) These augmentations are only done on the training set, we don't want to evaluate our model's performance on distorted images. > Note: *"Our model can learn to focus on, and recognize, different features in our images. It also reflects how images work in the real world: different photos of the same thing may be framed in slightly different ways. In fact, an entirely untrained neural network knows nothing whatsoever about how images behave. It doesn't even recognize that when an object is rotated by one degree, it still is a picture of the same thing! So actually training the neural network with examples of images where the objects are in slightly different places and slightly different sizes helps it to understand the basic concept of what an object is, and how it can be represented in an image."* (Deep Learning for Coders with Fastai and PyTorch) This **doesn't actually build the datasets and data loaders** since we didn't actually give it our images yet. But once we do, it knows exactly how to deal with them!
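If you want to see what the block will do to a single item once it gets data, fastai's `DataBlock.summary` walks through the whole pipeline. A quick sketch, using the same source folder as the next cell:
```python
# Steps through get_items -> splitter -> get_y -> item/batch transforms for a few samples
block.summary(path/"training")
```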
###Code
loaders = block.dataloaders(path/"training")
loaders.train.show_batch(max_n=4, nrows=1)
###Output
_____no_output_____
###Markdown
`block.dataloaders` creates a `DataLoaders` object from the source we give it. Here we gave it the *training* folder. The sample of images shown are outputs from the created training data loader. As you can see, they are correctly labeled and quite **distorted due to the batch augmentations** we made. We use the default value of **64 for our batch size** (`bs` parameter). Training a neural network
###Code
learn = cnn_learner(loaders, resnet34, metrics=accuracy)
###Output
_____no_output_____
###Markdown
`cnn_learner` builds a **convolutional neural network** style learner from dataloaders and an architecture. In our case we use the *ResNet* architecture. The 34 refers to the number of layers in this variant of the architecture.`cnn_learner` has a parameter called `pretrained` which defaults to `True`, that sets the weights in our model to values **already trained** by experts to recognize thousands of categories on the [ImageNet dataset](http://www.image-net.org/).When using a **pretrained model**, cnn_learner will **remove the last layer** since that is always specifically customized to the original training task (i.e. ImageNet dataset classification), and replace it with one or more new layers with randomized weights (called the **head**), of an appropriate size for the dataset you are working with.> Tip: "You should nearly always use a pretrained model, because it means that your model, before you've even shown it any of your data, is already very capable. And, as you'll see, in a deep learning model many of these capabilities are things you'll need, almost regardless of the details of your project. For instance, parts of pretrained models will handle edge, gradient, and color detection, which are needed for many tasks." (Deep Learning for Coders with Fastai and PyTorch)
###Code
learn.lr_find()
###Output
_____no_output_____
###Markdown
`learn.lr_find()` **explores learning rates** in a given range ([1e-7, 10] by default) over a number of iterations (100 default) and plots the loss versus the learning rates on a log scale.> Tip: A rule of thumb is choosing a value that is approximately in the middle of the sharpest downward slope (around 1e-2 here). You can also use the `lr_min` and `lr_steep` indicators above and choose a learning rate between them.
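As a minimal sketch of using those suggestions programmatically (the attribute names are an assumption and vary between fastai versions — newer releases return e.g. a `valley` suggestion instead):

```python
# Sketch: reuse lr_find's suggestions instead of reading the value off the plot
suggestions = learn.lr_find()
base_lr = suggestions.lr_steep  # or a value between lr_min and lr_steep
```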
###Code
learn.fine_tune(12, base_lr=1e-2, cbs=[ShowGraphCallback()])
###Output
_____no_output_____
###Markdown
`learn.fine_tune`: *"Fine tune with `freeze` for `freeze_epochs` then with `unfreeze` for `epochs` using discriminative LR"* ([docs](https://docs.fast.ai/callback.scheduleLearner.fine_tune))By default a pretrained `Learner` is in a **frozen state**, meaning that **only the head of the model will train** while the body stays frozen.To summarize, `fine_tune` trains the head (automatically added by `cnn_learner` with random weights) without the body for a few epochs (defaults to 1) and then unfreezes the `Learner` and trains the whole model for a number of epochs (here we chose 12) using **discriminative learning rates** (which means it applies different learning rates for different parts of the model).`cbs` expects a list of callbacks. Here we passed `ShowGraphCallback`, which updates a graph of training and validation loss (as seen above).> Note: Discriminative learning rates are preferred on pretrained models since the early layers are already trained and already recognize features useful to our specific model. This means we don't need to train these layers *"as hard"* (we update their weights by smaller steps), which is why we use a range of learning rates, from smaller ones for early layers to larger ones for later layers. **CONGRATS!** After training our model for a while, we get around **99.5%** accuracy on our validation set with minimal effort!
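For intuition, a rough sketch of what `fine_tune` is doing under the hood (simplified — the real implementation also lowers `base_lr` before unfreezing and uses one-cycle scheduling throughout):

```python
# Approximate equivalent of learn.fine_tune(12, base_lr=1e-2) -- a sketch, not the exact fastai code
base_lr = 1e-2
learn.freeze()                                        # body frozen: only the new head trains
learn.fit_one_cycle(1, slice(base_lr))                # the "freeze_epochs" phase
learn.unfreeze()                                      # now train the whole model
learn.fit_one_cycle(12, slice(base_lr/100, base_lr))  # discriminative learning rates
```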
###Code
learn.export("models/fastai-99acc.pkl")
###Output
_____no_output_____
###Markdown
`learn.export` **saves** the definition of **how to create our `DataLoaders`** on top of saving the **architecture and parameters** of the model.Saving the `Dataloaders` allows us to transform the data for inference in the same manner as our validation set by default, so data augmentation will not be applied.
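As a quick sketch of inference with the exported file (the image path below is just a placeholder):

```python
# Load the exported Learner elsewhere and predict a single image
learn_inf = load_learner("models/fastai-99acc.pkl")
pred_class, pred_idx, probs = learn_inf.predict("some_digit.png")
```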
###Code
interp = ClassificationInterpretation.from_learner(learn)
###Output
_____no_output_____
###Markdown
`ClassificationInterpretation.from_learner()` constructs a `ClassificationInterpretation` object from a `Learner`. It gives a handful of **interpretation methods** for classification models.
###Code
interp.plot_confusion_matrix()
###Output
_____no_output_____
###Markdown
The above **confusion matrix** helps us visualize where our model made mistakes. It looks like the most confused pairs were 0 with 6, 6 with 8, 5 with 3, and 7 with 2.
###Code
interp.plot_top_losses(10)
###Output
_____no_output_____
###Markdown
We can also visualize which images resulted in the **largest loss**. It seems like the upper-left 9 was **mislabeled** as a 3, so our network was right. For some of those, even humans could have mistaken them! Evaluating our model's inference on the testing dataset! PyTorch model
###Code
def test_model(model, criterion, test_loader):
model = model.to(device) # Moves and/or casts the parameters and buffers to device.
test_loss = 0.0 # keep track of loss
    test_corrects = 0 # count of correctly classified inputs
with torch.no_grad(): # Disable gradient calculation
for inputs, labels in test_loader:
inputs = inputs.to(device) # Perform Tensor device conversion
labels = labels.to(device)
outputs = model(inputs) # forward pass through network
loss = criterion(outputs, labels) # Calculate loss
_, preds = torch.max(outputs, 1)
test_loss += loss * inputs.size(0) # multiply mean loss by the number of elements
test_corrects += torch.sum(preds == labels.data) # add number of correct predictions to total
avg_loss = test_loss / len(test_loader.dataset) # get the "mean" loss for the epoch
avg_acc = test_corrects.float() / len(test_loader.dataset) # Get proportion of correct predictions
return avg_loss.item(), avg_acc.item()
###Output
_____no_output_____
###Markdown
Our **testing procedure** is basically the **same as our validation phase** from the training procedure, apart from the **absence of epochs**. (To be expected, since they serve the same purpose!) We **infer** predictions from our inputs in batches, then calculate the loss from them (how *"far"* they were from the real labels) and record the loss and the number of correctly labeled inputs, before averaging it all at the end.
###Code
testing_loader = torch.utils.data.DataLoader(testing_set, batch_size=bs)
###Output
_____no_output_____
###Markdown
Creation of a testing `DataLoader` to be passed to our testing procedure.
###Code
pytorch_loss, pytorch_accuracy = test_model(pytorch_net, criterion, testing_loader)
def print_loss_acc(loss, acc):
print("Loss : {:.6f}".format(loss))
print("Accuracy : {:.6f}".format(acc))
print_loss_acc(pytorch_loss, pytorch_accuracy)
###Output
Loss : 0.079889
Accuracy : 0.977300
###Markdown
The results on the **testing data** are approximately the same as on the validation set! Fastai model
###Code
learn = load_learner('models/fastai-99acc.pkl')
test_dl = learn.dls.test_dl(get_image_files(path/"testing"), with_labels=True)
###Output
_____no_output_____
###Markdown
`test_dl` creates a **test dataloader** from `test_items` (list of image paths) using validation transforms of `dls`.We set `with_labels` to `True` because we want the labels of each image to **check the inference accuracy of our model**.
###Code
fastai_loss, fastai_accuracy = learn.validate(dl=test_dl)
###Output
_____no_output_____
###Markdown
`learn.validate` returns the **calculated loss** and the **metrics** of the model on the `dl` data loader.
###Code
print_loss_acc(fastai_loss, fastai_accuracy)
###Output
Loss : 0.014902
Accuracy : 0.995400
|
Chapter03/intro.ipynb | ###Markdown
Basic terms**Gene** - a vector of an individual's values that is learned by the algorithm**Population** - a collection of genes (individuals)**Fitness function** - also called the objective function; the function we optimize. Individuals for which this function gives the best score will be selected by the genetic algorithm. The creator classA meta-factory that allows extending existing classes. It is usually used to create the Fitness and Individual classes
###Code
from deap import creator, base, tools, algorithms
class Employee:
pass
creator.create("Developer", Employee, position="Developer", programmingLanguages=set)
help(creator.Developer)
###Output
Help on class Developer in module deap.creator:
class Developer(__main__.Employee)
| Developer(*args, **kargs)
|
| Method resolution order:
| Developer
| __main__.Employee
| builtins.object
|
| Methods defined here:
|
| __init__ = initType(self, *args, **kargs)
| Replace the __init__ function of the new type, in order to
| add attributes that were defined with **kargs to the instance.
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| position = 'Developer'
|
| ----------------------------------------------------------------------
| Data descriptors inherited from __main__.Employee:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
###Markdown
FitnessThis class encapsulates the fitness values. In DEAP, fitness can be defined by several components (objectives), each of which has a weight. The combination of weights defines the behaviour, i.e. the fitness strategy, for a particular problem. base.Fitness is an abstract class containing the weights tuple. To define a strategy, the tuple has to be assigned values.
###Code
creator.create('FitnessMax', base.Fitness, weights=(1.0,))
help(creator.FitnessMax)
creator.FitnessMax.weights
###Output
_____no_output_____
###Markdown
In this case the FitnessMax strategy is to maximize the fitness of individuals with a single objective. Minimization would look like this:
###Code
creator.create('FitnessMin', base.Fitness, weights=(-1.0,))
###Output
_____no_output_____
###Markdown
Multiple objectives. The first two components are maximized, the third is minimized. The importance of the components decreases from left to right
###Code
creator.create('FitnessCompound', base.Fitness, weights=(1.0, 0.2, -0.5))
###Output
_____no_output_____
###Markdown
**The values tuple** stores the fitness values themselves. These values are produced by a separately defined function, usually called evaluate(). The tuple contains one value for each objective (function).**The third tuple, wvalues,** contains the weighted values obtained by multiplying values and weights element-wise. It is used for comparing individuals.wvalues can be compared lexicographically with the operators ``>``, ``>=``, ``<=``, ``==``, ``!=`` IndividualThis class is used to define the individuals that make up the population. In this case all of an individual's genes will be of type list, and each individual's class will contain a FitnessMax instance
###Code
creator.create('Individual', list, fitness=creator.FitnessMax)
help(creator.Individual)
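# Small usage sketch: an Individual behaves like a list of genes and carries a fitness
ind = creator.Individual([1, 0, 1])
ind.fitness.values = (2.0,)  # must be a tuple with one value per objective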
###Output
Help on class Individual in module deap.creator:
class Individual(builtins.list)
| Individual(*args, **kargs)
|
| Built-in mutable sequence.
|
| If no argument is given, the constructor creates a new empty list.
| The argument must be an iterable if specified.
|
| Method resolution order:
| Individual
| builtins.list
| builtins.object
|
| Methods defined here:
|
| __init__ = initType(self, *args, **kargs)
| Replace the __init__ function of the new type, in order to
| add attributes that were defined with **kargs to the instance.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Methods inherited from builtins.list:
|
| __add__(self, value, /)
| Return self+value.
|
| __contains__(self, key, /)
| Return key in self.
|
| __delitem__(self, key, /)
| Delete self[key].
|
| __eq__(self, value, /)
| Return self==value.
|
| __ge__(self, value, /)
| Return self>=value.
|
| __getattribute__(self, name, /)
| Return getattr(self, name).
|
| __getitem__(...)
| x.__getitem__(y) <==> x[y]
|
| __gt__(self, value, /)
| Return self>value.
|
| __iadd__(self, value, /)
| Implement self+=value.
|
| __imul__(self, value, /)
| Implement self*=value.
|
| __iter__(self, /)
| Implement iter(self).
|
| __le__(self, value, /)
| Return self<=value.
|
| __len__(self, /)
| Return len(self).
|
| __lt__(self, value, /)
| Return self<value.
|
| __mul__(self, value, /)
| Return self*value.
|
| __ne__(self, value, /)
| Return self!=value.
|
| __repr__(self, /)
| Return repr(self).
|
| __reversed__(self, /)
| Return a reverse iterator over the list.
|
| __rmul__(self, value, /)
| Return value*self.
|
| __setitem__(self, key, value, /)
| Set self[key] to value.
|
| __sizeof__(self, /)
| Return the size of the list in memory, in bytes.
|
| append(self, object, /)
| Append object to the end of the list.
|
| clear(self, /)
| Remove all items from list.
|
| copy(self, /)
| Return a shallow copy of the list.
|
| count(self, value, /)
| Return number of occurrences of value.
|
| extend(self, iterable, /)
| Extend list by appending elements from the iterable.
|
| index(self, value, start=0, stop=9223372036854775807, /)
| Return first index of value.
|
| Raises ValueError if the value is not present.
|
| insert(self, index, object, /)
| Insert object before index.
|
| pop(self, index=-1, /)
| Remove and return item at index (default last).
|
| Raises IndexError if list is empty or index is out of range.
|
| remove(self, value, /)
| Remove first occurrence of value.
|
| Raises ValueError if the value is not present.
|
| reverse(self, /)
| Reverse *IN PLACE*.
|
| sort(self, /, *, key=None, reverse=False)
| Sort the list in ascending order and return None.
|
| The sort is in-place (i.e. the list itself is modified) and stable (i.e. the
| order of two equal elements is maintained).
|
| If a key function is given, apply it once to each list item and sort them,
| ascending or descending, according to their function values.
|
| The reverse flag can be set to sort in descending order.
|
| ----------------------------------------------------------------------
| Static methods inherited from builtins.list:
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| ----------------------------------------------------------------------
| Data and other attributes inherited from builtins.list:
|
| __hash__ = None
###Markdown
ToolboxA container for functions and operators that allows creating new operators by assigning aliases. The first argument registers the name of the function, the second is the function to be executed, and the remaining arguments are optional (they become arguments of the executed function)
###Code
def sum_of_two(a, b):
return a + b
toolbox = base.Toolbox()
toolbox.register('increment_by_five', sum_of_two, b=5)
toolbox.increment_by_five(10)
###Output
_____no_output_____
###Markdown
Creating genetic operatorsThe tools module contains useful functions for performing selection, as well as crossover and mutation, plus initialization utilities; the Toolbox is therefore mostly used together with it. In this case:- a select operator is created that uses tournament selection with argument 3 (the tournament size)- a mate operator is created as an alias for the cxTwoPoint() function, which performs two-point crossover- a mutate operator is created with the bit-flip operation and probability 0.02Selection functions live in selection.py, crossover functions in crossover.py, and mutation functions in mutation.py
###Code
toolbox.register('select', tools.selTournament, tournsize=3)
toolbox.register('mate', tools.cxTwoPoint)
toolbox.register('mutate', tools.mutFlipBit, indpb=0.02)
###Output
_____no_output_____
###Markdown
Creating the populationThe file init.py contains several functions useful for creating and initializing a population.The initRepeat() function takes three arguments:- the container type for the resulting objects- a function that generates the objects placed into the container- how many objects to generate
###Code
import random
toolbox.register('zero_or_one', random.randint, 0, 1)
rnd = tools.initRepeat(list, toolbox.zero_or_one, 30)
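# Typical DEAP pattern (sketch): register factories for whole individuals and populations,
# so that toolbox.population(n=...) builds a list of creator.Individual objects directly.
toolbox.register('individual', tools.initRepeat, creator.Individual, toolbox.zero_or_one, 30)
toolbox.register('population', tools.initRepeat, list, toolbox.individual)
population = toolbox.population(n=10)  # 10 individuals of 30 random bits each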
rnd
###Output
_____no_output_____
###Markdown
Computing fitnessFitness values are usually returned by a separately defined function, which is (by convention) registered in the toolbox as evaluate
###Code
def some_calc(nums):
return nums
toolbox.register('evaluate', some_calc)
###Output
_____no_output_____
###Markdown
StatisticsThe tools.Statistics class allows collecting statistics by specifying a function that is applied to the data for which the statistics are computed. For example, when the population is the data, we pass a function that extracts each individual's fitness and then register the aggregation methods we want
###Code
import numpy as np
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register("max", np.max)
stats.register("avg", np.mean)
###Output
_____no_output_____ |
phase1/2.3/Plot_gps_log_NE.ipynb | ###Markdown
install
###Code
%%time
# Important library for many geopython libraries
!apt install gdal-bin python-gdal python3-gdal
# Install rtree - Geopandas requirment
!apt install python3-rtree
# Install Geopandas
!pip install git+git://github.com/geopandas/geopandas.git
# Install descartes - Geopandas requirment
!pip install descartes
# Install Folium for Geographic data visualization
!pip install folium
# Install plotlyExpress
!pip install plotly_express
!pip install Shapely
!pip install plotly_express
from shapely.geometry import Point,Polygon
import pandas as pd
import numpy as np
import geopandas as gpd
import matplotlib
import matplotlib.pyplot as plt
import folium
from folium import plugins
from folium.plugins import HeatMap
import plotly_express as px
from google.colab import drive
from IPython.display import display
import glob
import re
import plotly_express as px
import seaborn as sns
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
read
###Code
!ls '/content/drive/My Drive/2021 Route Prediction/Project-1/Source-Code/data/gps_log_2019-04-week-1'
gps_week_1 = pd.read_csv('/content/drive/My Drive/2021 Route Prediction/Project-1/Source-Code/data/gps_log_2019-04-week-1')
###Output
_____no_output_____
###Markdown
Plot
###Code
gps_week_1.info()
car_day = gps_week_1[['time_stamp']]
car_day = car_day[['time_stamp']].astype("datetime64")
car_day_count = car_day.groupby(car_day['time_stamp'].dt.day).count()
car_day_count.plot(figsize=(16,14))
car = gps_week_1[['unit_id']]
veh_count = car.unit_id.value_counts()
veh_count.plot(figsize=(16,14))
plt.hist(veh_count,log=True)
plt.show()
car_hour = gps_week_1[['time_stamp','unit_id']]
car_hour = car_hour[['time_stamp']].astype("datetime64")
car_hour['unit_id'] = gps_week_1[['unit_id']]
car_hour
c = car_hour.loc[car_hour['unit_id'] == '005000800000868998030032979']
c = c.loc[c['time_stamp'] < pd.Timestamp(2019, 4, 2)]
x = c['time_stamp'].max() - c['time_stamp'].min()
x.total_seconds()
i = 0;
df_day_1 = pd.DataFrame(columns=['id', 'time'])
for car in car_hour.unit_id.unique():
c = car_hour.loc[car_hour['unit_id'] == car]
c = c.loc[c['time_stamp'] < pd.Timestamp(2019, 4, 2)]
hour = c['time_stamp'].max() - c['time_stamp'].min()
df_day_1.loc[i] = [car,hour.total_seconds()]
i = i+1
print("complet")
df_day_1
df_day_1['time'] = df_day_1[['time']] / 3600
df_day_1
df_day_1.plot.hist()
###Output
_____no_output_____ |
code/experiment-std-data.ipynb | ###Markdown
Model selection on standard deviation columns
###Code
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.manifold import Isomap
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.decomposition import TruncatedSVD
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.decomposition import PCA, KernelPCA
from sklearn.feature_selection import VarianceThreshold
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
from sklearn.feature_selection import mutual_info_classif
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
df = pd.read_csv('../data/train_data.csv')
X = df.drop(['class4', 'class2'], axis=1)
X = X.loc[:, X.columns[range(1, X.shape[1] - 1, 2)]]
X_scaled = pd.DataFrame(StandardScaler().fit_transform(X), columns = X.columns)
y_class2 = df['class2']
y_class4 = df['class4']
###Output
_____no_output_____
###Markdown
Classifiers
###Code
classifiers = [
('logistic', LogisticRegression()),
('kNeighbour', KNeighborsClassifier(3)),
('svcLinear', SVC(kernel="linear", C=0.025, probability=True)),
('svc', SVC(gamma=2, C=1, probability=True)),
('gaussian', GaussianProcessClassifier(1.0 * RBF(1.0))),
('decissionTree', DecisionTreeClassifier(max_depth=5)),
('rfc', RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1)),
('mlp', MLPClassifier(alpha=1, max_iter=1000)),
('ada', AdaBoostClassifier()),
('gaussianNB', GaussianNB()),
('qda', QuadraticDiscriminantAnalysis())]
###Output
_____no_output_____
###Markdown
Dimensionality reduction techniques
###Code
# Variance boundary for VarianceThreshold
# More info: https://scikit-learn.org/stable/modules/feature_selection.html#variance-threshold
p = 0.7
variance = p * (1 - p)
dimension_reductions_y2 = [
('iso', Isomap(n_components=20)),
('lle', LocallyLinearEmbedding(n_components=20)),
('llemodified', LocallyLinearEmbedding(n_components=20, method='modified', n_neighbors=90)),
('svd', TruncatedSVD(n_components=20)),
('lda', LinearDiscriminantAnalysis(n_components=1)),
('pca', PCA()),
('kpca', KernelPCA(kernel="rbf", fit_inverse_transform=True, gamma=1)),
('sel', VarianceThreshold(threshold=variance)),
('kbest', SelectKBest(f_classif, k=20)),
('kbestmutual', SelectKBest(mutual_info_classif, k=20)),
('select', SelectFromModel(LinearSVC(penalty="l2"))),
('selecttree', SelectFromModel(ExtraTreesClassifier(n_estimators=20))),
('rfe', RFE(estimator=DecisionTreeClassifier(), n_features_to_select=20))]
dimension_reductions_y4 = [
('iso', Isomap(n_components=20)),
('lle', LocallyLinearEmbedding(n_components=20)),
('llemodified', LocallyLinearEmbedding(n_components=20, method='modified', n_neighbors=90)),
('svd', TruncatedSVD(n_components=20)),
('lda', LinearDiscriminantAnalysis(n_components=2)),
('pca', PCA()),
('kpca', KernelPCA(kernel="rbf", fit_inverse_transform=True, gamma=1)),
('sel', VarianceThreshold(threshold=variance)),
('kbest', SelectKBest(f_classif, k=20)),
('kbestmutual', SelectKBest(mutual_info_classif, k=20)),
('select', SelectFromModel(LinearSVC(penalty="l2"))),
('selecttree', SelectFromModel(ExtraTreesClassifier(n_estimators=20))),
('rfe', RFE(estimator=DecisionTreeClassifier(), n_features_to_select=30))]
###Output
_____no_output_____
###Markdown
Computation functions
###Code
def k_fold_cross_validation(ml_pipeline, X, y, n=5, k=10, score='accuracy'):
"""Perform N repeated K-fold cross-validation
Keyword arguments:
ml_pipeline -- Intance of scikit-learn's Pipeline
X -- Data to perform cross-validation
y -- Labels of the data
n -- Amount of times cross-validation is repeated (default is 5)
k -- Amount of folds that the data is splitted to perform
cross-validation (default is 10)
score -- Scoring type as a string for scikit-learn's
cross_val_score method (default is accuracy)
Return:
Two element numpy array where first value is mean of cross-validation scores
and second is standard deviation of cross-validation scores.
"""
    cv = RepeatedStratifiedKFold(n_splits = k,
                                 n_repeats = n,
random_state = 1)
n_scores = cross_val_score(ml_pipeline, X, y,
scoring = score, cv = cv,
n_jobs = -1)
return(np.array([np.mean(n_scores), np.std(n_scores)]))
###Output
_____no_output_____
###Markdown
Process
###Code
columns = ['accuracy_mean', 'accuracy_std',
'accuracy_scaled_mean', 'accuracy_scaled_std']
statistics_y2 = pd.DataFrame(index = columns)
statistics_y4 = pd.DataFrame(index = columns)
###Output
_____no_output_____
###Markdown
Binary
###Code
y = y_class2
for model_used in classifiers:
model = Pipeline([model_used])
not_scaled = k_fold_cross_validation(model, X, y)
scaled = k_fold_cross_validation(model, X_scaled, y)
data = np.concatenate((not_scaled, scaled))
statistics_y2[ model_used[0] ] = data
for feature_selection in dimension_reductions_y2:
model = Pipeline([feature_selection, model_used])
not_scaled = k_fold_cross_validation(model, X, y)
scaled = k_fold_cross_validation(model, X_scaled, y)
column = model_used[0] + '_' + feature_selection[0]
data = np.concatenate((not_scaled, scaled))
statistics_y2[ column ] = data
statistics_transpose_y2 = statistics_y2.transpose(copy=True)
statistics_transpose_y2
statistics_transpose_y2.describe()
###Output
_____no_output_____
###Markdown
multi-class
###Code
y = y_class4
for model_used in classifiers:
model = Pipeline([model_used])
not_scaled = k_fold_cross_validation(model, X, y)
scaled = k_fold_cross_validation(model, X_scaled, y)
data = np.concatenate((not_scaled, scaled))
statistics_y4[ model_used[0] ] = data
for feature_selection in dimension_reductions_y4:
model = Pipeline([feature_selection, model_used])
not_scaled = k_fold_cross_validation(model, X, y)
scaled = k_fold_cross_validation(model, X_scaled, y)
column = model_used[0] + '_' + feature_selection[0]
data = np.concatenate((not_scaled, scaled))
statistics_y4[ column ] = data
statistics_transpose_y4 = statistics_y4.transpose(copy=True)
statistics_transpose_y4
statistics_transpose_y4.describe()
###Output
_____no_output_____
###Markdown
Save results
###Code
statistics_transpose_y2.to_csv('../data/experiment_first_round/CV_binary_std_data.csv', index_label="model_name")
statistics_transpose_y4.to_csv('../data/experiment_first_round/CV_multinomial_std_data.csv', index_label="model_name")
###Output
_____no_output_____ |
book/_build/jupyter_execute/notebooks/01-introduction-geospatial-data.ipynb | ###Markdown
Introduction to geospatial vector data in Python
###Code
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Importing geospatial data Geospatial data is often available from specific GIS file formats or data stores, like ESRI shapefiles, GeoJSON files, geopackage files, PostGIS (PostgreSQL) database, ...We can use the GeoPandas library to read many of those GIS file formats (relying on the `fiona` library under the hood, which is an interface to GDAL/OGR), using the `geopandas.read_file()` function.For example, let's start by reading a shapefile with all the countries of the world (adapted from http://www.naturalearthdata.com/downloads/110m-cultural-vectors/110m-admin-0-countries/, zip file is available in the `/data` directory), and inspect the data:
###Code
# read data
data = gpd.read_file("zip://../data/ne_110m_admin_0_countries.zip")
data.head()
# reading shape files
# countries = data = gpd.read_file(r'../shapefiles/world.shp')
###Output
_____no_output_____
###Markdown
Exploring Data
###Code
# type of dataframe
type(data)
# shape
data.shape
# dtypes
data.dtypes
# columns
data.columns
# info
data.info()
data.head()
data.plot()
plt.show()
###Output
_____no_output_____
###Markdown
What can we observe:- Using `.head()` we can see the first rows of the dataset, just like we can do with Pandas.- There is a 'geometry' column and the different countries are represented as polygons- We can use the `.plot()` method to quickly get a *basic* visualization of the data What's a GeoDataFrame?We used the GeoPandas library to read in the geospatial data, and this returned us a `GeoDataFrame`:
###Code
type(data)
###Output
_____no_output_____
###Markdown
A GeoDataFrame contains a tabular, geospatial dataset:* It has a **'geometry' column** that holds the geometry information (or features in GeoJSON).* The other columns are the **attributes** (or properties in GeoJSON) that describe each of the geometriesSuch a `GeoDataFrame` is just like a pandas `DataFrame`, but with some additional functionality for working with geospatial data:* A `.geometry` attribute that always returns the column with the geometry information (returning a GeoSeries). The column name itself does not necessarily need to be 'geometry', but it will always be accessible as the `.geometry` attribute.* It has some extra methods for working with spatial data (area, distance, buffer, intersection, ...), which we will see in later notebooks
###Code
data.geometry
type(data.geometry)
data.geometry.area
###Output
/tmp/ipykernel_17973/1639781399.py:1: UserWarning: Geometry is in a geographic CRS. Results from 'area' are likely incorrect. Use 'GeoSeries.to_crs()' to re-project geometries to a projected CRS before this operation.
data.geometry.area
###Markdown
**It's still a DataFrame**, so we have all the pandas functionality available to use on the geospatial dataset, and to do data manipulations with the attributes and geometry information together. For example, we can calculate the average population over all countries (by accessing the 'pop_est' column, and calling the `mean` method on it):
###Code
data['pop_est'].mean()
###Output
_____no_output_____
###Markdown
Or, we can use boolean filtering to select a subset of the dataframe based on a condition:
###Code
africa = data[data['continent'] == 'Africa']
africa.plot()
###Output
_____no_output_____
###Markdown
---The rest of the tutorial is going to assume you already know some pandas basics, but we will try to give hints for that part for those that are not familiar. A few resources in case you want to learn more about pandas:- Pandas docs: https://pandas.pydata.org/pandas-docs/stable/10min.html- Other tutorials: chapter from pandas in https://jakevdp.github.io/PythonDataScienceHandbook/, https://github.com/jorisvandenbossche/pandas-tutorial, https://github.com/TomAugspurger/pandas-head-to-tail, ... **REMEMBER:** * A `GeoDataFrame` allows to perform typical tabular data analysis together with spatial operations* A `GeoDataFrame` (or *Feature Collection*) consists of: * **Geometries** or **features**: the spatial objects * **Attributes** or **properties**: columns with information about each spatial object Geometries: Points, Linestrings and PolygonsSpatial **vector** data can consist of different types, and the 3 fundamental types are:* **Point** data: represents a single point in space.* **Line** data ("LineString"): represents a sequence of points that form a line.* **Polygon** data: represents a filled area.And each of them can also be combined in multi-part geometries (See https://shapely.readthedocs.io/en/stable/manual.htmlgeometric-objects for extensive overview). For the example we have seen up to now, the individual geometry objects are Polygons:
###Code
print(data.geometry[2])
###Output
POLYGON ((21.0200403174764 40.84272695572588, 20.99998986174722 40.58000397395401, 20.67499677906363 40.43499990494303, 20.61500044117275 40.11000682225935, 20.15001590341052 39.62499766698397, 19.98000044117015 39.69499339452341, 19.96000166187321 39.91500580500605, 19.40608198413673 40.25077342382247, 19.31905887215714 40.72723012955356, 19.40354983895429 41.40956574153546, 19.54002729663711 41.71998607031276, 19.37176883309496 41.87754751237065, 19.37176816334725 41.8775506797835, 19.30448611825079 42.19574514420782, 19.73805138517963 42.68824738216557, 19.80161339689869 42.50009349219084, 20.07070000000004 42.58863000000008, 20.28375451018189 42.32025950781508, 20.52295000000004 42.21787000000006, 20.59024654668023 41.85540891928363, 20.59024743010491 41.85540416113361, 20.4631750830992 41.51508901627534, 20.60518191903736 41.08622630468523, 21.0200403174764 40.84272695572588))
###Markdown
Let's import some other datasets with different types of geometry objects. A dataset about cities in the world (adapted from http://www.naturalearthdata.com/downloads/110m-cultural-vectors/110m-populated-places/, zip file is available in the `/data` directory), consisting of Point data:
###Code
cities = gpd.read_file("zip://./data/ne_110m_populated_places.zip")
print(cities.geometry[0])
###Output
POINT (12.45338654497177 41.90328217996012)
###Markdown
And a dataset of rivers in the world (from http://www.naturalearthdata.com/downloads/50m-physical-vectors/50m-rivers-lake-centerlines/, zip file is available in the `/data` directory) where each river is a (multi-)line:
###Code
rivers = gpd.read_file("zip://./data/ne_50m_rivers_lake_centerlines.zip")
print(rivers.geometry[0])
###Output
LINESTRING (51.9371337598152 55.70106609892139, 51.88086646731369 55.68625891701544, 51.82031249962222 55.69745514553858, 51.7476018274624 55.69366250841807, 51.6628417966117 55.60817291874525, 51.57871093775964 55.59943268477065, 51.51342773400279 55.58312409100404, 51.50854492161091 55.52948639548083, 51.48583984403365 55.49640534033426, 51.36914062543957 55.46796295772435, 51.21306254869774 55.50264985760492, 51.13452148447897 55.48273346527725, 51.07934570274205 55.46759674659262, 50.98022460947817 55.46637604371949, 50.83445217522774 55.45630956063775, 50.6883789060617 55.42011139502489, 50.4118652342932 55.40119049644431, 50.07802734358711 55.38112213757665, 49.82216796867687 55.33466217681809, 49.53222656260584 55.260614325191, 49.38232421848795 55.17182037990665, 49.24808475131027 55.11301870345045)
###Markdown
The `shapely` libraryThe individual geometry objects are provided by the [`shapely`](https://shapely.readthedocs.io/en/stable/) library
###Code
type(data.geometry[0])
###Output
_____no_output_____
###Markdown
To construct one ourselves:
###Code
from shapely.geometry import Point, Polygon, LineString
p = Point(0, 0)
print(p)
polygon = Polygon([(1, 1), (2,2), (2, 1)])
polygon.area
polygon.distance(p)
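# The same operation also works element-wise on a GeoSeries/GeoDataFrame, e.g. the distance
# from every city to the point p (in the units of the CRS, here plain degrees):
cities.distance(p).head()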
###Output
_____no_output_____
###Markdown
**REMEMBER**: Single geometries are represented by `shapely` objects:* If you access a single geometry of a GeoDataFrame, you get a shapely geometry object* Those objects have similar functionality as geopandas objects (GeoDataFrame/GeoSeries). For example: * `single_shapely_object.distance(other_point)` -> distance between two points * `geodataframe.distance(other_point)` -> distance for each point in the geodataframe to the other point Plotting our different layers together
###Code
ax = data.plot(edgecolor='k', facecolor='none', figsize=(15, 10))
rivers.plot(ax=ax)
cities.plot(ax=ax, color='red')
ax.set(xlim=(-20, 60), ylim=(-40, 40))
###Output
_____no_output_____
###Markdown
See the [04-more-on-visualization.ipynb](04-more-on-visualization.ipynb) notebook for more details on visualizing geospatial datasets. Let's practice!Throughout the exercises in this course, we will work with several datasets about the city of Paris.Here, we start with the following datasets:- The administrative districts of Paris (https://opendata.paris.fr/explore/dataset/quartier_paris/): `paris_districts_utm.geojson`- Real-time (at the moment I downloaded them ..) information about the public bicycle sharing system in Paris (vélib, https://opendata.paris.fr/explore/dataset/stations-velib-disponibilites-en-temps-reel/information/): `data/paris_bike_stations_mercator.gpkg`Both datasets are provided as files.Let's explore those datasets: **EXERCISE**:We will start with exploring the bicycle station dataset (available as a GeoPackage file: `data/paris_bike_stations_mercator.gpkg`) * Read the stations datasets into a GeoDataFrame called `stations`.* Check the type of the returned object (with `type(..)`)* Check the first rows of the dataframes. What kind of geometries dooes this datasets contain?* How many features are there in the dataset? (hint: use the `.shape` attribute) Hints* The geopandas.read_file() function can read different geospatial file formats. You pass the file name as first argument.
###Code
# %load _solved/solutions/01-introduction-geospatial-data1.py
# %load _solved/solutions/01-introduction-geospatial-data2.py
# %load _solved/solutions/01-introduction-geospatial-data3.py
# %load _solved/solutions/01-introduction-geospatial-data4.py
###Output
_____no_output_____
###Markdown
**EXERCISE**:* Make a quick plot of the `stations` dataset.* Make the plot a bit larger by setting the figure size to (12, 6) (hint: the `plot` method accepts a `figsize` keyword).
###Code
# %load _solved/solutions/01-introduction-geospatial-data5.py
###Output
_____no_output_____
###Markdown
A plot with just some points can be hard to interpret without any spatial context. Therefore, in the next exercise we will learn how to add a background map.We are going to make use of the [contextily](https://github.com/darribas/contextily) package. The `add_basemap()` function of this package makes it easy to add a background web map to our plot. We begin by plotting our data first, and then pass the matplotlib axes object (returned by dataframe's `plot()` method) to the `add_basemap()` function. `contextily` will then download the web tiles needed for the geographical extent of your plot.**EXERCISE**:* Import `contextily`.* Re-do the figure of the previous exercise: make a plot of all the points in `stations`, but assign the result to an `ax` variable.* Set the marker size equal to 5 to reduce the size of the points (use the `markersize` keyword of the `plot()` method for this).* Use the `add_basemap()` function of `contextily` to add a background map: the first argument is the matplotlib axes object `ax`.
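A minimal sketch of that pattern (assuming the `stations` GeoDataFrame from the previous exercise, which is already in Web Mercator — the projection the default contextily tiles expect):

```python
import contextily
ax = stations.plot(figsize=(12, 6), markersize=5)
contextily.add_basemap(ax)
```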
###Code
# %load _solved/solutions/01-introduction-geospatial-data6.py
# %load _solved/solutions/01-introduction-geospatial-data7.py
###Output
_____no_output_____
###Markdown
**EXERCISE**:* Make a histogram showing the distribution of the number of bike stands in the stations. Hints* Selecting a column can be done with the square brackets: `df['col_name']`* Single columns have a `hist()` method to plot a histogram of its values.
###Code
# %load _solved/solutions/01-introduction-geospatial-data8.py
###Output
_____no_output_____
###Markdown
**EXERCISE**:Let's now visualize where the available bikes are actually stationed: * Make a plot of the `stations` dataset (also with a (12, 6) figsize).* Use the `'available_bikes'` column to determine the color of the points. For this, use the `column=` keyword.* Use the `legend=True` keyword to show a color bar.
###Code
# %load _solved/solutions/01-introduction-geospatial-data9.py
###Output
_____no_output_____
###Markdown
**EXERCISE**:Next, we will explore the dataset on the administrative districts of Paris (available as a GeoJSON file: "data/paris_districts_utm.geojson")* Read the dataset into a GeoDataFrame called `districts`.* Check the first rows of the dataframe. What kind of geometries does this dataset contain?* How many features are there in the dataset? (hint: use the `.shape` attribute)* Make a quick plot of the `districts` dataset (set the figure size to (12, 6)).
###Code
# %load _solved/solutions/01-introduction-geospatial-data10.py
# %load _solved/solutions/01-introduction-geospatial-data11.py
# %load _solved/solutions/01-introduction-geospatial-data12.py
# %load _solved/solutions/01-introduction-geospatial-data13.py
###Output
_____no_output_____
###Markdown
**EXERCISE**: What are the largest districts (biggest area)?* Calculate the area of each district.* Add this area as a new column to the `districts` dataframe.* Sort the dataframe by this area column for largest to smallest values (descending).Hints* Adding a column can be done by assigning values to a column using the same square brackets syntax: `df['new_col'] = values`* To sort the rows of a DataFrame, use the `sort_values()` method, specifying the column to sort on with the `by='col_name'` keyword. Check the help of this method to see how to sort ascending or descending.
###Code
# %load _solved/solutions/01-introduction-geospatial-data14.py
# %load _solved/solutions/01-introduction-geospatial-data15.py
# %load _solved/solutions/01-introduction-geospatial-data16.py
###Output
_____no_output_____
###Markdown
**EXERCISE**:* Add a column `'population_density'` representing the number of inhabitants per square kilometer (Note: the area is given in square meters, so you will need to multiply the result by `10**6`).* Plot the districts using the `'population_density'` to color the polygons. For this, use the `column=` keyword.* Use the `legend=True` keyword to show a color bar.
###Code
# %load _solved/solutions/01-introduction-geospatial-data17.py
# %load _solved/solutions/01-introduction-geospatial-data18.py
# %load _solved/solutions/01-introduction-geospatial-data19.py
###Output
_____no_output_____
###Markdown
--- For the curious: A bit more on importing and creating GeoDataFrames Note on `fiona`Under the hood, GeoPandas uses the [Fiona library](http://toblerity.org/fiona/) (pythonic interface to GDAL/OGR) to read and write data. GeoPandas provides a more user-friendly wrapper, which is sufficient for most use cases. But sometimes you want more control, and in that case, to read a file with fiona you can do the following:
###Code
import fiona
from shapely.geometry import shape
with fiona.Env():
with fiona.open("zip://./data/ne_110m_admin_0_countries.zip") as collection:
for feature in collection:
# ... do something with geometry
geom = shape(feature['geometry'])
# ... do something with properties
print(feature['properties']['name'])
###Output
_____no_output_____
###Markdown
Constructing a GeoDataFrame manually
###Code
gpd.GeoDataFrame({
'geometry': [Point(1, 1), Point(2, 2)],
'attribute1': [1, 2],
'attribute2': [0.1, 0.2]})
###Output
_____no_output_____
###Markdown
Creating a GeoDataFrame from an existing dataframeFor example, if you have lat/lon coordinates in two columns:
###Code
df = pd.DataFrame(
{'City': ['Buenos Aires', 'Brasilia', 'Santiago', 'Bogota', 'Caracas'],
'Country': ['Argentina', 'Brazil', 'Chile', 'Colombia', 'Venezuela'],
'Latitude': [-34.58, -15.78, -33.45, 4.60, 10.48],
'Longitude': [-58.66, -47.91, -70.66, -74.08, -66.86]})
gdf = gpd.GeoDataFrame(
    df, geometry=gpd.points_from_xy(df.Longitude, df.Latitude))
gdf
###Output
_____no_output_____ |
none.ipynb | ###Markdown
Importing required libraries
###Code
from astropy.io import fits
import numpy as np
import matplotlib.pyplot as plt
import copy
import os
from skimage.feature import blob_doh
import shutil
from csv import writer, reader
from astropy.wcs import WCS
from astropy.coordinates import SkyCoord
from numpy import asarray, save, load
###Output
_____no_output_____
###Markdown
Uploading all data
###Code
SOURCES_NUMBER = 1 # Should be equal for all data used
kSize = 5 # Size of smoothing kernel
kSigma = 1.5 # Sigma for smoothing
d = int((kSize-1)/2)
src_folder = "data/skymaps/NONE/"+str(SOURCES_NUMBER)
matrices = []
blob_pos = dict()
cont = 0
filename_dict = dict()
with open("data/xml_files/"+str(SOURCES_NUMBER)+"/blobs.csv") as csv_file:
csv_reader = reader(csv_file, delimiter=',')
tmp_rows = [row for row in csv_reader]
for filename in os.listdir(src_folder):
if "NONE" in filename:
filename_dict[int(filename.split("_")[2][:-5])] = filename
for order_id in sorted(filename_dict):
blob_pos[cont] = []
filename = filename_dict[order_id]
matrices.append(fits.open(src_folder+"/"+filename))
wcs = WCS(header=(matrices[-1])[0].header)
for i in range(SOURCES_NUMBER):
sky = SkyCoord(float(tmp_rows[cont][2*i+1]), float(tmp_rows[cont][2*i+2]), unit='deg')
y, x= wcs.world_to_array_index(sky)
blob_pos[cont].append([int(x), int(y)])
cont += 1
###Output
_____no_output_____
###Markdown
Look at images
###Code
for i, mat in enumerate(matrices[:5]):
plt.matshow(mat[0].data, cmap='gray')
###Output
_____no_output_____
###Markdown
Defining methods required to smooth the images
###Code
def gaussianKernel(size, sigma):
kernel = np.fromfunction(lambda x, y: (1/(2*np.pi*sigma**2)) * np.e ** ((-1*((x-(size-1)/2)**2+(y-(size-1)/2)**2))/(2*sigma**2)), (size, size))
return kernel
def gaussianBlur(img, kernel):
gaussian = np.zeros((img.shape[0]-2*d, img.shape[1]-2*d))
for y in range(d, img.shape[0]-d):
for x in range(d, img.shape[1]-d):
gaussian[y-d][x-d] = np.sum(np.multiply(img[y-d:y+d+1, x-d:x+d+1], kernel))
return gaussian
###Output
_____no_output_____
###Markdown
Load numpy array from npy file
###Code
matrices_smoothed = load("data/skymaps/NONE/"+str(SOURCES_NUMBER)+"/matrices_smoothed.npy")
dimx = (matrices[-1][0].data).shape[0]-2*d
dimy = (matrices[-1][0].data).shape[1]-2*d
del matrices
###Output
_____no_output_____
###Markdown
Smoothing each image and saving it in the list "matrices_smoothed".
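A side note: `gaussianBlur` above is a pure-Python double loop, which can be slow for thousands of images. As a sketch of a faster alternative (an assumption, not what was used to produce the saved arrays), `scipy.ndimage.gaussian_filter` gives a very similar smoothing; cropping the `d`-pixel border afterwards mimics the output size of the custom function:

```python
from scipy.ndimage import gaussian_filter

def gaussian_blur_fast(img, sigma=kSigma, border=d):
    smoothed = gaussian_filter(img.astype(float), sigma=sigma)
    return smoothed[border:-border, border:-border]  # same cropped shape as gaussianBlur
```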
###Code
kernel = gaussianKernel(kSize, kSigma)
matrices_smoothed = []
for i, mat in enumerate(matrices):
matrices_smoothed.append(gaussianBlur(mat[0].data, kernel))
dimx = (matrices[-1][0].data).shape[0]-2*d
dimy = (matrices[-1][0].data).shape[1]-2*d
del matrices
###Output
_____no_output_____
###Markdown
Save numpy array as npy file
###Code
save("data/skymaps/NONE/"+str(SOURCES_NUMBER)+"/matrices_smoothed.npy", asarray(matrices_smoothed))
###Output
_____no_output_____
###Markdown
Showing smoothed images.
###Code
print("Shape of smoothed image: ", matrices_smoothed[0].shape)
for mat in matrices_smoothed[:5]:
plt.matshow(mat, cmap='gray')
###Output
_____no_output_____
###Markdown
Plotting histogram of intensities Computing mean background level
###Code
mean_back = 0
for mat in matrices_smoothed:
mean_back += mat.mean()
mean_back /= len(matrices_smoothed)
thresh = 7 * mean_back
print("Mean background level: ", mean_back)
print("Thresholding at: ", thresh)
###Output
_____no_output_____
###Markdown
Binarizing images using previously defined threshold.
###Code
matrices_bin = []
for i, mat in enumerate(matrices_smoothed):
matrices_bin.append(np.zeros((dimx, dimy)))
matrices_bin[i][mat > thresh] = 1
del matrices_smoothed
for mat in matrices_bin[:5]:
plt.matshow(mat, cmap='gray')
blobs = dict()
mean_error_pix = []
mean_error_angle = []
data_taken = 0
for i, mat in enumerate(matrices_bin):
if i > 1000:
break
blobs[i] = []
cont = 0
blobs_doh = blob_doh(mat, max_sigma=20, threshold=.01)
for blob in blobs_doh:
y, x, r = blob
if r > 1:
cont += 1
blobs[i].append((int(x), int(y)))
if cont == SOURCES_NUMBER:
data_taken += 1
else:
blobs[i] = []
print("Percentage of images with correct amount of blobs: ", data_taken/len(matrices_bin)*100, "%")
print("Showing data:")
with open("data/dataset_none/blobs.csv", "a+", newline='') as write_obj:
csv_writer = writer(write_obj)
for key in blobs.keys():
blobs_key = blobs[key]
if len(blobs_key) == SOURCES_NUMBER:
to_be_saved = [key, SOURCES_NUMBER]
for blob in blobs_key:
to_be_saved.append(blob)
closer_blob_dist = 10000
sky_obs = wcs.pixel_to_world(blob[0], blob[1])
w_obs = blob[0]*2*np.pi
for i, blob_true in enumerate(blob_pos[key]):
sky_true = SkyCoord(float(tmp_rows[key][2*i+1]), float(tmp_rows[key][2*i+2]), unit='deg')
tmp_dist = (sky_true.separation(sky_obs)).degree
if tmp_dist < closer_blob_dist:
closer_blob_dist = tmp_dist
closer_blob = blob_true
mean_error_angle.append(closer_blob_dist)
mean_error_pix.append(np.sqrt((closer_blob[0]-blob[0])**2 + (closer_blob[1]-blob[1])**2))
print("Error in pixel: ", mean_error_pix[-1])
print("Error angle: ", mean_error_angle[-1])
print(key, "Coping skymap and blob ", blobs_key, " to 'dataset_none' folder.")
origin = src_folder+"/skymap_NONE_"+str(key)+".fits"
dest = "data/dataset_none/skymap_NONE_"+str(key)+"_"+str(SOURCES_NUMBER)+".fits"
#shutil.copyfile(origin, dest)
# Add contents of list as last row in the csv file
#csv_writer.writerow(to_be_saved)
print("Mean error pixel: ", np.mean(mean_error_pix))
print("Max error in pixel: ", np.max(mean_error_pix))
print("Min error in pixel: ", np.min(mean_error_pix))
print("Standard deviation error pixel: ", np.std(mean_error_pix))
print("Mean error angle: ", np.mean(mean_error_angle))
print("Max error in angle: ", np.max(mean_error_angle))
print("Min error in angle: ", np.min(mean_error_angle))
print("Standard deviation error angle: ", np.std(mean_error_angle))
###Output
_____no_output_____ |
GNN_notebooks_and_pretrained_models/Ensemble_Reptile_GraphNN_Regression_EquivariantMessagePassing.ipynb | ###Markdown
Notes: - some helper functions have been modified for this task- graph specific extra helper functions have been added- A normalization trick has been used to ease meta learning Imports and Installs
###Code
# Install required packages.
!pip install -q torch-scatter -f https://pytorch-geometric.com/whl/torch-1.10.0+cu113.html
!pip install -q torch-sparse -f https://pytorch-geometric.com/whl/torch-1.10.0+cu113.html
!pip install -q git+https://github.com/rusty1s/pytorch_geometric.git
import matplotlib.pyplot as plt
import numpy as np
# Required imports for neural network
import torch.nn as nn
import torch
from torch.autograd import Variable
import random
# For GNNs
from torch.nn import Linear
from torch.nn import BatchNorm1d
import torch.nn.functional as F
from torch_geometric.nn import GATv2Conv
from torch_geometric.nn import GraphConv
from torch_geometric.nn import GraphNorm
from torch_geometric.nn import global_mean_pool
from torch_geometric.nn import global_max_pool
import torch.nn as nn
import numpy as np
def reverse(x):
return np.format_float_scientific(np.sqrt(x*np.sqrt(6)/1.96)*1.96/10)
###Output
_____no_output_____
###Markdown
Data Loading and GenerationReptile for a regression task using GNNs. Some common GNN datasets are listed here: https://pytorch-geometric.readthedocs.io/en/latest/modules/datasets.html (e.g. torch_geometric.datasets.GNNBenchmarkDataset). We will use a regression dataset with 19 regression targets from the paper “MoleculeNet: A Benchmark for Molecular Machine Learning”. For this implementation we focus on regressing only the dipole moment.
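For reference, once the dataset below is loaded, the 19 targets of each molecule sit in `data.y` with shape `[1, 19]`; a small sketch of selecting only the dipole moment (index 0 in QM9's target ordering):

```python
# Sketch: pick out a single regression target (dipole moment mu is column 0 in QM9)
target_index = 0
sample = dataset[0]
y_mu = sample.y[0, target_index]  # scalar target for this molecule
```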
###Code
import torch
from torch_geometric.datasets import QM9
dataset = QM9(root='data/QM9')
# This function is based on https://pytorch-geometric.readthedocs.io/en/latest/notes/colabs.html
#Function to display properties of the dataset (it is not necessary for anything)
def display_graph_dataset_properties(dataset):
print()
print(f'Dataset: {dataset}:')
print('====================')
print(f'Number of graphs: {len(dataset)}')
print(f'Number of features: {dataset.num_features}')
print(f'Number of classes: {dataset.num_classes}')
    data = dataset[0] # Get the first graph object.
print()
print('Look at a sample graph of the dataset')
print(data)
print('=============================================================')
# Gather some statistics about the first graph.
print(f'Number of nodes: {data.num_nodes}')
print(f'Number of edges: {data.num_edges}')
print(f'Average node degree: {data.num_edges / data.num_nodes:.2f}')
print(f'Has isolated nodes: {data.has_isolated_nodes()}')
print(f'Has self-loops: {data.has_self_loops()}')
print(f'Is undirected: {data.is_undirected()}')
display_graph_dataset_properties(dataset)
# Transform the dataset into a list
dataset_list = []
for i in range(len(dataset)):
dataset_list.append(dataset[i])
#Shuffle the dataset list
random.shuffle(dataset_list)
#Split into train and test
GRAPH_TRAIN = dataset_list[:int(np.floor(len(dataset_list)*0.9))]
GRAPH_TEST = dataset_list[int(np.floor(len(dataset_list)*0.9)):]
y_values = []
for i in range(len(GRAPH_TRAIN)):
for j in range(19):
y_values.append(GRAPH_TRAIN[i].y[0][j].item())
for i in range(len(GRAPH_TEST)):
for j in range(19):
y_values.append(GRAPH_TEST[i].y[0][j].item())
mean = np.mean(np.array(y_values))
std = np.std(np.array(y_values))
for i in range(len(GRAPH_TRAIN)):
for j in range(19):
GRAPH_TRAIN[i].y[0][j]=(GRAPH_TRAIN[i].y[0][j]-mean)/std
for i in range(len(GRAPH_TEST)):
for j in range(19):
GRAPH_TEST[i].y[0][j]=(GRAPH_TEST[i].y[0][j]-mean)/std
###Output
_____no_output_____
###Markdown
Equivariant Message Passing Model (based on Haitz Sáez de Ocáriz Borde's coursework for L45)
###Code
!pip install -q torch-scatter -f https://pytorch-geometric.com/whl/torch-1.10.0+cu111.html
!pip install -q torch-geometric==2.0.3
from torch_geometric.nn import MessagePassing
#To calculate euclidean distance
import torch.nn as nn
pdist = nn.PairwiseDistance(p=2)
from torch.nn import Linear, ReLU, BatchNorm1d, Module, Sequential
from torch_scatter import scatter
from torch_scatter import scatter_mean
class MPNNLayer(MessagePassing):
def __init__(self, emb_dim=64, edge_dim=4, aggr='add'):
"""Message Passing Neural Network Layer
Args:
emb_dim: (int) - hidden dimension `d`
edge_dim: (int) - edge feature dimension `d_e`
aggr: (str) - aggregation function `\oplus` (sum/mean/max)
"""
# Set the aggregation function
super().__init__(aggr=aggr)
self.emb_dim = emb_dim
self.edge_dim = edge_dim
# MLP `\psi` for computing messages `m_ij`
# Implemented as a stack of Linear->BN->ReLU->Linear->BN->ReLU
# dims: (2d + d_e) -> d
self.mlp_msg = Sequential(
Linear(2*emb_dim + edge_dim, emb_dim), BatchNorm1d(emb_dim), ReLU(),
Linear(emb_dim, emb_dim), BatchNorm1d(emb_dim), ReLU()
)
# MLP `\phi` for computing updated node features `h_i^{l+1}`
# Implemented as a stack of Linear->BN->ReLU->Linear->BN->ReLU
# dims: 2d -> d
self.mlp_upd = Sequential(
Linear(2*emb_dim, emb_dim), BatchNorm1d(emb_dim), ReLU(),
Linear(emb_dim, emb_dim), BatchNorm1d(emb_dim), ReLU()
)
def forward(self, h, edge_index, edge_attr):
"""
The forward pass updates node features `h` via one round of message passing.
As our MPNNLayer class inherits from the PyG MessagePassing parent class,
we simply need to call the `propagate()` function which starts the
message passing procedure: `message()` -> `aggregate()` -> `update()`.
The MessagePassing class handles most of the logic for the implementation.
To build custom GNNs, we only need to define our own `message()`,
`aggregate()`, and `update()` functions (defined subsequently).
Args:
h: (n, d) - initial node features
edge_index: (e, 2) - pairs of edges (i, j)
edge_attr: (e, d_e) - edge features
Returns:
out: (n, d) - updated node features
"""
out = self.propagate(edge_index, h=h, edge_attr=edge_attr)
return out
def message(self, h_i, h_j, edge_attr):
"""Step (1) Message
The `message()` function constructs messages from source nodes j
to destination nodes i for each edge (i, j) in `edge_index`.
The arguments can be a bit tricky to understand: `message()` can take
any arguments that were initially passed to `propagate`. Additionally,
we can differentiate destination nodes and source nodes by appending
`_i` or `_j` to the variable name, e.g. for the node features `h`, we
can use `h_i` and `h_j`.
This part is critical to understand as the `message()` function
constructs messages for each edge in the graph. The indexing of the
original node features `h` (or other node variables) is handled under
the hood by PyG.
Args:
h_i: (e, d) - destination node features
h_j: (e, d) - source node features
edge_attr: (e, d_e) - edge features
Returns:
msg: (e, d) - messages `m_ij` passed through MLP `\psi`
"""
msg = torch.cat([h_i, h_j, edge_attr], dim=-1)
return self.mlp_msg(msg)
def aggregate(self, inputs, index):
"""Step (2) Aggregate
The `aggregate` function aggregates the messages from neighboring nodes,
according to the chosen aggregation function ('sum' by default).
Args:
inputs: (e, d) - messages `m_ij` from destination to source nodes
index: (e, 1) - list of source nodes for each edge/message in `input`
Returns:
aggr_out: (n, d) - aggregated messages `m_i`
"""
return scatter(inputs, index, dim=self.node_dim, reduce=self.aggr)
def update(self, aggr_out, h):
"""
Step (3) Update
The `update()` function computes the final node features by combining the
aggregated messages with the initial node features.
`update()` takes the first argument `aggr_out`, the result of `aggregate()`,
as well as any optional arguments that were initially passed to
`propagate()`. E.g. in this case, we additionally pass `h`.
Args:
aggr_out: (n, d) - aggregated messages `m_i`
h: (n, d) - initial node features
Returns:
upd_out: (n, d) - updated node features passed through MLP `\phi`
"""
upd_out = torch.cat([h, aggr_out], dim=-1)
return self.mlp_upd(upd_out)
def __repr__(self) -> str:
return (f'{self.__class__.__name__}(emb_dim={self.emb_dim}, aggr={self.aggr})')
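# A tiny worked example (not from the original notebook) of what `scatter` does inside
# `aggregate()` above, assuming `scatter` comes from torch_scatter:
#   messages = [[1., 1.], [2., 2.], [4., 4.]]   # one row per edge
#   index    = [0, 0, 1]                        # destination node of each edge
#   scatter(messages, index, dim=0, reduce='add')
#   -> [[3., 3.],    # node 0 sums the messages of its two incoming edges
#       [4., 4.]]    # node 1 receives only the third message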
class MPNNModel(Module):
def __init__(self, num_layers=3, emb_dim=64, in_dim=11, edge_dim=4, out_dim=1):
"""Message Passing Neural Network model for graph property prediction
Args:
num_layers: (int) - number of message passing layers `L`
emb_dim: (int) - hidden dimension `d`
in_dim: (int) - initial node feature dimension `d_n`
edge_dim: (int) - edge feature dimension `d_e`
out_dim: (int) - output dimension (fixed to 1)
"""
super().__init__()
# Linear projection for initial node features
# dim: d_n -> d
self.lin_in = Linear(in_dim, emb_dim)
# Stack of MPNN layers
self.convs = torch.nn.ModuleList()
for layer in range(num_layers):
self.convs.append(MPNNLayer(emb_dim, edge_dim, aggr='add'))
        # Global pooling/readout function `R` (max pooling)
        # PyG handles the underlying logic via `global_max_pool()`
self.pool = global_max_pool
# Linear prediction head
# dim: d -> out_dim
self.lin_pred = Linear(emb_dim, out_dim)
def forward(self, data):
"""
Args:
data: (PyG.Data) - batch of PyG graphs
Returns:
out: (batch_size, out_dim) - prediction for each graph
"""
h = self.lin_in(data.x) # (n, d_n) -> (n, d)
for conv in self.convs:
h = h + conv(h, data.edge_index, data.edge_attr) # (n, d) -> (n, d)
# Note that we add a residual connection after each MPNN layer
h_graph = self.pool(h, data.batch) # (n, d) -> (batch_size, d)
out = self.lin_pred(h_graph) # (batch_size, d) -> (batch_size, 1)
return out.view(-1)
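# --- Hypothetical smoke test (not part of the original notebook) ---
# A minimal sketch of how MPNNModel consumes a mini-batch of PyG graphs. All tensors
# below are random dummies; the shapes follow the docstrings above (d_n = 11 node
# features, d_e = 4 edge features, edge_index stored as (2, e)), and torch /
# torch_geometric are assumed to be available as in the rest of this notebook.
from torch_geometric.data import Data, Batch

def _random_graph(num_nodes=5, num_edges=8):
    return Data(
        x=torch.rand(num_nodes, 11),                             # node features
        edge_index=torch.randint(0, num_nodes, (2, num_edges)),  # random connectivity
        edge_attr=torch.rand(num_edges, 4),                      # edge features
        pos=torch.randn(num_nodes, 3),                           # 3D coordinates (used later by FinalMPNNModel)
        y=torch.rand(1, 19),                                     # 19 regression targets per graph, as in the batches printed below
    )

_demo_batch = Batch.from_data_list([_random_graph(), _random_graph()])
_demo_model = MPNNModel(num_layers=2, emb_dim=32)
print(_demo_model(_demo_batch).shape)  # -> torch.Size([2]): one prediction per graph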
class EquivariantMPNNLayer(MessagePassing):
def __init__(self, emb_dim=64, edge_dim=4, aggr='add'):
"""Message Passing Neural Network Layer
This layer is invariant to 3D rotations and translations.
Args:
emb_dim: (int) - hidden dimension `d`
edge_dim: (int) - edge feature dimension `d_e`
aggr: (str) - aggregation function `\oplus` (sum/mean/max)
"""
# Set the aggregation function
super().__init__(aggr=aggr)
self.emb_dim = emb_dim
self.edge_dim = edge_dim
# MLP `\psi` for computing messages `m_ij`
# dims: (2d+ d_e+1) -> d
# 2*d --> embedding for each node
# d_e --> edge dimension
        # +1 --> squared distance between the two nodes in 3D
self.mlp_msg = Sequential(
Linear(2*emb_dim + edge_dim+1, emb_dim), BatchNorm1d(emb_dim), ReLU(),
Linear(emb_dim, emb_dim), BatchNorm1d(emb_dim), ReLU()
)
# ==========================================
# MLP `\phi` for computing updated node features `h_i^{l+1}`
# dims: 2d -> d
self.mlp_upd = Sequential(
Linear(2*emb_dim, emb_dim), BatchNorm1d(emb_dim), ReLU(),
Linear(emb_dim, emb_dim), BatchNorm1d(emb_dim), ReLU()
)
self.msg_to_weight = Sequential(
Linear(emb_dim, emb_dim), BatchNorm1d(emb_dim), ReLU(),
Linear(emb_dim, 1), ReLU()
)
def forward(self, h, pos, edge_index, edge_attr):
"""
The forward pass updates node features `h` via one round of message passing.
Args:
h: (n, d) - initial node features
pos: (n, 3) - initial node coordinates
            edge_index: (2, e) - pairs of node indices (i, j), one column per edge
edge_attr: (e, d_e) - edge features
Returns:
out: [(n, d),(n,3)] - updated node features and pos
"""
out, new_pos = self.propagate(edge_index, h=h, edge_attr=edge_attr, pos = pos)
return (out, new_pos)
# ==========================================
def message(self, h_i, h_j, edge_attr, pos_i, pos_j):
"""The `message()` function constructs messages from source nodes j
to destination nodes i for each edge (i, j) in `edge_index`.
Args:
h_i: (e, d) - destination node features
h_j: (e, d) - source node features
pos_i: (e, 3) - destination node positions
pos_j: (e, 3) - source node positions
edge_attr: (e, d_e) - edge features
Returns:
msg: [(e, d),(e,3)] - messages m_ij passed through MLP \psi and relative difference
"""
dist = pdist(pos_i, pos_j).pow(2).reshape(pos_i.shape[0],1)
relative_difference = pos_i-pos_j
msg = torch.cat([h_i, h_j, edge_attr,dist], dim=-1)
return (self.mlp_msg(msg),relative_difference)
# ==========================================
def aggregate(self, inputs, index):
"""The `aggregate` function aggregates the messages from neighboring nodes,
according to the chosen aggregation function ('sum' by default).
Args:
            inputs: [(e, d), (e, 3)] - messages `m_ij` from source to destination nodes and relative position differences
            index: (e,) - destination node index for each edge/message in `inputs`
        Returns:
            aggr_out: [(n, d), (n, 3)] - aggregated messages `m_i` and aggregated, message-weighted position updates
"""
inputs_h,relative_difference=inputs
return (scatter(inputs_h, index, dim=self.node_dim, reduce=self.aggr),scatter_mean(self.msg_to_weight(inputs_h)*relative_difference, index, dim=self.node_dim))
def update(self, aggr_out, h,pos):
"""The `update()` function computes the final node features by combining the
aggregated messages with the initial node features.
Args:
            aggr_out: [(n, d), (n, 3)] - aggregated messages `m_i` and aggregated, message-weighted position updates
h: (n, d) - initial node features
Returns:
upd_out: [(n, d),(n,3)] - updated node features passed through MLP `\phi` and pos features
"""
aggr_out1,aggr_out2 = aggr_out
upd_out = torch.cat([h, aggr_out1], dim=-1)
pos_out = pos + aggr_out2
return (self.mlp_upd(upd_out),pos_out)
def __repr__(self) -> str:
return (f'{self.__class__.__name__}(emb_dim={self.emb_dim}, aggr={self.aggr})')
class FinalMPNNModel(MPNNModel):
def __init__(self, num_layers=3, emb_dim=64, in_dim=11, edge_dim=4, out_dim=1):
"""Message Passing Neural Network model for graph property prediction
This model uses both node features and coordinates as inputs, and
is invariant to 3D rotations and translations (the constituent MPNN layers
are equivariant to 3D rotations and translations).
Args:
num_layers: (int) - number of message passing layers `L`
emb_dim: (int) - hidden dimension `d`
in_dim: (int) - initial node feature dimension `d_n`
edge_dim: (int) - edge feature dimension `d_e`
out_dim: (int) - output dimension (fixed to 1)
"""
super().__init__()
# Linear projection for initial node features
# dim: d_n -> d
self.lin_in = Linear(in_dim, emb_dim)
# Stack of MPNN layers
self.convs = torch.nn.ModuleList()
for layer in range(num_layers):
self.convs.append(EquivariantMPNNLayer(emb_dim, edge_dim, aggr='add'))
# Global pooling/readout function `R` (mean pooling)
# PyG handles the underlying logic via `global_mean_pool()`
self.pool = global_mean_pool
# Linear prediction head
# dim: d -> out_dim
self.lin_pred = Linear(emb_dim, out_dim)
self.sigmoid = nn.Sigmoid()
def forward(self, data):
"""
Args:
data: (PyG.Data) - batch of PyG graphs
Returns:
out: (batch_size, out_dim) - prediction for each graph
"""
h = self.lin_in(data.x) # (n, d_n) -> (n, d)
pos = data.pos
for conv in self.convs:
# Message passing layer
h_update, pos_update = conv(h, pos, data.edge_index, data.edge_attr)
# Update node features
h = h + h_update # (n, d) -> (n, d)
# Note that we add a residual connection after each MPNN layer
# Update node coordinates
pos = pos_update # (n, 3) -> (n, 3)
h_graph = self.pool(h, data.batch) # (n, d) -> (batch_size, d)
# if self.normalization:
# out = self.sigmoid(self.lin_pred(h_graph)) # (batch_size, d) -> (batch_size, 1)
# else:
# out = self.lin_pred(h_graph) # (batch_size, d) -> (batch_size, 1)
out = self.lin_pred(h_graph)
return out.view(-1)
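# --- Hypothetical invariance check (not part of the original notebook) ---
# A small sketch of how the claimed invariance could be verified: rotating and
# translating every node position should leave the predictions unchanged, since
# positions only reach the node features through pairwise distances. It reuses the
# `_random_graph` helper from the sketch above and assumes the distance helper
# `pdist` used in `EquivariantMPNNLayer.message()` is defined earlier in the notebook.
def _check_invariance(atol=1e-4):
    model = FinalMPNNModel(num_layers=2, emb_dim=32).eval()
    batch = Batch.from_data_list([_random_graph(), _random_graph()])
    Q, _ = torch.linalg.qr(torch.randn(3, 3))   # random orthogonal matrix
    if torch.det(Q) < 0:                        # flip a column so det = +1 (a proper rotation)
        Q[:, 0] = -Q[:, 0]
    t = torch.randn(3)                          # random translation
    rotated = batch.clone()
    rotated.pos = batch.pos @ Q.T + t
    with torch.no_grad():
        assert torch.allclose(model(batch), model(rotated), atol=atol)

# _check_invariance()  # uncomment to run the check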
###Output
_____no_output_____
###Markdown
Simplistic GNN Model
###Code
# class GNN(torch.nn.Module):
# def __init__(self, input_dim=11, hidden_dim=200, output_dim=1):
# super(GNN, self).__init__()
# #Hidden Layers
# self.hidden1 = GraphConv(input_dim, hidden_dim)
# self.hidden2 = GraphConv(hidden_dim, hidden_dim)
# self.hidden3 = GraphConv(hidden_dim, output_dim)
# self.norm = GraphNorm(hidden_dim)
# #Activation Function
# self.relu = nn.ReLU()
# def forward(self, input_x, edge_index, batch):
# #Standard forward
# x = self.hidden1(input_x,edge_index)
# x = self.norm(x)
# x = self.relu(x)
# x = self.hidden2(x,edge_index)
# x = self.norm(x)
# x = self.relu(x)
# x = self.hidden3(x,edge_index)
# #Global mean pool across batches
# x = global_mean_pool(x, batch)
# return x
###Output
_____no_output_____
###Markdown
Helper functions
###Code
# The mean squared error (MSE) is used to evaluate the difference between prediction and ground truth
criterion = nn.MSELoss()
def copy_existing_model(model):
# Function to copy an existing model
# We initialize a new model
new_model = FinalMPNNModel()
# Copy the previous model's parameters into the new model
new_model.load_state_dict(model.state_dict())
return new_model
def initialization_to_store_meta_losses():
# This function creates lists to store the meta losses
global store_train_loss_meta; store_train_loss_meta = []
global store_test_loss_meta; store_test_loss_meta = []
def test_set_validation(model,new_model,graph,lr_inner,k,store_test_loss_meta,task):
    # This function does not affect the main algorithm; it is only used to evaluate the adapted model on held-out data
new_model = training(model, graph, lr_inner, k,task)
# Obtain the loss
loss = evaluation(new_model, graph,task)
# Store loss
store_test_loss_meta.append(loss)
def train_set_evaluation(new_model,graph,store_train_loss_meta,task):
loss = evaluation(new_model, graph,task)
store_train_loss_meta.append(loss)
def print_losses(epoch,store_train_loss_meta,store_test_loss_meta,printing_step=1000):
if epoch % printing_step == 0:
        print(f'Epoch : {epoch}, Average Train Meta Loss : {np.mean(store_train_loss_meta)}, Average Test Meta Loss : {np.mean(store_test_loss_meta)}')
# This follows the Reptile update rule from the paper: we store the difference between the current and the task-adapted parameters as a gradient and let the optimizer apply the update, rather than updating the weights by hand
def reptile_parameter_update(model,new_model):
# Zip models for the loop
zip_models = zip(model.parameters(), new_model.parameters())
for parameter, new_parameter in zip_models:
if parameter.grad is None:
            parameter.grad = torch.zeros_like(parameter)
# Here we are adding the gradient that will later be used by the optimizer
parameter.grad.data.add_(parameter.data - new_parameter.data)
# Define commands in order needed for the metaupdate
# Note that if we change the order it doesn't behave the same
def metaoptimizer_update(metaoptimizer):
# Take step
metaoptimizer.step()
# Reset gradients
metaoptimizer.zero_grad()
def metaupdate(model,new_model,metaoptimizer):
# Combine the two previous functions into a single metaupdate function
# First we calculate the gradients
reptile_parameter_update(model,new_model)
# Use those gradients in the optimizer
metaoptimizer_update(metaoptimizer)
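# In equations (a sketch matching the two functions above): after adapting a copy W of
# the meta-parameters theta on one task, `reptile_parameter_update` stores
#     theta.grad = theta - W
# so that a plain SGD meta-optimizer would perform
#     theta <- theta - lr_meta * (theta - W) = theta + lr_meta * (W - theta),
# i.e. move the meta-parameters a small step towards the task-adapted weights.
# (Here Adam is used as the meta-optimizer, which rescales that step adaptively.)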
def evaluation(new_model, graph, task, item = True):
# Make model prediction
prediction = new_model(graph)
    label = graph.y[:, task]  # shape (batch,), matching the prediction, to avoid silent broadcasting in the MSE
# Get loss
if item == True: #Depending on whether we need to return the loss value for storing or for backprop
loss = criterion(prediction,label).item()
else:
loss = criterion(prediction,label)
return loss
def training(model, graph, lr_k, k,task):
# Create new model which we will train on
new_model = copy_existing_model(model)
# Define new optimizer
koptimizer = torch.optim.SGD(new_model.parameters(), lr=lr_k)
# Update the model multiple times, note that k>1 (do not confuse k with K)
for i in range(k):
# Reset optimizer
koptimizer.zero_grad()
# Evaluate the model
loss = evaluation(new_model, graph, task, item = False)
# Backpropagate
loss.backward()
koptimizer.step()
return new_model
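# The function above is the Reptile inner loop: starting from a copy of the current
# meta-parameters, it takes k plain SGD steps on one task's mini-batch,
#     W <- W - lr_k * grad_W L_task(W),
# and returns the task-adapted model that the meta-update then moves towards.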
###Output
_____no_output_____
###Markdown
Additional GNN Helper Functions Additional helper functions to handle mini-batching, based on coursework by Haitz Sáez de Ocáriz for L45 Practical 1. The code was partially given in the practical and we had to fill it in, so this is based on my solution, with some further modifications for our implementation: for message passing we include coordinate features and edge attributes, in addition to node features, labels, and the adjacency matrix that describes node connectivity.
###Code
class Graph(object):
def __init__(self, edge_index, x, y,edge_attr,pos):
""" Graph structure
for a mini-batch it will store a big (sparse) graph
representing the entire batch
Args:
x: node features [num_nodes x num_feats]
y: graph labels [num_graphs]
edge_index: list of edges [2 x num_edges]
"""
self.edge_index = edge_index
self.x = x.to(torch.float32)
self.y = y
self.num_nodes = self.x.shape[0]
self.edge_attr = edge_attr
self.pos = pos
#ignore this for now, it will be useful for batching
def set_batch(self, batch):
""" list of ints that maps each node to the graph it belongs to
e.g. for batch = [0,0,0,1,1,1,1]: the first 3 nodes belong to graph_0 while
the last 4 belong to graph_1
"""
self.batch = batch
    # this function returns a sparse adjacency tensor
def get_adjacency_matrix(self):
""" from the list of edges create
a num_nodes x num_nodes sparse adjacency matrix
"""
return torch.sparse.LongTensor(self.edge_index,
# we work with a binary adj containing 1 if an edge exist
torch.ones((self.edge_index.shape[1])),
torch.Size((self.num_nodes, self.num_nodes))
)
def create_mini_batch(graph_list):
""" Built a sparse graph from a batch of graphs
Args:
graph_list: list of Graph objects in a batch
Returns:
a big (sparse) Graph representing the entire batch
"""
#insert first graph into the structure
batch_edge_index = graph_list[0].edge_index
batch_x = graph_list[0].x
batch_y = graph_list[0].y
batch_edge_attr = graph_list[0].edge_attr
batch_pos = graph_list[0].pos
batch_batch = torch.zeros((graph_list[0].num_nodes), dtype=torch.int64)
# ============ YOUR CODE HERE =============
# you may need additional variables
num_nodes_added= graph_list[0].num_nodes
# ==========================================
#append the rest of the graphs to the structure
for idx, graph in enumerate(graph_list[1:]):
# ============ YOUR CODE HERE =============
# concat the features
batch_x = torch.cat((batch_x,graph.x))
# concat the labels
batch_y = torch.cat((batch_y,graph.y))
# concat the coords
batch_pos = torch.cat((batch_pos,graph.pos))
# concat the adjacency matrix as a block diagonal matrix
batch_edge_index = torch.cat((batch_edge_index, torch.add(graph.edge_index, num_nodes_added)), dim=1)
batch_edge_attr = torch.cat((batch_edge_attr, graph.edge_attr))
num_nodes_added += graph.num_nodes
# ==========================================
# ============ YOUR CODE HERE =============
# create the array of indexes mapping nodes in the batch-graph
# to the graph they belong to
# specify the mapping between the new nodes and the graph they belong to (idx+1)
batch_batch = torch.cat((batch_batch, torch.full((graph.num_nodes,), idx + 1)))
# ==========================================
#create the big sparse graph
batch_graph = Graph(batch_edge_index, batch_x, batch_y, batch_edge_attr,batch_pos)
#attach the index array to the Graph structure
batch_graph.set_batch(batch_batch)
return batch_graph
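# A tiny worked example (not from the original notebook): batching a 3-node graph with a
# 4-node graph stacks the node features into a (7, d_n) matrix, shifts the second graph's
# edge indices by num_nodes_added = 3 (so an edge (0, 2) becomes (3, 5)), which makes the
# combined adjacency matrix block-diagonal, and sets batch = [0,0,0,1,1,1,1] to record
# which graph each node came from (this is what the global pooling uses later).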
from scipy.linalg import block_diag
import matplotlib.cm as cm
import networkx as nx
def get_color_coded_str(i, color):
return "\033[3{}m{}\033[0m".format(int(color), int(i))
def print_color_numpy(map, list_graphs):
""" print matrix map in color according to list_graphs
"""
list_blocks = []
for i,graph in enumerate(list_graphs):
block_i = (i+1)*np.ones((graph.num_nodes,graph.num_nodes))
list_blocks += [block_i]
block_color = block_diag(*list_blocks)
map_modified = np.vectorize(get_color_coded_str)(map, block_color)
print("\n".join([" ".join(["{}"]*map.shape[0])]*map.shape[1]).format(*[x for y in map_modified.tolist() for x in y]))
def draw_one_graph(ax, edges, label=None, node_emb=None, layout=None, special_color=False):
"""draw a graph with networkx based on adjacency matrix (edges)
graph labels could be displayed as a title for each graph
node_emb could be displayed in colors
"""
graph = nx.Graph()
edges = zip(edges[0], edges[1])
graph.add_edges_from(edges)
node_pos = layout(graph)
    #add colors according to the node embedding
if (node_emb is not None) or special_color:
color_map = []
node_list = [node[0] for node in graph.nodes(data = True)]
for i,node in enumerate(node_list):
#just ignore this branch
if special_color:
if len(node_list) == 3:
crt_color = (1,0,0)
elif len(node_list) == 5:
crt_color = (0,1,0)
elif len(node_list) == 4:
crt_color = (1,1,0)
else:
special_list = [(1,0,0)] * 3 + [(0,1,0)] * 5 + [(1,1,0)] * 4
crt_color = special_list[i]
else:
crt_node_emb = node_emb[node]
                #map float number (node embedding) to a color
crt_color = cm.gist_rainbow(crt_node_emb, bytes=True)
crt_color = (crt_color[0]/255.0, crt_color[1]/255.0, crt_color[2]/255.0, crt_color[3]/255.0)
color_map.append(crt_color)
nx.draw_networkx_nodes(graph,node_pos, node_color=color_map,
nodelist = node_list, ax=ax)
nx.draw_networkx_edges(graph, node_pos, ax=ax)
nx.draw_networkx_labels(graph,node_pos, ax=ax)
else:
nx.draw_networkx(graph, node_pos, ax=ax)
def gallery(graphs, labels=None, node_emb=None, special_color=False, max_graphs=4, max_fig_size=(40, 10), layout=nx.layout.kamada_kawai_layout):
''' Draw multiple graphs as a gallery
Args:
graphs: torch_geometrics.dataset object/ List of Graph objects
labels: num_graphs
node_emb: num_graphs* [num_nodes x num_ch]
max_graphs: maximum graphs display
'''
num_graphs = min(len(graphs), max_graphs)
ff, axes = plt.subplots(1, num_graphs,
figsize=max_fig_size,
subplot_kw={'xticks': [], 'yticks': []})
if num_graphs == 1:
axes = [axes]
if node_emb is None:
node_emb = num_graphs*[None]
if labels is None:
labels = num_graphs * [" "]
for i in range(num_graphs):
draw_one_graph(axes[i], graphs[i].edge_index.numpy(), labels[i], node_emb[i], layout, special_color)
if labels[i] != " ":
axes[i].set_title(f"Target: {labels[i]}", fontsize=28)
axes[i].set_axis_off()
plt.show()
# 3 random custom-designed graphs for visualisations
graph1 = Graph(x=torch.rand((3,32)),
y=torch.rand((1)),
edge_index=torch.tensor([[0,0,0,1,1,1,2,2,2],[0,1,2,0,1,2,0,1,2]]),
edge_attr = torch.tensor([[0,0,0,1,1,1,2,2,2],[0,1,2,0,1,2,0,1,2]]),
pos = torch.rand((3,3))
)
graph1 = random.sample(GRAPH_TRAIN, 1)[0]
graph2 = random.sample(GRAPH_TRAIN, 1)[0]
graph3 = random.sample(GRAPH_TRAIN, 1)[0]
# graph2 = Graph(x=torch.rand((5,32)),
# y=torch.rand((1)),
# edge_index=torch.tensor([[0,0,0,0,0,1,1,1,2,1,2,3,4], [0,1,2,3,4,2,3,4,4,0,0,0,0]]))
# graph3 = Graph(x=torch.rand((4,32)),
# y=torch.rand((1)),
# edge_index=torch.tensor([[0,1,2,3],[1,2,3,0]]))
list_graphs = [graph1, graph2, graph3]
# create a mini-batch from these 3 graphs
batch_sample = create_mini_batch(list_graphs)
# show statistics about the new graph built from this batch of graphs
print(f"Batch number_of_nodes: {batch_sample.num_nodes}")
print(f"Batch features shape: {batch_sample.x.shape}")
print(f"Batch labels shape: {batch_sample.y.shape}")
print(f"Batch adjacency: ")
print_color_numpy(batch_sample.get_adjacency_matrix().to_dense().numpy(), list_graphs)
# gallery([graph1, graph1, graph1, batch_sample], max_fig_size=(20,6), special_color=True)
# print(f"And we also have access to which graph each node belongs to {batch_sample.batch}\n")
###Output
Batch number_of_nodes: 61
Batch features shape: torch.Size([61, 11])
Batch labels shape: torch.Size([3, 19])
Batch adjacency: 
(colour-coded 61x61 block-diagonal adjacency matrix, one coloured block per graph in the batch; ANSI-coloured printout omitted)
###Markdown
Reptile
###Code
#Define important variables
epochs = int(15000) # number of epochs
lr_meta=0.001 # Learning rate for meta model (outer loop)
printing_step=20 # how many epochs should we wait to print the loss
lr_k=0.0005 # Internal learning rate
k=5 # Number of internal updates for each task
K = 10 #Number of samples per task
number_of_tasks = 5 #number of tasks for metalearning (max is 19), using 5 converges relatively fast, otherwise it is a bit of a pain
# Initializations
initialization_to_store_meta_losses()
model = FinalMPNNModel()
metaoptimizer = torch.optim.Adam(model.parameters(), lr=lr_meta)
# Training loop
for epoch in range(epochs):
    # Sample a task at random from the first 18 regression targets (task 18 is held out for the few-shot evaluation below); the task only changes once per epoch, so it is only updated here
    task = random.randint(0,17) # 'task' must be passed to the evaluation function: graph.y holds all regression targets and we only use the one selected by task
# Empty list
graph = []
for i in range(K): #Store graphs
graph.append(random.sample(GRAPH_TRAIN, 1)[0])
# Create graph mini batch from list
graph = create_mini_batch(graph)
# Update model predefined number of times based on k
new_model = training(model, graph, lr_k, k,task)
    # Evaluate the loss on the training data
train_set_evaluation(new_model,graph,store_train_loss_meta,task)
#Meta-update --> Get gradient for meta loop and update
metaupdate(model,new_model,metaoptimizer)
    # Evaluate the loss on the test data
# Note that we need to sample the graph from the test data
graph = []
for i in range(K): #Store graphs
graph.append(random.sample(GRAPH_TEST, 1)[0])
graph = create_mini_batch(graph)
test_set_validation(model,new_model,graph,lr_k,k,store_test_loss_meta,task)
# Print losses every 'printing_step' epochs
print_losses(epoch,store_train_loss_meta,store_test_loss_meta,printing_step)
###Output
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:520: UserWarning: Using a target size (torch.Size([10, 1])) that is different to the input size (torch.Size([10])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
###Markdown
Few Shot learning with new meta-model
###Code
model = torch.load('emp_qm9_1')
###Output
_____no_output_____
###Markdown
The model performs well at few-shot learning
###Code
graph_type = "equiv"
num_evals = 100
all_losses = []
for test_eval in range(num_evals):
task = 18
graph = []
for i in range(K): #Store graphs
graph.append(random.sample(GRAPH_TEST, 1)[0])
graph = create_mini_batch(graph)
k_shot_updates = 6
initialization_to_store_meta_losses()
for shots in range(k_shot_updates):
new_model = training(model, graph, lr_k,shots, task)
train_set_evaluation(new_model,graph,store_train_loss_meta, task)
all_losses.append(np.array(store_train_loss_meta))
# plt.plot(store_train_loss_meta,label = 'Loss')
# plt.legend()
# plt.xlabel('k shots')
all_losses = np.array(all_losses)
np.save(f"reptile_graph_{graph_type}_k.npy", all_losses)
fig, ax = plt.subplots(figsize=(8,4))
mean_loss = np.mean(all_losses, axis=0)
# confidence interval plotting help from: https://stackoverflow.com/questions/59747313/how-to-plot-confidence-interval-in-python
y = mean_loss
x = list(range(len(mean_loss)))
ci = 1.96 * np.std(all_losses, axis=0) / np.sqrt(all_losses.shape[0])  # 95% CI of the mean over the num_evals runs
ax_size=16
title_size=18
ax.plot(x, y, linewidth=3, label=f"Mean Loss")
# to avoid having MSE < 0
truncated_error = np.clip(y-ci, a_min=0, a_max=None)
ax.fill_between(x, truncated_error, (y+ci), alpha=.5,label=f"95% CI")
ax.set_xlabel("Gradient Steps",fontsize=ax_size)
ax.set_ylabel("Mean Squared Error (MSE)",fontsize=ax_size)
ax.set_title("Graph Regression: k-Shot Evaluation",fontsize=title_size)
ax.legend()#loc="upper right")
plt.savefig(f"graph_reg_{graph_type}_kshot.png")
np.save('emp_truncated_loss.npy',truncated_error)
np.save('emp_ci.npy',ci)
np.save('emp_y.npy',y)
np.save('emp_x.npy',x)
np.save('emp_mean_loss.npy',mean_loss)
###Output
_____no_output_____
###Markdown
Ensembling
###Code
#Load models
model1 = torch.load('emp_qm9_1')
model2 = torch.load('emp_qm9_2')
model3 = torch.load('emp_qm9_3')
model4 = torch.load('emp_qm9_4')
model5 = torch.load('emp_qm9_5')
model6 = torch.load('emp_qm9_6')
class Ensemble_method1(nn.Module):
def __init__(self, model1,model2,model3,model4,model5,model6):
super(Ensemble_method1, self).__init__()
self.pre_trained1 = model1
self.pre_trained2 = model2
self.pre_trained3 = model3
self.pre_trained4 = model4
self.pre_trained5 = model5
self.pre_trained6 = model6
def forward(self, x):
out_1 = self.pre_trained1(x)
out_2 = self.pre_trained2(x)
out_3 = self.pre_trained3(x)
out_4 = self.pre_trained4(x)
out_5 = self.pre_trained5(x)
out_6 = self.pre_trained6(x)
return (out_1+out_2+out_3+out_4+out_5+out_6)/6
model_ensemble1=Ensemble_method1(model1,model2,model3,model4,model5,model6)
def training_ensemble1(model, graph, lr_k, k,task):
# Create new model which we will train on
new_model = copy_existing_model_ensemble1(model)
# Define new optimizer
koptimizer = torch.optim.SGD(new_model.parameters(), lr=lr_k)
# Update the model multiple times, note that k>1 (do not confuse k with K)
for i in range(k):
# Reset optimizer
koptimizer.zero_grad()
# Evaluate the model
loss = evaluation(new_model, graph, task, item = False)
# Backpropagate
loss.backward()
koptimizer.step()
return new_model
def copy_existing_model_ensemble1(model):
# Function to copy an existing model
# We initialize a new model
new_model = Ensemble_method1(model1,model2,model3,model4,model5,model6)
# Copy the previous model's parameters into the new model
new_model.load_state_dict(model.state_dict())
return new_model
graph_type = "equiv"
num_evals = 100
all_losses = []
for test_eval in range(num_evals):
task = 18
graph = []
for i in range(K): #Store graphs
graph.append(random.sample(GRAPH_TEST, 1)[0])
graph = create_mini_batch(graph)
k_shot_updates = 6
initialization_to_store_meta_losses()
for shots in range(k_shot_updates):
new_model = training_ensemble1(model_ensemble1, graph, lr_k,shots, task)
train_set_evaluation(new_model,graph,store_train_loss_meta, task)
all_losses.append(np.array(store_train_loss_meta))
# plt.plot(store_train_loss_meta,label = 'Loss')
# plt.legend()
# plt.xlabel('k shots')
all_losses = np.array(all_losses)
np.save(f"reptile_graph_{graph_type}_k.npy", all_losses)
fig, ax = plt.subplots(figsize=(8,4))
mean_loss = np.mean(all_losses, axis=0)
# confidence interval plotting help from: https://stackoverflow.com/questions/59747313/how-to-plot-confidence-interval-in-python
y = mean_loss
x = list(range(len(mean_loss)))
ci = 1.96 * np.std(all_losses, axis=0) / np.sqrt(all_losses.shape[0])  # 95% CI half-width: std (not variance) over sqrt(number of evaluation runs)
ax_size=16
title_size=18
ax.plot(x, y, linewidth=3, label=f"Mean Loss")
# to avoid having MSE < 0
truncated_error = np.clip(y-ci, a_min=0, a_max=None)
ax.fill_between(x, truncated_error, (y+ci), alpha=.5,label=f"95% CI")
ax.set_xlabel("Gradient Steps",fontsize=ax_size)
ax.set_ylabel("Mean Squared Error (MSE)",fontsize=ax_size)
ax.set_title("Graph Regression: k-Shot Evaluation",fontsize=title_size)
ax.legend()#loc="upper right")
plt.savefig(f"graph_reg_{graph_type}_kshot.png")
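# Ensemble method 2: instead of a fixed average, stack the six model outputs and learn
# a linear combination of them (nn.Linear(6, 1)), initialized close to the uniform average.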
class Ensemble_method2(nn.Module):
def __init__(self, model1,model2,model3,model4,model5,model6):
super(Ensemble_method2, self).__init__()
self.pre_trained1 = model1
self.pre_trained2 = model2
self.pre_trained3 = model3
self.pre_trained4 = model4
self.pre_trained5 = model5
self.pre_trained6 = model6
self.linear = nn.Linear(6,1)
def forward(self, x):
out_1 = self.pre_trained1(x)[:,None]
out_2 = self.pre_trained2(x)[:,None]
out_3 = self.pre_trained3(x)[:,None]
out_4 = self.pre_trained4(x)[:,None]
out_5 = self.pre_trained5(x)[:,None]
out_6 = self.pre_trained6(x)[:,None]
out_combined = torch.cat((out_1,out_2,out_3,out_4,out_5,out_6),dim=-1)
out_final = self.linear(out_combined)
return out_final
model_ensemble2=Ensemble_method2(model1,model2,model3,model4,model5,model6)
# Initialize the linear layer close to a uniform weighted average
model_ensemble2.linear.weight.data.fill_(1/6)
model_ensemble2.linear.bias.data.fill_(1/6)
def training_ensemble2(model, graph, lr_k, k,task):
# Create new model which we will train on
new_model = copy_existing_model_ensemble2(model)
# Define new optimizer
koptimizer = torch.optim.SGD(new_model.parameters(), lr=lr_k)
# Update the model multiple times, note that k>1 (do not confuse k with K)
for i in range(k):
# Reset optimizer
koptimizer.zero_grad()
# Evaluate the model
loss = evaluation(new_model, graph, task, item = False)
# Backpropagate
loss.backward()
koptimizer.step()
return new_model
def copy_existing_model_ensemble2(model):
# Function to copy an existing model
# We initialize a new model
new_model = Ensemble_method2(model1,model2,model3,model4,model5,model6)
# Copy the previous model's parameters into the new model
new_model.load_state_dict(model.state_dict())
return new_model
graph_type = "equiv"
num_evals = 100
all_losses = []
for test_eval in range(num_evals):
task = 18
graph = []
for i in range(K): #Store graphs
graph.append(random.sample(GRAPH_TEST, 1)[0])
graph = create_mini_batch(graph)
k_shot_updates = 6
initialization_to_store_meta_losses()
for shots in range(k_shot_updates):
new_model = training_ensemble2(model_ensemble2, graph, lr_k,shots, task)
train_set_evaluation(new_model,graph,store_train_loss_meta, task)
all_losses.append(np.array(store_train_loss_meta))
# plt.plot(store_train_loss_meta,label = 'Loss')
# plt.legend()
# plt.xlabel('k shots')
all_losses = np.array(all_losses)
np.save(f"reptile_graph_{graph_type}_k.npy", all_losses)
fig, ax = plt.subplots(figsize=(8,4))
mean_loss = np.mean(all_losses, axis=0)
# confidence interval plotting help from: https://stackoverflow.com/questions/59747313/how-to-plot-confidence-interval-in-python
y = mean_loss
x = list(range(len(mean_loss)))
ci = 1.96 * np.std(all_losses, axis=0) / np.sqrt(all_losses.shape[0])  # 95% CI half-width: std (not variance) over sqrt(number of evaluation runs)
ax_size=16
title_size=18
ax.plot(x, y, linewidth=3, label=f"Mean Loss")
# to avoid having MSE < 0
truncated_error = np.clip(y-ci, a_min=0, a_max=None)
ax.fill_between(x, truncated_error, (y+ci), alpha=.5,label=f"95% CI")
ax.set_xlabel("Gradient Steps",fontsize=ax_size)
ax.set_ylabel("Mean Squared Error (MSE)",fontsize=ax_size)
ax.set_title("Graph Regression: k-Shot Evaluation",fontsize=title_size)
ax.legend()#loc="upper right")
plt.savefig(f"graph_reg_{graph_type}_kshot.png")
###Output
_____no_output_____ |
1. Introduction To Python/.ipynb_checkpoints/main-checkpoint.ipynb | ###Markdown
Lecture 1. Introduction to Python 1. Hello World!
###Code
print("Hello World!!!")
###Output
Hello World!!!
###Markdown
2. Add two numbers [Variables]
###Code
a,b = int(input()),int(input())
print(a+b)
###Output
5
11
16
###Markdown
3. DataType
###Code
a,b,c = 5,2.5,"abc"
print(type(a))
print(type(b))
print(type(c))
###Output
<class 'int'>
<class 'float'>
<class 'str'>
###Markdown
4. Python Number
###Code
a,b,c = 5,6,5+6j
print(type(a))
print(type(b))
print(type(c))
a = 10
print(id(a))
a = a+1
print(id(a))
a = 5
b = 5
print(id(a))
print(id(b))
###Output
140710341515040
140710341515040
###Markdown
5. Number Limit 6. Arithmetic Operators
###Code
a = 10
b = 5
print(a+b)
print(a-b)
print(a*b)
print(a/b)
print(a//b)
print(a**b)
print(a%b)
2*3//4
(2*3)/4
p,r,t = 1000,5,500
print(p*r*t/100)
print(17//10)
print(17/10)
###Output
1.7
###Markdown
7. Input from User
###Code
a,b = int(input()),int(input())
print(a+b)
###Output
23
34
57
###Markdown
Assignment Problem Question 1
###Code
X,N = int(input()),int(input())
print(X**N)
a,b,c = int(input()),int(input()),int(input())
print(b-a)
a,b,c,d = int(input()),int(input()),int(input()),int(input())
print((a+b+c+d)//2)
a,b,c = int(input()),int(input()),int(input())
print((a+b+c)/3)
###Output
5
10
15
10.0
|
notebooks/2020_0726torch_word_language_model.ipynb | ###Markdown
A word-based language model with PyTorch - date: 2020-0726 - author: 浅川伸一 - source: https://github.com/pytorch/examples/tree/master/word_language_model
###Code
# coding: utf-8
import argparse
import time
import math
import os
import torch
import torch.nn as nn
import torch.onnx
#import data
#import model
# Set the random seed manually for reproducibility.
seed = 20200726
cuda = True
torch.manual_seed(seed)
if torch.cuda.is_available():
if not cuda:
print("WARNING: Set cuda=True")
device = torch.device("cuda" if cuda else "cpu")
device
# this cell is the content of data.py
import os
from io import open
#import torch
class Dictionary(object):
def __init__(self):
self.word2idx = {}
self.idx2word = []
def add_word(self, word):
if word not in self.word2idx:
self.idx2word.append(word)
self.word2idx[word] = len(self.idx2word) - 1
return self.word2idx[word]
def __len__(self):
return len(self.idx2word)
class Corpus(object):
def __init__(self, path):
self.dictionary = Dictionary()
#self.train = self.tokenize(os.path.join(path, 'train.txt'))
#self.valid = self.tokenize(os.path.join(path, 'valid.txt'))
#self.test = self.tokenize(os.path.join(path, 'test.txt'))
self.train = self.tokenize(os.path.join(path, 'train.csv'))
self.valid = self.tokenize(os.path.join(path, 'test.csv'))
self.test = self.tokenize(os.path.join(path, 'test.csv'))
def tokenize(self, path):
"""Tokenizes a text file."""
assert os.path.exists(path)
# Add words to the dictionary
with open(path, 'r', encoding="utf8") as f:
for line in f:
words = line.split() + ['<eos>']
for word in words:
self.dictionary.add_word(word)
# Tokenize file content
with open(path, 'r', encoding="utf8") as f:
idss = []
for line in f:
words = line.split() + ['<eos>']
ids = []
for word in words:
ids.append(self.dictionary.word2idx[word])
idss.append(torch.tensor(ids).type(torch.int64))
ids = torch.cat(idss)
return ids
#download wikitext-2 dataset and GloVe embeddings
!wget https://s3.amazonaws.com/fast-ai-nlp/wikitext-2.tgz -P /data
!tar xzf /data/wikitext-2.tgz -C /data
!mv /data/wikitext-2/ /data/testwikitext2/
!ls -l /data/testwikitext2/
data_path = '/data/testwikitext2'
###############################################################################
# Load data
###############################################################################
corpus = Corpus(data_path)
# Starting from sequential data, batchify arranges the dataset into columns.
# For instance, with the alphabet as the sequence and batch size 4, we'd get
# ┌ a g m s ┐
# │ b h n t │
# │ c i o u │
# │ d j p v │
# │ e k q w │
# └ f l r x ┘.
# These columns are treated as independent by the model, which means that the
# dependence of e. g. 'g' on 'f' can not be learned, but allows more efficient
# batch processing.
def batchify(data, bsz):
# Work out how cleanly we can divide the dataset into bsz parts.
nbatch = data.size(0) // bsz
# Trim off any extra elements that wouldn't cleanly fit (remainders).
data = data.narrow(0, 0, nbatch * bsz)
# Evenly divide the data across the bsz batches.
data = data.view(bsz, -1).t().contiguous()
return data.to(device)
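# Example: with the 26-letter alphabet above and bsz=4, nbatch=6, the trailing
# letters y and z are trimmed, and batchify returns a tensor of shape (6, 4).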
eval_batch_size = 10
batch_size = 20
train_data = batchify(corpus.train, batch_size)
val_data = batchify(corpus.valid, eval_batch_size)
test_data = batchify(corpus.test, eval_batch_size)
# model.py
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
class RNNModel(nn.Module):
"""Container module with an encoder, a recurrent module, and a decoder."""
def __init__(self, rnn_type, ntoken, ninp, nhid, nlayers, dropout=0.5, tie_weights=False):
super(RNNModel, self).__init__()
self.ntoken = ntoken
self.drop = nn.Dropout(dropout)
self.encoder = nn.Embedding(ntoken, ninp)
if rnn_type in ['LSTM', 'GRU']:
self.rnn = getattr(nn, rnn_type)(ninp, nhid, nlayers, dropout=dropout)
else:
try:
nonlinearity = {'RNN_TANH': 'tanh', 'RNN_RELU': 'relu'}[rnn_type]
except KeyError:
raise ValueError( """An invalid option for `--model` was supplied,
options are ['LSTM', 'GRU', 'RNN_TANH' or 'RNN_RELU']""")
self.rnn = nn.RNN(ninp, nhid, nlayers, nonlinearity=nonlinearity, dropout=dropout)
self.decoder = nn.Linear(nhid, ntoken)
# Optionally tie weights as in:
# "Using the Output Embedding to Improve Language Models" (Press & Wolf 2016)
# https://arxiv.org/abs/1608.05859
# and
# "Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling" (Inan et al. 2016)
# https://arxiv.org/abs/1611.01462
if tie_weights:
if nhid != ninp:
raise ValueError('When using the tied flag, nhid must be equal to emsize')
self.decoder.weight = self.encoder.weight
self.init_weights()
self.rnn_type = rnn_type
self.nhid = nhid
self.nlayers = nlayers
def init_weights(self):
initrange = 0.1
nn.init.uniform_(self.encoder.weight, -initrange, initrange)
nn.init.zeros_(self.decoder.weight)
nn.init.uniform_(self.decoder.weight, -initrange, initrange)
def forward(self, input, hidden):
emb = self.drop(self.encoder(input))
output, hidden = self.rnn(emb, hidden)
output = self.drop(output)
decoded = self.decoder(output)
decoded = decoded.view(-1, self.ntoken)
return F.log_softmax(decoded, dim=1), hidden
def init_hidden(self, bsz):
weight = next(self.parameters())
if self.rnn_type == 'LSTM':
return (weight.new_zeros(self.nlayers, bsz, self.nhid),
weight.new_zeros(self.nlayers, bsz, self.nhid))
else:
return weight.new_zeros(self.nlayers, bsz, self.nhid)
# Temporarily leave PositionalEncoding module here. Will be moved somewhere else.
class PositionalEncoding(nn.Module):
r"""Inject some information about the relative or absolute position of the tokens
in the sequence. The positional encodings have the same dimension as
the embeddings, so that the two can be summed. Here, we use sine and cosine
functions of different frequencies.
.. math::
\text{PosEncoder}(pos, 2i) = sin(pos/10000^(2i/d_model))
\text{PosEncoder}(pos, 2i+1) = cos(pos/10000^(2i/d_model))
\text{where pos is the word position and i is the embed idx)
Args:
d_model: the embed dim (required).
dropout: the dropout value (default=0.1).
max_len: the max. length of the incoming sequence (default=5000).
Examples:
>>> pos_encoder = PositionalEncoding(d_model)
"""
def __init__(self, d_model, dropout=0.1, max_len=5000):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(p=dropout)
pe = torch.zeros(max_len, d_model)
position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0).transpose(0, 1)
self.register_buffer('pe', pe)
def forward(self, x):
r"""Inputs of forward function
Args:
x: the sequence fed to the positional encoder model (required).
Shape:
x: [sequence length, batch size, embed dim]
output: [sequence length, batch size, embed dim]
Examples:
>>> output = pos_encoder(x)
"""
x = x + self.pe[:x.size(0), :]
return self.dropout(x)
class TransformerModel(nn.Module):
"""Container module with an encoder, a recurrent or transformer module, and a decoder."""
def __init__(self, ntoken, ninp, nhead, nhid, nlayers, dropout=0.5):
super(TransformerModel, self).__init__()
try:
from torch.nn import TransformerEncoder, TransformerEncoderLayer
except:
raise ImportError('TransformerEncoder module does not exist in PyTorch 1.1 or lower.')
self.model_type = 'Transformer'
self.src_mask = None
self.pos_encoder = PositionalEncoding(ninp, dropout)
encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout)
self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)
self.encoder = nn.Embedding(ntoken, ninp)
self.ninp = ninp
self.decoder = nn.Linear(ninp, ntoken)
self.init_weights()
def _generate_square_subsequent_mask(self, sz):
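        # Build an additive causal mask: 0.0 on and below the diagonal, -inf above it,
        # so position i may attend only to positions <= i (no access to future tokens).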
mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)
mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
return mask
def init_weights(self):
initrange = 0.1
nn.init.uniform_(self.encoder.weight, -initrange, initrange)
nn.init.zeros_(self.decoder.weight)
nn.init.uniform_(self.decoder.weight, -initrange, initrange)
def forward(self, src, has_mask=True):
if has_mask:
device = src.device
if self.src_mask is None or self.src_mask.size(0) != len(src):
mask = self._generate_square_subsequent_mask(len(src)).to(device)
self.src_mask = mask
else:
self.src_mask = None
src = self.encoder(src) * math.sqrt(self.ninp)
src = self.pos_encoder(src)
output = self.transformer_encoder(src, self.src_mask)
output = self.decoder(output)
return F.log_softmax(output, dim=-1)
###############################################################################
# Build the model
###############################################################################
ntokens = len(corpus.dictionary)
model_name = 'Transformer'
#model_name = 'LSTM'
emsize = 200 # size of word embeddings
nhid = 200 # number of hidden units per layer
nlayers = 2 # number of layers
lr = 20. # initial learning rate
clip = 0.25 # gradient clipping
epochs = 40 # upper epoch limit
batch_size = 20 # batch size
bptt = 35 # sequence length
dropout = 0.2 # dropout applied to layers (0 = no dropout)
tied = True # tie the word embedding and softmax weights
seed = 1111 # random seed
cuda = False # use CUDA
log_interval = 200 # report interval
saved_weight = 'model.pth' # path to save the final model
onnx_export = '' # path to export the final model in onnx format
nhead = 2 # the number of heads in the encoder/decoder of the transformer model
dry_run = False # verify the code and the model
if model_name == 'Transformer':
model = TransformerModel(ntokens, emsize, nhead, nhid, nlayers, dropout).to(device)
else:
    model = RNNModel(rnn_type=model_name,
ntoken=ntokens,
ninp=emsize,
nhid=nhid,
nlayers=nlayers,
dropout=dropout,
tie_weights=tied).to(device)
criterion = nn.NLLLoss()
###############################################################################
# Training code
###############################################################################
def repackage_hidden(h):
"""Wraps hidden states in new Tensors, to detach them from their history."""
if isinstance(h, torch.Tensor):
return h.detach()
else:
return tuple(repackage_hidden(v) for v in h)
# get_batch subdivides the source data into chunks of length args.bptt.
# If source is equal to the example output of the batchify function, with
# a bptt-limit of 2, we'd get the following two Variables for i = 0:
# ┌ a g m s ┐ ┌ b h n t ┐
# └ b h n t ┘ └ c i o u ┘
# Note that despite the name of the function, the subdivision of data is not
# done along the batch dimension (i.e. dimension 1), since that was handled
# by the batchify function. The chunks are along dimension 0, corresponding
# to the seq_len dimension in the LSTM.
def get_batch(source, i):
seq_len = min(bptt, len(source) - 1 - i)
data = source[i:i+seq_len]
target = source[i+1:i+1+seq_len].view(-1)
return data, target
def evaluate(data_source):
# Turn on evaluation mode which disables dropout.
model.eval()
total_loss = 0.
ntokens = len(corpus.dictionary)
if model_name != 'Transformer':
hidden = model.init_hidden(eval_batch_size)
with torch.no_grad():
for i in range(0, data_source.size(0) - 1, bptt):
data, targets = get_batch(data_source, i)
if model_name == 'Transformer':
output = model(data)
output = output.view(-1, ntokens)
else:
output, hidden = model(data, hidden)
hidden = repackage_hidden(hidden)
total_loss += len(data) * criterion(output, targets).item()
return total_loss / (len(data_source) - 1)
def train():
# Turn on training mode which enables dropout.
model.train()
total_loss = 0.
start_time = time.time()
ntokens = len(corpus.dictionary)
if model_name != 'Transformer':
hidden = model.init_hidden(batch_size)
for batch, i in enumerate(range(0, train_data.size(0) - 1, bptt)):
data, targets = get_batch(train_data, i)
# Starting each batch, we detach the hidden state from how it was previously produced.
# If we didn't, the model would try backpropagating all the way to start of the dataset.
model.zero_grad()
if model_name == 'Transformer':
output = model(data)
output = output.view(-1, ntokens)
else:
hidden = repackage_hidden(hidden)
output, hidden = model(data, hidden)
loss = criterion(output, targets)
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
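        # Manual SGD step: update each parameter in place as p <- p - lr * grad
        # (no torch.optim optimizer object is used in this training loop).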
for p in model.parameters():
p.data.add_(p.grad, alpha=-lr)
total_loss += loss.item()
if batch % log_interval == 0 and batch > 0:
cur_loss = total_loss / log_interval
elapsed = time.time() - start_time
print('| epoch {:3d} | {:5d}/{:5d} batches | lr {:02.2f} | ms/batch {:5.2f} | '
'loss {:5.2f} | ppl {:8.2f}'.format(
epoch, batch, len(train_data) // bptt, lr,
elapsed * 1000 / log_interval, cur_loss, math.exp(cur_loss)))
total_loss = 0
start_time = time.time()
if dry_run:
break
def export_onnx(path, batch_size, seq_len):
print('The model is also exported in ONNX format at {}'.
format(os.path.realpath(onnx_export)))
model.eval()
dummy_input = torch.LongTensor(seq_len * batch_size).zero_().view(-1, batch_size).to(device)
hidden = model.init_hidden(batch_size)
torch.onnx.export(model, (dummy_input, hidden), path)
# Loop over epochs.
# lr = args.lr
best_val_loss = None
epochs = 10
# At any point you can hit Ctrl + C to break out of training early.
try:
for epoch in range(1, epochs+1):
epoch_start_time = time.time()
train()
val_loss = evaluate(val_data)
print('-' * 89)
print('| end of epoch {:3d} | time: {:5.2f}s | valid loss {:5.2f} | '
'valid ppl {:8.2f}'.format(epoch, (time.time() - epoch_start_time),
val_loss, math.exp(val_loss)))
print('-' * 89)
# Save the model if the validation loss is the best we've seen so far.
if not best_val_loss or val_loss < best_val_loss:
torch.save(model.state_dict(), saved_weight)
#with open(save, 'wb') as f:
# torch.save(model, f)
best_val_loss = val_loss
else:
# Anneal the learning rate if no improvement has been seen in the validation dataset.
lr /= 4.0
except KeyboardInterrupt:
print('-' * 89)
print('Exiting from training early')
# generate.py
outf = 'generated.txt'
words = 1000
temperature = 1.0
#is_transformer_model = hasattr(model, 'model_type') and model.model_type == 'Transformer'
#if not is_transformer_model:
if model_name != 'Transformer':
hidden = model.init_hidden(1)
input = torch.randint(ntokens, (1, 1), dtype=torch.long).to(device)
with open(outf, 'w') as outf:
with torch.no_grad(): # no tracking history
for i in range(words):
#if is_transformer_model:
if model_name == 'Transformer':
output = model(input, False)
word_weights = output[-1].squeeze().div(temperature).exp().cpu()
word_idx = torch.multinomial(word_weights, 1)[0]
word_tensor = torch.Tensor([[word_idx]]).long().to(device)
input = torch.cat([input, word_tensor], 0)
else:
output, hidden = model(input, hidden)
word_weights = output.squeeze().div(temperature).exp().cpu()
word_idx = torch.multinomial(word_weights, 1)[0]
input.fill_(word_idx)
word = corpus.dictionary.idx2word[word_idx]
outf.write(word + ('\n' if i % 20 == 19 else ' '))
if i % log_interval == 0:
print('| Generated {}/{} words'.format(i, words))
!head generated.txt
#type(model)
loaded_weight = torch.load(saved_weight)
model.load_state_dict(loaded_weight)
#loaded_weight.keys()
model
# Load the best saved model.
#with open(saved_weight, 'rb') as f:
loaded_weight = torch.load(saved_weight)
model.load_state_dict(loaded_weight)
# after load the rnn params are not a continuous chunk of memory
# this makes them a continuous chunk, and will speed up forward pass
# Currently, only rnn model supports flatten_parameters function.
if model_name in ['RNN_TANH', 'RNN_RELU', 'LSTM', 'GRU']:
model.rnn.flatten_parameters()
# Run on test data.
test_loss = evaluate(test_data)
print('=' * 89)
print('| End of training | test loss {:5.2f} | test ppl {:8.2f}'.format(
test_loss, math.exp(test_loss)))
print('=' * 89)
if len(onnx_export) > 0:
# Export the model in ONNX format.
    export_onnx(onnx_export, batch_size=1, seq_len=bptt)
###Output
=========================================================================================
| End of training | test loss 6.87 | test ppl 965.54
=========================================================================================
|
var-auth/20200225_geo_tan/20200225_geo_tan.ipynb | ###Markdown
Problem
###Code
%%latex
In Figure \ref{fig:probfig}, the radius of the small circle (black) is the same as the edge length of a square.
What is the value of $\tan(\angle \mathrm{ACB})$?
\begin{figure}[H]
\centering
\includegraphics[width=0.5\linewidth]{probfig.jpeg}
\caption{Problem's figure}
\label{fig:probfig}
\end{figure}
###Output
_____no_output_____
###Markdown
Solution
###Code
%%latex
% https://tex.stackexchange.com/questions/86044/represent-an-arc-over-letters
\settowidth{\mywidth}{AB}
From Figure \ref{fig:probfig}, the inscribed angle $\angle \mathrm{ACB}$ has its intercepted arc $\bigfrown{\mathrm{AB}}$
(with angle $\angle \mathrm{AOB}$).
Because inscribed angle is half of its intercepted arc, we have:
\begin{equation}
\angle \mathrm{ACB} = \frac{1}{2} \times \angle \mathrm{AOB}.
\end{equation}
Let point D (as shown in Figure \ref{fig:figwithsizes}: \Nameref{fig:figwithsizes}) be the bottom center (crossing point) of the 2 squares.
Due to symmetry $\angle \mathrm{AOD} = \frac{1}{2} \times \angle \mathrm{AOB}$, and hence:
\begin{equation}
\tan( \angle \mathrm{ACB} ) = \tan({\angle \mathrm{AOD}}).
\end{equation}
Figure \ref{fig:figwithsizes} shows Figure \ref{fig:probfig} which is annotated with sizes relevant to the solution.
Let point O be the center of the big red circle. Also, let the edge length of the square (which is also the radius of the small black circle) and
the radius of the big red circle be $a$ and $R$, respectively.
%matplotlib inline
# patches: https://note.nkmk.me/python-matplotlib-patches-circle-rectangle/
# patches: https://www.codespeedy.com/how-to-draw-shapes-in-matplotlib-with-python/
# patches: https://nickcharlton.net/posts/drawing-animating-shapes-matplotlib.html
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib
# matplotlib.rcParams['figure.figsize'] = 6, 6 # Resize the figure, in inches.
matplotlib.rcParams['figure.dpi']= 150
matplotlib.rcParams['mathtext.fontset'] = 'cm' # or 'stix'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
def annotate_dim(ax,xyfrom,xyto,text=None,fontsize=15, color='black', xoffset=0.0, yoffset=0.0, lnwid=1.0, boxfc='white'):
if text is None:
text = str(np.sqrt( (xyfrom[0]-xyto[0])**2 + (xyfrom[1]-xyto[1])**2 ))
ax.annotate("",xyfrom,xyto,arrowprops=dict(color=color, arrowstyle='<->', shrinkA=0, shrinkB=0, lw=lnwid))
ax.text((xyto[0]+xyfrom[0])/2+xoffset,(xyto[1]+xyfrom[1])/2+yoffset,text,fontsize=fontsize,color=color,
bbox=dict(facecolor=boxfc, edgecolor='none', boxstyle='round,pad=0.02'))
def annotate_dim_anchor(ax, xyfrom, xyto, ancfrom, ancto,
text=None,fontsize=15, color='black', xoffset=0.0, yoffset=0.0, lnwid=1.0, boxfc='white',
anclnwid=0.2, extend=2.0):
if text is None:
text = str(np.sqrt( (xyfrom[0]-xyto[0])**2 + (xyfrom[1]-xyto[1])**2 ))
# Line: ancfrom - xyfrom & ancto - xyto
eps = 0.0001
if abs(xyfrom[0] - ancfrom[0])>eps or abs(xyfrom[1] - ancfrom[1])>eps:
extx = xyfrom[0]
exty = xyfrom[1]
if abs(xyfrom[0] - ancfrom[0])<=eps:
exty = xyfrom[1] + extend * (xyfrom[1] - ancfrom[1])/abs(xyfrom[1] - ancfrom[1])
elif abs(xyfrom[1] - ancfrom[1])<=eps:
extx = xyfrom[0] + extend * (xyfrom[0] - ancfrom[0])/abs(xyfrom[0] - ancfrom[0])
else:
seglen = np.sqrt( (xyfrom[0]-ancfrom[0])**2 + (xyfrom[1]-ancfrom[1])**2 )
extx = xyfrom[0] + extend * (xyfrom[0]-ancfrom[0]) / seglen
exty = xyfrom[1] + extend * (xyfrom[1]-ancfrom[1]) / seglen
lancfr = plt.Line2D( (ancfrom[0], extx), (ancfrom[1], exty), color=color, lw=anclnwid )
ax.add_line(lancfr)
if abs(xyto[0] - ancto[0])>eps or abs(xyto[1] - ancto[1])>eps:
extx = xyto[0]
exty = xyto[1]
if abs(xyto[0] - ancto[0])<=eps:
exty = xyto[1] + extend * (xyto[1] - ancto[1])/abs(xyto[1] - ancto[1])
elif abs(xyto[1] - ancto[1])<=eps:
extx = xyto[0] + extend * (xyto[0] - ancto[0])/abs(xyto[0] - ancto[0])
else:
seglen = np.sqrt( (xyto[0]-ancto[0])**2 + (xyto[1]-ancto[1])**2 )
extx = xyto[0] + extend * (xyto[0]-ancto[0]) / seglen
exty = xyto[1] + extend * (xyto[1]-ancto[1]) / seglen
lancto = plt.Line2D( (ancto[0], extx), (ancto[1], exty), color=color, lw=anclnwid )
ax.add_line(lancto)
ax.annotate("",xyfrom,xyto,arrowprops=dict(color=color, arrowstyle='<->', shrinkA=0, shrinkB=0, lw=lnwid))
ax.text((xyto[0]+xyfrom[0])/2+xoffset,(xyto[1]+xyfrom[1])/2+yoffset,text,fontsize=fontsize,color=color,
bbox=dict(facecolor=boxfc, edgecolor='none', boxstyle='round,pad=0.02'))
# fig = plt.figure()
ax = plt.axes()
a = 3.0
R = 5.0*a/3.0
ptO = (a, 3*a-R)
vecOA = (0.0-ptO[0], a-ptO[1])
absvecOA = np.sqrt( (vecOA[0])**2 + (vecOA[1])**2 )
uvecOA = ( vecOA[0]/absvecOA, vecOA[1]/absvecOA )
ptA = ( ptO[0] + R*uvecOA[0], ptO[1] + R*uvecOA[1] )
ptD = (a, 0.0)
ptTop = (a, 3*a)
ptE = (0.0, a)
ptF = (a, a)
ptG = (2*a, 0.0)
shplw=1.5
# fc = face color, ec = edge color
c = patches.Circle(xy=(a, 2*a), radius=a, fc='none', ec='k', lw=shplw, ls='-') # default lw=1, ls='-' or '-.' or '--'
cbig = patches.Circle(xy=ptO, radius=R, fc='none', ec='r', lw=shplw, ls='-')
r1 = patches.Rectangle(xy=(0, 0), width=a, height=a, ec='#000000', lw=shplw, fill=False)
r2 = patches.Rectangle(xy=(a, 0), width=a, height=a, ec='#000000', lw=shplw, fill=False)
lineOA = plt.Line2D((ptO[0], ptA[0]), (ptO[1], ptA[1]), ls='--', lw=shplw)
lineOF = plt.Line2D((ptO[0], ptF[0]), (ptO[1], ptF[1]), ls='--', lw=shplw, color='k')
ax.add_patch(cbig)
ax.add_patch(c)
ax.add_patch(r1)
ax.add_patch(r2)
ax.add_line(lineOA)
ax.add_line(lineOF)
plt.text(ptO[0]+0.2, ptO[1]+0.15, 'O', fontsize=12)
plt.text(ptA[0]-0.6, ptA[1]-0.4, 'A', fontsize=12)
plt.text(ptE[0]-0.35, ptE[1]+0.18, 'E', fontsize=12)
plt.text(ptF[0]+0.18, ptF[1]-0.6, 'F', fontsize=12)
plt.text(ptTop[0]-0.2, ptTop[1]+0.25, 'T', fontsize=12)
annotate_dim(ax, ptO, ptG, '$R$', fontsize=13, color='green', xoffset=0.0, yoffset=0.1, lnwid=1.0, boxfc='none')
annotate_dim(ax, ptO, ptTop, '$R$', fontsize=13, color='green', xoffset=0.05, yoffset=-0.1, lnwid=1.0, boxfc='none')
annotate_dim(ax, (0.0, 0.65*a), (a, 0.65*a), '$a$', fontsize=13, color='green', xoffset=-0.12, yoffset=0.17, lnwid=1.0, boxfc='none')
dimanclw = 0.25
# anchor and dimension point
dppos = 4.2
ancp1 = (2*a, a)
dp1 = (dppos*a, a)
dp2 = (dppos*a, 3*a)
hige = 0.5
annotate_dim_anchor(ax, dp1, dp2, ancp1, ptTop,
'$2a$', fontsize=13, color='green', xoffset=0.11, yoffset=-0.1, lnwid=1.0, boxfc='none',
anclnwid=dimanclw, extend=hige)
dp3 = (dppos*a, 0.0)
annotate_dim_anchor(ax, dp1, dp3, dp1, ptG,
'$a$', fontsize=13, color='green', xoffset=0.11, yoffset=-0.1, lnwid=1.0, boxfc='none',
anclnwid=dimanclw, extend=hige)
dppos2 = 3.0
dp4 = (dppos2*a, a)
dp5 = (dppos2*a, ptO[1])
annotate_dim_anchor(ax, dp4, dp5, dp4, ptO,
'$2a-R$', fontsize=13, color='green', xoffset=0.2, yoffset=-0.2, lnwid=1.0, boxfc='none',
anclnwid=dimanclw, extend=hige)
dppos3 = -0.55
dpD = (ptD[0], dppos3*a)
dpG = (ptG[0], dppos3*a)
annotate_dim_anchor(ax, dpD, dpG, ptD, ptG,
'$a$', fontsize=13, color='green', xoffset=-0.12, yoffset=0.17, lnwid=1.0, boxfc='none',
anclnwid=dimanclw, extend=hige)
# annotate_dim_anchor(ax, (dpD[0]-1, dpD[1]-1), (dpG[0]+2, dpG[1]-3), ptD, ptG,
# '$a$', fontsize=13, color='green', xoffset=-0.12, yoffset=0.17, lnwid=1.0, boxfc='none',
# anclnwid=dimanclw, extend=hige)
plt.text(ptD[0]-0.55, ptD[1]-0.6, 'D', fontsize=12)
plt.text(ptG[0]+0.1, ptG[1]-0.6, 'G', fontsize=12)
plt.axis('off')
plt.axis('scaled')
ax.set_aspect('equal')
plt.show()
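# Numeric sanity check (added): with a = 3 and R = 5a/3 as set above,
# tan(angle EOF) = |EF| / |FO| should equal 3, matching tan(angle ACB)
# derived in the LaTeX cells below.
len_EF = abs(ptF[0] - ptE[0])          # horizontal leg, length a
len_FO = abs(ptF[1] - ptO[1])          # vertical leg, length 2a - R
print("tan(EOF) =", len_EF / len_FO)   # expected: 3.0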
%%latex
Let $\overline{\mathrm{XY}}$ and $\lvert\overline{\mathrm{XY}}\rvert$ be line segment $\mathrm{XY}$ and its length, respectively.
Looking at right triangle $\triangle\mathrm{ODG}$ and applying Pythagoras' theorem:
\begin{align*}
\lvert\overline{\mathrm{OD}}\rvert^2 + \lvert\overline{\mathrm{DG}}\rvert^2 = \lvert\overline{\mathrm{OG}}\rvert^2
& \Rightarrow \lvert\overline{\mathrm{OD}}\rvert^2 + a^2 = R^2 \\
& \Rightarrow \lvert\overline{\mathrm{OD}}\rvert = \sqrt{R^2 - a^2}.
\addtocounter{equation}{1}
\label{eq:lenOD} \tag{\theequation}
\end{align*}
Also from Figure \ref{fig:figwithsizes}, we have:
\begin{align*}
\lvert\overline{\mathrm{OD}}\rvert + \lvert\overline{\mathrm{OT}}\rvert = \lvert\overline{\mathrm{DT}}\rvert
& \Rightarrow \lvert\overline{\mathrm{OD}}\rvert + R = \lvert\overline{\mathrm{DF}}\rvert + \lvert\overline{\mathrm{FT}}\rvert \\
& \Rightarrow \lvert\overline{\mathrm{OD}}\rvert + R = a + 2a \\
& \Rightarrow \lvert\overline{\mathrm{OD}}\rvert = 3a - R.
\addtocounter{equation}{1}
\label{eq:lenOD2} \tag{\theequation}
\end{align*}
From equation \eqref{eq:lenOD} and \eqref{eq:lenOD2}, we have:
\begin{align*}
% \require{cancel}
\sqrt{R^2 - a^2} = 3a - R
& \Rightarrow R^2 - a^2 = (3a - R)^2 \\
& \Rightarrow \cancel{R^2} - a^2 = 9a^2 - 6aR + \cancel{R^2} \\
& \Rightarrow 6aR = 9a^2 + a^2 = 10a^2 \\
& \Rightarrow 6\cancel{a}R = 10a^{\cancel{2}} && (a\neq 0) \\
& \Rightarrow R = \frac{5}{3}a.
\addtocounter{equation}{1}
\label{eq:relation_Ra} \tag{\theequation}
\end{align*}
%%latex
Now look at right triangle $\triangle\mathrm{EFO}$.
\begin{align*}
\tan(\angle \mathrm{AOD}) = \tan(\angle \mathrm{EOF})
&= \frac{\lvert\overline{\mathrm{EF}}\rvert}{\lvert\overline{\mathrm{FO}}\rvert} \\
&= \frac{a}{2a - R} && (\text{from Figure }\ref{fig:figwithsizes}) \\
&= \frac{a}{2a - \frac{5}{3}a} && (\text{substituting $R$ from equation \eqref{eq:relation_Ra}}) \\
&= \frac{\cancel{a}}{\frac{1}{3}\cancel{a}} \\
&= 3.
\addtocounter{equation}{1}
\label{eq:answer} \tag{\theequation}
\end{align*}
Hence, $\tan(\angle\mathrm{ACB}) = \tan(\angle \mathrm{AOD}) = 3$. $\blacksquare$
###Output
_____no_output_____ |
socratica/video-02.ipynb | ###Markdown
Python Programming Tutorials (Computer Science)The 🦉 [Socratica](https://www.youtube.com/channel/UCW6TXMZ5Pq6yL6_k5NZ2e0Q) YouTube Channel has a 33-video [playlist](https://www.youtube.com/playlist?list=PLi01XoE8jYohWFPpC17Z-wWhPOSuh8Er-) devoted to the introduction of Python. 2 Hello World in Python
###Code
%run video-00.py
from IPython import display
video = display.YouTubeVideo('KOdfpbnWLVo')
video
display.HTML(f'<a href="{video.src}">link</a>')
print('Hello world')
%run -i video-02.py
###Output
Hello world
|
playground/notebooks/get_all_tweets.ipynb | ###Markdown
Load all user_ids from csv file
###Code
# json, sys and requests are needed by the cells below
import json
import sys
import requests
import pandas as pd
def load_users():
filename = "top_users.csv"
my_csv = pd.read_csv(filename)
column = my_csv.user_id
names = ""
for name in column.values:
names += str(name) + ","
return names[:-1]
#print(load_users())
###Output
_____no_output_____
###Markdown
Define a function that connects to the Twitter API and gets tweets
###Code
def get_tweets():
url = 'https://stream.twitter.com/1.1/statuses/filter.json'
user_ids = load_users()
query_data = [('language', 'en'),('follow',user_ids)]
query_url = url + '?' + '&'.join([str(t[0]) + '=' + str(t[1]) for t in query_data])
response = requests.get(query_url, auth=my_auth, stream=True)
print(query_url, response)
return response
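# NOTE (added): `my_auth` is referenced above but never defined in this notebook.
# A minimal sketch, assuming OAuth1 credentials from a Twitter developer app
# (the four values are placeholders, not real keys):
# from requests_oauthlib import OAuth1
# my_auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")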
###Output
_____no_output_____
###Markdown
Define a function that streams the tweets over a tcp connection
###Code
def print_tweets(http_resp):
for line in http_resp.iter_lines():
try:
full_tweet = json.loads(line)
tweet_text = full_tweet['text'].encode("utf-8")
user_name = full_tweet['user']['name'].encode("utf-8")
print("------------------------------------------")
print("Tweet User: {}".format(user_name))
print("Tweet Text: {}".format(tweet_text))
except:
e = sys.exc_info()[0]
print("Error: %s" % e)
###Output
_____no_output_____
###Markdown
Start a localhost tcp socket to stream the tweets
###Code
#TCP_IP = "localhost"
#TCP_PORT = 9009
#conn = None
#s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
#s.bind((TCP_IP, TCP_PORT))
#s.listen(1)
print("Waiting for TCP connection...")
#conn, addr = s.accept()
print("Connected... Starting getting tweets.")
resp = get_tweets()
print_tweets(resp)
###Output
_____no_output_____ |
notebooks/1.14-sfb-predict-hoax-logreg-pipeline.ipynb | ###Markdown
Predict hoax using scaled features Improvements (for a subsequent notebook): - implement randomforest and xgboost, as in the paper- graph the test scores alongside cross-val scores- fix my StandardScaler implementation per Geoff's note ([link](https://stackoverflow.com/questions/51459406/how-to-apply-standardscaler-in-pipeline-in-scikit-learn-sklearn)) Notes: Struggles with model fit without SFS Goal: Match or surpass the $R^2$ achieved with unscaled features ➜ Fewer coefficients present without scaling. Why? - There was a typo and a feature slipped through that wasn't supposed to - ➜ fixed - The sets are the same now, but the phenomenon is remaining.- Maybe it's because the intercept was scaled to zero? - ➜ Try only scaling the original floats. - Score improved to .54/.5 , but unscaled remains .6- Look at (unscaled_coeff / $\sigma$) to understand feature importance - ➜ Try repeating the regression with these features only - Score improved to .55/.5 , and unscaled remains .58- Checked Lasso (with all features) - ➜ same results as Ridge: .54/.5- Relaxed my limitation of C, - ➜ improved to .57/.5 with C=6- Ran logistic regression on unscaled features with no regularization - ➜ improved to .61/.5 - Far out-performed the paper-author's models- Let's try SFS with scaled features - Success!- Conclusion: - The unscaled data was causing regularization to do feature selection in a better way. - When the feature selection was improved with Notes: sklearn GridSearchCV vs. customization When to use sklearn's GridSearchCV to wrap SFS? - Technical particularity: - It's only possible to get access to the SFS instance for the param combination that GridSearchCV found to be optimal. - This mitigates output complexity, but prevents thorough review. - It would prevent a review of both SFFS and SFBS within the same gridsearch as is done here, for instance.- Pros - GridSearchCV is great for automated production applications, to make the code robust, concise, and palatable for other users.- Cons - GridSearchCV wouldn't let me do a viz overview of the param_grid outputs. - (it's grid ***search***, not grid ***exploration***.) Previous thoughts on why not to use sklearn's GridSearchCV with SFS:
###Code
sklearn.model_selection.GridSearchCV can tune many hyperparameters jointly, but it cannot require two separate hyperparameters to be tied to the same value.
However to implement SFS in a pipeline with GridSearchCV requires two instances of the estimator:
- one instance est_sfs for the estimator in SFS, and
- another instance est_scor at the end of the pipeline to allow it to be used to fit a model.
But then, these two estimator instances will have two separate/parallel sets of hyperparameters.
Consider that the model used for these instances is sklearn.linear_model.LogisticRegression, which has a regularization hyperparameter C.
The C used by the LogisticRegression inside SFS is used to select features, and the C at the end of the pipeline is used by the gridsearch to score each model run.
Three solution alternatives to make sure the two C's are the same for each model run:
1. Use GridSearch to wrap the pipeline and param_grid, with an unused stand-in estimator at the end of the pipeline. Write a custom function to analyze and score the SFS output pulled directly from my_pipeline.steps[index] etc.
2. Write my own custom gridsearch function.
3. Rewrite the GridSearchCV code as was done in this SOF answer (without fully-posted code)
documentation here ([link](https://stackoverflow.com/questions/48831851/how-to-pass-two-estimator-objects-to-sklearns-gridsearchcv-so-that-they-have-th))
Another perspective:
- Maybe doing the scoring with a different estimator est_scor is better:
- The main point of a diverse paramgrid within SFS is develop a diverse output of feature sets.
- That's an entirely separate goal from tuning optimal hyperparameters for the final model.
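# Illustrative sketch only (assumption: not the approach used below) of alternative 1 --
# wrapping an SFS pipeline directly in GridSearchCV. `est_sfs` and `est_scor` are the two
# estimator instances discussed above; their C values sit under separate param-grid keys,
# so GridSearchCV tunes them independently and cannot force them to move together.
#
# from sklearn.model_selection import GridSearchCV
# from sklearn.pipeline import Pipeline
# from sklearn.linear_model import LogisticRegression
# from mlxtend.feature_selection import SequentialFeatureSelector as SFS
#
# est_sfs = LogisticRegression(max_iter=1000)    # selects features inside SFS
# est_scor = LogisticRegression(max_iter=1000)   # scores each candidate model
# pipe = Pipeline([('sfs', SFS(est_sfs, k_features=10)), ('clf', est_scor)])
# param_grid = {'sfs__estimator__C': [0.01, 0.1, 1],
#               'clf__C': [0.01, 0.1, 1]}
# gs = GridSearchCV(pipe, param_grid, cv=5)      # then gs.fit(Xtr, ytr)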
###Output
_____no_output_____
###Markdown
Get hoax data imports
###Code
import os, re, patsy
import pandas as pd, numpy as np
from datetime import datetime as dt
from sklearn.model_selection import train_test_split
path = '/home/bhrdwj/git/predwikt/data/raw/wiki_reliability/unzipped/'
fea = (pd.read_csv(path+'hoax_features.csv', usecols=lambda x: x not in ['Unnamed: 0'])
.rename(columns={'headings_by_level(2)':'headings_by_level_2', 'revision_id.key':'revision_id_key'}))
###Output
_____no_output_____
###Markdown
train test split Make series of negative revisions and their revision keys, and vice versa
###Code
neg_revs = fea[['revision_id', 'revision_id_key', 'has_template']]
neg_revs = neg_revs.loc[neg_revs.has_template==0].set_index('revision_id')['revision_id_key']
pos_revs = fea[['revision_id', 'revision_id_key', 'has_template']]
pos_revs = pos_revs.loc[pos_revs.has_template==1].set_index('revision_id')['revision_id_key']
neg_revs.shape #, pos_revs.shape
###Output
_____no_output_____
###Markdown
Test-train split the neg_revs, and form dfte and dftr
###Code
neg_revs_tr, neg_revs_te = train_test_split(neg_revs, test_size=.2, random_state=0)
pos_revs_tr = pos_revs[neg_revs_tr.values]
pos_revs_te = pos_revs[neg_revs_te.values]
revs_tr = pd.concat((neg_revs_tr, pos_revs_tr))
revs_te = pd.concat((neg_revs_te, pos_revs_te))
fea_rev = fea.set_index('revision_id')
dftr = fea_rev.loc[revs_tr.index].dropna()
dfte = fea_rev.loc[revs_te.index].dropna()
del neg_revs, pos_revs, neg_revs_tr, pos_revs_tr, neg_revs_te, pos_revs_te, revs_tr, revs_te, fea_rev
dftr[dftr.columns.difference(['page_id','revision_id_key','has_template'])].describe().T.sort_values(by='mean');
###Output
_____no_output_____
###Markdown
prep
###Code
# remove non-features; onehotify categoricals
ytr = dftr.has_template
Xtr = dftr[dftr.columns.difference(['page_id','revision_id_key','has_template'])]
Xtr = patsy.dmatrix('~ '+' + '.join(Xtr.columns), data=Xtr, NA_action='drop', return_type='dataframe')
yte = dfte.has_template
Xte = dfte[dfte.columns.difference(['page_id','revision_id_key','has_template'])]
Xte = patsy.dmatrix('~ '+' + '.join(Xte.columns), data=Xte, NA_action='drop', return_type='dataframe')
# make complete list of columns in case the test set doesn't include any of a rare class
Xcols = list(
set(Xtr.columns.tolist())
.union(set(Xte.columns.tolist()))
)
for col in Xcols:
if col not in Xte:
Xte[col] = 0
if col not in Xtr:
Xtr[col] = 0
Xtr = Xtr.reindex(columns=Xcols)
Xte = Xte.reindex(columns=Xcols)
Xtr.shape, Xte.shape
###Output
_____no_output_____
###Markdown
Feature selection imports
###Code
from sklearn.model_selection import ParameterGrid, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
###Output
_____no_output_____
###Markdown
functions bh_sfs_gridsearch
###Code
def bh_sfs_gridsearch(Xtr, ytr, pipe, param_grid:dict):
"""
Concept:
Like sklearn's GridSearch, but relying on sfs for the CV
Value-add: Returns all feature sets from SFFS and SFBS runs and sub-runs for each parameter combo
Args:
Xtr: pd.DataFrame of features
ytr: pd.Series target
pipe: sklearn Pipeline ending in a mlxtend SequentialFeatureSelector instance.
param_grid: dict of lists, params of pipe, input for sklearn ParameterGrid
Returns:
dict:
- keys are numbers starting in 1, for each cell of ParameterGrid
- values are lists of feature names
"""
print(f'start time: {dt.now()}')
pg = {i:j for i,j in enumerate(ParameterGrid(param_grid), start=1)}
sfs_featdict = {}
sfs_scoredict = {}
for i in pg:
pipe.set_params(**pg[i]).fit(Xtr, ytr)
k_feats = pg[i]['sfs__k_features']
idx_tup = pipe.steps[-1][1].get_metric_dict()[k_feats]['feature_idx']
d = {k:list(v['feature_idx']) for k,v in pipe.steps[1][1].subsets_.items()} # IMPROVEMENT: change pipe.steps[1] to directly look for the sfs step
for j in d: d[j] = Xtr.columns[d[j]]
sfs_featdict[i] = d
s = {k:v['avg_score'] for k,v in pipe.steps[1][1].subsets_.items()} # IMPROVEMENT: change pipe.steps[1] to directly look for the sfs step
sfs_scoredict[i] = s
print(f'finish time: {dt.now()}')
return sfs_featdict, sfs_scoredict
###Output
_____no_output_____
###Markdown
instances
###Code
scaler = StandardScaler()
lr_sfs = LogisticRegression(penalty='l2', max_iter=1000, fit_intercept=True)
cv_sfs = StratifiedKFold(n_splits=5, shuffle=False)
sfs = SFS(estimator=lr_sfs, forward=True, floating=False, scoring='accuracy', cv=cv_sfs) # unused / placeholder
sffs = SFS(estimator=lr_sfs, forward=True, floating=True, scoring='accuracy', cv=cv_sfs)
sfbs = SFS(estimator=lr_sfs, forward=False, floating=True, scoring='accuracy', cv=cv_sfs)
sfs_pipe = Pipeline([('scaler', scaler),('sfs', sfs)])
###Output
_____no_output_____
###Markdown
fit sffs
###Code
sfs_pipe_param_grid = {
'sfs': [sffs, sfbs],
'sfs__k_features': [1, len(Xtr.columns)],
'sfs__estimator': [lr_sfs],
'sfs__estimator__C': [.013]
}
sfs_featdict, sfs_scoredict = bh_sfs_gridsearch(Xtr, ytr, sfs_pipe, sfs_pipe_param_grid)
###Output
start time: 2022-01-13 19:36:36.847337
finish time: 2022-01-13 19:39:54.213056
###Markdown
pickle
###Code
import pickle
# with open('../data/processed/sfs_scoredict.pickle','wb+') as f:
# pickle.dump(sfs_scoredict, f)
# with open('../data/processed/sfs_featdict.pickle','wb+') as f:
# pickle.dump(sfs_featdict, f)
with open('../data/processed/sfs_featdict.pickle','rb') as f:
sfs_featdict = pickle.load(f)
with open('../data/processed/sfs_scoredict.pickle','rb') as f:
sfs_scoredict = pickle.load(f)
###Output
_____no_output_____
###Markdown
extract feats get onehot df ~ features selected by sfs munge_onehotdf_from_sfs_featdict (function)
###Code
def munge_onehotdf_from_sfs_featdict(sfs_featdict):
# simple 1D list of sfs feats
sfs_feat_set = set()
    for i in sfs_featdict: # runs
        for j in sfs_featdict[i]: # kfeats
            sfs_feat_set |= set(sfs_featdict[i][j])  # accumulate every feature seen in any run/step
    sfs_feats = list(sfs_feat_set)
# make multiindex columns and empty dataframe
col = pd.MultiIndex.from_tuples(
[(j,k) for j in sfs_featdict for k in sfs_featdict[j]])
idx = pd.Index(sfs_feats, name='feature')
df = pd.DataFrame('-', idx, col)
# insert onehots into df
for i in sfs_feats: # feats
for j in sfs_featdict: # runs
for k in sfs_featdict[j]: # steps within run
df.loc[i,(j,k)] = int(i in sfs_featdict[j][k])
df = df.astype(int)
return df
###Output
_____no_output_____
###Markdown
get df
###Code
sfs_onehots = munge_onehotdf_from_sfs_featdict(sfs_featdict)
###Output
_____no_output_____
###Markdown
get simple 1D lists of sfs runs, kfeats, and feats
###Code
sfs_runs = list(sfs_featdict.keys())
sfs_feat_set, sfs_kfeats_set = set(), set()
for i in sfs_featdict: # runs
    for j in sfs_featdict[i]: # kfeats
        sfs_feat_set |= set(sfs_featdict[i][j])   # accumulate features across all runs/steps
        sfs_kfeats_set |= {j}                     # accumulate every k value encountered
sfs_feats = list(sfs_feat_set)
sfs_kfeats = list(sfs_kfeats_set)
###Output
_____no_output_____
###Markdown
sort the index of features by occurence count occurrences of features to sort the index
###Code
idx = pd.IndexSlice
sfs_feat_usecounts = pd.Series(0, index=sfs_onehots.index)  # start at zero so run 1 is not double-counted by the loop below
for i in sfs_featdict:
sfs_feat_usecounts += sfs_onehots.loc[:,idx[i,:]].sum(axis=1)
###Output
_____no_output_____
###Markdown
sort features by frequency of occurence in sfs cases
###Code
sfs_onehots = sfs_onehots.reindex(index=sfs_feat_usecounts.sort_values(ascending=False).index)
sfs_onehots = sfs_onehots.sort_index(axis=1)
###Output
_____no_output_____
###Markdown
viz feats imports
###Code
import matplotlib
%matplotlib inline
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
plot feature selection
###Code
feat_fig, ax = plt.subplots(1,2, figsize=(16,8))
feat_fig.subplots_adjust(wspace=.5)
(sns.heatmap(
sfs_onehots.loc[:,idx[2,:]].T.droplevel(level=0).T,
ax=ax[0], cbar=False)
.set(ylabel=None, xlabel='kfeats')
)
(sns.heatmap(
sfs_onehots.loc[:,idx[3,:]].T.droplevel(level=0).T,
ax=ax[1], cbar=False)
.set(ylabel=None, xlabel='kfeats')
)
ax[0].set_title('Sequential Floating Forward Selection')
ax[1].set_title('Sequential Floating Backward Selection')
feat_fig.suptitle('Logistic Regression: Features selected, graphed against k_features')
feat_fig.show()
feat_fig
###Output
_____no_output_____
###Markdown
plot cv scores from feature selection
###Code
scor_fig, ax = plt.subplots(1,2, figsize=(14,4))
scor_fig.subplots_adjust(wspace=.4)
pd.Series(sfs_scoredict[2]).plot(ax=ax[0])
pd.Series(sfs_scoredict[3]).plot(ax=ax[1])
for axis in ax:
axis.set_xlabel('kfeats')
axis.set_ylabel('accuracy')
ax[0].set_title('Sequential Floating Forward Selection')
ax[1].set_title('Sequential Floating Backward Selection')
scor_fig.suptitle('Logistic Regression: Cross-validation accuracy score from SFS (baseline accuracy 0.5)')
scor_fig.show()
scor_fig
###Output
_____no_output_____
###Markdown
identify best set of feats Get top feats set by cross-val score create empty dataframe for scores
###Code
df_scores = pd.DataFrame('-', ['score'], sfs_onehots.columns)
###Output
_____no_output_____
###Markdown
insert scores into dataframe
###Code
for j in sfs_runs: # runs
for k in sfs_featdict[j]: # steps within run
df_scores.loc['score',(j,k)] = sfs_scoredict[j][k]
df_scores = df_scores.sort_index(axis=1)
###Output
_____no_output_____
###Markdown
select the multiindex that had the best cross-val score across sffs and sfbs
###Code
cols_best_index = df_scores.T.astype(float).nlargest(1, columns='score').T.columns
sfs_feats_best_onehots = sfs_onehots[cols_best_index].T.droplevel(level=0).reset_index(drop=True).T
sfs_feats_best = sfs_feats_best_onehots[(sfs_feats_best_onehots > 0).values].index.tolist()
sfs_score_best = df_scores.T.astype(float).nlargest(1, columns='score').values[0][0]
sfs_score_best, sfs_feats_best
###Output
_____no_output_____
###Markdown
Tune logistic regression penalty "C" (the inverse of the regularization strength)
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
lr_simp = LogisticRegression(max_iter=10000)
Cs = np.logspace(-4,4,500)
pg = {'C': Cs}
grs = GridSearchCV(estimator=lr_simp, param_grid=pg)
grs.fit(Xtr[sfs_feats_best], ytr);
C_scores = pd.Series(grs.cv_results_['mean_test_score'], index=grs.param_grid['C'])
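# Smooth the per-C CV means with a centered rolling average so best_C is picked from the
# overall trend rather than a single noisy spike in the curve.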
C_scores_rolling = C_scores.rolling(10, center=True).mean()
best_C = C_scores_rolling.idxmax()
best_C_score = C_scores_rolling.max()
C_fig, ax = plt.subplots(1,1, figsize=(8,4))
ax.set_xscale('log')
ax.set_xlabel('C (Inverse Ridge Penalty)')
ax.set_ylabel('Mean Accuracy Score')
ax.set_title('Mean Accuracy Across CV Splits (Training Set)\n')
C_scores.plot(ax=ax)
C_fig.show()
C_rolling_fig, ax = plt.subplots(1,1, figsize=(8,4))
ax.set_xscale('log')
ax.set_xlabel('C (Inverse Ridge Penalty)')
ax.set_ylabel('Rolling Mean Accuracy Score')
ax.set_title('Mean Accuracy Across CV Splits (Training Set)\n'
'(Line is Rolling Average of Means, Marker is Best-C)')
C_scores_rolling.plot(ax=ax)
ax.scatter(x=best_C, y=best_C_score, marker='o', color=['r'])
C_rolling_fig.show()
display(C_fig, C_rolling_fig)
print(f'Best C is: {best_C}')
###Output
_____no_output_____
###Markdown
Test
###Code
lr_final = LogisticRegression(max_iter=10000, C=0.013)
lr_final.fit(Xtr[sfs_feats_best],ytr)
lr_final.score(Xte[sfs_feats_best],yte)
###Output
_____no_output_____ |
docs/_shelved_sphinx_content/examples/dispersive-transmon-readout/dispersive-transmon-readout.ipynb | ###Markdown
Example: autoplot-app and dispersive transmon readout data A typical example from the field of superconducting qubits:We perform Rabi oscillations on a transmon (or other) qubit, and measure it for a set of rotation angles dispersively using a resonator.The resulting data is complex -- each of the two qubit states results in a different (noisy) heterodyne signal.After integrating this signal in time we are left we complex data points, the amplitude and phase of which encode the detected qubit state.In the following we will:* generate some fake readout data* use the autoplot app interactively (for instance from a jupyterlab notebook) to inspect it* look at the complex histograms of the readout Making and visualizing the data With the GUI magic we can run Qt programs from within jupyter. The GUI elements of plottr are all written in Qt, so this is required for interactive use in notebooks.
###Code
%gui qt
###Output
_____no_output_____
###Markdown
Some imports::func:`plottr.utils.testdata.dispersive_qubit_readout.angle_data` produces fake qubit readout data with statistical noise given a rotation angle
###Code
import numpy as np
from plottr.utils.testdata.dispersive_qubit_readout import angle_data
from plottr.plot.pyqtgraph.autoplot import AutoPlot as PGAutoPlot
from plottr.data.datadict import str2dd
from plottr.apps.autoplot import autoplot
data = str2dd("signal[](rotation_angle[rad], repetition)")
for theta in np.linspace(0, 4*np.pi, 17):
readout_data = angle_data(theta)
repetitions = np.arange(readout_data.size)
data.add_data(
rotation_angle=theta,
repetition=repetitions,
signal=angle_data(theta)
)
flowchart, dialog = autoplot(
data,
plotWidgetClass=PGAutoPlot
)
###Output
_____no_output_____ |
[email protected] Assignment-22 SQL Assignment on IMDB data.ipynb | ###Markdown
1. List all the directors who directed a 'Comedy' movie in a leap year. (You need to check that the genre is 'Comedy' and year is a leap year) Your query should return director name, the movie name, and the year.
###Code
sql_query='''SELECT p.name,m.title, m.year FROM MOVIE m, M_Genre g,Genre n,M_Director d, Person p
where trim(year) % 4 = 0 \
and m.MID=g.MID \
and m.MID=d.MID \
and d.PID=p.PID \
and g.GID=n.GID \
and trim(n.Name)='Comedy' '''
result = pd.read_sql_query( sql_query, conn)
result
###Output
_____no_output_____
###Markdown
2. List the names of all the actors who played in the movie 'Anand' (1971)
###Code
sql_query='''SELECT p.Name FROM Movie m, M_Cast c, Person p \
where trim(title) = 'Anand' \
and m.MID=c.MID \
and trim(c.PID)=trim(p.PID) '''
result = pd.read_sql_query( sql_query, conn)
result
###Output
_____no_output_____
###Markdown
3. List all the actors who acted in a film before 1970 and in a film after 1990. (That is: acted in some film released before 1970 and also in some film released after 1990.)
###Code
#https://stackoverflow.com/questions/26522727/convert-string-to-int-inside-where-clause-of-sqlite-statment
sql_query='''SELECT Distinct P.Name FROM Movie m, M_Cast c, Person p
where cast(substr(year,-4) as INTEGER) < 1970
and m.MID = c.MID
and trim(c.PID) = p.PID
and trim(p.pid) in (SELECT distinct trim(p.pid) FROM Movie m, M_Cast c, Person p
where cast(substr(year,-4) as INTEGER) > 1990
and m.MID = c.MID
and trim(c.PID) = p.PID)
order by P.name
'''
result = pd.read_sql_query( sql_query, conn)
result
###Output
_____no_output_____
###Markdown
4. List all directors who directed 10 movies or more, in descending order of the number of movies they directed. Return the directors' names and the number of movies each of them directed.
###Code
sql_query='''SELECT p.name,count(p.name) AS 'No of Movies' FROM Movie m, M_Director d, Person p \
where m.MID = d.MID \
and trim(d.PID) = p.PID
group by p.name
having count(p.name) >=10
ORDER BY count(p.name) DESC ; '''
result = pd.read_sql_query( sql_query, conn)
result
###Output
_____no_output_____
###Markdown
5 a. For each year, count the number of movies in that year that had only female actors.
###Code
sql_query='''SELECT distinct substr(year,-4) as 'Year', count(substr(year,-4)) AS 'No of Movies' FROM Movie m, M_Cast c, Person p \
where m.MID = c.MID
and trim(c.PID) = p.PID
and m.mid not in ( SELECT distinct m.mid FROM Movie m, M_Cast c, Person p
where m.MID = c.MID
and trim(c.PID) = p.PID
and p.gender = 'Male' )
Group by substr(year,-4) ; '''
result = pd.read_sql_query( sql_query, conn)
result
###Output
_____no_output_____
###Markdown
5 b. Now include a small change: report for each year the percentage of movies in that year with only female actors, and the total number of movies made that year. For example, one answer will be: 1990 31.81 13522 meaning that in 1990 there were 13,522 movies, and 31.81% had only female actors. You do not need to round your answer.
###Code
sql_query='''Select distinct m.year as 'Year',m.TMovies as 'Total Movies', ((a.FMovies * 100.00) / m.TMovies) as '%age of Female Movies'
from (SELECT substr(m.year,-4) as 'Year', count(m.year) AS 'TMovies' FROM Movie m, M_Cast c, Person p \
where m.MID = c.MID \
and trim(c.PID) = p.PID \
Group by substr(m.year,-4)) m\
Join ( SELECT distinct substr(m.year,-4) as 'Year', count(substr(year,-4)) AS 'FMovies' FROM Movie m, M_Cast c, Person p \
where m.MID = c.MID
and trim(c.PID) = p.PID
and m.mid not in ( SELECT distinct m.mid FROM Movie m, M_Cast c, Person p
where m.MID = c.MID
and trim(c.PID) = p.PID
and p.gender = 'Male' )
Group by substr(m.year,-4)) a on m.year=a.year
group by m.year; '''
result = pd.read_sql_query( sql_query, conn)
result
###Output
_____no_output_____
###Markdown
6. Find the film(s) with the largest cast. Return the movie title and the size of the cast. By "cast size" we mean the number of distinct actors that played in that movie: if an actor played multiple roles, or if it simply occurs multiple times in casts, we still count her/him only once.
###Code
sql_query=''' SELECT m.title,b.Total_Cast FROM Movie m
Left Join (SELECT c.MID,Count(a.PID) as 'Total_Cast' FROM M_Cast c
Left Join (Select Distinct p.PID from Person p) a
where trim(c.PID) = a.PID
group by c.MID) b
WHERE m.MID=b.MID
order by b.Total_Cast DESC
; '''
result = pd.read_sql_query( sql_query, conn)
result
###Output
_____no_output_____
###Markdown
7. A decade is a sequence of 10 consecutive years. For example, say in your database you have movie information starting from 1965. Then the first decade is 1965, 1966, ..., 1974; the second one is 1967, 1968, ..., 1976 and so on. Find the decade D with the largest number of films and the total number of films in D.
###Code
#https://stackoverflow.com/questions/51609285/sql-query-for-find-the-decade-with-the-largest-number-of-records
#sql_query='''select substr(year,-4),count(year) from movie group by year '''
sql_query='''select y.year as decade_start, y.year + 9 as decade_end, count(*) as num_movies
from (select distinct year from movie) y
join movie m on m.year >= y.year and m.year < y.year + 10
group by y.year
order by count(*) desc
limit 1;
'''
result = pd.read_sql_query( sql_query, conn)
result
###Output
_____no_output_____
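###Markdown
As a quick cross-check of the decade query (an added sketch, not part of the original answers), the same rolling-decade count can be computed in pandas; the parsing below mirrors the `substr(year,-4)` trick used earlier and assumes every year value ends in a four-digit year.
###Code
# Illustrative pandas cross-check of the "largest decade" result
years = pd.read_sql_query("select substr(year, -4) as y from Movie", conn)["y"].astype(int)
counts = {start: int(((years >= start) & (years < start + 10)).sum()) for start in sorted(years.unique())}
best = max(counts, key=counts.get)
print(best, "-", best + 9, ":", counts[best], "movies")
###Output
_____no_output_____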
###Markdown
8. Find the actors that were never unemployed for more than 3 years at a stretch. (Assume that the actors remain unemployed between two consecutive movies).
###Code
sql_query='''select p.name from Movie m,M_Cast c,Person p
where m.mid=c.mid
and trim(c.pid)=p.pid
and p.name not in ( select distinct a.name from (select m.year,p.name from Movie m,M_Cast c,Person p
where m.mid=c.mid
and trim(c.pid)=p.pid
order by p.Name,m.year) a
join (select m.year,p.name from Movie m,M_Cast c,Person p
where m.mid=c.mid
and trim(c.pid)=p.pid
order by p.Name,m.year) b on a.name=b.name
where a.year-b.year > 3 )
'''
result = pd.read_sql_query( sql_query, conn)
result
###Output
_____no_output_____
###Markdown
9. Find all the actors that made more movies with Yash Chopra than any other director.
###Code
sql_query='''select distinct a.name from (
select distinct p.pid,p.name,d.pid, count(*) as 'Total' from Movie m,M_Cast c,Person p , M_Director d
where m.mid=c.mid
and m.mid=d.mid
and trim(c.pid)=p.pid
and d.pid not in ('nm0007181')
and p.pid in ( select distinct p.pid from Movie m,M_Cast c,Person p
where m.mid=c.mid
and trim(c.pid)=p.pid
and m.mid in ( select m.mid from Movie m,M_Director d,Person p
where m.mid=d.mid
and trim(d.pid)=p.pid
and p.name like 'Yash%' )) group by p.pid,p.name,d.pid ) a
join (select distinct p.pid,count(trim(p.pid)) as 'Yash' from Movie m,M_Cast c,Person p
where m.mid=c.mid
and trim(c.pid)=p.pid
and m.mid in ( select m.mid from Movie m,M_Director d,Person p
where m.mid=d.mid
and trim(d.pid)=p.pid
and p.name like 'Yash%' )
group by p.pid) b on a.pid=b.pid
where b.Yash > a.Total
'''
result = pd.read_sql_query( sql_query, conn)
result
###Output
_____no_output_____
###Markdown
10. The Shahrukh number of an actor is the length of the shortest path between the actor and Shahrukh Khan in the "co-acting" graph. That is, Shahrukh Khan has Shahrukh number 0; all actors who acted in the same film as Shahrukh have Shahrukh number 1; all actors who acted in the same film as some actor with Shahrukh number 1 have Shahrukh number 2, etc. Return all actors whose Shahrukh number is 2.
###Code
sql_query =''' select distinct p.name from Movie m,M_Cast c,Person p
where m.mid=c.mid
and trim(c.pid)=p.pid
and m.mid in (select distinct m.mid from Movie m,M_Cast c,Person p
where m.mid=c.mid
and trim(c.pid)=p.pid
and p.pid in ( select distinct p.pid from Movie m,M_Cast c,Person p
where m.mid=c.mid
and trim(c.pid)=p.pid
and m.mid in ( select distinct m.mid from Movie m,M_Cast c,Person p
where m.mid=c.mid
and trim(c.pid)=p.pid
and p.name like '%Shah Rukh Khan%')))
'''
result = pd.read_sql_query( sql_query, conn)
result
conn.close()
###Output
_____no_output_____ |
jupyter/incremental_modeling+Local.jupyter-py36.ipynb | ###Markdown
Incremental modeling with decision optimizationThis tutorial includes everything you need to set up decision optimization engines, build a mathematical programming model, then incrementally modify it.You will learn how to:- change coefficients in an expression- add terms in an expression- modify constraints and variables bounds- remove/add constraints- play with relaxationsWhen you finish this tutorial, you'll have a foundational knowledge of _Prescriptive Analytics_.>This notebook is part of the **[Prescriptive Analytics for Python](https://rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html)**>It requires an [installation of CPLEX Optimizers](http://ibmdecisionoptimization.github.io/docplex-doc/getting_started.html)Discover us [here](https://developer.ibm.com/docloud)Table of contents:- [Describe the business problem](Describe-the-business-problem:--Games-Scheduling-in-the-National-Football-League)* [How decision optimization (prescriptive analytics) can help](How--decision-optimization-can-help)* [Use decision optimization](Use-decision-optimization) * [Step 1: Import the library](Step-1:-Import-the-library) * [Step 2: Set up the prescriptive model](Step-2:-Set-up-the-prescriptive-model) * [Step 3: Modify the model](Step-3:-Modify-the-model)* [Summary](Summary)**** Describe the business problem: Telephone productionA possible descriptive model of the telephone production problem is as follows:* Decision variables: * Number of desk phones produced (DeskProduction) * Number of cellular phones produced (CellProduction)Objective: Maximize profit* Constraints: * The DeskProduction should be greater than or equal to 100. * The CellProduction should be greater than or equal to 100. * The assembly time for DeskProduction plus the assembly time for CellProduction should not exceed 400 hours. * The painting time for DeskProduction plus the painting time for CellProduction should not exceed 490 hours.This is a type of discrete optimization problem that can be solved by using either **Integer Programming** (IP) or **Constraint Programming** (CP). > **Integer Programming** is the class of problems defined as the optimization of a linear function, subject to linear constraints over integer variables. > **Constraint Programming** problems generally have discrete decision variables, but the constraints can be logical, and the arithmetic expressions are not restricted to being linear. For the purposes of this tutorial, we will illustrate a solution with mathematical programming (MP). How decision optimization can help* Prescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes. * Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes. * Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage. 
With prescriptive analytics, you can: * Automate the complex decisions and trade-offs to better manage your limited resources.* Take advantage of a future opportunity or mitigate a future risk.* Proactively update recommendations based on changing events.* Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes. Use decision optimization Step 1: Import the libraryRun the following code to import Decision Optimization CPLEX Modeling library. The *DOcplex* library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier.
###Code
import sys
try:
import docplex.mp
except:
raise Exception('Please install docplex. See https://pypi.org/project/docplex/')
###Output
_____no_output_____
###Markdown
A restart of the kernel might be needed. Step 2: Set up the prescriptive model Writing a mathematical model Convert the descriptive model into a mathematical model:* Use the two decision variables DeskProduction and CellProduction* Use the data given in the problem description (remember to convert minutes to hours where appropriate)* Write the objective as a mathematical expression* Write the constraints as mathematical expressions (use “=”, “<=”, or “>=”, and name the constraints to describe their purpose)* Define the domain for the decision variables Telephone production: a mathematical model To express the last two constraints, we model assembly time and painting time as linear combinations of the two productions, resulting in the following mathematical model: maximize: 12 desk_production+20 cell_production subject to: desk_production>=100 cell_production>=100 0.2 desk_production+0.4 cell_production<=400 0.5 desk_production+0.4 cell_production<=490
###Code
# first import the Model class from docplex.mp
from docplex.mp.model import Model
# create one model instance, with a name
m = Model(name='telephone_production')
###Output
_____no_output_____
###Markdown
The integer variable desk represents the production of desk telephones. The integer variable cell represents the production of cell phones.
###Code
# by default, all variables in Docplex have a lower bound of 0 and infinite upper bound
desk = m.integer_var(name='desk')
cell = m.integer_var(name='cell')
m.maximize(12 * desk + 20 * cell)
# write constraints
# constraint #1: desk production is greater than 100
m.add_constraint(desk >= 100, "desk")
# constraint #2: cell production is greater than 100
m.add_constraint(cell >= 100, "cell")
# constraint #3: assembly time limit
ct_assembly = m.add_constraint( 0.2 * desk + 0.4 * cell <= 400, "assembly_limit")
# constraint #4: painting time limit
ct_painting = m.add_constraint( 0.5 * desk + 0.4 * cell <= 490, "painting_limit")
###Output
_____no_output_____
###Markdown
Solve with Decision Optimization If you're using a Community Edition of CPLEX runtimes, depending on the size of the problem, the solve stage may fail and will need a paid subscription or product installation. You will get the best solution found after ***n*** seconds, thanks to a time limit parameter.
###Code
m.print_information()
msol = m.solve()
assert msol is not None, "model can't solve"
m.print_solution()
###Output
_____no_output_____
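###Markdown
The time limit mentioned above is controlled through the model's CPLEX parameters; for example (an added sketch, with an arbitrary 30-second value):
###Code
# Limit the solver to 30 seconds (illustrative value only)
m.parameters.timelimit = 30
###Output
_____no_output_____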
###Markdown
Step 3: Modify the model Modify constraints and variable bounds The model object provides getters to retrieve variables and constraints by name:* get_var_by_name* get_constraint_by_name The variable and constraint objects both provide properties to access the right hand side (rhs) and left hand side (lhs). When you modify a bound (lb or ub) of a variable, you need to give a number. When you modify the rhs or lhs of a constraint, you can give a number or an expression based on variables. Let's say we want to build 2000 cells and 1000 desks maximum. And let's say we want to increase the minimum production of both of them from 100 to 350
###Code
# Access by name
m.get_var_by_name("desk").ub = 2000
# access via the object
cell.ub = 1000
m.get_constraint_by_name("desk").rhs = 350
m.get_constraint_by_name("cell").rhs = 350
msol = m.solve()
assert msol is not None, "model can't solve"
m.print_solution()
###Output
_____no_output_____
###Markdown
The production plan has been updated according to our small changes. Modify expressions We now want to introduce a new type of product: the "hybrid" telephone.
###Code
hybrid = m.integer_var(name='hybrid')
###Output
_____no_output_____
###Markdown
We need to:- introduce it in the objective- introduce it in the existing painting and assembly time constraints - add a new constraint for its production to produce at least 350 of them.
###Code
m.add_constraint(hybrid >= 350)
;
###Output
_____no_output_____
###Markdown
The objective will move from maximize: 12 desk_production+20 cell_production to maximize: 12 desk_production+20 cell_production + 10 hybrid_production
###Code
m.get_objective_expr().add_term(hybrid, 10)
;
###Output
_____no_output_____
###Markdown
The time constraints will be updated from 0.2 desk_production+0.4 cell_production<=400 and 0.5 desk_production+0.4 cell_production<=490 to 0.2 desk_production+0.4 cell_production + 0.2 hybrid_production<=400 and 0.5 desk_production+0.4 cell_production + 0.2 hybrid_production<=490. When you add a constraint to a model, its object is returned to you by the method add_constraint. If you don't have it, you can access it via its name
###Code
m.get_constraint_by_name("assembly_limit").lhs.add_term(hybrid, 0.2)
ct_painting.lhs.add_term(hybrid, 0.2)
;
###Output
_____no_output_____
###Markdown
We can now compute the new production plan for our 3 products
###Code
msol = m.solve()
assert msol is not None, "model can't solve"
m.print_solution()
###Output
_____no_output_____
###Markdown
Let's now say we improved our painting process: the distribution of the coefficients in the painting limit is no longer [0.5, 0.4, 0.2] but [0.1, 0.1, 0.1]. When you have a handle on an expression, you can modify its coefficients variable by variable with set_coefficient, or via a list of (variable, coeff) pairs with set_coefficients
###Code
ct_painting.lhs.set_coefficients([(desk, 0.1), (cell, 0.1), (hybrid, 0.1)])
msol = m.solve()
assert msol is not None, "model can't solve"
m.print_solution()
###Output
_____no_output_____
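###Markdown
For a single variable, `set_coefficient` works the same way; for example (an added sketch that leaves the model unchanged, since 0.1 is already the desk coefficient):
###Code
# Change (here: re-set) a single coefficient of the painting constraint's left-hand side
ct_painting.lhs.set_coefficient(desk, 0.1)
###Output
_____no_output_____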
###Markdown
Relaxations Let's now introduce a new constraint: polishing time limit.
###Code
# constraint: polishing time limit
ct_polishing = m.add_constraint( 0.6 * desk + 0.6 * cell + 0.3 * hybrid <= 290, "polishing_limit")
msol = m.solve()
if msol is None:
print("model can't solve")
###Output
_____no_output_____
###Markdown
The model is now infeasible. We need to handle it and dig into the infeasibilities. You can now use the Relaxer object. You can control the way it will relax the constraints, or you can use one of the various automatic modes:- 'all' relaxes all constraints using a MEDIUM priority; this is the default.- 'named' relaxes all constraints with a user name but not the others.- 'match' looks for priority names within constraint names; unnamed constraints are not relaxed. We will use the 'match' mode. The polishing constraint is mandatory, the painting constraint is nice to have, and the assembly constraint has low priority.
###Code
ct_polishing.name = "high_"+ct_polishing.name
ct_assembly.name = "low_"+ct_assembly.name
ct_painting.name = "medium_"+ct_painting.name
# if a name contains "low", it has priority LOW
# if a ct name contains "medium" it has priority MEDIUM
# same for HIGH
# if a constraint has no name or does not match any, it is not relaxable.
from docplex.mp.relaxer import Relaxer
relaxer = Relaxer(prioritizer='match', verbose=True)
relaxed_sol = relaxer.relax(m)
relaxed_ok = relaxed_sol is not None
assert relaxed_ok, "relaxation failed"
relaxer.print_information()
m.print_solution()
ct_polishing_relax = relaxer.get_relaxation(ct_polishing)
print("* found slack of {0} for polish ct".format(ct_polishing_relax))
ct_polishing.rhs+= ct_polishing_relax
m.solve()
m.report()
m.print_solution()
###Output
_____no_output_____ |
Introduction_to_sleep_scoring.ipynb | ###Markdown
Morpheo baseline project Introduction to sleep scoring[Colaboratory version](https://colab.research.google.com/drive/1o8sjVQrexX20cv3Ca5dA5TQz_ceSS-xp) Please execute the cell below to initialize the notebook environment
###Code
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import requests
import h5py
import keras
plt.rcParams.update({'figure.figsize': (4.5, 3.5)})
###Output
Using TensorFlow backend.
###Markdown
Please execute the cell below to download the sleep scoring dataset (if needed)
###Code
!if [ ! -d ./datasets ]; then git clone https://github.com/mpbrigham/intro-sleep-scoring; \
cp -rf ./intro-sleep-scoring/datasets ./; fi
###Output
_____no_output_____
###Markdown
Data visualization Import datasetInvestigate the structure of polysomnography records in HDF5 format with `h5py` library.**Suggestions*** Open HDF5 database `mesa_sleep_0001_s.h5`* Print table names and shapes**Refs*** h5py: HDF5 for Python (http://docs.h5py.org/en/latest)
###Code
path = './datasets/mesa_small/mesa_sleep_0001_s.h5'
# insert your code here
###Output
_____no_output_____
###Markdown
**EXPECTED OUTPUT**```table shapeEEG1 (1439, 1920)EEG2 (1439, 1920)EEG3 (1439, 1920)EKG (1439, 1920)EMG (1439, 1920)EOG-L (1439, 1920)EOG-R (1439, 1920)stages (1439,)``` Data typesCheck data type of tables and their records for EEG tables (`EEG*`) and hypnogram table (`stages`).The hypnogram is split in 30 s intervals of recording, called *epochs*. Each epoch is assigned a sleep score.Print data type of EEG tables and their records.
###Code
print('table\t table type\t\t\t\t record type\n')
# insert your code here
###Output
_____no_output_____
###Markdown
**EXPECTED OUTPUT**```table table type record typeEEG1 float32EEG2 float32EEG3 float32stages int32``` Convert data to Numpy arraysExport EEG channels to array `x` and hypnogram to array `y`.Print variable type of arrays `x` and `y`, and their contents. **Suggestions*** Concatenate tables `EEG*` into array `x` with shape `(1439, 1920, 3)`* Save table `stages` into array `y` with shape `(1439, 1)`
###Code
x, y = (None, None)
print('var\t var type\t\t\t element type\t var shape\n')
# insert your code here
###Output
_____no_output_____
###Markdown
**EXPECTED OUTPUT**```var var type element type var shapex float32 (1439, 1920, 3)y int32 (1439, 1)``` Visualize dataVisualize data from EEG channels and hypnogram by plotting epoch 1000 of each.**Suggestions*** Use functions `plt.figure()` and `plt.plot()` from Matplotlib to plot first 200 samples of epoch 1001 of array `x`. Add a small value to each channel to separate them vertically.* Plot all samples from array `y`.
###Code
# insert your code here
###Output
_____no_output_____
###Markdown
**EXPECTED OUTPUT** Visualize proportion of sleep stagesPrint unique elements of array y.Print sleep stage proportions with names provided in dictionary `y_name` given below.
###Code
y_name = {0: 'AWA', 1:'N1', 2:'N2', 3:'N3', 4:'REM'}
# insert your code here
###Output
_____no_output_____
###Markdown
**EXPECTED OUTPUT**```unique elements of hypnogram: [0, 1, 2, 3, 4] AWA 0.0000N1 0.0938N2 0.6324N3 0.0396REM 0.2168``` Visualize EEG samples per sleeping stageVisualize first 200 samples of EEG from 2nd epoch of each sleeping stage.**Suggestions*** Use function `np.where()` from Numpy to find relevant indexes in array `y`* Use functions `plt.subplot()` from Matplotlib to plot multiple plots in the same figure
###Code
fig = plt.figure(figsize=(20,3.5))
# insert your code here
###Output
_____no_output_____
###Markdown
**EXPECTED OUTPUT** Data pre-processing Basic statistical metricsWrite function `print_stats()` to print minimum, maximum, mean and standard deviation of EEG channels in array `x`.Print unique elements of array `y` and their proportions.**Suggestions*** Use functions `np.min()`, `np.max()`, `np.mean()` and `np.std()` to print statistics of array `x`* Print table of sleep stage proportions in `y`
###Code
def print_stats(x, name='EEG'):
"""Print minimum, maximum, mean and standard deviation along dimension 0 of array.
x: array with shape (channel, batch, data)
"""
print(name+'\t min\t\t max\t\t mean\t\t std\n')
# insert your code here
print_stats(x)
###Output
_____no_output_____
###Markdown
**EXPECTED OUTPUT**```EEG min max mean std0 -2.6387 2.6618 -0.0107 0.16571 -2.6356 2.8187 0.0856 0.19542 -6.5130 6.2349 0.0080 0.6050``` Plot histogram of EEGPlot histogram of EEG channels in array `x`.**Suggestions*** Use function `np.linspace()` from Numpy to define array `x_bins` with 100 successive values between -0.2 and 0.2.* Use function `plt.hist()` from Matplotlib to plot histogram of array `x`, using keyword `bins` to set bins location to `x_bins`
###Code
# insert your code here
###Output
_____no_output_____
###Markdown
**EXPECTED OUTPUT** Remove mean from EEG dataWrite function `pre_process()` to remove channel mean from EEG channels.Apply function to array `x`, print basic statistical metrics and plot histogram.**Suggestions*** Use function `np.mean()` with keyword `keepdims` to measure mean of EEG channels* Remove mean of EEG channels from array `x`
###Code
def pre_process(x):
"""Remove channel mean from array x.
    x: array with shape (batch, data, channel)
returns x_out: array x with zero channel mean
"""
x_out = x
# insert your code here
return x_out
x = pre_process(x)
print_stats(x)
# insert your code here
###Output
_____no_output_____
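###Markdown
One possible implementation (an added sketch, not necessarily the intended solution): subtract the per-channel mean computed over all epochs and samples. Re-running the cell above with this definition reproduces the zero means shown below.
###Code
def pre_process(x):
    """Remove the per-channel mean from array x (possible solution sketch)."""
    return x - np.mean(x, axis=(0, 1), keepdims=True)
###Output
_____no_output_____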
###Markdown
**EXPECTED OUTPUT**```EEG min max mean std0 -2.6280 2.6725 -0.0000 0.16571 -2.7213 2.7331 -0.0000 0.19542 -6.5211 6.2269 -0.0000 0.6050``` Prepare data for training Write data import functionWrite function `load_data()` to import EEG channels and hypnogram from HDF5 database.
###Code
def load_data(path):
"""Import EEG channels and hypnogram from HDF5 database.
    path: filesystem path of HDF5 database
returns x: array containing EEG channels
y: array containing hypnogram
"""
x = None
y = None
# insert your code here
return (x, y)
path = './datasets/mesa_small/mesa_sleep_0002_s.h5'
x, y = load_data(path)
x = pre_process(x)
if x is not None:
print(x[0,:5,0])
print(y[1000:1005])
###Output
_____no_output_____
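###Markdown
A possible implementation (an added sketch): stack the three EEG tables along a trailing channel axis and reshape the hypnogram into a column vector, matching the shapes reported earlier.
###Code
def load_data(path):
    """Possible sketch: read EEG1-3 and the hypnogram from the HDF5 database."""
    with h5py.File(path, 'r') as f:
        x = np.stack([f['EEG1'][:], f['EEG2'][:], f['EEG3'][:]], axis=-1)
        y = f['stages'][:].reshape(-1, 1)
    return x, y
###Output
_____no_output_____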
###Markdown
**EXPECTED OUTPUT**```[0.04396349 0.07795955 0.07226041 0.07493883 0.06915694][[4] [4] [4] [4] [4]]``` Import test setImport test set from HDF5 database `mesa_sleep_0021_s` into arrays `x_test` and `y_test`.Print shapes of the new arrays, and basic EEG stats from array `x_test`.
###Code
path = './datasets/mesa_small/mesa_sleep_0021_s.h5'
x_test, y_test = load_data(path)
x_test = pre_process(x_test)
# insert your code here
###Output
_____no_output_____
###Markdown
**EXPECTED OUTPUT**```var shapex_test (1079, 1920, 3)y_test (1079, 1)EEG min max mean std0 -1.9979 0.6084 0.0000 0.02221 -0.4698 2.0072 0.0000 0.03802 -5.0036 1.0731 0.0000 0.0511 ``` Split dataset into train and validation setsSplit arrays `x` and `y` into train and validation sets `x_train`, `x_val`, `y_train` and `y_val`. The validation set contains 300 epochs from each HDF5 database.Print the shapes of the new arrays.**Note:** the function `np.random.seed(0)` from Numpy is used to replicate the expected output.**Suggestions*** Create boolean array `idx` with 1439 elements initialized with `False` values* Use function `np.random.choice()` to randomly select (without replacement) 300 elements and set them to `True`* Split `x` into `x_train` and `x_val` according to array `idx`* Use function `np.random.seed(0)` from Numpy to replicate the expected output
###Code
np.random.seed(0)
x_val = None
y_val = None
x_train = None
y_train = None
# insert your code here
###Output
_____no_output_____
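###Markdown
One way to build the split (an added sketch): a boolean mask with 300 randomly chosen validation epochs, as suggested above.
###Code
# Possible solution sketch for the train/validation split
idx = np.zeros(len(x), dtype=bool)
idx[np.random.choice(len(x), 300, replace=False)] = True
x_val, y_val = x[idx], y[idx]
x_train, y_train = x[~idx], y[~idx]
###Output
_____no_output_____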
###Markdown
**EXPECTED OUTPUT**```var shapex_train (1019, 1920, 3)y_train (1019, 1)x_val (300, 1920, 3)y_val (300, 1)``` Generate train and validation setsCreate train and validation sets in arrays `x_train`, `y_train`, `x_val` and `y_val` from HDF5 databases `mesa_sleep_0001_s`, `mesa_sleep_0002_s`, `mesa_sleep_0006_s`, `mesa_sleep_0014_s` and `mesa_sleep_0016_s`.Print the shapes of train and validation datasets. Print basic statistical metrics of array `x_train`.
###Code
np.random.seed(0)
paths = ['./datasets/mesa_small/mesa_sleep_0001_s.h5',
'./datasets/mesa_small/mesa_sleep_0002_s.h5',
'./datasets/mesa_small/mesa_sleep_0006_s.h5',
'./datasets/mesa_small/mesa_sleep_0014_s.h5',
'./datasets/mesa_small/mesa_sleep_0016_s.h5']
# insert your code here
###Output
_____no_output_____
###Markdown
**EXPECTED OUTPUT**```var shapex_train (5215, 1920, 3)y_train (5215, 1)x_val (1500, 1920, 3)y_val (1500, 1)EEG min max mean std0 -2.7610 2.7743 0.0002 0.15891 -2.7251 2.7513 0.0001 0.42172 -6.7197 6.6099 0.0006 0.4857``` Model setup Write input/output conversion functionsWrite function `to_input()` to convert EEG data into 2-dimensional array by concatenating EEG channels on dimension 1.i.e array `x_train` with shape `(5215, 1920, 3)` is mapped to array with shape `(5215, 5760)`.Write function `to_output()` to sleep scores into binarized `one-hot-encoding`, using function `keras.utils.to_categorical()` from Keras library. This transformation assigns score `0` to `[1 0 0 0 0]`, score `1` to `[0 1 0 0 0]`, etc.i.e array `y_train` with shape `(5215, 1)` is mapped to array with shape `(5215, 5)`.
###Code
from keras import backend as K
from keras.layers import Input, Dense, Layer
from keras.models import Sequential
def to_input(x):
"""Convert data array to shape (batch, data).
    x: array with shape (batch, data, channel)
returns x_out: array x with shape (batch, data)
"""
x_out = None
# insert your code here
return x_out
def to_output(y):
"""Convert label array to one-hot-encoding with shape (batch, data).
    y: label array with shape (batch, 1)
returns: y_out (array with shape (batch, label))
"""
y_out = None
# insert your code here
return y_out
if to_input(x_train) is not None:
print('var\t\t\t shape\n')
for item, item_name in ([[to_input(x_train), 'to_input(x_train)'],
[to_output(y_train), 'to_output(y_train)']]):
print(item_name, '\t', item.shape)
print('\n')
print(to_input(x_train)[:2])
print(to_output(y_train)[:2])
###Output
_____no_output_____
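###Markdown
A possible sketch for the two converters; note that the channel ordering used when flattening (here the three channels are concatenated block-wise) is an assumption and may differ from the reference solution.
###Code
def to_input(x):
    # (batch, 1920, 3) -> (batch, 5760): put the three channels side by side
    return x.transpose(0, 2, 1).reshape(x.shape[0], -1)

def to_output(y):
    # integer sleep stages -> one-hot vectors with 5 classes
    return keras.utils.to_categorical(y, num_classes=5)
###Output
_____no_output_____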
###Markdown
**EXPECTED OUTPUT**```var shapeto_input(x_train) (5215, 5760)to_output(y_train) (5215, 5)[[ 0.00099635 0.0048069 0.00776085 ... 0.00423072 -0.00867984 0.02381653] [ 0.00212982 0.00123634 -0.00316261 ... 0.01105075 0.03953509 0.04564268]][[1. 0. 0. 0. 0.] [1. 0. 0. 0. 0.]] ``` Convert datasets to network input/output formatConvert datasets into format compatible with network using functions `to_input()` and `to_output()`.Print shapes of the new arrays.
###Code
input_train = to_input(x_train)
input_val = to_input(x_val)
input_test = to_input(x_test)
output_train = to_output(y_train)
output_val = to_output(y_val)
output_test = to_output(y_test)
if input_train is not None:
input_shape = input_train.shape[1]
output_shape = output_train.shape[1]
print('var\t\t shape\n')
# insert your code here
###Output
_____no_output_____
###Markdown
**EXPECTED OUTPUT**```var shapeinput_train (5215, 5760)input_val (1500, 5760)input_test (1079, 5760)output_train (5215, 5)output_val (1500, 5)output_test (1079, 5)``` Softmax modelImplement simple network with softmax output layer with library Keras (https://keras.io/).Write function `model_softmax()` that returns the compiled model.Use adadelta optimizer, binary crossentropy loss and categorical accuracy metric.
###Code
def model_softmax():
"""Define softmax network
returns m: Keras model with softmax output
"""
m = None
# insert your code here
return m
model = model_softmax()
if model is not None:
model.summary()
###Output
_____no_output_____
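###Markdown
One possible definition (an added sketch) consistent with the imports and the expected summary shown below; using a bare pass-through `Layer` as a named input is an assumption based on the "input (Layer)" row, and a functional model built from `Input` would work equally well.
###Code
def model_softmax():
    m = Sequential()
    m.add(Layer(input_shape=(input_shape,), name='input'))  # identity layer that fixes the input size
    m.add(Dense(output_shape, activation='softmax', name='output'))
    m.compile(optimizer='adadelta',
              loss='binary_crossentropy',
              metrics=['categorical_accuracy'])
    return m
###Output
_____no_output_____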
###Markdown
**EXPECTED OUTPUT**```_________________________________________________________________Layer (type) Output Shape Param =================================================================input (Layer) (None, 5760) 0 _________________________________________________________________output (Dense) (None, 5) 28805 =================================================================Total params: 28,805Trainable params: 28,805Non-trainable params: 0_________________________________________________________________``` Train softmax modelTrain softmax network during 5 training epochs, batch size of 32 with sample shuffling.**Suggestions*** Use method `fit()` with keywords `epochs`, `batch_size` and `shuffle` to train model* Use method `evaluate()` to evaluate performance metrics in validation and test sets
###Code
np.random.seed(0)
n_epochs = 5
model = model_softmax()
# insert your code here
###Output
_____no_output_____
###Markdown
**EXPECTED OUTPUT**```Epoch 1/55215/5215 [==============================] - 2s 295us/step - loss: 0.5193 - categorical_accuracy: 0.3342...Epoch 5/55215/5215 [==============================] - 1s 162us/step - loss: 0.4387 - categorical_accuracy: 0.4086Dataset loss accuracyval 0.4674 0.3487test 0.5068 0.1168``` Performance evaluation with cross validationEstimate model performance on unseen data with function `cross_validation()` that implements *leave-one-out cross validation*. In this performance evaluation scheme the original dataset is split in `K` sets or *folds*. The model is succesfully trained in `K-1` sets and tested on the remaining set. In this case, a set corresponds to a database.
###Code
def cross_validation(paths, model_ref, epochs=5, verbose=True):
"""Leave-one-out cross validation scheme at database level
paths: list containing paths of HDF5 databases
model_ref: Keras model
epochs: number of training epochs
verbose: print intermediate results
returns models: list with trained Keras models
metrics: list with validation and test accuracy
"""
models = []
metrics = []
# insert your code here
return (models, metrics)
###Output
_____no_output_____
###Markdown
Train softmax model with cross validationTrain softmax model with cross validation, 5 training epochs, batch size of 32 with sample shuffling.
###Code
paths = ['./datasets/mesa_small//mesa_sleep_0001_s.h5',
'./datasets/mesa_small//mesa_sleep_0002_s.h5',
'./datasets/mesa_small//mesa_sleep_0006_s.h5',
'./datasets/mesa_small//mesa_sleep_0014_s.h5',
'./datasets/mesa_small//mesa_sleep_0016_s.h5',
'./datasets/mesa_small//mesa_sleep_0021_s.h5']
np.random.seed(0)
models, model_test = cross_validation(paths, model_softmax, epochs=n_epochs)
# insert your code here
###Output
_____no_output_____
###Markdown
**EXPECTED OUTPUT**```6 fold cross-validationacc val acc test fold0.3753 0.3912 10.3333 0.3601 20.3000 0.3031 30.4060 0.3300 40.3327 0.3011 50.3513 0.1158 6min max mean std (accuracy)0.3000 0.4060 0.3498 0.0338 val0.1158 0.3912 0.3002 0.0883 test``` ANN modelImplement ANN model with single hidden layer with 256 ReLU units and softmax output.Write function `model_ann()` that returns the compiled model.Use adadelta optimizer, binary crossentropy loss and categorical accuracy metric.
###Code
def model_ann():
"""Define shallow ANN model
returns m: shallow ANN Keras model
"""
m = None
# insert your code here
return m
model = model_ann()
if model is not None:
model.summary()
###Output
_____no_output_____
###Markdown
**EXPECTED OUTPUT**```_________________________________________________________________Layer (type) Output Shape Param =================================================================input (Layer) (None, 5760) 0 _________________________________________________________________h1 (Dense) (None, 256) 1474816 _________________________________________________________________output (Dense) (None, 5) 1285 =================================================================Total params: 1,476,101Trainable params: 1,476,101Non-trainable params: 0_________________________________________________________________``` Train ANN modelTrain ANN model during 5 training epochs, batch size of 32 with sample shuffling.**Note:** the actual output may be slightly different from the expected output
###Code
np.random.seed(0)
n_epochs = 5
model = model_ann()
# insert your code here
###Output
_____no_output_____
###Markdown
**EXPECTED OUTPUT** (may vary slightly)```Epoch 1/55215/5215 [==============================] - 2s 328us/step - loss: 0.4492 - categorical_accuracy: 0.4276...Epoch 5/55215/5215 [==============================] - 1s 252us/step - loss: 0.3614 - categorical_accuracy: 0.5528Dataset loss accuracyval 0.4168 0.5080test 0.4888 0.4847``` Train ANN model with cross validationTrain ANN model with cross validation, 5 training epochs, batch size of 32 with sample shuffling.**Note:** the actual output may be slightly different from the expected output
###Code
np.random.seed(0)
models, model_test = cross_validation(paths, model_ann, epochs=n_epochs)
# insert your code here
###Output
_____no_output_____ |
notebooks/example/ModelAnalysis.ipynb | ###Markdown
Plotting and analysing models
###Code
# Start with imports
# These should always live at the top of a notebook
# Import commonly used external libraries - you won't always need these, but most of the time you will
import numpy as np
import pandas as pd
# Import our project interface - this is the main method of accessing our models
from autumn.tools.project import get_project
# Also include our custom pretty printer - it can make things a lot easier to read!
from autumn.tools.utils.display import pretty_print
model_name = 'covid_19'
model_region = 'malaysia'
project = get_project(model_name, model_region)
###Output
_____no_output_____
###Markdown
Working with parametersAuTuMN provides a number of facilities for interacting with model parameters, but the basic pattern is always the same; 1. Get some parameters2. Modify them (or not)3. Build and run a model
###Code
# Run the model with unmodified baseline parameters
# This command returns a summer CompartmentalModel object that contains the completed run data
params = project.param_set.baseline
m = project.run_baseline_model(params)
# Summer provides convenience functions to access model outputs (and derived outputs) as pandas DataFrames
# This is the recommended way of using model outputs in an interactive context
outputs_df = m.get_outputs_df()
doutputs_df = m.get_derived_outputs_df()
# Let's have a quick look at one of the derived outputs - pandas makes it easy to plot directly
doutputs_df['accum_deaths'].plot()
###Output
_____no_output_____
###Markdown
Changing parametersIn the above example, we simply ran the model with the 'default' parameters (i.e. those specified in the project file). Here we will modify parameters programmatically and compare outputs from different runs
###Code
# Have a look at our baseline params; using pretty printing can make it easier to see what's going on
params_baseline = project.param_set.baseline
pretty_print(params_baseline)
# Let's say we are interested in experimenting with a single parameter - contact rate
# We'll take a look at the existing value first
params_baseline['contact_rate']
# To change a parameter value, the AuTuMN parameters API creates a non-destructive copy of the parameters with updates applied
# This is the recommended way of interacting with parameters, since it includes
# automatic validation facilities and other niceties
#
# In performance-sensitive contexts, you may want to interact with parameter dictionaries directly
# Updates are passed in via dictionaries
updates = {'contact_rate': 0.03}
# Create a new parameter set - params_baseline remains unchanged
new_params = params_baseline.update(updates)
new_params['contact_rate']
# Run a model with the new parameters, and get its derived outputs
new_do_df = project.run_baseline_model(new_params).get_derived_outputs_df()
# Examine the outputs
# We'll plot the original and new outputs overlayed so we can compare them directly
doutputs_df['accum_deaths'].plot(label='orig')
ax = new_do_df['accum_deaths'].plot(label='new')
ax.legend()
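# (Added illustration, not from the original notebook) The difference between the two
# scenarios can also be plotted directly from the two derived-output DataFrames above.
(new_do_df['accum_deaths'] - doutputs_df['accum_deaths']).plot(title='accum_deaths: new - orig')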
# Here we generate a range of values programmatically and compare all of them
# This is not necessarily the best or fastest way to do this, but does demonstrate how easy it is to build
# tools for exploring model dynamics
def run_model_with_param_updates(baseline, update_dict):
params = baseline.update(update_dict)
m = project.run_baseline_model(params)
return m
cr_comp_df = pd.DataFrame()
for contact_rate in np.linspace(0.01, 0.05, 5):
cur_results = run_model_with_param_updates(params_baseline, {'contact_rate': contact_rate})
cr_comp_df[contact_rate] = cur_results.get_derived_outputs_df()['accum_deaths']
# Let's have a look at the outputs
ax = cr_comp_df.plot(title="Accum deaths by contact rate")
# bbox_to_anchor lets you place a plot legend specifically
# For plots with a lot of data, you may want to move it outside
# of the plotting frame altogether as we have done here
ax.legend(bbox_to_anchor=(1.0,1.0))
###Output
_____no_output_____
notebooks/5.4. Modifying Network - Addition.ipynb | ###Markdown
AdditionYou can add one network to another. The network you're adding the other network too will be updated with the nodes, link and data from the other network. The process aims to consolidate node and link indexing, for nodes in the same spot and links with the same modes.This method should only be used with networks that have been generated in the same manner, so two PT2MATSim networks or two GeNet OSM networks, both of which either simplified or not. It is recommended that they are not simplified at the time of adding, as some nodes may have ceased to exist through simplification, possibly leading to a network with weird behaviour, duplicated links (especially when the networks have different density) or connectivity issues.For now, the method only supports non overlapping services for `Schedules`, so let's merge two `Network`s with just graphs. Below we make two networks from OSM. One a small, but denser subset of the other and add them together.
###Code
from genet import read_osm
_n_tiny = read_osm('../example_data/tiny_example.osm',
'../genet/configs/OSM/default_config.yml',
epsg='epsg:27700')
_n_tiny
_n_tiny.plot()
_n = read_osm('../example_data/example.osm',
'../genet/configs/OSM/slim_config.yml',
epsg='epsg:27700')
_n
_n.plot()
###Output
/Users/kasia.kozlowska/pycharm_venvs/genet/lib/python3.7/site-packages/pyproj/crs/crs.py:53: FutureWarning: '+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method. When making the change, be mindful of axis order changes: https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6
return _prepare_from_string(" ".join(pjargs))
/Users/kasia.kozlowska/pycharm_venvs/genet/lib/python3.7/site-packages/osmnx/utils_graph.py:56: FutureWarning: Assigning CRS to a GeoDataFrame without a geometry column is now deprecated and will not be supported in the future.
gdf_nodes = gpd.GeoDataFrame(data, index=nodes, crs=crs)
###Markdown
The `add` method actually adds one `Network` onto another, rather than create a new instance to save some memory. The `Network` being added will inherit or change link or node ids depending on the `Network` it's being added to.
###Code
_n.add(_n_tiny)
_n
_n.plot()
###Output
/Users/kasia.kozlowska/pycharm_venvs/genet/lib/python3.7/site-packages/pyproj/crs/crs.py:53: FutureWarning: '+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method. When making the change, be mindful of axis order changes: https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6
return _prepare_from_string(" ".join(pjargs))
/Users/kasia.kozlowska/pycharm_venvs/genet/lib/python3.7/site-packages/osmnx/utils_graph.py:56: FutureWarning: Assigning CRS to a GeoDataFrame without a geometry column is now deprecated and will not be supported in the future.
gdf_nodes = gpd.GeoDataFrame(data, index=nodes, crs=crs)
|
01-basics/03-CIFAR10_pl.ipynb | ###Markdown
CIFAR10: Training a classifier with **PyTorch-lightning**[](https://colab.research.google.com/github/lento234/ml-tutorials/blob/main/01-basics/03-CIFAR10_pl.ipynb)**References**:- https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html- https://pytorch-lightning.rtfd.io/en/latest/**Runtime setup: GPU accelerator at Google colab:**1. On the main menu, click **Runtime** and select **Change runtime type**. 2. Select **GPU** as the hardware accelerator.
###Code
!nvidia-smi
###Output
_____no_output_____
###Markdown
**Table of contents**1. [Load and pre-process the dataset](load)2. [Define the CNN model **+ training step + loss + optimizer**](define)3. [Setup the **trainer**](trainer)4. [Train **and validate** the model on **train** and **test** dataset](train)5. [Assess training with **tensorboard**](tensorboard)6. [Test the model](validate) **CIFAR10 Dataset**The dataset consists of `3x32x32` images of 10 different classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. SetupLightning is easy to install. Simply ```pip install pytorch-lightning```
###Code
!pip install pytorch-lightning --quiet
###Output
_____no_output_____
###Markdown
Environment
###Code
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torchvision import transforms, datasets
import pytorch_lightning as pl
mpl.style.use('seaborn-poster')
mpl.rcParams['mathtext.fontset'] = 'cm'
mpl.rcParams['figure.figsize'] = 5 * np.array([1.618033988749895, 1])
pl.seed_everything(234)
###Output
_____no_output_____
###Markdown
Hyper-parameters
###Code
batch_size = 32
num_workers = 4
num_epochs = 5
learning_rate = 0.001
momentum = 0.9
###Output
_____no_output_____
###Markdown
1. Load and pre-process data- Define preprocessing algorithm- Load training and test dataset 1.1 Define preprocessing algorithm
###Code
transform = transforms.Compose([
transforms.ToTensor(), # convert data to pytorch tensor
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) # normalize dataset for each channel
])
###Output
_____no_output_____
###Markdown
1.2 Load training and test dataset
###Code
# Download train and test dataset
train_dataset = datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
test_dataset = datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
# Dataset sampler (shuffle, distributed loading)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size,
shuffle=False, num_workers=num_workers)
print(f"num. examples: train = {len(train_dataset)}, test = {len(test_dataset)}")
classes = np.array(['plane', 'car', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck'])
num_classes = len(classes)
def imshow(images, labels):
plt.figure(figsize=(10,10))
for i in range(16):
plt.subplot(4, 4,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
img = images[i] / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img.numpy(), (1, 2, 0)), cmap=plt.cm.binary)
plt.xlabel(classes[labels[i]])
plt.show()
# get some random training images
images, labels = next(iter(train_loader))
# show images
imshow(images, labels)
###Output
_____no_output_____
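###Markdown
As a quick sanity check of the class balance (an added sketch that relies on torchvision's `targets` attribute of the CIFAR10 dataset):
###Code
# Count training examples per class
train_labels = np.array(train_dataset.targets)
for i, name in enumerate(classes):
    print(f"{name:>6s}: {np.sum(train_labels == i)}")
###Output
_____no_output_____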
###Markdown
2. Define the CNN model **+ training step + loss + optimizer****Architecture:**- Input: An image of `n_channels=3`.- Two layer stacks of 2D convolutional layers (`Conv2d` with `kernel_size=5`) with rectified linear activation (`ReLU`) followed by a 2D max pooling (`MaxPool2D` with `kernel_size=2` and `stride=2`)- Three layer stacks of Fully-connected layers (`Linear`) with ReLU activaton.- Output: 10-dimensional vector defining the activation of each class
###Code
class Net(pl.LightningModule):
def __init__(self, **kwargs):
super(Net, self).__init__()
# save hyper-parameters
self.save_hyperparameters()
self.example_input_array = torch.ones(1, self.hparams.num_channels, 32, 32)
# Define network
self.layer1 = nn.Sequential(nn.Conv2d(self.hparams.num_channels, 6, kernel_size=5),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(nn.Conv2d(6, 16, kernel_size=5),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2))
self.layer3 = nn.Sequential(nn.Flatten(),
nn.Linear(16 * 5 * 5, 120),
nn.ReLU())
self.layer4 = nn.Sequential(nn.Linear(120, 84),
nn.ReLU())
self.layer5 = nn.Linear(84, self.hparams.num_classes)
def forward(self, x):
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.layer5(x)
return x
def training_step(self, batch, batch_idx):
x_train, y_train = batch
y_pred = self(x_train)
loss = F.cross_entropy(y_pred, y_train)
self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True) # logging
return loss
def validation_step(self, batch, batch_idx):
x_test, y_test = batch
y_pred = self(x_test)
loss = F.cross_entropy(y_pred, y_test)
self.log('val_loss', loss)
def test_step(self, batch, batch_idx):
x_test, y_test = batch
y_pred = self(x_test)
loss = F.cross_entropy(y_pred, y_test)
self.log('test_loss', loss)
def configure_optimizers(self):
return torch.optim.SGD(self.parameters(),
lr=self.hparams.learning_rate,
momentum=self.hparams.momentum)
# Construct model
model = Net(
num_channels=3,
num_classes=num_classes,
learning_rate=learning_rate,
momentum=momentum
)
###Output
_____no_output_____
###Markdown
3. Setup the **trainer**
###Code
# GPU trainer
trainer = pl.Trainer(
gpus=1,
max_epochs=num_epochs,
progress_bar_refresh_rate=50,
)
###Output
_____no_output_____
###Markdown
**Additional flags:**```python log_gpu_memory='all', gpu stats profiler=True, profiling stats precision=16, half-precision deterministic=True reproducability accelerator='ddp' distributed data parallelism benchmark=True cudnn benchmark and optimizing callbacks=[custom_callback_one(), custom_callback_two()] fast_dev_run=True dev run for debugging all the hooks```More info: https://pytorch-lightning.readthedocs.io/en/stable/trainer.html 4. Train **and validate** the model on **train** and **test** dataset
###Code
trainer.fit(model, train_loader, test_loader)
###Output
_____no_output_____
###Markdown
5. Assess training with **tensorboard**
###Code
# Start tensorboard
%reload_ext tensorboard
%tensorboard --logdir lightning_logs
###Output
_____no_output_____
###Markdown
6. Test the model on **test** dataset
###Code
trainer.test(model, test_loader)
###Output
_____no_output_____ |
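###Markdown
As a final illustration (an added sketch, not part of the original tutorial), the trained LightningModule can be used directly for inference on a single test batch:
###Code
# Run the trained model on one batch from the test loader and inspect the predictions
model.eval()
with torch.no_grad():
    images, labels = next(iter(test_loader))
    logits = model(images.to(model.device))
    preds = logits.argmax(dim=1).cpu()
print(f"accuracy on this batch: {(preds == labels).float().mean().item():.3f}")
imshow(images, preds)  # reuse the plotting helper from earlier, now labelled with predictions
###Output
_____no_output_____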
kobert/20210903load_model.ipynb | ###Markdown
###Code
!pip install transformers==4.4.2
!pip install sentencepiece
!pip install tensorflow_addons
import tensorflow as tf
import numpy as np
import pandas as pd
from transformers import *
import numpy as np
import pandas as pd
from tqdm import tqdm
import os
import urllib.request
import logging
import tensorflow_addons as tfa
logging.basicConfig(level=logging.ERROR)
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
import pandas as pd
dataset = pd.read_csv('/content/drive/MyDrive/0902_load_model_test/lucy_data.csv', encoding='cp949')
# Convert the label data (category column) to integer codes
data_colname = 'contents'
label_colname = 'category'
encoder = LabelEncoder()
encoder.fit(dataset[label_colname])
dataset[label_colname] = encoder.transform(dataset[label_colname])
# Next, preprocess (clean) the contents column
dataset[data_colname] = dataset[data_colname].str.replace("\(.*\)|\s-\s.*"," " ,regex=True)
dataset[data_colname] = dataset[data_colname].str.replace("\[.*\]|\s-\s.*"," ",regex=True)
dataset[data_colname] = dataset[data_colname].str.replace("\<.*\>|\s-\s.*"," ",regex=True)
dataset[data_colname] = dataset[data_colname].str.replace("무단전재 및 재배포 금지"," ",regex=True)
dataset[data_colname] = dataset[data_colname].str.replace("무단 전재 및 재배포 금지"," ",regex=True)
dataset[data_colname] = dataset[data_colname].str.replace("©"," ",regex=True)
dataset[data_colname] = dataset[data_colname].str.replace("ⓒ"," ",regex=True)
dataset[data_colname] = dataset[data_colname].str.replace("저작권자"," ",regex=True)
dataset[data_colname] = dataset[data_colname].str.replace(".* 기자", " ", regex=True) #기자 이름에서 오는 유사도 차단
dataset[data_colname] = dataset[data_colname].str.replace("사진 = .*", " ", regex=True) #사진 첨부 문구 삭제
dataset[data_colname] = dataset[data_colname].str.replace("사진=.*", " ", regex=True) #사진 첨부 문구 삭제
dataset[data_colname] = dataset[data_colname].str.replace('\"', "",regex=True)
dataset[data_colname] = dataset[data_colname].str.replace("([a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+.[a-zA-Z0-9-.]+)", " ", regex=True) #이메일 주소에서 오는 유사도 차단
dataset[data_colname] = dataset[data_colname].str.replace("\n"," ")
dataset[data_colname] = dataset[data_colname].str.replace("\r"," ")
dataset[data_colname] = dataset[data_colname].str.replace("\t"," ")
dataset[data_colname] = dataset[data_colname].str.replace( "\’" , "", regex=True)
dataset[data_colname] = dataset[data_colname].str.replace("[ ]{2,}"," ",regex=True)
dataset.head()
num_epochs = 5
num_batch = 16
warmup_ratio = 0.1
t_total = len(dataset) * num_epochs
warmup_step = int(t_total * warmup_ratio)
initializer_range = 0.2
max_seq_len = 128
LR = tf.keras.optimizers.schedules.CosineDecayRestarts(initial_learning_rate=2e-5, first_decay_steps=warmup_step)
def create_model():
model = TFBertModel.from_pretrained("monologg/kobert", from_pt=True)
input_ids_layer = tf.keras.layers.Input(shape=(max_seq_len,), dtype=tf.int32)
attention_masks_layer = tf.keras.layers.Input(shape=(max_seq_len,), dtype=tf.int32)
token_type_ids_layer = tf.keras.layers.Input(shape=(max_seq_len,), dtype=tf.int32)
outputs = model([input_ids_layer, attention_masks_layer, token_type_ids_layer])
pooled_output = outputs[1]
optimizer = tf.keras.optimizers.Adam(learning_rate=LR)
pooled_output = tf.keras.layers.Dropout(0.5)(pooled_output)
prediction = tf.keras.layers.Dense(9, activation='softmax', kernel_initializer=tf.keras.initializers.TruncatedNormal(stddev=initializer_range))(pooled_output)
cls_model = tf.keras.Model([input_ids_layer, attention_masks_layer, token_type_ids_layer], prediction)
cls_model.compile(optimizer=optimizer, loss=tf.keras.losses.CategoricalCrossentropy(), metrics = ['accuracy'])
cls_model.summary()
return cls_model
# Create a new model object
model = create_model()
# Load the previously saved weights
model.load_weights('/content/drive/MyDrive/0902_load_model_test/best_model')
# Load the tokenizer code
urllib.request.urlretrieve("https://raw.githubusercontent.com/monologg/KoBERT-Transformers/master/kobert_transformers/tokenization_kobert.py", filename="tokenization_kobert.py")
from tokenization_kobert import KoBertTokenizer
tokenizer = KoBertTokenizer.from_pretrained('monologg/kobert')
def sentence_prediction(example):
global tokenizer
input_ids, attention_masks, token_type_ids = [], [], []
input_id = tokenizer.encode(example, max_length=max_seq_len, pad_to_max_length=True)
    # attention_mask is a sequence with 1 where real tokens are located and 0 at padding positions.
padding_count = input_id.count(tokenizer.pad_token_id)
attention_mask = [1] * (max_seq_len - padding_count) + [0] * padding_count
    # token_type_id is for segment embeddings; since this example uses a single sentence, it is all zeros.
token_type_id = [0] * max_seq_len
input_ids.append(input_id)
attention_masks.append(attention_mask)
token_type_ids.append(token_type_id)
input_ids = np.array(input_ids)
attention_masks = np.array(attention_masks)
token_type_ids = np.array(token_type_ids)
return [input_ids, attention_masks, token_type_ids]
def evaluation_predict(sentence):
data_x = sentence_prediction(sentence)
predict = model.predict(data_x)
print('예측 결과 수치', predict)
print(predict)
predict_answer = np.argmax(predict[0])
predict_value = predict[0][predict_answer]
if predict_answer == 0:
print("(IT/과학 확률 : %.2f) IT/과학 뉴스입니다." % (1-predict_value))
return 0
elif predict_answer == 1:
print("(경제 확률 : %.2f) 경제 뉴스입니다." % predict_value)
return 1
elif predict_answer == 2:
print("(문화 확률 : %.2f) 문화 뉴스입니다." % predict_value)
return 2
elif predict_answer == 3:
print("(미용/건강 확률 : %.2f) 미용/건강 뉴스입니다." % predict_value)
return 3
elif predict_answer == 4:
print("(사회 확률 : %.2f) 사회 뉴스입니다." % predict_value)
return 4
elif predict_answer == 5:
print("(생활 확률 : %.2f) 생활 뉴스입니다." % predict_value)
return 5
elif predict_answer == 6:
print("(스포츠 확률 : %.2f) 스포츠 뉴스입니다." % predict_value)
return 6
elif predict_answer == 7:
print("(연예 확률 : %.2f) 연예 뉴스입니다." % predict_value)
return 7
elif predict_answer == 8:
print("(정치 확률 : %.2f) 정치 뉴스입니다." % predict_value)
return 8
evaluation_predict('정치 뉴스입니다.')
result = dataset.copy()
result['predict'] = -1
result['predict_tag'] = 'a'
for idx, x in enumerate(dataset['contents']):
out_val = evaluation_predict(x)
result['predict'][idx] = out_val
result
# Decode integer predictions back into label names
def category_decoding(lucy_data, label_colname):
"""
tips: 정수로 된 라벨을 model에 지정된 텍스트 label로 인코딩.
Args:
lucy_data : dataframe 형식의 데이터
label_colname : 라벨의 컬럼명
Returns:
lucy_data : DataFrame
"""
new_column = label_colname+'_tag'
    lucy_data[new_column] = lucy_data[label_colname].astype(str) # create a new column
    lucy_data[new_column] = lucy_data[new_column].astype(str) # make sure it is a string-typed column
label_dict = {'0': 'IT/과학',
'1': '경제',
'2': '문화',
'3': '미용/건강',
'4': '사회',
'5': '생활',
'6': '스포츠',
'7': '연예',
'8': '정치'}
for key, value in label_dict.items():
print(value)
lucy_data[new_column] = lucy_data[new_column].str.replace(key, value)
print("===============Data decoding success! ")
return lucy_data
category_decoding(result, 'predict')
label_dict = {'0': 'IT/과학',
'1': '경제',
'2': '문화',
'3': '미용/건강',
'4': '사회',
'5': '생활',
'6': '스포츠',
'7': '연예',
'8': '정치'}
label_list = label_dict.values()
label_list = list(label_list)
from sklearn.metrics import classification_report
print(classification_report(result['category'], result['predict'], target_names=label_list))
from sklearn.metrics import confusion_matrix
confusion_matrix(result['category'], result['predict'])
from sklearn.metrics import classification_report, confusion_matrix, precision_score, recall_score, f1_score
import seaborn as sns
import pandas as pd
cm2 = confusion_matrix(result['category'], result['predict'])
cmdf2 = pd.DataFrame(cm2, index=list(map(lambda x: x+'(실제값)', label_list)) , columns=list(map(lambda x: x+'(예측값)', label_list)) )
cmdf2
import matplotlib.pyplot as plt
from matplotlib import font_manager, rc
font_path = "/content/malgun.ttf"
font = font_manager.FontProperties(fname=font_path).get_name()
rc('font', family=font)
sns.heatmap(cm2, annot = True, fmt = 'd',cmap = 'Reds')
plt.xlabel('예측값')
plt.ylabel('실제값')
plt.xticks(np.arange(len(label_list)) + 0.5, label_list, rotation=45)
plt.yticks(np.arange(len(label_list)) + 0.5, label_list, rotation=0)
plt.show()
###Output
_____no_output_____ |
docs/notebooks/Getting started.ipynb | ###Markdown
What is Maelstrom? Maelstrom is a code for modelling and analysing binary light curves that contain a pulsating component. When a pulsating star is in orbit with a companion, the time taken for the stellar pulsations to reach us changes over the orbit. This information is encoded in the phase of the pulsation, which, for coherently pulsating $\delta$ Scuti stars, can be used as a clock to infer the position of the star throughout its orbit. This method is known as _phase modulation_. Phase modulation has been applied to a wide range of stars, most notably pulsars, whose millisecond-level timing precision enabled the detection of the first exoplanets. For intermediate-mass stars, phase modulation is useful for A/F-type pulsators to infer stellar-mass companions. The Maelstrom code contains routines for forward modelling these orbits: in a forward model, a light curve is generated from the orbital parameters and fit to the actual light curve data. Getting started
###Code
import numpy as np
import corner
import pandas as pd
import matplotlib.pyplot as plt
import exoplanet as xo
import pymc3 as pm
import lightkurve as lk
%config IPython.matplotlib.backend = "retina"
from matplotlib import rcParams
rcParams["figure.dpi"] = 150
rcParams["savefig.dpi"] = 150
###Output
_____no_output_____
###Markdown
In this notebook, we're going to inspect and fit the time delays for a well known Kepler $\delta$ Scuti system, KIC 9651065. This tutorial assumes basic working knowledge of PyMC3. [A great introduction to this can be found here](https://docs.exoplanet.codes/en/stable/tutorials/intro-to-pymc3/)Now, let's first download the light curve.
###Code
lc = lk.search_lightcurvefile('KIC 9651065', mission='Kepler').download_all().PDCSAP_FLUX.stitch().remove_nans()
lc.plot()
###Output
_____no_output_____
###Markdown
.. Note:: Lightkurve automatically subtracts the Kepler zero time (2454833 days), so all times are reported in Barycentric Kepler Julian Date (BKJD). _If you want to include additional data, you must work within the same reference time_. Here, we pass the time and flux data into `Maelstrom`. If the `freq` argument is not specified, `Maelstrom` will automatically try and detect some good peaks to use. Be warned however, this does not always work and you should always check which frequencies you use! I have set the upper limit on the peak search to 40 d$^{-1}$, since there are some Nyquist aliases. We can tell they are aliases as their time delays will match the Kepler orbital period. Neat! `first_look` will subdivide the lightcurve using the old method, and make some diagnostic plots. In order, these are the light curve, the amplitude spectrum, the extracted time delays, and the periodogram of the time delays. The frequencies are colored by their amplitude, and the orange line is the weighted average value. We see a nice peak around 272 d in the last panel, which is the orbital period of the system.
###Code
from maelstrom import Maelstrom
ms = Maelstrom(lc.time, lc.flux, max_peaks=3, fmin=5, fmax=40)
ms.first_look();
###Output
_____no_output_____
###Markdown
Note that if you want the frequencies `Maelstrom` has found, simply call `ms.freq`
###Code
print(f"Oscillation modes are at {ms.freq}")
###Output
Oscillation modes are at [19.47767606 21.71213419 30.80189468]
###Markdown
To read off this period, we can calculate the peak in the power spectrum of the time delays (the bottom left panel above)
###Code
period_guess = ms.get_period_estimate()
print(f"The orbital period is around {period_guess:.2f} days")
###Output
The orbital period is around 269.96 days
###Markdown
This is pretty good for a few lines of code! In fact, the actual orbital period is closer to 272 d. Let's now perform an initial optimisation. We first need to call `setup_orbit_model`, but an orbital period estimate is not required (unless the time delay signal is very weak..!). We can also pass along an initial guess for the eccentricity, but let's leave it at 0 and see what happens.
###Code
ms.setup_orbit_model(period=period_guess)
opt = ms.optimize()
opt
###Output
_____no_output_____
###Markdown
Let's go through some of those parameters for a sec. The ones not mentioned are nuisance parameters which are required for the model. - `t0`: Time of periastron passage (d) - `lighttime`: The projected semi-major axis calculated for each frequency in the model. Note that most of these agree, indicating that all the frequencies in the `Maelstrom` model belong to the same star. - `period`: The orbital period of the system. - `varpi`: The angle between the ascending node and periapsis - `eccen`: Orbital eccentricity - `tref`: Time of reference passage of periapsis Let's see what the theoretical time delays look like in the optimised model vs the actual extracted values.
###Code
td_time, td_td = ms.get_time_delay()
td_average = np.average(td_td, axis=-1, weights=ms.get_weights())
with ms:
model_td = xo.eval_in_model(ms.tau, opt) * 86400
plt.plot(ms.time, model_td - np.median(model_td), c='blue', linewidth=0.5)
plt.plot(td_time, td_average, '.k')
plt.xlabel('Time [day]')
plt.ylabel('Time delay [s]')
###Output
_____no_output_____
###Markdown
Cool! Each frequency in the light curve has its own independent `asini` in the Maelstrom model. This is useful for when there are multiple stars pulsating in the same binary system, as they would be (mostly) equal and opposite in sign. However, Maelstrom isn't actually fitting these parameters. What we're doing is forward modelling the input light curve by fitting the phase variations ($\tau$) in each point; $y(t) = \sum_{j=1}^{J} \Big[A_j \cos(\omega_j (t - \tau)) + B_j \sin(\omega_j (t-\tau))\Big]$ Although it is not as useful as looking at the time-delays, we can still inspect the actual light curve that is generated from the time delay signal:
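For a sense of what that sum looks like numerically, here is a minimal, self-contained NumPy sketch of the model. The amplitudes and the toy time-delay signal below are invented placeholders (only the mode frequencies echo the values found earlier), so this is an illustration of the equation rather than Maelstrom's own implementation; the next cell plots the version Maelstrom evaluates internally.

```python
import numpy as np

# Sketch of the phase-modulated light curve model written out above.
# Placeholder values: only the mode frequencies mirror ms.freq; the
# amplitudes and the sinusoidal time-delay signal are made up.
t = np.linspace(0, 80, 2000)                 # observation times [day]
freqs = np.array([19.48, 21.71, 30.80])      # mode frequencies [1/day]
A = np.array([1.0, 0.6, 0.3])                # cosine amplitudes
B = np.array([0.2, 0.4, 0.1])                # sine amplitudes
tau = 1e-3 * np.sin(2 * np.pi * t / 272.0)   # toy time-delay signal [day], 272 d orbit

omega = 2 * np.pi * freqs
y = sum(A[j] * np.cos(omega[j] * (t - tau)) + B[j] * np.sin(omega[j] * (t - tau))
        for j in range(len(freqs)))
```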
###Code
with ms:
plt.plot(ms.time, xo.eval_in_model(ms.lc_model, opt), c='blue', linewidth=0.5, label='Maelstrom')
plt.plot(ms.time, ms.flux, '.k', label='Data')
plt.xlim(200,205)
plt.xlabel('Time [day]')
plt.ylabel('Flux [ppt]')
plt.legend()
###Output
_____no_output_____
###Markdown
Maelstrom has decided that all these frequencies belong to one star, as their lighttimes are all positive (or negative). This means the system is PB1. We can get a ready-made model from the get go by asking nicely, and passing in the optimisation results. The PB1 model is for binaries with only one pulsating component. All the frequencies now use the same `asini` parameter, unlike our first model
###Code
pb1_model = ms.pin_orbit_model(opt)
pb1_model
###Output
_____no_output_____
###Markdown
As we can see, pb1_model inherits from the PyMC3 Models object. It is, by definition, a custom model which has access to all of the properties of the default Model class. This means we can do cool things like this:
###Code
pm.model_to_graphviz(pb1_model)
###Output
_____no_output_____
###Markdown
Finally, if we are happy with the default priors in Maelstrom we can sample the model. There are strong covariances between some of the parameters. This means that sampling will slow down significantly unless we use a custom NUTS step (see https://exoplanet.dfm.io/en/stable/tutorials/pymc3-extras/dense-mass-matrices) Sampling is quite slow while the mass matrix is tuned (typically the first 1000 steps). Because of this, I'm only going to make 10 draws of the posterior distribution.
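For reference, outside of the `sample` helper used in the next cell, a dense-mass-matrix run with exoplanet and PyMC3 would look roughly like the sketch below. This is only an assumption-laden illustration: `get_dense_nuts_step` is the helper that older exoplanet releases exposed for this, and the tuning/draw counts and `target_accept` value are arbitrary.

```python
import pymc3 as pm
import exoplanet as xo

# Hedged sketch: sample the pinned PB1 model with a NUTS step that tunes a
# dense mass matrix (assumes an exoplanet version providing get_dense_nuts_step).
with pb1_model:
    step = xo.get_dense_nuts_step(target_accept=0.9)
    trace = pm.sample(tune=2000, draws=2000, step=step, chains=2)
```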
###Code
trace = pb1_model.sample(tune=10, draws=10)
###Output
optimizing logp for variables: [PB1_mean]
4it [00:00, 22.28it/s, logp=-1.150728e+05]
message: Optimization terminated successfully.
logp: -115072.80884284787 -> -115072.7881902971
optimizing logp for variables: [PB1_logs_lc]
8it [00:00, 27.62it/s, logp=-1.097041e+05]
message: Optimization terminated successfully.
logp: -115072.7881902971 -> -109704.06934437387
optimizing logp for variables: [PB1_omega, PB1_eccen]
45it [00:01, 26.08it/s, logp=-1.066786e+05]
message: Optimization terminated successfully.
logp: -109704.06934437387 -> -106678.6001791905
optimizing logp for variables: [PB1_phi]
20it [00:00, 26.86it/s, logp=-1.066570e+05]
message: Optimization terminated successfully.
logp: -106678.6001791905 -> -106657.04265097293
optimizing logp for variables: [PB1_lognu]
103it [00:03, 32.85it/s, logp=-1.066545e+05]
message: Desired error not necessarily achieved due to precision loss.
logp: -106657.04265097293 -> -106654.54220594438
optimizing logp for variables: [PB1_eccen, PB1_omega, PB1_lognu, PB1_mean, PB1_logasini, PB1_logs_lc, PB1_phi, PB1_logP]
323it [00:11, 27.99it/s, logp=-1.064707e+05]
message: Desired error not necessarily achieved due to precision loss.
logp: -106654.54220594438 -> -106470.71703961623
optimizing logp for variables: [PB1_logasini]
67it [00:02, 32.72it/s, logp=-1.064707e+05]
message: Desired error not necessarily achieved due to precision loss.
logp: -106470.71703961623 -> -106470.71703961623
optimizing logp for variables: [PB1_eccen, PB1_omega, PB1_lognu, PB1_mean, PB1_logasini, PB1_logs_lc, PB1_phi, PB1_logP]
81it [00:02, 27.78it/s, logp=-1.508331e+05]
message: Desired error not necessarily achieved due to precision loss.
logp: -106470.71703961623 -> -106470.71703961623
optimizing logp for variables: [PB1_logP]
4it [00:00, 23.40it/s, logp=-1.064707e+05]
message: Optimization terminated successfully.
logp: -106470.71703961623 -> -106470.71703961515
optimizing logp for variables: [PB1_eccen, PB1_omega, PB1_lognu, PB1_mean, PB1_logasini, PB1_logs_lc, PB1_phi, PB1_logP]
133it [00:04, 28.31it/s, logp=-1.064707e+05]
message: Desired error not necessarily achieved due to precision loss.
logp: -106470.71703961515 -> -106470.71703961305
Only 10 samples in chain.
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [PB1_eccen, PB1_omega, PB1_lognu, PB1_mean, PB1_logasini, PB1_logs_lc, PB1_phi, PB1_logP]
Sampling 2 chains: 100%|██████████| 40/40 [00:59<00:00, 1.49s/draws]
/Users/danielhey/anaconda3/lib/python3.7/site-packages/pymc3/sampling.py:464: UserWarning: The number of samples is too small to check convergence reliably.
warnings.warn("The number of samples is too small to check convergence reliably.")
There were 7 divergences after tuning. Increase `target_accept` or reparameterize.
The acceptance probability does not match the target. It is 0.06641073900830238, but should be close to 0.9. Try to increase the number of tuning steps.
There were 7 divergences after tuning. Increase `target_accept` or reparameterize.
The acceptance probability does not match the target. It is 2.2473988127317096e-06, but should be close to 0.9. Try to increase the number of tuning steps.
###Markdown
Look at all those errors! That's because we only sampled for 10 steps across 2 chains. For a full sampling run, check out the case studies in the sidebar.In any case, our trace is a PyMC3 trace object which can be manipulated accordingly:
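For example, besides the `pm.summary` call in the next cell, the trace can be converted to a DataFrame and corner-plotted. A small sketch follows, where the `PB1_*` variable names are taken from the sampling log above:

```python
import corner
import pymc3 as pm

# Convert the PyMC3 trace to a DataFrame and corner-plot two of the
# orbital parameters (names follow the PB1_* variables in the log above).
df = pm.trace_to_dataframe(trace, varnames=["PB1_logP", "PB1_eccen"])
corner.corner(df);
```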
###Code
pm.summary(trace)
###Output
/Users/danielhey/anaconda3/lib/python3.7/site-packages/pymc3/stats.py:991: FutureWarning: The join_axes-keyword is deprecated. Use .reindex or .reindex_like on the result to achieve the same functionality.
axis=1, join_axes=[dforg.index])
|
examples/Benchmark_sparse_vs_qutip.ipynb | ###Markdown
Time independent
###Code
setup_q = '''
ts = np.linspace(0,{param}*np.pi,{param}*10)
init = qutip_sym_state({N_cutoff})
H = qutip.num({N_cutoff})
opt = qutip.Options(rhs_reuse=qutip_rhs_reuse, nsteps=1000*{N_cutoff})
'''
ex_q = '''
res = qutip.sesolve(H,init,ts,[], options=opt)
'''
HARM_OSC_SYM_STATE_qutip = parametric_timeit(ex_q, setup_q, periods)
setup_c = '''
ts = np.linspace(0,{param}*np.pi,{param}*10)
init = cutiepy_sym_state({N_cutoff})
H = cutiepy.num({N_cutoff})
mxsteps = 1000*{N_cutoff}
'''
ex_c = '''
res = cutiepy.sesolve(H,init,ts, mxsteps=mxsteps)
'''
HARM_OSC_SYM_STATE_cutiepy = parametric_timeit(ex_c, setup_c, periods)
plot_times([HARM_OSC_SYM_STATE_cutiepy, HARM_OSC_SYM_STATE_qutip], ['cutiepy', 'qutip'])
###Output
_____no_output_____
###Markdown
Time dependent
###Code
setup_q = '''
ts = np.linspace(0,{param}*np.pi/10,{param}*10)
init = qutip_sym_state({N_cutoff})
H0 = qutip.zero_oper({N_cutoff})
H = qutip.num({N_cutoff})
tdep = 't/({param}*np.pi)'
opt = qutip.Options(rhs_reuse=qutip_rhs_reuse, nsteps=1000*{N_cutoff})
'''
ex_q = '''
res = qutip.mesolve([H0,[H,tdep]],init,ts,[],[], options=opt)
'''
HARM_OSC_RAMP_SYM_STATE_qutip = parametric_timeit(ex_q, setup_q, periods)
setup_c = '''
ts = np.linspace(0,{param}*np.pi/10,{param}*10)
init = cutiepy_sym_state({N_cutoff})
H = cutiepy.num({N_cutoff})*cutiepy.t/({param}*np.pi)
mxsteps = 1000*{N_cutoff}
'''
ex_c = '''
res = cutiepy.mesolve(H,[],init,ts, mxsteps=mxsteps)
'''
HARM_OSC_RAMP_SYM_STATE_cutiepy = parametric_timeit(ex_c, setup_c, periods)
plot_times([HARM_OSC_RAMP_SYM_STATE_cutiepy, HARM_OSC_RAMP_SYM_STATE_qutip], ['cutiepy', 'qutip'])
###Output
_____no_output_____ |
GaussSeidel/GaussSeidelStrainGages.ipynb | ###Markdown
The Gauss-Seidel iterative method is used to solve linear simultaneous equations such as those that arise for a strain rosette. A problem with three equations in three variables has been chosen for this demonstration.
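Before the hard-coded three-equation version below, here is a generic sketch of the same update rule. The function name and tolerance convention are mine, not part of the original demonstration; the method sweeps through the unknowns, always reusing the components already updated earlier in the same sweep.

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=0.001, max_iter=100):
    """Generic Gauss-Seidel sketch: x_i = (b_i - sum_{j != i} A_ij * x_j) / A_ii,
    using the freshly updated x_j for j < i within each sweep."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_prev = x.copy()
        for i in range(len(b)):
            sigma = np.dot(A[i, :i], x[:i]) + np.dot(A[i, i + 1:], x_prev[i + 1:])
            x[i] = (b[i] - sigma) / A[i, i]
        if np.max(np.abs((x - x_prev) / x)) * 100 < tol:  # percentage change, as in the cell below
            break
    return x

A = np.array([[10.0, 1.0, -1.0], [1.0, 15.0, 1.0], [-1.0, 1.0, 20.0]])
b = np.array([18.0, -12.0, 17.0])
print(gauss_seidel(A, b, [1.0, 1.0, 1.0]))  # approximately [ 2. -1.  1.]
```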
###Code
import numpy as np #numerics package
np.set_printoptions(precision=3) #digits of precision for printing purposes
#Equations to solve using Gauss-Seidel iterative method:
#These could be simultaneous equations for strain gages
# 10 x_1 + x_2 - x_3 = 18
# x_1 + 15 x_2 + x_3 = -12
# -x_1 + x_2 + 20 x_3 = 17
#
x_guess_0 = np.array([1, 1, 1]); #initial guess
x_guess_0 = x_guess_0.astype(float) #convert initial guess values to "float" instead of "int" (astype returns a new array)
err_tol = 0.001; #This is the target "error". The smaller this value, the more accurate
#your results will be but the simulation will run for longer.
run = 1; #simulation run number. Initial value is "1" for "first run". This will be
#updated for subsequent runs.
err_max = err_tol + 1; #Artificial value to "kick-start" simulation.
                       #Choose a value that is GREATER THAN the error tolerance.
err_max_ARRAY = np.array([0]); #This array will store the maximum error calculated
#between current value and previous value.
#When the max error is less than the tolerance value
#the simulation is assumed to have converged to
#the final solution.
#!! Remember that in Python array number starts at 0 and not 1.
# Hence, if you have an array a = [3,10,15], the 1st element is a[0] = 3,
# the second element is a[1] = 10 and the third element is a[2] = 15.
while (err_max >= err_tol):
if(run == 1):
x_guess_old = x_guess_0;
else:
x_guess_old = x_new
run = run+1;
x_1_new = (18 - x_guess_old[1] + x_guess_old[2])/10;#uses previous guess values for x_2, x_3
x_2_new = (-12 - x_1_new - x_guess_old[2])/15; #uses x_1_new
x_3_new = (17 + x_1_new - x_2_new)/20;#uses x_1_new and x_2_new
x_new = np.array([x_1_new, x_2_new, x_3_new]);
err_1 = (x_1_new-x_guess_old[0])*100/x_1_new; #percentage error calculation between current value and previous value
err_2 = (x_2_new-x_guess_old[1])*100/x_2_new; #percentage error calculation between current value and previous value
err_3 = (x_3_new-x_guess_old[2])*100/x_3_new; #percentage error calculation between current value and previous value
err = np.array([err_1, err_2, err_3]); #choose maximum of all percentage errors
err_max = np.amax(err)
print('Max error: ')
print(err_max)
print('\n')
if(err_max <= err_tol): #check if solution has converged by comparing max error with error tolerance
print(' ')
print('*******************************')
print('Converged solution!')
print(x_new)
print('******************************')
###Output
Max error:
201.35135135135135
Max error:
9.891870244293163
Max error:
0.11483260734020947
Max error:
0.005054014919569182
Max error:
0.0001157703626143231
*******************************
Converged solution!
[ 2. -1. 1.]
******************************
|
notebooks/MAST/K2/K2_Lightcurve/k2_lightcurve.ipynb | ###Markdown
Beginner: Read and Plot A K2 Light Curve FileThis notebook tutorial demonstrates how to load and plot the contents of a K2 light curve (lc) file. We will plot the flux timeseries contained within the file, and display which pixels were used in the photometric aperture.
###Code
%matplotlib inline
from astropy.io import fits
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
IntroductionA light curve is a plot of flux versus time, and is used to identify variability, including the transits of orbiting companions like planets. The light curve shown here will be for the star TRAPPIST-1 (the K2 ID for the standard readout is EPIC 246199087, a larger readout for this star was also obtained with a K2 ID of EPIC 200164267). TRAPPIST-1 is known to host a system of seven Earth-sized planets.This tutorial will refer to a couple K2-related terms that we define here.* Campaign = During the K2 mission, the Kepler telescope observed the sky in a given pointing along the ecliptic plane for approximately 80 days at a time. Each of these regions is referred to as a "Campaign", starting with Campaign 0 and ending with Campaign 19. There was also a special "Engineering" Campaign before Campaign 0 that lasted ~10 days.* HDU = Header Data Unit. A FITS file is made up of HDUs that contain data and metadata relating to the file. The first HDU is called the primary HDU, and anything that follows is considered an "extension", e.g., "the first FITS extension", "the second FITS extension", etc.* BJD = Barycentric Julian Date, the Julian Date that has been corrected for differences in the Earth's position with respect to the Solar System center of mass.* BKJD = Barycentric Kepler Julian Date, the timestamp measured in BJD, but offset by 2454833.0. I.e., BKJD = BJD - 2454833.0* WCS = World Coordinate System, A FITS convention used to store coordinate information inside FITS headers. For K2 full frame images, it is used to provide the translation needed to go from pixel coorindates to celestial coordinates in right ascension and declination.* SAP Flux = Simple Aperture Photometry flux, the flux after summing the calibrated pixels within the K2 optimal photometric aperture.* PDCSAP Flux = Pre-search Data Conditioned Simple Aperture Photometry, the SAP flux values nominally corrected for instrumental variations. Thus, these fluxes are the mission's best estimate of the intrinsic variability of the target. Obtaining The Light Curve FileWe will read the light curve file from Campaign 12 using the MAST URL location. So that we can get started with understanding the file contents without reviewing how to automatically search for and retrieve K2 files, we won't show how to search and retrieve K2 light curve files in this tutorial.
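As a tiny worked example of the BKJD definition above (the helper names and the specific BJD value here are just illustrations):

```python
# BKJD = BJD - 2454833.0, per the definition above.
KEPLER_BJD_OFFSET = 2454833.0

def bjd_to_bkjd(bjd):
    return bjd - KEPLER_BJD_OFFSET

def bkjd_to_bjd(bkjd):
    return bkjd + KEPLER_BJD_OFFSET

print(bjd_to_bkjd(2457760.0))  # 2927.0, which falls in the Campaign 12 window plotted later
```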
###Code
# For the purposes of this tutorial, we just know the MAST URL location of the file we want to examine.
fits_file = "https://archive.stsci.edu/missions/k2/lightcurves/c12/246100000/99000/ktwo246199087-c12_llc.fits"
###Output
_____no_output_____
###Markdown
Understanding The Light Curve FITS File StructureK2 light curve FITS files contain a primary HDU with metadata stored in the header. The first extension HDU contains more metadata in the header, and stores arrays of data in a binary FITS table, which include the timestamps, SAP fluxes, and PDCSAP fluxes. The second extension HDU consists of an image that contains the collected pixels for this target, and records information about them, such as which of those pixels were used in the optimal photometric aperture to create the SAP fluxes. Let's examine the structure of the FITS file using the astropy.fits `info` function, which shows the FITS file format in more detail.
###Code
fits.info(fits_file)
###Output
_____no_output_____
###Markdown
Let's examine the binary table in the first FITS extension, since that contains the arrays of timestamps and fluxes we want to plot. We will use the astropy.fits `getdata` function to access the table from the first extension HDU, and then show the columns of the table. We can see that the table includes columns for the timestamps in Kepler BJD format (**TIME**), SAP flux (**SAP_FLUX**), and PDCSAP flux (**PDCSAP_FLUX**).
###Code
fits.getdata(fits_file, ext=1).columns
###Output
_____no_output_____
###Markdown
Reading the timestamps and fluxes.Now that we have the light curve file, let's store the timestamps and fluxes as arrays for use later.
###Code
with fits.open(fits_file, mode="readonly") as hdulist:
k2_bjds = hdulist[1].data['TIME']
sap_fluxes = hdulist[1].data['SAP_FLUX']
pdcsap_fluxes = hdulist[1].data['PDCSAP_FLUX']
###Output
_____no_output_____
###Markdown
Plot the light curve.Let's make a plot of the PDCSAP flux vs. time in Kepler BJD.
###Code
# Start figure and axis.
fig, ax = plt.subplots()
fig.set_size_inches(12., 8.)
# Plot the timeseries in black circles.
ax.plot(k2_bjds, pdcsap_fluxes, 'ko')
# Let's label the axes and define a title for the figure.
fig.suptitle("TRAPPIST-1 Light Curve - Campaign 12")
ax.set_ylabel("PDCSAP Flux (e-/s)")
ax.set_xlabel("Time (BKJD)")
# Let's zoom in on the x-axis and y-axis. We can see a sinusoidal pattern due to starspots.
# The transits are in there too, but you'd need to clean the light curve before you see them.
ax.set_xlim(2920., 2950.)
ax.set_ylim(0.1e7, 0.2e7)
# Adjust the left margin so the y-axis label shows up.
plt.subplots_adjust(left=0.15)
plt.show()
###Output
_____no_output_____
###Markdown
Understanding Light Curve FlagsThe table of information contains more than just timestamps and fluxes. In particular, there is a column of flags associated with every timestamp that indicate a number of warnings and conditions associated with that measurement. Not every flag is worth excluding from your analysis: you should always make your own decision. A summary of the flags can be found [here](https://archive.stsci.edu/kepler/manuals/archive_manual.pdfpage=19).
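As a concrete illustration of how one of those integer flags unpacks into individual bits (the example value below is invented purely for illustration; consult the linked table for what each bit actually means):

```python
# Split an example quality-flag integer into the bit values that are set.
# 2080 = 2048 + 32 is a made-up value, purely for illustration.
flag_value = 2080
set_bits = [2**i for i in range(32) if flag_value & (2**i)]
print(set_bits)  # [32, 2048]

# A common (though not always appropriate) choice, once the flags are read in
# below, is to keep only cadences with no flags set:
# good = qual_flags == 0
```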
###Code
# First we need to read in the array of cadence quality flags, let's do
# that now.
with fits.open(fits_file, mode="readonly") as hdulist:
qual_flags = hdulist[1].data['SAP_QUALITY']
###Output
_____no_output_____
###Markdown
Now let's plot the full time series, but this time we'll overplot those points that have a quality flag greater than zero in red.
###Code
# Start figure and axis.
fig, ax = plt.subplots()
fig.set_size_inches(12., 8.)
# Plot the timeseries in black circles.
ax.plot(k2_bjds, pdcsap_fluxes, 'ko')
# Locate quality flags greater than zero.
where_gt0 = np.where(qual_flags > 0)[0]
# Overplot the fluxes with quality flags greater than zero in red.
ax.plot(k2_bjds[where_gt0], pdcsap_fluxes[where_gt0], 'ro')
# Let's zoom in on the x-axis and y-axis.
ax.set_xlim(2920., 2950.)
ax.set_ylim(0.1e7, 0.2e7)
# Let's label the axes and define a title for the figure.
fig.suptitle("TRAPPIST-1 Light Curve - Campaign 12")
ax.set_ylabel("PDCSAP Flux (e-/s)")
ax.set_xlabel("Time (TBJD)")
plt.show()
###Output
_____no_output_____
###Markdown
Intriguingly, some of the largest outliers in the positive flux direction are flagged: are these bad measurements that should be excised from the time series? Finding out the quality flag value and converting the value to its constituent bit masks to understand why these points are flagged would be the first step. We encourage you to do this as a follow-up to this tutorial. Displaying The Aperture Pixel InformationLet's read in the second FITS extension HDU to display the aperture information. First, let's read in the aperture pixels from the HDU.
###Code
with fits.open(fits_file, mode="readonly") as hdulist:
aperture = hdulist[2].data
###Output
_____no_output_____
###Markdown
Let's plot the pixels as an image.
###Code
# Start figure and axis.
fig, ax = plt.subplots()
fig.set_size_inches(12., 8.)
# Display the pixels as an image.
cax = ax.imshow(aperture, cmap=plt.cm.YlGnBu_r, origin="lower")
# Add a color bar.
cbar = fig.colorbar(cax)
# Add a title to the plot.
fig.suptitle("TRAPPIST-1 Aperture - Campaign 12")
plt.show()
###Output
_____no_output_____
###Markdown
Understanding The Aperture Pixel ValuesWe see the pixel values are integers, but what do they mean? The pixels are bitmasks that encode information about each pixel. You can find a summary of what the different values mean [here](https://archive.stsci.edu/kepler/manuals/archive_manual.pdfpage=20). For example, a pixel in the aperture that has a value of 15 can be broken down into powers of 2 like: 8+4+2+1 = 15. Referencing the table of values, this means this particular pixel was used to calculate the Pixel Response Function (PRF) centroid, was used to calculate the flux weighted centroid, was part of the optimal photometric aperture, and was collected by the spacecraft. Numpy has a built-in function that can convert an integer into a binary bit mask. Let's use that now on one of the common values we see in our displayed image above.
###Code
# Break down a pixel value of 15 (one of the common values displayed above) into its
# constituent bits.
bitmask = np.binary_repr(15)
print(bitmask)
###Output
_____no_output_____ |
assessments/Courework1-Muti-layer Neural Networks/cw1/MLP Exercise .ipynb | ###Markdown
Multi-layer Perceptron Exercise In this exercise, we will implement the multi-layer perceptron (MLP) algorithm with two hidden layers. The implementation of the MLP algorithm will be in the mlp.py file, but you will test your implementation in this notebook. In this exercise we will use the MNIST dataset that we used for the week 2 lab (KNN).
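As a reminder of what the forward pass of a two-hidden-layer network computes, here is a rough sketch. The function and variable names below are generic illustrations, not the attribute or method names expected in mlp.py, and the bias convention is an assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_pass_sketch(inputs, W1, W2, W3):
    """Illustrative forward pass: inputs -> hidden1 -> hidden2 -> outputs,
    appending a bias term at each layer (a common convention, assumed here)."""
    n = inputs.shape[0]
    x = np.concatenate((inputs, -np.ones((n, 1))), axis=1)   # add bias input
    h1 = sigmoid(x @ W1)                                     # first hidden layer
    h1 = np.concatenate((h1, -np.ones((n, 1))), axis=1)
    h2 = sigmoid(h1 @ W2)                                    # second hidden layer
    h2 = np.concatenate((h2, -np.ones((n, 1))), axis=1)
    return sigmoid(h2 @ W3)                                  # output layer
```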
###Code
# importing the MLP algorithm from mlp.py file
# where you will implement the MLP algorithm
from mlp import MLP
import numpy as np
# This is to reload all changed modules every time before executing a new line.
# https://stackoverflow.com/questions/5364050/reloading-submodules-in-ipython
%load_ext autoreload
%autoreload 2
# loading the MNIST datatset
import pickle, gzip
f = gzip.open('mnist.pkl.gz','rb')
tset, vset, teset = pickle.load(f, encoding='latin1')
print(tset[0].shape, vset[0].shape, teset[0].shape)
f.close()
import matplotlib.pyplot as plt # To install: pip install matplotlib
# visualise some examples from the dataset
fig, ax = plt.subplots(2,5)
for i, ax in enumerate(ax.flatten()):
im_idx = np.argwhere(teset[1] == i)[0]
plottable_image = np.reshape(teset[0][im_idx], (28, 28))
ax.imshow(plottable_image, cmap='gray_r')
# we will use only 9000 images for training and 1000 for testing
# Just use the first 9000 images for training
tread = 9000
train_in = tset[0][:tread,:]
# This is a little bit of work -- 1 of N encoding
# Make sure you understand how it does it
train_tgt = np.zeros((tread,10))
for i in range(tread):
train_tgt[i,tset[1][i]] = 1
# and use 1000 images for testing
teread = 1000
test_in = teset[0][:teread,:]
test_tgt = np.zeros((teread,10))
for i in range(teread):
test_tgt[i,teset[1][i]] = 1
###Output
_____no_output_____
###Markdown
Initialise the MLP classifier
###Code
# We choose the first and second hidden layers to have 5 neurons each.
sizes = [784,5,5,10] # 784 is the number of pixels of the images and 10 is the number of classes
classifier = MLP(sizes)
# print(classifier.beta,classifier.momentum,classifier.nin, classifier.nhidden1, classifier.nhidden2, classifier.nout)
# print(classifier.weights1.shape)
# print(classifier.weights2.shape)
# print(classifier.weights3.shape)
# TODO: open the mlp.py file and implement self.forwardPass and self.train methods
# test your implementation here
# for now, let's keep the learning rate and the number of iterations unchanged
classifier.train(train_in, train_tgt, 0.1, 1000)
#print(classifier.hidden1.shape)
#print(classifier.hidden2.shape)
# we evaluate our model on the testing set
# and show the confusion matrix and the accuracy
classifier.evaluate(test_in, test_tgt)
# you should expect the accuracy to be really low ~ most likely less than %50
# I think we can do better by experimenting with different learning rate and
# number of neurons in each hidden layer.
# TODO: modify the network parameters to get the test accuracy above %90
# you can change the learning rate, the number of neurons of each hidden layer
# and number of iterations. You can also implement the gradient descent algorithm
# with momentum and experiment it with different momentum values.
best_sizes = [784,50,50,10]
best_beta = 1
best_momentum = 0.9
best_lr = 0.001 # best learning rate
best_niterations = 1200
best_classifier = MLP(sizes = best_sizes, beta=best_beta, momentum=best_momentum)
best_classifier.train(train_in, train_tgt, best_lr, best_niterations)
best_classifier.evaluate(test_in, test_tgt)
# TODO: run the following code to save the best parameters and
# the weights of the network that achieves the desired accuracy
best_parameters = {
'sizes': best_sizes,
'beta': best_beta,
'momentum': best_momentum,
'lr': best_lr,
'niterations': best_niterations,
'weights_1': best_classifier.weights1,
'weights_2': best_classifier.weights2,
'weights_3': best_classifier.weights3,
}
with open('best_classifier.pkl', 'wb') as handle:
pickle.dump(best_parameters, handle, protocol=pickle.HIGHEST_PROTOCOL)
best_parameters
###Output
_____no_output_____ |
Program's_Contributed_By_Contributors/AI-Summer-Course/py-master/DeepLearningML/10_gpu_benchmarking/gpu_performance_test_small_image_classification.ipynb | ###Markdown
Small Image Classification Using a Simple Artificial Neural Network: GPU Benchmarking
###Code
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
import numpy as np
# Version Information
# tensorflow 2.2.0 , Cudnn7.6.5 and Cuda 10.1 , python 3.8
###Output
_____no_output_____
###Markdown
**This command shows the list of physical devices available to TensorFlow. You should see the GPU listed here. If you have an NVIDIA GPU, you need to install the CUDA toolkit and cuDNN as per the instructions on this webpage. Without a proper installation you will not see the GPU in the list of devices**https://shawnhymel.com/1961/how-to-install-tensorflow-with-gpu-support-on-windows/
###Code
tf.config.experimental.list_physical_devices()
tf.__version__
tf.test.is_built_with_cuda()
###Output
_____no_output_____
###Markdown
Load the dataset Our dataset contains 60000 small training images that belong to one of the 10 classes below
###Code
(X_train, y_train), (X_test,y_test) = tf.keras.datasets.cifar10.load_data()
X_train.shape
y_train.shape
###Output
_____no_output_____
###Markdown
Data Visualization
###Code
def plot_sample(index):
plt.figure(figsize = (10,1))
plt.imshow(X_train[index])
plot_sample(0)
plot_sample(1)
plot_sample(2)
classes = ["airplane","automobile","bird","cat","deer","dog","frog","horse","ship","truck"]
plot_sample(3)
classes[y_train[3][0]]
y_train[:3]
y_test.shape
X_train.shape
###Output
_____no_output_____
###Markdown
Preprocessing: Scale images
###Code
X_train_scaled = X_train / 255
X_test_scaled = X_test / 255
y_train_categorical = keras.utils.to_categorical(
y_train, num_classes=10, dtype='float32'
)
y_test_categorical = keras.utils.to_categorical(
y_test, num_classes=10, dtype='float32'
)
y_train[0:5]
y_train_categorical[0:5]
###Output
_____no_output_____
###Markdown
Model building and training
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(32,32,3)),
keras.layers.Dense(3000, activation='relu'),
keras.layers.Dense(1000, activation='relu'),
keras.layers.Dense(10, activation='sigmoid')
])
model.compile(optimizer='SGD',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(X_train_scaled, y_train_categorical, epochs=1)
###Output
1563/1563 [==============================] - 3s 2ms/step - loss: 1.8642 - accuracy: 0.3328
###Markdown
Let's make some predictions
###Code
np.argmax(model.predict(X_test_scaled)[0])
y_test[0]
def get_model():
model = keras.Sequential([
keras.layers.Flatten(input_shape=(32,32,3)),
keras.layers.Dense(3000, activation='relu'),
keras.layers.Dense(1000, activation='relu'),
keras.layers.Dense(10, activation='sigmoid')
])
model.compile(optimizer='SGD',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Measure training time on a CPU
###Code
%%timeit -n1 -r1
with tf.device('/CPU:0'):
cpu_model = get_model()
cpu_model.fit(X_train_scaled, y_train_categorical, epochs=1)
###Output
1563/1563 [==============================] - 44s 28ms/step - loss: 1.8660 - accuracy: 0.3301
44.5 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
###Markdown
Let's measure training time on a GPU (I have an NVIDIA Titan RTX)
###Code
%%timeit -n1 -r1
with tf.device('/GPU:0'):
cpu_model = get_model()
cpu_model.fit(X_train_scaled, y_train_categorical, epochs=1)
###Output
1563/1563 [==============================] - 3s 2ms/step - loss: 1.8581 - accuracy: 0.3354
3.6 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
###Markdown
Let's run the same test for 10 epochs
###Code
%%timeit -n1 -r1
with tf.device('/CPU:0'):
cpu_model = get_model()
cpu_model.fit(X_train_scaled, y_train_categorical, epochs=10)
%%timeit -n1 -r1
with tf.device('/GPU:0'):
cpu_model = get_model()
cpu_model.fit(X_train_scaled, y_train_categorical, epochs=10)
###Output
Epoch 1/10
1563/1563 [==============================] - 3s 2ms/step - loss: 1.8624 - accuracy: 0.3322
Epoch 2/10
1563/1563 [==============================] - 3s 2ms/step - loss: 1.6579 - accuracy: 0.4146
Epoch 3/10
1563/1563 [==============================] - 3s 2ms/step - loss: 1.5686 - accuracy: 0.4443
Epoch 4/10
1563/1563 [==============================] - 3s 2ms/step - loss: 1.5068 - accuracy: 0.4681
Epoch 5/10
1563/1563 [==============================] - 3s 2ms/step - loss: 1.4572 - accuracy: 0.4863
Epoch 6/10
1563/1563 [==============================] - 3s 2ms/step - loss: 1.4139 - accuracy: 0.5009
Epoch 7/10
1563/1563 [==============================] - 3s 2ms/step - loss: 1.3766 - accuracy: 0.5147
Epoch 8/10
1563/1563 [==============================] - 3s 2ms/step - loss: 1.3400 - accuracy: 0.5282
Epoch 9/10
1563/1563 [==============================] - 3s 2ms/step - loss: 1.3093 - accuracy: 0.5401
Epoch 10/10
1563/1563 [==============================] - 3s 2ms/step - loss: 1.2769 - accuracy: 0.5523
29.5 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
|
3-prediction.ipynb | ###Markdown
Use generated model for prediction: Randomly select images from data_test directory
###Code
import os
import gc
import cv2
import csv
import dlib
import glob
import random
import numpy as np
import tensorflow as tf
detector = dlib.get_frontal_face_detector()
from keras.models import load_model
def get_predictions(img_path, lookup_table, model):
'''
Function to get model predictions.
img_path: image path for prediction
lookup_table: class labels for inverse encoding
model: generated model(VGGNet)
'''
org = cv2.imread(img_path)
actual_name = os.path.basename(os.path.dirname(img_path))
image = cv2.resize(cv2.imread(img_path), (64, 64))
image = np.expand_dims(image, axis=0)
pred = np.argmax(model.predict(image), axis=1)[0]
    result = {'Actual_Name': actual_name, 'Prediction': lookup_table[pred]}
font = cv2.FONT_HERSHEY_PLAIN
window = cv2.namedWindow('Image', cv2.WINDOW_NORMAL)
cv2.resizeWindow('Image', 900, 900)
# dlib detector, returns the default face detector
dets = detector(org, 1)
for i, d in enumerate(dets):
print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
i, d.left(), d.top(), d.right(), d.bottom()))
cv2.rectangle(org, (d.left(), d.top()), (d.right(), d.bottom()), (255,255,0), 5)
        cv2.putText(org, str(lookup_table[pred]), (d.left(), d.top()-5),
                    font, 2, (0, 0, 255), 2, cv2.LINE_AA)
cv2.imshow('Image', org)
cv2.waitKey(0)
cv2.destroyAllWindows()
return result
# directory path
test_dir = glob.glob(
'./Dataset/**/*.jpg', recursive=True)
# load saved model
model = load_model('model.h5')
with open('class_labels.csv', 'r') as csvfile:
spamreader = csv.reader(csvfile, delimiter=',', quotechar='|')
labels = [row for row in spamreader]
labels = labels[0]
# Randomly choose images from test_data directory
idx = random.randrange(len(test_dir))  # valid indices are 0 .. len(test_dir)-1
# get prediction
get_predictions(test_dir[idx], labels, model)
gc.collect()
tf.keras.backend.clear_session()
###Output
_____no_output_____ |
im2txt/MemeNote.ipynb | ###Markdown
Converting captions and meme vector representations into single Tfrecord Requires putting memes through alexnet to find their vector rep, shuffling the captions, changing captions into their word2idx, finally saving one caption together with one meme.
###Code
with open('captions.txt','r') as f:
captions = f.readlines()
current_dir
len(captions)
captions = list(set(captions))
len(captions)
captions = [s.lower() for s in captions]
img_files = [os.path.join(image_dir, f) for f in os.listdir(image_dir) if f.endswith('jpg')]
print(len(img_files))
img_files = list(set(img_files))
print(len(img_files))
meme_name = img_files[2500].replace('/Users/ALP/Desktop/Stanford/CS224n/MemeProject/memes/','')
meme_name = meme_name.replace('.jpg','')
meme_name = meme_name.replace('-',' ').lower()
meme_name = 'dolan'
print(meme_name)
match = [s for s in captions if meme_name in s]
#print(match)
#match[0].replace(meme_name + ' - ', '')
search_dir = '/Users/ALP/Desktop/Stanford/CS224n/MemeProject/memes'
os.chdir(search_dir)
img_files = filter(os.path.isfile, os.listdir(search_dir))
img_files = [os.path.join(search_dir, f) for f in img_files] # add path to each file
img_files.sort(key=lambda x: os.path.getmtime(x))
with open('/Users/ALP/Desktop/Stanford/CS224n/MemeProject/Captions.txt','r') as f:
captions = f.readlines()
#captions = list(set(captions))
captions = [s.lower() for s in captions]
data_memes = []
data_captions = []
data_meme_names = [] #just to check captions have been paired correctly
counter = 0
passed = 0
#Doing everything in one script: (the fc6 vectors are quite sparse)
with tf.Session() as sess:
# Initialize all variables
sess.run(tf.global_variables_initializer())
# Load the pretrained weights into the model
model.load_initial_weights(sess)
for i,meme in enumerate(img_files):
#meme_name = meme.replace('/Users/ALP/Desktop/Stanford/CS224n/MemeProject/memes/','')
#meme_name = meme_name.replace('.jpg','').lower()
#meme_name = meme_name.replace('-',' ')
img = Image.open(meme)
try:
img.thumbnail((227, 227), Image.ANTIALIAS)
#img = img.resize((227,227))
#use img.thumbnail for square images, img.resize for non square
assert np.shape(img) == (227, 227, 3)
except AssertionError:
img = img.resize((227,227))
print('sizing error')
# Subtract the ImageNet mean
img = img - imagenet_mean #should probably change this
# Reshape as needed to feed into model
img = img.reshape((1,227,227,3))
meme_vector = sess.run(score, feed_dict={x: img, keep_prob: 1}) #[1,4096]
meme_vector = np.reshape(meme_vector,[4096])
assert np.shape(meme_vector) == (4096,)
#match = [s.split('-',1)[-1].lstrip() for s in captions if meme_name in s]
match = []
meme_name = captions[counter].split('-')[0]
while meme_name in captions[counter]:
match.append(captions[counter].split('-')[-1])
counter += 1
#now save in tfrecords format, or prepare for that action
meme_vectors = [meme_vector for cap in match]
        image_names = [meme for cap in match]  # record which image file each caption was paired with
assert len(meme_vectors) == len(match)
data_memes.extend(meme_vectors)
data_captions.extend(match)
data_meme_names.extend(image_names)
if i % 100 == 0:
print(i,len(data_memes),len(data_captions),len(data_meme_names))
print(passed)
print(len(data_memes))
search_dir = '/Users/ALP/Desktop/Stanford/CS224n/MemeProject/memes'
os.chdir(search_dir)
files = filter(os.path.isfile, os.listdir(search_dir))
files = [os.path.join(search_dir, f) for f in files] # add path to each file
files.sort(key=lambda x: os.path.getmtime(x))
print(files[:100])
with open('/Users/ALP/Desktop/Stanford/CS224n/MemeProject/Captions.txt','r') as f:
captions = f.readlines()
#captions = list(set(captions))
captions = [s.lower() for s in captions]
print(len([s for s in captions if 'scared bekett' in s]))
captions[112000:112100]
del img_files[0]
img_files[2]
for i,meme in enumerate(img_files):
img_files[i] = meme.replace('/Users/ALP/Desktop/Stanford/CS224n/MemeProject/memes/','')
img_files[2503]
img_files[10]
f = open('ordered_memes.txt', 'w')
for item in img_files:
f.write('%s\n' % item)
deleters = []
for i,ting in enumerate(data_captions):
if ting == '':
deleters.append(i)
for i,ting in enumerate(deleters):
del data_captions[ting-i]
del data_memes[ting-i]
import re
word_captions = []
for capt in data_captions:
words = re.findall(r"[\w']+|[.,!?;'><(){}%$#£@-_+=|\/~`^&*]", capt)
word_captions.append(words)
#print(len(word_captions))
#word_captions = list(set(word_captions))
#print(len(word_captions))
from collections import Counter
print("Creating vocabulary.")
counter = Counter()
for c in word_captions:
counter.update(c)
print("Total words:", len(counter))
# Filter uncommon words and sort by descending count.
word_counts = [x for x in counter.items() if x[1] >= 3]
word_counts.sort(key=lambda x: x[1], reverse=True)
print("Words in vocabulary:", len(word_counts))
# Create the vocabulary dictionary.
reverse_vocab = [x[0] for x in word_counts]
#unk_id = len(reverse_vocab)
vocab_dict = dict([(x, y) for (y, x) in enumerate(reverse_vocab)])
reverse_vocab[1]
EMBEDDING_DIMENSION=300 # Available dimensions for 6B data is 50, 100, 200, 300
data_directory = '~/Desktop/Stanford/CS224n/MemeProject'
PAD_TOKEN = 0
word2idx = { 'PAD': PAD_TOKEN } # dict so we can lookup indices for tokenising our text later from string to sequence of integers
weights = []
index_counter = 0
with open('glove.42B.300d.txt','r') as file:
for index, line in enumerate(file):
values = line.split() # Word and weights separated by space
word = values[0] # Word is first symbol on each line
if word in vocab_dict:
index_counter += 1
word_weights = np.asarray(values[1:], dtype=np.float32) # Remainder of line is weights for word
word2idx[word] = index_counter # PAD is our zeroth index so shift by one
weights.append(word_weights)
if index % 20000 == 0:
print(index)
if index + 1 == 1500000:
# Limit vocabulary to top 40k terms
break
EMBEDDING_DIMENSION = len(weights[0])
# Insert the PAD weights at index 0 now we know the embedding dimension
weights.insert(0, np.zeros(EMBEDDING_DIMENSION))
# Append unknown and pad to end of vocab and initialize as random #maybe include start and end token here
UNKNOWN_TOKEN=len(weights)
word2idx['UNK'] = UNKNOWN_TOKEN
word2idx['<S>'] = UNKNOWN_TOKEN + 1
word2idx['</S>'] = UNKNOWN_TOKEN + 2
weights.append(np.random.randn(EMBEDDING_DIMENSION))
weights.append(np.random.randn(EMBEDDING_DIMENSION))
weights.append(np.random.randn(EMBEDDING_DIMENSION))
# Construct our final vocab
weights = np.asarray(weights, dtype=np.float32)
VOCAB_SIZE=weights.shape[0]
#Save Vocabulary
with tf.gfile.FastGFile('vocab.txt', "w") as f:
f.write("\n".join(["%s %d" % (w, c) for w, c in word2idx.iteritems()]))
print("Wrote vocabulary file:", 'vocab.txt')
with open('vocab.txt','r') as f:
reverse_vocab = list(f.readlines())
reverse_vocab = [(line.split()[0],line.split()[1]) for line in reverse_vocab]
vocab = dict([(x, y) for (x, y) in reverse_vocab])
print(vocab['.'])
x = sorted(vocab.iteritems(), key=lambda x: int(x[1]))
reverse_vocab = [y[0] for y in x]
print(reverse_vocab[44430:])
filenames = [os.path.join(image_dir, f) for f in ['one_does_not_simply.jpg']]
filenames
weights[76984]
np.savetxt('embedding_matrix2',weights)
deleters = []
for i,ting in enumerate(data_captions):
if len(ting) == 2:
deleters.append(i)
for i,ting in enumerate(deleters):
del data_captions[ting-i]
del data_memes[ting-i]
deleters[0]
len(data_captions)
import re
token_captions = []
for capt in data_captions:
token_caption = []
token_caption.append(word2idx['<S>'])
words = re.findall(r"[\w']+|[.,!?;'><(){}%$#£@-_+=|\/~`^&*]", capt)
for word in words:
try:
token = word2idx[word]
except KeyError:
token = word2idx['UNK']
token_caption.append(token)
token_caption.append(word2idx['</S>'])
token_captions.append(token_caption)
orig_unk_labels = []
for i in range(len(unk_label)):
print(labels_shuffled[unk_label[i]])
orig_unk_labels.append(labels_shuffled[unk_label[i]])
print(len(set(orig_unk_labels)))
from __future__ import division
total_words = 0
total_UNK = 0
for i,ting in enumerate(token_captions):
for word in ting:
total_words += 1
if word == word2idx['UNK']:
total_UNK += 1
print(total_words - 2*len(data_captions))
print(total_UNK)
print((total_UNK/(total_words - 2*len(data_captions))))
for i,ting in enumerate(deleters):
del data_captions[ting-i]
del data_memes[ting-i]
del token_captions[ting-i]
from random import shuffle
c = list(zip(data_memes, token_captions))
shuffle(c)
memes_shuffled, captions_shuffled = zip(*c)
len(captions_shuffled)
def _int64_feature(value):
"""Wrapper for inserting an int64 Feature into a SequenceExample proto."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
def _bytes_feature(value):
"""Wrapper for inserting a bytes Feature into a SequenceExample proto."""
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _int64_feature_list(values):
"""Wrapper for inserting an int64 FeatureList into a SequenceExample proto."""
return tf.train.FeatureList(feature=[_int64_feature(v) for v in values])
def _bytes_feature_list(values):
"""Wrapper for inserting a bytes FeatureList into a SequenceExample proto."""
return tf.train.FeatureList(feature=[_bytes_feature(v) for v in values])
def _floats_feature(value):
return tf.train.Feature(float_list=tf.train.FloatList(value=value))
#ONLY FOR ALEXNET
memes_shuffled_int = []
for i,meme in enumerate(memes_shuffled):
memes_shuffled_int.append(np.int_(meme*1000000000))
print(memes_shuffled_int[0][:100])
#ONLY FOR INCEPTION
class ImageDecoder(object):
"""Helper class for decoding images in TensorFlow."""
def __init__(self):
# Create a single TensorFlow Session for all image decoding calls.
self._sess = tf.Session()
# TensorFlow ops for JPEG decoding.
self._encoded_jpeg = tf.placeholder(dtype=tf.string)
self._decode_jpeg = tf.image.decode_jpeg(self._encoded_jpeg, channels=3)
def decode_jpeg(self, encoded_jpeg):
image = self._sess.run(self._decode_jpeg,
feed_dict={self._encoded_jpeg: encoded_jpeg})
assert len(image.shape) == 3
assert image.shape[2] == 3
return image
def _to_sequence_example(image, decoder, vocab):
"""Builds a SequenceExample proto for an image-caption pair.
Args:
image: An ImageMetadata object.
decoder: An ImageDecoder object.
vocab: A Vocabulary object.
Returns:
A SequenceExample proto.
"""
with tf.gfile.FastGFile(image.filename, "r") as f:
encoded_image = f.read()
try:
decoder.decode_jpeg(encoded_image)
except (tf.errors.InvalidArgumentError, AssertionError):
print("Skipping file with invalid JPEG data: %s" % image.filename)
return
context = tf.train.Features(feature={
"image/image_id": _int64_feature(image.image_id),
"image/data": _bytes_feature(encoded_image),
})
assert len(image.captions) == 1
caption = image.captions[0]
caption_ids = [vocab.word_to_id(word) for word in caption]
feature_lists = tf.train.FeatureLists(feature_list={
"image/caption": _bytes_feature_list(caption),
"image/caption_ids": _int64_feature_list(caption_ids)
})
sequence_example = tf.train.SequenceExample(
context=context, feature_lists=feature_lists)
return sequence_example
#SAVING TRAINING SET (ALEXNET)
import sys
train_filename = 'train.tfrecords4' # address to save the TFRecords file
# open the TFRecords file
writer = tf.python_io.TFRecordWriter(train_filename)
for i in range(len(memes_shuffled_int)):
if not i % 20000:
        print('Train data: {}/{}'.format(i, len(memes_shuffled_int)))
sys.stdout.flush()
context = tf.train.Features(feature={
"train/meme": _bytes_feature(memes_shuffled_int[i].tostring()),
})
feature_lists = tf.train.FeatureLists(feature_list={
"train/captions": _int64_feature_list(captions_shuffled[i])
})
sequence_example = tf.train.SequenceExample(
context=context, feature_lists=feature_lists)
writer.write(sequence_example.SerializeToString())
writer.close()
sys.stdout.flush()
###Output
_____no_output_____ |
Activities Week 4 (pandas)/Pandas_Week/Day1/Day1.ipynb | ###Markdown
Pandas Week Day 1 All Activities Instructor Turn Activity 1: Jupyter Introduction
###Code
# ====================================
# Activity 1 Instructor
# Running the basic "Hello World" code
# ====================================
hello = "Hello World"
print(hello)
# Doing simple math
4 + 4
# Storing results in variable
a = 5
# Using those variables elsewhere in the code
a
# Variables will hold the values most recently run
# This means that, if we run the code above, it will now print 2
a = 2
###Output
_____no_output_____
###Markdown
Students Turn Activity 2: Netflix Remix Instructions * Using `Netflix.py` as a jumping off point, convert the application so that it runs properly within a Jupyter Notebook. * Make sure to have the application print out the user's input, the path to `Netflix_Ratings.csv`, and the final rating/review for the film in different cells. Bonus * Go through any of the activities from last week and attempt to convert them to run within a Jupyter Notebook. While doing this, try to split up the code into cells and print out the outputs.
###Code
# ====================================
# Activity 2 Student
# ====================================
# Code Here
#Modules
import os
import csv
# Prompt user for video lookup
netflix_input = str(input("What movie are you looking for? "))
# Set path for file
csvpath = os.path.join("Resources","netflix_ratings.csv")
print(csvpath)
found = False
with open(csvpath, newline="") as csvfile:
csvreader = csv.reader(csvfile, delimiter=",")
print(csvfile)
for row in csvreader:
#if the first string in the list, title, equals the netflix input, then
if (row[0] == netflix_input):
#print the row
print(row[0] + " is rated " + row[1])
#found is now true, we found it
found = True
break
    # only report "not found" once, after checking every row
    if found == False:
        print("nope, sorry")
# Set variable to check if we found the video
###Output
Resources\netflix_ratings.csv
<_io.TextIOWrapper name='Resources\\netflix_ratings.csv' mode='r' encoding='cp1252'>
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
nope, sorry
###Markdown
Instructor Turn Activity 3: Creating Data Frames
###Code
# Dependencies: import pandas and alias it as pd
import pandas as pd
# Then create a Pandas Series from a raw list
data_series = pd.Series(["UC Irvine", "UCLA", "UC Berkley", "UC Riverside", "UC Davis"])
data_series
# Convert a list of dictionaries into a DataFrame
states_dicts = [{"STATE": "California", "ABBREVIATION" : "CA"},
{"STATE": "New York", "ABBREVIATION": "NY"}]
dataframe_states = pd.DataFrame(states_dicts)
dataframe_states
# Convert a single dictionary containing lists into a DataFrame
dataframe = pd.DataFrame(
{
"Dynasty": ["Early Dynastic Period", "Old Kingdom"],
"Pharoh": ["Thinis", "Memphis"]
}
)
dataframe
###Output
_____no_output_____
###Markdown
Students Turn Activity 4: Data-Frame Shop Instructions * Create a DataFrame for a frame shop that contains three columns - "Frame", "Price", and "Sales" - and has five rows of data stored within it. * Using an alternate method from that used before, create a DataFrame for an art gallery that contains three columns - "Painting", "Price", and "Popularity" - and has four rows of data stored within it. Bonus * Once both of the DataFrames have been created, discuss with those around you which method you prefer to use and why.
###Code
# Import Dependencies
import pandas as pd
# DataFrame should have 3 columns: Frame, Price, and Sales AND 5 rows of data
data_series = pd.Series(["Frame", "Price", "Sales"])
data_series
states_dicts = [{"Frame": "Square", "Price" : "15", "Sales": "300"},
{"Frame": "Round", "Price" : "20", "Sales": "200"},
{"Frame": "Rectangle", "Price" : "50", "Sales": "6"},
{"Frame": "Triangle", "Price" : "120", "Sales": "1200"},
{"Frame": "Diamond", "Price" : "50", "Sales": "700"},]
dataframe_states = pd.DataFrame(states_dicts)
dataframe_states
# Use a different method of creating DataFrames to
# Create a DataFrame for an art gallery that contains three columns - "Painting", "Price", and "Popularity"
# and has 4 rows of data
dataframe = pd.DataFrame(
{
"Painting": ["Mona Lisa", "Sunflowers", "Starry Night", "Ballerina"],
"Price": ["1500", "45", "5000", "3000"],
"Popularity": ["High", "Medium", "Medium", "Low"]
}
)
dataframe
###Output
_____no_output_____
###Markdown
Instructor Turn Activity 5: Data-Frame Functions
###Code
# Dependencies
import os
import pandas as pd
# Save path to data set in a variable
data_file = os.path.join("Resources", "dataSet.csv")
print(data_file)
# Use Pandas to read data
data_file_pd = pd.read_csv(data_file)
data_file_pd.head()
# Display a statistical overview of the DataFrame
data_file_pd.describe()
# Reference a single column within a DataFrame
data_file_pd["Amount"].head()
# Reference multiple columns within a DataFrame
data_file_pd[["Amount", "Gender"]].head()
# The mean method averages the series
average = data_file_pd["Amount"].mean()
average
# The sum method adds every entry in the series
total = data_file_pd["Amount"].sum()
total
# The unique method returns the distinct values in the series (each value listed once)
unique = data_file_pd["Last Name"].unique()
unique
# The value_counts method counts unique values in a column
count = data_file_pd["Gender"].value_counts()
count
# Calculations can also be performed on Series and added into DataFrames as new columns
thousands_of_dollars = data_file_pd["Amount"]/1000
data_file_pd["Thousands of Dollars"] = thousands_of_dollars
data_file_pd.head()
###Output
_____no_output_____
###Markdown
Students Turn: Activity 6 Training Grounds Instructions * Using the DataFrame provided, perform all of the following actions... * Provide a simple, analytical overview of the dataset's numeric columns * Collect all of the names of the trainers within the dataset * Figure out how many students each trainer has * Find the average weight of the students at the gym * Find the combined weight of all of the students at the gym * Convert the "Membership (Days)" column into weeks and then add this new series into the DataFrame
###Code
# ====================================
# Activity 6 Student
# ====================================
# Import Dependencies
import os
import pandas as pd
import random
# A gigantic DataFrame of individuals' names, their trainers, their weight, and their days as gym members
training_data = pd.DataFrame({
"Name":["Gino Walker","Hiedi Wasser","Kerrie Wetzel","Elizabeth Sackett","Jack Mitten","Madalene Wayman","Jamee Horvath","Arlena Reddin","Tula Levan","Teisha Dreier","Leslie Carrier","Arlette Hartson","Romana Merkle","Heath Viviani","Andres Zimmer","Allyson Osman","Yadira Caggiano","Jeanmarie Friedrichs","Leann Ussery","Bee Mom","Pandora Charland","Karena Wooten","Elizabet Albanese","Augusta Borjas","Erma Yadon","Belia Lenser","Karmen Sancho","Edison Mannion","Sonja Hornsby","Morgan Frei","Florencio Murphy","Christoper Hertel","Thalia Stepney","Tarah Argento","Nicol Canfield","Pok Moretti","Barbera Stallings","Muoi Kelso","Cicely Ritz","Sid Demelo","Eura Langan","Vanita An","Frieda Fuhr","Ernest Fitzhenry","Ashlyn Tash","Melodi Mclendon","Rochell Leblanc","Jacqui Reasons","Freeda Mccroy","Vanna Runk","Florinda Milot","Cierra Lecompte","Nancey Kysar","Latasha Dalton","Charlyn Rinaldi","Erline Averett","Mariko Hillary","Rosalyn Trigg","Sherwood Brauer","Hortencia Olesen","Delana Kohut","Geoffrey Mcdade","Iona Delancey","Donnie Read","Cesar Bhatia","Evia Slate","Kaye Hugo","Denise Vento","Lang Kittle","Sherry Whittenberg","Jodi Bracero","Tamera Linneman","Katheryn Koelling","Tonia Shorty","Misha Baxley","Lisbeth Goering","Merle Ladwig","Tammie Omar","Jesusa Avilla","Alda Zabala","Junita Dogan","Jessia Anglin","Peggie Scranton","Dania Clodfelter","Janis Mccarthy","Edmund Galusha","Tonisha Posey","Arvilla Medley","Briana Barbour","Delfina Kiger","Nia Lenig","Ricarda Bulow","Odell Carson","Nydia Clonts","Andree Resendez","Daniela Puma","Sherill Paavola","Gilbert Bloomquist","Shanon Mach","Justin Bangert","Arden Hokanson","Evelyne Bridge","Hee Simek","Ward Deangelis","Jodie Childs","Janis Boehme","Beaulah Glowacki","Denver Stoneham","Tarra Vinton","Deborah Hummell","Ulysses Neil","Kathryn Marques","Rosanna Dake","Gavin Wheat","Tameka Stoke","Janella Clear","Kaye Ciriaco","Suk Bloxham","Gracia Whaley","Philomena Hemingway","Claudette Vaillancourt","Olevia Piche","Trey Chiles","Idalia Scardina","Jenine Tremble","Herbert Krider","Alycia Schrock","Miss Weibel","Pearlene Neidert","Kina Callender","Charlotte Skelley","Theodora Harrigan","Sydney Shreffler","Annamae Trinidad","Tobi Mumme","Rosia Elliot","Debbra Putt","Rena Delosantos","Genna Grennan","Nieves Huf","Berry Lugo","Ayana Verdugo","Joaquin Mazzei","Doris Harmon","Patience Poss","Magaret Zabel","Marylynn Hinojos","Earlene Marcantel","Yuki Evensen","Rema Gay","Delana Haak","Patricia Fetters","Vinnie Elrod","Octavia Bellew","Burma Revard","Lakenya Kato","Vinita Buchner","Sierra Margulies","Shae Funderburg","Jenae Groleau","Louetta Howie","Astrid Duffer","Caron Altizer","Kymberly Amavisca","Mohammad Diedrich","Thora Wrinkle","Bethel Wiemann","Patria Millet","Eldridge Burbach","Alyson Eddie","Zula Hanna","Devin Goodwin","Felipa Kirkwood","Kurtis Kempf","Kasey Lenart","Deena Blankenship","Kandra Wargo","Sherrie Cieslak","Ron Atha","Reggie Barreiro","Daria Saulter","Tandra Eastman","Donnell Lucious","Talisha Rosner","Emiko Bergh","Terresa Launius","Margy Hoobler","Marylou Stelling","Lavonne Justice","Kala Langstaff","China Truett","Louanne Dussault","Thomasena Samaniego","Charlesetta Tarbell","Fatimah Lade","Malisa Cantero","Florencia Litten","Francina Fraise","Patsy London","Deloris Mclaughlin"],
"Trainer":['Bettyann Savory','Mariah Barberio','Gordon Perrine','Pa Dargan','Blanch Victoria','Aldo Byler','Aldo Byler','Williams Camire','Junie Ritenour','Gordon Perrine','Bettyann Savory','Mariah Barberio','Aldo Byler','Barton Stecklein','Bettyann Savory','Barton Stecklein','Gordon Perrine','Pa Dargan','Aldo Byler','Brittani Brin','Bettyann Savory','Phyliss Houk','Bettyann Savory','Junie Ritenour','Aldo Byler','Calvin North','Brittani Brin','Junie Ritenour','Blanch Victoria','Brittani Brin','Bettyann Savory','Blanch Victoria','Mariah Barberio','Bettyann Savory','Blanch Victoria','Brittani Brin','Junie Ritenour','Pa Dargan','Gordon Perrine','Phyliss Houk','Pa Dargan','Mariah Barberio','Phyliss Houk','Phyliss Houk','Calvin North','Williams Camire','Brittani Brin','Gordon Perrine','Bettyann Savory','Bettyann Savory','Pa Dargan','Phyliss Houk','Barton Stecklein','Blanch Victoria','Coleman Dunmire','Phyliss Houk','Blanch Victoria','Pa Dargan','Harland Coolidge','Calvin North','Bettyann Savory','Phyliss Houk','Bettyann Savory','Harland Coolidge','Gordon Perrine','Junie Ritenour','Harland Coolidge','Blanch Victoria','Mariah Barberio','Coleman Dunmire','Aldo Byler','Bettyann Savory','Gordon Perrine','Bettyann Savory','Barton Stecklein','Harland Coolidge','Aldo Byler','Aldo Byler','Pa Dargan','Junie Ritenour','Brittani Brin','Junie Ritenour','Gordon Perrine','Mariah Barberio','Mariah Barberio','Mariah Barberio','Bettyann Savory','Brittani Brin','Aldo Byler','Phyliss Houk','Blanch Victoria','Pa Dargan','Phyliss Houk','Brittani Brin','Barton Stecklein','Coleman Dunmire','Bettyann Savory','Bettyann Savory','Gordon Perrine','Blanch Victoria','Junie Ritenour','Phyliss Houk','Coleman Dunmire','Williams Camire','Harland Coolidge','Williams Camire','Aldo Byler','Harland Coolidge','Gordon Perrine','Brittani Brin','Coleman Dunmire','Calvin North','Phyliss Houk','Brittani Brin','Aldo Byler','Bettyann Savory','Brittani Brin','Gordon Perrine','Calvin North','Harland Coolidge','Coleman Dunmire','Harland Coolidge','Aldo Byler','Junie Ritenour','Blanch Victoria','Harland Coolidge','Blanch Victoria','Junie Ritenour','Harland Coolidge','Junie Ritenour','Gordon Perrine','Brittani Brin','Coleman Dunmire','Williams Camire','Junie Ritenour','Brittani Brin','Calvin North','Barton Stecklein','Barton Stecklein','Mariah Barberio','Coleman Dunmire','Bettyann Savory','Mariah Barberio','Pa Dargan','Barton Stecklein','Coleman Dunmire','Brittani Brin','Barton Stecklein','Pa Dargan','Barton Stecklein','Junie Ritenour','Bettyann Savory','Williams Camire','Pa Dargan','Calvin North','Williams Camire','Coleman Dunmire','Aldo Byler','Barton Stecklein','Coleman Dunmire','Blanch Victoria','Mariah Barberio','Mariah Barberio','Harland Coolidge','Barton Stecklein','Phyliss Houk','Pa Dargan','Bettyann Savory','Barton Stecklein','Harland Coolidge','Junie Ritenour','Pa Dargan','Mariah Barberio','Blanch Victoria','Williams Camire','Phyliss Houk','Phyliss Houk','Coleman Dunmire','Mariah Barberio','Gordon Perrine','Coleman Dunmire','Brittani Brin','Pa Dargan','Coleman Dunmire','Brittani Brin','Blanch Victoria','Coleman Dunmire','Gordon Perrine','Coleman Dunmire','Aldo Byler','Aldo Byler','Mariah Barberio','Williams Camire','Phyliss Houk','Aldo Byler','Williams Camire','Aldo Byler','Williams Camire','Coleman Dunmire','Phyliss Houk'],
"Weight":[128,180,193,177,237,166,224,208,177,241,114,161,162,151,220,142,193,193,124,130,132,141,190,239,213,131,172,127,184,157,215,122,181,240,218,205,239,217,234,158,180,131,194,171,177,110,117,114,217,123,248,189,198,127,182,121,224,111,151,170,188,150,137,231,222,186,139,175,178,246,150,154,129,216,144,198,228,183,173,129,157,199,186,232,172,157,246,239,214,161,132,208,187,224,164,177,175,224,219,235,112,241,243,179,208,196,131,207,182,233,191,162,173,197,190,182,231,196,196,143,250,174,138,135,164,204,235,192,114,179,215,127,185,213,250,213,153,217,176,190,119,167,118,208,113,206,200,236,159,218,168,159,156,183,121,203,215,209,179,219,174,220,129,188,217,250,166,157,112,236,182,144,189,243,238,147,165,115,160,134,245,174,238,157,150,184,174,134,134,248,199,165,117,119,162,112,170,224,247,217],
"Membership(Days)":[52,70,148,124,186,157,127,155,37,185,158,129,93,69,124,13,76,153,164,161,48,121,167,69,39,163,7,34,176,169,108,162,195,86,155,77,197,200,80,142,179,67,58,145,188,147,125,15,13,173,125,4,61,29,132,110,62,137,197,135,162,174,32,151,149,65,18,42,63,62,104,200,189,40,38,199,1,12,8,2,195,30,7,72,130,144,2,34,200,143,43,196,22,115,171,54,143,59,14,52,109,115,187,185,26,19,178,18,120,169,45,52,130,69,168,178,96,22,78,152,39,51,118,130,60,156,108,69,103,158,165,142,86,91,117,77,57,169,86,188,97,111,22,83,81,177,163,35,12,164,21,181,171,138,22,107,58,51,38,128,19,193,157,13,104,89,13,10,26,190,179,101,7,159,100,49,120,109,56,199,51,108,47,171,69,162,74,119,148,88,32,159,65,146,140,171,88,18,59,13]
})
training_data.head(10)
# Collecting a summary of all numeric data
training_data.describe()
# Finding the names of the trainers
trainer = training_data["Trainer"].unique()
trainer
# Finding how many students each trainer has
count = training_data["Trainer"].value_counts()
count
# Finding the average weight of all students
training_data["Weight"].mean()
# Finding the combined weight of all students
training_data["Weight"].sum()
# Converting the membership days into weeks and then adding a column to the DataFrame
weeks = training_data["Membership(Days)"]/7
training_data["Membership(Weeks)"] = weeks
training_data.head()
###Output
_____no_output_____
###Markdown
Instructor Turn: Activity 7 Column Manipulation
###Code
# Import Dependencies
import pandas as pd
# A gigantic DataFrame of individuals' names, their trainers, their weight, and their days as gym members
training_data = pd.DataFrame({
"Name":["Gino Walker","Hiedi Wasser","Kerrie Wetzel","Elizabeth Sackett","Jack Mitten","Madalene Wayman","Jamee Horvath","Arlena Reddin","Tula Levan","Teisha Dreier","Leslie Carrier","Arlette Hartson","Romana Merkle","Heath Viviani","Andres Zimmer","Allyson Osman","Yadira Caggiano","Jeanmarie Friedrichs","Leann Ussery","Bee Mom","Pandora Charland","Karena Wooten","Elizabet Albanese","Augusta Borjas","Erma Yadon","Belia Lenser","Karmen Sancho","Edison Mannion","Sonja Hornsby","Morgan Frei","Florencio Murphy","Christoper Hertel","Thalia Stepney","Tarah Argento","Nicol Canfield","Pok Moretti","Barbera Stallings","Muoi Kelso","Cicely Ritz","Sid Demelo","Eura Langan","Vanita An","Frieda Fuhr","Ernest Fitzhenry","Ashlyn Tash","Melodi Mclendon","Rochell Leblanc","Jacqui Reasons","Freeda Mccroy","Vanna Runk","Florinda Milot","Cierra Lecompte","Nancey Kysar","Latasha Dalton","Charlyn Rinaldi","Erline Averett","Mariko Hillary","Rosalyn Trigg","Sherwood Brauer","Hortencia Olesen","Delana Kohut","Geoffrey Mcdade","Iona Delancey","Donnie Read","Cesar Bhatia","Evia Slate","Kaye Hugo","Denise Vento","Lang Kittle","Sherry Whittenberg","Jodi Bracero","Tamera Linneman","Katheryn Koelling","Tonia Shorty","Misha Baxley","Lisbeth Goering","Merle Ladwig","Tammie Omar","Jesusa Avilla","Alda Zabala","Junita Dogan","Jessia Anglin","Peggie Scranton","Dania Clodfelter","Janis Mccarthy","Edmund Galusha","Tonisha Posey","Arvilla Medley","Briana Barbour","Delfina Kiger","Nia Lenig","Ricarda Bulow","Odell Carson","Nydia Clonts","Andree Resendez","Daniela Puma","Sherill Paavola","Gilbert Bloomquist","Shanon Mach","Justin Bangert","Arden Hokanson","Evelyne Bridge","Hee Simek","Ward Deangelis","Jodie Childs","Janis Boehme","Beaulah Glowacki","Denver Stoneham","Tarra Vinton","Deborah Hummell","Ulysses Neil","Kathryn Marques","Rosanna Dake","Gavin Wheat","Tameka Stoke","Janella Clear","Kaye Ciriaco","Suk Bloxham","Gracia Whaley","Philomena Hemingway","Claudette Vaillancourt","Olevia Piche","Trey Chiles","Idalia Scardina","Jenine Tremble","Herbert Krider","Alycia Schrock","Miss Weibel","Pearlene Neidert","Kina Callender","Charlotte Skelley","Theodora Harrigan","Sydney Shreffler","Annamae Trinidad","Tobi Mumme","Rosia Elliot","Debbra Putt","Rena Delosantos","Genna Grennan","Nieves Huf","Berry Lugo","Ayana Verdugo","Joaquin Mazzei","Doris Harmon","Patience Poss","Magaret Zabel","Marylynn Hinojos","Earlene Marcantel","Yuki Evensen","Rema Gay","Delana Haak","Patricia Fetters","Vinnie Elrod","Octavia Bellew","Burma Revard","Lakenya Kato","Vinita Buchner","Sierra Margulies","Shae Funderburg","Jenae Groleau","Louetta Howie","Astrid Duffer","Caron Altizer","Kymberly Amavisca","Mohammad Diedrich","Thora Wrinkle","Bethel Wiemann","Patria Millet","Eldridge Burbach","Alyson Eddie","Zula Hanna","Devin Goodwin","Felipa Kirkwood","Kurtis Kempf","Kasey Lenart","Deena Blankenship","Kandra Wargo","Sherrie Cieslak","Ron Atha","Reggie Barreiro","Daria Saulter","Tandra Eastman","Donnell Lucious","Talisha Rosner","Emiko Bergh","Terresa Launius","Margy Hoobler","Marylou Stelling","Lavonne Justice","Kala Langstaff","China Truett","Louanne Dussault","Thomasena Samaniego","Charlesetta Tarbell","Fatimah Lade","Malisa Cantero","Florencia Litten","Francina Fraise","Patsy London","Deloris Mclaughlin"],
"Trainer":['Bettyann Savory','Mariah Barberio','Gordon Perrine','Pa Dargan','Blanch Victoria','Aldo Byler','Aldo Byler','Williams Camire','Junie Ritenour','Gordon Perrine','Bettyann Savory','Mariah Barberio','Aldo Byler','Barton Stecklein','Bettyann Savory','Barton Stecklein','Gordon Perrine','Pa Dargan','Aldo Byler','Brittani Brin','Bettyann Savory','Phyliss Houk','Bettyann Savory','Junie Ritenour','Aldo Byler','Calvin North','Brittani Brin','Junie Ritenour','Blanch Victoria','Brittani Brin','Bettyann Savory','Blanch Victoria','Mariah Barberio','Bettyann Savory','Blanch Victoria','Brittani Brin','Junie Ritenour','Pa Dargan','Gordon Perrine','Phyliss Houk','Pa Dargan','Mariah Barberio','Phyliss Houk','Phyliss Houk','Calvin North','Williams Camire','Brittani Brin','Gordon Perrine','Bettyann Savory','Bettyann Savory','Pa Dargan','Phyliss Houk','Barton Stecklein','Blanch Victoria','Coleman Dunmire','Phyliss Houk','Blanch Victoria','Pa Dargan','Harland Coolidge','Calvin North','Bettyann Savory','Phyliss Houk','Bettyann Savory','Harland Coolidge','Gordon Perrine','Junie Ritenour','Harland Coolidge','Blanch Victoria','Mariah Barberio','Coleman Dunmire','Aldo Byler','Bettyann Savory','Gordon Perrine','Bettyann Savory','Barton Stecklein','Harland Coolidge','Aldo Byler','Aldo Byler','Pa Dargan','Junie Ritenour','Brittani Brin','Junie Ritenour','Gordon Perrine','Mariah Barberio','Mariah Barberio','Mariah Barberio','Bettyann Savory','Brittani Brin','Aldo Byler','Phyliss Houk','Blanch Victoria','Pa Dargan','Phyliss Houk','Brittani Brin','Barton Stecklein','Coleman Dunmire','Bettyann Savory','Bettyann Savory','Gordon Perrine','Blanch Victoria','Junie Ritenour','Phyliss Houk','Coleman Dunmire','Williams Camire','Harland Coolidge','Williams Camire','Aldo Byler','Harland Coolidge','Gordon Perrine','Brittani Brin','Coleman Dunmire','Calvin North','Phyliss Houk','Brittani Brin','Aldo Byler','Bettyann Savory','Brittani Brin','Gordon Perrine','Calvin North','Harland Coolidge','Coleman Dunmire','Harland Coolidge','Aldo Byler','Junie Ritenour','Blanch Victoria','Harland Coolidge','Blanch Victoria','Junie Ritenour','Harland Coolidge','Junie Ritenour','Gordon Perrine','Brittani Brin','Coleman Dunmire','Williams Camire','Junie Ritenour','Brittani Brin','Calvin North','Barton Stecklein','Barton Stecklein','Mariah Barberio','Coleman Dunmire','Bettyann Savory','Mariah Barberio','Pa Dargan','Barton Stecklein','Coleman Dunmire','Brittani Brin','Barton Stecklein','Pa Dargan','Barton Stecklein','Junie Ritenour','Bettyann Savory','Williams Camire','Pa Dargan','Calvin North','Williams Camire','Coleman Dunmire','Aldo Byler','Barton Stecklein','Coleman Dunmire','Blanch Victoria','Mariah Barberio','Mariah Barberio','Harland Coolidge','Barton Stecklein','Phyliss Houk','Pa Dargan','Bettyann Savory','Barton Stecklein','Harland Coolidge','Junie Ritenour','Pa Dargan','Mariah Barberio','Blanch Victoria','Williams Camire','Phyliss Houk','Phyliss Houk','Coleman Dunmire','Mariah Barberio','Gordon Perrine','Coleman Dunmire','Brittani Brin','Pa Dargan','Coleman Dunmire','Brittani Brin','Blanch Victoria','Coleman Dunmire','Gordon Perrine','Coleman Dunmire','Aldo Byler','Aldo Byler','Mariah Barberio','Williams Camire','Phyliss Houk','Aldo Byler','Williams Camire','Aldo Byler','Williams Camire','Coleman Dunmire','Phyliss Houk'],
"Weight":[128,180,193,177,237,166,224,208,177,241,114,161,162,151,220,142,193,193,124,130,132,141,190,239,213,131,172,127,184,157,215,122,181,240,218,205,239,217,234,158,180,131,194,171,177,110,117,114,217,123,248,189,198,127,182,121,224,111,151,170,188,150,137,231,222,186,139,175,178,246,150,154,129,216,144,198,228,183,173,129,157,199,186,232,172,157,246,239,214,161,132,208,187,224,164,177,175,224,219,235,112,241,243,179,208,196,131,207,182,233,191,162,173,197,190,182,231,196,196,143,250,174,138,135,164,204,235,192,114,179,215,127,185,213,250,213,153,217,176,190,119,167,118,208,113,206,200,236,159,218,168,159,156,183,121,203,215,209,179,219,174,220,129,188,217,250,166,157,112,236,182,144,189,243,238,147,165,115,160,134,245,174,238,157,150,184,174,134,134,248,199,165,117,119,162,112,170,224,247,217],
"Membership(Days)":[52,70,148,124,186,157,127,155,37,185,158,129,93,69,124,13,76,153,164,161,48,121,167,69,39,163,7,34,176,169,108,162,195,86,155,77,197,200,80,142,179,67,58,145,188,147,125,15,13,173,125,4,61,29,132,110,62,137,197,135,162,174,32,151,149,65,18,42,63,62,104,200,189,40,38,199,1,12,8,2,195,30,7,72,130,144,2,34,200,143,43,196,22,115,171,54,143,59,14,52,109,115,187,185,26,19,178,18,120,169,45,52,130,69,168,178,96,22,78,152,39,51,118,130,60,156,108,69,103,158,165,142,86,91,117,77,57,169,86,188,97,111,22,83,81,177,163,35,12,164,21,181,171,138,22,107,58,51,38,128,19,193,157,13,104,89,13,10,26,190,179,101,7,159,100,49,120,109,56,199,51,108,47,171,69,162,74,119,148,88,32,159,65,146,140,171,88,18,59,13]
})
training_data.head(10)
# Collecting a list of all columns within the DataFrame
training_data.columns
# Reorganizing the columns using double brackets
organized_df = training_data[["Name","Trainer","Weight","Membership(Days)"]]
organized_df.head()
# Using .rename(columns={}) in order to rename columns
renamed_df = organized_df.rename(columns={"Membership(Days)":"Membership in Days", "Weight":"Weight in Pounds"})
renamed_df.head()
###Output
_____no_output_____
###Markdown
Students Turn: Activity 8 Hey, Arnold! * This assignment will give you experience creating DataFrames from scratch. * You will create a pandas DataFrame of characters from this TV show: [https://en.wikipedia.org/wiki/Hey_Arnold!](https://en.wikipedia.org/wiki/Hey_Arnold!) Instructions: 1. First, use Pandas to create a DataFrame with the following columns and values: * `Character_in_show`: Arnold, Gerald, Helga, Phoebe, Harold, Eugene * `color_of_hair`: blonde, black, blonde, black, unknown, red * `Height`: average, tallish, tallish, short, tall, short * `Football_Shaped_Head`: True, False, False, False, False, False 2. You'll note that the above column names are inconsistent and difficult to work with. Rename them to the following, respectively: * `Character`, `Hair Color`, `Height`, `Football Head` 3. Next, create a new table that contains all of the columns in the following order... * `Character`, `Football Head`, `Hair Color`, `Height` 4. Finally, save the file in Excel format.
###Code
# import dependencies
import os
import csv
import pandas as pd
# Create a data frame with given columns and value
training_data = pd.DataFrame({
"Character_in_show":["Arnold", "Gerald", "Helga", "Phoebe", "Harold", "Eugene"],
"color_of_hair":["blonde", "black", "blonde", "black", "unknown", "red"],
"Height": ["average", "tallish", "tallish", "short", "tall", "short"],
"Football_Shaped_Head": [True, False, False, False, False, False]
})
training_data
# Rename columns to clean up the look
renamed = training_data.rename(columns={
"Character_in_show":"Character",
"Football_Shaped_Head":"Football Head",
"color_of_hair": "Hair Color"})
renamed.head()
# Organize columns into a more logical order
column_switch = renamed[["Character", "Football Head", "Hair Color", "Height"]]
column_switch.head()
# Export the new data using .to_excel("Output/filename.xlsx")
column_switch.to_excel("Output/hey_arnold.xlsx")
###Output
_____no_output_____
###Markdown
Instructor Turn: Activity 9 Reading and Writing CSVs
###Code
# Dependencies
import pandas as pd
# Store filepath in a variable
file_one = "Resources/DataOne.csv"
# Read our Data file with the pandas library
# Not every CSV requires an encoding, but be aware this can come up
file_one_df = pd.read_csv(file_one, encoding="ISO-8859-1")
# Show just the header
file_one_df.head()
# Show a single column
file_one_df["first_name"].head()
# Show multiple specific columns--note the extra brackets
file_one_df[["first_name", "email"]].head()
# Head does not change the DataFrame--it only displays it
file_one_df.head()
# Export file as a CSV, without the Pandas index (default is true), but with the header
file_one_df.to_csv("Output/fileOne.csv", index=False, header=True)
###Output
_____no_output_____
###Markdown
Students Turn Activity 10: GoodReads - Part 1 Instructions * Read in the GoodReads CSV using Pandas * Remove unnecessary columns from the DataFrame so that only the following columns remain: `isbn`, `original_publication_year`, `original_title`, `authors`, `ratings_1`, `ratings_2`, `ratings_3`, `ratings_4`, and `ratings_5` * Rename the columns to the following: `ISBN`, `Publication Year`, `Original Title`, `Authors`, `One Star Reviews`, `Two Star Reviews`, `Three Star Reviews`, `Four Star Reviews`, and `Five Star Reviews` * Write the DataFrame into a new CSV file Hints * The base CSV file uses UTF-8 encoding. Trying to read in the file using some other kind of encoding could lead to strange characters appearing within the dataset.
###Code
# Import Dependencies
import os
import csv
import pandas as pd
# Make a reference to the books.csv file path encoding utf-8
bookpath = os.path.join("Resources","books.csv")
# Import the books.csv file as a DataFrame
books_df = pd.read_csv(bookpath, encoding="UTF-8")
books_df
# Remove unnecessary columns from the DataFrame and save the new DataFrame
# Only keep: "isbn", "original_publication_year", "original_title", "authors",
# "ratings_1", "ratings_2", "ratings_3", "ratings_4", "ratings_5"
books_df_columns = books_df[["isbn", "original_publication_year", "original_title", "authors",
"ratings_1", "ratings_2", "ratings_3", "ratings_4", "ratings_5"]]
books_df_columns.head()
# Rename the headers to be more explanatory
renamed_columns = books_df_columns.rename(columns={
"original_publication_year":"Publication Year",
"original_title":"Title",
"ratings_1": "1 Star",
"ratings_2": "2 Stars",
"ratings_3": "3 Stars",
"ratings_4": "4 Stars",
"ratings_5": "5 Stars",
})
renamed_columns.head()
# Push the remade DataFrame to a new CSV file
renamed_columns.to_csv("Output/GoodReads1.csv", index=False, header=True)
###Output
_____no_output_____
###Markdown
Students Turn Activity 11: GoodReads - Part II Instructions * Using the modified DataFrame that was created earlier, create a summary table for the dataset that includes the following pieces of information... * The count of unique authors within the DataFrame * The year of the earliest published book in the DataFrame * The year of the latest published book in the DataFrame * The total number of reviews within the DataFrame
###Code
# Import Dependencies
import os
import csv
import pandas as pd
# File to Load
csv_path = "./Output/GoodReads1.csv"
# Read the modified GoodReads csv and store into Pandas DataFrame
books_df = pd.read_csv(csv_path, encoding="UTF-8")
books_df.head()
# Calculate the number of unique authors in the DataFrame
num_authors = books_df["authors"].nunique()
#OR
#num_authors = len(books_df["authors"].unique())
# Calculate the earliest/latest year a book was published
earliest_year = books_df["Publication Year"].min()
latest_year = books_df["Publication Year"].max()
# Calculate the total reviews for the entire dataset
total_reviews = books_df["1 Star"].sum() + books_df["2 Stars"].sum() + books_df["3 Stars"].sum() + books_df["4 Stars"].sum() + books_df["5 Stars"].sum()
# Place all of the data found into a summary DataFrame
summary = [{"Number of Authors": num_authors,
"Earliest Published": earliest_year,
"Latest Published": latest_year,
"Total Reviews" : total_reviews}]
dataframe_summary = pd.DataFrame(summary)
dataframe_summary
###Output
_____no_output_____ |
coursebook/modules/m2/02_00_introduction_to_dataframe.ipynb | ###Markdown
Introduction to the DataFrame API

In this section, we will introduce the [DataFrame and Dataset APIs](https://spark.apache.org/docs/latest/sql-programming-guide.html). We will use a small subset of the [Record Linkage Comparison Data Set](https://archive.ics.uci.edu/ml/datasets/record+linkage+comparison+patterns), borrowed from the UC Irvine Machine Learning Repository. It consists of several CSV files with match scores for patients in a German hospital, but we will use only one of them for the sake of simplicity. Please consult {cite:p}`schmidtmann2009evaluation` and {cite:p}`sariyar2011controlling` for more details regarding the data sets and research.

Setup
- Set up a `SparkSession` to work with the Dataset and DataFrame API
- Unzip the `scores.zip` file located under the `data` folder
###Code
from pyspark import SparkContext, SparkConf
conf = SparkConf().setAppName("intro-to-df").setMaster("local")
sc = SparkContext(conf=conf)
# Avoid polluting the console with warning messages
sc.setLogLevel("ERROR")
###Output
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
22/03/07 18:04:29 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
22/03/07 18:04:29 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
###Markdown
Create a SparkSession to work with the DataFrame API
###Code
from pyspark.sql import SparkSession
spark = SparkSession(sc)
help(SparkSession)
###Output
Help on class SparkSession in module pyspark.sql.session:
class SparkSession(pyspark.sql.pandas.conversion.SparkConversionMixin)
| SparkSession(sparkContext, jsparkSession=None)
|
| The entry point to programming Spark with the Dataset and DataFrame API.
|
| A SparkSession can be used create :class:`DataFrame`, register :class:`DataFrame` as
| tables, execute SQL over tables, cache tables, and read parquet files.
| To create a :class:`SparkSession`, use the following builder pattern:
|
| .. autoattribute:: builder
| :annotation:
|
| Examples
| --------
| >>> spark = SparkSession.builder \
| ... .master("local") \
| ... .appName("Word Count") \
| ... .config("spark.some.config.option", "some-value") \
| ... .getOrCreate()
|
| >>> from datetime import datetime
| >>> from pyspark.sql import Row
| >>> spark = SparkSession(sc)
| >>> allTypes = sc.parallelize([Row(i=1, s="string", d=1.0, l=1,
| ... b=True, list=[1, 2, 3], dict={"s": 0}, row=Row(a=1),
| ... time=datetime(2014, 8, 1, 14, 1, 5))])
| >>> df = allTypes.toDF()
| >>> df.createOrReplaceTempView("allTypes")
| >>> spark.sql('select i+1, d+1, not b, list[1], dict["s"], time, row.a '
| ... 'from allTypes where b and i > 0').collect()
| [Row((i + 1)=2, (d + 1)=2.0, (NOT b)=False, list[1]=2, dict[s]=0, time=datetime.datetime(2014, 8, 1, 14, 1, 5), a=1)]
| >>> df.rdd.map(lambda x: (x.i, x.s, x.d, x.l, x.b, x.time, x.row.a, x.list)).collect()
| [(1, 'string', 1.0, 1, True, datetime.datetime(2014, 8, 1, 14, 1, 5), 1, [1, 2, 3])]
|
| Method resolution order:
| SparkSession
| pyspark.sql.pandas.conversion.SparkConversionMixin
| builtins.object
|
| Methods defined here:
|
| __enter__(self)
| Enable 'with SparkSession.builder.(...).getOrCreate() as session: app' syntax.
|
| .. versionadded:: 2.0
|
| __exit__(self, exc_type, exc_val, exc_tb)
| Enable 'with SparkSession.builder.(...).getOrCreate() as session: app' syntax.
|
| Specifically stop the SparkSession on exit of the with block.
|
| .. versionadded:: 2.0
|
| __init__(self, sparkContext, jsparkSession=None)
| Initialize self. See help(type(self)) for accurate signature.
|
| createDataFrame(self, data, schema=None, samplingRatio=None, verifySchema=True)
| Creates a :class:`DataFrame` from an :class:`RDD`, a list or a :class:`pandas.DataFrame`.
|
| When ``schema`` is a list of column names, the type of each column
| will be inferred from ``data``.
|
| When ``schema`` is ``None``, it will try to infer the schema (column names and types)
| from ``data``, which should be an RDD of either :class:`Row`,
| :class:`namedtuple`, or :class:`dict`.
|
| When ``schema`` is :class:`pyspark.sql.types.DataType` or a datatype string, it must match
| the real data, or an exception will be thrown at runtime. If the given schema is not
| :class:`pyspark.sql.types.StructType`, it will be wrapped into a
| :class:`pyspark.sql.types.StructType` as its only field, and the field name will be "value".
| Each record will also be wrapped into a tuple, which can be converted to row later.
|
| If schema inference is needed, ``samplingRatio`` is used to determined the ratio of
| rows used for schema inference. The first row will be used if ``samplingRatio`` is ``None``.
|
| .. versionadded:: 2.0.0
|
| .. versionchanged:: 2.1.0
| Added verifySchema.
|
| Parameters
| ----------
| data : :class:`RDD` or iterable
| an RDD of any kind of SQL data representation (:class:`Row`,
| :class:`tuple`, ``int``, ``boolean``, etc.), or :class:`list`, or
| :class:`pandas.DataFrame`.
| schema : :class:`pyspark.sql.types.DataType`, str or list, optional
| a :class:`pyspark.sql.types.DataType` or a datatype string or a list of
| column names, default is None. The data type string format equals to
| :class:`pyspark.sql.types.DataType.simpleString`, except that top level struct type can
| omit the ``struct<>`` and atomic types use ``typeName()`` as their format, e.g. use
| ``byte`` instead of ``tinyint`` for :class:`pyspark.sql.types.ByteType`.
| We can also use ``int`` as a short name for :class:`pyspark.sql.types.IntegerType`.
| samplingRatio : float, optional
| the sample ratio of rows used for inferring
| verifySchema : bool, optional
| verify data types of every row against schema. Enabled by default.
|
| Returns
| -------
| :class:`DataFrame`
|
| Notes
| -----
| Usage with spark.sql.execution.arrow.pyspark.enabled=True is experimental.
|
| Examples
| --------
| >>> l = [('Alice', 1)]
| >>> spark.createDataFrame(l).collect()
| [Row(_1='Alice', _2=1)]
| >>> spark.createDataFrame(l, ['name', 'age']).collect()
| [Row(name='Alice', age=1)]
|
| >>> d = [{'name': 'Alice', 'age': 1}]
| >>> spark.createDataFrame(d).collect()
| [Row(age=1, name='Alice')]
|
| >>> rdd = sc.parallelize(l)
| >>> spark.createDataFrame(rdd).collect()
| [Row(_1='Alice', _2=1)]
| >>> df = spark.createDataFrame(rdd, ['name', 'age'])
| >>> df.collect()
| [Row(name='Alice', age=1)]
|
| >>> from pyspark.sql import Row
| >>> Person = Row('name', 'age')
| >>> person = rdd.map(lambda r: Person(*r))
| >>> df2 = spark.createDataFrame(person)
| >>> df2.collect()
| [Row(name='Alice', age=1)]
|
| >>> from pyspark.sql.types import *
| >>> schema = StructType([
| ... StructField("name", StringType(), True),
| ... StructField("age", IntegerType(), True)])
| >>> df3 = spark.createDataFrame(rdd, schema)
| >>> df3.collect()
| [Row(name='Alice', age=1)]
|
| >>> spark.createDataFrame(df.toPandas()).collect() # doctest: +SKIP
| [Row(name='Alice', age=1)]
| >>> spark.createDataFrame(pandas.DataFrame([[1, 2]])).collect() # doctest: +SKIP
| [Row(0=1, 1=2)]
|
| >>> spark.createDataFrame(rdd, "a: string, b: int").collect()
| [Row(a='Alice', b=1)]
| >>> rdd = rdd.map(lambda row: row[1])
| >>> spark.createDataFrame(rdd, "int").collect()
| [Row(value=1)]
| >>> spark.createDataFrame(rdd, "boolean").collect() # doctest: +IGNORE_EXCEPTION_DETAIL
| Traceback (most recent call last):
| ...
| Py4JJavaError: ...
|
| newSession(self)
| Returns a new :class:`SparkSession` as new session, that has separate SQLConf,
| registered temporary views and UDFs, but shared :class:`SparkContext` and
| table cache.
|
| .. versionadded:: 2.0
|
| range(self, start, end=None, step=1, numPartitions=None)
| Create a :class:`DataFrame` with single :class:`pyspark.sql.types.LongType` column named
| ``id``, containing elements in a range from ``start`` to ``end`` (exclusive) with
| step value ``step``.
|
| .. versionadded:: 2.0.0
|
| Parameters
| ----------
| start : int
| the start value
| end : int, optional
| the end value (exclusive)
| step : int, optional
| the incremental step (default: 1)
| numPartitions : int, optional
| the number of partitions of the DataFrame
|
| Returns
| -------
| :class:`DataFrame`
|
| Examples
| --------
| >>> spark.range(1, 7, 2).collect()
| [Row(id=1), Row(id=3), Row(id=5)]
|
| If only one argument is specified, it will be used as the end value.
|
| >>> spark.range(3).collect()
| [Row(id=0), Row(id=1), Row(id=2)]
|
| sql(self, sqlQuery)
| Returns a :class:`DataFrame` representing the result of the given query.
|
| .. versionadded:: 2.0.0
|
| Returns
| -------
| :class:`DataFrame`
|
| Examples
| --------
| >>> df.createOrReplaceTempView("table1")
| >>> df2 = spark.sql("SELECT field1 AS f1, field2 as f2 from table1")
| >>> df2.collect()
| [Row(f1=1, f2='row1'), Row(f1=2, f2='row2'), Row(f1=3, f2='row3')]
|
| stop(self)
| Stop the underlying :class:`SparkContext`.
|
| .. versionadded:: 2.0
|
| table(self, tableName)
| Returns the specified table as a :class:`DataFrame`.
|
| .. versionadded:: 2.0.0
|
| Returns
| -------
| :class:`DataFrame`
|
| Examples
| --------
| >>> df.createOrReplaceTempView("table1")
| >>> df2 = spark.table("table1")
| >>> sorted(df.collect()) == sorted(df2.collect())
| True
|
| ----------------------------------------------------------------------
| Class methods defined here:
|
| getActiveSession() from builtins.type
| Returns the active :class:`SparkSession` for the current thread, returned by the builder
|
| .. versionadded:: 3.0.0
|
| Returns
| -------
| :class:`SparkSession`
| Spark session if an active session exists for the current thread
|
| Examples
| --------
| >>> s = SparkSession.getActiveSession()
| >>> l = [('Alice', 1)]
| >>> rdd = s.sparkContext.parallelize(l)
| >>> df = s.createDataFrame(rdd, ['name', 'age'])
| >>> df.select("age").collect()
| [Row(age=1)]
|
| ----------------------------------------------------------------------
| Readonly properties defined here:
|
| catalog
| Interface through which the user may create, drop, alter or query underlying
| databases, tables, functions, etc.
|
| .. versionadded:: 2.0.0
|
| Returns
| -------
| :class:`Catalog`
|
| conf
| Runtime configuration interface for Spark.
|
| This is the interface through which the user can get and set all Spark and Hadoop
| configurations that are relevant to Spark SQL. When getting the value of a config,
| this defaults to the value set in the underlying :class:`SparkContext`, if any.
|
| Returns
| -------
| :class:`pyspark.sql.conf.RuntimeConfig`
|
| .. versionadded:: 2.0
|
| read
| Returns a :class:`DataFrameReader` that can be used to read data
| in as a :class:`DataFrame`.
|
| .. versionadded:: 2.0.0
|
| Returns
| -------
| :class:`DataFrameReader`
|
| readStream
| Returns a :class:`DataStreamReader` that can be used to read data streams
| as a streaming :class:`DataFrame`.
|
| .. versionadded:: 2.0.0
|
| Notes
| -----
| This API is evolving.
|
| Returns
| -------
| :class:`DataStreamReader`
|
| sparkContext
| Returns the underlying :class:`SparkContext`.
|
| .. versionadded:: 2.0
|
| streams
| Returns a :class:`StreamingQueryManager` that allows managing all the
| :class:`StreamingQuery` instances active on `this` context.
|
| .. versionadded:: 2.0.0
|
| Notes
| -----
| This API is evolving.
|
| Returns
| -------
| :class:`StreamingQueryManager`
|
| udf
| Returns a :class:`UDFRegistration` for UDF registration.
|
| .. versionadded:: 2.0.0
|
| Returns
| -------
| :class:`UDFRegistration`
|
| version
| The version of Spark on which this application is running.
|
| .. versionadded:: 2.0
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| Builder = <class 'pyspark.sql.session.SparkSession.Builder'>
| Builder for :class:`SparkSession`.
|
|
| builder = <pyspark.sql.session.SparkSession.Builder object>
|
| ----------------------------------------------------------------------
| Data descriptors inherited from pyspark.sql.pandas.conversion.SparkConversionMixin:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
###Markdown
Unzip the scores file, if it was not done already
###Code
from os import path
scores_zip = path.join("data", "scores.zip")
scores_csv = path.join("data", "scores.csv")
%set_env SCORES_ZIP=$scores_zip
%set_env SCORES_CSV=$scores_csv
%%bash
command -v unzip >/dev/null 2>&1 || { echo >&2 "unzip command is not installed. Aborting."; exit 1; }
[[ -f "$SCORES_CSV" ]] && { echo "file $SCORES_CSV already exists. Skipping."; exit 0; }
[[ -f "$SCORES_ZIP" ]] || { echo "file $SCORES_ZIP does not exist. Aborting."; exit 1; }
echo "Unzip file $SCORES_ZIP"
unzip "$SCORES_ZIP" -d data
! head "$SCORES_CSV"
###Output
"id_1","id_2","cmp_fname_c1","cmp_fname_c2","cmp_lname_c1","cmp_lname_c2","cmp_sex","cmp_bd","cmp_bm","cmp_by","cmp_plz","is_match"
37291,53113,0.833333333333333,?,1,?,1,1,1,1,0,TRUE
39086,47614,1,?,1,?,1,1,1,1,1,TRUE
70031,70237,1,?,1,?,1,1,1,1,1,TRUE
84795,97439,1,?,1,?,1,1,1,1,1,TRUE
36950,42116,1,?,1,1,1,1,1,1,1,TRUE
42413,48491,1,?,1,?,1,1,1,1,1,TRUE
25965,64753,1,?,1,?,1,1,1,1,1,TRUE
49451,90407,1,?,1,?,1,1,1,1,0,TRUE
39932,40902,1,?,1,?,1,1,1,1,1,TRUE
###Markdown
Loading the Scores CSV file into a DataFrame

We are going to use the Reader API.
###Code
help(spark.read)
help(spark.read.csv)
scores = spark.read.csv(scores_csv)
scores
help(scores.show)
###Output
Help on method show in module pyspark.sql.dataframe:
show(n=20, truncate=True, vertical=False) method of pyspark.sql.dataframe.DataFrame instance
Prints the first ``n`` rows to the console.
.. versionadded:: 1.3.0
Parameters
----------
n : int, optional
Number of rows to show.
truncate : bool or int, optional
If set to ``True``, truncate strings longer than 20 chars by default.
If set to a number greater than one, truncates long strings to length ``truncate``
and align cells right.
vertical : bool, optional
If set to ``True``, print output rows vertically (one line
per column value).
Examples
--------
>>> df
DataFrame[age: int, name: string]
>>> df.show()
+---+-----+
|age| name|
+---+-----+
| 2|Alice|
| 5| Bob|
+---+-----+
>>> df.show(truncate=3)
+---+----+
|age|name|
+---+----+
| 2| Ali|
| 5| Bob|
+---+----+
>>> df.show(vertical=True)
-RECORD 0-----
age | 2
name | Alice
-RECORD 1-----
age | 5
name | Bob
###Markdown
We can look at the head of the DataFrame by calling the `show` method: `scores.show()`

**Can anyone spot what's wrong with the above data?**
- Question marks
- Column names
- `Float` and `Int` in the same column

Let's check the schema of our DataFrame.
###Code
help(scores.printSchema)
scores.printSchema()
###Output
root
|-- _c0: string (nullable = true)
|-- _c1: string (nullable = true)
|-- _c2: string (nullable = true)
|-- _c3: string (nullable = true)
|-- _c4: string (nullable = true)
|-- _c5: string (nullable = true)
|-- _c6: string (nullable = true)
|-- _c7: string (nullable = true)
|-- _c8: string (nullable = true)
|-- _c9: string (nullable = true)
|-- _c10: string (nullable = true)
|-- _c11: string (nullable = true)
###Markdown
**Why is everything a `String`?**

Managing Schema and Null Values
###Code
scores_df = (
spark.read
.option("header", "true")
.option("nullValue", "?")
.option("inferSchema", "true")
.csv(scores_csv)
)
scores_df.printSchema()
scores_df.show(5)
###Output
+-----+-----+-----------------+------------+------------+------------+-------+------+------+------+-------+--------+
| id_1| id_2| cmp_fname_c1|cmp_fname_c2|cmp_lname_c1|cmp_lname_c2|cmp_sex|cmp_bd|cmp_bm|cmp_by|cmp_plz|is_match|
+-----+-----+-----------------+------------+------------+------------+-------+------+------+------+-------+--------+
|37291|53113|0.833333333333333| null| 1.0| null| 1| 1| 1| 1| 0| true|
|39086|47614| 1.0| null| 1.0| null| 1| 1| 1| 1| 1| true|
|70031|70237| 1.0| null| 1.0| null| 1| 1| 1| 1| 1| true|
|84795|97439| 1.0| null| 1.0| null| 1| 1| 1| 1| 1| true|
|36950|42116| 1.0| null| 1.0| 1.0| 1| 1| 1| 1| 1| true|
+-----+-----+-----------------+------------+------------+------------+-------+------+------+------+-------+--------+
only showing top 5 rows
###Markdown
Transformations and Actions

Creating a DataFrame does not cause any distributed computation in the cluster. **A DataFrame is a data set representing an intermediate step in a computation.** For operating on data (in a distributed manner), we have two types of operations: **transformations** and **actions**:
- Transformations: lazy evaluation. They are not computed immediately; instead they are recorded as a **lineage** used for query plan optimization.
- Actions: distributed computation occurs only after invoking an action.
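As a small illustration (using the `scores_df` DataFrame created above), the `filter` call below is a transformation that only records a step in the lineage, while the `count` call is an action that triggers the actual computation:

```python
# filter() is a transformation: nothing is executed yet, Spark only adds
# this step to the DataFrame's lineage.
matches_only = scores_df.filter(scores_df["is_match"] == True)

# count() is an action: it forces Spark to read the file, apply the filter,
# and compute the result in the cluster.
matches_only.count()
```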
###Code
# how many?
scores_df.count()
###Output
_____no_output_____
###Markdown
We can use the `collect` action to return an `Array` with all the `Row` objects in our DataFrame.
###Code
scores_df.collect()
###Output
###Markdown
**The `Array` will reside in local memory!!**

Write to Disk

We are going to save the DataFrame into a different format: Parquet.
###Code
scores_df.write.format("parquet").save("data/scores-parquet")
! ls data/scores-parquet
scores_parquet = spark.read.parquet("data/scores-parquet")
scores_parquet.printSchema()
scores_parquet.show(5)
###Output
+-----+-----+-----------------+------------+------------+------------+-------+------+------+------+-------+--------+
| id_1| id_2| cmp_fname_c1|cmp_fname_c2|cmp_lname_c1|cmp_lname_c2|cmp_sex|cmp_bd|cmp_bm|cmp_by|cmp_plz|is_match|
+-----+-----+-----------------+------------+------------+------------+-------+------+------+------+-------+--------+
|37291|53113|0.833333333333333| null| 1.0| null| 1| 1| 1| 1| 0| true|
|39086|47614| 1.0| null| 1.0| null| 1| 1| 1| 1| 1| true|
|70031|70237| 1.0| null| 1.0| null| 1| 1| 1| 1| 1| true|
|84795|97439| 1.0| null| 1.0| null| 1| 1| 1| 1| 1| true|
|36950|42116| 1.0| null| 1.0| 1.0| 1| 1| 1| 1| 1| true|
+-----+-----+-----------------+------------+------------+------------+-------+------+------+------+-------+--------+
only showing top 5 rows
###Markdown
Analyzing Data

All good for now, but we don't load data for the sake of it; we do it because we want to run some analysis.
- The first two columns are integer IDs. They represent the patients that were matched in the record.
- The next nine columns are numeric values (int and double). They represent match scores on different fields, such as name, sex, birthday, and location.
- The last column is a boolean value indicating whether or not the pair of patient records represented by the line was a match.

**We could use this dataset to build a simple classifier that allows us to predict whether a record will be a match based on the values of the match scores for patient records** (a rough sketch follows this cell).

Caching

Each time we process data (e.g., calling the `collect` method), Spark re-opens the file, parses the rows, and then executes the requested action. It does not matter if we have filtered the data and created a smaller set of records. We can use the `cache` method to tell Spark to keep the DataFrame in memory.
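A minimal sketch of that classifier idea, assuming that summing a handful of the match-score columns and thresholding the total separates matches from non-matches reasonably well (the chosen columns and the 4.0 cutoff are illustrative assumptions, not tuned values):

```python
from pyspark.sql.functions import col

# Fill missing scores with 0 and add up a few of the comparison fields.
# The selected columns and the 4.0 threshold are assumptions for illustration.
candidate = (
    scores_df
    .fillna(0, subset=["cmp_lname_c1", "cmp_plz", "cmp_by", "cmp_bd", "cmp_bm"])
    .withColumn(
        "score",
        col("cmp_lname_c1") + col("cmp_plz") + col("cmp_by") + col("cmp_bd") + col("cmp_bm"),
    )
)

# Cross-tabulate the thresholded score against the ground-truth label.
candidate.withColumn("predicted_match", col("score") >= 4.0) \
    .groupBy("predicted_match", "is_match") \
    .count() \
    .show()
```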
###Code
help(scores_df.cache)
###Output
Help on method cache in module pyspark.sql.dataframe:
cache() method of pyspark.sql.dataframe.DataFrame instance
Persists the :class:`DataFrame` with the default storage level (`MEMORY_AND_DISK`).
.. versionadded:: 1.3.0
Notes
-----
The default storage level has changed to `MEMORY_AND_DISK` to match Scala in 2.0.
###Markdown
**Spark is in-memory only. Myth or misconception?**

Not strictly: when cached data does not fit in memory, Spark can "spill" partitions to disk, depending on the chosen storage level. Storage levels:
- `MEMORY_AND_DISK`
- `MEMORY`
- `MEMORY_SER`
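To choose a storage level explicitly we can call `persist` instead of `cache`; a small sketch (the `DISK_ONLY` level here is just for demonstration):

```python
from pyspark import StorageLevel

# cache() uses the default MEMORY_AND_DISK level; persist() lets us pick one.
scores_persisted = scores_df.persist(StorageLevel.DISK_ONLY)
scores_persisted.count()  # the first action materializes the persisted data

# Release the storage so later cells can cache() the DataFrame with the
# default level without a storage-level conflict.
scores_persisted.unpersist()
```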
###Code
scores_cached = scores_df.cache()
scores_cached.count()
scores_cached.take(10)
###Output
_____no_output_____
###Markdown
Query Plan
###Code
scores_cached.explain()
###Output
== Physical Plan ==
FileScan csv [id_1#56,id_2#57,cmp_fname_c1#58,cmp_fname_c2#59,cmp_lname_c1#60,cmp_lname_c2#61,cmp_sex#62,cmp_bd#63,cmp_bm#64,cmp_by#65,cmp_plz#66,is_match#67] Batched: false, DataFilters: [], Format: CSV, Location: InMemoryFileIndex(1 paths)[file:/Users/cmontemuio/repos/personal/bda-course/coursebook/modules/m2..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id_1:int,id_2:int,cmp_fname_c1:double,cmp_fname_c2:double,cmp_lname_c1:double,cmp_lname_c2...
###Markdown
GroupBy + OrderBy
###Code
from pyspark.sql.functions import col
scores_cached.groupBy("is_match").count().orderBy(col("count").desc()).show()
###Output
+--------+------+
|is_match| count|
+--------+------+
| false|572820|
| true| 2093|
+--------+------+
###Markdown
Aggregation Functions

In addition to `count`, we can also compute more complex aggregations like sums, mins, maxes, means, and standard deviations. How? We use the `agg` method of the DataFrame API.
###Code
from pyspark.sql.functions import avg, stddev
aggregated = scores_cached.agg(avg("cmp_sex"), stddev("cmp_sex"))
aggregated.explain()
aggregated.show()
###Output
+------------------+--------------------+
| avg(cmp_sex)|stddev_samp(cmp_sex)|
+------------------+--------------------+
|0.9550923357099248| 0.20710152240504734|
+------------------+--------------------+
###Markdown
SQL

Spark supports an ANSI 2003-compliant version of SQL or HiveQL.
###Code
scores_df.createOrReplaceTempView("scores")
# scores_cached.groupBy("is_match").count().orderBy(col("count").desc()).show()
spark.sql("""
SELECT is_match, COUNT(*) cnt
FROM scores
GROUP BY is_match
ORDER BY cnt DESC
""").show()
spark.sql("""
SELECT is_match, COUNT(*) cnt
FROM scores
GROUP BY is_match
ORDER BY cnt DESC
""").explain()
scores_cached.groupBy("is_match").count().orderBy(col("count").desc()).explain()
###Output
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- Sort [count#2925L DESC NULLS LAST], true, 0
+- Exchange rangepartitioning(count#2925L DESC NULLS LAST, 200), ENSURE_REQUIREMENTS, [id=#484]
+- HashAggregate(keys=[is_match#67], functions=[count(1)])
+- Exchange hashpartitioning(is_match#67, 200), ENSURE_REQUIREMENTS, [id=#481]
+- HashAggregate(keys=[is_match#67], functions=[partial_count(1)])
+- InMemoryTableScan [is_match#67]
+- InMemoryRelation [id_1#56, id_2#57, cmp_fname_c1#58, cmp_fname_c2#59, cmp_lname_c1#60, cmp_lname_c2#61, cmp_sex#62, cmp_bd#63, cmp_bm#64, cmp_by#65, cmp_plz#66, is_match#67], StorageLevel(disk, memory, deserialized, 1 replicas)
+- FileScan csv [id_1#56,id_2#57,cmp_fname_c1#58,cmp_fname_c2#59,cmp_lname_c1#60,cmp_lname_c2#61,cmp_sex#62,cmp_bd#63,cmp_bm#64,cmp_by#65,cmp_plz#66,is_match#67] Batched: false, DataFilters: [], Format: CSV, Location: InMemoryFileIndex(1 paths)[file:/Users/cmontemuio/repos/personal/bda-course/coursebook/modules/m2..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id_1:int,id_2:int,cmp_fname_c1:double,cmp_fname_c2:double,cmp_lname_c1:double,cmp_lname_c2...
###Markdown
Should I use Spark SQL or the DataFrame API?

It depends on the query. Pandas is my friend!

Required packages: `pandas` and `numpy`
- poetry add pandas numpy
- pip install pandas numpy
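One caveat when mixing Spark and pandas: `toPandas()` collects the whole DataFrame onto the driver, so for large tables it is safer to reduce the data in Spark first. A hedged sketch:

```python
# aggregate in Spark, then convert only the small result to pandas
match_counts_pd = scores_df.groupBy("is_match").count().toPandas()

# or take just a limited number of rows for quick local exploration
scores_sample_pd = scores_df.limit(1000).toPandas()
```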
###Code
scores_pandas = scores_df.toPandas()
scores_pandas.head()
scores_pandas.shape
###Output
_____no_output_____ |
code/btree/traversal.ipynb | ###Markdown
Binary Tree Traversal
###Code
# Definition for a binary tree node.
class TreeNode:
def __init__(self, x):
self.val = x
self.left = None
self.right = None
def treeNodeToString(root):
if not root:
return "[]"
output = ""
queue = [root]
current = 0
while current != len(queue):
node = queue[current]
current = current + 1
if not node:
output += "null, "
continue
output += str(node.val) + ", "
queue.append(node.left)
queue.append(node.right)
return "[" + output[:-2] + "]"
def stringToTreeNode(input):
input = input.strip()
input = input[1:-1]
if not input:
return None
inputValues = [s.strip() for s in input.split(',')]
root = TreeNode(int(inputValues[0]))
nodeQueue = [root]
front = 0
index = 1
while index < len(inputValues):
node = nodeQueue[front]
front = front + 1
item = inputValues[index]
index = index + 1
if item != "null":
leftNumber = int(item)
node.left = TreeNode(leftNumber)
nodeQueue.append(node.left)
if index >= len(inputValues):
break
item = inputValues[index]
index = index + 1
if item != "null":
rightNumber = int(item)
node.right = TreeNode(rightNumber)
nodeQueue.append(node.right)
return root
def prettyPrintTree(node, prefix="", isLeft=True):
if not node:
print("Empty Tree")
return
if node.right:
prettyPrintTree(node.right, prefix + ("│ " if isLeft else " "), False)
print(prefix + ("└── " if isLeft else "┌── ") + str(node.val))
if node.left:
prettyPrintTree(node.left, prefix + (" " if isLeft else "│ "), True)
node = stringToTreeNode("[1,2,3,4,5,null,7]")
prettyPrintTree(node)
###Output
│ ┌── 7
│ ┌── 3
└── 1
│ ┌── 5
└── 2
└── 4
###Markdown
Recursion

Traversing a tree recursively is the simplest approach, but it has a relatively high overhead. The core is just the two recursive calls plus the choice of when to visit the root node.

```python
def dfs(root: TreeNode):
    # base case
    if not root:
        return
    # visiting here gives preorder
    dfs(root.left)
    # visiting here gives inorder
    dfs(root.right)
    # visiting here gives postorder
```

In a recursive implementation the function calls itself, nesting level after level, and the operating system / virtual machine automatically uses a stack to save every function call for us; to write an iterative version we have to simulate that calling process ourselves. The recursive call sequence looks like this:

```python
dfs(root.left)
    dfs(root.left)
        dfs(root.left)
            # null, return
        # print the node
        dfs(root.right)
            dfs(root.left)
                dfs(root.left)
                ...
```

> Author: wang_ni_ma. Link: https://leetcode-cn.com/problems/binary-tree-inorder-traversal/solution/dong-hua-yan-shi-94-er-cha-shu-de-zhong-xu-bian-li/ Source: LeetCode (力扣). The copyright belongs to the author; contact the author for authorization before commercial reuse, and credit the source for non-commercial reuse.
###Code
# 下面是范例
from typing import List
class RecursingTraversal:
def preorder(root: TreeNode) -> List[int]:
res = []
def NLR(root: TreeNode):
# 基线条件
if not root:
return
res.append(root.val)
NLR(root.left)
NLR(root.right)
NLR(root)
return res
def inorder(root: TreeNode) -> List[int]:
res = []
def LNR(root: TreeNode):
# 基线条件
if not root:
return
LNR(root.left)
res.append(root.val)
LNR(root.right)
LNR(root)
return res
def postorder(root: TreeNode) -> List[int]:
res = []
def LRN(root: TreeNode):
# 基线条件
if not root:
return
LRN(root.left)
LRN(root.right)
res.append(root.val)
LRN(root)
return res
# test
def test(node, traversalMethod):
prettyPrintTree(node)
try:
print("前序遍历:", traversalMethod.preorder(node))
print("中序遍历:", traversalMethod.inorder(node))
print("后序遍历:", traversalMethod.postorder(node))
except Exception as err:
print("ERR>>>", err)
node = stringToTreeNode("[1,2,3,4,5,null,6]")
test(node, RecursingTraversal)
###Output
│ ┌── 6
│ ┌── 3
└── 1
│ ┌── 5
└── 2
└── 4
前序遍历: [1, 2, 4, 5, 3, 6]
中序遍历: [4, 2, 5, 1, 3, 6]
后序遍历: [4, 5, 2, 6, 3, 1]
###Markdown
Iteration

The recursive call process keeps going left; when it cannot go any further left, it prints the node and turns to the right, and the right subtree then repeats the same process. In the iterative implementation we can use a stack to simulate this calling process. Time complexity: $O(n)$. Space complexity: $O(h)$, where $h$ is the height of the tree.
###Code
class iterativeTraversal:
def preorder(root: TreeNode) -> List[int]:
res = []
# 输入是[]的情况
if not root:
return res
stack = []
while stack or root:
while root:
# 入栈的时候加入结果
res.append(root.val)
stack.append(root)
# 访问左子树
root = root.left
root = stack.pop()
# 访问右子树
root = root.right
return res
def inorder(root: TreeNode) -> List[int]:
res = []
# 输入是[]的情况
if not root:
return res
stk = []
while stk or root:
# 访问左子树
while root:
stk.append(root)
root = root.left
# 访问父节点
root = stk.pop()
# 访问父节点时加入结果
res.append(root.val)
# 访问右子树
root = root.right
return res
##
    # Postorder output order is: L R N
    # Preorder output order is: N L R, whose reverse is R L N
    # So if the preorder visiting order is changed to N R L, its reverse is L R N, i.e. postorder
##
def postorder2(root: TreeNode) -> List[int]:
res = []
# 输入是[]的情况
if not root:
return res
stk = []
while root or stk:
while root:
stk.append(root)
res.append(root.val)
#root = root.left
root = root.right
root = stk.pop()
#root = root.right
root = root.left
return res[::-1] # reverse
def postorder(root: TreeNode) -> List[int]:
res = []
# 输入是[]的情况
if not root:
return res
stk = []
prev = None # 用于记录上一个出栈打印的元素
while stk or root:
while root:
stk.append(root)
root = root.left
root = stk.pop()
if not root.right or root.right == prev: # 对应左右节点和已经访问右节点的父节点
res.append(root.val)
prev = root
root = None
else: # 未访问过右节点的父节点
stk.append(root)
root = root.right
return res
node = stringToTreeNode("[1,2,3,4,5,null,6]")
test(node, iterativeTraversal)
###Output
│ ┌── 6
│ ┌── 3
└── 1
│ ┌── 5
└── 2
└── 4
前序遍历: [1, 2, 4, 5, 3, 6]
中序遍历: [4, 2, 5, 1, 3, 6]
后序遍历: [4, 5, 2, 6, 3, 1]
###Markdown
Color-marking method

The official solution presents three ways to perform an inorder traversal of a tree:
- recursion
- iteration with the help of a stack
- Morris traversal

For depth-first traversals of a tree (preorder, inorder and postorder), recursion is the most intuitive and easiest to understand, but for efficiency reasons it is usually not recommended. The stack-based iterative method improves efficiency, but its nested loops are mind-bending and hard to follow: easy to understand when reading, hopeless when writing. Moreover, the loop structure differs a lot between the different orders (preorder, inorder, postorder), which adds to the memorization burden. So here is a "color-marking method" (a name I made up...), which is as efficient as the stack-based iteration yet as concise and readable as recursion, and, more importantly, lets you write exactly the same code for preorder, inorder and postorder traversal. The core idea is to mark the state of each node with a color: new nodes are white, visited nodes are gray. If the node encountered is white, mark it gray and push its right child, itself, and its left child onto the stack in that order. If the node encountered is gray, output its value. An inorder traversal implemented this way looks like:

```python
class Solution:
    def inorderTraversal(self, root: TreeNode) -> List[int]:
        WHITE, GRAY = 0, 1
        res = []
        stack = [(WHITE, root)]
        while stack:
            color, node = stack.pop()
            if node is None:
                continue
            if color == WHITE:
                stack.append((WHITE, node.right))
                stack.append((GRAY, node))
                stack.append((WHITE, node.left))
            else:
                res.append(node.val)
        return res
```

To implement preorder or postorder traversal, simply adjust the order in which the left and right children are pushed onto the stack.

> Author: hzhu212. Link: https://leetcode-cn.com/problems/binary-tree-inorder-traversal/solution/yan-se-biao-ji-fa-yi-chong-tong-yong-qie-jian-ming/ Source: LeetCode (力扣). The copyright belongs to the author; contact the author for authorization before commercial reuse, and credit the source for non-commercial reuse.
###Code
class ColorTraversal:
def preorder(root: TreeNode) -> List[int]:
WHITE, GRAY = 0, 1
res = []
stack = [(WHITE, root)]
while stack:
color, node = stack.pop()
if node is None: continue
if color == WHITE:
stack.append((WHITE, node.right))
stack.append((WHITE, node.left))
stack.append((GRAY, node))
else:
res.append(node.val)
return res
def inorder(root: TreeNode) -> List[int]:
WHITE, GRAY = 0, 1
res = []
stack = [(WHITE, root)]
while stack:
color, node = stack.pop()
if node is None: continue
if color == WHITE:
stack.append((WHITE, node.right))
stack.append((GRAY, node))
stack.append((WHITE, node.left))
else:
res.append(node.val)
return res
def postorder(root: TreeNode) -> List[int]:
WHITE, GRAY = 0, 1
res = []
stack = [(WHITE, root)]
while stack:
color, node = stack.pop()
if node is None: continue
if color == WHITE:
stack.append((GRAY, node))
stack.append((WHITE, node.right))
stack.append((WHITE, node.left))
else:
res.append(node.val)
return res
node = stringToTreeNode("[1,2,3,4,5,null,6]")
test(node, ColorTraversal)
###Output
│ ┌── 6
│ ┌── 3
└── 1
│ ┌── 5
└── 2
└── 4
前序遍历: [1, 2, 4, 5, 3, 6]
中序遍历: [4, 2, 5, 1, 3, 6]
后序遍历: [4, 5, 2, 6, 3, 1]
###Markdown
Morris traversal

Both the recursive and the iterative approaches use auxiliary space; the advantage of Morris traversal is that it uses no auxiliary space at all. The downside is that it changes the structure of the tree, forcibly turning the binary tree into something like a linked list. (The original post includes an illustration and an animated walkthrough of the process.) Time complexity: finding every predecessor node costs $O(n)$ in total, because a binary tree with $n$ nodes has $n-1$ edges and each edge is used at most twice (once to reach a node, once to find its predecessor), so the overall time complexity is $O(n)$. Space complexity: $O(1)$.

> Author: wang_ni_ma. Link: https://leetcode-cn.com/problems/binary-tree-inorder-traversal/solution/dong-hua-yan-shi-94-er-cha-shu-de-zhong-xu-bian-li/ Source: LeetCode (力扣). The copyright belongs to the author; contact the author for authorization before commercial reuse, and credit the source for non-commercial reuse.
###Code
class MorrisTraversal:
    def preorder(root: TreeNode) -> List[int]:
        # Destructive Morris-style preorder, in the same spirit as the inorder
        # version below: visit the node first, then splice its right subtree
        # onto the rightmost node of its left subtree before moving left.
        res = []
        while root:
            res.append(root.val)
            if root.left:
                pre = root.left
                while pre.right:
                    pre = pre.right
                # hang the right subtree below the rightmost node of the left subtree
                pre.right = root.right
                root = root.left
            else:
                root = root.right
        return res
    def inorder(root: TreeNode) -> List[int]:
        res = []
        pre = None
        while root:
            # If the left child exists, hang the current node (together with its
            # right subtree) below the rightmost node of the left subtree.
            if root.left:
                pre = root.left
                while pre.right:
                    pre = pre.right
                pre.right = root
                # point root at root.left and cut the old left link
                tmp = root
                root = root.left
                tmp.left = None
            # The left subtree is empty: output this node and move to the right.
            else:
                res.append(root.val)
                root = root.right
        return res
    def postorder(root: TreeNode) -> List[int]:
        # Morris postorder needs a dummy root plus reversed output of the paths
        # between nodes, which is considerably more involved; it is left
        # unimplemented here (see the discussion in the template cell below).
        pass
node = stringToTreeNode("[1,2,3,4,5,null,6]")
###Output
_____no_output_____
###Markdown
LevelOrder
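The implementations below use plain Python lists as queues (`del Q[0]` and `Q.pop(0)` are O(n) per pop); for comparison, a variant based on `collections.deque`, whose `popleft` is O(1), is sketched here:

```python
from collections import deque

def level_order_with_deque(root: TreeNode) -> List[int]:
    if root is None:
        return []
    res = []
    queue = deque([root])
    while queue:
        node = queue.popleft()  # O(1) pop from the left end
        res.append(node.val)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return res
```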
###Code
class LevelOrderTraversal:
def iterativeLevelOrderFlatten(root: TreeNode) -> List[int]:
if root is None:
return []
res = []
Q = []
Q.append(root)
while Q:
# 队首元素出队
head = Q[0]
del Q[0]
res.append(head.val)
# 如果出队元素有左右子,则加入队列
if head.left:
Q.append(head.left)
if head.right:
Q.append(head.right)
return res
def iterativeLevelOrderLayered(root: TreeNode) -> List[List[int]]:
if root is None:
return []
res = []
Q = []
Q.append(root)
while Q:
# 因为需要分层,所以一次性把一层的节点都出队
level = []
for _ in range(len(Q)):
head = Q.pop(0)
level.append(head.val)
# 如果出队元素有左右子,则加入队列
if head.left:
Q.append(head.left)
if head.right:
Q.append(head.right)
res.append(level)
return res
node = stringToTreeNode("[1,2,3,4,5,null,6]")
print(LevelOrderTraversal.iterativeLevelOrderFlatten(node))
print(LevelOrderTraversal.iterativeLevelOrderLayered(node))
###Output
[1, 2, 3, 4, 5, 6]
[[1], [2, 3], [4, 5, 6]]
###Markdown
Learning from the experts: a hand-drawn explanation of recursion

We should not vaguely memorize that "inorder traversal visits the left subtree first, then the root node, then the right subtree." That description is imprecise and easily misleading, because preorder, inorder and postorder traversal all reach the root node first, then recurse into its left subtree, then into its right subtree. So where is the difference? In inorder traversal, the "do something with root" step (processing the current node) is placed after its left subtree has been fully visited. If that something is printing the node value, you get the "left, root, right" output order. Preorder, inorder and postorder traversal are all based on DFS; as the original diagram shows, each node has three distinct resting stages, i.e. each node is passed three times:

1. before recursing into its left subtree;
2. after finishing its left subtree and before recursing into its right subtree;
3. after finishing its right subtree.

Placing "do something with root" at one of these three moments gives, respectively, preorder, inorder and postorder traversal. So their only difference is the point in time at which the node is processed.

With the concept clear, how do we use a stack to simulate the recursive traversal, specifically inorder? What does a DFS of a tree do first? As in the original figure, it recurses on node A, then on its left child B, then on B's left child D, pushing them onto the recursion stack one by one. Since it starts by repeatedly pushing left children, we can already pin down this part of the code:

```js
while (root) {
  stack.push(root);
  root = root.left;
}
```

DFS visits the root, then the left subtree, then the right subtree; now we need to visit (pull out) the right subtrees of the nodes in the stack, starting with the right subtree of the node at the bottom of the tree, i.e. the node at the top of the stack. So we pop the top of the stack one node at a time and, as each node is popped, push its right child onto the stack, which is equivalent to recursing on the right child. Because this is inorder traversal, the popped node's value has to be output before its right child is pushed. The newly pushed right child, in other words the right subtree, is then handled recursively, just like nodes A, B and D were: every subtree does the same thing, repeatedly pushing its left children onto the stack, then popping and bringing in right children. So the popping phase must also contain the snippet below, and you can see this piece of code appearing twice:

```js
while (root) {
  stack.push(root);
  root = root.left;
}
```

These two occurrences correspond to the two recursive calls in:

```js
inorder(root.left);
res.push(root.val);
inorder(root.right);
```

Final implementation:

```js
const inorderTraversal = (root) => {
  const res = [];
  const stack = [];
  while (root) { // first push every left child we can reach
    stack.push(root);
    root = root.left;
  }
  while (stack.length) {
    let node = stack.pop(); // pop the node on top of the stack
    res.push(node.val); // process its value before pushing the right subtree (inorder)
    node = node.right; // take its right subtree
    while (node) { // the right subtree exists, keep looping
      stack.push(node); // push the current root
      node = node.left; // keep pushing left children
    }
  }
  return res;
};
```

> Author: xiao_ben_zhu. Link: https://leetcode-cn.com/problems/binary-tree-inorder-traversal/solution/shou-hua-tu-jie-yong-zhan-mo-ni-zhong-xu-bian-li-z/ Source: LeetCode (力扣). The copyright belongs to the author; contact the author for authorization before commercial reuse, and credit the source for non-commercial reuse.
###Code
# 最强模板
# Definition for a binary tree node.
# class TreeNode:
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
# 递归
# 时间复杂度:O(n),n为节点数,访问每个节点恰好一次。
# 空间复杂度:空间复杂度:O(h),h为树的高度。最坏情况下需要空间O(n),平均情况为O(logn)
# 递归1:二叉树遍历最易理解和实现版本
class Solution:
def preorderTraversal(self, root: TreeNode) -> List[int]:
if not root:
return []
# 前序递归
return [root.val] + self.preorderTraversal(root.left) + self.preorderTraversal(root.right)
# # 中序递归
# return self.inorderTraversal(root.left) + [root.val] + self.inorderTraversal(root.right)
# # 后序递归
# return self.postorderTraversal(root.left) + self.postorderTraversal(root.right) + [root.val]
# 递归2:通用模板,可以适应不同的题目,添加参数、增加返回条件、修改进入递归条件、自定义返回值
class Solution:
def preorderTraversal(self, root: TreeNode) -> List[int]:
def dfs(cur):
if not cur:
return
# 前序递归
res.append(cur.val)
dfs(cur.left)
dfs(cur.right)
# # 中序递归
# dfs(cur.left)
# res.append(cur.val)
# dfs(cur.right)
# # 后序递归
# dfs(cur.left)
# dfs(cur.right)
# res.append(cur.val)
res = []
dfs(root)
return res
# 迭代
# 时间复杂度:O(n),n为节点数,访问每个节点恰好一次。
# 空间复杂度:O(h),h为树的高度。取决于树的结构,最坏情况存储整棵树,即O(n)
# 迭代1:前序遍历最常用模板(后序同样可以用)
class Solution:
def preorderTraversal(self, root: TreeNode) -> List[int]:
if not root:
return []
res = []
stack = [root]
# # 前序迭代模板:最常用的二叉树DFS迭代遍历模板
while stack:
cur = stack.pop()
res.append(cur.val)
if cur.right:
stack.append(cur.right)
if cur.left:
stack.append(cur.left)
return res
# # 后序迭代,相同模板:将前序迭代进栈顺序稍作修改,最后得到的结果反转
# while stack:
# cur = stack.pop()
# if cur.left:
# stack.append(cur.left)
# if cur.right:
# stack.append(cur.right)
# res.append(cur.val)
# return res[::-1]
# 迭代1:层序遍历最常用模板
class Solution:
def levelOrder(self, root: TreeNode) -> List[List[int]]:
if not root:
return []
cur, res = [root], []
while cur:
lay, layval = [], []
for node in cur:
layval.append(node.val)
if node.left: lay.append(node.left)
if node.right: lay.append(node.right)
cur = lay
res.append(layval)
return res
# 迭代2:前、中、后序遍历通用模板(只需一个栈的空间)
class Solution:
def inorderTraversal(self, root: TreeNode) -> List[int]:
res = []
stack = []
cur = root
# 中序,模板:先用指针找到每颗子树的最左下角,然后进行进出栈操作
while stack or cur:
while cur:
stack.append(cur)
cur = cur.left
cur = stack.pop()
res.append(cur.val)
cur = cur.right
return res
# # 前序,相同模板
# while stack or cur:
# while cur:
# res.append(cur.val)
# stack.append(cur)
# cur = cur.left
# cur = stack.pop()
# cur = cur.right
# return res
# # 后序,相同模板
# while stack or cur:
# while cur:
# res.append(cur.val)
# stack.append(cur)
# cur = cur.right
# cur = stack.pop()
# cur = cur.left
# return res[::-1]
# 迭代3:标记法迭代(需要双倍的空间来存储访问状态):
# 前、中、后、层序通用模板,只需改变进栈顺序或即可实现前后中序遍历,
# 而层序遍历则使用队列先进先出。0表示当前未访问,1表示已访问。
class Solution:
def preorderTraversal(self, root: TreeNode) -> List[int]:
res = []
stack = [(0, root)]
while stack:
flag, cur = stack.pop()
if not cur: continue
if flag == 0:
# 前序,标记法
stack.append((0, cur.right))
stack.append((0, cur.left))
stack.append((1, cur))
# # 后序,标记法
# stack.append((1, cur))
# stack.append((0, cur.right))
# stack.append((0, cur.left))
# # 中序,标记法
# stack.append((0, cur.right))
# stack.append((1, cur))
# stack.append((0, cur.left))
else:
res.append(cur.val)
return res
# # 层序,标记法
# res = []
# queue = [(0, root)]
# while queue:
# flag, cur = queue.pop(0) # 注意是队列,先进先出
# if not cur: continue
# if flag == 0:
# 层序遍历这三个的顺序无所谓,因为是队列,只弹出队首元素
# queue.append((1, cur))
# queue.append((0, cur.left))
# queue.append((0, cur.right))
# else:
# res.append(cur.val)
# return res
# 莫里斯遍历
# 时间复杂度:O(n),n为节点数,看似超过O(n),有的节点可能要访问两次,实际分析还是O(n),具体参考大佬博客的分析。
# 空间复杂度:O(1),如果在遍历过程中就输出节点值,则只需常数空间就能得到中序遍历结果,空间只需两个指针。
# 如果将结果储存最后输出,则空间复杂度还是O(n)。
# PS:莫里斯遍历实际上是在原有二叉树的结构基础上,构造了线索二叉树,
# 线索二叉树定义为:原本为空的右子节点指向了中序遍历顺序之后的那个节点,把所有原本为空的左子节点都指向了中序遍历之前的那个节点
# emmmm,好像大学教材学过,还考过
# 此处只给出中序遍历,前序遍历只需修改输出顺序即可
# 而后序遍历,由于遍历是从根开始的,而线索二叉树是将为空的左右子节点连接到相应的顺序上,使其能够按照相应准则输出
# 但是后序遍历的根节点却已经没有额外的空间来标记自己下一个应该访问的节点,
# 所以这里需要建立一个临时节点dump,令其左孩子是root。并且还需要一个子过程,就是倒序输出某两个节点之间路径上的各个节点。
# 具体参考大佬博客
# 莫里斯遍历,借助线索二叉树中序遍历(附前序遍历)
class Solution:
def inorderTraversal(self, root: TreeNode) -> List[int]:
res = []
# cur = pre = TreeNode(None)
cur = root
while cur:
if not cur.left:
res.append(cur.val)
# print(cur.val)
cur = cur.right
else:
pre = cur.left
while pre.right and pre.right != cur:
pre = pre.right
if not pre.right:
# print(cur.val) 这里是前序遍历的代码,前序与中序的唯一差别,只是输出顺序不同
pre.right = cur
cur = cur.left
else:
pre.right = None
res.append(cur.val)
# print(cur.val)
cur = cur.right
return res
# N叉树遍历
# 时间复杂度:时间复杂度:O(M),其中 M 是 N 叉树中的节点个数。每个节点只会入栈和出栈各一次。
# 空间复杂度:O(M)。在最坏的情况下,这棵 N 叉树只有 2 层,所有第 2 层的节点都是根节点的孩子。
# 将根节点推出栈后,需要将这些节点都放入栈,共有 M−1个节点,因此栈的大小为 O(M)。
"""
# Definition for a Node.
class Node:
def __init__(self, val=None, children=None):
self.val = val
self.children = children
"""
# N叉树简洁递归
class Solution:
def preorder(self, root: 'Node') -> List[int]:
if not root: return []
res = [root.val]
for node in root.children:
res.extend(self.preorder(node))
return res
# N叉树通用递归模板
class Solution:
def preorder(self, root: 'Node') -> List[int]:
res = []
def helper(root):
if not root:
return
res.append(root.val)
for child in root.children:
helper(child)
helper(root)
return res
# N叉树迭代方法
class Solution:
def preorder(self, root: 'Node') -> List[int]:
if not root:
return []
s = [root]
# s.append(root)
res = []
while s:
node = s.pop()
res.append(node.val)
# for child in node.children[::-1]:
# s.append(child)
s.extend(node.children[::-1])
return res
# 作者:821218213
# 链接:https://leetcode-cn.com/problems/binary-tree-inorder-traversal/solution/python3-er-cha-shu-suo-you-bian-li-mo-ban-ji-zhi-s/
# 来源:力扣(LeetCode)
# 著作权归作者所有。商业转载请联系作者获得授权,非商业转载请注明出处。
###Output
_____no_output_____ |
.ipynb_checkpoints/Programming for Data Analytics Assignment 1-checkpoint.ipynb | ###Markdown
PROGRAMMING FOR DATA ANALYTICS ASSIGNMENT 1: THE *NUMPY.RANDOM* PYTHON PACKAGE

BACKGROUND The *numpy.random* Python package is a module within the numpy library which generates arrays of sample values from many kinds of probability distributions. *Ref* [Python for Data Analysis, Wes McKinney, O'Reilly, 2018]

SIMPLE RANDOM DATA FUNCTIONS The *Simple Random Data* functions generate random numbers of different kinds: arrays of various shapes, normally distributed arrays, and ranges of floats, with parameters controlling the range, dimensions and probabilities of the returned values. **The following lines of code demonstrate how the *random.rand* function generates random numbers between 0 and 1, and how this data can then be manipulated, plotted and visualised.**
###Code
#numpy.random.rand(d0, d1, ..., dn) function example
#returns array of random floats between 0 and 1 of shape (d0 to dn)
import numpy as np #import numpy package
import matplotlib.pyplot as plt
import scipy as sc
from scipy import stats
import math as mt
import pandas as pd
%matplotlib inline
R1 = np.random.rand(100,1) #return 100 x 1 random array of floats between 0 and (not including) 1
x = np.arange(100) # create an array of 0 to 99 to plot the random numbers against in a scatter plot
plt.scatter(x,R1)
plt.title("plot of 100 random floats ")
###Output
_____no_output_____
###Markdown
**Plot of the random array against its square (just for fun)**
###Code
R1 = np.random.rand(100,1) # return a 100x1 array of random floats in [0, 1)
R2 = R1*R1 #create an array of R1 Squared
DF = np.hstack((R1,R2)) # stack R1 and R2 side by side into a 100x2 array
plt.scatter(DF[:,0],DF[:,1]) #Plot of R1 against its square:R2
plt.title("Random.rand function plotted vs its square")
###Output
_____no_output_____
###Markdown
**Some summary statistics of our random numbers**
###Code
z = np.array(R1)
print('Minimum =', z.min())
print('Maximum =', z.max())
print('Mean = ', z.mean())
print('Variance =', z.var())
###Output
Minimum = 0.0006580253741128583
Maximum = 0.9856140775033527
Mean = 0.4759549015120975
Variance = 0.09663537522484028
###Markdown
PERMUTATIONS FUNCTIONS The *Permutations* functions, *shuffle* and *permutation*, both rearrange data at random: the *shuffle* function changes the order of a given set of inputs in place, while the *permutation* function returns a randomly ordered copy of its input (or a random permutation of the integers 0 to n-1 when given an input length). **The following code creates a lotto draw using the *shuffle* function**
###Code
#Shuffle Function
i = list(range(1, 43)) #create a list of 42 numbers
x = np.array(i) #convert to a numpy array
np.random.shuffle(x) #shuffle the numbers
x
draw = x[0:6] #output first 6 random numbers
'The lotto numbers are', draw # display the lotto numbers
###Output
_____no_output_____
###Markdown
**Example of the use of the *permutation* function to create a random password**
###Code
#Permutation Function
#A small example of how the permutation function can be used to generate a random password
Letters = np.array(['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'])
Order = np.random.permutation(Letters)
print('Password is =', Order[1:7])
###Output
Password is = ['O' 'T' 'Q' 'H' 'L' 'D']
###Markdown
The above examples of the *shuffle* and *permutation* functions show practical uses for these functions. In particular, there would be scope to use them as part of larger programs for encryption and password management.

DISTRIBUTIONS FUNCTIONS

The Poisson Distribution The *numpy.random.poisson* function draws samples from a Poisson distribution, which can be described as the probability of a given number of independent events occurring in a fixed interval when the average rate of events is known. An example would be the number of decays per second of a radioactive source where the average number of decays per unit time is known.

*Definition*: If the mean number of counts in the interval is $\lambda > 0$, the random variable X that equals the number of counts (events) in the interval has a **Poisson Distribution** with parameter $\lambda$, and the probability mass function of x is $$ f(x) = \frac{\mathrm{e}^{-\lambda} \lambda^x}{x!}, \quad x = 0,1,2,\ldots $$ where *e* is the base of the natural logarithm, 2.71828...

*References* [https://en.wikipedia.org/wiki/Poisson_distribution], [Applied Statistics and Probability for Engineers, Montgomery, D.C.; Runger, G.C.]
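As a quick sanity check (a sketch assuming `scipy` is available, as imported at the top of the notebook), the empirical frequency of a given count can be compared with the theoretical probability mass function:

```python
from scipy.stats import poisson

samples = np.random.poisson(5, 10000)

# empirical probability of observing exactly 5 events vs. the theoretical pmf value
empirical_p5 = np.mean(samples == 5)
theoretical_p5 = poisson.pmf(5, mu=5)
print(empirical_p5, theoretical_p5)
```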
###Code
# The poisson Distribution
import seaborn as sns
P = np.random.poisson(5, 10000) # draw 10,000 samples from a Poisson distribution with lam = 5
ax = sns.distplot(P, color = 'y',) #The 'distplot' seaborn function plots a univariate distribution of observations and combines a hist plot with a probability density (seaborn.kdeplot) plot
plt.title('Poisson Distribution')
plt.xlabel('x variables')
plt.ylabel('f(x)')
###Output
C:\Users\hugh_\Anaconda3\lib\site-packages\scipy\stats\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
**It can be seen from the above plot that the probability of events (f(x)) is highest as we approach the point where the actual number of events (5) equals the average number of events in the above example. In this way the distribution provides both a numerical AND graphical way to understand the probabilities of events occurring, given their known average per discrete interval.** Example of a Poisson distribution of distance vs gamma ray counts *ref* [https://lexieslogofphysics.wordpress.com/2013/05/08/poisson-statistics-and-radioactive-decay/]

The Exponential Distribution The *numpy.random.exponential* function draws samples from an exponential distribution. The *Exponential Distribution* is related to the *Poisson Distribution*: the Poisson distribution measures the probability of a number of occurrences within a fixed interval, whereas the exponential distribution describes the interval (waiting time) between those occurrences. Examples of the use of the exponential distribution include Reliability Engineering, where it is used to model the behavior of units that have a constant failure rate (units that do not degrade with time or wear out). *Ref* [http://reliawiki.org/index.php/The_Exponential_Distribution]

*Definition*. The random variable X that equals the distance between successive counts of a Poisson process with mean $\lambda > 0$ has an **exponential distribution** with parameter $\lambda$. The probability density function of X is: $$f(x) = \lambda e^{-\lambda x}, \quad 0 \le x < \infty$$

**Use of the Exponential Function in Reliability Engineering** The graphic of an exponential reliability function referenced above shows how the reliability function (y axis) decreases with time. The $\Upsilon$ (location) parameter used in Reliability Engineering represents a failure-free period: a unit is not expected to fail before T = $\Upsilon$ (a machine is not likely to fail at T = 0). This can be seen in the black line, where the $\Upsilon$ value is 2500. *Ref* [http://reliawiki.org/index.php/The_Exponential_Distribution]
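One thing worth flagging about the NumPy call itself: `np.random.exponential(scale, size)` is parameterised by the *scale* $\beta = 1/\lambda$, not by the rate $\lambda$. A small sketch:

```python
rate = 0.2              # lambda, events per unit time
scale = 1 / rate        # numpy expects the scale, i.e. the mean waiting time
waits = np.random.exponential(scale, 10000)
print(waits.mean())     # should be close to 1/lambda = 5
```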
###Code
#The Exponential Distribution
Q = np.random.exponential(5,10000)
ax = sns.distplot(Q, color = 'g',) #The 'distplot' seaborn function plots a univariate distribution of observations and combines a hist plot with a probability density (seaborn.kdeplot) plot
plt.title('Exponential Distribution')
plt.xlabel('x variables')
plt.ylabel('f(x)')
###Output
C:\Users\hugh_\Anaconda3\lib\site-packages\scipy\stats\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
The Log Normal Distribution The *numpy.random.lognormal(mean=0.0, sigma=1.0, size=None)* function draws samples from a log-normal distribution, i.e. the distribution of a variable whose logarithm is normally distributed, with a specified mean, sd and size. The mean and sd refer to the underlying normal distribution from which the log-normal samples are derived. The probability density function of the log normal distribution is defined as: $$p(x) = \frac{1}{\sigma x \sqrt{2\pi}} e^{-\frac{(\ln(x)-\mu)^2}{2\sigma^2}}$$ where: $\mu$, $\sigma$ = mean, sd. *Ref* [https://en.wikipedia.org/wiki/Log-normal_distribution#Probability_density_function]

The log-normal distribution also has relevance to Reliability Engineering. The lognormal distribution is commonly used to model the lives of units whose failure modes are of a fatigue-stress nature. Since this includes most, if not all, mechanical systems, the log normal distribution can have widespread application. *Ref* [Life Data Analysis Reference, http://reliawiki.org/index.php/Introduction_to_Life_Data_Analysis]
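A quick way to see the connection with the normal distribution (a hedged sketch): taking the natural log of log-normal samples should give approximately normal data with the same mean and sigma that were passed to the generator.

```python
L_samples = np.random.lognormal(3, 0.5, 20000)
logged = np.log(L_samples)   # should look normal, with mean ~3 and sd ~0.5
print(logged.mean(), logged.std())
```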
###Code
L = np.random.lognormal(3, 0.5, 20000)
ax = sns.distplot(L, color = 'm',) #The 'distplot' seaborn function plots a log-normal distribution of observations and combines a hist plot with a probability density (seaborn.kdeplot) plot
plt.title('Log Normal Distribution (pdf)')
plt.xlabel('x variables')
plt.ylabel('p(x)')
L = np.random.lognormal(3, 0.5, 20000) # log normal distribution of mean 3, sd 0.5 and range 20000
#attempt to create a continuous distribution function of lognormal data
CY = np.cumsum(L) # cumulative sum of L: credit https://stackoverflow.com/questions/9378420/how-to-plot-cdf-in-matplotlib-in-python
ax = sns.distplot(CY, color = 'b',) #The 'distplot' seaborn function plots a log-normal distribution of observations and combines a hist plot with a probability density (seaborn.kdeplot) plot
plt.title('Log Normal Distribution as a cdf')
plt.xlabel('x variables')
plt.ylabel('p(x)')
###Output
C:\Users\hugh_\Anaconda3\lib\site-packages\scipy\stats\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
**The above code is an attempt to plot the cumulative frequency of the log normal distribution. Similar cumulative logarithmic relationships can be seen in nature in examples such as the growth rate of bacteria, with a log(growth), lag(steady) and death phase.** The Normal Distribution The *numpy.random.normal(loc=0.0, scale=1.0, size=None)* function creates a normal distribution of random data. The normal distribution is one of the more commonly used distributions in statistics since it is representative of many common real world distributions such as test scores in exams and industrial processes. The normal distribution may also be referred to as the 'bell curve' or 'Gaussian distribution', the latter in honor of the 18th-century physicist and mathematician Karl Gauss, who used this distribution to analyze astronomical data. *Ref* [Statistics in a nutshell, S. Boslaugh; P.A. Watters, O'Reilly] The probability function for the Normal Distribution is: $$X \sim N(\mu, \sigma^2)$$ $$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp \bigg[-\frac{1}{2}\bigg( \frac{x-\mu}{\sigma}\bigg)^2 \bigg]$$ Where: $\mu$, $\sigma$ = mean, sd
###Code
M = np.random.normal(3, 0.5, 20000)# normal distribution of mean=3, sd=0.5 and range 20000
ax = sns.distplot(M, color = 'g',) #The 'distplot' seaborn function plots a normal distribution of observations and combines a hist plot with a probability density (seaborn.kdeplot) plot
plt.title('Normal Distribution')
plt.xlabel('x variables')
plt.ylabel('p(x)')
###Output
C:\Users\hugh_\Anaconda3\lib\site-packages\scipy\stats\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
**It can be seen in the above plot of the normal distribution that the probability density is highest at the mean and distributed symmetrically on either side of it. The 'spread' of the distribution depends on the standard deviation.**

Applications of the normal distribution can be seen in the Operational Excellence manufacturing model *Six Sigma*, which aims for processes whose specification limits lie at least 6 standard deviations from the mean, i.e. roughly 99.99966 % defect free (about 3.4 defects per million opportunities). *Ref* [https://en.wikipedia.org/wiki/Six_Sigma] The illustration below shows what occurs when a process mean drifts by 1.5$\sigma$ around the target. The lower and upper specification limits (LSL and USL) are at 6$\sigma$, so a shift of 1.5$\sigma$ will still keep the process in control.

**THE KOLMOGOROV-SMIRNOV TEST FOR NORMALITY** The following line of code applies the *Kolmogorov-Smirnov* test for normality to our original random normal distribution, M, above. Here the null hypothesis is that the data is normally distributed, so a *p* value > 0.05 means we cannot reject that hypothesis. '*D*' is the test statistic, i.e. the maximum difference between the empirical distribution of the test data and a normal distribution (low in our case, as expected). Since we created a random normal distribution, we expect the *p* value to be high. The Kolmogorov-Smirnov test for normality is an important tool, as the distribution of the data can affect further statistical analysis such as correlations, ANOVAs, etc. *ref* [https://www.spss-tutorials.com/spss-kolmogorov-smirnov-test-for-normality/test-statistic] The Kolmogorov-Smirnov test may also be used to test for other distributions, but is most commonly applied to normally distributed data sets. It is typically carried out after the data has been visualised graphically: graphical representation of the data provides clues about its distribution characteristics, e.g. symmetry, and helps decide which kind of transformations may be necessary.
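Besides the Kolmogorov-Smirnov test applied in the next cell, the Shapiro-Wilk test is another common normality test, often preferred for smaller samples. A hedged sketch using `scipy.stats`, which is already imported above:

```python
# Shapiro-Wilk is intended for modest sample sizes, so test a subsample of M
W, p_value = stats.shapiro(M[:500])
print('Shapiro-Wilk W = %6.3f  pvalue = %6.4f' % (W, p_value))
```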
###Code
#The Kolmogorov-Smirnov test used to test for, in the case a normal distribution
d, pval = stats.kstest((M-M.mean())/M.std(), 'norm') # determines the test statistic determination
print('KS-statistic D = %6.3f pvalue = %6.4f' % (d, pval)) #prints the KS-statistic and p value
###Output
KS-statistic D = 0.005 pvalue = 0.7534
###Markdown
**DATA TRANSFORMATIONS** After testing the data for normality, in the event that the data proves to be non-normal it may be possible to transform the data to make it more 'normal' in order to perform parametric statistics, which may rely on a normal dataset to be valid. Square root or log transformations are examples of these types of transformation. It must be noted, however, that insights gained from statistical techniques applied to transformed data must be interpreted in the context of the transformations that have been carried out. *Ref* [Statistics in a nutshell, Boslaugh, S; Watters, P.A., O'Reilly]

THE RANDOM BINOMIAL DISTRIBUTION The *numpy.random.binomial(n, p, size=None)* function draws samples from a binomial distribution. Wikipedia tells us that the binomial distribution represents the probability of a number of 'successes', each with probability *p*, in *n* trials of an experiment which can have two possible outcomes ('success' or 'failure'). *ref* [https://en.wikipedia.org/wiki/Binomial_distribution]

A number of assumptions need to be valid in order for an experiment to be considered binomial:
* The experiment has two possible outcomes (a random trial with two possible outcomes is known as a *Bernoulli experiment*)
* The trials in the experiment are independent
* The probability of a success in each trial is constant

Where *n* = number of trials and *x* is the number of successes, the probability mass function of x is: $$p(x) = \binom{n}{x} p^x (1-p)^{n-x}, \quad x = 0,1,2,\ldots,n$$ given *p*, with *n* = 1, 2, ... *Ref* [Applied Statistics and Probability for Engineers, Montgomery, D.C.; Runger, G.C.]

The binomial distribution can be used to describe many different types of real life data, such as coin flips ('heads' or 'tails') or exam results ('pass' or 'fail'), where one of two outcomes is possible. [Statistics in a nutshell, Boslaugh, S; Watters, P.A., O'Reilly]
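A quick numerical check of generated samples against the theoretical moments of the binomial distribution (mean $np$ and variance $np(1-p)$), using the same parameters as the cell below:

```python
n_trials, p_success = 20, 0.5
B_check = np.random.binomial(n_trials, p_success, 20000)

print('sample mean =', B_check.mean(), ' theoretical =', n_trials * p_success)
print('sample var  =', B_check.var(), ' theoretical =', n_trials * p_success * (1 - p_success))
```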
###Code
B = np.random.binomial(20, 0.5, 20000)# binomial distribution of 20000 trials, p=0.5 and number of successes = 20
sns.distplot(B, color = 'r', bins=50); #histogram and kde plot
plt.title('Binomial Distribution')
plt.xlabel('x variables')
plt.ylabel('p(x)')
###Output
C:\Users\hugh_\Anaconda3\lib\site-packages\scipy\stats\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
**In the above plot the distribution is symmetrical as p = 0.5. The distribution's symmetry becomes greater with an increased number of trials.** The code below outputs some summary statistics on our binomial distribution. Skewness and kurtosis in particular are useful measures to describe the nature of our binomial distribution; both are low in our data set, as the data set is highly symmetrical. Skewness is a measure of the lack of symmetry of a distribution, and kurtosis measures the heaviness of the tails of a distribution relative to the normal distribution: *Ref* [https://www.itl.nist.gov/div898/handbook/eda/section3/eda35b.htm]
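These two measures can also be computed directly (a sketch using `scipy.stats`, with the binomial sample `B` from the previous cell):

```python
print('skewness =', stats.skew(B))
print('excess kurtosis =', stats.kurtosis(B))  # approximately 0 for a normal distribution
```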
###Code
AR = np.array(B) #Creates a numpy array of the random binomial dataset
stats.describe(AR)#outputs summary statistics
###Output
_____no_output_____ |
presentation_bessel_demo.ipynb | ###Markdown
Testing the Bessel Basis
###Code
import dvr_1d
%matplotlib inline
###Output
_____no_output_____
###Markdown
With a Morse potential
###Code
d = dvr_1d.BesselDVR(npts=100, R=20, dim=3, lam=0)
d.morse_test(xmax=20., ymax=0.5)
###Output
Testing 1-D DVR with a Morse potential
The first 5 energies are:
[-1.75751018 -0.76051377 -0.21815647 -0.00674606 0.05559304]
###Markdown
With a Woods-Saxon potential
###Code
d = dvr_1d.BesselDVR(npts=100, R=10, dim=3, lam=0)
d.woods_saxon_test()
###Output
Testing 1-D DVR with a Woods-Saxon potential
The first 5 energies are:
[-43.55090359 -36.58134219 -29.25606017 -21.92813792 -14.94875837]
|
download_steam_banners.ipynb | ###Markdown
Download Steam Banners

Code inspired from https://github.com/woctezuma/download-steam-banners

Setting Mount Google Drive
###Code
!pip install Google-Colab-Transfer
import colab_transfer
colab_transfer.mount_google_drive()
###Output
Drive already mounted at /content/drive/; to attempt to forcibly remount, call drive.mount("/content/drive/", force_remount=True).
###Markdown
Install Python requirements
###Code
!pip install aiofiles aiohttp nest_asyncio steamspypi
!pip install gamedatacrunch
###Output
_____no_output_____
###Markdown
API Steam API
###Code
def get_steam_api_url():
steam_api_url = 'https://api.steampowered.com/ISteamApps/GetAppList/v2/'
return steam_api_url
import requests
def request_steam_catalog():
url = get_steam_api_url()
print('Downloading data from Steam API.')
response = requests.get(url)
steam_catalog = response.json()
return steam_catalog
def parse_steam_catalog(steam_catalog):
app_id_list = [
app['appid']
for app in steam_catalog['applist']['apps']
]
app_id_list = sorted(app_id_list)
print('#appIDs = {}'.format(len(app_id_list)))
return app_id_list
steam_catalog = request_steam_catalog()
steam_app_id_list = parse_steam_catalog(steam_catalog)
###Output
_____no_output_____
###Markdown
SteamSpy API
###Code
import steamspypi
def request_steamspy_catalog():
print('Downloading data from SteamSpy API.')
steamspy_catalog = steamspypi.load()
return steamspy_catalog
def parse_steamspy_catalog(steamspy_catalog):
app_id_list = list(steamspy_catalog.keys())
app_id_list = sorted(app_id_list, key=int)
print('#appIDs = {}'.format(len(app_id_list)))
return app_id_list
steamspy_catalog = request_steamspy_catalog()
steamspy_app_id_list = parse_steamspy_catalog(steamspy_catalog)
###Output
_____no_output_____
###Markdown
GameDataCrunch API

References:
- https://www.gamedatacrunch.com/
- https://github.com/woctezuma/gamedatacrunch
###Code
import gamedatacrunch as gdc
gamedatacrunch_catalog = gdc.load()
gamedatacrunch_app_id_list = gdc.load_app_ids()
###Output
_____no_output_____
###Markdown
Run

In Colab notebooks, the following error [can arise](https://stackoverflow.com/a/62608278) after `loop.run_until_complete()` is called.

> RuntimeError: This event loop is already running

To avoid this error, [the following piece of code](https://stackoverflow.com/a/56434301) seems necessary.
###Code
import nest_asyncio
nest_asyncio.apply()
###Output
_____no_output_____
###Markdown
Input
###Code
import steamspypi
def get_app_ids_file_name():
return 'app_ids.txt'
def create_app_id_list(app_id_list = None):
if app_id_list is None:
app_id_list = gamedatacrunch_app_id_list
with open(get_app_ids_file_name(), 'w') as f:
for app_id in sorted(app_id_list, key=int):
f.write(str(app_id) + '\n')
def get_app_ids(fname=None):
if fname is None:
fname = get_app_ids_file_name()
with open(fname, 'r') as f:
app_ids = [int(app_id.strip()) for app_id in f.readlines()]
return app_ids
###Output
_____no_output_____
###Markdown
Data source
###Code
def get_cdn_url():
# Historically:
cdn_url = 'https://steamcdn-a.akamaihd.net/steam/apps/'
# As of September 23, 2020:
cdn_url = 'https://cdn.cloudflare.steamstatic.com/steam/apps/'
return cdn_url
def get_banner_conventional_name(is_horizontal_banner=True):
if is_horizontal_banner:
# A choice of horizontal banners:
# NB: originally, I used 'header' for my experiments with Steam banners.
# banner_conventional_name = 'header' # 460x215 ; ratio 2.14
# banner_conventional_name = 'capsule_231x87' # 231x87 ; ratio 2.66
# banner_conventional_name = 'capsule_467x181' # 467x181 ; ratio 2.58
banner_conventional_name = 'capsule_616x353' # 616x353 ; ratio 1.75
# The following is a horizontal image with no logo.
# Caveat: this may not always exist! It is a recent addition to Steam!
# banner_conventional_name = 'library_hero' # 1920x620 ; ratio 3.10
else:
# A choice of vertical banners:
# Caveat: these may not always exist! It is a recent addition to Steam!
banner_conventional_name = 'library_600x900' # 300x450 ; ratio 0.67
# banner_conventional_name = 'library_600x900_2x' # 600x900 ; ratio 0.67
return banner_conventional_name
def get_logo_conventional_name():
# The following is a horizontal image with only the logo of the game.
# Caveat: this may not always exist! It is a recent addition to Steam!
logo_conventional_name = 'logo' # inconsistent image size and ratio
return logo_conventional_name
def get_file_extension(is_logo=False):
if is_logo:
# NB: the logo image is a PNG, whereas all the other banner images are JPG.
file_extension = '.png'
else:
file_extension = '.jpg'
return file_extension
def get_banner_url(app_id,
is_logo=False,
is_horizontal_banner=True):
if is_logo:
image_name = get_logo_conventional_name()
else:
image_name = get_banner_conventional_name(is_horizontal_banner)
file_extension = get_file_extension(is_logo)
return get_cdn_url() + str(app_id) + '/' + image_name + file_extension
###Output
_____no_output_____
###Markdown
Output
###Code
from pathlib import Path
def get_banner_folder(prefixe = 'original',
is_horizontal_banner=True):
if is_horizontal_banner:
prefixe += '_horizontal'
else:
prefixe += '_vertical'
banner_folder = 'data/'+prefixe+'_steam_banners/'
Path(banner_folder).mkdir(exist_ok=True)
return banner_folder
def get_banner_file_name(app_id,
prefixe='original',
is_horizontal_banner=True,
is_logo=False):
banner_folder_name = get_banner_folder(prefixe=prefixe,
is_horizontal_banner=is_horizontal_banner)
banner_file_name = banner_folder_name + str(app_id) + get_file_extension(is_logo)
return banner_file_name
###Output
_____no_output_____
###Markdown
Download Steam banners **Caveat**: there are about 30,000 banners to download!
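For a quick test run before committing to the full download, the appID list written to disk can be truncated (a hedged sketch reusing the helper functions defined above; skip the unconditional `create_app_id_list()` call in the next cell when testing this way):

```python
# write only the first 100 appIDs to the input file, then run the download loop as usual
create_app_id_list(list(gamedatacrunch_app_id_list)[:100])
print(len(get_app_ids()))
```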
###Code
import asyncio
import aiofiles
import aiohttp
async def main(is_logo=False,
is_horizontal_banner=True):
async with aiohttp.ClientSession() as session:
for app_id in sorted(get_app_ids()):
banner_file_name = Path(get_banner_file_name(app_id,
is_horizontal_banner=is_horizontal_banner,
is_logo=is_logo))
if banner_file_name.exists():
continue
banner_url = get_banner_url(app_id,
is_logo=is_logo,
is_horizontal_banner=is_horizontal_banner)
# Reference: https://stackoverflow.com/a/51745925
async with session.get(banner_url) as resp:
if resp.status == 200:
f = await aiofiles.open(banner_file_name, mode='wb')
await f.write(await resp.read())
await f.close()
print('Banner downloaded to {} for appID {}.'.format(banner_file_name, app_id))
else:
print('Banner for appID {} could not be downloaded.'.format(app_id))
is_logo = False
# For horizontal banners:
is_horizontal_banner=True
# For vertical banners:
# is_horizontal_banner=False
create_app_id_list()
loop = asyncio.get_event_loop()
loop.run_until_complete(main(is_logo=is_logo,
is_horizontal_banner=is_horizontal_banner))
###Output
_____no_output_____
###Markdown
Alternatively:
###Code
!pip install git+https://github.com/alcinos/imagedownloader.git > /dev/null
urls = [
get_banner_url(app_id,
is_logo=is_logo,
is_horizontal_banner=is_horizontal_banner)
for app_id in get_app_ids()
]
root_folder = '/content/'
my_store_path = root_folder + get_banner_folder(is_horizontal_banner=is_horizontal_banner)
def my_path_fn(store_path, url):
# caveat: do not modify the functions arguments. These are expected by imgdl!
app_id = url.split('/')[-2]
file_ext = '.jpg'
return Path(store_path, app_id + file_ext)
import imgdl
paths = imgdl.download(urls,
store_path = my_store_path,
path_fn = my_path_fn,
n_workers = 50)
###Output
_____no_output_____
###Markdown
Post-processing appIDs AppIDs for which a Steam banner was effectively downloaded:
###Code
import glob
def get_app_ids_with_steam_banners(is_horizontal_banner=True, is_logo=False):
image_filenames = Path(get_banner_folder(is_horizontal_banner=is_horizontal_banner)).glob('*' + get_file_extension(is_logo))
app_ids = [banner.name.strip(get_file_extension(is_logo)) for banner in image_filenames]
# There is an issue with duplicates, e.g. 'ABC (1).jpg' but only when running on Google Drive:
app_ids = [app_id for app_id in app_ids if ' (' not in app_id]
app_ids = [int(app_id) for app_id in app_ids]
return app_ids
###Output
_____no_output_____
###Markdown
Compare the number appIDs in SteamSpy database with the number of banners saved to disk
###Code
create_app_id_list()
app_ids = get_app_ids()
print('#appIDs in SteamSpy database = {}'.format(len(app_ids)))
app_ids_with_steam_banners = get_app_ids_with_steam_banners(is_horizontal_banner=is_horizontal_banner,
is_logo=is_logo)
print('#banners saved to disk = {}'.format(len(app_ids_with_steam_banners)))
###Output
#appIDs in SteamSpy database = 33821
#banners saved to disk = 14035
###Markdown
Omit appIDs for which a banner could not be found
###Code
def trim_app_id_list(is_horizontal_banner=True, is_logo=False):
app_ids = get_app_ids()
app_ids_with_steam_banners = get_app_ids_with_steam_banners(is_horizontal_banner=is_horizontal_banner,
is_logo=is_logo)
common_app_ids = set(app_ids).intersection(app_ids_with_steam_banners)
with open(get_app_ids_file_name(), 'w') as f:
for app_id in sorted(common_app_ids, key=int):
f.write(str(app_id) + '\n')
create_app_id_list()
trim_app_id_list(is_horizontal_banner=is_horizontal_banner,
is_logo=is_logo)
app_ids = get_app_ids()
print('#appIDs in SteamSpy database, and with a banner saved to disk = {}'.format(len(app_ids)))
###Output
#appIDs in SteamSpy database, and with a banner saved to disk = 14023
###Markdown
Archive the downloaded Steam banners to Google Drive

Before applying any post-processing, we archive the downloaded Steam banners to Google Drive.
###Code
import colab_transfer
colab_path = colab_transfer.get_path_to_home_of_local_machine()
drive_path = colab_transfer.get_path_to_home_of_google_drive()
###Output
_____no_output_____
###Markdown
Horizontal banners
###Code
input_file_name = 'original_horizontal_steam_banners.tar'
###Output
_____no_output_____
###Markdown
Save to Google Drive
###Code
!du -sh /content/data/original_horizontal_steam_banners/
!tar cvf original_horizontal_steam_banners.tar /content/data/original_horizontal_steam_banners/
!du -sh /content/original_horizontal_steam_banners.tar
colab_transfer.copy_file(
file_name=input_file_name,
source=colab_path,
destination=drive_path,
)
###Output
_____no_output_____
###Markdown
Load from Google Drive
###Code
colab_transfer.copy_file(
file_name=input_file_name,
source=drive_path,
destination=colab_path,
)
!mkdir -p /content/data/original_horizontal_steam_banners/
!tar xvf original_horizontal_steam_banners.tar -C /
###Output
_____no_output_____
###Markdown
Vertical banners
###Code
input_file_name = 'original_vertical_steam_banners.tar'
###Output
_____no_output_____
###Markdown
Save to Google Drive
###Code
!du -sh /content/data/original_vertical_steam_banners/
!tar cvf original_vertical_steam_banners.tar /content/data/original_vertical_steam_banners/
!du -sh /content/original_vertical_steam_banners.tar
colab_transfer.copy_file(
file_name=input_file_name,
source=colab_path,
destination=drive_path,
)
###Output
_____no_output_____
###Markdown
Load from Google Drive
###Code
colab_transfer.copy_file(
file_name=input_file_name,
source=drive_path,
destination=colab_path,
)
!mkdir -p /content/data/original_vertical_steam_banners/
!tar xvf original_vertical_steam_banners.tar -C /
###Output
_____no_output_____
###Markdown
Post-processing Steam banners
###Code
import os
os.chdir('/content/')
###Output
_____no_output_____
###Markdown
Create a folder where banners resized to square proportions will be stored
###Code
prefixe = 'resized'
if is_horizontal_banner:
prefixe += '_horizontal'
else:
prefixe += '_vertical'
Path('data/'+prefixe+'_steam_banners/').mkdir(exist_ok=True)
###Output
_____no_output_____
###Markdown
Function to read banner names
###Code
def get_banner_names(prefixe='original',
is_horizontal_banner=True,
is_logo=False):
if is_horizontal_banner:
prefixe += '_horizontal'
else:
prefixe += '_vertical'
file_extension = get_file_extension(is_logo=is_logo)
banner_names = [Path(f).name for f in glob.glob('data/'+prefixe+'_steam_banners/*' + file_extension)]
return banner_names
###Output
_____no_output_____
###Markdown
Find banners which were downloaded but not yet resized
###Code
def find_new_banners(is_horizontal_banner=True, is_logo=False):
l_original = get_banner_names('original',
is_horizontal_banner=is_horizontal_banner,
is_logo=is_logo)
l_resized = get_banner_names('resized',
is_horizontal_banner=is_horizontal_banner,
is_logo=is_logo)
l_new = set(l_original).difference(l_resized)
app_ids_new = [int(app_id.strip(get_file_extension(is_logo))) for app_id in l_new]
print('#banners downloaded but not yet resized = {}'.format(len(app_ids_new)))
return app_ids_new
###Output
_____no_output_____
###Markdown
Handle a log file with appIDs of banners which have yet to be resized
###Code
def get_copy_and_resize_todo_file_name():
return 'copy_and_resize_todo.txt'
def create_copy_and_resize_todo(is_horizontal_banner=True, is_logo=False):
app_ids_new = find_new_banners(is_horizontal_banner=is_horizontal_banner,
is_logo=is_logo)
with open(get_copy_and_resize_todo_file_name(), 'w') as f:
for app_id in app_ids_new:
f.write(str(app_id) + '\n')
return
def load_copy_and_resize_todo():
with open(get_copy_and_resize_todo_file_name(), 'r') as f:
app_ids = [int(app_id.strip()) for app_id in f.readlines()]
return app_ids
create_copy_and_resize_todo(is_horizontal_banner=is_horizontal_banner,
is_logo=is_logo)
###Output
#banners downloaded but not yet resized = 14035
###Markdown
Install ImageMagick
###Code
!apt-get install imagemagick
###Output
Reading package lists... Done
Building dependency tree
Reading state information... Done
imagemagick is already the newest version (8:6.9.7.4+dfsg-16ubuntu6.8).
The following package was automatically installed and is no longer required:
libnvidia-common-430
Use 'apt autoremove' to remove it.
0 upgraded, 0 newly installed, 0 to remove and 25 not upgraded.
###Markdown
Resize horizontal banners from 215x460 pixels to 215x215 pixels. Resize vertical banners from 450x300 pixels to 256x256 pixels.

There are two possible methods: `mogrify` and `convert`. [Batch resize](https://stackoverflow.com/a/18018161) with `mogrify` for fast post-processing, even on the cloud!

NB: We cannot make use of `load_copy_and_resize_todo()` with `mogrify`, because there is no `for`-loop.
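A pure-Python alternative (a hedged sketch, assuming the Pillow package is installed) avoids shelling out to ImageMagick and can consume the `load_copy_and_resize_todo()` list directly:

```python
from PIL import Image

for app_id in load_copy_and_resize_todo():
    src = get_banner_file_name(app_id, prefixe='original',
                               is_horizontal_banner=is_horizontal_banner, is_logo=is_logo)
    dst = get_banner_file_name(app_id, prefixe='resized',
                               is_horizontal_banner=is_horizontal_banner, is_logo=is_logo)
    target_size = (215, 215) if is_horizontal_banner else (256, 256)
    Image.open(src).resize(target_size).save(dst)
```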
###Code
file_extension = get_file_extension(is_logo=is_logo)
if is_horizontal_banner:
!mogrify \
-resize '215x215!' \
-path /content/data/resized_horizontal_steam_banners \
/content/data/original_horizontal_steam_banners/*{file_extension}
else:
!mogrify \
-resize '256x256!' \
-path /content/data/resized_vertical_steam_banners \
/content/data/original_vertical_steam_banners/*{file_extension}
if is_horizontal_banner:
!ls /content/data/original_horizontal_steam_banners/ | wc -l
!ls /content/data/resized_horizontal_steam_banners/ | wc -l
else:
!ls /content/data/original_vertical_steam_banners/ | wc -l
!ls /content/data/resized_vertical_steam_banners/ | wc -l
###Output
_____no_output_____
###Markdown
Alternatively, use a `for`-loop to call `convert` on each image.

NB: A `for`-loop is necessary here because, in contrast to `mogrify`, `convert` would load all the images into memory, which would not be a good idea here with close to 20k images.

**Caveat: this process is extremely fast [if run locally](https://github.com/woctezuma/download-steam-banners/blob/master/batch_resize_images.py), and is extremely slow if run on the cloud!**
###Code
file_extension = get_file_extension(is_logo=is_logo)
for app_id in load_copy_and_resize_todo():
if is_horizontal_banner:
!echo convert data/original_horizontal_steam_banners/{app_id}{file_extension} -resize '215x215!' data/resized_horizontal_steam_banners/{app_id}{file_extension}
!convert data/original_horizontal_steam_banners/{app_id}{file_extension} -resize '215x215!' data/resized_horizontal_steam_banners/{app_id}{file_extension}
else:
!echo convert data/original_vertical_steam_banners/{app_id}{file_extension} -resize '256x256!' data/resized_vertical_steam_banners/{app_id}{file_extension}
!convert data/original_vertical_steam_banners/{app_id}{file_extension} -resize '256x256!' data/resized_vertical_steam_banners/{app_id}{file_extension}
###Output
_____no_output_____
###Markdown
Manually check one banner
###Code
import cv2
from matplotlib import pyplot as plt
app_id = 620
file_extension = get_file_extension(is_logo=is_logo)
for prefixe in ['original_horizontal',
'resized_horizontal',
'original_vertical',
'resized_vertical']:
img_name = 'data/' + prefixe + '_steam_banners/'+str(app_id)+file_extension
img = cv2.imread(img_name)
try:
plt.imshow(img[..., ::-1])
except TypeError:
continue
print('Dimensions: {} for file: {}'.format(img.shape,
img_name))
plt.xticks([]), plt.yticks([]) # to hide tick values on X and Y axis
plt.show()
###Output
Dimensions: (450, 300, 3) for file: data/original_vertical_steam_banners/620.jpg
###Markdown
Automatically check every banner. Caveat: this is slow and optional!
###Code
import cv2
app_ids_to_resize = []
counters = {'original':0, 'resized':0}
for counter, app_id in enumerate(get_app_ids()):
img_name = get_banner_file_name(app_id=app_id,
prefixe='resized',
is_horizontal_banner=is_horizontal_banner,
is_logo=is_logo)
img = cv2.imread(img_name)
if img is None:
print('Missing banner for appID = {}'.format(app_id))
continue
if img.shape[0] != img.shape[1]:
print('Non-square banner size for appID = {}'.format(app_id))
app_ids_to_resize.append(app_id)
counters['original'] += 1
else:
counters['resized'] += 1
if (counter+1) % 100 == 0:
print('Current counters: {}'.format(counters))
###Output
_____no_output_____
###Markdown
Ensure that the banners which had slipped through the resize process somehow are resized.**Caveat: this process is extremely fast [if run locally](https://github.com/woctezuma/download-steam-banners/blob/master/batch_resize_images.py), and is extremely slow if run on the cloud!**
###Code
file_extension = get_file_extension(is_logo=is_logo)
for counter, app_id in enumerate(app_ids_to_resize):
img_name = get_banner_file_name(app_id=app_id,
prefixe='resized',
is_horizontal_banner=is_horizontal_banner,
is_logo=is_logo)
img = cv2.imread(img_name)
if img.shape[0] != img.shape[1]:
if is_horizontal_banner:
!echo mogrify -resize '215x215!' data/resized_horizontal_steam_banners/{app_id}{file_extension}
!mogrify -resize '215x215!' data/resized_horizontal_steam_banners/{app_id}{file_extension}
else:
!echo mogrify -resize '256x256!' data/resized_vertical_steam_banners/{app_id}{file_extension}
!mogrify -resize '256x256!' data/resized_vertical_steam_banners/{app_id}{file_extension}
else:
continue
if (counter+1) % 100 == 0:
print('{} images processed'.format(counter+1))
###Output
_____no_output_____
###Markdown
Archive the resized images At the end, archive the resized images into a .tar file, and copy it to Google Drive.
###Code
import colab_transfer
colab_path = colab_transfer.get_path_to_home_of_local_machine()
drive_path = colab_transfer.get_path_to_home_of_google_drive()
###Output
_____no_output_____
###Markdown
Horizontal banners
###Code
input_file_name = 'resized_horizontal_steam_banners.tar'
###Output
_____no_output_____
###Markdown
Save to Google Drive
###Code
!du -sh /content/data/resized_horizontal_steam_banners/
!tar cvf resized_horizontal_steam_banners.tar /content/data/resized_horizontal_steam_banners/
!du -sh /content/resized_horizontal_steam_banners.tar
colab_transfer.copy_file(
file_name=input_file_name,
source=colab_path,
destination=drive_path,
)
###Output
_____no_output_____
###Markdown
Vertical banners
###Code
input_file_name = 'resized_vertical_steam_banners.tar'
###Output
_____no_output_____
###Markdown
Save to Google Drive
###Code
!du -sh /content/data/resized_vertical_steam_banners/
!tar cvf resized_vertical_steam_banners.tar /content/data/resized_vertical_steam_banners/
!du -sh /content/resized_vertical_steam_banners.tar
colab_transfer.copy_file(
file_name=input_file_name,
source=colab_path,
destination=drive_path,
)
###Output
_____no_output_____ |
notebooks/Alexandre_Desafio.ipynb | ###Markdown
Desafio Lumini For this challenge, we will carry out an exploratory data analysis of a sample of 2016 ENEM registrations, following the recommendations in the repository, which can be seen below (screenshot of the repository instructions). Importing the libraries
###Code
import pandas as pd
import warnings
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="white", context="talk")
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Loading the files and first analyses Let's start by opening the csv file containing the data:
###Code
micro_enem = pd.read_csv("Microdados_Enem_2016.csv")
###Output
_____no_output_____
###Markdown
The accompanying data dictionary is also important, since it provides an explanation of the columns present in the file. Let's look at a sample of the loaded data.
###Code
micro_enem.head()
###Output
_____no_output_____
###Markdown
The full list of columns contains the following fields:
###Code
micro_enem.columns.tolist()
###Output
_____no_output_____
###Markdown
Consistency check and creation of new grade information Let's check the year in which the exam was taken, for the consistency of the analysis:
###Code
micro_enem["NU_ANO"].unique()
###Output
_____no_output_____
###Markdown
Everyone took the exam in the same year, so comparing grades here makes sense. Let's extract the grades and discard the missing cases in order to compute an average grade per student. From now on, we will not consider the grades of students who did not complete all of the ENEM tests (i.e. who missed a day).
###Code
micro_enem = micro_enem.dropna(subset=['NU_NOTA_CN','NU_NOTA_CH', 'NU_NOTA_LC', 'NU_NOTA_MT', 'NU_NOTA_REDACAO'])
micro_enem
notas_alunos = micro_enem[['NU_NOTA_CN','NU_NOTA_CH', 'NU_NOTA_LC', 'NU_NOTA_MT', 'NU_NOTA_REDACAO']].dropna()
###Output
_____no_output_____
###Markdown
We can now compute the average per subject, out of curiosity, and the average per student:
###Code
notas_alunos.mean(axis=0)
nota_alunos = notas_alunos.mean(axis=1)
micro_enem['nota_final']=nota_alunos
###Output
_____no_output_____
###Markdown
At this point we have an average final grade per student as a new variable, which we will use in the analyses. State analysis Let's now run a first exploration associating each student's state with their grade. We will list them in descending order to look at the worst and best states
###Code
uf_nota = micro_enem[['SG_UF_RESIDENCIA','nota_final']]
###Output
_____no_output_____
###Markdown
Worst states by grade:
###Code
uf_nota.groupby('SG_UF_RESIDENCIA').mean().sort_values('nota_final')[:5]
###Output
_____no_output_____
###Markdown
Best states by grade:
###Code
uf_nota.groupby('SG_UF_RESIDENCIA').mean().sort_values('nota_final')[-5:]
###Output
_____no_output_____
###Markdown
Before drawing conclusions, it is important to also check how many samples belong to each group. For example, we can see that the best-ranked states are sparsely represented in this listing. We will also look at the standard deviation, to understand whether the data shows little variation even when the group is not very representative
###Code
uf_nota.groupby('SG_UF_RESIDENCIA').agg(['mean', 'std', 'count']).sort_values(('nota_final','mean'))
###Output
_____no_output_____
###Markdown
The sample is quite small compared with the total number of ENEM registrations, yet somewhat representative. Of the 5 worst states, 3 are in the North region and 2 in the Northeast. The North or Northeast regions are likely the worst performers; we will verify this next. Analysis by region of the country We can also split by region, since the first digit of the municipality code identifies the region
###Code
regiao_nota = micro_enem[['CO_MUNICIPIO_RESIDENCIA','nota_final']]
regiao_nota['região'] = regiao_nota['CO_MUNICIPIO_RESIDENCIA'].astype("str").apply(lambda x:x[0])
del regiao_nota['CO_MUNICIPIO_RESIDENCIA']
###Output
_____no_output_____
###Markdown
Let's take a sample of the state (UF) column just so we can identify what each region number corresponds to:
###Code
pd.concat([uf_nota[:5],regiao_nota['região'][:5]], axis=1)
###Output
_____no_output_____
###Markdown
We can then see that 1 is North, 2 is Northeast, 3 is Southeast, 4 is South, and 5 is Center-West (the only one not represented in the sample). Let's create a map to look up the regions by name:
###Code
map_region = {'1':'norte', '2':'nordeste', '3':'sudeste', '4':'sul', '5':'centro-oeste'}
###Output
_____no_output_____
###Markdown
Let's now group the metrics by region, again using the mean, standard deviation, and element count for the evaluation:
###Code
regiao_nota = regiao_nota.groupby('região').agg(['mean', 'std', 'count']).sort_values(('nota_final','mean'))
regiao_nota
###Output
_____no_output_____
###Markdown
Replacing the codes with the region names and plotting:
###Code
regiao_nota['região'] = regiao_nota.index.to_series().apply(lambda x:map_region[x])
grades = regiao_nota[('nota_final','mean')].tolist()
regions = regiao_nota['região'].tolist()
# PLOT
f, ax1 = plt.subplots(1, figsize=(8, 8))#, sharex=True)
sns.barplot(x=regions, y=grades, data=regiao_nota, palette="deep", ax=ax1)
ax1.axhline(0, color="k", clip_on=False)
ax1.set_ylabel("Nota média do Enem")
###Output
_____no_output_____
###Markdown
Although the grades are close, the lowest grades **in this sample** belong to the Northeast and North, with almost identical values. This split is similar to the one found here: https://www.leiaja.com/carreiras/2019/07/02/veja-regioes-com-maior-e-menor-media-na-redacao-do-enem/. Even though the years are different, the behavior is not expected to vary much from one year to the next. Also note that we are analyzing only a sample of the full dataset. Age analysis Let's now run an analysis based on the candidates' age
###Code
age_grades = micro_enem[['NU_IDADE','nota_final']]
ages_grade = age_grades.groupby('NU_IDADE').agg(['mean', 'std', 'count'])
print(ages_grade.shape)
ages_grade.head()
###Output
_____no_output_____
###Markdown
Since there are 53 possible ages here, their counts vary a lot. Let's keep only the ages with at least 10 samples to remove outliers
###Code
ages_grade = ages_grade[ages_grade[('nota_final','count')]>10]
ages = ages_grade.index.tolist()
grades = ages_grade[('nota_final','mean')].tolist()
df = ages_grade[('nota_final','mean')]
ages_grade['ages'] = ages_grade.index
ages_grade['value'] = ages_grade[('nota_final','mean')]
ages_grade = ages_grade[['ages', 'value']]
###Output
_____no_output_____
###Markdown
The plot below relates the grade to the participants' age. We can see that younger participants have better grades, which may be explained by the next analysis
###Code
sns.set(style="darkgrid")
g = sns.lmplot(x="ages", y="value", data=ages_grade, ci=None, palette="muted", height=10,
scatter_kws={"s": 50, "alpha": 1}, truncate=False)
g.set_ylabels("Nota média do Enem")
g.set_xlabels("Idade")
fig = g.fig
fig.suptitle("Nota Enem x Idade. Reta de regressão também plotada.", fontsize=12)
###Output
_____no_output_____
###Markdown
Let's now look at the average age of people who work and who do not work
###Code
trabalho = micro_enem[['NU_IDADE', 'Q026']]
trabalho_map = {"A":"Nunca Trabalhei", "B": "Já Trabalhei", "C":"Trabalho"}
trabalho['trabalho']=trabalho['Q026'].apply(lambda x:trabalho_map[x])
###Output
_____no_output_____
###Markdown
Plotting the average age of those who work and those who do not work
###Code
g = sns.catplot(x="trabalho", y="NU_IDADE", data=trabalho,
height=6, kind="bar", palette="muted")
g.set_ylabels("Idade Média")
g.set_xlabels("Situação de Trabalho")
###Output
_____no_output_____
###Markdown
When looking at the average age of working participants, we can see that participants who work or have already worked (possibly unemployed) are older on average, and they are also the ones who tend to have a lower ENEM average (after age 20). Therefore, spending time working instead of studying is a possible reason why the grades of people older than 20 are lower than those of younger candidates. Ethnicity analysis Let's analyze the grades by the ethnicity declared in the ENEM
###Code
ethnic = micro_enem[['TP_COR_RACA','nota_final']]
ethnic = ethnic.groupby('TP_COR_RACA').agg(['mean', 'std', 'count'])
etnics_map = {0:'Não Declar.',1:'Branca', 2:'Preto', 3:'Pardo', 4:'Amarelo', 5:'Indígena'}
ethnic['cor'] = ethnic.index.to_series().apply(lambda x:etnics_map[x])
grades = ethnic[('nota_final','mean')].tolist()
color = ethnic['cor'].tolist()
###Output
_____no_output_____
###Markdown
The plot below relates the ethnic group declared by the participant to their grade. We can see that the 'White' group has the highest averages, while the 'Indigenous' group has the lowest. Again, given the small amount of data, the analysis cannot be taken as final:
###Code
f, ax1 = plt.subplots(1, figsize=(8, 8))#, sharex=True)
sns.barplot(x=color, y=grades, palette="deep", ax=ax1)
ax1.axhline(0, color="k", clip_on=False)
ax1.set_ylabel("Nota média do Enem")
###Output
_____no_output_____
###Markdown
School type analysis
###Code
school_type = micro_enem[['TP_ESCOLA','nota_final']]
school_type = school_type.groupby('TP_ESCOLA').agg(['mean', 'std', 'count'])#.sort_values(('nota_final','mean'))
school_type
###Output
_____no_output_____
###Markdown
As we can see, most students in this subset did not declare their school type. Among those who did, most come from public schools, with 569 individuals from private schools. Since only one student studied abroad, we will remove that case from the analysis.
###Code
school_type = school_type[:3]
school_map = {1:'Não Declar.', 2:'Pública', 3:'Privada', 4:'Exterior'}
school_type['escola'] = school_type.index.to_series().apply(lambda x:school_map[x])
grades = school_type[('nota_final','mean')].tolist()
school = school_type['escola'].tolist()
f, ax1 = plt.subplots(1, figsize=(8, 8))#, sharex=True)
sns.barplot(x=school, y=grades, palette="rocket", ax=ax1)
ax1.axhline(0, color="k", clip_on=False)
ax1.set_ylabel("Nota média do Enem")
###Output
_____no_output_____
###Markdown
Even though most students come from public schools, the best grades belong to students coming from private schools. School type versus ethnicity Here we will analyze school type together with ethnicity to understand how the two relate
###Code
school_ethnic = micro_enem[['TP_ESCOLA', 'TP_COR_RACA', 'nota_final']]
school_ethnic['TP_COR_RACA'] = school_ethnic['TP_COR_RACA'].apply(lambda x:etnics_map[x])
school_ethnic['Tipo Escola'] = school_ethnic['TP_ESCOLA'].apply(lambda x:school_map[x])
del school_ethnic['TP_ESCOLA']
###Output
_____no_output_____
###Markdown
We will restrict the analysis to the number of students from public and private schools only:
###Code
school_ethnic = school_ethnic[(school_ethnic['Tipo Escola']=="Pública") | (school_ethnic['Tipo Escola']=="Privada")]
###Output
_____no_output_____
###Markdown
Now, relating ethnic group and school of origin to the grades, we can see that, regardless of the ethnic group, the grades of students from public schools are lower than those of students from private schools:
###Code
g = sns.catplot(x="TP_COR_RACA", y="nota_final", hue="Tipo Escola", data=school_ethnic,
height=8, kind="bar", palette="muted")
g.despine(left=True)
g.set_ylabels("Nota média do Enem")
g.set_xlabels("Etnia")
###Output
_____no_output_____
###Markdown
Black students (Pretos and Pardos) tend to score lower than White and Asian students in private schools. Beyond school type, it is important to analyze how the ethnic groups behave with respect to family income. This may help highlight the fact that, among private schools, there is still a large gap between more expensive and cheaper schools. Naturally, if Black families have lower income, even when studying in private schools they will attend lower-performing ones. Let's run the analysis to see whether it confirms our hypothesis:
###Code
color_income = micro_enem[['TP_ESCOLA', 'TP_COR_RACA', 'Q006']]
etnics_map = {0:'Não Declar.',1:'Branca', 2:'Preto', 3:'Pardo', 4:'Amarelo', 5:'Indígena'}
color_income['cor'] = color_income['TP_COR_RACA'].apply(lambda x:etnics_map[x])
del color_income['TP_ESCOLA']
del color_income['TP_COR_RACA']
brancos = color_income[color_income['cor']=='Branca']
negros = color_income[(color_income['cor']=='Preto') | (color_income['cor']=='Pardo')]
###Output
_____no_output_____
###Markdown
In the next plots we will analyze only the Black and White groups, since they have more statistical relevance. We will look at the different income brackets, where **A** means no family income, **B** means income up to 880.00 reais, **C** from 880.01 to 1,320.00 reais, and so on up to **Q**, which is more than 17,600.00 reais.
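For readability, the Q006 income brackets could be mapped to labels with a small dictionary (a sketch; only a few brackets are spelled out here and the intermediate ones are omitted, so unmapped letters become NaN):
```python
# Hypothetical helper: readable labels for a few Q006 income brackets (values in BRL).
income_map = {
    'A': 'no family income',
    'B': 'up to 880.00',
    'C': '880.01 - 1,320.00',
    # ... intermediate brackets omitted ...
    'Q': 'more than 17,600.00',
}
income_labels = micro_enem['Q006'].map(income_map)  # unmapped brackets -> NaN
```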
###Code
g_negros = negros.groupby("Q006").count()
g_brancos = brancos.groupby("Q006").count()
f, (ax1, ax2) = plt.subplots(2,1, figsize=(8, 15))#, sharex=True)
sns.barplot(x=g_negros.index, y=g_negros['cor'], palette="rocket", ax=ax1)
ax1.axhline(0, color="k", clip_on=False)
ax1.set_ylabel("Número de Indivíduos")
ax1.set_xlabel("Renda Média (A menor, Q Maior)")
ax1.set_title("Renda Média de Negros")
sns.barplot(x=g_brancos.index, y=g_brancos['cor'], palette="rocket", ax=ax2)
ax2.axhline(0, color="k", clip_on=False)
ax2.set_ylabel("Número de Indivíduos")
ax2.set_xlabel("Renda Média (A menor, Q Maior)")
ax2.set_title("Renda Média de Brancos")
###Output
_____no_output_____
###Markdown
The plot above shows that Black participants have lower income than White participants. This is directly linked to school performance, and helps explain why Black students perform worse both in public and in private schools. Gender analysis Here we will break down the ENEM grades by gender
###Code
gender = micro_enem[['TP_SEXO','nota_final']]
f, ax1 = plt.subplots(1, figsize=(7, 7))#, sharex=True)
sns.boxplot(x="TP_SEXO", y="nota_final", data=gender, palette="rocket", ax=ax1)
ax1.axhline(0, color="k", clip_on=False)
ax1.set_ylabel("Nota média do Enem")
ax1.set_xlabel("Sexos Masculino (M) e Feminino (F)")
ax1.set_title("Distribuição de notas por gênero")
###Output
_____no_output_____ |
FADL2/darknet_loss_troubleshooting.ipynb | ###Markdown
1.
###Code
from fastai.conv_learner import *
from fastai.models import darknet
from pathlib import Path
PATH = Path('data/imagenet')
PATH_TRAIN = PATH/'train'
# sz = 256
# bs = 32
# darknet53 = darknet.darknet_53()
# tfms = tfms_from_model(darknet53, sz)
# model_data = ImageClassifierData.from_paths(PATH, bs=bs, tfms=tfms, val_name='train')
# learner = ConvLearner.from_model_data(darknet53, model_data)
learner.crit
model_data
learner.crit
learner.crit = F.cross_entropy
learner.crit
###Output
_____no_output_____
###Markdown
2. Checking behavior of `.pretrained` in fastai/conv_learner.py. Looking at number of FC layers at the end of a stock PyTorch model, compared to loading through FastAI:
###Code
from torchvision.models import resnet18
resnet18
resnet18 = resnet18()
# resnet18
###Output
_____no_output_____
###Markdown
1 linear layer at the end.
###Code
from fastai.conv_learner import *
resnet18
###Output
_____no_output_____
###Markdown
What fastai originally imports is the resnet18 constructor function from torchvision.models.resnet. Good to know.
###Code
learner = ConvLearner.from_model_data(resnet18(), model_data)
# learner
from pathlib import Path
PATH = Path('data/cifar10')
sz = 32
bs = 64
tfms = tfms_from_model(resnet18, sz)
model_data = ImageClassifierData.from_paths(PATH, bs, tfms=tfms, val_name='test')
learner = ConvLearner.pretrained(resnet18, model_data)
# learner
###Output
_____no_output_____
###Markdown
Right. fastai strips the FC layer and its associated pooling layers, and replaces them with adaptive pooling and 2 linear layers with batchNorm and dropout; with a LogSoftmax output layer.
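For reference, a minimal PyTorch sketch of what such a replacement head could look like (the 512-feature input and layer sizes are illustrative assumptions, not fastai's exact defaults):
```python
import torch.nn as nn

# Hypothetical fastai-style head: adaptive pooling, then two linear blocks with
# BatchNorm/Dropout, ending in LogSoftmax (so NLL loss sees log-probabilities).
custom_head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.BatchNorm1d(512),
    nn.Dropout(0.25),
    nn.Linear(512, 512),
    nn.ReLU(inplace=True),
    nn.BatchNorm1d(512),
    nn.Dropout(0.5),
    nn.Linear(512, 10),   # 10 classes for CIFAR-10 in this example
    nn.LogSoftmax(dim=1),
)
```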
###Code
from fastai.models import darknet
darknet.darknet_53
tfms = tfms_from_stats(imagenet_stats, sz)
model_data = ImageClassifierData.from_paths(PATH, bs=bs, tfms=tfms, val_name='test')
darknet53 = darknet.darknet_53
# learner = ConvLearner.from_model_data(darknet53(num_classes=10), model_data)
# learner
# learner = ConvLearner.pretrained(darknet53(num_classes=10), model_data)
# learner
###Output
_____no_output_____
###Markdown
3.
###Code
%matplotlib inline
%reload_ext autoreload
%autoreload 2
from fastai.conv_learner import *
from fastai.models import darknet
from pathlib import Path
PATH = Path('data/imagenet')
PATH_TRAIN = PATH/'train'
bs = 32
sz = 256
tfms = tfms_from_stats(imagenet_stats, sz)
model_data = ImageClassifierData.from_paths(PATH, bs=bs, tfms=tfms)
darknet53 = darknet.darknet_53
###Output
_____no_output_____
###Markdown
`.from_model_data`
###Code
learner = ConvLearner.from_model_data(darknet53(), model_data)
# learner.summary()
###Output
_____no_output_____
###Markdown
head:``` ('AdaptiveAvgPool2d-232', OrderedDict([('input_shape', [-1, 1024, 14, 14]), ('output_shape', [-1, 1024, 1, 1]), ('nb_params', 0)])), ('Flatten-233', OrderedDict([('input_shape', [-1, 1024, 1, 1]), ('output_shape', [-1, 1024]), ('nb_params', 0)])), ('Linear-234', OrderedDict([('input_shape', [-1, 1024]), ('output_shape', [-1, 1000]), ('trainable', True), ('nb_params', 1025000)]))])``` learner with NLL loss fails Learning-Rate Finder phase due to negative losses:
###Code
learner.lr_find()
learner.sched.plot()
learner.crit
###Output
_____no_output_____
###Markdown
`.pretrained`
###Code
learner = ConvLearner.pretrained(darknet53(), model_data)
learner.summary()
learner.lr_find()
learner.sched.plot()
###Output
_____no_output_____
###Markdown
Testing FixTesting addition of LogSoftmax layer to fastai.models.darknet.Darknet definition
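The reasoning behind the fix: fastai pairs these models with a negative log-likelihood criterion, which expects log-probabilities, but the stock Darknet head ends in a plain `Linear` layer, so raw logits can make the loss go negative during the LR finder. A rough sketch of the change being tested, applied by wrapping the model rather than editing the library file (an assumption about one way to try it here):
```python
import torch.nn as nn
import torch.nn.functional as F

# Append a LogSoftmax after the Darknet classifier so its outputs are
# log-probabilities, which is what F.nll_loss expects.
patched = nn.Sequential(darknet.darknet_53(), nn.LogSoftmax(dim=1))

learner = ConvLearner.from_model_data(patched, model_data)
learner.crit = F.nll_loss  # now consistent with the log-probability outputs
```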
###Code
from pathlib import Path
from fastai.conv_learner import *
from fastai.models import darknet
PATH = Path('data/imagenet')
sz = 256; bs=32
tfms = tfms_from_stats(imagenet_stats, sz)
model_data = ImageClassifierData.from_paths(PATH, bs=bs, tfms=tfms, val_name='train')
darknet53 = darknet.darknet_53
learner = ConvLearner.from_model_data(darknet53(), model_data)
# learner.summary()
learner.lr_find()
learner.sched.plot()
learner.summary()
learner.crit
###Output
_____no_output_____ |
nb_download/new-baseline-pytorch-moa.ipynb | ###Markdown
If you are looking for a team member, do consider me too ! References :1. @abhishek and @artgor 's Parallel Programming video https://www.youtube.com/watch?v=VRVit0-0AXE2. @yasufuminakama 's Amazying Notebook https://www.kaggle.com/yasufuminakama/moa-pytorch-nn-starter `If you consider forking my kernel, remember to turn off the internet after giving an` **UPVOTE** Update V.141. Added features from PCA to existing ones [improves CV score] Update V.111. Added feature selection using `VarianceThreshold` method of sklearn [improves CV score] Update:1. Model updated2. Increased Seeds If you like it, Do Upvote :)
###Code
import sys
sys.path.append('../input/iterative-stratification/iterative-stratification-master')
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold
import numpy as np
import random
import pandas as pd
import matplotlib.pyplot as plt
import os
import copy
import seaborn as sns
from sklearn import preprocessing
from sklearn.metrics import log_loss
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import warnings
warnings.filterwarnings('ignore')
os.listdir('../input/lish-moa')
train_features = pd.read_csv('../input/lish-moa/train_features.csv')
train_targets_scored = pd.read_csv('../input/lish-moa/train_targets_scored.csv')
train_targets_nonscored = pd.read_csv('../input/lish-moa/train_targets_nonscored.csv')
test_features = pd.read_csv('../input/lish-moa/test_features.csv')
sample_submission = pd.read_csv('../input/lish-moa/sample_submission.csv')
GENES = [col for col in train_features.columns if col.startswith('g-')]
CELLS = [col for col in train_features.columns if col.startswith('c-')]
def seed_everything(seed=42):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
seed_everything(seed=42)
train_targets_scored.sum()[1:].sort_values()
train_features['cp_type'].unique()
###Output
_____no_output_____
###Markdown
PCA features + Existing features
###Code
# GENES
n_comp = 50
data = pd.concat([pd.DataFrame(train_features[GENES]), pd.DataFrame(test_features[GENES])])
data2 = (PCA(n_components=n_comp, random_state=42).fit_transform(data[GENES]))
train2 = data2[:train_features.shape[0]]; test2 = data2[-test_features.shape[0]:]
train2 = pd.DataFrame(train2, columns=[f'pca_G-{i}' for i in range(n_comp)])
test2 = pd.DataFrame(test2, columns=[f'pca_G-{i}' for i in range(n_comp)])
# drop_cols = [f'c-{i}' for i in range(n_comp,len(GENES))]
train_features = pd.concat((train_features, train2), axis=1)
test_features = pd.concat((test_features, test2), axis=1)
#CELLS
n_comp = 15
data = pd.concat([pd.DataFrame(train_features[CELLS]), pd.DataFrame(test_features[CELLS])])
data2 = (PCA(n_components=n_comp, random_state=42).fit_transform(data[CELLS]))
train2 = data2[:train_features.shape[0]]; test2 = data2[-test_features.shape[0]:]
train2 = pd.DataFrame(train2, columns=[f'pca_C-{i}' for i in range(n_comp)])
test2 = pd.DataFrame(test2, columns=[f'pca_C-{i}' for i in range(n_comp)])
# drop_cols = [f'c-{i}' for i in range(n_comp,len(CELLS))]
train_features = pd.concat((train_features, train2), axis=1)
test_features = pd.concat((test_features, test2), axis=1)
###Output
_____no_output_____
###Markdown
feature Selection using Variance Encoding
###Code
from sklearn.feature_selection import VarianceThreshold
var_thresh = VarianceThreshold(threshold=0.5)
data = train_features.append(test_features)
data_transformed = var_thresh.fit_transform(data.iloc[:, 4:])
train_features_transformed = data_transformed[ : train_features.shape[0]]
test_features_transformed = data_transformed[-test_features.shape[0] : ]
train_features = pd.DataFrame(train_features[['sig_id','cp_type','cp_time','cp_dose']].values.reshape(-1, 4),\
columns=['sig_id','cp_type','cp_time','cp_dose'])
train_features = pd.concat([train_features, pd.DataFrame(train_features_transformed)], axis=1)
test_features = pd.DataFrame(test_features[['sig_id','cp_type','cp_time','cp_dose']].values.reshape(-1, 4),\
columns=['sig_id','cp_type','cp_time','cp_dose'])
test_features = pd.concat([test_features, pd.DataFrame(test_features_transformed)], axis=1)
train_features
train = train_features.merge(train_targets_scored, on='sig_id')
train = train[train['cp_type']!='ctl_vehicle'].reset_index(drop=True)
test = test_features[test_features['cp_type']!='ctl_vehicle'].reset_index(drop=True)
target = train[train_targets_scored.columns]
train = train.drop('cp_type', axis=1)
test = test.drop('cp_type', axis=1)
train
###Output
_____no_output_____
###Markdown
Binning
###Code
# for col in GENES:
# train.loc[:, f'{col}_bin'] = pd.cut(train[col], bins=3, labels=False)
# test.loc[:, f'{col}_bin'] = pd.cut(test[col], bins=3, labels=False)
###Output
_____no_output_____
###Markdown
Distribution plots
###Code
# plt.figure(figsize=(16,16))
# sns.set_style("whitegrid")
# gene_choice = np.random.choice(len(GENES), 16)
# for i, col in enumerate(gene_choice):
# plt.subplot(4, 4, i+1)
# plt.hist(train_features.loc[:, GENES[col]],bins=100, color='orange')
# plt.title(GENES[col])
###Output
_____no_output_____
###Markdown
[Naive] Outlier Removal
###Code
# train_ = train.copy() [Didn't wanted to actually normalize, so created a copy and normalized that for further calculation]
# for col in GENES:
# # train_[col] = (train[col]-np.mean(train[col])) / (np.std(train[col]))
# mean = train_[col].mean()
# std = train_[col].std()
# std_r = mean + 4*std
# std_l = mean - 4*std
# drop = train_[col][(train_[col]>std_r) | (train_[col]<std_l)].index.values
# train = train.drop(drop).reset_index(drop=True)
# # folds = folds.drop(drop).reset_index(drop=True)
# target = target.drop(drop).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
PCA
###Code
# n_comp = 50
# data = pd.concat([pd.DataFrame(train[CELLS]), pd.DataFrame(test[CELLS])])
# data2 = (PCA(n_components=n_comp, random_state=42).fit_transform(data[CELLS]))
# train2 = data2[:train.shape[0]]; test2 = data2[train.shape[0]:]
# train2 = pd.DataFrame(train2, columns=[f'c-{i}' for i in range(n_comp)])
# test2 = pd.DataFrame(test2, columns=[f'c-{i}' for i in range(n_comp)])
# drop_cols = [f'c-{i}' for i in range(n_comp,len(CELLS))]
# train = train.drop(columns=drop_cols)
# test = test.drop(columns=drop_cols)
target_cols = target.drop('sig_id', axis=1).columns.values.tolist()
###Output
_____no_output_____
###Markdown
CV folds
###Code
folds = train.copy()
mskf = MultilabelStratifiedKFold(n_splits=5)
for f, (t_idx, v_idx) in enumerate(mskf.split(X=train, y=target)):
folds.loc[v_idx, 'kfold'] = int(f)
folds['kfold'] = folds['kfold'].astype(int)
folds
print(train.shape)
print(folds.shape)
print(test.shape)
print(target.shape)
print(sample_submission.shape)
###Output
_____no_output_____
###Markdown
Dataset Classes
###Code
class MoADataset:
def __init__(self, features, targets):
self.features = features
self.targets = targets
def __len__(self):
return (self.features.shape[0])
def __getitem__(self, idx):
dct = {
'x' : torch.tensor(self.features[idx, :], dtype=torch.float),
'y' : torch.tensor(self.targets[idx, :], dtype=torch.float)
}
return dct
class TestDataset:
def __init__(self, features):
self.features = features
def __len__(self):
return (self.features.shape[0])
def __getitem__(self, idx):
dct = {
'x' : torch.tensor(self.features[idx, :], dtype=torch.float)
}
return dct
def train_fn(model, optimizer, scheduler, loss_fn, dataloader, device):
model.train()
final_loss = 0
for data in dataloader:
optimizer.zero_grad()
inputs, targets = data['x'].to(device), data['y'].to(device)
# print(inputs.shape)
outputs = model(inputs)
loss = loss_fn(outputs, targets)
loss.backward()
optimizer.step()
scheduler.step()
final_loss += loss.item()
final_loss /= len(dataloader)
return final_loss
def valid_fn(model, loss_fn, dataloader, device):
model.eval()
final_loss = 0
valid_preds = []
for data in dataloader:
inputs, targets = data['x'].to(device), data['y'].to(device)
outputs = model(inputs)
loss = loss_fn(outputs, targets)
final_loss += loss.item()
valid_preds.append(outputs.sigmoid().detach().cpu().numpy())
final_loss /= len(dataloader)
valid_preds = np.concatenate(valid_preds)
return final_loss, valid_preds
def inference_fn(model, dataloader, device):
model.eval()
preds = []
for data in dataloader:
inputs = data['x'].to(device)
with torch.no_grad():
outputs = model(inputs)
preds.append(outputs.sigmoid().detach().cpu().numpy())
preds = np.concatenate(preds)
return preds
###Output
_____no_output_____
###Markdown
Model
###Code
class Model(nn.Module):
def __init__(self, num_features, num_targets, hidden_size):
super(Model, self).__init__()
self.batch_norm1 = nn.BatchNorm1d(num_features)
self.dropout1 = nn.Dropout(0.2)
self.dense1 = nn.utils.weight_norm(nn.Linear(num_features, hidden_size))
self.batch_norm2 = nn.BatchNorm1d(hidden_size)
self.dropout2 = nn.Dropout(0.5)
self.dense2 = nn.utils.weight_norm(nn.Linear(hidden_size, hidden_size))
self.batch_norm3 = nn.BatchNorm1d(hidden_size)
self.dropout3 = nn.Dropout(0.5)
self.dense3 = nn.utils.weight_norm(nn.Linear(hidden_size, num_targets))
def forward(self, x):
x = self.batch_norm1(x)
x = self.dropout1(x)
x = F.relu(self.dense1(x))
x = self.batch_norm2(x)
x = self.dropout2(x)
x = F.relu(self.dense2(x))
x = self.batch_norm3(x)
x = self.dropout3(x)
x = self.dense3(x)
return x
###Output
_____no_output_____
###Markdown
Preprocessing steps
###Code
def process_data(data):
data = pd.get_dummies(data, columns=['cp_time','cp_dose'])
# data.loc[:, 'cp_time'] = data.loc[:, 'cp_time'].map({24: 0, 48: 1, 72: 2})
# data.loc[:, 'cp_dose'] = data.loc[:, 'cp_dose'].map({'D1': 0, 'D2': 1})
# --------------------- Normalize ---------------------
# for col in GENES:
# data[col] = (data[col]-np.mean(data[col])) / (np.std(data[col]))
# for col in CELLS:
# data[col] = (data[col]-np.mean(data[col])) / (np.std(data[col]))
#--------------------- Removing Skewness ---------------------
# for col in GENES + CELLS:
# if(abs(data[col].skew()) > 0.75):
# if(data[col].skew() < 0): # neg-skewness
# data[col] = data[col].max() - data[col] + 1
# data[col] = np.sqrt(data[col])
# else:
# data[col] = np.sqrt(data[col])
return data
feature_cols = [c for c in process_data(folds).columns if c not in target_cols]
feature_cols = [c for c in feature_cols if c not in ['kfold','sig_id']]
len(feature_cols)
# HyperParameters
DEVICE = ('cuda' if torch.cuda.is_available() else 'cpu')
EPOCHS = 25
BATCH_SIZE = 128
LEARNING_RATE = 1e-3
WEIGHT_DECAY = 1e-5
NFOLDS = 5
EARLY_STOPPING_STEPS = 10
EARLY_STOP = False
num_features=len(feature_cols)
num_targets=len(target_cols)
hidden_size=1024
###Output
_____no_output_____
###Markdown
Single fold training
###Code
def run_training(fold, seed):
seed_everything(seed)
train = process_data(folds)
test_ = process_data(test)
trn_idx = train[train['kfold'] != fold].index
val_idx = train[train['kfold'] == fold].index
train_df = train[train['kfold'] != fold].reset_index(drop=True)
valid_df = train[train['kfold'] == fold].reset_index(drop=True)
x_train, y_train = train_df[feature_cols].values, train_df[target_cols].values
x_valid, y_valid = valid_df[feature_cols].values, valid_df[target_cols].values
train_dataset = MoADataset(x_train, y_train)
valid_dataset = MoADataset(x_valid, y_valid)
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
validloader = torch.utils.data.DataLoader(valid_dataset, batch_size=BATCH_SIZE, shuffle=False)
model = Model(
num_features=num_features,
num_targets=num_targets,
hidden_size=hidden_size,
)
model.to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY)
scheduler = optim.lr_scheduler.OneCycleLR(optimizer=optimizer, pct_start=0.1, div_factor=1e3,
max_lr=1e-2, epochs=EPOCHS, steps_per_epoch=len(trainloader))
loss_fn = nn.BCEWithLogitsLoss()
early_stopping_steps = EARLY_STOPPING_STEPS
early_step = 0
oof = np.zeros((len(train), target.iloc[:, 1:].shape[1]))
best_loss = np.inf
for epoch in range(EPOCHS):
train_loss = train_fn(model, optimizer,scheduler, loss_fn, trainloader, DEVICE)
print(f"FOLD: {fold}, EPOCH: {epoch}, train_loss: {train_loss}")
valid_loss, valid_preds = valid_fn(model, loss_fn, validloader, DEVICE)
print(f"FOLD: {fold}, EPOCH: {epoch}, valid_loss: {valid_loss}")
if valid_loss < best_loss:
best_loss = valid_loss
oof[val_idx] = valid_preds
torch.save(model.state_dict(), f"FOLD{fold}_.pth")
elif(EARLY_STOP == True):
early_step += 1
if (early_step >= early_stopping_steps):
break
#--------------------- PREDICTION---------------------
x_test = test_[feature_cols].values
testdataset = TestDataset(x_test)
testloader = torch.utils.data.DataLoader(testdataset, batch_size=BATCH_SIZE, shuffle=False)
model = Model(
num_features=num_features,
num_targets=num_targets,
hidden_size=hidden_size,
)
model.load_state_dict(torch.load(f"FOLD{fold}_.pth"))
model.to(DEVICE)
predictions = np.zeros((len(test_), target.iloc[:, 1:].shape[1]))
predictions = inference_fn(model, testloader, DEVICE)
return oof, predictions
def run_k_fold(NFOLDS, seed):
oof = np.zeros((len(train), len(target_cols)))
predictions = np.zeros((len(test), len(target_cols)))
for fold in range(NFOLDS):
oof_, pred_ = run_training(fold, seed)
predictions += pred_ / NFOLDS
oof += oof_
return oof, predictions
# Averaging on multiple SEEDS
SEED = [0, 1, 2, 3 ,4, 5]
oof = np.zeros((len(train), len(target_cols)))
predictions = np.zeros((len(test), len(target_cols)))
for seed in SEED:
oof_, predictions_ = run_k_fold(NFOLDS, seed)
oof += oof_ / len(SEED)
predictions += predictions_ / len(SEED)
train[target_cols] = oof
test[target_cols] = predictions
# test['atp-sensitive_potassium_channel_antagonist'] = 0.0
# test['erbb2_inhibitor'] = 0.0
# train['atp-sensitive_potassium_channel_antagonist'] = 0.0
# train['erbb2_inhibitor'] = 0.0
train_targets_scored
len(target_cols)
valid_results = train_targets_scored.drop(columns=target_cols).merge(train[['sig_id']+target_cols], on='sig_id', how='left').fillna(0)
y_true = train_targets_scored[target_cols].values
y_pred = valid_results[target_cols].values
score = 0
for i in range(len(target_cols)):
score_ = log_loss(y_true[:, i], y_pred[:, i])
score += score_ / target.shape[1]
print("CV log_loss: ", score)
sub = sample_submission.drop(columns=target_cols).merge(test[['sig_id']+target_cols], on='sig_id', how='left').fillna(0)
sub.to_csv('submission.csv', index=False)
sub.shape
###Output
_____no_output_____ |
tutorial-ja/012_ising_ja.ipynb | ###Markdown
Ising gates Ising gates are gates that rotate two qubits at the same time. What we will learn 1. The Rxx, Ryy, and Rzz gates 2. Building a circuit Installing Blueqat We install Blueqat from pip.
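For reference, these two-qubit rotations are usually defined as follows (standard textbook convention; the sign and ordering conventions can differ between libraries, so this is not taken from the Blueqat documentation):
$$R_{xx}(\theta) = e^{-i\frac{\theta}{2}X\otimes X},\qquad R_{yy}(\theta) = e^{-i\frac{\theta}{2}Y\otimes Y},\qquad R_{zz}(\theta) = e^{-i\frac{\theta}{2}Z\otimes Z}$$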
###Code
!pip install blueqat
###Output
Requirement already satisfied: blueqat in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (0.3.14)
Requirement already satisfied: scipy>=1.1.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from blueqat) (1.1.0)
Requirement already satisfied: numpy~=1.12 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from blueqat) (1.14.6)
numba 0.49.0 has requirement numpy>=1.15, but you'll have numpy 1.14.6 which is incompatible.
You are using pip version 10.0.1, however version 20.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
###Markdown
Rxx, Ryy, Rzz The Rxx, Ryy, and Rzz gates can be applied as follows.| Rxx | Ryy | Rzz ||:-:|:-:|:-:||`rxx(θ)`|`ryy(θ)`|`rzz(θ)`| RXX gate
###Code
from blueqat import Circuit
import math
Circuit(2).rxx(math.pi/2)[0,1].m[:].run(shots=100)
###Output
_____no_output_____
###Markdown
RYY gate
###Code
from blueqat import Circuit
import math
Circuit(2).ryy(math.pi/2)[0,1].m[:].run(shots=100)
###Output
_____no_output_____
###Markdown
RZZ gate
###Code
from blueqat import Circuit
import math
Circuit().rzz(math.pi/2)[0,1].m[:].run(shots=100)
###Output
_____no_output_____ |
Exercicios/Exercicio 3/9796078_Murilo_Trevisan_Lista_de_Exercicio_3_SEL0378.ipynb | ###Markdown
Exercise List 3 Introduction to Computer Vision (SEL0339/SEL5886)**Instructions:** 1. This list consists of 4 exercises. 1. Comments must be added to the developed code. 1. The questions must also be answered as comments in the file. 1. Put your name and USP number below. 1. For any problem while running the lists, contact the monitors. 1. Once the exercises are finished, a file with the **.ipynb extension** must be generated and sent to the professor through the course's E-DISCIPLINAS page by the final deadline. 1. If it is not sent, the student will receive no grade.--- Run on Google Colab View source code on GitHub `Nome: Murilo Henrique Pasini Trevisan ``Número USP: 9796078 ` Introduction: In this exercise list we will study histograms, point-wise intensity transformations, histogram equalization, low-pass and high-pass filters, and border-pixel processing. First, let's import the libraries we are going to use:
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2 as cv
import os
from scipy.io import loadmat
from IPython.display import HTML
from base64 import b64encode
###Output
_____no_output_____
###Markdown
**Attention**: the code below downloads the images required for this practice. RUN IT!
###Code
import urllib.request
try:
urllib.request.urlretrieve("https://github.com/LAVI-USP/SEL0339-SEL5886_2021/blob/main/imagens/pratica_03/fotografo.tif?raw=true", "fotografo.tif")
except:
print("[ERRO] Não foi possível fazer o download das imagens dessa prática. Entre em contato com o monitor")
try:
urllib.request.urlretrieve("https://github.com/LAVI-USP/SEL0339-SEL5886_2021/blob/main/imagens/pratica_03/polem_baixo_contraste.bmp?raw=true", "polem_baixo_contraste.bmp")
except:
print("[ERRO] Não foi possível fazer o download das imagens dessa prática. Entre em contato com o monitor")
try:
urllib.request.urlretrieve("https://github.com/LAVI-USP/SEL0339-SEL5886_2021/blob/main/imagens/pratica_03/palavrascruzadas.tif?raw=true", "palavrascruzadas.tif")
except:
print("[ERRO] Não foi possível fazer o download das imagens dessa prática. Entre em contato com o monitor")
try:
urllib.request.urlretrieve("https://github.com/LAVI-USP/SEL0339-SEL5886_2021/blob/main/imagens/pratica_03/mriphantom.tif?raw=true", "mriphantom.tif")
except:
print("[ERRO] Não foi possível fazer o download das imagens dessa prática. Entre em contato com o monitor")
try:
urllib.request.urlretrieve("https://github.com/LAVI-USP/SEL0339-SEL5886_2021/blob/main/imagens/pratica_03/armadura.tif?raw=true", "armadura.tif")
except:
print("[ERRO] Não foi possível fazer o download das imagens dessa prática. Entre em contato com o monitor")
try:
urllib.request.urlretrieve("https://github.com/LAVI-USP/SEL0339-SEL5886_2021/blob/main/imagens/pratica_03/pontos.tif?raw=true", "pontos.tif")
except:
print("[ERRO] Não foi possível fazer o download das imagens dessa prática. Entre em contato com o monitor")
try:
urllib.request.urlretrieve("https://github.com/LAVI-USP/SEL0339-SEL5886_2021/blob/main/imagens/pratica_03/board_ruido.tif?raw=true", "board_ruido.tif")
except:
print("[ERRO] Não foi possível fazer o download das imagens dessa prática. Entre em contato com o monitor")
###Output
[ERRO] Não foi possível fazer o download das imagens dessa prática. Entre em contato com o monitor
[ERRO] Não foi possível fazer o download das imagens dessa prática. Entre em contato com o monitor
[ERRO] Não foi possível fazer o download das imagens dessa prática. Entre em contato com o monitor
###Markdown
1) Histogram visualization So, what is a histogram? You can think of the histogram as a graph or plot that gives an overall idea of the distribution of pixel intensities in an image. It is a plot with the pixel intensity values (ranging from 0 to 255 when quantization is done with 8 bits) on the X axis and the number of pixels in the image with the corresponding intensity on the Y axis. It is just another way of understanding the image. By looking at the histogram of an image, you get an intuition about its contrast, brightness, intensity distribution, etc. Almost every image processing tool today offers histogram features. Figure 1: Caption. Reference: Histograms - OpenCV.**Exercise:**1. Show the image ```fotografo.tif``` and its histogram with different numbers of *bins*. Use ```bins=80``` and ```bins=40``` and **comment on the results**.*Hint:* Use the function [plt.hist](https://matplotlib.org/3.3.1/api/_as_gen/matplotlib.pyplot.hist.html). *Ex:*``` pythonplt.hist(myImg.flatten(),bins=XX,density=False,range=(0,255))```
###Code
## -- Seu código começa AQUI -- ##
# Leitura da imagem
foto = cv.imread("fotografo.tif")
# apresentação da imagem
plt.imshow(foto)
#plt.figure(figsize = (5,5))
#histograma10 = plt.hist( foto.flatten(), bins=10, range=(0,255))
# Plot do histograma com 40 divisões no eixo x
plt.figure(figsize = (5,5))
histograma40 = plt.hist( foto.flatten(), bins=40, range=(0,255))
# Plot do histograma com 80 divisões no eixo x
plt.figure(figsize = (5,5))
histograma80 = plt.hist( foto.flatten(), bins=80, range=(0,255))
#plt.figure(figsize = (5,5))
#histograma255 = plt.hist( foto.flatten(), bins=255, range=(0,255))
# In the plots below we can see how the number of pixels is distributed over the
# gray levels of the image: the x axis holds the gray-level value, ranging from
# 0 to 255, and the y axis holds the number of pixels at that level.
# When we use fewer bins than the total number of levels in the image, nearby
# values are grouped together so that the histogram keeps only the number of
# bins passed to the function, which here is 40 and 80.
## -- Seu código termina AQUI -- ##
###Output
_____no_output_____
###Markdown
2) Intensity transformations Spatial-domain processing techniques operate directly on the image pixels. The general expression for the gray-level transformation function can be written as: $$g(x,y) = T[f(x,y)],$$ where $f(x,y)$ is the input image and $g(x,y)$ is the output, or processed, image. $T$ is an operator on $f$.2.1) Linear transformation: One example of a transformation function is the linear one, such that:$$g(x,y) = c \times f(x,y) + b,$$ where $c$ is a constant that controls the contrast and $b$ the brightness.**Exercise:**1. Apply a linear transformation to the image ```polem_baixo_contraste.bmp``` so as to stretch its histogram over the full range of ```uint8``` values. That is, find values of $c$ and $b$ so that the image covers the range 0 to 255.2. Show the images and the corresponding histograms before and after the transformation. Remember to set the display limits of the image to the full range in order to see the stretching effect. **Comment on the result.**3. From the previous result, apply a linear transformation to the same image in order to produce its negative. Show the image.
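A generic way to derive $c$ and $b$ from the image itself (a min–max stretching sketch, independent of the hand-tuned constants used in the solution below):
```python
import numpy as np

def stretch_to_uint8(f):
    """Linearly map the range [f.min(), f.max()] onto [0, 255]."""
    f = f.astype(np.float64)
    f_min, f_max = f.min(), f.max()
    c = 255.0 / (f_max - f_min)   # contrast gain
    b = -c * f_min                # brightness offset
    return np.uint8(c * f + b)
```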
###Code
## -- Seu código começa AQUI -- ##
# Importação da imagem usando OpenCV
polem = cv.imread("polem_baixo_contraste.bmp")
# Visualização da imagem
plt.figure(figsize = (5,5))
plt.imshow(polem)
# Visualização do histograma da imagem
plt.figure(figsize = (5,5))
HistogramaBaixa = plt.hist(polem.flatten(), bins = 255, range=(0,255))
# Alteração de brilho
polem2 = polem - 125
plt.figure(figsize = (5,5))
plt.imshow(polem2)
plt.figure(figsize = (5,5))
HistogramaBrilho = plt.hist(polem2.flatten(), bins = 255, range=(0,255))
# Alargamento de contraste
polem3 = polem2 * (255/30)
polem3 = polem3.astype('uint8') # correção para tipo inteiro na imagem
plt.figure(figsize = (5,5))
plt.imshow(polem3) # mostra a nova imagem
plt.figure(figsize = (5,5))
HistogramaContrast = plt.hist(polem3.flatten(), bins = 255, range=(0,255)) # mostra o novo histograma
## -- Seu código termina AQUI -- ##
###Output
_____no_output_____
###Markdown
2.2) Non-linear transformation: Now we will look at some non-linear transformations. The general formula of the logarithmic ($log$) transformation is:$$g(x,y) = c * log(f(x,y) + 1),$$ where $c$ is a constant. Figure 2 illustrates this transformation, along with some other transformations already mentioned. Figure 2: Examples of point-wise transformations. Reference: Gonzalez and Woods, Digital Image Processing 3rd. The equation of the *gamma* transformation is:$$g(x,y) = c * f(x,y)^\gamma$$ where $c$ is also a constant. As with the logarithmic transformation, power-law curves with $\gamma$ values smaller than 1 map a narrow range of dark input values into a wider range of output values, with the opposite holding for higher input levels. Figure 3 shows the shapes of the curves for different *gamma* values. Figure 3: Curves with different *gamma* values. Reference: Gonzalez and Woods, Digital Image Processing 3rd.**Exercise:**1. Using the image ```mriphantom.tif```, apply the following non-linear transformations, finding the most suitable value for the constant c so that the gray levels span the whole intensity range for an 8-bit resolution:* ```G1 = np.uint8(c * np.log10(img + 1.0))```* ```G2 = np.uint8(c * (img ** 0.28))```2. Show the images and the resulting histograms for each of the transformations above.3. **Comment on the results** found for each of them, explaining what the transformation did to the gray levels of the image in terms of contrast and brightness.
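A generic way to choose $c$ so the output spans the full 8-bit range (a sketch based on the image maximum; the constants used in the solution below were tuned by hand):
```python
import numpy as np

def full_range_constants(img, gamma=0.28):
    """Return c for the log10 and gamma transforms so the output peaks at 255."""
    m = float(img.max())
    c_log = 255.0 / np.log10(m + 1.0)   # for g = c * log10(f + 1)
    c_gamma = 255.0 / (m ** gamma)      # for g = c * f ** gamma
    return c_log, c_gamma
```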
###Code
## -- Seu código começa AQUI -- ##
# recebendo a imagem
phantom = cv.imread('mriphantom.tif')
# plot the original image and its histogram
plt.figure(figsize = (5,5))
plt.imshow(phantom)
plt.figure(figsize = (5,5))
HistogramaBaixa = plt.hist(phantom.flatten(), bins = 255, range=(0,255))
# Importando as transformações não lineares
c1 = 250
c2 = 260
G1 = np.uint8(c1 * np.log10(phantom + 1.0))
G2 = np.uint8(c2 * (phantom ** 0.28))
# Plotando a transformação log
plt.figure(figsize = (5,5))
plt.imshow(G1)
plt.figure(figsize = (5,5))
HistogramaBaixa = plt.hist(G1.flatten(), bins = 255, range=(0,255))
# Plot the gamma transformation
plt.figure(figsize = (5,5))
plt.imshow(G2)
plt.figure(figsize = (5,5))
HistogramaBaixa = plt.hist(G2.flatten(), bins = 255, range=(0,255))
# Both transformations gave similar results, as expected from the plot of the
# non-linear transformations shown above: gamma transformations with exponents
# smaller than 1 resemble log transformations. What the transformation did was
# to increase the contrast in the region holding most of the pixels, which was
# the low gray-level region, as can be seen in the first histogram.
# The two following histograms show the contrast now spanning the whole range
# of gray levels and, as seen in the images, it is now possible to visually
# identify the image that was hidden, since the original gray-level variation
# was too small to be perceptible to our eyes.
## -- Seu código termina AQUI -- ##
###Output
_____no_output_____
###Markdown
3) Histogram equalization**Exercise:**1. Perform histogram equalization of the image ```polem_baixo_contraste.bmp``` using OpenCV's [cv.equalizeHist](https://docs.opencv.org/2.4/modules/imgproc/doc/histograms.html?highlight=equalizehistequalizehist) function.2. Show the images and the corresponding histograms (before and after equalization). **Comment on the results.** Do you notice any difference compared with the contrast stretching previously applied to the same image?
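For reference, histogram equalization maps each gray level through the scaled cumulative distribution of the image (textbook formulation, e.g. Gonzalez & Woods; OpenCV implements essentially this for 8-bit images): $$s_k = (L-1)\sum_{j=0}^{k} p_r(r_j) = \frac{L-1}{MN}\sum_{j=0}^{k} n_j,$$ where $L$ is the number of gray levels, $MN$ the number of pixels, and $n_j$ the histogram count of level $j$.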
###Code
## -- Seu código começa AQUI -- ##
img4 = cv.imread('polem_baixo_contraste.bmp', 0)
# Visualização da imagem
plt.figure(figsize = (5,5))
plt.imshow(img4, 'gray')
# Visualização do histograma da imagem
plt.figure(figsize = (5,5))
HistogramaBaixa = plt.hist(img4.flatten(), bins = 255, range=(0,255))
# Equalization with OpenCV
img = img4.copy()
img = cv.equalizeHist(img)
# Plot da imagem equalizada
plt.figure(figsize = (5,5))
plt.imshow(img, 'gray')
# Plot do histograma da imagem equalizada
plt.figure(figsize = (5,5))
HistogramaBaixa = plt.hist(img.flatten(), bins = 255, range=(0,255))
# The equalization can be seen by looking at the histogram of the new image:
# the pixels are now spread evenly across the gray scale
## -- Seu código termina AQUI -- ##
###Output
_____no_output_____
###Markdown
4) Binarization**Exercise:**1. Visualize the histogram of the image `palavrascruzadas.tif`, define a *threshold* for binarization, and apply the transformation to obtain a binary image. The goal is to separate, as much as possible, what is a tile from what is a letter or the image background.2. Show the resulting binarized image. 3. **Comment on the results**.*Hint:* You can use the function [cv.threshold](https://docs.opencv.org/master/d7/d1b/group__imgproc__misc.htmlgae8a4a146d1ca78c626a53577199e9c57) - [examples here](https://docs.opencv.org/master/d7/d4d/tutorial_py_thresholding.html) or regular *Python* programming techniques. Use both to compare the results.*Ex:*``` pythoncv.threshold(myImg, threshold, 255, cv.THRESH_BINARY)```
###Code
#Visualização de histograma
## -- Seu código começa AQUI -- ##
# Importação das imagens
palavras = cv.imread('palavrascruzadas.tif')
# Plot da imagem e seu histograma
plt.figure(figsize = (5,5))
plt.imshow(palavras)
plt.figure(figsize = (5,5))
HistogramaBaixa = plt.hist(palavras.flatten(), bins = 255, range=(0,255))
## -- Seu código termina AQUI -- ##
#Binarização
#@title Threshold para binarização{ run: "auto" }
threshold = 158 #@param {type:"slider", min:0, max:255, step:1}
## -- Seu código começa AQUI -- ##
# threshold da imagem
ret,binary = cv.threshold(palavras, threshold, 255, cv.THRESH_BINARY)
# Plot da imagem
plt.figure(figsize = (5,5))
plt.imshow(binary, 'gray')
# Plot do histograma
plt.figure(figsize = (5,5))
HistogramaBaixa = plt.hist(binary.flatten(), bins = 255, range=(0,255))
## -- Seu código termina AQUI -- ##
###Output
_____no_output_____ |
Tensorflow_Professional_Certificate/C3-NLP/C3_W2_Assignment.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
**Note:** This notebook can run using TensorFlow 2.5.0
###Code
#!pip install tensorflow==2.5.0
import csv
import tensorflow as tf
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
# bbc-text.csv
!gdown --id 1rX10xeI3eUJmOLsc4pOPY6AnCLO8DxNj
vocab_size = 1000# YOUR CODE HERE
embedding_dim = 16# YOUR CODE HERE
max_length = 200# YOUR CODE HERE
trunc_type = 'post'# YOUR CODE HERE
padding_type = 'post'# YOUR CODE HERE
oov_tok = '<OOV>'# YOUR CODE HERE
training_portion = .8
sentences = []
labels = []
stopwords = [ "a", "about", "above", "after", "again", "against", "all", "am", "an", "and", "any", "are", "as", "at", "be", "because", "been", "before", "being", "below", "between", "both", "but", "by", "could", "did", "do", "does", "doing", "down", "during", "each", "few", "for", "from", "further", "had", "has", "have", "having", "he", "he'd", "he'll", "he's", "her", "here", "here's", "hers", "herself", "him", "himself", "his", "how", "how's", "i", "i'd", "i'll", "i'm", "i've", "if", "in", "into", "is", "it", "it's", "its", "itself", "let's", "me", "more", "most", "my", "myself", "nor", "of", "on", "once", "only", "or", "other", "ought", "our", "ours", "ourselves", "out", "over", "own", "same", "she", "she'd", "she'll", "she's", "should", "so", "some", "such", "than", "that", "that's", "the", "their", "theirs", "them", "themselves", "then", "there", "there's", "these", "they", "they'd", "they'll", "they're", "they've", "this", "those", "through", "to", "too", "under", "until", "up", "very", "was", "we", "we'd", "we'll", "we're", "we've", "were", "what", "what's", "when", "when's", "where", "where's", "which", "while", "who", "who's", "whom", "why", "why's", "with", "would", "you", "you'd", "you'll", "you're", "you've", "your", "yours", "yourself", "yourselves" ]
print(len(stopwords))
# Expected Output
# 153
with open("./bbc-text.csv", 'r') as csvfile:
### START CODE HERE
header = csvfile.readline()
line = csvfile.readline()
while line!='':
l, s = line.split(',')
labels.append(l)
sentences.append(s)
line = csvfile.readline()
### END CODE HERE
print(len(labels))
print(len(sentences))
print(sentences[0])
# Expected Output
# 2225
# 2225
# tv future hands viewers home theatre systems plasma high-definition tvs digital video recorders moving living room way people watch tv will radically different five years time. according expert panel gathered annual consumer electronics show las vegas discuss new technologies will impact one favourite pastimes. us leading trend programmes content will delivered viewers via home networks cable satellite telecoms companies broadband service providers front rooms portable devices. one talked-about technologies ces digital personal video recorders (dvr pvr). set-top boxes like us s tivo uk s sky+ system allow people record store play pause forward wind tv programmes want. essentially technology allows much personalised tv. also built-in high-definition tv sets big business japan us slower take off europe lack high-definition programming. not can people forward wind adverts can also forget abiding network channel schedules putting together a-la-carte entertainment. us networks cable satellite companies worried means terms advertising revenues well brand identity viewer loyalty channels. although us leads technology moment also concern raised europe particularly growing uptake services like sky+. happens today will see nine months years time uk adam hume bbc broadcast s futurologist told bbc news website. likes bbc no issues lost advertising revenue yet. pressing issue moment commercial uk broadcasters brand loyalty important everyone. will talking content brands rather network brands said tim hanlon brand communications firm starcom mediavest. reality broadband connections anybody can producer content. added: challenge now hard promote programme much choice. means said stacey jolna senior vice president tv guide tv group way people find content want watch simplified tv viewers. means networks us terms channels take leaf google s book search engine future instead scheduler help people find want watch. kind channel model might work younger ipod generation used taking control gadgets play them. might not suit everyone panel recognised. older generations comfortable familiar schedules channel brands know getting. perhaps not want much choice put hands mr hanlon suggested. end kids just diapers pushing buttons already - everything possible available said mr hanlon. ultimately consumer will tell market want. 50 000 new gadgets technologies showcased ces many enhancing tv-watching experience. high-definition tv sets everywhere many new models lcd (liquid crystal display) tvs launched dvr capability built instead external boxes. one example launched show humax s 26-inch lcd tv 80-hour tivo dvr dvd recorder. one us s biggest satellite tv companies directtv even launched branded dvr show 100-hours recording capability instant replay search function. set can pause rewind tv 90 hours. microsoft chief bill gates announced pre-show keynote speech partnership tivo called tivotogo means people can play recorded programmes windows pcs mobile devices. reflect increasing trend freeing multimedia people can watch want want.
# Improve this by shuffling the training data
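# A possible improvement (sketch, left commented out so the expected outputs
# below still match): shuffle sentences and labels together before splitting.
# rng = np.random.default_rng(42)
# perm = rng.permutation(len(sentences))
# sentences = [sentences[i] for i in perm]
# labels = [labels[i] for i in perm]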
train_size = int(len(sentences)*training_portion)
train_sentences = sentences[0:train_size]
train_labels = labels[0:train_size]
validation_sentences = sentences[train_size:]
validation_labels = labels[train_size:]
print(train_size)
print(len(train_sentences))
print(len(train_labels))
print(len(validation_sentences))
print(len(validation_labels))
# Expected output (if training_portion=.8)
# 1780
# 1780
# 1780
# 445
# 445
tokenizer = Tokenizer(num_words=vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(train_sentences)
word_index = tokenizer.word_index
# Map stop words to the OOV index (they are not truly removed from the vocab)
oov_idx = tokenizer.word_index[oov_tok]
for word in stopwords:
if word in tokenizer.word_index:
tokenizer.word_index[word] = oov_idx
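# Note (sketch): the loop above maps each stop word's index to the OOV id, so
# texts_to_sequences() will emit the OOV token for them; the words are not
# actually deleted from word_index. An alternative would be to strip stop words
# from the sentences before fit_on_texts, e.g.:
# train_sentences = [' '.join(w for w in s.split() if w not in stopwords)
#                    for s in train_sentences]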
print(f"Vocab size: {len(tokenizer.word_index)}")
train_sequences = tokenizer.texts_to_sequences(train_sentences)
train_padded = pad_sequences(train_sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
print(len(train_sequences[0]))
print(len(train_padded[0]))
print(len(train_sequences[1]))
print(len(train_padded[1]))
print(len(train_sequences[10]))
print(len(train_padded[10]))
# Expected Output
# 449
# 120
# 200
# 120
# 192
# 120
tokenizer.num_words
validation_sequences = tokenizer.texts_to_sequences(validation_sentences)
validation_padded = pad_sequences(validation_sequences, maxlen=max_length, padding=padding_type, truncating=trunc_type)
print(len(validation_sequences))
print(validation_padded.shape)
# Expected output
# 445
# (445, 120)
label_tokenizer = Tokenizer(num_words=len(np.unique(labels))+1)
label_tokenizer.fit_on_texts(labels)
#training_label_seq = np.array(label_tokenizer.texts_to_sequences(train_labels))[:, np.newaxis]
training_label_seq = np.array(label_tokenizer.texts_to_sequences(train_labels))
#validation_label_seq = np.array(label_tokenizer.texts_to_sequences(validation_labels))[:, np.newaxis]
validation_label_seq = np.array(label_tokenizer.texts_to_sequences(validation_labels))
print(training_label_seq[0])
print(training_label_seq[1])
print(training_label_seq[2])
print(training_label_seq.shape)
print(validation_label_seq[0])
print(validation_label_seq[1])
print(validation_label_seq[2])
print(validation_label_seq.shape)
# Expected output
# [4]
# [2]
# [1]
# (1780, 1)
# [5]
# [4]
# [3]
# (445, 1)
# QUESTION: why is the output layer of size 6 when there are only 5 labels?
# ANSWER: label_tokenizer indexes the 5 labels from 1 to 5 (index 0 is reserved),
# so sparse_categorical_crossentropy needs 6 output units to cover indices 0-5.
model = tf.keras.Sequential([
tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_dim, input_length=max_length),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(24, activation='relu'),
tf.keras.layers.Dense(label_tokenizer.num_words, activation='sigmoid')
])
model.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
# Expected Output
# Layer (type) Output Shape Param #
# =================================================================
# embedding (Embedding) (None, 120, 16) 16000
# _________________________________________________________________
# global_average_pooling1d (Gl (None, 16) 0
# _________________________________________________________________
# dense (Dense) (None, 24) 408
# _________________________________________________________________
# dense_1 (Dense) (None, 6) 150
# =================================================================
# Total params: 16,558
# Trainable params: 16,558
# Non-trainable params: 0
num_epochs = 30
history = model.fit(x=train_padded, y=training_label_seq,
epochs=num_epochs,
validation_data=(validation_padded, validation_label_seq),
batch_size=32,
verbose=2,
)
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
plot_graphs(history, "accuracy")
plot_graphs(history, "loss")
unique_tokens = set(tokenizer.word_index.values())
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_sentence(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
unique_tokens = set(tokenizer.word_index.values())
len(tokenizer.index_word.keys())
len(unique_tokens)
e = model.layers[0]
weights = e.get_weights()[0]
print(weights.shape) # shape: (vocab_size, embedding_dim)
# Expected output
# (1000, 16)
len(reverse_word_index.keys())
import io
out_v = io.open('vecs.tsv', 'w', encoding='utf-8')
out_m = io.open('meta.tsv', 'w', encoding='utf-8')
for word_num in range(1, tokenizer.num_words):
word = tokenizer.index_word[word_num]
embeddings = weights[word_num]
out_m.write(word + "\n")
out_v.write('\t'.join([str(x) for x in embeddings]) + "\n")
out_v.close()
out_m.close()
try:
from google.colab import files
except ImportError:
pass
else:
files.download('vecs.tsv')
files.download('meta.tsv')
# Note: these files did not load correctly on http://projector.tensorflow.org/ (reason unknown)
###Output
_____no_output_____ |
non_root_examples/01_uproot.ipynb | ###Markdown
Import the libraries
###Code
import uproot
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
- `%matplotlib inline` is a notebook magic that renders plots inline so they are saved together with the notebook
###Code
path = "../Data/"
filename = path + "higgs_small.root"
###Output
_____no_output_____
###Markdown
- Now we will open a ROOT file containing signal and background Higgs events
###Code
rt_file = uproot.open(filename)
###Output
_____no_output_____
###Markdown
We can see the trees in the file with:
###Code
rt_file.keys()
###Output
_____no_output_____
###Markdown
Let's print the branches of each tree
###Code
# TBranches of TTrees are also presented as dicts.
for tree in rt_file.keys():
print(rt_file[tree].keys())
###Output
[b'lepton_pT', b'lepton_eta', b'lepton_phi', b'missing_energy_magnitude', b'missing_energy_phi', b'jet_1_pt', b'jet_1_eta', b'jet_1_phi', b'jet_1_b_tag', b'jet_2_pt', b'jet_2_eta', b'jet_2_phi', b'jet_2_b_tag', b'jet_3_pt', b'jet_3_eta', b'jet_3_phi', b'jet_3_b_tag', b'jet_4_pt', b'jet_4_eta', b'jet_4_phi', b'jet_4_b_tag', b'm_jj', b'm_jjj', b'm_lv', b'm_jlv', b'm_bb', b'm_wbb', b'm_wwbb']
[b'lepton_pT', b'lepton_eta', b'lepton_phi', b'missing_energy_magnitude', b'missing_energy_phi', b'jet_1_pt', b'jet_1_eta', b'jet_1_phi', b'jet_1_b_tag', b'jet_2_pt', b'jet_2_eta', b'jet_2_phi', b'jet_2_b_tag', b'jet_3_pt', b'jet_3_eta', b'jet_3_phi', b'jet_3_b_tag', b'jet_4_pt', b'jet_4_eta', b'jet_4_phi', b'jet_4_b_tag', b'm_jj', b'm_jjj', b'm_lv', b'm_jlv', b'm_bb', b'm_wbb', b'm_wwbb']
###Markdown
Let's pick one tree and one branch and try to plot it
###Code
lep_pt = rt_file[b'TreeS'][b'lepton_pT']
signal_tree = rt_file[b'TreeS']
lep_pt
###Output
_____no_output_____
###Markdown
Hmm, dealing with a TBranch object is not that useful; it would be better to have an array instead. The NumPy array will have the same dtype as specified by the branch's interpretation.
###Code
print(lep_pt.interpretation)
lep_pt_array = lep_pt.array()
# We could have used lep_pt = rt_file[b'TreeS'][b'lepton_pT'].array()
# and would have the same result
lep_pt_array
###Output
_____no_output_____
###Markdown
- We can see that when iterating through a branch, we actually iterate over the branch's baskets
###Code
for x in signal_tree.iterate([b'lepton_pT'],namedecode="utf-8"):
print(x['lepton_pT'].size)
###Output
3990
1306
###Markdown
These two numbers correspond to the size of each basket; in this case there are 2 baskets. --- We can also iterate event by event using lazyarray (here just the first 10 elements)
###Code
for i, pT in enumerate(signal_tree.lazyarray('lepton_pT')[:10]):
print(i,pT)
###Output
0 0.869293212890625
1 0.9075421094894408
2 0.7988347411155698
3 1.1050089597702026
4 0.40939134359359747
5 0.9338953495025634
6 1.4051437377929688
7 1.1765655279159546
8 0.9459739923477172
9 1.3840976953506468
###Markdown
--- We can even access multiple branches at once; `leptons` is a Python dictionary with a NumPy array for each variable
###Code
leptons = signal_tree.arrays(['lepton_pT', 'lepton_eta', 'lepton_phi'])
type(leptons[b'lepton_pT'])
###Output
_____no_output_____
###Markdown
--- Now let's see what branches the signal tree contains
###Code
signal_tree.show()
# Let's convert one branch into an array and plot it
m_bb = signal_tree.array('m_bb')
# m_bb will be a NumPy array with the same type as the branch, e.g. an 8-byte float,
# which means a 64-bit floating-point number
print(m_bb.dtype)
# Double checking the size of the array:
print(m_bb.shape)
# We can even make a histogram with matplotlib
plt.hist(m_bb, bins=20)
plt.xlabel('m_bb')
plt.title('Histogram from a branch')
plt.show()
###Output
float64
(5296,)
|
Project_3_reno.ipynb | ###Markdown
###Code
from sklearn.cluster import KMeans
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from sklearn.model_selection import train_test_split
import seaborn as sns
from mpl_toolkits.mplot3d import axes3d
pd.set_option('precision', 2)
from google.colab import drive
path = '/content/Plume_reno.csv'
df = pd.read_csv(path, header=0)
df['time'] = df['time'].astype(str)+ df['loc'].astype(str)
df['time'] = df['time'].astype(int)
df.drop('loc', inplace=True, axis=1)
df = df.groupby((['time','pol']), as_index=False)['conc'].mean()
#(['col5','col2'])
df['conc'].fillna(0, inplace=True)
df
df1 = df.pivot(index='time', columns='pol', values='conc')
df
df['pol'].replace({'Atenolol': 1, 'Atrazin': 2, 'Azithromycin':3, 'Benzotriazol':4, 'Bezafibrat':5, 'Carbamazepin':6, 'Carbendazim':7, 'Chloridazon':8, 'Ciprofloxacin':9, 'Clarithromycin':10, 'Clindamycin':11, 'Clofibric acid':12, 'Diclofenac':13, 'Diuron':14, 'Gabapentin':15, 'Gemfibrocil':16, 'Iopamidol':17, 'IPBC':18, 'Irgarol':19, 'Isoproturon':20, 'Ketoprofen':21, 'Mecoprop':22, 'Mefenamic acid':23, 'Methylbenzotriazol':24, 'Metoprolol':25, 'Metronidazol':26, 'Naproxen':27, 'Norfloxacin':28, 'Ofloxacin':29, 'Paracetamol':30, 'Primidon':31,'Propiconazol':32, 'Propranolol':33, 'Simvastatin':34, 'Sotalol':35, 'Sulfamethoxazol':36, 'Terbutryn':37, 'Trimethoprim':38}, inplace=True)
df['pol'] = df['pol'].astype(float)
km = KMeans(n_clusters=2)
y_predicted = km.fit_predict(df)
df['cluster']=y_predicted
# split into features and target: predict the assigned cluster from the measurements
X = df.drop(columns=['cluster'])
y = df['cluster']
X_train, X_test, y_train, y_test = train_test_split(X, y)
# Classification report for a logistic regression trained to predict the cluster labels
from yellowbrick.classifier import ClassificationReport
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
visualizer = ClassificationReport(model)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.show()
from yellowbrick.cluster import KElbowVisualizer
model = KMeans()
X = df.drop(columns=['cluster'])  # KElbowVisualizer only needs the features
# k is range of number of clusters.
visualizer = KElbowVisualizer(model, k=(2,5),metric='silhouette', timings= True)
visualizer.fit(X) # Fit the data to the visualizer
visualizer.show() # Finalize and render the figure
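# A minimal additional check (sketch): compute the silhouette score directly with
# scikit-learn for the k=2 clustering assigned above. This assumes `df` still contains
# the feature columns plus the 'cluster' column added earlier.
from sklearn.metrics import silhouette_score
features = df.drop(columns=['cluster'])
print("Silhouette score (k=2):", silhouette_score(features, df['cluster']))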
###Output
_____no_output_____ |
examples/12_CNN_Based_QPE.ipynb | ###Markdown
Convolutional Neural Network for QPELet's test our QPE approach with CNN.
###Code
import os, csv, logging, argparse, glob, h5py, pickle
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.layers import Input, Dropout, Dense, Flatten, Activation
from tensorflow.keras.layers import Conv2D, BatchNormalization, MaxPooling2D
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.keras import regularizers
from tensorflow.keras import initializers
from tensorflow.keras.utils import normalize, to_categorical
# Parameters
nLayer = 6 # 6 10-min dbz for an hour
nY = 275 # y-dimension of dbz data
nX = 162 # x-dimension of dbz data
batchSize = 128 # Batch size for training / testing
prec_bins=[0, 0.5, 10, 15, 30, 1000]
yseg_stat = [0.5, 8, 13, 29] # 40-year statistics of hourly precipitation of trace, 90%, 95%, and 99% percentile
yseg = [0.5, 10, 15, 30] # Actual segmentation for precipitation
#-----------------------------------------------------------------------
# Functions
#-----------------------------------------------------------------------
# Load input/output data for model
def loadIOTab(srcx, srcy, dropna=False):
import pandas as pd
# Read raw input and output
#logging.info("Reading input X from: "+ srcx)
print("Reading input X from: "+ srcx)
xfiles = []
for root, dirs, files in os.walk(srcx):
for fn in files:
if fn.endswith('.npy'):
xfiles.append({'date':fn.replace('.npy',''), 'xuri':os.path.join(root, fn)})
xfiles = pd.DataFrame(xfiles)
print("... read input size: "+str(xfiles.shape))
#logging.info("Reading output Y from: "+ srcy)
print("Reading output Y from: "+ srcy)
yraw = pd.read_csv(srcy, encoding='utf-8')
yraw['date'] = yraw['date'].apply(str)
print("... read output size: "+str(yraw.shape))
# Create complete IO-data
print("Pairing X-Y and splitting training/testing data.")
iotab = pd.merge(yraw, xfiles, on='date', sort=True)
print("... data size after merging: "+str(iotab.shape))
    # Drop NAs if specified
if dropna:
print('Dropping records with NA')
iotab = iotab.dropna()
print("... data size after dropping-NAs: "+str(iotab.shape))
    # Generate weighted sampling
# Done
return(iotab)
def generate_equal_samples(iotab, prec_bins, ylab='y', shuffle=True):
    '''Create an equal-sampling list:
    repeatedly sample rare categories to match the frequency of the majority class.
'''
    # Analyze the precipitation distribution
prec_hist = np.histogram(iotab[ylab], bins=prec_bins)
maxc = np.max(prec_hist[0]) # Count the most frequent category
nrep = np.round(maxc/prec_hist[0]).astype(int) # Multiples required to reach max-count
# Categorize precipitation by specified bins
iotab['prec_cat'] = np.digitize(iotab[ylab], bins=prec_bins[1:-1])
logging.debug('Sample histogram before weighted sampling:')
logging.debug(iotab['prec_cat'].value_counts())
    # Repeat sampling per precipitation category
for icat in range(0,len(prec_bins)-1):
repeat_n = nrep[icat]
tmp = iotab.loc[iotab['prec_cat']==icat,:]
print('Append data category: '+str(icat)+' for '+ str(repeat_n) +' times with size '+str(tmp.shape))
for j in range(int(repeat_n)-1):
iotab = iotab.append(tmp, ignore_index=True)
logging.debug('Sample histogram after weighted sampling:')
logging.debug(iotab['prec_cat'].value_counts().sort_index())
# Shuffle new dataset if specified
if shuffle:
iotab = iotab.sample(frac=1)#.reset_index(drop=True)
#
return(iotab)
def loadDBZ(flist):
''' Load a list a dbz files (in npy format) into one numpy array. '''
xdata = []
for f in flist:
tmp = np.load(f)
xdata.append(tmp)
x = np.array(xdata, dtype=np.float32)
return(x)
# Function to give report
def report_evaluation(y_true, y_pred, verbose=0):
import sklearn.metrics as metrics
# Calculate measures
results = {}
results['y_true_mean'] = y_true.mean()
results['y_true_var'] = y_true.var()
results['y_pred_mean'] = y_pred.mean()
results['y_pred_var'] = y_pred.var()
results['rmse'] = np.sqrt(metrics.mean_squared_error(y_true,y_pred))
if y_pred.var()<=10e-8:
results['corr'] = 0
else:
results['corr'] = np.corrcoef(y_true,y_pred)[0,1]
# Print results if verbose > 0
if verbose>0:
if verbose>1:
print('Mean of y_true: ' + str(results['y_true_mean']))
print('Variance of y_true: ' + str(results['y_true_var']))
print('Mean of y_pred: ' + str(results['y_pred_mean']))
print('Variance of y_pred: ' + str(results['y_pred_var']))
print('RMSE: ' + str(results['rmse']))
print('Corr: ' + str(results['corr']))
# Return results
return(results)
# Create cross validation splits
def create_CV_splits(iotable, k=5, ysegs=None, ylab='y', shuffle=False):
from sklearn.model_selection import StratifiedKFold, KFold
# Index of each fold
cvidx_train = []
cvidx_test = []
# If segmentation of y is not specified, use simple KFold
if ysegs is None:
kf = KFold(n_splits=k, random_state=None, shuffle=shuffle)
for idx_train, idx_test in kf.split(iotable['xuri']):
cvidx_train.append(idx_train)
cvidx_test.append(idx_test)
else:
kf = StratifiedKFold(n_splits=k, random_state=None, shuffle=shuffle)
for idx_train, idx_test in kf.split(iotable['xuri'], np.digitize(iotable[ylab], ysegs)):
cvidx_train.append(idx_train)
cvidx_test.append(idx_test)
return((cvidx_train, cvidx_test))
# CNN
def init_model_reg(input_shape):
"""
:Return:
Newly initialized model (regression).
:param
int input_shape: The number of variables to use as input features.
"""
# Input layer
inputs = Input(shape=input_shape)
# blovk1: CONV -> CONV -> MaxPooling
x = Conv2D(filters=32, kernel_size=(3,3), activation='relu', name='block1_conv1', data_format='channels_first')(inputs)
x = BatchNormalization(axis=1)(x)
x = MaxPooling2D((2,2), name='block1_pool', data_format='channels_first')(x)
x = Dropout(0.25)(x)
# block2: CONV -> CONV -> MaxPooling
x = Conv2D(64, (3,3), activation='relu', name='block2_conv1',data_format='channels_first')(x)
x = BatchNormalization(axis=1)(x)
x = Conv2D(64, (3,3), activation='relu', name='block2_conv2',data_format='channels_first')(x)
x = BatchNormalization(axis=1)(x)
x = MaxPooling2D((2,2), name='block2_pool', data_format='channels_first')(x)
x = Dropout(0.25)(x)
# block3: CONV -> CONV -> MaxPooling
x = Conv2D(128, (3,3), activation='relu', name='block3_conv1',data_format='channels_first')(x)
x = BatchNormalization(axis=1)(x)
x = Conv2D(128, (3,3), activation='relu', name='block3_conv2',data_format='channels_first')(x)
x = BatchNormalization(axis=1)(x)
x = MaxPooling2D((2,2), name='block3_pool', data_format='channels_first')(x)
x = Dropout(0.25)(x)
# Output block: Flatten -> Dense -> Dense -> softmax output
x = Flatten()(x)
x = Dense(256, activation='relu', name='fc1')(x)
x = BatchNormalization(axis=1)(x)
x = Dropout(0.5)(x)
x = Dense(16, activation='relu', name='fc2')(x)
x = BatchNormalization(axis=1)(x)
# Output layer
out = Dense(1, activation='linear', name='main_output')(x)
# Initialize model
model = Model(inputs = inputs, outputs = out)
# Define compile parameters
adam = Adam(lr=0.01, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
#sgd = SGD(lr=0.01, momentum=1e-8, decay=0.001, nesterov=True)#, clipvalue=1.)
model.compile(loss='mse', optimizer=adam, metrics=['mae','cosine_similarity'])
return(model)
def y_to_log(y):
''' Convert the y to log(y+1). '''
ylog = np.log(y+1).astype(np.float32)
return(ylog)
def log_to_y(y):
''' Convert the predicted y in log-scale back to original scale. '''
yori = (np.exp(y.flatten())-1.0).astype(np.float32)
yori[yori<0.5] = 0. # Set the minimal values to 0.
return(yori)
def data_generator_reg(iotab, batch_size, ylab='y', logy=False):
''' Data generator for batched processing. '''
nSample = len(iotab)
y = np.array(iotab[ylab], dtype=np.float32).reshape(nSample, 1)
#print(y[:5])
# This line is just to make the generator infinite, keras needs that
while True:
batch_start = 0
batch_end = batch_size
while batch_start < nSample:
limit = min(batch_end, nSample)
X = loadDBZ(iotab['xuri'][batch_start:limit])
if logy:
Y = y_to_log(y[batch_start:limit])
else:
Y = y[batch_start:limit]/100.
#print(X.shape)
yield (X,Y) #a tuple with two numpy arrays with batch_size samples
batch_start += batch_size
batch_end += batch_size
# End of generator
###Output
_____no_output_____
###Markdown
Data Preparation
###Code
logging.basicConfig(level=logging.DEBUG)
srcx = '../dbz_2016070609/'
srcy = './data/1hrmax.csv'
# Create IO table
iotab = loadIOTab(srcx, srcy, dropna=True)
# Create weighted sampling rom IOdata
newiotab = generate_equal_samples(iotab, prec_bins=prec_bins, ylab='t1hr', shuffle=True)
#
print(iotab.shape)
print(newiotab.shape)
print(iotab['prec_cat'].value_counts().sort_index())
print(newiotab['prec_cat'].value_counts().sort_index())
tf.random.set_seed(11223)
model = init_model_reg((nLayer, nY, nX))
model.summary()
steps_train = np.ceil(newiotab.shape[0]/32)
hist = model.fit_generator(data_generator_reg(newiotab, 32, ylab='t1hr', logy=True), steps_per_epoch=steps_train, epochs=10, max_queue_size=32, verbose=0)
print(pd.DataFrame(hist.history))
steps_test = np.ceil(iotab.shape[0]/32)
y_pred = model.predict_generator(data_generator_reg(iotab, 32, ylab='t1hr', logy=True), steps=steps_test, verbose=0)
yt = iotab['t1hr']
print(iotab.shape)
print(y_pred.shape)
ys = pd.DataFrame({'y': yt, 'y_pred': y_pred.flatten()})
tmp = report_evaluation(yt, log_to_y(y_pred.flatten()))
print(tmp)
print(pd.DataFrame(hist.history))
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(ys['y'], 'b-', label='y_true')
plt.plot(ys['y_pred'], 'r--', label='y_pred')
plt.legend()
plt.show()
plt.plot(pd.DataFrame(hist.history))
plt.legend()
plt.show()
print(ys.iloc[:30,:])
###Output
_____no_output_____ |
05PolynomialRegressionAndModelGeneralization/05Validation-and-Cross-Validation.ipynb | ###Markdown
Validation and Cross-Validation
###Code
import numpy as np
from sklearn import datasets
digits = datasets.load_digits()
X = digits.data
y = digits.target
###Output
_____no_output_____
###Markdown
Testing train_test_split
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.4, random_state = 666)
from sklearn.neighbors import KNeighborsClassifier
best_k, best_p, best_score = 0, 0, 0
for k in range(2, 11):
for p in range(1, 6):
knn_clf = KNeighborsClassifier(weights="distance", n_neighbors=k, p=p)
knn_clf.fit(X_train, y_train)
score = knn_clf.score(X_test, y_test)
if score > best_score:
best_k, best_p, best_score = k, p, score
print("Best K =", best_k)
print("Best P =", best_p)
print("Best Score =", best_score)
###Output
Best K = 3
Best P = 4
Best Score = 0.9860917941585535
###Markdown
Using cross-validation
###Code
from sklearn.model_selection import cross_val_score
knn_clf = KNeighborsClassifier()
cross_val_score(knn_clf, X_train, y_train)
best_k, best_p, best_score = 0, 0, 0
for k in range(2, 11):
for p in range(1, 6):
knn_clf = KNeighborsClassifier(weights="distance", n_neighbors=k, p=p)
scores = cross_val_score(knn_clf, X_train, y_train)
score = np.mean(scores)
if score > best_score:
best_k, best_p, best_score = k, p, score
print("Best K =", best_k)
print("Best P =", best_p)
print("Best Score =", best_score)
best_knn_clf = KNeighborsClassifier(weights="distance", n_neighbors=2, p=2)
best_knn_clf.fit(X_train, y_train)
best_knn_clf.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Revisiting grid search
###Code
from sklearn.model_selection import GridSearchCV
param_grid = [
{
'weights': ['distance'],
'n_neighbors': [i for i in range(2, 11)],
'p': [i for i in range(1, 6)]
}
]
grid_search = GridSearchCV(knn_clf, param_grid, verbose=1)
grid_search.fit(X_train, y_train)
grid_search.best_score_
grid_search.best_params_
best_knn_clf = grid_search.best_estimator_
best_knn_clf.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
The cv parameter
###Code
cross_val_score(knn_clf, X_train, y_train, cv=5)
grid_search = GridSearchCV(knn_clf, param_grid, verbose=1, cv=5)
grid_search
###Output
_____no_output_____ |
Yandex data science/5/Week 1/wine.ipynb | ###Markdown
**Correctness verified on Python 3.7:** pandas 0.23.0, numpy 1.14.5, scipy 1.1.0, statsmodels 0.9.0. Australian wine sales. Monthly sales of Australian wine (in thousands of litres) are available from January 1980 through July 1995; the goal is to build a forecast for the next three years.
###Code
%pylab inline
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
import warnings
from itertools import product
def invboxcox(y,lmbda):
if lmbda == 0:
return(np.exp(y))
else:
return(np.exp(np.log(lmbda*y+1)/lmbda))
wine = pd.read_csv('monthly-australian-wine-sales.csv', sep=',', index_col=['month'], parse_dates=['month'], dayfirst=True)
wine.sales = wine.sales * 1000
plt.figure(figsize(15,7))
wine.sales.plot()
plt.ylabel('Wine sales')
pylab.show()
wine
###Output
_____no_output_____
###Markdown
Checking stationarity and STL decomposition of the series:
###Code
plt.figure(figsize(15,10))
sm.tsa.seasonal_decompose(wine.sales).plot()
print("Критерий Дики-Фуллера: p=%f" % sm.tsa.stattools.adfuller(wine.sales)[1])
###Output
Критерий Дики-Фуллера: p=0.051161
###Markdown
Variance stabilization. Let's apply the Box-Cox transformation to stabilize the variance:
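For reference, `stats.boxcox` applies the transform below (returning the optimal $\lambda$), and the `invboxcox` helper defined above implements its inverse:

$$y^{(\lambda)} = \begin{cases} \dfrac{y^{\lambda}-1}{\lambda}, & \lambda \ne 0,\\ \ln y, & \lambda = 0, \end{cases} \qquad y = \begin{cases} \left(\lambda\, y^{(\lambda)} + 1\right)^{1/\lambda}, & \lambda \ne 0,\\ \exp\!\left(y^{(\lambda)}\right), & \lambda = 0. \end{cases}$$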
###Code
wine['sales_box'], lmbda = stats.boxcox(wine.sales)
plt.figure(figsize(15,7))
wine.sales_box.plot()
plt.ylabel(u'Transformed wine sales')
print("Оптимальный параметр преобразования Бокса-Кокса: %f" % lmbda)
print("Критерий Дики-Фуллера: p=%f" % sm.tsa.stattools.adfuller(wine.sales_box)[1])
###Output
Оптимальный параметр преобразования Бокса-Кокса: 0.236675
Критерий Дики-Фуллера: p=0.029565
###Markdown
Stationarity. The Dickey-Fuller test rejects the non-stationarity hypothesis, but a trend is still visible in the data. Let's try seasonal differencing; we will run an STL decomposition on the differenced series and check stationarity again:
###Code
wine['sales_box_diff'] = wine.sales_box - wine.sales_box.shift(12)
plt.figure(figsize(15,10))
sm.tsa.seasonal_decompose(wine.sales_box_diff[12:]).plot()
print("Критерий Дики-Фуллера: p=%f" % sm.tsa.stattools.adfuller(wine.sales_box_diff[12:])[1])
###Output
Критерий Дики-Фуллера: p=0.128317
###Markdown
The Dickey-Fuller test does not reject the non-stationarity hypothesis, and we were not able to remove the trend completely. Let's additionally apply ordinary first-order differencing:
###Code
wine['sales_box_diff2'] = wine.sales_box_diff - wine.sales_box_diff.shift(1)
plt.figure(figsize(15,10))
sm.tsa.seasonal_decompose(wine.sales_box_diff2[13:]).plot()
print("Критерий Дики-Фуллера: p=%f" % sm.tsa.stattools.adfuller(wine.sales_box_diff2[13:])[1])
###Output
Критерий Дики-Фуллера: p=0.000002
###Markdown
The non-stationarity hypothesis is rejected, and the series visually looks better: the trend is gone. Model selection. Let's look at the ACF and PACF of the resulting series:
###Code
plt.figure(figsize(15,8))
ax = plt.subplot(211)
sm.graphics.tsa.plot_acf(wine.sales_box_diff2[13:].values.squeeze(), lags=48, ax=ax)
pylab.show()
ax = plt.subplot(212)
sm.graphics.tsa.plot_pacf(wine.sales_box_diff2[13:].values.squeeze(), lags=48, ax=ax)
pylab.show()
###Output
_____no_output_____
###Markdown
Initial approximations: Q=1, q=2, P=1, p=4
###Code
ps = range(0, 5)
d=1
qs = range(0, 3)
Ps = range(0, 2)
D=1
Qs = range(0, 2)
parameters = product(ps, qs, Ps, Qs)
parameters_list = list(parameters)
len(parameters_list)
%%time
results = []
best_aic = float("inf")
warnings.filterwarnings('ignore')
for param in parameters_list:
    # try/except is needed because the model fails to train on some parameter sets
try:
model=sm.tsa.statespace.SARIMAX(wine.sales_box, order=(param[0], d, param[1]),
seasonal_order=(param[2], D, param[3], 12)).fit(disp=-1)
    # print the parameter sets on which the model fails to train and move on to the next set
except ValueError:
print('wrong parameters:', param)
continue
aic = model.aic
    # keep the best model, its AIC, and its parameters
if aic < best_aic:
best_model = model
best_aic = aic
best_param = param
results.append([param, model.aic])
warnings.filterwarnings('default')
###Output
Wall time: 27 s
###Markdown
If the previous cell raises an error, make sure you have updated statsmodels to version 0.8.0rc1 or newer.
###Code
result_table = pd.DataFrame(results)
result_table.columns = ['parameters', 'aic']
print(result_table.sort_values(by = 'aic', ascending=True).head())
###Output
parameters aic
21 (1, 2, 0, 1) 1006.024314
29 (2, 1, 0, 1) 1007.801388
31 (2, 1, 1, 1) 1008.786373
45 (3, 2, 0, 1) 1009.167726
33 (2, 2, 0, 1) 1009.284102
###Markdown
The best model:
###Code
print(best_model.summary())
###Output
SARIMAX Results
============================================================================================
Dep. Variable: sales_box No. Observations: 176
Model: SARIMAX(1, 1, 2)x(0, 1, [1], 12) Log Likelihood -498.012
Date: Sun, 26 Apr 2020 AIC 1006.024
Time: 14:19:18 BIC 1021.493
Sample: 01-01-1980 HQIC 1012.304
- 08-01-1994
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ar.L1 0.9103 0.053 17.035 0.000 0.806 1.015
ma.L1 -1.9305 0.035 -55.265 0.000 -1.999 -1.862
ma.L2 0.9502 0.033 28.732 0.000 0.885 1.015
ma.S.L12 -0.7172 0.053 -13.432 0.000 -0.822 -0.613
sigma2 24.6918 1.913 12.909 0.000 20.943 28.441
===================================================================================
Ljung-Box (Q): 43.11 Jarque-Bera (JB): 40.47
Prob(Q): 0.34 Prob(JB): 0.00
Heteroskedasticity (H): 2.45 Skew: -0.29
Prob(H) (two-sided): 0.00 Kurtosis: 5.37
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
###Markdown
Its residuals:
###Code
plt.figure(figsize(15,8))
plt.subplot(211)
best_model.resid[13:].plot()
plt.ylabel(u'Residuals')
ax = plt.subplot(212)
sm.graphics.tsa.plot_acf(best_model.resid[13:].values.squeeze(), lags=48, ax=ax)
print("Критерий Стьюдента: p=%f" % stats.ttest_1samp(best_model.resid[13:], 0)[1])
print("Критерий Дики-Фуллера: p=%f" % sm.tsa.stattools.adfuller(best_model.resid[13:])[1])
###Output
Критерий Стьюдента: p=0.384951
Критерий Дики-Фуллера: p=0.000000
###Markdown
The residuals are unbiased (confirmed by Student's t-test), stationary (confirmed by the Dickey-Fuller test and visually), and not autocorrelated (confirmed by the Ljung-Box test and the correlogram). Let's see how well the model describes the data:
###Code
wine['model'] = invboxcox(best_model.fittedvalues, lmbda)
plt.figure(figsize(15,7))
wine.sales.plot()
wine.model[13:].plot(color='r')
plt.ylabel('Wine sales')
pylab.show()
###Output
_____no_output_____
###Markdown
Forecast
###Code
import datetime
from dateutil.relativedelta import relativedelta

wine2 = wine[['sales']]
date_list = [datetime.datetime.strptime("1994-09-01", "%Y-%m-%d") + relativedelta(months=x) for x in range(0,36)]
future = pd.DataFrame(index=date_list, columns= wine2.columns)
wine2 = pd.concat([wine2, future])
wine2['forecast'] = invboxcox(best_model.predict(start=176, end=211), lmbda)
plt.figure(figsize(15,7))
wine2.sales.plot()
wine2.forecast.plot(color='r')
plt.ylabel('Wine sales')
pylab.show()
###Output
_____no_output_____ |
GATK-Spark-IGV.ipynb | ###Markdown
Alignment and GATK-Spark ProtocolWe demonstrate how by embedding Docker containers, we can distribute Jupyter notebooks that can run complicated workflows on the cloud interactively.The example is a GATK variant calling pipeline that is run in a distributed manner using Spark on an auto-generated AWS kubernetes cluster Download the fastq input filesThis step will take approximately 30 minutes, depending on your bandwidth.The fastq files are publicly available with the following links: ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/HG00100/sequence_read/ERR013140_1.filt.fastq.gz ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/HG00100/sequence_read/ERR013140_2.filt.fastq.gzTo download the files we can use the parallel-fastq-dump container. This is a python wrapper around the fastq-dump utility that allows it to download separate chunks of the files separately.```docker run --rm -i -v /home/jovyan/work:/data biodepot/alpine-utils /bin/bash -c 'mkdir -p /data/fastq && cd /data/fastq && \wget ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/HG00100/sequence_read/ERR013140_1.filt.fastq.gz && \wget ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/HG00100/sequence_read/ERR013140_2.filt.fastq.gz'```{nbdocker0} Download the reference and create the indicesRunning two containers will take approximately 10 minutes.We download the reference and generate the indices for bwa Download the reference human transcriptome:```docker run --rm -i -v /home/jovyan/work:/data biodepot/alpine-utils /bin/bash -c 'mkdir -p /data/reference && wget -qO- "ftp://ftp.ncbi.nlm.nih.gov/refseq/H_sapiens/annotation/GRCh38_latest/refseq_identifiers/GRCh38_latest_genomic.fna.gz"| gunzip -c > /data/reference/GCh38.fa'```{nbdocker1} Build indices using bwa:```docker run --rm -i -v /home/jovyan/work:/data biodepot/alpine-bwa:3.7-0.7.15 /bin/bash -c 'cd /data/reference && bwa index GCh38.fa'```{nbdocker2} Alignment using bwa-mem to create bamfilesRunning alignment for each fastq input file will take approximately 30 minutes. ```docker run --rm -i -v /home/jovyan/work:/data biodepot/alpine-bwa-samtools:3.7-0.7.15-1.9-52-g651bf14 \ /bin/bash -c 'mkdir -p /data/bams && bwa mem -t 8 \ /data/fastq/ERR013140_1.fastq.gz /data/fastq/ERR013140_2.fastq.gz | samtools sort -@8 -o data/bams/ERR013140.bam - '```{nbdocker3}Results will be in: /home/jovyan/work/ GATK (non-Spark steps)Not all the components of GATK use Spark Add steps to download ref/ref.fasta and put in /data/references/ref.fasta```docker run --rm -i -v /home/jovyan/work:/data \broadinstitute/gatk:4.0.5.1 \/bin/bash -c 'mkdir -p /data/variants && \ cd /gatk/gatk_data/germline && \ gatk HaplotypeCaller -R /data/references/ref.fasta -I /data/bams/ERR013140.bam -O /data/variants/variants.vcf '```{nbdocker4}```docker run --rm -i -v /home/jovyan/work:/data \broadinstitute/gatk:4.0.5.1 \/bin/bash -c 'gatk ValidateSamFile -I /data/bams/ERR013140.bam -MODE SUMMARY'{nbdocker5}``` GATK (Spark steps)About 1 minute```docker run --rm -i -v /.nbdocker/:/home/ubuntu/gatk_data \ broadinstitute/gatk:4.0.5.1 \/bin/bash -c ' gatk --java-options "-Xmx6G" MarkDuplicatesSpark \-R /data/references/ref.fasta \-I /data/bams/ERR013140.bam\-O /data/bams/ERR013140.bam \-M /data/bams/metrics.txt \-- \--spark-master local[*]'```{nbdocker6} Visualization with igv
###Code
import igv
b = igv.Browser(
{
"genome": "hg19",
"locus": "chr22:24,376,166-24,376,456"
}
)
b.show()
b.load_track(
{
"name": "Local BAM",
"url": "files/data/gstt1_sample.bam",
"indexURL": "files/data/gstt1_sample.bam.bai",
"format": "bam",
"type": "alignment"
})
b.get_svg()
b.display_svg()
###Output
_____no_output_____ |
src/models/roberta/Data_stats.ipynb | ###Markdown
UDS-dur
###Code
uds_dur = {}
for split in ['train', 'dev', 'test']:
uds_dur[split] = pd.read_csv(locs[split] + split + "-temporal-duration-data.tsv", sep="\t")
print(f"Train: {uds_dur['train'].shape[0]:,}",
f"\nDev: {uds_dur['dev'].shape[0]:,}",
f"\nTest: {uds_dur['test'].shape[0]:,}")
###Output
Train: 437,836
Dev: 34,446
Test: 31,854
###Markdown
UDS-rel
###Code
uds_rel = {}
for split in ['train', 'dev', 'test']:
uds_rel[split] = pd.read_csv(locs[split] + split + "-temporal-relation-data.tsv", sep="\t")
print(f"Train: {uds_rel['train'].shape[0]:,}",
f"\nDev: {uds_rel['dev'].shape[0]:,}",
f"\nTest: {uds_rel['test'].shape[0]:,}")
###Output
Train: 476,744
Dev: 45,104
Test: 41,096
###Markdown
TBD
###Code
tbd = {}
for split in ['train', 'dev', 'test']:
tbd[split] = pd.read_csv(locs[split] + split + "-tbdense-data.tsv", sep="\t")
print(f"Train: {tbd['train'].shape[0]:,}",
f"\nDev: {tbd['dev'].shape[0]:,}",
f"\nTest: {tbd['test'].shape[0]:,}")
###Output
Train: 7,418
Dev: 1,528
Test: 2,964
###Markdown
RED
###Code
red = {}
for split in ['train', 'dev', 'test']:
red[split] = pd.read_csv(locs[split] + split + "-red-data.tsv", sep="\t")
print(f"Train: {red['train'].shape[0]:,}",
f"\nDev: {red['dev'].shape[0]:,}",
f"\nTest: {red['test'].shape[0]:,}")
###Output
Train: 4,844
Dev: 296
Test: 438
###Markdown
TE3
###Code
te3 = {}
for split in ['train', 'dev', 'test']:
te3[split] = pd.read_csv(locs[split] + split + "-tempeval3-data.tsv", sep="\t")
print(f"Train: {te3['train'].shape[0]:,}",
f"\nDev: {te3['dev'].shape[0]:,}",
f"\nTest: {te3['test'].shape[0]:,}")
###Output
Train: 20,592
Dev: 3,320
Test: 3,336
|
tutorial_notebooks/training_all_blanks.ipynb | ###Markdown
4. Network Architecture & Training Welcome to the fourth notebook of our six part series part of our tutorial on Deep Learning for Human Activity Recognition. Within the last notebook you learned:- What are common evaluation metrics when evaluating the performance of an Human Activity Recognition model?- How are they defined? How are they computed? How do they differ from each other?This notebook will teach you everything you need to know about how neural networks are defined and trained using [PyTorch](https://pytorch.org/). As mentioned during the [theoretical part](https://https://mariusbock.github.io/dl-for-har/) of this session, we will not go into detail about each building block of a neural network and how a network is trained, but rather stick to a basic level of understanding. If you want to dig deeper, we recommend you checking out other sources, like [Coursera](https://www.coursera.org/courses?query=deep%20learning) and [YouTube](https://www.youtube.com/results?search_query=deep+learning), as there are plenty of well written tutorials on the fundamentals of Deep Learning. After working through this notebook you will be able to answer the following questions:- How do I define a sample neural network architecture in PyTorch? - What additional preprocessing do I need to apply to my data to fed it into my network?- How do I define a train loop which trains my neural network? 4.1. Important Remarks If you are accessing this tutorial via [Google Colab](https://colab.research.google.com/github/mariusbock/dl-for-har/blob/main/tutorial_notebooks/training.ipynb), first make sure to use Google Colab in English. This will help us to better assist you with issues that might arise during the tutorial. There are two ways to change the default language if it isn't English already:1. On Google Colab, go to `Help` -> `View in English` 2. Change the default language of your browser to `English`.To also ease the communication when communicating errors, enable line numbers within the settings of Colab.1. On Google Colab, go to `Tools` -> `Settings` -> `Editor` -> `Show line numbers`In general, we strongly advise you to use Google Colab as it provides you with a working Python distribution as well as free GPU resources. To make Colab use GPUs, you need to change the current notebooks runtime type via:- `Runtime` -> `Change runtime type` -> `Dropdown` -> `GPU` -> `Save`**Hint:** you can auto-complete code in Colab via `ctrl` + `spacebar`For the live tutorial, we require all participants to use Colab. If you decide to rerun the tutorial at later points and rather want to have it run locally on your machine, feel free to clone our [GitHub repository](https://github.com/mariusbock/dl-for-har).To get started with this notebook, you need to first run the code cell below. Please set `use_colab` to be `True` if you are accessing this notebook via Colab. If not, please set it to `False`. This code cell will make sure that imports from our GitHub repository will work.
###Code
import os, sys
use_colab = True
module_path = os.path.abspath(os.path.join('..'))
if use_colab:
# move to content directory and remove directory for a clean start
%cd /content/
%rm -rf dl-for-har
# clone package repository (will throw error if already cloned)
!git clone https://github.com/mariusbock/dl-for-har.git
# navigate to dl-for-har directory
%cd dl-for-har/
else:
os.chdir(module_path)
# this statement is needed so that we can use the methods of the DL-ARC pipeline
if module_path not in sys.path:
sys.path.append(module_path)
###Output
_____no_output_____
###Markdown
4.2. Defining a Network Architecture During this tutorial we will use [PyTorch](https://pytorch.org/) as our Deep Learning framework of choice. The open source library is one of the most popular frameworks out there for applying Deep Learning. It has all the necessary building blocks found in neural networks pre-implemented as well as offers a variety of helpful functions which can be used to easily implement your first Deep Learning script with just a few lines of code.In the following we will define our neural network architecture. Once defined we can use our previously preprocessed sensor-data to train a network which will be able to predict the type of activities being performed for a given sliding window. As mentioned during the introduction to this chapter, the architecture which we will used is called **DeepConvLSTM** [[1]](1). The architecture was introduced by Francisco Javier Ordonez and Daniel Roggen in 2016 and is to this date a state-of-the-art architecture for applying Deep Learning on Human Activity Recognition. The architecture combines both convolutional and recurrent layers.The architecture is made of three main parts:1. **Convolutional layers:** Convolutional layers are based on filters (e.g. a 2 by 1 matrix) shifting over some input (e.g. a sliding window) resulting in activation feature map. The main idea of convolutions is that they are able to detect a specific type of feature anywhere within the input. Within the original architecture Ordonez and Roggen apply 4 convolutional layers each with 64 filters of size 5 by 1. 2. **LSTM layer(s):** After applying convolutional layers, Ordonez and Roggen make us of an LSTM in order to capture time dependencies on features extracted by convolutional operations. An LSTM is a type of neural network which is able to learn temporal dependencies in data via gated mechanisms. The LSTM itself is structured into layers. Within the original architecture Ordonez and Roggen employ a 2-layered LSTM with 128 hidden units. 3. **Classification layer:** The output of the LSTM is finally fed into a classifier which is a fully-connected layer and produces the final predictions. Preceeding the classifier, Ordonez and Roggen additionally put a dropout layer, which is a form of regularization. A dropout layer randomly deactivates neurons according to a dropout probability and thus prevents the probability of your network overfitting.Contradicting to popular belief that one needs at least a 2-layered LSTM when dealing with sequential data, within a recent work of ours, we exhibited that a 1-layered LSTM might be a better suited option when dealing with raw sensor-data [[2]](2). Therefore, within the next code block, we will define the altered DeepConvLSTM architecture as presented in our paper which **employs a 1-layered instead of 2-layered LSTM**.In order to give you a better idea of how to define your PyTorch implementation of the DeepConvLSTM, we already defined a [PyTorch module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html) called **DeepConvLSTM** for you to start out with. A PyTorch module typically consists of two main functions - the `init()` and `forward()` function. Within the former all relevant parameters and building blocks of the neural network are defined. Within the latter the parameters and building blocks are put together, i.e. the computation of the network defined. Within the next tasks you will be asked to fill in some of the missing parts of said module function. Task 1: Implementing the DeepConvLSTM1. 
Within the `init()` function define the activation function. Use [PyTorch's implementation](https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html) of the ReLU activation function called `ReLU`. Set `inplace=True`. (`lines 17-18`)2. Within the `init()` function define the four convolution layers. Use [PyTorch's implementation](https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html) of a 2d-convolution called `Conv2d`. Hints on the input and dimensions are given as comments within the code. The filter size should be of size (`filter_width x 1`) (`lines 20-24`)3. Within the `init()` function define the LSTM. Use [PyTorch's implementation](https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html) of a LSTM called `LSTM`. Hints on the input size of the LSTM is given as comments within the code. The `hidden_size` and `num_layers` are given as attributes within the `init()` function. (`lines 26-27`)4. Within the `init()` define the dropout layer. Use [PyTorch's implementation](https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html) of a dropout layer called `Dropout`. Pass the `Dropout` object the `drop_prob` variable defined within the `init()` function (`lines 29-30`)5. Within the `init()` define the classifier, i.e. fully connected layer. Use [PyTorch's implementation](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) of a fully-connected layer called `Linear`. (`lines 32-33`)6. Fill in the blanks within the `forward()` function. Apply each of the building blocks you defined in the `init()` on your input `x`. (`lines 39-43, 52-53 and 58-60`)
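For orientation, here is a minimal, generic sketch of the `init()`/`forward()` pattern described above. It is deliberately not the DeepConvLSTM solution: the layer types and sizes below are arbitrary placeholders, chosen only to show where building blocks are defined and where they are wired together.

```python
import torch
from torch import nn

class TinyNet(nn.Module):
    def __init__(self):
        super(TinyNet, self).__init__()
        # building blocks (layers) are created once in __init__ ...
        self.fc1 = nn.Linear(8, 16)
        self.act = nn.ReLU(inplace=True)
        self.fc2 = nn.Linear(16, 2)

    def forward(self, x):
        # ... and the actual computation is wired together in forward()
        x = self.act(self.fc1(x))
        return self.fc2(x)

print(TinyNet()(torch.randn(4, 8)).shape)  # torch.Size([4, 2])
```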
###Code
from torch import nn
class DeepConvLSTM(nn.Module):
def __init__(self, config):
super(DeepConvLSTM, self).__init__()
# parameters
self.window_size = config['window_size']
self.drop_prob = config['drop_prob']
self.nb_channels = config['nb_channels']
self.nb_classes = config['nb_classes']
self.seed = config['seed']
self.nb_filters = config['nb_filters']
self.filter_width = config['filter_width']
self.nb_units_lstm = config['nb_units_lstm']
self.nb_layers_lstm = config['nb_layers_lstm']
# define activation function
self.relu =
# define conv layers
self.conv1 =
self.conv2 =
self.conv3 =
self.conv4 =
# define lstm layers
self.lstm =
# define dropout layer
self.dropout =
# define classifier
self.fc =
def forward(self, x):
# reshape data for convolutions
x = x.view(-1, 1, self.window_size, self.nb_channels)
# apply convolution and the activation function
x =
x =
x =
x =
# sets the final sequence length
final_seq_len = x.shape[2]
# permute dimensions and reshape for LSTM
x = x.permute(0, 2, 1, 3)
x = x.reshape(-1, final_seq_len, self.nb_filters * self.nb_channels)
# apply LSTM (note: it has two outputs!)
x, _ =
# reshape data for classifier
x = x.view(-1, self.nb_units_lstm)
# apply dropout and feed data through classifier
x =
x =
# reshape data and return predicted label of last sample within final sequence (determines label of window)
out = x.view(-1, final_seq_len, self.nb_classes)
return out[:, -1, :]
###Output
_____no_output_____
###Markdown
4.3. Preparing your data Great, we now have a neural network defined which we can call and use for training! But, there is one essential step missing before moving on towards the training loop - your data needs to be put into the correct format (again). In addition to the preprocessing steps that you know from the previous notebook, we also need to **make sure that our dataset is using the correct data types which are compatible with a GPU**. Furthermore, our data needs to be split into a **training** and **validation** dataset. As you know from the [theoretical part of this section](https://mariusbock.github.io/dl-for-har), within Deep Learning we essentially try to approximate a function. To judge whether the parameterized function we came up with appropriately approximates such underlying function, we validate our network's perfomance on unseen data. If the algorithm still performs well, i.e. predicts the correct labels for the unseen data, we say that we have found a **generalized function**. The next notebook will cover in more detail what different validation methods exist and go into detail why we need and what common pitfalls exist.The following task will guide you through the necessary preprocessing one needs to apply on top of the [RealWorld (HAR) dataset](https://sensor.informatik.uni-mannheim.de/dataset_realworld). The first step of loading the data will already be filled out for you. As you can see, we used a predefined method called `load_dataset()`, which is part of the DL-ARC feature stack.The preprocessing consists of **four essential parts**:1. Split the data into a training and validation dataset. The validation dataset is used to gain feedback on the perfomance of the model and functions as unseen data. Results obtained on the validation dataset can be used as an indicator whether the changes you make to a network and/ or its training process are improving or worsening results.2. Apply the sliding window approach on top of the training and validation dataset. As you learned in the previous notebook, we do not classify a single record, but a window of records. The label of the last record within a window defines the label of the window and is our ultimate goal to predict.3. (Optional) Omit the subject identifier column.4. Convert the two datasets into the correct data format so that they are compatible with the GPU. Task 2: Getting your data ready for training1. Split the data into a train and validation dataset. The train dataset shall consist of the data of the first two subjects. The validation dataset shall be the data of the third subject. (`lines 16-19`)2. Segment your train and validation data into windows. Instead of going back to your defined function within the last notebook, you can use our [predefined method](https://github.com/mariusbock/dl-for-har/blob/main/data_processing/sliding_window.py) which is part of the DL-ARC feature stack called `apply_sliding_window`. It is already imported for you. (`lines 26-29`)3. (*Optional*) Omit the first feature column (subject_identifier) from the train and validation dataset. (`lines 35-36`)4. Convert the feature columns of the train and validation to `float32` and label column to `uint8` for GPU compatibility. Use the [built-in function](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.astype.html) of a pandas dataframe called `astype()`. (`lines 41-43`)
###Code
import pandas as pd
import numpy as np
import warnings
from data_processing.sliding_window import apply_sliding_window
from data_processing.preprocess_data import load_dataset
warnings.filterwarnings('ignore')
# data loading (we are using a predefined method called load_dataset, which is part of the DL-ARC feature stack)
X, y, num_classes, class_names, sampling_rate, has_null = load_dataset('rwhar_3sbjs')
# since the method returns features and labels separatley, we need to concat them
data = np.concatenate((X, y[:, None]), axis=1)
# define the train data to be all data belonging to the first two subjects
train_data =
# define the validation data to be all data belonging to the third subject
valid_data =
# settings for the sliding window (change them if you want to!)
sw_length = 50
sw_unit = 'units'
sw_overlap = 50
# apply a sliding window on top of both the train and validation data; you can use our predefined method
# you can import it via from preprocessing.sliding_window import apply_sliding_window
X_train, y_train =
X_valid, y_valid =
print("\nShape of the train and validation datasets after splitting and windowing: ")
print(X_train.shape, y_train.shape)
print(X_valid.shape, y_valid.shape)
# (optional) omit the first feature column (subject_identifier) from the train and validation dataset
X_train, X_valid =
print("\nShape of the train and validation feature dataset after splitting and windowing: ")
print(X_train.shape, X_valid.shape)
# convert the features of the train and validation to float32 and labels to uint8 for GPU compatibility
X_train, y_train =
X_valid, y_valid =
###Output
_____no_output_____
###Markdown
4.4. Training Your Network Since we now have brought the data into the correct format, let's train our network with it!A typical training loop can be divided into three steps:1. **Definition:** You define your network, optimizer and loss2. **Training:** Iterating over the number of epochs: you chunk your training data into so-called batches and iteratively feed them through your network. After a batch has been fed through the network, you compute the loss said batch produced. Using the loss you backprogate it through the network using the optimizer which adjusts the weights accordingly. 3. **Validation:** After you have processed your whole training dataset, you go on to validate the predictive performance of the network. To do so you again chunk your training and validation data into batches. Iterating over all batches of both all datasets, fed the batches through the trained network and obtain its predictions. **Note:** you only want to obtain predictions and not backpropagate any loss. The obtained predictions can now be used to calculate standard evaluation metrics such as **precision** and **recall**. Due to being limited in time we will not talk about their computation in great detail during the tutorial. Nevertheless, we created a [separate notebook](https://colab.research.google.com/github/mariusbock/dl-for-har/blob/main/tutorial_notebooks/evaluation.ipynb) for you which covers the most essential evaluation metrics used in HAR. Feel free to work through it if you want to accustom yourself with how each of them is calculated. The next task will guide you through **implementing your training and validation loop**. It will again have parts missing which you need to fill out, but will already provide you with certain code segments, to ease the task and focus on the essential parts. Task 3: Define your own train loop 1. You'll see that we already defined a `config` object which you can use to pass to your network. Nevertheless, there are three values missing, namely the `window_size`, `nb_channels` and `nb_classes`. Define them correctly. (`lines 27-33`)2. Define your DeepConvLSTM network by calling the object we previously defined. Also define the `optimizer` being the [Adam optimizer](https://pytorch.org/docs/stable/optim.html) and `criterion` being the [Cross-Entropy Loss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) (`lines 35-36 and 42-44`)3. Define the `DataLoader` objects. The `DataLoader` objects only work with [PyTorch tensor datasets](https://pytorch.org/docs/stable/data.html), which we already defined for you as the `val_dataset` and `train_dataset` variables. Pass the `DataLoader` object the dataset variables, the `batch_size` you want to use and set `shuffle=True`. (`lines 43-49`)4. Further define the training loop by iterating over the training `DataLoader` object. We already defined parts for you. In a nutshell: for each batch, compute the loss by passing it through the network; backprogate the computed loss using your optimizer object. Use the [.backward()](https://pytorch.org/docs/stable/autograd.html) of the loss object and [.step()](https://pytorch.org/docs/stable/optim.html) of the optimizer to do so. (`lines 69-70 and 75-78`)5. While training obtain predictions for the train dataset. To do so obtain the final predicitons for each batch by applying the PyTorch `softmax` [function](https://pytorch.org/docs/stable/generated/torch.nn.functional.softmax.html) on top of the network output. (`lines 80-81`)6. 
After training obtain predictions for the validation dataset using the resulting trained network of the current epoch. Iterate again over the validation `DataLoader` object and fed each batch through the network. In addition to calculating the [cross-entropy loss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html), obtain the final predicitons for each batch by applying the PyTorch `softmax` [function](https://pytorch.org/docs/stable/generated/torch.nn.functional.softmax.html) on top of the network output. Using the predictions the script will calculate [accuracy](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html), [precision](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html), [recall](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html) and [F1-score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) on both your training and validation set. (`lines 113-114 and 119-120`)7. Play around with different values for the parameters within the `config` file. How does each one of them influence your training loop? Feel free to also use a completly different optimizer - you can find all the different options on the [PyTorch website](https://pytorch.org/docs/stable/optim.html). (`lines 10-25`)
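For orientation only (this is not the solution to Task 3; it uses random dummy data and arbitrary sizes), a bare-bones PyTorch training step follows this pattern:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

X = torch.randn(64, 8)               # dummy features
y = torch.randint(0, 2, (64,))       # dummy labels
loader = DataLoader(TensorDataset(X, y), batch_size=16, shuffle=True)

net = nn.Linear(8, 2)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(2):
    for inputs, targets in loader:
        optimizer.zero_grad()
        loss = criterion(net(inputs), targets.long())  # loss on this batch
        loss.backward()                                # backpropagate
        optimizer.step()                               # update the weights
```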
###Code
import torch
from torch.utils.data import DataLoader
from sklearn.metrics import precision_score, recall_score, f1_score, jaccard_score
from misc.torchutils import seed_torch
import time
# this is the config object which contains all relevant settings. Feel free to change them and see how it influences
# your results. Parameters which shouldn't be changed are marked.
config = {
'nb_filters': 64,
'filter_width': 11,
'nb_units_lstm': 128,
'nb_layers_lstm': 1,
'drop_prob': 0.5,
'seed': 1,
'epochs': 20,
'batch_size': 100,
'learning_rate': 1e-4,
'weight_decay': 1e-6,
'gpu_name': 'cuda:0',
'print_counts': False
}
# in order to get reproducible results, we need to seed torch and other random parts of our implementation
seed_torch(config['seed'])
# define the missing parameters within the config file.
# window_size = size of the sliding window in units
# nb_channels = number of feature channels
# nb_classes = number of classes that can be predicted
config['window_size'] = X_train.shape[1]
config['nb_channels'] = X_train.shape[2]
config['nb_classes'] = len(class_names)
# initialize your DeepConvLSTM object
network =
# sends network to the GPU and sets it to training mode
network.to(config['gpu_name'])
network.train()
# initialize the optimizer and loss
optimizer =
criterion =
# initializes the train and validation dataset in Torch format
train_dataset = torch.utils.data.TensorDataset(torch.from_numpy(X_valid).float(), torch.from_numpy(y_valid))
val_dataset = torch.utils.data.TensorDataset(torch.from_numpy(X_train), torch.from_numpy(y_train))
# define the train- and valloader; use from torch.utils.data import DataLoader
trainloader =
valloader =
# define your training loop; iterates over the number of epochs
for e in range():
# helper objects needed for proper documentation
train_losses = []
train_preds = []
train_gt = []
start_time = time.time()
batch_num = 1
# iterate over the trainloader object (it'll return batches which you can use)
for i, (x, y) in enumerate():
# sends batch x and y to the GPU
inputs, targets = x.to(config['gpu_name']), y.to(config['gpu_name'])
optimizer.zero_grad()
# send inputs through network to get predictions
train_output =
# calculates loss
loss = criterion(train_output, targets.long())
# backprogate your computed loss through the network
# use the .backward() and .step() function on your loss and optimizer
loss
optimizer
# calculate actual predictions (i.e. softmax probabilites); use torch.nn.functional.softmax()
train_output =
# appends the computed batch loss to list
train_losses.append(loss.item())
# creates predictions and true labels; appends them to the final lists
y_preds = np.argmax(train_output.cpu().detach().numpy(), axis=-1)
y_true = targets.cpu().numpy().flatten()
train_preds = np.concatenate((np.array(train_preds, int), np.array(y_preds, int)))
train_gt = np.concatenate((np.array(train_gt, int), np.array(y_true, int)))
# prints out every 100 batches information about the current loss and time per batch
if batch_num % 100 == 0 and batch_num > 0:
cur_loss = np.mean(train_losses)
elapsed = time.time() - start_time
print('| epoch {:3d} | {:5d} batches | ms/batch {:5.2f} | train loss {:5.2f}'.format(e, batch_num, elapsed * 1000 / config['batch_size'], cur_loss))
start_time = time.time()
batch_num += 1
# helper objects
val_preds = []
val_gt = []
val_losses = []
# sets network to eval mode and
network.eval()
with torch.no_grad():
# iterate over the valloader object (it'll return batches which you can use)
for i, (x, y) in enumerate():
# sends batch x and y to the GPU
inputs, targets = x.to(config['gpu_name']), y.to(config['gpu_name'])
# send inputs through network to get predictions
val_output =
# calculates loss by passing criterion both predictions and true labels
val_loss = criterion(val_output, targets.long())
# calculate actual predictions (i.e. softmax probabilites); use torch.nn.functional.softmax() on dim=1
val_output =
# appends validation loss to list
val_losses.append(val_loss.item())
# creates predictions and true labels; appends them to the final lists
y_preds = np.argmax(val_output.cpu().numpy(), axis=-1)
y_true = targets.cpu().numpy().flatten()
val_preds = np.concatenate((np.array(val_preds, int), np.array(y_preds, int)))
val_gt = np.concatenate((np.array(val_gt, int), np.array(y_true, int)))
# print epoch evaluation results for train and validation dataset
print("\nEPOCH: {}/{}".format(e + 1, config['epochs']),
"\nTrain Loss: {:.4f}".format(np.mean(train_losses)),
"Train Acc: {:.4f}".format(jaccard_score(train_gt, train_preds, average='macro')),
"Train Prec: {:.4f}".format(precision_score(train_gt, train_preds, average='macro')),
"Train Rcll: {:.4f}".format(recall_score(train_gt, train_preds, average='macro')),
"Train F1: {:.4f}".format(f1_score(train_gt, train_preds, average='macro')),
"\nVal Loss: {:.4f}".format(np.mean(val_losses)),
"Val Acc: {:.4f}".format(jaccard_score(val_gt, val_preds, average='macro')),
"Val Prec: {:.4f}".format(precision_score(val_gt, val_preds, average='macro')),
"Val Rcll: {:.4f}".format(recall_score(val_gt, val_preds, average='macro')),
"Val F1: {:.4f}".format(f1_score(val_gt, val_preds, average='macro')))
# if chosen, print the value counts of the predicted labels for train and validation dataset
if config['print_counts']:
print('Predicted Train Labels: ')
print(np.vstack((np.nonzero(np.bincount(train_preds))[0], np.bincount(train_preds)[np.nonzero(np.bincount(train_preds))[0]])).T)
print('Predicted Val Labels: ')
print(np.vstack((np.nonzero(np.bincount(val_preds))[0], np.bincount(val_preds)[np.nonzero(np.bincount(val_preds))[0]])).T)
# set network to train mode again
network.train()
###Output
_____no_output_____ |
notebooks/explore/Metrics.ipynb | ###Markdown
OK, I have to make another merge function, because my original assumption about duplicating the time was wrong...
###Code
import datetime as dt
import pandas as pd
from functools import reduce
def mergeORIGINAL(dfs:list[pd.DataFrame]):
''' Layer 1
    combines multiple multi-columned dataframes.
    to support disparate frequencies,
    the outer join fills in missing values with the previous value.
'''
if len(dfs) == 0:
return None
if len(dfs) == 1:
return dfs[0]
for df in dfs:
df.index = pd.to_datetime(df.index)
return reduce(
lambda left, right: pd.merge(
left,
right,
how='outer',
left_index=True,
right_index=True).fillna(method='ffill'),
#.fillna(method='bfill'),
# don't bfill here, in many cases its fine to bfill, but not in all.
# maybe we will bfill in model. always bfill After ffill.
dfs)
df1 = pd.DataFrame(['a', 'b', 'c'],
columns=pd.MultiIndex.from_product([['target'], ['key']]),
index = [
'2022-04-15 20:20:20.000000',
'2022-04-15 20:20:21.000000',
'2022-04-15 20:20:22.000000'],)
df1
df2 = pd.DataFrame(['a2', 'b2', 'c2', 'd2', 'e2'],
columns=pd.MultiIndex.from_product([['feature2'], ['keys']]),
index = [
'2022-04-15 20:20:20.100000',
'2022-04-15 20:20:20.500000',
'2022-04-15 20:20:20.900000',
'2022-04-15 20:20:21.000000',
'2022-04-15 20:20:21.100000',],)
df2
df3 = pd.DataFrame(['a3', 'b3', 'c3', 'd3', 'e3'],
columns=pd.MultiIndex.from_product([['feature3'], ['keys']]),
index = [
'2022-04-15 20:20:19.000000',
'2022-04-15 20:20:19.200000',
'2022-04-15 20:20:20.000000',
'2022-04-15 20:20:20.200000',
'2022-04-15 20:20:23.100000',],)
df3
mergeORIGINAL([df1, df2, df3])
###Output
_____no_output_____
###Markdown
No good, because stream 1 is our target and we don't want it duplicated.
###Code
def merge(dfs:list[pd.DataFrame], targetDf:pd.DataFrame, targetColumn:'str|tuple[str]'):
dfs = [targetDf] + dfs
for df in dfs:
df.index = pd.to_datetime(df.index)
return reduce(
lambda left, right: pd.merge(
left,
right,
how='outer',
left_index=True,
right_index=True),#.fillna(method='ffill'),
#.fillna(method='bfill'),
# don't bfill here, in many cases its fine to bfill, but not in all.
# maybe we will bfill in model. always bfill After ffill.
dfs)
merged = merge([df2, df3], targetDf=df1, targetColumn=('target', 'key'))
merged
merged.drop_duplicates(subset=('target', 'key'), keep='first')
for col in merged.columns:
if col != ('target', 'key'):
merged[col] = merged[col].fillna(method='ffill')
merged
merged = merged[merged[('target', 'key')].notna()]
merged
###Output
_____no_output_____
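###Markdown
Putting the pieces together: a minimal sketch of the merge-then-filter flow worked out above. The helper name `merge_on_target` is my own; it only wraps the steps already shown.
###Code
def merge_on_target(dfs: list, targetDf: pd.DataFrame, targetColumn):
    ''' outer-merge the feature frames onto the target frame, forward-fill only the
        feature columns, then keep the rows where the target actually has a value '''
    combined = merge(dfs, targetDf=targetDf, targetColumn=targetColumn)
    for col in combined.columns:
        if col != targetColumn:
            combined[col] = combined[col].fillna(method='ffill')
    return combined[combined[targetColumn].notna()]
merge_on_target([df2, df3], targetDf=df1, targetColumn=('target', 'key'))
###Output
_____no_output_____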
###Markdown
ok now that that's solved...
###Code
# I wish I didn't have to expand it out then filter it down... but this isn't a solution...
df1.merge(df2, how='left', left_index=True, right_index=True)
###Output
_____no_output_____
###Markdown
Next, handle names, because the old way should be superseded by the multi-index columns.
###Code
import pandas as pd
from functools import reduce
# Convert all indexes to datetime
for df in [df1, df2, df3]:
df.index = pd.to_datetime(df.index)
# Perform as-of merges
res = reduce(
lambda left, right:
pd.merge_asof(left, right, left_index=True, right_index=True),
[df1, df2, df3])
res
mylist = [1,2,3]
mylist.insert(0, mylist.pop(mylist.index(2)))
mylist
mylist = [df2,df1,df3]
for ix, item in enumerate(mylist):
if ('target', 'key') in item.columns:
mylist.insert(0, mylist.pop(ix))
break
mylist
df3.loc[:, [('feature3' ,'keys')]].shift(-1)
pd.DataFrame(
{('sourceId', 'streamId', target): values for target, values in list({'a': 1, 'b':2}.items()) + [('StreamObservationId', 'observationId')]},
#columns=pd.MultiIndex.from_tuples([(self.sourceId, self.streamId, self.)]),
index=['observedTime'])
###Output
_____no_output_____ |
notebooks/7_CNNs_SarcasmBaseline.ipynb | ###Markdown
Define CNN Architecture
###Code
import torch.nn as nn
import torch.nn.functional as F
class CNN(nn.Module):
def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim,
dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
self.convs = nn.ModuleList([
nn.Conv2d(in_channels = 1,
out_channels = n_filters,
kernel_size = (fs, embedding_dim))
for fs in filter_sizes
])
# =========== OLD APPROACH FOR FIXED NUM OF DIFFERENT FILTERS =========== #
# self.conv_0 = nn.Conv2d(in_channels = 1,
# out_channels = n_filters,
# kernel_size = (filter_sizes[0], embedding_dim))
# self.conv_1 = nn.Conv2d(in_channels = 1,
# out_channels = n_filters,
# kernel_size = (filter_sizes[1], embedding_dim))
# self.conv_2 = nn.Conv2d(in_channels = 1,
# out_channels = n_filters,
# kernel_size = (filter_sizes[2], embedding_dim))
self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
#text = [batch size, sent len]
embedded = self.embedding(text)
#embedded = [batch size, sent len, emb dim]
embedded = embedded.unsqueeze(1)
#embedded = [batch size, 1, sent len, emb dim]
conved = [F.relu(conv(embedded)).squeeze(3) for conv in self.convs]
#conved_n = [batch size, n_filters, sent len - filter_sizes[n] + 1]
pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved]
# =========== OLD APPROACH FOR FIXED NUM OF DIFFERENT FILTERS =========== #
# #embedded = [batch size, 1, sent len, emb dim]
# conved_0 = F.relu(self.conv_0(embedded).squeeze(3))
# conved_1 = F.relu(self.conv_1(embedded).squeeze(3))
# conved_2 = F.relu(self.conv_2(embedded).squeeze(3))
# #conved_n = [batch size, n_filters, sent len - filter_sizes[n] + 1]
# pooled_0 = F.max_pool1d(conved_0, conved_0.shape[2]).squeeze(2)
# pooled_1 = F.max_pool1d(conved_1, conved_1.shape[2]).squeeze(2)
# pooled_2 = F.max_pool1d(conved_2, conved_2.shape[2]).squeeze(2)
#pooled_n = [batch size, n_filters]
# cat = self.dropout(torch.cat((pooled_0, pooled_1, pooled_2), dim = 1))
#cat = [batch size, n_filters * len(filter_sizes)]
cat = self.dropout(torch.cat(pooled, dim = 1))
return self.fc(cat)
# =========== EXPERIMENT WITH 1D CNN =========== #
class CNN1d(nn.Module):
def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim,
dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
self.convs = nn.ModuleList([
nn.Conv1d(in_channels = embedding_dim,
out_channels = n_filters,
kernel_size = fs)
for fs in filter_sizes
])
self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
#text = [batch size, sent len]
embedded = self.embedding(text)
#embedded = [batch size, sent len, emb dim]
embedded = embedded.permute(0, 2, 1)
#embedded = [batch size, emb dim, sent len]
conved = [F.relu(conv(embedded)) for conv in self.convs]
#conved_n = [batch size, n_filters, sent len - filter_sizes[n] + 1]
pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved]
#pooled_n = [batch size, n_filters]
cat = self.dropout(torch.cat(pooled, dim = 1))
#cat = [batch size, n_filters * len(filter_sizes)]
return self.fc(cat)
# import json
# with open('/content/Sarcasm_Headlines_Dataset_v2.json') as data_file:
# data = json.load(data_file)
# for element in data:
# if 'article_link' in element:
# del element['article_link']
# with open('Sarcasm_Headlines_Dataset_v2.json', 'w') as data_file:
# data = json.dump(data, data_file)
###Output
_____no_output_____
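###Markdown
A quick shape sanity check of the `CNN` module defined above. This is a throwaway sketch: the vocabulary size, filter settings and pad index here are arbitrary placeholders, not the values used later in the notebook.
###Code
import torch
# a tiny instance: vocab of 100, 8-dim embeddings, two filter sizes with 4 filters each
_toy_cnn = CNN(vocab_size=100, embedding_dim=8, n_filters=4, filter_sizes=[2, 3], output_dim=1, dropout=0.5, pad_idx=0)
print(_toy_cnn(torch.zeros(2, 10, dtype=torch.long)).shape)  # expected: torch.Size([2, 1])
###Output
_____no_output_____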
###Markdown
Sarcasm Data (Headlines)
###Code
import torch
from torchtext.legacy import data
from torchtext.legacy import datasets
import random
import numpy as np
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
# create fields for the sarcasm dataset
TEXT = data.Field(tokenize = 'spacy',
tokenizer_language = 'en_core_web_sm',
batch_first = True
)
LABEL = data.LabelField(dtype=torch.float)
# tuples representative of tabular format
fields = [
('is_sarcastic', LABEL),
('headline', TEXT)
]
# LOAD SARCASM DATA
sarc_train, = data.TabularDataset.splits(
path='/content',
train='Sarcasm_Headlines_Dataset_v2.csv',
format='csv',
fields=fields,
skip_header=False
)
MAX_VOCAB_SIZE = 25_000
TEXT.build_vocab(sarc_train,
max_size = MAX_VOCAB_SIZE,
vectors = "glove.6B.100d",
unk_init = torch.Tensor.normal_)
LABEL.build_vocab(sarc_train)
sarc_train, sarc_valid, sarc_test = sarc_train.split(random_state = random.seed(SEED), split_ratio=[0.4,0.3,0.3])
BATCH_SIZE = 64
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
sarc_train_iterator, sarc_valid_iterator, sarc_test_iterator = data.BucketIterator.splits(
(sarc_train, sarc_valid, sarc_test),
batch_size = BATCH_SIZE,
device = device,
sort=False)
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
N_FILTERS = 100
FILTER_SIZES = [3,4,5]
OUTPUT_DIM = 1
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
model = CNN(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM, DROPOUT, PAD_IDX)
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
pretrained_embeddings = TEXT.vocab.vectors
model.embedding.weight.data.copy_(pretrained_embeddings)
# zero initial weights of <unk> and <pad>
UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]
model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)
###Output
_____no_output_____
###Markdown
TRAINING
###Code
import torch.optim as optim
optimizer = optim.Adam(model.parameters())
criterion = nn.BCEWithLogitsLoss()
model = model.to(device)
criterion = criterion.to(device)
###Output
_____no_output_____
###Markdown
Helpers
###Code
def binary_accuracy(preds, y):
"""
Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
"""
#round predictions to the closest integer
rounded_preds = torch.round(torch.sigmoid(preds))
correct = (rounded_preds == y).float() #convert into float for division
acc = correct.sum() / len(correct)
return acc
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
# print(batch.__dict__)
optimizer.zero_grad()
predictions = model(batch.headline).squeeze(1)
loss = criterion(predictions, batch.is_sarcastic)
acc = binary_accuracy(predictions, batch.is_sarcastic)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in iterator:
predictions = model(batch.headline).squeeze(1)
loss = criterion(predictions, batch.is_sarcastic)
acc = binary_accuracy(predictions, batch.is_sarcastic)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
###Output
_____no_output_____
###Markdown
Training - Sarcasm
###Code
N_EPOCHS = 10
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss, train_acc = train(model, sarc_train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, sarc_valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'CNN-sarcasm-baseline.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
# Validation
model.load_state_dict(torch.load('CNN-sarcasm-baseline.pt'))
test_loss, test_acc = evaluate(model, sarc_test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
###Output
Test Loss: 0.326 | Test Acc: 85.87%
###Markdown
Test - Sarcasm
###Code
import spacy
nlp = spacy.load('en_core_web_sm')
def predict_sarcasm(model, sentence, min_len = 5):
model.eval()
tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
if len(tokenized) < min_len:
tokenized += ['<pad>'] * (min_len - len(tokenized))
indexed = [TEXT.vocab.stoi[t] for t in tokenized]
tensor = torch.LongTensor(indexed).to(device)
tensor = tensor.unsqueeze(0)
prediction = torch.sigmoid(model(tensor))
return prediction.item()
predict_sarcasm(model, "")
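# a couple of illustrative calls (any headline strings work; these are made-up examples, not data from the dataset):
# predict_sarcasm(model, "area man not sure what to do with hands during conversation")
# predict_sarcasm(model, "government announces new infrastructure funding plan")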
###Output
_____no_output_____ |
17-Perspective&AffineTransforms.ipynb | ###Markdown
[](https://notebooks.azure.com/import/gh/Alireza-Akhavan/class.vision) Getting Perspective Transform: Given 4 corresponding points, this function creates a 3x3 perspective-transform matrix according to the formula below, such that, for each point:
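(The original equation image is not preserved here; this is the standard OpenCV definition of `getPerspectiveTransform`.)
$$\begin{bmatrix} t_i x'_i \\ t_i y'_i \\ t_i \end{bmatrix} = M \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}, \qquad \mathrm{dst}(i) = (x'_i, y'_i), \; \mathrm{src}(i) = (x_i, y_i), \; i = 0, 1, 2, 3$$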
###Code
import cv2
import numpy as np
image = cv2.imread('images/scan.jpg')
cv2.imshow('Original', image)
cv2.waitKey(0)
# Coordinates of the 4 corners of the original image
points_A = np.float32([[320,15], [700,215], [85,610], [530,780]])
# Coordinates of the 4 corners of the desired output
# We use a ratio of an A4 Paper 1 : 1.41
points_B = np.float32([[0,0], [420,0], [0,594], [420,594]])
# Use the two sets of four points to compute
# the Perspective Transformation matrix, M
M = cv2.getPerspectiveTransform(points_A, points_B)
warped = cv2.warpPerspective(image, M, (420,594))
cv2.imshow('warpPerspective', warped)
cv2.waitKey(0)
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
getAffineTransform: Given 3 corresponding points, this function creates a 2x3 affine transformation matrix according to the formula below, such that, for each point:
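(Again the original equation image is not preserved; this is the standard OpenCV definition of `getAffineTransform`.)
$$\begin{bmatrix} x'_i \\ y'_i \end{bmatrix} = M \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}, \qquad \mathrm{dst}(i) = (x'_i, y'_i), \; \mathrm{src}(i) = (x_i, y_i), \; i = 0, 1, 2$$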
###Code
import cv2
import numpy as np
image = cv2.imread('images/qodsi.jpg')
rows,cols,ch = image.shape
cv2.imshow('Original', image)
cv2.waitKey(0)
# Coordinates of the 3 source points in the original image
points_A = np.float32([[320,15], [700,215], [85,610]])
# Coordinates of the 3 corresponding destination points
# We use a ratio of an A4 Paper 1 : 1.41
points_B = np.float32([[0,0], [420,0], [0,594]])
# Use the two sets of three points to compute
# the Affine Transformation matrix, M
M = cv2.getAffineTransform(points_A, points_B)
warped = cv2.warpAffine(image, M, (cols, rows))
cv2.imshow('warpPerspective', warped)
cv2.waitKey(0)
cv2.destroyAllWindows()
###Output
_____no_output_____ |
Old Vers/Traffic_Sign_Classifier-Ver-01.ipynb | ###Markdown
Building a Traffic Sign Recognition Classifier, Deep Learning Approach. Loading the Data. > **Note**: In this project, I use deep neural networks and convolutional neural networks to classify traffic signs, using images from the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).
###Code
# Load pickled data
import pickle
import pandas as pd
import cv2
import numpy as np
from sklearn import preprocessing
import os
from random import shuffle
import glob
from pathlib import Path
import tensorflow as tf
import matplotlib.pyplot as plt
import math
from keras.layers import Input, InputLayer, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout
from keras.models import Sequential,Model
from keras.optimizers import SGD
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
import keras
from keras import backend as K
training_file = "traffic-signs-data/train.p"
validation_file = "traffic-signs-data/valid.p"
testing_file = "traffic-signs-data/test.p"
csv_features_file = "signnames.csv"
with open(training_file, mode="rb") as f:
train_data = pickle.load(f)
with open(validation_file, mode="rb") as f:
valid_data = pickle.load(f)
with open(testing_file, mode="rb") as f:
test_data = pickle.load(f)
features_df = pd.read_csv(csv_features_file)
unique_label_ids = [row["ClassId"] for _, row in features_df.iterrows()]
unique_label_names = [row["SignName"] for _, row in features_df.iterrows()]
###Output
_____no_output_____
###Markdown
Dataset Summary & Exploration. The pickled data is a dictionary with 4 key/value pairs: - `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels). - `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id. - `'sizes'` is a list containing tuples, (width, height), representing the original width and height of the image. - `'coords'` is a list containing tuples, (x1, y1, x2, y2), representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**
###Code
x_train_raw_data, y_train_raw_data = train_data["features"], train_data["labels"]
x_valid_raw_data, y_valid_raw_data = valid_data["features"], valid_data["labels"]
x_test_raw_data, y_test_raw_data = test_data["features"], test_data["labels"]
# in case resizing is required - not used in this pipeline
target_image_size = 32
n_train = x_train_raw_data.shape[0]
n_validation = x_valid_raw_data.shape[0]
n_test = x_test_raw_data.shape[0]
# shape of an traffic sign image
image_shape = x_train_raw_data.shape[1:4]
# unique classes/labels there are in the dataset
assert np.array_equal(np.sort(np.unique(y_train_raw_data)), np.sort(np.asarray(unique_label_ids))), \
"There is a mismatch in the training data"
n_classes = len(unique_label_ids)
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
###Output
Number of training examples = 34799
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43
###Markdown
Pre-processing - 1. **Whitening the images**: - 1. [Image Pre-processing for Deep Learning](https://towardsdatascience.com/image-pre-processing-c1aec0be3edf) - 2. [Preprocessing for deep learning: from covariance matrix to image whitening](https://www.freecodecamp.org/news/preprocessing-for-deep-learning-from-covariance-matrix-to-image-whitening-9e2b9c75165c/)- 3. [Preprocessing for deep learning: from covariance matrix to image whitening](https://hadrienj.github.io/posts/Preprocessing-for-deep-learning/)
###Code
import random
from importlib import reload
import classificationModules
reload(classificationModules)
from classificationModules import Cnn
cnn = Cnn()
cnn.init_model(unique_label_names, unique_label_ids)
x_train_data, y_train_data = cnn.normalize(x_train_raw_data, approach="scale"), cnn.one_hot_encode(y_train_raw_data)
x_valid_data, y_valid_data = cnn.normalize(x_valid_raw_data, approach="scale"), cnn.one_hot_encode(y_valid_raw_data)
x_test_data, y_test_data = cnn.normalize(x_test_raw_data, approach="scale"), cnn.one_hot_encode(y_test_raw_data)
# working images, a temporary variable to process images
train_images = x_train_data[:1000, :, :, :]
w_images = x_train_data[:1000, :, :, :]
w_images = w_images.reshape(w_images.shape[0], w_images.shape[1]*w_images.shape[2]*w_images.shape[3])
print(w_images.shape)
print(w_images.mean(axis=0).shape)
w_images = w_images - w_images.mean(axis=0)
print(w_images.mean(axis=0))
w_images_cov = np.cov(w_images, rowvar=True)
w_images_cov_u, w_images_cov_s, w_images_cov_v = np.linalg.svd(w_images_cov)
# print(w_images_cov_u.shape, w_images_cov_s.shape)
epsilon = 0.1
w_images_zca = w_images_cov_u.dot(np.diag(1.0/np.sqrt(w_images_cov_s + epsilon))).dot(w_images_cov_u.T).dot(w_images)
###Output
_____no_output_____
###Markdown
Shuffle the training data
###Code
labelled_data = list(zip(x_train_data, y_train_data))
random.shuffle(labelled_data)
random_x_train_data, random_y_train_data = zip(*labelled_data)
random_x_train_data = list(random_x_train_data)
random_y_train_data = list(random_y_train_data)
#print(random_y_train_data[0])
#print(random_x_train_data[0].shape)
#plt.imshow(random_x_train_data[0])
###Output
_____no_output_____
###Markdown
Exploring and visualizing the dataset
###Code
%matplotlib inline
# cnn.display_images(x_test_data[:5], y_test_raw_data[:5], normalized=True)
###Output
_____no_output_____
###Markdown
Model Architecture
###Code
def create_conv_net(x, keep_prob, n_classes):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data - normalized image
: keep_prob: Placeholder tensor that hold dropout keep probability
: return: Tensor that represents logits
"""
# Applying a few Convolution and Max Pool layers
conv_ksize = (3, 3) # filter dimensions
# conv_ksize = (3, 3) # filter dimensions
conv_strides = (1, 1)
pool_ksize = (2, 2) # Filter kernel/patch dimensions [batch, height, width, channels]
pool_strides = (1, 1)
batch_normalizer = tf.keras.layers.BatchNormalization(trainable=False)
x = batch_normalizer(x)
#conv_layer = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = 64 # D_out : number of out filters
#batch_normalizer = tf.keras.layers.BatchNormalization(trainable=True)
conv_layer = cnn.conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides,
wieghts_name="weights-layer-1", layer_name="hidden-layer-1", batch_normalizer=None)
#conv_layer = normalize_batch(conv_layer)
# next layer
conv_ksize = (3, 3) # output layers dimensions
conv_num_outputs = 128
#batch_normalizer = tf.keras.layers.BatchNormalization(trainable=True)
    # feed the output of the previous conv block (not the raw input) into the next block
    conv_layer = cnn.conv2d_maxpool(conv_layer, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides,
                            wieghts_name="weights-layer-2", layer_name="hidden-layer-2", batch_normalizer=None)
# next layer
conv_ksize = (3, 3) # output layers dimensions
conv_num_outputs = 256
batch_normalizer = tf.keras.layers.BatchNormalization(trainable=True)
    conv_layer = cnn.conv2d_maxpool(conv_layer, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides,
                            wieghts_name="weights-layer-3", layer_name="hidden-layer-3", batch_normalizer=batch_normalizer)
################################################
    # flattening the output of the final conv layer
    x_tensor = cnn.flatten(conv_layer)
# Applying a few fully connected layers
# x_tensor = tf.layers.batch_normalization(x_tensor)
x_tensor = cnn.fully_conn(x_tensor, 128)
normalize_batch = tf.keras.layers.BatchNormalization(trainable=True)
x_tensor = normalize_batch(x_tensor)
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
# x_tensor = tf.layers.batch_normalization(x_tensor)
x_tensor = cnn.fully_conn(x_tensor, 64)
normalize_batch = tf.keras.layers.BatchNormalization(trainable=True)
x_tensor = normalize_batch(x_tensor)
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
# x_tensor = tf.layers.batch_normalization(x_tensor)
#x_tensor = cnn.fully_conn(x_tensor, 32)
#normalize_batch = tf.keras.layers.BatchNormalization(trainable=True)
#x_tensor = normalize_batch(x_tensor)
#x_tensor = tf.nn.dropout(x_tensor, keep_prob)
# Applying an Output Layer
output_tensor = cnn.output(x_tensor, n_classes)
return output_tensor
##############################
## Build the Neural Network ##
##############################
learning_rate = 0.001
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = cnn.neural_net_image_input((target_image_size, target_image_size, 3))
y = cnn.neural_net_label_input(n_classes)
keep_prob = cnn.neural_net_keep_prob_input()
# Model
logits = create_conv_net(x, keep_prob, n_classes)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name="logits")
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
#optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name="accuracy")
# tests.test_conv_net(conv_net)
###Output
conv2d_maxpool... Start
Checking inputs dimensions...
conv_ksize: (3, 3)
conv_num_outputs: 64
Checking strides dimensions...
conv_strides: (1, 1, 1, 1)
pool_ksize: (1, 2, 2, 1)
pool_strides (1, 1, 1, 1)
conv2d_maxpool... End
conv2d_maxpool... Start
Checking inputs dimensions...
conv_ksize: (3, 3)
conv_num_outputs: 128
Checking strides dimensions...
conv_strides: (1, 1, 1, 1)
pool_ksize: (1, 2, 2, 1)
pool_strides (1, 1, 1, 1)
conv2d_maxpool... End
conv2d_maxpool... Start
Checking inputs dimensions...
conv_ksize: (3, 3)
conv_num_outputs: 256
Checking strides dimensions...
conv_strides: (1, 1, 1, 1)
pool_ksize: (1, 2, 2, 1)
pool_strides (1, 1, 1, 1)
batch_normalizer: <tensorflow.python.keras.layers.normalization.BatchNormalizationV1 object at 0x0000021371D55DD8>
conv2d_maxpool... End
###Markdown
Setting up hyperparameters and training on batches
###Code
epochs = 30
batch_size = 128
keep_probability = 0.8
save_model_path = "saved-model/"
print("Training... start")
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# for batch_features, batch_labels in CommonModules.load_preprocess_training_batch(batch_i, batch_size):
for batch_features, batch_labels in cnn.batch_features_labels(random_x_train_data, random_y_train_data, batch_size):
# print(batch_features.shape)
# print(batch_labels.shape)
cnn.train_neural_network(sess, x, y, keep_prob, optimizer, keep_probability, batch_features, batch_labels)
print("Epoch {0:>2}: ".format(epoch + 1), end="")
cnn.print_stats(sess, x, y, keep_prob, batch_features, batch_labels, x_valid_data, y_valid_data, cost, accuracy)
# Save the Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
print("Training... end")
import random
from importlib import reload
import classificationModules
reload(classificationModules)
from classificationModules import Cnn
cnn = Cnn()
cnn.init_model(unique_label_names, unique_label_ids)
fpr = {}
tpr = {}
auc = {}
top_n_predictions = 3
def test_model(test_features, test_labels, n_classes, n_samples):
"""
Test the saved model against the test dataset
"""
# test_images, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
# loaded_graph = tf.get_default_graph()
with tf.Session(graph=loaded_graph) as sess:
# with tf.Session() as sess:
# sess.run(tf.global_variables_initializer())
# Load model
loader = tf.train.import_meta_graph(save_model_path + ".meta")
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name("x:0")
loaded_y = loaded_graph.get_tensor_by_name("y:0")
loaded_keep_prob = loaded_graph.get_tensor_by_name("keep_prob:0")
loaded_logits = loaded_graph.get_tensor_by_name("logits:0")
loaded_acc = loaded_graph.get_tensor_by_name("accuracy:0")
# sess.run(tf.global_variables_initializer())
# loaded_y = tf.expand_dims(loaded_y, 1)
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in cnn.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print("Testing Accuracy: {}\n".format(test_batch_acc_total/test_batch_count))
# ROC
#all_test_predictions = sess.run(tf.nn.top_k(tf.nn.softmax(loaded_logits), label_categories_count),
# feed_dict={loaded_x: test_features, loaded_y: test_labels, loaded_keep_prob: 1.0})
all_test_predictions = sess.run(tf.nn.softmax(loaded_logits),
feed_dict={loaded_x: test_features, loaded_y: test_labels, loaded_keep_prob: 1.0})
predicted_y_probabilities = (all_test_predictions / all_test_predictions.sum(axis=0, keepdims=1))[::, 0]
#print("all_test_predictions", all_test_predictions[:10])
#print("y_pred_proba", y_pred_proba[:10])
#y_pred = all_test_predictions[1][::, 0]
#y_pred_proba = (all_test_predictions[0] / all_test_predictions[0].sum(axis=0, keepdims=1))[::, 0]
# print("y_pred_proba", y_pred_proba)
fpr["model-a"], tpr["model-a"], auc["model-a"] = cnn.plot_roc_curve(test_labels[::, 0], predicted_y_probabilities,
title="Model-2", legend_title="Model-2, auc")
#fpr, tpr, _ = metrics.roc_curve(test_labels[::, 0], predicted_y_probabilities)
#auc = metrics.roc_auc_score(test_labels[::, 0], predicted_y_probabilities)
#plt.plot(fpr,tpr,label="data 1, auc=" + str(auc))
#plt.legend(loc=4)
#plt.show()
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
# random_test_predictions = sess.run(tf.nn.softmax(loaded_logits),
# feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
#random_test_predictions = sess.run(tf.nn.top_k(random_test_predictions, label_categories_count))
random_test_predictions = sess.run(tf.nn.top_k(tf.nn.softmax(loaded_logits), n_classes),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels,
loaded_keep_prob: 1.0})
# print("random_test_predictions", random_test_predictions.values[0])
# print("random_test_labels", np.asanyarray(random_test_labels).shape)
# print("random_test_labels", cnn.lb.inverse_transform(np.asanyarray(random_test_labels)))
# random_test_predictions = sess.run(
# tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
# feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
cnn.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model(x_test_data, y_test_data, n_classes, n_samples=10)
###Output
INFO:tensorflow:Restoring parameters from saved-model/
Testing Accuracy: 0.8815957542621728
###Markdown
---- Design and Test a Model Architecture Pre-process the Data Set (normalization, grayscale, etc.) Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project. Other pre-processing steps are optional. You can try different techniques to see if it improves performance. Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
###Code
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
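###Markdown
A minimal sketch of the quick `(pixel - 128)/ 128` normalization mentioned above, applied to the raw arrays loaded earlier. This is illustration only; the main pipeline above already normalizes via `cnn.normalize`.
###Code
# quick approximate normalization: roughly zero-mean, values in [-1, 1]
x_train_quick_norm = (x_train_raw_data.astype(np.float32) - 128.0) / 128.0
print(x_train_quick_norm.mean(), x_train_quick_norm.min(), x_train_quick_norm.max())
###Output
_____no_output_____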
###Markdown
Model Architecture
###Code
### Define your architecture here.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Train, Validate and Test the Model. A validation set can be used to assess how well the model is performing. A low accuracy on both the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
###Code
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
--- Step 3: Test a Model on New ImagesTo give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images
###Code
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Predict the Sign Type for Each Image
###Code
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Analyze Performance
###Code
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
###Output
_____no_output_____
###Markdown
Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.htmltop_k) could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:``` (5, 6) arraya = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]])```Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:```TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32))```Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
###Code
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Project WriteupOnce you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file. > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \n", "**File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. --- Step 4 (Optional): Visualize the Neural Network's State with Test Images This Section is not required to complete but acts as an additional excersise for understaning the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what it's feature maps look like by plotting the output of the network's weight layers in response to a test stimuli image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol. Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for it's second convolutional layer you could enter conv2 as the tf_activation variable.For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. Your output should look something like this (above)
###Code
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 and activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
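# A minimal usage sketch (illustrative only): pass a single preprocessed image and the tensor
# of whichever weight layer you want to inspect. `conv_layer_tensor` below is a placeholder
# name, not a variable defined in this notebook.
# outputFeatureMap(x_test_data[:1], conv_layer_tensor, plt_num=1)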
###Output
_____no_output_____ |
old_projects/misc_projects/VQE_LiH_BKtransform.ipynb | ###Markdown
LOOK at: PHYS. REV. X, **8**, 031022 (2018)
###Code
new_BK = BK_Qubit_Reordering(
QubitHamiltonian,
Qubit_Op_list_Second_Quant_CC_Ops_ia,
Qubit_Op_list_Second_Quant_CC_Ops_ijab,
Hamilt.molecule.n_qubits,
ansatz_obj.Get_BK_HF_state_in_OCC_basis(),
Hamilt.molecule.n_electrons)
REDUCED_Qubit_MolecularHamiltonian = new_BK.Get_Reordered_Hamiltonian_2_qubits_removed()
REDUCED_Qubit_MolecularHamiltonian
Hamiltonian_graph_obj = Openfermion_Hamiltonian_Graph(REDUCED_Qubit_MolecularHamiltonian)
commutativity_flag = 'AC' ## <- defines relationship between sets!!!
plot_graph = False
Graph_colouring_strategy='largest_first'
anti_commuting_sets = Hamiltonian_graph_obj.Get_Clique_Cover_as_QubitOp(commutativity_flag, Graph_colouring_strategy=Graph_colouring_strategy, plot_graph=plot_graph)
anti_commuting_sets
Qubit_Op_list_Second_Quant_CC_Ops_ijab
new_BK.Get_Reordered_CC_qubit_terms_2_qubits_removed()
from quchem.Simulating_Quantum_Circuit import *
n_shots= 1000
def VQE_experiment_ENERGY(theta_ia_ijab_combined_list):
theta_params_ia = [theta_ia_ijab_combined_list[0], theta_ia_ijab_combined_list[1]]
theta_params_ijab = [theta_ia_ijab_combined_list[2]]
ansatz_cirq_circuit = full_ansatz_Q_Circ.Get_Full_HF_UCCSD_QC(theta_params_ia,
theta_params_ijab)
VQE_exp = VQE_Experiment(QubitHamiltonian, ansatz_cirq_circuit, n_shots)
return VQE_exp.Calc_Energy().real
theta_combined = [1,2, np.pi]
VQE_experiment_ENERGY(theta_combined)
import random
theta_ia_random_input = [random.uniform(0, 2*np.pi) for _ in range(len(Sec_Quant_CC_ops_ia))]
theta_ijab_random_input = [random.uniform(0, 2*np.pi) for _ in range(len(Sec_Quant_CC_ops_ijab))]
theta_combined_random_input = [*theta_ia_random_input, *theta_ijab_random_input]
### optimizer
from quchem.Scipy_Optimizer import *
GG = Optimizer(VQE_experiment_ENERGY, theta_combined_random_input, 'Nelder-Mead', store_values=True, display_iter_steps=True,
tol=1e-5,
display_convergence_message=True)
GG.get_env(50)
GG.plot_convergence()
plt.show()
Hamilt.molecule.fci_energy
from quchem.TensorFlow_Opt import *
###Output
_____no_output_____
###Markdown
**gradient is given by**https://arxiv.org/pdf/1906.08728.pdf$$\frac{\partial O(\theta)}{\partial \theta}=\left\langle\overrightarrow{0}\left|\hat{U}^{\dagger} \hat{R}_{y}^{C \dagger}(\theta+\pi / 4) \hat{V}^{\dagger} \hat{O} \hat{V} \hat{R}_{y}^{C}(\theta+\pi / 4) \hat{U}\right| \overrightarrow{0}\right\rangle -\left\langle\overrightarrow{0}\left|\hat{U}^{\dagger} \hat{R}_{y}^{C \dagger}(\theta-\pi / 4) \hat{V}^{\dagger} \hat{O} \hat{V} \hat{R}_{y}^{C}(\theta-\pi / 4) \hat{U}\right| \overrightarrow{0}\right\rangle$$$$\frac{\partial O(\theta)}{\partial \theta} =O(\theta+\pi / 4)-O(\theta-\pi / 4)$$
###Code
def calc_gradient(theta_ia_theta_jab_list):
grad_list=[]
for index, theta in enumerate(theta_ia_theta_jab_list):
new_theta_list = theta_ia_theta_jab_list.copy()
new_theta_list[index] = theta + np.pi/4
Obs_PLUS = VQE_experiment_ENERGY(new_theta_list)
new_theta_list[index] = theta - np.pi/4
Obs_MINUS = VQE_experiment_ENERGY(new_theta_list)
gradient = Obs_PLUS - Obs_MINUS
grad_list.append((gradient, theta))
return grad_list
###Output
_____no_output_____
###Markdown
Note: this is very SLOW, as it has to run two separate experiments for each parameter before taking a step!
###Code
X0 = [random.uniform(0, 2*np.pi) for _ in range(len(Sec_Quant_CC_ops_ia) + len(Sec_Quant_CC_ops_ijab))]
tf_opt = Tensor_Flow_Optimizer(VQE_experiment_ENERGY, X0, 'Adam', calc_gradient, learning_rate=0.1, beta1=0.9,
beta2=0.999, store_values=True, display_iter_steps=True)
tf_opt.optimize(50)
tf_opt.plot_convergence()
from quchem.Adam_Optimizer import *
def calc_gradient_ADAM(theta_ia_theta_jab_list):
grad_list=[]
for index, theta in enumerate(theta_ia_theta_jab_list):
new_theta_list = theta_ia_theta_jab_list.copy()
new_theta_list[index] = theta + np.pi/4
Obs_PLUS = VQE_experiment_ENERGY(new_theta_list)
new_theta_list[index] = theta - np.pi/4
Obs_MINUS = VQE_experiment_ENERGY(new_theta_list)
gradient = Obs_PLUS - Obs_MINUS
grad_list.append(gradient)
return np.array(grad_list)
X0 = np.array([random.uniform(0, 2*np.pi) for _ in range(len(Sec_Quant_CC_ops_ia) + len(Sec_Quant_CC_ops_ijab))])
opt_params, list_of_inputs, list_of_outputs = Adam_Opt(X0, VQE_experiment_ENERGY,
calc_gradient_ADAM,
learning_rate=0.1,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-8,
max_iter=50,
disp=True,
tolerance=1e-3,
store_steps=True)
VQE_experiment_ENERGY(opt_params)
import matplotlib.pyplot as plt
# % matplotlib inline
plt.figure()
plt.plot(list_of_outputs)
plt.xlabel('iterations')
plt.ylabel('objective function value')
plt.show()
###Output
_____no_output_____ |
notebooks/Webscraper.ipynb | ###Markdown
Objective: Speed up my job search by automating searching through the websites I frequently use. Next steps: - Add more websites - Filter jobs based on currently active jobs (date filter) - Make filter criteria configurable externally - Make script executable - Send update to email - Schedule script on server. Preparation
###Code
import numpy as np
import pandas as pd
import os
import datetime
import requests
import pprint
import re
from bs4 import BeautifulSoup
# Make root folder the current working directory
os.chdir('..')
# Folders
folder_settings = './data/'
folder_raw = './data/raw/'
folder_interim = './data/interim/'
folder_clean = './data/processed/'
# Create empty arrays per datapoint
job_titles = []
job_links = []
job_locations = []
job_companies = []
job_exp_dates = []
job_source = []
date_today = datetime.date.today().strftime("%Y-%m-%d")
###Output
_____no_output_____
###Markdown
Scrape websites Nextbillion
###Code
urls = ['https://nextbillion.net/jobs/?jobs-page=1', 'https://nextbillion.net/jobs/?jobs-page=2', 'https://nextbillion.net/jobs/?jobs-page=3']
# Use loop to extract required data per URL
for url in urls:
soup_page = requests.get(url)
soup_page = BeautifulSoup(soup_page.content, 'html.parser')
# grabs each job
jobs = soup_page.findAll("li", {"class":"clearfix"})
for job in jobs:
job_title = job.h3.text
job_titles.append(job_title)
job_location = job.findAll("dd")[1].text
job_locations.append(job_location)
job_company = job.findAll("dd")[0].text
job_companies.append(job_company)
job_exp_date = job.findAll("dl")[3].dd.text
job_exp_dates.append(job_exp_date)
job_link = job.h3.a.get('href')
job_links.append(job_link)
job_source.append('nextbillion')
###Output
_____no_output_____
###Markdown
Findevgateway
###Code
urls = ['https://www.findevgateway.org/jobs?job_type=All®ions=&countries=&remote=All&f%5B0%5D=job_remote%3A4111&f%5B1%5D=job_type%3A3906']
# Use loop to extract required data per website
for url in urls:
soup_page = requests.get(url)
soup_page = BeautifulSoup(soup_page.content, 'html.parser')
# select each available job
jobs = soup_page.findAll("div", {"class":"listing__text"})
# extract data per job in arrays
for job in jobs:
try:
job_title = job.find("a", {"hreflang":"en"}).text
except IndexError:
job_title = 'Not available'
job_titles.append(job_title)
try:
job_location = job.findAll("div", {"class":"postmeta"})[1].text.split(':')[1]
except IndexError:
job_location = 'Not available'
job_locations.append(job_location)
try:
job_company = job.find("div", {"class":"postmeta"}).text[10:]
except IndexError:
job_company = 'Not available'
job_companies.append(job_company)
try:
job_exp_date = job.find("time", {"class":"datetime"}).text
except IndexError:
job_exp_date = 'Not available'
job_exp_dates.append(job_exp_date)
try:
job_link = 'https://www.findevgateway.org' + job.h3.a.get('href')
except IndexError:
            job_link = 'Not available'
job_links.append(job_link)
job_source.append('findevgateway')
###Output
_____no_output_____
###Markdown
Create dataframe
###Code
# create dataframe based on extracted data
jobs_df = pd.DataFrame({'job_title': job_titles,
'company': job_companies,
'location': job_locations,
'expiration_date': job_exp_dates,
'source': job_source,
'weblink': job_links})
# Count unfiltered jobs
jobs_df['job_title'].count()
###Output
_____no_output_____
###Markdown
Filter relevant jobs
###Code
jobs_filtered_df = jobs_df.copy()
# make job titles lower case to prevent the filter from missing differently-cased matches
jobs_filtered_df['job_title'] = jobs_filtered_df['job_title'].str.lower()
filter_criteria = 'manager|data science|data scientist|analytics'
jobs_filtered_df = jobs_filtered_df[jobs_filtered_df['job_title'].str.contains(filter_criteria)]
###Output
_____no_output_____
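###Markdown
A rough sketch of the date filter from the next-steps list above. Expiration strings come in different formats per site, so unparseable dates are simply dropped via `errors='coerce'`; the column choice and behaviour here are assumptions, not a final design.
###Code
# keep only jobs whose expiration date parses and lies on or after today
exp_parsed = pd.to_datetime(jobs_filtered_df['expiration_date'], errors='coerce')
active_jobs_df = jobs_filtered_df[exp_parsed >= pd.Timestamp(date_today)]
active_jobs_df['job_title'].count()
###Output
_____no_output_____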
###Markdown
Export
###Code
# Export complete dataframe
filename = date_today + ' - Job search social data scientist - UNFILTERED.csv'
jobs_df.to_csv(folder_interim + filename)
# Export filtered data
filename = date_today + ' - Job search social data scientist - FILTERED.csv'
jobs_filtered_df.to_csv(folder_clean + filename)
# count jobs after filtering
jobs_filtered_df['job_title'].count()
jobs_filtered_df
###Output
_____no_output_____ |
notebooks/Distribuciones de Probabilidad II/Ejercicios_previos_al_taller.ipynb | ###Markdown
###Code
from scipy.stats import norm, binom
from math import sqrt
import random
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Exercise 1: An intelligence test consists of 200 true-or-false questions. For a person answering completely at random, compute the probability of getting: Item a: 50 questions or fewer correct
###Code
N = 200 # número de ensayos
p = 1/2 # probabilidad de éxito (moneda equilibrada)
q = 1-p # probabilidad de fallar
mu = N*p # media
sigma = sqrt(N*p*q) # desviación estándar
dist_test = norm(loc = mu, scale = sigma) # distribución normal
x = np.linspace(mu - 6*sigma, mu + 6*sigma, 1000)
plt.plot(x, dist_test.pdf(x))
plt.show()
print(f'Probabilidad: {dist_test.cdf(50.5)*100} %')
###Output
Probabilidad: 1.276554301425639e-10 %
###Markdown
Item b: More than 50 and fewer than 100 questions correct
###Code
print(f'Probabilidad: {(dist_test.cdf(99.5)-dist_test.cdf(50.5))*100} %')
###Output
Probabilidad: 47.18140111002151 %
###Markdown
Item c: More than 120 questions correct
###Code
print(f'Probabilidad: {dist_test.sf(120.5)*100} %')
###Output
Probabilidad: 0.18709519777715605 %
###Markdown
Checking the results: A single test is simulated and the number of correct answers is counted:
###Code
prueba = [random.randint(0,1) for i in range(N)]
print('Respuestas correctas:', prueba.count(1)) # es de esperar que la respuesta sea cercana a 100
###Output
Respuestas correctas: 93
###Markdown
Exercise 2: The lifetime of a semiconductor laser at constant power follows a normal distribution with a mean of 7000 hours and a standard deviation of 600 hours.
###Code
mu = 7000 # media
sigma = 600 # desviación estándar
dist_laser = norm(loc = mu, scale = sigma) # distribución normal
x = np.linspace(mu - 6*sigma, mu + 6*sigma, 1000)
plt.plot(x, dist_laser.pdf(x))
plt.show()
###Output
_____no_output_____
###Markdown
Item a: What is the probability that the laser fails before 5000 hours?
###Code
print(f'Probabilidad: {dist_laser.cdf(5000)*100} %')
###Output
Probabilidad: 0.04290603331968372 %
###Markdown
Item b: What is the probability that a laser keeps working after 7000 hours?
###Code
print(f'Probabilidad: {dist_laser.sf(7000)*100} %')
###Output
Probabilidad: 50.0 %
###Markdown
Item c: If three lasers are used in a product and they are assumed to fail independently, what is the probability that all three keep working after 7000 hours?
###Code
print(f'Probabilidad: {binom.pmf(3,3,dist_laser.sf(7000))*100} %')
###Output
Probabilidad: 12.500000000000004 %
###Markdown
Exercise 3: Washers are produced whose inner diameter is normally distributed with mean 0.5 inches (in) and standard deviation 0.005 in. Washers are considered defective if their inner diameter is less than 0.490 in or greater than 0.510 in. If the inner diameter of a randomly chosen washer is measured:
###Code
mu = 0.5 # media
sigma = 0.005 # desviación estándar
dist_arandelas = norm(loc = mu, scale = sigma) # distribución normal
x = np.linspace(mu - 6*sigma, mu + 6*sigma, 1000)
plt.plot(x, dist_arandelas.pdf(x))
plt.show()
###Output
_____no_output_____
###Markdown
Item a: What is the probability that it is not defective?
###Code
prob_defectuosa = dist_arandelas.sf(0.510)+dist_arandelas.cdf(0.490)
print(f'Probabilidad: {(1-prob_defectuosa)*100} %')
###Output
Probabilidad: 95.44997361036418 %
###Markdown
Item b: If 10 washers are taken at random, what is the probability that exactly 4 are defective?
###Code
print(f'Probabilidad: {binom.pmf(4,10,prob_defectuosa)*100} %')
###Output
Probabilidad: 0.0680659378387401 %
###Markdown
Exercise 4: An automatic machine fills soda cans following a normal distribution with mean 0.35 L and standard deviation 0.015 L.
###Code
mu = 0.35 # media
sigma = 0.015 # desviación estándar
dist_latas = norm(loc = mu, scale = sigma) # distribución normal
x = np.linspace(mu - 6*sigma, mu + 6*sigma, 1000)
plt.plot(x, dist_latas.pdf(x))
plt.show()
###Output
_____no_output_____
###Markdown
Item a: If a soda can is taken at random, what is the probability that it contains less than 0.34 L of soda?
###Code
print(f'Probabilidad: {dist_latas.cdf(0.34)*100} %')
###Output
Probabilidad: 25.24925375469239 %
###Markdown
Item b: Cans containing less than the minimum volume corresponding to z = -0.3 are discarded. What is the minimum volume a can must contain so that it is not discarded?
###Code
prob_rechazo = norm.cdf(-0.3)
print(f'Mínimo: {dist_latas.ppf(prob_rechazo)} litros')
###Output
Mínimo: 0.3455 litros
###Markdown
Item c: What is the probability that a randomly chosen can is one of the discarded ones?
###Code
print(f'Probabilidad: {prob_rechazo*100} %')
###Output
Probabilidad: 38.20885778110474 %
###Markdown
Item d: If 500 cans filled by the machine are selected, what is the probability that at least 100 are discarded?
###Code
print(f'Probabilidad: {binom.sf(100,500,prob_rechazo)*100} %')
###Output
Probabilidad: 99.99999999999999 %
|
Scipy/ols.ipynb | ###Markdown
Ordinary Least Squares
###Code
%matplotlib inline
from __future__ import print_function
import numpy as np
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std
np.random.seed(9876789)
###Output
_____no_output_____
###Markdown
OLS estimation. Artificial data:
###Code
nsample = 100
x = np.linspace(0, 10, 100)
X = np.column_stack((x, x**2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)
###Output
_____no_output_____
###Markdown
Our model needs an intercept so we add a column of 1s:
###Code
X = sm.add_constant(X)
y = np.dot(X, beta) + e
# y is a plain NumPy array here, so there is no .values accessor
len(y)
plt.plot(y)
###Output
_____no_output_____
###Markdown
Fit and summary:
###Code
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 1.000
Model: OLS Adj. R-squared: 1.000
Method: Least Squares F-statistic: 4.020e+06
Date: Fri, 25 Nov 2016 Prob (F-statistic): 2.83e-239
Time: 07:19:21 Log-Likelihood: -146.51
No. Observations: 100 AIC: 299.0
Df Residuals: 97 BIC: 306.8
Df Model: 2
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
const 1.3423 0.313 4.292 0.000 0.722 1.963
x1 -0.0402 0.145 -0.278 0.781 -0.327 0.247
x2 10.0103 0.014 715.745 0.000 9.982 10.038
==============================================================================
Omnibus: 2.042 Durbin-Watson: 2.274
Prob(Omnibus): 0.360 Jarque-Bera (JB): 1.875
Skew: 0.234 Prob(JB): 0.392
Kurtosis: 2.519 Cond. No. 144.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
Quantities of interest can be extracted directly from the fitted model. Type ``dir(results)`` for a full list. Here are some examples:
###Code
print('Parameters: ', results.params)
print('R2: ', results.rsquared)
###Output
Parameters: [ 1.34233516 -0.04024948 10.01025357]
R2: 0.999987936503
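###Markdown
A few more attributes that are commonly pulled from the results object are shown below (an illustrative addition; `bse`, `pvalues` and `conf_int()` are standard members of the fitted results returned by `fit()`).
###Code
# A few more quantities exposed by the fitted results object
print('Standard errors: ', results.bse)
print('p-values: ', results.pvalues)
print('95% confidence intervals:\n', results.conf_int())
###Output
_____no_output_____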
###Markdown
OLS non-linear curve but linear in parametersWe simulate artificial data with a non-linear relationship between x and y:
###Code
nsample = 50
sig = 0.5
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample)))
beta = [0.5, 0.5, -0.02, 5.]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
###Output
_____no_output_____
###Markdown
Fit and summary:
###Code
res = sm.OLS(y, X).fit()
print(res.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.933
Model: OLS Adj. R-squared: 0.928
Method: Least Squares F-statistic: 211.8
Date: Fri, 25 Nov 2016 Prob (F-statistic): 6.30e-27
Time: 07:19:21 Log-Likelihood: -34.438
No. Observations: 50 AIC: 76.88
Df Residuals: 46 BIC: 84.52
Df Model: 3
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1 0.4687 0.026 17.751 0.000 0.416 0.522
x2 0.4836 0.104 4.659 0.000 0.275 0.693
x3 -0.0174 0.002 -7.507 0.000 -0.022 -0.013
const 5.2058 0.171 30.405 0.000 4.861 5.550
==============================================================================
Omnibus: 0.655 Durbin-Watson: 2.896
Prob(Omnibus): 0.721 Jarque-Bera (JB): 0.360
Skew: 0.207 Prob(JB): 0.835
Kurtosis: 3.026 Cond. No. 221.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
Extract other quantities of interest:
###Code
print('Parameters: ', res.params)
print('Standard errors: ', res.bse)
print('Predicted values: ', res.predict())
###Output
Parameters: [ 0.46872448 0.48360119 -0.01740479 5.20584496]
Standard errors: [ 0.02640602 0.10380518 0.00231847 0.17121765]
Predicted values: [ 4.77072516 5.22213464 5.63620761 5.98658823 6.25643234
6.44117491 6.54928009 6.60085051 6.62432454 6.6518039
6.71377946 6.83412169 7.02615877 7.29048685 7.61487206
7.97626054 8.34456611 8.68761335 8.97642389 9.18997755
9.31866582 9.36587056 9.34740836 9.28893189 9.22171529
9.17751587 9.1833565 9.25708583 9.40444579 9.61812821
9.87897556 10.15912843 10.42660281 10.65054491 10.8063004
10.87946503 10.86825119 10.78378163 10.64826203 10.49133265
10.34519853 10.23933827 10.19566084 10.22490593 10.32487947
10.48081414 10.66779556 10.85485568 11.01006072 11.10575781]
###Markdown
Draw a plot to compare the true relationship to OLS predictions. Confidence intervals around the predictions are built using the ``wls_prediction_std`` command.
###Code
prstd, iv_l, iv_u = wls_prediction_std(res)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, res.fittedvalues, 'r--.', label="OLS")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
ax.legend(loc='best');
###Output
_____no_output_____
###Markdown
OLS with dummy variablesWe generate some artificial data. There are 3 groups which will be modelled using dummy variables. Group 0 is the omitted/benchmark category.
###Code
nsample = 50
groups = np.zeros(nsample, int)
groups[20:40] = 1
groups[40:] = 2
#dummy = (groups[:,None] == np.unique(groups)).astype(float)
dummy = sm.categorical(groups, drop=True)
x = np.linspace(0, 20, nsample)
# drop reference category
X = np.column_stack((x, dummy[:,1:]))
X = sm.add_constant(X, prepend=False)
beta = [1., 3, -3, 10]
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + e
###Output
_____no_output_____
###Markdown
Inspect the data:
###Code
print(X[:5,:])
print(y[:5])
print(groups)
print(dummy[:5,:])
###Output
[[ 0. 0. 0. 1. ]
[ 0.40816327 0. 0. 1. ]
[ 0.81632653 0. 0. 1. ]
[ 1.2244898 0. 0. 1. ]
[ 1.63265306 0. 0. 1. ]]
[ 9.28223335 10.50481865 11.84389206 10.38508408 12.37941998]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 2 2 2 2 2 2 2 2 2 2]
[[ 1. 0. 0.]
[ 1. 0. 0.]
[ 1. 0. 0.]
[ 1. 0. 0.]
[ 1. 0. 0.]]
###Markdown
Fit and summary:
###Code
res2 = sm.OLS(y, X).fit()
print(res2.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.978
Model: OLS Adj. R-squared: 0.976
Method: Least Squares F-statistic: 671.7
Date: Fri, 25 Nov 2016 Prob (F-statistic): 5.69e-38
Time: 07:19:22 Log-Likelihood: -64.643
No. Observations: 50 AIC: 137.3
Df Residuals: 46 BIC: 144.9
Df Model: 3
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1 0.9999 0.060 16.689 0.000 0.879 1.121
x2 2.8909 0.569 5.081 0.000 1.746 4.036
x3 -3.2232 0.927 -3.477 0.001 -5.089 -1.357
const 10.1031 0.310 32.573 0.000 9.479 10.727
==============================================================================
Omnibus: 2.831 Durbin-Watson: 1.998
Prob(Omnibus): 0.243 Jarque-Bera (JB): 1.927
Skew: -0.279 Prob(JB): 0.382
Kurtosis: 2.217 Cond. No. 96.3
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
Draw a plot to compare the true relationship to OLS predictions:
###Code
prstd, iv_l, iv_u = wls_prediction_std(res2)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="Data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, res2.fittedvalues, 'r--.', label="Predicted")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
legend = ax.legend(loc="best")
###Output
_____no_output_____
###Markdown
Joint hypothesis test F testWe want to test the hypothesis that both coefficients on the dummy variables are equal to zero, that is, $R \times \beta = 0$. An F test leads us to strongly reject the null hypothesis of identical constant in the 3 groups:
###Code
R = [[0, 1, 0, 0], [0, 0, 1, 0]]
print(np.array(R))
print(res2.f_test(R))
###Output
[[0 1 0 0]
[0 0 1 0]]
<F test: F=array([[ 145.49268198]]), p=1.2834419617290837e-20, df_denom=46, df_num=2>
###Markdown
You can also use formula-like syntax to test hypotheses
###Code
print(res2.f_test("x2 = x3 = 0"))
###Output
<F test: F=array([[ 145.49268198]]), p=1.2834419617290837e-20, df_denom=46, df_num=2>
###Markdown
Small group effectsIf we generate artificial data with smaller group effects, the F test can no longer reject the Null hypothesis:
###Code
beta = [1., 0.3, -0.0, 10]
y_true = np.dot(X, beta)
y = y_true + np.random.normal(size=nsample)
res3 = sm.OLS(y, X).fit()
print(res3.f_test(R))
print(res3.f_test("x2 = x3 = 0"))
###Output
<F test: F=array([[ 1.22491119]]), p=0.30318644106320874, df_denom=46, df_num=2>
###Markdown
MulticollinearityThe Longley dataset is well known to have high multicollinearity. That is, the exogenous predictors are highly correlated. This is problematic because it can affect the stability of our coefficient estimates as we make minor changes to model specification.
###Code
from statsmodels.datasets.longley import load_pandas
y = load_pandas().endog
X = load_pandas().exog
X = sm.add_constant(X)
###Output
_____no_output_____
###Markdown
Fit and summary:
###Code
ols_model = sm.OLS(y, X)
ols_results = ols_model.fit()
print(ols_results.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: TOTEMP R-squared: 0.995
Model: OLS Adj. R-squared: 0.992
Method: Least Squares F-statistic: 330.3
Date: Fri, 25 Nov 2016 Prob (F-statistic): 4.98e-10
Time: 07:19:23 Log-Likelihood: -109.62
No. Observations: 16 AIC: 233.2
Df Residuals: 9 BIC: 238.6
Df Model: 6
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
const -3.482e+06 8.9e+05 -3.911 0.004 -5.5e+06 -1.47e+06
GNPDEFL 15.0619 84.915 0.177 0.863 -177.029 207.153
GNP -0.0358 0.033 -1.070 0.313 -0.112 0.040
UNEMP -2.0202 0.488 -4.136 0.003 -3.125 -0.915
ARMED -1.0332 0.214 -4.822 0.001 -1.518 -0.549
POP -0.0511 0.226 -0.226 0.826 -0.563 0.460
YEAR 1829.1515 455.478 4.016 0.003 798.788 2859.515
==============================================================================
Omnibus: 0.749 Durbin-Watson: 2.559
Prob(Omnibus): 0.688 Jarque-Bera (JB): 0.684
Skew: 0.420 Prob(JB): 0.710
Kurtosis: 2.434 Cond. No. 4.86e+09
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 4.86e+09. This might indicate that there are
strong multicollinearity or other numerical problems.
###Markdown
Condition numberOne way to assess multicollinearity is to compute the condition number. Values over 20 are worrisome (see Greene 4.9). The first step is to normalize the independent variables to have unit length:
###Code
norm_x = X.values
for i, name in enumerate(X):
if name == "const":
continue
norm_x[:,i] = X[name]/np.linalg.norm(X[name])
norm_xtx = np.dot(norm_x.T,norm_x)
###Output
_____no_output_____
###Markdown
Then, we take the square root of the ratio of the biggest to the smallest eigen values.
###Code
eigs = np.linalg.eigvals(norm_xtx)
condition_number = np.sqrt(eigs.max() / eigs.min())
print(condition_number)
###Output
56240.8709118
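###Markdown
A complementary multicollinearity diagnostic (an illustrative sketch added here, not part of the original example) is the variance inflation factor, available from `statsmodels.stats.outliers_influence`; VIF is scale-invariant, so it can be computed on the (possibly normalized) design matrix directly.
###Code
# Sketch: variance inflation factors for the Longley predictors
from statsmodels.stats.outliers_influence import variance_inflation_factor
exog = X.values
for i, name in enumerate(X.columns):
    if name == "const":
        continue  # skip the intercept column in the report
    print(name, variance_inflation_factor(exog, i))
###Output
_____no_output_____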
###Markdown
Dropping an observationGreene also points out that dropping a single observation can have a dramatic effect on the coefficient estimates:
###Code
ols_results2 = sm.OLS(y.iloc[:-1], X.iloc[:-1]).fit()  # drop the last observation (.ix was removed from pandas)
print("Percentage change %4.2f%%\n"*7 % tuple([i for i in (ols_results2.params - ols_results.params)/ols_results.params*100]))
###Output
Percentage change -13.35%
Percentage change -236.18%
Percentage change -23.69%
Percentage change -3.36%
Percentage change -7.26%
Percentage change -200.46%
Percentage change -13.34%
###Markdown
We can also look at formal statistics for this such as the DFBETAS -- a standardized measure of how much each coefficient changes when that observation is left out.
###Code
infl = ols_results.get_influence()
###Output
_____no_output_____
###Markdown
In general we may consider DFBETAS in absolute value greater than $2/\sqrt{N}$ to be influential observations (a short sketch after the table below applies this cutoff)
###Code
2./len(X)**.5
print(infl.summary_frame().filter(regex="dfb"))
###Output
dfb_const dfb_GNPDEFL dfb_GNP dfb_UNEMP dfb_ARMED dfb_POP dfb_YEAR
0 -0.016406 -0.234566 -0.045095 -0.121513 -0.149026 0.211057 0.013388
1 -0.020608 -0.289091 0.124453 0.156964 0.287700 -0.161890 0.025958
2 -0.008382 0.007161 -0.016799 0.009575 0.002227 0.014871 0.008103
3 0.018093 0.907968 -0.500022 -0.495996 0.089996 0.711142 -0.040056
4 1.871260 -0.219351 1.611418 1.561520 1.169337 -1.081513 -1.864186
5 -0.321373 -0.077045 -0.198129 -0.192961 -0.430626 0.079916 0.323275
6 0.315945 -0.241983 0.438146 0.471797 -0.019546 -0.448515 -0.307517
7 0.015816 -0.002742 0.018591 0.005064 -0.031320 -0.015823 -0.015583
8 -0.004019 -0.045687 0.023708 0.018125 0.013683 -0.034770 0.005116
9 -1.018242 -0.282131 -0.412621 -0.663904 -0.715020 -0.229501 1.035723
10 0.030947 -0.024781 0.029480 0.035361 0.034508 -0.014194 -0.030805
11 0.005987 -0.079727 0.030276 -0.008883 -0.006854 -0.010693 -0.005323
12 -0.135883 0.092325 -0.253027 -0.211465 0.094720 0.331351 0.129120
13 0.032736 -0.024249 0.017510 0.033242 0.090655 0.007634 -0.033114
14 0.305868 0.148070 0.001428 0.169314 0.253431 0.342982 -0.318031
15 -0.538323 0.432004 -0.261262 -0.143444 -0.360890 -0.467296 0.552421
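###Markdown
A short sketch (added here) that applies the $2/\sqrt{N}$ cutoff from above and lists the observations with at least one DFBETAS value exceeding it in absolute value:
###Code
# Flag observations whose DFBETAS exceed the 2/sqrt(N) rule of thumb
dfbetas = infl.summary_frame().filter(regex="dfb")
cutoff = 2. / len(X) ** .5
influential = dfbetas[(dfbetas.abs() > cutoff).any(axis=1)]
print("Cutoff:", cutoff)
print("Influential observations:", influential.index.tolist())
###Output
_____no_output_____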
|
Python_Jupyter_Training/Week_1/1_Hello_World.ipynb | ###Markdown
Welcome to Python! Let's start with the best way to get familiar with a language: Hello World This will teach you:- How to print in python ... an invaluable skill every programmer will utilize- How to run a program. Sounds simple but hey we need to start somewhere. We will print to the terminal a string (group of characters) that says "Hello World"
###Code
print("Hello world")
###Output
Hello world
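###Markdown
As a tiny extra step beyond the original lesson, the same `print` function can also display the contents of a variable (the variable name `greeting` is just an example):
###Code
# Store the text in a variable, then print the variable
greeting = "Hello world"
print(greeting)
###Output
_____no_output_____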
|
Protein_ligand_JP.ipynb | ###Markdown
This is a Japanese-localized version of a [Making-it-rain](https://github.com/pablo-arantes/Making-it-rain) notebook. The original notebook can be launched from the button below; to run this Japanese notebook on Colab, use the button below. **Welcome!** This is a Jupyter notebook for running molecular dynamics (MD) simulations of **protein and ligand** systems using OpenMM and the AMBER force field. This notebook is supplementary material for the paper "***Making it rain: Cloud-based molecular simulations for everyone***" ([link](https://doi.org/10.1021/acs.jcim.1c00998)), and we recommend reading it before using this pipeline. The main goal of this notebook is to demonstrate how to harness the power of cloud computing to run microsecond-scale MD simulations in a cheap and feasible way. --- **This notebook is not a standard MD simulation protocol.** It is simply a simple MD pipeline illustrating each step of a simulation protocol. --- **Bugs** - If you find a bug, please report an issue at https://github.com/pablo-arantes/making-it-rain/issues **Acknowledgments** - We thank the OpenMM team for developing an excellent open-source engine. - We thank the ChemosimLab ([@ChemosimLab](https://twitter.com/ChemosimLab)) team for developing the excellent tool [ProLIF](https://prolif.readthedocs.io/en/latest/index.html) (Protein-Ligand Interaction Fingerprints). - Making-it-rain was developed by **Pablo R. Arantes** ([@pablitoarantes](https://twitter.com/pablitoarantes)), **Marcelo D. Polêto** ([@mdpoleto](https://twitter.com/mdpoleto)), **Conrado Pedebos** ([@ConradoPedebos](https://twitter.com/ConradoPedebos)) and **Rodrigo Ligabue-Braun** ([@ligabue_braun](https://twitter.com/ligabue_braun)). - Credit also goes to [David Koes](https://github.com/dkoes) for his awesome plugin [py3Dmol](https://3dmol.csb.pitt.edu/). - For related notebooks, see: [Making-it-rain](https://github.com/pablo-arantes/making-it-rain) **Introduction** In general, MD simulations rely on 1) a set of atomic coordinates for all atoms in the simulation box and 2) a set of force field parameters describing the interaction energies between atoms. As input, you need: * a protein .pdb file and a ligand .pdb file containing the atomic coordinates. In this notebook, we simulate PDB 1AKI (hen egg white lysozyme). To build the simulation box, we use the LEaP program (https://ambermd.org/tutorials/pengfei/index.php). LEaP serves as a common gateway between many kinds of chemical structure files (mainly .pdb and .mol2) and the Amber model parameter files (such as .lib, .prepi, parm.dat and .frcmod). Each parameter file contains the information needed to set up a simulation, such as energy minimization and molecular dynamics. LEaP operates within the larger workflow described in Section 1.1 of the [Amber manual](https://ambermd.org/doc12/Amber20.pdf). To build the ligand topology, we use the general AMBER force field (GAFF - http://ambermd.org/antechamber/gaff.html) and The Open Force Field Toolkit (OpenFF - https://openforcefield.org/). GAFF is compatible with the AMBER force field and provides parameters for almost all organic molecules composed of C, N, O, H, S, P, F, Cl, Br and I. The Open Force Field Toolkit, developed by the [Open Force Field Initiative](https://openforcefield.org/), is a Python toolkit for the development and application of modern molecular mechanics force fields based on direct chemical perception and rigorous statistical parameterization methods. Example input files can be downloaded [here](https://github.com/pablo-arantes/making-it-rain/tree/main/PROTEIN_LIGAND); --- ------ **Setting up the MD environment** First of all, we need to install the libraries and packages required for the simulation. The main packages to be installed are: 1. Anaconda (https://docs.conda.io/en/latest/miniconda.html) 2. OpenMM (https://openmm.org/) 3. PyTraj (https://amber-md.github.io/pytraj/latest/index.html) 4. py3Dmol (https://pypi.org/project/py3Dmol/) 5. ProLIF (https://github.com/chemosim-lab/ProLIF) 6. Numpy (https://numpy.org/) 7. Matplotlib (https://matplotlib.org/) 8. AmberTools (https://ambermd.org/AmberTools.php)
###Code
#@title **Install dependencies**
#@markdown It will take a few minutes. Have a coffee and take a break ;-)
# install dependencies
%%capture
import sys
!pip -q install py3Dmol 2>&1 1>/dev/null
!pip install --upgrade MDAnalysis 2>&1 1>/dev/null
!pip install biopandas 2>&1 1>/dev/null
!pip install rdkit-pypi
!pip install Cython
!git clone https://github.com/chemosim-lab/ProLIF.git
prolif1 = "cd /content/ProLIF"
prolif2 = "sed -i 's/mdanalysis.*/mdanalysis==2.0.0/' setup.cfg"
prolif3 = "pip install ."
original_stdout = sys.stdout # Save a reference to the original standard output
with open('prolif.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(prolif1)
print(prolif2)
print(prolif3)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 prolif.sh 2>&1 1>/dev/null
!bash prolif.sh >/dev/null 2>&1
# install conda
!wget -qnc https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
!bash Miniconda3-latest-Linux-x86_64.sh -bfp /usr/local 2>&1 1>/dev/null
!rm -r Miniconda3-latest-Linux-x86_64.sh /content/ProLIF prolif.sh
!conda install -y -q -c conda-forge openmm=7.6 python=3.7 pdbfixer 2>&1 1>/dev/null
!conda install -c conda-forge ambertools --yes 2>&1 1>/dev/null
!conda install -c ambermd pytraj --yes 2>&1 1>/dev/null
!conda install -c conda-forge parmed --yes 2>&1 1>/dev/null
!conda install -c conda-forge openff-toolkit --yes 2>&1 1>/dev/null
!conda install -c bioconda pybel --yes
!conda install -c openbabel openbabel --yes
#load dependencies
sys.path.append('/usr/local/lib/python3.7/site-packages/')
from openmm import app, unit
from openmm.app import HBonds, NoCutoff, PDBFile
from openff.toolkit.topology import Molecule, Topology
from openff.toolkit.typing.engines.smirnoff import ForceField
from openff.toolkit.utils import get_data_file_path
import parmed as pmd
from biopandas.pdb import PandasPdb
import openmm as mm
from openmm import *
from openmm.app import *
from openmm.unit import *
import os
import urllib.request
import numpy as np
import MDAnalysis as mda
import py3Dmol
from __future__ import print_function
import pytraj as pt
import platform
import scipy.cluster.hierarchy
from scipy.spatial.distance import squareform
import scipy.stats as stats
import matplotlib.pyplot as plt
import pandas as pd
from scipy.interpolate import griddata
import seaborn as sb
from statistics import mean, stdev
from pytraj import matrix
from matplotlib import colors
from IPython.display import set_matplotlib_formats
!wget https://raw.githubusercontent.com/openforcefield/openff-forcefields/master/openforcefields/offxml/openff_unconstrained-2.0.0.offxml 2>&1 1>/dev/null
###Output
_____no_output_____
###Markdown
Using Google Drive to store simulation data Google Colab does not allow users to keep data on its compute nodes. However, we can use Google Drive to read, write, and store our simulation files. We therefore recommend that you: 1. Create a folder in your own Google Drive and copy the necessary input files there. 2. Copy the path of the directory you created. We will use this path in the cells below.
###Code
#@title ### **Import Google Drive**
#@markdown Click the "Run" button to make your Google Drive accessible.
from google.colab import drive
drive.flush_and_unmount()
drive.mount('/content/drive', force_remount=True)
#@title **Check whether a GPU node has been allocated correctly**
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
if gpu_info.find('failed') >= 0:
print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ')
print('and then re-execute this cell.')
else:
print(gpu_info)
###Output
_____no_output_____
###Markdown
------ **Loading the necessary input files** At this point, all libraries and dependencies should be installed, and the necessary input files should already be in your Google Drive folder. **Important**: make sure your PDB files point to the correct path. If necessary, correct the path and upload the files again. The receptor structure and the ligand structure are merged to form the complex. Note that the protein and ligand coordinates are taken from the PDB files and must be consistent with the ligand placed in the binding pocket. Below, fill in the names of all input files and the path of the Google Drive folder containing them.
###Code
#@title **Provide the necessary input files below**:
#@markdown **Important:** Adding hydrogens to the ligand correctly is essential for parameterizing the molecule properly.
%%capture
import pybel
import rdkit
import mdtraj as md
from rdkit import Chem
from rdkit.Chem import AllChem,Draw
from rdkit.Chem.Draw import IPythonConsole
from pdbfixer import PDBFixer
Protein_PDB_file_name = 'protein.pdb' #@param {type:"string"}
Ligand_PDB_file_name = 'ligand.pdb' #@param {type:"string"}
Add_ligand_hydrogens = "Yes" #@param ["Yes", "No"]
ligand_name = Ligand_PDB_file_name
Google_Drive_Path = '/content/drive/MyDrive/' #@param {type:"string"}
workDir = Google_Drive_Path
file_name = os.path.join(workDir, str(Protein_PDB_file_name))
initial_pdb = os.path.join(workDir, "starting0.pdb")
ligand_pdb = os.path.join(workDir, str(ligand_name))
ligand_pdb2 = os.path.join(workDir, "ligand_H.pdb")
starting = os.path.join(workDir, "starting1.pdb")
starting2 = os.path.join(workDir, "starting2.pdb")
starting_end = os.path.join(workDir, "starting_end.pdb")
#Add hydrogens in the ligand
if Add_ligand_hydrogens == "Yes":
fixer = PDBFixer(filename=ligand_pdb)
PDBFile.writeFile(fixer.topology, fixer.positions, open("temp.pdb", 'w'))
ppdb = PandasPdb().read_pdb("temp.pdb")
ppdb.df['ATOM'] = ppdb.df['ATOM']
ppdb.df['HETATM']= ppdb.df['HETATM'][ppdb.df['HETATM']['element_symbol'] != 'H']
ppdb.to_pdb(path="temp.pdb", records=['ATOM', 'HETATM'], gz=False, append_newline=True)
mol= [m for m in pybel.readfile(filename="temp.pdb", format='pdb')][0]
mol.calccharges
mol.addh()
out=pybel.Outputfile(filename="temp2.pdb",format='pdb',overwrite=True)
out.write(mol)
out.close()
md.load("temp2.pdb").save("temp2.pdb")
halogens = ['Cl', 'F', 'Br', 'I']
atom_id = []
H_id = []
with open("temp2.pdb") as f:
for line in f:
data = line.split()
if data[0] == "ATOM":
if data[2] in halogens:
atom_id.append(data[1])
if data[0] == "CONECT":
if data[1] in atom_id:
if len(data) > 3:
H_id.append(data[3])
H_id.append(data[4])
H_id.append(data[5])
with open(ligand_pdb2, 'w') as h:
with open("temp2.pdb") as f:
for line in f:
data = line.split()
if data[0] == "ATOM":
if data[1] not in H_id:
print(line, file=h)
elif data[0] == "CONECT":
if data[1] not in atom_id:
print(line, file=h)
else:
print(line, file=h)
fixer = PDBFixer(filename=ligand_pdb2)
PDBFile.writeFile(fixer.topology, fixer.positions, open(ligand_pdb2, 'w'))
else:
fixer = PDBFixer(filename=ligand_pdb)
PDBFile.writeFile(fixer.topology, fixer.positions, open(ligand_pdb2, 'w'))
#Fix protein
pdb_parm = pmd.load_file(file_name)
pdb_parm.save(initial_pdb, standard_resnames=True, overwrite=True)
ppdb = PandasPdb().read_pdb(initial_pdb)
ppdb.df['ATOM'] = ppdb.df['ATOM']
ppdb.df['HETATM'] = ppdb.df['HETATM'][ppdb.df['HETATM']['residue_name'] == 'HOH']
ppdb.df['ATOM'] = ppdb.df['ATOM'][ppdb.df['ATOM']['atom_name'] != 'OXT']
ppdb.df['ATOM']= ppdb.df['ATOM'][ppdb.df['ATOM']['element_symbol'] != 'H']
ppdb.to_pdb(path=starting, records=['ATOM', 'HETATM'], gz=False, append_newline=True)
from Bio.PDB import is_aa
from Bio.PDB import PDBParser, PDBIO, Select
class ProtSelect(Select):
def accept_residue(self, residue):
print(f"{residue} -> {is_aa(residue)}")
return is_aa(residue, standard=True)
from Bio import PDB
pdb_ini = PDBParser().get_structure("pdb", starting)
io = PDBIO()
io.set_structure(pdb_ini)
io.save(starting2, ProtSelect());
pdb4amber_cmd = "pdb4amber -i " + str(starting2) + " -o " + str(starting_end) + " -p"
original_stdout = sys.stdout # Save a reference to the original standard output
with open('pdb4amber.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(pdb4amber_cmd)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 pdb4amber.sh 2>&1 1>/dev/null
!bash pdb4amber.sh 2> /dev/null
!rm pdb4amber.sh temp.pdb temp2.pdb
#@markdown ---
import rdkit
from rdkit import Chem
from rdkit.Chem import AllChem,Draw
from rdkit.Chem.Draw import IPythonConsole
#@title **Generate stereoisomers to build the ligand topology:**
##@markdown **You can find the SMILES of your ligand here: https://pubchem.ncbi.nlm.nih.gov/**
mol= [m for m in pybel.readfile(filename=ligand_pdb2, format='pdb')][0]
mol.calccharges
mol.addh()
out=pybel.Outputfile(filename="temp2.smi",format='smiles',overwrite=True)
out.write(mol)
out.close()
fileObj = open("temp2.smi", "r",) #opens the file in read mode
for aRow in fileObj:
smi = aRow.split('\t')
fileObj.close()
Ligand_smiles = smi[0]
!rm temp2.smi >/dev/null 2>&1
mol = Chem.MolFromSmiles(Ligand_smiles)
def spam(n):
out=[]
for perm in getPerms(n):
elem = [ int(i) for i in list(perm) ]
out.append(elem)
return out
def getPerms(n):
from itertools import permutations
for i in getCandidates(n):
for perm in set(permutations(i)):
yield ''.join(perm)
def getCandidates(n):
for i in range(0, n+1):
res = "1" * i + "0" * (n - i)
yield res
def GetStereoIsomers(mol):
from rdkit import Chem
from copy import copy
out = []
chiralCentres = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
#return the molecule object when no chiral centres where identified
if chiralCentres == []:
return [mol]
#All bit permutations with number of bits equals number of chiralCentres
elements = spam(len(chiralCentres))
!rm smiles.txt temp2.smi >/dev/null 2>&1
for isoId,element in enumerate(elements):
for centreId,i in enumerate(element):
atomId = chiralCentres[centreId][0]
if i == 0:
mol.GetAtomWithIdx(atomId).SetChiralTag(Chem.rdchem.ChiralType.CHI_TETRAHEDRAL_CW)
elif i == 1:
mol.GetAtomWithIdx(atomId).SetChiralTag(Chem.rdchem.ChiralType.CHI_TETRAHEDRAL_CCW)
outmol = copy(mol)
out.append(outmol)
print(Chem.MolToSmiles(mol,isomericSmiles=True), file=open("smiles.txt", "a",))
return out
Draw.MolsToGridImage(GetStereoIsomers(mol), subImgSize=(500,200), molsPerRow=1)
chiralCentres = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
if chiralCentres != []:
print("Follow the stereoisomers for your ligand: \n")
fileObj = open("smiles.txt", "r",) #opens the file in read mode
smiles = fileObj.read().splitlines() #puts the file into an array
fileObj.close()
x = len(smiles[:-1])
for a in range(x+1):
y = smiles[0+a:(a+1)]
globals()[f"isomer{a+1}"] = str(y[0])
print("Isomer " + str(a+1) + " = " + str(y[0]) + "\n")
else:
isomer1 = Ligand_smiles
print("No chiral centres were identified! \nIsomer 1 = " + str(isomer1) )
Draw.MolsToGridImage(GetStereoIsomers(mol), subImgSize=(700,200), molsPerRow=1, returnPNG=True)
from rdkit import Chem
from rdkit.Chem import PandasTools
from openff.toolkit.typing.engines.smirnoff import ForceField
import parmed
#@title **Parameters to generate the topologies:**
#@markdown **Parameters to generate the protein topology**:
Force_field = "ff19SB" #@param ["ff19SB", "ff14SB"]
if Force_field == "ff19SB":
ff = "leaprc.protein.ff19SB"
else:
ff = "leaprc.protein.ff14SB"
Water_type = "OPC" #@param ["TIP3P", "OPC"]
if Water_type == "TIP3P":
water = "leaprc.water.tip3p"
water_box = "TIP3PBOX"
else:
water = "leaprc.water.opc"
water_box = "OPCBOX"
#@markdown Box size (Å):
Size_box = 12 #@param {type:"slider", min:10, max:20, step:1}
size_box = Size_box
#@markdown **Attention**: Give the concentration in molar units. AMBER tleap will neutralize your system automatically:
Ions = "NaCl" #@param ["NaCl", "KCl" ]
Concentration = "0.15" #@param {type:"string"}
#@markdown **Parameters to generate the ligand topology:**
Ligand_Force_field = "GAFF2" #@param ["GAFF2", "OpenFF 2.0.0 (Sage)"]
Ligand_isomer = "1" #@param {type:"string", min:1, max:10, step:100}
if chiralCentres == []:
isomer_end = isomer1
else:
isomer_end = globals()[f"isomer{Ligand_isomer}"]
Ligand_net_charges = "0" #@param {type:"string", min:-10, max:10, step:1}
#@markdown ---
tleap = os.path.join(workDir, "tleap.in")
top_nw = os.path.join(workDir, "SYS_nw.prmtop")
crd_nw = os.path.join(workDir, "SYS_nw.crd")
pdb_nw = os.path.join(workDir, "SYS_nw.pdb")
top = os.path.join(workDir, "SYS_gaff2.prmtop")
crd = os.path.join(workDir, "SYS_gaff2.crd")
pdb = os.path.join(workDir, "SYS.pdb")
ligand_noh = os.path.join(workDir, "ligand_noh.pdb")
ligand_h = os.path.join(workDir, "ligand_h.pdb")
ligand_mol2 = os.path.join(workDir, "ligand.mol2")
ligand_frcmod = os.path.join(workDir, "ligand.frcmod")
lig_new = os.path.join(workDir, "ligand_gaff.pdb")
protein_ligand = os.path.join(workDir, "protein_ligand.pdb")
lib = os.path.join(workDir, "lig.lib")
#gaff_command1 = "pdb4amber -i " + str(ligand_pdb2) + " -o " + str(ligand_h)
gaff_command1 = "pdb4amber -i " + str(ligand_pdb2) + " -o " + str(ligand_h)
gaff_command3 = "antechamber -i " + str(ligand_h) + " -fi pdb -o " + str(ligand_mol2) + " -fo mol2 -c bcc -nc " + str(Ligand_net_charges) + " -rn LIG -at gaff2"
gaff_command4 = "parmchk2 -i " + str(ligand_mol2) + " -f mol2 -o " + str(ligand_frcmod) + " -s gaff2"
original_stdout = sys.stdout # Save a reference to the original standard output
with open('gaff.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(gaff_command1)
print(gaff_command3)
print(gaff_command4)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 gaff.sh 2>&1 1>/dev/null
!bash gaff.sh >/dev/null 2>&1
f = open(tleap, "w")
f.write("""source """ + str(ff) + "\n"
"""source leaprc.gaff2
LIG = loadmol2 """ + str(ligand_mol2) + "\n"
"""loadamberparams """ + str(ligand_frcmod) + "\n"
"""saveoff LIG """ + str(lib) + "\n"
"""savepdb LIG """ + str(lig_new) + "\n"
"""quit""")
f.close()
tleap_command = "tleap -f " + str(tleap)
cat_command = "cat " + str(starting_end) + " " + str(lig_new) + str(" > ") + str(protein_ligand)
original_stdout = sys.stdout # Save a reference to the original standard output
with open('run_tleap.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(tleap_command)
print(cat_command)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 run_tleap.sh 2>&1 1>/dev/null
!bash run_tleap.sh 2>&1 1>/dev/null
ppdb = PandasPdb().read_pdb(protein_ligand)
ppdb.df['ATOM'] = ppdb.df['ATOM']
ppdb.df['OTHERS'] = [ppdb.df['OTHERS'] != 'OTHERS']
ppdb.to_pdb(path=protein_ligand, records=['ATOM', 'HETATM'], gz=False, append_newline=True)
f = open(tleap, "w")
f.write("""source """ + str(ff) + "\n"
"""source leaprc.DNA.OL15
source leaprc.RNA.OL3
source leaprc.GLYCAM_06j-1
source leaprc.lipid17
source leaprc.gaff2
source """ + str(water) + "\n"
"""loadamberparams """ + str(ligand_frcmod) + "\n"
"""loadoff """ + str(lib) + "\n"
"""SYS = loadpdb """ + str(protein_ligand) + "\n"
"""alignaxes SYS
savepdb SYS """ + str(pdb_nw) + "\n"
"""saveamberparm SYS """ + str(top_nw) + " " + str(crd_nw) + "\n"
"""solvatebox SYS """ + str(water_box) + " " + str(size_box) + """ 0.7
saveamberparm SYS """ + str(top) + " " + str(crd) + "\n"
"""savepdb SYS """ + str(pdb) + "\n"
"""quit""")
f.close()
tleap_command = "tleap -f " + str(tleap)
original_stdout = sys.stdout # Save a reference to the original standard output
with open('run_tleap.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(tleap_command)
sys.stdout = original_stdout # Reset the standard output to its original value
SYS = os.path.join(workDir, "SYS*")
rm_sys = "rm " + SYS
original_stdout = sys.stdout # Save a reference to the original standard output
with open('rm_sys.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(rm_sys)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 rm_sys.sh 2>&1 1>/dev/null
!bash rm_sys.sh 2> /dev/null
!chmod 700 run_tleap.sh 2>&1 1>/dev/null
!bash run_tleap.sh 2>&1 1>/dev/null
!grep "Volume:" leap.log > temp.txt
with open("temp.txt", 'r') as f:
for line in f:
vol = float(line.split()[1])
vol_lit = vol * pow(10, -27)
atom_lit = 9.03 * pow(10, 22)
conc = float(Concentration)
num_ion = int(vol_lit * (conc/0.15) * atom_lit)
if Ions == "NaCl":
pos_neut = "Na+ 0"
pos_num = "Na+ " + str(num_ion)
Cl_num = num_ion
else:
pos_neut = "K+ 0"
pos_num = "K+ " + str(num_ion)
Cl_num = num_ion
f = open(tleap, "w")
f.write("""source """ + str(ff) + "\n"
"""source leaprc.DNA.OL15
source leaprc.RNA.OL3
source leaprc.GLYCAM_06j-1
source leaprc.lipid17
source leaprc.gaff2
source """ + str(water) + "\n"
"""loadamberparams """ + str(ligand_frcmod) + "\n"
"""loadoff """ + str(lib) + "\n"
"""SYS = loadpdb """ + str(protein_ligand) + "\n"
"""alignaxes SYS
check SYS
charge SYS
addions SYS """ + str(pos_neut) + "\n"
"""addions SYS Cl- 0
check SYS
charge SYS
savepdb SYS """ + str(pdb_nw) + "\n"
"""saveamberparm SYS """ + str(top_nw) + " " + str(crd_nw) + "\n"
"""solvatebox SYS """ + str(water_box) + " " + str(size_box) + """ 0.7 """ + "\n"
"""addIonsRand SYS """ + str(pos_num) + """ Cl- """ + str(Cl_num) + "\n"
"""saveamberparm SYS """ + str(top) + " " + str(crd) + "\n"
"""savepdb SYS """ + str(pdb) + "\n"
"""quit""")
f.close()
!chmod 700 run_tleap.sh 2>&1 1>/dev/null
!bash run_tleap.sh 2>&1 1>/dev/null
if Ligand_Force_field == "OpenFF 2.0.0 (Sage)":
mol = Chem.MolFromPDBFile(lig_new, removeHs=False)
Chem.MolToPDBFile(mol, os.path.join(workDir, "ligand_openFF.pdb"))
in_prmtop = top
in_crd = crd
orig_structure = parmed.amber.AmberParm(in_prmtop, in_crd)
pieces = orig_structure.split()
for piece in pieces:
print(f"There are {len(piece[1])} instance(s) of {piece[0]}")
from openmm.app import PDBFile
from openff.toolkit.topology import Molecule, Topology
from openff.toolkit.tests.utils import get_data_file_path
# rdmol = Chem.MolFromMolFile(os.path.join(workDir, "ligand_openFF.sdf"))
# ligand_off_molecule = Molecule.from_rdkit(rdmol, hydrogens_are_explicit=True)
ligand_off_molecule = Molecule.from_smiles(isomer_end)
ligand_pdbfile = PDBFile(os.path.join(workDir, "ligand_openFF.pdb"))
ligand_off_topology = Topology.from_openmm(
ligand_pdbfile.topology,
unique_molecules=[ligand_off_molecule],)
force_field = ForceField("openff_unconstrained-2.0.0.offxml")
ligand_system = force_field.create_openmm_system(ligand_off_topology)
new_ligand_structure = parmed.openmm.load_topology(
ligand_off_topology.to_openmm(),
ligand_system,
xyz=pieces[1][0].positions,)
new_ligand_structure.save(os.path.join(workDir, "ligand.prmtop"), overwrite=True)
new_ligand_structure.save(os.path.join(workDir, "ligand.inpcrd"), overwrite=True)
# Check how many atoms and which order elements are in the new ligand
n_atoms_new = len(new_ligand_structure.atoms)
elements_new = [atom.element for atom in new_ligand_structure.atoms]
# Check how many atoms and which order elements are in the old ligand
old_ligand_structure, n_copies = pieces[1]
n_atoms_old = len(old_ligand_structure.atoms)
elements_old = [atom.element for atom in old_ligand_structure.atoms]
print(
f"There are {n_atoms_old} in the old ligand structure and {n_atoms_new} atoms "
f"in the new ligand structure")
# Print out error message if number of atoms doesn't match
if n_atoms_new != n_atoms_old:
print(
"Error: Number of atoms in input ligand doesn't match number extracted "
"from prmtop file.")
if elements_new != elements_old:
print(
"Error: Elements in input ligand don't match elements in the ligand "
"from the prmtop file.")
print(f"Old elements: {elements_old}")
print(f"New elements: {elements_new}")
# Create a new, empty system
complex_structure = parmed.Structure()
# Add the protein. Convert explicitly to an AmberParm object to ensure that 1-4 scaling factors are preserved.
complex_structure += parmed.amber.AmberParm.from_structure(pieces[0][0])
# Add the ligand
complex_structure += parmed.amber.AmberParm.from_structure(new_ligand_structure)
# Add ions and Waters
ppdb = PandasPdb().read_pdb(pdb)
Cl = [ppdb.df['ATOM']['atom_name'] == 'Cl-']
Na = [ppdb.df['ATOM']['atom_name'] == 'Na+']
K = [ppdb.df['ATOM']['atom_name'] == 'K+']
Cl = np.array(Cl)
Na = np.array(Na)
K = np.array(K)
if True in Cl and True in Na:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
just_ion2_structure = parmed.Structure()
just_ion2_structure += pieces[3][0]
just_ion2_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
complex_structure += parmed.amber.AmberParm.from_structure(just_ion2_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[4][0]
just_water_structure *= len(pieces[4][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
elif True in Cl and True in K:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
just_ion2_structure = parmed.Structure()
just_ion2_structure += pieces[3][0]
just_ion2_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
complex_structure += parmed.amber.AmberParm.from_structure(just_ion2_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[4][0]
just_water_structure *= len(pieces[4][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
elif True in Cl:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[3][0]
just_water_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
elif True in Na:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[3][0]
just_water_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
elif True in K:
just_ion1_structure = parmed.Structure()
just_ion1_structure += pieces[2][0]
just_ion1_structure *= len(pieces[2][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_ion1_structure)
just_water_structure = parmed.Structure()
just_water_structure += pieces[3][0]
just_water_structure *= len(pieces[3][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
else:
just_water_structure = parmed.Structure()
just_water_structure += pieces[2][0]
just_water_structure *= len(pieces[2][1])
complex_structure += parmed.amber.AmberParm.from_structure(just_water_structure)
# Copy over the original coordinates and box vectors
complex_structure.coordinates = orig_structure.coordinates
complex_structure.box_vectors = orig_structure.box_vectors
# Export the Structure to AMBER files
top = os.path.join(workDir, "SYS_openff.prmtop")
crd = os.path.join(workDir, "SYS_openff.inpcrd")
complex_structure.save(top, overwrite=True)
complex_structure.save(crd, overwrite=True)
top_openff = os.path.exists(top)
crd_openff = os.path.exists(crd)
if top_openff == True and crd_openff == True:
print("Successfully generated topology! :-)")
else:
print("ERROR: Check your inputs! ")
else:
pdb_amber = os.path.exists(pdb)
top_amber = os.path.exists(top)
crd_amber = os.path.exists(crd)
if pdb_amber == True and top_amber == True and crd_amber == True:
print("Successfully generated topology! :-)")
else:
print("ERROR: Check your inputs! ")
!!rm *.sh ANTECHAMBER* ATOMTYPE* temp.txt >/dev/null 2>&1
###Output
_____no_output_____
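###Markdown
Before visualizing, an optional sanity check (a sketch added here, not part of the original pipeline) is to reload the generated topology and coordinates with ParmEd and confirm the system size; it assumes the cell above finished successfully and reuses its `top` and `crd` variables.
###Code
# Sketch: inspect the generated AMBER topology/coordinate files with ParmEd
check_structure = pmd.load_file(top, crd)
print("Number of atoms: ", len(check_structure.atoms))
print("Number of residues: ", len(check_structure.residues))
print("Box dimensions: ", check_structure.box)
###Output
_____no_output_____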
###Markdown
Let's take a look at the simulation box:
###Code
#@title **Show the 3D structure**
import ipywidgets
from ipywidgets import interact, fixed
import warnings
warnings.filterwarnings('ignore')
def show_pdb(show_box=True,
show_ligand=True,
show_sidechains=False,
show_mainchain=False,
color="None"):
def mainchain(p, color="white", model=0):
BB = ['C','O','N','CA']
p.addStyle({"model":model,'atom':BB},
{'stick':{'colorscheme':f"{color}Carbon",'radius':0.3}})
p.setViewStyle({'style':'outline','color':'black','width':0.1})
def ligand(p, model=0):
HP = ['LIG']
p.addStyle({"model":model,'and':[{'resn':HP}]},
{'stick':{'colorscheme':'greenCarbon','radius':0.3}})
p.setViewStyle({'style':'outline','color':'black','width':0.1})
def box(p, model=0):
p.addModelsAsFrames(pdb)
p.addSurface(py3Dmol.SAS, {'opacity': 0.6, 'color':'white'}) #comment this line if you dont want to see the water box
p.setViewStyle({'style':'outline','color':'black','width':0.1})
def sidechain(p, model=0):
HP = ["ALA","GLY","VAL","ILE","LEU","PHE","MET","PRO","TRP","CYS","TYR"]
BB = ['C','O','N']
p.addStyle({"model":model,'and':[{'resn':HP},{'atom':BB,'invert':True}]},
{'stick':{'colorscheme':"whiteCarbon",'radius':0.3}})
p.addStyle({"model":model,'and':[{'resn':"GLY"},{'atom':'CA'}]},
{'sphere':{'colorscheme':"whiteCarbon",'radius':0.3}})
p.addStyle({"model":model,'and':[{'resn':"PRO"},{'atom':['C','O'],'invert':True}]},
{'stick':{'colorscheme':"whiteCarbon",'radius':0.3}})
p.addStyle({"model":model,'and':[{'resn':HP,'invert':True},{'atom':BB,'invert':True}]},
{'stick':{'colorscheme':"whiteCarbon",'radius':0.3}})
p = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js')
p.addModel(open(pdb,'r').read(),'pdb')
if color == "rainbow":
p.setStyle({'cartoon': {'color':'spectrum'}})
else:
p.setStyle({'cartoon':{}})
if show_sidechains: sidechain(p)
if show_mainchain: mainchain(p)
if show_ligand: ligand(p)
if show_box: box(p)
p.zoomTo()
return p.show()
interact(show_pdb,
show_box=ipywidgets.Checkbox(value=True),
show_ligand=ipywidgets.Checkbox(value=True),
show_sidechains=ipywidgets.Checkbox(value=False),
show_mainchain=ipywidgets.Checkbox(value=False),
color=ipywidgets.Dropdown(options=['None', 'rainbow'], value='None'))
#@title **Draw and inspect the ligand interaction network (LigPlot)**
#@markdown The figure is interactive: you can move residues around and click on the legend to toggle the display of specific residue types or interactions. The figure is saved as an HTML file (initial.html).
import MDAnalysis as mda
import prolif as plf
import numpy as np
import os
from prolif.plotting.network import LigNetwork
# load topology
u = mda.Universe(os.path.join(workDir, "SYS_gaff2.prmtop"), pdb)
lig = u.select_atoms("resname LIG")
prot = u.select_atoms("protein")
# create RDKit-like molecules for visualisation
lmol = plf.Molecule.from_mda(lig)
pmol = plf.Molecule.from_mda(prot)
fp = plf.Fingerprint()
fp.run(u.trajectory[::1], lig, prot)
df = fp.to_dataframe(return_atoms=True)
net = LigNetwork.from_ifp(df, lmol,
# replace with `kind="frame", frame=0` for the other depiction
kind="frame", frame=0,
rotation=270)
net.save(os.path.join(workDir, "initial.html"))
net.display()
###Output
_____no_output_____
###Markdown
------ **Equilibrating the simulation box** A proper MD equilibration protocol is designed to equilibrate both temperature and pressure throughout the simulation box while keeping the protein in its experimental conformation. It also lets the solvent settle around the protein and form a proper solvation layer. Below, we set the MD equilibration parameters, such as temperature, pressure, and simulation time. We also define the force constant used to restrain the protein heavy atoms in place and the frequency at which atomic coordinates are saved to the trajectory file (.dcd). Once everything is set, you can run the next two cells to equilibrate your system.
###Code
#@title ### **Parameters for the MD equilibration protocol:**
# remove whitespaces
Jobname = 'prot_lig_equil' #@param {type:"string"}
Minimization_steps = "1000" #@param ["1000", "5000", "10000", "20000", "50000", "100000"]
#@markdown Simulation time (in nanoseconds) and integration timestep (in femtoseconds):
Time = "5" #@param {type:"string"}
stride_time_eq = Time
Integration_timestep = "2" #@param ["0.5", "1", "2", "3", "4"]
dt_eq = Integration_timestep
#@markdown Temperature (in Kelvin) and pressure (in bar)
Temperature = 298 #@param {type:"string"}
temperature_eq = Temperature
Pressure = 1 #@param {type:"string"}
pressure_eq = Pressure
#@markdown Force constant for the position restraints (in kJ/mol):
Force_constant = 800 #@param {type:"slider", min:0, max:2000, step:100}
#@markdown Frequency to write the trajectory file (in picoseconds):
Write_the_trajectory = "10" #@param ["10", "100", "200", "500", "1000"]
write_the_trajectory_eq = Write_the_trajectory
#@markdown Frequency to write the log file (in picoseconds):
Write_the_log = "10" #@param ["10", "100", "200", "500", "1000"]
write_the_log_eq = Write_the_log
#@markdown ---
#@title **Run the equilibration MD simulation (NPT ensemble)**
#@markdown Now, let's equilibrate our system!
###########################################
import openmm as mm
from openmm import *
from openmm.app import *
from openmm.unit import *
import pytraj as pt
from sys import stdout, exit, stderr
import os, math, fnmatch
#############################################
# Defining MD simulation parameters
jobname = os.path.join(workDir, Jobname)
coordinatefile = crd
pdbfile = pdb
topologyfile = top
time_ps = float(Time)*1000
simulation_time = float(time_ps)*picosecond # in ps
dt = int(dt_eq)*femtosecond
temperature = float(temperature_eq)*kelvin
savcrd_freq = int(write_the_trajectory_eq)*picosecond
print_freq = int(write_the_log_eq)*picosecond
pressure = float(pressure_eq)*bar
restraint_fc = int(Force_constant) # kJ/mol
nsteps = int(simulation_time.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
nprint = int(print_freq.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
nsavcrd = int(savcrd_freq.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
#############################################
# Defining functions to use below:
def backup_old_log(pattern, string):
result = []
for root, dirs, files in os.walk("./"):
for name in files:
if fnmatch.fnmatch(name, pattern):
try:
number = int(name[-2])
avail = isinstance(number, int)
#print(name,avail)
if avail == True:
result.append(number)
except:
pass
if len(result) > 0:
maxnumber = max(result)
else:
maxnumber = 0
backup_file = "\#" + string + "." + str(maxnumber + 1) + "#"
os.system("mv " + string + " " + backup_file)
return backup_file
def restraints(system, crd, fc, restraint_array):
boxlx = system.getDefaultPeriodicBoxVectors()[0][0].value_in_unit(nanometers)
boxly = system.getDefaultPeriodicBoxVectors()[1][1].value_in_unit(nanometers)
boxlz = system.getDefaultPeriodicBoxVectors()[2][2].value_in_unit(nanometers)
if fc > 0:
# positional restraints for all heavy-atoms
posresPROT = CustomExternalForce('k*periodicdistance(x, y, z, x0, y0, z0)^2;')
posresPROT.addPerParticleParameter('k')
posresPROT.addPerParticleParameter('x0')
posresPROT.addPerParticleParameter('y0')
posresPROT.addPerParticleParameter('z0')
for atom1 in restraint_array:
atom1 = int(atom1)
xpos = crd.positions[atom1].value_in_unit(nanometers)[0]
ypos = crd.positions[atom1].value_in_unit(nanometers)[1]
zpos = crd.positions[atom1].value_in_unit(nanometers)[2]
posresPROT.addParticle(atom1, [fc, xpos, ypos, zpos])
system.addForce(posresPROT)
return system
##############################################
#############################################
print("\n> Simulation details:\n")
print("\tJob name = " + jobname)
print("\tCoordinate file = " + str(coordinatefile))
print("\tPDB file = " + str(pdbfile))
print("\tTopology file = " + str(topologyfile))
print("\n\tSimulation_time = " + str(simulation_time))
print("\tIntegration timestep = " + str(dt))
print("\tTotal number of steps = " + str(nsteps))
print("\n\tSave coordinates each " + str(savcrd_freq))
print("\tPrint in log file each " + str(print_freq))
print("\n\tTemperature = " + str(temperature))
print("\tPressure = " + str(pressure))
#############################################
print("\n> Setting the system:\n")
if Ligand_Force_field == "OpenFF 2.0.0 (Sage)":
print("\t- Reading topology and structure file...")
prmtop = pmd.load_file(topologyfile)
inpcrd = AmberInpcrdFile(coordinatefile)
print("\t- Creating system and setting parameters...")
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
friction = 1.0
system = complex_structure.createSystem(nonbondedMethod=nonbondedMethod, nonbondedCutoff=nonbondedCutoff,
constraints=constraints, rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance)
else:
print("\t- Reading topology and structure file...")
prmtop = AmberPrmtopFile(topologyfile)
inpcrd = AmberInpcrdFile(coordinatefile)
print("\t- Creating system and setting parameters...")
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
friction = 1.0
system = prmtop.createSystem(nonbondedMethod=nonbondedMethod, nonbondedCutoff=nonbondedCutoff,
constraints=constraints, rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance)
print("\t- Applying restraints. Force Constant = " + str(Force_constant) + "kJ/mol")
pt_system = pt.iterload(coordinatefile, topologyfile)
pt_topology = pt_system.top
restraint_array = pt.select_atoms('!(:H*) & !(:WAT) & !(:Na+) & !(:Cl-) & !(:Mg+) & !(:K+)', pt_topology)
system = restraints(system, inpcrd, restraint_fc, restraint_array)
print("\t- Setting barostat...")
system.addForce(MonteCarloBarostat(pressure, temperature))
print("\t- Setting integrator...")
integrator = LangevinIntegrator(temperature, friction, dt)
integrator.setConstraintTolerance(constraintTolerance)
simulation = Simulation(prmtop.topology, system, integrator)
simulation.context.setPositions(inpcrd.positions)
if inpcrd.boxVectors is not None:
simulation.context.setPeriodicBoxVectors(*inpcrd.boxVectors)
print("\t- Energy minimization: " + str(Minimization_steps) + " steps")
simulation.minimizeEnergy(tolerance=10*kilojoule/mole, maxIterations=int(Minimization_steps))
print("\t-> Potential Energy = " + str(simulation.context.getState(getEnergy=True).getPotentialEnergy()))
print("\t- Setting initial velocities...")
simulation.context.setVelocitiesToTemperature(temperature)
#############################################
# Running Equilibration on NPT ensemble
dcd_file = jobname + ".dcd"
log_file = jobname + ".log"
rst_file = jobname + ".rst"
prv_rst_file = jobname + ".rst"
pdb_file = jobname + ".pdb"
# Creating a trajectory file and reporters
dcd = DCDReporter(dcd_file, nsavcrd)
firstdcdstep = (nsteps) + nsavcrd
dcd._dcd = DCDFile(dcd._out, simulation.topology, simulation.integrator.getStepSize(), firstdcdstep, nsavcrd) # charmm doesn't like first step to be 0
simulation.reporters.append(dcd)
simulation.reporters.append(StateDataReporter(stdout, nprint, step=True, speed=True, progress=True, totalSteps=nsteps, remainingTime=True, separator='\t\t'))
simulation.reporters.append(StateDataReporter(log_file, nprint, step=True, kineticEnergy=True, potentialEnergy=True, totalEnergy=True, temperature=True, volume=True, speed=True))
print("\n> Simulating " + str(nsteps) + " steps...")
simulation.step(nsteps)
simulation.reporters.clear() # remove all reporters so the next iteration don't trigger them.
##################################
# Writing last frame information of stride
print("\n> Writing state file (" + str(rst_file) + ")...")
state = simulation.context.getState( getPositions=True, getVelocities=True )
with open(rst_file, 'w') as f:
f.write(XmlSerializer.serialize(state))
last_frame = int(nsteps/nsavcrd)
print("> Writing coordinate file (" + str(pdb_file) + ", frame = " + str(last_frame) + ")...")
positions = simulation.context.getState(getPositions=True).getPositions()
PDBFile.writeFile(simulation.topology, positions, open(pdb_file, 'w'))
print("\n> Finished!\n")
###Output
_____no_output_____
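###Markdown
An optional quick check (a sketch added here, not part of the original protocol) is to plot the temperature and potential energy recorded in the equilibration log and confirm that both have stabilized; it reuses the `log_file` variable from the cell above and assumes the default column labels written by OpenMM's StateDataReporter.
###Code
# Sketch: plot temperature and potential energy from the equilibration log (CSV written by StateDataReporter)
import pandas as pd
import matplotlib.pyplot as plt
eq_log = pd.read_csv(log_file)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(eq_log['Temperature (K)'])
ax1.set_xlabel('Log entry')
ax1.set_ylabel('Temperature (K)')
ax2.plot(eq_log['Potential Energy (kJ/mole)'])
ax2.set_xlabel('Log entry')
ax2.set_ylabel('Potential Energy (kJ/mole)')
plt.show()
###Output
_____no_output_____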
###Markdown
------ **Running the production MD simulation** Finally, we proceed with the production simulation itself, using the coordinates of the equilibrated system as the input structure. Note that we use the *.rst state file*, which stores the atomic positions and velocities of the last frame of the equilibration simulation, to guarantee that the production simulation starts from a thermodynamically equilibrated system. Another important piece of information here is **Number_of_strides** and **Stride_Time**. In this notebook, the simulation is run as the specified number of *strides*, so **simulation time = Number_of_strides*Stride_Time**. For example, setting *Number_of_strides=10* and *Stride_Time=10 ns* gives a 100 ns simulation. **Important: at the end of the production run, all strides can be concatenated into a complete trajectory file for visualization and analysis.** The idea behind this approach is to make good use of the intermittent GPU time (12 h/24 h) that Google Colab provides.
###Code
#@markdown ### **Provide the names of the input files below:**
Equilibrated_PDB = 'prot_lig_equil.pdb' #@param {type:"string"}
State_file = 'prot_lig_equil.rst' #@param {type:"string"}
#@markdown ---
#@markdown ### **Parameters for the MD production protocol:**
# remove whitespaces
Jobname = 'prot_lig_prod' #@param {type:"string"}
#@markdown Simulation time (in nanoseconds), number of strides (integer) and integration timestep (in femtoseconds):
Stride_Time = "5" #@param {type:"string"}
stride_time_prod = Stride_Time
Number_of_strides = "1" #@param {type:"string"}
nstride = Number_of_strides
Integration_timestep = "2" #@param ["0.5", "1", "2", "3", "4"]
dt_prod = Integration_timestep
#@markdown Temperature (in Kelvin) and pressure (in bar)
Temperature = 298 #@param {type:"string"}
temperature_prod = Temperature
Pressure = 1 #@param {type:"string"}
pressure_prod = Pressure
#@markdown Frequency to write the trajectory file (in picoseconds):
Write_the_trajectory = "10" #@param ["10", "100", "200", "500", "1000"]
write_the_trajectory_prod = Write_the_trajectory
#@markdown Frequency to write the log file (in picoseconds):
Write_the_log = "10" #@param ["10", "100", "200", "500", "1000"]
write_the_log_prod = Write_the_log
#@markdown ---
#@title **Production MD simulation after equilibration (NPT ensemble)**
#
###########################################
import openmm as mm
from openmm import *
from openmm.app import *
from openmm.unit import *
from sys import stdout, exit, stderr
import os, math, fnmatch
#############################################
# Defining MD simulation parameters
jobname = os.path.join(workDir, str(Jobname))
coordinatefile = crd
pdbfile = os.path.join(workDir, Equilibrated_PDB)
topologyfile = top
equil_rst_file = os.path.join(workDir, State_file)
stride_time_ps = float(stride_time_prod)*1000
stride_time = float(stride_time_ps)*picosecond
nstride = int(Number_of_strides)
dt = int(dt_prod)*femtosecond
temperature = float(temperature_prod)*kelvin
savcrd_freq = int(write_the_trajectory_prod)*picosecond
print_freq = int(write_the_log_prod)*picosecond
pressure = float(pressure_prod)*bar
simulation_time = stride_time*nstride
nsteps = int(stride_time.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
nprint = int(print_freq.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
nsavcrd = int(savcrd_freq.value_in_unit(picosecond)/dt.value_in_unit(picosecond))
firststride = 1 # must be integer
#############################################
# Defining functions to use below:
def backup_old_log(pattern, string):
result = []
for root, dirs, files in os.walk("./"):
for name in files:
if fnmatch.fnmatch(name, pattern):
try:
number = int(name[-2])
avail = isinstance(number, int)
#print(name,avail)
if avail == True:
result.append(number)
except:
pass
if len(result) > 0:
maxnumber = max(result)
else:
maxnumber = 0
backup_file = "\#" + string + "." + str(maxnumber + 1) + "#"
os.system("mv " + string + " " + backup_file)
return backup_file
##############################################
#############################################
print("\n> Simulation details:\n")
print("\tJob name = " + jobname)
print("\tCoordinate file = " + str(coordinatefile))
print("\tPDB file = " + str(pdbfile))
print("\tTopology file = " + str(topologyfile))
print("\n\tSimulation_time = " + str(stride_time*nstride))
print("\tIntegration timestep = " + str(dt))
print("\tTotal number of steps = " + str(nsteps*nstride))
print("\tNumber of strides = " + str(nstride) + " (" + str(stride_time) + " in each stride)")
print("\n\tSave coordinates each " + str(savcrd_freq))
print("\tSave checkpoint each " + str(savcrd_freq))
print("\tPrint in log file each " + str(print_freq))
print("\n\tTemperature = " + str(temperature))
print("\tPressure = " + str(pressure))
#############################################
print("\n> Setting the system:\n")
if Ligand_Force_field == "OpenFF 2.0.0 (Sage)":
print("\t- Reading topology and structure file...")
prmtop = pmd.load_file(topologyfile)
inpcrd = AmberInpcrdFile(coordinatefile)
print("\t- Creating system and setting parameters...")
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
friction = 1.0
system = complex_structure.createSystem(nonbondedMethod=nonbondedMethod, nonbondedCutoff=nonbondedCutoff,
constraints=constraints, rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance)
else:
print("\t- Reading topology and structure file...")
prmtop = AmberPrmtopFile(topologyfile)
inpcrd = AmberInpcrdFile(coordinatefile)
print("\t- Creating system and setting parameters...")
nonbondedMethod = PME
nonbondedCutoff = 1.0*nanometers
ewaldErrorTolerance = 0.0005
constraints = HBonds
rigidWater = True
constraintTolerance = 0.000001
friction = 1.0
system = prmtop.createSystem(nonbondedMethod=nonbondedMethod, nonbondedCutoff=nonbondedCutoff,
constraints=constraints, rigidWater=rigidWater, ewaldErrorTolerance=ewaldErrorTolerance)
print("\t- Setting barostat...")
system.addForce(MonteCarloBarostat(pressure, temperature))
print("\t- Setting integrator...")
integrator = LangevinIntegrator(temperature, friction, dt)
integrator.setConstraintTolerance(constraintTolerance)
simulation = Simulation(prmtop.topology, system, integrator)
simulation.context.setPositions(inpcrd.positions)
if inpcrd.boxVectors is not None:
simulation.context.setPeriodicBoxVectors(*inpcrd.boxVectors)
#############################################
# Opening a loop of extension NSTRIDE to simulate the entire STRIDE_TIME*NSTRIDE
for n in range(1, nstride + 1):
print("\n\n>>> Simulating Stride #" + str(n) + " <<<")
dcd_file = jobname + "_" + str(n) + ".dcd"
log_file = jobname + "_" + str(n) + ".log"
rst_file = jobname + "_" + str(n) + ".rst"
prv_rst_file = jobname + "_" + str(n-1) + ".rst"
pdb_file = jobname + "_" + str(n) + ".pdb"
if os.path.exists(rst_file):
print("> Stride #" + str(n) + " finished (" + rst_file + " present). Moving to next stride... <")
continue
if n == 1:
print("\n> Loading previous state from equilibration > " + equil_rst_file + " <")
with open(equil_rst_file, 'r') as f:
simulation.context.setState(XmlSerializer.deserialize(f.read()))
currstep = int((n-1)*nsteps)
currtime = currstep*dt.in_units_of(picosecond)
simulation.currentStep = currstep
simulation.context.setTime(currtime)
print("> Current time: " + str(currtime) + " (Step = " + str(currstep) + ")")
else:
print("> Loading previous state from > " + prv_rst_file + " <")
with open(prv_rst_file, 'r') as f:
simulation.context.setState(XmlSerializer.deserialize(f.read()))
currstep = int((n-1)*nsteps)
currtime = currstep*dt.in_units_of(picosecond)
simulation.currentStep = currstep
simulation.context.setTime(currtime)
print("> Current time: " + str(currtime) + " (Step = " + str(currstep) + ")")
dcd = DCDReporter(dcd_file, nsavcrd)
firstdcdstep = (currstep) + nsavcrd
dcd._dcd = DCDFile(dcd._out, simulation.topology, simulation.integrator.getStepSize(), firstdcdstep, nsavcrd) # first step should not be 0
simulation.reporters.append(dcd)
simulation.reporters.append(StateDataReporter(stdout, nprint, step=True, speed=True, progress=True, totalSteps=(nsteps*nstride), remainingTime=True, separator='\t\t'))
simulation.reporters.append(StateDataReporter(log_file, nprint, step=True, kineticEnergy=True, potentialEnergy=True, totalEnergy=True, temperature=True, volume=True, speed=True))
print("\n> Simulating " + str(nsteps) + " steps... (Stride #" + str(n) + ")")
simulation.step(nsteps)
simulation.reporters.clear() # remove all reporters so the next iteration don't trigger them.
##################################
# Writing last frame information of stride
print("\n> Writing state file (" + str(rst_file) + ")...")
state = simulation.context.getState( getPositions=True, getVelocities=True )
with open(rst_file, 'w') as f:
f.write(XmlSerializer.serialize(state))
last_frame = int(nsteps/nsavcrd)
print("> Writing coordinate file (" + str(pdb_file) + ", frame = " + str(last_frame) + ")...")
positions = simulation.context.getState(getPositions=True).getPositions()
PDBFile.writeFile(simulation.topology, positions, open(pdb_file, 'w'))
print("\n> Finished!\n")
#@title **Concatenate and align the trajectories**
Skip = "1" #@param ["1", "2", "5", "10", "20", "50"]
stride_traj = Skip
Output_format = "dcd" #@param ["dcd", "pdb", "trr", "xtc"]
#@markdown **Note:** If the number of frames is too large it will exceed Colab's memory limit. 5000 frames or fewer is sufficient.
simulation_time_analysis = stride_time_ps*nstride
simulation_ns = float(Stride_Time)*int(Number_of_strides)
number_frames = int(simulation_time_analysis)/int(Write_the_trajectory)
number_frames_analysis = number_frames/int(stride_traj)
traj_end = os.path.join(workDir, str(Jobname) + "_all.dcd")
traj_end2 = os.path.join(workDir, str(Jobname) + "_all." + str(Output_format))
template = os.path.join(workDir, str(Jobname) + '_%s.dcd')
flist = [template % str(i) for i in range(1, nstride + 1)]
#print(flist)
trajlist = pt.load(flist, pdb, stride=stride_traj)
traj_image = trajlist.iterframe(autoimage=True, rmsfit=0)
traj_write = pt.write_traj(traj_end, traj_image, overwrite=True)
traj_load = pt.load(traj_end, pdb)
traj_align = pt.align(traj_load, mask="@CA", ref=0)
traj_write = pt.write_traj(traj_end, traj_align, overwrite=True, options='dcd')
traj_write = pt.write_traj(traj_end2, traj_align, overwrite=True, options=Output_format)
traj_load = pt.load(traj_end, os.path.join(workDir, "SYS_gaff2.prmtop"))
print(traj_load)
traj_end_check = os.path.exists(traj_end2)
if traj_end_check == True:
print("Trajectory concatenated successfully! :-)")
else:
print("ERROR: Check your inputs! ")
#@title **Load, visualize and check the trajectory**
#@markdown This will take a while. How about another cup of coffee? :-)
import warnings
warnings.filterwarnings('ignore')
!rm *.pdb 2> /dev/null
#py3dmol functions
class Atom(dict):
def __init__(self, line):
self["type"] = line[0:6].strip()
self["idx"] = line[6:11].strip()
self["name"] = line[12:16].strip()
self["resname"] = line[17:20].strip()
self["resid"] = int(int(line[22:26]))
self["x"] = float(line[30:38])
self["y"] = float(line[38:46])
self["z"] = float(line[46:54])
self["sym"] = line[76:78].strip()
def __str__(self):
line = list(" " * 80)
line[0:6] = self["type"].ljust(6)
line[6:11] = self["idx"].ljust(5)
line[12:16] = self["name"].ljust(4)
line[17:20] = self["resname"].ljust(3)
line[22:26] = str(self["resid"]).ljust(4)
line[30:38] = str(self["x"]).rjust(8)
line[38:46] = str(self["y"]).rjust(8)
line[46:54] = str(self["z"]).rjust(8)
line[76:78] = self["sym"].rjust(2)
return "".join(line) + "\n"
class Molecule(list):
def __init__(self, file):
for line in file:
if "ATOM" in line or "HETATM" in line:
self.append(Atom(line))
def __str__(self):
outstr = ""
for at in self:
outstr += str(at)
return outstr
if number_frames_analysis > 10:
stride_animation = number_frames_analysis/10
else:
stride_animation = 1
u = mda.Universe(pdb, traj_end)
# Write out frames for animation
protein = u.select_atoms('not (resname WAT)')
i = 0
for ts in u.trajectory[0:len(u.trajectory):int(stride_animation)]:
if i > -1:
with mda.Writer('' + str(i) + '.pdb', protein.n_atoms) as W:
W.write(protein)
i = i + 1
# Load frames as molecules
molecules = []
for i in range(int(len(u.trajectory)/int(stride_animation))):
with open('' + str(i) + '.pdb') as ifile:
molecules.append(Molecule(ifile))
models = ""
for i in range(len(molecules)):
models += "MODEL " + str(i) + "\n"
for j,mol in enumerate(molecules[i]):
models += str(mol)
models += "ENDMDL\n"
#view.addModelsAsFrames(models)
# Animation
view = py3Dmol.view(width=800, height=600)
view.addModelsAsFrames(models)
for i, at in enumerate(molecules[0]):
default = {"cartoon": {'color': 'spectrum'}}
view.setViewStyle({'style':'outline','color':'black','width':0.1})
view.setStyle({'model': -1, 'serial': i+1}, at.get("pymol", default))
HP = ['LIG']
view.setStyle({"model":-1,'and':[{'resn':HP}]},{'stick':{'radius':0.3}})
view.zoomTo()
view.animate({'loop': "forward"})
view.show()
#@title **Plot and inspect the ligand interaction network (LigPlot) over the MD simulation**
#@markdown The figure is interactive: you can drag residues around and click the legend to toggle the display of specific residue types or interactions. The figure is saved as an HTML file (output.html).
#@markdown **Provide the name of the output file below:**
Output_name = 'Interaction' #@param {type:"string"}
#@markdown The width of each edge is controlled by how frequently the interaction is observed. A threshold can be used to hide the least frequent interactions; for example, a threshold of 0.3 hides interactions that occur in less than 30% of the frames.
Threshold = 0.3 #@param {type:"slider", min:0, max:1.0, step:0.1}
import MDAnalysis as mda
import prolif as plf
import numpy as np
import os
from prolif.plotting.network import LigNetwork
# load topology
u = mda.Universe(os.path.join(workDir, "SYS_gaff2.prmtop"), traj_end)
lig = u.select_atoms("resname LIG")
prot = u.select_atoms("protein")
# create RDKit-like molecules for visualisation
lmol = plf.Molecule.from_mda(lig)
pmol = plf.Molecule.from_mda(prot)
if number_frames_analysis > 10:
stride_animation = number_frames_analysis/10
else:
stride_animation = 1
fp = plf.Fingerprint()
fp.run(u.trajectory[::int(stride_animation)], lig, prot)
df = fp.to_dataframe(return_atoms=True)
net = LigNetwork.from_ifp(df, lmol,
# replace with `kind="frame", frame=0` for the other depiction
kind="aggregate", threshold=float(Threshold),
rotation=270)
net.save(os.path.join(workDir, Output_name + ".html"))
net.display()
###Output
_____no_output_____
###Markdown
------ **Analysis** Visualizing the trajectory is very useful, but sometimes more quantitative data is needed. The analysis of MD trajectories covers a wide range of topics, and we do not intend to cover everything here. However, MDAnalysis and PyTraj make it easy to analyze the simulation. Below are a few example code snippets that can help shed light on the behavior of the simulation.
###Code
#@title **Calculate the binding free energy with the MM-PBSA method**
#@markdown **Important:** Here we compute the interaction energies and solvation free energies of the complex, receptor and ligand, and average the results to estimate the binding free energy. Note that the entropic contribution to binding is not computed, so strictly speaking this is not a true free energy, but it can still be used to compare similar systems. The binding energy is calculated with both the MM-GBSA and MM-PBSA methods for comparison.
#@markdown Select the GB/SA input parameters. The "OBC" models (igb=2 and 5) are newer and show substantial improvements, so they are recommended for most projects (for details, see section 4.1 of the Amber manual: https://ambermd.org/doc12/Amber20.pdf ).
igb = "2" #@param ["0", "1", "2", "5", "6", "7", "8", "10"]
Salt_concentration = '0.15' #@param {type:"string"}
#@markdown **Provide the output file name below:**
Output_name = 'FINAL_RESULTS_MMPBSA' #@param {type:"string"}
final_mmpbsa = os.path.join(workDir, Output_name)
if number_frames_analysis > 10:
stride = number_frames_analysis/10
else:
stride = 1
f = open("mmpbsa.in", "w")
f.write("""&general """ + "\n"
""" endframe=""" + str(int(number_frames_analysis)) + """, interval=""" + str(int(stride)) + """, strip_mask=:WAT:Na+:Cl-:Mg+:K+, """ + "\n"
"""/ """ + "\n"
"""&gb """ + "\n"
""" igb=""" + str(igb) + """, saltcon=""" + str(Salt_concentration) + """, """ + "\n"
"""/ """ + "\n"
"""&pb """ + "\n"
""" istrng=""" + str(Salt_concentration) + """, inp=2, radiopt=0, prbrad=1.4, """ + "\n"
"""/""")
f.close()
amberhome = "source /usr/local/amber.sh"
ante_MMPBSA = "ante-MMPBSA.py -p " + os.path.join(workDir, "SYS_gaff2.prmtop") + " -c com.prmtop -r rec.prmtop -l ligand.prmtop -s :WAT:Na+:Cl-:Mg+:K+ -n :LIG --radii mbondi2"
MMPBSA = "MMPBSA.py -O -i mmpbsa.in -o " + str(final_mmpbsa) + ".dat -sp " + os.path.join(workDir, "SYS_gaff2.prmtop") + " -cp com.prmtop -rp rec.prmtop -lp ligand.prmtop -y " + str(traj_end)
mkdir = "mkdir " + os.path.join(workDir, "MMPBSA")
mv = "mv _MMPBSA* *.prmtop reference.frc mmpbsa.in " + os.path.join(workDir, "MMPBSA")
original_stdout = sys.stdout # Save a reference to the original standard output
with open('run_MMPBSA.sh', 'w') as f:
sys.stdout = f # Change the standard output to the file we created.
print(amberhome)
print(ante_MMPBSA)
print(MMPBSA)
print(mkdir)
print(mv)
sys.stdout = original_stdout # Reset the standard output to its original value
!chmod 700 run_MMPBSA.sh 2>&1 1>/dev/null
!bash run_MMPBSA.sh 2>&1 1>/dev/null
f_mmpbsa = open(final_mmpbsa + '.dat', 'r')
file_contents = f_mmpbsa.read()
print(file_contents)
f_mmpbsa.close()
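# (Added example) Pull just the summary binding-energy lines out of the results file.
# This is a minimal sketch and assumes the standard MMPBSA.py output layout, in which the
# aggregated GB and PB results are reported on lines that start with "DELTA TOTAL".
with open(final_mmpbsa + '.dat', 'r') as f_summary:
    delta_total_lines = [line.strip() for line in f_summary if line.strip().startswith('DELTA TOTAL')]
for line in delta_total_lines:
    print(line)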
#@title **Interaction energy**
#@markdown **Important:** To quantify the strength of the ligand-protein interaction, we compute the nonbonded interaction energy between the two. It is important to note that this quantity is neither a free energy nor a binding energy.
#@markdown **Provide the output file name below:**
Output_name = 'Interaction_energy' #@param {type:"string"}
pt_topology = traj_load.top
restraint_array = pt.select_atoms('!(:WAT) & !(:Na+) & !(:Cl-) & !(:Mg+) & !(:K+) & !(:LIG)', pt_topology)
first_atom = restraint_array[0]
last_atom = restraint_array[-1]
mask = "LIE :LIG @" + str(first_atom+1) + "-" + str(last_atom+1)
lie = pt.analysis.energy_analysis.lie(traj_load, mask=mask, options='cutvdw 12.0 cutelec 12.0 diel 2.0', dtype='dict')
lie_elec = lie['LIE[EELEC]']
lie_vdw = lie['LIE[EVDW]']
lie_total = lie_elec + lie_vdw
lie_total_mean = mean(lie_total)
lie_total_stdev = stdev(lie_total)
print("Interaction Energy Average = " + str("{:.2f}".format(lie_total_mean)) + " \u00B1 " + str("{:.2f}".format(lie_total_stdev)) + " kcal/mol")
time = len(lie_total)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
ax = plt.plot(time_array, lie_total, alpha=0.6, color = 'blue', linewidth = 1.5, label= "Total Energy")
ax = plt.plot(time_array, lie_elec, alpha=0.6, color = 'green', linewidth = 1.5, label= "Electrostatic Energy")
ax = plt.plot(time_array, lie_vdw, alpha=0.6, color = 'red', linewidth = 1.5, label= "van der Waals Energy")
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel('Interaction Energy \n (kcal/mol)', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.legend(frameon=False, loc='center left', bbox_to_anchor=(1, 0.5))
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
lie_eelec = pd.DataFrame(lie['LIE[EELEC]'])
lie_eelec.to_csv(os.path.join(workDir, Output_name + "_eelec.csv"))
lie_evdw = pd.DataFrame(lie['LIE[EVDW]'])
lie_evdw.to_csv(os.path.join(workDir, Output_name + "_evdw.csv"))
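# (Added example) A simple convergence check: the cumulative (running) average of the total
# interaction energy should flatten out if the estimate is converged. Illustrative sketch that
# reuses the arrays computed above.
lie_running_mean = np.cumsum(lie_total) / np.arange(1, len(lie_total) + 1)
plt.figure()
plt.plot(time_array, lie_running_mean, color='black', linewidth=1.5)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("Cumulative mean \n (kcal/mol)", fontsize = 14, fontweight = 'bold')
plt.show()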
#@title **Compute the distance between the ligand and the catalytic site residues**
#@markdown **Provide the output file name below:**
Output_name = 'distance' #@param {type:"string"}
#@markdown **Cutoff distance to the nearest residues (Å):**
Distance = '5' #@param {type:"string"}
ini = 0
top = pt_topology
for frame in traj_load:
top.set_reference(traj_load[ini])
indices = traj_load.top.select('(:LIG<:' + str(Distance) + ')&!(:WAT|:Na+,Cl-,LIG)')
residues = [res.original_resid for res in top[indices].residues]
res_string = ','.join(str(e) for e in residues)
print("Selected residues = " + res_string + "\n")
mask = ":LIG :" + str(res_string)
dist = pt.distance(traj_load, mask)
dist_mean = mean(dist)
dist_stdev = stdev(dist)
print("Distance Average = " + str("{:.2f}".format(dist_mean)) + " \u00B1 " + str("{:.2f}".format(dist_stdev)) + " Å")
time = len(dist)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
ax = plt.plot(time_array, dist, alpha=1, color = 'springgreen', linewidth = 1.0)
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("Distance [$\AA$]", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(dist)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Compute the distance between the ligand and specific residues**
#@markdown **Provide the output file name below:**
Output_name = 'distance_select' #@param {type:"string"}
#@markdown **Enter the residue numbers separated by commas without spaces (1,2,3...):**
Residues = '78,84,85' #@param {type:"string"}
mask = ":LIG :" + str(Residues)
dist = pt.distance(traj_load, mask)
print("Selected residues = " + Residues + "\n")
dist_mean = mean(dist)
dist_stdev = stdev(dist)
print("Distance Average = " + str("{:.2f}".format(dist_mean)) + " \u00B1 " + str("{:.2f}".format(dist_stdev)) + " Å")
time = len(dist)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
ax = plt.plot(time_array, dist, alpha=1, color = 'magenta', linewidth = 1.0)
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("Distance [$\AA$]", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(dist)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Compute the RMSD of the protein CA atoms**
#@markdown **Provide the name of the output file below:**
Output_name = 'rmsd_ca' #@param {type:"string"}
rmsd = pt.rmsd(traj_load, ref = 0, mask = "@CA")
time = len(rmsd)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
ax = plt.plot(time_array, rmsd, alpha=0.6, color = 'blue', linewidth = 1.0)
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("RMSD [$\AA$]", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(rmsd)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Plot the RMSD as a distribution**
#@markdown **Provide the name of the output file below:**
Output_name = 'rmsd_dist' #@param {type:"string"}
ax = sb.kdeplot(rmsd, color="blue", shade=True, alpha=0.2, linewidth=0.5)
plt.xlabel('RMSD [$\AA$]', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks([])
plt.ylabel('')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(False)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
#@title **Compute the radius of gyration of the protein CA atoms**
#@markdown **Provide the name of the output file below:**
Output_name = 'radius_gyration' #@param {type:"string"}
radgyr = pt.radgyr(traj_load, mask = "@CA")
time = len(rmsd)*int(Write_the_trajectory)/1000
time_array = np.arange(0,time,int(Write_the_trajectory)/1000)*int(stride_traj)
# Plotting:
plt.plot(time_array, radgyr, alpha=0.6, color = 'green', linewidth = 1.0)
plt.xlim(0, simulation_ns)
#plt.ylim(2, 6)
plt.xlabel("Time (ns)", fontsize = 14, fontweight = 'bold')
plt.ylabel("Radius of gyration ($\AA$)", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(radgyr)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Plot the radius of gyration as a distribution**
#@markdown **Provide the name of the output file below:**
Output_name = 'radius_gyration_dist' #@param {type:"string"}
ax = sb.kdeplot(radgyr, color="green", shade=True, alpha=0.2, linewidth=0.5)
plt.xlabel('Radius of gyration ($\AA$)', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks([])
plt.ylabel('')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(False)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
#@title **Compute the RMSF of the protein CA atoms**
#@markdown **Provide the name of the output file below:**
Output_name = 'rmsf_ca' #@param {type:"string"}
rmsf = pt.rmsf(traj_load, "@CA")
bfactor = pt.bfactors(traj_load, byres=True)
# Plotting:
plt.plot(rmsf[:,1], alpha=1.0, color = 'red', linewidth = 1.0)
plt.xlabel("Residue", fontsize = 14, fontweight = 'bold')
plt.ylabel("RMSF ($\AA$)", fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.xlim(0, len(rmsf[:-1]))
#plt.xticks(np.arange(min(rmsf[:1]), max(rmsf[:1])))
plt.yticks(fontsize = 12)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(rmsf)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **2D RMSD**
#@markdown **Provide the name of the output file below:**
Output_name = '2D_rmsd' #@param {type:"string"}
last_frame = len(time_array)
stride_ticks_f = (last_frame)/5
ticks_frame = np.arange(0,(len(time_array) + float(stride_ticks_f)), float(stride_ticks_f))
a = ticks_frame.astype(float)
stride_ticks_t = (simulation_ns)/5
tick_time = np.arange(0,(float(simulation_ns) + float(stride_ticks_t)), float(stride_ticks_t))
b = tick_time.astype(float)
mat1 = pt.pairwise_rmsd(traj_load, mask="@CA", frame_indices=range(int(number_frames_analysis)))
ax = plt.imshow(mat1, cmap = 'PRGn', origin='lower', interpolation = 'bicubic')
plt.title('2D RMSD')
plt.xlabel('Time (ns)', fontsize = 14, fontweight = 'bold')
plt.ylabel('Time (ns)', fontsize = 14, fontweight = 'bold')
# plt.xticks(fontsize = 12)
# plt.yticks(fontsize = 12)
plt.xticks(a, b.round(decimals=3), fontsize = 12)
plt.yticks(a, b.round(decimals=3), fontsize = 12)
# plt.xlim(0, a[-1])
# plt.ylim(0, a[-1])
cbar1 = plt.colorbar()
cbar1.set_label("RMSD ($\AA$)", fontsize = 14, fontweight = 'bold')
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(mat1)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
#@title **Compute the eigenvectors from principal component analysis (PCA)**
data = pt.pca(traj_load, fit=True, ref=0, mask='@CA', n_vecs=2)
#print('projection values of each frame to first mode = {} \n'.format(data[0][0]))
#print('projection values of each frame to second mode = {} \n'.format(data[0][1]))
#print('eigvenvalues of first two modes', data[1][0])
#print("")
#print('eigvenvectors of first two modes: \n', data[1][1])
last_frame = len(time_array)
stride_ticks_f = (last_frame)/5
ticks_frame = np.arange(0,(len(time_array) + float(stride_ticks_f)), float(stride_ticks_f))
a = ticks_frame.astype(float)
a2 = a.tolist()
stride_ticks_t = (simulation_ns)/5
tick_time = np.arange(0,(float(simulation_ns) + float(stride_ticks_t)), float(stride_ticks_t))
b = tick_time.astype(float)
#@markdown **Provide the name of the output file below:**
Output_name = 'PCA' #@param {type:"string"}
Output_PC1 = 'PC1' #@param {type:"string"}
Output_PC2 = 'PC2' #@param {type:"string"}
%matplotlib inline
%config InlineBackend.figure_format = 'retina' # high resolution
projection_data = data[0]
plt.title(r'PCA of C-$\alpha$')
PC1 = data[0][0]
PC2 = data[0][1]
a = plt.scatter(PC1,PC2, c=range(int(number_frames_analysis)), cmap='Greens', marker='o',s=8, alpha=1)
plt.clim(0, last_frame)
plt.xlabel('PC1', fontsize = 14, fontweight = 'bold')
plt.ylabel('PC2', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
# N = len(number_frames)
# x2 = np.arange(N)
cbar1 = plt.colorbar(a, orientation="vertical")
cbar1.set_label('Time(ns)', fontsize = 14, fontweight = 'bold')
cbar1.set_ticks(a2)
cbar1.set_ticklabels(b.round(decimals=3))
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
pc1=pd.DataFrame(PC1)
pc1.to_csv(os.path.join(workDir, Output_PC1 + ".csv"))
pc2=pd.DataFrame(PC2)
pc2.to_csv(os.path.join(workDir, Output_PC2 + ".csv"))
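# (Added note, illustrative) data[1][0] holds the eigenvalues of the first two modes (see the
# commented prints above). Because only n_vecs=2 modes were requested, the ratio below is the
# weight of PC1 relative to PC1+PC2, not the total explained variance of the trajectory.
eigvals = np.asarray(data[1][0], dtype=float)
print('PC1 / (PC1 + PC2) variance ratio = ' + str("{:.2f}".format(eigvals[0] / eigvals.sum())))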
#@title **Plot principal component 1 (PC1) and principal component 2 (PC2) as distributions**
Output_name = 'PCA_dist' #@param {type:"string"}
fig = plt.figure(figsize=(9,5))
plt.subplot(1, 2, 1)
ax = sb.kdeplot(PC1, color="green", shade=True, alpha=0.2, linewidth=0.5)
plt.xlabel('PC1', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks([])
plt.ylabel('')
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(True)
ax.spines['left'].set_visible(False)
plt.subplot(1, 2, 2)
ax2 = sb.kdeplot(PC2, color="purple", shade=True, alpha=0.2, linewidth=0.5)
plt.xlabel('PC2', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks([])
plt.ylabel('')
ax2.spines['top'].set_visible(False)
ax2.spines['right'].set_visible(False)
ax2.spines['bottom'].set_visible(True)
ax2.spines['left'].set_visible(False)
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
#@title **Pearson's Cross Correlation (CC)**
#@markdown **Provide the name of the output file below:**
Output_name = 'cross_correlation' #@param {type:"string"}
traj_align = pt.align(traj_load, mask='@CA', ref=0)
mat_cc = matrix.correl(traj_align, '@CA')
ax = plt.imshow(mat_cc, cmap = 'PiYG_r', interpolation = 'bicubic', vmin = -1, vmax = 1, origin='lower')
plt.xlabel('Residues', fontsize = 14, fontweight = 'bold')
plt.ylabel('Residues', fontsize = 14, fontweight = 'bold')
plt.xticks(fontsize = 12)
plt.yticks(fontsize = 12)
cbar1 = plt.colorbar()
cbar1.set_label('$CC_ij$', fontsize = 14, fontweight = 'bold')
plt.savefig(os.path.join(workDir, Output_name + ".png"), dpi=600, bbox_inches='tight')
raw_data=pd.DataFrame(mat_cc)
raw_data.to_csv(os.path.join(workDir, Output_name + ".csv"))
###Output
_____no_output_____ |
Assignment_3_Palencia.ipynb | ###Markdown
Linear Algebra for ChE Assignment 3: Matrices Objectives At the end of this activity you will be able to: 1. Be familiar with matrices and their relation to linear equations. 2. Perform basic matrix operations. 3. Program and translate matrix equations and operations using Python. Discussion
###Code
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
###Output
_____no_output_____
###Markdown
Matrices $$A = \left\{ \begin{array}\ x + y \\ 4x - 10y \end{array}\right. \\B = \left\{ \begin{array}\ x+y+z \\ 3x -2y -z \\ -x + 4y +2z \end{array}\right. \\C = \left\{ \begin{array}\ w -2x +3y +4z \\ 3w -x -3y +4z \\ 2w +4x +2y -4z\\ \end{array}\right. \\$$ $$A=\begin{bmatrix} 1 & 1 \\ 4 & {-10}\end{bmatrix} \\B=\begin{bmatrix} 1 & 1 & 1 \\ 3 & -2 & -1 \\ -1 & 4 & 2\end{bmatrix}\\C=\begin{bmatrix} 3 & -4 & 3 & -4\\ 1 & -1 & -5 & 1 \\ 3 & -1 & -4 & 2\end{bmatrix}\\$$ Declaring Matrices $$A=\begin{bmatrix}a_{(0,0)}&a_{(0,1)}&\dots&a_{(0,j-1)}\\a_{(1,0)}&a_{(1,1)}&\dots&a_{(1,j-1)}\\\vdots&\vdots&\ddots&\vdots&\\a_{(i-1,0)}&a_{(i-1,1)}&\dots&a_{(i-1,j-1)}\end{bmatrix}$$
###Code
## Since we'll keep on describing matrices, Let's make a function
def describe_mat(matrix):
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\n')
## Declaring a 2 x 2 Matrix
A = np.array([
[1, 2],
[3, 1]
])
describe_mat (A)
G = np.array([
[2,3,4],
[2,4,6]
])
describe_mat(G)
## Declaring a 3 x 2 matrix
B = np.array([
[4, 8],
[3, 9],
[2, 6]
])
describe_mat(B)
H = np.array([7,7,7,8])
describe_mat(H)
###Output
Matrix:
[7 7 7 8]
Shape: (4,)
Rank: 1
###Markdown
Categorizing Matrices According to Shape Row and Column Matrices
###Code
## Declaring a Row Matrix
rowmatrix1D = np.array([
2, 4, 6, 8
]) ## this is a 1-D array with a shape of (4,); it's not really considered a row matrix.
row_mat_2D = np.array([
[3,6,-9, -12]
]) ## this is a 2-D matrix with a shape of (1,4)
describe_mat(rowmatrix1D)
describe_mat(row_mat_2D)
## Declaring a Column Matrix
col_mat = np.array([
[4],
[10],
[7]
]) ## this is a 2-D Matrix with a shape of (3,1)
describe_mat(col_mat)
###Output
Matrix:
[[ 4]
[10]
[ 7]]
Shape: (3, 1)
Rank: 2
###Markdown
Square Matrices
###Code
def describe_mat(matrix):
is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\n')
square_mat = np.array([
[1,9,3],
[9,4,6],
[3,6,7]
])
non_square_mat = np.array([
[5,4,3],
[9,8,7]
])
describe_mat(square_mat)
describe_mat(non_square_mat)
###Output
Matrix:
[[1 9 3]
[9 4 6]
[3 6 7]]
Shape: (3, 3)
Rank: 2
Is Square: True
Matrix:
[[5 4 3]
[9 8 7]]
Shape: (2, 3)
Rank: 2
Is Square: False
###Markdown
According to Element Values Null Matrix A null (empty) matrix is a matrix that has no elements. It is trivially contained in any vector or matrix.
###Code
def describe_mat(matrix):
if matrix.size > 0:
is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\n')
else:
print('Matrix is Null')
null_mat = np.array([])
describe_mat(null_mat)
###Output
Matrix is Null
###Markdown
Zero MatrixA zero matrix can be any rectangular matrix but with all elements having a value of 0.
###Code
zero_mat_row = np.zeros((6,4))
zero_mat_sqr = np.zeros((3,6))
zero_mat_rct = np.zeros((9,8))
print(f'Zero Row Matrix: \n{zero_mat_row}')
print(f'Zero Square Matrix: \n{zero_mat_sqr}')
print(f'Zero Rectangular Matrix: \n{zero_mat_rct}')
###Output
Zero Row Matrix:
[[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]
[0. 0. 0. 0.]]
Zero Square Matrix:
[[0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0.]]
Zero Rectangular Matrix:
[[0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0.]]
###Markdown
Ones MatrixA ones matrix, just like the zero matrix, can be any rectangular matrix but all of its elements are 1s instead of 0s.
###Code
ones_mat_row = np.ones((2,4))
ones_mat_sqr = np.ones((3,9))
ones_mat_rct = np.ones((5,8))
print(f'Ones Row Matrix: \n{ones_mat_row}')
print(f'Ones Square Matrix: \n{ones_mat_sqr}')
print(f'Ones Rectangular Matrix: \n{ones_mat_rct}')
###Output
Ones Row Matrix:
[[1. 1. 1. 1.]
[1. 1. 1. 1.]]
Ones Square Matrix:
[[1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1.]]
Ones Rectangular Matrix:
[[1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1. 1. 1.]]
###Markdown
Diagonal MatrixA diagonal matrix is a square matrix that has values only at the diagonal of the matrix.
###Code
np.array([
[2,4,6],
[3,7,9],
[4,8,1]
])
# a[1,1], a[2,2], a[3,3], ... a[n-1,n-1]
d = np.diag([7,2,3,4])
d
###Output
_____no_output_____
###Markdown
Identity MatrixAn identity matrix is a special diagonal matrix in which the values at the diagonal are ones.
###Code
np.eye(2)
np.identity(5)
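## Added check (illustrative): multiplying by a conformable identity matrix leaves any matrix unchanged.
M = np.array([
    [1, 9, 3],
    [9, 4, 6],
    [3, 6, 7]
])
print(np.array_equal(np.identity(3) @ M, M)) ## expected: True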
###Output
_____no_output_____
###Markdown
Upper Triangular MatrixAn upper triangular matrix is a matrix that has no values below the diagonal.
###Code
np.array([
[1,2,3,4],
[5,6,7,5],
[9,2,-3,4],
[5,6,7,-8]
])
F = np.array([
[2, 4, -6, -8, 10],
[2, 4, -6, -8, 10],
[2, 4, -6, -8, 10],
[2, 4, -6, -8, 10],
[2, 4, -6, -8, 10],
])
np.triu(F)
###Output
_____no_output_____
###Markdown
Lower Triangular MatrixA lower triangular matrix is a matrix that has no values above the diagonal.
###Code
np.array([
[1, 9, 3],
[2, 4, 6],
[3, 7, 9],
])
###Output
_____no_output_____
###Markdown
Practice 1.Given the linear combination below, try to create a corresponding matrix representing it.:$$\theta = 5x + 3y - z$$ 2. Given the system of linear combinations below, try to encode it as a matrix. Also describe the matrix.$$A = \left\{\begin{array}5x_1 + 2x_2 +x_3\\4x_2 - x_3\\10x_3\end{array}\right.$$ 3. Given the matrix below, express it as a linear combination in a markdown and a LaTeX markdown
###Code
G = np.array([
[1,7,8],
[2,2,2],
[4,6,7]
])
###Output
_____no_output_____
###Markdown
4. Given the matrix below, display the output as a LaTeX markdown also express it as a system of linear combinations.
###Code
H = np.tril(G)
H
###Output
_____no_output_____
###Markdown
Matrix Algebra Addition
###Code
A= np.array([
[1,2],
[8,2],
[9,2],
])
B= np. array ([
[1,2],
[1,2],
[1,2],
])
A+B
###Output
_____no_output_____
###Markdown
Subtraction
###Code
A= np.array([
[1,2],
[9,2],
[5,2],
])
B= np. array ([
[1,2],
[4,8],
[1,2],
])
A-B
###Output
_____no_output_____
###Markdown
Element-Wise Multiplication
###Code
A= np.array([
[1,7],
[8,2],
[1,9],
])
B= np. array ([
[5,2],
[1,2],
])
A@B
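## Note (added): the `@` operator above is *matrix* multiplication ((3,2) @ (2,2) -> (3,2)).
## True element-wise multiplication uses `*` (or np.multiply) and requires matching shapes:
C = np.array([
    [2, 1],
    [3, 4],
    [5, 6],
])
print(A * C) ## element-wise product; result has the same (3,2) shape as A and C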
###Output
_____no_output_____
###Markdown
Task 1 Create a function named mat_desc() that thoroughly describes a matrix; it should: Display the shape, size, and rank of the matrix. Display whether the matrix is square or non-square. Display whether the matrix is an empty matrix. Display whether the matrix is an identity, ones, or zeros matrix. Use 5 sample matrices whose shapes are not smaller than (3,3). In your methodology, create a flowchart and discuss the functions and methods you have used. Present your results in the results section, showing the description of each matrix you have declared.
###Code
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
def mat_desc(mat):
sq = False
mat = np.array(mat)
print(mat)
print('Shape:', mat.shape)
print('Size:', mat.size)
print('Rank:', np.linalg.matrix_rank(mat))
if(mat.shape[0] == mat.shape[1]):
sq = True
print('The matrix is a square matrix')
else:
print('The matrix is a non-square matrix')
if(mat.shape[0] == 0 and mat.shape[1] == 0):
print('The matrix is empty')
else:
print('The matrix is not empty')
iden = np.identity(mat.shape[0])
if(sq and (iden == mat).all()):
print('The matrix is an identity matrix')
else:
print('The matrix is not an identity matrix')
one = np.ones((mat.shape[0], mat.shape[1]))
if((one == mat).all()):
print('The matrix is a ones matrix')
else:
print('The matrix is not a ones matrix')
zero = np.zeros((mat.shape[0], mat.shape[1]))
if((zero == mat).all()):
print('The matrix is a zeros matrix')
else:
print('The matrix is not a zeros matrix')
print ('Matrix 1:')
RM1 = np.array([
[1,0,0],
[5,3,0],
[7,8,5]])
mat_desc(RM1)
print ('Matrix 2:')
RM2 = np.array([
[0,0,0],
[0,0,0],
[0,0,0]])
mat_desc(RM2)
print ('Matrix 3:')
RM3 = np.array([
[1,1,1],
[1,1,1],
[1,1,1]])
mat_desc(RM3)
print ('Matrix 4:')
RM4 = np.array([
[7,12,3],
[12,32,51],
[32,51,13]])
mat_desc(RM4)
print ('Matrix 5:')
RM5 = np.array ([
[69,32,34],
[134,66,16],
[26,2001,2000]])
mat_desc(RM5)
###Output
Matrix 5:
[[ 69 32 34]
[ 134 66 16]
[ 26 2001 2000]]
Shape: (3, 3)
Size: 9
Rank: 3
The matrix is a square matrix
The matrix is not empty
The matrix is not an identity matrix
The matrix is not a ones matrix
The matrix is not a zeros matrix
###Markdown
Task 2 Create a function named mat_operations() that takes two matrices as input parameters; it should: Determine whether the matrices are viable for the operation and return your own error message if they are not. Return the sum of the matrices. Return the difference of the matrices. Return the element-wise multiplication of the matrices. Return the element-wise division of the matrices. Use 5 sample matrices whose shapes are not smaller than (3,3). In your methodology, create a flowchart and discuss the functions and methods you have used. Present your results in the results section, showing the description of each matrix you have declared.
###Code
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
def mat_operations(mat1, mat2):
mat1 = np.array(mat1)
mat2 = np.array(mat2)
print('Matrix 1:', mat1)
print('Matrix 2:', mat2)
if(mat1.shape != mat2.shape):
print('The shape of both matrices are not the same. Could not perform operations.')
return
print('Sum of the given matrices:')
msum = mat1 + mat2
print(msum)
print('Difference of the given matrices:')
mdiff = mat1 - mat2
print(mdiff)
print('Element-wise multiplication of the matrices:')
mmul = np.multiply(mat1, mat2)
print(mmul)
print('Element-wise division of the matrices:')
mmul = np.divide(mat1, mat2)
print(mmul)
print('Sample Case: 1')
mat_operations([[7, 12, 3], [12, 32, 51], [32, 51, 13]], [[7, 12, 3], [12, 3, 5], [2, 1, 1]])
print('Sample Case: 2')
mat_operations([[7, 2, 3], [2, 2, 1], [2, 1, 3]], [[8, 2, 7], [2, 3, 5], [2 , 1, 1]])
print('Sample Case: 3')
mat_operations([[7, 2, 3], [12, 2, 1], [3, 1, 13]], [[7, 1, 2, 3], [1, 2, 3, 5], [0, 7, 3, 5], [2, 1, 1, 4]])
print('Sample Case: 4')
mat_operations([[0, 0, 0], [2, 0, 1], [0, 1, 3]], [[1, 0, 0], [2, 3, 5], [0, 0, 1]])
print('Sample Case: 5')
mat_operations([[7, 1, 2, 3], [1, 2, 3, 5], [0, 7, 3, 5]], [[7, 2, 3], [12, 2, 1], [3, 1, 13]])
###Output
Sample Case: 1
Matrix 1: [[ 7 12 3]
[12 32 51]
[32 51 13]]
Matrix 2: [[ 7 12 3]
[12 3 5]
[ 2 1 1]]
Sum of the given matrices:
[[14 24 6]
[24 35 56]
[34 52 14]]
Difference of the given matrices:
[[ 0 0 0]
[ 0 29 46]
[30 50 12]]
Element-wise multiplication of the matrices:
[[ 49 144 9]
[144 96 255]
[ 64 51 13]]
Element-wise division of the matrices:
[[ 1. 1. 1. ]
[ 1. 10.66666667 10.2 ]
[16. 51. 13. ]]
Sample Case: 2
Matrix 1: [[7 2 3]
[2 2 1]
[2 1 3]]
Matrix 2: [[8 2 7]
[2 3 5]
[2 1 1]]
Sum of the given matrices:
[[15 4 10]
[ 4 5 6]
[ 4 2 4]]
Difference of the given matrices:
[[-1 0 -4]
[ 0 -1 -4]
[ 0 0 2]]
Element-wise multiplication of the matrices:
[[56 4 21]
[ 4 6 5]
[ 4 1 3]]
Element-wise division of the matrices:
[[0.875 1. 0.42857143]
[1. 0.66666667 0.2 ]
[1. 1. 3. ]]
Sample Case: 3
Matrix 1: [[ 7 2 3]
[12 2 1]
[ 3 1 13]]
Matrix 2: [[7 1 2 3]
[1 2 3 5]
[0 7 3 5]
[2 1 1 4]]
The shape of both matrices are not the same. Could not perform operations.
Sample Case: 4
Matrix 1: [[0 0 0]
[2 0 1]
[0 1 3]]
Matrix 2: [[1 0 0]
[2 3 5]
[0 0 1]]
Sum of the given matrices:
[[1 0 0]
[4 3 6]
[0 1 4]]
Difference of the given matrices:
[[-1 0 0]
[ 0 -3 -4]
[ 0 1 2]]
Element-wise multiplication of the matrices:
[[0 0 0]
[4 0 5]
[0 0 3]]
Element-wise division of the matrices:
[[0. nan nan]
[1. 0. 0.2]
[nan inf 3. ]]
Sample Case: 5
Matrix 1: [[7 1 2 3]
[1 2 3 5]
[0 7 3 5]]
Matrix 2: [[ 7 2 3]
[12 2 1]
[ 3 1 13]]
The shape of both matrices are not the same. Could not perform operations.
|
Pytorch 60min Blitz.ipynb | ###Markdown
PyTorch 60 minute blitz TensorsSee [Pytorch learning the basics](PyTorch Learn the Basics.ipynbTensors) AutoGrad See [Pytorch learning the basics](PyTorch Learn the Basics.ipynb Automatic differentiation with `torch.autograd`) Neural Networks Examine the example network below for classifying digit images: The typical training procedure is as follows: 1. Define the NN 2. Iterate over the dataset 3. Process the input through the NN 4. Compute the loss 5. Propagate the gradients 6. Update the weights Define the network:
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self) -> None:
super().__init__()
# 1 input image channel, 6 output channels, 5x5 square convolution
# kernel
self.conv1 = nn.Conv2d(1, 6, 5)
self.conv2 = nn.Conv2d(6, 16, 5)
# affine operation: y=wx+b
self.fc1 = nn.Linear(16*5*5, 120) # 5x5 from image dimension
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# Max pooling over a (2,2) window
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
# If the size is a square, you can specify the max_pool with one number
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
# Flatten all dimension except the batch dimension
x = torch.flatten(x, 1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x # returning logits
net = Net()
print(net)
# The learnable parameters of the model.
params = list(net.parameters())
print(len(params))
print(params[0].size()) # conv1's .weight
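# Added example: count all trainable parameters of the network.
total_params = sum(p.numel() for p in net.parameters() if p.requires_grad)
print(total_params)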
###Output
10
torch.Size([6, 1, 5, 5])
###Markdown
`torch.nn` only supports mini-batches, not single samples. For example `nn.Conv2d` takes a 4D Tensor of `nSamples x nChannels x Height x Width`. If you only have one sample you can use `input.unsqueeze(0)` to add a fake batch dimension
###Code
# Random 32x32 input. Note: the expected input size of this network (LeNet) is 32x32.
# To use this on MNIST you must resize the images.
input = torch.randn(1,32,32).unsqueeze(0)
out = net(input)
print(out)
# zero out the gradient buffers with random gradients:
net.zero_grad()
out.backward(torch.randn(1,10))
###Output
_____no_output_____
###Markdown
Computing the loss function, backprop and weight updates
###Code
output = net(input)
target = torch.randn(10) # a dummy target, for example
target = target.view(1, -1) # make it the same shape as output
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)
net.zero_grad() # zeroes the gradient buffers of all parameters
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
import torch.optim as optim
# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)
# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step() # Does the update
###Output
_____no_output_____
###Markdown
Training a Classifier What about data? You can generally use standard Python packages to load data into numpy arrays and convert these into Tensors.- Images: Pillow, OpenCV- Audio: SciPy and Librosa- Text: Python, NLTK and SpaCy Torchvision has dataloaders for common datasets such as ImageNet, CIFAR10, MNIST, etc. and data transformers for images. We will use CIFAR10; the images are 3x32x32. Training an image classifier 1. Load and normalize the CIFAR10 training and test datasets using `torchvision`. 2. Define a CNN 3. Define the loss function 4. Train the network on the training data 5. Test the network on the test data 1. Load and normalize CIFAR10 The outputs of torchvision datasets are PILImage images in the [0,1] range; we want to normalize them to tensors in the [-1,1] range.
###Code
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((.5, .5, .5), (.5, .5, .5))]
)
batch_size = 4
trainset = torchvision.datasets.CIFAR10(
root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(
trainset, batch_size=batch_size, shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(
root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(
trainset, batch_size=batch_size, shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck')
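# Added sanity check (illustrative): ToTensor() scales PIL pixel values from [0, 255] to [0.0, 1.0],
# and Normalize((.5, .5, .5), (.5, .5, .5)) then maps each channel to [-1, 1] via (x - 0.5) / 0.5.
check_imgs, _ = next(iter(trainloader))
print('batch value range:', check_imgs.min().item(), 'to', check_imgs.max().item())  # should lie within [-1, 1]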
# Visualise data
import matplotlib.pyplot as plt
import numpy as np
def imshow(img):
img = img/2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1,2,0)))
plt.show()
# print("here")
# Get some random training images
dataiter = iter(trainloader)
# print("here")
images, labels = next(dataiter)  # next() works across PyTorch versions (dataiter.next() was removed in newer releases)
# print("here")
imshow(torchvision.utils.make_grid(images))
print(' '.join('%5s' % classes[labels[j]] for j in range(batch_size)))
###Output
_____no_output_____
###Markdown
2. Define the CNNWe're going to use the same network as before, except now it takes 3 channel images rather than 1 channel.
###Code
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = torch.flatten(x, 1) # flatten all dimensions except batch
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
###Output
_____no_output_____
###Markdown
3. Define loss and optimizerUsing Cross-Entropy loss and SGD with momentum
###Code
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
net.cuda()
###Output
_____no_output_____
###Markdown
4. Train the NN
###Code
for epoch in range(2):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
inputs, labels = data
optimizer.zero_grad()
outputs = net(inputs.cuda())
loss = criterion(outputs.cpu(), labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if i % 2000 == 1999:
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
# Save the model
PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)
###Output
_____no_output_____
###Markdown
5. Test the network
###Code
dataiter = iter(testloader)
images, labels = next(dataiter)
# Print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
net = Net().cuda()
net.load_state_dict(torch.load(PATH))
outputs = net(images.cuda())
# Outputs are logits for 10 classes
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4)))
from tqdm import tqdm
# test the entire test dataset
correct = 0
total = 0
# since we're not training, we don't need to calculate the gradients for our outputs
with torch.no_grad():
for data in tqdm(testloader):
images, labels = data
images, labels = images.cuda(), labels.cuda()
# calculate outputs by running images through the network
outputs = net(images)
# the class with the highest energy is what we choose as prediction
_, predicted = torch.max(outputs.data, 1)
predicted = predicted
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
# prepare to count predictions for each class
correct_pred = {classname: 0 for classname in classes}
total_pred = {classname: 0 for classname in classes}
from tqdm import tqdm
# again no gradients needed
with torch.no_grad():
for data in tqdm(testloader):
images, labels = data
images, labels = images.cuda(), labels.cuda()
outputs = net(images)
_, predictions = torch.max(outputs, 1)
# collect the correct predictions for each class
for label, prediction in zip(labels, predictions):
if label == prediction:
correct_pred[classes[label]] += 1
total_pred[classes[label]] += 1
# print accuracy for each class
accuracies = []
for classname, correct_count in correct_pred.items():
accuracy = 100 * float(correct_count) / total_pred[classname]
accuracies.append(accuracy)
print("Accuracy for class {:5s} is: {:.1f} %".format(classname,
accuracy))
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.bar(classes, accuracies)
plt.show()
###Output
Accuracy for class plane is: 63.1 %
Accuracy for class car is: 71.9 %
Accuracy for class bird is: 39.1 %
Accuracy for class cat is: 54.7 %
Accuracy for class deer is: 49.8 %
Accuracy for class dog is: 49.5 %
Accuracy for class frog is: 47.4 %
Accuracy for class horse is: 59.6 %
Accuracy for class ship is: 57.1 %
Accuracy for class truck is: 74.6 %
|
04_data_manipulation/04_data_manipulation_solutions.ipynb | ###Markdown
MAT281 Aplicaciones de la Matemática en la IngenieríaPuedes ejecutar este jupyter notebook de manera interactiva:[](https://mybinder.org/v2/gh/sebastiandres/mat281_m01_introduccion/master?filepath=00_template/00_template.ipynb)[](https://colab.research.google.com/github/sebastiandres/mat281_m01_introduccion/blob/master//00_template/00_template.ipynb) ¿Qué contenido aprenderemos?* Manipulación de datos con ```pandas```. - Crear objetos (Series, DataFrames, Index). - Análisis exploratorio. - Realizar operaciones y filtros. - Aplicar funciones y métodos. Motivación En los últimos años, el interés por los datos ha crecido sostenidamente, algunos términos de moda tales como *data science*, *machine learning*, *big data*, *artifial intelligence*, *deep learning*, etc. son prueba fehaciente de ello. Por dar un ejemplo, las búsquedas la siguiente imagen muestra el interés de búsqueda en Google por *__Data Science__* en los últimos cinco años. [Fuente](https://trends.google.com/trends/explore?date=today%205-y&q=data%20science)  Muchos se ha dicho respecto a esto, declaraciones tales como: * _"The world’s most valuable resource is no longer oil, but data."_* _"AI is the new electricity."_* _"Data Scientist: The Sexiest Job of the 21st Century."_ trends.embed.renderExploreWidget("TIMESERIES", {"comparisonItem":[{"keyword":"data science","geo":"","time":"today 5-y"}],"category":0,"property":""}, {"exploreQuery":"date=today%205-y&q=data%20science","guestPath":"https://trends.google.com:443/trends/embed/"}); Los datos por si solos no son útiles, su verdadero valor está en el análisis y en todo lo que esto conlleva, por ejemplo:* Predicciones* Clasificaciones* Optimización* Visualización* Aprendizaje Por esto es importante recordar al tío Ben: _"Un gran poder conlleva una gran responsabilidad"_. NumpyDesde la propia página web:NumPy is the fundamental package for scientific computing with Python. It contains among other things:* a powerful N-dimensional array object* sophisticated (broadcasting) functions* tools for integrating C/C++ and Fortran code* useful linear algebra, Fourier transform, and random number capabilitiesBesides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases. **Idea**: Realizar cálculos numéricos eficientemente. Pandas Desde el repositorio de GitHub:pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with "relational" or "labeled" data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language. It is already well on its way toward this goal. Actualmente cuenta con más de 1200 contribuidores y casi 18000 commits!
###Code
import pandas as pd
pd.__version__
###Output
_____no_output_____
###Markdown
Series Arreglos unidimensionales con etiquetas. Se puede pensar como una generalización de los diccionarios de Python.
###Code
pd.Series?
###Output
_____no_output_____
###Markdown
Para crear una instancia de una serie existen muchas opciones, las más comunes son:* A partir de una lista.* A partir de un _numpy.array_.* A partir de un diccionario.* A partir de un archivo (por ejemplo un csv).
###Code
my_serie = pd.Series(range(3, 33, 3))
my_serie
type(my_serie)
# Presiona TAB y sorpréndete con la cantidad de métodos!
# my_serie.
###Output
_____no_output_____
###Markdown
Las series son arreglos unidemensionales que constan de _data_ e _index_.
###Code
my_serie.values
type(my_serie.values)
my_serie.index
type(my_serie.index)
###Output
_____no_output_____
###Markdown
A diferencia de numpy, pandas ofrece más flexibilidad para los valores e índices.
###Code
my_serie_2 = pd.Series(range(3, 33, 3), index=list('abcdefghij'))
my_serie_2
###Output
_____no_output_____
###Markdown
Acceder a los valores de una serie es muy fácil!
###Code
my_serie_2['b']
my_serie_2.loc['b']
my_serie_2.iloc[1]
###Output
_____no_output_____
###Markdown
```loc```?? ```iloc```??
###Code
# pd.Series.loc?
###Output
_____no_output_____
###Markdown
A modo de resumen:* ```loc``` es un método que hace referencia a las etiquetas (*labels*) del objeto .* ```iloc``` es un método que hace referencia posicional del objeto. **Consejo**: Si quieres editar valores siempre utiliza ```loc``` y/o ```iloc```.
###Code
my_serie_2.loc['d'] = 1000
my_serie_2
###Output
_____no_output_____
###Markdown
Trabajar con fechas Pandas incluso permite que los index sean fechas! Por ejemplo, a continuación se crea una serie con las tendencia de búsqueda de *data science* en Google.
###Code
import os
ds_trend = pd.read_csv(os.path.join('data', 'dataScienceTrend.csv'), index_col=0, squeeze=True)
ds_trend.head(10)
ds_trend.tail(10)
ds_trend.dtype
ds_trend.index
###Output
_____no_output_____
###Markdown
**OJO!** Los valores del Index son _strings_ (_object_ es una generalización). **Solución:** _Parsear_ a elementos de fecha con la función ```pd.to_datetime()```.
###Code
# pd.to_datetime?
ds_trend.index = pd.to_datetime(ds_trend.index, format='%Y-%m-%d')
ds_trend.index
###Output
_____no_output_____
###Markdown
Para otros tipos de _parse_ puedes visitar la documentación [aquí](https://docs.python.org/3/library/datetime.htmlstrftime-and-strptime-behavior). La idea de los elementos de fecha es poder realizar operaciones que resulten naturales para el ser humano. Por ejemplo:
###Code
ds_trend.index.min()
ds_trend.index.max()
ds_trend.index.max() - ds_trend.index.min()
###Output
_____no_output_____
###Markdown
Volviendo a la Serie, podemos trabajar con todos sus elementos, por ejemplo, determinar rápidamente la máxima tendencia.
###Code
max_trend = ds_trend.max()
max_trend
###Output
_____no_output_____
###Markdown
Para determinar el _index_ correspondiente al valor máximo usualmente se utilizan dos formas:* Utilizar una máscara (*mask*)* Utilizar métodos ya implementados
###Code
# Mask
ds_trend[ds_trend == max_trend]
# Built-in method
ds_trend.idxmax()
###Output
_____no_output_____
###Markdown
Dataframes Arreglo bidimensional y extensión natural de una serie. Podemos pensarlo como la generalización de un numpy.array. Utilizando el dataset de los jugadores de la NBA la flexibilidad de pandas se hace mucho más visible. No es necesario que todos los elementos sean del mismo tipo!
###Code
import os
player_data = pd.read_csv(os.path.join('data', 'player_data.csv'), index_col='name')
player_data.head()
player_data.info(memory_usage=True)
type(player_data)
player_data.dtypes
###Output
_____no_output_____
###Markdown
Puedes pensar que un dataframe es una colección de series
###Code
player_data['birth_date'].head()
type(player_data['birth_date'])
###Output
_____no_output_____
###Markdown
Exploración
###Code
player_data.describe()
player_data.describe(include='all')
player_data.max()
###Output
_____no_output_____
###Markdown
Para extraer elementos lo más recomendable es el método loc.
###Code
player_data.loc['Zaid Abdul-Aziz', 'college']
###Output
_____no_output_____
###Markdown
Evita acceder con doble corchete
###Code
player_data['college']['Zaid Abdul-Aziz']
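# Added note (illustrative): the double-bracket access above is "chained indexing". Reading this
# way often works, but assigning through it may modify a temporary copy instead of the original
# DataFrame (pandas emits SettingWithCopyWarning). A single .loc call is the reliable way to
# both read and write a cell:
player_data.loc['Zaid Abdul-Aziz', 'college']  # same value as the chained access above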
###Output
_____no_output_____
###Markdown
Aunque en ocasiones funcione, no se asegura que sea siempre así. [Más info aquí.](https://pandas.pydata.org/pandas-docs/stable/indexing.htmlwhy-does-assignment-fail-when-using-chained-indexing) Valores perdidos/nulos Pandas ofrece herramientas para trabajar con valors nulos, pero es necesario conocerlas y saber aplicarlas. Por ejemplo, el método ```isnull()``` entrega un booleano si algún valor es nulo. Por ejemplo: ¿Qué jugadores no tienen registrado su fecha de nacimiento?
###Code
player_data.index.shape
player_data[player_data['birth_date'].isnull()]
###Output
_____no_output_____
###Markdown
Si deseamos encontrar todas las filas que contengan por lo menos un valor nulo.
###Code
player_data.isnull()
# pd.DataFrame.any?
rows_null_mask = player_data.isnull().any(axis=1) # axis=1 hace referencia a las filas.
rows_null_mask.head()
player_data[rows_null_mask].head()
player_data[rows_null_mask].shape
###Output
_____no_output_____
###Markdown
Para determinar aquellos que no tienen valors nulos el prodecimiento es similar.
###Code
player_data[player_data.notnull().all(axis=1)].head()
###Output
_____no_output_____
###Markdown
¿Te fijaste que para usar estas máscaras es necesario escribir por lo menos dos veces el nombre del objeto? Una buena práctica para generalizar las máscaras consiste en utilizar las funciones ``lambda``
###Code
player_data[lambda df: df.notnull().all(axis=1)].head()
###Output
_____no_output_____
###Markdown
Una función lambda es una función pequeña y anónima. Pueden tomar cualquer número de argumentos pero solo tienen una expresión. Pandas incluso ofrece opciones para eliminar elementos nulos!
###Code
pd.DataFrame.dropna?
# Cualquier registro con null
print(player_data.dropna().shape)
# Filas con elementos nulos
print(player_data.dropna(axis=0).shape)
# Columnas con elementos nulos
print(player_data.dropna(axis=1).shape)
###Output
(4213, 7)
(4213, 7)
(4550, 2)
###Markdown
Ejemplo práctico¿Para cada posición, cuál es la máxima cantidad de tiempo que ha estado un jugador?Un _approach_ para resolver la pregunta anterior tiene los siguientes pasos:1. Determinar el tiempo de cada jugador en su posición.2. Determinar todas las posiciones.3. Iterar sobre cada posición y encontrar el mayor valor.
###Code
# 1. Determinar el tiempo de cada jugador en su posición.
player_data['duration'] = player_data['year_end'] - player_data['year_start']
player_data.head()
# 2. Determinar todas las posiciones.
positions = player_data['position'].unique()
positions
# 3. Iterar sobre cada posición y encontrar el mayor valor.
nba_position_duration = pd.Series(dtype=float)
for position in positions:
df_aux = player_data.loc[lambda x: x['position'] == position]
max_duration = df_aux['duration'].max()
nba_position_duration.loc[position] = max_duration
nba_position_duration
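# Added note (illustrative): the same result can be obtained in one idiomatic pandas step.
# Note that groupby() excludes rows whose position is NaN, while the loop above keeps them.
print(player_data.groupby('position')['duration'].max())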
###Output
_____no_output_____
###Markdown
Resumen* Pandas posee una infinidad de herramientas para trabajar con datos, incluyendo la carga, manipulación, operaciones y filtrado de datos.* La documentación oficial (y StackOverflow) son tus mejores amigos.* La importancia está en darle sentido a los datos, no solo a coleccionarlos. Evaluación Laboratorio * Nombre: * Rol: Instruciones1. Pon tu nombre y rol en la celda superior.2. Debes enviar este **.ipynb** con el siguiente formato de nombre: **```04_data_manipulation_NOMBRE_APELLIDO.ipynb```** con tus respuestas a [email protected] y [email protected] .3. Se evaluara tanto el código como la respuesta en una escala de 0 a 4 con valores enteros.4. La entrega es al final de esta clase. Dataset jugadores NBA (2pts)1. ¿Cuál o cuáles son los jugadores más altos de la NBA?2. Crear un DataFrame llamado ```nba_stats``` donde los índices sean las distintas posiciones y que posea las siguientes columns: - nm_players: Cantidad de jugadores distintos que utilizan esa posición. - mean_duration: Duración de años promedio. - tallest: Mayor altura en cm. - young_birth: Fecha de nacimiento del jugador/es más joven.
###Code
import numpy as np
height_split = player_data['height'].str.split('-')
for player, height_list in height_split.items():
if height_list == height_list:
# Para manejar el caso en que la altura sea nan.
height = int(height_list[0]) * 30.48 + int(height_list[1]) * 2.54
player_data.loc[player, "height_cm"] = height
else:
player_data.loc[player, "height_cm"] = np.nan
max_height = player_data['height_cm'].max()
tallest_player = player_data.loc[lambda x: x['height_cm'] == max_height].index.tolist()
print(tallest_player)
# Castear la fecha de str a objeto datetime
player_data['birth_date_fix'] = pd.to_datetime(player_data['birth_date'], format="%B %d, %Y")
# Crear dataframe con las columnas solicitadas
nba_stats = pd.DataFrame(columns=["nm_players", "mean_duration", "tallest", "young_birth"])
for position in player_data['position'].unique():
if position == position:
# Existen posiciones nan, por lo que hay que tratarlas de manera distinta.
aux_df = player_data.loc[lambda x: x['position'] == position] # Dataframe filtrado
else:
aux_df = player_data.loc[lambda x: x['position'].isnull()]
# Calcular
nm_players = aux_df.index.nunique() # or len(aux_df.index.unique())
mean_duration = aux_df['duration'].mean()
tallest = aux_df['height_cm'].max()
young_birth = aux_df['birth_date_fix'].min()
# Escribir en el dataframe
nba_stats.loc[position, ["nm_players", "mean_duration", "tallest", "young_birth"]] = [nm_players, mean_duration, tallest, young_birth]
nba_stats
###Output
_____no_output_____
###Markdown
Dataset del Gasto Neto Mensualizado por año de las Instituciones Públicas (2pts)Este dataset incluye las cifras (actualizadas a la moneda del año 2017), el gasto ejecutadopor las distintas instituciones en los variados programas del Presupuesto, y desglosadohasta el máximo nivel del clasificador presupuestario. Los montos contemplan el GastoNeto, es decir, integran los gastos que afectan el patrimonio público, excluyendo aquéllosque sólo se traducen en movimientos de activos y pasivos financieros que sirven defuente de financiamiento de los primeros 1. Cargar el dataset ```gasto_fiscal.csv``` que se encuentra en la carpeta ```data``` en un DataFrame llamado **```gasto_fiscal```**. ¿Cuánta MB de memoria está utilizando? ¿Cuáles son las columnas que consumen más y menos memoria? ¿Cuál crees que es la razón?2. Crear un DataFrame llamado ```gasto_fiscal_stats```, donde los _index_ sean cada Partida y las columnas correspondan a: - A la suma total de los montos desde el año 2011 al 2014. - Cantidad de registros con monto igual a cero. - Mes con mayor gasto - Porcentaje del mes con mayor gasto respecto al gasto total.
###Code
gasto_fiscal = None  # NO EVALUADO (left unanswered in the original; see the sketch above for one possible approach)
###Output
_____no_output_____
###Markdown
* gasto_fiscal_mb = FIX ME
* more_memory_columns = []
* less_memory_columns = []
* reason = ''
###Code
gasto_fiscal_stats = None  # NO EVALUADO (left unanswered in the original)
###Output
_____no_output_____ |
notebook/naive-collaborative-filtering.ipynb | ###Markdown
load data
###Code
import numpy as np
import pandas as pd
data_url = 'https://gist.githubusercontent.com/guerbai/3f4964350678c84d359e3536a08f6d3a/raw/f62f26d9ac24d434b1a0be3b5aec57c8a08e7741/user_book_ratings.txt'
df = pd.read_csv(data_url,
sep = ',',
header = None,
names = ['user_id', 'book_id', 'rating'])
print (df.head())
print ('-----')
user_count = df['user_id'].unique().shape[0]
book_count = df['book_id'].unique().shape[0]
print ('user_count: ', user_count)
print ('book_count: ', book_count)
###Output
user_id book_id rating
0 user_001 book_001 4
1 user_001 book_002 3
2 user_001 book_005 5
3 user_002 book_001 5
4 user_002 book_003 4
-----
user_count: 6
book_count: 6
###Markdown
generate user_item_matrix
###Code
user_id_index_series = pd.Series(range(user_count), index=['user_001', 'user_002', 'user_003', 'user_004', 'user_005', 'user_006'])
book_id_index_series = pd.Series(range(book_count), index=['book_001', 'book_002', 'book_003', 'book_004', 'book_005', 'book_006'])
def construct_user_item_matrix(df):
user_item_matrix = np.zeros((user_count, book_count), dtype=np.int8)
for row in df.itertuples():
user_id = row[1]
book_id = row[2]
rating = row[3]
user_item_matrix[user_id_index_series[user_id], book_id_index_series[book_id]] = rating
return user_item_matrix
user_book_matrix = construct_user_item_matrix(df)
print ('user_item_matrix looks like:')
print ('-----')
print (user_book_matrix)
###Output
user_item_matrix looks like:
-----
[[4 3 0 0 5 0]
[5 0 4 0 4 0]
[4 0 5 3 4 0]
[0 3 0 0 0 5]
[0 4 0 0 0 4]
[0 0 2 4 0 5]]
###Markdown
compute similarity_matrix
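The similarity used in the next cell is plain cosine similarity between the raw rating vectors of the user-item matrix (an unrated entry simply stays 0):

$$\operatorname{sim}(\mathbf{u}, \mathbf{v}) = \frac{\mathbf{u} \cdot \mathbf{v}}{\lVert \mathbf{u} \rVert\,\lVert \mathbf{v} \rVert}$$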
###Code
def cosine_similarity(vec1, vec2):
return round(vec1.dot(vec2)/(np.linalg.norm(vec1)*np.linalg.norm(vec2)), 2)
def construct_similarity_matrix(user_item_matrix, dim='user'):
if dim == 'user':
similarity_matrix = np.zeros((user_count, user_count))
count = user_count
else:
similarity_matrix = np.zeros((book_count, book_count))
count = book_count
get_vector = lambda i: user_item_matrix[i] if dim == 'user' else user_item_matrix[:,i]
    for i in range(count):  # use `count` so the loop also covers the dim='book' case
i_vector = get_vector(i)
similarity_matrix[i][i] = cosine_similarity(i_vector, i_vector)
for j in range(i, count):
j_vector = get_vector(j)
similarity = cosine_similarity(i_vector, j_vector)
similarity_matrix[i][j] = similarity
similarity_matrix[j][i] = similarity
return similarity_matrix
user_similarity_matrix = construct_similarity_matrix(user_book_matrix)
book_similarity_matrix = construct_similarity_matrix(user_book_matrix, dim='book')
print ('user_similarity_matrix:')
print (user_similarity_matrix)
print ('book_similarity_matrix:')
print (book_similarity_matrix)
###Output
user_similarity_matrix:
[[ 1. 0.75 0.63 0.22 0.3 0. ]
[ 0.75 1. 0.91 0. 0. 0.16]
[ 0.63 0.91 1. 0. 0. 0.4 ]
[ 0.22 0. 0. 1. 0.97 0.64]
[ 0.3 0. 0. 0.97 1. 0.53]
[ 0. 0.16 0.4 0.64 0.53 1. ]]
book_similarity_matrix:
[[ 1. 0.27 0.79 0.32 0.98 0. ]
[ 0.27 1. 0. 0. 0.34 0.65]
[ 0.79 0. 1. 0.69 0.71 0.18]
[ 0.32 0. 0.69 1. 0.32 0.49]
[ 0.98 0.34 0.71 0.32 1. 0. ]
[ 0. 0.65 0.18 0.49 0. 1. ]]
###Markdown
recommend similar users
###Code
def recommend_similar_users(user_id, n=3):
user_index = user_id_index_series[user_id]
similar_users_index = pd.Series(user_similarity_matrix[user_index]).drop(index=user_index).sort_values(ascending=False).index[:n]
return np.array(similar_users_index)
print ('recommend user_indexes %s to user_001' % recommend_similar_users('user_001'))
###Output
recommend user_indexes [1 2 4] to user_001
###Markdown
recommend similar items
###Code
def recommend_similar_items(item_id, n=3):
item_index = book_id_index_series[item_id]
similar_item_index = pd.Series(book_similarity_matrix[item_index]).drop(index=item_index).sort_values(ascending=False).index[:n]
return np.array(similar_item_index)
print ('recommend item_indexes %s to book_001' % recommend_similar_items('book_001'))
###Output
recommend item_indexes [4 2 3] to book_001
###Markdown
item-based naive cf
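The cell below scores each unseen book as a similarity-weighted average of the ratings the user already gave (the `fenzi` and `fenmu` variables hold the numerator and denominator of this fraction):

$$\hat{r}_{u,b} = \frac{\sum_{b'} \operatorname{sim}(b, b')\; r_{u,b'}}{\sum_{b'} \operatorname{sim}(b, b')}$$

where the sum runs over the books $b'$ the user has already read whose neighbour lists contain the candidate book $b$.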
###Code
def recommend_item_to_user_ib(user_id):
user_index = user_id_index_series[user_id]
user_read_books = np.nonzero(user_book_matrix[user_index])[0]
book_set = set()
book_relation = dict()
for book in user_read_books:
relative_books = recommend_similar_items(book, 2)
book_set = book_set.union(relative_books)
book_relation[book] = relative_books
book_set = book_set.difference(user_read_books)
predict = pd.Series([0.0]*len(book_set), index=list(book_set))
for book in book_set:
        fenzi = 0  # numerator: similarity-weighted sum of the ratings the user already gave
        fenmu = 0  # denominator: sum of those similarities
for similar_book, relative_books in book_relation.items():
if book in relative_books:
fenzi += book_similarity_matrix[book][similar_book] * user_book_matrix[user_index][similar_book]
fenmu += book_similarity_matrix[book][similar_book]
predict[book] = round(fenzi/fenmu, 2)
return predict.sort_values(ascending=False)
recommend_item_to_user_ib('user_001')
###Output
_____no_output_____ |
scripts/tractatus-timeline/tractatus_philosophy.ipynb | ###Markdown
edits to original doc: philosophy.docx
- in word doc, go to Edit > Find > Advanced Find & Replace
- in "Find what" field, type: <[A-Za-z]
- select "Use Wildcards" and (at bottom) Format > Font > Italic
- in "Replace" field, type: ∞^&
- this should insert a "∞" before every italicized word
- & do the same for underlines

new file = philosophy emph.docx
###Code
import docx
from guess_language import guessLanguage
import re
import json
doc = docx.Document("philosophy emph.docx")
all_paras = doc.paragraphs
para_list=[]
for para in all_paras:
text=para.text
text=re.sub("∞","<i>",text)
text=re.sub("¡","<u>",text)
para_list.append(text)
para_text='\n'.join(para_list)
# split into sections
split_mark="¬"
para_text=re.sub("Ms-10",split_mark+"Ms-10",para_text) # insert split marker
sections0=re.split(split_mark,para_text)
sections=[]
for sec in sections0:
if len(sec)>3: # incomplete sections?
sec=re.sub("·","•",sec)
#sec=re.sub("\(|\)","†",sec)
sections.append(sec)
sections.sort()
print(len(sections))
numdot="[0-9]•[0-9]+[^\t]*"
numbrack="[0-9]+\[\w+\]"
cr="[0-9]+[.][0-9]+[.][0-9]+ \([0-9,\s]+\)[*+]*"
# fields and their corresponding regular expressions-
fields=[("manuscript","Ms-.*\(NB\)"), #Ms-101,12r[2] et 13r[1] (1914--0902) (NB)
("pt-number","\n\s*("+numdot+")\t"), #5•3063
("pt-page","\t("+numbrack+")\t"), #52[5]
("tlp-number","\t"+numbrack+"\t\s*("+numdot+")\t"),
("cross-reference","\t"+cr+"\n|\t"+cr+"[\s]+"+cr+"\n"), #22.8.14 (2)* 2.9.14 (1)**
("date","\n\s*([0-9]+[.][0-9]+[.][0-9]+[.]*)")] #2.9.14.
re.findall(fields[4][1],sections[0])
# grab into json
nodes=[]
empties=[]
multiples=[]
sos=[]
sections.sort()
for section in sections:
text=[]
# metadata
s={}
e=[]
m=[]
for field in fields:
line=re.findall(field[1],section)
if len(line)==1:
s[field[0]]=line[0]
elif len(line)==0:
s[field[0]]=""
e.append(s)
elif len(line)>1:
#s[field[0]]=line
#OR
for i in range(len(line)):
s[field[0]+str(i+1)]=line[i]
m.append(s)
if len(e)in[2,3,4]:
new=(section,e)
empties.append(new)
if m!=[]:
new=(section,m[-1])
multiples.append(new)
# translations
subsections=re.split("\n",section)
ger=[]
ger_ind=[]
eng=[]
eng_ind=[]
for subsec in subsections:
if (guessLanguage(subsec) == 'de'):
ger_ind.append(subsections.index(subsec))
elif (guessLanguage(subsec) == 'en'):
eng_ind.append(subsections.index(subsec))
if eng_ind!=[]:
eng_start=sorted(eng_ind)[0]
s["eng"]='\n'.join(subsections[eng_start:])
else:
eng_start=len(subsections)
s["eng"]=""
if ger_ind!=[]:
ger_start=sorted(ger_ind)[0]
s["ger"]='\n'.join(subsections[ger_start:eng_start])
else:
s["ger"]=""
nodes.append(s)
print(len(nodes))
print(len(empties))
print(len(multiples))
with open('stern_hz.json', 'w', encoding='utf8') as f:
json.dump(nodes, f, indent=4, ensure_ascii=False)
# View empties
for e in empties:
print(e[0],'\n',e[1],'\n\n•••\n')
# View multiples
for m in multiples:
print(m[0],'\n',m[1],'\n\n\n')
###Output
Ms-101,31r[6] et 32r[1] (1914--1003) (NB)
4•4482 45[5] 4•462 (1) 3.10.14 (4) 6.6.15 (4)
4•449 45[6] 4•465** 3.10.14 (4) 25.5.15 (4)
Tautologien sagen nichts aus, sie sind nicht Bilder von Sachverhalten: Sie sind / selber logisch vollkommen neutral. (Das logische Produkt einer Tautologie und eines Satzes sagt nicht mehr noch weniger aus als dieser allein.)
Tautologies state nothing, they are are not pictures of states of things. They are themselves logically completely neutral. (The logical product of a tautology and a propostion says neither more nor less than the latter by itself.) [<i>See 4.462 and <i>cf. 4.465.]
{'manuscript': 'Ms-101,31r[6] et 32r[1] (1914--1003) (NB)', 'pt-number1': '4•4482', 'pt-number2': '4•449', 'pt-page1': '45[5]', 'pt-page2': '45[6]', 'tlp-number1': '4•462 (1)', 'tlp-number2': '4•465**', 'cross-reference1': '\t3.10.14 (4) 6.6.15 (4)\n', 'cross-reference2': '\t3.10.14 (4) 25.5.15 (4)\n', 'date': '', 'eng': ' Tautologies state nothing, they are are not pictures of states of things. They are themselves logically completely neutral. (The logical product of a tautology and a propostion says neither more nor less than the latter by itself.) [<i>See 4.462 and <i>cf. 4.465.]\n\n\t\t \n', 'ger': 'Tautologien sagen nichts aus, sie sind nicht Bilder von Sachverhalten: Sie sind / selber logisch vollkommen neutral. (Das logische Produkt einer Tautologie und eines Satzes sagt nicht mehr noch weniger aus als dieser allein.)\n'}
Ms-101,45r[2] (1914--1015) (NB)
3•03 17[5] 3•03 15.10.14 (5)+
4•0711 48[2] 4•031 (1) 29.9.14 (2)** 15.10.14 (5)+
Im Satze stellen wir – sozusagen – <u>zur <u>Probe die Dinge zusammen wie sie sich in Wirklichkeit aber <u>nicht zu verhalten brauchen, wir können aber nicht etwas <u>Unlogisches zusammenstellen denn dazu müßten wir in der Sprache aus der Logik heraus können. – Wenn aber der ganz allgemeine Satz <u>nur „<u>logische Konstante” enthält so kann er für uns nicht mehr sein als – einfach – ein logisches Gebilde und kann nicht mehr tun als uns seine eigenen logischen Eigenschaften zu zeigen. – Wenn es ganz allgemeine Sätze gibt, – <u>was stellen wir in ihnen probeweise zusammen??
In a proposition we – so to speak – <u>try <u>out putting things together as they do <u>not have to be in reality; but we cannot make any <u>unlogical arrangement, for in order to do that we would have to be able to get outside logic within language. – But if the entirely general proposition contains <u>only "<u>logical constants", then it cannot be anything more to us than – simply – a logical formation and cannot do anything more than show us its own logical properties. – If there are entirely general propositions – <u>what do we try out in arranging them? [Cf 4.031 (1) and 3.03.]
{'manuscript': 'Ms-101,45r[2] (1914--1015) (NB)', 'pt-number1': '3•03', 'pt-number2': '4•0711', 'pt-page1': '17[5]', 'pt-page2': '48[2]', 'tlp-number1': '3•03', 'tlp-number2': '4•031 (1)', 'cross-reference1': '\t15.10.14 (5)+\n', 'cross-reference2': '\t29.9.14 (2)** 15.10.14 (5)+\n', 'date': '', 'eng': 'In a proposition we – so to speak – <u>try <u>out putting things together as they do <u>not have to be in reality; but we cannot make any <u>unlogical arrangement, for in order to do that we would have to be able to get outside logic within language. – But if the entirely general proposition contains <u>only "<u>logical constants", then it cannot be anything more to us than – simply – a logical formation and cannot do anything more than show us its own logical properties. – If there are entirely general propositions – <u>what do we try out in arranging them? [Cf 4.031 (1) and 3.03.]\n\n\n', 'ger': 'Im Satze stellen wir – sozusagen – <u>zur <u>Probe die Dinge zusammen wie sie sich in Wirklichkeit aber <u>nicht zu verhalten brauchen, wir können aber nicht etwas <u>Unlogisches zusammenstellen denn dazu müßten wir in der Sprache aus der Logik heraus können. – Wenn aber der ganz allgemeine Satz <u>nur „<u>logische Konstante” enthält so kann er für uns nicht mehr sein als – einfach – ein logisches Gebilde und kann nicht mehr tun als uns seine eigenen logischen Eigenschaften zu zeigen. – Wenn es ganz allgemeine Sätze gibt, – <u>was stellen wir in ihnen probeweise zusammen??\n'}
Ms-101,48r[2] (1914--1017) (NB)
5•323 62[4] 5•526 (1) 17.10.14 (3)* 19.10.14 (3)** 31.5.15 (1,4)+
5•324 63[1] 5•526 (2)* 17.10.14 (3)*
Kann man denn aber nicht die ganze Welt vollständig mit ganz allgemeinen Sätzen beschreiben? (Das Problem zeigt sich von allen Seiten.) Ja, man könnte die Welt vollständig durch ganz allgemeine Sätze beschreiben also ganz ohne irgend einen Namen oder sonst ein bezeichnendes Zeichen zu verwenden. Und um auf die gewöhnliche Sprache zu kommen brauchte man Namen etc. nur dadurch einführen indem man nach einem „(∃x)” sagte „und dieses x ist A” u.s.w.
But can't one describe the whole world completely by means of entirely general propositions? (The problem shows up on all sides.) Yes, one could describe the world completely by entirely general propositions, and hence wholly without using any kind of name or any other referring sign. And in order to arrive at ordinary language one would only need to introduce names etc. by saying after an "(∃x)", "and that x is A" and so on. [<i>Cf. 5.526.]
{'manuscript': 'Ms-101,48r[2] (1914--1017) (NB)', 'pt-number1': '5•323', 'pt-number2': '5•324', 'pt-page': '62[4]', 'tlp-number': '5•526 (1)', 'cross-reference': '\t17.10.14 (3)*\n', 'date': '', 'eng': 'But can\'t one describe the whole world completely by means of entirely general propositions? (The problem shows up on all sides.) Yes, one could describe the world completely by entirely general propositions, and hence wholly without using any kind of name or any other referring sign. And in order to arrive at ordinary language one would only need to introduce names etc. by saying after an "(∃x)", "and that x is A" and so on. [<i>Cf. 5.526.]\n\n \t\t \n', 'ger': 'Kann man denn aber nicht die ganze Welt vollständig mit ganz allgemeinen Sätzen beschreiben? (Das Problem zeigt sich von allen Seiten.) Ja, man könnte die Welt vollständig durch ganz allgemeine Sätze beschreiben also ganz ohne irgend einen Namen oder sonst ein bezeichnendes Zeichen zu verwenden. Und um auf die gewöhnliche Sprache zu kommen brauchte man Namen etc. nur dadurch einführen indem man nach einem „(∃x)” sagte „und dieses x ist A” u.s.w.\n'}
Ms-101,65r[5] (1914--1027) (NB)
No precisely equivalent PT para; = TLP 4.01 (2) (minus a comma)
4•01 8[2] 4•01 (1) 20.9.14 (1)+ 27.9.14 (4)** 27.10.14 (7)**
2•12 4[5] 2•12 (27.10.14 (7)+)
/ \ \ \ ✓ Der Satz ist ein Modell der Wirklichkeit so wie wir sie uns denken.
A proposition is a model of reality as we conceive of it. [<i>See 4.01 (2).]
{'manuscript': 'Ms-101,65r[5] (1914--1027) (NB)', 'pt-number1': '4•01', 'pt-number2': '2•12', 'pt-page1': '8[2]', 'pt-page2': '4[5]', 'tlp-number1': '4•01 (1)', 'tlp-number2': '2•12', 'cross-reference': '', 'date': '', 'eng': 'A proposition is a model of reality as we conceive of it. [<i>See 4.01 (2).]\n\n \t\t \n', 'ger': '/ \\ \\ \\ ✓ \tDer Satz ist ein Modell der Wirklichkeit so wie wir sie uns denken.\n'}
Ms-102,100r[3] (1915--0520) (NB)
20.5.15.
Ein Komplex ist eben ein Ding!
A complex just is a thing!
21.5.15.
{'manuscript': 'Ms-102,100r[3] (1915--0520) (NB)', 'pt-number': '', 'pt-page': '', 'tlp-number': '', 'cross-reference': '', 'date1': '20.5.15.', 'date2': '21.5.15.', 'eng': 'A complex just is a thing!\n\n21.5.15.\n\n \t\t \n', 'ger': 'Ein Komplex ist eben ein Ding! \n'}
Ms-102,120r[2] (1915--0603) (NB)
5•07 12[9] 5•142 3.6.15 (4,6)+
5•08 37[8] 5•143 (1) 5.6.15 (6)+ 3.6.15 (7)+
Aber geht es nicht so?: Wenn p aus q folgt, aber nicht q aus p, dann sagt q mehr als p.
Nun aber folgt aus einer Tautologie gar nichts. ——Sie aber folgt aus jedem Satz.
Analoges gilt von ihrem Gegenteil.
But doesn't it work this way: If p follows from q, but not q from p, then q says more than p?
Now nothing at all follows from a tautology. ——But it follows from every proposition. [<i>Cf. 5.142.]
The analogous point applies to its opposite. [<i>Cf. 5.143 (1).]
{'manuscript': 'Ms-102,120r[2] (1915--0603) (NB)', 'pt-number1': '5•07', 'pt-number2': '5•08', 'pt-page1': '12[9]', 'pt-page2': '37[8]', 'tlp-number1': '5•142', 'tlp-number2': '5•143 (1)', 'cross-reference1': '\t3.6.15 (4,6)+\n', 'cross-reference2': '\t5.6.15 (6)+ 3.6.15 (7)+\n', 'date': '', 'eng': "But doesn't it work this way: If p follows from q, but not q from p, then q says more than p? \n Now nothing at all follows from a tautology. ——But it follows from every proposition. [<i>Cf. 5.142.]\n The analogous point applies to its opposite. [<i>Cf. 5.143 (1).]\n\n \t\t \n", 'ger': 'Aber geht es nicht so?: Wenn p aus q folgt, aber nicht q aus p, dann sagt q mehr als p. \n Nun aber folgt aus einer Tautologie gar nichts. ——Sie aber folgt aus jedem Satz. \n Analoges gilt von ihrem Gegenteil. \n'}
Ms-102,120r[3] et 121r[1] (1915--0603) (NB)
5•08 37[8] 5•143 (1) 5.6.15 (6)+ 3.6.15 (7)+
4•4492 58[6] 4•466 (4)** 3.6.15 (8)**
Aber wie! Wäre da die Kontradiktion nicht der vielsagendste Satz? Aus „p.~p” folgt ja nicht nur „p” sondern auch „~p”! Aus ihnen folgt jeder Satz und sie folgen aus keinem!? Aber ich kann doch aus / einer Kontradiktion nichts schließen, eben <u>weil sie eine Kontradiktion ist!
Aber wenn die Kontradiktion die Klasse <u>aller <u>Sätze ist, so wird die Tautologie das Gemeinsame aller Klassen von Sätzen welche nichts Gemeinsames haben, und verschwindet gänzlich. „p ⌵ ~p” wäre also nur scheinbar ein Zeichen. In Wirklichkeit aber die Auflösung des Satzes.
But how would that work? Won't contradiction be the proposition that says the most then? It’s not only "p" that follows from "p.~p", but also "~p". Every proposition follows from it and it follows from none!? But I surely can’t infer anything from a contradiction, precisely <u>because it is a contradiction!
But if contradiction is the class of <u>all <u>propositions, then tautology becomes that shared feature of any classes of propositions that have nothing in common, and vanishes altogether. "p ⌵ ~p" would then only appear to be a sign. But in reality, the disintegration of the proposition. [<i>Cf. 5.143 (1-2); 4.466 (4).]
{'manuscript': 'Ms-102,120r[3] et 121r[1] (1915--0603) (NB)', 'pt-number1': '5•08', 'pt-number2': '4•4492', 'pt-page1': '37[8]', 'pt-page2': '58[6]', 'tlp-number1': '5•143 (1)', 'tlp-number2': '4•466 (4)**', 'cross-reference1': '\t5.6.15 (6)+ 3.6.15 (7)+\n', 'cross-reference2': '\t3.6.15 (8)**\n', 'date': '', 'eng': 'But how would that work? Won\'t contradiction be the proposition that says the most then? It’s not only "p" that follows from "p.~p", but also "~p". Every proposition follows from it and it follows from none!? But I surely can’t infer anything from a contradiction, precisely <u>because it is a contradiction!\n But if contradiction is the class of <u>all <u>propositions, then tautology becomes that shared feature of any classes of propositions that have nothing in common, and vanishes altogether. "p ⌵ ~p" would then only appear to be a sign. But in reality, the disintegration of the proposition. [<i>Cf. 5.143 (1-2); 4.466 (4).]\n \t\t \n\n', 'ger': 'Aber wie! Wäre da die Kontradiktion nicht der vielsagendste Satz? Aus „p.~p” folgt ja nicht nur „p” sondern auch „~p”! Aus ihnen folgt jeder Satz und sie folgen aus keinem!? Aber ich kann doch aus / einer Kontradiktion nichts schließen, eben <u>weil sie eine Kontradiktion ist! \n Aber wenn die Kontradiktion die Klasse <u>aller <u>Sätze ist, so wird die Tautologie das Gemeinsame aller Klassen von Sätzen welche nichts Gemeinsames haben, und verschwindet gänzlich. „p ⌵ ~p” wäre also nur scheinbar ein Zeichen. In Wirklichkeit aber die Auflösung des Satzes. \n'}
Ms-102,20r[4] et 21r[1] (1914--1105) (NB)
5•2331 51[5] 5•441* 5.11.14 (6)+
5•3031 SchonEnth. 51[6] 5•47 (2)* 5.11.14 (6)** 12.11.14 (5)+
Denn wenn die positive Tatsache φa gegeben ist dann ist auch die <u>Möglichkeit für (x).φx, ~(∃x).φx, ~φ(a) etc. etc. gegeben. (Alle / logischen Konstanten sind bereits im Elementarsatz enthalten.)
For if a positive fact φa is given then so is the <u>possibility of (x).φx, ~(∃x).φx, ~φ(a) etc. etc. (An elementary proposition already contains all logical constants.) [<i>Cf. 5.47 (2).]
{'manuscript': 'Ms-102,20r[4] et 21r[1] (1914--1105) (NB)', 'pt-number1': '5•2331', 'pt-number2': '5•3031 SchonEnth. 51[6]', 'pt-page': '51[5]', 'tlp-number': '5•441*', 'cross-reference1': '\t5.11.14 (6)+\n', 'cross-reference2': '\t5.11.14 (6)** 12.11.14 (5)+\n', 'date': '', 'eng': 'For if a positive fact φa is given then so is the <u>possibility of (x).φx, ~(∃x).φx, ~φ(a) etc. etc. (An elementary proposition already contains all logical constants.) [<i>Cf. 5.47 (2).]\n\n \t\t \n', 'ger': 'Denn wenn die positive Tatsache φa gegeben ist dann ist auch die <u>Möglichkeit für (x).φx, ~(∃x).φx, ~φ(a) etc. etc. gegeben. (Alle / logischen Konstanten sind bereits im Elementarsatz enthalten.) \n'}
Ms-102,47r[3] et 48r[1] (1914--1130) (NB)
30.11.14.
1.12.14.
Der Satz sagt gleichsam: Dieses Bild kann auf diese Weise keinen (oder kann einen) Sachverhalt / darstellen.
The proposition says as it were: This picture cannot (or can) represent a state of things in this way.
{'manuscript': 'Ms-102,47r[3] et 48r[1] (1914--1130) (NB)', 'pt-number': '', 'pt-page': '', 'tlp-number': '', 'cross-reference': '', 'date1': '30.11.14.', 'date2': '1.12.14.', 'eng': 'The proposition says as it were: This picture cannot (or can) represent a state of things in this way. \n\n\n\n \t\t \n', 'ger': 'Der Satz sagt gleichsam: Dieses Bild kann auf diese Weise keinen (oder kann einen) Sachverhalt / darstellen. \n'}
Ms-102,48r[5] et 49r[1] et 50r[1] et 51r[1] et 52r[1] et 53r[1] (1914--1206) (NB)
6•331 72[1] 6•341* 6.12.14 (1,2)
6•34 73[1] 6•342* 6.12.14 (3,4,5)*
6•341 73[2] 6•343 6.12.14 (6)*
6.12.14.
Die Newtonsche Mechanik bringt die Weltbeschreibung / auf eine einheitliche Form.
Denken wir uns eine weiße Fläche auf der unregelmäßige schwarze Flecken wären. Wir sagen nun: Was immer für ein Bild hierdurch entsteht immer werde ich seiner Beschreibung beliebig nahe kommen können indem ich die Fläche mit einem entsprechend feinen quadratischen Netzwerk bedecke und nun von jedem Quadrat sage daß es weiß oder schwarz ist. Ich werde auf diese Weise die Beschreibung dieser Fläche auf eine einheitliche Form gebracht haben. Diese Form ist beliebig denn ich hätte mit dem / gleichen Erfolge ein dreieckiges oder sechseckiges Netz verwenden können. Es kann sein daß die Beschreibung mit Hilfe eines dreieckigen Netzes einfacher geworden wäre d.h. daß wir die Fläche mit einem gröberen Dreiecksnetz genauer beschreiben könnten als mit einem feineren quadratischen (oder umgekehrt) etc. Den verschiedenen Netzen entsprechen verschiedene Systeme der Weltbeschreibung.
Die Mechanik bestimmt die Form der Weltbeschreibung indem sie sagt: Alle Sätze der Weltbeschreibung / müssen aus einer Anzahl gegebener Sätze – den mechanischen Axiomen – auf eine gegebene Art & Weise erhalten werden können. Hierdurch liefert sie die Bausteine zum Bau des wissenschaftlichen Gebäudes und sagt: Welches Gebäude Du immer aufführen willst jedes mußt Du irgendwie mit diesen & nur diesen Bausteinen zusammenbringen.
Wie man mit dem Zahlensystem jede beliebige Anzahl muß hinschreiben können so muß man mit dem System der Mechanik jeden beliebigen Satz der Physik / hinschreiben können.
Und hier sehen wir nun die gegenseitige Stellung von Logik & Mechanik.
(Man könnte das Netz auch aus verschiedenartigen Figuren bestehen lassen.)
Daß sich ein Bild wie das vorhin erwähnte durch ein Netz von gegebener Form beschreiben läßt, sagt über das Bild nichts aus (denn dies gilt für jedes solche Bild). Das aber charakterisiert das Bild daß es sich durch ein bestimmtes Netz, von <u>bestimmter Feinheit, beschreiben läßt. So auch sagt es nichts über die Welt aus daß sie sich durch die Newtonsche Mechanik / beschreiben läßt; aber wohl daß sie sich so durch jene beschreiben läßt wie dies eben der Fall ist. (Dies habe ich schon seit <u>langer Zeit gefühlt.) – Auch das sagt etwas von der Welt daß sie sich durch die eine Mechanik einfacher beschreiben läßt als durch die andere.
Die Mechanik ist <u>ein Versuch alle Sätze welche wir zur Weltbeschreibung benötigen nach <u>einem Plan zu konstruieren. (Die unsichtbaren Massen Hertz's.)
Die unsichtbaren Massen Hertz's sind <u>eingestandenermaßen Scheingegenstände.
Newtonian mechanics introduces uniformity into world description.
Imagine a white surface with irregular black spots. We then say: whatever kind of picture develops from these, I could always get as near as I like to its description, by covering the surface with a sufficiently fine square mesh and going on to say of every square that it is white or black. In this way I will have managed to introduce uniformity into my description of the surface. The form is arbitrary, because I could have just as well applied a triangular or hexagonal net. It may be that the description using a triangular net would have been simpler, that is, we might be able to describe the surface more accurately with a coarser triangular mesh than with a finer square mesh or vice versa, etc. Different systems of describing the world correspond to different nets.
Mechanics specifies a form of world description by saying: all propositions used in a description of the world must be obtainable in a given way from a number of given propositions - the mechanical axioms. In this way it supplies the building blocks for constructing the edifice of science, and says: Whatever building you want to erect, it must somehow be assembled from these, and only these, building blocks.
Just as I must be able to write down any arbitrary number by means of the number system, so I must be able to write down any arbitrary proposition of physics by means of the system of mechanics. [<i>See. 6.341.]
And now we can see the position in which logic and mechanics stand to each other.
(One might also arrange for the net to consist of different kinds of mesh.)
That a picture of the above-mentioned kind can be described by a net of a given form tells us nothing about the picture (for this holds of every such picture.) But what does characterize a picture is that it can be described by a specific net of a <u>specific fineness. Likewise, that the world can be described by Newtonian mechanics tells us nothing about the world; but what does tell us something about it is that the world can be described in precisely this way. (I have felt this for a <u>long time.) – We are also told something about the world by the fact that it can be described more simply by means of one system of mechanics than by means of another. [<i>Cf. 6.342.]
Mechanics is <u>one attempt to construct according to a <u>single plan all propositions we need for a description of the world. Hertz’s invisible masses.) [<i>Cf. 6.343.]
Hertz’s invisible masses are admittedly pseudo-objects.
{'manuscript': 'Ms-102,48r[5] et 49r[1] et 50r[1] et 51r[1] et 52r[1] et 53r[1] (1914--1206) (NB)', 'pt-number1': '6•331', 'pt-number2': '6•34', 'pt-number3': '6•341', 'pt-page1': '72[1]', 'pt-page2': '73[1]', 'pt-page3': '73[2]', 'tlp-number1': '6•341*', 'tlp-number2': '6•342*', 'tlp-number3': '6•343', 'cross-reference1': '\t6.12.14 (1,2)\n', 'cross-reference2': '\t6.12.14 (3,4,5)*\n', 'cross-reference3': '\t6.12.14 (6)*\n', 'date': '6.12.14.', 'eng': 'Newtonian mechanics introduces uniformity into world description. \nImagine a white surface with irregular black spots. We then say: whatever kind of picture develops from these, I could always get as near as I like to its description, by covering the surface with a sufficiently fine square mesh and going on to say of every square that it is white or black. In this way I will have managed to introduce uniformity into my description of the surface. The form is arbitrary, because I could have just as well applied a triangular or hexagonal net. It may be that the description using a triangular net would have been simpler, that is, we might be able to describe the surface more accurately with a coarser triangular mesh than with a finer square mesh or vice versa, etc. Different systems of describing the world correspond to different nets. \nMechanics specifies a form of world description by saying: all propositions used in a description of the world must be obtainable in a given way from a number of given propositions - the mechanical axioms. In this way it supplies the building blocks for constructing the edifice of science, and says: Whatever building you want to erect, it must somehow be assembled from these, and only these, building blocks.\nJust as I must be able to write down any arbitrary number by means of the number system, so I must be able to write down any arbitrary proposition of physics by means of the system of mechanics. [<i>See. 6.341.]\nAnd now we can see the position in which logic and mechanics stand to each other.\n(One might also arrange for the net to consist of different kinds of mesh.) \nThat a picture of the above-mentioned kind can be described by a net of a given form tells us nothing about the picture (for this holds of every such picture.) But what does characterize a picture is that it can be described by a specific net of a <u>specific fineness. Likewise, that the world can be described by Newtonian mechanics tells us nothing about the world; but what does tell us something about it is that the world can be described in precisely this way. (I have felt this for a <u>long time.) – We are also told something about the world by the fact that it can be described more simply by means of one system of mechanics than by means of another. [<i>Cf. 6.342.]\nMechanics is <u>one attempt to construct according to a <u>single plan all propositions we need for a description of the world. Hertz’s invisible masses.) [<i>Cf. 6.343.]\nHertz’s invisible masses are admittedly pseudo-objects.\n\n \t\t \n', 'ger': "Die Newtonsche Mechanik bringt die Weltbeschreibung / auf eine einheitliche Form. \nDenken wir uns eine weiße Fläche auf der unregelmäßige schwarze Flecken wären. Wir sagen nun: Was immer für ein Bild hierdurch entsteht immer werde ich seiner Beschreibung beliebig nahe kommen können indem ich die Fläche mit einem entsprechend feinen quadratischen Netzwerk bedecke und nun von jedem Quadrat sage daß es weiß oder schwarz ist. Ich werde auf diese Weise die Beschreibung dieser Fläche auf eine einheitliche Form gebracht haben. 
Diese Form ist beliebig denn ich hätte mit dem / gleichen Erfolge ein dreieckiges oder sechseckiges Netz verwenden können. Es kann sein daß die Beschreibung mit Hilfe eines dreieckigen Netzes einfacher geworden wäre d.h. daß wir die Fläche mit einem gröberen Dreiecksnetz genauer beschreiben könnten als mit einem feineren quadratischen (oder umgekehrt) etc. Den verschiedenen Netzen entsprechen verschiedene Systeme der Weltbeschreibung. \nDie Mechanik bestimmt die Form der Weltbeschreibung indem sie sagt: Alle Sätze der Weltbeschreibung / müssen aus einer Anzahl gegebener Sätze – den mechanischen Axiomen – auf eine gegebene Art & Weise erhalten werden können. Hierdurch liefert sie die Bausteine zum Bau des wissenschaftlichen Gebäudes und sagt: Welches Gebäude Du immer aufführen willst jedes mußt Du irgendwie mit diesen & nur diesen Bausteinen zusammenbringen. \n Wie man mit dem Zahlensystem jede beliebige Anzahl muß hinschreiben können so muß man mit dem System der Mechanik jeden beliebigen Satz der Physik / hinschreiben können. \nUnd hier sehen wir nun die gegenseitige Stellung von Logik & Mechanik. \n(Man könnte das Netz auch aus verschiedenartigen Figuren bestehen lassen.) \nDaß sich ein Bild wie das vorhin erwähnte durch ein Netz von gegebener Form beschreiben läßt, sagt über das Bild nichts aus (denn dies gilt für jedes solche Bild). Das aber charakterisiert das Bild daß es sich durch ein bestimmtes Netz, von <u>bestimmter Feinheit, beschreiben läßt. So auch sagt es nichts über die Welt aus daß sie sich durch die Newtonsche Mechanik / beschreiben läßt; aber wohl daß sie sich so durch jene beschreiben läßt wie dies eben der Fall ist. (Dies habe ich schon seit <u>langer Zeit gefühlt.) – Auch das sagt etwas von der Welt daß sie sich durch die eine Mechanik einfacher beschreiben läßt als durch die andere. \nDie Mechanik ist <u>ein Versuch alle Sätze welche wir zur Weltbeschreibung benötigen nach <u>einem Plan zu konstruieren. (Die unsichtbaren Massen Hertz's.) \nDie unsichtbaren Massen Hertz's sind <u>eingestandenermaßen Scheingegenstände.\n"}
Ms-102,54r[2] (1914--1208) (NB)
8.12.14.
Hinter unseren Gedanken, wahren & falschen, liegt immer wieder ein dunkler Grund, den wir erst später in's Licht ziehen, & als einen Gedanken aussprechen können.
Behind our thoughts, true and false, there is always a dark background, which we can only later bring into the light and express as a thought.
12.12.14.
{'manuscript': 'Ms-102,54r[2] (1914--1208) (NB)', 'pt-number': '', 'pt-page': '', 'tlp-number': '', 'cross-reference': '', 'date1': '8.12.14.', 'date2': '12.12.14.', 'eng': 'Behind our thoughts, true and false, there is always a dark background, which we can only later bring into the light and express as a thought.\n12.12.14.\n\n \t\t \n', 'ger': "Hinter unseren Gedanken, wahren & falschen, liegt immer wieder ein dunkler Grund, den wir erst später in's Licht ziehen, & als einen Gedanken aussprechen können. \n"}
Ms-102,55r[2] (1914--1214? --1215?) (NB)
14.12.14.
15.12.14.
Es ist offenbar: wir können als Schriftzeichen der ab-Funktionen einführen welche wir wollen das eigentliche Zeichen wird sich automatisch bilden. Und welche Eigenschaften werden sich hierbei von selbst herausbilden?
It’s obvious that we can introduce whatever we like as characters for the ab-functions, and that the real sign will provide itself automatically. And what properties will evolve on their own while doing so?
{'manuscript': 'Ms-102,55r[2] (1914--1214? --1215?) (NB)', 'pt-number': '', 'pt-page': '', 'tlp-number': '', 'cross-reference': '', 'date1': '14.12.14.', 'date2': '15.12.14.', 'eng': 'It’s obvious that we can introduce whatever we like as characters for the ab-functions, and that the real sign will provide itself automatically. And what properties will evolve on their own while doing so?\n\n \t\t \n', 'ger': 'Es ist offenbar: wir können als Schriftzeichen der ab-Funktionen einführen welche wir wollen das eigentliche Zeichen wird sich automatisch bilden. Und welche Eigenschaften werden sich hierbei von selbst herausbilden? \n'}
Ms-102,59r[5] et 60r[1] (1915—0114? --0115?) (NB)
14.1.15.
15.1.15.
Das Satzzeichen „p ⌵ q” stimmt / wenn p der Fall ist, wenn q der Fall ist und wenn beide der Fall sind anderenfalls stimmt es nicht: dies scheint unendlich einfach zu sein; und <u>so einfach wird die Lösung sein.
The propositional sign "p ⌵ q" is right if p is the case, if q is the case, and if both are the case, otherwise it is wrong. This seems to be immensely simple; and the solution will be <u>as simple as this.
{'manuscript': 'Ms-102,59r[5] et 60r[1] (1915—0114? --0115?) (NB)', 'pt-number': '', 'pt-page': '', 'tlp-number': '', 'cross-reference': '', 'date1': '14.1.15.', 'date2': '15.1.15.', 'eng': 'The propositional sign "p ⌵ q" is right if p is the case, if q is the case, and if both are the case, otherwise it is wrong. This seems to be immensely simple; and the solution will be <u>as simple as this.\n\n \t\t \n', 'ger': 'Das Satzzeichen „p ⌵ q” stimmt / wenn p der Fall ist, wenn q der Fall ist und wenn beide der Fall sind anderenfalls stimmt es nicht: dies scheint unendlich einfach zu sein; und <u>so einfach wird die Lösung sein. \n'}
Ms-102,81r[2] (1915--0430) (NB)
5•04103 61[2] 5•124 28.4.15 (5)+ 30.4.15 (2)+
5•04105 61[4] 5•1241 (3) 30.4.15 (3)+
p wird von allen Sätzen bejaht aus denen es folgt.
Jeder Satz der p widerspricht verneint p.
p is affirmed by all propositions from which it follows. [<i>See 5.124.]
Every proposition that contradicts p denies p. [<i>See 5.1241 (3).]
{'manuscript': 'Ms-102,81r[2] (1915--0430) (NB)', 'pt-number1': '5•04103', 'pt-number2': '5•04105', 'pt-page1': '61[2]', 'pt-page2': '61[4]', 'tlp-number1': '5•124', 'tlp-number2': '5•1241 (3)', 'cross-reference1': '\t28.4.15 (5)+ 30.4.15 (2)+\n', 'cross-reference2': '\t30.4.15 (3)+\n', 'date': '', 'eng': 'p is affirmed by all propositions from which it follows. [<i>See 5.124.]\nEvery proposition that contradicts p denies p. [<i>See 5.1241 (3).]\n\n \t\t \n', 'ger': 'p wird von allen Sätzen bejaht aus denen es folgt. \n Jeder Satz der p widerspricht verneint p. \n'}
Ms-102,99r[2] (1915--0517? --0518?) (NB)
4•0101 42[5] 4•015 18.5.15 (1)
17.5.15.
18.5.15.
Die Möglichkeit aller Gleichnisse, der ganzen Bildhaftigkeit unserer Ausdrucksweise, ruht in der Logik der Abbildung.
The possibility of all similes, of all the pictoriality of our language, reposes in the logic of depiction. [4.015.]
{'manuscript': 'Ms-102,99r[2] (1915--0517? --0518?) (NB)', 'pt-number': '4•0101', 'pt-page': '42[5]', 'tlp-number': '4•015', 'cross-reference': '\t18.5.15 (1)\n', 'date1': '17.5.15.', 'date2': '18.5.15.', 'eng': 'The possibility of all similes, of all the pictoriality of our language, reposes in the logic of depiction. [4.015.]\n\n \t\t \n', 'ger': 'Die Möglichkeit aller Gleichnisse, der ganzen Bildhaftigkeit unserer Ausdrucksweise, ruht in der Logik der Abbildung. \n'}
Ms-103,23r[3] (1916--0712? --0713?) (NB)
12.7.16.
13.7.16.
Immer wieder fühlt man daß auch im Elementarsatz von allen Gegenständen die Rede ist.
(∃x) . φx . x = a
{'manuscript': 'Ms-103,23r[3] (1916--0712? --0713?) (NB)', 'pt-number': '', 'pt-page': '', 'tlp-number': '', 'cross-reference': '', 'date1': '12.7.16.', 'date2': '13.7.16.', 'eng': '', 'ger': 'Immer wieder fühlt man daß auch im Elementarsatz von allen Gegenständen die Rede ist. \n (∃x) . φx . x = a \n\n \t\t \n'}
Ms-103,24r[2] (1916--0713) (NB)
5•3003 83[7] 5•503+ 13.7.16 (4)
5•503 116[5] 5•503* 13.7.16 (4) [Vgl. 83[7]]
/ \ φx, ψy ∣ χz, (∃x) ∙ , (x) ∙
Da sich offenbar leicht erklären läßt wie mit diesen Operationen sich Sätze bilden lassen und wie Sätze nicht zu bilden sind so muß sich dies auch <u>irgendwie exakt ausdrücken lassen.
{'manuscript': 'Ms-103,24r[2] (1916--0713) (NB)', 'pt-number1': '5•3003', 'pt-number2': '5•503', 'pt-page1': '83[7]', 'pt-page2': '116[5]', 'tlp-number1': '5•503+ ', 'tlp-number2': '5•503* ', 'cross-reference': '\t13.7.16 (4)\n', 'date': '', 'eng': '', 'ger': 'Da sich offenbar leicht erklären läßt wie mit diesen Operationen sich Sätze bilden lassen und wie Sätze nicht zu bilden sind so muß sich dies auch <u>irgendwie exakt ausdrücken lassen. \n\n \t\t \n'}
Ms-103,2r[2] (1916--0416) (NB)
4•4303 46[6] 4•5 (3)b** 16.4.16 (1)+ 21.11.16 (1)+ ???
4•43012 78[11] 4•5 (3a) 16.4.16 (1)+ 21.11.16 (1)+ ???
<u>Jeder einfache Satz läßt sich auf die Form φx bringen.
{'manuscript': 'Ms-103,2r[2] (1916--0416) (NB)', 'pt-number1': '4•4303', 'pt-number2': '4•43012', 'pt-page1': '46[6]', 'pt-page2': '78[11]', 'tlp-number1': '4•5 (3)b**', 'tlp-number2': '4•5 (3a) ', 'cross-reference': '', 'date': '', 'eng': '', 'ger': '<u>Jeder einfache Satz läßt sich auf die Form φx bringen. \n\n \t\t \n'}
Ms-103,42r[5] (1916--0808? --0811?) (NB)
8.8.16.
11.8.16.
Jedem Gegenstand stehe ich objektiv gegenüber. Dem Ich nicht.
|
src/GPT2-Simple.ipynb | ###Markdown
Downloading GPT-2

If you're retraining a model on new text, you need to download the GPT-2 model first. There are three released sizes of GPT-2:
* `124M` (default): the "small" model, 500MB on disk.
* `355M`: the "medium" model, 1.5GB on disk.
* `774M`: the "large" model, cannot currently be finetuned with Colaboratory but can be used to generate text from the pretrained model (see later in Notebook)

Larger models have more knowledge, but take longer to finetune and longer to generate text. You can specify which base model to use by changing `model_name` in the cells below.
###Code
import gpt_2_simple as gpt2  # import assumed here; this is the package behind the notebook's gpt2.* calls

gpt2.download_gpt2(model_name="355M")
file_name = "../input/LesMiserables.txt"
###Output
_____no_output_____
###Markdown
Finetune GPT-2The next cell will start the actual finetuning of GPT-2. It creates a persistent TensorFlow session which stores the training config, then runs the training for the specified number of `steps`. (to have the finetuning run indefinitely, set `steps = -1`)The model checkpoints will be saved in `/checkpoint/run1` by default. The checkpoints are saved every 500 steps (can be changed) and when the cell is stopped.The training might time out after 4ish hours; make sure you end training and save the results so you don't lose them!**IMPORTANT NOTE:** If you want to rerun this cell, **restart the VM first** (Runtime -> Restart Runtime). You will need to rerun imports but not recopy files.Other optional-but-helpful parameters for `gpt2.finetune`:* **`restore_from`**: Set to `fresh` to start training from the base GPT-2, or set to `latest` to restart training from an existing checkpoint.* **`sample_every`**: Number of steps to print example output* **`print_every`**: Number of steps to print training progress.* **`learning_rate`**: Learning rate for the training. (default `1e-4`, can lower to `1e-5` if you have <1MB input data)* **`run_name`**: subfolder within `checkpoint` to save the model. This is useful if you want to work with multiple models (will also need to specify `run_name` when loading the model)* **`overwrite`**: Set to `True` if you want to continue finetuning an existing model (w/ `restore_from='latest'`) without creating duplicate copies. **EXTRA IMPORTANT NOTE**: If running the notebook in Google Colab, it is advised to mount your Drive to be able to save checkpoints and re-start from where you stopped in case the notebook crashes or times out.
###Code
gpt2.mount_gdrive()
sess = gpt2.start_tf_sess()
%%time
gpt2.finetune(sess,
dataset=file_name,
model_name='355M',
steps=2000,
restore_from='fresh', #change to 'latest' if restarting fine-tuning from saved checkpoint
checkpoint_dir = "drive/My Drive/gpt2/checkpoint", #if wanting to save checkpoints on Drive
run_name='run1',
print_every=10,
learning_rate=1e-4,
sample_every=500,
save_every=1000, #if the fine-tuning is very slow, lower this number
overwrite=True)
###Output
Loading checkpoint models/355M/model.ckpt
###Markdown
The next cell will allow you to load the retrained model checkpoint + metadata necessary to generate text.**IMPORTANT NOTE:** If you want to rerun this cell, **restart the VM first** (Runtime -> Restart Runtime). You will need to rerun imports but not recopy files.
###Code
gpt2.reset_session(sess)
del sess
sess = gpt2.start_tf_sess()
%%time
gpt2.load_gpt2(sess, run_name='run1')
###Output
Loading checkpoint checkpoint/run1/model-2000
CPU times: user 10.5 s, sys: 512 ms, total: 11 s
Wall time: 10.9 s
###Markdown
Generate Text From The Trained Model

After you've trained the model or loaded a retrained model from checkpoint, you can now generate text. `generate` generates a single text from the loaded model.
###Code
gpt2.generate(sess, run_name='run1')
%%time
text = gpt2.generate(sess, return_as_list=True)[0]
###Output
_____no_output_____
###Markdown
If you're creating an API based on your model and need to pass the generated text elsewhere, you can do `text = gpt2.generate(sess, return_as_list=True)[0]`

You can also pass in a `prefix` to the generate function to force the text to start with a given character sequence and generate text from there (good if you add an indicator when the text starts).

You can also generate multiple texts at a time by specifying `nsamples`. Unique to GPT-2, you can pass a `batch_size` to generate multiple samples in parallel, giving a massive speedup (in Colaboratory, set a maximum of 20 for `batch_size`).

Other optional-but-helpful parameters for `gpt2.generate` and friends:
* **`length`**: Number of tokens to generate (default 1023, the maximum)
* **`temperature`**: The higher the temperature, the crazier the text (default 0.7, recommended to keep between 0.7 and 1.0)
* **`top_k`**: Limits the generated guesses to the top *k* guesses (default 0 which disables the behavior; if the generated output is super crazy, you may want to set `top_k=40`)
* **`top_p`**: Nucleus sampling: limits the generated guesses to a cumulative probability. (gets good results on a dataset with `top_p=0.9`)
* **`truncate`**: Truncates the input text until a given sequence, excluding that sequence (e.g. if `truncate='<|endoftext|>'`, the returned text will include everything before the first `<|endoftext|>`). It may be useful to combine this with a smaller `length` if the input texts are short.
* **`include_prefix`**: If using `truncate` and `include_prefix=False`, the specified `prefix` will not be included in the returned text.
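As a hedged illustration of how several of these options combine (the prefix, truncate marker and sampling settings below are arbitrary choices, not values taken from this notebook):

```python
# Illustrative only: prefix/truncate values and sampling settings are assumptions.
texts = gpt2.generate(sess,
                      run_name='run1',
                      length=200,
                      temperature=0.8,
                      top_p=0.9,
                      prefix="- Monsieur",   # start each sample like a line of dialogue
                      truncate="\n\n",       # stop each sample at the first blank line
                      include_prefix=True,
                      nsamples=5,
                      batch_size=5,          # must divide nsamples evenly
                      return_as_list=True)

for i, sample in enumerate(texts):
    print(f"--- sample {i} ---\n{sample}\n")
```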
###Code
gpt2.generate(sess,
length=500,
temperature=0.7,
top_k=50,
prefix="- Monsieur")
!du -h checkpoint/
###Output
1.4G checkpoint/run1
1.4G checkpoint/
###Markdown
For bulk generation, you can generate a large amount of text to a file and sort out the samples locally on your computer. The next cell will generate a generated text file with a unique timestamp.You can rerun the cells as many times as you want for even more generated texts!
###Code
from datetime import datetime      # needed for the timestamped filename below
from google.colab import files     # Colab helper used at the end of this cell (assumes Colab)

gen_file = 'gpt2_gentext_{:%Y%m%d_%H%M%S}.txt'.format(datetime.utcnow())
gpt2.generate_to_file(sess,
destination_path=gen_file,
length=500,
temperature=0.7,
nsamples=100,
batch_size=20
)
# may have to run twice to get file to download
files.download(gen_file)
###Output
_____no_output_____
###Markdown
Download model:
###Code
!tar czvf ./model-lesmiserables.tar.gz checkpoint/run1/
!ls  # the original cell was truncated to a bare "!"; `ls` matches the directory listing shown in the output
###Output
__notebook_source__.ipynb model-lesmiserables.tar.gz samples
checkpoint models
|
examples/regression-insurance/4-Tuning.ipynb | ###Markdown
Table of Contents- **Getting Started** - Set Up Environment - Import Data- **GridSearchModelTuner** - Split Data - Find Best Hyper-Params Via Tuner - Evaluate- **Retrain & Evaluate** - Set Up `ModelTrainer` - Set Up Hyper-Parameters - Train & Evaluate Note: this notebook is meant to be a demo of some of the capabilities of **`oo-learning`** (https://github.com/shane-kercheval/oo-learning); it is not meant to show the best approach to exploring/cleaning/modeling this particular dataset. Also, with most graphs (e.g. correlations/box-plots/etc.) I will spend very little time commenting on the significance of any interesting patterns. Again, the intent is to show a demo, not a guide to data analysis.
###Code
# !pip install oolearning --upgrade
from oolearning import *
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
pd.set_option('display.max_colwidth', -1)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
width = 10
plt.rcParams['figure.figsize'] = [width, width/1.333]
###Output
_____no_output_____
###Markdown
Import Data `ExploreRegressionDataset` is a convenience class described in the [first notebook of this series](https://github.com/shane-kercheval/oo-learning/blob/master/examples/regression-insurance/1-Exploring.ipynb).
###Code
csv_file = '../data/insurance.csv'
target_variable = 'expenses'
explore = ExploreRegressionDataset.from_csv(csv_file_path=csv_file,
target_variable=target_variable)
explore.dataset.head()
###Output
_____no_output_____
###Markdown
`GridSearchModelTuner` A **`GridSearchModelTuner`** is used to search for the best hyper-parameters for a given model. Specifically, a `GridSearchModelTuner` uses a `Resampler` object (along with `Scores` objects) in order evaluate the performance of each hyper-paremeter combination. So for example, with the `RepeatedCrossValidationResampler` (with, let's say, 5 folds and 5 repeats), each hyper-parameter combination uses the Resampler object (actually, it is cloned and reused for each combination), and so each hyper-parameter combination is resampled/trained `25` (5x5) times. Each Score's value is recorded for each of the 25 times the model/params are trained/evaluated. So if we want to track 2 scores, perhaps `MAE` and `RMSE`; then each hyper-parameter combination will have 25 `MAE` values and 25 `RMSE` values.We can then compare the performance between the combinations to choose the best combination, or use the data to inform us about different combinations we might want to try. Split Data First, let's split the data into training/holdout sets, so that we can find the best Hyper-Parameters on the training set and then evaluate on the holdout set.However, we'll actually ignore the holdout indexes, since we will use a Trainer to do the work for us of refitting the final model on the training data and predicting/evaluating on the holdout data. If we use the same splitter in the Trainer, the split will be the same (i.e. same training data used in the Tuner will be used to refit the final model).
###Code
splitter = RegressionStratifiedDataSplitter(holdout_ratio=0.20) # set aside 20% of the data for the holdout set
training_indexes, _ = splitter.split(target_values=explore.dataset.expenses) # ignore holdout indexes
training_y = explore.dataset.iloc[training_indexes][target_variable]
training_x = explore.dataset.iloc[training_indexes].drop(columns=target_variable)
training_x.shape
training_y.hist()
###Output
_____no_output_____
###Markdown
Find Best Hyper-Params via Tuner Now, let's use a **`GridSearchModelTuner`** object to search for the best hyper-parameters for a given model, by evaluating various hyper-parameter combinations.
###Code
# define/configure the resampler
resampler = RepeatedCrossValidationResampler(model=ElasticNetRegressor(),
transformations=[DummyEncodeTransformer(CategoricalEncoding.DUMMY)],
scores=[MaeScore(), RmseScore()],
folds=5,
repeats=5)
# define/configure the GridSearchModelTuner
tuner = GridSearchModelTuner(resampler=resampler, hyper_param_object=ElasticNetRegressorHP())
###Output
_____no_output_____
###Markdown
We need to define the combinations of hyper-parameters we want to try out and evalaute.We'll use a dictionary and a **`HyperParamsGrid`** object to do this.
###Code
# define the combinations of hyper-params that we want to evaluate
params_dict = dict(alpha=[0.01, 0.1, 1],
l1_ratio=[0, 0.5, 1])
grid = HyperParamsGrid(params_dict=params_dict)
###Output
_____no_output_____
###Markdown
We'll pass the **`HyperParamsGrid`** object to the tuner, which will tell the tuner which combinations of hyper-params to try. We can preview the combinations with **`.params_grid()`**
###Code
grid.params_grid
tuner.tune(data_x=training_x, data_y=training_y, params_grid=grid)
###Output
/Users/shanekercheval/anaconda3/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:491: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
/Users/shanekercheval/anaconda3/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:491: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
/Users/shanekercheval/anaconda3/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:491: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
/Users/shanekercheval/anaconda3/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:491: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
/Users/shanekercheval/anaconda3/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:491: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
/Users/shanekercheval/anaconda3/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:491: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
###Markdown
Evaluate We can see the raw results of the `GridSearchModelTuner` via **`tuner.results.resampled_stats`**.Each hyper-parameter combination that was evaluated is represented by a row, with the columns either representing a particular hyper-parameter, or a Score's statistic (e.g. mean, standard deviation, or coefficient of variation). Each Score will be represented with three columns: the **mean** of the resampled Score values (i.e. the mean of the 25 Score values associated with each fold/repeat of the Resampler), the **standard deviation** of the resampled Score values, and the **coefficient of variation** of the values.
###Code
tuner.results.resampled_stats
###Output
_____no_output_____
###Markdown
We can interpret the results more easily by visualizing with a heatmap, below. The hyper-parameter combination that is considered the "best" has a red label on the y-axis. The "best" combination is the one that has the "best" Score (i.e. the Scores that were passed into the Tuner during object creation). If multiple Score objects were passed in, then the first in the list is used (in this case, the first Score we specified above was `MaeScore`). Remember, a Score can either be associated with a **utility** function or a **cost** function. A `utility` function (e.g. `AUC`) has score values where higher values represent a better score. A `cost` function (e.g. `error rate`) has score values where lower values represent a better score. This is represented in the heatmap with green (for `utility`) and blue (for `cost`) colors. In both cases, darker colors represent "better" scores. So in the case of `AUC`, higher numbers are better, and associated with darker colors. In the case of `error rate` (or `MAE`/`RMSE`/etc.), lower numbers are better and are also associated with darker numbers. The columns representing `standard deviation` (`SD`) and `coefficient of variation` (`CV`) always have darker colors associated with lower numbers, because lower `SD`s and `CV`s represent more consistent results, which is better.
###Code
tuner.results.plot_resampled_stats()
###Output
_____no_output_____
###Markdown
In addition to the heatmap, we can also visualize the Resampled score values for a specific metric.For example, each boxplot below represents the 25 (i.e. 5 fold, 5 repeats) MAE values associated with each of the hyper-parameter combinations that were evaluated.
###Code
tuner.results.plot_resampled_scores(metric=Metric.MEAN_ABSOLUTE_ERROR)
###Output
_____no_output_____
###Markdown
In addition to the results, we can also access how long it took to resample (5 folds, 5 repeats) each of the combinations.
###Code
tuner.results.resampler_times
###Output
_____no_output_____
###Markdown
We can also get the metrics associated with the "best" combination. As explained above, the hyper-parameter combination that is considered the "best" is the one that has the "best" Score object that was passed into the Tuner. If multiple Score objects were passed in, then the first in the list is used (in this case, the first Score we specified above was `MaeScore`).
###Code
tuner.results.best_model
###Output
_____no_output_____
###Markdown
And to get the parameter names/values in dictionary format, we can use **`tuner.results.best_hyper_params`**
###Code
tuner.results.best_hyper_params
###Output
_____no_output_____
###Markdown
Frequently, though, the first time we search across various (sometimes random) hyper-parameter combinations, we will not find the best **possible** combination, since we are evaluating a tiny fraction of all possible combinations. For a given parameter, the ideal value might be above or below the range of values that we tried. Or, our range might have been too wide, and we need to "zoom in" and try more granular values when we have more information about the general vicinity of the ideal value.

**`tuner.results.plot_hyper_params_profile()`** helps us gain a better understanding of the performance of the various parameter values that we tried. We can visualize the performance of a specific hyper-parameter, or a set of hyper-parameters (and their relationship to each other).
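If the profile plots in the next cells suggest the optimum lies outside or between the values we tried, a follow-up pass could zoom in with a narrower grid. A sketch, with purely illustrative values (not results from this notebook):

```python
# Narrower, more granular grid around the region the profile plots point to (values are assumptions).
refined_grid = HyperParamsGrid(params_dict=dict(alpha=[0.005, 0.01, 0.05, 0.1],
                                                l1_ratio=[0.25, 0.5, 0.75]))

# Rebuild the resampler exactly as before, then tune over the refined grid.
refined_resampler = RepeatedCrossValidationResampler(
    model=ElasticNetRegressor(),
    transformations=[DummyEncodeTransformer(CategoricalEncoding.DUMMY)],
    scores=[MaeScore(), RmseScore()],
    folds=5,
    repeats=5)

refined_tuner = GridSearchModelTuner(resampler=refined_resampler,
                                     hyper_param_object=ElasticNetRegressorHP())
refined_tuner.tune(data_x=training_x, data_y=training_y, params_grid=refined_grid)
```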
###Code
tuner.results.plot_hyper_params_profile(metric=Metric.MEAN_ABSOLUTE_ERROR,
x_axis='alpha',
line='l1_ratio')
tuner.results.plot_hyper_params_profile(metric=Metric.MEAN_ABSOLUTE_ERROR, x_axis='l1_ratio')
###Output
_____no_output_____
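###Markdown
For instance, if the profiles suggested that the best `alpha` sits near the low end of the range we tried and that mid-range `l1_ratio` values performed best, a follow-up search could use a narrower, more granular grid. The dictionary below is only a sketch of how such a "zoomed-in" grid might be expressed; the exact values (and the params-grid object you feed them into) depend on your own results.
###Code
import numpy as np

# Hypothetical refined ranges centered on the neighborhood of the best values found so far.
refined_params = {
    'alpha': np.round(np.linspace(0.001, 0.05, 8), 4).tolist(),
    'l1_ratio': np.round(np.linspace(0.3, 0.7, 5), 2).tolist(),
}
print(refined_params)
###Output
_____no_output_____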
###Markdown
Retrain & Evaluate Set Up `ModelTrainer` After finding the best hyper-parameters, we want to retrain the model with those parameters on the full training set and predict/evaluate on the holdout set. As stated previously, since we'll use the same type of `Splitter` in the `ModelTrainer`, the training/holdout split will be the same (i.e. the same training indexes/data used in the Tuner from the manual split will be used to refit the final model). Note: instead of using the default threshold of `0.5`, we'll use the threshold calculated by the Decorator via the resampler (associated with the hyper-parameters we are choosing), which makes the best tradeoff between Sensitivity/Specificity, captured via **`calculated_resampled_threshold`**.
###Code
# give the objects, which encapsulate the behavior of everything involved with training the model, to our ModelTrainer
trainer = ModelTrainer(model=ElasticNetRegressor(),
model_transformations=[DummyEncodeTransformer(CategoricalEncoding.DUMMY)],
splitter=RegressionStratifiedDataSplitter(holdout_ratio=0.2),
evaluator=RegressionEvaluator())
###Output
_____no_output_____
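###Markdown
As an aside, `DummyEncodeTransformer(CategoricalEncoding.DUMMY)` dummy-encodes the categorical columns before the model sees them. The snippet below shows the same idea with plain pandas (`get_dummies` with `drop_first=True`) on a made-up frame, just to illustrate what that kind of transformation produces; it is not the transformer's actual implementation.
###Code
import pandas as pd

toy = pd.DataFrame({
    'age': [19, 33, 47],
    'smoker': ['yes', 'no', 'no'],
    'region': ['southwest', 'northeast', 'southeast'],
})

# Dummy encoding: one-hot encode each categorical column, dropping one reference level.
encoded = pd.get_dummies(toy, columns=['smoker', 'region'], drop_first=True)
print(encoded)
###Output
_____no_output_____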
###Markdown
Set Up Hyper-Parameters Let's define the hyper-parameters we want to use. We know we want to use the **`ElasticNetRegressorHP`** class, which contains the specific parameters available to train with, but we also know that, for some of those, we'll want to update them with the values we found with our `GridSearchModelTuner`. Let's first define our object, and use **`.params_dict`** to see the default hyper-parameters and values.
###Code
elastic_net_hyper_params = ElasticNetRegressorHP()
elastic_net_hyper_params.params_dict
###Output
_____no_output_____
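###Markdown
For context on what `alpha` and `l1_ratio` control: in the usual (e.g. scikit-learn) parameterization, Elastic Net minimizes `(1/(2n)) * ||y - Xw||^2 + alpha * (l1_ratio * ||w||_1 + 0.5 * (1 - l1_ratio) * ||w||_2^2)`, so `alpha` scales the overall penalty and `l1_ratio` blends Lasso (1.0) toward Ridge (0.0). Assuming the wrapped model follows that convention, the plain scikit-learn equivalent would look like the sketch below (illustrative values only).
###Code
from sklearn.linear_model import ElasticNet

# Assumed scikit-learn analogue of ElasticNetRegressor + ElasticNetRegressorHP:
# alpha scales the total penalty; l1_ratio blends the L1 (Lasso) and L2 (Ridge) terms.
sk_model = ElasticNet(alpha=0.5, l1_ratio=0.5)
print(sk_model.get_params())
###Output
_____no_output_____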
###Markdown
Now let's update the specific parameters that we found with the `GridSearchModelTuner` (which we can access via **`tuner.results.best_hyper_params`**). We can use **`.update_dict()`**, which conveniently takes a dictionary with the parameter names/values that we want to update (which is what `best_hyper_params` returns) and updates the corresponding values of the hyper-parameters within our object.
###Code
elastic_net_hyper_params.update_dict(tuner.results.best_hyper_params)
elastic_net_hyper_params.params_dict
###Output
_____no_output_____
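###Markdown
Conceptually, `update_dict()` behaves like a dictionary merge over just the keys you pass in, leaving the remaining defaults untouched — roughly like the pure-Python sketch below (illustrative only, not the library's implementation).
###Code
# Defaults merged with a tuned subset; keys in `tuned` win, everything else keeps its default.
defaults = {'alpha': 0.5, 'l1_ratio': 0.5}
tuned = {'alpha': 0.01}  # e.g. the kind of dictionary best_hyper_params might return

merged = {**defaults, **tuned}
print(merged)
###Output
_____no_output_____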
###Markdown
Train & Evaluate
###Code
trainer.train(data=explore.dataset, target_variable='expenses', hyper_params=elastic_net_hyper_params)
trainer.holdout_evaluator.all_quality_metrics
###Output
_____no_output_____
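###Markdown
`all_quality_metrics` reports standard regression metrics on the holdout set. To make those numbers concrete, here is a small stand-alone sketch that computes MAE, RMSE, and R^2 with scikit-learn on made-up predictions; it illustrates the kind of metrics reported, not the trainer's internals.
###Code
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([1200.0, 8500.0, 4300.0, 15200.0, 2750.0])  # hypothetical expenses
y_pred = np.array([1500.0, 8100.0, 4000.0, 14500.0, 3100.0])

mae = mean_absolute_error(y_true, y_pred)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
r2 = r2_score(y_true, y_pred)
print(f"MAE: {mae:.1f}  RMSE: {rmse:.1f}  R^2: {r2:.3f}")
###Output
_____no_output_____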
notebooks/CNCTest.ipynb
###Markdown
`from dfml.python.dataset import Dataset`: loads the Dataset class from the dfml API.
- `Dataset(source_name)`: creates a Dataset object for the given source.
- `Dataset.get_dataframe()`: reads the data source and returns a pandas dataframe.

`from dfml.utilities import dfml_utils`: loads the dfml_utils script, which has the following functions:
- `dfml_utils.save_and_download_model(model_name, model_object)`: lets the user save and download the trained model with DataFlowML. This model can then be used for training and/or scoring as part of the DataFlowML pipeline.
  - model_name: name of the model. String value only, e.g. "DecisionTreeModel".
  - model_object: object of the trained model.
- `dfml_utils.get_h2o_cluster_url(cluster_name)`: lets the user get the H2O cluster URL by providing the cluster name.
  - cluster_name: name of the cluster. String value only, e.g. "TrainingCluster".
- `dfml_utils.upload_and_register_h2o_model(model_object, model_name, model_type, project_name, project_version, workspace_name)`: lets the user upload and register an H2O model in 'mojo' format in dfml.
  - model_object: object of the trained H2O model.
  - model_name: name of the model. String value only, e.g. "H2OTreeModel".
  - model_type: type of the trained model. String value only. Supported H2O model types: "DistributedRandomForest", "GeneralizedLinearModelling", "IsolationForest", "GradientBoostingMachine".
  - project_name: project in which the model should be registered. String value only, e.g. "MyProject".
  - project_version: version of the given project in which the model should be registered.
  - workspace_name: workspace in which the model should be registered. String value only, e.g. "MyWorkspace".
###Code
from dfml.python.dataset import Dataset
from dfml.utilities import dfml_utils
dataset_1 = Dataset("CDCMaster")
# you can use pandas to create dataframe as shown below
# df = dataset_1.get_dataframe()
df1 = dataset_1.get_dataframe()
###Output
Dataframe created
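###Markdown
Putting the helper functions described above together, a typical follow-up (after a model has been trained in this notebook) might look roughly like the sketch below. It uses only the signatures documented above; `trained_model` and the name/project/workspace values are placeholders, and this assumes the DataFlowML runtime is available.
###Code
from dfml.utilities import dfml_utils

# `trained_model` is a placeholder for a model object fit earlier in the notebook.

# Save and download a trained (non-H2O) model for reuse in a DataFlowML pipeline.
dfml_utils.save_and_download_model("DecisionTreeModel", trained_model)

# Look up the URL of a named H2O cluster.
cluster_url = dfml_utils.get_h2o_cluster_url("TrainingCluster")
print(cluster_url)

# Upload and register a trained H2O model (exported as 'mojo') against a project/workspace.
# Note: the expected type of project_version is not documented above; 1 is assumed here.
dfml_utils.upload_and_register_h2o_model(trained_model, "H2OTreeModel",
                                         "DistributedRandomForest",
                                         "MyProject", 1, "MyWorkspace")
###Output
_____no_output_____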