path | concatenated_notebook
---|---
mybinder-examples/mybinder_data_wrangling.ipynb | ###Markdown
Printing versions of Python modules and packages with **watermark** - the IPython magic extension.
###Code
%load_ext watermark
%watermark -v -p numpy,pandas,geopandas,matplotlib.pyplot,json,requests,sodapy
import pandas as pd
# reading in data as a url from NYC Open Data
url = 'https://data.cityofnewyork.us/api/views/nabw-vbue/rows.csv?accessType=DOWNLOAD'
# saving data as a pandas dataframe named 'building_footprints_csv'
building_footprints_csv = pd.read_csv(url)
# previewing the first five rows
building_footprints_csv.head()
# printing the dimensions (i.e. rows, columns) of the data
building_footprints_csv.shape
rows = f'{building_footprints_csv.shape[0]:,}'
columns = building_footprints_csv.shape[1]
print('This dataset has {} rows and {} columns.'.format(rows, columns))
###Output
This dataset has 1,084,829 rows and 15 columns.
###Markdown
3. Data Inspection 3.1 Previewing Data
###Code
# previewing the first five rows of our dataframe
building_footprints_csv.head()
# previewing the last five rows of our dataframe
building_footprints_csv.tail()
# printing the shape or dimensions of our dataframe (i.e. rows, columns)
building_footprints_csv.shape
# the object's type
type(building_footprints_csv)
# printing the columns of our dataframe
building_footprints_csv.columns
# printing the data types of our columns
building_footprints_csv.dtypes
# printing the column names, non-null counts, and data types of our columns
building_footprints_csv.info()
# counts of unique values of our datatypes
building_footprints_csv.dtypes.value_counts()
# printing True/False if column is unique on our unique key (DOITT_ID)
building_footprints_csv['DOITT_ID'].is_unique
# printing descriptive statistics of our numeric columns in our data
building_footprints_csv.describe()
building_footprints_csv.describe(include=['O'])
###Output
_____no_output_____
###Markdown
3.3 Identifying Null/NA Values
###Code
# returning a boolean for each column indicating whether it contains any NA values
building_footprints_csv.isnull().any()
# printing the number of null/na values in each column
building_footprints_csv.isnull().sum()
# printing the total number of null/na values across the entire dataframe
building_footprints_csv.isnull().sum().sum()
# return descriptive statistics of boolean indicating if any of the values are NA
building_footprints_csv.isnull().any().describe()
# calculating a percentage of the number of nulls to total number of records of each column
missing_data = (building_footprints_csv.isnull().sum() / len(building_footprints_csv)) * 100
# creating a dataframe
missing_data = pd.DataFrame({'Missing Ratio (%)' :missing_data})
missing_data.sort_values(by='Missing Ratio (%)', ascending=True, inplace=True)
missing_data
###Output
_____no_output_____
###Markdown
4. Data Cleaning & Wrangling We will be cleaning the **Construction Year** (i.e. CNSTRCT_YR) column, as this is the column we will be using in our analysis. 4.1 Previewing Column Values
###Code
# printing the object's type
type(building_footprints_csv['CNSTRCT_YR'])
# returning a series of the 'CNSTRCT_YR' column
building_footprints_csv["CNSTRCT_YR"]
# returning a dataframe of the 'CNSTRCT_YR' column
building_footprints_csv[["CNSTRCT_YR"]]
# descriptive statistics of the 'CNSTRCT_YR' column
building_footprints_csv[["CNSTRCT_YR"]].describe()
# histogram of the 'CNSTRCT_YR' column
building_footprints_csv[["CNSTRCT_YR"]].hist()
# box plot of the 'CNSTRCT_YR' column
building_footprints_csv[["CNSTRCT_YR"]].plot.box()
building_footprints_csv.loc[building_footprints_csv["CNSTRCT_YR"] < 1900].describe()
bldgs_before_1900 = building_footprints_csv.loc[building_footprints_csv["CNSTRCT_YR"].between(1500, 1899)]
bldgs_before_1900.head()
bldgs_before_1900.shape
bldgs_before_1900['CNSTRCT_YR'].describe()
bldgs_before_1900['CNSTRCT_YR'].value_counts()
bldgs_before_1900['CNSTRCT_YR'].hist()
bldgs_before_1900['CNSTRCT_YR'].plot.box()
bldgs_before_1900['CNSTRCT_YR'].isna().sum()
%ls
bldgs_before_1900.to_csv('bldgs_before_1900.csv', index=False)
%ls
df = pd.read_csv('bldgs_before_1900.csv')
print(df.shape)
df.head()
df.CNSTRCT_YR.describe()
###Output
_____no_output_____ |
Outlier Analysis.ipynb | ###Markdown
Outlier Analysis. Authors: Paul McCabe (code and written analysis), Ankur Malik (written analysis). Statistical Analysis
###Code
import pandas as pd
from sklearn.ensemble import IsolationForest
import numpy as np
sen_df = pd.read_csv("Stocks/sen_df_price.csv")
print(str(len(sen_df)) + " entries")
print(sen_df.columns)
###Output
4474 entries
Index(['Unnamed: 0', 'owner', 'asset_description', 'asset_type', 'type',
'amount', 'comment', 'senator', 'ptr_link', 'id', 'min_amount',
'max_amount', 'Open', 'High', 'Low', 'Close', 'Volume', 'Adjusted',
'Error_message_x', 'transaction_date', 'ticker', 'o_c_perc', 'h_l_perc',
'Error_message_y', '1_Week_O', '1_Week_C', '1_Week_H', '1_Week_L',
'1_Week_V', '1_Week_A', '1_Week_o_p', '1_Week_c_p', 'Error_message',
'1_Month_O', '1_Month_C', '1_Month_H', '1_Month_L', '1_Month_V',
'1_Month_A', '1_Month_o_p', '1_Month_c_p'],
dtype='object')
###Markdown
First we will examine the percentage returns. An important note is that the percentage is multiplied by -1 if it is a sale. That way if the stock plummets after they sell, it counts as them making that percentage. - __high_low_day__: the % change between the highest price of the transaction day and the lowest price. - __open_close_day__: the % change between the open price and the close price. - __open_open_week__: the % change between the open price of the transaction date and the open price a week later - __close_close_week__: the % change between the closing price of the transaction date and the closing price a week later
###Code
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.stats import iqr
import statistics as stat
import matplotlib.ticker as mtick
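# Hedged sketch, not the notebook's original preprocessing: the signed return columns
# described above (o_c_perc, 1_Week_o_p, ...) can be reproduced by flipping the sign of
# the raw percentage change for sale transactions, so that a price drop after a sale
# counts as a gain for the senator. Matching 'Sale' in the 'type' column is an
# assumption about how sales are labelled in this dataset.
sale_sign = np.where(sen_df['type'].astype(str).str.contains('Sale', case=False), -1.0, 1.0)
o_c_perc_check = sale_sign * (sen_df['Close'] - sen_df['Open']) / sen_df['Open']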
df_box = pd.DataFrame({"high_low_day": sen_df.h_l_perc,
"open_close_day": sen_df.o_c_perc,
"open_open_week": sen_df["1_Week_o_p"],
"close_close_week": sen_df["1_Week_c_p"],
"open_open_month": sen_df["1_Month_o_p"],
"close_close_month": sen_df['1_Month_c_p']})
sns.boxplot(x="variable", y="value", data=pd.melt(df_box))
plt.gca().set_yticklabels(['{:.0f}%'.format(x*100) for x in plt.gca().get_yticks()])
plt.xticks(rotation=45)
plt.xlabel('Types of Percentages')
plt.ylabel('Percentage Change')
plt.suptitle("BoxPlot of Percentage Change Over Different Durations")
plt.show()
cols = df_box.columns
for col in cols:
print(str(col) + " IQR: " + str("{:.4%}".format(iqr(df_box[col]))))
print(str(col) + " Median: " + str("{:.4%}".format(stat.median(df_box[col]))))
print(str(col) + " Mean: " + str("{:.4%}\n".format(stat.mean(df_box[col]))))
###Output
_____no_output_____
###Markdown
Interestingly enough, the median is significantly lower than the mean for the month long stock duration, indicating that highly profitable trades are offsetting the mean calculation.
###Code
sns.boxplot(x="variable", y="value", data=pd.melt(df_box))
plt.ylim([-.15, .15])
plt.gca().set_yticklabels(['{:.0f}%'.format(x*100) for x in plt.gca().get_yticks()])
plt.xticks(rotation=45)
plt.xlabel('Types of Percentages')
plt.ylabel('Percentage Change')
plt.suptitle("BoxPlot of Percentage Change Over Different Durations Zoomed In")
plt.show()
###Output
_____no_output_____
###Markdown
From these box plots we can see that the range of transaction returns varies greatly around the mean and IQR values. One potential measure of suspicious activity is a senator making many trades in the 4th quartile range, meaning a statistically significantly larger share than 25% of their trades lands in the top quartile. Before that though, we can also examine the owners of the top trades for the different measures.
###Code
print("# of trades in the top 100 trades of the dataset.\n")
print("Open Close Percentage")
print(sen_df.nlargest(100,'o_c_perc').senator.value_counts())
print("\n1 Week Close Percentage")
print(sen_df.nlargest(100,'1_Week_C').senator.value_counts())
print("\n1 Month Close Percentage")
print(sen_df.nlargest(100,'1_Month_C').senator.value_counts())
###Output
# of trades in the top 100 trades of the dataset.
Open Close Percentage
David A Perdue , Jr 50
Pat Roberts 9
James M Inhofe 8
Susan M Collins 6
Patrick J Toomey 6
Kelly Loeffler 4
John Hoeven 4
John F Reed 3
Angus S King, Jr. 2
Jerry Moran, 2
Shelley M Capito 2
Sheldon Whitehouse 2
Thomas R Carper 1
William Cassidy 1
Name: senator, dtype: int64
1 Week Close Percentage
Pat Roberts 19
Sheldon Whitehouse 16
Susan M Collins 12
Kelly Loeffler 12
David A Perdue , Jr 11
James M Inhofe 7
Shelley M Capito 5
Jerry Moran, 4
Patrick J Toomey 3
John F Reed 3
John Hoeven 3
Angus S King, Jr. 2
Thomas R Tillis 2
Gary C Peters 1
Name: senator, dtype: int64
1 Month Close Percentage
Pat Roberts 19
Sheldon Whitehouse 15
Susan M Collins 12
Kelly Loeffler 12
David A Perdue , Jr 12
James M Inhofe 6
Shelley M Capito 5
Jerry Moran, 4
Patrick J Toomey 4
John F Reed 3
John Hoeven 3
Angus S King, Jr. 2
Thomas R Tillis 2
Gary C Peters 1
Name: senator, dtype: int64
###Markdown
From these 3 tables, we notice that David A Perdue Jr. has many of the highest-return trades on transaction day, while Pat Roberts, Sheldon Whitehouse, Susan M Collins, and Kelly Loeffler have many high-return trades after a week and a month. Note that David A Perdue has the highest number of transactions by far, so his activity is perhaps not unusual. Each of their total transaction counts is shown below.
###Code
print("David A Perdue , Jr: "+str(len(sen_df.loc[sen_df.senator == "David A Perdue , Jr"])))
print("Pat Roberts: "+str(len(sen_df.loc[sen_df.senator == "Pat Roberts"])))
print("Sheldon Whitehouse: "+str(len(sen_df.loc[sen_df.senator == "Sheldon Whitehouse"])))
print("Susan M Collins: "+str(len(sen_df.loc[sen_df.senator == "Susan M Collins"])))
print("Kelly Loeffler: "+str(len(sen_df.loc[sen_df.senator == "Kelly Loeffler"])))
###Output
David A Perdue , Jr: 1726
Pat Roberts: 249
Sheldon Whitehouse: 524
Susan M Collins: 389
Kelly Loeffler: 83
###Markdown
As you can see, everyone else's number of trades is much lower than David's. Kelly Loeffler in particular has nearly 15% of her trades in the top 2.2% of all senator trades after 1 month. Quartile Analysis We will now add a column labeling each trade's quartile.
###Code
cols = list(["h_l_perc", "o_c_perc", "1_Week_o_p", "1_Week_c_p", "1_Month_o_p", "1_Month_c_p"])
for col in cols:
sen_df[str(col) + "_q"] = pd.qcut(sen_df[col], 4, labels=False)
sen_df[str(col) + "_q"] = sen_df[str(col) + "_q"] + 1
cols_q = [s + "_q" for s in cols]
sen_df_4q = sen_df[sen_df["o_c_perc_q"] == 4]
sen_q = pd.DataFrame()
sen_q.index.name = 'Senators with 4th Quartile Trades'
sen_q['total_trans'] = sen_df.senator.value_counts()
for i in range(len(cols_q)):
sen_df_4q = sen_df[sen_df[str(cols_q[i])] == 4]
sen_q["n4q_" + str(cols[i])] = sen_df_4q.senator.value_counts()
sen_q["n4q_" + str(cols[i]) + '/total_trans'] = sen_q["n4q_"+ str(cols[i])]/sen_q['total_trans']
#sen_df.head()
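# Hedged addition, not part of the original analysis: a one-sided binomial test of
# whether a senator's count of 4th-quartile trades is significantly above the 25%
# share expected by chance. scipy.stats.binomtest (SciPy >= 1.7) is assumed to be
# available in this environment.
from scipy.stats import binomtest
sen_q['p_binom_1_Month_o_p'] = [binomtest(int(k), int(n), p=0.25, alternative='greater').pvalue for k, n in zip(sen_q['n4q_1_Month_o_p'].fillna(0), sen_q['total_trans'])]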
###Output
_____no_output_____
###Markdown
Here the terminology will be slightly confusing. Essentially we are taking the 6 measures of trade profit from before (1 day, 1 week, and 1 month, with 2 measures for each time duration) and counting the number of times a senator makes a trade in the top quartile. Then we compare that to their overall number of trades. As a baseline, we would expect that on average, 25% of trades for each senator would be in the top quartile. Below is a key: - __total_trans__: Total number of transactions - __n4q_h_l_perc__: Number of times they have a 4th quartile entry in High to Low percentage. (Same day as transaction) - __n4q_h_l_perc/total_trans__: Fraction representing number of times they have 4th quartile entry in High to Low percentage divided by total number of transactions. - __n4q_o_c_perc__: Number of times they have a 4th quartile entry in Open to Close percentage. (Same day as transaction) - __n4q_1_week_o_p__: Number of times they have a 4th quartile entry in the Open to Open percentage change 1 week after transaction date. - __n4q_1_week_c_p__: Number of times they have a 4th quartile entry in the Close to Close percentage change 1 week after transaction date. - Same convention for the stock price 1 month after transaction date.
###Code
sen_q.head()
###Output
_____no_output_____
###Markdown
Here are the top performers by the percentage of their trades that make it into the top quartile. For those senators with statistically large enough sample sizes (n>32), we would expect roughly 25% of their trades to be in the 4th quartile of senator trades. What we find, though, is that several senators have significantly more than 25% of their trades in that top quartile.
###Code
sen_4q = pd.DataFrame()
sen_4q['4th Q Transaction Day Percentage'] = sen_q.nlargest(10, 'n4q_o_c_perc/total_trans')['n4q_o_c_perc/total_trans']
sen_4q['total_trans'] = sen_df.senator.value_counts()
sen_4q
sen_4q = pd.DataFrame()
sen_4q['4th Q 1 Week Percentage'] = sen_q.nlargest(10, 'n4q_1_Week_o_p/total_trans')['n4q_1_Week_o_p/total_trans']
sen_4q['total_trans'] = sen_df.senator.value_counts()
sen_4q
sen_4q = pd.DataFrame()
sen_4q['4th Q 1 Month Percentage'] = sen_q.nlargest(10, 'n4q_1_Month_o_p/total_trans')['n4q_1_Month_o_p/total_trans']
sen_4q['total_trans'] = sen_df.senator.value_counts()
sen_4q
###Output
_____no_output_____
###Markdown
Examining these 3 tables, we begin to notice names that appear multiple times and who have a high enough transaction count to not be considered lucky. This can be misleading though, as perhaps these senators just make risky trades and we are only looking at their successful trades. Further analysis would require looking at their 1st quartile trades, trades that do worse than 75 percent of all other trades. Unfortunately, due to time constraints of this project we will not be examining these names further. Unsupervised Learning With a dataset like this, it is more difficult to use supervised machine learning techniques, as there is no clear response variable. In this section we will use several unsupervised machine learning methods in an attempt to notice unusual patterns/activity through clustering. Hierarchical Clustering "Hierarchical clustering is more flexible than K-means and more easily accommodates non-numerical variables. It is more sensitive in discovering outlying or aberrant groups or records... Hierarchical clustering's flexibility comes with a cost, and hierarchical clustering does not scale well to large data sets with millions of records" - Practical Statistics for Data Scientists In Python, we must convert our string columns to numerical values. Credit for the function below goes to https://pythonprogramming.net/working-with-non-numerical-data-machine-learning-tutorial/
###Code
import scipy.cluster.hierarchy as sch
sen_df_UL = pd.DataFrame({"High": sen_df.High,
"Low": sen_df.Low,
"Open": sen_df.Open,
"Close": sen_df.Close,
"min_amount": sen_df.min_amount,
"max_amount": sen_df.max_amount,
"Volume": sen_df.Volume,
"high_low_day": sen_df.h_l_perc,
"open_close_day": sen_df.o_c_perc,
"open_open_week": sen_df["1_Week_o_p"],
"close_close_week": sen_df["1_Week_c_p"],
"open_open_month": sen_df["1_Month_o_p"],
"close_close_month": sen_df['1_Month_c_p'],
"Owner": sen_df.owner.astype(object),
"Type": sen_df.type.astype(object),
"senator": sen_df.senator.astype(object)})
# convert_objects() was removed from pandas; infer_objects() performs the equivalent dtype inference
sen_df_UL = sen_df_UL.infer_objects()
sen_df_UL.fillna(0, inplace=True)
#Convert non-numerical data into numerical representations for clustering techniques
def handle_non_numerical_data(df):
columns = df.columns.values
for column in columns:
text_digit_vals = {}
def convert_to_int(val):
return text_digit_vals[val]
if df[column].dtype != np.int64 and df[column].dtype != np.float64:
column_contents = df[column].values.tolist()
unique_elements = set(column_contents)
x = 0
for unique in unique_elements:
if unique not in text_digit_vals:
text_digit_vals[unique] = x
x+=1
df[column] = list(map(convert_to_int, df[column]))
return df
sen_df_UL = handle_non_numerical_data(sen_df_UL)
dendrogram = sch.dendrogram(sch.linkage(sen_df_UL, method = "ward"))
plt.title('Dendrogram')
plt.xlabel('Trades')
plt.ylabel('Euclidean distances')
plt.show()
###Output
_____no_output_____
###Markdown
From the dendrogram, using the Ward method, which minimizes the within-cluster variance, we can select our number of clusters. The vertical distances represent the distance/variance between clusters and in order to capture the strongest divide, we will set a threshold in a way that cuts the tallest vertical line. Marking it near the bottom of the left blue line, this will give us 3 clusters.
###Code
from sklearn.cluster import AgglomerativeClustering
cluster = AgglomerativeClustering(n_clusters=3, affinity='euclidean', linkage='ward')
cluster.fit_predict(sen_df_UL)
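# Hedged follow-up, not in the original notebook: check how the trades are distributed
# across the three clusters chosen from the dendrogram above.
print(pd.Series(cluster.labels_).value_counts())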
plt.scatter(sen_df_UL["open_open_week"], sen_df_UL["open_open_month"], c=cluster.labels_, s= 8)
plt.title("Open to Open Week vs Open to Open Month")
###Output
_____no_output_____
###Markdown
Unfortunately, our graph yields no clear clustering. Perhaps graphing vs other variables would yield better results, however I have found no better graph. Isolation Forest Isolation Forest is an unsupervised learning algorithm that belongs to the ensemble decision trees family. This approach is different from all previous methods. All the previous ones were trying to find the normal region of the data and then identify anything outside of that defined region as an outlier or anomaly. "This method works differently. It explicitly isolates anomalies instead of profiling and constructing normal points and regions by assigning a score to each data point. It takes advantage of the fact that anomalies are the minority data points and that they have attribute-values that are very different from those of normal instances. This algorithm works great with very high dimensional datasets and it proved to be a very effective way of detecting anomalies." - __[Will Badr, Sr. AI/ML Specialist @ Amazon Web Service](https://towardsdatascience.com/5-ways-to-detect-outliers-that-every-data-scientist-should-know-python-code-70a54335a623)__ In this section we will just use the numerical columns. From this though, we can also create a correlation matrix!
###Code
iso_df = pd.DataFrame({"High": sen_df.High.astype(float),
"Low": sen_df.Low.astype(float),
"Open": sen_df.Open.astype(float),
"Close": sen_df.Close.astype(float),
"min_amount": sen_df.min_amount.astype(float),
"max_amount": sen_df.max_amount.astype(float),
"Volume": sen_df.Volume.astype(float),
"high_low_day": sen_df.h_l_perc.astype(float),
"open_close_day": sen_df.o_c_perc.astype(float),
"open_open_week": sen_df["1_Week_o_p"].astype(float),
"close_close_week": sen_df["1_Week_c_p"].astype(float),
"open_open_month": sen_df["1_Month_o_p"].astype(float),
"close_close_month": sen_df['1_Month_c_p'].astype(float)})
# the 'behaviour' argument was removed in scikit-learn 0.24, so it is omitted here
clf = IsolationForest(max_samples=100, random_state=1, contamination='auto')
preds = clf.fit_predict(iso_df)
sen_df['Iso_score'] = preds
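# Hedged addition, not part of the original analysis: IsolationForest also exposes a
# continuous anomaly score through decision_function (lower values are more anomalous),
# which can be more informative than the binary -1/1 labels stored in 'Iso_score'.
sen_df['Iso_decision'] = clf.decision_function(iso_df)
print(sen_df.nsmallest(5, 'Iso_decision')[['senator', 'Iso_decision']])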
sus_sen = sen_df[sen_df.Iso_score == -1]
sus_sen.senator.value_counts()
sen_sus_count = pd.DataFrame({'sus_trades': sus_sen.senator.value_counts()})
sen_sus_count.index.name = 'Senators with trades marked as outliers'
sen_sus_count['total_transactions'] = sen_df.senator.value_counts()
sen_sus_count['sus_trans/total_trans'] = sen_sus_count['sus_trades']/sen_sus_count['total_transactions']
sen_sus_count.style.format({'sus_trans/total_trans': '{:,.2f}'.format})
print("sus_trades = # of trades that have been detected as anomolies")
sen_sus_count.head(10)
###Output
sus_trades = # of trades that have been detected as anomolies
###Markdown
Correlation Matrix and Dendrogram Heatmap
###Code
corr = iso_df.corr()
ax = sns.heatmap(
corr,
vmin=-1, vmax=1, center=0,
cmap=sns.diverging_palette(20, 220, n=200),
square=True
)
ax.set_xticklabels(
ax.get_xticklabels(),
rotation=45,
horizontalalignment='right'
);
###Output
_____no_output_____
###Markdown
While I cannot glean much useful information from this graph, perhaps subsetting some of the data and using our dendrogram again will be more visually helpful.
###Code
iso_df['senator'] = sen_df['senator']
iso_df['senator'] = sen_df['senator']
iso_df = iso_df.set_index('senator')
#my_palette = dict(zip(iso_df..unique(), ["orange","yellow","brown"]))
#row_colors = df.cyl.map(my_palette)
sns.clustermap(iso_df, metric="correlation", method="single", cmap="Blues", standard_scale=1)
###Output
_____no_output_____ |
source/sagemaker/xgboost_beginner.ipynb | ###Markdown
Battery Cell Consistency Alarm - **Problem description**: predict whether a "poor battery cell consistency" alarm will occur within the next two weeks. - **Problem analysis**: aggregate the data across all vehicles, set a maximum time window (two weeks), and extract features within that window. If an alarm occurs within the next two weeks the label is 1, otherwise 0. This turns the task into a time-series problem whose output is a binary classification probability. - **Feature vector**: the features considered in this example code cover 7 dimensions: "collection time", "total voltage (V)", "total current (A)", "maximum cell voltage (V)", "minimum cell voltage (V)", "maximum temperature (℃)", "minimum temperature (℃)". Note: the data in this example is synthetic rather than real; the VINs and other information are randomly generated and are for demonstration only. Customers can adapt the algorithm to their own datasets and deploy it. Table of contents 1. [Installing third-party dependencies](Installing-third-party-dependencies) 1. [Data visualization](Data-visualization) 1. [Dataset split](Dataset-split) 1. [Battery cell consistency prediction with an XGBoost classifier](Battery-cell-consistency-prediction-with-an-XGBoost-classifier) 1. [XGBoost model building and training](XGBoost-model-building-and-training) 1. [XGBoost model deployment](XGBoost-model-deployment) 1. [XGBoost model testing](XGBoost-model-testing) 1. [Resource cleanup](Resource-cleanup) Installing third-party dependencies
###Code
# In China, please use the Tsinghua University opentuna mirror to speed up the downloads (because of the Great Firewall of China)
# ! pip install -i https://opentuna.cn/pypi/web/simple --upgrade pip==20.3.1
# ! pip install -i https://opentuna.cn/pypi/web/simple pandas==1.1.5
# ! pip install -i https://opentuna.cn/pypi/web/simple seaborn==0.11.0
# ! pip install -i https://opentuna.cn/pypi/web/simple --upgrade sagemaker==2.18.0
! pip install --upgrade pip==20.3.1
! pip install pandas==1.1.5
! pip install seaborn==0.11.0
! pip install --upgrade sagemaker==2.18.0
###Output
_____no_output_____
###Markdown
Data Visualization The dataset contains "VIN", "collection time (Date)", "total voltage (V)", "total current (A)", "maximum cell voltage (V)", "minimum cell voltage (V)", "maximum temperature (℃)", and "minimum temperature (℃)". The Label column at the end indicates whether a battery cell consistency alarm occurred on that day: 0 means no alarm, 1 means an alarm occurred.
###Code
import pandas as pd
data_path = './series_samples.csv'
df = pd.read_csv(data_path)
df.head(30)
import seaborn as sns
import matplotlib.pyplot as plt
# compute correlation coefficients
df.corr()
# visualize pairwise joint distributions
sns.pairplot(df)
###Output
_____no_output_____
###Markdown
Dataset Split In the XGBoost-based classification approach, we take the samples from the previous 14 days and combine them into a 14*6-dimensional feature vector, and use that vector to predict whether a battery cell consistency alarm will appear within the next 14 days. The dataset split is done in the following steps: - Aggregate the historical features into one large vector and aggregate the alarm labels for the next 14 days. For example, for day x, the features (6 dimensions each) of day x-14 (inclusive) through day x-1 (inclusive), 14 days in total, are concatenated into an 84-dimensional feature vector; if a battery cell consistency alarm occurs between day x (inclusive) and day x+13 (inclusive), the label of that feature vector is 1, otherwise 0. Applying a sliding window over all the data extracts every training/validation sample. - After all samples are extracted, they are split into training and validation sets with a 4:1 ratio. - The split datasets are then uploaded to an S3 bucket.
###Code
import numpy as np
vins = df['VIN'].unique()
df = df.fillna(-1)
context_length = 14 # the feature vector is built from the previous 14 days of features
predict_length = 14 # predict whether a cell consistency alarm will occur within the next two weeks
all_samples = list()
positive, negative = 0, 0
for vin in vins:
sequence = df.loc[df.loc[:,'VIN'] == vin].values[:, 2:]
for index in range(len(sequence)):
if index < context_length:
continue
feats = sequence[index - context_length:index, :-1].flatten()
label = 1.0 if 1.0 in sequence[index: index + predict_length, -1] else 0.0
sample = np.concatenate(([label], feats), axis=0)
all_samples.append(sample)
if label == 0:
negative += 1
else:
positive += 1
all_samples = np.array(all_samples)
np.savetxt("xgboost_samples_label_first.csv", all_samples, delimiter=",")
print('Positive samples = {}'.format(positive))
print('Negative samples = {}'.format(negative))
# split into training and validation sets with a 4:1 ratio
train_val_ratio = 4.0
np.random.shuffle(all_samples)
index_split = int(len(all_samples) * train_val_ratio / (1.0 + train_val_ratio))
train_data = all_samples[0:index_split, :]
val_data = all_samples[index_split:, :]
print('train_data shape = {}'.format(train_data.shape))
print('val_data shape = {}'.format(val_data.shape))
np.savetxt("xgboost_train.csv", train_data, delimiter=",")
np.savetxt("xgboost_val.csv", val_data, delimiter=",")
# upload the split training/validation sets to an S3 bucket
import boto3
import sagemaker
account_id = boto3.client('sts').get_caller_identity()['Account']
region = boto3.Session().region_name
# bucket name SHOULD BE SAME with the training bucket name in deployment phase
bucket_name = 'bev-bms-train-{}-{}'.format(region, account_id)
# create bucket
s3 = boto3.client("s3")
existing_buckets = [b["Name"] for b in s3.list_buckets()['Buckets']]
if bucket_name in existing_buckets:
print('Bucket {} already exists'.format(bucket_name))
else:
s3.create_bucket(Bucket=bucket_name)
# upload train/val dataset to S3 bucket
s3_client = boto3.Session().resource('s3')
s3_client.Bucket(bucket_name).Object('train/xgboost_train.csv').upload_file('xgboost_train.csv')
s3_client.Bucket(bucket_name).Object('val/xgboost_val.csv').upload_file('xgboost_val.csv')
# s3_input_train = sagemaker.s3_input(s3_data='s3://{}/train'.format(bucket_name), content_type='csv')
# s3_input_val = sagemaker.s3_input(s3_data='s3://{}/val/'.format(bucket_name), content_type='csv')
s3_input_train = sagemaker.inputs.TrainingInput(s3_data='s3://{}/train'.format(bucket_name), content_type='csv')
s3_input_val = sagemaker.inputs.TrainingInput(s3_data='s3://{}/val'.format(bucket_name), content_type='csv')
print('s3_input_train = {}'.format(s3_input_train))
print('s3_input_val = {}'.format(s3_input_val))
###Output
_____no_output_____
###Markdown
Battery Cell Consistency Prediction with an XGBoost Classifier This example uses the built-in AWS XGBoost classification algorithm; users can pick other algorithms or define their own for training and deployment as needed. References: - https://docs.aws.amazon.com/zh_cn/sagemaker/latest/dg/xgboost.html - https://github.com/aws/amazon-sagemaker-examples XGBoost Model Building and Training
###Code
import boto3
import sagemaker
from sagemaker import get_execution_role
container = sagemaker.image_uris.retrieve(framework='xgboost', region=boto3.Session().region_name, version='1.0-1')
sess = sagemaker.Session()
role = get_execution_role()
print('Container = {}'.format(container))
print('Role = {}'.format(role))
xgb = sagemaker.estimator.Estimator(container,
role,
instance_count=1,
instance_type='ml.m4.xlarge',
sagemaker_session=sess,
output_path='s3://{}/{}'.format(bucket_name, 'xgboost-models'),
)
xgb.set_hyperparameters(max_depth=7, # maximum tree depth; larger values give more complex trees and can be tuned to control overfitting, typical values 3-10
eta=0.1, # learning rate, typical values 0.01 - 0.3
gamma=4, # minimum loss reduction required to split a node
min_child_weight=6,
subsample=0.8, # row subsampling rate; at 0.5, XGBoost randomly samples half of the training data for each tree
objective='binary:logistic',
num_round=200)
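# Hedged addition, not in the original notebook: the alarm labels are usually heavily
# imbalanced, so XGBoost's scale_pos_weight (roughly negative/positive) is often set as
# well; 'positive' and 'negative' are the counts computed in the dataset-split cell
# above, and set_hyperparameters is assumed to merge new values into the existing ones.
xgb.set_hyperparameters(scale_pos_weight=round(negative / max(positive, 1), 2))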
xgb.fit({'train': s3_input_train, 'validation': s3_input_val})
###Output
_____no_output_____
###Markdown
XGBoost Model Deployment The model is deployed as a runtime endpoint that users can invoke for forward inference.
###Code
# the SageMaker endpoint name must match the endpoint name used when the solution is deployed
sm_endpoint_name = 'battery-consistency-bias-alarm-prediction-endpoint'
xgb_predictor = xgb.deploy(
endpoint_name=sm_endpoint_name,
initial_instance_count=1,
instance_type='ml.m4.xlarge'
)
###Output
_____no_output_____
###Markdown
XGBoost Model Testing The model performance metrics are the following: - True Positive (TP): samples that are actually positive and classified as positive - False Positive (FP): samples that are actually negative but classified as positive - True Negative (TN): samples that are actually negative and classified as negative - False Negative (FN): samples that are actually positive but classified as negative - Recall: Recall = TPR = TP / (TP + FN), the fraction of all positive samples in the dataset that the model identifies - Precision: Precision = TP / (TP + FP), the fraction of the samples the model labels positive that are truly positive - False Positive Rate (FPR): FPR = FP / (FP + TN), the fraction of all negative samples that are misclassified as positive ROC curve: the smaller the chosen threshold, the larger both the TPR and the FPR
###Code
# performance analysis on the validation set
xgb_predictor.serializer = sagemaker.serializers.CSVSerializer()
gt_y = list()
pred_y = list()
for i in range(len(val_data)):
feed_feats = val_data[i, 1:]
prob = xgb_predictor.predict(feed_feats).decode('utf-8') # format: string
prob = float(prob) # convert the string response to a float (np.float was removed from NumPy)
gt_y.append(val_data[i, 0])
pred_y.append(prob)
from sklearn.metrics import roc_curve, auc, confusion_matrix, f1_score, precision_score, recall_score
from pandas.tseries.offsets import *
import matplotlib.pyplot as plt
def draw_roc(test_Y, pred):
fpr, tpr, thresholds = roc_curve(test_Y, pred, pos_label=1)
auc_score = auc(fpr, tpr)
print('AUC = {}'.format(auc_score))
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve (area = %0.2f)' % auc_score)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
def p_r_f1(test_Y, pred, thres=0.5):
pred = [p>=thres and 1 or 0 for p in pred]
cm = confusion_matrix(test_Y, pred)
precision = precision_score(test_Y, pred)
recall = recall_score(test_Y, pred)
f1 = f1_score(test_Y, pred)
print('Confusion Matrix = \n{}'.format(cm))
print('Precision = {}'.format(precision))
print('recall = {}'.format(recall))
print('f1_score = {}'.format(f1))
draw_roc(gt_y, pred_y)
p_r_f1(gt_y, pred_y, thres=0.15)
# SageMaker endpoint invocation test
import boto3
import base64
import json
import time
import numpy as np
runtime_sm_client = boto3.client(service_name='sagemaker-runtime')
test_feats = np.array([
364.61335034013604, -0.2457482993197279, 3.80116836734694, 3.7861037414965986, 24.78316326530612,
23.18324829931973, 360.41936658172546, -0.2934109938114305, 3.757078631234074, 3.74262977793957,
24.97961412449945, 23.499089916272297, 363.06764705882347, 0.05309597523219842, 3.7850770123839013,
3.770754643962848, 25.468653250773997, 23.89357585139319, 369.6609137055837, 0.3319796954314729,
3.8541254532269758, 3.838007251631617, 23.976069615663523, 22.530094271211023, 364.7919594594594,
-0.003918918918919656, 3.80265472972973, 3.788666891891892, 23.415540540540537, 21.92297297297297,
353.11501792114694, 0.07193548387096703, 3.680777060931899, 3.667754121863799, 26.293906810035843,
24.433691756272406, 355.7949381989405, -0.17374926427310172, 3.708674220129488, 3.696100941730429,
25.93878752207181, 24.459976456739263, 349.36256842105263, 0.8924631578947366, 3.64051747368421,
3.6289330526315786, 25.335157894736838, 23.581052631578952, 372.0972686733557, -1.7828874024526196,
3.8795345596432553, 3.864828874024527, 25.26365663322185, 23.782608695652176, 348.5370820668693,
6.805876393110436, 3.632681864235056, 3.6181013171225933, 24.893617021276604, 23.38804457953394,
348.93208923208914, -3.038738738738739, 3.635883311883312, 3.624791076791076, 26.18876018876019,
24.41269841269841, 359.3844774273346, 0.13546691403834246, 3.7457195423624, 3.7334737167594314,
26.749845392702536, 25.212739641311067, 392.0038461538462, 0.0, 4.085240384615385,
4.071028846153847, 18.39423076923077, 17.0, 364.1829856584093, 0.1546284224250319,
3.79564667535854, 3.7821192959582786, 24.286179921773144, 22.885919165580184])
payload = ''
for k, value in enumerate(test_feats):
if k == len(test_feats) - 1:
payload += str(value)
else:
payload += (str(value) + ',')
print('Payload = \n{}'.format(payload))
t1 = time.time()
response = runtime_sm_client.invoke_endpoint(
EndpointName=sm_endpoint_name,
ContentType='text/csv', # The MIME type of the input data in the request body.
Body=payload)
t2 = time.time()
print('Time cost = {}'.format(t2 - t1))
predicted_prob = response['Body'].read().decode()
print('Predicted prob = {}'.format(predicted_prob))
###Output
_____no_output_____
###Markdown
Resource Cleanup
###Code
# xgb_predictor.delete_endpoint()
###Output
_____no_output_____ |
src/ssc-txns.ipynb | ###Markdown
Marketplace addrs: Thanks Levi for this gist: https://gist.github.com/levicook/34f390073bd57abebc786cda5bec4094
###Code
marketplace_mapper = {
"HZaWndaNWHFDd9Dhk5pqUUtsmoBCqzb1MLu3NAh1VX6B": "AlphaArt",
"A7p8451ktDCHq5yYaHczeLMYsjRsAkzc3hCXcSrwYHU7": "DigitalEyes",
"AmK5g2XcyptVLCFESBCJqoSfwV3znGoVYQnqEnaAZKWn": "ExchangeArt",
"MEisE1HzehtrDpAAT8PnLHjpSSkRYakotTuJRPjTpo8": "MagicEden",
"CJsLwbP1iu5DuUikHEJnLfANgKy6stB2uFgvBBHoyxwz": "Solanart"
}
marketplace_token_url_mapper = {
"AlphaArt": "https://alpha.art/t/",
"DigitalEyes": "https://digitaleyes.market/item/",
"ExchangeArt": "https://exchange.art/single/",
"MagicEden": "https://magiceden.io/item-details/",
"Solanart": "https://solanart.io/search/" # params: ?token=<>
}
marketplace_fetch_id_url = "https://api-mainnet.magiceden.io/rpc/getNFTByMintAddress"
###Output
_____no_output_____
###Markdown
Setting up http client using genesysgo rpc
###Code
SSC_RPC_ENDPOINT = "https://ssc-dao.genesysgo.net"
SOL_EXPLORER_RPC_ENDPOINT = "https://explorer-api.mainnet-beta.solana.com"
sol_client = AsyncClient(SSC_RPC_ENDPOINT)
###Output
_____no_output_____
###Markdown
Fetching data for all the accounts
###Code
SSC_METADATA_JSON_URL = "https://sld-gengo.s3.amazonaws.com"
# "https://sld-gengo.s3.amazonaws.com/3965.json"
SSC_ADDR = "D6wZ5U9onMC578mrKMp5PZtfyc5262426qKsYJW7nT3p"
ssc_account_data = await sol_client.get_account_info(SSC_ADDR)
ssc_account_data
total_txns = await sol_client.get_transaction_count()
total_txns
ssc_signatures = await sol_client.get_confirmed_signature_for_address2(SSC_ADDR, limit=500)
len(ssc_signatures["result"])
ssc_signatures["result"][-1]
ssc_signatures_prev1 = await sol_client.get_signatures_for_address(
SSC_ADDR,
before="65rTamEzboobANuQAXbfJFPGZtATRZGa5kVCSbPJJNbYzK32HXvoe3gYFSEpiXb23D1hDYrxLWw1RK8dBdVoaeZ7", limit=5)
ssc_signatures_prev1
def fetch_time_diff(ts_start: datetime, ts_end: datetime, time_format: str = "seconds"):
return round((ts_end - ts_start).total_seconds(), 2)
async def fetch_txn_batch(rpc_endpoint: str, batch_count: int, cursor_addr: str = None):
async with AsyncClient(rpc_endpoint) as sol_client:
ssc_signatures = await sol_client.get_confirmed_signature_for_address2(
SSC_ADDR,
limit=batch_count,
before=cursor_addr
)
result = []
if len(ssc_signatures["result"]) != 0:
result = ssc_signatures["result"]
return result
async def fetch_all_txns_for_ssc(rpc_endpoint: str, batch_count: int = 500) -> pd.DataFrame:
ts_start = datetime.now()
txn_list = []
cursor_addr = None
while True:
txns = await fetch_txn_batch(rpc_endpoint=rpc_endpoint,
batch_count=batch_count,
cursor_addr=cursor_addr)
if len(txns) == 0:
ts_end = datetime.now()
break
cursor_addr = txns[-1]["signature"]
print(f"Last txn: {cursor_addr} | Len: {len(txns)}")
txn_list.append(txns)
final_txn_list = list(chain.from_iterable(txn_list))
print(f"{len(final_txn_list)} signatures have been fetched in {fetch_time_diff(ts_start=ts_start, ts_end=ts_end)} secs")
return final_txn_list
results = await fetch_all_txns_for_ssc(rpc_endpoint=SSC_RPC_ENDPOINT)
###Output
Last txn: eTTkwqKyAwUaAXtWiwr1v9EaQ1NhabjNmDE64Bc7AEdE49eaY9udCkBywFjhTfiFvK16nocbv5pGqx3hb8d4siW | Len: 500
Last txn: 5VM3meV3RHiHMXp8YEr3AEzE6euKhPhbxADgFGu5uLCE34pYtY62KbDH8GWkkdVXPxwArNYzcJxwPJxcQycARTHN | Len: 500
Last txn: 65T2gWyYaeh9Qjb8yDDKzKhgSQcAG8DPv3CRzXLibgqvQ4wR7bRXXxE38QfDWwtdjjuSLnR1SQgnG1ntj8PQBssp | Len: 500
Last txn: 65rTamEzboobANuQAXbfJFPGZtATRZGa5kVCSbPJJNbYzK32HXvoe3gYFSEpiXb23D1hDYrxLWw1RK8dBdVoaeZ7 | Len: 500
2000 signatures have been fetched in 12.22 secs
###Markdown
Fetching txn details
###Code
results[0]
###Output
_____no_output_____
###Markdown
1 SOL = 1_000_000_000 lamports
###Code
def convert_lamport_to_sol(amount: float):
return amount / 1_000_000_000
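# Quick usage check of the helper above: 1.5 SOL expressed in lamports converts back to 1.5.
print(convert_lamport_to_sol(1_500_000_000))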
async def fetch_raw_txn_details(txn: str, rpc_endpoint: str=SSC_RPC_ENDPOINT):
async with AsyncClient(rpc_endpoint) as sol_client:
return await sol_client.get_transaction(txn)
deets = await fetch_raw_txn_details(results[850]["signature"], rpc_endpoint=SSC_RPC_ENDPOINT)
post = deets["result"]["meta"]["postBalances"]
pre = deets["result"]["meta"]["preBalances"]
[convert_lamport_to_sol(bals[0] - bals[1]) for bals in list(zip(pre, post))]
class SscNftMetadata(BaseModel):
seller: str
purchaser: str
selling_price: float
seller_cut: float
marketplace: str
marketplace_contract_addr: str
nft_addr: str
nft_token_id: str # eg. Token id == SSC#4009
nft_image_url: str
nft_traits: Any
nft_marketplace_url: str
async def fetch_token_info(nft_addr: str):
resp = requests.get(f"{marketplace_fetch_id_url}/{nft_addr}")
print(resp.url)
if resp.status_code != 200:
return None
nft_metadata = resp.json()["results"]
return nft_metadata["img"], nft_metadata["attributes"], nft_metadata["title"], nft_metadata["content"]
async def fetch_txn_details(txn: str, rpc_endpoint: str=SSC_RPC_ENDPOINT):
formatted_details = {}
async with AsyncClient(rpc_endpoint) as sol_client:
txn_details = await sol_client.get_transaction(txn)
nft_addr = txn_details["result"]["meta"]["postTokenBalances"][0]["mint"]
pre_balances = txn_details["result"]["meta"]["preBalances"]
post_balances = txn_details["result"]["meta"]["postBalances"]
actual_balances = [convert_lamport_to_sol(bals[0] - bals[1]) for bals in list(zip(pre_balances, post_balances))]
addrs = txn_details["result"]["transaction"]["message"]["accountKeys"]
balance_list = list(zip(addrs, actual_balances))
nft_metadata = await fetch_token_info(nft_addr=nft_addr)
marketplace_name = marketplace_mapper[balance_list[-1][0]]
metadata = SscNftMetadata(
seller=balance_list[2][0],
purchaser=balance_list[0][0],
selling_price=balance_list[0][1],
seller_cut=-1 * (balance_list[2][1]),
marketplace=marketplace_name,
marketplace_contract_addr=balance_list[-1][0],
nft_addr=nft_addr,
nft_token_id=nft_metadata[2],
nft_image_url=nft_metadata[0],
nft_traits=nft_metadata[1],
nft_marketplace_url=f"{marketplace_token_url_mapper[marketplace_name]}{nft_addr}"
)
return metadata.dict()
await fetch_txn_details(results[850]["signature"], rpc_endpoint=SSC_RPC_ENDPOINT)
await fetch_txn_details(results[100]["signature"], rpc_endpoint=SSC_RPC_ENDPOINT)
await fetch_txn_details(results[1998]["signature"], rpc_endpoint=SSC_RPC_ENDPOINT)
await fetch_txn_details(results[200]["signature"], rpc_endpoint=SSC_RPC_ENDPOINT)
await fetch_txn_details(results[10]["signature"], rpc_endpoint=SSC_RPC_ENDPOINT)
await fetch_txn_details(results[0]["signature"], rpc_endpoint=SSC_RPC_ENDPOINT)
await fetch_txn_details("5yZNYqNi13NDTzW5BdDHucJpeG6SumL1y4G3AwfBiTgVdJpb1d12RmaisSM7yi3r1nHuaLLmQQHvLwkSrzr3Z5TG")
###Output
_____no_output_____
###Markdown
Get all mint addressesAbove method to get all txns is probably very great to fetch txns, but inorder to get all the mints we need to use the `getProgramAccounts` function
###Code
await sol_client.get_program_accounts(SSC_ADDR)
# url="https://api.mainnet-beta.solana.com"
# url="https://solana-api.projectserum.com"
METADATA_PROGRAM_ID="metaqbxxUerdq28cj1RbAWkYQm3ybzjb6a8bt518x1s"
MAX_NAME_LENGTH = 32;
MAX_URI_LENGTH = 200;
MAX_SYMBOL_LENGTH = 10;
MAX_CREATOR_LEN = 32 + 1 + 1;
creatorIndex=0
creatorAddress="Bhr9iWx7vAZ4JDD5DVSdHxQLqG9RvCLCSXvu6yC4TF6c"
payload=json.dumps({
"jsonrpc":"2.0",
"id":1,
"method":"getProgramAccounts",
"params":[
METADATA_PROGRAM_ID,
{
"encoding":"jsonParsed",
"filters":[
{
"memcmp": {
"offset":
1 + # key
32 + # update auth
32 + # mint
4 + # name string length
MAX_NAME_LENGTH + # name
4 + # uri string length
MAX_URI_LENGTH + # uri
4 + # symbol string length
MAX_SYMBOL_LENGTH + # symbol
2 + # seller fee basis points
1 + # whether or not there is a creators vec
4 + # creators vec length
creatorIndex * MAX_CREATOR_LEN,
"bytes": creatorAddress
},
},
],
}
]
})
headers={"content-type":"application/json","cache-control":"no-cache"}
response=requests.request("POST", SSC_RPC_ENDPOINT,data=payload,headers=headers)
data=response.json()["result"]
print("total records:",len(data))
data
# pubkey: update mint authority on solscan (go to any token and check the key)
#
memcmp_opts = [MemcmpOpts(offset=4, bytes="Bhr9iWx7vAZ4JDD5DVSdHxQLqG9RvCLCSXvu6yC4TF6c")]
data = await sol_client.get_program_accounts(pubkey="Bhr9iWx7vAZ4JDD5DVSdHxQLqG9RvCLCSXvu6yC4TF6c",
encoding="jsonParsed",
memcmp_opts=memcmp_opts)
from solana.rpc.types import TokenAccountOpts
await sol_client.get_token_accounts_by_owner("Bhr9iWx7vAZ4JDD5DVSdHxQLqG9RvCLCSXvu6yC4TF6c", opts=[TokenAccountOpts(mintprogram_id="metaqbxxUerdq28cj1RbAWkYQm3ybzjb6a8bt518x1s")])
data
###Output
_____no_output_____ |
notebooks/RMTPP.ipynb | ###Markdown
Hidden Layer: $h_j = \max{(W^{y}y_{j} + W^{t}t_{j} + W^{h}h_{j-1} + b_{h},0)} $ Marker Generation: $P(y_{j+1}=k\mid h_{j}) = \frac{\exp(V_{k,:}^{y}h_{j} + b_{k}^{y})}{\sum_{k=1}^{K} \exp(V_{k,:}^{y}h_{j} + b_{k}^{y})} = \sigma(z)_{k}$ where $\sigma$ is softmax function and $z$ is the vector $ V^{y}h_{j} + b^{y}$ Conditional Density: $f^{*}(t) = \exp\{{v^{t}}^\top.h_{j} + w^t(t-t_{j}) + b^{t} + \frac{1}{w^t}\exp({v^{t}}^\top.h_{j} + b^{t}) -\frac{1}{w^t}\exp({v^{t}}^\top.h_{j} + w^t(t-t_{j}) + b^{t} )\} $
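Next-event time: the quantity returned by the `next_time` method below is (up to the numerical approximation) the expected next arrival time $\hat{t}_{j+1} = \int_{t_j}^{\infty} t\,f^{*}(t)\,dt$, evaluated with a trapezoidal rule over a finite grid of elapsed times.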
###Code
class Rmtpp(nn.Module):
def __init__(self,hidden_size=32):
self.hidden_size = hidden_size
self.N = 1000
#marker_dim equals to time_dim
super(Rmtpp, self).__init__()
process_dim = 2
marker_dim = 2
#linear transformation
self.lin_op = nn.Linear(process_dim+1+hidden_size,hidden_size)
self.vt = nn.Linear(hidden_size,1)
#embedding
self.embedding = nn.Embedding(process_dim+1, 2, padding_idx=process_dim)
#weights
self.w_t = torch.rand(1, requires_grad=True)
self.V_y = torch.rand(self.hidden_size, requires_grad=True) #marker dim = number of markers
self.b_y = torch.rand(self.hidden_size, requires_grad=True) #bias
#compute integral of t*fstart(t) between tj and +infinity using trapezoidal rule
def next_time(self,tj,hj):
umax = 40 #maximum time
Deltat = umax/self.N
dt = torch.linspace(0, umax, self.N+1)
df = dt * self.fstart(dt, hj)
#normalization factor
integrand_ = ((df[1:] + df[:-1]) * 0.5) * Deltat
integral_ = torch.sum(integrand_)
return tj + integral_
#compute the function fstar
def fstart(self, u, hj):
ewt = -torch.exp(self.w_t)
eiwt = -torch.exp(-self.w_t)
return (
torch.exp(self.vt(hj) + ewt*u
- eiwt * torch.exp(self.vt(hj) + ewt*u) + eiwt*torch.exp(self.vt(hj)))
)
def proba_distribution(self,hj):
soft_max = nn.Softmax(dim=-1) #softmax over marker types, matching P(y_{j+1}=k | h_j) above
return soft_max(hj @ self.V_y.t() + self.b_y)
def forward(self, time, marker, hidden_state):
#First compute next time
tj = time
time = self.next_time(time,hidden_state).unsqueeze(-1)
fstar = self.fstart(time-tj,hidden_state)
#Then next marker distribution
prob = self.proba_distribution(hidden_state)
#marker = marker.float()
marker = self.embedding(marker)
input_ = torch.cat((marker, time, hidden_state), dim=1)
hidden_state = F.relu(self.lin_op(input_))
return prob, fstar, hidden_state
def log_likelihood(self,time_sequence,marker_sequence):
marker_ = marker_sequence
time_sequence = time_sequence.unsqueeze(-1)
marker_sequence = marker_sequence.unsqueeze(-1)
loss = 0
hidden_state = torch.zeros(1, self.hidden_size)
for i in range(time_sequence.shape[0]-1):
time = time_sequence[i]
marker__ = marker_[i+1]
marker = marker_sequence[i]
prob, fstar, hidden_state= self(time,marker,hidden_state)
loss += torch.log(prob[:, marker__]) + torch.log(fstar)
return -1 * loss
def train(self, seq_times, seq_types, seq_lengths, optimizer,batch_size=20, epochs=10):
#seq_times: time sequences
#seq_types: type sequences
#seq_length: keep track of length of each sequences
n_sequences = seq_times.shape[0]
epoch_range = range(epochs)
losses = []
max_seq_length = seq_times.shape[1]
max_batch = max_seq_length - batch_size + 1
for e in epoch_range:
print("epochs {} ------".format(e+1))
epoch_losses = []
loss = torch.zeros(1, 1)
optimizer.zero_grad()
for batch in range(max_batch):
print("loss: {}".format(loss.item()))
print("batch number", batch)
for i in range(n_sequences):
length = seq_lengths[i]
current_batch_size = length-batch_size+1
if(current_batch_size > batch_size):
evt_times = seq_times[i,batch:batch+ batch_size]
evt_types = seq_types[i,batch:batch+ batch_size]
ll = self.log_likelihood(evt_times, evt_types)
loss += ll
loss.backward(retain_graph=True)
optimizer.step()
epoch_losses.append(loss.item())
print("loss: {}".format(loss.item()))
losses.append(np.mean(epoch_losses))
print("end -------------")
return losses
def train2(self, seq_times, seq_types, seq_lengths, optimizer,batch_size=20, epochs=10):
n_sequences = len(seq_times)
epoch_range = range(epochs)
losses = []
max_seq_length = len(seq_times)
max_batch = max_seq_length - batch_size + 1
for e in epoch_range:
print("epochs {} ------".format(e+1))
epoch_losses = []
loss = torch.zeros(1, 1)
optimizer.zero_grad()
for batch in range(max_batch):
#print("loss: {}".format(loss.item()))
print("batch number", batch)
evt_times = seq_times[batch:batch+ batch_size]
evt_types = seq_types[batch:batch+ batch_size]
ll = self.log_likelihood(evt_times, evt_types)
loss += ll
loss.backward(retain_graph=True)
optimizer.step()
epoch_losses.append(loss.item())
print("loss: {}".format(loss.item()))
losses.append(np.mean(epoch_losses))
print("end -------------")
return losses
def predict(self, seq_times, seq_types, seq_lengths):
last_times = []
n_sequences = seq_times.shape[0]
for i in range(n_sequences):
length = seq_lengths[i]
evt_times = seq_times[i, :length]
evt_types = seq_types[i, :length]
time_sequence = evt_times.unsqueeze(-1)
marker_sequence = evt_types.unsqueeze(-1)
hidden_state = torch.zeros(1, self.hidden_size)
for j in range(time_sequence.shape[0]-1):
time = time_sequence[j]
marker = marker_sequence[j]
# forward returns the marker distribution, f*(t) and the updated hidden state
prob, fstar, hidden_state = self(time, marker, hidden_state)
last_times.append(time) #time is the last predicted time at the end of the loop
return last_times
%matplotlib inline
###Output
_____no_output_____
###Markdown
Training:
###Code
import tqdm
import torch.optim as optim
import numpy as np
rnn = Rmtpp()
learning_rate = 0.002
optimizer = optim.Adam(rnn.parameters(), lr=learning_rate)
seq_times_train = seq_times[:128]
seq_types_train = seq_types[:128]
seq_lengths_train = seq_lengths[:128]
#print(seq_times_train.shape)
seq_times_pred = seq_times[128:]
seq_types_pred = seq_types[128:]
seq_lengths_pred = seq_lengths[128:]
losses = rnn.train(seq_times_train, seq_types_train, seq_lengths_train,optimizer)
one_seq_times_train = seq_times[30,:]
one_seq_types_train = seq_types[30,:]
one_seq_lengths_train = seq_lengths[30,:]
losses = rnn.train2(seq_times_train, seq_types_train, seq_lengths_train,optimizer)
plt.figure(figsize=(8, 5), dpi=100)
plt.plot(range(1,11),losses, linestyle = "--",marker='.',markerfacecolor='red', markersize=10)
plt.xlabel("epochs")
plt.ylabel("loss")
plt.title("loss function of RMTPP model")
predicted_time = rnn.predict(seq_times_pred,seq_types_pred,seq_lengths_pred)
import numpy as np
#t = torch.linspace(0,100)
t = torch.linspace(0,5,100000)
tj = torch.tensor([0.1567])
hj = torch.rand(rnn.hidden_size) # a hidden state with the size the model expects
marker = torch.tensor([0.])
# evaluate f*(u) on the grid of elapsed times since tj and normalize it numerically below
fstar = rnn.fstart(t, hj).detach().numpy()
delta = t.numpy()[1] - t.numpy()[0]
integrale_ = np.sum(fstar[1:] + fstar[:-1])/2 * delta
fstar = fstar/integrale_
import matplotlib.pyplot as plt
integrale__ = np.sum(fstar[1:] + fstar[:-1])/2 * delta
print(integrale__)
plt.plot(t.numpy(),fstar, linestyle = "--")
###Output
0.9999999568648903
###Markdown
Tick.Hawkes
###Code
from tick.plot import plot_point_process
from tick.hawkes import SimuHawkes, HawkesKernelSumExp
import matplotlib.pyplot as plt
###Output
/anaconda3/lib/python3.6/site-packages/tick/base/__init__.py:18: UserWarning: matplotlib.pyplot as already been imported, this call will have no effect.
matplotlib.use('Agg')
###Markdown
1 dimensional Hawkes process simulation using tick
###Code
run_time = 40
hawkes = SimuHawkes(n_nodes=1, end_time=run_time, verbose=False, seed=1398)
kernel1 = HawkesKernelSumExp([.1, .2, .1], [1., 3., 7.])
hawkes.set_kernel(0, 0, kernel1)
hawkes.set_baseline(0, 1.)
dt = 0.01
hawkes.track_intensity(dt)
hawkes.simulate()
timestamps = hawkes.timestamps
intensity = hawkes.tracked_intensity
intensity_times = hawkes.intensity_tracked_times
_, ax = plt.subplots(1, 2, figsize=(16, 4))
plot_point_process(hawkes, n_points=50000, t_min=0, max_jumps=20, ax=ax[0])
plot_point_process(hawkes, n_points=50000, t_min=2, t_max=20, ax=ax[1])
def simulate_timestamps(end_time):
# simulation 2 types of event for exemple selling or buying
hawkes = SimuHawkes(n_nodes=2, end_time=end_time, verbose=False, seed=1398)
kernel = HawkesKernelSumExp([.1, .2, .1], [1., 3., 7.])
kernel1 = HawkesKernelSumExp([.2, .3, .1], [1., 3., 7.])
hawkes.set_kernel(0, 0, kernel)
hawkes.set_kernel(0, 1, kernel)
hawkes.set_kernel(1, 0, kernel)
hawkes.set_kernel(1, 1, kernel)
hawkes.set_baseline(0, .8)
hawkes.set_baseline(1, 1.)
dt = 0.1
hawkes.track_intensity(dt)
hawkes.simulate()
timestamps = hawkes.timestamps
t0 = timestamps[0]
t1 = timestamps[1]
t = []
marker = []
n0 = len(t0)
n1 = len(t1)
i = 0
j = 0
while(i<n0 and j<n1):
if(t0[i]<t1[j]):
t.append(t0[i])
marker.append(0)
i += 1
else:
t.append(t1[j])
marker.append(1)
j += 1
if(i==n0):
for k in range(n0,n1):
t.append(t1[k])
marker.append(1)
else:
for k in range(n1,n0):
t.append(t0[k])
marker.append(0)
return t,marker
###Output
_____no_output_____
###Markdown
Training with simulated hawkes
###Code
import pickle
import os
import glob
import sys
# Add parent dir to interpreter path
nb_dir = os.path.split(os.getcwd())[0]
print("Notebook dir {:}".format(nb_dir))
if nb_dir not in sys.path:
sys.path.append(nb_dir)
print("Python interpreter path:")
for path in sys.path:
print(path)
SYNTH_DATA_FILES = glob.glob('../data/simulated/*.pkl')
print(SYNTH_DATA_FILES)
chosen_data_file = SYNTH_DATA_FILES[0]
print(chosen_data_file)
from utils.load_synth_data import process_loaded_sequences, one_hot_embedding
# Load data simulated using tick
print("Loading 2-dimensional Hawkes data.")
with open(chosen_data_file, "rb") as f:
loaded_hawkes_data = pickle.load(f)
print(loaded_hawkes_data.keys())
tmax = loaded_hawkes_data['tmax']
seq_times, seq_types, seq_lengths = process_loaded_sequences(
loaded_hawkes_data, 2, tmax)
torch.min(seq_lengths)
###Output
_____no_output_____ |
notebooks/lanthanum-small/[eda] Plots.ipynb | ###Markdown
Exploratory plots
###Code
import datetime
import calendar
import pprint
import json
import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams['figure.figsize'] = 12, 4
###Output
_____no_output_____
###Markdown
Load project
###Code
project_folder = '../../datasets/lanthanum-small/'
df = pd.read_csv(project_folder + 'flow1.csv', parse_dates=['time'])
flow = df.set_index('time')['flow'].fillna(0)
flow.head()
###Output
_____no_output_____
###Markdown
Helper functions
###Code
def plot_days(start_day, end_day):
"""Plot series for specific data range"""
df = flow[pd.Timestamp(start_day): pd.Timestamp(end_day)]
df.plot()
plt.show()
###Output
_____no_output_____
###Markdown
Plot the whole series
###Code
flow.plot()
plt.show()
###Output
_____no_output_____
###Markdown
From the plot it looks that we might have some outliers here. But it is also possible that the bigger values are correct responses to the rain we need to look closer to this data. Explore november 2013
###Code
plot_days('2014-11-01', '2014-12-01')
plot_days('2014-11-01', '2014-11-03')
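# A quick hedged check, not in the original notebook: how many observations sit above a
# simple IQR-based upper fence, to see whether the large values are isolated spikes or a
# broader pattern. 'flow' is the series loaded above.
q1, q3 = flow.quantile(0.25), flow.quantile(0.75)
upper_fence = q3 + 1.5 * (q3 - q1)
print('Share of points above the IQR upper fence: {:.2%}'.format((flow > upper_fence).mean()))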
###Output
_____no_output_____
###Markdown
Lets check some flow in Spring
###Code
plot_days('2016-06-01', '2016-07-01')
plot_days('2016-06-11', '2016-06-13')
###Output
_____no_output_____
###Markdown
And some in Spring
###Code
plot_days('2016-02-01', '2016-02-28')
###Output
_____no_output_____ |
Mod3.ipynb | ###Markdown
We appreciated your help in stepping in during a bit of an exigent situation. We have a bit calmer of a task for you and one suited to a "Sun Devil." We have some basic crime data for Phoenix (here (Links to an external site.)) and we need to make better sense of it. We want to know where different kinds of crimes are occurring, in which areas crime is growing fastest (or dropping fastest), and whether certain crimes are more common in certain areas of the city. Basically, we don't need maps or anything at this stage, just some data grouped by location (either the type of location or the zip codes) and some trend data.I mean, if you have the time for a bit of a challenge, we would love for you to bring in some other data to help draw a better picture around this. Are there some factors that affect the crime rate? If there are, we could see if there were ways to see where crime was more likely. We might even ask you to head up our new Pre-Crime unit in the Valley. For this badge you will be working exclusively with an openly available collection of crime stats for the Phoenix area (Links to an external site.). You need to import this data and make some sense of it. That might include some combination of: Grouping crimes by location type or by zip code (or groups of zip codes). Or, on the contrary, looking at types of crimes and where they are most common. Would be good to know which areas have the fastest growing and shrinking crime rates. Might even be worth grouping crimes by violent and non-violent?This is a fairly simple set of data and can be cut in a range of ways. Think about what useful (and "actionable") data might be of interest here, from the perspective of law enforcement or of residents, and how to group and gather this appropriately.
###Code
# import necessary packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import zipfile
import csv
# imported seaborn as I wanted to make my data pretty. Was not successful but keeping this here so I can try again
# later
import seaborn as sns
sns.set(color_codes=True)
# this is the color palette I want to use when I sucessfully use seaborn
sns.color_palette("tab10")
###Output
_____no_output_____
###Markdown
Adding a link to the Stack Overflow thread so I can reference it later: https://stackoverflow.com/questions/18016037/pandas-parsererror-eof-character-when-reading-multiple-csv-files-to-hdf5 Added quoting=csv.QUOTE_NONE because I was getting a parsing error. Per Stack Overflow, this bypasses the error by not treating quote characters specially.
###Code
crimes = pd.read_csv('crimestat.csv', quoting=csv.QUOTE_NONE)
# renaming column titles to remove quotation marks
crimes.rename(columns={'"OCCURRED ON"':'Occurred_On', '"OCCURRED TO"':'Occurred_To', '"ZIP"':'Zip',
'"UCR CRIME CATEGORY"':'Crime_Category', '"100 BLOCK ADDR"':'Address',
'"PREMISE TYPE"':'Home_Type','"INC NUMBER"':'Inc_Number'}, inplace=True)
###Output
_____no_output_____
###Markdown
Link that showed how to map for me to reference in the future:https://stackoverflow.com/questions/67039036/changing-category-names-in-a-pandas-data-frame
###Code
# define the replacements first, then apply them to strip the quotation marks around the crime types
mapping = {r'"LARCENY-THEFT"':'Larceny_Theft',r'"BURGLARY"':'Burglary',r'"MOTOR VEHICLE THEFT"':'Motor_Vehicle_Theft',
r'"DRUG OFFENSE"':'Drug_Offense',r'"AGGRAVATED ASSAULT"':'Aggravated_Assault',r'"ROBBERY"':'Robbery',
r'"RAPE"':'Rape', r'"ARSON"':'Arson', r'"MURDER AND NON-NEGLIGENT MANSLAUGHTER"':'Murder_Manslaughter'}
crimes['Crime_Category'] = crimes['Crime_Category'].replace(mapping, regex=True)
# this shows me how much data is in each column so I can determine what to group
crimes.nunique()
# this sorts by the "Inc Number"
crimes.sort_index(inplace=True)
# this shows how many crimes fall into each category
crimes.Crime_Category.value_counts()
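# Hedged sketch, not in the original notebook: a simple trend view of monthly crime
# counts. Because the file was read with QUOTE_NONE, the date strings still carry
# literal quote characters, so they are stripped before parsing; the exact datetime
# format in the source file is an assumption, hence errors='coerce'.
crimes['Occurred_On_dt'] = pd.to_datetime(crimes['Occurred_On'].str.strip('"'), errors='coerce')
crimes.dropna(subset=['Occurred_On_dt']).set_index('Occurred_On_dt').resample('M').size().plot(title='Crimes per month')
plt.show()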
###Output
_____no_output_____
###Markdown
df['Label'] = df['Label'].astype('category') Series.cat.rename_categories: df['Label'] = df['Label'].cat.rename_categories({'zero': 0,
###Code
# this shows how many crimes occured in each zip code. Most interested in 85015 because it has the most.
crimes.Zip.value_counts()
# this shows the number of crimes by premise type
crimes.Home_Type.value_counts()
# grouping data by zip code
zip_code_group = crimes.groupby('Zip')
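# Hedged sketch, not in the original notebook: a zip-by-category count table, which
# directly answers whether certain crimes are more common in certain areas; it uses the
# Zip and cleaned Crime_Category columns created above.
zip_by_category = pd.crosstab(crimes['Zip'], crimes['Crime_Category'])
print(zip_by_category.head())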
# this shows the crimes in the zip code 85015, which had the most occurances
zip_code_group.get_group('"85015"')
# grouping by home type to get more insight
home_type_group = crimes.groupby('Home_Type')
# chose to see this group as single family house had the highest number of crimes
home_type_group.get_group('"SINGLE FAMILY HOUSE"')
# second highest number of crimes within premise type category
home_type_group.get_group('"APARTMENT"')
# sns is for seaborn, plot shows the number of crimes by zip code.
# (Future Mali, look up how to make the graph more readable)
sns.histplot(crimes['Zip'])
# bar that show the number of crimes by category
sns.displot(crimes['Crime_Category'])
# scatter plot shows the number of each crime type by zip code
sns.scatterplot(x=crimes['Zip'], y=crimes['Crime_Category'])
###Output
_____no_output_____ |
Task B/Training/Bert-Base Approaches/bert_tweet_+_roberta_mixout+_4_8_extra_dropout_middle_layers_kim_cnn+_f1_macro_BCE_loss.ipynb | ###Markdown
Main imports and code
###Code
# check which gpu we're using
!nvidia-smi
!pip install transformers
!pip install pytorch-ignite
# Any results you write to the current directory are saved as output.
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
from transformers import BertTokenizer,BertModel
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader,Dataset
from torch.nn.utils.rnn import pack_padded_sequence
from torch.optim import AdamW
from tqdm import tqdm
from argparse import ArgumentParser
from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import Accuracy, Loss
from ignite.engine.engine import Engine, State, Events
from ignite.handlers import EarlyStopping
from ignite.contrib.handlers import TensorboardLogger, ProgressBar
from ignite.utils import convert_tensor
from torch.optim.lr_scheduler import ExponentialLR
import warnings
warnings.filterwarnings('ignore')
import os
import gc
import copy
import time
import random
import string
# For data manipulation
import numpy as np
import pandas as pd
# Pytorch Imports
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
from torch.utils.data import Dataset, DataLoader
# Utils
from tqdm import tqdm
from collections import defaultdict
# Sklearn Imports
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import StratifiedKFold, KFold
from transformers import AutoTokenizer, AutoModel, AdamW
!pip install sentencepiece
import random
import os
from urllib import request
from google.colab import drive
drive.mount('/content/drive')
df=pd.read_csv('/content/drive/MyDrive/ISarcasm/DataSet/train.En.csv')
df=df.loc[df['sarcastic']==1]
df=df[['tweet','sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']]
train, validate, test = \
np.split(df.sample(frac=1, random_state=42),
[int(.6*len(df)), int(.8*len(df))])
train=pd.concat([train, validate], ignore_index=True)
# tedf1.to_csv('/content/drive/MyDrive/PCL/test_task_1',index=False)
# trdf1.to_csv('/content/drive/MyDrive/PCL/train_task_1',index=False)
###Output
_____no_output_____
###Markdown
RoBERTa Baseline for Task 1
###Code
import numpy as np
from sklearn.metrics import classification_report, accuracy_score, f1_score, confusion_matrix, precision_score , recall_score
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer, BertTokenizer
from transformers.data.processors import SingleSentenceClassificationProcessor
from transformers import Trainer , TrainingArguments
from transformers.trainer_utils import EvaluationStrategy
from transformers.data.processors.utils import InputFeatures
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
!pip install datasets
class PCLTrainDataset(Dataset):
def __init__(self, df, tokenizer, max_length,displacemnt):
self.df = df
self.max_len = max_length
self.tokenizer = tokenizer
self.text = df['tweet'].values
self.label=df[['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']].values
def __len__(self):
return len(self.df)
def __getitem__(self, index):
text = self.text[index]
# summary = self.summary[index]
inputs_text = self.tokenizer.encode_plus(
text,
truncation=True,
add_special_tokens=True,
max_length=self.max_len,
padding='max_length'
)
target = self.label[index]
text_ids = inputs_text['input_ids']
text_mask = inputs_text['attention_mask']
return {
'text_ids': torch.tensor(text_ids, dtype=torch.long),
'text_mask': torch.tensor(text_mask, dtype=torch.long),
'target': torch.tensor(target, dtype=torch.float)
}
import math
def sigmoid(x):
return 1/(1+math.exp(-x))
from typing import Tuple
import torch
class F1Score:
"""
Class for f1 calculation in Pytorch.
"""
def __init__(self, average: str = 'weighted'):
"""
Init.
Args:
average: averaging method
"""
self.average = average
if average not in [None, 'micro', 'macro', 'weighted']:
raise ValueError('Wrong value of average parameter')
@staticmethod
def calc_f1_micro(predictions: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
"""
Calculate f1 micro.
Args:
predictions: tensor with predictions
labels: tensor with original labels
Returns:
f1 score
"""
true_positive = torch.eq(labels, predictions).sum().float()
f1_score = torch.div(true_positive, len(labels))
return f1_score
@staticmethod
def calc_f1_count_for_label(predictions: torch.Tensor,
labels: torch.Tensor, label_id: int) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Calculate f1 and true count for the label
Args:
predictions: tensor with predictions
labels: tensor with original labels
label_id: id of current label
Returns:
f1 score and true count for label
"""
# label count
true_count = torch.eq(labels, label_id).sum()
# true positives: labels equal to prediction and to label_id
true_positive = torch.logical_and(torch.eq(labels, predictions),
torch.eq(labels, label_id)).sum().float()
# precision for label
precision = torch.div(true_positive, torch.eq(predictions, label_id).sum().float())
# replace nan values with 0
precision = torch.where(torch.isnan(precision),
torch.zeros_like(precision).type_as(true_positive),
precision)
# recall for label
recall = torch.div(true_positive, true_count)
# f1
f1 = 2 * precision * recall / (precision + recall)
# replace nan values with 0
f1 = torch.where(torch.isnan(f1), torch.zeros_like(f1).type_as(true_positive), f1)
return f1, true_count
def __call__(self, predictions: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
"""
Calculate f1 score based on averaging method defined in init.
Args:
predictions: tensor with predictions
labels: tensor with original labels
Returns:
f1 score
"""
# simpler calculation for micro
if self.average == 'micro':
return self.calc_f1_micro(predictions, labels)
f1_score = 0
for label_id in range(1, len(labels.unique()) + 1):
f1, true_count = self.calc_f1_count_for_label(predictions, labels, label_id)
if self.average == 'weighted':
f1_score += f1 * true_count
elif self.average == 'macro':
f1_score += f1
if self.average == 'weighted':
f1_score = torch.div(f1_score, len(labels))
elif self.average == 'macro':
f1_score = torch.div(f1_score, len(labels.unique()))
return f1_score
class Recall_Loss(nn.Module):
    '''Recall-weighted BCE loss: per-class recall is computed from the batch and used to
    weight BCEWithLogitsLoss, so classes with low recall get a larger weight. Can work with gpu tensors
The original implmentation is written by Michal Haltuf on Kaggle.
Returns
-------
torch.Tensor
`ndim` == 1. epsilon <= val <= 1
Reference
---------
- https://www.kaggle.com/rejpalcz/best-loss-function-for-f1-score-metric
- https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html#sklearn.metrics.f1_score
- https://discuss.pytorch.org/t/calculating-precision-recall-and-f1-score-in-case-of-multi-label-classification/28265/6
- http://www.ryanzhang.info/python/writing-your-own-loss-function-module-for-pytorch/
'''
def __init__(self, epsilon=1e-7):
super().__init__()
self.epsilon = epsilon
def forward(self, y_pred, y_true,):
# assert y_pred.ndim == 2
# assert y_true.ndim == 1
# print(y_pred.shape)
# print(y_true.shape)
# y_pred[y_pred<0.5]=0
# y_pred[y_pred>=0.5]=0
y_true_one_hot = y_true.to(torch.float32)
# y_pred_one_hot = F.one_hot(y_pred.to(torch.int64), 2).to(torch.float32)
tp = (y_true_one_hot * y_pred).sum(dim=0).to(torch.float32)
tn = ((1 - y_true_one_hot) * (1 - y_pred)).sum(dim=0).to(torch.float32)
fp = ((1 - y_true_one_hot) * y_pred).sum(dim=0).to(torch.float32)
fn = (y_true_one_hot * (1 - y_pred)).sum(dim=0).to(torch.float32)
precision = tp / (tp + fp + self.epsilon)
recall = tp / (tp + fn + self.epsilon)
# f1 = 2* (precision*recall) / (precision + recall + self.epsilon)
# f1 = f1.clamp(min=self.epsilon, max=1-self.epsilon)
# f1=f1.detach()
# print(f1.shape)
# y_pred=y_pred.reshape((y_pred.shape[0], 1))
# y_true=y_true.reshape((y_true.shape[0], 1))
# p1=y_true*(math.log(sigmoid(y_pred)))*(1-f1)[1]
# p0=(1-y_true)*math.log(1-sigmoid(y_pred))*(1-f1)[0]
# y_true_one_hot = F.one_hot(y_true.to(torch.int64), 2)
# print(y_pred)
# print(y_true_one_hot)
recall=recall.detach()
# f1= F1Score('macro')(y_pred, y_true_one_hot)
# f1=f1.detach()
CE =torch.nn.BCEWithLogitsLoss(weight=( 1 - recall))(y_pred, y_true_one_hot)
# loss = ( 1 - f1) * CE
return CE
def _squeeze_binary_labels(label):
if label.size(1) == 1:
squeeze_label = label.view(len(label), -1)
else:
inds = torch.nonzero(label >= 1).squeeze()
squeeze_label = inds[:,-1]
return squeeze_label
def cross_entropy(pred, label, weight=None, reduction='mean', avg_factor=None):
# element-wise losses
if label.size(-1) != pred.size(0):
label = _squeeze_binary_labels(label)
loss = F.cross_entropy(pred, label, reduction='none')
# apply weights and do the reduction
if weight is not None:
weight = weight.float()
loss = weight_reduce_loss(
loss, weight=weight, reduction=reduction, avg_factor=avg_factor)
return loss
def _expand_binary_labels(labels, label_weights, label_channels):
bin_labels = labels.new_full((labels.size(0), label_channels), 0)
inds = torch.nonzero(labels >= 1).squeeze()
if inds.numel() > 0:
bin_labels[inds, labels[inds] - 1] = 1
if label_weights is None:
bin_label_weights = None
else:
bin_label_weights = label_weights.view(-1, 1).expand(
label_weights.size(0), label_channels)
return bin_labels, bin_label_weights
def binary_cross_entropy(pred,
label,
weight=None,
reduction='mean',
avg_factor=None):
if pred.dim() != label.dim():
label, weight = _expand_binary_labels(label, weight, pred.size(-1))
# weighted element-wise losses
if weight is not None:
weight = weight.float()
loss = F.binary_cross_entropy_with_logits(
pred, label.float(), weight, reduction='none')
loss = weight_reduce_loss(loss, reduction=reduction, avg_factor=avg_factor)
return loss
def partial_cross_entropy(pred,
label,
weight=None,
reduction='mean',
avg_factor=None):
if pred.dim() != label.dim():
label, weight = _expand_binary_labels(label, weight, pred.size(-1))
# weighted element-wise losses
if weight is not None:
weight = weight.float()
mask = label == -1
loss = F.binary_cross_entropy_with_logits(
pred, label.float(), weight, reduction='none')
if mask.sum() > 0:
loss *= (1-mask).float()
avg_factor = (1-mask).float().sum()
# do the reduction for the weighted loss
loss = weight_reduce_loss(loss, reduction=reduction, avg_factor=avg_factor)
return loss
def kpos_cross_entropy(pred, label, weight=None, reduction='mean', avg_factor=None):
# element-wise losses
if pred.dim() != label.dim():
label, weight = _expand_binary_labels(label, weight, pred.size(-1))
target = label.float() / torch.sum(label, dim=1, keepdim=True).float()
loss = - target * F.log_softmax(pred, dim=1)
# apply weights and do the reduction
if weight is not None:
weight = weight.float()
loss = weight_reduce_loss(
loss, weight=weight, reduction=reduction, avg_factor=avg_factor)
return loss
class CrossEntropyLoss(nn.Module):
def __init__(self,
use_sigmoid=False,
use_kpos=False,
partial=False,
reduction='mean',
loss_weight=1.0,
thrds=None):
super(CrossEntropyLoss, self).__init__()
assert (use_sigmoid is True) or (partial is False)
self.use_sigmoid = use_sigmoid
self.use_kpos = use_kpos
self.partial = partial
self.reduction = reduction
self.loss_weight = loss_weight
if self.use_sigmoid and thrds is not None:
self.thrds=inverse_sigmoid(thrds)
else:
self.thrds = thrds
if self.use_sigmoid:
if self.partial:
self.cls_criterion = partial_cross_entropy
else:
self.cls_criterion = binary_cross_entropy
elif self.use_kpos:
self.cls_criterion = kpos_cross_entropy
else:
self.cls_criterion = cross_entropy
def forward(self,
cls_score,
label,
weight=None,
avg_factor=None,
reduction_override=None,
**kwargs):
assert reduction_override in (None, 'none', 'mean', 'sum')
reduction = (
reduction_override if reduction_override else self.reduction)
if self.thrds is not None:
cut_high_mask = (label == 1) * (cls_score > self.thrds[1])
cut_low_mask = (label == 0) * (cls_score < self.thrds[0])
if weight is not None:
weight *= (1 - cut_high_mask).float() * (1 - cut_low_mask).float()
else:
weight = (1 - cut_high_mask).float() * (1 - cut_low_mask).float()
loss_cls = self.loss_weight * self.cls_criterion(
cls_score,
label,
weight,
reduction=reduction,
avg_factor=avg_factor,
**kwargs)
return loss_cls
def inverse_sigmoid(Y):
X = []
for y in Y:
y = max(y,1e-14)
if y == 1:
x = 1e10
else:
x = -np.log(1/y-1)
X.append(x)
return X
class ResampleLoss(nn.Module):
def __init__(self,
use_sigmoid=True, partial=False,
loss_weight=1.0, reduction='mean',
reweight_func=None, # None, 'inv', 'sqrt_inv', 'rebalance', 'CB'
weight_norm=None, # None, 'by_instance', 'by_batch'
focal=dict(
focal=True,
alpha=0.5,
gamma=2,
),
map_param=dict(
alpha=10.0,
beta=0.2,
gamma=0.1
),
CB_loss=dict(
CB_beta=0.9,
CB_mode='average_w' # 'by_class', 'average_n', 'average_w', 'min_n'
),
logit_reg=dict(
neg_scale=5.0,
init_bias=0.1
),
class_freq=None,
train_num=None):
super(ResampleLoss, self).__init__()
assert (use_sigmoid is True) or (partial is False)
self.use_sigmoid = use_sigmoid
self.partial = partial
self.loss_weight = loss_weight
self.reduction = reduction
if self.use_sigmoid:
if self.partial:
self.cls_criterion = partial_cross_entropy
else:
self.cls_criterion = binary_cross_entropy
else:
self.cls_criterion = cross_entropy
# reweighting function
self.reweight_func = reweight_func
# normalization (optional)
self.weight_norm = weight_norm
# focal loss params
self.focal = focal['focal']
self.gamma = focal['gamma']
self.alpha = focal['alpha'] # change to alpha
# mapping function params
self.map_alpha = map_param['alpha']
self.map_beta = map_param['beta']
self.map_gamma = map_param['gamma']
# CB loss params (optional)
self.CB_beta = CB_loss['CB_beta']
self.CB_mode = CB_loss['CB_mode']
self.class_freq = torch.from_numpy(np.asarray(class_freq)).float().cuda()
self.num_classes = self.class_freq.shape[0]
self.train_num = train_num # only used to be divided by class_freq
# regularization params
self.logit_reg = logit_reg
self.neg_scale = logit_reg[
'neg_scale'] if 'neg_scale' in logit_reg else 1.0
init_bias = logit_reg['init_bias'] if 'init_bias' in logit_reg else 0.0
self.init_bias = - torch.log(
self.train_num / self.class_freq - 1) * init_bias ########################## bug fixed https://github.com/wutong16/DistributionBalancedLoss/issues/8
self.freq_inv = torch.ones(self.class_freq.shape).cuda() / self.class_freq
self.propotion_inv = self.train_num / self.class_freq
def forward(self,
cls_score_,
label,
weight=None,
avg_factor=None,
reduction_override=None,
**kwargs):
assert reduction_override in (None, 'none', 'mean', 'sum')
reduction = (
reduction_override if reduction_override else self.reduction)
weight = self.reweight_functions(label)
cls_score=cls_score_.clone()
cls_score, weight = self.logit_reg_functions(label.float(), cls_score, weight)
if self.focal:
logpt = self.cls_criterion(
cls_score.clone(), label, weight=None, reduction='none',
avg_factor=avg_factor)
# pt is sigmoid(logit) for pos or sigmoid(-logit) for neg
pt = torch.exp(-logpt)
wtloss = self.cls_criterion(
cls_score, label.float(), weight=weight, reduction='none')
alpha_t = torch.where(label==1, self.alpha, 1-self.alpha)
loss = alpha_t * ((1 - pt) ** self.gamma) * wtloss ####################### balance_param should be a tensor
loss = reduce_loss(loss, reduction) ############################ add reduction
else:
loss = self.cls_criterion(cls_score, label.float(), weight,
reduction=reduction)
loss = self.loss_weight * loss
return loss
def reweight_functions(self, label):
if self.reweight_func is None:
return None
elif self.reweight_func in ['inv', 'sqrt_inv']:
weight = self.RW_weight(label.float())
elif self.reweight_func in 'rebalance':
weight = self.rebalance_weight(label.float())
elif self.reweight_func in 'CB':
weight = self.CB_weight(label.float())
else:
return None
if self.weight_norm is not None:
if 'by_instance' in self.weight_norm:
max_by_instance, _ = torch.max(weight, dim=-1, keepdim=True)
weight = weight / max_by_instance
elif 'by_batch' in self.weight_norm:
weight = weight / torch.max(weight)
return weight
def logit_reg_functions(self, labels, logits, weight=None):
if not self.logit_reg:
return logits, weight
if 'init_bias' in self.logit_reg:
logits += self.init_bias
if 'neg_scale' in self.logit_reg:
logits = logits * (1 - labels) * self.neg_scale + logits * labels
if weight is not None:
weight = weight / self.neg_scale * (1 - labels) + weight * labels
return logits, weight
def rebalance_weight(self, gt_labels):
repeat_rate = torch.sum( gt_labels.float() * self.freq_inv, dim=1, keepdim=True)
pos_weight = self.freq_inv.clone().detach().unsqueeze(0) / repeat_rate
# pos and neg are equally treated
weight = torch.sigmoid(self.map_beta * (pos_weight - self.map_gamma)) + self.map_alpha
return weight
def CB_weight(self, gt_labels):
if 'by_class' in self.CB_mode:
weight = torch.tensor((1 - self.CB_beta)).cuda() / \
(1 - torch.pow(self.CB_beta, self.class_freq)).cuda()
elif 'average_n' in self.CB_mode:
avg_n = torch.sum(gt_labels * self.class_freq, dim=1, keepdim=True) / \
torch.sum(gt_labels, dim=1, keepdim=True)
weight = torch.tensor((1 - self.CB_beta)).cuda() / \
(1 - torch.pow(self.CB_beta, avg_n)).cuda()
elif 'average_w' in self.CB_mode:
weight_ = torch.tensor((1 - self.CB_beta)).cuda() / \
(1 - torch.pow(self.CB_beta, self.class_freq)).cuda()
weight = torch.sum(gt_labels * weight_, dim=1, keepdim=True) / \
torch.sum(gt_labels, dim=1, keepdim=True)
elif 'min_n' in self.CB_mode:
min_n, _ = torch.min(gt_labels * self.class_freq +
(1 - gt_labels) * 100000, dim=1, keepdim=True)
weight = torch.tensor((1 - self.CB_beta)).cuda() / \
(1 - torch.pow(self.CB_beta, min_n)).cuda()
else:
raise NameError
return weight
def RW_weight(self, gt_labels, by_class=True):
if 'sqrt' in self.reweight_func:
weight = torch.sqrt(self.propotion_inv)
else:
weight = self.propotion_inv
if not by_class:
sum_ = torch.sum(weight * gt_labels, dim=1, keepdim=True)
weight = sum_ / torch.sum(gt_labels, dim=1, keepdim=True)
return weight
def reduce_loss(loss, reduction):
"""Reduce loss as specified.
Args:
loss (Tensor): Elementwise loss tensor.
reduction (str): Options are "none", "mean" and "sum".
Return:
Tensor: Reduced loss tensor.
"""
reduction_enum = F._Reduction.get_enum(reduction)
# none: 0, elementwise_mean:1, sum: 2
if reduction_enum == 0:
return loss
elif reduction_enum == 1:
return loss.mean()
elif reduction_enum == 2:
return loss.sum()
def weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None):
"""Apply element-wise weight and reduce loss.
Args:
loss (Tensor): Element-wise loss.
weight (Tensor): Element-wise weights.
reduction (str): Same as built-in losses of PyTorch.
avg_factor (float): Avarage factor when computing the mean of losses.
Returns:
Tensor: Processed loss values.
"""
# if weight is specified, apply element-wise weight
if weight is not None:
loss = loss * weight
# if avg_factor is not specified, just reduce the loss
if avg_factor is None:
loss = reduce_loss(loss, reduction)
else:
# if reduction is mean, then average the loss by avg_factor
if reduction == 'mean':
loss = loss.sum() / avg_factor
# if reduction is 'none', then do nothing, otherwise raise an error
elif reduction != 'none':
raise ValueError('avg_factor can not be used with reduction="sum"')
return loss
def binary_cross_entropy(pred,
label,
weight=None,
reduction='mean',
avg_factor=None):
# weighted element-wise losses
if weight is not None:
weight = weight.float()
loss = F.binary_cross_entropy_with_logits(
pred, label.float(), weight, reduction='none')
loss = weight_reduce_loss(loss, reduction=reduction, avg_factor=avg_factor)
return loss
# class PCL_Model_Arch(nn.Module):
# def __init__(self,pre_trained='roberta-large'):
# super().__init__()
# self.bert = AutoModel.from_pretrained(pre_trained, output_hidden_states=True)
# output_channel = 16 # number of kernels
# num_classes = 6 # number of targets to predict
# dropout = 0.2 # dropout value
# embedding_dim = 768 # length of embedding dim
# ks = 3 # three conv nets here
# # input_channel = word embeddings at a value of 1; 3 for RGB images
# input_channel = 4 # for single embedding, input_channel = 1
# # [3, 4, 5] = window height
# # padding = padding to account for height of search window
# # 3 convolutional nets
# self.conv1 = nn.Conv2d(input_channel, output_channel, (3, embedding_dim), padding=(2, 0), groups=4)
# self.conv2 = nn.Conv2d(input_channel, output_channel, (4, embedding_dim), padding=(3, 0), groups=4)
# self.conv3 = nn.Conv2d(input_channel, output_channel, (5, embedding_dim), padding=(4, 0), groups=4)
# # apply dropout
# self.dropout = nn.Dropout(dropout)
# # fully connected layer for classification
# # 3x conv nets * output channel
# self.fc1 = nn.Linear(ks * output_channel, num_classes)
# self.softmax = nn.Sigmoid()
# def forward(self, text_id, text_mask):
# # get the last 4 layers
# outputs= self.bert(text_id, attention_mask=text_mask)
# # all_layers = [4, 16, 256, 768]
# hidden_layers = outputs[-1] # get hidden layers
# hidden_layers = torch.stack(hidden_layers, dim=1)
# x = hidden_layers[:, -4:]
# # x = x.unsqueeze(1)
# # x = torch.mean(x, 0)
# # print(hidden_layers.size())
# torch.cuda.empty_cache()
# x = [F.relu(self.conv1(x)).squeeze(3), F.relu(self.conv2(x)).squeeze(3), F.relu(self.conv3(x)).squeeze(3)]
# x = [F.dropout(i) for i in x]
# # max-over-time pooling; # (batch, channel_output) * ks
# x = [F.max_pool1d(i, i.size(2)).squeeze(2) for i in x]
# # concat results; (batch, channel_output * ks)
# x = torch.cat(x, 1)
# # add dropout
# x = self.dropout(x)
# # generate logits (batch, target_size)
# logit = self.fc1(x)
# torch.cuda.empty_cache()
# return logit
# class PCL_Model_Arch(nn.Module):
# def __init__(self,pre_trained='roberta-base'):
# super().__init__()
# self.bert = AutoModel.from_pretrained(pre_trained, output_hidden_states=True)
# output_channel = 16 # number of kernels
# num_classes = 6 # number of targets to predict
# dropout = 0.2 # dropout value
# embedding_dim = 768 # length of embedding dim
# ks = 3 # three conv nets here
# # input_channel = word embeddings at a value of 1; 3 for RGB images
# input_channel = 4 # for single embedding, input_channel = 1
# # [3, 4, 5] = window height
# # padding = padding to account for height of search window
# # 3 convolutional nets
# self.conv1 = nn.Conv2d(input_channel, output_channel, (3, embedding_dim), padding=(2, 0), groups=4)
# self.conv2 = nn.Conv2d(input_channel, output_channel, (4, embedding_dim), padding=(3, 0), groups=4)
# self.conv3 = nn.Conv2d(input_channel, output_channel, (5, embedding_dim), padding=(4, 0), groups=4)
# # apply dropout
# self.dropout = nn.Dropout(dropout)
# # fully connected layer for classification
# # 3x conv nets * output channel
# self.fc1 = nn.Linear(ks * output_channel, num_classes)
# self.softmax = nn.Sigmoid()
# def forward(self, text_id, text_mask):
# # get the last 4 layers
# outputs= self.bert(text_id, attention_mask=text_mask)
# # all_layers = [4, 16, 256, 768]
# hidden_layers = outputs[2] # get hidden layers
# hidden_layers = torch.stack(hidden_layers, dim=1)
# x = hidden_layers[:, -4:]
# # x = x.unsqueeze(1)
# # x = torch.mean(x, 0)
# # print(hidden_layers.size())
# torch.cuda.empty_cache()
# x = [F.relu(self.conv1(x)).squeeze(3), F.relu(self.conv2(x)).squeeze(3), F.relu(self.conv3(x)).squeeze(3)]
# x = [F.dropout(i,0.65) for i in x]
# # max-over-time pooling; # (batch, channel_output) * ks
# x = [F.max_pool1d(i, i.size(2)).squeeze(2) for i in x]
# # concat results; (batch, channel_output * ks)
# x = torch.cat(x, 1)
# # add dropout
# x = self.dropout(x)
# # generate logits (batch, target_size)
# logit = self.fc1(x)
# torch.cuda.empty_cache()
# return logit
# class PCL_Model_Arch(nn.Module):
# def __init__(self,pre_trained='roberta-base'):
# super().__init__()
# self.bert = AutoModel.from_pretrained(pre_trained)
# D_in, H, D_out = 768, 50, 6
# self.classifier = nn.Sequential(
# nn.Linear(D_in, H),
# nn.ReLU(),
# nn.Dropout(0.5),
# nn.Linear(H, D_out)
# )
# def forward(self, text_id, text_mask):
# # get the last 4 layers
# outputs= self.bert(text_id, attention_mask=text_mask)
# last_hidden_state_cls = outputs[0][:, 0, :]
# # Feed input to classifier to compute logits
# logits = self.classifier(last_hidden_state_cls)
# return logits
class WeightedLayerPooling(nn.Module):
def __init__(self, num_hidden_layers, layer_start: int = 4, layer_weights = None):
super(WeightedLayerPooling, self).__init__()
self.layer_start = layer_start
self.num_hidden_layers = num_hidden_layers
self.layer_weights = layer_weights if layer_weights is not None \
else nn.Parameter(
torch.tensor([1] * (num_hidden_layers+1 - layer_start), dtype=torch.float)
)
def forward(self, features):
ft_all_layers = features
all_layer_embedding = torch.stack(ft_all_layers)
all_layer_embedding = all_layer_embedding[self.layer_start:, :, :, :]
weight_factor = self.layer_weights.unsqueeze(-1).unsqueeze(-1).unsqueeze(-1).expand(all_layer_embedding.size())
weighted_average = (weight_factor*all_layer_embedding).sum(dim=0) / self.layer_weights.sum()
# features.update({'token_embeddings': weighted_average})
return weighted_average
class PCL_Model_Arch(nn.Module):
def __init__(self,pre_trained='roberta-base'):
super().__init__()
self.bert = AutoModel.from_pretrained(pre_trained, output_hidden_states=True)
self.config = AutoConfig.from_pretrained(pre_trained)
layer_start = 9
self.pooler = WeightedLayerPooling(
self.config.num_hidden_layers,
layer_start=layer_start, layer_weights=None
)
self.cls=nn.Linear(self.config.hidden_size, 6)
def forward(self, text_id, text_mask):
# get the last 4 layers
outputs= self.bert(text_id, attention_mask=text_mask)
out=outputs[2]
features = self.pooler(out)
sequence_output = features[:, 0]
logits = self.cls(sequence_output)
torch.cuda.empty_cache()
return logits
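# Architecture used below: RoBERTa encoder (all hidden states) -> WeightedLayerPooling over the
# last hidden layers (layer_start=9) -> take the pooled first-token (<s>/[CLS]) embedding ->
# a single linear head producing 6 logits, one per sarcasm sub-category.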
#Mixout code to combine vanilla network and dropout network
import torch
from torch.autograd.function import InplaceFunction
# imports needed by MixLinear below (Parameter and init are not imported elsewhere in this notebook)
from torch.nn import init
from torch.nn.parameter import Parameter
class Mixout(InplaceFunction):
# target: a weight tensor mixes with a input tensor
# A forward method returns
# [(1 - Bernoulli(1 - p) mask) * target + (Bernoulli(1 - p) mask) * input - p * target]/(1 - p)
# where p is a mix probability of mixout.
# A backward returns the gradient of the forward method.
# Dropout is equivalent to the case of target=None.
# I modified the code of dropout in PyTorch.
@staticmethod
def _make_noise(input):
return input.new().resize_as_(input)
@classmethod
def forward(cls, ctx, input, target=None, p=0.0, training=False, inplace=False):
if p < 0 or p > 1:
raise ValueError("A mix probability of mixout has to be between 0 and 1,"
" but got {}".format(p))
if target is not None and input.size() != target.size():
raise ValueError("A target tensor size must match with a input tensor size {},"
" but got {}". format(input.size(), target.size()))
ctx.p = p
ctx.training = training
if target is None:
target = cls._make_noise(input)
target.fill_(0)
target = target.to(input.device)
if inplace:
ctx.mark_dirty(input)
output = input
else:
output = input.clone()
if ctx.p == 0 or not ctx.training:
return output
ctx.noise = cls._make_noise(input)
if len(ctx.noise.size()) == 1:
ctx.noise.bernoulli_(1 - ctx.p)
else:
ctx.noise[0].bernoulli_(1 - ctx.p)
ctx.noise = ctx.noise[0].repeat(input.size()[0], *([1] * (len(input.size())-1)))
ctx.noise.expand_as(input)
if ctx.p == 1:
output = target.clone()
else:
output = ((1 - ctx.noise) * target + ctx.noise * output - ctx.p * target) / (1 - ctx.p)
return output
@staticmethod
def backward(ctx, grad_output):
if ctx.p > 0 and ctx.training:
return grad_output * ctx.noise, None, None, None, None
else:
return grad_output, None, None, None, None
def mixout(input, target=None, p=0.0, training=False, inplace=False):
return Mixout.apply(input, target, p, training, inplace)
class MixLinear(torch.nn.Module):
__constants__ = ['bias', 'in_features', 'out_features']
# If target is None, nn.Sequential(nn.Linear(m, n), MixLinear(m', n', p))
# is equivalent to nn.Sequential(nn.Linear(m, n), nn.Dropout(p), nn.Linear(m', n')).
# If you want to change a dropout layer to a mixout layer,
# you should replace nn.Linear right after nn.Dropout(p) with Mixout(p)
def __init__(self, in_features, out_features, bias=True, target=None, p=0.0):
super(MixLinear, self).__init__()
self.in_features = in_features
self.out_features = out_features
self.weight = Parameter(torch.Tensor(out_features, in_features))
if bias:
self.bias = Parameter(torch.Tensor(out_features))
else:
self.register_parameter('bias', None)
self.reset_parameters()
self.target = target
self.p = p
def reset_parameters(self):
init.kaiming_uniform_(self.weight, a=math.sqrt(5))
if self.bias is not None:
fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight)
bound = 1 / math.sqrt(fan_in)
init.uniform_(self.bias, -bound, bound)
def forward(self, input):
return F.linear(input, mixout(self.weight, self.target,
self.p, self.training), self.bias)
def extra_repr(self):
type = 'drop' if self.target is None else 'mix'
return '{}={}, in_features={}, out_features={}, bias={}'.format(type+"out", self.p,
self.in_features, self.out_features, self.bias is not None)
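# Illustrative sketch only (the names below are made up): wrapping a pre-trained nn.Linear in
# MixLinear so that, during training, its weights are randomly mixed back towards the original
# 'target' weights instead of being dropped to zero. The (commented-out) swap of this notebook's
# classifier head appears inside the cross-validation loop further down.
demo_linear = nn.Linear(768, 6)
demo_state = demo_linear.state_dict()
demo_mixed = MixLinear(demo_linear.in_features, demo_linear.out_features,
                       bias=True, target=demo_state['weight'], p=0.5)
demo_mixed.load_state_dict(demo_state)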
# def PCL_Model_Arch(nn.Module):
# def __init__(self,pre_trained='vinai/bertweet-base'):
# super().__init__()
# class PCL_Model_Arch(nn.Module):
# def __init__(self):
# super(PCL_Model_Arch, self).__init__()
# self.bert = AutoModel.from_pretrained('google/canine-c', output_hidden_states=False)
# self.drop = nn.Dropout(p=0.2)
# self.fc = nn.Linear(768, 2)
# self.softmax = nn.Softmax()
# def forward(self, text_id, text_mask):
# # get the last 4 layers
# outputs= self.bert(text_id, attention_mask=text_mask)
# # all_layers = [4, 16, 256, 768]
# hidden_layers = outputs[1] # get hidden layers
# torch.cuda.empty_cache()
# x = self.drop(hidden_layers)
# torch.cuda.empty_cache()
# # generate logits (batch, target_size)
# logit = self.fc(x)
# torch.cuda.empty_cache()
# return self.softmax(logit)
import torch
import torch.nn as nn
class AsymmetricLoss(nn.Module):
def __init__(self, gamma_neg=4, gamma_pos=1, clip=0.05, eps=1e-8, disable_torch_grad_focal_loss=True):
super(AsymmetricLoss, self).__init__()
self.gamma_neg = gamma_neg
self.gamma_pos = gamma_pos
self.clip = clip
self.disable_torch_grad_focal_loss = disable_torch_grad_focal_loss
self.eps = eps
def forward(self, x, y):
""""
Parameters
----------
x: input logits
y: targets (multi-label binarized vector)
"""
# Calculating Probabilities
x_sigmoid = torch.sigmoid(x)
xs_pos = x_sigmoid
xs_neg = 1 - x_sigmoid
# Asymmetric Clipping
if self.clip is not None and self.clip > 0:
xs_neg = (xs_neg + self.clip).clamp(max=1)
# Basic CE calculation
los_pos = y * torch.log(xs_pos.clamp(min=self.eps))
los_neg = (1 - y) * torch.log(xs_neg.clamp(min=self.eps))
loss = los_pos + los_neg
# Asymmetric Focusing
if self.gamma_neg > 0 or self.gamma_pos > 0:
if self.disable_torch_grad_focal_loss:
torch.set_grad_enabled(False)
pt0 = xs_pos * y
pt1 = xs_neg * (1 - y) # pt = p if t > 0 else 1-p
pt = pt0 + pt1
one_sided_gamma = self.gamma_pos * y + self.gamma_neg * (1 - y)
one_sided_w = torch.pow(1 - pt, one_sided_gamma)
if self.disable_torch_grad_focal_loss:
torch.set_grad_enabled(True)
loss *= one_sided_w
return -loss.sum()
class AsymmetricLossOptimized(nn.Module):
''' Notice - optimized version, minimizes memory allocation and gpu uploading,
favors inplace operations'''
def __init__(self, gamma_neg=4, gamma_pos=1, clip=0.05, eps=1e-8, disable_torch_grad_focal_loss=False):
super(AsymmetricLossOptimized, self).__init__()
self.gamma_neg = gamma_neg
self.gamma_pos = gamma_pos
self.clip = clip
self.disable_torch_grad_focal_loss = disable_torch_grad_focal_loss
self.eps = eps
# prevent memory allocation and gpu uploading every iteration, and encourages inplace operations
self.targets = self.anti_targets = self.xs_pos = self.xs_neg = self.asymmetric_w = self.loss = None
def forward(self, x, y):
""""
Parameters
----------
x: input logits
y: targets (multi-label binarized vector)
"""
self.targets = y
self.anti_targets = 1 - y
# Calculating Probabilities
self.xs_pos = torch.sigmoid(x)
self.xs_neg = 1.0 - self.xs_pos
# Asymmetric Clipping
if self.clip is not None and self.clip > 0:
self.xs_neg.add_(self.clip).clamp_(max=1)
# Basic CE calculation
self.loss = self.targets * torch.log(self.xs_pos.clamp(min=self.eps))
self.loss.add_(self.anti_targets * torch.log(self.xs_neg.clamp(min=self.eps)))
# Asymmetric Focusing
if self.gamma_neg > 0 or self.gamma_pos > 0:
if self.disable_torch_grad_focal_loss:
torch.set_grad_enabled(False)
self.xs_pos = self.xs_pos * self.targets
self.xs_neg = self.xs_neg * self.anti_targets
self.asymmetric_w = torch.pow(1 - self.xs_pos - self.xs_neg,
self.gamma_pos * self.targets + self.gamma_neg * self.anti_targets)
if self.disable_torch_grad_focal_loss:
torch.set_grad_enabled(True)
self.loss *= self.asymmetric_w
return -self.loss.sum()
class ASLSingleLabel(nn.Module):
'''
This loss is intended for single-label classification problems
'''
def __init__(self, gamma_pos=0, gamma_neg=4, eps: float = 0.1, reduction='mean'):
super(ASLSingleLabel, self).__init__()
self.eps = eps
self.logsoftmax = nn.LogSoftmax(dim=-1)
self.targets_classes = []
self.gamma_pos = gamma_pos
self.gamma_neg = gamma_neg
self.reduction = reduction
def forward(self, inputs, target):
'''
"input" dimensions: - (batch_size,number_classes)
"target" dimensions: - (batch_size)
'''
num_classes = inputs.size()[-1]
log_preds = self.logsoftmax(inputs)
self.targets_classes = torch.zeros_like(inputs).scatter_(1, target.long().unsqueeze(1), 1)
# ASL weights
targets = self.targets_classes
anti_targets = 1 - targets
xs_pos = torch.exp(log_preds)
xs_neg = 1 - xs_pos
xs_pos = xs_pos * targets
xs_neg = xs_neg * anti_targets
asymmetric_w = torch.pow(1 - xs_pos - xs_neg,
self.gamma_pos * targets + self.gamma_neg * anti_targets)
log_preds = log_preds * asymmetric_w
if self.eps > 0: # label smoothing
self.targets_classes = self.targets_classes.mul(1 - self.eps).add(self.eps / num_classes)
# loss calculation
loss = - self.targets_classes.mul(log_preds)
loss = loss.sum(dim=-1)
if self.reduction == 'mean':
loss = loss.mean()
return loss
!pip install emoji
tokenizer= AutoTokenizer.from_pretrained('roberta-base')
y_train=train[['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']].values
class_freq=np.sum(y_train,axis=0)
len(y_train)
# class_freq=class_freq.reshape(6,1)
class_freq
def cross_entropy(pred, label, weight=None, reduction='mean', avg_factor=None):
# element-wise losses
if label.size(-1) != pred.size(0):
label = _squeeze_binary_labels(label)
loss = F.cross_entropy(pred, label, reduction='none')
# apply weights and do the reduction
if weight is not None:
weight = weight.float()
loss = weight_reduce_loss(
loss, weight=weight, reduction=reduction, avg_factor=avg_factor)
return loss
def partial_cross_entropy(pred,
label,
weight=None,
reduction='mean',
avg_factor=None):
if pred.dim() != label.dim():
label, weight = _expand_binary_labels(label, weight, pred.size(-1))
# weighted element-wise losses
if weight is not None:
weight = weight.float()
mask = label == -1
loss = F.binary_cross_entropy_with_logits(
pred, label.float(), weight, reduction='none')
if mask.sum() > 0:
loss *= (1-mask).float()
avg_factor = (1-mask).float().sum()
# do the reduction for the weighted loss
loss = weight_reduce_loss(loss, reduction=reduction, avg_factor=avg_factor)
return loss
def criterion(outputs1, targets):
criterion =AsymmetricLoss()
# print(outputs1)
# criterion=ResampleLoss(reweight_func='rebalance', loss_weight=1.0,
# focal=dict(focal=True, alpha=0.5, gamma=2),
# logit_reg=dict(init_bias=0.05, neg_scale=2.0),
# map_param=dict(alpha=0.1, beta=10.0, gamma=0.05),
# class_freq=class_freq, train_num=len(y_train))
loss = criterion(outputs1, targets)
return loss
import random
import numpy as np
from torch.utils.data.sampler import Sampler
class MultilabelBalancedRandomSampler(Sampler):
"""
MultilabelBalancedRandomSampler: Given a multilabel dataset of length n_samples and
number of classes n_classes, samples from the data with equal probability per class
effectively oversampling minority classes and undersampling majority classes at the
same time. Note that using this sampler does not guarantee that the distribution of
classes in the output samples will be uniform, since the dataset is multilabel and
sampling is based on a single class. This does however guarantee that all classes
will have at least batch_size / n_classes samples as batch_size approaches infinity
"""
def __init__(self, labels, indices=None, class_choice="least_sampled"):
"""
Parameters:
-----------
labels: a multi-hot encoding numpy array of shape (n_samples, n_classes)
indices: an arbitrary-length 1-dimensional numpy array representing a list
of indices to sample only from
class_choice: a string indicating how class will be selected for every
sample:
"least_sampled": class with the least number of sampled labels so far
"random": class is chosen uniformly at random
"cycle": the sampler cycles through the classes sequentially
"""
self.labels = labels
self.indices = indices
if self.indices is None:
self.indices = range(len(labels))
self.num_classes = self.labels.shape[1]
# List of lists of example indices per class
self.class_indices = []
for class_ in range(self.num_classes):
lst = np.where(self.labels[:, class_] == 1)[0]
lst = lst[np.isin(lst, self.indices)]
self.class_indices.append(lst)
self.counts = [0] * self.num_classes
assert class_choice in ["least_sampled", "random", "cycle"]
self.class_choice = class_choice
self.current_class = 0
def __iter__(self):
self.count = 0
return self
def __next__(self):
if self.count >= len(self.indices):
raise StopIteration
self.count += 1
return self.sample()
def sample(self):
class_ = self.get_class()
class_indices = self.class_indices[class_]
chosen_index = np.random.choice(class_indices)
if self.class_choice == "least_sampled":
for class_, indicator in enumerate(self.labels[chosen_index]):
if indicator == 1:
self.counts[class_] += 1
return chosen_index
def get_class(self):
if self.class_choice == "random":
class_ = random.randint(0, self.labels.shape[1] - 1)
elif self.class_choice == "cycle":
class_ = self.current_class
self.current_class = (self.current_class + 1) % self.labels.shape[1]
elif self.class_choice == "least_sampled":
min_count = self.counts[0]
min_classes = [0]
for class_ in range(1, self.num_classes):
if self.counts[class_] < min_count:
min_count = self.counts[class_]
min_classes = [class_]
if self.counts[class_] == min_count:
min_classes.append(class_)
class_ = np.random.choice(min_classes)
return class_
def __len__(self):
return len(self.indices)
CONFIG = {"seed": 2021,
"epochs": 5,
"model_name": "xlnet-base-cased",
"train_batch_size": 16,
"valid_batch_size": 64,
"max_length": 120,
"learning_rate": 1e-4,
"scheduler": 'CosineAnnealingLR',
"min_lr": 1e-6,
"T_max": 500,
"weight_decay": 1e-6,
"n_fold": 5,
"n_accumulate": 1,
"num_classes": 1,
"margin": 0.5,
"device": torch.device("cuda:0" if torch.cuda.is_available() else "cpu"),
}
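# Note: CONFIG["model_name"] is not actually read anywhere below -- the tokenizer above and
# PCL_Model_Arch() both use 'roberta-base' (the class default), so changing "model_name" here
# alone will not switch the backbone.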
def train_one_epoch(model, optimizer, scheduler, dataloader, device, epoch):
model.train()
dataset_size = 0
running_loss = 0.0
bar = tqdm(enumerate(dataloader), total=len(dataloader))
for step, data in bar:
text_ids = data['text_ids'].to(device, dtype = torch.long)
text_mask = data['text_mask'].to(device, dtype = torch.long)
targets = data['target'].to(device, dtype=torch.long)
# print(text_ids.shape)
batch_size = text_ids.size(0)
# print(targets)
outputs = model(text_ids, text_mask)
# print(outputs.shape)
# print(outputs.shape)
loss = criterion(outputs, targets)
loss = loss / CONFIG['n_accumulate']
loss.backward()
if (step + 1) % CONFIG['n_accumulate'] == 0:
optimizer.step()
# zero the parameter gradients
optimizer.zero_grad()
if scheduler is not None:
scheduler.step()
running_loss += (loss.item() * batch_size)
dataset_size += batch_size
epoch_loss = running_loss / dataset_size
bar.set_postfix(Epoch=epoch, Train_Loss=epoch_loss,
LR=optimizer.param_groups[0]['lr'])
gc.collect()
return epoch_loss
@torch.no_grad()
def valid_one_epoch(model, dataloader, device, epoch):
model.eval()
dataset_size = 0
running_loss = 0.0
bar = tqdm(enumerate(dataloader), total=len(dataloader))
for step, data in bar:
text_ids = data['text_ids'].to(device, dtype = torch.long)
text_mask = data['text_mask'].to(device, dtype = torch.long)
targets = data['target'].to(device, dtype=torch.long)
# print(text_ids.shape)
batch_size = text_ids.size(0)
outputs = model(text_ids, text_mask)
# outputs = outputs.argmax(dim=1)
loss = criterion(outputs, targets)
running_loss += (loss.item() * batch_size)
dataset_size += batch_size
epoch_loss = running_loss / dataset_size
bar.set_postfix(Epoch=epoch, Valid_Loss=epoch_loss,
LR=optimizer.param_groups[0]['lr'])
gc.collect()
return epoch_loss
def run_training(model, optimizer, scheduler, device, num_epochs, fold):
# To automatically log gradients
if torch.cuda.is_available():
print("[INFO] Using GPU: {}\n".format(torch.cuda.get_device_name()))
start = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_epoch_loss = np.inf
history = defaultdict(list)
for epoch in range(1, num_epochs + 1):
gc.collect()
train_epoch_loss = train_one_epoch(model, optimizer, scheduler,
dataloader=train_loader,
device=CONFIG['device'], epoch=epoch)
val_epoch_loss = valid_one_epoch(model, valid_loader, device=CONFIG['device'],
epoch=epoch)
history['Train Loss'].append(train_epoch_loss)
history['Valid Loss'].append(val_epoch_loss)
# deep copy the model
if val_epoch_loss <= best_epoch_loss:
print(f"Validation Loss Improved ({best_epoch_loss} ---> {val_epoch_loss})")
best_epoch_loss = val_epoch_loss
best_model_wts = copy.deepcopy(model.state_dict())
PATH = f"/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_large_kim_cnn/Loss-Fold-{fold}.bin"
torch.save(model.state_dict(), PATH)
# Save a model file from the current directory
print("Model Saved")
print()
end = time.time()
time_elapsed = end - start
print('Training complete in {:.0f}h {:.0f}m {:.0f}s'.format(
time_elapsed // 3600, (time_elapsed % 3600) // 60, (time_elapsed % 3600) % 60))
print("Best Loss: {:.4f}".format(best_epoch_loss))
# load best model weights
model.load_state_dict(best_model_wts)
return model, history
def fetch_scheduler(optimizer):
if CONFIG['scheduler'] == 'CosineAnnealingLR':
scheduler = lr_scheduler.CosineAnnealingLR(optimizer,T_max=CONFIG['T_max'],
eta_min=CONFIG['min_lr'])
elif CONFIG['scheduler'] == 'CosineAnnealingWarmRestarts':
scheduler = lr_scheduler.CosineAnnealingWarmRestarts(optimizer,T_0=CONFIG['T_0'],
eta_min=CONFIG['min_lr'])
elif CONFIG['scheduler'] == None:
return None
return scheduler
def prepare_loaders(fold):
displacemnt_list=[0,512,1024,1536,2048,2560,3072,3584,4096,4608,4950]
df_train = train[train.kfold != fold].reset_index(drop=True)
df_valid = train[train.kfold == fold].reset_index(drop=True)
# https://github.com/issamemari/pytorch-multilabel-balanced-sampler/blob/master/example.py
sampler = MultilabelBalancedRandomSampler(df_train[['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']].values,class_choice='cycle')
train_dataset = PCLTrainDataset(df_train, tokenizer=tokenizer, max_length=CONFIG['max_length'],displacemnt=displacemnt_list[fold])
valid_dataset = PCLTrainDataset(df_valid, tokenizer=tokenizer, max_length=CONFIG['max_length'],displacemnt=displacemnt_list[fold])
train_loader = DataLoader(train_dataset, batch_size=CONFIG['train_batch_size'],
num_workers=2, shuffle=False, pin_memory=True, drop_last=True,sampler=sampler)
valid_loader = DataLoader(valid_dataset, batch_size=CONFIG['valid_batch_size'],
num_workers=2, shuffle=False, pin_memory=True)
return train_loader, valid_loader
!pip install iterative-stratification
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold
train.reset_index(inplace=True)
mskf = MultilabelStratifiedKFold(n_splits=CONFIG['n_fold'], shuffle=True, random_state=CONFIG['seed'])
for fold, ( _, val_) in enumerate(mskf.split(train, train[['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']].values)):
train.loc[val_ , "kfold"] = int(fold)
train["kfold"] = train["kfold"].astype(int)
# del model,train_loader, valid_loader
import gc
gc.collect()
torch.cuda.empty_cache()
###Output
_____no_output_____
###Markdown
http://seekinginference.com/applied_nlp/bert-cnn.html
###Code
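# Enable autograd anomaly detection: backward passes that produce NaN/inf raise an error with a
# traceback pointing at the offending forward op (useful for debugging, but it slows training).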
torch.autograd.set_detect_anomaly(True)
###Output
_____no_output_____
###Markdown
https://github.com/arjundussa65/Thesis-2020
###Code
def get_optimizer_grouped_parameters(
model, model_type,
learning_rate, weight_decay,
layerwise_learning_rate_decay
):
no_decay = ["bias", "LayerNorm.weight"]
# initialize lr for task specific layer
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters() if "classifier" in n or "pooler" in n],
"weight_decay": 0.0,
"lr": learning_rate,
},
]
# initialize lrs for every layer
num_layers = model.config.num_hidden_layers
layers = [getattr(model, model_type).embeddings] + list(getattr(model, model_type).encoder.layer)
layers.reverse()
lr = learning_rate
for layer in layers:
lr *= layerwise_learning_rate_decay
optimizer_grouped_parameters += [
{
"params": [p for n, p in layer.named_parameters() if not any(nd in n for nd in no_decay)],
"weight_decay": weight_decay,
"lr": lr,
},
{
"params": [p for n, p in layer.named_parameters() if any(nd in n for nd in no_decay)],
"weight_decay": 0.0,
"lr": lr,
},
]
return optimizer_grouped_parameters
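# Layer-wise learning-rate decay (LLRD), as built above: the classifier/pooler parameters keep the
# base learning rate, then each transformer layer (top to bottom) gets the previous layer's lr
# multiplied by layerwise_learning_rate_decay, with the embeddings getting the smallest lr.
# For example, with learning_rate=1e-4 and decay=0.9 (the values used below) on a 12-layer model,
# the top encoder layer gets 9e-5 and the embeddings roughly 1e-4 * 0.9**13 ~ 2.5e-5.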
for fold in range(0, CONFIG['n_fold']):
print(f"====== Fold: {fold} ======")
# Create Dataloaders
train_loader, valid_loader = prepare_loaders(fold=fold)
model = PCL_Model_Arch()
# mixout = 0.7
# for name, module in model.named_modules():
# if name in ['dropout'] and isinstance(module, nn.Dropout):
# setattr(model, name, nn.Dropout(0))
# if name in ['classifier'] and isinstance(module, nn.Linear):
# target_state_dict = module.state_dict()
# bias = True if module.bias is not None else False
# new_module = MixLinear(module.in_features, module.out_features,
# bias, target_state_dict['weight'], 0.5)
# new_module.load_state_dict(target_state_dict)
# setattr(model, name, new_module)
model.to(CONFIG['device'])
torch.cuda.empty_cache()
# Define Optimizer and Scheduler
layerwise_learning_rate_decay = 0.9
use_bertadam = False
adam_epsilon = 1e-6
grouped_optimizer_params = get_optimizer_grouped_parameters(model, 'bert',CONFIG['learning_rate'], CONFIG['weight_decay'],0.9)
optimizer = AdamW(grouped_optimizer_params, lr=CONFIG['learning_rate'], weight_decay=CONFIG['weight_decay'],correct_bias=not use_bertadam,eps=adam_epsilon)
scheduler = fetch_scheduler(optimizer)
model, history = run_training(model, optimizer, scheduler,
device=CONFIG['device'],
num_epochs=CONFIG['epochs'],
fold=fold)
del model, history, train_loader, valid_loader
_ = gc.collect()
print()
test.dropna(inplace=True)
valid_dataset = PCLTrainDataset(test, tokenizer=tokenizer, max_length=CONFIG['max_length'],displacemnt=0)
valid_loader = DataLoader(valid_dataset, batch_size=CONFIG['valid_batch_size'],
num_workers=2, shuffle=False, pin_memory=True)
@torch.no_grad()
def valid_fn(model, dataloader, device):
model.eval()
dataset_size = 0
running_loss = 0.0
PREDS = []
bar = tqdm(enumerate(dataloader), total=len(dataloader))
for step, data in bar:
ids = data['text_ids'].to(device, dtype = torch.long)
mask = data['text_mask'].to(device, dtype = torch.long)
outputs = model(ids, mask)
sig=nn.Sigmoid()
outputs=sig(outputs)
# outputs = outputs.argmax(dim=1)
# print(len(outputs))
# print(len(np.max(outputs.cpu().detach().numpy(),axis=1)))
PREDS.append(outputs.detach().cpu().numpy())
# print(outputs.detach().cpu().numpy())
PREDS = np.concatenate(PREDS)
gc.collect()
return PREDS
def inference(model_paths, dataloader, device):
final_preds = []
for i, path in enumerate(model_paths):
model = PCL_Model_Arch()
model.to(CONFIG['device'])
model.load_state_dict(torch.load(path))
print(f"Getting predictions for model {i+1}")
preds = valid_fn(model, dataloader, device)
final_preds.append(preds)
final_preds = np.array(final_preds)
# print(final_preds)
final_preds = np.mean(final_preds, axis=0)
# print(final_preds)
final_preds[final_preds>=0.5] = 1
final_preds[final_preds<0.5] = 0
# final_preds= np.argmax(final_preds,axis=1)
return final_preds
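# Each fold model outputs per-label sigmoid probabilities (see valid_fn); the fold predictions are
# averaged and then thresholded at 0.5 to produce the multi-label 0/1 predictions.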
###Output
_____no_output_____
###Markdown
roberta_weighted_no_mixout
###Code
MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_large_kim_cnn/Loss-Fold-0.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_large_kim_cnn/Loss-Fold-1.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_large_kim_cnn/Loss-Fold-2.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_large_kim_cnn/Loss-Fold-3.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_large_kim_cnn/Loss-Fold-4.bin']
# MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_kim_cnn/Loss-Fold-0.bin']
preds = inference(MODEL_PATH_2, valid_loader, CONFIG['device'])
from sklearn.metrics import f1_score,accuracy_score,precision_score,classification_report
print(classification_report(test[['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']].values, preds,target_names=['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']))
from sklearn.metrics import f1_score,accuracy_score,precision_score,classification_report
print(classification_report(test[['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']].values, preds,target_names=['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']))
###Output
precision recall f1-score support
sarcasm 0.79 0.99 0.88 138
irony 0.29 0.81 0.43 36
satire 0.33 0.40 0.36 5
understatement 0.00 0.00 0.00 1
overstatement 0.09 0.50 0.16 6
rhetorical_question 0.60 0.84 0.70 25
micro avg 0.55 0.91 0.69 211
macro avg 0.35 0.59 0.42 211
weighted avg 0.65 0.91 0.74 211
samples avg 0.59 0.92 0.69 211
###Markdown
roberta_WeightedLayerPooling asymmetric loss with mixout
###Code
MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_WeightedLayerPooling/Loss-Fold-0.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_WeightedLayerPooling/Loss-Fold-1.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_WeightedLayerPooling/Loss-Fold-2.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_WeightedLayerPooling/Loss-Fold-3.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_WeightedLayerPooling/Loss-Fold-4.bin']
# MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_kim_cnn/Loss-Fold-0.bin']
preds = inference(MODEL_PATH_2, valid_loader, CONFIG['device'])
from sklearn.metrics import f1_score,accuracy_score,precision_score,classification_report
print(classification_report(test[['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']].values, preds,target_names=['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']))
###Output
precision recall f1-score support
sarcasm 0.79 1.00 0.88 138
irony 0.25 0.86 0.38 36
satire 0.12 0.20 0.15 5
understatement 0.00 0.00 0.00 1
overstatement 0.05 0.33 0.09 6
rhetorical_question 0.58 0.88 0.70 25
micro avg 0.51 0.92 0.65 211
macro avg 0.30 0.55 0.37 211
weighted avg 0.63 0.92 0.73 211
samples avg 0.54 0.94 0.66 211
###Markdown
roberta_mixout_no_kim
###Code
MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_mix_out_no_kim/Loss-Fold-0.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_mix_out_no_kim/Loss-Fold-1.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_mix_out_no_kim/Loss-Fold-2.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_mix_out_no_kim/Loss-Fold-3.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_mix_out_no_kim/Loss-Fold-4.bin']
# MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_kim_cnn/Loss-Fold-0.bin']
preds = inference(MODEL_PATH_2, valid_loader, CONFIG['device'])
from sklearn.metrics import f1_score,accuracy_score,precision_score,classification_report
print(classification_report(test[['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']].values, preds,target_names=['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']))
###Output
precision recall f1-score support
sarcasm 0.79 1.00 0.88 138
irony 0.25 0.81 0.38 36
satire 0.25 0.20 0.22 5
understatement 0.00 0.00 0.00 1
overstatement 0.04 0.67 0.07 6
rhetorical_question 0.35 0.96 0.52 25
micro avg 0.41 0.93 0.57 211
macro avg 0.28 0.61 0.35 211
weighted avg 0.61 0.93 0.71 211
samples avg 0.43 0.94 0.57 211
###Markdown
roberta + kim + mixout
###Code
MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_mixout/Loss-Fold-0.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_mixout/Loss-Fold-1.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_mixout/Loss-Fold-2.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_mixout/Loss-Fold-3.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/roberta_mixout/Loss-Fold-4.bin']
# MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_kim_cnn/Loss-Fold-0.bin']
preds = inference(MODEL_PATH_2, valid_loader, CONFIG['device'])
from sklearn.metrics import f1_score,accuracy_score,precision_score,classification_report
print(classification_report(test[['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']].values, preds,target_names=['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']))
from sklearn.metrics import f1_score,accuracy_score,precision_score,classification_report
print(classification_report(test[['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']].values, preds,target_names=['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']))
###Output
precision recall f1-score support
sarcasm 0.79 0.99 0.88 138
irony 0.37 0.72 0.49 36
satire 0.20 0.40 0.27 5
understatement 0.00 0.00 0.00 1
overstatement 0.07 0.50 0.12 6
rhetorical_question 0.53 0.96 0.69 25
micro avg 0.56 0.91 0.69 211
macro avg 0.33 0.60 0.41 211
weighted avg 0.65 0.91 0.75 211
samples avg 0.61 0.92 0.70 211
###Markdown
recall loss cycle -4:
###Code
MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_recall_middle_layer_dropout/Loss-Fold-0.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_recall_middle_layer_dropout/Loss-Fold-1.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_recall_middle_layer_dropout/Loss-Fold-2.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_recall_middle_layer_dropout/Loss-Fold-3.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_recall_middle_layer_dropout/Loss-Fold-4.bin']
# MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_kim_cnn/Loss-Fold-0.bin']
preds = inference(MODEL_PATH_2, valid_loader, CONFIG['device'])
from sklearn.metrics import f1_score,accuracy_score,precision_score,classification_report
print(classification_report(test[['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']].values, preds,target_names=['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']))
###Output
precision recall f1-score support
sarcasm 0.79 1.00 0.88 138
irony 0.25 0.03 0.05 36
satire 0.02 0.80 0.05 5
understatement 0.00 0.00 0.00 1
overstatement 0.00 0.00 0.00 6
rhetorical_question 0.14 1.00 0.25 25
micro avg 0.32 0.80 0.46 211
macro avg 0.20 0.47 0.21 211
weighted avg 0.58 0.80 0.62 211
samples avg 0.32 0.79 0.45 211
###Markdown
f1 loss cycle sampler bert -7:-3
###Code
MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_f1_middle_layer_dropout/Loss-Fold-0.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_f1_middle_layer_dropout/Loss-Fold-1.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_f1_middle_layer_dropout/Loss-Fold-2.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_f1_middle_layer_dropout/Loss-Fold-3.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_f1_middle_layer_dropout/Loss-Fold-4.bin']
# MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_kim_cnn/Loss-Fold-0.bin']
preds = inference(MODEL_PATH_2, valid_loader, CONFIG['device'])
from sklearn.metrics import f1_score,accuracy_score,precision_score,classification_report
print(classification_report(test[['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']].values, preds,target_names=['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']))
###Output
precision recall f1-score support
sarcasm 0.80 0.99 0.88 138
irony 0.64 0.19 0.30 36
satire 0.50 0.20 0.29 5
understatement 0.00 0.00 0.00 1
overstatement 0.00 0.00 0.00 6
rhetorical_question 0.59 0.76 0.67 25
micro avg 0.75 0.78 0.76 211
macro avg 0.42 0.36 0.36 211
weighted avg 0.71 0.78 0.71 211
samples avg 0.77 0.79 0.77 211
###Markdown
Asymmetric loss random sampler
###Code
MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_asymm_extra_dropout_middle_layers/Loss-Fold-0.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_asymm_extra_dropout_middle_layers/Loss-Fold-1.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_asymm_extra_dropout_middle_layers/Loss-Fold-2.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_asymm_extra_dropout_middle_layers/Loss-Fold-3.bin','/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_asymm_extra_dropout_middle_layers/Loss-Fold-4.bin']
# MODEL_PATH_2=['/content/drive/MyDrive/ISarcasm/Models_Task_B/bert_tweet_kim_cnn/Loss-Fold-0.bin']
preds = inference(MODEL_PATH_2, valid_loader, CONFIG['device'])
from sklearn.metrics import f1_score,accuracy_score,precision_score,classification_report
print(classification_report(test[['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']].values, preds,target_names=['sarcasm', 'irony',
'satire', 'understatement', 'overstatement', 'rhetorical_question']))
###Output
precision recall f1-score support
sarcasm 0.79 1.00 0.88 138
irony 0.23 0.94 0.37 36
satire 0.12 0.40 0.19 5
understatement 0.09 1.00 0.17 1
overstatement 0.06 0.17 0.08 6
rhetorical_question 0.43 0.84 0.57 25
micro avg 0.47 0.93 0.63 211
macro avg 0.29 0.73 0.38 211
weighted avg 0.61 0.93 0.72 211
samples avg 0.50 0.95 0.63 211
###Markdown
Prepare submission
###Code
!cat task1.txt | head -n 10
!cat task2.txt | head -n 10
!zip submission.zip task1.txt task2.txt
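# Illustrative sketch only - the exact submission format is an assumption here: if task2.txt
# expects one comma-separated row of binary labels per test tweet (same column order as the
# classification report above), it could be written from `preds` roughly like this:
# with open('task2.txt', 'w') as f:
#     for row in preds:
#         f.write(','.join(str(int(v)) for v in row) + '\n')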
###Output
_____no_output_____ |
95-1.1 Running Shor's algorithm (IBM qiskit, on QPU).ipynb | ###Markdown
###Code
# Running Shor's algorithm (IBM qiskit, on QPU)
# author: IBM
# license: MIT License # code: github.com/gressling/examples
# activity: single example # index: 95-1
# access: https://github.com/Qiskit/qiskit-ibmq-provider
# access: https://quantum-computing.ibm.com/
!pip install qiskit
from qiskit import IBMQ
from qiskit.aqua import QuantumInstance
from qiskit.aqua.algorithms import Shor
IBMQ.enable_account('e7344ad504273e671xxxxxxxx6226e7977f0c43a3404e6775816720640bd42a4f2bff0b1f19d0d6527f492') #<<API TOKEN>>
provider = IBMQ.get_provider(hub='ibm-q')
backend = provider.get_backend('ibmq_qasm_simulator')
factors = Shor(21)
result_dict = factors.run(QuantumInstance(backend, shots=1, skip_qobj_validation=False))
print(result_dict['factors'])
# Shor(21) will find the prime factors for 21
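# Illustrative classical sanity check (assumption: result_dict['factors'] is a list of
# factor pairs, e.g. [[3, 7]] for N=21):
# for pair in result_dict['factors']:
#     assert all(21 % f == 0 for f in pair), "unexpected factor returned"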
###Output
_____no_output_____ |
notebooks/Python3-DataScience/03-Pandas/12-Pandas Built-in Visualisation Task Solution.ipynb | ###Markdown
1. Import pyplot from matplotlib
###Code
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
2. Import pandas for csv file reading
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
3. Write the code which allows plots to be shown in the Jupyter notebook
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
4. Create a variable and read the data from the dataframe3 csv file into it
###Code
dataframe3 = pd.read_csv('dataframe3')
###Output
_____no_output_____
###Markdown
5. Show first 10 rows from the dataset
###Code
dataframe3.head(10)
###Output
_____no_output_____
###Markdown
6. Try to recreate the plot with the same color and size of the points, and also with the same figure size as in the image below. This is the scatter plot of b vs a. You may need to refresh your matplotlib knowledge.
###Code
dataframe3.plot.scatter(x='a', y='b', figsize=(15, 4), s=70, c='green')
###Output
_____no_output_____
###Markdown
7. Try to recreate a histogram of the 'b' column
###Code
dataframe3['b'].plot.hist()
###Output
_____no_output_____
###Markdown
8. Create a boxplot comparing the b and c columns.
###Code
dataframe3[['b', 'c']].plot.box()
###Output
_____no_output_____
###Markdown
9. Create a kde plot of the c column
###Code
dataframe3['c'].plot.kde()
###Output
_____no_output_____
###Markdown
10. Create an area plot of all the columns for just the rows up to 20. (hint: use .iloc).
###Code
dataframe3.iloc[0:20].plot.area(alpha=0.5)
###Output
_____no_output_____ |
cnn/cnn_keras_mnist.ipynb | ###Markdown
CNN with Keras Import Library and Load MNIST Data
###Code
import keras
from keras.datasets import mnist
from keras.layers import Dense, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.models import Sequential
import matplotlib.pylab as plt
# We are using tensorflow-gpu so it's best to test if it's working
from keras import backend as K
K.tensorflow_backend._get_available_gpus()
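# Note: K.tensorflow_backend._get_available_gpus() is the old standalone-Keras API; with
# TF 2.x Keras the equivalent check would be (sketch): tf.config.list_physical_devices('GPU')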
#Load the MNIST dataset.
(X, y), (X_test,y_test) = mnist.load_data()
###Output
_____no_output_____
###Markdown
Define HyperParameters and Initialize them
###Code
# We will need to tune these hyperparameters for the best result.
num_classes = 10
epochs = 10
batch_size = 128
###Output
_____no_output_____
###Markdown
Preprocess the training data. Reshape: initialize the height and width of the image (for MNIST data it is 28x28) and reshape the MNIST data into a 4D tensor (no_of_samples, width, height, channels); MNIST images are grayscale, so channels will be 1 in our case. Convert the data into the right type: convert the data to float and divide it by 255 to normalize, since pixel values range from 0 to 255.
###Code
#Initialize variable
width = 28
height = 28
no_channel = 1
input_shape = (width,height,no_channel)
#Reshape input
X = X.reshape(X.shape[0],width,height,no_channel)
X_test = X_test.reshape(X_test.shape[0],width,height,no_channel)
#Convert to float
X = X.astype('float32')
X_test = X_test.astype('float32')
#Normalize
X = X/255
X_test = X_test/255
print('x_train shape:', X.shape)
print(X.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
print(y.shape, 'output shape')
###Output
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
(60000,) output shape
###Markdown
Convert the output to a one-hot vector. For example, 3 would be represented by a 1 at the 3rd index of ten zeros, where the 1st position represents 0 and the last represents 9: 0001000000
###Code
y = keras.utils.to_categorical(y,num_classes=10)
y_test = keras.utils.to_categorical(y_test,num_classes=10)
print(y.shape,'output shape')
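# For intuition (a small illustrative note): keras.utils.to_categorical(3, num_classes=10)
# gives array([0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]) - the 1 sits at index 3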
###Output
(60000, 10) output shape
###Markdown
Define the Keras model and stack the layers. This will have a Conv layer followed by ReLU activation. MaxPooling will be applied after that to subsample.
###Code
#Define model
model = Sequential()
#Layer1 = Conv + relu + maxpooling
model.add(Conv2D(filters=32,kernel_size=(5,5),strides=(1,1),padding='same',activation='relu',input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2,2)))
#Layer2 = Conv + relu + maxpooling
model.add(Conv2D(filters=64,kernel_size=(5,5),strides=(1,1),padding='same',activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
#Flatten
model.add(Flatten())
#Layer Fully connected
model.add(Dense(units=1000,activation='relu'))
model.add(Dense(units=num_classes,activation='softmax'))
#Compile model
model.compile(optimizer=keras.optimizers.Adam(),
loss=keras.losses.categorical_crossentropy,
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Optional class for saving accuracy history. We will need the accuracy at each epoch to plot the graph.
###Code
class AccuracyHistory(keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.acc = []
def on_epoch_end(self, batch, logs={}):
self.acc.append(logs.get('acc'))
history = AccuracyHistory()
###Output
_____no_output_____
###Markdown
Train the model
###Code
model.fit(x=X,
y=y,
batch_size=batch_size,
epochs=10,
verbose=1,
validation_data=(X_test,y_test),
callbacks=[history])
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 64s 1ms/step - loss: 0.1386 - acc: 0.9584 - val_loss: 0.0398 - val_acc: 0.9869
Epoch 2/10
60000/60000 [==============================] - 57s 948us/step - loss: 0.0391 - acc: 0.9882 - val_loss: 0.0333 - val_acc: 0.9892
Epoch 3/10
60000/60000 [==============================] - 56s 941us/step - loss: 0.0247 - acc: 0.9920 - val_loss: 0.0276 - val_acc: 0.9910
Epoch 4/10
60000/60000 [==============================] - 56s 930us/step - loss: 0.0174 - acc: 0.9939 - val_loss: 0.0254 - val_acc: 0.9924
Epoch 5/10
60000/60000 [==============================] - 56s 929us/step - loss: 0.0142 - acc: 0.9953 - val_loss: 0.0231 - val_acc: 0.9916
Epoch 6/10
60000/60000 [==============================] - 57s 950us/step - loss: 0.0097 - acc: 0.9968 - val_loss: 0.0220 - val_acc: 0.9930
Epoch 7/10
60000/60000 [==============================] - 57s 949us/step - loss: 0.0080 - acc: 0.9973 - val_loss: 0.0290 - val_acc: 0.9907
Epoch 8/10
60000/60000 [==============================] - 57s 945us/step - loss: 0.0075 - acc: 0.9976 - val_loss: 0.0302 - val_acc: 0.9916
Epoch 9/10
60000/60000 [==============================] - 57s 946us/step - loss: 0.0084 - acc: 0.9972 - val_loss: 0.0382 - val_acc: 0.9905
Epoch 10/10
60000/60000 [==============================] - 57s 949us/step - loss: 0.0082 - acc: 0.9974 - val_loss: 0.0195 - val_acc: 0.9939
###Markdown
Model Evaluation
###Code
testScore = model.evaluate(x=X_test,y=y_test,verbose=1)
print('Test loss:', testScore[0])
print('Test accuracy:', testScore[1])
###Output
10000/10000 [==============================] - 4s 369us/step
Test loss: 0.019496263597800406
Test accuracy: 0.9939
###Markdown
Plot epoch vs accuracy.
###Code
plt.plot(range(1,11),history.acc)
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
###Output
_____no_output_____ |
tpu-flower-mixed-precision-custom-loops.ipynb | ###Markdown
Version 2 Log -> 1) Mixed Precision Added 2) Custom Model Training
###Code
# Necessary imports
import math, re, os
import tensorflow as tf
import numpy as np
from matplotlib import pyplot as plt
from kaggle_datasets import KaggleDatasets
from sklearn.metrics import f1_score, precision_score, recall_score, confusion_matrix
print("Tensorflow version: -> ", tf.__version__)
AUTO = tf.data.experimental.AUTOTUNE
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
strategy = tf.distribute.TPUStrategy(tpu)
except ValueError:
strategy = tf.distribute.MirroredStrategy()
print("Number of Accelerators : ", strategy.num_replicas_in_sync)
from kaggle_datasets import KaggleDatasets
GCS_DS_PATH = KaggleDatasets().get_gcs_path('flower-classification-with-tpus')
print(GCS_DS_PATH)
###Output
gs://kds-a5f3e552ca1eca1651815443969650adf7d40a60f75bf16a583d954d
###Markdown
Configuration
###Code
IMAGE_SIZE = [331, 331]
EPOCHS = 13
BATCH_SIZE = 16 * strategy.num_replicas_in_sync
# LR Scheduling
# Learning rate schedule
LR_START = 0.00001
LR_MAX = 0.00004 * strategy.num_replicas_in_sync
LR_MIN = 0.00001
LR_RAMPUP_EPOCHS = 3
LR_SUSTAIN_EPOCHS = 0
LR_EXP_DECAY = .7
GCS_PATH_SELECT = {
192 : GCS_DS_PATH + '/tfrecords-jpeg-192x192',
224 : GCS_DS_PATH + '/tfrecords-jpeg-224x224',
331 : GCS_DS_PATH + '/tfrecords-jpeg-331x331',
512 : GCS_DS_PATH + '/tfrecords-jpeg-512x512'
}
GCS_PATH = GCS_PATH_SELECT[IMAGE_SIZE[0]]
TRAINING_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/train/*.tfrec')
VALIDATION_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/val/*.tfrec')
TEST_FILENAMES = tf.io.gfile.glob(GCS_PATH + '/test/*.tfrec')
CLASSES = ['pink primrose', 'hard-leaved pocket orchid', 'canterbury bells', 'sweet pea', 'wild geranium', 'tiger lily', 'moon orchid', 'bird of paradise', 'monkshood', 'globe thistle', # 00 - 09
'snapdragon', "colt's foot", 'king protea', 'spear thistle', 'yellow iris', 'globe-flower', 'purple coneflower', 'peruvian lily', 'balloon flower', 'giant white arum lily', # 10 - 19
'fire lily', 'pincushion flower', 'fritillary', 'red ginger', 'grape hyacinth', 'corn poppy', 'prince of wales feathers', 'stemless gentian', 'artichoke', 'sweet william', # 20 - 29
'carnation', 'garden phlox', 'love in the mist', 'cosmos', 'alpine sea holly', 'ruby-lipped cattleya', 'cape flower', 'great masterwort', 'siam tulip', 'lenten rose', # 30 - 39
'barberton daisy', 'daffodil', 'sword lily', 'poinsettia', 'bolero deep blue', 'wallflower', 'marigold', 'buttercup', 'daisy', 'common dandelion', # 40 - 49
'petunia', 'wild pansy', 'primula', 'sunflower', 'lilac hibiscus', 'bishop of llandaff', 'gaura', 'geranium', 'orange dahlia', 'pink-yellow dahlia', # 50 - 59
'cautleya spicata', 'japanese anemone', 'black-eyed susan', 'silverbush', 'californian poppy', 'osteospermum', 'spring crocus', 'iris', 'windflower', 'tree poppy', # 60 - 69
'gazania', 'azalea', 'water lily', 'rose', 'thorn apple', 'morning glory', 'passion flower', 'lotus', 'toad lily', 'anthurium', # 70 - 79
'frangipani', 'clematis', 'hibiscus', 'columbine', 'desert-rose', 'tree mallow', 'magnolia', 'cyclamen ', 'watercress', 'canna lily', # 80 - 89
'hippeastrum ', 'bee balm', 'pink quill', 'foxglove', 'bougainvillea', 'camellia', 'mallow', 'mexican petunia', 'bromelia', 'blanket flower', # 90 - 99
'trumpet creeper', 'blackberry lily', 'common tulip', 'wild rose'] # 100 - 102
@tf.function
def lrfn(epoch):
if float(epoch) < LR_RAMPUP_EPOCHS:
lr = (LR_MAX - LR_START) / LR_RAMPUP_EPOCHS * float(epoch) + LR_START
elif float(epoch) < LR_RAMPUP_EPOCHS + LR_SUSTAIN_EPOCHS:
lr = LR_MAX
else:
lr = (LR_MAX - LR_MIN) * LR_EXP_DECAY**(float(epoch) - LR_RAMPUP_EPOCHS - LR_SUSTAIN_EPOCHS) + LR_MIN
return lr
lr_callback = tf.keras.callbacks.LearningRateScheduler(lrfn, verbose=True)
rng = [i for i in range(EPOCHS)]
y = [lrfn(x) for x in rng]
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
###Output
Learning rate schedule: 1e-05 to 0.00032 to 2.25e-05
###Markdown
Datasets
###Code
def decode_image(image_data):
image = tf.image.decode_jpeg(image_data, channels = 3)
image = tf.reshape(image, [*IMAGE_SIZE, 3])
return image
def read_labeled_tfrecord(example):
LABELED_TFREC_FORMAT = {
"image" : tf.io.FixedLenFeature([], tf.string),
"class" : tf.io.FixedLenFeature([], tf.int64)
}
example = tf.io.parse_single_example(example, LABELED_TFREC_FORMAT)
image = decode_image(example['image'])
label = tf.cast(example['class'], tf.int32)
return image, label
def read_unlabeled_tfrecord(example):
UNLABELED_TFREC_FORMAT = {
"image" : tf.io.FixedLenFeature([], tf.string),
"id" : tf.io.FixedLenFeature([], tf.string)
}
example = tf.io.parse_single_example(example, UNLABELED_TFREC_FORMAT)
image = decode_image(example['image'])
idnum = example['id']
return image, idnum
def load_dataset(filenames, labeled = True, ordered = False):
ignore_order = tf.data.Options()
if not ordered:
ignore_order.experimental_deterministic = False
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads = AUTO)
dataset = dataset.with_options(ignore_order)
dataset = dataset.map(read_labeled_tfrecord if labeled else read_unlabeled_tfrecord, num_parallel_calls = AUTO)
return dataset
def data_augment(image, label):
image = tf.image.random_flip_left_right(image)
return image, label
def get_training_dataset():
dataset = load_dataset(TRAINING_FILENAMES, labeled = True)
dataset = dataset.map(data_augment, num_parallel_calls = AUTO)
dataset = dataset.repeat()
dataset = dataset.shuffle(2048)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO)
return dataset
def get_validation_dataset(ordered = False, repeated = False):
dataset = load_dataset(VALIDATION_FILENAMES, labeled = True, ordered = ordered)
if repeated:
dataset = dataset.repeat()
dataset = dataset.shuffle(2048)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.cache()
dataset = dataset.prefetch(AUTO)
return dataset
def get_test_dataset(ordered = False):
dataset = load_dataset(TEST_FILENAMES, labeled = False, ordered = ordered)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO)
return dataset
def count_data_items(filenames):
n = [int(re.compile(r"-([0-9]*)\.").search(filename).group(1)) for filename in filenames]
return np.sum(n)
def int_div_round_up(a, b):
return (a + b - 1) // b
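# e.g. int_div_round_up(3712, 128) == 29 - ceiling division used for the step counts below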
NUM_TRAINING_IMAGES = count_data_items(TRAINING_FILENAMES)
NUM_VALIDATION_IMAGES = count_data_items(VALIDATION_FILENAMES)
NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)
STEPS_PER_EPOCH = NUM_TRAINING_IMAGES // BATCH_SIZE
#VALIDATION_STEPS = -(-NUM_VALIDATION_IMAGES // BATCH_SIZE)
VALIDATION_STEPS = int_div_round_up(NUM_VALIDATION_IMAGES, BATCH_SIZE)
#TEST_STEPS = -(-NUM_TEST_IMAGES // BATCH_SIZE)
TEST_STEPS = int_div_round_up(NUM_TEST_IMAGES, BATCH_SIZE)
print("Dataset : {} training images, {} validation images, {} unlabeled test images".format(NUM_TRAINING_IMAGES, NUM_VALIDATION_IMAGES, NUM_TEST_IMAGES))
###Output
Dataset : 12753 training images, 3712 validation images, 7382 unlabeled test images
###Markdown
Model Training
###Code
with strategy.scope():
pretrained_model = tf.keras.applications.NASNetLarge(weights = 'imagenet', include_top = False)
model = tf.keras.Sequential([
# convert image format from int [0,255] to the format expected by this model
tf.keras.layers.Lambda(lambda data: tf.keras.applications.nasnet.preprocess_input(tf.cast(data, tf.float32)), input_shape=[*IMAGE_SIZE, 3]),
pretrained_model,
# models in tf.keras.applications with include_top=False output a 3D feature map which must be converted to 2D
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(len(CLASSES), activation='softmax')
])
model.compile(optimizer = 'adam', loss = 'sparse_categorical_crossentropy', metrics = ['sparse_categorical_accuracy'], steps_per_execution = 16)
model.summary()
###Output
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/nasnet/NASNet-large-no-top.h5
343613440/343610240 [==============================] - 3s 0us/step
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lambda (Lambda) (None, 331, 331, 3) 0
_________________________________________________________________
NASNet (Functional) (None, 11, 11, 4032) 84916818
_________________________________________________________________
global_average_pooling2d (Gl (None, 4032) 0
_________________________________________________________________
dense (Dense) (None, 104) 419432
=================================================================
Total params: 85,336,250
Trainable params: 85,139,582
Non-trainable params: 196,668
_________________________________________________________________
###Markdown
Model Training
###Code
history = model.fit(get_training_dataset(), steps_per_epoch = STEPS_PER_EPOCH, epochs = EPOCHS, validation_data = get_validation_dataset(), validation_steps = VALIDATION_STEPS, callbacks = [lr_callback])
###Output
Epoch 1/13
Epoch 00001: LearningRateScheduler reducing learning rate to tf.Tensor(1e-05, shape=(), dtype=float32).
99/99 [==============================] - 482s 5s/step - loss: 4.3805 - sparse_categorical_accuracy: 0.1119 - val_loss: 3.7531 - val_sparse_categorical_accuracy: 0.2594
Epoch 2/13
Epoch 00002: LearningRateScheduler reducing learning rate to tf.Tensor(0.000113333335, shape=(), dtype=float32).
99/99 [==============================] - 42s 428ms/step - loss: 1.7567 - sparse_categorical_accuracy: 0.6096 - val_loss: 1.3810 - val_sparse_categorical_accuracy: 0.6695
Epoch 3/13
Epoch 00003: LearningRateScheduler reducing learning rate to tf.Tensor(0.00021666667, shape=(), dtype=float32).
99/99 [==============================] - 42s 428ms/step - loss: 0.3372 - sparse_categorical_accuracy: 0.9116 - val_loss: 0.8771 - val_sparse_categorical_accuracy: 0.7885
Epoch 4/13
Epoch 00004: LearningRateScheduler reducing learning rate to tf.Tensor(0.00032, shape=(), dtype=float32).
99/99 [==============================] - 42s 428ms/step - loss: 0.2025 - sparse_categorical_accuracy: 0.9456 - val_loss: 1.2789 - val_sparse_categorical_accuracy: 0.7322
Epoch 5/13
Epoch 00005: LearningRateScheduler reducing learning rate to tf.Tensor(0.000227, shape=(), dtype=float32).
99/99 [==============================] - 42s 428ms/step - loss: 0.1334 - sparse_categorical_accuracy: 0.9628 - val_loss: 1.0519 - val_sparse_categorical_accuracy: 0.7697
Epoch 6/13
Epoch 00006: LearningRateScheduler reducing learning rate to tf.Tensor(0.0001619, shape=(), dtype=float32).
99/99 [==============================] - 42s 428ms/step - loss: 0.0375 - sparse_categorical_accuracy: 0.9893 - val_loss: 0.7152 - val_sparse_categorical_accuracy: 0.8370
Epoch 7/13
Epoch 00007: LearningRateScheduler reducing learning rate to tf.Tensor(0.00011633, shape=(), dtype=float32).
99/99 [==============================] - 42s 428ms/step - loss: 0.0160 - sparse_categorical_accuracy: 0.9952 - val_loss: 0.6906 - val_sparse_categorical_accuracy: 0.8435
Epoch 8/13
Epoch 00008: LearningRateScheduler reducing learning rate to tf.Tensor(8.4431e-05, shape=(), dtype=float32).
99/99 [==============================] - 42s 428ms/step - loss: 0.0042 - sparse_categorical_accuracy: 0.9996 - val_loss: 0.5916 - val_sparse_categorical_accuracy: 0.8669
Epoch 9/13
Epoch 00009: LearningRateScheduler reducing learning rate to tf.Tensor(6.21017e-05, shape=(), dtype=float32).
99/99 [==============================] - 42s 428ms/step - loss: 0.0036 - sparse_categorical_accuracy: 0.9994 - val_loss: 0.5652 - val_sparse_categorical_accuracy: 0.8750
Epoch 10/13
Epoch 00010: LearningRateScheduler reducing learning rate to tf.Tensor(4.647119e-05, shape=(), dtype=float32).
99/99 [==============================] - 42s 427ms/step - loss: 0.0023 - sparse_categorical_accuracy: 0.9996 - val_loss: 0.5262 - val_sparse_categorical_accuracy: 0.8820
Epoch 11/13
Epoch 00011: LearningRateScheduler reducing learning rate to tf.Tensor(3.5529833e-05, shape=(), dtype=float32).
99/99 [==============================] - 42s 427ms/step - loss: 0.0022 - sparse_categorical_accuracy: 0.9996 - val_loss: 0.4936 - val_sparse_categorical_accuracy: 0.8906
Epoch 12/13
Epoch 00012: LearningRateScheduler reducing learning rate to tf.Tensor(2.7870883e-05, shape=(), dtype=float32).
99/99 [==============================] - 42s 427ms/step - loss: 0.0017 - sparse_categorical_accuracy: 0.9996 - val_loss: 0.4716 - val_sparse_categorical_accuracy: 0.9011
Epoch 13/13
Epoch 00013: LearningRateScheduler reducing learning rate to tf.Tensor(2.2509617e-05, shape=(), dtype=float32).
99/99 [==============================] - 42s 428ms/step - loss: 0.0014 - sparse_categorical_accuracy: 0.9998 - val_loss: 0.4607 - val_sparse_categorical_accuracy: 0.9022
###Markdown
Model Custom Training Loop (Coming soon)
###Code
with strategy.scope():
pretrained_model = tf.keras.applications.Xception(weights = 'imagenet', include_top = False, input_shape = [*IMAGE_SIZE, 3])
pretrained_model.trainable = True
model = tf.keras.Sequential([
tf.keras.layers.Lambda(lambda data: tf.keras.applications.xception.preprocess_input(tf.cast(data, tf.float32)), input_shape = [*IMAGE_SIZE, 3]),
pretrained_model,
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(len(CLASSES), activation = 'softmax')
])
model.summary()
class LRSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
def __call__(self, step):
return lrfn(epoch = step // STEPS_PER_EPOCH)
optimizer = tf.keras.optimizers.Adam(learning_rate = LRSchedule())
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
valid_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
train_loss = tf.keras.metrics.Sum()
valid_loss = tf.keras.metrics.Sum()
loss_fn = lambda a,b : tf.nn.compute_average_loss(tf.keras.losses.sparse_categorical_crossentropy(a,b), global_batch_size = BATCH_SIZE)
@tf.function
def train_step(images, labels):
with tf.GradientTape() as tape:
probabilities = model(images, training = True)
loss = loss_fn(labels, probabilities)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_accuracy.update_state(labels, probabilities)
train_loss.update_state(loss)
@tf.function
def valid_step(images, labels):
probabilities = model(images, training = False)
loss = loss_fn(labels, probabilities)
# update metrics
valid_accuracy.update_state(labels, probabilities)
valid_loss.update_state(loss)
###Output
_____no_output_____
###Markdown
Training Loop
###Code
import time
start_time = epoch_start_time = time.time()
train_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset())
valid_dist_ds = strategy.experimental_distribute_dataset(get_validation_dataset())
print("Steps per epoch: ", STEPS_PER_EPOCH)
from collections import namedtuple
History = namedtuple('History', 'history')
history = History(history = {"loss" : [], "val_loss" : [], "sparse_categorical_accuracy" : [], "val_sparse_categorical_accuracy" : []})
epoch = 0
for step, (images, labels) in enumerate(train_dist_ds):
strategy.run(train_step, args = (images, labels))
print('=', end = '', flush = True)
if ((step + 1) // STEPS_PER_EPOCH) > epoch:
print('|', end = '', flush = True)
for image, labels in valid_dist_ds:
strategy.run(valid_step, args = (image, labels))
print("=", end = '', flush = True)
# metrics (also record the summed losses so the loss prints below have values to report)
history.history['sparse_categorical_accuracy'].append(train_accuracy.result().numpy())
history.history['val_sparse_categorical_accuracy'].append(valid_accuracy.result().numpy())
history.history['loss'].append(train_loss.result().numpy() / STEPS_PER_EPOCH)
history.history['val_loss'].append(valid_loss.result().numpy() / VALIDATION_STEPS)
epoch_time = time.time() - epoch_start_time
print("\nEPOCH {:d}/{:d}".format(epoch + 1, EPOCHS))
print("time : {:0.1f}s".format(epoch_time))
print("loss : {:0.4f}".format(history.history['loss'][-1]))
print("accuracy : {:0.4f}".format(history.history['sparse_categorical_accuracy'][-1]))
print("val_loss : {:0.4f}".format(history.history['val_loss'][-1]))
print("val_acc : {:0.4f}".format(history.history["val_sparse_categorical_accuracy"][-1]))
print("lr : {:0.4g}".format(lrfn(epoch)), flush = True)
epoch = (step + 1) // STEPS_PER_EPOCH
epoch_start_time = time.time()
train_accuracy.reset_states()
valid_accuracy.reset_states()
valid_loss.reset_states()
train_loss.reset_states()
if epoch >= EPOCHS:
break
simple_ctl_training_time = time.time() - start_time
print("Training Time -> ", simple_ctl_training_time)
###Output
===================================================================================================|=============================
EPOCH 1/13
time : 1.0s
###Markdown
Optimized Model Training
###Code
with strategy.scope():
pretrained_model = tf.keras.applications.Xception(weights='imagenet', include_top=False ,input_shape=[*IMAGE_SIZE, 3])
pretrained_model.trainable = True # False = transfer learning, True = fine-tuning
model = tf.keras.Sequential([
# convert image format from int [0,255] to the format expected by this model
tf.keras.layers.Lambda(lambda data: tf.keras.applications.xception.preprocess_input(tf.cast(data, tf.float32)), input_shape=[*IMAGE_SIZE, 3]),
pretrained_model,
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(len(CLASSES), activation='softmax')
])
model.summary()
# Instiate optimizer with learning rate schedule
class LRSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
def __call__(self, step):
return lrfn(epoch=step//STEPS_PER_EPOCH)
optimizer = tf.keras.optimizers.Adam(learning_rate=LRSchedule())
# this also works but is not very readable
#optimizer = tf.keras.optimizers.Adam(learning_rate=lambda: lrfn(tf.cast(optimizer.iterations, tf.float32)//STEPS_PER_EPOCH))
# Instantiate metrics
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
valid_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
train_loss = tf.keras.metrics.Sum()
valid_loss = tf.keras.metrics.Sum()
# Loss
# The recommendation from the Tensorflow custom training loop documentation is:
# loss_fn = lambda a,b: tf.nn.compute_average_loss(tf.keras.losses.sparse_categorical_crossentropy(a,b), global_batch_size=BATCH_SIZE)
# https://www.tensorflow.org/tutorials/distribute/custom_training#define_the_loss_function
# This works too and shifts all the averaging to the training loop which is easier:
loss_fn = tf.keras.losses.sparse_categorical_crossentropy
STEPS_PER_TPU_CALL = 99
VALIDATION_STEPS_PER_TPU_CALL = 29
@tf.function
def train_step(data_iter):
def train_step_fn(images, labels):
with tf.GradientTape() as tape:
probabilities = model(images, training=True)
loss = loss_fn(labels, probabilities)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
#update metrics
train_accuracy.update_state(labels, probabilities)
train_loss.update_state(loss)
# this loop runs on the TPU
for _ in tf.range(STEPS_PER_TPU_CALL):
strategy.run(train_step_fn, next(data_iter))
@tf.function
def valid_step(data_iter):
def valid_step_fn(images, labels):
probabilities = model(images, training=False)
loss = loss_fn(labels, probabilities)
# update metrics
valid_accuracy.update_state(labels, probabilities)
valid_loss.update_state(loss)
# this loop runs on the TPU
for _ in tf.range(VALIDATION_STEPS_PER_TPU_CALL):
strategy.run(valid_step_fn, next(data_iter))
import time
from collections import namedtuple
start_time = epoch_start_time = time.time()
# distribute the datset according to the strategy
train_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset())
# Hitting End Of Dataset exceptions is a problem in this setup. Using a repeated validation set instead.
# This will introduce a slight inaccuracy because the validation dataset now has some repeated elements.
valid_dist_ds = strategy.experimental_distribute_dataset(get_validation_dataset(repeated=True))
print("Training steps per epoch:", STEPS_PER_EPOCH, "in increments of", STEPS_PER_TPU_CALL)
print("Validation images:", NUM_VALIDATION_IMAGES,
"Batch size:", BATCH_SIZE,
"Validation steps:", NUM_VALIDATION_IMAGES//BATCH_SIZE, "in increments of", VALIDATION_STEPS_PER_TPU_CALL)
print("Repeated validation images:", int_div_round_up(NUM_VALIDATION_IMAGES, BATCH_SIZE*VALIDATION_STEPS_PER_TPU_CALL)*VALIDATION_STEPS_PER_TPU_CALL*BATCH_SIZE-NUM_VALIDATION_IMAGES)
History = namedtuple('History', 'history')
history = History(history={'loss': [], 'val_loss': [], 'sparse_categorical_accuracy': [], 'val_sparse_categorical_accuracy': []})
epoch = 0
train_data_iter = iter(train_dist_ds) # the training data iterator is repeated and it is not reset
# for each validation run (same as model.fit)
valid_data_iter = iter(valid_dist_ds) # the validation data iterator is repeated and it is not reset
# for each validation run (different from model.fit whre the
# recommendation is to use a non-repeating validation dataset)
step = 0
epoch_steps = 0
while True:
# run training step
train_step(train_data_iter)
epoch_steps += STEPS_PER_TPU_CALL
step += STEPS_PER_TPU_CALL
print('=', end='', flush=True)
# validation run at the end of each epoch
if (step // STEPS_PER_EPOCH) > epoch:
print('|', end='', flush=True)
# validation run
valid_epoch_steps = 0
for _ in range(int_div_round_up(NUM_VALIDATION_IMAGES, BATCH_SIZE*VALIDATION_STEPS_PER_TPU_CALL)):
valid_step(valid_data_iter)
valid_epoch_steps += VALIDATION_STEPS_PER_TPU_CALL
print('=', end='', flush=True)
# compute metrics
history.history['sparse_categorical_accuracy'].append(train_accuracy.result().numpy())
history.history['val_sparse_categorical_accuracy'].append(valid_accuracy.result().numpy())
history.history['loss'].append(train_loss.result().numpy() / (BATCH_SIZE*epoch_steps))
history.history['val_loss'].append(valid_loss.result().numpy() / (BATCH_SIZE*valid_epoch_steps))
# report metrics
epoch_time = time.time() - epoch_start_time
print('\nEPOCH {:d}/{:d}'.format(epoch+1, EPOCHS))
print('time: {:0.1f}s'.format(epoch_time),
'loss: {:0.4f}'.format(history.history['loss'][-1]),
'accuracy: {:0.4f}'.format(history.history['sparse_categorical_accuracy'][-1]),
'val_loss: {:0.4f}'.format(history.history['val_loss'][-1]),
'val_acc: {:0.4f}'.format(history.history['val_sparse_categorical_accuracy'][-1]),
'lr: {:0.4g}'.format(lrfn(epoch)),
'steps/val_steps: {:d}/{:d}'.format(epoch_steps, valid_epoch_steps), flush=True)
# set up next epoch
epoch = step // STEPS_PER_EPOCH
epoch_steps = 0
epoch_start_time = time.time()
train_accuracy.reset_states()
valid_accuracy.reset_states()
valid_loss.reset_states()
train_loss.reset_states()
if epoch >= EPOCHS:
break
optimized_ctl_training_time = time.time() - start_time
print("OPTIMIZED CTL TRAINING TIME: {:0.1f}s".format(optimized_ctl_training_time))
###Output
Training steps per epoch: 99 in increments of 99
Validation images: 3712 Batch size: 128 Validation steps: 29 in increments of 29
Repeated validation images: 0
=|=
EPOCH 1/13
time: 61.7s loss: 4.3838 accuracy: 0.1232 val_loss: 4.1617 val_acc: 0.2325 lr: 1e-05 steps/val_steps: 99/29
=|=
EPOCH 2/13
time: 9.9s loss: 2.3489 accuracy: 0.5249 val_loss: 1.0876 val_acc: 0.7516 lr: 0.0001133 steps/val_steps: 99/29
=|=
EPOCH 3/13
time: 10.0s loss: 0.7661 accuracy: 0.8362 val_loss: 0.4820 val_acc: 0.8874 lr: 0.0002167 steps/val_steps: 99/29
=|=
EPOCH 4/13
time: 9.4s loss: 0.2903 accuracy: 0.9384 val_loss: 0.4148 val_acc: 0.8982 lr: 0.00032 steps/val_steps: 99/29
=|=
EPOCH 5/13
time: 9.5s loss: 0.1116 accuracy: 0.9771 val_loss: 0.3332 val_acc: 0.9159 lr: 0.000227 steps/val_steps: 99/29
=|=
EPOCH 6/13
time: 9.4s loss: 0.0459 accuracy: 0.9929 val_loss: 0.3184 val_acc: 0.9275 lr: 0.0001619 steps/val_steps: 99/29
=|=
EPOCH 7/13
time: 9.5s loss: 0.0269 accuracy: 0.9971 val_loss: 0.2831 val_acc: 0.9329 lr: 0.0001163 steps/val_steps: 99/29
=|=
EPOCH 8/13
time: 9.6s loss: 0.0192 accuracy: 0.9981 val_loss: 0.2899 val_acc: 0.9297 lr: 8.443e-05 steps/val_steps: 99/29
=|=
EPOCH 9/13
time: 9.4s loss: 0.0130 accuracy: 0.9991 val_loss: 0.2747 val_acc: 0.9313 lr: 6.21e-05 steps/val_steps: 99/29
=|=
EPOCH 10/13
time: 9.4s loss: 0.0114 accuracy: 0.9994 val_loss: 0.2969 val_acc: 0.9283 lr: 4.647e-05 steps/val_steps: 99/29
=|=
EPOCH 11/13
time: 9.4s loss: 0.0102 accuracy: 0.9998 val_loss: 0.2753 val_acc: 0.9270 lr: 3.553e-05 steps/val_steps: 99/29
=|=
EPOCH 12/13
time: 9.4s loss: 0.0099 accuracy: 0.9997 val_loss: 0.3237 val_acc: 0.9224 lr: 2.787e-05 steps/val_steps: 99/29
=|=
EPOCH 13/13
time: 9.4s loss: 0.0090 accuracy: 0.9998 val_loss: 0.2699 val_acc: 0.9327 lr: 2.251e-05 steps/val_steps: 99/29
OPTIMIZED CTL TRAINING TIME: 176.2s
###Markdown
Optimized Model Training + Mixed Precision
###Code
### Mixed Precision Training
MIXED_PRECISION = True
XLA_ACCELERATE = True
if MIXED_PRECISION:
from tensorflow.keras.mixed_precision import experimental as mixed_precision
if tpu: policy = tf.keras.mixed_precision.experimental.Policy('mixed_bfloat16')
else: policy = tf.keras.mixed_precision.experimental.Policy("float32")
mixed_precision.set_policy(policy)
print("Mixed Precision enabled")
if XLA_ACCELERATE:
tf.config.optimizer.set_jit(True)
print("XLA Enabled")
with strategy.scope():
pretrained_model = tf.keras.applications.Xception(weights='imagenet', include_top=False ,input_shape=[*IMAGE_SIZE, 3])
pretrained_model.trainable = True # False = transfer learning, True = fine-tuning
model = tf.keras.Sequential([
# convert image format from int [0,255] to the format expected by this model
tf.keras.layers.Lambda(lambda data: tf.keras.applications.xception.preprocess_input(tf.cast(data, tf.float32)), input_shape=[*IMAGE_SIZE, 3]),
pretrained_model,
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(len(CLASSES), activation='softmax')
])
model.summary()
# Instiate optimizer with learning rate schedule
class LRSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
def __call__(self, step):
return lrfn(epoch=step//STEPS_PER_EPOCH)
optimizer = tf.keras.optimizers.Adam(learning_rate=LRSchedule())
# this also works but is not very readable
#optimizer = tf.keras.optimizers.Adam(learning_rate=lambda: lrfn(tf.cast(optimizer.iterations, tf.float32)//STEPS_PER_EPOCH))
# Instantiate metrics
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
valid_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
train_loss = tf.keras.metrics.Sum()
valid_loss = tf.keras.metrics.Sum()
# Loss
# The recommendation from the Tensorflow custom training loop documentation is:
# loss_fn = lambda a,b: tf.nn.compute_average_loss(tf.keras.losses.sparse_categorical_crossentropy(a,b), global_batch_size=BATCH_SIZE)
# https://www.tensorflow.org/tutorials/distribute/custom_training#define_the_loss_function
# This works too and shifts all the averaging to the training loop which is easier:
loss_fn = tf.keras.losses.sparse_categorical_crossentropy
STEPS_PER_TPU_CALL = 99
VALIDATION_STEPS_PER_TPU_CALL = 29
@tf.function
def train_step(data_iter):
def train_step_fn(images, labels):
with tf.GradientTape() as tape:
probabilities = model(images, training=True)
loss = loss_fn(labels, probabilities)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
#update metrics
train_accuracy.update_state(labels, probabilities)
train_loss.update_state(loss)
# this loop runs on the TPU
for _ in tf.range(STEPS_PER_TPU_CALL):
strategy.run(train_step_fn, next(data_iter))
@tf.function
def valid_step(data_iter):
def valid_step_fn(images, labels):
probabilities = model(images, training=False)
loss = loss_fn(labels, probabilities)
# update metrics
valid_accuracy.update_state(labels, probabilities)
valid_loss.update_state(loss)
# this loop runs on the TPU
for _ in tf.range(VALIDATION_STEPS_PER_TPU_CALL):
strategy.run(valid_step_fn, next(data_iter))
import time
from collections import namedtuple
start_time = epoch_start_time = time.time()
# distribute the datset according to the strategy
train_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset())
# Hitting End Of Dataset exceptions is a problem in this setup. Using a repeated validation set instead.
# This will introduce a slight inaccuracy because the validation dataset now has some repeated elements.
valid_dist_ds = strategy.experimental_distribute_dataset(get_validation_dataset(repeated=True))
print("Training steps per epoch:", STEPS_PER_EPOCH, "in increments of", STEPS_PER_TPU_CALL)
print("Validation images:", NUM_VALIDATION_IMAGES,
"Batch size:", BATCH_SIZE,
"Validation steps:", NUM_VALIDATION_IMAGES//BATCH_SIZE, "in increments of", VALIDATION_STEPS_PER_TPU_CALL)
print("Repeated validation images:", int_div_round_up(NUM_VALIDATION_IMAGES, BATCH_SIZE*VALIDATION_STEPS_PER_TPU_CALL)*VALIDATION_STEPS_PER_TPU_CALL*BATCH_SIZE-NUM_VALIDATION_IMAGES)
History = namedtuple('History', 'history')
history = History(history={'loss': [], 'val_loss': [], 'sparse_categorical_accuracy': [], 'val_sparse_categorical_accuracy': []})
epoch = 0
train_data_iter = iter(train_dist_ds) # the training data iterator is repeated and it is not reset
# for each validation run (same as model.fit)
valid_data_iter = iter(valid_dist_ds) # the validation data iterator is repeated and it is not reset
# for each validation run (different from model.fit whre the
# recommendation is to use a non-repeating validation dataset)
step = 0
epoch_steps = 0
while True:
# run training step
train_step(train_data_iter)
epoch_steps += STEPS_PER_TPU_CALL
step += STEPS_PER_TPU_CALL
print('=', end='', flush=True)
# validation run at the end of each epoch
if (step // STEPS_PER_EPOCH) > epoch:
print('|', end='', flush=True)
# validation run
valid_epoch_steps = 0
for _ in range(int_div_round_up(NUM_VALIDATION_IMAGES, BATCH_SIZE*VALIDATION_STEPS_PER_TPU_CALL)):
valid_step(valid_data_iter)
valid_epoch_steps += VALIDATION_STEPS_PER_TPU_CALL
print('=', end='', flush=True)
# compute metrics
history.history['sparse_categorical_accuracy'].append(train_accuracy.result().numpy())
history.history['val_sparse_categorical_accuracy'].append(valid_accuracy.result().numpy())
history.history['loss'].append(train_loss.result().numpy() / (BATCH_SIZE*epoch_steps))
history.history['val_loss'].append(valid_loss.result().numpy() / (BATCH_SIZE*valid_epoch_steps))
# report metrics
epoch_time = time.time() - epoch_start_time
print('\nEPOCH {:d}/{:d}'.format(epoch+1, EPOCHS))
print('time: {:0.1f}s'.format(epoch_time),
'loss: {:0.4f}'.format(history.history['loss'][-1]),
'accuracy: {:0.4f}'.format(history.history['sparse_categorical_accuracy'][-1]),
'val_loss: {:0.4f}'.format(history.history['val_loss'][-1]),
'val_acc: {:0.4f}'.format(history.history['val_sparse_categorical_accuracy'][-1]),
'lr: {:0.4g}'.format(lrfn(epoch)),
'steps/val_steps: {:d}/{:d}'.format(epoch_steps, valid_epoch_steps), flush=True)
# set up next epoch
epoch = step // STEPS_PER_EPOCH
epoch_steps = 0
epoch_start_time = time.time()
train_accuracy.reset_states()
valid_accuracy.reset_states()
valid_loss.reset_states()
train_loss.reset_states()
if epoch >= EPOCHS:
break
optimized_ctl_training_time = time.time() - start_time
print("OPTIMIZED CTL TRAINING TIME: {:0.1f}s".format(optimized_ctl_training_time))
###Output
Training steps per epoch: 99 in increments of 99
Validation images: 3712 Batch size: 128 Validation steps: 29 in increments of 29
Repeated validation images: 0
=|=
EPOCH 1/13
time: 67.4s loss: 4.4414 accuracy: 0.0890 val_loss: 4.2116 val_acc: 0.1899 lr: 1e-05 steps/val_steps: 99/29
=|=
EPOCH 2/13
time: 8.5s loss: 2.4025 accuracy: 0.5075 val_loss: 1.0354 val_acc: 0.7645 lr: 0.0001133 steps/val_steps: 99/29
=|=
EPOCH 3/13
time: 8.4s loss: 0.7554 accuracy: 0.8436 val_loss: 0.4436 val_acc: 0.8917 lr: 0.0002167 steps/val_steps: 99/29
=|=
EPOCH 4/13
time: 7.8s loss: 0.2859 accuracy: 0.9395 val_loss: 0.3928 val_acc: 0.9036 lr: 0.00032 steps/val_steps: 99/29
=|=
EPOCH 5/13
time: 7.7s loss: 0.1102 accuracy: 0.9779 val_loss: 0.3736 val_acc: 0.9044 lr: 0.000227 steps/val_steps: 99/29
=|=
EPOCH 6/13
time: 7.8s loss: 0.0512 accuracy: 0.9910 val_loss: 0.3140 val_acc: 0.9170 lr: 0.0001619 steps/val_steps: 99/29
=|=
EPOCH 7/13
time: 7.8s loss: 0.0282 accuracy: 0.9964 val_loss: 0.2950 val_acc: 0.9211 lr: 0.0001163 steps/val_steps: 99/29
=|=
EPOCH 8/13
time: 7.8s loss: 0.0209 accuracy: 0.9972 val_loss: 0.2969 val_acc: 0.9230 lr: 8.443e-05 steps/val_steps: 99/29
=|=
EPOCH 9/13
time: 7.8s loss: 0.0130 accuracy: 0.9989 val_loss: 0.2856 val_acc: 0.9251 lr: 6.21e-05 steps/val_steps: 99/29
=|=
EPOCH 10/13
time: 7.7s loss: 0.0116 accuracy: 0.9992 val_loss: 0.2989 val_acc: 0.9251 lr: 4.647e-05 steps/val_steps: 99/29
=|=
EPOCH 11/13
time: 7.8s loss: 0.0111 accuracy: 0.9993 val_loss: 0.2951 val_acc: 0.9221 lr: 3.553e-05 steps/val_steps: 99/29
=|=
EPOCH 12/13
time: 7.8s loss: 0.0094 accuracy: 0.9993 val_loss: 0.2775 val_acc: 0.9316 lr: 2.787e-05 steps/val_steps: 99/29
=|=
EPOCH 13/13
time: 7.8s loss: 0.0090 accuracy: 0.9994 val_loss: 0.3037 val_acc: 0.9256 lr: 2.251e-05 steps/val_steps: 99/29
OPTIMIZED CTL TRAINING TIME: 162.2s
###Markdown
Generating the confusion matrix
###Code
cmdataset = get_validation_dataset(ordered = True)
images_ds = cmdataset.map(lambda image, label : image)
labels_ds = cmdataset.map(lambda image, label : label).unbatch()
cm_correct_labels = next(iter(labels_ds.batch(NUM_VALIDATION_IMAGES))).numpy()
cm_probabilities = model.predict(images_ds, steps = VALIDATION_STEPS)
cm_predictions = np.argmax(cm_probabilities, axis = -1)
print("Correct Labels : ", cm_correct_labels.shape, cm_correct_labels)
print("Predicted Labels: ", cm_predictions.shape, cm_predictions)
test_ds = get_test_dataset(ordered = True)
print("Computing predictions...")
test_images_ds = test_ds.map(lambda image, idnum : image)
probabilities = model.predict(test_images_ds, steps = TEST_STEPS)
predictions = np.argmax(probabilities, axis = -1)
print(predictions)
print("generating submission file")
test_ids_ds = test_ds.map(lambda image, idnum : idnum).unbatch()
test_ids = next(iter(test_ids_ds.batch(NUM_TEST_IMAGES))).numpy().astype('U')
np.savetxt("submission.csv", np.rec.fromarrays([test_ids, predictions]), fmt = ['%s', '%d'], delimiter = ',', header = 'id,label', comments = '')
!head submission.csv
###Output
Computing predictions...
|
data_prep/shard_inspect.ipynb | ###Markdown
###Code
for str_rec in tf.python_io.tf_record_iterator("./tf_shards/train_shards/s300_shard_0.tfrecord"):
example = tf.train.Example()
example.ParseFromString(str_rec)
print(dict(example.features.feature).keys())
break
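# Note: tf.python_io.tf_record_iterator is the TF 1.x API; with TF 2.x the same inspection
# could be done roughly like this (sketch, assuming eager execution):
# for raw in tf.data.TFRecordDataset("./tf_shards/train_shards/s300_shard_0.tfrecord").take(1):
#     example = tf.train.Example()
#     example.ParseFromString(raw.numpy())
#     print(dict(example.features.feature).keys())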
print(tf.__version__)
if tf.test.gpu_device_name():
print('Default GPU Device:{}'.format(tf.test.gpu_device_name()))
else:
print("Please install GPU version of TF")
###Output
Default GPU Device:/device:GPU:0
|
2-EDA/3-Matplotlib/teoria/Notas_1-Matplotlib general aula.ipynb | ###Markdown
Visualization with Matplotlib We'll now take an in-depth look at the [Matplotlib](https://matplotlib.org/) **package for visualization in Python**.Matplotlib is a **multi-platform** data visualization library built on **NumPy** arrays, and designed to work with the broader **SciPy** stack.It was conceived by John Hunter in 2002, originally as a patch to IPython for enabling interactive MATLAB-style plotting via [gnuplot](http://www.gnuplot.info/) from the IPython command line.IPython's creator, Fernando Perez, was at the time scrambling to finish his PhD, and let John know he wouldn’t have time to review the patch for several months.John took this as a cue to set out on his own, and the Matplotlib package was born, with version 0.1 released in 2003.It received an early boost when it was adopted as the plotting package of choice of the Space Telescope Science Institute (the folks behind the Hubble Telescope), which financially supported Matplotlib’s development and greatly expanded its capabilities.One of Matplotlib’s most important features is its **ability to play well with many operating systems and graphics backends.**Matplotlib supports **dozens of backends and output types**, which means you can count on it to work regardless of which operating system you are using or which output format you wish.This **cross-platform**, everything-to-everyone approach has been one of the great strengths of Matplotlib.It has led to a large user base, which in turn has led to an active developer base and Matplotlib’s powerful tools and ubiquity within the scientific Python world.In recent years, however, the interface and style of Matplotlib have begun to show their age.**Newer tools like ggplot and ggvis in the R language, along with web visualization toolkits based on D3js and HTML5 canvas, often make Matplotlib feel clunky and old-fashioned.**Still, I'm of the opinion that we cannot ignore Matplotlib's strength as a well-tested, cross-platform graphics engine.Recent Matplotlib versions make it relatively easy to set **new global plotting styles** (see [Customizing Matplotlib: Configurations and Style Sheets](04.11-Settings-and-Stylesheets.ipynb)), and **people have been developing new packages** that build on its powerful internals to drive Matplotlib via cleaner, more modern APIs—for example, **Seaborn** (discussed in [Visualization With Seaborn](04.14-Visualization-With-Seaborn.ipynb)), [ggpy](http://yhat.github.io/ggpy/), [HoloViews](http://holoviews.org/), [Altair](http://altair-viz.github.io/), and **even Pandas** itself can be used as wrappers around Matplotlib's API.Even with wrappers like these, **it is still often useful to dive into Matplotlib's syntax to adjust the final plot output.**For this reason, I believe that Matplotlib itself will remain a vital piece of the data visualization stack, even if new tools mean the community gradually moves away from using the Matplotlib API directly. General Matplotlib TipsBefore we dive into the details of creating visualizations with Matplotlib, there are a few useful things you should know about using the package. Importing MatplotlibJust as we use the ``np`` shorthand for NumPy and the ``pd`` shorthand for Pandas, we will use some standard shorthands for Matplotlib imports:
###Code
import matplotlib as mpl
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
The ``plt`` interface is what we will use most often, as we shall see throughout this chapter. Setting StylesWe will use the ``plt.style`` directive to choose appropriate aesthetic [styles](https://matplotlib.org/3.2.1/gallery/style_sheets/style_sheets_reference.html) for our figures.Here we will set the ``classic`` style, which ensures that the plots we create use the classic Matplotlib style:
###Code
plt.style.use('classic')
# Which styles are available by default
print(plt.style.available)
print('\n',len(plt.style.available))
###Output
['Solarize_Light2', '_classic_test_patch', 'bmh', 'classic', 'dark_background', 'fast', 'fivethirtyeight', 'ggplot', 'grayscale', 'seaborn', 'seaborn-bright', 'seaborn-colorblind', 'seaborn-dark', 'seaborn-dark-palette', 'seaborn-darkgrid', 'seaborn-deep', 'seaborn-muted', 'seaborn-notebook', 'seaborn-paper', 'seaborn-pastel', 'seaborn-poster', 'seaborn-talk', 'seaborn-ticks', 'seaborn-white', 'seaborn-whitegrid', 'tableau-colorblind10']
26
###Markdown
Throughout this section, we will adjust this style as needed.Note that the stylesheets used here are supported as of Matplotlib version 1.5; if you are using an earlier version of Matplotlib, only the default style is available.For more information on stylesheets, see [Customizing Matplotlib: Configurations and Style Sheets](https://matplotlib.org/3.3.1/tutorials/introductory/customizing.html). ``show()`` or No ``show()``? How to Display Your Plots A visualization you can't see won't be of much use, but just how you view your Matplotlib plots depends on the context.The best use of Matplotlib differs depending on how you are using it; roughly, **the three applicable contexts are using Matplotlib in a script, in an IPython terminal, or in an IPython notebook.** Plotting from a scriptIf you are using Matplotlib from within a script, the function ``plt.show()`` is your friend.``plt.show()`` starts an event loop, looks for all currently active figure objects, and opens one or more interactive windows that display your figure or figures.So, for example, you may have a file called *myplot.py* containing the following:```python ------- file: myplot.py ------import matplotlib.pyplot as pltimport numpy as npx = np.linspace(0, 10, 100)plt.plot(x, np.sin(x))plt.plot(x, np.cos(x))plt.show()```You can then run this script from the command-line prompt, which will result in a window opening with your figure displayed:```$ python myplot.py```The ``plt.show()`` command does a lot under the hood, as it must interact with your system's interactive graphical backend.The details of this operation can vary greatly from system to system and even installation to installation, but matplotlib does its best to hide all these details from you.One thing to be aware of: the **``plt.show()`` command should be used *only once* per Python session**, and is most often seen at the very end of the script.Multiple ``show()`` commands can lead to unpredictable backend-dependent behavior, and should mostly be avoided. Plotting from an IPython shellIt can be very convenient to use Matplotlib interactively within an IPython shell (see [IPython: Beyond Normal Python](01.00-IPython-Beyond-Normal-Python.ipynb)).IPython is built to work well with Matplotlib if you specify Matplotlib mode.To enable this mode, you can use the ``%matplotlib`` magic command after starting ``ipython``:```ipythonIn [1]: %matplotlibUsing matplotlib backend: TkAggIn [2]: import matplotlib.pyplot as plt```At this point, any ``plt`` plot command will cause a figure window to open, and further commands can be run to update the plot.Some changes (such as modifying properties of lines that are already drawn) will not draw automatically: to force an update, use ``plt.draw()``.Using ``plt.show()`` in Matplotlib mode is not required. Plotting from an IPython notebookThe IPython notebook is a browser-based interactive data analysis tool that can combine narrative, code, graphics, HTML elements, and much more into a single executable document.Plotting interactively within an IPython notebook can be done with the ``%matplotlib`` command, and works in a similar way to the IPython shell.In the IPython notebook, you also have the option of embedding graphics directly in the notebook, with two possible options:- ``%matplotlib notebook`` will lead to *interactive* plots embedded within the notebook- ``%matplotlib inline`` will lead to *static* images of your plot embedded in the notebookFor this book, we will generally opt for ``%matplotlib inline``:
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
After running this command (it needs to be done only once per kernel/session), any cell within the notebook that creates a plot will embed a PNG image of the resulting graphic:
###Code
import numpy as np
x = np.linspace(0,10,100)
fig = plt.figure()
plt.plot(x, np.sin(x),'-')
plt.plot(x, np.cos(x),'--')
###Output
_____no_output_____
###Markdown
Saving Figures to FileOne nice feature of Matplotlib is the ability to save figures in a wide variety of formats.Saving a figure can be done using the ``savefig()`` command.For example, to save the previous figure as a PNG file, you can run this:
###Code
fig.savefig('fede_fig.png')
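# Optional variants (a small sketch): resolution and padding can be controlled too, e.g.
# fig.savefig('fede_fig.png', dpi=150, bbox_inches='tight')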
###Output
_____no_output_____
###Markdown
We now have a file called ``my_figure.png`` in the current working directory:
###Code
!dir
###Output
Volume in drive C has no label.
Volume Serial Number is DE77-3D4F
Directory of C:\Users\feder\Github\local\thebridge_ft_nov21_primer_repo\2-EDA\3-Matplotlib\teoria
07/12/2021 12:23 <DIR> .
07/12/2021 12:23 <DIR> ..
07/12/2021 11:03 <DIR> .ipynb_checkpoints
07/12/2021 08:58 93,040 1-Matplotlib general aula.ipynb
07/12/2021 08:58 345,664 2-Line plots aula.ipynb
07/12/2021 08:58 190,066 3-Scatter plots aula.ipynb
07/12/2021 08:59 <DIR> data
07/12/2021 08:58 18,900 desplazamiento_horizontal.png
07/12/2021 12:23 26,246 fede_fig.png
07/12/2021 08:58 178,272 Gaussianas2D.png
07/12/2021 08:58 180 myplot.py
07/12/2021 08:58 26,306 my_figure.png
07/12/2021 12:22 114,784 Notas_1-Matplotlib general aula.ipynb
07/12/2021 08:58 345,664 Notas_2-Line plots aula.ipynb
07/12/2021 08:58 190,066 Notas_3-Scatter plots aula.ipynb
07/12/2021 08:58 251,154 np.newaxis.png
07/12/2021 08:58 562,740 senos_cosenos.gif
07/12/2021 08:58 47,263 traslacion_funciones.jpg
07/12/2021 11:03 72 Untitled.ipynb
15 File(s) 2,390,417 bytes
4 Dir(s) 293,073,346,560 bytes free
###Markdown
To confirm that it contains what we think it contains, let's use the IPython ``Image`` object to display the contents of this file:
###Code
from IPython.display import Image
Image('fede_fig.png')
###Output
_____no_output_____
###Markdown
In ``savefig()``, the file format is inferred from the extension of the given filename.Depending on what backends you have installed, many different file formats are available.The list of supported file types can be found for your system by using the following method of the figure canvas object:
###Code
fig.canvas.get_supported_filetypes()
###Output
_____no_output_____
###Markdown
Note that when saving your figure, it's not necessary to use ``plt.show()`` or related commands discussed earlier. Two Interfaces for the Price of OneA potentially confusing feature of Matplotlib is its dual interfaces: a convenient MATLAB-style state-based interface, and a more powerful object-oriented interface. We'll quickly highlight the differences between the two here. MATLAB-style Interface**Matplotlib was originally written as a Python alternative for MATLAB users**, and much of its syntax reflects that fact.The MATLAB-style tools are contained in the pyplot (``plt``) interface.For example, the following code will probably look quite familiar to MATLAB users:
###Code
# Using the MATLAB style we do not have a variable with which to manipulate the axes
plt.figure()
# With subplot we create a grid in which to place the plots
plt.subplot(2,1,1)
plt.plot(x, np.sin(x))
plt.subplot(2,1,2)
plt.plot(x, np.cos(x))
###Output
_____no_output_____
###Markdown
It is important to note that this interface is *stateful*: it keeps track of the **"current" figure and axes, which are where all ``plt`` commands are applied.**You can get a reference to these using the ``plt.gcf()`` (get current figure) and ``plt.gca()`` (get current axes) routines.While this stateful interface is fast and convenient for simple plots, it is easy to run into problems.For example, once the second panel is created, how can we go back and add something to the first?This is possible within the MATLAB-style interface, but a bit clunky.Fortunately, there is a better way. Object-oriented interfaceThe object-oriented interface is available for these more complicated situations, and for when you want more control over your figure.Rather than depending on some notion of an "active" figure or axes, in the object-oriented interface the plotting functions are *methods* of explicit ``Figure`` and ``Axes`` objects.To re-create the previous plot using this style of plotting, you might do the following:
###Code
# First create a grid of plots
# ax will be an array of two Axes objects
fig, ax = plt.subplots(2)
ax[0].plot(x, np.sin(x))
ax[1].plot(x, np.cos(x))
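# The Axes handles stay usable afterwards, so we can come back to either panel later,
# for example (a small illustrative sketch):
ax[0].set_title('sine')
ax[1].set_title('cosine');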
###Output
_____no_output_____ |
06_prepare/archive/02_Prepare_Dataset_BERT_Scikit_ScriptMode.ipynb | ###Markdown
Feature Transformation with an Amazon SageMaker Processing Job and Scikit-LearnIn this notebook, we convert raw text into BERT embeddings. This will allow us to perform natural language processing tasks such as text classification.Typically a machine learning (ML) process consists of a few steps. First, gathering data with various ETL jobs, then pre-processing the data, featurizing the dataset by incorporating standard techniques or prior knowledge, and finally training an ML model using an algorithm.Often, distributed data processing frameworks such as Scikit-Learn are used to pre-process data sets in order to prepare them for training. In this notebook we'll use Amazon SageMaker Processing, and leverage the power of Scikit-Learn in a managed SageMaker environment to run our processing workload. NOTE: THIS NOTEBOOK WILL TAKE 5-10 MINUTES TO COMPLETE. PLEASE BE PATIENT.  Contents1. Setup Environment1. Setup Input Data1. Setup Output Data1. Build a Spark container for running the processing job1. Run the Processing Job using Amazon SageMaker1. Inspect the Processed Output Data Setup EnvironmentLet's start by specifying:* The S3 bucket and prefixes that you use for training and model data. Use the default bucket specified by the Amazon SageMaker session.* The IAM role ARN used to give processing and training access to the dataset.
###Code
import sagemaker
import boto3
sess = sagemaker.Session()
role = sagemaker.get_execution_role()
bucket = sess.default_bucket()
region = boto3.Session().region_name
sm = boto3.Session().client(service_name='sagemaker', region_name=region)
###Output
_____no_output_____
###Markdown
Setup Input Data
###Code
%store -r s3_public_path_tsv
try:
s3_public_path_tsv
except NameError:
print('++++++++++++++++++++++++++++++++++++++++++++++++++++++++')
print('[ERROR] Please run the notebooks in the INGEST section before you continue.')
print('++++++++++++++++++++++++++++++++++++++++++++++++++++++++')
print(s3_public_path_tsv)
%store -r s3_private_path_tsv
try:
s3_private_path_tsv
except NameError:
print('++++++++++++++++++++++++++++++++++++++++++++++++++++++++')
print('[ERROR] Please run the notebooks in the INGEST section before you continue.')
print('++++++++++++++++++++++++++++++++++++++++++++++++++++++++')
print(s3_private_path_tsv)
###Output
_____no_output_____
###Markdown
Let's Copy 1 More Large Data File to Use For Training
###Code
!aws s3 cp --recursive $s3_public_path_tsv/ $s3_private_path_tsv/ --exclude "*" --include "amazon_reviews_us_Digital_Ebook_Purchase_v1_01.tsv.gz"
raw_input_data_s3_uri = 's3://{}/amazon-reviews-pds/tsv/'.format(bucket)
print(raw_input_data_s3_uri)
!aws s3 ls $raw_input_data_s3_uri
###Output
_____no_output_____
###Markdown
Run the Processing Job using Amazon SageMaker

Next, use the Amazon SageMaker Python SDK to submit a processing job using our custom Python script.

Review the Processing Script
###Code
!pygmentize preprocess-scikit-text-to-bert.py
###Output
_____no_output_____
###Markdown
Run this script as a processing job. You need to specify one `ProcessingInput`, where `source` is the Amazon S3 location of the raw data and `destination` is the path inside the Docker container from which the script reads it (`/opt/ml/processing/input`). All local paths inside the processing container must begin with `/opt/ml/processing/`.

Also give the `run()` method a `ProcessingOutput`, where the `source` is the path the script writes output data to. For outputs, the `destination` defaults to an S3 bucket that the Amazon SageMaker Python SDK creates for you, following the format `s3://sagemaker--//output//`. You also give the `ProcessingOutput` a value for `output_name`, to make it easier to retrieve these output artifacts after the job is run.

The `arguments` parameter in the `run()` method passes command-line arguments to our `preprocess-scikit-text-to-bert.py` script.

Note that we shard the data using `ShardedByS3Key` to spread the transformations across all worker nodes in the cluster.

Track the `Experiment`

We will track every step of this experiment throughout the `prepare`, `train`, `optimize`, and `deploy` phases.

Concepts
**Experiment**: A collection of related Trials. Add Trials to an Experiment that you wish to compare together.
**Trial**: A description of a multi-step machine learning workflow. Each step in the workflow is described by a Trial Component. There is no relationship between Trial Components such as ordering.
**Trial Component**: A description of a single step in a machine learning workflow, for example data cleaning, feature extraction, model training, or model evaluation.
**Tracker**: A logger of information about a single Trial Component.

Create the `Experiment`
###Code
import time
from smexperiments.experiment import Experiment
timestamp = int(time.time())
experiment = Experiment.create(
experiment_name='Amazon-Customer-Reviews-BERT-Experiment-{}'.format(timestamp),
description='Amazon Customer Reviews BERT Experiment',
sagemaker_boto_client=sm)
experiment_name = experiment.experiment_name
print('Experiment name: {}'.format(experiment_name))
###Output
_____no_output_____
###Markdown
Create the `Trial`
###Code
import time
from smexperiments.trial import Trial
timestamp = int(time.time())
trial = Trial.create(trial_name='trial-{}'.format(timestamp),
experiment_name=experiment_name,
sagemaker_boto_client=sm)
trial_name = trial.trial_name
print('Trial name: {}'.format(trial_name))
###Output
_____no_output_____
###Markdown
Create the `Experiment Config`
###Code
experiment_config = {
'ExperimentName': experiment_name,
'TrialName': trial.trial_name,
'TrialComponentDisplayName': 'prepare'
}
print(experiment_name)
%store experiment_name
print(trial_name)
%store trial_name
###Output
_____no_output_____
###Markdown
Set the Processing Job Hyper-Parameters
###Code
processing_instance_type='ml.c5.2xlarge'
processing_instance_count=1
train_split_percentage=0.90
validation_split_percentage=0.05
test_split_percentage=0.05
balance_dataset=True
max_seq_length=64
###Output
_____no_output_____
###Markdown
Choosing a `max_seq_length` for BERT

Since a smaller `max_seq_length` leads to faster training and lower resource utilization, we want to find the smallest review length that captures `70%` of our reviews. Remember our distribution of review lengths from a previous section?

```
mean       67.930174
std       130.954079
min         1.000000
10%         4.000000
20%        14.000000
30%        21.000000
40%        25.000000
50%        31.000000
60%        42.000000
70%        59.000000
80%        87.000000
90%       149.000000
100%     5347.000000
max      5347.000000
```

Review length `59` represents the `70th` percentile for this dataset. However, it's best to stick with powers-of-2 when using BERT, so let's choose `64`, the smallest power-of-2 greater than `59`. Reviews with length > `64` will be truncated to `64`.
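As a quick arithmetic sanity check, the power-of-2 cutoff can be computed directly; this is just a sketch, with the 70th-percentile value of `59` taken from the earlier analysis rather than recomputed here.
###Code
import math

# 70th percentile of review word counts (value taken from the earlier EDA)
percentile_70 = 59

# smallest power of two that is >= the percentile
print(2 ** math.ceil(math.log2(percentile_70)))  # -> 64
###Output
_____no_output_____
###Markdown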
###Code
from sagemaker.sklearn.processing import SKLearnProcessor
processor = SKLearnProcessor(framework_version='0.23-1',
role=role,
instance_type=processing_instance_type,
instance_count=processing_instance_count,
max_runtime_in_seconds=7200)
from sagemaker.processing import ProcessingInput, ProcessingOutput
processor.run(code='preprocess-scikit-text-to-bert.py',
inputs=[
ProcessingInput(source=raw_input_data_s3_uri,
destination='/opt/ml/processing/input/data/',
s3_data_distribution_type='ShardedByS3Key')
],
outputs=[
ProcessingOutput(s3_upload_mode='EndOfJob',
output_name='bert-train',
source='/opt/ml/processing/output/bert/train'),
ProcessingOutput(s3_upload_mode='EndOfJob',
output_name='bert-validation',
source='/opt/ml/processing/output/bert/validation'),
ProcessingOutput(s3_upload_mode='EndOfJob',
output_name='bert-test',
source='/opt/ml/processing/output/bert/test'),
],
arguments=['--train-split-percentage', str(train_split_percentage),
'--validation-split-percentage', str(validation_split_percentage),
'--test-split-percentage', str(test_split_percentage),
'--max-seq-length', str(max_seq_length),
'--balance-dataset', str(balance_dataset)
],
experiment_config=experiment_config,
logs=True,
wait=False)
scikit_processing_job_name = processor.jobs[-1].describe()['ProcessingJobName']
print(scikit_processing_job_name)
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/processing-jobs/{}">Processing Job</a></b>'.format(region, scikit_processing_job_name)))
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="blank" href="https://console.aws.amazon.com/cloudwatch/home?region={}#logStream:group=/aws/sagemaker/ProcessingJobs;prefix={};streamFilter=typeLogStreamPrefix">CloudWatch Logs</a> After About 5 Minutes</b>'.format(region, scikit_processing_job_name)))
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="blank" href="https://s3.console.aws.amazon.com/s3/buckets/{}/{}/?region={}&tab=overview">S3 Output Data</a> After The Processing Job Has Completed</b>'.format(bucket, scikit_processing_job_name, region)))
###Output
_____no_output_____
###Markdown
Monitor the Processing Job
###Code
running_processor = sagemaker.processing.ProcessingJob.from_processing_name(processing_job_name=scikit_processing_job_name,
sagemaker_session=sess)
processing_job_description = running_processor.describe()
print(processing_job_description)
processing_job_name = processing_job_description['ProcessingJobName']
print(processing_job_name)
running_processor.wait(logs=False)
###Output
_____no_output_____
###Markdown
_Please Wait Until the ^^ Processing Job ^^ Completes Above._

Inspect the Processed Output Data

Take a look at a few rows of the transformed dataset to make sure the processing was successful.
###Code
processing_job_description = running_processor.describe()
output_config = processing_job_description['ProcessingOutputConfig']
for output in output_config['Outputs']:
if output['OutputName'] == 'bert-train':
processed_train_data_s3_uri = output['S3Output']['S3Uri']
if output['OutputName'] == 'bert-validation':
processed_validation_data_s3_uri = output['S3Output']['S3Uri']
if output['OutputName'] == 'bert-test':
processed_test_data_s3_uri = output['S3Output']['S3Uri']
print(processed_train_data_s3_uri)
print(processed_validation_data_s3_uri)
print(processed_test_data_s3_uri)
!aws s3 ls $processed_train_data_s3_uri/
!aws s3 ls $processed_validation_data_s3_uri/
!aws s3 ls $processed_test_data_s3_uri/
###Output
_____no_output_____
###Markdown
Pass Variables to the Next Notebook(s)
###Code
%store processing_job_name
%store processed_train_data_s3_uri
%store processed_validation_data_s3_uri
%store processed_test_data_s3_uri
%store max_seq_length
%store experiment_name
%store trial_name
%store
###Output
_____no_output_____
###Markdown
Show the Experiment Tracking Lineage
###Code
from sagemaker.analytics import ExperimentAnalytics
import pandas as pd
pd.set_option("max_colwidth", 500)
#pd.set_option("max_rows", 100)
experiment_analytics = ExperimentAnalytics(
sagemaker_session=sess,
experiment_name=experiment_name,
sort_by="CreationTime",
sort_order="Descending"
)
experiment_analytics_df = experiment_analytics.dataframe()
experiment_analytics_df
trial_component_name=experiment_analytics_df.TrialComponentName[0]
print(trial_component_name)
trial_component_description=sm.describe_trial_component(TrialComponentName=trial_component_name)
trial_component_description
from sagemaker.lineage.visualizer import LineageTableVisualizer
lineage_table_viz = LineageTableVisualizer(sess)
lineage_table_viz_df = lineage_table_viz.show(processing_job_name=processing_job_name)
lineage_table_viz_df
# from sagemaker.lineage.visualizer import LineageTableVisualizer
# lineage_table_viz = LineageTableVisualizer(sess)
# lineage_table_viz_df = lineage_table_viz.show(trial_component_name=trial_component_name)
# lineage_table_viz_df
from sagemaker.analytics import ArtifactAnalytics
artifact_analytics = ArtifactAnalytics()
artifacts_df = artifact_analytics.dataframe()
artifacts_df
# from sagemaker.lineage.association import Association
# from sagemaker.lineage.artifact import Artifact
# incoming_associations = Association.list(destination_arn='arn:aws:sagemaker:us-east-1:835319576252:artifact/ebfc31f9f6007feb1a70daaf24339c08',
# sort_by='CreationTime',
# sort_order='Descending')
# for association in enumerate(incoming_associations):
# print(association)
# from sagemaker.lineage.association import Association
# from sagemaker.lineage.artifact import Artifact
# incoming_associations = Association.list(source_arn='arn:aws:sagemaker:us-east-1:835319576252:artifact/ebfc31f9f6007feb1a70daaf24339c08',
# sort_by='CreationTime',
# sort_order='Descending')
# for association in enumerate(incoming_associations):
# print(association)
# # list all the contexts
# from sagemaker.lineage.context import Context
# contexts = Context.list(sort_by='CreationTime',
# sort_order='Descending')
# for ctx in contexts:
# print(ctx.source.source_uri)
# from sagemaker.lineage.action import Action
# actions = Action.list(sort_by='CreationTime', sort_order='Descending')
# for action in actions:
# print(action)
%%javascript
Jupyter.notebook.save_checkpoint();
Jupyter.notebook.session.delete();
###Output
_____no_output_____ |
OneHotEncodingMultipleColumns.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
data_dict = {
'Color': ['Red', 'Blue', 'Green', 'Green', 'Blue'],
'Brand': ['Mercedes', 'Mercedes', 'BMW', 'BMW', 'BMW'],
}
df = pd.DataFrame(
data=data_dict,
index=list('ABCDE')
)
df
from sklearn.preprocessing import OneHotEncoder
one_hot_encoder = OneHotEncoder(sparse=False)
one_hot_array = one_hot_encoder.fit_transform(df)
all_categories_lst = []
for categories in one_hot_encoder.categories_:
all_categories_lst.extend(categories)
one_hot_df = pd.DataFrame(data=one_hot_array,
index=df.index,
columns=all_categories_lst)
one_hot_df
pd.concat([df, one_hot_df], axis='columns')
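# Note: newer scikit-learn releases (assumed >= 1.0) can also produce these column
# names directly via one_hot_encoder.get_feature_names_out(), avoiding the manual loop above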
###Output
_____no_output_____ |
Data Visualization/1. Univariate/Histogram_Practice.ipynb | ###Markdown
We'll continue working with the Pokémon dataset in this workspace.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

pokemon = pd.read_csv('./data/pokemon.csv')
pokemon.head()
###Output
_____no_output_____
###Markdown
**Task**: Pokémon have a number of different statistics that describe their combat capabilities. Here, create a _histogram_ that depicts the distribution of 'special-defense' values taken. **Hint**: Try playing around with different bin width sizes to see what best depicts the data.
###Code
# YOUR CODE HERE
bins = np.arange(0, pokemon['special-defense'].max()+5, 5)
plt.hist(data=pokemon, x='special-defense', bins=bins);
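# (Optional sketch) a coarser bin width for comparison -- uncomment to try a width of 10
# bins_wide = np.arange(0, pokemon['special-defense'].max()+10, 10)
# plt.hist(data=pokemon, x='special-defense', bins=bins_wide);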
# run this cell to check your work against ours
histogram_solution_1()
###Output
I've used matplotlib's hist function to plot the data. I have also used numpy's arange function to set the bin edges. A bin size of 5 hits the main cut points, revealing a smooth, but skewed curves. Are there similar characteristics among Pokemon with the highest special defenses?
|
tensorflow_v1.ipynb | ###Markdown
###Code
import tensorflow as tf
tf.__version__
!pip install tensorflow==1.13.1
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow.contrib import rnn
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
!wget -O comp551w18-modified-mnist.zip https://www.dropbox.com/sh/dhu4cu8l8e32cvl/AACSbGE9X6P7an61STwwr8R0a?dl=0
# Run two times
!unzip comp551w18-modified-mnist.zip
###Output
y
Archive: comp551w18-modified-mnist.zip
inflating: test_x.csv
inflating: train_y.csv
inflating: train_x.csv
###Markdown
1. Data preprocessing
###Code
# Load data
x = np.loadtxt("train_x.csv", delimiter=",")# load from text
y = np.loadtxt("train_y.csv", delimiter=",")
x_train = x.reshape(-1, 64, 64) # reshape
y_train = y.reshape(-1, 1)
# Scale and reshape x
x_train = x_train/255
x_train = x_train.reshape(len(x_train),64*64)
# Transform y
onehot = OneHotEncoder(sparse = False)
y_train = onehot.fit_transform(y_train)
# Split dataset into training and testing subsets
X_train, X_test, Y_train, Y_test = train_test_split(x_train,y_train,test_size=0.2, random_state = 17)
###Output
_____no_output_____
###Markdown
2. Logistic Regression
###Code
# Parameters
learning_rate = 0.01
training_epochs = 50
batch_size = 256
display_step = 50
# TF graph input
x = tf.placeholder(tf.float32, [None, 4096]) # mnist data image of shape 64*64=4096
y = tf.placeholder(tf.float32, [None, 10]) # 0-9 digits recognition => 10 classes
# Set model weights
W = tf.Variable(tf.zeros([4096, 10]))
b = tf.Variable(tf.zeros([10]))
# Construct model
pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y*tf.log(pred), reduction_indices=1))
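# Note: y*tf.log(pred) can yield NaNs if any softmax output underflows to 0; a more
# numerically stable option is tf.nn.softmax_cross_entropy_with_logits on the raw
# logits, as done in the CNN and RNN sections below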
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
# Run the initializer
sess.run(init)
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(len(X_train)/batch_size)
n = 0
# Loop over all batches
for i in range(total_batch):
if n+batch_size >= X_train.shape[0]:
n = 0
batch_xs = X_train[n:n+batch_size]
batch_ys = Y_train[n:n+batch_size]
n += batch_size
# Run optimization op (backprop) and cost op (to get loss value)
_, c = sess.run([optimizer, cost], feed_dict={x: batch_xs,
y: batch_ys})
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if (epoch+1) % display_step == 0:
print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost))
print("Optimization Finished!")
# Test model
correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("Accuracy:", accuracy.eval({x: X_test, y: Y_test}))
###Output
Epoch: 0050 cost= 2.191021394
Optimization Finished!
Accuracy: 0.1275
###Markdown
3. CNN
###Code
# Training Parameters
learning_rate = 0.01
num_steps = 100
batch_size = 256
display_step = 50
# Network Parameters
num_input = 4096
num_classes = 10
dropout = 0.7 # Dropout, probability to keep units
# tf Graph input
X = tf.placeholder(tf.float32, [None, num_input])
Y = tf.placeholder(tf.float32, [None, num_classes])
keep_prob = tf.placeholder(tf.float32) # dropout (keep probability)
# Create some wrappers for simplicity
def conv2d(x, W, b, strides=1):
# Conv2D wrapper, with bias and relu activation
x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
x = tf.nn.bias_add(x, b)
return tf.nn.relu(x)
def maxpool2d(x, k=2):
# MaxPool2D wrapper
return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
padding='SAME')
# Create model
def conv_net(x, weights, biases, dropout):
# MNIST data input is a 1-D vector of 4096 features (64*64 pixels)
# Reshape to match picture format [Height x Width x Channel]
# Tensor input become 4-D: [Batch Size, Height, Width, Channel]
x = tf.reshape(x, shape=[-1, 64, 64, 1])
# Convolution Layer
conv1 = conv2d(x, weights['wc1'], biases['bc1'])
# Max Pooling (down-sampling)
conv1 = maxpool2d(conv1, k=2)
# Convolution Layer
conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
# Max Pooling (down-sampling)
conv2 = maxpool2d(conv2, k=2)
# Convolution Layer
conv3 = conv2d(conv2, weights['wc3'], biases['bc3'])
# Max Pooling (down-sampling)
conv3 = maxpool2d(conv3, k=2)
# Fully connected layer
# Reshape conv3 output to fit fully connected layer input
fc1 = tf.reshape(conv3, [-1, weights['wd1'].get_shape().as_list()[0]])
fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
fc1 = tf.nn.relu(fc1)
# Apply Dropout
fc1 = tf.nn.dropout(fc1, dropout)
# Output, class prediction
out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
return out
# Store layers weight & bias
weights = {
# 5x5 conv, 1 input, 32 outputs
'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
# 3x3 conv, 32 inputs, 64 outputs
'wc2': tf.Variable(tf.random_normal([3, 3, 32, 64])),
# 3x3 conv, 64 inputs, 128 outputs
'wc3': tf.Variable(tf.random_normal([3, 3, 64, 128])),
# fully connected, 8*8*128 inputs, 1024 outputs
'wd1': tf.Variable(tf.random_normal([8*8*128, 1024])),
# 1024 inputs, 10 outputs (class prediction)
'out': tf.Variable(tf.random_normal([1024, num_classes]))
}
biases = {
'bc1': tf.Variable(tf.random_normal([32])),
'bc2': tf.Variable(tf.random_normal([64])),
'bc3': tf.Variable(tf.random_normal([128])),
'bd1': tf.Variable(tf.random_normal([1024])),
'out': tf.Variable(tf.random_normal([num_classes]))
}
# Construct model
logits = conv_net(X, weights, biases, keep_prob)
prediction = tf.nn.softmax(logits)
# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Evaluate model
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
# Run the initializer
sess.run(init)
n = 0
for step in range(1, num_steps+1):
if n+batch_size >= X_train.shape[0]:
n = 0
batch_x = X_train[n:n+batch_size]
batch_y = Y_train[n:n+batch_size]
n = n+batch_size
# Run optimization op (backprop)
sess.run(train_op, feed_dict={X: batch_x, Y: batch_y, keep_prob: dropout})
if step % display_step == 0 or step == 1:
# Calculate batch loss and accuracy
loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
Y: batch_y,
keep_prob: 1.0})
print("Step " + str(step) + ", Minibatch Loss= " + \
"{:.4f}".format(loss) + ", Training Accuracy= " + \
"{:.3f}".format(acc))
print("Optimization Finished!")
# Calculate accuracy for 1000 test data points
print("Testing Accuracy:", \
sess.run(accuracy, feed_dict={X: X_test[:1000],
Y: Y_test[:1000],
keep_prob: 1.0}))
###Output
Step 1, Minibatch Loss= 1275381.0000, Training Accuracy= 0.121
Step 50, Minibatch Loss= 10540.1777, Training Accuracy= 0.109
Step 100, Minibatch Loss= 12.7216, Training Accuracy= 0.129
Optimization Finished!
Testing Accuracy: 0.118
###Markdown
4. RNN
###Code
tf.reset_default_graph()
# Training Parameters
learning_rate = 0.01
training_steps = 100
batch_size = 256
display_step = 50
# Network Parameters
num_input = 64 # data input: 64x64
timesteps = 64 # timesteps
num_hidden = 128 # hidden layer num of features
num_classes = 10 # MNIST total classes (0-9 digits)
# tf Graph input
X = tf.placeholder("float", [None, timesteps, num_input])
Y = tf.placeholder("float", [None, num_classes])
# Define weights
weights = {
'out': tf.Variable(tf.random_normal([num_hidden, num_classes]))
}
biases = {
'out': tf.Variable(tf.random_normal([num_classes]))
}
def RNN(x, weights, biases):
# Prepare data shape to match `rnn` function requirements
# Current data input shape: (batch_size, timesteps, n_input)
# Required shape: 'timesteps' tensors list of shape (batch_size, n_input)
# Unstack to get a list of 'timesteps' tensors of shape (batch_size, n_input)
x = tf.unstack(x, timesteps, 1)
# Define a lstm cell with tensorflow
lstm_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)
# Get lstm cell output
outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
# Linear activation, using rnn inner loop last output
return tf.matmul(outputs[-1], weights['out']) + biases['out']
logits = RNN(X, weights, biases)
prediction = tf.nn.softmax(logits)
# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Evaluate model (with test logits, for dropout to be disabled)
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
# Run the initializer
sess.run(init)
n = 0
for step in range(1, training_steps+1):
if n+batch_size >= X_train.shape[0]:
n= 0
batch_x = X_train[n:n+batch_size]
batch_y = Y_train[n:n+batch_size]
n = n+batch_size
# Reshape data to get 64 seq of 64 elements
batch_x = batch_x.reshape((batch_size, timesteps, num_input))
# Run optimization op (backprop)
sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
if step % display_step == 0 or step == 1:
# Calculate batch loss and accuracy
loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
Y: batch_y})
print("Step " + str(step) + ", Minibatch Loss= " + \
"{:.4f}".format(loss) + ", Training Accuracy= " + \
"{:.3f}".format(acc))
print("Optimization Finished!")
# Calculate accuracy for 1000 test data points
test_data = X_test[:1000].reshape((-1, timesteps, num_input))
test_label = Y_test[:1000]
print("Testing Accuracy:", \
sess.run(accuracy, feed_dict={X: test_data, Y: test_label}))
###Output
Step 1, Minibatch Loss= 11.4233, Training Accuracy= 0.133
Step 50, Minibatch Loss= 2.3188, Training Accuracy= 0.117
Step 100, Minibatch Loss= 2.3256, Training Accuracy= 0.113
Optimization Finished!
Testing Accuracy: 0.117
###Markdown
5. Gated CNN
###Code
# Training Parameters
learning_rate = 0.01
num_steps = 100
batch_size = 256
display_step = 50
# Network Parameters
num_input = 4096
num_classes = 10 #total classes (0-9 digits)
dropout = 0.7 # Dropout, probability to keep units
# tf Graph input
X = tf.placeholder(tf.float32, [None, num_input])
Y = tf.placeholder(tf.float32, [None, num_classes])
keep_prob = tf.placeholder(tf.float32) # dropout (keep probability)
def conv2d(x, W, b, V, c, num, strides=1):
# Conv2D wrapper, with bias and gates
A = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
B = tf.nn.conv2d(x, V, strides=[1, strides, strides, 1], padding='SAME')
A = tf.nn.bias_add(A, b)
B = tf.nn.bias_add(B, c)
return tf.multiply(A, tf.nn.sigmoid(B))
def maxpool2d(x, k=2):
# MaxPool2D wrapper
return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
padding='SAME')
# Create model
def conv_net(x, weights, biases, dropout):
x = tf.reshape(x, shape=[-1, 64, 64, 1])
# Convolution Layer
conv1 = conv2d(x, weights['wc1'], biases['bc1'], weights['wv1'], biases['bv1'], weights['c1'])
# Max Pooling (down-sampling)
conv1 = maxpool2d(conv1, k=2)
# Convolution Layer
conv2 = conv2d(conv1, weights['wc2'], biases['bc2'], weights['wv2'], biases['bv2'], weights['c2'])
# Max Pooling (down-sampling)
conv2 = maxpool2d(conv2, k=2)
# Convolution Layer
conv3 = conv2d(conv2, weights['wc3'], biases['bc3'], weights['wv3'], biases['bv3'], weights['c3'])
    # No pooling is applied after the third conv layer, so conv3 stays 16x16 and
    # matches the 16*16*128 input size of the fully connected weights 'wd1'
    # Fully connected layer
    # Reshape conv3 output to fit fully connected layer input
fc1 = tf.reshape(conv3, [-1, weights['wd1'].get_shape().as_list()[0]])
fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
fc1 = tf.nn.relu(fc1)
# Apply Dropout
fc1 = tf.nn.dropout(fc1, dropout)
# Output, class prediction
out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
return out
# Store layers weight & bias
weights = {
'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
'wv1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
'c1': 5,
'wc2': tf.Variable(tf.random_normal([3, 3, 32, 64])),
'wv2': tf.Variable(tf.random_normal([3, 3, 32, 64])),
'c2': 3,
'wc3': tf.Variable(tf.random_normal([3, 3, 64, 128])),
'wv3': tf.Variable(tf.random_normal([3, 3, 64, 128])),
'c3': 3,
'wd1': tf.Variable(tf.random_normal([16*16*128, 1024])),
# 1024 inputs, 10 outputs (class prediction)
'out': tf.Variable(tf.random_normal([1024, num_classes]))
}
biases = {
'bc1': tf.Variable(tf.random_normal([32])),
'bv1': tf.Variable(tf.random_normal([32])),
'bc2': tf.Variable(tf.random_normal([64])),
'bv2': tf.Variable(tf.random_normal([64])),
'bc3': tf.Variable(tf.random_normal([128])),
'bv3': tf.Variable(tf.random_normal([128])),
'bd1': tf.Variable(tf.random_normal([1024])),
'out': tf.Variable(tf.random_normal([num_classes]))
}
# Construct model
logits = conv_net(X, weights, biases, keep_prob)
prediction = tf.nn.softmax(logits)
# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Evaluate model
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
# Run the initializer
sess.run(init)
n=0
for step in range(1, num_steps+1):
if n+batch_size >= X_train.shape[0]:
n=0
batch_x = X_train[n:n+batch_size]
batch_y = Y_train[n:n+batch_size]
n = n+batch_size
# Run optimization op (backprop)
sess.run(train_op, feed_dict={X: batch_x, Y: batch_y, keep_prob: dropout})
if step % display_step == 0 or step == 1:
# Calculate batch loss and accuracy
loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
Y: batch_y,
keep_prob: 1.0})
print("Step " + str(step) + ", Minibatch Loss= " + \
"{:.4f}".format(loss) + ", Training Accuracy= " + \
"{:.3f}".format(acc))
print("Optimization Finished!")
# Calculate accuracy for 1000 test data points
print("Testing Accuracy:", \
sess.run(accuracy, feed_dict={X: X_test[:1000],
Y: Y_test[:1000],
keep_prob: 1.0}))
###Output
_____no_output_____ |
examples/getting_started/4_Superdense_coding/4_Superdense_coding.ipynb | ###Markdown
Superdense Coding

In this tutorial, we construct an implementation of the superdense coding protocol via Amazon Braket's SDK. Superdense coding is a method of transmitting two classical bits by sending only one qubit. Starting with a pair of entangled qubits, the sender (aka Alice) applies a certain quantum gate to their qubit and sends the result to the receiver (aka Bob), who is then able to decode the full two-bit message.

If Alice wants to send a two-bit message to Bob using only classical channels, she would need to send two classical bits. However, with the help of quantum entanglement, Alice can do this by sending just one qubit. By ensuring that Alice and Bob initially share an entangled state of two qubits, they can devise a strategy such that Alice can transmit her two-bit message by sending her single qubit.

To implement superdense coding, Alice and Bob need to share or otherwise prepare a maximally entangled pair of qubits (i.e., a Bell pair). Alice then selects one of the four possible messages to send with two classical bits: 00, 01, 10, or 11. Depending on which two-bit string she wants to send, Alice applies a corresponding quantum gate to encode her desired message. Finally, Alice sends her own qubit to Bob, which Bob then uses to decode the message by undoing the initial entangling operation.

Note that superdense coding is closely related to quantum teleportation. In teleportation, one uses an entangled pair (an e-bit) and two uses of a classical channel to simulate a single use of a quantum channel. In superdense coding, one uses an e-bit and a single use of a quantum channel to simulate two uses of a classical channel.

Detailed Steps
1. Alice and Bob initially share a Bell pair. This can be prepared by starting with two qubits in the |0⟩ state, then applying the Hadamard gate (𝐻) to the first qubit to create an equal superposition, and finally applying a CNOT gate (𝐶𝑋) between the two qubits to produce a Bell pair. Alice holds one of these two qubits, while Bob holds the other.
2. Alice selects one of the four possible messages to send Bob. Each message corresponds to a unique set of quantum gate(s) to apply to her own qubit, illustrated in the table below. For example, if Alice wants to send the message "01", she would apply the Pauli X gate.
3. Alice sends her qubit to Bob through the quantum channel.
4. Bob decodes Alice's two-bit message by first applying a CNOT gate using Alice's qubit as the control and his own qubit as the target, and then a Hadamard gate on Alice's qubit to restore the classical message.

| Message | Alice's encoding | State Bob receives (non-normalized) | After 𝐶𝑋 gate (non-normalized) | After 𝐻 gate |
| :---: | :---: | :---: | :---: | :---: |
| 00 | 𝐼 | \|00⟩ + \|11⟩ | \|00⟩ + \|10⟩ | \|00⟩ |
| 01 | 𝑋 | \|10⟩ + \|01⟩ | \|11⟩ + \|01⟩ | \|01⟩ |
| 10 | 𝑍 | \|00⟩ - \|11⟩ | \|00⟩ - \|10⟩ | \|10⟩ |
| 11 | 𝑍𝑋 | \|01⟩ - \|10⟩ | \|01⟩ - \|11⟩ | \|11⟩ |

Circuit Diagram

Circuit used to send the message "00". To send other messages, swap out the identity (𝐼) gate.

Code
###Code
# Print version of SDK
!pip show amazon-braket-sdk | grep Version
# Import Braket libraries
from braket.circuits import Circuit, Gate, Moments
from braket.circuits.instruction import Instruction
from braket.aws import AwsDevice
import matplotlib.pyplot as plt
import time
###Output
Version: 1.0.0.post1
###Markdown
Typically, we recommend running circuits with fewer than 25 qubits on the local simulator to avoid latency bottlenecks. The on-demand, high-performance simulator SV1 is better suited for larger circuits up to 34 qubits. Nevertheless, for demonstration purposes, we are going to continue this example with SV1 but it is easy to switch over to the local simulator by replacing the last line in the cell below with ```device = LocalSimulator()``` and importing the ```LocalSimulator```.
###Code
# Select device arn for the on-demand simulator
device = AwsDevice("arn:aws:braket:::device/quantum-simulator/amazon/sv1")
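# Local-simulator alternative mentioned above (left commented out):
# from braket.devices import LocalSimulator
# device = LocalSimulator()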
# Function to run quantum task, check the status thereof and collect results
def get_result(device, circ):
# get number of qubits
num_qubits = circ.qubit_count
# specify desired results_types
circ.probability()
# submit task: define task (asynchronous)
    task = device.run(circ, shots=1000)
# Get ID of submitted task
task_id = task.id
# print('Task ID :', task_id)
# Wait for job to complete
status_list = []
status = task.state()
status_list += [status]
print('Status:', status)
# Only notify the user when there's a status change
while status != 'COMPLETED':
status = task.state()
if status != status_list[-1]:
print('Status:', status)
status_list += [status]
# get result
result = task.result()
# get metadata
metadata = result.task_metadata
# get output probabilities
probs_values = result.values[0]
# get measurement results
measurement_counts = result.measurement_counts
# print measurement results
print('measurement_counts:', measurement_counts)
# bitstrings
format_bitstring = '{0:0' + str(num_qubits) + 'b}'
bitstring_keys = [format_bitstring.format(ii) for ii in range(2**num_qubits)]
# plot probabalities
plt.bar(bitstring_keys, probs_values)
plt.xlabel('bitstrings')
plt.ylabel('probability')
plt.xticks(rotation=90)
plt.show()
return measurement_counts
###Output
_____no_output_____
###Markdown
Alice and Bob initially share a Bell pair. Let's create this now:
###Code
circ = Circuit()
circ.h([0])
circ.cnot(0,1)
###Output
_____no_output_____
###Markdown
Define Alice's encoding scheme according to the table above. Alice selects one of these messages to send.
###Code
# Four possible messages and their corresponding gates
message = {"00": Circuit().i(0),
"01": Circuit().x(0),
"10": Circuit().z(0),
"11": Circuit().x(0).z(0)
}
# Select message to send. Let's start with '01' for now
m = "01"
###Output
_____no_output_____
###Markdown
Alice encodes her message by applying the gates defined above
###Code
# Encode the message
circ.add_circuit(message[m])
###Output
_____no_output_____
###Markdown
Alice then sends her qubit to Bob so that Bob has both qubits in his lab. Bob decodes Alice's message by disentangling the two qubits:
###Code
circ.cnot(0,1)
circ.h([0])
###Output
_____no_output_____
###Markdown
The full circuit now looks like
###Code
print(circ)
###Output
T : |0|1|2|3|4|
q0 : -H-C-X-C-H-
| |
q1 : ---X---X---
T : |0|1|2|3|4|
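###Markdown
As an aside, the algebra in the table can be sanity-checked with plain NumPy, independently of Braket. The sketch below walks the chosen message "01" through encode and decode; the matrices are the usual Pauli-X, Hadamard, and CNOT gates.
###Code
import numpy as np

# single-qubit gates and a CNOT with qubit 0 as control, qubit 1 as target
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Bell pair from |00>; Alice encodes "01" by applying X to her qubit (qubit 0)
bell = CNOT @ np.kron(H, I2) @ np.array([1.0, 0.0, 0.0, 0.0])
encoded = np.kron(X, I2) @ bell

# Bob decodes with CNOT followed by H on qubit 0
decoded = np.kron(H, I2) @ (CNOT @ encoded)
print(np.round(decoded, 6))  # all amplitude on |01>: [0. 1. 0. 0.]
###Output
_____no_output_____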
###Markdown
By measuring the two qubits in the computational basis, Bob can read off Alice's two qubit message
###Code
counts = get_result(device, circ)
print(counts)
###Output
Status: CREATED
Status: QUEUED
Status: RUNNING
Status: COMPLETED
measurement_counts: Counter({'01': 1000})
###Markdown
We can check that this scheme works for the other possible messages too:
###Code
for m in message:
# Reproduce the full circuit above by concatenating all of the gates:
newcirc = Circuit().h([0]).cnot(0,1).add_circuit(message[m]).cnot(0,1).h([0])
# Run the circuit:
counts = get_result(device, newcirc)
print("Message: " + m + ". Results:")
print(counts)
###Output
Status: CREATED
Status: QUEUED
Status: RUNNING
Status: COMPLETED
measurement_counts: Counter({'00': 1000})
|
california_housing/.ipynb_checkpoints/playground-checkpoint.ipynb | ###Markdown
Playground Notebook

A notebook for running experiments on data
###Code
#Import package
import pandas as pd
housing = pd.read_csv('housing.csv')
###Output
_____no_output_____
###Markdown
Plotting the data to check the distribution
###Code
%matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50, figsize=(20,15))
plt.savefig('histograms.png')
plt.show()
###Output
_____no_output_____
###Markdown
Goal

The goal is to transform the data using a standard scaler and a log transform, then plot the data again to see if that brings the distributions closer to normal.
###Code
housing_num = housing.drop("ocean_proximity",axis=1)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(housing_num)
X = scaler.transform(housing_num)
housing_transformed = pd.DataFrame(X, columns=housing_num.columns)
housing_transformed.head(10)
housing_transformed.hist(bins=50, figsize=(20,15))
plt.show()
import numpy as np
X = np.log(housing_num)
housing_transformed = pd.DataFrame(X, columns=housing_num.columns)
housing_transformed.head(10)
housing_transformed.hist(bins=50, figsize=(20,15))
plt.show()
import seaborn as sns
sns.histplot(housing_num,x="median_income",y="median_house_value")
###Output
_____no_output_____
###Markdown
Plot box and whisker plot
###Code
housing.boxplot(figsize=(20,15))
###Output
_____no_output_____
###Markdown
Outlier detection

Finding which values are outliers.

Methodology: assuming a Gaussian distribution, flag values that are more than 3 standard deviations from the mean.
###Code
#Getting only the numerical attributes and dropping latitude and logitude
housing_num = housing.drop(["ocean_proximity","latitude","longitude"],axis=1)
#Finding the z-scores
housing_num_z_scores = np.abs((housing_num - housing_num.mean())/housing_num.std(ddof=0))
housing_num_z_scores
#Removing all the rows where the z score is greater than 3 in at least one column
# i.e. only keep the rows where z-score is less than 3 in all columns
housing_filtered = housing[(housing_num_z_scores < 3).all(axis=1)]
#reset index
housing_filtered.reset_index(inplace=True)
housing_filtered.drop('index',axis=1, inplace=True)
housing_filtered.hist(bins=50, figsize=(20,15))
###Output
C:\Users\Hassan\.conda\envs\ml-env\lib\site-packages\pandas\core\frame.py:4913: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
errors=errors,
###Markdown
Creating test and training sets and training a model
###Code
#Converting the income to a categorical attribute with 5 categories
housing_filtered["income_cat"] = pd.cut(housing_filtered["median_income"],
bins = [0., 1.5, 3.0, 4.5, 6., np.inf],
labels =[1, 2, 3, 4, 5])
housing_filtered["income_cat"].hist()
#Now we can do stratified sampling based on the income category
#Using scikit-learn's StratifiedShuffleSplit class
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(housing_filtered, housing_filtered["income_cat"]):
strat_train_set = housing_filtered.loc[train_index]
strat_test_set = housing_filtered.loc[test_index]
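# (Sketch) With the stratified split in hand, the "training a model" part of this section
# could start with a simple baseline, e.g. a LinearRegression on the numeric columns.
# Left commented out since it was not run in the original notebook:
# from sklearn.linear_model import LinearRegression
# features = strat_train_set.drop(["median_house_value", "ocean_proximity", "income_cat"], axis=1).fillna(0)
# lin_reg = LinearRegression().fit(features, strat_train_set["median_house_value"])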
#Looking at the correlations
housing = strat_train_set.copy()
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
###Output
_____no_output_____ |
cdl/onlinecdl_clr_cupy.ipynb | ###Markdown
Online Convolutional Dictionary Learning (CuPy Version)
=======================================================

This example demonstrates the use of [onlinecdl.OnlineConvBPDNDictLearn](http://sporco.rtfd.org/en/latest/modules/sporco.dictlrn.onlinecdl.html#sporco.dictlrn.onlinecdl.OnlineConvBPDNDictLearn) for learning a convolutional dictionary from a set of training images. The dictionary is learned using the online dictionary learning algorithm proposed in [[33]](http://sporco.rtfd.org/en/latest/zreferences.html#id33). This variant of the example uses the GPU accelerated version of [onlinecdl](http://sporco.rtfd.org/en/latest/modules/sporco.dictlrn.onlinecdl.html#module-sporco.dictlrn.onlinecdl) within the [sporco.cupy](http://sporco.rtfd.org/en/latest/modules/sporco.cupy.html#module-sporco.cupy) subpackage.
###Code
from __future__ import print_function
from builtins import input
import numpy as np
from sporco import util
from sporco import signal
from sporco import plot
plot.config_notebook_plotting()
from sporco.cupy import (cupy_enabled, np2cp, cp2np, select_device_by_load,
gpu_info)
from sporco.cupy.dictlrn import onlinecdl
###Output
_____no_output_____
###Markdown
Load training images.
###Code
exim = util.ExampleImages(scaled=True, zoom=0.25)
S1 = exim.image('barbara.png', idxexp=np.s_[10:522, 100:612])
S2 = exim.image('kodim23.png', idxexp=np.s_[:, 60:572])
S3 = exim.image('monarch.png', idxexp=np.s_[:, 160:672])
S4 = exim.image('sail.png', idxexp=np.s_[:, 210:722])
S5 = exim.image('tulips.png', idxexp=np.s_[:, 30:542])
S = np.stack((S1, S2, S3, S4, S5), axis=3)
###Output
_____no_output_____
###Markdown
Highpass filter training images.
###Code
npd = 16
fltlmbd = 5
sl, sh = signal.tikhonov_filter(S, fltlmbd, npd)
###Output
_____no_output_____
###Markdown
Construct initial dictionary.
###Code
np.random.seed(12345)
D0 = np.random.randn(8, 8, 3, 64)
###Output
_____no_output_____
###Markdown
Set regularization parameter and options for dictionary learning solver.
###Code
lmbda = 0.2
opt = onlinecdl.OnlineConvBPDNDictLearn.Options({
'Verbose': True, 'ZeroMean': False, 'eta_a': 10.0,
'eta_b': 20.0, 'DataType': np.float32,
'CBPDN': {'rho': 5.0, 'AutoRho': {'Enabled': True},
'RelaxParam': 1.8, 'RelStopTol': 1e-7, 'MaxMainIter': 50,
'FastSolve': False, 'DataType': np.float32}})
###Output
_____no_output_____
###Markdown
Create solver object and solve.
###Code
if not cupy_enabled():
print('CuPy/GPU device not available: running without GPU acceleration\n')
else:
id = select_device_by_load()
info = gpu_info()
if info:
print('Running on GPU %d (%s)\n' % (id, info[id].name))
d = onlinecdl.OnlineConvBPDNDictLearn(np2cp(D0), lmbda, opt)
iter = 50
d.display_start()
for it in range(iter):
img_index = np.random.randint(0, sh.shape[-1])
d.solve(np2cp(sh[..., [img_index]]))
d.display_end()
D1 = cp2np(d.getdict())
print("OnlineConvBPDNDictLearn solve time: %.2fs" % d.timer.elapsed('solve'))
###Output
Running on GPU 0 (GeForce RTX 2080 Ti)
###Markdown
Display initial and final dictionaries.
###Code
D1 = D1.squeeze()
fig = plot.figure(figsize=(14, 7))
plot.subplot(1, 2, 1)
plot.imview(util.tiledict(D0), title='D0', fig=fig)
plot.subplot(1, 2, 2)
plot.imview(util.tiledict(D1), title='D1', fig=fig)
fig.show()
###Output
_____no_output_____
###Markdown
Get iterations statistics from solver object and plot functional value.
###Code
its = d.getitstat()
fig = plot.figure(figsize=(7, 7))
plot.plot(np.vstack((its.DeltaD, its.Eta)).T, xlbl='Iterations',
lgnd=('Delta D', 'Eta'), fig=fig)
fig.show()
###Output
_____no_output_____ |
Kaggle_Titanic/Titanic 1 - Data Exploration.ipynb | ###Markdown
Titanic Data Exploration

Author: Benjamin D Hamilton

The purpose of this notebook is to capture most of the data exploration performed for the Kaggle Titanic toy dataset competition. It has been tidied up from its original form.

This notebook is organized in two large sections.
- ***Section 1*** is initial data examination and visualization
- ***Section 2*** examines individual features in greater detail

Tentative Conclusions:
- If you want to survive, ride 1st class and leave your family at home

Recommendations for Next Step:
- Create a feature Family = SibSp + Parch + 1
- Create a feature Fare_Indiv = Fare / (Ticket Multiplicity)
- Create several features from Name, TBD
- Create feature for Cabin Letter
- Check and Manage Outliers
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as grd
%matplotlib inline
import seaborn as sns
sns.set_style('whitegrid')
###Output
_____no_output_____
###Markdown
Section 1: Data Introduction

1A: Examination and Visualization

TODO:
- Import Data
- Check out head and tail
###Code
train = pd.read_csv('./datasets/train.csv')
train_cols = train.columns
train_cols
train.head()
train.tail()
###Output
_____no_output_____
###Markdown
TODO:
- Check Unique, Null Values
- Make notes on feature observations
###Code
#Column names, data types, missing values
train.info()
print("Nulls:\n", train.isnull().sum())
plt.figure(figsize=(6,6))
sns.heatmap(train.isnull(),yticklabels=False,cbar=False,cmap='viridis')
plt.title('Heatmap of NAN (gold)')
plt.show()
###Output
_____no_output_____
###Markdown
Notes from the above:
- ***Cabin*** and ***Age*** have many missing values
- ***Embarked*** has 2 missing values
- Text based features:
  - ***Name*** (Last, Title. First)
  - ***Sex*** (male, female)
  - ***Ticket*** (Varies widely)
  - ***Cabin*** (Ship Deck, Room Number)
  - ***Embarked*** (One of 3 letters)
- Numerical features:
  - ***PassengerId*** (use as an index later)
  - ***Survived*** is 0 or 1 <--- categorical
  - ***Pclass*** is 1,2,3 <--- categorical
  - ***Age*** is float
  - ***SibSp*** is int
  - ***Parch*** is int
  - ***Fare*** is float

TODO:
- Make sure every ***PassengerId*** is unique
###Code
print("Number of IDs:", len(train.PassengerId.unique()))
print("Number of Passengers:", len(train.PassengerId))
#Yep
###Output
Number of IDs: 891
Number of Passengers: 891
###Markdown
TODO:
- Check unique values to see if there are any strange entries

***Survived***
###Code
#Survived is OK
train.Survived.unique()
sns.countplot(x='Survived',data=train)
plt.show()
###Output
_____no_output_____
###Markdown
***Pclass***
###Code
#Pclass is OK
train.Pclass.unique()
#parsed by class
sns.countplot(x='Survived',hue='Pclass',data=train)
plt.title('P Class Survival')
###Output
_____no_output_____
###Markdown
***Name***
- Names are complicated and will be dealt with in more detail later
###Code
#Check Names... ... ... there is a lot going on here.
print("Unique Names: ",len(train.Name.unique())) #everybody has their own name
train.Name.unique()[:15]
###Output
Unique Names: 891
###Markdown
***Sex***
###Code
train.Sex.unique()
sns.countplot(x='Sex',data=train)
plt.show()
#parsed by gender
sns.countplot(x='Survived',hue='Sex',data=train)
plt.title('Gender Survival')
###Output
_____no_output_____
###Markdown
***Age***
- Most passengers were between 20-30
- There is a big drop for early teenagers
- Peaks again for children
- Long tail out to old age
###Code
#Prefer using a distplot:
sns.distplot(train.Age.dropna(),kde=False,bins=30)
plt.xlim(0,80)
plt.ylabel('# Passengers')
plt.show()
#sns.distplot(train.Age.dropna(),bins=30,kde=False)
sns.distplot(train.Age.where(train.Survived==0).dropna(),bins=30,kde=False)
sns.distplot(train.Age.where(train.Survived==1).dropna(),bins=30,kde=False)
plt.legend(('Survived = 0', 'Survived = 1'))
plt.title('Age Distribution by Survival')
plt.ylabel('Passengers')
plt.show()
###Output
_____no_output_____
###Markdown
***Age and Survival by Class***
###Code
#Boxplots for distribution across each class, split by survived
plt.cla
plt.figure(figsize=(8,6))
sns.boxplot(x='Survived',y='Age',hue='Pclass',data=train)
plt.title('Survived vs Age by Pclass')
plt.show()
###Output
_____no_output_____
###Markdown
***Age and Class by Sex***
###Code
plt.figure(figsize=(8,6))
sns.boxplot(x='Pclass',y='Age',hue='Sex',data=train)
plt.title('PClass vs Age by Sex')
plt.show()
###Output
_____no_output_____
###Markdown
***SibSp*** and ***Parch***
- ***SibSp***
  - Only has values 0 through 5 and 8 (one very, very big family)
  - Heavily skewed towards 0
###Code
#What unique values?
train.SibSp.unique() #0 through 5, 8
sns.distplot(train.SibSp,kde=False)
plt.show()
###Output
_____no_output_____
###Markdown
- ***Parch***
  - Values 0 through 6
  - Heavily skewed towards 0
###Code
#What unique values?
train.Parch.unique() # 0 through 6
sns.distplot(train.Parch,kde=False)
plt.show()
###Output
_____no_output_____
###Markdown
***Ticket***
- Ticket names are complicated, hard to see broad patterns
- 681 Unique tickets, 891 Passengers ... some share ticket numbers
###Code
print("Nulls: ", train.Ticket.isnull().sum())
print("Total Tickets: ", len(train.Ticket), " (1 per passenger)")
print("Unique Tickets: ", len(train.Ticket.unique()), " (some people share ticket numbers)")
###Output
Nulls: 0
Total Tickets: 891 (1 per passenger)
Unique Tickets: 681 (some people share ticket numbers)
###Markdown
1B: Basic Statistics
###Code
train.drop(['PassengerId','Survived'],axis=1).describe()
###Output
_____no_output_____
###Markdown
***Correlation Matrix***
###Code
train.drop('PassengerId',axis=1).corr()
###Output
_____no_output_____
###Markdown
Section 2 - Detailed Analysis

2A: TICKETS
###Code
#Let's see what tickets share numbers...
train.Ticket.value_counts(ascending=False)[0:10]
###Output
_____no_output_____
###Markdown
TODO:
- Look for similarities in passenger records using Ticket Name
- Note, the value counts list is indexed by ticket name and has only 1 column (the integers)
###Code
#Record to make life easier
tick_val_ct = train.Ticket.value_counts(ascending=False)
###Output
_____no_output_____
###Markdown
***Meet the Sage Family***
- Missing all ages and cabins
- All are in 3rd class
- It says 8 SibSp and 2 Parch, but there are only 7 on this ticket. If SibSp = 8, then there needs to be 9 people.
- These are probably 7 of 9 siblings. The 8th & 9th sibling and 2 parents are not on this ticket.
- The last Name does not lead to the rest of the family
- Nor does a search for others with SibSp = 8
- Nor does a search for 2 parents with 9 kids (Parch = 9, but I looked for 7+ anyway)
- Sage Family Data is not self-consistent! ***They are probably in the test set!***
###Code
train[train.Ticket == tick_val_ct.index[0]]
#A quick glance reveals that no other Sage's are on board.
train[train.Name.str.contains('Sage')]
#Nor are there any people with SibSp = 8
train[train.SibSp == 8]
#And the parents should have Parch = the SubSp+1
#There are no such person
train[train.Parch > 7]
###Output
_____no_output_____
###Markdown
***Meet the Anderson Family***
- Missing all cabins, has all ages
- All are in 3rd class
- SibSp and Parch sums to 6; they and the ages correspond appropriately to parents and their children
- Data appears self consistent, whole family is in the training set
###Code
train[train.Ticket == tick_val_ct.index[1]]
###Output
_____no_output_____
###Markdown
***Meet Various Asian (and maybe other ethnicity) Men***
- Missing all cabins and some ages
- All are in 3rd class
- Names and cultural customs make it unclear whether these men are related
- SibSp and Parch are all 0, indicating they were all travelling solo -or- the data wasn't collected properly
- Why are they on the same ticket? Bought together? Brokered? Interesting to think about.
###Code
train[train.Ticket == tick_val_ct.index[2]]
###Output
_____no_output_____
###Markdown
***Meet the Skoog Family***
- Missing all cabins, has all ages
- All are in 3rd class
- Ages, SibSp, and Parch are consistent for a family of 6

__Growing Question: Does the Fare count for the single ticket? If so, it may be worth adding fare/ticket holder as a feature__
###Code
train[train.Ticket == tick_val_ct.index[3]]
###Output
_____no_output_____
###Markdown
***Goodwin Family***
- Children with SibSp = 5 indicates that 6 siblings are on the ship
- Children with Parch = 2 indicates that 2 parents are on the ship
- Only one parent with SibSp = 1 and Parch = 6 indicates that spouse and six kids are on board
- This ticket only has 1 parent and 5 of the kids on it.

__Where is parent 2 (Mr. Fredrick Goodwin should be his name) and mystery kid 6? Probably in the Test Set__
###Code
train[train.Ticket == tick_val_ct.index[4]]
#Searching on the last name shows no more:
train[train.Name.str.contains('Goodwin')]
#Searching on the SibSp = 5 and Parch = 6 reveal no missing parent or child
train[train.SibSp == 5]
train[train.Parch == 6]
###Output
_____no_output_____
###Markdown
***Meet the Panula Family***
- Missing all cabins, has all ages
- All are in 3rd class
- SibSp and Parch sums to 5; they and the ages correspond appropriately to parents and their children
- Data appears self consistent, 5 kids (SibSp 4, Parch 1) and 1 parent (SibSp 0, Parch 1), all in train set
###Code
train[train.Ticket == tick_val_ct.index[5]]
###Output
_____no_output_____
###Markdown
***Meet the Rice Family***
- Missing all cabins, has all ages
- All are in 3rd class
- SibSp and Parch sums to 5; they and the ages correspond appropriately to parents and their children
- 1 Parent on board, 5 Kids, but only 4 kids are shown. Kid 5 is missing. Probably in Test Set
###Code
train[train.Ticket == tick_val_ct.index[6]]
###Output
_____no_output_____
###Markdown
- PC is exclusive for Pclass = 1 (1st class ticket)
###Code
print("Total 1st class tickets: ", train.Pclass[train.Pclass == 1].sum())
print("Tickets with PC on them: ", len(train[train.Ticket.str.contains('PC')]))
train[train.Ticket.str.contains('PC')].drop(['PassengerId','Survived'],axis=1).describe()
###Output
_____no_output_____
###Markdown
- SW or S.W. are related:
###Code
train[train.Ticket.str.contains('SW') | train.Ticket.str.contains('S.W.')]
###Output
_____no_output_____
###Markdown
***TICKET OBSERVATIONS:***
- There are passengers suggested by SibSp and Parch values that are not in the training set
  - They are probably in the test set. Going to run with the assumption that they are
  - Not going to look into the test set to see, though. That feels like cheating.
- Are Ticket Fares for a single ticket or per person?
  - Analyzing fare/ticket mult may be useful.
- May be useful to use total family number instead of training set multiplicity to capture
  - except for when family number doesn't exist (like w/ the Asians). Need an alternative there.

2B: Fares

TODO:
- Unique Fare Values, Statistics, Top 10
- Distribution Plot
###Code
len(train.Fare.unique()) #248 fare classes, includes NAN if there are some p
train.Fare.describe()
#Top 10:
train.Fare.sort_values(ascending=False)[0:9].values
plt.figure(figsize=(8,6))
sns.distplot(train.Fare,kde=False)
plt.show()
###Output
_____no_output_____
###Markdown
TODO:

***Transform Fares to examine costs as if each ticket was paid for only once***
- An individual's fare is the ticket cost divided by the higher of family size or ticket holders
  - family = SibSp + Parch + 1
  - ticket holders is the multiplicity of the ticket _in the training set_, and may be missing a few in the test set
- This allows the most accuracy for individuals travelling alone and families documented properly
- It affords modest accuracy for cases like the Asians, above, where unrelated people were on 1 ticket
###Code
#Make a feature for fare cost / family size
train['Family'] = train['SibSp'] + train['Parch'] + 1
train['Fare_Indiv_FamilyCt'] = train.Fare / train.Family
#Make a feature for fare cost / ticket multiplicity
train['Ticket_Count'] = train.Ticket.apply(lambda x: train.Ticket.value_counts().loc[str(x)])
train['Fare_Indiv_TicketCt'] = train.Fare / train.Ticket_Count
train.Fare_Indiv_FamilyCt.head()
train.Fare_Indiv_TicketCt.head()
#Create the Individual Fares using the smallest of the two
train['Fare_Indiv'] = train[['Fare_Indiv_FamilyCt','Fare_Indiv_TicketCt']].min(axis=1)
train.drop(['Fare_Indiv_FamilyCt','Fare_Indiv_TicketCt'],axis=1,inplace=True)
train.Fare_Indiv.head()
###Output
_____no_output_____
###Markdown
TODO:
- Examine two cases
  - Family number is higher than members that appear in training set (The Andersson Family returns)
  - Non-family members share a ticket (the Asian (and other) guys return)
###Code
#Compare the Andersson Family and the Asian Guys (tm)
#Pick choice columns for examination
#Take a look at the head to see how it plays out.
train[(train.Ticket.str.contains(tick_val_ct.index[1]) | \
train.Ticket.str.contains(tick_val_ct.index[2]))].\
loc[:,['Name','Ticket','Fare','Ticket_Count','Family','Fare_Indiv']].head()
###Output
_____no_output_____
###Markdown
TODO:
- Compare descriptive statistics
- Transformed Fare histogram
###Code
print('Listed Fares: \n', train.Fare.describe())
print('\nIndividual Fares:\n', train.Fare_Indiv.describe())
sns.distplot(train.Fare_Indiv,kde=False)
plt.title('Histogram of Individual Fare Cost')
plt.ylabel('Passenger Count')
plt.show()
###Output
_____no_output_____
###Markdown
- ***Fare and Fare_Indiv have different correlations*** with the other features:
  - This nonlinear transformation may be useful in modelling
###Code
train.drop('PassengerId',axis=1).corr().loc[:,['Fare','Fare_Indiv']]
###Output
_____no_output_____
###Markdown
2C: Names
- ***Name*** entries have the pattern:
  - LastName, Title. FirstNames
  - ex. 'Allen, Mr. William Henry'
- Women with "Title." = "Mrs." use Husband's FirstNames with their own name in parentheses
  - ex. 'Cumings, Mrs. John Bradley (Florence Briggs Thayer)'
- Title. includes Mr. Mrs. etc... more on that later.
- Special Cases: FirstName is generally the first name of the passenger
  - If the passenger is a married woman with title Mrs. the FirstName is of the form "HusbandFirstName (Woman's Full Name)"

Note: It will be worth splitting this up a bit to examine the impact of Titles on the models.
###Code
train.Name.head()
###Output
_____no_output_____
###Markdown
***More on this in the data workup... a lot more*** 2D: Embarked TODO:- Check uniques, Make Plots, Check Nans- Decide how to impute
###Code
train.Embarked.unique() #3 categories, and some NaNs
sns.countplot('Embarked',data=train)
len(train.Embarked.unique()) #INCLUDES NANs
#Two null embarked entries are for two unrelated women travelling together, sharing a ticket and cabin
train[train.Embarked.isnull()]
#Almost everybody whose ticket starts with '113' is from port 'S'...
#Will use that to impute later.
sns.countplot('Embarked',data=train[train.Ticket.str.contains('113')])
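#A plausible imputation given the plot above (an assumption; the actual impute happens later in the workup):
#train['Embarked'] = train['Embarked'].fillna('S')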
###Output
_____no_output_____
###Markdown
2E: Cabins TODO:- Split the samples into Cabin = Null and Cabins != Null sets- Examine the Cabins != Null set - Get unique entries - Get multiplicity of entries (who is sharing a room) - Make observations on data
###Code
#Null set:
trcabnull = train[train.Cabin.isnull()]
trcabnull.head(2)
#Not Null set:
trcabnot = train[~train.Cabin.isnull()]
trcabnot.head()
#Unique Cabin entries:
trcabnot.Cabin.unique()
###Output
_____no_output_____
###Markdown
***Notes***: - Some of them have multiple entries, - ex: 'C23 C25 C27' (the Titanic map indicates it's a suite) - ex: 'F G63' (unclear. Even looked at the Titanic floorplan and this isn't clear)- Most begin with A, B, C, D, E- Some F and G - The G part is confusing; the A - F are very clear in the Titanic deck plans, G may be for crew- one T - Captain's quarters? ***Multiplicity***: - Number of passengers that share this room, according to the training data I have- This is not the same as the number of passengers who may actually be IN the room... the test set may hold some of those
###Code
trcabnot.Cabin.value_counts()[:5]
### Occurrence of the rare ones:
print('Contains G:', trcabnot.Cabin.str.contains('G').sum())
print('Contains F:', trcabnot.Cabin.str.contains('F').sum())
print('Contains T:', trcabnot.Cabin.str.contains('T').sum())
###Output
Contains G: 7
Contains F: 13
Contains T: 1
###Markdown
***Look Closer at the Residents of Deck G***- Two women and their two daughters are in cabin G6- Cabin F G73 is two men (that are visible in the training data)- Cabin F G63 is one man (in training data, probably 2 in actuality) --> I think the F Gxx combo is 3rd class quarters. ***Observation***: It is hard to use the cabin information because so much is missing; still, I don't want to get rid of it yet. It may be useful in decision tree algorithms.
###Code
trcabnot[trcabnot.Cabin.str.contains('G')]
###Output
_____no_output_____
###Markdown
***Take a closer look at Deck F***- Note that F Gxx and F Exx correspond to 3rd class
###Code
trcabnot[trcabnot.Cabin.str.contains('F')]
###Output
_____no_output_____
###Markdown
***The single "T" Cabin:***
###Code
trcabnot[trcabnot.Cabin.str.contains('T')]
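#Sketch of a possible 'Deck' feature (an assumption, not applied here): extract the leading cabin letter
#train['Deck'] = train.Cabin.str[0]  #NaN where Cabin is missing; 'A'-'G' or 'T' otherwise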
###Output
_____no_output_____
###Markdown
***It may be worth extracting the cabin letter and using it as a feature (that may be dropped when appropriate for a model)*** 2F: Family (new feature)
###Code
train['Family'] = train['SibSp'] + train['Parch'] + 1
train.Family.head()
sns.distplot(train.Family,kde=0)
plt.title('Distribution of Passengers by Family Size')
plt.ylabel('Passenger Count')
plt.xlabel('Family Count (1 = self)')
###Output
_____no_output_____
###Markdown
***Re-examine Basic Stats, Including Family and Fare_Indiv***
###Code
train.drop(['PassengerId','Survived'],axis=1).describe()
train.drop('PassengerId',axis=1).corr()
###Output
_____no_output_____
###Markdown
***Heatmap of Feature Correlation*** - Shows modest correlations in mostly expected ways- ***Fare and Fare_Indiv do not have the same correlations***, which makes the transformation useful!
###Code
train.drop('PassengerId',axis=1).corr().loc[:,['Fare','Fare_Indiv']]
sns.heatmap(train.drop('PassengerId',axis=1).corr())
plt.title('Heatmap of Correlations for Features')
###Output
_____no_output_____
###Markdown
***Survival and Family Size***
###Code
sns.boxplot(x='Survived',y='Family',hue='Sex',data=train)
plt.title('Survival vs Family Size for Sex')
sns.boxplot(x='Survived',y='Family',hue='Pclass',data=train[train.Sex=='male'])
plt.title('Survival vs Family Size by PClass for males only')
sns.boxplot(x='Survived',y='Family',hue='Pclass',data=train[train.Sex=='female'])
plt.title('Survival vs Family Size by PClass for females only')
###Output
_____no_output_____ |
Proj3/home/project_3_starter.ipynb | ###Markdown
Project 3: Smart Beta Portfolio and Portfolio Optimization OverviewSmart beta has a broad meaning, but we can say in practice that when we use the universe of stocks from an index, and then apply some weighting scheme other than market cap weighting, it can be considered a type of smart beta fund. A Smart Beta portfolio generally gives investors exposure or "beta" to one or more types of market characteristics (or factors) that are believed to predict prices while giving investors a diversified broad exposure to a particular market. Smart Beta portfolios generally target momentum, earnings quality, low volatility, and dividends or some combination. Smart Beta Portfolios are generally rebalanced infrequently and follow relatively simple rules or algorithms that are passively managed. Model changes to these types of funds are also rare requiring prospectus filings with US Security and Exchange Commission in the case of US focused mutual funds or ETFs.. Smart Beta portfolios are generally long-only, they do not short stocks.In contrast, a purely alpha-focused quantitative fund may use multiple models or algorithms to create a portfolio. The portfolio manager retains discretion in upgrading or changing the types of models and how often to rebalance the portfolio in attempt to maximize performance in comparison to a stock benchmark. Managers may have discretion to short stocks in portfolios.Imagine you're a portfolio manager, and wish to try out some different portfolio weighting methods.One way to design portfolio is to look at certain accounting measures (fundamentals) that, based on past trends, indicate stocks that produce better results. For instance, you may start with a hypothesis that dividend-issuing stocks tend to perform better than stocks that do not. This may not always be true of all companies; for instance, Apple does not issue dividends, but has had good historical performance. The hypothesis about dividend-paying stocks may go something like this: Companies that regularly issue dividends may also be more prudent in allocating their available cash, and may indicate that they are more conscious of prioritizing shareholder interests. For example, a CEO may decide to reinvest cash into pet projects that produce low returns. Or, the CEO may do some analysis, identify that reinvesting within the company produces lower returns compared to a diversified portfolio, and so decide that shareholders would be better served if they were given the cash (in the form of dividends). So according to this hypothesis, dividends may be both a proxy for how the company is doing (in terms of earnings and cash flow), but also a signal that the company acts in the best interest of its shareholders. Of course, it's important to test whether this works in practice.You may also have another hypothesis, with which you wish to design a portfolio that can then be made into an ETF. You may find that investors may wish to invest in passive beta funds, but wish to have less risk exposure (less volatility) in their investments. The goal of having a low volatility fund that still produces returns similar to an index may be appealing to investors who have a shorter investment time horizon, and so are more risk averse.So the objective of your proposed portfolio is to design a portfolio that closely tracks an index, while also minimizing the portfolio variance. 
Also, if this portfolio can match the returns of the index with less volatility, then it has a higher risk-adjusted return (same return, lower volatility). Smart Beta ETFs can be designed with both of these two general methods (among others): alternative weighting and minimum volatility ETF. InstructionsEach problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a ` TODO` comment. After implementing the function, run the cell to test it against the unit tests we've provided. For each problem, we provide one or more unit tests from our `project_tests` package. These unit tests won't tell you if your answer is correct, but will warn you of any major errors. Your code will be checked for the correct solution when you submit it to Udacity. PackagesWhen you implement the functions, you'll only need to use the packages you've used in the classroom, like [Pandas](https://pandas.pydata.org/) and [Numpy](http://www.numpy.org/). These packages will be imported for you. We recommend you don't add any import statements, otherwise the grader might not be able to run your code. The other packages that we're importing are `helper`, `project_helper`, and `project_tests`. These are custom packages built to help you solve the problems. The `helper` and `project_helper` modules contain utility functions and graph functions. The `project_tests` package contains the unit tests for all the problems. Install Packages
###Code
import sys
!{sys.executable} -m pip install -r requirements.txt
###Output
Requirement already satisfied: colour==0.1.5 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 1)) (0.1.5)
Collecting cvxpy==1.0.3 (from -r requirements.txt (line 2))
Requirement already satisfied: cycler==0.10.0 in /opt/conda/lib/python3.6/site-packages/cycler-0.10.0-py3.6.egg (from -r requirements.txt (line 3)) (0.10.0)
Collecting numpy==1.14.5 (from -r requirements.txt (line 4))
Collecting pandas==0.21.1 (from -r requirements.txt (line 5))
Collecting plotly==2.2.3 (from -r requirements.txt (line 6))
Requirement already satisfied: pyparsing==2.2.0 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 7)) (2.2.0)
Requirement already satisfied: python-dateutil==2.6.1 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 8)) (2.6.1)
Requirement already satisfied: pytz==2017.3 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 9)) (2017.3)
Requirement already satisfied: requests==2.18.4 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 10)) (2.18.4)
Collecting scipy==1.0.0 (from -r requirements.txt (line 11))
Requirement already satisfied: scikit-learn==0.19.1 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 12)) (0.19.1)
Requirement already satisfied: six==1.11.0 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 13)) (1.11.0)
Collecting tqdm==4.19.5 (from -r requirements.txt (line 14))
Collecting osqp (from cvxpy==1.0.3->-r requirements.txt (line 2))
Collecting ecos>=2 (from cvxpy==1.0.3->-r requirements.txt (line 2))
Collecting scs>=1.1.3 (from cvxpy==1.0.3->-r requirements.txt (line 2))
Collecting multiprocess (from cvxpy==1.0.3->-r requirements.txt (line 2))
Requirement already satisfied: fastcache in /opt/conda/lib/python3.6/site-packages (from cvxpy==1.0.3->-r requirements.txt (line 2)) (1.0.2)
Requirement already satisfied: toolz in /opt/conda/lib/python3.6/site-packages (from cvxpy==1.0.3->-r requirements.txt (line 2)) (0.8.2)
Requirement already satisfied: decorator>=4.0.6 in /opt/conda/lib/python3.6/site-packages (from plotly==2.2.3->-r requirements.txt (line 6)) (4.0.11)
Requirement already satisfied: nbformat>=4.2 in /opt/conda/lib/python3.6/site-packages (from plotly==2.2.3->-r requirements.txt (line 6)) (4.4.0)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /opt/conda/lib/python3.6/site-packages (from requests==2.18.4->-r requirements.txt (line 10)) (3.0.4)
Requirement already satisfied: idna<2.7,>=2.5 in /opt/conda/lib/python3.6/site-packages (from requests==2.18.4->-r requirements.txt (line 10)) (2.6)
Requirement already satisfied: urllib3<1.23,>=1.21.1 in /opt/conda/lib/python3.6/site-packages (from requests==2.18.4->-r requirements.txt (line 10)) (1.22)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.6/site-packages (from requests==2.18.4->-r requirements.txt (line 10)) (2017.11.5)
Requirement already satisfied: future in /opt/conda/lib/python3.6/site-packages (from osqp->cvxpy==1.0.3->-r requirements.txt (line 2)) (0.16.0)
Collecting dill>=0.3.1 (from multiprocess->cvxpy==1.0.3->-r requirements.txt (line 2))
Requirement already satisfied: jupyter-core in /opt/conda/lib/python3.6/site-packages (from nbformat>=4.2->plotly==2.2.3->-r requirements.txt (line 6)) (4.4.0)
Requirement already satisfied: jsonschema!=2.5.0,>=2.4 in /opt/conda/lib/python3.6/site-packages (from nbformat>=4.2->plotly==2.2.3->-r requirements.txt (line 6)) (2.6.0)
###Markdown
Load Packages
###Code
import pandas as pd
import numpy as np
import helper
import project_helper
import project_tests
###Output
_____no_output_____
###Markdown
Market Data Load DataFor this universe of stocks, we'll be selecting large dollar volume stocks. We're using this universe, since it is highly liquid.
###Code
df = pd.read_csv('../../data/project_3/eod-quotemedia.csv')
percent_top_dollar = 0.2
high_volume_symbols = project_helper.large_dollar_volume_stocks(df, 'adj_close', 'adj_volume', percent_top_dollar)
df = df[df['ticker'].isin(high_volume_symbols)]
close = df.reset_index().pivot(index='date', columns='ticker', values='adj_close')
volume = df.reset_index().pivot(index='date', columns='ticker', values='adj_volume')
dividends = df.reset_index().pivot(index='date', columns='ticker', values='dividends')
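# Each of the frames above is dates (rows) x tickers (columns), the 2-D layout used throughout this project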
###Output
_____no_output_____
###Markdown
View DataTo see what one of these 2-d matrices looks like, let's take a look at the closing prices matrix.
###Code
project_helper.print_dataframe(close)
###Output
_____no_output_____
###Markdown
Part 1: Smart Beta PortfolioIn Part 1 of this project, you'll build a portfolio using dividend yield to choose the portfolio weights. A portfolio such as this could be incorporated into a smart beta ETF. You'll compare this portfolio to a market cap weighted index to see how well it performs. Note that in practice, you'll probably get the index weights from a data vendor (such as companies that create indices, like MSCI, FTSE, Standard and Poor's), but for this exercise we will simulate a market cap weighted index. Index WeightsThe index we'll be using is based on large dollar volume stocks. Implement `generate_dollar_volume_weights` to generate the weights for this index. For each date, generate the weights based on dollar volume traded for that date. For example, assume the following is close prices and volume data:``` Prices A B ...2013-07-08 2 2 ...2013-07-09 5 6 ...2013-07-10 1 2 ...2013-07-11 6 5 ...... ... ... ... Volume A B ...2013-07-08 100 340 ...2013-07-09 240 220 ...2013-07-10 120 500 ...2013-07-11 10 100 ...... ... ... ...```The weights created from the function `generate_dollar_volume_weights` should be the following:``` A B ...2013-07-08 0.126.. 0.194.. ...2013-07-09 0.759.. 0.377.. ...2013-07-10 0.075.. 0.285.. ...2013-07-11 0.037.. 0.142.. ...... ... ... ...```
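In other words, for ticker $i$ on date $t$ the weight is that ticker's share of the day's total dollar volume: $$w_{i,t} = \frac{close_{i,t} \times volume_{i,t}}{\sum_j close_{j,t} \times volume_{j,t}}$$ so each row of the output sums to 1.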
###Code
def generate_dollar_volume_weights(close, volume):
"""
Generate dollar volume weights.
Parameters
----------
close : DataFrame
Close price for each ticker and date
volume : str
Volume for each ticker and date
Returns
-------
dollar_volume_weights : DataFrame
The dollar volume weights for each ticker and date
"""
assert close.index.equals(volume.index)
assert close.columns.equals(volume.columns)
#TODO: Implement function
d_vol = close * volume
return d_vol.apply(lambda x: x/x.sum(), axis = 1)
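#Note: an equivalent vectorized form would be d_vol.div(d_vol.sum(axis=1), axis=0)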
project_tests.test_generate_dollar_volume_weights(generate_dollar_volume_weights)
###Output
Tests Passed
###Markdown
View DataLet's generate the index weights using `generate_dollar_volume_weights` and view them using a heatmap.
###Code
index_weights = generate_dollar_volume_weights(close, volume)
project_helper.plot_weights(index_weights, 'Index Weights')
###Output
_____no_output_____
###Markdown
Portfolio WeightsNow that we have the index weights, let's choose the portfolio weights based on dividend. You would normally calculate the weights based on trailing dividend yield, but we'll simplify this by just calculating the total dividend yield over time.Implement `calculate_dividend_weights` to return the weights for each stock based on its total dividend yield over time. This is similar to generating the weight for the index, but it's using dividend data instead.For example, assume the following is `dividends` data:``` Prices A B2013-07-08 0 02013-07-09 0 12013-07-10 0.5 02013-07-11 0 02013-07-12 2 0... ... ...```The weights created from the function `calculate_dividend_weights` should be the following:``` A B2013-07-08 NaN NaN2013-07-09 0 12013-07-10 0.333.. 0.666..2013-07-11 0.333.. 0.666..2013-07-12 0.714.. 0.285..... ... ...```
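Equivalently, using the running total of dividends $D_{i,t} = \sum_{s \le t} d_{i,s}$ for ticker $i$ through date $t$, the weight is $$w_{i,t} = \frac{D_{i,t}}{\sum_j D_{j,t}}$$ which is why the rows are NaN until at least one dividend has been paid.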
###Code
def calculate_dividend_weights(dividends):
"""
Calculate dividend weights.
Parameters
----------
dividends : DataFrame
Dividend for each stock and date
Returns
-------
dividend_weights : DataFrame
Weights for each stock and date
"""
#TODO: Implement function
print(dividends)
column_d = dividends.cumsum(axis=0 )
d_weights = column_d.div(column_d.sum(axis=1),axis=0)
return d_weights
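#Note: on dates before any dividend has been paid the row sum is 0, so 0/0 gives the NaN rows seen in the example above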
project_tests.test_calculate_dividend_weights(calculate_dividend_weights)
###Output
GRBJ ILRH UII
2002-06-17 0.00000000 0.00000000 0.00000000
2002-06-18 0.00000000 0.00000000 0.10000000
2002-06-19 0.00000000 1.00000000 0.30000000
2002-06-20 0.00000000 0.20000000 0.00000000
Tests Passed
###Markdown
View DataJust like the index weights, let's generate the ETF weights and view them using a heatmap.
###Code
etf_weights = calculate_dividend_weights(dividends)
project_helper.plot_weights(etf_weights, 'ETF Weights')
###Output
ticker AAL AAPL ABBV ABT AGN AIG \
date
2013-07-01 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-02 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-03 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-05 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-08 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-09 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-10 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-11 0.00000000 0.00000000 0.40000000 0.14000000 0.00000000 0.00000000
2013-07-12 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-15 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-16 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-17 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-18 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-19 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-22 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-23 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-24 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-25 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-26 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-29 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-30 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-31 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-01 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-02 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-05 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-06 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-07 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-08 0.00000000 3.05000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-09 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-12 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
... ... ... ... ... ... ...
2017-05-19 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-05-22 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-05-23 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-05-24 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-05-25 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-05-26 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-05-30 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-05-31 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-01 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-02 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-05 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-06 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-07 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-08 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-09 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-12 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.32000000
2017-06-13 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-14 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-15 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-16 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-19 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-20 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-21 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-22 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-23 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-26 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-27 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-28 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-29 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-30 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
ticker AMAT AMGN AMZN APC ... USB \
date ...
2013-07-01 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-02 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-03 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-05 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-08 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-09 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-10 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-11 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-12 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-15 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-16 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-17 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-18 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-19 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-22 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-23 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-24 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-25 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-26 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-29 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-30 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-07-31 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-08-01 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-08-02 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-08-05 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-08-06 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-08-07 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-08-08 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-08-09 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2013-08-12 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
... ... ... ... ... ... ...
2017-05-19 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-05-22 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-05-23 0.10000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-05-24 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-05-25 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-05-26 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-05-30 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-05-31 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-01 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-02 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-05 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-06 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-07 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-08 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-09 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-12 0.00000000 0.00000000 0.00000000 0.05000000 ... 0.00000000
2017-06-13 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-14 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-15 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-16 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-19 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-20 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-21 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-22 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-23 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-26 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-27 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-28 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.28000000
2017-06-29 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
2017-06-30 0.00000000 0.00000000 0.00000000 0.00000000 ... 0.00000000
ticker UTX V VLO VZ WBA WFC \
date
2013-07-01 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-02 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-03 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-05 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-08 0.00000000 0.00000000 0.00000000 0.51500000 0.00000000 0.00000000
2013-07-09 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-10 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-11 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-12 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-15 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-16 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-17 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-18 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-19 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-22 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-23 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-24 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-25 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-26 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-29 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-30 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-31 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-01 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-02 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-05 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-06 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-07 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.30000000
2013-08-08 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-09 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-12 0.00000000 0.00000000 0.22500000 0.00000000 0.00000000 0.00000000
... ... ... ... ... ... ...
2017-05-19 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-05-22 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-05-23 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-05-24 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-05-25 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-05-26 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-05-30 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-05-31 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-01 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-02 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-05 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-06 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-07 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-08 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-09 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-12 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-13 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-14 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-15 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-16 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-19 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-20 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-21 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-22 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-23 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-26 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-27 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-28 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-29 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2017-06-30 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
ticker WMT WYNN XOM
date
2013-07-01 0.00000000 0.00000000 0.00000000
2013-07-02 0.00000000 0.00000000 0.00000000
2013-07-03 0.00000000 0.00000000 0.00000000
2013-07-05 0.00000000 0.00000000 0.00000000
2013-07-08 0.00000000 0.00000000 0.00000000
2013-07-09 0.00000000 0.00000000 0.00000000
2013-07-10 0.00000000 0.00000000 0.00000000
2013-07-11 0.00000000 0.00000000 0.00000000
2013-07-12 0.00000000 0.00000000 0.00000000
2013-07-15 0.00000000 0.00000000 0.00000000
2013-07-16 0.00000000 0.00000000 0.00000000
2013-07-17 0.00000000 0.00000000 0.00000000
2013-07-18 0.00000000 0.00000000 0.00000000
2013-07-19 0.00000000 0.00000000 0.00000000
2013-07-22 0.00000000 0.00000000 0.00000000
2013-07-23 0.00000000 0.00000000 0.00000000
2013-07-24 0.00000000 0.00000000 0.00000000
2013-07-25 0.00000000 0.00000000 0.00000000
2013-07-26 0.00000000 0.00000000 0.00000000
2013-07-29 0.00000000 0.00000000 0.00000000
2013-07-30 0.00000000 0.00000000 0.00000000
2013-07-31 0.00000000 0.00000000 0.00000000
2013-08-01 0.00000000 0.00000000 0.00000000
2013-08-02 0.00000000 0.00000000 0.00000000
2013-08-05 0.00000000 0.00000000 0.00000000
2013-08-06 0.00000000 0.00000000 0.00000000
2013-08-07 0.47000000 0.00000000 0.00000000
2013-08-08 0.00000000 1.00000000 0.00000000
2013-08-09 0.00000000 0.00000000 0.63000000
2013-08-12 0.00000000 0.00000000 0.00000000
... ... ... ...
2017-05-19 0.00000000 0.00000000 0.00000000
2017-05-22 0.00000000 0.00000000 0.00000000
2017-05-23 0.00000000 0.00000000 0.00000000
2017-05-24 0.00000000 0.00000000 0.00000000
2017-05-25 0.00000000 0.00000000 0.00000000
2017-05-26 0.00000000 0.00000000 0.00000000
2017-05-30 0.00000000 0.00000000 0.00000000
2017-05-31 0.00000000 0.00000000 0.00000000
2017-06-01 0.00000000 0.00000000 0.00000000
2017-06-02 0.00000000 0.00000000 0.00000000
2017-06-05 0.00000000 0.00000000 0.00000000
2017-06-06 0.00000000 0.00000000 0.00000000
2017-06-07 0.00000000 0.00000000 0.00000000
2017-06-08 0.00000000 0.00000000 0.00000000
2017-06-09 0.00000000 0.00000000 0.00000000
2017-06-12 0.00000000 0.00000000 0.00000000
2017-06-13 0.00000000 0.00000000 0.00000000
2017-06-14 0.00000000 0.00000000 0.00000000
2017-06-15 0.00000000 0.00000000 0.00000000
2017-06-16 0.00000000 0.00000000 0.00000000
2017-06-19 0.00000000 0.00000000 0.00000000
2017-06-20 0.00000000 0.00000000 0.00000000
2017-06-21 0.00000000 0.00000000 0.00000000
2017-06-22 0.00000000 0.00000000 0.00000000
2017-06-23 0.00000000 0.00000000 0.00000000
2017-06-26 0.00000000 0.00000000 0.00000000
2017-06-27 0.00000000 0.00000000 0.00000000
2017-06-28 0.00000000 0.00000000 0.00000000
2017-06-29 0.00000000 0.00000000 0.00000000
2017-06-30 0.00000000 0.00000000 0.00000000
[1009 rows x 99 columns]
###Markdown
ReturnsImplement `generate_returns` to generate returns data for all the stocks and dates from price data. You might notice we're implementing returns and not log returns. Since we're not dealing with volatility, we don't have to use log returns.
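For reference, the simple return being asked for here is $$r_t = \frac{p_t - p_{t-1}}{p_{t-1}} = \frac{p_t}{p_{t-1}} - 1$$ while the log return would be $\ln(p_t / p_{t-1})$; the two are close for small price moves.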
###Code
def generate_returns(prices):
"""
Generate returns for ticker and date.
Parameters
----------
prices : DataFrame
Price for each ticker and date
Returns
-------
returns : Dataframe
The returns for each ticker and date
"""
#TODO: Implement function
return (prices-prices.shift(1))/prices.shift(1)
project_tests.test_generate_returns(generate_returns)
###Output
Tests Passed
###Markdown
View DataLet's generate the closing returns using `generate_returns` and view them using a heatmap.
###Code
returns = generate_returns(close)
project_helper.plot_returns(returns, 'Close Returns')
###Output
_____no_output_____
###Markdown
Weighted ReturnsWith the returns of each stock computed, we can use it to compute the returns for an index or ETF. Implement `generate_weighted_returns` to create weighted returns using the returns and weights.
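Each entry of the result is $w_{i,t} \, r_{i,t}$, so the portfolio (index or ETF) return for a date is simply the row sum of this weighted-returns frame.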
###Code
def generate_weighted_returns(returns, weights):
"""
Generate weighted returns.
Parameters
----------
returns : DataFrame
Returns for each ticker and date
weights : DataFrame
Weights for each ticker and date
Returns
-------
weighted_returns : DataFrame
Weighted returns for each ticker and date
"""
assert returns.index.equals(weights.index)
assert returns.columns.equals(weights.columns)
#TODO: Implement function
return returns * weights
project_tests.test_generate_weighted_returns(generate_weighted_returns)
###Output
Tests Passed
###Markdown
View DataLet's generate the ETF and index returns using `generate_weighted_returns` and view them using a heatmap.
###Code
index_weighted_returns = generate_weighted_returns(returns, index_weights)
etf_weighted_returns = generate_weighted_returns(returns, etf_weights)
project_helper.plot_returns(index_weighted_returns, 'Index Returns')
project_helper.plot_returns(etf_weighted_returns, 'ETF Returns')
###Output
_____no_output_____
###Markdown
Cumulative ReturnsTo compare performance between the ETF and Index, we're going to calculate the tracking error. Before we do that, we first need to calculate the index and ETF cumulative returns. Implement `calculate_cumulative_returns` to calculate the cumulative returns over time given the returns.
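Concretely, if $r_{p,t}$ is the portfolio return for date $t$ (the row sum of the weighted returns), the cumulative return through date $T$ is $$\prod_{t \le T} (1 + r_{p,t})$$ which is what gets compared between the ETF and the index below.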
###Code
def calculate_cumulative_returns(returns):
"""
Calculate cumulative returns.
Parameters
----------
returns : DataFrame
Returns for each ticker and date
Returns
-------
cumulative_returns : Pandas Series
Cumulative returns for each date
"""
#TODO: Implement function
print(returns)
return (returns.sum(axis=1) + 1).cumprod()
project_tests.test_calculate_cumulative_returns(calculate_cumulative_returns)
###Output
OLNA PWN BUC
2006-06-02 nan nan nan
2006-06-03 1.59904743 1.66397210 1.67345829
2006-06-04 -0.37065629 -0.36541822 -0.36015840
2006-06-05 -0.41055669 0.60004777 0.00536958
Tests Passed
###Markdown
View DataLet's generate the ETF and index cumulative returns using `calculate_cumulative_returns` and compare the two.
###Code
index_weighted_cumulative_returns = calculate_cumulative_returns(index_weighted_returns)
etf_weighted_cumulative_returns = calculate_cumulative_returns(etf_weighted_returns)
project_helper.plot_benchmark_returns(index_weighted_cumulative_returns, etf_weighted_cumulative_returns, 'Smart Beta ETF vs Index')
###Output
ticker AAL AAPL ABBV ABT AGN \
date
2013-07-01 nan nan nan nan nan
2013-07-02 -0.00008007 0.00308984 0.00004768 -0.00001880 -0.00003423
2013-07-03 0.00009093 0.00074662 0.00000167 -0.00018529 0.00000450
2013-07-05 0.00001716 -0.00091335 0.00003497 0.00008758 0.00005552
2013-07-08 0.00001590 -0.00052257 0.00007514 0.00007710 0.00000238
2013-07-09 0.00009451 0.00177325 -0.00002380 -0.00012663 -0.00001420
2013-07-10 -0.00004895 -0.00034661 0.00003637 0.00000000 -0.00004299
2013-07-11 0.00003254 0.00139306 0.00001830 0.00012582 0.00002055
2013-07-12 0.00003548 -0.00012769 0.00011425 -0.00000507 0.00002572
2013-07-15 0.00003910 0.00016769 -0.00002700 0.00002153 -0.00002676
2013-07-16 0.00004287 0.00043884 -0.00005637 0.00003814 -0.00005587
2013-07-17 0.00018322 0.00001630 0.00001020 0.00001899 -0.00002105
2013-07-18 -0.00001237 0.00019567 0.00001774 -0.00001409 0.00005401
2013-07-19 -0.00003236 -0.00094460 0.00001188 0.00001775 0.00001713
2013-07-22 -0.00002195 0.00020867 0.00003328 -0.00000683 0.00010836
2013-07-23 -0.00002800 -0.00192193 -0.00004340 0.00016479 -0.00002568
2013-07-24 0.00024574 0.00831017 -0.00003301 -0.00002386 0.00000660
2013-07-25 0.00014154 -0.00024686 0.00003490 0.00001459 0.00006395
2013-07-26 0.00011100 0.00034022 0.00004431 0.00002041 0.00027543
2013-07-29 0.00005743 0.00135660 0.00002430 -0.00000491 0.00010452
2013-07-30 -0.00000966 0.00115320 -0.00002247 0.00001608 0.00003152
2013-07-31 0.00006190 -0.00013694 0.00006707 -0.00002952 -0.00002395
2013-08-01 0.00001162 0.00052497 -0.00001327 0.00002668 0.00002905
2013-08-02 -0.00018826 0.00116475 0.00000417 -0.00001413 0.00000264
2013-08-05 0.00007670 0.00194945 -0.00004045 -0.00004688 0.00002053
2013-08-06 -0.00004838 -0.00102528 0.00001632 -0.00006732 -0.00002086
2013-08-07 -0.00001283 -0.00006041 -0.00001986 -0.00002540 -0.00000459
2013-08-08 0.00002875 -0.00017964 0.00002861 -0.00000102 -0.00001625
2013-08-09 -0.00012378 -0.00153742 -0.00002640 -0.00000370 -0.00000959
2013-08-12 0.00007280 0.00428249 0.00002920 0.00000122 -0.00001149
... ... ... ... ... ...
2017-05-19 0.00005474 0.00019665 -0.00000462 0.00001480 0.00001656
2017-05-22 0.00007813 0.00037214 -0.00001317 0.00005779 0.00001197
2017-05-23 0.00002615 -0.00007182 0.00003513 -0.00000442 0.00003628
2017-05-24 0.00000800 -0.00016317 0.00000530 -0.00001265 0.00018447
2017-05-25 0.00011098 0.00016144 0.00002468 0.00002981 0.00000449
2017-05-26 0.00008737 -0.00011302 -0.00002063 0.00013184 -0.00004155
2017-05-30 -0.00007493 0.00002125 -0.00000332 0.00004151 -0.00005204
2017-05-31 0.00002551 -0.00027503 0.00000000 0.00008665 0.00008366
2017-06-01 0.00004851 0.00011609 0.00005242 0.00005490 0.00014993
2017-06-02 0.00005385 0.00093540 0.00004415 0.00004045 0.00001994
2017-06-05 0.00002166 -0.00067882 0.00002245 0.00001666 0.00000055
2017-06-06 0.00000000 0.00023189 0.00003261 -0.00002599 0.00000416
2017-06-07 0.00013965 0.00033360 0.00012593 0.00001491 -0.00000301
2017-06-08 0.00005526 -0.00011721 0.00000087 0.00001946 0.00003022
2017-06-09 -0.00007922 -0.00360991 0.00005373 0.00005045 0.00005877
2017-06-12 -0.00006474 -0.00241639 -0.00000803 -0.00000844 -0.00002953
2017-06-13 -0.00000226 0.00058421 0.00001053 0.00003018 0.00000044
2017-06-14 -0.00000368 -0.00067605 0.00008496 0.00001655 0.00007539
2017-06-15 -0.00001902 -0.00041203 0.00000546 0.00006698 0.00005290
2017-06-16 -0.00001854 -0.00087713 0.00003634 0.00000777 0.00001426
2017-06-19 0.00005462 0.00203336 0.00002731 0.00007594 0.00008386
2017-06-20 -0.00014440 -0.00049461 -0.00001161 -0.00001527 0.00002683
2017-06-21 0.00002178 0.00028432 0.00001113 -0.00002535 0.00024114
2017-06-22 0.00008739 -0.00007431 0.00030094 0.00010002 0.00012159
2017-06-23 -0.00004738 0.00025426 -0.00004088 -0.00001935 -0.00003638
2017-06-26 0.00001016 -0.00019778 0.00000664 -0.00001502 0.00007702
2017-06-27 -0.00001813 -0.00077347 -0.00002330 -0.00001404 -0.00007743
2017-06-28 0.00004971 0.00071601 0.00003349 -0.00002399 0.00001622
2017-06-29 0.00002938 -0.00084636 -0.00002422 0.00001920 -0.00006350
2017-06-30 0.00007130 0.00011825 0.00000210 -0.00000848 -0.00002407
ticker AIG AMAT AMGN AMZN APC \
date
2013-07-01 nan nan nan nan nan
2013-07-02 -0.00006761 0.00000096 -0.00010745 0.00011350 -0.00000098
2013-07-03 -0.00018591 0.00003831 -0.00001947 0.00001696 0.00000920
2013-07-05 0.00027114 0.00005276 0.00011363 0.00011327 0.00006833
2013-07-08 0.00006643 -0.00001704 0.00002731 0.00033438 -0.00000422
2013-07-09 0.00007565 0.00027192 0.00000582 0.00005462 0.00000296
2013-07-10 -0.00001323 0.00043271 0.00033858 0.00003416 0.00001513
2013-07-11 0.00005840 0.00004998 0.00007619 0.00059488 0.00010611
2013-07-12 0.00018158 0.00002959 0.00006108 0.00066634 0.00000488
2013-07-15 -0.00000829 -0.00003757 0.00000932 -0.00005028 -0.00011347
2013-07-16 -0.00007923 0.00001822 -0.00001515 0.00001800 -0.00006631
2013-07-17 0.00012409 0.00001839 0.00001113 0.00008157 0.00002489
2013-07-18 0.00003218 -0.00002399 0.00001792 -0.00026216 0.00009436
2013-07-19 -0.00001105 0.00000000 0.00037838 0.00005404 0.00014031
2013-07-22 0.00010834 -0.00003882 -0.00002529 -0.00009950 -0.00012367
2013-07-23 -0.00017234 -0.00001674 -0.00011588 -0.00010700 0.00000558
2013-07-24 -0.00010189 -0.00002347 -0.00000063 -0.00008054 -0.00009635
2013-07-25 0.00006359 0.00000301 0.00014799 0.00040546 -0.00000164
2013-07-26 0.00000504 0.00000000 0.00000422 0.00163337 -0.00005205
2013-07-29 -0.00004316 -0.00001438 -0.00001011 -0.00045459 0.00001062
2013-07-30 -0.00006316 0.00005012 0.00013536 -0.00022839 0.00001775
2013-07-31 -0.00007708 0.00000698 -0.00018534 -0.00003715 -0.00001566
2013-08-01 0.00058574 0.00003076 0.00004445 0.00024588 0.00013789
2013-08-02 0.00109424 -0.00003654 -0.00001355 -0.00007453 0.00000360
2013-08-05 0.00009258 -0.00004346 -0.00007067 -0.00019116 0.00002219
2013-08-06 -0.00014694 0.00001122 -0.00015876 -0.00001005 -0.00010496
2013-08-07 -0.00002208 -0.00006551 0.00156329 -0.00017163 0.00000667
2013-08-08 0.00013463 -0.00010214 -0.00019815 -0.00006438 0.00008638
2013-08-09 -0.00011690 0.00000539 -0.00005111 0.00008128 -0.00003579
2013-08-12 -0.00004680 -0.00000431 -0.00006271 -0.00002320 -0.00003739
... ... ... ... ... ...
2017-05-19 -0.00002541 0.00005588 -0.00005110 0.00007624 0.00010026
2017-05-22 0.00002861 0.00014452 -0.00040322 0.00050899 0.00000410
2017-05-23 0.00011973 0.00004589 0.00007104 0.00004059 0.00001827
2017-05-24 0.00009488 -0.00003922 0.00002595 0.00041167 -0.00001396
2017-05-25 0.00008111 0.00005410 0.00003856 0.00102046 -0.00007824
2017-05-26 0.00007491 0.00006427 -0.00002243 0.00016833 -0.00002834
2017-05-30 0.00006131 0.00000701 -0.00005431 0.00005367 -0.00009419
2017-05-31 -0.00004296 0.00005273 0.00006700 -0.00010258 -0.00001645
2017-06-01 0.00002317 0.00001459 0.00005018 0.00005580 0.00003458
2017-06-02 0.00002834 0.00011400 0.00014209 0.00060721 -0.00007611
2017-06-05 -0.00002330 -0.00000974 0.00008731 0.00022777 -0.00003101
2017-06-06 -0.00003538 -0.00001960 -0.00003399 -0.00046926 0.00003521
2017-06-07 -0.00001064 0.00009671 0.00013020 0.00034924 -0.00039690
2017-06-08 0.00003528 0.00001982 0.00005087 0.00000818 -0.00004037
2017-06-09 0.00003557 -0.00050334 0.00005527 -0.00231521 0.00005522
2017-06-12 -0.00000215 -0.00007908 0.00003399 -0.00121956 0.00001870
2017-06-13 0.00002802 0.00009276 -0.00001650 0.00108860 0.00007071
2017-06-14 0.00000091 -0.00013899 0.00002609 -0.00026290 -0.00018373
2017-06-15 -0.00003147 -0.00013815 -0.00005159 -0.00098189 -0.00009044
2017-06-16 -0.00000132 -0.00000473 -0.00008501 0.00245716 0.00004724
2017-06-19 0.00000735 0.00017563 0.00012173 0.00057466 -0.00002115
2017-06-20 -0.00003247 -0.00012538 0.00006939 -0.00016060 -0.00003381
2017-06-21 -0.00001163 0.00003949 0.00043681 0.00044650 -0.00007242
2017-06-22 -0.00000752 -0.00003467 0.00023072 -0.00003452 -0.00002022
2017-06-23 -0.00000277 0.00011762 -0.00020420 0.00007860 0.00002475
2017-06-26 0.00001027 -0.00015182 0.00005930 -0.00055802 -0.00004423
2017-06-27 0.00001734 -0.00025584 -0.00012216 -0.00098457 -0.00014396
2017-06-28 0.00002481 0.00017856 0.00014247 0.00079361 -0.00000109
2017-06-29 -0.00001865 -0.00019138 -0.00004579 -0.00078665 0.00016081
2017-06-30 -0.00011166 -0.00001973 -0.00001441 -0.00040815 0.00000730
ticker ... USB UTX V VLO \
date ...
2013-07-01 ... nan nan nan nan
2013-07-02 ... -0.00000198 -0.00006550 0.00000090 -0.00011051
2013-07-03 ... 0.00000723 0.00005975 0.00008169 0.00004320
2013-07-05 ... 0.00008510 0.00013294 0.00024008 0.00002624
2013-07-08 ... 0.00005290 0.00002670 -0.00028715 0.00009915
2013-07-09 ... 0.00003358 0.00006602 -0.00010514 -0.00004290
2013-07-10 ... -0.00004241 -0.00001511 -0.00004391 -0.00013046
2013-07-11 ... 0.00001428 0.00012735 0.00022493 0.00016411
2013-07-12 ... 0.00005925 0.00001070 0.00002596 0.00035982
2013-07-15 ... -0.00002355 0.00002432 -0.00003765 -0.00005146
2013-07-16 ... -0.00004200 -0.00001153 -0.00003535 -0.00008420
2013-07-17 ... -0.00018179 0.00004316 0.00001532 0.00004189
2013-07-18 ... 0.00005812 0.00002202 0.00003747 -0.00013378
2013-07-19 ... 0.00002454 0.00005431 -0.00003823 0.00005199
2013-07-22 ... 0.00001368 -0.00002253 0.00005678 0.00007544
2013-07-23 ... 0.00002921 0.00033154 -0.00016387 0.00001131
2013-07-24 ... 0.00000000 -0.00000925 -0.00011694 -0.00008046
2013-07-25 ... -0.00000774 -0.00000984 0.00071744 0.00016073
2013-07-26 ... 0.00000902 0.00000888 -0.00006150 0.00004029
2013-07-29 ... -0.00000655 0.00000505 -0.00004155 -0.00002380
2013-07-30 ... 0.00004268 0.00002718 -0.00003638 -0.00002650
2013-07-31 ... -0.00006007 0.00000000 -0.00480217 0.00002554
2013-08-01 ... 0.00003824 0.00008227 0.00073429 0.00002624
2013-08-02 ... 0.00001218 0.00003818 0.00057084 -0.00013073
2013-08-05 ... -0.00000325 -0.00008039 0.00004028 0.00000401
2013-08-06 ... -0.00001308 -0.00009533 -0.00017867 0.00052501
2013-08-07 ... -0.00004826 0.00004075 -0.00012883 0.00000409
2013-08-08 ... 0.00001550 0.00001017 -0.00003327 0.00007383
2013-08-09 ... -0.00002978 -0.00002948 -0.00006364 0.00001495
2013-08-12 ... -0.00000129 -0.00000433 -0.00000445 -0.00000634
... ... ... ... ... ...
2017-05-19 ... 0.00004057 0.00008091 0.00010251 0.00001545
2017-05-22 ... 0.00002764 0.00001884 0.00013440 -0.00000140
2017-05-23 ... 0.00004986 0.00002642 0.00008077 0.00001544
2017-05-24 ... -0.00001088 0.00000044 0.00012396 -0.00001682
2017-05-25 ... -0.00000119 0.00000730 0.00002666 -0.00003599
2017-05-26 ... -0.00004320 -0.00001671 -0.00003052 -0.00005298
2017-05-30 ... -0.00003473 -0.00002687 0.00000559 -0.00004055
2017-05-31 ... -0.00001778 0.00000051 0.00006346 -0.00003486
2017-06-01 ... 0.00006633 0.00002746 0.00002573 0.00003835
2017-06-02 ... -0.00001847 0.00001152 0.00009556 -0.00000080
2017-06-05 ... 0.00000284 -0.00006393 0.00010469 0.00002052
2017-06-06 ... -0.00004024 -0.00004227 -0.00008927 -0.00000464
2017-06-07 ... 0.00002310 -0.00001034 0.00003491 0.00000073
2017-06-08 ... 0.00006826 0.00000821 0.00000000 0.00006397
2017-06-09 ... 0.00008925 0.00001434 -0.00024705 0.00012589
2017-06-12 ... -0.00000345 -0.00002209 -0.00015874 0.00004231
2017-06-13 ... 0.00000976 0.00000612 0.00020447 0.00003797
2017-06-14 ... -0.00001105 -0.00000133 0.00002333 -0.00016548
2017-06-15 ... -0.00004745 0.00002132 -0.00021722 -0.00000346
2017-06-16 ... -0.00000927 -0.00001258 0.00000000 0.00006318
2017-06-19 ... 0.00002111 0.00007225 0.00005624 0.00004897
2017-06-20 ... -0.00001724 -0.00001361 -0.00003493 -0.00005084
2017-06-21 ... -0.00002320 0.00001008 0.00001402 -0.00008065
2017-06-22 ... -0.00007039 0.00002049 -0.00005614 0.00000849
2017-06-23 ... -0.00004944 0.00000210 0.00021394 0.00006257
2017-06-26 ... 0.00002305 -0.00000957 -0.00004234 0.00000334
2017-06-27 ... 0.00003219 -0.00000716 -0.00003066 0.00006312
2017-06-28 ... 0.00006761 0.00002495 0.00011536 0.00003320
2017-06-29 ... -0.00002530 -0.00002507 -0.00020165 -0.00001544
2017-06-30 ... 0.00001438 0.00001665 -0.00008733 0.00001635
ticker VZ WBA WFC WMT WYNN \
date
2013-07-01 nan nan nan nan nan
2013-07-02 0.00004461 0.00001712 -0.00004030 0.00001518 -0.00007002
2013-07-03 0.00007473 -0.00005637 0.00000000 0.00000418 -0.00000086
2013-07-05 0.00005706 0.00001661 0.00034469 0.00006630 0.00001579
2013-07-08 0.00005119 0.00016029 0.00035163 0.00020673 -0.00000291
2013-07-09 -0.00002363 0.00023550 -0.00005294 0.00003104 -0.00001000
2013-07-10 -0.00006595 0.00051905 -0.00032223 -0.00002753 0.00001396
2013-07-11 0.00011027 0.00017839 -0.00009911 0.00007061 0.00019166
2013-07-12 -0.00014573 0.00000649 0.00042438 0.00000000 -0.00001858
2013-07-15 -0.00008704 0.00004376 0.00031955 -0.00005856 0.00003086
2013-07-16 0.00005155 -0.00000081 -0.00013984 0.00003180 0.00001040
2013-07-17 0.00006997 0.00004907 0.00022632 -0.00001749 -0.00001570
2013-07-18 -0.00014968 0.00017875 0.00035464 0.00000969 0.00002542
2013-07-19 -0.00000320 0.00000588 0.00001063 0.00007239 -0.00000013
2013-07-22 0.00005163 0.00007456 0.00006554 -0.00001735 0.00000824
2013-07-23 0.00001091 -0.00001946 -0.00001558 0.00007051 -0.00004541
2013-07-24 0.00000260 0.00000368 -0.00007704 -0.00002638 -0.00000829
2013-07-25 0.00002869 -0.00000483 -0.00027567 -0.00001890 -0.00002013
2013-07-26 0.00003719 -0.00001476 -0.00004691 -0.00000098 0.00001720
2013-07-29 0.00007859 -0.00001491 -0.00006795 -0.00000074 0.00000885
2013-07-30 -0.00020003 -0.00005857 0.00000331 -0.00000816 0.00003943
2013-07-31 -0.00017728 0.00001974 0.00008852 0.00000440 0.00000080
2013-08-01 0.00007266 0.00009319 0.00024957 0.00002391 0.00011028
2013-08-02 0.00002935 -0.00001051 0.00005964 0.00006311 0.00006655
2013-08-05 -0.00000391 -0.00000131 -0.00004940 0.00000184 0.00002494
2013-08-06 -0.00001140 -0.00007790 -0.00008356 -0.00011868 -0.00003496
2013-08-07 -0.00001588 -0.00005562 -0.00016954 -0.00000354 -0.00000392
2013-08-08 -0.00004760 0.00001426 -0.00001926 -0.00001116 0.00003220
2013-08-09 -0.00005611 -0.00002140 0.00000256 -0.00004839 -0.00002207
2013-08-12 0.00003985 0.00004270 -0.00000788 0.00002499 0.00000450
... ... ... ... ... ...
2017-05-19 0.00006624 -0.00006093 0.00016073 0.00032194 -0.00003056
2017-05-22 0.00000935 0.00007252 -0.00001192 -0.00003352 0.00012109
2017-05-23 0.00000000 -0.00000575 0.00008443 -0.00000857 -0.00003638
2017-05-24 -0.00012447 -0.00004305 -0.00006200 -0.00004664 -0.00000306
2017-05-25 0.00004645 0.00001109 -0.00007998 0.00001540 0.00000698
2017-05-26 0.00000115 0.00002714 -0.00010270 -0.00002167 0.00003510
2017-05-30 0.00025086 -0.00007660 -0.00008379 0.00000189 0.00002438
2017-05-31 0.00012184 0.00004675 -0.00037496 0.00004578 0.00004555
2017-06-01 -0.00002995 0.00007251 0.00033368 0.00016695 0.00034674
2017-06-02 -0.00001834 0.00005324 -0.00007558 -0.00002198 0.00003691
2017-06-05 -0.00001027 0.00001403 -0.00002159 0.00011559 -0.00003517
2017-06-06 0.00001314 -0.00020828 0.00002214 -0.00024961 -0.00002140
2017-06-07 0.00000860 0.00000105 0.00008100 0.00003186 -0.00001176
2017-06-08 -0.00005077 0.00000045 0.00015437 -0.00003462 0.00009371
2017-06-09 0.00005841 -0.00001890 0.00031099 0.00004430 -0.00014829
2017-06-12 0.00008357 0.00003660 0.00006314 -0.00001783 0.00001227
2017-06-13 -0.00015311 -0.00000044 0.00009326 0.00002234 0.00009741
2017-06-14 0.00003322 0.00007191 0.00001986 0.00002871 -0.00001249
2017-06-15 -0.00000786 -0.00002538 -0.00015705 -0.00016229 0.00001030
2017-06-16 -0.00000191 -0.00050837 -0.00000203 -0.00170682 0.00000667
2017-06-19 -0.00000845 0.00016397 0.00007982 0.00006216 0.00011120
2017-06-20 -0.00015178 -0.00011074 -0.00015595 0.00000599 -0.00002623
2017-06-21 -0.00011765 -0.00001332 -0.00009191 0.00009759 0.00000459
2017-06-22 0.00000165 -0.00022076 -0.00011961 -0.00009314 -0.00000749
2017-06-23 -0.00000321 -0.00000319 -0.00000949 -0.00009608 0.00002054
2017-06-26 0.00005953 0.00013276 0.00009435 0.00009521 -0.00002291
2017-06-27 -0.00026401 -0.00002667 0.00005283 0.00004984 -0.00003948
2017-06-28 0.00000000 -0.00001700 0.00037680 0.00005309 0.00000601
2017-06-29 -0.00009357 0.00024460 0.00060731 -0.00005115 -0.00007020
2017-06-30 0.00004897 -0.00000637 -0.00009166 -0.00002593 0.00003017
ticker XOM
date
2013-07-01 nan
2013-07-02 0.00007598
2013-07-03 0.00000987
2013-07-05 0.00020520
2013-07-08 0.00016043
2013-07-09 0.00025170
2013-07-10 -0.00012779
2013-07-11 0.00012221
2013-07-12 0.00002268
2013-07-15 -0.00002395
2013-07-16 0.00001087
2013-07-17 0.00002793
2013-07-18 0.00015718
2013-07-19 0.00015711
2013-07-22 -0.00006191
2013-07-23 0.00005162
2013-07-24 -0.00002785
2013-07-25 -0.00000282
2013-07-26 -0.00003088
2013-07-29 -0.00013758
2013-07-30 -0.00003985
2013-07-31 -0.00001209
2013-08-01 -0.00036961
2013-08-02 -0.00020718
2013-08-05 -0.00010282
2013-08-06 -0.00002684
2013-08-07 -0.00002068
2013-08-08 0.00010157
2013-08-09 -0.00009342
2013-08-12 -0.00019221
... ...
2017-05-19 0.00003209
2017-05-22 0.00005638
2017-05-23 0.00003703
2017-05-24 -0.00004317
2017-05-25 -0.00010228
2017-05-26 -0.00003184
2017-05-30 -0.00006898
2017-05-31 -0.00009817
2017-06-01 0.00003252
2017-06-02 -0.00036609
2017-06-05 0.00012525
2017-06-06 0.00021991
2017-06-07 -0.00006540
2017-06-08 -0.00004813
2017-06-09 0.00019613
2017-06-12 0.00013789
2017-06-13 0.00000517
2017-06-14 -0.00017854
2017-06-15 0.00003268
2017-06-16 0.00042358
2017-06-19 -0.00011019
2017-06-20 -0.00008718
2017-06-21 -0.00014842
2017-06-22 -0.00004625
2017-06-23 0.00006075
2017-06-26 -0.00004662
2017-06-27 -0.00001773
2017-06-28 0.00005766
2017-06-29 -0.00020891
2017-06-30 0.00000618
[1009 rows x 99 columns]
ticker AAL AAPL ABBV ABT AGN \
date
2013-07-01 nan nan nan nan nan
2013-07-02 -0.00000000 0.00000000 0.00000000 -0.00000000 -0.00000000
2013-07-03 0.00000000 0.00000000 0.00000000 -0.00000000 0.00000000
2013-07-05 0.00000000 -0.00000000 0.00000000 0.00000000 0.00000000
2013-07-08 0.00000000 -0.00000000 0.00000000 0.00000000 0.00000000
2013-07-09 0.00000000 0.00000000 -0.00000000 -0.00000000 -0.00000000
2013-07-10 -0.00000000 -0.00000000 0.00000000 0.00000000 -0.00000000
2013-07-11 0.00000000 0.00000000 0.00029527 0.00037252 0.00000000
2013-07-12 0.00000000 -0.00000000 0.00139297 -0.00002376 0.00000000
2013-07-15 0.00000000 0.00000000 -0.00044825 0.00007446 -0.00000000
2013-07-16 0.00000000 0.00000000 -0.00085252 0.00010192 -0.00000000
2013-07-17 0.00000000 0.00000000 0.00015019 0.00005161 -0.00000000
2013-07-18 -0.00000000 0.00000000 0.00030804 -0.00004410 0.00000000
2013-07-19 -0.00000000 -0.00000000 0.00016974 0.00006269 0.00000000
2013-07-22 -0.00000000 0.00000000 0.00037388 -0.00002526 0.00000000
2013-07-23 -0.00000000 -0.00000000 -0.00054287 0.00031823 -0.00000000
2013-07-24 0.00000000 0.00000000 -0.00055930 -0.00008118 0.00000000
2013-07-25 0.00000000 -0.00000000 0.00050856 0.00004617 0.00000000
2013-07-26 0.00000000 0.00000000 0.00045986 0.00007078 0.00000000
2013-07-29 0.00000000 0.00000000 0.00020887 -0.00001319 0.00000000
2013-07-30 -0.00000000 0.00000000 -0.00027683 0.00004621 0.00000000
2013-07-31 0.00000000 -0.00000000 0.00058733 -0.00007986 -0.00000000
2013-08-01 0.00000000 0.00000000 -0.00017007 0.00006748 0.00000000
2013-08-02 -0.00000000 0.00000000 0.00005202 -0.00002876 0.00000000
2013-08-05 0.00000000 0.00000000 -0.00044426 -0.00009430 0.00000000
2013-08-06 -0.00000000 -0.00000000 0.00012336 -0.00010904 -0.00000000
2013-08-07 -0.00000000 -0.00000000 -0.00018538 -0.00004424 -0.00000000
2013-08-08 0.00000000 -0.00031838 0.00023201 -0.00000206 -0.00000000
2013-08-09 -0.00000000 -0.00221611 -0.00015865 -0.00000597 -0.00000000
2013-08-12 0.00000000 0.00437399 0.00014904 0.00000197 -0.00000000
... ... ... ... ... ...
2017-05-19 0.00001165 0.00004883 -0.00000466 0.00001175 0.00000209
2017-05-22 0.00001997 0.00008703 -0.00001305 0.00003705 0.00000203
2017-05-23 0.00000888 -0.00001767 0.00003736 -0.00000321 0.00000592
2017-05-24 0.00000215 -0.00004284 0.00000836 -0.00000963 0.00001809
2017-05-25 0.00002437 0.00004947 0.00003613 0.00002705 0.00000102
2017-05-26 0.00001365 -0.00002416 -0.00002392 0.00004842 -0.00000629
2017-05-30 -0.00001453 0.00000557 -0.00000368 0.00001811 -0.00000514
2017-05-31 0.00000851 -0.00008435 0.00000000 0.00004092 0.00000837
2017-06-01 0.00001198 0.00003913 0.00006348 0.00002686 0.00001843
2017-06-02 0.00000869 0.00021091 0.00004916 0.00002419 0.00000384
2017-06-05 0.00000403 -0.00013914 0.00002348 0.00000719 0.00000010
2017-06-06 0.00000000 0.00004807 0.00003238 -0.00001554 0.00000065
2017-06-07 0.00002039 0.00008468 0.00007866 0.00001201 -0.00000059
2017-06-08 0.00001015 -0.00003477 0.00000088 0.00000957 0.00000605
2017-06-09 -0.00002042 -0.00055125 0.00008029 0.00003397 0.00001455
2017-06-12 -0.00001423 -0.00033961 -0.00001219 -0.00000706 -0.00000774
2017-06-13 -0.00000055 0.00011422 0.00001220 0.00001709 0.00000010
2017-06-14 -0.00000110 -0.00013843 0.00007389 0.00001054 0.00001102
2017-06-15 -0.00000512 -0.00008492 0.00000686 0.00002796 0.00000833
2017-06-16 -0.00000588 -0.00019837 0.00003854 0.00000519 0.00000351
2017-06-19 0.00001609 0.00040536 0.00002468 0.00003914 0.00001044
2017-06-20 -0.00002961 -0.00012873 -0.00001017 -0.00000681 0.00000456
2017-06-21 0.00000751 0.00008394 0.00001018 -0.00001137 0.00002720
2017-06-22 0.00001005 -0.00002329 0.00015578 0.00003995 0.00001254
2017-06-23 -0.00000626 0.00006317 -0.00004457 -0.00001069 -0.00000296
2017-06-26 0.00000278 -0.00004451 0.00000831 -0.00000960 0.00001120
2017-06-27 -0.00000517 -0.00020285 -0.00002906 -0.00001360 -0.00001277
2017-06-28 0.00001394 0.00020671 0.00004421 -0.00001082 0.00000284
2017-06-29 0.00000677 -0.00020859 -0.00003643 0.00001429 -0.00001228
2017-06-30 0.00001271 0.00003348 0.00000250 -0.00000682 -0.00000429
ticker AIG AMAT AMGN AMZN APC \
date
2013-07-01 nan nan nan nan nan
2013-07-02 -0.00000000 0.00000000 -0.00000000 0.00000000 -0.00000000
2013-07-03 -0.00000000 0.00000000 -0.00000000 0.00000000 0.00000000
2013-07-05 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-08 0.00000000 -0.00000000 0.00000000 0.00000000 -0.00000000
2013-07-09 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-10 -0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-11 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-12 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-15 -0.00000000 -0.00000000 0.00000000 -0.00000000 -0.00000000
2013-07-16 -0.00000000 0.00000000 -0.00000000 0.00000000 -0.00000000
2013-07-17 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-18 0.00000000 -0.00000000 0.00000000 -0.00000000 0.00000000
2013-07-19 -0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-22 0.00000000 -0.00000000 -0.00000000 -0.00000000 -0.00000000
2013-07-23 -0.00000000 -0.00000000 -0.00000000 -0.00000000 0.00000000
2013-07-24 -0.00000000 -0.00000000 -0.00000000 -0.00000000 -0.00000000
2013-07-25 0.00000000 0.00000000 0.00000000 0.00000000 -0.00000000
2013-07-26 0.00000000 0.00000000 0.00000000 0.00000000 -0.00000000
2013-07-29 -0.00000000 -0.00000000 -0.00000000 -0.00000000 0.00000000
2013-07-30 -0.00000000 0.00000000 0.00000000 -0.00000000 0.00000000
2013-07-31 -0.00000000 0.00000000 -0.00000000 -0.00000000 -0.00000000
2013-08-01 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-02 0.00000000 -0.00000000 -0.00000000 -0.00000000 0.00000000
2013-08-05 0.00000000 -0.00000000 -0.00000000 -0.00000000 0.00000000
2013-08-06 -0.00000000 0.00000000 -0.00000000 -0.00000000 -0.00000000
2013-08-07 -0.00000000 -0.00000000 0.00000000 -0.00000000 0.00000000
2013-08-08 0.00000000 -0.00000000 -0.00000000 -0.00000000 0.00000000
2013-08-09 -0.00000000 0.00000000 -0.00000000 0.00000000 -0.00000000
2013-08-12 -0.00000000 -0.00000000 -0.00000000 -0.00000000 -0.00000000
... ... ... ... ... ...
2017-05-19 -0.00000732 0.00000442 -0.00008961 0.00000000 0.00004773
2017-05-22 0.00000695 0.00001682 -0.00021768 0.00000000 0.00000191
2017-05-23 0.00003043 0.00000761 0.00006698 0.00000000 0.00000840
2017-05-24 0.00001635 -0.00000758 0.00003358 0.00000000 -0.00000837
2017-05-25 0.00001963 0.00001062 0.00005237 0.00000000 -0.00004885
2017-05-26 0.00001608 0.00001295 -0.00002633 0.00000000 -0.00000898
2017-05-30 0.00002036 0.00000160 -0.00007270 0.00000000 -0.00004185
2017-05-31 -0.00001724 0.00000850 0.00008771 -0.00000000 -0.00000958
2017-06-01 0.00000997 0.00000316 0.00006248 0.00000000 0.00002324
2017-06-02 0.00001250 0.00002102 0.00018066 0.00000000 -0.00004159
2017-06-05 -0.00001499 -0.00000207 0.00006520 0.00000000 -0.00001698
2017-06-06 -0.00001914 -0.00000388 -0.00004177 -0.00000000 0.00001876
2017-06-07 -0.00000556 0.00001789 0.00012936 0.00000000 -0.00011827
2017-06-08 0.00001858 0.00000434 0.00005933 0.00000000 -0.00002315
2017-06-09 0.00001917 -0.00006847 0.00008399 -0.00000000 0.00005855
2017-06-12 -0.00000161 -0.00001079 0.00004841 -0.00000000 0.00001588
2017-06-13 0.00001660 0.00001387 -0.00002641 0.00000000 0.00003918
2017-06-14 0.00000080 -0.00002338 0.00003941 -0.00000000 -0.00008105
2017-06-15 -0.00001767 -0.00002244 -0.00005791 -0.00000000 -0.00005081
2017-06-16 -0.00000081 -0.00000139 -0.00009945 0.00000000 0.00004141
2017-06-19 0.00000647 0.00003294 0.00015875 0.00000000 -0.00001484
2017-06-20 -0.00002258 -0.00002526 0.00007017 -0.00000000 -0.00001802
2017-06-21 -0.00000691 0.00000721 0.00029351 0.00000000 -0.00004165
2017-06-22 -0.00000693 -0.00000772 0.00013904 -0.00000000 -0.00001176
2017-06-23 -0.00000123 0.00002080 -0.00007325 0.00000000 0.00001865
2017-06-26 0.00000654 -0.00002290 0.00006542 -0.00000000 -0.00002614
2017-06-27 0.00000938 -0.00003780 -0.00015662 -0.00000000 -0.00005113
2017-06-28 0.00001747 0.00002295 0.00018173 0.00000000 -0.00000047
2017-06-29 -0.00001372 -0.00003518 -0.00008198 -0.00000000 0.00006835
2017-06-30 -0.00003975 -0.00000377 -0.00002011 -0.00000000 0.00000408
ticker ... USB UTX V VLO \
date ...
2013-07-01 ... nan nan nan nan
2013-07-02 ... -0.00000000 -0.00000000 0.00000000 -0.00000000
2013-07-03 ... 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-05 ... 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-08 ... 0.00000000 0.00000000 -0.00000000 0.00000000
2013-07-09 ... 0.00000000 0.00000000 -0.00000000 -0.00000000
2013-07-10 ... -0.00000000 -0.00000000 -0.00000000 -0.00000000
2013-07-11 ... 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-12 ... 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-15 ... -0.00000000 0.00000000 -0.00000000 -0.00000000
2013-07-16 ... -0.00000000 -0.00000000 -0.00000000 -0.00000000
2013-07-17 ... -0.00000000 0.00000000 0.00000000 0.00000000
2013-07-18 ... 0.00000000 0.00000000 0.00000000 -0.00000000
2013-07-19 ... 0.00000000 0.00000000 -0.00000000 0.00000000
2013-07-22 ... 0.00000000 -0.00000000 0.00000000 0.00000000
2013-07-23 ... 0.00000000 0.00000000 -0.00000000 0.00000000
2013-07-24 ... 0.00000000 -0.00000000 -0.00000000 -0.00000000
2013-07-25 ... -0.00000000 -0.00000000 0.00000000 0.00000000
2013-07-26 ... 0.00000000 0.00000000 -0.00000000 0.00000000
2013-07-29 ... -0.00000000 0.00000000 -0.00000000 -0.00000000
2013-07-30 ... 0.00000000 0.00000000 -0.00000000 -0.00000000
2013-07-31 ... -0.00000000 0.00000000 -0.00000000 0.00000000
2013-08-01 ... 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-02 ... 0.00000000 0.00000000 0.00000000 -0.00000000
2013-08-05 ... -0.00000000 -0.00000000 0.00000000 0.00000000
2013-08-06 ... -0.00000000 -0.00000000 -0.00000000 0.00000000
2013-08-07 ... -0.00000000 0.00000000 -0.00000000 0.00000000
2013-08-08 ... 0.00000000 0.00000000 -0.00000000 0.00000000
2013-08-09 ... -0.00000000 -0.00000000 -0.00000000 0.00000000
2013-08-12 ... -0.00000000 -0.00000000 -0.00000000 -0.00001683
... ... ... ... ... ...
2017-05-19 ... 0.00002042 0.00011384 0.00002671 0.00002764
2017-05-22 ... 0.00001915 0.00003133 0.00002856 -0.00000250
2017-05-23 ... 0.00002909 0.00002558 0.00001875 0.00002334
2017-05-24 ... -0.00000941 0.00000062 0.00003220 -0.00002573
2017-05-25 ... -0.00000111 0.00001056 0.00000771 -0.00005167
2017-05-26 ... -0.00001831 -0.00002479 -0.00001236 -0.00006221
2017-05-30 ... -0.00001893 -0.00003596 0.00000167 -0.00003053
2017-05-31 ... -0.00001176 0.00000062 0.00001704 -0.00005285
2017-06-01 ... 0.00003596 0.00003172 0.00000564 0.00005592
2017-06-02 ... -0.00001276 0.00002044 0.00002485 -0.00000170
2017-06-05 ... 0.00000167 -0.00007040 0.00001315 0.00002554
2017-06-06 ... -0.00002506 -0.00005361 -0.00002488 -0.00000932
2017-06-07 ... 0.00001459 -0.00002133 0.00000989 0.00000085
2017-06-08 ... 0.00003574 0.00001006 0.00000000 0.00007461
2017-06-09 ... 0.00006122 0.00004397 -0.00005028 0.00017389
2017-06-12 ... -0.00000216 -0.00004745 -0.00003539 0.00005340
2017-06-13 ... 0.00000486 0.00001130 0.00005329 0.00004881
2017-06-14 ... -0.00000539 -0.00000313 0.00000796 -0.00014107
2017-06-15 ... -0.00002478 0.00004631 -0.00003797 -0.00000488
2017-06-16 ... -0.00000598 -0.00001990 0.00000000 0.00009034
2017-06-19 ... 0.00001634 0.00008855 0.00002072 0.00006481
2017-06-20 ... -0.00001245 -0.00002095 -0.00001361 -0.00007585
2017-06-21 ... -0.00001847 0.00001482 0.00000533 -0.00007770
2017-06-22 ... -0.00003828 0.00002588 -0.00001962 0.00001138
2017-06-23 ... -0.00002938 0.00000307 0.00005455 0.00009411
2017-06-26 ... 0.00001344 -0.00002148 -0.00001184 0.00000478
2017-06-27 ... 0.00002285 -0.00001723 -0.00001057 0.00006131
2017-06-28 ... 0.00004987 0.00005612 0.00004405 0.00005193
2017-06-29 ... -0.00001175 -0.00005203 -0.00005911 -0.00002961
2017-06-30 ... 0.00000825 0.00002835 -0.00002130 0.00002899
ticker VZ WBA WFC WMT WYNN \
date
2013-07-01 nan nan nan nan nan
2013-07-02 0.00000000 0.00000000 -0.00000000 0.00000000 -0.00000000
2013-07-03 0.00000000 -0.00000000 0.00000000 0.00000000 -0.00000000
2013-07-05 0.00000000 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-08 0.00052532 0.00000000 0.00000000 0.00000000 -0.00000000
2013-07-09 -0.00028726 0.00000000 -0.00000000 0.00000000 -0.00000000
2013-07-10 -0.00056764 0.00000000 -0.00000000 -0.00000000 0.00000000
2013-07-11 0.00081960 0.00000000 -0.00000000 0.00000000 0.00000000
2013-07-12 -0.00096582 0.00000000 0.00000000 0.00000000 -0.00000000
2013-07-15 -0.00053975 0.00000000 0.00000000 -0.00000000 0.00000000
2013-07-16 0.00038728 -0.00000000 -0.00000000 0.00000000 0.00000000
2013-07-17 0.00051668 0.00000000 0.00000000 -0.00000000 -0.00000000
2013-07-18 -0.00073494 0.00000000 0.00000000 0.00000000 0.00000000
2013-07-19 -0.00001938 0.00000000 0.00000000 0.00000000 -0.00000000
2013-07-22 0.00030510 0.00000000 0.00000000 -0.00000000 0.00000000
2013-07-23 0.00008526 -0.00000000 -0.00000000 0.00000000 -0.00000000
2013-07-24 0.00001891 0.00000000 -0.00000000 -0.00000000 -0.00000000
2013-07-25 0.00031194 -0.00000000 -0.00000000 -0.00000000 -0.00000000
2013-07-26 0.00029113 -0.00000000 -0.00000000 -0.00000000 0.00000000
2013-07-29 0.00041097 -0.00000000 -0.00000000 -0.00000000 0.00000000
2013-07-30 -0.00092707 -0.00000000 0.00000000 -0.00000000 0.00000000
2013-07-31 -0.00080792 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-01 0.00046379 0.00000000 0.00000000 0.00000000 0.00000000
2013-08-02 0.00020779 -0.00000000 0.00000000 0.00000000 0.00000000
2013-08-05 -0.00003383 -0.00000000 -0.00000000 0.00000000 0.00000000
2013-08-06 -0.00009983 -0.00000000 -0.00000000 -0.00000000 -0.00000000
2013-08-07 -0.00011038 -0.00000000 -0.00021029 -0.00001215 -0.00000000
2013-08-08 -0.00016870 0.00000000 -0.00001829 -0.00003846 0.00054653
2013-08-09 -0.00015899 -0.00000000 0.00000354 -0.00010873 -0.00048334
2013-08-12 0.00017395 0.00000000 -0.00001051 0.00005554 0.00010964
... ... ... ... ... ...
2017-05-19 0.00005662 -0.00003328 0.00005626 0.00009455 -0.00007392
2017-05-22 0.00000887 0.00002923 -0.00000408 -0.00001665 0.00036954
2017-05-23 0.00000000 -0.00000397 0.00003106 -0.00000455 -0.00010583
2017-05-24 -0.00006493 -0.00002622 -0.00002435 -0.00002582 -0.00001584
2017-05-25 0.00004020 0.00000680 -0.00002529 0.00001219 0.00004650
2017-05-26 0.00000148 0.00001995 -0.00003032 -0.00001367 0.00013570
2017-05-30 0.00012968 -0.00004309 -0.00002057 0.00000152 0.00008823
2017-05-31 0.00006356 0.00003443 -0.00008427 0.00003413 0.00021843
2017-06-01 -0.00001859 0.00004591 0.00007915 0.00009117 0.00049512
2017-06-02 -0.00001004 0.00003590 -0.00002232 -0.00001410 0.00010632
2017-06-05 -0.00001005 0.00001080 -0.00000748 0.00004760 -0.00013544
2017-06-06 0.00001007 -0.00007422 0.00000749 -0.00009813 -0.00006648
2017-06-07 0.00000861 0.00000039 0.00002324 0.00001649 -0.00002847
2017-06-08 -0.00004441 0.00000039 0.00003881 -0.00001644 0.00023807
2017-06-09 0.00007643 -0.00001887 0.00010147 0.00003673 -0.00049658
2017-06-12 0.00006699 0.00003242 0.00002318 -0.00001340 0.00006627
2017-06-13 -0.00010290 -0.00000039 0.00002859 0.00002088 0.00039020
2017-06-14 0.00003292 0.00004222 0.00000631 0.00002822 -0.00006489
2017-06-15 -0.00000711 -0.00001233 -0.00004953 -0.00007306 0.00005345
2017-06-16 -0.00000142 -0.00015932 -0.00000080 -0.00027424 0.00004065
2017-06-19 -0.00000854 0.00005576 0.00002784 0.00002038 0.00038209
2017-06-20 -0.00008978 -0.00006038 -0.00006163 0.00000312 -0.00012086
2017-06-21 -0.00007651 -0.00000733 -0.00003845 0.00005458 0.00003118
2017-06-22 0.00000146 -0.00007144 -0.00003960 -0.00005562 -0.00003865
2017-06-23 -0.00000438 -0.00000125 -0.00000326 -0.00005303 0.00012008
2017-06-26 0.00005260 0.00004971 0.00003674 0.00005194 -0.00010961
2017-06-27 -0.00013190 -0.00001111 0.00002105 0.00003978 -0.00014362
2017-06-28 0.00000000 -0.00000701 0.00009421 0.00003873 0.00002961
2017-06-29 -0.00006357 0.00005293 0.00011424 -0.00004463 -0.00030303
2017-06-30 0.00003731 -0.00000244 -0.00002839 -0.00001938 0.00013269
ticker XOM
date
2013-07-01 nan
2013-07-02 0.00000000
2013-07-03 0.00000000
2013-07-05 0.00000000
2013-07-08 0.00000000
2013-07-09 0.00000000
2013-07-10 -0.00000000
2013-07-11 0.00000000
2013-07-12 0.00000000
2013-07-15 -0.00000000
2013-07-16 0.00000000
2013-07-17 0.00000000
2013-07-18 0.00000000
2013-07-19 0.00000000
2013-07-22 -0.00000000
2013-07-23 0.00000000
2013-07-24 -0.00000000
2013-07-25 -0.00000000
2013-07-26 -0.00000000
2013-07-29 -0.00000000
2013-07-30 -0.00000000
2013-07-31 -0.00000000
2013-08-01 -0.00000000
2013-08-02 -0.00000000
2013-08-05 -0.00000000
2013-08-06 -0.00000000
2013-08-07 -0.00000000
2013-08-08 0.00000000
2013-08-09 -0.00015072
2013-08-12 -0.00029097
... ...
2017-05-19 0.00001898
2017-05-22 0.00003788
2017-05-23 0.00003038
2017-05-24 -0.00003027
2017-05-25 -0.00005654
2017-05-26 -0.00002105
2017-05-30 -0.00004734
2017-05-31 -0.00006343
2017-06-01 0.00002128
2017-06-02 -0.00012738
2017-06-05 0.00006680
2017-06-06 0.00011653
2017-06-07 -0.00003161
2017-06-08 -0.00003067
2017-06-09 0.00016027
2017-06-12 0.00008333
2017-06-13 0.00000309
2017-06-14 -0.00009164
2017-06-15 0.00001974
2017-06-16 0.00012753
2017-06-19 -0.00007457
2017-06-20 -0.00004636
2017-06-21 -0.00009004
2017-06-22 -0.00003766
2017-06-23 0.00005569
2017-06-26 -0.00003862
2017-06-27 -0.00001363
2017-06-28 0.00004410
2017-06-29 -0.00008669
2017-06-30 0.00000317
[1009 rows x 99 columns]
###Markdown
Tracking Error: In order to check the performance of the smart beta portfolio, we can calculate the annualized tracking error against the index. Implement `tracking_error` to return the tracking error between the ETF and benchmark. For reference, we'll be using the following annualized tracking error function: $$ TE = \sqrt{252} * SampleStdev(r_p - r_b) $$ where $ r_p $ is the portfolio/ETF returns and $ r_b $ is the benchmark returns. _Note: When calculating the sample standard deviation, the delta degrees of freedom is 1, which is also the default value._
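As a quick sanity check of the formula, here is a tiny standalone sketch with made-up daily returns (not the project data), using the same `ddof=1` sample standard deviation:

```python
import numpy as np
import pandas as pd

# Hypothetical daily returns for a benchmark and an ETF (values are made up)
dates = pd.date_range('2017-01-02', periods=5, freq='B')
benchmark = pd.Series([0.0010, -0.0020, 0.0005, 0.0030, -0.0010], index=dates)
etf = pd.Series([0.0012, -0.0015, 0.0004, 0.0028, -0.0013], index=dates)

# Annualized tracking error: sqrt(252) * sample std (ddof=1) of the active return
active_return = etf - benchmark
print(np.sqrt(252) * active_return.std(ddof=1))
```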
###Code
def tracking_error(benchmark_returns_by_date, etf_returns_by_date):
"""
Calculate the tracking error.
Parameters
----------
benchmark_returns_by_date : Pandas Series
The benchmark returns for each date
etf_returns_by_date : Pandas Series
The ETF returns for each date
Returns
-------
tracking_error : float
The tracking error
"""
assert benchmark_returns_by_date.index.equals(etf_returns_by_date.index)
#TODO: Implement function
return np.sqrt(252)*(etf_returns_by_date-benchmark_returns_by_date).std()
project_tests.test_tracking_error(tracking_error)
###Output
Tests Passed
###Markdown
View Data: Let's generate the tracking error using `tracking_error`.
###Code
smart_beta_tracking_error = tracking_error(np.sum(index_weighted_returns, 1), np.sum(etf_weighted_returns, 1))
print('Smart Beta Tracking Error: {}'.format(smart_beta_tracking_error))
###Output
Smart Beta Tracking Error: 0.1020761483200753
###Markdown
Part 2: Portfolio Optimization. Now, let's create a second portfolio. We'll still reuse the market cap weighted index, but this will be independent of the dividend-weighted portfolio that we created in part 1. We want to minimize the portfolio variance and also closely track a market cap weighted index. In other words, we're trying to minimize the distance between the weights of our portfolio and the weights of the index. $Minimize \left [ \sigma^2_p + \lambda \sqrt{\sum_{1}^{m}(weight_i - indexWeight_i)^2} \right ]$ where $m$ is the number of stocks in the portfolio, and $\lambda$ is a scaling factor that you can choose. Why are we doing this? One way that investors evaluate a fund is by how well it tracks its index. The fund is still expected to deviate from the index within a certain range in order to improve fund performance. A way for a fund to track the performance of its benchmark is by keeping its asset weights similar to the weights of the index. We'd expect that if the fund holds the same stocks as the benchmark, with the same weights for each stock, the fund would yield about the same returns as the benchmark. By minimizing a linear combination of the portfolio risk and the distance between the portfolio and benchmark weights, we attempt to balance the desire to minimize portfolio variance with the goal of tracking the index. Covariance: Implement `get_covariance_returns` to calculate the covariance of the `returns`. We'll use this to calculate the portfolio variance. If we have $m$ stock series, the covariance matrix is an $m \times m$ matrix containing the covariance between each pair of stocks. We can use [`Numpy.cov`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.cov.html) to get the covariance. By default it expects each row to be a variable (a stock series) and each column an observation at the same period of time; since our `returns` DataFrame has one column per stock, we pass `rowvar=False`. For any `NaN` values, you can replace them with zeros using the [`DataFrame.fillna`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html) function. The covariance matrix is $\mathbf{P} = \begin{bmatrix}\sigma^2_{1} & ... & \sigma_{1,m} \\ ... & ... & ...\\\sigma_{m,1} & ... & \sigma^2_{m} \\\end{bmatrix}$
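As a small illustration of the NumPy call only (toy tickers and made-up numbers, not the project data), `np.cov` with `rowvar=False` treats each column as one stock:

```python
import numpy as np
import pandas as pd

# Toy returns: 4 observations (rows) for 2 hypothetical stocks (columns)
toy_returns = pd.DataFrame({'AAA': [0.010, np.nan, -0.020, 0.005],
                            'BBB': [0.020, 0.010, -0.010, np.nan]})

# Replace NaN with 0, then compute the 2 x 2 covariance matrix
covariance = np.cov(toy_returns.fillna(0), rowvar=False)
print(covariance.shape)  # (2, 2)
print(covariance)
```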
###Code
def get_covariance_returns(returns):
"""
Calculate covariance matrices.
Parameters
----------
returns : DataFrame
Returns for each ticker and date
Returns
-------
returns_covariance : 2 dimensional Ndarray
The covariance of the returns
"""
#TODO: Implement function
    # Replace missing returns with 0, then compute the covariance across tickers (columns)
    returns = returns.fillna(0)
    return np.cov(returns, rowvar=False)
project_tests.test_get_covariance_returns(get_covariance_returns)
###Output
Tests Passed
###Markdown
View Data: Let's look at the covariance generated from `get_covariance_returns`.
###Code
covariance_returns = get_covariance_returns(returns)
covariance_returns = pd.DataFrame(covariance_returns, returns.columns, returns.columns)
covariance_returns_correlation = np.linalg.inv(np.diag(np.sqrt(np.diag(covariance_returns))))
covariance_returns_correlation = pd.DataFrame(
covariance_returns_correlation.dot(covariance_returns).dot(covariance_returns_correlation),
covariance_returns.index,
covariance_returns.columns)
project_helper.plot_covariance_returns_correlation(
covariance_returns_correlation,
'Covariance Returns Correlation Matrix')
###Output
_____no_output_____
###Markdown
Portfolio variance: We can write the portfolio variance as $\sigma^2_p = \mathbf{x^T} \mathbf{P} \mathbf{x}$. Recall that $\mathbf{x^T} \mathbf{P} \mathbf{x}$ is called the quadratic form. We can use the cvxpy function `quad_form(x,P)` to get the quadratic form. Distance from index weights: We want portfolio weights that track the index closely, so we want to minimize the distance between them. Recall from the Pythagorean theorem that you can get the distance between two points in an x,y plane by adding the square of the x and y distances and taking the square root. Extending this to any number of dimensions is called the L2 norm. So: $\sqrt{\sum_{1}^{n}(weight_i - indexWeight_i)^2}$, which can also be written as $\left \| \mathbf{x} - \mathbf{index} \right \|_2$. There's a cvxpy function called [norm()](https://www.cvxpy.org/api_reference/cvxpy.atoms.other_atoms.html#norm): `norm(x, p=2, axis=None)`. The default is already set to find an L2 norm, so you would pass in one argument, which is the difference between your portfolio weights and the index weights. Objective function: We want to minimize both the portfolio variance and the distance of the portfolio weights from the index weights. We also choose a `scale` constant, which is $\lambda$ in the expression $\mathbf{x^T} \mathbf{P} \mathbf{x} + \lambda \left \| \mathbf{x} - \mathbf{index} \right \|_2$. This lets us choose how much priority we give to minimizing the difference from the index, relative to minimizing the variance of the portfolio: if you choose a higher value for `scale` ($\lambda$), you put more emphasis on tracking the index closely and less on reducing the portfolio variance. We can build the objective function using cvxpy with `objective = cvx.Minimize()`. Can you guess what to pass into this function? Constraints: We can also define our constraints in a list. For example, you'd want the weights to sum to one, so $\sum_{i=1}^{n}x_i = 1$. You may also need to go long only, which means no shorting, so no negative weights: $x_i \geq 0$ for all $i$. You could save a variable as `[x >= 0, sum(x) == 1]`, where x was created using `cvx.Variable()`. Optimization: Now that we have our objective function and constraints, we can solve for the values of $\mathbf{x}$. cvxpy has the constructor `Problem(objective, constraints)`, which returns a `Problem` object. The `Problem` object has a function `solve()`, which returns the minimized value of the objective and also updates the vector $\mathbf{x}$. We can check the resulting weights (for example, $x_A$ and $x_B$ in a two-asset case) by using `x.value`.
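Before the project implementation below, here is a minimal two-asset sketch of how these cvxpy pieces fit together; the covariance matrix, index weights, and `scale` value are all made up for illustration:

```python
import cvxpy as cvx
import numpy as np

# Hypothetical 2 x 2 covariance matrix and index weights
P = np.array([[0.10, 0.02],
              [0.02, 0.05]])
index = np.array([0.6, 0.4])
scale = 2.0

x = cvx.Variable(2)
objective = cvx.Minimize(cvx.quad_form(x, P) + scale * cvx.norm(x - index, 2))
constraints = [x >= 0, cvx.sum(x) == 1]   # long only, fully invested
problem = cvx.Problem(objective, constraints)
min_objective = problem.solve()           # minimized objective value

print(min_objective)
print(x.value)                            # optimal weights x_A, x_B
```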
###Code
import cvxpy as cvx
def get_optimal_weights(covariance_returns, index_weights, scale=2.0):
"""
Find the optimal weights.
Parameters
----------
covariance_returns : 2 dimensional Ndarray
The covariance of the returns
index_weights : Pandas Series
Index weights for all tickers at a period in time
scale : int
        The penalty factor for weights that deviate from the index
Returns
-------
x : 1 dimensional Ndarray
The solution for x
"""
assert len(covariance_returns.shape) == 2
assert len(index_weights.shape) == 1
assert covariance_returns.shape[0] == covariance_returns.shape[1] == index_weights.shape[0]
#TODO: Implement function
m = len(index_weights)
x = cvx.Variable(m)
portfolio_var = cvx.quad_form(x, covariance_returns)
distance = cvx.norm(x-index_weights, p=2, axis=None)
objective = cvx.Minimize(portfolio_var + scale*distance)
constraints = [x >= 0, cvx.sum(x) == 1]
    problem = cvx.Problem(objective, constraints)
    problem.solve()
    x_values = x.value
    return x_values
project_tests.test_get_optimal_weights(get_optimal_weights)
###Output
_____no_output_____
###Markdown
Optimized Portfolio: Using the `get_optimal_weights` function, let's generate the optimal ETF weights without rebalancing. We can do this by feeding in the covariance of the entire history of data. We also need to feed in a set of index weights; here we use the index weights from the last date in the data.
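The broadcasting step in the next cell simply repeats one weight vector across every date; a toy version with hypothetical tickers shows the idea:

```python
import numpy as np
import pandas as pd

# One made-up set of weights for 3 tickers, repeated over 4 dates
weights = np.array([0.5, 0.3, 0.2])
dates = pd.date_range('2017-01-02', periods=4, freq='B')
tickers = ['AAA', 'BBB', 'CCC']

weights_by_date = pd.DataFrame(np.tile(weights, (len(dates), 1)), index=dates, columns=tickers)
print(weights_by_date)
```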
###Code
raw_optimal_single_rebalance_etf_weights = get_optimal_weights(covariance_returns.values, index_weights.iloc[-1])
optimal_single_rebalance_etf_weights = pd.DataFrame(
np.tile(raw_optimal_single_rebalance_etf_weights, (len(returns.index), 1)),
returns.index,
returns.columns)
###Output
_____no_output_____
###Markdown
With our ETF weights built, let's compare the optimized ETF to the index. Run the next cell to calculate the ETF returns and compare them to the index returns.
###Code
optim_etf_returns = generate_weighted_returns(returns, optimal_single_rebalance_etf_weights)
optim_etf_cumulative_returns = calculate_cumulative_returns(optim_etf_returns)
project_helper.plot_benchmark_returns(index_weighted_cumulative_returns, optim_etf_cumulative_returns, 'Optimized ETF vs Index')
optim_etf_tracking_error = tracking_error(np.sum(index_weighted_returns, 1), np.sum(optim_etf_returns, 1))
print('Optimized ETF Tracking Error: {}'.format(optim_etf_tracking_error))
###Output
ticker AAL AAPL ABBV ABT AGN \
date
2013-07-01 nan nan nan nan nan
2013-07-02 -0.00011130 0.00113194 0.00007320 -0.00001672 -0.00008104
2013-07-03 0.00009843 0.00027582 0.00000241 -0.00006029 0.00000816
2013-07-05 0.00002715 -0.00040137 0.00005890 0.00006740 0.00012121
2013-07-08 0.00003001 -0.00028371 0.00011407 0.00004834 0.00000600
2013-07-09 0.00012530 0.00087887 -0.00004184 -0.00006519 -0.00003894
2013-07-10 -0.00005531 -0.00019167 0.00006914 0.00000000 -0.00008238
2013-07-11 0.00005887 0.00077888 0.00003121 0.00007636 0.00004733
2013-07-12 0.00005528 -0.00009098 0.00014726 -0.00000487 0.00005961
2013-07-15 0.00005756 0.00010896 -0.00004845 0.00001561 -0.00005004
2013-07-16 0.00005407 0.00032207 -0.00009215 0.00002136 -0.00011804
2013-07-17 0.00014078 0.00001336 0.00001738 0.00001158 -0.00003907
2013-07-18 -0.00000822 0.00016815 0.00004157 -0.00001154 0.00011741
2013-07-19 -0.00005213 -0.00078792 0.00002291 0.00001641 0.00005433
2013-07-22 -0.00002218 0.00015992 0.00005131 -0.00000672 0.00012637
2013-07-23 -0.00002784 -0.00085800 -0.00007450 0.00008469 -0.00007502
2013-07-24 0.00012600 0.00256650 -0.00007676 -0.00002160 0.00001698
2013-07-25 0.00009015 -0.00022800 0.00006979 0.00001229 0.00010462
2013-07-26 0.00006978 0.00028375 0.00006311 0.00001884 0.00018916
2013-07-29 0.00003706 0.00077052 0.00003060 -0.00000375 0.00008422
2013-07-30 -0.00000788 0.00061710 -0.00004056 0.00001313 0.00004226
2013-07-31 0.00003948 -0.00008708 0.00008858 -0.00002335 -0.00005177
2013-08-01 0.00000784 0.00045781 -0.00002567 0.00001975 0.00007101
2013-08-02 -0.00011996 0.00064163 0.00000785 -0.00000842 0.00000512
2013-08-05 0.00006678 0.00074650 -0.00006833 -0.00002812 0.00004461
2013-08-06 -0.00006591 -0.00044706 0.00001930 -0.00003308 -0.00005675
2013-08-07 -0.00001336 -0.00002900 -0.00003506 -0.00001622 -0.00000885
2013-08-08 0.00002946 -0.00009887 0.00005581 -0.00000096 -0.00004709
2013-08-09 -0.00010118 -0.00071104 -0.00003943 -0.00000288 -0.00002584
2013-08-12 0.00005978 0.00141952 0.00003747 0.00000096 -0.00002972
... ... ... ... ... ...
2017-05-19 0.00006452 0.00017034 -0.00000387 0.00001443 0.00001155
2017-05-22 0.00011064 0.00030361 -0.00001084 0.00004550 0.00001124
2017-05-23 0.00004921 -0.00006165 0.00003103 -0.00000394 0.00003279
2017-05-24 0.00001191 -0.00014945 0.00000694 -0.00001183 0.00010016
2017-05-25 0.00013507 0.00017271 0.00003003 0.00003324 0.00000563
2017-05-26 0.00007578 -0.00008443 -0.00001990 0.00005958 -0.00003490
2017-05-30 -0.00008088 0.00001952 -0.00000307 0.00002235 -0.00002858
2017-05-31 0.00004742 -0.00029591 0.00000000 0.00005053 0.00004662
2017-06-01 0.00006681 0.00013739 0.00005305 0.00003320 0.00010272
2017-06-02 0.00004843 0.00074050 0.00004109 0.00002989 0.00002138
2017-06-05 0.00002245 -0.00048860 0.00001963 0.00000889 0.00000055
2017-06-06 0.00000000 0.00016880 0.00002707 -0.00001921 0.00000360
2017-06-07 0.00011380 0.00029765 0.00006582 0.00001486 -0.00000332
2017-06-08 0.00005664 -0.00012221 0.00000074 0.00001184 0.00003376
2017-06-09 -0.00011399 -0.00193764 0.00006718 0.00004203 0.00008120
2017-06-12 -0.00007942 -0.00119405 -0.00001020 -0.00000874 -0.00004321
2017-06-13 -0.00000306 0.00040203 0.00001022 0.00002118 0.00000055
2017-06-14 -0.00000613 -0.00048745 0.00006193 0.00001307 0.00006156
2017-06-15 -0.00002865 -0.00029948 0.00000576 0.00003471 0.00004660
2017-06-16 -0.00003293 -0.00069955 0.00003236 0.00000644 0.00001964
2017-06-19 0.00009012 0.00142950 0.00002072 0.00004860 0.00005845
2017-06-20 -0.00016588 -0.00045414 -0.00000854 -0.00000846 0.00002551
2017-06-21 0.00004209 0.00029635 0.00000855 -0.00001413 0.00015242
2017-06-22 0.00005635 -0.00008221 0.00013093 0.00004965 0.00007028
2017-06-23 -0.00003509 0.00022303 -0.00003746 -0.00001329 -0.00001661
2017-06-26 0.00001559 -0.00015714 0.00000699 -0.00001193 0.00006277
2017-06-27 -0.00002901 -0.00071619 -0.00002443 -0.00001691 -0.00007154
2017-06-28 0.00007815 0.00073009 0.00003717 -0.00001345 0.00001591
2017-06-29 0.00003797 -0.00073670 -0.00003063 0.00001777 -0.00006885
2017-06-30 0.00007130 0.00011825 0.00000210 -0.00000848 -0.00002407
ticker AIG AMAT AMGN AMZN APC \
date
2013-07-01 nan nan nan nan nan
2013-07-02 -0.00003209 0.00000212 -0.00010131 0.00029023 -0.00000085
2013-07-03 -0.00010798 0.00007639 -0.00001798 0.00005311 0.00000677
2013-07-05 0.00015870 0.00009224 0.00013192 0.00032717 0.00006125
2013-07-08 0.00005283 -0.00003306 0.00003820 0.00082756 -0.00000332
2013-07-09 0.00004132 0.00021601 0.00001055 0.00016248 0.00000250
2013-07-10 -0.00000948 0.00025703 0.00026200 0.00013784 0.00001081
2013-07-11 0.00003165 0.00007331 0.00008121 0.00125948 0.00008246
2013-07-12 0.00012919 0.00006484 0.00006756 0.00132254 0.00000365
2013-07-15 -0.00000619 -0.00007172 0.00001192 -0.00016006 -0.00006437
2013-07-16 -0.00007126 0.00004009 -0.00002645 0.00004915 -0.00003585
2013-07-17 0.00009231 0.00004364 0.00001726 0.00029790 0.00002663
2013-07-18 0.00002472 -0.00004899 0.00003046 -0.00074525 0.00005701
2013-07-19 -0.00001694 0.00000000 0.00030457 0.00018499 0.00010537
2013-07-22 0.00008797 -0.00007975 -0.00002778 -0.00028799 -0.00007119
2013-07-23 -0.00012960 -0.00003462 -0.00014771 -0.00040054 0.00000403
2013-07-24 -0.00009936 -0.00006189 -0.00000130 -0.00035371 -0.00006526
2013-07-25 0.00008500 0.00001172 0.00018855 0.00074940 -0.00000164
2013-07-26 0.00000622 0.00000000 0.00000568 0.00142544 -0.00004021
2013-07-29 -0.00004197 -0.00004289 -0.00001449 -0.00095144 0.00000581
2013-07-30 -0.00005942 0.00009815 0.00011557 -0.00060551 0.00000621
2013-07-31 -0.00005991 0.00001739 -0.00018075 -0.00019766 -0.00001158
2013-08-01 0.00024800 0.00006746 0.00007016 0.00072538 0.00007799
2013-08-02 0.00019367 -0.00006484 -0.00002084 -0.00022356 0.00000244
2013-08-05 0.00003593 -0.00008478 -0.00010323 -0.00053167 0.00001583
2013-08-06 -0.00007597 0.00002344 -0.00014209 -0.00004005 -0.00006992
2013-08-07 -0.00001355 -0.00010508 0.00047131 -0.00064134 0.00000453
2013-08-08 0.00009502 -0.00009499 -0.00012413 -0.00019793 0.00005144
2013-08-09 -0.00008485 0.00000804 -0.00004693 0.00025816 -0.00002760
2013-08-12 -0.00004218 -0.00000803 -0.00008946 -0.00009632 -0.00002699
... ... ... ... ... ...
2017-05-19 -0.00002239 0.00002435 -0.00006340 0.00007075 0.00008602
2017-05-22 0.00002128 0.00009274 -0.00015402 0.00056675 0.00000344
2017-05-23 0.00009312 0.00003937 0.00004739 0.00004502 0.00001514
2017-05-24 0.00005004 -0.00003921 0.00002376 0.00045549 -0.00001508
2017-05-25 0.00006010 0.00005496 0.00003708 0.00066761 -0.00008810
2017-05-26 0.00004929 0.00006705 -0.00001866 0.00012135 -0.00001622
2017-05-30 0.00006261 0.00000829 -0.00005169 0.00004641 -0.00007579
2017-05-31 -0.00005305 0.00004417 0.00006240 -0.00010482 -0.00001736
2017-06-01 0.00003070 0.00001645 0.00004449 0.00006717 0.00004215
2017-06-02 0.00003850 0.00010938 0.00012864 0.00054368 -0.00007544
2017-06-05 -0.00004617 -0.00001075 0.00004644 0.00023001 -0.00003081
2017-06-06 -0.00005894 -0.00002019 -0.00002975 -0.00041422 0.00003403
2017-06-07 -0.00001714 0.00009318 0.00009222 0.00035406 -0.00021475
2017-06-08 0.00005727 0.00002262 0.00004230 0.00000995 -0.00004204
2017-06-09 0.00005910 -0.00035669 0.00005988 -0.00158902 0.00010631
2017-06-12 -0.00000451 -0.00005623 0.00003452 -0.00068800 0.00002832
2017-06-13 0.00004648 0.00007234 -0.00001885 0.00082666 0.00006994
2017-06-14 0.00000225 -0.00012200 0.00002814 -0.00022124 -0.00014473
2017-06-15 -0.00004955 -0.00011726 -0.00004142 -0.00063271 -0.00009087
2017-06-16 -0.00000227 -0.00000729 -0.00007113 0.00122635 0.00007406
2017-06-19 0.00001815 0.00017215 0.00011354 0.00037938 -0.00002654
2017-06-20 -0.00006335 -0.00013206 0.00005020 -0.00013022 -0.00003224
2017-06-21 -0.00001940 0.00003771 0.00021017 0.00048783 -0.00007457
2017-06-22 -0.00001945 -0.00004037 0.00009956 -0.00004661 -0.00002105
2017-06-23 -0.00000344 0.00010883 -0.00005245 0.00012240 0.00003339
2017-06-26 0.00001837 -0.00011982 0.00004685 -0.00048842 -0.00004681
2017-06-27 0.00002634 -0.00019776 -0.00011215 -0.00086918 -0.00009156
2017-06-28 0.00004906 0.00012011 0.00013017 0.00069679 -0.00000084
2017-06-29 -0.00003853 -0.00018415 -0.00005872 -0.00073037 0.00012243
2017-06-30 -0.00011166 -0.00001973 -0.00001441 -0.00040815 0.00000730
ticker ... USB UTX V VLO \
date ...
2013-07-01 ... nan nan nan nan
2013-07-02 ... -0.00000146 -0.00004368 0.00000139 -0.00006145
2013-07-03 ... 0.00000733 0.00004365 0.00012520 0.00001326
2013-07-05 ... 0.00005999 0.00008366 0.00025832 0.00001408
2013-07-08 ... 0.00004196 0.00002053 -0.00018166 0.00005955
2013-07-09 ... 0.00003445 0.00004722 -0.00006165 -0.00002146
2013-07-10 ... -0.00003138 -0.00001033 -0.00003441 -0.00006312
2013-07-11 ... 0.00001148 0.00008554 0.00022497 0.00008747
2013-07-12 ... 0.00007301 0.00000618 0.00005155 0.00011328
2013-07-15 ... -0.00002260 0.00001808 -0.00003175 -0.00003968
2013-07-16 ... -0.00003262 -0.00001098 -0.00005892 -0.00005027
2013-07-17 ... -0.00007563 0.00003831 0.00001769 0.00002727
2013-07-18 ... 0.00004777 0.00002095 0.00009308 -0.00007431
2013-07-19 ... 0.00002726 0.00004953 -0.00007352 0.00004244
2013-07-22 ... 0.00001285 -0.00001590 0.00008819 0.00004782
2013-07-23 ... 0.00002706 0.00012979 -0.00017788 0.00000504
2013-07-24 ... 0.00000000 -0.00000544 -0.00012503 -0.00006125
2013-07-25 ... -0.00001133 -0.00000839 0.00054227 0.00009167
2013-07-26 ... 0.00001136 0.00000756 -0.00009335 0.00003075
2013-07-29 ... -0.00000708 0.00000419 -0.00008203 -0.00002056
2013-07-30 ... 0.00002979 0.00002095 -0.00003557 -0.00002153
2013-07-31 ... -0.00005361 0.00000000 -0.00097116 0.00002002
2013-08-01 ... 0.00004845 0.00006715 0.00015722 0.00002403
2013-08-02 ... 0.00001553 0.00002424 0.00034733 -0.00007973
2013-08-05 ... -0.00000422 -0.00004617 0.00003781 0.00000338
2013-08-06 ... -0.00001832 -0.00005945 -0.00011520 0.00011813
2013-08-07 ... -0.00005657 0.00003474 -0.00010567 0.00000162
2013-08-08 ... 0.00002001 0.00000955 -0.00004333 0.00003893
2013-08-09 ... -0.00002563 -0.00002652 -0.00006913 0.00000800
2013-08-12 ... -0.00000143 -0.00000333 -0.00000358 -0.00000439
... ... ... ... ... ...
2017-05-19 ... 0.00003774 0.00006602 0.00010818 0.00001539
2017-05-22 ... 0.00003539 0.00001817 0.00011563 -0.00000139
2017-05-23 ... 0.00005377 0.00001484 0.00007594 0.00001300
2017-05-24 ... -0.00001740 0.00000036 0.00013041 -0.00001433
2017-05-25 ... -0.00000205 0.00000613 0.00003126 -0.00002880
2017-05-26 ... -0.00003390 -0.00001441 -0.00005016 -0.00003471
2017-05-30 ... -0.00003515 -0.00002096 0.00000680 -0.00001709
2017-05-31 ... -0.00002186 0.00000036 0.00006937 -0.00002960
2017-06-01 ... 0.00006689 0.00001851 0.00002300 0.00003134
2017-06-02 ... -0.00002374 0.00001193 0.00010129 -0.00000095
2017-06-05 ... 0.00000311 -0.00004110 0.00005360 0.00001432
2017-06-06 ... -0.00004663 -0.00003130 -0.00010142 -0.00000523
2017-06-07 ... 0.00002718 -0.00001246 0.00004035 0.00000048
2017-06-08 ... 0.00006656 0.00000588 0.00000000 0.00004187
2017-06-09 ... 0.00011401 0.00002570 -0.00020515 0.00009758
2017-06-12 ... -0.00000402 -0.00002774 -0.00014443 0.00002998
2017-06-13 ... 0.00000906 0.00000661 0.00021772 0.00002743
2017-06-14 ... -0.00001005 -0.00000183 0.00003252 -0.00007930
2017-06-15 ... -0.00004630 0.00002715 -0.00015544 -0.00000275
2017-06-16 ... -0.00001117 -0.00001167 0.00000000 0.00005086
2017-06-19 ... 0.00003053 0.00005191 0.00008483 0.00003649
2017-06-20 ... -0.00002327 -0.00001229 -0.00005573 -0.00004272
2017-06-21 ... -0.00003455 0.00000870 0.00002184 -0.00004380
2017-06-22 ... -0.00007160 0.00001519 -0.00008041 0.00000642
2017-06-23 ... -0.00005495 0.00000180 0.00022354 0.00005305
2017-06-26 ... 0.00002514 -0.00001261 -0.00004853 0.00000270
2017-06-27 ... 0.00004275 -0.00001012 -0.00004330 0.00003456
2017-06-28 ... 0.00008689 0.00003295 0.00018057 0.00002928
2017-06-29 ... -0.00002046 -0.00003055 -0.00024234 -0.00001669
2017-06-30 ... 0.00001438 0.00001665 -0.00008733 0.00001635
ticker VZ WBA WFC WMT WYNN \
date
2013-07-01 nan nan nan nan nan
2013-07-02 0.00004664 0.00002251 -0.00004678 0.00001267 -0.00005227
2013-07-03 0.00006530 -0.00006361 0.00000000 0.00000527 -0.00000093
2013-07-05 0.00004946 0.00002451 0.00028496 0.00004741 0.00002195
2013-07-08 0.00006529 0.00020302 0.00024964 0.00015708 -0.00000232
2013-07-09 -0.00003570 0.00020552 -0.00004194 0.00003285 -0.00000905
2013-07-10 -0.00007170 0.00025071 -0.00020388 -0.00002658 0.00001582
2013-07-11 0.00011533 0.00014776 -0.00005912 0.00008823 0.00010089
2013-07-12 -0.00013591 0.00001025 0.00024411 0.00000000 -0.00001902
2013-07-15 -0.00007766 0.00008700 0.00023339 -0.00006087 0.00002725
2013-07-16 0.00005572 -0.00000169 -0.00010519 0.00003476 0.00001071
2013-07-17 0.00007959 0.00007937 0.00015740 -0.00001730 -0.00002134
2013-07-18 -0.00013202 0.00013549 0.00028584 0.00001428 0.00003561
2013-07-19 -0.00000348 0.00001152 0.00001245 0.00007536 -0.00000022
2013-07-22 0.00005573 0.00010683 0.00005596 -0.00002118 0.00000642
2013-07-23 0.00001558 -0.00004057 -0.00001858 0.00006877 -0.00004107
2013-07-24 0.00000346 0.00001141 -0.00008061 -0.00003208 -0.00000873
2013-07-25 0.00005699 -0.00001140 -0.00020583 -0.00002215 -0.00002179
2013-07-26 0.00005318 -0.00003098 -0.00004432 -0.00000101 0.00001855
2013-07-29 0.00008014 -0.00003274 -0.00008258 -0.00000101 0.00000652
2013-07-30 -0.00018079 -0.00010024 0.00000320 -0.00001010 0.00004038
2013-07-31 -0.00016219 0.00003992 0.00007666 0.00000506 0.00000133
2013-08-01 0.00009319 0.00014567 0.00024143 0.00002829 0.00009534
2013-08-02 0.00004175 -0.00002440 0.00007181 0.00005336 0.00005164
2013-08-05 -0.00000693 -0.00000326 -0.00004659 0.00000200 0.00002211
2013-08-06 -0.00002079 -0.00011753 -0.00009661 -0.00008999 -0.00003595
2013-08-07 -0.00002779 -0.00009106 -0.00014437 -0.00000303 -0.00000360
2013-08-08 -0.00005401 0.00003348 -0.00001597 -0.00001222 0.00003051
2013-08-09 -0.00005260 -0.00004668 0.00000320 -0.00003568 -0.00002788
2013-08-12 0.00005821 0.00009725 -0.00000959 0.00001843 0.00000640
... ... ... ... ... ...
2017-05-19 0.00007340 -0.00008579 0.00017939 0.00012493 -0.00001660
2017-05-22 0.00001149 0.00007533 -0.00001302 -0.00002200 0.00008298
2017-05-23 0.00000000 -0.00001023 0.00009906 -0.00000602 -0.00002377
2017-05-24 -0.00008417 -0.00006758 -0.00007765 -0.00003412 -0.00000356
2017-05-25 0.00005215 0.00001755 -0.00008069 0.00001612 0.00001045
2017-05-26 0.00000192 0.00005151 -0.00009687 -0.00001810 0.00003053
2017-05-30 0.00016893 -0.00011159 -0.00006592 0.00000202 0.00001991
2017-05-31 0.00008285 0.00008924 -0.00027023 0.00004535 0.00004932
2017-06-01 -0.00002425 0.00011910 0.00025400 0.00012124 0.00011190
2017-06-02 -0.00001309 0.00009312 -0.00007164 -0.00001875 0.00002403
2017-06-05 -0.00001311 0.00002803 -0.00002400 0.00006331 -0.00003061
2017-06-06 0.00001313 -0.00019254 0.00002405 -0.00013051 -0.00001503
2017-06-07 0.00001124 0.00000102 0.00007468 0.00002195 -0.00000644
2017-06-08 -0.00005800 0.00000102 0.00012468 -0.00002189 0.00005387
2017-06-09 0.00009982 -0.00004901 0.00032601 0.00004889 -0.00011236
2017-06-12 0.00008752 0.00008422 0.00007449 -0.00001785 0.00001500
2017-06-13 -0.00013458 -0.00000102 0.00009197 0.00002783 0.00008841
2017-06-14 0.00004307 0.00010982 0.00002030 0.00003764 -0.00001471
2017-06-15 -0.00000932 -0.00003212 -0.00015965 -0.00009758 0.00001213
2017-06-16 -0.00000187 -0.00041510 -0.00000256 -0.00036629 0.00000923
2017-06-19 -0.00001119 0.00014528 0.00008975 0.00002722 0.00008674
2017-06-20 -0.00011769 -0.00015738 -0.00019872 0.00000417 -0.00002745
2017-06-21 -0.00010037 -0.00001912 -0.00012407 0.00007298 0.00000709
2017-06-22 0.00000192 -0.00018634 -0.00012781 -0.00007438 -0.00000878
2017-06-23 -0.00000575 -0.00000327 -0.00001053 -0.00007092 0.00002729
2017-06-26 0.00006900 0.00012967 0.00011856 0.00006945 -0.00002491
2017-06-27 -0.00017304 -0.00002897 0.00006792 0.00005320 -0.00003264
2017-06-28 0.00000000 -0.00001830 0.00030414 0.00005181 0.00000673
2017-06-29 -0.00008343 0.00013811 0.00036880 -0.00005970 -0.00006889
2017-06-30 0.00004897 -0.00000637 -0.00009166 -0.00002593 0.00003017
ticker XOM
date
2013-07-01 nan
2013-07-02 0.00006262
2013-07-03 0.00000917
2013-07-05 0.00016138
2013-07-08 0.00012350
2013-07-09 0.00019651
2013-07-10 -0.00009265
2013-07-11 0.00008063
2013-07-12 0.00002318
2013-07-15 -0.00002671
2013-07-16 0.00000892
2013-07-17 0.00002852
2013-07-18 0.00016371
2013-07-19 0.00013921
2013-07-22 -0.00005941
2013-07-23 0.00006489
2013-07-24 -0.00003669
2013-07-25 -0.00000350
2013-07-26 -0.00003152
2013-07-29 -0.00013334
2013-07-30 -0.00003891
2013-07-31 -0.00001064
2013-08-01 -0.00018094
2013-08-02 -0.00013989
2013-08-05 -0.00006511
2013-08-06 -0.00002179
2013-08-07 -0.00002364
2013-08-08 0.00008011
2013-08-09 -0.00007792
2013-08-12 -0.00015216
... ...
2017-05-19 0.00003662
2017-05-22 0.00007308
2017-05-23 0.00005861
2017-05-24 -0.00005840
2017-05-25 -0.00010913
2017-05-26 -0.00004069
2017-05-30 -0.00009177
2017-05-31 -0.00012304
2017-06-01 0.00004132
2017-06-02 -0.00024730
2017-06-05 0.00012970
2017-06-06 0.00022626
2017-06-07 -0.00006144
2017-06-08 -0.00005961
2017-06-09 0.00031150
2017-06-12 0.00016200
2017-06-13 0.00000602
2017-06-14 -0.00017842
2017-06-15 0.00003850
2017-06-16 0.00024868
2017-06-19 -0.00014541
2017-06-20 -0.00009043
2017-06-21 -0.00017579
2017-06-22 -0.00007352
2017-06-23 0.00010871
2017-06-26 -0.00007540
2017-06-27 -0.00002661
2017-06-28 0.00008612
2017-06-29 -0.00016931
2017-06-30 0.00000618
[1009 rows x 99 columns]
###Markdown
Rebalance Portfolio Over Time: The single optimized ETF portfolio used the same weights for the entire history. These might not be the optimal weights for the whole period, so let's rebalance the portfolio over the same period instead of using fixed weights. Implement `rebalance_portfolio` to rebalance a portfolio. Rebalance the portfolio every n days, where n is given as `shift_size`. When rebalancing, you should look back a certain number of days of data in the past, denoted as `chunk_size`. Using this data, compute the optimal weights using `get_optimal_weights` and `get_covariance_returns`.
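To make the look-back and step pattern concrete, here is a small standalone sketch; the sizes are made up for illustration (the project run below uses `chunk_size=250` and `shift_size=5`):

```python
# Show which rows of the returns each rebalance would use, for toy sizes
n_days = 20       # hypothetical number of trading days
chunk_size = 8    # look-back window length
shift_size = 4    # days between rebalances

for i in range(chunk_size, n_days, shift_size):
    # each rebalance uses the chunk_size days ending at (but not including) day i
    print('rebalance at day {}: uses days {} to {}'.format(i, i - chunk_size, i - 1))
```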
###Code
def rebalance_portfolio(returns, index_weights, shift_size, chunk_size):
"""
Get weights for each rebalancing of the portfolio.
Parameters
----------
returns : DataFrame
Returns for each ticker and date
index_weights : DataFrame
Index weight for each ticker and date
shift_size : int
The number of days between each rebalance
chunk_size : int
The number of days to look in the past for rebalancing
Returns
-------
all_rebalance_weights : list of Ndarrays
The ETF weights for each point they are rebalanced
"""
assert returns.index.equals(index_weights.index)
assert returns.columns.equals(index_weights.columns)
assert shift_size > 0
assert chunk_size >= 0
#TODO: Implement function
rebalanced_weights = []
for i in range(chunk_size, len(returns), shift_size):
        # Covariance of the chunk_size days of returns ending just before day i
        covariance_returns = get_covariance_returns(returns.iloc[0:i].tail(chunk_size))
        # Index weights on the last day of the look-back window
        index_weights_rebalance = index_weights[(i-chunk_size):i].iloc[-1]
        print(index_weights_rebalance)
        rebalanced_weights.append(get_optimal_weights(covariance_returns, index_weights_rebalance, scale=2.0))
return rebalanced_weights
project_tests.test_rebalance_portfolio(rebalance_portfolio)
###Output
INLL 0.00395679
EDSX 0.12434660
XLTF 0.00335064
Name: 2005-08-05, dtype: float64
INLL 0.00369562
EDSX 0.11447422
XLTF 0.00325973
Name: 2005-08-07, dtype: float64
INLL 0.00366501
EDSX 0.10806014
XLTF 0.00314648
Name: 2005-08-09, dtype: float64
INLL 0.00358844
EDSX 0.10097531
XLTF 0.00319009
Name: 2005-08-11, dtype: float64
Tests Passed
###Markdown
Run the following cell to rebalance the portfolio using `rebalance_portfolio`.
###Code
chunk_size = 250
shift_size = 5
all_rebalance_weights = rebalance_portfolio(returns, index_weights, shift_size, chunk_size)
###Output
250
ticker
AAL 0.01043134
AAPL 0.06006002
ABBV 0.01145332
ABT 0.00460319
AGN 0.00899074
AIG 0.00659741
AMAT 0.00588434
AMGN 0.00543873
AMZN 0.01913355
APC 0.00440472
AVGO 0.00117287
AXP 0.00498903
BA 0.00982399
BAC 0.02170975
BIIB 0.00656725
BMY 0.00762632
C 0.01871608
CAT 0.00315683
CBS 0.01490108
CELG 0.00299942
CHTR 0.00180181
CMCSA 0.01017089
CMG 0.00302964
COP 0.00757446
COST 0.00280221
CRM 0.00379319
CSCO 0.00894089
CVS 0.00609675
CVX 0.01479801
DAL 0.00855455
...
NVDA 0.00246675
ORCL 0.01370560
OXY 0.00461931
PEP 0.00608005
PFE 0.01084456
PG 0.01092437
PM 0.01692651
PXD 0.00773170
QCOM 0.01131740
REGN 0.00577372
SBUX 0.00556129
SLB 0.03248384
T 0.00983902
TGT 0.00390546
TWX 0.00569000
TXN 0.00331695
UAL 0.00602821
UNH 0.00490272
UNP 0.00619657
UPS 0.00309221
USB 0.00749353
UTX 0.00667534
V 0.00947944
VLO 0.01536379
VZ 0.00830452
WBA 0.00620383
WFC 0.01276589
WMT 0.01015302
WYNN 0.00720301
XOM 0.01625009
Name: 2014-06-26, Length: 99, dtype: float64
255
ticker
AAL 0.01064740
AAPL 0.06444048
ABBV 0.00623301
ABT 0.00369916
AGN 0.01256357
AIG 0.00614116
AMAT 0.00379133
AMGN 0.00704277
AMZN 0.02111199
APC 0.00420007
AVGO 0.00180726
AXP 0.00471998
BA 0.00741200
BAC 0.03439230
BIIB 0.00688026
BMY 0.00451301
C 0.02042720
CAT 0.00902236
CBS 0.02925707
CELG 0.01263424
CHTR 0.00449042
CMCSA 0.01257776
CMG 0.00273047
COP 0.00654981
COST 0.00324911
CRM 0.00456564
CSCO 0.01394073
CVS 0.00466741
CVX 0.01260531
DAL 0.01020666
...
NVDA 0.00199507
ORCL 0.00982919
OXY 0.00644561
PEP 0.00571962
PFE 0.01261782
PG 0.01245157
PM 0.00821700
PXD 0.00499030
QCOM 0.01305855
REGN 0.00637413
SBUX 0.00794489
SLB 0.01220479
T 0.01093813
TGT 0.00504452
TWX 0.00511640
TXN 0.00240195
UAL 0.00550536
UNH 0.00549731
UNP 0.00731179
UPS 0.00376938
USB 0.00449552
UTX 0.00590176
V 0.00942898
VLO 0.00621287
VZ 0.01094839
WBA 0.01053826
WFC 0.01330832
WMT 0.00630204
WYNN 0.00418008
XOM 0.01985035
Name: 2014-07-03, Length: 99, dtype: float64
260
ticker
AAL 0.00769333
AAPL 0.06903315
ABBV 0.01497718
ABT 0.00441796
AGN 0.00520380
AIG 0.00469487
AMAT 0.00449417
AMGN 0.00680436
AMZN 0.06581997
APC 0.00663625
AVGO 0.00196144
AXP 0.00428609
BA 0.00840713
BAC 0.01892006
BIIB 0.00644132
BMY 0.00378344
C 0.01579069
CAT 0.00440416
CBS 0.01149355
CELG 0.00755613
CHTR 0.00287407
CMCSA 0.00822048
CMG 0.00307877
COP 0.00835079
COST 0.00406165
CRM 0.00313075
CSCO 0.01074523
CVS 0.00369755
CVX 0.01311041
DAL 0.00936866
...
NVDA 0.00200463
ORCL 0.01037592
OXY 0.00582340
PEP 0.00670118
PFE 0.01093576
PG 0.01229479
PM 0.00795605
PXD 0.00516728
QCOM 0.01373637
REGN 0.00691219
SBUX 0.00357785
SLB 0.01146641
T 0.00778773
TGT 0.00373442
TWX 0.00548836
TXN 0.00431226
UAL 0.00800797
UNH 0.00491558
UNP 0.00438152
UPS 0.00325343
USB 0.00458623
UTX 0.00878963
V 0.00687869
VLO 0.00739226
VZ 0.01717434
WBA 0.00413258
WFC 0.03158623
WMT 0.00719892
WYNN 0.00278967
XOM 0.01554892
Name: 2014-07-11, Length: 99, dtype: float64
265
ticker
AAL 0.00616613
AAPL 0.07389262
ABBV 0.03434482
ABT 0.00630641
AGN 0.00493682
AIG 0.00425496
AMAT 0.00423825
AMGN 0.00628112
AMZN 0.02055992
APC 0.00544426
AVGO 0.00400365
AXP 0.00497137
BA 0.00598047
BAC 0.01843068
BIIB 0.00731252
BMY 0.00556404
C 0.01242999
CAT 0.00396996
CBS 0.02290155
CELG 0.00696112
CHTR 0.00198364
CMCSA 0.00992605
CMG 0.00431053
COP 0.00595205
COST 0.00379617
CRM 0.00323822
CSCO 0.00974220
CVS 0.00393037
CVX 0.00808980
DAL 0.00619498
...
NVDA 0.00423303
ORCL 0.01216470
OXY 0.00467230
PEP 0.00711976
PFE 0.01091110
PG 0.00781241
PM 0.00552556
PXD 0.00297799
QCOM 0.00977095
REGN 0.00404114
SBUX 0.00414588
SLB 0.01546643
T 0.00807482
TGT 0.00417896
TWX 0.02396761
TXN 0.00383966
UAL 0.00389051
UNH 0.00525746
UNP 0.00497237
UPS 0.00301669
USB 0.00507674
UTX 0.00737542
V 0.00996107
VLO 0.00482362
VZ 0.00726415
WBA 0.00789600
WFC 0.01192891
WMT 0.00573319
WYNN 0.00312199
XOM 0.01232416
Name: 2014-07-18, Length: 99, dtype: float64
270
ticker
AAL 0.00849226
AAPL 0.07608856
ABBV 0.01047717
ABT 0.00303387
AGN 0.00557975
AIG 0.00716587
AMAT 0.01241428
AMGN 0.00650348
AMZN 0.11137187
APC 0.00453987
AVGO 0.00463675
AXP 0.00649319
BA 0.01303572
BAC 0.01010698
BIIB 0.00636377
BMY 0.00504845
C 0.00937273
CAT 0.00736031
CBS 0.01407632
CELG 0.00594237
CHTR 0.00160974
CMCSA 0.00973875
CMG 0.00772954
COP 0.00448885
COST 0.00311746
CRM 0.00519212
CSCO 0.01257292
CVS 0.00466246
CVX 0.00809391
DAL 0.00692370
...
NVDA 0.00219131
ORCL 0.00549507
OXY 0.00405443
PEP 0.00459460
PFE 0.00729263
PG 0.00855017
PM 0.00371217
PXD 0.00375321
QCOM 0.01738429
REGN 0.00215466
SBUX 0.01349015
SLB 0.01135209
T 0.01050841
TGT 0.00267288
TWX 0.00876577
TXN 0.00771068
UAL 0.01058512
UNH 0.00514908
UNP 0.00418457
UPS 0.00211308
USB 0.00403948
UTX 0.00829339
V 0.02766652
VLO 0.00420069
VZ 0.00917772
WBA 0.00444963
WFC 0.00890871
WMT 0.00519801
WYNN 0.00299217
XOM 0.01342392
Name: 2014-07-25, Length: 99, dtype: float64
275
ticker
AAL 0.00546108
AAPL 0.06935885
ABBV 0.00851619
ABT 0.00243419
AGN 0.03065271
AIG 0.00725948
AMAT 0.00305383
AMGN 0.00827334
AMZN 0.03652384
APC 0.00844534
AVGO 0.00201432
AXP 0.01600204
BA 0.01010678
BAC 0.02623693
BIIB 0.00641112
BMY 0.00572571
C 0.01618904
CAT 0.00853238
CBS 0.00535202
CELG 0.00706119
CHTR 0.00468062
CMCSA 0.00792091
CMG 0.00368642
COP 0.01094283
COST 0.00351177
CRM 0.00378376
CSCO 0.01003376
CVS 0.00508950
CVX 0.01281855
DAL 0.00568246
...
NVDA 0.00149411
ORCL 0.00913365
OXY 0.00678390
PEP 0.00509282
PFE 0.01261526
PG 0.02044298
PM 0.00434542
PXD 0.01244848
QCOM 0.01540352
REGN 0.00790694
SBUX 0.00449808
SLB 0.00942333
T 0.01113099
TGT 0.00335446
TWX 0.00632876
TXN 0.00491314
UAL 0.00251041
UNH 0.00571037
UNP 0.00602028
UPS 0.00461096
USB 0.00500290
UTX 0.00720390
V 0.01159745
VLO 0.00494688
VZ 0.01221208
WBA 0.00856913
WFC 0.01351840
WMT 0.00869769
WYNN 0.00517396
XOM 0.02046276
Name: 2014-08-01, Length: 99, dtype: float64
280
ticker
AAL 0.01574075
AAPL 0.07837675
ABBV 0.01282181
ABT 0.00357958
AGN 0.00914547
AIG 0.00837615
AMAT 0.00375056
AMGN 0.00537268
AMZN 0.01811484
APC 0.00751148
AVGO 0.00178362
AXP 0.00730501
BA 0.01001043
BAC 0.01653741
BIIB 0.00755466
BMY 0.00414171
C 0.01185610
CAT 0.00904188
CBS 0.01402562
CELG 0.00545510
CHTR 0.00296935
CMCSA 0.01248685
CMG 0.00501264
COP 0.00627990
COST 0.00521061
CRM 0.00289526
CSCO 0.01004972
CVS 0.00698450
CVX 0.01086292
DAL 0.01006421
...
NVDA 0.00862909
ORCL 0.00707027
OXY 0.00514758
PEP 0.00805649
PFE 0.01572322
PG 0.00940762
PM 0.00564639
PXD 0.00506995
QCOM 0.01246863
REGN 0.00448620
SBUX 0.00459628
SLB 0.00862470
T 0.01124698
TGT 0.00509334
TWX 0.01136070
TXN 0.00444046
UAL 0.00616179
UNH 0.00590282
UNP 0.00460345
UPS 0.00621674
USB 0.00480895
UTX 0.00912176
V 0.01000167
VLO 0.00421557
VZ 0.01330754
WBA 0.03230876
WFC 0.01383405
WMT 0.00713908
WYNN 0.00934584
XOM 0.02050639
Name: 2014-08-08, Length: 99, dtype: float64
285
ticker
AAL 0.00686136
AAPL 0.08294770
ABBV 0.00882481
ABT 0.00359991
AGN 0.00969361
AIG 0.00546425
AMAT 0.01011268
AMGN 0.00973772
AMZN 0.02404376
APC 0.00788283
AVGO 0.00273340
AXP 0.00495156
BA 0.00939953
BAC 0.01636478
BIIB 0.01005552
BMY 0.00441354
C 0.01409767
CAT 0.00708096
CBS 0.00618112
CELG 0.00723115
CHTR 0.00362588
CMCSA 0.01008995
CMG 0.00416036
COP 0.00721108
COST 0.00395090
CRM 0.00465874
CSCO 0.01333333
CVS 0.00448345
CVX 0.01150181
DAL 0.00690586
...
NVDA 0.00312922
ORCL 0.00944935
OXY 0.00645445
PEP 0.00581795
PFE 0.01003952
PG 0.00994782
PM 0.00552643
PXD 0.00537233
QCOM 0.01177671
REGN 0.00354162
SBUX 0.00541283
SLB 0.00947247
T 0.01424393
TGT 0.00393878
TWX 0.01007023
TXN 0.00467810
UAL 0.00480697
UNH 0.00345555
UNP 0.00503737
UPS 0.00781096
USB 0.00400534
UTX 0.00554993
V 0.00851531
VLO 0.00545367
VZ 0.01241572
WBA 0.00948248
WFC 0.01110134
WMT 0.00859257
WYNN 0.00346902
XOM 0.01478355
Name: 2014-08-15, Length: 99, dtype: float64
###Markdown
Portfolio Turnover. With the portfolio rebalanced, we need to use a metric to measure the cost of rebalancing the portfolio. Implement `get_portfolio_turnover` to calculate the annual portfolio turnover. We'll be using the formulas used in the classroom: $ AnnualizedTurnover = \frac{SumTotalTurnover}{NumberOfRebalanceEvents} \times NumberOfRebalanceEventsPerYear $ and $ SumTotalTurnover = \sum_{t,n}{\left | x_{t,n} - x_{t+1,n} \right |} $, where $ x_{t,n} $ are the weights at time $ t $ for equity $ n $. $ SumTotalTurnover $ is just a different way of writing $ \sum \left | x_{t_1,n} - x_{t_2,n} \right | $.
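As a quick sanity check of these formulas (using made-up weights, not the project data), the annualized turnover for two hypothetical rebalance events can be computed by hand:
```
import numpy as np

# hypothetical weights for a 2-asset portfolio at three rebalance points
weights = [np.array([0.6, 0.4]), np.array([0.5, 0.5]), np.array([0.7, 0.3])]

# sum of absolute weight changes between consecutive rebalance events
sum_total_turnover = sum(np.abs(w2 - w1).sum() for w1, w2 in zip(weights[:-1], weights[1:]))
rebalance_events = len(weights) - 1        # 2 turnover measurements
events_per_year = 252 / 5                  # e.g. rebalancing every 5 trading days

print(sum_total_turnover / rebalance_events * events_per_year)   # (0.2 + 0.4) / 2 * 50.4 = 15.12
```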
###Code
def get_portfolio_turnover(all_rebalance_weights, shift_size, rebalance_count, n_trading_days_in_year=252):
"""
Calculate portfolio turnover.
Parameters
----------
all_rebalance_weights : list of Ndarrays
The ETF weights for each point they are rebalanced
shift_size : int
The number of days between each rebalance
rebalance_count : int
Number of times the portfolio was rebalanced
n_trading_days_in_year: int
Number of trading days in a year
Returns
-------
portfolio_turnover : float
The portfolio turnover
"""
assert shift_size > 0
assert rebalance_count > 0
#TODO: Implement function
total = 0
for i in range(0, rebalance_count):
temp = np.absolute(all_rebalance_weights[i+1] - all_rebalance_weights[i])
total = total + temp.sum()
portfolio_turnover = total/rebalance_count*n_trading_days_in_year/shift_size
return portfolio_turnover
project_tests.test_get_portfolio_turnover(get_portfolio_turnover)
###Output
Tests Passed
###Markdown
Run the following cell to get the portfolio turnover from `get_portfolio_turnover`.
###Code
print(get_portfolio_turnover(all_rebalance_weights, shift_size, len(all_rebalance_weights) - 1))
###Output
16.726832660502772
|
Obj 0 - pre-processing/Experiment 0.1 - Testing Named Entity Recognition in the spaCy models/.ipynb_checkpoints/Named Entity Corrections-checkpoint.ipynb | ###Markdown
Named Entity Corrections. In this notebook we test the named entity recognition in the spaCy language model. Each sentence in each document is reviewed by displaying its named entities. Any errors are noted and a report is produced. The errors are corrected with a custom pipeline component added to the pipeline.
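For orientation, the correction mechanism built below boils down to spaCy's `EntityRuler`: patterns keyed by the desired label overwrite the statistical model's predictions. A minimal sketch, assuming the spaCy 2.x API used in this notebook and a single made-up correction (not the full correction list used later):
```
import spacy
from spacy.pipeline import EntityRuler

nlp = spacy.load("en_core_web_md")
ruler = EntityRuler(nlp, overwrite_ents=True)
ruler.add_patterns([{"label": "ORG", "pattern": [{"LOWER": "taliban"}]}])  # hypothetical single correction
nlp.add_pipe(ruler, after="ner")

doc = nlp("The Taliban issued a statement.")
print([(ent.text, ent.label_) for ent in doc.ents])
```
Import the files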
###Code
%%time
import datetime
import os
FileList = ['20010114-Remarks at the National Day of Prayer & Remembrance Service.txt',
'20010115-First Radio Address following 911.txt',
'20010117-Address at Islamic Center of Washington, D.C..txt',
'20010120-Address to Joint Session of Congress Following 911 Attacks.txt',
'20010911-Address to the Nation.txt',
'20011007-Operation Enduring Freedom in Afghanistan Address to the Nation.txt',
'20011011-911 Pentagon Remembrance Address.txt',
'20011011-Prime Time News Conference on War on Terror.txt',
'20011026-Address on Signing the USA Patriot Act of 2001.txt',
'20011110-First Address to the United Nations General Assembly.txt',
'20011211-Address to Citadel Cadets.txt',
'20011211-The World Will Always Remember 911.txt',
'20020129-First (Official) Presidential State of the Union Address.txt',
]
raw = ''
filepath = 'C:/Users/Steve/OneDrive - University of Southampton/CulturalViolence/KnowledgeBases/Speeches/'
binladenpath = os.path.join(filepath, 'Osama bin Laden/')
bushpath = os.path.join(filepath, 'George Bush/')
for f in FileList:
with open(bushpath + f, 'r') as text:
raw = raw + text.read()
FileList = ['19960823-OBL Declaration.txt',
'20011007-OBL Full Warning.txt',
'20011109-OBL.txt',
'20021124-OBL Letter to America.txt',
'20041101-Al Jazeera Speech.txt'
]
for f in FileList:
with open(binladenpath + f, 'r') as text:
raw = raw + text.read()
# with open(os.path.join(filepath, "fulltext.txt"), 'w') as text:
# text.write(raw)
print('length of doc: ', len(raw))
print(f'completed at: {datetime.datetime.now().strftime("%b %d %Y %H:%M:%S")}')
###Output
length of doc: 220536
completed at: Apr 15 2020 20:10:52
Wall time: 8.98 ms
###Markdown
Setup spaCy pipeline
###Code
%%time
import spacy
model = 'en_core_web_md'
print('loading: ', model)
nlp = spacy.load(model)
print(f'completed at: {datetime.datetime.now().strftime("%b %d %Y %H:%M:%S")}')
%%time
import os
import json
import datetime
# setup object to store entity corrections, which in turn forms the basis for the custom pipeline component.
named_entity_corrections = {
# inbuilt with spaCy
"PERSON" : ["usama bin muhammad bin ladin"],
"NORP" : ["ahlul-sunnah", "infidel", "kuffar", "kafiroon", "kaferoon", "muslim", "da'ees", "ulama", "afghan", "afghans", "Afghans"],
"FAC" : ["makka", "ka'ba", "capitol", "guadalcanal", "the world trade center", \
"the treaty room of the white house"],
"ORG" : ["bani quraydah", "taliban", "al qaeda", "egyptian islamic jihad", "islamic movement of uzbekistan", \
"republicans", "democrats", "mafia", "crusaders", "mujahideen", "mujahidin", "halliburton", "Jaish-i-Mohammed", \
"ummah", "quraysh", "bani qainuqa'"],
"GPE" : ["NATO", "the arabian peninsula", "the land of the two holy places", "the country of the two holy places", "the land of the two holy mosques" \
"the country of the two holy mosque", "qana", "assam", "erithria", "chechnia", "makka", "makkah", "qunduz", "mazur-e-sharif", "rafah"],
"LOC" : ["dar al-islam", "kabal", "iwo jima", "ground zero", "world", "dunya", "Hindu Kush"],
"PRODUCT" : ["united 93", "global hawk", "flight 93", "predator"],
"EVENT" : ["september 11th"],
"WORK_OF_ART" : ["national anthem", "memorandum", "flag", "the marshall plan", "semper fi", "allahu akbar"],
"LAW" : ["constitution", "anti-ballistic missile treaty", "the treaty of hudaybiyyah", "kyoto agreement", " Human Rights"],
"LANGUAGE" : [],
"DATE" : ["shawwaal", "muharram", "rashidoon"],
"TIME" : [],
"PERCENT" : [],
"MONEY" : ["riyal"],
"QUANTITY" : [],
"ORDINAL" : [],
"CARDINAL" : [],
##user defined
"DIRECTVIOLENCE" : ["gulf war"],
"STRUCTURALVIOLENCE" : ["cold war", "war on terror"],
"RELIGION" : ["islam", "christianity"],
"DEITY" : ["hubal", "god", "Lord", "almighty"],
"RELIGIOUSFIGURE" : ["jesus", "abraham", "jibreel", "ishmael", "isaac", "allah", "imraan", "hud", "aal-imraan", "al-ma'ida", \
"baqarah", "an-nisa", "al-ahzab", "shu'aib", "al'iz ibn abd es-salaam", \
"ibn taymiyyah", "an-noor", "majmoo' al fatawa", "luqman", "al-masjid an-nabawy", \
"abd ur-rahman ibn awf", "abu jahl", "aal imraan", "the messenger of allah", \
"Saheeh Al-Jame", "at-tirmidhi", "at-taubah", "haroon ar-rasheed", "ameer-ul-mu'mineen", \
"assim bin thabit", "moses", "satan"],
"RELIGIOUSLAW" : ["halal", "haram", "shari'a", "mushrik", "fatwa", "fatwas", "shariah", "shari'ah"],
"RELIGIOUSCONFLICT" : ["jihad", "crusade"],
"RELIGIOUS_WORK_OF_ART" : ["koranic", "Quran", "quran", "Koran", "as-sayf", "taghut", "torah", "psalm", "qiblah", "allahu akbar"],
"RELIGIOUS_EVENT" : ["Hegira", "the Day of Judgment"],
"RELIGIOUSENTITY" : [],
"RELIGIOUS_FAC" : ["kaa'ba", "ka'bah"],
}
filepath = r'C:\Users\Steve\OneDrive - University of Southampton\CNDPipeline\dataset'
## create file to store entity corrections
with open(os.path.join(filepath, "named_entity_corrections.json"), "wb") as f:
f.write(json.dumps(named_entity_corrections).encode("utf-8"))
print(f'completed at: {datetime.datetime.now().strftime("%b %d %Y %H:%M:%S")}')
from spacy.pipeline import EntityRuler
from spacy.matcher import PhraseMatcher
from spacy.tokens import Doc
from spacy.tokens import Span
from spacy.pipeline import merge_noun_chunks
import pandas as pd
# create entity ruler for custom pipeline component
entities = EntityRuler(nlp, overwrite_ents=True, phrase_matcher_attr = "LOWER")
for key, value in named_entity_corrections.items():
pattern = {"label" : key, "pattern" : [{"LOWER" : {"IN" : value}}]}, #, "POS" : {"IN": ["PROPN", "NOUN"]}
entities.add_patterns(pattern)
# modify spaCy pipeline with custom component
import json
from spacy.pipeline import merge_entities
from spacy.strings import StringStore
for pipe in nlp.pipe_names:
if pipe not in ['tagger', "parser", "ner"]:
nlp.remove_pipe(pipe)
for key in named_entity_corrections.keys():
nlp.vocab.strings.add(key)
nlp.add_pipe(entities, after = "ner")
# nlp.add_pipe(ent_matcher, before = "ner")
nlp.add_pipe(merge_entities, last = True)
#nlp.add_pipe(merge_noun_chunks, last = True)
print("Pipeline Components")
print(' | '.join(nlp.pipe_names))
print("processing doc")
doc = nlp(raw)
print("doc processed")
print('-----')
print("current corrections")
print('-----')
#print out the corrections
for label, terms in named_entity_corrections.items():
if len(terms) > 0:
patterns = [text.upper() for text in terms]
print(label, patterns)
# patterns = [nlp.make_doc(text) for text in pattern["pattern"]] # -- used for PhraseMatcher
# self.matcher.add(pattern["label"], None, *patterns)
print(f'completed at: {datetime.datetime.now().strftime("%b %d %Y %H:%M:%S")}')
###Output
_____no_output_____
###Markdown
Review Each Sentence to Check for Corrections. Iterate through each sentence to review the named entities. Check the named entity against the Wikipedia entry. Correct as required.
###Code
import wikipediaapi
import pandas as pd
import os
from spacy import displacy  # needed for displacy.render below
def get_wikisummary(token):
wiki_wiki = wikipediaapi.Wikipedia('en')
page_py = wiki_wiki.page(token)
if page_py.exists():
return (page_py.title, " ".join(str(nlp(page_py.summary, disable = ['tokenizer', 'ner']).sents.__next__()).split()))
else:
return ('no wiki reference', 'no wiki reference')
filepath = "C:/Users/Steve/University of Southampton/CulturalViolence/KnowledgeBases/Experiment 2 - Testing Named Entity Recognition in the spaCy models/"
if input("Restart from fresh (y/n): ").lower() == 'n':
filename = input('existing filename: ')
with open(os.path.join(filepath, filename), 'r') as fp:
corrections_dict = json.load(fp)
with open(os.path.join(filepath, "seen_tokens.json"), 'r') as fp:
seen_tokens = {key for key in json.load(fp)}
else:
corrections_dict = dict()
seen_tokens = set()
### !!! The bin laden object here needs to be changed.
for i, doc in enumerate(binladen):
for token in binladen.speeches_nlp[i].text_nlp:
entries_dict = dict()
if token.ent_type_ and \
token.ent_type_ not in ['ORATOR', 'DATE', 'TIME', 'PERCENT', 'MONEY', 'QUANTITY', 'ORDINAL', 'CARDINAL'] and \
token.text not in seen_tokens:
seen_tokens.add(token.text)
with open(os.path.join(filepath, "seen_tokens.json"), "wb") as f:
f.write(json.dumps(dict.fromkeys(seen_tokens)).encode("utf-8"))
wikientry = get_wikisummary(token.text)
entries_dict[token.text] = [token.ent_type_, wikientry[0], wikientry[1]]
entries_dict['sentence'] = ['', '', token.sent]
displacy.render(token.sent, style = 'ent')
pd.set_option('display.max_colwidth', -1)
display(pd.DataFrame.from_dict(entries_dict, orient='index', columns = ['ent_type_', 'wiki_title', 'summary'])
.style.set_properties(**{'text-align': 'left'})
.set_table_styles([dict(selector='th', props=[('text-align', 'left')])]))
if input('correct y/n ').lower() == 'n':
corrections_dict[token.text] = {
'original ent_type_' : token.ent_type_,
'wiki_title': wikientry[0],
'wiki_summary' : wikientry[1],
'correction' : input('correct type')
}
### check wiki entry and correct with manual entry if required
answer = 'n'
while answer == 'n':
display(pd.DataFrame.from_dict(corrections_dict[token.text], orient = "index"))
answer = input('correct wiki entry? (y/n)').lower()
if answer != 'n':
break
corrections_dict[token.text] = {
'original ent_type_' : token.ent_type_,
'wiki_title': input("wiki_title: "),
'wiki_summary' : input("wiki_summary: "),
'correction' : input("correct type: ")
}
with open(os.path.join(filepath, "binladen_entitycorrections.json"), "wb") as f:
f.write(json.dumps(corrections_dict).encode("utf-8"))
print('complete')
###Output
_____no_output_____
###Markdown
Create PDF Report for Each Orator
###Code
import json
import pandas as pd
from jinja2 import Environment, FileSystemLoader
from weasyprint import HTML
filepath = "C:/Users/Steve/OneDrive - University of Southampton/CulturalViolence/KnowledgeBases/Experiment 2 - Testing Named Entity Recognition in the spaCy models/"
with open(os.path.join(filepath, "binladen_entitycorrections.json"), 'r') as fp:
questions = json.load(fp)
env = Environment(loader=FileSystemLoader(searchpath=filepath))
template = env.get_template('myreport.html')
table = pd.DataFrame.from_dict(questions).T
template_vars = {"title" : "bin Laden Entity Corrections",
"islamic_terms": table.to_html()}
html_out = template.render(template_vars)
HTML(string=html_out).write_pdf(os.path.join(filepath, "binladen_entitycorrections.pdf"), stylesheets=[os.path.join(filepath, "style.css")])
pd.set_option('expand_frame_repr', False)
pd.set_option("display.max_columns", 999)
pd.set_option("display.max_rows", 999)
display(pd.DataFrame.from_dict(questions).T
.style.set_properties(**{'text-align': 'left'})
.set_table_styles([dict(selector='th', props=[('text-align', 'left')])]))
print(f'completed at {str(datetime.datetime.now())}') #1220
###Output
_____no_output_____ |
notebooks/project/P2_Heuristical_TicTacToe_Agents.ipynb | ###Markdown
Data Science Foundations Project Part 2: Heuristical Agents **Instructor**: Wesley Beckner **Contact**: [email protected] (we makin' some wack AI today) --- 2.0 Preparing Environment and Importing Data[back to top](top) 2.0.1 Import Packages[back to top](top)
###Code
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
class TicTacToe:
# can preset winner and starting player
def __init__(self, winner='', start_player=''):
self.winner = winner
self.start_player = start_player
self.board = {1: ' ',
2: ' ',
3: ' ',
4: ' ',
5: ' ',
6: ' ',
7: ' ',
8: ' ',
9: ' ',}
self.win_patterns = [[1,2,3], [4,5,6], [7,8,9],
[1,4,7], [2,5,8], [3,6,9],
[1,5,9], [7,5,3]]
# the other functions are now passed self
def visualize_board(self):
print(
"|{}|{}|{}|\n|{}|{}|{}|\n|{}|{}|{}|\n".format(*self.board.values())
)
def check_winning(self):
for pattern in self.win_patterns:
values = [self.board[i] for i in pattern]
if values == ['X', 'X', 'X']:
self.winner = 'X' # we update the winner status
return "'X' Won!"
elif values == ['O', 'O', 'O']:
self.winner = 'O'
return "'O' Won!"
return ''
def check_stalemate(self):
if (' ' not in self.board.values()) and (self.check_winning() == ''):
self.winner = 'Stalemate'
return "It's a stalemate!"
class GameEngine(TicTacToe):
def __init__(self, setup='auto'):
super().__init__()
self.setup = setup
def setup_game(self):
if self.setup == 'user':
players = int(input("How many Players? (type 0, 1, or 2)"))
self.player_meta = {'first': {'label': 'X',
'type': 'human'},
'second': {'label': 'O',
'type': 'human'}}
if players == 1:
first = input("who will go first? (X, (AI), or O (Player))")
if first == 'O':
self.player_meta = {'second': {'label': 'X',
'type': 'ai'},
'first': {'label': 'O',
'type': 'human'}}
else:
self.player_meta = {'first': {'label': 'X',
'type': 'ai'},
'second': {'label': 'O',
'type': 'human'}}
elif players == 0:
first = random.choice(['X', 'O'])
if first == 'O':
self.player_meta = {'second': {'label': 'X',
'type': 'ai'},
'first': {'label': 'O',
'type': 'ai'}}
else:
self.player_meta = {'first': {'label': 'X',
'type': 'ai'},
'second': {'label': 'O',
'type': 'ai'}}
elif self.setup == 'auto':
first = random.choice(['X', 'O'])
if first == 'O':
self.start_player = 'O'
self.player_meta = {'second': {'label': 'X',
'type': 'ai'},
'first': {'label': 'O',
'type': 'ai'}}
else:
self.start_player = 'X'
self.player_meta = {'first': {'label': 'X',
'type': 'ai'},
'second': {'label': 'O',
'type': 'ai'}}
def play_game(self):
while True:
for player in ['first', 'second']:
self.visualize_board()
player_label = self.player_meta[player]['label']
player_type = self.player_meta[player]['type']
if player_type == 'human':
move = input("{}, what's your move?".format(player_label))
# we're going to allow the user to quit the game from the input line
if move in ['q', 'quit']:
self.winner = 'F'
print('quiting the game')
break
move = int(move)
if self.board[move] != ' ':
while True:
move = input("{}, that position is already taken! "\
"What's your move?".format(player))
move = int(move)
if self.board[move] != ' ':
continue
else:
break
else:
while True:
move = random.randint(1,9)
if self.board[move] != ' ':
continue
print('test')
else:
break
self.board[move] = player_label
# the winner variable will now be checked within the board object
self.check_winning()
self.check_stalemate()
if self.winner == '':
continue
elif self.winner == 'Stalemate':
print(self.check_stalemate())
self.visualize_board()
break
else:
print(self.check_winning())
self.visualize_board()
break
if self.winner != '':
return self
###Output
_____no_output_____
###Markdown
2.0.2 Load Dataset[back to top](top) 2.1 AI Heuristics. Develop a better AI based on your analyses of game play so far. Q1: In our groups, let's discuss what rules we would like to hard code in. Harsha, Varsha, and I will help you with the flow control to program these rules.
###Code
# we will define some variables to help us define the types of positions
middle = 5
side = [2, 4, 6, 8]
corner = [1, 3, 7, 9]
# recall that our board is a dictionary
tictactoe = TicTacToe()
tictactoe.board
# and we have a win_patterns object to help us with the algorithm
tictactoe.win_patterns
###Output
_____no_output_____
###Markdown
For example, suppose we want to check whether the middle piece is available and play it if it is. How do we do that?
###Code
# set some key variables
player = 'X'
opponent = 'O'
avail_moves = [i for i in tictactoe.board.keys() if tictactoe.board[i] == ' ']
# a variable that will keep track if we've found a move we like or not
move_found = False
# <- some other moves we might want to make would go here -> #
# and now for our middle piece play
if move_found == False: # if no other move has been found yet
if middle in avail_moves: # if middle is available
move_found = True # then change our move_found status
move = middle # update our move
###Output
_____no_output_____
###Markdown
> Note: in the following, when I say _return a move_ I mean that when we wrap this up in a function we will want the return value to be a move. For now I just mean that the _result_ of your code in Q3 is to take the variable name `move` and set it equal to the tic-tac-toe board position the AI will play. Our standard approach will be to always ***return a move by the agent***. Whether the agent is heuristic or from some other ML framework, we ***always want to return a move***. Q2: Write down your algorithm steps in markdown, e.g. 1. play a corner piece, 2. play the corner opposite the opponent, 3. ...etc. Q3: Begin to codify your algorithm from Q2 (one possible sketch follows the starter cell below). Make sure that no matter what, you ***return a move***.
###Code
# some starting variables for you
self = TicTacToe() # this is a useful cheat for when we actually put this in as a method
player_label = 'X'
opponent = 'O'
avail_moves = [i for i in self.board.keys() if self.board[i] == ' ']
# temp board will allow us to play hypothetical moves and see where they get us
# in case you need it
temp_board = self.board.copy()
###Output
_____no_output_____
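###Markdown
 One possible sketch for Q3 (illustrative only, not the official solution), using the starter variables defined in the cell above: take a winning move if one exists, otherwise block the opponent's winning move, otherwise prefer the middle, then a corner, then any open square.
```
move_found = False

# 1) win if we can, 2) otherwise block the opponent's win
for label in [player_label, opponent]:
    if move_found:
        break
    for pattern in self.win_patterns:
        values = [self.board[i] for i in pattern]
        if values.count(label) == 2 and values.count(' ') == 1:
            move = pattern[values.index(' ')]
            move_found = True
            break

# 3) otherwise middle, then corners, then whatever is left
if not move_found:
    for preference in [[5], [1, 3, 7, 9], avail_moves]:
        open_spots = [m for m in preference if m in avail_moves]
        if open_spots:
            move = random.choice(open_spots)
            move_found = True
            break

print(move)
```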
###Markdown
2.2 Wrapping our Agent. Now that we've created a conditional tree for our AI to make a decision, we need to integrate this within the gaming framework we've made so far. How should we do this? Let's define this thought pattern or tree as an agent. Recall our `play_game` function within `GameEngine`:
###Code
def play_game(self):
while True:
for player in ['first', 'second']:
self.visualize_board()
player_label = self.player_meta[player]['label']
player_type = self.player_meta[player]['type']
if player_type == 'human':
move = input("{}, what's your move?".format(player_label))
# we're going to allow the user to quit the game from the input line
if move in ['q', 'quit']:
self.winner = 'F'
print('quiting the game')
break
move = int(move)
if self.board[move] != ' ':
while True:
move = input("{}, that position is already taken! "\
"What's your move?".format(player))
move = int(move)
if self.board[move] != ' ':
continue
else:
break
########################################################################
##################### WE WANT TO CHANGE THESE LINES ####################
########################################################################
else:
while True:
move = random.randint(1,9)
if self.board[move] != ' ':
continue
print('test')
else:
break
self.board[move] = player_label
# the winner varaible will now be check within the board object
self.check_winning()
self.check_stalemate()
if self.winner == '':
continue
elif self.winner == 'Stalemate':
print(self.check_stalemate())
self.visualize_board()
break
else:
print(self.check_winning())
self.visualize_board()
break
if self.winner != '':
return self
###Output
_____no_output_____
###Markdown
2.2.1 Redefining the Random Agent. In particular, we want to change lines 30-37 to take our gaming agent in as a parameter to make decisions. Let's try this. In `setup_game` we want to have the option to set the AI type/level. In `play_game` we want to make a call to that AI to make the move. For instance, our random AI will go from:
```
while True:
    move = random.randint(1,9)
    if self.board[move] != ' ':
        continue
    else:
        break
```
to:
```
def random_ai(self):
    while True:
        move = random.randint(1,9)
        if self.board[move] != ' ':
            continue
        else:
            break
    return move
```
###Code
class GameEngine(TicTacToe):
def __init__(self, setup='auto'):
super().__init__()
self.setup = setup
##############################################################################
########## our fresh off the assembly line tictactoe playing robot ###########
##############################################################################
def random_ai(self):
while True:
move = random.randint(1,9)
if self.board[move] != ' ':
continue
else:
break
return move
def setup_game(self):
if self.setup == 'user':
players = int(input("How many Players? (type 0, 1, or 2)"))
self.player_meta = {'first': {'label': 'X',
'type': 'human'},
'second': {'label': 'O',
'type': 'human'}}
if players != 2:
########################################################################
################# Allow the user to set the ai level ###################
########################################################################
level = int(input("select AI level (1, 2)"))
if level == 1:
self.ai_level = 1
elif level == 2:
self.ai_level = 2
else:
print("Unknown AI level entered, this will cause problems")
if players == 1:
first = input("who will go first? (X, (AI), or O (Player))")
if first == 'O':
self.player_meta = {'second': {'label': 'X',
'type': 'ai'},
'first': {'label': 'O',
'type': 'human'}}
else:
self.player_meta = {'first': {'label': 'X',
'type': 'ai'},
'second': {'label': 'O',
'type': 'human'}}
elif players == 0:
first = random.choice(['X', 'O'])
if first == 'O':
self.player_meta = {'second': {'label': 'X',
'type': 'ai'},
'first': {'label': 'O',
'type': 'ai'}}
else:
self.player_meta = {'first': {'label': 'X',
'type': 'ai'},
'second': {'label': 'O',
'type': 'ai'}}
elif self.setup == 'auto':
first = random.choice(['X', 'O'])
if first == 'O':
self.start_player = 'O'
self.player_meta = {'second': {'label': 'X',
'type': 'ai'},
'first': {'label': 'O',
'type': 'ai'}}
else:
self.start_player = 'X'
self.player_meta = {'first': {'label': 'X',
'type': 'ai'},
'second': {'label': 'O',
'type': 'ai'}}
##########################################################################
############## and automatically set the ai level otherwise ##############
##########################################################################
self.ai_level = 1
def play_game(self):
while True:
for player in ['first', 'second']:
self.visualize_board()
player_label = self.player_meta[player]['label']
player_type = self.player_meta[player]['type']
if player_type == 'human':
move = input("{}, what's your move?".format(player_label))
if move in ['q', 'quit']:
self.winner = 'F'
print('quiting the game')
break
move = int(move)
if self.board[move] != ' ':
while True:
move = input("{}, that position is already taken! "\
"What's your move?".format(player))
move = int(move)
if self.board[move] != ' ':
continue
else:
break
else:
if self.ai_level == 1:
move = self.random_ai()
######################################################################
############## we will leave this setting empty for now ##############
######################################################################
elif self.ai_level == 2:
pass
self.board[move] = player_label
self.check_winning()
self.check_stalemate()
if self.winner == '':
continue
elif self.winner == 'Stalemate':
print(self.check_stalemate())
self.visualize_board()
break
else:
print(self.check_winning())
self.visualize_board()
break
if self.winner != '':
return self
###Output
_____no_output_____
###Markdown
Let's test that our random ai works now in this format
###Code
random.seed(12)
game = GameEngine(setup='auto')
game.setup_game()
game.play_game()
###Output
| | | |
| | | |
| | | |
| | | |
| |O| |
| | | |
| | | |
| |O| |
| | |X|
| | | |
| |O|O|
| | |X|
| | |X|
| |O|O|
| | |X|
| | |X|
| |O|O|
|O| |X|
|X| |X|
| |O|O|
|O| |X|
|X| |X|
| |O|O|
|O|O|X|
|X| |X|
|X|O|O|
|O|O|X|
'O' Won!
|X|O|X|
|X|O|O|
|O|O|X|
###Markdown
Let's try it with a user player:
###Code
random.seed(12)
game = GameEngine(setup='user')
game.setup_game()
game.play_game()
###Output
How many Players? (type 0, 1, or 2)2
| | | |
| | | |
| | | |
X, what's your move?q
quiting the game
###Markdown
Q4: Now let's fold in our specialized AI agent. Add your code under the `heuristic_ai` function. Note that the `player_label` is now passed as an input parameter.
###Code
class GameEngine(TicTacToe):
def __init__(self, setup='auto'):
super().__init__()
self.setup = setup
##############################################################################
################### YOUR BADASS HEURISTIC AGENT GOES HERE ####################
##############################################################################
def heuristic_ai(self, player_label):
# SOME HELPER VARIABLES IF YOU NEED THEM
opponent = ['X', 'O']
opponent.remove(player_label)
opponent = opponent[0]
avail_moves = [i for i in self.board.keys() if self.board[i] == ' ']
temp_board = self.board.copy()
################## YOUR CODE GOES HERE, RETURN THAT MOVE! ##################
while True: # DELETE LINES 20 - 25, USED FOR TESTING PURPOSES ONLY
move = random.randint(1,9)
if self.board[move] != ' ':
continue
else:
break
############################################################################
return move
def random_ai(self):
while True:
move = random.randint(1,9)
if self.board[move] != ' ':
continue
else:
break
return move
def setup_game(self):
if self.setup == 'user':
players = int(input("How many Players? (type 0, 1, or 2)"))
self.player_meta = {'first': {'label': 'X',
'type': 'human'},
'second': {'label': 'O',
'type': 'human'}}
if players != 2:
########################################################################
################# Allow the user to set the ai level ###################
########################################################################
level = int(input("select AI level (1, 2)"))
if level == 1:
self.ai_level = 1
elif level == 2:
self.ai_level = 2
else:
print("Unknown AI level entered, this will cause problems")
if players == 1:
first = input("who will go first? (X, (AI), or O (Player))")
if first == 'O':
self.player_meta = {'second': {'label': 'X',
'type': 'ai'},
'first': {'label': 'O',
'type': 'human'}}
else:
self.player_meta = {'first': {'label': 'X',
'type': 'ai'},
'second': {'label': 'O',
'type': 'human'}}
elif players == 0:
first = random.choice(['X', 'O'])
if first == 'O':
self.player_meta = {'second': {'label': 'X',
'type': 'ai'},
'first': {'label': 'O',
'type': 'ai'}}
else:
self.player_meta = {'first': {'label': 'X',
'type': 'ai'},
'second': {'label': 'O',
'type': 'ai'}}
elif self.setup == 'auto':
first = random.choice(['X', 'O'])
if first == 'O':
self.start_player = 'O'
self.player_meta = {'second': {'label': 'X',
'type': 'ai'},
'first': {'label': 'O',
'type': 'ai'}}
else:
self.start_player = 'X'
self.player_meta = {'first': {'label': 'X',
'type': 'ai'},
'second': {'label': 'O',
'type': 'ai'}}
##########################################################################
############## and automatically set the ai level otherwise ##############
##########################################################################
self.ai_level = 1
def play_game(self):
while True:
for player in ['first', 'second']:
self.visualize_board()
player_label = self.player_meta[player]['label']
player_type = self.player_meta[player]['type']
if player_type == 'human':
move = input("{}, what's your move?".format(player_label))
if move in ['q', 'quit']:
self.winner = 'F'
print('quiting the game')
break
move = int(move)
if self.board[move] != ' ':
while True:
move = input("{}, that position is already taken! "\
"What's your move?".format(player))
move = int(move)
if self.board[move] != ' ':
continue
else:
break
else:
if self.ai_level == 1:
move = self.random_ai()
######################################################################
############ level 2 now calls our heuristic agent above ############
######################################################################
elif self.ai_level == 2:
move = self.heuristic_ai(player_label)
self.board[move] = player_label
self.check_winning()
self.check_stalemate()
if self.winner == '':
continue
elif self.winner == 'Stalemate':
print(self.check_stalemate())
self.visualize_board()
break
else:
print(self.check_winning())
self.visualize_board()
break
if self.winner != '':
return self
###Output
_____no_output_____
###Markdown
Q5: And we'll test that it works!
###Code
random.seed(12)
game = GameEngine(setup='user')
game.setup_game()
game.play_game()
###Output
How many Players? (type 0, 1, or 2)1
select AI level (1, 2)2
who will go first? (X, (AI), or O (Player))O
| | | |
| | | |
| | | |
O, what's your move?5
| | | |
| |O| |
| | | |
| | | |
| |O| |
| |X| |
O, what's your move?9
| | | |
| |O| |
| |X|O|
| | | |
| |O|X|
| |X|O|
O, what's your move?1
'O' Won!
|O| | |
| |O|X|
| |X|O|
###Markdown
Q6 Test the autorun feature!
###Code
game = GameEngine(setup='auto')
game.setup_game()
game.play_game()
###Output
| | | |
| | | |
| | | |
| | | |
| | | |
|O| | |
| |X| |
| | | |
|O| | |
|O|X| |
| | | |
|O| | |
|O|X| |
| | | |
|O| |X|
'O' Won!
|O|X| |
|O| | |
|O| |X|
|
inferential_stats_ex_3_hospital_readmit.ipynb | ###Markdown
Hospital Readmissions Data Analysis and Recommendations for Reduction. Background: In October 2012, the US government's Center for Medicare and Medicaid Services (CMS) began reducing Medicare payments for Inpatient Prospective Payment System hospitals with excess readmissions. Excess readmissions are measured by a ratio: a hospital's number of “predicted” 30-day readmissions for heart attack, heart failure, and pneumonia divided by the number that would be “expected,” based on an average hospital with similar patients. A ratio greater than 1 indicates excess readmissions. Exercise Directions: In this exercise, you will: + critique a preliminary analysis of readmissions data and recommendations (provided below) for reducing the readmissions rate + construct a statistically sound analysis and make recommendations of your own. More instructions are provided below. Include your work **in this notebook and submit to your Github account**. Resources: + Data source: https://data.medicare.gov/Hospital-Compare/Hospital-Readmission-Reduction/9n3s-kdb3 + More information: http://www.cms.gov/Medicare/medicare-fee-for-service-payment/acuteinpatientPPS/readmissions-reduction-program.html + Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet ****
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import bokeh.plotting as bkp
from mpl_toolkits.axes_grid1 import make_axes_locatable
# read in readmissions data provided
hospital_read_df = pd.read_csv('data/cms_hospital_readmissions.csv')
###Output
_____no_output_____
###Markdown
**** Preliminary Analysis
###Code
# deal with missing and inconvenient portions of data
clean_hospital_read_df = hospital_read_df[hospital_read_df['Number of Discharges'] != 'Not Available']
clean_hospital_read_df.loc[:, 'Number of Discharges'] = clean_hospital_read_df['Number of Discharges'].astype(int)
clean_hospital_read_df = clean_hospital_read_df.sort_values('Number of Discharges')
clean_hospital_read_df.head(3)
# generate a scatterplot for number of discharges vs. excess rate of readmissions
# lists work better with matplotlib scatterplot function
x = [a for a in clean_hospital_read_df['Number of Discharges'][81:-3]]
y = list(clean_hospital_read_df['Excess Readmission Ratio'][81:-3])
fig, ax = plt.subplots(figsize=(8,5))
ax.scatter(x, y,alpha=0.2)
ax.fill_between([0,350], 1.15, 2, facecolor='red', alpha = .15, interpolate=True)
ax.fill_between([800,2500], .5, .95, facecolor='green', alpha = .15, interpolate=True)
ax.set_xlim([0, max(x)])
ax.set_xlabel('Number of discharges', fontsize=12)
ax.set_ylabel('Excess rate of readmissions', fontsize=12)
ax.set_title('Scatterplot of number of discharges vs. excess rate of readmissions', fontsize=14)
ax.grid(True)
fig.tight_layout()
###Output
_____no_output_____
###Markdown
**** Preliminary Report. Read the following results/report. While you are reading it, think about whether the conclusions are correct, incorrect, misleading or unfounded. Think about what you would change or what additional analyses you would perform. **A. Initial observations based on the plot above** + Overall, rate of readmissions is trending down with increasing number of discharges + With lower number of discharges, there is a greater incidence of excess rate of readmissions (area shaded red) + With higher number of discharges, there is a greater incidence of lower rates of readmissions (area shaded green) **B. Statistics** + In hospitals/facilities with number of discharges < 100, mean excess readmission rate is 1.023 and 63% have excess readmission rate greater than 1 + In hospitals/facilities with number of discharges > 1000, mean excess readmission rate is 0.978 and 44% have excess readmission rate greater than 1 **C. Conclusions** + There is a significant correlation between hospital capacity (number of discharges) and readmission rates. + Smaller hospitals/facilities may be lacking necessary resources to ensure quality care and prevent complications that lead to readmissions. **D. Regulatory policy recommendations** + Hospitals/facilities with small capacity (< 300) should be required to demonstrate upgraded resource allocation for quality care to continue operation. + Directives and incentives should be provided for consolidation of hospitals and facilities to have a smaller number of them with higher capacity and number of discharges. **** Exercise. Include your work on the following **in this notebook and submit to your Github account**. A. Do you agree with the above analysis and recommendations? Why or why not? B. Provide support for your arguments and your own recommendations with a statistically sound analysis: 1. Set up an appropriate hypothesis test. 2. Compute and report the observed significance value (or p-value). 3. Report statistical significance for $\alpha$ = .01. 4. Discuss statistical significance and practical significance. Do they differ here? How does this change your recommendation to the client? 5. Look at the scatterplot above. - What are the advantages and disadvantages of using this plot to convey information? - Construct another plot that conveys the same information in a more direct manner. You can compose in notebook cells using Markdown: + In the control panel at the top, choose Cell > Cell Type > Markdown + Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet ****
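Before critiquing the report, the statistics quoted in part B can be checked directly against `clean_hospital_read_df` (a quick sketch; exact numbers may differ slightly depending on which rows the report's author included):
```
err = 'Excess Readmission Ratio'
small_fac = clean_hospital_read_df[clean_hospital_read_df['Number of Discharges'] < 100]
large_fac = clean_hospital_read_df[clean_hospital_read_df['Number of Discharges'] > 1000]

for name, group in [('< 100 discharges', small_fac), ('> 1000 discharges', large_fac)]:
    ratios = group[err].dropna()
    print(name, 'mean: {:.3f},'.format(ratios.mean()), 'share > 1: {:.0%}'.format((ratios > 1).mean()))
```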
###Code
# Your turn
###Output
_____no_output_____
###Markdown
A. Do you agree with the analysis above? Besides the fact that it seems unfair to penalize a facility based on a predicted/expected value rather than actual values, there are more concrete issues with the analysis. For one thing, all the conclusions are based solely on one scatter plot. There is no statistical support to show whether the claims are significant. Also, there is some inconsistency in describing small hospitals as those with fewer than 100 discharges or those with fewer than 300 discharges. Finally, the red and green blocks seem arbitrarily defined. B. New analysis. Hypothesis test definitions: *Ho*: There is no difference in excess readmissions between smaller hospitals (100 discharges or less) and larger hospitals. *Ha*: Smaller hospitals tend to have higher rates of excess readmissions.
###Code
#Reshape date by removing unnecessary columns
df = clean_hospital_read_df
df = df[['Hospital Name', 'State', 'Number of Discharges', 'Excess Readmission Ratio',\
'Predicted Readmission Rate', 'Expected Readmission Rate', 'Number of Readmissions']]
df.info()
#There are still cols with null values; remove them
df_clean = df.dropna(axis=0)
df_clean.info()
#Split into small and large categories
small = df_clean[df_clean['Number of Discharges'] <= 100]
large = df_clean[df_clean['Number of Discharges'] > 100]
#Find mean excess readmission ratio for small and large sets
obs_diff = np.mean(small['Excess Readmission Ratio']) - np.mean(large['Excess Readmission Ratio'])
#Get statistical moments and print out findings
l_desc = large['Excess Readmission Ratio'].describe()
s_desc = small['Excess Readmission Ratio'].describe()
print('Large Set', l_desc)
print('Small Set', s_desc)
print('The difference between the means is: {}'.format(obs_diff))
###Output
Large Set count 10274.000000
mean 1.005768
std 0.095046
min 0.549500
25% 0.947725
50% 1.000800
75% 1.059600
max 1.909500
Name: Excess Readmission Ratio, dtype: float64
Small Set count 1223.000000
mean 1.022088
std 0.058154
min 0.893500
25% 0.983800
50% 1.016700
75% 1.052750
max 1.495300
Name: Excess Readmission Ratio, dtype: float64
The difference between the means is: 0.016320732987291198
###Markdown
Perhaps somewhat surprising is that the means for both sets are above 1.0. Now to explore visually:
###Code
import seaborn as sns
sns.set()
#histograms
l_err = large['Excess Readmission Ratio']
s_err = small['Excess Readmission Ratio']
fig = plt.figure(figsize=(8, 6))
_ = plt.hist(l_err, bins=30, density=True, alpha=0.5)
_ = plt.hist(s_err, bins=30, density=True, alpha=0.4)
_ = plt.xlabel('Excess Readmission Ratio')
_ = plt.ylabel('PDF')
_ = plt.title('Distribution of Excess Readmission Ratios')
#ECDFs
def ecdf(data):
'''returns x, y values for cdf'''
n = len(data)
x = np.sort(data)
y = np.arange(1, n+1) / n
return x,y
#Get x and y values for large and small sets
x_l, y_l = ecdf(l_err)
x_s, y_s = ecdf(s_err)
#plot the densities
fig = plt.figure(figsize=(8,6))
_ = plt.plot(x_l, y_l, marker='.', linestyle='none')
_ = plt.plot(x_s, y_s, marker='.', linestyle='none')
_ = plt.xlabel('Excess Readmission Ratio')
_ = plt.ylabel('CDF')
_ = plt.title('Cumulative Density of Excess Readmission Ratios')
###Output
_____no_output_____
###Markdown
The distributions approach normal, but have different shapes; the two sets also differ in spread, with the small-hospital ratios showing a noticeably smaller standard deviation. Now permute the group labels many times (a permutation test) to assess the significance of the observed difference.
###Code
#Instantiate array of replicates
samp_diffs = np.empty(100000)
#pool the two groups once; the labels get shuffled inside the loop
combined = np.concatenate((l_err, s_err))
#compute the differences in means 100,000 times; append to array
for i in range(100000):
perm = np.random.permutation(combined)
s_split = perm[:len(s_err)]
l_split = perm[len(s_err):]
samp_diffs[i] = np.mean(s_split) - np.mean(l_split)
#calculate the p-value
p_val = np.sum(abs(samp_diffs) >= obs_diff) / len(samp_diffs)
print('The p-value is: ', p_val)
###Output
The p-value is: 0.0
###Markdown
The p-value is clearly very low, well below 0.01. Out of 100,000 trials, zero differences in means were as extreme as the one observed. All this suggests rejecting the null hypothesis and concluding that there is a significant difference in readmission rates between smaller and larger hospitals. It appears the initial analysis was on to something, but now there is evidence to back up the claim. However, this doesn't necessarily speak to the practical significance of the analysis. We don't have any information to quantify the financial impact of the situation. The struggling hospitals in the test are small, so are they really costing the program that much relative to the overall sample? Perhaps more importantly, if funding is cut, how will that affect the community? These hospitals might be small because they serve isolated areas, so punishing them might not be worth the human cost. Creating a new plot to better distinguish small and large:
###Code
#Set up x, y values for both small and large sets
x_sm = [a for a in small['Number of Discharges']]
y_sm = list(small['Excess Readmission Ratio'])
x_lg = [a for a in large['Number of Discharges']]
y_lg = list(large['Excess Readmission Ratio'])
#Plot the two sets concurrently
fig = plt.figure(figsize=(8,5))
_ = plt.scatter(x_sm, y_sm, alpha=0.5, c='r', s=10, label='Small')
_ = plt.scatter(x_lg, y_lg, alpha=0.5, c='b', s=10, label='Large')
_ = plt.axhline(1.0, c='k')
_ = plt.xlabel('Number of discharges', fontsize=12)
_ = plt.ylabel('Excess rate of readmissions', fontsize=12)
_ = plt.xscale('log')
_ = plt.legend()
_ = plt.title('Readmissions split between small and large')
###Output
_____no_output_____ |
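###Markdown
 One way to quantify the practical-significance caveat discussed above (an illustrative addition, not part of the original exercise) is a standardized effect size for the small-vs-large difference, using `s_err` and `l_err` defined earlier:
```
# Cohen's d for the small-vs-large difference in excess readmission ratio
mean_diff = s_err.mean() - l_err.mean()
pooled_var = ((len(s_err) - 1) * s_err.std()**2 + (len(l_err) - 1) * l_err.std()**2) / (len(s_err) + len(l_err) - 2)
cohens_d = mean_diff / np.sqrt(pooled_var)
print("Cohen's d: {:.2f}".format(cohens_d))   # roughly 0.18 -- a small effect despite the tiny p-value
```
A difference of roughly 0.18 standard deviations is statistically detectable with ~11,000 hospitals but is modest in absolute terms, which supports the caution above about penalizing small facilities.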
notebooks/120219_report.ipynb | ###Markdown
Entropy curves before and after calibration. Motivating example: Ovid's Unicorn passage: 'In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.'
###Code
import os
import sys
sys.path.append('../examples')
sys.path.append('../jobs')
from tqdm import trange
import torch
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import matplotlib.pyplot as plt
from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config
from generate_with_calibration import get_lookahead_entropies
# unicorn text
data = np.load('../jobs/output/113019_entropy/result.npz')['avg_ents'][0]
plt.plot(np.exp(data))
plt.xlabel('t')
plt.ylabel('$e^H$')
plt.title('Entropy blowup, no calibration')
plt.plot(np.exp(data[20:]))
plt.xlabel('t')
plt.ylabel('$e^H$')
plt.title('Entropy blowup, no calibration, post 20 generations')
# unicorn text, top 40
data_top40 = np.load('../jobs/output/113019_entropy_top40/result.npz')['avg_ents'][0]
plt.plot(np.exp(data_top40))
plt.xlabel('t')
plt.ylabel('$e^H$')
plt.title('Entropy blowup, top 40 truncation')
# to be or not to be
data_ctrl = np.load('../jobs/output/113019_entropy_ctrl/result.npz')['avg_ents'][0]
plt.plot(np.exp(data_ctrl))
# calibrated on unicorn, avg_ents back down
data_128 = np.load('../jobs/output/120119_cal_top128/result.npz')['avg_ents'][0]
plt.plot(np.exp(data_128))
# full calibration
data_full = np.load('../jobs/output/120119_cal_top128/result.npz')['avg_ents'][0]
plt.plot(np.exp(data_full))
# calibrated on unicorn, avg_ents back down
data_128l = np.load('../jobs/output/113019_cal_128long/result.npz')['avg_ents'][0]
plt.plot(np.exp(data_128l))
# calibrated on unicorn, top 1024
data_1024 = np.load('../jobs/output/120119_cal_top1024/result.npz')['avg_ents'][0]
plt.plot(np.exp(data_1024))
# full calibration
data_cal = np.load('../jobs/output/120119_calibration/result.npz')['avg_ents'][0]
plt.plot(np.exp(data_cal))
fig, ax = plt.subplots()
ax.plot(np.exp(data), c='red', label='no calibration')
ax.plot(np.exp(data_128), c='green', label='calibrated, top 128')
ax.plot(np.exp(data_1024), c='blue', label='calibrated, top 1024')
ax.set_xlabel('t')
ax.set_ylabel('$e^H$')
ax.legend()
ax.set_title('How calibration affects entropy blowup')
fig, ax = plt.subplots()
ax.plot(np.exp(data), c='red', label='no calibration')
ax.plot(np.exp(data_128), c='green', label='calibrated, top 128')
ax.plot(np.exp(data_1024), c='blue', label='calibrated, top 1024')
ax.plot(np.exp(data_top40), c='black', label='top 40 sampling')
ax.set_xlabel('t')
ax.set_ylabel('$e^H$')
ax.legend()
ax.set_title('How calibration affects entropy blowup')
###Output
_____no_output_____
###Markdown
Visualizing calibration
###Code
def set_seed(seed=42, n_gpu=0):
np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
torch.cuda.manual_seed_all(args.seed)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpus = torch.cuda.device_count()
set_seed()
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.to(device)
model.eval()
MAX_LENGTH = int(10000)
length = 100
if length < 0 and model.config.max_position_embeddings > 0:
length = model.config.max_position_embeddings
elif 0 < model.config.max_position_embeddings < length:
length = model.config.max_position_embeddings # No generation bigger than model size
elif length < 0:
length = MAX_LENGTH
vocab_size = tokenizer.vocab_size
raw_text = 'In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.'
# raw_text = 'I like.'
context = tokenizer.encode(raw_text)
context = torch.tensor(context, dtype=torch.long, device=device)
context = context.unsqueeze(0)
generated = context
with torch.no_grad():
inputs = {'input_ids': generated}
outputs = model(**inputs)
next_token_logits = outputs[0][:, -1, :]
next_probs = F.softmax(next_token_logits, dim=-1)
prob = next_probs.mean(axis=0).cpu().numpy()
sort_prob = np.sort(prob)[::-1]
plt.plot(sort_prob[:10])
# top k=40 filtering
filter_value=-float('Inf')
indices_to_remove = next_token_logits < torch.topk(next_token_logits, 40)[0][..., -1, None]
next_token_logits[indices_to_remove] = filter_value
next_probs = F.softmax(next_token_logits, dim=-1)
prob = next_probs.mean(axis=0).cpu().numpy()
sort_prob_40 = np.sort(prob)[::-1]
plt.plot(sort_prob_40[:10])
# calibration
lookahead_ents = get_lookahead_entropies(model, generated[0], 64, vocab_size, candidates=None, device=device).unsqueeze(0)
calibrated_next_logits = next_token_logits + (-1.27) * lookahead_ents
next_probs = F.softmax(calibrated_next_logits, dim=-1)
prob = next_probs.mean(axis=0).cpu().numpy()
sort_prob_cal = np.sort(prob)[::-1]
plt.plot(sort_prob_cal[:10])
plt.plot(prob)
plt.plot(lookahead_ents.cpu().numpy()[0])
teddy = lookahead_ents.cpu().numpy()
fig, ax = plt.subplots()
ax.plot(sort_prob[:10], c='blue', label='uncalibrated')
ax.plot(sort_prob_cal[:10], c='orange', label='calibrated')
ax.set_xlabel('token')
ax.set_ylabel('$p$')
ax.legend()
ax.set_title('Effect of calibration')
fig, ax = plt.subplots()
ax.bar(range(len(sort_prob[:10])), teddy[0][:10])
sort_prob[:128].sum()
plt.bar(range(50), sort_prob[:50])
plt.xlabel('token number')
plt.ylabel('$p$')
###Output
_____no_output_____ |
notebooks/Text2Image_FFT.ipynb | ###Markdown
Text to Image
Based on [CLIP](https://github.com/openai/CLIP) + FFT from [Lucent](https://github.com/greentfrapp/lucent) // made by [eps696](https://github.com/eps696)
Features
* uses image and/or text as prompts
* generates [FFT-encoded](https://github.com/greentfrapp/lucent/blob/master/lucent/optvis/param/spatial.py) image (detailed tiled textures, a la deepdream)
* ! very fast convergence
* ! undemanding for RAM (fullHD resolution and more)
**Run this cell after each session restart**
###Code
#@title General setup
import subprocess
CUDA_version = [s for s in subprocess.check_output(["nvcc", "--version"]).decode("UTF-8").split(", ") if s.startswith("release")][0].split(" ")[-1]
print("CUDA version:", CUDA_version)
if CUDA_version == "10.0":
torch_version_suffix = "+cu100"
elif CUDA_version == "10.1":
torch_version_suffix = "+cu101"
elif CUDA_version == "10.2":
torch_version_suffix = ""
else:
torch_version_suffix = "+cu110"
!pip install torch==1.7.1{torch_version_suffix} torchvision==0.8.2{torch_version_suffix} -f https://download.pytorch.org/whl/torch_stable.html ftfy regex
try:
!pip3 install googletrans==3.1.0a0
from googletrans import Translator, constants
# from pprint import pprint
translator = Translator()
except: pass
!pip install ftfy
import os
import time
import random
import imageio
import numpy as np
import PIL
from skimage import exposure
from base64 import b64encode
import torch
import torch.nn as nn
import torchvision
from IPython.display import HTML, Image, display, clear_output
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import ipywidgets as ipy
# import glob
from google.colab import output, files
import warnings
warnings.filterwarnings("ignore")
!git clone https://github.com/openai/CLIP.git
%cd /content/CLIP/
import clip
perceptor, preprocess = clip.load('ViT-B/32')
workdir = '_out'
tempdir = os.path.join(workdir, 'ttt')
os.makedirs(tempdir, exist_ok=True)
clear_output()
### FFT from Lucent library https://github.com/greentfrapp/lucent
def pixel_image(shape, sd=2.):
tensor = (torch.randn(*shape) * sd).cuda().requires_grad_(True)
return [tensor], lambda: tensor
# From https://github.com/tensorflow/lucid/blob/master/lucid/optvis/param/spatial.py
def rfft2d_freqs(h, w):
"""Computes 2D spectrum frequencies."""
fy = np.fft.fftfreq(h)[:, None]
# when we have an odd input dimension we need to keep one additional frequency and later cut off 1 pixel
if w % 2 == 1:
fx = np.fft.fftfreq(w)[: w // 2 + 2]
else:
fx = np.fft.fftfreq(w)[: w // 2 + 1]
return np.sqrt(fx * fx + fy * fy)
def fft_image(shape, sd=0.1, decay_power=1., smooth_col=1.):
batch, channels, h, w = shape
freqs = rfft2d_freqs(h, w)
init_val_size = (batch, channels) + freqs.shape + (2,) # 2 for imaginary and real components
spectrum_real_imag_t = (torch.randn(*init_val_size) * sd).cuda().requires_grad_(True)
scale = 1.0 / np.maximum(freqs, 1.0 / max(w, h)) ** decay_power
scale = torch.tensor(scale).float()[None, None, ..., None].cuda()
def inner():
scaled_spectrum_t = scale * spectrum_real_imag_t
image = torch.irfft(scaled_spectrum_t, 2, normalized=True, signal_sizes=(h, w))
image = image[:batch, :channels, :h, :w]
image = image / smooth_col # reduce saturation, smoothen colors & contrast
return image
return [spectrum_real_imag_t], inner
def to_valid_rgb(image_f, decorrelate=True):
def inner():
image = image_f()
if decorrelate:
image = _linear_decorrelate_color(image)
return torch.sigmoid(image)
return inner
def _linear_decorrelate_color(tensor):
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
t_permute = tensor.permute(0,2,3,1)
t_permute = torch.matmul(t_permute, torch.tensor(color_correlation_normalized.T).to(device))
tensor = t_permute.permute(0,3,1,2)
return tensor
color_correlation_svd_sqrt = np.asarray([[0.26, 0.09, 0.02],
[0.27, 0.00, -0.05],
[0.27, -0.09, 0.03]]).astype("float32")
max_norm_svd_sqrt = np.max(np.linalg.norm(color_correlation_svd_sqrt, axis=0))
color_correlation_normalized = color_correlation_svd_sqrt / max_norm_svd_sqrt
### Libs
def slice_imgs(imgs, count, transform=None, uniform=True):
def map(x, a, b):
return x * (b-a) + a
rnd_size = torch.rand(count)
if uniform is True:
rnd_offx = torch.rand(count)
rnd_offy = torch.rand(count)
else: # ~normal around center
rnd_offx = torch.clip(torch.randn(count) * 0.2 + 0.5, 0, 1)
rnd_offy = torch.clip(torch.randn(count) * 0.2 + 0.5, 0, 1)
sz = [img.shape[2:] for img in imgs]
sz_min = [np.min(s) for s in sz]
if uniform is True:
sz = [[2*s[0], 2*s[1]] for s in list(sz)]
imgs = [pad_up_to(imgs[i], sz[i], type='centr') for i in range(len(imgs))]
sliced = []
for i, img in enumerate(imgs):
cuts = []
for c in range(count):
csize = map(rnd_size[c], 224, 0.98*sz_min[i]).int()
offsetx = map(rnd_offx[c], 0, sz[i][1] - csize).int()
offsety = map(rnd_offy[c], 0, sz[i][0] - csize).int()
cut = img[:, :, offsety:offsety + csize, offsetx:offsetx + csize]
cut = torch.nn.functional.interpolate(cut, (224,224), mode='bilinear')
if transform is not None:
cut = transform(cut)
cuts.append(cut)
sliced.append(torch.cat(cuts, 0))
return sliced
def makevid(seq_dir, size=None):
out_sequence = seq_dir + '/%03d.jpg'
out_video = seq_dir + '.mp4'
!ffmpeg -y -v warning -i $out_sequence $out_video
data_url = "data:video/mp4;base64," + b64encode(open(out_video,'rb').read()).decode()
wh = '' if size is None else 'width=%d height=%d' % (size, size)
return """<video %s controls><source src="%s" type="video/mp4"></video>""" % (wh, data_url)
# Tiles an array around two points, allowing for pad lengths greater than the input length
# adapted from https://discuss.pytorch.org/t/symmetric-padding/19866/3
def tile_pad(xt, padding):
h, w = xt.shape[-2:]
left, right, top, bottom = padding
def tile(x, minx, maxx):
rng = maxx - minx
mod = np.remainder(x - minx, rng)
out = mod + minx
return np.array(out, dtype=x.dtype)
x_idx = np.arange(-left, w+right)
y_idx = np.arange(-top, h+bottom)
x_pad = tile(x_idx, -0.5, w-0.5)
y_pad = tile(y_idx, -0.5, h-0.5)
xx, yy = np.meshgrid(x_pad, y_pad)
return xt[..., yy, xx]
def pad_up_to(x, size, type='centr'):
sh = x.shape[2:][::-1]
if list(x.shape[2:]) == list(size): return x
padding = []
for i, s in enumerate(size[::-1]):
if 'side' in type.lower():
padding = padding + [0, s-sh[i]]
else: # centr
p0 = (s-sh[i]) // 2
p1 = s-sh[i] - p0
padding = padding + [p0,p1]
y = tile_pad(x, padding)
return y
class ProgressBar(object):
def __init__(self, task_num=10):
self.pbar = ipy.IntProgress(min=0, max=task_num, bar_style='') # (value=0, min=0, max=max, step=1, description=description, bar_style='')
self.labl = ipy.Label()
display(ipy.HBox([self.pbar, self.labl]))
self.task_num = task_num
self.completed = 0
self.start()
def start(self, task_num=None):
if task_num is not None:
self.task_num = task_num
if self.task_num > 0:
self.labl.value = '0/{}'.format(self.task_num)
else:
self.labl.value = 'completed: 0, elapsed: 0s'
self.start_time = time.time()
def upd(self, *p, **kw):
self.completed += 1
elapsed = time.time() - self.start_time + 0.0000000000001
fps = self.completed / elapsed if elapsed>0 else 0
if self.task_num > 0:
finaltime = time.asctime(time.localtime(self.start_time + self.task_num * elapsed / float(self.completed)))
fin = ' end %s' % finaltime[11:16]
percentage = self.completed / float(self.task_num)
eta = int(elapsed * (1 - percentage) / percentage + 0.5)
self.labl.value = '{}/{}, rate {:.3g}s, time {}s, left {}s, {}'.format(self.completed, self.task_num, 1./fps, shortime(elapsed), shortime(eta), fin)
else:
self.labl.value = 'completed {}, time {}s, {:.1f} steps/s'.format(self.completed, int(elapsed + 0.5), fps)
self.pbar.value += 1
if self.completed == self.task_num: self.pbar.bar_style = 'success'
return
# return self.completed
def time_days(sec):
return '%dd %d:%02d:%02d' % (sec/86400, (sec/3600)%24, (sec/60)%60, sec%60)
def time_hrs(sec):
return '%d:%02d:%02d' % (sec/3600, (sec/60)%60, sec%60)
def shortime(sec):
if sec < 60:
time_short = '%d' % (sec)
elif sec < 3600:
time_short = '%d:%02d' % ((sec/60)%60, sec%60)
elif sec < 86400:
time_short = time_hrs(sec)
else:
time_short = time_days(sec)
return time_short
!nvidia-smi -L
print('\nDone!')
###Output
_____no_output_____
###Markdown
Type some text to hallucinate it, or upload some image to neuremix it.
Or use both, why not.
###Code
#@title Input
text = "" #@param {type:"string"}
translate = False #@param {type:"boolean"}
#@markdown or
upload_image = False #@param {type:"boolean"}
if translate:
text = translator.translate(text, dest='en').text
if upload_image:
uploaded = files.upload()
###Output
_____no_output_____
###Markdown
This method converges pretty fast, yet you may set more iterations just to get that nice animation (the result may get better as well).
`smooth_col` scaler desaturates image, while uncovering more details (the bigger the smoother).
###Code
#@title Generate
# from google.colab import drive
# drive.mount('/content/GDrive')
# clipsDir = '/content/GDrive/MyDrive/T2I ' + dtNow.strftime("%Y-%m-%d %H%M")
!rm -rf tempdir
sideX = 1280 #@param {type:"integer"}
sideY = 720 #@param {type:"integer"}
smooth_col = 2.#@param {type:"number"}
uniform = True #@param {type:"boolean"}
#@markdown > Training
steps = 100 #@param {type:"integer"}
samples = 100 #@param {type:"integer"}
learning_rate = .05 #@param {type:"number"}
#@markdown > Misc
save_freq = 1 #@param {type:"integer"}
audio_notification = False #@param {type:"boolean"}
norm_in = torchvision.transforms.Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711))
if upload_image:
input = list(uploaded.values())[0]
print(list(uploaded)[0])
img_in = torch.from_numpy(imageio.imread(input).astype(np.float32)/255.).unsqueeze(0).permute(0,3,1,2).cuda()
in_sliced = slice_imgs([img_in], samples, transform=norm_in)[0]
img_enc = perceptor.encode_image(in_sliced).detach().clone()
del img_in, in_sliced; torch.cuda.empty_cache()
if len(text) > 2:
print(text)
if translate:
translator = Translator()
text = translator.translate(text, dest='en').text
print(' translated to:', text)
tx = clip.tokenize(text)
txt_enc = perceptor.encode_text(tx.cuda()).detach().clone()
shape = [1, 3, sideY, sideX]
param_f = fft_image
# param_f = pixel_image
# learning_rate = 1.
params, image_f = param_f(shape, smooth_col=smooth_col)
image_f = to_valid_rgb(image_f)
optimizer = torch.optim.Adam(params, learning_rate)
def displ(img, fname=None):
img = np.array(img)[:,:,:]
img = np.transpose(img, (1,2,0))
img = exposure.equalize_adapthist(np.clip(img, -1., 1.))
img = np.clip(img*255, 0, 255).astype(np.uint8)
if fname is not None:
imageio.imsave(fname, np.array(img))
imageio.imsave('result.jpg', np.array(img))
def checkin(num):
with torch.no_grad():
img = image_f().cpu().numpy()[0]
displ(img, os.path.join(tempdir, '%03d.jpg' % num))
outpic.clear_output()
with outpic:
display(Image('result.jpg'))
def train(i):
loss = 0
img_out = image_f()
imgs_sliced = slice_imgs([img_out], samples, norm_in, uniform=uniform)
out_enc = perceptor.encode_image(imgs_sliced[-1])
loss = 0
if upload_image:
loss += -100*torch.cosine_similarity(img_enc, out_enc, dim=-1).mean()
if len(text) > 2:
loss += -100*torch.cosine_similarity(txt_enc, out_enc, dim=-1).mean()
if isinstance(loss, int): print(' Loss not defined, check the inputs'); exit(1)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if i % save_freq == 0:
checkin(i // save_freq)
outpic = ipy.Output()
outpic
pbar = ProgressBar(steps)
for i in range(steps):
train(i)
_ = pbar.upd()
HTML(makevid(tempdir))
files.download('_out/ttt.mp4')
if audio_notification == True: output.eval_js('new Audio("https://freesound.org/data/previews/80/80921_1022651-lq.ogg").play()')
###Output
_____no_output_____ |
L06-pytorch/code/grad-intermediate-var.ipynb | ###Markdown
STAT 453: Deep Learning (Spring 2020) Instructor: Sebastian Raschka ([email protected]) Course website: http://pages.stat.wisc.edu/~sraschka/teaching/stat453-ss2020/ GitHub repository: https://github.com/rasbt/stat453-deep-learning-ss20 Example Showing How to Get Gradients of an Intermediate Variable in PyTorch This notebook illustrates how we can fetch the intermediate gradients of a function that is composed of multiple inputs and multiple computation steps in PyTorch. Note that the gradient is simply a vector listing the derivatives of a function with respect to each argument of the function. So, strictly speaking, we are discussing how to obtain the partial derivatives here. Assume we have the simple toy graph from the slides. For instance, if we are interested in obtaining the partial derivative of the output a with respect to each of the input and intermediate nodes, we could do the following in PyTorch, where `d_a_b` denotes "partial derivative of a with respect to b" and so forth: Intermediate Gradients in PyTorch via autograd's `grad` In PyTorch, there are multiple ways to compute partial derivatives or gradients. If the goal is to just compute partial derivatives, the most straightforward way would be using `torch.autograd`'s `grad` function. By default, the `retain_graph` parameter of the `grad` function is set to `False`, which will free the graph after computing the partial derivative. Thus, if we want to obtain multiple partial derivatives, we need to set `retain_graph=True`. Note that this is a very inefficient solution though, as multiple passes over the graph are being made where intermediate results are being recalculated: As Adam Paszke (PyTorch developer) suggested to me, this can be done in an efficient manner by passing a tuple to the `grad` function so that it can reuse intermediate results and only require one pass over the graph:
###Code
import torch
import torch.nn.functional as F
from torch.autograd import grad
x = torch.tensor([3.], requires_grad=True)
w = torch.tensor([2.], requires_grad=True)
b = torch.tensor([1.], requires_grad=True)
u = x * w
v = u + b
a = F.relu(v)
partial_derivatives = grad(a, (x, w, b, u, v))
for name, grad in zip("xwbuv", (partial_derivatives)):
print('d_a_%s:' % name, grad)
###Output
d_a_x: tensor([2.])
d_a_w: tensor([3.])
d_a_b: tensor([1.])
d_a_u: tensor([1.])
d_a_v: tensor([1.])
###Markdown
Intermediate Gradients in PyTorch via `retain_grad` In PyTorch, we most often use the `backward()` method on an output variable to compute its partial derivative (or gradient) with respect to its inputs (typically, the weights and bias units of a neural network). By default, PyTorch only stores the gradients of the leaf variables (e.g., the weights and biases) via their `grad` attribute to save memory. So, if we are interested in the intermediate results in a computational graph, we can use the `retain_grad` method to store gradients of non-leaf variables as follows:
###Code
import torch
import torch.nn.functional as F
from torch.autograd import Variable
x = torch.tensor([3.], requires_grad=True)
w = torch.tensor([2.], requires_grad=True)
b = torch.tensor([1.], requires_grad=True)
u = x * w
v = u + b
a = F.relu(v)
u.retain_grad()
v.retain_grad()
a.backward()
for name, var in zip("xwbuv", (x, w, b, u, v)):
print('d_a_%s:' % name, var.grad)
###Output
d_a_x: tensor([2.])
d_a_w: tensor([3.])
d_a_b: tensor([1.])
d_a_u: tensor([1.])
d_a_v: tensor([1.])
###Markdown
STAT 453: Deep Learning (Spring 2020) Instructor: Sebastian Raschka ([email protected]) Course website: http://pages.stat.wisc.edu/~sraschka/teaching/stat453-ss2020/ GitHub repository: https://github.com/rasbt/stat453-deep-learning-ss20
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
###Output
Sebastian Raschka
CPython 3.7.1
IPython 7.12.0
torch 1.4.0
###Markdown
Example Showing How to Get Gradients of an Intermediate Variable in PyTorch This notebook illustrates how we can fetch the intermediate gradients of a function that is composed of multiple inputs and multiple computation steps in PyTorch. Note that the gradient is simply a vector listing the derivatives of a function with respect to each argument of the function. So, strictly speaking, we are discussing how to obtain the partial derivatives here. Assume we have this simple toy graph:  Now, we provide the following values to b, x, and w; the red numbers indicate the intermediate values of the computation and the end result: Now, the next image shows the partial derivatives of the output node, a, with respect to the input nodes (b, x, and w) as well as all the intermediate partial derivatives: For instance, if we are interested in obtaining the partial derivative of the output a with respect to each of the input and intermediate nodes, we could do the following in PyTorch, where `d_a_b` denotes "partial derivative of a with respect to b" and so forth: Intermediate Gradients in PyTorch via autograd's `grad` In PyTorch, there are multiple ways to compute partial derivatives or gradients. If the goal is to just compute partial derivatives, the most straightforward way would be using `torch.autograd`'s `grad` function. By default, the `retain_graph` parameter of the `grad` function is set to `False`, which will free the graph after computing the partial derivative. Thus, if we want to obtain multiple partial derivatives, we need to set `retain_graph=True`. Note that this is a very inefficient solution though, as multiple passes over the graph are being made where intermediate results are being recalculated:
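For concreteness, with the values used in the code below (x = 3, w = 2, b = 1): u = x*w = 6, v = u + b = 7 and a = ReLU(v) = 7. Since v > 0, da/dv = 1, hence da/du = 1 and da/db = 1, while da/dw = da/du * x = 3 and da/dx = da/du * w = 2, which are exactly the numbers returned by the code below.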
###Code
import torch
import torch.nn.functional as F
from torch.autograd import grad
x = torch.tensor([3.], requires_grad=True)
w = torch.tensor([2.], requires_grad=True)
b = torch.tensor([1.], requires_grad=True)
u = x * w
v = u + b
a = F.relu(v)
d_a_b = grad(a, b, retain_graph=True)
d_a_u = grad(a, u, retain_graph=True)
d_a_v = grad(a, v, retain_graph=True)
d_a_w = grad(a, w, retain_graph=True)
d_a_x = grad(a, x)
for name, grad in zip("xwbuv", (d_a_x, d_a_w, d_a_b, d_a_u, d_a_v)):
print('d_a_%s:' % name, grad)
###Output
d_a_x: (tensor([2.]),)
d_a_w: (tensor([3.]),)
d_a_b: (tensor([1.]),)
d_a_u: (tensor([1.]),)
d_a_v: (tensor([1.]),)
###Markdown
As Adam Paszke (PyTorch developer) suggested to me, this can be rewritten in a more efficient manner by passing a tuple to the `grad` function so that it can reuse intermediate results and only require one pass over the graph:
###Code
import torch
import torch.nn.functional as F
from torch.autograd import grad
x = torch.tensor([3.], requires_grad=True)
w = torch.tensor([2.], requires_grad=True)
b = torch.tensor([1.], requires_grad=True)
u = x * w
v = u + b
a = F.relu(v)
partial_derivatives = grad(a, (x, w, b, u, v))
for name, grad in zip("xwbuv", (partial_derivatives)):
print('d_a_%s:' % name, grad)
###Output
d_a_x: tensor([2.])
d_a_w: tensor([3.])
d_a_b: tensor([1.])
d_a_u: tensor([1.])
d_a_v: tensor([1.])
###Markdown
Intermediate Gradients in PyTorch via `retain_grad` In PyTorch, we most often use the `backward()` method on an output variable to compute its partial derivative (or gradient) with respect to its inputs (typically, the weights and bias units of a neural network). By default, PyTorch only stores the gradients of the leaf variables (e.g., the weights and biases) via their `grad` attribute to save memory. So, if we are interested in the intermediate results in a computational graph, we can use the `retain_grad` method to store gradients of non-leaf variables as follows:
###Code
import torch
import torch.nn.functional as F
from torch.autograd import Variable
x = torch.tensor([3.], requires_grad=True)
w = torch.tensor([2.], requires_grad=True)
b = torch.tensor([1.], requires_grad=True)
u = x * w
v = u + b
a = F.relu(v)
u.retain_grad()
v.retain_grad()
a.backward()
for name, var in zip("xwbuv", (x, w, b, u, v)):
print('d_a_%s:' % name, var.grad)
###Output
d_a_x: tensor([2.])
d_a_w: tensor([3.])
d_a_b: tensor([1.])
d_a_u: tensor([1.])
d_a_v: tensor([1.])
###Markdown
Intermediate Gradients in PyTorch Using Hooks Finally, and this is a not-recommended workaround, we can use hooks to obtain intermediate gradients. While the two other approaches explained above should be preferred, this approach highlights the use of hooks, which may come in handy in certain situations.> The hook will be called every time a gradient with respect to the variable is computed. (http://pytorch.org/docs/master/autograd.html#torch.autograd.Variable.register_hook) Based on the suggestion by Adam Paszke (https://discuss.pytorch.org/t/why-cant-i-see-grad-of-an-intermediate-variable/94/7?u=rasbt), we can use these hooks in combination with a little helper function, `save_grad`, and a `hook` closure writing the partial derivatives or gradients to a global variable `grads`. So, if we invoke the `backward` method on the output node `a`, all the intermediate results will be collected in `grads`, as illustrated below:
###Code
import torch
import torch.nn.functional as F
grads = {}
def save_grad(name):
def hook(grad):
grads[name] = grad
return hook
x = torch.tensor([3.], requires_grad=True)
w = torch.tensor([2.], requires_grad=True)
b = torch.tensor([1.], requires_grad=True)
u = x * w
v = u + b
x.register_hook(save_grad('d_a_x'))
w.register_hook(save_grad('d_a_w'))
b.register_hook(save_grad('d_a_b'))
u.register_hook(save_grad('d_a_u'))
v.register_hook(save_grad('d_a_v'))
a = F.relu(v)
a.backward()
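# after backward(), grads holds one tensor per registered hook,
# e.g. grads['d_a_x'] == tensor([2.]) and grads['d_a_w'] == tensor([3.]), matching the previous approaches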
grads
###Output
_____no_output_____ |
src/assignment/201111/bandit2.ipynb | ###Markdown
Greedy
###Code
def getChoice(V):
    if V[0]-V[1]>0:
        choice="A"
    elif V[0]-V[1]<0:
        choice="B"
    else: # when the values are equal, choose at random
        if np.random.rand()>0.5:
            choice="A"
        else:
            choice="B"
    return choice # return the choice
###Output
_____no_output_____
###Markdown
getChoice() for the ε-greedy case
###Code
def getChoice(V,e):
    if V[0]-V[1]>0:
        if np.random.rand()<e:
            choice="B"
        else:
            choice="A"
    elif V[0]-V[1]<0:
        if np.random.rand()<e:
            choice="A"
        else:
            choice="B"
    else: # when the values are equal, choose at random
        if np.random.rand()>0.5:
            choice="A"
        else:
            choice="B"
    return choice # return the choice
###Output
_____no_output_____
###Markdown
getChoice() for the softmax case
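As a quick sanity check (illustrative numbers, not taken from the notebook): with V = [60, 45] and beta = 3, softmax() first rescales V by its maximum to [1.0, 0.75], so P(A) = exp(3*1.0) / (exp(3*1.0) + exp(3*0.75)) ≈ 0.68; the higher-valued option is chosen more often, but not always.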
###Code
def getChoice(V,beta):
    P=softmax(V,beta)
    if np.random.rand()<P[0]:
        choice="A"
    else:
        choice="B"
    return choice # return the choice
def softmax(V,beta): # arguments: a numpy array V and beta
    V=V/np.max(V) # divide by the maximum first (not strictly necessary)
    P=np.exp(beta*V)/np.sum(np.exp(beta*V)) # the softmax formula as-is
    return P # array containing the choice probability of each option
###Output
_____no_output_____
###Markdown
Reinforcement learning agent
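The update below is the standard delta rule V_next = V + alpha*(R - V). As a worked example (numbers consistent with the settings used later): with alpha = 0.3, a current value of 30 and a reward of 60, the new value is 30 + 0.3*(60 - 30) = 39.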
###Code
def RLagent(V,R,hist,alpha):
    Vnxt=np.zeros(2)
    if hist==1: # if A was chosen on the previous trial, update the value of A
        Vnxt[0]=V[0]+alpha*(R-V[0])
        Vnxt[1]=V[1]
    else:
        Vnxt[1]=V[1]+alpha*(R-V[1])
        Vnxt[0]=V[0]
    return Vnxt # return the new values
###Output
_____no_output_____
###Markdown
Main procedure
###Code
t=20
hist=np.zeros(20,dtype=int)
goukei=0
# add the remaining variables we need
V=np.zeros((21,2))
V[0,:]=30,30 # initial value estimates; leaving them at 0 would also work
alpha=0.3
if np.random.rand()>0.5:
    a,b=45,60
else:
    a,b=60,45
for i in range(t):
    choice=getChoice(V[i,:],0.1)
    if choice=="A":
        hist[i]=1
        r=np.random.normal(a,20)
    elif choice=="B":
        hist[i]=2
        r=np.random.normal(b,20)
    else:
        break # stop if anything other than A or B was chosen
    if r<0:
        r=0
    r=int(r)
    goukei+=r
    print(f"You won {r} points")
    print(f"The total is {goukei} points")
    print()
    V[i+1,:]=RLagent(V[i,:],r,hist[i],alpha) # learn before moving on to the next trial
print()
print("+++++++++++++++++")
print("Expected value of A: "+str(a)+", expected value of B: "+str(b))
print("Number of A choices: "+str(np.sum(hist[hist==1]))+", number of B choices: "+str(t-np.sum(hist[hist==1])))
print("Total score: "+str(goukei))
print("+++++++++++++++++")
hist
V
import matplotlib.pyplot as plt #インポート
plt.plot(V)
plt.ylabel("Value",fontsize=16)
plt.xlabel("Trials",fontsize=16)
plt.title("learning curve")
plt.show()
plt.plot(hist)
plt.ylabel("choice")
plt.xlabel("Trials")
plt.title("choice history")
plt.show()
plt.plot(V)
plt.show()
###Output
_____no_output_____ |
analyses/spx_volatility.ipynb | ###Markdown
SPX volatility The aim of this notebook is to investigate the volatility of SPX and to see whether the measured volatility matches the implied volatilities observed through options. The following steps will be undertaken:- **Day count**: I investigate whether non-trading days have lower volatility than trading days and whether changes on the last trading day before a non-trading day have a different volatility than other trading days (also based on a comment in the book)- **Long history**: I compare different moving average windows in order to see what can be said about the recommended window of 90 or 180 days- **Volatility growth**: In many calculations, uncertainty grows as `sqrt(t)`. This is compared with the increase in volatilities. Since the volatility does not change from the real world to risk neutral (Girsanov), this relationship should also hold on observed data. A version of the notebook is available as an HTML file since GitHub sometimes cannot properly display notebooks.
###Code
from datetime import datetime
from io import StringIO
import os
import numpy as np
import pandas as pd
import plotly.graph_objects as go
from plotly.offline.offline import iplot
from scipy.stats import spearmanr, norm
from sklearn.linear_model import LinearRegression, HuberRegressor, RANSACRegressor
from sklearn.preprocessing import PolynomialFeatures
import requests
###Output
_____no_output_____
###Markdown
Day count conventions
###Code
csv = requests.get("https://raw.githubusercontent.com/o1i/hull/main/data/2012-12-13_spx_historic.csv").content.decode("utf-8")
spx_hist = pd.read_csv(StringIO(csv))
dt_fmt = "%Y-%m-%d"
spx_hist["date_dt"] = spx_hist["date"].map(lambda x: datetime.strptime(x, dt_fmt))
spx_hist.sort_values("date_dt", inplace=True)
spx_hist.set_index("date_dt", inplace=True)
spx_hist["weekday"] = spx_hist.index.map(lambda x: x.strftime("%a"))
spx_hist["log_return"] = np.log10(spx_hist["close"] / spx_hist["close"].shift(1))
# The first 15 years or so open = close --> to be excluded
first_close_unlike_open = list(~(spx_hist["open"] == spx_hist["close"])).index(True)
spx_hist_short = spx_hist[first_close_unlike_open:]
intra_day = np.log10(spx_hist_short["close"] / spx_hist_short["open"])
###Output
_____no_output_____
###Markdown
Intra Day moves
###Code
fig = go.Figure(layout_yaxis_range=[-0.01,0.01],
layout_title="One-day log10-returns by weekday")
for wd in ["Mon", "Tue", "Wed", "Thu", "Fri"]:
fig.add_trace(go.Box(y=intra_day[spx_hist_short["weekday"] == wd], name=wd))
iplot(fig)
###Output
_____no_output_____
###Markdown
Interestingly, the median values are increasing over the week (i.e. relatively more positive movements towards the end of the week)
###Code
fig = go.Figure(layout_yaxis_range=[0,0.01],
layout_title="Absolute one day log10 returns by weekday")
for wd in ["Mon", "Tue", "Wed", "Thu", "Fri"]:
fig.add_trace(go.Box(y=intra_day[spx_hist_short["weekday"] == wd].map(lambda x: abs(x)), name=wd))
iplot(fig)
###Output
_____no_output_____
###Markdown
While both Q3 and the upper fence are lower on Friday, this does not seem to fundamentally change the picture compared to other days. Also, assuming that such a trivial pattern would be exploited until it no longer was a pattern, I will not treat Fridays differently in what follows. Trading vs non-trading days Since in the data available to me the "implied" prices at the end of non-business days were not available, I will compare the following:- The close of day `d` is compared to the open of `d+3` (three calendar days later) for Mondays, Tuesdays and Fridays.- Only two-day breaks over the weekend will be considered for simplicity. Any three-day weekend or a non-trading day in the middle of the week will be ignored.- Only the pattern over the entire period is analysed. Changes in this behaviour could be academically interesting, but they are not examined here since they are not at the heart of what this notebook is about (implied volatilities).
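In formula terms (matching the code below), the quantity compared is $\log_{10}(\mathrm{open}_{d+3} / \mathrm{close}_d)$: for Fridays the three calendar days contain no trading session, while for Mondays and Tuesdays they contain two full trading days.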
###Code
breaks = spx_hist_short.copy()
breaks[["wd_1", "open_1"]] = breaks[["weekday", "open"]].shift(-1)
breaks[["wd_3", "open_3"]] = breaks[["weekday", "open"]].shift(-3)
breaks = breaks[((breaks["weekday"] == "Mon") & (breaks["wd_3"] == "Thu")) |
((breaks["weekday"] == "Tue") & (breaks["wd_3"] == "Fri")) |
((breaks["weekday"] == "Fri") & (breaks["wd_1"] == "Mon"))]
breaks["open_after"] = np.where(breaks["weekday"] == "Fri", breaks["open_1"], breaks["open_3"])
gap = np.log10(breaks["open_after"] / breaks["close"])
fig = go.Figure(layout_yaxis_range=[-0.03, 0.03],
layout_title="log10(open_(d+2) / close(d)) starting on different weekdays")
for wd in ["Mon", "Tue", "Fri"]:
fig.add_trace(go.Box(y=gap[breaks["weekday"] == wd], name=wd))
iplot(fig)
###Output
_____no_output_____
###Markdown
As expected, there is significantly more movement over trading periods than over non-trading periods. I will therefore, as suggested by Hull, ignore non-trading days but treat Fridays like any other day. Thus the holes in the time series do not require special treatment. Just as a confirmation, I will look at the close-to-close variability, which should now be slightly larger for Mondays, since they incorporate the small Friday-close-to-Monday-open volatility.
###Code
fig = go.Figure(layout_yaxis_range=[0,0.025],
layout_title="Close-to-close absolute 1-day backward looking log10-returns for consecutive trading days")
for wd in ["Mon", "Tue", "Wed", "Thu", "Fri"]:
fig.add_trace(go.Box(y=spx_hist_short.loc[spx_hist_short["weekday"] == wd, "log_return"].map(lambda x: abs(x)), name=wd))
iplot(fig)
###Output
_____no_output_____
###Markdown
As expected the values are a tad higher, but by surprisingly little. What is not done here is to check whether on bank holidays (which may be idiosyncratic to U.S. stocks) there is more volatility than on weekends (which are the same in most major market places). One hypothesis could be that the reduced volatility is due to less information arriving on those days, which would be more the case for weekends than for country-specific days off. Since we can now look at close-to-close movements, the whole time series becomes usable. Past volatility to predict future volatility
###Code
# Assumption: 252 business days per year, i.e. 21 per month
def std_trace(n_month: int, col: str, name: str, backward: bool = True):
n = 21*n_month
window = n if backward else pd.api.indexers.FixedForwardWindowIndexer(window_size=n)
return go.Scatter(
x=spx_hist.iloc[::5].index,
y=spx_hist["log_return"].rolling(window).std().values[::5],
mode="lines",
marker={"color":col},
name=name,
text=[f"Index: {i}" for i in range(len(spx_hist.index))]
)
#trace_bw_1m = std_trace(1, "#762a83", "BW 1m", True)
trace_bw_3m = std_trace(3, "#9970ab", "BW 3m", True)
trace_bw_6m = std_trace(6, "#c2a5cf", "BW 6m", True)
#trace_bw_12m = std_trace(12, "#e7d4e8", "BW 12m", True)
#trace_fw_1m = std_trace(1, "#1b7837", "FW 1m", False)
trace_fw_3m = std_trace(3, "#5aae61", "FW 3m", False)
trace_fw_6m = std_trace(6, "#a6dba0", "FW 6m", False)
#trace_fw_12m = std_trace(12, "#d9f0d3", "FW 12m", False)
layout = {
'showlegend': True,
"title": "Little agreement of backward and forward standard deviation",
"xaxis": {"title": "Date"},
"yaxis": {"title": "Std of daily close-to-close log-returns"}
}
fig = {
'layout': layout,
'data': [#trace_bw_1m,
trace_bw_3m,
trace_bw_6m,
#trace_bw_12m,
#trace_fw_1m,
trace_fw_3m,
trace_fw_6m,
#trace_fw_12m
],
}
iplot(fig)
###Output
_____no_output_____
###Markdown
It appears as if, except in the stationary case (ca. 2012-2015), the past volatility does a surprisingly bad job of predicting future volatility (with obvious implications for options pricing). While one could do formal statistical tests, I believe a scatterplot and maybe an R2 or so will get me closer to a feeling for what is actually happening. All four trailing windows can be used as estimators for all the leading windows, leading to 16 possible combinations. Also, these windows are available on every trading day, and therefore looking at windows on every day would lead to strong dependencies, whereas arbitrarily choosing how to split the data into disjoint parts may also lead to variance inflation. I will therefore, for one example (6m back, 6m forward), compare the variance of the R2 estimator introduced by the choice of windows, and if it is sufficiently small pick the canonical non-overlapping windows for every combination of leading and trailing window size for further analysis. The expectation is that the plot of offset vs R2 is nearly constant and has (almost) the same values for offset 0 as for offset 251.
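As a reminder, the R2 used here is the coefficient of determination of the linear fit (what `LinearRegression.score` returns), $R^2 = 1 - \sum_i (y_i - \hat{y}_i)^2 / \sum_i (y_i - \bar{y})^2$, i.e. the share of the variance of the forward volatility explained by the backward volatility.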
###Code
n = int(252/2)
backward = spx_hist["log_return"].rolling(n).std()
forward = spx_hist["log_return"].rolling(pd.api.indexers.FixedForwardWindowIndexer(window_size=n)).std()
valid = backward.notna() & forward.notna()
backward = backward[valid]
forward = forward[valid]
index = np.array(range(len(forward)))
def get_r2(offset: int, window: int):
x = backward[offset::window].values.reshape([-1, 1])
y = forward[offset::window]
model = LinearRegression()
model.fit(x, y)
return model.score(x, y)
window = 252
fig = go.Figure(layout_yaxis_range=[0,0.5],
layout_title="Expanatory power measured in R2 depends heavily on window offset",
layout_xaxis_title="Offset (in trading days)",
layout_yaxis_title="R2 of forward std regressed on backward std")
fig.add_trace(go.Scatter(x=list(range(window)), y=[get_r2(i, window) for i in range(window)], mode="markers+lines"))
iplot(fig)
###Output
_____no_output_____
###Markdown
Clearly only the second assumption holds. It appears as if R2 is extremely sensitive to the offset. For example, comparing offsets 0 and 50, R2 drops from about 50% to 10% explained variance, which would mean that deciding on a backward window size to predict a certain future window of volatility would have to somehow take all possible offsets into account. To confirm, let's have a closer look at this specific example.
###Code
window = 252
offset_0 = 0
offset_1 = 50
x0 = backward[offset_0::window]
y0 = forward[offset_0::window]
x1 = backward[offset_1::window]
y1 = forward[offset_1::window]
text_0 = [f"Index: {offset_0 + i * window}, bw: {x0_}, fw: {y0[i]}" for i, x0_ in enumerate(x0)]
text_1 = [f"Index: {offset_1 + i * window}, bw: {x1_}, fw: {y1[i]}" for i, x1_ in enumerate(x1)]
min_x = min(min(x0), min(x1))
max_x = max(max(x0), max(x1))
m0 = LinearRegression()
m0.fit(x0.values.reshape([-1, 1]), y0)
m1 = LinearRegression()
m1.fit(x1.values.reshape([-1, 1]), y1)
fig = go.Figure(layout_title="Comparable dispersion despide large R2-difference for offsets 0 and 50",
layout_xaxis_title="Backward standard deviation",
layout_yaxis_title="Forward standard deviation")
fig.add_trace(go.Scatter(x=x0, y=y0, mode="markers", name=f"Offset {offset_0}", marker={"color": "#1f77b4"},
text=text_0))
fig.add_trace(go.Scatter(x=x1, y=y1, mode="markers", name=f"Offset {offset_1}", marker={"color": "#ff7f0e"},
text=text_1))
fig.add_trace(go.Scatter(x=[min_x, max_x], y=[min_x, max_x],
line={"color": "#aaaaaa"}, name="1:1-line", mode="lines"))
fig.add_trace(go.Scatter(x=[min_x, max_x], y=[m0.intercept_ + m0.coef_[0] * min_x, m0.intercept_ + m0.coef_[0] * max_x],
line={"color": "#1f77b4", "dash":"dash"}, mode="lines", showlegend=False))
fig.add_trace(go.Scatter(x=[min_x, max_x], y=[m1.intercept_ + m1.coef_[0] * min_x, m1.intercept_ + m1.coef_[0] * max_x],
line={"color": "#ff7f0e", "dash":"dash"}, mode="lines", showlegend=False))
iplot(fig)
###Output
_____no_output_____
###Markdown
It becomes clear that Pearson correlation may not be an ideal choice for this kind of analysis. When looking at the two sets of points, the dispersion seems comparable, and I am convinced that the outliers dominate the residual sums of squares. So a more robust measure of correlation will probably improve things.
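Spearman's rho is the Pearson correlation computed on the ranks of the observations; for $n$ observations without ties it equals $1 - 6\sum_i d_i^2 / (n(n^2-1))$ with $d_i$ the rank differences, which makes it far less sensitive to a few extreme volatility spikes.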
###Code
def get_sr2(offset: int, window: int):
return spearmanr(backward[offset::window], forward[offset::window]).correlation
window = 252
fig = go.Figure(layout_yaxis_range=[0,1],
layout_title="Spearmans rho less sensitive to window offset than R2",
layout_xaxis_title="Offset (in trading days)",
layout_yaxis_title="Spearman's rho")
fig.add_trace(go.Scatter(x=list(range(window)), y=[get_sr2(i, window) for i in range(window)], mode="markers+lines"))
iplot(fig)
###Output
_____no_output_____
###Markdown
The statement that the choice of window offset does not impact further analysis is not correct. If standard OLS is used to choose the best backward window size to predict the volatility in a given future window, one may incur significant distortions depending on the window offset used. However, the statement that the choice of window offset has a significant impact on the predictive power seems equally tenuous, since the dispersion (if measured using rank correlations) is fairly stable. The problems arise with large spikes in volatility that seem to be unpredictable as well as short-lived. Neither ignoring them (the R2 problem) nor deleting those data points seems to be a good option. Instead I propose to use a more robust regression. I will consider RANSAC and Huber regression, choosing the one with less volatility of the parameters over time (and again, if this were a real exercise, the same would have to be done for all combinations of forward and backward windows to ensure that the finding is not an artifact of the one pair chosen for this analysis).
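For reference, the classical Huber loss that such robust estimators minimise is quadratic for small residuals and linear beyond a threshold $\delta$: $L_\delta(r) = r^2/2$ for $|r| \le \delta$ and $\delta(|r| - \delta/2)$ otherwise, so a handful of extreme observations pull the fit far less than under ordinary least squares.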
###Code
window = 252
huber = HuberRegressor()
ransac = RANSACRegressor()
ols = LinearRegression()
def get_parameters(offset: int, window: int) -> tuple:
"""Return Huber-intercept, Huber beta, RANSAC-intercept and RANSAC-beta"""
x = backward[offset::window].values.reshape([-1, 1])
y = forward[offset::window]
huber.fit(x, y)
ransac.fit(x, y)
ols.fit(x, y)
return np.array([huber.intercept_, huber.coef_[0],
ransac.estimator_.intercept_, ransac.estimator_.coef_[0],
ols.intercept_, ols.coef_[0]]).reshape([1, -1])
coefs = np.concatenate([get_parameters(i, window) for i in range(window)])
fig = go.Figure(layout_yaxis_range=[0,1],
layout_title="Huber Regression more stable, but still with significant variability",
layout_xaxis_title="Offset (in trading days)",
layout_yaxis_title="Coefficients")
fig.add_trace(go.Scatter(x=np.arange(window), y=coefs[:, 0], line={"color": "#1f77b4", "dash":"dash"}, name="Intercept Huber"))
fig.add_trace(go.Scatter(x=np.arange(window), y=coefs[:, 1], line={"color": "#1f77b4", "dash":"solid"}, name="Coef Huber"))
fig.add_trace(go.Scatter(x=np.arange(window), y=coefs[:, 2], line={"color": "#7f7f7f", "dash":"dash"}, name="Intercept RANSAC"))
fig.add_trace(go.Scatter(x=np.arange(window), y=coefs[:, 3], line={"color": "#7f7f7f", "dash":"solid"}, name="Coef RANSAC"))
fig.add_trace(go.Scatter(x=np.arange(window), y=coefs[:, 4], line={"color": "#2ca02c", "dash":"dash"}, name="Intercept OLS"))
fig.add_trace(go.Scatter(x=np.arange(window), y=coefs[:, 5], line={"color": "#2ca02c", "dash":"solid"}, name="Coef OLS"))
iplot(fig)
###Output
_____no_output_____
###Markdown
Ignoring the (highly volatile) RANSAC, I am somewhat surprised that the outliers affect the parameters of the regression to a lesser degree than the window offset does. It is also noteworthy that the coefficient is markedly lower than 1 for most windows, which is somewhat at odds with the expectations. One explanation could be the spikiness of high-volatility phases: in phases where the backward volatility is particularly high, the forward volatility is lower than the backward volatility, which would explain the size of the coefficient being less than one. To come to a conclusion about finding "the best" way of predicting the volatility in a given future window: parameters based on linear estimates between forward and backward volatilities are highly dependent on the window offset, and it is not obvious how to choose a point estimator this way. An obvious solution would be to allow for some non-linear dependency between the backward volatility and the forward volatility. One way would be to apply a log transform to the predictor, another would be to add polynomial terms. Let's try both.
###Code
window = 252
offset_0 = 0
offset_1 = 171
x0 = backward[offset_0::window].values.reshape([-1, 1])
y0 = forward[offset_0::window]
x1 = backward[offset_1::window].values.reshape([-1, 1])
y1 = forward[offset_1::window]
all_x = np.concatenate([x0, x1])
min_x = all_x.min()
max_x = all_x.max()
x_pred = np.linspace(min_x, max_x, 200).reshape([-1, 1])
poly_trafo = PolynomialFeatures(degree=4)
m = LinearRegression()
m.fit(poly_trafo.fit_transform(x0), y0)
m0_pred_poly = m.predict(poly_trafo.fit_transform(x_pred))
m.fit(np.log(x0), y0)
m0_pred_log = m.predict(np.log(x_pred))
m.fit(poly_trafo.fit_transform(x1), y1)
m1_pred_poly = m.predict(poly_trafo.fit_transform(x_pred))
m.fit(np.log(x1), y1)
m1_pred_log = m.predict(np.log(x_pred))
col_0 = "#1f77b4"
col_1 = "#ff7f0e"
fig = go.Figure(layout_title="Comparable dispersion despide large R2-difference for offsets 0 and 50",
layout_xaxis_title="Backward standard deviation",
layout_yaxis_title="Forward standard deviation",
layout_yaxis_range=[0,0.015])
fig.add_trace(go.Scatter(x=x0.flatten(), y=y0, mode="markers", name=f"Offset {offset_0}", marker={"color": col_0},
text=text_0))
fig.add_trace(go.Scatter(x=x1.flatten(), y=y1, mode="markers", name=f"Offset {offset_1}", marker={"color": col_1},
text=text_1))
fig.add_trace(go.Scatter(x=[min_x, max_x], y=[min_x, max_x],
line={"color": "#aaaaaa"}, name="1:1-line", mode="lines"))
fig.add_trace(go.Scatter(x=x_pred.flatten(),
y=m0_pred_poly,
line={"color": col_0, "dash":"dash"}, mode="lines", name="Polynomial"))
fig.add_trace(go.Scatter(x=x_pred.flatten(),
y=m0_pred_log,
line={"color": col_0, "dash":"dot"}, mode="lines", name="Log trafo"))
fig.add_trace(go.Scatter(x=x_pred.flatten(),
y=m1_pred_poly,
line={"color": col_1, "dash":"dash"}, mode="lines", name="Polynomial"))
fig.add_trace(go.Scatter(x=x_pred.flatten(),
y=m1_pred_log,
line={"color": col_1, "dash":"dot"}, mode="lines", name="Log trafo"))
iplot(fig)
###Output
_____no_output_____
###Markdown
As expected, polynomial fits behave unpredictably towards outliers, and the comparison of how strongly the coefficients react to window offsets will only be done for the (Huberised) log-transformed model.
###Code
window = 252
huber = HuberRegressor()
huber2 = HuberRegressor()
def get_parameters_log(offset: int, window: int) -> tuple:
"""Return Huber-intercept, Huber beta, RANSAC-intercept and RANSAC-beta"""
x = backward[offset::window].values.reshape([-1, 1])
y = forward[offset::window]
huber.fit(x, y)
huber2.fit(np.log(x), y)
return np.array([huber.intercept_, huber.coef_[0],
huber2.intercept_, huber2.coef_[0], ]).reshape([1, -1])
coefs = np.concatenate([get_parameters_log(i, window) for i in range(window)])
fig = go.Figure(layout_title="Parameters of model on transformed data less volatile in absolute terms",
layout_xaxis_title="Offset (in trading days)",
layout_yaxis_title="Coefficients")
fig.add_trace(go.Scatter(x=np.arange(window), y=coefs[:, 0], line={"color": col_0, "dash":"dash"}, name="Intercept Untransformed"))
fig.add_trace(go.Scatter(x=np.arange(window), y=coefs[:, 1], line={"color": col_0, "dash":"solid"}, name="Coef Untransformed"))
fig.add_trace(go.Scatter(x=np.arange(window), y=coefs[:, 2], line={"color": col_1, "dash":"dash"}, name="Intercept Transformed"))
fig.add_trace(go.Scatter(x=np.arange(window), y=coefs[:, 3], line={"color": col_1, "dash":"solid"}, name="Coef Transformed"))
iplot(fig)
###Output
_____no_output_____
###Markdown
While in absolute terms the fluctuations of the parameters did not change by much, in relative terms the situation did not get much better. However, maybe this was the wrong way of looking at the problem: while from a modelling point of view (and for the confidence in the model) it would of course be very desirable to have stable model parameters, in practice the stability of the results is maybe more important. As a last analysis before actually settling on a good choice of window to predict a given future volatility, I will look at the variability of the predictions depending on the window offset. For that I will outline an area marking the interquartile range as well as lines for the median and the 5% and 95% quantiles at every point for which I predict, for both untransformed and log-transformed inputs.
###Code
window = 252
huber = HuberRegressor()
huber2 = HuberRegressor()
x_pred = np.linspace(min(min(forward), min(backward)), max(max(forward), max(backward)), 200).reshape([-1, 1])
def get_parameters_log(offset: int, window: int) -> tuple:
"""Return Huber-intercept, Huber beta, RANSAC-intercept and RANSAC-beta"""
x = backward[offset::window].values.reshape([-1, 1])
y = forward[offset::window]
huber.fit(x, y)
untransformed = huber.predict(x_pred).reshape([-1, 1, 1]) # Dims: x, offset, model
huber2.fit(np.log(x), y)
transformed = huber2.predict(np.log(x_pred)).reshape([-1, 1, 1])
return np.concatenate([untransformed, transformed], axis = 2)
preds = np.concatenate([get_parameters_log(i, window) for i in range(window)], axis=1)
quantiles = np.quantile(preds, [0.05, 0.25, 0.5, 0.75, 0.95], axis=1)
x_obs = np.linspace(x_pred.min(), x_pred.max(), 30)
bins = np.digitize(backward, x_obs)
observed = (pd.DataFrame({"bin": bins, "fw": forward})
.groupby("bin")["fw"]
.quantile(q=[0.05, 0.25, 0.5, 0.75, 0.95])
.unstack(level=1))
col_0 = "rgba(31,119,180, 0.2)"
col_1 = "rgba(255,127,14, 0.2)"
gray = "rgba(70, 70, 70, 0.2)"
fig = go.Figure(layout_title="Overall fit is hard to judge",
layout_xaxis_title="Past volatility",
layout_yaxis_title="Predicted future volatility")
fig.add_trace(go.Scatter(x=x_obs, y=observed[0.05].values, line={"color": gray, "dash":"solid"}, showlegend=False))
fig.add_trace(go.Scatter(x=x_obs, y=observed[0.50].values, line={"color": gray, "dash":"solid"}, showlegend=False))
fig.add_trace(go.Scatter(x=x_obs, y=observed[0.95].values, line={"color": gray, "dash":"solid"}, showlegend=False))
fig.add_trace(go.Scatter(x=x_obs, y=observed[0.25].values, line={"color": gray, "dash":"solid"}, showlegend=False))
fig.add_trace(go.Scatter(x=x_obs, y=observed[0.75].values, line={"color": gray, "dash":"solid"}, name="Observed", fill="tonexty"))
fig.add_trace(go.Scatter(x=x_pred.flatten(), y=quantiles[0, :, 0], line={"color": col_0, "dash":"solid"}, showlegend=False))
fig.add_trace(go.Scatter(x=x_pred.flatten(), y=quantiles[2, :, 0], line={"color": col_0, "dash":"solid"}, showlegend=False))
fig.add_trace(go.Scatter(x=x_pred.flatten(), y=quantiles[4, :, 0], line={"color": col_0, "dash":"solid"}, showlegend=False))
fig.add_trace(go.Scatter(x=x_pred.flatten(), y=quantiles[1, :, 0], line={"color": col_0, "dash":"solid"}, showlegend=False))
fig.add_trace(go.Scatter(x=x_pred.flatten(), y=quantiles[3, :, 0], line={"color": col_0, "dash":"solid"}, name="Pred Untransformed", fill="tonexty"))
fig.add_trace(go.Scatter(x=x_pred.flatten(), y=quantiles[0, :, 1], line={"color": col_1, "dash":"solid"}, showlegend=False))
fig.add_trace(go.Scatter(x=x_pred.flatten(), y=quantiles[2, :, 1], line={"color": col_1, "dash":"solid"}, showlegend=False))
fig.add_trace(go.Scatter(x=x_pred.flatten(), y=quantiles[4, :, 1], line={"color": col_1, "dash":"solid"}, showlegend=False))
fig.add_trace(go.Scatter(x=x_pred.flatten(), y=quantiles[1, :, 1], line={"color": col_1, "dash":"solid"}, showlegend=False))
fig.add_trace(go.Scatter(x=x_pred.flatten(), y=quantiles[3, :, 1], line={"color": col_1, "dash":"solid"}, name="Pred Transformed", fill="tonexty"))
iplot(fig)
###Output
_____no_output_____
###Markdown
The number of observations is limited, and the true volatility of observed values, in particular for higher past volatilities, is likely to be understated. While predictions on transformed data likely underpredict future volatilities if the past was marked by really low volatilities, predictions on transformed data seem to give more credible results for higher past-volatility regimes. While in practice one would need to get this exactly right (and experiment much more with proper predictions based on more than just one input variable), I will leave it at that and trust the book for now. Volatility growth In all of the following I disregard the days of the week, holidays etc. and treat the data as a steady stream of trading days. While not entirely accurate, this seems somewhat justified from the analysis above and common practice (cf. Hull). Let $N$ be the number of observed trading days, $\{x_0, ..., x_{N-1}\}$ be the observed log returns, $w \in \mathbb{N}_+$ the window size, and $t \in \{w, w+1, ..., N-w-1\}$ be a time point at which the volatility is observed. Let $\hat{\sigma}_{t}^{w} := \sqrt{\frac{1}{w} \cdot \sum_{i=t-w+1}^{t}(x_i - \bar{x}_t)^2}$ with $\bar{x}_t := \frac{1}{w} \cdot \sum_{i=t-w+1}^{t}x_i$. Assuming the daily log returns follow a zero-centred normal distribution with standard deviation $\hat{\sigma}_{t}^{w}$, I can normalise the forward returns to make them standard normal and hence comparable. The expectation is then that $Y_{t, j} := \frac{1}{\hat{\sigma}_{t}^{w}} \sum_{k=1}^{j}x_{t + k} \sim \mathcal{N}(0, j)$. To verify this, I will have to choose the $w$ and the $t$ such that the sample size is large enough (small $w$) but the $t$ are far enough apart such that the dependence is not too bad. Before having done the analysis my expectation is that the lower tail of the distribution is heavier than the upper tail (big moves tend to be to the downside), and that it is leptokurtic (movements are flat followed by larger movements rather than a steady creep upwards). I will test different sizes for $w$, but have the windows overlap, such that the evaluation period of one $t$ is the data on which the standard deviation of the next window is calculated.
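For reference, this is the usual square-root-of-time scaling: if the normalised daily returns were i.i.d. standard normal, then $\operatorname{Var}(Y_{t,j}) = j$, so the quantile bands should widen like $\sqrt{j}$; e.g. the 95% quantile after $j$ days sits at roughly $1.645\sqrt{j}$. The theoretical bands computed below are constructed exactly this way.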
###Code
def get_observed_returns(w, correct: bool = True) -> np.ndarray:
"""Gets all valid cumulative log returns"""
returns = spx_hist["log_return"]
if correct:
returns = returns - returns.mean()
backward = returns.rolling(w).std().values
forward = np.concatenate([
((returns
.rolling(pd.api.indexers.FixedForwardWindowIndexer(window_size=n))
.sum())
- returns)
.values
.reshape([-1, 1])
for n in range(2, w+2)],
axis=1)
forward_norm = forward / np.tile(backward.reshape([-1, 1]), (1, w))
forward_norm = forward_norm[~np.isnan(forward_norm).any(axis=1)]
return forward_norm
def select_observed_returns(forward_norm: np.ndarray, w: int, offset=0) -> np.ndarray:
"""Selects log returns so that they become less correlated"""
return forward_norm[offset::w, :]
def get_quantiles(forward_norm: np.ndarray, quantiles: list) -> np.ndarray:
"""Calculates quantiles from the observed returns (to be compared with the normal quantiles)"""
return np.quantile(forward_norm, quantiles, axis=0)
def get_window_quantiles(w: int, offset=0, correct: bool = True, quantiles=[0.01, 0.05, 0.25, 0.5, 0.75, 0.95, 0.99]):
cum_returns = get_observed_returns(w, correct)
cum_norm_returns = select_observed_returns(cum_returns, w, offset=offset)
return get_quantiles(cum_norm_returns, quantiles), cum_norm_returns.shape[0]
def get_normal_quantiles(t_max: int, quantiles: list = [0.01, 0.05, 0.25, 0.5, 0.75, 0.95, 0.99]) -> np.ndarray:
"""Returns theoretical quantiles from the standard normal"""
q = norm.ppf(quantiles).reshape([-1, 1])
scale = np.array([np.sqrt(i + 1) for i in range(t_max)]).reshape([1, -1])
return np.matmul(q, scale)
def add_traces(fig, quantiles: np.ndarray, col: str, fillcol: str, name: str):
"""Adds quantile traces to the fig and returns the fig. Assumes there are 7 quantiles to show with 2-4 in colors"""
fig.add_trace(go.Scatter(x=[i + 1 for i in range(quantiles.shape[1])], y=quantiles[0, :], line={"color": col, "dash":"dot"}, showlegend=False))
fig.add_trace(go.Scatter(x=[i + 1 for i in range(quantiles.shape[1])], y=quantiles[6, :], line={"color": col, "dash":"dot"}, showlegend=False))
fig.add_trace(go.Scatter(x=[i + 1 for i in range(quantiles.shape[1])], y=quantiles[1, :], line={"color": col, "dash":"dash"}, showlegend=False))
fig.add_trace(go.Scatter(x=[i + 1 for i in range(quantiles.shape[1])], y=quantiles[5, :], line={"color": col, "dash":"dash"}, showlegend=False))
fig.add_trace(go.Scatter(x=[i + 1 for i in range(quantiles.shape[1])], y=quantiles[3, :], line={"color": col, "dash":"solid"}, showlegend=False))
fig.add_trace(go.Scatter(x=[i + 1 for i in range(quantiles.shape[1])], y=quantiles[2, :], line={"color": "rgba(0, 0, 0, 0)", "dash":"solid"}, showlegend=False))
fig.add_trace(go.Scatter(x=[i + 1 for i in range(quantiles.shape[1])], y=quantiles[4, :], line={"color": "rgba(0, 0, 0, 0)", "dash":"solid"}, name=name, fill="tonexty", fillcolor=fillcol))
return fig
w = 21*6
uncorrected_window_quantiles, _ = get_window_quantiles(w, correct=False)
corrected_window_quantiles, n = get_window_quantiles(w)
col_0 = "rgba(31,119,180, 0.6)"
col_0_f = "rgba(31,119,180, 0.3)"
col_1 = "rgba(255,127,14, 0.8)"
col_1_f = "rgba(255,127,14, 0.4)"
gray = "rgba(90, 90, 90, 0.8)"
gray_f = "rgba(90, 90, 90, 0.4)"
fig = go.Figure(layout_title=f"True development too positive, smaller IQR and unexpected tails, w={w}, n={n}",
layout_xaxis_title="Trading days after t",
layout_yaxis_title="Cumulative normalised return")
fig = add_traces(fig, get_normal_quantiles(w), gray, gray_f, "Normal")
fig = add_traces(fig, uncorrected_window_quantiles, col_0, col_0_f, "Observed Uncorrected")
fig = add_traces(fig, corrected_window_quantiles, col_1, col_1_f, "Observed Corrected")
iplot(fig)
###Output
_____no_output_____ |
ASSIGNMENT_2-2.ipynb | ###Markdown
Assignment 2-2: A Simple Analysis of Film Reviews Preparation
###Code
%matplotlib inline
import pandas as pd
from matplotlib import pyplot as plt
from matplotlib import font_manager as fm
import matplotlib as mpl
import numpy as np
mpl.rcParams['figure.dpi']=440
%config InlineBackend.figure_format = 'svg'
font_path = "ASSIGNMENT_2_FILES/SiYuanSongTi.ttf"
fm.FontManager().addfont(font_path)
font_prop = fm.FontProperties(fname=font_path)
df = pd.read_csv("ASSIGNMENT_2_FILES/reviews.csv", low_memory=False)
df
###Output
_____no_output_____
###Markdown
Task 1: Count Reviews **Task:** Collect the review count of all films, and use a bar chart to display the 10 most reviewed films.
###Code
# Count the occurrences of rows in respect of 'movie_name'
comment_count_large = df['movie_name'].value_counts()
# Filter out top 10
top_10_film_names_salted = list(comment_count_large[0:10:1].index)
top_10_comment_count = list(comment_count_large[0:10:1].values)
print(top_10_film_names_salted)
# Process for film names
top_10_film_names = list(map(lambda x: x[0:x.find("的影评")], top_10_film_names_salted))
print(top_10_film_names)
plt.figure(figsize=(20,5))
plt.title("Top 10 Most Reviewed Films", fontproperties=font_prop, color='#454553')
plt.ylim(3000, 7900)
plt.bar(top_10_film_names, top_10_comment_count, edgecolor='#4AA0D5', facecolor="#D8E9F0", width=0.4)
plt.xticks(list(range(10)), top_10_film_names, color='#454553', fontproperties=font_prop, rotation=0)
plt.yticks([3000, 4000, 5000, 6000, 7000], color='#454553', fontproperties=font_prop)
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.spines['bottom'].set_color('#454553')
ax.spines['left'].set_color('#454553')
ax.tick_params(axis='x', color='#454553')
ax.tick_params(axis='y', color='#454553')
for x,y in zip(top_10_film_names, top_10_comment_count):
plt.text(x,y,'%d'%y,ha="center",va="bottom", fontproperties=font_prop, color='#4AA0D5')
###Output
_____no_output_____
###Markdown
Task 2: Weekly Reviews **Task:** Find the film with the most reviews. Plot the review count **each week** using a line chart, where each week is a point on the graph. While creating the graph it became apparent that the first few days contain the peak of the reviews. Thus, instead of using a linear scale on the $y$-axis, I used a logarithmic $y$-axis to prevent the curve from appearing almost flat from day 10 onwards.
###Code
from datetime import datetime
from datetime import timedelta
# Binary filter
comments = df[df['movie_name']=='后会无期的影评 (7876)']
# Collect dates
comments_date = comments['评论时间']
# Parse into datetime.datetime class
comments_date_time = list(map(
lambda x: datetime.strptime(x, '%Y/%m/%d %H:%M'),
comments_date.values
))
# Sort and find minimum
comments_date_time.sort()
begin_date = comments_date_time[0]
# Get week count calculated
comments_week = list(map(
lambda x: (x - begin_date).days // 7,
comments_date_time
))
print(comments_week.count(2))
# Get ready for graph
# X-axis: week
begin_week = comments_week[0]
end_week = comments_week[-1]
# Y-axis: comments
comments_count_by_week = list(map(lambda x: comments_week.count(x), range(begin_week, end_week + 1)))
plt.yscale('log')
plt.xlim(begin_week, end_week)
plt.ylim(5, 5000)
plt.plot(comments_count_by_week, color='#4AA0D5')
plt.title('Comment Count of 后会无期 by Weeks', fontproperties=font_prop, color='#454553')
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.spines['bottom'].set_color('#454553')
ax.spines['left'].set_color('#454553')
ax.tick_params(axis='x', color='#454553')
ax.tick_params(axis='y', color='#454553')
plt.xticks(range(0, 180, 20), color='#454553', fontproperties=font_prop)
plt.yticks(np.geomspace(10, 10000, 7), list(map(lambda x: int(round(x)), np.geomspace(10, 10000, 7))), color='#454553', fontproperties=font_prop)
plt.grid(True)
###Output
_____no_output_____ |
Impala_SQL_Jupyter/Impala_Basic_Kerberos.ipynb | ###Markdown
IPython/Jupyter notebooks for Apache Impala with Kerberos authentication 1. Connect to the target database - requires the Cloudera impyla package and thrift_sasl - edit the value of kerberos_service_name as relevant for your environment
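A minimal environment sketch (an assumption, not part of the original notebook): the client packages can be installed from PyPI, and a valid Kerberos ticket must be obtained before connecting, for example:

    # install the Impala client and the SASL transport used for GSSAPI
    !pip install impyla thrift_sasl
    # obtain a Kerberos ticket first (user and realm below are placeholders)
    !kinit [email protected]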
###Code
from impala.dbapi import connect
conn = connect(host='impalasrv-prod', port=21050, kerberos_service_name='impala', auth_mechanism='GSSAPI')
###Output
_____no_output_____
###Markdown
2. Run a query and fetch the results
###Code
cur = conn.cursor()
cur.execute('select * from test2.emp limit 2')
cur.fetchall()
###Output
_____no_output_____
###Markdown
Integration with pandas
###Code
cur = conn.cursor()
cur.execute('select * from test2.emp')
from impala.util import as_pandas
df = as_pandas(cur)
df.head()
###Output
_____no_output_____
###Markdown
More examples of integration with IPython ecosystem
###Code
cur = conn.cursor()
cur.execute('select ename, sal from test2.emp')
df = as_pandas(cur)
%matplotlib inline
import matplotlib
matplotlib.style.use('ggplot')
df.plot()
###Output
_____no_output_____ |
notebooks/api_guide/spectral_examples.ipynb | ###Markdown
Cross Power Spectral Density
###Code
cx = np.random.rand(int(1e8))
cy = np.random.rand(int(1e8))
fs = int(1e6)
%%timeit
ccsd = signal.csd(cx, cy, fs, nperseg=1024)
gx = cp.random.rand(int(1e8))
gy = cp.random.rand(int(1e8))
fs = int(1e6)
%%timeit
gcsd = cusignal.csd(gx, gy, fs, nperseg=1024)
###Output
The slowest run took 6.98 times longer than the fastest. This could mean that an intermediate result is being cached.
2.96 ms ± 3.01 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Periodogram
###Code
csig = np.random.rand(int(1e8))
fs = int(1e6)
%%timeit
f, Pxx_spec = signal.periodogram(csig, fs, 'flattop', scaling='spectrum')
gsig = cp.random.rand(int(1e8))
fs = int(1e6)
%%timeit
gf, gPxx_spec = cusignal.periodogram(gsig, fs, 'flattop', scaling='spectrum')
###Output
1.6 ms ± 32.9 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Welch PSD
###Code
csig = np.random.rand(int(1e8))
fs = int(1e6)
%%timeit
cf, cPxx_spec = signal.welch(csig, fs, nperseg=1024)
gsig = cp.random.rand(int(1e8))
fs = int(1e6)
%%timeit
gf, gPxx_spec = cusignal.welch(gsig, fs, nperseg=1024)
###Output
77.8 ms ± 1.05 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
Spectrogram
###Code
csig = np.random.rand(int(1e8))
fs = int(1e6)
%%timeit
cf, ct, cPxx_spec = signal.spectrogram(csig, fs)
gsig = cp.random.rand(int(1e8))
fs = int(1e6)
%%timeit
gf, gt, gPxx_spec = cusignal.spectrogram(gsig, fs)
###Output
40.4 ms ± 329 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
Coherence
###Code
cx = np.random.rand(int(1e8))
cy = np.random.rand(int(1e8))
fs = int(1e6)
%%timeit
cf, cCxy = signal.coherence(cx, cy, fs, nperseg=1024)
gx = cp.random.rand(int(1e8))
gy = cp.random.rand(int(1e8))
fs = int(1e6)
%%timeit
gf, gCxy = cusignal.coherence(gx, gy, fs, nperseg=1024)
###Output
7.61 ms ± 3.01 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Short Time Fourier Transform
###Code
cx = np.random.rand(int(1e8))
fs = int(1e6)
%%timeit
cf, ct, cZxx = signal.stft(cx, fs, nperseg=1000)
gx = cp.random.rand(int(1e8))
fs = int(1e6)
%%timeit
gf, gt, gZxx = cusignal.stft(gx, fs, nperseg=1024)
###Output
The slowest run took 22.07 times longer than the fastest. This could mean that an intermediate result is being cached.
22.5 ms ± 15.5 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
|
run_classifier_lis_colab.ipynb | ###Markdown
Created by 曾元顯 on 2019/09/28, modified on 2020/08/24. You first need to download the BERT code and the Chinese model released by Google; if this is unclear, see: https://github.com/SamTseng/Chinese_Skewed_TxtClf/blob/master/BERT_txtclf_HowTo.docx . Part 1: Set up and check the GPU. In the Colab interface, click Edit > Notebook settings or Runtime > Change runtime type, and select GPU as the hardware accelerator.
###Code
# check the GPU
!nvidia-smi
###Output
Mon Aug 24 09:57:15 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.57 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |
| N/A 35C P8 25W / 149W | 0MiB / 11441MiB | 0% Default |
| | | ERR! |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
###Markdown
Part 2: Grant Colab permission to access your Google Drive. Running the code below produces a link; open it, choose your Google account, copy the resulting code (something like: 4/rgFOTdBFqoSFMoinr_9uzzTLziWUyGjSQgNLt1EV4gbrM6gBYp-Vnxk), and paste it into the box shown below the link.
###Code
from google.colab import drive
drive.mount('/content/gdrive')
###Output
_____no_output_____
###Markdown
Set the working directory:
###Code
import os, time
os.chdir( "/content/gdrive/My Drive/BERT/lis" ) # change directory to your GoogleDrive
###Output
_____no_output_____
###Markdown
Set the notebook's working directory to the folder containing the BERT code. You can use the ls command to check that it worked:
###Code
!pwd
!ls
###Output
/content/gdrive/My Drive/BERT/lis
chinese_L-12_H-768_A-12 Model_LIS run_classifier_lis.py
data optimization.py tokenization.py
eval_clf_lis.py Output_LIS train_test_split.py
extract_features.py __pycache__
modeling.py run_classifier_lis_colab.ipynb
###Markdown
Part 3: Install an older version of tensorflow so that the code released by Google can run. Colab now comes with a newer tensorflow preinstalled, so uninstall it first, then install the older tensorflow that matches the code released by Google: https://github.com/google-research/bert
###Code
!pip uninstall tensorflow
# you will be asked: Proceed (y/n)? Remember to answer "y"
!pip install tensorflow==1.15.0
###Output
Collecting tensorflow==1.15.0
Using cached https://files.pythonhosted.org/packages/3f/98/5a99af92fb911d7a88a0005ad55005f35b4c1ba8d75fba02df726cd936e6/tensorflow-1.15.0-cp36-cp36m-manylinux2010_x86_64.whl
Requirement already satisfied: tensorboard<1.16.0,>=1.15.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.0) (1.15.0)
Requirement already satisfied: tensorflow-estimator==1.15.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.0) (1.15.1)
Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.0) (1.31.0)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.0) (0.34.2)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.0) (1.15.0)
Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.0) (1.12.1)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.0) (1.1.2)
Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.0) (0.8.1)
Requirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.0) (3.12.4)
Requirement already satisfied: gast==0.2.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.0) (0.2.2)
Requirement already satisfied: numpy<2.0,>=1.16.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.0) (1.18.5)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.0) (1.1.0)
Requirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.0) (0.9.0)
Requirement already satisfied: keras-applications>=1.0.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.0) (1.0.8)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.0) (3.3.0)
Requirement already satisfied: google-pasta>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.0) (0.2.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.0) (3.2.2)
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.0) (1.0.1)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.0) (49.2.0)
Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.8->tensorflow==1.15.0) (2.10.0)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.0) (1.7.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.0) (3.1.0)
Installing collected packages: tensorflow
Successfully installed tensorflow-1.15.0
###Markdown
Part 4: Train, predict, evaluate. 1. Train lis. Note 1: the current directory must contain at least these files released by Google: extract_features.py, modeling.py, optimization.py, tokenization.py, chinese_L-12_H-768_A-12 (preferably replaced with the multilingual version: https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip ). Note 2: run_classifier_lis.py needs to read the two files LIS_train.txt and LIS_test.txt, so make sure their file names and paths are correct. Training takes roughly 1500 seconds (K80 GPU). If the training time is far off, e.g. under 60 seconds, the model may already have been trained or the training data may be wrong; delete the Model_LIS directory, wait for synchronisation to finish, and train again.
###Code
time_Start = time.time()
!python run_classifier_lis.py \
--task_name=lis --do_train=true --do_eval=false \
--data_dir=data \
--vocab_file=multi_cased_L-12_H-768_A-12/vocab.txt \
--bert_config_file=multi_cased_L-12_H-768_A-12/bert_config.json \
--init_checkpoint=multi_cased_L-12_H-768_A-12/bert_model.ckpt \
--max_seq_length=256 \
--train_batch_size=8 \
--learning_rate=2e-5 \
--num_train_epochs=10.0 \
--output_dir=Model_LIS
print("It takes %4.2f seconds to train lis."%(time.time()-time_Start))
###Output
_____no_output_____
###Markdown
2. Predict lis. Prediction takes about 42 seconds.
###Code
time_Start = time.time()
!python run_classifier_lis.py \
--task_name=lis --do_predict=true \
--data_dir=data \
--vocab_file=multi_cased_L-12_H-768_A-12/vocab.txt \
--bert_config_file=multi_cased_L-12_H-768_A-12/bert_config.json \
--init_checkpoint=Model_LIS \
--max_seq_length=256 \
--output_dir=Output_LIS
print("It takes %4.2f seconds to predict CnonC."%(time.time()-time_Start))
###Output
_____no_output_____
###Markdown
3. Evaluate lis performance
###Code
!python eval_clf_lis.py lis data/LIS_test.txt Output_LIS/test_results.tsv
###Output
Print out the F1-score of the result of BERT classifier for a data set.
Usage: python eval_clf.py Data_Name Test_File Predicted_File
Example: python eval_clf.py CnonC CnonC_test.txt Output_CnonC/test_results.tsv
sys.argv: ['eval_clf_lis.py', 'lis', 'data/LIS_test.txt', 'Output_LIS/test_results.tsv']
E.資訊服務與使用者研究 147
H.數位典藏與數位學習研究 110
I.資訊與社會 96
G.資訊系統與檢索 93
F.圖書館與資訊服務機構管理 92
J.資訊計量學 58
C.館藏發展 44
B.圖書資訊學教育 35
D.資訊與知識組織 27
A.圖書資訊學理論與發展 5
Name: 0, dtype: int64
From 'data/LIS_test.txt' file: df0
Label_List='['E.資訊服務與使用者研究', 'H.數位典藏與數位學習研究', 'I.資訊與社會', 'G.資訊系統與檢索', 'F.圖書館與資訊服務機構管理', 'J.資訊計量學', 'C.館藏發展', 'B.圖書資訊學教育', 'D.資訊與知識組織', 'A.圖書資訊學理論與發展']'
Number of Labels: 10
From copy of get_labels() in BERT's run_classifier.py
Label_List='['A.圖書資訊學理論與發展', 'B.圖書資訊學教育', 'C.館藏發展', 'D.資訊與知識組織', 'E.資訊服務與使用者研究', 'F.圖書館與資訊服務機構管理', 'G.資訊系統與檢索', 'H.數位典藏與數位學習研究', 'I.資訊與社會', 'J.資訊計量學']'
Number of Labels:10
Num of Classes (Categories or Labels): 10
<class 'numpy.ndarray'> Label Names [:2]: ['A.圖書資訊學理論與發展' 'B.圖書資訊學教育']
Label Names transformed[:2]: [0 1]
Label inverse transform [0, 1]: ['A.圖書資訊學理論與發展' 'B.圖書資訊學教育']
MicroF1 = 0.6011, MacroF1=0.4981
Precision Recall F1 Support
Micro 0.6011 0.6011 0.6011 None
Macro 0.5495 0.4898 0.4981 None
[[ 0 1 0 0 0 1 0 3 0 0]
[ 1 9 0 1 3 8 1 5 6 1]
[ 3 0 15 2 0 15 2 1 5 1]
[ 1 0 0 10 2 0 2 7 1 4]
[ 0 7 0 2 96 3 6 18 13 2]
[ 1 1 1 1 6 65 7 2 8 0]
[ 0 0 0 2 9 7 50 17 3 5]
[ 0 1 0 0 6 1 3 97 1 1]
[ 3 2 0 2 8 3 6 14 41 17]
[ 5 1 0 0 1 0 3 2 4 42]]
precision recall f1-score support
A.圖書資訊學理論與發展 0.0000 0.0000 0.0000 5
B.圖書資訊學教育 0.4091 0.2571 0.3158 35
C.館藏發展 0.9375 0.3409 0.5000 44
D.資訊與知識組織 0.5000 0.3704 0.4255 27
E.資訊服務與使用者研究 0.7328 0.6531 0.6906 147
F.圖書館與資訊服務機構管理 0.6311 0.7065 0.6667 92
G.資訊系統與檢索 0.6250 0.5376 0.5780 93
H.數位典藏與數位學習研究 0.5843 0.8818 0.7029 110
I.資訊與社會 0.5000 0.4271 0.4607 96
J.資訊計量學 0.5753 0.7241 0.6412 58
accuracy 0.6011 707
macro avg 0.5495 0.4899 0.4981 707
weighted avg 0.6204 0.6011 0.5939 707
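###Markdown
The micro/macro F1 values, confusion matrix, and per-class report printed above can be reproduced with scikit-learn. The sketch below is added for reference and is not part of the original notebook; `y_true` and `y_pred` are assumed to hold the gold and predicted category labels of the 707 test items.
###Code
from sklearn.metrics import classification_report, confusion_matrix, f1_score

print("MicroF1 = %.4f, MacroF1 = %.4f" % (f1_score(y_true, y_pred, average='micro'),
                                          f1_score(y_true, y_pred, average='macro')))
print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, digits=4))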
|
2021_03_01_get_data_from_pdf_practice.ipynb | ###Markdown
How to batch-convert table data in PDF files to CSV with Python - Qiita https://qiita.com/risako_/items/0c625a6bcb1cd80cf259 tabula_example.ipynb - Colaboratory https://colab.research.google.com/github/chezou/tabula-py/blob/master/examples/tabula_example.ipynbscrollTo=q6FGPenCluQz---- Setup
###Code
!pip install -q tabula-py
import tabula
import pandas as pd
###Output
_____no_output_____
###Markdown
Main
###Code
# This PDF loses data unless it is read with lattice=True
target_year = 2019
pdf_path = 'http://www.eiren.org/toukei/img/eiren_kosyu/data_' + str(target_year) + '.pdf'
df = tabula.read_pdf(pdf_path, lattice=True)[0]
print(df.columns)
# Rename columns
df = df.rename(columns={'順位': 'rank', '公開月': 'release_date', '作 品 名': 'title', '興収(単位:億円)': 'box_office', '配給会社': 'agency'})
df.head()
df.set_index('rank', inplace=True)
df.head()
# Convert box-office revenue to millions of yen so it is easier to match against financial statements
df['box_office'] = df['box_office'].apply(lambda x: x * 100).apply(int)
df.head()
df['release_year'] = list(pd.Series(df.index).apply(lambda x: target_year))
df['release_month'] = list(pd.Series(df.index).apply(lambda x: 0))
for index, row in df.iterrows():
date = row['release_date'].split('/')
if len(date) > 1:
df.iat[index-1, 4] = int(date[0]) + 2000
df.iat[index-1, 5] = date[-1].replace('月', '')
df.drop(columns=['release_date'], inplace=True)
df = df.reindex(columns=['release_year', 'release_month', 'title', 'box_office', 'agency'])
df.head()
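###Output
_____no_output_____
###Markdown
The article this notebook follows ends by writing the cleaned table out to CSV. A minimal closing step is sketched below; it is not in the original notebook and the output file name is my own choice.
###Code
df.to_csv('box_office_{}.csv'.format(target_year), encoding='utf-8-sig')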
###Output
_____no_output_____ |
other/Create MFC mask.ipynb | ###Markdown
Here, I'm going to create the mask that defines MFC for further analysis. First, I load a cerebral cortex probability map from the Harvard-Oxford atlas
###Code
cortex = nib.load('cerbcort.nii.gz')
# Binarize
cortex = nib.Nifti1Image((cortex.get_data() > 0).astype('int'), cortex.get_header().get_best_affine())
niplt.plot_roi(cortex)
###Output
_____no_output_____
###Markdown
Next, I use meshgrid to mask voxels based on location. Importantly, we use nibabel's affine to convert an MNI coordinate (e.g. 10, 0, 0), to voxel space.
###Code
i, j, k = np.meshgrid(*map(np.arange, cortex.get_data().shape), indexing='ij')
# Maximum left and right X coordinates
X_l = nib.affines.apply_affine(np.linalg.inv(cortex.get_affine()), [-10, 0, 0])[0]
X_r = nib.affines.apply_affine(np.linalg.inv(cortex.get_affine()), [10, 0, 0])[0]
# Maximum Y and Z coordinates
Y = nib.affines.apply_affine(np.linalg.inv(cortex.get_affine()), [0, -22, 0])[1]
Z = nib.affines.apply_affine(np.linalg.inv(cortex.get_affine()), [0, 0, -32])[2]
###Output
_____no_output_____
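###Markdown
As a side illustration of the affine logic used above (added here; not part of the original notebook), the same inverse-affine call can be wrapped in a small helper. The function name is my own and `cerbcort.nii.gz` is simply reused:
###Code
import numpy as np
import nibabel as nib

def mni_to_voxel(img, mni_coord):
    # Map an MNI-space coordinate to voxel indices via the image's inverse affine
    return nib.affines.apply_affine(np.linalg.inv(img.affine), mni_coord)

cortex_img = nib.load('cerbcort.nii.gz')
print(mni_to_voxel(cortex_img, [10, 0, 0]))   # voxel index of MNI x = +10
print(mni_to_voxel(cortex_img, [0, -22, 0]))  # voxel index of MNI y = -22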
###Markdown
Finally, we use the voxel space coordinates to mask the 30% Harvard-Oxford cortical mask, and binarize it
###Code
## Exclude lateral
cortex.get_data()[
np.where((i < X_r) |
(i > X_l))] = 0
# Exclude posterior
cortex.get_data()[
np.where(j < Y)] = 0
## Exclude ventral
cortex.get_data()[
np.where(k < Z)] = 0
# Binarize
cortex.get_data()[cortex.get_data() < 1] = 0
cortex.get_data()[cortex.get_data() >= 1] = 1
niplt.plot_roi(cortex)
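# (Added sketch, not in the original notebook) the final mask could be saved for
# reuse; the file name is an assumption:
# nib.save(cortex, 'mfc_mask.nii.gz')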
###Output
_____no_output_____ |
teams/team_lynx/OpenCV/Face_Detection/CU1.ipynb | ###Markdown
Now, every face will be cropped and saved
###Code
# In this cell, some libraries were imported
import cv2
import sys
import os
from PIL import Image, ImageDraw
import pylab
import time
# Face Detection Function
def detectFaces(image_name):
print ("Face Detection Start.")
# Read the image and convert to gray to reduce the data
img = cv2.imread(image_name)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)#Color => Gray
# The pre-trained Haar cascade classifier is used to detect faces
#face_cascade = cv2.CascadeClassifier("/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml")
faces = face_cascade.detectMultiScale(gray, 1.2, 5) # 1.2 is the scale factor, 5 is the minNeighbors parameter
result = []
for (x,y,width,height) in faces:
result.append((x,y,x+width,y+height))
print ("Face Detection Complete.")
return result
#Crop faces and save them in the same directory
filepath ="/home/xilinx/jupyter_notebooks/OpenCV/Face_Detection/images/"
dir_path ="/home/xilinx/jupyter_notebooks/OpenCV/Face_Detection/"
filecount = len(os.listdir(filepath))-1
image_count = 1#count is the number of images
face_cascade = cv2.CascadeClassifier("/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml")
for fn in os.listdir(filepath): # fn is the file name
start = time.time()
if image_count <= filecount:
image_name = str(image_count) + '.JPG'
image_path = filepath + image_name
image_new = dir_path + image_name
#print (image_path)
#print (image_new)
os.system('cp '+(image_path)+ (' /home/xilinx/jupyter_notebooks/OpenCV/Face_Detection/'))
faces = detectFaces(image_name)
if not faces:
print ("Error to detect face")
if faces:
#All cropped face images will be saved in a subdirectory
face_name = image_name.split('.')[0]
#os.mkdir(save_dir)
count = 0
for (x1,y1,x2,y2) in faces:
file_name = os.path.join(dir_path,face_name+str(count)+".jpg")
Image.open(image_name).crop((x1,y1,x2,y2)).save(file_name)
#os.system('rm -rf '+(image_path)+' /home/xilinx/jupyter_notebooks/OpenCV/Face_Detection/')
count+=1
os.system('rm -rf '+(image_new))
print("The " + str(image_count) +" image were done.")
print("Congratulation! The total of the " + str(count) + " faces in the " +str(image_count) + " image.")
end = time.time()
TimeSpan = end - start
if image_count <= filecount:
print ("The time of " + str(image_count) + " image is " +str(TimeSpan) + " s.")
image_count = image_count + 1
import numpy as np
import cv2
from matplotlib import pyplot as plt
import pylab
# Initiate ORB detector
orb = cv2.ORB_create()
# Read as grayscale (cv2.COLOR_BGR2GRAY is a cvtColor code, not an imread flag)
img1 = cv2.imread('/home/xilinx/jupyter_notebooks/OpenCV/Face_Detection/20.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('/home/xilinx/jupyter_notebooks/OpenCV/Face_Detection/30.jpg', cv2.IMREAD_GRAYSCALE)
#plt.imshow(img1),plt.show()
#plt.imshow(img2),plt.show()
brisk = cv2.BRISK_create()
(kpt1, desc1) = brisk.detectAndCompute(img1, None)
bk_img1 = img1.copy()
out_img1 = img1.copy()
out_img1 = cv2.drawKeypoints(bk_img1, kpt1, out_img1)
plt.imshow(out_img1),plt.show()
(kpt2, desc2) = brisk.detectAndCompute(img2, None)
bk_img2 = img2.copy()
out_img2 = img2.copy()
out_img2 = cv2.drawKeypoints(bk_img2, kpt2, out_img2)
plt.imshow(out_img2),plt.show()
# Feature point matching
matcher = cv2.BFMatcher()
matches = matcher.match(desc1, desc2)
print(matches)
matches = sorted(matches, key = lambda x:x.distance)
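# (Added note, not in the original notebook) BRISK produces binary descriptors,
# so a Hamming-distance matcher is the more usual choice, e.g.:
# matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
# matches = sorted(matcher.match(desc1, desc2), key=lambda m: m.distance)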
def drawMatches(img1, kp1, img2, kp2, matches):
"""
My own implementation of cv2.drawMatches as OpenCV 2.4.9
does not have this function available but it's supported in
OpenCV 3.0.0
This function takes in two images with their associated
keypoints, as well as a list of DMatch data structure (matches)
that contains which keypoints matched in which images.
An image will be produced where a montage is shown with
the first image followed by the second image beside it.
Keypoints are delineated with circles, while lines are connected
between matching keypoints.
img1,img2 - Grayscale images
kp1,kp2 - Detected list of keypoints through any of the OpenCV keypoint
detection algorithms
matches - A list of matches of corresponding keypoints through any
OpenCV keypoint matching algorithm
"""
# Create a new output image that concatenates the two images together
# (a.k.a) a montage
rows1 = img1.shape[0]
cols1 = img1.shape[1]
rows2 = img2.shape[0]
cols2 = img2.shape[1]
out = np.zeros((max([rows1,rows2]),cols1+cols2,3), dtype='uint8')
# Place the first image to the left
out[:rows1,:cols1] = np.dstack([img1, img1, img1])
# Place the next image to the right of it
out[:rows2,cols1:] = np.dstack([img2, img2, img2])
# For each pair of points we have between both images
# draw circles, then connect a line between them
for mat in matches:
# Get the matching keypoints for each of the images
img1_idx = mat.queryIdx
img2_idx = mat.trainIdx
# x - columns
# y - rows
(x1,y1) = kp1[img1_idx].pt
(x2,y2) = kp2[img2_idx].pt
# Draw a small circle at both co-ordinates
# radius 4
# colour blue
# thickness = 1
cv2.circle(out, (int(x1),int(y1)), 4, (255, 0, 0), 1)
cv2.circle(out, (int(x2)+cols1,int(y2)), 4, (255, 0, 0), 1)
# Draw a line in between the two points
# thickness = 1
# colour blue
cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (255, 0, 0), 1)
return out
# Draw first 10 matches.
print (len(matches))
img3 = drawMatches(img1,kpt1,img2,kpt2,matches[:2])
plt.imshow(img3),plt.show()
###Output
_____no_output_____ |
Sequence-Models/Week-3/Neural+machine+translation+with+attention+-+v4.ipynb | ###Markdown
Neural Machine TranslationWelcome to your first programming assignment for this week! You will build a Neural Machine Translation (NMT) model to translate human readable dates ("25th of June, 2009") into machine readable dates ("2009-06-25"). You will do this using an attention model, one of the most sophisticated sequence to sequence models. This notebook was produced together with NVIDIA's Deep Learning Institute. Let's load all the packages you will need for this assignment.
###Code
from keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply
from keras.layers import RepeatVector, Dense, Activation, Lambda
from keras.optimizers import Adam
from keras.utils import to_categorical
from keras.models import load_model, Model
import keras.backend as K
import numpy as np
from faker import Faker
import random
from tqdm import tqdm
from babel.dates import format_date
from nmt_utils import *
import matplotlib.pyplot as plt
%matplotlib inline
###Output
Using TensorFlow backend.
###Markdown
1 - Translating human readable dates into machine readable datesThe model you will build here could be used to translate from one language to another, such as translating from English to Hindi. However, language translation requires massive datasets and usually takes days of training on GPUs. To give you a place to experiment with these models even without using massive datasets, we will instead use a simpler "date translation" task. The network will input a date written in a variety of possible formats (*e.g. "the 29th of August 1958", "03/30/1968", "24 JUNE 1987"*) and translate them into standardized, machine readable dates (*e.g. "1958-08-29", "1968-03-30", "1987-06-24"*). We will have the network learn to output dates in the common machine-readable format YYYY-MM-DD. <!-- Take a look at [nmt_utils.py](./nmt_utils.py) to see all the formatting. Count and figure out how the formats work, you will need this knowledge later. !--> 1.1 - DatasetWe will train the model on a dataset of 10000 human readable dates and their equivalent, standardized, machine readable dates. Let's run the following cells to load the dataset and print some examples.
###Code
m = 10000
dataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m)
dataset[:10]
###Output
_____no_output_____
###Markdown
You've loaded:- `dataset`: a list of tuples of (human readable date, machine readable date)- `human_vocab`: a python dictionary mapping all characters used in the human readable dates to an integer-valued index - `machine_vocab`: a python dictionary mapping all characters used in machine readable dates to an integer-valued index. These indices are not necessarily consistent with `human_vocab`. - `inv_machine_vocab`: the inverse dictionary of `machine_vocab`, mapping from indices back to characters. Let's preprocess the data and map the raw text data into the index values. We will also use Tx=30 (which we assume is the maximum length of the human readable date; if we get a longer input, we would have to truncate it) and Ty=10 (since "YYYY-MM-DD" is 10 characters long).
###Code
Tx = 30
Ty = 10
X, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty)
print("X.shape:", X.shape)
print("Y.shape:", Y.shape)
print("Xoh.shape:", Xoh.shape)
print("Yoh.shape:", Yoh.shape)
###Output
X.shape: (10000, 30)
Y.shape: (10000, 10)
Xoh.shape: (10000, 30, 37)
Yoh.shape: (10000, 10, 11)
###Markdown
You now have:- `X`: a processed version of the human readable dates in the training set, where each character is replaced by an index mapped to the character via `human_vocab`. Each date is further padded to $T_x$ values with a special padding character. `X.shape = (m, Tx)`- `Y`: a processed version of the machine readable dates in the training set, where each character is replaced by the index it is mapped to in `machine_vocab`. You should have `Y.shape = (m, Ty)`. - `Xoh`: one-hot version of `X`, the "1" entry's index is mapped to the character thanks to `human_vocab`. `Xoh.shape = (m, Tx, len(human_vocab))`- `Yoh`: one-hot version of `Y`, the "1" entry's index is mapped to the character thanks to `machine_vocab`. `Yoh.shape = (m, Ty, len(machine_vocab))`. Here, `len(machine_vocab) = 11` since there are 11 characters ('-' as well as 0-9). Let's also look at some examples of preprocessed training examples. Feel free to play with `index` in the cell below to navigate the dataset and see how source/target dates are preprocessed.
###Code
index = 0
print("Source date:", dataset[index][0])
print("Target date:", dataset[index][1])
print()
print("Source after preprocessing (indices):", X[index])
print("Target after preprocessing (indices):", Y[index])
print()
print("Source after preprocessing (one-hot):", Xoh[index])
print("Target after preprocessing (one-hot):", Yoh[index])
###Output
Source date: 9 may 1998
Target date: 1998-05-09
Source after preprocessing (indices): [12 0 24 13 34 0 4 12 12 11 36 36 36 36 36 36 36 36 36 36 36 36 36 36 36
36 36 36 36 36]
Target after preprocessing (indices): [ 2 10 10 9 0 1 6 0 1 10]
Source after preprocessing (one-hot): [[ 0. 0. 0. ..., 0. 0. 0.]
[ 1. 0. 0. ..., 0. 0. 0.]
[ 0. 0. 0. ..., 0. 0. 0.]
...,
[ 0. 0. 0. ..., 0. 0. 1.]
[ 0. 0. 0. ..., 0. 0. 1.]
[ 0. 0. 0. ..., 0. 0. 1.]]
Target after preprocessing (one-hot): [[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
[ 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
###Markdown
2 - Neural machine translation with attentionIf you had to translate a book's paragraph from French to English, you would not read the whole paragraph, then close the book and translate. Even during the translation process, you would read/re-read and focus on the parts of the French paragraph corresponding to the parts of the English you are writing down. The attention mechanism tells a Neural Machine Translation model where it should pay attention at any step. 2.1 - Attention mechanismIn this part, you will implement the attention mechanism presented in the lecture videos. Here is a figure to remind you how the model works. The diagram on the left shows the attention model. The diagram on the right shows what one "Attention" step does to calculate the attention variables $\alpha^{\langle t, t' \rangle}$, which are used to compute the context variable $context^{\langle t \rangle}$ for each timestep in the output ($t=1, \ldots, T_y$). **Figure 1**: Neural machine translation with attention Here are some properties of the model that you may notice: - There are two separate LSTMs in this model (see diagram on the left). Because the one at the bottom of the picture is a Bi-directional LSTM and comes *before* the attention mechanism, we will call it the *pre-attention* Bi-LSTM. The LSTM at the top of the diagram comes *after* the attention mechanism, so we will call it the *post-attention* LSTM. The pre-attention Bi-LSTM goes through $T_x$ time steps; the post-attention LSTM goes through $T_y$ time steps. - The post-attention LSTM passes $s^{\langle t \rangle}, c^{\langle t \rangle}$ from one time step to the next. In the lecture videos, we were using only a basic RNN for the post-activation sequence model, so the state it carried was just the RNN output activation $s^{\langle t\rangle}$. But since we are using an LSTM here, the LSTM has both the output activation $s^{\langle t\rangle}$ and the hidden cell state $c^{\langle t\rangle}$. However, unlike previous text generation examples (such as Dinosaurus in week 1), in this model the post-activation LSTM at time $t$ will not take the specific generated $y^{\langle t-1 \rangle}$ as input; it only takes $s^{\langle t\rangle}$ and $c^{\langle t\rangle}$ as input. We have designed the model this way, because (unlike language generation where adjacent characters are highly correlated) there isn't as strong a dependency between the previous character and the next character in a YYYY-MM-DD date. - We use $a^{\langle t \rangle} = [\overrightarrow{a}^{\langle t \rangle}; \overleftarrow{a}^{\langle t \rangle}]$ to represent the concatenation of the activations of both the forward and backward directions of the pre-attention Bi-LSTM. - The diagram on the right uses a `RepeatVector` node to copy $s^{\langle t-1 \rangle}$'s value $T_x$ times, and then `Concatenation` to concatenate $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$ to compute $e^{\langle t, t' \rangle}$, which is then passed through a softmax to compute $\alpha^{\langle t, t' \rangle}$. We'll explain how to use `RepeatVector` and `Concatenation` in Keras below. Let's implement this model.
You will start by implementing two functions: `one_step_attention()` and `model()`.**1) `one_step_attention()`**: At step $t$, given all the hidden states of the Bi-LSTM ($[a^{\langle 1 \rangle},a^{\langle 2 \rangle}, ..., a^{\langle T_x \rangle}]$) and the previous hidden state of the second LSTM ($s^{\langle t-1 \rangle}$), `one_step_attention()` will compute the attention weights ($[\alpha^{\langle t,1 \rangle},\alpha^{\langle t,2 \rangle}, ..., \alpha^{\langle t,T_x \rangle}]$) and output the context vector (see Figure 1 (right) for details):$$context^{\langle t \rangle} = \sum_{t' = 1}^{T_x} \alpha^{\langle t,t' \rangle}a^{\langle t' \rangle}\tag{1}$$ Note that we are denoting the attention in this notebook $context^{\langle t \rangle}$. In the lecture videos, the context was denoted $c^{\langle t \rangle}$, but here we are calling it $context^{\langle t \rangle}$ to avoid confusion with the (post-attention) LSTM's internal memory cell variable, which is sometimes also denoted $c^{\langle t \rangle}$. **2) `model()`**: Implements the entire model. It first runs the input through a Bi-LSTM to get back $[a^{\langle 1 \rangle},a^{\langle 2 \rangle}, ..., a^{\langle T_x \rangle}]$. Then, it calls `one_step_attention()` $T_y$ times (`for` loop). At each iteration of this loop, it gives the computed context vector $context^{\langle t \rangle}$ to the second LSTM, and runs the output of the LSTM through a dense layer with softmax activation to generate a prediction $\hat{y}^{\langle t \rangle}$. **Exercise**: Implement `one_step_attention()`. The function `model()` will call the layers in `one_step_attention()` $T_y$ times using a for-loop, and it is important that all $T_y$ copies have the same weights. I.e., it should not re-initialize the weights every time. In other words, all $T_y$ steps should have shared weights. Here's how you can implement layers with shareable weights in Keras:1. Define the layer objects (as global variables, for example).2. Call these objects when propagating the input.We have defined the layers you need as global variables. Please run the following cells to create them. Please check the Keras documentation to make sure you understand what these layers are: [RepeatVector()](https://keras.io/layers/core/repeatvector), [Concatenate()](https://keras.io/layers/merge/concatenate), [Dense()](https://keras.io/layers/core/dense), [Activation()](https://keras.io/layers/core/activation), [Dot()](https://keras.io/layers/merge/dot).
###Code
# Defined shared layers as global variables
repeator = RepeatVector(Tx)
concatenator = Concatenate(axis=-1)
densor1 = Dense(10, activation = "tanh")
densor2 = Dense(1, activation = "relu")
activator = Activation(softmax, name='attention_weights') # We are using a custom softmax(axis = 1) loaded in this notebook
dotor = Dot(axes = 1)
###Output
_____no_output_____
###Markdown
Now you can use these layers to implement `one_step_attention()`. In order to propagate a Keras tensor object X through one of these layers, use `layer(X)` (or `layer([X,Y])` if it requires multiple inputs), e.g. `densor2(X)` will propagate X through the `Dense(1)` layer defined above.
###Code
# GRADED FUNCTION: one_step_attention
def one_step_attention(a, s_prev):
"""
Performs one step of attention: Outputs a context vector computed as a dot product of the attention weights
"alphas" and the hidden states "a" of the Bi-LSTM.
Arguments:
a -- hidden state output of the Bi-LSTM, numpy-array of shape (m, Tx, 2*n_a)
s_prev -- previous hidden state of the (post-attention) LSTM, numpy-array of shape (m, n_s)
Returns:
context -- context vector, input of the next (post-attention) LSTM cell
"""
### START CODE HERE ###
# Use repeator to repeat s_prev to be of shape (m, Tx, n_s) so that you can concatenate it with all hidden states "a" (≈ 1 line)
s_prev = repeator(s_prev)
# Use concatenator to concatenate a and s_prev on the last axis (≈ 1 line)
concat = concatenator([a,s_prev])
# Use densor1 to propagate concat through a small fully-connected neural network to compute the "intermediate energies" variable e. (≈1 lines)
e = densor1(concat)
# Use densor2 to propagate e through a small fully-connected neural network to compute the "energies" variable energies. (≈1 lines)
energies = densor2(e)
# Use "activator" on "energies" to compute the attention weights "alphas" (≈ 1 line)
alphas = activator(energies)
# Use dotor together with "alphas" and "a" to compute the context vector to be given to the next (post-attention) LSTM-cell (≈ 1 line)
context = dotor([alphas,a])
### END CODE HERE ###
return context
###Output
_____no_output_____
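###Markdown
As a quick numerical illustration of equation (1) (added here; not part of the original assignment), the context vector is simply an attention-weighted sum of the Bi-LSTM hidden states. The variable names below are my own and the numbers are random:
###Code
import numpy as np

m_demo, Tx_demo, n_a_demo = 1, 30, 32
a_demo = np.random.rand(m_demo, Tx_demo, 2 * n_a_demo)                    # Bi-LSTM states a^<t'>
e_demo = np.random.rand(m_demo, Tx_demo, 1)                               # "energies"
alphas_demo = np.exp(e_demo) / np.exp(e_demo).sum(axis=1, keepdims=True)  # softmax over Tx
context_demo = np.sum(alphas_demo * a_demo, axis=1, keepdims=True)        # equation (1)
print(context_demo.shape)  # (1, 1, 64), matching dot_1's output shape in the model summary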
###Markdown
You will be able to check the expected output of `one_step_attention()` after you've coded the `model()` function. **Exercise**: Implement `model()` as explained in figure 2 and the text above. Again, we have defined global layers that will share weights to be used in `model()`.
###Code
n_a = 32
n_s = 64
post_activation_LSTM_cell = LSTM(n_s, return_state = True)
output_layer = Dense(len(machine_vocab), activation=softmax)
###Output
_____no_output_____
###Markdown
Now you can use these layers $T_y$ times in a `for` loop to generate the outputs, and their parameters will not be reinitialized. You will have to carry out the following steps: 1. Propagate the input into a [Bidirectional](https://keras.io/layers/wrappers/bidirectional) [LSTM](https://keras.io/layers/recurrent/lstm)2. Iterate for $t = 0, \dots, T_y-1$: 1. Call `one_step_attention()` on $[a^{\langle 1 \rangle},a^{\langle 2 \rangle}, ..., a^{\langle T_x \rangle}]$ and $s^{\langle t-1 \rangle}$ to get the context vector $context^{\langle t \rangle}$. 2. Give $context^{\langle t \rangle}$ to the post-attention LSTM cell. Remember to pass in the previous hidden-state $s^{\langle t-1\rangle}$ and cell-state $c^{\langle t-1\rangle}$ of this LSTM using `initial_state= [previous hidden state, previous cell state]`. Get back the new hidden state $s^{\langle t \rangle}$ and the new cell state $c^{\langle t \rangle}$. 3. Apply a softmax layer to $s^{\langle t \rangle}$, get the output. 4. Save the output by adding it to the list of outputs.3. Create your Keras model instance, it should have three inputs ("inputs", $s^{\langle 0 \rangle}$ and $c^{\langle 0 \rangle}$) and output the list of "outputs".
###Code
# GRADED FUNCTION: model
def model(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):
"""
Arguments:
Tx -- length of the input sequence
Ty -- length of the output sequence
n_a -- hidden state size of the Bi-LSTM
n_s -- hidden state size of the post-attention LSTM
human_vocab_size -- size of the python dictionary "human_vocab"
machine_vocab_size -- size of the python dictionary "machine_vocab"
Returns:
model -- Keras model instance
"""
# Define the inputs of your model with a shape (Tx,)
# Define s0 and c0, initial hidden state for the decoder LSTM of shape (n_s,)
X = Input(shape=(Tx, human_vocab_size))
s0 = Input(shape=(n_s,), name='s0')
c0 = Input(shape=(n_s,), name='c0')
s = s0
c = c0
# Initialize empty list of outputs
outputs = []
### START CODE HERE ###
# Step 1: Define your pre-attention Bi-LSTM. Remember to use return_sequences=True. (≈ 1 line)
a = Bidirectional(LSTM(n_a, return_sequences=True))(X)
# Step 2: Iterate for Ty steps
for t in range(Ty):
# Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line)
context = one_step_attention(a, s)
# Step 2.B: Apply the post-attention LSTM cell to the "context" vector.
# Don't forget to pass: initial_state = [hidden state, cell state] (≈ 1 line)
s, _, c = post_activation_LSTM_cell(context,initial_state=[s, c])
# Step 2.C: Apply Dense layer to the hidden state output of the post-attention LSTM (≈ 1 line)
out = output_layer(s)
# Step 2.D: Append "out" to the "outputs" list (≈ 1 line)
outputs.append(out)
# Step 3: Create model instance taking three inputs and returning the list of outputs. (≈ 1 line)
model = Model(inputs=[X, s0, c0], outputs=outputs)
### END CODE HERE ###
return model
###Output
_____no_output_____
###Markdown
Run the following cell to create your model.
###Code
model = model(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab))
###Output
_____no_output_____
###Markdown
Let's get a summary of the model to check if it matches the expected output.
###Code
model.summary()
###Output
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 30, 37) 0
____________________________________________________________________________________________________
s0 (InputLayer) (None, 64) 0
____________________________________________________________________________________________________
bidirectional_1 (Bidirectional) (None, 30, 64) 17920 input_1[0][0]
____________________________________________________________________________________________________
repeat_vector_1 (RepeatVector) (None, 30, 64) 0 s0[0][0]
lstm_1[0][0]
lstm_1[1][0]
lstm_1[2][0]
lstm_1[3][0]
lstm_1[4][0]
lstm_1[5][0]
lstm_1[6][0]
lstm_1[7][0]
lstm_1[8][0]
____________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 30, 128) 0 bidirectional_1[0][0]
repeat_vector_1[0][0]
bidirectional_1[0][0]
repeat_vector_1[1][0]
bidirectional_1[0][0]
repeat_vector_1[2][0]
bidirectional_1[0][0]
repeat_vector_1[3][0]
bidirectional_1[0][0]
repeat_vector_1[4][0]
bidirectional_1[0][0]
repeat_vector_1[5][0]
bidirectional_1[0][0]
repeat_vector_1[6][0]
bidirectional_1[0][0]
repeat_vector_1[7][0]
bidirectional_1[0][0]
repeat_vector_1[8][0]
bidirectional_1[0][0]
repeat_vector_1[9][0]
____________________________________________________________________________________________________
dense_1 (Dense) (None, 30, 10) 1290 concatenate_1[0][0]
concatenate_1[1][0]
concatenate_1[2][0]
concatenate_1[3][0]
concatenate_1[4][0]
concatenate_1[5][0]
concatenate_1[6][0]
concatenate_1[7][0]
concatenate_1[8][0]
concatenate_1[9][0]
____________________________________________________________________________________________________
dense_2 (Dense) (None, 30, 1) 11 dense_1[0][0]
dense_1[1][0]
dense_1[2][0]
dense_1[3][0]
dense_1[4][0]
dense_1[5][0]
dense_1[6][0]
dense_1[7][0]
dense_1[8][0]
dense_1[9][0]
____________________________________________________________________________________________________
attention_weights (Activation) (None, 30, 1) 0 dense_2[0][0]
dense_2[1][0]
dense_2[2][0]
dense_2[3][0]
dense_2[4][0]
dense_2[5][0]
dense_2[6][0]
dense_2[7][0]
dense_2[8][0]
dense_2[9][0]
____________________________________________________________________________________________________
dot_1 (Dot) (None, 1, 64) 0 attention_weights[0][0]
bidirectional_1[0][0]
attention_weights[1][0]
bidirectional_1[0][0]
attention_weights[2][0]
bidirectional_1[0][0]
attention_weights[3][0]
bidirectional_1[0][0]
attention_weights[4][0]
bidirectional_1[0][0]
attention_weights[5][0]
bidirectional_1[0][0]
attention_weights[6][0]
bidirectional_1[0][0]
attention_weights[7][0]
bidirectional_1[0][0]
attention_weights[8][0]
bidirectional_1[0][0]
attention_weights[9][0]
bidirectional_1[0][0]
____________________________________________________________________________________________________
c0 (InputLayer) (None, 64) 0
____________________________________________________________________________________________________
lstm_1 (LSTM) [(None, 64), (None, 6 33024 dot_1[0][0]
s0[0][0]
c0[0][0]
dot_1[1][0]
lstm_1[0][0]
lstm_1[0][2]
dot_1[2][0]
lstm_1[1][0]
lstm_1[1][2]
dot_1[3][0]
lstm_1[2][0]
lstm_1[2][2]
dot_1[4][0]
lstm_1[3][0]
lstm_1[3][2]
dot_1[5][0]
lstm_1[4][0]
lstm_1[4][2]
dot_1[6][0]
lstm_1[5][0]
lstm_1[5][2]
dot_1[7][0]
lstm_1[6][0]
lstm_1[6][2]
dot_1[8][0]
lstm_1[7][0]
lstm_1[7][2]
dot_1[9][0]
lstm_1[8][0]
lstm_1[8][2]
____________________________________________________________________________________________________
dense_3 (Dense) (None, 11) 715 lstm_1[0][0]
lstm_1[1][0]
lstm_1[2][0]
lstm_1[3][0]
lstm_1[4][0]
lstm_1[5][0]
lstm_1[6][0]
lstm_1[7][0]
lstm_1[8][0]
lstm_1[9][0]
====================================================================================================
Total params: 52,960
Trainable params: 52,960
Non-trainable params: 0
____________________________________________________________________________________________________
###Markdown
**Expected Output**:Here is the summary you should see **Total params:** 52,960 **Trainable params:** 52,960 **Non-trainable params:** 0 **bidirectional_1's output shape ** (None, 30, 64) **repeat_vector_1's output shape ** (None, 30, 64) **concatenate_1's output shape ** (None, 30, 128) **attention_weights's output shape ** (None, 30, 1) **dot_1's output shape ** (None, 1, 64) **dense_3's output shape ** (None, 11) As usual, after creating your model in Keras, you need to compile it and define what loss, optimizer and metrics you want to use. Compile your model using `categorical_crossentropy` loss, a custom [Adam](https://keras.io/optimizers/adam) [optimizer](https://keras.io/optimizers/usage-of-optimizers) (`learning rate = 0.005`, $\beta_1 = 0.9$, $\beta_2 = 0.999$, `decay = 0.01`) and `['accuracy']` metrics:
###Code
### START CODE HERE ### (≈2 lines)
opt = Adam(lr=0.005, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
The last step is to define all your inputs and outputs to fit the model:- You already have X of shape $(m = 10000, T_x = 30)$ containing the training examples.- You need to create `s0` and `c0` to initialize your `post_activation_LSTM_cell` with 0s.- Given the `model()` you coded, you need the "outputs" to be a list of $T_y = 10$ elements, each of shape $(m, 11)$, so that `outputs[j]` holds the one-hot true labels of the $j^{th}$ output character for all $m$ training examples; in other words, `outputs[j][i]` is the true label of the $j^{th}$ character of the $i^{th}$ training example (`X[i]`).
###Code
s0 = np.zeros((m, n_s))
c0 = np.zeros((m, n_s))
outputs = list(Yoh.swapaxes(0,1))
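# (Added check, not part of the original assignment) the list built above should
# have Ty = 10 elements, each of shape (m, len(machine_vocab)) = (10000, 11):
print(len(outputs), outputs[0].shape)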
###Output
_____no_output_____
###Markdown
Let's now fit the model and run it for one epoch.
###Code
model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100)
###Output
Epoch 1/1
10000/10000 [==============================] - 35s - loss: 16.5547 - dense_3_loss_1: 1.2459 - dense_3_loss_2: 1.0470 - dense_3_loss_3: 1.7569 - dense_3_loss_4: 2.6673 - dense_3_loss_5: 0.7695 - dense_3_loss_6: 1.2492 - dense_3_loss_7: 2.6438 - dense_3_loss_8: 0.8831 - dense_3_loss_9: 1.7005 - dense_3_loss_10: 2.5915 - dense_3_acc_1: 0.4596 - dense_3_acc_2: 0.6819 - dense_3_acc_3: 0.2973 - dense_3_acc_4: 0.0579 - dense_3_acc_5: 0.9790 - dense_3_acc_6: 0.3633 - dense_3_acc_7: 0.0550 - dense_3_acc_8: 0.9612 - dense_3_acc_9: 0.2160 - dense_3_acc_10: 0.0879
###Markdown
While training you can see the loss as well as the accuracy on each of the 10 positions of the output. The table below gives you an example of what the accuracies could be if the batch had 2 examples: Thus, `dense_2_acc_8: 0.89` means that you are predicting the 7th character of the output correctly 89% of the time in the current batch of data. We have run this model for longer, and saved the weights. Run the next cell to load our weights. (By training a model for several minutes, you should be able to obtain a model of similar accuracy, but loading our model will save you time.)
###Code
model.load_weights('models/model.h5')
###Output
_____no_output_____
###Markdown
You can now see the results on new examples.
###Code
EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001']
for example in EXAMPLES:
source = string_to_int(example, Tx, human_vocab)
source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source))).swapaxes(0,1)
prediction = model.predict([source, s0, c0])
prediction = np.argmax(prediction, axis = -1)
output = [inv_machine_vocab[int(i)] for i in prediction]
print("source:", example)
print("output:", ''.join(output))
###Output
source: 3 May 1979
output: 1979-05-03
source: 5 April 09
output: 2009-05-05
source: 21th of August 2016
output: 2016-08-21
source: Tue 10 Jul 2007
output: 2007-07-10
source: Saturday May 9 2018
output: 2018-05-09
source: March 3 2001
output: 2001-03-03
source: March 3rd 2001
output: 2001-03-03
source: 1 March 2001
output: 2001-03-01
###Markdown
You can also change these examples to test with your own examples. The next part will give you a better sense of what the attention mechanism is doing--i.e., what part of the input the network is paying attention to when generating a particular output character. 3 - Visualizing Attention (Optional / Ungraded)Since the problem has a fixed output length of 10, it is also possible to carry out this task using 10 different softmax units to generate the 10 characters of the output. But one advantage of the attention model is that each part of the output (say the month) knows it needs to depend only on a small part of the input (the characters in the input giving the month). We can visualize what part of the output is looking at what part of the input.Consider the task of translating "Saturday 9 May 2018" to "2018-05-09". If we visualize the computed $\alpha^{\langle t, t' \rangle}$ we get this: **Figure 8**: Full Attention MapNotice how the output ignores the "Saturday" portion of the input. None of the output timesteps are paying much attention to that portion of the input. We see also that 9 has been translated as 09 and May has been correctly translated into 05, with the output paying attention to the parts of the input it needs to make the translation. The year mostly requires it to pay attention to the input's "18" in order to generate "2018." 3.1 - Getting the activations from the networkLet's now visualize the attention values in your network. We'll propagate an example through the network, then visualize the values of $\alpha^{\langle t, t' \rangle}$. To figure out where the attention values are located, let's start by printing a summary of the model.
###Code
model.summary()
###Output
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 30, 37) 0
____________________________________________________________________________________________________
s0 (InputLayer) (None, 64) 0
____________________________________________________________________________________________________
bidirectional_1 (Bidirectional) (None, 30, 64) 17920 input_1[0][0]
____________________________________________________________________________________________________
repeat_vector_1 (RepeatVector) (None, 30, 64) 0 s0[0][0]
lstm_1[0][0]
lstm_1[1][0]
lstm_1[2][0]
lstm_1[3][0]
lstm_1[4][0]
lstm_1[5][0]
lstm_1[6][0]
lstm_1[7][0]
lstm_1[8][0]
____________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 30, 128) 0 bidirectional_1[0][0]
repeat_vector_1[0][0]
bidirectional_1[0][0]
repeat_vector_1[1][0]
bidirectional_1[0][0]
repeat_vector_1[2][0]
bidirectional_1[0][0]
repeat_vector_1[3][0]
bidirectional_1[0][0]
repeat_vector_1[4][0]
bidirectional_1[0][0]
repeat_vector_1[5][0]
bidirectional_1[0][0]
repeat_vector_1[6][0]
bidirectional_1[0][0]
repeat_vector_1[7][0]
bidirectional_1[0][0]
repeat_vector_1[8][0]
bidirectional_1[0][0]
repeat_vector_1[9][0]
____________________________________________________________________________________________________
dense_1 (Dense) (None, 30, 10) 1290 concatenate_1[0][0]
concatenate_1[1][0]
concatenate_1[2][0]
concatenate_1[3][0]
concatenate_1[4][0]
concatenate_1[5][0]
concatenate_1[6][0]
concatenate_1[7][0]
concatenate_1[8][0]
concatenate_1[9][0]
____________________________________________________________________________________________________
dense_2 (Dense) (None, 30, 1) 11 dense_1[0][0]
dense_1[1][0]
dense_1[2][0]
dense_1[3][0]
dense_1[4][0]
dense_1[5][0]
dense_1[6][0]
dense_1[7][0]
dense_1[8][0]
dense_1[9][0]
____________________________________________________________________________________________________
attention_weights (Activation) (None, 30, 1) 0 dense_2[0][0]
dense_2[1][0]
dense_2[2][0]
dense_2[3][0]
dense_2[4][0]
dense_2[5][0]
dense_2[6][0]
dense_2[7][0]
dense_2[8][0]
dense_2[9][0]
____________________________________________________________________________________________________
dot_1 (Dot) (None, 1, 64) 0 attention_weights[0][0]
bidirectional_1[0][0]
attention_weights[1][0]
bidirectional_1[0][0]
attention_weights[2][0]
bidirectional_1[0][0]
attention_weights[3][0]
bidirectional_1[0][0]
attention_weights[4][0]
bidirectional_1[0][0]
attention_weights[5][0]
bidirectional_1[0][0]
attention_weights[6][0]
bidirectional_1[0][0]
attention_weights[7][0]
bidirectional_1[0][0]
attention_weights[8][0]
bidirectional_1[0][0]
attention_weights[9][0]
bidirectional_1[0][0]
____________________________________________________________________________________________________
c0 (InputLayer) (None, 64) 0
____________________________________________________________________________________________________
lstm_1 (LSTM) [(None, 64), (None, 6 33024 dot_1[0][0]
s0[0][0]
c0[0][0]
dot_1[1][0]
lstm_1[0][0]
lstm_1[0][2]
dot_1[2][0]
lstm_1[1][0]
lstm_1[1][2]
dot_1[3][0]
lstm_1[2][0]
lstm_1[2][2]
dot_1[4][0]
lstm_1[3][0]
lstm_1[3][2]
dot_1[5][0]
lstm_1[4][0]
lstm_1[4][2]
dot_1[6][0]
lstm_1[5][0]
lstm_1[5][2]
dot_1[7][0]
lstm_1[6][0]
lstm_1[6][2]
dot_1[8][0]
lstm_1[7][0]
lstm_1[7][2]
dot_1[9][0]
lstm_1[8][0]
lstm_1[8][2]
____________________________________________________________________________________________________
dense_3 (Dense) (None, 11) 715 lstm_1[0][0]
lstm_1[1][0]
lstm_1[2][0]
lstm_1[3][0]
lstm_1[4][0]
lstm_1[5][0]
lstm_1[6][0]
lstm_1[7][0]
lstm_1[8][0]
lstm_1[9][0]
====================================================================================================
Total params: 52,960
Trainable params: 52,960
Non-trainable params: 0
____________________________________________________________________________________________________
###Markdown
Navigate through the output of `model.summary()` above. You can see that the layer named `attention_weights` outputs the `alphas` of shape (m, 30, 1) before `dot_1` computes the context vector for every time step $t = 0, \ldots, T_y-1$. Let's get the activations from this layer.The function `plot_attention_map()` pulls out the attention values from your model and plots them.
###Code
attention_map = plot_attention_map(model, human_vocab, inv_machine_vocab, "Tuesday 09 Oct 1993", num = 7, n_s = 64)
###Output
_____no_output_____ |
CICIDS-2017-LSTM-2-Multiclass.ipynb | ###Markdown
LSTM Binary Multiclass with CICIDS2017 If you are missing the pickle, try running the *Pickle-CICIDS2017.ipynb* Notebook.
###Code
from datetime import datetime
import json
import os
import numpy as np
import pandas as pd
pd.set_option('display.max_columns', None)
###Output
_____no_output_____
###Markdown
Data loading and prep As we've pickled the normalized and encoded dataset, we only need to load these pickles to get the Pandas DataFrames back. **Hint**: If you're missing the pickles, go ahead and run the notebook named *Pickle-CICIDS2017.ipynb*
###Code
def load_df(filename):
filepath = os.path.join('CICIDS2017', filename+'.pkl')
return pd.read_pickle(filepath)
cic_train_data = load_df('cic_train_data')
cic_test_data = load_df('cic_test_data')
cic_train_labels = load_df('cic_train_labels')
cic_test_labels = load_df('cic_test_labels')
cic_train_data.tail()
###Output
_____no_output_____
###Markdown
We only need 6 features, so we create a new DF that only holds them. The mapping is as follows:

| NSL-KDD field | CICIDS2017 field |
|---------------|------------------|
| duration | flow_duration |
| protocol_type | protocol |
| src_bytes | total_fwd_packets |
| dst_bytes | total_backward_packets |
| count | flow_packets_per_s |
| srv_count | destination_port |
###Code
fields = ['flow_duration', 'protocol', 'total_fwd_packets', 'total_backward_packets','flow_packets_per_s','destination_port']
cic_train_data = cic_train_data.filter(fields, axis=1)
cic_test_data = cic_test_data.filter(fields, axis=1)
cic_train_data.tail()
###Output
_____no_output_____
###Markdown
Label Translation As we are doing binary classification, we only need to know if the entry is normal/benign (0) or malicious (1)
###Code
with open(os.path.join('CICIDS2017','cic_label_wordindex.json')) as json_in:
data = json.load(json_in)
print(data)
normal_index = data['benign']
def f(x):
return 0 if x == normal_index else 1
f = np.vectorize(f)
cic_train_labels.head()
# We only want to know if it's benign or not, so we switch to 0 or 1
cic_train_labels = f(cic_train_labels['label_encoded'].values)
cic_test_labels = f(cic_test_labels['label_encoded'].values)
cic_train_labels[:5]
print("Training Set Size:\t",len(cic_train_data))
print("Training Label Size:\t",len(cic_train_labels))
print("Test Set Size:\t\t",len(cic_test_data))
print("Test Label Size:\t",len(cic_test_labels))
###Output
Training Set Size: 1839982
Training Label Size: 1839982
Test Set Size: 990761
Test Label Size: 990761
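###Markdown
Before training, it can be useful to check the class balance of the binarized labels. This check is added here as a sketch and is not part of the original notebook:
###Code
import numpy as np

# Fraction of benign (0) vs. malicious (1) flows in the train and test sets
print("Train label distribution:", np.bincount(cic_train_labels) / len(cic_train_labels))
print("Test label distribution: ", np.bincount(cic_test_labels) / len(cic_test_labels))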
###Markdown
Runtime Prerequisites Let's go ahead and set some crucial parameters for the learning process
###Code
epochs = 50
batch_size = 32
no_of_classes = len(np.unique(np.concatenate((cic_train_labels, cic_test_labels))))
run_date = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
runtype_name = 'cicids-lstm-2multiclass-b{}-e{}'.format(batch_size, epochs)
log_folder_path = os.path.join('logs',runtype_name + '-{}'.format(run_date))
###Output
_____no_output_____
###Markdown
The Keras Embedding Layer expects a maximum vocabulary size, which we can simply calculate by finding max() of the encoded data
###Code
all_data = pd.concat([cic_train_data, cic_test_data])
voc_size = (all_data.max().max()+1).astype('int64')
print("Maximum vocabulary size:", voc_size)
###Output
Maximum vocabulary size: 2
###Markdown
Building and Training the Model
###Code
from keras.callbacks import EarlyStopping,ModelCheckpoint,TensorBoard
callbacks = [
ModelCheckpoint(
filepath='models/'+runtype_name+'-{}.h5'.format(run_date),
monitor='val_loss',
save_best_only=True # Only save one. Only overwrite this one if val_loss has improved
),
TensorBoard(
log_dir=log_folder_path
)
]
from keras.models import Sequential
from keras.layers import Dense, Embedding, LSTM
from keras.optimizers import RMSprop
from keras.utils import plot_model
# see https://stackoverflow.com/a/49436133/3864726
# This is especially important in an environment like Jupyter, where the Kernel keeps on running
from keras import backend as K
K.clear_session()
model = Sequential()
model.add(Embedding(voc_size, 12,input_length=6))
model.add(LSTM(12, name='LSTMnet'))
model.add(Dense(2, activation='softmax')) # Multiclass classification. For binary, one would use i.e. sigmoid
model.compile(optimizer=RMSprop(lr=0.001),
loss='sparse_categorical_crossentropy', # Multiclass classification! Binary would be binary_crossentropy
metrics=['acc'])
model.summary()
history = model.fit(cic_train_data, cic_train_labels,
epochs=epochs,
batch_size=batch_size,
verbose=1,
validation_data=(cic_test_data, cic_test_labels),
callbacks=callbacks)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 6, 12) 24
_________________________________________________________________
LSTMnet (LSTM) (None, 12) 1200
_________________________________________________________________
dense_1 (Dense) (None, 2) 26
=================================================================
Total params: 1,250
Trainable params: 1,250
Non-trainable params: 0
_________________________________________________________________
Train on 1839982 samples, validate on 990761 samples
Epoch 1/50
956832/1839982 [==============>...............] - ETA: 2:11 - loss: 0.3396 - acc: 0.8532 |
notebooks/ROI/03_ClimateChange/S4_SLR_TCs/00_Generate_Database.ipynb | ###Markdown
Generate database - climate change: intermediate SLR scenario (S2, +1m) and future TCs probability
###Code
# --------------------------------------
# Teslakit database
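# NOTE (added): this cell assumes the Database class was imported earlier in the
# notebook, presumably via something like: from teslakit.database import Database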
p_data = r'/Users/albacid/Projects/TeslaKit_projects'
db = Database(p_data)
# make new database for climate change - S4
db.MakeNewSite('ROI_CC_S4')
###Output
Teslakit Site generated at /Users/albacid/Projects/TeslaKit_projects/sites/ROI_CC_S4
|
lecture-13.ipynb | ###Markdown
Lecture 13. The algebra of hierarchical matrices Previous lecture - Introduction to hierarchical matrices ($\mathcal{H}, \mathcal{H}^2$) as an algebraic interpretation of the FMM- The concept of block rows and nested bases- The concept of splitting of the matrix into blocks Today's lecture- How to construct the hierarchical approximation (both in the H- and H-2 cases) BookA good introductory book is S. Borm "Efficient numerical methods for non-local operators: H2-matrix compression, algorithms and analysis". Hierarchical matrices- Split the matrix into blocks $A(t, s)$ corresponding to the mosaic partitioning, approximate "far" blocks with low-rank matrices.- **$H^2$** matrix: the block row (i.e. the interaction of the box with everything outside) is of low rank.- Computation of the factorization requires the treatment of block matrices. Simple case: H-matricesThe case of H-matrices is simple: the matrix is represented as a collection of low-rank blocks, so we have to approximate each block independently, $$A(t, s) \approx U(t, s) V(t, s)^{\top}.$$How can we do that? NLA Flashback: approximation of low-rank matricesA rank-$r$ matrix can be represented as$$A = C \widehat{A}^{-1} R, $$where $C$ are some **columns** of the matrix $A$, $R$ are some rows of the matrix $A$, $\widehat{A}$ is the submatrix on their intersection.Approximate case: if $\widehat{A}$ is the submatrix with **maximal volume** (where volume is the absolute value of the determinant), the resulting cross approximation is quasi-optimal. Cross-approximationIdea of the cross approximation: select the submatrix to maximize the determinant, i.e. in a **greedy** fashion.The term "cross" comes from the rank-$1$ update formula$$A_{ij} := A_{ij} - \frac{A_{i j^*} A_{i^* j}}{A_{i^* j^*}},$$where the **pivots** $(i^*, j^*)$ have to be selected in such a way that $|A_{i^* j^*}|$ is as big as possible. Pivoting strategies- Row/column pivoting (select a column, find the maximal element in it).- Rook pivoting- Additionally, do some random sampling- There is a result by L. Demanet et al. on the class of matrices where the approximation exists Concept of approximation for H-matrices1. Create a list of blocks2. Sample rows/columns to get the low-rank factorization3. Profit! $H^2$-matricesFor the $H^2$ matrices the situation is much more complicated.The standard way to go is to first compress into the $\mathcal{H}$-form, and then do **recompression** into the $\mathcal{H}^2$ form. Such recompression can be done in $\mathcal{O}(n \log n)$ operations.Can we do it directly? Nested cross approximation- A block row is of low rank -> there exist **basis rows** that span the row space.- If we join the basis rows from the children, we get the basis rows for the father.- This requires the SVD of a $2r \times N$ matrix, and it has $\mathcal{O}(N^2)$ complexity (although better than $\mathcal{O}(N^3)$ for a direct SVD of big blocks) Solution: Approximate the "far zone" with few receivers ("representor set"). Demo
###Code
import os
os.environ['OMP_NUM_THREADS'] = '1'
os.environ['MKL_NUM_THREADS'] = '1'
from h2py.tree.quadtree import ParticlesQuadTree as PQT
from h2py.tree.octtree import ParticlesOctTree as POT
from h2py.tree.inertial_tree import ParticlesInertialTree as PIT
from h2py.data.particles_data import ParticlesData as PD
from h2py.problem import Problem
from h2py.data.particles import log_distance
inv_distance = log_distance
import numpy as np
from time import time
import sys
N = 20000
np.random.seed(0)
particles = np.random.rand(2, N)
data = PD(particles)
tree = PIT(data, block_size = 20)
problem = Problem(inv_distance, tree, tree)
problem.build()
problem.gen_queue(symmetric=0)
print('H2 1e-5')
problem.factorize('h2', tau=1e-5, onfly=0, verbose=1, iters=1)
###Output
1.95954990387
H2 1e-5
('Function calls:', 36682)
('Function values computed:', 33309887)
('Function time:', 1.7329978942871094)
('Average time per function value:', 5.202653177079554e-08)
('Maxvol time:', 6.476972341537476)
('Total MCBH time:', 9.493279933929443)
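###Markdown
As a small complement to the cross-approximation discussion above (added sketch, not from the original lecture), a naive greedy cross/skeleton loop with full pivoting can be written in a few lines of numpy; a smooth log-kernel matrix is used as a test case:
###Code
import numpy as np

def cross_approximation(A, r):
    """r greedy rank-1 cross updates, full pivoting for simplicity."""
    R = A.astype(float).copy()
    U = np.zeros((A.shape[0], r))
    V = np.zeros((r, A.shape[1]))
    for k in range(r):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)  # pivot (i*, j*)
        if R[i, j] == 0:
            break
        U[:, k] = R[:, j]
        V[k, :] = R[i, :] / R[i, j]
        R = R - np.outer(U[:, k], V[k, :])  # A_ij := A_ij - A_ij* A_i*j / A_i*j*
    return U, V

x = np.linspace(1.0, 2.0, 50)
A = np.log(np.abs(x[:, None] - x[None, :] + 3.0))  # smooth kernel => numerically low rank
U, V = cross_approximation(A, 5)
print(np.linalg.norm(A - U.dot(V)) / np.linalg.norm(A))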
###Markdown
Representor set- Way 1: Select it using a priori knowledge (geometrical approach)- Way 2: For "good columns" it is sufficient to know the good columns of the father! For details, see http://arxiv.org/abs/1309.1773 Inversion of the hierarchical matricesRecall that our goal is often to solve the integral equation (i.e., compute the inverse).One of the possible ways is to use **recursive block elimination** Block-LU (or Schur complement)Consider the matrix$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22}\end{bmatrix},$$where the first group of variables corresponds to the unknowns in the first node, and the second group of variables corresponds to the unknowns in the second node (a binary tree is implicitly assumed).Then, $$A \begin{bmatrix} q_1 \\ q_2 \end{bmatrix} = \begin{bmatrix} f_1 \\ f_2 \end{bmatrix}.$$ After the elimination we have the following equalities involving the **Schur complement**: $$q_1 = A^{-1}_{11} f_1 - A^{-1}_{11} A_{12} q_2, \quad \underbrace{(A_{22} - A_{21} A^{-1}_{11} A_{12})}_{\mbox{Schur complement}} q_2 = f_2 - A_{21} A^{-1}_{11} f_1.$$The core idea is **recursion**: if we know $A^{-1}_{11},$ then we can compute the matrix $S$, and invert it as well.The multiplication of H-matrices has $\mathcal{O}(N \log N)$ complexity, and is also (typically) implemented via recursion. Multiplication of H-matricesConsider a 1D partitioning, and the multiplication of two matrices with H-structure (i.e., blocks (1,2) and (2,1) have low rank)$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22}\end{bmatrix}\begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22}\end{bmatrix}$$The (2, 1) block of the result is $$ \underbrace{A_{21} B_{11}}_{\mbox{low rank}} + \underbrace{A_{22} B_{21}}_{\mbox{low rank}}$$The (1, 1) and (2, 2) blocks are evaluated **recursively**, so it is a recursion inside a recursion. Summary- Use block elimination (by nodes of the tree)- The problem is reduced to the multiplication of $H$-matrices- The constant is high- Can compute an $LU$-factorization instead. Fast direct solvers for sparse matricesSparse matrices coming from PDEs are in fact H-matrices as well!So, we can compute the inverse by the same block factorization technique.Works very well for the "1D" partitioning (i.e., off-diagonal blocks are of low rank), does not work for 2D/3D problems with optimal complexity (but the constants can be really good).We also have our own idea (and since it is unpublished, we will use the whiteboard magic here :) Summary - Nested cross approximation- Block Schur elimination idea for the inversion Next lecture (week)- We will talk about high-frequency problems (and there are problems there)- FFT on the non-uniform grid, butterfly algorithm- Precorrected FFT
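###Markdown
A small dense numpy illustration of the block elimination above (added for clarity; in practice the point is that all blocks are H-matrices, so the solves and products are cheap):
###Code
import numpy as np

np.random.seed(0)
n = 4
A = np.random.rand(2 * n, 2 * n) + 2 * n * np.eye(2 * n)  # well-conditioned test matrix
f = np.random.rand(2 * n)

A11, A12 = A[:n, :n], A[:n, n:]
A21, A22 = A[n:, :n], A[n:, n:]
f1, f2 = f[:n], f[n:]

S = A22 - A21.dot(np.linalg.solve(A11, A12))                    # Schur complement
q2 = np.linalg.solve(S, f2 - A21.dot(np.linalg.solve(A11, f1)))
q1 = np.linalg.solve(A11, f1 - A12.dot(q2))

print(np.linalg.norm(A.dot(np.concatenate([q1, q2])) - f))      # should be ~1e-15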
###Code
from IPython.core.display import HTML
def css_styling():
styles = open("./styles/custom.css", "r").read()
return HTML(styles)
css_styling()
###Output
_____no_output_____ |
Chapter02/Basic probability for predictive modeling.ipynb | ###Markdown
Generating random numbers and setting the seed
###Code
import random
print([random.uniform(0, 10) for x in range(3)])
print([random.uniform(0, 10) for x in range(3)])
random.seed(12345)
print([random.uniform(0, 10) for x in range(3)])
random.seed(12345)
print([random.uniform(0, 10) for x in range(3)])
###Output
[4.166198725453412, 0.10169169457068361, 8.252065092537432]
|
hic/PermutationsOnBlockMatrices.ipynb | ###Markdown
Permutations on Block MatricesThis is pretty straightforward. We are dealing here with a class of boolean matrices $\mathcal{B}$ that are equivalent by permutations to a block diagonal matrix (blocks of 1s).This means $A \in \mathcal{B}$ if and only if there exists a permutation matrix $P$ such that $C = P^{t} \cdot A \cdot P$ is a block diagonal matrix.In this case we will also say that $A \equiv C$ (by permutations). Note that $A \cdot P$ means permuting the columns of $A$ according to $P$, and $P^{t} \cdot A$ means permuting the rows of $A$ by the same permutation that $P$ defines if we look at it as a column permutation. So the operation $P^{t} A P$ is equivalent to re-indexing.Another thing to consider is that for any permutation matrix, $P^{t} = P^{-1}$. Let's identify the permutation matrix $P$ with the corresponding column permutation $\pi$.Recall that any permutation can be expressed as a composition of 2-cycles: $\pi = \prod_{i=1}^{k} a_i$, so correspondingly $P = \prod_{i=1}^{k} A_i$.Recall also that a 2-cycle is its own inverse.This confirms that $P^t = \prod_{i=k}^{1} A_i^t = \prod_{i=k}^{1} A_i^{-1} = P^{-1}$ (note the order change). So $P^t A$ is like applying the 2-cycles of $\pi$ to the rows, in the reverse order.Now let's look at a block matrix and see what permutations do and don't do to it.
###Code
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
from functools import reduce
def blockMatrix(blocks):
"""creates a bloack 0-1 matrix.
param blocks: list of non-negative integers which is the size of the blocks.
a 0 block size corresponds to a 0 on the main diagonal.
"""
blocks = np.array(blocks).astype("int64")
f = lambda x: 1 if x == 0 else x
n = np.sum([f(x) for x in blocks])
n = int(n)
A = np.zeros((n, n))
pos = 0
for i in range(len(blocks)):
b = blocks[i]
if b > 0:
A[pos : pos + b, pos : pos + b] = np.ones((b, b))
pos += f(b)
return A
def permutationMatrix(ls):
"""returns a permutation matrix of size len(ls)^2.
param ls: should be a reordering of range(len(ls)), which defines the
permutation on the ROWS.
returns a permutation matrix P.
np.dot(P,A) should be rearrangement of the rows of A according to P.
To permute the columns of a matrix A use:
Q = np.transpose(P), then: np.dot(A,Q).
"""
n = len(ls)
P = np.zeros((n, n))
for i in range(n):
P[i, ls[i]] = 1
return P
def shuffleCopyMatrix(lins, louts, msize):
"""Returns a matrix P that represents switch and copy operations
on the rows of a matrix.
param msize: the size (of the square matrix).
param lins: row indices to be replaced.
    param louts: rows that replace the ones listed in lins.
    lins and louts must be of the same length and contain indices within
range(msize).
These operations are performed on the identity matrix, and the result
is the return value P.
"""
# P = np.zeros((msize,msize))
P = np.identity(msize)
I = np.identity(msize)
if not len(lins) == len(louts):
return P
for i in range(len(lins)):
P[lins[i]] = I[louts[i]]
return P
def scoreMatrix(n):
"""The score function of the matrix. The assumption is that the true
    arrangement maximizes the interaction close to the main diagonal.
The total sum of the interaction is an invariant, preserved by permuations.
    param n: size of a 2-d n-by-n array.
returns the score matrix, which is used to calculate the score of any given
n^2 matrix.
"""
s = np.arange(n)
s = np.exp(-s)
S = np.zeros((n, n))
for i in range(n):
S[i][i:] = s[: n - i]
return S
def score(A, S):
"""returns the weighted sum (by the score matrix S) of
the matrix A
"""
return np.sum(A * S)
def constrainMatrix(ls, A):
"""Returns a matrix of the same dimension as A, but every entry with
an index (either row or column) not in ls is 0.
"""
B = np.zeros_like(A)
#B[np.ix_(ls,ls)] = 1
B[np.ix_(ls,ls)] = A[np.ix_(ls,ls)]
#B[ls][:,ls] = A[ls][:,ls]
return B
def resetIndices(ls, A):
"""essentially returns the constraint of A to the complement indices of ls,
    by resetting all the indices in ls to 0.
"""
B = A.copy()
B[ls,:] = 0
B[:,ls] = 0
return B
def reindexMatrix(iss, jss, A):
"""iss and jss are lists of indices of equal size, representing
    a permutation: iss[i] is replaced with jss[i]; all other indices which are
    not in the lists are left unchanged.
"""
n = len(A)
B = np.zeros_like(A)
tss = [i for i in range(n)]
for i in range(len(iss)):
tss[iss[i]] = jss[i]
for i in range(n):
for j in range(n):
B[i, j] = A[tss[i], tss[j]]
return B
def scorePair(iss, jss, refmat, scoremat):
A = np.zeros_like(refmat)
l = iss + jss
n = len(l)
for i in range(n):
for j in range(n):
A[i, j] = refmat[l[i], l[j]]
return score(A, scoremat)
def scorePair2(iss, jss, refmat):
"""scores the interaction of two segments
    iss and jss, weighted by the ideal diagonal distribution.
"""
s = 0
temp = 0
for i in range(len(iss)):
for j in range(len(jss)):
temp = np.exp(-np.abs(j + len(iss) - i))
# we only care about interaction between the 2 segments and not
# inside each one of them which wouldn't be affected by
# rearrangement.
s += refmat[iss[i], jss[j]] * temp
return s
def scorePair3(iss, jss, refmat, lreverse=False, rreverse=False):
"""iss, jss must be lists of segments of the index range of refmat,
our reference matrix.
    returns the interaction score of iss and jss as if we reindexed the matrix so
    that they will be adjacent to each other.
"""
s = 0
temp = 0
for i in range(len(iss)):
for j in range(len(jss)):
x = iss[i]
y = jss[j]
if lreverse:
x = iss[-1 - i]
if rreverse:
y = jss[-1 -j ]
# temp = np.exp(-np.abs(i-j))
#temp = np.exp(-np.abs(x - y))
temp = np.exp(-np.abs(j + len(iss) - i))
# we only care about interaction between the 2 segments and not
# inside each one of them which wouldn't be affected by
# rearrangement.
s += refmat[x, y] * temp
return s
# and the corresponding indices are:
def articulate(l):
"""l is a list of positive integers.
returns the implied articulation, meaning a list of lists (or 1d arrays)
ls, such that ls[0] it the numbers 0 to l[0]-1, ls[1] is a list of the
numbers ls[1] to ls[2]-1 etc.
"""
# ls = [np.arange(l[0]).astype('uint64')]
ls = []
offsets = np.cumsum([0] + l)
for i in range(0, len(l)):
xs = np.arange(l[i]).astype("uint64") + offsets[i]
ls.append(xs)
return ls
def flip1(s, A, arts):
"""flips (reverses) the s'th segment, as listed by arts.
returns new matrix.
param s: the segment to flip.
param A: the matrix.
param arts: the articulation of A.
"""
myarts = arts.copy()
myarts[s] = np.flip(myarts[s])
B = reindexMatrix(arts[s], myarts[s], A)
return B
def indexing(arts):
return reduce(lambda x,y: x + list(y), arts, [])
def swap2(s, r, A, arts):
"""swaps segments s and r, and returns the new
matrix and the new segmentation.
"""
myarts = arts.copy()
myarts[s] = arts[r]
myarts[r] = arts[s]
B = reindexMatrix(indexing(arts), indexing(myarts), A)
newarts = articulate([len(x) for x in myarts])
return newarts, B
def improve(A, xs):
"""param: A: matrix.
param xs: associated articulation.
checks if segments 0 and 1 should be attached in some particular order.
"""
if len(xs) == 1:
return xs, A
iss = xs[0]
jss = xs[1]
# we're going to see if iss and jss belong together in some configuration
sl = scorePair3(iss,jss, A)
slrv = scorePair3(iss,jss, A, lreverse=True)
sr = scorePair3(jss,iss, A)
srrv = scorePair3(jss,iss, A, rreverse=True)
t = np.max([sl, slrv, sr, srrv])
mysegs = [len(xs[i]) for i in range(1, len(xs))]
mysegs[0] += len(xs[0])
mysegs = articulate(mysegs)
if t == 0:
return xs, A
if t == sl:
        # nothing to change
return mysegs, A
elif t == sr:
# swap 0 and 1 segments
_, B = swap2(1,0, A, xs)
return mysegs, B
elif t == slrv:
# first flip the segment 0
B = flip1(0, A, xs)
return mysegs, B
else:
# first flip the segment 0
B = flip1(0, A, xs)
# then make the switch
_, B = swap2(1,0, B, xs)
return mysegs, B
myblocks = [10,15,17,19,17,15,10]
mymatrix = blockMatrix(myblocks)
plt.matshow(mymatrix)
###Output
_____no_output_____
###Markdown
We now define a distribution (well, sort of; it is not normalized) on the $103 \times 103$ matrix. We would like cells close to the diagonal to have a high chance of being $1$, and the further a cell is from the main diagonal, the exponentially smaller the chance that it is $1$. So $P(A[i,j] = 1) \propto e^{-|i-j|}$. The plots below show this distribution and also how it looks in log scale.
###Code
dmatrix = scoreMatrix(len(mymatrix))
dmatrix += np.transpose(dmatrix)
dmatrix -= np.identity(len(dmatrix))
fig, axs = plt.subplots(nrows=1, ncols=2)
fig.suptitle('ideal distribution of 1s and 0s')
axs[0].imshow(dmatrix)
axs[0].set_title('original')
axs[1].imshow(np.log(dmatrix))
axs[1].set_title('log scale')
###Output
_____no_output_____
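###Markdown
A short sketch confirming that the `dmatrix` built above is exactly the weight $e^{-|i-j|}$ described here:
###Code
idx = np.arange(len(dmatrix))
direct = np.exp(-np.abs(idx[:, None] - idx[None, :]))   # e^{-|i-j|} built directly
print(np.allclose(dmatrix, direct))                      # expected: True
###Output
_____no_output_____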
###Markdown
we call this distribution matrix $S$, and let $A$ be any boolean matrix that is equivalent by permutations to some block diagonal matrix $B$. We define the score of $A$ to be $s(A) = \sum_{i,j} A_{i,j}S_{i,j} = \sum_{i,j} A_{i,j}\exp(-|i-j|)$. Theorem: If $A$ and $B$ are as above, then $s(A) \leq s(B)$, and $s(B) = s(A)$ if and only if $A$ is itself a block matrix which is a permutation on the blocks of $B$ (and, inside a block, any permutation of its indices as well). Proof: pretty straightforward by induction. (A quick numerical check of this theorem appears in the code cell below.) So let's assume that we are only given the matrix $A$. We know it is equivalent by permutations to some unknown block matrix $B$. How easy is it to reach $B$ from $A$? First recall that any permutation on the blocks of $B$ would result in a matrix of the same, maximal score. We don't necessarily know how to distinguish between such matrices. Our indices run from $0$ to $n-1$ ($n=103$ in our particular sample). Every block diagonal matrix can be defined by a consecutive list of the block sizes on its diagonal (see for example the python code above). So let's say that $B = \text{blcks}_{i = 0}^k(b_i)$ (the $b_i$'s add up to $n$ minus the number of $b_i$ which are $0$; a $0$ block represents a $0$ on the diagonal). Let's say that $I = [0,n)$ (our index range) and the blocks of $B$ can be designated by $I = \cup_{i=0}^k I_i = \cup_i [x_i, y_i) = \cup_i [x_i, x_i+b_i)$ (where $0 = x_0 < x_1 < \dots < x_k$ and the last block ends at $n$). Now let's consider a different segmentation $\sigma(I) = \{[u_j,v_j)\}$ of $I$: $I = \cup_{j=0}^l J_j = \cup_j [u_j, v_j)$ where $0 = u_0 < v_0 = u_1 < \dots$ We require the segmentation $\sigma(I)$ to satisfy the following conditions: 1) If $u_j \in [x_i,y_i) = I_i$ then $v_j \notin I_i$. 2) If $u_j = x_i$ for some $i$, then $v_j \neq y_i$ for all $i$. 3) If $v_j = y_i$ for some $i$, then $u_j \neq x_i$ for all $i$. 4) For just the two end segments, we may allow them to be a prefix of the first block or a suffix of the last block, and then all the internal segments must start and end inside a block and not at its exact ends. To explain this in words, no segment $J_j$ can be contained completely inside just one interval $I_i$, and in addition no segment $J_j$ is exactly the union of several consecutive intervals. This type of segmentation ensures that every segment $J_j$ interacts with at least one of $J_{j-1}$ or $J_{j+1}$, where by interacting we mean there is some $s \in J_{j}$, some $t \in J_{j+1}$ (or $J_{j-1}$) and some block index $r$ such that $s,t \in I_r$ and therefore $A[s,t] = 1$. Moreover, it follows from our requirements on segmentations that $J_i$ and $J_j$ have $0$ interaction if $|i-j| > 1$. So now suppose that we know that the matrix $A$ was obtained by permutations from $B$ that are of two types: 1) permuting the order of the segments, $\pi(J) = \{J_{\pi(i)}\}$, and 2) reversing some of the segments, $J_i \to \text{rev}(J_i)$. And we are given not just $A$ but also a list of numbers that represent segments (which were obtained from the segmentation $J$). In other words we have a segmented partition $K = \cup_1^l K_i = \cup [c_i,d_i)$ which was obtained from the segmentation $J$ by performing the permutations as explained above and then renaming the indices to $0 \dots n-1$.
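A quick numerical sketch of the theorem, using the `score`, `scoreMatrix` and `mymatrix` objects defined above (a random symmetric permutation of the block matrix should never score higher than the block-diagonal arrangement itself):
###Code
S = scoreMatrix(len(mymatrix))
perm = np.random.permutation(len(mymatrix))
shuffled = mymatrix[np.ix_(perm, perm)]          # re-index rows and columns by the same permutation
print(score(mymatrix, S) >= score(shuffled, S))  # expected: True
###Output
_____no_output_____
###Markdown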
Even after this rearrangement it still holds that $K_i$ will interact with one or two (and not more) segments $K_x, K_y$, and these will be segments that correspond to 3 consecutive neighboring segments in the original segmentation $J$. The strongest interaction (according to our ideal distribution) will happen if we stitch two such interacting $K$-segments in the right order (as it will then be closest to the diagonal). This allows us to reconstruct the original matrix in a very simple way: while the number of segments is greater than 1: * pick a segment, * find the segment with the strongest interaction with it, * stitch them together and re-index the indices to reflect the reordering (and possibly reverse) operations that are required, reducing the number of segments by 1, * repeat. I haven't provided the exact definition of 'strongest interaction'; I will do that later. But this is basically how it goes: for two arbitrary segments $X = [a, a+1, \dots, a+x)$ and $Y = [y, y+1, \dots, y+b)$, we define a score function with respect to the matrix $A$ as follows: $s(X,Y) = \sum_{i =0}^{x-1} \sum_{j = 0}^{b-1} A[X[i], Y[j]] \exp(-|i-j|)$. In general $s(X,Y) \neq s(Y,X)$. And we also allow reverse, so if $Y^r$ is the reverse of $Y$, let $A|_{Y^r}$ be the matrix resulting from $A$ after reindexing according to $Y^r$. Then $s(X,Y^r)$ is the same formula as above but replacing $A$ with $A|_{Y^r}$. In the algorithm above, to find a neighbor of $X$ to join with it, we find the one or two interaction partners $Y,Z$, and pick the one (or its reverse) that maximizes $s(X,Y), s(Y,X), s(Y^r, X), s(X, Y^r), s(X,Z), \dots$ (8 combinations in total). We actually go over all the segments, but these are the only combinations which can possibly be non-zero. An example: We shall construct this block matrix:
###Code
# a bigger experiment
fooblocks = [15,17,19,20,10,21,30,21,40,9,27,19]
foo = blockMatrix(fooblocks)
np.sum(fooblocks)
foosegs = [38, 34, 32, 38, 37, 34, 35]
np.sum(foosegs)
np.sum(fooblocks)
foosegs = articulate(foosegs)
plt.matshow(foo)
###Output
_____no_output_____
###Markdown
perform some swaps and flips and keep track of the segmentation
###Code
# now perform some flips and swaps
bar = flip1(0, foo, foosegs)
bar = flip1(1, bar, foosegs)
bar = flip1(5, bar, foosegs)
barsegs, bar = swap2(0,3, bar, foosegs)
barsegs, bar = swap2(1,2, bar, barsegs)
barsegs, bar = swap2(5,2, bar, barsegs)
barsegs, bar = swap2(4,0, bar, barsegs)
barsegs, bar = swap2(5, 1, bar, barsegs)
barsegs, bar = swap2(1,2, bar, barsegs)
plt.matshow(foo)
plt.matshow(bar)
oldbar = bar.copy()
# reconstruction procedure
while len(barsegs) > 1:
x = np.random.randint(1,len(barsegs))
barsegs, bar = swap2(1,x, bar, barsegs)
barsegs, bar = improve(bar, barsegs)
plt.matshow(foo)
plt.title("original")
plt.matshow(oldbar)
plt.title("scrumbled")
plt.matshow(bar)
plt.title("reconstructed")
###Output
_____no_output_____ |
Deep Learning/Generative Adversarial Networks/Deep Convolutional GANs/Batch_Normalization_Exercises.ipynb | ###Markdown
Batch Normalization – Practice Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is **not** a good network for classifying MNIST digits. You could create a _much_ simpler network and get _better_ results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources. This notebook includes two versions of the network that you can edit. The first uses higher level functions from the `tf.layers` package. The second is the same network, but uses only lower level functions in the `tf.nn` package.
1. [Batch Normalization with `tf.layers.batch_normalization`](example_1)
2. [Batch Normalization with `tf.nn.batch_normalization`](example_2) The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named `mnist`. You'll need to run this cell before running anything else in the notebook.
###Code
%tensorflow_version 1.x
from tensorflow.python.util import deprecation
deprecation._PRINT_DEPRECATION_WARNINGS = False
!rm -r *
!wget -q https://github.com/bhupendpatil/Practice/raw/master/DataSet/mnist.zip
!unzip -q mnist.zip
!rm mnist.zip
import tensorflow as tf
from mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
###Output
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting MNIST_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
###Markdown
Batch Normalization using `tf.layers.batch_normalization`
This version of the network uses `tf.layers` for almost everything, and expects you to implement batch normalization using [`tf.layers.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization) We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
###Code
"""
DO NOT MODIFY THIS CELL
"""
def fully_connected(prev_layer, num_units):
"""
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
###Output
_____no_output_____
###Markdown
We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with even depths, and strides of 2x2 on layers with odd depths. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
###Code
"""
DO NOT MODIFY THIS CELL
"""
def conv_layer(prev_layer, layer_depth):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
###Output
_____no_output_____
###Markdown
**Run the following cell**, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network **without** batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
###Code
"""
DO NOT MODIFY THIS CELL
"""
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
###Output
Batch: 0: Validation loss: 0.69082, Validation accuracy: 0.09860
Batch: 25: Training loss: 0.34277, Training accuracy: 0.06250
Batch: 50: Training loss: 0.32889, Training accuracy: 0.04688
Batch: 75: Training loss: 0.32620, Training accuracy: 0.07812
Batch: 100: Validation loss: 0.32561, Validation accuracy: 0.09900
Batch: 125: Training loss: 0.32773, Training accuracy: 0.04688
Batch: 150: Training loss: 0.32607, Training accuracy: 0.07812
Batch: 175: Training loss: 0.32291, Training accuracy: 0.09375
Batch: 200: Validation loss: 0.32515, Validation accuracy: 0.09760
Batch: 225: Training loss: 0.32457, Training accuracy: 0.14062
Batch: 250: Training loss: 0.32427, Training accuracy: 0.12500
Batch: 275: Training loss: 0.32498, Training accuracy: 0.09375
Batch: 300: Validation loss: 0.32558, Validation accuracy: 0.09580
Batch: 325: Training loss: 0.32433, Training accuracy: 0.10938
Batch: 350: Training loss: 0.32314, Training accuracy: 0.17188
Batch: 375: Training loss: 0.32673, Training accuracy: 0.06250
Batch: 400: Validation loss: 0.32602, Validation accuracy: 0.11260
Batch: 425: Training loss: 0.32655, Training accuracy: 0.03125
Batch: 450: Training loss: 0.32432, Training accuracy: 0.12500
Batch: 475: Training loss: 0.32564, Training accuracy: 0.10938
Batch: 500: Validation loss: 0.32554, Validation accuracy: 0.11260
Batch: 525: Training loss: 0.32659, Training accuracy: 0.07812
Batch: 550: Training loss: 0.32356, Training accuracy: 0.09375
Batch: 575: Training loss: 0.32601, Training accuracy: 0.12500
Batch: 600: Validation loss: 0.32561, Validation accuracy: 0.11260
Batch: 625: Training loss: 0.32789, Training accuracy: 0.03125
Batch: 650: Training loss: 0.32451, Training accuracy: 0.10938
Batch: 675: Training loss: 0.32386, Training accuracy: 0.12500
Batch: 700: Validation loss: 0.32542, Validation accuracy: 0.10700
Batch: 725: Training loss: 0.32297, Training accuracy: 0.14062
Batch: 750: Training loss: 0.32773, Training accuracy: 0.06250
Batch: 775: Training loss: 0.32538, Training accuracy: 0.18750
Final validation accuracy: 0.11260
Final test accuracy: 0.11350
Accuracy on 100 samples: 0.14
###Markdown
With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. **Edit these cells** to add batch normalization to the network. For this exercise, you should use [`tf.layers.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization) to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the `Batch_Normalization_Solutions` notebook to see how we did things. **TODO:** Modify `fully_connected` to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
###Code
def fully_connected(prev_layer, num_units, is_training):
"""
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
layer = tf.layers.batch_normalization(layer,training=is_training)
layer = tf.nn.relu(layer)
return layer
###Output
_____no_output_____
###Markdown
**TODO:** Modify `conv_layer` to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
###Code
def conv_layer(prev_layer, layer_depth, is_training):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', use_bias=False, activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer,training=is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
###Output
_____no_output_____
###Markdown
**TODO:** Edit the `train` function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
###Code
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys,is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
###Output
Batch: 0: Validation loss: 0.69111, Validation accuracy: 0.09860
Batch: 25: Training loss: 0.57048, Training accuracy: 0.15625
Batch: 50: Training loss: 0.44815, Training accuracy: 0.15625
Batch: 75: Training loss: 0.37872, Training accuracy: 0.07812
Batch: 100: Validation loss: 0.34246, Validation accuracy: 0.11260
Batch: 125: Training loss: 0.33639, Training accuracy: 0.06250
Batch: 150: Training loss: 0.33977, Training accuracy: 0.15625
Batch: 175: Training loss: 0.39093, Training accuracy: 0.09375
Batch: 200: Validation loss: 0.42325, Validation accuracy: 0.11260
Batch: 225: Training loss: 0.49268, Training accuracy: 0.14062
Batch: 250: Training loss: 0.53055, Training accuracy: 0.12500
Batch: 275: Training loss: 0.68850, Training accuracy: 0.17188
Batch: 300: Validation loss: 0.75199, Validation accuracy: 0.13720
Batch: 325: Training loss: 0.50162, Training accuracy: 0.29688
Batch: 350: Training loss: 0.40609, Training accuracy: 0.53125
Batch: 375: Training loss: 0.13581, Training accuracy: 0.78125
Batch: 400: Validation loss: 0.11445, Validation accuracy: 0.82680
Batch: 425: Training loss: 0.05287, Training accuracy: 0.93750
Batch: 450: Training loss: 0.04585, Training accuracy: 0.92188
Batch: 475: Training loss: 0.03973, Training accuracy: 0.93750
Batch: 500: Validation loss: 0.04040, Validation accuracy: 0.94000
Batch: 525: Training loss: 0.03635, Training accuracy: 0.96875
Batch: 550: Training loss: 0.02377, Training accuracy: 0.95312
Batch: 575: Training loss: 0.04652, Training accuracy: 0.90625
Batch: 600: Validation loss: 0.02746, Validation accuracy: 0.96200
Batch: 625: Training loss: 0.04083, Training accuracy: 0.92188
Batch: 650: Training loss: 0.04183, Training accuracy: 0.93750
Batch: 675: Training loss: 0.07430, Training accuracy: 0.90625
Batch: 700: Validation loss: 0.03619, Validation accuracy: 0.95720
Batch: 725: Training loss: 0.03778, Training accuracy: 0.93750
Batch: 750: Training loss: 0.01193, Training accuracy: 0.98438
Batch: 775: Training loss: 0.01404, Training accuracy: 0.98438
Final validation accuracy: 0.96080
Final test accuracy: 0.95660
Accuracy on 100 samples: 0.94
###Markdown
With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: `Accuracy on 100 samples`. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
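The population statistics are simply running averages of the batch statistics collected during training, and those stored values are what inference should use. A tiny NumPy sketch of the idea (the activations here are made up; the `decay` value matches the 0.99 used in the cells below):
###Code
import numpy as np

decay = 0.99
pop_mean, pop_var = 0.0, 1.0
for _ in range(1000):
    batch = np.random.randn(64) * 2.0 + 5.0                    # pretend activations: mean 5, std 2
    pop_mean = decay * pop_mean + (1 - decay) * batch.mean()   # exponential moving average
    pop_var = decay * pop_var + (1 - decay) * batch.var()
print(pop_mean, pop_var)   # approaches roughly 5 and 4; inference uses these instead of batch stats
###Output
_____no_output_____
###Markdown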
Batch Normalization using `tf.nn.batch_normalization`
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses `tf.nn` for almost everything, and expects you to implement batch normalization using [`tf.nn.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/nn/batch_normalization).
**Optional TODO:** You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.
**TODO:** Modify `fully_connected` to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
**Note:** For convenience, we continue to use `tf.layers.dense` for the `fully_connected` layer. By this point in the class, you should have no problem replacing that with matrix operations between the `prev_layer` and explicit weights and biases variables.
###Code
def fully_connected(prev_layer, num_units, is_training):
"""
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:param is_training: bool or Tensor
Indicates whether or not the network is currently training, which tells the batch normalization
layer whether or not it should update or use its population statistics.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
gamma = tf.Variable(tf.ones([num_units]))
beta = tf.Variable(tf.zeros([num_units]))
pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False)
pop_variance = tf.Variable(tf.ones([num_units]), trainable=False)
epsilon = 1e-3
def batch_norm_training():
batch_mean, batch_variance = tf.nn.moments(layer, [0])
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)
batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference)
return tf.nn.relu(batch_normalized_output)
###Output
_____no_output_____
###Markdown
**TODO:** Modify `conv_layer` to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
**Note:** Unlike in the previous example that used `tf.layers`, adding batch normalization to these convolutional layers _does_ require some slight differences to what you did in `fully_connected`.
###Code
def conv_layer(prev_layer, layer_depth, is_training):
"""
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:param is_training: bool or Tensor
Indicates whether or not the network is currently training, which tells the batch normalization
layer whether or not it should update or use its population statistics.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
gamma = tf.Variable(tf.ones([out_channels]))
beta = tf.Variable(tf.zeros([out_channels]))
pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False)
pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False)
epsilon = 1e-3
def batch_norm_training():
batch_mean, batch_variance = tf.nn.moments(layer, [0,1,2], keep_dims=False)
decay = 0.99
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
with tf.control_dependencies([train_mean, train_variance]):
return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)
def batch_norm_inference():
return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)
batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference)
return tf.nn.relu(batch_normalized_output)
###Output
_____no_output_____
###Markdown
**TODO:** Edit the `train` function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
###Code
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Add placeholder to indicate whether or not we're training the model
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually, just to make sure batch normalization really worked
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
###Output
Batch: 0: Validation loss: 0.69106, Validation accuracy: 0.11000
Batch: 25: Training loss: 0.58873, Training accuracy: 0.06250
Batch: 50: Training loss: 0.48330, Training accuracy: 0.09375
Batch: 75: Training loss: 0.41027, Training accuracy: 0.10938
Batch: 100: Validation loss: 0.38271, Validation accuracy: 0.09760
Batch: 125: Training loss: 0.36644, Training accuracy: 0.09375
Batch: 150: Training loss: 0.35370, Training accuracy: 0.06250
Batch: 175: Training loss: 0.38368, Training accuracy: 0.09375
Batch: 200: Validation loss: 0.43162, Validation accuracy: 0.11260
Batch: 225: Training loss: 0.38645, Training accuracy: 0.06250
Batch: 250: Training loss: 0.39129, Training accuracy: 0.15625
Batch: 275: Training loss: 0.41361, Training accuracy: 0.12500
Batch: 300: Validation loss: 0.21838, Validation accuracy: 0.53320
Batch: 325: Training loss: 0.24763, Training accuracy: 0.51562
Batch: 350: Training loss: 0.29008, Training accuracy: 0.48438
Batch: 375: Training loss: 0.31715, Training accuracy: 0.46875
Batch: 400: Validation loss: 0.14153, Validation accuracy: 0.76100
Batch: 425: Training loss: 0.23911, Training accuracy: 0.65625
Batch: 450: Training loss: 0.11234, Training accuracy: 0.84375
Batch: 475: Training loss: 0.12944, Training accuracy: 0.87500
Batch: 500: Validation loss: 0.05809, Validation accuracy: 0.92340
Batch: 525: Training loss: 0.08880, Training accuracy: 0.87500
Batch: 550: Training loss: 0.04000, Training accuracy: 0.95312
Batch: 575: Training loss: 0.07236, Training accuracy: 0.85938
Batch: 600: Validation loss: 0.04058, Validation accuracy: 0.93840
Batch: 625: Training loss: 0.02419, Training accuracy: 0.96875
Batch: 650: Training loss: 0.02142, Training accuracy: 0.96875
Batch: 675: Training loss: 0.03455, Training accuracy: 0.96875
Batch: 700: Validation loss: 0.09063, Validation accuracy: 0.89260
Batch: 725: Training loss: 0.03693, Training accuracy: 0.93750
Batch: 750: Training loss: 0.07074, Training accuracy: 0.89062
Batch: 775: Training loss: 0.03272, Training accuracy: 0.95312
Final validation accuracy: 0.96580
Final test accuracy: 0.96280
Accuracy on 100 samples: 0.96
|
Class lab reports/Chapter_6_lab_v6_upload.ipynb | ###Markdown
6. Learning to Classify Text 1.1 Gender Identification In Chapter 4, we saw that male and female names have some distinctive characteristics. Names ending in a, e and i are likely to be female, while names ending in k, o, r, s and t are likely to be male. Let's build a classifier to model these differences more precisely. The first step in creating a classifier is deciding what features of the input are relevant, and how to encode those features. For this example, we'll start by just looking at the final letter of a given name. The following feature extractor function builds a dictionary containing relevant information about a given name:
###Code
import nltk
def gender_features(word):
return {'last_letter': word[-1]}
gender_features('Shrek')
###Output
_____no_output_____
###Markdown
The returned dictionary, known as a feature set, maps from feature names to their values. Feature names are case-sensitive strings that typically provide a short human-readable description of the feature, as in the example 'last_letter'. Now that we've defined a feature extractor, we need to prepare a list of examples and corresponding class labels.
###Code
from nltk.corpus import names
labeled_names = ([(name, 'male') for name in names.words('male.txt')] +
[(name, 'female') for name in names.words('female.txt')])
len(labeled_names)
import random
random.seed(1)
random.shuffle(labeled_names)
###Output
_____no_output_____
###Markdown
Next, we use the feature extractor to process the names data, and divide the resulting list of feature sets into a training set and a test set. The training set is used to train a new "naive Bayes" classifier. (See Slides)
###Code
featuresets = [(gender_features(n), gender) for (n, gender) in labeled_names]
train_set, test_set = featuresets[500:], featuresets[:500]
classifier = nltk.NaiveBayesClassifier.train(train_set)
###Output
_____no_output_____
###Markdown
We will learn more about the naive Bayes classifier later in the chapter. For now, let's just test it out on some names that did not appear in its training data:
###Code
classifier.classify(gender_features('Neo'))
classifier.classify(gender_features('Trinity'))
###Output
_____no_output_____
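###Markdown
Besides a hard label, NLTK classifiers can also report a probability distribution over the labels via `prob_classify`. A short sketch:
###Code
# the classifier can also return how confident it is in each label
dist = classifier.prob_classify(gender_features('Neo'))
for label in dist.samples():
    print(label, round(dist.prob(label), 3))
###Output
_____no_output_____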
###Markdown
Observe that these character names from The Matrix are correctly classified. Although this science fiction movie is set in 2199, it still conforms with our expectations about names and genders. We can systematically evaluate the classifier on a much larger quantity of unseen data:
###Code
print(nltk.classify.accuracy(classifier, test_set))
###Output
0.78
###Markdown
Finally, we can examine the classifier to determine which features it found most effective for distinguishing the names' genders:
###Code
classifier.show_most_informative_features(5)
###Output
Most Informative Features
last_letter = 'a' female : male = 35.7 : 1.0
last_letter = 'k' male : female = 32.1 : 1.0
last_letter = 'p' male : female = 18.6 : 1.0
last_letter = 'f' male : female = 17.2 : 1.0
last_letter = 'v' male : female = 11.1 : 1.0
###Markdown
This listing shows that the names in the training set that end in "a" are female 36 times more often than they are male, but names that end in "k" are male 32 times more often than they are female. These ratios are known as likelihood ratios, and can be useful for comparing different feature-outcome relationships. Exercise 1. Use this classifier to test your own names or any names of your own choosing.
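The likelihood ratios above can be approximated directly from the raw counts in `labeled_names` (a short sketch; the trained classifier uses smoothed probability estimates, so its numbers differ slightly):
###Code
female_names = [name for (name, gender) in labeled_names if gender == 'female']
male_names = [name for (name, gender) in labeled_names if gender == 'male']
p_a_female = sum(1 for name in female_names if name[-1].lower() == 'a') / len(female_names)
p_a_male = sum(1 for name in male_names if name[-1].lower() == 'a') / len(male_names)
print(round(p_a_female / p_a_male, 1))   # roughly the "female : male" ratio shown for last_letter = 'a'
###Output
_____no_output_____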
###Code
classifier.classify(gender_features('Alice'))
###Output
_____no_output_____
###Markdown
Exercise 2. Modify the gender_features() function to provide the classifier with features encoding the length of the name, or its first letter. Retrain the classifier with these new features, and test its accuracy. (3 minutes)
###Code
def gender_features1(word):
return {'first_letter': word[0]}
featuresets = [(gender_features1(n), gender) for (n, gender) in labeled_names]
train_set, test_set = featuresets[500:], featuresets[:500]
classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))
def gender_features2(word):
return {'word_length': len(word)}
featuresets = [(gender_features2(n), gender) for (n, gender) in labeled_names]
train_set, test_set = featuresets[500:], featuresets[:500]
classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))
###Output
0.654
###Markdown
1.2 Choosing The Right Features Selecting relevant features and deciding how to encode them for a learning method can have an enormous impact on the learning method's ability to extract a good model. Much of the interesting work in building a classifier is deciding what features might be relevant, and how we can represent them. Although it's often possible to get decent performance by using a fairly simple and obvious set of features, there are usually significant gains to be had by using carefully constructed features based on a thorough understanding of the task at hand. Typically, feature extractors are built through a process of trial-and-error, guided by intuitions about what information is relevant to the problem. It's common to start with a "kitchen sink" approach, including all the features that you can think of, and then checking to see which features actually are helpful.
###Code
def gender_features2(name):
features = {}
features["first_letter"] = name[0].lower()
features["last_letter"] = name[-1].lower()
for letter in 'abcdefghijklmnopqrstuvwxyz':
features["count({})".format(letter)] = name.lower().count(letter)
features["has({})".format(letter)] = (letter in name.lower())
return features
gender_features2('John')
###Output
_____no_output_____
###Markdown
However, there are usually limits to the number of features that you should use with a given learning algorithm — if you provide too many features, then the algorithm will have a higher chance of relying on idiosyncrasies of your training data that don't generalize well to new examples. This problem is known as overfitting, and can be especially problematic when working with small training sets. (See Slides)
###Code
featuresets = [(gender_features2(n), gender) for (n, gender) in labeled_names]
train_set, test_set = featuresets[500:], featuresets[:500]
classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))
###Output
0.78
###Markdown
Once an initial set of features has been chosen, a very productive method for refining the feature set is error analysis. First, we select a development set, containing the corpus data for creating the model. This development set is then subdivided into the training set and the dev-test set. (See Slides)
###Code
train_names = labeled_names[1500:]
devtest_names = labeled_names[500:1500]
test_names = labeled_names[:500]
###Output
_____no_output_____
###Markdown
Having divided the corpus into appropriate datasets, we train a model using the training set [1], and then run it on the dev-test set [2].
###Code
train_set = [(gender_features(n), gender) for (n, gender) in train_names]
devtest_set = [(gender_features(n), gender) for (n, gender) in devtest_names]
test_set = [(gender_features(n), gender) for (n, gender) in test_names]
classifier = nltk.NaiveBayesClassifier.train(train_set) #1
print(nltk.classify.accuracy(classifier, devtest_set)) #2
###Output
0.753
###Markdown
Using the dev-test set, we can generate a list of the errors that the classifier makes when predicting name genders:
###Code
errors = []
for (name, tag) in devtest_names:
guess = classifier.classify(gender_features(name))
if guess != tag:
errors.append( (tag, guess, name) )
###Output
_____no_output_____
###Markdown
We can then examine individual error cases where the model predicted the wrong label, and try to determine what additional pieces of information would allow it to make the right decision (or which existing pieces of information are tricking it into making the wrong decision). The feature set can then be adjusted accordingly. The names classifier that we have built generates about 100 errors on the dev-test corpus:
###Code
for (tag, guess, name) in sorted(errors):
print('correct={:<8} guess={:<8} name={:<30}'.format(tag, guess, name))
###Output
correct=female guess=male name=Adelind
correct=female guess=male name=Allyson
correct=female guess=male name=Alyss
correct=female guess=male name=Anett
correct=female guess=male name=Ardis
correct=female guess=male name=Beatriz
correct=female guess=male name=Beitris
correct=female guess=male name=Berget
correct=female guess=male name=Bette-Ann
correct=female guess=male name=Bird
correct=female guess=male name=Bliss
correct=female guess=male name=Brittan
correct=female guess=male name=Caitlin
correct=female guess=male name=Cam
correct=female guess=male name=Caro
correct=female guess=male name=Caroljean
correct=female guess=male name=Carolyn
correct=female guess=male name=Carolynn
correct=female guess=male name=Cathleen
correct=female guess=male name=Catlin
correct=female guess=male name=Charleen
correct=female guess=male name=Cherilynn
correct=female guess=male name=Chriss
correct=female guess=male name=Christan
correct=female guess=male name=Christen
correct=female guess=male name=Clair
correct=female guess=male name=Consuelo
correct=female guess=male name=Dallas
correct=female guess=male name=Dawn
correct=female guess=male name=Diahann
correct=female guess=male name=Diann
correct=female guess=male name=Dionis
correct=female guess=male name=Easter
correct=female guess=male name=Eden
correct=female guess=male name=Eilis
correct=female guess=male name=Ellen
correct=female guess=male name=Ellynn
correct=female guess=male name=Emmalynn
correct=female guess=male name=Ester
correct=female guess=male name=Evangelin
correct=female guess=male name=Fanchon
correct=female guess=male name=Gaynor
correct=female guess=male name=Gwendolen
correct=female guess=male name=Harriet
correct=female guess=male name=Hedwig
correct=female guess=male name=Janeen
correct=female guess=male name=Janot
correct=female guess=male name=Jaquelin
correct=female guess=male name=Jo Ann
correct=female guess=male name=Jo-Ann
correct=female guess=male name=Jocelyn
correct=female guess=male name=Jonis
correct=female guess=male name=Jordan
correct=female guess=male name=Joslyn
correct=female guess=male name=Joyann
correct=female guess=male name=Karen
correct=female guess=male name=Katlin
correct=female guess=male name=Kerstin
correct=female guess=male name=Kirstyn
correct=female guess=male name=Kris
correct=female guess=male name=Kristien
correct=female guess=male name=Lamb
correct=female guess=male name=Leanor
correct=female guess=male name=Leeann
correct=female guess=male name=Linn
correct=female guess=male name=Lois
correct=female guess=male name=Mair
correct=female guess=male name=Marget
correct=female guess=male name=Margot
correct=female guess=male name=Margret
correct=female guess=male name=Marie-Ann
correct=female guess=male name=Marris
correct=female guess=male name=Meg
correct=female guess=male name=Megen
correct=female guess=male name=Meghan
correct=female guess=male name=Miran
correct=female guess=male name=Philis
correct=female guess=male name=Phyllys
correct=female guess=male name=Piper
correct=female guess=male name=Robinet
correct=female guess=male name=Robyn
correct=female guess=male name=Rosamond
correct=female guess=male name=Shannen
correct=female guess=male name=Shaylynn
correct=female guess=male name=Sioux
correct=female guess=male name=Suzan
correct=female guess=male name=Tamiko
correct=female guess=male name=Tomiko
correct=female guess=male name=Viv
correct=female guess=male name=Willyt
correct=female guess=male name=Yoshiko
correct=male guess=female name=Abdul
correct=male guess=female name=Aguste
correct=male guess=female name=Ajay
correct=male guess=female name=Aldrich
correct=male guess=female name=Alfonse
correct=male guess=female name=Allah
correct=male guess=female name=Alley
correct=male guess=female name=Amery
correct=male guess=female name=Angie
correct=male guess=female name=Arel
correct=male guess=female name=Arvy
correct=male guess=female name=Ashby
correct=male guess=female name=Augustine
correct=male guess=female name=Avery
correct=male guess=female name=Avi
correct=male guess=female name=Baillie
correct=male guess=female name=Barth
correct=male guess=female name=Barty
correct=male guess=female name=Bertie
correct=male guess=female name=Binky
correct=male guess=female name=Blayne
correct=male guess=female name=Brady
correct=male guess=female name=Brody
correct=male guess=female name=Brooke
correct=male guess=female name=Burl
correct=male guess=female name=Chance
correct=male guess=female name=Clare
correct=male guess=female name=Clarence
correct=male guess=female name=Clay
correct=male guess=female name=Clayborne
correct=male guess=female name=Clive
correct=male guess=female name=Cody
correct=male guess=female name=Curtice
correct=male guess=female name=Daffy
correct=male guess=female name=Dane
correct=male guess=female name=Darrell
correct=male guess=female name=Dave
correct=male guess=female name=Davie
correct=male guess=female name=Davy
correct=male guess=female name=Deryl
correct=male guess=female name=Dewey
correct=male guess=female name=Dickie
correct=male guess=female name=Doyle
correct=male guess=female name=Duffie
correct=male guess=female name=Dwayne
correct=male guess=female name=Earle
correct=male guess=female name=Edie
correct=male guess=female name=Elijah
correct=male guess=female name=Emmanuel
correct=male guess=female name=Esme
correct=male guess=female name=Fonzie
correct=male guess=female name=Frederich
correct=male guess=female name=Garvey
correct=male guess=female name=Gerome
correct=male guess=female name=Gerri
correct=male guess=female name=Giffie
correct=male guess=female name=Gil
correct=male guess=female name=Gill
correct=male guess=female name=Godfrey
correct=male guess=female name=Guthrie
correct=male guess=female name=Hale
correct=male guess=female name=Haley
correct=male guess=female name=Hamish
correct=male guess=female name=Hansel
correct=male guess=female name=Haskell
correct=male guess=female name=Henrique
correct=male guess=female name=Herbie
correct=male guess=female name=Hercule
correct=male guess=female name=Hermy
correct=male guess=female name=Hersch
correct=male guess=female name=Hersh
correct=male guess=female name=Hezekiah
correct=male guess=female name=Iggie
correct=male guess=female name=Ikey
correct=male guess=female name=Isaiah
correct=male guess=female name=Ismail
correct=male guess=female name=Jackie
correct=male guess=female name=Jeffry
correct=male guess=female name=Jeremie
correct=male guess=female name=Jesse
correct=male guess=female name=Jeth
correct=male guess=female name=Jodie
correct=male guess=female name=Jody
correct=male guess=female name=Joe
correct=male guess=female name=Johnnie
correct=male guess=female name=Johny
correct=male guess=female name=Kalle
correct=male guess=female name=Keene
correct=male guess=female name=Kendall
correct=male guess=female name=Kerry
correct=male guess=female name=Klee
correct=male guess=female name=Leigh
correct=male guess=female name=Leroy
correct=male guess=female name=Lindsey
correct=male guess=female name=Luce
correct=male guess=female name=Maurise
correct=male guess=female name=Mel
correct=male guess=female name=Merrill
correct=male guess=female name=Montague
correct=male guess=female name=Murray
correct=male guess=female name=Neddie
correct=male guess=female name=Neel
correct=male guess=female name=Neil
correct=male guess=female name=Neville
correct=male guess=female name=Nikolai
correct=male guess=female name=Noah
correct=male guess=female name=Orville
correct=male guess=female name=Oswell
correct=male guess=female name=Parsifal
correct=male guess=female name=Paul
correct=male guess=female name=Perry
correct=male guess=female name=Piggy
correct=male guess=female name=Pooh
correct=male guess=female name=Randolph
correct=male guess=female name=Rawley
correct=male guess=female name=Rice
correct=male guess=female name=Rich
correct=male guess=female name=Ricki
correct=male guess=female name=Rickie
correct=male guess=female name=Ritch
correct=male guess=female name=Ritchie
correct=male guess=female name=Roarke
correct=male guess=female name=Rodney
correct=male guess=female name=Roni
correct=male guess=female name=Roscoe
correct=male guess=female name=Royce
correct=male guess=female name=Rustie
correct=male guess=female name=Samuele
correct=male guess=female name=Saul
correct=male guess=female name=Saxe
correct=male guess=female name=Shane
correct=male guess=female name=Sibyl
correct=male guess=female name=Sinclare
correct=male guess=female name=Sloane
correct=male guess=female name=Smith
correct=male guess=female name=Sully
correct=male guess=female name=Tarrance
correct=male guess=female name=Timmie
correct=male guess=female name=Timmy
correct=male guess=female name=Torre
correct=male guess=female name=Torrey
correct=male guess=female name=Towney
correct=male guess=female name=Uriah
correct=male guess=female name=Vachel
correct=male guess=female name=Vail
correct=male guess=female name=Val
correct=male guess=female name=Vergil
correct=male guess=female name=Verne
correct=male guess=female name=Voltaire
correct=male guess=female name=Wallace
correct=male guess=female name=Wendel
correct=male guess=female name=Weslie
correct=male guess=female name=Willie
correct=male guess=female name=Wolfy
correct=male guess=female name=Yancey
correct=male guess=female name=Zechariah
###Markdown
Looking through this list of errors makes it clear that some suffixes that are more than one letter can be indicative of name genders. For example, names ending in yn appear to be predominantly female, despite the fact that names ending in n tend to be male; and names ending in ch are usually male, even though names that end in h tend to be female. We therefore adjust our feature extractor to include features for two-letter suffixes:
###Code
def gender_features(word):
    # use both the final letter and the final two letters as features
    return {'suffix1': word[-1:],
            'suffix2': word[-2:]}
###Output
_____no_output_____
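###Markdown
As a quick check (a small illustrative sketch using names from the error list above), the new extractor now exposes exactly the two-letter suffixes that the error analysis pointed to:
###Code
# e.g. 'Shannen' -> {'suffix1': 'n', 'suffix2': 'en'}, 'Rich' -> {'suffix1': 'h', 'suffix2': 'ch'}
print(gender_features('Shannen'))
print(gender_features('Rich'))
###Output
_____no_output_____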
###Markdown
Rebuilding the classifier with the new feature extractor, we see that the performance on the dev-test dataset improves.
###Code
# re-extract features with the two-suffix extractor and retrain the classifier
train_set = [(gender_features(n), gender) for (n, gender) in train_names]
devtest_set = [(gender_features(n), gender) for (n, gender) in devtest_names]
classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, devtest_set))
###Output
0.784
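###Markdown
To see whether the two-letter suffixes actually fixed the kinds of mistakes listed earlier, we can repeat the error analysis with the retrained classifier (a short sketch reusing devtest_names from above):
###Code
# collect the dev-test errors that remain under the new feature extractor
errors = []
for (name, gender) in devtest_names:
    guess = classifier.classify(gender_features(name))
    if guess != gender:
        errors.append((gender, guess, name))
print(len(errors))
###Output
_____no_output_____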
###Markdown
1.3 Document Classification In Chapter 1, we saw several examples of corpora where documents have been labeled with categories. Using these corpora, we can build classifiers that will automatically tag new documents with appropriate category labels. First, we construct a list of documents, labeled with the appropriate categories. For this example, we've chosen the Movie Reviews Corpus, which categorizes each review as positive or negative.
###Code
from nltk.corpus import movie_reviews

# pair each review's word list with its category label ('pos' or 'neg')
documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

# preview one (words, category) pair
documents[2]

# shuffle so that positive and negative reviews are interleaved
random.seed(2)
random.shuffle(documents)
###Output
_____no_output_____
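###Markdown
Before extracting features, it is worth confirming the size and label balance of the corpus (a quick sketch; the Movie Reviews Corpus ships with 1000 positive and 1000 negative reviews):
###Code
from collections import Counter

# count how many documents carry each category label
print(len(documents))
print(Counter(category for (words, category) in documents))
###Output
_____no_output_____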
###Markdown
Next, we define a feature extractor for documents, so the classifier will know which aspects of the data it should pay attention to. For document topic identification, we can define a feature for each word, indicating whether the document contains that word. To limit the number of features that the classifier needs to process, we begin by constructing a list of the 2000 most frequent words in the overall corpus [1]. We can then define a feature extractor [2] that simply checks whether each of these words is present in a given document.
###Code
all_words = nltk.FreqDist(w.lower() for w in movie_reviews.words())
word_features = list(all_words)[:2000] #1
word_features
def document_features(document): #2
document_words = set(document) #3
features = {}
for word in word_features:
features['contains({})'.format(word)] = (word in document_words)
return features
print(document_features(movie_reviews.words('pos/cv957_8737.txt')))
###Output
{'contains(plot)': True, 'contains(:)': True, 'contains(two)': True, 'contains(teen)': False, 'contains(couples)': False, 'contains(go)': False, 'contains(to)': True, 'contains(a)': True, 'contains(church)': False, 'contains(party)': False, 'contains(,)': True, 'contains(drink)': False, 'contains(and)': True, 'contains(then)': True, 'contains(drive)': False, 'contains(.)': True, 'contains(they)': True, 'contains(get)': True, 'contains(into)': True, 'contains(an)': True, 'contains(accident)': False, 'contains(one)': True, 'contains(of)': True, 'contains(the)': True, 'contains(guys)': False, 'contains(dies)': False, 'contains(but)': True, 'contains(his)': True, 'contains(girlfriend)': True, 'contains(continues)': False, 'contains(see)': False, 'contains(him)': True, 'contains(in)': True, 'contains(her)': False, 'contains(life)': False, 'contains(has)': True, 'contains(nightmares)': False, 'contains(what)': True, "contains(')": True, 'contains(s)': True, 'contains(deal)': False, 'contains(?)': False, 'contains(watch)': True, 'contains(movie)': True, 'contains(")': True, 'contains(sorta)': False, 'contains(find)': False, 'contains(out)': True, 'contains(critique)': False, 'contains(mind)': False, 'contains(-)': True, 'contains(fuck)': False, 'contains(for)': True, 'contains(generation)': False, 'contains(that)': True, 'contains(touches)': False, 'contains(on)': True, 'contains(very)': True, 'contains(cool)': False, 'contains(idea)': True, 'contains(presents)': False, 'contains(it)': True, 'contains(bad)': False, 'contains(package)': False, 'contains(which)': True, 'contains(is)': True, 'contains(makes)': False, 'contains(this)': True, 'contains(review)': False, 'contains(even)': False, 'contains(harder)': False, 'contains(write)': False, 'contains(since)': False, 'contains(i)': False, 'contains(generally)': False, 'contains(applaud)': False, 'contains(films)': False, 'contains(attempt)': False, 'contains(break)': False, 'contains(mold)': False, 'contains(mess)': False, 'contains(with)': True, 'contains(your)': False, 'contains(head)': False, 'contains(such)': False, 'contains(()': True, 'contains(lost)': False, 'contains(highway)': False, 'contains(&)': False, 'contains(memento)': False, 'contains())': True, 'contains(there)': True, 'contains(are)': True, 'contains(good)': False, 'contains(ways)': False, 'contains(making)': True, 'contains(all)': True, 'contains(types)': False, 'contains(these)': False, 'contains(folks)': False, 'contains(just)': True, 'contains(didn)': False, 'contains(t)': False, 'contains(snag)': False, 'contains(correctly)': False, 'contains(seem)': False, 'contains(have)': True, 'contains(taken)': False, 'contains(pretty)': False, 'contains(neat)': False, 'contains(concept)': False, 'contains(executed)': False, 'contains(terribly)': False, 'contains(so)': False, 'contains(problems)': True, 'contains(well)': True, 'contains(its)': False, 'contains(main)': False, 'contains(problem)': False, 'contains(simply)': False, 'contains(too)': False, 'contains(jumbled)': False, 'contains(starts)': False, 'contains(off)': False, 'contains(normal)': False, 'contains(downshifts)': False, 'contains(fantasy)': False, 'contains(world)': True, 'contains(you)': True, 'contains(as)': True, 'contains(audience)': False, 'contains(member)': False, 'contains(no)': False, 'contains(going)': False, 'contains(dreams)': False, 'contains(characters)': False, 'contains(coming)': False, 'contains(back)': False, 'contains(from)': True, 'contains(dead)': False, 'contains(others)': True, 'contains(who)': 
True, 'contains(look)': True, 'contains(like)': True, 'contains(strange)': False, 'contains(apparitions)': False, 'contains(disappearances)': False, 'contains(looooot)': False, 'contains(chase)': True, 'contains(scenes)': False, 'contains(tons)': False, 'contains(weird)': False, 'contains(things)': True, 'contains(happen)': False, 'contains(most)': True, 'contains(not)': True, 'contains(explained)': False, 'contains(now)': False, 'contains(personally)': False, 'contains(don)': False, 'contains(trying)': False, 'contains(unravel)': False, 'contains(film)': False, 'contains(every)': False, 'contains(when)': True, 'contains(does)': False, 'contains(give)': False, 'contains(me)': True, 'contains(same)': True, 'contains(clue)': False, 'contains(over)': False, 'contains(again)': False, 'contains(kind)': True, 'contains(fed)': False, 'contains(up)': False, 'contains(after)': False, 'contains(while)': True, 'contains(biggest)': False, 'contains(obviously)': False, 'contains(got)': True, 'contains(big)': False, 'contains(secret)': False, 'contains(hide)': False, 'contains(seems)': False, 'contains(want)': False, 'contains(completely)': False, 'contains(until)': False, 'contains(final)': False, 'contains(five)': False, 'contains(minutes)': False, 'contains(do)': True, 'contains(make)': True, 'contains(entertaining)': False, 'contains(thrilling)': False, 'contains(or)': False, 'contains(engaging)': False, 'contains(meantime)': False, 'contains(really)': False, 'contains(sad)': False, 'contains(part)': False, 'contains(arrow)': False, 'contains(both)': False, 'contains(dig)': False, 'contains(flicks)': False, 'contains(we)': False, 'contains(actually)': True, 'contains(figured)': False, 'contains(by)': True, 'contains(half)': False, 'contains(way)': True, 'contains(point)': False, 'contains(strangeness)': False, 'contains(did)': False, 'contains(start)': True, 'contains(little)': True, 'contains(bit)': False, 'contains(sense)': False, 'contains(still)': False, 'contains(more)': False, 'contains(guess)': False, 'contains(bottom)': False, 'contains(line)': False, 'contains(movies)': True, 'contains(should)': False, 'contains(always)': False, 'contains(sure)': False, 'contains(before)': False, 'contains(given)': False, 'contains(password)': False, 'contains(enter)': False, 'contains(understanding)': False, 'contains(mean)': False, 'contains(showing)': False, 'contains(melissa)': False, 'contains(sagemiller)': False, 'contains(running)': False, 'contains(away)': False, 'contains(visions)': False, 'contains(about)': True, 'contains(20)': False, 'contains(throughout)': False, 'contains(plain)': False, 'contains(lazy)': False, 'contains(!)': True, 'contains(okay)': False, 'contains(people)': False, 'contains(chasing)': False, 'contains(know)': False, 'contains(need)': False, 'contains(how)': True, 'contains(giving)': False, 'contains(us)': True, 'contains(different)': False, 'contains(offering)': False, 'contains(further)': False, 'contains(insight)': False, 'contains(down)': False, 'contains(apparently)': False, 'contains(studio)': False, 'contains(took)': False, 'contains(director)': False, 'contains(chopped)': False, 'contains(themselves)': False, 'contains(shows)': False, 'contains(might)': False, 'contains(ve)': False, 'contains(been)': False, 'contains(decent)': False, 'contains(here)': True, 'contains(somewhere)': False, 'contains(suits)': False, 'contains(decided)': False, 'contains(turning)': False, 'contains(music)': False, 'contains(video)': False, 'contains(edge)': False, 'contains(would)': 
False, 'contains(actors)': False, 'contains(although)': False, 'contains(wes)': False, 'contains(bentley)': False, 'contains(seemed)': False, 'contains(be)': True, 'contains(playing)': True, 'contains(exact)': False, 'contains(character)': False, 'contains(he)': True, 'contains(american)': False, 'contains(beauty)': False, 'contains(only)': True, 'contains(new)': False, 'contains(neighborhood)': False, 'contains(my)': False, 'contains(kudos)': False, 'contains(holds)': False, 'contains(own)': True, 'contains(entire)': False, 'contains(feeling)': False, 'contains(unraveling)': False, 'contains(overall)': False, 'contains(doesn)': False, 'contains(stick)': False, 'contains(because)': False, 'contains(entertain)': False, 'contains(confusing)': False, 'contains(rarely)': False, 'contains(excites)': False, 'contains(feels)': False, 'contains(redundant)': False, 'contains(runtime)': False, 'contains(despite)': False, 'contains(ending)': False, 'contains(explanation)': False, 'contains(craziness)': False, 'contains(came)': False, 'contains(oh)': False, 'contains(horror)': False, 'contains(slasher)': False, 'contains(flick)': False, 'contains(packaged)': False, 'contains(someone)': False, 'contains(assuming)': False, 'contains(genre)': False, 'contains(hot)': False, 'contains(kids)': False, 'contains(also)': True, 'contains(wrapped)': False, 'contains(production)': False, 'contains(years)': False, 'contains(ago)': False, 'contains(sitting)': False, 'contains(shelves)': False, 'contains(ever)': True, 'contains(whatever)': False, 'contains(skip)': False, 'contains(where)': True, 'contains(joblo)': False, 'contains(nightmare)': False, 'contains(elm)': False, 'contains(street)': False, 'contains(3)': False, 'contains(7)': False, 'contains(/)': False, 'contains(10)': False, 'contains(blair)': False, 'contains(witch)': False, 'contains(2)': False, 'contains(crow)': False, 'contains(9)': False, 'contains(salvation)': False, 'contains(4)': False, 'contains(stir)': False, 'contains(echoes)': False, 'contains(8)': False, 'contains(happy)': False, 'contains(bastard)': False, 'contains(quick)': True, 'contains(damn)': False, 'contains(y2k)': False, 'contains(bug)': False, 'contains(starring)': False, 'contains(jamie)': False, 'contains(lee)': False, 'contains(curtis)': False, 'contains(another)': False, 'contains(baldwin)': False, 'contains(brother)': False, 'contains(william)': False, 'contains(time)': False, 'contains(story)': False, 'contains(regarding)': False, 'contains(crew)': False, 'contains(tugboat)': False, 'contains(comes)': False, 'contains(across)': False, 'contains(deserted)': False, 'contains(russian)': False, 'contains(tech)': False, 'contains(ship)': False, 'contains(kick)': False, 'contains(power)': False, 'contains(within)': False, 'contains(gore)': False, 'contains(bringing)': False, 'contains(few)': False, 'contains(action)': True, 'contains(sequences)': False, 'contains(virus)': False, 'contains(empty)': False, 'contains(flash)': False, 'contains(substance)': False, 'contains(why)': False, 'contains(was)': False, 'contains(middle)': False, 'contains(nowhere)': False, 'contains(origin)': False, 'contains(pink)': False, 'contains(flashy)': False, 'contains(thing)': False, 'contains(hit)': False, 'contains(mir)': False, 'contains(course)': True, 'contains(donald)': False, 'contains(sutherland)': False, 'contains(stumbling)': False, 'contains(around)': False, 'contains(drunkenly)': False, 'contains(hey)': False, 'contains(let)': False, 'contains(some)': False, 'contains(robots)': False, 
'contains(acting)': False, 'contains(below)': False, 'contains(average)': False, 'contains(likes)': False, 'contains(re)': True, 'contains(likely)': False, 'contains(work)': False, 'contains(halloween)': False, 'contains(h20)': False, 'contains(wasted)': False, 'contains(real)': False, 'contains(star)': False, 'contains(stan)': False, 'contains(winston)': False, 'contains(robot)': False, 'contains(design)': False, 'contains(schnazzy)': False, 'contains(cgi)': False, 'contains(occasional)': False, 'contains(shot)': False, 'contains(picking)': False, 'contains(brain)': False, 'contains(if)': True, 'contains(body)': False, 'contains(parts)': False, 'contains(turn)': False, 'contains(otherwise)': False, 'contains(much)': False, 'contains(sunken)': False, 'contains(jaded)': False, 'contains(viewer)': False, 'contains(thankful)': False, 'contains(invention)': False, 'contains(timex)': False, 'contains(indiglo)': False, 'contains(based)': False, 'contains(late)': False, 'contains(1960)': False, 'contains(television)': False, 'contains(show)': False, 'contains(name)': False, 'contains(mod)': False, 'contains(squad)': False, 'contains(tells)': False, 'contains(tale)': False, 'contains(three)': False, 'contains(reformed)': False, 'contains(criminals)': False, 'contains(under)': False, 'contains(employ)': False, 'contains(police)': False, 'contains(undercover)': True, 'contains(however)': True, 'contains(wrong)': True, 'contains(evidence)': False, 'contains(gets)': True, 'contains(stolen)': False, 'contains(immediately)': False, 'contains(suspicion)': False, 'contains(ads)': False, 'contains(cuts)': False, 'contains(claire)': False, 'contains(dane)': False, 'contains(nice)': False, 'contains(hair)': False, 'contains(cute)': False, 'contains(outfits)': False, 'contains(car)': False, 'contains(chases)': False, 'contains(stuff)': False, 'contains(blowing)': False, 'contains(sounds)': False, 'contains(first)': False, 'contains(fifteen)': False, 'contains(quickly)': False, 'contains(becomes)': False, 'contains(apparent)': False, 'contains(certainly)': False, 'contains(slick)': False, 'contains(looking)': False, 'contains(complete)': False, 'contains(costumes)': False, 'contains(isn)': False, 'contains(enough)': False, 'contains(best)': True, 'contains(described)': False, 'contains(cross)': False, 'contains(between)': True, 'contains(hour)': False, 'contains(long)': False, 'contains(cop)': False, 'contains(stretched)': False, 'contains(span)': False, 'contains(single)': False, 'contains(clich)': False, 'contains(matter)': False, 'contains(elements)': False, 'contains(recycled)': False, 'contains(everything)': True, 'contains(already)': False, 'contains(seen)': False, 'contains(nothing)': False, 'contains(spectacular)': False, 'contains(sometimes)': False, 'contains(bordering)': False, 'contains(wooden)': False, 'contains(danes)': False, 'contains(omar)': False, 'contains(epps)': False, 'contains(deliver)': False, 'contains(their)': False, 'contains(lines)': False, 'contains(bored)': False, 'contains(transfers)': False, 'contains(onto)': False, 'contains(escape)': False, 'contains(relatively)': False, 'contains(unscathed)': False, 'contains(giovanni)': False, 'contains(ribisi)': False, 'contains(plays)': False, 'contains(resident)': False, 'contains(crazy)': False, 'contains(man)': False, 'contains(ultimately)': False, 'contains(being)': False, 'contains(worth)': True, 'contains(watching)': False, 'contains(unfortunately)': False, 'contains(save)': False, 'contains(convoluted)': False, 'contains(apart)': 
False, 'contains(occupying)': False, 'contains(screen)': True, 'contains(young)': False, 'contains(cast)': False, 'contains(clothes)': False, 'contains(hip)': False, 'contains(soundtrack)': False, 'contains(appears)': False, 'contains(geared)': False, 'contains(towards)': False, 'contains(teenage)': False, 'contains(mindset)': False, 'contains(r)': False, 'contains(rating)': False, 'contains(content)': False, 'contains(justify)': False, 'contains(juvenile)': False, 'contains(older)': False, 'contains(information)': False, 'contains(literally)': False, 'contains(spoon)': False, 'contains(hard)': False, 'contains(instead)': False, 'contains(telling)': False, 'contains(dialogue)': False, 'contains(poorly)': False, 'contains(written)': False, 'contains(extremely)': False, 'contains(predictable)': False, 'contains(progresses)': False, 'contains(won)': False, 'contains(care)': False, 'contains(heroes)': False, 'contains(any)': False, 'contains(jeopardy)': False, 'contains(ll)': False, 'contains(aren)': False, 'contains(basing)': False, 'contains(nobody)': False, 'contains(remembers)': False, 'contains(questionable)': False, 'contains(wisdom)': False, 'contains(especially)': True, 'contains(considers)': False, 'contains(target)': False, 'contains(fact)': False, 'contains(number)': False, 'contains(memorable)': False, 'contains(can)': False, 'contains(counted)': False, 'contains(hand)': False, 'contains(missing)': False, 'contains(finger)': False, 'contains(times)': False, 'contains(checked)': False, 'contains(six)': False, 'contains(clear)': False, 'contains(indication)': False, 'contains(them)': True, 'contains(than)': False, 'contains(cash)': False, 'contains(spending)': False, 'contains(dollar)': False, 'contains(judging)': False, 'contains(rash)': False, 'contains(awful)': False, 'contains(seeing)': True, 'contains(avoid)': False, 'contains(at)': False, 'contains(costs)': False, 'contains(quest)': False, 'contains(camelot)': False, 'contains(warner)': False, 'contains(bros)': False, 'contains(feature)': False, 'contains(length)': False, 'contains(fully)': False, 'contains(animated)': False, 'contains(steal)': False, 'contains(clout)': False, 'contains(disney)': False, 'contains(cartoon)': False, 'contains(empire)': False, 'contains(mouse)': False, 'contains(reason)': False, 'contains(worried)': False, 'contains(other)': True, 'contains(recent)': False, 'contains(challenger)': False, 'contains(throne)': False, 'contains(last)': False, 'contains(fall)': False, 'contains(promising)': False, 'contains(flawed)': False, 'contains(20th)': False, 'contains(century)': False, 'contains(fox)': False, 'contains(anastasia)': False, 'contains(hercules)': False, 'contains(lively)': False, 'contains(colorful)': False, 'contains(palate)': False, 'contains(had)': False, 'contains(beat)': False, 'contains(hands)': False, 'contains(crown)': False, 'contains(1997)': False, 'contains(piece)': False, 'contains(animation)': False, 'contains(year)': False, 'contains(contest)': False, 'contains(arrival)': False, 'contains(magic)': False, 'contains(kingdom)': False, 'contains(mediocre)': False, 'contains(--)': True, 'contains(d)': False, 'contains(pocahontas)': False, 'contains(those)': False, 'contains(keeping)': False, 'contains(score)': False, 'contains(nearly)': False, 'contains(dull)': False, 'contains(revolves)': False, 'contains(adventures)': False, 'contains(free)': False, 'contains(spirited)': False, 'contains(kayley)': False, 'contains(voiced)': False, 'contains(jessalyn)': False, 'contains(gilsig)': False, 
'contains(early)': True, 'contains(daughter)': False, 'contains(belated)': False, 'contains(knight)': False, 'contains(king)': False, 'contains(arthur)': False, 'contains(round)': False, 'contains(table)': False, 'contains(dream)': False, 'contains(follow)': False, 'contains(father)': False, 'contains(footsteps)': False, 'contains(she)': True, 'contains(chance)': False, 'contains(evil)': False, 'contains(warlord)': False, 'contains(ruber)': False, 'contains(gary)': False, 'contains(oldman)': False, 'contains(ex)': False, 'contains(gone)': False, 'contains(steals)': False, 'contains(magical)': False, 'contains(sword)': False, 'contains(excalibur)': False, 'contains(accidentally)': False, 'contains(loses)': False, 'contains(dangerous)': True, 'contains(booby)': False, 'contains(trapped)': False, 'contains(forest)': False, 'contains(help)': True, 'contains(hunky)': False, 'contains(blind)': False, 'contains(timberland)': False, 'contains(dweller)': False, 'contains(garrett)': False, 'contains(carey)': False, 'contains(elwes)': False, 'contains(headed)': False, 'contains(dragon)': False, 'contains(eric)': False, 'contains(idle)': False, 'contains(rickles)': False, 'contains(arguing)': False, 'contains(itself)': False, 'contains(able)': False, 'contains(medieval)': False, 'contains(sexist)': False, 'contains(prove)': False, 'contains(fighter)': False, 'contains(side)': False, 'contains(pure)': False, 'contains(showmanship)': False, 'contains(essential)': False, 'contains(element)': False, 'contains(expected)': False, 'contains(climb)': False, 'contains(high)': False, 'contains(ranks)': False, 'contains(differentiates)': False, 'contains(something)': False, 'contains(saturday)': False, 'contains(morning)': False, 'contains(subpar)': False, 'contains(instantly)': False, 'contains(forgettable)': False, 'contains(songs)': False, 'contains(integrated)': False, 'contains(computerized)': False, 'contains(footage)': False, 'contains(compare)': False, 'contains(run)': False, 'contains(angry)': False, 'contains(ogre)': False, 'contains(herc)': False, 'contains(battle)': False, 'contains(hydra)': False, 'contains(rest)': False, 'contains(case)': False, 'contains(stink)': False, 'contains(none)': False, 'contains(remotely)': False, 'contains(interesting)': False, 'contains(race)': False, 'contains(bland)': False, 'contains(end)': False, 'contains(tie)': False, 'contains(win)': False, 'contains(comedy)': True, 'contains(shtick)': False, 'contains(awfully)': False, 'contains(cloying)': False, 'contains(least)': True, 'contains(signs)': False, 'contains(pulse)': False, 'contains(fans)': False, "contains(-')": False, 'contains(90s)': False, 'contains(tgif)': False, 'contains(will)': True, 'contains(thrilled)': False, 'contains(jaleel)': False, 'contains(urkel)': False, 'contains(white)': False, 'contains(bronson)': False, 'contains(balki)': False, 'contains(pinchot)': False, 'contains(sharing)': False, 'contains(nicely)': False, 'contains(realized)': False, 'contains(though)': False, 'contains(m)': False, 'contains(loss)': False, 'contains(recall)': False, 'contains(specific)': False, 'contains(providing)': False, 'contains(voice)': False, 'contains(talent)': False, 'contains(enthusiastic)': False, 'contains(paired)': False, 'contains(singers)': False, 'contains(sound)': False, 'contains(musical)': False, 'contains(moments)': False, 'contains(jane)': False, 'contains(seymour)': False, 'contains(celine)': False, 'contains(dion)': False, 'contains(must)': False, 'contains(strain)': False, 'contains(through)': 
False, 'contains(aside)': False, 'contains(children)': False, 'contains(probably)': False, 'contains(adults)': False, 'contains(grievous)': False, 'contains(error)': False, 'contains(lack)': False, 'contains(personality)': False, 'contains(learn)': False, 'contains(goes)': False, 'contains(synopsis)': False, 'contains(mentally)': False, 'contains(unstable)': False, 'contains(undergoing)': False, 'contains(psychotherapy)': False, 'contains(saves)': False, 'contains(boy)': False, 'contains(potentially)': False, 'contains(fatal)': False, 'contains(falls)': False, 'contains(love)': False, 'contains(mother)': False, 'contains(fledgling)': False, 'contains(restauranteur)': False, 'contains(unsuccessfully)': False, 'contains(attempting)': False, 'contains(gain)': False, 'contains(woman)': True, 'contains(favor)': False, 'contains(takes)': False, 'contains(pictures)': False, 'contains(kills)': False, 'contains(comments)': True, 'contains(stalked)': False, 'contains(yet)': False, 'contains(seemingly)': False, 'contains(endless)': True, 'contains(string)': False, 'contains(spurned)': False, 'contains(psychos)': False, 'contains(getting)': True, 'contains(revenge)': False, 'contains(type)': False, 'contains(stable)': False, 'contains(category)': False, 'contains(1990s)': False, 'contains(industry)': False, 'contains(theatrical)': False, 'contains(direct)': False, 'contains(proliferation)': False, 'contains(may)': False, 'contains(due)': False, 'contains(typically)': False, 'contains(inexpensive)': False, 'contains(produce)': False, 'contains(special)': False, 'contains(effects)': False, 'contains(stars)': False, 'contains(serve)': False, 'contains(vehicles)': False, 'contains(nudity)': False, 'contains(allowing)': False, 'contains(frequent)': False, 'contains(night)': False, 'contains(cable)': False, 'contains(wavers)': False, 'contains(slightly)': False, 'contains(norm)': False, 'contains(respect)': False, 'contains(psycho)': False, 'contains(never)': True, 'contains(affair)': False, 'contains(;)': False, 'contains(contrary)': False, 'contains(rejected)': False, 'contains(rather)': False, 'contains(lover)': False, 'contains(wife)': True, 'contains(husband)': False, 'contains(entry)': False, 'contains(doomed)': False, 'contains(collect)': False, 'contains(dust)': False, 'contains(viewed)': False, 'contains(midnight)': False, 'contains(provide)': False, 'contains(suspense)': False, 'contains(sets)': False, 'contains(interspersed)': False, 'contains(opening)': False, 'contains(credits)': False, 'contains(instance)': False, 'contains(serious)': False, 'contains(sounding)': False, 'contains(narrator)': False, 'contains(spouts)': False, 'contains(statistics)': False, 'contains(stalkers)': False, 'contains(ponders)': False, 'contains(cause)': False, 'contains(stalk)': False, 'contains(implicitly)': False, 'contains(implied)': False, 'contains(men)': False, 'contains(shown)': False, 'contains(snapshot)': False, 'contains(actor)': False, 'contains(jay)': False, 'contains(underwood)': False, 'contains(states)': False, 'contains(daryl)': False, 'contains(gleason)': False, 'contains(stalker)': False, 'contains(brooke)': False, 'contains(daniels)': False, 'contains(meant)': False, 'contains(called)': False, 'contains(guesswork)': False, 'contains(required)': False, 'contains(proceeds)': False, 'contains(begins)': False, 'contains(obvious)': False, 'contains(sequence)': False, 'contains(contrived)': False, 'contains(quite)': False, 'contains(brings)': False, 'contains(victim)': False, 'contains(together)': False, 
'contains(obsesses)': False, 'contains(follows)': False, 'contains(tries)': True, 'contains(woo)': False, 'contains(plans)': False, 'contains(become)': False, 'contains(desperate)': False, 'contains(elaborate)': False, 'contains(include)': False, 'contains(cliche)': False, 'contains(murdered)': False, 'contains(pet)': False, 'contains(require)': False, 'contains(found)': False, 'contains(exception)': False, 'contains(cat)': False, 'contains(shower)': False, 'contains(events)': False, 'contains(lead)': True, 'contains(inevitable)': False, 'contains(showdown)': False, 'contains(survives)': False, 'contains(invariably)': False, 'contains(conclusion)': False, 'contains(turkey)': False, 'contains(uniformly)': False, 'contains(adequate)': False, 'contains(anything)': False, 'contains(home)': False, 'contains(either)': False, 'contains(turns)': False, 'contains(toward)': False, 'contains(melodrama)': False, 'contains(overdoes)': False, 'contains(words)': False, 'contains(manages)': False, 'contains(creepy)': False, 'contains(pass)': False, 'contains(demands)': False, 'contains(maryam)': False, 'contains(abo)': False, 'contains(close)': False, 'contains(played)': True, 'contains(bond)': False, 'contains(chick)': False, 'contains(living)': False, 'contains(daylights)': False, 'contains(equally)': False, 'contains(title)': False, 'contains(ditzy)': False, 'contains(strong)': False, 'contains(independent)': False, 'contains(business)': False, 'contains(owner)': False, 'contains(needs)': False, 'contains(proceed)': False, 'contains(example)': False, 'contains(suspicions)': False, 'contains(ensure)': False, 'contains(use)': False, 'contains(excuse)': False, 'contains(decides)': False, 'contains(return)': False, 'contains(toolbox)': False, 'contains(left)': False, 'contains(place)': True, 'contains(house)': False, 'contains(leave)': False, 'contains(door)': False, 'contains(answers)': False, 'contains(opens)': False, 'contains(wanders)': False, 'contains(returns)': False, 'contains(enters)': False, 'contains(our)': False, 'contains(heroine)': False, 'contains(danger)': False, 'contains(somehow)': False, 'contains(parked)': False, 'contains(front)': False, 'contains(right)': False, 'contains(oblivious)': False, 'contains(presence)': False, 'contains(inside)': False, 'contains(whole)': False, 'contains(episode)': False, 'contains(places)': False, 'contains(incredible)': False, 'contains(suspension)': False, 'contains(disbelief)': False, 'contains(questions)': False, 'contains(validity)': False, 'contains(intelligence)': False, 'contains(receives)': False, 'contains(highly)': False, 'contains(derivative)': False, 'contains(somewhat)': False, 'contains(boring)': False, 'contains(cannot)': False, 'contains(watched)': False, 'contains(rated)': False, 'contains(mostly)': False, 'contains(several)': False, 'contains(murder)': False, 'contains(brief)': True, 'contains(strip)': False, 'contains(bar)': False, 'contains(offensive)': False, 'contains(many)': True, 'contains(thrillers)': False, 'contains(mood)': False, 'contains(stake)': False, 'contains(else)': False, 'contains(capsule)': True, 'contains(2176)': False, 'contains(planet)': False, 'contains(mars)': False, 'contains(taking)': False, 'contains(custody)': False, 'contains(accused)': False, 'contains(murderer)': False, 'contains(face)': False, 'contains(menace)': False, 'contains(lot)': False, 'contains(fighting)': False, 'contains(john)': False, 'contains(carpenter)': False, 'contains(reprises)': False, 'contains(ideas)': False, 'contains(previous)': 
False, 'contains(assault)': False, 'contains(precinct)': False, 'contains(13)': False, 'contains(homage)': False, 'contains(himself)': False, 'contains(0)': False, 'contains(+)': False, 'contains(believes)': False, 'contains(fight)': True, 'contains(horrible)': False, 'contains(writer)': False, 'contains(supposedly)': False, 'contains(expert)': False, 'contains(mistake)': False, 'contains(ghosts)': False, 'contains(drawn)': False, 'contains(humans)': False, 'contains(surprisingly)': False, 'contains(low)': False, 'contains(powered)': False, 'contains(alien)': False, 'contains(addition)': False, 'contains(anybody)': False, 'contains(made)': False, 'contains(grounds)': False, 'contains(sue)': False, 'contains(chock)': False, 'contains(full)': False, 'contains(pieces)': False, 'contains(prince)': False, 'contains(darkness)': False, 'contains(surprising)': False, 'contains(managed)': False, 'contains(fit)': False, 'contains(admittedly)': False, 'contains(novel)': False, 'contains(science)': False, 'contains(fiction)': False, 'contains(experience)': False, 'contains(terraformed)': False, 'contains(walk)': False, 'contains(surface)': False, 'contains(without)': False, 'contains(breathing)': False, 'contains(gear)': False, 'contains(budget)': False, 'contains(mentioned)': False, 'contains(gravity)': False, 'contains(increased)': False, 'contains(earth)': False, 'contains(easier)': False, 'contains(society)': False, 'contains(changed)': False, 'contains(advanced)': False, 'contains(culture)': False, 'contains(women)': False, 'contains(positions)': False, 'contains(control)': False, 'contains(view)': False, 'contains(stagnated)': False, 'contains(female)': False, 'contains(beyond)': False, 'contains(minor)': False, 'contains(technological)': False, 'contains(advances)': False, 'contains(less)': False, 'contains(175)': False, 'contains(expect)': False, 'contains(change)': False, 'contains(ten)': False, 'contains(basic)': False, 'contains(common)': False, 'contains(except)': False, 'contains(yes)': False, 'contains(replaced)': False, 'contains(tacky)': False, 'contains(rundown)': False, 'contains(martian)': False, 'contains(mining)': False, 'contains(colony)': False, 'contains(having)': False, 'contains(criminal)': False, 'contains(napolean)': False, 'contains(wilson)': False, 'contains(desolation)': False, 'contains(williams)': False, 'contains(facing)': False, 'contains(hoodlums)': False, 'contains(automatic)': False, 'contains(weapons)': False, 'contains(nature)': False, 'contains(behave)': False, 'contains(manner)': False, 'contains(essentially)': False, 'contains(human)': False, 'contains(savages)': False, 'contains(lapse)': False, 'contains(imagination)': False, 'contains(told)': False, 'contains(flashback)': False, 'contains(entirely)': False, 'contains(filmed)': False, 'contains(almost)': False, 'contains(tones)': False, 'contains(red)': False, 'contains(yellow)': False, 'contains(black)': False, 'contains(powerful)': False, 'contains(scene)': True, 'contains(train)': True, 'contains(rushing)': False, 'contains(heavy)': False, 'contains(sadly)': False, 'contains(buildup)': False, 'contains(terror)': False, 'contains(creates)': False, 'contains(looks)': True, 'contains(fugitive)': False, 'contains(wannabes)': False, 'contains(rock)': False, 'contains(band)': False, 'contains(kiss)': False, 'contains(building)': False, 'contains(bunch)': False, 'contains(sudden)': False, 'contains(jump)': False, 'contains(sucker)': False, 'contains(thinking)': False, 'contains(scary)': False, 
'contains(happening)': False, 'contains(standard)': False, 'contains(haunted)': False, 'contains(shock)': False, 'contains(great)': True, 'contains(newer)': False, 'contains(unimpressive)': False, 'contains(digital)': False, 'contains(decapitations)': False, 'contains(fights)': False, 'contains(short)': False, 'contains(stretch)': False, 'contains(release)': False, 'contains(mission)': False, 'contains(panned)': False, 'contains(reviewers)': False, 'contains(better)': False, 'contains(rate)': False, 'contains(scale)': False, 'contains(following)': False, 'contains(showed)': False, 'contains(liked)': False, 'contains(moderately)': False, 'contains(classic)': False, 'contains(comment)': False, 'contains(twice)': False, 'contains(ask)': False, 'contains(yourself)': False, 'contains(8mm)': False, 'contains(eight)': True, 'contains(millimeter)': False, 'contains(wholesome)': False, 'contains(surveillance)': False, 'contains(sight)': False, 'contains(values)': False, 'contains(becoming)': False, 'contains(enmeshed)': False, 'contains(seedy)': False, 'contains(sleazy)': False, 'contains(underworld)': False, 'contains(hardcore)': False, 'contains(pornography)': False, 'contains(bubbling)': False, 'contains(beneath)': False, 'contains(town)': False, 'contains(americana)': False, 'contains(sordid)': False, 'contains(sick)': False, 'contains(depraved)': False, 'contains(necessarily)': False, 'contains(stop)': True, 'contains(order)': False, 'contains(satisfy)': False, 'contains(twisted)': False, 'contains(desires)': False, 'contains(position)': False, 'contains(influence)': False, 'contains(kinds)': False, 'contains(demented)': False, 'contains(talking)': False, 'contains(snuff)': False, 'contains(supposed)': False, 'contains(documentaries)': False, 'contains(victims)': False, 'contains(brutalized)': False, 'contains(killed)': False, 'contains(camera)': False, 'contains(joel)': False, 'contains(schumacher)': False, 'contains(credit)': False, 'contains(batman)': False, 'contains(robin)': False, 'contains(kill)': False, 'contains(forever)': False, 'contains(client)': False, 'contains(thirds)': False, 'contains(unwind)': False, 'contains(fairly)': True, 'contains(conventional)': False, 'contains(persons)': False, 'contains(drama)': False, 'contains(albeit)': False, 'contains(particularly)': False, 'contains(unsavory)': False, 'contains(core)': False, 'contains(threatening)': False, 'contains(along)': True, 'contains(explodes)': False, 'contains(violence)': False, 'contains(think)': False, 'contains(finally)': False, 'contains(tags)': False, 'contains(ridiculous)': False, 'contains(self)': False, 'contains(righteous)': False, 'contains(finale)': False, 'contains(drags)': False, 'contains(unpleasant)': False, 'contains(trust)': False, 'contains(waste)': False, 'contains(hours)': False, 'contains(nicolas)': False, 'contains(snake)': False, 'contains(eyes)': False, 'contains(cage)': False, 'contains(private)': False, 'contains(investigator)': False, 'contains(tom)': False, 'contains(welles)': False, 'contains(hired)': False, 'contains(wealthy)': False, 'contains(philadelphia)': False, 'contains(widow)': False, 'contains(determine)': False, 'contains(whether)': False, 'contains(reel)': False, 'contains(safe)': False, 'contains(documents)': False, 'contains(girl)': False, 'contains(assignment)': True, 'contains(factly)': False, 'contains(puzzle)': False, 'contains(neatly)': False, 'contains(specialized)': False, 'contains(skills)': False, 'contains(training)': False, 'contains(easy)': False, 'contains(cops)': 
False, 'contains(toilet)': False, 'contains(tanks)': False, 'contains(clues)': False, 'contains(deeper)': False, 'contains(digs)': False, 'contains(investigation)': False, 'contains(obsessed)': False, 'contains(george)': False, 'contains(c)': False, 'contains(scott)': False, 'contains(paul)': False, 'contains(schrader)': False, 'contains(occasionally)': False, 'contains(flickering)': False, 'contains(whirs)': False, 'contains(sprockets)': False, 'contains(winding)': False, 'contains(projector)': False, 'contains(reminding)': False, 'contains(task)': False, 'contains(hints)': False, 'contains(toll)': False, 'contains(lovely)': False, 'contains(catherine)': False, 'contains(keener)': False, 'contains(frustrated)': False, 'contains(cleveland)': False, 'contains(ugly)': False, 'contains(split)': False, 'contains(level)': False, 'contains(harrisburg)': False, 'contains(pa)': False, 'contains(condemn)': False, 'contains(condone)': False, 'contains(subject)': False, 'contains(exploits)': False, 'contains(irony)': False, 'contains(seven)': False, 'contains(scribe)': False, 'contains(andrew)': False, 'contains(kevin)': True, 'contains(walker)': False, 'contains(vision)': False, 'contains(lane)': False, 'contains(limited)': False, 'contains(hollywood)': False, 'contains(product)': False, 'contains(snippets)': False, 'contains(covering)': False, 'contains(later)': False, 'contains(joaquin)': False, 'contains(phoenix)': False, 'contains(far)': False, 'contains(adult)': False, 'contains(bookstore)': False, 'contains(flunky)': False, 'contains(max)': False, 'contains(california)': False, 'contains(cover)': False, 'contains(horrid)': False, 'contains(screened)': False, 'contains(familiar)': False, 'contains(revelation)': False, 'contains(sexual)': False, 'contains(deviants)': False, 'contains(indeed)': False, 'contains(monsters)': False, 'contains(everyday)': False, 'contains(neither)': False, 'contains(super)': False, 'contains(nor)': False, 'contains(shocking)': False, 'contains(banality)': False, 'contains(exactly)': False, 'contains(felt)': False, 'contains(weren)': False, 'contains(nine)': False, 'contains(laughs)': False, 'contains(months)': False, 'contains(terrible)': False, 'contains(mr)': False, 'contains(hugh)': False, 'contains(grant)': False, 'contains(huge)': False, 'contains(dork)': False, 'contains(oral)': False, 'contains(sex)': False, 'contains(prostitution)': False, 'contains(referring)': False, 'contains(bugs)': False, 'contains(annoying)': False, 'contains(adam)': False, 'contains(sandler)': False, 'contains(jim)': False, 'contains(carrey)': False, 'contains(eye)': False, 'contains(flutters)': False, 'contains(nervous)': False, 'contains(smiles)': False, 'contains(slapstick)': False, 'contains(fistfight)': False, 'contains(delivery)': False, 'contains(room)': False, 'contains(culminating)': False, 'contains(joan)': False, 'contains(cusack)': False, 'contains(lap)': False, 'contains(paid)': False, 'contains($)': False, 'contains(60)': False, 'contains(included)': False, 'contains(obscene)': False, 'contains(double)': False, 'contains(entendres)': False, 'contains(obstetrician)': False, 'contains(pregnant)': False, 'contains(pussy)': False, 'contains(size)': False, 'contains(hairs)': False, 'contains(coat)': False, 'contains(nonetheless)': False, 'contains(exchange)': False, 'contains(cookie)': False, 'contains(cutter)': False, 'contains(originality)': False, 'contains(humor)': False, 'contains(successful)': False, 'contains(child)': False, 'contains(psychiatrist)': False, 
'contains(psychologist)': False, 'contains(scriptwriters)': False, 'contains(could)': False, 'contains(inject)': False, 'contains(unfunny)': False, 'contains(kid)': False, 'contains(dad)': False, 'contains(asshole)': False, 'contains(eyelashes)': False, 'contains(offers)': False, 'contains(smile)': False, 'contains(responds)': False, 'contains(english)': False, 'contains(accent)': False, 'contains(attitude)': False, 'contains(possibly)': False, 'contains(_huge_)': False, 'contains(beside)': False, 'contains(includes)': False, 'contains(needlessly)': False, 'contains(stupid)': False, 'contains(jokes)': False, 'contains(olds)': False, 'contains(everyone)': False, 'contains(shakes)': False, 'contains(anyway)': False, 'contains(finds)': False, 'contains(usual)': False, 'contains(reaction)': False, 'contains(fluttered)': False, 'contains(paves)': False, 'contains(possible)': False, 'contains(pregnancy)': False, 'contains(birth)': False, 'contains(gag)': False, 'contains(book)': False, 'contains(friend)': False, 'contains(arnold)': True, 'contains(provides)': False, 'contains(cacophonous)': False, 'contains(funny)': True, 'contains(beats)': False, 'contains(costumed)': False, 'contains(arnie)': False, 'contains(dinosaur)': False, 'contains(draw)': False, 'contains(parallels)': False, 'contains(toy)': False, 'contains(store)': False, 'contains(jeff)': False, 'contains(goldblum)': False, 'contains(hid)': False, 'contains(dreadful)': False, 'contains(hideaway)': False, 'contains(artist)': False, 'contains(fear)': False, 'contains(simultaneous)': False, 'contains(longing)': False, 'contains(commitment)': False, 'contains(doctor)': False, 'contains(recently)': False, 'contains(switch)': False, 'contains(veterinary)': False, 'contains(medicine)': False, 'contains(obstetrics)': False, 'contains(joke)': False, 'contains(old)': False, 'contains(foreign)': False, 'contains(guy)': True, 'contains(mispronounces)': False, 'contains(stereotype)': False, 'contains(say)': False, 'contains(yakov)': False, 'contains(smirnov)': False, 'contains(favorite)': False, 'contains(vodka)': False, 'contains(hence)': False, 'contains(take)': False, 'contains(volvo)': False, 'contains(nasty)': False, 'contains(unamusing)': False, 'contains(heads)': False, 'contains(simultaneously)': False, 'contains(groan)': False, 'contains(failure)': False, 'contains(loud)': False, 'contains(failed)': False, 'contains(uninspired)': False, 'contains(lunacy)': False, 'contains(sunset)': False, 'contains(boulevard)': False, 'contains(arrest)': False, 'contains(please)': False, 'contains(caught)': False, 'contains(pants)': False, 'contains(bring)': False, 'contains(theaters)': False, 'contains(faces)': False, 'contains(90)': False, 'contains(forced)': False, 'contains(unauthentic)': False, 'contains(anyone)': False, 'contains(q)': False, 'contains(80)': False, 'contains(sorry)': False, 'contains(money)': False, 'contains(unfulfilled)': False, 'contains(desire)': False, 'contains(spend)': False, 'contains(bucks)': False, 'contains(call)': False, 'contains(road)': False, 'contains(trip)': False, 'contains(walking)': False, 'contains(wounded)': False, 'contains(stellan)': False, 'contains(skarsg)': False, 'contains(rd)': False, 'contains(convincingly)': False, 'contains(zombified)': False, 'contains(drunken)': False, 'contains(loser)': False, 'contains(difficult)': True, 'contains(smelly)': False, 'contains(boozed)': False, 'contains(reliable)': False, 'contains(swedish)': False, 'contains(adds)': False, 'contains(depth)': False, 
'contains(significance)': False, 'contains(plodding)': False, 'contains(aberdeen)': False, 'contains(sentimental)': False, 'contains(painfully)': False, 'contains(mundane)': False, 'contains(european)': False, 'contains(playwright)': False, 'contains(august)': False, 'contains(strindberg)': False, 'contains(built)': False, 'contains(career)': False, 'contains(families)': False, 'contains(relationships)': False, 'contains(paralyzed)': False, 'contains(secrets)': False, 'contains(unable)': False, 'contains(express)': False, 'contains(longings)': False, 'contains(accurate)': False, 'contains(reflection)': False, 'contains(strives)': False, 'contains(focusing)': False, 'contains(pairing)': False, 'contains(alcoholic)': False, 'contains(tomas)': False, 'contains(alienated)': False, 'contains(openly)': False, 'contains(hostile)': False, 'contains(yuppie)': False, 'contains(kaisa)': False, 'contains(lena)': False, 'contains(headey)': False, 'contains(gossip)': False, 'contains(haven)': False, 'contains(spoken)': False, 'contains(wouldn)': False, 'contains(norway)': False, 'contains(scotland)': False, 'contains(automobile)': False, 'contains(charlotte)': False, 'contains(rampling)': False, 'contains(sand)': False, 'contains(rotting)': False, 'contains(hospital)': False, 'contains(bed)': False, 'contains(cancer)': False, 'contains(soap)': False, 'contains(opera)': False, 'contains(twist)': False, 'contains(days)': False, 'contains(live)': False, 'contains(blitzed)': False, 'contains(step)': False, 'contains(foot)': False, 'contains(plane)': False, 'contains(hits)': False, 'contains(open)': False, 'contains(loathing)': False, 'contains(each)': True, 'contains(periodic)': False, 'contains(stops)': True, 'contains(puke)': False, 'contains(dashboard)': False, 'contains(whenever)': False, 'contains(muttering)': False, 'contains(rotten)': False, 'contains(turned)': False, 'contains(sloshed)': False, 'contains(viewpoint)': False, 'contains(recognizes)': False, 'contains(apple)': False, 'contains(hasn)': False, 'contains(fallen)': False, 'contains(tree)': False, 'contains(nosebleeds)': False, 'contains(snorting)': False, 'contains(coke)': False, 'contains(sabotages)': False, 'contains(personal)': False, 'contains(indifference)': False, 'contains(restrain)': False, 'contains(vindictive)': False, 'contains(temper)': False, 'contains(ain)': False, 'contains(pair)': False, 'contains(true)': False, 'contains(notes)': False, 'contains(unspoken)': False, 'contains(familial)': False, 'contains(empathy)': False, 'contains(note)': False, 'contains(repetitively)': False, 'contains(bitchy)': False, 'contains(screenwriters)': False, 'contains(kristin)': False, 'contains(amundsen)': False, 'contains(hans)': False, 'contains(petter)': False, 'contains(moland)': False, 'contains(fabricate)': False, 'contains(series)': True, 'contains(contrivances)': False, 'contains(propel)': False, 'contains(forward)': False, 'contains(roving)': False, 'contains(hooligans)': False, 'contains(drunks)': False, 'contains(nosy)': False, 'contains(flat)': False, 'contains(tires)': False, 'contains(figure)': False, 'contains(schematic)': False, 'contains(convenient)': False, 'contains(narrative)': False, 'contains(reach)': False, 'contains(unveil)': False, 'contains(dark)': False, 'contains(past)': False, 'contains(simplistic)': False, 'contains(devices)': False, 'contains(trivialize)': False, 'contains(conflict)': False, 'contains(mainstays)': False, 'contains(wannabe)': False, 'contains(exists)': False, 'contains(purely)': False, 
'contains(sake)': False, 'contains(weak)': False, 'contains(unimaginative)': False, 'contains(casting)': False, 'contains(thwarts)': False, 'contains(pivotal)': False, 'contains(role)': False, 'contains(were)': False, 'contains(stronger)': False, 'contains(actress)': False, 'contains(perhaps)': False, 'contains(coast)': True, 'contains(performances)': False, 'contains(moody)': False, 'contains(haunting)': False, 'contains(cinematography)': False, 'contains(rendering)': False, 'contains(pastoral)': False, 'contains(ghost)': False, 'contains(reference)': False, 'contains(certain)': False, 'contains(superior)': False, 'contains(indie)': False, 'contains(intentional)': False, 'contains(busy)': False, 'contains(using)': False, 'contains(furrowed)': False, 'contains(brow)': False, 'contains(convey)': False, 'contains(twitch)': False, 'contains(insouciance)': False, 'contains(paying)': False, 'contains(attention)': False, 'contains(maybe)': False, 'contains(doing)': False, 'contains(reveal)': False, 'contains(worthwhile)': False, 'contains(earlier)': False, 'contains(released)': False, 'contains(2001)': False, 'contains(jonathan)': False, 'contains(nossiter)': False, 'contains(captivating)': False, 'contains(wonders)': False, 'contains(disturbed)': False, 'contains(parental)': False, 'contains(figures)': False, 'contains(bound)': False, 'contains(ceremonial)': False, 'contains(wedlock)': False, 'contains(differences)': False, 'contains(presented)': False, 'contains(significant)': False, 'contains(luminous)': False, 'contains(diva)': False, 'contains(preening)': False, 'contains(static)': False, 'contains(solid)': False, 'contains(performance)': False, 'contains(pathetic)': False, 'contains(drunk)': False, 'contains(emote)': False, 'contains(besides)': False, 'contains(catatonic)': False, 'contains(sorrow)': False, 'contains(genuine)': False, 'contains(ferocity)': False, 'contains(sexually)': False, 'contains(charged)': False, 'contains(frisson)': False, 'contains(during)': False, 'contains(understated)': False, 'contains(confrontations)': False, 'contains(suggest)': False, 'contains(gray)': False, 'contains(zone)': False, 'contains(complications)': False, 'contains(accompany)': False, 'contains(torn)': False, 'contains(romance)': False, 'contains(stifled)': False, 'contains(curiosity)': False, 'contains(thoroughly)': False, 'contains(explores)': False, 'contains(neurotic)': False, 'contains(territory)': False, 'contains(delving)': False, 'contains(americanization)': False, 'contains(greece)': False, 'contains(mysticism)': False, 'contains(illusion)': False, 'contains(deflect)': False, 'contains(pain)': False, 'contains(overloaded)': False, 'contains(willing)': False, 'contains(come)': False, 'contains(traditional)': False, 'contains(ambitious)': False, 'contains(sleepwalk)': False, 'contains(rhythms)': False, 'contains(timing)': False, 'contains(driven)': False, 'contains(stories)': False, 'contains(complexities)': False, 'contains(depressing)': False, 'contains(answer)': False, 'contains(lawrence)': False, 'contains(kasdan)': False, 'contains(trite)': False, 'contains(useful)': False, 'contains(grand)': False, 'contains(canyon)': False, 'contains(steve)': False, 'contains(martin)': False, 'contains(mogul)': False, 'contains(pronounces)': False, 'contains(riddles)': False, 'contains(answered)': False, 'contains(advice)': False, 'contains(heart)': False, 'contains(french)': False, 'contains(sees)': True, 'contains(parents)': False, 'contains(tim)': False, 'contains(roth)': False, 'contains(oops)': 
False, 'contains(vows)': False, 'contains(taught)': False, 'contains(musketeer)': False, 'contains(dude)': False, 'contains(used)': True, 'contains(fourteen)': False, 'contains(arrgh)': False, 'contains(swish)': False, 'contains(zzzzzzz)': False, 'contains(original)': False, 'contains(lacks)': False, 'contains(energy)': False, 'contains(next)': False, 'contains(hmmmm)': False, 'contains(justin)': False, 'contains(chambers)': False, 'contains(basically)': False, 'contains(uncharismatic)': False, 'contains(version)': False, 'contains(chris)': False, 'contains(o)': False, 'contains(donnell)': False, 'contains(range)': False, 'contains(mena)': False, 'contains(suvari)': False, 'contains(thora)': False, 'contains(birch)': False, 'contains(dungeons)': False, 'contains(dragons)': False, 'contains(miscast)': False, 'contains(deliveries)': False, 'contains(piss)': False, 'contains(poor)': False, 'contains(ms)': False, 'contains(fault)': False, 'contains(definitely)': False, 'contains(higher)': False, 'contains(semi)': False, 'contains(saving)': False, 'contains(grace)': False, 'contains(wise)': False, 'contains(irrepressible)': False, 'contains(once)': True, 'contains(thousand)': False, 'contains(god)': False, 'contains(beg)': False, 'contains(agent)': False, 'contains(marketplace)': False, 'contains(modern)': False, 'contains(day)': True, 'contains(roles)': False, 'contains(romantic)': False, 'contains(gunk)': False, 'contains(alright)': False, 'contains(yeah)': False, 'contains(yikes)': False, 'contains(notches)': False, 'contains(fellas)': False, 'contains(blares)': False, 'contains(ear)': False, 'contains(accentuate)': False, 'contains(annoy)': False, 'contains(important)': False, 'contains(behind)': False, 'contains(recognize)': False, 'contains(epic)': False, 'contains(fluffy)': False, 'contains(rehashed)': False, 'contains(cake)': False, 'contains(created)': False, 'contains(shrewd)': False, 'contains(advantage)': False, 'contains(kung)': True, 'contains(fu)': True, 'contains(phenomenon)': False, 'contains(test)': False, 'contains(dudes)': False, 'contains(keep)': False, 'contains(reading)': False, 'contains(editing)': False, 'contains(shoddy)': False, 'contains(banal)': False, 'contains(stilted)': False, 'contains(plentiful)': False, 'contains(top)': True, 'contains(horse)': False, 'contains(carriage)': False, 'contains(stand)': False, 'contains(opponent)': False, 'contains(scampering)': False, 'contains(cut)': False, 'contains(mouseketeer)': False, 'contains(rope)': False, 'contains(tower)': False, 'contains(jumping)': False, 'contains(chords)': False, 'contains(hanging)': False, 'contains(says)': False, 'contains(14)': False, 'contains(shirt)': False, 'contains(strayed)': False, 'contains(championing)': False, 'contains(fun)': True, 'contains(stretches)': False, 'contains(atrocious)': False, 'contains(lake)': False, 'contains(reminded)': False, 'contains(school)': False, 'contains(cringe)': False, 'contains(musketeers)': False, 'contains(fat)': False, 'contains(raison)': False, 'contains(etre)': False, 'contains(numbers)': False, 'contains(hoping)': False, 'contains(packed)': False, 'contains(stuntwork)': False, 'contains(promoted)': False, 'contains(trailer)': False, 'contains(major)': False, 'contains(swashbuckling)': False, 'contains(beginning)': False, 'contains(finishes)': False, 'contains(juggling)': False, 'contains(ladders)': False, 'contains(ladder)': True, 'contains(definite)': False, 'contains(keeper)': False, 'contains(regurgitated)': False, 'contains(crap)': False, 
'contains(tell)': False, 'contains(deneuve)': False, 'contains(placed)': False, 'contains(hullo)': False, 'contains(barely)': False, 'contains(ugh)': False, 'contains(small)': False, 'contains(annoyed)': False, 'contains(trash)': False, 'contains(gang)': False, 'contains(vow)': False, 'contains(stay)': False, 'contains(thank)': False, 'contains(outlaws)': False, 'contains(5)': False, 'contains(crouching)': False, 'contains(tiger)': False, 'contains(hidden)': False, 'contains(matrix)': False, 'contains(replacement)': False, 'contains(killers)': False, 'contains(6)': False, 'contains(romeo)': False, 'contains(die)': False, 'contains(shanghai)': False, 'contains(noon)': False, 'contains(remembered)': False, 'contains(dr)': False, 'contains(hannibal)': False, 'contains(lecter)': False, 'contains(michael)': False, 'contains(mann)': False, 'contains(forensics)': False, 'contains(thriller)': False, 'contains(manhunter)': False, 'contains(scottish)': False, 'contains(brian)': False, 'contains(cox)': False}
###Markdown
Now that we've defined our feature extractor, we can use it to train a classifier to label new movie reviews. To check how reliable the resulting classifier is, we compute its accuracy on the test set [1]. And once again, we can use show_most_informative_features() to find out which features the classifier found to be most informative.
###Code
featuresets = [(document_features(d), c) for (d,c) in documents]
train_set, test_set = featuresets[100:], featuresets[:100]
classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set)) #1
classifier.show_most_informative_features(5)
###Output
Most Informative Features
contains(turkey) = True neg : pos = 8.8 : 1.0
contains(unimaginative) = True neg : pos = 8.4 : 1.0
contains(schumacher) = True neg : pos = 7.0 : 1.0
contains(suvari) = True neg : pos = 7.0 : 1.0
contains(mena) = True neg : pos = 7.0 : 1.0
###Markdown
Exercise 3. Use this classifier to classify a new text (e.g., a customer review) of your own choosing.
###Code
doc = nltk.word_tokenize("Unfortunately I couldn't try the lava cake since it's only available for dinner menu.")
classifier.classify(document_features(doc))
###Output
_____no_output_____
###Markdown
1.4 Part-of-Speech Tagging In Chapter 5, we built a regular expression tagger that chooses a part-of-speech tag for a word by looking at the internal make-up of the word. However, that regular expression tagger had to be hand-crafted. Instead, we can train a classifier to work out which suffixes are most informative. Let's begin by finding out what the most common suffixes are:
###Code
from nltk.corpus import brown
suffix_fdist = nltk.FreqDist()
for word in brown.words():
word = word.lower()
suffix_fdist[word[-1:]] += 1
suffix_fdist[word[-2:]] += 1
suffix_fdist[word[-3:]] += 1
common_suffixes = [suffix for (suffix, count) in suffix_fdist.most_common(100)]
print(common_suffixes)
###Output
['e', ',', '.', 's', 'd', 't', 'he', 'n', 'a', 'of', 'the', 'y', 'r', 'to', 'in', 'f', 'o', 'ed', 'nd', 'is', 'on', 'l', 'g', 'and', 'ng', 'er', 'as', 'ing', 'h', 'at', 'es', 'or', 're', 'it', '``', 'an', "''", 'm', ';', 'i', 'ly', 'ion', 'en', 'al', '?', 'nt', 'be', 'hat', 'st', 'his', 'th', 'll', 'le', 'ce', 'by', 'ts', 'me', 've', "'", 'se', 'ut', 'was', 'for', 'ent', 'ch', 'k', 'w', 'ld', '`', 'rs', 'ted', 'ere', 'her', 'ne', 'ns', 'ith', 'ad', 'ry', ')', '(', 'te', '--', 'ay', 'ty', 'ot', 'p', 'nce', "'s", 'ter', 'om', 'ss', ':', 'we', 'are', 'c', 'ers', 'uld', 'had', 'so', 'ey']
###Markdown
Next, we'll define a feature extractor function which checks a given word for these suffixes:
###Code
def pos_features(word):
features = {}
for suffix in common_suffixes:
features['endswith({})'.format(suffix)] = word.lower().endswith(suffix)
return features
print (pos_features("visited"))
###Output
{'endswith(e)': False, 'endswith(,)': False, 'endswith(.)': False, 'endswith(s)': False, 'endswith(d)': True, 'endswith(t)': False, 'endswith(he)': False, 'endswith(n)': False, 'endswith(a)': False, 'endswith(of)': False, 'endswith(the)': False, 'endswith(y)': False, 'endswith(r)': False, 'endswith(to)': False, 'endswith(in)': False, 'endswith(f)': False, 'endswith(o)': False, 'endswith(ed)': True, 'endswith(nd)': False, 'endswith(is)': False, 'endswith(on)': False, 'endswith(l)': False, 'endswith(g)': False, 'endswith(and)': False, 'endswith(ng)': False, 'endswith(er)': False, 'endswith(as)': False, 'endswith(ing)': False, 'endswith(h)': False, 'endswith(at)': False, 'endswith(es)': False, 'endswith(or)': False, 'endswith(re)': False, 'endswith(it)': False, 'endswith(``)': False, 'endswith(an)': False, "endswith('')": False, 'endswith(m)': False, 'endswith(;)': False, 'endswith(i)': False, 'endswith(ly)': False, 'endswith(ion)': False, 'endswith(en)': False, 'endswith(al)': False, 'endswith(?)': False, 'endswith(nt)': False, 'endswith(be)': False, 'endswith(hat)': False, 'endswith(st)': False, 'endswith(his)': False, 'endswith(th)': False, 'endswith(ll)': False, 'endswith(le)': False, 'endswith(ce)': False, 'endswith(by)': False, 'endswith(ts)': False, 'endswith(me)': False, 'endswith(ve)': False, "endswith(')": False, 'endswith(se)': False, 'endswith(ut)': False, 'endswith(was)': False, 'endswith(for)': False, 'endswith(ent)': False, 'endswith(ch)': False, 'endswith(k)': False, 'endswith(w)': False, 'endswith(ld)': False, 'endswith(`)': False, 'endswith(rs)': False, 'endswith(ted)': True, 'endswith(ere)': False, 'endswith(her)': False, 'endswith(ne)': False, 'endswith(ns)': False, 'endswith(ith)': False, 'endswith(ad)': False, 'endswith(ry)': False, 'endswith())': False, 'endswith(()': False, 'endswith(te)': False, 'endswith(--)': False, 'endswith(ay)': False, 'endswith(ty)': False, 'endswith(ot)': False, 'endswith(p)': False, 'endswith(nce)': False, "endswith('s)": False, 'endswith(ter)': False, 'endswith(om)': False, 'endswith(ss)': False, 'endswith(:)': False, 'endswith(we)': False, 'endswith(are)': False, 'endswith(c)': False, 'endswith(ers)': False, 'endswith(uld)': False, 'endswith(had)': False, 'endswith(so)': False, 'endswith(ey)': False}
###Markdown
Now that we've defined our feature extractor, we can use it to train a classifier. For a decision tree classifier, please use nltk.DecisionTreeClassifier (a sketch follows the next cell).
###Code
tagged_words = brown.tagged_words(categories='news')
featuresets = [(pos_features(n), g) for (n,g) in tagged_words]
size = int(len(featuresets) * 0.1)
train_set, test_set = featuresets[size:], featuresets[:size]
classifier=nltk.NaiveBayesClassifier.train(train_set)
classifier.classify(pos_features('cats'))
###Output
_____no_output_____
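###Markdown
A hedged sketch of the decision-tree variant mentioned above; the small training slice is an assumption to keep training time reasonable, since `nltk.DecisionTreeClassifier` is slow on the full news section:
###Code
# sketch only: train a decision tree on a small slice of the featuresets
dt_train = train_set[:2000]
dt_classifier = nltk.DecisionTreeClassifier.train(dt_train)
dt_classifier.classify(pos_features('cats'))
###Output
_____no_output_____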
###Markdown
Exercise 4: Use this classifier to classify a new word of your own choosing.
###Code
classifier.classify(pos_features('studying'))
classifier.classify(pos_features('the'))
classifier.classify(pos_features('his'))
###Output
_____no_output_____
###Markdown
1.5 Exploiting Context By augmenting the feature extraction function, we could modify this part-of-speech tagger to leverage a variety of other word-internal features, such as the length of the word, the number of syllables it contains, or its prefix. However, as long as the feature extractor just looks at the target word, we have no way to add features that depend on the context that the word appears in. But contextual features often provide powerful clues about the correct tag: for example, when tagging the word "fly," knowing that the previous word is "a" will allow us to determine that it is functioning as a noun, not a verb. In order to accommodate features that depend on a word's context, we must revise the pattern that we used to define our feature extractor. Instead of just passing in the word to be tagged, we will pass in a complete (untagged) sentence, along with the index of the target word.
###Code
def pos_features(sentence, i): #1
features = {"suffix(1)": sentence[i][-1:],
"suffix(2)": sentence[i][-2:],
"suffix(3)": sentence[i][-3:]}
if i == 0:
features["prev-word"] = "<START>"
else:
features["prev-word"] = sentence[i-1]
return features
brown.sents()[0]
pos_features(brown.sents()[0], 8)
tagged_sents = brown.tagged_sents(categories='news')
featuresets = []
for tagged_sent in tagged_sents:
untagged_sent = nltk.tag.untag(tagged_sent)
for i, (word, tag) in enumerate(tagged_sent):
featuresets.append( (pos_features(untagged_sent, i), tag) )
###Output
_____no_output_____
###Markdown
Given a tagged sentence, return an untagged version of that sentence. I.e., return a list containing the first element of each tuple in tagged_sentence.
###Code
nltk.tag.untag([('John', 'NNP'), ('saw', 'VBD'), ('Mary', 'NNP')])
size = int(len(featuresets) * 0.1)
train_set, test_set = featuresets[size:], featuresets[:size]
classifier = nltk.NaiveBayesClassifier.train(train_set)
nltk.classify.accuracy(classifier, test_set)
###Output
_____no_output_____
###Markdown
1.6 Sequence Classification One sequence classification strategy, known as consecutive classification or greedy sequence classification, is to find the most likely class label for the first input, then to use that answer to help find the best label for the next input. The process can then be repeated until all of the inputs have been labeled. This is the approach that was taken by the bigram tagger from Chapter 5, which began by choosing a part-of-speech tag for the first word in the sentence, and then chose the tag for each subsequent word based on the word itself and the predicted tag for the previous word. First, we must augment our feature extractor function to take a history argument, which provides a list of the tags that we've predicted for the sentence so far [1]. Each tag in history corresponds with a word in sentence. But note that history will only contain tags for words we've already classified, that is, words to the left of the target word. Having defined a feature extractor, we can proceed to build our sequence classifier [2]. During training, we use the annotated tags to provide the appropriate history to the feature extractor, but when tagging new sentences, we generate the history list based on the output of the tagger itself.
###Code
def pos_features(sentence, i, history): #1
features = {"suffix(1)": sentence[i][-1:],
"suffix(2)": sentence[i][-2:],
"suffix(3)": sentence[i][-3:]}
if i == 0:
features["prev-word"] = "<START>"
features["prev-tag"] = "<START>"
else:
features["prev-word"] = sentence[i-1]
features["prev-tag"] = history[i-1]
return features
class ConsecutivePosTagger(nltk.TaggerI): #2
def __init__(self, train_sents):
train_set = []
for tagged_sent in train_sents:
untagged_sent = nltk.tag.untag(tagged_sent)
history = []
for i, (word, tag) in enumerate(tagged_sent):
featureset = pos_features(untagged_sent, i, history)
train_set.append( (featureset, tag) )
history.append(tag)
self.classifier = nltk.NaiveBayesClassifier.train(train_set)
def tag(self, sentence):
history = []
for i, word in enumerate(sentence):
featureset = pos_features(sentence, i, history)
tag = self.classifier.classify(featureset)
history.append(tag)
return zip(sentence, history)
tagged_sents = brown.tagged_sents(categories='news')
size = int(len(tagged_sents) * 0.1)
train_sents, test_sents = tagged_sents[size:], tagged_sents[:size]
tagger = ConsecutivePosTagger(train_sents)
print(tagger.evaluate(test_sents))
###Output
0.7980528511821975
###Markdown
Exercise 5. Use this classifier to tag a new sentence of your own choosing.
###Code
print(list(tagger.tag(['This','is','nice','weather','do','go','out','for','hiking'])))
###Output
[('This', 'DT'), ('is', 'BEZ'), ('nice', 'NN'), ('weather', 'AP'), ('do', 'DO'), ('go', 'VB'), ('out', 'RP'), ('for', 'IN'), ('hiking', 'VBG')]
###Markdown
2 Further Examples of Supervised Classification 2.1 Sentence Segmentation Sentence segmentation can be viewed as a classification task for punctuation: whenever we encounter a symbol that could possibly end a sentence, such as a period or a question mark, we have to decide whether it terminates the preceding sentence. The first step is to obtain some data that has already been segmented into sentences and convert it into a form that is suitable for extracting features:
###Code
import nltk
sents = nltk.corpus.treebank_raw.sents()
sents[0]
sents[1]
tokens = []
boundaries = set()
offset = 0
###Output
_____no_output_____
###Markdown
The append() method adds its argument as a single element to the end of a list, so the length of the list increases by one, whereas the extend() method iterates over its argument, adding each element to the list and thereby extending it. A quick toy demonstration appears below, before we apply extend() to the corpus sentences.
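###Markdown
A quick toy demonstration of the difference (plain Python lists, not corpus data):
###Code
# append() adds its argument as one element; extend() adds each element
demo = [1, 2]
demo.append([3, 4])   # -> [1, 2, [3, 4]]
print(demo)
demo = [1, 2]
demo.extend([3, 4])   # -> [1, 2, 3, 4]
print(demo)
###Output
_____no_output_____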
###Code
for sent in sents:
tokens.extend(sent)
offset += len(sent)
boundaries.add(offset-1)
tokens
###Output
_____no_output_____
###Markdown
Here, tokens is a merged list of tokens from the individual sentences, and boundaries is a set containing the indexes of all sentence-boundary tokens. Next, we need to specify the features of the data that will be used in order to decide whether punctuation indicates a sentence-boundary:
###Code
def punct_features(tokens, i):
return {'next-word-capitalized': tokens[i+1][0].isupper(),
'prev-word': tokens[i-1].lower(),
'punct': tokens[i],
'prev-word-is-one-char': len(tokens[i-1]) == 1}
###Output
_____no_output_____
###Markdown
Based on this feature extractor, we can create a list of labeled featuresets by selecting all the punctuation tokens, and tagging whether they are boundary tokens or not:
###Code
featuresets = [(punct_features(tokens, i), (i in boundaries))
for i in range(1, len(tokens)-1)
if tokens[i] in '.?!']
###Output
_____no_output_____
###Markdown
Using these featuresets, we can train and evaluate a punctuation classifier:
###Code
size = int(len(featuresets) * 0.1)
train_set, test_set = featuresets[size:], featuresets[:size]
classifier = nltk.NaiveBayesClassifier.train(train_set)
nltk.classify.accuracy(classifier, test_set)
###Output
_____no_output_____
###Markdown
To use this classifier to perform sentence segmentation, we simply check each punctuation mark to see whether it's labeled as a boundary, and divide the list of words at the boundary marks. The listing below shows how this can be done.
###Code
def segment_sentences(words):
start = 0
sents = []
for i, word in enumerate(words):
if word in '.?!' and classifier.classify(punct_features(words, i)) == True:
sents.append(words[start:i+1])
start = i+1
if start < len(words):
sents.append(words[start:])
return sents
segment_sentences(['I','am','going','to','attend','a','zoom','meeting','.','This','meeting','is','about','NLP'])
###Output
_____no_output_____
###Markdown
2.2 Identifying Dialogue Act Types When processing dialogue, it can be useful to think of utterances as a type of action performed by the speaker. This interpretation is most straightforward for performative statements such as "I forgive you" or "I bet you can't climb that hill." But greetings, questions, answers, assertions, and clarifications can all be thought of as types of speech-based actions. Recognizing the dialogue acts underlying the utterances in a dialogue can be an important first step in understanding the conversation. The NPS Chat Corpus, which was demonstrated in Chapter 1, consists of over 10,000 posts from instant messaging sessions. These posts have all been labeled with one of 15 dialogue act types, such as "Statement," "Emotion," "ynQuestion", and "Continuer." We can therefore use this data to build a classifier that can identify the dialogue act types for new instant messaging posts. The first step is to extract the basic messaging data. We will call xml_posts() to get a data structure representing the XML annotation for each post:
###Code
posts = nltk.corpus.nps_chat.xml_posts()[:10000]
###Output
_____no_output_____
###Markdown
Next, we'll define a simple feature extractor that checks what words the post contains:
###Code
def dialogue_act_features(post):
features = {}
for word in nltk.word_tokenize(post):
features['contains({})'.format(word.lower())] = True
return features
###Output
_____no_output_____
###Markdown
Finally, we construct the training and testing data by applying the feature extractor to each post (using post.get('class') to get a post's dialogue act type), and create a new classifier:
###Code
featuresets = [(dialogue_act_features(post.text), post.get('class'))
for post in posts]
size = int(len(featuresets) * 0.1)
train_set, test_set = featuresets[size:], featuresets[:size]
classifier = nltk.NaiveBayesClassifier.train(train_set)
print(nltk.classify.accuracy(classifier, test_set))
###Output
0.668
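###Markdown
As a quick, hedged example (the message below is made up), the same classifier can label a new chat-style post:
###Code
# classify an invented post; the predicted label depends on the trained model
classifier.classify(dialogue_act_features('hey , how are you doing today ?'))
###Output
_____no_output_____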
|
missao_3/smd_preproc.ipynb | ###Markdown
Although the dataset does not contain empty values, we still need to check for data flagged with placeholder markers. These markers indicate values that are actually missing but are important for the mining process.
###Code
# Checking for nulls flagged with a marker
df.head(100)
###Output
_____no_output_____
###Markdown
In the dataset under analysis, missing values are flagged with a "?", as seen in row 97, column a2. Let's convert these markers to NaN values so that the libraries treat them as nulls.
###Code
# Converting markers to NaN
df = df.replace('?', np.nan)
df.head(100)
###Output
_____no_output_____
###Markdown
Now we can use the chart from the missingno library to visualize the distribution of null values.
###Code
# Matrix with the distribution of null values.
msno.matrix(df)
###Output
_____no_output_____
###Markdown
The current dataset does not have many nulls; the missing entries in the categorical variables will be replaced with their most frequent values, and the continuous variables will be imputed with a central value. Null values produce imprecise results after the processing step, so the data needs to be adjusted.
###Code
# Examining the numeric variables
df.describe()
# Checking the variable types
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 690 entries, 0 to 689
Data columns (total 16 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 a1 678 non-null object
1 a2 678 non-null object
2 a3 690 non-null float64
3 a4 684 non-null object
4 a5 684 non-null object
5 a6 681 non-null object
6 a7 681 non-null object
7 a8 690 non-null float64
8 a9 690 non-null object
9 a10 690 non-null object
10 a11 690 non-null int64
11 a12 690 non-null object
12 a13 690 non-null object
13 a14 677 non-null object
14 a15 690 non-null int64
15 a16 690 non-null object
dtypes: float64(2), int64(2), object(12)
memory usage: 86.4+ KB
###Markdown
The variables a2 and a14 are numeric, but they were loaded with different (incorrect) types. Let's assign the correct types.
###Code
# Changing the variable types
df['a2'] = df['a2'].astype(np.float16)
df['a14'] = df['a14'].astype(np.float16)
df.describe()
###Output
_____no_output_____
###Markdown
As presented by colleagues in class, the variable a14 represents a postal code.
###Code
# Filling null values in the a2 variable with its median.
a2_media = df.a2.median()
df.a2.fillna(a2_media, inplace=True)
###Output
_____no_output_____
###Markdown
The null (NaN) values in column a2 have been filled with the column's median.
###Code
# Filling null values in a1, a4, a5, a6, a7 and a14 with their most frequent values
# (value_counts().index[0] returns the most frequent value itself, not its count)
a1_freq = df.a1.value_counts().index[0]
a4_freq = df.a4.value_counts().index[0]
a5_freq = df.a5.value_counts().index[0]
a6_freq = df.a6.value_counts().index[0]
a7_freq = df.a7.value_counts().index[0]
a14_freq = df.a14.value_counts().index[0]
df.a1.fillna(a1_freq, inplace=True)
df.a4.fillna(a4_freq, inplace=True)
df.a5.fillna(a5_freq, inplace=True)
df.a6.fillna(a6_freq, inplace=True)
df.a7.fillna(a7_freq, inplace=True)
df.a14.fillna(a14_freq, inplace=True)
###Output
_____no_output_____
###Markdown
All null values in the numeric variables were replaced with central values, and in the categorical variables the most frequent values replaced the nulls. After consulting several references, most of them state that it is preferable to fill in null values using the available techniques rather than removing the records.
###Code
# Transform categorical data into binary (one-hot encoded) columns
df_dummified = pd.get_dummies(df, columns=['a1', 'a4', 'a5', 'a6', 'a7', 'a9', 'a10', 'a12', 'a13', 'a16'])
print(df_dummified)
###Output
a2 a3 a8 a11 a14 ... a13_g a13_p a13_s a16_+ a16_-
0 30.828125 0.000 1.25 1 202.0 ... 1 0 0 1 0
1 58.656250 4.460 3.04 6 43.0 ... 1 0 0 1 0
2 24.500000 0.500 1.50 0 280.0 ... 1 0 0 1 0
3 27.828125 1.540 3.75 5 100.0 ... 1 0 0 1 0
4 20.171875 5.625 1.71 0 120.0 ... 0 0 1 1 0
.. ... ... ... ... ... ... ... ... ... ... ...
685 21.078125 10.085 1.25 0 260.0 ... 1 0 0 0 1
686 22.671875 0.750 2.00 2 200.0 ... 1 0 0 0 1
687 25.250000 13.500 2.00 1 200.0 ... 1 0 0 0 1
688 17.921875 0.205 0.04 0 280.0 ... 1 0 0 0 1
689 35.000000 3.375 8.29 0 0.0 ... 1 0 0 0 1
[690 rows x 48 columns]
###Markdown
The categorical data was transformed into binary values, one column per category, in order to make it easier to use during processing.
###Code
# Selecting the numeric columns to be normalized
X = df_dummified.iloc[:, 0:3].values
print(X)
###Output
[[3.0828125e+01 0.0000000e+00 1.2500000e+00]
[5.8656250e+01 4.4600000e+00 3.0400000e+00]
[2.4500000e+01 5.0000000e-01 1.5000000e+00]
...
[2.5250000e+01 1.3500000e+01 2.0000000e+00]
[1.7921875e+01 2.0500000e-01 4.0000000e-02]
[3.5000000e+01 3.3750000e+00 8.2900000e+00]]
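###Markdown
A minimal sketch of the normalization step itself, assuming scikit-learn is available (MinMaxScaler is just one possible choice of scaler):
###Code
# Sketch: scale each numeric column to the [0, 1] range
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)
print(X_scaled[:5])
###Output
_____no_output_____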
|
Assignment_4_Ammar_Adil.ipynb | ###Markdown
2 - Updated Sentiment AnalysisIn the previous notebook, we got the fundamentals down for sentiment analysis. In this notebook, we'll actually get decent results.We will use:- packed padded sequences- pre-trained word embeddings- different RNN architecture- bidirectional RNN- multi-layer RNN- regularization- a different optimizerThis will allow us to achieve ~84% test accuracy. Preparing DataAs before, we'll set the seed, define the `Fields` and get the train/valid/test splits.We'll be using *packed padded sequences*, which will make our RNN only process the non-padded elements of our sequence, and for any padded element the `output` will be a zero tensor. To use packed padded sequences, we have to tell the RNN how long the actual sequences are. We do this by setting `include_lengths = True` for our `TEXT` field. This will cause `batch.text` to now be a tuple with the first element being our sentence (a numericalized tensor that has been padded) and the second element being the actual lengths of our sentences.
###Code
import torch
from torchtext import data
from torchtext import datasets
SEED = 1234
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
TEXT = data.Field(tokenize = 'spacy', include_lengths = True)
LABEL = data.LabelField(dtype = torch.float)
###Output
_____no_output_____
###Markdown
We then load the IMDb dataset.
###Code
from torchtext import datasets
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
###Output
_____no_output_____
###Markdown
Then create the validation set from our training set.
###Code
import random
train_data, valid_data = train_data.split(random_state = random.seed(SEED))
###Output
_____no_output_____
###Markdown
Reversing Text before feeding it to iterator
###Code
for item in train_data:
vars(item).get('text').reverse()
###Output
_____no_output_____
###Markdown
Next is the use of pre-trained word embeddings. Now, instead of having our word embeddings initialized randomly, they are initialized with these pre-trained vectors.We get these vectors simply by specifying which vectors we want and passing it as an argument to `build_vocab`. `TorchText` handles downloading the vectors and associating them with the correct words in our vocabulary.Here, we'll be using the `glove.6B.100d` vectors. `glove` is the algorithm used to calculate the vectors, go [here](https://nlp.stanford.edu/projects/glove/) for more. `6B` indicates these vectors were trained on 6 billion tokens and `100d` indicates these vectors are 100-dimensional.You can see the other available vectors [here](https://github.com/pytorch/text/blob/master/torchtext/vocab.py#L113).The theory is that these pre-trained vectors already have words with similar semantic meaning close together in vector space, e.g. "terrible", "awful", "dreadful" are nearby. This gives our embedding layer a good initialization as it does not have to learn these relations from scratch.**Note**: these vectors are about 862MB, so watch out if you have a limited internet connection.By default, TorchText will initialize words in your vocabulary but not in your pre-trained embeddings to zero. We don't want this, and instead initialize them randomly by setting `unk_init` to `torch.Tensor.normal_`. This will now initialize those words via a Gaussian distribution.
###Code
MAX_VOCAB_SIZE = 25_000
TEXT.build_vocab(train_data,
max_size = MAX_VOCAB_SIZE,
vectors = "glove.6B.100d",
unk_init = torch.Tensor.normal_)
LABEL.build_vocab(train_data)
###Output
_____no_output_____
###Markdown
As before, we create the iterators, placing the tensors on the GPU if one is available.One more thing: for packed padded sequences, all of the tensors within a batch need to be sorted by their lengths. This is handled in the iterator by setting `sort_within_batch = True`.
###Code
BATCH_SIZE = 64
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
sort_within_batch = True,
device = device)
###Output
_____no_output_____
###Markdown
Build the ModelThe model features the most drastic changes. Different RNN ArchitectureWe'll be using a different RNN architecture called a Long Short-Term Memory (LSTM). Why is an LSTM better than a standard RNN? Standard RNNs suffer from the [vanishing gradient problem](https://en.wikipedia.org/wiki/Vanishing_gradient_problem). LSTMs overcome this by having an extra recurrent state called a _cell_, $c$ - which can be thought of as the "memory" of the LSTM - and the use of multiple _gates_ which control the flow of information into and out of the memory. For more information, go [here](https://colah.github.io/posts/2015-08-Understanding-LSTMs/). We can simply think of the LSTM as a function of $x_t$, $h_t$ and $c_t$, instead of just $x_t$ and $h_t$.$$(h_t, c_t) = \text{LSTM}(x_t, h_t, c_t)$$Thus, the model using an LSTM looks something like (with the embedding layers omitted):The initial cell state, $c_0$, like the initial hidden state is initialized to a tensor of all zeros. The sentiment prediction is still, however, only made using the final hidden state, not the final cell state, i.e. $\hat{y}=f(h_T)$. Bidirectional RNNThe concept behind a bidirectional RNN is simple. As well as having an RNN processing the words in the sentence from the first to the last (a forward RNN), we have a second RNN processing the words in the sentence from the **last to the first** (a backward RNN). At time step $t$, the forward RNN is processing word $x_t$, and the backward RNN is processing word $x_{T-t+1}$. In PyTorch, the hidden state (and cell state) tensors returned by the forward and backward RNNs are stacked on top of each other in a single tensor. We make our sentiment prediction using a concatenation of the last hidden state from the forward RNN (obtained from the final word of the sentence), $h_T^\rightarrow$, and the last hidden state from the backward RNN (obtained from the first word of the sentence), $h_T^\leftarrow$, i.e. $\hat{y}=f(h_T^\rightarrow, h_T^\leftarrow)$ The image below shows a bi-directional RNN, with the forward RNN in orange, the backward RNN in green and the linear layer in silver.  Multi-layer RNNMulti-layer RNNs (also called *deep RNNs*) are another simple concept. The idea is that we add additional RNNs on top of the initial standard RNN, where each RNN added is another *layer*. The hidden state output by the first (bottom) RNN at time-step $t$ will be the input to the RNN above it at time step $t$. The prediction is then made from the final hidden state of the final (highest) layer.The image below shows a multi-layer unidirectional RNN, where the layer number is given as a superscript. Also note that each layer needs its own initial hidden state, $h_0^L$. RegularizationAlthough we've added improvements to our model, each one adds additional parameters. Without going into too much detail about overfitting, the more parameters you have in your model, the higher the probability that your model will overfit (memorize the training data, causing a low training error but high validation/testing error, i.e. poor generalization to new, unseen examples). To combat this, we use regularization. More specifically, we use a method of regularization called *dropout*. Dropout works by randomly *dropping out* (setting to 0) neurons in a layer during a forward pass. The probability that each neuron is dropped out is set by a hyperparameter and each neuron with dropout applied is considered independently. 
One theory about why dropout works is that a model with parameters dropped out can be seen as a "weaker" (fewer parameters) model. The predictions from all these "weaker" models (one for each forward pass) get averaged together within the parameters of the model. Thus, your one model can be thought of as an ensemble of weaker models, none of which are over-parameterized and thus should not overfit. Implementation DetailsAnother addition to this model is that we are not going to learn the embedding for the `<pad>` token. This is because we want to explicitly tell our model that padding tokens are irrelevant to determining the sentiment of a sentence. This means the embedding for the pad token will remain at what it is initialized to (we initialize it to all zeros later). We do this by passing the index of our pad token as the `padding_idx` argument to the `nn.Embedding` layer.To use an LSTM instead of the standard RNN, we use `nn.LSTM` instead of `nn.RNN`. Also, note that the LSTM returns the `output` and a tuple of the final `hidden` state and the final `cell` state, whereas the standard RNN only returned the `output` and final `hidden` state. As the final hidden state of our LSTM has both a forward and a backward component, which will be concatenated together, the size of the input to the `nn.Linear` layer is twice that of the hidden dimension size.Implementing bidirectionality and adding additional layers are done by passing values for the `num_layers` and `bidirectional` arguments for the RNN/LSTM. Dropout is implemented by initializing an `nn.Dropout` layer (the argument is the probability of dropping out each neuron) and using it within the `forward` method after each layer we want to apply dropout to. **Note**: never use dropout on the input or output layers (`text` or `fc` in this case), you only ever want to use dropout on intermediate layers. The LSTM has a `dropout` argument which adds dropout on the connections between hidden states in one layer to hidden states in the next layer. As we are passing the lengths of our sentences to be able to use packed padded sequences, we have to add a second argument, `text_lengths`, to `forward`. Before we pass our embeddings to the RNN, we need to pack them, which we do with `nn.utils.rnn.pack_padded_sequence`. This will cause our RNN to only process the non-padded elements of our sequence. The RNN will then return `packed_output` (a packed sequence) as well as the `hidden` and `cell` states (both of which are tensors). Without packed padded sequences, `hidden` and `cell` are tensors from the last element in the sequence, which will most probably be a pad token, however when using packed padded sequences they are both from the last non-padded element in the sequence. We then unpack the output sequence, with `nn.utils.rnn.pad_packed_sequence`, to transform it from a packed sequence to a tensor. The elements of `output` from padding tokens will be zero tensors (tensors where every element is zero). Usually, we only have to unpack output if we are going to use it later on in the model. Although we aren't in this case, we still unpack the sequence just to show how it is done.The final hidden state, `hidden`, has a shape of _**[num layers * num directions, batch size, hid dim]**_. These are ordered: **[forward_layer_0, backward_layer_0, forward_layer_1, backward_layer 1, ..., forward_layer_n, backward_layer n]**.
As we want the final (top) layer forward and backward hidden states, we get the top two hidden layers from the first dimension, `hidden[-2,:,:]` and `hidden[-1,:,:]`, and concatenate them together before passing them to the linear layer (after applying dropout).
###Code
torch.cuda.BoolTensor(1)
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, bidirectional, dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
# self.rnn = lambda first: nn.LSTM(embedding_dim, hidden_dim, bidirectional=bidirectional, dropout=dropout) if first else nn.LSTM(hidden_dim, hidden_dim, bidirectional=bidirectional, dropout=dropout)
self.rnn_first = nn.LSTM(embedding_dim, hidden_dim, bidirectional=bidirectional, dropout=dropout)
self.rnn_hidden = nn.LSTM(hidden_dim, hidden_dim, bidirectional=bidirectional, dropout=dropout)
# self.rnn3 = nn.LSTM(hidden_dim, hidden_dim, bidirectional=bidirectional, dropout=dropout)
self.fc = nn.Linear(hidden_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text, text_lengths):
#text = [sent len, batch size]
embedded = self.dropout(self.embedding(text))
#embedded = [sent len, batch size, emb dim]
#pack sequence
packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_lengths.cpu())
# for i in torch.arange(0, 3):
# if i == 0:
# packed_embedded, (hidden, cell) = self.rnn(torch.cuda.BoolTensor(1))(packed_embedded)
# else:
# packed_embedded, (hidden, cell) = self.rnn(torch.cuda.BoolTensor(0))(packed_embedded)
for i in range(3):
if i == 0:
packed_embedded, (hidden, cell) = self.rnn_first(packed_embedded)
else:
packed_embedded, (hidden, cell) = self.rnn_hidden(packed_embedded, (hidden, cell))
# for i in torch.arange(0, 3):
# if i == 0:
# print(i)
# packed_embedded, (hidden, cell) = self.rnn1(packed_embedded)
# packed_embedded, (hidden, cell) = self.rnn2(packed_embedded)
# packed_embedded, (hidden, cell) = self.rnn3(packed_embedded)
#unpack sequence
# output, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_output)
#output = [sent len, batch size, hid dim * num directions]
#output over padding tokens are zero tensors
#hidden = [num layers * num directions, batch size, hid dim]
#cell = [num layers * num directions, batch size, hid dim]
#concat the final forward (hidden[-2,:,:]) and backward (hidden[-1,:,:]) hidden layers
#and apply dropout
# hidden = self.dropout(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1))
hidden = self.dropout(hidden)
#hidden = [batch size, hid dim * num directions]
return self.fc(hidden)
###Output
_____no_output_____
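###Markdown
The class above is run with `bidirectional=False`, so it uses `hidden` directly. For reference, a hedged sketch of the bidirectional variant described earlier (not the model trained below) would concatenate the final forward and backward states and widen the linear layer accordingly:
###Code
import torch
import torch.nn as nn

# Sketch of the bidirectional variant (assumptions: single-layer LSTM, same field setup as above)
class BiRNN(nn.Module):
    def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, dropout, pad_idx):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=pad_idx)
        self.rnn = nn.LSTM(embedding_dim, hidden_dim, bidirectional=True)
        self.fc = nn.Linear(hidden_dim * 2, output_dim)  # forward + backward states
        self.dropout = nn.Dropout(dropout)

    def forward(self, text, text_lengths):
        embedded = self.dropout(self.embedding(text))
        packed = nn.utils.rnn.pack_padded_sequence(embedded, text_lengths.cpu())
        packed_output, (hidden, cell) = self.rnn(packed)
        # hidden = [num directions, batch size, hid dim]; concatenate last forward/backward states
        hidden = self.dropout(torch.cat((hidden[-2, :, :], hidden[-1, :, :]), dim=1))
        return self.fc(hidden)
###Output
_____no_output_____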
###Markdown
Like before, we'll create an instance of our RNN class, with the new parameters and arguments for the number of layers, bidirectionality and dropout probability.To ensure the pre-trained vectors can be loaded into the model, the `EMBEDDING_DIM` must be equal to that of the pre-trained GloVe vectors loaded earlier.We get our pad token index from the vocabulary, getting the actual string representing the pad token from the field's `pad_token` attribute, which is `<pad>` by default.
###Code
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
HIDDEN_DIM = 256
OUTPUT_DIM = 1
BIDIRECTIONAL = False
DROPOUT = 0.2
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
model = RNN(INPUT_DIM,
EMBEDDING_DIM,
HIDDEN_DIM,
OUTPUT_DIM,
# N_LAYERS,
BIDIRECTIONAL,
DROPOUT,
PAD_IDX)
###Output
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py:61: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
"num_layers={}".format(dropout, num_layers))
###Markdown
We'll print out the number of parameters in our model. Notice how we have almost twice as many parameters as before!
###Code
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
###Output
The model has 3,393,385 trainable parameters
###Markdown
The final addition is copying the pre-trained word embeddings we loaded earlier into the `embedding` layer of our model.We retrieve the embeddings from the field's vocab, and check they're the correct size, _**[vocab size, embedding dim]**_
###Code
pretrained_embeddings = TEXT.vocab.vectors
print(pretrained_embeddings.shape)
###Output
torch.Size([25002, 100])
###Markdown
We then replace the initial weights of the `embedding` layer with the pre-trained embeddings.**Note**: this should always be done on the `weight.data` and not the `weight`!
###Code
model.embedding.weight.data.copy_(pretrained_embeddings)
###Output
_____no_output_____
###Markdown
As our `<unk>` and `<pad>` tokens aren't in the pre-trained vocabulary, they have been initialized using `unk_init` (an $\mathcal{N}(0,1)$ distribution) when building our vocab. It is preferable to initialize them both to all zeros to explicitly tell our model that, initially, they are irrelevant for determining sentiment. We do this by manually setting their rows in the embedding weights matrix to zeros. We get their rows by finding the indexes of the tokens, which we have already done for the padding index.**Note**: like initializing the embeddings, this should be done on the `weight.data` and not the `weight`!
###Code
UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]
model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)
print(model.embedding.weight.data)
###Output
tensor([[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[-0.0382, -0.2449, 0.7281, ..., -0.1459, 0.8278, 0.2706],
...,
[-0.3970, 0.4024, 1.0612, ..., -0.0136, -0.3363, 0.6442],
[-0.5197, 1.0395, 0.2092, ..., -0.8857, -0.2294, 0.1244],
[ 0.0057, -0.0707, -0.0804, ..., -0.3292, -0.0130, 0.0716]])
###Markdown
We can now see the first two rows of the embedding weights matrix have been set to zeros. As we passed the index of the pad token to the `padding_idx` of the embedding layer it will remain zeros throughout training, however the `<unk>` token embedding will be learned. Train the Model Now on to training the model.The only change we'll make here is changing the optimizer from `SGD` to `Adam`. SGD updates all parameters with the same learning rate and choosing this learning rate can be tricky. `Adam` adapts the learning rate for each parameter, giving parameters that are updated more frequently lower learning rates and parameters that are updated infrequently higher learning rates. More information about `Adam` (and other optimizers) can be found [here](http://ruder.io/optimizing-gradient-descent/index.html).To change `SGD` to `Adam`, we simply change `optim.SGD` to `optim.Adam`, also note how we do not have to provide an initial learning rate for Adam as PyTorch specifies a sensible default initial learning rate.
###Code
import torch.optim as optim
optimizer = optim.Adam(model.parameters(), lr=0.0001, weight_decay=1e-4)
###Output
_____no_output_____
###Markdown
The rest of the steps for training the model are unchanged.We define the criterion and place the model and criterion on the GPU (if available)...
###Code
criterion = nn.BCEWithLogitsLoss()
model = model.to(device)
criterion = criterion.to(device)
###Output
_____no_output_____
###Markdown
We implement the function to calculate accuracy...
###Code
def binary_accuracy(preds, y):
"""
Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
"""
#round predictions to the closest integer
rounded_preds = torch.round(torch.sigmoid(preds))
correct = (rounded_preds == y).float() #convert into float for division
acc = correct.sum() / len(correct)
return acc
###Output
_____no_output_____
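###Markdown
A quick sanity check on made-up logits and labels (toy values, not model output):
###Code
# logits above 0 round to 1 after the sigmoid, logits below 0 round to 0
binary_accuracy(torch.tensor([0.8, -1.2, 0.3, -0.5]), torch.tensor([1., 0., 0., 0.]))
###Output
_____no_output_____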
###Markdown
We define a function for training our model. As we have set `include_lengths = True`, our `batch.text` is now a tuple with the first element being the numericalized tensor and the second element being the actual lengths of each sequence. We separate these into their own variables, `text` and `text_lengths`, before passing them to the model.**Note**: as we are now using dropout, we must remember to use `model.train()` to ensure the dropout is "turned on" while training.
###Code
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
optimizer.zero_grad()
text, text_lengths = batch.text
predictions = model(text, text_lengths).squeeze()
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
###Output
_____no_output_____
###Markdown
Then we define a function for testing our model, again remembering to separate `batch.text`.**Note**: as we are now using dropout, we must remember to use `model.eval()` to ensure the dropout is "turned off" while evaluating.
###Code
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in iterator:
text, text_lengths = batch.text
predictions = model(text, text_lengths).squeeze()
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
###Output
_____no_output_____
###Markdown
And also create a nice function to tell us how long our epochs are taking.
###Code
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
###Output
_____no_output_____
###Markdown
Finally, we train our model...
###Code
N_EPOCHS = 20
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut2-model.pt')
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
###Output
Epoch: 01 | Epoch Time: 0m 26s
Train Loss: 0.679 | Train Acc: 57.37%
Val. Loss: 0.656 | Val. Acc: 61.07%
Epoch: 02 | Epoch Time: 0m 26s
Train Loss: 0.675 | Train Acc: 57.21%
Val. Loss: 0.659 | Val. Acc: 62.07%
Epoch: 03 | Epoch Time: 0m 26s
Train Loss: 0.678 | Train Acc: 56.56%
Val. Loss: 0.651 | Val. Acc: 62.34%
Epoch: 04 | Epoch Time: 0m 26s
Train Loss: 0.608 | Train Acc: 67.66%
Val. Loss: 0.623 | Val. Acc: 65.27%
Epoch: 05 | Epoch Time: 0m 26s
Train Loss: 0.609 | Train Acc: 65.95%
Val. Loss: 0.436 | Val. Acc: 80.94%
Epoch: 06 | Epoch Time: 0m 26s
Train Loss: 0.572 | Train Acc: 69.82%
Val. Loss: 0.537 | Val. Acc: 74.13%
Epoch: 07 | Epoch Time: 0m 26s
Train Loss: 0.606 | Train Acc: 65.82%
Val. Loss: 0.580 | Val. Acc: 73.03%
Epoch: 08 | Epoch Time: 0m 26s
Train Loss: 0.391 | Train Acc: 83.40%
Val. Loss: 0.336 | Val. Acc: 85.67%
Epoch: 09 | Epoch Time: 0m 26s
Train Loss: 0.259 | Train Acc: 90.32%
Val. Loss: 0.305 | Val. Acc: 87.70%
Epoch: 10 | Epoch Time: 0m 26s
Train Loss: 0.192 | Train Acc: 93.44%
Val. Loss: 0.300 | Val. Acc: 88.30%
Epoch: 11 | Epoch Time: 0m 26s
Train Loss: 0.166 | Train Acc: 94.33%
Val. Loss: 0.342 | Val. Acc: 86.86%
Epoch: 12 | Epoch Time: 0m 26s
Train Loss: 0.129 | Train Acc: 95.86%
Val. Loss: 0.358 | Val. Acc: 87.72%
Epoch: 13 | Epoch Time: 0m 26s
Train Loss: 0.104 | Train Acc: 96.88%
Val. Loss: 0.417 | Val. Acc: 84.57%
Epoch: 14 | Epoch Time: 0m 26s
Train Loss: 0.099 | Train Acc: 97.03%
Val. Loss: 0.392 | Val. Acc: 87.69%
Epoch: 15 | Epoch Time: 0m 26s
Train Loss: 0.091 | Train Acc: 97.22%
Val. Loss: 0.391 | Val. Acc: 87.22%
Epoch: 16 | Epoch Time: 0m 26s
Train Loss: 0.070 | Train Acc: 98.03%
Val. Loss: 0.498 | Val. Acc: 85.89%
Epoch: 17 | Epoch Time: 0m 26s
Train Loss: 0.058 | Train Acc: 98.44%
Val. Loss: 0.494 | Val. Acc: 87.34%
Epoch: 18 | Epoch Time: 0m 26s
Train Loss: 0.052 | Train Acc: 98.66%
Val. Loss: 0.436 | Val. Acc: 87.50%
Epoch: 19 | Epoch Time: 0m 26s
Train Loss: 0.056 | Train Acc: 98.41%
Val. Loss: 0.480 | Val. Acc: 87.32%
Epoch: 20 | Epoch Time: 0m 26s
Train Loss: 0.055 | Train Acc: 98.41%
Val. Loss: 0.433 | Val. Acc: 87.34%
###Markdown
...and get our new and vastly improved test accuracy!
###Code
model.load_state_dict(torch.load('tut2-model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
###Output
Test Loss: 0.319 | Test Acc: 87.32%
###Markdown
User InputWe can now use our model to predict the sentiment of any sentence we give it. As it has been trained on movie reviews, the sentences provided should also be movie reviews.When using a model for inference it should always be in evaluation mode. If this tutorial is followed step-by-step then it should already be in evaluation mode (from doing `evaluate` on the test set), however we explicitly set it to avoid any risk.Our `predict_sentiment` function does a few things:- sets the model to evaluation mode- tokenizes the sentence, i.e. splits it from a raw string into a list of tokens- indexes the tokens by converting them into their integer representation from our vocabulary- gets the length of our sequence- converts the indexes, which are a Python list, into a PyTorch tensor- adds a batch dimension by `unsqueeze`ing - converts the length into a tensor- squashes the output prediction to a real number between 0 and 1 with the `sigmoid` function- converts the tensor holding a single value into a Python float with the `item()` methodWe are expecting reviews with a negative sentiment to return a value close to 0 and positive reviews to return a value close to 1.
###Code
import spacy
nlp = spacy.load('en')
def predict_sentiment(model, sentence):
model.eval()
tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
indexed = [TEXT.vocab.stoi[t] for t in tokenized]
length = [len(indexed)]
tensor = torch.LongTensor(indexed).to(device)
tensor = tensor.unsqueeze(1)
length_tensor = torch.LongTensor(length)
prediction = torch.sigmoid(model(tensor, length_tensor))
return prediction.item()
###Output
_____no_output_____
###Markdown
An example negative review...
###Code
predict_sentiment(model, "This film is terrible")
###Output
_____no_output_____
###Markdown
An example positive review...
###Code
predict_sentiment(model, "This film is great")
###Output
_____no_output_____ |
CIFAR10_Keras_TPU.ipynb | ###Markdown
CIFAR 10 Classification: TPU VersionThe goal of this notebook is to implement a simple conv net to classify CIFAR10 images using a TPU and compare its training time against a GPU on a local workstation and the Colab GPU.
###Code
import os
from time import time
import numpy as np
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
# check the tensorflow version; we want one that supports eager mode
tf.__version__
# Turn on the eager mode
#tf.enable_eager_execution()
# Check if eager execution mode is on
tf.executing_eagerly()
###Output
_____no_output_____
###Markdown
DatasetLoad CIFAR10 dataset
###Code
from tensorflow.keras.datasets import cifar10, mnist
(x_train_full, y_train_full), (x_test, y_test) = cifar10.load_data()
print('x_train_full shape: {}, y_train_full.shape: {}'
.format(x_train_full.shape, y_train_full.shape))
print('x_test shape: {}, y_test.shape: {}'.format(x_test.shape, y_test.shape))
y_train_full = y_train_full.reshape(y_train_full.shape[0],)
y_test = y_test.reshape(y_test.shape[0],)
print('y_train_full shape: {}, y_test shape: {}'
.format(y_train_full.shape, y_test.shape))
# create validation set
split = 0.2
x_train, x_val, y_train, y_val = train_test_split(
x_train_full, y_train_full, test_size=0.2, random_state=42)
print('x_train: {}, y_train: {}, x_val: {}, y_val: {}'
.format(x_train.shape, y_train.shape, x_val.shape, y_val.shape))
###Output
x_train: (40000, 32, 32, 3), y_train: (40000,), x_val: (10000, 32, 32, 3), y_val: (10000,)
###Markdown
Let's plot 25 random images to get some idea about the dataset.
###Code
# pick 25 random images and plot
idxs = np.random.randint(x_train.shape[0], size=25)
images = x_train[idxs]
labels = y_train[idxs]
classnames = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
fig, axes = plt.subplots(5,5, figsize=(8,9))
for i, ax in enumerate(axes.flat):
ax.imshow(images[i])
ax.axis('off')
idx = labels[i]
ax.set_title(classnames[idx])
plt.show()
# Using Dataset
# def scale(x, min_val=0.0, max_val=255.0):
# x = tf.to_float(x)
# return tf.div(tf.subtract(x, min_val), tf.subtract(max_val, min_val))
# convert to dataset
# train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# train_ds = train_ds.map(lambda x, y: (scale(x), tf.one_hot(y, 10))).shuffle(10000)
# test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test))
# test_ds = test_ds.map(lambda x, y: (scale(x), tf.one_hot(y, 10))).shuffle(10000)
# def train_generator(batch_size):
# while True:
# images, labels = train_ds.batch(batch_size).make_one_shot_iterator().get_next()
# yield images, labels
def train_gen(batch_size):
while True:
offset = np.random.randint(0, x_train.shape[0] - batch_size)
yield x_train[offset:offset+batch_size], y_train[offset:offset + batch_size]
###Output
_____no_output_____
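###Markdown
A quick peek at a single batch from the generator (shapes only), just to confirm it yields what we expect:
###Code
# grab one batch from the generator and inspect its shapes
sample_images, sample_labels = next(train_gen(32))
print(sample_images.shape, sample_labels.shape)
###Output
_____no_output_____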
###Markdown
Build a model
###Code
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(32, (3,3), padding='same', activation='relu', input_shape=(32,32,3)))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.MaxPooling2D(pool_size=(3,3)))
model.add(tf.keras.layers.Conv2D(64, (3,3), padding='same', activation='relu'))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.MaxPooling2D(pool_size=(3,3)))
model.add(tf.keras.layers.Conv2D(128, (3,3), padding='same', activation='relu'))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.MaxPooling2D(pool_size=(3,3)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(0.4))
model.add(tf.keras.layers.Dense(10, activation='relu'))
model.add(tf.keras.layers.Activation('softmax'))
model.summary()
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
model,
strategy=tf.contrib.tpu.TPUDistributionStrategy(
tf.contrib.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
)
)
###Output
INFO:tensorflow:Querying Tensorflow master (b'grpc://10.4.66.154:8470') for TPU system metadata.
INFO:tensorflow:Found TPU system:
INFO:tensorflow:*** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:CPU:0, CPU, -1, 11020924145696970991)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 8642668331513702465)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:XLA_GPU:0, XLA_GPU, 17179869184, 8299197535424466515)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 6609252292601826176)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 4156117743592455349)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:2, TPU, 17179869184, 7482691375750755915)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:3, TPU, 17179869184, 11234191712606254118)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:4, TPU, 17179869184, 7932058859831483765)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:5, TPU, 17179869184, 11938862595229068735)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:6, TPU, 17179869184, 10332782459826376580)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU:7, TPU, 17179869184, 2216244176311607036)
INFO:tensorflow:*** Available Device: _DeviceAttributes(/job:worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 17179869184, 1431070260623692239)
WARNING:tensorflow:tpu_model (from tensorflow.contrib.tpu.python.tpu.keras_support) is experimental and may change or be removed at any time, and without warning.
INFO:tensorflow:Connecting to: b'grpc://10.4.66.154:8470'
###Markdown
TrainTrain the model on TPU
###Code
tpu_model.compile(
optimizer=tf.train.AdamOptimizer(learning_rate=1e-3, ),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['sparse_categorical_accuracy']
)
batch_size=1024
start = time()
history = tpu_model.fit_generator(
train_gen(batch_size), epochs=25,
steps_per_epoch=np.ceil(x_train.shape[0]/batch_size),
validation_data = (x_val, y_val),
)
end = time()
print('Total training time {} seconds'.format(end - start))
def plot(losses, accuracies, subplot_title):
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14,4))
ax1.plot(losses)
ax1.set_xlabel('Epochs')
ax1.set_ylabel('Loss')
ax1.set_title(subplot_title[0])
ax2.plot(accuracies)
ax2.set_xlabel('Epochs')
ax2.set_ylabel('Accuracy')
ax2.set_title(subplot_title[1])
plt.show()
###Output
_____no_output_____
###Markdown
Plot Loss and AccuracyLet's see how we did on the training and validation sets.
###Code
# Training
plot(history.history['loss'],
history.history['sparse_categorical_accuracy'],
subplot_title=['Training Loss', 'Training Accuracy']
)
# Validation
plot(history.history['val_loss'],
history.history['val_sparse_categorical_accuracy'],
subplot_title=['Validation Loss', 'Validation Accuracy']
)
###Output
_____no_output_____
###Markdown
Test accuracyNext, we plot the model's predictions on the test set.
###Code
cpu_model = tpu_model.sync_to_cpu()
idxs = np.random.randint(x_test.shape[0], size=25)
images = x_test[idxs]
true_labels = y_test[idxs]
preds = np.argmax(cpu_model.predict(images), axis=1)
fig, axes = plt.subplots(5,5, figsize=(8,9))
for i, ax in enumerate(axes.flat):
ax.imshow(images[i])
ax.axis('off')
idx = preds[i]
color = 'g' if idx == true_labels[i] else 'r'
ax.set_title(classnames[idx], color=color)
plt.show()
###Output
_____no_output_____
###Markdown
Now that we've got the basic model working, you can try improving its accuracy. For example, you can try:* preprocessing* transfer learning* tuning learning rates with a learning rate finder* a different network architectureYou can try to match/beat the CIFAR10 accuracy listed on this page. A minimal preprocessing sketch follows below.
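###Markdown
As a starting point, a minimal preprocessing sketch (an assumption, not part of the original experiment) is to scale the pixel values to the [0, 1] range before training:
###Code
# Sketch: scale images to [0, 1]; the model above was trained on raw uint8 pixels
x_train_scaled = x_train.astype(np.float32) / 255.0
x_val_scaled = x_val.astype(np.float32) / 255.0
x_test_scaled = x_test.astype(np.float32) / 255.0
print(x_train_scaled.min(), x_train_scaled.max())
###Output
_____no_output_____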
###Code
###Output
_____no_output_____ |
08_spark_metastore/06_define_schema_for_tables_using_structtype.ipynb | ###Markdown
Define Schema for Tables using StructTypeWhen we want to create a table using `spark.catalog.createTable` or `spark.catalog.createExternalTable`, we need to specify the schema.
###Code
%%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/qVag5LlghPA?rel=0&controls=1&showinfo=0" frameborder="0" allowfullscreen></iframe>
###Output
_____no_output_____
###Markdown
* The schema can be inferred, or we can pass it as a `StructType` object while creating the table.* `StructType` takes a list of objects of type `StructField`.* `StructField` is built using a column name and a data type. All the data types are available under `pyspark.sql.types`.* We need to pass the table name and schema to `spark.catalog.createTable`.* We have to pass the path along with the name and schema to `spark.catalog.createExternalTable`. Let us start the Spark context for this Notebook so that we can execute the code provided. You can sign up for our [10 node state of the art cluster/labs](https://labs.itversity.com/plans) to learn Spark SQL using our unique integrated LMS.
###Code
from pyspark.sql import SparkSession
import getpass
username = getpass.getuser()
spark = SparkSession. \
builder. \
config('spark.ui.port', '0'). \
config("spark.sql.warehouse.dir", f"/user/{username}/warehouse"). \
enableHiveSupport(). \
appName(f'{username} | Python - Spark Metastore'). \
master('yarn'). \
getOrCreate()
###Output
_____no_output_____
###Markdown
If you are going to use CLIs, you can use Spark SQL using one of the 3 approaches.**Using Spark SQL**```spark2-sql \ --master yarn \ --conf spark.ui.port=0 \ --conf spark.sql.warehouse.dir=/user/${USER}/warehouse```**Using Scala**```spark2-shell \ --master yarn \ --conf spark.ui.port=0 \ --conf spark.sql.warehouse.dir=/user/${USER}/warehouse```**Using Pyspark**```pyspark2 \ --master yarn \ --conf spark.ui.port=0 \ --conf spark.sql.warehouse.dir=/user/${USER}/warehouse```
###Code
spark.conf.set('spark.sql.shuffle.partitions', '2')
###Output
_____no_output_____
###Markdown
TasksLet us perform tasks to create empty table using `spark.catalog.createTable` or using `spark.catalog.createExternalTable`.* Create database **{username}_hr_db** and table **employees** with following fields. Let us create Database first and then we will see how to create table. * employee_id of type Integer * first_name of type String * last_name of type String * salary of type Float * nationality of type String
###Code
import getpass
username = getpass.getuser()
spark.sql(f"CREATE DATABASE IF NOT EXISTS {username}_hr_db")
spark.catalog.setCurrentDatabase(f"{username}_hr_db")
spark.catalog.currentDatabase()
spark.catalog.createTable?
###Output
_____no_output_____
###Markdown
* Build StructType object using StructField list.
###Code
from pyspark.sql.types import StructField, StructType, \
IntegerType, StringType, FloatType
employeesSchema = StructType([
StructField("employee_id", IntegerType()),
StructField("first_name", StringType()),
StructField("last_name", StringType()),
StructField("salary", FloatType()),
StructField("nationality", StringType())
])
employeesSchema
help(employeesSchema)
employeesSchema.simpleString()
spark.sql('DROP TABLE IF EXISTS employees')
###Output
_____no_output_____
###Markdown
* Create table by passing StructType object as schema.
###Code
spark.catalog.createTable("employees", schema=employeesSchema)
###Output
_____no_output_____
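###Markdown
For comparison, `spark.catalog.createExternalTable` needs a path in addition to the table name and schema. The cell below is only a rough sketch: the table name `employees_ext` and the HDFS path are placeholders invented for illustration, not part of the original exercise.
###Code
# Illustrative only: placeholder table name and path; reuses the employeesSchema defined above
spark.catalog.createExternalTable(
    "employees_ext",
    path=f"/user/{username}/external/employees",
    schema=employeesSchema
)
###Output
_____no_output_____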
###Markdown
* List the tables from database created.
###Code
spark.catalog.listTables()
spark.catalog.listColumns('employees')
spark.sql('DESCRIBE FORMATTED employees').show(100, truncate=False)
###Output
+----------------------------+-----------------------------------------------------------------------------------+-------+
|col_name |data_type |comment|
+----------------------------+-----------------------------------------------------------------------------------+-------+
|employee_id |int |null |
|first_name |string |null |
|last_name |string |null |
|salary |float |null |
|nationality |string |null |
| | | |
|# Detailed Table Information| | |
|Database |itversity_hr_db | |
|Table |employees | |
|Owner |itversity | |
|Created Time |Fri Mar 12 11:13:36 EST 2021 | |
|Last Access |Wed Dec 31 19:00:00 EST 1969 | |
|Created By |Spark 2.4.7 | |
|Type |MANAGED | |
|Provider |parquet | |
|Table Properties |[transient_lastDdlTime=1615565616] | |
|Location |hdfs://m01.itversity.com:9000/user/itversity/warehouse/itversity_hr_db.db/employees| |
|Serde Library |org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe | |
|InputFormat |org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat | |
|OutputFormat |org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat | |
|Storage Properties |[serialization.format=1] | |
+----------------------------+-----------------------------------------------------------------------------------+-------+
|
notebooks/off_by_one.ipynb | ###Markdown
settings
###Code
simulation = "/home/gijs/Work/spiel/runs/second_kat7_2018-05-28/results/"
neural_output = "/home/gijs/Work/astro-pix2pix/scratch/spiel_test_kat7/fits/"
number = 600
###Output
_____no_output_____
###Markdown
preface
###Code
psf_path = "{}{}-bigpsf-psf.fits".format(simulation, number)
skymodel_path = "{}{}-skymodel.fits".format(simulation, number)
dirty_path = "{}{}-wsclean-dirty.fits".format(simulation, number)
wsmodel_path = "{}{}-wsclean-model.fits".format(simulation, number)
psf = fits.open(psf_path)[0].data.squeeze()
psf = psf / psf.max()
skymodel = fits.open(skymodel_path)[0].data.squeeze() # the skymodel used as input to the telescope sim pipeline
dirty = fits.open(dirty_path)[0].data.squeeze() # dirty image created by wsclean
wsmodel = fits.open(wsmodel_path)[0].data.squeeze() # model image created by wsclean
###Output
_____no_output_____
###Markdown
preview
###Code
f, (a1, a2, a3) = plt.subplots(1, 3, figsize=(16, 3))
i1 = a1.pcolor(psf, cmap='cubehelix')
f.colorbar(i1, ax=a1)
a1.set_title('psf')
i2 = a2.pcolor(skymodel, cmap='cubehelix')
f.colorbar(i2, ax=a2)
_ = a2.set_title('skymodel')
i3 = a3.pcolor(dirty, cmap='cubehelix')
f.colorbar(i3, ax=a3)
_ = a3.set_title('dirty')
###Output
_____no_output_____
###Markdown
Convolving
###Code
convolved = fftconvolve(skymodel, psf, mode="same")
convolved_ws = fftconvolve(wsmodel, psf, mode="same")
p = psf.shape[0]
r = slice(p//2, -p//2+1) # uneven PSF needs +2, even psf +1
convolved_skymodel = fftconvolve(skymodel, psf, mode="full")[r,r]
f, (a1, a2) = plt.subplots(1, 2, figsize=(16,7))
i1 = a1.pcolor(convolved_skymodel, cmap='cubehelix')
f.colorbar(i1, ax=a1)
a1.set_title('convolved_skymodel')
i2 = a2.pcolor(dirty, cmap='cubehelix')
f.colorbar(i2, ax=a2)
_ = a2.set_title('dirty')
###Output
_____no_output_____
###Markdown
Residual
###Code
risidual = dirty - convolved
risidual2 = dirty - convolved_skymodel  # full-mode convolution sliced back to the image extent
risidual_ws = dirty - convolved_ws
f, ((a1, a2), (a3, a4)) = plt.subplots(2, 2, figsize= [16, 14])
i1 = a1.pcolor(risidual, cmap='cubehelix')
f.colorbar(i1, ax=a1)
a1.set_title('risidual of convolved true skymodel')
i2 = a2.pcolor(risidual2, cmap='cubehelix')
f.colorbar(i2, ax=a2)
_ = a2.set_title('risidual shifted by one')
i3 = a3.pcolor(risidual_ws, cmap='cubehelix')
f.colorbar(i3, ax=a3)
_ = a3.set_title('risidual of wsclean model')
i4 = a4.pcolor(dirty, cmap='cubehelix')
f.colorbar(i4, ax=a4)
_ = a4.set_title('dirty')
###Output
_____no_output_____ |
Module_3_Cython/cython_functions.ipynb | ###Markdown
cdef function
###Code
%%cython
cdef my_func(x, y, z):
return x - y + z
print(my_func(10, 20, 30))
print(my_func(10.1, 20.9, 30.5))
###Output
20
19.700000000000003
###Markdown
Imagine we want to use that function in another cell (Error)
###Code
my_func(10, 20, 30)
###Output
_____no_output_____
###Markdown
Defining the type of the variables within the function
###Code
%%cython
cdef int better_fun(int x, int y):
return x + y
print(better_fun(10, 20))
# print(my_func(10.1, 20.9, 30.5)) # This is gonna give error because of float types
###Output
30
###Markdown
Hybrid function (Python and C) cpdef
###Code
%%cython
cpdef my_func(x, y, z):
return x - y + z
print(my_func(10, 20, 30))
###Output
20
###Markdown
It can be accessed from another cell
###Code
print(my_func(1, 2, 3))
###Output
2
|
Asset Pricing with Incomplete Markets.ipynb | ###Markdown
Asset Pricing with Incomplete Markets
###Code
import numpy as np
import quantecon as qe
import scipy.linalg as la
qa = np.array([[1/2, 1/2], [2/3, 1/3]])
qb = np.array([[2/3, 1/3], [1/4, 3/4]])
mca = qe.MarkovChain(qa)
mcb = qe.MarkovChain(qb)
mca.stationary_distributions
mcb.stationary_distributions
def price_single_beliefs(transition, dividend_payoff, β=.75):
"""
Function to Solve Single Beliefs
"""
# First compute inverse piece
imbq_inv = la.inv(np.eye(transition.shape[0]) - β * transition)
# Next compute prices
prices = β * imbq_inv @ transition @ dividend_payoff
return prices
def price_optimistic_beliefs(transitions, dividend_payoff, β=.75,
max_iter=50000, tol=1e-16):
"""
Function to Solve Optimistic Beliefs
"""
# We will guess an initial price vector of [0, 0]
p_new = np.array([[0], [0]])
p_old = np.array([[10.], [10.]])
    # We know this is a contraction mapping, so we can iterate to convergence
for i in range(max_iter):
p_old = p_new
p_new = β * np.max([q @ p_old
+ q @ dividend_payoff for q in transitions],
1)
# If we succeed in converging, break out of for loop
if np.max(np.sqrt((p_new - p_old)**2)) < tol:
break
ptwiddle = β * np.min([q @ p_old
+ q @ dividend_payoff for q in transitions],
1)
phat_a = np.array([p_new[0], ptwiddle[1]])
phat_b = np.array([ptwiddle[0], p_new[1]])
return p_new, phat_a, phat_b
def price_pessimistic_beliefs(transitions, dividend_payoff, β=.75,
max_iter=50000, tol=1e-16):
"""
Function to Solve Pessimistic Beliefs
"""
# We will guess an initial price vector of [0, 0]
p_new = np.array([[0], [0]])
p_old = np.array([[10.], [10.]])
    # We know this is a contraction mapping, so we can iterate to convergence
for i in range(max_iter):
p_old = p_new
p_new = β * np.min([q @ p_old
+ q @ dividend_payoff for q in transitions],
1)
# If we succeed in converging, break out of for loop
if np.max(np.sqrt((p_new - p_old)**2)) < tol:
break
return p_new
qa = np.array([[1/2, 1/2], [2/3, 1/3]]) # Type a transition matrix
qb = np.array([[2/3, 1/3], [1/4, 3/4]]) # Type b transition matrix
# Optimistic investor transition matrix
qopt = np.array([[1/2, 1/2], [1/4, 3/4]])
# Pessimistic investor transition matrix
qpess = np.array([[2/3, 1/3], [2/3, 1/3]])
dividendreturn = np.array([[0], [1]])
transitions = [qa, qb, qopt, qpess]
labels = ['p_a', 'p_b', 'p_optimistic', 'p_pessimistic']
for transition, label in zip(transitions, labels):
print(label)
print("=" * 20)
s0, s1 = np.round(price_single_beliefs(transition, dividendreturn), 2)
print(f"State 0: {s0}")
print(f"State 1: {s1}")
print("-" * 20)
opt_beliefs = price_optimistic_beliefs([qa, qb], dividendreturn)
labels = ['p_optimistic', 'p_hat_a', 'p_hat_b']
for p, label in zip(opt_beliefs, labels):
print(label)
print("=" * 20)
s0, s1 = np.round(p, 2)
print(f"State 0: {s0}")
print(f"State 1: {s1}")
print("-" * 20)
###Output
p_optimistic
====================
State 0: [1.85]
State 1: [2.08]
--------------------
p_hat_a
====================
State 0: [1.85]
State 1: [1.69]
--------------------
p_hat_b
====================
State 0: [1.69]
State 1: [2.08]
--------------------
|
ML3-LinearRegression.ipynb | ###Markdown
Simple Linear Regression With scikit-learnPowered by: Dr. Hermann Völlinger, DHBW Stuttgart(Germany); July 2020 Following ideas from: "Linear Regression in Python" by Mirko Stojiljkovic, 28.4.2020 (see details: https://realpython.com/linear-regression-in-python/what-is-regression) Let’s start with the simplest case, which is simple linear regression.There are five basic steps when you’re implementing linear regression: 1. Import the packages and classes you need.2. Provide data to work with and eventually do appropriate transformations.3. Create a regression model and fit it with existing data.4. Check the results of model fitting to know whether the model is satisfactory.5. Apply the model for predictions.These steps are more or less general for most of the regression approaches and implementations. Step 1: Import packages and classesThe first step is to import the package numpy and the class LinearRegression from sklearn.linear_model:
###Code
# Step 1: Import packages and classes
import numpy as np
from sklearn.linear_model import LinearRegression
# import time module
import time
###Output
_____no_output_____
###Markdown
Now, you have all the functionalities you need to implement linear regression.The fundamental data type of NumPy is the array type called numpy.ndarray. The rest of this article uses the term array to refer to instances of the type numpy.ndarray.The class sklearn.linear_model.LinearRegression will be used to perform linear and polynomial regression and make predictions accordingly. Step 2: Provide dataThe second step is defining data to work with. The inputs (regressors, 𝑥) and output (predictor, 𝑦) should be arrays(the instances of the class numpy.ndarray) or similar objects. This is the simplest way of providing data for regression:
###Code
# Step 2: Provide data
x = np.array([ 5, 15, 25, 35, 45, 55]).reshape((-1, 1))
y = np.array([ 5, 20, 14, 32, 22, 38])
###Output
_____no_output_____
###Markdown
Now, you have two arrays: the input x and output y. You should call .reshape() on x because this array is required to be two-dimensional, or to be more precise, to have one column and as many rows as necessary. That’s exactly what the argument (-1, 1) of .reshape() specifies.
###Code
print ("This is how x and y look now:")
print("x=",x)
print("y=",y)
###Output
This is how x and y look now:
x= [[ 5]
[15]
[25]
[35]
[45]
[55]]
y= [ 5 20 14 32 22 38]
###Markdown
As you can see, x has two dimensions, and x.shape is (6, 1), while y has a single dimension, and y.shape is (6,). Step 3: Create a model and fit itThe next step is to create a linear regression model and fit it using the existing data.Let’s create an instance of the class LinearRegression, which will represent the regression model:
###Code
model = LinearRegression()
###Output
_____no_output_____
###Markdown
This statement creates the variable model as the instance of LinearRegression. You can provide several optional parameters to LinearRegression: ----> fit_intercept is a Boolean (True by default) that decides whether to calculate the intercept 𝑏₀ (True) or consider it equal to zero (False).----> normalize is a Boolean (False by default) that decides whether to normalize the input variables (True) or not (False).----> copy_X is a Boolean (True by default) that decides whether to copy (True) or overwrite the input variables (False).----> n_jobs is an integer or None (default) and represents the number of jobs used in parallel computation. None usually means one job and -1 to use all processors.This example uses the default values of all parameters.It’s time to start using the model. First, you need to call .fit() on model:
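###Markdown
As an aside before calling .fit(), the defaults described above could also be spelled out explicitly. The cell below is purely illustrative and equivalent to model = LinearRegression(); the normalize parameter is left out because it is deprecated or removed in newer scikit-learn releases.
###Code
# Equivalent to the default construction above (illustrative)
model = LinearRegression(fit_intercept=True, copy_X=True, n_jobs=None)
###Output
_____no_output_____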
###Code
model.fit(x, y)
###Output
_____no_output_____
###Markdown
With .fit(), you calculate the optimal values of the weights 𝑏₀ and 𝑏₁, using the existing input and output (x and y) as the arguments. In other words, .fit() fits the model. It returns self, which is the variable model itself. That’s why you can replace the last two statements with this one:
###Code
# model = LinearRegression().fit(x, y)
###Output
_____no_output_____
###Markdown
This statement does the same thing as the previous two. It’s just shorter. Step 4: Get resultsOnce you have your model fitted, you can get the results to check whether the model works satisfactorily and interpret it.You can obtain the coefficient of determination (𝑅²) with .score() called on model:
###Code
r_sq = model.score(x, y)
print('coefficient of determination:', r_sq)
###Output
coefficient of determination: 0.7158756137479542
###Markdown
When you’re applying .score(), the arguments are also the predictor x and regressor y, and the return value is 𝑅².The attributes of model are .intercept_, which represents the coefficient, 𝑏₀, and .coef_, which represents 𝑏₁:
###Code
print('intercept:', model.intercept_)
print('slope:', model.coef_)
###Output
intercept: 5.633333333333329
slope: [0.54]
###Markdown
The code above illustrates how to get 𝑏₀ and 𝑏₁. You can notice that .intercept_ is a scalar, while .coef_ is an array.The value 𝑏₀ = 5.63 (approximately) illustrates that your model predicts the response 5.63 when 𝑥 is zero. The value 𝑏₁= 0.54 means that the predicted response rises by 0.54 when 𝑥 is increased by one.You should notice that you can provide y as a two-dimensional array as well. In this case, you’ll get a similar result.This is how it might look:
###Code
new_model = LinearRegression().fit(x, y.reshape((-1, 1)))
print('intercept:', new_model.intercept_)
print('slope:', new_model.coef_)
###Output
intercept: [5.63333333]
slope: [[0.54]]
###Markdown
As you can see, this example is very similar to the previous one, but in this case, .intercept_ is a one-dimensional array with the single element 𝑏₀, and .coef_ is a two-dimensional array with the single element 𝑏₁. Step 5: Predict responseOnce there is a satisfactory model, you can use it for predictions with either existing or new data.To obtain the predicted response, use .predict():
###Code
y_pred = model.predict(x)
print('predicted response:', y_pred, sep='\n')
###Output
predicted response:
[ 8.33333333 13.73333333 19.13333333 24.53333333 29.93333333 35.33333333]
###Markdown
When applying .predict(), you pass the regressor as the argument and get the corresponding predicted response.This is a nearly identical way to predict the response:
###Code
y_pred = model.intercept_ + model.coef_ * x
print('predicted response:', y_pred, sep='\n')
###Output
predicted response:
[[ 8.33333333]
[13.73333333]
[19.13333333]
[24.53333333]
[29.93333333]
[35.33333333]]
###Markdown
In this case, you multiply each element of x with model.coef_ and add model.intercept_ to the product.The output here differs from the previous example only in dimensions. The predicted response is now a two-dimensional array, while in the previous case, it had one dimension.If you reduce the number of dimensions of x to one, these two approaches will yield the same result. You can do this by replacing x with x.reshape(-1), x.flatten(), or x.ravel() when multiplying it with model.coef_.In practice, regression models are often applied for forecasts. This means that you can use fitted models to calculate the outputs based on some other, new inputs: x_new = np.arange(5).reshape((-1, 1)); print(x_new); y_new = model.predict(x_new); print(y_new) Here .predict() is applied to the new regressor x_new and yields the response y_new. This example conveniently uses arange() from numpy to generate an array with the elements from 0 (inclusive) to 5 (exclusive), that is 0, 1, 2, 3, and 4. A runnable version of this snippet follows below. You can find more information about LinearRegression on the official documentation page.
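###Markdown
The new-input snippet quoted above, written out as a runnable cell (it simply reuses the fitted model from earlier):
###Code
# Predict responses for new inputs using the fitted model
x_new = np.arange(5).reshape((-1, 1))
print(x_new)
y_new = model.predict(x_new)
print(y_new)
###Output
_____no_output_____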
###Code
# print current date and time
print("date",time.strftime("%d.%m.%Y %H:%M:%S"))
print ("end")
###Output
date 11.08.2020 00:17:49
end
|
Assignment1/question4.ipynb | ###Markdown
Figure 4.1 ROC for Adaboost and Logistic regression. The blue curve is for Adaboost. The red curve is for Logistic regression.
###Code
precisionAB, recallAB, thresholdsAB = precision_recall_curve(y_test, y_scoreAB[:,1],pos_label='UP')
pr_aucAB = auc(recallAB, precisionAB)
mean_precisionAB=0
mean_precisionAB=np.linspace(0,1,1000)
precisionLR, recallLR, thresholdsLR = precision_recall_curve(y_test, y_scoreLR[:,1],pos_label='UP')
pr_aucLR = auc(recallLR, precisionLR)
mean_precisionLR=0
mean_precisionLR=np.linspace(0,1,100)
plt.figure(figsize=(10,10))
plt.plot(recallAB, precisionAB,color='b')
plt.plot(recallLR, precisionLR,color='r')
plt.plot([0,1],[0,1],
color='black',
linestyle='--')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.show()
###Output
_____no_output_____
###Markdown
Figure 4.2 PR for Adaboost and Logistic regression. The blue curve is for Adaboost. The red curve is for Logistic regression.
###Code
# pi = np.min(precisionAB)
piAB = np.min(precisionAB)
piLR = np.min(precisionLR)
precGAB = (precisionAB - piAB) / ((1-piAB)*precisionAB)
recGAB = (recallAB - piAB) / ((1-piAB)*recallAB)
precGLR = (precisionLR - piLR) / ((1-piLR)*precisionLR)
recGLR = (recallLR - piLR) / ((1-piLR)*recallLR)
precGAB_new = []
recGAB_new = []
for i in range(0, precGAB.shape[0]):
if(precGAB[i] > 0 and recGAB[i] > 0):
precGAB_new.append(precGAB[i])
recGAB_new.append(recGAB[i])
precGLR_new = []
recGLR_new = []
for i in range(0, precGLR.shape[0]):
if(precGLR[i] > 0 and recGLR[i] > 0):
precGLR_new.append(precGLR[i])
recGLR_new.append(recGLR[i])
prg_aucAB = auc(recGAB_new, precGAB_new)
prg_aucLR = auc(recGLR_new, precGLR_new)
prg_aucAB # PRG AUC for AB
prg_aucLR # PRG AUC for LR
pr_aucAB # PR AUC for AB
pr_aucLR # PR AUC for LR
roc_aucAB # ROC AUC for AB
roc_aucLR # ROC AUC for LR
plt.figure(figsize=(10,10))
plt.plot(recGAB_new, precGAB_new,color='b')
plt.plot(recGLR_new, precGLR_new,color='r')
plt.xlabel('Recall Gain')
plt.ylabel('Precision Gain')
plt.show()
###Output
_____no_output_____ |
Sunspot_Activity_Time_series.ipynb | ###Markdown
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format='-', start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel('Time')
plt.ylabel('Value')
plt.grid(True)
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/Sunspots.csv \
-O /tmp/sunspots.csv
import csv
time_step = []
sunspot = []
with open('/tmp/sunspots.csv') as csvfile:
reader = csv.reader(csvfile, delimiter = ',')
next(reader)
for row in reader:
sunspot.append(float(row[2]))
time_step.append(int(row[0]))
series = np.array(sunspot)
time = np.array(time_step)
plt.figure(figsize=(10, 6))
plot_series(time, series)
split_time = 3000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 30
batch_size = 32
shuffle_buffer_size = 1000
def windowed_dataset(series, window_size, batch_size, shuffle_buffer_size):
series = tf.expand_dims(series, axis=-1)
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size + 1, shift = 1, drop_remainder = True)
ds = ds.flat_map(lambda w:w.batch(window_size + 1))
ds = ds.shuffle(shuffle_buffer_size)
ds = ds.map(lambda w: (w[:-1], w[-1:]))
return ds.batch(batch_size).prefetch(1)
def model_forecast(model, series, window_size):
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size, shift =1, drop_remainder = True)
ds = ds.flat_map(lambda w: w.batch(window_size))
ds = ds.batch(32).prefetch(1)
forecast = model.predict(ds)
return forecast
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
window_size = 64
batch_size = 256
train_set = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
print(train_set)
print(x_train.shape)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=5, strides=1, padding='causal', activation='relu', input_shape = [None, 1]),
tf.keras.layers.LSTM(64, return_sequences=True),
tf.keras.layers.LSTM(64, return_sequences=True),
tf.keras.layers.Dense(30, activation='relu'),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x : x * 400)
])
model.summary()
lr_schedule = tf.keras.callbacks.LearningRateScheduler(lambda epoch : 1e-8 * 10**(epoch/20))
optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=['mae'])
history = model.fit(train_set, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history['lr'], history.history['loss'])
plt.axis([1e-8, 1e-4, 0, 60])
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
train_set = windowed_dataset(x_train, window_size=60, batch_size=100, shuffle_buffer_size=shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=60, kernel_size=5,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.LSTM(60, return_sequences=True),
tf.keras.layers.LSTM(60, return_sequences=True),
tf.keras.layers.Dense(30, activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 400)
])
optimizer = tf.keras.optimizers.SGD(lr=1e-5, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set,epochs=500)
rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size)
rnn_forecast = rnn_forecast[split_time-window_size:-1, -1, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
print(tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy())
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
loss=history.history['loss']
epochs=range(len(loss)) # Get number of epochs
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, loss, 'r')
plt.title('Training loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["Loss"])
plt.figure()
zoomed_loss = loss[200:]
zoomed_epochs = range(200,500)
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(zoomed_epochs, zoomed_loss, 'r')
plt.title('Training loss')
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["Loss"])
plt.figure()
print(rnn_forecast)
###Output
_____no_output_____ |
test/project-2.ipynb | ###Markdown
Preprocessing: 1) Load the raw data; 2) Transform it into 'trainable' data; 3) Feature selection
###Code
X = preprocess(df_train)
y = df_train['duration_label']
# include both uni-grams and bi-grams
# exclude stop words
vectorizer = TfidfVectorizer(sublinear_tf=True, ngram_range=(1,2), analyzer='word', stop_words= 'english')
X = vectorizer.fit_transform(X)
print("Shape of X (nrow, ncol):", X.shape)
# plot p-values before feature selection
chi_square, p_values = chi2(X, y)
plt.hist(p_values, edgecolor = 'black', bins=100)
plt.xlabel('p-value')
plt.ylabel('frequency')
plt.title("p-values of features (before selection)")
plt.xticks(np.arange(0,1.1,0.1))
plt.show()
# plot p-values after feature selection
fselect = GenericUnivariateSelect(chi2, mode='percentile', param=20)
X_new = fselect.fit_transform(X, y)
chi_square, p_values = chi2(X_new, y)
plt.hist(p_values, edgecolor = 'black', bins=100)
plt.xlabel('p-value')
plt.ylabel('frequency')
plt.title("p-values of features (after selection)")
plt.xticks(np.arange(0,1.1,0.1))
plt.show()
print("Shape of X_new (nrow, ncol):", X_new.shape)
###Output
_____no_output_____
###Markdown
Hyperparameter tuning
###Code
def hyperparameter_tuning(grid, model, X, y):
# define grid search
grid_search = GridSearchCV(estimator=model, param_grid=grid, n_jobs=-1, return_train_score=True)
grid_result = grid_search.fit(X, y)
# summarize results
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_))
train_means = grid_result.cv_results_['mean_train_score']
test_means = grid_result.cv_results_['mean_test_score']
test_stdvs = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
train_results = []
test_results = []
test_vars = []
for train_mean, test_mean, test_stdv, param in zip(train_means, test_means, test_stdvs, params):
if train_mean != 0 and test_mean != 0:
train_results.append(train_mean)
test_results.append(test_mean)
test_vars.append(test_stdv**2)
#print("%f (%f) with: %r" % (mean, stdev, param))
return train_results, test_results, test_vars
dc = DummyClassifier()
baseline = sum(cross_validate(dc, X_new, y, cv=5)['test_score'])/5
print("Baseline accuracy:", baseline)
# Logistic Regression
lg = LogisticRegression(max_iter=1000)
c_values = [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 15.0, 20.0]
grid = dict(C=c_values)
train_results_lg, test_results_lg, test_vars_lg = hyperparameter_tuning(grid, lg, X_new, y)
plt.grid()
plt.plot(c_values, train_results_lg, label='Train', marker='o')
plt.plot(c_values, test_results_lg, label='Test', marker='o')
plt.axhline(y=baseline, color='r', linestyle='--', label='zero-r baseline')
plt.title('Logistic regression')
plt.xlabel('Inverse of regularization strength')
plt.ylabel('Accuracy mean')
plt.legend()
plt.plot()
print("Train accuracy:", train_results_lg)
print("Test accuracy:", test_results_lg)
print("Test variance:", test_vars_lg)
# Decision Tree
dt = DecisionTreeClassifier()
max_depths = [1, 5, 10, 15, 20, 25, 50, 100, 200]
grid = dict(max_depth=max_depths)
train_results_dt, test_results_dt, test_vars_dt = hyperparameter_tuning(grid, dt, X_new, y)
plt.grid()
plt.plot(max_depths, train_results_dt, label='Train', marker='o')
plt.plot(max_depths, test_results_dt, label='Test', marker='o')
plt.axhline(y=baseline, color='r', linestyle='--', label='zero-r baseline')
plt.title('Decision Tree')
plt.xlabel('Max depth of tree')
plt.ylabel('Accuracy mean')
plt.legend()
plt.plot()
print("Train accuracy:", train_results_dt)
print("Test accuracy:", test_results_dt)
print("Test variance:", test_vars_dt)
# Linear SVM
lsvm = LinearSVC()
c_values = [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 15.0, 20.0]
grid = dict(C = c_values)
train_results_lsvm, test_results_lsvm, test_vars_lsvm = hyperparameter_tuning(grid, lsvm, X_new, y)
plt.grid()
plt.plot(c_values, train_results_lsvm, label='Train', marker='o')
plt.plot(c_values, test_results_lsvm, label='Test', marker='o')
plt.axhline(y=baseline, color='r', linestyle='--', label='zero-r baseline')
plt.xlabel('Regularization parameter')
plt.ylabel('Accuracy mean')
plt.title('Linear SVC')
plt.legend()
plt.plot()
print("Train accuracy:", train_results_lsvm)
print("Test accuracy:", test_results_lsvm)
print("Test variance:", test_vars_lsvm)
###Output
Best: 0.820125 using {'C': 1.0}
Train accuracy: [0.65559375, 0.73798125, 0.76494375, 0.8157499999999999, 0.8397, 0.9243062500000001, 0.9615625, 0.99899375, 0.9999812499999999, 1.0, 1.0]
Test accuracy: [0.6516, 0.73105, 0.754475, 0.790025, 0.7994000000000001, 0.816575, 0.820125, 0.817175, 0.8130749999999999, 0.8107749999999999, 0.809525]
Test variance: [1.0933750000000034e-05, 1.2041250000000037e-05, 6.483749999999943e-06, 1.1427499999999968e-05, 1.6521249999999885e-05, 1.9759999999999963e-05, 2.654375000000026e-05, 1.5291249999999973e-05, 2.0647499999999873e-05, 2.5471250000000286e-05, 2.6171250000000042e-05]
###Markdown
Evaluation: Confusion matrix
###Code
X_train, X_test, y_train, y_test = train_test_split(X_new, y, random_state=42)
lg = LogisticRegression(max_iter=1000, C=15.0)
lg.fit(X_train, y_train)
plot_confusion_matrix(lg, X_test, y_test)
plt.title('Logistic Regression')
plt.show()
dt = DecisionTreeClassifier(max_depth=10)
dt.fit(X_train, y_train)
plot_confusion_matrix(dt, X_test, y_test)
plt.title('Decision Tree')
plt.show()
lsvm = LinearSVC(C=1.0)
lsvm.fit(X_train, y_train)
plot_confusion_matrix(lsvm, X_test, y_test)
plt.title('Linear SVM')
plt.show()
###Output
_____no_output_____ |
notebooks/SnowEx_ASO_MODIS_Snow/Snow-tutorial.ipynb | ###Markdown
Snow Depth and Snow Cover Data Exploration This tutorial demonstrates how to access and compare coincident snow data across in-situ, airborne, and satellite platforms from NASA's SnowEx, ASO, and MODIS data sets, respectively. All data are available from the NASA National Snow and Ice Data Center Distributed Active Archive Center, or NSIDC DAAC. Here are the steps you will learn in this snow data notebook:1. Explore the coverage and structure of select NSIDC DAAC snow data products, as well as available resources to search and access data.2. Search and download spatiotemporally coincident data across in-situ, airborne, and satellite observations.3. Subset and reformat MODIS data using the NSIDC DAAC API.4. Read CSV and GeoTIFF formatted data using geopandas and rasterio libraries.5. Subset data based on buffered area.5. Extract and visualize raster values at point locations.6. Save output as shapefile for further GIS analysis. ___ Explore snow products and resources NSIDC introduction[The National Snow and Ice Data Center](https://nsidc.org) provides over 1100 data sets covering the Earth's cryosphere and more, all of which are available to the public free of charge. Beyond providing these data, NSIDC creates tools for data access, supports data users, performs scientific research, and educates the public about the cryosphere. Select Data Resources* [NSIDC Data Search](https://nsidc.org/data/search/keywords=snow) * Search NSIDC snow data* [NSIDC Data Update Announcements](https://nsidc.org/the-drift/data-update/) * News and tips for data users* [NASA Earthdata Search](http://search.earthdata.nasa.gov/) * Search and access data across the NASA Earthdata* [NASA Worldview](https://worldview.earthdata.nasa.gov/) * Interactive interface for browsing full-resolution, global, daily satellite images Snow Today[Snow Today](https://nsidc.org/snow-today), a collaboration with the University of Colorado's Institute of Alpine and Arctic Research (INSTAAR), provides near-real-time snow analysis for the western United States and regular reports on conditions during the winter season. Snow Today is funded by NASA Hydrological Sciences Program and utilizes data from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument and snow station data from the Snow Telemetry (SNOTEL) network by the Natural Resources Conservation Service (NRCS), United States Department of Agriculture (USDA) and the California Department of Water Resources: www.wcc.nrcs.usda.gov/snow. 
Snow-related missions and data sets used in the following steps:* [SnowEx](https://nsidc.org/data/snowex) * SnowEx17 Ground Penetrating Radar, Version 2: https://doi.org/10.5067/G21LGCNLFSC5* [ASO](https://nsidc.org/data/aso) * ASO L4 Lidar Snow Depth 3m UTM Grid, Version 1: https://doi.org/10.5067/KIE9QNVG7HP0* [MODIS](https://nsidc.org/data/modis) * MODIS/Terra Snow Cover Daily L3 Global 500m SIN Grid, Version 6: https://doi.org/10.5067/MODIS/MOD10A1.006 Other relevant snow products:* [VIIRS Snow Data](http://nsidc.org/data/search/sortKeys=score,,desc/facetFilters=%257B%2522facet_sensor%2522%253A%255B%2522Visible-Infrared%2520Imager-Radiometer%2520Suite%2520%257C%2520VIIRS%2522%255D%252C%2522facet_parameter%2522%253A%255B%2522SNOW%2520COVER%2522%252C%2522Snow%2520Cover%2522%255D%257D/pageNumber=1/itemsPerPage=25)* [AMSR-E and AMSR-E/AMSR2 Unified Snow Data](http://nsidc.org/data/search/sortKeys=score,,desc/facetFilters=%257B%2522facet_sensor%2522%253A%255B%2522Advanced%2520Microwave%2520Scanning%2520Radiometer-EOS%2520%257C%2520AMSR-E%2522%252C%2522Advanced%2520Microwave%2520Scanning%2520Radiometer%25202%2520%257C%2520AMSR2%2522%255D%252C%2522facet_parameter%2522%253A%255B%2522SNOW%2520WATER%2520EQUIVALENT%2522%252C%2522Snow%2520Water%2520Equivalent%2522%255D%257D/pageNumber=1/itemsPerPage=25)* [MEaSUREs Snow Data](http://nsidc.org/data/search/keywords=measures/sortKeys=score,,desc/facetFilters=%257B%2522facet_parameter%2522%253A%255B%2522SNOW%2520COVER%2522%255D%252C%2522facet_sponsored_program%2522%253A%255B%2522NASA%2520National%2520Snow%2520and%2520Ice%2520Data%2520Center%2520Distributed%2520Active%2520Archive%2520Center%2520%257C%2520NASA%2520NSIDC%2520DAAC%2522%255D%252C%2522facet_format%2522%253A%255B%2522NetCDF%2522%255D%252C%2522facet_temporal_duration%2522%253A%255B%252210%252B%2520years%2522%255D%257D/pageNumber=1/itemsPerPage=25) * Near-Real-Time SSM/I-SSMIS EASE-Grid Daily Global Ice Concentration and Snow Extent (NISE), Version 5: https://doi.org/10.5067/3KB2JPLFPK3R ___ Import PackagesGet started by importing packages needed to run the following code blocks, including the `tutorial_helper_functions` module provided within this repository.
###Code
import os
import geopandas as gpd
from shapely.geometry import Polygon, mapping
from shapely.geometry.polygon import orient
import pandas as pd
import matplotlib.pyplot as plt
import rasterio
from rasterio.plot import show
import numpy as np
import pyresample as prs
import requests
import json
import pprint
from rasterio.mask import mask
from mpl_toolkits.axes_grid1 import make_axes_locatable
# This is our functions module. We created several helper functions to discover, access, and harmonize the data below.
import tutorial_helper_functions as fn
###Output
_____no_output_____
###Markdown
___ Data DiscoveryStart by identifying your study area and exploring coincident data over the same time and area. NASA Earthdata Search can be used to visualize file coverage over mulitple data sets and to access the same data you will be working with below: https://search.earthdata.nasa.gov/projects?projectId=5366449248 Identify area and time of interestSince our focus is on the Grand Mesa study site of the NASA SnowEx campaign, we'll use that area to search for coincident data across other data products. From the [SnowEx17 Ground Penetrating Radar Version 2](https://doi.org/10.5067/G21LGCNLFSC5) landing page, you can find the rectangular spatial coverage under the Overview tab, or you can draw a polygon over your area of interest in the map under the Download Data tab and export the shape as a geojson file using the Export Polygon icon shown below. An example polygon geojson file is provided in the /Data folder of this repository. Create polygon coordinate stringRead in the geojson file as a GeoDataFrame object and simplify and reorder using the shapely package. This will be converted back to a dictionary to be applied as our polygon search parameter.
###Code
polygon_filepath = str(os.getcwd() + '/Data/nsidc-polygon.json') # Note: A shapefile or other vector-based spatial data format could be substituted here.
gdf = gpd.read_file(polygon_filepath) #Return a GeoDataFrame object
# Simplify polygon for complex shapes in order to pass a reasonable request length to CMR. The larger the tolerance value, the more simplified the polygon.
# Orient counter-clockwise: CMR polygon points need to be provided in counter-clockwise order. The last point should match the first point to close the polygon.
poly = orient(gdf.simplify(0.05, preserve_topology=False).loc[0],sign=1.0)
#Format dictionary to polygon coordinate pairs for CMR polygon filtering
polygon = ','.join([str(c) for xy in zip(*poly.exterior.coords.xy) for c in xy])
print('Polygon coordinates to be used in search:', polygon)
poly
###Output
_____no_output_____
###Markdown
Set time rangeWe are interested in accessing files within each data set over the same time range, so we'll start by searching all of 2017.
###Code
temporal = '2017-01-01T00:00:00Z,2017-12-31T23:59:59Z' # Set temporal range
###Output
_____no_output_____
###Markdown
Create data dictionary Create a nested dictionary with each data set shortname and version, as well as shared temporal range and polygonal area of interest. Data set shortnames, or IDs, as well as version numbers, are located at the top of every NSIDC landing page.
###Code
data_dict = { 'snowex': {'short_name': 'SNEX17_GPR','version': '2','polygon': polygon,'temporal':temporal},
'aso': {'short_name': 'ASO_3M_SD','version': '1','polygon': polygon,'temporal':temporal},
'modis': {'short_name': 'MOD10A1','version': '6','polygon': polygon,'temporal':temporal}
}
###Output
_____no_output_____
###Markdown
Determine how many files exist over this time and area of interest, as well as the average size and total volume of those filesWe will use the `granule_info` function to query metadata about each data set and associated files using the [Common Metadata Repository (CMR)](https://cmr.earthdata.nasa.gov/search/site/docs/search/api.html), which is a high-performance, high-quality, continuously evolving metadata system that catalogs Earth Science data and associated service metadata records. Note that not all NSIDC data can be searched at the file level using CMR, particularly those outside of the NASA DAAC program.
###Code
for k, v in data_dict.items(): fn.granule_info(data_dict[k])
###Output
_____no_output_____
###Markdown
Find coincident dataThe function above tells us the size of data available for each data set over our time and area of interest, but we want to go a step further and determine what time ranges are coincident based on our bounding box. This `time_overlap` helper function returns a dataframe with file names, dataset_id, start date, and end date for all files that overlap in temporal range across all data sets of interest.
###Code
search_df = fn.time_overlap(data_dict)
print(len(search_df), ' total files returned')
search_df
###Output
_____no_output_____
###Markdown
___ Data AccessThe number of files has been greatly reduced to only those needed to compare data across these data sets. This CMR query will collect the data file URLs, including the associated quality and metadata files if available.
###Code
# Create new dictionary with fields needed for CMR url search
url_df = search_df.drop(columns=['start_date', 'end_date','version','dataset_id'])
url_dict = url_df.to_dict('records')
# CMR search variables
granule_search_url = 'https://cmr.earthdata.nasa.gov/search/granules'
headers= {'Accept': 'application/json'}
# Create URL list from each df row
urls = []
for i in range(len(url_dict)):
response = requests.get(granule_search_url, params=url_dict[i], headers=headers)
results = json.loads(response.content)
urls.append(fn.cmr_filter_urls(results))
# flatten url list
urls = list(np.concatenate(urls))
urls
###Output
_____no_output_____
###Markdown
Additional data access and subsetting services API AccessData can be accessed directly from our HTTPS file system through the URLs collected above, or through our Application Programming Interface (API). Our API offers you the ability to order data using specific temporal and spatial filters, as well as subset, reformat, and reproject select data sets. The same subsetting, reformatting, and reprojection services available on select data sets through NASA Earthdata Search can also be applied using this API. These options can be requested in a single access command without the need to script against our data directory structure. See our [programmatic access guide](https://nsidc.org/support/how/how-do-i-programmatically-request-data-services) for more information on API options. Add service request options for MODIS dataAccording to https://nsidc.org/support/faq/what-data-subsetting-reformatting-and-reprojection-services-are-available-for-MODIS-data, we can see that spatial subsetting and GeoTIFF reformatting are available for MOD10A1 so those options are requested below. The area subset must be described as a bounding box, which can be created based on the polygon bounds above. We will also add GeoTIFF reformatting to the MOD10A1 data dictionary and the temporal range will be set based on the range of MOD10A1 files in the dataframe above. These new parameters will be added to the API request below.
###Code
bounds = poly.bounds # Get polygon bounds to be used as bounding box input
data_dict['modis']['bbox'] = ','.join(map(str, list(bounds))) # Add bounding box subsetting to MODIS dictionary
data_dict['modis']['format'] = 'GeoTIFF' # Add geotiff reformatting to MODIS dictionary
# Set new temporal range based on dataframe above. Note that this will request all MOD10A1 data falling within this time range.
modis_start = min(search_df.loc[search_df['short_name'] == 'MOD10A1', 'start_date'])
modis_end = max(search_df.loc[search_df['short_name'] == 'MOD10A1', 'end_date'])
data_dict['modis']['temporal'] = ','.join([modis_start,modis_end])
print(data_dict['modis'])
###Output
_____no_output_____
###Markdown
Create the data request API endpointProgrammatic API requests are formatted as HTTPS URLs that contain key-value-pairs specifying the service operations that we specified above. We will first create a string of key-value-pairs from our data dictionary and we'll feed those into our API endpoint. This API endpoint can be executed via command line, a web browser, or in Python below.
###Code
base_url = 'https://n5eil02u.ecs.nsidc.org/egi/request' # Set NSIDC data access base URL
#data_dict['modis']['request_mode'] = 'stream' # Set the request mode to asynchronous
param_string = '&'.join("{!s}={!r}".format(k,v) for (k,v) in data_dict['modis'].items()) # Convert param_dict to string
param_string = param_string.replace("'","") # Remove quotes
api_request = [f'{base_url}?{param_string}']
print(api_request[0]) # Print API base URL + request parameters
###Output
_____no_output_____
###Markdown
Download optionsThe following functions will return the file URLs and the MOD10A1 API request. For demonstration purposes, these functions have been commented out, and instead the data utilized in the following steps will be accessed from a staged directory. ***Note that if you are running this notebook in Binder, the memory may not be sufficient to download these files. Please use the Docker or local Conda options provided in the README if you are interested in downloading all files.***
###Code
path = str(os.getcwd() + '/Data')
if not os.path.exists(path):
os.mkdir(path)
os.chdir(path)
#fn.cmr_download(urls)
#fn.cmr_download(api_request)
# pull data from staged bucket for demonstration
!aws --no-sign-request s3 cp s3://snowex-aso-modis-tutorial-data/ ./ --recursive #access data in staged directory
###Output
_____no_output_____
###Markdown
___ Read in SnowEx data and buffer points around Snotel locationThis SnowEx data set is provided in CSV. A [Pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/getting_started/overview.html) is used to easily read in data. For these next steps, just one day's worth of data will be selected from this file and the coincident ASO and MODIS data will be selected.
###Code
snowex_path = './SnowEx17_GPR_Version2_Week1.csv' # Define local filepath
df = pd.read_csv(snowex_path, sep='\t')
df.head()
###Output
_____no_output_____
###Markdown
Convert to time values and extract a single day The collection date needs to be extracted from the `collection` value and a new dataframe will be generated as a subset of the original based on a single day:
###Code
df['date'] = df.collection.str.rsplit('_').str[-1].astype(str)
df.date = pd.to_datetime(df.date, format="%m%d%y")
df = df.sort_values(['date'])
df_subset = df[df['date'] == '2017-02-08'] # subset original dataframe and only select this date
df.head()
###Output
_____no_output_____
###Markdown
Convert to Geopandas dataframe to provide point geometryAccording to the SnowEx documentation, the data are available in UTM Zone 12N so we'll set to this projection so that we can buffer in meters in the next step:
###Code
gdf_utm= gpd.GeoDataFrame(df_subset, geometry=gpd.points_from_xy(df_subset.x, df_subset.y), crs='EPSG:32612')
gdf_utm.head()
###Output
_____no_output_____
###Markdown
Buffer data around SNOTEL site We can further subset the SnowEx snow depth data to get within a 500 m radius of the [SNOTEL Mesa Lakes](https://wcc.sc.egov.usda.gov/nwcc/site?sitenum=622&state=co) site.First we'll create a new geodataframe with the SNOTEL site location, set to our SnowEx UTM coordinate reference system, and create a 500 meter buffer around this point. Then we'll subset the SnowEx points to the buffer and convert back to the WGS84 CRS:
###Code
# Create another geodataframe (gdfsel) with the center point for the selection
df_snotel = pd.DataFrame(
{'SNOTEL Site': ['Mesa Lakes'],
'Latitude': [39.05],
'Longitude': [-108.067]})
gdf_snotel = gpd.GeoDataFrame(df_snotel, geometry=gpd.points_from_xy(df_snotel.Longitude, df_snotel.Latitude), crs='EPSG:4326')
gdf_snotel.to_crs('EPSG:32612', inplace=True) # set CRS to UTM 12 N
buffer = gdf_snotel.buffer(500) #create 500 m buffer
gdf_buffer = gdf_utm.loc[gdf_utm.geometry.within(buffer.unary_union)] # subset dataframe to buffer region
gdf_buffer = gdf_buffer.to_crs('EPSG:4326')
###Output
_____no_output_____
###Markdown
___ Read in Airborne Snow Observatory data and clip to SNOTEL bufferSnow depth data from the ASO L4 Lidar Snow Depth 3m UTM Grid data set were calculated from surface elevation measured by the Riegl LMS-Q1560 airborne laser scanner (ALS). The data are provided in GeoTIFF format, so we'll use the [Rasterio](https://rasterio.readthedocs.io/en/latest/) library to read in the data.
###Code
aso_path = './ASO_3M_SD_USCOGM_20170208.tif' # Define local filepath
aso = rasterio.open(aso_path)
###Output
_____no_output_____
###Markdown
Clip data to SNOTEL bufferIn order to reduce the data volume to the buffered region of interest, we can subset this GeoTIFF to the same SNOTEL buffer:
###Code
buffer = buffer.to_crs(crs=aso.crs) # convert buffer to CRS of ASO rasterio object
out_img, out_transform = mask(aso, buffer, crop=True)
out_meta = aso.meta.copy()
epsg_code = int(aso.crs.data['init'][5:])
out_meta.update({"driver": "GTiff", "height": out_img.shape[1], "width": out_img.shape[2], "transform": out_transform, "crs": '+proj=utm +zone=13 +datum=WGS84 +units=m +no_defs'})
out_tif = 'clipped_ASO_3M_SD_USCOGM_20170208.tif'
with rasterio.open(out_tif, 'w', **out_meta) as dest:
dest.write(out_img)
clipped_aso = rasterio.open(out_tif)
aso_array = clipped_aso.read(1, masked=True)
###Output
_____no_output_____
###Markdown
___ Read in MODIS Snow Cover data We are interested in the Normalized Difference Snow Index (NDSI) snow cover value from the MOD10A1 data set, which is an index that is related to the presence of snow in a pixel. According to the [MOD10A1 FAQ](https://nsidc.org/support/faq/what-ndsi-snow-cover-and-how-does-it-compare-fsc), snow cover is detected using the NDSI ratio of the difference in visible reflectance (VIS) and shortwave infrared reflectance (SWIR), where NDSI = ((band 4-band 6) / (band 4 + band 6)).Note that you may need to change this filename output below if you download the data outside of the staged bucket, as the output names may vary per request.
###Code
modis_path = './MOD10A1_A2017039_h09v05_006_2017041102600_MOD_Grid_Snow_500m_NDSI_Snow_Cover_99f6ee91_subsetted.tif' # Define local filepath
modis = rasterio.open(modis_path)
modis_array = modis.read(1, masked=True)
###Output
_____no_output_____
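###Markdown
For reference, the NDSI formula quoted above can be written as a tiny helper. This is purely illustrative: the reflectance values in the example call are made up, and the MOD10A1 GeoTIFF read above already stores the computed NDSI snow cover, so nothing in this cell is needed for the rest of the tutorial.
###Code
# Illustrative helper for the NDSI formula: (band 4 - band 6) / (band 4 + band 6)
def ndsi(band4_vis, band6_swir):
    return (band4_vis - band6_swir) / (band4_vis + band6_swir)

print(ndsi(0.8, 0.1))  # made-up reflectances; snow-covered pixels typically yield high NDSI
###Output
_____no_output_____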
###Markdown
___ Add ASO and MODIS data to GeoPandas dataframe In order to add data from these ASO and MODIS gridded data sets, we need to define the geometry parameters for these, as well as for the SnowEx data. The SnowEx geometry is set using the longitude and latitude values of the geodataframe:
###Code
snowex_geometry = prs.geometry.SwathDefinition(lons=gdf_buffer['long'], lats=gdf_buffer['lat'])
print('snowex geometry: ', snowex_geometry)
###Output
_____no_output_____
###Markdown
With ASO and MODIS data on regular grids, we can create area definitions for these using projection and extent metadata:
###Code
pprint.pprint(clipped_aso.profile)
print('')
print(clipped_aso.bounds)
pprint.pprint(modis.profile)
print('')
print(modis.bounds)
# Create area definition for ASO
area_id = 'UTM_13N' # area_id: ID of area
description = 'WGS 84 / UTM zone 13N' # description: Description
proj_id = 'UTM_13N' # proj_id: ID of projection (being deprecated)
projection = 'EPSG:32613' # projection: Proj4 parameters as a dict or string
width = clipped_aso.width # width: Number of grid columns
height = clipped_aso.height # height: Number of grid rows
area_extent = (234081.0, 4326303.0, 235086.0, 4327305.0)
aso_geometry = prs.geometry.AreaDefinition(area_id, description, proj_id, projection, width, height, area_extent)
# Create area definition for MODIS
area_id = 'Sinusoidal' # area_id: ID of area
description = 'Sinusoidal Modis Spheroid' # description: Description
proj_id = 'Sinusoidal' # proj_id: ID of projection (being deprecated)
projection = 'PROJCS["Sinusoidal Modis Spheroid",GEOGCS["Unknown datum based upon the Authalic Sphere",DATUM["Not_specified_based_on_Authalic_Sphere",SPHEROID["Sphere",6371007.181,887203.3395236016,AUTHORITY["EPSG","7035"]],AUTHORITY["EPSG","6035"]],PRIMEM["Greenwich",0],UNIT["degree",0.0174532925199433],AUTHORITY["EPSG","4035"]],PROJECTION["Sinusoidal"],PARAMETER["longitude_of_center",0],PARAMETER["false_easting",0],PARAMETER["false_northing",0],UNIT["metre",1,AUTHORITY["EPSG","9001"]]]' # projection: Proj4 parameters as a dict or string
width = modis.width # width: Number of grid columns
height = modis.height # height: Number of grid rows
area_extent = (-9332971.361735353, 4341240.1538655795, -9331118.110869242, 4343093.404731691)
modis_geometry = prs.geometry.AreaDefinition(area_id, description, proj_id, projection, width, height, area_extent)
###Output
_____no_output_____
###Markdown
Interpolate ASO and MODIS values onto SnowEx pointsTo interpolate ASO snow depth and MODIS snow cover data to SnowEx snow depth points, we can use the `pyresample` library. The `radius_of_influence` parameter determines maximum radius to look for nearest neighbor interpolation.
###Code
# add ASO values to geodataframe
import warnings
warnings.filterwarnings('ignore') # ignore warning when resampling to a different projection
gdf_buffer['aso_snow_depth'] = prs.kd_tree.resample_nearest(aso_geometry, aso_array, snowex_geometry, radius_of_influence=3)
# add MODIS values to geodataframe
gdf_buffer['modis_ndsi'] = prs.kd_tree.resample_nearest(modis_geometry, modis_array, snowex_geometry, radius_of_influence=500)
gdf_buffer.head()
###Output
_____no_output_____
###Markdown
___ Visualize data and export for further GIS analysisThe rasterio plot module allows you to directly plot GeoTIFFs objects. The SnowEx `Thickness` values are plotted against the clipped ASO snow depth raster.
###Code
gdf_buffer_aso_crs = gdf_buffer.to_crs('EPSG:32613')
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig, ax = plt.subplots(figsize=(10, 10))
show(clipped_aso, ax=ax)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.1) # fit legend to height of plot
gdf_buffer_aso_crs.plot(column='Thickness', ax=ax, cmap='OrRd', legend=True, cax=cax, legend_kwds=
{'label': "Snow Depth (m)",});
###Output
_____no_output_____
###Markdown
We can do the same for MOD10A1: This was subsetted to the entire Grand Mesa region defined by the SnowEx data set coverage.
###Code
# Set dataframe to MOD10A1 Sinusoidal projection
gdf_buffer_modis_crs = gdf_buffer.to_crs('PROJCS["Sinusoidal Modis Spheroid",GEOGCS["Unknown datum based upon the Authalic Sphere",DATUM["Not_specified_based_on_Authalic_Sphere",SPHEROID["Sphere",6371007.181,887203.3395236016,AUTHORITY["EPSG","7035"]],AUTHORITY["EPSG","6035"]],PRIMEM["Greenwich",0],UNIT["degree",0.0174532925199433],AUTHORITY["EPSG","4035"]],PROJECTION["Sinusoidal"],PARAMETER["longitude_of_center",0],PARAMETER["false_easting",0],PARAMETER["false_northing",0],UNIT["metre",1,AUTHORITY["EPSG","9001"]]]')
fig, ax = plt.subplots(figsize=(10, 10))
show(modis, ax=ax)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.1) # fit legend to height of plot
gdf_buffer_modis_crs.plot(column='Thickness', ax=ax, cmap='OrRd', legend=True, cax=cax, legend_kwds=
{'label': "Snow Depth (m)",});
###Output
_____no_output_____
###Markdown
Additional data imagery services NASA Worldview and the Global Browse Imagery ServiceNASA’s EOSDIS Worldview mapping application provides the capability to interactively browse over 900 global, full-resolution satellite imagery layers and then download the underlying data. Many of the available imagery layers are updated within three hours of observation, essentially showing the entire Earth as it looks “right now."According to the [MOD10A1 landing page](https://nsidc.org/data/mod10a1), snow cover imagery layers from this data set are available through NASA Worldview. This layer can be downloaded as various image files including GeoTIFF using the snapshot feature at the top right of the page. This link presents the MOD10A1 NDSI layer over our time and area of interest: https://go.nasa.gov/35CgYMd. Additionally, the NASA Global Browse Imagery Service provides up to date, full resolution imagery for select NSIDC DAAC data sets as web services including WMTS, WMS, KML, and more. These layers can be accessed in GIS applications following guidance on the [GIBS documentation pages](https://wiki.earthdata.nasa.gov/display/GIBS/Geographic+Information+System+%28GIS%29+Usage). Export dataframe to Shapefile Finally, the dataframe can be exported as an Esri shapefile for further analysis in GIS:
###Code
gdf_buffer = gdf_buffer.drop(columns=['date'])
gdf_buffer.to_file('snow-data-20170208.shp')
###Output
_____no_output_____ |
numpy_ufunc_butterfy_curve.ipynb | ###Markdown
NumPy ufunc (universal functions)
###Code
import numpy as np
print(f'NumPy version = {np.__version__}')
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
###Output
NumPy version = 1.14.5
###Markdown
unary ufunc
###Code
np.sqrt(2)
np.abs(-5)
a=np.arange(1, 11)
root=np.sqrt(a)
root
np.log(a) # natural log
np.log10(a)
###Output
_____no_output_____
###Markdown
butterfly curvehttps://en.wikipedia.org/wiki/Butterfly_curve_(transcendental) $$x=\sin(t)\left(e^{\cos(t)}-2\cos(4t)-\sin ^{5}\left({t \over 12}\right)\right) \\y=\cos(t)\left(e^{\cos(t)}-2\cos(4t)-\sin ^{5}\left({t \over 12}\right)\right) \\{0\leq t\leq 12\pi }$$
###Code
t=np.linspace(0, 12*np.pi, 360*5)
t
x=np.sin(t)*(np.exp(np.cos(t))-2*np.cos(4*t)-np.sin(t/12)**5)
y=np.cos(t)*(np.exp(np.cos(t))-2*np.cos(4*t)-np.sin(t/12)**5)
plt.plot(x,y)
###Output
_____no_output_____
###Markdown
binary ufunc
###Code
a=np.random.randint(1, 10, (4, 2))
a
b=np.random.randint(1, 10, (4, 2))
b
np.maximum(a, b)
np.add(a, b)
a + b
###Output
_____no_output_____ |
notebooks/group2vec.ipynb | ###Markdown
Group2vec. **What?**: The notebook demonstrates how to use subscriptions of users from social networks. We will train a word2vec model, but instead of tokens we will use groups. **Data used**: I use open data of users from the VK social network (popular in Russia, Ukraine, etc). Data were crawled using the [Suvec VK crawl engine](https://github.com/ProtsenkoAI/skady-user-vectorizer) and [Skady ward - crawl GUI](https://github.com/ProtsenkoAI/skady-ward), both instruments I have developed myself. **Why?** Because then we can get knowledge about groups and their users in social networks, similarly to how we analyze texts and their authors in NLP. For example, if you want to train a RecSys that will work for new users of your service, you can apply group2vec to get some user features without interactions. **Why word2vec and not BERT?**: This is a simple example of how you can use crawled data. Of course, BERT and other closer-to-SOTA methods can increase metrics. 1. Set things up: import packages, load data, set variables
###Code
import os
import json
from time import time
from typing import List, Union, Callable, Dict
import gensim
import vk_api
from sklearn.manifold import TSNE
import numpy as np
from random import sample
%matplotlib inline
import matplotlib.pyplot as plt
# import seaborn as sns
DATA_PATH = "./data"
VECTOR_SIZE = 300
CORES_TO_USE = 3
# sns.set_palette(sns.color_palette("tab10"))
parsed_pth = os.path.join(DATA_PATH, "parsed_data.jsonl")
users_data = []
with open(parsed_pth) as f:
for line in f.readlines():
data = json.loads(line)
if "user_data" in data: # some data lines are corrupted
users_data.append(data)
###Output
_____no_output_____
###Markdown
2. A look at the data
###Code
print(f"We have {len(users_data)} users (text analog for NLP)")
nb_of_groups = 0
for user_data in users_data:
nb_of_groups += len(user_data["user_data"]["groups"])
print(f"They subscribed to {nb_of_groups} groups (token analog for NLP)")
some_user_data = users_data[0]
print("Each user has data about:", " and ".join(some_user_data["user_data"].keys()))
print("'Friends' is list of other users ids: ", some_user_data["user_data"]["friends"][:5])
print("'Groups' is list of groups ids: ", some_user_data["user_data"]["groups"][:5])
###Output
Each user has data about: friends and groups
'Friends' is list of other users ids: [45631, 79858, 117679, 125439, 154894]
'Groups' is list of groups ids: [23433159, 74562311, 189999199, 57846937, 20833574]
###Markdown
Intuition of group2vec. Hereinafter we treat groups as "tokens" and users as "documents". **The idea of word2vec is**: if two tokens appear in similar contexts, their meaning is similar. **The idea of group2vec is**: if two groups appear in the subscriptions of similar users (users with a lot of common groups), these groups are similar. **Then**, just as word2vec users build a text embedding by averaging word embeddings, we build user embeddings by averaging group embeddings. 3. Prepare data for word2vec
###Code
corpus = []
for user_data in users_data:
# make strings because of gensim requirements
document = [str(group) for group in user_data["user_data"]["groups"]]
corpus.append(document)
###Output
_____no_output_____
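###Markdown
A minimal sketch of the averaging idea described above, using hypothetical toy vectors rather than the trained model: a user's embedding is simply the mean of the embeddings of the groups they subscribe to.
###Code
# toy illustration of a group2vec user embedding (hypothetical 3-dimensional group vectors)
toy_group_vecs = {"g1": np.array([1.0, 0.0, 0.0]),
                  "g2": np.array([0.0, 1.0, 0.0])}
toy_user_groups = ["g1", "g2"]
toy_user_vec = np.mean([toy_group_vecs[g] for g in toy_user_groups], axis=0)
toy_user_vec
###Output
_____no_output_____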
###Markdown
Note: the word2vec window is very large because groups, unlike words in a text, have no order. Thus, when the model predicts a group, it can use information about any other group in the user's subscriptions.
###Code
w2v_model = gensim.models.Word2Vec(min_count=20,
window=300,
vector_size=VECTOR_SIZE,
sample=6e-5, # downsampling popular groups
alpha=0.03,
min_alpha=0.0007,
negative=5,
workers=CORES_TO_USE)
w2v_model.build_vocab(corpus)
print(f"Total documents (users) in corpus: {w2v_model.corpus_count}")
t = time()
w2v_model.train(corpus, total_examples=w2v_model.corpus_count, epochs=30, report_delay=1)
print('Time to train the model: {} mins'.format(round((time() - t) / 60, 2)))
w2v_model.wv.save("../resources/pretrained_models/w2v_33k.kv")
del w2v_model
w2v_vecs = gensim.models.KeyedVectors.load("../resources/pretrained_models/w2v_33k.kv")
###Output
_____no_output_____
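###Markdown
Since corpus_count above is the number of documents (users), the size of the group vocabulary actually kept after min_count filtering can be read from the saved vectors:
###Code
# number of unique groups retained in the word2vec vocabulary
print(f"Unique groups in vocabulary: {len(w2v_vecs.index_to_key)}")
###Output
_____no_output_____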
###Markdown
4. Test groups embeddings
###Code
# authorizing in vk to get group names by ids
session = vk_api.VkApi(token=input("Enter your access token for vk\n"))
def request_groups_info(groups_names_or_ids: List) -> List[dict]:
resp = []
    # vk api doesn't allow to get more than 500 ids per response
groups_per_req = 500
for groups_batch_start_idx in range(0, len(groups_names_or_ids), groups_per_req):
groups_encoded = ",".join(groups_names_or_ids[groups_batch_start_idx: groups_batch_start_idx + groups_per_req])
resp.extend(session.method("groups.getById", values={"group_ids": groups_encoded, "fields": "name"}))
return resp
def get_groups_names(group_ids: List[str]):
resps = request_groups_info(group_ids)
names = [group["screen_name"] for group in resps]
return names
def get_groups_ids(groups_names: List[str]):
resps = request_groups_info(groups_names)
ids = [str(group["id"]) for group in resps]
return ids
###Output
_____no_output_____
###Markdown
Relations between groups. Below, the trained word2vec model finds groups that are similar to a given group, or the one group that does not belong to a list.
###Code
def apply_wv_method(group_names: Union[str, List[str]], wv_method):
group_ids = get_groups_ids(group_names)
model_preds = wv_method(group_ids)
return model_preds
def find_similar(group_name_or_names, vecs: gensim.models.KeyedVectors, cnt: int = 5):
if isinstance(group_name_or_names, str):
group_names = [group_name_or_names]
else:
group_names = group_name_or_names
similar_grops_preds = apply_wv_method(group_names, vecs.most_similar)
similar_groups_ids = [group_id for group_id, sim_score in similar_grops_preds[:cnt]]
groups_names = group_names + get_groups_names(similar_groups_ids)
src_groups_names = groups_names[:len(group_names)]
similar_group_names = groups_names[len(group_names):]
print(f"Similar groups for groups {src_groups_names}:")
print("\n".join(similar_group_names))
def find_odd_one_out(group_names, vecs: gensim.models.KeyedVectors):
preds = apply_wv_method(group_names, vecs.doesnt_match)
print("Odd one:", get_groups_names([preds]))
###Output
_____no_output_____
###Markdown
What groups are similar to BBC?
###Code
find_similar("bbc", vecs=w2v_vecs)
###Output
Similar groups for groups ['bbc']:
tvrain
vestifuture
sovsport
radioromantika
izvestia
###Markdown
Which is the odd one out among two humor groups and a company page?
###Code
find_odd_one_out(["godnotent", "abstract_memes", "yandex"], vecs=w2v_vecs)
###Output
Odd one: ['yandex']
###Markdown
TSNE-map of groups
###Code
# training TSNE
all_groups_ids = w2v_vecs.index_to_key
all_vectors = [w2v_vecs.get_vector(group_id) for group_id in all_groups_ids]
tsne = TSNE(n_components=2, random_state=0, n_jobs=CORES_TO_USE, learning_rate=300, verbose=2)
tsne_vecs = tsne.fit_transform(all_vectors)
group_id_to_vec = {}
for group_id, vec in zip(all_groups_ids, tsne_vecs):
group_id_to_vec[group_id] = vec
def plot_tsne(groups_names: List[str]):
tsne_res = _get_tsne_vectors(groups_names)
_plot_2d_scatter(tsne_res, dots_labels=groups_names, title="TSNE of groups vectors")
def _get_vectors(groups_names, vecs, vector_size=VECTOR_SIZE):
    # raw word2vec vectors for the given group names (zeros for out-of-vocabulary groups)
    res = [vecs.get_vector(g) if g in vecs else np.zeros(vector_size)
           for g in get_groups_ids(groups_names)]
    return np.array(res)
def _get_tsne_vectors(groups_names, id_to_vec_map=group_id_to_vec):
groups_ids = get_groups_ids(groups_names)
res = []
for group in groups_ids:
if group in id_to_vec_map:
res.append(id_to_vec_map[group])
return res
def _plot_2d_scatter(cords, dots_labels, title: str):
plt.figure(figsize=(16, 16))
for label, cord in zip(dots_labels, cords):
plt.scatter(*cord)
plt.annotate(label,
xy=cord,
xytext=(5, 2),
textcoords='offset points',
ha='right',
va='bottom')
some_groups_names = get_groups_names(sample(all_groups_ids, 500))
plot_tsne(some_groups_names)
###Output
_____no_output_____
###Markdown
5. Apply in your project. You can [download pretrained model](https://drive.google.com/drive/folders/1L_cHapEISPOgUohZN7Tt84f3kj5OHl41?usp=sharing) and use it to get user or group features. Example:
###Code
class UserVectorizer:
def __init__(self, embedding_model_pth: str, users_data: List[dict]):
embedding_model = gensim.models.KeyedVectors.load(embedding_model_pth)
self.group2vec_map = self._export_vectors_from_model(embedding_model)
self.user2data = self._create_user_to_data_map(users_data)
def _create_user_to_data_map(self, users_data: List[dict]):
user2data = {}
for sample in users_data:
user2data[sample["user_id"]] = sample["user_data"]
return user2data
def _export_vectors_from_model(self, model: gensim.models.KeyedVectors):
all_groups_ids = model.index_to_key
all_vecs = [model.get_vector(group_id) for group_id in all_groups_ids]
group_id_to_vec = {}
for group_id, vec in zip(all_groups_ids, all_vecs):
group_id_to_vec[group_id] = vec
return group_id_to_vec
    def get_user_vector(self, user: str):
        user_groups = self.user2data[user]["groups"]
        mean_embed = np.zeros(VECTOR_SIZE)
        cnt_groups_added = 0
        for group in user_groups:
            group = str(group)
            if group in self.group2vec_map:
                cnt_groups_added += 1
                mean_embed += self.group2vec_map[group]
        # guard against division by zero when none of the user's groups are in the vocabulary
        if cnt_groups_added == 0:
            return mean_embed
        return mean_embed / cnt_groups_added
vectorizer = UserVectorizer("../resources/pretrained_models/w2v_33k.kv", users_data=users_data)
print("Some users:")
for sample in users_data[:5]:
print(sample["user_id"])
user_vector = vectorizer.get_user_vector("251073")
print(user_vector.shape, user_vector[:20])
###Output
(300,) [ 0.54511083 -0.26890017 -0.67196952 -0.14238928 -0.17930816 -0.17699027
0.38363072 1.19074538 0.10379778 0.06293709 0.18826926 0.19950686
0.11308157 0.39123757 0.10414121 0.14905234 0.43452398 -0.60979888
-0.926352 0.29128336]
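###Markdown
A further sketch, assuming the user ids printed above exist in the crawled data: user vectors can be compared with cosine similarity, which is the kind of feature a recommender system could consume for new users.
###Code
# hypothetical helper: cosine similarity between two users' group2vec embeddings
def user_cosine_similarity(vectorizer: UserVectorizer, user_a: str, user_b: str) -> float:
    vec_a = vectorizer.get_user_vector(user_a)
    vec_b = vectorizer.get_user_vector(user_b)
    return float(np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b) + 1e-12))

# usage (substitute two user ids from the printed list above):
# user_cosine_similarity(vectorizer, "251073", "<another_user_id>")
###Output
_____no_output_____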
|
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-09-03.ipynb | ###Markdown
RadarCOVID-Report Data Extraction
###Code
import datetime
import logging
import os
import shutil
import tempfile
import textwrap
import uuid
import dataframe_image as dfi
import matplotlib.ticker
import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
sns.set()
matplotlib.rcParams['figure.figsize'] = (15, 6)
extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
###Output
_____no_output_____
###Markdown
COVID-19 Cases
###Code
confirmed_df = pd.read_csv("https://covid19tracking.narrativa.com/csv/confirmed.csv")
radar_covid_countries = {"Spain"}
# radar_covid_regions = { ... }
confirmed_df = confirmed_df[confirmed_df["Country_EN"].isin(radar_covid_countries)]
# confirmed_df = confirmed_df[confirmed_df["Region"].isin(radar_covid_regions)]
# set(confirmed_df.Region.tolist()) == radar_covid_regions
confirmed_country_columns = list(filter(lambda x: x.startswith("Country_"), confirmed_df.columns))
confirmed_regional_columns = confirmed_country_columns + ["Region"]
confirmed_df.drop(columns=confirmed_regional_columns, inplace=True)
confirmed_df = confirmed_df.sum().to_frame()
confirmed_df.tail()
confirmed_df.reset_index(inplace=True)
confirmed_df.columns = ["sample_date_string", "cumulative_cases"]
confirmed_df.sort_values("sample_date_string", inplace=True)
confirmed_df["new_cases"] = confirmed_df.cumulative_cases.diff()
confirmed_df["rolling_mean_new_cases"] = confirmed_df.new_cases.rolling(7).mean()
confirmed_df.tail()
extraction_date_confirmed_df = \
confirmed_df[confirmed_df.sample_date_string == extraction_date]
extraction_previous_date_confirmed_df = \
confirmed_df[confirmed_df.sample_date_string == extraction_previous_date].copy()
if extraction_date_confirmed_df.empty and \
not extraction_previous_date_confirmed_df.empty:
extraction_previous_date_confirmed_df["sample_date_string"] = extraction_date
extraction_previous_date_confirmed_df["new_cases"] = \
extraction_previous_date_confirmed_df.rolling_mean_new_cases
extraction_previous_date_confirmed_df["cumulative_cases"] = \
extraction_previous_date_confirmed_df.new_cases + \
extraction_previous_date_confirmed_df.cumulative_cases
confirmed_df = confirmed_df.append(extraction_previous_date_confirmed_df)
confirmed_df.tail()
confirmed_df[["new_cases", "rolling_mean_new_cases"]].plot()
###Output
_____no_output_____
###Markdown
Extract API TEKs
###Code
from Modules.RadarCOVID import radar_covid
exposure_keys_df = radar_covid.download_last_radar_covid_exposure_keys(days=14)
exposure_keys_df[[
"sample_date_string", "source_url", "region", "key_data"]].head()
exposure_keys_summary_df = \
exposure_keys_df.groupby(["sample_date_string"]).key_data.nunique().to_frame()
exposure_keys_summary_df.sort_index(ascending=False, inplace=True)
exposure_keys_summary_df.rename(columns={"key_data": "tek_count"}, inplace=True)
exposure_keys_summary_df.head()
###Output
_____no_output_____
###Markdown
Dump API TEKs
###Code
tek_list_df = exposure_keys_df[["sample_date_string", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
"sample_date_string": "sample_date",
"key_data": "tek_list"}, inplace=True)
tek_list_df = tek_list_df.groupby(
"sample_date").tek_list.unique().reset_index()
tek_list_df["extraction_date"] = extraction_date
tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour
tek_list_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json(
"Data/TEKs/Current/RadarCOVID-TEKs.json",
lines=True, orient="records")
tek_list_df.drop(columns=["extraction_date_with_hour"]).to_json(
"Data/TEKs/Daily/RadarCOVID-TEKs-" + extraction_date + ".json",
lines=True, orient="records")
tek_list_df.to_json(
"Data/TEKs/Hourly/RadarCOVID-TEKs-" + extraction_date_with_hour + ".json",
lines=True, orient="records")
tek_list_df.head()
###Output
_____no_output_____
###Markdown
Load TEK Dumps
###Code
import glob
def load_extracted_teks(mode, limit=None) -> pd.DataFrame:
extracted_teks_df = pd.DataFrame()
paths = list(reversed(sorted(glob.glob(f"Data/TEKs/{mode}/RadarCOVID-TEKs-*.json"))))
if limit:
paths = paths[:limit]
for path in paths:
logging.info(f"Loading TEKs from '{path}'...")
iteration_extracted_teks_df = pd.read_json(path, lines=True)
extracted_teks_df = extracted_teks_df.append(
iteration_extracted_teks_df, sort=False)
return extracted_teks_df
###Output
_____no_output_____
###Markdown
Daily New TEKs
###Code
daily_extracted_teks_df = load_extracted_teks(mode="Daily", limit=14)
daily_extracted_teks_df.head()
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
new_tek_df = tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
new_tek_df.rename(columns={
"tek_list": "new_tek_count",
"extraction_date": "sample_date_string",}, inplace=True)
new_tek_df.head()
new_tek_devices_df = daily_extracted_teks_df.copy()
new_tek_devices_df["new_sample_extraction_date"] = \
pd.to_datetime(new_tek_devices_df.sample_date) + datetime.timedelta(1)
new_tek_devices_df["extraction_date"] = pd.to_datetime(new_tek_devices_df.extraction_date)
new_tek_devices_df = new_tek_devices_df[
new_tek_devices_df.new_sample_extraction_date == new_tek_devices_df.extraction_date]
new_tek_devices_df.head()
new_tek_devices_df.set_index("extraction_date", inplace=True)
new_tek_devices_df = new_tek_devices_df.tek_list.apply(lambda x: len(set(x))).to_frame()
new_tek_devices_df.reset_index(inplace=True)
new_tek_devices_df.rename(columns={
"extraction_date": "sample_date_string",
"tek_list": "new_tek_devices"}, inplace=True)
new_tek_devices_df["sample_date_string"] = new_tek_devices_df.sample_date_string.dt.strftime("%Y-%m-%d")
new_tek_devices_df.head()
###Output
_____no_output_____
###Markdown
Hourly New TEKs
###Code
hourly_extracted_teks_df = load_extracted_teks(mode="Hourly", limit=24)
hourly_extracted_teks_df.head()
hourly_tek_list_df = hourly_extracted_teks_df.groupby("extraction_date_with_hour").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
hourly_tek_list_df = hourly_tek_list_df.set_index("extraction_date_with_hour").sort_index(ascending=True)
hourly_new_tek_df = hourly_tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
hourly_new_tek_df.rename(columns={
"tek_list": "new_tek_count"}, inplace=True)
hourly_new_tek_df.tail()
hourly_new_tek_devices_df = hourly_extracted_teks_df.copy()
hourly_new_tek_devices_df["new_sample_extraction_date"] = \
pd.to_datetime(hourly_new_tek_devices_df.sample_date) + datetime.timedelta(1)
hourly_new_tek_devices_df["extraction_date"] = pd.to_datetime(hourly_new_tek_devices_df.extraction_date)
hourly_new_tek_devices_df = hourly_new_tek_devices_df[
hourly_new_tek_devices_df.new_sample_extraction_date == hourly_new_tek_devices_df.extraction_date]
hourly_new_tek_devices_df.set_index("extraction_date_with_hour", inplace=True)
hourly_new_tek_devices_df_ = pd.DataFrame()
for i, chunk_df in hourly_new_tek_devices_df.groupby("extraction_date"):
chunk_df = chunk_df.copy()
chunk_df.sort_index(inplace=True)
chunk_tek_count_df = chunk_df.tek_list.apply(lambda x: len(set(x)))
chunk_df = chunk_tek_count_df.diff().fillna(chunk_tek_count_df).to_frame()
hourly_new_tek_devices_df_ = hourly_new_tek_devices_df_.append(chunk_df)
hourly_new_tek_devices_df = hourly_new_tek_devices_df_
hourly_new_tek_devices_df.reset_index(inplace=True)
hourly_new_tek_devices_df.rename(columns={
"tek_list": "new_tek_devices"}, inplace=True)
hourly_new_tek_devices_df.tail()
hourly_summary_df = hourly_new_tek_df.merge(
hourly_new_tek_devices_df, on=["extraction_date_with_hour"], how="outer")
hourly_summary_df["datetime_utc"] = pd.to_datetime(
hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H")
hourly_summary_df.set_index("datetime_utc", inplace=True)
hourly_summary_df.tail()
###Output
_____no_output_____
###Markdown
Data Merge
###Code
result_summary_df = exposure_keys_summary_df.merge(new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(new_tek_devices_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(confirmed_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df["tek_count_per_new_case"] = \
result_summary_df.tek_count / result_summary_df.rolling_mean_new_cases
result_summary_df["new_tek_count_per_new_case"] = \
result_summary_df.new_tek_count / result_summary_df.rolling_mean_new_cases
result_summary_df["new_tek_devices_per_new_case"] = \
result_summary_df.new_tek_devices / result_summary_df.rolling_mean_new_cases
result_summary_df["new_tek_count_per_new_tek_device"] = \
result_summary_df.new_tek_count / result_summary_df.new_tek_devices
result_summary_df.head()
result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string)
result_summary_df.set_index("sample_date", inplace=True)
result_summary_df = result_summary_df.sort_index(ascending=False)
###Output
_____no_output_____
###Markdown
Report Results Summary Table
###Code
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[[
"tek_count",
"new_tek_count",
"new_cases",
"rolling_mean_new_cases",
"tek_count_per_new_case",
"new_tek_count_per_new_case",
"new_tek_devices",
"new_tek_devices_per_new_case",
"new_tek_count_per_new_tek_device"]]
result_summary_df
###Output
_____no_output_____
###Markdown
Summary Plots
###Code
summary_ax_list = result_summary_df[[
"rolling_mean_new_cases",
"tek_count",
"new_tek_count",
"new_tek_devices",
"new_tek_count_per_new_tek_device",
"new_tek_devices_per_new_case"
]].sort_index(ascending=True).plot.bar(
title="Summary", rot=45, subplots=True, figsize=(15, 22))
summary_ax_list[-1].yaxis.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
###Output
_____no_output_____
###Markdown
Hourly Summary Plots
###Code
hourly_summary_ax_list = hourly_summary_df.plot.bar(
title="Last 24h Summary", rot=45, subplots=True)
###Output
_____no_output_____
###Markdown
Publish Results
###Code
def get_temporary_image_path() -> str:
return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png")
def save_temporary_plot_image(ax):
if isinstance(ax, np.ndarray):
ax = ax[0]
media_path = get_temporary_image_path()
ax.get_figure().savefig(media_path)
return media_path
def save_temporary_dataframe_image(df):
media_path = get_temporary_image_path()
dfi.export(df, media_path)
return media_path
summary_plots_image_path = save_temporary_plot_image(ax=summary_ax_list)
summary_table_image_path = save_temporary_dataframe_image(df=result_summary_df)
hourly_summary_plots_image_path = save_temporary_plot_image(ax=hourly_summary_ax_list)
###Output
_____no_output_____
###Markdown
Save Results
###Code
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-"
result_summary_df.to_csv(report_resources_path_prefix + "Summary-Table.csv")
result_summary_df.to_html(report_resources_path_prefix + "Summary-Table.html")
_ = shutil.copyfile(summary_plots_image_path, report_resources_path_prefix + "Summary-Plots.png")
_ = shutil.copyfile(summary_table_image_path, report_resources_path_prefix + "Summary-Table.png")
_ = shutil.copyfile(hourly_summary_plots_image_path, report_resources_path_prefix + "Hourly-Summary-Plots.png")
report_daily_url_pattern = \
"https://github.com/pvieito/RadarCOVID-Report/blob/master/Notebooks/" \
"RadarCOVID-Report/{report_type}/RadarCOVID-Report-{report_date}.ipynb"
report_daily_url = report_daily_url_pattern.format(
report_type="Daily", report_date=extraction_date)
report_hourly_url = report_daily_url_pattern.format(
report_type="Hourly", report_date=extraction_date_with_hour)
###Output
_____no_output_____
###Markdown
Publish on README
###Code
with open("Data/Templates/README.md", "r") as f:
readme_contents = f.read()
summary_table_html = result_summary_df.to_html()
readme_contents = readme_contents.format(
summary_table_html=summary_table_html,
report_url_with_hour=report_hourly_url,
extraction_date_with_hour=extraction_date_with_hour)
with open("README.md", "w") as f:
f.write(readme_contents)
###Output
_____no_output_____
###Markdown
Publish on Twitter
###Code
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER")
github_event_name = os.environ.get("GITHUB_EVENT_NAME")
if enable_share_to_twitter and github_event_name == "schedule":
import tweepy
twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"]
twitter_api_auth_keys = twitter_api_auth_keys.split(":")
auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])
auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])
api = tweepy.API(auth)
summary_plots_media = api.media_upload(summary_plots_image_path)
summary_table_media = api.media_upload(summary_table_image_path)
hourly_summary_plots_media = api.media_upload(hourly_summary_plots_image_path)
media_ids = [
summary_plots_media.media_id,
summary_table_media.media_id,
hourly_summary_plots_media.media_id,
]
extraction_date_result_summary_df = \
result_summary_df[result_summary_df.index == extraction_date]
extraction_date_result_hourly_summary_df = \
hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]
new_teks = extraction_date_result_summary_df.new_tek_count.sum().astype(int)
new_teks_last_hour = extraction_date_result_hourly_summary_df.new_tek_count.sum().astype(int)
new_devices = extraction_date_result_summary_df.new_tek_devices.sum().astype(int)
new_devices_last_hour = extraction_date_result_hourly_summary_df.new_tek_devices.sum().astype(int)
new_tek_count_per_new_tek_device = \
extraction_date_result_summary_df.new_tek_count_per_new_tek_device.sum()
new_tek_devices_per_new_case = \
extraction_date_result_summary_df.new_tek_devices_per_new_case.sum()
status = textwrap.dedent(f"""
Report Update – {extraction_date_with_hour}
#ExposureNotification #RadarCOVID
Shared Diagnoses Day Summary:
- New TEKs: {new_teks} ({new_teks_last_hour:+d} last hour)
- New Devices: {new_devices} ({new_devices_last_hour:+d} last hour, {new_tek_count_per_new_tek_device:.2} TEKs/device)
- Usage Ratio: {new_tek_devices_per_new_case:.2%} devices/case
Report Link: {report_hourly_url}
""")
status = status.encode(encoding="utf-8")
api.update_status(status=status, media_ids=media_ids)
###Output
_____no_output_____ |
old_versions/ref_initial/1main_time_series-v7.ipynb | ###Markdown
2018.10.27: Multiple states: Time series
###Code
import sys,os
import numpy as np
from scipy import linalg
from sklearn.preprocessing import OneHotEncoder
import matplotlib.pyplot as plt
%matplotlib inline
# setting parameter:
np.random.seed(1)
n = 10 # number of positions
m = 3 # number of values at each position
l = 2*((n*m)**2) # number of samples
g = 1.
def itab(n,m):
i1 = np.zeros(n)
i2 = np.zeros(n)
for i in range(n):
i1[i] = i*m
i2[i] = (i+1)*m
return i1.astype(int),i2.astype(int)
i1tab,i2tab = itab(n,m)
# generate coupling matrix w0:
def generate_coupling(n,m,g):
nm = n*m
w = np.random.normal(0.0,g/np.sqrt(nm),size=(nm,nm))
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
w[i1:i2,:] -= w[i1:i2,:].mean(axis=0)
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
w[:,i1:i2] -= w[:,i1:i2].mean(axis=1)[:,np.newaxis]
return w
w0 = generate_coupling(n,m,g)
# 2018.10.27: generate time series by MCMC
def generate_sequences_MCMC(w,n,m,l):
#print(i1tab,i2tab)
# initial s (categorical variables)
s_ini = np.random.randint(0,m,size=(l,n)) # integer values
#print(s_ini)
# onehot encoder
enc = OneHotEncoder(n_values=m)
s = enc.fit_transform(s_ini).toarray()
#print(s)
ntrial = 100
for t in range(l-1):
h = np.sum(s[t,:]*w[:,:],axis=1)
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
k = np.random.randint(0,m)
for itrial in range(ntrial):
k2 = np.random.randint(0,m)
while k2 == k:
k2 = np.random.randint(0,m)
if np.exp(h[i1+k2]- h[i1+k]) > np.random.rand():
k = k2
s[t+1,i1:i2] = 0.
s[t+1,i1+k] = 1.
return s
s = generate_sequences_MCMC(w0,n,m,l)
#print(s[:5])
# recover s0 from s
s0 = np.argmax(s.reshape(-1,m),axis=1).reshape(-1,n)
def eps_ab_func(s0,m):
l,n = s0.shape
eps = np.zeros((n,l-1,m,m))
eps[:,:,:] = -1.
for i in range(n):
for t in range(l-1):
eps[i,t,:,int(s0[t+1,i])] = -1.
eps[i,t,int(s0[t+1,i]),:] = 1.
return eps
eps_ab_all = eps_ab_func(s0,m)
l = s.shape[0]
s_av = np.mean(s[:-1],axis=0)
ds = s[:-1] - s_av
c = np.cov(ds,rowvar=False,bias=True)
#print(c)
c_inv = linalg.pinv(c,rcond=1e-15)
#print(c_inv)
nm = n*m
nloop = 100
w_infer = np.zeros((nm,nm))
for i in range(n):
eps_ab = eps_ab_all[i]
i1,i2 = i1tab[i],i2tab[i]
w_true = w0[i1:i2,:]
h = s[1:,i1:i2].copy()
for iloop in range(nloop):
h_av = h.mean(axis=0)
dh = h - h_av
dhds = dh[:,:,np.newaxis]*ds[:,np.newaxis,:]
dhds_av = dhds.mean(axis=0)
w = np.dot(dhds_av,c_inv)
h = np.dot(s[:-1],w.T)
# --------------- update h: ---------------------------------------------
# h_ab[t,i,j] = h[t,i] - h[t,j]
h_ab = h[:,:,np.newaxis] - h[:,np.newaxis,:]
eps_ab_expect = np.tanh(h_ab/2.)
# h[t,i,j] = eps_ab[t,i,j]*h_ab[t,i,j]/eps_expect[t,i,j] ( = 0 if eps_expect[t,i,j] = 0)
h_ab1 = np.divide(eps_ab*h_ab,eps_ab_expect, out=np.zeros_like(h_ab), where=eps_ab_expect!=0)
h = h_ab1.mean(axis=2)
mse = ((w_true - w)**2).mean()
slope = (w_true*w).sum()/(w_true**2).sum()
w_infer[i1:i2,:] = w
#print(iloop,mse,slope)
plt.scatter(w0,w_infer)
plt.plot([-0.3,0.3],[-0.3,0.3],'r--')
mse = ((w0-w_infer)**2).mean()
print(mse)
###Output
0.002404058497030432
|
entity_alingment.ipynb | ###Markdown
###Code
import torch
torch.__version__
!pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.9.0+cu102.html
!pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-1.9.0+cu102.html
!pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-1.9.0+cu102.html
!pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.9.0+cu102.html
!pip install torch-geometric
!pip install torch-geometric-temporalv
###Output
Looking in links: https://pytorch-geometric.com/whl/torch-1.9.0+cu102.html
Collecting torch-scatter
Downloading https://pytorch-geometric.com/whl/torch-1.9.0%2Bcu102/torch_scatter-2.0.8-cp37-cp37m-linux_x86_64.whl (3.0 MB)
[K |████████████████████████████████| 3.0 MB 2.6 MB/s
[?25hInstalling collected packages: torch-scatter
Successfully installed torch-scatter-2.0.8
Looking in links: https://pytorch-geometric.com/whl/torch-1.9.0+cu102.html
Collecting torch-sparse
Downloading https://pytorch-geometric.com/whl/torch-1.9.0%2Bcu102/torch_sparse-0.6.11-cp37-cp37m-linux_x86_64.whl (1.6 MB)
[K |████████████████████████████████| 1.6 MB 2.1 MB/s
[?25hRequirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from torch-sparse) (1.4.1)
Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.7/dist-packages (from scipy->torch-sparse) (1.19.5)
Installing collected packages: torch-sparse
Successfully installed torch-sparse-0.6.11
Looking in links: https://pytorch-geometric.com/whl/torch-1.9.0+cu102.html
Collecting torch-cluster
Downloading https://pytorch-geometric.com/whl/torch-1.9.0%2Bcu102/torch_cluster-1.5.9-cp37-cp37m-linux_x86_64.whl (926 kB)
[K |████████████████████████████████| 926 kB 2.1 MB/s
[?25hInstalling collected packages: torch-cluster
Successfully installed torch-cluster-1.5.9
Looking in links: https://pytorch-geometric.com/whl/torch-1.9.0+cu102.html
Collecting torch-spline-conv
Downloading https://pytorch-geometric.com/whl/torch-1.9.0%2Bcu102/torch_spline_conv-1.2.1-cp37-cp37m-linux_x86_64.whl (382 kB)
[K |████████████████████████████████| 382 kB 2.1 MB/s
[?25hInstalling collected packages: torch-spline-conv
Successfully installed torch-spline-conv-1.2.1
Collecting torch-geometric
Downloading torch_geometric-1.7.2.tar.gz (222 kB)
[K |████████████████████████████████| 222 kB 5.0 MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (1.19.5)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (4.62.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (1.4.1)
Requirement already satisfied: networkx in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (2.6.2)
Requirement already satisfied: python-louvain in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (0.15)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (0.22.2.post1)
Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (2.23.0)
Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (1.1.5)
Collecting rdflib
Downloading rdflib-6.0.0-py3-none-any.whl (376 kB)
[K |████████████████████████████████| 376 kB 49.7 MB/s
[?25hRequirement already satisfied: googledrivedownloader in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (0.4)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (2.11.3)
Requirement already satisfied: pyparsing in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (2.4.7)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from jinja2->torch-geometric) (2.0.1)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas->torch-geometric) (2.8.2)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->torch-geometric) (2018.9)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas->torch-geometric) (1.15.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from rdflib->torch-geometric) (57.4.0)
Collecting isodate
Downloading isodate-0.6.0-py2.py3-none-any.whl (45 kB)
[K |████████████████████████████████| 45 kB 3.3 MB/s
[?25hRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->torch-geometric) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->torch-geometric) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->torch-geometric) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->torch-geometric) (2021.5.30)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->torch-geometric) (1.0.1)
Building wheels for collected packages: torch-geometric
Building wheel for torch-geometric (setup.py) ... [?25l[?25hdone
Created wheel for torch-geometric: filename=torch_geometric-1.7.2-py3-none-any.whl size=388143 sha256=8bb5b7f5ae4153ad19792182997b633f62c4e7e8ff4d668382fd802fce4caf09
Stored in directory: /root/.cache/pip/wheels/55/93/b6/2eeb0465afe89aee74d7a07a606e9770466d7565abd45a99d5
Successfully built torch-geometric
Installing collected packages: isodate, rdflib, torch-geometric
Successfully installed isodate-0.6.0 rdflib-6.0.0 torch-geometric-1.7.2
[31mERROR: Could not find a version that satisfies the requirement torch-geometric-temporalv (from versions: none)[0m
[31mERROR: No matching distribution found for torch-geometric-temporalv[0m
###Markdown
**About the DATASET:** The DBP15K dataset from the "Cross-lingual Entity Alignment via Joint Attribute-Preserving Embedding" paper, where Chinese, Japanese and French versions of DBpedia were linked to its English version. Node features are given by pre-trained and aligned monolingual word embeddings from the "Cross-lingual Knowledge Graph Alignment via Graph Matching Neural Network" paper.
###Code
from torch_geometric.datasets import DBP15K
class SumEmbedding(object):
    """ this class helps to compute the sum of the features along dim=1.
    This is a common transform applied to GNN datasets.
    returns = transformed data
"""
def __call__(self, data):
data.x1, data.x2 = data.x1.sum(dim=1), data.x2.sum(dim=1)
return data
data_zh_en = DBP15K('/content/dd', 'zh_en', transform=SumEmbedding())[0]
print('data_zh_en',data_zh_en)
data_en_zh = DBP15K('/content/dd', 'en_zh', transform=SumEmbedding())[0]
print('data_en_zh',data_en_zh)
data_fr_en = DBP15K('/content/dd', 'fr_en', transform=SumEmbedding())[0]
print('data_fr_en',data_fr_en)
data_en_fr = DBP15K('/content/dd', 'en_fr', transform=SumEmbedding())[0]
print('data_en_fr',data_en_fr)
data_ja_en = DBP15K('/content/dd', 'ja_en', transform=SumEmbedding())[0]
print('data_ja_en',data_ja_en)
data_en_ja = DBP15K('/content/dd', 'en_ja', transform=SumEmbedding())[0]
print('data_en_ja',data_en_ja)
###Output
_____no_output_____
###Markdown
###Code
data = data_en_zh
data.x1#,data.edge_index1
from torch_geometric.nn import MessagePassing
from torch_geometric.utils import add_self_loops, degree
import torch.nn.functional as F
from torch_geometric.nn import Sequential, GCNConv,SAGEConv,RGCNConv,GATConv
from torch.nn import Conv1d, Linear
def model_summary(model):
    """ this function works much like Keras's model.summary(), but for PyTorch models
    Parameters:
    Input: model - the model for which you want the summary
    Output: prints the number of trainable parameters per layer and the total
"""
print("model_summary")
print()
print("Layer_name"+"\t"*7+"Number of Parameters")
print("="*100)
model_parameters = [layer for layer in model.parameters() if layer.requires_grad]
layer_name = [child for child in model.children()]
j = 0
total_params = 0
print("\t"*10)
for i in layer_name:
print()
param = 0
try:
bias = (i.bias is not None)
except:
bias = False
        if bias:
            # layer has both a weight and a bias tensor
            param = model_parameters[j].numel() + model_parameters[j+1].numel()
            j = j + 2
        else:
            param = model_parameters[j].numel()
            j = j + 1
print(str(i)+"\t"*3+str(param))
total_params+=param
print("="*100)
print(f"Total Params:{total_params}")
#model_summary(net)
#print(model_summary(model_NN))
###Output
_____no_output_____
###Markdown
MLP model
###Code
class NN(torch.nn.Module):
def __init__(self):
super().__init__()
features_dim = data.x1.shape[1] #300
self.conv1 = Linear(features_dim,150)
self.conv2 = Linear(150, 7) # 7 is our D here feature vector dimention
#self.linear = torch.nn.Linear(7,1)
def forward(self, featss):
x = featss
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
# x = F.relu(x)
# x = self.Linear(x)
# x = F.relu(x)
return x
###Output
_____no_output_____
###Markdown
CNN MODEL
###Code
class CNN(torch.nn.Module):
def __init__(self):
super().__init__()
features_dim = data.x1.shape[1] #300
self.conv1 = Conv1d(1, 1,3,stride=1)
self.conv2 = Conv1d(1, 1,3,stride =1) # 7 is our D here feature vector dimention
#self.linear = torch.nn.Linear(7,1)
def forward(self, featss):
x = featss
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
return x
###Output
_____no_output_____
###Markdown
GCN model
###Code
class GCN(torch.nn.Module):
def __init__(self):
super().__init__()
features_dim = data.x1.shape[1] #300
self.conv1 = GCNConv(features_dim, 150)
self.conv2 = GCNConv(150, 7) # 7 is our D here feature vector dimention
#self.linear = torch.nn.Linear(7,1)
def forward(self, featss,index):
x, edge_index = featss, index
x = self.conv1(x, edge_index)
x = F.relu(x)
x = self.conv2(x, edge_index)
return x
###Output
_____no_output_____
###Markdown
GRAPH SAGE model
###Code
class Gsage(torch.nn.Module):
def __init__(self):
super().__init__()
features_dim = data.x1.shape[1]
self.conv1 = SAGEConv(features_dim, 150)
self.conv2 = SAGEConv(150, 7) # 7 is our D here feature vector dimention
#self.linear = torch.nn.Linear(7,1)
def forward(self, featss,index):
x, edge_index = featss, index
x = self.conv1(x, edge_index)
x = F.relu(x)
x = self.conv2(x, edge_index)
return x
###Output
_____no_output_____
###Markdown
GAT MODEL
###Code
class GAT(torch.nn.Module):
def __init__(self):
super().__init__()
features_dim = data.x1.shape[1]
self.conv1 = GATConv(features_dim, 150)
self.conv2 = GATConv(150, 7) # 7 is our D here feature vector dimention
#self.linear = torch.nn.Linear(7,1)
def forward(self, featss,index):
x, edge_index = featss, index
x = self.conv1(x, edge_index)
x = F.relu(x)
x = self.conv2(x, edge_index)
return x
###Output
_____no_output_____
###Markdown
training module
###Code
from sklearn.metrics import roc_auc_score, f1_score,accuracy_score
def nn_train(data , model):
"""
Function to train the NN model and print the train, val and test accuracy, loss and F1 score
The model is highly inspired by siamese network
Parameters:
Input:
data : input data
model : as the name suggest
"""
    # loss function and optimizer initialization
criterion= torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
# variable to store best accuracy
best_test_acc=0
# extracting array for truth value for the given training data
l=data.train_y[0].tolist()
one = torch.zeros(l[-1]+1)
for i in range(len(l)):
if (i> len(l) -1):
break
one[l[i]] = 1
print(one.shape) # one = truth labels for the training data
end_train = l[-1]+1
# extracting array for truth value for the given rest of test data
k=data.test_y[0].tolist()
k = sorted(k)
truth_y = torch.zeros(k[-1]+1)
for i in range(len(k)):
truth_y[k[i]] = 1
    # splitting the remaining data into validation and test sets
m= k[-1] - l[-1]
start_val = l[-1]+1
end_val = int(m*0.5-1)+4500
start_test = end_val+1
end_test = k[-1]+1
truth_val_y = truth_y[start_val:end_val]
truth_test_y = truth_y[start_test:end_test]
print('truth_val_y',truth_val_y.shape)
print('truth_test_y',truth_test_y.shape)
print('start_val',start_val)
print('end_val',end_val)
print('start_test',start_test)
print('end_test',end_test)
for e in range(70):
out1 = model(data.x1) # similar to siamese network
out2 = model(data.x2)
pdist = torch.nn.CosineSimilarity(dim=1, eps=1e-6)
        output = pdist(out1[:end_train], out2[:end_train]) # cosine similarity between the two towers' outputs (out1 and out2)
        #print('output:', output.shape)
        #output = F.sigmoid(output)
        loss = criterion(output , one) # computing the loss; backpropagation pulls matching entities closer and pushes different entities farther apart
optimizer.zero_grad()
loss.backward()
optimizer.step()
#train accuracy calculation
train_output= output
train_output[train_output>0]= 1
train_output[train_output<0]= 0
train_accuracy = accuracy_score(one.detach().numpy(),train_output.detach().numpy())
#print("accuracy",accuracy)
val_output = pdist(out1[start_val:end_val], out2[start_val:end_val])
val_output[val_output>0]= 1
val_output[val_output<0]= 0
val_accuracy = accuracy_score(truth_val_y.detach().numpy(),val_output.detach().numpy())
#print("accuracy",accuracy)
if e % 5 == 0:
print('In epoch {}, loss: {}, train_accuracy: {}, val_accuracy: {}'.format(e, loss,train_accuracy,val_accuracy ))
print("-"*50)
#test accuracy, auc, f1 score calculation
pdist = torch.nn.CosineSimilarity(dim=1, eps=1e-6)
test_output = pdist(out1[start_test:end_test], out2[start_test:end_test])
print('test_output shape',test_output.shape)
auc_score = roc_auc_score(truth_test_y.detach().numpy(),test_output.detach().numpy())
print("test_auc_score",auc_score)
test_output[test_output>0]= 1
test_output[test_output<0]= 0
accuracy = accuracy_score(truth_test_y.detach().numpy(),test_output.detach().numpy())
print("test_accuracy",accuracy)
f1_score_value = f1_score(truth_test_y.detach().numpy(),test_output.detach().numpy())
print("test_f1_score",f1_score_value)
# model =NN()
# nn_train(data,model)
from sklearn.metrics import roc_auc_score, f1_score,accuracy_score
def cnn_train(data , model,epoch):
"""
    Function to train the CNN model and print the train, val and test accuracy, loss and F1 score
The model is highly inspired by siamese network
Parameters:
Input:
data : input data
model : as the name suggest
"""
# loss function and optimizer initalization
criterion= torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
# variable to store best accuracy
best_test_acc=0
# extracting array for truth value for the given training data
l=data.train_y[0].tolist()
one = torch.zeros(l[-1]+1)
for i in range(len(l)):
if (i> len(l) -1):
break
one[l[i]] = 1
print(one.shape) # one = truth labels for the training data
end_train = l[-1]+1
# extracting array for truth value for the given rest of test data
k=data.test_y[0].tolist()
k = sorted(k)
truth_y = torch.zeros(k[-1]+1)
for i in range(len(k)):
truth_y[k[i]] = 1
    # splitting the remaining data into validation and test sets
m= k[-1] - l[-1]
start_val = l[-1]+1
end_val = int(m*0.5-1)+4500
start_test = end_val+1
end_test = k[-1]+1
truth_val_y = truth_y[start_val:end_val]
truth_test_y = truth_y[start_test:end_test]
print('truth_val_y',truth_val_y.shape)
print('truth_test_y',truth_test_y.shape)
print('start_val',start_val)
print('end_val',end_val)
print('start_test',start_test)
print('end_test',end_test)
for e in range(epoch):
out1 = model(data.x1.unsqueeze(1))
#print('out1:',out1.shape)
out2 = model(data.x2.unsqueeze(1))
#print('out2:',out2.shape)
#pdist = torch.nn.PairwiseDistance(p=2)
pdist = torch.nn.CosineSimilarity(dim=1, eps=1e-6)
output = pdist(out1.squeeze(1)[:end_train], out2.squeeze(1)[:end_train])
#print('output:', output.shape)
#output = F.sigmoid(output)
loss = criterion(output , one)
optimizer.zero_grad()
loss.backward()
optimizer.step()
#train accuracy calculation
train_output= output
train_output[train_output>0]= 1
train_output[train_output<0]= 0
train_accuracy = accuracy_score(one.detach().numpy(),train_output.detach().numpy())
#print("accuracy",accuracy)
val_output = pdist(out1.squeeze(1)[start_val:end_val], out2.squeeze(1)[start_val:end_val])
val_output[val_output>0]= 1
val_output[val_output<0]= 0
val_accuracy = accuracy_score(truth_val_y.detach().numpy(),val_output.detach().numpy())
#print("accuracy",accuracy)
if e % 5 == 0:
print('In epoch {}, loss: {}, train_accuracy: {}, val_accuracy: {}'.format(e, loss,train_accuracy,val_accuracy ))
print("-"*50)
#test accuracy, auc, f1 score calculation
pdist = torch.nn.CosineSimilarity(dim=1, eps=1e-6)
test_output = pdist(out1.squeeze(1)[start_test:end_test], out2.squeeze(1)[start_test:end_test])
#print('test_output shape',test_output.shape)
auc_score = roc_auc_score(truth_test_y.detach().numpy(),test_output.detach().numpy())
print("test_auc_score",auc_score)
test_output[test_output>0]= 1
test_output[test_output<0]= 0
accuracy = accuracy_score(truth_test_y.detach().numpy(),test_output.detach().numpy())
print("test_accuracy",accuracy)
f1_score_value = f1_score(truth_test_y.detach().numpy(),test_output.detach().numpy())
print("test_f1_score",f1_score_value)
from sklearn.metrics import roc_auc_score, f1_score,accuracy_score
#model = GCN()
#model = Gsage()
def train(data , model, epoch):
"""
    Function to train the given graph model (GCN / GraphSAGE / GAT) and print the train, val and test accuracy, loss and F1 score
The model is highly inspired by siamese network
Parameters:
Input:
data : input data
model : as the name suggest
"""
# loss function and optimizer initalization
criterion= torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
# variable to store best accuracy
best_test_acc=0
# extracting array for truth value for the given training data
l=data.train_y[0].tolist()
one = torch.zeros(l[-1]+1)
for i in range(len(l)):
if (i> len(l) -1):
break
one[l[i]] = 1
print(one.shape) # one = truth labels for the training data
end_train = l[-1]+1
# extracting array for truth value for the given rest of test data
k=data.test_y[0].tolist()
k = sorted(k)
truth_y = torch.zeros(k[-1]+1)
for i in range(len(k)):
truth_y[k[i]] = 1
    # splitting the remaining data into validation and test sets
m= k[-1] - l[-1]
start_val = l[-1]+1
end_val = int(m*0.5-1)+4500
start_test = end_val+1
end_test = k[-1]+1
truth_val_y = truth_y[start_val:end_val]
truth_test_y = truth_y[start_test:end_test]
print('truth_val_y',truth_val_y.shape)
print('truth_test_y',truth_test_y.shape)
print('start_val',start_val)
print('end_val',end_val)
print('start_test',start_test)
print('end_test',end_test)
for e in range(epoch):
out1 = model(data.x1, data.edge_index1) # similar to siamese network
#print('out1:',out1.shape)
out2 = model(data.x2, data.edge_index2)
pdist = torch.nn.CosineSimilarity(dim=1, eps=1e-6)
        output = pdist(out1[:end_train], out2[:end_train]) # cosine similarity between the two towers' outputs (out1 and out2)
        #print('output:', output.shape)
        #output = F.sigmoid(output)
        loss = criterion(output , one) # computing the loss; backpropagation pulls matching entities closer and pushes different entities farther apart
optimizer.zero_grad()
loss.backward()
optimizer.step()
#train accuracy calculation
train_output= output
train_output[train_output>0]= 1
train_output[train_output<0]= 0
train_accuracy = accuracy_score(one.detach().numpy(),train_output.detach().numpy())
val_output = pdist(out1[start_val:end_val], out2[start_val:end_val])
val_output[val_output>0]= 1
val_output[val_output<0]= 0
val_accuracy = accuracy_score(truth_val_y.detach().numpy(),val_output.detach().numpy())
#print("accuracy",accuracy)
if e % 5 == 0:
print('In epoch {}, loss: {}, train_accuracy: {}, val_accuracy: {}'.format(e, loss,train_accuracy,val_accuracy ))
print("-"*50)
#test accuracy, auc, f1 score calculation
pdist = torch.nn.CosineSimilarity(dim=1, eps=1e-6)
test_output = pdist(out1[start_test:end_test], out2[start_test:end_test])
auc_score = roc_auc_score(truth_test_y.detach().numpy(),test_output.detach().numpy())
print("test_auc_score",auc_score)
test_output[test_output>0]= 1
test_output[test_output<0]= 0
accuracy = accuracy_score(truth_test_y.detach().numpy(),test_output.detach().numpy())
print("test_accuracy",accuracy)
f1_score_value = f1_score(truth_test_y.detach().numpy(),test_output.detach().numpy())
print("test_f1_score",f1_score_value)
###Output
_____no_output_____
###Markdown
results and outputs for zh_en
###Code
print('MODEL: MLP')
model_NN =NN()
nn_train(data_zh_en,model_NN)
print(model_summary(model_NN))
print('MODEL: CNN')
model_CNN =CNN()
cnn_train(data_zh_en,model_CNN,30)
print(model_summary(model_CNN))
print('MODEL: GRAPH CONV(GCN)')
model_GCN= GCN()
print(model_summary(model_GCN))
train(data_zh_en,model_GCN, 14)
print('MODEL:GRAPH SAGE')
model_Gsage= Gsage()
print(model_summary(model_Gsage))
train(data_zh_en,model_Gsage, 20)
print('MODEL: GAT')
model_GAT= GAT()
print(model_summary(model_GAT))
train(data_zh_en,model_GAT,20)
###Output
MODEL: GAT
model_summary
Layer_name Number of Parameters
====================================================================================================
GATConv(300, 150, heads=1) 150
GATConv(150, 7, heads=1) 150
====================================================================================================
Total Params:300
None
torch.Size([4500])
truth_val_y torch.Size([5249])
truth_test_y torch.Size([5250])
start_val 4500
end_val 9749
start_test 9750
end_test 15000
In epoch 0, loss: 0.5814074277877808, train_accuracy: 0.7162222222222222, val_accuracy: 0.8376833682606211
In epoch 5, loss: 0.5140196681022644, train_accuracy: 0.7948888888888889, val_accuracy: 0.8325395313393027
In epoch 10, loss: 0.47523295879364014, train_accuracy: 0.8557777777777777, val_accuracy: 0.8209182701466946
In epoch 15, loss: 0.44787320494651794, train_accuracy: 0.8966666666666666, val_accuracy: 0.8035816345970661
--------------------------------------------------
test_auc_score 0.7538410635310866
test_accuracy 0.7975238095238095
test_f1_score 0.878722190530519
###Markdown
--- END--- results and outputs for en_zh
###Code
print('MODEL: MLP')
model_NN =NN()
nn_train(data_en_zh,model_NN)
print('MODEL: CNN')
model_CNN =CNN()
cnn_train(data_en_zh,model_CNN,40)
print('MODEL: GRAPH CONV(GCN)')
model_GCN= GCN()
train(data_en_zh,model_GCN, 14)
print('MODEL:GRAPH SAGE')
model_Gsage= Gsage()
train(data_en_zh,model_Gsage, 20)
print('MODEL: GAT')
model_GAT= GAT()
train(data_en_zh,model_GAT,20)
###Output
MODEL: GAT
torch.Size([4500])
truth_val_y torch.Size([5249])
truth_test_y torch.Size([5250])
start_val 4500
end_val 9749
start_test 9750
end_test 15000
In epoch 0, loss: 0.5721561908721924, train_accuracy: 0.7328888888888889, val_accuracy: 0.8632120403886454
In epoch 5, loss: 0.5109281539916992, train_accuracy: 0.8013333333333333, val_accuracy: 0.8462564297961517
In epoch 10, loss: 0.472844660282135, train_accuracy: 0.8602222222222222, val_accuracy: 0.8374928557820537
In epoch 15, loss: 0.44848209619522095, train_accuracy: 0.8922222222222222, val_accuracy: 0.8239664698037722
--------------------------------------------------
test_auc_score 0.7373010961925995
test_accuracy 0.8243809523809524
test_f1_score 0.8986590459441636
###Markdown
--- END--- results and outputs for fr_en
###Code
print('MODEL: MLP')
model_NN =NN()
nn_train(data_fr_en,model_NN)
print('MODEL: CNN')
model_CNN =CNN()
cnn_train(data_fr_en,model_CNN,50)
print('MODEL: GRAPH CONV(GCN)')
model_GCN= GCN()
train(data_fr_en,model_GCN, 14)
print('MODEL:GRAPH SAGE')
model_Gsage= Gsage()
train(data_fr_en,model_Gsage, 20)
print('MODEL: GAT')
model_GAT= GAT()
train(data_fr_en,model_GAT,20)
###Output
MODEL: GAT
torch.Size([4500])
truth_val_y torch.Size([5249])
truth_test_y torch.Size([5250])
start_val 4500
end_val 9749
start_test 9750
end_test 15000
In epoch 0, loss: 0.42628124356269836, train_accuracy: 0.91, val_accuracy: 0.955420080015241
In epoch 5, loss: 0.39698633551597595, train_accuracy: 0.916, val_accuracy: 0.9620880167650981
In epoch 10, loss: 0.3967367112636566, train_accuracy: 0.9166666666666666, val_accuracy: 0.9620880167650981
In epoch 15, loss: 0.3965286314487457, train_accuracy: 0.9168888888888889, val_accuracy: 0.9620880167650981
--------------------------------------------------
test_auc_score 0.730452354197162
test_accuracy 0.9655238095238096
test_f1_score 0.9824595406531641
###Markdown
--- END--- results and outputs for en_fr
###Code
print('MODEL: MLP')
model_NN =NN()
nn_train(data_en_fr,model_NN)
print('MODEL: CNN')
model_CNN =CNN()
cnn_train(data_en_fr,model_CNN,25)
print('MODEL: GRAPH CONV(GCN)')
model_GCN= GCN()
train(data_en_fr,model_GCN, 14)
print('MODEL:GRAPH SAGE')
model_Gsage= Gsage()
train(data_en_fr,model_Gsage, 20)
print('MODEL: GAT')
model_GAT= GAT()
train(data_en_fr,model_GAT,20)
###Output
MODEL: GAT
torch.Size([4499])
truth_val_y torch.Size([5250])
truth_test_y torch.Size([5250])
start_val 4499
end_val 9749
start_test 9750
end_test 15000
In epoch 0, loss: 0.4371315836906433, train_accuracy: 0.9015336741498111, val_accuracy: 0.9561904761904761
In epoch 5, loss: 0.4020709693431854, train_accuracy: 0.9113136252500555, val_accuracy: 0.9685714285714285
In epoch 10, loss: 0.40141913294792175, train_accuracy: 0.9113136252500555, val_accuracy: 0.9685714285714285
In epoch 15, loss: 0.4007355868816376, train_accuracy: 0.9126472549455434, val_accuracy: 0.9683809523809523
--------------------------------------------------
test_auc_score 0.691932634777929
test_accuracy 0.971047619047619
test_f1_score 0.9852998065764024
###Markdown
--- END--- results and outputs for ja_en
###Code
print('MODEL: MLP')
model_NN =NN()
nn_train(data_ja_en,model_NN)
print('MODEL: CNN')
model_CNN =CNN()
cnn_train(data_ja_en,model_CNN,40)
print('MODEL: GRAPH CONV(GCN)')
model_GCN= GCN()
train(data_ja_en,model_GCN, 14)
print('MODEL:GRAPH SAGE')
model_Gsage= Gsage()
train(data_ja_en,model_Gsage, 20)
print('MODEL: GAT')
model_GAT= GAT()
train(data_ja_en,model_GAT,20)
###Output
MODEL: GAT
torch.Size([4500])
truth_val_y torch.Size([5249])
truth_test_y torch.Size([5250])
start_val 4500
end_val 9749
start_test 9750
end_test 15000
In epoch 0, loss: 0.5381010174751282, train_accuracy: 0.7684444444444445, val_accuracy: 0.877309963802629
In epoch 5, loss: 0.49036112427711487, train_accuracy: 0.8226666666666667, val_accuracy: 0.8754048390169556
In epoch 10, loss: 0.46647703647613525, train_accuracy: 0.8531111111111112, val_accuracy: 0.8653076776528863
In epoch 15, loss: 0.452440470457077, train_accuracy: 0.8726666666666667, val_accuracy: 0.865688702610021
--------------------------------------------------
test_auc_score 0.7392341982923788
test_accuracy 0.8664761904761905
test_f1_score 0.9258593336858806
###Markdown
--- END--- results and outputs for en_ja
###Code
print('MODEL: MLP')
model_NN =NN()
nn_train(data_en_ja,model_NN)
print('MODEL: CNN')
model_CNN =CNN()
cnn_train(data_en_ja,model_CNN,45)
print('MODEL: GRAPH CONV(GCN)')
model_GCN= GCN()
train(data_en_ja,model_GCN, 14)
print('MODEL:GRAPH SAGE')
model_Gsage= Gsage()
train(data_en_ja,model_Gsage, 20)
print('MODEL: GAT')
model_GAT= GAT()
train(data_en_ja,model_GAT,20)
###Output
MODEL: GAT
torch.Size([4500])
truth_val_y torch.Size([5249])
truth_test_y torch.Size([5250])
start_val 4500
end_val 9749
start_test 9750
end_test 15000
In epoch 0, loss: 0.5394373536109924, train_accuracy: 0.776, val_accuracy: 0.8940750619165555
In epoch 5, loss: 0.5030845403671265, train_accuracy: 0.8086666666666666, val_accuracy: 0.8961706991807964
In epoch 10, loss: 0.48273682594299316, train_accuracy: 0.8337777777777777, val_accuracy: 0.8912173747380453
In epoch 15, loss: 0.4675613045692444, train_accuracy: 0.8524444444444444, val_accuracy: 0.8893122499523719
--------------------------------------------------
test_auc_score 0.7911957332041903
test_accuracy 0.8862857142857142
test_f1_score 0.9379611347812532
###Markdown
--- END--- Comparison between models on the different datasets.
###Code
zh_en = [0.8951,0.9102,0.8948,0.9211,0.8716]
en_zh = [0.8878,0.9455,0.9043,0.9105,0.8986]
fr_en = [0.9682,0.9841,0.9825,0.9815,0.9824]
en_fr = [0.9649,0.9881,0.9850,0.9848,0.9852]
ja_en = [0.9263,0.9555,0.9268,0.9424,0.9258]
en_ja = [0.9206,0.9632,0.9417,0.9429,0.9379]
import pandas as pd
df = pd.DataFrame()
df['zh_en'] = zh_en
df['en_zh'] = en_zh
df['fr_en'] = fr_en
df['en_fr'] = en_fr
df['ja_en'] = ja_en
df['en_ja'] = en_ja
df.index = ['MLP','CNN','GCN','Gsage','GAT']
df
ax = df.plot.bar(figsize= [15,6], ylim = [0.8,1],rot =0,colormap= 'jet',title = 'model performance on different datasets')
ax.set_xlabel('MODELS')
ax.set_ylabel('F1-SCORE')
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
tsne = TSNE(3, verbose=1)
tsne_proj1 = tsne.fit_transform(out1[start_test:end_test].detach().numpy())
tsne_proj2 = tsne.fit_transform(out2[start_test:end_test].detach().numpy())
# Plot those points as a scatter plot and label them based on the pred labels
#cmap = cm.get_cmap('tab20')
fig = plt.figure(figsize = (16, 9))
ax = plt.axes(projection ="3d")
ax.scatter3D(tsne_proj1[:,0],tsne_proj1[:,1],tsne_proj1[:,2])
ax.scatter3D(tsne_proj2[:,0],tsne_proj2[:,1],tsne_proj2[:,2])
#ax.legend(fontsize='large', markerscale=2)
plt.show()
###Output
[t-SNE] Computing 91 nearest neighbors...
[t-SNE] Indexed 5250 samples in 0.005s...
[t-SNE] Computed neighbors for 5250 samples in 0.143s...
[t-SNE] Computed conditional probabilities for sample 1000 / 5250
[t-SNE] Computed conditional probabilities for sample 2000 / 5250
[t-SNE] Computed conditional probabilities for sample 3000 / 5250
[t-SNE] Computed conditional probabilities for sample 4000 / 5250
[t-SNE] Computed conditional probabilities for sample 5000 / 5250
[t-SNE] Computed conditional probabilities for sample 5250 / 5250
[t-SNE] Mean sigma: 1.004055
[t-SNE] KL divergence after 250 iterations with early exaggeration: 63.729446
[t-SNE] KL divergence after 1000 iterations: 0.627038
[t-SNE] Computing 91 nearest neighbors...
[t-SNE] Indexed 5250 samples in 0.005s...
[t-SNE] Computed neighbors for 5250 samples in 0.155s...
[t-SNE] Computed conditional probabilities for sample 1000 / 5250
[t-SNE] Computed conditional probabilities for sample 2000 / 5250
[t-SNE] Computed conditional probabilities for sample 3000 / 5250
[t-SNE] Computed conditional probabilities for sample 4000 / 5250
[t-SNE] Computed conditional probabilities for sample 5000 / 5250
[t-SNE] Computed conditional probabilities for sample 5250 / 5250
[t-SNE] Mean sigma: 0.924361
[t-SNE] KL divergence after 250 iterations with early exaggeration: 63.900879
[t-SNE] KL divergence after 1000 iterations: 0.643122
|
3C6_p2q3_template.ipynb | ###Markdown
Examples Paper 2 Question 3
###Code
import numpy as np
import scipy.linalg as la
# COMPLETE THE FOLLOWING
K = np.array([[0,0],[0,0]])
M = np.array([[0,0],[0,0]])
D,V = la.eigh(K,M)
print('wn^2 = {}\n'.format(D))
###Output
_____no_output_____ |
.ipynb_checkpoints/run_test_ant-checkpoint.ipynb | ###Markdown
Run tests with Ant
###Code
# !python -m baselines.ddpg_custom.main --env-id Ant-v2 --nb-epochs 2000 --gamma 0.999 --nb-epoch-cycles 1 --nb-rollout-steps 10000 --seed 0
!python -m baselines.ddpg_custom.main --env-id Ant-v2 --nb-epochs 2000 --gamma 0.999 --nb-epoch-cycles 1 --nb-rollout-steps 10000 --seed 1
!python -m baselines.ddpg_custom.main --env-id Ant-v2 --nb-epochs 2000 --gamma 0.999 --nb-epoch-cycles 1 --nb-rollout-steps 10000 --seed 2
!python -m baselines.ddpg_custom.main --env-id Ant-v2 --nb-epochs 2000 --gamma 0.999 --nb-epoch-cycles 1 --nb-rollout-steps 10000 --seed 3
###Output
_____no_output_____ |
Labs/EulerAngles.ipynb | ###Markdown
Euler angle worksheetThis is a [jupyter notebook](https://jupyter.org/). Jupyter Notebooks allow you to combine notes, code, and output into a single document. You can even export your document as a presentation.In this worksheet, we will use the python3 SymPy package to derive expressions for converting between euler angles and matrices.If you would like more information about jupyter notebook features:* [Getting started tutorial](https://realpython.com/jupyter-notebook-introduction/creating-a-notebook)* [Reference on markdown text](https://help.github.com/articles/markdown-basics/)For this assignment, you do not need to run this notebook. It has been compiled for you and saved as a webpage. However, if you would like to play with it, start by running all the cells (from the menu: go to 'Cell' -> 'Run All').
###Code
# python3
from sympy import *
init_printing(use_latex='mathjax')
import math
# Define symbols
cx,sx = symbols('cx sx')
cy,sy = symbols('cy sy')
cz,sz = symbols('cz sz')
Rx = Matrix([
[1, 0, 0],
[0, cx,-sx],
[0, sx, cx]])
Ry = Matrix([
[ cy, 0, sy],
[ 0, 1, 0],
[-sy, 0, cy]])
Rz = Matrix([
[cz, -sz, 0],
[sz, cz, 0],
[0, 0, 1]])
###Output
_____no_output_____
###Markdown
Convert from ZYX euler angles to a matrixWe can compute the matrix Rzyx by multiplying matrices corresponding to each consecutive rotation, e.g. $$R_{zyx}(\theta_x, \theta_y, \theta_z) = R_z(\theta_z) * R_y(\theta_y) * R_x(\theta_x)$$In this file, we will use the [SymPy](https://www.sympy.org/en/index.html) package to compute algebraic expressions for euler angle matrices. Using these expressions, we will be able to derive formulas for converting from matrices to euler angles.In the following example, let* cx = $cos(\theta_x)$* sx = $sin(\theta_x)$* cy = $cos(\theta_y)$* sy = $sin(\theta_y)$* cz = $cos(\theta_z)$* sz = $sin(\theta_z)$
###Code
Rzyx = Rz * Ry * Rx
pprint(Rzyx)
###Output
⎡cy⋅cz -cx⋅sz + cz⋅sx⋅sy cx⋅cz⋅sy + sx⋅sz⎤
⎢ ⎥
⎢cy⋅sz cx⋅cz + sx⋅sy⋅sz cx⋅sy⋅sz - cz⋅sx⎥
⎢ ⎥
⎣ -sy cy⋅sx cx⋅cy ⎦
###Markdown
Now that we have a matrix expression for the ZYX euler angles, we have formulas which describe how matrices and euler angles relate to each other. Specifically, suppose we have a 3x3 rotation matrix R with the following elements$$R = \begin{bmatrix}r_{00} & r_{01} & r_{02} \\r_{10} & r_{11} & r_{12} \\r_{20} & r_{21} & r_{22} \\\end{bmatrix}$$where each $r_{ij}$ represents a scalar value in $\mathbb{R}$. Usually math texts will use indexing at 1, but here let's use 0-based indexing so that it will be easier to implement these formulas later.Now suppose we wish to extract the euler angles from this matrix. We can get the Y rotation back from the term $r_{20}$.$$r_{20} = -\sin(\theta_y) \\=> \theta_y = \arcsin(-r_{20})$$What about the rotations around X and Z? We can obtain these similarly using the terms from the first column and last row. A robust method involves using the fact that $$\tan(\theta) = \frac{\sin(\theta)}{\cos(\theta)}$$to form the following expression for obtaining $\theta_x$$$\frac{r_{21}}{r_{22}} = \frac{\sin(\theta_x)}{\cos(\theta_x)} = \tan(\theta_x) \\=> \theta_x = \text{atan2}(r_{21}, r_{22})$$The expression for $\theta_z$ can be obtained similarly$$\frac{r_{10}}{r_{00}} = \frac{\sin(\theta_z)}{\cos(\theta_z)} = \tan(\theta_z) \\=> \theta_z = \text{atan2}(r_{10}, r_{00})$$Using atan2 makes it easier to handle the cases when $\theta$ is near 0, 90, or 180 degrees, which makes sine and cosine close to zero and 1. Be careful when using acos and asin because values even *slightly* out of the range [-1,1] can lead to NaNs. The computer will not tolerate nansense! What happens to Rzyx when y is +/- 90 degrees?When the middle euler angle is 90 degrees, we need to look to the non-zero terms to get values for the first and last angles. For example, for ZYX euler angles, we need to handle the case when Y is either positive or negative 90 degrees.
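Before turning to that special case, here is a minimal NumPy sketch of the regular-case formulas above (the function name and the test angles are chosen just for illustration):
###Code
# Minimal sketch of ZYX euler extraction for the regular case (|r20| < 1), using the 0-based indices above
import numpy as np

def euler_zyx_from_matrix(R):
    theta_y = np.arcsin(-R[2, 0])
    theta_x = np.arctan2(R[2, 1], R[2, 2])
    theta_z = np.arctan2(R[1, 0], R[0, 0])
    return theta_x, theta_y, theta_z

# Quick check: build Rzyx from known angles and recover them
x_t, y_t, z_t = 0.3, -0.5, 1.1
cx_, sx_, cy_, sy_, cz_, sz_ = np.cos(x_t), np.sin(x_t), np.cos(y_t), np.sin(y_t), np.cos(z_t), np.sin(z_t)
Rx_n = np.array([[1, 0, 0], [0, cx_, -sx_], [0, sx_, cx_]])
Ry_n = np.array([[cy_, 0, sy_], [0, 1, 0], [-sy_, 0, cy_]])
Rz_n = np.array([[cz_, -sz_, 0], [sz_, cz_, 0], [0, 0, 1]])
print(euler_zyx_from_matrix(Rz_n @ Ry_n @ Rx_n))  # approximately (0.3, -0.5, 1.1)
###Output
_____no_output_____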
###Code
# Compute Rzyx when y is +90
Ry90 = Matrix([
[ 0, 0, 1],
[ 0, 1, 0],
[-1, 0, 0]])
Rzyx = Rz * Ry90 * Rx
pprint(Rzyx)
###Output
⎡0 -cx⋅sz + cz⋅sx cx⋅cz + sx⋅sz⎤
⎢ ⎥
⎢0 cx⋅cz + sx⋅sz cx⋅sz - cz⋅sx⎥
⎢ ⎥
⎣-1 0 0 ⎦
###Markdown
So now we have the above expression. We know that the Y rotation is 90 degrees but what about the X and Z rotations? We need to look at the upper part of the matrix to figure these out.Let's apply the sine and cosine [addition rules](https://en.wikipedia.org/wiki/List_of_trigonometric_identities)$$\sin(z + x) = \sin(z) \cos(x) + \cos(z) \sin(x) \\\sin(z - x) = \sin(z) \cos(x) - \cos(z) \sin(x) \\\cos(z + x) = \cos(z) \cos(x) - \sin(z) \sin(x) \\\cos(z - x) = \cos(z) \cos(x) + \sin(z) \sin(x) \\$$Another useful property of sine and cosine is the following$$\sin(-\theta) = -\sin(\theta) \\\cos(-\theta) = \cos(\theta)$$Let's try to simplify the above matrix using these rules. For example, the term in position $r_{01}$ has two terms containing both sine and cosine, so it corresponds to one of the sine rules. It also has a negative, so it's the difference between two angles X and Z.$$\begin{bmatrix}0 & s(x-z) & c(x-z) \\0 & c(x-z) & s(z-x) \\-1 & 0 & 0\end{bmatrix}$$which can be rewritten so every term has angle $x-z$$$\begin{bmatrix}0 & s(x-z) & c(x-z) \\0 & c(x-z) & -s(x-z) \\-1 & 0 & 0\end{bmatrix}$$Therefore, we can use atan2($r_{01}$, $r_{02}$) to get the $\theta$ angle corresponding to the difference $x-z$. Many values for X and Z could combine to be $\theta$. Let's choose one of X or Z to be zero and then the other can be $\theta$.
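As a sketch of that convention for the +90 degree case (using the 0-based indices above and putting the whole angle into X):
###Code
# Sketch of the singular case r20 == -1 (theta_y = +90 degrees); only x - z is determined
import numpy as np

def euler_zyx_y_plus_90(R):
    theta_y = np.pi / 2.0
    theta_z = 0.0                            # choose z = 0 by convention
    theta_x = np.arctan2(R[0, 1], R[0, 2])   # atan2(s(x-z), c(x-z)) = x - z
    return theta_x, theta_y, theta_z
###Output
_____no_output_____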
###Code
# Compute Rzyx when y is -90
Ry90_Minus = Ry90.T
Rzyx = Rz * Ry90_Minus * Rx
pprint(Rzyx)
###Output
⎡0 -cx⋅sz - cz⋅sx -cx⋅cz + sx⋅sz⎤
⎢ ⎥
⎢0 cx⋅cz - sx⋅sz -cx⋅sz - cz⋅sx⎥
⎢ ⎥
⎣1 0 0 ⎦
###Markdown
$$\begin{bmatrix}0 & -s(x+z) & -c(x+z) \\0 & c(x+z) & -s(x+z) \\1 & 0 & 0\end{bmatrix}$$ Convert from all euler angles to a matrixThe other five euler angle combinations can be derived similarly. XYZ
###Code
print("Rxyz")
pprint(Rx * Ry * Rz)
print()
print()
print("Y = 90")
pprint(Rx * Ry90 * Rz)
print()
print()
print("Y = -90")
pprint(Rx * Ry90_Minus * Rz)
print()
print()
###Output
Rxyz
⎡ cy⋅cz -cy⋅sz sy ⎤
⎢ ⎥
⎢cx⋅sz + cz⋅sx⋅sy cx⋅cz - sx⋅sy⋅sz -cy⋅sx⎥
⎢ ⎥
⎣-cx⋅cz⋅sy + sx⋅sz cx⋅sy⋅sz + cz⋅sx cx⋅cy ⎦
Y = 90
⎡ 0 0 1⎤
⎢ ⎥
⎢cx⋅sz + cz⋅sx cx⋅cz - sx⋅sz 0⎥
⎢ ⎥
⎣-cx⋅cz + sx⋅sz cx⋅sz + cz⋅sx 0⎦
Y = -90
⎡ 0 0 -1⎤
⎢ ⎥
⎢cx⋅sz - cz⋅sx cx⋅cz + sx⋅sz 0 ⎥
⎢ ⎥
⎣cx⋅cz + sx⋅sz -cx⋅sz + cz⋅sx 0 ⎦
###Markdown
Y = 90$$\begin{bmatrix}0 & 0 & 1 \\s(x+z) & c(x+z) & 0\\-c(x+z) & s(x+z) & 0 \\\end{bmatrix}$$Y = -90$$\begin{bmatrix}0 & 0 & -1 \\s(z-x) & c(z-x) & 0 \\c(z-x) & -s(z-x) & 0 \\\end{bmatrix}$$ YXZ
###Code
print("Ryxz")
pprint(Ry * Rx * Rz)
print()
print()
Rx90 = Matrix([
[1, 0, 0],
[0, 0,-1],
[0, 1, 0]])
print("+90")
pprint(Ry * Rx90 * Rz)
print()
print()
Rx90_Minus = Rx90.T
print("-90")
pprint(Ry * Rx90.T * Rz)
print()
print()
###Output
Ryxz
⎡cy⋅cz + sx⋅sy⋅sz -cy⋅sz + cz⋅sx⋅sy cx⋅sy⎤
⎢ ⎥
⎢ cx⋅sz cx⋅cz -sx ⎥
⎢ ⎥
⎣cy⋅sx⋅sz - cz⋅sy cy⋅cz⋅sx + sy⋅sz cx⋅cy⎦
+90
⎡cy⋅cz + sy⋅sz -cy⋅sz + cz⋅sy 0 ⎤
⎢ ⎥
⎢ 0 0 -1⎥
⎢ ⎥
⎣cy⋅sz - cz⋅sy cy⋅cz + sy⋅sz 0 ⎦
-90
⎡cy⋅cz - sy⋅sz -cy⋅sz - cz⋅sy 0⎤
⎢ ⎥
⎢ 0 0 1⎥
⎢ ⎥
⎣-cy⋅sz - cz⋅sy -cy⋅cz + sy⋅sz 0⎦
###Markdown
X = 90$$\begin{bmatrix}c(y-z) & s(y-z) & 0\\0 & 0 & -1 \\-s(y-z) & c(y-z) & 0 \\\end{bmatrix}$$X = -90$$\begin{bmatrix}c(y+z) & -s(y+z) & 0 \\0 & 0 & 1 \\-s(y+z) & -c(y+z) & 0 \\\end{bmatrix}$$ ZXY
###Code
print("Rzxy")
pprint(Rz * Rx * Ry)
print()
print()
print("+90")
pprint(Rz * Rx90 * Ry)
print()
print()
print("-90")
pprint(Rz * Rx90.T * Ry)
print()
print()
###Output
Rzxy
⎡cy⋅cz - sx⋅sy⋅sz -cx⋅sz cy⋅sx⋅sz + cz⋅sy ⎤
⎢ ⎥
⎢cy⋅sz + cz⋅sx⋅sy cx⋅cz -cy⋅cz⋅sx + sy⋅sz⎥
⎢ ⎥
⎣ -cx⋅sy sx cx⋅cy ⎦
+90
⎡cy⋅cz - sy⋅sz 0 cy⋅sz + cz⋅sy ⎤
⎢ ⎥
⎢cy⋅sz + cz⋅sy 0 -cy⋅cz + sy⋅sz⎥
⎢ ⎥
⎣ 0 1 0 ⎦
-90
⎡cy⋅cz + sy⋅sz 0 -cy⋅sz + cz⋅sy⎤
⎢ ⎥
⎢cy⋅sz - cz⋅sy 0 cy⋅cz + sy⋅sz ⎥
⎢ ⎥
⎣ 0 -1 0 ⎦
###Markdown
X = 90$$\begin{bmatrix}c(y+z) & 0 & s(y+z) \\s(y+z) & 0 & -c(y+z) \\0 & 1 & 0 \\\end{bmatrix}$$X = -90$$\begin{bmatrix}c(y-z) & 0 & s(y-z) \\-s(y-z) & 0 & c(y-z) \\0 & -1 & 0 \\\end{bmatrix}$$ XZY
###Code
print("Rxzy")
pprint(Rx * Rz * Ry)
print()
print()
Rz90 = Matrix([
[0, -1, 0],
[1, 0, 0],
[0, 0, 1]])
print("+90")
pprint(Rx * Rz90 * Ry)
print()
print()
print("-90")
pprint(Rx * Rz90.T * Ry)
print()
print()
###Output
Rxzy
⎡ cy⋅cz -sz cz⋅sy ⎤
⎢ ⎥
⎢cx⋅cy⋅sz + sx⋅sy cx⋅cz cx⋅sy⋅sz - cy⋅sx⎥
⎢ ⎥
⎣-cx⋅sy + cy⋅sx⋅sz cz⋅sx cx⋅cy + sx⋅sy⋅sz⎦
+90
⎡ 0 -1 0 ⎤
⎢ ⎥
⎢cx⋅cy + sx⋅sy 0 cx⋅sy - cy⋅sx⎥
⎢ ⎥
⎣-cx⋅sy + cy⋅sx 0 cx⋅cy + sx⋅sy⎦
-90
⎡ 0 1 0 ⎤
⎢ ⎥
⎢-cx⋅cy + sx⋅sy 0 -cx⋅sy - cy⋅sx⎥
⎢ ⎥
⎣-cx⋅sy - cy⋅sx 0 cx⋅cy - sx⋅sy ⎦
###Markdown
Z = 90$$\begin{bmatrix}0 & -1 & 0 \\c(x-y) & 0 & -s(x-y) \\s(x-y) & 0 & c(x-y) \\\end{bmatrix}$$Z = -90$$\begin{bmatrix}0 & 1 & 0 \\-c(x+y) & 0 & -s(x+y) \\-s(x+y) & 0 & c(x+y) \\\end{bmatrix}$$ YZX
###Code
print("Ryzx")
pprint(Ry * Rz * Rx)
print()
print()
print("+90")
pprint(Ry * Rz90 * Rx)
print()
print()
print("-90")
pprint(Ry * Rz90.T * Rx)
print()
print()
###Output
Ryzx
⎡cy⋅cz -cx⋅cy⋅sz + sx⋅sy cx⋅sy + cy⋅sx⋅sz⎤
⎢ ⎥
⎢ sz cx⋅cz -cz⋅sx ⎥
⎢ ⎥
⎣-cz⋅sy cx⋅sy⋅sz + cy⋅sx cx⋅cy - sx⋅sy⋅sz⎦
+90
⎡0 -cx⋅cy + sx⋅sy cx⋅sy + cy⋅sx⎤
⎢ ⎥
⎢1 0 0 ⎥
⎢ ⎥
⎣0 cx⋅sy + cy⋅sx cx⋅cy - sx⋅sy⎦
-90
⎡0 cx⋅cy + sx⋅sy cx⋅sy - cy⋅sx⎤
⎢ ⎥
⎢-1 0 0 ⎥
⎢ ⎥
⎣0 -cx⋅sy + cy⋅sx cx⋅cy + sx⋅sy⎦
|
_sequential/Deep Learning Sequential/Week 3/Machine Translation/Neural+machine+translation+with+attention+-+v1.ipynb | ###Markdown
Neural Machine TranslationWelcome to your first programming assignment for this week! You will build a Neural Machine Translation (NMT) model to translate human readable dates ("25th of June, 2009") into machine readable dates ("2009-06-25"). You will do this using an attention model, one of the most sophisticated sequence to sequence models. This notebook was produced together with NVIDIA's Deep Learning Institute. Let's load all the packages you will need for this assignment.
###Code
from keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply
from keras.layers import RepeatVector, Dense, Activation, Lambda
from keras.optimizers import Adam
from keras.utils import to_categorical
from keras.models import load_model, Model
import keras.backend as K
import numpy as np
from faker import Faker
import random
from tqdm import tqdm
from babel.dates import format_date
from nmt_utils import *
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
1 - Translating human readable dates into machine readable datesThe model you will build here could be used to translate from one language to another, such as translating from English to Hindi. However, language translation requires massive datasets and usually takes days of training on GPUs. To give you a place to experiment with these models even without using massive datasets, we will instead use a simpler "date translation" task. The network will input a date written in a variety of possible formats (*e.g. "the 29th of August 1958", "03/30/1968", "24 JUNE 1987"*) and translate them into standardized, machine readable dates (*e.g. "1958-08-29", "1968-03-30", "1987-06-24"*). We will have the network learn to output dates in the common machine-readable format YYYY-MM-DD. <!-- Take a look at [nmt_utils.py](./nmt_utils.py) to see all the formatting. Count and figure out how the formats work, you will need this knowledge later. !--> 1.1 - DatasetWe will train the model on a dataset of 10000 human readable dates and their equivalent, standardized, machine readable dates. Let's run the following cells to load the dataset and print some examples.
###Code
m = 10000
dataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m)
dataset[:10]
###Output
_____no_output_____
###Markdown
You've loaded:- `dataset`: a list of tuples of (human readable date, machine readable date)- `human_vocab`: a python dictionary mapping all characters used in the human readable dates to an integer-valued index - `machine_vocab`: a python dictionary mapping all characters used in machine readable dates to an integer-valued index. These indices are not necessarily consistent with `human_vocab`. - `inv_machine_vocab`: the inverse dictionary of `machine_vocab`, mapping from indices back to characters. Let's preprocess the data and map the raw text data into the index values. We will also use Tx=30 (which we assume is the maximum length of the human readable date; if we get a longer input, we would have to truncate it) and Ty=10 (since "YYYY-MM-DD" is 10 characters long).
###Code
Tx = 30
Ty = 10
X, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty)
print("X.shape:", X.shape)
print("Y.shape:", Y.shape)
print("Xoh.shape:", Xoh.shape)
print("Yoh.shape:", Yoh.shape)
###Output
_____no_output_____
###Markdown
You now have:- `X`: a processed version of the human readable dates in the training set, where each character is replaced by an index mapped to the character via `human_vocab`. Each date is further padded to $T_x$ values with a special character (). `X.shape = (m, Tx)`- `Y`: a processed version of the machine readable dates in the training set, where each character is replaced by the index it is mapped to in `machine_vocab`. You should have `Y.shape = (m, Ty)`. - `Xoh`: one-hot version of `X`, the "1" entry's index is mapped to the character thanks to `human_vocab`. `Xoh.shape = (m, Tx, len(human_vocab))`- `Yoh`: one-hot version of `Y`, the "1" entry's index is mapped to the character thanks to `machine_vocab`. `Yoh.shape = (m, Tx, len(machine_vocab))`. Here, `len(machine_vocab) = 11` since there are 11 characters ('-' as well as 0-9). Lets also look at some examples of preprocessed training examples. Feel free to play with `index` in the cell below to navigate the dataset and see how source/target dates are preprocessed.
###Code
index = 0
print("Source date:", dataset[index][0])
print("Target date:", dataset[index][1])
print()
print("Source after preprocessing (indices):", X[index])
print("Target after preprocessing (indices):", Y[index])
print()
print("Source after preprocessing (one-hot):", Xoh[index])
print("Target after preprocessing (one-hot):", Yoh[index])
###Output
_____no_output_____
###Markdown
2 - Neural machine translation with attentionIf you had to translate a book's paragraph from French to English, you would not read the whole paragraph, then close the book and translate. Even during the translation process, you would read/re-read and focus on the parts of the French paragraph corresponding to the parts of the English you are writing down. The attention mechanism tells a Neural Machine Translation model where it should pay attention to at any step. 2.1 - Attention mechanismIn this part, you will implement the attention mechanism presented in the lecture videos. Here is a figure to remind you how the model works. The diagram on the left shows the attention model. The diagram on the right shows what one "Attention" step does to calculate the attention variables $\alpha^{\langle t, t' \rangle}$, which are used to compute the context variable $context^{\langle t \rangle}$ for each timestep in the output ($t=1, \ldots, T_y$). **Figure 1**: Neural machine translation with attention Here are some properties of the model that you may notice: - There are two separate LSTMs in this model (see diagram on the left). Because the one at the bottom of the picture is a Bi-directional LSTM and comes *before* the attention mechanism, we will call it *pre-attention* Bi-LSTM. The LSTM at the top of the diagram comes *after* the attention mechanism, so we will call it the *post-attention* LSTM. The pre-attention Bi-LSTM goes through $T_x$ time steps; the post-attention LSTM goes through $T_y$ time steps. - The post-attention LSTM passes $s^{\langle t \rangle}, c^{\langle t \rangle}$ from one time step to the next. In the lecture videos, we were using only a basic RNN for the post-activation sequence model, so the state captured by the RNN output activations $s^{\langle t\rangle}$. But since we are using an LSTM here, the LSTM has both the output activation $s^{\langle t\rangle}$ and the hidden cell state $c^{\langle t\rangle}$. However, unlike previous text generation examples (such as Dinosaurus in week 1), in this model the post-activation LSTM at time $t$ does will not take the specific generated $y^{\langle t-1 \rangle}$ as input; it only takes $s^{\langle t\rangle}$ and $c^{\langle t\rangle}$ as input. We have designed the model this way, because (unlike language generation where adjacent characters are highly correlated) there isn't as strong a dependency between the previous character and the next character in a YYYY-MM-DD date. - We use $a^{\langle t \rangle} = [\overrightarrow{a}^{\langle t \rangle}; \overleftarrow{a}^{\langle t \rangle}]$ to represent the concatenation of the activations of both the forward-direction and backward-directions of the pre-attention Bi-LSTM. - The diagram on the right uses a `RepeatVector` node to copy $s^{\langle t-1 \rangle}$'s value $T_x$ times, and then `Concatenation` to concatenate $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$ to compute $e^{\langle t, t'}$, which is then passed through a softmax to compute $\alpha^{\langle t, t' \rangle}$. We'll explain how to use `RepeatVector` and `Concatenation` in Keras below. Lets implement this model. 
You will start by implementing two functions: `one_step_attention()` and `model()`.**1) `one_step_attention()`**: At step $t$, given all the hidden states of the Bi-LSTM ($[a^{},a^{}, ..., a^{}]$) and the previous hidden state of the second LSTM ($s^{}$), `one_step_attention()` will compute the attention weights ($[\alpha^{},\alpha^{}, ..., \alpha^{}]$) and output the context vector (see Figure 1 (right) for details):$$context^{} = \sum_{t' = 0}^{T_x} \alpha^{}a^{}\tag{1}$$ Note that we are denoting the attention in this notebook $context^{\langle t \rangle}$. In the lecture videos, the context was denoted $c^{\langle t \rangle}$, but here we are calling it $context^{\langle t \rangle}$ to avoid confusion with the (post-attention) LSTM's internal memory cell variable, which is sometimes also denoted $c^{\langle t \rangle}$. **2) `model()`**: Implements the entire model. It first runs the input through a Bi-LSTM to get back $[a^{},a^{}, ..., a^{}]$. Then, it calls `one_step_attention()` $T_y$ times (`for` loop). At each iteration of this loop, it gives the computed context vector $c^{}$ to the second LSTM, and runs the output of the LSTM through a dense layer with softmax activation to generate a prediction $\hat{y}^{}$. **Exercise**: Implement `one_step_attention()`. The function `model()` will call the layers in `one_step_attention()` $T_y$ using a for-loop, and it is important that all $T_y$ copies have the same weights. I.e., it should not re-initiaiize the weights every time. In other words, all $T_y$ steps should have shared weights. Here's how you can implement layers with shareable weights in Keras:1. Define the layer objects (as global variables for examples).2. Call these objects when propagating the input.We have defined the layers you need as global variables. Please run the following cells to create them. Please check the Keras documentation to make sure you understand what these layers are: [RepeatVector()](https://keras.io/layers/core/repeatvector), [Concatenate()](https://keras.io/layers/merge/concatenate), [Dense()](https://keras.io/layers/core/dense), [Activation()](https://keras.io/layers/core/activation), [Dot()](https://keras.io/layers/merge/dot).
###Code
# Defined shared layers as global variables
repeator = RepeatVector(Tx)
concatenator = Concatenate(axis=-1)
densor = Dense(1, activation = "relu")
activator = Activation(softmax, name='attention_weights') # We are using a custom softmax(axis = 1) loaded in this notebook
dotor = Dot(axes = 1)
###Output
_____no_output_____
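###Markdown
For intuition only, the context computation in equation (1) can be sketched with plain NumPy on toy shapes; this is an illustration of the formula, not the graded Keras implementation:
###Code
# Toy illustration of equation (1): alphas are a softmax over the Tx axis, context is their weighted sum
import numpy as np

m_demo, Tx_demo, n_a_demo = 2, 5, 3                       # assumed toy sizes
a_demo = np.random.randn(m_demo, Tx_demo, 2 * n_a_demo)   # stands in for the Bi-LSTM hidden states
e_demo = np.random.randn(m_demo, Tx_demo, 1)               # stands in for the "energies"

alphas_demo = np.exp(e_demo) / np.exp(e_demo).sum(axis=1, keepdims=True)
context_demo = (alphas_demo * a_demo).sum(axis=1, keepdims=True)
print(alphas_demo.sum(axis=1).ravel())  # each set of attention weights sums to 1
print(context_demo.shape)               # (m, 1, 2*n_a)
###Output
_____no_output_____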
###Markdown
Now you can use these layers to implement `one_step_attention()`. In order to propagate a Keras tensor object X through one of these layers, use `layer(X)` (or `layer([X,Y])` if it requires multiple inputs.), e.g. `densor(X)` will propagate X through the `Dense(1)` layer defined above.
###Code
# GRADED FUNCTION: one_step_attention
def one_step_attention(a, s_prev):
"""
Performs one step of attention: Outputs a context vector computed as a dot product of the attention weights
"alphas" and the hidden states "a" of the Bi-LSTM.
Arguments:
a -- hidden state output of the Bi-LSTM, numpy-array of shape (m, Tx, 2*n_a)
s_prev -- previous hidden state of the (post-attention) LSTM, numpy-array of shape (m, n_s)
Returns:
context -- context vector, input of the next (post-attention) LSTM cell
"""
### START CODE HERE ###
# Use repeator to repeat s_prev to be of shape (m, Tx, n_s) so that you can concatenate it with all hidden states "a" (≈ 1 line)
s_prev = None
# Use concatenator to concatenate a and s_prev on the last axis (≈ 1 line)
concat = None
# Use densor to propagate concat through a small fully-connected neural network to compute the "energies" variable e. (≈1 lines)
e = None
# Use activator and e to compute the attention weights "alphas" (≈ 1 line)
alphas = None
# Use dotor together with "alphas" and "a" to compute the context vector to be given to the next (post-attention) LSTM-cell (≈ 1 line)
context = None
### END CODE HERE ###
return context
###Output
_____no_output_____
###Markdown
You will be able to check the expected output of `one_step_attention()` after you've coded the `model()` function. **Exercise**: Implement `model()` as explained in figure 2 and the text above. Again, we have defined global layers that will share weights to be used in `model()`.
###Code
n_a = 32
n_s = 64
post_activation_LSTM_cell = LSTM(n_s, return_state = True)
output_layer = Dense(len(machine_vocab), activation=softmax)
###Output
_____no_output_____
###Markdown
Now you can use these layers $T_y$ times in a `for` loop to generate the outputs, and their parameters will not be reinitialized. You will have to carry out the following steps: 1. Propagate the input into a [Bidirectional](https://keras.io/layers/wrappers/bidirectional) [LSTM](https://keras.io/layers/recurrent/lstm)2. Iterate for $t = 0, \dots, T_y-1$: 1. Call `one_step_attention()` on $[\alpha^{},\alpha^{}, ..., \alpha^{}]$ and $s^{}$ to get the context vector $context^{}$. 2. Give $context^{}$ to the post-attention LSTM cell. Remember pass in the previous hidden-state $s^{\langle t-1\rangle}$ and cell-states $c^{\langle t-1\rangle}$ of this LSTM using `initial_state= [previous hidden state, previous cell state]`. Get back the new hidden state $s^{}$ and the new cell state $c^{}$. 3. Apply a softmax layer to $s^{}$, get the output. 4. Save the output by adding it to the list of outputs.3. Create your Keras model instance, it should have three inputs ("inputs", $s^{}$ and $c^{}$) and output the list of "outputs".
###Code
# GRADED FUNCTION: model
def model(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):
"""
Arguments:
Tx -- length of the input sequence
Ty -- length of the output sequence
n_a -- hidden state size of the Bi-LSTM
n_s -- hidden state size of the post-attention LSTM
human_vocab_size -- size of the python dictionary "human_vocab"
machine_vocab_size -- size of the python dictionary "machine_vocab"
Returns:
model -- Keras model instance
"""
# Define the inputs of your model with a shape (Tx,)
# Define s0 and c0, initial hidden state for the decoder LSTM of shape (n_s,)
X = Input(shape=(Tx, human_vocab_size))
s0 = Input(shape=(n_s,), name='s0')
c0 = Input(shape=(n_s,), name='c0')
s = s0
c = c0
# Initialize empty list of outputs
outputs = []
### START CODE HERE ###
# Step 1: Define your pre-attention Bi-LSTM. Remember to use return_sequences=True. (≈ 1 line)
a = None
# Step 2: Iterate for Ty steps
for t in range(None):
# Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line)
context = None
# Step 2.B: Apply the post-attention LSTM cell to the "context" vector.
# Don't forget to pass: initial_state = [hidden state, cell state] (≈ 1 line)
s, _, c = None
# Step 2.C: Apply Dense layer to the hidden state output of the post-attention LSTM (≈ 1 line)
out = None
# Step 2.D: Append "out" to the "outputs" list (≈ 1 line)
None
# Step 3: Create model instance taking three inputs and returning the list of outputs. (≈ 1 line)
model = None
### END CODE HERE ###
return model
###Output
_____no_output_____
###Markdown
Run the following cell to create your model.
###Code
model = model(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab))
###Output
_____no_output_____
###Markdown
Let's get a summary of the model to check if it matches the expected output.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
**Expected Output**:Here is the summary you should see **Total params:** 185,484 **Trainable params:** 185,484 **Non-trainable params:** 0 **bidirectional_1's output shape ** (None, 30, 128) **repeat_vector_1's output shape ** (None, 30, 128) **concatenate_1's output shape ** (None, 30, 256) **attention_weights's output shape ** (None, 30, 1) **dot_1's output shape ** (None, 1, 128) **dense_2's output shape ** (None, 11) As usual, after creating your model in Keras, you need to compile it and define what loss, optimizer and metrics you want to use. Compile your model using `categorical_crossentropy` loss, a custom [Adam](https://keras.io/optimizers/adam) [optimizer](https://keras.io/optimizers/usage-of-optimizers) (`learning rate = 0.005`, $\beta_1 = 0.9$, $\beta_2 = 0.999$, `decay = 0.01`) and `['accuracy']` metrics:
###Code
### START CODE HERE ### (≈2 lines)
opt = None
None
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
The last step is to define all your inputs and outputs to fit the model:- You already have X of shape $(m = 10000, T_x = 30)$ containing the training examples.- You need to create `s0` and `c0` to initialize your `post_activation_LSTM_cell` with 0s.- Given the `model()` you coded, you need the "outputs" to be a list of 11 elements of shape (m, T_y). So that: `outputs[i][0], ..., outputs[i][Ty]` represent the true labels (characters) corresponding to the $i^{th}$ training example (`X[i]`). More generally, `outputs[i][j]` is the true label of the $j^{th}$ character in the $i^{th}$ training example.
###Code
s0 = np.zeros((m, n_s))
c0 = np.zeros((m, n_s))
outputs = list(Yoh.swapaxes(0,1))
###Output
_____no_output_____
###Markdown
Let's now fit the model and run it for one epoch.
###Code
model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100)
###Output
_____no_output_____
###Markdown
While training you can see the loss as well as the accuracy on each of the 10 positions of the output. The table below gives you an example of what the accuracies could be if the batch had 2 examples: Thus, `dense_2_acc_8: 0.89` means that you are predicting the 7th character of the output correctly 89% of the time in the current batch of data. We have run this model for longer, and saved the weights. Run the next cell to load our weights. (By training a model for several minutes, you should be able to obtain a model of similar accuracy, but loading our model will save you time.)
###Code
model.load_weights('models/model.h5')
###Output
_____no_output_____
###Markdown
You can now see the results on new examples.
###Code
EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001']
for example in EXAMPLES:
source = string_to_int(example, Tx, human_vocab)
source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source))).swapaxes(0,1)
prediction = model.predict([source, s0, c0])
prediction = np.argmax(prediction, axis = -1)
output = [inv_machine_vocab[int(i)] for i in prediction]
print("source:", example)
print("output:", ''.join(output))
###Output
_____no_output_____
###Markdown
You can also change these examples to test with your own examples. The next part will give you a better sense on what the attention mechanism is doing--i.e., what part of the input the network is paying attention to when generating a particular output character. 3 - Visualizing Attention (Optional / Ungraded)Since the problem has a fixed output length of 10, it is also possible to carry out this task using 10 different softmax units to generate the 10 characters of the output. But one advantage of the attention model is that each part of the output (say the month) knows it needs to depend only on a small part of the input (the characters in the input giving the month). We can visualize what part of the output is looking at what part of the input.Consider the task of translating "Saturday 9 May 2018" to "2018-05-09". If we visualize the computed $\alpha^{\langle t, t' \rangle}$ we get this: **Figure 8**: Full Attention MapNotice how the output ignores the "Saturday" portion of the input. None of the output timesteps are paying much attention to that portion of the input. We see also that 9 has been translated as 09 and May has been correctly translated into 05, with the output paying attention to the parts of the input it needs to to make the translation. The year mostly requires it to pay attention to the input's "18" in order to generate "2018." 3.1 - Getting the activations from the networkLets now visualize the attention values in your network. We'll propagate an example through the network, then visualize the values of $\alpha^{\langle t, t' \rangle}$. To figure out where the attention values are located, let's start by printing a summary of the model .
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Navigate through the output of `model.summary()` above. You can see that the layer named `attention_weights` outputs the `alphas` of shape (m, 30, 1) before `dot_2` computes the context vector for every time step $t = 0, \ldots, T_y-1$. Lets get the activations from this layer.The function `attention_map()` pulls out the attention values from your model and plots them.
###Code
attention_map = plot_attention_map(model, human_vocab, inv_machine_vocab, "Tuesday April 08 1993", num = 6, n_s = 64)
###Output
_____no_output_____ |
simple_nn_pytorch.ipynb | ###Markdown
Import the necessary libraries for the exercise.
###Code
pip install torch
pip install torchvision
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import confusion_matrix
import torch
from torch.nn import Parameter
from torch import nn
from torch.nn import functional as F
from torchvision import datasets, transforms
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader
from torch.optim import SGD
from sklearn.metrics import accuracy_score
###Output
_____no_output_____
###Markdown
Check the version of pytorch
###Code
torch.__version__
###Output
_____no_output_____
###Markdown
The torchvision package consists of popular datasets, model architectures, and common image transformations for computer vision.LINK - https://pytorch.org/docs/stable/torchvision/index.htmlPreprocessing of data such as conversion of the features into Tensors and Normalisation is done when we load the data. Transforms are common image transformations. They can be chained together using Compose.Here we are applying two types of transforms to the images in the dataset.1. Converting the images into torch tensors2. Normalising the images using mean and standard deviation (x_normalized = (x - mean) / std; the values 0.1307 and 0.3081 represent the mean and standard deviation of the MNIST dataset)LINK - https://pytorch.org/docs/stable/torchvision/transforms.htmlThe dataset can be downloaded using the inbuilt function in pytorch LINK - https://pytorch.org/docs/stable/torchvision/datasets.htmlmnist
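As a quick arithmetic illustration of that normalization step (the pixel value below is made up):
###Code
# After ToTensor() pixels are in [0, 1]; Normalize((0.1307,), (0.3081,)) then applies (x - mean) / std
x_pixel = 0.5                                  # hypothetical pixel value
x_normalized = (x_pixel - 0.1307) / 0.3081
print(x_normalized)                            # roughly 1.199
###Output
_____no_output_____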
###Code
pwd
#transforming the images to tensors and normalising
transforms = transforms.Compose(([transforms.ToTensor(),transforms.Normalize((0.1307,), (0.3081,))]))
# mnist train_set
mnist_train = MNIST('./data',train=True,download=True,transform=transforms)
# mnist test_set
mnist_test = MNIST('./data',train=False,transform=transforms)
###Output
_____no_output_____
###Markdown
Creating a train test and validation set. Train Dataset:The actual dataset that we use to train the model (weights and biases in the case of Neural Network). The model sees and learns from this data.Validation Dataset:The validation set is used to evaluate a given model, but this is for frequent evaluation. We as machine learning engineers use this data to fine-tune the model hyperparameters. Hence the model occasionally sees this data, but never does it “Learn” from this.Test Dataset:The Test dataset provides the gold standard used to evaluate the model. It is only used once a model is completely trained(using the train and validation sets). The test set is generally what is used to evaluate competing models.We split the training dataset into Train and Validation dataset. Here we will split 90% of our training data into train dataset and 10% into validation dataset.We would be using torch.utils.data.random_split function to randomly split the training data.LINK - https://pytorch.org/docs/stable/data.html
###Code
train_len = int(0.9*mnist_train.__len__())
valid_len = mnist_train.__len__() - train_len
mnist_train, mnist_valid = torch.utils.data.random_split(mnist_train, lengths=[train_len, valid_len])
print(f"Size of : Training-set:\t\t{mnist_train.__len__()}")
print(f"Size of : Validation-set:\t{mnist_valid.__len__()}")
print(f"Size of : Test-set:\t\t{mnist_test.__len__()}")
# The images are stored in one-dimensional arrays of this length.
img_size_flat = 784 # 28 x 28
# Tuple with height and width of images used to reshape arrays.
img_shape = (28,28)
# Number of classes, one class for each of 10 digits.
num_classes = 10
###Output
_____no_output_____
###Markdown
Helper-function for plotting imagesFunction used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if data is correct
###Code
# Get the first images from the test-set.
images = mnist_test.data[0:9]
# Get the true classes for those images.
cls_true = mnist_test.targets[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
PyTorchPyTorch is a Python package that provides two high-level features:1. Tensor computation (like NumPy) with strong GPU acceleration2. Deep neural networks built on a tape-based autograd systemYou can reuse your favorite Python packages such as NumPy, SciPy and Cython to extend PyTorch when needed. More about PyTorchAt a granular level, PyTorch is a library that consists of the following components:| Component | Description || ---- | --- || **torch** | a Tensor library like NumPy, with strong GPU support || **torch.autograd** | a tape-based automatic differentiation library that supports all differentiable Tensor operations in torch || **torch.nn** | a neural networks library deeply integrated with autograd designed for maximum flexibility || **torch.multiprocessing** | Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training || **torch.utils** | DataLoader, Trainer and other utility functions for convenience || **torch.legacy(.nn/.optim)** | legacy code that has been ported over from torch for backward compatibility reasons |Usually one uses PyTorch either as:- a replacement for NumPy to use the power of GPUs.- a deep learning research platform that provides maximum flexibility and speed Dynamic Neural Networks: Tape-Based AutogradPyTorch has a unique way of building neural networks: using and replaying a tape recorder.Most frameworks such as TensorFlow, Theano, Caffe and CNTK have a static view of the world.One has to build a neural network, and reuse the same structure again and again.Changing the way the network behaves means that one has to start from scratch.With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you tochange the way your network behaves arbitrarily with zero lag or overhead. Our inspiration comesfrom several research papers on this topic, as well as current and past work such as[torch-autograd](https://github.com/twitter/torch-autograd),[autograd](https://github.com/HIPS/autograd),[Chainer](http://chainer.org), etc.While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date.You get the best of speed and flexibility for your crazy research. Recommended Readinghttps://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.htmlhttps://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html Neural Network Neural networks can be constructed using the torch.nn package.A typical training procedure for a neural network is as follows:- Define the neural network that has some learnable parameters (or weights)- Iterate over a dataset of inputs- Process input through the network- Compute the loss (how far is the output from being correct)- Propagate gradients back into the network’s parameters- Update the weights of the network, typically using a simple update rule: `weight = weight - learning_rate * gradient` Model We will define networks as a subclass of nn.ModuleYou just have to define the forward function, and the backward function (where gradients are computed) is automatically defined for you using autograd. 
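A tiny, self-contained sketch of that autograd behaviour (the numbers are arbitrary):
###Code
# Gradients are tracked for tensors created with requires_grad=True and filled in by backward()
import torch

w = torch.tensor([2.0, -1.0], requires_grad=True)
x = torch.tensor([3.0, 4.0])
loss = (w * x).sum() ** 2     # a scalar function of w
loss.backward()               # reverse-mode autodiff populates w.grad
print(w.grad)                 # d(loss)/dw = 2 * (w.x) * x = [12., 16.]
###Output
_____no_output_____
###Markdown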
You can use any of the Tensor operations in the forward function.The network we will be defining here will contain a `single Linear Layer`.The linear layer contains 2 variables `weights` and `bias`, which are changed by PyTorch so as to make the model perform better on the training data.This simple mathematical model multiplies the images variable x with the weights and then adds the biases.The result is a matrix of shape [num_images, num_classes] because x has shape [num_images, img_size_flat] and weights has shape [img_size_flat, num_classes], so the multiplication of those two matrices is a matrix with shape [num_images, num_classes] and then the biases vector is added to each row of that matrix. __init__( ) function1. The first variable that must be optimized is called weights and is defined here as a Pytorch variable that must be initialized with zeros and whose shape is `[img_size_flat, num_classes]`, so it is a 2-dimensional tensor (or matrix) with img_size_flat rows and num_classes columns.2. The second variable that must be optimized is called biases and is defined as a 1-dimensional tensor (or vector) of length num_classes.3. Here in the __init__() function both the weight and bias Tensors are wrapped in `Parameter`.Parameters are `Tensor` subclasses that have a very special property when used with Modules - when they're assigned as Module attributes they are automatically added to the list of its parameters, and will appear e.g. in the `parameters()` iterator. forward( ) function1. The forward function will take an input of shape `[num_images, img_size_flat]`, multiply it with `weight` of shape `[img_size_flat, num_classes]`, and add the `bias` to the final result. We can use the torch built-in function `torch.addmm()` for performing the above operation.It is similar to `tf.nn.xw_plus_b` in `Tensorflow`2. The `out` in the first line of `forward()` will have the shape `[num_images, num_classes]`3. However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each row of the `out` matrix sums to one, and each element is limited between zero and one. This is calculated using the so-called softmax function and the result is stored in `out` itself.```N.B The module `softmax` doesn't work directly with NLLLoss, which expects the Log to be computed between the Softmax and itself. We will be using LogSoftmax instead (it's faster and has better numerical properties).```Softmax takes an input matrix and `dim`, the dimension along which Softmax will be computed (so every slice along dim will sum to 1).If you do not apply log softmax in the forward function, you should use Cross Entropy loss instead, since it combines LogSoftmax and NLLLoss (see the Cross Entropy version at the end of this notebook). LINK - https://pytorch.org/docs/stable/torch.htmltorch.addmm get_weights( )1. This function is used to get the weights of the Network. This could be plotted to understand what the model actually has learned
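A shape-only sketch of what that `torch.addmm` call does (the sizes are just the ones described above):
###Code
# torch.addmm(bias, x, weight) computes x @ weight + bias
import torch

x_demo = torch.randn(5, 784)       # a batch of 5 flattened images
w_demo = torch.zeros(784, 10)
b_demo = torch.zeros(10)
out_demo = torch.addmm(b_demo, x_demo, w_demo)
print(out_demo.shape)              # torch.Size([5, 10])
###Output
_____no_output_____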
###Code
class LinearModel(nn.Module):
def __init__(self):
super(LinearModel, self).__init__()
self.weight = Parameter(torch.zeros((784, 10),dtype=torch.float32,requires_grad=True))
self.bias = Parameter(torch.zeros((10),dtype=torch.float32,requires_grad=True))
def get_weights(self):
return self.weight
def forward(self,x):
out = torch.addmm(self.bias, x, self.weight)
out = F.log_softmax(out,dim=1)
return out
###Output
_____no_output_____
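###Markdown
A quick sanity check of the note above: `log_softmax` followed by `nll_loss` gives the same value as `cross_entropy` applied to raw logits (the tensors below are random examples):
###Code
# nll_loss(log_softmax(logits)) == cross_entropy(logits); cross_entropy bundles both steps
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)             # 4 samples, 10 classes
targets = torch.tensor([0, 3, 9, 1])
loss_a = F.nll_loss(F.log_softmax(logits, dim=1), targets)
loss_b = F.cross_entropy(logits, targets)
print(torch.allclose(loss_a, loss_b))   # True
###Output
_____no_output_____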
###Markdown
Training Network Function to train the NetworkInput:model : model objectdevice : cpu or cudatrain_loader : Data loader. Combines a dataset and a sampler, and provides single or multi-process iterators over the datasetoptimizer : the function we are going to use for adjusting model parameters Step by step 1. `model.train()`: tells your model that you are training the model. So effectively layers like dropout, batchnorm etc. which behave different on the train and test procedures know what is going on and hence can behave accordingly.2. `for loop` : it will iterate through train_loader and will give you 2 outputs `data` and `target`. The size of `data` and `target` will depend on the `batch_size` that you have provided while creating the `DataLoader` function for `train dataset` . `data` shape `[batch_size,row,columns]`,`target` shape `[batch_size]` 3. Here`data` will be of shape `[batch_size,row,columns]` ,we will convert `data` into `[batch_size,row x columns]` (Flattening the images to fit into Linear layer)4. Moving the the `data` and `target` to devices based on our choice and machine specification. 5. By calling `optimizer.zero_grad` we will set Gradient buffers to zero. Else the gradients will get accumulated to existing gradients.6. Input the `data` to `model` and get outputs, The output will be of shape `[batch_size,num_classes]`7. `Loss function` : A loss function takes the (output, target) pair of inputs, and computes a value that estimates how far away the output is from the target . We will be using the negative log likelihood loss. It is useful to train a classification problem with C classes.`nll_loss` : calculates the difference between the `output` to `target` , `output` of shape `[batch_size,num_classes]` and `target` of shape `[batch_size]` where each value is `0 ≤ targets[i] ≤ num_classes−1`NOTE :MultiClass Classification : We can use have a log softmax layer in the forward function and use NLL_loss. Alternatively you can use Cross Entropy Loss for multi class classification. It has both nn.LogSoftmax() and nn.NLLLoss() in one single class.So you do not need a log softmax layer in your forward function. Code with Cross Entropy loss is available at the end of this notebook.Binary Classification: If you have a binary classification problem, You can use a sigmoide layer in your forward function and use BCELOss (Binary Cross Entropy loss) . Alrenatively you can use BCEWithLogitsLoss. This loss combines a `Sigmoid` layer and the `BCELoss` in one single class. So you do not need a sigmoid layer in your forward function. 8. when we call `loss.backward()`, the whole graph is differentiated w.r.t. the loss, and all Tensors in the graph that has `requires_grad=True` will have their `.grad` Tensor accumulated with the gradient.9. To backpropagate the error all we have to do is to `loss.backward()`10. `optimizer.step()` : performs a parameter update based on the current gradient (stored in .grad attribute of a parameter) and the update rule.LINK : https://pytorch.org/docs/stable/nn.html?highlight=model%20train https://pytorch.org/docs/stable/_modules/torch/nn/modules/loss.html https://pytorch.org/docs/stable/autograd.html https://pytorch.org/docs/stable/optim.html
###Code
def train(model,device,train_loader,optimizer):
model.train()
correct = 0
#y_true = [] # uncomment these lines if u want to use sklearn's metrics
#y_pred = [] # uncomment these lines if u want to use sklearn's metrics
for data,target in train_loader:
# we have a 28 X 28 image. we are flattening it to
data = torch.reshape(data,(-1,784))
data, target = data.to(device), target.to(device)
#set the gradient values to zero at the start of training.
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
pred = output.argmax(dim=1)
correct += pred.eq(target).sum().item() #comment this line if you want to use sklearn's metrics instead.
#y_pred.extend(pred.reshape(-1).tolist()) # uncomment these lines if u want to use sklearn's metrics
#y_true.extend(target.reshape(-1).tolist()) # uncomment these lines if u want to use sklearn's metrics
#print("Accuracy is" , accuracy_score(y_true,y_pred)) # uncomment these lines if u want to use sklearn's metrics
print('Training set Accuracy: {}/{} ({:.0f}%)\n'.format(correct, len(train_loader.dataset),100. * correct / len(train_loader.dataset))) #comment this line if you want to use sklearn's metrics instead.
###Output
_____no_output_____
###Markdown
Function to test the Network- Input: - model : model object - device : `cpu` or `cuda` - test_loader : Data loader. Combines a dataset and a sampler, and provides single or multi-process iterators over the dataset. Step by step 1. `model.eval()` : tells your model that you are testing the model. So layers like `Dropout` and `BatchNormalization` switch to evaluation behaviour (dropout is turned off and batch norm uses its running statistics). 2. The wrapper `with torch.no_grad()` temporarily disables gradient tracking inside its scope, because we don't need to compute gradients while we are getting our inference from the network. This will reduce memory usage and speed up computations.3. The next three lines are the same as in the train function4. we will calculate the test_loss across the complete dataset5. we will also calculate the `accuracy` of the model by comparing predictions with the targets
###Code
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
#y_true = [] # uncomment these lines if u want to use sklearn's metrics
#y_pred = [] # uncomment these lines if u want to use sklearn's metrics
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
data = torch.reshape(data,(-1,784))
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1) # get the index of the max log-probability
correct += pred.eq(target).sum().item()
# this is to flatten the tensor, convert to a list and extend it for all the batches
#y_pred.extend(pred.reshape(-1).tolist()) # uncomment these lines if u want to use sklearn's metrics
#y_true.extend(target.reshape(-1).tolist()) # uncomment these lines if u want to use sklearn's metrics
#print("Accuracy is" , accuracy_score(y_true,y_pred)) # uncomment these lines if u want to use sklearn's metrics
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(test_loss, correct, len(test_loader.dataset),
(100. * correct / len(test_loader.dataset))))
###Output
_____no_output_____
###Markdown
Checking Device Availability
###Code
device = "cuda" if torch.cuda.is_available() else "cpu"
###Output
_____no_output_____
###Markdown
DataLoader function for Training Set and Test Set:Data loader combines a dataset and a sampler, and provides single or multi-process iterators over the dataset.
###Code
kwargs = {'num_workers': 1, 'pin_memory': True} if device=='cuda' else {}
train_loader = DataLoader(mnist_train,batch_size=64,shuffle=True,**kwargs)
test_loader = DataLoader(mnist_test,batch_size=1024,shuffle=False,**kwargs)
###Output
_____no_output_____
###Markdown
Model Model object is created and transferred to `device` according to the availability of a `GPU`
###Code
model = LinearModel().to(device)
###Output
_____no_output_____
###Markdown
Optimizer We will be using the `stochastic gradient descent (SGD)` optimizer. `SGD` takes the `model parameters` that we want to optimize and a learning rate `lr` at which the model parameters get updated
###Code
optimizer = SGD(model.parameters(),lr=0.5)
###Output
_____no_output_____
###Markdown
Model parameters contain the weights and biases that we defined inside our model, so during training the weights and biases will get updated.
###Code
list(model.parameters())
###Output
_____no_output_____
###Markdown
Training Number of `epochs` is 5, so in training the model will be iterated through the complete training set 5 times. After each epoch we will run `test` and check how well the model is performing on unseen data. If the training accuracy is going up and the testing accuracy is going down, we can say that the model is overfitting on the train set. There are techniques like `EarlyStopping` and usage of a validation dataset that we will discuss later.
###Code
epochs = 5
for epoch in range(epochs):
train(model,device,train_loader,optimizer)
test(model,device,test_loader)
###Output
Training set Accuracy: 45927/54000 (85%)
Test set: Average loss: 0.7441, Accuracy: 8943/10000 (89%)
Training set Accuracy: 47363/54000 (88%)
Test set: Average loss: 0.8624, Accuracy: 8811/10000 (88%)
Training set Accuracy: 47610/54000 (88%)
Test set: Average loss: 0.8176, Accuracy: 8865/10000 (89%)
Training set Accuracy: 47697/54000 (88%)
Test set: Average loss: 0.7644, Accuracy: 8996/10000 (90%)
Training set Accuracy: 47816/54000 (89%)
Test set: Average loss: 0.9330, Accuracy: 8752/10000 (88%)
###Markdown
CODE with Cross Entropy Loss :
###Code
device = "cuda" if torch.cuda.is_available() else "cpu"
kwargs = {'num_workers': 1, 'pin_memory': True} if device=='cuda' else {}
train_loader = DataLoader(mnist_train,batch_size=64,shuffle=True,**kwargs)
test_loader = DataLoader(mnist_test,batch_size=1024,shuffle=False,**kwargs)
model = LinearModel().to(device) # note: run the redefined LinearModel class below (which returns raw logits) first, so that CrossEntropyLoss receives logits rather than log-probabilities
criterion = nn.CrossEntropyLoss()
optimizer = SGD(model.parameters(),lr=0.5)
class LinearModel(nn.Module):
def __init__(self):
super(LinearModel, self).__init__()
self.weight = Parameter(torch.zeros((784, 10),dtype=torch.float32,requires_grad=True))
self.bias = Parameter(torch.zeros((10),dtype=torch.float32,requires_grad=True))
def get_weights(self):
return self.weight
def forward(self,x):
out = torch.addmm(self.bias, x, self.weight)
return out
def train(model,device,train_loader,optimizer):
model.train()
correct = 0
y_true = []
y_pred = []
for data,target in train_loader:
data = torch.reshape(data,(-1,784))
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
# Cross entropy loss combines nn.LogSoftmax() and nn.NLLLoss() in one single class.
# So we need not have a logsoftmax layer in our forward function.
loss = criterion(output, target)
loss.backward()
optimizer.step()
pred = output.argmax(dim=1)
y_pred.extend(pred.reshape(-1).tolist())
y_true.extend(target.reshape(-1).tolist())
print("Accuracy on training set is" , accuracy_score(y_true,y_pred))
def test(model, device, test_loader):
model.eval()
correct = 0
y_true = []
y_pred = []
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
data = torch.reshape(data,(-1,784))
output = model(data)
pred = output.argmax(dim=1) # get the index of the max log-probability
y_true.extend(target.reshape(-1).tolist())
y_pred.extend(pred.reshape(-1).tolist())
print("Accuracy on test set is" , accuracy_score(y_true,y_pred))
epochs = 5
for epoch in range(epochs):
train(model,device,train_loader,optimizer)
test(model,device,test_loader)
###Output
Accuracy on training set is 0.8511111111111112
Accuracy on test set is 0.857
Accuracy on training set is 0.8763703703703704
Accuracy on test set is 0.8975
Accuracy on training set is 0.881962962962963
Accuracy on test set is 0.8883
Accuracy on training set is 0.881962962962963
Accuracy on test set is 0.8891
Accuracy on training set is 0.8859814814814815
Accuracy on test set is 0.7985
|
Data-Science-HYD-2k19/Day-based/Day 13.ipynb | ###Markdown
Constructors, Destructors, public, private and protected variables and methods: cotd.. of oops:
###Code
class Name():
def Hello(self): #If self is not given: TypeError: Hello() takes 0 positional arguments but 1 was given
print("Machine Learning")
ob = Name()
ob.Hello
ob.Hello()
###Output
Machine Learning
###Markdown
Constructors: ( __init__ )
###Code
#__init__ is the constructor; you should always define a constructor in a class
class Person:
def __init__(self,name,surname,yob): #self is a reference to its own object
self.name=name
self.surname=surname
self.yob=yob
ob=Person("Sanjay","Prabhu",1999)
print(ob)
print("My name is {}{} and my yob is: {}".format(ob.name,ob.surname,ob.yob))
#Another example:
class Department:
def __init__(self,name,num_of_stu):
self.name = name
self.num_of_stu = num_of_stu
cse = Department("Comp. Sci.",240)
it = Department("Information Technology",144)
cce = Department("Comp. Comm. and Elec.",144)
mech = Department("Mechanical",200)
chem = Department("Chemical",150)
civil = Department("Civil",200)
biomed = Department("Bio. Med.",100)
a = [cse,it,cce,mech,chem,civil,biomed]
for i in a:
print("Name of the branch: ",i.name)
print("Num of Stu: ",i.num_of_stu)
#Global variables and local/class variables:
a = 10
class Demo():
b = 100
print(a)
print(Demo.b)
# the class variable Demo.b is not changed here, but the global variable a can be reassigned
a = 100
b = 120
print(a)
print(Demo.b)
#Another example
class Person:
def __init__(self,name,surname,yob):
self.name = name
self.surname = surname
self.yob = yob
def age(self,curr_yr):
return curr_yr-self.yob
def __str__(self):
return "{} {} was born in {}".format(self.name,self.surname,self.yob)
Indian = Person("Sanjay","Prabhu",1999)
American = Person("Michael","McLarken",1997)
English = Person("Timothy","Bradshire",1998)
Spanish = Person("Jesus","Escobar",1995)
a = [Indian,American,English,Spanish]
for i in a:
print(i) # works based on the __str__
print("{}'s' age is: {}".format(i.name,i.age(2019)))
dir(Indian)
print(Indian.__dict__.keys())
#Another way of writing a class using a set method, but a bad practice
class Person:
def set_name(self,name):
self.name = name
def set_surname(self,surname):
self.surname = surname
def set_yob(self,yob):
self.yob = yob
def age(self,curr_yr):
return curr_yr-self.yob
def __str__(self):
return "{} {} was born in: {}".format(self.name,self.surname,self.yob)
ob = Person()
ob.set_name("Mani")
ob.set_surname("Gupta")
ob.set_yob(1978)
ob.name
ob.surname
ob.yob
ob.age(2018)
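# Added sketch: the more idiomatic alternative to explicit set_* methods is a property,
# which keeps attribute-style access while still allowing validation or computed logic.
class PersonWithProperty:
    def __init__(self, name):
        self._name = name
    @property
    def name(self):
        return self._name
    @name.setter
    def name(self, value):
        self._name = value
p = PersonWithProperty("Mani")
p.name = "Mani Gupta"   # goes through the setter
p.name                  # goes through the getter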
###Output
_____no_output_____
###Markdown
Advanced topics: Abstraction, Encapsulation, Polymorphism, Inheritance
###Code
''' private,public,protected '''
#public variables (anyname)
class Person:
def __init__(self,name,surname,yob):
self.name = name
self.surname = surname
self.yob = yob
def age(self,curr_yr):
return curr_yr-self.yob
def __str__(self):
return "{} {} was born in {}".format(self.name,self.surname,self.yob)
ob = Person("Anony","mous",1976)
print(ob)
print(ob.__dict__.keys())
#protected variables (_anyname)
#Used for financial and banking applications
class Person:
def __init__(self,name,surname,yob): #these are the parameters, hence there is no need of adding _ here
self._name = name #Here on, we are protecting the variables, hence, we add _ before the var name
self._surname = surname
self._yob = yob
def age(self,curr_yr):
return curr_yr-self._yob
def __str__(self):
return "{} {} was born in {}".format(self._name,self._surname,self._yob)
ob = Person("Anony","mous",1976)
print(ob)
print(ob.__dict__.keys())
ob.name #usage of protected hence giving an error
ob._name
ob._surname
ob._yob
ob.age(2019) #no change here since it only return a statement
#private variables( __anyname)
class Person:
def __init__(self,name,surname,yob):
self.__name = name
self.__surname = surname
self.__yob = yob
def age(self,curr_yr):
return curr_yr-self.__yob
def __str__(self):
return "{} {} was born in {}".format(self.__name,self.__surname,self.__yob)
ob = Person("Anony","mous",1976)
print(ob)
print(ob.__dict__.keys())
ob.name
ob._name
ob.__name #Usage of private variable hence giving an error
ob._Person__name
ob._Person__surname
ob._Person__yob
ob.age(2019) #no change here since it only return a statement
###Output
_____no_output_____
###Markdown
Difference between public, protected and private:
###Code
class Diff():
def __init__(self):
self.pub = ("Public variable")
self._pro = ("Protected variable")
self.__pri = ("Private variable")
def public_method(self):
print("Public")
def _protected_method(self):
print("Protected")
def __private_method(self):
print("Private")
ob = Diff()
ob.pub
ob._pro
ob._Diff__pri
ob.public_method()
ob._protected_method()
ob._Diff__private_method()
ob.__dict__
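# Added illustration: "private" names are only name-mangled to _ClassName__name,
# which is why ob._Diff__pri works while ob.__pri would raise AttributeError.
assert "_Diff__pri" in ob.__dict__
assert "pub" in ob.__dict__ and "_pro" in ob.__dict__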
###Output
_____no_output_____
###Markdown
Nested private method- is it possible? [check]
###Code
#Example for private method:
class World:
def __India(self):
def __Telangana(self):
def __Hyderabad(self):
def __Ameerpet(self):
print("You have reached Ameerpet")
earth = World()
earth._World__India
#Example to call a private variable inside a private method
class Diff():
def __init__(self,to_be_printed):
self.pub = ("Public variable")
self._pro = ("Protected variable")
self.__pri = ("Private variable")
self.__to_be_printed = to_be_printed
def public_method(self):
print("Public")
def _protected_method(self):
print("Protected")
def __private_method(self,__to_be_printed):
return __to_be_printed
def __private_method_2(self,__not_defined_in_constructor):
return __not_defined_in_constructor
ob = Diff(2019)
ob._Diff__private_method(2019)
ob._Diff__private_method("constructed outside the code")
#If private method can be written inside a private method?
###Output
_____no_output_____
###Markdown
Can a method be written inside another method? [check]
###Code
#If method can be written inside another method?
class World:
def India(self):
print("You have reached India")
def Telangana(self):
print("You have reached Telangana")
def Hyderabad(self):
print("You have reached Hyderabad")
def Ameerpet(self):
print("You have reached Ameerpet")
ob = World()
ob.India()
#If method can be written inside another method?
class World:
def India(self,yo):
print("You have reached India")
self.yo = yo
def Telangana():
print(yo)
def Hyderabad():
print(yo)
def Ameerpet():
print(yo)
ob = World()
ob.India(12)
ob.India(12).Telangana(12) # raises AttributeError: India() returns None, so there is nothing to call Telangana() on
###Output
You have reached India
###Markdown
Can we use a Constructor twice?
###Code
class Diff():
def __init__(self,to_be_printed):
self.pub = ("Public variable")
self._pro = ("Protected variable")
self.__pri = ("Private variable")
self.__to_be_printed = to_be_printed
def public_method(self):
print("Public")
def _protected_method(self):
print("Protected")
def __private_method(self,__to_be_printed):
return __to_be_printed
def __init__(self):
self.pub = ("Overwritten public")
self._pro = ("Overwritten protected")
self.__pri = ("Overwritten private")
ob = Diff()
ob.pub
ob._pro
ob._Diff__pri
ob._Diff__private_method(2021)
###Output
_____no_output_____
###Markdown
Final example of public,protected and private:
###Code
#Another example of a class without a constructor, using setter and getter methods
class mlearn:
pub = ("Data Science Public")
_pro = ("Data Science protected")
__pri = ("Data Science private")
def SetCourse(self,name):
self.name = name
def GetCourse(self):
return self.name
def _SetCourse2(self,name):
self._name = name
def _GetCourse2(self):
return self._name
def __SetCourse3(self,name):
self.__name = name
def __GetCourse3(self):
return self.__name
ob1 = mlearn()
###Output
_____no_output_____
###Markdown
public, protected and private variable access
###Code
ob1.pub
ob1._pro
ob1._mlearn__pri
###Output
_____no_output_____
###Markdown
public, protected and private method set
###Code
ob1.SetCourse("public")
ob1._SetCourse2("protected")
ob1._mlearn__SetCourse3("private")
###Output
_____no_output_____
###Markdown
public, protected and private method get
###Code
ob1.GetCourse()
ob1._GetCourse2()
ob1._mlearn__GetCourse3()
###Output
_____no_output_____
###Markdown
Destructor(__del__):
###Code
class Test:
''' module here '''
def __init__(self):
print("Constructor")
def __del__(self):
print("Destructor")
ob = Test()
ob.__del__()
print(Test.__name__)
print(Test.__doc__)
print(Test.__module__)
print(Test.__bases__)
###Output
(<class 'object'>,)
###Markdown
Deletion of an object created:
###Code
ob
if __name__ == "__main__":
ob = Test()
del ob
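# Added illustration: __del__ runs when the last reference to the object disappears,
# not merely when a del statement is written.
a = Test()   # prints "Constructor"
b = a        # a second reference to the same object
del a        # no "Destructor" yet - b still refers to the object
del b        # now "Destructor" is printed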
ob
###Output
_____no_output_____
###Markdown
Deletion of more than 1 object:
###Code
class Test:
''' module here '''
def __init__(self):
print("Constructor still not deleted")
def __del__(self):
print("Destructor still not deleted")
ob1 = Test()
ob2 = Test()
ob3 = Test()
if __name__ == "__main__":
ob1 = Test()
del ob1
ob1
ob2
ob3
###Output
_____no_output_____ |
archive/2017/lectures/Lists.ipynb | ###Markdown
Lists Defining a list
###Code
colors = ['red', 'green', 'blue', 'orange']
empty_list = []
print(colors)
###Output
['red', 'green', 'blue', 'orange']
###Markdown
Retrieving the value of the element at a given position**!Counting starts from zero**
###Code
print(colors[0])
print(colors[1])
print(colors[2])
print(colors[3])
###Output
red
green
blue
orange
###Markdown
Assigning a value to the element at a given position
###Code
colors[0] = 'brown'
colors
###Output
_____no_output_____
###Markdown
Length of a list
###Code
len(colors)
len([5, 4, 3, 2, 1])
###Output
_____no_output_____
###Markdown
Appending an element to the end of the list
###Code
colors.append('black')
colors
###Output
_____no_output_____
###Markdown
Inserting an element at the beginning of the list
###Code
colors.insert(0, 'white')
colors
###Output
_____no_output_____
###Markdown
Removing an element from a list
###Code
colors.pop(0)
colors
###Output
_____no_output_____
###Markdown
Iterating over the elements of the list
###Code
index = 0
while index < len(colors):
print(colors[index])
index = index + 1
###Output
brown
green
blue
orange
black
###Markdown
The preferred way
###Code
for color in colors:
print(color)
###Output
brown
green
blue
orange
black
###Markdown
An easy way to generate sequences of numbers
###Code
for number in [1, 2, 3, 4, 5, 7, 8, 9, 10]:
print(number)
###Output
1
2
3
4
5
7
8
9
10
###Markdown
This is equivalent to:
###Code
for number in range(1, 11):
print(number)
for number in range(1, 21, 2):
print(number)
for number in range(10, 0, -1):
print(number)
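# Added note: range() is lazy; wrap it in list() to materialise the sequence
print(list(range(1, 11)))      # [1, 2, ..., 10]
print(list(range(10, 0, -1)))  # [10, 9, ..., 1]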
###Output
_____no_output_____ |
aws_marketplace/creating_marketplace_products/models/build-model-package-for-listing-in-aws-marketplace.ipynb | ###Markdown
Package a machine learning model for listing on the AWS Marketplace This sample notebook provides scripts you can use to package and verify your ML model for listing on AWS Marketplace. This sample notebook shows you the end-to-end process by building a sample ML model based on the Iris plant dataset. The following diagram provides an overview of the ML model packaging process. As you can see, In [Step 1](step1) you will train a simple model, and you will store model artifacts into a joblib file. In [Step 2](step2) you will learn how to author scoring logic that loads the ML model, performs inference, and returns the prediction. In [Step 3](step3) you will learn how to package the ML model into a Docker Image. In [Step 4](step4) you will push this Docker image into Amazon ECR. In [Step 5](step5) you will learn how to package the ML model into a Model Package. In [Step 6](step6) you will validate this ML model by deploying it with Amazon SageMaker. In [Step 7](step7) you will learn about resources that guide you on how to list the ML model in AWS Marketplace. **Pre-requisites** 1. Before you start building an ML model, you are strongly recommended watching this [video](https://www.youtube.com/watch?v=npilyL5xvV4) to understand the overall end-to-end ML model building and listing process.2. You need to add the managed policy **[AmazonEC2ContainerRegistryFullAccess](https://docs.aws.amazon.com/AmazonECR/latest/userguide/security-iam-awsmanpol.htmlsecurity-iam-awsmanpol-AmazonEC2ContainerRegistryFullAccess)** to the role associated with your notebook instance.3. In order to create listings on AWS Marketplace you will need to register an AWS account to be a seller account by following the [seller registration process](https://docs.aws.amazon.com/marketplace/latest/userguide/seller-registration-process.html). This guide assumes that the notebook is to be run in the registered seller account.**Note** - This example shows how to package a simple Python example which showcases a decision tree model built with the scikit-learn machine learning package. You are recommended to follow the notebook once and then customize it for your own ML model. **Table of contents**1. [Step 1 - Build ML model](step1): 2. [Step 2 - Implement scoring logic](step2): 3. [Step 3 - Package model artifacts and scoring logic into a Docker image](step3) 1. [Step 3.1: Build Docker image to be included in the ML model](step31) 2. [Step 3.2 : Test Docker image](step32)4. [Step 4 - Push the Docker image into Amazon ECR](step4): 5. [Step 5 - Create an ML Model Package](step5): 6. [Step 6 - Validate model in Amazon SageMaker environment](step6): 1. [Step 6.1 Validate Real-time inference via Amazon SageMaker Endpoint](step61) 2. [Step 6.2 Validate batch inference via batch transform job](step61)7. [Step 7 - List ML model on AWS Marketplace](step7): Here we import all of the libraries needed throughout the notebook to complete the model packaging process. We also create the clients necessary to interact with the various services needed (e.g., ECR, SageMaker, and S3).
###Code
import base64
import boto3
import docker
import json
import pandas as pd
import requests
import sagemaker as sage
from sagemaker import get_execution_role, ModelPackage
import socket
import time
from urllib.parse import urlparse
# Training specific imports
from joblib import dump, load
from sklearn import tree
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets
from src.scoring_logic import IrisLabel
# Common variables
session = sage.Session()
s3_bucket = session.default_bucket()
region = session.boto_region_name
account_id = boto3.client("sts").get_caller_identity().get("Account")
role = get_execution_role()
sagemaker = boto3.client("sagemaker")
s3_client = session.boto_session.client("s3")
ecr = boto3.client("ecr")
sm_runtime = boto3.client("sagemaker-runtime")
###Output
_____no_output_____
###Markdown
The model name will be re-used through various parts of the packaging and publishing process.
###Code
# Define parameters
model_name = "my-flower-detection-model"
###Output
_____no_output_____
###Markdown
Step 1: Build ML model For the purpose of this sample, this section builds a simple classification model using the [Iris plants dataset](https://scikit-learn.org/stable/datasets/toy_dataset.htmliris-plants-dataset) and then serializes it using joblib
###Code
iris = pd.read_csv("s3://sagemaker-sample-files/datasets/tabular/iris/iris.data", header=None)
features = iris.iloc[:, 0:4]
label = iris.iloc[:, 4].apply(
lambda x: IrisLabel[x.replace("Iris-", "")].value
) # Integer encode the labels
classifier = tree.DecisionTreeClassifier(random_state=0)
classifier = classifier.fit(features, label)
# Store the model
dump(classifier, "src/model-artifacts.joblib")
# Show the model
plt.figure(figsize=[15.4, 14.0])
tree.plot_tree(classifier, filled=True)
plt.show()
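# Added sanity check (sketch): reload the serialized model and predict on a known
# setosa sample to confirm the joblib artifact round-trips correctly.
reloaded_classifier = load("src/model-artifacts.joblib")
print(IrisLabel(int(reloaded_classifier.predict([[5.1, 3.5, 1.4, 0.2]])[0])).name)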
###Output
_____no_output_____
###Markdown
Step 2: Implement scoring logic The supported input and output content types are left to the scoring logic. It is recommended to follow the [SageMaker standards](https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-inference.html) for request and response formats where possible to provide a consistent experience to end users. The sample scoring logic provided in this example follows this standard.
###Code
!ls src
###Output
_____no_output_____
###Markdown
scoring_logic.py contains all the necessary logic to take the HTTP requests that arrive via the SageMaker endpoint, translate them as needed to perform an inference, and return a properly formatted response
###Code
!pygmentize src/scoring_logic.py
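# Added illustration (hypothetical sketch - the real src/scoring_logic.py may differ):
# the core of what the scoring service has to do is load the joblib artifact, turn a
# CSV request body into feature rows, and map integer predictions back to IrisLabel
# names. In the actual file this logic sits behind Flask routes for /ping and
# /invocations, and could also attach the X-Amzn-Inference-Metering response header.
def sketch_predict_csv(csv_body):
    clf = load("src/model-artifacts.joblib")
    rows = [[float(v) for v in line.split(",")] for line in csv_body.strip().splitlines()]
    return {"predictions": [{"label": IrisLabel(int(p)).name} for p in clf.predict(rows)]}
print(sketch_predict_csv("5.1, 3.5, 1.4, 0.2"))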
###Output
_____no_output_____
###Markdown
Amazon SageMaker uses two URLs in the container:* `/ping` will receive `GET` requests from the infrastructure. Your program returns 200 if the container is up and accepting requests.* `/invocations` is the endpoint that receives client inference `POST` requests. The format of the request and the response is up to the algorithm. If the client supplied `ContentType` and `Accept` headers, these will be passed in as well. For advanced usage like request tracing, `CustomAttributes` can be used (more [details](https://aws.amazon.com/blogs/machine-learning/amazon-sagemaker-runtime-now-supports-the-customattribute-header/)). All other headers will be stripped off by the SageMaker Endpoint.The container will have the model files in the same place they were written during training: /opt/ml -- model -- Note on Inference pricingWhen the buyer runs your software by hosting an endpoint to perform real-time inference, you can choose to set a price per inference or per hour that the endpoint is active. Batch transform processes always use hourly pricing.With inference pricing, AWS Marketplace charges your buyer for each invocation of your endpoint with an HTTP response code of 2XX. However, in some cases, your software may process a batch of inferences in a single invocation. For an endpoint deployment, you can indicate a custom number of inferences that AWS Marketplace should charge the buyer for that single invocation. To do this, include a custom metering header in the HTTP response headers of your invocation, as in the following example.```X-Amzn-Inference-Metering: {"Dimension": "inference.count", "ConsumedUnits": 3}```This example shows an invocation that charges the buyer for three inferences. You can find more information in the [documentation](https://docs.aws.amazon.com/marketplace/latest/userguide/machine-learning-pricing.html). Step 3: Package model artifacts and scoring logic into a Docker Image Docker image The provided Dockerfile packages the model artifacts and serving logic as well as installing all the dependencies needed at inference time (flask, gunicorn, sklearn).
###Code
!pygmentize src/Dockerfile
###Output
_____no_output_____
###Markdown
In this notebook, we are showing a minimal example for how to create an inference image for clarity. However, for models that use common machine learning frameworks such as Sklearn, TensorFlow, TensorFlow 2, PyTorch, and Apache MXNet, AWS provides [Deep Learning Containers](https://docs.aws.amazon.com/deep-learning-containers/latest/devguide/what-is-dlc.html) as well as [Scikit-learn and SparkML Containers](https://docs.aws.amazon.com/sagemaker/latest/dg/pre-built-docker-containers-scikit-learn-spark.html), which are a set of optimized Docker images which greatly simplify the setup necessary for model serving. These images should be used as base images when possible as they are performance optimized for CPU, GPU, and Inferentia. [Detailed instructions](https://docs.aws.amazon.com/sagemaker/latest/dg/pre-built-containers-frameworks-deep-learning.html) are available for using the Deep Learning Containers.Select the appropriate image (CPU/GPU/Inferentia/framework combination) and replace the ubuntu:18.04 base image when adapting this example notebook for your own model to take advantage of the prebuilt SageMaker containers. For additional performance optimization, [SageMaker Neo](https://docs.aws.amazon.com/sagemaker/latest/dg/neo.html) provides the ability to automatically optimize an existing model implemented in any common machine learning framework for deployment on cloud instances (including Inferentia). To take advantage of SageMaker Neo, follow the instructions for [compilation](https://docs.aws.amazon.com/sagemaker/latest/dg/neo-job-compilation.html) and [serving](https://docs.aws.amazon.com/sagemaker/latest/dg/neo-deployment-hosting-services-prerequisites.html). Serving application `serve` is a minimal script for starting up an HTTP server to handle requests. Here we use [gunicorn](https://gunicorn.org/) as it is appropriate for a production deployment of [Flask](https://flask.palletsprojects.com/) applications. For more complex deployments the prebuilt SageMaker containers include the [SageMaker Inference Toolkit](https://github.com/aws/sagemaker-inference-toolkit).
###Code
!pygmentize -l bash src/serve
###Output
_____no_output_____
###Markdown
How Amazon SageMaker runs your Docker containerAmazon SageMaker runs your container with the argument `serve`. How your container processes this argument depends on the container:* In the example here, we don't define an `ENTRYPOINT` in the Dockerfile so Docker will run the command `train` at training time and `serve` at serving time. In this example, we define these as executable bash scripts, but they could be any program that we want to start in that environment.* If you specify a program as an `ENTRYPOINT` in the Dockerfile, that program will be run at startup and the first argument will be `train` or `serve`. The program can then look at that argument and decide what to do.* If you are building separate containers for training and hosting (or building only for one or the other), you can define a program as an `ENTRYPOINT` in the Dockerfile and ignore (or verify) the first argument passed in. Step 3.1: Build Docker Image to be included in the ML model
###Code
docker_client = docker.from_env()
image, build_logs = docker_client.images.build(path="./src", tag=model_name)
###Output
_____no_output_____
###Markdown
Step 3.2 : Run Docker container
###Code
port = 8080
SECONDS = 1000000000 # One second in nanoseconds
container = docker_client.containers.run(
image,
detach=True,
name=model_name,
command="serve",
healthcheck={
"test": f"curl -f http://localhost:{port}/ping || exit 1",
"interval": 1 * SECONDS, # One second
"timeout": 1 * SECONDS, # One second
},
ports={f"{port}/tcp": port},
)
# Wait until our server is ready
while docker_client.api.inspect_container(container.name)["State"]["Health"]["Status"] != "healthy":
print("Waiting for server to become ready...")
time.sleep(1)
container.reload()
print(
f"Container is {docker_client.api.inspect_container(container.name)['State']['Health']['Status']}"
)
###Output
_____no_output_____
###Markdown
Step 3.3: Perform inference on the container Test that we can send a single record in a request.
###Code
container_invocation_url = f"http://127.0.0.1:{port}/invocations"
r = requests.post(
container_invocation_url,
headers={"Content-Type": "text/csv"},
data="5.1, 3.5, 1.4, 0.2", # setosa labeled record from training set
)
print(r.json())
###Output
_____no_output_____
###Markdown
Next, try sending multiple records in a request.
###Code
# Three records from the training set corresponding to setosa, versicolor, and virginica labels respectively
csv_input_data = """
5.1, 3.5, 1.4, 0.2
6.5, 2.8, 4.6, 1.5
6.3, 2.9, 5.6, 1.8
""".strip()
print(csv_input_data)
r = requests.post(
container_invocation_url, headers={"Content-Type": "text/csv"}, data=csv_input_data
)
print(r.json())
###Output
_____no_output_____
###Markdown
Next, try sending different supported input content types. JSON input Content-Type
###Code
json_input_data = json.dumps(
{
"instances": [
{"features": [5.1, 3.5, 1.4, 0.2]}, # setosa labeled record from training set
{"features": [6.5, 2.8, 4.6, 1.5]}, # versicolor
{"features": [6.3, 2.9, 5.6, 1.8]}, # virginica
]
}
)
r = requests.post(
container_invocation_url,
headers={"Content-Type": "application/json"},
data=json_input_data,
)
print(r.json())
###Output
_____no_output_____
###Markdown
JSON Lines input Content-Type
###Code
# Three records from the training set corresponding to setosa, versicolor, and virginica labels respectively
jsonlines_input_data = """
{\"features\": [5.1, 3.5, 1.4, 0.2]}
{\"features\": [6.5, 2.8, 4.6, 1.5]}
{\"features\": [6.3, 2.9, 5.6, 1.8]}
""".strip()
print(jsonlines_input_data)
r = requests.post(
container_invocation_url,
headers={"Content-Type": "application/jsonlines"},
data=jsonlines_input_data,
)
print(r.json())
###Output
_____no_output_____
###Markdown
CSV output Content-Type Test the response types by setting the Accept header to the desired type
###Code
r = requests.post(
container_invocation_url,
headers={"Content-Type": "application/jsonlines", "Accept": "text/csv"},
data=jsonlines_input_data,
)
print(r.text)
###Output
_____no_output_____
###Markdown
JSON Lines output Content-Type
###Code
r = requests.post(
container_invocation_url,
headers={
"Content-Type": "application/jsonlines",
"Accept": "application/jsonlines",
},
data=jsonlines_input_data,
)
print(r.text)
###Output
_____no_output_____
###Markdown
Note - If the container did not return the expected response, run the following command to see the logs.
###Code
print(container.logs().decode("utf-8"))
###Output
_____no_output_____
###Markdown
Congratulations, now that you have successfully tested container locally you can remove the container.
###Code
container.stop()
container.remove()
###Output
_____no_output_____
###Markdown
Step 4: Push the docker image into Amazon ECR Now that your docker image is ready, you are ready to push the docker image into the Amazon ECR repository. **NOTE:** The ECR repository must belong to the AWS account that is registered as a seller on the AWS Marketplace.
###Code
docker_image_arn = f"{account_id}.dkr.ecr.{region}.amazonaws.com/{model_name}"
docker_image_arn
###Output
_____no_output_____
###Markdown
The following code shows how to build the container image and push the container image to ECR using the Docker python SDK. This code looks for an ECR repository in the account you're using and the current default region (if you're using an Amazon SageMaker notebook instance, this will be the region where the notebook instance was created). If the repository doesn't exist, the script will create it.
###Code
repo_exists = model_name in [
repo["repositoryName"] for repo in ecr.describe_repositories().get("repositories")
]
if not repo_exists:
ecr.create_repository(repositoryName=model_name)
ecr_auth_data = ecr.get_authorization_token()["authorizationData"][0]
username, password = (
base64.b64decode(ecr_auth_data["authorizationToken"]).decode("utf-8").split(":")
)
docker_client.api.tag(model_name, docker_image_arn, tag="latest")
status = docker_client.api.push(
docker_image_arn,
tag="latest",
auth_config={"username": username, "password": password},
)
###Output
_____no_output_____
###Markdown
Step 5: Create an ML Model Package In this section, we will see how you can package your artifacts (ECR image and the trained artifact from your previous training job) into a ModelPackage. Once you complete this, you can list your product as a pretrained model in the AWS Marketplace.**NOTE:** If your model can be deployed on multiple hardware types (CPU/GPU/Inferentia) then a ModelPackage must be created for each and added to the MP listing as different versions as, in general, the container image used will be different for each. Model Package DefinitionA Model Package is a reusable abstraction for model artifacts that packages all the ingredients necessary for inference. It consists of an inference specification that defines the inference image to use along with an optional model data location.The ModelPackage must be created in the AWS account that is registered to be a seller on the AWS Marketplace. Step 5.1 Define parameters
###Code
model_description = "This model accepts petal length, petal width, sepal length, sepal width and predicts whether flower is of type setosa, versicolor, or virginica"
supported_content_types = ["text/csv", "application/json", "application/jsonlines"]
supported_response_MIME_types = [
"application/json",
"text/csv",
"application/jsonlines",
]
###Output
_____no_output_____
###Markdown
A Model Package creation process requires you to specify following: 1. Docker image 2. Model artifacts - You can either package these inside the docker image, as we have done in this example, or provide them as a gzipped tarball. 3. Validation specification In order to provide confidence to sellers (and buyers) that the products work in Amazon SageMaker, before listing them on AWS Marketplace SageMaker needs to perform basic validations. The product can be listed in AWS Marketplace only if this validation process succeeds. This validation process uses the validation profile and sample data provided by you to create a transform job in your account using the Model to verify your inference image works with SageMaker. Next, you need to identify the right instance-sizes for your ML models. You can do so by running performance tests on top of your ML Model.A [sample notebook](https://github.com/aws-samples/aws-marketplace-machine-learning/blob/master/right_size_your_sagemaker_endpoints/Right-sizing%20your%20Amazon%20SageMaker%20Endpoints.ipynb) is available to identify minimum suggested instance types.**NOTE:** In addition to tuning, take into account the requirements of your model when identifying instance types. If your model does not use GPU resources, then do not include GPU instance types. Similarly, if your model does use GPU resources, but can only make use of a single GPU, do not include instance types that have multiple GPUs as it will lead to increased infrastructure charges for your customers with no performance benefit.
###Code
supported_realtime_inference_instance_types = ["ml.m4.xlarge"]
supported_batch_transform_instance_types = ["ml.m4.xlarge"]
validation_file_name = "input.csv"
validation_input_path = f"s3://{s3_bucket}/validation-input-csv/"
validation_output_path = f"s3://{s3_bucket}/validation-output-csv/"
###Output
_____no_output_____
###Markdown
First, we create sample data to be used in the validation stage of the ModelPackage creation and upload it to S3.
###Code
csv_line = "5.1, 3.5, 1.4, 0.2"
with open("input.csv", "w") as f:
f.write(csv_line)
s3_client.put_object(Bucket=s3_bucket, Key="validation-input-csv/input.csv", Body=csv_line)
###Output
_____no_output_____
###Markdown
Step 5.2 Create Model Package
###Code
model_package = sagemaker.create_model_package(
ModelPackageName=model_name,
ModelPackageDescription=model_description,
InferenceSpecification={
"Containers": [
{
"Image": f"{docker_image_arn}:latest",
}
],
"SupportedTransformInstanceTypes": supported_batch_transform_instance_types,
"SupportedRealtimeInferenceInstanceTypes": supported_realtime_inference_instance_types,
"SupportedContentTypes": supported_content_types,
"SupportedResponseMIMETypes": supported_response_MIME_types,
},
CertifyForMarketplace=True, # Make sure to set this to True for Marketplace models!
ValidationSpecification={
"ValidationRole": role,
"ValidationProfiles": [
{
"ProfileName": "Validation-test",
"TransformJobDefinition": {
"BatchStrategy": "SingleRecord",
"TransformInput": {
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": validation_input_path,
}
},
"ContentType": supported_content_types[0],
},
"TransformOutput": {
"S3OutputPath": validation_output_path,
},
"TransformResources": {
"InstanceType": supported_batch_transform_instance_types[0],
"InstanceCount": 1,
},
},
},
],
},
)
session.wait_for_model_package(model_package_name=model_name)
###Output
_____no_output_____
###Markdown
Once you have executed the preceding cell, open the [Model Packages console from Amazon SageMaker](https://console.aws.amazon.com/sagemaker/home?region=us-east-1/model-packages/my-resources) and check if model creation succeeded. Choose the Model and then open the **Validation** tab to see the validation results. Step 6: Validate model in Amazon SageMaker environment Create a deployable model from the model package.
###Code
model = ModelPackage(
role=role,
model_package_arn=model_package["ModelPackageArn"],
sagemaker_session=session,
)
###Output
_____no_output_____
###Markdown
Step 6.1 Validate Real-time inference via Amazon SageMaker Endpoint Deploy the SageMaker model to an endpoint
###Code
model.deploy(
initial_instance_count=1,
instance_type=supported_realtime_inference_instance_types[0],
endpoint_name=model_name,
)
model.endpoint_name
content_type = supported_content_types[0]
###Output
_____no_output_____
###Markdown
Example invocation via boto3
###Code
response = sm_runtime.invoke_endpoint(
EndpointName=model.endpoint_name,
ContentType=content_type,
Accept="application/json",
Body=csv_input_data,
)
json.load(response["Body"])
###Output
_____no_output_____
###Markdown
Example invocation via the AWS CLI
###Code
# Perform inference
!aws sagemaker-runtime invoke-endpoint \
--endpoint-name $model.endpoint_name \
--body fileb://$validation_file_name \
--content-type $content_type \
--region $session.boto_region_name \
out.out
# Print inference
!head out.out
###Output
_____no_output_____
###Markdown
Clean up the endpoint and endpoint configuration created.
###Code
model.sagemaker_session.delete_endpoint(model.endpoint_name)
model.sagemaker_session.delete_endpoint_config(model.endpoint_name)
###Output
_____no_output_____
###Markdown
Step 6.2 Validate batch inference via batch transform job Run a batch-transform job
###Code
transformer = model.transformer(
instance_count=1,
instance_type=supported_batch_transform_instance_types[0],
accept="application/jsonlines",
)
transformer.transform(validation_input_path, content_type=content_type)
transformer.wait()
###Output
_____no_output_____
###Markdown
Retrieve the results from S3
###Code
parsed_url = urlparse(transformer.output_path)
file_key = f"{parsed_url.path[1:]}/{validation_file_name}.out"
response = s3_client.get_object(Bucket=s3_bucket, Key=file_key)
print(response["Body"].read().decode("utf-8"))
###Output
_____no_output_____
###Markdown
Congratulations! You just verified that the batch transform job is working as expected. Since the model is not required, you can delete it. Note that you are deleting the deployable model. Not the model package.
###Code
model.delete_model()
###Output
_____no_output_____
###Markdown
To publish the model to the AWS Marketplace, you will need to specify model package ARN. Copy the following Model Package ARN
###Code
model_package["ModelPackageArn"]
###Output
_____no_output_____ |
Code/PullCensus.ipynb | ###Markdown
Scraping data for US state population from Wikipedia: https://en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States_by_population We have two options: 1. using the autoscraper library 2. using the wikipedia library https://pypi.org/project/wikipedia/
###Code
# To install autoscraper [makes it easier to scrape data]
#pip install git+https://github.com/alirezamika/autoscraper.git
###Output
Collecting git+https://github.com/alirezamika/autoscraper.git
Cloning https://github.com/alirezamika/autoscraper.git to c:\users\e125967\appdata\local\temp\1\pip-req-build-_aetx74t
Requirement already satisfied: requests in c:\users\e125967\appdata\local\continuum\anaconda3\envs\emailo\lib\site-packages (from autoscraper==1.1.5) (2.24.0)
Collecting bs4 (from autoscraper==1.1.5)
Downloading https://files.pythonhosted.org/packages/10/ed/7e8b97591f6f456174139ec089c769f89a94a1a4025fe967691de971f314/bs4-0.0.1.tar.gz
Requirement already satisfied: lxml in c:\users\e125967\appdata\local\continuum\anaconda3\envs\emailo\lib\site-packages (from autoscraper==1.1.5) (4.5.0)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in c:\users\e125967\appdata\local\continuum\anaconda3\envs\emailo\lib\site-packages (from requests->autoscraper==1.1.5) (1.25.8)
Requirement already satisfied: chardet<4,>=3.0.2 in c:\users\e125967\appdata\local\continuum\anaconda3\envs\emailo\lib\site-packages (from requests->autoscraper==1.1.5) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\e125967\appdata\local\continuum\anaconda3\envs\emailo\lib\site-packages (from requests->autoscraper==1.1.5) (2020.6.20)
Requirement already satisfied: idna<3,>=2.5 in c:\users\e125967\appdata\local\continuum\anaconda3\envs\emailo\lib\site-packages (from requests->autoscraper==1.1.5) (2.10)
Collecting beautifulsoup4 (from bs4->autoscraper==1.1.5)
Downloading https://files.pythonhosted.org/packages/d1/41/e6495bd7d3781cee623ce23ea6ac73282a373088fcd0ddc809a047b18eae/beautifulsoup4-4.9.3-py3-none-any.whl (115kB)
Collecting soupsieve>1.2; python_version >= "3.0" (from beautifulsoup4->bs4->autoscraper==1.1.5)
Downloading https://files.pythonhosted.org/packages/6f/8f/457f4a5390eeae1cc3aeab89deb7724c965be841ffca6cfca9197482e470/soupsieve-2.0.1-py3-none-any.whl
Building wheels for collected packages: autoscraper, bs4
Building wheel for autoscraper (setup.py): started
Building wheel for autoscraper (setup.py): finished with status 'done'
Stored in directory: C:\Users\E125967\AppData\Local\Temp\1\pip-ephem-wheel-cache-kfpj3pca\wheels\c5\5f\a4\7f181e331bcece27dcb9f1c88b250235d2021a895a27804614
Building wheel for bs4 (setup.py): started
Building wheel for bs4 (setup.py): finished with status 'done'
Stored in directory: C:\Users\E125967\AppData\Local\pip\Cache\wheels\a0\b0\b2\4f80b9456b87abedbc0bf2d52235414c3467d8889be38dd472
Successfully built autoscraper bs4
Installing collected packages: soupsieve, beautifulsoup4, bs4, autoscraper
Successfully installed autoscraper-1.1.5 beautifulsoup4-4.9.3 bs4-0.0.1 soupsieve-2.0.1
Note: you may need to restart the kernel to use updated packages.
###Markdown
1. using AutoScraper
###Code
from autoscraper import AutoScraper
url = 'https://en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States_by_population'
wanted_list_state = ["California"]
wanted_list_population = ["37,253,956"]
scraper = AutoScraper()
result_statename = scraper.build(url, wanted_list_state)
print(result_statename)
result_population = scraper.build(url, wanted_list_population)
print(result_population)
import pandas as pd
# Create list of lists
list_of_lists = [result_statename, result_population]
print(list_of_lists)
df = pd.DataFrame(list_of_lists).transpose()
df.columns = ['State', 'Population']
print(df)
df.sort_values(by=['State']).shape
###Output
_____no_output_____
###Markdown
We can pull both features into a dictionary with one line:
###Code
from autoscraper import AutoScraper
url = 'https://en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States_by_population'
wanted_dict = {'state': ["California"], 'population': ["37,253,956"]}
scraper = AutoScraper()
scraper.build(url, wanted_dict=wanted_dict)
result = scraper.get_result_similar(url, group_by_alias=True)
print(result)
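# Added sketch: the grouped result can be loaded into a DataFrame for further work
# (this assumes the 'state' and 'population' lists come back with equal lengths).
states_population_df = pd.DataFrame(result)
print(states_population_df.head())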
###Output
{'state': ['California', 'Texas', 'Florida', 'New York', 'Pennsylvania', 'Illinois', 'Ohio', 'Georgia', 'North Carolina', 'Michigan', 'New Jersey', 'Virginia', 'Washington', 'Arizona', 'Massachusetts', 'Tennessee', 'Indiana', 'Missouri', 'Maryland', 'Wisconsin', 'Colorado', 'Minnesota', 'South Carolina', 'Alabama', 'Louisiana', 'Kentucky', 'Oregon', 'Oklahoma', 'Connecticut', 'Utah', 'Iowa', 'Nevada', 'Arkansas', 'Mississippi', 'Kansas', 'New Mexico', 'Nebraska', 'West Virginia', 'Idaho', 'Hawaii', 'New Hampshire', 'Maine', 'Montana', 'Rhode Island', 'Delaware', 'South Dakota', 'North Dakota', 'Alaska', 'Vermont', 'Wyoming', 'Massachusetts', 'Connecticut', 'New Hampshire', 'Maine', 'Rhode Island', 'Vermont', 'New York', 'Pennsylvania', 'New Jersey', 'Florida', 'Georgia', 'North Carolina', 'Virginia', 'Maryland', 'South Carolina', 'West Virginia', 'Delaware', 'District of Columbia', 'Tennessee', 'Alabama', 'Kentucky', 'Mississippi', 'Texas', 'Louisiana', 'Oklahoma', 'Arkansas', 'Illinois', 'Ohio', 'Michigan', 'Indiana', 'Wisconsin', 'Missouri', 'Minnesota', 'Iowa', 'Kansas', 'Nebraska', 'South Dakota', 'North Dakota', 'Arizona', 'Colorado', 'Utah', 'Nevada', 'New Mexico', 'Idaho', 'Montana', 'Wyoming', 'California', 'Washington', 'Oregon', 'Hawaii', 'Alaska'], 'population': ['37,253,956', '25,145,561', '18,801,310', '19,378,102', '12,702,379', '12,830,632', '11,536,504', '9,687,653', '9,535,483', '9,883,640', '8,791,894', '8,001,024', '6,724,540', '6,392,017', '6,547,629', '6,346,105', '6,483,802', '5,988,927', '5,773,552', '5,686,986', '5,029,196', '5,303,925', '4,625,364', '4,779,736', '4,533,372', '4,339,367', '3,831,074', '3,751,351', '3,574,097', '2,763,885', '3,046,355', '2,700,551', '2,915,918', '2,967,297', '2,853,118', '2,059,179', '1,826,341', '1,852,994', '1,567,582', '1,360,301', '1,316,470', '1,328,361', '989,415', '1,052,567', '897,934', '814,180', '672,591', '710,231', '625,741', '563,626', '6,547,629', '3,574,097', '1,316,470', '1,328,361', '1,052,567', '625,741', '19,378,102', '12,702,379', '8,791,894', '18,801,310', '9,687,653', '9,535,483', '8,001,024', '5,773,552', '4,625,364', '1,852,994', '897,934', '601,723', '6,346,105', '4,779,736', '4,339,367', '2,967,297', '25,145,561', '4,533,372', '3,751,351', '2,915,918', '12,830,632', '11,536,504', '9,883,640', '6,483,802', '5,686,986', '5,988,927', '5,303,925', '3,046,355', '2,853,118', '1,826,341', '814,180', '672,591', '6,392,017', '5,029,196', '2,763,885', '2,700,551', '2,059,179', '1,567,582', '989,415', '563,626', '37,253,956', '6,724,540', '3,831,074', '1,360,301', '710,231']}
###Markdown
2. Another approach using wikipedia package
###Code
# To install wikipedia, which makes it super easy to scrape Wikipedia
#pip install wikipedia
import pandas as pd
import wikipedia as wp
#Get the html source
html = wp.page("List of states and territories of the United States by population").html().encode("UTF-8")
df = pd.read_html(html)[0]
#df.to_csv('beautifulsoup_pandas.csv',header=0,index=False)
print (df)
###Output
Rank State \
Current 2010 State
0 1.0 1.0 California
1 2.0 2.0 Texas
2 3.0 4.0 Florida
3 4.0 3.0 New York
4 5.0 6.0 Pennsylvania
5 6.0 5.0 Illinois
6 7.0 7.0 Ohio
7 8.0 9.0 Georgia
8 9.0 10.0 North Carolina
9 10.0 8.0 Michigan
10 11.0 11.0 New Jersey
11 12.0 12.0 Virginia
12 13.0 13.0 Washington
13 14.0 16.0 Arizona
14 15.0 14.0 Massachusetts
15 16.0 17.0 Tennessee
16 17.0 15.0 Indiana
17 18.0 18.0 Missouri
18 19.0 19.0 Maryland
19 20.0 20.0 Wisconsin
20 21.0 22.0 Colorado
21 22.0 21.0 Minnesota
22 23.0 24.0 South Carolina
23 24.0 23.0 Alabama
24 25.0 25.0 Louisiana
25 26.0 26.0 Kentucky
26 27.0 27.0 Oregon
27 28.0 28.0 Oklahoma
28 29.0 30.0 Connecticut
29 30.0 35.0 Utah
30 31.0 29.0 Puerto Rico
31 32.0 31.0 Iowa
32 33.0 36.0 Nevada
33 34.0 33.0 Arkansas
34 35.0 32.0 Mississippi
35 36.0 34.0 Kansas
36 37.0 37.0 New Mexico
37 38.0 39.0 Nebraska
38 39.0 38.0 West Virginia
39 40.0 40.0 Idaho
40 41.0 41.0 Hawaii
41 42.0 43.0 New Hampshire
42 43.0 42.0 Maine
43 44.0 45.0 Montana
44 45.0 44.0 Rhode Island
45 46.0 46.0 Delaware
46 47.0 47.0 South Dakota
47 48.0 49.0 North Dakota
48 49.0 48.0 Alaska
49 50.0 51.0 District of Columbia
50 51.0 50.0 Vermont
51 52.0 52.0 Wyoming
52 53.0 53.0 Guam
53 54.0 54.0 U.S. Virgin Islands
54 55.0 56.0 Northern Mariana Islands
55 56.0 55.0 American Samoa
56 NaN NaN Contiguous United States
57 NaN NaN The fifty states
58 NaN NaN Fifty states + D.C.
59 NaN NaN Total U.S. (including D.C. and territories)
Census population Change, 2010â2019 \
Estimate, July 1, 2019[8] April 1, 2010[9] Percent[note 3]
0 39512223 37253956 6.1%
1 28995881 25145561 15.3%
2 21477737 18801310 14.2%
3 19453561 19378102 0.4%
4 12801989 12702379 0.8%
5 12671821 12830632 â1.2%
6 11689100 11536504 1.3%
7 10617423 9687653 9.6%
8 10488084 9535483 10.0%
9 9986857 9883640 1.0%
10 8882190 8791894 1.0%
11 8535519 8001024 6.7%
12 7614893 6724540 13.2%
13 7278717 6392017 13.9%
14 6892503 6547629 5.3%
15 6829174 6346105 7.6%
16 6732219 6483802 3.8%
17 6137428 5988927 2.5%
18 6045680 5773552 4.7%
19 5822434 5686986 2.4%
20 5758736 5029196 14.5%
21 5639632 5303925 6.3%
22 5148714 4625364 11.3%
23 4903185 4779736 2.6%
24 4648794 4533372 2.5%
25 4467673 4339367 3.0%
26 4217737 3831074 10.1%
27 3956971 3751351 5.5%
28 3565287 3574097 â0.2%
29 3205958 2763885 16.0%
30 3193694 3725789 â14.3%
31 3155070 3046355 3.6%
32 3080156 2700551 14.1%
33 3017804 2915918 3.5%
34 2976149 2967297 0.3%
35 2913314 2853118 2.1%
36 2096829 2059179 1.8%
37 1934408 1826341 5.9%
38 1792147 1852994 â3.3%
39 1787065 1567582 14.0%
40 1415872 1360301 4.1%
41 1359711 1316470 3.3%
42 1344212 1328361 1.2%
43 1068778 989415 8.0%
44 1059361 1052567 0.6%
45 973764 897934 8.4%
46 884659 814180 8.7%
47 762062 672591 13.3%
48 731545 710231 3.0%
49 705749 601723 17.3%
50 623989 625741 â0.3%
51 578759 563626 2.7%
52 168,485[10] 159,358[11] 5.7%
53 106,235[12] 106,405[13] â0.2%
54 51,433[14] 53,883[15] â4.5%
55 49,437[16] 55,519[17] â11.0%
56 325386357 306675006 6.2%
57 327533795 308143836 6.3%
58 328239523 308745538 6.3%
59 331808807 312846492 6.1%
Total U.S. House of Representatives Seats \
Absolute Total U.S. House of Representatives Seats
0 +2,257,700 53
1 +3,850,320 36
2 +2,676,427 27
3 +75,459 27
4 +99,610 18
5 â158,811 18
6 +152,596 16
7 +929,770 14
8 +952,601 13
9 +103,217 14
10 +90,296 12
11 +534,495 11
12 +890,353 10
13 +886,700 9
14 +344,874 9
15 +483,069 9
16 +248,417 9
17 +148,501 8
18 +272,128 8
19 +135,448 8
20 +729,540 7
21 +335,707 8
22 +523,350 7
23 +123,449 7
24 +115,422 6
25 +128,306 6
26 +386,663 5
27 +205,620 5
28 â8,810 5
29 +442,073 4
30 â532,095 1 (non-voting)
31 +108,715 4
32 +379,605 4
33 +101,886 4
34 +8,852 4
35 +60,196 4
36 +37,650 3
37 +108,067 3
38 â60,847 3
39 +219,483 2
40 +55,571 2
41 +43,241 2
42 +15,851 2
43 +79,363 1
44 +6,794 2
45 +75,830 1
46 +70,479 1
47 +89,471 1
48 +21,314 1
49 +104,026 1 (non-voting)
50 â1,752 1
51 +15,133 1
52 +9,127 1 (non-voting)
53 â170 1 (non-voting)
54 â2,450 1 (non-voting)
55 â6,082 1 (non-voting)
56 +19,011,351 432
57 +19,389,959 435
58 +19,493,985 435 (+ 1 non-voting)
59 +18,962,315 435 (+ 6 non-voting)
Estimated population per electoral vote, 2019[note 1] \
Estimated population per electoral vote, 2019[note 1]
0 718404
1 763050
2 740611
3 670812
4 640099
5 633591
6 649394
7 663589
8 699206
9 624179
10 634442
11 656578
12 634574
13 661702
14 626591
15 620834
16 612020
17 613743
18 604568
19 582243
20 639860
21 563963
22 572079
23 544798
24 581099
25 558459
26 602534
27 565282
28 509327
29 534326
30 â
31 525845
32 513359
33 502967
34 496024
35 485552
36 419366
37 386882
38 358435
39 446516
40 353968
41 339928
42 336053
43 356259
44 264840
45 324588
46 294886
47 254021
48 243848
49 235250
50 207996
51 192920
52 â
53 â
54 â
55 â
56 616262
57 612213
58 610111
59 â
Census population per House seat \
Estimated, 2019 2010
0 745514 702885
1 805441 698503
2 795472 696468
3 720502 717707
4 711222 705715
5 703990 712864
6 730569 721032
7 758387 691975
8 806776 733498
9 713347 705974
10 740183 732658
11 775956 727366
12 751489 672454
13 808746 710224
14 765834 727514
15 758797 705123
16 748024 720422
17 767179 748615
18 755710 721694
19 727804 710873
20 822677 720704
21 704954 662991
22 735531 660766
23 700455 682819
24 774799 755562
25 744612 723228
26 843547 766215
27 791394 750270
28 713057 714824
29 801490 690972
30 3193694 3725789
31 788768 761717
32 770039 675173
33 754451 728990
34 744037 742026
35 728329 713280
36 698943 686393
37 644803 608780
38 597391 617670
39 893033 783826
40 707936 680151
41 679856 658233
42 672106 664181
43 1068778 989417
44 529681 526466
45 973764 897934
46 884659 814180
47 762062 672591
48 731545 710231
49 â â
50 623989 625741
51 578759 563626
52 â â
53 â â
54 â â
55 â â
56 753209 708285
57 752951 708405
58 â â
59 â â
Percent of the total U.S. population, 2019[note 2]
Percent of the total U.S. population, 2019[note 2]
0 11.91%
1 8.74%
2 6.47%
3 5.86%
4 3.86%
5 3.82%
6 3.52%
7 3.20%
8 3.16%
9 3.01%
10 2.68%
11 2.57%
12 2.29%
13 2.19%
14 2.09%
15 2.06%
16 2.03%
17 1.85%
18 1.82%
19 1.75%
20 1.74%
21 1.70%
22 1.55%
23 1.48%
24 1.40%
25 1.35%
26 1.27%
27 1.19%
28 1.07%
29 0.97%
30 0.96%
31 0.95%
32 0.93%
33 0.91%
34 0.90%
35 0.88%
36 0.63%
37 0.58%
38 0.54%
39 0.54%
40 0.43%
41 0.41%
42 0.41%
43 0.32%
44 0.32%
45 0.29%
46 0.27%
47 0.23%
48 0.22%
49 0.21%
50 0.19%
51 0.17%
52 0.05%
53 0.03%
54 0.02%
55 0.02%
56 98.06%
57 98.71%
58 98.92%
59 100.00%
|
4_Regularisation_HW_ACD.ipynb | ###Markdown
Homework 4 Regularization in Machine LearningThis colaboratory contains Homework 4 of the Machine Learning course, which is due **October 31, midnight (23:59 EEST time)**. To complete the homework, extract **(File -> Download .ipynb)** and submit to the course webpage. Submission's rules:1. Please, submit only .ipynb that you extract from the Colaboratory.2. Run your homework exercises before submitting (output should be present, preferably restart the kernel and press run all the cells).3. Do not change the description of tasks in red (even if there is a typo|mistake|etc).4. Please, make sure to avoid unnecessary long printouts.5. Each task should be solved right under the question of the task and not elsewhere.6. Solutions to both regular and bonus exercises should be submitted in one IPYNB file. List of Homework's exercises:1. [Ex1](scrollTo=gCUvnKxZXTul) - 3 points2. [Ex2](scrollTo=yLmunCZ9k-G6) - 4 points3. [Ex3](scrollTo=lPdnuVSqeIN2) - 3 points4. [Bonus 1](scrollTo=jdZkblZW7bEp) - up to 4 points (based on quality of presentation)5. [Bonus 2](scrollTo=piaKpOh8If7h) - 2 points
###Code
!pip install -q plotnine
from plotnine import *
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
# old school TF
%tensorflow_version 1.x
# Supress warnings by TF 1.x
import tensorflow as tf
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
import matplotlib.pyplot as plt
# loading in the cifar10 dataset
from keras.datasets import cifar10
from keras.layers import Input, Conv2D, Activation, Flatten, Dense, MaxPooling2D, BatchNormalization, Dropout
from keras import regularizers, optimizers, Sequential
# Auxiliary functions
def plot_curves(history):
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend(['Training', 'Validation'])
plt.title('Loss')
plt.subplot(1, 2, 2)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(['Training', 'Validation'])
plt.title('Accuracy')
def define_model(lambda_):
model = Sequential()
model.add(Conv2D(32, (3,3), padding='same', kernel_regularizer=regularizers.l2(lambda_), input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(Conv2D(32, (3,3), padding='same', kernel_regularizer=regularizers.l2(lambda_)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, (3,3), padding='same', kernel_regularizer=regularizers.l2(lambda_)))
model.add(Activation('relu'))
model.add(Conv2D(64, (3,3), padding='same', kernel_regularizer=regularizers.l2(lambda_)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(128, (3,3), padding='same', kernel_regularizer=regularizers.l2(lambda_)))
model.add(Activation('relu'))
model.add(Conv2D(128, (3,3), padding='same', kernel_regularizer=regularizers.l2(lambda_)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dense(100, activation='relu', kernel_regularizer=regularizers.l2(lambda_)))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
return(model)
def define_model_dropout(dropout_rate = 0):
model = Sequential()
model.add(Conv2D(32, (3,3), padding='same', input_shape=(32, 32, 3)))
model.add(Activation('relu'))
model.add(Conv2D(32, (3,3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(dropout_rate))
model.add(Conv2D(64, (3,3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3,3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(dropout_rate))
model.add(Conv2D(128, (3,3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(128, (3,3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(dropout_rate))
model.add(Dense(100, activation='relu'))
model.add(Dropout(dropout_rate))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))
return(model)
###Output
_____no_output_____
###Markdown
Homework exercise 1 (3 points): ElasticNet algorithm combines both Ridge and LASSO regression. In the class we discussed Ridge and Lasso regression algorithms, which are basically, L2 and L1 regularisations applied to Linear Regression model. ElasticNet is a method that combines both L2 and L1 regularisations under one model. ElasticNet adds both L2 and L1 norms to the error function. Here you should train and visualise ElasticNet model on the toy dataset.
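As a point of reference, a minimal sketch of the penalised objective (assuming sklearn's parameterisation with `alpha` and `l1_ratio`) is $\min_w \frac{1}{2n}\lVert y - Xw\rVert_2^2 + \alpha \cdot l1\_ratio \cdot \lVert w\rVert_1 + \frac{\alpha(1 - l1\_ratio)}{2}\lVert w\rVert_2^2$, so the penalty is a pure L1 norm when `l1_ratio = 1` and a pure L2 norm when `l1_ratio = 0`.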
###Code
# Let's regenerate training data one more time
example_data = pd.DataFrame({'x':[1,2,3,4,5], 'y':[2,4,5,4,5]})
example_data['x^2'] = example_data.x**2
example_data['x^3'] = example_data.x**3
example_data['x^4'] = example_data.x**4
visualisation_data = pd.DataFrame({'x': np.linspace(start=0, stop=6, num=61),
'x^2': np.linspace(start=0, stop=6, num=61)**2,
'x^3': np.linspace(start=0, stop=6, num=61)**3,
'x^4': np.linspace(start=0, stop=6, num=61)**4})
###Output
_____no_output_____
###Markdown
**(Homework exercise 1- a)** Train ElasticNet as well as three other regression models (linear, ridge and lasso) using `sklearn` on example data. Visualise them using artificial visualisation data. **(1 point)**.
###Code
from sklearn.linear_model import LinearRegression, Lasso, Ridge, ElasticNet
# Regularization strength
lambda_ = 1
##### YOUR CODE STARTS #####
# first initialise different regressions
# then fit them to our example_data
# finally predict the visualisation data
lr = LinearRegression()
lr_ridge = Ridge(lambda_)
lr_lasso = Lasso(lambda_)
elnet = ElasticNet()
lr.fit(example_data[['x', 'x^2', 'x^3', 'x^4']], example_data[['y']])
lr_ridge.fit(example_data[['x', 'x^2', 'x^3', 'x^4']], example_data[['y']])
lr_lasso.fit(example_data[['x', 'x^2', 'x^3', 'x^4']], example_data[['y']])
elnet.fit(example_data[['x', 'x^2', 'x^3', 'x^4']], example_data[['y']])
visualisation_data['lr_y'] = lr.predict(visualisation_data[['x', 'x^2', 'x^3', 'x^4']])
visualisation_data['lr_ridge_y'] = lr_ridge.predict(visualisation_data[['x', 'x^2', 'x^3', 'x^4']])
visualisation_data['lr_lass_y'] = lr_lasso.predict(visualisation_data[['x', 'x^2', 'x^3', 'x^4']])
visualisation_data['elnet_y'] = elnet.predict(visualisation_data[['x', 'x^2', 'x^3', 'x^4']])
##### YOUR CODE ENDS #####
###Output
_____no_output_____
###Markdown
**(Homework exercise 1- b)** Visualise all four regression trends (baseline, LASSO, Ridge and ElasticNet) on the same figure. Highlight ElasticNet in yellow, while others in black (linear), red (ridge) and blue (lasso). **(1 point)**.
###Code
fig = (
ggplot(data = example_data,
mapping = aes(x = 'x', y = 'y')) +
##### YOUR CODE STARTS #####
geom_path(data = visualisation_data, mapping = aes(x = 'x', y = 'lr_y'), size = 1, colour = 'black') +
geom_path(data = visualisation_data, mapping = aes(x = 'x', y = 'lr_ridge_y'), size = 1, colour = 'red') +
geom_path(data = visualisation_data, mapping = aes(x = 'x', y = 'lr_lass_y'), size = 1, colour = 'blue') +
geom_path(data = visualisation_data, mapping = aes(x = 'x', y = 'elnet_y'), size = 1, colour = 'yellow') +
##### YOUR CODE ENDS #####
geom_point(fill = '#36B059',
size = 5.0,
stroke = 2.5,
colour = '#2BE062',
shape = 'o') +
labs(
title ='',
x = 'X',
y = 'y',
) +
xlim(0, 6) +
ylim(0, 7) +
theme_bw() +
theme(figure_size = (5, 5),
axis_line = element_line(size = 0.5, colour = "black"),
panel_grid_major = element_line(size = 0.05, colour = "black"),
panel_grid_minor = element_line(size = 0.05, colour = "black"),
axis_text = element_text(colour ='black')) +
guides(size = False)
)
fig
###Output
_____no_output_____
###Markdown
**(Homework exercise 1- c)** Print out ElasticNet coefficients and intercept, compare it to coefficients and intercept of other regressions. Which one ElasticNet seems to be more similar to? Which parameter in `sklearn.ElasticNet` is responsible for the difference between Ridge and LASSO and why? **(1 point)**.
###Code
##### YOUR CODE STARTS #####
print(f'ElasticNet regression coefficients are: {elnet.coef_}')
print(f'ElasticNet regression intercept: {round(elnet.intercept_[0], 8)}')
print(f'Lasso regression coefficients are: {lr_lasso.coef_}')
print(f'Lasso regression intercept: {round(lr_lasso.intercept_[0], 8)}')
print(f'Ridge regression coefficients are: {lr_ridge.coef_[0]}')
print(f'Ridge regression intercept: {round(lr_ridge.intercept_[0], 8)}')
##### YOUR CODE ENDS #####
###Output
ElasticNet regression coefficients are: [ 0. 0. 0.07844547 -0.01265445]
ElasticNet regression intercept: 2.9476951
Lasso regression coefficients are: [ 0. 0. 0.05608653 -0.00829621]
Lasso regression intercept: 3.1005046
Ridge regression coefficients are: [ 0.3882215 0.61877466 -0.20687874 0.0184773 ]
Ridge regression intercept: 1.72050238
###Markdown
Your textual answer goes here: --- Homework exercise 2 (4 points): searching for good dropout rate Use `sklearn` function (`KFold`) for cross-validation to find the best possible dropout rate for the neural network we used in the class (that you call via `define_model_dropout`).
###Code
# Keras comes with built-in loaders for common datasets
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
# shorten dataset for quicker training
X_train = X_train[:25000]
y_train = y_train[:25000]
# Normalising values
mu = X_train.mean(axis=(0,1,2)) # finds mean of R, G and B separately
std = X_train.std(axis=(0,1,2)) # same for std
X_train_norm = (X_train - mu)/std
X_test_norm = (X_test - mu)/std
###Output
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170500096/170498071 [==============================] - 2s 0us/step
###Markdown
**(Homework exercise 2- a)** Run cross-validation by filling in the gaps and collect validation accuracy scores for each dropout rate. **(2 points)**.
###Code
from sklearn.model_selection import KFold
dropout_rates = [0.0, 0.1, 0.25, 0.5, 0.99] # feel free to choose other values to loop over
# you can collect both accuracy and loss if you like,
# but loss is influenced by the regularisation itself, so maybe less informative
val_fold_acc = np.zeros(len(dropout_rates))
val_fold_loss = np.zeros(len(dropout_rates))
for i, dropout_rate in enumerate(dropout_rates):
print(f'Validation loss for dropout rate = {dropout_rate}...')
##### YOUR CODE STARTS #####
# 4-fold cross validation
# Here we are using sklearn Cross Validation Function called KFold
    kf = KFold(n_splits=4, shuffle=True, random_state=111)  # shuffle so that random_state actually takes effect
# Do not change these lines, we initialize empty lists
fold_acc = []
fold_loss = []
for train_index, val_index in kf.split(X_train_norm):
# split data into train_X, train_y and val_X, val_y depending on the fold:
train_X = X_train_norm[train_index]
train_y = y_train[train_index]
val_X = X_train_norm[val_index]
val_y = y_train[val_index]
# train the neural network with dropout_rate
model = define_model_dropout(dropout_rate)
# compile the model
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizers.Adam(lr=0.001), metrics=['accuracy'])
# fit the neural network on training data
# number of epochs is tricky, if you choose too little the performance will be unstable
# if you choose too large, it will take ages to complete...
fit = model.fit(train_X, train_y, batch_size=64, epochs=10)
# calculate accuracy for this fold and store it in fold_acc
accuracy = model.evaluate(val_X, val_y)
fold_acc.append(accuracy[1])
# and loss in fold_loss
fold_loss.append(accuracy[0])
##### YOUR CODE ENDS #####
print(f'Average validation accuracy for {dropout_rate} is {np.mean(fold_acc)}')
val_fold_acc[i] = np.mean(fold_acc)
val_fold_loss[i] = np.mean(fold_loss)
###Output
Validation loss for dropout rate = 0.0...
Epoch 1/10
18750/18750 [==============================] - 9s 503us/step - loss: 1.7019 - accuracy: 0.3763
Epoch 2/10
18750/18750 [==============================] - 7s 372us/step - loss: 1.2686 - accuracy: 0.5430
Epoch 3/10
18750/18750 [==============================] - 7s 371us/step - loss: 1.0194 - accuracy: 0.6401
Epoch 4/10
18750/18750 [==============================] - 7s 368us/step - loss: 0.8599 - accuracy: 0.6955
Epoch 5/10
18750/18750 [==============================] - 7s 371us/step - loss: 0.7239 - accuracy: 0.7445
Epoch 6/10
18750/18750 [==============================] - 7s 371us/step - loss: 0.6006 - accuracy: 0.7881
Epoch 7/10
18750/18750 [==============================] - 7s 379us/step - loss: 0.4844 - accuracy: 0.8304
Epoch 8/10
18750/18750 [==============================] - 8s 402us/step - loss: 0.3821 - accuracy: 0.8645
Epoch 9/10
18750/18750 [==============================] - 8s 433us/step - loss: 0.2844 - accuracy: 0.8971
Epoch 10/10
18750/18750 [==============================] - 8s 441us/step - loss: 0.2221 - accuracy: 0.9230
6250/6250 [==============================] - 2s 356us/step
Epoch 1/10
18750/18750 [==============================] - 10s 511us/step - loss: 1.7197 - accuracy: 0.3650
Epoch 2/10
18750/18750 [==============================] - 9s 463us/step - loss: 1.2760 - accuracy: 0.5431
Epoch 3/10
18750/18750 [==============================] - 9s 466us/step - loss: 1.0546 - accuracy: 0.6255
Epoch 4/10
18750/18750 [==============================] - 9s 465us/step - loss: 0.8898 - accuracy: 0.6843
Epoch 5/10
18750/18750 [==============================] - 9s 466us/step - loss: 0.7698 - accuracy: 0.7283
Epoch 6/10
18750/18750 [==============================] - 9s 465us/step - loss: 0.6523 - accuracy: 0.7724
Epoch 7/10
18750/18750 [==============================] - 9s 468us/step - loss: 0.5485 - accuracy: 0.8063
Epoch 8/10
18750/18750 [==============================] - 9s 465us/step - loss: 0.4507 - accuracy: 0.8428
Epoch 9/10
18750/18750 [==============================] - 9s 468us/step - loss: 0.3437 - accuracy: 0.8821
Epoch 10/10
18750/18750 [==============================] - 9s 464us/step - loss: 0.2809 - accuracy: 0.8998
6250/6250 [==============================] - 2s 377us/step
Epoch 1/10
18750/18750 [==============================] - 10s 551us/step - loss: 1.6994 - accuracy: 0.3752
Epoch 2/10
18750/18750 [==============================] - 9s 506us/step - loss: 1.2422 - accuracy: 0.5487
Epoch 3/10
18750/18750 [==============================] - 9s 500us/step - loss: 1.0128 - accuracy: 0.6385
Epoch 4/10
18750/18750 [==============================] - 9s 500us/step - loss: 0.8397 - accuracy: 0.7018
Epoch 5/10
18750/18750 [==============================] - 9s 505us/step - loss: 0.7113 - accuracy: 0.7468
Epoch 6/10
18750/18750 [==============================] - 9s 499us/step - loss: 0.5986 - accuracy: 0.7882
Epoch 7/10
18750/18750 [==============================] - 9s 475us/step - loss: 0.4708 - accuracy: 0.8345
Epoch 8/10
18750/18750 [==============================] - 9s 472us/step - loss: 0.3729 - accuracy: 0.8671
Epoch 9/10
18750/18750 [==============================] - 9s 473us/step - loss: 0.2842 - accuracy: 0.9018
Epoch 10/10
18750/18750 [==============================] - 9s 474us/step - loss: 0.2117 - accuracy: 0.9260
6250/6250 [==============================] - 2s 395us/step
Epoch 1/10
18750/18750 [==============================] - 10s 554us/step - loss: 1.6911 - accuracy: 0.3809
Epoch 2/10
18750/18750 [==============================] - 9s 502us/step - loss: 1.2476 - accuracy: 0.5490
Epoch 3/10
18750/18750 [==============================] - 9s 497us/step - loss: 1.0158 - accuracy: 0.6380
Epoch 4/10
18750/18750 [==============================] - 9s 494us/step - loss: 0.8345 - accuracy: 0.7042
Epoch 5/10
18750/18750 [==============================] - 9s 482us/step - loss: 0.7063 - accuracy: 0.7511
Epoch 6/10
18750/18750 [==============================] - 9s 474us/step - loss: 0.5946 - accuracy: 0.7916
Epoch 7/10
18750/18750 [==============================] - 9s 475us/step - loss: 0.4716 - accuracy: 0.8340
Epoch 8/10
18750/18750 [==============================] - 9s 474us/step - loss: 0.3616 - accuracy: 0.8723
Epoch 9/10
18750/18750 [==============================] - 9s 471us/step - loss: 0.2798 - accuracy: 0.9008
Epoch 10/10
18750/18750 [==============================] - 9s 474us/step - loss: 0.2125 - accuracy: 0.9260
6250/6250 [==============================] - 2s 395us/step
Average validation accuracy for 0.0 is 0.6983999907970428
Validation loss for dropout rate = 0.1...
Epoch 1/10
18750/18750 [==============================] - 11s 609us/step - loss: 1.7297 - accuracy: 0.3545
Epoch 2/10
18750/18750 [==============================] - 10s 557us/step - loss: 1.3090 - accuracy: 0.5216
Epoch 3/10
18750/18750 [==============================] - 10s 555us/step - loss: 1.1012 - accuracy: 0.6081
Epoch 4/10
18750/18750 [==============================] - 10s 550us/step - loss: 0.9534 - accuracy: 0.6609
Epoch 5/10
18750/18750 [==============================] - 10s 547us/step - loss: 0.8400 - accuracy: 0.7018
Epoch 6/10
18750/18750 [==============================] - 10s 546us/step - loss: 0.7603 - accuracy: 0.7322
Epoch 7/10
18750/18750 [==============================] - 10s 546us/step - loss: 0.6758 - accuracy: 0.7615
Epoch 8/10
18750/18750 [==============================] - 10s 547us/step - loss: 0.6075 - accuracy: 0.7872
Epoch 9/10
18750/18750 [==============================] - 10s 546us/step - loss: 0.5353 - accuracy: 0.8106
Epoch 10/10
18750/18750 [==============================] - 10s 538us/step - loss: 0.4752 - accuracy: 0.8324
6250/6250 [==============================] - 3s 404us/step
Epoch 1/10
18750/18750 [==============================] - 11s 606us/step - loss: 1.7480 - accuracy: 0.3474
Epoch 2/10
18750/18750 [==============================] - 10s 546us/step - loss: 1.3203 - accuracy: 0.5179
Epoch 3/10
18750/18750 [==============================] - 10s 553us/step - loss: 1.1189 - accuracy: 0.6006
Epoch 4/10
18750/18750 [==============================] - 10s 549us/step - loss: 0.9839 - accuracy: 0.6513
Epoch 5/10
18750/18750 [==============================] - 10s 547us/step - loss: 0.8688 - accuracy: 0.6926
Epoch 6/10
18750/18750 [==============================] - 10s 546us/step - loss: 0.7764 - accuracy: 0.7275
Epoch 7/10
18750/18750 [==============================] - 10s 549us/step - loss: 0.6928 - accuracy: 0.7573
Epoch 8/10
18750/18750 [==============================] - 10s 543us/step - loss: 0.6175 - accuracy: 0.7819
Epoch 9/10
18750/18750 [==============================] - 10s 541us/step - loss: 0.5535 - accuracy: 0.8057
Epoch 10/10
18750/18750 [==============================] - 10s 542us/step - loss: 0.4982 - accuracy: 0.8233
6250/6250 [==============================] - 3s 419us/step
Epoch 1/10
18750/18750 [==============================] - 12s 614us/step - loss: 1.7061 - accuracy: 0.3717
Epoch 2/10
18750/18750 [==============================] - 10s 560us/step - loss: 1.2733 - accuracy: 0.5383
Epoch 3/10
18750/18750 [==============================] - 10s 559us/step - loss: 1.0552 - accuracy: 0.6227
Epoch 4/10
18750/18750 [==============================] - 10s 557us/step - loss: 0.9219 - accuracy: 0.6715
Epoch 5/10
18750/18750 [==============================] - 10s 558us/step - loss: 0.8018 - accuracy: 0.7132
Epoch 6/10
18750/18750 [==============================] - 11s 561us/step - loss: 0.7302 - accuracy: 0.7418
Epoch 7/10
18750/18750 [==============================] - 10s 558us/step - loss: 0.6380 - accuracy: 0.7744
Epoch 8/10
18750/18750 [==============================] - 11s 563us/step - loss: 0.5815 - accuracy: 0.7951
Epoch 9/10
18750/18750 [==============================] - 11s 569us/step - loss: 0.4981 - accuracy: 0.8216
Epoch 10/10
18750/18750 [==============================] - 11s 563us/step - loss: 0.4468 - accuracy: 0.8407
6250/6250 [==============================] - 3s 432us/step
Epoch 1/10
18750/18750 [==============================] - 11s 612us/step - loss: 1.7320 - accuracy: 0.3538
Epoch 2/10
18750/18750 [==============================] - 10s 554us/step - loss: 1.3242 - accuracy: 0.5175
Epoch 3/10
18750/18750 [==============================] - 10s 556us/step - loss: 1.1060 - accuracy: 0.6025
Epoch 4/10
18750/18750 [==============================] - 10s 554us/step - loss: 0.9517 - accuracy: 0.6629
Epoch 5/10
18750/18750 [==============================] - 11s 560us/step - loss: 0.8464 - accuracy: 0.6996
Epoch 6/10
18750/18750 [==============================] - 11s 563us/step - loss: 0.7374 - accuracy: 0.7350
Epoch 7/10
18750/18750 [==============================] - 11s 571us/step - loss: 0.6692 - accuracy: 0.7621
Epoch 8/10
18750/18750 [==============================] - 11s 578us/step - loss: 0.5798 - accuracy: 0.7947
Epoch 9/10
18750/18750 [==============================] - 11s 573us/step - loss: 0.5117 - accuracy: 0.8175
Epoch 10/10
18750/18750 [==============================] - 11s 569us/step - loss: 0.4486 - accuracy: 0.8407
6250/6250 [==============================] - 3s 468us/step
Average validation accuracy for 0.1 is 0.723560020327568
Validation loss for dropout rate = 0.25...
Epoch 1/10
18750/18750 [==============================] - 12s 643us/step - loss: 1.7979 - accuracy: 0.3246
Epoch 2/10
18750/18750 [==============================] - 11s 574us/step - loss: 1.4017 - accuracy: 0.4811
Epoch 3/10
18750/18750 [==============================] - 10s 557us/step - loss: 1.2249 - accuracy: 0.5568
Epoch 4/10
18750/18750 [==============================] - 10s 557us/step - loss: 1.0912 - accuracy: 0.6079
Epoch 5/10
18750/18750 [==============================] - 11s 565us/step - loss: 0.9858 - accuracy: 0.6481
Epoch 6/10
18750/18750 [==============================] - 11s 562us/step - loss: 0.9064 - accuracy: 0.6797
Epoch 7/10
18750/18750 [==============================] - 11s 560us/step - loss: 0.8468 - accuracy: 0.6973
Epoch 8/10
18750/18750 [==============================] - 10s 556us/step - loss: 0.7903 - accuracy: 0.7211
Epoch 9/10
18750/18750 [==============================] - 10s 557us/step - loss: 0.7458 - accuracy: 0.7352
Epoch 10/10
18750/18750 [==============================] - 10s 555us/step - loss: 0.6944 - accuracy: 0.7549
6250/6250 [==============================] - 3s 439us/step
Epoch 1/10
18750/18750 [==============================] - 12s 614us/step - loss: 1.8298 - accuracy: 0.3195
Epoch 2/10
18750/18750 [==============================] - 10s 552us/step - loss: 1.4222 - accuracy: 0.4734
Epoch 3/10
18750/18750 [==============================] - 10s 552us/step - loss: 1.2338 - accuracy: 0.5544
Epoch 4/10
18750/18750 [==============================] - 10s 552us/step - loss: 1.0840 - accuracy: 0.6108
Epoch 5/10
18750/18750 [==============================] - 10s 548us/step - loss: 0.9726 - accuracy: 0.6501
Epoch 6/10
18750/18750 [==============================] - 10s 553us/step - loss: 0.8949 - accuracy: 0.6852
Epoch 7/10
18750/18750 [==============================] - 10s 553us/step - loss: 0.8215 - accuracy: 0.7070
Epoch 8/10
18750/18750 [==============================] - 10s 552us/step - loss: 0.7619 - accuracy: 0.7297
Epoch 9/10
18750/18750 [==============================] - 10s 551us/step - loss: 0.7262 - accuracy: 0.7401
Epoch 10/10
18750/18750 [==============================] - 10s 549us/step - loss: 0.6892 - accuracy: 0.7559
6250/6250 [==============================] - 3s 438us/step
Epoch 1/10
18750/18750 [==============================] - 12s 622us/step - loss: 1.8604 - accuracy: 0.3059
Epoch 2/10
18750/18750 [==============================] - 10s 554us/step - loss: 1.4066 - accuracy: 0.4825
Epoch 3/10
18750/18750 [==============================] - 10s 560us/step - loss: 1.2122 - accuracy: 0.5612
Epoch 4/10
18750/18750 [==============================] - 11s 568us/step - loss: 1.0827 - accuracy: 0.6089
Epoch 5/10
18750/18750 [==============================] - 11s 563us/step - loss: 0.9776 - accuracy: 0.6497
Epoch 6/10
18750/18750 [==============================] - 11s 561us/step - loss: 0.9000 - accuracy: 0.6788
Epoch 7/10
18750/18750 [==============================] - 10s 558us/step - loss: 0.8330 - accuracy: 0.7007
Epoch 8/10
18750/18750 [==============================] - 11s 562us/step - loss: 0.7899 - accuracy: 0.7209
Epoch 9/10
18750/18750 [==============================] - 11s 561us/step - loss: 0.7300 - accuracy: 0.7375
Epoch 10/10
18750/18750 [==============================] - 11s 564us/step - loss: 0.6963 - accuracy: 0.7542
6250/6250 [==============================] - 3s 448us/step
Epoch 1/10
18750/18750 [==============================] - 12s 625us/step - loss: 1.8639 - accuracy: 0.3044
Epoch 2/10
18750/18750 [==============================] - 11s 563us/step - loss: 1.4443 - accuracy: 0.4730
Epoch 3/10
18750/18750 [==============================] - 11s 565us/step - loss: 1.2360 - accuracy: 0.5554
Epoch 4/10
18750/18750 [==============================] - 11s 562us/step - loss: 1.0983 - accuracy: 0.6080
Epoch 5/10
18750/18750 [==============================] - 11s 562us/step - loss: 0.9845 - accuracy: 0.6501
Epoch 6/10
18750/18750 [==============================] - 11s 565us/step - loss: 0.9089 - accuracy: 0.6768
Epoch 7/10
18750/18750 [==============================] - 11s 562us/step - loss: 0.8507 - accuracy: 0.6962
Epoch 8/10
18750/18750 [==============================] - 11s 564us/step - loss: 0.7875 - accuracy: 0.7214
Epoch 9/10
18750/18750 [==============================] - 11s 568us/step - loss: 0.7495 - accuracy: 0.7342
Epoch 10/10
18750/18750 [==============================] - 11s 570us/step - loss: 0.7189 - accuracy: 0.7443
6250/6250 [==============================] - 3s 464us/step
Average validation accuracy for 0.25 is 0.7212000042200089
Validation loss for dropout rate = 0.5...
Epoch 1/10
18750/18750 [==============================] - 12s 647us/step - loss: 2.0442 - accuracy: 0.2105
Epoch 2/10
18750/18750 [==============================] - 11s 576us/step - loss: 1.6744 - accuracy: 0.3634
Epoch 3/10
18750/18750 [==============================] - 11s 570us/step - loss: 1.4998 - accuracy: 0.4455
Epoch 4/10
18750/18750 [==============================] - 11s 575us/step - loss: 1.3892 - accuracy: 0.4843
Epoch 5/10
18750/18750 [==============================] - 11s 569us/step - loss: 1.2985 - accuracy: 0.5251
Epoch 6/10
18750/18750 [==============================] - 11s 572us/step - loss: 1.2329 - accuracy: 0.5507
Epoch 7/10
18750/18750 [==============================] - 11s 581us/step - loss: 1.1762 - accuracy: 0.5734
Epoch 8/10
18750/18750 [==============================] - 11s 572us/step - loss: 1.1456 - accuracy: 0.5875
Epoch 9/10
18750/18750 [==============================] - 11s 571us/step - loss: 1.0976 - accuracy: 0.6037
Epoch 10/10
18750/18750 [==============================] - 11s 569us/step - loss: 1.0718 - accuracy: 0.6160
6250/6250 [==============================] - 3s 473us/step
Epoch 1/10
18750/18750 [==============================] - 12s 658us/step - loss: 2.0115 - accuracy: 0.2417
Epoch 2/10
18750/18750 [==============================] - 11s 579us/step - loss: 1.6346 - accuracy: 0.3876
Epoch 3/10
18750/18750 [==============================] - 11s 574us/step - loss: 1.4635 - accuracy: 0.4591
Epoch 4/10
18750/18750 [==============================] - 11s 576us/step - loss: 1.3526 - accuracy: 0.5047
Epoch 5/10
18750/18750 [==============================] - 11s 577us/step - loss: 1.2723 - accuracy: 0.5377
Epoch 6/10
18750/18750 [==============================] - 11s 583us/step - loss: 1.2122 - accuracy: 0.5597
Epoch 7/10
18750/18750 [==============================] - 11s 580us/step - loss: 1.1589 - accuracy: 0.5809
Epoch 8/10
18750/18750 [==============================] - 11s 573us/step - loss: 1.1249 - accuracy: 0.5995
Epoch 9/10
18750/18750 [==============================] - 11s 576us/step - loss: 1.0778 - accuracy: 0.6111
Epoch 10/10
18750/18750 [==============================] - 11s 576us/step - loss: 1.0455 - accuracy: 0.6223
6250/6250 [==============================] - 3s 485us/step
Epoch 1/10
18750/18750 [==============================] - 12s 648us/step - loss: 2.0032 - accuracy: 0.2448
Epoch 2/10
18750/18750 [==============================] - 11s 575us/step - loss: 1.6022 - accuracy: 0.3998
Epoch 3/10
18750/18750 [==============================] - 11s 575us/step - loss: 1.4397 - accuracy: 0.4647
Epoch 4/10
18750/18750 [==============================] - 11s 574us/step - loss: 1.3381 - accuracy: 0.5068
Epoch 5/10
18750/18750 [==============================] - 11s 580us/step - loss: 1.2642 - accuracy: 0.5347
Epoch 6/10
18750/18750 [==============================] - 11s 584us/step - loss: 1.2030 - accuracy: 0.5601
Epoch 7/10
18750/18750 [==============================] - 11s 584us/step - loss: 1.1484 - accuracy: 0.5859
Epoch 8/10
18750/18750 [==============================] - 11s 582us/step - loss: 1.1128 - accuracy: 0.5995
Epoch 9/10
18750/18750 [==============================] - 11s 581us/step - loss: 1.0706 - accuracy: 0.6133
Epoch 10/10
18750/18750 [==============================] - 11s 575us/step - loss: 1.0426 - accuracy: 0.6285
6250/6250 [==============================] - 3s 487us/step
Epoch 1/10
18750/18750 [==============================] - 12s 652us/step - loss: 2.0163 - accuracy: 0.2381
Epoch 2/10
18750/18750 [==============================] - 11s 577us/step - loss: 1.6171 - accuracy: 0.3884
Epoch 3/10
18750/18750 [==============================] - 11s 577us/step - loss: 1.4306 - accuracy: 0.4666
Epoch 4/10
18750/18750 [==============================] - 11s 573us/step - loss: 1.3359 - accuracy: 0.5070
Epoch 5/10
18750/18750 [==============================] - 11s 575us/step - loss: 1.2587 - accuracy: 0.5385
Epoch 6/10
18750/18750 [==============================] - 11s 579us/step - loss: 1.1965 - accuracy: 0.5670
Epoch 7/10
18750/18750 [==============================] - 11s 580us/step - loss: 1.1490 - accuracy: 0.5870
Epoch 8/10
18750/18750 [==============================] - 11s 578us/step - loss: 1.0976 - accuracy: 0.6030
Epoch 9/10
18750/18750 [==============================] - 11s 580us/step - loss: 1.0558 - accuracy: 0.6164
Epoch 10/10
18750/18750 [==============================] - 11s 576us/step - loss: 1.0286 - accuracy: 0.6314
6250/6250 [==============================] - 3s 492us/step
Average validation accuracy for 0.5 is 0.6349200010299683
Validation loss for dropout rate = 0.99...
Epoch 1/10
18750/18750 [==============================] - 12s 649us/step - loss: 13.5070 - accuracy: 0.1010
Epoch 2/10
18750/18750 [==============================] - 11s 572us/step - loss: 2.3145 - accuracy: 0.1031
Epoch 3/10
18750/18750 [==============================] - 11s 574us/step - loss: 2.3074 - accuracy: 0.1021
Epoch 4/10
18750/18750 [==============================] - 11s 574us/step - loss: 2.3054 - accuracy: 0.1014
Epoch 5/10
18750/18750 [==============================] - 11s 572us/step - loss: 2.3057 - accuracy: 0.1036
Epoch 6/10
18750/18750 [==============================] - 11s 572us/step - loss: 2.3039 - accuracy: 0.1020
Epoch 7/10
18750/18750 [==============================] - 11s 581us/step - loss: 2.3064 - accuracy: 0.1025
Epoch 8/10
18750/18750 [==============================] - 11s 579us/step - loss: 2.3059 - accuracy: 0.1037
Epoch 9/10
18750/18750 [==============================] - 11s 580us/step - loss: 2.3031 - accuracy: 0.1037
Epoch 10/10
18750/18750 [==============================] - 11s 572us/step - loss: 2.3034 - accuracy: 0.1037
6250/6250 [==============================] - 3s 496us/step
Epoch 1/10
18750/18750 [==============================] - 12s 653us/step - loss: 14.0818 - accuracy: 0.0937
Epoch 2/10
18750/18750 [==============================] - 11s 579us/step - loss: 2.3215 - accuracy: 0.0972
Epoch 3/10
18750/18750 [==============================] - 11s 578us/step - loss: 2.3095 - accuracy: 0.0979
Epoch 4/10
18750/18750 [==============================] - 11s 580us/step - loss: 2.3058 - accuracy: 0.0984
Epoch 5/10
18750/18750 [==============================] - 11s 582us/step - loss: 2.3058 - accuracy: 0.1003
Epoch 6/10
18750/18750 [==============================] - 11s 579us/step - loss: 2.3038 - accuracy: 0.1009
Epoch 7/10
18750/18750 [==============================] - 11s 583us/step - loss: 2.3038 - accuracy: 0.1003
Epoch 8/10
18750/18750 [==============================] - 11s 584us/step - loss: 2.3028 - accuracy: 0.0976
Epoch 9/10
18750/18750 [==============================] - 11s 578us/step - loss: 2.3040 - accuracy: 0.1019
Epoch 10/10
18750/18750 [==============================] - 11s 585us/step - loss: 2.3071 - accuracy: 0.1003
6250/6250 [==============================] - 3s 509us/step
Epoch 1/10
18750/18750 [==============================] - 12s 655us/step - loss: 18.5603 - accuracy: 0.0988
Epoch 2/10
18750/18750 [==============================] - 11s 575us/step - loss: 2.3216 - accuracy: 0.0975
Epoch 3/10
18750/18750 [==============================] - 11s 573us/step - loss: 2.3135 - accuracy: 0.0981
Epoch 4/10
18750/18750 [==============================] - 11s 579us/step - loss: 2.3068 - accuracy: 0.0996
Epoch 5/10
18750/18750 [==============================] - 11s 594us/step - loss: 2.3049 - accuracy: 0.1004
Epoch 6/10
18750/18750 [==============================] - 11s 586us/step - loss: 2.3037 - accuracy: 0.0991
Epoch 7/10
18750/18750 [==============================] - 11s 586us/step - loss: 2.3041 - accuracy: 0.1007
Epoch 8/10
18750/18750 [==============================] - 11s 597us/step - loss: 2.3036 - accuracy: 0.1008
Epoch 9/10
18750/18750 [==============================] - 11s 598us/step - loss: 2.3027 - accuracy: 0.1002
Epoch 10/10
18750/18750 [==============================] - 11s 597us/step - loss: 2.3026 - accuracy: 0.0992
6250/6250 [==============================] - 3s 545us/step
Epoch 1/10
18750/18750 [==============================] - 13s 685us/step - loss: 10.3403 - accuracy: 0.1005
Epoch 2/10
18750/18750 [==============================] - 11s 605us/step - loss: 2.3149 - accuracy: 0.0974
Epoch 3/10
18750/18750 [==============================] - 12s 614us/step - loss: 2.3112 - accuracy: 0.1015
Epoch 4/10
18750/18750 [==============================] - 11s 613us/step - loss: 2.3062 - accuracy: 0.0997
Epoch 5/10
18750/18750 [==============================] - 11s 606us/step - loss: 2.3041 - accuracy: 0.0974
Epoch 6/10
18750/18750 [==============================] - 11s 598us/step - loss: 2.3054 - accuracy: 0.1027
Epoch 7/10
18750/18750 [==============================] - 11s 600us/step - loss: 2.3060 - accuracy: 0.1025
Epoch 8/10
18750/18750 [==============================] - 11s 597us/step - loss: 2.3052 - accuracy: 0.1027
Epoch 9/10
18750/18750 [==============================] - 11s 602us/step - loss: 2.3035 - accuracy: 0.0999
Epoch 10/10
18750/18750 [==============================] - 11s 598us/step - loss: 2.3034 - accuracy: 0.0990
6250/6250 [==============================] - 3s 552us/step
Average validation accuracy for 0.99 is 0.0948799978941679
###Markdown
**(Homework exercise 2- b)** Create a plot (using standard matplotlib) that shows validation accuracy and loss for different dropout rates that you have tried, report the best one. **(1 point)**.
###Code
##### YOUR CODE STARTS #####
# Plot for accuracies
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.plot(dropout_rates, val_fold_acc, label="Validation Accuracy")
plt.xlabel('Dropout Rate')
plt.ylabel('Accuracy')
plt.legend()
plt.title('Validation Accuracy')
# Plot for losses
plt.subplot(1, 2, 2)
plt.plot(dropout_rates, val_fold_loss, label="Validation Loss")
plt.xlabel('Dropout Rate')
plt.ylabel('Loss')
plt.legend()
plt.title('Validation Loss')
plt.show()
##### YOUR CODE ENDS #####
###Output
_____no_output_____
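###Markdown
A minimal sketch, assuming the `dropout_rates`, `val_fold_acc` and `val_fold_loss` arrays filled in above, of reporting the best rate programmatically rather than reading it off the plots:
###Code
# Pick the dropout rate with the highest mean validation accuracy across the folds.
best_idx = int(np.argmax(val_fold_acc))
print(f'Best dropout rate by validation accuracy: {dropout_rates[best_idx]} '
      f'(accuracy {val_fold_acc[best_idx]:.4f}, loss {val_fold_loss[best_idx]:.4f})')
###Output
_____no_output_____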
###Markdown
According to the above graphs, the best dropout rate seems to be 0.25. **(Homework exercise 2- c)** Re-train the network using the dropout rate reported in **(b)**. Visualise performance curves and interpret the results. (If results did not improve, no need to re-run the process again, just comment on the results.) **(1 point)**.
###Code
##### YOUR CODE STARTS #####
# Define the model with identified dropout rate and compile it
# train the neural network with dropout_rate
model = define_model_dropout(dropout_rate=0.25)
# compile the model
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizers.Adam(lr=0.001), metrics=['accuracy'])
# Fit the model; return history object
history = model.fit(X_train_norm, y_train, batch_size=64, epochs=10, validation_split=0.2)
##### YOUR CODE ENDS #####
##### YOUR CODE STARTS #####
# plot the progress curves here
def plot_curves(history):
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend(['Training', 'Validation'])
plt.title('Loss')
plt.subplot(1, 2, 2)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(['Training', 'Validation'])
plt.title('Accuracy')
##### YOUR CODE ENDS #####
##### YOUR CODE STARTS #####
# evaluate the model here
plot_curves(history)
##### YOUR CODE ENDS #####
###Output
_____no_output_____
###Markdown
Your insightful interpretation of the results goes here: ... Homework exercise 3 (3 points): applying more sophisticated augmentation pipelines Check https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator and add more interesting transformation into the pipeline we have developed in the class. Train your network again, and interpret the results. First of all some setup.
###Code
# Keras comes with built-in loaders for common datasets
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
# shorten dataset for quicker training
X_train = X_train[:25000]
y_train = y_train[:25000]
###Output
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170500096/170498071 [==============================] - 3s 0us/step
###Markdown
**(Homework exercise 3- a)** Add at least 2-3 more different transformations described at https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator. Augment CIFAR10 training images. Visualise a few random augmentated images (as we have done for the simple augmentation pipeline in the class). This time, make 5 by 5 grid instead of 3 by 3. Briefly explain your choice of augmentation pipeline (i.e. why these augmentation you added will help?). **(1 point)**.
###Code
from keras.preprocessing.image import ImageDataGenerator
##### YOUR CODE STARTS #####
# Create your own data augmentation pipeline:
datagen = ImageDataGenerator(rotation_range=90, # randomly rotate an image by up to 90 degrees in either direction
                             horizontal_flip=True, # horizontally flip random images,
height_shift_range=0.2, # vertically shift an image by a fraction of 0% - 20% (of original height)
#width_shift_range=0.3,
featurewise_std_normalization=True,
#zca_whitening=True,
vertical_flip=True) # vertically flip random images
datagen.fit(X_train)
##### YOUR CODE ENDS #####
##### YOUR CODE STARTS #####
plt.rcParams['figure.figsize'] = (8.0, 8.0) # set default size of plots
# Configure batch size and retrieve one batch of images
for X_batch, y_batch in datagen.flow(X_train, y_train, batch_size=25):
# Show 25 images
for i in range(25):
plt.subplot(5, 5, 1 + i)
plt.imshow(X_batch[i].astype('uint8'))
plt.axis('off')
# show the plot
plt.show()
break
##### YOUR CODE ENDS #####
###Output
_____no_output_____
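###Markdown
As a quick side check (a sketch, assuming the `datagen`, `X_train` and `plt` objects defined above), a single image can be previewed next to one randomly transformed copy via `ImageDataGenerator.random_transform`:
###Code
# Show one original CIFAR10 image and one random augmentation of it side by side.
img = X_train[0].astype('float32')
augmented = datagen.random_transform(img)  # applies the random flips/shifts/rotations, not the featurewise normalisation
plt.figure(figsize=(4, 2))
plt.subplot(1, 2, 1)
plt.imshow(X_train[0])
plt.axis('off')
plt.title('original')
plt.subplot(1, 2, 2)
plt.imshow(augmented.astype('uint8'))
plt.axis('off')
plt.title('augmented')
plt.show()
###Output
_____no_output_____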
###Markdown
**(Homework exercise 3- b)** First, split the training data into train and validation sets using `train_test_split` function from `sklearn` (use 10% for validation). Then normalise each of the three sets (train, val and test) using mean and standard deviations computed on train images (for R, G and B separately). Finally, retrain the model using this new augmented training set. Use non-augmented normalised validation set for validation of the model while training. **(1.5 points)**.
###Code
from sklearn.model_selection import train_test_split
##### YOUR CODE STARTS #####
# Split the training data further into train and val
train_x, val_x, train_y, val_y = train_test_split(X_train, y_train, test_size = 0.1, random_state=42)
mu = train_x.mean(axis=(0,1,2)) # finds mean of R, G and B separately
std = train_x.std(axis=(0,1,2)) # same for std
X_train_norm = (train_x - mu)/std
val_x_norm = (val_x - mu)/std
X_test_norm = (X_test - mu)/std
##### YOUR CODE ENDS #####
# Assign augmentation schema to X_train_norm
datagen.fit(X_train_norm)
##### YOUR CODE STARTS #####
# Create a model
# here you can use either model with dropout or L2 regularisation defined above
model = define_model(0.00001)
# Compile the model as before (code is identical)
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizers.Adam(lr=0.001), metrics=['accuracy'])
# remember to use .fit_generator() to train the model
# use batch_size 64 as in the class
history = model.fit_generator(datagen.flow(X_train_norm, train_y, batch_size=64),
steps_per_epoch=X_train_norm.shape[0]//64, # number of steps per epochs, needs to be specified as we do augmentation
epochs=25,
verbose=1,
validation_data=(val_x_norm, val_y)
)
##### YOUR CODE ENDS #####
###Output
Epoch 1/25
351/351 [==============================] - 26s 73ms/step - loss: 2.0009 - accuracy: 0.2529 - val_loss: 1.8855 - val_accuracy: 0.3028
Epoch 2/25
351/351 [==============================] - 20s 56ms/step - loss: 1.8041 - accuracy: 0.3309 - val_loss: 1.7328 - val_accuracy: 0.3604
Epoch 3/25
351/351 [==============================] - 20s 57ms/step - loss: 1.6978 - accuracy: 0.3771 - val_loss: 1.6477 - val_accuracy: 0.4224
Epoch 4/25
351/351 [==============================] - 21s 59ms/step - loss: 1.6302 - accuracy: 0.4044 - val_loss: 1.5751 - val_accuracy: 0.4320
Epoch 5/25
351/351 [==============================] - 21s 59ms/step - loss: 1.5676 - accuracy: 0.4362 - val_loss: 1.5787 - val_accuracy: 0.4244
Epoch 6/25
351/351 [==============================] - 21s 59ms/step - loss: 1.5072 - accuracy: 0.4558 - val_loss: 1.5605 - val_accuracy: 0.4580
Epoch 7/25
351/351 [==============================] - 21s 59ms/step - loss: 1.4580 - accuracy: 0.4758 - val_loss: 1.4580 - val_accuracy: 0.4988
Epoch 8/25
351/351 [==============================] - 20s 58ms/step - loss: 1.4212 - accuracy: 0.4911 - val_loss: 1.4504 - val_accuracy: 0.5088
Epoch 9/25
351/351 [==============================] - 20s 58ms/step - loss: 1.3744 - accuracy: 0.5093 - val_loss: 1.4932 - val_accuracy: 0.4940
Epoch 10/25
351/351 [==============================] - 20s 58ms/step - loss: 1.3425 - accuracy: 0.5191 - val_loss: 1.3307 - val_accuracy: 0.5304
Epoch 11/25
351/351 [==============================] - 20s 58ms/step - loss: 1.3095 - accuracy: 0.5315 - val_loss: 1.4134 - val_accuracy: 0.5164
Epoch 12/25
351/351 [==============================] - 20s 58ms/step - loss: 1.2881 - accuracy: 0.5410 - val_loss: 1.4779 - val_accuracy: 0.5320
Epoch 13/25
351/351 [==============================] - 20s 57ms/step - loss: 1.2630 - accuracy: 0.5537 - val_loss: 1.3905 - val_accuracy: 0.5344
Epoch 14/25
351/351 [==============================] - 20s 57ms/step - loss: 1.2468 - accuracy: 0.5540 - val_loss: 1.2779 - val_accuracy: 0.5536
Epoch 15/25
351/351 [==============================] - 20s 57ms/step - loss: 1.2252 - accuracy: 0.5673 - val_loss: 1.3587 - val_accuracy: 0.5412
Epoch 16/25
351/351 [==============================] - 20s 58ms/step - loss: 1.2039 - accuracy: 0.5749 - val_loss: 1.2723 - val_accuracy: 0.5664
Epoch 17/25
351/351 [==============================] - 20s 58ms/step - loss: 1.2023 - accuracy: 0.5747 - val_loss: 1.3071 - val_accuracy: 0.5516
Epoch 18/25
351/351 [==============================] - 20s 58ms/step - loss: 1.1790 - accuracy: 0.5831 - val_loss: 1.4102 - val_accuracy: 0.5376
Epoch 19/25
351/351 [==============================] - 20s 58ms/step - loss: 1.1680 - accuracy: 0.5885 - val_loss: 1.3104 - val_accuracy: 0.5556
Epoch 20/25
351/351 [==============================] - 20s 58ms/step - loss: 1.1548 - accuracy: 0.5958 - val_loss: 1.2815 - val_accuracy: 0.5652
Epoch 21/25
351/351 [==============================] - 20s 57ms/step - loss: 1.1432 - accuracy: 0.6006 - val_loss: 1.3489 - val_accuracy: 0.5464
Epoch 22/25
351/351 [==============================] - 20s 58ms/step - loss: 1.1281 - accuracy: 0.6047 - val_loss: 1.4438 - val_accuracy: 0.5424
Epoch 23/25
351/351 [==============================] - 20s 58ms/step - loss: 1.1264 - accuracy: 0.6059 - val_loss: 1.2790 - val_accuracy: 0.5728
Epoch 24/25
351/351 [==============================] - 20s 58ms/step - loss: 1.1139 - accuracy: 0.6110 - val_loss: 1.2867 - val_accuracy: 0.5748
Epoch 25/25
351/351 [==============================] - 20s 58ms/step - loss: 1.0989 - accuracy: 0.6144 - val_loss: 1.3718 - val_accuracy: 0.5588
###Markdown
**(Homework exercise 3- c)** Plot the performance curves (loss and accuracy), evaluate your model on the non-augmented normalised test set and interpret the results. Did the performance improve? Why? Why not? **(0.5 points)**.
###Code
##### YOUR CODE STARTS #####
plot_curves(history)
##### YOUR CODE ENDS #####
##### YOUR CODE STARTS #####
# Create a model
model = define_model(0.00001)
# Compile the model as before (code is identical)
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizers.Adam(lr=0.001), metrics=['accuracy'])
history = model.fit(X_train_norm, train_y,
batch_size = 64,
epochs=25,
verbose=1,
validation_data=(val_x_norm, val_y)
)
plot_curves(history)
##### YOUR CODE ENDS #####
###Output
Train on 22500 samples, validate on 2500 samples
Epoch 1/25
22500/22500 [==============================] - 10s 455us/step - loss: 1.6077 - accuracy: 0.4103 - val_loss: 1.6102 - val_accuracy: 0.4660
Epoch 2/25
22500/22500 [==============================] - 9s 397us/step - loss: 1.1958 - accuracy: 0.5750 - val_loss: 1.1037 - val_accuracy: 0.6212
Epoch 3/25
22500/22500 [==============================] - 8s 373us/step - loss: 0.9554 - accuracy: 0.6659 - val_loss: 0.9467 - val_accuracy: 0.6708
Epoch 4/25
22500/22500 [==============================] - 8s 373us/step - loss: 0.7940 - accuracy: 0.7237 - val_loss: 0.9220 - val_accuracy: 0.6836
Epoch 5/25
22500/22500 [==============================] - 8s 372us/step - loss: 0.6798 - accuracy: 0.7638 - val_loss: 0.8892 - val_accuracy: 0.7028
Epoch 6/25
22500/22500 [==============================] - 8s 374us/step - loss: 0.5577 - accuracy: 0.8084 - val_loss: 0.8546 - val_accuracy: 0.7148
Epoch 7/25
22500/22500 [==============================] - 8s 372us/step - loss: 0.4523 - accuracy: 0.8472 - val_loss: 0.9748 - val_accuracy: 0.7012
Epoch 8/25
22500/22500 [==============================] - 8s 370us/step - loss: 0.3495 - accuracy: 0.8862 - val_loss: 0.9725 - val_accuracy: 0.7160
Epoch 9/25
22500/22500 [==============================] - 8s 370us/step - loss: 0.2833 - accuracy: 0.9055 - val_loss: 1.0023 - val_accuracy: 0.7152
Epoch 10/25
22500/22500 [==============================] - 8s 373us/step - loss: 0.2196 - accuracy: 0.9280 - val_loss: 1.2139 - val_accuracy: 0.7116
Epoch 11/25
22500/22500 [==============================] - 8s 371us/step - loss: 0.1899 - accuracy: 0.9392 - val_loss: 1.3978 - val_accuracy: 0.6956
Epoch 12/25
22500/22500 [==============================] - 8s 372us/step - loss: 0.1662 - accuracy: 0.9474 - val_loss: 1.2679 - val_accuracy: 0.7216
Epoch 13/25
22500/22500 [==============================] - 8s 373us/step - loss: 0.1375 - accuracy: 0.9599 - val_loss: 1.3752 - val_accuracy: 0.7152
Epoch 14/25
22500/22500 [==============================] - 8s 372us/step - loss: 0.1460 - accuracy: 0.9571 - val_loss: 1.5059 - val_accuracy: 0.7008
Epoch 15/25
22500/22500 [==============================] - 8s 376us/step - loss: 0.1364 - accuracy: 0.9604 - val_loss: 1.5323 - val_accuracy: 0.7028
Epoch 16/25
22500/22500 [==============================] - 8s 371us/step - loss: 0.1320 - accuracy: 0.9609 - val_loss: 1.7005 - val_accuracy: 0.7044
Epoch 17/25
22500/22500 [==============================] - 8s 374us/step - loss: 0.1161 - accuracy: 0.9690 - val_loss: 1.6407 - val_accuracy: 0.7144
Epoch 18/25
22500/22500 [==============================] - 8s 372us/step - loss: 0.1150 - accuracy: 0.9692 - val_loss: 1.8001 - val_accuracy: 0.7080
Epoch 19/25
22500/22500 [==============================] - 8s 372us/step - loss: 0.1310 - accuracy: 0.9636 - val_loss: 1.7479 - val_accuracy: 0.7128
Epoch 20/25
22500/22500 [==============================] - 8s 373us/step - loss: 0.1210 - accuracy: 0.9691 - val_loss: 1.5993 - val_accuracy: 0.7164
Epoch 21/25
22500/22500 [==============================] - 8s 372us/step - loss: 0.1247 - accuracy: 0.9672 - val_loss: 1.7898 - val_accuracy: 0.7140
Epoch 22/25
22500/22500 [==============================] - 8s 373us/step - loss: 0.1114 - accuracy: 0.9720 - val_loss: 1.8823 - val_accuracy: 0.7112
Epoch 23/25
22500/22500 [==============================] - 8s 374us/step - loss: 0.1146 - accuracy: 0.9713 - val_loss: 1.8131 - val_accuracy: 0.6972
Epoch 24/25
22500/22500 [==============================] - 8s 372us/step - loss: 0.1241 - accuracy: 0.9692 - val_loss: 1.8203 - val_accuracy: 0.7028
Epoch 25/25
22500/22500 [==============================] - 8s 372us/step - loss: 0.1030 - accuracy: 0.9747 - val_loss: 2.1646 - val_accuracy: 0.6928
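###Markdown
The exercise also asks for an evaluation on the non-augmented, normalised test set; a minimal sketch, assuming the most recently trained `model` and the `X_test_norm`/`y_test` arrays from above:
###Code
# Score the trained model on the held-out, non-augmented, normalised test set.
test_loss, test_acc = model.evaluate(X_test_norm, y_test, verbose=0)
print(f'Test loss: {test_loss:.4f}  test accuracy: {test_acc:.4f}')
###Output
_____no_output_____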
###Markdown
Textual answer to (**c**) goes here: ... Bonus exercises*(NB, these are optional exercises!)* Bonus exercise 1 (up to 4 bonus points depending on presentation): Experimentally verify if CutMix augmentation helps to improve the test score on CIFAR10 (not clear, as images are very tiny). Link to the CutMix paper: https://arxiv.org/abs/1905.04899. Show couple of examples of CutMix augmented images and your implementation along with performance curves and scores. Compare the results of CutMix augmented model and the model without data augmentation.
###Code
##### YOUR CODE STARTS #####
##### YOUR CODE ENDS #####
#Comparison part
##### YOUR CODE STARTS #####
##### YOUR CODE ENDS #####
###Output
_____no_output_____
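###Markdown
As a starting point for this bonus, a minimal NumPy sketch of the CutMix idea from the paper, assuming images shaped `(N, H, W, C)` and one-hot labels (training on the mixed labels would need `categorical_crossentropy` with one-hot targets, e.g. via `keras.utils.to_categorical`, rather than the sparse loss used earlier); the training and comparison steps themselves are left out:
###Code
# Sketch of CutMix: paste a random patch from a shuffled copy of the batch and mix the labels by area.
import numpy as np

def rand_bbox(height, width, lam):
    # Sample a box whose area is roughly (1 - lam) of the image.
    cut_ratio = np.sqrt(1.0 - lam)
    cut_h, cut_w = int(height * cut_ratio), int(width * cut_ratio)
    cy, cx = np.random.randint(height), np.random.randint(width)
    y1, y2 = np.clip(cy - cut_h // 2, 0, height), np.clip(cy + cut_h // 2, 0, height)
    x1, x2 = np.clip(cx - cut_w // 2, 0, width), np.clip(cx + cut_w // 2, 0, width)
    return y1, y2, x1, x2

def cutmix_batch(images, labels_onehot, alpha=1.0):
    lam = np.random.beta(alpha, alpha)
    perm = np.random.permutation(len(images))
    y1, y2, x1, x2 = rand_bbox(images.shape[1], images.shape[2], lam)
    mixed = images.copy()
    mixed[:, y1:y2, x1:x2, :] = images[perm, y1:y2, x1:x2, :]
    # Correct lambda to the exact fraction of pixels kept from the original image.
    lam = 1.0 - (y2 - y1) * (x2 - x1) / float(images.shape[1] * images.shape[2])
    mixed_labels = lam * labels_onehot + (1.0 - lam) * labels_onehot[perm]
    return mixed, mixed_labels
###Output
_____no_output_____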
###Markdown
Your explanation: ... Bonus exercise 2 (2 points): Implement basic linear regression and Ridge regression using the closed form solutions (https://stats.stackexchange.com/questions/69205/how-to-derive-the-ridge-regression-solution). Run your implementations on the following synthetic dataset. Compare model coefficients to coefficients produced by `sklearn` functions `LinearRegression` and `Ridge`. Speculate about the difference in coefficients that you observe.
###Code
# here we generate a synthetic dataset:
from sklearn.datasets import make_regression
X, y, coefficients = make_regression(
n_samples=50,
n_features=4,
n_informative=1,
n_targets=1,
noise=5,
coef=True,
random_state=1
)
X.shape
###Output
_____no_output_____
###Markdown
Implement closed form solutions for both baseline linear regression and ridge regression (https://stats.stackexchange.com/questions/69205/how-to-derive-the-ridge-regression-solution) on the synthetic dataset:
###Code
n, m = X.shape
I = np.identity(m)
lambda_ = 1
##### YOUR CODE STARTS #####
# Implement baseline linear regression (closed form solution)
lr_coef = ...
lr_intercept = ...
# Implement Ridge regression (closed form solution)
lr_ridge_coef = ...
lr_rigde_intercept = ...
##### YOUR CODE ENDS #####
from sklearn.linear_model import LinearRegression
# Initialise Linear Regression model from sklearn:
lr = LinearRegression()
lr.fit(X, y)
print(lr.coef_)
print(lr.intercept_)
from sklearn.linear_model import Ridge
lr_ridge = Ridge(alpha=lambda_, solver='cholesky')
lr_ridge.fit(X, y)
print(lr_ridge.coef_)
print(lr_ridge.intercept_)
###Output
_____no_output_____
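###Markdown
A minimal sketch of the closed-form estimates, assuming the `X`, `y` and `lambda_` defined above and an explicit column of ones for the intercept (the penalty is not applied to the intercept, which is also how sklearn's `Ridge` treats it):
###Code
# Closed-form OLS: beta = (X'X)^{-1} X'y ;  ridge: beta = (X'X + lambda*I)^{-1} X'y
Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend an intercept column
beta_ols = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)
penalty = lambda_ * np.eye(Xb.shape[1])
penalty[0, 0] = 0.0  # leave the intercept unpenalised
beta_ridge = np.linalg.solve(Xb.T @ Xb + penalty, Xb.T @ y)
print('OLS   intercept and coefficients:', np.round(beta_ols, 4))
print('Ridge intercept and coefficients:', np.round(beta_ridge, 4))
###Output
_____no_output_____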
###Markdown
Compare coefficients you obtained using closed form solution and sklearn implementations. Comment on the difference you observe.
###Code
##### YOUR CODE STARTS #####
print(f"Manually calculated W1 = {}, W0 = {}")
print(f"Sklearn implementation W1 = {}, W0 = {}")
##### YOUR CODE ENDS #####
###Output
_____no_output_____
###Markdown
Your textual answer explaining the difference between coefficients goes here: ... Comments (optional feedback to the course instructors)Here, please, leave your comments regarding the homework, possibly answering the following questions: * how much time did you send on this homework?* was it too hard/easy for you?* what would you suggest to add or remove?* anything else you would like to tell us
###Code
###Output
_____no_output_____ |
Album_Sales_Prediction/Code/Modeling.ipynb | ###Markdown
Imports
###Code
from __future__ import print_function
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pickle
from itertools import cycle
from IPython.display import Image
from sklearn.cross_validation import train_test_split
from sklearn import svm
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
%matplotlib inline
from __future__ import division
from patsy import dmatrices
from sklearn import linear_model as lm
from sklearn.linear_model import LogisticRegression
from sklearn import preprocessing
from sklearn import cross_validation
from sklearn import metrics
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.metrics import roc_curve, auc
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
import matplotlib.cm as cm
from sklearn import naive_bayes
from sklearn.metrics import accuracy_score, classification_report
# from catboost import CatBoostRegressor, CatBoostClassifier, Pool
###Output
/Users/cyrusrustomji/anaconda3/lib/python3.6/site-packages/sklearn/cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
###Markdown
Modeling Open and sort dataframe
###Code
with open('pipeline.pkl', 'rb') as handle:
data = pickle.load(handle)
data.head()
df = data.drop(columns=['popularity_log','followers_log'])
df = df.sort_values(by=['followers'],ascending=False)
df.head()
df.head()
###Output
_____no_output_____
###Markdown
x = df['popularity']
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x.values)
rating_df = pd.DataFrame(x_scaled)
rating_df
###Code
scaled_features = df.copy()
col_names = ['popularity', 'followers']
features = scaled_features[col_names]
scaler = StandardScaler().fit(features.values)
features = scaler.transform(features.values)
scaled_features[col_names] = features
scaled_features.head()
###Output
/Users/cyrusrustomji/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py:475: DataConversionWarning: Data with input dtype int64 was converted to float64 by StandardScaler.
warnings.warn(msg, DataConversionWarning)
###Markdown
Use the train/test info below for all models
###Code
y,X=dmatrices('platinum ~ followers + popularity',data=df,return_type='dataframe')
xtrain, xtest, ytrain, ytest = cross_validation.train_test_split(X, y, test_size=0.2, random_state=1234)
###Output
_____no_output_____
###Markdown
Logistic Regression Question for JB/CS: if my test size is large, the accuracy will increase, right? Should I randomly shuffle the train sets below? (A quick check of both follows the confusion-matrix output below.)
###Code
# 1. Fix the below to make sure it does not include repeat variable names
# 2. Leave changes to variables seperate in each model
def plot_confusion_matrix(cm,title='Confusion matrix', cmap=plt.cm.Reds):
plt.imshow(cm, interpolation='nearest',cmap=cmap)
plt.title(title)
plt.colorbar()
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
#Could be a typical function for classifying:
def train_score(classifier,x,y):
xtrain, xtest, ytrain, ytest2 = cross_validation.train_test_split(x, y, test_size=0.2, random_state=1234)
ytrain2=np.ravel(ytrain)
clf = classifier.fit(xtrain, ytrain2)
# accuracy for test & train:
train_acc=clf.score(xtrain, ytrain2)
test_acc=clf.score(xtest,ytest2)
print("Training Data Accuracy: %0.2f" %(train_acc))
print("Test Data Accuracy: %0.2f" %(test_acc))
    y_true = ytest2
y_pred = clf.predict(xtest)
conf = confusion_matrix(y_true, y_pred)
print(conf)
print ('\n')
print ("Precision: %0.2f" %(conf[0, 0] / (conf[0, 0] + conf[1, 0])))
print ("Recall: %0.2f"% (conf[0, 0] / (conf[0, 0] + conf[0, 1])))
cm=confusion_matrix(y_true, y_pred, labels=None)
plt.figure()
plot_confusion_matrix(cm)
log_clf=LogisticRegression()
train_score(log_clf,X,y)
###Output
Training Data Accuracy: 0.36
Test Data Accuracy: 0.45
[[0 6]
[0 5]]
Precision: nan
Recall: 0.00
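###Markdown
On the earlier question about test size and shuffling: `train_test_split` already shuffles the rows before splitting by default. A small sketch, assuming the `X`, `y` and imports above, of how `test_size` affects this logistic regression's test accuracy:
###Code
# Quick check: test accuracy of a plain logistic regression for a few test_size values.
for ts in (0.2, 0.3, 0.4):
    xtr, xte, ytr, yte = cross_validation.train_test_split(X, y, test_size=ts, random_state=1234)
    clf = LogisticRegression().fit(xtr, np.ravel(ytr))
    print(f'test_size={ts}: test accuracy {clf.score(xte, np.ravel(yte)):.3f}')
###Output
_____no_output_____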
###Markdown
ROC Curve
###Code
log = LogisticRegression()
log.fit(xtrain,np.ravel(ytrain))
y_score=log.predict_proba(xtest)[:,1]
fpr, tpr,thres = roc_curve(ytest, y_score)
roc_auc = auc(fpr, tpr)
plt.figure()
# Plotting our Baseline..
plt.plot([0,1],[0,1])
plt.plot(fpr,tpr)
plt.xlabel('FPR')
plt.ylabel('TPR')
tpr
thres
1-thres
df.head()
###Output
_____no_output_____
###Markdown
Gradient Descent Does GD implement theta, l1,l2, and combo? SVM Question for JB/CS: run for loop to find most efficient gamma? SVC with linear kernel
###Code
# fit linear model
model_svm = svm.SVC(kernel='linear')
ytrain3 = np.ravel(ytrain)
model_svm.fit(xtrain, ytrain3)
# predict out of sample
y_pred = model_svm.predict(xtest)
# check accuracy
accuracy_score(ytest,y_pred)
model_svm.coef_
from sklearn.metrics import average_precision_score
average_precision = average_precision_score(ytest, y_pred)
print('Average precision-recall score:', round(average_precision,2))
from sklearn.metrics import precision_recall_curve
precision, recall, _ = precision_recall_curve(ytest, y_pred)
plt.step(recall, precision, color='b', alpha=0.2,
where='post')
plt.fill_between(recall, precision, step='post', alpha=0.2,
color='b')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('2-class Precision-Recall curve: AP={0:0.2f}'.format(
average_precision))
###Output
_____no_output_____
###Markdown
SVC with RBF kernel
###Code
# fit rbf model
model_svm2 = svm.SVC(kernel='rbf')
model_svm2.fit(xtrain, ytrain3)
y_pred2 = model_svm2.predict(xtest)
y_pred2
accuracy_score(ytest,y_pred2)
confusion_matrix(ytest,y_pred2)
from sklearn.metrics import precision_recall_curve
precision, recall, _ = precision_recall_curve(ytest, y_pred2)
plt.step(recall, precision, color='b', alpha=0.2,
where='post')
plt.fill_between(recall, precision, step='post', alpha=0.2,
color='b')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('2-class Precision-Recall curve: AP={0:0.2f}'.format(
average_precision))
###Output
_____no_output_____
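###Markdown
On the earlier question about looping over `gamma`: a minimal grid-search sketch, assuming the `xtrain` and `ytrain3` arrays from above (`GridSearchCV` cross-validates each candidate instead of a hand-written loop):
###Code
from sklearn.model_selection import GridSearchCV

# Cross-validated search over gamma (and C) for the RBF-kernel SVC.
param_grid = {'gamma': [1e-3, 1e-2, 1e-1, 1], 'C': [0.1, 1, 10]}
grid = GridSearchCV(svm.SVC(kernel='rbf'), param_grid, cv=5)
grid.fit(xtrain, ytrain3)
print('Best parameters:', grid.best_params_)
print('Best cross-validated accuracy:', round(grid.best_score_, 3))
###Output
_____no_output_____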
###Markdown
SVC with poly kernel
###Code
# fit poly model
model_svm3 = svm.SVC(kernel='poly')
model_svm3.fit(xtrain, ytrain3)
y_pred3 = model_svm3.predict(xtest)
y_pred3
accuracy_score(ytest,y_pred3)
confusion_matrix(ytest,y_pred3)
###Output
_____no_output_____
###Markdown
SVM
###Code
y,X=dmatrices('platinum ~ followers + popularity',data=df,return_type='dataframe')
def quick_test(model, X, y):
xtrain, xtest, ytrain, ytest = train_test_split(X, y, test_size=0.2)
model.fit(xtrain, ytrain)
return model.score(xtest, ytest)
def quick_test_afew_times(model, X, y, n=10):
return np.mean([quick_test(model, X, y) for j in range(n)])
linearsvc = LinearSVC()
quick_test_afew_times(linearsvc, X, y)
###Output
/Users/cyrusrustomji/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py:578: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
###Markdown
Naive Bayes
###Code
y,X=dmatrices('platinum ~ followers + popularity',data=df,return_type='dataframe')
xtrain, xtest, ytrain, ytest = cross_validation.train_test_split(X, y, test_size=0.2, random_state=1234)
###Output
_____no_output_____
###Markdown
Binomial NB
###Code
model = naive_bayes.BernoulliNB()
ytrain = np.ravel(ytrain)
model.fit(xtrain, ytrain)
print("Accuracy: %.3f"% accuracy_score(ytest, model.predict(xtest)))
print(classification_report(ytest, model.predict(xtest)))
###Output
Accuracy: 0.545
precision recall f1-score support
0.0 0.55 1.00 0.71 6
1.0 0.00 0.00 0.00 5
avg / total 0.30 0.55 0.39 11
###Markdown
Multinomial NB
###Code
# this would work the best because the outcome of one album going platinum, will not affect
# the outcome of another album going platinum
model = naive_bayes.MultinomialNB()
model.fit(xtrain, ytrain)
print("Accuracy: %.3f"% accuracy_score(ytest, model.predict(xtest)))
print(classification_report(ytest, model.predict(xtest)))
def plot_confusion_matrix(cm,title='Confusion matrix', cmap=plt.cm.Reds):
plt.imshow(cm, interpolation='nearest',cmap=cmap)
plt.title(title)
plt.colorbar()
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
#Could be a typical function for classifying:
def train_score(classifier,x,y):
xtrain, xtest, ytrain, ytest2 = cross_validation.train_test_split(x, y, test_size=0.2, random_state=1234)
ytrain2=np.ravel(ytrain)
clf = classifier.fit(xtrain, ytrain2)
# accuracy for test & train:
train_acc=clf.score(xtrain, ytrain2)
test_acc=clf.score(xtest,ytest2)
print("Training Data Accuracy: %0.2f" %(train_acc))
print("Test Data Accuracy: %0.2f" %(test_acc))
    y_true = ytest2
y_pred = clf.predict(xtest)
conf = confusion_matrix(y_true, y_pred)
print(conf)
print ('\n')
print ("Precision: %0.2f" %(conf[0, 0] / (conf[0, 0] + conf[1, 0])))
print ("Recall: %0.2f"% (conf[0, 0] / (conf[0, 0] + conf[0, 1])))
cm=confusion_matrix(y_true, y_pred, labels=None)
plt.figure()
plot_confusion_matrix(cm)
log_clf=naive_bayes.MultinomialNB()
train_score(log_clf,X,y)
model.fit(xtrain, ytrain)
print(classification_report(ytest, model.predict(xtest)))
###Output
precision recall f1-score support
0.0 0.75 1.00 0.86 6
1.0 1.00 0.60 0.75 5
avg / total 0.86 0.82 0.81 11
###Markdown
SVCs Part 2 Linear SVC
###Code
from sklearn.svm import LinearSVC, SVC
from sklearn.preprocessing import scale
xtrain = scale(xtrain)
xtest = scale(xtest)
model = LinearSVC()
model.fit(xtrain, ytrain)
print("Accuracy: %.3f"% accuracy_score(ytest, model.predict(xtest)))
print(classification_report(ytest, model.predict(xtest)))
###Output
Accuracy: 0.727
precision recall f1-score support
0.0 0.71 0.83 0.77 6
1.0 0.75 0.60 0.67 5
avg / total 0.73 0.73 0.72 11
###Markdown
SVC
###Code
model = SVC()
model.fit(xtrain, ytrain)
print("Accuracy: %.3f"% accuracy_score(ytest, model.predict(xtest)))
print(classification_report(ytest, model.predict(xtest)))
###Output
Accuracy: 0.818
precision recall f1-score support
0.0 0.75 1.00 0.86 6
1.0 1.00 0.60 0.75 5
avg / total 0.86 0.82 0.81 11
###Markdown
Decision Trees
###Code
y,X=dmatrices('platinum ~ followers + popularity',data=df,return_type='dataframe')
xtrain, xtest, ytrain, ytest = cross_validation.train_test_split(X, y, test_size=0.2, random_state=1234)
###Output
_____no_output_____
###Markdown
How to pick the right tree depth (max_depth)
###Code
for i in range(1,20,1):
decisiontree = DecisionTreeClassifier(max_depth=i)
print(i,quick_test_afew_times(decisiontree, X, y))
###Output
1 0.845454545455
2 0.809090909091
3 0.836363636364
4 0.827272727273
5 0.854545454545
6 0.790909090909
7 0.818181818182
8 0.827272727273
9 0.818181818182
10 0.790909090909
11 0.790909090909
12 0.790909090909
13 0.781818181818
14 0.763636363636
15 0.8
16 0.845454545455
17 0.809090909091
18 0.845454545455
19 0.9
###Markdown
This is random every time, right?
###Code
for i in range(1,10,1):
randomforest = RandomForestClassifier(n_estimators=i)
yrf = np.ravel(y)
print(i, quick_test_afew_times(randomforest, X, yrf))
decisiontree = DecisionTreeClassifier()
quick_test_afew_times(decisiontree, X, y)
linearsvc = LinearSVC(loss='hinge')
quick_test_afew_times(linearsvc, X, yrf)
svc = SVC()
quick_test_afew_times(svc, X, yrf)
s2 = SVC()
X2 = (0.5-X) * 2
quick_test_afew_times(s2, X2, yrf)
###Output
_____no_output_____ |
Nabil_prediction.ipynb | ###Markdown
Plotting the target variable:
###Code
#plot
plt.figure(figsize=(16,8))
plt.plot(df['Close'], label='Close Price history')
###Output
_____no_output_____
###Markdown
Using LSTM:
###Code
#importing required libraries
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM
#creating dataframe
data = df.sort_index(ascending=True, axis=0)
new_data = pd.DataFrame(index=range(0,len(df)),columns=['Close'])
for i in range(0,len(data)):
new_data['Close'][i] = data['Close'][i]
#creating train and test sets
dataset = new_data.values
train = dataset[0:300,:]
valid = dataset[300:,:]
#converting dataset into x_train and y_train
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_data = scaler.fit_transform(dataset)
x_train, y_train = [], []
for i in range(60,len(train)):
x_train.append(scaled_data[i-60:i,0])
y_train.append(scaled_data[i,0])
x_train, y_train = np.array(x_train), np.array(y_train)
x_train = np.reshape(x_train, (x_train.shape[0],x_train.shape[1],1))
###Output
_____no_output_____
###Markdown
Creating the LSTM model:
###Code
regressor = Sequential()
regressor.add(LSTM(units=50, return_sequences=True, input_shape = (x_train.shape[1], 1)))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=50, return_sequences=True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=50, return_sequences=True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units=50))
regressor.add(Dropout(0.2))
regressor.add(Dense(units=1))
regressor.compile(loss='mean_squared_error', optimizer='adam')
regressor.fit(x_train, y_train, epochs=1, batch_size=1, verbose=2)
#predicting values, using past 60 from the train data
inputs = new_data[len(new_data) - len(valid) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = scaler.transform(inputs)
X_test = []
for i in range(60,inputs.shape[0]):
X_test.append(inputs[i-60:i,0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0],X_test.shape[1],1))
closing_price = regressor.predict(X_test)
closing_price = scaler.inverse_transform(closing_price)
###Output
_____no_output_____
###Markdown
Result Demo:
###Code
rms=np.sqrt(np.mean(np.power((valid-closing_price),2)))
rms
#for plotting
train = new_data[:300]
valid = new_data[300:]
valid['Predictions'] = closing_price
plt.plot(train['Close'])
plt.plot(valid[['Close']], color='green')
plt.plot(valid[['Predictions']], color='red')
###Output
C:\Users\ASUS\AppData\Local\Temp/ipykernel_14728/1233336585.py:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
valid['Predictions'] = closing_price
|
scripts/ABS-CALCS.ipynb | ###Markdown
With solcore
###Code
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
import numpy as np
from solcore.structure import Structure, Layer
#from solcore.absorption_calculator import calculate_rat
from solcore import material,si
#from solcore.absorption_calculator import calculate_absorption_profile
from solcore.parameter_system.parameter_system import *
from solcore.optics.tmm import calculate_rat,calculate_absorption_profile
from solcore import eV,convert
import solcore.quantum_mechanics as QM
T = 15  # assumption: temperature in K, taken from the T=15 assignment a few cells below, so this cell runs on its own
bulk = material("GaAs")(T=T, strained=False)
M43226 = Structure([
Layer(width=si(10, 'nm'), material=material("GaAs")(T=T,Nd=7e18,strained=False)),
Layer(width=si(100, 'nm'), material=material("AlGaAs")(T=T,Al=0.20,Nd =6e18,strained=False)),
Layer(width=si(300, 'nm'), material=material("AlGaAs")(T=T,Al=0.20,strained=False)),
Layer(width=si(11.87, 'nm'),material=material("GaAs")(T=T,strained=False)),
Layer(width=si(0.424, 'nm'), material=material("AlAs")(T=T, strained=False )),
Layer(width=si(13.85, 'nm'), material=material("GaAs")(T=T,strained=False )),
Layer(width=si(300, 'nm'), material=material("AlGaAs")(T=T,Al=0.30,strained=False )),
Layer(width=si(600, 'nm'), material=material("AlGaAs")(T=T,Al=0.20,Nd =6e18,strained=False))],
substrate=bulk
)
output_1 = QM.schrodinger(M43226, quasiconfined=0, graphtype='potentials',Efield=0, symmetric=False,num_eigenvalues=2, show=True)
T=15
Air = material("Air")(T=T)
GaAs = material("GaAs")(T=T)
bulk = material("GaAs")(T=T, strained=False)
n_AlGaAsd1 = material("AlGaAs")(T=T , Al=0.15, Nd =6e18 )
AlGaAs = material("AlGaAs")(T=T , Al=0.15 )
n_AlGaAsd2 = material("AlGaAs")(T=T , Al=0.15, Nd =7e18 )
n_GaAs = material("GaAs")(T=T , Nd =7e18 )
n2_AlGaAsd1 = material("AlGaAs")(T=T , Al=0.20, Nd =6e18 )
AlGaAs2 = material("AlGaAs")(T=T , Al=0.30 )
AlGaAs3 = material("AlGaAs")(T=T , Al=0.20 )
M43172 = Structure([
Layer(width=si(10, 'nm'), material=material("GaAs")(T=T,Nd=7e18,strained=False)),
Layer(width=si(100, 'nm'), material=material("AlGaAs")(T=T,Al=0.20,Nd =6e18,strained=False)),
Layer(width=si(240, 'nm'), material=material("AlGaAs")(T=T,Al=0.20,strained=False)),
Layer(width=si(30, 'nm'), material=material("AlGaAs")(T=T,Al=0.30,strained=False)),
Layer(width=si(30, 'nm'), material=material("AlGaAs")(T=T,Al=0.30,strained=False)),
Layer(width=si(11.87, 'nm'),material=material("GaAs")(T=T,strained=False)),
Layer(width=si(0.565, 'nm'), material=material("AlAs")(T=T,Al=0.15,strained=False)),
Layer(width=si(13.85, 'nm'), material=material("GaAs")(T=T,strained=False )),
Layer(width=si(30, 'nm'), material=material("AlGaAs")(T=T,Al=0.30,strained=False)),
Layer(width=si(30, 'nm'), material=material("AlGaAs")(T=T,Al=0.30,strained=False)),
Layer(width=si(240, 'nm'), material=material("AlGaAs")(T=T,Al=0.20,strained=False)),
Layer(width=si(600, 'nm'), material=material("AlGaAs")(T=T,Al=0.20,Nd =6e18,strained=False)),
],substrate=bulk)
M43171 = Structure([
Layer(width=si(10, 'nm'), material=material("GaAs")(T=T,Nd=7e18,strained=False)),
Layer(width=si(100, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,Nd =6e18,strained=False)),
Layer(width=si(200, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,strained=False)),
Layer(width=si(100, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,strained=False)),
Layer(width=si(11.87, 'nm'),material=material("GaAs")(T=T,strained=False)),
Layer(width=si(3.96, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,strained=False)),
Layer(width=si(13.85, 'nm'), material=material("GaAs")(T=T,strained=False )),
Layer(width=si(100, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,strained=False)),
Layer(width=si(200, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,strained=False)),
Layer(width=si(600, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,Nd =6e18,strained=False)),
],substrate=bulk)
M43226 = Structure([
Layer(width=si(10, 'nm'), material=material("GaAs")(T=T,Nd=7e18,strained=False)),
Layer(width=si(100, 'nm'), material=material("AlGaAs")(T=T,Al=0.20,Nd =6e18,strained=False)),
Layer(width=si(300, 'nm'), material=material("AlGaAs")(T=T,Al=0.20,strained=False)),
Layer(width=si(11.87, 'nm'),material=material("GaAs")(T=T,strained=False)),
Layer(width=si(0.424, 'nm'), material=material("AlAs")(T=T, strained=False )),
Layer(width=si(13.85, 'nm'), material=material("GaAs")(T=T,strained=False )),
Layer(width=si(300, 'nm'), material=material("AlGaAs")(T=T,Al=0.30,strained=False )),
Layer(width=si(600, 'nm'), material=material("AlGaAs")(T=T,Al=0.20,Nd =6e18,strained=False))],
substrate=bulk
)
stp=20
zl =600
li=660
lf=690
wl = np.linspace(li,lf, int(round(zl/stp)))
out1 = calculate_absorption_profile(M43171, wl, steps_size=stp,z_limit=zl)
out2 = calculate_absorption_profile(M43172, wl, steps_size=stp,z_limit=zl)
out3 = calculate_absorption_profile(M43226, wl, steps_size=stp,z_limit=zl)
print(wl.shape[0]*wl.shape[0])
print(out1['absorption'].shape,out2['absorption'].shape,out3['absorption'].shape)
import matplotlib.ticker as ticker
def fmt(x, pos):
a, b = '{:.1e}'.format(x).split('e')
b = int(b)
return r'${} \times 10^{{{}}}$'.format(a, b)
fig,(ax1,ax2,ax3)=plt.subplots(3,1,figsize=(10,7))
bar1 = ax1.contourf(out1['position'], wl,
out1['absorption'],
100,cmap="jet")
ax1.set_xlabel('Position (nm)',fontsize=18)
ax1.set_ylabel('Wavelength (nm)',fontsize=18)
cbar = plt.colorbar(bar1,ax=ax1,location='top',orientation='horizontal')
cbar.set_label('Absorption (1/nm)',fontsize=18)
ax2.contourf(out2['position'], wl,
out2['absorption'],
100,cmap="jet")
ax3.contourf(out3['position'], wl,
out3['absorption'],
100,cmap="jet")
plt.show()
dep=len(wl)
z1=out1['absorption']
pos1 = out1['position']
z2=out2['absorption']
pos2 = out2['position']
z3=out3['absorption']
pos3 = out3['position']
#xx = np.repeat(pos,dep,axis=0)
xx = np.tile(pos1,dep)
yy = np.repeat(wl,dep,axis=0)
#yy = np.tile(wl,dep)
zz1=z1.ravel()
zz2=z2.ravel()
zz3=z3.ravel()
expt1=np.zeros((len(zz1),3))
expt2=np.zeros((len(zz2),3))
expt3=np.zeros((len(zz3),3))
expt1[:,0]=expt2[:,0]=expt3[:,0]=xx
expt1[:,1]=expt2[:,1]=expt3[:,1]=yy
expt1[:,2]=zz1
expt2[:,2]=zz2
expt3[:,2]=zz3
np.savetxt("DATA/M4_3171-mapabs001.csv",expt1,delimiter=',')
np.savetxt("DATA/M4_3172-mapabs001.csv",expt2,delimiter=',')
np.savetxt("DATA/M4_3226-mapabs001.csv",expt3,delimiter=',')
#np.savetxt("DATA/M4_3171-abs001.csv",np.array([wl,A]).T,delimiter=',')
expt1.shape
###Output
_____no_output_____
###Markdown
STRUCTURES WITH FEWER LAYERS
###Code
Air = material("Air")(T=T)
GaAs = material("GaAs")(T=14)
bulk = material("GaAs")(T=T, strained=False)
n_AlGaAsd1 = material("AlGaAs")(T=T , Al=0.15, Nd =6e18 )
AlGaAs = material("AlGaAs")(T=T , Al=0.15 )
M43140 = Structure([
Layer(width=si(10, 'nm'), material=material("GaAs")(T=T,Nd=0,strained=False)),
Layer(width=si(300, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,strained=False)),
Layer(width=si(13.85, 'nm'),material=material("GaAs")(T=T,strained=False)),
Layer(width=si(1.98, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,strained=False)),
Layer(width=si(13.85, 'nm'), material=material("GaAs")(T=T,strained=False )),
Layer(width=si(300, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,strained=False )),
Layer(width=si(600, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,Nd =6e18,strained=False))],
substrate=bulk
)
M43521 = Structure([
Layer(width=si(10, 'nm'), material=material("GaAs")(T=T,Nd=0,strained=False)),
Layer(width=si(300, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,strained=False)),
Layer(width=si(23.74, 'nm'),material=material("GaAs")(T=T,strained=False)),
Layer(width=si(1.98, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,strained=False)),
Layer(width=si(13.85, 'nm'), material=material("GaAs")(T=T,strained=False )),
Layer(width=si(300, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,strained=False )),
Layer(width=si(600, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,Nd =6e18,strained=False))],
substrate=bulk
)
M43523 = Structure([
Layer(width=si(10, 'nm'), material=material("GaAs")(T=T,Nd=0,strained=False)),
Layer(width=si(300, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,strained=False)),
Layer(width=si(11.87, 'nm'),material=material("GaAs")(T=T,strained=False)),
Layer(width=si(1.98, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,strained=False)),
Layer(width=si(11.87, 'nm'), material=material("GaAs")(T=T,strained=False )),
Layer(width=si(300, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,strained=False )),
Layer(width=si(600, 'nm'), material=material("AlGaAs")(T=T,Al=0.15,Nd =6e18,strained=False))],
substrate=bulk
)
s1 = calculate_absorption_profile(M43140, wl, steps_size=stp,z_limit=zl)
s2 = calculate_absorption_profile(M43521, wl, steps_size=stp,z_limit=zl)
s3 = calculate_absorption_profile(M43523, wl, steps_size=stp,z_limit=zl)
print(wl.shape)
print(s1['absorption'].shape[0]*s1['absorption'].shape[1])
print(s1['absorption'].shape,s2['absorption'].shape,s3['absorption'].shape)
fig,(ax1,ax2,ax3)=plt.subplots(3,1,figsize=(10,7))
bar1 = ax1.contourf(s1['position'], wl,
s1['absorption'],
100,cmap="jet")
ax1.set_xlabel('Position (nm)',fontsize=18)
ax1.set_ylabel('Wavelength (nm)',fontsize=18)
cbar = plt.colorbar(bar1,ax=ax1,location='top',orientation='horizontal')
cbar.set_label('Absorption (1/nm)',fontsize=18)
ax2.contourf(s2['position'], wl,
s2['absorption'],
100,cmap="jet")
ax3.contourf(s3['position'], wl,
s3['absorption'],
100,cmap="jet")
plt.show()
dep=len(wl)
z1=s1['absorption']
pos1 = s1['position']
z2=s2['absorption']
pos2 = s2['position']
z3=s3['absorption']
pos3 = s3['position']
#xx = np.repeat(pos,dep,axis=0)
xx = np.tile(pos1,dep)
yy = np.repeat(wl,dep,axis=0)
#yy = np.tile(wl,dep)
zz1=z1.ravel()
zz2=z2.ravel()
zz3=z3.ravel()
expt1=np.zeros((len(zz1),3))
expt2=np.zeros((len(zz2),3))
expt3=np.zeros((len(zz3),3))
expt1[:,0]=expt2[:,0]=expt3[:,0]=xx
expt1[:,1]=expt2[:,1]=expt3[:,1]=yy
expt1[:,2]=zz1
expt2[:,2]=zz2
expt3[:,2]=zz3
np.savetxt("DATA/M4_3140-mapabs001.csv",expt1,delimiter=',')
np.savetxt("DATA/M4_3521-mapabs001.csv",expt2,delimiter=',')
np.savetxt("DATA/M4_3523-mapabs001.csv",expt3,delimiter=',')
expt1.shape
###Output
_____no_output_____ |
12Oct/.ipynb_checkpoints/500_2s_64fPANet-checkpoint.ipynb | ###Markdown
BALANCED TESTING
###Code
# NBname='_F-12fPANetb'
# y_predb = fmodel.model.predict(testb)#.ravel()
# fpr_0, tpr_0, thresholds_0 = roc_curve(tlabelsb[:,1], y_predb[:,1])
# fpr_x.append(fpr_0)
# tpr_x.append(tpr_0)
# thresholds_x.append(thresholds_0)
# auc_x.append(auc(fpr_0, tpr_0))
# # predict probabilities for testb set
# yhat_probs = fmodel.model.predict(testb, verbose=0)
# # predict crisp classes for testb set
# yhat_classes = fmodel.model.predict_classes(testb, verbose=0)
# # reduce to 1d array
# testby=tlabelsb[:,1]
# # yhat_probs = yhat_probs[:, 1]
# # #yhat_classes = yhat_classes[:, 0]
# # accuracy: (tp + tn) / (p + n)
# acc_S.append(accuracy_score(testby, yhat_classes))
# #print('Accuracy: %f' % accuracy_score(testby, yhat_classes))
# #precision tp / (tp + fp)
# pre_S.append(precision_score(testby, yhat_classes))
# #print('Precision: %f' % precision_score(testby, yhat_classes))
# #recall: tp / (tp + fn)
# rec_S.append(recall_score(testby, yhat_classes))
# #print('Recall: %f' % recall_score(testby, yhat_classes))
# # f1: 2 tp / (2 tp + fp + fn)
# f1_S.append(f1_score(testby, yhat_classes))
# #print('F1 score: %f' % f1_score(testby, yhat_classes))
# # kappa
# kap_S.append(cohen_kappa_score(testby, yhat_classes))
# #print('Cohens kappa: %f' % cohen_kappa_score(testby, yhat_classes))
# # confusion matrix
# mat_S.append(confusion_matrix(testby, yhat_classes))
# #print(confusion_matrix(testby, yhat_classes))
# with open('perform'+NBname+'.txt', "w") as f:
# f.writelines("AUC \t Accuracy \t Precision \t Recall \t F1 \t Kappa\n")
# f.writelines(map("{}\t{}\t{}\t{}\t{}\t{}\n".format, auc_x, acc_S, pre_S, rec_S, f1_S, kap_S))
# for x in range(len(fpr_x)):
# f.writelines(map("{}\n".format, mat_S[x]))
# f.writelines(map("{}\t{}\t{}\n".format, fpr_x[x], tpr_x[x], thresholds_x[x]))
# # ==========================================================================
# # # THIS IS THE BALANCED testb; DO NOT UNCOMMENT UNTIL THE END
# # ==========================================================================
# plot_auc(auc_x,fpr_x,tpr_x,NBname)
###Output
_____no_output_____
###Markdown
to see which samples were correctly classified ...
###Code
# yhat_probs[yhat_probs[:,1]>=0.5,1]
# yhat_probs[:,1]>=0.5
# yhat_classes
# testby
###Output
_____no_output_____
###Markdown
IMBALANCED TESTING
###Code
# NBname='_F-12fPANetim'
# y_pred = fmodel.model.predict(testim)#.ravel()
# fpr_0, tpr_0, thresholds_0 = roc_curve(tlabelsim[:,1], y_pred[:,1])
# fpr_x.append(fpr_0)
# tpr_x.append(tpr_0)
# thresholds_x.append(thresholds_0)
# auc_x.append(auc(fpr_0, tpr_0))
# # predict probabilities for testim set
# yhat_probs = fmodel.model.predict(testim, verbose=0)
# # predict crisp classes for testim set
# yhat_classes = fmodel.model.predict_classes(testim, verbose=0)
# # reduce to 1d array
# testimy=tlabelsim[:,1]
# #yhat_probs = yhat_probs[:, 0]
# #yhat_classes = yhat_classes[:, 0]
# # accuracy: (tp + tn) / (p + n)
# acc_S.append(accuracy_score(testimy, yhat_classes))
# #print('Accuracy: %f' % accuracy_score(testimy, yhat_classes))
# #precision tp / (tp + fp)
# pre_S.append(precision_score(testimy, yhat_classes))
# #print('Precision: %f' % precision_score(testimy, yhat_classes))
# #recall: tp / (tp + fn)
# rec_S.append(recall_score(testimy, yhat_classes))
# #print('Recall: %f' % recall_score(testimy, yhat_classes))
# # f1: 2 tp / (2 tp + fp + fn)
# f1_S.append(f1_score(testimy, yhat_classes))
# #print('F1 score: %f' % f1_score(testimy, yhat_classes))
# # kappa
# kap_S.append(cohen_kappa_score(testimy, yhat_classes))
# #print('Cohens kappa: %f' % cohen_kappa_score(testimy, yhat_classes))
# # confusion matrix
# mat_S.append(confusion_matrix(testimy, yhat_classes))
# #print(confusion_matrix(testimy, yhat_classes))
# with open('perform'+NBname+'.txt', "w") as f:
# f.writelines("##THE TWO LINES ARE FOR BALANCED AND IMBALALANCED TEST\n")
# f.writelines("#AUC \t Accuracy \t Precision \t Recall \t F1 \t Kappa\n")
# f.writelines(map("{}\t{}\t{}\t{}\t{}\t{}\n".format, auc_x, acc_S, pre_S, rec_S, f1_S, kap_S))
# f.writelines("#TRUE_SENSITIVE \t TRUE_RESISTANT\n")
# for x in range(len(fpr_x)):
# f.writelines(map("{}\n".format, mat_S[x]))
# #f.writelines(map("{}\t{}\t{}\n".format, fpr_x[x], tpr_x[x], thresholds_x[x]))
# f.writelines("#FPR \t TPR \t THRESHOLDs\n")
# for x in range(len(fpr_x)):
# #f.writelines(map("{}\n".format, mat_S[x]))
# f.writelines(map("{}\t{}\t{}\n".format, fpr_x[x], tpr_x[x], thresholds_x[x]))
# f.writelines("#NEXT\n")
# # ==========================================================================
# # # THIS IS THE UNBIASED testim; DO NOT UNCOMMENT UNTIL THE END
# # ==========================================================================
# plot_auc(auc_x,fpr_x,tpr_x,NBname)
###Output
_____no_output_____
###Markdown
to see which samples were correctly classified ...
###Code
# yhat_probs[yhat_probs[:,1]>=0.5,1]
# yhat_probs[:,1]>=0.5
# yhat_classes
# testimy
###Output
_____no_output_____
###Markdown
MISCELLANEOUS
###Code
# mat_S #confusion matrix
# auc_x #AUC balanced, imbalanced
# produces an extremely tall PNG that doesn't really fit on a screen
# plot_model(model0, to_file='model'+NBname+'.png', show_shapes=True,show_layer_names=False)
# produces an SVG object; don't uncomment until desperate
# SVG(model_to_dot(model0, show_shapes=True,show_layer_names=False).create(prog='dot', format='svg'))
###Output
_____no_output_____
###Markdown
END OF TESTING
###Code
print(str(datetime.datetime.now()))
# # =================================
# # Legacy codes
# # =================================
# # sdata.shape
# # (200, 1152012, 1)
# print('\n')
# sen_batch = np.random.RandomState(seed=45).permutation(sdata.shape[0])
# print(sen_batch)
# print('\n')
# bins = np.linspace(0, 200, 41)
# print(bins.shape)
# print(bins)
# print('\n')
# digitized = np.digitize(sen_batch, bins,right=False)
# print(digitized.shape)
# print(digitized)
# # #instead of 10, run counter
# # print(np.where(digitized==10))
# # print(sdata[np.where(digitized==10)].shape)
# # # (array([ 0, 96, 101, 159, 183]),)
# # # (5, 1152012, 1)
# # dig_sort=digitized
# # dig_sort.sort()
# # # print(dig_sort)
# # # [ 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 4 4 4 4 4 5 5 5 5
# # # 5 6 6 6 6 6 7 7 7 7 7 8 8 8 8 8 9 9 9 9 9 10 10 10
# # # 10 10 11 11 11 11 11 12 12 12 12 12 13 13 13 13 13 14 14 14 14 14 15 15
# # # 15 15 15 16 16 16 16 16 17 17 17 17 17 18 18 18 18 18 19 19 19 19 19 20
# # # 20 20 20 20 21 21 21 21 21 22 22 22 22 22 23 23 23 23 23 24 24 24 24 24
# # # 25 25 25 25 25 26 26 26 26 26 27 27 27 27 27 28 28 28 28 28 29 29 29 29
# # # 29 30 30 30 30 30 31 31 31 31 31 32 32 32 32 32 33 33 33 33 33 34 34 34
# # # 34 34 35 35 35 35 35 36 36 36 36 36 37 37 37 37 37 38 38 38 38 38 39 39
# # # 39 39 39 40 40 40 40 40]
# # print(val_idx_k)
# # # array([ 2, 3, 8, 10, 14, 15, 23, 24, 30, 32])
# # print(val_idx_k+1)
# # # array([ 3, 4, 9, 11, 15, 16, 24, 25, 31, 33])
# # print('\n')
# # print(sdata[np.isin(digitized,train_idx_k+1)].shape)
# # # (150, 1152012, 1)
# # print(sdata[np.isin(digitized,val_idx_k+1)].shape)
# # # (50, 1152012, 1)
###Output
_____no_output_____ |
4. Deep Neural Networks with PyTorch/5. Deep Networks/9. BachNorm_v2.ipynb | ###Markdown
Batch Normalization with the MNIST Dataset

Objective for this Notebook:
1. Define several neural networks, a criterion function, and an optimizer.
2. Train a neural network with and without Batch Normalization.

In this lab, you will build a Neural Network using Batch Normalization and compare it to a Neural Network that does not use Batch Normalization. You will use the MNIST dataset to test your network. Sections: Neural Network Module and Training Function; Load Data; Define Several Neural Networks, Criterion Function, Optimizer; Train Neural Network using Batch Normalization and no Batch Normalization; Analyze Results. Estimated Time Needed: 25 min.

Preparation: We'll need the following libraries:
###Code
# These are the libraries will be used for this lab.
# Using the following line code to install the torchvision library
# !mamba install -y torchvision
import torch
import torch.nn as nn
import torchvision.transforms as transforms
import torchvision.datasets as dsets
import torch.nn.functional as F
import matplotlib.pylab as plt
import numpy as np
torch.manual_seed(0)
###Output
_____no_output_____
###Markdown
Neural Network Module and Training Function. Define the neural network module (class): a Neural Network Module with two hidden layers using Batch Normalization.
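As a reminder of what the `nn.BatchNorm1d` layers below do (standard definition): for a mini-batch with mean $\mu_B$ and variance $\sigma_B^2$, each feature is normalised as $\hat{x} = \frac{x - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}$ and then rescaled as $y = \gamma\hat{x} + \beta$, where $\gamma$ and $\beta$ are learned per-feature parameters.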
###Code
# Define the Neural Network Model using Batch Normalization
class NetBatchNorm(nn.Module):
# Constructor
def __init__(self, in_size, n_hidden1, n_hidden2, out_size):
super(NetBatchNorm, self).__init__()
self.linear1 = nn.Linear(in_size, n_hidden1)
self.linear2 = nn.Linear(n_hidden1, n_hidden2)
self.linear3 = nn.Linear(n_hidden2, out_size)
self.bn1 = nn.BatchNorm1d(n_hidden1)
self.bn2 = nn.BatchNorm1d(n_hidden2)
# Prediction
def forward(self, x):
x = self.bn1(torch.sigmoid(self.linear1(x)))
x = self.bn2(torch.sigmoid(self.linear2(x)))
x = self.linear3(x)
return x
# Activations, to analyze results
def activation(self, x):
out = []
z1 = self.bn1(self.linear1(x))
out.append(z1.detach().numpy().reshape(-1))
a1 = torch.sigmoid(z1)
out.append(a1.detach().numpy().reshape(-1).reshape(-1))
z2 = self.bn2(self.linear2(a1))
out.append(z2.detach().numpy().reshape(-1))
a2 = torch.sigmoid(z2)
out.append(a2.detach().numpy().reshape(-1))
return out
###Output
_____no_output_____
###Markdown
Neural Network Module with two hidden layers without Batch Normalization
###Code
# Class Net for Neural Network Model
class Net(nn.Module):
# Constructor
def __init__(self, in_size, n_hidden1, n_hidden2, out_size):
super(Net, self).__init__()
self.linear1 = nn.Linear(in_size, n_hidden1)
self.linear2 = nn.Linear(n_hidden1, n_hidden2)
self.linear3 = nn.Linear(n_hidden2, out_size)
# Prediction
def forward(self, x):
x = torch.sigmoid(self.linear1(x))
x = torch.sigmoid(self.linear2(x))
x = self.linear3(x)
return x
# Activations, to analyze results
def activation(self, x):
out = []
z1 = self.linear1(x)
out.append(z1.detach().numpy().reshape(-1))
a1 = torch.sigmoid(z1)
out.append(a1.detach().numpy().reshape(-1).reshape(-1))
z2 = self.linear2(a1)
out.append(z2.detach().numpy().reshape(-1))
a2 = torch.sigmoid(z2)
out.append(a2.detach().numpy().reshape(-1))
return out
###Output
_____no_output_____
###Markdown
Define a function to train the model. In this case the function returns a Python dictionary to store the training loss and accuracy on the validation data
###Code
# Define the function to train model
def train(model, criterion, train_loader, validation_loader, optimizer, epochs=100):
i = 0
useful_stuff = {'training_loss':[], 'validation_accuracy':[]}
for epoch in range(epochs):
for i, (x, y) in enumerate(train_loader):
model.train()
optimizer.zero_grad()
z = model(x.view(-1, 28 * 28))
loss = criterion(z, y)
loss.backward()
optimizer.step()
useful_stuff['training_loss'].append(loss.data.item())
correct = 0
for x, y in validation_loader:
model.eval()
yhat = model(x.view(-1, 28 * 28))
_, label = torch.max(yhat, 1)
correct += (label == y).sum().item()
accuracy = 100 * (correct / len(validation_dataset))
useful_stuff['validation_accuracy'].append(accuracy)
return useful_stuff
###Output
_____no_output_____
###Markdown
Make Some Data. Load the training dataset by setting the parameter train to True, and convert it to a tensor by placing a transform object in the argument transform.
###Code
# load the train dataset
train_dataset = dsets.MNIST(root='./data', train=True, download=True, transform=transforms.ToTensor())
###Output
_____no_output_____
###Markdown
Load the validation dataset by setting the parameter train to False, and convert it to a tensor by placing a transform object in the argument transform.
###Code
# load the train dataset
validation_dataset = dsets.MNIST(root='./data', train=False, download=True, transform=transforms.ToTensor())
###Output
_____no_output_____
###Markdown
Create the training-data loader and the validation-data loader objects.
###Code
# Create Data Loader for both train and validating
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=2000, shuffle=True)
validation_loader = torch.utils.data.DataLoader(dataset=validation_dataset, batch_size=5000, shuffle=False)
###Output
_____no_output_____
###Markdown
Define the Neural Network, Criterion Function, Optimizer and Train the Model. Create the criterion function:
###Code
# Create the criterion function
criterion = nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
Variables for the neural network shape: hidden_dim sets the number of neurons in both hidden layers.
###Code
# Set the parameters
input_dim = 28 * 28
hidden_dim = 100
output_dim = 10
###Output
_____no_output_____
###Markdown
Train Neural Network using Batch Normalization and no Batch Normalization. First, train the neural network using Batch Normalization:
###Code
# Create model, optimizer and train the model
model_norm = NetBatchNorm(input_dim, hidden_dim, hidden_dim, output_dim)
optimizer = torch.optim.Adam(model_norm.parameters(), lr = 0.1)
training_results_Norm=train(model_norm , criterion, train_loader, validation_loader, optimizer, epochs=5)
###Output
_____no_output_____
###Markdown
Train Neural Network with no Batch Normalization:
###Code
# Create model without Batch Normalization, optimizer and train the model
model = Net(input_dim, hidden_dim, hidden_dim, output_dim)
optimizer = torch.optim.Adam(model.parameters(), lr = 0.1)
training_results = train(model, criterion, train_loader, validation_loader, optimizer, epochs=5)
###Output
_____no_output_____
###Markdown
Analyze Results. Compare the histograms of the hidden-layer activations for the first validation sample, for both models.
###Code
model.eval()
model_norm.eval()
out=model.activation(validation_dataset[0][0].reshape(-1,28*28))
plt.hist(out[2],label='model with no batch normalization' )
out_norm=model_norm.activation(validation_dataset[0][0].reshape(-1,28*28))
plt.hist(out_norm[2],label='model with normalization')
plt.xlabel("activation ")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
We see that the activations with Batch Normalization are zero-centred and have a smaller variance. Compare the training loss for each iteration:
###Code
# Plot the diagram to show the loss
plt.plot(training_results['training_loss'], label='No Batch Normalization')
plt.plot(training_results_Norm['training_loss'], label='Batch Normalization')
plt.ylabel('Cost')
plt.xlabel('iterations ')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Compare the validation accuracy for each iteration:
###Code
# Plot the diagram to show the accuracy
plt.plot(training_results['validation_accuracy'],label='No Batch Normalization')
plt.plot(training_results_Norm['validation_accuracy'],label='Batch Normalization')
plt.ylabel('validation accuracy')
plt.xlabel('epochs ')
plt.legend()
plt.show()
###Output
_____no_output_____ |
109. Convert Sorted List to Binary Search Tree.ipynb | ###Markdown
Medium

Given a singly linked list where elements are sorted in ascending order, convert it to a height-balanced BST. For this problem, a height-balanced binary tree is defined as a binary tree in which the depths of the two subtrees of every node never differ by more than 1.

Example: Given the sorted linked list [-10,-3,0,5,9], one possible answer is [0,-3,9,-10,null,5], which represents the following height-balanced BST:

      0
     / \
   -3   9
   /   /
 -10  5

Thought: Copy the list values into an array, take the middle element as the root, then recursively build the left subtree from the left half and the right subtree from the right half.
###Code
# Definition for singly-linked list.
# class ListNode:
# def __init__(self, x):
# self.val = x
# self.next = None
# Definition for a binary tree node.
# class TreeNode:
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
class Solution:
def sortedListToBST(self, head: ListNode) -> TreeNode:
nums = []
while head:
nums.append(head.val)
head=head.next
return self.helper(nums)
def helper(self,nums):
if not nums:
return None
mid = len(nums)//2
root = TreeNode(nums[mid])
root.left = self.helper(nums[:mid])
root.right = self.helper(nums[mid+1:])
return root
###Output
_____no_output_____ |
P2-Advanced_lane_lines.ipynb | ###Markdown
Advanced Lane Finding Project

The goals / steps of this project are the following:
* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
* Apply a distortion correction to raw images.
* Use color transforms, gradients, etc., to create a thresholded binary image.
* Apply a perspective transform to rectify the binary image ("bird's-eye view").
* Detect lane pixels and fit to find the lane boundary.
* Determine the curvature of the lane and vehicle position with respect to center.
* Warp the detected lane boundaries back onto the original image.
* Output a visual display of the lane boundaries and numerical estimates of lane curvature and vehicle position.

--- First, I'll compute the camera calibration using chessboard images
###Code
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
%matplotlib qt
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*9,3), np.float32)
objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space
imgpoints = [] # 2d points in image plane.
# Make a list of calibration images
images = glob.glob('./camera_cal/calibration*.jpg')
# Step through the list and search for chessboard corners
for fname in images:
img = cv2.imread(fname)
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
ret, corners = cv2.findChessboardCorners(gray, (9,6),None)
# If found, add object points, image points
if ret == True:
objpoints.append(objp)
imgpoints.append(corners)
# Draw and display the corners
img = cv2.drawChessboardCorners(img, (9,6), corners, ret)
cv2.imshow('img',img)
cv2.waitKey(100)
cv2.destroyAllWindows()
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
###Output
_____no_output_____
###Markdown
Distortion Correction
###Code
def undist(img):
undst_img = cv2.undistort(img, mtx, dist, None, mtx)
return undst_img
###Output
_____no_output_____
###Markdown
Combined Sobel-x Gradient and S-channel Threshold for Edge Detection

Sobel operator: The Sobel operator is at the heart of the Canny edge detection algorithm used in the introductory lesson. Applying the Sobel operator to an image is a way of taking the derivative of the image in the x or y direction. Gradients taken in the x and y directions both detect the lane lines but also pick up other edges: the gradient in the x direction emphasizes edges closer to vertical, while the gradient in the y direction emphasizes edges closer to horizontal. In order to emphasize the near-vertical lane lines, we only use the gradient in the x direction.

HLS transformation: In HLS (Hue, Lightness, Saturation) space, saturation is a measurement of colorfulness. As colors get lighter and closer to white, they have a lower saturation value, whereas the most intense colors, like a bright primary color (a bright red, blue, or yellow), have a high saturation value.
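For reference, the 3×3 Sobel kernel for the x-derivative (the default kernel size used by `cv2.Sobel`) is S_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]; convolving the grayscale image with this kernel approximates the horizontal intensity derivative, which is why near-vertical lane edges respond strongly to it. The function below combines the Sobel-x threshold with the S-channel threshold.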
###Code
def edge_detection(img):
# Convert to HLS color space and separate the S channel
# Note: img is the undistorted image
hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
s_channel = hls[:,:,2]
# Grayscale image
# NOTE: we already saw that standard grayscaling lost color information for the lane lines
# Explore gradients in other colors spaces / color channels to see what might work better
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Sobel x
sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold x gradient
thresh_min = 20
thresh_max = 100
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1
# Threshold color channel
s_thresh_min = 140
s_thresh_max = 255
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh_min) & (s_channel <= s_thresh_max)] = 1
# Stack each channel to view their individual contributions in green and blue respectively
# This returns a stack of the two binary images, whose components you can see as different colors
color_binary = np.dstack(( np.zeros_like(sxbinary), sxbinary, s_binary)) * 255
# Combine the two binary thresholds
combined_binary = np.zeros_like(sxbinary)
combined_binary[(s_binary == 1) | (sxbinary == 1)] = 1
# Plotting thresholded images
# f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))
# ax1.set_title('Stacked thresholds')
# ax1.imshow(color_binary)
# ax2.set_title('Combined S channel and gradient thresholds')
# ax2.imshow(combined_binary, cmap='gray')
# cv2.imwrite('./output_images/combined_binary.jpg', combined_binary*255)
return combined_binary
###Output
_____no_output_____
###Markdown
Perspective Transform. After detecting edges, the image needs to be transformed to a bird's-eye view. Source coordinates are selected manually, assuming that the camera is mounted at the center of the vehicle.
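Under the hood, `cv2.getPerspectiveTransform` solves for the 3×3 homography \\(M\\) that maps the four source points to the four destination points: a pixel \\((x, y)\\) is sent to \\((x'/w,\, y'/w)\\), where \\([x', y', w]^{T} = M\,[x, y, 1]^{T}\\). `cv2.warpPerspective` applies this mapping to the whole image, and the inverse matrix `Minv` is kept so the detected lane can later be warped back onto the original view.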
###Code
def warp(img):
ylim = img.shape[0]
xlim = img.shape[1]
# Four source coordinates
src = np.float32(
[[xlim/2-438,ylim],
[xlim/2-40,448],
[xlim/2+40,448],
[xlim/2+438,ylim]])
# Four desired coordinates
offset = 300
dst = np.float32(
[[offset,ylim],
[offset,0],
[xlim-offset,0],
[xlim-offset,ylim]])
M = cv2.getPerspectiveTransform(src, dst)
Minv = cv2.getPerspectiveTransform(dst, src)
warped = cv2.warpPerspective(img, M, (xlim,ylim), flags=cv2.INTER_LINEAR)
# f, (ax1, ax2) = plt.subplots(1,2,figsize=(20,10))
# ax1.set_title('Original')
# ax1.imshow(img, cmap = 'gray')
# ax2.set_title('Warped')
# ax2.imshow(warped, cmap = 'gray')
return warped, Minv
###Output
_____no_output_____
###Markdown
Sliding Windows and Fit Polynomial

Line Finding Method: Peaks in a Histogram. With this histogram we are adding up the pixel values along each column in the image. In our thresholded binary image, pixels are either 0 or 1, so the two most prominent peaks in this histogram will be good indicators of the x-position of the base of the lane lines. We can use that as a starting point for where to search for the lines. From that point, we can use a sliding window, placed around the line centers, to find and follow the lines up to the top of the frame.
###Code
def find_lane_pixels(binary_warped):
# Assuming you have created a warped binary image called "binary_warped"
# Take a histogram of the bottom half of the image
histogram = np.sum(binary_warped[binary_warped.shape[0]//2:,:], axis=0)
# Create an output image to draw on and visualize the result
out_img = np.dstack((binary_warped, binary_warped, binary_warped))*255
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
midpoint = np.int(histogram.shape[0]//2)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
# HYPERPARAMETERS
# Choose the number of sliding windows
nwindows = 9
# Set the width of the windows +/- margin
margin = 100
# Set minimum number of pixels found to recenter window
minpix = 50
# Set height of windows - based on nwindows above and image shape
window_height = np.int(binary_warped.shape[0]//nwindows)
# Identify the x and y positions of all nonzero (i.e. activated) pixels in the image
nonzero = binary_warped.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Current positions to be updated later for each window in nwindows
leftx_current = leftx_base
rightx_current = rightx_base
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = binary_warped.shape[0] - (window+1)*window_height
win_y_high = binary_warped.shape[0] - window*window_height
### Find the four below boundaries of the window ###
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low),
(win_xleft_high,win_y_high),(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low),
(win_xright_high,win_y_high),(0,255,0), 2)
### Identify the nonzero pixels in x and y within the window ###
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
### If you found > minpix pixels, recenter next window ###
### (`right` or `leftx_current`) on their mean position ###
if len(good_left_inds) > minpix:
leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = np.int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices (previously was a list of lists of pixels)
try:
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
except ValueError:
# Avoids an error if the above is not implemented fully
pass
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
return leftx, lefty, rightx, righty, out_img
def fit_polynomial(binary_warped):
# Find our lane pixels first
leftx, lefty, rightx, righty, out_img = find_lane_pixels(binary_warped)
### TO-DO: Fit a second order polynomial to each using `np.polyfit` ###
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
# Generate x and y values for plotting
ploty = np.linspace(0, binary_warped.shape[0]-1, binary_warped.shape[0] )
try:
left_fitx = left_fit[0]*ploty**2 + left_fit[1]*ploty + left_fit[2]
right_fitx = right_fit[0]*ploty**2 + right_fit[1]*ploty + right_fit[2]
except TypeError:
# Avoids an error if `left_fit` and `right_fit` are still None or incorrect
print('The function failed to fit a line!')
left_fitx = 1*ploty**2 + 1*ploty
right_fitx = 1*ploty**2 + 1*ploty
## Visualization ##
# Colors in the left and right lane regions
out_img[lefty, leftx] = [255, 0, 0]
out_img[righty, rightx] = [0, 0, 255]
# Plots the left and right polynomials on the lane lines
# plt.plot(left_fitx, ploty, color='yellow')
# plt.plot(right_fitx, ploty, color='yellow')
# Calculate curvature of the curve
left_curverad, right_curverad, dis = measure_curvature_pixels(ploty, left_fitx, right_fitx)
return out_img, left_curverad, right_curverad, left_fitx, right_fitx, dis
###Output
_____no_output_____
###Markdown
Determine the Curvature of the Lane and the Position of the Vehicle

The radius of curvature at any point of the function \\(x = f(y)\\) is given by \\(R_{curve} = \frac{\left(1 + \left(\frac{dx}{dy}\right)^{2}\right)^{3/2}}{\left|\frac{d^{2}x}{dy^{2}}\right|}\\). The y values of your image increase from top to bottom, so if, for example, you wanted to measure the radius of curvature closest to your vehicle, you could evaluate the formula at the y value corresponding to the bottom of your image.
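For the second-order fit used here, \\(x = Ay^{2} + By + C\\), the derivatives are \\(\frac{dx}{dy} = 2Ay + B\\) and \\(\frac{d^{2}x}{dy^{2}} = 2A\\), so the radius reduces to \\(R_{curve} = \frac{\left(1 + (2Ay + B)^{2}\right)^{3/2}}{|2A|}\\); the function in the next cell evaluates exactly this expression (after rescaling pixels to metres) at the bottom of the image.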
###Code
def measure_curvature_pixels(ploty, left_fitx, right_fitx):
'''
Calculates the curvature of polynomial functions in pixels.
'''
# Define y-value where we want radius of curvature
# We'll choose the maximum y-value, corresponding to the bottom of the image
y_eval = np.max(ploty)
# Define conversions in x and y from pixels space to meters
ym_per_pix = 30/720 # meters per pixel in y dimension
xm_per_pix = 3.7/700 # meters per pixel in x dimension
leftx = left_fitx[::-1] # Reverse to match top-to-bottom in y
rightx = right_fitx[::-1] # Reverse to match top-to-bottom in y
# Calculate vehicle position with respect to the center of the lane
dis = 1280/2 - (leftx[0]+rightx[0])/2
dis = dis*xm_per_pix
# Fit a second order polynomial to pixel positions in each fake lane line
left_fit_cr = np.polyfit(ploty*ym_per_pix, leftx*xm_per_pix, 2)
right_fit_cr = np.polyfit(ploty*ym_per_pix, rightx*xm_per_pix, 2)
##### Implement the calculation of R_curve (radius of curvature) #####
left_curverad = ((1 + (2*left_fit_cr[0]*y_eval + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0])
right_curverad = ((1 + (2*right_fit_cr[0]*y_eval + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])
return left_curverad, right_curverad, dis
###Output
_____no_output_____
###Markdown
Draw the Line onto the Original Image
###Code
def draw_line(image, warped, left_fitx, right_fitx, Minv):
ploty = np.linspace(0, warped.shape[0]-1, warped.shape[0] )
# Create an image to draw the lines on
warp_zero = np.zeros_like(warped).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
# Warp the blank back to original image space using inverse perspective matrix (Minv)
newwarp = cv2.warpPerspective(color_warp, Minv, (image.shape[1], image.shape[0]))
# Combine the result with the original image
result = cv2.addWeighted(image, 1, newwarp, 0.3, 0)
return result
###Output
_____no_output_____
###Markdown
Lane Detection Function
###Code
def lane_detection(img):
# Distorsion correction
undist_img = undist(img)
# Edges creation
edges_img = edge_detection(undist_img)
# Warping
binary_warped, Minv = warp(edges_img)
# Finding lines
out_img, left_curverad, right_curverad, left_fitx, right_fitx, dis = fit_polynomial(binary_warped)
# Draw lines
result = draw_line(undist_img, binary_warped, left_fitx, right_fitx, Minv)
# Write some Text
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(result,'Radius of Curvature(m): {0}'.format((left_curverad+right_curverad)/2),(20,50), font, 1,(255,255,255),2)
cv2.putText(result,'Vehicle is {0}m left of center'.format(dis),(20,80), font, 1,(255,255,255),2)
return result
###Output
_____no_output_____
###Markdown
Test on Images
###Code
# images = glob.glob('./test_images/test*.jpg')
# for fname in images:
# img = cv2.imread(fname)
# result = lane_detection(img)
# cv2.imshow(fname,result)
# cv2.waitKey(0)
# cv2.destroyAllWindows()
# cv2.imwrite('./output_images/result.jpg', result)
fname = ('./test_images/test1.jpg')
img = cv2.imread(fname)
result = lane_detection(img)
cv2.imshow(fname,result)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.imwrite('./output_images/test1.jpg', result)
###Output
_____no_output_____
###Markdown
Test on Videos
###Code
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(img):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
result = lane_detection(img)
return result
output = 'output_videos/harder_challenge_video.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("harder_challenge_video.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(output, audio=False)
###Output
t: 0%| | 3/1199 [00:00<00:49, 24.29it/s, now=None] |
Natural Language Processing/notebooks/09-distributional-semantics.ipynb | ###Markdown
Distributional Semantics. For this notebook, we'll be using the 500-document Brown corpus included in NLTK.
###Code
from nltk.corpus import brown
###Output
_____no_output_____
###Markdown
This notebook is divided into two independent parts: the first uses PMI for distinguishing good collocations, and the second involves building a vector space model. For the PMI portion, we'll use a function which extracts the information we need for a particular two-word collocation, namely counts of each word individually, counts of the collocation, and the total number of word tokens in the corpus, and then calculates the PMI.
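As a reminder, for a bigram $(w_1, w_2)$ the PMI is defined as $\mathrm{PMI}(w_1, w_2) = \log_2 \frac{P(w_1, w_2)}{P(w_1)\,P(w_2)}$, with the probabilities estimated from raw counts; the function below computes this quantity directly from the counts it collects: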
###Code
import math
def get_PMI_for_collocation_brown(word1,word2):
word1_count = 0
word2_count = 0
both_count = 0
total_count = 0.0 # so that division results in a float
for sent in brown.sents():
sent = [word.lower() for word in sent]
for i in range(len(sent)):
total_count += 1
if sent[i] == word1:
word1_count += 1
if i < len(sent) - 1 and sent[i + 1] == word2:
both_count += 1
elif sent[i] == word2:
word2_count += 1
return math.log((both_count/total_count)/((word1_count/total_count)*(word2_count/total_count)), 2)
###Output
_____no_output_____
###Markdown
Note that in a typical use case, we probably wouldn't do it this way, since we'd likely want to calculate PMI across many different words; collecting the statistics for this can be done in a single pass across the corpus for all words, with the PMI then calculated in a separate function (see the sketch below). Anyway, let's compare the PMI for two phrases, "hard work" and "some work".
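For illustration only, here is a minimal sketch of that single-pass approach (the `collect_counts` and `pmi` helpers are hypothetical, not part of this notebook): it gathers unigram counts, adjacent-bigram counts and the token total in one sweep, after which the PMI of any observed word pair can be computed from the stored counts.

```python
from collections import Counter
import math

def collect_counts(sents):
    # One pass over the corpus: unigram counts, adjacent-bigram counts, total tokens.
    unigrams, bigrams, total = Counter(), Counter(), 0
    for sent in sents:
        sent = [w.lower() for w in sent]
        total += len(sent)
        unigrams.update(sent)
        bigrams.update(zip(sent, sent[1:]))
    return unigrams, bigrams, total

def pmi(w1, w2, unigrams, bigrams, total):
    # PMI from the precomputed counts (assumes the bigram was actually observed).
    p_joint = bigrams[(w1, w2)] / total
    return math.log(p_joint / ((unigrams[w1] / total) * (unigrams[w2] / total)), 2)

# Example (assuming the Brown corpus import above):
# unigrams, bigrams, total = collect_counts(brown.sents())
# pmi("hard", "work", unigrams, bigrams, total)
```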
###Code
print(get_PMI_for_collocation_brown("hard","work"))
print(get_PMI_for_collocation_brown("some","work"))
###Output
5.237244531670497
1.9135320271049516
###Markdown
Based on PMI, "hard work" appears to be a much better collocation than "some work", which matches our intuition. Go ahead and try this out on some other collocations. For the second part of the notebook, let's first create a sparse document-term matrix using scikit-learn. We'll then apply tf-idf weighting and SVD to learn word vectors.
###Code
from sklearn.feature_extraction import DictVectorizer
def get_BOW(text):
BOW = {}
for word in text:
BOW[word.lower()] = BOW.get(word.lower(),0) + 1
return BOW
texts = []
for fileid in brown.fileids():
texts.append(get_BOW(brown.words(fileid)))
vectorizer = DictVectorizer()
brown_matrix = vectorizer.fit_transform(texts)
print(brown_matrix)
###Output
(0, 49) 1.0
(0, 58) 1.0
(0, 169) 1.0
(0, 181) 1.0
(0, 205) 1.0
(0, 238) 1.0
(0, 322) 33.0
(0, 373) 3.0
(0, 374) 3.0
(0, 393) 87.0
(0, 395) 4.0
(0, 405) 88.0
(0, 454) 4.0
(0, 465) 1.0
(0, 695) 1.0
(0, 720) 1.0
(0, 939) 1.0
(0, 1087) 1.0
(0, 1103) 1.0
(0, 1123) 1.0
(0, 1159) 1.0
(0, 1170) 1.0
(0, 1173) 1.0
(0, 1200) 3.0
(0, 1451) 1.0
: :
(499, 49161) 1.0
(499, 49164) 1.0
(499, 49242) 1.0
(499, 49253) 1.0
(499, 49275) 1.0
(499, 49301) 1.0
(499, 49313) 1.0
(499, 49369) 1.0
(499, 49385) 1.0
(499, 49386) 4.0
(499, 49390) 2.0
(499, 49410) 2.0
(499, 49446) 1.0
(499, 49576) 1.0
(499, 49590) 1.0
(499, 49613) 3.0
(499, 49691) 42.0
(499, 49694) 3.0
(499, 49697) 3.0
(499, 49698) 1.0
(499, 49707) 17.0
(499, 49708) 1.0
(499, 49710) 4.0
(499, 49711) 1.0
(499, 49797) 1.0
###Markdown
Our matrix is sparse: for instance, columns 0-48 in row 0 are empty and are simply left out; only the rows and columns with non-zero values are displayed. Rather than removing stopwords as we did for text classification, let's add some idf weighting to this matrix. Scikit-learn has a built-in tf-idf transformer for just this purpose.
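With these settings (`smooth_idf=False`, `norm=None`), the transformer reweights each raw count as $\text{tf-idf}(t, d) = \text{tf}(t, d) \cdot \left(\ln\frac{N}{\text{df}(t)} + 1\right)$, where $N$ is the number of documents and $\text{df}(t)$ is the number of documents containing term $t$ (this follows the scikit-learn documentation; other tf-idf variants differ slightly in the idf term).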
###Code
from sklearn.feature_extraction.text import TfidfTransformer
transformer = TfidfTransformer(smooth_idf=False,norm=None)
brown_matrix_tfidf = transformer.fit_transform(brown_matrix)
print(brown_matrix_tfidf)
###Output
(0, 49646) 1.7298111649315369
(0, 49613) 1.3681693233644676
(0, 49596) 3.7066318654255337
(0, 49386) 9.98833379406486
(0, 49378) 8.731629015654066
(0, 49313) 2.62964061975162
(0, 49301) 7.374075931214787
(0, 49292) 2.184170177029756
(0, 49224) 3.385966701933097
(0, 49147) 6.0
(0, 49041) 3.407945608651872
(0, 49003) 22.210096880912054
(0, 49001) 5.741605353137016
(0, 48990) 16.84677293625242
(0, 48951) 4.7297014486341915
(0, 48950) 4.939351940117883
(0, 48932) 3.9565115604007097
(0, 48867) 7.046120322868667
(0, 48777) 1.41855034765682
(0, 48771) 13.694210097452498
(0, 48769) 6.236428984115791
(0, 48753) 1.2957142441490452
(0, 48749) 3.1984194075136347
(0, 48720) 1.1648746431902341
(0, 48670) 2.1974319458783156
: :
(499, 2710) 3.120263536200091
(499, 2688) 2.04412410338404
(499, 2670) 3.9565115604007097
(499, 2611) 4.270169119255751
(499, 2468) 6.521460917862246
(499, 2439) 4.170085660698769
(499, 2415) 4.122633007848826
(499, 2413) 2.320337504305643
(499, 2388) 2.096614286005437
(499, 2358) 6.115995809754082
(499, 2290) 61.0
(499, 2289) 7.5533024513831695
(499, 2286) 11.156201344558639
(499, 2285) 20.714812015222506
(499, 2283) 1.2256466815323281
(499, 1345) 6.521460917862246
(499, 1141) 4.506557897319982
(499, 405) 83.0
(499, 395) 12.710333931244342
(499, 393) 188.0
(499, 374) 4.0872168559431525
(499, 373) 4.095849955425997
(499, 354) 7.214608098422191
(499, 322) 7.538167310351703
(499, 320) 3.4769384801388235
###Markdown
Next, let's apply SVD. Scikit-learn does not expose the internal details of the decomposition; we just use the [TruncatedSVD class](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html) directly to get a matrix with k dimensions. Since the Brown corpus is a fairly small corpus, we'll use k=10. Note that we'll first transpose the document-term sparse matrix to a term-document matrix before we apply SVD.
###Code
from sklearn.decomposition import TruncatedSVD
from scipy.sparse import csr_matrix
#dimension of brown_matrix_tfidf = num_documents x num_vocab
#dimension of brown_matrix_tfidf_transposed = num_vocab x num_documents
brown_matrix_tfidf_transposed = csr_matrix(brown_matrix_tfidf).transpose()
svd = TruncatedSVD(n_components=10)
brown_matrix_lowrank = svd.fit_transform(brown_matrix_tfidf_transposed)
print(brown_matrix_lowrank.shape)
print(brown_matrix_lowrank)
###Output
(49815, 10)
[[ 1.46529922e+02 -1.56578300e+02 3.77295895e+01 ... 1.87574330e+01
5.17826940e+00 -1.32357467e+01]
[ 6.10797743e-01 6.77336542e-01 -2.04392054e-01 ... -1.02796238e+00
-1.14385161e-01 -1.12871217e+00]
[ 1.00411586e+00 1.99456979e-01 -1.25054329e-01 ... 1.14578446e+00
-4.14250674e-01 -1.68706426e-01]
...
[ 3.26612758e-01 2.53370725e-01 -2.71177861e-01 ... 2.51508282e-01
1.31093947e-01 1.59715022e-01]
[ 6.35382477e-01 7.12100488e-01 -2.82140022e-02 ... 6.70060518e-01
1.78645267e-01 3.52829119e-01]
[ 3.27037764e-01 7.38765531e-01 2.09243078e+00 ... -2.95536854e-01
-3.95585989e-01 -1.02777409e-02]]
###Markdown
The returned matrix corresponds to the transformed term/word matrix, $U \Sigma$, after the SVD factorisation $X \approx U \Sigma V^T$ is applied to `brown_matrix_tfidf_transposed` as $X$. Note that the resulting matrix is not sparse. The last thing we'll do is compare some words and see if their similarity fits our intuition.
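Similarity here is measured with the cosine of the angle between word vectors, $\cos(u, v) = \frac{u \cdot v}{\lVert u \rVert\,\lVert v \rVert}$, which is exactly what the `cos_sim` helper below computes.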
###Code
import numpy as np
from numpy.linalg import norm
v1 = brown_matrix_lowrank[vectorizer.vocabulary_["medical"]]
v2 = brown_matrix_lowrank[vectorizer.vocabulary_["health"]]
v3 = brown_matrix_lowrank[vectorizer.vocabulary_["gun"]]
def cos_sim(a, b):
return np.dot(a, b)/(norm(a)*norm(b))
print(cos_sim(v1, v2))
print(cos_sim(v1, v3))
###Output
0.8095752240062064
0.14326323746458713
###Markdown
There'll be some variability in the exact cosine similarity values that you'll get (feel free to re-run SVD and check this), but hopefully you should find that _medical_ and _health_ are more closely related to each other than _medical_ and _gun_. Next let's try _information_, _retrieval_ and _science_!
###Code
v1 = brown_matrix_lowrank[vectorizer.vocabulary_["information"]]
v2 = brown_matrix_lowrank[vectorizer.vocabulary_["retrieval"]]
v3 = brown_matrix_lowrank[vectorizer.vocabulary_["science"]]
print(cos_sim(v1, v2))
print(cos_sim(v1, v3))
###Output
0.4883440848184145
0.43337516485029776
|
examples/ReGraph_demo.ipynb | ###Markdown
ReGraph tutorial: from simple graph rewriting to a graph hierarchy. This notebook consists of simple examples of how to use the ReGraph library.
###Code
import copy
import networkx as nx
from regraph.hierarchy import Hierarchy
from regraph.rules import Rule
from regraph.plotting import plot_graph, plot_instance, plot_rule
from regraph.primitives import find_matching, print_graph, equal, add_nodes_from, add_edges_from
from regraph.utils import keys_by_value
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
I. Simple graph rewriting 1. Initialization of a graph ReGraph works with NetworkX graph objects, both undirected graphs (`nx.Graph`) and directed ones (`nx.DiGraph`). The workflow of the graph initialization in NetworkX can be found [here](http://networkx.readthedocs.io/en/networkx-1.11/tutorial/tutorial.html).
###Code
graph = nx.DiGraph()
add_nodes_from(graph,
[
('1', {'name': 'EGFR', 'state': 'p'}),
('2', {'name': 'BND'}),
('3', {'name': 'Grb2', 'aa': 'S', 'loc': 90}),
('4', {'name': 'SH2'}),
('5', {'name': 'EGFR'}),
('6', {'name': 'BND'}),
('7', {'name': 'Grb2'}),
('8', {'name': 'WAF1'}),
('9', {'name': 'BND'}),
('10', {'name': 'G1-S/CDK', 'state': 'p'}),
])
edges = [
('1', '2', {'s': 'p'}),
('4', '2', {'s': 'u'}),
('4', '3'),
('5', '6', {'s': 'p'}),
('7', '6', {'s': 'u'}),
('8', '9'),
('9', '8'),
('10', '8', {"a": {1}}),
('10', '9', {"a": {2}}),
('5', '2', {'s': 'u'})
]
add_edges_from(graph, edges)
type(graph.node['1']['name'])
###Output
_____no_output_____
###Markdown
ReGraph provides some utils for graph plotting that are going to be used in the course of this tutorial.
###Code
positioning = plot_graph(graph)
###Output
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:522: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if not cb.is_string_like(edge_color) \
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:543: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if cb.is_string_like(edge_color) or len(edge_color) == 1:
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:724: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if not cb.is_string_like(label):
###Markdown
2. Initialization of a rule - Graph rewriting is implemented as an application of a **graph rewriting rule** to a given input graph object $G$. A graph rewriting rule $R$ is a span $LHS \leftarrow P \rightarrow RHS$, where $LHS$ is a graph that represents the left-hand side of the rule -- a pattern that is going to be matched inside of the graph; $P$ is a graph that represents the preserved part of the rule -- together with the homomorphism $LHS \leftarrow P$ it specifies the nodes and edges that are going to be preserved in the course of application of the rule; $RHS$ and the homomorphism $P \rightarrow RHS$, on the other hand, specify the nodes and edges that are going to be added. In addition, if two nodes $n^P_1, n^P_2$ of $P$ map to the same node $n^{LHS}$ in $LHS$, $n^{LHS}$ is going to be cloned during graph rewriting. Symmetrically, if two nodes $n^P_1$ and $n^P_2$ of $P$ map to the same node $n^{RHS}$ in $RHS$, $n^P_1$ and $n^P_2$ are merged. - $LHS$, $P$ and $RHS$ can be defined as NetworkX graphs
###Code
pattern = nx.DiGraph()
add_nodes_from(
pattern,
[(1, {'state': 'p'}),
(2, {'name': 'BND'}),
3,
4]
)
add_edges_from(
pattern,
[(1, 2, {'s': 'p'}),
(3, 2, {'s': 'u'}),
(3, 4)]
)
p = nx.DiGraph()
add_nodes_from(p,
[(1, {'state': 'p'}),
'1_clone',
(2, {'name': 'BND'}),
3,
4
])
add_edges_from(
p,
[(1, 2),
('1_clone', 2),
(3, 4)
])
rhs = nx.DiGraph()
add_nodes_from(
rhs,
[(1, {'state': 'p'}),
'1_clone',
(2, {'name': 'BND'}),
3,
4,
5
])
add_edges_from(
rhs,
[(1, 2, {'s': 'u'}),
('1_clone', 2),
(2, 4),
(3, 4),
(5, 3)
])
p_lhs = {1: 1, '1_clone': 1, 2: 2, 3: 3, 4: 4}
p_rhs = {1: 1, '1_clone': '1_clone', 2: 2, 3: 3, 4: 4}
###Output
_____no_output_____
###Markdown
- A rule of graph rewriting is implemented in the class `regraph.library.rules.Rule`. An instance of `regraph.library.rules.Rule` is initialized with the NetworkX graphs $LHS$, $P$, $RHS$, and two dictionaries specifying $LHS \leftarrow P$ and $P \rightarrow RHS$. - For visualization of a rule, the `regraph.library.plotting.plot_rule` util is implemented in ReGraph.
###Code
rule = Rule(p, pattern, rhs, p_lhs, p_rhs)
plot_rule(rule)
###Output
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:522: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if not cb.is_string_like(edge_color) \
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:543: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if cb.is_string_like(edge_color) or len(edge_color) == 1:
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:724: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if not cb.is_string_like(label):
###Markdown
1. Rewriting 1.1. Matching of LHS - The matchings of $LHS$ in $G$ ($LHS \rightarrowtail G$) can be found using the `regraph.library.primitives.find_matching` function. This function returns a list of dictionaries representing the matchings. If no matchings were found, the list is empty. - Visualization of the matching in $G$ is implemented in the `regraph.library.plotting.plot_instance` util.
###Code
instances = find_matching(graph, rule.lhs)
print("Instances:")
for instance in instances:
print(instance)
plot_instance(graph, rule.lhs, instance, parent_pos=positioning) #filename=("instance_example_%d.png" % i))
###Output
Instances:
{1: '1', 2: '2', 3: '4', 4: '3'}
###Markdown
1.2. Rewriting - 1. Graph rewriting can be performed with the `regraph.library.primitives.rewrite` function. It takes as input a graph, an instance of the matching (a dictionary that specifies the mapping from the nodes of $LHS$ to the nodes of $G$), a rewriting rule (an instance of the `regraph.library.rules.Rule` class), and a parameter `inplace` (by default set to `True`). If `inplace` is `True`, rewriting will be performed directly in the provided graph object and the function will return a dictionary corresponding to the $RHS$ matching in the rewritten graph ($RHS \rightarrowtail G'$); otherwise the rewriting function will return a new graph object corresponding to the result of rewriting, together with the $RHS$ matching. 2. Another possibility to perform graph rewriting is implemented in the `apply_to` method of the `regraph.library.Rule` class. It takes as input a graph and an instance of the matching. It applies the corresponding (to `self`) rewriting rule and returns a new graph (the result of graph rewriting).
###Code
# Rewriting without modification of the initial object
graph_backup = copy.deepcopy(graph)
new_graph_1, rhs_graph = rule.apply_to(graph, instances[0], inplace=False)
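# Alternative described above: regraph.library.primitives.rewrite can perform the same
# rewriting in place on `graph` (not shown here; see its description in the cell above).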
# print(equal(new_graph_1, new_graph_2))
print(new_graph_1.edge['1']['2'])
assert(equal(graph_backup, graph))
print("Matching of RHS:", rhs_graph)
plot_instance(graph, rule.lhs, instances[0], parent_pos=positioning)
new_pos = plot_instance(new_graph_1, rule.rhs, rhs_graph, parent_pos=positioning)
###Output
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:522: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if not cb.is_string_like(edge_color) \
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:543: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if cb.is_string_like(edge_color) or len(edge_color) == 1:
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:724: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if not cb.is_string_like(label):
###Markdown
ReGraph also provides a primitive for testing equality of two graphs in `regraph.library.primitives.equal`. In our previous example we can see that a graph obtained by application of a rule `new_graph` (through the `Rule` interface) and an initial graph object `graph` after in-place rewriting are equal. II. Hierarchy of graphs & rewritingReGraph allows creating a hierarchy of graphs connected together by means of **typing homomorphisms**. In the context of a hierarchy, if there exists a homomorphism $G \rightarrow T$ we say that graph $G$ is typed by a graph $T$. A graph hierarchy is a DAG, where nodes are graphs and edges are typing homomorphisms between graphs.ReGraph provides two kinds of typing for graphs: **partial typing** and **total typing**.- **Total typing** ($G \rightarrow T$) is a homomorphism which maps every node of $G$ to some node in $T$ (a type);- **Partial typing** ($G \rightharpoonup T$) is a slight generalisation of total typing, which allows only a subset of nodes from $G$ to be typed by nodes in $T$ (to have types in $T$), whereas the remaining nodes, which have no mapping to $T$, are considered to have no type in $T$.**Note:** Use total typing if you would like to make sure that the nodes of your graphs are always strictly typed by some metamodel. 1. Example: simple hierarchy 1.1. Initialization of a hierarchyConsider the following example of a simple graph hierarchy. The two graphs $G$ and $T$ are created and added to the hierarchy. Afterwards a typing homomorphism (total) between $G$ and $T$ is added, so that every node of $G$ is typed by some node in $T$.
###Code
# Define graph G
g = nx.DiGraph()
g.add_nodes_from(["protein", "binding", "region", "compound"])
g.add_edges_from([("region", "protein"), ("protein", "binding"), ("region", "binding"), ("compound", "binding")])
# Define graph T
t = nx.DiGraph()
t.add_nodes_from(["action", "agent"])
t.add_edges_from([("agent", "agent"), ("agent", "action")])
# Define graph G'
g_prime = nx.DiGraph()
g_prime.add_nodes_from(
["EGFR", "BND_1", "SH2", "Grb2"]
)
g_prime.add_edges_from([
("EGFR", "BND_1"),
("SH2", "BND_1"),
("SH2", "Grb2")
])
# Create a hierarchy
simple_hierarchy = Hierarchy()
simple_hierarchy.add_graph("G", g, {"name": "Simple protein interaction"})
simple_hierarchy.add_graph("T", t, {"name": "Agent interaction"})
simple_hierarchy.add_typing(
"G", "T",
{"protein": "agent",
"region": "agent",
"compound": "agent",
"binding": "action",
},
total=True
)
simple_hierarchy.add_graph("G_prime", g_prime, {"name": "EGFR and Grb2 binding"})
simple_hierarchy.add_typing(
"G_prime", "G",
{
"EGFR": "protein",
"BND_1": "binding",
"SH2": "region",
"Grb2": "protein"
},
total=True
)
print(simple_hierarchy)
plot_graph(simple_hierarchy.node["T"].graph)
pos = plot_graph(simple_hierarchy.node["G"].graph)
plot_graph(simple_hierarchy.node["G_prime"].graph, parent_pos=pos)
###Output
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:522: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if not cb.is_string_like(edge_color) \
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:543: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if cb.is_string_like(edge_color) or len(edge_color) == 1:
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:724: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if not cb.is_string_like(label):
###Markdown
1.2. Rewriting in the hierarchyReGraph implements rewriting of graphs in the hierarchy; this rewriting is more restrictive, as the application of a rewriting rule cannot violate any typing defined in the hierarchy. The following code illustrates the application of a rewriting rule to a graph in the hierarchy. In the first step we create a `Rule` object containing the rule we would like to apply.
###Code
lhs = nx.DiGraph()
add_nodes_from(lhs, [1, 2])
add_edges_from(lhs, [(1, 2)])
p = nx.DiGraph()
add_nodes_from(p, [1, 2])
add_edges_from(p, [])
rhs = nx.DiGraph()
add_nodes_from(rhs, [1, 2, 3])
add_edges_from(rhs, [(3, 1), (3, 2)])
# By default if `p_lhs` and `p_rhs` are not provided
# to a rule, it tries to construct this homomorphisms
# automatically by matching the names. In this case we
# have defined lhs, p and rhs in such a way that that
# the names of the matching nodes correspond
rule = Rule(p, lhs, rhs)
plot_rule(rule)
###Output
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:522: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if not cb.is_string_like(edge_color) \
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:543: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if cb.is_string_like(edge_color) or len(edge_color) == 1:
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:724: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if not cb.is_string_like(label):
###Markdown
Now, we would like to use the rule defined above in the following context: in the graph `G_prime` we want to find _"protein"_ nodes connected to _"binding"_ nodes, delete the edge connecting them, and after that add a new intermediary node and connect it to the previous _"protein"_ and _"binding"_. We can provide this context by specifying a typing of the $LHS$ of the rule, which would indicate that node `1` is a _"protein"_, and node `2` is a _"binding"_. Now the hierarchy will search for a matching of $LHS$ respecting the types of the nodes.
###Code
lhs_typing = {
"G": {
1: "protein",
2: "binding"
}
}
###Output
_____no_output_____
###Markdown
`regraph.library.Hierarchy` provides the method `find_matching` to find matchings of a pattern in a given graph in the hierarchy. The typing of $LHS$ should be provided to the `find_matching` method.
###Code
# Find matching of lhs without lhs_typing
instances_untyped = simple_hierarchy.find_matching("G_prime", lhs)
pos = plot_graph(simple_hierarchy.node["G_prime"].graph)
print("Instances found without pattern typing:")
for instance in instances_untyped:
print(instance)
plot_instance(simple_hierarchy.node["G_prime"].graph, lhs, instance, parent_pos=pos)
# Find matching of lhs with lhs_typing
instances = simple_hierarchy.find_matching("G_prime", lhs, lhs_typing)
print("\n\nInstances found with pattern typing:")
for instance in instances:
print(instance)
plot_instance(simple_hierarchy.node["G_prime"].graph, lhs, instance, parent_pos=pos)
###Output
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:522: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if not cb.is_string_like(edge_color) \
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:543: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if cb.is_string_like(edge_color) or len(edge_color) == 1:
/home/eugenia/anaconda3/lib/python3.6/site-packages/networkx-1.11-py3.6.egg/networkx/drawing/nx_pylab.py:724: MatplotlibDeprecationWarning: The is_string_like function was deprecated in version 2.1.
if not cb.is_string_like(label):
###Markdown
As a rewriting rule can implement addition and merging of some nodes, an appropriate typing of the $RHS$ allows specifying the typing of new nodes.~~- By default, if a typing of $RHS$ is not provided, all the nodes added and merged will be not typed. **Note:** If a graph $G$ was totally typed by some graph $T$, and a rewriting rule which transforms $G$ into $G'$ has added/merged some nodes for which there is no typing in $T$ specified, $G'$ will become only _partially_ typed by $T$ and ReGraph will raise a warning.~~- If a typing of a new node is specified in the $RHS$ typing, the node will have this type as long as it is consistent with $T$ (the homomorphism $G' \rightarrow T$ is valid).- If a typing of a merged node is specified in the $RHS$ typing, the node will have this type as long as (a) all the nodes that were merged had this type and (b) the new typing is a consistent homomorphism ($G' \rightarrow T$ is valid). In our example, we first specify the type of the new node `3` explicitly in the $RHS$ typing; further below we also perform the rewriting without an explicit $RHS$ typing, in which case `G_prime` becomes only _partially_ typed by `G`.
###Code
print("Node types in `G_prime` before rewriting: \n")
for node in simple_hierarchy.node["G_prime"].graph.nodes():
print(node, simple_hierarchy.node_type("G_prime", node))
rhs_typing = {
"G": {
3: "region"
}
}
new_hierarchy, _ = simple_hierarchy.rewrite("G_prime", rule, instances[0], lhs_typing, rhs_typing, inplace=False)
plot_graph(new_hierarchy.node["G_prime"].graph)
plot_graph(new_hierarchy.node["G"].graph)
plot_graph(new_hierarchy.node["T"].graph)
print("Node types in `G_prime` before rewriting: \n")
for node in new_hierarchy.node["G_prime"].graph.nodes():
print(node, new_hierarchy.node_type("G_prime", node))
###Output
Node types in `G_prime` before rewriting:
EGFR {'G': 'protein'}
BND_1 {'G': 'binding'}
SH2 {'G': 'region'}
Grb2 {'G': 'protein'}
3 {'G': 'region'}
###Markdown
Now, rewriting can be performed using the `regraph.library.hierarchy.Hierarchy.rewrite` method. It takes as input the id of the graph to rewrite, a rule, an instance of the LHS of the rule ($LHS \rightarrow G$), and a typing of $LHS$ and $RHS$.**Note:** In case the graph to be rewritten is not typed by any other graph in the hierarchy, the $LHS$ and $RHS$ typings are not required.
###Code
newer_hierarchy, _ = simple_hierarchy.rewrite("G_prime", rule, instances[0], lhs_typing, inplace=False, strict=False)
print("Node types in `G_prime` after rewriting: \n")
for node in newer_hierarchy.node["G_prime"].graph.nodes():
print(node, newer_hierarchy.node_type("G_prime", node))
plot_graph(newer_hierarchy.node["G_prime"].graph)
plot_graph(newer_hierarchy.node["G"].graph)
plot_graph(newer_hierarchy.node["T"].graph)
print("Node types in `G_prime` after rewriting: \n")
for node in new_hierarchy.node["G_prime"].graph.nodes():
print(node, new_hierarchy.node_type("G_prime", node))
###Output
Node types in `G_prime` after rewriting:
EGFR {'G': 'protein'}
BND_1 {'G': 'binding'}
SH2 {'G': 'region'}
Grb2 {'G': 'protein'}
3 {'G': 'region'}
###Markdown
Later on, if a node from $G$ is not typed in $T$, we can specify a typing for this node. In the example we type the node `3` as a `region` in `G`. It is also possible to remove a graph from the hierarchy using the `regraph.library.hierarchy.Hierarchy.remove_graph` method. It takes as an input the id of a graph to remove, and if the argument `reconnect` is set to `True`, it reconnects all the graphs typed by the graph being removed to the graphs typing it. In our example, if we remove graph `G` from the hierarchy, `G_prime` becomes directly typed by `T`.
###Code
simple_hierarchy.remove_graph("G", reconnect=True)
print(simple_hierarchy)
print("New node types in 'G_prime':\n")
for node in simple_hierarchy.node["G_prime"].graph.nodes():
print(node, ": ", simple_hierarchy.node_type("G_prime", node))
###Output
Graphs (directed == True):
Nodes:
Graph: T {'name': {'Agent interaction'}}
Graph: G_prime {'name': {'EGFR and Grb2 binding'}}
Typing homomorphisms:
G_prime -> T: total == True
Relations:
attributes :
{}
New node types in 'G_prime':
EGFR : {'T': 'agent'}
BND_1 : {'T': 'action'}
SH2 : {'T': 'agent'}
Grb2 : {'T': 'agent'}
###Markdown
2. Example: advanced hierarchyThe following example illustrates a more sophisticated hierarchy. 2.1. DAG hierarchy
###Code
hierarchy = Hierarchy()
colors = nx.DiGraph()
colors.add_nodes_from([
"green", "red"
])
colors.add_edges_from([
("red", "green"),
("red", "red"),
("green", "green")
])
hierarchy.add_graph("colors", colors, {"id": "https://some_url"})
shapes = nx.DiGraph()
shapes.add_nodes_from(["circle", "square"])
shapes.add_edges_from([
("circle", "square"),
("square", "circle"),
("circle", "circle")
])
hierarchy.add_graph("shapes", shapes)
quality = nx.DiGraph()
quality.add_nodes_from(["good", "bad"])
quality.add_edges_from([
("bad", "bad"),
("bad", "good"),
("good", "good")
])
hierarchy.add_graph("quality", quality)
g1 = nx.DiGraph()
g1.add_nodes_from([
"red_circle",
"red_square",
"some_circle",
])
g1.add_edges_from([
("red_circle", "red_square"),
("red_circle", "red_circle"),
("red_square", "red_circle"),
("some_circle", "red_circle")
])
g1_colors = {
"red_circle": "red",
"red_square": "red",
}
g1_shapes = {
"red_circle": "circle",
"red_square": "square",
"some_circle": "circle"
}
hierarchy.add_graph("g1", g1)
hierarchy.add_typing("g1", "colors", g1_colors, total=False)
hierarchy.add_typing("g1", "shapes", g1_shapes, total=False)
g2 = nx.DiGraph()
g2.add_nodes_from([
"good_circle",
"good_square",
"bad_circle",
"good_guy",
"some_node"
])
g2.add_edges_from([
("good_circle", "good_square"),
("good_square", "good_circle"),
("bad_circle", "good_circle"),
("bad_circle", "bad_circle"),
("some_node", "good_circle"),
("good_guy", "good_square")
])
g2_shapes = {
"good_circle": "circle",
"good_square": "square",
"bad_circle": "circle"
}
g2_quality = {
"good_circle": "good",
"good_square": "good",
"bad_circle": "bad",
"good_guy": "good"
}
hierarchy.add_graph("g2", g2)
hierarchy.add_typing("g2", "shapes", g2_shapes)
hierarchy.add_typing("g2", "quality", g2_quality)
g3 = nx.DiGraph()
g3.add_nodes_from([
"good_red_circle",
"bad_red_circle",
"good_red_square",
"some_circle_node",
"some_strange_node"
])
g3.add_edges_from([
("bad_red_circle", "good_red_circle"),
("good_red_square", "good_red_circle"),
("good_red_circle", "good_red_square")
])
g3_g1 = {
"good_red_circle": "red_circle",
"bad_red_circle": "red_circle",
"good_red_square": "red_square"
}
g3_g2 = {
"good_red_circle": "good_circle",
"bad_red_circle": "bad_circle",
"good_red_square": "good_square",
}
hierarchy.add_graph("g3", g3)
hierarchy.add_typing("g3", "g1", g3_g1)
hierarchy.add_typing("g3", "g2", g3_g2)
lhs = nx.DiGraph()
lhs.add_nodes_from([1, 2])
lhs.add_edges_from([(1, 2)])
p = nx.DiGraph()
p.add_nodes_from([1, 11, 2])
p.add_edges_from([(1, 2)])
rhs = copy.deepcopy(p)
rhs.add_nodes_from([3])
p_lhs = {1: 1, 11: 1, 2: 2}
p_rhs = {1: 1, 11: 11, 2: 2}
r1 = Rule(p, lhs, rhs, p_lhs, p_rhs)
hierarchy.add_rule("r1", r1, {"desc": "Rule 1: typed by two graphs"})
lhs_typing1 = {1: "red_circle", 2: "red_square"}
rhs_typing1 = {3: "red_circle"}
# rhs_typing1 = {1: "red_circle", 11: "red_circle", 2: "red_square"}
lhs_typing2 = {1: "good_circle", 2: "good_square"}
rhs_typing2 = {3: "bad_circle"}
# rhs_typing2 = {1: "good_circle", 11: "good_circle", 2: "good_square"}
hierarchy.add_rule_typing("r1", "g1", lhs_typing1, rhs_typing1)
hierarchy.add_rule_typing("r1", "g2", lhs_typing2, rhs_typing2)
###Output
_____no_output_____
###Markdown
Some of the graphs in the hierarchy are now typed by multiple graphs, which is reflected in the types of nodes, as in the example below:
###Code
print("Node types in G3:\n")
for node in hierarchy.node["g3"].graph.nodes():
print(node, hierarchy.node_type("g3", node))
hierarchy.add_node_type("g3", "some_circle_node", {"g1": "red_circle", "g2": "good_circle"})
hierarchy.add_node_type("g3", "some_strange_node", {"g2": "some_node"})
print("Node types in G3:\n")
for node in hierarchy.node["g3"].graph.nodes():
print(node, hierarchy.node_type("g3", node))
###Output
Node types in G3:
good_red_circle {'g1': 'red_circle', 'g2': 'good_circle'}
bad_red_circle {'g1': 'red_circle', 'g2': 'bad_circle'}
good_red_square {'g1': 'red_square', 'g2': 'good_square'}
some_circle_node {'g1': 'red_circle', 'g2': 'good_circle'}
some_strange_node {'g2': 'some_node'}
###Markdown
Notice that as `G3` is partially typed by both `G1` and `G2`, not all the nodes have types in both `G1` and `G2`. For example, node `some_strange_node` is typed only by `some_node` in `G2`, but is not typed by any node in `G1`. 2.2. Rules as nodes of a hierarchyHaving constructed a sophisticated rewriting rule typed by some graphs in the hierarchy, one may want to store this rule and to be able to propagate any changes that happen in the hierarchy to the rule as well. ReGraph's `regraph.library.hierarchy.Hierarchy` allows adding graph rewriting rules as nodes in the hierarchy. Rules in the hierarchy can be (partially) typed by graphs. **Note:** nothing can be typed by a rule in the hierarchy. The example below inspects the rule `r1` added to the hierarchy earlier and its typings by the graphs `g1` and `g2`:
###Code
print(hierarchy)
print(hierarchy.edge["r1"]["g1"].lhs_mapping)
print(hierarchy.edge["r1"]["g1"].rhs_mapping)
print(hierarchy.edge["r1"]["g2"].lhs_mapping)
print(hierarchy.edge["r1"]["g2"].rhs_mapping)
###Output
{1: 'red_circle', 2: 'red_square'}
{3: 'red_circle', 1: 'red_circle', 11: 'red_circle', 2: 'red_square'}
{1: 'good_circle', 2: 'good_square'}
{3: 'bad_circle', 1: 'good_circle', 11: 'good_circle', 2: 'good_square'}
###Markdown
2.3. Rewriting and propagationWe now show how graph rewriting can be performed in such a hierarchy. In the previous example we performed graph rewriting on the top level of the hierarchy, meaning that the graph that was rewritten did not type any other graph. The following example illustrates what happens if we rewrite a graph typing some other graphs. The ReGraph hierarchy is able to propagate the changes made by rewriting on any level to all the graphs (as well as the rules) typed by the one subject to rewriting.
###Code
lhs = nx.DiGraph()
lhs.add_nodes_from(["a", "b"])
lhs.add_edges_from([
("a", "b"),
("b", "a")
])
p = nx.DiGraph()
p.add_nodes_from(["a", "a1", "b"])
p.add_edges_from([
("a", "b"),
("a1", "b")
])
rhs = copy.deepcopy(p)
rule = Rule(
p, lhs, rhs,
{"a": "a", "a1": "a", "b": "b"},
{"a": "a", "a1": "a1", "b": "b"},
)
instances = hierarchy.find_matching("shapes", lhs)
print("Instances:")
for instance in instances:
print(instance)
plot_instance(hierarchy.node["shapes"].graph, rule.lhs, instance)
_, m = hierarchy.rewrite("shapes", rule, {"a": "circle", "b": "square"})
print(hierarchy)
sep = "========================================\n\n"
print("Graph 'shapes':\n")
print("===============")
print_graph(hierarchy.node["shapes"].graph)
print(sep)
print("Graph 'g1':\n")
print("===========")
print_graph(hierarchy.node["g1"].graph)
print(sep)
print("Graph 'g2':\n")
print("===========")
print_graph(hierarchy.node["g2"].graph)
print(sep)
print("Graph 'g3':\n")
print("===========")
print_graph(hierarchy.node["g3"].graph)
print(sep)
print("Rule 'r1':\n")
print("===========")
print("\nLHS:")
print_graph(hierarchy.node["r1"].rule.lhs)
print("\nP:")
print_graph(hierarchy.node["r1"].rule.p)
print("\nRHS:")
print_graph(hierarchy.node["r1"].rule.rhs)
###Output
Graph 'shapes':
===============
Nodes:
circle : {}
square : {}
circle1 : {}
Edges:
circle -> square : {}
circle -> circle : {}
circle -> circle1 : {}
circle1 -> square : {}
circle1 -> circle : {}
circle1 -> circle1 : {}
========================================
Graph 'g1':
===========
Nodes:
red_circle : {}
red_square : {}
some_circle : {}
red_circle1 : {}
some_circle1 : {}
Edges:
red_circle -> red_square : {}
red_circle -> red_circle : {}
red_circle -> red_circle1 : {}
some_circle -> red_circle : {}
some_circle -> red_circle1 : {}
red_circle1 -> red_square : {}
red_circle1 -> red_circle : {}
red_circle1 -> red_circle1 : {}
some_circle1 -> red_circle : {}
some_circle1 -> red_circle1 : {}
========================================
Graph 'g2':
===========
Nodes:
good_circle : {}
good_square : {}
bad_circle : {}
good_guy : {}
some_node : {}
good_circle1 : {}
bad_circle1 : {}
Edges:
good_circle -> good_square : {}
bad_circle -> good_circle : {}
bad_circle -> bad_circle : {}
bad_circle -> good_circle1 : {}
bad_circle -> bad_circle1 : {}
good_guy -> good_square : {}
some_node -> good_circle : {}
some_node -> good_circle1 : {}
good_circle1 -> good_square : {}
bad_circle1 -> good_circle : {}
bad_circle1 -> bad_circle : {}
bad_circle1 -> good_circle1 : {}
bad_circle1 -> bad_circle1 : {}
========================================
Graph 'g3':
===========
Nodes:
good_red_circle : {}
bad_red_circle : {}
good_red_square : {}
some_circle_node : {}
some_strange_node : {}
good_red_circle1 : {}
bad_red_circle1 : {}
some_circle_node1 : {}
Edges:
good_red_circle -> good_red_square : {}
bad_red_circle -> good_red_circle : {}
bad_red_circle -> good_red_circle1 : {}
good_red_circle1 -> good_red_square : {}
bad_red_circle1 -> good_red_circle : {}
bad_red_circle1 -> good_red_circle1 : {}
========================================
Rule 'r1':
===========
LHS:
Nodes:
1 : {}
2 : {}
Edges:
1 -> 2 : {}
P:
Nodes:
1 : {}
11 : {}
2 : {}
Edges:
1 -> 2 : {}
RHS:
Nodes:
1 : {}
11 : {}
2 : {}
3 : {}
Edges:
1 -> 2 : {}
###Markdown
2.4 Rewriting with the rules in the hierarchyReGraph provides utils that allow applying rules stored in the hierarchy to the graph nodes of the hierarchy. In the following example the rule `r1` is applied to rewrite the graph `g3`.
###Code
print(hierarchy.rule_lhs_typing["r1"]["g1"])
print(hierarchy.rule_rhs_typing["r1"]["g1"])
print(hierarchy.typing["g3"]["g1"])
instances = hierarchy.find_rule_matching("g3", "r1")
hierarchy.apply_rule(
"g3",
"r1",
instances[0]
)
print_graph(hierarchy.node["g3"].graph)
###Output
Nodes:
good_red_circle : {}
bad_red_circle : {}
good_red_square : {}
some_circle_node : {}
some_strange_node : {}
good_red_circle1 : {}
bad_red_circle1 : {}
some_circle_node1 : {}
good_red_circle2 : {}
3 : {}
Edges:
good_red_circle -> good_red_square : {}
bad_red_circle -> good_red_circle : {}
bad_red_circle -> good_red_circle1 : {}
bad_red_circle -> good_red_circle2 : {}
good_red_circle1 -> good_red_square : {}
bad_red_circle1 -> good_red_circle : {}
bad_red_circle1 -> good_red_circle1 : {}
bad_red_circle1 -> good_red_circle2 : {}
###Markdown
2.5 Export/load hierarchyReGraph provides the following methods for loading and exporting your hierarchy:- `regraph.library.hierarchy.Hierarchy.to_json` creates a JSON representation of the hierarchy;- `regraph.library.hierarchy.Hierarchy.from_json` loads a hierarchy from a JSON representation (returns a new `Hierarchy` object); - `regraph.library.hierarchy.Hierarchy.export` exports the hierarchy to a file (JSON format);- `regraph.library.hierarchy.Hierarchy.load` loads a hierarchy from a .json file (also returns a new object).
###Code
hierarchy_json = hierarchy.to_json()
new_hierarchy = Hierarchy.from_json(hierarchy_json, directed=True)
new_hierarchy == hierarchy
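# File-based variants described above (sketch only; the exact signatures are assumed
# from the description and have not been verified against this version of ReGraph):
# hierarchy.export("hierarchy.json")
# loaded_hierarchy = Hierarchy.load("hierarchy.json", directed=True)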
###Output
_____no_output_____
###Markdown
3. Example: advanced rule and rewritingBy default rewriting requires all the nodes in the result of the rewriting to be totally typed by all the graphs typing the graph subject to rewriting. If parameter `total` in the rewriting is set to `False`, rewriting is allowed to produce untyped nodes.In addition, rewriting is available in these possible configurations:1. **Strong typing of a rule** (default) autocompletes the types of the nodes in a rule with the respective types of the matching.~~2. **Weak typing of a rule:** (parameter `strong=False`) only checks the consistency of the types given explicitly by a rule, and allows to remove node types. If typing of a node in RHS does not contain explicit typing by some typing graph -- this node will be not typed by this graph in the result.~~~~**Note: ** Weak typing should be used with parameter `total` set to False, otherwise deletion of node types will be not possible.~~Examples below illustrate some interesting use-cases of rewriting with different rule examples.
###Code
base = nx.DiGraph()
base.add_nodes_from([
("circle", {"a": {1, 2}, "b": {3, 4}}),
("square", {"a": {3, 4}, "b": {1, 2}})
])
base.add_edges_from([
("circle", "circle", {"c": {1, 2}}),
("circle", "square", {"d": {1, 2}}),
])
little_hierarchy = Hierarchy()
little_hierarchy.add_graph("base", base)
graph = nx.DiGraph()
graph.add_nodes_from([
("c1", {"a": {1}}),
("c2", {"a": {2}}),
"s1",
"s2",
("n1", {"x":{1}})
])
graph.add_edges_from([
("c1", "c2", {"c": {1}}),
("c2", "s1"),
("s2", "n1", {"y": {1}})
])
little_hierarchy.add_graph("graph", graph)
little_hierarchy.add_typing(
"graph", "base",
{
"c1": "circle",
"c2": "circle",
"s1": "square",
"s2": "square"
}
)
###Output
_____no_output_____
###Markdown
3.1. Strong typing of a ruleThe main idea of strong typing is that the typing of the LHS and RHS can be inferred from the matching and autocompleted. It does not allow deletion of types, as every node preserved throughout the rewriting keeps its original type.
###Code
# In this rule we match any pair of nodes and try to add an edge between them
# the rewriting will fail every time the edge is not allowed between two nodes
# by its typing graphs
# define a rule
lhs = nx.DiGraph()
lhs.add_nodes_from([1, 2])
p = copy.deepcopy(lhs)
rhs = copy.deepcopy(lhs)
rhs.add_edges_from([(1, 2)])
rule = Rule(p, lhs, rhs)
instances = little_hierarchy.find_matching(
"graph",
rule.lhs
)
current_hierarchy = copy.deepcopy(little_hierarchy)
for instance in instances:
try:
current_hierarchy.rewrite(
"graph",
rule,
instance
)
print("Instance rewritten: ", instance)
print()
except Exception as e:
print("\nFailed to rewrite an instance: ", instance)
print("Addition of an edge was not allowed, error message received:")
print("Exception type: ", type(e))
print("Message: ", e)
print()
print_graph(current_hierarchy.node["graph"].graph)
print("\n\nTypes of nodes after rewriting:")
for node in current_hierarchy.node["graph"].graph.nodes():
print(node, current_hierarchy.node_type("graph", node))
lhs = nx.DiGraph()
lhs.add_nodes_from([1, 2])
p = copy.deepcopy(lhs)
rhs = nx.DiGraph()
rhs.add_nodes_from([1])
rule = Rule(p, lhs, rhs, p_rhs={1: 1, 2: 1})
instances = little_hierarchy.find_matching(
"graph",
rule.lhs
)
for instance in instances:
try:
current_hierarchy, _ = little_hierarchy.rewrite(
"graph",
rule,
instance,
inplace=False
)
print("Instance rewritten: ", instance)
print_graph(current_hierarchy.node["graph"].graph)
print("\n\nTypes of nodes after rewriting:")
for node in current_hierarchy.node["graph"].graph.nodes():
print(node, current_hierarchy.node_type("graph", node))
print()
except Exception as e:
print("\nFailed to rewrite an instance: ", instance)
print("Merge was not allowed, error message received:")
print("Exception type: ", type(e))
print("Message: ", e)
print()
###Output
Instance rewritten: {1: 'c1', 2: 'c2'}
Nodes:
s1 : {}
s2 : {}
n1 : {'x': {1}}
c1_c2 : {'a': {1, 2}}
Edges:
s2 -> n1 : {'y': {1}}
c1_c2 -> c1_c2 : {'c': {1}}
c1_c2 -> s1 : {}
Types of nodes after rewriting:
s1 {'base': 'square'}
s2 {'base': 'square'}
n1 {}
c1_c2 {'base': ['circle']}
Instance rewritten: {2: 'c1', 1: 'c2'}
Nodes:
s1 : {}
s2 : {}
n1 : {'x': {1}}
c2_c1 : {'a': {1, 2}}
Edges:
s2 -> n1 : {'y': {1}}
c2_c1 -> c2_c1 : {'c': {1}}
c2_c1 -> s1 : {}
Types of nodes after rewriting:
s1 {'base': 'square'}
s2 {'base': 'square'}
n1 {}
c2_c1 {'base': ['circle']}
Failed to rewrite an instance: {1: 'c1', 2: 's1'}
Merge was not allowed, error message received:
Exception type: <class 'regraph.exceptions.RewritingError'>
Message: Inconsistent typing of the rule: node '1' from the preserved part is typed by a graph 'base' as 'circle' from the lhs and as a 'square, circle' from the rhs.
Failed to rewrite an instance: {2: 'c1', 1: 's1'}
Merge was not allowed, error message received:
Exception type: <class 'regraph.exceptions.RewritingError'>
Message: Inconsistent typing of the rule: node '1' from the preserved part is typed by a graph 'base' as 'square' from the lhs and as a 'square, circle' from the rhs.
Failed to rewrite an instance: {1: 'c1', 2: 's2'}
Merge was not allowed, error message received:
Exception type: <class 'regraph.exceptions.RewritingError'>
Message: Inconsistent typing of the rule: node '1' from the preserved part is typed by a graph 'base' as 'circle' from the lhs and as a 'square, circle' from the rhs.
Failed to rewrite an instance: {2: 'c1', 1: 's2'}
Merge was not allowed, error message received:
Exception type: <class 'regraph.exceptions.RewritingError'>
Message: Inconsistent typing of the rule: node '1' from the preserved part is typed by a graph 'base' as 'square' from the lhs and as a 'square, circle' from the rhs.
Instance rewritten: {1: 'c1', 2: 'n1'}
Nodes:
c2 : {'a': {2}}
s1 : {}
s2 : {}
c1_n1 : {'a': {1}, 'x': {1}}
Edges:
c2 -> s1 : {}
s2 -> c1_n1 : {'y': {1}}
c1_n1 -> c2 : {'c': {1}}
Types of nodes after rewriting:
c2 {'base': 'circle'}
s1 {'base': 'square'}
s2 {'base': 'square'}
c1_n1 {'base': ['circle']}
Instance rewritten: {2: 'c1', 1: 'n1'}
Nodes:
c2 : {'a': {2}}
s1 : {}
s2 : {}
n1_c1 : {'x': {1}, 'a': {1}}
Edges:
c2 -> s1 : {}
s2 -> n1_c1 : {'y': {1}}
n1_c1 -> c2 : {'c': {1}}
Types of nodes after rewriting:
c2 {'base': 'circle'}
s1 {'base': 'square'}
s2 {'base': 'square'}
n1_c1 {'base': ['circle']}
Failed to rewrite an instance: {1: 'c2', 2: 's1'}
Merge was not allowed, error message received:
Exception type: <class 'regraph.exceptions.RewritingError'>
Message: Inconsistent typing of the rule: node '1' from the preserved part is typed by a graph 'base' as 'circle' from the lhs and as a 'square, circle' from the rhs.
Failed to rewrite an instance: {2: 'c2', 1: 's1'}
Merge was not allowed, error message received:
Exception type: <class 'regraph.exceptions.RewritingError'>
Message: Inconsistent typing of the rule: node '1' from the preserved part is typed by a graph 'base' as 'square' from the lhs and as a 'square, circle' from the rhs.
Failed to rewrite an instance: {1: 'c2', 2: 's2'}
Merge was not allowed, error message received:
Exception type: <class 'regraph.exceptions.RewritingError'>
Message: Inconsistent typing of the rule: node '1' from the preserved part is typed by a graph 'base' as 'circle' from the lhs and as a 'square, circle' from the rhs.
Failed to rewrite an instance: {2: 'c2', 1: 's2'}
Merge was not allowed, error message received:
Exception type: <class 'regraph.exceptions.RewritingError'>
Message: Inconsistent typing of the rule: node '1' from the preserved part is typed by a graph 'base' as 'square' from the lhs and as a 'square, circle' from the rhs.
Instance rewritten: {1: 'c2', 2: 'n1'}
Nodes:
c1 : {'a': {1}}
s1 : {}
s2 : {}
c2_n1 : {'a': {2}, 'x': {1}}
Edges:
c1 -> c2_n1 : {'c': {1}}
s2 -> c2_n1 : {'y': {1}}
c2_n1 -> s1 : {}
Types of nodes after rewriting:
c1 {'base': 'circle'}
s1 {'base': 'square'}
s2 {'base': 'square'}
c2_n1 {'base': ['circle']}
Instance rewritten: {2: 'c2', 1: 'n1'}
Nodes:
c1 : {'a': {1}}
s1 : {}
s2 : {}
n1_c2 : {'x': {1}, 'a': {2}}
Edges:
c1 -> n1_c2 : {'c': {1}}
s2 -> n1_c2 : {'y': {1}}
n1_c2 -> s1 : {}
Types of nodes after rewriting:
c1 {'base': 'circle'}
s1 {'base': 'square'}
s2 {'base': 'square'}
n1_c2 {'base': ['circle']}
Instance rewritten: {1: 's1', 2: 's2'}
Nodes:
c1 : {'a': {1}}
c2 : {'a': {2}}
n1 : {'x': {1}}
s1_s2 : {}
Edges:
c1 -> c2 : {'c': {1}}
c2 -> s1_s2 : {}
s1_s2 -> n1 : {'y': {1}}
Types of nodes after rewriting:
c1 {'base': 'circle'}
c2 {'base': 'circle'}
n1 {}
s1_s2 {'base': ['square']}
Instance rewritten: {2: 's1', 1: 's2'}
Nodes:
c1 : {'a': {1}}
c2 : {'a': {2}}
n1 : {'x': {1}}
s2_s1 : {}
Edges:
c1 -> c2 : {'c': {1}}
c2 -> s2_s1 : {}
s2_s1 -> n1 : {'y': {1}}
Types of nodes after rewriting:
c1 {'base': 'circle'}
c2 {'base': 'circle'}
n1 {}
s2_s1 {'base': ['square']}
Instance rewritten: {1: 's1', 2: 'n1'}
Nodes:
c1 : {'a': {1}}
c2 : {'a': {2}}
s2 : {}
s1_n1 : {'x': {1}}
Edges:
c1 -> c2 : {'c': {1}}
c2 -> s1_n1 : {}
s2 -> s1_n1 : {'y': {1}}
Types of nodes after rewriting:
c1 {'base': 'circle'}
c2 {'base': 'circle'}
s2 {'base': 'square'}
s1_n1 {'base': ['square']}
Instance rewritten: {2: 's1', 1: 'n1'}
Nodes:
c1 : {'a': {1}}
c2 : {'a': {2}}
s2 : {}
n1_s1 : {'x': {1}}
Edges:
c1 -> c2 : {'c': {1}}
c2 -> n1_s1 : {}
s2 -> n1_s1 : {'y': {1}}
Types of nodes after rewriting:
c1 {'base': 'circle'}
c2 {'base': 'circle'}
s2 {'base': 'square'}
n1_s1 {'base': ['square']}
Instance rewritten: {1: 's2', 2: 'n1'}
Nodes:
c1 : {'a': {1}}
c2 : {'a': {2}}
s1 : {}
s2_n1 : {'x': {1}}
Edges:
c1 -> c2 : {'c': {1}}
c2 -> s1 : {}
s2_n1 -> s2_n1 : {'y': {1}}
Types of nodes after rewriting:
c1 {'base': 'circle'}
c2 {'base': 'circle'}
s1 {'base': 'square'}
s2_n1 {'base': ['square']}
Instance rewritten: {2: 's2', 1: 'n1'}
Nodes:
c1 : {'a': {1}}
c2 : {'a': {2}}
s1 : {}
n1_s2 : {'x': {1}}
Edges:
c1 -> c2 : {'c': {1}}
c2 -> s1 : {}
n1_s2 -> n1_s2 : {'y': {1}}
Types of nodes after rewriting:
c1 {'base': 'circle'}
c2 {'base': 'circle'}
s1 {'base': 'square'}
n1_s2 {'base': ['square']}
###Markdown
~~ 3.3. Weak typing of a rule~~~~If rewriting parameter `strong_typing` is set to `False`, the weak typing of a rule is applied. All the types of the nodes in the RHS of the rule which do not have explicitly specified types will be removed.~~ 4. Merging with a hierarchy 4.1. Example: merging disjoint hierarchies (merge by ids)
###Code
g1 = nx.DiGraph()
g1.add_node(1)
g2 = copy.deepcopy(g1)
g3 = copy.deepcopy(g1)
g4 = copy.deepcopy(g1)
hierarchy = Hierarchy()
hierarchy.add_graph(1, g1, graph_attrs={"name": {"Main hierarchy"}})
hierarchy.add_graph(2, g2, graph_attrs={"name": {"Base hierarchy"}})
hierarchy.add_graph(3, g3)
hierarchy.add_graph(4, g4)
hierarchy.add_typing(1, 2, {1: 1})
hierarchy.add_typing(1, 4, {1: 1})
hierarchy.add_typing(2, 3, {1: 1})
hierarchy.add_typing(4, 3, {1: 1})
hierarchy1 = copy.deepcopy(hierarchy)
hierarchy2 = copy.deepcopy(hierarchy)
hierarchy3 = copy.deepcopy(hierarchy)
h1 = nx.DiGraph()
h1.add_node(2)
h2 = copy.deepcopy(h1)
h3 = copy.deepcopy(h1)
h4 = copy.deepcopy(h1)
other_hierarchy = Hierarchy()
other_hierarchy.add_graph(1, h1, graph_attrs={"name": {"Main hierarchy"}})
other_hierarchy.add_graph(2, h2, graph_attrs={"name": {"Base hierarchy"}})
other_hierarchy.add_graph(3, h3)
other_hierarchy.add_graph(4, h4)
other_hierarchy.add_typing(1, 2, {2: 2})
other_hierarchy.add_typing(1, 4, {2: 2})
other_hierarchy.add_typing(2, 3, {2: 2})
other_hierarchy.add_typing(4, 3, {2: 2})
hierarchy1.merge_by_id(other_hierarchy)
print(hierarchy1)
###Output
Graphs (directed == True):
Nodes:
Graph: 1 {'name': {'Main hierarchy'}}
Graph: 2 {'name': {'Base hierarchy'}}
Graph: 3 {}
Graph: 4 {}
Graph: 1_1 {'name': {'Main hierarchy'}}
Graph: 2_1 {'name': {'Base hierarchy'}}
Graph: 3_1 {}
Graph: 4_1 {}
Typing homomorphisms:
1 -> 2: total == False
1 -> 4: total == False
2 -> 3: total == False
4 -> 3: total == False
1_1 -> 4_1: total == False
1_1 -> 2_1: total == False
2_1 -> 3_1: total == False
4_1 -> 3_1: total == False
Relations:
attributes :
{}
###Markdown
4.2. Example: merging hierarchies with common nodes
###Code
# Now we make graph 1 in the two hierarchies identical (same node set)
hierarchy2.node[1].graph.add_node(2)
other_hierarchy.node[1].graph.add_node(1)
hierarchy2.merge_by_id(other_hierarchy)
print(hierarchy2)
# Now make the hierarchies have two common nodes (graphs) with an edge between them
hierarchy3.node[1].graph.add_node(2)
other_hierarchy.node[1].graph.add_node(1)
hierarchy3.node[2].graph.add_node(2)
other_hierarchy.node[2].graph.add_node(1)
hierarchy4 = copy.deepcopy(hierarchy3)
hierarchy3.merge_by_id(other_hierarchy)
print(hierarchy3)
print(hierarchy3.edge[1][2].mapping)
hierarchy4.merge_by_attr(other_hierarchy, "name")
print(hierarchy4)
print(hierarchy4.edge['1_1']['2_2'].mapping)
###Output
Graphs (directed == True):
Nodes:
Graph: 3 {}
Graph: 4 {}
Graph: 1_1 {'name': {'Main hierarchy'}}
Graph: 2_2 {'name': {'Base hierarchy'}}
Graph: 3_1 {}
Graph: 4_1 {}
Typing homomorphisms:
4 -> 3: total == False
1_1 -> 4: total == False
1_1 -> 2_2: total == False
1_1 -> 4_1: total == False
2_2 -> 3: total == False
2_2 -> 3_1: total == False
4_1 -> 3_1: total == False
Relations:
attributes :
{}
{1: 1, 2: 2}
|
jupyter notebooks (to be deprecated)/delete_output.ipynb | ###Markdown
Delete Output Folder contents Quick script to delete the output folder contents so that we can re-run the max_min scripts
###Code
import os
import shutil
folder_name = [
'Dec20_data/Interfraction/Interfraction 3D 0.8',
'Dec20_data/Interfraction/Interfraction DIXON 2.0',
'Dec20_data/Intrafraction 3D vs DIXON HR IP 2.0'
]
# gets output folders
output_dir_list = []
for i in range(len(folder_name)):
output_dir_list.append(
os.path.join(os.getcwd(), folder_name[i], "output")
)
for n in range(len(output_dir_list)):
# remove entire folder
print('Clearing out {}'.format(output_dir_list[n]))
shutil.rmtree(output_dir_list[n])
# remake blank folder
os.makedirs(output_dir_list[n], exist_ok=True)
###Output
Clearing out /mnt/3602F83B02F80223/MR Linac/Dec20_data/Interfraction/Interfraction 3D 0.8/output
Clearing out /mnt/3602F83B02F80223/MR Linac/Dec20_data/Interfraction/Interfraction DIXON 2.0/output
Clearing out /mnt/3602F83B02F80223/MR Linac/Dec20_data/Intrafraction 3D vs DIXON HR IP 2.0/output
|
notebooks/stash/ResNet.ipynb | ###Markdown
Pure ResNet training on CIFAR-10 `$ sudo rmmod nvidia_uvm` `$ sudo modprobe nvidia_uvm`
###Code
import numpy as np
import torch
from torch import nn
from torch.utils.data import DataLoader
import torch.backends.cudnn as cudnn
from torch.optim.lr_scheduler import CosineAnnealingLR
import torchvision
from torchvision.transforms import transforms
from torchvision import models
# check if gpu training is available
if torch.cuda.is_available():
device = torch.device('cuda')
cudnn.deterministic = True
cudnn.benchmark = True
else:
device = torch.device('cpu')
print("Using...", device)
model = models.resnet50(pretrained=False, num_classes=10)
model.to(device)
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
batch_size = 256
n_epoch=400
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
checkpoint_name="resnet_ch3"
from atm.simclr.utils import save_checkpoint
import os
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
from torch.utils.tensorboard import SummaryWriter
tb = SummaryWriter()
def get_num_correct(preds, labels):
return preds.argmax(dim=1).eq(labels).sum().item()
for epoch in range(n_epoch): # loop over the dataset multiple times
running_loss = 0.0
total_correct = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data[0].to(device), data[1].to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
preds = model(inputs)
loss = criterion(preds, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
total_correct+= get_num_correct(preds, labels)
tb.add_scalar("Loss", running_loss, epoch)
tb.add_scalar("Correct", total_correct, epoch)
tb.add_scalar("Accuracy", total_correct/ len(trainset), epoch)
print('Finished Training')
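# Note: this checkpoint is saved once, after the training loop; `epoch` still holds the
# value from the final iteration, so only the last model state is written to disk.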
save_checkpoint({
'epoch': n_epoch,
'state_dict': model.state_dict(),
'optimizer': optimizer.state_dict(),
}, is_best=False, filename=os.path.join('./', f"resnet_ch3_{epoch}.pth"))
loss
import argparse
args = argparse.Namespace()
args.data='./datasets'
args.dataset_name='cifar10'
args.arch='resnet50'
args.workers=1
args.epochs=300
args.batch_size=256
args.lr=0.02
args.wd=0.0005
args.disable_cuda=False
args.fp16_precision=True
args.out_dim=128
args.log_every_n_steps=100
args.temperature=0.07
args.n_views = 2
args.gpu_index=0
args.device='cuda' if torch.cuda.is_available() else 'cpu'
print("Using device:", args.device)
assert args.n_views == 2, "Only two view training is supported. Please use --n-views 2."
# check if gpu training is available
if not args.disable_cuda and torch.cuda.is_available():
args.device = torch.device('cuda')
cudnn.deterministic = True
cudnn.benchmark = True
else:
args.device = torch.device('cpu')
args.gpu_index = -1
import pickle
import numpy as np
ddir = "../../tonemap/bf_data/Nair_and_Abraham_2010/"
fn = ddir + "all_gals.pickle"
all_gals = pickle.load(open(fn, "rb"))
all_gals = all_gals[1:] # Why the first galaxy image is NaN?
good_gids = np.array([gal['img_name'] for gal in all_gals])
from astrobf.utils.misc import load_Nair
cat_data = load_Nair(ddir + "catalog/table2.dat")
# pd dataframe
cat = cat_data[cat_data['ID'].isin(good_gids)]
tmo_params = {'b': 6.0, 'c': 3.96, 'dl': 9.22, 'dh': 2.45}
###Output
_____no_output_____
###Markdown
DataLoader
###Code
import matplotlib.pyplot as plt
from PIL import Image
from typing import Any, Callable, Optional, Tuple
from torchvision.datasets.vision import VisionDataset
from functools import partial
from astrobf.tmo import Mantiuk_Seidel
class TonemapImageDataset(VisionDataset):
def __init__(self,
data_array,
tmo,
labels: Optional = None,
train: bool=True,
transform: Optional[Callable] = None,
target_transform: Optional[Callable] = None,):
self._array = data_array
self._good_gids = np.array([gal['img_name'] for gal in data_array])
self.img_labels = labels
self.transform = transform
self.target_transform = target_transform
self.tmo = tmo
self._bad_tmo=False
def _apply_tm(self, image):
try:
return self.tmo(image)
except ZeroDivisionError:
print("division by zero. Probably bad choice of TM parameters")
self._bad_tmo=True
return image
def _to_8bit(self, image):
"""
Normalize per image (or use global min max??)
"""
image = (image - image.min())/image.ptp()
image *= 255
return image.astype('uint8')
def __len__(self) -> int:
return len(self._array)
def __getitem__(self, idx: int) -> Tuple[Any, Any]:
"""
For super
"""
image, _segmap, weight = self._array[idx]['data']
image[~_segmap.astype(bool)] = 0#np.nan # Is it OK to have nan?
image[image < 0] = 0
image = self._to_8bit(self._apply_tm(image))
image = Image.fromarray(image)
label = self.img_labels[idx]
if self.transform is not None:
image = self.transform(image)
return image, label
from torch.utils.data import SubsetRandomSampler
args.batch_size = 512
# data prepare
frac_train = 0.8
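# Note: TonemapImageDatasetPair, train_transform and test_transform are not defined in this
# (work-in-progress) notebook; they are assumed to be provided elsewhere in the project.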
all_data = TonemapImageDatasetPair(all_gals, partial(Mantiuk_Seidel, **tmo_params),
labels=cat['TT'].to_numpy(),
train=True,
transform=train_transform)
all_data_val = TonemapImageDataset(all_gals, partial(Mantiuk_Seidel, **tmo_params),
labels=cat['TT'].to_numpy(),
train=False,
transform=test_transform)
len_data = len(all_data)
data_idx = np.arange(len_data)
np.random.shuffle(data_idx)
ind = int(np.floor(len_data * frac_train))
train_idx, test_idx = data_idx[:ind], data_idx[ind:]
train_sampler = SubsetRandomSampler(train_idx)
test_sampler = SubsetRandomSampler(test_idx)
train_loader = DataLoader(all_data, batch_size=args.batch_size, shuffle=False, num_workers=16, pin_memory=True, drop_last=True,
sampler=train_sampler)
test_loader = DataLoader(all_data_val, batch_size=args.batch_size, shuffle=False, num_workers=16, pin_memory=True, drop_last=True,
sampler=test_sampler)
###Output
_____no_output_____ |
DSVC-mod/machinelearning/Lesson2/Iris.ipynb | ###Markdown
Iris experiment description: in this experiment, we will classify the iris flower data.
###Code
import numpy as np
import pandas as pd
from sklearn import preprocessing
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.pipeline import Pipeline
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.patches as mpatches
path = 'iris.data'  # path to the data file
data = pd.read_csv(path, header=None)
data[4] = pd.Categorical(data[4]).codes
# iris_types = data[4].unique()
# print iris_types
# for i, type in enumerate(iris_types):
# data.set_value(data[4] == type, 4, i)
x, y = np.split(data.values, (4,), axis=1)
# print 'x = \n', x
# print 'y = \n', y
# use only the first two feature columns
x = x[:, :2]
###Output
_____no_output_____
###Markdown
In this experiment we read the CSV data directly with pandas. The code below shows how to read the data manually, for reference.
###Code
# # # read the data manually
# f = file(path)
# x = []
# y = []
# for d in f:
# d = d.strip()
# if d:
# d = d.split(',')
# y.append(d[-1])
# x.append(map(float, d[:-1]))
# print '原始数据X:\n', x
# print '原始数据Y:\n', y
# x = np.array(x)
# print 'Numpy格式X:\n', x
# y = np.array(y)
# print 'Numpy格式Y - 1:\n', y
# y[y == 'Iris-setosa'] = 0
# y[y == 'Iris-versicolor'] = 1
# y[y == 'Iris-virginica'] = 2
# print 'Numpy格式Y - 2:\n', y
# y = y.astype(dtype=np.int)
# print 'Numpy格式Y - 3:\n', y
# print '\n\n============================================\n\n'
# # use sklearn's data preprocessing
# df = pd.read_csv(path, header=None)
# x = df.values[:, :-1]
# y = df.values[:, -1]
# print x.shape
# print y.shape
# print 'x = \n', x
# print 'y = \n', y
# le = preprocessing.LabelEncoder()
# le.fit(['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'])
# print le.classes_
# y = le.transform(y)
# print 'Last Version, y = \n', y
# def iris_type(s):
# it = {'Iris-setosa': 0,
# 'Iris-versicolor': 1,
# 'Iris-virginica': 2}
# return it[s]
#
# # path, float data, comma-separated; column 4 is handled separately by the iris_type function
# data = np.loadtxt(path, dtype=float, delimiter=',',
# converters={4: iris_type})
lr = Pipeline([('sc', StandardScaler()),
('poly', PolynomialFeatures(degree=9)),
('clf', LogisticRegression())])
lr.fit(x, y.ravel())
y_hat = lr.predict(x)
y_hat_prob = lr.predict_proba(x)
np.set_printoptions(suppress=True)
print('y_hat = \n', y_hat)
print('y_hat_prob = \n', y_hat_prob)
print(u'准确度:%.2f%%' % (100*np.mean(y_hat == y.ravel())))
# plotting
N, M = 500, 500 # number of sample points along each axis
x1_min, x1_max = x[:, 0].min(), x[:, 0].max() # range of column 0
x2_min, x2_max = x[:, 1].min(), x[:, 1].max() # range of column 1
t1 = np.linspace(x1_min, x1_max, N)
t2 = np.linspace(x2_min, x2_max, M)
x1, x2 = np.meshgrid(t1, t2) # generate mesh-grid sample points
x_test = np.stack((x1.flat, x2.flat), axis=1) # test points
# # meaningless, only used to fill in the other two feature dimensions
# x3 = np.ones(x1.size) * np.average(x[:, 2])
# x4 = np.ones(x1.size) * np.average(x[:, 3])
# x_test = np.stack((x1.flat, x2.flat, x3, x4), axis=1) # test points
mpl.rcParams['font.sans-serif'] = [u'simHei']
mpl.rcParams['axes.unicode_minus'] = False
cm_light = mpl.colors.ListedColormap(['#77E0A0', '#FF8080', '#A0A0FF'])
cm_dark = mpl.colors.ListedColormap(['g', 'r', 'b'])
y_hat = lr.predict(x_test) # predicted values
y_hat = y_hat.reshape(x1.shape) # reshape to match the input grid
plt.figure(facecolor='w')
plt.pcolormesh(x1, x2, y_hat, cmap=cm_light) # display of the predictions
plt.scatter(x[:, 0], x[:, 1], c=y.reshape(y.shape[0]), edgecolors='k', s=50, cmap=cm_dark) # display of the samples
plt.xlabel(u'花萼长度', fontsize=14)
plt.ylabel(u'花萼宽度', fontsize=14)
plt.xlim(x1_min, x1_max)
plt.ylim(x2_min, x2_max)
plt.grid()
patchs = [mpatches.Patch(color='#77E0A0', label='Iris-setosa'),
mpatches.Patch(color='#FF8080', label='Iris-versicolor'),
mpatches.Patch(color='#A0A0FF', label='Iris-virginica')]
plt.legend(handles=patchs, fancybox=True, framealpha=0.8)
plt.title(u'鸢尾花Logistic回归分类效果 - 标准化', fontsize=17)
plt.show()
###Output
_____no_output_____ |
misc/01. Numbers and Computers.ipynb | ###Markdown
Significant DigitsThe significant digits of a number are those that we know with confidence. Numbers on a computer are no different. Floating point numbers are represented in the form\begin{equation}m \times b^e\end{equation}where $m$ is called the mantissa and represents the digits of the number, $b$ is the base (10 for decimal, 2 for binary, 8 for octal...), and $e$ is the exponent. These numbers are stored in a computer as a series of digits that represent the sign of the number, the mantissa, and the exponent. This representation divides the memory into bits or registers and each digit occupies one of those bits. The digits in a number will occupy the mantissa up to however many places are allowed by the computer. For example, if a computer has a 7 bit representation where 1 bit is reserved for the sign of the number, 2 bits are reserved for the signed exponent, and 4 bits are reserved for the mantissa, then the number 0.5468113287 is represented as: `++00547`. Note how the computer had to round (or chop) the last digit. This chopping or rounding leads to round-off errors (discussed later), but keep that in mind for the moment. The implications of this are very important. Example 1 When two floating-point numbers are added, the mantissa of the number with the smaller exponent is modified so that the exponents are the same. This will align the decimal points to make addition straightforward. Suppose we wanted to add $0.1557\times 10^1 + 0.4381 \times 10^{-1}$ on a computer with a four-digit mantissa and a one-digit exponent. First the computer will align the number with the smaller exponent to the number with the larger one, such that $0.4381 \times 10^{-1} \to 0.004381 \times 10^1$. But this result can't fit in the mantissa, so the number either gets chopped ($0.004381 \to 0.0043$) or rounded ($0.004381 \to 0.0044$). Now the addition returns $0.1600\times 10^1$. In practice, numbers in a computer are represented using a binary system following the same idea for the exponent and mantissa discussed above. Modern computers can deal with huge mantissas and can represent very large and very small numbers. For a 32-bit representation (e.g. `np.float32`), the mantissa is a whopping 24 bits! But still, round-off errors are present! Example 2 Let's add a large number to a small number
###Code
x = 1e18
x - 1
###Output
_____no_output_____
###Markdown
now try to evaluate $ x - x + 1$ and compare that to $x - (x-1)$
###Code
x - x + 1
x - (x-1)
###Output
_____no_output_____
###Markdown
The binary representation also has other effects. Fractions, for example, cannot be fully represented in binary digits and have to be rounded or chopped! For example, the true value of $1/10 = 0.1$ is $0.000 11 00 11 00 11 00 11 00 11 00 \ldots$ in binary, but in practice, on a 32-bit representation, this number gets chopped or rounded. Here's an example of how this could induce error. Example 3 In numerical methods, we often have to iterate and repeat calculations hundreds of thousands of times. This leads to accumulation of round-off errors. Consider for example evaluating the sum $\sum_1^{10^5} 10^{-5} = 10^{-5} + 10^{-5} + \ldots + 10^{-5} = 100,000 \times 10^{-5} = 1$. Perform this summation using half (np.float16), single (np.float32), double (np.float64), and extended precision (np.float128).
###Code
import numpy as np # we need numpy to define the different floats
x = 0.1
s16 = s32 = s64 = s128 = 0.0
ntotal = 100000
exactval = x*ntotal
for i in range(0,ntotal):
s16 = s16 + np.float16(x)
s32 = s32 + np.float32(x)
s64 = s64 + np.float64(x)
s128 = s128 + np.float128(x)
print(s16)
print(s32)
print(s64)
print(s128)
print((s16 - exactval))
print((s32 - exactval))
print((s64 - exactval))
print((s128 - exactval))
print(np.float16(1.0/3.0))
###Output
0.3333
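###Markdown
A small illustrative sketch (not part of the original notebook) of the four-digit mantissa machine from Example 1: we chop numbers written as $0.dddd \times 10^e$ to four mantissa digits using Python's `decimal` module, and reproduce the result $0.1600\times 10^1$.
###Code
from decimal import Decimal, ROUND_DOWN
# keep `digits` mantissa digits of a number written as 0.dddd... * 10**exponent (chop, no rounding)
def chop_mantissa(x, digits=4, exponent=1):
    return x.quantize(Decimal(10) ** (exponent - digits), rounding=ROUND_DOWN)
a = Decimal("0.1557E1")   # 0.1557 x 10^1
b = Decimal("0.4381E-1")  # 0.4381 x 10^-1
b_aligned = chop_mantissa(b)           # aligning to 10^1 chops 0.004381 -> 0.0043
result = chop_mantissa(a + b_aligned)  # 1.557 + 0.043 = 1.600
print(b_aligned, result)
###Output
_____no_output_____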
|
tp8/Iris.ipynb | ###Markdown
Perceptron
###Code
p = Perceptron(
# random_state=42,
max_iter=10,
tol=0.001,
#verbose = True
)
p.fit(X_train, y_train)
print("Cantidad de iteraciones: " +str(p.n_iter_))
disp = metrics.plot_confusion_matrix(p, X_test, y_test,cmap=plt.cm.Blues)
disp.ax_.set_title('Confusion Matrix')
plt.show()
###Output
Cantidad de iteraciones: 9
###Markdown
Multilayer perceptron
###Code
#activation = "identity"
activation = "logistic"
#activation = "tanh"
#activation = "relu"
mlp = MLPClassifier(#random_state=42,
hidden_layer_sizes=(30,30),
max_iter = 1000,
activation = activation,
verbose = True
)
mlp.fit(X_train, y_train)
print("Cantidad de iteraciones: " +str(mlp.n_iter_))
disp = metrics.plot_confusion_matrix(mlp, X_test, y_test,cmap=plt.cm.Blues)
disp.ax_.set_title('Confusion Matrix')
plt.show()
###Output
Cantidad de iteraciones: 679
|
fetch_vote_smash_kakuzuke.ipynb | ###Markdown
Setup
###Code
import tweepy
import datetime
from pytz import timezone
from collections import defaultdict
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 20)
pd.set_option("display.max_colwidth", 80)
# Return the current Japan time as a yyyyMMddHHmmss string
def get_now():
    now = datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=9))) # Japan Standard Time
return now.strftime('%Y%m%d%H%M%S')
# Enter the API keys and tokens you applied for and obtained
API_KEY = 'pppppppppppppppppppppppppp'
API_SECRET_KEY = 'kkkkkkkkkkkkkkkkkkkkkkkkkkk'
Access_token = 'mmmmmmmmmmmmmmmmmmmmm'
Access_secret = 'nnnnnnnnnnnnnnnnnnnnnn'
auth = tweepy.OAuthHandler(API_KEY, API_SECRET_KEY)
auth.set_access_token(Access_token, Access_secret)
api = tweepy.API(auth, wait_on_rate_limit=True) # wait_on_rate_limit=True: wait automatically when the API rate limit is reached
###Output
_____no_output_____
###Markdown
Data collection
###Code
# Conditions for counting someone as "voted for room A" (room B is defined in the same way)
# Tweeted between 2021/12/29 18:00 and 2021/12/30 17:59 with the #Aの部屋 hashtag and without the #Bの部屋 hashtag
# The conditions below were also considered, but quote RTs could not be fetched and only about 100 RT users could be retrieved (cf. https://mura-shin.com/python_twitter/), so they were abandoned
# 2. Quote-RTed the quiz tweet with a tweet containing the string "A" and not containing "B"
# 3. The most recent tweet within 10 minutes after a plain RT of the quiz tweet contains the string "A" and not "B"
# ID of the quiz tweet
nietono_tw_id = 1476116006825521160
# Fetch the answer tweets and user information
room_names = ["A", "B"]
tw_data = defaultdict(list)
for vote_to, unvote_to in zip(room_names, reversed(room_names)):
for tweet in tweepy.Cursor(api.search_tweets, q=f"#{vote_to}の部屋 AND -#{unvote_to}の部屋", since_id=nietono_tw_id, until='2021-12-30_17:59:59_JST', result_type="recent").items():
        if tweet.text[0:4]=="RT @": # exclude plain RTs and RTs of quote RTs (e.g. 1476301813599068161), using the fact that their text starts with "RT @"
continue
else:
tw_data["vote"].append(vote_to)
tw_data["tw_time"].append(tweet.created_at.astimezone(timezone('Asia/Tokyo'))) # JSTに修正
tw_data["user_id"].append(tweet.user.id)
tw_data["user_name"].append(tweet.user.name)
tw_data["user_screen_name"].append(tweet.user.screen_name)
tw_data["user_followers_count"].append(tweet.user.followers_count)
tw_data["url"].append(f"twitter.com/{tweet.user.screen_name}/status/{tweet.id}")
tw_data["is_quote"].append(tweet.is_quote_status)
df_vote = pd.DataFrame.from_dict(tw_data)
df_vote.to_csv("df_vote_" + get_now() + ".csv", index=False, encoding="utf-8-sig") # 時刻付きで出力
df_vote
###Output
_____no_output_____
###Markdown
Data processing and output
###Code
# Merge the data fetched in several batches; keep only the last answer of each user
df_vote = pd.DataFrame()
for yyyyMMddHHmmss in [20211230173339, 20211230175952, 20211230181450, 20211230183555]:
df_vote = pd.concat([df_vote, pd.read_csv(f"df_vote_{yyyyMMddHHmmss}.csv")])
df_vote = (df_vote
.sort_values(["tw_time"])
.drop_duplicates("user_id", keep="last")
)
# Fetch the user profiles that were not collected earlier, for the per-main-character aggregation
user_prof = []
for user_id in df_vote.user_id:
try:
user_prof.append(api.get_user(user_id = user_id).description)
except:
user_prof.append("")
df_vote["user_prof"] = user_prof
# Fix up the dataframe before saving it
# Given a profile string, return whether the user is presumed to be a Sonic main, a Pikmin & Olimar main, or other
def sonic_pikmin_flg(prof):
    prof = prof.lower() # normalize to lowercase
if any(x in prof for x in ["ソニック", "sonic"]):
return "sonic"
if any(x in prof for x in ["ピクオリ","ピクミン","オリマー","アルフ","pikmin","olimar", "alph"]):
return "pikmin"
else:
return "other"
# Extract Sonic / Pikmin & Olimar users from the user profiles
df_vote["main"] = df_vote["user_prof"].apply(sonic_pikmin_flg)
# Fix the column names
df_vote.columns = [x.replace("user","twitter_user").replace("url","vote_url") for x in df_vote.columns]
# Add a column with the number of hours after the quiz at which the tweet was posted
df_vote["delta_hours"] = (df_vote["tw_time"].apply(pd.to_datetime)-pd.to_datetime("2021-12-29 18:00:00+09:00")).dt.total_seconds().apply(lambda x: int(x/3600))
# Save the result
df_vote.to_csv("df_vote.csv", index=False, encoding="utf-8-sig")
df_vote
###Output
_____no_output_____
###Markdown
Aggregation and plots
###Code
# Tally of votes
# Unlike the poll attached to the quiz tweet, B appears to get more votes here
df_vote.vote.value_counts()
df_vote.vote.value_counts(normalize=True)
# Basic statistics per answer
df_vote[['vote', 'tw_time', 'twitter_user_followers_count', 'delta_hours']].groupby("vote").describe().T
###Output
_____no_output_____
###Markdown
Aggregation by main character
###Code
# Do players who main a character featured in the video answer correctly more often?
df_vote[df_vote.main.isin(["sonic", "pikmin"])].vote.value_counts()
df_vote[df_vote.main.isin(["sonic", "pikmin"])].vote.value_counts(normalize=True)
# Sonic mains only
df_vote[df_vote.main == "sonic"].vote.value_counts()
df_vote[df_vote.main == "sonic"].vote.value_counts(normalize=True)
# Pikmin & Olimar mains only
df_vote[df_vote.main == "pikmin"].vote.value_counts()
df_vote[df_vote.main == "pikmin"].vote.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Follower count histograms
###Code
# Given the votes dataframe and lower/upper follower-count bounds, plot a histogram per room (A vs. B)
def followers_hist(df, f_min, f_max):
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    ax.hist(df[(df.vote == "A") & (df.twitter_user_followers_count >= f_min) & (df.twitter_user_followers_count <= f_max)].twitter_user_followers_count, bins=30, color="red", alpha=0.7, label="A")
    ax.hist(df[(df.vote == "B") & (df.twitter_user_followers_count >= f_min) & (df.twitter_user_followers_count <= f_max)].twitter_user_followers_count, bins=30, color="blue", alpha=0.5, label="B")
    ax.set_xlabel('twitter_user_followers_count')
    ax.set_ylabel('counts')
    ax.legend(loc='upper right')
    plt.savefig(f"followers_hist_{f_min}_{f_max}.png")
    plt.show()
# Follower count histogram, restricted to 2,000 followers or fewer
followers_hist(df_vote, 0, 2000)
# Follower count histogram, restricted to between 1,000 and 20,000 followers
followers_hist(df_vote, 1000, 20000)
# Follower count box plot
sns.boxplot(x=df_vote.vote, y=df_vote.twitter_user_followers_count, palette=['red','dodgerblue'])
plt.ylim(0, 2000)
plt.savefig('boxplot_followers.png')
###Output
_____no_output_____
###Markdown
Aggregation by elapsed time
###Code
# Number of answers per elapsed hour since the quiz tweet
df_per_hour = (df_vote
.groupby(["delta_hours", "vote"], as_index=False).count()
[["delta_hours", "vote", "vote_url"]]
.rename(columns={"vote_url":"n_vote"})
)
ax = df_per_hour.groupby(["delta_hours", "vote"]).sum().unstack().plot.bar(rot=0, color=["r", "b"])
ax.set_ylabel('n_vote')
ax.figure.set_size_inches((13,6))
plt.savefig("AB_per_hours.png")
plt.show()
# Correct-answer (room A) rate per elapsed hour
df_ratio_per_hour = (df_per_hour
.assign(n_vote_per_hour = lambda x: x.groupby("delta_hours")["n_vote"].transform("sum"))
.pipe(lambda x: x[x.vote == "A"])
.assign(A_ratio = lambda x: x.n_vote / x.n_vote_per_hour)
)
ax = df_ratio_per_hour.plot.bar(x='delta_hours', y='A_ratio', color="g", rot=0, legend=False)
ax.set_ylabel('A_ratio')
ax.figure.set_size_inches((13,6))
plt.savefig("A_ratio_per_hours.png")
plt.show()
# Correlation between elapsed time and the correct-answer rate
df_ratio_per_hour.corr()
###Output
_____no_output_____ |
ens1.ipynb | ###Markdown
Ensembling: combines the predictions of two models (TabNet, CatBoost).
History:
* ens2.csv 0.4687, first 33%
###Code
import pandas as pd
import numpy as np
SUB1 = "submission34.csv"
SUB2 = "submission10.csv"
df1 = pd.read_csv(SUB1)
df2 = pd.read_csv(SUB2)
df_avg = df1.copy()
df_avg["count"] = (df1["count"] + df2["count"]) / 2.0
FILE_SUB = "ens2.csv"
df_avg.to_csv(FILE_SUB, index=False)
!kaggle competitions submit -c "bike-sharing-demand" -f $FILE_SUB -m "ens2 sub34, sub10"
###Output
100%|████████████████████████████████████████| 244k/244k [00:02<00:00, 95.8kB/s]
Successfully submitted to Bike Sharing Demand |
Pandas/Dados/Criando Agrupamentos.ipynb | ###Markdown
Analysis Report VII: Creating Groupings
###Code
import pandas as pd
dados = pd.read_csv("aluguel_residencial.csv", sep=';')
dados.head(15)
dados['Valor'].mean()  # mean rent value
bairros = ['Barra da Tijuca', 'Copacabana', 'Ipanema', 'Leblon', 'Botafogo', 'Flamengo', 'Tijuca']
selecao = dados['Bairro'].isin(bairros)
dados = dados[selecao]
dados['Bairro'].drop_duplicates()
grupo_bairro = dados.groupby('Bairro')
type(grupo_bairro)
grupo_bairro.groups
for bairro, data in grupo_bairro:
print(bairro)
for bairro, data in grupo_bairro:
    print(type(data))
for bairro, data in grupo_bairro:
    print('{} -> {}'.format(bairro, data.Valor.mean()))
grupo_bairro[['Valor', 'Condominio']].mean().round(2)
###Output
_____no_output_____
###Markdown
Exercise
###Code
import pandas as pd
alunos = pd.DataFrame({'Nome': ['Ary', 'Cátia', 'Denis', 'Beto', 'Bruna', 'Dara', 'Carlos', 'Alice'],
'Sexo': ['M', 'F', 'M', 'M', 'F', 'F', 'M', 'F'],
'Idade': [15, 27, 56, 32, 42, 21, 19, 35],
'Notas': [7.5, 2.5, 5.0, 10, 8.2, 7, 6, 5.6],
'Aprovado': [True, False, False, True, True, True, False, False]},
columns = ['Nome', 'Idade', 'Sexo', 'Notas', 'Aprovado'])
alunos
sexo = alunos.groupby('Sexo')
sexo = pd.DataFrame(sexo['Notas'].mean().round(2))
sexo.columns = ['Notas Médias']
sexo
###Output
_____no_output_____
###Markdown
***********
Descriptive Statistics
###Code
grupo_bairro['Valor'].describe().round(2)
grupo_bairro['Valor'].aggregate(['min', 'max']).rename(columns={'min': 'Mínimo', 'max':'Máximo'})
import matplotlib.pyplot as plt
%matplotlib inline
plt.rc('figure', figsize=(10,10))
fig = grupo_bairro['Valor'].std().plot.bar(color='blue')
fig.set_ylabel('Rent Value')
fig.set_title('Rent Standard Deviation by Neighborhood', {'fontsize':22})
###Output
_____no_output_____
###Markdown
Exercise II
###Code
precos = pd.DataFrame([['Feira', 'Cebola', 2.5],
['Mercado', 'Cebola', 1.99],
['Supermercado', 'Cebola', 1.69],
['Feira', 'Tomate', 4],
['Mercado', 'Tomate', 3.29],
['Supermercado', 'Tomate', 2.99],
['Feira', 'Batata', 4.2],
['Mercado', 'Batata', 3.99],
['Supermercado', 'Batata', 3.69]],
columns = ['Local', 'Produto', 'Preço'])
precos
produtos = precos.groupby('Produto')
produtos.describe().round(2)
estatisticas = ['mean', 'std', 'min', 'max']
nomes = {'mean': 'Média', 'std': 'Desvio Padrão',
'min': 'Mínimo', 'max': 'Máximo'}
produtos['Preço'].aggregate(estatisticas).rename(columns = nomes)
estatisticas = ['mean', 'std', 'min', 'max']
nomes = {'mean': 'Média', 'std': 'Desvio Padrão',
'min': 'Mínimo', 'max': 'Máximo'}
produtos['Preço'].aggregate(estatisticas).rename(columns = nomes).round(2)
produtos['Preço'].agg(['mean', 'std'])  # shorthand for calling aggregate
###Output
_____no_output_____ |
Data Notebook.ipynb | ###Markdown
Introduction to Linear Regression Analysis
###Code
# imports
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import warnings
from scipy import stats
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
# for jupyter plot visualization
%matplotlib inline
# filtering out warnings
warnings.filterwarnings(action="ignore", module="scipy", message="^internal gelsd")
warnings.filterwarnings(action="ignore", module="sklearn", message="^max_iter")
warnings.filterwarnings(action="ignore", module="sklearn", message="^Maximum")
# data import
df = pd.read_csv('dataset_raw.csv')
stats_df = df.drop(['city', 'state', 'zip'], axis=1)
stats_df = stats_df.dropna()
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
summary_df = pd.concat([stats_df.describe().T, stats_df.median(), stats_df.mode().iloc[0], stats_df.var()], axis=1)
summary_df.columns = ['Count', 'Mean', 'Std', 'Min', '25%', '50%', '75%', 'Max', 'Median', 'Mode', 'Variance']
summary_df
###Output
_____no_output_____
###Markdown
Finding Outliers Outside of 3 Standard Deviations, and Noting it
###Code
outlier_index = []
for col in stats_df:
    # a value is an outlier if it lies more than 3 standard deviations from the column mean
    col_mean = stats_df[col].mean()
    upper_bound = col_mean + stats_df[col].std() * 3
    lower_bound = col_mean - stats_df[col].std() * 3
    outlier_index.extend(stats_df.index[(stats_df[col] > upper_bound) | (stats_df[col] < lower_bound)].tolist())
outlier_index = set(outlier_index)
outlier_bool_df = stats_df.index.isin(outlier_index)
df_sans_outliers = stats_df[~outlier_bool_df]
df_sans_outliers.to_csv('dataset_cleaned.csv', index=False)
outlier_df = stats_df[outlier_bool_df]
outlier_df.to_csv('dataset_outliers.csv', index=False)
outlier_df
###Output
_____no_output_____
###Markdown
Scatter Plot Matrix
###Code
g = sns.pairplot(df_sans_outliers, size=3.5)
plt.subplots_adjust(top=0.95)
g.fig.suptitle('Relationships Between Housing Features', size=20)
plt.show()
###Output
_____no_output_____
###Markdown
Correlation Mapping
###Code
fig, ax = plt.subplots(figsize=(15,15))
cbar_ax = fig.add_axes([.905, .15, .05, .775])
sns.heatmap(df_sans_outliers.corr(), annot=True, square=True, cbar_ax=cbar_ax, ax=ax).set_title('Correlation Between Housing Features', size=20)
plt.subplots_adjust(top=0.95)
plt.show()
###Output
_____no_output_____
###Markdown
Correlation Coefficients Ranked
###Code
# subtracting the outliers from the training dataset
training_df = df_sans_outliers.copy()
training_df.corr()['price'].sort_values(ascending=False).iloc[1:]
###Output
_____no_output_____
###Markdown
ML and Model Optimization
###Code
from sklearn.linear_model import SGDRegressor
from sklearn.model_selection import GridSearchCV
# defining the independent and dependent variables
X = training_df.drop('price', axis=1)
y = training_df['price']
# creating grid search cv parameters for brute force parameter searching
sgd = GridSearchCV(SGDRegressor(), param_grid={
'loss': ['squared_loss', 'huber', 'epsilon_insensitive', 'squared_epsilon_insensitive'],
'penalty': ['none', 'l2', 'l1', 'elasticnet'],
'max_iter': [1000],
'tol': [1e-3]
}, return_train_score=True, scoring='neg_mean_squared_error')
lr = GridSearchCV(LinearRegression(), param_grid={
'fit_intercept': [True, False],
'normalize': [True, False]
}, return_train_score=True, scoring='neg_mean_squared_error')
# Manually selecting most important features
three_feature_df = X[['sqft', 'bedrooms', 'bathrooms']]
two_feature_df = X[['sqft', 'bathrooms']]
one_feature_df = X['sqft'].values.reshape(-1, 1)
# Iterating through the dataframes containing 1, 2, and 3 features
MSE_ranking_dict = {}
for x, name in zip([three_feature_df, two_feature_df, one_feature_df], ['three', 'two', 'one']):
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.1, random_state=0)
# fitting the GridSearchCV pipelines
lr.fit(x, y)
sgd.fit(x, y)
# fitting the best estimators of each grid search
lr.best_estimator_.fit(X_train, y_train)
sgd.best_estimator_.fit(X_train, y_train)
# assigning keys and values for display
lr_key = f"Linear Regression MSE using {name} features"
lr_value = mean_squared_error(y_test, lr.best_estimator_.predict(X_test))
lr_coefs = [y for y in lr.best_estimator_.coef_]
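    # Pad the coefficient lists with zeros below so that models fit on fewer features
    # still line up with the per-feature display columns built from X at the end.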
if len(lr_coefs) < 6:
if name == 'three':
lr_coefs = [0] * (3 - len(lr_coefs)) + lr_coefs
elif name == 'two':
lr_coefs = [0] * (2 - len(lr_coefs)) + [lr_coefs[0]] + [0] + [lr_coefs[1]]
elif name == 'one':
lr_coefs = [0] * (1 - len(lr_coefs)) + lr_coefs + [0] * 2
sgd_key = f"Stochastic Gradient Descent MSE using {name} features"
sgd_value = mean_squared_error(y_test, sgd.best_estimator_.predict(X_test))
sgd_coefs = [y for y in sgd.best_estimator_.coef_]
if len(sgd_coefs) < 6:
if name == 'three':
sgd_coefs = [0] * (3 - len(sgd_coefs)) + sgd_coefs
elif name == 'two':
sgd_coefs = [0] * (2 - len(sgd_coefs)) + [sgd_coefs[0]] + [0] + [sgd_coefs[1]]
elif name == 'one':
sgd_coefs = [0] * (1 - len(sgd_coefs)) + sgd_coefs + [0] * 2
MSE_ranking_dict[sgd_key] = [sgd_value] + sgd_coefs + [sgd.best_estimator_.intercept_[0]]
MSE_ranking_dict[lr_key] = [lr_value] + lr_coefs + [lr.best_estimator_.intercept_]
# displaying and sorting the MSEs of each model/feature combination
MSE_diplay_df = pd.DataFrame.from_dict(MSE_ranking_dict, orient='index')
MSE_diplay_df.columns = ['MSE'] + [f"{x.capitalize()} Coefficient" for x in list(X.columns)] + ['Intercept']
MSE_diplay_df.sort_values('MSE')
###Output
_____no_output_____
###Markdown
Normalized Root Mean Squared Error
###Code
pd.Series(np.sqrt(MSE_diplay_df['MSE'])/y_train.mean()).sort_values()
###Output
_____no_output_____ |
m03_v02_store_sales_prediction.ipynb | ###Markdown
0.0 IMPORTS
###Code
import pandas as pd
import inflection
import math
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import Image
import datetime
###Output
_____no_output_____
###Markdown
0.1. Helper Functions
0.2. Loading Data
###Code
path_1 = 'C:\\Users\\joaoa\\Documents\\DSprojects\\Git\\repos\\DataScience_Em_Producao\\data\\train.csv'
path_2 = 'C:\\Users\\joaoa\\Documents\\DSprojects\\Git\\repos\\DataScience_Em_Producao\\data\\store.csv'
df_sales_raw = pd.read_csv(path_1, low_memory=False)
df_store_raw = pd.read_csv(path_2, low_memory=False)
#merge
df_raw = pd.merge(df_sales_raw, df_store_raw, how='left', on='Store')
df_raw.sample()
###Output
_____no_output_____
###Markdown
1.0 DATA DESCRIPTION
###Code
#make a copy
df1 = df_raw.copy()
###Output
_____no_output_____
###Markdown
1.1 Rename Columns
###Code
cols_old = ['Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open', 'Promo', 'StateHoliday', 'SchoolHoliday', 'StoreType',
'Assortment', 'CompetitionDistance', 'CompetitionOpenSinceMonth', 'CompetitionOpenSinceYear', 'Promo2',
'Promo2SinceWeek', 'Promo2SinceYear', 'PromoInterval']
snakecase = lambda x: inflection.underscore(x)
cols_new = list(map(snakecase, cols_old))
#rename
df1.columns = cols_new
###Output
_____no_output_____
###Markdown
1.2 Data Dimensions
###Code
print('Number of Rows: {}'.format(df1.shape[0]))
print('Number of Columns: {}'.format(df1.shape[1]))
###Output
Number of Rows: 1017209
Number of Columns: 18
###Markdown
1.3 Data Types
###Code
df1['date'] = pd.to_datetime(df1['date'])
df1.dtypes
###Output
_____no_output_____
###Markdown
1.4 Check NA
###Code
df1.isna().sum()
###Output
_____no_output_____
###Markdown
1.5 Fill Out NAs
###Code
#competition_distance: fill NAs with 200000.0 (i.e. assume there is no nearby competitor)
df1['competition_distance'] = df1['competition_distance'].apply(lambda x: 200000.0 if math.isnan(x) else x)
#competition_open_since_month
df1['competition_open_since_month'] = df1.apply(lambda x: x['date'].month if math.isnan(x['competition_open_since_month']) else x['competition_open_since_month'], axis=1)
#competition_open_since_year
df1['competition_open_since_year'] = df1.apply(lambda x: x['date'].year if math.isnan(x['competition_open_since_year']) else x['competition_open_since_year'], axis=1)
#promo2_since_week
df1['promo2_since_week'] = df1.apply(lambda x: x['date'].week if math.isnan(x['promo2_since_week']) else x['promo2_since_week'], axis=1)
#promo2_since_year
df1['promo2_since_year'] = df1.apply(lambda x: x['date'].year if math.isnan(x['promo2_since_year']) else x['promo2_since_year'], axis=1)
#promo_interval
month_map = {1:'Jan', 2:'Feb', 3:'Mar', 4:'Apr', 5:'May', 6:'Jun', 7:'Jul', 8:'Aug', 9:'Sep', 10:'Oct', 11:'Nov', 12:'Dec'}
df1['promo_interval'].fillna(0, inplace=True)
df1['month_map'] = df1['date'].dt.month.map(month_map)
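# is_promo = 1 when the month of the row's date appears in the store's promo_interval list, 0 otherwise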
df1['is_promo'] = df1[['promo_interval','month_map']].apply(lambda x: 0 if x['promo_interval'] == 0 else 1 if x['month_map'] in x['promo_interval'].split(',') else 0, axis=1)
df1.sample(5).T
df1.isna().sum()
###Output
_____no_output_____
###Markdown
1.6 Change types
###Code
df1['competition_open_since_month'] = df1['competition_open_since_month'].astype('int64')
df1['competition_open_since_year'] = df1['competition_open_since_year'].astype('int64')
df1['promo2_since_week'] = df1['promo2_since_week'].astype('int64')
df1['promo2_since_year'] = df1['promo2_since_year'].astype('int64')
df1.dtypes
###Output
_____no_output_____
###Markdown
1.7 Descriptive Statistics
###Code
num_attributes = df1.select_dtypes(include=['int64', 'float64'])
cat_attributes = df1.select_dtypes(exclude=['int64', 'float64', 'datetime64[ns]'])
###Output
_____no_output_____
###Markdown
1.7.1 Numerical Attributes
###Code
#Central Tendency - mean, median
ct1 = pd.DataFrame(num_attributes.apply(np.mean)).T
ct2 = pd.DataFrame(num_attributes.apply(np.median)).T
#Dispersion - std, max, min, range, skew, kurtosis
d1 = pd.DataFrame(num_attributes.apply(np.std)).T
d2 = pd.DataFrame(num_attributes.apply(min)).T
d3 = pd.DataFrame(num_attributes.apply(max)).T
d4 = pd.DataFrame(num_attributes.apply(lambda x: x.max() - x.min())).T
d5 = pd.DataFrame(num_attributes.apply(lambda x: x.skew())).T
d6 = pd.DataFrame(num_attributes.apply(lambda x: x.kurtosis())).T
#merge
M = pd.concat([d3,d2,d4,ct1,ct2,d1,d5,d6]).T.reset_index()
M.columns = ['attributes', 'max', 'min', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis']
M
#set figure size
fig, ax = plt.subplots()
fig.set_size_inches(7, 7)
sns.distplot(df1['customers'])
cat_attributes.apply(lambda x: x.unique().shape[0])
#set figure size
fig, ax = plt.subplots()
fig.set_size_inches(15, 13)
aux1 = df1[(df1['state_holiday'] != '0') & (df1['sales'] > 0)]
plt.subplot(1,3,1)
sns.boxplot(x='state_holiday',y='sales',data=aux1)
plt.subplot(1,3,2)
sns.boxplot(x='store_type',y='sales',data=aux1)
plt.subplot(1,3,3)
sns.boxplot(x='assortment',y='sales',data=aux1)
###Output
_____no_output_____
###Markdown
2.0 Feature Engineering
###Code
df2 = df1.copy()
###Output
_____no_output_____
###Markdown
Mind Map of Hypothesis
###Code
Image('C:\\Users\\joaoa\\Documents\\DSprojects\\Git\\repos\\DataScience_Em_Producao\MindMapHypothesis.png')
###Output
_____no_output_____
###Markdown
2.1 Hypothesis creation
2.1.1 Store Hypotheses
**1. Stores with more employees sell more**
**2. Stores with bigger stock sell more**
**3. Larger stores sell more**
**4. Stores with a bigger assortment sell more**
**5. Stores with closer competitors sell less**
**6. Stores with longer-established competitors sell more**
2.1.2 Product Hypotheses
**1. Stores that invest more in marketing sell more**
**2. Stores that display their products more sell more**
**3. Stores with cheaper products sell more**
**4. Stores with bigger discounts in their promos sell more**
**5. Stores with promos active for longer sell more**
**6. Stores with more promo days sell more**
**7. Stores with more consecutive promos sell more**
2.1.3 Time Hypotheses
**1. Stores open during Christmas sell more**
**2. Stores sell more over the years**
**3. Stores sell more in the second half of the year**
**4. Stores sell more after the 10th day of each month**
**5. Stores sell less on weekends**
**6. Stores sell less during school holidays**
2.2 Final list of hypotheses
**1. Stores with a bigger assortment sell more**
**2. Stores with closer competitors sell less**
**3. Stores with longer-established competitors sell more**
**4. Stores with promos active for longer sell more**
**5. Stores with more promo days sell more**
**6. Stores with more consecutive promos sell more**
**7. Stores open during Christmas sell more**
**8. Stores sell more over the years**
**9. Stores sell more in the second half of the year**
**10. Stores sell more after the 10th day of each month**
**11. Stores sell less on weekends**
**12. Stores sell less during school holidays**
2.3 Feature Engineering
###Code
#year
df2['year'] = df2['date'].dt.year
#month
df2['month'] = df2['date'].dt.month
#day
df2['day'] = df2['date'].dt.day
#week of year
df2['week_of_year'] = df2['date'].dt.weekofyear
#year week
df2['year_week'] = df2['date'].dt.strftime('%Y-%W')
#competition since
df2['competition_since'] = df2.apply(lambda x: datetime.datetime(year=x['competition_open_since_year'], month=x['competition_open_since_month'], day=1), axis=1)
df2['competition_time_month'] = ((df2['date'] - df2['competition_since'])/30).apply(lambda x: x.days).astype(int)
#promo since
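# '%Y-%W-%w' with weekday '1' maps the promo2 year/week pair to the Monday of that week;
# the result is then shifted back by 7 days before promo_time_week is computed.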
df2['promo_since'] = df2['promo2_since_year'].astype(str) + '-' + df2['promo2_since_week'].astype(str)
df2['promo_since'] = df2['promo_since'].apply(lambda x: datetime.datetime.strptime(x + '-1', '%Y-%W-%w') - datetime.timedelta(days=7))
df2['promo_time_week'] = ((df2['date'] - df2['promo_since'])/7).apply(lambda x: x.days).astype(int)
#assortment
df2['assortment'] = df2['assortment'].apply(lambda x: 'basic' if x == 'a' else 'extra' if x == 'b' else 'extended')
#state holiday
df2['state_holiday'] = df2['state_holiday'].apply(lambda x: 'public_holiday' if x == 'a' else 'easter_holiday' if x == 'b' else 'christmas' if x == 'c' else 'regular_day')
df2.head().T
###Output
_____no_output_____
###Markdown
3.0 VARIABLE FILTERING
###Code
df3 = df2.copy()
###Output
_____no_output_____
###Markdown
3.1 Row Filtering
###Code
df3 = df3[(df3['open'] != 0) & (df3['sales'] != 0)]
###Output
_____no_output_____
###Markdown
3.2 Column Selection
###Code
cols_drop = ['customers', 'open', 'promo_interval', 'month_map']
df3 = df3.drop(cols_drop, axis=1)
df3.columns
###Output
_____no_output_____ |
notebook1a3375ef89_final.ipynb | ###Markdown
Framework choice
I started out trying to use the PyTorch library, but after failing to make progress on building the training dataset I opted for TensorFlow plus Keras. TensorFlow and Keras proved somewhat easier to understand and use, both for building the model and, above all, for building the training dataset.
###Code
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from PIL import Image
import torch
import torchvision
from torchvision.datasets import DatasetFolder
from torchvision import transforms
import numpy as np
import collections
import matplotlib.pyplot as plt
%matplotlib inline
import re
from pydicom import dcmread
from skimage.io import imsave
import os
###Output
_____no_output_____
###Markdown
Change the working directory to get access to the competition dataset files.
###Code
INPUTDIR_PATH = '/kaggle/input/rsna-miccai-brain-tumor-radiogenomic-classification'
os.chdir(INPUTDIR_PATH)
sorted(os.listdir('train/00000'))
###Output
_____no_output_____
###Markdown
To make it easier to navigate between directories, constants were created for the main folders used. A namedtuple constant, acting as an enumerator, was also created for the MRI types.
###Code
# Constant declarations
TRAIN_FOLDER = 'train'
TEST_FOLDER = 'test'
mri_types = collections.namedtuple('mri_types', ['FLAIR', 'T1W', 'T1WCE', 'T2W'])
MRI_TYPES = mri_types('FLAIR', 'T1w', 'T1wCE', 'T2w')
PNG_DATASET_DIR = '/kaggle/working/png_dataset'
PNG_TEST_DIR = '/kaggle/working/png_test'
WITH_MGMT_DIR = '/kaggle/working/png_dataset/with_mgmt'
WITHOUT_MGMT_DIR = '/kaggle/working/png_dataset/without_mgmt'
###Output
_____no_output_____
###Markdown
Create the directories where the DICOM images (the MRI format) will be saved after being converted to PNG files.
###Code
os.mkdir(PNG_DATASET_DIR)
os.mkdir(WITH_MGMT_DIR)
os.mkdir(WITHOUT_MGMT_DIR)
os.mkdir(PNG_TEST_DIR)
###Output
_____no_output_____
###Markdown
Helper functions
Three helper functions are defined here. img_loader(path) loads a DICOM image, checks whether it is empty, and returns it as an array; if the file is empty it returns 0. png_save(path) calls the function above, checks whether the result is an array or 0 and, if it is an array, converts and saves the DICOM image as a PNG file. imgs_path_finder(folder, patient) walks through each patient's subdirectories and returns the path of every DICOM file, in order, as a list.
###Code
def img_loader(path):
ds = dcmread(os.path.join(INPUTDIR_PATH, path))
if (ds.pixel_array.sum() != 0):
arr = ds.pixel_array
# arr = tf.constant(arr)
return arr
else:
return 0
def png_save(path):
image = img_loader(path)
if isinstance(image, np.ndarray):
fname = path.replace('/', '-')
fname = fname.replace('dcm', 'png')
imsave(fname, image)
def imgs_path_finder(folder, patient):
images = []
image_path = []
for mri in MRI_TYPES:
images = sorted(os.listdir(os.path.join(folder, patient, mri)),
key=lambda file: int(re.sub('[^0-9]', '', file)))
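        # sort numerically on the digits embedded in the file name (plain lexicographic order would put 10 before 9)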
for img in images:
path = os.path.join(folder,
patient,
mri,
img)
image_path.append(path)
print(path)
return image_path
###Output
_____no_output_____
###Markdown
Exploratory analysis
We inspect the .csv file from the dataset's TRAIN folder, which contains each patient's classification for the presence or absence of MGMT.
###Code
df_label = pd.read_csv("train_labels.csv", dtype={'BraTS21ID':str, 'MGMT_value':int})
df_label.MGMT_value.unique()
df_label.describe()
df_label.head()
###Output
_____no_output_____
###Markdown
Two variables are created holding the lists of patients in the TRAIN and TEST directories.
###Code
# List the patients in the training directory
train_patients = [subdirs for subdirs in os.listdir(TRAIN_FOLDER)]
print('Number of patients in the training directory:', len(train_patients))
# List the patients in the test directory
test_patients = [subdirs for subdirs in os.listdir(TEST_FOLDER)]
print('Number of patients in the test directory:', len(test_patients))
print('Total number of patients:', len(test_patients)+len(train_patients))
###Output
_____no_output_____
###Markdown
Removing the faulty cases from the dataset
The competition description notes that the directories of three patients (00109, 00123 and 00709) were causing errors during training and recommends excluding them. The code below removes these three patients from the label dataframe and from the training patient list.
###Code
# Remove patients 00109, 00123 and 00709 due to dataset issues
patients_delete = ('00109', '00123', '00709')
try:
for patient in patients_delete:
df_label = df_label[df_label.BraTS21ID != patient]
train_patients.remove(patient)
except Exception as err:
    print('error:', err)
print('Number of patients in the training directory:', len(train_patients))
###Output
_____no_output_____
###Markdown
After sorting the patients and the image files, the lookup of image paths appeared to run faster, so the patient lists are sorted below.
###Code
test_patients = sorted(test_patients)
train_patients = sorted(train_patients)
###Output
_____no_output_____
###Markdown
For the TRAIN directory, a list is built containing, per patient, a tuple with the patient identifier, the DICOM image paths and the MGMT label. A similar list with the patient identifier and the DICOM image paths is built for the TEST directory; the competition dataset does not provide MGMT labels for it.
###Code
# returns a two-dimensional list
# patient must be wrapped in a list so that every element is of type list
# append the MGMT presence label
images_path = []
for patient in train_patients[:25]:
images_path.append([
[patient],
imgs_path_finder(TRAIN_FOLDER, patient),
[str(int(df_label[df_label['BraTS21ID']==patient].MGMT_value))]
])
test_images_path = []
for patient in test_patients[:10]:
test_images_path.append([
[patient],
imgs_path_finder(TEST_FOLDER, patient)
])
###Output
_____no_output_____
###Markdown
Training images directory
The code below checks each patient's MGMT label and calls the helper functions to load the scans and save the PNG files under the following structure: png_dataset/with_mgmt/*.png and png_dataset/without_mgmt/*.png. This layout was chosen to take advantage of Keras, which can automatically build a dataset for training the neural network models from this directory format. Keras treats the with_mgmt and without_mgmt subdirectories as the class labels and assigns that class to every image inside the corresponding subdirectory.
###Code
for patient in images_path:
if patient[2][0] == '1':
os.chdir(WITH_MGMT_DIR)
for image in patient[1]:
png_save(image)
os.chdir(INPUTDIR_PATH)
else:
os.chdir(WITHOUT_MGMT_DIR)
for image in patient[1]:
png_save(image)
os.chdir(INPUTDIR_PATH)
###Output
_____no_output_____
###Markdown
Here we simply look up each patient's images in the TEST directory, convert them to PNG and save them in the PNG_TEST_DIR folder.
###Code
os.chdir(PNG_TEST_DIR)
for patient in test_images_path:
for image in patient[1]:
png_save(image)
os.chdir(INPUTDIR_PATH)
###Output
_____no_output_____
###Markdown
Keras and TensorFlow
After specifying our image size and batch size, we use the Keras API to automatically create the training and validation datasets that will be used by the CNN model. Through the "validation_split" and "subset" parameters we reserve 20% of the images for validation and split the image directory into training and validation subsets.
###Code
image_size = (512, 512)
batch_size = 32
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
PNG_DATASET_DIR,
validation_split=0.2,
subset="training",
seed=1337,
image_size=image_size,
batch_size=batch_size,
)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
PNG_DATASET_DIR,
validation_split=0.2,
subset="validation",
seed=1337,
image_size=image_size,
batch_size=batch_size,
)
###Output
_____no_output_____
###Markdown
Purely as an illustration, we take a sample from train_ds and plot the images together with their respective labels.
###Code
plt.figure(figsize=(10, 10))
for images, labels in train_ds.take(1):
for i in range(9):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(images[i].numpy().astype("uint8"))
plt.title(int(labels[i]))
plt.axis("off")
###Output
_____no_output_____
###Markdown
The "prefetch" method is used to speed up data loading during neural network training.
###Code
train_ds = train_ds.prefetch(buffer_size=32)
val_ds = val_ds.prefetch(buffer_size=32)
###Output
_____no_output_____
###Markdown
CNN
As an experiment, we build a convolutional neural network with only eight layers.
###Code
def make_model(input_shape):
inputs = keras.Input(shape=input_shape)
x = keras.Sequential()
x = layers.experimental.preprocessing.Rescaling(1.0 / 255)(inputs)
x = layers.Conv2D(32, 3, strides=2, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.Conv2D(64, 3, padding="same")(x)
x = layers.BatchNormalization()(x)
x = layers.Activation("relu")(x)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(units=1, activation="sigmoid")(x)
return keras.Model(inputs, outputs)
model = make_model(input_shape=image_size + (3,))
model.summary()
###Output
_____no_output_____
###Markdown
We then define the number of epochs and a callback that saves the best model found during training. When compiling the model we choose Adam as the optimizer, a "binary_crossentropy" loss, since the problem is binary (MGMT present or not), and accuracy as the metric. Looking at the output of "fit", we can see that the network does not perform adequately; it apparently does not learn anything meaningful.
###Code
epochs = 20
callbacks = [
keras.callbacks.ModelCheckpoint(filepath="/kaggle/working/save_at_{epoch}.h5",
save_best_only=True)
]
model.compile(
optimizer=keras.optimizers.Adam(1e-3),
loss="binary_crossentropy",
metrics=["accuracy"],
)
model.fit(
train_ds, epochs=epochs, callbacks=callbacks, validation_data=val_ds,
)
###Output
_____no_output_____
###Markdown
Transfer learning
To improve our classification we opted for transfer learning from a pre-trained model, in our case Xception. Keras itself provides a method to obtain this model with weights pre-trained on the ImageNet dataset. To let the model adapt to our problem, we do not include Xception's final classification layer when creating our base network.
###Code
base_model = keras.applications.Xception(
weights='imagenet', # Load weights pre-trained on ImageNet.
input_shape=(512, 512, 3),
include_top=False) # Do not include the ImageNet classifier at the top.
###Output
_____no_output_____
###Markdown
We then freeze the base model's parameters so the pre-trained weights are not changed, and build a new model from it by adding two more layers, the last one being our classification layer.
###Code
base_model.trainable = False
inputs_2 = keras.Input(shape=(512, 512, 3))
x_2 = base_model(inputs_2, training=False)
# Convert features of shape `base_model.output_shape[1:]` to vectors
x_2 = keras.layers.GlobalAveragePooling2D()(x_2)
# A Dense classifier with a single unit (binary classification)
outputs_2 = keras.layers.Dense(1, activation="sigmoid")(x_2)
model_2 = keras.Model(inputs_2, outputs_2)
model_2.summary()
###Output
_____no_output_____
###Markdown
With this new neural network we obtain an accuracy close to 80%.
###Code
epochs = 20
callbacks = [
keras.callbacks.ModelCheckpoint(filepath="/kaggle/working/transfer_save_at_{epoch}.h5",
save_best_only=True)
]
model_2.compile(optimizer=keras.optimizers.Adam(),
loss=keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[keras.metrics.BinaryAccuracy()])
model_2.fit(train_ds, epochs=epochs, callbacks=callbacks, validation_data=val_ds)
###Output
_____no_output_____
###Markdown
Using the saved model
Having saved the best model during training, we can restore it and run our classifications.
###Code
#restored_model = keras.models.load_model("/kaggle/input/saved-models/transfer_save_at_17.h5")
#restored_model.summary()
predictions = []
i = 0
for file in os.listdir(PNG_TEST_DIR)[:100]:
image = tf.keras.preprocessing.image.load_img(os.path.join(PNG_TEST_DIR, file),
target_size=image_size)
input_arr = keras.preprocessing.image.img_to_array(image)
input_arr = np.array([input_arr]) # Convert single image to a batch.
predictions.append([file, model_2.predict(input_arr)])
predictions
###Output
_____no_output_____
###Markdown
Fine-tuning
With the code below we could fine-tune the model by unfreezing the base model's parameters; however, given the resource limits of the development environment, this code raises an Out Of Memory error.
###Code
'''
base_model.trainable = True
model_2.compile(optimizer=keras.optimizers.Adam(1e-5),
loss=keras.losses.BinaryCrossentropy(from_logits=True),
metrics=[keras.metrics.BinaryAccuracy()])
model_2.fit(train_ds, epochs=10, callbacks=callbacks, validation_data=val_ds)
'''
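# A possible lower-memory alternative (illustrative sketch only, not part of the original run):
# instead of unfreezing the whole Xception base, unfreeze just its last few layers and/or
# rebuild the input pipeline with a smaller batch size before fitting again.
def finetune_top_layers(n_trainable=20, epochs=5):
    base_model.trainable = True
    for layer in base_model.layers[:-n_trainable]:
        layer.trainable = False  # keep all but the last `n_trainable` layers frozen
    model_2.compile(optimizer=keras.optimizers.Adam(1e-5),
                    loss=keras.losses.BinaryCrossentropy(from_logits=True),
                    metrics=[keras.metrics.BinaryAccuracy()])
    return model_2.fit(train_ds, epochs=epochs, callbacks=callbacks, validation_data=val_ds)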
###Output
_____no_output_____ |
backTracking/combSum.ipynb | ###Markdown
Chapter: Backtracking
Title: Combination sum implementation
Link: [YouTube](https://youtu.be/C6vZH6hnzJg)
Problem: find every combination of elements from the given array whose sum equals the target.
Example: [1,2,3], sum = 5
Answer: [[1,1,1,1,1],[1,1,1,2],[1,2,2],[1,1,3],[2,3]]
###Code
from typing import List
class CombSum:
def solution(self, in_list: List[int], target: int) -> List[List[int]]:
if len(in_list)==0:
return []
#set init member Vars
self.__result = []
self.__in_list = in_list
comb = []
self.__bt(0,target,comb) #backtracking
return self.__result
def __bt(self, prevIdx:int, targetSum:int, comb:List[int]):
#exit conditions
if targetSum==0:
self.__result.append(comb.copy())
elif targetSum < 0:
return
#process(candidates filtering)
for idx in range(prevIdx,len(self.__in_list)):
num = self.__in_list[idx]
            # recursive call
comb.append(num)
self.__bt(idx,targetSum-num,comb)
comb.pop()
return
combSum = CombSum()
combSum.solution(in_list=[1,2,3],target=5)
###Output
_____no_output_____ |
tutorials/mosaiks.ipynb | ###Markdown
MOSAIKS feature extraction
This tutorial demonstrates the **MOSAIKS** method for extracting _feature vectors_ from satellite imagery patches for use in downstream modeling tasks. It will show:
- How to extract 1km$^2$ patches of Sentinel 2 multispectral imagery for a list of latitude, longitude points
- How to extract summary features from each of these imagery patches
- How to use the summary features in a linear model of the population density at each point
Background
Consider the case where you have a dataset of latitude and longitude points associated with some dependent variable (for example: population density, weather, housing prices, biodiversity) and, potentially, other independent variables. You would like to model the dependent variable as a function of the independent variables, but instead of including latitude and longitude directly in this model, you would like to include some high-dimensional representation of what the Earth looks like at that point (that hopefully explains some of the variance in the dependent variable!). From the computer vision literature, there are various [representation learning techniques](https://en.wikipedia.org/wiki/Feature_learning) that can be used to do this, i.e. extract _feature vectors_ from imagery. This notebook gives an implementation of the technique described in [Rolf et al. 2021](https://www.nature.com/articles/s41467-021-24638-z), "A generalizable and accessible approach to machine learning with global satellite imagery", called Multi-task Observation using Satellite Imagery & Kitchen Sinks (**MOSAIKS**). For more information about **MOSAIKS** see the [project's webpage](http://www.globalpolicy.science/mosaiks).
**Notes**:
- This example uses [Sentinel-2 Level-2A data](https://planetarycomputer.microsoft.com/dataset/sentinel-2-l2a). The techniques used here apply equally well to other remote-sensing datasets.
- If you're running this on the [Planetary Computer Hub](http://planetarycomputer.microsoft.com/compute), make sure to choose the **GPU - PyTorch** profile when presented with the form to choose your environment.
###Code
!pip install -q git+https://github.com/geopandas/dask-geopandas
import warnings
import time
import os
RASTERIO_BEST_PRACTICES = dict( # See https://github.com/pangeo-data/cog-best-practices
CURL_CA_BUNDLE="/etc/ssl/certs/ca-certificates.crt",
GDAL_DISABLE_READDIR_ON_OPEN="EMPTY_DIR",
AWS_NO_SIGN_REQUEST="YES",
GDAL_MAX_RAW_BLOCK_CACHE_SIZE="200000000",
GDAL_SWATH_SIZE="200000000",
VSI_CURL_CACHE_SIZE="200000000",
)
os.environ.update(RASTERIO_BEST_PRACTICES)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
import rasterio
import rasterio.warp
import rasterio.mask
import shapely.geometry
import geopandas
import dask_geopandas
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from scipy.stats import spearmanr
from scipy.linalg import LinAlgWarning
from dask.distributed import Client
warnings.filterwarnings(action="ignore", category=LinAlgWarning, module="sklearn")
import pystac_client
import planetary_computer as pc
###Output
_____no_output_____
###Markdown
First we define the PyTorch model that we will use to extract the features, along with a helper method. The **MOSAIKS** methodology describes several ways to do this; we use the simplest.
###Code
def featurize(input_img, model, device):
"""Helper method for running an image patch through the model.
Args:
input_img (np.ndarray): Image in (C x H x W) format with a dtype of uint8.
model (torch.nn.Module): Feature extractor network
"""
assert len(input_img.shape) == 3
input_img = torch.from_numpy(input_img / 255.0).float()
input_img = input_img.to(device)
with torch.no_grad():
feats = model(input_img.unsqueeze(0)).cpu().numpy()
return feats
class RCF(nn.Module):
"""A model for extracting Random Convolution Features (RCF) from input imagery."""
def __init__(self, num_features=16, kernel_size=3, num_input_channels=3):
super(RCF, self).__init__()
# We create `num_features / 2` filters so require `num_features` to be divisible by 2
assert num_features % 2 == 0
self.conv1 = nn.Conv2d(
num_input_channels,
num_features // 2,
kernel_size=kernel_size,
stride=1,
padding=0,
dilation=1,
bias=True,
)
nn.init.normal_(self.conv1.weight, mean=0.0, std=1.0)
nn.init.constant_(self.conv1.bias, -1.0)
def forward(self, x):
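        # Each random filter contributes two pooled features: the spatial average of the
        # ReLU response to the filter (x1a) and to its negated filter (x1b).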
x1a = F.relu(self.conv1(x), inplace=True)
x1b = F.relu(-self.conv1(x), inplace=True)
x1a = F.adaptive_avg_pool2d(x1a, (1, 1)).squeeze()
x1b = F.adaptive_avg_pool2d(x1b, (1, 1)).squeeze()
if len(x1a.shape) == 1: # case where we passed a single input
return torch.cat((x1a, x1b), dim=0)
elif len(x1a.shape) == 2: # case where we passed a batch of > 1 inputs
return torch.cat((x1a, x1b), dim=1)
###Output
_____no_output_____
###Markdown
Next, we initialize the model and PyTorch components
###Code
num_features = 1024
device = torch.device("cuda")
model = RCF(num_features).eval().to(device)
###Output
_____no_output_____
###Markdown
Read dataset of (lat, lon) points and corresponding labels
We read a CSV of 100,000 randomly sampled (lat, lon) points over the US and the corresponding population living roughly within 1km$^2$ of the points from the [Gridded Population of the World](https://sedac.ciesin.columbia.edu/downloads/data/gpw-v4/gpw-v4-population-density-rev10/gpw-v4-population-density-rev10_2015_30_sec_tif.zip) dataset. This data comes from the [Code Ocean capsule](https://codeocean.com/capsule/6456296/tree/v2) that accompanies the Rolf et al. 2021 paper.
###Code
df = pd.read_csv(
"https://files.codeocean.com/files/verified/fa908bbc-11f9-4421-8bd3-72a4bf00427f_v2.0/data/int/applications/population/outcomes_sampled_population_CONTUS_16_640_UAR_100000_0.csv?download", # noqa: E501
index_col=0,
na_values=[-999],
).dropna()
points = df[["lon", "lat"]]
population = df["population"]
gdf = geopandas.GeoDataFrame(df, geometry=geopandas.points_from_xy(df.lon, df.lat))
gdf
###Output
_____no_output_____
###Markdown
Get rid of points with nodata population values
###Code
population.plot.hist();
###Output
_____no_output_____
###Markdown
Population is lognormally distributed, so transforming it to log space makes sense for modeling purposes
###Code
population_log = np.log10(population + 1)
population_log.plot.hist();
ax = points.assign(population=population_log).plot.scatter(
x="lon",
y="lat",
c="population",
s=1,
cmap="viridis",
figsize=(10, 6),
colorbar=False,
)
ax.set_axis_off();
###Output
_____no_output_____
###Markdown
Extract features from the imagery around each point
We need to find a suitable Sentinel 2 scene for each point. As usual, we'll use `pystac-client` to search for items matching some conditions, but we don't want to make a separate `.search()` call for each of the 67,968 remaining points: each HTTP request is relatively slow. Instead, we will *batch* our points and search *in parallel*.
We need to be a bit careful with how we batch up our points, though. Since a single Sentinel 2 scene will cover many points, we want to make sure that points which are spatially close together end up in the same batch. In short, we need to spatially partition the dataset. This is implemented in `dask-geopandas`.
So the overall workflow will be:
1. Find an appropriate STAC item for each point (in parallel, using the spatially partitioned dataset)
2. Feed the points and STAC items to a custom Dataset that can read imagery given a point and the URL of an overlapping S2 scene
3. Use a custom DataLoader, which uses our Dataset, to feed our model imagery and save the corresponding features
###Code
NPARTITIONS = 250
ddf = dask_geopandas.from_geopandas(gdf, npartitions=1)
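# Order the points along a Hilbert space-filling curve so that spatially close points
# end up in the same partition (and therefore in the same STAC search batch).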
hd = ddf.hilbert_distance().compute()
gdf["hd"] = hd
gdf = gdf.sort_values("hd")
dgdf = dask_geopandas.from_geopandas(gdf, npartitions=NPARTITIONS, sort=False)
###Output
_____no_output_____
###Markdown
We'll write a helper function that finds a matching STAC item for every point in a chunk of our GeoDataFrame.
###Code
def query(points):
"""
Find a STAC item for points in the `points` DataFrame
Parameters
----------
points : geopandas.GeoDataFrame
A GeoDataFrame
Returns
-------
geopandas.GeoDataFrame
A new geopandas.GeoDataFrame with a `stac_item` column containing the STAC
item that covers each point.
"""
intersects = shapely.geometry.mapping(points.unary_union.convex_hull)
search_start = "2018-01-01"
search_end = "2019-12-31"
catalog = pystac_client.Client.open(
"https://planetarycomputer.microsoft.com/api/stac/v1"
)
# The time frame in which we search for non-cloudy imagery
search = catalog.search(
collections=["sentinel-2-l2a"],
intersects=intersects,
datetime=[search_start, search_end],
query={"eo:cloud_cover": {"lt": 10}},
limit=500,
)
ic = search.get_all_items_as_dict()
features = ic["features"]
features_d = {item["id"]: item for item in features}
data = {
"eo:cloud_cover": [],
"geometry": [],
}
index = []
for item in features:
data["eo:cloud_cover"].append(item["properties"]["eo:cloud_cover"])
data["geometry"].append(shapely.geometry.shape(item["geometry"]))
index.append(item["id"])
items = geopandas.GeoDataFrame(data, index=index, geometry="geometry").sort_values(
"eo:cloud_cover"
)
point_list = points.geometry.tolist()
point_items = []
for point in point_list:
covered_by = items[items.covers(point)]
if len(covered_by):
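            # take the least cloudy matching scene (items are sorted by eo:cloud_cover above)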
point_items.append(features_d[covered_by.index[0]])
else:
# There weren't any scenes matching our conditions for this point (too cloudy)
point_items.append(None)
return points.assign(stac_item=point_items)
%%time
with Client(n_workers=16) as client:
print(client.dashboard_link)
meta = dgdf._meta.assign(stac_item=[])
df2 = dgdf.map_partitions(query, meta=meta).compute()
df2.head()
df3 = df2.dropna(subset=["stac_item"])
matching_urls = [
pc.sign(item["assets"]["visual"]["href"]) for item in df3.stac_item.tolist()
]
points = df3[["lon", "lat"]].to_numpy()
population_log = np.log10(df3["population"].to_numpy() + 1)
class CustomDataset(Dataset):
def __init__(self, points, fns, buffer=500):
self.points = points
self.fns = fns
self.buffer = buffer
def __len__(self):
return self.points.shape[0]
def __getitem__(self, idx):
lon, lat = self.points[idx]
fn = self.fns[idx]
if fn is None:
return None
else:
point_geom = shapely.geometry.mapping(shapely.geometry.Point(lon, lat))
with rasterio.Env():
with rasterio.open(fn, "r") as f:
point_geom = rasterio.warp.transform_geom(
"epsg:4326", f.crs.to_string(), point_geom
)
point_shape = shapely.geometry.shape(point_geom)
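                    # build a square patch roughly 2*buffer on a side centered on the point
                    # (Sentinel-2 scenes use a UTM CRS, so the buffer units are meters)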
mask_shape = point_shape.buffer(self.buffer).envelope
mask_geom = shapely.geometry.mapping(mask_shape)
try:
out_image, out_transform = rasterio.mask.mask(
f, [mask_geom], crop=True
)
except ValueError as e:
if "Input shapes do not overlap raster." in str(e):
return None
out_image = out_image / 255.0
out_image = torch.from_numpy(out_image).float()
return out_image
dataset = CustomDataset(points, matching_urls)
dataloader = DataLoader(
dataset,
batch_size=8,
shuffle=False,
num_workers=os.cpu_count() * 2,
collate_fn=lambda x: x,
pin_memory=False,
)
x_all = np.zeros((points.shape[0], num_features), dtype=float)
tic = time.time()
i = 0
for images in dataloader:
for image in images:
if image is not None:
# A full image should be ~101x101 pixels (i.e. ~1km^2 at a 10m/px spatial
# resolution), however we can receive smaller images if an input point
# happens to be at the edge of a S2 scene (a literal edge case). To deal
# with these (edge) cases we crudely drop all images where the spatial
# dimensions aren't both greater than 20 pixels.
if image.shape[1] >= 20 and image.shape[2] >= 20:
image = image.to(device)
with torch.no_grad():
feats = model(image.unsqueeze(0)).cpu().numpy()
x_all[i] = feats
else:
# this happens if the point is close to the edge of a scene
# (one or both of the spatial dimensions of the image are very small)
pass
else:
pass # this happens if we do not find a S2 scene for some point
if i % 1000 == 0:
print(
f"{i}/{points.shape[0]} -- {i / points.shape[0] * 100:0.2f}%"
+ f" -- {time.time()-tic:0.2f} seconds"
)
tic = time.time()
i += 1
###Output
_____no_output_____
###Markdown
Use the extracted features and given labels to model population density as a function of imagery
We split the available data 80/20 into train/test. We use a cross-validation approach to tune the regularization parameter of a Ridge regression model, then apply the model to the test data and measure the R2.
###Code
y_all = population_log.copy()
x_all.shape, y_all.shape
###Output
_____no_output_____
###Markdown
And one final masking -- any sample that has all zeros for features means that we were unsuccessful at extracting features for that point.
###Code
nofeature_mask = ~(x_all.sum(axis=1) == 0)
x_all = x_all[nofeature_mask]
y_all = y_all[nofeature_mask]
x_all.shape, y_all.shape
x_train, x_test, y_train, y_test = train_test_split(
x_all, y_all, test_size=0.2, random_state=0
)
ridge_cv_random = RidgeCV(cv=5, alphas=np.logspace(-8, 8, base=10, num=17))
ridge_cv_random.fit(x_train, y_train)
print(f"Validation R2 performance {ridge_cv_random.best_score_:0.2f}")
y_pred = np.maximum(ridge_cv_random.predict(x_test), 0)
plt.figure()
plt.scatter(y_pred, y_test, alpha=0.2, s=4)
plt.xlabel("Predicted", fontsize=15)
plt.ylabel("Ground Truth", fontsize=15)
plt.title(r"$\log_{10}(1 + $people$/$km$^2)$", fontsize=15)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlim([0, 6])
plt.ylim([0, 6])
plt.text(
0.5,
5,
s="R$^2$ = %0.2f" % (r2_score(y_test, y_pred)),
fontsize=15,
fontweight="bold",
)
m, b = np.polyfit(y_pred, y_test, 1)
plt.plot(y_pred, m * y_pred + b, color="black")
plt.gca().spines.right.set_visible(False)
plt.gca().spines.top.set_visible(False)
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
In addition to an R$^2$ value of ~0.55 on the test points, we can see that we have a rank-order correlation (Spearman's r) of 0.66.
###Code
spearmanr(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Spatial extrapolation
In the previous section we split the points randomly and found that our model can _interpolate_ population density with an R2 of 0.55; however, this result does not say anything about how well the model will extrapolate. Whenever you are modeling spatio-temporal data it is important to consider what the model is doing as well as the purpose of the model, and then evaluate it appropriately. Here, we test how well our modeling approach extrapolates to areas that it has not been trained on. Specifically, we train the linear model with data from the _western_ portion of the US, then test it on data from the _eastern_ US and interpret the results.
###Code
points = points[nofeature_mask]
###Output
_____no_output_____
###Markdown
First we calculate the 80th percentile longitude of the points in our dataset. Points that are to the west of this value will be in our training split and points to the east of this will be in our testing split.
###Code
split_lon = np.percentile(points[:, 0], 80)
train_idxs = np.where(points[:, 0] <= split_lon)[0]
test_idxs = np.where(points[:, 0] > split_lon)[0]
x_train = x_all[train_idxs]
x_test = x_all[test_idxs]
y_train = y_all[train_idxs]
y_test = y_all[test_idxs]
###Output
_____no_output_____
###Markdown
Visually, the split looks like this:
###Code
plt.figure()
plt.scatter(points[:, 0], points[:, 1], c=y_all, s=1)
plt.vlines(
split_lon,
ymin=points[:, 1].min(),
ymax=points[:, 1].max(),
color="black",
linewidth=4,
)
plt.axis("off")
plt.show()
plt.close()
ridge_cv = RidgeCV(cv=5, alphas=np.logspace(-8, 8, base=10, num=17))
ridge_cv.fit(x_train, y_train)
###Output
_____no_output_____
###Markdown
We can see that our validation performance is similar to that of the random split:
###Code
print(f"Validation R2 performance {ridge_cv.best_score_:0.2f}")
###Output
Validation R2 performance 0.11
###Markdown
However, our _test_ R$^2$ is much lower: 0.13 compared to 0.55. This shows that the linear model trained on **MOSAIKS** features and population data sampled from the _western_ US is not able to predict population density in the _eastern_ US as well. Still, the scatter plot shows that the predictions aren't random, which warrants further investigation...
###Code
y_pred = np.maximum(ridge_cv.predict(x_test), 0)
plt.figure()
plt.scatter(y_pred, y_test, alpha=0.2, s=4)
plt.xlabel("Predicted", fontsize=15)
plt.ylabel("Ground Truth", fontsize=15)
plt.title(r"$\log_{10}(1 + $people$/$km$^2)$", fontsize=15)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlim([0, 6])
plt.ylim([0, 6])
plt.text(
0.5,
5,
s="R$^2$ = %0.2f" % (r2_score(y_test, y_pred)),
fontsize=15,
fontweight="bold",
)
m, b = np.polyfit(y_pred, y_test, 1)
plt.plot(y_pred, m * y_pred + b, color="black")
plt.gca().spines.right.set_visible(False)
plt.gca().spines.top.set_visible(False)
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
The rank-order correlation is still high, 0.61 compared to 0.66 from the random split. This shows that the model is still able to correctly _order_ the density of input points; however, it is wrong about the magnitude of the population densities.
###Code
spearmanr(y_test, y_pred)
###Output
_____no_output_____
###Markdown
This makes sense when we compare the distributions of population density of points from the western US to those of the eastern US: the label distributions are completely different. The distribution of **MOSAIKS** features likely doesn't change, but the mapping between these features and population density clearly varies across space.
###Code
bins = np.linspace(0, 5, num=50)
plt.figure()
plt.hist(y_train, bins=bins)
plt.ylabel("Frequency")
plt.xlabel(r"$\log_{10}(1 + $people$/$km$^2)$")
plt.title("Train points -- western US")
plt.gca().spines.right.set_visible(False)
plt.gca().spines.top.set_visible(False)
plt.show()
plt.close()
plt.figure()
plt.hist(y_test, bins=bins)
plt.ylabel("Frequency")
plt.xlabel(r"$\log_{10}(1 + $people$/$km$^2)$")
plt.title("Test points -- eastern US")
plt.gca().spines.right.set_visible(False)
plt.gca().spines.top.set_visible(False)
plt.show()
plt.close()
###Output
_____no_output_____ |
notebooks/dc1.ipynb | ###Markdown
I will drop some of the columns:
- `FL_DATE`: Because I have the month and the day of the week in separate columns.
- `ORIGIN_CITY_NAME`: Because the airport code will be enough.
- `DEST_CITY_NAME`: Because the airport code will be enough.
- `Unnamed: 13`: Because it is null for every row.
###Code
# Getting rid of the unnecessary columns
df.drop(['FL_DATE',
'ORIGIN_CITY_NAME',
'DEST_CITY_NAME',
'Unnamed: 13'], axis=1, inplace=True)
# We are left with this
df.head(3)
# Let's have a look at our null values
df.isna().sum()[df.isna().sum()!=0]
###Output
_____no_output_____
###Markdown
I will drop rows where `ARR_DEL15` is null since that is what we are predicting. I will drop rows where `CRS_ELAPSED_TIME` is null since there are only 10 such rows.
###Code
# Getting rid of those null values
df.dropna(inplace=True)
# Our data still looks mostly the same, but it is cleaned up now
df.head(3)
###Output
_____no_output_____
###Markdown
I will use `LabelEncoder` to turn `ORIGIN`, `DEST`, and `UNIQUE_CARRIER` into numbers.
###Code
# Here's a LabelEncoder fit to ORIGIN
le = LabelEncoder().fit(df['ORIGIN'])
# Here I transform the column to those numbers
df['ORIGIN'] = le.transform(df['ORIGIN'])
# Here's a LabelEncoder fit to DEST
le = LabelEncoder().fit(df['DEST'])
# Here I transform the column to those numbers
df['DEST'] = le.transform(df['DEST'])
# Here's a LabelEncoder fit to UNIQUE_CARRIER
le = LabelEncoder().fit(df['UNIQUE_CARRIER'])
# Here I transform the column to those numbers
df['UNIQUE_CARRIER'] = le.transform(df['UNIQUE_CARRIER'])
# Here is the data
df.head(3)
# We are ready to do a train and test data split
# Let's first separate our input and output variables
# Input variables
X = df.drop(['ARR_DEL15'], axis=1)
# Output variables
y = df['ARR_DEL15']
# Train/test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2,
stratify=df['ARR_DEL15'], random_state = 42)
model = tf.keras.Sequential([
tf.keras.layers.Embedding(32, input_length=9, output_dim=32),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(32, activation = 'relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(df['ARR_DEL15'].nunique(), activation='softmax')
])
model.summary()
model.compile(loss = 'sparse_categorical_crossentropy', optimizer='adam', metrics = ['accuracy'])
%%time
#converting to numpy arrays prior to fitting into the model
X_train = np.asarray(X_train)
y_train = np.asarray(y_train)
X_test = np.asarray(X_test)
y_test = np.asarray(y_test)
#No need to run a loop, keras does it for us unlike in case of pytorch.
#Though pytorch offers more flexibility in logic
#change verbose, to display training state differently
history = model.fit(X_train, y_train, epochs=30, validation_data=(X_test, y_test), verbose=1)
###Output
Epoch 1/30
|
docs/lectures/lecture06/notebook/L2_1.ipynb | ###Markdown
Title: Principal Components Analysis
Description: This exercise demonstrates the effect of scaling for Principal Components Analysis. After this exercise you should see the following two plots.
Hints:
- Principal Components Analysis
- Standard scaler
- Refer to the lecture notebook.
Do not change any other code except the blanks.
###Code
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
%matplotlib inline
df = pd.read_csv('data2.csv')
display(df.describe())
df.head()
### edTest(test_pca_noscaling) ###
#Fit and Plot the first 2 principal components (no scaling)
fitted_pca = PCA().fit(____)
pca_result = fitted_pca.transform(____)
plt.scatter(pca_result[:,0],pca_result[:,1])
plt.xlabel("Principal Component 1")
plt.ylabel("Principal Component 2")
plt.title("PCA - No scaling");
### edTest(test_pca_scaled) ###
#scale the data and plot first 2 principal components
scaled_df = StandardScaler().____
fitted_pca = PCA().fit(____)
pca_result = fitted_pca.transform(____)
plt.scatter(pca_result[:,0],pca_result[:,1])
plt.xlabel("Principal Component 1")
plt.ylabel("Principal Component 2")
plt.title("PCA - with scaled data");
###Output
_____no_output_____ |
notebooks/1_-_Using_ImageJ/ImageJ_Ops/05_-_Math_on_Image.ipynb | ###Markdown
Math on Image In the `math` namespace, Ops provides traditional mathematical operations such as `add`, `subtract`, `multiply` and `divide`. These operations are overloaded in several ways:* Operate pixelwise between two images—e.g., `math.add(image1, image2)` when `image1` and `image2` have the same dimensions.* Operate between an image and a constant—e.g., `math.add(image, 5)` to add 5 to each sample of `image`.* Operate between two numerical values—e.g., `math.add(3, 5)` to compute the sum of 3 and 5.Some `math` ops are also already heavily optimized, since we used the `math.add` op as a testbed to validate that Ops could perform as well or better than ImageJ 1.x does.
###Code
#@ImageJ ij
import net.imglib2.RandomAccessibleInterval
// Define some handy shorthands!
tile = { images ->
int[] gridLayout = images[0] in List ?
[images[0].size, images.size] : // 2D images list
[images.size] // 1D images list
RandomAccessibleInterval[] rais = images.flatten()
ij.notebook().mosaic(gridLayout, rais)
}
"ImageJ is ready to go."
// Prepare a couple of equally sized images.
import net.imglib2.type.numeric.real.FloatType
image1 = ij.op().run("create.img", [160, 96], new FloatType())
image2 = ij.op().run("copy.rai", image1)
// Gradient toward bottom right.
ij.op().image().equation(image1, "p[0] + p[1]")
minMax1 = ij.op().stats().minMax(image1)
println("image1 range = (" + minMax1.getA() + ", " + minMax1.getB() + ")")
// Sinusoid.
ij.op().image().equation(image2, "64 * (Math.sin(0.1 * p[0]) + Math.cos(0.1 * p[1])) + 128")
minMax2 = ij.op().stats().minMax(image2)
println("image2 range = (" + minMax2.getA() + ", " + minMax2.getB() + ")")
[["image1":image1, "image2":image2]]
###Output
image1 range = (0.0, 254.0)
image2 range = (0.020272091031074524, 255.97271728515625)
###Markdown
Let's test `math.add(image, number)`:
###Code
addImage = image1 // Try also with image2!
tile([
addImage,
ij.op().run("math.add", ij.op().run("copy.rai", addImage), 60),
ij.op().run("math.add", ij.op().run("copy.rai", addImage), 120),
ij.op().run("math.add", ij.op().run("copy.rai", addImage), 180)
])
###Output
_____no_output_____
###Markdown
Notice how we had to make a copy of the source image for each `add(image, number)` above? This is because right now, the best-matching `math.add` op is an _inplace_ operation, modifying the source image. Ops is still young, and needs more fine tuning! In the meantime, watch out for details like this. Now we'll try `math.add(image1, image2)` and `math.subtract(image1, image2)`:
###Code
sum = ij.op().run("math.add", image1, image2)
diff = ij.op().run("math.subtract", image1, image2)
tile([sum, diff])
###Output
_____no_output_____
###Markdown
Here is `math.multiply(image1, image2)`:
###Code
ij.op().run("math.multiply", image1, image2)
###Output
_____no_output_____
###Markdown
And finally `math.divide(image1, image2)`:
###Code
ij.op().run("math.divide", image1, image2)
###Output
_____no_output_____ |
nbs/05_predicting_votes.ipynb | ###Markdown
Predicting votes> Let's see how well votes of politicians in polls can be predicted. **The strategy**:- first: only include a politician id and a poll id as features - second: include text features based on the poll title and/or description **TL;DR**- using only politician id and poll id we find an 88% accuracy (over validation given random split) => individual outcome is highly associated with votes of others in the same poll **TODO**:- test tfidf features- combine poll title and description for feature generation- try transformer based features- visualise the most incorrectly predicted polls and politicians
###Code
%load_ext autoreload
%autoreload 2
from bundestag import abgeordnetenwatch as aw
from bundestag import poll_clustering as pc
from bundestag import vote_prediction as vp
import pandas as pd
from fastai.tabular.all import *
###Output
_____no_output_____
###Markdown
Setup Loading preprocessed dataframes (see `03_abgeordnetenwatch.ipynb`). First let's load the votes.
###Code
df_all_votes = pd.read_parquet(aw.ABGEORDNETENWATCH_PATH / f'df_all_votes.parquet')
df_all_votes.head()
###Output
_____no_output_____
###Markdown
Loading further info on politicians
###Code
%%time
df_mandates = pd.read_parquet(path=aw.ABGEORDNETENWATCH_PATH / 'df_mandates.parquet')
df_mandates['fraction_names'].apply(lambda x: 0 if not isinstance(x,list) else len(x)).value_counts()
###Output
_____no_output_____
###Markdown
Loading data on polls (description, title and so on)
###Code
%%time
df_polls = pd.read_parquet(path=aw.ABGEORDNETENWATCH_PATH / 'df_polls.parquet')
df_polls.head(3).T
###Output
_____no_output_____
###Markdown
Modelling using only poll and politician ids as features Split into train and validation
###Code
vp.test_poll_split(vp.poll_splitter(df_all_votes))
###Output
_____no_output_____
###Markdown
Creating train / valid split
###Code
%%time
splits = RandomSplitter(valid_pct=.2)(df_all_votes)
# splits = vp.poll_splitter(df_all_votes, valid_pct=.2)
splits
###Output
_____no_output_____
###Markdown
Setting target variable and count frequencies
###Code
y_col = 'vote'
print(f'target values: {df_all_votes[y_col].value_counts()}')
###Output
_____no_output_____
###Markdown
Training Final data preprocessing for training
###Code
%%time
to = TabularPandas(df_all_votes, cat_names=['politician name', 'poll_id'], y_names=[y_col],
procs=[Categorify], y_block=CategoryBlock, splits=splits)
dls = to.dataloaders(bs=512)
###Output
_____no_output_____
###Markdown
Finding the learning rate for training
###Code
%%time
learn = tabular_learner(dls)
lrs = learn.lr_find()
lrs
###Output
_____no_output_____
###Markdown
Training the artificial neural net
###Code
%%time
learn.fit_one_cycle(5, lrs.valley)
###Output
_____no_output_____
###Markdown
Inspecting predictions
###Code
vp.plot_predictions(learn, df_all_votes, df_mandates, df_polls, splits)
###Output
_____no_output_____
###Markdown
accuracy:- random split: 88% - poll based split: ~50%, politician embedding itself insufficient to reasonably predict vote Inspecting resulting embeddings
###Code
%%time
embeddings = vp.get_embeddings(learn)
vp.test_embeddings(embeddings)
proponents = vp.get_poll_proponents(df_all_votes, df_mandates)
vp.test_poll_proponents(proponents)
proponents.head()
vp.plot_poll_embeddings(df_all_votes, df_polls, embeddings, df_mandates=df_mandates)
vp.plot_politician_embeddings(df_all_votes, df_mandates, embeddings)
###Output
_____no_output_____
###Markdown
embed scatters after pca:- poll based split => mandates form two groups- random split => polls and mandates each form 2-3 groups Modelling using `poll_title`-based features LDA topic weights as features
###Code
%%time
source_col = 'poll_title'
nlp_col = f'{source_col}_nlp_processed'
num_topics = 25
st = pc.SpacyTransformer()
# load data and prepare text for modelling
df_polls_lda = (df_polls
.assign(**{nlp_col: lambda x: st.clean_text(x, col=source_col)}))
# modelling
st.fit(df_polls_lda[nlp_col].values, mode='lda', num_topics=num_topics)
# creating text features using fitted model
df_polls_lda, nlp_feature_cols = df_polls_lda.pipe(st.transform, col=nlp_col, return_new_cols=True)
# inspecting
display(df_polls_lda.head())
pc.pca_plot_lda_topics(df_polls_lda, st, source_col, nlp_feature_cols)
df_all_votes.head()
df_input = df_all_votes.join(df_polls_lda[['poll_id']+nlp_feature_cols].set_index('poll_id'), on='poll_id')
df_input.head()
%%time
splits = vp.poll_splitter(df_input, valid_pct=.2)
splits
%%time
to = TabularPandas(df_input,
cat_names=['politician name', ], # 'poll_id'
cont_names=nlp_feature_cols, # using the new features
y_names=[y_col],
procs=[Categorify, Normalize],
y_block=CategoryBlock, splits=splits)
dls = to.dataloaders(bs=512)
%%time
learn = tabular_learner(dls)
lrs = learn.lr_find()
lrs
%%time
learn.fit_one_cycle(5,
# 2e-2)
lrs.valley)
vp.plot_predictions(learn, df_all_votes, df_mandates, df_polls, splits)
###Output
_____no_output_____
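###Markdown
One of the TODOs above is to test tf-idf features. Below is a minimal sketch that builds them with scikit-learn directly on the poll titles; the `SpacyTransformer` in this notebook is only demonstrated with `mode='lda'`, so its tf-idf support is not assumed, and the column names introduced here are illustrative.
###Code
# Sketch: tf-idf features for poll titles, reduced to a few dense columns
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

tfidf = TfidfVectorizer(max_features=2000)
title_tfidf = tfidf.fit_transform(df_polls['poll_title'].fillna(''))

# Low-dimensional dense representation, usable as cont_names in TabularPandas
svd = TruncatedSVD(n_components=10, random_state=42)
title_svd = svd.fit_transform(title_tfidf)

tfidf_cols = [f'title_tfidf_{i}' for i in range(title_svd.shape[1])]
df_polls_tfidf = df_polls[['poll_id']].assign(**dict(zip(tfidf_cols, title_svd.T)))
df_input_tfidf = df_all_votes.join(df_polls_tfidf.set_index('poll_id'), on='poll_id')
df_input_tfidf.head()
###Output
_____no_output_____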
###Markdown
poll_id split:- politician name + poll_id + 10 lda topics based on poll title do not improve the accuracy- politician name + poll_id + 5 lda topics based on poll title: ~49%- politician name + poll_id + 10 lda topics based on poll title: ~57%- politician name + poll_id + 25 lda topics based on poll title: ~45% Modelling using `poll_description`-based features LDA topic weights as features
###Code
%%time
source_col = 'poll_description'
nlp_col = f'{source_col}_nlp_processed'
num_topics = 25
st = pc.SpacyTransformer()
# load data and prepare text for modelling
df_polls_lda = (df_polls
.assign(**{nlp_col: lambda x: st.clean_text(x, col=source_col)}))
# modelling
st.fit(df_polls_lda[nlp_col].values, mode='lda', num_topics=num_topics)
# creating text features using fitted model
df_polls_lda, nlp_feature_cols = df_polls_lda.pipe(st.transform, col=nlp_col, return_new_cols=True)
# inspecting
display(df_polls_lda.head())
pc.pca_plot_lda_topics(df_polls_lda, st, source_col, nlp_feature_cols)
df_input = df_all_votes.join(df_polls_lda[['poll_id']+nlp_feature_cols].set_index('poll_id'), on='poll_id')
df_input.head()
%%time
splits = vp.poll_splitter(df_input, valid_pct=.2)
splits
%%time
to = TabularPandas(df_input,
cat_names=['politician name', ], # 'poll_id'
cont_names=nlp_feature_cols, # using the new features
y_names=[y_col],
procs=[Categorify, Normalize],
y_block=CategoryBlock, splits=splits)
dls = to.dataloaders(bs=512)
%%time
learn = tabular_learner(dls)
lrs = learn.lr_find()
lrs
%%time
learn.fit_one_cycle(5,
# 2e-2)
lrs.valley)
vp.plot_predictions(learn, df_all_votes, df_mandates, df_polls, splits)
###Output
_____no_output_____ |
ML1/linear/025_Exercises.ipynb | ###Markdown
Exercises There are three exercises in this notebook: 1. Use the cross-validation method to test the linear regression with different $\alpha$ values, at least three. 2. Implement an SGD method that will train the Lasso regression for 10 epochs. 3. Extend the Fisher's classifier to work with two features. Use the class as the $y$. 1. Cross-validation linear regression You need to change the variable ``alpha`` to be a list of alphas. Next do a loop and finally compare the results.
###Code
import numpy as np
x1 = np.array([188, 181, 197, 168, 167, 187, 178, 194, 140, 176, 168, 192, 173, 142, 176]).reshape(-1, 1).reshape(15,1)
y = np.array([141, 106, 149, 59, 79, 136, 65, 136, 52, 87, 115, 140, 82, 69, 121]).reshape(-1, 1).reshape(15,1)
x = np.asmatrix(np.c_[np.ones((15,1)),x1])
I = np.identity(2)
alpha = [0.1, 0.001, 0.01] # change here
# add 1-3 line of code here
wa = [np.array(np.linalg.inv(x.T*x + a * I)*x.T*y).ravel() for a in alpha]
# add 1-3 lines to compare the results
from sklearn.metrics import mean_squared_error
print(alpha[np.argmin([mean_squared_error(y, list([xi*w[1]+w[0] for xi in x1])) for w in wa])])
###Output
0.001
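###Markdown
The comparison above is done on the training data only. A minimal sketch of actual k-fold cross-validation over the same alpha grid is shown below; the helper name and fold count are illustrative.
###Code
# Sketch: k-fold cross-validation over the alpha grid
from sklearn.model_selection import KFold

x_arr, y_arr = np.asarray(x), np.asarray(y)

def cv_mse(a, n_splits=5):
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    errors = []
    for train_idx, test_idx in kf.split(x_arr):
        x_tr, y_tr = x_arr[train_idx], y_arr[train_idx]
        x_te, y_te = x_arr[test_idx], y_arr[test_idx]
        w = np.linalg.inv(x_tr.T @ x_tr + a * I) @ x_tr.T @ y_tr
        errors.append(mean_squared_error(y_te, x_te @ w))
    return np.mean(errors)

cv_scores = {a: cv_mse(a) for a in alpha}
print(cv_scores)
print('best alpha (CV):', min(cv_scores, key=cv_scores.get))
###Output
_____no_output_____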
###Markdown
2. Implement the Lasso regression, based on the Ridge regression example. Please implement the SGD method and compare the results with the sklearn Lasso regression results.
###Code
def sgd(xi, yi, wi, alpha, lr=0.001):
    haty = xi.dot(wi[0]) + wi[1]
    intermediate = -2 * np.matmul((yi - haty).T, xi)
    if wi[0] > 0:
        wi[0] -= lr*(intermediate + alpha)
    else:
        wi[0] -= lr*(intermediate - alpha)
    wi[1] -= lr*intermediate
    return wi
x = np.array([188, 181, 197, 168, 167, 187, 178, 194, 140, 176, 168, 192, 173, 142, 176]).reshape(-1, 1)
y = np.array([141, 106, 149, 59, 79, 136, 65, 136, 52, 87, 115, 140, 82, 69, 121]).reshape(-1, 1)
w = np.zeros(2)
alpha = 0.1
for k in range(10):
w = sgd(x, y, w, alpha)
print(w)
###Output
[-2.93142408e+29 -2.93142408e+29]
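###Markdown
The weights above blow up because the summed gradient on the unscaled heights is huge relative to the learning rate. Below is a sketch of a stabilised version plus the requested comparison with scikit-learn's Lasso (full-batch gradient steps for simplicity; all names are illustrative and the exact numbers are not claimed).
###Code
# Sketch: standardise x, take averaged gradient steps, and compare with sklearn
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

x_s = StandardScaler().fit_transform(x).ravel()
y_f = y.ravel()

sk_lasso = Lasso(alpha=alpha).fit(x_s.reshape(-1, 1), y_f)
print('sklearn Lasso:', sk_lasso.coef_, sk_lasso.intercept_)

w = np.zeros(2)  # w[0] = slope, w[1] = intercept
lr = 0.1
for epoch in range(10):
    residual = y_f - (w[0] * x_s + w[1])
    grad_slope = -2 * np.mean(residual * x_s)
    grad_intercept = -2 * np.mean(residual)
    w[0] -= lr * (grad_slope + alpha * np.sign(w[0]))
    w[1] -= lr * grad_intercept
print('manual gradient steps:', w)
###Output
_____no_output_____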
###Markdown
3. Extend the Fisher's classifier Please extend the targets of the ``iris_data`` variable and use it as the $y$.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
iris_data = load_iris()
iris_df = pd.DataFrame(iris_data.data,columns=iris_data.feature_names)
#iris_df.head()
x = iris_df['sepal width (cm)'].values # change here
y = iris_df['sepal length (cm)'].values # change here
y1 = iris_df['petal length (cm)'].values
dataset_size = np.size(x)
mean_x, mean_y, mean_y1 = np.mean(x), np.mean(y), np.mean(y1)
SS_xy = (np.sum(y * x) - dataset_size * mean_y * mean_x) + (np.sum(y1 * x) - dataset_size * mean_y1 * mean_x)
SS_xx = np.sum(x * x) - dataset_size * mean_x * mean_x
a = SS_xy / SS_xx
b = mean_y + mean_y1 - a * mean_x
y_pred = a * x + b
print(y_pred)
###Output
[ 8.73433411 9.7136254 9.32190888 9.51776714 8.53847585 7.95090107
8.93019237 8.93019237 9.90948366 9.51776714 8.34261759 8.93019237
9.7136254 9.7136254 7.75504282 6.97160978 7.95090107 8.73433411
8.14675933 8.14675933 8.93019237 8.34261759 8.53847585 9.12605063
8.93019237 9.7136254 8.93019237 8.73433411 8.93019237 9.32190888
9.51776714 8.93019237 7.55918456 7.3633263 9.51776714 9.32190888
8.73433411 8.53847585 9.7136254 8.93019237 8.73433411 11.08463321
9.32190888 8.73433411 8.14675933 9.7136254 8.14675933 9.32190888
8.34261759 9.12605063 9.32190888 9.32190888 9.51776714 11.08463321
10.10534192 10.10534192 9.12605063 10.88877495 9.90948366 10.30120018
11.67220799 9.7136254 11.28049147 9.90948366 9.90948366 9.51776714
9.7136254 10.30120018 11.28049147 10.69291669 9.32190888 10.10534192
10.69291669 10.10534192 9.90948366 9.7136254 10.10534192 9.7136254
9.90948366 10.49705844 10.88877495 10.88877495 10.30120018 10.30120018
9.7136254 8.93019237 9.51776714 11.08463321 9.7136254 10.69291669
10.49705844 9.7136254 10.49705844 11.08463321 10.30120018 9.7136254
9.90948366 9.90948366 10.69291669 10.10534192 9.12605063 10.30120018
9.7136254 9.90948366 9.7136254 9.7136254 10.69291669 9.90948366
10.69291669 8.53847585 9.32190888 10.30120018 9.7136254 10.69291669
10.10534192 9.32190888 9.7136254 8.14675933 10.49705844 11.28049147
9.32190888 10.10534192 10.10534192 10.30120018 9.12605063 9.32190888
10.10534192 9.7136254 10.10534192 9.7136254 10.10534192 8.14675933
10.10534192 10.10534192 10.49705844 9.7136254 8.93019237 9.51776714
9.7136254 9.51776714 9.51776714 9.51776714 10.30120018 9.32190888
9.12605063 9.7136254 10.69291669 9.7136254 8.93019237 9.7136254 ]
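###Markdown
The cell above regresses a combined sepal/petal length target on sepal width. Below is a sketch of the other reading of the prompt (two input features with the iris class as $y$), using plain least squares on the class labels as a simple stand-in for a full Fisher discriminant; all names are illustrative.
###Code
# Sketch: two features, class labels as the target
X2 = iris_df[['sepal width (cm)', 'sepal length (cm)']].values
y_cls = iris_data.target  # class ids 0, 1, 2

X2b = np.c_[np.ones(len(X2)), X2]        # prepend a bias column
w = np.linalg.pinv(X2b) @ y_cls          # least-squares weights
pred = np.clip(np.round(X2b @ w), 0, 2)  # snap continuous scores back to class ids
print('training accuracy:', np.mean(pred == y_cls))
###Output
_____no_output_____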
|