path (stringlengths 7–265) | concatenated_notebook (stringlengths 46–17M)
---|---|
Notebooks/ComputerVision/7. CV. Hough Lines.ipynb | ###Markdown
Hough transform: Line detection

The simplest boundary you can detect is a line, and more complex boundaries are often made up of several lines. When you do edge detection you'll find that edges, when pieced together, form longer line segments and shapes. We can think of representing any line as a function of space. In image space the equation that represents a line is $y=m x + b$, where $m$ is the slope of the line and $b$ is how far it is shifted up or down.

A useful transformation is moving this line representation from image space to Hough space, also called parameter space. The Hough transform converts data points from image space to Hough space and represents a line in the simplest way: as a point at the coordinate $(m_0, b_0)$, which are the same $m$ and $b$ from the line equation $y=m x + b$. Patterns in Hough space can help us identify lines or other shapes. For example, look at two lines in Hough space that intersect at the point $(m_0, b_0)$: in image space, this corresponds to the line with the equation $y = m_0 x + b_0$. And if we have a line made of mini segments, or points close to the same line equation in image space, this turns into many intersecting lines in Hough space. Imagine this line as part of an edge-detected image where a line just has some small discontinuities in it. Our strategy for finding continuous lines in an image is therefore to look at intersection points in Hough space.

Straight up-and-down lines have infinite slope, so they cannot be represented by $m$ and $b$. A better approach is to express Hough space in polar coordinates: instead of $m$ and $b$, we use $\rho$, which is the distance of the line from the origin, and $\theta$, which is the angle from the horizontal axis. A fragmented line, or points that fall on a line, can then be identified in Hough space as the intersection of sinusoids.

Hough Lines detection
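To make the polar parameterization concrete, here is a small illustrative sketch (not part of the original notebook) that converts a line given as $y = mx + b$ into the $(\rho, \theta)$ form $x\cos\theta + y\sin\theta = \rho$ returned by OpenCV's Hough functions:

```python
# Illustrative sketch (not from the original notebook): slope-intercept -> polar form.
# For y = m*x + b, the polar form x*cos(theta) + y*sin(theta) = rho gives
# theta = atan2(1, -m) and rho = b / sqrt(m**2 + 1).
import numpy as np

def slope_intercept_to_polar(m, b):
    theta = np.arctan2(1.0, -m)        # direction of the line's normal vector
    rho = b / np.sqrt(m ** 2 + 1.0)    # signed distance of the line from the origin
    return rho, theta

rho, theta = slope_intercept_to_polar(m=1.0, b=2.0)
print(rho, np.degrees(theta))          # approx. 1.414 and 135.0
```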
###Code
import numpy as np
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# read and display the image
image = cv2.imread('images/phone.jpg')
image_copy = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image_copy)
# convert to gray scale and perform edge detection
image_gray = cv2.cvtColor(image_copy, cv2.COLOR_RGB2GRAY)
# Canny parameters
low = 50
high = 100
image_edges = cv2.Canny(image_gray, low, high)
plt.imshow(image_edges, cmap="gray")
# define the Hough transform parameters
# make a copy of the image to draw lines on
rho = 3
theta = np.pi/270
threshold = 10
min_line_length = 100
max_line_gap = 5
line_image = np.copy(image_copy)
# run Hough on the edge-detected image
lines = cv2.HoughLinesP(image_edges, rho, theta, threshold, np.array([]),
min_line_length, max_line_gap)
# iterate over the output "lines" and draw lines on the image copy
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(line_image,(x1,y1),(x2,y2),(255,0,0),12)
plt.imshow(line_image)
###Output
_____no_output_____ |
时间序列分析/金融时间序列入门.ipynb | ###Markdown
**Part 1. Basic background and concepts related to time series analysis**

1.1 What is a time series? In short: we observe and measure one variable or a group of variables $x\left ( t \right )$, and the collection of discrete values obtained at a series of time points $t_{1},t_{2},\cdots,t_{n}$ is called a time series. For example: the closing prices of stock A on each trading day between 1 June 2015 and 1 June 2016 form a time series; the daily maximum temperature at a given location forms a time series.

Some characteristic components: **Trend**: a sustained upward or downward movement of the series over a long period of time. **Seasonal variation**: periodic fluctuation that repeats within a year, resulting from factors such as climate, production conditions, holidays, or social customs. **Cyclical variation**: periodic variation of non-fixed length; a cycle may persist for some time, but unlike a trend it does not move persistently in a single direction, instead alternating between rises and falls. **Random variation**: the random fluctuation that remains after the trend, seasonal, and cyclical components are removed. Irregular fluctuations are always mixed into a time series, giving it a wavy or oscillating appearance. A series that contains only random fluctuation is also called a **stationary series**.
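As an illustrative sketch (not part of the original notebook; the monthly series below is made up and it assumes `statsmodels` is available), these components can be separated with a classical decomposition:

```python
# Toy example (hypothetical data): split a series into trend, seasonal and residual parts.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

idx = pd.date_range("2015-01-01", periods=48, freq="MS")
y = pd.Series(0.5 * np.arange(48)                          # trend
              + 3 * np.sin(2 * np.pi * np.arange(48) / 12)  # seasonality
              + np.random.randn(48),                        # noise
              index=idx)

result = seasonal_decompose(y, model="additive", period=12)
result.plot()   # panels: observed, trend, seasonal, residual
```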
###Code
# 1. White-noise series: no structure at all; forecasting is meaningless
# 2. Stationary, non-white-noise series: mean and variance are constant
# 3. Non-stationary series: transform it into a stationary series, then fit it with the methods for stationary series; differencing ---> ARIMA
#    1. Univariate time series: ARMA ---> GARCH
#    2. Multivariate time series: VAR ---> MGARCH
###Output
_____no_output_____
###Markdown
The analysis workflow: time series to analyse ---> stationarity test (if non-stationary, convert it to a stationary series by differencing; tools: unit-root test, ACF / PACF, cut-off vs. tailing-off) ---> white-noise test (if it is white noise, stop; if it is not white noise, compute the ACF / PACF) ---> model identification ---> parameter estimation ---> model diagnostics ---> model refinement ---> forecasting.

1.2 Stationarity

A stationary time series, roughly speaking, is one whose mean shows no systematic change (no trend), whose variance shows no systematic change, and from which strictly periodic variation has been removed.
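A minimal, self-contained sketch of the checks in this workflow (not part of the original notebook; it uses a toy random-walk series and assumes `statsmodels` is installed):

```python
# Sketch of the stationarity and white-noise checks described above (toy data).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.stats.diagnostic import acorr_ljungbox

x = pd.Series(np.random.randn(500)).cumsum()          # toy random walk (non-stationary)

print("ADF p-value (level):", adfuller(x)[1])          # large p -> cannot reject a unit root
dx = x.diff().dropna()                                 # first difference
print("ADF p-value (differenced):", adfuller(dx)[1])   # small p -> stationary

print(acorr_ljungbox(dx, lags=[10]))                   # Ljung-Box white-noise test
```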
###Code
IndexData = DataAPI.MktIdxdGet(indexID=u"",ticker=u"000001",beginDate=u"20130101",endDate=u"20140801",field=u"tradeDate,closeIndex,CHGPct",pandas="1")
IndexData = IndexData.set_index(IndexData['tradeDate'])
IndexData['closeIndexDiff_1'] = IndexData['closeIndex'].diff(1) # first-order differencing
IndexData['closeIndexDiff_2'] = IndexData['closeIndexDiff_1'].diff(1) # second-order differencing
IndexData.plot(subplots=True,figsize=(18,12))
###Output
_____no_output_____
###Markdown
In the figure above, the first panel shows the closing level of the Shanghai Composite Index over part of the period and is a **non-stationary time series**; the two panels below are **stationary time series** (no formal test is performed here; the plots are only meant to show the difference, and testing a series for stationarity is discussed later). Careful readers will have noticed that the two lower panels are in fact the result of **differencing** the first series: their mean and variance are roughly stable, so the differenced series has become stationary. We will return to this transformation later. We can now give a definition of stationarity:

**Strict stationarity**: if for every time $t$, every positive integer $k$ and any $k$ positive integers $(t_{1},t_{2},...,t_{k})$, the joint distribution of $(r_{t_{1}},r_{t_{2}},...,r_{t_{k}})$ is the same as the joint distribution of $(r_{t_{1}+t},r_{t_{2}+t},...,r_{t_{k}+t})$, we say the time series $\{r_t\}$ is **strictly stationary**. In other words, the joint distribution of $(r_{t_{1}},r_{t_{2}},...,r_{t_{k}})$ is invariant under shifts in time, which is a very strong condition. What we usually assume instead is a weaker form of stationarity (weak stationarity: constant mean, constant variance, and an autocovariance that depends only on the lag).

1.3 Correlation coefficient and the autocorrelation function

1.3.1 Correlation coefficient
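The images that originally followed this cell showed the correlation formula; as a standard reference (not recovered from the original figures), the sample correlation between two series $x_t$ and $y_t$ is

$$\hat{\rho}_{xy} = \frac{\sum_{t=1}^{T}(x_t-\bar{x})(y_t-\bar{y})}{\sqrt{\sum_{t=1}^{T}(x_t-\bar{x})^{2}\sum_{t=1}^{T}(y_t-\bar{y})^{2}}}$$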
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
a = pd.Series([9,8,7,5,4,2])
b = a - a.mean() # remove the mean
plt.figure(figsize=(10,4))
a.plot(label='a')
b.plot(label='mean removed a')
plt.legend()
###Output
_____no_output_____
###Markdown
1.3.2 Autocorrelation function (ACF). An example is given below:
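The stripped figures here contained the ACF definition; as a standard reference (not recovered from the original images), the lag-$k$ autocorrelation of a stationary series $r_t$ and its sample estimate are

$$\rho_k = \frac{\mathrm{Cov}(r_t, r_{t-k})}{\mathrm{Var}(r_t)} = \frac{\gamma_k}{\gamma_0}, \qquad \hat{\rho}_k = \frac{\sum_{t=k+1}^{T}(r_t-\bar{r})(r_{t-k}-\bar{r})}{\sum_{t=1}^{T}(r_t-\bar{r})^{2}}$$

which is what `sm.tsa.acf` estimates in the next cell.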
###Code
from scipy import stats
import statsmodels.api as sm # statistics-related library
data = IndexData['closeIndex'] # Shanghai Composite Index
m = 10 # we examine 10 autocorrelation coefficients
acf,q,p = sm.tsa.acf(data,nlags=m,qstat=True) ## compute the autocorrelation coefficients and p-values
out = np.c_[range(1,11), acf[1:], q, p]
output=pd.DataFrame(out, columns=['lag', "AC", "Q", "P-value"])
output = output.set_index('lag')
output
###Output
_____no_output_____ |
S3_Failover_w_SDK.ipynb | ###Markdown
Python with BOTO3 and OneFS Platform API for S3 Failover **Assumptions:**
1. S3 is configured and enabled on both clusters
2. buckets are configured on both clusters
3. users are equally created on both clusters
4. S3 users are configured with the following RBAC permission: ISI_PRIV_LOGIN_PAPI and (optional) ISI_PRIV_NS_IFS_ACCESS
5. each user already has S3 keys generated on both clusters; if not, the function newS3key() below can be used to generate them
6. S3 buckets are equally set up on both clusters

Setup and Requirements
###Code
# Requirements
import boto3
import urllib3
from pprint import pprint
# isi_sdk
from __future__ import print_function
import time
import isi_sdk_9_1_0
from isi_sdk_9_1_0.rest import ApiException
# Disable SSL selfsigned certificate warnings
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
# Config Variables OneFS
myHost = '192.168.188.192' # source cluster
#myHost = '192.168.188.201' # target cluster
myEndpointURL = 'https://' + myHost + ':9021'
OneFSUser = 'root'
OneFSPw = 'a'
###Output
_____no_output_____
###Markdown
Get the S3 Keys using the OneFS SDK
###Code
def myS3keys(user, password, papiHost):
# Configure HTTP
configuration = isi_sdk_9_1_0.Configuration()
configuration.username = user
configuration.password = password
configuration.verify_ssl = False
configuration.host = "https://" + papiHost + ":8080"
# create an instance of the API class
api_instance = isi_sdk_9_1_0.ProtocolsApi(isi_sdk_9_1_0.ApiClient(configuration))
try:
api_response = api_instance.list_s3_mykeys()
except ApiException as e:
print("Exception when calling ProtocolsApi->list_s3_mykeys: %s\n" % e)
else :
return api_response
my = myS3keys(OneFSUser, OneFSPw, myHost)
pprint(my)
myAccessID = my.keys.access_id
mySecretKey = my.keys.secret_key
###Output
{'keys': {'access_id': '1_root_accid',
'old_key_expiry': 1636711391,
'old_key_timestamp': 1636704440,
'old_secret_key': 'LcbUd7sU_J2BmD47giMrvACipsZj',
'secret_key': 'GzkYEoCnKn82Jn8pI2EFcQNAuNkf',
'secret_key_timestamp': 1636710791}}
###Markdown
Generate a new S3 Key using the OneFS SDK
###Code
def newS3key(user, password, papiHost):
# Configure HTTP
configuration = isi_sdk_9_1_0.Configuration()
configuration.username = user
configuration.password = password
configuration.verify_ssl = False
configuration.host = "https://" + papiHost + ":8080"
# create an instance of the API class
api_instance = isi_sdk_9_1_0.ProtocolsApi(isi_sdk_9_1_0.ApiClient(configuration))
s3_mykey = isi_sdk_9_1_0.S3Key() # S3Key |
force = True # bool | Forces to create new key. (optional)
try:
api_response = api_instance.create_s3_mykey(s3_mykey, force=force)
except ApiException as e:
print("Exception when calling ProtocolsApi->create_s3_mykey: %s\n" % e)
else :
return api_response
my = newS3key(OneFSUser, OneFSPw, myHost)
pprint(my)
myAccessID = my.keys.access_id
mySecretKey = my.keys.secret_key
###Output
{'keys': {'access_id': '1_root_accid',
'old_key_expiry': 1636711391,
'old_key_timestamp': 1636704440,
'old_secret_key': 'LcbUd7sU_J2BmD47giMrvACipsZj',
'secret_key': 'GzkYEoCnKn82Jn8pI2EFcQNAuNkf',
'secret_key_timestamp': 1636710791}}
###Markdown
List objects with BOTO - look up a new key pair in case of an authentication error...**Note: There are other reasons why an S3 call could return a 403 error. => This is for demonstration purposes only!** It can be potentially dangerous to automatically fetch new keys and simply retry without further sanity checks.
###Code
def listObjects(myAccID, mySecret, myURL, bucket):
global myAccessID
global mySecretKey
# S3 Config
# disable unsigned SSL warning
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
from botocore import config
from botocore.exceptions import ClientError
my_config = config.Config(
region_name = "",
signature_version = 'v4',
retries = {
'max_attempts' : 10,
'mode' : 'standard'
}
)
# create a S3 Session Object
session = boto3.session.Session()
s3_client = session.client(
service_name='s3',
aws_access_key_id=myAccID,
aws_secret_access_key=mySecret,
endpoint_url=myURL,
use_ssl=True,
verify=False
)
try:
pprint(s3_client.list_objects(Bucket=bucket))
except ClientError as e:
eStatus = e.response['ResponseMetadata']['HTTPStatusCode']
print(e.response['Error']['Message'])
if eStatus == 403 :
print("Time to add the call to get a new key")
my = myS3keys(OneFSUser, OneFSPw, myHost)
pprint(my)
myAccessID = my.keys.access_id
mySecretKey = my.keys.secret_key
listObjects(myAccessID, mySecretKey, myURL, bucket)
listObjects(myAccessID, mySecretKey, myEndpointURL, 'omnibot')
###Output
{'Contents': [{'ETag': '"d41d8cd98f00b204e9800998ecf8427e"',
'Key': 'this_is_cluster_1',
'LastModified': datetime.datetime(2021, 11, 12, 8, 52, 17, tzinfo=tzutc()),
'Owner': {'DisplayName': 'root', 'ID': 'root'},
'Size': 0,
'StorageClass': 'STANDARD'}],
'IsTruncated': False,
'Marker': '',
'MaxKeys': 1000,
'Name': 'omnibot',
'Prefix': '',
'ResponseMetadata': {'HTTPHeaders': {'connection': 'keep-alive',
'content-length': '460',
'x-amz-request-id': '564950518'},
'HTTPStatusCode': 200,
'HostId': '',
'RequestId': '564950518',
'RetryAttempts': 0}}
|
src/main/scala/paristech/.ipynb_checkpoints/TP3_scala-checkpoint.ipynb | ###Markdown
Additional test of the Decision Tree algorithm
###Code
val dt = new DecisionTreeClassifier()
.setFeaturesCol("features")
.setLabelCol("final_status")
.setPredictionCol("predictions")
.setRawPredictionCol("raw_predictions")
val pipeline_dt = new Pipeline()
.setStages(Array(tokenizer, remover, cvModel, idf, indexerCountry, indexerCurrency, encoder, assembler, dt))
val bc_model = pipeline_dt.fit(training)
val dfWithSimplePredictionsDt = bc_model.transform(test)
// Select (prediction, true label) and compute test error.
val evaluator_dt = new MulticlassClassificationEvaluator()
.setLabelCol("final_status")
.setPredictionCol("predictions")
.setMetricName("accuracy")
val accuracy_dt = evaluator_dt.evaluate(dfWithSimplePredictionsDt)
println("Test Error = " + (1.0 - accuracy_dt))
val dtparamGrid = new ParamGridBuilder()
.addGrid(dt.maxDepth,(1 to 11 by 1).toArray)
.addGrid(dt.maxBins, Array(2,10,20,40,60,80,100))
.build()
val dt_tvs = new TrainValidationSplit()
.setEstimator(pipeline_dt)
.setEvaluator(evaluator_dt)
.setEstimatorParamMaps(dtparamGrid)
.setTrainRatio(0.7)
val model_dt_tvs = dt_tvs.fit(training)
val dfWithPredictionsDt = model_dt_tvs.transform(test)
//dfWithPredictionsDt.groupBy("final_status", "predictions").count.show()
val accuracy_dt_tvs = evaluator_dt.evaluate(dfWithPredictionsDt)
println("Test Error = " + (1.0 - accuracy_dt_tvs))
bc_model.write.overwrite.save("models/decision_tree_model")
model_tvs.write.overwrite.save("models/model_lr_tvs")
###Output
_____no_output_____ |
notebooks/IPython extension.ipynb | ###Markdown
While `nb-mermaid` will work with any Jupyter kernel, IPython provides a way to capture loading an extension directly in a cell as a `%linemagic`:
###Code
%reload_ext mermaid
###Output
_____no_output_____ |
aps/satskred/import_skreddb/create_test_data2.ipynb | ###Markdown
The following columns are used in the import routine:
- time -> skredTidspunkt (should probably be t_1)
- time -> registrertDato (should probably be t_1)
- geometry -> SHAPE
- dem_min -> hoydeStoppSkred_moh
- asp_median -> eksposisjonUtlopsomr
- slp_mean -> snittHelningUtlopsomr_gr
- slp_max -> maksHelningUtlopsomr_gr
- slp_min -> minHelningUtlopsomr_gr
- area -> arealUtlopsomr_m2

Important: skredID needs to be the same in all tables.
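A sketch of how this column mapping could be applied with pandas/GeoPandas (hypothetical, not part of the actual import routine; it assumes the `gdf` GeoDataFrame loaded below and a reasonably recent GeoPandas):

```python
# Hypothetical sketch: rename detection columns to the skred database schema.
column_map = {
    "time": "skredTidspunkt",
    "dem_min": "hoydeStoppSkred_moh",
    "asp_median": "eksposisjonUtlopsomr",
    "slp_mean": "snittHelningUtlopsomr_gr",
    "slp_max": "maksHelningUtlopsomr_gr",
    "slp_min": "minHelningUtlopsomr_gr",
    "area": "arealUtlopsomr_m2",
}
renamed = gdf.rename(columns=column_map)
renamed["registrertDato"] = renamed["skredTidspunkt"]   # both columns derive from `time`
renamed = renamed.rename_geometry("SHAPE")              # geometry -> SHAPE
```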
###Code
import_list = ['time', 'area', 'dem_min', 'asp_median', 'slp_mean', 'slp_max', 'slp_min', 'geometry']
# Add a name to the polygon
gdf['_name'] = "Original"
# convert t_0 (the time the reference image was taken) into a datetime object and give it a descriptive name
gdf['_reference_date'] = pd.to_datetime(gdf['t_0']) # this actually overwrites the existing column "refdate", but since it is a duplicate of t_0 we don't care.
# convert t_1 (the time the activity image was taken) into a datetime object and give it a descriptive name
gdf['_detection_date'] = pd.to_datetime(gdf['t_1'])
aval_map = gdf.plot(column="area", linewidth=0.3, edgecolor='black', cmap="OrRd", alpha=0.9)
plg = gdf['geometry'][0]
print(type(plg), plg)
print(dt_base.strftime(dt_fmt))
dt_1 = dt_base + timedelta(days=-6, minutes=-1)
print(dt_1.strftime(dt_fmt))
print(filename_base.format(dt_base.strftime(dt_fmt_act), dt_1.strftime(dt_fmt_ref)))
scns = []
scn_dict = {
"scn_0": {
"le": 0.005,
"re": 0.002,
"bn": 0.005,
"tn": 0.002,
"dem_min": 800,
"asp_median": 180,
"slp_mean": 9.0,
"slp_max": 12.0,
"slp_min": 4.0,
"days": 1,
"seconds": 15,
"_dis_id": 1
},
"scn_1": {
"le": 0.006,
"re": 0.002,
"bn": 0.004,
"tn": 0.002,
"dem_min": 850,
"asp_median": 190,
"slp_mean": 8.0,
"slp_max": 11.0,
"slp_min": 3.0,
"days": 3,
"seconds": 65,
"_dis_id": 1
},
"scn_2": {
"le": 0.0055,
"re": 0.002,
"bn": 0.0055,
"tn": 0.002,
"dem_min": 850,
"asp_median": 210,
"slp_mean": 7.0,
"slp_max": 10.0,
"slp_min": 2.0,
"days": 8,
"seconds": 22,
"_dis_id": 1
},
"scn_3": {
"le": 0.007,
"re": 0.003,
"bn": 0.008,
"tn": 0.003,
"dem_min": 650,
"asp_median": 110,
"slp_mean": 9.5,
"slp_max": 15.0,
"slp_min": 5.2,
"days": 5,
"seconds": 8,
"_dis_id": 1
},
"scn_4": {
"le": 0.006,
"re": 0.002,
"bn": 0.006,
"tn": 0.005,
"dem_min": 770,
"asp_median": 140,
"slp_mean": 8.5,
"slp_max": 14.3,
"slp_min": 4.2,
"days": 4,
"seconds": 78,
"_dis_id": 1
}
}
def create_scenario(sd):
# Input is an element from scn_dict
# Define the base coordinates for the test polygons
e_base = 20.94
n_base= 69.71
dt_fmt = '%Y-%m-%dT%H:%M:%S.%f'
dt_fmt_act = '%Y%m%d_%H%M%S'
dt_fmt_ref = '%Y%m%d'
dt_base = datetime.strptime('2019-08-21T15:51:09.025897', dt_fmt)
filename_base ="AvalDet_{0}_ref_{1}_trno_087_ZZ"
le = e_base + sd["le"] # left_easting
re = le + sd["re"] # right_easting
bn = n_base + sd["bn"] # bottom_northing
tn = bn + sd["tn"] # top_northing
p = Polygon([(le, bn), (re, bn), (re, tn), (le, tn)])
act_date = dt_base + timedelta(days=sd["days"], seconds=sd["seconds"])
ref_date = act_date + timedelta(days=-6, seconds=-60)
# Create a copy of the original polygon and alter its properties
scn = gdf.copy(deep=True)
scn['geometry'] = p
print(scn.crs)
scn.to_crs({'init': 'epsg:32633'}, inplace=True)
print(scn.crs)
scn['area'] = scn['geometry'].area
scn['length'] = scn['geometry'].length
scn.to_crs(gdf.crs, inplace=True)
scn['_name'] = filename_base.format(act_date.strftime(dt_fmt_act), ref_date.strftime(dt_fmt_ref))
print(scn['_name'][0])
scn['east'] = scn['geometry'].centroid.x
scn['north'] = scn['geometry'].centroid.y
scn['dem_min'] = sd['dem_min']
scn['asp_median'] = sd['asp_median']
scn['slp_mean'] = sd['slp_mean']
scn['slp_max'] = sd['slp_max']
scn['slp_min'] = sd['slp_min']
scn['time'] = act_date.strftime(dt_fmt)
scn['t_1'] = act_date.strftime(dt_fmt)
scn['refdate'] = ref_date.strftime(dt_fmt)
scn['t_0'] = ref_date.strftime(dt_fmt)
scn['_dis_id'] = 1 # used only to merge (dissolve) polygons for verification needs to be same for all scns
new_dir = '../data/{0}'.format(scn['_name'][0])
if not os.path.exists(new_dir):
os.mkdir(new_dir)
scn.drop(['_detection_date', '_reference_date', '_dis_id'], axis=1).to_file(filename='../data/{0}/{0}.shp'.format(scn['_name'][0], scn['_name'][0]))
return scn
for sd in scn_dict.keys():
scns.append(create_scenario(scn_dict[sd]))
ax = gdf.plot()
for scn in scns:
scn.plot(ax=ax, edgecolor='black', alpha=0.6)
test_scns = pd.concat(scns)
new_dir = '../data/scns'
if not os.path.exists(new_dir):
os.mkdir(new_dir)
test_scns.drop(['_detection_date', '_reference_date', '_dis_id'], axis=1).to_file(filename='../data/scns/test_scns.shp')
dissolved_scns = test_scns.dissolve(by='_dis_id')
test_scns.to_csv('../data/scns/scns.csv')
test_scns.filter(import_list).head()
###Output
_____no_output_____ |
2016/Visualization.ipynb | ###Markdown
Bar Charts
###Code
data = np.random.permutation(np.array(["dog"]*10 + ["cat"]*7 + ["rabbit"]*3))
counts = collections.Counter(data)
plt.bar(range(len(counts)), counts.values())
plt.xticks(np.arange(len(counts))+0.4, counts.keys());
plt.ylabel("Count")
plt.xlabel("Animal");
plt.savefig("barchart.pdf")
###Output
_____no_output_____
###Markdown
Pie Charts
###Code
plt.pie(counts.values(), labels=counts.keys(), autopct='%1.1f%%',
colors=["yellowgreen", "lightskyblue", "gold"])
plt.axis('equal');
plt.savefig("piechart.pdf")
###Output
_____no_output_____
###Markdown
Histograms
###Code
data = np.random.randn(10000)
plt.hist(data, bins=50);
plt.xlabel("Value")
plt.ylabel("Count")
plt.savefig("histogram.pdf")
###Output
_____no_output_____
###Markdown
Scatter plot
###Code
x = np.random.randn(1000)
y = 0.4*x**2 + x + 0.7*np.random.randn(1000)
plt.plot(x,y,'.')
plt.xlabel("Value 1")
plt.ylabel("Value 2")
plt.savefig("scatter.pdf")
###Output
_____no_output_____
###Markdown
Line plot
###Code
x = np.linspace(0,10,1000)
y = np.cumsum(0.01*np.random.randn(1000))
plt.plot(x,y,'-')
plt.xlabel("Time")
plt.ylabel("Value")
plt.savefig("line.pdf")
###Output
_____no_output_____
###Markdown
Box and whisker plot
###Code
category = np.random.choice(['dog','cat','rabbit'], 2000, p=[0.5,0.3,0.2])
weights = np.zeros_like(category, dtype=np.float64)
weights[category=="dog"] = 30 + 5*np.random.randn(np.sum(category=="dog"))
weights[category=="cat"] = 18 + 3*np.random.randn(np.sum(category=="cat"))
weights[category=="rabbit"] = 16 + 4*np.random.randn(np.sum(category=="rabbit"))
labels = collections.Counter(category).keys()
data = [weights[category==l] for l in labels]
# seaborn doesn't capture the plt.boxplot command
seaborn.boxplot(data, whis=[1,99], showfliers=True);
plt.xticks(np.arange(len(labels))+1, labels);
plt.ylabel("Weight")
plt.xlabel("Animal")
plt.savefig("boxplot.pdf")
###Output
_____no_output_____
###Markdown
Heatmap (2D histogram version)
###Code
x = np.random.randn(100000)
y = 0.4*x**2 + x + 0.7*np.random.randn(100000)
plt.hist2d(x, y, bins=100);
plt.set_cmap(plt.cm.get_cmap('hot'))
plt.grid(False)
plt.ylim([-3,8])
plt.xlabel("Value 1")
plt.ylabel("Value 2")
plt.savefig("hist2d.pdf")
###Output
_____no_output_____
###Markdown
Heatmap (matrix version)
###Code
category2 = np.zeros_like(category)
category2[category=="dog"] = np.random.choice(['dog','cat','rabbit'], np.sum(category=="dog"), p=[0.8,0.05,0.15])
category2[category=="cat"] = np.random.choice(['dog','cat','rabbit'], np.sum(category=="cat"), p=[0.2,0.6,0.2])
category2[category=="rabbit"] = np.random.choice(['dog','cat','rabbit'], np.sum(category=="rabbit"), p=[0.4,0.4,0.2])
counter = collections.Counter(zip(category, category2))
labels = collections.Counter(category).keys()
M = np.array([[float(counter[(i,j)]) for i in labels] for j in labels])
M = M / np.sum(M,axis=1)[:,None]
plt.imshow(M, interpolation="Nearest", cmap="hot", vmax=1, vmin=0)
plt.grid(False)
plt.xticks(np.arange(len(labels)), labels);
plt.yticks(np.arange(len(labels)), labels);
plt.xlabel("Second Pet")
plt.ylabel("First Pet")
plt.axes().xaxis.tick_top()
plt.axes().xaxis.set_label_position("top")
plt.savefig("heatmap.pdf")
###Output
_____no_output_____
###Markdown
Scatter Matrix
###Code
x = np.random.randn(1000)
y = 0.4*x**2 + x + 0.7*np.random.randn(1000)
z = 0.5 + 0.2*(y-1)**2 + 0.1*np.random.randn(1000)
df = pd.DataFrame(np.array([x,y,z]).T)
seaborn.pairplot(df);
plt.savefig("scatter_matrix.pdf")
###Output
_____no_output_____
###Markdown
Bubble chart
###Code
plt.scatter(x, y, s=z*20, color=seaborn.color_palette()[0])
plt.xlabel("Value 1")
plt.ylabel("Value 2")
plt.savefig("bubble.pdf")
###Output
_____no_output_____
###Markdown
Color scatter plot
###Code
xy1 = np.random.randn(1000,2).dot(np.random.randn(2,2)) + np.random.randn(2)
xy2 = np.random.randn(1000,2).dot(np.random.randn(2,2)) + np.random.randn(2)
plt.scatter(xy1[:,0], xy1[:,1], color=seaborn.color_palette()[0])
plt.axes().scatter(xy2[:,0], xy2[:,1], color=seaborn.color_palette()[1])
plt.xlabel("Value 1")
plt.ylabel("Value 2")
plt.legend(["Class 1", "Class 2"])
plt.savefig("scatter_color.pdf")
%matplotlib notebook
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot3D(x,y,z, '.')
ax.set_xlabel("Value 1")
ax.set_ylabel("Value 2")
ax.set_zlabel("Value 3");
###Output
_____no_output_____ |
src/.ipynb_checkpoints/pd-classification-checkpoint.ipynb | ###Markdown
Parkinson's Disease Classification with KNN by Group 28 (Michael DeMarco, Philip Zhang, Chris Cai, Yuichi Ito) Figure 1 A screengrab from Praat, the acoustic analysis software used in collecting the data set

Abstract
This project aims to classify whether or not a given patient has Parkinson's disease based on various variables derived from their speech sounds. We used a KNN classification algorithm with six predictors to create a model with an accuracy of 80%. We concluded its accuracy is satisfactory for the scope of DSCI 100, working with the tools we had, but not accurate enough for commercial use. We discussed how this study may be extended to other degenerative diseases. Finally, we elaborated on how one would make such a model usable by the public by providing a website frontend for collecting new data points to test.

Table of Contents
- Introduction
  - Background
  - Question
  - Data set
- Methods
  - Data wrangling and pre-processing
  - Conducting the Classification Analysis
  - Predictors
  - Visualizing the Results
- Exploration
  - Preliminary exploratory data analysis in R
  - Reading in the data
  - Wrangling
  - Creating the training and testing sets
  - Training set in detail
  - Predictors in detail
- Hypothesis
  - Expected Outcomes
- Results
  - Classification & Visualization
- Discussion
  - Summary
  - Impact
  - Next Steps
- References
  - Works Cited

1 Introduction

Background
Parkinson's disease (PD) is a degenerative disorder of the central nervous system (CNS) which severely impacts motor control. Parkinson's has neither a cure nor a known cause. Speech, however, is widely known to play a significant role in the detection of PD. According to Sakar [1], vocal disorder exists in 90% of patients at an early stage. Here's the link to the original [data set](http://archive.ics.uci.edu/ml/datasets/Parkinson%27s+Disease+Classification), and corresponding [paper](https://www.sciencedirect.com/science/article/abs/pii/S1568494618305799?via%3Dihub). Note that the paper used the data set with a similar goal but with very different predictor variables than the ones we select here.

Question
To what extent can we use elements of speech sounds to predict whether or not a patient has Parkinson's disease?

Data set
Our data set is from [UCI's Machine Learning repository](http://archive.ics.uci.edu/ml/datasets/Parkinson%27s+Disease+Classification). This data set contains numerous variables derived from speech sounds gathered by Sakar using the [Praat](http://www.fon.hum.uva.nl/praat/) acoustic analysis software. Each participant in the study was asked to repeat their pronunciations of the /a/ phoneme three times; each entry is thus associated with an ID parameter ranging from 0 to 251.
We chose to focus on the following six predictor parameters:

| Speech Sound Component | Variable Name in Data Set | Description |
|------------------------------------------|---------------------------|------------------------------------------------------------------|
| Pitch Period Entropy (PPE) | PPE | measures the impaired control of someone's pitch |
| Detrended Fluctuation Analysis (DFA) | DFA | a measure of the 'noisiness' of speech waves |
| Recurrence Period Density Entropy (RPDE) | RPDE | repetitiveness of speech waves |
| Jitter | locPctJitter | a measure of frequency instability |
| Shimmer | locShimmer | a measure of amplitude instability |
| Fundamental Frequency | meanPeriodPulses | someone's pitch (calculated from the inverse of the mean period) |

Figure 2 Table describing each of the predictors used in the original analysis

There's a lot of complicated mathematics associated with each of these parameters (including Fourier transforms, dense equations, etc.) so the interested reader is encouraged to reference Sakar directly.

2 Methods

Predictors
Many researchers have attempted to use speech signal processing for Parkinson's disease classification. Based on Sakar and other widely supported literature, the predictors identified above are the most popular speech features used in PD studies and should prove effective. They are also all related to motor control, which is what the early stages of Parkinson's directly impacts.

Data wrangling and pre-processing
Each participant's data (i.e., their pronunciation of /a/) was collected three times; to analyze this data set, each of these three observations was merged by taking the mean of each of our predictor variables in question. The data was not completely tidy, but the columns that we used were, so the untidy columns were neglected. Next, to be able to conduct classification, our predictor variables were standardized and centered. Each of our variables had a unique distribution; PPE, DFA, and RPDE were on a scale of 0 to 1 and the pitch variable was on an order of magnitude of 10^-3, for example, so the entire data set was scaled so that we did not have to worry about each variable's individual distribution.

Conducting the Classification Analysis
Since our question is a binary classification problem, we used the K-nearest neighbours classification algorithm (KNN). We optimized our choice of k using cross-validation on our training set, and then used the optimal choice of k to create our final model. We then determined its accuracy using the test set.

Visualizing the Results
To visualize the effectiveness of our KNN model, we were not able to draw a scatter plot directly, as we worked with more than two dimensions. We had a line plot showing the trend of increasing values of k on the accuracy of our model for many folds to assist in determining which k we should use for our final model. Then, we created a bar plot to show the number of correct predictions, false positives, and false negatives. Figure 3 An example KNN visualization with only two predictors.

3 Exploration

Exploratory data analysis
We conducted a preliminary exploration of the data, to get a better sense of what we were working with.
###Code
# Reinstall packages (if needed)
# install.packages("tidyverse")
# install.packages("caret")
# install.packages("repr")
# install.packages("GGally")
install.packages("formula.tools")
# Fix for strange error message
# install.packages("e1071")
# Load in the necessary libraries
library(caret)
library(repr)
library(GGally)
library(tidyverse)
library(formula.tools)
# Make all of the following results reproducible, use this value across the analysis
set.seed(28)
###Output
_____no_output_____
###Markdown
Reading in the dataTo begin, we read in the data from the UCI repository. We did not read it directly from UCI's URL, as it contained a zip, and trying to deal with that in R is tedious. Rather, we uploaded the file to Google Drive and we accessed it from there.
###Code
# Reads the dataset from the web into R; the Drive URL is self-made
pd_data <- read_csv("https://drive.google.com/uc?export=download&id=1p9JuoTRM_-t56x7gptZU2TRNue8IHFHc", skip = 1)
#narrows down the dataset to the variables that we will use
baseline_data <- pd_data %>%
select(id:meanHarmToNoiseHarmonicity, class)
head(baseline_data)
###Output
_____no_output_____
###Markdown
Figure 4 A slice of the basline data loaded from the UCI repository WranglingAs was mentioned in the methods section, each participant was represented three times (e.g., see three rows with `id == 0` above). We merged these by taking the mean of each of the predictor columns, after grouping by `id`.
###Code
# Averages the values of each subject's three trials so that each subject is represented by one row
project_data <- baseline_data %>%
group_by(id) %>%
summarize(PPE = mean(PPE),
DFA = mean(DFA),
RPDE = mean(RPDE),
meanPeriodPulses = mean(meanPeriodPulses),
locPctJitter = mean(locPctJitter),
locShimmer = mean(locShimmer),
# meanAutoCorrHarmonicity = mean(meanAutoCorrHarmonicity),--legacy from project proposal
class = mean(class)) %>%
mutate(class = as.factor(class)) %>%
mutate(has_pd = (class == 1))
head(project_data)
###Output
_____no_output_____
###Markdown
Figure 5 A table containing the tidied data and only relevant columns remaining Creating the training and testing setsBelow we created the training and test sets using `createDataPartition()` from the `caret` package.
###Code
# Determines which percentage of rows will be used in the training set and testing set (75%/25% split)
set.seed(28)
training_rows <- project_data %>%
select(has_pd) %>%
unlist() %>%
createDataPartition(p = 0.75, list = FALSE)
# Splits the dataset into a training set and testing set
training_set <- project_data %>% slice(training_rows)
testing_set <- project_data %>% slice(-training_rows)
head(training_set)
###Output
_____no_output_____
###Markdown
Figure 6 A slice of our training set data, after splitting our data into two separate sets As mentioned in the data wrangling section of "Methods," we eventually scaled our data. Scaling and other pre-processing was done in the analysis section. Training set in detail Here we looked at the training set in more detail, exploring the balance and spread of our selected columns.
###Code
# Reports the number of counts per class
class_counts <- training_set %>%
group_by(has_pd) %>%
summarize(n = n())
class_counts
###Output
_____no_output_____
###Markdown
Figure 7 A table displaying the balance in our training set
###Code
options(repr.plot.width=4,repr.plot.height=4)
class_counts_plot <- ggplot(class_counts, aes(x = has_pd, y = n, fill = has_pd)) +
geom_bar(stat="identity") +
labs(x = 'Has PD?', y = 'Number', fill = "Has PD") +
ggtitle("Balance in PD Data Set") +
theme(text = element_text(size = 18), legend.position = "none")
class_counts_plot
###Output
_____no_output_____
###Markdown
Figure 8 Visualizing the balance in our training set using a bar chart We had many more—almost three times as many—patients with PD than without in this data set. Therefore, we could conclude our training set was somewhat imbalanced (in fact, it was the same imbalance as the original data set thanks to `createDataPartition()` handling stratification for us); however, it was not severe enough to warrant use of `upScale()`. This limitation is further discussed at the end of our analysis.
###Code
# Reports the means, maxes, and mins of each predictor variable used
predictor_max <- training_set %>%
select(PPE:locShimmer) %>%
map_df(~ max(., na.rm = TRUE))
predictor_min <- training_set %>%
select(PPE:locShimmer) %>%
map_df(~ min(., na.rm = TRUE))
predictor_mean <- training_set %>%
select(PPE:locShimmer) %>%
map_df(~ mean(., na.rm = TRUE))
stats_merged <- rbind(predictor_max, predictor_min, predictor_mean)
stat <- c('max','mean','min')
stats_w_names <- data.frame(stat, stats_merged)
predictor_stats <- gather(stats_w_names,
key = variable,
value = value,
PPE:locShimmer)
predictor_stats
###Output
_____no_output_____
###Markdown
Figure 9 A table containing the mean, max, and min of each of our predictor variables Predictors in detail
###Code
# Visualizes and compares the distributions of each of the predictor variables
plot_pairs <- training_set %>%
select(PPE:locShimmer) %>%
ggpairs(title = "PD speech predictor variable correlations")
# plot_pairs
plot_pairs_by_class <- training_set %>%
ggpairs(.,
legend = 9,
columns = 2:8,
mapping = ggplot2::aes(colour=has_pd),
lower = list(continuous = wrap("smooth", alpha = 0.3, size=0.1)),
title = "PD speech predictor variable correlations by class") +
theme(legend.position = "bottom")
# plot_pairs_by_class
###Output
_____no_output_____
###Markdown
The following two plots were created using the `GGPairs` library. The first, without color, strictly provides detail about the distribution and correlation between each pair created from our six predictor variables. Three of our predictors, DFA, RPDE, and meanPeriodPulses take on a much wider range of values than PPE, jitter, and shimmer. Many of our variables exhibit somewhat positive correlations on the scatterplot, though some have an entirely fuzzy distribution. For example, compare the plots in the PPE column to those in the RPDE column. This likely comes as a result of the spread of the predictors.
###Code
options(repr.plot.width=10,repr.plot.height=10, repr.plot.pointsize=20)
plot_pairs
###Output
_____no_output_____
###Markdown
Figure 10 A pairwise visualization exploring the relationships between our predictor variables With this understanding, we used a second plot, grouped and colored by `has_pd`, to assist in anticipating what the impact of these predictors would be. We noted that for every individual distribution, there is a marked difference between the red and blue groupings, which boded well for our analysis. On average, the healthy patients (i.e., `has_pd == FALSE`) fell on the lower end of the spectrum for our predictors, apart from PPE, where healthy patients exhibited higher values on average. Though we weren’t able to visualize our final model directly (as it was in six dimensions), we predicted from these plots that the new patients which fell on the "lower end" for most of these variables would be healthy. This also made intuitive sense; Parkinson’s is a degenerative disease for the muscles, so unhealthy patients would likely experience more rapid change in various speech variables due to tremors.
###Code
options(repr.plot.width=10,repr.plot.height=10, repr.plot.pointsize=20)
plot_pairs_by_class
###Output
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
###Markdown
Figure 11 Another visualization of the relationship between our predictors, now with their class distributions considered 4 Hypothesis Expected Outcomes From our analysis, we expected to find that the six variables of speech we identified could form an effective model for determining whether or not a patient has Parkinson's. We anticipated our findings would allow us to make reasonable predictions of whether or not a new patient has PD given their speech data. 5 Analysis Classification & Visualization Using all predictors from proposal Below is our first attempt at constructing a classification model using all predictors from the proposal.
###Code
# Scale the data set (pre-processing)
scale_transformer <- preProcess(training_set, method = c("center", "scale"))
training_set <- predict(scale_transformer, training_set)
testing_set <- predict(scale_transformer, testing_set)
# head(training_set)
# head(testing_set)
X_train <- training_set %>%
select(-class, -has_pd) %>%
data.frame()
X_test <- testing_set %>%
select(-class, -has_pd) %>%
data.frame()
Y_train <- training_set %>%
select(class) %>%
unlist()
Y_test <- testing_set %>%
select(class) %>%
unlist()
# head(X_train)
# head(Y_train)
train_control <- trainControl(method="cv", number = 10)
k <- data.frame(k = seq(from = 1, to = 51, by = 2))
knn_model_cv_10fold <- train(x = X_train, y = Y_train, method = "knn", tuneGrid = k, trControl = train_control)
# knn_model_cv_10fold
accuracies <- knn_model_cv_10fold$results
head(accuracies)
###Output
_____no_output_____
###Markdown
Figure 12 A table containing the accuracy values for our first few values of k
###Code
options(repr.plot.height = 5, repr.plot.width = 5)
accuracy_vs_k <- ggplot(accuracies, aes(x = k, y = Accuracy)) +
geom_point() +
geom_line() +
ggtitle("Graphing Accuracy Against K") +
labs(subtitle = "Initial attempt—all predictors used")
accuracy_vs_k
###Output
_____no_output_____
###Markdown
Figure 13 A plot of accuracies of various k-values for a 10-fold cross-validation
###Code
k <- accuracies %>%
arrange(desc(Accuracy)) %>%
head(n = 1) %>%
select(k)
k
###Output
_____no_output_____
###Markdown
Figure 14 A table containing the optimal choice of k It looks like a choice of 5 here yields the highest accuracy value. After approximately a k value of 40, our accuracy plateaus at just above 0.75. We will now retrain our model using this choice of k.
###Code
k = data.frame(k = 5)
model_knn <- train(x = X_train, y = Y_train, method = "knn", tuneGrid = k)
# model_knn
Y_test_predicted <- predict(object = model_knn, X_test)
# head(Y_test_predicted)
model_quality <- confusionMatrix(data = Y_test_predicted, reference = Y_test)
model_quality
###Output
_____no_output_____
###Markdown
Figure 15 Our final model statistics for our data set with all predictors Our final model had an accuracy of 79.4%, which is pretty good! Given our model is in six dimensions, there is no simple visualization for it. However, we can visualize whether or not our model had more false positives or false negatives, and which it was better at predicting: sick or healthy. The confusion matrix gives us all of these values in a 2x2 grid.
###Code
# Create a dataset
matrix_table <- as.data.frame(model_quality$table) %>%
mutate(isCorrect = (Prediction == Reference))
# matrix_table
matrix_plot <- ggplot(matrix_table, aes(fill = isCorrect, x = Reference, y = Freq)) +
geom_bar(position="stack", stat="identity", width = 0.7) +
labs(title = "Confusion Matrix Bar Chart", y = "Frequency", x = "Has PD?") +
scale_fill_discrete(name="Correction Prediction?")
matrix_plot
###Output
_____no_output_____ |
hongong Python/review_self study Python 05_2021.04.02.fri.ipynb | ###Markdown
Functions 5-1 Creating a function - call, parameter, return value, variadic parameter, default parameter - Function call: using a function (input) - Parameter: the data passed in when the function is called (func, preprocessing) - Return value: the final result produced by the function call (output) Basics of a function - def function_name(): - ㅁㅁㅁㅁ statements A basic function
###Code
def print_3_times():
print("안녕하세요")
print("안녕하세요")
print("안녕하세요")
print_3_times()
###Output
안녕하세요
안녕하세요
안녕하세요
###Markdown
Parameters in a function - def function_name(parameter, parameter, ...): - ㅁㅁㅁㅁ statements Basics of parameters
###Code
def print_n_times(value, n):
for i in range(n):
print(value)
print_n_times("안녕하세요", 5) # 매개변수는 덜 넣거나 더 넣으면 타입에러 뜸
###Output
안녕하세요
안녕하세요
안녕하세요
안녕하세요
안녕하세요
###Markdown
Variadic parameters - a function that can accept as many arguments as you like - def function_name(parameter, parameter, ..., *variadic_parameter): - ㅁㅁㅁㅁ statements - an ordinary parameter cannot come after the variadic parameter, - and only one variadic parameter may be used A variadic-parameter function
###Code
def print_n_times(n, *values):
    # repeat n times
    for i in range(n):
        # use values like a list
        for value in values:
            print(value)
        # line break
        print()
# call the function
print_n_times(3, "안녕하세요", "즐거운", "파이썬 프로그램")
###Output
안녕하세요
즐거운
파이썬 프로그램
안녕하세요
즐거운
파이썬 프로그램
안녕하세요
즐거운
파이썬 프로그램
###Markdown
Default parameters - 'parameter = value' - an ordinary parameter cannot come after a default parameter Default parameters
###Code
def print_n_times(value, n=2):
    # repeat n times
for i in range(n):
print(value)
# call the function
print_n_times("안녕하세요")
###Output
안녕하세요
안녕하세요
###Markdown
When a default parameter comes before the variadic parameter (error)
###Code
def print_n_times(n=2, *values):
    # repeat n times
    for i in range(n):
        # use values like a list
        for value in values:
            print(value)
        # simple line break
        print()
# call the function
print_n_times("안녕하세요", "즐거운", "파이썬 프로그래밍")
###Output
_____no_output_____
###Markdown
When the variadic parameter comes before the default parameter
###Code
def print_n_times(*values, n=2):
    # repeat n times
    for i in range(n):
        # use values like a list
        for value in values:
            print(value)
        # simple line break
        print()
# call the function
print_n_times("안녕하세요", "즐거운", "파이썬 프로그래밍")
###Output
안녕하세요
즐거운
파이썬 프로그래밍
안녕하세요
즐거운
파이썬 프로그래밍
###Markdown
Keyword arguments
###Code
def print_n_times(*values, n=2):
    # repeat n times
    for i in range(n):
        # use values like a list
        for value in values:
            print(value)
        # simple line break
        print()
# call the function
print_n_times("안녕하세요", "즐거운", "파이썬 프로그래밍", n=3)
# an argument supplied by specifying the parameter name
###Output
안녕하세요
즐거운
파이썬 프로그래밍
안녕하세요
즐거운
파이썬 프로그래밍
안녕하세요
즐거운
파이썬 프로그래밍
###Markdown
Supplying only the default parameters you need; several ways of calling a function
###Code
def test(a, b=10, c=100):
print(a+b+c)
# 1) basic form
test(10,20,30)
# 2) every parameter given as a keyword argument
test(a=10, b=100, c=200)
# 3) every parameter given as a keyword argument, in arbitrary order
test(c=10, a=100, b=200)
# 4) only some parameters given as keyword arguments
test(10, c=200)
###Output
60
310
310
220
###Markdown
Return: returning without data
###Code
def return_test():
print("A 위치 입니다.")
    return # return
print("B 위치 입니다.")
# call the function
return_test()
###Output
A 위치 입니다.
###Markdown
Returning with data
###Code
def return_test():
    return 100 # return
# call the function
value = return_test()
print(value)
###Output
100
###Markdown
The return value when nothing is returned
###Code
# define the function
def return_test():
return
# call the function
value = return_test()
print(value)
###Output
None
###Markdown
Basic usage pattern of a function - def function(parameters): - ㅁㅁㅁㅁ variable = initial value - ㅁㅁㅁㅁ various processing - ㅁㅁㅁㅁ various processing - ㅁㅁㅁㅁ various processing - ㅁㅁㅁㅁ return A function that adds up all integers within a range
###Code
# declare the function
def sum_all(start,end):
    # declare a variable
    output = 0
    # loop and add up the numbers
    for i in range(start, end+1):
        output+= i
    # return
    return output
# call the function
print("0 to 100:", sum_all(0,100))
print("0 to 1000:", sum_all(0,1000))
print("50 to 100:", sum_all(50,100))
print("500 to 1000:", sum_all(500,1000))
###Output
0 to 100: 5050
0 to 1000: 500500
50 to 100: 3825
500 to 1000: 375750
###Markdown
A function that adds the integers in a range using default parameters and keyword arguments
###Code
# declare the function
def sum_all(start=0, end=100, step=1):
    # declare a variable
    output = 0
    # loop and add up the numbers
    for i in range(start, end+1,step):
        output+= i
    # return
    return output
# call the function
print("A.", sum_all(0,100,10))
print("B.", sum_all(end=100))
print("C.", sum_all(end=100, step=2))
###Output
A. 550
B. 5050
C. 2550
###Markdown
Key points - Call: the act of executing a function - Parameter: what you put inside the function's parentheses - Return value: the final result of the function - Variadic-parameter function: a function that can accept as many arguments as you like - Default parameter: the value that is used when nothing is passed for that parameter Practice problems
###Code
# f(x) = 2x+1
def f(x):
return 2*int(x)+1
print(f(10))
# f(x) = x^2+2x+1
def f(x):
return int(x)**2+2*int(x)+1
print(f(10))
# create a variadic-parameter function that multiplies all of the passed values and returns the result
# expected output: 3150
def mul(*values):
output = 1
for value in values:
output *= value
return output
print(mul(5, 7, 9, 10))
###Output
3150
|
notebooks/Online Learning Fast.ipynb | ###Markdown
We take one element at a time from the test set, compute the similarity for that element and extract the top 50. That element is then predicted. Once its prediction has been produced, it is added to the [LightGBM] training set: there is no need to recompute the similarity, because by evaluating one element at a time we already know from the similarity we compute whether that element is in the train set or not.
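A minimal sketch of this incremental loop (hypothetical helper names; the real similarity, feature, and ranking steps are defined further down in this notebook):

```python
# Hypothetical sketch of the online-learning loop described above.
# `compute_top50`, `predict_record` and `append_to_train` stand in for the
# actual similarity / ranking / update functions implemented below.
predictions = []
for _, test_record in df_test.iterrows():
    candidates = compute_top50(test_record, train_index)      # similarity vs. current train
    predictions.append(predict_record(test_record, candidates))
    train_index = append_to_train(train_index, test_record)   # record becomes available
                                                               # for the following queries
```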
###Code
df_train = pd.read_csv("../dataset/original/train.csv", escapechar="\\")
df_test = pd.read_csv("../dataset/original/test.csv", escapechar="\\")
df_train = df_train.sort_values(by='record_id').reset_index(drop=True)
df_test = df_test.sort_values(by='record_id').reset_index(drop=True)
df_train.linked_id = df_train.linked_id.astype(int)
df_test['linked_id'] = df_test.record_id.str.split("-")
df_test['linked_id'] = df_test.linked_id.apply(lambda x: x[0])
df_test.linked_id = df_test.linked_id.astype(int)
#df_train.linked_id = df_train.linked_id.astype(int)
only_test = set(df_test.linked_id.values) - set(df_train.linked_id.values)
only_test_recordid = df_test[df_test.linked_id.isin(only_test)]
df_test = df_test.drop('linked_id', axis=1)
train1 = pd.read_csv("../dataset/validation_2/train_complete.csv")
train2 = pd.read_csv("../dataset/validation_3/train_complete.csv")
val = pd.read_csv("../dataset/validation/train_complete.csv")
def remove_spaces(s, n=3):
s = re.sub(' +',' ',s).strip()
ngrams = zip(*[s[i:] for i in range(n)])
return [''.join(ngram) for ngram in ngrams]
def ngrams_name(test_record, df_train):
df_train.name = df_train.name.astype(str)
test_record['name'] = test_record['name'].astype(str)
corpus = list(df_train.name)
corpus.append(test_record['name'])
vectorizer = CountVectorizer(preprocessor = remove_spaces, analyzer=remove_spaces)
X = vectorizer.fit_transform(corpus)
X_train = X[:df_train.shape[0],:]
X_test = X[df_train.shape[0]:,:]
similarity = sim.jaccard(X_test, X_train.T, k=300)
return similarity.tocsr()
def ngrams_name_fast(test_record, vectorizer, X_train):
# deve:
# - prendere il nome del test
# - vettorizzarlo
# - calcolare la similarità
# - ritornare la similarità con una nuova riga e una nuova colonna
# - ritornare X_train con una nuova riga (la vettorizzazione del nuovo record)
# ?
X_test = vectorizer.transform([test_record['name']])
similarity = sim.jaccard(X_test, X_train.T, k=300).tocsr()
return similarity.tocsr(), vstack([X_train,X_test])
def ngrams_address(test_record, df_train):
df_train.address = df_train.address.fillna('').astype(str)
test_record.address = test_record.fillna({'address':''}).address
corpus = list(df_train.address)
corpus.append(test_record.address)
vectorizer = CountVectorizer(preprocessor = remove_spaces, analyzer=remove_spaces)
X = vectorizer.fit_transform(corpus)
X_train = X[:df_train.shape[0],:]
X_test = X[df_train.shape[0]:,:]
similarity = sim.jaccard(X_test, X_train.T, k=300)
return similarity.tocsr()
def ngrams_address_fast(test_record, vectorizer, X_train):
test_record.address = test_record.fillna({'address':''}).address
X_test = vectorizer.transform([test_record.address])
similarity = sim.jaccard(X_test, X_train.T, k=300)
return similarity.tocsr(), vstack([X_train,X_test])
def ngrams_email(test_record, df_train):
df_train.email = df_train.email.fillna('').astype(str)
test_record.email = test_record.fillna({'email':''}).email
corpus = list(df_train.email)
corpus.append(test_record.email)
vectorizer = CountVectorizer(preprocessor = remove_spaces, analyzer=remove_spaces)
X = vectorizer.fit_transform(corpus)
X_train = X[:df_train.shape[0],:]
X_test = X[df_train.shape[0]:,:]
similarity = sim.jaccard(X_test, X_train.T, k=300)
return similarity.tocsr()
def ngrams_email_fast(test_record, vectorizer, X_train):
test_record.email = test_record.fillna({'email':''}).email
X_test = vectorizer.transform([test_record.email])
similarity = sim.jaccard(X_test, X_train.T, k=300)
return similarity.tocsr(), vstack([X_train,X_test])
def convert_phones(df_in):
"""
    This function transforms the phone column from scientific notation to a readable string
format, e.g. 1.2933+E10 to 12933000000
: param df_in : the original df with the phone in scientific notation
: return : the clean df
"""
df = df_in.copy()
df.phone = df.phone.fillna('').astype(str)
df.phone = [p.split('.')[0] for p in df.phone]
return df
def ngrams_phone(test_record, df_train):
# manually convert test_record phone
if np.isnan(test_record.phone):
test_record.phone = test_record.fillna({'phone':''}).phone
else:
test_record.phone = test_record.fillna({'phone':''}).phone.astype(str)
test_record.phone = test_record.phone.split('.')[0]
df_train = convert_phones(df_train)
corpus = list(df_train.phone)
corpus.append(test_record.phone)
vectorizer = CountVectorizer(preprocessor = remove_spaces, analyzer=remove_spaces)
X = vectorizer.fit_transform(corpus)
X_train = X[:df_train.shape[0],:]
X_test = X[df_train.shape[0]:,:]
similarity = sim.jaccard(X_test, X_train.T, k=300)
return similarity.tocsr()
def ngrams_phone_fast(test_record, vectorizer, X_train):
# manually convert test_record phone
if np.isnan(test_record.phone):
test_record.phone = test_record.fillna({'phone':''}).phone
else:
test_record.phone = test_record.fillna({'phone':''}).phone.astype(str)
test_record.phone = test_record.phone.split('.')[0]
X_test = vectorizer.transform([test_record.phone])
similarity = sim.jaccard(X_test, X_train.T, k=300)
return similarity.tocsr(), vstack([X_train,X_test])
def expand_df(df):
df_list = []
for (q, pred, pred_rec, score, s_name, s_email, s_phone, idx) in tqdm(
zip(df.queried_record_id, df.predicted_record_id, df.predicted_record_id_record, df.cosine_score,
df.name_cosine, df.email_cosine, df.phone_cosine, df.linked_id_idx)):
for x in range(len(pred)):
df_list.append((q, pred[x], pred_rec[x], score[x], s_name[x], s_email[x], s_phone[x], idx[x]))
    # TODO: rename predicted_record_id to predicted_linked_id and 'predicted_record_id_record' to 'predicted_record_id'
df_new = pd.DataFrame(df_list, columns=['queried_record_id', 'predicted_record_id', 'predicted_record_id_record',
'cosine_score', 'name_cosine',
'email_cosine', 'phone_cosine',
'linked_id_idx',
])
return df_new
def expand_similarities(test_record, vectorizer_name, vectorizer_email, vectorizer_phone, vectorizer_address,
X_train_name, X_train_email, X_train_phone, X_train_address, k=50):
sim_name, X_train_name_new = ngrams_name_fast(test_record, vectorizer_name, X_train_name)
sim_email, X_train_email_new = ngrams_email_fast(test_record, vectorizer_email, X_train_email)
sim_phone, X_train_phone_new = ngrams_phone_fast(test_record, vectorizer_phone, X_train_phone)
sim_address, X_train_address_new = ngrams_address_fast(test_record, vectorizer_address, X_train_address)
hybrid = sim_name + 0.2 * sim_email + 0.2 * sim_phone + 0.2 * sim_address
linid_ = []
linid_idx = []
linid_score = []
linid_name_cosine = []
linid_email_cosine = []
linid_phone_cosine = []
linid_address_cosine = []
linid_record_id = []
tr = df_train[['record_id', 'linked_id']]
indices = hybrid.nonzero()[1][hybrid.data.argsort()[::-1]][:k]
df = tr.loc[indices, :][:k]
linid_.append(df['linked_id'].values)
linid_idx.append(df.index)
linid_record_id.append(df.record_id.values)
    linid_score.append(np.sort(hybrid.data)[::-1][:k]) # this works because the indices are already sorted by the hybrid scores
linid_name_cosine.append([sim_name[0, t] for t in indices])
linid_email_cosine.append([sim_email[0, t] for t in indices])
linid_phone_cosine.append([sim_phone[0, t] for t in indices])
df = pd.DataFrame()
df['queried_record_id'] = [test_record.record_id]
df['predicted_record_id'] = linid_
df['predicted_record_id_record'] = linid_record_id
df['cosine_score'] = linid_score
df['name_cosine'] = linid_name_cosine
df['email_cosine'] = linid_email_cosine
df['phone_cosine'] = linid_phone_cosine
df['linked_id_idx'] = linid_idx
df_new = expand_df(df)
return df_new, X_train_name_new, X_train_email_new, X_train_phone_new, X_train_address_new
def get_linked_id(new_row):
new_row['linked_id'] = new_row.record_id.split("-")
new_row['linked_id'] = new_row.linked_id[0]
new_row['linked_id'] = int(new_row.linked_id)
return new_row
def create_all_vectorizers(df_train):
# vectorizer Name
df_train.name = df_train.name.astype(str)
corpus_name = list(df_train.name)
vectorizer_name = CountVectorizer(preprocessor = remove_spaces, analyzer=remove_spaces)
X_train_name = vectorizer_name.fit_transform(corpus_name)
# vectorizer Email
df_train.email = df_train.email.fillna('').astype(str)
corpus_email = list(df_train.email)
vectorizer_email = CountVectorizer(preprocessor = remove_spaces, analyzer=remove_spaces)
X_train_email = vectorizer_email.fit_transform(corpus_email)
# vectorizer Address
df_train.address = df_train.address.fillna('').astype(str)
corpus_address = list(df_train.address)
vectorizer_address = CountVectorizer(preprocessor = remove_spaces, analyzer=remove_spaces)
X_train_address = vectorizer_address.fit_transform(corpus_address)
# vectorizer Phone
df_train = convert_phones(df_train)
corpus_phone = list(df_train.phone)
vectorizer_phone = CountVectorizer(preprocessor = remove_spaces, analyzer=remove_spaces)
X_train_phone = vectorizer_phone.fit_transform(corpus_phone)
return vectorizer_name, vectorizer_email, vectorizer_address, vectorizer_phone, X_train_name, X_train_email, X_train_address, X_train_phone
###Output
_____no_output_____
###Markdown
Train
###Code
train = pd.concat([train1, train2])
eval_group = val.groupby('queried_record_id').size().values
group = train.groupby('queried_record_id').size().values
ranker = lgb.LGBMRanker()
print('Start LGBM...')
t1 = time.time()
ranker.fit(train.drop(['queried_record_id', 'target', 'predicted_record_id','predicted_record_id_record', 'linked_id_idx'], axis=1),
train['target'], group=group,
eval_set=[(val.drop(['queried_record_id', 'target', 'predicted_record_id','predicted_record_id_record', 'linked_id_idx'], axis=1), val['target'])],
eval_group=[eval_group], early_stopping_rounds=5)
t2 = time.time()
print(f'Learning completed in {int(t2-t1)} seconds.')
###Output
Start LGBM...
[1] valid_0's ndcg@1: 0.972762
Training until validation scores don't improve for 5 rounds.
[2] valid_0's ndcg@1: 0.975492
[3] valid_0's ndcg@1: 0.977745
[4] valid_0's ndcg@1: 0.979941
[5] valid_0's ndcg@1: 0.980212
[6] valid_0's ndcg@1: 0.98034
[7] valid_0's ndcg@1: 0.980388
[8] valid_0's ndcg@1: 0.981028
[9] valid_0's ndcg@1: 0.980887
[10] valid_0's ndcg@1: 0.982163
[11] valid_0's ndcg@1: 0.982491
[12] valid_0's ndcg@1: 0.982518
[13] valid_0's ndcg@1: 0.982995
[14] valid_0's ndcg@1: 0.983013
[15] valid_0's ndcg@1: 0.983066
[16] valid_0's ndcg@1: 0.98335
[17] valid_0's ndcg@1: 0.983762
[18] valid_0's ndcg@1: 0.983911
[19] valid_0's ndcg@1: 0.984109
[20] valid_0's ndcg@1: 0.984468
[21] valid_0's ndcg@1: 0.984486
[22] valid_0's ndcg@1: 0.984516
[23] valid_0's ndcg@1: 0.984564
[24] valid_0's ndcg@1: 0.984748
[25] valid_0's ndcg@1: 0.984889
[26] valid_0's ndcg@1: 0.985003
[27] valid_0's ndcg@1: 0.984911
[28] valid_0's ndcg@1: 0.985016
[29] valid_0's ndcg@1: 0.98516
[30] valid_0's ndcg@1: 0.985187
[31] valid_0's ndcg@1: 0.985288
[32] valid_0's ndcg@1: 0.985283
[33] valid_0's ndcg@1: 0.985331
[34] valid_0's ndcg@1: 0.985336
[35] valid_0's ndcg@1: 0.985463
[36] valid_0's ndcg@1: 0.985406
[37] valid_0's ndcg@1: 0.98545
[38] valid_0's ndcg@1: 0.98552
[39] valid_0's ndcg@1: 0.985524
[40] valid_0's ndcg@1: 0.985638
[41] valid_0's ndcg@1: 0.985713
[42] valid_0's ndcg@1: 0.985713
[43] valid_0's ndcg@1: 0.98577
[44] valid_0's ndcg@1: 0.985783
[45] valid_0's ndcg@1: 0.985853
[46] valid_0's ndcg@1: 0.985862
[47] valid_0's ndcg@1: 0.985919
[48] valid_0's ndcg@1: 0.985962
[49] valid_0's ndcg@1: 0.986019
[50] valid_0's ndcg@1: 0.986041
[51] valid_0's ndcg@1: 0.986111
[52] valid_0's ndcg@1: 0.986133
[53] valid_0's ndcg@1: 0.986221
[54] valid_0's ndcg@1: 0.986208
[55] valid_0's ndcg@1: 0.986256
[56] valid_0's ndcg@1: 0.986278
[57] valid_0's ndcg@1: 0.986313
[58] valid_0's ndcg@1: 0.986269
[59] valid_0's ndcg@1: 0.986282
[60] valid_0's ndcg@1: 0.986243
[61] valid_0's ndcg@1: 0.98626
[62] valid_0's ndcg@1: 0.986304
Early stopping, best iteration is:
[57] valid_0's ndcg@1: 0.986313
Learning completed in 342 seconds.
###Markdown
Example
###Code
vectorizer_name, vectorizer_email, vectorizer_address, vectorizer_phone, X_train_name, X_train_email, X_train_address, X_train_phone= create_all_vectorizers(df_train)
t1 = time.time()
test_record_exp, X_train_name_new, X_train_email_new, X_train_phone_new, X_train_address_new = expand_similarities(df_test.loc[2], vectorizer_name, vectorizer_email, vectorizer_phone, vectorizer_address,
X_train_name, X_train_email, X_train_phone, X_train_address)
t2 = time.time()
t2-t1
test_record_exp
###Output
_____no_output_____
###Markdown
Create Sequential Test Answers We need to build a sequential test set [with the corresponding answers], so that if there are two records referring to the same linked_id that are present only in the test set, the first one to arrive has no reference in the train set and cannot be predicted correctly; it is, however, added to the train set, and when the second record is evaluated the correct answer is the previous record.
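For instance (a made-up illustration, not actual records from this data set), the sequential ground truth for two test records sharing a linked_id that never appears in the original train set would look like this:

```python
# Hypothetical illustration of the sequential answers described above.
test_order = ["123-1", "123-2"]   # order in which the records are evaluated
sequential_answers = {
    "123-1": [],                  # nothing to match yet -> cannot be predicted
    "123-2": ["123-1"],           # 123-1 has been added to the train by then
}
```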
###Code
group_train = df_train.groupby('linked_id').apply(lambda x: list(x['record_id'])).reset_index().rename(columns={0:'record_ids'})
df_test
df_test['linked_id'] = df_test.record_id.str.split('-')
df_test['linked_id'] = df_test.linked_id.apply(lambda x: x[0])
df_test.linked_id = df_test.linked_id.astype(int)
df_test = df_test.merge(group_train, how= 'left', on='linked_id')
def seq_labelling(df):
linked_seen = []
new_group = { }
new_df = []
df_notna = df[~df.record_ids.isna()]
df_na = df[df.record_ids.isna()]
relevant_idx= []
for (i, l, r) in tqdm(zip(df_na.index, df_na.linked_id, df_na.record_id)):
if l not in linked_seen:
linked_seen.append(l)
new_df.append((i, r, np.nan))
new_group[l] = [r]
else:
current_group = new_group[l]
#print(f'{r} and the group: {current_group}')
new_df.append((i, r, current_group))
new_group[l].append(r)
relevant_idx.append(i)
res = pd.DataFrame(new_df, columns=['index','record_id', 'record_ids']).set_index('index')
full_df = pd.concat([df_notna, res])
full_df = full_df.sort_index()
return full_df, relevant_idx
def get_target(df):
df, idx = seq_labelling(df)
df_notidx = df.loc[~df.index.isin(idx)]
df = df.loc[idx]
new_df = []
for (i,r, g) in tqdm(zip(df.index, df.record_id, df.record_ids)):
idx = g.index(r)
g = g[:idx]
new_df.append((i, r, g))
new_df = pd.DataFrame(new_df, columns=['index','record_id', 'record_ids']).set_index('index')
res = pd.concat([new_df, df_notidx])
res = res.sort_index()
return res
seq = get_target(df_test[['record_id', 'linked_id','record_ids']])
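# A minimal toy sketch of the behaviour described above (hypothetical record ids,
# not part of the original run; pandas/numpy are already imported in this notebook).
# linked_id 5 appears only in the test set, so its first record has no possible
# answer, while the correct answer for its second record is the first one;
# linked_id 7 already has a reference in the train set.
_toy = pd.DataFrame({
    'record_id': ['5-1', '5-2', '7-0'],
    'linked_id': [5, 5, 7],
    'record_ids': [np.nan, np.nan, ['7-1']],
})
# get_target(_toy) would leave record_ids as NaN for '5-1' and set it to ['5-1'] for '5-2'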
###Output
_____no_output_____
###Markdown
Incremental Evaluation Note: instead of making predictions on the whole test set, since we already know how the classic algorithm performs on records that have references in the train set, we can restrict the analysis to the test records that have no reference in the train set and that are duplicated within the test set itself [for records that are not duplicated this analysis makes no sense, because they contribute nothing to the subsequent queries].
###Code
duplicates_df = only_test_recordid[only_test_recordid.duplicated('linked_id', keep=False)]
duplicates_df
idx = duplicates_df.index
def reorder_preds(preds):
ordered_lin = []
ordered_score = []
ordered_record = []
for i in range(len(preds)):
l = sorted(preds[i], key=lambda t: t[1], reverse=True)
lin = [x[0] for x in l]
s = [x[1] for x in l]
r = [x[2] for x in l]
ordered_lin.append(lin)
ordered_score.append(s)
ordered_record.append(r)
return ordered_lin, ordered_score, ordered_record
# Re-load training set
df_train = pd.read_csv("../dataset/original/train.csv", escapechar="\\")
df_train = df_train.sort_values(by='record_id').reset_index(drop=True)
df_train.linked_id = df_train.linked_id.astype(int)
# get vectorizers
vectorizer_name, vectorizer_email, vectorizer_address, vectorizer_phone, X_train_name, X_train_email, X_train_address, X_train_phone= create_all_vectorizers(df_train)
# Online Learning -------------> TODO: this takes too long, roughly 7 hours
test_prediction = pd.DataFrame()
seen = []
for i in tqdm(idx):
test_row = duplicates_df.loc[i]
print(f'Record: {test_row.record_id}')
t1 = time.time()
#print(f'Extract test record at index: {i}')
# Get the test record to be evaluate
#print('Create Features')
test_row_exp, X_train_name, X_train_email, X_train_phone, X_train_address = expand_similarities(test_row, vectorizer_name, vectorizer_email, vectorizer_phone, vectorizer_address,
X_train_name, X_train_email, X_train_phone, X_train_address)
test_row_exp = adding_features(test_row_exp, isValidation=False, path=os.path.join('..', 'dataset', 'original'), incremental_train=df_train)
# Get predictions
#print('Get Predictions')
predictions = ranker.predict(test_row_exp.drop(['queried_record_id','linked_id_idx', 'predicted_record_id','predicted_record_id_record'], axis=1))
test_row_exp['predictions'] = predictions
df_predictions = test_row_exp[['queried_record_id', 'predicted_record_id', 'predicted_record_id_record', 'predictions']]
# Re-order predictions
#print('Reorder Predictions')
rec_pred = []
for (l,p,record_id) in zip(df_predictions.predicted_record_id, df_predictions.predictions, df_predictions.predicted_record_id_record):
rec_pred.append((l, p, record_id))
df_predictions['rec_pred'] = rec_pred
group_queried = df_predictions[['queried_record_id', 'rec_pred']].groupby('queried_record_id').apply(lambda x: list(x['rec_pred']))
df_predictions = pd.DataFrame(group_queried).reset_index().rename(columns={0 : 'rec_pred'})
# Store predictions
#print('Store Predictions')
df_predictions['ordered_linked'], df_predictions['ordered_scores'], df_predictions['ordered_record'] = reorder_preds(df_predictions.rec_pred.values)
test_prediction = pd.concat([test_prediction, df_predictions], ignore_index=True)
new_row = get_linked_id(test_row)
df_train = df_train.append(new_row, ignore_index=True)
t2 = time.time()
#print(f'Iteration completed in {t2 - t1}s')
test_prediction
###Output
_____no_output_____ |
SVM _updated (1) (1).ipynb | ###Markdown
ASSIGNMENT - SUPPORT VECTOR MACHINE (SVM) [1] READING DATA
###Code
import pandas as pd
project_data=pd.read_csv("Donor_choose/train_data.csv")
resource_data=pd.read_csv("Donor_choose/resources.csv")
###Output
_____no_output_____
###Markdown
[2] Sorting data according to time [TBS]
###Code
# how to replace elements in list python: https://stackoverflow.com/a/2582163/4084039
cols=['Date' if x=='project_submitted_datetime' else x for x in list(project_data.columns)]
#sort dataframe based on time pandas python: https://stackoverflow.com/a/49702492/4084039
project_data['Date'] = pd.to_datetime(project_data['project_submitted_datetime'])
project_data.drop('project_submitted_datetime', axis=1, inplace=True)
project_data.sort_values(by=['Date'], inplace=True)
# how to reorder columns pandas python: https://stackoverflow.com/a/13148611/4084039
project_data = project_data[cols]
print(cols)
project_data.head(2)
print(project_data.shape)
print("="*100)
print(project_data.columns.values)
print("="*100)
print(resource_data.shape)
print("="*100)
print(resource_data.columns.values)
###Output
(109248, 17)
====================================================================================================
['Unnamed: 0' 'id' 'teacher_id' 'teacher_prefix' 'school_state' 'Date'
'project_grade_category' 'project_subject_categories'
'project_subject_subcategories' 'project_title' 'project_essay_1'
'project_essay_2' 'project_essay_3' 'project_essay_4'
'project_resource_summary' 'teacher_number_of_previously_posted_projects'
'project_is_approved']
====================================================================================================
(1541272, 4)
====================================================================================================
['id' 'description' 'quantity' 'price']
###Markdown
[3] Preprocessing Steps
###Code
categories=list(project_data['project_subject_categories'].values)
# remove special characters from list of strings python: https://stackoverflow.com/a/47301924/4084039
cat_list = []
for i in categories:
temp = ""
# consider we have text like this "Math & Science, Warmth, Care & Hunger"
for j in i.split(','): # it will split it in three parts ["Math & Science", "Warmth", "Care & Hunger"]
if 'The' in j.split(): # this will split each of the catogory based on space "Math & Science"=> "Math","&", "Science"
j=j.replace('The','') # if we have the words "The" we are going to replace it with ''(i.e removing 'The')
j = j.replace(' ','') # we are placeing all the ' '(space) with ''(empty) ex:"Math & Science"=>"Math&Science"
temp+=j.strip()+" " #" abc ".strip() will return "abc", remove the trailing spaces
temp = temp.replace('&','_') # we are replacing the & value into
cat_list.append(temp.strip())
project_data['clean_categories'] = cat_list
project_data.drop(['project_subject_categories'], axis=1, inplace=True)
project_data.head(2)
# count of all the words in corpus python: https://stackoverflow.com/a/22898595/4084039
from collections import Counter
my_counter = Counter()
for word in project_data['clean_categories'].values:
my_counter.update(word.split())
# dict sort by value python: https://stackoverflow.com/a/613218/4084039
cat_dict = dict(my_counter)
sorted_cat_dict = dict(sorted(cat_dict.items(), key=lambda kv: kv[1]))
sorted_cat_dict
sub_catogories = list(project_data['project_subject_subcategories'].values)
# remove special characters from list of strings python: https://stackoverflow.com/a/47301924/4084039
# https://www.geeksforgeeks.org/removing-stop-words-nltk-python/
# https://stackoverflow.com/questions/23669024/how-to-strip-a-specific-word-from-a-string
# https://stackoverflow.com/questions/8270092/remove-all-whitespace-in-a-string-in-python
sub_cat_list = []
for i in sub_catogories:
temp = ""
# consider we have text like this "Math & Science, Warmth, Care & Hunger"
for j in i.split(','): # it will split it in three parts ["Math & Science", "Warmth", "Care & Hunger"]
if 'The' in j.split(): # this will split each of the catogory based on space "Math & Science"=> "Math","&", "Science"
j=j.replace('The','') # if we have the words "The" we are going to replace it with ''(i.e removing 'The')
j = j.replace(' ','') # we are placeing all the ' '(space) with ''(empty) ex:"Math & Science"=>"Math&Science"
temp +=j.strip()+" "#" abc ".strip() will return "abc", remove the trailing spaces
temp = temp.replace('&','_')
sub_cat_list.append(temp.strip())
project_data['clean_subcategories'] = sub_cat_list
project_data.drop(['project_subject_subcategories'], axis=1, inplace=True)
project_data.head(2)
# count of all the words in corpus python: https://stackoverflow.com/a/22898595/4084039
from collections import Counter
my_counter = Counter()
for word in project_data['clean_subcategories'].values:
my_counter.update(word.split())
# dict sort by value python: https://stackoverflow.com/a/613218/4084039
sub_cat_dict = dict(my_counter)
sorted_sub_cat_dict = dict(sorted(sub_cat_dict.items(), key=lambda kv: kv[1]))
sorted_sub_cat_dict
# merge the four essay text columns into a single 'essay' column:
project_data["essay"] = project_data["project_essay_1"].map(str) +\
project_data["project_essay_2"].map(str) + \
project_data["project_essay_3"].map(str) + \
project_data["project_essay_4"].map(str)
print(project_data["essay"].value_counts())
project_data.head(2)
# we get the cost of the project using resource.csv file
resource_data.head(2)
price_data = resource_data.groupby('id').agg({'price':'sum', 'quantity':'sum'}).reset_index()
price_data.head(2)
# join two dataframes in python:
project_data = pd.merge(project_data, price_data, on='id', how='left')
approved_price = project_data[project_data['project_is_approved']==1]['price'].values
rejected_price = project_data[project_data['project_is_approved']==0]['price'].values
# http://zetcode.com/python/prettytable/
from prettytable import PrettyTable
import numpy as np
t = PrettyTable()
t.field_names = ["Percentile", "Approved Projects", "Not Approved Projects"]
for i in range(0,101,5):
t.add_row([i,np.round(np.percentile(approved_price,i), 3), np.round(np.percentile(rejected_price,i), 3)])
print(t)
project_data.head(2)
# https://stackoverflow.com/a/47091490/4084039
import re
def decontracted(phrase):
# specific
phrase = re.sub(r"won't", "will not", phrase)
phrase = re.sub(r"can\'t", "can not", phrase)
# general
phrase = re.sub(r"n\'t", " not", phrase)
phrase = re.sub(r"\'re", " are", phrase)
phrase = re.sub(r"\'s", " is", phrase)
phrase = re.sub(r"\'d", " would", phrase)
phrase = re.sub(r"\'ll", " will", phrase)
phrase = re.sub(r"\'t", " not", phrase)
phrase = re.sub(r"\'ve", " have", phrase)
phrase = re.sub(r"\'m", " am", phrase)
return phrase
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
stopWords = set(stopwords.words('english'))
print(stopWords)
sno=nltk.stem.SnowballStemmer('english')
print(sno.stem('amazing'))
# PREPROCESSING FOR ESSAYS
from tqdm import tqdm
import re
import string
from bs4 import BeautifulSoup
preprocessed_essays = []
# tqdm is for printing the status bar
for sentance in tqdm(project_data['essay'].values):
sent = decontracted(sentance)
sent = sent.replace('\\r', ' ')
sent = sent.replace('\\"', ' ')
sent = sent.replace('\\n', ' ')
sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
sent = re.sub("\S*\d\S*", "", sent).strip()
# https://gist.github.com/sebleier/554280
sent = ' '.join(e for e in sent.split() if e not in stopWords)
preprocessed_essays.append(sent.lower().strip())
# After preprocessing
preprocessed_essays[2000]
project_data = pd.DataFrame(project_data)
project_data['cleaned_essays'] = preprocessed_essays
project_data.head(2)
# Data preprocessing on title text
from tqdm import tqdm
import re
import string
from bs4 import BeautifulSoup
preprocessed_title_text = []
# tqdm is for printing the status bar
for sentance in tqdm(project_data['project_title'].values):
sent = decontracted(sentance)
sent = sent.replace('\\r', ' ')
sent = sent.replace('\\"', ' ')
sent = sent.replace('\\n', ' ')
sent = re.sub('[^A-Za-z0-9]+', ' ', sent)
sent = re.sub("\S*\d\S*", "", sent).strip()
# https://gist.github.com/sebleier/554280
sent = ' '.join(e for e in sent.split() if e not in stopWords)
preprocessed_title_text.append(sent.lower().strip())
project_data = pd.DataFrame(project_data)
project_data['cleaned_title_text'] = preprocessed_title_text
project_data.head(2)
print(project_data.shape)
print(project_data.columns)
# Fill missing teacher_prefix values with the string 'null'
#https://stackoverflow.com/questions/42224700/attributeerror-float-object-has-no-attribute-split
project_data['teacher_prefix'] = project_data['teacher_prefix'].fillna('null')
project_data.head(2)
# count of all the words in corpus python: https://stackoverflow.com/a/22898595/4084039
from collections import Counter
my_counter = Counter()
for word in project_data['project_grade_category'].values:
my_counter.update(word.split())
project_grade_category_dict = dict(my_counter)
project_grade_category_dict = dict(sorted(project_grade_category_dict.items(), key=lambda kv: kv[1]))
###Output
_____no_output_____
###Markdown
Train Test split
###Code
project_data.shape
project_data["project_is_approved"].value_counts()
# Randomly sample 50k points
project_data = project_data.sample(n=50000)
project_data.shape
# Define x & y for splitting
y=project_data['project_is_approved'].values
project_data.drop(['project_is_approved'], axis=1, inplace=True) # drop project is approved columns
x=project_data
print(y.shape)
print(x.shape)
x.head(2)
# break in train test
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test= train_test_split(x,y,test_size=0.3,random_state=2,shuffle=False)
# now split the training data further into train and cv
#x_train,x_cv,y_train,y_cv= train_test_split(x_train, y_train, test_size=0.3 ,random_state=2,stratify=y_train)
print(x_train.shape, y_train.shape)
#print(x_cv.shape, y_cv.shape)
print(x_test.shape, y_test.shape)
print("="*100)
x=np.count_nonzero(y_test)
# count the number of approved / not-approved projects in the test set
print(len(y_test)-x)
print(x)
x_train.head(2)
###Output
_____no_output_____
###Markdown
Apply BOW and OHE
###Code
from sklearn.feature_extraction.text import CountVectorizer
# Vectorizing text data
# We are considering only the words which appeared in at least 10 documents(rows or projects).
vectorizer = CountVectorizer(min_df=10,ngram_range=(1,2))
vectorizer.fit_transform(x_train["cleaned_essays"].values)
x_train_essay_bow = vectorizer.transform(x_train['cleaned_essays'].values)
#x_cv_essay_bow = vectorizer.transform(x_cv['cleaned_essays'].values)
x_test_essay_bow = vectorizer.transform(x_test['cleaned_essays'].values)
print("After vectorizations")
print(x_train_essay_bow.shape, y_train.shape)
#print(x_cv_essay_bow.shape, y_cv.shape)
print(x_test_essay_bow.shape, y_test.shape)
print("="*100)
# BOW on clean_titles
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(min_df=10,ngram_range=(1,2))
vectorizer.fit(x_train['cleaned_title_text'].values) # fit has to happen only on train data
# we use the fitted CountVectorizer to convert the text to vector
x_train_titles_bow = vectorizer.transform(x_train['cleaned_title_text'].values)
#x_cv_titles_bow = vectorizer.transform(x_cv['cleaned_title_text'].values)
x_test_titles_bow = vectorizer.transform(x_test['cleaned_title_text'].values)
print("After vectorizations")
print(x_train_titles_bow.shape, y_train.shape)
#print(x_cv_titles_bow.shape, y_cv.shape)
print(x_test_titles_bow.shape, y_test.shape)
print("="*100)
# ONE of subject category
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
vectorizer.fit(x_train['clean_categories'].values) # fit has to happen only on train data
# we use the fitted CountVectorizer to convert the text to vector
x_train_clean_cat_ohe = vectorizer.transform(x_train['clean_categories'].values)
#x_cv_clean_cat_ohe = vectorizer.transform(x_cv['clean_categories'].values)
x_test_clean_cat_ohe = vectorizer.transform(x_test['clean_categories'].values)
print("After vectorizations")
print(x_train_clean_cat_ohe.shape, y_train.shape)
#print(x_cv_clean_cat_ohe.shape, y_cv.shape)
print(x_test_clean_cat_ohe.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
# ONE of subject subcategory
vectorizer = CountVectorizer()
vectorizer.fit(x_train['clean_subcategories'].values) # fit has to happen only on train data
# we use the fitted CountVectorizer to convert the text to vector
x_train_clean_subcat_ohe = vectorizer.transform(x_train['clean_subcategories'].values)
#x_cv_clean_subcat_ohe = vectorizer.transform(x_cv['clean_subcategories'].values)
x_test_clean_subcat_ohe = vectorizer.transform(x_test['clean_subcategories'].values)
print("After vectorizations")
print(x_train_clean_cat_ohe.shape, y_train.shape)
#print(x_cv_clean_cat_ohe.shape, y_cv.shape)
print(x_test_clean_cat_ohe.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
# one hot encoding the catogorical features: categorical_categories
# teacher_prefix
vectorizer = CountVectorizer()
vectorizer.fit(x_train['teacher_prefix'].values) # fit has to happen only on train data
# we use the fitted CountVectorizer to convert the text to vector
x_train_teacher_pre = vectorizer.transform(x_train['teacher_prefix'].values)
#x_cv_teacher_pre = vectorizer.transform(x_cv['teacher_prefix'].values)
x_test_teacher_pre = vectorizer.transform(x_test['teacher_prefix'].values)
print("After vectorizations")
print(x_train_teacher_pre.shape, y_train.shape)
#print(x_cv_teacher_pre.shape, y_cv.shape)
print(x_test_teacher_pre.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
# school_state
vectorizer = CountVectorizer()
vectorizer.fit(x_train['school_state'].values) # fit has to happen only on train data
# we use the fitted CountVectorizer to convert the text to vector
x_train_state_ohe = vectorizer.transform(x_train['school_state'].values)
#x_cv_state_ohe = vectorizer.transform(x_cv['school_state'].values)
x_test_state_ohe = vectorizer.transform(x_test['school_state'].values)
print("After vectorizations")
print(x_train_state_ohe.shape, y_train.shape)
#print(x_cv_state_ohe.shape, y_cv.shape)
print(x_test_state_ohe.shape, y_test.shape)
print(vectorizer.get_feature_names())
print("="*100)
project_grade_category= x_train['project_grade_category'].unique()
vectorizer5 = CountVectorizer(vocabulary=list(project_grade_category), lowercase=False, binary=True)
vectorizer5.fit(x_train['project_grade_category'].values) # fit has to happen only on train data
# we use the fitted CountVectorizer to convert the text to vector
x_train_grade_ohe = vectorizer5.transform(x_train['project_grade_category'].values)
#x_cv_grade_ohe = vectorizer.transform(x_cv['project_grade_category'].values)
x_test_grade_ohe = vectorizer5.transform(x_test['project_grade_category'].values)
print("After vectorizations")
print(x_train_grade_ohe.shape, y_train.shape)
#print(x_cv_grade_ohe.shape, y_cv.shape)
print(x_test_grade_ohe.shape, y_test.shape)
print(vectorizer5.get_feature_names())
print("="*100)
# Standarized the numerical features: Price
from sklearn.preprocessing import StandardScaler
price_scalar = StandardScaler()
price_scalar.fit(x_train['price'].values.reshape(-1,1)) # finding the mean and standard deviation of this data
x_train_price_std = price_scalar.transform(x_train['price'].values.reshape(-1,1))
#x_cv_price_std = price_scalar.transform(x_cv['price'].values.reshape(-1,1))
x_test_price_std = price_scalar.transform(x_test['price'].values.reshape(-1,1))
print("After vectorizations")
print(x_train_price_std.shape, y_train.shape)
#print(x_cv_price_std.shape, y_cv.shape)
print(x_test_price_std.shape, y_test.shape)
print("="*100)
print(f"Mean : {price_scalar.mean_[0]}, Standard deviation : {np.sqrt(price_scalar.var_[0])}")
x_train_price_std
# Standarized the numerical features: teacher_previously
from sklearn.preprocessing import StandardScaler
teacher_previously_scalar = StandardScaler()
teacher_previously_scalar.fit(x_train['teacher_number_of_previously_posted_projects'].values.reshape(-1,1)) # finding the mean and standard deviation of this data
x_train_teacher_previously_std = teacher_previously_scalar.transform(x_train['teacher_number_of_previously_posted_projects'].values.reshape(-1,1))
#x_cv_teacher_previously_std = teacher_previously_scalar.transform(x_cv['teacher_number_of_previously_posted_projects'].values.reshape(-1,1))
x_test_teacher_previously_std = teacher_previously_scalar.transform(x_test['teacher_number_of_previously_posted_projects'].values.reshape(-1,1))
print("After vectorizations")
print(x_train_teacher_previously_std.shape, y_train.shape)
#print(x_cv_teacher_previously_std.shape, y_cv.shape)
print(x_test_teacher_previously_std.shape, y_test.shape)
print("="*100)
print(f"Mean : {teacher_previously_scalar.mean_[0]}, Standard deviation : {np.sqrt(teacher_previously_scalar.var_[0])}")
# Standarized the numerical features:quantity
from sklearn.preprocessing import StandardScaler
quantity_scalar = StandardScaler()
quantity_scalar.fit(x_train['quantity'].values.reshape(-1,1)) # finding the mean and standard deviation of this data
x_train_quantity_std = quantity_scalar.transform(x_train['quantity'].values.reshape(-1,1))
#x_cv_teacher_previously_std = teacher_previously_scalar.transform(x_cv['teacher_number_of_previously_posted_projects'].values.reshape(-1,1))
x_test_quantity_std = quantity_scalar.transform(x_test['quantity'].values.reshape(-1,1))
print("After vectorizations")
print(x_train_quantity_std.shape, y_train.shape)
#print(x_cv_teacher_previously_std.shape, y_cv.shape)
print(x_test_quantity_std.shape, y_test.shape)
print("="*100)
print(f"Mean : {quantity_scalar.mean_[0]}, Standard deviation : {np.sqrt(quantity_scalar.var_[0])}")
# CONCATINATE all features
# merge two sparse matrices: https://stackoverflow.com/a/19710648/4084039
from scipy.sparse import hstack
X_train = hstack((x_train_essay_bow,x_train_titles_bow,x_train_clean_cat_ohe,x_train_clean_subcat_ohe, x_train_state_ohe, x_train_teacher_pre, x_train_grade_ohe, x_train_price_std,x_train_teacher_previously_std)).tocsr()
#X_cv = hstack((x_cv_essay_bow,x_cv_titles_bow,x_cv_clean_cat_ohe,x_cv_clean_subcat_ohe, x_cv_state_ohe, x_cv_teacher_pre, x_cv_grade_ohe, x_cv_price_std,x_cv_teacher_previously_std)).tocsr()
X_test = hstack((x_test_essay_bow,x_test_titles_bow,x_test_clean_cat_ohe,x_test_clean_subcat_ohe, x_test_state_ohe, x_test_teacher_pre, x_test_grade_ohe, x_test_price_std,x_test_teacher_previously_std)).tocsr()
print("Final Data matrix")
print(X_train.shape, y_train.shape)
#print(X_cv.shape, y_cv.shape)
print(X_test.shape, y_test.shape)
print("="*100)
type(X_train)
###Output
_____no_output_____
###Markdown
SET : 1 [BOW]
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import SGDClassifier
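# Note: SGDClassifier with its default loss='hinge' optimises a linear SVM via
# stochastic gradient descent, which is why it is used throughout this SVM
# assignment; alpha is the regularisation strength tuned in the grid below.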
clf_param_grid = {
'alpha' : [10**-x for x in range(-4,5)],
'penalty' : ['l1','l2']
}
SGD1 = SGDClassifier(class_weight='balanced')
estimator = GridSearchCV(SGD1, param_grid=clf_param_grid ,cv=10, verbose=1, scoring="roc_auc",n_jobs=-1)
estimator.fit(X_train,y_train)
print(estimator.best_params_)
import warnings
warnings.filterwarnings("ignore")
b1=estimator.best_params_["alpha"]
p1=estimator.best_params_["penalty"]
import matplotlib.pyplot as plt
import math
train_auc1= estimator.cv_results_['mean_train_score'][estimator.cv_results_['param_penalty']==p1]
train_auc_std1= estimator.cv_results_['std_train_score'][estimator.cv_results_['param_penalty']==p1]
cv_auc1 = estimator.cv_results_['mean_test_score'][estimator.cv_results_['param_penalty']==p1]
cv_auc_std1= estimator.cv_results_['std_test_score'][estimator.cv_results_['param_penalty']==p1]
ax=plt.subplot()
plt.plot(clf_param_grid['alpha'], train_auc1, label='Train AUC')
# this code is copied from here: https://stackoverflow.com/a/48803361/4084039
plt.gca().fill_between(clf_param_grid['alpha'],train_auc1 - train_auc_std1,train_auc1 + train_auc_std1,alpha=0.2,color='darkblue')
# create a shaded area between [mean - std, mean + std]
plt.plot(clf_param_grid['alpha'], cv_auc1, label='CV AUC')
# this code is copied from here: https://stackoverflow.com/a/48803361/4084039
plt.gca().fill_between(clf_param_grid['alpha'],cv_auc1 - cv_auc_std1,cv_auc1 + cv_auc_std1,alpha=0.2,color='darkorange')
plt.scatter(clf_param_grid['alpha'], train_auc1, label='Train AUC points')
plt.scatter(clf_param_grid['alpha'], cv_auc1, label='CV AUC points')
plt.xscale('log')
plt.axis('tight')
plt.legend()
plt.xlabel("alpha: hyperparameter")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
model_new1=SGDClassifier(penalty=p1, alpha= b1,class_weight='balanced',random_state=1)
model_new1.fit(X_train,y_train)
import numpy as np
import math
# custom function
def sigmoid(x):
return 1 / (1 + math.exp(-x))
# define vectorized sigmoid
sigmoid_v = np.vectorize(sigmoid)
# test
scores = np.array([ -0.54761371, 17.04850603, 4.86054302])
print (sigmoid_v(scores))
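# Note: the sigmoid is strictly monotonic, so passing decision_function scores
# through sigmoid_v rescales them to (0, 1) without changing their ranking;
# the ROC curves and AUC values computed below are therefore unaffected.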
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
import matplotlib.pyplot as plt
score_roc_train = model_new1.decision_function(X_train)
fpr_train, tpr_train, threshold_train = roc_curve(y_train, sigmoid_v(score_roc_train))
roc_auc_train = auc(fpr_train, tpr_train)
score_roc_test = model_new1.decision_function(X_test)
fpr_test, tpr_test, threshold_test = roc_curve(y_test,sigmoid_v( score_roc_test))
roc_auc_test = auc(fpr_test, tpr_test)
plt.plot(fpr_train, tpr_train, label = "Train_AUC"+str(auc(fpr_train, tpr_train)))
plt.plot(fpr_test, tpr_test, label = "Test_AUC"+str(auc(fpr_test, tpr_test)))
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.title('ROC Curve of SVM ')
plt.show()
y_train_pred = model_new1.predict(X_train)
y_test_pred = model_new1.predict(X_test)
# we define our own function that uses the predicted scores to pick a threshold and build the confusion matrix
# we want the confusion matrix at the threshold that gives a low fpr together with a high tpr
def predict(proba,threshold,fpr,tpr):
t = threshold[np.argmax(tpr*(1-fpr))]
# (tpr*(1-fpr)) will be maximum if your fpr is very low and tpr is very high
print("the maximum value of tpr*(1-fpr)", max(tpr*(1-fpr)), "for threshold", np.round(t,3))
predictions = []
for i in proba:
if i>=t:
predictions.append(1)
else:
predictions.append(0)
return predictions
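# Worked illustration of the threshold rule above (hypothetical fpr/tpr values):
#   fpr = [0.0, 0.1, 0.4], tpr = [0.0, 0.7, 0.9]
#   tpr*(1-fpr) = [0.0, 0.63, 0.54] -> argmax is index 1, i.e. the threshold
#   that combines a low FPR (0.1) with a high TPR (0.7).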
from sklearn.metrics import confusion_matrix
import seaborn as sns
ax = plt.subplot()
print("Train confusion matrix")
cnn_train = confusion_matrix(y_train, predict(y_train_pred, threshold_train, fpr_train, tpr_train))
sns.heatmap(cnn_train,annot = True,ax=ax,fmt='d')
ax.set_xlabel('Predicted label')
ax.set_ylabel('True label')
ax.xaxis.set_ticklabels(['negative','positive'])
ax.yaxis.set_ticklabels(['negative','positive'])
plt.title('Confusion Matrix\n')
ax = plt.subplot()
print("Test confusion matrix")
cnn_test = confusion_matrix(y_test, predict(y_test_pred, threshold_test, fpr_test, tpr_test))
sns.heatmap(cnn_test,annot = True,ax=ax,fmt='d')
ax.set_xlabel('Predicted label')
ax.set_ylabel('True label')
ax.xaxis.set_ticklabels(['negative','positive'])
ax.yaxis.set_ticklabels(['negative','positive'])
plt.title('Confusion Matrix\n')
from sklearn.metrics import classification_report
print("_" * 101)
print("Classification Report: \n")
print(classification_report(y_test,y_test_pred))
print("_" * 101)
###Output
_____________________________________________________________________________________________________
Classification Report:
precision recall f1-score support
0 0.30 0.53 0.39 2303
1 0.90 0.78 0.83 12697
micro avg 0.74 0.74 0.74 15000
macro avg 0.60 0.65 0.61 15000
weighted avg 0.81 0.74 0.77 15000
_____________________________________________________________________________________________________
###Markdown
SET : 2 [TF-IDF] TFIDF VECTORIZER
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer8 = TfidfVectorizer(min_df=10)
preprocessed_essays_xtr_tfidf = vectorizer8.fit_transform(x_train['cleaned_essays'])
print("Shape of matrix after one hot encodig ",preprocessed_essays_xtr_tfidf.shape)
preprocessed_essays_xtest_tfidf = vectorizer8.transform(x_test['cleaned_essays'])
print("Shape of matrix after one hot encodig ",preprocessed_essays_xtest_tfidf.shape)
vectorizer9 = TfidfVectorizer(min_df=10)
preprocessed_title_xtr_tfidf = vectorizer9.fit_transform(x_train['cleaned_title_text'])
print("Shape of matrix after one hot encodig ",preprocessed_title_xtr_tfidf.shape)
preprocessed_title_xtest_tfidf = vectorizer9.transform(x_test['cleaned_title_text'])
print("Shape of matrix after one hot encodig ",preprocessed_title_xtest_tfidf.shape)
X_train_tfidf=hstack((preprocessed_essays_xtr_tfidf,preprocessed_title_xtr_tfidf,x_train_clean_cat_ohe,x_train_clean_subcat_ohe,x_train_state_ohe,x_train_teacher_pre,x_train_grade_ohe,x_train_price_std,x_train_teacher_previously_std
,x_train_quantity_std ))
#X_cv_tfidf=hstack((preprocessed_essays_xcv_tfidf,preprocessed_title_xcv_tfidf,x_cv_clean_cat_ohe,x_cv_clean_subcat_ohe, x_cv_state_ohe, x_cv_teacher_pre, x_cv_grade_ohe, x_cv_price_std,x_cv_teacher_previously_std))
X_test_tfidf=hstack((preprocessed_essays_xtest_tfidf,preprocessed_title_xtest_tfidf,x_test_clean_cat_ohe,x_test_clean_subcat_ohe, x_test_state_ohe, x_test_teacher_pre, x_test_grade_ohe, x_test_price_std,x_test_teacher_previously_std
,x_test_quantity_std ))
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import SGDClassifier
clf_param_grid = {
'alpha' : [10**-x for x in range(-6,5)],
'penalty' : ['l1','l2']
}
SGD2 = SGDClassifier(class_weight='balanced')
estimator2 = GridSearchCV(SGD2, param_grid=clf_param_grid ,cv=10, verbose=2, scoring="roc_auc",n_jobs=-1)
estimator2.fit(X_train_tfidf,y_train)
print(estimator2.best_params_)
b2=estimator2.best_params_["alpha"]
p2=estimator2.best_params_["penalty"]
train_auc2= estimator2.cv_results_['mean_train_score'][estimator2.cv_results_['param_penalty']==p2]
train_auc_std2= estimator2.cv_results_['std_train_score'][estimator2.cv_results_['param_penalty']==p2]
cv_auc2 = estimator2.cv_results_['mean_test_score'][estimator2.cv_results_['param_penalty']==p2]
cv_auc_std2= estimator2.cv_results_['std_test_score'][estimator2.cv_results_['param_penalty']==p2]
ax=plt.subplot()
plt.plot(clf_param_grid['alpha'], train_auc2, label='Train AUC')
# this code is copied from here: https://stackoverflow.com/a/48803362/4084039
plt.gca().fill_between(clf_param_grid['alpha'],train_auc2 - train_auc_std2,train_auc2 + train_auc_std2,alpha=0.2,color='darkblue')
# create a shaded area between [mean - std, mean + std]
plt.plot(clf_param_grid['alpha'], cv_auc2, label='CV AUC')
# this code is copied from here: https://stackoverflow.com/a/48803362/4084039
plt.gca().fill_between(clf_param_grid['alpha'],cv_auc2 - cv_auc_std2,cv_auc2 + cv_auc_std2,alpha=0.2,color='darkorange')
plt.scatter(clf_param_grid['alpha'], train_auc2, label='Train AUC points')
plt.scatter(clf_param_grid['alpha'], cv_auc2, label='CV AUC points')
plt.xscale('log')
plt.axis('tight')
plt.legend()
plt.xlabel("alpha: hyperparameter")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
model_new2=SGDClassifier( penalty=p2, alpha= b2,class_weight='balanced')
model_new2.fit(X_train_tfidf,y_train)
score_roc_train = model_new2.decision_function(X_train_tfidf)
fpr_train, tpr_train, threshold_train = roc_curve(y_train, sigmoid_v(score_roc_train))
roc_auc_train = auc(fpr_train, tpr_train)
score_roc_test = model_new2.decision_function(X_test_tfidf)
fpr_test, tpr_test, threshold_test = roc_curve(y_test,sigmoid_v (score_roc_test))
roc_auc_test = auc(fpr_test, tpr_test)
plt.plot(fpr_train, tpr_train, label = "Train_AUC"+str(auc(fpr_train, tpr_train)))
plt.plot(fpr_test, tpr_test, label = "Test_AUC"+str(auc(fpr_test, tpr_test)))
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.title('ROC Curve of SVM ')
plt.show()
y_train_pred = model_new2.predict(X_train_tfidf)
y_test_pred = model_new2.predict(X_test_tfidf)
from sklearn.metrics import confusion_matrix
ax = plt.subplot()
print("Train confusion matrix")
cnn_train = confusion_matrix(y_train, predict(y_train_pred, threshold_train, fpr_train, tpr_train))
sns.heatmap(cnn_train,annot = True,ax=ax,fmt='d')
ax.set_xlabel('Predicted label')
ax.set_ylabel('True label')
ax.xaxis.set_ticklabels(['negative','positive'])
ax.yaxis.set_ticklabels(['negative','positive'])
plt.title('Confusion Matrix\n')
from sklearn.metrics import confusion_matrix
ax = plt.subplot()
print("test confusion matrix")
cnn_test = confusion_matrix(y_test, predict(y_test_pred, threshold_test, fpr_test, tpr_test))
sns.heatmap(cnn_test,annot = True,ax=ax,fmt='d')
ax.set_xlabel('Predicted label')
ax.set_ylabel('True label')
ax.xaxis.set_ticklabels(['negative','positive'])
ax.yaxis.set_ticklabels(['negative','positive'])
plt.title('Confusion Matrix\n')
from sklearn.metrics import classification_report
print("_" * 101)
print("Classification Report: \n")
print(classification_report(y_test,y_test_pred))
print("_" * 101)
###Output
_____________________________________________________________________________________________________
Classification Report:
precision recall f1-score support
0 0.24 0.66 0.35 2303
1 0.91 0.63 0.74 12697
micro avg 0.63 0.63 0.63 15000
macro avg 0.58 0.64 0.55 15000
weighted avg 0.81 0.63 0.68 15000
_____________________________________________________________________________________________________
###Markdown
SET 3 [AVG-W2V]
###Code
list_preprocessed_essays_xtr = []
for e in x_train['cleaned_essays'].values:
list_preprocessed_essays_xtr.append(e.split())
from gensim.models import Word2Vec
from gensim.models import KeyedVectors
preprocessed_essays_xtr_w2v=Word2Vec(list_preprocessed_essays_xtr,min_count=10,size=100,workers =12 )
# average Word2Vec
# compute average word2vec for each review.
preprocessed_essays_xtr_avg_w2v_vectors = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(x_train['cleaned_essays']): # for each review/sentence
vector = np.zeros(100) # as word vectors are of zero length
cnt_words =0; # num of words with a valid vector in the sentence/review
for word in sentence.split(): # for each word in a review/sentence
if word in preprocessed_essays_xtr_w2v.wv.vocab:
vector += preprocessed_essays_xtr_w2v[word]
cnt_words += 1
if cnt_words != 0:
vector /= cnt_words
preprocessed_essays_xtr_avg_w2v_vectors.append(vector)
print(len(preprocessed_essays_xtr_avg_w2v_vectors))
print(len(preprocessed_essays_xtr_avg_w2v_vectors[0]))
preprocessed_essays_xtest_avg_w2v_vectors = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(x_test['cleaned_essays']): # for each review/sentence
vector = np.zeros(100) # as word vectors are of zero length
cnt_words =0; # num of words with a valid vector in the sentence/review
for word in sentence.split(): # for each word in a review/sentence
if word in preprocessed_essays_xtr_w2v.wv.vocab:
vector += preprocessed_essays_xtr_w2v[word]
cnt_words += 1
if cnt_words != 0:
vector /= cnt_words
preprocessed_essays_xtest_avg_w2v_vectors.append(vector)
print(len(preprocessed_essays_xtest_avg_w2v_vectors))
print(len(preprocessed_essays_xtest_avg_w2v_vectors[0]))
list_preprocessed_title_xtr = []
for e in x_train['cleaned_title_text'].values:
list_preprocessed_title_xtr.append(e.split())
preprocessed_title_xtr_w2v=Word2Vec(list_preprocessed_title_xtr,min_count=10,size=100,workers = 8)
preprocessed_title_xtr_avg_w2v_vectors = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(x_train['cleaned_title_text']): # for each review/sentence
vector = np.zeros(100) # as word vectors are of zero length
cnt_words =0; # num of words with a valid vector in the sentence/review
for word in sentence.split(): # for each word in a review/sentence
if word in preprocessed_title_xtr_w2v.wv.vocab:
vector += preprocessed_title_xtr_w2v[word]
cnt_words += 1
if cnt_words != 0:
vector /= cnt_words
preprocessed_title_xtr_avg_w2v_vectors.append(vector)
print(len(preprocessed_title_xtr_avg_w2v_vectors))
print(len(preprocessed_title_xtr_avg_w2v_vectors[0]))
preprocessed_title_xtest_avg_w2v_vectors = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(x_test['cleaned_title_text']): # for each review/sentence
vector = np.zeros(100) # as word vectors are of zero length
cnt_words =0; # num of words with a valid vector in the sentence/review
for word in sentence.split(): # for each word in a review/sentence
if word in preprocessed_title_xtr_w2v.wv.vocab:
vector += preprocessed_title_xtr_w2v[word]
cnt_words += 1
if cnt_words != 0:
vector /= cnt_words
preprocessed_title_xtest_avg_w2v_vectors.append(vector)
print(len(preprocessed_title_xtest_avg_w2v_vectors))
print(len(preprocessed_title_xtest_avg_w2v_vectors[0]))
from scipy.sparse import hstack
X_train_w2v=hstack((preprocessed_essays_xtr_avg_w2v_vectors,preprocessed_title_xtr_avg_w2v_vectors,x_train_clean_cat_ohe,x_train_clean_subcat_ohe,x_train_state_ohe,x_train_teacher_pre,x_train_grade_ohe,x_train_price_std,x_train_teacher_previously_std
,x_train_quantity_std))
#X_cv_tfidf=hstack((preprocessed_essays_xcv_tfidf,preprocessed_title_xcv_tfidf,x_cv_clean_cat_ohe,x_cv_clean_subcat_ohe, x_cv_state_ohe, x_cv_teacher_pre, x_cv_grade_ohe, x_cv_price_std,x_cv_teacher_previously_std))
X_test_w2v=hstack((preprocessed_essays_xtest_avg_w2v_vectors,preprocessed_title_xtest_avg_w2v_vectors,x_test_clean_cat_ohe,x_test_clean_subcat_ohe, x_test_state_ohe, x_test_teacher_pre, x_test_grade_ohe, x_test_price_std,x_test_teacher_previously_std
,x_test_quantity_std))
print(X_train_w2v.shape)
print(X_test_w2v.shape)
from sklearn.model_selection import RandomizedSearchCV
from sklearn.linear_model import SGDClassifier
clf_param_grid = {
'alpha' : [10**-x for x in range(-6,5)],
'penalty' : ['l1','l2']
}
SGD3 = SGDClassifier(class_weight='balanced')
estimator3 = RandomizedSearchCV(SGD3, param_distributions=clf_param_grid ,cv=10, verbose=2,scoring="roc_auc",n_jobs=6)
estimator3.fit(X_train_w2v,y_train)
print(estimator3.best_params_)
b3=estimator3.best_params_["alpha"]
p3=estimator3.best_params_["penalty"]
train_auc3= estimator3.cv_results_['mean_train_score'][estimator3.cv_results_['param_penalty']==p3]
train_auc_std3= estimator3.cv_results_['std_train_score'][estimator3.cv_results_['param_penalty']==p3]
cv_auc3 = estimator3.cv_results_['mean_test_score'][estimator3.cv_results_['param_penalty']==p3]
cv_auc_std3= estimator3.cv_results_['std_test_score'][estimator3.cv_results_['param_penalty']==p3]
plt.plot(clf_param_grid['alpha'][:4], train_auc3, label='Train AUC')
# this code is copied from here: https://stackoverflow.com/a/48803363/4084039
#plt.gca().fill_between(clf_param_grid['alpha'][:4],train_auc3 - train_auc_std3,train_auc3 + train_auc_std3,alpha=0.3,color='darkblue')
# create a shaded area between [mean - std, mean + std]
plt.plot(clf_param_grid['alpha'][:4], cv_auc3, label='CV AUC')
# this code is copied from here: https://stackoverflow.com/a/48803363/4084039
#plt.gca().fill_between(clf_param_grid['alpha'][:4],cv_auc3 - cv_auc_std3,cv_auc3 + cv_auc_std3,alpha=0.3,color='darkorange')
plt.scatter(clf_param_grid['alpha'][:4], train_auc3, label='Train AUC points')
plt.scatter(clf_param_grid['alpha'][:4], cv_auc3, label='CV AUC points')
plt.xscale('log')
plt.legend()
plt.xlabel("alpha: hyperparameter")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
model_new3=SGDClassifier( penalty=p3, alpha= b3,class_weight='balanced')
model_new3.fit(X_train_w2v,y_train)
score_roc_train = model_new3.decision_function(X_train_w2v)
fpr_train, tpr_train, threshold_train = roc_curve(y_train, sigmoid_v(score_roc_train))
roc_auc_train = auc(fpr_train, tpr_train)
score_roc_test = model_new3.decision_function(X_test_w2v)
fpr_test, tpr_test, threshold_test = roc_curve(y_test, sigmoid_v(score_roc_test))
roc_auc_test = auc(fpr_test, tpr_test)
plt.plot(fpr_train, tpr_train, label = "Train_AUC"+str(auc(fpr_train, tpr_train)))
plt.plot(fpr_test, tpr_test, label = "Test_AUC"+str(auc(fpr_test, tpr_test)))
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.title('ROC Curve of SVM ')
plt.show()
y_train_pred = model_new3.predict(X_train_w2v)
y_test_pred = model_new3.predict(X_test_w2v)
from sklearn.metrics import confusion_matrix
ax = plt.subplot()
print("Train confusion matrix")
cnn_train = confusion_matrix(y_train, predict(y_train_pred, threshold_train, fpr_train, tpr_train))
sns.heatmap(cnn_train,annot = True,ax=ax,fmt='d')
ax.set_xlabel('Predicted label')
ax.set_ylabel('True label')
ax.xaxis.set_ticklabels(['negative','positive'])
ax.yaxis.set_ticklabels(['negative','positive'])
plt.title('Confusion Matrix\n')
ax = plt.subplot()
print("test confusion matrix")
cnn_test = confusion_matrix(y_test, predict(y_test_pred, threshold_test, fpr_test, tpr_test))
sns.heatmap(cnn_test,annot = True,ax=ax,fmt='d')
ax.set_xlabel('Predicted label')
ax.set_ylabel('True label')
ax.xaxis.set_ticklabels(['negative','positive'])
ax.yaxis.set_ticklabels(['negative','positive'])
plt.title('Confusion Matrix\n')
from sklearn.metrics import classification_report
print("_" * 101)
print("Classification Report: \n")
print(classification_report(y_test,y_test_pred))
print("_" * 101)
###Output
_____________________________________________________________________________________________________
Classification Report:
precision recall f1-score support
0 0.29 0.40 0.33 2303
1 0.88 0.82 0.85 12697
micro avg 0.76 0.76 0.76 15000
macro avg 0.58 0.61 0.59 15000
weighted avg 0.79 0.76 0.77 15000
_____________________________________________________________________________________________________
###Markdown
SET : 4 [TFIDF-W2V]
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_model1 = TfidfVectorizer()
tfidf_model1.fit(x_train['cleaned_essays'])
# we are converting a dictionary with word as a key, and the idf as a value
dictionary = dict(zip(tfidf_model1.get_feature_names(), list(tfidf_model1.idf_)))
tfidf_words = set(tfidf_model1.get_feature_names())
# TF-IDF weighted Word2Vec
# compute the tf-idf weighted word2vec vector for each essay.
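# For each essay the weighted vector is:
#   vector = sum_w( w2v(w) * tfidf(w) ) / sum_w( tfidf(w) )
# where tfidf(w) = idf(w) * count(w, essay) / number_of_words_in_essay,
# exactly as computed in the loop below.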
preprocessed_essays_xtr_tfidf_w2v_vectors = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(x_train['cleaned_essays']): # for each review/sentence
vector = np.zeros(100) # as word vectors are of zero length
tf_idf_weight =0; # num of words with a valid vector in the sentence/review
for word in sentence.split(): # for each word in a review/sentence
if (word in list(preprocessed_essays_xtr_w2v.wv.vocab)) and (word in tfidf_words):
vec = preprocessed_essays_xtr_w2v[word] # getting the vector for each word
# here we are multiplying idf value(dictionary[word]) and the tf value((sentence.count(word)/len(sentence.split())))
tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split())) # getting the tfidf value for each word
vector += (vec * tf_idf) # calculating tfidf weighted w2v
tf_idf_weight += tf_idf
if tf_idf_weight != 0:
vector /= tf_idf_weight
preprocessed_essays_xtr_tfidf_w2v_vectors.append(vector)
print(len(preprocessed_essays_xtr_tfidf_w2v_vectors))
print(len(preprocessed_essays_xtr_tfidf_w2v_vectors[0]))
preprocessed_essays_xtest_tfidf_w2v_vectors = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(x_test['cleaned_essays']): # for each review/sentence
vector = np.zeros(100) # as word vectors are of zero length
tf_idf_weight =0; # num of words with a valid vector in the sentence/review
for word in sentence.split(): # for each word in a review/sentence
if (word in list(preprocessed_essays_xtr_w2v.wv.vocab)) and (word in tfidf_words):
vec = preprocessed_essays_xtr_w2v[word] # getting the vector for each word
# here we are multiplying idf value(dictionary[word]) and the tf value((sentence.count(word)/len(sentence.split())))
tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split())) # getting the tfidf value for each word
vector += (vec * tf_idf) # calculating tfidf weighted w2v
tf_idf_weight += tf_idf
if tf_idf_weight != 0:
vector /= tf_idf_weight
preprocessed_essays_xtest_tfidf_w2v_vectors.append(vector)
print(len(preprocessed_essays_xtest_tfidf_w2v_vectors))
print(len(preprocessed_essays_xtest_tfidf_w2v_vectors[0]))
# Similarly you can vectorize for title also
tfidf_model2 = TfidfVectorizer()
tfidf_model2.fit(x_train['cleaned_title_text'])
# we are converting a dictionary with word as a key, and the idf as a value
dictionary = dict(zip(tfidf_model2.get_feature_names(), list(tfidf_model2.idf_)))
tfidf_words = set(tfidf_model2.get_feature_names())
preprocessed_title_xtr_tfidf_w2v_vectors = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(x_train['cleaned_title_text']): # for each review/sentence
vector = np.zeros(100) # as word vectors are of zero length
tf_idf_weight =0; # num of words with a valid vector in the sentence/review
for word in sentence.split(): # for each word in a review/sentence
if (word in list(preprocessed_title_xtr_w2v.wv.vocab)) and (word in tfidf_words):
vec = preprocessed_title_xtr_w2v[word] # getting the vector for each word
# here we are multiplying idf value(dictionary[word]) and the tf value((sentence.count(word)/len(sentence.split())))
tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split())) # getting the tfidf value for each word
vector += (vec * tf_idf) # calculating tfidf weighted w2v
tf_idf_weight += tf_idf
if tf_idf_weight != 0:
vector /= tf_idf_weight
preprocessed_title_xtr_tfidf_w2v_vectors.append(vector)
print(len(preprocessed_title_xtr_tfidf_w2v_vectors))
print(len(preprocessed_title_xtr_tfidf_w2v_vectors[0]))
preprocessed_title_xtest_tfidf_w2v_vectors = []; # the avg-w2v for each sentence/review is stored in this list
for sentence in tqdm(x_test['cleaned_title_text']): # for each review/sentence
vector = np.zeros(100) # as word vectors are of zero length
tf_idf_weight =0; # num of words with a valid vector in the sentence/review
for word in sentence.split(): # for each word in a review/sentence
if (word in list(preprocessed_title_xtr_w2v.wv.vocab)) and (word in tfidf_words):
vec = preprocessed_title_xtr_w2v[word] # getting the vector for each word
# here we are multiplying idf value(dictionary[word]) and the tf value((sentence.count(word)/len(sentence.split())))
tf_idf = dictionary[word]*(sentence.count(word)/len(sentence.split())) # getting the tfidf value for each word
vector += (vec * tf_idf) # calculating tfidf weighted w2v
tf_idf_weight += tf_idf
if tf_idf_weight != 0:
vector /= tf_idf_weight
preprocessed_title_xtest_tfidf_w2v_vectors.append(vector)
print(len(preprocessed_title_xtest_tfidf_w2v_vectors))
print(len(preprocessed_title_xtest_tfidf_w2v_vectors[0]))
X_train_tfidf_w2v=hstack((preprocessed_essays_xtr_tfidf_w2v_vectors,preprocessed_title_xtr_tfidf_w2v_vectors,x_train_clean_cat_ohe,x_train_clean_subcat_ohe,x_train_state_ohe,x_train_teacher_pre,x_train_grade_ohe,x_train_price_std,x_train_teacher_previously_std
,x_train_quantity_std ))
#X_cv_tfidf=hstack((preprocessed_essays_xcv_tfidf,preprocessed_title_xcv_tfidf,x_cv_clean_cat_ohe,x_cv_clean_subcat_ohe, x_cv_state_ohe, x_cv_teacher_pre, x_cv_grade_ohe, x_cv_price_std,x_cv_teacher_previously_std))
X_test_tfidf_w2v=hstack((preprocessed_essays_xtest_tfidf_w2v_vectors,preprocessed_title_xtest_tfidf_w2v_vectors,x_test_clean_cat_ohe,x_test_clean_subcat_ohe, x_test_state_ohe, x_test_teacher_pre, x_test_grade_ohe, x_test_price_std,x_test_teacher_previously_std
,x_test_quantity_std))
print(X_train_tfidf_w2v.shape)
print(X_test_tfidf_w2v.shape)
from sklearn.model_selection import RandomizedSearchCV
from sklearn.linear_model import SGDClassifier
clf_param_grid = {
'alpha' : [10**-x for x in range(-6,6)],
'penalty' : ['l1','l2']
}
SGD4 = SGDClassifier(class_weight='balanced')
estimator4 = RandomizedSearchCV(SGD4, param_distributions=clf_param_grid ,cv=10, verbose=2,scoring="roc_auc",n_jobs=4)
estimator4.fit(X_train_tfidf_w2v,y_train)
print(estimator4.best_params_)
b4=estimator4.best_params_["alpha"]
p4=estimator4.best_params_["penalty"]
train_auc4= estimator4.cv_results_['mean_train_score'][estimator4.cv_results_['param_penalty']==p4]
train_auc_std4= estimator4.cv_results_['std_train_score'][estimator4.cv_results_['param_penalty']==p4]
cv_auc4 = estimator4.cv_results_['mean_test_score'][estimator4.cv_results_['param_penalty']==p4]
cv_auc_std4= estimator4.cv_results_['std_test_score'][estimator4.cv_results_['param_penalty']==p4]
ax=plt.subplot()
plt.plot(clf_param_grid['alpha'][:6], train_auc4, label='Train AUC')
# this code is copied from here: https://stackoverflow.com/a/48804464/4084049
#plt.gca().fill_between(clf_param_grid['alpha'][:6],train_auc4 - train_auc_std4,train_auc4 + train_auc_std4,alpha=0.4,color='darkblue')
# create a shaded area between [mean - std, mean + std]
plt.plot(clf_param_grid['alpha'][:6], cv_auc4, label='CV AUC')
# this code is copied from here: https://stackoverflow.com/a/48804464/4084049
#plt.gca().fill_between(clf_param_grid['alpha'][:6],cv_auc4 - cv_auc_std4,cv_auc4 + cv_auc_std4,alpha=0.4,color='darkorange')
plt.scatter(clf_param_grid['alpha'][:6], train_auc4, label='Train AUC points')
plt.scatter(clf_param_grid['alpha'][:6], cv_auc4, label='CV AUC points')
plt.axis([10**-1,10**5,0.675,0.710])
plt.xscale('log')
plt.axis('tight')
plt.legend()
plt.xlabel("alpha: hyperparameter")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
model_new4 = SGDClassifier( penalty=p4, alpha=b4,class_weight='balanced')
model_new4.fit(X_train_tfidf_w2v,y_train)
score_roc_train = model_new4.decision_function(X_train_tfidf_w2v)
fpr_train, tpr_train, threshold_train = roc_curve(y_train, sigmoid_v(score_roc_train))
roc_auc_train = auc(fpr_train, tpr_train)
score_roc_test = model_new4.decision_function(X_test_tfidf_w2v)
fpr_test, tpr_test, threshold_test = roc_curve(y_test, sigmoid_v(score_roc_test))
roc_auc_test = auc(fpr_test, tpr_test)
plt.plot(fpr_train, tpr_train, label = "Train_AUC"+str(auc(fpr_train, tpr_train)))
plt.plot(fpr_test, tpr_test, label = "Test_AUC"+str(auc(fpr_test, tpr_test)))
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.title('ROC Curve of SVM ')
plt.show()
y_train_pred = model_new4.predict(X_train_tfidf_w2v)
y_test_pred = model_new4.predict(X_test_tfidf_w2v)
from sklearn.metrics import confusion_matrix
ax = plt.subplot()
print("Train confusion matrix")
cnn_train = confusion_matrix(y_train, predict(y_train_pred, threshold_train, fpr_train, tpr_train))
sns.heatmap(cnn_train,annot = True,ax=ax,fmt='d')
ax.set_xlabel('Predicted label')
ax.set_ylabel('True label')
ax.xaxis.set_ticklabels(['negative','positive'])
ax.yaxis.set_ticklabels(['negative','positive'])
plt.title('Confusion Matrix\n')
from sklearn.metrics import confusion_matrix
ax = plt.subplot()
print("test confusion matrix")
cnn_test = confusion_matrix(y_test, predict(y_test_pred, threshold_test, fpr_test, tpr_test))
sns.heatmap(cnn_test,annot = True,ax=ax,fmt='d')
ax.set_xlabel('Predicted label')
ax.set_ylabel('True label')
ax.xaxis.set_ticklabels(['negative','positive'])
ax.yaxis.set_ticklabels(['negative','positive'])
plt.title('Confusion Matrix\n')
from sklearn.metrics import classification_report
print("_" * 101)
print("Classification Report: \n")
print(classification_report(y_test,y_test_pred))
print("_" * 101)
###Output
_____________________________________________________________________________________________________
Classification Report:
precision recall f1-score support
0 0.24 0.71 0.36 2303
1 0.92 0.60 0.73 12697
micro avg 0.62 0.62 0.62 15000
macro avg 0.58 0.66 0.54 15000
weighted avg 0.82 0.62 0.67 15000
_____________________________________________________________________________________________________
###Markdown
SET : 5 [Truncated SVD]
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer8 = TfidfVectorizer(min_df=10)
preprocessed_essays_xtr_tfidf = vectorizer8.fit_transform(x_train['cleaned_essays'])
print("Shape of matrix after one hot encodig ",preprocessed_essays_xtr_tfidf.shape)
preprocessed_essays_xtest_tfidf = vectorizer8.transform(x_test['cleaned_essays'])
print("Shape of matrix after one hot encodig ",preprocessed_essays_xtest_tfidf.shape)
###Output
Shape of matrix after TF-IDF vectorization (35000, 10483)
Shape of matrix after TF-IDF vectorization (15000, 10483)
###Markdown
CHECK FOR VARIANCE EXPLAINED BY n_components and find the best number of components
###Code
from sklearn.decomposition import TruncatedSVD
svd = TruncatedSVD(n_components=6000, algorithm='randomized', n_iter=5, random_state=None, tol=0.0001)
svd.fit(preprocessed_essays_xtr_tfidf)
# List of explained variances
tsvd_var_ratios = svd.explained_variance_ratio_
np.cumsum(tsvd_var_ratios)
# Create a function for find best no. of components
def select_n_components(var_ratio, goal_var: float):
# Set initial variance explained so far
total_variance = 0.0
# Set initial number of features
n_components = 0
# For the explained variance of each feature:
for explained_variance in var_ratio:
total_variance += explained_variance
n_components += 1
if total_variance >= goal_var:
break
return n_components
###Output
_____no_output_____
###Markdown
Find the number of components that explain 95% of the variance
###Code
n_components=select_n_components(tsvd_var_ratios, 0.95)
n_components
from sklearn.decomposition import TruncatedSVD
svd_tr = TruncatedSVD(n_components=5722, algorithm='randomized', n_iter=5, random_state=None, tol=0.001)
svd_train = svd_tr.fit_transform(preprocessed_essays_xtr_tfidf)
svd_test = svd_tr.transform(preprocessed_essays_xtest_tfidf)
from scipy.sparse import hstack
X_train_s5=hstack((svd_train,x_train_clean_cat_ohe,x_train_clean_subcat_ohe,x_train_state_ohe,x_train_teacher_pre,x_train_grade_ohe,x_train_price_std,x_train_teacher_previously_std
,x_train_quantity_std )).tocsr()
X_test_s5=hstack((svd_test,x_test_clean_cat_ohe,x_test_clean_subcat_ohe, x_test_state_ohe, x_test_teacher_pre, x_test_grade_ohe, x_test_price_std,x_test_teacher_previously_std
,x_test_quantity_std)).tocsr()
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import SGDClassifier
clf_param_grid = {
'alpha' : [10**-x for x in range(-4,5)],
'penalty' : ['l1','l2']
}
SGD5 = SGDClassifier(class_weight='balanced')
estimator5 = GridSearchCV(SGD5, param_grid=clf_param_grid ,cv=5, verbose=21, scoring="roc_auc",n_jobs=3)
estimator5.fit(X_train_s5,y_train)
print(estimator5.best_params_)
b5=estimator5.best_params_["alpha"]
p5=estimator5.best_params_["penalty"]
train_auc5= estimator5.cv_results_['mean_train_score'][estimator5.cv_results_['param_penalty']==p5]
train_auc_std5= estimator5.cv_results_['std_train_score'][estimator5.cv_results_['param_penalty']==p5]
cv_auc5 = estimator5.cv_results_['mean_test_score'][estimator5.cv_results_['param_penalty']==p5]
cv_auc_std5= estimator5.cv_results_['std_test_score'][estimator5.cv_results_['param_penalty']==p5]
ax=plt.subplot()
plt.plot(clf_param_grid['alpha'][:9], train_auc5, label='Train AUC')
# this code is copied from here: https://stackoverflow.com/a/58805565/5085059
#plt.gca().fill_between(clf_param_grid['alpha'][:9],train_auc5 - train_auc_std5,train_auc5 + train_auc_std5,alpha=0.5,color='darkblue')
# create a shaded area between [mean - std, mean + std]
plt.plot(clf_param_grid['alpha'][:9], cv_auc5, label='CV AUC')
# this code is copied from here: https://stackoverflow.com/a/58805565/5085059
#plt.gca().fill_between(clf_param_grid['alpha'][:9],cv_auc5 - cv_auc_std5,cv_auc5 + cv_auc_std5,alpha=0.5,color='darkorange')
plt.scatter(clf_param_grid['alpha'][:9], train_auc5, label='Train AUC points')
plt.scatter(clf_param_grid['alpha'][:9], cv_auc5, label='CV AUC points')
plt.axis([10**-2,10**5,0.675,0.710])
plt.xscale('log')
plt.axis('tight')
plt.legend()
plt.xlabel("alpha: hyperparameter")
plt.ylabel("AUC")
plt.title("ERROR PLOTS")
plt.grid()
plt.show()
model_new5=SGDClassifier( penalty=p5, alpha=b5,class_weight='balanced')
model_new5.fit(X_train_s5,y_train)
score_roc_train = model_new5.decision_function(X_train_s5)
fpr_train, tpr_train, threshold_train = roc_curve(y_train, sigmoid_v(score_roc_train))
roc_auc_train = auc(fpr_train, tpr_train)
score_roc_test = model_new5.decision_function(X_test_s5)
fpr_test, tpr_test, threshold_test = roc_curve(y_test, sigmoid_v(score_roc_test))
roc_auc_test = auc(fpr_test, tpr_test)
plt.plot(fpr_train, tpr_train, label = "Train_AUC"+str(auc(fpr_train, tpr_train)))
plt.plot(fpr_test, tpr_test, label = "Test_AUC"+str(auc(fpr_test, tpr_test)))
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.title('ROC Curve of SVM ')
plt.show()
y_train_pred = model_new5.predict(X_train_s5)
y_test_pred = model_new5.predict(X_test_s5)
from sklearn.metrics import confusion_matrix
ax = plt.subplot()
print("Train confusion matrix")
cnn_train = confusion_matrix(y_train, predict(y_train_pred, threshold_train, fpr_train, tpr_train))
sns.heatmap(cnn_train,annot = True,ax=ax,fmt='d')
ax.set_xlabel('Predicted label')
ax.set_ylabel('True label')
ax.xaxis.set_ticklabels(['negative','positive'])
ax.yaxis.set_ticklabels(['negative','positive'])
plt.title('Confusion Matrix\n')
from sklearn.metrics import confusion_matrix
ax = plt.subplot()
print("test confusion matrix")
cnn_test = confusion_matrix(y_test, predict(y_test_pred, threshold_test, fpr_test, tpr_test))
sns.heatmap(cnn_test,annot = True,ax=ax,fmt='d')
ax.set_xlabel('Predicted label')
ax.set_ylabel('True label')
ax.xaxis.set_ticklabels(['negative','positive'])
ax.yaxis.set_ticklabels(['negative','positive'])
plt.title('Confusion Matrix\n')
from sklearn.metrics import classification_report
print("_" * 101)
print("Classification Report: \n")
print(classification_report(y_test,y_test_pred))
print("_" * 101)
from prettytable import PrettyTable
pretty = PrettyTable()
pretty.field_names = ['Vectorizer','Model','Hyperparameter_alpha','Hyperparameter_penalty','AUC']
pretty.add_row(['BOW','BRUTE',b1,p1,'0.71'])
pretty.add_row(['TF-IDF','BRUTE',b2,p2,'0.69'])
pretty.add_row(['AVG W2V','BRUTE',b3,p3,'0.68'])
pretty.add_row(['TFIDF WEIGHTED','BRUTE',b4,p4,'0.69'])
pretty.add_row(['TRUNCATED SVD','BRUTE',b5,p5,'0.69'])
print(pretty)
###Output
+----------------+-------+----------------------+------------------------+------+
| Vectorizer | Model | Hyperparameter_alpha | Hyperparameter_penalty | AUC |
+----------------+-------+----------------------+------------------------+------+
| BOW | BRUTE | 0.1 | l2 | 0.71 |
| TF-IDF | BRUTE | 0.001 | l2 | 0.69 |
| AVG W2V | BRUTE | 0.001 | l2 | 0.68 |
| TFIDF WEIGHTED | BRUTE | 0.01 | l2 | 0.69 |
| TRUNCATED SVD | BRUTE | 0.0001 | l1 | 0.69 |
+----------------+-------+----------------------+------------------------+------+
|
Google_stock_price_prediction_RNN.ipynb | ###Markdown
###Code
! pip install kaggle
! mkdir ~/.kaggle
! cp kaggle.json ~/.kaggle/
! chmod 600 ~/.kaggle/kaggle.json
! kaggle datasets download ptheru/googledta
! unzip googledta.zip
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
import os
print(os.listdir("../content"))
dataset_train = pd.read_csv("../content/trainset.csv")
dataset_train
trainset = dataset_train.iloc[:,1:2].values
trainset
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0,1))
training_scaled = sc.fit_transform(trainset)
training_scaled
x_train = []
y_train = []
for i in range(60,1259):
x_train.append(training_scaled[i-60:i, 0])
y_train.append(training_scaled[i,0])
x_train,y_train = np.array(x_train),np.array(y_train)
x_train.shape
x_train = np.reshape(x_train, (x_train.shape[0],x_train.shape[1],1))
regressor = Sequential()
regressor.add(LSTM(units = 50,return_sequences = True,input_shape = (x_train.shape[1],1)))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units = 50,return_sequences = True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units = 50,return_sequences = True))
regressor.add(Dropout(0.2))
regressor.add(LSTM(units = 50))
regressor.add(Dropout(0.2))
regressor.add(Dense(units = 1))
regressor.compile(optimizer = 'adam',loss = 'mean_squared_error')
regressor.fit(x_train,y_train,epochs = 100, batch_size = 32)
dataset_test =pd.read_csv("../content/testset.csv")
dataset_test
real_stock_price = dataset_test.iloc[:,1:2].values
dataset_total = pd.concat((dataset_train['Open'],dataset_test['Open']),axis = 0)
dataset_total
inputs = dataset_total[len(dataset_total) - len(dataset_test)-60:].values
inputs
inputs = inputs.reshape(-1,1)
inputs
inputs = sc.transform(inputs)
inputs.shape
x_test = []
for i in range(60,185):
x_test.append(inputs[i-60:i,0])
x_test = np.array(x_test)
x_test.shape
x_test = np.reshape(x_test, (x_test.shape[0],x_test.shape[1],1))
x_test.shape
predicted_price = regressor.predict(x_test)
predicted_price = sc.inverse_transform(predicted_price)
predicted_price
plt.plot(real_stock_price,color = 'red', label = 'Real Price')
plt.plot(predicted_price, color = 'blue', label = 'Predicted Price')
plt.title('Google Stock Price Prediction')
plt.xlabel('Time')
plt.ylabel('Google Stock Price')
plt.legend()
plt.show()
###Output
_____no_output_____ |
course 2 - Improving Deep Neural Networks Hyperparameter tuning, Regularization and Optimization/programming assignments/.ipynb_checkpoints/Part3_Gradient+Checking-checkpoint.ipynb | ###Markdown
Gradient CheckingWelcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking. You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker. But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".Let's do it!
###Code
# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
###Output
_____no_output_____
###Markdown
1) How does gradient checking work?Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$. Let's look back at the definition of a derivative (or gradient):$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."We know the following:- $\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly. - You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct. Lets use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct! 2) 1-dimensional gradient checkingConsider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct. **Figure 1** : **1D linear model** The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation"). **Exercise**: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
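As a quick numerical illustration of equation (1), separate from the graded exercises below, the centered difference can be evaluated for a simple made-up function such as $J(\theta) = \theta^2$, whose exact derivative is $2\theta$ (the name `J_demo` and the values are only for illustration):

```python
# Illustrative only, not part of the graded exercises: centered difference for J(theta) = theta**2
def J_demo(theta):
    return theta ** 2

theta, eps = 3.0, 1e-7
gradapprox = (J_demo(theta + eps) - J_demo(theta - eps)) / (2 * eps)
print(gradapprox, 2 * theta)  # both are approximately 6.0
```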
###Code
# GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
"""
Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
J -- the value of function J, computed using the formula J(theta) = theta * x
"""
### START CODE HERE ### (approx. 1 line)
J = theta * x
### END CODE HERE ###
return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
###Output
J = 8
###Markdown
**Expected Output**: ** J ** 8 **Exercise**: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.
###Code
# GRADED FUNCTION: backward_propagation
def backward_propagation(x, theta):
"""
Computes the derivative of J with respect to theta (see Figure 1).
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
dtheta -- the gradient of the cost with respect to theta
"""
### START CODE HERE ### (approx. 1 line)
dtheta = x
### END CODE HERE ###
return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
###Output
dtheta = 2
###Markdown
**Expected Output**: ** dtheta ** 2 **Exercise**: To show that the `backward_propagation()` function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.**Instructions**:- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow: 1. $\theta^{+} = \theta + \varepsilon$ 2. $\theta^{-} = \theta - \varepsilon$ 3. $J^{+} = J(\theta^{+})$ 4. $J^{-} = J(\theta^{-})$ 5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$- Then compute the gradient using backward propagation, and store the result in a variable "grad"- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$You will need 3 Steps to compute this formula: - 1'. compute the numerator using np.linalg.norm(...) - 2'. compute the denominator. You will need to call np.linalg.norm(...) twice. - 3'. divide them.- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
###Code
# GRADED FUNCTION: gradient_check
def gradient_check(x, theta, epsilon = 1e-7):
"""
Implement the backward propagation presented in Figure 1.
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
# Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
### START CODE HERE ### (approx. 5 lines)
thetaplus = theta + epsilon # Step 1
thetaminus = theta - epsilon # Step 2
J_plus = thetaplus * x # Step 3
J_minus = thetaminus * x # Step 4
gradapprox = (J_plus - J_minus) / (2 * epsilon) # Step 5
### END CODE HERE ###
# Check if gradapprox is close enough to the output of backward_propagation()
### START CODE HERE ### (approx. 1 line)
grad = x
### END CODE HERE ###
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference < 1e-7:
print ("The gradient is correct!")
else:
print ("The gradient is wrong!")
return difference
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
###Output
The gradient is correct!
difference = 2.91933588329e-10
###Markdown
**Expected Output**:The gradient is correct! ** difference ** 2.9193358103083e-10 Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in `backward_propagation()`. Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it! 3) N-dimensional gradient checking The following figure describes the forward and backward propagation of your fraud detection model. **Figure 2** : **deep neural network***LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID*Let's look at your implementations for forward propagation and backward propagation.
###Code
def forward_propagation_n(X, Y, parameters):
"""
Implements the forward propagation (and computes the cost) presented in Figure 3.
Arguments:
X -- training set for m examples
Y -- labels for m examples
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (5, 4)
b1 -- bias vector of shape (5, 1)
W2 -- weight matrix of shape (3, 5)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
Returns:
cost -- the cost function (logistic cost for one example)
"""
# retrieve parameters
m = X.shape[1]
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
# Cost
logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)
cost = 1./m * np.sum(logprobs)
cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
return cost, cache
###Output
_____no_output_____
###Markdown
Now, run backward propagation.
###Code
def backward_propagation_n(X, Y, cache):
"""
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
###Output
_____no_output_____
###Markdown
You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct. **How does gradient checking work?**.As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "`dictionary_to_vector()`" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.The inverse function is "`vector_to_dictionary`" which outputs back the "parameters" dictionary. **Figure 2** : **dictionary_to_vector() and vector_to_dictionary()** You will need these functions in gradient_check_n()We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.**Exercise**: Implement gradient_check_n().**Instructions**: Here is pseudo-code that will help you implement the gradient check.For each i in num_parameters:- To compute `J_plus[i]`: 1. Set $\theta^{+}$ to `np.copy(parameters_values)` 2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$ 3. Calculate $J^{+}_i$ using to `forward_propagation_n(x, y, vector_to_dictionary(`$\theta^{+}$ `))`. - To compute `J_minus[i]`: do the same thing with $\theta^{-}$- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to `parameter_values[i]`. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute: $$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$
###Code
# GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):
"""
Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n
Arguments:
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
x -- input datapoint, of shape (input size, 1)
y -- true "label"
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
# Set-up variables
parameters_values, _ = dictionary_to_vector(parameters)
grad = gradients_to_vector(gradients)
num_parameters = parameters_values.shape[0]
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))
# Compute gradapprox
for i in range(num_parameters):
# Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
# "_" is used because the function you have to outputs two parameters but we only care about the first one
### START CODE HERE ### (approx. 3 lines)
thetaplus = np.copy(parameters_values) # Step 1
thetaplus[i][0] += epsilon # Step 2
J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3
### END CODE HERE ###
# Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
### START CODE HERE ### (approx. 3 lines)
thetaminus = np.copy(parameters_values) # Step 1
thetaminus[i][0] -= epsilon # Step 2
J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3
### END CODE HERE ###
# Compute gradapprox[i]
### START CODE HERE ### (approx. 1 line)
gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)
### END CODE HERE ###
# Compare gradapprox to backward propagation gradients by computing difference.
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(gradapprox - grad) # Step 1'
denominator = np.linalg.norm(gradapprox) + np.linalg.norm(grad) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference > 1e-7:
print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
###Output
There is a mistake in the backward propagation! difference = 1.18904178788e-07
|
Unsupervised Analysis of Days of Week.ipynb | ###Markdown
Unsupervised Analysis of Days of WeekTreating crossings each day as features to learn about the relationships between the various days of the week
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
###Output
_____no_output_____
###Markdown
Get data
###Code
from data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Fremont Bridge Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01)
###Output
C:\Users\user\Downloads\Github\JupyterWorkflow\data.py:33: UserWarning: Pandas doesn't allow columns to be created via a new attribute name - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute-access
data.column = ['Weat', 'East']
###Markdown
Principal Component Analysis
###Code
X =pivoted.fillna(0).T.values
X.shape
X2=PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1])
###Output
_____no_output_____
###Markdown
Unsupervised Clustering
###Code
gmm =GaussianMixture(2).fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar()
fig, ax=plt.subplots(1, 2, figsize=(4, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0])
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1])
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster')
###Output
_____no_output_____
###Markdown
Comparing with Day of Week
###Code
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Analyzing Outliers with holiday-like patterns
###Code
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 1) & (dayofweek <5)]
###Output
_____no_output_____ |
notebooks/constant_selection.ipynb | ###Markdown
Constant selection
###Code
import numpy as np
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm
import ipywidgets as widgets
%matplotlib inline
%matplotlib widget
###Output
_____no_output_____
###Markdown
The Moran model + constant selectionThe difference, the _only_ difference, between this and the original Moran model is that we now consider fitness when selecting which individual reproduces. Before, we considered each individual equally; now we weight individuals of each species by their fitness. We represent this weighting with the `fit_ratio` parameter, which describes the ratio between species 1 fitness ($k_1$) and species 2 fitness ($k_2$). In order to do this weighting without violating the rule that the probabilities of each species must add up to 1, we do the weighting like this:$$P_{birth}(S=1) = \frac{(k_1/k_2) f_1}{(k_1/k_2) f_1 + f_2}$$This is the only change we have to make to the basic Moran model to introduce constant selection, and we never have to consider the individual species' fitnesses on their own. Temporally varying selectionVellend also introduces a 'temporally varying' selection. He introduces this after frequency dependence, but his model is more like this constant selection model, so we will consider it here. In Vellend's description, the change in time comes as a change in the fitness ratio that depends on the simulated year. First, let's say that we've arrived at our fitness ratio, which we'll call $\gamma$:\begin{align}\frac{k_1}{k_2} = \gamma\end{align}We do the temporal variation, at least in the simple example Vellend shows, by flipping the ratio every 10 years. In general, we flip the ratio every $y_{flip}$ years: once $y_{flip}$ years have passed, we simply redefine the fitness ratio as\begin{align}\frac{k_1}{k_2} = \gamma^{-1}\end{align}I've represented this in the code with a variable called `time_dep` that can be either `True` or `False`. If it's `True`, we do the 10-year flip as shown in the chapter.
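As a minimal sketch of these two ingredients (the weighted birth probability and the periodic flip), with illustrative values for `f1`, `fit_ratio`, and `y_flip` rather than anything from the chapter:

```python
# Minimal sketch with illustrative values; the full simulation in the next cell wraps this
# same logic inside the birth-death loop.
def prob_species1(f1, fit_ratio):
    """Probability that the next birth is species 1, weighted by the fitness ratio k1/k2."""
    return fit_ratio * f1 / (fit_ratio * f1 + (1 - f1))

fit_ratio, y_flip = 1.1, 10
for year in range(1, 31):
    if year % y_flip == 0:   # temporally varying selection: invert the ratio every y_flip years
        fit_ratio = fit_ratio ** -1
print(prob_species1(0.5, fit_ratio))
```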
###Code
def moran_constant_selection(n_indiv, n_years, init_f1, fit_ratio, time_dep):
""" Implements the Moran model including constant selection and simulates it in time.
Parameters
----------
n_indiv: int
number of individuals in the community
n_years: int
number of potential 'turnovers' of the community
init_f1: float
initial frequency of species 1
fit_ratio: float
ratio of fitness of species 1 to fitness of species 2
Return
------
moran: np.array (n_years, 2)
contains the species frequencies for each year of the simulation """
# set up an empty array for the simulated frequencies
# initialize with the given frequency. python counts from 0, sorrryyyy
moran = np.zeros((n_years, 2))
moran[0] = np.array([init_f1, 1 - init_f1])
# get a vector representing the community. vellend calls this COM
# this just makes a random vector drawn from the initial frequency
comm = np.random.choice([0, 1], size=n_indiv, replace=True, p=moran[0])
# now we can do the loop as in vellend. I'm going to write it a bit differently
# but the idea is basically the same. (yes its slower)
for year in range(1, n_years):
# if time dependence is true we flip the fitness ratio every ten years
if time_dep and year % 10 == 0:
fit_ratio = fit_ratio**-1
# for each year we potentially replace each individual so we can loop
# over that as well
for indiv in range(n_indiv):
# get probability of species 1
f1 = np.sum(comm == 0) / n_indiv
pr1 = fit_ratio * f1 / (fit_ratio * f1 + (1 - f1))
# replace one individual with another
death = np.random.randint(n_indiv)
birth = np.random.choice([0, 1], p=[pr1, 1 - pr1])
comm[death] = birth
# when we're done looping over all of the individuals we can update
# frequencies for the year (the timescale we care about)
f1 = np.sum(comm == 0) / n_indiv
moran[year] = np.array([f1, 1 - f1])
return moran
###Output
_____no_output_____
###Markdown
Running the simulations and visualizationAs before we have a bunch of messy code to draw the plots. Once again we can change parameter values in the first block and not worry about the rest (unless you want to!)
###Code
def draw_simulation(iterations=10, individuals=250, years=50, initial_freq=0.5, fit_ratio=1, time_dep=False):
# the plot bit, this just makes a blank plot
fig, ax = plt.subplots(ncols=2, figsize=(10,4), sharey=True, gridspec_kw = {'width_ratios':[3, 1]})
# trajectory labels
ax[0].set_xlabel('Years')
ax[0].set_ylabel('Species 1 frequency')
ax[0].set_ylim([0, 1])
# distribution labels and bins
hist_bins = np.arange(0, 1.1, 0.1)
ax[1].set_xlabel('Count')
ax[1].set_xlim([0, iterations])
# we're going to plot a distribution too need an array for it
dist = np.zeros(iterations)
# we run the simulation and draw a trajectory
for i in tqdm(range(iterations)):
simulation = moran_constant_selection(individuals, years, initial_freq, fit_ratio, time_dep)
ax[0].plot(range(years), simulation[:, 0], c='C0')
# add final freq to our dist array
dist[i] = simulation[-1, 0]
# plot the distribution too
ax[1].hist(dist, bins=hist_bins, orientation='horizontal')
ax[1].axhline(np.mean(dist), linestyle='--')
plt.tight_layout()
plt.show()
# set up the interface
widgets.interact_manual(draw_simulation, iterations=(5, 50, 5), individuals=(10, 1000, 10), years=(5, 200, 5),
initial_freq=(0.0, 1.0, 0.01), fit_ratio=(0.67, 1.5, 0.01), time_dep=(False))
###Output
_____no_output_____ |
vol2_v2/extractor_asis14.ipynb | ###Markdown
Extract barrier island metrics along transectsAuthor: Emily Sturdivant, [email protected]***Extract barrier island metrics along transects for Barrier Island Geomorphology Bayesian Network. See the project [README](https://github.com/esturdivant-usgs/BI-geomorph-extraction/blob/master/README.md) and the Methods Report (Zeigler et al., in review). Pre-requisites:- All the input layers (transects, shoreline, etc.) must be ready. This is performed with the notebook file prepper.ipynb.- The files servars.py and configmap.py may need to be updated for the current dataset. Notes:- This notebook includes interactive quality checking, which requires the user's attention. For thorough QC'ing, we recommend displaying the layers in ArcGIS, especially to confirm the integrity of values for variables such as distance to inlet (__Dist2Inlet__) and widths of the landmass (__WidthPart__, etc.). *** Import modules
###Code
import os
import sys
import pandas as pd
import numpy as np
import io
import arcpy
import pyproj
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
import core.functions_warcpy as fwa
import core.functions as fun
print("Date: {}".format(datetime.date.today()))
# print(os.__version__)
# print(sys.__version__)
print('pandas version: {}'.format(pd.__version__))
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(matplotlib.__version__))
# print(io.__version__)
# print(arcpy.__version__)
print('pyproj version: {}'.format(pyproj.__version__))
# print(bi_transect_extractor.__version__)
###Output
Date: 2019-10-08
pandas version: 0.20.3
numpy version: 1.13.1
matplotlib version: 1.5.3
pyproj version: 1.9.5.1
###Markdown
Initialize variablesThis cell prompts you for the site, year, and project directory path. `setvars.py` retrieves the pre-determined values for that site in that year from `configmap.py`. The project directory will be used to set up your workspace. It's hidden for security – sorry! I recommend that you type the path somewhere and paste it in.
###Code
from core.setvars import *
###Output
site (options: Metompkin, RhodeIsland, ShipShoal, Cedar, Fisherman, Wreck, CapeHatteras, ParkerRiver, Monomoy, Assawoman, Parramore, Myrtle, Assateague, Forsythe, Smith, CapeLookout, Cobb, CoastGuard, FireIsland, Rockaway): Assateague
year (options: 2010, 2012, 2014): 2014
Path to project directory (e.g. \\Macolume\dir\FireIsland2014): ···············································
###Markdown
Change the filename variables to match your local files. They should be in an Esri file geodatabase named site+year.gdb in your project directory, which you input above and is the value of the variable `home`.
###Code
# Extended transects: NASC transects extended and sorted, ready to be the base geometry for processing
extendedTrans = os.path.join(home, 'extTrans')
# Tidied transects: Extended transects without overlapping transects
extTrans_tidy = os.path.join(home, 'tidyTrans')
# Geomorphology points: positions of indicated geomorphic features
ShorelinePts = os.path.join(home, 'SLpts') # shoreline
dlPts = os.path.join(home, 'DLpts') # dune toe
dhPts = os.path.join(home, 'DHpts') # dune crest
# Inlet lines: polyline feature classes delimiting inlet position. Must intersect the full island shoreline
inletLines = os.path.join(home, 'Assateague2014_inletLines')
# Full island shoreline: polygon that outlines the island shoreline, MHW on oceanside and MTL on bayside
barrierBoundary = os.path.join(home, 'bndpoly_2sl')
# Elevation grid: DEM of island elevation at either 5 m or 1 m resolution
elevGrid = os.path.join(home, 'DEM_rasterdataset')
# ---
# OPTIONAL - comment out each one that is not available
# ---
#
# morphdata_prefix = '14CNT01'
# Study area boundary; manually digitize if the barrier island study area does not end at an inlet.
SA_bounds = os.path.join(home, 'SA_bounds')
# Armoring lines: digitize lines of shorefront armoring to be used if dune toe points are not available.
armorLines = os.path.join(home, 'armorLines')
# Extended transects with Construction, Development, and Nourishment coding
tr_w_anthro = os.path.join(home, 'extTrans_wAnthro')
# Piping Plover Habitat BN raster layers
SubType = os.path.join(home, 'ASIS14_SubType') # substrate type
VegType = os.path.join(home, 'ASIS14_VegType') # vegetation type
VegDens = os.path.join(home, 'ASIS14_VegDen') # vegetation density
GeoSet = os.path.join(home, 'ASIS14_GeoSet') # geomorphic setting
# Derivatives of inputs: They will be generated during process if they are not found.
shoreline = os.path.join(home, 'ShoreBetweenInlets') # oceanside shoreline between inlets; generated from shoreline polygon, inlet lines, and SA bounds
slopeGrid = os.path.join(home, 'slope_5m') # Slope at 5 m resolution; generated from DEM
###Output
_____no_output_____
###Markdown
Transect-averaged valuesWe work with the shapefile/feature class as a pandas DataFrame as much as possible to speed processing and minimize reliance on the ArcGIS GUI display.1. Add the bearing of each transect line to the attribute table from the LINE_BEARING geometry attribute.2. Create a pandas dataframe from the transects feature class. In the process, remove some of the unnecessary fields. The resulting dataframe is indexed by __sort_ID__ with columns corresponding to the attribute fields in the transects feature class. 3. Add __DD_ID__.4. Join the values from the transect file that includes the three anthropogenic development fields, __Construction__, __Development__, and __Nourishment__.
###Code
# Add BEARING field to extendedTrans feature class
arcpy.AddGeometryAttributes_management (extendedTrans, 'LINE_BEARING')
print("Adding line bearing field to transects.")
# Copy feature class to dataframe.
trans_df = fwa.FCtoDF(extendedTrans, id_fld=tID_fld, extra_fields=extra_fields)
# Set capitalization of fields to match expected
colrename = {}
for f in sorted_pt_flds:
for c in trans_df.columns:
if f.lower() == c.lower() and f != c:
colrename[c] = f
print("Renaming {} to {}".format(c, f))
trans_df.rename(columns=colrename, inplace=True)
# Set DD_ID, MHW, and Azimuth fields
trans_df['DD_ID'] = trans_df[tID_fld] + sitevals['id_init_val']
trans_df['MHW'] = sitevals['MHW']
trans_df.drop('Azimuth', axis=1, inplace=True, errors='ignore')
trans_df.rename_axis({"BEARING": "Azimuth"}, axis=1, inplace=True)
# Get anthro fields and join to DF
if 'tr_w_anthro' in locals():
trdf_anthro = fwa.FCtoDF(tr_w_anthro, id_fld=tID_fld, dffields=['Development', 'Nourishment','Construction'])
trans_df = fun.join_columns(trans_df, trdf_anthro)
# Save
trans_df.to_pickle(os.path.join(scratch_dir, 'trans_df.pkl'))
# Display
print("\nHeader of transects dataframe (rows 1-5 out of {}): ".format(len(trans_df)))
trans_df.head()
trans_df.loc[:,['TRANSECTID', 'Azimuth', 'LRR', 'sort_ID', 'DD_ID', 'MHW', 'Development']].sample(10)
###Output
_____no_output_____
###Markdown
Get XY and Z/slope from SL, DH, DL points within 25 m of transectsAdd to each transect row the positions of the nearest pre-created beach geomorphic features (shoreline, dune toe, and dune crest). If needed, convert morphology points stored locally to feature classes for use.After which, view the new feature classes in a GIS. Isolate the points to the region of interest. Quality check them. Then copy them for use with this code, which will require setting the filenames to match those included here or changing the values included here to match the final filenames.
###Code
if "morphdata_prefix" in locals():
csvpath = os.path.join(proj_dir, 'Input_Data', '{}_morphology'.format(morphdata_prefix),
'{}_morphology.csv'.format(morphdata_prefix))
dt_fc, dc_fc, sl_fc = fwa.MorphologyCSV_to_FCsByFeature(csvpath, state, proj_code,
csv_fill = 999, fc_fill = -99999, csv_epsg=4326)
print("OUTPUT: morphology point feature classes in the scratch gdb. We recommend QC before proceeding.")
###Output
_____no_output_____
###Markdown
ShorelineThe MHW shoreline easting and northing (__SL_x__, __SL_y__) are the coordinates of the intersection of the oceanside shoreline with the transect. Each transect is assigned the foreshore slope (__Bslope__) from the nearest shoreline point within 25 m. These values are populated for each transect as follows: 1. get __SL_x__ and __SL_y__ at the point where the transect crosses the oceanside shoreline; 2. find the closest shoreline point to the intersection point (must be within 25 m) and copy the slope value from the point to the transect in the field __Bslope__.
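As a rough illustration of step 2, the "closest point within 25 m" rule can be sketched with `scipy.spatial.cKDTree` (an assumption for illustration only; the workflow below does this with arcpy inside `fwa.add_shorelinePts2Trans()`, and the coordinates here are made up):

```python
import numpy as np
from scipy.spatial import cKDTree

# Made-up coordinates: transect/shoreline intersection points and shoreline points carrying slope
intersect_xy = np.array([[465000.0, 4189300.0], [465100.0, 4189400.0]])
shoreline_xy = np.array([[465010.0, 4189305.0], [465500.0, 4189700.0]])
shoreline_slope = np.array([-0.05, -0.08])

dist, idx = cKDTree(shoreline_xy).query(intersect_xy)
bslope = np.where(dist <= 25, shoreline_slope[idx], np.nan)  # only assign slope within 25 m
print(bslope)
```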
###Code
if not arcpy.Exists(inletLines):
# manually create lines that correspond to end of land and cross the MHW line (refer to shoreline polygon)
arcpy.CreateFeatureclass_management(home, os.path.basename(inletLines), 'POLYLINE', spatial_reference=utmSR)
print("OUTPUT: {}. Interrupt execution to manually create lines at each inlet.".format(inletLines))
if not arcpy.Exists(shoreline):
if not 'SA_bounds' in locals():
SA_bounds = ''
shoreline = fwa.CreateShoreBetweenInlets(barrierBoundary, inletLines, shoreline, ShorelinePts, proj_code, SA_bounds)
# Get the XY position where transect crosses the oceanside shoreline
sl2trans_df, ShorelinePts = fwa.add_shorelinePts2Trans(extendedTrans, ShorelinePts, shoreline,
tID_fld, proximity=pt2trans_disttolerance)
# Save and print sample
sl2trans_df.to_pickle(os.path.join(scratch_dir, 'sl2trans.pkl'))
sl2trans_df.sample(5)
# Export the inlet delineation and shoreline polygons to the scratch directory ultimately for publication
arcpy.FeatureClassToFeatureClass_conversion(inletLines, scratch_dir, pts_name.split('_')[0] + '_inletLines.shp')
arcpy.FeatureClassToFeatureClass_conversion(barrierBoundary, scratch_dir, pts_name.split('_')[0] + '_shoreline.shp')
print('OUTPUT: Saved inletLines and shoreline shapefiles in the scratch directory.')
fwa.pts_to_csv_and_eainfoxml(ShorelinePts, '_SLpts', scratch_dir, pts_name, field_defs, fill)
###Output
...converting feature class to array...
...converting array to dataframe...
Number of points in dataset: (6039, 10)
OBJECTID_______________________________1 | 6047________________ No fills_________No nulls
Shape............... nan
state............... 12 | 13
seg____________________________________1 | 88__________________ No fills_________No nulls
profile________________________________1 | 797_________________ No fills_________No nulls
sl_x_________________________-685.332759 | 497.819121__________ No fills_________No nulls
ci95_slx_____________________________0.0 | 0.848279____________ No fills_________No nulls
slope__________________________-0.231863 | -0.000548___________ No fills_________No nulls
easting____________________464785.199053 | 491903.892334_______ No fills_________No nulls
northing__________________4189226.119702 | 4241668.023755______ No fills_________No nulls
OUTPUT: asis14_SLpts.shp in specified scratch_dir.
OUTPUT: asis14_SLpts.csv (size: 0.46 MB) in specified scratch_dir.
###Markdown
Dune positions along transects__DL_x__, __DL_y__, and __DL_z__ are the easting, northing, and elevation, respectively, of the nearest dune toe point within 25 meters of the transect. __DH_x__, __DH_y__, and __DH_z__ are the easting, northing, and elevation, respectively, of the nearest dune crest point within 25 meters. __DL_snapX__, __DL_snapY__, __DH_snapX__, and __DH_snapY__ are the eastings and northings of the points "snapped" to the transect. "Snapping" finds the position along the transect nearest to the point, i.e. orthogonal to the transect. These values are used to find the beach width. The elevation values are not snapped; we use the elevation values straight from the original points. These values are populated as follows: 1. Find the nearest dune crest/toe point to the transect and proceed if the distance is less than 25 m. If there are no points within 25 m of the transect, populate the row with Null values.2. Get the X, Y, and Z values of the point. 3. Find the position along the transect of an orthogonal line drawn to the dune point (__DL_snapX__, __DL_snapY__, __DH_snapX__, and __DH_snapY__). This is considered the 'snapped' XY position and is calculated using the arcpy geometry method.
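The "snapping" step is an orthogonal projection of the dune point onto the transect. The processing itself uses an arcpy geometry method inside `fwa.find_ClosestPt2Trans_snap()`, but a plain numpy sketch of the geometry (with made-up coordinates) looks like this:

```python
import numpy as np

def snap_to_transect(pt, start, end):
    """Orthogonally project a point onto the segment defined by the transect's start/end vertices."""
    pt, start, end = (np.asarray(a, float) for a in (pt, start, end))
    seg = end - start
    t = np.clip(np.dot(pt - start, seg) / np.dot(seg, seg), 0.0, 1.0)
    return start + t * seg

# Made-up dune-toe point and transect endpoints
print(snap_to_transect([465020.0, 4189310.0], [465000.0, 4189300.0], [465200.0, 4189500.0]))
```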
###Code
# Create dataframe for both dune crest and dune toe positions
dune2trans_df, dhPts, dlPts = fwa.find_ClosestPt2Trans_snap(extendedTrans, dhPts, dlPts, trans_df,
tID_fld, proximity=pt2trans_disttolerance)
# Save and print sample
dune2trans_df.to_pickle(os.path.join(scratch_dir, 'dune2trans.pkl'))
dune2trans_df.sample(5)
fwa.pts_to_csv_and_eainfoxml(dlPts, '_DTpts', scratch_dir, pts_name, field_defs, fill)
fwa.pts_to_csv_and_eainfoxml(dhPts, '_DCpts', scratch_dir, pts_name, field_defs, fill)
###Output
...converting feature class to array...
...converting array to dataframe...
Number of points in dataset: (4493, 12)
OBJECTID_______________________________1 | 4493________________ No fills_________No nulls
Shape............... nan
state............... 12 | 13
seg____________________________________1 | 88__________________ No fills_________No nulls
profile________________________________1 | 797_________________ No fills_________No nulls
lon___________________________-75.394665 | -75.093626__________ No fills_________No nulls
lat____________________________37.850433 | 38.323336___________ No fills_________No nulls
easting____________________465288.623404 | 491816.106114_______ No fills_________No nulls
northing__________________4189290.483995 | 4241695.00959_______ No fills_________No nulls
dlow_x_______________________-782.018865 | 431.256053__________ No fills_________No nulls
dlow_z___________________________1.02095 | 4.892596____________ No fills_________No nulls
z_error__________________________0.00852 | 1.005295____________ No fills_________No nulls
OUTPUT: asis14_DTpts.shp in specified scratch_dir.
OUTPUT: asis14_DTpts.csv (size: 0.43 MB) in specified scratch_dir.
...converting feature class to array...
...converting array to dataframe...
Number of points in dataset: (5342, 12)
OBJECTID______________________________21 | 5396________________ No fills_________No nulls
Shape............... nan
state............... 12 | 13
seg____________________________________1 | 88__________________ No fills_________No nulls
profile________________________________1 | 797_________________ No fills_________No nulls
lon___________________________-75.400129 | -75.093738__________ No fills_________No nulls
lat____________________________37.850653 | 38.323387___________ No fills_________No nulls
easting____________________464808.940908 | 491806.330672_______ No fills_________No nulls
northing__________________4189314.991735 | 4241700.577644______ No fills_________No nulls
dhigh_x______________________-823.268865 | 488.756053__________ No fills_________No nulls
dhigh_z_________________________0.929257 | 7.995073____________ No fills_________No nulls
z_error_________________________0.005932 | 0.904903____________ No fills_________No nulls
OUTPUT: asis14_DCpts.shp in specified scratch_dir.
OUTPUT: asis14_DCpts.csv (size: 0.51 MB) in specified scratch_dir.
###Markdown
Armoring__Arm_x__, __Arm_y__, and __Arm_z__ are the easting, northing, and elevation, respectively, where an artificial structure crosses the transect in the vicinity of the beach. These features are meant to supplement the dune toe data set by providing an upper limit to the beach in areas where dune toe extraction was confounded by the presence of an artificial structure. Values are populated for each transect as follows: 1. Get the positions of intersection between the digitized armoring lines and the transects (Intersect tool from the Overlay toolset); 2. Extract the elevation value at each intersection point from the DEM (Extract Multi Values to Points tool from Spatial Analyst);
###Code
# Create elevation raster at 5-m resolution if not already
elevGrid = fwa.ProcessDEM_2(elevGrid, utmSR)
# Armoring line
if not arcpy.Exists(armorLines):
arcpy.CreateFeatureclass_management(home, os.path.basename(armorLines), 'POLYLINE', spatial_reference=utmSR)
print("{} created. If shorefront armoring exists, interrupt execution to manually digitize.".format(armorLines))
arm2trans_df = fwa.ArmorLineToTrans_PD(extendedTrans, armorLines, sl2trans_df, tID_fld, proj_code, elevGrid)
# Save and print sample
arm2trans_df.to_pickle(os.path.join(scratch_dir, 'arm2trans.pkl'))
try:
arm2trans_df.sample(5)
except:
pass
###Output
OUTPUT: DEM_rasterdataset_5m at 5x5 resolution.
\\Mac\stor\Projects\DeepDive\TE_vol2\Assateague\Assateague2014.gdb\armorLines created. If shorefront armoring exists, interrupt execution to manually digitize.
Armoring file either missing or empty so we will proceed without armoring data. If shorefront tampering is present at this site, cancel the operations to digitize.
###Markdown
Add all the positions to the trans_dfJoin the new dataframes to the transect dataframe. Before it performs the join, `join_columns_id_check()` checks the index and the ID field for potential errors such as whether they are the equal and whether there are duplicated IDs or null values in either.
###Code
# Load saved dataframes
trans_df = pd.read_pickle(os.path.join(scratch_dir, 'trans_df.pkl'))
sl2trans_df = pd.read_pickle(os.path.join(scratch_dir, 'sl2trans.pkl'))
dune2trans_df = pd.read_pickle(os.path.join(scratch_dir, 'dune2trans.pkl'))
arm2trans_df = pd.read_pickle(os.path.join(scratch_dir, 'arm2trans.pkl'))
# Join positions of shoreline, dune crest, dune toe, armoring
trans_df = fun.join_columns_id_check(trans_df, sl2trans_df, tID_fld)
trans_df = fun.join_columns_id_check(trans_df, dune2trans_df, tID_fld)
trans_df = fun.join_columns_id_check(trans_df, arm2trans_df, tID_fld)
# Save and print sample
trans_df.to_pickle(os.path.join(scratch_dir, 'trans_df_beachmetrics.pkl'))
trans_df.loc[:,['sort_ID', 'DD_ID', 'TRANSECTID', 'MHW', 'SL_x', 'Bslope', 'DH_z', 'DL_z']].sample(10)
###Output
...checking ID field(s) for df2 (join)...
...checking ID field(s) for df2 (join)...
...checking ID field(s) for df2 (join)...
###Markdown
Check for errors*Optional*Display summary stats / histograms and create feature classes. The feature classes display the locations that will be used to calculate beach width. Review the output feature classes in a GIS to validate.
###Code
plots = trans_df.hist(['DH_z', 'DL_z', 'Arm_z'])
# Subplot Labels
plots[0][0].set_xlabel("Elevation (m in NAVD88)")
plots[0][0].set_ylabel("Frequency")
plots[0][1].set_xlabel("Elevation (m in NAVD88)")
plots[0][1].set_ylabel("Frequency")
try:
plots[0][2].set_xlabel("Elevation (m in NAVD88)")
plots[0][2].set_ylabel("Frequency")
except:
pass
plt.show()
plt.close()
# Convert dataframe to feature class - shoreline points with slope
fwa.DFtoFC(sl2trans_df, os.path.join(arcpy.env.workspace, 'pts2trans_SL'),
spatial_ref=utmSR, id_fld=tID_fld, xy=["SL_x", "SL_y"], keep_fields=['Bslope'])
print('OUTPUT: pts2trans_SL in designated scratch geodatabase.')
# Dune crests
try:
fwa.DFtoFC(dune2trans_df, os.path.join(arcpy.env.workspace, 'ptSnap2trans_DH'),
spatial_ref=utmSR, id_fld=tID_fld, xy=["DH_snapX", "DH_snapY"], keep_fields=['DH_z'])
print('OUTPUT: ptSnap2trans_DH in designated scratch geodatabase.')
except Exception as err:
print(err)
pass
# Dune toes
try:
fwa.DFtoFC(dune2trans_df, os.path.join(arcpy.env.workspace, 'ptSnap2trans_DL'),
spatial_ref=utmSR, id_fld=tID_fld, xy=["DL_snapX", "DL_snapY"], keep_fields=['DL_z'])
print('OUTPUT: ptSnap2trans_DL in designated scratch geodatabase.')
except Exception as err:
print(err)
pass
###Output
... converting dataframe to array...
... converting array to feature class...
OUTPUT: pts2trans_SL in designated scratch geodatabase.
... converting dataframe to array...
... converting array to feature class...
OUTPUT: ptSnap2trans_DH in designated scratch geodatabase.
... converting dataframe to array...
... converting array to feature class...
OUTPUT: ptSnap2trans_DL in designated scratch geodatabase.
###Markdown
Calculate upper beach width and heightUpper beach width (__uBW__) and upper beach height (__uBH__) are calculated based on the difference in position between two points: the position of MHW along the transect (__SL_x__, __SL_y__) and the dune toe position or equivalent (usually __DL_snapX__, __DL_snapY__). In some cases, the dune toe is not appropriate to designate the "top of beach" so beach width and height are calculated from either the position of the dune toe, the dune crest, or the base of an armoring structure. The dune crest was only considered a possibility if the dune crest elevation (__DH_zMHW__) was less than or equal to `maxDH`. They are calculated as follows: 1. Calculate distances from MHW to the position along the transect of the dune toe (__DistDL__), dune crest (__DistDH__), and armoring (__DistArm__). 2. Adjust the elevations to MHW, populating fields __DH_zmhw__, __DL_zmhw__, and __Arm_zmhw__. 3. Conditionally select the appropriate feature to represent "top of beach." Dune toe is prioritized. If it is not available and __DH_zmhw__ is less than or equal to `maxDH`, use dune crest. If neither of the dune positions satisfies the conditions and an armoring feature intersects with the transect, use the armoring position. If none of the three are possible, __uBW__ and __uBH__ will be null. 4. Copy the distance to shoreline and height above MHW (__Dist--__, __---zmhw__) to __uBW__ and __uBH__, respectively. Notes:- In some morphology datasets, missing elevation values at a point indicate that the point should not be used to measure beach width. In those cases, use the `skip_missing_z` argument to select whether or not to skip these points.
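A simplified pandas sketch of the conditional selection in steps 3 and 4 (column names follow the fields described above; the rows and the `maxDH_demo` value are made up, and the real calculation is done by `fwa.calc_BeachWidth_fill()` in the next cell):

```python
import numpy as np
import pandas as pd

# Made-up rows illustrating the "top of beach" selection: dune toe first, then dune crest
# (only if DH_zmhw <= maxDH), then armoring; otherwise uBW/uBH stay null.
demo = pd.DataFrame({'DistDL':  [40.0,   np.nan, np.nan], 'DL_zmhw':  [1.2,    np.nan, np.nan],
                     'DistDH':  [55.0,   60.0,   np.nan], 'DH_zmhw':  [2.0,    3.5,    np.nan],
                     'DistArm': [np.nan, np.nan, 70.0],   'Arm_zmhw': [np.nan, np.nan, 1.8]})
maxDH_demo = 2.5
use_dl = demo['DistDL'].notnull()
use_dh = ~use_dl & (demo['DH_zmhw'] <= maxDH_demo)
use_arm = ~use_dl & ~use_dh & demo['DistArm'].notnull()
demo['uBW'] = demo['DistDL'].where(use_dl, demo['DistDH'].where(use_dh, demo['DistArm'].where(use_arm)))
demo['uBH'] = demo['DL_zmhw'].where(use_dl, demo['DH_zmhw'].where(use_dh, demo['Arm_zmhw'].where(use_arm)))
print(demo[['uBW', 'uBH']])
```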
###Code
# Load saved dataframe
trans_df = pd.read_pickle(os.path.join(scratch_dir, 'trans_df_beachmetrics.pkl'))
# Calculate distances from shore to dunes, etc.
trans_df = fwa.calc_BeachWidth_fill(extendedTrans, trans_df, maxDH, tID_fld,
sitevals['MHW'], fill, skip_missing_z=True)
###Output
...checking ID field(s) for df2 (join)...
Fields uBW and uBH populated with beach width and beach height.
###Markdown
Dist2InletDistance to nearest tidal inlet (__Dist2Inlet__) is computed as alongshore distance of each sampling transect from the nearest tidal inlet. This distance includes changes in the path of the shoreline instead of simply a Euclidean distance and reflects sediment transport pathways. It is measured using the oceanside shoreline between inlets (ShoreBetweenInlets). Note that the ShoreBetweenInlets feature class must be both 'dissolved' and 'singlepart' so that each feature represents one-and-only-one shoreline that runs the entire distance between two inlets or equivalent. If the shoreline is bounded on both sides by an inlet, measure the distance to both and assign the minimum distance of the two. If the shoreline meets only one inlet (meaning the study area ends before the island ends), use the distance to the only inlet. The process uses the cut, disjoint, and length geometry methods and properties in ArcPy data access module. The function measure_Dist2Inlet() prints a warning when the difference in Dist2Inlet between two consecutive transects is greater than 300.
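Conceptually, once a transect's alongshore position between its two bounding inlets is known, __Dist2Inlet__ is the smaller of the two alongshore distances. The sketch below shows only that final minimum step with made-up positions; the actual alongshore measurement follows the shoreline geometry inside `fwa.measure_Dist2Inlet()`:

```python
import numpy as np

# Made-up alongshore positions (m) of four transects on a shoreline bounded by inlets at 0 and 12000 m
alongshore_pos = np.array([500.0, 3000.0, 7000.0, 11500.0])
inlet_left, inlet_right = 0.0, 12000.0
dist2inlet = np.minimum(alongshore_pos - inlet_left, inlet_right - alongshore_pos)
print(dist2inlet)  # [ 500. 3000. 5000.  500.]
```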
###Code
# Calc Dist2Inlet in new dataframe
dist_df = fwa.measure_Dist2Inlet(shoreline, extendedTrans, inletLines, tID_fld)
# Join to transects
trans_df = fun.join_columns_id_check(trans_df, pd.DataFrame(dist_df.Dist2Inlet), tID_fld, fill=fill)
# Save and view last 10 rows
dist_df.to_pickle(os.path.join(scratch_dir, 'dist2inlet_df.pkl'))
dist_df.tail(10)
###Output
CAUTION: Large change in Dist2Inlet values between transects 21 (3 3.580595
Name: Dist2Inlet, dtype: float64 m) and 22 (666.1722765582263 m).
CAUTION: Large change in Dist2Inlet values between transects 46 (28 2207.49022
Name: Dist2Inlet, dtype: float64 m) and 47 (2663.276674527246 m).
CAUTION: Large change in Dist2Inlet values between transects 85 (67 6669.116191
Name: Dist2Inlet, dtype: float64 m) and 86 (5732.512161078799 m).
Duration: 0:0:38.9 seconds
...checking ID field(s) for df2 (join)...
###Markdown
Clip transects, get barrier widthsCalculates __WidthLand__, __WidthFull__, and __WidthPart__, which measure different flavors of the cross-shore width of the barrier island. __WidthLand__ is the above-water distance between the back-barrier and seaward MHW shorelines. __WidthLand__ only includes regions of the barrier within the shoreline polygon (bndpoly_2sl) and does not extend into any of the sinuous or intervening back-barrier waterways and islands. __WidthFull__ is the total distance between the back-barrier and seaward MHW shorelines (including space occupied by waterways). __WidthPart__ is the width of only the most seaward portion of land within the shoreline. These are calculated as follows: 1. Clip the transect to the full island shoreline (Clip in the Analysis toolbox); 2. For __WidthLand__, get the length of the multipart line segment from "SHAPE@LENGTH" feature class attribute. When the feature is multipart, this will include only the remaining portions of the transect; 3. For __WidthPart__, convert the clipped transect from multipart to singlepart and get the length of the first line segment, which should be the most seaward; 4. For __WidthFull__, calculate the distance between the first vertex and the last vertex of the clipped transect (Feature Class to NumPy Array with explode to points, pandas groupby, numpy hypot).
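For example, the __WidthFull__ step (distance between the first and last vertex of each clipped transect) reduces to a groupby plus `numpy.hypot`; the sketch below uses made-up exploded vertices, whereas the real work happens inside `fwa.calc_IslandWidths()` in the next cell:

```python
import numpy as np
import pandas as pd

# Made-up exploded vertices of clipped transects: one row per vertex, ordered along each transect
verts = pd.DataFrame({'sort_ID': [1, 1, 1, 2, 2],
                      'SHAPE@X': [465000.0, 465150.0, 465400.0, 466000.0, 466300.0],
                      'SHAPE@Y': [4189300.0, 4189320.0, 4189360.0, 4189500.0, 4189540.0]})
first = verts.groupby('sort_ID').first()
last = verts.groupby('sort_ID').last()
width_full = np.hypot(last['SHAPE@X'] - first['SHAPE@X'], last['SHAPE@Y'] - first['SHAPE@Y'])
print(width_full)  # distance from first to last vertex for each transect
```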
###Code
# Clip transects, get barrier widths
widths_df = fwa.calc_IslandWidths(extendedTrans, barrierBoundary, tID_fld=tID_fld)
# # Save
widths_df.to_pickle(os.path.join(scratch_dir, 'widths_df.pkl'))
# Join
trans_df = fun.join_columns_id_check(trans_df, widths_df, tID_fld, fill=fill)
# Save
trans_df.to_pickle(os.path.join(scratch_dir, trans_name+'_null_prePts.pkl'))
trans_df.loc[:,['sort_ID', 'WidthLand', 'WidthPart', 'WidthFull', 'Dist2Inlet', 'uBW']].sample(10)
###Output
_____no_output_____
###Markdown
5-m PointsThe point dataset samples the land every 5 m along each shore-normal transect. Split transects into points at 5-m intervals. The point dataset is created from the tidied transects (tidyTrans, created during pre-processing) as follows: 1. Clip the tidied transects (tidyTrans) to the shoreline polygon (bndpoly_2sl) , retaining only those portions of the transects that represent land.2. Produce a dataframe of point positions along each transect every 5 m starting from the ocean-side shoreline. This uses the positionAlongLine geometry method accessed with a Search Cursor and saves the outputs in a new dataframe. 3. Create a point feature class from the dataframe. Note: Sometimes the system doesn't seem to register the new feature class (transPts_unsorted) for a while. I'm not sure how to work around that, other than just to wait.
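Stepping along a transect every 5 m from the seaward end can be pictured with the small numpy sketch below (made-up endpoints; the actual points are generated with the arcpy `positionAlongLine` method inside `fwa.TransectsToPointsDF()`):

```python
import numpy as np

def points_along_line(start, end, spacing=5.0):
    """Return XY positions every `spacing` meters from the start vertex toward the end vertex."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    length = np.hypot(*(end - start))
    dists = np.arange(0, length, spacing)  # every `spacing` m starting at the seaward end (0)
    return start + dists[:, None] * (end - start) / length

# Made-up clipped transect endpoints, seaward end first
print(points_along_line([465000.0, 4189300.0], [465030.0, 4189340.0]))
```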
###Code
pts_df, pts_presort = fwa.TransectsToPointsDF(extTrans_tidy, barrierBoundary, fc_out=pts_presort)
print("OUTPUT: '{}' in scratch geodatabase.".format(os.path.basename(pts_presort)))
# Save
pts_df.to_pickle(os.path.join(scratch_dir, 'pts_presort.pkl'))
###Output
Clipping transects to within the shoreline bounds ('tidytrans_clipped')...
Getting points every 5m along each transect and saving in new dataframe...
Converting dataframe to feature class ('transPts_unsorted')...
... converting dataframe to array...
... converting array to feature class...
Duration: 0:29:6.8 seconds
OUTPUT: 'transPts_unsorted' in scratch geodatabase.
###Markdown
Add Elevation and Slope to points__ptZ__ (later __ptZmhw__) and __ptSlp__ are the elevation and slope at the 5-m cell corresponding to the point. 1. Create the slope and DEM rasters if they don't already exist. We use the 5-m DEM to generate a slope surface (Slope tool in 3D Analyst). 2. Use Extract Multi Values to Points tool in Spatial Analyst. 3. Convert the feature class back to a dataframe.
###Code
# Elevation grid: DEM of island elevation at either 5 m or 1 m resolution
elevGrid = os.path.join(home, 'DEM_rasterdataset')
arcpy.Exists(slopeGrid)
# Create slope raster from DEM
if not arcpy.Exists(slopeGrid):
arcpy.Slope_3d(elevGrid, slopeGrid, 'PERCENT_RISE')
print("OUTPUT: slope file in designated home geodatabase.")
# Add elevation and slope values at points.
arcpy.sa.ExtractMultiValuesToPoints(pts_presort, [[elevGrid, 'ptZ'], [slopeGrid, 'ptSlp']])
print("OUTPUT: added slope and elevation to '{}' in designated scratch geodatabase.".format(os.path.basename(pts_presort)))
if 'SubType' in locals():
# Add substrate type, geomorphic setting, veg type, veg density values at points.
arcpy.sa.ExtractMultiValuesToPoints(pts_presort, [[SubType, 'SubType'], [VegType, 'VegType'],
[VegDens, 'VegDens'], [GeoSet, 'GeoSet']])
# Convert to dataframe
pts_df = fwa.FCtoDF(pts_presort, xy=True, dffields=[tID_fld,'ptZ', 'ptSlp', 'SubType',
'VegType', 'VegDens', 'GeoSet'])
# Recode fill values
pts_df.replace({'GeoSet': {9999:np.nan}, 'SubType': {9999:np.nan}, 'VegType': {9999:np.nan},
'VegDens': {9999:np.nan}}, inplace=True)
else:
print("Plover BN layers not specified (we only check for SubType), so we'll proceed without them. ")
# Convert to dataframe
pts_df = fwa.FCtoDF(pts_presort, xy=True, dffields=[tID_fld,'ptZ', 'ptSlp'])
# Convert new fields to appropriate types
pts_df = pts_df.astype({'ptZ':'float64', 'ptSlp':'float64'})
# Save and view sample
pts_df.to_pickle(os.path.join(scratch_dir, 'pts_extractedvalues_presort.pkl'))
pts_df.loc[:,['SplitSort', 'DD_ID', 'TRANSECTID', 'GeoSet', 'SubType', 'VegType', 'VegDens', 'ptZ', 'ptSlp']].sample(10)
# Print histogram of elevation extracted to points
plots = pts_df.hist('ptZ')
# Subplot Labels
plots[0][0].set_xlabel("Elevation (m in NAVD88)")
plots[0][0].set_ylabel("Frequency")
# Display
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
Calculate distances and sort points. __SplitSort__ is a unique numeric identifier of the 5-m points at the study site, sorted by order along the shoreline and by distance from the oceanside. __SplitSort__ values are populated by sorting the points by __sort_ID__ and __Dist_Seg__ (see below). __Dist_Seg__ is the Euclidean distance between the point and the seaward shoreline (__SL_x__, __SL_y__). __Dist_MHWbay__ is the distance between the point and the bayside shoreline and is calculated by subtracting the __Dist_Seg__ value from the __WidthPart__ value of the transect. __DistSegDH__, __DistSegDL__, and __DistSegArm__ measure the distance of each 5-m point from the dune crest, dune toe, and armoring positions along a particular transect. They are calculated as the Euclidean distance between the 5-m point and the given feature.
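A rough sketch of these point-level calculations follows. It assumes the transect attributes (shoreline, dune, and armoring coordinates plus __WidthPart__) have already been joined onto the 5-m points, and it only approximates what `fun.prep_points` presumably does; the function name is hypothetical.
###Code
# Illustrative sketch only -- the production logic is in fun.join_columns / fun.prep_points.
# Assumes transect fields (SL_x, SL_y, DH_x, DH_y, DL_x, DL_y, Arm_x, Arm_y, WidthPart)
# are already joined onto the 5-m points dataframe.
import numpy as np
def prep_points_sketch(pts, tid_fld='sort_ID'):
    pts = pts.copy()
    # Euclidean distance from each point to the seaward (MHW) shoreline position.
    pts['Dist_Seg'] = np.hypot(pts['seg_x'] - pts['SL_x'], pts['seg_y'] - pts['SL_y'])
    # Distance to the bayside shoreline: seaward land width minus distance from the ocean side.
    pts['Dist_MHWbay'] = pts['WidthPart'] - pts['Dist_Seg']
    # Distances to dune crest (DH), dune toe (DL), and armoring (Arm).
    for feat in ['DH', 'DL', 'Arm']:
        pts['DistSeg' + feat] = np.hypot(pts['seg_x'] - pts[feat + '_x'],
                                         pts['seg_y'] - pts[feat + '_y'])
    # Sort along the shoreline, then seaward to landward; the resulting row order is SplitSort.
    pts = pts.sort_values([tid_fld, 'Dist_Seg']).reset_index(drop=True)
    pts['SplitSort'] = pts.index
    return pts
###Output
_____no_output_____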
###Code
# Load saved dataframes
pts_df = pd.read_pickle(os.path.join(scratch_dir, 'pts_extractedvalues_presort.pkl'))
trans_df = pd.read_pickle(os.path.join(scratch_dir, trans_name+'_null_prePts.pkl'))
# Calculate DistSeg, Dist_MHWbay, DistSegDH, DistSegDL, DistSegArm, and sort points (SplitSort)
pts_df = fun.join_columns(pts_df, trans_df, tID_fld)
pts_df = fun.prep_points(pts_df, tID_fld, pID_fld, sitevals['MHW'], fill)
# Aggregate ptZmhw to max and mean and join to transects
pts_df, zmhw = fun.aggregate_z(pts_df, sitevals['MHW'], tID_fld, 'ptZ', fill)
trans_df = fun.join_columns(trans_df, zmhw)
# Join transect values to pts
pts_df = fun.join_columns(pts_df, trans_df, tID_fld)
# pID_fld needs to be among the columns
if not pID_fld in pts_df.columns:
pts_df.reset_index(drop=False, inplace=True)
# Match field names to those in sorted_pt_flds list
for fld in pts_df.columns:
if fld not in sorted_pt_flds:
for i, fldi in enumerate(sorted_pt_flds):
if fldi.lower() == fld.lower():
sorted_pt_flds[i] = fld
print(fld)
# Drop extra fields and sort columns
trans_df.drop(extra_fields, axis=1, inplace=True, errors='ignore')
for i, f in enumerate(sorted_pt_flds):
for c in pts_df.columns:
if f.lower() == c.lower():
sorted_pt_flds[i] = c
pts_df = pts_df.reindex_axis(sorted_pt_flds, axis=1)
# convert projected coordinates to geographic coordinates (lat, lon in NAD83)
pts_df = fun.add_latlon(pts_df, proj_code)
# Save dataframes
trans_df.to_pickle(os.path.join(scratch_dir, trans_name+'_null.pkl'))
pts_df.to_pickle(os.path.join(scratch_dir, pts_name+'_null.pkl'))
# View random rows from the points DF
pts_df.loc[:,['SplitSort', 'seg_x', 'SubType', 'ptZmhw', 'sort_ID', 'Dist_Seg', 'mean_Zmhw']].sample(10)
###Output
_____no_output_____
###Markdown
Change number of significant digits: - Elevations: 2 significant digits - Lengths: 1 (lidar has 15-30 cm resolution in the vertical and roughly 1 m in the horizontal, I believe) - UTM coords: 2 - lat/longs: 6 - LRR: 2. It would be nice if the fill value were always an integer (not -99999.0 etc.).
###Code
# Replace NaNs with fill value
pts_df.fillna(fill, inplace=True)
# Round fields to given significant digits
fprecision = {# UTM coordinates
'seg_x':2, 'seg_y':2, 'SL_x':2, 'SL_y':2, 'DL_x':2, 'DL_y':2,
'DH_x':2, 'DH_y':2, 'DL_snapX':2, 'DL_snapY':2, 'DH_snapX':2, 'DH_snapY':2,
'Arm_x':2, 'Arm_y':2,
# Geographic coordinates
'seg_lon':6, 'seg_lat':6,
# Elevations
'ptZ':2, 'ptZmhw':2, 'DL_z':2, 'DL_zmhw':2, 'DH_z':2, 'DH_zmhw':2,
'Arm_z':2, 'Arm_zmhw':2, 'uBH':2, 'mean_Zmhw':2, 'max_Zmhw':2,
# Lengths
'Dist_Seg':1, 'Dist_MHWbay':1, 'DistSegDH':1, 'DistSegDL':1, 'DistSegArm':1, 'DistDH':1,
'DistDL':1, 'DistArm':1,'Dist2Inlet':1, 'WidthPart':1, 'WidthLand':1, 'WidthFull':1, 'uBW':1,
# LRR
'LRR':2,
# IDs
'SplitSort':0, 'sort_ID':0, 'TRANSECTID':0, 'TRANSORDER':0, 'DD_ID':0,
# Other
'Azimuth':1,'Bslope':4,'ptSlp':4}
pts_df = pts_df.round(fprecision)
# # Set GeoSet, SubType, VegDens, VegType fields to int32 dtypes
pts_df = pts_df.astype({'GeoSet':'int32', 'SubType':'int32', 'VegDens':'int32', 'VegType':'int32',
'Construction':'int32', 'Development':'int32', 'Nourishment':'int32',
'sort_ID':'int32', 'DD_ID':'int32', 'TRANSECTID':'int32'})
# Save
pts_df.to_pickle(os.path.join(scratch_dir, pts_name+'_fill_ints.pkl'))
# View random rows from the points DF
pts_df.loc[:,['SplitSort', 'sort_ID', 'seg_y', 'VegType', 'ptSlp', 'Dist_Seg', 'max_Zmhw', 'Development', 'TRANSECTID']].sample(10)
###Output
_____no_output_____
###Markdown
Recode the values for CSV output and model running
###Code
# Recode
pts_df4csv = pts_df.replace({'SubType': {7777:'{1111, 2222}', 1000:'{1111, 3333}'},
'VegType': {77:'{11, 22}', 88:'{22, 33}', 99:'{33, 44}'},
'VegDens': {666: '{111, 222}', 777: '{222, 333}',
888: '{333, 444}', 999: '{222, 333, 444}'}})
# Save and view sample
pts_df4csv.to_pickle(os.path.join(scratch_dir, pts_name+'_csv.pkl'))
pts_df.loc[:,['SplitSort', 'sort_ID', 'VegType', 'SubType', 'VegDens', 'GeoSet']].sample(10)
# Replace fills with Nulls
pts_df = pts_df.replace(fill, np.nan)
pts_df.to_pickle(os.path.join(scratch_dir, pts_name+'_null.pkl'))
###Output
_____no_output_____
###Markdown
Quality checking. Look at extracted profiles from around the island. Enter the transect ID within the available range when prompted. Evaluate the plots for consistency among variables. Repeat several times until you are satisfied that the variables are consistent with each other and appear to represent reality. View areas with inconsistencies in a GIS.
###Code
desccols = ['DL_zmhw', 'DH_zmhw', 'Arm_zmhw', 'uBW', 'uBH', 'Dist2Inlet',
'WidthPart', 'WidthLand', 'WidthFull', 'mean_Zmhw', 'max_Zmhw']
# Histograms
trans_df.hist(desccols, sharey=True, figsize=[15, 10], bins=20)
plt.show()
plt.close('all')
flds_dist = ['SplitSort', 'Dist_Seg', 'Dist_MHWbay', 'DistSegDH', 'DistSegDL', 'DistSegArm']
flds_z = ['ptZmhw', 'ptZ', 'ptSlp']
pts_df.loc[:,flds_dist+flds_z].describe()
pts_df.hist(flds_dist, sharey=True, figsize=[15, 8], layout=(2,3))
pts_df.hist(flds_z, sharey=True, figsize=[15, 4], layout=(1,3))
plt.show()
plt.close('all')
# Prompt for transect identifier (sort_ID) and get all points from that transect.
trans_in = int(input('Transect ID ("sort_ID" {:d}-{:d}): '.format(int(pts_df[tID_fld].head(1)), int(pts_df[tID_fld].tail(1)))))
pts_set = pts_df[pts_df[tID_fld] == trans_in]
# Plot
fig = plt.figure(figsize=(13,10))
# Plot the width of the island.
ax1 = fig.add_subplot(211)
try:
fun.plot_island_profile(ax1, pts_set, sitevals['MHW'], sitevals['MTL'])
except TypeError as err:
print('TypeError: {}'.format(err))
pass
# Zoom in on the upper beach.
ax2 = fig.add_subplot(212)
try:
fun.plot_beach_profile(ax2, pts_set, sitevals['MHW'], sitevals['MTL'], maxDH)
except TypeError as err:
print('TypeError: {}'.format(err))
pass
# Display
plt.show()
plt.close('all')
###Output
Transect ID ("sort_ID" 1-1229): 1200
###Markdown
Report field values
###Code
# Load dataframe
pts_df4csv = pd.read_pickle(os.path.join(scratch_dir, pts_name+'_csv.pkl'))
xmlfile = os.path.join(scratch_dir, pts_name+'_eainfo.xml')
fun.report_fc_values(pts_df4csv, field_defs, xmlfile)
###Output
Number of points in dataset: (278037, 56)
SplitSort______________________________0 | 278036______________ No fills_________No nulls
seg_x__________________________464797.51 | 491897.45___________ Fills present____No nulls
seg_y_________________________4189227.39 | 4241883.85__________ Fills present____No nulls
seg_lon_______________________-75.400256 | -75.092695__________ Fills present____No nulls
seg_lat________________________37.849861 | 38.325035___________ Fills present____No nulls
Dist_Seg_____________________________0.0 | 4097.4______________ Fills present____No nulls
Dist_MHWbay______________________-3888.1 | 3831.4______________ Fills present____No nulls
DistSegDH_________________________-379.2 | 4061.4______________ Fills present____No nulls
DistSegDL_________________________-244.5 | 4082.7______________ Fills present____No nulls
DistSegArm________________________-99999 | -99999______________ ONLY Fills_______No nulls
ptZ________________________________-1.52 | 15.12_______________ Fills present____No nulls
ptSlp______________________________0.001 | 73.2456_____________ Fills present____No nulls
ptZmhw_____________________________-1.86 | 14.78_______________ Fills present____No nulls
GeoSet.............. -99999 | 1 | 2 | 3 | 4 | 5 | 6 | 7
SubType............. -99999 | 3333 | 4444 | 6666 | {1111, 2222} | {1111, 3333}
VegDens............. -99999 | 111 | 555 | {111, 222} | {333, 444}
VegType............. -99999 | 11 | 55 | {11, 22} | {22, 33} | {33, 44}
sort_ID________________________________1 | 1229________________ No fills_________No nulls
TRANSECTID____________________________99 | 2430________________ Fills present____No nulls
DD_ID_____________________________210001 | 211229______________ No fills_________No nulls
Azimuth_____________________________27.8 | 321.0_______________ No fills_________No nulls
LRR________________________________-6.24 | 21.49_______________ Fills present____No nulls
SL_x___________________________464867.68 | 491897.57___________ Fills present____No nulls
SL_y__________________________4189227.39 | 4241661.22__________ Fills present____No nulls
Bslope_____________________________-0.21 | -0.0104_____________ Fills present____No nulls
DL_x___________________________465288.62 | 491816.11___________ Fills present____No nulls
DL_y__________________________4189290.48 | 4241695.01__________ Fills present____No nulls
DL_z________________________________1.02 | 4.32________________ Fills present____No nulls
DL_zmhw_____________________________0.68 | 3.98________________ Fills present____No nulls
DL_snapX_______________________465282.66 | 491815.98___________ Fills present____No nulls
DL_snapY______________________4189292.32 | 4241694.69__________ Fills present____No nulls
DH_x___________________________464812.56 | 491806.33___________ Fills present____No nulls
DH_y__________________________4189318.45 | 4241700.58__________ Fills present____No nulls
DH_z________________________________1.03 | 7.95________________ Fills present____No nulls
DH_zmhw_____________________________0.69 | 7.61________________ Fills present____No nulls
DH_snapX_______________________464813.72 | 491805.65___________ Fills present____No nulls
DH_snapY______________________4189319.22 | 4241698.93__________ Fills present____No nulls
Arm_x_____________________________-99999 | -99999______________ ONLY Fills_______No nulls
Arm_y_____________________________-99999 | -99999______________ ONLY Fills_______No nulls
Arm_z_____________________________-99999 | -99999______________ ONLY Fills_______No nulls
Arm_zmhw__________________________-99999 | -99999______________ ONLY Fills_______No nulls
DistDH______________________________16.5 | 380.4_______________ Fills present____No nulls
DistDL_______________________________2.4 | 377.1_______________ Fills present____No nulls
DistArm___________________________-99999 | -99999______________ ONLY Fills_______No nulls
Dist2Inlet___________________________3.6 | 31423.2_____________ Fills present____No nulls
WidthPart___________________________39.2 | 3831.4______________ Fills present____No nulls
WidthLand___________________________39.2 | 3985.8______________ Fills present____No nulls
WidthFull___________________________39.2 | 6077.4______________ Fills present____No nulls
uBW__________________________________2.4 | 380.4_______________ Fills present____No nulls
uBH_________________________________0.68 | 3.98________________ Fills present____No nulls
ub_feat............. -99999 | DH | DL
mean_Zmhw__________________________-0.01 | 2.18________________ Fills present____No nulls
max_Zmhw____________________________0.68 | 14.78_______________ Fills present____No nulls
Construction........ 111 | 222 | 333
Development......... 111 | 222 | 333
Nourishment......... 111 | 222 | 333
###Markdown
Outputs: Transect-averaged. Output the transect-averaged metrics in the following formats: - transects, unpopulated except for ID values, as gdb feature class - transects, unpopulated except for ID values, as shapefile - populated transects with fill values as gdb feature class - populated transects with null values as gdb feature class - populated transects with fill values as shapefile - raster of beach width (__uBW__) by transect
###Code
# Load the dataframe
trans_df = pd.read_pickle(os.path.join(scratch_dir, trans_name+'_null.pkl'))
###Output
_____no_output_____
###Markdown
Vector format
###Code
# Create transect file with only ID values and geometry to publish.
trans_flds = ['TRANSECTID', 'DD_ID', 'MHW']
for i, f in enumerate(trans_flds):
for c in trans_df.columns:
if f.lower() == c.lower():
trans_flds[i] = c
trans_4pub = fwa.JoinDFtoFC(trans_df.loc[:,trans_flds], extendedTrans, tID_fld, out_fc=sitevals['code']+'_trans')
out_shp = arcpy.FeatureClassToFeatureClass_conversion(trans_4pub, scratch_dir, sitevals['code']+'_trans.shp')
print("OUTPUT: {} in specified scratch_dir.".format(os.path.basename(str(out_shp))))
trans_4pubdf = fwa.FCtoDF(trans_4pub)
xmlfile = os.path.join(scratch_dir, trans_4pub + '_eainfo.xml')
trans_df_extra_flds = fun.report_fc_values(trans_4pubdf, field_defs, xmlfile)
# Create transect FC with fill values - Join values from trans_df to the transect FC as a new file.
trans_fc = fwa.JoinDFtoFC_2(trans_df, extendedTrans, tID_fld, out_fc=trans_name+'_fill')
# Create transect FC with null values
fwa.CopyFCandReplaceValues(trans_fc, fill, None, out_fc=trans_name+'_null', out_dir=home)
# Save final transect SHP with fill values
out_shp = arcpy.FeatureClassToFeatureClass_conversion(trans_fc, scratch_dir, trans_name+'_shp.shp')
print("OUTPUT: {} in specified scratch_dir.".format(os.path.basename(str(out_shp))))
###Output
Created asis14_trans_fill from input dataframe and extTrans file.
OUTPUT: asis14_trans_null
OUTPUT: asis14_trans_shp.shp in specified scratch_dir.
###Markdown
Raster - beach width. It may be necessary to close any Arc sessions you have open.
###Code
# Create a template raster corresponding to the transects.
if not arcpy.Exists(rst_transID):
print("{} was not found so we will create the base raster.".format(os.path.basename(rst_transID)))
outEucAll = arcpy.sa.EucAllocation(extTrans_tidy, maximum_distance=50, cell_size=cell_size, source_field=tID_fld)
outEucAll.save(os.path.basename(rst_transID))
# Create raster of uBW values by joining trans_df to the template raster.
out_rst = fwa.JoinDFtoRaster(trans_df, os.path.basename(rst_transID), bw_rst, fill, tID_fld, 'uBW')
###Output
OUTPUT: asis14_ubw. Field "Value" is ID and "uBW" is beachwidth.
###Markdown
5-m points. Output the point metrics in the following formats: - tabular, in CSV - populated points with fill values as gdb feature class - populated points with null values as gdb feature class - populated points with fill values as shapefile
###Code
# Load the saved dataframes
pts_df4csv = pd.read_pickle(os.path.join(scratch_dir, pts_name+'_csv.pkl'))
pts_df = pd.read_pickle(os.path.join(scratch_dir, pts_name+'_null.pkl'))
###Output
_____no_output_____
###Markdown
Tabular format
###Code
# Save CSV in scratch_dir
csv_fname = os.path.join(scratch_dir, pts_name +'.csv')
pts_df4csv.to_csv(csv_fname, na_rep=fill, index=False)
sz_mb = os.stat(csv_fname).st_size/(1024.0 * 1024.0)
print("\nOUTPUT: {} (size: {:.2f} MB) in specified scratch_dir.".format(os.path.basename(csv_fname), sz_mb))
###Output
OUTPUT: asis14_pts.csv (size: 107.61 MB) in specified scratch_dir.
###Markdown
Vector format
###Code
# Convert pts_df to FC - automatically converts NaNs to fills (default fill is -99999)
pts_fc = fwa.DFtoFC_large(pts_df, out_fc=os.path.join(arcpy.env.workspace, pts_name+'_fill'),
spatial_ref=utmSR, df_id=pID_fld, xy=["seg_x", "seg_y"])
# Save final FCs with null values
fwa.CopyFCandReplaceValues(pts_fc, fill, None, out_fc=pts_name+'_null', out_dir=home)
# Save final points as SHP with fill values
out_pts_shp = arcpy.FeatureClassToFeatureClass_conversion(pts_fc, scratch_dir, pts_name+'_shp.shp')
print("OUTPUT: {} in specified scratch_dir.".format(os.path.basename(str(out_pts_shp))))
###Output
Converting points DF to FC...
... converting dataframe to array...
... converting array to feature class...
OUTPUT: asis14_pts_fill
Duration: 0:31:8.2 seconds
OUTPUT: asis14_pts_null
OUTPUT: asis14_pts_shp.shp in specified scratch_dir.
|
IMDB_and_Amazon_Review_Classification_with_SpaCy.ipynb | ###Markdown
1) Importing the datasets
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import spacy
nlp = spacy.load('en_core_web_sm')
data_yelp = pd.read_csv('/content/yelp_labelled.txt', sep='\t',header=None)
data_yelp.head()
# review and sentiment
# 0-Negative, 1-Positive for positive review
# Assign column names
columan_name = ['Review', 'Sentiment']
data_yelp.columns = columan_name
data_yelp.head()
data_yelp.shape
# 1000 rows (reviews), 2 columns (Sentiments)
data_amazon = pd.read_csv('/content/amazon_cells_labelled.txt',sep='\t',header=None)
data_amazon.head()
# review and sentiment
# 0-Negative, 1-Positive for positive review
data_amazon.columns = columan_name
data_amazon.head()
data_amazon.shape
data_imdb = pd.read_csv('/content/imdb_labelled.txt',sep='\t',header=None)
data_imdb.head()
data_imdb.columns = columan_name
data_imdb.head()
data_imdb.shape
# Append all the data in a single dataframe
data = data_yelp.append([data_amazon, data_imdb],ignore_index=True)
data.shape
data.head()
# check distribution of sentiments
data['Sentiment'].value_counts()
# 1346 positive reviews
# 1362 Negative reviews
# check for null values
data.isnull().sum()
# no null values in the data
x = data['Review']
y = data['Sentiment']
###Output
_____no_output_____
###Markdown
2) Data Cleaning
###Code
# Here we will remove stopwords and punctuation,
# and apply lemmatization.
###Output
_____no_output_____
###Markdown
Create a function to clean the data
###Code
import string
punct = string.punctuation
punct
from spacy.lang.en.stop_words import STOP_WORDS
stopwords = list(STOP_WORDS) # list of stopwords
# creating a function for data cleaning
def text_data_cleaning(sentence):
doc = nlp(sentence)
tokens = [] # list of tokens
for token in doc:
if token.lemma_ != "-PRON-":
temp = token.lemma_.lower().strip()
else:
temp = token.lower_
tokens.append(temp)
cleaned_tokens = []
for token in tokens:
if token not in stopwords and token not in punct:
cleaned_tokens.append(token)
return cleaned_tokens
# If the lemma is not '-PRON-' (i.e., the token is not a pronoun), take the lowercased, stripped lemma;
# if the token is a pronoun, take its lowercased text directly, because spaCy (v2.x) lemmatizes pronouns to the placeholder '-PRON-' rather than a real lemma.
text_data_cleaning("Hello all, It's a beautiful day outside there!")
# stopwords and punctuations removed
###Output
_____no_output_____
###Markdown
Vectorization Feature Engineering (TF-IDF)
###Code
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
tfidf = TfidfVectorizer(tokenizer=text_data_cleaning)
# tokenizer=text_data_cleaning, tokenization will be done according to this function
classifier = LinearSVC()
###Output
_____no_output_____
###Markdown
3) Train the model Splitting the dataset into the Train and Test set
###Code
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, random_state = 0)
x_train.shape, x_test.shape
# 2198 samples in training dataset and 550 in test dataset
x_train.head()
###Output
_____no_output_____
###Markdown
Fit the x_train and y_train
###Code
clf = Pipeline([('tfidf',tfidf), ('clf',classifier)])
# it will first do vectorization and then it will do classification
clf.fit(x_train, y_train)
# in this we don't need to prepare the dataset for testing(x_test)
###Output
_____no_output_____
###Markdown
4) Predict the Test set results
###Code
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
y_pred = clf.predict(x_test)
# confusion_matrix
confusion_matrix(y_test, y_pred)
# classification_report
print(classification_report(y_test, y_pred))
# we are getting almost 77% accuracy
accuracy_score(y_test, y_pred)
# 76% accuracy
clf.predict(["Wow, I am learning Natural Language Processing in fun fashion!"])
# output is 1, that means review is positive
clf.predict(["It's hard to learn new things!"])
# output is 0, that means review is Negative
###Output
_____no_output_____ |
correction_factor_maps.ipynb | ###Markdown
South Africa
###Code
shpZAF = geopandas.read_file(data_path + '/country_shapefiles/ZAF/zaf_admbnda_adm1_2016SADB_OCHA.shp')
cfEg2_ZAF = xr.open_dataarray(results_path + '/ZAF/cf_ERA_GWA2.nc')
cfEg3_ZAF = xr.open_dataarray(results_path + '/ZAF/cf_ERA_GWA3.nc')
cfMg2_ZAF = xr.open_dataarray(results_path + '/ZAF/cf_MERRA_GWA2.nc')
cfMg3_ZAF = xr.open_dataarray(results_path + '/ZAF/cf_MERRA_GWA3.nc')
max(cfEg2_ZAF.max().values,cfMg2_ZAF.max().values,cfEg3_ZAF.max().values,cfMg3_ZAF.max().values)
data = [cfEg2_ZAF,cfEg3_ZAF,cfMg2_ZAF,cfMg3_ZAF]
titles = ['ERA5 GWA2','ERA5 GWA3','MERRA-2 GWA2','MERRA-2 GWA3']
fig, axes = plt.subplots(2, 2,figsize=(8,5.4),gridspec_kw = {'hspace':0.5})
for i in range(4):
axes[int(i/2),i%2].set_xlim(16,32.5)
axes[int(i/2),i%2].set_ylim(-35,-22)
data[i].plot.imshow(ax=axes[int(i/2),i%2],vmin=0, vmax=2)
shpZAF.boundary.plot(ax=axes[int(i/2),i%2],color='lightgray',linewidth=0.8)
axes[int(i/2),i%2].set_title(titles[i])
axes[int(i/2),i%2].set_xlabel('Longitude')
axes[int(i/2),i%2].set_ylabel('Latitude')
plt.savefig(results_path + '/plots/cf_ZAF.png',dpi=300)
plt.boxplot(pd.Series(cfEg2_ZAF.values.flatten()).dropna())
fig,ax = plt.subplots(figsize=(10,5))
ax.set_xlim(16,32.5)
ax.set_ylim(-35,-22)
cfEg2_ZAF.plot.imshow(ax=ax)
shpZAF.boundary.plot(ax=ax,color='lightgray',linewidth=0.8)
###Output
_____no_output_____
###Markdown
New Zealand
###Code
shpNZ = geopandas.read_file(data_path + '/country_shapefiles/NZ/CON2017_HD_Clipped.shp')
cfEg2_NZ = xr.open_dataarray(results_path + '/NZ/cf_ERA_GWA2.nc')
cfEg3_NZ = xr.open_dataarray(results_path + '/NZ/cf_ERA_GWA3.nc')
cfMg2_NZ = xr.open_dataarray(results_path + '/NZ/cf_MERRA_GWA2.nc')
cfMg3_NZ = xr.open_dataarray(results_path + '/NZ/cf_MERRA_GWA3.nc')
max(cfEg2_NZ.max().values,cfMg2_NZ.max().values,cfEg3_NZ.max().values,cfMg3_NZ.max().values)
data = [cfEg2_NZ,cfEg3_NZ,cfMg2_NZ,cfMg3_NZ]
titles = ['ERA5 GWA2','ERA5 GWA3','MERRA-2 GWA2','MERRA-2 GWA3']
fig, axes = plt.subplots(2, 2,figsize=(8,7),gridspec_kw = {'hspace':0.5})
for i in range(4):
axes[int(i/2),i%2].set_xlim(166,178)
axes[int(i/2),i%2].set_ylim(-34,-48)
try:
data[i].sel(longitude=slice(160,180),latitude=slice(-34,-48)).plot.imshow(ax=axes[int(i/2),i%2],vmin=0,vmax=3)
except:
data[i].sel(lon=slice(160,180),lat=slice(-34,-48)).plot.imshow(ax=axes[int(i/2),i%2],vmin=0,vmax=3)
shpNZ.to_crs({'init': 'epsg:4326'}).boundary.plot(ax=axes[int(i/2),i%2],color='lightgray',linewidth=0.4)
axes[int(i/2),i%2].set_title(titles[i])
axes[int(i/2),i%2].set_xlabel('Longitude')
axes[int(i/2),i%2].set_ylabel('Latitude')
plt.savefig(results_path + '/plots/cf_NZ.png',dpi=300)
data = [cfEg2_NZ,cfEg3_NZ,cfMg2_NZ,cfMg3_NZ]
titles = ['ERA5 GWA2','ERA5 GWA3','MERRA-2 GWA2','MERRA-2 GWA3']
fig, axes = plt.subplots(1, 4,figsize=(40,8),gridspec_kw = {'wspace':0.1})
for i in range(4):
data[i].plot.imshow(ax=axes[i])
axes[i].set_title(titles[i])
###Output
_____no_output_____
###Markdown
Brazil
###Code
shpBRA = geopandas.read_file(data_path + '/country_shapefiles/BRA/BRA_adm1.shp')
cfEg2_BRA = xr.open_dataarray(results_path + '/BRA/cf_ERA_GWA2.nc')
cfEg3_BRA = xr.open_dataarray(results_path + '/BRA/cf_ERA_GWA3.nc')
cfMg2_BRA = xr.open_dataarray(results_path + '/BRA/cf_MERRA_GWA2.nc')
cfMg3_BRA = xr.open_dataarray(results_path + '/BRA/cf_MERRA_GWA3.nc')
max(cfEg2_BRA.max().values,cfMg2_BRA.max().values,cfEg3_BRA.max().values,cfMg3_BRA.max().values)
data = [cfEg2_BRA,cfEg3_BRA,cfMg2_BRA,cfMg3_BRA]
titles = ['ERA5 GWA2','ERA5 GWA3','MERRA-2 GWA2','MERRA-2 GWA3']
fig, axes = plt.subplots(2, 2,figsize=(9,5.4),gridspec_kw = {'wspace':0.1,'hspace':0.6})
for i in range(4):
axes[int(i/2),i%2].set_xlim(-75,-30)
axes[int(i/2),i%2].set_ylim(-25,5)
data[i].plot.imshow(ax=axes[int(i/2),i%2],norm=matplotlib.colors.PowerNorm(gamma=0.5),vmin=0,vmax=19)
shpBRA.boundary.plot(ax=axes[int(i/2),i%2],color='lightgray',linewidth=0.8)
axes[int(i/2),i%2].set_title(titles[i])
axes[int(i/2),i%2].set_xlabel('longitude')
axes[int(i/2),i%2].set_ylabel('latitude')
plt.savefig(results_path + '/plots/cf_BRA.png')
data = [cfEg2_BRA,cfEg3_BRA,cfMg2_BRA,cfMg3_BRA]
titles = ['ERA5 GWA2','ERA5 GWA3','MERRA-2 GWA2','MERRA-2 GWA3']
fig, axes = plt.subplots(2, 2,figsize=(9,5.4),gridspec_kw = {'wspace':0.1,'hspace':0.6})
for i in range(4):
axes[int(i/2),i%2].set_xlim(-75,-30)
axes[int(i/2),i%2].set_ylim(-25,5)
data[i].plot.imshow(ax=axes[int(i/2),i%2],vmin=0,vmax=2)
shpBRA.boundary.plot(ax=axes[int(i/2),i%2],color='lightgray',linewidth=0.8)
axes[int(i/2),i%2].set_title(titles[i])
axes[int(i/2),i%2].set_xlabel('Longitude')
axes[int(i/2),i%2].set_ylabel('Latitude')
plt.savefig(results_path + '/plots/cf_BRA.png',dpi=300)
data = [cfEg2_BRA,cfEg3_BRA,cfMg2_BRA,cfMg3_BRA]
titles = ['ERA5 GWA2','ERA5 GWA3','MERRA-2 GWA2','MERRA-2 GWA3']
fig, axes = plt.subplots(1, 4,figsize=(18,2.7),gridspec_kw = {'wspace':0.25})
for i in range(4):
axes[i].set_xlim(-75,-30)
axes[i].set_ylim(-25,5)
data[i].plot.imshow(ax=axes[i])
shpBRA.boundary.plot(ax=axes[i],color='lightgray',linewidth=0.8)
axes[i].set_title(titles[i])
plt.savefig(results_path + '/plots/cf_BRA.png')
fig, axes = plt.subplots(1, 4,figsize=(40,8),gridspec_kw = {'wspace':0.1})
for i in range(4):
data[i].plot.imshow(ax=axes[i])
axes[i].set_title(titles[i])
###Output
_____no_output_____
###Markdown
USA
###Code
shpUSA = geopandas.read_file(data_path + '/country_shapefiles/USA/cb_2018_us_state_500k.shp')
cfEg2_USA = xr.open_dataarray(results_path + '/USA/cf_ERA_GWA2.nc')
cfEg3_USA = xr.open_dataarray(results_path + '/USA/cf_ERA_GWA3.nc')
cfMg2_USA = xr.open_dataarray(results_path + '/USA/cf_MERRA_GWA2.nc')
cfMg3_USA = xr.open_dataarray(results_path + '/USA/cf_MERRA_GWA3.nc')
max(cfEg2_USA.max().values,cfMg2_USA.max().values,cfEg3_USA.max().values,cfMg3_USA.max().values)
data = [cfEg2_USA,cfEg3_USA,cfMg2_USA,cfMg3_USA]
titles = ['ERA5 GWA2','ERA5 GWA3','MERRA-2 GWA2','MERRA-2 GWA3']
fig, axes = plt.subplots(1, 4,figsize=(18,2.7),gridspec_kw = {'wspace':0.25})
for i in range(4):
axes[i].set_xlim(-125,-67)
axes[i].set_ylim(25,50)
data[i].plot.imshow(ax=axes[i])
shpUSA.boundary.plot(ax=axes[i],color='lightgray',linewidth=0.8)
axes[i].set_title(titles[i])
data = [cfEg2_USA,cfEg3_USA,cfMg2_USA,cfMg3_USA]
titles = ['ERA5 GWA2','ERA5 GWA3','MERRA-2 GWA2','MERRA-2 GWA3']
fig, axes = plt.subplots(2, 2,figsize=(13,5.4),gridspec_kw = {'wspace':0.15,'hspace':0.5})
for i in range(4):
axes[int(i/2),i%2].set_xlim(-125,-67)
axes[int(i/2),i%2].set_ylim(25,50)
data[i].plot.imshow(ax=axes[int(i/2),i%2])
shpUSA.boundary.plot(ax=axes[int(i/2),i%2],color='lightgray',linewidth=0.8)
axes[int(i/2),i%2].set_title(titles[i])
plt.savefig(results_path + '/plots/cf_USA.png')
data = [cfEg2_USA,cfEg3_USA,cfMg2_USA,cfMg3_USA]
titles = ['ERA5 GWA2','ERA5 GWA3','MERRA-2 GWA2','MERRA-2 GWA3']
fig, axes = plt.subplots(2, 2,figsize=(13,5.4),gridspec_kw = {'wspace':0.15,'hspace':0.5})
for i in range(4):
axes[int(i/2),i%2].set_xlim(-125,-67)
axes[int(i/2),i%2].set_ylim(25,50)
data[i].plot.imshow(ax=axes[int(i/2),i%2],norm=matplotlib.colors.PowerNorm(gamma=0.5),vmin=0,vmax=13)
shpUSA.boundary.plot(ax=axes[int(i/2),i%2],color='lightgray',linewidth=0.8)
axes[int(i/2),i%2].set_title(titles[i])
axes[int(i/2),i%2].set_xlabel('longitude')
axes[int(i/2),i%2].set_ylabel('latitude')
plt.savefig(results_path + '/plots/cf_USA.png')
data = [cfEg2_USA,cfEg3_USA,cfMg2_USA,cfMg3_USA]
titles = ['ERA5 GWA2','ERA5 GWA3','MERRA-2 GWA2','MERRA-2 GWA3']
fig, axes = plt.subplots(2, 2,figsize=(13,5.4),gridspec_kw = {'wspace':0.15,'hspace':0.5})
for i in range(4):
axes[int(i/2),i%2].set_xlim(-125,-67)
axes[int(i/2),i%2].set_ylim(25,50)
data[i].plot.imshow(ax=axes[int(i/2),i%2],vmin=0,vmax=2)
shpUSA.boundary.plot(ax=axes[int(i/2),i%2],color='lightgray',linewidth=0.8)
axes[int(i/2),i%2].set_title(titles[i])
axes[int(i/2),i%2].set_xlabel('Longitude')
axes[int(i/2),i%2].set_ylabel('Latitude')
plt.savefig(results_path + '/plots/cf_USA.png',dpi=300)
fig, axes = plt.subplots(1, 4,figsize=(40,8),gridspec_kw = {'wspace':0.1})
for i in range(4):
data[i].plot.imshow(ax=axes[i])
axes[i].set_title(titles[i])
###Output
_____no_output_____
###Markdown
compare GWA2 and GWA3 cfs
###Code
def dif_g2_g3(gwa2,gwa3,ds):
if ds=='E':
gwa3ig2 = gwa3.interp(longitude = gwa2.longitude.values,
latitude = gwa2.latitude.values,
method='nearest')
else:
gwa3ig2 = gwa3.interp(lon = gwa2.lon.values,
lat = gwa2.lat.values,
method='nearest')
difg2g3 = (gwa3ig2-gwa2).values.flatten()
return(difg2g3[~np.isnan(difg2g3)])
def plot_difcf_country(Egwa2,Egwa3,Mgwa2,Mgwa3,ax,title):
difE = dif_g2_g3(Egwa2,Egwa3,'E')
difM = dif_g2_g3(Mgwa2,Mgwa3,'M')
difs = pd.DataFrame({'Dataset':['ERA5']*len(difE) + ['MERRA-2']*len(difM),
'correction factor GWA3 - correction factor GWA2':np.append(difE,difM)})
sns.boxplot(x='Dataset',y='correction factor GWA3 - correction factor GWA2',data=difs,ax=ax)
ax.set_title(title)
fig, axes = plt.subplots(1, 4,figsize=(12,6),gridspec_kw = {'wspace':0.5})
plot_difcf_country(cfEg2_USA,cfEg3_USA,cfMg2_USA,cfMg3_USA,axes[0],'USA')
plot_difcf_country(cfEg2_BRA,cfEg3_BRA,cfMg2_BRA,cfMg3_BRA,axes[1],'Brazil')
plot_difcf_country(cfEg2_NZ,cfEg3_NZ,cfMg2_NZ,cfMg3_NZ,axes[2],'New Zealand')
plot_difcf_country(cfEg2_ZAF,cfEg3_ZAF,cfMg2_ZAF,cfMg3_ZAF,axes[3],'South Africa')
plt.savefig(results_path + '/plots/cf_MBE_GWA2_GWA3.png')
###Output
_____no_output_____ |
00-gpu-testing-examples/mnist_sample_example.ipynb | ###Markdown
Reference: * https://towardsdatascience.com/building-a-deep-learning-model-using-keras-1548ca149d37?gi=b9fc6d293b7 * https://www.tensorflow.org/tutorials
###Code
import tensorflow as tf
import warnings
from distutils.version import LooseVersion
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.12'), 'Please use TensorFlow version 1.12 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
mnist = tf.keras.datasets.mnist
# Load mnist data set
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train[0,10]
# convert the values from RGB integer to 0-1 range for better training
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train[0,10]
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Dropout
# use a simple feed-forward model built with Keras
model = Sequential()
model.add(Flatten(input_shape=(28, 28)))
model.add(Dense(units=512, activation=tf.nn.relu))
model.add(Dropout(rate=0.2))
model.add(Dense(units=10, activation=tf.nn.softmax))
# model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
layers = model.layers
# display object attributes
# dir(layers[0])
print("Model:")
print("input_shape, output_shape")
for layer in layers:
print(layer.input_shape,',',layer.output_shape)
# use model summary function
model.summary()
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
loss, accuracy = model.evaluate(x_test, y_test)
print("validation accuracy: ", accuracy)
test_list = list([1,2])
def max_list_index(l):
'''
this function returns the index and value of the max element from the given list.
example:
test_list = [4, 3, 10]
idx, value = max_list_index(test_list)
idx = 2 # since the max element 10 has index 2
value = 10 # 10 is the max element from the given list
'''
# print(test_list)
# l.__getitem__(1)
idx = max(range(len(l)), key=l.__getitem__ )
return idx, l[idx]
idx, value = max_list_index(test_list)
print("index of max element is:", idx)
print("value of max element is:", value)
import matplotlib.pyplot as plt
# import matplotlib.image as mpimg
import numpy as np
%matplotlib inline
# make slice so that it is a patch with one number
# test_images = x_test[0:1]
x_test_number = x_test.shape[0]
image_index = np.random.randint(0, x_test_number)
test_images = x_test[image_index:image_index + 1]
test_images = (test_images * 255) #.astype(np.uint8)
# print(test_images)
print(test_images.shape)
# show the first image from the batch test_images
plt.imshow(test_images[0], cmap="gray", interpolation='nearest')
# Make prediction
pred_prob = model.predict(test_images)
print(pred_prob.tolist()[0])
number, confidence = max_list_index(pred_prob.tolist()[0])
print("Predict as: ", number, " with conficdence: ", confidence)
###Output
[0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
Predict as:  1  with confidence:  1.0
|
Sequence Models/Dinosaurus Island Character level language model final v3.ipynb | ###Markdown
Character level language model - Dinosaurus land. Welcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely! Luckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this [dataset](dinos.txt). (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath! By completing this assignment you will learn: - How to store text data for processing using an RNN - How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit - How to build a character-level text generation recurrent neural network - Why clipping the gradients is important. We will begin by loading in some functions that we have provided for you in `rnn_utils`. Specifically, you have access to functions such as `rnn_forward` and `rnn_backward` which are equivalent to those you've implemented in the previous assignment.
###Code
import numpy as np
from utils import *
import random
###Output
_____no_output_____
###Markdown
1 - Problem Statement 1.1 - Dataset and Preprocessing. Run the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.
###Code
data = open('dinos.txt', 'r').read()
data= data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
###Output
There are 19909 total characters and 27 unique characters in your data.
###Markdown
The characters are a-z (26 characters) plus the "\n" (or newline character), which in this assignment plays a role similar to the `<EOS>` (or "End of sentence") token we had discussed in lecture, only here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. We also create a second python dictionary that maps each index back to the corresponding character. This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. Below, `char_to_ix` and `ix_to_char` are the python dictionaries.
###Code
char_to_ix = { ch:i for i,ch in enumerate(sorted(chars)) }
ix_to_char = { i:ch for i,ch in enumerate(sorted(chars)) }
print(ix_to_char)
###Output
{0: '\n', 1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e', 6: 'f', 7: 'g', 8: 'h', 9: 'i', 10: 'j', 11: 'k', 12: 'l', 13: 'm', 14: 'n', 15: 'o', 16: 'p', 17: 'q', 18: 'r', 19: 's', 20: 't', 21: 'u', 22: 'v', 23: 'w', 24: 'x', 25: 'y', 26: 'z'}
###Markdown
1.2 - Overview of the model. Your model will have the following structure: - Initialize parameters - Run the optimization loop - Forward propagation to compute the loss function - Backward propagation to compute the gradients with respect to the loss function - Clip the gradients to avoid exploding gradients - Using the gradients, update your parameters with the gradient descent update rule - Return the learned parameters. **Figure 1**: Recurrent Neural Network, similar to what you had built in the previous notebook "Building a RNN - Step by Step". At each time-step, the RNN tries to predict what is the next character given the previous characters. The dataset $X = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is a list of characters in the training set, while $Y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$ is such that at every time-step $t$, we have $y^{\langle t \rangle} = x^{\langle t+1 \rangle}$. 2 - Building blocks of the model. In this part, you will build two important blocks of the overall model: - Gradient clipping: to avoid exploding gradients - Sampling: a technique used to generate characters. You will then apply these two functions to build the model. 2.1 - Clipping the gradients in the optimization loop. In this section you will implement the `clip` function that you will call inside of your optimization loop. Recall that your overall loop structure usually consists of a forward pass, a cost computation, a backward pass, and a parameter update. Before updating the parameters, you will perform gradient clipping when needed to make sure that your gradients are not "exploding," meaning taking on overly large values. In the exercise below, you will implement a function `clip` that takes in a dictionary of gradients and returns a clipped version of gradients if needed. There are different ways to clip gradients; we will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N]. More generally, you will provide a `maxValue` (say 10). In this example, if any component of the gradient vector is greater than 10, it would be set to 10; and if any component of the gradient vector is less than -10, it would be set to -10. If it is between -10 and 10, it is left alone. **Figure 2**: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into slight "exploding gradient" problems. **Exercise**: Implement the function below to return the clipped gradients of your dictionary `gradients`. Your function takes in a maximum threshold and returns the clipped versions of your gradients. You can check out this [hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.clip.html) for examples of how to clip in numpy. You will need to use the argument `out = ...`.
###Code
### GRADED FUNCTION: clip
def clip(gradients, maxValue):
'''
Clips the gradients' values between minimum and maximum.
Arguments:
gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue
Returns:
gradients -- a dictionary with the clipped gradients.
'''
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']
### START CODE HERE ###
# clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)
for gradient in [dWax, dWaa, dWya, db, dby]:
gradient = np.clip(gradient,-maxValue,maxValue,out=gradient)
### END CODE HERE ###
gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}
return gradients
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, 10)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
###Output
gradients["dWaa"][1][2] = 10.0
gradients["dWax"][3][1] = -10.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 10.]
gradients["dby"][1] = [ 8.45833407]
###Markdown
** Expected output:** **gradients["dWaa"][1][2] ** 10.0 **gradients["dWax"][3][1]** -10.0 **gradients["dWya"][1][2]** 0.29713815361 **gradients["db"][4]** [ 10.] **gradients["dby"][1]** [ 8.45833407] 2.2 - Sampling. Now assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below: **Figure 3**: In this picture, we assume the model is already trained. We pass in $x^{\langle 1\rangle} = \vec{0}$ at the first time step, and have the network then sample one character at a time. **Exercise**: Implement the `sample` function below to sample characters. You need to carry out 4 steps: - **Step 1**: Pass the network the first "dummy" input $x^{\langle 1 \rangle} = \vec{0}$ (the vector of zeros). This is the default input before we've generated any characters. We also set $a^{\langle 0 \rangle} = \vec{0}$ - **Step 2**: Run one step of forward propagation to get $a^{\langle 1 \rangle}$ and $\hat{y}^{\langle 1 \rangle}$. Here are the equations:
$$ a^{\langle t+1 \rangle} = \tanh(W_{ax} x^{\langle t \rangle } + W_{aa} a^{\langle t \rangle } + b)\tag{1}$$
$$ z^{\langle t + 1 \rangle } = W_{ya} a^{\langle t + 1 \rangle } + b_y \tag{2}$$
$$ \hat{y}^{\langle t+1 \rangle } = softmax(z^{\langle t + 1 \rangle })\tag{3}$$
Note that $\hat{y}^{\langle t+1 \rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1). $\hat{y}^{\langle t+1 \rangle}_i$ represents the probability that the character indexed by "i" is the next character. We have provided a `softmax()` function that you can use. - **Step 3**: Carry out sampling: Pick the next character's index according to the probability distribution specified by $\hat{y}^{\langle t+1 \rangle }$. This means that if $\hat{y}^{\langle t+1 \rangle }_i = 0.16$, you will pick the index "i" with 16% probability. To implement it, you can use [`np.random.choice`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.choice.html). Here is an example of how to use `np.random.choice()`:
```python
np.random.seed(0)
p = np.array([0.1, 0.0, 0.7, 0.2])
index = np.random.choice([0, 1, 2, 3], p = p.ravel())
```
This means that you will pick the `index` according to the distribution: $P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$. - **Step 4**: The last step to implement in `sample()` is to overwrite the variable `x`, which currently stores $x^{\langle t \rangle }$, with the value of $x^{\langle t + 1 \rangle }$. You will represent $x^{\langle t + 1 \rangle }$ by creating a one-hot vector corresponding to the character you've chosen as your prediction. You will then forward propagate $x^{\langle t + 1 \rangle }$ in Step 1 and keep repeating the process until you get a "\n" character, indicating you've reached the end of the dinosaur name.
###Code
# GRADED FUNCTION: sample
def sample(parameters, char_to_ix, seed):
"""
Sample a sequence of characters according to a sequence of probability distributions output of the RNN
Arguments:
parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
char_to_ix -- python dictionary mapping each character to an index.
seed -- used for grading purposes. Do not worry about it.
Returns:
indices -- a list of length n containing the indices of the sampled characters.
"""
# Retrieve parameters and relevant shapes from "parameters" dictionary
Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
vocab_size = by.shape[0]
n_a = Waa.shape[1]
### START CODE HERE ###
# Step 1: Create the one-hot vector x for the first character (initializing the sequence generation). (≈1 line)
x = np.zeros((vocab_size,1))
# Step 1': Initialize a_prev as zeros (≈1 line)
a_prev = np.zeros((n_a,1))
# Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)
indices = []
# Idx is a flag to detect a newline character, we initialize it to -1
idx = -1
# Loop over time-steps t. At each time-step, sample a character from a probability distribution and append
# its index to "indices". We'll stop if we reach 50 characters (which should be very unlikely with a well
# trained model), which helps debugging and prevents entering an infinite loop.
counter = 0
newline_character = char_to_ix['\n']
while (idx != newline_character and counter != 50):
# Step 2: Forward propagate x using the equations (1), (2) and (3)
a = np.tanh(np.dot(Wax,x)+np.dot(Waa,a_prev)+b)
z = np.dot(Wya,a)+by
y = softmax(z)
# for grading purposes
np.random.seed(counter+seed)
# Step 3: Sample the index of a character within the vocabulary from the probability distribution y
idx = np.random.choice(list(range(vocab_size)), p=y.ravel())
# Append the index to "indices"
indices.append(idx)
# Step 4: Overwrite the input character as the one corresponding to the sampled index.
x = np.zeros((vocab_size, 1))
x[idx] = 1
# Update "a_prev" to be "a"
a_prev = a
# for grading purposes
seed += 1
counter +=1
### END CODE HERE ###
if (counter == 50):
indices.append(char_to_ix['\n'])
return indices
np.random.seed(2)
_, n_a = 20, 100
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
indices = sample(parameters, char_to_ix, 0)
print("Sampling:")
print("list of sampled indices:", indices)
print("list of sampled characters:", [ix_to_char[i] for i in indices])
###Output
Sampling:
list of sampled indices: [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 5, 6, 12, 25, 0, 0]
list of sampled characters: ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'e', 'f', 'l', 'y', '\n', '\n']
###Markdown
** Expected output:** **list of sampled indices:** [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 5, 6, 12, 25, 0, 0] **list of sampled characters:** ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'e', 'f', 'l', 'y', '\n', '\n'] 3 - Building the language model. It is time to build the character-level language model for text generation. 3.1 - Gradient descent. In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. As a reminder, here are the steps of a common optimization loop for an RNN: - Forward propagate through the RNN to compute the loss - Backward propagate through time to compute the gradients of the loss with respect to the parameters - Clip the gradients if necessary - Update your parameters using gradient descent. **Exercise**: Implement this optimization process (one step of stochastic gradient descent). We provide you with the following functions:
```python
def rnn_forward(X, Y, a_prev, parameters):
    """ Performs the forward propagation through the RNN and computes the cross-entropy loss.
    It returns the loss' value as well as a "cache" storing values to be used in the backpropagation."""
    ....
    return loss, cache

def rnn_backward(X, Y, parameters, cache):
    """ Performs the backward propagation through time to compute the gradients of the loss with respect
    to the parameters. It returns also all the hidden states."""
    ...
    return gradients, a

def update_parameters(parameters, gradients, learning_rate):
    """ Updates parameters using the Gradient Descent Update Rule."""
    ...
    return parameters
```
###Code
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
"""
Execute one step of the optimization to train the model.
Arguments:
X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
Y -- list of integers, exactly the same as X but shifted one index to the left.
a_prev -- previous hidden state.
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
b -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
learning_rate -- learning rate for the model.
Returns:
loss -- value of the loss function (cross-entropy)
gradients -- python dictionary containing:
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
db -- Gradients of bias vector, of shape (n_a, 1)
dby -- Gradients of output bias vector, of shape (n_y, 1)
a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
"""
### START CODE HERE ###
# Forward propagate through time (≈1 line)
loss, cache = rnn_forward(X,Y,a_prev,parameters)
# Backpropagate through time (≈1 line)
gradients, a = rnn_backward(X,Y,parameters,cache)
# Clip your gradients between -5 (min) and 5 (max) (≈1 line)
gradients = clip(gradients, 5)
# Update parameters (≈1 line)
parameters = update_parameters(parameters, gradients, learning_rate)
### END CODE HERE ###
return loss, gradients, a[len(X)-1]
np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])
###Output
Loss = 126.503975722
gradients["dWaa"][1][2] = 0.194709315347
np.argmax(gradients["dWax"]) = 93
gradients["dWya"][1][2] = -0.007773876032
gradients["db"][4] = [-0.06809825]
gradients["dby"][1] = [ 0.01538192]
a_last[4] = [-1.]
###Markdown
** Expected output:** **Loss ** 126.503975722 **gradients["dWaa"][1][2]** 0.194709315347 **np.argmax(gradients["dWax"])** 93 **gradients["dWya"][1][2]** -0.007773876032 **gradients["db"][4]** [-0.06809825] **gradients["dby"][1]** [ 0.01538192] **a_last[4]** [-1.] 3.2 - Training the model. Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example. Every 100 steps of stochastic gradient descent, you will sample 10 randomly chosen names to see how the algorithm is doing. Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order. **Exercise**: Follow the instructions and implement `model()`. When `examples[index]` contains one dinosaur name (string), to create an example (X, Y), you can use this:
```python
    index = j % len(examples)
    X = [None] + [char_to_ix[ch] for ch in examples[index]]
    Y = X[1:] + [char_to_ix["\n"]]
```
Note that we use: `index= j % len(examples)`, where `j = 1....num_iterations`, to make sure that `examples[index]` is always a valid statement (`index` is smaller than `len(examples)`). The first entry of `X` being `None` will be interpreted by `rnn_forward()` as setting $x^{\langle 0 \rangle} = \vec{0}$. Further, this ensures that `Y` is equal to `X` but shifted one step to the left, and with an additional "\n" appended to signify the end of the dinosaur name.
###Code
# GRADED FUNCTION: model
def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):
"""
Trains the model and generates dinosaur names.
Arguments:
data -- text corpus
ix_to_char -- dictionary that maps the index to a character
char_to_ix -- dictionary that maps a character to an index
num_iterations -- number of iterations to train the model for
n_a -- number of units of the RNN cell
dino_names -- number of dinosaur names you want to sample at each iteration.
vocab_size -- number of unique characters found in the text, size of the vocabulary
Returns:
parameters -- learned parameters
"""
# Retrieve n_x and n_y from vocab_size
n_x, n_y = vocab_size, vocab_size
# Initialize parameters
parameters = initialize_parameters(n_a, n_x, n_y)
# Initialize loss (this is required because we want to smooth our loss, don't worry about it)
loss = get_initial_loss(vocab_size, dino_names)
# Build list of all dinosaur names (training examples).
with open("dinos.txt") as f:
examples = f.readlines()
examples = [x.lower().strip() for x in examples]
# Shuffle list of all dinosaur names
np.random.seed(0)
np.random.shuffle(examples)
# Initialize the hidden state of your LSTM
a_prev = np.zeros((n_a, 1))
# Optimization loop
for j in range(num_iterations):
### START CODE HERE ###
# Use the hint above to define one training example (X,Y) (≈ 2 lines)
index = j % len(examples)
X = [None] + [char_to_ix[ch] for ch in examples[index]]
Y = X[1:] + [char_to_ix["\n"]]
# Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
# Choose a learning rate of 0.01
curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
### END CODE HERE ###
# Use a latency trick to keep the loss smooth. It happens here to accelerate the training.
loss = smooth(loss, curr_loss)
# Every 2000 Iteration, generate "n" characters thanks to sample() to check if the model is learning properly
if j % 2000 == 0:
print('Iteration: %d, Loss: %f' % (j, loss) + '\n')
# The number of dinosaur names to print
seed = 0
for name in range(dino_names):
# Sample indices and print them
sampled_indices = sample(parameters, char_to_ix, seed)
print_sample(sampled_indices, ix_to_char)
seed += 1 # To get the same result for grading purposed, increment the seed by one.
print('\n')
return parameters
###Output
_____no_output_____
###Markdown
Run the following cell. You should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.
###Code
parameters = model(data, ix_to_char, char_to_ix)
###Output
Iteration: 0, Loss: 23.087336
Nkzxwtdmfqoeyhsqwasjkjvu
Kneb
Kzxwtdmfqoeyhsqwasjkjvu
Neb
Zxwtdmfqoeyhsqwasjkjvu
Eb
Xwtdmfqoeyhsqwasjkjvu
Iteration: 2000, Loss: 27.884160
Liusskeomnolxeros
Hmdaairus
Hytroligoraurus
Lecalosapaus
Xusicikoraurus
Abalpsamantisaurus
Tpraneronxeros
Iteration: 4000, Loss: 25.901815
Mivrosaurus
Inee
Ivtroplisaurus
Mbaaisaurus
Wusichisaurus
Cabaselachus
Toraperlethosdarenitochusthiamamumamaon
Iteration: 6000, Loss: 24.608779
Onwusceomosaurus
Lieeaerosaurus
Lxussaurus
Oma
Xusteonosaurus
Eeahosaurus
Toreonosaurus
Iteration: 8000, Loss: 24.070350
Onxusichepriuon
Kilabersaurus
Lutrodon
Omaaerosaurus
Xutrcheps
Edaksoje
Trodiktonus
Iteration: 10000, Loss: 23.844446
Onyusaurus
Klecalosaurus
Lustodon
Ola
Xusodonia
Eeaeosaurus
Troceosaurus
Iteration: 12000, Loss: 23.291971
Onyxosaurus
Kica
Lustrepiosaurus
Olaagrraiansaurus
Yuspangosaurus
Eealosaurus
Trognesaurus
Iteration: 14000, Loss: 23.382339
Meutromodromurus
Inda
Iutroinatorsaurus
Maca
Yusteratoptititan
Ca
Troclosaurus
Iteration: 16000, Loss: 23.288447
Meuspsangosaurus
Ingaa
Iusosaurus
Macalosaurus
Yushanis
Daalosaurus
Trpandon
Iteration: 18000, Loss: 22.823526
Phytrolonhonyg
Mela
Mustrerasaurus
Peg
Ytronorosaurus
Ehalosaurus
Trolomeehus
Iteration: 20000, Loss: 23.041871
Nousmofonosaurus
Loma
Lytrognatiasaurus
Ngaa
Ytroenetiaudostarmilus
Eiafosaurus
Troenchulunosaurus
Iteration: 22000, Loss: 22.728849
Piutyrangosaurus
Midaa
Myroranisaurus
Pedadosaurus
Ytrodon
Eiadosaurus
Trodoniomusitocorces
Iteration: 24000, Loss: 22.683403
Meutromeisaurus
Indeceratlapsaurus
Jurosaurus
Ndaa
Yusicheropterus
Eiaeropectus
Trodonasaurus
Iteration: 26000, Loss: 22.554523
Phyusaurus
Liceceron
Lyusichenodylus
Pegahus
Yustenhtonthosaurus
Elagosaurus
Trodontonsaurus
Iteration: 28000, Loss: 22.484472
Onyutimaerihus
Koia
Lytusaurus
Ola
Ytroheltorus
Eiadosaurus
Trofiashates
Iteration: 30000, Loss: 22.774404
Phytys
Lica
Lysus
Pacalosaurus
Ytrochisaurus
Eiacosaurus
Trochesaurus
Iteration: 32000, Loss: 22.209473
Mawusaurus
Jica
Lustoia
Macaisaurus
Yusolenqtesaurus
Eeaeosaurus
Trnanatrax
Iteration: 34000, Loss: 22.396744
Mavptokekus
Ilabaisaurus
Itosaurus
Macaesaurus
Yrosaurus
Eiaeosaurus
Trodon
###Markdown
 ConclusionYou can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like `maconucon`, `marloralus` and `macingsersaurus`. Your model hopefully also learned that dinosaur names tend to end in `saurus`, `don`, `aura`, `tor`, etc. If your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, `dromaeosauroides` is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest! This assignment used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the English language requires a much bigger dataset, usually needs much more computation, and could run for many hours on GPUs. We ran our dinosaur name model for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus! 4 - Writing like ShakespeareThe rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative. A similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of dinosaur names, you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer-term dependencies that span many characters in the text--e.g., a character appearing somewhere in a sequence can influence what a different character much later in the sequence should be. These long-term dependencies were less important with dinosaur names, since the names were quite short. Let's become poets! We have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes.
###Code
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import io
###Output
Using TensorFlow backend.
###Markdown
 To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called [*"The Sonnets"*](shakespeare.txt). Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run `generate_output`, which will prompt you for an input (fewer than 40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try "Forsooth this maketh no sense " (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well.
###Code
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])
# Run this cell to try with different inputs without having to re-train the model
generate_output()
###Output
Write the beginning of your poem, the Shakespeare machine will complete it. Your input is: she was lover
Here is your poem:
she was lover,
his hast outhing thregen so reverle deange,
and tonmen leps perentore his babe,
and thangs shise in withen my by beling,
in grein my soncues of twyrite's falber,
spethenss thereor new i rend i thy shall.
in a ever as worst your my foerered,
ar habver had in newen hery dounged,
switt is beited wintt forten be thas hele,
ner thine thad direfoled to stans me the bear.
ser to bignore lasty be onde |
Kata09/kata09.ipynb | ###Markdown
 Exercise 1
###Code
# Function to read 3 fuel tanks and display the average
def Promedio(a, b , c):
return (a + b + c) /3
def Reporte(tanque1, tanque2, tanque3):
# promedio = Promedio(tanque1, tanque2 , tanque3)
return print(
f"""Fuel Report:
Total Average: {Promedio(tanque1, tanque2 , tanque3)}%
Main tank: {tanque1}%
External tank: {tanque2}%
Hydrogen tank: {tanque3}%
"""
)
Reporte(88, 76, 70)
###Output
Fuel Report:
Total Average: 78.0%
Main tank: 88%
External tank: 76%
Hydrogen tank: 70%
###Markdown
 Exercise 2: Working with keyword arguments
###Code
# Function for a precise mission report. Considers pre-launch time, flight time, destination, external tank and internal tank
def missionReport(prelanzamiento, timeFlight, destination, externalTank, mainTank):
return f"""
Mission to: {destination}
Total travel time: {prelanzamiento + timeFlight} minutes
Total fuel left: {externalTank + mainTank} gallons
"""
def missionReport1(destination, *minutes, **fuel_reservoirs):
return f"""
Mission to: {destination}
Total travel time: {sum(minutes)} minutes
Total fuel left: {sum(fuel_reservoirs.values())}
"""
def missionReport2(destination, *minutes, **fuel_reservoirs):
main_report = f"""
Mission to {destination}
Total travel time: {sum(minutes)} minutes
Total fuel left: {sum(fuel_reservoirs.values())}
"""
for tank_name, gallons in fuel_reservoirs.items():
main_report += f"{tank_name} tank --> {gallons} gallons left\n"
return main_report
print(missionReport2("Moon", 8, 11, 55, main=300000, external=200000,interno=1))
# print(missionReport1("Moon", 10, 15, 51, 24, main=300000, external=200000, interno=1))
###Output
Mission to Moon
Total travel time: 74 minutes
Total fuel left: dict_keys(['main', 'external', 'interno'])
|
code/playgrounds/fancy_numpy_stuff.ipynb | ###Markdown
How to take the k biggest values along a given axis for a given matrix:
###Code
import numpy as np

x = np.random.rand(5, 5)
print(x)
a = np.argsort(x, axis=1)
# argsort gives ascending order so we have to reverse the order
a = a[:, -1::-1]
print(a)
# a contains only the indices along the second axis (the columns) but for integer indexing we also need the row indices:
# they will be broadcasted so we need it to be 2D array
rows = np.arange(5)[:, np.newaxis]
print(rows)
print(x[rows, a[:, 0:3]])
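# Editor's note (addition): if only the k largest values are needed and their relative
# order does not matter, np.argpartition avoids the full sort done by np.argsort.
# k = 3 below is an illustrative choice matching the a[:, 0:3] slice above.
top3_unordered = x[rows, np.argpartition(x, -3, axis=1)[:, -3:]]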
###Output
[[0.60080737 0.67869159 0.2950931 0.22690127 0.43422898]
[0.82156904 0.72417623 0.60568798 0.46421807 0.78133806]
[0.44970497 0.76147525 0.64039773 0.02456953 0.30827886]
[0.56624744 0.77421173 0.7109107 0.19469638 0.00313829]
[0.75471545 0.90058493 0.74241208 0.88063427 0.55420083]]
[[1 0 4 2 3]
[0 4 1 2 3]
[1 2 0 4 3]
[1 2 0 3 4]
[1 3 0 2 4]]
[[0]
[1]
[2]
[3]
[4]]
[[0.67869159 0.60080737 0.43422898]
[0.82156904 0.78133806 0.72417623]
[0.76147525 0.64039773 0.44970497]
[0.77421173 0.7109107 0.56624744]
[0.90058493 0.88063427 0.75471545]]
|
notebooks/MakingPrettierGraphs.ipynb | ###Markdown
Making Prettier Graphs---[Author] ERGM--- If you wish to make prettier graphs please read onwards:
###Code
#1st off let's load the various libraries, etc. that we need:
#this command tells Jupyter to plot figures inline with the text
%matplotlib inline
# import pyfolding, the pyfolding models and ising models
import pyfolding
from pyfolding import *
# import the package for plotting, call it plt
import matplotlib.pyplot as plt
# import numpy as well
import numpy as np
###Output
_____no_output_____
###Markdown
--- Now, we need to load some data to analyse.In this notebook I will be using data from the paper below:```Mapping the Topography of a Protein Energy LandscapeHutton, R. D., Wilkinson, J., Faccin, M., Sivertsson, E. M., Pelizzola, A., Lowe, A. R., Bruscolini, P. & Itzhaki, L. S.J Am Chem Soc (2015) 127, 46: 14610-25```[http://pubs.acs.org/doi/10.1021/jacs.5b07370] Considerations1. Kinetics data should be entered as rate constants ( *k* ) and NOT as the `log` of the rate constant.2. There can be no "empty" cells between data points in the .csv file for kinetics data.
###Code
# start by loading a data set
# arguments are "path", "filename"
# loading the data - The kinetics of each protein is in one .csv as per the example .csv above
pth = "/Users/ergm/Dropbox/AlanLoweCollaboration/Datasets/GankyrinFolding"
GankyrinChevron = pyfolding.read_kinetic_data(pth,"GankyrinWTChevron.csv")
# let's give this dataset a good name
GankyrinChevron.ID = 'Gankyrin WT'
###Output
_____no_output_____
###Markdown
 --- Let's plot the data.
###Code
# now plot the chevron (kinetics) data
GankyrinChevron.plot()
# ...or, custom plotting although this is more advanced!
k1_x, k1_y = GankyrinChevron.chevron('k1') # defines the slower chevron rates
k2_x, k2_y = GankyrinChevron.chevron('k2') # defines the faster chevron rates
plt.figure(figsize=(8,5)) # makes the figure 8 by 5 inches
plt.plot(k1_x, k1_y, 'ro', markersize=8) # red filled circles, sized '8'
plt.plot(k2_x, k2_y, 'ko', markersize=8) # black filled circles, sized '8'
plt.rc('xtick', labelsize=14) # fontsize of the x tick labels
plt.rc('ytick', labelsize=14) # fontsize of the y tick labels
plt.ylim([-4, 5]) # y axis from -4 to 5
plt.xlim([0, 8]) # x axis from 0 to 8
plt.grid(False) # no grid on the graph
plt.xlabel('[GdmHCl] (M)', fontsize=14) # x axis title with fontsize
plt.ylabel(r'$\ln k_{obs}$ $(s^{-1})$', fontsize=14) # y axis title with fontsize
plt.title('Gankyrin folding kinetics', fontsize=14) # Plot title
plt.show()
###Output
_____no_output_____ |
4_ImplementedModels_2_keras_DRAFT.ipynb | ###Markdown
 4. Definition and Fitting of the ModelTime Series Forecasting (LSTM) We divide the data into train and test sets. In total we have 3 years of data. We'll take the first two years for training and leave the third one for testing. That is the same as taking the first 2/3 of the data to train and the remaining 1/3 to test. 4.1 Test and Training Sets
###Code
values = full.values
train_hours = int(2*full.shape[0]/3)
train = values[:train_hours, :]
test = values[train_hours:, :]
#Separate them into Variables and Targets
n_obs = (n_hours) * n_features
train_X, train_y = train[:, :n_obs], train[:, -24*n_features:]
test_X, test_y = test[:, :n_obs], test[:, -24*n_features:]
# reshape input to be 3D [samples, timesteps, features]
train_X = train_X.reshape((train_X.shape[0], (n_hours), n_features))
test_X = test_X.reshape((test_X.shape[0], (n_hours), n_features))
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)
###Output
(17504, 24, 10) (17504, 240) (8753, 24, 10) (8753, 240)
###Markdown
4.2 Defining the Models 4.2.1 Model 1
###Code
# define model
model_1 = Sequential()
model_1.add(Bidirectional(LSTM(200, activation='relu', return_sequences = True), input_shape=(train_X.shape[1], train_X.shape[2])))
model_1.add(LSTM(200, activation='relu', input_shape=(train_X.shape[1], train_X.shape[2])))
model_1.add(Dense(120))
model_1.add(Dense(240))
model_1.compile(optimizer='adam', loss='mse')
# fit model
history = model_1.fit(train_X, train_y, epochs=100, batch_size=24, validation_data=(test_X, test_y), verbose=1, shuffle=False)
# plot history
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()
###Output
_____no_output_____
###Markdown
 We now run the test set through the network to see how it does when predicting.
###Code
# make a prediction
ypred = model_1.predict(test_X)
#print('ypred.shape=',ypred.shape)
test_X = test_X.reshape((test_X.shape[0], (n_hours)*n_features))
#print('test_X.shape=',test_X.shape)
# invert scaling for forecast
inv_ypred = np.concatenate((test_X[:, :n_hours*n_features], ypred), axis=1)
full_predicted = scaler.inverse_transform(inv_ypred )
y_predicted = full_predicted[:,-n_hours*n_features:]
#print('inv_ypred.shape=',inv_ypred.shape)
# invert scaling for actual
test_y = test_y.reshape((len(test_y), n_hours*n_features))
inv_y = np.concatenate((test_X[:, -n_hours*n_features:], test_y), axis=1)
full_actual = scaler.inverse_transform(inv_y)
y_actual = full_actual[:,-n_hours*n_features:]
#print('inv_y.shape=',inv_y.shape)
# calculate RMSE
rmse = np.sqrt(mean_squared_error(y_predicted, y_actual))
print('Test RMSE: %.3f' % rmse)
predicted = pd.DataFrame(data=full_predicted, columns=full.columns)
actual = pd.DataFrame(data=full_actual, columns=full.columns)
def extract_variable_df(actual_name, in_dataframe):
out_dataframe = pd.DataFrame()
for i in range(24):
if i == 0:
out_dataframe['{}_t{}'.format(actual_name, i)] = in_dataframe['{}(t)'.format(actual_name)]
else:
out_dataframe['{}_t{}'.format(actual_name, i)] = in_dataframe['{}(t+{})'.format(actual_name, i)]
return out_dataframe
pred_electricity = extract_variable_df('electricity', predicted)
actual_electricity = extract_variable_df('electricity', actual)
Predicted = go.Scatter(x=np.arange(pred_electricity.shape[1]), y=pred_electricity.values[48], opacity = 1, name = 'Forecasted Value', line=dict(color='royalblue'), yaxis='y')
Actual = go.Scatter(x=np.arange(actual_electricity.shape[1]), y=actual_electricity.values[48], opacity = 0.7, name = 'Actual Value', line=dict(color='lightblue'), yaxis='y')
layout = go.Layout(title='Electricity Forecasting', xaxis=dict(title='Hour'),
yaxis=dict(title='kBTU', overlaying='y'),
yaxis2=dict(title='kBTU', side='right'))
fig = go.Figure(data=[Predicted, Actual], layout=layout)
fig.show()
###Output
_____no_output_____ |
ipynb/consensus_and_profile.ipynb | ###Markdown
 Consensus and Profile ProblemA matrix is a rectangular table of values divided into rows and columns. An **m × n** matrix has **m** rows and **n** columns. Given a matrix **A**, we write A_i,j to indicate the value found at the intersection of row **i** and column **j**.Say that we have a collection of DNA strings, all having the same length **n**. Their profile matrix is a **4 × n** matrix **P** in which **P_1,j** represents the number of times that 'A' occurs in the **j-th** position of one of the strings, **P_2,j** represents the number of times that 'C' occurs in the **j-th** position, and so on.A consensus string **c** is a string of length **n** formed from our collection by taking the most common symbol at each position; the **j-th** symbol of **c** therefore corresponds to the symbol having the maximum value in the **j-th** column of the profile matrix. Of course, there may be more than one most common symbol, leading to multiple possible consensus strings.

| DNA strings |
| --- |
| A T C C A G C T |
| G G G C A A C T |
| A T G G A T C T |
| A A G C A A C C |
| T T G G A A C T |
| A T G C C A T T |
| A T G G C A C T |

| Profile |
| --- |
| **A** 5 1 0 0 5 5 0 0 |
| **C** 0 0 1 4 2 0 6 1 |
| **G** 1 1 6 3 0 1 0 0 |
| **T** 1 5 0 0 0 1 1 6 |

**Consensus** A T G C A A C T

> **Given:** A collection of at most 10 DNA strings of equal length (at most 1 kbp) in FASTA format.

> **Return:** A consensus string and profile matrix for the collection. (If several possible consensus strings exist, then you may return any one of them.)
###Code
def read_fasta(fp):
name, seq = None, []
for line in fp:
line = line.rstrip()
if line.startswith(">"):
if name: yield (name, ''.join(seq))
name, seq = line, []
else:
seq.append(line)
if name: yield (name, ''.join(seq))
data_list = []
file = open('main.out','w')
with open('rosalind_cons.txt') as fp: #replace filename with yours
for name, seq in read_fasta(fp):
data_list.append(seq)
length = len(data_list)
# length of string
L = len(seq)
P = [[0 for x in range(L)] for y in range(4)]
Q = ['A','C','G','T']
for x in range(L):
for y in range(4):
for z in range(length):
P[y][x] = P[y][x] + data_list[z][x].count(Q[y])
domi = ""
for x in range(L):
MAX = 0
for y in range(4):
if P[y][x]>=P[MAX][x]:
MAX = y
if MAX == 0:
domi = domi+'A'
elif MAX ==1:
domi = domi+'C'
elif MAX ==2:
domi = domi+'G'
elif MAX ==3:
domi = domi+'T'
file.write('%s\n%s\n%s\n%s\n%s' %(
domi,'A: '+str(P[0]).strip('[]').replace(',',''),'C: '
+str(P[1]).strip('[]').replace(',',''),'G: '
+str(P[2]).strip('[]').replace(',',''),'T: '
+str(P[3]).strip('[]').replace(',','')))
###Output
_____no_output_____ |
02. Color processing/[homework]color_processing.ipynb | ###Markdown
 Homework 2. [Form](https://forms.gle/GPHZ9AGPpU922vHC7) for submitting your solution. DEADLINES: Task #1 - Jan 10, 23:59; Task #2 - Jan 10, 23:59. Include in the file name: 1. first and last name, 2. homework number, 3. task number (NAME_LES_TASK.ipynb). Task #1 - Forest or desert? When analyzing images of terrain, it is often necessary to understand what kind of terrain it is. In particular, if you determine that water dominates the image, it makes sense to look for ships in that image. If the picture shows dense forest, it is probably not the best zone for landing a drone or other UAV. Your task is to write a program that distinguishes forest from desert. Real satellite images of forests and deserts can be found in the attachment. Example images:
###Code
# Your code
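# --- A minimal sketch (editor's addition), not the official solution ---
# Idea: forest images are dominated by green hues while desert images are yellow/brown,
# so the fraction of "green" pixels is a simple discriminator. The hue band (35-85 on
# OpenCV's 0-180 hue scale) and the 0.5 threshold are illustrative assumptions.
import cv2
import numpy as np

def classify_terrain(path):
    """Return 'forest' or 'desert' for a satellite image stored at `path` (hypothetical file)."""
    img = cv2.imread(path)                      # BGR image
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # hue, saturation, value
    green_ratio = np.mean((hsv[:, :, 0] > 35) & (hsv[:, :, 0] < 85))  # fraction of green pixels
    return "forest" if green_ratio > 0.5 else "desert"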
###Output
_____no_output_____
###Markdown
 Task #2 - Puzzle pieces. Given the pieces of an image, your task is to assemble the puzzle back into the original picture. Conditions:

* The original image is given for verification only; the assembled image must not be used inside the algorithm itself;
* The first piece, from which the puzzle must be assembled, is always the top-left part of the image;
* During grading the pieces may be shuffled, i.e. the order of the pieces at check time may differ from the original.

Example images:
###Code
# Your code
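# --- A minimal sketch (editor's addition), assuming equal-sized rectangular pieces
# loaded as numpy arrays. A common greedy strategy: starting from the known top-left
# piece, repeatedly pick the unused piece whose left (or top) border best matches the
# right (or bottom) border of the already placed neighbour.
import numpy as np

def horizontal_cost(left_piece, right_piece):
    """Mean absolute difference between the touching pixel columns of two pieces."""
    return np.mean(np.abs(left_piece[:, -1].astype(float) - right_piece[:, 0].astype(float)))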
###Output
_____no_output_____ |
nircam_jdox/nircam_grism_time_series/figure2_grism_throughput_order1.ipynb | ###Markdown
Figure 2: Grism Throughput Response for LW Filters ($1^{st}$ Order) *** Table of Contents1. [Information](Information)2. [Imports](Imports)3. [Data](Data)4. [Generate the First Order Throughput Response Plot](Generate-the-First-Order-Throughput-Response-Plot)5. [Issues](Issues)6. [About this Notebook](About-this-Notebook)*** Information JDox links: * [NIRCam Grism Time Series](https://jwst-docs.stsci.edu/display/JTI/NIRCam+Grism+Time+SeriesNIRCamGrismTimeSeries-Filters) * Figure 2: Throughput response for NIRCam grism and long wavelength filters ($1^{st}$ order) Imports
###Code
import os
import pylab
import numpy as np
import pysynphot as S
from astropy.io import ascii
from astropy.table import Table
import requests
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Data Data Location: The data is stored in a NIRCam JDox Box folder here:[ST-INS-NIRCAM -> JDox -> nircam_grism_time_series](https://stsci.box.com/s/tf6049a75u6f3uc26q3xu6w8tv456pk7)
###Code
files = [('https://stsci.box.com/shared/static/p3sfybd8efc83qv89ds1vnflnz2noq3b.txt', 'F070W_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/ops33fuuaiz79qcv26srmussv14ousuz.txt', 'F090W_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/bcdj04zp58liw9egpy7y889zcctcdx86.txt', 'F115W_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/o0vw1ih8yfiq97afmcbkrjgzep52umdf.txt', 'F140M_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/sfxywmlhcj1rg9cn7jsjipvnexqba2pc.txt', 'F150W2_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/iznsxikuh4sshfmvmwhdfzbos0t5tp76.txt', 'F150W_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/ynn2cslbli9pu6lpam7q4ii3uhy8csc5.txt', 'F162M_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/hqwuuk0cwzh244ip07ynlov1em1qxzd2.txt', 'F164N_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/bi6oe3xm3xzqjeiv4fd9jiw45abxypss.txt', 'F182M_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/3kkfo2vipr30llc3segu3jybgnp1b5wy.txt', 'F187N_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/knanodd82bisgl9jm8kkolzx7clz0n6v.txt', 'F200W_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/fn48mlbqwfb1iks2oiuhmbog0zrmt5cn.txt', 'F210M_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/2wdne7tkfmdov5avedef3zypwzzwhavr.txt', 'F212N_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/5eswzma6gtld0kgksc7wdeajjwhdycso.txt', 'F250M_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/88kjudlxg00fu1g1ep5vpzh2d2rey7i1.txt', 'F277W_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/djjq4dmrmwa90yfgzjra6nzugwyecoqm.txt', 'F300M_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/0orsgodf2k33h65yxyd651p3q37zgrhe.txt', 'F322W2_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/z5pz3m31xzqolo56n4x1eac9nvnpaabz.txt', 'F323N_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/r83zb002r4ga42rv1gxyzwymbj8hz1hl.txt', 'F335M_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/l68ztc38lok2xdsxvrtjlp57sbsrh1n7.txt', 'F356W_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/ib7v9la4jfzjvmxeure35upr7elw1epa.txt', 'F360M_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/xwxp2rc3bz3rqndd5ximhlcwp9kypkqr.txt', 'F405N_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/xwrox8e8puh7uwazl2q43fspbifd7gpu.txt', 'F410M_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/rsuq0nalldng8tplgi8x24tztkdcbzfc.txt', 'F430M_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/ifka343brbjh2tvkugsdhfeujp6b1iqn.txt', 'F444W_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/rvkqjp0s0b709phimtmdvl641fpexxer.txt', 'F460M_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/kestno7niryilo4c0t91a9z7vmth4751.txt', 'F466N_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/xlh9t4k2breqipeq5io4y5x2jp2m7myn.txt', 'F470N_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/hx47qygujmo0c8kl94kcrxvwr01msrxc.txt', 'F480M_NRC_and_OTE_ModAB_mean.txt'),
('https://stsci.box.com/shared/static/q002uyvir8h5q9i0wuwdgugtxp274fe3.fits', 'jwst_nircam_f070w_moda_trans.fits'),
('https://stsci.box.com/shared/static/v621d7261r7u3oklvwq7h8jcpvuc11va.fits', 'jwst_nircam_f070w_modb_trans.fits'),
('https://stsci.box.com/shared/static/0yc9u7qd7xz1nuuaeiiplyizs56xsdcg.fits', 'jwst_nircam_f070w_trans.fits'),
('https://stsci.box.com/shared/static/emt5mhp2he7akjmd1q5gc6tlyb8hlvd0.fits', 'jwst_nircam_f090w_moda_trans.fits'),
('https://stsci.box.com/shared/static/087tgi8n7d954zslcq3cwva6gowqhr9u.fits', 'jwst_nircam_f090w_modb_trans.fits'),
('https://stsci.box.com/shared/static/vqes8f08a626t9g27s7eraztvta8d2d1.fits', 'jwst_nircam_f090w_trans.fits'),
('https://stsci.box.com/shared/static/r90ou18ouufa2hyliyzal00llrcvgsgt.fits', 'jwst_nircam_f115w_moda_trans.fits'),
('https://stsci.box.com/shared/static/q4s6k5x03xm9r7il4vwks1bcaq7ll2ep.fits', 'jwst_nircam_f115w_modb_trans.fits'),
('https://stsci.box.com/shared/static/hrxhow0nt9k17bkf0znotjgu3xuls4zw.fits', 'jwst_nircam_f115w_trans.fits'),
('https://stsci.box.com/shared/static/tfzruo3qef0c2w3djxx2f9mx9q3k3e26.fits', 'jwst_nircam_f140m_moda_trans.fits'),
('https://stsci.box.com/shared/static/001iq06zqvj8ue9o83j0de7hevfaquxk.fits', 'jwst_nircam_f140m_modb_trans.fits'),
('https://stsci.box.com/shared/static/3vl8ifi4ottw6j9mskozl2ja4eqpee4q.fits', 'jwst_nircam_f140m_trans.fits'),
('https://stsci.box.com/shared/static/ipxyd17ap5uoknh8yhq1d0phl5v3qev3.fits', 'jwst_nircam_f150w2_moda_trans.fits'),
('https://stsci.box.com/shared/static/14yxoyt9m5nnln1f214uonbm40ouosvo.fits', 'jwst_nircam_f150w2_modb_trans.fits'),
('https://stsci.box.com/shared/static/eyn84gua40i76lujc6ytgqmksdhusrio.fits', 'jwst_nircam_f150w2_trans.fits'),
('https://stsci.box.com/shared/static/cqm9zx2qp71vbropsv9gl78ku0soig76.fits', 'jwst_nircam_f150w_moda_trans.fits'),
('https://stsci.box.com/shared/static/f5ytcqlr0d3qb2f9by5e9muk57y7fpzn.fits', 'jwst_nircam_f150w_modb_trans.fits'),
('https://stsci.box.com/shared/static/z0alz51pjnawgwy49cpew6fteand92sr.fits', 'jwst_nircam_f150w_trans.fits'),
('https://stsci.box.com/shared/static/1yjs6dskrir0k2k5lli0n4e4q8ui0nk5.fits', 'jwst_nircam_f162m_moda_trans.fits'),
('https://stsci.box.com/shared/static/f2s4mt01mmsj7ukb29fl36oumnq4lwsl.fits', 'jwst_nircam_f162m_modb_trans.fits'),
('https://stsci.box.com/shared/static/b944mc6y29p5gsz7flyrrectzz80lq7m.fits', 'jwst_nircam_f162m_trans.fits'),
('https://stsci.box.com/shared/static/ivimch76f0gkgte3ehtlk0sbp31ky995.fits', 'jwst_nircam_f164n_moda_trans.fits'),
('https://stsci.box.com/shared/static/2mlngldc0oel1erccp435egj6hsdh8bk.fits', 'jwst_nircam_f164n_modb_trans.fits'),
('https://stsci.box.com/shared/static/h7vngf9b3hk2bnnquubptgahr4ddy35t.fits', 'jwst_nircam_f164n_trans.fits'),
('https://stsci.box.com/shared/static/lrtpz42dehiyxyn1i57rqtss3zmo5m82.fits', 'jwst_nircam_f182m_moda_trans.fits'),
('https://stsci.box.com/shared/static/7wnlka8jmrp2cbf213zr6sj9nxzz1b6h.fits', 'jwst_nircam_f182m_modb_trans.fits'),
('https://stsci.box.com/shared/static/gkics13m42hixktgi01no3k3e4mdlbfa.fits', 'jwst_nircam_f182m_trans.fits'),
('https://stsci.box.com/shared/static/zy62f09jt4qhdpmpi06oupt1j0cwvnoi.fits', 'jwst_nircam_f187n_moda_trans.fits'),
('https://stsci.box.com/shared/static/oz3gvx2ifxmqcoq58wh27y6xc8xb1vup.fits', 'jwst_nircam_f187n_modb_trans.fits'),
('https://stsci.box.com/shared/static/zioq21vr5c0knayu7vbydupo42y14iy1.fits', 'jwst_nircam_f187n_trans.fits'),
('https://stsci.box.com/shared/static/nqmjloz1czufvn2kdcq4ratuzoyobeb2.fits', 'jwst_nircam_f200w_moda_trans.fits'),
('https://stsci.box.com/shared/static/0f80l7n1735oyb0lyecu9bfbtsbxep8y.fits', 'jwst_nircam_f200w_modb_trans.fits'),
('https://stsci.box.com/shared/static/coagdaiiqwsvrosccrrdyzhwbwifffit.fits', 'jwst_nircam_f200w_trans.fits'),
('https://stsci.box.com/shared/static/vqqicdwx55tgruvxvxtu18utmliyk3zx.fits', 'jwst_nircam_f210m_moda_trans.fits'),
('https://stsci.box.com/shared/static/id6v2oz6ui6i3aho65jb1x4dtzk08v80.fits', 'jwst_nircam_f210m_modb_trans.fits'),
('https://stsci.box.com/shared/static/onv7qidjadzk8hl8uexux8o7s6ubuxd2.fits', 'jwst_nircam_f210m_trans.fits'),
('https://stsci.box.com/shared/static/oabpu1eyv17eurs3jle3cqcv4td7zlw0.fits', 'jwst_nircam_f212n_moda_trans.fits'),
('https://stsci.box.com/shared/static/necxttcispe8lrlvfjenp6y1i2wqzf4q.fits', 'jwst_nircam_f212n_modb_trans.fits'),
('https://stsci.box.com/shared/static/so5wgxnot995yi20m0myqg67n1n0szq0.fits', 'jwst_nircam_f212n_trans.fits'),
('https://stsci.box.com/shared/static/bhzst8us5qegi2zmfgtf2k13eijbo0yh.fits', 'jwst_nircam_f250m_moda_trans.fits'),
('https://stsci.box.com/shared/static/n525hr454e3muwa8hjg2s3pc649b90io.fits', 'jwst_nircam_f250m_modb_trans.fits'),
('https://stsci.box.com/shared/static/t9wea2b7ynlqum42cazs3uog1ncw9nql.fits', 'jwst_nircam_f250m_trans.fits'),
('https://stsci.box.com/shared/static/zpxqwg0fr3d3baokajh50t8orou8ai6w.fits', 'jwst_nircam_f277w_moda_trans.fits'),
('https://stsci.box.com/shared/static/hlagbg669x27xx26i1voef2dysmb8gqm.fits', 'jwst_nircam_f277w_modb_trans.fits'),
('https://stsci.box.com/shared/static/nep9ebnmoyogmup48gkifnekour2du3b.fits', 'jwst_nircam_f277w_trans.fits'),
('https://stsci.box.com/shared/static/77m4vzldbjvyesnmak7a8yrivie3abj1.fits', 'jwst_nircam_f300m_moda_trans.fits'),
('https://stsci.box.com/shared/static/ywl67tyf7h67e84kccqfjmj68kjk5946.fits', 'jwst_nircam_f300m_modb_trans.fits'),
('https://stsci.box.com/shared/static/29ysbdq1ijvjayex5dsjx8fqck6pc57m.fits', 'jwst_nircam_f300m_trans.fits'),
('https://stsci.box.com/shared/static/ji9pz1symqweujoxwfg1k94vma0t8w7h.fits', 'jwst_nircam_f322w2_moda_trans.fits'),
('https://stsci.box.com/shared/static/7g2ml1dfcsjkhthzaa2d64kxq2l2ozvo.fits', 'jwst_nircam_f322w2_modb_trans.fits'),
('https://stsci.box.com/shared/static/18344rcr1mgehveuuq5ouh5gdadtizgl.fits', 'jwst_nircam_f322w2_trans.fits'),
('https://stsci.box.com/shared/static/asn3qkwhf0j2f59j7a5c3jr8vmkd1hf8.fits', 'jwst_nircam_f323n_moda_trans.fits'),
('https://stsci.box.com/shared/static/znnt66i6muj1r2ggdy85jkayq79h50cn.fits', 'jwst_nircam_f323n_modb_trans.fits'),
('https://stsci.box.com/shared/static/hj9i2hjgnyukxg0g3hl7g1xll70fcknt.fits', 'jwst_nircam_f323n_trans.fits'),
('https://stsci.box.com/shared/static/bh7fot8pkstl8slffktbzmokoiptnmdg.fits', 'jwst_nircam_f335m_moda_trans.fits'),
('https://stsci.box.com/shared/static/5n9b8f70405d97z3er1688g5mfcc53db.fits', 'jwst_nircam_f335m_modb_trans.fits'),
('https://stsci.box.com/shared/static/yb3o86rbicpud401xw5woi230ipwe88y.fits', 'jwst_nircam_f335m_trans.fits'),
('https://stsci.box.com/shared/static/3a24wuinnrlmz3fwgbhn0b66topjix9k.fits', 'jwst_nircam_f356w_moda_trans.fits'),
('https://stsci.box.com/shared/static/x31rhyxm832gxnyov2bvxbluesxl2oj3.fits', 'jwst_nircam_f356w_modb_trans.fits'),
('https://stsci.box.com/shared/static/qz3ugwcw3dmjt0jcfyrnhnboo7x7yxj1.fits', 'jwst_nircam_f356w_trans.fits'),
('https://stsci.box.com/shared/static/iq75hpd6njl4znq0sg9oll9l1xh15l09.fits', 'jwst_nircam_f360m_moda_trans.fits'),
('https://stsci.box.com/shared/static/mphs45niz6p3tke1o4jw4b0i0io7qt0g.fits', 'jwst_nircam_f360m_modb_trans.fits'),
('https://stsci.box.com/shared/static/cwxdluohobrcpk32qq9osgzc7o651pa4.fits', 'jwst_nircam_f360m_trans.fits'),
('https://stsci.box.com/shared/static/jytl3abr69pjvn3vwcii0573ijkbo7sb.fits', 'jwst_nircam_f405n_moda_trans.fits'),
('https://stsci.box.com/shared/static/77w780bfxtxess80a6p0zsaykj49p6mx.fits', 'jwst_nircam_f405n_modb_trans.fits'),
('https://stsci.box.com/shared/static/wb8cdpdxml8fjqbz118ohmhqydfy95eu.fits', 'jwst_nircam_f405n_trans.fits'),
('https://stsci.box.com/shared/static/5i3hdxhswzi98eunjajscxxxj9i9pv4e.fits', 'jwst_nircam_f410m_moda_trans.fits'),
('https://stsci.box.com/shared/static/wh1gxudiw5bx9du0ch85wl55mkd2304c.fits', 'jwst_nircam_f410m_modb_trans.fits'),
('https://stsci.box.com/shared/static/cwqu9d7fars0o8jxn0fi559sc82h8u0s.fits', 'jwst_nircam_f410m_trans.fits'),
('https://stsci.box.com/shared/static/i1q4on0xoh86uqtwhg1czp4k66ui9af5.fits', 'jwst_nircam_f430m_moda_trans.fits'),
('https://stsci.box.com/shared/static/afm1b6sjgjc18as5ldszatgesmle97z2.fits', 'jwst_nircam_f430m_modb_trans.fits'),
('https://stsci.box.com/shared/static/q2jn74ra7oi4lgklpv6b0y75sjytguqf.fits', 'jwst_nircam_f430m_trans.fits'),
('https://stsci.box.com/shared/static/z7s8qsejqyqaupfk3rwf1bxg5xcknu5o.fits', 'jwst_nircam_f444w_moda_trans.fits'),
('https://stsci.box.com/shared/static/q9aeblatsq9p5lzgq9g2rzryfjg201r7.fits', 'jwst_nircam_f444w_modb_trans.fits'),
('https://stsci.box.com/shared/static/xnl9t1xf3md1wzljb1lu9t8s1jgssxvg.fits', 'jwst_nircam_f444w_trans.fits'),
('https://stsci.box.com/shared/static/sm91chlb3bx5bi681bew3g7q7j953r97.fits', 'jwst_nircam_f460m_moda_trans.fits'),
('https://stsci.box.com/shared/static/lfrcunsgb9j5zkv1ndw76rfzs8i2qmib.fits', 'jwst_nircam_f460m_modb_trans.fits'),
('https://stsci.box.com/shared/static/46ggp89lwcap1e1t6l4e9hubr7wwlm9b.fits', 'jwst_nircam_f460m_trans.fits'),
('https://stsci.box.com/shared/static/dyys9qu87xs4fos558jduxmm4ie498g8.fits', 'jwst_nircam_f466n_moda_trans.fits'),
('https://stsci.box.com/shared/static/0286ejhytmm17edmb6wsdoe4gwaycti7.fits', 'jwst_nircam_f466n_modb_trans.fits'),
('https://stsci.box.com/shared/static/6rtqsgk5hvq7gzwa6i6ezbuc7xc1mgtw.fits', 'jwst_nircam_f466n_trans.fits'),
('https://stsci.box.com/shared/static/4x8x01ihh8qscll6ol4ds5lmgag9b2iy.fits', 'jwst_nircam_f470n_moda_trans.fits'),
('https://stsci.box.com/shared/static/4hk9vtkqporxj98ay7h9jfnx3cky97aa.fits', 'jwst_nircam_f470n_modb_trans.fits'),
('https://stsci.box.com/shared/static/e0v1oim4u8lqdtpetsminmlkae73ccp1.fits', 'jwst_nircam_f470n_trans.fits'),
('https://stsci.box.com/shared/static/ucg4cdwf566xaorpdv3utrykhvkld2gs.fits', 'jwst_nircam_f480m_moda_trans.fits'),
('https://stsci.box.com/shared/static/qefjdc97u8mz18uu2nc513a2yxrqm6jh.fits', 'jwst_nircam_f480m_modb_trans.fits'),
('https://stsci.box.com/shared/static/kdttpdh2krrco6tno2tvroi6om87xfkk.fits', 'jwst_nircam_f480m_trans.fits')]
def download_file(url, file_name, output_directory='./', overwrite=False):
"""Download a file from Box given the direct URL
Parameters
----------
url : str
URL to the file to be downloaded
file_name : str
The name of the file being downloaded
output_directory : str
Directory to download file_name into
overwrite : str
If False and the file to download already exists, the download
will be skipped. If True, the file will be downloaded regardless
of whether it already exists in output_directory
Returns
-------
download_filename : str
Name of the downloaded file
"""
download_filename = os.path.join(output_directory, file_name)
if not os.path.isfile(download_filename) or overwrite is True:
print("Downloading {}".format(file_name))
with requests.get(url, stream=True) as response:
if response.status_code != 200:
raise RuntimeError("Wrong URL - {}".format(url))
with open(download_filename, 'wb') as f:
for chunk in response.iter_content(chunk_size=2048):
if chunk:
f.write(chunk)
else:
print("{} already exists. Skipping download.".format(download_filename))
return download_filename
###Output
_____no_output_____
###Markdown
Load the data(The next cell assumes you downloaded the data into your ```Users/$(logname)/``` home directory)
###Code
if os.environ.get('LOGNAME') is None:
raise ValueError("WARNING: LOGNAME environment variable not set!")
box_directory = os.path.join("/Users/", os.environ['LOGNAME'], "box_data")
box_directory
if not os.path.isdir(box_directory):
try:
os.mkdir(box_directory)
except:
raise OSError("Unable to create {}".format(box_directory))
for file_info in files:
file_url, filename = file_info
outfile = download_file(file_url, filename, output_directory=box_directory)
filters = ["F250M","F277W","F300M","F322W2","F335M","F356W","F360M","F410M","F430M","F444W","F460M","F480M"]
# NIRCAM grism throughputs given to Nor by Tom Greene (November 17, 2017):
# First order
modAm1 = np.array([0.240,0.306,0.374,0.437,0.494,0.543,0.585,0.620,0.648,0.670,0.686,0.696,0.702,0.705,0.703,0.702,0.694,0.685,0.674,0.661,0.649,0.636,0.621,0.609,0.593,0.579,0.566])
modBm1 = np.array([0.178,0.226,0.276,0.323,0.365,0.401,0.432,0.458,0.479,0.495,0.507,0.514,0.519,0.521,0.520,0.518,0.513,0.506,0.498,0.489,0.479,0.470,0.459,0.450,0.438,0.428,0.418])
# Second order
modAm2 = np.array([0.387,0.283,0.210,0.151,0.109,0.079,0.056,0.037,0.024,0.015,0.011,0.005,0.002,0.000,0.000,0.001,0.001,0.001,0.002,0.003,0.005,0.008,0.010,0.011,0.012,0.013,0.014])
modBm2 = np.array([0.286,0.209,0.155,0.111,0.080,0.058,0.041,0.027,0.018,0.011,0.008,0.004,0.002,0.000,0.000,0.001,0.001,0.001,0.002,0.002,0.004,0.006,0.007,0.008,0.009,0.010,0.010])
# Blaze wavelength
blaze_w = np.array([2.40,2.50,2.60,2.70,2.80,2.90,3.00,3.10,3.20,3.30,3.40,3.50,3.60,3.70,3.80,3.90,4.00,4.10,4.20,4.30,4.40,4.50,4.60,4.70,4.80,4.90,5.00])
# Include JWST optics?
NRC_plus_OTE = True
###Output
_____no_output_____
###Markdown
Generate the First Order Throughput Response Plot
###Code
NUM_COLORS = len(filters)
cm = pylab.get_cmap('tab10')
f, ax1 = plt.subplots(1, figsize=(15, 10))
for i,fil in zip(range(NUM_COLORS),filters):
if 'W' in fil:
color = cm(1.*i/NUM_COLORS)
if NRC_plus_OTE:
wav, thpt = np.loadtxt(os.path.join(box_directory, fil+'_NRC_and_OTE_ModAB_mean.txt'), unpack=True, skiprows=1)
data = S.ArrayBandpass(wav, thpt, name=fil)
else:
data = S.FileBandpass(os.path.join(box_directory, 'jwst_nircam_'+fil.lower()+'_moda_trans.fits'))
maxval = 0.003
ax1.plot(data.wave[data.throughput > maxval],data.throughput[data.throughput > maxval],lw=3,label=fil)
ax1.text(4.7, 0.47, 'LW B grism',color='black',alpha=0.65,fontsize=20)
ax1.text(4.7, 0.63, 'LW A grism',color='black',fontsize=20)
ax1.plot(blaze_w,modAm1,lw=2,color='black',marker='o',markersize=7)
ax1.plot(blaze_w,modBm1,lw=2,color='grey',marker='d',markersize=7)
miny,maxy = ax1.get_ylim()
minx,maxx = ax1.get_xlim()
ax1.legend(bbox_to_anchor=(0.16, 1),fontsize=15)
ax1.tick_params(labelsize=18)
f.text(0.5, 0.04, 'Wavelength ($\mu m$)', ha='center', fontsize=22)
f.text(0.05, 0.5, 'Throughput', va='center', rotation='vertical', fontsize=22)
###Output
_____no_output_____
###Markdown
Figure option 2: all filters
###Code
cm = pylab.get_cmap('tab10')
f, ax1 = plt.subplots(1, figsize=(15, 10))
for i,fil in zip(range(NUM_COLORS),filters):
color = cm(1.*i/NUM_COLORS)
if NRC_plus_OTE:
wav, thpt = np.loadtxt(os.path.join(box_directory, fil+'_NRC_and_OTE_ModAB_mean.txt'), unpack=True, skiprows=1)
data = S.ArrayBandpass(wav, thpt, name=fil)
else:
data = S.FileBandpass(os.path.join(box_directory, 'jwst_nircam_'+fil.lower()+'_moda_trans.fits'))
maxval = 0.003
ax1.plot(data.wave[data.throughput > maxval],data.throughput[data.throughput > maxval],lw=3,label=fil,color=color)
ax1.text(4.7, 0.47, 'LW B grism',color='black',alpha=0.65,fontsize=20)
ax1.text(4.7, 0.63, 'LW A grism',color='black',fontsize=20)
ax1.plot(blaze_w,modAm1,lw=2,color='black',marker='o',markersize=7)
ax1.plot(blaze_w,modBm1,lw=2,color='grey',marker='d',markersize=7)
miny,maxy = ax1.get_ylim()
minx,maxx = ax1.get_xlim()
ax1.legend(bbox_to_anchor=(1., 1.013),fontsize=15)
ax1.tick_params(labelsize=18)
f.text(0.5, 0.04, 'Wavelength ($\mu m$)', ha='center', fontsize=22)
f.text(0.05, 0.5, 'Throughput', va='center', rotation='vertical', fontsize=22)
###Output
_____no_output_____ |
Practices/Practice17_Pandas-Subsetting-II.ipynb | ###Markdown
Practice: Subsetting Pandas DataFrames IIFor this practice, let's use the `iris` dataset:
###Code
# import the pandas package
import pandas as pd
# set the path
path = 'https://raw.githubusercontent.com/GWC-DCMB/curriculum-notebooks/master/'
# this is where the file is located
filename = path + 'SampleData/iris.csv'
# load the iris dataset into a DataFrame
iris = pd.read_csv(filename)
###Output
_____no_output_____
###Markdown
Take a look at the dataset:
###Code
# take a look at the beginning
# subset the first 5 rows from iris
# save it to a variable called subset1
# subset a few columns from the subset1 dataframe
# save it to a variable called subset2
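# --- One possible solution (editor's sketch; column names assumed from the iris dataset) ---
iris.head()
subset1 = iris.head(5)                              # first 5 rows
subset2 = subset1[['sepal_length', 'sepal_width']]  # a few columns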
###Output
_____no_output_____
###Markdown
Let's try subsetting both rows and columns at the same time!
###Code
# create a new subset from iris that's identical to subset2
# but write only one line of code
# save it to a variable called subset3
# check your work -- how does subset2 compare to subset3?
# subset rows 20 through 30 and columns petal_length & petal width
# write only one line of code
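# --- Possible one-liners (editor's sketch; column names assumed) ---
subset3 = iris.head(5)[['sepal_length', 'sepal_width']]
subset2.equals(subset3)  # should be True if subset2 was built the same way
iris.loc[20:30, ['petal_length', 'petal_width']]  # rows 20-30, two columns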
###Output
_____no_output_____
###Markdown
Now let's subset using `query`:
###Code
# subset rows where the species is not setosa
# subset rows where sepal_width is greater than 4
# subset rows where sepal_width is between 2 and 3
# subset rows where sepal_width is less than 3.5 and the species is virginica
# subset rows where the petal width is 0.3 or the species is versicolor
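# --- Possible solutions (editor's sketch; column and species names taken from the prompts) ---
iris.query('species != "setosa"')
iris.query('sepal_width > 4')
iris.query('2 < sepal_width < 3')
iris.query('sepal_width < 3.5 and species == "virginica"')
iris.query('petal_width == 0.3 or species == "versicolor"')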
###Output
_____no_output_____
###Markdown
**Bonus**: Try to subset with both `query` and square brackets `[]` on the same line:
###Code
# pick any query and any columns to subset with
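# --- One possibility (editor's sketch) ---
iris.query('species == "versicolor"')[['sepal_length', 'petal_length']]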
###Output
_____no_output_____ |
notebook/fraud-detection-catboost.ipynb | ###Markdown
Data Import libraries:
###Code
import pandas as pd
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
import sklearn.metrics as metrics
import catboost as cb
import gc
###Output
_____no_output_____
###Markdown
 Import data & combine transaction and identity columns:
###Code
train_transaction = pd.read_csv('/home/fatihakyon/dev/inzva/ieee-fraud-detection/data/train_transaction.csv')
train_identity = pd.read_csv('/home/fatihakyon/dev/inzva/ieee-fraud-detection/data/train_identity.csv')
train = pd.merge(train_transaction, train_identity, on="TransactionID", how="left")
train = train.set_index("TransactionID", drop="True")
del train_transaction, train_identity
gc.collect()
test_transaction = pd.read_csv('/home/fatihakyon/dev/inzva/ieee-fraud-detection/data/test_transaction.csv')
test_identity = pd.read_csv('/home/fatihakyon/dev/inzva/ieee-fraud-detection/data/test_identity.csv')
test = pd.merge(test_transaction, test_identity, on="TransactionID", how="left")
test = test.set_index("TransactionID", drop="True")
del test_transaction, test_identity
gc.collect()
train.shape
test.shape
train.head()
###Output
_____no_output_____
###Markdown
Rename test data columns:
###Code
mapping = {}
for column_name in test.columns:
mapping[column_name] = column_name.replace("-", "_")
test.rename(columns=mapping, inplace=True)
test.columns
###Output
_____no_output_____
###Markdown
Reduce memory usage:
###Code
import numpy as np
def reduce_mem_usage(df):
"""
From kernel https://www.kaggle.com/gemartin/load-data-reduce-memory-usage
Iterate through all the columns of a dataframe and modify the data type
to reduce memory usage.
"""
start_mem = df.memory_usage().sum() / 1024 ** 2
print("Memory usage of dataframe is {:.2f} MB".format(start_mem))
for col in df.columns:
col_type = df[col].dtype
if col_type != object:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == "int":
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if (
c_min > np.finfo(np.float16).min
and c_max < np.finfo(np.float16).max
):
df[col] = df[col].astype(np.float16)
elif (
c_min > np.finfo(np.float32).min
and c_max < np.finfo(np.float32).max
):
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
else:
pass
# df[col] = df[col].astype("category")
end_mem = df.memory_usage().sum() / 1024 ** 2
print("Memory usage after optimization is: {:.2f} MB".format(end_mem))
print("Decreased by {:.1f}%".format(100 * (start_mem - end_mem) / start_mem))
return df
train = reduce_mem_usage(train)
test = reduce_mem_usage(test)
###Output
Memory usage of dataframe is 1955.37 MB
Memory usage after optimization is: 648.22 MB
Decreased by 66.8%
Memory usage of dataframe is 1673.87 MB
Memory usage after optimization is: 563.43 MB
Decreased by 66.3%
###Markdown
Split train into train/val:
###Code
VAL_SPLIT = 0.2
# Split train into train/val
y = train["isFraud"].copy()
X = train.drop("isFraud", axis=1)
X_train, X_val, y_train, y_val = train_test_split(
X, y, test_size=VAL_SPLIT, random_state=13
)
X_test = test.copy()
del train, test, X, y
X_train = X_train.fillna(-999)
X_val = X_val.fillna(-999)
X_test = X_test.fillna(-999)
###Output
_____no_output_____
###Markdown
Categorical features:
###Code
categorical_features = []
for f in X_train.columns:
if X_train[f].dtype=='object' or X_test[f].dtype=='object':
categorical_features.append(f)
lbl = preprocessing.LabelEncoder()
lbl.fit(list(X_train[f].values) + list(X_test[f].values) + list(X_val[f].values))
X_train[f] = lbl.transform(list(X_train[f].values))
X_val[f] = lbl.transform(list(X_val[f].values))
X_test[f] = lbl.transform(list(X_test[f].values))
###Output
_____no_output_____
###Markdown
CatBoost
###Code
clf = cb.CatBoostClassifier(n_estimators=200,
learning_rate=0.05,
metric_period=500,
od_wait=500,
task_type='CPU',
depth=8)
clf.fit(X_train, y_train, cat_features=categorical_features)
###Output
0: learn: 0.6113626 total: 145ms remaining: 28.8s
199: learn: 0.0947855 total: 26.2s remaining: 0us
###Markdown
Results:
###Code
def calculate_scores(estimator, X_val, y_val):
y_val_prediction = estimator.predict(X_val)
y_val_proba = estimator.predict_proba(X_val)[:, 1]
conf_matrix = metrics.confusion_matrix(y_val, y_val_prediction)
accuracy_score = metrics.accuracy_score(y_val, y_val_prediction)
roc_auc_score = metrics.roc_auc_score(y_val, y_val_proba)
f1_score = metrics.f1_score(y_val, y_val_prediction)
classification_report = metrics.classification_report(y_val, y_val_prediction)
print("Confusion Matrix: \n%s" % str(conf_matrix))
print("\nAccuracy: %.4f" % accuracy_score)
print("\nAUC: %.4f" % roc_auc_score)
print("\nF1 Score: %.4f" % f1_score)
print("\nClassification Report: \n",classification_report)
return {
"conf_matrix": conf_matrix,
"accuracy_score": accuracy_score,
"roc_auc_score": roc_auc_score,
"f1_score": f1_score,
"classification_report": classification_report
}
_ = calculate_scores(clf, X_val, y_val)
###Output
Confusion Matrix:
[[113854 118]
[ 2787 1349]]
Accuracy: 0.9754
AUC: 0.8755
F1 Score: 0.4815
Classification Report:
precision recall f1-score support
0 0.98 1.00 0.99 113972
1 0.92 0.33 0.48 4136
accuracy 0.98 118108
macro avg 0.95 0.66 0.73 118108
weighted avg 0.97 0.98 0.97 118108
###Markdown
 Hyperparameter Optimization Randomized Search:
###Code
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import uniform as sp_randFloat
from scipy.stats import randint as sp_randInt
param_grid = {
'silent': [False],
'learning_rate': sp_randFloat(0.01, 0.3),
'n_estimators': [200],
'depth': sp_randInt(6, 16),
'l2_leaf_reg':[3,1,5,10,100],
'loss_function': ['Logloss', 'CrossEntropy']
}
def perform_random_search(
estimator, X_train, X_val, y_train, y_val, param_grid, scoring=None
):
hyperparam_optimizer = RandomizedSearchCV(
estimator=estimator,
param_distributions=param_grid,
scoring=scoring,
cv=2,
n_iter=20,
n_jobs=1,
refit=True,
random_state=13,
)
hyperparam_optimizer.fit(X_train, y_train, eval_set=(X_val, y_val))
return hyperparam_optimizer.best_estimator_
# accuracy score
best_estimator = perform_random_search(clf, X_train, X_val, y_train, y_val, param_grid, scoring='accuracy')
_ = calculate_scores(best_estimator, X_val, y_val)
# roc auc score
best_estimator = perform_random_search(clf, X_train, X_val, y_train, y_val, param_grid, scoring='roc_auc')
_ = calculate_scores(best_estimator, X_val, y_val)
# f1 score
best_estimator = perform_random_search(clf, X_train, X_val, y_train, y_val, param_grid, scoring='f1')
_ = calculate_scores(best_estimator, X_val, y_val)
###Output
Warning: Overfitting detector is active, thus evaluation metric is calculated on every iteration. 'metric_period' is ignored for evaluation metric.
###Markdown
Bayesian Search:
###Code
from skopt import BayesSearchCV
from skopt.space import Real, Categorical, Integer
param_grid = {
'silent': [False],
'learning_rate': Real(0.01, 0.3),
'n_estimators': [200],
'depth': Integer(6, 16),
'l2_leaf_reg':[3,1,5,10,100],
'loss_function': ['Logloss', 'CrossEntropy']
}
def perform_bayes_search(
estimator, X_train, X_val, y_train, y_val, param_grid, scoring=None
):
hyperparam_optimizer = BayesSearchCV(
estimator=estimator,
search_spaces=param_grid,
scoring=scoring,
cv=2,
n_iter=15,
n_jobs=1,
refit=True,
return_train_score=False,
optimizer_kwargs={"base_estimator": "GP"},
random_state=13,
fit_params={
'eval_set': (X_val, y_val),
}
)
hyperparam_optimizer.fit(X_train, y_train)
return hyperparam_optimizer.best_estimator_
# accuracy score
best_estimator = perform_bayes_search(clf, X_train, X_val, y_train, y_val, param_grid, scoring='accuracy')
_ = calculate_scores(best_estimator, X_val, y_val)
# roc auc score
best_estimator = perform_bayes_search(clf, X_train, X_val, y_train, y_val, param_grid, scoring='roc_auc')
_ = calculate_scores(best_estimator, X_val, y_val)
# f1 score
best_estimator = perform_bayes_search(clf, X_train, X_val, y_train, y_val, param_grid, scoring='f1')
_ = calculate_scores(best_estimator, X_val, y_val)
###Output
Warning: Overfitting detector is active, thus evaluation metric is calculated on every iteration. 'metric_period' is ignored for evaluation metric.
|
measuring-quantum-volume.ipynb | ###Markdown
Measuring Quantum Volume Introduction**Quantum Volume (QV)** is a single-number metric that can be measured using a concreteprotocol on near-term quantum computers of modest size. The QV method quantifiesthe largest random circuit of equal width and depth that the computer successfully implements.Quantum computing systems with high-fidelity operations, high connectivity, large calibrated gatesets, and circuit rewriting toolchains are expected to have higher quantum volumes. The Quantum Volume ProtocolA QV protocol (see [1]) consists of the following steps:(We should first import the relevant qiskit classes for the demonstration).
###Code
import matplotlib.pyplot as plt
from IPython.display import clear_output
from tqdm.notebook import tqdm
#Import Qiskit classes
import qiskit
from qiskit import assemble, transpile
from qiskit.providers.aer.noise import NoiseModel
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error, thermal_relaxation_error
from qiskit.providers.aer import AerSimulator
from qiskit.utils import QuantumInstance
#Import the qv function
import qiskit.ignis.verification.quantum_volume as qv
from qiskit import IBMQ
if IBMQ.active_account() is None:
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q-kaist', group='internal', project='default')
backend = provider.get_backend('ibmq_montreal')
# backend = AerSimulator.from_backend(backend)
quantum_instance = QuantumInstance(backend=backend, shots=2**13)
basis_gates = backend.configuration().basis_gates
###Output
_____no_output_____
###Markdown
 Step 1: Generate QV sequencesIt is well-known that quantum algorithms can be expressed as polynomial-sized quantum circuits built from two-qubit unitary gates. Therefore, a model circuit consists of $d$ layers of random permutations of the qubit labels, followed by random two-qubit gates (from $SU(4)$). When the circuit width $m$ is odd, one of the qubits is idle in each layer.More precisely, a **QV circuit** with **depth $d$** and **width $m$** is a sequence $U = U^{(d)}...U^{(2)}U^{(1)}$ of $d$ layers:$$ U^{(t)} = U^{(t)}_{\pi_t(m'-1),\pi_t(m')} \otimes ... \otimes U^{(t)}_{\pi_t(1),\pi_t(2)} $$each labeled by times $t = 1 ... d$ and acting on $m' = 2 \lfloor m/2 \rfloor$ qubits. Each layer is specified by choosing a uniformly random permutation $\pi_t \in S_m$ of the $m$ qubit indices and sampling each $U^{(t)}_{a,b}$, acting on qubits $a$ and $b$, from the Haar measure on $SU(4)$.In the following example we look at growing subsets of the device qubits, from the 6-qubit subset [0, 1, 2, 3, 5, 8] up to a 9-qubit subset (each volume circuit will have depth equal to the number of qubits in the subset).
###Code
# qubit_lists: list of list of qubit subsets to generate QV circuits
qubit_lists = [[0, 1, 2, 3, 5, 8], [0, 1, 2, 3, 5, 8, 11], [0, 1, 2, 3, 5, 8, 11, 14], [0, 1, 2, 3, 5, 8, 11, 14, 13]]
# ntrials: Number of random circuits to create for each subset
ntrials = 100
###Output
_____no_output_____
###Markdown
We generate the quantum volume sequences. We start with a small example (so it doesn't take too long to run).
###Code
import warnings
warnings.filterwarnings('ignore')
qv_circs, qv_circs_nomeas = qv.qv_circuits(qubit_lists, ntrials)
###Output
_____no_output_____
###Markdown
As an example, we print the circuit corresponding to the first QV sequence. Note that the ideal circuits are run on the first n qubits (where n is the number of qubits in the subset).
###Code
# pass the first trial of the nomeas through the transpiler to illustrate the circuit
#qv_circs_nomeas[0] = qiskit.compiler.transpile(qv_circs_nomeas[0], basis_gates=basis_gates)
qv_circs_nomeas[0][0].draw(fold=-1)
###Output
_____no_output_____
###Markdown
Step 2: Simulate the ideal QV circuitsThe quantum volume method requires that we know the ideal output for each circuit, so we use the statevector simulator in Aer to get the ideal result.
###Code
# sv_sim = qiskit.Aer.get_backend('aer_simulator')
sv_sim = QuantumInstance(backend=AerSimulator(), shots=2**13)
ideal_results = []
for trial in tqdm(range(ntrials)):
# clear_output(wait=True)
for qc in qv_circs_nomeas[trial]:
qc.save_statevector()
#result = qiskit.execute(qv_circs_nomeas[trial], backend=sv_sim).result()
result = sv_sim.execute(qv_circs_nomeas[trial])
ideal_results.append(result)
# print(f'Simulated trial {trial+1}/{ntrials}')
###Output
_____no_output_____
###Markdown
Next, we load the ideal results into a quantum volume fitter
###Code
qv_fitter = qv.QVFitter(qubit_lists=qubit_lists)
qv_fitter.add_statevectors(ideal_results)
###Output
_____no_output_____
###Markdown
Step 3: Calculate the heavy outputsTo define when a model circuit $U$ has been successfully implemented in practice, we use the *heavy output* generation problem. The ideal output distribution is $p_U(x) = |\langle x|U|0 \rangle|^2$, where $x \in \{0,1\}^m$ is an observable bit-string. Consider the set of output probabilities given by the range of $p_U(x)$ sorted in ascending order $p_0 \leq p_1 \leq \dots \leq p_{2^m-1}$. The median of the set of probabilities is $p_{med} = (p_{2^{m-1}} + p_{2^{m-1}-1})/2$, and the *heavy outputs* are$$ H_U = \{ x \in \{0,1\}^m \text{ such that } p_U(x)>p_{med} \}.$$The heavy output generation problem is to produce a set of output strings such that more than two-thirds are heavy.As an illustration, we print the heavy outputs from various depths and their probabilities (for trial 0):
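As a small self-contained illustration (an editor's sketch, independent of the `qv_fitter` machinery used below), the heavy-output set of a single circuit could be computed directly from its ideal probability vector following the definition above:

```python
import numpy as np

def heavy_outputs(p):
    """Indices of bit-strings whose ideal probability exceeds the median of p (len(p) = 2**m)."""
    p = np.asarray(p)
    p_sorted = np.sort(p)
    p_med = (p_sorted[len(p) // 2] + p_sorted[len(p) // 2 - 1]) / 2
    return np.where(p > p_med)[0]
```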
###Code
for qubit_list in qubit_lists:
l = len(qubit_list)
print ('qv_depth_'+str(l)+'_trial_0:', qv_fitter._heavy_outputs['qv_depth_'+str(l)+'_trial_0'])
for qubit_list in qubit_lists:
l = len(qubit_list)
print ('qv_depth_'+str(l)+'_trial_0:', qv_fitter._heavy_output_prob_ideal['qv_depth_'+str(l)+'_trial_0'])
exp_results = []
for trial in tqdm(range(ntrials)):
#clear_output(wait=True)
# t_qcs = transpile(qv_circs[trial], basis_gates=basis_gates, optimization_level=3)
# qobj = assemble(t_qcs)
# result = backend.run(qobj, max_parallel_experiments=0, shots=2**13).result()
result = quantum_instance.execute(qv_circs[trial])
exp_results.append(result)
#print(f'Completed trial {trial+1}/{ntrials}')
###Output
_____no_output_____
###Markdown
Step 5: Calculate the average gate fidelityThe *average gate fidelity* between the $m$-qubit ideal unitaries $U$ and the executed $U'$ is:$$ F_{avg}(U,U') = \frac{|Tr(U^{\dagger}U')|^2/2^m+1}{2^m+1}$$The observed distribution for an implementation $U'$ of model circuit $U$ is $q_U(x)$, and the probability of samplinga heavy output is:$$ h_U = \sum_{x \in H_U} q_U(x)$$As an illustration, we print the heavy output counts from various depths (for trial 0):
###Code
qv_fitter.add_data(exp_results)
for qubit_list in qubit_lists:
l = len(qubit_list)
print ('qv_depth_'+str(l)+'_trial_0:', qv_fitter._heavy_output_counts['qv_depth_'+str(l)+'_trial_0'])
###Output
_____no_output_____
###Markdown
Step 6: Calculate the achievable depthThe probability of observing a heavy output by implementing a randomly selected depth $d$ model circuit is:$$h_d = \int_U h_U dU$$The *achievable depth* $d(m)$ is the largest $d$ such that we are confident that $h_d > 2/3$. In other words,$$ h_1,h_2,\dots,h_{d(m)}>2/3 \text{ and } h_{d(m)+1} \leq 2/3$$We now convert the heavy outputs in the different trials and calculate the mean $h_d$ and the error for plotting the graph.
###Code
plt.figure(figsize=(10, 6))
ax = plt.gca()
# Plot the quantum volume results by calling plot_qv_data
qv_fitter.plot_qv_data(ax=ax, show_plt=False)
# Add title and label
ax.set_title('Quantum Volume for up to %d Qubits \n and %d Trials'%(len(qubit_lists[-1]), ntrials), fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Step 7: Calculate the Quantum VolumeThe quantum volume treats the width and depth of a model circuit with equal importance and measures the largest square-shaped (i.e., $m = d$) model circuit a quantum computer can implement successfully on average. The *quantum volume* $V_Q$ is defined as$$\log_2 V_Q = \arg\max_{m} \min (m, d(m))$$ We list the statistics for each depth. For each depth we list if the depth was successful or not and with what confidence interval. For a depth to be successful the confidence interval must be > 97.5%.
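Reading the definition operationally: if the width-$m$ square circuits pass the heavy-output test with the required confidence, then $\log_2 V_Q \geq m$, and the quantum volume is $2$ raised to the largest passing width. A minimal sketch, assuming a list of `(width, success)` pairs rather than the fitter's output:
```python
def quantum_volume(success_by_width):
    passing = [m for m, ok in success_by_width if ok]
    return 2 ** max(passing) if passing else 1

# hypothetical results: widths 2 and 3 pass, width 4 fails -> QV = 8
print(quantum_volume([(2, True), (3, True), (4, False)]))
```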
###Code
qv_success_list = qv_fitter.qv_success()
qv_list = qv_fitter.ydata
QV = 1
for qidx, qubit_list in enumerate(qubit_lists):
if qv_list[0][qidx]>2/3:
if qv_success_list[qidx][0]:
print("Width/depth %d greater than 2/3 (%f) with confidence %f (successful). Quantum volume %d"%
(len(qubit_list),qv_list[0][qidx],qv_success_list[qidx][1],qv_fitter.quantum_volume()[qidx]))
QV = qv_fitter.quantum_volume()[qidx]
else:
print("Width/depth %d greater than 2/3 (%f) with confidence %f (unsuccessful)."%
(len(qubit_list),qv_list[0][qidx],qv_success_list[qidx][1]))
else:
print("Width/depth %d less than 2/3 (unsuccessful)."%len(qubit_list))
print ("The Quantum Volume is:", QV)
###Output
_____no_output_____
###Markdown
References[1] Andrew W. Cross, Lev S. Bishop, Sarah Sheldon, Paul D. Nation, and Jay M. Gambetta, *Validating quantum computers using randomized model circuits*, Phys. Rev. A **100**, 032328 (2019). https://arxiv.org/pdf/1811.12926
###Code
import qiskit.tools.jupyter
%qiskit_version_table
###Output
_____no_output_____ |
Sketchify your Photo/src/Activity 2/Activity-2-Invert Image.ipynb | ###Markdown
Obtain a negative of an ImageA negative of the image can be obtained by "inverting" the grayscale value of every pixel. Since by default grayscale values are represented as integers in the range [0,255] (i.e., precision CV_8U), the "inverse" of a grayscale value x is simply 255-x: There are 2 different ways to transform an image to its negative using the OpenCV module. The first method explains the negative transformation step by step, and the second method performs the negative transformation in a single line.**1st method: Steps for negative transformation**1. Read an image2. Get the height and width of the image3. Each pixel contains 3 channels. So, take a pixel value and collect the 3 channels in 3 different variables.4. Negate the 3 pixel values by subtracting them from 255 and store them back in the pixel.5. Do this for all pixel values present in the image.**2nd method: Steps for negative transformation**1. Read an image and store it in a variable.2. Subtract the variable from 1 and store the value in another variable.3. All done. You have successfully performed the negative transformation.
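The vectorized second method can be sketched directly with NumPy; the snippet below is only an illustration (the file names are placeholders, and it assumes an 8-bit image rather than one normalized to the [0,1] range):
```python
import imageio
import numpy as np

# read an 8-bit image (placeholder file name)
img = imageio.imread("photo.jpg")

# a negative is simply 255 minus every channel value
negative = (255 - img).astype(np.uint8)

imageio.imwrite("negative.jpg", negative)
```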
###Code
#Import the image input/output library
###Output
_____no_output_____
###Markdown
Solution ```python
import imageio
```
###Code
# Read the image and convert it to grayscale
###Output
_____no_output_____
###Markdown
Solution ```python
s = imageio.imread(img)
g = grayscale(s)
```
###Code
# Implement basic logic of inverting an image
###Output
_____no_output_____ |
project/Packaging/ModulesAndPackaging_Notes.ipynb | ###Markdown
Modules and Packaging====At some point, you will want to organize and distribute your code library for the whole world to share, preferably on PyPI so that it is pip installable. ReferencesThis notebook shows a bare-bones version of creating and distributing a project to PyPI. Please follow the instructions in the official documentation. For convenience, you can use the sample project as a template. - [Packaging and Distributing Projects](https://packaging.python.org/tutorials/distributing-packages/)- [A sample Python project](https://github.com/pypa/sampleproject)For more about how to organize the structure of your package - [Official tutorial on packages](https://docs.python.org/3/tutorial/modules.html#packages)If you are still confused about what `__init__.py` does, a blog post on packages and the mysterious `__init__.py` might help. Install packages we will use for packaging
###Code
! pip install -U pip
! pip install twine
###Output
Collecting pip
Downloading pip-9.0.3-py2.py3-none-any.whl (1.4MB)
[K 100% |████████████████████████████████| 1.4MB 841kB/s eta 0:00:01
[?25hInstalling collected packages: pip
Found existing installation: pip 9.0.1
Uninstalling pip-9.0.1:
Successfully uninstalled pip-9.0.1
Successfully installed pip-9.0.3
Collecting twine
Downloading twine-1.11.0-py2.py3-none-any.whl
Requirement already satisfied: requests!=2.15,!=2.16,>=2.5.0 in /opt/conda/lib/python3.6/site-packages (from twine)
Requirement already satisfied: tqdm>=4.14 in /opt/conda/lib/python3.6/site-packages (from twine)
Collecting pkginfo>=1.4.2 (from twine)
Downloading pkginfo-1.4.2-py2.py3-none-any.whl
Requirement already satisfied: setuptools>=0.7.0 in /opt/conda/lib/python3.6/site-packages (from twine)
Collecting requests-toolbelt>=0.8.0 (from twine)
Downloading requests_toolbelt-0.8.0-py2.py3-none-any.whl (54kB)
[K 100% |████████████████████████████████| 61kB 4.7MB/s ta 0:00:011
[?25hRequirement already satisfied: chardet<3.1.0,>=3.0.2 in /opt/conda/lib/python3.6/site-packages (from requests!=2.15,!=2.16,>=2.5.0->twine)
Requirement already satisfied: idna<2.7,>=2.5 in /opt/conda/lib/python3.6/site-packages (from requests!=2.15,!=2.16,>=2.5.0->twine)
Requirement already satisfied: urllib3<1.23,>=1.21.1 in /opt/conda/lib/python3.6/site-packages (from requests!=2.15,!=2.16,>=2.5.0->twine)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.6/site-packages (from requests!=2.15,!=2.16,>=2.5.0->twine)
Installing collected packages: pkginfo, requests-toolbelt, twine
Successfully installed pkginfo-1.4.2 requests-toolbelt-0.8.0 twine-1.11.0
###Markdown
ModulesIn Python, any `.py` file is a module in that it can be imported. Because the interpreter runs the entire file when a module is imported, it is traditional to use a guard to ignore code that should only run when the file is executed as a script.
###Code
%%file foo.py
"""
When this file is imported with `import foo`,
only `useful_func1()` and `useful_fucn2()` are loaded,
and the test code `assert ...` is ignored. However,
when we run foo.py as a script `python foo.py`, then
the two assert statements are run.
Most commonly, the code under `if __name__ == '__main__':`
consists of simple examples or test cases for the functions
defined in the module.
"""
def useful_func1():
pass
def useful_fucn2():
pass
if __name__ == '__main__':  # runs only when the file is executed as a script
assert(useful_func1() is None)
assert(useful_fucn2() is None)
###Output
Writing foo.py
###Markdown
Organization of files in a moduleWhen the number of files you write grows large, you will probably want to organize them into their own directory structure. To make a folder a package, you just need to include a file named `__init__.py` in the folder. This file can be empty. For example, here is a package named `pkg` with sub-modules `sub1` and `sub2`.```./pkg:__init__.py foo.py sub1 sub2./pkg/sub1:__init__.py more_sub1_stuff.py sub1_stuff.py./pkg/sub2:__init__.py sub2_stuff.py```
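If you do want the package to re-export names, `__init__.py` can import them explicitly. The snippet below is only an illustrative sketch; the `__init__.py` files in the `pkg` package used in this notebook may simply be left empty:
```python
# pkg/__init__.py -- illustrative only; an empty file is also perfectly valid
# Re-export selected names so users can write `from pkg import f1`.
from pkg.foo import f1
from pkg.sub1.sub1_stuff import g1, g2

__all__ = ['f1', 'g1', 'g2']
```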
###Code
import pkg.foo as foo
foo.f1()
import pkg
pkg.foo.f1()
###Output
_____no_output_____
###Markdown
How to import a module at the same levelWithin a package, we need to use absolute path names for importing other modules in the same directory. This prevents confusion as to whether you want to import a system module with the same name. For example, `pkg/sub1/more_sub1_stuff.py` imports functions from `pkg/sub1/sub1_stuff.py`
###Code
! cat pkg/sub1/more_sub1_stuff.py
from pkg.sub1.more_sub1_stuff import g3
g3()
###Output
_____no_output_____
###Markdown
How to import a module at a different levelAgain, just use absolute paths. For example, `sub2_stuff.py` in the `sub2` directory uses functions from `sub1_stuff.py` in the `sub1` directory:
###Code
! cat pkg/sub2/sub2_stuff.py
from pkg.sub2.sub2_stuff import h2
h2()
###Output
_____no_output_____
###Markdown
Distributing your packageSuppose we want to distribute our code as a library (for example, on PyPI so that it can be installed with `pip`). Let's create an `sta663-` (the username part is just to avoid name conflicts) library containing the `pkg` package and some other files:- `README.md`: some information about the library- `sta663.py`: a standalone module- `run_sta663.py`: a script (intended for use as `python run_sta663.py`)
###Code
! ls -R sta663
! cat sta663/run_sta663.py
###Output
import pkg.foo as foo
from pkg.sub1.more_sub1_stuff import g3
from pkg.sub2.sub2_stuff import h2
print foo.f1()
print g3()
print h2()
###Markdown
Using setuptoolsAll we need to do is write a `setup.py` file.
###Code
%%file sta663/setup.py
from setuptools import setup
setup(name = "sta663-cliburn",
version = "1.0",
author='Cliburn Chan',
author_email='[email protected]',
url='http://people.duke.edu/~ccc14/sta-663-2018/',
py_modules = ['sta663'],
packages = ['pkg', 'pkg/sub1', 'pkg/sub2'],
scripts = ['run_sta663.py'],
python_requires='>=3',
)
###Output
Overwriting sta663/setup.py
###Markdown
Build a source archive for distribution
###Code
%%bash
cd sta663
python setup.py sdist
cd -
! ls -R sta663
###Output
_____no_output_____
###Markdown
DistributionYou can now distribute `sta663-1.0.tar.gz` to somebody else for installation in the usual way.
###Code
%%bash
cp sta663/dist/sta663-1.0.tar.gz /tmp
cd /tmp
tar xzf sta663-1.0.tar.gz
cd sta663-1.0
python setup.py install
import sta663
from sta663 import pkg
pkg.sub1.sub1_stuff.g1()
pkg.sub1.sub1_stuff.g2()
pkg.sub1.more_sub1_stuff.g3()
pkg.sub2.sub2_stuff.h1()
pkg.sub2.sub2_stuff.h2()
###Output
_____no_output_____
###Markdown
Distributing to PyPIFor testing, please upload to TestPyPI, which is cleaned on a regular basis. See instructions at https://packaging.python.org/guides/using-testpypi/#using-test-pypi - **Note 1**: You need to confirm your email address after registration.- **Note 2**: You can easily delete any uploaded packages by logging in at https://test.pypi.org.When your package is ready for public release, you can upload to PyPI. See instructions at https://packaging.python.org/tutorials/distributing-packages/#id78
###Code
%%bash
export TWINE_USERNAME=''
export TWINE_PASSWORD=''
twine upload --repository-url https://test.pypi.org/legacy/ sta663/dist/*
%%bash
pip install --index-url https://test.pypi.org/simple/ sta663
###Output
_____no_output_____ |
_notebooks/2020-10-22-k_nearest_neighbors.ipynb | ###Markdown
K-Nearest Neighbors (K-NN)> A tutorial on how to use K-Nearest Neighbors.- toc: true - badges: true- comments: true- categories: [jupyter, Classification] 0. Data Preprocessing 0.1 Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
0.2 Importing the dataset
###Code
dataset = pd.read_csv('Social_Network_Ads.csv')
dataset
###Output
_____no_output_____
###Markdown
0.3 Check if any null value
###Code
dataset.isna().sum()
dataset.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 400 entries, 0 to 399
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 User ID 400 non-null int64
1 Gender 400 non-null object
2 Age 400 non-null int64
3 EstimatedSalary 400 non-null int64
4 Purchased 400 non-null int64
dtypes: int64(4), object(1)
memory usage: 15.8+ KB
###Markdown
Drop User ID
###Code
dataset.drop('User ID', axis=1, inplace=True)
dataset.head()
###Output
_____no_output_____
###Markdown
0.4 Split into X & y
###Code
X = dataset.drop('Purchased', axis=1)
X.head()
y = dataset['Purchased']
y.head()
###Output
_____no_output_____
###Markdown
0.5 Convert categories into numbers
###Code
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
categorical_feature = ["Gender"]
one_hot = OneHotEncoder()
transformer = ColumnTransformer([("one_hot",
one_hot,
categorical_feature)],
remainder="passthrough")
transformed_X = transformer.fit_transform(X)
pd.DataFrame(transformed_X).head()
###Output
_____no_output_____
###Markdown
0.6 Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(transformed_X, y, test_size = 0.25, random_state = 2509)
###Output
_____no_output_____
###Markdown
0.7 Feature Scaling
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
1.Training the model on the Training set
###Code
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2)
classifier.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
1.1 Score
###Code
classifier.score(X_test,y_test)
###Output
_____no_output_____
###Markdown
2.Predicting the Test set results
###Code
y_pred = classifier.predict(X_test)
###Output
_____no_output_____
###Markdown
2.2 Making the Confusion Matrix
###Code
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
###Output
[[66 4]
[ 3 27]]
|
MODULO 2/Problema1.ipynb | ###Markdown
MARIO GAME Implement a program that prints a half-pyramid of a specified height, as indicated below. $ ./mario Height: 4 Specifications - Create a file called mario.py, a program that recreates a half-pyramid using hashes (#) for the blocks.- To make things more interesting, first prompt the user for the height of the half-pyramid, which must be a positive integer between 1 and 8, inclusive.- If the user does not provide a positive integer no greater than 8, prompt them again.- Then, generate (with the help of print and one or more loops) the desired half-pyramid.- Take care to align the bottom-left corner of your half-pyramid with the left edge of your terminal window. Usage Your program should behave as in the following example. $ ./mario Height: 4 Tests - Run your program as python mario.py and wait for the input prompt. Type -1 and press enter. Your program should reject this input as invalid, for example by prompting the user to type another number.- Run your program as python mario.py and wait for the input prompt. Type 0 and press enter. Your program should reject this input as invalid, for example by prompting the user to type another number.- Run your program as python mario.py and wait for the input prompt. Type 1 and press enter. Your program should produce the following output. Make sure the pyramid is aligned with the bottom-left corner of your terminal and that there are no extra spaces at the end of each line. Run your program as python mario.py and wait for the input prompt. Type 2 and press enter. Your program should produce the following output. Make sure the pyramid is aligned with the bottom-left corner of your terminal and that there are no extra spaces at the end of each line. Run your program as python mario.py and wait for the input prompt. Type 8 and press enter. Your program should produce the following output. Make sure the pyramid is aligned with the bottom-left corner of your terminal and that there are no extra spaces at the end of each line. Run your program as python mario.py and wait for the input prompt. Type 9 and press enter. Your program should reject this input as invalid, for example by prompting the user to type another number. Then type 2 and press enter. Your program should produce the following output. Make sure the pyramid is aligned with the bottom-left corner of your terminal and that there are no extra spaces at the end of each line. - Run your program as python mario.py and wait for the input prompt. Type foo and press enter. Your program should reject this input as invalid, for example by prompting the user to type another number.- Run your program as python mario.py and wait for the input prompt. Type nothing and press enter. Your program should reject this input as invalid, for example by prompting the user to type another number.
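A minimal sketch of the prompt-and-validate loop described in the specification is shown below (the prompt text and variable names are illustrative; the recorded solution in the next cell takes a simpler approach without re-prompting):
```python
# keep asking until we get an integer between 1 and 8 (inclusive)
while True:
    try:
        height = int(input("Height: "))
    except ValueError:
        continue                      # non-numeric or empty input: ask again
    if 1 <= height <= 8:
        break                         # valid height, stop re-prompting

# print the right-aligned half-pyramid
for row in range(1, height + 1):
    print(" " * (height - row) + "#" * row)
```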
###Code
numero = int(input('Ingresa un numero'))
for triangulo in range(1, numero+1):
print(' '*(numero-triangulo)+'#'*triangulo)
###Output
Ingresa un numero 30
|
workshop-resources/data-science-and-machine-learning/Data_Science_1/bioscience-project/DS1-Instructor.ipynb | ###Markdown
Exploration: NumPy and PandasA fundamental component of mastering data science concepts is applying and practicing them. This exploratory notebook is designed to provide you with a semi-directed space to do just that with the Python, NumPy, and pandas skills that you either covered in an in-person workshop or through Microsoft Learn. The specific examples in this notebook apply NumPy and pandas concepts in a life-sciences context, but they are applicable across disciplines and industry verticals.This notebook is divided into different stages of exploration. Initial suggestions for exploration are more structured than later ones and can provide some additional concepts and skills for tackling data-science challenges with real-world data. However, this notebook is designed to provide you with a launchpad for your personal experimentation with data science, so feel free to add cells and run your own experiments beyond those suggested here. That is the power and the purpose of a platform like Jupyter notebooks! Setup and Refresher on NotebooksBefore we begin, you will need to import the principal libraries used to explore and manipulate data in Python: NumPy and pandas. The cell below also imports Matplotlib, the main visualization library in Python. For simplicity and consistency with prior instruction, industry-standard aliases are applied to these imported libraries. The cell below also runs the `%matplotlib inline` magic command, which instructs Jupyter to display Matplotlib output directly in the notebook.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
As it might have been a while since you last worked with Jupyter notebooks, here is a quick refresher on efficiently using them. Notebook CellsNotebook cells are divided into Markdown text cells and interactive code cells. You can easily recognize code cells by the `[-]:` to the left of them.Code in a code cells has only been executed -- and is thus available for use in other code cells in the notebook -- if there is a number beside the code cell (for example, `[1]:`).To run the code in a cell, you can click the **Run** icon at the top left of a code cell or press **`Ctrl` + `Enter`**. Instructor's NoteOpen-ended questions for students and groups in this notebook are followed by italicized explanations in your copy of this notebook to more easily help you guide group discussions or answer student questions. Documentation and HelpDocumentation for Python objects and functions is available directly in Jupyter notebooks. In order to access the documentation, simply put a question mark in front of the object or function in a code cell and execute the cell (for example, `?print`). A window containing the documentation will then open at the bottom of the notebook.On to exploration! Section 1: Guided ExplorationFor this workshop, you will step into the role of a data scientist helping researchers examine a new dataset of genetic information. The dataset is entitled `genome.txt`. It contains close to 1 million single-nucleotide polymorphisms (SNPs) from build 36 of the human genome. SNPs are variations of a single nucleotide that occurs at specific positions in a genome and thus serve as useful sign posts for pinpointing susceptibility to diseases and other genetic conditions. As a result, they are particularly useful in the field of bioinformatics.This particular set of SNPs was originally generated by the genomics company 23andME in February 2011 and is drawn from the National Center for Biotechnology Information within the United States National Institutes of Health and is used here as data in the public domain. Import and Investigate the DataRecall that `pd.read_csv()` is the most common way to pull external data into a pandas DataFrame. Despite its name, this function works on a variety of file formats containing delimited data. Go ahead and try it on `genome.txt`.(If you need a refresher on the syntax for using this function, refer back to the Reactor pandas materials, to the build-in Jupyter documentation with `?pd.read_csv`, or to the [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html).)
###Code
# Import the data from genome.txt into a DataFrame.
genome = pd.read_csv('Data/genome.txt')
genome.head()
###Output
_____no_output_____
###Markdown
The `\t` separators indicate that the data in this text file is delimited by tabs. Try re-importing this data, and this time specify the separator to use.
###Code
# Import the data from genome.txt into a DataFrame and specify the separator.
genome = pd.read_csv('Data/genome.txt', sep='\t')
genome.head()
###Output
_____no_output_____
###Markdown
Is the DataFrame now in a more useful form? If so, what initial information can gather about the dataset, such as its features or number of observations? (The [`head()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.head.html) and [`info()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.info.html) methods are some of the tools that you can use for your preliminary investigation of the dataset.) - *The dataset has 966,977 rows and four columns, none of which have missing values. One column (position) has numeric data while the other columns have string data.*
###Code
# Use the info() method to print a concise summary of the genome DataFrame.
genome.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 966977 entries, 0 to 966976
Data columns (total 4 columns):
rsid 966977 non-null object
chromosome 966977 non-null object
position 966977 non-null int64
genotype 966977 non-null object
dtypes: int64(1), object(3)
memory usage: 29.5+ MB
###Markdown
Interpreting the Dataset**`rsid`** is short for Reference SNP cluster ID; it is the identification number used by researchers and in databases to refer to specific SNPs. (For more information, see the [23andMe](https://customercare.23andme.com/hc/en-us/articles/212196908-What-Are-RS-Numbers-Rsid-) reference page on RS numbers.)**`chromosome`** refers to which of the 23 human chromosomes (22 numbered plus the X or Y chromosome) or the mitochondrial chromosome an SNP appears on.**`position`** refers to the specific location of a given SNP on the reference human genome. (For more information on the reference human genome, see the [human genome overview page](https://www.ncbi.nlm.nih.gov/grc/human) on the Genome Reference Consortium web site.) **`genotype`** refers to the pair of nucleotides that form an SNP inherited at a given chromosomal position (one coming from the father and one coming from the mother). (For more information on genotypes -- as well as a list of notable genotypes -- visit the [genotype page](https://www.snpedia.com/index.php/Genotype) at SNPedia.)
###Code
# Use the value_counts() method to find the counts of unique values in the chromosome column.
genome['chromosome'].value_counts()
###Output
_____no_output_____
###Markdown
Numbers are good for precision, but often a visualization can provide a more succinct (and digestible) presentation of data. The `plot()` method is a convenient way to invoke Matplotlib directly from your DataFrame. Use the `plot()` method to creat a bar chart of of the SNP value counts per chromosome. (**Hint:** You can do this either by specifying the `kind` argument in the `plot()` method or by using the `plot.bar()` method.)
###Code
# Plot the value counts for the unique values in the chromosome column.
genome['chromosome'].value_counts().plot(kind='bar');
###Output
_____no_output_____
###Markdown
The `value_counts()` method automatically sorts the chromosomes from the highest occurrence of SNPs to the lowest. However, because we are exploring data related to the different chromosomes, viewing the data sorted by the order in which chromosomes appear in the dataset can be more useful to the researchers you are helping. Try to see if you can figure out how to do so. (The `sort_index()` method is one possibility. Another possibility gets raised in the comments in this Stack Overflow [article](https://stackoverflow.com/questions/43855474/changing-sort-in-value-counts).)
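One possible approach along those lines is sketched below. Note that `sort_index()` sorts the chromosome labels as strings, so '10' comes before '2'; the guided solution in the next cell preserves the original order instead:
```python
# sort by the chromosome labels themselves rather than by frequency
genome['chromosome'].value_counts().sort_index().plot(kind='bar');
```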
###Code
# Print the value counts for the unique values in the chromosome column in the order in which they occur.
genome['chromosome'].value_counts()[genome['chromosome'].unique()]
###Output
_____no_output_____
###Markdown
Now try plotting this output. (One of the joys of pandas is that you can keep chaining methods onto objects in order to generate powerful, compact code.)
###Code
# Plot the value counts for the unique values in the chromosome column in the order in which they occur.
genome['chromosome'].value_counts()[genome['chromosome'].unique()].plot(kind='bar');
###Output
_____no_output_____
###Markdown
Grouping DataThus far we have only looked at one column of the DataFrame, `chromosome`. However, pandas provides tools to quickly examine more of the DataFrame at once. If you haven't tried it already, the `groupby()` method can be useful in situations like this to turn the values of one of the columns in a DataFrame into the index for the DataFrame. Try using that method with the `chromosome` column coupled with the `count()` method.
###Code
# Use the groupby() and count() methods to create a DataFrame of grouped values.
genome.groupby('chromosome').count()
###Output
_____no_output_____
###Markdown
You can also plot your now-grouped DataFrame. (**Note:** Your bar graph might not turn out the way you expected. If so, discuss with your partner or group what might solve this problem.) - *If you do not specify a column within the grouped DataFrame to plot, Python will plot all three columns for each chromosome.*
###Code
# Plot the unique genotype counts from the grouped chromosome DataFrame.
genome.groupby('chromosome')['genotype'].count().plot(kind='bar');
###Output
_____no_output_____
###Markdown
Changing what you group by is another means of asking questions about your data. Now try grouping by `genotype`. What does this new grouping tell you? - *The SNPs in this dataset appear most frequently on certain genotypes.*
###Code
# Plot the unique chromosome counts from the grouped genotype DataFrame.
genome.groupby('genotype')['chromosome'].count().plot.bar();
###Output
_____no_output_____
###Markdown
**Note:** The **D** and **DD** are not nucleotides themselves; DNA nucleotides can only be adenine (A), thymine (T), guanine (G) and cytosine (C). Rather, the **D** stands for genotypes in which one or more base pairs (or even an entire part of a chromosome) has been **deleted** during DNA replication.Similarly, **I** and **II** represent genotypes of [wobble base pairs](https://en.wikipedia.org/wiki/Wobble_base_pair) that do not follow the conventional A-T and C-G pairing in DNA. Such genotypes are responsible for ailments such as [sickle-cell disease](https://en.wikipedia.org/wiki/Sickle_cell_disease). Pivoting DataYou can summarize your DataFrame in a fashion similar to pivot tables in spreadsheets by use of the [`pivot()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.pivot.html) function. Try using this function on your grouped data, but be advised that the `index`, `columns`, and `values` arguments for this function all require `str` or `object` inputs, so it might be easiest to first capture the output from the `groupby()` in a new DataFrame. (This Stack Overflow [answer](https://stackoverflow.com/questions/10373660/converting-a-pandas-groupby-output-from-series-to-dataframe) given by Wes McKinney, the creator of pandas, might prove useful in doing this.)
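As a side note, `pivot_table()` can collapse the grouping and pivoting into a single call; this is just an alternative sketch, while the guided steps below build the same table explicitly:
```python
# one-step alternative: count rsid entries per genotype/chromosome pair
pivot_alt = genome.pivot_table(index='genotype', columns='chromosome',
                               values='rsid', aggfunc='count')
pivot_alt.head()
```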
###Code
# Created a DataFrame by grouping the genome data by both genotype and chromosome.
grouped_genome = genome.groupby(['genotype', 'chromosome'])['rsid'].count().reset_index()
# Rename the rsid column to count as this is the role that this column will fill in this new DataFrame.
grouped_genome.rename(columns={'rsid': 'count'}, inplace=True)
grouped_genome.head()
# Create a pivoted DataFrame with genotype as the index, chromosome as the columns, and count as the values.
pivot_genome = grouped_genome.pivot(index='genotype',
columns='chromosome',
values='count')
pivot_genome.head()
###Output
_____no_output_____
###Markdown
Stacked Bar GraphWith the data pivoted, you can try somewhat more advanced visualization of your data such as a stacked bar graph in which you can see not just how many instances of different genotypes there are in the dataset, but also how many of those instances occur on each chromosome. This can be useful in helping your researchers understand the dataset in that they are not only interested in the number of genotype occurrences, but also on which chromosomes they occur. Build upon what you know about visualizing with Matplotlib and refer to the documentation for more hints about how to execute this.
###Code
# Plot the pivoted DataFrame to create a stacked barchart.
pivot_genome.plot.bar(stacked=True, figsize=(10,7));
# Access the documentation for the bar function from within Jupyter.
?pivot_genome.plot.bar
###Output
[1;31mSignature:[0m [0mpivot_genome[0m[1;33m.[0m[0mplot[0m[1;33m.[0m[0mbar[0m[1;33m([0m[0mx[0m[1;33m=[0m[1;32mNone[0m[1;33m,[0m [0my[0m[1;33m=[0m[1;32mNone[0m[1;33m,[0m [1;33m**[0m[0mkwds[0m[1;33m)[0m[1;33m[0m[1;33m[0m[0m
[1;31mDocstring:[0m
Vertical bar plot.
A bar plot is a plot that presents categorical data with
rectangular bars with lengths proportional to the values that they
represent. A bar plot shows comparisons among discrete categories. One
axis of the plot shows the specific categories being compared, and the
other axis represents a measured value.
Parameters
----------
x : label or position, optional
Allows plotting of one column versus another. If not specified,
the index of the DataFrame is used.
y : label or position, optional
Allows plotting of one column versus another. If not specified,
all numerical columns are used.
**kwds
Additional keyword arguments are documented in
:meth:`pandas.DataFrame.plot`.
Returns
-------
axes : matplotlib.axes.Axes or np.ndarray of them
An ndarray is returned with one :class:`matplotlib.axes.Axes`
per column when ``subplots=True``.
See Also
--------
pandas.DataFrame.plot.barh : Horizontal bar plot.
pandas.DataFrame.plot : Make plots of a DataFrame.
matplotlib.pyplot.bar : Make a bar plot with matplotlib.
Examples
--------
Basic plot.
.. plot::
:context: close-figs
>>> df = pd.DataFrame({'lab':['A', 'B', 'C'], 'val':[10, 30, 20]})
>>> ax = df.plot.bar(x='lab', y='val', rot=0)
Plot a whole dataframe to a bar plot. Each column is assigned a
distinct color, and each row is nested in a group along the
horizontal axis.
.. plot::
:context: close-figs
>>> speed = [0.1, 17.5, 40, 48, 52, 69, 88]
>>> lifespan = [2, 8, 70, 1.5, 25, 12, 28]
>>> index = ['snail', 'pig', 'elephant',
... 'rabbit', 'giraffe', 'coyote', 'horse']
>>> df = pd.DataFrame({'speed': speed,
... 'lifespan': lifespan}, index=index)
>>> ax = df.plot.bar(rot=0)
Instead of nesting, the figure can be split by column with
``subplots=True``. In this case, a :class:`numpy.ndarray` of
:class:`matplotlib.axes.Axes` are returned.
.. plot::
:context: close-figs
>>> axes = df.plot.bar(rot=0, subplots=True)
>>> axes[1].legend(loc=2) # doctest: +SKIP
Plot a single column.
.. plot::
:context: close-figs
>>> ax = df.plot.bar(y='speed', rot=0)
Plot only selected categories for the DataFrame.
.. plot::
:context: close-figs
>>> ax = df.plot.bar(x='lifespan', rot=0)
[1;31mFile:[0m c:\users\jeppesen\appdata\local\continuum\anaconda3\lib\site-packages\pandas\plotting\_core.py
[1;31mType:[0m method
###Markdown
Section 3: Individual ExplorationThere is still a lot you can do with this dataset or related ones. Ideas include: - Moving the legend for the stacked bar graph off to one side. This will make the visualization easier to read and more useful to the researchers.
###Code
# Students can play with settings like bbox_to_anchor to arrive at an appearance they like.
fig = plt.figure()
# Create an Axis class object for the stacked barchart.
ax = pivot_genome.plot.bar(stacked=True, figsize=(10,7))
# Specify values for the loc and bbox_to_anchor paramaters of the legend() method in order to render the legend to the side.
ax.legend(loc='upper center', bbox_to_anchor=(1.09, 1.014), ncol=1)
plt.show();
###Output
_____no_output_____
###Markdown
- Researching the average number of base pairs per human chromosome to see (and visualize) what proportion of each chromosome is represented in this dataset. Comparisons of this sort can help researchers quickly see what is out of the ordinary in a dataset such as this one.
###Code
# Create a new DataFrame to house the aggregated genotype totals by chromosome.
chrom_num = genome.groupby('chromosome')['genotype'].count().reset_index()
chrom_num.tail(10)
# It is subtle, but there are actually two rows for Chromosome 21.
# Combine the genotype totals for both rows and delete the surplus row.
chrom_num.iloc[20, 1] = chrom_num.iloc[20, 1] + chrom_num.iloc[21, 1]
chrom_num.drop([21], axis=0, inplace=True)
chrom_num.reset_index(inplace=True, drop=True)
chrom_num.tail(10)
# Now make the chromosome number the index for the DataFrame.
chrom_num.set_index('chromosome', inplace=True)
chrom_num.tail()
# Looked up number of base pairs per chromosome from https://ghr.nlm.nih.gov/chromosome
# and https://www.nature.com/scitable/topicpage/mtdna-and-mitochondrial-diseases-903/.
# Add these numbers as a new column to the DataFrame.
mil_base_pairs = [249, 243, 198, 191, 181, 171, 159, 146, 141, 133, 135, 134, 115, 107, 102, 90, 83, 78, 59, 63, 48, 51, 0.017, 155, 59]
chrom_num['million base pairs'] = mil_base_pairs
chrom_num.head()
# Find the proportion that the number of genotypes represents for each chromosome.
chrom_num['proportion'] = chrom_num['genotype'].divide(chrom_num['million base pairs']).divide(1000000)
chrom_num.tail()
# Now normalize the proportions of genotypes and basepairs based on the column totals to graph.
chrom_num['gt_proportion'] = chrom_num['genotype'].divide(chrom_num['genotype'].sum())
chrom_num['bp_proportion'] = chrom_num['proportion'].divide(chrom_num['proportion'].sum())
chrom_num.tail()
# Graph just the 'gt_proportion' and 'bp_proportion'.
chrom_num[['gt_proportion', 'bp_proportion']].plot(kind='bar');
# Mitochondrial genotypes are ridiculously overrepresented in the data.
# Remove that row and recompute to see how the other chromosomes stacked up.
chrom_num.drop(['MT'], axis=0, inplace=True)
chrom_num['proportion'] = chrom_num['genotype'].divide(chrom_num['million base pairs']).divide(1000000)
chrom_num['gt_proportion'] = chrom_num['genotype'].divide(chrom_num['genotype'].sum())
chrom_num['bp_proportion'] = chrom_num['proportion'].divide(chrom_num['proportion'].sum())
chrom_num.tail()
# Now regraph the two columns of interest from the DataFrame.
# Genotypes on the lower-numbered chromosomes are far overrepresented in proportion to the size of those chromosomes.
chrom_num[['gt_proportion', 'bp_proportion']].plot(kind='bar');
###Output
_____no_output_____ |
3 - Internal evaluation.ipynb | ###Markdown
Performance evaluation on complete data
###Code
# Performance measures
ytrue = [int(v) for v in test_data[test_data.columns[0]].values]
probs = model.predict(xtest)
ypred = [int(0.5 < p) for p in probs]
auc = roc_auc_score(ytrue, probs)
fpr, tpr, thresholds = roc_curve(ytrue, probs)
brier = brier_score_loss(ytrue, probs)
cal, dis = caldis(ytrue, probs)
acc = accuracy_score(ytrue, ypred)
precision, recall, f1score, support = precision_recall_fscore_support(ytrue, ypred)
conf = confusion_matrix(ytrue, ypred)
P = N = 0
TP = TN = 0
FP = FN = 0
for i in range(len(ytrue)):
if ytrue[i] == 1:
P += 1
if ypred[i] == 1: TP += 1
else: FN += 1
else:
N += 1
if ypred[i] == 0: TN += 1
else: FP += 1
sens = float(TP)/P
spec = float(TN)/N
# Positive and Negative Predictive Values
# https://en.wikipedia.org/wiki/Positive_and_negative_predictive_values
ppv = float(TP) / (TP + FP)
npv = float(TN) / (TN + FN)
# Likelihood ratios
# https://en.wikipedia.org/wiki/Likelihood_ratios_in_diagnostic_testing
lr_pos = sens / (1 - spec) if spec < 1 else np.inf
lr_neg = (1 - sens) / spec if 0 < spec else np.inf
# for i in range(len(ytrue)):
# print i, probs[i], ypred[i], ytrue[i]
# print "True outcomes:", ytrue
# print "Prediction :", ypred
print "Number of cases:", len(ytrue)
print "CFR :", 100 * (float(np.sum(ytrue)) / len(ytrue))
print ""
print "Measures of performance"
print "AUC :", auc
print "Brier :", brier
print "Calibration :", cal
print "Discrimination:", dis
print "Accuracy :", acc
print "Sensitivity :", sens
print "Specificity :", spec
print "PPV :", ppv
print "NPV :", npv
print "LR+ :", lr_pos
print "LR- :", lr_neg
with open(os.path.join(model_name, 'internal-eval.txt'), 'w') as of:
of.write("Measures of performance\n")
of.write("AUC : " + str(auc) + "\n")
of.write("Brier : " + str(brier) + "\n")
of.write("Calibration : " + str(cal) + "\n")
of.write("Discrimination: " + str(dis) + "\n")
of.write("Accuracy : " + str(acc) + "\n")
of.write("Sensitivity : " + str(sens) + "\n")
of.write("Specificity : " + str(spec) + "\n")
of.write("Sensitivity : " + str(sens) + "\n")
of.write("Specificity : " + str(spec) + "\n")
of.write("PPV : " + str(ppv) + "\n")
of.write("NPV : " + str(npv) + "\n")
of.write("LR+ : " + str(lr_pos) + "\n")
of.write("LR- : " + str(lr_neg) + "\n")
# ROC plot
fpr, tpr, thresholds = roc_curve(ytrue, probs)
fig, ax = plt.subplots()
plt.xlim([-0.1, 1.1])
plt.ylim([-0.1, 1.1])
plt.plot([0, 1], [0, 1], 'k--', c='grey')
plt.plot(fpr, tpr, color='black')
plt.xlabel('1 - Specificity')
plt.ylabel('Sensitivity')
fig.savefig(os.path.join(model_name, 'internal-roc-complete.pdf'))
# Calibration plot
cal_table = calibration_table(ytrue, probs, 10)
fig, ax = plt.subplots()
plt.plot([0.05, 0.95], [0.05, 0.95], '-', c='grey', linewidth=0.5, zorder=1)
plt.xlim([-0.1, 1.1])
plt.ylim([-0.1, 1.1])
plt.xlabel('Predicted Risk')
plt.ylabel('Observed Risk')
x = cal_table['pred_prob']
y = cal_table['true_prob']
# f = interp1d(x, y, kind='cubic')
# xnew = np.linspace(min(x), max(x), num=50, endpoint=True)
# plt.plot(xnew, f(xnew))
plt.plot(x, y, color='black')
fig.savefig(os.path.join(model_name, 'internal-cal-complete.pdf'))
# Sensitivity/specificity for an entire range of thresholds
ncells = 20
thresholds = np.linspace(0.0, 1.0, ncells + 1)
ytrue = [int(v) for v in test_data[test_data.columns[0]].values]
x = test_data[test_data.columns[1:]].values
probs = model.predict(x)
nt = len(thresholds) - 1
interval = [str(t) + "-" + str(t + 1.0/ncells) for t in thresholds[:-1]]
sensitivity = [[] for n in range(0, nt)]
specificity = [[] for n in range(0, nt)]
surv_count = [[] for n in range(0, nt)]
died_count = [[] for n in range(0, nt)]
for n in range(0, nt):
t = thresholds[n]
t1 = thresholds[n + 1]
ypred = [int(t < p) for p in probs]
in_cell = [t < p and p <= t1 for p in probs]
nsurv = 0
ndied = 0
P = N = 0
TP = TN = 0
for i in range(len(ytrue)):
if ytrue[i] == 1:
P += 1
if ypred[i] == 1: TP += 1
if in_cell[i]: ndied += 1
else:
N += 1
if ypred[i] == 0: TN += 1
if in_cell[i]: nsurv += 1
sens = float(TP)/P
spec = float(TN)/N
sensitivity[n].append(sens)
specificity[n].append(spec)
surv_count[n].append(nsurv)
died_count[n].append(ndied)
sensitivity = [np.mean(sensitivity[n]) for n in range(0, nt)]
specificity = [np.mean(specificity[n]) for n in range(0, nt)]
surv_count = [np.mean(surv_count[n]) for n in range(0, nt)]
died_count = [np.mean(died_count[n]) for n in range(0, nt)]
dfcomp = pd.DataFrame({'Threshold':pd.Series(np.array(thresholds[:-1] + 1.0/ncells)),
'Sensitivity':pd.Series(np.array(sensitivity)),
'Specificity':pd.Series(np.array(specificity)),
'Survival':pd.Series(np.array(surv_count)),
'Mortality':pd.Series(np.array(died_count))},
columns=['Threshold', 'Sensitivity', 'Specificity',
'Survival', 'Mortality'])
dfcomp.to_csv(os.path.join(model_name, 'sens-spec-complete.csv'))
create_plots(model_name, dfcomp, low_thres, med_thres, 'complete')
###Output
Low 29% 18/62 CFR=0%
Medium 46% 29/62 CFR=17%
High 24% 15/62 CFR=60%
###Markdown
Calculations and plots on imputed data
###Code
# Odds ratios with 95% CI
ci_critical_value = 1.96 # For 95% CI
boot_folder = os.path.join(model_name, 'boot')
imp_folder = os.path.join(model_name, 'imp')
odds_values = {}
data_files = glob.glob(imp_folder + '/imputation-*.csv')
for fn in data_files:
dat = pd.read_csv(fn, na_values="\\N")[variables]
val = dat[dat.columns[1:]].values
pos0 = fn.index("imputation-") + 11
pos1 = fn.index(".csv")
idx = fn[pos0:pos1]
odds_boot = {}
index_files = glob.glob(boot_folder + '/index-' + idx + '-*.txt')
model_files = glob.glob(boot_folder + '/model-' + idx + '-*.txt')
nboot = len(index_files)
for b in range(0, nboot):
rows = []
with open(index_files[b]) as ifile:
lines = ifile.readlines()
for line in lines:
pieces = line.split()[1:]
rows += [int(i) - 1 for i in pieces]
model = LogRegModel(model_files[b], model_format='GLM')
model.loadVarTypes(isth_data_file, isth_dict_file)
odds = model.getOddRatios(xtest)
for k in odds:
if not k in odds_boot: odds_boot[k] = 0
odds_boot[k] += odds[k]
for k in odds_boot:
odds_boot[k] = odds_boot[k] / nboot
for k in odds_boot:
if not k in odds_values: odds_values[k] = []
if 0 < odds_boot[k] and odds_boot[k] < 100:
odds_values[k] = odds_values[k] + [odds_boot[k]]
# Loading the model's odds
odds_model = {}
with open(model_oddratios) as f:
lines = f.readlines()
for line in lines:
pieces = line.split()
odds_model[pieces[0]] = pieces[1]
# Adding the CIs to the model's file
with open(model_oddratios, 'w') as f:
for k in odds_values:
mean_odds = np.mean(odds_values[k])
std_odds = np.std(odds_values[k])
ci_odds = [mean_odds - ci_critical_value * std_odds, mean_odds + ci_critical_value * std_odds]
odds_model[k] = odds_model[k] + ' (' + str(ci_odds[0]) + ', ' + str(ci_odds[1]) + ')'
print k, odds_model[k]
f.write(k + ' ' + odds_model[k] + '\n')
# Averaged ROC curve
ci_critical_value = 1.96 # For 95% CI
boot_folder = os.path.join(model_name, 'boot')
imp_folder = os.path.join(model_name, 'imp')
fig, ax = plt.subplots()
plt.xlim([-0.2, 1.1])
plt.ylim([-0.1, 1.1])
plt.plot([0, 1], [0, 1], 'k--', c='grey', linewidth=0.5)
plt.xlabel('1 - Specificity')
plt.ylabel('Sensitivity')
data_files = glob.glob(imp_folder + '/imputation-*.csv')
imp_fpr = []
imp_tpr = []
auc_values = []
for fn in data_files:
dat = pd.read_csv(fn, na_values="\\N")[variables]
val = dat[dat.columns[1:]].values
pos0 = fn.index("imputation-") + 11
pos1 = fn.index(".csv")
idx = fn[pos0:pos1]
index_files = glob.glob(boot_folder + '/index-' + idx + '-*.txt')
model_files = glob.glob(boot_folder + '/model-' + idx + '-*.txt')
# Micro-averaging the ROC curves from bootstrap samples:
# http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html
ytrue = []
probs = []
ypred = []
nboot = len(index_files)
for b in range(0, nboot):
rows = []
with open(index_files[b]) as ifile:
lines = ifile.readlines()
for line in lines:
pieces = line.split()[1:]
rows += [int(i) - 1 for i in pieces]
ytrue += [int(v) for v in dat[dat.columns[0]].values[rows]]
x = val[rows,:]
model = LogRegModel(model_files[b], model_format='GLM')
pboot = model.predict(x)
probs += list(pboot)
ypred += [int(0.5 < p) for p in pboot]
auc = roc_auc_score(ytrue, probs)
fpr, tpr, thresholds = roc_curve(ytrue, probs)
plt.plot(fpr, tpr, color='black', alpha=0.05)
imp_fpr += [fpr]
imp_tpr += [tpr]
auc_values += [auc]
# Macro-average of ROC curve over all imputations.
# First aggregate all false positive rates
all_fpr = np.unique(np.concatenate(imp_fpr))
# Then interpolate all ROC curves at this points
mean_tpr = np.zeros_like(all_fpr)
for i in range(0, len(imp_fpr)):
mean_tpr += interp(all_fpr, imp_fpr[i], imp_tpr[i])
mean_tpr /= len(imp_fpr)
# mean_auc = metrics.auc(all_fpr, mean_tpr)
mean_auc = np.mean(auc_values)
std_auc = np.std(auc_values)
ci_auc = [mean_auc - ci_critical_value * std_auc, mean_auc + ci_critical_value * std_auc]
print "Mean AUC:", mean_auc
print "95% CI:", ci_auc
with open(os.path.join(model_name, 'average-roc-bootstrap.txt'), 'w') as f:
f.write(str(mean_auc) + ' (' + str(ci_auc[0]) + ',' + str(ci_auc[1]) +')')
plt.plot(all_fpr, mean_tpr, color='red', alpha=1.0)
fig.savefig(os.path.join(model_name, 'average-roc-bootstrap.pdf'))
# Average calibration plot
boot_folder = os.path.join(model_name, 'boot')
imp_folder = os.path.join(model_name, 'imp')
fig, ax = plt.subplots()
# plt.plot([0, 1], [0, 1], '-', c='grey', linewidth=0.8 * 1, zorder=1)
plt.plot([0.05, 0.95], [0.05, 0.95], '-', c='grey', linewidth=0.5, zorder=1)
plt.xlim([-0.1, 1.1])
plt.ylim([-0.1, 1.1])
plt.xlabel('Predicted Risk')
plt.ylabel('Observed Risk')
# lgnd = plt.legend(loc='lower right', scatterpoints=1, fontsize=10)
data_files = glob.glob(imp_folder + '/imputation-*.csv')
imp_ppr = []
imp_tpr = []
cal_values = []
for fn in data_files:
dat = pd.read_csv(fn, na_values="\\N")[variables]
val = dat[dat.columns[1:]].values
pos0 = fn.index("imputation-") + 11
pos1 = fn.index(".csv")
idx = fn[pos0:pos1]
index_files = glob.glob(boot_folder + '/index-' + idx + '-*.txt')
model_files = glob.glob(boot_folder + '/model-' + idx + '-*.txt')
ytrue = []
probs = []
ypred = []
nboot = len(index_files)
for b in range(0, nboot):
rows = []
with open(index_files[b]) as ifile:
lines = ifile.readlines()
for line in lines:
pieces = line.split()[1:]
rows += [int(i) - 1 for i in pieces]
ytrue += [int(v) for v in dat[dat.columns[0]].values[rows]]
x = val[rows,:]
model = LogRegModel(model_files[b], model_format='GLM')
pboot = model.predict(x)
probs += list(pboot)
ypred += [int(0.5 < p) for p in pboot]
cal_table = calibration_table(ytrue, probs, 10)
cal_values += [np.sqrt(calibration2(ytrue, probs, 10))]
# sizes = cal_table['count'] / 20
# plt.scatter(cal_table['pred_prob'], cal_table['true_prob'], s=sizes, c='red', marker='o', lw = 0, alpha=0.8, zorder=2)
x = cal_table['pred_prob']
y = cal_table['true_prob']
f = interp1d(x, y, kind='cubic')
xnew = np.linspace(min(x), max(x), num=50, endpoint=True)
plt.plot(xnew, f(xnew), color='black', alpha=0.1)
imp_ppr += [x]
imp_tpr += [y]
all_ppr = np.unique(np.concatenate(imp_ppr))
mean_tpr = np.zeros_like(all_ppr)
for i in range(0, len(imp_ppr)):
mean_tpr += interp(all_ppr, imp_ppr[i], imp_tpr[i])
mean_tpr /= len(imp_ppr)
# Won't use the mean of the cal_values obtained from each bootstrap sample because they have, in general,
# non-uniform occupancy of the bins, so this results in abnormally low estimates of calibration, especially
# when bins with high discrepancies between predicted and observed probabilities are sparsely populated.
# Aggregating all predictions gives good occupancy, but the calibration calculation is done within
# each bootstrap, so we just use the average results for the predicted and true probabilities. Also, we take
# the square root of the calibration, so it really represents the average difference between both curves.
# v = np.array(all_ppr - mean_tpr)
# mean_cal = np.sqrt(np.mean(v * v))
# print "Mean calibration:", mean_cal
mean_cal = np.mean(cal_values)
std_cal = np.std(cal_values)
ci_cal = [mean_cal - ci_critical_value * std_cal, mean_cal + ci_critical_value * std_cal]
print "Mean calibration:", mean_cal
print "95% CI:", ci_cal
with open(os.path.join(model_name, 'average-calibration-bootstrap.txt'), 'w') as f:
f.write(str(mean_cal) + ' (' + str(ci_cal[0]) + ',' + str(ci_cal[1]) +')')
xnew = np.linspace(min(all_ppr), max(all_ppr), num=2 * len(all_ppr), endpoint=True)
f = interp1d(all_ppr, mean_tpr, kind='cubic')
plt.plot(xnew, f(xnew), color='red', alpha=1.0)
fig.savefig(os.path.join(model_name, 'average-calibration-bootstrap.pdf'))
# Final performance figures for paper
ncells = 20
thresholds = np.linspace(0.0, 1.0, ncells + 1)
nt = len(thresholds) - 1
sensitivity = [[] for n in range(0, nt)]
specificity = [[] for n in range(0, nt)]
surv_count = [[] for n in range(0, nt)]
died_count = [[] for n in range(0, nt)]
boot_folder = os.path.join(model_name, 'boot')
imp_folder = os.path.join(model_name, 'imp')
data_files = glob.glob(imp_folder + '/imputation-*.csv')
imp_ppr = []
imp_tpr = []
for fn in data_files:
dat = pd.read_csv(fn, na_values="\\N")[variables]
val = dat[dat.columns[1:]].values
pos0 = fn.index("imputation-") + 11
pos1 = fn.index(".csv")
idx = fn[pos0:pos1]
index_files = glob.glob(boot_folder + '/index-' + idx + '-*.txt')
model_files = glob.glob(boot_folder + '/model-' + idx + '-*.txt')
ytrue = []
probs = []
ypred = []
nboot = len(index_files)
for b in range(0, nboot):
rows = []
with open(index_files[b]) as ifile:
lines = ifile.readlines()
for line in lines:
pieces = line.split()[1:]
rows += [int(i) - 1 for i in pieces]
ytrue = [int(v) for v in dat[dat.columns[0]].values[rows]]
x = val[rows,:]
model = LogRegModel(model_files[b], model_format='GLM')
probs = model.predict(x)
for n in range(0, nt):
t = thresholds[n]
t1 = thresholds[n + 1]
ypred = [int(t < p) for p in probs]
in_cell = [t < p and p <= t1 for p in probs]
nsurv = 0
ndied = 0
P = N = 0
TP = TN = 0
for i in range(len(ytrue)):
if ytrue[i] == 1:
P += 1
if ypred[i] == 1: TP += 1
if in_cell[i]: ndied += 1
else:
N += 1
if ypred[i] == 0: TN += 1
if in_cell[i]: nsurv += 1
sens = float(TP)/P
spec = float(TN)/N
sensitivity[n].append(sens)
specificity[n].append(spec)
surv_count[n].append(nsurv)
died_count[n].append(ndied)
sensitivity = [np.mean(sensitivity[n]) for n in range(0, nt)]
specificity = [np.mean(specificity[n]) for n in range(0, nt)]
surv_count = [np.mean(surv_count[n]) for n in range(0, nt)]
died_count = [np.mean(died_count[n]) for n in range(0, nt)]
dfboot = pd.DataFrame({'Threshold':pd.Series(np.array(thresholds[:-1] + 1.0/ncells)),
'Sensitivity':pd.Series(np.array(sensitivity)),
'Specificity':pd.Series(np.array(specificity)),
'Survival':pd.Series(np.array(surv_count)),
'Mortality':pd.Series(np.array(died_count))},
columns=['Threshold', 'Sensitivity', 'Specificity',
'Survival', 'Mortality'])
dfboot.to_csv(os.path.join(model_name, 'sens-spec-bootstrap.csv'))
create_plots(model_name, dfboot, low_thres, med_thres, 'bootstrap', mean_auc, mean_cal)
###Output
Low 30% 32/105 CFR=2%
Medium 40% 42/105 CFR=14%
High 29% 31/105 CFR=59%
|
semana_2/dia_1/.ipynb_checkpoints/RESU_Ejercicios Colecciones-checkpoint.ipynb | ###Markdown
Exercise 1Given the following list:> ```ejer_1 = [1,2,3,4,5]```Reverse it so that it ends up like this:> ```ejer_1 = [5,4,3,2,1]```
###Code
ejer_1 = [1,2,3,4,5]
ejer_1.reverse()
print(ejer_1)
###Output
[5, 4, 3, 2, 1]
###Markdown
Exercise 2Square every element of the list> ```ejer_2 = [1,2,3,4,5]```
###Code
ejer_2 = [1,2,3,4,5]
for i in range(len(ejer_2)):
ejer_2[i] = ejer_2[i]**2
print(ejer_2)
###Output
[1, 4, 9, 16, 25]
###Markdown
Exercise 3Create a new list with every combination of the following two lists:> ```ejer_3_1 = ["Hola", "amigo"]```>> ```ejer_3_2 = ["Que", "tal"]```You should obtain the following output:['Hola Que', 'Hola tal', 'amigo Que', 'amigo tal']
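An equivalent one-liner with the standard library is sketched below (it assumes the two lists `ejer_3_1` and `ejer_3_2` from the statement have already been defined; the step-by-step solution follows in the next cell):
```python
from itertools import product

print([f"{a} {b}" for a, b in product(ejer_3_1, ejer_3_2)])
```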
###Code
ejer_3_1 = ["Hola", "amigo"]
ejer_3_2 = ["Que", "tal"]
ejer_3_tot = []
for i in ejer_3_1:
for j in ejer_3_2:
ejer_3_tot.append(i + " " + j)
print(ejer_3_tot)
###Output
['Hola Que', 'Hola tal', 'amigo Que', 'amigo tal']
###Markdown
Exercise 4Given the following list, find the value 45 and replace it with 0> ```ejer_4 = [20, 47, 19, 29, 45, 67, 78, 90]```
###Code
ejer_4 = [20, 47, 19, 29, 45, 67, 78, 90]
print(ejer_4)
indice = ejer_4.index(45)
ejer_4[indice] = 0
print(ejer_4)
###Output
[20, 47, 19, 29, 45, 67, 78, 90]
[20, 47, 19, 29, 0, 67, 78, 90]
###Markdown
Exercise 5Given the following list, remove every value equal to 3> ```ejer_5 = [3, 20, 3, 47, 19, 3, 29, 45, 67, 78, 90, 3, 3]```
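Note that removing items from a list while iterating over that same list skips neighbouring elements, which is why the in-place solution in the next cell still leaves a couple of 3s behind. A sketch of a safer approach using a list comprehension:
```python
ejer_5 = [3, 20, 3, 47, 19, 3, 29, 45, 67, 78, 90, 3, 3]
# build a new list that keeps everything except the 3s
ejer_5 = [x for x in ejer_5 if x != 3]
print(ejer_5)   # [20, 47, 19, 29, 45, 67, 78, 90]
```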
###Code
ejer_5 = [3, 20, 3, 47, 19, 3, 3, 29, 45, 67, 78, 90, 3, 3]
for i in ejer_5:
print(i)
print(ejer_5)
if i == 3:
ejer_5.remove(3)
#print(ejer_5)
print(ejer_5)
###Output
3
[3, 20, 3, 47, 19, 3, 3, 29, 45, 67, 78, 90, 3, 3]
3
[20, 3, 47, 19, 3, 3, 29, 45, 67, 78, 90, 3, 3]
19
[20, 47, 19, 3, 3, 29, 45, 67, 78, 90, 3, 3]
3
[20, 47, 19, 3, 3, 29, 45, 67, 78, 90, 3, 3]
29
[20, 47, 19, 3, 29, 45, 67, 78, 90, 3, 3]
45
[20, 47, 19, 3, 29, 45, 67, 78, 90, 3, 3]
67
[20, 47, 19, 3, 29, 45, 67, 78, 90, 3, 3]
78
[20, 47, 19, 3, 29, 45, 67, 78, 90, 3, 3]
90
[20, 47, 19, 3, 29, 45, 67, 78, 90, 3, 3]
3
[20, 47, 19, 3, 29, 45, 67, 78, 90, 3, 3]
[20, 47, 19, 29, 45, 67, 78, 90, 3, 3]
###Markdown
Exercise 61. Create a tuple with 3 elements2. Create another tuple with a single element and check its type3. Create a tuple with elements of different types4. Print the first and last elements of the tuple from part 3. Use `len` for the last one5. Add an element to the tuple from part 3.6. Remove an element from the tuple of part 5 that sits roughly in the middle.7. Convert the tuple from part 5 into a list
###Code
# Part 1
ejer_6_1 = (1,2,3)
print(ejer_6_1)
# Part 2
ejer_6_2 = (4,)
print(ejer_6_2)
print(type(ejer_6_2))
# Part 3
ejer_6_3 = ("a", 5, True, 7.)
print(ejer_6_3)
# Part 4
print(ejer_6_3[0])
print(ejer_6_3[len(ejer_6_3)-1])
# Part 5
ejer_6_5 = (8,)
ejer_6_5_1 = ejer_6_3 + ejer_6_5
print(ejer_6_5_1)
# Part 6
ejer_6_1 = ejer_6_5_1[0:2]
ejer_6_2 = ejer_6_5_1[3:5]
print(ejer_6_1 + ejer_6_2)
# Part 7
print(list(ejer_6_5_1))
###Output
(1, 2, 3)
(4,)
<class 'tuple'>
('a', 5, True, 7.0)
a
7.0
('a', 5, True, 7.0, 8)
('a', 5, 7.0, 8)
['a', 5, True, 7.0, 8]
###Markdown
Exercise 7Concatenate all the elements of the tuple into a single string. Use the string method `.join()` to do it> ```ejer_7 = ("cien", "cañones", "por", "banda")```Result: `cien cañones por banda`
###Code
ejer_7 = ("cien", "cañones", "por", "banda")
print(" ".join(ejer_7))
###Output
cien cañones por banda
###Markdown
Exercise 8Get the third element of the following tuple, and the third element counting from the end> ```ejer_8 = (3, 20, 3, 47, 19, 3, 29, 45, 67, 78, 90, 3, 3)```
###Code
ejer_8 = (3, 20, 3, 47, 19, 3, 29, 45, 67, 78, 90, 3, 3)
print(ejer_8[2])
print(ejer_8[-3])
###Output
3
90
###Markdown
Exercise 91. How many times does 3 appear in the following tuple?2. Create a new tuple with the elements from position 5 to 10.3. How many elements does the tuple `ejer_9` have?> ```ejer_9 = (3, 20, 3, 47, 19, 3, 29, 45, 67, 78, 90, 3, 3, 5, 2, 4, 7, 9, 4, 2, 4, 3, 3, 4, 6, 7)```
###Code
ejer_9 = (3, 20, 3, 47, 19, 3, 29, 45, 67, 78, 90, 3, 3, 5, 2, 4, 7, 9, 4, 2, 4, 3, 3, 4, 6, 7)
print(ejer_9.count(3))
ejer_9_2 = ejer_9[4:10]
print(ejer_9_2)
print(len(ejer_9))
###Output
7
(19, 3, 29, 45, 67, 78)
26
###Markdown
Exercise 10Check whether the number 60 is in the tuple from exercise 9
###Code
60 in ejer_9
###Output
_____no_output_____
###Markdown
Exercise 111. Convert the tuple from exercise 10 into a list2. Convert the tuple from exercise 10 into a set3. Convert the tuple from exercise 10 into a dictionary. Use the indices as well
###Code
ejer_11 = (3, 20, 3, 47, 19, 3, 29, 45, 67, 78, 90, 3, 3, 5, 2, 4, 7, 9, 4, 2, 4, 3, 3, 4, 6, 7)
# Convert to a list
print(list(ejer_11))
# Convert to a set
print(set(ejer_11))
ejer_11_dict = {}
# Convert to a dictionary
for i,j in enumerate(ejer_11):
#print(i)
#print(j)
ejer_11_dict[i] = j
print(ejer_11_dict)
###Output
[3, 20, 3, 47, 19, 3, 29, 45, 67, 78, 90, 3, 3, 5, 2, 4, 7, 9, 4, 2, 4, 3, 3, 4, 6, 7]
{2, 3, 67, 5, 4, 7, 6, 9, 45, 78, 47, 19, 20, 90, 29}
{0: 3, 1: 20, 2: 3, 3: 47, 4: 19, 5: 3, 6: 29, 7: 45, 8: 67, 9: 78, 10: 90, 11: 3, 12: 3, 13: 5, 14: 2, 15: 4, 16: 7, 17: 9, 18: 4, 19: 2, 20: 4, 21: 3, 22: 3, 23: 4, 24: 6, 25: 7}
###Markdown
 Exercise 12 Convert the following list of tuples into a dictionary> ```ejer_12 = [("x", 1), ("x", 2), ("x", 3), ("y", 1), ("y", 2), ("z", 1)]```
###Code
ejer_12 = [("x", 1), ("x", 2), ("x", 3), ("y", 1), ("y", 2), ("z", 1)]
ejer_12_dic = {}
for i,j in ejer_12:
#print(i)
#print(j)
ejer_12_dic[i] = j
print(ejer_12_dic)
dict(ejer_12)
ejer_12_dic["x"]
###Output
_____no_output_____
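###Markdown
 Side note (not part of the original exercise): when building a dict from key/value pairs with repeated keys, `dict()` keeps only the last value seen for each key, which is why "x" ends up mapped to 3 above. A minimal sketch:
###Code
pairs = [("x", 1), ("x", 2), ("x", 3), ("y", 1), ("y", 2), ("z", 1)]
d = dict(pairs)   # later pairs overwrite earlier ones for the same key
assert d == {"x": 3, "y": 2, "z": 1}
###Output
 _____no_output_____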
###Markdown
 Exercise 13 1. Create a list with the dictionary keys sorted in ascending order 2. Create another list with the values sorted in descending order 3. Add a new key/value pair 4. Look up key 2 in the dictionary 5. Iterate over the keys and values of the dictionary with a single for loop> ```ejer_13 = {4:78, 2:98, 8:234, 5:29}```
###Code
ejer_13 = {4:78, 2:98, 8:234, 5:29}
# Keys sorted in ascending order
list_1 = sorted(list(ejer_13.keys()))
# Values sorted in descending order
list_2 = sorted(list(ejer_13.values()), reverse = True)
print(list_1)
print(list_2)
# Add a new key/value pair
ejer_13[9] = 65
print(ejer_13)
print(2 in ejer_13)
for clave, valor in ejer_13.items():
print(f"Clave: {clave}, Valor -> {valor}")
###Output
[2, 4, 5, 8]
[234, 98, 78, 29]
{4: 78, 2: 98, 8: 234, 5: 29, 9: 65}
True
Clave: 4, Valor -> 78
Clave: 2, Valor -> 98
Clave: 8, Valor -> 234
Clave: 5, Valor -> 29
Clave: 9, Valor -> 65
###Markdown
 Exercise 14 Merge both dictionaries. To do so, use `update()`> ```ejer_14_1 = {1: 11, 2: 22}```>> ```ejer_14_2 = {3: 33, 4: 44}```
###Code
ejer_14_1 = {1: 11, 2: 22}
ejer_14_2 = {3: 33, 4: 44}
ejer_14_1.update(ejer_14_2)
print(ejer_14_1)
###Output
{1: 11, 2: 22, 3: 33, 4: 44}
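###Markdown
 Side note (not part of the original exercise): `update` modifies the first dictionary in place. If you want a merged copy that leaves both inputs untouched, unpacking works on any Python 3 version (and Python 3.9+ also offers the `|` operator):
###Code
a = {1: 11, 2: 22}
b = {3: 33, 4: 44}
merged = {**a, **b}   # new dict; a and b are unchanged
assert merged == {1: 11, 2: 22, 3: 33, 4: 44} and a == {1: 11, 2: 22}
###Output
 _____no_output_____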
###Markdown
 Exercise 15 Add up all the values in the dictionary> ```ejer_15 = {1: 11, 2: 22, 3: 33, 4: 44, 5: 55}```
###Code
ejer_15 = {1: 11, 2: 22, 3: 33, 4: 44, 5: 55}
print(sum(ejer_15.values()))
###Output
165
###Markdown
 Exercise 16 Multiply all the values in the dictionary> ```ejer_16 = {1: 11, 2: 22, 3: 33, 4: 44, 5: 55}```
###Code
ejer_16 = {1: 11, 2: 22, 3: 33, 4: 44, 5: 55}
prod = 1
for i in ejer_16.values():
prod = prod * i
print(prod)
###Output
19326120
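###Markdown
 Side note (not part of the original exercise, and assuming Python 3.8+): the same product can be computed with `math.prod`:
###Code
import math
values = {1: 11, 2: 22, 3: 33, 4: 44, 5: 55}.values()
assert math.prod(values) == 19326120
###Output
 _____no_output_____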
###Markdown
 Exercise 17 1. Create a set with three elements 2. Add a fourth 3. Remove the last element added 4. Remove the element 10 if it is present; use `discard()`
###Code
# Create a set
ejer_17 = {5,8,2}
print(ejer_17)
# Add an element
ejer_17.add(6)
print(ejer_17)
# Remove the last element added
ejer_17.remove(6)
# Remove the element 10 if present (discard does not raise when it is missing)
ejer_17.discard(10)
###Output
{8, 2, 5}
{8, 2, 5, 6}
|
docs/T674201_Graph_embeddings.ipynb | ###Markdown
Graph ML Embeddings Installations
###Code
!pip install node2vec
!pip install karateclub
!pip install python-Levenshtein
!pip install gensim==3.8.0
!pip install git+https://github.com/palash1992/GEM.git
!pip install stellargraph[demos]==1.2.1
###Output
_____no_output_____
###Markdown
Imports
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pathlib import Path
import networkx as nx
from networkx.drawing.layout import bipartite_layout
import networkx.algorithms.community as nx_comm
import os
from scipy.io import mmread
from collections import Counter
import random
from node2vec import Node2Vec
from karateclub import Graph2Vec
from node2vec.edges import HadamardEmbedder
import os
import numpy as np
import pandas as pd
import networkx as nx
import stellargraph as sg
from stellargraph.mapper import FullBatchNodeGenerator
from stellargraph.layer import GCN
import tensorflow as tf
from tensorflow.keras import layers, optimizers, losses, metrics, Model
from sklearn import preprocessing, model_selection
from IPython.display import display, HTML
from scipy.linalg import sqrtm
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
%matplotlib inline
default_edge_color = 'gray'
default_node_color = '#407cc9'
enhanced_node_color = '#f5b042'
enhanced_edge_color = '#cc2f04'
###Output
_____no_output_____
###Markdown
Plot utils Draw graph
###Code
def draw_graph(G, pos_nodes=None, node_names={}, node_size=50, plot_weight=False):
pos_nodes = pos_nodes if pos_nodes else nx.spring_layout(G)
nx.draw(G, pos_nodes, with_labels=False, node_size=node_size, edge_color='gray', arrowsize=30)
pos_attrs = {}
for node, coords in pos_nodes.items():
pos_attrs[node] = (coords[0], coords[1] + 0.08)
nx.draw_networkx_labels(G, pos_attrs, labels=node_names, font_family='serif', font_size=20)
if plot_weight:
pos_attrs = {}
for node, coords in pos_nodes.items():
pos_attrs[node] = (coords[0], coords[1] + 0.08)
nx.draw_networkx_labels(G, pos_attrs, font_family='serif', font_size=20)
edge_labels=dict([((a,b,),d["weight"]) for a,b,d in G.edges(data=True)])
nx.draw_networkx_edge_labels(G, pos_nodes, edge_labels=edge_labels)
plt.axis('off')
axis = plt.gca()
axis.set_xlim([1.2*x for x in axis.get_xlim()])
axis.set_ylim([1.2*y for y in axis.get_ylim()])
###Output
_____no_output_____
###Markdown
Draw enhanced path on the graph
###Code
def draw_enhanced_path(G, path_to_enhance, node_names={}, filename=None, layout=None):
path_edges = list(zip(path_to_enhance, path_to_enhance[1:]))
pos_nodes = nx.spring_layout(G) if layout is None else layout(G)
nx.draw(G, pos_nodes, with_labels=False, node_size=50, edge_color='gray')
pos_attrs = {}
for node, coords in pos_nodes.items():
pos_attrs[node] = (coords[0], coords[1] + 0.08)
nx.draw_networkx_labels(G, pos_attrs, labels=node_names, font_family='serif')
nx.draw_networkx_edges(G,pos_nodes,edgelist=path_edges, edge_color='#cc2f04', style='dashed', width=2.0)
plt.axis('off')
axis = plt.gca()
axis.set_xlim([1.2*x for x in axis.get_xlim()])
axis.set_ylim([1.2*y for y in axis.get_ylim()])
if filename:
plt.savefig(filename, format="png")
###Output
_____no_output_____
###Markdown
Embedding module
###Code
class Embeddings:
"""
Notes
-----
Shallow embedding methods
These methods are able to learn and return only the embedding values
for the learned input data. Generally speaking, all the unsupervised
embedding algorithms based on matrix factorization use the same principle.
They all factorize an input graph expressed as a matrix in different
components (commonly known as matrix factorization). The main difference
between each method lies in the loss function used during the optimization
process. Indeed, different loss functions allow creating an embedding space
that emphasizes specific properties of the input graph.
"""
@staticmethod
def generate_random_graphs(n_graphs=20, nx_class=None):
def generate_random():
n = random.randint(6, 20)
k = random.randint(5, n)
p = random.uniform(0, 1)
if nx_class:
return nx_class(n,k,p), [n,k,p]
else:
return nx.watts_strogatz_graph(n,k,p), [n,k,p]
return [generate_random() for x in range(n_graphs)]
def graph_embedding(self, graph_list=None, dim=2, wl_iterations=10):
"""
Given a dataset with m different graphs, the task is to build a machine
learning algorithm capable of classifying a graph into the right class.
We can then see this problem as a classification problem, where the
dataset is defined by a list of pairs, <Gi,yi> , where Gi is a graph
and yi is the class the graph belongs to.
Representation learning (network embedding) is the task that aims to
learn a mapping function f:G→Rn , from a discrete graph to a continuous
domain. Function f will be capable of performing a low-dimensional
vector representation such that the properties (local and global) of
graph G are preserved.
"""
if graph_list is None:
graph_list = self.generate_random_graphs()
model = Graph2Vec(dimensions=dim, wl_iterations=wl_iterations)
model.fit([x[0] for x in graph_list])
graph_embeddings = model.get_embedding()
return graph_list, model, graph_embeddings
def node_embedding(self, G=None, dim=2, window=10):
"""
Given a (possibly large) graph G=(V,E), the goal is to classify each
vertex v∈V into the right class. In this setting, the dataset includes
G and a list of pairs, <vi,yi>, where vi is a node of graph G and yi is
the class to which the node belongs. In this case, the mapping function
would be f:V→Rn.
The Node2Vec algorithm can be seen as an extension of DeepWalk. Indeed,
as with DeepWalk, Node2Vec also generates a set of random walks used as
input to a skip-gram model. Once trained, the hidden layers of the skip-gram
model are used to generate the embedding of the node in the graph. The main
difference between the two algorithms lies in the way the random walks are
generated.
Indeed, if DeepWalk generates random walks without using any bias, in Node2Vec
a new technique to generate biased random walks on the graph is introduced.
The algorithm to generate the random walks combines graph exploration by merging
Breadth-First Search (BFS) and Depth-First Search (DFS). The way those two
algorithms are combined in the random walk's generation is regularized by
two parameters p, and q. p defines the probability of a random walk getting
back to the previous node, while q defines the probability that a random
walk can pass through a previously unseen part of the graph.
Due to this combination, Node2Vec can preserve high-order proximities by
preserving local structures in the graph as well as global community structures.
This new method of random walk generation allows solving the limitation of
DeepWalk preserving the local neighborhood properties of the node.
"""
if G is None:
G = nx.barbell_graph(m1=7, m2=4)
node2vec = Node2Vec(G, dimensions=dim)
model = node2vec.fit(window=window)
node_embeddings = [model.wv.get_vector(str(x)) for x in G.nodes()]
return G, model, node_embeddings
def edge_embedding(self, G=None, dim=2, window=10):
"""
Given a (possibly large) graph G=(V,E), the goal is to classify each
edge e∈E , into the right class. In this setting, the dataset includes
G and a list of pairs, <ei,yi>, where ei is an edge of graph G
and yi is the class to which the edge belongs. Another typical task for
this level of granularity is link prediction, the problem of predicting
the existence of a link between two existing nodes in a graph. In this
case, the mapping function would be f:E→Rn.
Contrary to the other embedding function, the Edge to Vector (Edge2Vec)
algorithm generates the embedding space on edges, instead of nodes.
This algorithm is a simple side effect of the embedding generated by
using Node2Vec. The main idea is to use the node embedding of two adjacent
nodes to perform some basic mathematical operations in order to extract
the embedding of the edge connecting them.
"""
G, model, _ = self.node_embedding(G=G, dim=dim, window=window)
edges_embs = HadamardEmbedder(keyed_vectors=model.wv)
edge_embeddings = [edges_embs[(str(x[0]), str(x[1]))] for x in G.edges()]
return G, model, edge_embeddings
def graph_factorization(self, G=None, data_set=None, max_iter=10000, eta=1*10**-4, regu=1.0):
"""
The GF algorithm was one of the first models to reach good computational
performance in order to perform the node embedding of a given graph. The
loss function used in this method was mainly designed to improve GF
performances and scalability. Indeed, the solution generated by this
method could be noisy. Moreover, it should be noted, by looking at its
matrix factorization formulation, that GF performs a strong symmetric
factorization. This property is particularly suitable for undirected
graphs, where the adjacency matrix is symmetric, but could be a potential
limitation for directed graphs.
"""
if G is None:
G = nx.barbell_graph(m1=7, m2=4)
from gem.embedding.gf import GraphFactorization
Path("gem/intermediate").mkdir(parents=True, exist_ok=True)
model = GraphFactorization(d=2, data_set=data_set, max_iter=max_iter, eta=eta, regu=regu)
model.learn_embedding(G)
return G, model
def graph_representation(self, G=None, dimensions=2, order=3):
"""
Graph representation with global structure information (GraphRep), such
as HOPE, allows us to preserve higher-order proximity without forcing
its embeddings to have symmetric properties.
We initialize the GraRep class from the karateclub library. In this
implementation, the dimension parameter represents the dimension of the
embedding space, while the order parameter defines the maximum number of
orders of proximity between nodes. The number of columns of the final
embedding matrix (stored, in the example, in the embeddings variable)
is dimension x order, since, as we said, for each proximity order an
embedding is computed and concatenated in the final embedding matrix.
"""
if G is None:
G = nx.barbell_graph(m1=7, m2=4)
from karateclub.node_embedding.neighbourhood.grarep import GraRep
model = GraRep(dimensions=dimensions, order=order)
model.fit(G)
embeddings = model.get_embedding()
return G, model, embeddings
def hope(self, G=None, d=4, beta=0.01):
if G is None:
G = nx.barbell_graph(m1=7, m2=4)
from gem.embedding.hope import HOPE
model = HOPE(d=d, beta=beta)
model.learn_embedding(G)
return G, model
def gcn_embedding(self, G=None):
"""
Unsupervised graph representation learning using Graph ConvNet as encoder
The model embeds a graph by using stacked Graph ConvNet layers
"""
if G is None:
G = nx.barbell_graph(m1=7, m2=4)
order = np.arange(G.number_of_nodes())
A = nx.to_numpy_matrix(G, nodelist=order)
I = np.eye(G.number_of_nodes())
A_hat = A + np.eye(G.number_of_nodes()) # add self-connections
D_hat = np.array(np.sum(A_hat, axis=0))[0]
D_hat = np.array(np.diag(D_hat))
D_hat = np.linalg.inv(sqrtm(D_hat))
A_hat = D_hat @ A_hat @ D_hat
gcn1 = GCNLayer(G.number_of_nodes(), 8)
gcn2 = GCNLayer(8, 4)
gcn3 = GCNLayer(4, 2)
H1 = gcn1.forward(A_hat, I)
H2 = gcn2.forward(A_hat, H1)
H3 = gcn3.forward(A_hat, H2)
embeddings = H3
return G, embeddings
class GCNLayer():
def __init__(self, n_inputs, n_outputs):
self.n_inputs = n_inputs
self.n_outputs = n_outputs
self.W = self._glorot_init(self.n_outputs, self.n_inputs)
self.activation = np.tanh
@staticmethod
def _glorot_init(nin, nout):
sd = np.sqrt(6.0 / (nin + nout))
return np.random.uniform(-sd, sd, size=(nin, nout))
def forward(self, A, X):
self._X = (A @ X).T # (N,N) @ (N,n_inputs) -> (N,n_inputs), transposed to (n_inputs,N)
H = self.W @ self._X # (n_outputs,n_inputs) @ (n_inputs,N) -> (n_outputs,N)
H = self.activation(H)
return H.T # transpose back to (N, n_outputs)
###Output
_____no_output_____
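###Markdown
 The GCN encoder above applies the symmetric normalisation D_hat^{-1/2} (A + I) D_hat^{-1/2} before each layer. A minimal numeric sanity check of that normalisation step on a toy 2-node graph (values below are illustrative, not from the notebook's data):
###Code
import numpy as np
A_toy = np.array([[0., 1.], [1., 0.]])
A_hat_toy = A_toy + np.eye(2)                               # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat_toy.sum(axis=0)))  # D_hat^{-1/2}
A_norm = D_inv_sqrt @ A_hat_toy @ D_inv_sqrt                # symmetric normalisation
assert np.allclose(A_norm, 0.5 * np.ones((2, 2)))
###Output
 _____no_output_____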
###Markdown
Runs
###Code
###############################################################################
# Drawing Undirected Graphs
###############################################################################
G = nx.Graph()
V = {'Dublin', 'Paris', 'Milan', 'Rome'}
E = [('Milan','Dublin'), ('Milan','Paris'), ('Paris','Dublin'), ('Milan','Rome')]
G.add_nodes_from(V)
G.add_edges_from(E)
draw_graph(G, pos_nodes=nx.shell_layout(G), node_size=500)
new_nodes = {'London', 'Madrid'}
new_edges = [('London','Rome'), ('Madrid','Paris')]
G.add_nodes_from(new_nodes)
G.add_edges_from(new_edges)
draw_graph(G, pos_nodes=nx.shell_layout(G), node_size=500)
node_remove = {'London', 'Rome'}
G.remove_nodes_from(node_remove)
draw_graph(G, pos_nodes=nx.shell_layout(G), node_size=500)
node_edges = [('Milan','Dublin'), ('Milan','Paris')]
G.remove_edges_from(node_edges)
draw_graph(G, pos_nodes=nx.shell_layout(G), node_size=500)
G = nx.Graph()
nodes = {1:'Dublin',2:'Paris',3:'Milan',4:'Rome',5:'Naples',6:'Moscow',7:'Tokyo'}
G.add_nodes_from(nodes.keys())
G.add_edges_from([(1,2),(1,3),(2,3),(3,4),(4,5),(5,6),(6,7),(7,5)])
draw_graph(G, node_names=nodes, pos_nodes=nx.spring_layout(G), node_size=50)
###############################################################################
# Drawing Directed Graphs
###############################################################################
G = nx.DiGraph()
V = {'Dublin', 'Paris', 'Milan', 'Rome'}
E = [('Milan','Dublin'), ('Paris','Milan'), ('Paris','Dublin'), ('Milan','Rome')]
G.add_nodes_from(V)
G.add_edges_from(E)
draw_graph(G, pos_nodes=nx.shell_layout(G), node_size=500)
###############################################################################
# Drawing Weighted Directed Graphs
###############################################################################
G = nx.MultiDiGraph()
V = {'Paris', 'Dublin','Milan', 'Rome'}
E = [ ('Paris','Dublin', 11), ('Paris','Milan', 8),
('Milan','Rome', 5),('Milan','Dublin', 19)]
G.add_nodes_from(V)
G.add_weighted_edges_from(E)
draw_graph(G, pos_nodes=nx.shell_layout(G), node_size=500, plot_weight=True)
###############################################################################
# Drawing Shortest Path
###############################################################################
G = nx.Graph()
nodes = {1:'Dublin',2:'Paris',3:'Milan',4:'Rome',5:'Naples',6:'Moscow',7:'Tokyo'}
G.add_nodes_from(nodes.keys())
G.add_edges_from([(1,2),(1,3),(2,3),(3,4),(4,5),(5,6),(6,7),(7,5)])
path = nx.shortest_path(G, source=1, target=7)
draw_enhanced_path(G, path, node_names=nodes)
###############################################################################
# Network Efficiency Graphs
###############################################################################
G = nx.complete_graph(n=7)
nodes = {0:'Dublin',1:'Paris',2:'Milan',3:'Rome',4:'Naples',5:'Moscow',6:'Tokyo'}
ge = round(nx.global_efficiency(G),2)
ax = plt.gca()
ax.text(-.4, -1.3, "Global Efficiency:{}".format(ge), fontsize=14, ha='left', va='bottom');
draw_graph(G, node_names=nodes, pos_nodes=nx.spring_layout(G))
G = nx.cycle_graph(n=7)
nodes = {0:'Dublin',1:'Paris',2:'Milan',3:'Rome',4:'Naples',5:'Moscow',6:'Tokyo'}
le = round(nx.global_efficiency(G),2)
ax = plt.gca()
ax.text(-.4, -1.3, "Global Efficiency:{}".format(le), fontsize=14, ha='left', va='bottom');
draw_graph(G, node_names=nodes, pos_nodes=nx.spring_layout(G))
###############################################################################
# Clustering Coefficient
###############################################################################
G = nx.Graph()
nodes = {1:'Dublin',2:'Paris',3:'Milan',4:'Rome',5:'Naples',6:'Moscow',7:'Tokyo'}
G.add_nodes_from(nodes.keys())
G.add_edges_from([(1,2),(1,3),(2,3),(3,4),(4,5),(5,6),(6,7),(7,5)])
cc = nx.clustering(G)
node_size=[(v + 0.1) * 200 for v in cc.values()]
draw_graph(G, node_names=nodes, node_size=node_size, pos_nodes=nx.spring_layout(G))
###############################################################################
# Drawing Benchmark Graphs
###############################################################################
complete = nx.complete_graph(n=7)
lollipop = nx.lollipop_graph(m=7, n=3)
barbell = nx.barbell_graph(m1=7, m2=4)
plt.figure(figsize=(15,6))
plt.subplot(1,3,1)
draw_graph(complete)
plt.title("Complete")
plt.subplot(1,3,2)
plt.title("Lollipop")
draw_graph(lollipop)
plt.subplot(1,3,3)
plt.title("Barbell")
draw_graph(barbell)
###############################################################################
# Graph2vec embedding
###############################################################################
embedder = Embeddings()
_, _, graph_embeddings = embedder.graph_embedding()
fig, ax = plt.subplots(figsize=(10,10))
for i, vec in enumerate(graph_embeddings):
ax.scatter(vec[0],vec[1], s=1000)
ax.annotate(str(i), (vec[0],vec[1]), fontsize=40)
###############################################################################
# Node2vec embedding
###############################################################################
embedder = Embeddings()
G, model, node_embeddings = embedder.node_embedding()
fig, ax = plt.subplots(figsize=(10,10))
for x in G.nodes():
v = model.wv.get_vector(str(x))
ax.scatter(v[0],v[1], s=1000)
ax.annotate(str(x), (v[0],v[1]), fontsize=12)
###############################################################################
# Edge2vec embedding
###############################################################################
embedder = Embeddings()
G, model, edge_embeddings = embedder.edge_embedding()
fig, ax = plt.subplots(figsize=(10,10))
for x,v in zip(G.edges(),edge_embeddings):
ax.scatter(v[0],v[1], s=1000)
ax.annotate(str(x), (v[0],v[1]), fontsize=16)
###############################################################################
# Graph Factorization embedding
###############################################################################
embedder = Embeddings()
G, model = embedder.graph_factorization()
fig, ax = plt.subplots(figsize=(10,10))
for x in G.nodes():
v = model.get_embedding()[x]
ax.scatter(v[0],v[1], s=1000)
ax.annotate(str(x), (v[0],v[1]), fontsize=12)
###############################################################################
# Graph Representation embedding
###############################################################################
embedder = Embeddings()
G, model, embeddings = embedder.graph_representation()
fig, ax = plt.subplots(1, 3, figsize=(16,5))
for x in G.nodes():
v = model.get_embedding()[x]
ax[0].scatter(v[0],v[1], s=1000)
ax[0].annotate(str(x), (v[0],v[1]), fontsize=12)
ax[0].set_title('k=1')
ax[1].scatter(v[2],v[3], s=1000)
ax[1].annotate(str(x), (v[2],v[3]), fontsize=12)
ax[1].set_title('k=2')
ax[2].scatter(v[4],v[5], s=1000)
ax[2].annotate(str(x), (v[4],v[5]), fontsize=12)
ax[2].set_title('k=3')
###############################################################################
# HOPE embedding
###############################################################################
embedder = Embeddings()
G, model = embedder.hope()
fig, ax = plt.subplots(figsize=(10,10))
for x in G.nodes():
v = model.get_embedding()[x,2:]
ax.scatter(v[0],v[1], s=1000)
ax.annotate(str(x), (v[0],v[1]), fontsize=20)
###Output
_____no_output_____ |
tools/data-analysis/causal-occlusion/cfv_main_figures.ipynb | ###Markdown
This notebook creates the plots for the main paper. Imports
###Code
# to import from mturk folder
import os, sys, inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
mturkdir = os.path.join(os.path.dirname(os.path.dirname(currentdir)), "mturk")
sys.path.insert(0, mturkdir)
from mturk import RepeatedTaskResult
import numpy as np
from matplotlib import pyplot as plt
import pickle
from glob import glob
import pandas as pd
import seaborn as sns
import json
import utils_figures as utf
import utils_figures_helper as utf_helper
import utils_MTurk_figures as utf_mturk
import utils_data as utd
###Output
_____no_output_____
###Markdown
Parameters
###Code
# path to the folder containing the pkl files generated by the experiment's code
raw_results_folder = "data/counterfactual_experiment"
save_csv = True
include_baselines = True
ignore_duplicate_participants = False # set to False when using data that has no duplicates.
# START for figures
save_fig = False
# name of the folder in ./figures/ where all resulting figures will be saved
exp_str = "counterfactual_experiment"
instr_type_list = ["optimized", "natural", "mixed", "blur", "none"]
branches_labels_list = ["3x3", "pool"]
kernel_size_list = ["1", "3"]
# END for figures
# START for payment
mturk_payment_one_HIT = 2.34
mturk_payment_one_HIT_none = 0.84
repetition_factor_due_to_exclusion = 1.35
expected_distinct_workers = 50
# END for payment
instruction_labels = {
"optimized": "Synthetic",
"natural": "Natural",
"mixed": "Mixed",
"none": "None",
"blur": "Blur",
}
labels = [instruction_labels[it] for it in instr_type_list]
if save_fig:
os.makedirs(os.path.join("figures", exp_str), exist_ok=True)
###Output
_____no_output_____
###Markdown
Load data & preprocess
###Code
"""
Check whether the `calculate_relative_activation_difference.py` script was already run for all json configurations.
If not, run it first: it adds the query activation information to the structure and
saves the resulting files with the filenames shown below.
"""
if include_baselines:
structure_json_map = {
"natural": "natural_with_baselines.json",
"optimized": "optimized_with_baselines.json",
"mixed": "mixed_with_baselines.json",
"blur": "natural_blur_with_baselines.json",
"none": "natural_with_baselines.json",
}
else:
structure_json_map = {
"natural": "natural.json",
"optimized": "optimized.json",
"mixed": "mixed.json",
"blur": "natural_blur.json",
"none": "natural.json",
}
trial_structures = utd.load_and_parse_trial_structure(
raw_results_folder, [structure_json_map[it] for it in instr_type_list]
)
trial_structures = {k: v for k, v in zip(instr_type_list, trial_structures)}
df, df_checks, df_feedback = utd.load_and_parse_all_results(
raw_results_folder, instr_type_list
)
###Output
_____no_output_____
###Markdown
Add a column to the result df indicating whether the row belongs to an excluded or included response
###Code
def get_map_excluded_responses(column_name="passed_checks"):
def map_excluded_responses(row):
rows = df_checks[
(df_checks["task_id"] == row["task_id"])
& (df_checks["response_index"] == row["response_index"])
]
result = not rows[column_name].item()
return result
return map_excluded_responses
df["excluded_response"] = df.apply(get_map_excluded_responses("passed_checks"), axis=1)
###Output
_____no_output_____
###Markdown
Create a unique column based on task id and response id (unique within each task)
###Code
df, df_checks = utd.add_task_response_id(df, df_checks)
df_main = (
df[(df["catch_trial"] == False) & (df["is_demo"] == False)]
.reset_index()
.drop("index", axis=1)
)
df_catch_trials = (
df[(df["catch_trial"] == True) & (df["is_demo"] == False)]
.reset_index()
.drop("index", axis=1)
)
df_demo_trials = df[df["is_demo"] == True].reset_index().drop("index", axis=1)
###Output
_____no_output_____
###Markdown
Append structure information such as layer, kernel size, etc. to the dataframe
###Code
df_main = utd.append_trial_structure_to_results(df_main, trial_structures)
df_catch_trials = utd.append_trial_structure_to_results(
df_catch_trials, trial_structures
)
if ignore_duplicate_participants:
df_duplicate_tasks = pd.read_csv(
os.path.join(raw_results_folder, "duplicate_tasks.csv")
)
df_main["excluded_response"] = df_main.apply(
axis=1,
func=lambda row: True
if row["excluded_response"]
else (
len(
df_duplicate_tasks[
(df_duplicate_tasks["mode"] == row["mode"])
& (df_duplicate_tasks["task_number"] == row["task_number"])
]
)
> 0
),
)
###Output
_____no_output_____
###Markdown
Split data up in trials belonging to excluded responses, and those that passed the exclusion criteria
###Code
df_main_excluded = df_main[df_main["excluded_response"]]
df_main_not_excluded = df_main[~df_main["excluded_response"]]
df_catch_trials_excluded = df_catch_trials[df_catch_trials["excluded_response"]]
df_catch_trials_not_excluded = df_catch_trials[~df_catch_trials["excluded_response"]]
df_demo_trials_excluded = df_demo_trials[df_demo_trials["excluded_response"]]
df_demo_trials_not_excluded = df_demo_trials[~df_demo_trials["excluded_response"]]
###Output
_____no_output_____
###Markdown
Calculate how often the demo trials had to be repeated
###Code
df_checks = utd.checks_add_demo_trial_repetitions(df_demo_trials, df_checks)
df, df_checks = utd.process_checks(df, df_checks)
df_catch_trials_not_excluded_ignoring_catch_trials = utd.get_catch_trials_as_main_data(
df_catch_trials, df_checks
)
df_checks_not_excluded = df_checks[df_checks["passed_checks"]]
df_checks_excluded = df_checks[~df_checks["passed_checks"]]
if save_csv:
# save dataframes to csv
df_checks.to_csv(os.path.join(raw_results_folder, "df_exclusion_criteria.csv"))
df.to_csv(os.path.join(raw_results_folder, "df_trials.csv"))
figures_folder = os.path.join(
"figures", os.path.basename(os.path.realpath(raw_results_folder))
)
if save_fig:
os.makedirs(figures_folder, exist_ok=True)
print("Saving results to", figures_folder)
###Output
_____no_output_____
###Markdown
Figure 1C
###Code
df_expert_baseline = pd.read_csv("data/baselines/df_main_trials.csv")
df_expert_baseline["expert_baseline"] = True
df_main_not_excluded_copy = df_main_not_excluded.copy()
df_main_not_excluded_copy["expert_baseline"] = False
df_main_not_excluded_with_expert_baseline = pd.concat((df_expert_baseline, df_main_not_excluded_copy)).reset_index(drop=True)
utf.make_plot_workers_understood_task(
df_main_not_excluded_with_expert_baseline,
figures_folder,
exp_str,
["optimized", "natural", "none"],
["Synthetic", "Natural", "None"],
fig_1=True,
include_experts=False,
save_fig=save_fig
)
###Output
_____no_output_____
###Markdown
Figure 3A: Performance
###Code
utf.make_plot_workers_understood_task(
df_main_not_excluded_with_expert_baseline,
figures_folder,
exp_str,
instr_type_list,
labels,
fig_1=False,
save_fig=save_fig
)
del df_main_not_excluded_with_expert_baseline, df_main_not_excluded_copy
###Output
_____no_output_____
###Markdown
Figure 3B: Reaction Time
###Code
utf.make_plot_natural_are_better_wrt_reaction_time(
df_main_not_excluded,
results_folder=figures_folder,
save_fig=save_fig,
instr_type_list=instr_type_list,
labels=labels
)
###Output
_____no_output_____
###Markdown
Figure 4A: Baseline Accuracies
###Code
# Load expert data
df_expert_baseline = pd.read_csv("data/baselines/df_main_trials.csv")
df_expert_baseline["expert_baseline"] = True
df_expert_baseline["mode_extended"] = df_expert_baseline.apply(
lambda row: "e_" + row["mode"], axis=1 # e for expert
)
df_expert_baseline["kernel_size"] = df_expert_baseline.apply(
lambda row: str(row["kernel_size"]), axis=1
)
df_expert_baseline["layer"] = df_expert_baseline.apply(
lambda row: str(row["layer"]), axis=1
)
# extend worker df with new columns
df_main_not_excluded_copy = df_main_not_excluded.copy()
df_main_not_excluded_copy["expert_baseline"] = False
df_main_not_excluded_copy["mode_extended"] = df_main_not_excluded_copy.apply(
lambda row: "w_" + row["mode"], axis=1 # w for worker
)
df_main_not_excluded_with_expert_baseline = pd.concat(
(df_expert_baseline, df_main_not_excluded_copy)
).reset_index(drop=True)
# load primary object baseline
if os.path.exists("data/baselines/df_primary_object_baseline.csv"):
df_primary_object_baseline = pd.read_csv(
"data/baselines2/df_primary_object_baseline.csv"
)
def parse_primary_object_baseline(row):
mask = (
(df_primary_object_baseline["batch"] == row["batch"])
& (df_primary_object_baseline["layer"] == row["layer"])
& (df_primary_object_baseline["kernel_size"] == row["kernel_size"])
)
selected_rows = df_primary_object_baseline[mask]
if not len(selected_rows) == 1:
print(
"missing information for row:",
row[["batch", "trial_index", "mode", "task_number"]],
)
print()
return selected_rows.iloc[0]["primary_object_choice"]
df_main_not_excluded_with_expert_baseline[
"primary_object_baseline_choice"
] = df_main_not_excluded_with_expert_baseline.apply(
axis=1, func=parse_primary_object_baseline
)
# clean up
del df_primary_object_baseline
else:
print(
"Could not find objects baselines csv and, thus, cannot append this information to the dataframe"
)
df_main_not_excluded_with_expert_baseline[
"correct_center"
] = df_main_not_excluded_with_expert_baseline.apply(
lambda row: row["max_query_center_distance"] > row["min_query_center_distance"],
axis=1,
)
df_main_not_excluded_with_expert_baseline[
"correct_std"
] = df_main_not_excluded_with_expert_baseline.apply(
lambda row: row["max_query_patch_std"] < row["min_query_patch_std"], axis=1
)
df_main_not_excluded_with_expert_baseline[
"correct_primary"
] = df_main_not_excluded_with_expert_baseline.apply(
lambda row: True if row["primary_object_baseline_choice"] == 1 else False, axis=1
)
df_main_not_excluded_with_expert_baseline[
"correct_saliency"
] = df_main_not_excluded_with_expert_baseline.apply(
lambda row: row["max_query_patch_saliency"] < row["min_query_patch_saliency"],
axis=1,
)
baseline_accuracies = {
"Center": df_main_not_excluded_with_expert_baseline[
~df_main_not_excluded_with_expert_baseline["expert_baseline"] # only taking dataframe from workers
]["correct_center"].mean(),
"Variance": df_main_not_excluded_with_expert_baseline[
~df_main_not_excluded_with_expert_baseline["expert_baseline"]
]["correct_std"].mean(),
"Object": 0.6344107407407408,
"Saliency": df_main_not_excluded_with_expert_baseline[
~df_main_not_excluded_with_expert_baseline["expert_baseline"]
]["correct_saliency"].mean(),
}
baseline_sems = {
"Object": 0.006435863163504224,
}
utf.plot_baseline_accuracy(
baseline_accuracies,
sems=baseline_sems,
results_folder=figures_folder,
save_fig=save_fig,
label_order=["Center", "Object", "Variance", "Saliency"],
)
###Output
_____no_output_____
###Markdown
Figure 4 B and C: Cohen's Kappa
###Code
extended_mode_list = [
"w_optimized",
"w_natural",
"w_mixed",
"w_blur",
"w_none",
"b_center",
"b_std",
"b_saliency",
]
cohens_kappa = utf_helper.get_cohens_kappa_all_conditions_with_each_other(
df_main_not_excluded_with_expert_baseline, extended_mode_list, "mode_extended"
)
(
cohens_kappa_matrix,
cohens_kappa_std_matrix,
cohens_kappa_sem_matrix,
) = utf_helper.get_cohens_kappa_matrix_all_conditions_with_each_other(
extended_mode_list, cohens_kappa
)
extended_mode_label_list = [
"Synthetic",
"Natural",
"Mixed",
"Blur",
"None",
"Center\nBaseline",
"Variance\nBaseline",
"Saliency\nBaseline",
]
extended_mode_list = [
"w_optimized",
"w_natural",
"w_mixed",
"w_blur",
"w_none",
"b_center",
"b_std",
"b_saliency",
]
utf_mturk.plot_worker_baseline_consistency_matrix(
np.round(cohens_kappa_matrix * 100, 2).astype(int),
np.round(2 * cohens_kappa_sem_matrix * 100, 2).astype(int),
extended_mode_list,
extended_mode_label_list,
figures_folder,
save_fig=save_fig,
vmin=cohens_kappa_matrix.min() * 100,
vmax=cohens_kappa_matrix.max() * 100,
)
extended_mode_list = ["w_optimized", "w_natural", "b_saliency"]
(
cohens_kappa_submatrix,
cohens_kappa_std_submatrix,
cohens_kappa_sem_submatrix,
) = utf_helper.get_cohens_kappa_matrix_all_conditions_with_each_other(
extended_mode_list, cohens_kappa
)
extended_mode_label_list = ["Synthetic", "Natural", "Saliency\nBaseline"]
utf_mturk.plot_worker_baseline_consistency_matrix(
np.round(cohens_kappa_submatrix * 100, 2).astype(int),
np.round(2 * cohens_kappa_sem_submatrix * 100, 2).astype(int),
extended_mode_list,
extended_mode_label_list,
figures_folder,
save_fig=save_fig,
vmin=cohens_kappa_matrix.min() * 100,
vmax=cohens_kappa_matrix.max() * 100,
)
###Output
_____no_output_____
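###Markdown
 The agreement matrices above are produced by helper functions in `utils_figures_helper`. For reference only, Cohen's kappa between two raters' binary choices can also be computed directly with scikit-learn; the labels below are toy values, not the experiment's data:
###Code
from sklearn.metrics import cohen_kappa_score
rater_a = [1, 0, 1, 1, 0, 1]
rater_b = [1, 0, 1, 0, 0, 1]
kappa_toy = cohen_kappa_score(rater_a, rater_b)  # 1.0 = perfect agreement, 0.0 = chance level
assert -1.0 <= kappa_toy <= 1.0
###Output
 _____no_output_____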
###Markdown
Figure 5A, B: Performance by Unit
###Code
for kernel_size_i in df_main_not_excluded["kernel_size"].unique():
print(f"kernel_size {kernel_size_i}")
utf_mturk.plot_accuracy_per_layer(
df_main_not_excluded[df_main_not_excluded["kernel_size"] == kernel_size_i],
results_folder=figures_folder,
save_fig=save_fig,
instr_type_list=instr_type_list,
title_prefix=f"For kernel size {kernel_size_i}: ",
legend=False,
)
# include error bars by setting show_sem=True
###Output
_____no_output_____
###Markdown
Figure 5C Stop using the catch trials as exclusion criterion and plot the, thus, unbiased performance over these trials
###Code
utf_mturk.plot_accuracy_per_layer(
df_catch_trials_not_excluded_ignoring_catch_trials,
results_folder=figures_folder,
save_fig=save_fig,
instr_type_list=instr_type_list,
legend=False,
)
###Output
_____no_output_____
###Markdown
Figure 7: Relative Activation Difference
###Code
for kernel_size_i in sorted(df_main_not_excluded["kernel_size"].unique()):
utf_mturk.plot_accuracy_vs_relative_activation_difference(
df_main_not_excluded[df_main_not_excluded["kernel_size"] == kernel_size_i],
results_folder=figures_folder,
save_fig=save_fig,
fig_name_suffix=f"_kernel_size{kernel_size_i}",
)
###Output
_____no_output_____ |
chapter2/homework/computer/3-29/201611680275.ipynb | ###Markdown
 Exercise 1:
###Code
def compute_sum(end):
i = 0
total_n = 1
while i < end:
i = i + 1
total_n = total_n * i
return total_n
n = int(input('请输入第1个整数,以回车结束。'))
m = int(input('请输入第2个整数,以回车结束。'))
k = int(input('请输入第3个整数,以回车结束。'))
print('最终的和是:', compute_sum(m) + compute_sum(n) + compute_sum(k))
###Output
请输入第1个整数,以回车结束。3
请输入第2个整数,以回车结束。2
请输入第3个整数,以回车结束。2
最终的和是: 10
###Markdown
 Exercise 2:
###Code
def compute(end):
pro = 1
num = 1
i = 1.0
j = 1.0/i
total_m = 0
while num <= end:
j = 1.0/i * pro
total_m = total_m + j
pro = pro * (-1)
i = i + 2
num = num + 1
return total_m
n1 = int(input('请输入一个整数,以回车结束:'))
print('最终和的四倍是:',compute(n1) * 4.0)
n2 = int(input('请输入一个整数,以回车结束:'))
print('最终和的四倍是:',compute(n2) * 4.0)
###Output
请输入一个整数,以回车结束:1000
最终和的四倍是: 3.140592653839794
请输入一个整数,以回车结束:100000
最终和的四倍是: 3.1415826535897198
###Markdown
 Exercise 3 (Part 1):
###Code
name=input('请输入你的名字:')
def your_sex(s):
if s==1:
sex='Miss'
else:
sex='Mr'
return sex
def birthday(birthday_month,birthday_data):
if birthday_month==1:
if birthday_data>=20:
xinzuo='水瓶'
else:
xinzuo='摩羯'
elif birthday_month==2:
if birthday_data>=19:
xinzuo='双鱼'
else:
xinzuo='水瓶'
elif birthday_month==3:
if birthday_data>=21:
xinzuo='白羊'
else:
xinzuo='双鱼'
elif birthday_month==4:
if birthday_data>=20:
xinzuo='金牛'
else:
xinzuo='白羊'
elif birthday_month==5:
if birthday_data>=21:
xinzuo='双子'
else:
xinzuo='金牛'
elif birthday_month==6:
if birthday_data>=22:
xinzuo='巨蟹'
else:
xinzuo='双子'
elif birthday_month==7:
if birthday_data>=23:
xinzuo='狮子'
else:
xinzuo='巨蟹'
elif birthday_month==8:
if birthday_data>=23:
xinzuo='处女'
else:
xinzuo='狮子'
elif birthday_month==9:
if birthday_data>=23:
xinzuo='天秤'
else:
xinzuo='处女'
elif birthday_month==10:
if birthday_data>=24:
xinzuo='天蝎'
else:
xinzuo='天秤'
elif birthday_month==11:
if birthday_data>=23:
xinzuo='射手'
else:
xinzuo='天蝎'
elif birthday_month==12:
if birthday_data>=22:
xinzuo='摩羯'
else:
xinzuo='射手'
return xinzuo
ni=int(input('如果你是女性,输入1,如果你是男性,输入2:'))
birthday_month=int(input('你的生日月份是:'))
birthday_data=int(input('你的生日日期是:'))
print('你好,',your_sex(ni),name,'你是',birthday(birthday_month,birthday_data))
###Output
请输入你的名字:du
如果你是女性,输入1,如果你是男性,输入2:1
你的生日月份是:1
你的生日日期是:2
你好, Miss du 你是 摩羯
###Markdown
 Exercise 3 (Part 2):
###Code
def add(word):
if word.endswith('s') or word.endswith('x') or word.endswith('ch') or word.endswith('sh') or word.endswith('o'):
end='加es'
elif word.endswith('ay')==0 and word.endswith('ey')==0 and word.endswith('iy')==0 and word.endswith('oy')==0 and word.endswith('uy')==0 and word.endswith('yy')==0 and word.endswith('y'):
end='变y为i再加es'
elif word=='have':
end='变为has'
else:
end='直接加s'
return end
verb=input('请输入一个动词的原型:')
print('它的变化规则是:',add(verb))
###Output
请输入一个动词的原型:study
它的变化规则是: 变y为i再加es
###Markdown
 Challenge exercise:
###Code
def compute_sum(star,end,tap):
i = star
total_n = 0
while i <= end:
total_n = total_n + i
i = i + tap
return total_n
m = int(input('请输入开始整数,以回车结束。'))
n = int(input('请输入结束整数,以回车结束。'))
k = int(input('请输入间隔整数,以回车结束。'))
print('最终的和是:', compute_sum(m,n,k))
###Output
请输入开始整数,以回车结束。1
请输入结束整数,以回车结束。100
请输入间隔整数,以回车结束。2
最终的和是: 2500
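###Markdown
 For reference (not part of the original homework): the same start/end/step sum can be written with Python's built-in `sum` over a `range`:
###Code
# 1 + 3 + 5 + ... + 99, matching compute_sum(1, 100, 2) above
assert sum(range(1, 100 + 1, 2)) == 2500
###Output
 _____no_output_____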
|
notebooks/Power_Split_001.ipynb | ###Markdown
Get recording
###Code
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
%load_ext autoreload
%autoreload 2
from Cfg import Cfg
from RecordingCorpus import RecordingCorpus
from multiprocessing import Pool
import warnings
warnings.filterwarnings("ignore")
import librosa
import librosa.display
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
C = Cfg('NIST', 16000, 'amharic', 'build')
if __name__ == '__main__':
with Pool(16) as pool:
recordings = RecordingCorpus(C, pool)
a=recordings.artifacts[0]
transcript=[x[3:] for x in a.target.value if float(x[4]) <= 60]
speech=[(int(float(x)*C.sample_rate), int(float(y)*C.sample_rate), z) for x,y,z in [w for w in transcript if len(w)==3]]
silence=[(int(float(x)*C.sample_rate), int(float(y)*C.sample_rate)) for x,y in [w for w in transcript if len(w)==2]]
# audio = a.source.value[0:16*C.sample_rate]  # optional: truncate to the first 16 s for quick tests
audio = a.source.value
from collect_false import collect_false
T=np.arange(audio.shape[0])/C.sample_rate
S = librosa.feature.melspectrogram(y=audio, sr=C.sample_rate, n_mels=64, fmax=8000)
S_dB = librosa.power_to_db(S, ref=np.max)
s_dB_mean=np.mean(S_dB,axis=0)
speech_q=(s_dB_mean>-70)
TSQ=T[-1]*np.arange(len(speech_q))/len(speech_q)
silences=T[-1]*collect_false(speech_q)/len(speech_q)
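# Note: the quick check on the next line uses `pauses`, which is only defined by the
# gap-search loop further down, so it only works once that cell has been run.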
int(pauses[-1][-1]*C.sample_rate), audio.shape[0]
max_duration=16.5
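# Search from a 2 s silence threshold down to 10 ms and keep the largest gap for which
# cutting at the midpoint of every qualifying pause yields segments no longer than
# max_duration seconds.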
for gap in np.linspace(2,0.01,20):
pauses=[(x,y) for x,y in silences if y-x > gap and int(y*C.sample_rate) < audio.shape[0]]
cuts=np.array([int(C.sample_rate*(x+y)/2) for x,y in pauses if x != 0.0])
sizes=np.diff(np.hstack([[0],cuts]))/C.sample_rate
if sizes.max() <= max_duration:
print("gap", gap)
break
plt.figure(figsize=(60,8))
ax1 = plt.subplot(1,1,1)
ax1.plot(T, audio, color='blue');
for cut in cuts:
ax1.axvline(x=cut/C.sample_rate, color='purple')
plt.show()
plt.figure(figsize=(60,8))
ax1 = plt.subplot(1,1,1)
ax1.plot(T, audio, color='blue');
ax1.plot(TSQ, 2*speech_q-1, color='green')
for (x,y) in pauses:
ax1.plot([x,y], [0,0], color='red', linewidth=3)
for cut in cuts:
ax1.axvline(x=cut/C.sample_rate, color='purple')
plt.xlim(350,380)
plt.show()
plt.figure(figsize=(60,8))
ax1 = plt.subplot(2,1,1)
ax1.plot(T, audio, color='blue');
ax1.plot(TSQ, 2*speech_q-1, color='green')
for (x,y) in silences:
if y-x > 0.2:
ax1.plot([x,y], [0,0], color='red', linewidth=3)
ax2 = plt.subplot(2,1,2)
librosa.display.specshow(S_dB, x_axis='time', y_axis='mel', sr=C.sample_rate, fmax=8000);
ax2.sharex(ax1)
plt.xlim(510,530)
plt.show()
silences
###Output
_____no_output_____ |
third_party_apps/xbrl_extracts_4.ipynb | ###Markdown
<div style=" width: 100%; height: 100px; border:none; background:linear-gradient(-45deg, d84227, ce3587); text-align: center; position: relative;"> <span style=" font-size: 40px; padding: 0 10px; color: white; margin: 0; position: absolute; top: 50%; left: 50%; -ms-transform: translate(-50%, -50%); transform: translate(-50%, -50%); font-weight: bold; "> XBRL PARSER
###Code
"""
Martin Wood - Office for National Statistics
[email protected]
23/07/2018
XBRL parser
Contains functions that scrape and clean an XBRL document's content
and variables, returning a dict ready for dumping into MongoDB.
ELliott PHillips - Office for National Statistics
[email protected]
30/06/2020
Code adapted and furthered for Companies House XBRL accounts parsing and outputting.
"""
import os
import re
import numpy as np
import pandas as pd
from datetime import datetime
from dateutil import parser
from bs4 import BeautifulSoup as BS # Can parse xml or html docs
# Table of variables and values that indicate consolidated status
consolidation_var_table = {
"includedinconsolidationsubsidiary": True,
"investmententityrequiredtoapplyexceptionfromconsolidationtruefalse": True,
"subsidiaryunconsolidatedtruefalse": False,
"descriptionreasonwhyentityhasnotpreparedconsolidatedfinancialstatements": "exist",
"consolidationpolicy": "exist"
}
def clean_value(string):
"""
Take a value that's stored as a string,
clean it and convert to numeric.
If it's just a dash, it's taken to mean
zero.
"""
if string.strip() == "-":
return (0.0)
try:
return float(string.strip().replace(",", "").replace(" ", ""))
except:
pass
return (string)
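# Illustrative behaviour (values here are examples, not from any document):
#   clean_value("1,234 ") -> 1234.0
#   clean_value("-")      -> 0.0
#   clean_value("n/a")    -> "n/a"  (non-numeric strings are returned unchanged)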
def retrieve_from_context(soup, contextref):
"""
Used where an element of the document contained no data, only a
reference to a context element.
Finds the relevant context element and retrieves the relevant data.
Returns a text string
Keyword arguments:
soup -- BeautifulSoup souped html/xml object
contextref -- the id of the context element to be raided
"""
try:
context = soup.find("xbrli:context", id=contextref)
contents = context.find("xbrldi:explicitmember").get_text().split(":")[-1].strip()
except:
contents = ""
return (contents)
def retrieve_accounting_standard(soup):
"""
Gets the account reporting standard in use in a document by hunting
down the link to the schema reference sheet that always appears to
be in the document, and extracting the format and standard date from
the string of the url itself.
WARNING - That means that there's a lot of implicitly hardcoded info
on the way these links are formatted and referenced, within this
function. Might need changing someday.
Returns a 3-tuple (standard, date, original url)
Keyword arguments:
soup -- BeautifulSoup souped html/xml object
"""
# Find the relevant link by its unique attribute
link_obj = soup.find("link:schemaref")
# If we didn't find anything it's an xml doc using a different
# element name:
if link_obj is None:
link_obj = soup.find("schemaref")
# extract the name of the .xsd schema file, which contains format
# and date information
text = link_obj['xlink:href'].split("/")[-1].split(".")[0]
# Split the extracted text into format and date, return values
return (text[:-10].strip("-"), text[-10:], link_obj['xlink:href'])
def retrieve_unit(soup, each):
"""
Gets the reporting unit by trying to chase a unitref to
its source, alternatively uses element attribute unitref
if it's not a reference to another element.
Returns the unit
Keyword arguments:
soup -- BeautifulSoup souped html/xml object
each -- element of BeautifulSoup souped object
"""
# If not, try to discover the unit string in the
# soup object
try:
unit_str = soup.find(id=each['unitref']).get_text()
except:
# Or if not, in the attributes of the element
try:
unit_str = each.attrs['unitref']
except:
return ("NA")
return (unit_str.strip())
def retrieve_date(soup, each):
"""
Gets the reporting date by trying to chase a contextref
to its source and extract its period, alternatively uses
element attribute contextref if it's not a reference
to another element.
Returns the date
Keyword arguments:
soup -- BeautifulSoup souped html/xml object
each -- element of BeautifulSoup souped object
"""
# Try to find a date tag within the contextref element,
# starting with the most specific tags, and starting with
# those for ixbrl docs as it's the most common file.
date_tag_list = ["xbrli:enddate",
"xbrli:instant",
"xbrli:period",
"enddate",
"instant",
"period"]
for tag in date_tag_list:
try:
date_str = each['contextref']
date_val = parser.parse(soup.find(id=each['contextref']).find(tag).get_text()). \
date(). \
isoformat()
return (date_val)
except:
pass
try:
date_str = each.attrs['contextref']
date_val = parser.parse(each.attrs['contextref']). \
date(). \
isoformat()
return (date_val)
except:
pass
return ("NA")
def parse_element(soup, element):
"""
For a discovered XBRL tagged element, go through, retrieve its name
and value and associated metadata.
Keyword arguments:
soup -- BeautifulSoup object of accounts document
element -- soup object of discovered tagged element
"""
if "contextref" not in element.attrs:
return ({})
element_dict = {}
# Basic name and value
try:
# Method for XBRLi docs first
element_dict['name'] = element.attrs['name'].lower().split(":")[-1]
except:
# Method for XBRL docs second
element_dict['name'] = element.name.lower().split(":")[-1]
element_dict['value'] = element.get_text()
element_dict['unit'] = retrieve_unit(soup, element)
element_dict['date'] = retrieve_date(soup, element)
# If there's no value retrieved, try raiding the associated context data
if element_dict['value'] == "":
element_dict['value'] = retrieve_from_context(soup, element.attrs['contextref'])
# If the value has a defined unit (eg a currency) convert to numeric
if element_dict['unit'] != "NA":
element_dict['value'] = clean_value(element_dict['value'])
# Retrieve sign of element if exists
try:
element_dict['sign'] = element.attrs['sign']
# if it's negative, convert the value then and there
if element_dict['sign'].strip() == "-":
element_dict['value'] = 0.0 - element_dict['value']
except:
pass
return (element_dict)
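# Example of the dict shape returned for a successfully parsed element
# (values are illustrative only):
# {'name': 'turnover', 'value': 1234.0, 'unit': 'GBP', 'date': '2010-06-30'}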
def parse_elements(element_set, soup):
"""
For a set of discovered elements within a document, try to parse
them. Only keep valid results (test is whether field "name"
exists).
Keyword arguments:
element_set -- BeautifulSoup iterable search result object
soup -- BeautifulSoup object of accounts document
"""
elements = []
for each in element_set:
element_dict = parse_element(soup, each)
if 'name' in element_dict:
elements.append(element_dict)
return (elements)
def summarise_by_sum(doc, variable_names):
"""
Takes a document (dict) after extraction, and tries to extract
a summary variable relating to the financial state of the enterprise
by summing all those named that exist. Returns dict.
Keyword arguments:
doc -- an extracted document dict, with "elements" entry as created
by the 'scrape_clean_elements' functions.
variable_names - variables to find and sum if they exist
"""
# Convert elements to pandas df
df = pd.DataFrame(doc['elements'])
# Subset to most recent (latest dated)
df = df[df['date'] == doc['doc_balancesheetdate']]
total_assets = 0.0
unit = "NA"
# Find the total assets by summing components
for each in variable_names:
# Fault-tolerant, will skip whatever isn't numeric
try:
total_assets = total_assets + df[df['name'] == each].iloc[0]['value']
# Retrieve reporting unit if exists
unit = df[df['name'] == each].iloc[0]['unit']
except:
pass
return ({"total_assets": total_assets, "unit": unit})
def summarise_by_priority(doc, variable_names):
"""
Takes a document (dict) after extraction, and tries to extract
a summary variable relating to the financial state of the enterprise
by looking for each named, in order. Returns dict.
Keyword arguments:
doc -- an extracted document dict, with "elements" entry as created
by the 'scrape_clean_elements' functions.
variable_names - variables to find and check if they exist.
"""
# Convert elements to pandas df
df = pd.DataFrame(doc['elements'])
# Subset to most recent (latest dated)
df = df[df['date'] == doc['doc_balancesheetdate']]
primary_assets = 0.0
unit = "NA"
# Find the net asset/liability variable by hunting names in order
for each in variable_names:
try:
# Fault tolerant, will skip whatever isn't numeric
primary_assets = df[df['name'] == each].iloc[0]['value']
# Retrieve reporting unit if it exists
unit = df[df['name'] == each].iloc[0]['unit']
break
except:
pass
return ({"primary_assets": primary_assets, "unit": unit})
def summarise_set(doc, variable_names):
"""
Takes a document (dict) after extraction, and tries to extract
summary variables relating to the financial state of the enterprise
by returning all those named that exist. Returns dict.
Keyword arguments:
doc -- an extracted document dict, with "elements" entry as created
by the 'scrape_clean_elements' functions.
variable_names - variables to find and return if they exist.
"""
results = {}
# Convert elements to pandas df
df = pd.DataFrame(doc['elements'])
# Subset to most recent (latest dated)
df = df[df['date'] == doc['doc_balancesheetdate']]
# Find all the variables of interest should they exist
for each in variable_names:
try:
results[each] = df[df['name'] == each].iloc[0]['value']
except:
pass
# Send the variables back to be appended
return (results)
def scrape_elements(soup, filepath):
"""
Parses an XBRL (xml) company accounts file
for all labelled content and extracts the
content (and metadata, eg; unitref) of each
element found to a dictionary
params: filepath (str)
output: list of dicts
"""
# Try multiple methods of retrieving data, I think only the first is
# now needed though. The rest will be removed after testing this
# but should not affect execution speed.
try:
element_set = soup.find_all()
elements = parse_elements(element_set, soup)
if len(elements) <= 5:
raise Exception("Elements should be gte 5, was {}".format(len(elements)))
except:
#if fails parsing create dummy entry elements so entry still exists in dictonary
elements = [{'name': 'NA', 'value': 'NA', 'unit': 'NA', 'date': 'NA'}]
pass
return (elements)
def flatten_data(doc):
"""
Takes the data returned by process account, with its tree-like
structure and reorganises it into a long-thin format table structure
suitable for SQL applications.
"""
# Need to drop components later, so need copy in function
doc2 = doc.copy()
doc_df = pd.DataFrame()
# Pandas should create series, then columns, from dicts when called
# like this
if not isinstance(doc2['elements'], int):
for element in doc2['elements']:
doc_df = doc_df.append(element, ignore_index=True)
# Dump the "elements" entry in the doc dict
doc2.pop("elements")
# Create uniform columns for all other properties
for key in doc2:
doc_df[key] = doc2[key]
return (doc_df)
def amp_correction(filepath):
"""
Looks for & symbol in html file and uses regex to strip out any instances that are not strictly amp;
Named arguments:
filepath -- complete filepath (string) from drive root
"""
#import re
#open file as txt
#file = open(filepath, 'r')
#corrected_file = ""
#for line in file:
#line = line.strip()
#print(line)
#return file
#too generalised!
#determine if any lines need correcting
#if re.search(r'&(?!amp;)',line) is not None:
#print(line)
#print message to show correction
#print('file {} is being corrected for a misplaced & sign, may cause some strange parsing'.format(file))
#replace with acceptable syntax
#line = re.sub(r'&(?!amp;)',r'&',line)
#append subbed lines back to file
#corrected_file += line
#return corrected file
#return(corrected_file)
def process_account(filepath):
"""
Scrape all of the relevant information from
an iXBRL (html) file, upload the elements
and some metadata to a mongodb.
Named arguments:
filepath -- complete filepath (string) from drive root
"""
doc = {}
# Some metadata, doc name, upload date/time, archive file it came from
doc['doc_name'] = filepath.split("/")[-1]
doc['doc_type'] = filepath.split(".")[-1].lower()
doc['doc_upload_date'] = str(datetime.now())
doc['arc_name'] = filepath.split("/")[-2]
doc['parsed'] = True
# Complicated ones
sheet_date = filepath.split("/")[-1].split(".")[0].split("_")[-1]
doc['doc_balancesheetdate'] = datetime.strptime(sheet_date, "%Y%m%d").date().isoformat()
doc['doc_companieshouseregisterednumber'] = filepath.split("/")[-1].split(".")[0].split("_")[-2]
try:
soup = BS(open(filepath, "rb"), "html.parser")
except:
print("Failed to open: " + filepath)
return (1)
# Get metadata about the accounting standard used
try:
doc['doc_standard_type'], doc['doc_standard_date'], doc['doc_standard_link'] = retrieve_accounting_standard(soup)
doc['parsed'] = True
except:
doc['doc_standard_type'], doc['doc_standard_date'], doc['doc_standard_link'] = (0, 0, 0)
doc['parsed'] = False
# Fetch all the marked elements of the document
try:
doc['elements'] = scrape_elements(soup, filepath)
except Exception as e:
doc['parsed'] = False
doc['Error'] = e
try:
return (doc)
except Exception as e:
return (e)
###Output
_____no_output_____
###Markdown
<div style=" width: 100%; height: 100px; border:none; background:linear-gradient(-45deg, d84227, ce3587); text-align: center; position: relative;"> <span style=" font-size: 40px; padding: 0 10px; color: white; margin: 0; position: absolute; top: 50%; left: 50%; -ms-transform: translate(-50%, -50%); transform: translate(-50%, -50%); font-weight: bold; "> EXTRACTION
###Code
import os
import numpy as np
import pandas as pd
import importlib
import time
import sys
def get_filepaths(directory):
""" Helper function -
Get all of the filenames in a directory that
end in htm* or xml.
Under the assumption that all files within
the folder are financial records. """
files = [directory + "/" + filename
for filename in os.listdir(directory)
if (("htm" in filename.lower()) or ("xml" in filename.lower()))]
month_and_year = ('').join(directory.split('-')[-1:])
month, year = month_and_year[:-4], month_and_year[-4:]
return files, month, year
def progressBar(name, value, endvalue, bar_length = 50, width = 20):
percent = float(value) / endvalue
arrow = '-' * int(round(percent*bar_length) - 1) + '>'
spaces = ' ' * (bar_length - len(arrow))
sys.stdout.write(
"\r{0: <{1}} : [{2}]{3}% ({4} / {5})".format(
name,
width,
arrow + spaces,
int(round(percent*100)),
value,
endvalue
)
)
sys.stdout.flush()
if value == endvalue:
sys.stdout.write('\n\n')
def retrieve_list_of_tags(dataframe, column, output_folder):
"""
Save dataframe containing all unique tags to txt format in specified directory.
Arguments:
dataframe: tabular data
column: location of xbrl tags
output_folder: user specified file location
Returns:
None
Raises:
None
"""
list_of_tags = dataframe[column].tolist()
list_of_tags_unique = list(set(list_of_tags))
print(
"Number of tags in total: {}\nOf which are unique: {}".format(len(list_of_tags), len(list_of_tags_unique))
)
with open(output_folder + "/" + folder_year + "-" + folder_month + "_list_of_tags.txt", 'w') as f:
for item in list_of_tags_unique:
f.write("%s\n" % item)
def get_tag_counts(dataframe, column, output_folder):
"""
Save dataframe containing all unique tags to txt format in specified directory.
Arguments:
dataframe: tabular data
column: location of xbrl tags
output_folder: user specified file location
Returns:
None
Raises:
None
"""
cache = dataframe.copy()  # work on a copy so the caller's dataframe is not mutated by the in-place operations below
cache["count"] = cache.groupby(by = column)[column].transform("count")
cache.sort_values("count", inplace = True, ascending = False)
cache.drop_duplicates(subset = [column, "count"], keep = "first", inplace = True)
cache = cache[[column, "count"]]
print(cache.shape)
cache.to_csv(
output_folder + "/" + folder_year + "-" + folder_month + "_unique_tag_frequencies.csv",
header = None,
index = True,
sep = "\t",
mode = "a"
)
def build_month_table(list_of_files):
"""
"""
process_start = time.time()
# Empty table awaiting results
results = pd.DataFrame()
COUNT = 0
# For every file
for file in list_of_files:
COUNT += 1
# Read the file
doc = process_account(file)
# tabulate the results
doc_df = flatten_data(doc)
# append to table
results = results.append(doc_df)
progressBar("XBRL Accounts Parsed", COUNT, len(files), bar_length = 50, width = 20)
print("Average time to process an XBRL file: \x1b[31m{:0f}\x1b[0m".format((time.time() - process_start) / 60, 2), "seconds")
return results
def output_xbrl_month(dataframe, output_folder, file_type = "csv"):
"""
Save dataframe to csv format in specified directory, with particular naming scheme "YYYY-MM_xbrl_data.csv".
Arguments:
dataframe: tabular data
output_folder: user specified file destination
Returns:
None
Raises:
None
"""
if file_type == "csv":
dataframe.to_csv(
output_folder
+ "/"
+ folder_year
+ "-"
+ folder_month
+ "_xbrl_data.csv",
index = False,
header = True
)
else:
print("I need a CSV for now...")
# Get all the filenames from the example folder
files, folder_month, folder_year = get_filepaths("/shares/data/20200519_companies_house_accounts/xbrl_unpacked_data/Accounts_monthly_Data-July2010")
print(len(files))
# Here you can slice/truncate the number of files you want to process for testing
#files = files[791:793]
print(folder_month, folder_year)
###Output
July 2010
###Markdown
 Use these commands to inspect portions of the XML data:
```python
doc = process_account(files[0])

# display for fun
doc
doc['elements']

# Loop through the document, retrieving any element with a matching name
for element in doc['elements']:
    if element['name'] == 'balancesheetdate':
        print(element)

# Extract all the data to long-thin table format for use with SQL
# Note, tables from docs should be appendable to one another to create tables of all data
flatten_data(doc).head(15)
```
###Code
# Finally, build a table of all variables from all example (digital) documents
# This can take a while
results = build_month_table(files)
print(results.shape)
print(results.doc_name.drop_duplicates().count(),len(files))
results.head(10)
# Hi Rob,
#
# Just checked several example batch files against the parser's outputs and QA checks,
# the shape in the above cell is always exactly 1 less than the number of rows in the
# outputted csv - as expected with the header row.
#
# Don't seem to get any error, and I've updated the methods in this notebook further.
#
# Let me know how you get on with this version!
#
# Elliott.
output_xbrl_month(results, "/shares/data/20200519_companies_house_accounts/xbrl_parsed_data")
# Find list of all unqiue tags in dataset
list_of_tags = results["name"].tolist()
list_of_tags_unique = list(set(list_of_tags))
print("Longest tag: ", len(max(list_of_tags_unique, key = len)))
# Output all unqiue tags to a txt file
retrieve_list_of_tags(
results,
"name",
"/shares/data/20200519_companies_house_accounts/logs"
)
# Output all unqiue tags and their relative frequencies to a txt file
get_tag_counts(
results,
"name",
"/shares/data/20200519_companies_house_accounts/logs"
)
test_output_data = pd.read_csv("/shares/data/20200519_companies_house_accounts/xbrl_parsed_data/Accounts_monthly_Data-July2010", lineterminator = "\n")
print(test_output_data.shape)
###Output
_____no_output_____ |
visualization/SentimentDifferenceAnalysis.ipynb | ###Markdown
Sentiment Difference Visualizations
###Code
import statistics as stat
import numpy as np
import pandas as pd
import csv as csv
import matplotlib.pyplot as mpl
import os
from tqdm import tqdm
pwd = "/home/srivbane/shared/caringbridge/data/projects/sna-social-support/csv_data/"
epoch_day = 86400000 # accounting for milliseconds
with open(os.path.join(pwd, "from_inflection.csv"), 'r', encoding='utf-8') as infile:
a_df = pd.read_csv(infile, header=0)
from_df = a_df.fillna(0)[(a_df.pre_count.values > 5) & (a_df.post_count.values > 5)]
with open(os.path.join(pwd, "to_inflection.csv"), 'r', encoding='utf-8') as infile:
b_df = pd.read_csv(infile, header=0)
to_df = b_df.fillna(0)[(b_df.pre_count.values > 5) & (b_df.post_count.values > 5)]
with open(os.path.join(pwd, "overall_inflection.csv"), 'r', encoding='utf-8') as infile:
c_df = pd.read_csv(infile, header=0)
overall_df = c_df.fillna(0)[(c_df.pre_count.values > 5) & (c_df.post_count.values > 5)]
em = 12
mpl.rcParams['figure.figsize'] = [9, 4]
mpl.rcParams['figure.dpi'] = 300
mpl.rcParams['font.family'] = "serif"
mpl.rcParams['font.size'] = 8
print(f"FROM: {len(a_df)} entries \t\t TO: {len(b_df)} entries \t\t OVERALL: {len(c_df)} entries ")
print(f"FROM: {len(from_df)} valid entries \t TO: {len(to_df)} valid entries \t OVERALL: {len(overall_df)} valid entries ")
###Output
FROM: 240034 entries TO: 246331 entries OVERALL: 274559 entries
FROM: 47466 valid entries TO: 47385 valid entries OVERALL: 38688 valid entries
###Markdown
Difference Plots
###Code
fig, axs = mpl.subplots(1, 3, sharey=True)
axs[0].hist(from_df.diff_pos, bins=30, range=(-.02, .02), color = 'b')
axs[0].set_title(f"FROM ($\mu$ = {round(stat.mean(from_df.diff_pos.values), 4)})")
axs[1].hist(to_df.diff_pos, bins=30, range=(-.02, .02), color = 'b')
axs[1].set_title(f"TO ($\mu$ = {round(stat.mean(to_df.diff_pos.values), 4)})")
axs[2].hist(overall_df.diff_pos, bins=30, range=(-.02, .02), color = 'b')
axs[2].set_title(f"OVERALL ($\mu$ = {round(stat.mean(overall_df.diff_pos.values), 4)})")
fig.suptitle("Positive Sentiment Difference for a Given Inflection", fontsize=em+2)
mpl.show()
fig, axs = mpl.subplots(1, 3, sharey=True)
axs[0].hist(from_df.diff_neg, bins=30, range=(-.02, .02), color = 'r')
axs[0].set_title(f"FROM ($\mu$ = {round(stat.mean(from_df.diff_neg.values), 4)})")
axs[1].hist(to_df.diff_neg, bins=30, range=(-.02, .02), color = 'r')
axs[1].set_title(f"TO ($\mu$ = {round(stat.mean(to_df.diff_neg.values), 4)})")
axs[2].hist(overall_df.diff_neg, bins=30, range=(-.02, .02), color = 'r')
axs[2].set_title(f"OVERALL ($\mu$ = {round(stat.mean(overall_df.diff_neg.values), 4)})")
fig.suptitle("Negative Sentiment Difference for a Given Inflection", fontsize=em+2)
mpl.show()
###Output
_____no_output_____
###Markdown
 Fitting the Regression. Fit this to pre_pos, pre_count, the post_pre_count_ratio, tenure, pre_all_patient_author_ratio, and initiator_ratio.
###Code
import statsmodels.api as sm
import statsmodels.formula.api as smf
dat = overall_df.copy()
dat.head()
with open(os.path.join(pwd, "dynamic_auth.csv"), 'r', encoding='utf-8') as auths:
authors = pd.read_csv(auths, usecols=[0,2,3], names = ['user_id', 'first', 'last'])
authors['tenure'] = ((authors["last"] - authors["first"]) / epoch_day)
authors = authors[["user_id", "tenure"]]
dat = pd.merge(dat, authors, on="user_id", how="left")
dat["pre_pred_prop"] = np.divide(dat["pre_pa_count"], dat["pre_count"])
dat["inf_count_prop"] = np.divide(dat["post_count"], dat["pre_count"])
dat["single_author"] = [int(x > .9 or x < .1) for x in dat["pre_pred_prop"]]
dat["patient_author"] = [int(x > .9) for x in dat["pre_pred_prop"]]
dat.head()
results = smf.ols('post_pos ~ pre_pos + pre_count + pre_pa_count + is_init + tenure + pre_pred_prop + inf_count_prop', data=dat).fit()
print(results.summary())
corr = dat.corr()
corr = corr.round(2)
corr.style.apply(lambda x: ["background-color: yellow" if abs(v) > .1 else "" for v in x], axis = 1)
###Output
_____no_output_____ |
supervised/random_forests.py.ipynb | ###Markdown
 Random forests of decision trees
###Code
import argparse
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from utilities import visualize_classifier
#Argument parser: depending on the input
#we build either a random forest classifier or an extremely randomized one
def build_arg_parser():
parser = argparse.ArgumentParser(description='Classify data using \ Ensemble Learning techniques')
parser.add_argument('--classifier-type', dest='classifier_type',
required=True, choices=['rf','erf'],help="Type of classifier \
to use, can be either 'rf' or 'erf'")
return parser
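# Example command-line usage if this were run as a standalone script (the
# filename below is hypothetical; only the flag and its choices come from the
# parser defined above):
#   python random_forests.py --classifier-type rf    # random forest
#   python random_forests.py --classifier-type erf   # extremely random forest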
# Definition of the main function and parsing of the input arguments
"""if __name__=='__main__':
    #parse the input arguments
args = build_arg_parser().parse_args()
classifer_type = args.classifier_type
"""
def random_forests(type_forest, file):
    #Load the data
input_file = file
data = np.loadtxt(input_file, delimiter=',')
X, y = data[:, :-1], data[:, -1]
    #Separate the data by class label
class_0 = np.array(X[y==0])
class_1 = np.array(X[y==1])
class_2 = np.array(X[y==2])
    #Visualize the input data
plt.figure()
plt.scatter(class_0[:,0], class_0[:, 1], s=75, facecolors='white', edgecolors='black', linewidth=1,marker='s')
plt.scatter(class_1[:,0], class_1[:, 1], s=75, facecolors='white', edgecolors='black', linewidth=1,marker='o')
plt.scatter(class_2[:,0], class_2[:, 1], s=75, facecolors='white', edgecolors='black', linewidth=1,marker='^')
plt.title('Input data')
    #Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.25, random_state=5)
"""Definimos los parámetros que se utilizarán cuando construyamos el clasificador.
n_estimators: se refiere a la cantidad de árboles que se construirán.
max_depth: se refiere al número máximo de niveles en cada árbol.
random_state: se refiere al valor semilla del generador de números aleatorios
necesario para inicializar el algoritmo de clasificador de bosque aleatorio.
"""
params = {'n_estimators':100, 'max_depth':4, 'random_state':0}
if type_forest == 'rf':
classifier = RandomForestClassifier(**params)
else:
classifier = ExtraTreesClassifier(**params)
classifier.fit(X_train,y_train)
print('\nTraining dataset\n')
visualize_classifier(classifier, X_train, y_train,)
    #Compute the predicted outputs for the test inputs
y_test_pred = classifier.predict(X_test)
print('\nTest dataset\n')
visualize_classifier(classifier, X_test, y_test)
    # Evaluate classifier performance
class_names = ['Class-0', 'Class-1', 'Class-2']
print("\n" + "#"*40)
print("\nClassifier performance on training dataset\n")
print(classification_report(y_train, classifier.predict(X_train),target_names=class_names))
print("#"*40 + "\n")
print("#"*40)
print("\nClassifier performance on test dataset\n")
print(classification_report(y_test, y_test_pred,target_names=class_names))
print("#"*40 + "\n")
    #Estimate the confidence measure of the predictions
test_datapoints = np.array([[5, 5], [3, 6], [6, 4], [7, 2], [4, 4],[5, 2]])
print("\nConfidence measure:")
for datapoint in test_datapoints:
probabilities = classifier.predict_proba([datapoint])[0]
predicted_class = 'Class-' + str(np.argmax(probabilities))
print('\nDatapoint:', datapoint)
print('Predicted class:', predicted_class)
# Visualize the datapoints
print('\nTest datapoints\n')
visualize_classifier(classifier, test_datapoints, [0]*len(test_datapoints))
#Random forest classification
random_forests('rf','data_random_forests.txt')
#Extremely random forest classification
random_forests('erf','data_random_forests.txt')
###Output
Training dataset
|
week8/ex7-kmeans and PCA/2- 2D kmeans.ipynb | ###Markdown
2-2维kmeans
###Code
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
import scipy.io as sio
mat = sio.loadmat('./data/ex7data2.mat')
data2 = pd.DataFrame(mat.get('X'), columns=['X1', 'X2'])
print(data2.head())
sns.set(context="notebook", style="white")
sns.lmplot('X1', 'X2', data=data2, fit_reg=False)
plt.show()
###Output
X1 X2
0 1.842080 4.607572
1 5.658583 4.799964
2 6.352579 3.290854
3 2.904017 4.612204
4 3.231979 4.939894
###Markdown
 0. random init for initial centroids
###Code
def combine_data_C(data, C):
data_with_c = data.copy()
data_with_c['C'] = C
return data_with_c
# k-means fn --------------------------------
def random_init(data, k):
"""choose k sample from data set as init centroids
Args:
data: DataFrame
k: int
Returns:
k samples: ndarray
"""
return data.sample(k).as_matrix()
def _find_your_cluster(x, centroids):
"""find the right cluster for x with respect to shortest distance
Args:
x: ndarray (n, ) -> n features
centroids: ndarray (k, n)
Returns:
k: int
"""
distances = np.apply_along_axis(func1d=np.linalg.norm, # this give you l2 norm
axis=1,
arr=centroids - x) # use ndarray's broadcast
return np.argmin(distances)
def assign_cluster(data, centroids):
"""assign cluster for each node in data
return C ndarray
"""
return np.apply_along_axis(lambda x: _find_your_cluster(x, centroids),
axis=1,
arr=data.as_matrix())
def new_centroids(data, C):
data_with_c = combine_data_C(data, C)
return data_with_c.groupby('C', as_index=False).\
mean().\
sort_values(by='C').\
drop('C', axis=1).\
as_matrix()
def cost(data, centroids, C):
m = data.shape[0]
expand_C_with_centroids = centroids[C]
distances = np.apply_along_axis(func1d=np.linalg.norm,
axis=1,
arr=data.as_matrix() - expand_C_with_centroids)
return distances.sum() / m
def _k_means_iter(data, k, epoch=100, tol=0.0001):
"""one shot k-means
with early break
"""
centroids = random_init(data, k)
cost_progress = []
for i in range(epoch):
print('running epoch {}'.format(i))
C = assign_cluster(data, centroids)
centroids = new_centroids(data, C)
cost_progress.append(cost(data, centroids, C))
if len(cost_progress) > 1: # early break
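            # early-break criterion: stop once the relative improvement in cost drops below tol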
if (np.abs(cost_progress[-1] - cost_progress[-2])) / cost_progress[-1] < tol:
break
return C, centroids, cost_progress[-1]
def k_means(data, k, epoch=100, n_init=10):
"""do multiple random init and pick the best one to return
Args:
data (pd.DataFrame)
Returns:
(C, centroids, least_cost)
"""
tries = np.array([_k_means_iter(data, k, epoch) for _ in range(n_init)])
least_cost_idx = np.argmin(tries[:, -1])
return tries[least_cost_idx]
random_init(data2, 3)
###Output
_____no_output_____
###Markdown
 1. cluster assignment (see http://stackoverflow.com/questions/14432557/matplotlib-scatter-plot-with-different-text-at-each-data-point) find closest cluster experiment
###Code
init_centroids = random_init(data2, 3)
init_centroids
x = np.array([1, 1])
fig, ax = plt.subplots(figsize=(6,4))
ax.scatter(x=init_centroids[:, 0], y=init_centroids[:, 1])
for i, node in enumerate(init_centroids):
ax.annotate('{}: ({},{})'.format(i, node[0], node[1]), node)
ax.scatter(x[0], x[1], marker='x', s=200)
plt.show()
_find_your_cluster(x, init_centroids)
###Output
_____no_output_____
###Markdown
1 epoch cluster assigning
###Code
C = assign_cluster(data2, init_centroids)
data_with_c =combine_data_C(data2, C)
data_with_c.head()
###Output
_____no_output_____
###Markdown
See the first round clustering result
###Code
sns.lmplot('X1', 'X2', hue='C', data=data_with_c, fit_reg=False)
plt.show()
###Output
_____no_output_____
###Markdown
2. calculate new centroid
###Code
new_centroids(data2, C)
###Output
_____no_output_____
###Markdown
 putting it all together, take 1: this is just a one-shot `k-means`; if the random init picks bad starting centroids, the final clustering may be very sub-optimal
###Code
final_C, final_centroid, _= _k_means_iter(data2, 3)
data_with_c = combine_data_C(data2, final_C)
sns.lmplot('X1', 'X2', hue='C', data=data_with_c, fit_reg=False)
plt.show()
###Output
_____no_output_____
###Markdown
calculate the cost
###Code
cost(data2, final_centroid, final_C)
###Output
_____no_output_____
###Markdown
k-mean with multiple tries of randome init, pick the best one with least cost
###Code
best_C, best_centroids, least_cost = k_means(data2, 3)
least_cost
data_with_c = combine_data_C(data2, best_C)
sns.lmplot('X1', 'X2', hue='C', data=data_with_c, fit_reg=False)
plt.show()
###Output
_____no_output_____
###Markdown
try sklearn kmeans
###Code
from sklearn.cluster import KMeans
sk_kmeans = KMeans(n_clusters=3)
sk_kmeans.fit(data2)
sk_C = sk_kmeans.predict(data2)
data_with_c = combine_data_C(data2, sk_C)
sns.lmplot('X1', 'X2', hue='C', data=data_with_c, fit_reg=False)
plt.show()
###Output
_____no_output_____ |
avacado_Data_Analysis (1).ipynb | ###Markdown
 a. How do the prices of avocado vary over time? A - SanFrancisco
###Code
data["Date"]=pd.to_datetime(data["Date"],format='%Y-%m-%d')
data["Year"]=data["Date"].dt.year
data["Month"]=data["Date"].dt.month
data["Day"]=data["Date"].dt.day
data=data.sort_values(["Year","Month","Day"])
data.head(3)
Region_grouped = data.groupby('Date')['AveragePrice'].mean().reset_index()
Region_grouped.head()
Region_grouped
plt.figure(figsize=(20,6))
plt.plot(Region_grouped['Date'], Region_grouped['AveragePrice'], label='Average Price variationwith Time', linewidth=2, color='blue')
plt.legend()
###Output
_____no_output_____
###Markdown
 b. Which areas sell avocado at a very high price? A - HartfordSpringfield
###Code
data[['Date','region']].duplicated()
data_regionwise = data.groupby('region')['AveragePrice'].mean()
data_regionwise = pd.DataFrame(data_regionwise)
data_regionwise.dtypes
data_regionwise[data_regionwise['AveragePrice']==data_regionwise['AveragePrice'].max()]
###Output
_____no_output_____
###Markdown
 c. In which year were the prices of avocado the highest? A - 2017
###Code
data.head(3)
data_yearwise = data.groupby('year')['AveragePrice'].mean().reset_index()
data_yearwise.head()
data_yearwise.sort_values('AveragePrice', ascending=False)
###Output
_____no_output_____
###Markdown
 d. Which areas have the highest demand for avocado? A - TotalUS
###Code
data.head(3)
data_Demandwsie = data.groupby('region')['Total Bags'].sum()
data_Demandwsie = pd.DataFrame(data_Demandwsie)
data_Demandwsie.sort_values('Total Bags', ascending=False).head()
###Output
_____no_output_____
###Markdown
 e. Which type of avocado is more popular? A - conventional
###Code
data.head(3)
data_typewise = data.groupby('type')['Total Volume'].sum().reset_index()
data_typewise.sort_values('Total Volume', ascending=False)
###Output
_____no_output_____ |
.ipynb_checkpoints/plink_data_analysis-checkpoint.ipynb | ###Markdown
Manhattan Plot
###Code
from bioinfokit import analys, visuz
df_ = analys.get_data("mhat").data
df_.head()
#df.TEST.unique()
df.head()
###Output
_____no_output_____
###Markdown
Save default
###Code
# Create Manhattan plot with default parameters
visuz.marker.mhat(df= df, chr = "CHR", pv="P")
#visuz.marker.mhat(df=df_, chr='chr',pv='pvalue')
###Output
_____no_output_____
###Markdown
Change Colors
###Code
visuz.marker.mhat(df = df, chr = "CHR", pv = "P", color= ("red", "green"))
# To color every chromosome, we need the number of unique chromosomes
df["CHR"].nunique()
# we need to pass 23 colors
# Add different colors equal to the number of chromosomes
colors = ["red", "yellow","#a7414a", "#696464", "#00743f", "#563838", "#6a8a82",
"#a37c27", "#5edfff", "#282726", "#c0334d", "#c9753d",
"red", "yellow","#a7414a", "#696464", "#00743f", "#563838", "#6a8a82",
"#a37c27", "#5edfff", "#282726", "#c0334d"]
visuz.marker.mhat(df = df, chr = "CHR", pv= "P", color=colors)
print(len(colors)) # The Number of Colors must be same with Chromosome Unique type
###Output
23
###Markdown
Add Genome-Wide significance line,
###Code
## By default line will plotted at P=5E-0.8
# We can change this value as per need
visuz.marker.mhat(df = df, chr="CHR", pv= "P", color=colors, gwas_sign_line=True)
# Line will not display if the P value is so small
###Output
_____no_output_____
###Markdown
Change LIne with user input
###Code
# Change the position of genome-wide significance lin
# You can change this value as per need
visuz.marker.mhat(df=df, chr="CHR", pv = "P", color=colors, gwas_sign_line=True, gwasp=5E-04)
# gwasp line will not show if p value to high
###Output
_____no_output_____
###Markdown
Add annotation to SNPs (default text)
###Code
# Add name to SNPs based on the significance defined by "Gwasp"
visuz.marker.mhat(df= df, chr= "CHR", pv = "P", color=colors, gwas_sign_line=True,
gwasp=5E-04, markernames=True, markeridcol="SNP")
## This is time-consuming: it took me around 10 min on 32 GB RAM and 12 cores
###Output
_____no_output_____
###Markdown
Add annotation to SNPs (box text)
###Code
# Add name to SNPs based on the significance defined by gwasp
visuz.marker.mhat(df = df, chr = "CHR", pv = "P", color = colors, gwas_sign_line = True,
gwasp = 5E-06, markernames = True, markeridcol = "SNP", gstyle = 2)
###Output
_____no_output_____
###Markdown
 The cells below use randomly generated sample data to build a Manhattan-style plot manually with matplotlib
###Code
from pandas import DataFrame
from scipy.stats import uniform
from scipy.stats import randint
import numpy as np
## Some Sample data
df = DataFrame({"gene":["gene-%i" % i for i in np.arange(10000)],
"pvalue":uniform.rvs(size = 10000),
"Chromosome": ["ch-%i" %i for i in randint.rvs(0,12, size = 10000)]})
df.head()
df['minuslog10pvalue'] = -np.log10(df.pvalue)
df = df.sort_values("Chromosome")
df.head()
df.tail()
df["ind"] = range(len(df))
df.reset_index(drop=True, inplace=True)
df
df_grouped = df.groupby(("Chromosome"))
fig = plt.figure(figsize=(20,10))
ax = fig.add_subplot(111)
colors = ['red','green','blue', 'yellow']
x_labels = []
x_labels_pos = []
for num, (name, group) in enumerate(df_grouped):
    group.plot(kind = "scatter", x = "ind", y = "minuslog10pvalue",\
              color = colors[num % len(colors)], ax = ax)
    x_labels.append(name)
    x_labels_pos.append((group["ind"].iloc[-1] - (group["ind"].iloc[-1]
                                                  - group["ind"].iloc[0])/2))
ax.set_xticks(x_labels_pos)
ax.set_xticklabels(x_labels)
ax.set_xlim([0, len(df)])
ax.set_xlabel("Chromosome")
from pandas import DataFrame
from scipy.stats import uniform
from scipy.stats import randint
import numpy as np
import matplotlib.pyplot as plt
# some sample data
df = DataFrame({'gene' : ['gene-%i' % i for i in np.arange(10000)],
'pvalue' : uniform.rvs(size=10000),
'chromosome' : ['ch-%i' % i for i in randint.rvs(0,12,size=10000)]})
# -log_10(pvalue)
df['minuslog10pvalue'] = -np.log10(df.pvalue)
df.chromosome = df.chromosome.astype('category')
df.chromosome = df.chromosome.cat.set_categories(['ch-%i' % i for i in range(12)], ordered=True)
df = df.sort_values('chromosome')
# How to plot gene vs. -log10(pvalue) and colour it by chromosome?
df['ind'] = range(len(df))
df_grouped = df.groupby(('chromosome'))
fig = plt.figure(figsize=(20,10))
ax = fig.add_subplot(111)
colors = ['red','green','blue', 'yellow']
x_labels = []
x_labels_pos = []
for num, (name, group) in enumerate(df_grouped):
group.plot(kind='scatter', x='ind', y='minuslog10pvalue',color=colors[num % len(colors)], ax=ax)
x_labels.append(name)
x_labels_pos.append((group['ind'].iloc[-1] - (group['ind'].iloc[-1] - group['ind'].iloc[0])/2))
ax.set_xticks(x_labels_pos)
ax.set_xticklabels(x_labels)
ax.set_xlim([0, len(df)])
ax.set_ylim([0, 3.5])
ax.set_xlabel('Chromosome')
import matplotlib.pyplot as plt
from numpy.random import randn, random_sample
g = random_sample(int(1e5))*10 # Uniform random values between 0 and 10
p = abs(randn(int(1e5))) # Abs of Normally distributed data
"""
plot g vs p in groups with different colors
colors are cycled automatically by matplotlib
use another colormap or define own colors for a different cycle
"""
plt.figure(figsize=(20,10))
for i in range(1, 11):
plt.plot(g[abs(g-i)< 1], p[abs(g-i)< 1], ls = "", marker = ".")
plt.show()
###Output
_____no_output_____ |
model/Simple_CNN_CWE476.ipynb | ###Markdown
Data Preprocessing
###Code
# Loading formatted data
# I formatted the data into pandas dataframes
# See data_formatting.ipynb for details
train_data = pd.read_pickle("../dataset/train.pickle")
validate_data = pd.read_pickle("../dataset/validate.pickle")
test_data = pd.read_pickle("../dataset/test.pickle")
###Output
_____no_output_____
###Markdown
 Tokenize the source code BoW For data batching convenience, the paper trained only on functions with token length $10 \leq l \leq 500$, padded to the maximum length of **500**. The paper does not mention whether to pad the 0s at the end or at the beginning, so I assume they append the padding at the end (actually, this is not a big deal for a CNN). text_to_word_sequence does not work here since it expects a single string
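For illustration only (a hypothetical snippet, not from the original run): `text_to_word_sequence` tokenizes one string at a time, which is why it cannot simply be applied to the whole column of functions.
```python
import tensorflow as tf

# hypothetical single code string -- text_to_word_sequence returns a token list
# for this one string only, which is why it cannot be mapped over the column directly
tokens = tf.keras.preprocessing.text.text_to_word_sequence("int x = get_value ( ) ;")
```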
###Code
# train_tokenized = tf.keras.preprocessing.text.text_to_word_sequence(train_data[0])
# x_train = tf.keras.preprocessing.sequence.pad_sequences(train_tokenized, maxlen=500, padding="post")
# validate_tokenized = tf.keras.preprocessing.text.text_to_word_sequence(validate_data[0])
# x_validate = tf.keras.preprocessing.sequence.pad_sequences(validate_tokenized, maxlen=500, padding="post")
# test_tokenized = tf.keras.preprocessing.text.text_to_word_sequence(test_data[0])
# x_test = tf.keras.preprocessing.sequence.pad_sequences(test_tokenized, maxlen=500, padding="post")
###Output
_____no_output_____
###Markdown
Init the Tokenizer BoW
###Code
# The paper does not declare the num of words to track, I am using 10000 here
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words=10000)
# Required before using texts_to_sequences
# Arguments; a list of strings
tokenizer.fit_on_texts(list(train_data[0]))
###Output
_____no_output_____
###Markdown
For data batching convenience, the paper trained only on functions with token length $10 \leq l \leq 500$, padded to the maximum length of **500** The paper does not mention to pad the 0 at the end or at the beginning, so I assume they append the padding at the end (actually, this is not a big deal in CNN)
###Code
train_tokenized = tokenizer.texts_to_sequences(train_data[0])
x_train = tf.keras.preprocessing.sequence.pad_sequences(train_tokenized, maxlen=500, padding="post")
validate_tokenized = tokenizer.texts_to_sequences(validate_data[0])
x_validate = tf.keras.preprocessing.sequence.pad_sequences(validate_tokenized, maxlen=500, padding="post")
test_tokenized = tokenizer.texts_to_sequences(test_data[0])
x_test = tf.keras.preprocessing.sequence.pad_sequences(test_tokenized, maxlen=500, padding="post")
y_train = train_data[train_data.columns[4]].astype(int)
y_validate = validate_data[validate_data.columns[4]].astype(int)
y_test = test_data[test_data.columns[4]].astype(int)
###Output
_____no_output_____
###Markdown
 Model Design This dataset is highly imbalanced, so I am working on adjusting the train weights: https://www.tensorflow.org/tutorials/structured_data/imbalanced_data
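The class-weight heuristic implemented in the next cell (restated here for clarity, not an addition to the method) weights each class inversely to its frequency: $w_c = \frac{1}{n_c} \cdot \frac{N}{2}$, where $n_c$ is the number of samples in class $c$ and $N$ is the total number of samples.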
###Code
clear, vulnerable = (train_data[train_data.columns[4]]).value_counts()
total = vulnerable + clear
print("Total: {}\n Vulnerable: {} ({:.2f}% of total)\n".format(total, vulnerable, 100 * vulnerable / total))
weight_for_0 = (1 / clear)*(total)/2.0
weight_for_1 = (1 / vulnerable)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
model = tf.keras.Sequential()
model.add(tf.keras.layers.Embedding(input_dim=10000, output_dim=13, input_length=500))
model.add(tf.keras.layers.Conv1D(filters=512, kernel_size=9, activation="relu"))
model.add(tf.keras.layers.MaxPool1D(pool_size=4))
model.add(tf.keras.layers.Dropout(rate=0.5))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(units=64, activation="relu"))
model.add(tf.keras.layers.Dense(units=16, activation="relu"))
# I am using the sigmoid rather than the softmax mentioned in the paper
model.add(tf.keras.layers.Dense(units=1, activation="sigmoid"))
# Adam Optimization with the parameter stated in the paper
adam = tf.keras.optimizers.Adam(lr=0.0002)
# Define the evaluation metrics
METRICS = [
tf.keras.metrics.TruePositives(name='tp'),
tf.keras.metrics.FalsePositives(name='fp'),
tf.keras.metrics.TrueNegatives(name='tn'),
tf.keras.metrics.FalseNegatives(name='fn'),
tf.keras.metrics.BinaryAccuracy(name='accuracy'),
tf.keras.metrics.Precision(name='precision'),
tf.keras.metrics.Recall(name='recall'),
tf.keras.metrics.AUC(name='auc'),
]
model.compile(optimizer=adam, loss="binary_crossentropy", metrics=METRICS)
model.summary()
history = model.fit(x=x_train, y=y_train, batch_size=128, epochs=10, verbose=1, class_weight=class_weight, validation_data=(x_validate, y_validate))
with open('CWE476_trainHistory', 'wb') as history_file:
pickle.dump(history.history, history_file)
model.save("Simple_CNN_CWE476")
results = model.evaluate(x_test, y_test, batch_size=128)
###Output
_____no_output_____ |
ClassExercises/Week1_Adventure/pyAdventureSolution.ipynb | ###Markdown
 Adventure Time Solution Below is a solution that uses a dictionary whose keys are the place locations and whose values are themselves dictionaries with two key/value pairs: the description of that location and a list of places that one can go next. So a dictionary within a dictionary (dict-ception). For instance, a data structure with one entry in the dictionary might look like this
###Code
places = {"Office":{
"description":"The place where you frantically try to get everything ready for class",
"next":["Kitchen", "Bathroom"]
}}
print(places["Office"]["description"]) # Print out the description
###Output
The place where you frantically try to get everything ready for class
###Markdown
Some students chose to make the value a list instead of a dictionary, which is fine. The only downside is we now need to use some magic number indices to access the description and next place, so it is stylistically not quite as elegant
###Code
places = {"Office":["The place where you frantically try to get everything ready for class", ["Kitchen", "Bathroom"]]}
###Output
_____no_output_____
###Markdown
Matching braces can be a real pain when we have nested lists/dictionaries like this, so I'm going to break it down by starting over and adding elements one at a time
###Code
places = {}
places["Office"] = {"description":"The place where you frantically try to get everything ready for class",
"next":["Kitchen", "Bathroom"]}
places["Bathroom"] = {"description":"The place you go when nature calls",
"next":["Office"]}
places["Kitchen"] = {"description":"The place where you heat up frozen food and sometimes (but rarely) cook",
"next":["Office", "Upstairs Bedroom", "Outside"]}
places["Outside"] = {"description":"The place you will go when there is a COVID vaccine",
"next":["Kitchen"]}
places["Upstairs Bedroom"] = {"description":"The place you go rarely when you've finished your work for the day",
"next":["Kitchen", "Upstairs Bathroom"]}
places["Upstairs Bathroom"] = {"description":"The higher altitude version of the place you go when nature calls",
"next":["Kitchen", "Upstairs Bedroom"]}
###Output
_____no_output_____
###Markdown
Finally, we can put this together in a while loop that's our game. We'll make the game end when we end up outside
###Code
start = "Office"
end = "Outside"
state = start
while state != end:
print("You are at ", state, ".", places[state]["description"])
nxt = ""
while len(nxt) == 0: # Keep looping until we get a valid input
print("Please type your next destination", places[state]["next"], ":")
nxt = input()
if nxt in places[state]["next"]:
state = nxt
else:
print("Invalid next location!")
nxt = ""
print("You have arrived at ", end, ".", places[end]["description"])
###Output
You are at Office . The place where you frantically try to get everything ready for class
Please type your next destination ['Kitchen', 'Bathroom'] :
Bathroom
You are at Bathroom . The place you go when nature calls
Please type your next destination ['Office'] :
Office
You are at Office . The place where you frantically try to get everything ready for class
Please type your next destination ['Kitchen', 'Bathroom'] :
Kitchen
You are at Kitchen . The place where you heat up frozen food and sometimes (but rarely) cook
Please type your next destination ['Office', 'Upstairs Bedroom', 'Outside'] :
Office
You are at Office . The place where you frantically try to get everything ready for class
Please type your next destination ['Kitchen', 'Bathroom'] :
Kitchen
You are at Kitchen . The place where you heat up frozen food and sometimes (but rarely) cook
Please type your next destination ['Office', 'Upstairs Bedroom', 'Outside'] :
blah
Invalid next location!
Please type your next destination ['Office', 'Upstairs Bedroom', 'Outside'] :
Upstairs bedroom
Invalid next location!
Please type your next destination ['Office', 'Upstairs Bedroom', 'Outside'] :
Upstairs Bedroom
You are at Upstairs Bedroom . The place you go rarely when you've finished your work for the day
Please type your next destination ['Kitchen', 'Upstairs Bathroom'] :
Upstairs Bathroom
You are at Upstairs Bathroom . The higher altitude version of the place you go when nature calls
Please type your next destination ['Kitchen', 'Upstairs Bedroom'] :
Upstairs Bedroom
You are at Upstairs Bedroom . The place you go rarely when you've finished your work for the day
Please type your next destination ['Kitchen', 'Upstairs Bathroom'] :
Kitchen
You are at Kitchen . The place where you heat up frozen food and sometimes (but rarely) cook
Please type your next destination ['Office', 'Upstairs Bedroom', 'Outside'] :
Outside
You have arrived at Outside . The place you will go when there is a COVID vaccine
|
notebooks/nb.ipynb | ###Markdown
demo notebook
###Code
from demo import utils
feet = 5.5
meters = utils.feet_to_meters(feet)
meters
###Output
_____no_output_____ |
1ST PROJECT/1ST PROJECT.ipynb | ###Markdown
 CRYPTOGRAPHY Cryptography means converting simple plain text into something which is not easily understandable until someone has the KEY or code, and vice-versa. Cipher Text -> a text which is the result of encryption is known as cipher text. Encryption -> converting a message to a secret message is known as encryption. Decryption -> converting the secret message back to the normal message
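As a compact sketch of the round trip that the cells below implement step by step (assuming the same `cryptography` Fernet API used there; the notebook additionally persists the key to a file):
```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()         # the shared KEY
f = Fernet(key)
token = f.encrypt(b"plain text")    # Encryption -> cipher text
print(f.decrypt(token))             # Decryption -> b'plain text'
```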
###Code
# Write a Python program which converts any given text to cipher text and, whenever the key is available, converts it back to the normal message
#Step 1 -> Decide which library to use to convert normal text to cipher text
#Step 2 -> Take input from the user and convert it to cipher text
#Step 3 -> Display the cipher text back to the user
#Step 4 -> Load the key and, if the key is the same, convert the cipher text back to normal text based on the input provided
#installing the cryptography library from pypi
!pip install cryptography
from cryptography.fernet import Fernet
#Function to generate password key
def genratePassKey():
key = Fernet.generate_key()
abc = open("PasswordKey.key",'wb') #This is for opening the 'abc' file which indicates that the file is opened for writing in binary mode
abc.write(key) # wriiting in anything in key
abc.close() #closing the file abc
genratePassKey() #calling the function
#function which show generated key
def getMyKey():
abc = open("PasswordKey.key",'rb') #file read-only mode in binary format. The pointer is at the beginning of the file.
return abc.read()
getMyKey()
# Function to take from the user the string we want to encrypt
def getContentFromUser():
    return bytes(input("Enter text to ENCRYPT : "),'utf-8')
userinp = getContentFromUser()
#function which encrypt the inputed message
def encryptMessage(message_normal):
key = getMyKey()
k = Fernet(key)
encrypted_Message = k.encrypt(message_normal)
return encrypted_Message
encryptedmsg = encryptMessage(bytes(userinp))
#Function to decrypt the message
def decryptMessage(message_secret):
key = getMyKey()
k = Fernet(key)
decrypted_Message = k.decrypt(message_secret)
return decrypted_Message
decryptMessage(encryptedmsg).decode('utf-8')
###Output
_____no_output_____ |
pymaceuticals-plotting.ipynb | ###Markdown
 The Power of Plots Matplotlib Homework Assignment Author: Alex Schanne In this assignment, we will be working for Pymaceuticals Inc. on their most recent animal study. We will be generating all the tables and figures needed for the technical report of the study. Additionally, we will provide a summary of our results at the end. More specifically, this report will: Clean all the data for duplicates. Generate a statistical summary of the data, including mean, median, variance, standard deviation and SEM of the tumor volume in each drug regimen. Use Pandas' DataFrame and Matplotlib's Pyplot to visualize. Calculate the final tumor volume of each mouse across the four most promising treatment regimens: Capomulin, Ramicane, Infubinol, and Ceftamin, in order to look at the quartiles and determine outliers. Create a Box and Whiskers plot of the final tumor volume for all four treatment regimens and highlight any potential outliers in distinct color and style. Select a Capomulin mouse and generate a plot of time versus tumor volume. Generate a scatter plot of mouse weight versus average tumor volume for Capomulin. And calculate the correlation coefficient and linear regression model for that relationship. Trends and Observations First Observation The first thing we notice when we take a look at this study is the subject population. They carefully tried to keep the population half and half male-female and equal for each drug regimen. So, at least superficially (without knowing any other attributes of the population), it appears the trial was trying to be 'fair' for each regimen. The only regimen that did not have 25 mouse subjects was Stelasyn. Second Observation Another immediate observation we can make about this data is from the data summary of tumor volume over time. We see that the variance in tumor volume is low to start but seems to increase throughout the timepoints. This is to be expected if we are to believe that the study was set up correctly and we are seeing how each mouse responds to the treatment idiosyncratically. In any experiment, setting the initial conditions, in this case ensuring approximately equal tumor volumes in the mice, will allow a better measurement of how different variables are affected by the controlled variables. For instance, the mouse population in the Capomulin treatment group shows a 0.00 variance at timepoint 0, but by timepoint 45, they show a variance of 19.035028. Third Observation Another thing we notice is that the range and skew of the weight data for the top two of the top four drugs seem to be smaller and lower. Or rather, their range is not as large. Neither Capomulin nor Ramicane have outliers and their interquartile range is below 10, meaning the majority of the mice's weight data falls within 10 mm3 of each other. Additionally, the mice in those regimens weigh, on average, less than the mice of the other regimens. *This seems to suggest that lower weight is a positive outcome of the drug treatments. This is not explicitly stated. So it should be noted that this is only inferred. Any conclusions based on this assumption should be re-evaluated, and not be considered final. Setting up and Cleaning the Data
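The variance figures quoted in the Second Observation can be reproduced with a short groupby; a minimal sketch, assuming the cleaned `mouse_study` dataframe (with columns `drug`, `timepoint`, `tumor_volume`) built in the cells below:
```python
capomulin = mouse_study[mouse_study['drug'] == 'Capomulin']
capomulin.groupby('timepoint')['tumor_volume'].var()
```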
###Code
#Importing dependencies and doing pre-work
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as st
import plotly.graph_objects as go
import plotly.offline as pyo
#importing files and creating Dataframes
mouse_path = "data/Mouse_metadata.csv"
results_path = "data/Study_results.csv"
mouse_pd = pd.read_csv(mouse_path)
study_pd = pd.read_csv(results_path)
#merging dataframes into one, on 'Mouse ID'
stewart_little = pd.merge(mouse_pd, study_pd, on = "Mouse ID")
#Renaming the columns to shorter names to make typing easier
mouse_study = stewart_little.rename(columns = {'Mouse ID': 'mouse_id',
'Drug Regimen': 'drug',
'Sex': 'sex',
'Age_months': 'age',
'Weight (g)': 'weight',
'Timepoint': 'timepoint',
'Tumor Volume (mm3)': 'tumor_volume',
'Metastatic Sites': 'metastatic_sites'})
mouse_study.head()
#Checking amount of data in study versus how many unique mouse ids are included in the study
print(len(mouse_study), mouse_study['mouse_id'].nunique())
#Checking the number of different drug regimens
mouse_study['drug'].value_counts()
#Dropping the duplicate mice rows and keeping their initial row
clean_complete = mouse_study.drop_duplicates(subset = 'mouse_id', keep = 'first', inplace = False)
clean_complete.head()
print(clean_complete['mouse_id'].count())
print(clean_complete['drug'].value_counts())
###Output
249
Infubinol 25
Zoniferol 25
Ramicane 25
Propriva 25
Capomulin 25
Placebo 25
Ceftamin 25
Naftisol 25
Ketapril 25
Stelasyn 24
Name: drug, dtype: int64
###Markdown
Summary Statistics Now that we have created clean dataframes, combining the study results and the mice data, we will quickly go over a statistical summary of the data. That way we have a better understanding of the data's narrative and are better prepared to do more in-depth analysis later.
###Code
#Generating a summary statistics table for the tumor volume for each regimen
drug_group = mouse_study.set_index("timepoint").groupby(['drug', 'timepoint']).agg({'tumor_volume': ['mean', 'median', 'var', 'std', 'sem']})
drug_group
###Output
_____no_output_____
###Markdown
Bar and Pie Charts Now that we have a clear summary of each drug's performance as measured by tumor volume (mm3), we can start to plot some of the key behaviors and trends we want to analyze. First we will visualize with Bar and Pie charts.
###Code
#Pandas: Generating a bar plot showing the total number of mice for each treatment throughout study.
#creating the dataframe
x_axis = clean_complete['drug'].unique().tolist()
y_axis = clean_complete.groupby(['drug'])['mouse_id'].count().tolist()
drug_mouse = pd.DataFrame({'drug':x_axis, 'mice': y_axis})
#creating the plot and formatting to match the last
subject_plot = drug_mouse.plot(kind = "bar", figsize = (12,7), color = 'g')
subject_plot.get_xticks()
subject_plot.set_xticklabels(drug_mouse['drug'])
subject_plot.set_title('The Number of Mice in Each Drug Regimen')
subject_plot.set_xlabel('Drug Regimen')
subject_plot.set_ylabel('Number of Mice in per Regimen in Trial')
subject_plot.set_xlim(-1, len(x_axis))
subject_plot.set_ylim(0, max(y_axis) + 5)
#saving and showing bar chart
plt.savefig('charts/drugsubjects_barchart1')
plt.show()
#Matplotlib: Generating a bar plot showing the total number of mice for each treatment throughout study.
x_axis = clean_complete['drug'].unique().tolist()
y_axis = clean_complete.groupby(['drug'])['mouse_id'].count().tolist()
#Creating and formatting the figure
plt.figure(figsize = (12,7))
plt.bar(x_axis, y_axis, color = 'g', align = 'center')
plt.xlabel("Drug Regimens")
plt.ylabel("Number of Mice in per Regimen in Trial")
plt.title("The Number of Mice in Each Drug Regimen")
plt.xlim(-1, len(x_axis))
plt.ylim(0, max(y_axis) + 5)
#saving and showing bar chart
plt.savefig('charts/drugsubjects_barchart2')
plt.show()
#Pandas: Generating a pie chart showing male vs female subjects for each treatment throughout study.
#the data to be used
gender_group = clean_complete.groupby(['sex']).count()[['mouse_id']]
labels = clean_complete['sex'].unique().tolist()
#creating the pie chart
gender_group.plot.pie(subplots = True, explode = (0, 0.1), labels = labels, autopct="%1.1f%%", figsize = (5,5), shadow = True)
plt.ylabel('Sex')
plt.title('Male vs. Female Subjects in Study Population')
#saving and showing pie chart
plt.savefig('charts/subjectsex_piechart1')
plt.show()
#Matplotlib: Generating a pie chart showing male vs female subjects for each treatment throughout study.
fig, ax = plt.subplots()
ax.pie(gender_group, explode = (0, 0.1), labels = labels, autopct="%1.1f%%", shadow = True)
ax.set_ylabel('Sex')
ax.set_title('Male vs. Female Subjects in Study Population')
#showing pie chart
plt.savefig('charts/subjectsex_piechart2')
plt.show()
###Output
C:\Users\alexs\anaconda3\envs\PythonData\lib\site-packages\ipykernel_launcher.py:3: MatplotlibDeprecationWarning:
Non-1D inputs to pie() are currently squeeze()d, but this behavior is deprecated since 3.1 and will be removed in 3.3; pass a 1D array instead.
###Markdown
 Quartiles, Outliers and Boxplots Now that we have looked more closely at our study population, we move on to the data range. We will look at the skew of our data, where most of our data lies within its range, and we will look to see if there are any outliers in our data.
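 The outlier rule used in the cells below is the standard 1.5×IQR criterion (restated here for clarity): a final tumor volume $v$ is flagged as a potential outlier when $v < Q_1 - 1.5\,\mathrm{IQR}$ or $v > Q_3 + 1.5\,\mathrm{IQR}$, where $\mathrm{IQR} = Q_3 - Q_1$.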
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
last_time = mouse_study.drop_duplicates(subset = 'mouse_id', keep = 'last', inplace = False)
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
top4 = last_time[(last_time['drug'] == 'Capomulin') | (last_time['drug'] == 'Ramicane') | (last_time['drug'] == 'Infubinol') | (last_time['drug'] == 'Ceftamin')]
top4
# Put treatments into a list for for loop (and later for plot labels)
regimens = top4['drug'].tolist()
# Create empty list to fill with tumor vol data (for plotting)
vol_data = []
outlier_list = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
quartiles = top4['tumor_volume'].quantile([.25,.5,.75])
lowerq = quartiles[.25]
upperq = quartiles[.75]
iqr = upperq-lowerq
#Finding outliers with pandas function
def outliers(df):
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
tum_vol = top4['tumor_volume']
for volume in tum_vol:
if volume > upper_bound or volume < lower_bound:
outlier_list.append(volume)
vol_data.append(volume)
else:
vol_data.append(volume)
return outlier_list
outliers(top4)
#Printing out results of Quartile and Outlier Analysis
print(f"The lower quartile of the tumor volume data is: {lowerq}.")
print(f"The upper quartile of the tumor volume data is: {upperq}.")
print(f"The interquartile range of the tumor volume data is: {iqr}.")
print(f"The the median of the tumor volume data is: {quartiles[0.5]}.")
if len(outlier_list) != 0:
print(f"The full dataset for tumor volume in all regimens contains the following outlier(s):" + str(outlier_list))
else:
print("There are no outliers in the Tumor Volume data for the top four regimens.")
#This is a Box and Whiskers Plot for the data for all 4 regimens. It will be a guide for the
green_diamond = dict(markerfacecolor='g', marker='D')
fig1, ax1 = plt.subplots()
ax1.set_title('Final Tumor Volume in All Regimens')
ax1.set_ylabel('Final Tumor Volume (mm3)')
ax1.boxplot(vol_data, flierprops=green_diamond)
plt.show()
#Analyzing the Quartiles and Outliers of Capomulin
#Capomulin DataFrame
capo_data = top4[top4['drug'].isin(['Capomulin'])]
#Creating Empty lists for to loop through for plotting
capo_vol = []
capo_outliers = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
quartiles = capo_data['tumor_volume'].quantile([.25,.5,.75])
capo_lowerq = quartiles[.25]
capo_upperq = quartiles[.75]
capo_iqr = capo_upperq-capo_lowerq
#Finding outliers with pandas function
def outliers(df):
lower_bound = capo_lowerq - (1.5*capo_iqr)
upper_bound = capo_upperq + (1.5*capo_iqr)
tum_vol = capo_data['tumor_volume']
for volume in tum_vol:
if volume > upper_bound or volume < lower_bound:
capo_outliers.append(volume)
capo_vol.append(volume)
else:
capo_vol.append(volume)
return capo_outliers
outliers(capo_data)
print(f"The lower quartile of Capomulin's tumor volume data is: {capo_lowerq}")
print(f"The upper quartile of Capomulin's tumor volume is: {capo_upperq}")
print(f"The interquartile range of Capomulin's tumor volume is: {capo_iqr}")
print(f"The the median of Capomulin's tumor volume is: {quartiles[0.5]} ")
if len(capo_outliers) != 0:
print(f"The data for Capomulin contains the following outlier(s):" + str(capo_outliers))
else:
print("There are no outliers in the Tumor Volume data for Capomulin.")
#Creating the Box and Whiskers plot for Capomulin
green_diamond = dict(markerfacecolor='g', marker='D')
fig2, ax2 = plt.subplots()
ax2.set_title('Final Tumor Volume for Capomulin')
ax2.set_ylabel('Final Tumor Volume (mm3)')
ax2.boxplot(capo_vol, flierprops=green_diamond)
plt.show()
#Analyzing the Quartiles and Outliers of Ramicane
#Ramicane DataFrame
rami_data = top4[top4['drug'].isin(['Ramicane'])]
#Creating Empty lists for to loop through for plotting
rami_vol = []
rami_outliers = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
quartiles = rami_data['tumor_volume'].quantile([.25,.5,.75])
rami_lowerq = quartiles[.25]
rami_upperq = quartiles[.75]
rami_iqr = rami_upperq-rami_lowerq
#Finding outliers with pandas function
def outliers(df):
lower_bound = rami_lowerq - (1.5*rami_iqr)
upper_bound = rami_upperq + (1.5*rami_iqr)
tum_vol = rami_data['tumor_volume']
for volume in tum_vol:
if volume > upper_bound or volume < lower_bound:
rami_outliers.append(volume)
rami_vol.append(volume)
else:
rami_vol.append(volume)
return rami_outliers
outliers(rami_data)
print(f"The lower quartile of Ramicane's tumor volume data is: {rami_lowerq}")
print(f"The upper quartile of Ramicane's tumor volume data is: {rami_upperq}")
print(f"The interquartile range of Ramicane's tumor volume data is: {rami_iqr}")
print(f"The the median of Ramicane's tumor volume data is: {quartiles[0.5]} ")
if len(rami_outliers) != 0:
print(f"The data for Ramicane contains the following outlier(s):" + str(rami_outliers))
else: print("There are no outliers in the Tumor Volume data for Ramicane.")
#Creating the Box and Whiskers plot for Ramicane
green_diamond = dict(markerfacecolor='g', marker='D')
fig3, ax3 = plt.subplots()
ax3.set_title('Final Tumor Volume for Ramicane')
ax3.set_ylabel('Final Tumor Volume (mm3)')
ax3.boxplot(rami_vol, flierprops=green_diamond)
plt.show()
#Analyzing the Quartiles and Outliers of Infubinol
#Infubinol DataFrame
infu_data = top4[top4['drug'].isin(['Infubinol'])]
#Creating Empty lists for to loop through for plotting
infu_vol = []
infu_outliers = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
quartiles = infu_data['tumor_volume'].quantile([.25,.5,.75])
infu_lowerq = quartiles[.25]
infu_upperq = quartiles[.75]
infu_iqr = infu_upperq-infu_lowerq
#Finding outliers with pandas function
def outliers(df):
lower_bound = infu_lowerq - (1.5*infu_iqr)
upper_bound = infu_upperq + (1.5*infu_iqr)
tum_vol = infu_data['tumor_volume']
for volume in tum_vol:
if volume > upper_bound or volume < lower_bound:
infu_outliers.append(volume)
infu_vol.append(volume)
else:
infu_vol.append(volume)
return infu_outliers
outliers(infu_data)
print(f"The lower quartile of Infubinol's tumor volume data is: {infu_lowerq}")
print(f"The upper quartile of Infubinol's tumor volume data is: {infu_upperq}")
print(f"The interquartile range of Infubinol's tumor volume data is: {infu_iqr}")
print(f"The the median of Infubinol's tumor volume data is: {quartiles[0.5]} ")
if len(infu_outliers) != 0:
print(f"The data for Infubinol contains the following outlier(s):" + str(infu_outliers))
else: print("There are no outliers in the Tumor Volume data for Infubinol.")
#Creating the Box and Whiskers plot for Infubinol
green_diamond = dict(markerfacecolor='g', marker='D')
fig4, ax4 = plt.subplots()
ax4.set_title('Final Tumor Volume for Infubinol')
ax4.set_ylabel('Final Tumor Volume (mm3)')
ax4.boxplot(infu_vol, flierprops=green_diamond)
plt.show()
#Analyzing the Quartiles and Outliers of Ceftamin
#Ceftamin DataFrame
ceft_data = top4[top4['drug'].isin(['Ceftamin'])]
#Creating Empty lists for to loop through for plotting
ceft_vol = []
ceft_outliers = []
# Calculate the IQR and quantitatively determine if there are any potential outliers.
quartiles = ceft_data['tumor_volume'].quantile([.25,.5,.75])
ceft_lowerq = quartiles[.25]
ceft_upperq = quartiles[.75]
ceft_iqr = ceft_upperq-ceft_lowerq
#Finding outliers with pandas function
def outliers(df):
lower_bound = ceft_lowerq - (1.5*ceft_iqr)
upper_bound = ceft_upperq + (1.5*ceft_iqr)
tum_vol = ceft_data['tumor_volume']
for volume in tum_vol:
if volume > upper_bound or volume < lower_bound:
ceft_outliers.append(volume)
ceft_vol.append(volume)
else:
ceft_vol.append(volume)
return ceft_outliers
outliers(ceft_data)
print(f"The lower quartile of Ceftamin's tumor volume data is: {ceft_lowerq}")
print(f"The upper quartile of Ceftamin's tumor volume data is: {ceft_upperq}")
print(f"The interquartile range of Ceftamin's tumor volume data is: {ceft_iqr}")
print(f"The the median of Ceftamin's tumor volume data is: {quartiles[0.5]} ")
if len(ceft_outliers) != 0:
print(f"The data for Ceftamin contains the following outlier(s):" + str(ceft_outliers))
else: print("There are no outliers in the Tumor Volume data for Ceftamin.")
#Creating the Box and Whiskers plot for Ceftamin
green_diamond = dict(markerfacecolor='g', marker='D')
fig4, ax5 = plt.subplots()
ax5.set_title('Final Tumor Volume for Ceftamin')
ax5.set_ylabel('Final Tumor Volume (mm3)')
ax5.boxplot(ceft_vol, showfliers=green_diamond)
plt.show()
#Combining the box and whiskers into one plot using plotly (just to try- found it while working on the homework)
trace0 = go.Box(
y = capo_vol,
name = "Capomulin"
)
trace1 = go.Box(
y = rami_vol,
name = "Ramicane"
)
trace2 = go.Box(
y = infu_vol,
name = "Infubinol"
)
trace3 = go.Box(
y = ceft_vol,
name = "Ceftamin"
)
data = [trace0, trace1, trace2, trace3]
layout = go.Layout(title = "Final Tumor Volume (mm3) for the Top Four Drug Regimens")
fig = go.Figure(data=data, layout=layout)
pyo.plot(fig)
#Combining plots using matplotlib
#formatting outliers
green_diamond = dict(markerfacecolor='g', marker='D')
#Creating the figure
fig, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(5, figsize = (5,20), sharex = True, sharey = True)
fig.suptitle("Tumor Volume at Final Time Point for Drug Trials", fontsize = 16, fontweight = "bold")
ax1 = plt.subplot(511)
ax1.set_title('Final Tumor Volume in All Regimens')
ax1.boxplot(vol_data, flierprops=green_diamond)
ax2 = plt.subplot(512, sharey = ax1)
ax2.set_title('Final Tumor Volume for Capomulin')
ax2.boxplot(capo_vol, flierprops=green_diamond)
ax3 = plt.subplot(513, sharey = ax1)
ax3.set_title('Final Tumor Volume for Ramicane')
ax3.set_ylabel('Final Tumor Volume (mm3)')
ax3.boxplot(rami_vol, flierprops=green_diamond)
ax4 = plt.subplot(514, sharey = ax1)
ax4.set_title('Final Tumor Volume for Infubinol')
ax4.boxplot(infu_vol, flierprops=green_diamond)
ax5 = plt.subplot(515, sharey = ax1)
ax5.set_title('Final Tumor Volume for Ceftamin')
ax5.boxplot(ceft_vol, flierprops=green_diamond)
#Save and Show plot
plt.savefig('charts/tumorvol_boxplots')
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Filter original data for just the Capomulin Drug Regime
capomulin_mice = mouse_study.loc[(mouse_study["drug"] == "Capomulin"),:]
capomulin_mouse = capomulin_mice.loc[(capomulin_mice['mouse_id'] == 's185'),:]
# Set variables to hold relevant data
timept = capomulin_mouse["timepoint"]
tumor_vol = capomulin_mouse["tumor_volume"]
# Plot the tumor volume for various mice
tumor_volume_line, = plt.plot(timept, tumor_vol, marker = 'D')
# Show the chart, add labels
plt.xlabel('Timepoint')
plt.ylabel('Tumor Volume')
plt.title('Tumor Volume over Time for One Mouse on Capomulin')
#Save and show figure
plt.savefig('charts/capomouse_tumvol_vs_time')
plt.show()
# Generate a line plot of time point versus tumor volume for the mice treated with Capomulin
# Filter original data for just the Capomulin Drug Regime
capomulin_mice = mouse_study.loc[(mouse_study["drug"] == "Capomulin"),:]
# Set variables to hold relevant data
timept = capomulin_mice["timepoint"]
tumor_vol = capomulin_mice["tumor_volume"]
# Plot the tumor volume for various mice
tumor_volume_line, = plt.plot(timept, tumor_vol, marker = 's', color = 'black')
# Show the chart, add labels
plt.xlabel('Timepoint')
plt.ylabel('Tumor Volume')
plt.title('Tumor Volume over Time for Capomulin Mice')
#Save and show plot
plt.savefig('charts/capomice_tumvol_vs_time')
plt.show()
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
# Pull values for x and y values
mouse_weight = capomulin_mice.groupby(capomulin_mice["mouse_id"])["weight"].mean()
tumor_volume = capomulin_mice.groupby(capomulin_mice["mouse_id"])["tumor_volume"].mean()
# Create Scatter Plot with values calculated above
plt.scatter(mouse_weight,tumor_volume)
plt.xlabel("Weight of Mouse")
plt.ylabel("Tumor Volume (mm3)")
plt.title("Weight of Mouse vs. Tumor Volume on Capomulin")
#Save and show plot
plt.savefig('charts/weight_vs_tumvol_scatter')
plt.show()
###Output
_____no_output_____
###Markdown
 The graph above is a scatterplot of the mice through all the timepoints during the Capomulin trial, so the mean of their weight and the tumor volume is an average for the entirety of the trial and therefore may not be representative of any specific time point. As we see in the previous line chart of tumor volume over time in the trial, the values fluctuate drastically throughout the trial. What we can see from this graph is that there seems to be a positive correlation between the weight of the mouse and tumor volume throughout the trial. Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
# Pull values for x and y values
mouse_weight = capomulin_mice.groupby(capomulin_mice["mouse_id"])["weight"].mean()
tumor_vol = capomulin_mice.groupby(capomulin_mice["mouse_id"])["tumor_volume"].mean()
# Perform a linear regression on tumor volume versus mouse weight
slope, intercept, r, p, std_err = st.linregress(mouse_weight, tumor_vol)
# Create the equation of the line to calculate the predicted tumor volume from mouse weight
fit = slope * mouse_weight + intercept
line_eq = 'y = ' + str(round(slope,2)) + 'x + ' + str(round(intercept,2))
# Plot the linear model on top of scatter plot
plt.scatter(mouse_weight,tumor_vol)
plt.xlabel("Weight of Mouse")
plt.ylabel("Tumor Volume")
plt.title("Weight of Mouse vs. Tumor Volume on Capomulin")
plt.plot(mouse_weight,fit,"-")
plt.annotate(line_eq, (18,37), fontsize=15)
plt.xticks(mouse_weight, rotation=90)
#save and show plot
plt.savefig('charts/weight_vs_tumvol_regress')
plt.show()
# Calculate correlation coefficient
corr = round(st.pearsonr(mouse_weight,tumor_vol)[0],2)
print(f'The correlation between weight and tumor value is {corr}')
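# (Added check, not part of the original analysis) The coefficient of determination
# from the regression above should roughly equal the square of the Pearson correlation.
print(f'The r-squared of the weight vs. tumor volume regression is {round(r**2, 2)}')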
###Output
_____no_output_____ |
Data Visualization with Python/DV0101EN-Exercise-Introduction-to-Matplotlib-and-Line-Plots-py.ipynb | ###Markdown
Introduction to Matplotlib and Line Plots IntroductionThe aim of these labs is to introduce you to data visualization with Python as concretely and as consistently as possible. Speaking of consistency, because there is no *best* data visualization library available for Python - as of the creation of these labs - we have to introduce different libraries and show their benefits when we are discussing new visualization concepts. Doing so, we hope to make students well-rounded with visualization libraries and concepts so that they are able to judge and decide on the best visualization technique and tool for a given problem _and_ audience.Please make sure that you have completed the prerequisites for this course, namely [**Python for Data Science**](https://cognitiveclass.ai/courses/python-for-data-science/).**Note**: The majority of the plots and visualizations will be generated using data stored in *pandas* dataframes. Therefore, in this lab, we provide a brief crash course on *pandas*. However, if you are interested in learning more about the *pandas* library, a detailed description and explanation of how to use it and how to clean, munge, and process data stored in a *pandas* dataframe are provided in our course [**Data Analysis with Python**](https://cognitiveclass.ai/courses/data-analysis-python/). ------------ Table of Contents1. [Exploring Datasets with *pandas*](0)1.1 [The Dataset: Immigration to Canada from 1980 to 2013](2)1.2 [*pandas* Basics](4) 1.3 [*pandas* Intermediate: Indexing and Selection](6) 2. [Visualizing Data using Matplotlib](8) 2.1 [Matplotlib: Standard Python Visualization Library](10) 3. [Line Plots](12) Exploring Datasets with *pandas* *pandas* is an essential data analysis toolkit for Python. From their [website](http://pandas.pydata.org/):>*pandas* is a Python package providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, **real world** data analysis in Python.The course heavily relies on *pandas* for data wrangling, analysis, and visualization. We encourage you to spend some time and familiarize yourself with the *pandas* API Reference: http://pandas.pydata.org/pandas-docs/stable/api.html. The Dataset: Immigration to Canada from 1980 to 2013 Dataset Source: [International migration flows to and from selected countries - The 2015 revision](http://www.un.org/en/development/desa/population/migration/data/empirical2/migrationflows.shtml).The dataset contains annual data on the flows of international immigrants as recorded by the countries of destination. The data presents both inflows and outflows according to the place of birth, citizenship or place of previous / next residence, both for foreigners and nationals. The current version presents data pertaining to 45 countries.In this lab, we will focus on the Canadian immigration data.For the sake of simplicity, Canada's immigration data has been extracted and uploaded to one of IBM's servers. You can fetch the data from [here](https://ibm.box.com/shared/static/lw190pt9zpy5bd1ptyg2aw15awomz9pu.xlsx).--- *pandas* Basics The first thing we'll do is import two key data analysis modules: *pandas* and **Numpy**.
###Code
import numpy as np # useful for many scientific computing in Python
import pandas as pd # primary data structure library
###Output
_____no_output_____
###Markdown
Let's download and import our primary Canadian Immigration dataset using *pandas* `read_excel()` method. Normally, before we can do that, we would need to download a module which *pandas* requires to read in excel files. This module is **xlrd**. For your convenience, we have pre-installed this module, so you would not have to worry about that. Otherwise, you would need to run the following line of code to install the **xlrd** module:```!conda install -c anaconda xlrd --yes``` Now we are ready to read in our data.
###Code
df_can = pd.read_excel('https://ibm.box.com/shared/static/lw190pt9zpy5bd1ptyg2aw15awomz9pu.xlsx',
sheet_name='Canada by Citizenship',
skiprows=range(20),
skip_footer=2)
print ('Data read into a pandas dataframe!')
###Output
/home/jupyterlab/conda/lib/python3.6/site-packages/pandas/util/_decorators.py:177: FutureWarning: the 'skip_footer' keyword is deprecated, use 'skipfooter' instead
return func(*args, **kwargs)
###Markdown
Let's view the top 5 rows of the dataset using the `head()` function.
###Code
df_can.head()
# tip: You can specify the number of rows you'd like to see as follows: df_can.head(10)
###Output
_____no_output_____
###Markdown
We can also view the bottom 5 rows of the dataset using the `tail()` function.
###Code
df_can.tail()
###Output
_____no_output_____
###Markdown
When analyzing a dataset, it's always a good idea to start by getting basic information about your dataframe. We can do this by using the `info()` method.
###Code
df_can.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 195 entries, 0 to 194
Data columns (total 43 columns):
Type 195 non-null object
Coverage 195 non-null object
OdName 195 non-null object
AREA 195 non-null int64
AreaName 195 non-null object
REG 195 non-null int64
RegName 195 non-null object
DEV 195 non-null int64
DevName 195 non-null object
1980 195 non-null int64
1981 195 non-null int64
1982 195 non-null int64
1983 195 non-null int64
1984 195 non-null int64
1985 195 non-null int64
1986 195 non-null int64
1987 195 non-null int64
1988 195 non-null int64
1989 195 non-null int64
1990 195 non-null int64
1991 195 non-null int64
1992 195 non-null int64
1993 195 non-null int64
1994 195 non-null int64
1995 195 non-null int64
1996 195 non-null int64
1997 195 non-null int64
1998 195 non-null int64
1999 195 non-null int64
2000 195 non-null int64
2001 195 non-null int64
2002 195 non-null int64
2003 195 non-null int64
2004 195 non-null int64
2005 195 non-null int64
2006 195 non-null int64
2007 195 non-null int64
2008 195 non-null int64
2009 195 non-null int64
2010 195 non-null int64
2011 195 non-null int64
2012 195 non-null int64
2013 195 non-null int64
dtypes: int64(37), object(6)
memory usage: 65.6+ KB
###Markdown
To get the list of column headers we can call upon the dataframe's `.columns` parameter.
###Code
df_can.columns.values
###Output
_____no_output_____
###Markdown
Similarly, to get the list of indices we use the `.index` parameter.
###Code
df_can.index.values
###Output
_____no_output_____
###Markdown
Note: The default type of index and columns is NOT list.
###Code
print(type(df_can.columns))
print(type(df_can.index))
###Output
<class 'pandas.core.indexes.base.Index'>
<class 'pandas.core.indexes.range.RangeIndex'>
###Markdown
To get the index and columns as lists, we can use the `tolist()` method.
###Code
df_can.columns.tolist()
df_can.index.tolist()
print (type(df_can.columns.tolist()))
print (type(df_can.index.tolist()))
###Output
<class 'list'>
<class 'list'>
###Markdown
To view the dimensions of the dataframe, we use the `.shape` parameter.
###Code
# size of dataframe (rows, columns)
df_can.shape
###Output
_____no_output_____
###Markdown
Note: The main types stored in *pandas* objects are *float*, *int*, *bool*, *datetime64[ns]* and *datetime64[ns, tz] (in >= 0.17.0)*, *timedelta[ns]*, *category (in >= 0.15.0)*, and *object* (string). In addition these dtypes have item sizes, e.g. int64 and int32. Let's clean the data set to remove a few unnecessary columns. We can use *pandas* `drop()` method as follows:
###Code
# in pandas axis=0 represents rows (default) and axis=1 represents columns.
df_can.drop(['AREA','REG','DEV','Type','Coverage'], axis=1, inplace=True)
df_can.head(2)
###Output
_____no_output_____
###Markdown
Let's rename the columns so that they make sense. We can use `rename()` method by passing in a dictionary of old and new names as follows:
###Code
df_can.rename(columns={'OdName':'Country', 'AreaName':'Continent', 'RegName':'Region'}, inplace=True)
df_can.columns
###Output
_____no_output_____
###Markdown
We will also add a 'Total' column that sums up the total immigrants by country over the entire period 1980 - 2013, as follows:
###Code
df_can['Total'] = df_can.sum(axis=1)
###Output
_____no_output_____
###Markdown
We can check to see how many null objects we have in the dataset as follows:
###Code
df_can.isnull().sum()
###Output
_____no_output_____
###Markdown
Finally, let's view a quick summary of each column in our dataframe using the `describe()` method.
###Code
df_can.describe()
###Output
_____no_output_____
###Markdown
--- *pandas* Intermediate: Indexing and Selection (slicing) Select Column**There are two ways to filter on a column name:**Method 1: Quick and easy, but only works if the column name does NOT have spaces or special characters.```python df.column_name (returns series)```Method 2: More robust, and can filter on multiple columns.```python df['column'] (returns series)``````python df[['column 1', 'column 2']] (returns dataframe)```--- Example: Let's try filtering on the list of countries ('Country').
###Code
df_can.Country # returns a series
###Output
_____no_output_____
###Markdown
Let's try filtering on the list of countries ('Country') and the data for the years 1980 - 1985.
###Code
df_can[['Country', 1980, 1981, 1982, 1983, 1984, 1985]] # returns a dataframe
# notice that 'Country' is string, and the years are integers.
# for the sake of consistency, we will convert all column names to string later on.
###Output
_____no_output_____
###Markdown
Select RowThere are two main ways to select rows:```python df.loc[label] filters by the labels of the index/column df.iloc[index] filters by the positions of the index/column``` Before we proceed, notice that the default index of the dataset is a numeric range from 0 to 194. This makes it very difficult to do a query by a specific country. For example, to search for data on Japan, we need to know the corresponding index value.This can be fixed very easily by setting the 'Country' column as the index using the `set_index()` method.
###Code
df_can.set_index('Country', inplace=True)
# tip: The opposite of set is reset. So to reset the index, we can use df_can.reset_index()
df_can.head(3)
# optional: to remove the name of the index
df_can.index.name = None
###Output
_____no_output_____
###Markdown
Example: Let's view the number of immigrants from Japan (row 87) for the following scenarios: 1. The full row data (all columns) 2. For year 2013 3. For years 1980 to 1985
###Code
# 1. the full row data (all columns)
print(df_can.loc['Japan'])
# alternate methods
print(df_can.iloc[87])
print(df_can[df_can.index == 'Japan'].T.squeeze())
# 2. for year 2013
print(df_can.loc['Japan', 2013])
# alternate method
print(df_can.iloc[87, 36]) # year 2013 is the last column, with a positional index of 36
# 3. for years 1980 to 1985
print(df_can.loc['Japan', [1980, 1981, 1982, 1983, 1984, 1985]])
print(df_can.iloc[87, [3, 4, 5, 6, 7, 8]])
###Output
1980 701
1981 756
1982 598
1983 309
1984 246
1985 198
Name: Japan, dtype: object
1980 701
1981 756
1982 598
1983 309
1984 246
1985 198
Name: Japan, dtype: object
###Markdown
Column names that are integers (such as the years) might introduce some confusion. For example, when we are referencing the year 2013, one might confuse it with the 2013th positional index. To avoid this ambiguity, let's convert the column names into strings: '1980' to '2013'.
###Code
df_can.columns = list(map(str, df_can.columns))
# [print (type(x)) for x in df_can.columns.values] #<-- uncomment to check type of column headers
###Output
_____no_output_____
###Markdown
Since we converted the years to string, let's declare a variable that will allow us to easily call upon the full range of years:
###Code
# useful for plotting later on
years = list(map(str, range(1980, 2014)))
years
###Output
_____no_output_____
###Markdown
Filtering based on criteriaTo filter the dataframe based on a condition, we simply pass the condition as a boolean vector. For example, let's filter the dataframe to show the data on Asian countries (Continent = Asia).
###Code
# 1. create the condition boolean series
condition = df_can['Continent'] == 'Asia'
print (condition)
# 2. pass this condition into the dataFrame
df_can[condition]
# we can pass multiple criteria in the same line.
# let's filter for Continent = Asia and Region = Southern Asia
df_can[(df_can['Continent']=='Asia') & (df_can['Region']=='Southern Asia')]
# note: When using 'and' and 'or' operators, pandas requires we use '&' and '|' instead of 'and' and 'or'
# don't forget to enclose the two conditions in parentheses
###Output
_____no_output_____
###Markdown
Before we proceed: let's review the changes we have made to our dataframe.
###Code
print ('data dimensions:', df_can.shape)
print(df_can.columns)
df_can.head(2)
###Output
data dimensions: (195, 38)
Index(['Continent', 'Region', 'DevName', '1980', '1981', '1982', '1983',
'1984', '1985', '1986', '1987', '1988', '1989', '1990', '1991', '1992',
'1993', '1994', '1995', '1996', '1997', '1998', '1999', '2000', '2001',
'2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010',
'2011', '2012', '2013', 'Total'],
dtype='object')
###Markdown
--- Visualizing Data using Matplotlib Matplotlib: Standard Python Visualization LibraryThe primary plotting library we will explore in the course is [Matplotlib](http://matplotlib.org/). As mentioned on their website: >Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shell, the jupyter notebook, web application servers, and four graphical user interface toolkits.If you are aspiring to create impactful visualizations with Python, Matplotlib is an essential tool to have at your disposal. Matplotlib.PyplotOne of the core aspects of Matplotlib is `matplotlib.pyplot`. It is Matplotlib's scripting layer, which we studied in detail in the videos about Matplotlib. Recall that it is a collection of command-style functions that make Matplotlib work like MATLAB. Each `pyplot` function makes some change to a figure: e.g., creates a figure, creates a plotting area in a figure, plots some lines in a plotting area, decorates the plot with labels, etc. In this lab, we will work with the scripting layer to learn how to generate line plots. In future labs, we will get to work with the Artist layer as well to experiment first-hand with how it differs from the scripting layer. Let's start by importing `Matplotlib` and `Matplotlib.pyplot` as follows:
###Code
# we are using the inline backend
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
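# A quick illustration (not part of the original lab) of the two interfaces described above:
# - scripting layer: pyplot tracks the "current" figure/axes for you, e.g.
#       plt.plot([1, 2, 3]); plt.title('scripting layer')
# - Artist layer: you create and hold explicit Figure/Axes objects, e.g.
#       fig, ax = plt.subplots(); ax.plot([1, 2, 3]); ax.set_title('Artist layer')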
###Output
_____no_output_____
###Markdown
*optional: check if Matplotlib is loaded.
###Code
print ('Matplotlib version: ', mpl.__version__) # >= 2.0.0
###Output
Matplotlib version: 2.2.2
###Markdown
*optional: apply a style to Matplotlib.
###Code
print(plt.style.available)
mpl.style.use(['ggplot']) # optional: for ggplot-like style
###Output
['Solarize_Light2', '_classic_test', 'bmh', 'classic', 'dark_background', 'fast', 'fivethirtyeight', 'ggplot', 'grayscale', 'seaborn-bright', 'seaborn-colorblind', 'seaborn-dark-palette', 'seaborn-dark', 'seaborn-darkgrid', 'seaborn-deep', 'seaborn-muted', 'seaborn-notebook', 'seaborn-paper', 'seaborn-pastel', 'seaborn-poster', 'seaborn-talk', 'seaborn-ticks', 'seaborn-white', 'seaborn-whitegrid', 'seaborn', 'tableau-colorblind10']
###Markdown
Plotting in *pandas*Fortunately, pandas has a built-in implementation of Matplotlib that we can use. Plotting in *pandas* is as simple as appending a `.plot()` method to a series or dataframe.Documentation:- [Plotting with Series](http://pandas.pydata.org/pandas-docs/stable/api.htmlplotting)- [Plotting with Dataframes](http://pandas.pydata.org/pandas-docs/stable/api.htmlapi-dataframe-plotting) Line Plots (Series/Dataframe) **What is a line plot and why use it?**A line chart or line plot is a type of plot which displays information as a series of data points called 'markers' connected by straight line segments. It is a basic type of chart common in many fields.Use a line plot when you have a continuous data set. These are best suited for trend-based visualizations of data over a period of time. **Let's start with a case study:**In 2010, Haiti suffered a catastrophic magnitude 7.0 earthquake. The quake caused widespread devastation and loss of life, and about three million people were affected by this natural disaster. As part of Canada's humanitarian effort, the Government of Canada stepped up its effort in accepting refugees from Haiti. We can quickly visualize this effort using a `Line` plot:**Question:** Plot a line graph of immigration from Haiti using `df.plot()`. First, we will extract the data series for Haiti.
###Code
haiti = df_can.loc['Haiti', years] # passing in years 1980 - 2013 to exclude the 'total' column
haiti.head()
###Output
_____no_output_____
###Markdown
Next, we will plot a line plot by appending `.plot()` to the `haiti` series.
###Code
haiti.plot()
###Output
_____no_output_____
###Markdown
*pandas* automatically populated the x-axis with the index values (years), and the y-axis with the column values (number of immigrants). However, notice how the years were not displayed because they are of type *string*. Therefore, let's change the type of the index values to *integer* for plotting.Also, let's add a title and label the x and y axes using `plt.title()`, `plt.ylabel()`, and `plt.xlabel()` as follows:
###Code
haiti.index = haiti.index.map(int) # let's change the index values of Haiti to type integer for plotting
haiti.plot(kind='line')
plt.title('Immigration from Haiti')
plt.ylabel('Number of immigrants')
plt.xlabel('Years')
plt.show() # need this line to show the updates made to the figure
###Output
_____no_output_____
###Markdown
We can clearly notice how the number of immigrants from Haiti spiked up from 2010 as Canada stepped up its efforts to accept refugees from Haiti. Let's annotate this spike in the plot by using the `plt.text()` method.
###Code
haiti.plot(kind='line')
plt.title('Immigration from Haiti')
plt.ylabel('Number of Immigrants')
plt.xlabel('Years')
# annotate the 2010 Earthquake.
# syntax: plt.text(x, y, label)
plt.text(2000, 6000, '2010 Earthquake') # see note below
plt.show()
###Output
_____no_output_____
###Markdown
With just a few lines of code, you were able to quickly identify and visualize the spike in immigration!Quick note on x and y values in `plt.text(x, y, label)`: Since the x-axis (years) is type 'integer', we specified x as a year. The y axis (number of immigrants) is type 'integer', so we can just specify the value y = 6000. ```python plt.text(2000, 6000, '2010 Earthquake') # years stored as type int``` If the years were stored as type 'string', we would need to specify x as the index position of the year. E.g., the 20th index is year 2000 since it is the 20th year with a base year of 1980.```python plt.text(20, 6000, '2010 Earthquake') # years stored as type str``` We will cover advanced annotation methods in later modules. We can easily add more countries to the line plot to make meaningful comparisons of immigration from different countries. **Question:** Let's compare the number of immigrants from India and China from 1980 to 2013. Step 1: Get the data set for China and India, and display the dataframe.
###Code
### type your answer here
df_CI = df_can.loc[['India', 'China'], years]
df_CI.head()
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:df_CI = df_can.loc[['India', 'China'], years]df_CI.head()--> Step 2: Plot graph. We will explicitly specify line plot by passing in `kind` parameter to `plot()`.
###Code
### type your answer here
df_CI.plot(kind='line')
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:df_CI.plot(kind='line')--> That doesn't look right...Recall that *pandas* plots the indices on the x-axis and the columns as individual lines on the y-axis. Since `df_CI` is a dataframe with the `country` as the index and `years` as the columns, we must first transpose the dataframe using `transpose()` method to swap the row and columns.
###Code
df_CI = df_CI.transpose()
df_CI.head()
###Output
_____no_output_____
###Markdown
*pandas* will automatically graph the two countries on the same graph. Go ahead and plot the new transposed dataframe. Make sure to add a title to the plot and label the axes.
###Code
### type your answer here
df_CI.index = df_CI.index.map(int)
df_CI.plot(kind='line')
plt.title('Immigrants from India and China')
plt.xlabel('Years')
plt.ylabel('Number of Immigrants')
plt.show()
###Output
_____no_output_____
###Markdown
Double-click __here__ for the solution.<!-- The correct answer is:df_CI.index = df_CI.index.map(int) let's change the index values of df_CI to type integer for plottingdf_CI.plot(kind='line')--><!--plt.title('Immigrants from China and India')plt.ylabel('Number of Immigrants')plt.xlabel('Years')--><!--plt.show()--> From the above plot, we can observe that China and India have very similar immigration trends through the years. *Note*: How come we didn't need to transpose Haiti's dataframe before plotting (like we did for df_CI)?That's because `haiti` is a series as opposed to a dataframe, and has the years as its indices as shown below. ```pythonprint(type(haiti))print(haiti.head(5))```>class 'pandas.core.series.Series' >1980 1666 >1981 3692 >1982 3498 >1983 2860 >1984 1418 >Name: Haiti, dtype: int64 A line plot is a handy tool to display several dependent variables against one independent variable. However, it is recommended to plot no more than 5-10 lines on a single graph; any more than that and it becomes difficult to interpret. **Question:** Compare the trend of the top 5 countries that contributed the most to immigration to Canada.
###Code
### type your answer here
df_can.sort_values(by='Total', ascending=False, axis=0, inplace=True)
df_top5 = df_can.head(5)
df_top5 = df_top5[years].transpose()
print(df_top5)
df_top5.index = df_top5.index.map(int)
df_top5.plot(kind='line', figsize=(14, 8))
plt.title('Immigration Trend of Top 5 Countries')
plt.xlabel('Years')
plt.ylabel('Number of Immigrants')
plt.show()
###Output
_____no_output_____ |
MServajean/TD4/Spark_-_Stream_of_data_correction.ipynb | ###Markdown
Importing spark
###Code
# import findspark
# findspark.init()
import pyspark
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("Python Spark").getOrCreate()
sc = spark.sparkContext
###Output
_____no_output_____
###Markdown
Preparing the data
###Code
df_transactions = spark.read.option("header", True)\
.option("delimiter", "|")\
.option("delimiter", ",")\
.option("inferSchema", "true")\
.csv('data/train.csv')\
.withColumnRenamed('default_payment_next_month', 'label')
df_transactions.printSchema()
train, test = df_transactions.randomSplit([0.8, 0.2])
###Output
_____no_output_____
###Markdown
Preparing the model
###Code
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
assembler = VectorAssembler(
inputCols=["MARRIAGE", "EDUCATION", "PAY_0", "PAY_2", "PAY_3"],
outputCol="features"
)
lr = LogisticRegression(maxIter=10, regParam=10.0, elasticNetParam=0.0)
pipeline = Pipeline(stages=[assembler, lr])
###Output
_____no_output_____
###Markdown
Fitting the model
###Code
model = pipeline.fit(train)
###Output
_____no_output_____
###Markdown
Evaluation of the model
###Code
predictions = model.transform(test)
predictions.printSchema()
from pyspark.ml.evaluation import BinaryClassificationEvaluator
evaluator_roc = BinaryClassificationEvaluator(
labelCol='label',
rawPredictionCol='rawPrediction',
metricName='areaUnderROC'
)
print('Area under ROC = %s' % evaluator_roc.evaluate(predictions))
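# Added sketch (not in the original notebook): the same evaluator class can also report
# the area under the precision-recall curve, which is often more informative than ROC
# when the label classes are imbalanced.
evaluator_pr = BinaryClassificationEvaluator(
    labelCol='label',
    rawPredictionCol='rawPrediction',
    metricName='areaUnderPR'
)
print('Area under PR = %s' % evaluator_pr.evaluate(predictions))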
###Output
Area under ROC = 0.6825956971417326
###Markdown
Streaming data The stream will be produced by ```generate_transactions.py```
###Code
from pyspark.streaming import StreamingContext
ssc = StreamingContext(sc, 1)
def process(time, rdd):
print("========= %s =========" % str(time))
try:
# Convert RDD[String] to DataFrame
rdd_proper = rdd.map(lambda line: line.split(','))
df_stream = spark.createDataFrame(rdd_proper)
# changing schema
for c, i in zip(df_stream.columns, train.schema):
df_stream = df_stream.withColumn(i.name, df_stream[c].cast(i.dataType))
if df_stream.count() > 0:
predictions = model.transform(df_stream)
print(
'Area under ROC = %s (with %ld elements)' % (evaluator_roc.evaluate(predictions), df_stream.count())
)
except Exception as e:
print(e)
stream = ssc.textFileStream('data/output/')
stream.foreachRDD(process)
ssc.start()
# ssc.awaitTermination()
# stop stream
ssc.stop(True, True)
###Output
_____no_output_____ |
drl-portfolio-optimization.ipynb | ###Markdown
DRL Portfolio Optimization AcknowledgementsThis workbook is the culmination of three separate half-credit independent studies by the author [Daniel Fudge](https://www.linkedin.com/in/daniel-fudge) with [Professor Yelena Larkin](https://www.linkedin.com/in/yelena-larkin-6b7b361b/) as part of a concurrent Master of Business Administration (MBA) and a [Diploma in Financial Engineering](https://schulich.yorku.ca/programs/fnen/) from the [Schulich School of Business](https://schulich.yorku.ca/). I want to thank Schulich and especially Professor Larkin for giving me the freedom to explore the intersection of Machine Learning and Finance. I'd like to also mention the excellent training I received from [A Cloud Guru](https://acloud.guru/learn/aws-certified-machine-learning-specialty) to prepare for this project. This notebook takes much of the design and low-level code from the AWS sample portfolio management [Notebook](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/reinforcement_learning/rl_portfolio_management_coach_customEnv), which in turn is based on Jiang, Zhengyao, Dixing Xu, and Jinjun Liang. "A deep reinforcement learning framework for the financial portfolio management problem." arXiv preprint arXiv:1706.10059 (2017). As detailed below, this notebook relies on the [RL Coach](https://github.com/NervanaSystems/coachbatch-reinforcement-learning) from Intel AI Labs, Apache [MXNet](https://mxnet.apache.org/) and OpenAI [Gym](https://gym.openai.com/), all of which are amazing projects that I highly recommend investigating. IntroductionNow that we have the signals `signals.pkl` and prices `stock-data-clean.pkl` processed, we can build the Reinforcement Learning algorithm. The signals will be used to define the state of the market environment, called `observations` in the Gym documentation and `state` in other locations. The prices will be used to generate the `rewards`. For a refresher on DRL I recommend looking through [report2](docs/report2.pdf) in this repo. The basic RL feedback loop is shown below. Reinforcement Learning feedback loop. Image source: https://i.stack.imgur.com/eoeSq.png Action SpaceThe `action` space is simply a vector containing the weights of the stocks in the portfolio. Inside the environment these weights are limited to (0, 1) and an additional weight is added for the amount of cash in the portfolio. The cash weight is simply one minus the sum of the other weights and is also limited to (0, 1). The last step is to normalize these weights so the sum is equal to 1.0. State (or Observation)The `state` is simply the signals we compiled in the previous data preparation [notebook](docs/data-preparation-no-memory.ipynb) including a time embedding called the `window_length` within the code. Instead of using the LSTM discussed previously, we are using a short Convolutional Neural Net (CNN) to capture some historical information from the data within the `agent`. AgentFor the agent we are using the [RL Coach](https://github.com/NervanaSystems/coachbatch-reinforcement-learning) from Intel AI Labs that is integrated into AWS [Sagemaker](https://docs.aws.amazon.com/sagemaker/latest/dg/reinforcement-learning.htmlsagemaker-rl). As you can see below, there are a large number of RL algorithms to choose from. 
In [report2](docs/report2.pdf) we discussed several of these and focused on the Deep Deterministic Policy Gradient (DDPG) Actor-Critic method; however, for this test we are trying the Proximal Policy Optimization (PPO) algorithm that is explained nicely by Jonathan Hui [here](https://medium.com/@jonathan_hui/rl-proximal-policy-optimization-ppo-explained-77f014ec3f12) and in the original paper by Schulman et al. [here](https://arxiv.org/pdf/1707.06347.pdf). Remember that the goal of the agent is to learn a policy that maximizes the sum of discounted rewards, which in our case will result in the maximization of the portfolio value.RL Coach AlgorithmsImage Source: https://github.com/NervanaSystems/coachbatch-reinforcement-learning Tweaks to AgentIf you are playing with the number of signals, you may have to change the input to the Agent. Unfortunately, one of the disadvantages of using pre-built frameworks is the extra overhead and the difficulty of finding the little details you need to tweak. Here we have to define the input, which for us is a 2D CNN or Conv2D as an RL Coach [input embedder](https://nervanasystems.github.io/coach/design/network.html), but it is actually passed to the Conv2D within Apache MXNet. The RL Coach [code](https://github.com/NervanaSystems/coach/blob/master/rl_coach/architectures/layers.py) that passes the arguments to the MXNet Conv2D [code](https://beta.mxnet.io/api/gluon/_autogen/mxnet.gluon.nn.Conv2D.html) only accepts 3 arguments and MXNet infers the rest. This lack of control makes setting up the problem a little tricky. Also, for time-series data the MXNet [Conv1D](https://beta.mxnet.io/api/gluon/_autogen/mxnet.gluon.nn.Conv1D.html) would be more appropriate, but RL Coach doesn't give us that flexibility. The interface to RL Coach is defined in `preset-portfolio-management-clippedppo.py` shown below. The line you may want to tweak is under "Agent" and contains the Conv2D argument. This needs to correspond to the observation space definition and how the observation is pulled from the signals in the `portfolio_env.py` file shown below as well. Both of these are in the `src` folder.
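To make the action handling described above concrete, here is a minimal NumPy sketch of how the environment below post-processes the agent's action into valid portfolio weights (clip to [0, 1], prepend the implied cash weight, then renormalize); the function name is illustrative only and simply mirrors the logic in `portfolio_env.py`:

```python
import numpy as np

def action_to_weights(action):
    """Convert a raw action vector into portfolio weights with cash as the first element."""
    w = np.clip(action, a_min=0, a_max=1)             # stock weights limited to [0, 1]
    cash = np.clip(1.0 - w.sum(), a_min=0, a_max=1)   # implied cash weight
    w = np.insert(w, 0, cash)                         # cash goes first
    return w / w.sum()                                # normalize so the weights sum to 1.0

print(action_to_weights(np.array([0.7, 0.6, 0.2])))   # -> [0. 0.4667 0.4 0.1333] (approx.)
```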
###Code
!pygmentize src/preset-portfolio-management-clippedppo.py
###Output
[34mfrom[39;49;00m [04m[36mrl_coach.agents.clipped_ppo_agent[39;49;00m [34mimport[39;49;00m ClippedPPOAgentParameters
[34mfrom[39;49;00m [04m[36mrl_coach.architectures.layers[39;49;00m [34mimport[39;49;00m Dense, Conv2d
[34mfrom[39;49;00m [04m[36mrl_coach.base_parameters[39;49;00m [34mimport[39;49;00m VisualizationParameters, PresetValidationParameters
[34mfrom[39;49;00m [04m[36mrl_coach.base_parameters[39;49;00m [34mimport[39;49;00m MiddlewareScheme, DistributedCoachSynchronizationType
[34mfrom[39;49;00m [04m[36mrl_coach.core_types[39;49;00m [34mimport[39;49;00m TrainingSteps, EnvironmentEpisodes, EnvironmentSteps, RunPhase
[34mfrom[39;49;00m [04m[36mrl_coach.environments.gym_environment[39;49;00m [34mimport[39;49;00m GymVectorEnvironment, ObservationSpaceType
[34mfrom[39;49;00m [04m[36mrl_coach.exploration_policies.e_greedy[39;49;00m [34mimport[39;49;00m EGreedyParameters
[34mfrom[39;49;00m [04m[36mrl_coach.graph_managers.basic_rl_graph_manager[39;49;00m [34mimport[39;49;00m BasicRLGraphManager
[34mfrom[39;49;00m [04m[36mrl_coach.graph_managers.graph_manager[39;49;00m [34mimport[39;49;00m ScheduleParameters
[34mfrom[39;49;00m [04m[36mrl_coach.schedules[39;49;00m [34mimport[39;49;00m LinearSchedule
[37m####################[39;49;00m
[37m# Graph Scheduling #[39;49;00m
[37m####################[39;49;00m
schedule_params = ScheduleParameters()
schedule_params.improve_steps = TrainingSteps([34m20000[39;49;00m)
schedule_params.steps_between_evaluation_periods = EnvironmentSteps([34m2048[39;49;00m)
schedule_params.evaluation_steps = EnvironmentEpisodes([34m5[39;49;00m)
schedule_params.heatup_steps = EnvironmentSteps([34m0[39;49;00m)
[37m#########[39;49;00m
[37m# Agent #[39;49;00m
[37m#########[39;49;00m
agent_params = ClippedPPOAgentParameters()
[37m# Input Embedder with no CNN[39;49;00m
[37m#agent_params.network_wrappers['main'].input_embedders_parameters['observation'].scheme = [Dense(71)][39;49;00m
[37m#agent_params.network_wrappers['main'].input_embedders_parameters['observation'].activation_function = 'tanh'[39;49;00m
[37m#agent_params.network_wrappers['main'].middleware_parameters.scheme = [Dense(128)][39;49;00m
[37m#agent_params.network_wrappers['main'].middleware_parameters.activation_function = 'tanh'[39;49;00m
[37m# Input Embedder used in sample notebook[39;49;00m
agent_params.network_wrappers[[33m'[39;49;00m[33mmain[39;49;00m[33m'[39;49;00m].input_embedders_parameters[[33m'[39;49;00m[33mobservation[39;49;00m[33m'[39;49;00m].scheme = [Conv2d([34m32[39;49;00m, [[34m3[39;49;00m, [34m1[39;49;00m], [34m1[39;49;00m)]
agent_params.network_wrappers[[33m'[39;49;00m[33mmain[39;49;00m[33m'[39;49;00m].middleware_parameters.scheme = MiddlewareScheme.Empty
agent_params.network_wrappers[[33m'[39;49;00m[33mmain[39;49;00m[33m'[39;49;00m].learning_rate = [34m0.0001[39;49;00m
agent_params.network_wrappers[[33m'[39;49;00m[33mmain[39;49;00m[33m'[39;49;00m].batch_size = [34m64[39;49;00m
agent_params.algorithm.clipping_decay_schedule = LinearSchedule([34m1.0[39;49;00m, [34m0[39;49;00m, [34m150000[39;49;00m)
agent_params.algorithm.discount = [34m0.99[39;49;00m
agent_params.algorithm.num_steps_between_copying_online_weights_to_target = EnvironmentSteps([34m2048[39;49;00m)
[37m# Distributed Coach synchronization type.[39;49;00m
agent_params.algorithm.distributed_coach_synchronization_type = DistributedCoachSynchronizationType.SYNC
agent_params.exploration = EGreedyParameters()
agent_params.exploration.epsilon_schedule = LinearSchedule([34m1.0[39;49;00m, [34m0.01[39;49;00m, [34m10000[39;49;00m)
[37m###############[39;49;00m
[37m# Environment #[39;49;00m
[37m###############[39;49;00m
env_params = GymVectorEnvironment(level=[33m'[39;49;00m[33mportfolio_env:PortfolioEnv[39;49;00m[33m'[39;49;00m)
env_params.[31m__dict__[39;49;00m[[33m'[39;49;00m[33mobservation_space_type[39;49;00m[33m'[39;49;00m] = ObservationSpaceType.Tensor
[37m########[39;49;00m
[37m# Test #[39;49;00m
[37m########[39;49;00m
preset_validation_params = PresetValidationParameters()
preset_validation_params.test = [36mTrue[39;49;00m
graph_manager = BasicRLGraphManager(agent_params=agent_params, env_params=env_params,
schedule_params=schedule_params, vis_params=VisualizationParameters(),
preset_validation_params=preset_validation_params)
###Markdown
EnvironmentThis leaves us with the environment to generate. Here we build a custom financial market environment (or simulator) based on the signals and prices we compiled previously on top of the OpenAI [Gym](https://gym.openai.com/), which is also integrated into AWS [Sagemaker](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-rl-environments.htmlsagemaker-rl-environments-gym). This takes care of the low-level integration and allows us to focus on the features of the environment specific to portfolio optimization. During each training epoch, the trainer randomly selects a date to start. It then steps through each day in the epoch, performing these high-level operations:1. Initialize the state from the signals at the randomly selected start date and set the portfolio to $1 of cash.1. Pass the state to the agent, who generates an action, which is a new set of desired portfolio weights.1. Pass the new weights (action) to the environment, which calculates: * The new portfolio value based on changes in the prices, weights and transaction costs. * The reward based on the previous weights and the change in prices. * The new state, which is simply pulled from the signals dataset.1. Pass the new reward and state to the agent, who must then learn to make a better action. Custom Portfolio EnvironmentThe code below implements the custom financial market environment that the agent must trade in.
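For reference, the reward computed in the `step()` method below follows the numbered equations of Jiang et al. (2017): with $y_t$ the relative price vector (today's prices divided by yesterday's), $w_t$ the newly chosen weights, $w'_t$ the previous weights after drifting with prices, and $c$ the trading cost,

$$\mu_t = c \sum_i \left| w'_{t,i} - w_{t,i} \right|, \qquad p_t = p_{t-1} \, (1 - \mu_t) \, (y_t \cdot w_t), \qquad r_t = \ln \frac{p_t}{p_{t-1}},$$

so the reward $r_t$ is the log-return of the portfolio net of transaction costs, which is what the agent learns to maximize.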
###Code
!pygmentize src/portfolio_env.py
###Output
[33m""" Modified from https://github.com/awslabs/amazon-sagemaker-examples """[39;49;00m
[34mimport[39;49;00m [04m[36mgym[39;49;00m
[34mimport[39;49;00m [04m[36mgym.spaces[39;49;00m
[34mimport[39;49;00m [04m[36mos[39;49;00m
[34mimport[39;49;00m [04m[36mnumpy[39;49;00m [34mas[39;49;00m [04m[36mnp[39;49;00m
[34mimport[39;49;00m [04m[36mpandas[39;49;00m [34mas[39;49;00m [04m[36mpd[39;49;00m
EPS = [34m1e-8[39;49;00m
[34mclass[39;49;00m [04m[32mPortfolioEnv[39;49;00m(gym.Env):
[33m""" This class creates the financial market environment that the Agent interact with.[39;49;00m
[33m[39;49;00m
[33m It extends the OpenAI Gym environment https://gym.openai.com/.[39;49;00m
[33m[39;49;00m
[33m More information of how it is integrated into AWS is found here[39;49;00m
[33m https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-rl-environments.html#sagemaker-rl-environments-gym[39;49;00m
[33m[39;49;00m
[33m The observations include a history of the signals with the given `window_length` ending at the current date.[39;49;00m
[33m[39;49;00m
[33m Args:[39;49;00m
[33m steps (int): Steps or days in an episode.[39;49;00m
[33m trading_cost (float): Cost of trade as a fraction.[39;49;00m
[33m window_length (int): How many past observations to return.[39;49;00m
[33m start_date_index (int | None): The date index in the signals and price arrays.[39;49;00m
[33m[39;49;00m
[33m Attributes:[39;49;00m
[33m action_space (gym.spaces.Box): [n_tickers] The portfolio weighting not including cash.[39;49;00m
[33m dates (np.array of np.datetime64): [n_days] Dates for the signals and price history arrays.[39;49;00m
[33m info_list (list): List of info dictionaries for each step.[39;49;00m
[33m n_signals (int): Number of signals in each observation.[39;49;00m
[33m n_tickers (int): Number of tickers in the price history.[39;49;00m
[33m observation_space (gym.spaces.Box) [self.n_signals, window_length] The signals with a window_length history.[39;49;00m
[33m portfolio_value (float): The portfolio value, starting with $1 in cash.[39;49;00m
[33m gain (np.array): [n_days, n_tickers, gain] The relative price vector; today's / yesterday's price.[39;49;00m
[33m signals (np.array): [n_signals, n_days, 1] Signals that define the observable environment.[39;49;00m
[33m start_date_index (int): The date index in the signals and price arrays.[39;49;00m
[33m step_number (int): The step number of the episode.[39;49;00m
[33m steps (int): Steps or days in an episode.[39;49;00m
[33m test (bool): Indicates if this is a test or training session.[39;49;00m
[33m test_length (int): Trading days reserved for testing.[39;49;00m
[33m tickers (list of str): The stock tickers.[39;49;00m
[33m trading_cost (float): Cost of trade as a fraction.[39;49;00m
[33m window_length (int): How many past observations to return.[39;49;00m
[33m """[39;49;00m
[34mdef[39;49;00m [32m__init__[39;49;00m([36mself[39;49;00m, steps=[34m506[39;49;00m, trading_cost=[34m0.0025[39;49;00m, window_length=[34m5[39;49;00m, start_date_index=[36mNone[39;49;00m):
[33m"""An environment for financial portfolio management."""[39;49;00m
[37m# Initialize some local parameters[39;49;00m
[36mself[39;49;00m.csv = [33m'[39;49;00m[33m/opt/ml/output/data/portfolio-management.csv[39;49;00m[33m'[39;49;00m
[36mself[39;49;00m.info_list = [36mlist[39;49;00m()
[36mself[39;49;00m.portfolio_value = [34m1.0[39;49;00m
[36mself[39;49;00m.start_date_index = start_date_index
[36mself[39;49;00m.step_number = [34m0[39;49;00m
[36mself[39;49;00m.steps = steps
[36mself[39;49;00m.test_length = [34m506[39;49;00m
[36mself[39;49;00m.trading_cost = trading_cost
[37m# Save some arguments as attributes[39;49;00m
[36mself[39;49;00m.window_length = window_length
[37m# Determine if this is a test or training session and limit the start_date_index accordingly[39;49;00m
[36mself[39;49;00m.test = [36mFalse[39;49;00m
[34mwith[39;49;00m [36mopen[39;49;00m(os.path.join(os.path.dirname([31m__file__[39;49;00m), [33m'[39;49;00m[33msession-type.txt[39;49;00m[33m'[39;49;00m)) [34mas[39;49;00m f:
tmp = f.read().lower()
[34mif[39;49;00m tmp.startswith([33m'[39;49;00m[33mtest[39;49;00m[33m'[39;49;00m):
[36mself[39;49;00m.test = [36mTrue[39;49;00m
[34melif[39;49;00m tmp.startswith([33m'[39;49;00m[33mtrain[39;49;00m[33m'[39;49;00m):
[36mself[39;49;00m.test = [36mFalse[39;49;00m
[34melse[39;49;00m:
[34mraise[39;49;00m [36mValueError[39;49;00m([33m'[39;49;00m[33mSession type not defined!!![39;49;00m[33m'[39;49;00m)
[37m# Read the stock data and convert to the relative price vector (gain)[39;49;00m
[37m# Note the raw prices have an extra day vs the signals to calculate gain[39;49;00m
raw_prices = pd.read_csv(os.path.join(os.path.dirname([31m__file__[39;49;00m), [33m'[39;49;00m[33mprices.csv[39;49;00m[33m'[39;49;00m),index_col=[34m0[39;49;00m, parse_dates=[36mTrue[39;49;00m)
[36mself[39;49;00m.tickers = raw_prices.columns.tolist()
[36mself[39;49;00m.gain = np.hstack((np.ones((raw_prices.shape[[34m0[39;49;00m]-[34m1[39;49;00m, [34m1[39;49;00m)), raw_prices.values[[34m1[39;49;00m:] / raw_prices.values[:-[34m1[39;49;00m]))
[36mself[39;49;00m.dates = raw_prices.index.values[[34m1[39;49;00m:]
[36mself[39;49;00m.n_dates = [36mself[39;49;00m.dates.shape[[34m0[39;49;00m]
[36mself[39;49;00m.n_tickers = [36mlen[39;49;00m([36mself[39;49;00m.tickers)
[36mself[39;49;00m.weights = np.insert(np.zeros([36mself[39;49;00m.n_tickers), [34m0[39;49;00m, [34m1.0[39;49;00m)
[37m# Read the signals[39;49;00m
[36mself[39;49;00m.signals = pd.read_csv(os.path.join(os.path.dirname([31m__file__[39;49;00m), [33m'[39;49;00m[33msignals.csv[39;49;00m[33m'[39;49;00m),
index_col=[34m0[39;49;00m, parse_dates=[36mTrue[39;49;00m).T.values[:, :, np.newaxis]
[36mself[39;49;00m.n_signals = [36mself[39;49;00m.signals.shape[[34m0[39;49;00m]
[37m# Define the action space as the portfolio weights where wn are [0, 1] for each asset not including cash[39;49;00m
[36mself[39;49;00m.action_space = gym.spaces.Box(low=[34m0[39;49;00m, high=[34m1[39;49;00m, shape=([36mself[39;49;00m.n_tickers,), dtype=np.float32)
[37m# Define the observation space, which are the signals[39;49;00m
[36mself[39;49;00m.observation_space = gym.spaces.Box(low=-[34m1[39;49;00m, high=[34m1[39;49;00m, shape=([36mself[39;49;00m.n_signals, [36mself[39;49;00m.window_length, [34m1[39;49;00m), dtype=np.float32)
[37m# Rest the environment[39;49;00m
[36mself[39;49;00m.reset()
[37m# -----------------------------------------------------------------------------------[39;49;00m
[34mdef[39;49;00m [32mstep[39;49;00m([36mself[39;49;00m, action):
[33m"""Step the environment.[39;49;00m
[33m[39;49;00m
[33m See https://gym.openai.com/docs/#observations for detailed description of return values.[39;49;00m
[33m[39;49;00m
[33m Args:[39;49;00m
[33m action (np.array): The desired portfolio weights [w0...].[39;49;00m
[33m[39;49;00m
[33m Returns:[39;49;00m
[33m np.array: [n_signals, window_length, 1] The observation of the environment (state)[39;49;00m
[33m float: The reward received from the previous action.[39;49;00m
[33m bool: Indicates if the simulation is complete.[39;49;00m
[33m dict: Debugging information.[39;49;00m
[33m """[39;49;00m
[36mself[39;49;00m.step_number += [34m1[39;49;00m
[37m# Force the new weights (w1) to (0.0, 1.0) and sum weights = 1, note 1st weight is cash.[39;49;00m
w1 = np.clip(action, a_min=[34m0[39;49;00m, a_max=[34m1[39;49;00m)
w1 = np.insert(w1, [34m0[39;49;00m, np.clip([34m1[39;49;00m - w1.sum(), a_min=[34m0[39;49;00m, a_max=[34m1[39;49;00m))
w1 = w1 / w1.sum()
[37m# Calculate the reward; Numbered equations are from https://arxiv.org/abs/1706.10059[39;49;00m
t = [36mself[39;49;00m.start_date_index + [36mself[39;49;00m.step_number
y1 = [36mself[39;49;00m.gain[t]
w0 = [36mself[39;49;00m.weights
p0 = [36mself[39;49;00m.portfolio_value
dw1 = (y1 * w0) / (np.dot(y1, w0) + EPS) [37m# (eq7) weights evolve into[39;49;00m
mu1 = [36mself[39;49;00m.trading_cost * (np.abs(dw1 - w1)).sum() [37m# (eq16) cost to change portfolio[39;49;00m
p1 = p0 * ([34m1[39;49;00m - mu1) * np.dot(y1, w1) [37m# (eq11) final portfolio value[39;49;00m
p1 = np.clip(p1, [34m0[39;49;00m, np.inf) [37m# Limit portfolio to zero (busted)[39;49;00m
rho1 = p1 / p0 - [34m1[39;49;00m [37m# rate of returns[39;49;00m
reward = np.log((p1 + EPS) / (p0 + EPS)) [37m# log rate of return[39;49;00m
[37m# Save weights and portfolio value for next iteration[39;49;00m
[36mself[39;49;00m.weights = w1
[36mself[39;49;00m.portfolio_value = p1
[37m# Observe the new environment (state)[39;49;00m
t0 = t - [36mself[39;49;00m.window_length + [34m1[39;49;00m
observation = [36mself[39;49;00m.signals[:, t0:t+[34m1[39;49;00m, :]
[37m# Save some information for debugging and plotting at the end[39;49;00m
r = y1.mean()
[34mif[39;49;00m [36mself[39;49;00m.step_number == [34m1[39;49;00m:
market_value = r
[34melse[39;49;00m:
market_value = [36mself[39;49;00m.info_list[-[34m1[39;49;00m][[33m"[39;49;00m[33mmarket_value[39;49;00m[33m"[39;49;00m] * r
info = {[33m"[39;49;00m[33mreward[39;49;00m[33m"[39;49;00m: reward, [33m"[39;49;00m[33mlog_return[39;49;00m[33m"[39;49;00m: reward, [33m"[39;49;00m[33mportfolio_value[39;49;00m[33m"[39;49;00m: p1, [33m"[39;49;00m[33mreturn[39;49;00m[33m"[39;49;00m: r, [33m"[39;49;00m[33mrate_of_return[39;49;00m[33m"[39;49;00m: rho1,
[33m"[39;49;00m[33mweights_mean[39;49;00m[33m"[39;49;00m: w1.mean(), [33m"[39;49;00m[33mweights_std[39;49;00m[33m"[39;49;00m: w1.std(), [33m"[39;49;00m[33mcost[39;49;00m[33m"[39;49;00m: mu1, [33m'[39;49;00m[33mdate[39;49;00m[33m'[39;49;00m: [36mself[39;49;00m.dates[t],
[33m'[39;49;00m[33msteps[39;49;00m[33m'[39;49;00m: [36mself[39;49;00m.step_number, [33m"[39;49;00m[33mmarket_value[39;49;00m[33m"[39;49;00m: market_value}
[36mself[39;49;00m.info_list.append(info)
[37m# Check if finished and write to file[39;49;00m
done = [36mFalse[39;49;00m
[34mif[39;49;00m ([36mself[39;49;00m.step_number >= [36mself[39;49;00m.steps) [35mor[39;49;00m (p1 <= [34m0[39;49;00m):
done = [36mTrue[39;49;00m
pd.DataFrame([36mself[39;49;00m.info_list).sort_values(by=[[33m'[39;49;00m[33mdate[39;49;00m[33m'[39;49;00m]).to_csv([36mself[39;49;00m.csv)
[34mreturn[39;49;00m observation, reward, done, info
[34mdef[39;49;00m [32mreset[39;49;00m([36mself[39;49;00m):
[33m"""Reset the environment to the initial state.[39;49;00m
[33m[39;49;00m
[33m Ref:[39;49;00m
[33m https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-rl-environments.html#sagemaker-rl-environments-gym[39;49;00m
[33m """[39;49;00m
[36mself[39;49;00m.info_list = [36mlist[39;49;00m()
[36mself[39;49;00m.weights = np.insert(np.zeros([36mself[39;49;00m.n_tickers), [34m0[39;49;00m, [34m1.0[39;49;00m)
[36mself[39;49;00m.portfolio_value = [34m1.0[39;49;00m
[36mself[39;49;00m.step_number = [34m0[39;49;00m
[34mif[39;49;00m [36mself[39;49;00m.test:
[36mself[39;49;00m.start_date_index = [36mself[39;49;00m.n_dates - [36mself[39;49;00m.test_length
[34melif[39;49;00m [36mself[39;49;00m.start_date_index [35mis[39;49;00m [36mNone[39;49;00m:
[36mself[39;49;00m.start_date_index = np.random.random_integers([36mself[39;49;00m.window_length - [34m1[39;49;00m,
[36mself[39;49;00m.n_dates - [36mself[39;49;00m.test_length - [36mself[39;49;00m.steps)
[34melse[39;49;00m:
[37m# The start index must >= the window length to avoid a negative index or data leakage[39;49;00m
[36mself[39;49;00m.start_date_index = [36mmax[39;49;00m([36mself[39;49;00m.start_date_index, [36mself[39;49;00m.window_length - [34m1[39;49;00m)
[37m# The start index <= n_dates - test length - the steps in the episode to avoid reading test data[39;49;00m
[36mself[39;49;00m.start_date_index = [36mmin[39;49;00m([36mself[39;49;00m.start_date_index, [36mself[39;49;00m.n_dates - [36mself[39;49;00m.test_length - [36mself[39;49;00m.steps)
t = [36mself[39;49;00m.start_date_index + [36mself[39;49;00m.step_number
t0 = t - [36mself[39;49;00m.window_length + [34m1[39;49;00m
observation = [36mself[39;49;00m.signals[:, t0:t+[34m1[39;49;00m, :]
[34mreturn[39;49;00m observation
###Markdown
Setup Roles and permissionsTo get started, we'll import the Python libraries we need, set up the environment with a few prerequisites for permissions and configurations.
###Code
import boto3
import glob
from IPython.display import HTML
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import sagemaker
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
from src.misc import get_execution_role, wait_for_s3_object
import sys
import subprocess
import time
from time import gmtime, strftime
sys.path.append("common")
%matplotlib inline
###Output
_____no_output_____
###Markdown
Setup S3 bucketsSet up the linkage and authentication to the S3 bucket that you want to use for checkpoints and metadata.
###Code
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = 's3://{}/'.format(s3_bucket)
checkpoint_path = s3_output_path[:-1] + '/checkpoints'
print("Checkpoint s3 path: {}".format(checkpoint_path))
###Output
Checkpoint s3 path: s3://sagemaker-us-east-1-031118886020/checkpoints
###Markdown
Create an IAM roleGet the execution IAM role that allows this notebook to connect to the training and evaluation instances.
###Code
role = sagemaker.get_execution_role()
print("Using IAM role arn: {}".format(role))
###Output
Using IAM role arn: arn:aws:iam::031118886020:role/sagemaker
###Markdown
Train the RL model using the Python SDK Script mode1. Specify the source directory where the environment, presets and training code are uploaded, `src`.2. Specify the `entry_point` as the training code. 3. Specify the choice of RL `toolkit` and framework. This automatically resolves to the ECR path for the RL Container. 4. Define the training parameters such as the instance count, instance type, S3 path for output and job name. 5. Specify the hyperparameters for the RL agent algorithm. The `RLCOACH_PRESET` can be used to specify the RL agent algorithm you want to use. 6. [Optional] Choose the metrics that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks. The metrics are defined using regular expression matching. Price reference - 60,000 steps & no middleware networkHere are some price data for reference. Obviously these values will change as you change the hyperparameters and model architecture. Please check the pricing [list](https://aws.amazon.com/sagemaker/pricing/) for the latest pricing and instance types.- **ml.c4.xlarge:** $423s = 7.1 min = 0.118 hr \rightarrow \$0.03 = 0.118 hr \cdot 0.279 \frac{\$}{hr} $ - **ml.c4.2xlarge:** $371s = 6.2 min = 0.103 hr \rightarrow \$0.06 = 0.103 hr \cdot 0.557 \frac{\$}{hr} $ - **ml.c4.4xlarge:** $380s = 6.3 min = 0.106 hr \rightarrow \$0.12 = 0.106 hr \cdot 1.114 \frac{\$}{hr} $ - **ml.c4.8xlarge:** $390s = 6.5 min = 0.108 hr \rightarrow \$0.24 = 0.108 hr \cdot 2.227 \frac{\$}{hr} $ - **ml.m4.4xlarge:** $414s = 6.9 min = 0.115 hr \rightarrow \$0.13 = 0.115 hr \cdot 1.12 \frac{\$}{hr} $ - **ml.p3.2xlarge:** $558s = 9.3 min = 0.155 hr \rightarrow \$0.66 = 0.155 hr \cdot 4.284 \frac{\$}{hr}$- **ml.p2.xlarge:** $713s = 11.9 min = 0.198 hr \rightarrow \$0.25 = 0.198 hr \cdot 1.26 \frac{\$}{hr}$
###Code
instance_type = "ml.c4.2xlarge"
# First define the Reinforcement Learning Estimator
train_estimator = RLEstimator(source_dir='src',
entry_point="train-coach.py",
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version='0.11.0',
framework=RLFramework.MXNET,
role=role,
train_instance_count=1,
train_instance_type=instance_type,
output_path=s3_output_path,
hyperparameters = {
"RLCOACH_PRESET" : "preset-portfolio-management-clippedppo",
"rl.agent_params.algorithm.discount": 0.9,
"rl.evaluation_steps:EnvironmentEpisodes": 5,
"training_epochs": 10,
"improve_steps":100000})
# Perform the training
train_estimator.fit()
# Bring the training output back to the Sagemaker instance
train_job_name = train_estimator._current_job_name
print("\nJob name: {}\n".format(train_job_name))
output_tar_key = "{}/output/output.tar.gz".format(train_job_name)
intermediate_folder_key = "{}/output/intermediate/".format(train_job_name)
intermediate_url = "s3://{}/{}".format(s3_bucket, intermediate_folder_key)
tmp_dir = "/tmp/{}".format(train_job_name)
local_checkpoint_path = tmp_dir + '/checkpoint'
os.system("mkdir {}".format(tmp_dir))
wait_for_s3_object(s3_bucket, output_tar_key, tmp_dir)
os.system("tar -xvzf {}/output.tar.gz -C {}".format(tmp_dir, tmp_dir))
os.system("aws s3 cp --recursive {} {}".format(intermediate_url, tmp_dir))
os.system("tar -xvzf {}/output.tar.gz -C {}".format(tmp_dir, tmp_dir))
print("\nCopied output files to 'tmp_dir': {}\n".format(tmp_dir))
###Output
2020-03-29 18:36:45 Starting - Starting the training job...
2020-03-29 18:36:47 Starting - Launching requested ML instances......
2020-03-29 18:37:48 Starting - Preparing the instances for training...
2020-03-29 18:38:45 Downloading - Downloading input data
2020-03-29 18:38:45 Training - Downloading the training image...
2020-03-29 18:39:04 Training - Training image download completed. Training in progress.[34mbash: cannot set terminal process group (-1): Inappropriate ioctl for device[0m
[34mbash: no job control in this shell[0m
[34m2020-03-29 18:39:07,540 sagemaker-containers INFO Imported framework sagemaker_mxnet_container.training[0m
[34m2020-03-29 18:39:07,544 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)[0m
[34m2020-03-29 18:39:07,556 sagemaker_mxnet_container.training INFO MXNet training environment: {'SM_LOG_LEVEL': '20', 'SM_INPUT_DATA_CONFIG': '{}', 'SM_FRAMEWORK_MODULE': 'sagemaker_mxnet_container.training:main', 'SM_HP_RL.EVALUATION_STEPS:ENVIRONMENTEPISODES': '5', 'SM_INPUT_DIR': '/opt/ml/input', 'SM_OUTPUT_DATA_DIR': '/opt/ml/output/data', 'SM_FRAMEWORK_PARAMS': '{"sagemaker_estimator":"RLEstimator"}', 'SM_HPS': '{"RLCOACH_PRESET":"preset-portfolio-management-clippedppo","improve_steps":100000,"rl.agent_params.algorithm.discount":0.9,"rl.evaluation_steps:EnvironmentEpisodes":5,"training_epochs":10}', 'SM_HP_RL.AGENT_PARAMS.ALGORITHM.DISCOUNT': '0.9', 'SM_CHANNELS': '[]', 'SM_CURRENT_HOST': 'algo-1', 'SM_TRAINING_ENV': '{"additional_framework_parameters":{"sagemaker_estimator":"RLEstimator"},"channel_input_dirs":{},"current_host":"algo-1","framework_module":"sagemaker_mxnet_container.training:main","hosts":["algo-1"],"hyperparameters":{"RLCOACH_PRESET":"preset-portfolio-management-clippedppo","improve_steps":100000,"rl.agent_params.algorithm.discount":0.9,"rl.evaluation_steps:EnvironmentEpisodes":5,"training_epochs":10},"input_config_dir":"/opt/ml/input/config","input_data_config":{},"input_dir":"/opt/ml/input","job_name":"sagemaker-rl-mxnet-2020-03-29-18-36-44-773","log_level":20,"model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-031118886020/sagemaker-rl-mxnet-2020-03-29-18-36-44-773/source/sourcedir.tar.gz","module_name":"train-coach","network_interface_name":"eth0","num_cpus":8,"num_gpus":0,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"train-coach.py"}', 'SM_OUTPUT_DIR': '/opt/ml/output', 'SM_INPUT_CONFIG_DIR': '/opt/ml/input/config', 'SM_HP_IMPROVE_STEPS': '100000', 'SM_MODEL_DIR': '/opt/ml/model', 'SM_MODULE_NAME': 'train-coach', 'SM_NUM_GPUS': '0', 'SM_HP_TRAINING_EPOCHS': '10', 'SM_HP_RLCOACH_PRESET': 'preset-portfolio-management-clippedppo', 'SM_MODULE_DIR': 's3://sagemaker-us-east-1-031118886020/sagemaker-rl-mxnet-2020-03-29-18-36-44-773/source/sourcedir.tar.gz', 'SM_USER_ARGS': '["--RLCOACH_PRESET","preset-portfolio-management-clippedppo","--improve_steps","100000","--rl.agent_params.algorithm.discount","0.9","--rl.evaluation_steps:EnvironmentEpisodes","5","--training_epochs","10"]', 'SM_USER_ENTRY_POINT': 'train-coach.py', 'SM_OUTPUT_INTERMEDIATE_DIR': '/opt/ml/output/intermediate', 'SM_HOSTS': '["algo-1"]', 'SM_NUM_CPUS': '8', 'SM_NETWORK_INTERFACE_NAME': 'eth0', 'SM_RESOURCE_CONFIG': '{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}'}[0m
[34m2020-03-29 18:39:07,713 sagemaker-containers INFO Module train-coach does not provide a setup.py. [0m
[34mGenerating setup.py[0m
[34m2020-03-29 18:39:07,713 sagemaker-containers INFO Generating setup.cfg[0m
[34m2020-03-29 18:39:07,713 sagemaker-containers INFO Generating MANIFEST.in[0m
[34m2020-03-29 18:39:07,714 sagemaker-containers INFO Installing module with the following command:[0m
[34m/usr/bin/python -m pip install -U . [0m
[34mProcessing /opt/ml/code[0m
[34mBuilding wheels for collected packages: train-coach
Running setup.py bdist_wheel for train-coach: started[0m
[34m Running setup.py bdist_wheel for train-coach: finished with status 'done'
Stored in directory: /tmp/pip-ephem-wheel-cache-d18k2ksa/wheels/35/24/16/37574d11bf9bde50616c67372a334f94fa8356bc7164af8ca3[0m
[34mSuccessfully built train-coach[0m
[34mInstalling collected packages: train-coach[0m
[34mSuccessfully installed train-coach-1.0.0[0m
[34mYou are using pip version 18.1, however version 20.0.2 is available.[0m
[34mYou should consider upgrading via the 'pip install --upgrade pip' command.[0m
[34m2020-03-29 18:39:09,494 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)[0m
[34m2020-03-29 18:39:09,506 sagemaker-containers INFO Invoking user script
[0m
[34mTraining Env:
[0m
[34m{
"framework_module": "sagemaker_mxnet_container.training:main",
"channel_input_dirs": {},
"hosts": [
"algo-1"
],
"additional_framework_parameters": {
"sagemaker_estimator": "RLEstimator"
},
"current_host": "algo-1",
"output_data_dir": "/opt/ml/output/data",
"hyperparameters": {
"rl.agent_params.algorithm.discount": 0.9,
"rl.evaluation_steps:EnvironmentEpisodes": 5,
"RLCOACH_PRESET": "preset-portfolio-management-clippedppo",
"training_epochs": 10,
"improve_steps": 100000
},
"input_data_config": {},
"num_cpus": 8,
"job_name": "sagemaker-rl-mxnet-2020-03-29-18-36-44-773",
"num_gpus": 0,
"module_name": "train-coach",
"model_dir": "/opt/ml/model",
"log_level": 20,
"network_interface_name": "eth0",
"user_entry_point": "train-coach.py",
"resource_config": {
"hosts": [
"algo-1"
],
"current_host": "algo-1",
"network_interface_name": "eth0"
},
"output_dir": "/opt/ml/output",
"output_intermediate_dir": "/opt/ml/output/intermediate",
"input_config_dir": "/opt/ml/input/config",
"module_dir": "s3://sagemaker-us-east-1-031118886020/sagemaker-rl-mxnet-2020-03-29-18-36-44-773/source/sourcedir.tar.gz",
"input_dir": "/opt/ml/input"[0m
[34m}
[0m
[34mEnvironment variables:
[0m
[34mSM_LOG_LEVEL=20[0m
[34mSM_INPUT_DATA_CONFIG={}[0m
[34mSM_FRAMEWORK_MODULE=sagemaker_mxnet_container.training:main[0m
[34mSM_OUTPUT_DATA_DIR=/opt/ml/output/data[0m
[34mSM_HP_RL.EVALUATION_STEPS:ENVIRONMENTEPISODES=5[0m
[34mSM_FRAMEWORK_PARAMS={"sagemaker_estimator":"RLEstimator"}[0m
[34mSM_HPS={"RLCOACH_PRESET":"preset-portfolio-management-clippedppo","improve_steps":100000,"rl.agent_params.algorithm.discount":0.9,"rl.evaluation_steps:EnvironmentEpisodes":5,"training_epochs":10}[0m
[34mSM_HP_RL.AGENT_PARAMS.ALGORITHM.DISCOUNT=0.9[0m
[34mSM_CHANNELS=[][0m
[34mPYTHONPATH=/usr/local/bin:/usr/lib/python35.zip:/usr/lib/python3.5:/usr/lib/python3.5/plat-x86_64-linux-gnu:/usr/lib/python3.5/lib-dynload:/usr/local/lib/python3.5/dist-packages:/usr/lib/python3/dist-packages[0m
[34mSM_CURRENT_HOST=algo-1[0m
[34mSM_OUTPUT_DIR=/opt/ml/output[0m
[34mSM_INPUT_CONFIG_DIR=/opt/ml/input/config[0m
[34mSM_USER_ARGS=["--RLCOACH_PRESET","preset-portfolio-management-clippedppo","--improve_steps","100000","--rl.agent_params.algorithm.discount","0.9","--rl.evaluation_steps:EnvironmentEpisodes","5","--training_epochs","10"][0m
[34mSM_HP_IMPROVE_STEPS=100000[0m
[34mSM_MODEL_DIR=/opt/ml/model[0m
[34mSM_MODULE_NAME=train-coach[0m
[34mSM_NUM_GPUS=0[0m
[34mSM_HP_TRAINING_EPOCHS=10[0m
[34mSM_HP_RLCOACH_PRESET=preset-portfolio-management-clippedppo[0m
[34mSM_MODULE_DIR=s3://sagemaker-us-east-1-031118886020/sagemaker-rl-mxnet-2020-03-29-18-36-44-773/source/sourcedir.tar.gz[0m
[34mSM_INPUT_DIR=/opt/ml/input[0m
[34mSM_HOSTS=["algo-1"][0m
[34mSM_USER_ENTRY_POINT=train-coach.py[0m
[34mSM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate[0m
[34mSM_TRAINING_ENV={"additional_framework_parameters":{"sagemaker_estimator":"RLEstimator"},"channel_input_dirs":{},"current_host":"algo-1","framework_module":"sagemaker_mxnet_container.training:main","hosts":["algo-1"],"hyperparameters":{"RLCOACH_PRESET":"preset-portfolio-management-clippedppo","improve_steps":100000,"rl.agent_params.algorithm.discount":0.9,"rl.evaluation_steps:EnvironmentEpisodes":5,"training_epochs":10},"input_config_dir":"/opt/ml/input/config","input_data_config":{},"input_dir":"/opt/ml/input","job_name":"sagemaker-rl-mxnet-2020-03-29-18-36-44-773","log_level":20,"model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-031118886020/sagemaker-rl-mxnet-2020-03-29-18-36-44-773/source/sourcedir.tar.gz","module_name":"train-coach","network_interface_name":"eth0","num_cpus":8,"num_gpus":0,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"train-coach.py"}[0m
[34mSM_NUM_CPUS=8[0m
[34mSM_NETWORK_INTERFACE_NAME=eth0[0m
[34mSM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}
[0m
[34mInvoking script with the following command:
[0m
[34m/usr/bin/python -m train-coach --RLCOACH_PRESET preset-portfolio-management-clippedppo --improve_steps 100000 --rl.agent_params.algorithm.discount 0.9 --rl.evaluation_steps:EnvironmentEpisodes 5 --training_epochs 10
[0m
###Markdown
Plot the training history
###Code
csv_file_name = "worker_0.simple_rl_graph.main_level.main_level.agent_0.csv"
wait_for_s3_object(s3_bucket, os.path.join(intermediate_folder_key, csv_file_name), tmp_dir)
df = pd.read_csv("{}/{}".format(tmp_dir, csv_file_name)).dropna(subset=['Evaluation Reward'])
_ = df.plot(x=x_axis,y=y_axis, figsize=(11,3))
###Output
Waiting for s3://sagemaker-us-east-1-031118886020/sagemaker-rl-mxnet-2020-03-29-18-36-44-773/output/intermediate/worker_0.simple_rl_graph.main_level.main_level.agent_0.csv...
Downloading sagemaker-rl-mxnet-2020-03-29-18-36-44-773/output/intermediate/worker_0.simple_rl_graph.main_level.main_level.agent_0.csv
###Markdown
Plot the RL Policy performance against a buy and hold policy
###Code
df = pd.read_csv(tmp_dir + '/portfolio-management.csv', index_col='date')
_ = df[["portfolio_value", "market_value"]].plot(rot=30, figsize=(11,3))
###Output
_____no_output_____ |
module_8_calculus/training_linear_regression_model_using_gradient_descent.ipynb | ###Markdown
Training the linear regression model using Gradient Descent
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
## load the dataset
df = pd.read_csv("../data/housing_data.csv")
df.head()
###Output
_____no_output_____
###Markdown
* RM - average number of rooms per dwelling* AGE - proportion of owner-occupied units built prior to 1940* DIS - weighted distances to five Boston employment centres* RAD - index of accessibility to radial highways* TAX - full-value property-tax rate per USD10,000* PTRATIO - pupil-teacher ratio by town* B - 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town* LSTAT - percentage lower status of the population* PRICE - Median value of owner-occupied homes in $1000's
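###Markdown
The cells below fit a single slope $m$ (no intercept) to the standardized RM/PRICE data by gradient descent. As a hedged summary of what the later cost-function code computes (our reading of the code, not text from the original notebook): $$J(m)=\frac{1}{2n}\sum_{i=1}^{n}(m x_i - y_i)^2, \qquad \frac{dJ}{dm}=\frac{1}{n}\sum_{i=1}^{n}(m x_i - y_i)\,x_i,$$ and each iteration applies the update $m \leftarrow m - \eta\,\frac{dJ}{dm}$ with learning rate $\eta$ (the `lr` variable in the code).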
###Code
import seaborn as sns
sns.heatmap(df.corr().abs(), annot=True)
df['PRICE']
df['RM']
plt.scatter(df['RM'], df['PRICE'])
plt.xlabel('RM')
plt.ylabel('PRICE')
X = df['RM'].to_numpy().reshape(-1, 1)
X.shape
y = df['PRICE'].to_numpy().reshape(-1, 1)
y.shape
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X = scaler.fit_transform(X)
y = scaler.fit_transform(y)
def mean_sq_error(x, y, m):
n = x.shape[0]
    loss = (1 / (2 * n)) * np.sum((m*x - y)**2)
return loss
def cost_fn_derivative(x, y, m):
n = x.shape[0]
cost_derivative = (1/n) * np.sum((m*x - y) * x)
return cost_derivative
m = 0
mean_sq_error(X, y, m)
cost_fn_derivative(x=X, y=y, m=m)
lr = 0.1
m = 0
m_iter = []
loss_iter = []
for i in range(40):
m = m - lr * cost_fn_derivative(x=X, y=y, m=m)
loss = mean_sq_error(X, y, m)
m_iter.append(m)
loss_iter.append(loss)
m_iter
loss_iter
plt.scatter(m_iter, loss_iter, linewidths=1.5)
plt.xlabel("Value of m")
plt.ylabel("loss")
m_iter[-1]
x_data = np.linspace(-5, 5, 100)
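# Hedged follow-up sketch (not part of the original notebook): visualise the fitted
# line y = m * x using the final slope found by gradient descent on the standardized data.
y_fit = m_iter[-1] * x_data
plt.scatter(X, y, s=10, alpha=0.5, label='standardized data')
plt.plot(x_data, y_fit, color='red', label='fitted line (no intercept)')
plt.xlabel('RM (standardized)')
plt.ylabel('PRICE (standardized)')
plt.legend()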
###Output
_____no_output_____ |
applications/notebooks/examples/scala/kmeans_over_single_GeoTiff.ipynb | ###Markdown
Kmeans over a single GeoTiff. This notebook shows how to read a single- and multi-band GeoTiff from Spark. We use [**GeoTrellis**](https://github.com/locationtech/geotrellis) to read the GeoTiff as an RDD, extract a band from it, filter out NaN values and convert the result to an RDD of dense vectors. Such an RDD is then passed to the **KMeans** clustering algorithm from **Spark-MLlib** for training. The KMeans model is then saved into HDFS.Note: In this example the grid cells define the dimension of the matrix. Since only the year **1980** is loaded, the matrix only has one record. To understand how to load several GeoTiffs and transpose the matrix so that years become its dimension, the reader should check the [kmeans_multiGeoTiffs_matrixTranspose](kmeans_multiGeoTiffs_matrixTranspose.ipynb) notebooks in the scala examples. Dependencies
###Code
import geotrellis.raster.MultibandTile
import geotrellis.spark.io.hadoop._
import geotrellis.vector.ProjectedExtent
import org.apache.hadoop.fs.Path
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.rdd.RDD
###Output
_____no_output_____
###Markdown
Read GeoTiff into a RDD
###Code
var band_NaN_RDD :RDD[Array[Double]] = sc.emptyRDD
val single_band = true
var filepath :String = ""
if (single_band) {
//Single band GeoTiff
filepath = "hdfs:///user/hadoop/spring-index/LastFreeze/1980.tif"
} else {
//Multi band GeoTiff
filepath = "hdfs:///user/hadoop/spring-index/BloomFinal/1980.tif"
}
if (single_band) {
    //Let's load a single-band GeoTiff and return an RDD with just the tile.
val bands_RDD = sc.hadoopGeoTiffRDD(filepath).values
    //Conversion to ArrayDouble is necessary to then generate a Dense Vector
band_NaN_RDD = bands_RDD.map( m => m.toArrayDouble())
} else {
    //Let's load a multi-band GeoTiff and return an RDD with just the tiles.
val bands_RDD = sc.hadoopMultibandGeoTiffRDD(filepath).values
//Extract the 4th band
band_NaN_RDD = bands_RDD.map( m => m.band(3).toArrayDouble())
}
###Output
vector length with NaN is 30388736
vector length without NaN is 13695035
###Markdown
Manipulate the RDD
###Code
//Go to each vector and print the length of each vector
band_NaN_RDD.collect().foreach(m => println("vector length with NaN is %d".format(m.length)))
//Go to each vector and filter out all NaNs
val band_RDD = band_NaN_RDD.map(m => m.filter(v => !v.isNaN))
//Go to each vector and print the length of each vector to see how many NaN were removed
band_RDD.collect().foreach(m => println("vector length without NaN is %d".format(m.length)))
###Output
_____no_output_____
###Markdown
Create a RDD of dense Vectors
###Code
// Create a Vector with NaN converted to 0s
//val band_vec = band_NaN_RDD.map(s => Vectors.dense(s.map(v => if (v.isNaN) 0 else v))).cache()
// Create a Vector without NaN values
val band_vec = band_RDD.map(s => Vectors.dense(s)).cache()
###Output
_____no_output_____
###Markdown
Train Kmeans
###Code
val numClusters = 2
val numIterations = 20
val clusters = {
KMeans.train(band_vec,numClusters,numIterations)
}
// Evaluate clustering by computing Within Set Sum of Squared Errors
val WSSSE = clusters.computeCost(band_vec)
println("Within Set Sum of Squared Errors = " + WSSSE)
###Output
Within Set Sum of Squared Errors = 0.0
###Markdown
Save kmeans model
###Code
//Un-persist the model
band_vec.unpersist()
// Shows the result.
println("Cluster Centers: ")
//clusters.clusterCenters.foreach(println)
//Clusters save the model
if (single_band) {
clusters.save(sc, "hdfs:///user/pheno/spring_index/LastFreeze/1980_kmeans_model")
} else {
clusters.save(sc, "hdfs:///user/pheno/spring_index/BloomFinal/1980_kmeans_model")
}
###Output
Cluster Centers:
|
OOP_Application_Example.ipynb | ###Markdown
###Code
class Person:
def __init__(self, std, pre, mid, fin):
self.__std = std
self.__pre = pre
self.__mid = mid
self.__fin = fin
def Terms(self):
print(self.__std, self.__pre, self.__mid, self.__fin)
print("Student Term Grades (Prelim, Midterm, Finals):")
stu1 = Person("Student 1", 88, 89, 87)
stu2 = Person("Student 2", 88, 87, 98)
stu3 = Person("Student 3", 91, 90, 92)
stu1.Terms()
stu2.Terms()
stu3.Terms()
class Student(Person):
def __init__(self, pre, mid, fin):
self.__pre = pre
self.__mid = mid
self.__fin = fin
def Grade(self):
return((self.__pre + self.__mid + self.__fin)/3)
print("Student Term Average:")
std1 = Student(88, 89, 87)
print("Student 1:", round(std1.Grade(), 2))
std2 = Student(88, 87, 90)
print("Student 2:", round(std2.Grade(), 2))
std3 = Student(91, 90, 92)
print("Student 3:", round(std3.Grade(), 2))
###Output
Student Term Grades (Prelim, Midterm, Finals):
Student 1 88 89 87
Student 2 88 87 98
Student 3 91 90 92
Student Term Average:
Student 1: 88.0
Student 2: 88.33
Student 3: 91.0
|
04-Visualization-Matplotlib-Pandas/04-02-Pandas Visualization/Visualizing Time Series Data.ipynb | ###Markdown
Visualizing Time Series Data. Let's go through a few key points of creating nice time series visualizations!
###Code
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Optional for interactive (watch video for full details)
#%matplotlib notebook
mcdon = pd.read_csv('mcdonalds.csv',index_col='Date',parse_dates=True)
mcdon.head()
# Not Good!
mcdon.plot()
mcdon['Adj. Close'].plot()
mcdon['Adj. Volume'].plot()
mcdon['Adj. Close'].plot(figsize=(12,8))
mcdon['Adj. Close'].plot(figsize=(12,8))
plt.ylabel('Close Price')
plt.xlabel('Overwrite Date Index')
plt.title('Mcdonalds')
mcdon['Adj. Close'].plot(figsize=(12,8),title='Pandas Title')
###Output
_____no_output_____
###Markdown
Plot Formatting X Limits
###Code
mcdon['Adj. Close'].plot(xlim=['2007-01-01','2009-01-01'])
mcdon['Adj. Close'].plot(xlim=['2007-01-01','2009-01-01'],ylim=[0,50])
###Output
_____no_output_____
###Markdown
Color and Style
###Code
mcdon['Adj. Close'].plot(xlim=['2007-01-01','2007-05-01'],ylim=[0,40],ls='--',c='r')
###Output
_____no_output_____
###Markdown
X Ticks. This is where you will need the power of matplotlib to do the heavy lifting if you want some serious customization!
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as dates
mcdon['Adj. Close'].plot(xlim=['2007-01-01','2007-05-01'],ylim=[0,40])
idx = mcdon.loc['2007-01-01':'2007-05-01'].index
stock = mcdon.loc['2007-01-01':'2007-05-01']['Adj. Close']
idx
stock
###Output
_____no_output_____
###Markdown
Basic matplotlib plot
###Code
fig, ax = plt.subplots()
ax.plot_date(idx, stock,'-') #plot_date
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Fix the overlap!
###Code
fig, ax = plt.subplots()
ax.plot_date(idx, stock,'-')
#automatic format
fig.autofmt_xdate() # Auto fixes the overlap!
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Customize grid
###Code
fig, ax = plt.subplots()
ax.plot_date(idx, stock,'-')
ax.yaxis.grid(True)
ax.xaxis.grid(True)
fig.autofmt_xdate() # Auto fixes the overlap!
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Format dates on Major Axis
###Code
fig, ax = plt.subplots()
ax.plot_date(idx, stock,'-')
# Grids
ax.yaxis.grid(True)
ax.xaxis.grid(True)
# Major Axis
ax.xaxis.set_major_locator(dates.MonthLocator())
ax.xaxis.set_major_formatter(dates.DateFormatter('%b\n%Y'))
fig.autofmt_xdate() # Auto fixes the overlap!
plt.tight_layout()
plt.show()
fig, ax = plt.subplots()
ax.plot_date(idx, stock,'-')
# Grids
ax.yaxis.grid(True)
ax.xaxis.grid(True)
# Major Axis
ax.xaxis.set_major_locator(dates.MonthLocator())
ax.xaxis.set_major_formatter(dates.DateFormatter('\n\n\n\n%Y--%B'))
fig.autofmt_xdate() # Auto fixes the overlap!
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Minor Axis
###Code
fig, ax = plt.subplots()
ax.plot_date(idx, stock,'-')
# Major Axis
ax.xaxis.set_major_locator(dates.MonthLocator())
ax.xaxis.set_major_formatter(dates.DateFormatter('\n\n%Y--%B'))
# Minor Axis
ax.xaxis.set_minor_locator(dates.WeekdayLocator())
ax.xaxis.set_minor_formatter(dates.DateFormatter('%d'))
# Grids
ax.yaxis.grid(True)
ax.xaxis.grid(True)
fig.autofmt_xdate() # Auto fixes the overlap!
plt.tight_layout()
plt.show()
fig, ax = plt.subplots(figsize=(10,8))
ax.plot_date(idx, stock,'-')
# Major Axis
ax.xaxis.set_major_locator(dates.WeekdayLocator(byweekday=1))
ax.xaxis.set_major_formatter(dates.DateFormatter('%B-%d-%a'))
# Grids
ax.yaxis.grid(True)
ax.xaxis.grid(True)
fig.autofmt_xdate() # Auto fixes the overlap!
plt.tight_layout()
###Output
_____no_output_____ |
Google_CoLab_XGB_Big5.ipynb | ###Markdown
###Code
from xgboost import XGBClassifier
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import StandardScaler
# https://stackoverflow.com/questions/57986259/multiclass-classification-with-xgboost-classifier
# !unzip -q original.zip
model = XGBClassifier (learning_rate=0.5, objective='multi:softmax')
big5 = pd.read_csv ('big5.csv', sep='\t')
big5.head()
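# Hedged sketch of a possible next step (assumptions: a hypothetical target column
# named 'label' holds the class and the remaining columns are numeric features;
# the real big5.csv layout is not shown here, so this is left commented out):
# X = big5.drop(columns=['label'])
# y = big5['label']
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# model.fit(X_train, y_train)
# print(accuracy_score(y_test, model.predict(X_test)))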
###Output
_____no_output_____ |
.ipynb_checkpoints/W-stacking-improved Optimised with decreased w-planes (W=4, x0=0.2)-checkpoint.ipynb | ###Markdown
Reference: Ye, H. (2019). Accurate image reconstruction in radio interferometry (Doctoral thesis). https://doi.org/10.17863/CAM.39448 ; Haoyang Ye, Stephen F Gull, Sze M Tan, Bojan Nikolic, Optimal gridding and degridding in radio interferometry imaging, Monthly Notices of the Royal Astronomical Society, Volume 491, Issue 1, January 2020, Pages 1146–1159, https://doi.org/10.1093/mnras/stz2970 ; Github: https://github.com/zoeye859/Imaging-Tutorial
###Code
%matplotlib notebook
import numpy as np
from scipy.optimize import leastsq, brent
from scipy.linalg import solve_triangular
import matplotlib.pyplot as plt
import scipy.integrate as integrate
from time import process_time
from numpy.linalg import inv
np.set_printoptions(precision=6)
from Imaging_core_new import *
from Gridding_core import *
import pickle
with open("min_misfit_gridding_7_0p2.pkl", "rb") as pp:
opt_funcs = pickle.load(pp)
###Output
_____no_output_____
###Markdown
1. Read in the data
###Code
######### Read in visibilities ##########
data = np.genfromtxt('out_barray_6d.csv', delimiter = ',')
jj = complex(0,1)
u_original = data.T[0]
v_original = data.T[1]
w_original = -data.T[2]
V_original = data.T[3] + jj*data.T[4]
n_uv = len(u_original)
uv_max = max(np.sqrt(u_original**2+v_original**2))
V,u,v,w = Visibility_minusw(V_original,u_original,v_original,w_original)
#### Determine the pixel size ####
X_size = 900 # image size on x-axis
Y_size = 900 # image size on y-axis
X_min = -np.pi/60. #You can change X_min and X_max in order to change the pixel size.
X_max = np.pi/60.
X = np.linspace(X_min, X_max, num=X_size+1)[0:X_size]
Y_min = -np.pi/60. #You can change Y_min and Y_max in order to change the pixel size.
Y_max = np.pi/60.
Y = np.linspace(Y_min,Y_max,num=Y_size+1)[0:Y_size]
pixel_resol_x = 180. * 60. * 60. * (X_max - X_min) / np.pi / X_size
pixel_resol_y = 180. * 60. * 60. * (Y_max - Y_min) / np.pi / Y_size
print ("The pixel size on x-axis is ", pixel_resol_x, " arcsec")
###Output
The pixel size on x-axis is 23.999999999999996 arcsec
###Markdown
2. Determine w plane number Nw_2R
###Code
W = 7
M, x0, h = opt_funcs[W].M, opt_funcs[W].x0, opt_funcs[W].h
n0, w_values, dw = calcWgrid_offset(W, X_max, Y_max, w, x0, symm=True)
###Output
We will have 26 w-planes
###Markdown
3 3D Gridding + Imaging + Correcting. To learn more about gridding, you can refer to https://github.com/zoeye859/Imaging-Tutorial. Calculating the gridding values for w
###Code
im_size = 2250
ind = find_nearestw(w_values, w)
C_w = cal_grid_w(w, w_values, ind, dw, W, h, M, x0)
###Output
Elapsed time during the w gridding value calculation in seconds: 8.187552365000101
###Markdown
Gridding on w-axis
###Code
V_wgrid, u_wgrid, v_wgrid, beam_wgrid = grid_w_offset(V, u, v, w, C_w, w_values, W, len(w_values), ind, n0)
###Output
Elapsed time during the w-gridding calculation in seconds: 2.3843892449999995
###Markdown
Imaging
###Code
def grid_uv(V_update, u_update, v_update, beam_update, W, im_size, X_max, X_min, Y_max, Y_min, h, M, x0):
"""
Grid on u-axis and v-axis
Args:
V_update (np.narray): visibility data on the certain w-plane
u_update (np.narray): u of the (u,v,w) coordinates on the certain w-plane
v_update (np.narray): v of the (u,v,w) coordinates on the certain w-plane
        beam_update (np.narray): beam weights of the visibilities on the certain w-plane
        W (int): support width of the gridding function
        im_size, X_max, X_min, Y_max, Y_min, h, M, x0: grid size, image extent and gridding-function parameters
"""
V_grid = np.zeros((im_size,im_size),dtype = np.complex_)
B_grid = np.zeros((im_size,im_size),dtype = np.complex_)
C_u, u_grid = cal_grid_uv(u_update, W, im_size, X_max, X_min, h, M, x0)
C_v, v_grid = cal_grid_uv(v_update, W, im_size, Y_max, Y_min, h, M, x0)
for k in range(0,len(V_update)):
C_uk = C_u[k]
C_vk = C_v[k]
if W % 2 == 1:
u_index = np.int(np.around(u_grid[k]))
v_index = np.int(np.around(v_grid[k]))
else:
u_index = np.int(np.floor(u_grid[k]))
v_index = np.int(np.floor(v_grid[k]))
u_k=0
for m in range(-W//2+1,-W//2+1+W):
v_k=0
for n in range(-W//2+1,-W//2+1+W):
V_grid[u_index+m,v_index+n] += C_uk[u_k] * C_vk[v_k] * V_update[k]
B_grid[u_index+m,v_index+n] += C_uk[u_k] * C_vk[v_k] * beam_update[k]
v_k+=1
u_k+=1
return V_grid, B_grid
I_size = int(im_size*2*x0)
I_image = np.zeros((I_size,I_size),dtype = np.complex_)
B_image = np.zeros((I_size,I_size),dtype = np.complex_)
t2_start = process_time()
for w_ind in range(len(w_values)):
print ('Gridding the ', w_ind, 'th level facet out of ', len(w_values),' w facets.\n')
V_update = np.asarray(V_wgrid[w_ind])
u_update = np.asarray(u_wgrid[w_ind])
v_update = np.asarray(v_wgrid[w_ind])
beam_update = np.asarray(beam_wgrid[w_ind])
V_grid, B_grid = grid_uv(V_update, u_update, v_update, beam_update, W, im_size, X_max, X_min, Y_max, Y_min, h, M, x0)
print ('FFT the ', w_ind, 'th level facet out of ', len(w_values),' w facets.\n')
I_image += FFTnPShift_offset(V_grid, w_values[w_ind], X, Y, im_size, x0, n0)
B_image += FFTnPShift_offset(B_grid, w_values[w_ind], X, Y, im_size, x0, n0)
B_grid = np.zeros((im_size,im_size),dtype = np.complex_)
V_grid = np.zeros((im_size,im_size),dtype = np.complex_)
t2_stop = process_time()
print("Elapsed time during imaging in seconds:", t2_stop-t2_start)
###Output
Gridding the 0 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.36680272800003877
Elapsed time during the u/v gridding value calculation in seconds: 0.36454913099987607
FFT the 0 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 1 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 2.305769428000076
Elapsed time during the u/v gridding value calculation in seconds: 2.289033023999991
FFT the 1 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 2 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 3.8411090659999445
Elapsed time during the u/v gridding value calculation in seconds: 3.8169645559999026
FFT the 2 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 3 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 4.942067829000052
Elapsed time during the u/v gridding value calculation in seconds: 4.923582562999854
FFT the 3 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 4 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 5.815981094000108
Elapsed time during the u/v gridding value calculation in seconds: 5.829218212999876
FFT the 4 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 5 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 6.510376555999983
Elapsed time during the u/v gridding value calculation in seconds: 6.47194761500009
FFT the 5 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 6 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 7.023058178999918
Elapsed time during the u/v gridding value calculation in seconds: 7.00615604099994
FFT the 6 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 7 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 7.067264843000203
Elapsed time during the u/v gridding value calculation in seconds: 7.109062306999931
FFT the 7 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 8 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 5.479340631000014
Elapsed time during the u/v gridding value calculation in seconds: 5.480119515000069
FFT the 8 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 9 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 4.1769611770000665
Elapsed time during the u/v gridding value calculation in seconds: 4.19470699600015
FFT the 9 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 10 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 3.3027111640001294
Elapsed time during the u/v gridding value calculation in seconds: 3.2969850169999972
FFT the 10 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 11 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 2.5296875610001734
Elapsed time during the u/v gridding value calculation in seconds: 2.5101788789997954
FFT the 11 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 12 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 1.9537372330000835
Elapsed time during the u/v gridding value calculation in seconds: 1.9322258929998952
FFT the 12 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 13 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 1.4234372910000275
Elapsed time during the u/v gridding value calculation in seconds: 1.4228298709999763
FFT the 13 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 14 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 1.042621987000075
Elapsed time during the u/v gridding value calculation in seconds: 1.0356624859998647
FFT the 14 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 15 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.7313754649999282
Elapsed time during the u/v gridding value calculation in seconds: 0.7170594720000736
FFT the 15 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 16 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.4879663930000788
Elapsed time during the u/v gridding value calculation in seconds: 0.4816383559998485
FFT the 16 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 17 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.3114565210000819
Elapsed time during the u/v gridding value calculation in seconds: 0.3085141300000487
FFT the 17 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 18 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.1880746790000103
Elapsed time during the u/v gridding value calculation in seconds: 0.18781221300014295
FFT the 18 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 19 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.11067428199999085
Elapsed time during the u/v gridding value calculation in seconds: 0.11153932999991412
FFT the 19 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 20 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.0689003990000856
Elapsed time during the u/v gridding value calculation in seconds: 0.06829192499981218
FFT the 20 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 21 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.03835078500014788
Elapsed time during the u/v gridding value calculation in seconds: 0.0379822449999665
FFT the 21 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 22 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.024912316999916584
Elapsed time during the u/v gridding value calculation in seconds: 0.024506475000180217
FFT the 22 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 23 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.013887522000004537
Elapsed time during the u/v gridding value calculation in seconds: 0.013434985000003508
FFT the 23 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
FFTing...
Phaseshifting...
Gridding the 24 th level facet out of 26 w facets.
Elapsed time during the u/v gridding value calculation in seconds: 0.005102541000042038
Elapsed time during the u/v gridding value calculation in seconds: 0.004718458000070314
FFT the 24 th level facet out of 26 w facets.
FFTing...
Phaseshifting...
###Markdown
Rescale and have a look
###Code
I_image_now = image_rescale(I_image,im_size, n_uv)
B_image_now = image_rescale(B_image,im_size, n_uv)
plt.figure()
plt.imshow(np.rot90(I_image_now.real,1), origin = 'lower')
plt.xlabel('Image Coordinates X')
plt.ylabel('Image Coordinates Y')
plt.show()
B_image_now[450,450]
###Output
_____no_output_____
###Markdown
Correcting functions h(x)h(y) on x and y axis W= 4, x0 = 0.2
###Code
def xy_correct(I, opt_func, im_size, x0):
"""
Rescale the obtained image
Args:
        x0 (float): image-plane support parameter used for cropping (I_size = im_size * 2 * x0)
im_size (int): the image size, it is to be noted that this is before the image cropping
opt_func (np.ndarray): The vector of grid correction values sampled on [0,x0) to optimize
I (np.narray): summed up image
Return:
I_xycorrected (np.narray): corrected image on x,y axis
"""
I_size = int(im_size*2*x0)
x = np.arange(-im_size/2, im_size/2)/im_size
h_map = get_grid_correction(opt_func, x)
index_x = int((0.5+x0)*im_size)
index_y = int((0.5+x0)*im_size)
#index_x = int(I_size * 1.5)
#index_y = int(I_size * 1.5)
temp = np.delete(h_map,np.s_[0:(im_size - index_x)],0)
Cor_gridx = np.delete(temp,np.s_[I_size:index_x],0) #correcting function on x-axis
Cor_gridy = np.delete(temp,np.s_[I_size:index_y],0) #correcting function on y-axis
I_xycorrected = np.zeros([I_size,I_size],dtype = np.complex_)
for i in range(0,I_size):
for j in range(0,I_size):
I_xycorrected[i,j] = I[i,j] * Cor_gridx[i] * Cor_gridy[j]
return I_xycorrected
I_xycorrected = xy_correct(I_image_now, opt_funcs[W], im_size, x0=0.2)
B_xycorrected = xy_correct(B_image_now, opt_funcs[W], im_size, x0=0.2)
plt.figure()
plt.imshow(np.rot90(I_xycorrected.real,1), origin = 'lower')
plt.xlabel('Image Coordinates X')
plt.ylabel('Image Coordinates Y')
plt.show()
B_xycorrected[450,450]
###Output
_____no_output_____
###Markdown
Correcting function on z axis
###Code
def z_correct_cal_offset(lut, X_min, X_max, Y_min, Y_max, dw, h, im_size, W, M, x0, n0=1):
"""
Return:
Cor_gridz (np.narray): correcting function on z-axis by Sze
"""
I_size = int(im_size*2*x0)
nu, x = make_evaluation_grids(W, M, I_size)
gridder = calc_gridder(h, x0, nu, W, M)
grid_correction = gridder_to_grid_correction(gridder, nu, x, W)
print (grid_correction[:I_size].shape)
#h_map = np.zeros(im_size, dtype=float)
#h_map[I_size:] = grid_correction[:I_size]
#h_map[:I_size] = grid_correction[:0:-1]
xrange = X_max - X_min
yrange = Y_max - Y_min
ny = im_size
nx = im_size
l_map = np.linspace(X_min, X_max, nx+1)[:nx]/(2*x0)
m_map = np.linspace(Y_min, Y_max, ny+1)[:ny]/(2*x0)
ll, mm = np.meshgrid(l_map, m_map)
# Do not allow NaN or values outside the x0 for the optimal function
z = abs(dw*(np.sqrt(np.maximum(0.0, 1. - ll**2 - mm**2))-n0))
z[z > x0] = x0
fmap = lut.interp(z)
Cor_gridz = image_crop(fmap, im_size, x0)
return Cor_gridz
lut = setup_lookup_table(opt_funcs[7], 256, 7, x0)
Cor_gridz = z_correct_cal_offset(lut, X_min, X_max, Y_min, Y_max, dw, h, im_size, W, M, x0, n0)
I_zcorrected = z_correct(I_xycorrected, Cor_gridz, im_size, x0=0.25)
B_zcorrected = z_correct(B_xycorrected, Cor_gridz, im_size, x0=0.25)
#np.savetxt('I_Figure6.csv', I_zcorrected.real, delimiter = ',')
###Output
(900,)
###Markdown
4 DFT and FFT dirty image difference
###Code
I_DFT = np.loadtxt('I_DFT_900_out6db.csv', delimiter = ',')
I_dif = I_DFT - I_zcorrected.real
plt.figure()
plt.imshow(np.rot90(I_dif,1), origin = 'lower')
plt.colorbar()
plt.xlabel('Image Coordinates X')
plt.ylabel('Image Coordinates Y')
plt.show()
rms = RMS(I_dif, im_size, 1, x0=0.2)
print (rms)
from astropy.io import fits
fits_file = 'out_1800.flux.fits'
hdu_list = fits.open(fits_file)
pbcor = hdu_list[0].data
hdu_list.close()
pbcor = pbcor.reshape((1800,1800))
pbcor = pbcor[450:1350,450:1350]
I_dif_r = I_rotation(900,I_dif)
I_dif_r_pbcor = pb_cor(pbcor,900,I_dif_r)
np.savetxt('Difference_W4_x2.csv',I_dif_r_pbcor, delimiter=',')
I_diff_47planes = np.loadtxt('Difference_47planes.csv', delimiter = ',')
#I_diff_186planes = np.loadtxt('Difference_186planes.csv', delimiter = ',')
I_diff_470planes = np.loadtxt('Difference_470planes.csv', delimiter = ',')
I_diff_10000planes = np.loadtxt('Difference_10000planes.csv', delimiter = ',')
I_diff = np.loadtxt('Difference_improved.csv', delimiter = ',')
I_diff1 = np.loadtxt('Difference_W4_x25.csv', delimiter = ',')
I_diff2 = np.loadtxt('Difference_W4_x2.csv', delimiter = ',')
rms47 = np.zeros(450)
#rms186 = np.zeros(450)
rms470 = np.zeros(450)
rms10000 = np.zeros(450)
rms = np.zeros(450)
rms1 = np.zeros(450)
rms2 = np.zeros(450)
j = 0
for i in np.arange(0,450,1):
rms47[j] = np.sqrt(np.mean(I_diff_47planes[i:(900-i),i:(900-i)]**2))
#rms186[j] = np.sqrt(np.mean(I_diff_186planes[i:(900-i),i:(900-i)]**2))
rms470[j] = np.sqrt(np.mean(I_diff_470planes[i:(900-i),i:(900-i)]**2))
rms10000[j] = np.sqrt(np.mean(I_diff_10000planes[i:(900-i),i:(900-i)]**2))
rms[j] = np.sqrt(np.mean(I_diff[i:(900-i),i:(900-i)]**2))
rms1[j] = np.sqrt(np.mean(I_diff1[i:(900-i),i:(900-i)]**2))
rms2[j] = np.sqrt(np.mean(I_diff2[i:(900-i),i:(900-i)]**2))
j=j+1
plt.figure()
i = np.arange(0,450,1)
x = (450-i)/450/2
plt.semilogy(x,rms47, label = 'W-Stacking (W=7,x0=0.25,47 planes)')
#plt.semilogy(x,rms186, label = 'W-Stacking (186 planes)')
plt.semilogy(x,rms470, label = 'W-Stacking (W=7,x0=0.25,470 planes)')
plt.semilogy(x,rms10000, label = 'W-Stacking (W=7,x0=0.25,10000 planes)')
plt.semilogy(x,rms, label = 'Improved W-Stacking (W=7,x0=0.25,22 planes)')
plt.semilogy(x,rms1, label = 'Improved W-Stacking (W=4,x0=0.25,19 planes)')
plt.semilogy(x,rms2, label = 'Improved W-Stacking (W=4,x0=0.2,23 planes)')
#plt.ylim(1e-7,1e-1)
plt.title(r'RMS of image misfit')
plt.xlabel('Normalised image plane coordinate')
plt.ylabel('RMS of image misfit')
plt.grid()
plt.legend(bbox_to_anchor=(1.1, 1.05))
plt.show()
plt.savefig('RMS_comparison_W4_2.png', dpi=300, bbox_inches='tight')
###Output
_____no_output_____ |
2-xarray-example/1-basic-xarray.ipynb | ###Markdown
xarray. There is another very handy library for data processing: xarray. These notes draw on the following material and briefly record the basics of using xarray: the [xarray official documentation](http://xarray.pydata.org/en/stable/) and the [Pangeo Tutorial Gallery / Xarray Tutorial](http://gallery.pangeo.io/repos/pangeo-data/pangeo-tutorial-gallery/xarray.html). xarray is a Python library designed for multi-dimensional labelled data. Xarray **introduces labels in the form of dimensions, coordinates and attributes on top of raw NumPy-like arrays**, giving a more intuitive, more concise and less error-prone developer experience. Labels are introduced because real-world data is usually not just a raw multi-dimensional array; it also carries labels encoding information about the data, such as space and time. xarray can use these labels to manipulate the data better, and this will be recorded step by step as it is used. Xarray borrows from pandas and is especially suited to netCDF files; netCDF is the source of xarray's data model, and if you are not familiar with netCDF see the [netcdf-python introduction](https://github.com/OuyangWenyu/aqualord/blob/master/DataFormat/netcdf.ipynb). The most basic data structures in xarray are DataArray and Dataset; the former is similar to a pandas Series, the latter to a DataFrame, with more on both later. For example, the figure below shows a "data cube" in which temperature and precipitation share the same three dimensions, with longitude and latitude as auxiliary coordinates. xarray also integrates tightly with dask for parallel computing, to be covered when dask is used. Install with conda (the install recommended on the official website): ```conda install -c conda-forge xarray dask netCDF4 bottleneck``` Basic terminology: unless stated otherwise below, arr denotes a DataArray object. DataArray: a multi-dimensional labelled array; once the name attribute is set it becomes a named DataArray. Dataset: a dict-like collection of DataArray objects with multiple aligned dimensions; Datasets contain Variables. Variable: similar to a netCDF variable, holding dimensions, data and attributes; the main difference from numpy arrays is that broadcasting is based on dimension names, every DataArray has an underlying variable reachable via arr.variable, and since Variables live inside Dataset and DataArray you rarely use them directly. Dimension: a dimension axis is the set of all points along one dimension; each axis has a name (e.g. the x dimension), the dimensions of a DataArray are its named axes, the name of the i-th dimension is arr.dims[i], and the default names are dim_0, dim_1 and so on. Coordinate: an array that labels a dimension; in the one-dimensional case its values can be seen as labels for that dimension. There are two kinds: a dimension coordinate is a one-dimensional coordinate array assigned with a name matching a dimension name (similar to the index of a DataFrame, usable for fast indexing), while a non-dimension coordinate, visible in arr.coords, can be one- or multi-dimensional (the multi-dimensional case arises when physical and logical coordinates do not coincide) and cannot be used for indexing. Index: an optimized data structure for fast indexing and slicing; xarray indexes quickly using dimension coordinates. The above is a bit abstract, so let's look at examples. Basic data structures: first DataArray, the multi-dimensional labelled array. DataArray
###Code
import numpy as np
data = np.random.rand(4, 3)
data
import pandas as pd
locs = ['IA', 'IL', 'IN']
times = pd.date_range('2000-01-01', periods=4)
times
import xarray as xr
foo = xr.DataArray(data, coords=[times, locs], dims=['time', 'space'])
foo
###Output
_____no_output_____
###Markdown
As you can see, this arr has two coordinates, one for time and one for space; time corresponds to a coordinate array (a pandas date_range) and space to a list. The data has 4 * 3 = 12 values, with 4 rows for the 4 times and 3 columns for the 3 locations. The simplest way to initialise is to use only the data directly:
###Code
xr.DataArray(data)
###Output
_____no_output_____
###Markdown
You can see the default coordinates. Alternatively, coords can be passed explicitly when creating the DataArray:
###Code
xr.DataArray(data, coords=[('time', times), ('space', locs)])
###Output
_____no_output_____
###Markdown
Another example, with additional (including multi-dimensional) coordinates:
###Code
xr.DataArray(data, coords={'time': times, 'space': locs, 'const': 42,
'ranking': (('time', 'space'), np.arange(12).reshape(4,3))},
dims=['time', 'space'])
###Output
_____no_output_____
###Markdown
As above, time and space are dimension coordinates, while const and ranking are non-dimension coordinates. Dimensions correspond directly to the data, whereas the other, non-dimension coordinates do not map onto the data directly. A DataFrame can also be used for initialisation:
###Code
df = pd.DataFrame({'x': [0, 1], 'y': [2, 3]}, index=['a', 'b'])
df
df.index.name = 'abc'
df
df.columns.name = 'xyz'
df
xr.DataArray(df)
###Output
_____no_output_____
###Markdown
Next, let's look at the array's attributes:
###Code
foo.values
###Output
_____no_output_____
###Markdown
Note that the values in a DataArray all share one data type. If you need different data types, you need a Dataset.
###Code
foo.dims
foo.coords
foo.attrs
print(foo.name)
###Output
None
###Markdown
Attributes that are missing or still at their defaults above can be filled in as follows:
###Code
foo.name = 'foo'
foo.attrs['units'] = 'meters'
foo
###Output
_____no_output_____
###Markdown
Using rename returns a new data array:
###Code
foo.rename('bar')
###Output
_____no_output_____
###Markdown
Coordinates are dict-like:
###Code
foo.coords['time']
foo['time']
###Output
_____no_output_____
###Markdown
Coordinates can be deleted:
###Code
foo['ranking'] = ('space', [1, 2, 3])
foo.coords
del foo['ranking']
foo.coords
###Output
_____no_output_____
###Markdown
A short summary of DataArray: at its core it is still an array, and the array's coordinates correspond directly to dimension coordinates; a two-dimensional array, for example, has two dimension coordinates, and every value in the array corresponds to a unique pair of dimension-coordinate values. There can also be non-dimension coordinates, plus attributes describing the DataArray. Next we look at Dataset, which is tailor-made for netCDF. A Dataset can simply be thought of as a combination of several DataArrays sharing the same coordinate system. Dataset
###Code
temp = 15 + 8 * np.random.randn(2, 2, 3)
temp
precip = 10 * np.random.rand(2, 2, 3)
precip
lon = [[-99.83, -99.32], [-99.79, -99.23]]
lat = [[42.25, 42.21], [42.63, 42.59]]
ds = xr.Dataset({'temperature': (['x', 'y', 'time'], temp),
'precipitation': (['x', 'y', 'time'], precip)},
coords={'lon': (['x', 'y'], lon),
'lat': (['x', 'y'], lat),
'time': pd.date_range('2014-09-06', periods=3),
'reference_time': pd.Timestamp('2014-09-05')})
ds
###Output
_____no_output_____
###Markdown
A DataArray, or a DataFrame, can be passed directly into a Dataset.
###Code
xr.Dataset({'bar': foo})
type(foo.to_pandas())
xr.Dataset({'bar': foo.to_pandas()})
###Output
_____no_output_____
###Markdown
A variable is the name of a DataArray that makes up the Dataset, i.e. the dict key used in the initialisation above. Use the variable name on the Dataset just like a dict key to retrieve the DataArray:
###Code
'temperature' in ds
ds['temperature']
###Output
_____no_output_____
###Markdown
Or:
###Code
ds.temperature
###Output
_____no_output_____
###Markdown
The variables of the Dataset:
###Code
ds.data_vars
###Output
_____no_output_____
###Markdown
The coordinates of the Dataset:
###Code
ds.coords
###Output
_____no_output_____
###Markdown
The attributes of the Dataset:
###Code
ds.attrs
ds.attrs['title'] = 'example attribute'
ds
###Output
_____no_output_____
###Markdown
The length of a Dataset is the number of its variables.
###Code
len(ds)
###Output
_____no_output_____
###Markdown
If you want to save a Dataset, the most natural way is to save it to a netCDF file.
###Code
ds.to_netcdf("saved_on_disk.nc")
###Output
_____no_output_____
###Markdown
Reading a netCDF file in xarray works like this:
###Code
ds_disk = xr.open_dataset("saved_on_disk.nc")
ds_disk
###Output
_____no_output_____
###Markdown
Simple computation example. Computation on an xarray DataArray behaves just like computation on a numpy array, so at least formally the two stay consistent, which is also convenient for Python programming. Below is a concrete example.
###Code
import numpy as np
import xarray as xr
###Output
_____no_output_____
###Markdown
Now construct f(x) = sin(x)
###Code
x = np.linspace(-np.pi, np.pi, 19)
f = np.sin(x)
###Output
_____no_output_____
###Markdown
Put the data into a DataArray
###Code
da_f = xr.DataArray(f)
da_f
###Output
_____no_output_____
###Markdown
Add more information to the data
###Code
da_f = xr.DataArray(f, dims=['x'])
da_f
da_f = xr.DataArray(f, dims=['x'], coords={'x': x})
da_f
###Output
_____no_output_____
###Markdown
xarray also has convenient plotting like pandas, provided by matplotlib; if it is not installed yet, just install it with conda. More notes on visualisation will follow later.
###Code
da_f.plot(marker='o')
###Output
_____no_output_____
###Markdown
Indexing the data is consistent with indexing and slicing in numpy
###Code
# get the 10th item
da_f[10]
# get the first 10 items
da_f[:10]
###Output
_____no_output_____
###Markdown
What is more powerful in xarray, though, is the .sel() function, which performs label-based indexing, so you can index by coordinate values
###Code
da_f.sel(x=0)
da_f.sel(x=slice(0, np.pi)).plot()
###Output
_____no_output_____
###Markdown
Basic operations. When doing mathematical operations on xarray DataArrays, the coordinates come along. Suppose we want to compute $g=f^2+1$; we can compute it just as in numpy:
###Code
da_g = da_f**2 + 1
da_g
da_g.plot()
###Output
_____no_output_____
###Markdown
Working with multi-dimensional data. Compared with one-dimensional data and the similar capabilities of numpy and pandas, xarray offers more for multi-dimensional data. Download the data manually by running ```git clone https://github.com/pangeo-data/tutorial-data.git``` in this folder, then read the example netCDF file:
###Code
import xarray as xr
ds = xr.open_dataset('./tutorial-data/sst/NOAA_NCDC_ERSST_v3b_SST-1960.nc')
ds
# both do the exact same thing
# dictionary syntax
sst = ds['sst']
# attribute syntax
sst = ds.sst
sst
###Output
_____no_output_____
###Markdown
Below is multi-dimensional indexing, selecting the data for a specific date
###Code
sst.sel(time='1960-06-15').plot(vmin=-2, vmax=30)
###Output
_____no_output_____
###Markdown
You can also select values along any axis:
###Code
sst.sel(lon=180).transpose().plot()
sst.sel(lon=180, lat=40).plot()
###Output
_____no_output_____
###Markdown
Below are some data-analysis examples that reduce the data and compute statistics such as the mean, standard deviation and extrema. For example, over all dimensions:
###Code
sst.mean()
# for float data, NaN values are skipped by default
sst.mean(skipna=True)
sst_time_mean = sst.mean(dim='time')
sst_time_mean.plot(vmin=-2, vmax=30)
sst_zonal_mean = sst.mean(dim='lon')
sst_zonal_mean.transpose().plot()
sst_time_and_zonal_mean = sst.mean(dim=('time', 'lon'))
sst_time_and_zonal_mean.plot()
# some might prefer to have lat on the y axis
sst_time_and_zonal_mean.plot(y='lat')
###Output
_____no_output_____
###Markdown
Here is a more complex example: computing the area-weighted mean temperature. Because each surface element on the sphere has a different area, the areas must be computed and used as weights. The area of a surface element on the sphere is $$\delta A = R^2 \, \delta\phi \, \delta\lambda \, \cos(\phi)$$ where $\phi$ is latitude, $\lambda$ is longitude (both in radians) and R is the Earth's radius.
###Code
import numpy as np
R = 6.37e6
# we know already that the spacing of the points is one degree latitude
dϕ = np.deg2rad(1.)
dλ = np.deg2rad(1.)
dA = R**2 * dϕ * dλ * np.cos(np.deg2rad(ds.lat))
dA.plot()
###Output
_____no_output_____
###Markdown
Here ds.lat extracts the latitude; latitude is a coordinate, so the extracted data is one-dimensional and is itself a DataArray
###Code
ds.lat
###Output
_____no_output_____
###Markdown
The corresponding dA has the same structure
###Code
dA
###Output
_____no_output_____
###Markdown
The data can be extended onto a DataArray with lat and lon as coordinates simply by using the where function; where not only selects by condition, it can also broadcast
###Code
dA.where(sst[0].notnull())
###Output
_____no_output_____
###Markdown
We can see:
###Code
sst[0]
sst[0].notnull()
###Output
_____no_output_____
###Markdown
Where the condition is True, broadcasting is done according to the values of the lat coordinate. Let's plot it:
###Code
pixel_area = dA.where(sst[0].notnull())
pixel_area.plot()
total_ocean_area = pixel_area.sum(dim=('lon', 'lat'))
sst_weighted_mean = (sst * pixel_area).sum(dim=('lon', 'lat')) / total_ocean_area
sst_weighted_mean.plot()
###Output
_____no_output_____
###Markdown
xarray can also open multiple files at once into a single dataset, using the open_mfdataset function.
###Code
ds_all = xr.open_mfdataset('./tutorial-data/sst/*nc', combine='by_coords')
ds_all
###Output
_____no_output_____
###Markdown
Now all 57 years of data are in it. Next let's try some computations, starting with groupby
###Code
sst_clim = ds_all.sst.groupby('time.month').mean(dim='time')
sst_clim
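# A common follow-up (hedged sketch, not in the original notes): groupby arithmetic
# subtracts the monthly climatology from each month, giving SST anomalies.
sst_anom = ds_all.sst.groupby('time.month') - sst_clim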
###Output
_____no_output_____ |
docs/source/load_data.ipynb | ###Markdown
Load rheology data If you have a google account you can run this documentation notebook [Open in colab](https://colab.research.google.com/github/rheopy/rheofit/blob/master/docs/source/load_data.ipynb)
###Code
from IPython.display import clear_output
!pip install git+https://github.com/rheopy/rheofit.git --upgrade
clear_output()
import rheofit
import numpy as np
import pandas as pd
import pybroom as pb
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
###Output
_____no_output_____
###Markdown
The module rheofit.rheodata provides a class to store structured rheology data. We initially focus on loading data files exported by the TRIOS software in two possible formats: * Excel format - as structured by the TRIOS software with the multitab export option * RheoML format [rheoml link](https://www.tomcoat.com/IUPAC/index.php?Page=XML%20Repository) Import data from Multitab excel (assuming format exported by TA Trios)
###Code
# Download example of xls file format
import requests
url = 'https://github.com/rheopy/rheofit/raw/master/docs/source/_static/Flow_curve_example.xls'
r = requests.get(url, allow_redirects=True)
with open('Flow_curve_example.xls', 'wb') as file:
file.write(r.content)
excel_file=pd.ExcelFile('Flow_curve_example.xls')
excel_file.sheet_names
flow_curve=rheofit.rheodata.rheology_data('Flow_curve_example.xls')
flow_curve.filename
flow_curve.Details
flow_curve[0]
print(flow_curve[0][0])
flow_curve[0][1]
print(flow_curve[1][0])
flow_curve[1][1]
# In summary
flow_curve=rheofit.rheodata.rheology_data('Flow_curve_example.xls')
for (label,data) in flow_curve:
plt.loglog('Shear rate','Stress',data=data,marker='o',label=label)
plt.xlabel('$\dot\gamma$')
plt.ylabel('$\sigma$')
plt.legend()
flow_curve.tidy
import altair as alt
alt.Chart(flow_curve.tidy).mark_point().encode(
alt.X('Shear rate', scale=alt.Scale(type='log')),
alt.Y('Stress', scale=alt.Scale(type='log')),
color='stepname')
#.interactive()
###Output
_____no_output_____
###Markdown
Import data from Rheoml
###Code
# Download example of xls file format
import requests
url = 'https://raw.githubusercontent.com/rheopy/rheofit/master/docs/source/_static/Flow_curve_example.xml'
r = requests.get(url, allow_redirects=True)
with open('Flow_curve_example.xml', 'wb') as file:
file.write(r.content)
# In summary from xml (rheoml schema)
# to do - add to rheology_data class the option to import from xml
# Solve naming problem for shear stress and shear rate
steps_table_list=rheofit.rheodata.dicttopanda(rheofit.rheodata.get_data_dict('Flow_curve_example.xml'))
for data in steps_table_list:
plt.loglog('ShearRate','ShearStress',data=data,marker='o')
plt.xlabel('$\dot\gamma$')
plt.ylabel('$\sigma$')
plt.legend()
###Output
_____no_output_____ |
hw5/train/RNN_feature_extractor.ipynb | ###Markdown
Training samples for seq2seq prediction
###Code
def normalize(image):
'''
normalize for pre-trained model input
'''
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
transform_input = transforms.Compose([
transforms.ToPILImage(),
transforms.Pad((0,40), fill=0, padding_mode='constant'),
transforms.Resize(224),
# transforms.CenterCrop(224),
# transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize
])
return transform_input(image)
# load data from FullLength folder
# training set
print("training set .....")
with torch.no_grad():
video_path = "../HW5_data/FullLengthVideos/videos/train/"
category_list = sorted(listdir(video_path))
category = category_list[1]
train_all_video_frame = []
# cnn_feature_extractor = torchvision.models.densenet121(pretrained=True).features.cuda()
model = torchvision.models.resnet101(pretrained=True)
cnn_feature_extractor = nn.Sequential(*list(model.children())[:-1]).cuda()
for category in category_list:
print("category:",category)
image_list_per_folder = sorted(listdir(os.path.join(video_path,category)))
category_frames = []
for image in image_list_per_folder:
image_rgb = skimage.io.imread(os.path.join(video_path, category,image))
image_nor = normalize(image_rgb)
feature = cnn_feature_extractor(image_nor.view(1,3,224,224).cuda()).cpu().view(2048) # 1024*7*7 if use densenet
category_frames.append(feature)
train_all_video_frame.append(torch.stack(category_frames))
print("\nvalidation set .....")
video_path = "../HW5_data/FullLengthVideos/videos/valid/"
category_list = sorted(listdir(video_path))
category = category_list[1]
test_all_video_frame = []
for category in category_list:
print("category:",category)
image_list_per_folder = sorted(listdir(os.path.join(video_path,category)))
category_frames = []
for image in image_list_per_folder:
image_rgb = skimage.io.imread(os.path.join(video_path, category,image))
image_nor = normalize(image_rgb)
feature = cnn_feature_extractor(image_nor.view(1,3,224,224).cuda()).cpu().view(2048)
category_frames.append(feature)
test_all_video_frame.append(torch.stack(category_frames))
with open("train_FullLength_features_resnet.pkl", "wb") as f:
pickle.dump(train_all_video_frame, f)
with open("valid_FullLength_features_resnet.pkl", "wb") as f:
pickle.dump(test_all_video_frame, f)
###Output
_____no_output_____
###Markdown
Cut to defined size
###Code
with open("../features/train_FullLength_features_resnet.pkl", "rb") as f:
train_all_video_frame = pickle.load(f)
with open("../features/valid_FullLength_features_resnet.pkl", "rb") as f:
valid_all_video_frame = pickle.load(f)
# load ground truth
label_path = "../HW5_data/FullLengthVideos/labels/train/"
category_txt_list = sorted(listdir(label_path))
train_category_labels = []
for txt in category_txt_list:
file_path = os.path.join(label_path,txt)
with open(file_path,"r") as f:
label = [int(w.strip()) for w in f.readlines()]
train_category_labels.append(label)
label_path = "../HW5_data/FullLengthVideos/labels/valid/"
category_txt_list = sorted(listdir(label_path))
valid_category_labels = []
for txt in category_txt_list:
file_path = os.path.join(label_path,txt)
with open(file_path,"r") as f:
label = [int(w.strip()) for w in f.readlines()]
valid_category_labels.append(label)
###Output
_____no_output_____
###Markdown
using "slice" function in torch
###Code
def cut_frames(features_per_category, labels_per_category, size = 200, overlap = 20):
feature_size = 50176
a = torch.split(features_per_category, size-overlap)
b = torch.split(torch.Tensor(labels_per_category), size-overlap)
cut_features = []
cut_labels = []
for i in range(len(a)):
if i==0:
cut_features.append(a[i])
cut_labels.append(b[i])
else:
cut_features.append(torch.cat((a[i-1][-overlap:],a[i])))
cut_labels.append(torch.cat((b[i-1][-overlap:],b[i])))
lengths = [len(f) for f in cut_labels]
return cut_features, cut_labels, lengths
r1, r2, r3 = cut_frames(train_all_video_frame[0],train_category_labels[0], size = 120, overlap = 10)
cutting_steps = 350
overlap_steps = 30
train_cut_features = []
train_cut_labels = []
train_cut_lengths = []
for category_frames, category_labels in zip(train_all_video_frame,train_category_labels):
features, labels, lengths = cut_frames(category_frames,category_labels,
size = cutting_steps, overlap = overlap_steps)
train_cut_features += features
train_cut_labels += labels
train_cut_lengths += lengths
print("one category done")
valid_lengths = [len(s) for s in valid_all_video_frame]
valid_lengths
with open("../features/train_cut_features_350_30_resnet.pkl", "wb") as f:
pickle.dump(train_cut_features,f)
with open("../features/train_cut_labels_350_30_resnet.pkl", "wb") as f:
pickle.dump(train_cut_labels,f)
with open("../features/train_cut_lengths_350_30_resnet.pkl", "wb") as f:
pickle.dump(train_cut_lengths,f)
with open("../features/valid_cut_features_no_cut_resnet.pkl", "wb") as f:
pickle.dump(valid_all_video_frame,f)
with open("../features/valid_cut_labels_no_cut_resnet.pkl", "wb") as f:
pickle.dump(valid_category_labels,f)
with open("../features/valid_cut_lengths_no_cut_resnet.pkl", "wb") as f:
pickle.dump(valid_lengths,f)
###Output
_____no_output_____ |
projector.ipynb | ###Markdown
TensorBoard Dependencies. Note: the following dependencies were **not included** in the project's requirements file: * tensorflow * tensorboard * torch
###Code
# Load in the used dependencies
import os
import numpy as np
import tensorflow as tf
import tensorboard as tb
tf.io.gfile = tb.compat.tensorflow_stub.io.gfile
from torch.utils.tensorboard import SummaryWriter
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Data Hyperparameters
###Code
LOG_DIR = './population_backup/storage/experiment6/'
topology_id = 3
overwrite = True
rm_duplicates = False # Remove all samples with at least 'dup_columns' duplicate values
dup_columns = None # Number of duplicated columns a single sample has before removal
filter_score = True # Filter out samples having a fitness of min_fitness or more
classify = True # Classify the samples based on the connections
if topology_id in [2,3,222,3333]:
min_fitness = 1 # GRU is capable of finding all targets
elif topology_id in [22]:
min_fitness = 0.6 # Best score of 11/18
else:
raise Exception(f"Add topology ID '{topology_id}'")
###Output
_____no_output_____
###Markdown
Fetch
###Code
# Setup the header
head = ['fitness']
if topology_id in [1, 2, 3]: # GRU populations
head += ['bias_r', 'bias_z', 'bias_h',
'weight_xr', 'weight_xz', 'weight_xh',
'weight_hr', 'weight_hz', 'weight_hh']
elif topology_id in [22, 33]: # SRU populations
head += ['bias_h', 'weight_xh', 'weight_hh']
elif topology_id in [2222, 3333]:
head += ['bias_z', 'bias_h',
'weight_xz', 'weight_xh',
'weight_hz', 'weight_hh']
elif topology_id in [222, 333]:
head += ['delay', 'scale', 'resting']
else:
raise Exception(f"Topology ID '{topology_id}' not supported!")
if topology_id in [1]:
head += ['conn1', 'conn2']
elif topology_id in [2, 22, 222, 2222]:
head += ['bias_rw', 'conn2']
elif topology_id in [3, 33, 333, 3333]:
head += ['bias_rw', 'conn0', 'conn1', 'conn2']
else:
raise Exception(f"Topology ID '{topology_id}' not supported!")
# Check if tsv files already exist
raw_path = os.path.join(LOG_DIR, f'topology_{topology_id}/data/topology_{topology_id}.csv')
data_path = os.path.join(LOG_DIR, f'topology_{topology_id}/data/data.tsv')
meta_path = os.path.join(LOG_DIR, f'topology_{topology_id}/data/meta.tsv')
# Load in the data (without header)
if not overwrite and os.path.exists(data_path):
data = np.genfromtxt(data_path, delimiter='\t')
meta = np.genfromtxt(meta_path, delimiter='\t')
else:
raw = np.genfromtxt(raw_path, delimiter=',')[1:]
data = raw[:,:-1]
meta = raw[:,-1]
np.savetxt(data_path, data, delimiter='\t')
np.savetxt(meta_path, meta, delimiter='\t')
# Print shape:
print(f"Data shape: {data.shape}")
print(f"Meta shape: {meta.shape}")
# Transform to pandas dataframe (easier to manipulate)
data_pd = pd.DataFrame(data, columns=head[1:])
meta_pd = pd.DataFrame(meta, columns=head[:1])
data_pd.head()
meta_pd.head()
###Output
_____no_output_____
###Markdown
Filter the data
###Code
# Filter out the complete duplicates
indices = data_pd.duplicated()
data_pd = data_pd[~indices.values]
meta_pd = meta_pd[~indices.values]
print(f"Data shape: {data_pd.shape}")
print(f"Meta shape: {meta_pd.shape}")
# For example, if you want to see only fitnesses of 1 (perfect score).
if filter_score:
indices = meta_pd >= min_fitness
data_pd = data_pd[indices.values]
meta_pd = meta_pd[indices.values]
print(f"Data shape: {data_pd.shape}")
print(f"Meta shape: {meta_pd.shape}")
# Filter out all the samples that have at least one duplicate value (in each of its columns)
if rm_duplicates:
indices = (meta_pd<0).astype(int).values.flatten() # Little hack
for h in head[1:]:
indices += data_pd.duplicated(subset=h).astype(int).values
# Remove all that exceed the set threshold
data_pd = data_pd[indices < dup_columns]
meta_pd = meta_pd[indices < dup_columns]
print(f"Dropping duplicates that occur in {dup_columns} columns or more")
print(f" > Data shape: {data_pd.shape}")
print(f" > Meta shape: {meta_pd.shape}")
data_pd.head()
###Output
_____no_output_____
###Markdown
 Visualize the data. Remark: the symmetry in the data is clearly driven by conn1 (this is the connection between the GRU node and the output of the global network).
###Code
# indices = (data_pd['bias_rw'] >= 2) & (data_pd['conn2'] >= -3)
indices = (data_pd['bias_rw'] != 0)
temp = data_pd[indices.values]
print(f"Data shape: {temp.shape}")
plt.figure(figsize=(15,3))
temp.boxplot()
plt.ylim(-6,6)
plt.show()
plt.close()
for i, v in enumerate(temp.values):
a = abs(v[3]-2.1088881375106086)
b = abs(v[4]+2.819266855899976)
if a+b < 1:
print(i, "-", a+b,"-", round(a,3), "-", round(b,3))
temp.head()
temp.values[1][4]
plt.figure(figsize=(15,3))
data_pd.boxplot()
plt.show()
plt.close()
def adapt_and_show(data, indices=None):
data_temp = data
if indices is not None: data_temp = data_temp[indices.values]
print(f"Size: {data_temp.shape}")
plt.figure(figsize=(15,5))
for i, h in enumerate(head[1:]):
plt.subplot(int(len(head)/6+1),6,i+1)
sns.violinplot(data_temp[h])
plt.title(h)
if 'delay' in h:
continue
if 'bias' in h or 'resting' in h:
plt.xlim(-3,3)
else:
plt.xlim(-6,6)
plt.yticks([])
plt.tight_layout()
plt.show()
plt.close()
# indices = (data_pd['conn1'] > 0) & (data_pd['conn0'] > 0)
indices = None
adapt_and_show(data_pd, indices)
# h = 'bias_h'
# t = '$c_0$'
# plt.figure(figsize=(3,1.3))
# sns.violinplot(data_pd[h])
# plt.title(t)
# if 'bias' in h:
# plt.xlim(-3,3)
# else:
# plt.xlim(-6,6)
# plt.ylim(0)
# plt.yticks([])
# plt.xlabel('')
# plt.tight_layout()
# plt.savefig(f"delete_me/{h}.png", bbox_inches='tight', pad_inches=0.02)
# plt.show()
# plt.close()
###Output
_____no_output_____
###Markdown
 Add column. Add a class column, used for visualisation in the Projector.
###Code
def classify_connections(r):
if r['conn0'] >= 0 and r['conn1'] >= 0:
return "PP" # Positive Positive
elif r['conn0'] >= 0 and r['conn1'] < 0:
return "PN" # Positive Negative
elif r['conn0'] < 0 and r['conn1'] >= 0:
return "NP" # Negative Positive
else:
return "NN" # Negative Negative
classes = None
if classify:
classes = data_pd.apply(lambda row: classify_connections(row), axis=1).values
###Output
_____no_output_____
###Markdown
 Format. Create labels for visualisation.
###Code
# Create better labels
meta_str = []
for d, m in zip(data_pd.values, meta_pd.values):
s = [str(round(m[0], 2))]
s += [str(round(x, 5))for x in d]
meta_str.append(s)
if classify:
if 'classes' not in head: head.insert(0, 'classes')
for i, m in enumerate(meta_str):
meta_str[i].insert(0, classes[i])
# Example
print(f"Data example: \n> {data_pd.values[0]}")
print(f"Label example: \n> {meta_str[0]}")
print(f"For header: \n> {head}")
###Output
Data example:
> [ 1.37952784 1.2845938 -1.31854468 2.00584058 1.1426045 1.14685034
1.28932122 -1.62728255 -2.0571236 2.42080414 4.51275769 4.15302192
-4.97156486]
Label example:
> ['PP', '1.0', '1.37953', '1.28459', '-1.31854', '2.00584', '1.1426', '1.14685', '1.28932', '-1.62728', '-2.05712', '2.4208', '4.51276', '4.15302', '-4.97156']
For header:
> ['classes', 'fitness', 'bias_r', 'bias_z', 'bias_h', 'weight_xr', 'weight_xz', 'weight_xh', 'weight_hr', 'weight_hz', 'weight_hh', 'bias_rw', 'conn0', 'conn1', 'conn2']
###Markdown
 Magic. The folder in which data is stored is "runs/topology_1" (always keep "runs"). Change this if you want to compare several configurations.
###Code
%%capture
# Fire up the TensorBoard
writer = SummaryWriter(log_dir=f"runs/topology_{topology_id}") # Overwrites if already exists
writer.add_embedding(data_pd.to_numpy(), meta_str, metadata_header=head)
writer.close()
%load_ext tensorboard
%tensorboard --logdir=runs
# Tensorboard can be opened in separate tab: localhost:6006
###Output
warning: Embedding dir exists, did you set global_step for add_embedding()?
|
notebooks/scrubbed/covid-19-tests-august-2020.ipynb | ###Markdown
 Trying to predict the number of COVID-19 tests by August 1st 2020. *Notebook by Steven Rybicki.* In this notebook, we'll be focusing on trying to predict the number of COVID-19 tests in the US. Namely, we'll be looking at forecasting the answer to: > By August 1, how many tests for COVID-19 will have been administered in the US? taken from the [Metaculus question](https://pandemic.metaculus.com/questions/4400/by-august-1-how-many-tests-for-covid-19-will-have-been-administered-in-the-us/). This just looks at the number of tests, not distinguishing by type or whether someone is tested multiple times. To do this, we're going to be using [Ergo](https://github.com/oughtinc/ergo), a library by [Ought](https://ought.org/). This lets you integrate model-based forecasting (building up a numerical model to predict outcomes) with judgement-based forecasting (using calibration to predict outcomes less formally). To see other notebooks using Ergo, see [the Ergo repo](https://github.com/oughtinc/ergo/tree/master/notebooks). I'll be trying to walk through how you'd use this realistically if you wanted to forecast the answer to a question without investing a lot of time on exploration. This means our model will be kind of rough, but hopefully accurate enough to provide value, and I'll be iteratively trying to improve it as we go along. Notebook setup. This imports libraries and sets up some useful functions.
###Code
%%capture
!pip install --progress-bar off poetry
!pip install --progress-bar off git+https://github.com/oughtinc/ergo.git@e9f6463de652f4e15d7c9ab0bb9f067fef8847f7
import warnings
import ssl
warnings.filterwarnings(action="ignore", category=FutureWarning)
warnings.filterwarnings(action="ignore", module="plotnine")
ssl._create_default_https_context = ssl._create_unverified_context
import ergo
import seaborn
import numpy as np
import pandas as pd
from datetime import timedelta, date, datetime
import matplotlib.pyplot as plt
pd.set_option('precision', 2)
def summarize_samples(samples):
"""
Print out the p5, p50 and p95 of the given sample
"""
stats = samples.describe(percentiles=[0.05, 0.5, 0.95])
percentile = lambda pt: float(stats.loc[f"{pt}%"])
return f"{percentile(50):.2f} ({percentile(5):.2f} to {percentile(95):.2f})"
def show_marginal(func):
"""
Use Ergo to generate 1000 samples of the distribution of func, and then plot them as a distribution.
"""
samples = ergo.run(func, num_samples=1000)["output"]
seaborn.distplot(samples).set_title(func.__doc__);
plt.show()
print(f"Median {func.__doc__}: {summarize_samples(samples)}")
###Output
_____no_output_____
###Markdown
 How are we going to predict this? Principles. Here are a couple of things we're going to try to do when predicting this: - *fermi estimate / decomposition*: focus on decomposing the question into multiple parts that we can estimate separately. We'll start with our overall question (how many tests) and decompose it into smaller questions that are easier to predict (e.g. what's the estimated number of tests we will make tomorrow?). - *use live data*: where possible, fetch data from an up-to-date source. This lets us easily update our guess, as we can just rerun the notebook and it will update our prediction based on updated data. - *wisdom of the crowds*: incorporate other predictions, namely the [Metaculus one](https://pandemic.metaculus.com/questions/4400/by-august-1-how-many-tests-for-covid-19-will-have-been-administered-in-the-us/), so we can include information we might have missed. Question decomposition. Let's decompose our question into a number of sub-questions. We'll then focus on trying to estimate each of these separately, allowing us to iteratively improve our model as we go along. So the number of tests done by August 1st could be decomposed as: - ensemble prediction of: - my prediction - current number of tests done to date: how many tests have we done up until today? - how much we can expect that number to change until August 1st? - if current rates remain unchanged, what will that number be in August? - what change in testing rates should we expect based on past changes? - what will be the impact of future factors on testing rates? - metaculus question. My Prediction. Get relevant data. To know the current testing rate, and how it changes, we're going to use the https://covidtracking.com/ API to fetch the testing statistics. To make it easier to parse, we're going to filter to the fields we care about: - date - totalTestResults: cumulative tests per day - totalTestResultsIncrease: how many new test results were reported that day
###Code
testing_data = pd.read_csv("https://covidtracking.com/api/v1/us/daily.csv")
# testing data uses numbers for dates (20200531 for e.g.), so let's convert that to use python dates instead
testing_data["date"] = testing_data["date"].apply(lambda d: datetime.strptime(str(d), "%Y%m%d"))
# now filter just to the columns we care about
testing_data = testing_data[["date", "totalTestResults", "totalTestResultsIncrease"]]
testing_data
###Output
_____no_output_____
###Markdown
 Current number of tests. We can now use our dataset to easily find the number of tests done today (or the latest in the dataset), which we can verify is correct by looking at the table above.
###Code
# pick the largest, therefore most up to date, date
current_date = testing_data["date"].max()
# if there's multiple entries for a day for whatever reason, take the first
current_test_number = testing_data.loc[testing_data["date"] == current_date]["totalTestResults"][0]
current_date, current_test_number
###Output
_____no_output_____
###Markdown
 How much can we expect this to change until August 1st? We now want to predict how many new tests we can expect from now until August 1st. One easy way to do this: - look at the distribution of new tests per day over the last month - if we simulate what happens if we continue like this until August 1st, what distribution do we expect? Distribution of tests over the past month. First, let's look at the `totalTestResultsIncrease` over the past month.
###Code
last_month = current_date - timedelta(days=30)
test_data_over_last_month = testing_data[testing_data['date'].between(last_month, current_date)]
seaborn.distplot(test_data_over_last_month["totalTestResultsIncrease"])
###Output
_____no_output_____
###Markdown
Now, we want to use this to help build a model. Since we're using Ergo, this is easy: we just create a function that represents this field, which returns a random sample from the dataset. Then, to simulate the distribution of this field, we can sample from this function a large number of times. This is what the `show_marginal` function does, as well as plotting the resulting distribution.
###Code
def test_results_increase_per_day():
"# test results per day"
return ergo.random_choice(list(test_data_over_last_month["totalTestResultsIncrease"]))
show_marginal(test_results_increase_per_day)
###Output
_____no_output_____
###Markdown
 As a sanity check, this distribution looks very similar to the one plotted above, which makes sense: they should be equivalent, since the above is just randomly sampling from the original. Right now, this isn't especially useful: we've just created a similar distribution to one we had originally. But now that we've created this in Ergo, it becomes easy to manipulate and combine this with other distributions. We'll be doing this to extrapolate how many tests we can expect by August. How much can we expect that to increase by August 1st? Repeating our current rates. Now, using our new `test_results_increase_per_day` function, can we extrapolate what the number of tests tomorrow could look like? One way to do this: - look at test results today, `current_test_number` - sample from our `test_results_increase_per_day` distribution and add it to how many we have so far. Since `test_results_increase_per_day` returns a sample from this distribution, this just looks like adding `current_test_number` to `test_results_increase_per_day`.
###Code
def test_results_tomorrow():
"# test results tomorrow"
return current_test_number + test_results_increase_per_day()
show_marginal(test_results_tomorrow)
###Output
_____no_output_____
###Markdown
 Can we extend this to extrapolate further in the future? One simple way to do this (which we'll refine later): call `test_results_increase_per_day()` for each day in the future we want to forecast, and add them together. We could also just do `test_results_increase_per_day() * number_of_days`, but this approach lets us sample more points and better approximate the potential increase. To verify this is working in a sensible way, let's test it for tomorrow:
###Code
def test_results_for_date(date: datetime):
number_of_days = (date - current_date).days
return current_test_number + sum(test_results_increase_per_day() for _ in range(number_of_days))
def test_results_for_date_tomorrow():
"# test results tomorrow using test_results_for_date"
return test_results_for_date(current_date + timedelta(days=1))
show_marginal(test_results_for_date_tomorrow)
###Output
_____no_output_____
###Markdown
At least in my notebook, these show the exact same answer. So, now, if we want to predict for August, we can just change the date given.
###Code
def test_results_for_deadline():
"# test results for August 1st"
return test_results_for_date(datetime(2020, 8, 1))
show_marginal(test_results_for_deadline)
###Output
_____no_output_____
###Markdown
Note that with Ergo, we can just use normal python operators and functions (`+, sum`) to sample from `test_results_increase_per_day()` and create this new distribution `test_results_for_date`. This lets us focus on building our model, and not have to worry about the math of combining distributions. What change in testing rates should we expect based on past changes?The above analysis assumes that number of tests per day is not increasing, as if it is we'll be underestimating. Let's validate that, and look at how the number of tests has changed over time.
###Code
# create a days column, since that will help us make the x-axis look cleaner
# (dates overlap with a smaller graph)
first_date = testing_data["date"].min()
testing_data["days"] = \
testing_data["date"].apply(lambda d: (d - first_date).days)
seaborn.lineplot(x="days", y="totalTestResultsIncrease", data=testing_data)
###Output
_____no_output_____
###Markdown
 So we can see that, roughly, month over month the number of tests per day is increasing linearly. This means that our `test_results_for_deadline()` model above is probably underestimating the number of tests, as it assumes the testing capacity remains static. What we want is to estimate the day-over-day increase, and use that to increase the number of tests our model is adding per day. Since it looks very linear right now, let's use a linear regression to estimate the slope of the increases. We'll look at data after we passed 100,000 total tests, as that's where the trend seems to become linear.
###Code
# ignore early testing data, as it was mostly zero, then growing exponentially
test_data_worth_looking_at = testing_data[testing_data['totalTestResults'] >= 100000]
# do a linear regression using scipy
from scipy import stats
slope_of_test_increases = stats.linregress(test_data_worth_looking_at["days"], test_data_worth_looking_at["totalTestResultsIncrease"]).slope
slope_of_test_increases
###Output
_____no_output_____
###Markdown
Now, with this in hand, we can modify our previous estimate by using this slope.
###Code
def test_results_for_date_with_slope(date: datetime):
number_of_days = (date - current_date).days
return current_test_number + sum(test_results_increase_per_day() + slope_of_test_increases * day for day in range(number_of_days))
def test_results_for_deadline_with_slope():
"# test results for August 1st with linear increase in tests"
return test_results_for_date_with_slope(datetime(2020, 8, 1))
show_marginal(test_results_for_deadline_with_slope)
###Output
_____no_output_____
###Markdown
 What will be the impact of future factors on testing rates? Right now, our model is potentially a bit too overconfident. Testing might not increase linearly at this rate forever: - we might increase the testing rate, because of: - changes in testing policy to increase the number of tests - outbreaks that intensify in regions - tests that are cheaper - more infrastructure to deploy tests - etc. - we might decrease the testing rate, because of: - changes in testing policy to decrease the number of tests, to try to understate the impact of the disease - containment strategies working well - hitting capacity limits on test manufacture - not being able to import any tests due to shortages in other countries - etc. We could go down the rabbit hole of trying to predict each of these, but that would take a ton of time and not necessarily change the prediction much. The cheaper, time-wise, thing to do is to estimate how much we expect testing rates to change in the future. This is the judgement-based forecasting we mentioned above. I'm going to guess that linear growth is still possible until August, but that it could be anything from 0.5 to 2 times the current growth per day. We're going to approximate this with a lognormal distribution, as we think that the most likely rate is around 1, but don't want to discount values at the tails.
###Code
def estimated_slope():
"# estimated slope"
return ergo.lognormal_from_interval(0.5, 2) * slope_of_test_increases
show_marginal(estimated_slope)
###Output
_____no_output_____
###Markdown
Now we can adapt our previous model, just using this estimated slope instead.
###Code
def test_results_for_date_with_slope(date: datetime):
# The slightly more correct way of doing this would be to replace estimated_slope() * day with repeated calls
# to estimated_slope. But as we're predicing kind of far into the future, this makes the simulation really slow.
number_of_days = (date - current_date).days
return current_test_number + sum(test_results_increase_per_day() + estimated_slope() * day for day in range(number_of_days))
def test_results_for_deadline_with_slope():
"# test results for August 1st with linear increase in tests"
return test_results_for_date_with_slope(datetime(2020, 8, 1))
show_marginal(test_results_for_deadline_with_slope)
###Output
_____no_output_____
###Markdown
 Incorporating the Metaculus prediction. Now that we have our own prediction, let's look at the Metaculus one. This will give us a sense of how our model stacks up against their community's model, and it is a good sanity check to see if we missed something obvious. If you're running this notebook, remember to replace the credentials below with your own.
###Code
metaculus = ergo.Metaculus(username="oughtpublic", password="123456", api_domain="pandemic")
testing_question = metaculus.get_question(4400, name="# tests administered")
testing_question.show_community_prediction()
###Output
_____no_output_____
###Markdown
 This is similar to our prediction, just less certain (longer tails) and centered slightly higher. Since we think they have valuable information we might have missed, we're going to create a simple ensemble prediction: pick randomly between a prediction the Metaculus community would make and one we would make, to create a combined model. Seaborn at the time of writing has difficulty plotting a distribution plot for this, because of some large outliers in the sample, so let's use a boxplot instead. This also helps show how to use Ergo outside of our `show_marginal` function.
###Code
def ensemble_prediction():
"# ensemble prediction"
if ergo.flip(0.5):
return testing_question.sample_community()
else:
return test_results_for_deadline_with_slope()
# use ergo to create a distribution from 1000 samples, using our ensemble prediction
# this is output as a pandas dataframe, making it easy to plot and manipulate
samples = ergo.run(ensemble_prediction, num_samples=1000)["output"]
# show them as a box plot
seaborn.boxplot(samples)
print("Median:", summarize_samples(samples))
###Output
_____no_output_____
###Markdown
 This shows a distribution similar to the one we had, just less certain, and with some probability for the long tails. Submitting to Metaculus. If we want to, we can now submit this model to Metaculus.
###Code
# below commented out to avoid accidentally submitting an answer when you don't mean to
# samples = ergo.run(ensemble_prediction, num_samples=1000)["output"]
# testing_question.submit_from_samples(samples)
###Output
_____no_output_____
###Markdown
 Summary. To create this model, we did the following: - we fetched the latest data from https://covidtracking.com/ - estimated the growth of the number of tests as a combination of the past linear growth and a log-normal distribution - used that to extrapolate the number of tests we'd have in August - then combined that prediction with the Metaculus prediction, to create a final forecast for the number of tests on August 1st. Since we did a lot of iteration on our model, I'm including a cleaned-up version of the entire thing below, so you can see everything at once.
###Code
class Covid19TestsModel(object):
def __init__(self, testing_data, testing_question):
self.testing_data = testing_data
self.testing_question = testing_question
self.last_month = current_date - timedelta(days=30)
self.current_date = testing_data["date"].max()
# if there's multiple entries for a day for whatever reason, take the first
self.current_test_number = testing_data.loc[testing_data["date"] == current_date]["totalTestResults"][0]
# data subsets
self.test_data_over_last_month = self.testing_data[testing_data['date'].between(last_month, current_date)]
# calculate slope
test_data_worth_looking_at = testing_data[testing_data['totalTestResults'] >= 100000]
self.slope_of_test_increases = stats.linregress(
test_data_worth_looking_at["days"],
test_data_worth_looking_at["totalTestResultsIncrease"]).slope
def test_results_increase_per_day(self):
"""
Estimated test increase over the past day looking at increases over the last month
"""
return ergo.random_choice(list(self.test_data_over_last_month["totalTestResultsIncrease"]))
def estimated_slope(self):
"""
Estimated slope of increase of tests per day looking at linear regression of test cases,
and a log-normal prediction of the possible changes
"""
return ergo.lognormal_from_interval(0.5, 2) * self.slope_of_test_increases
def test_results_for_date_with_slope(self, date: datetime):
"""
Estimated test results for date, estimating based on the number of estimated test results per day
including the estimated rate of increase
"""
number_of_days = (date - self.current_date).days
return self.current_test_number + \
sum(self.test_results_increase_per_day() + self.estimated_slope() * day for day in range(number_of_days))
def test_results_for_deadline_with_slope(self):
return self.test_results_for_date_with_slope(datetime(2020, 8, 1))
def ensemble_prediction(self):
"# ensemble prediction"
if ergo.flip(0.5):
return self.testing_question.sample_community()
else:
return self.test_results_for_deadline_with_slope()
show_marginal(Covid19TestsModel(testing_data, testing_question).ensemble_prediction)
###Output
_____no_output_____ |
codes/labs_lecture04/lab01_linear_module/linear_module_exercise.ipynb | ###Markdown
 Lab 04.01: Linear module -- exercise. Inspecting a linear module
###Code
import torch
import torch.nn as nn
###Output
_____no_output_____
###Markdown
 Make a linear module WITHOUT bias that takes an input of size 2 and returns an output of size 3.
###Code
mod = nn.Linear(2, 3, bias=False)
print(mod)
###Output
_____no_output_____
###Markdown
 Print the internal parameters of the module. Try to print the bias and double check that it returns "None".
###Code
print(mod.weight)
print(mod.bias)
###Output
_____no_output_____
###Markdown
Make a vector $x=\begin{bmatrix}1\\1 \end{bmatrix}$ and feed it to your linear module. What is the output?
###Code
x = torch.Tensor([1, 1])
print(x)
y = mod(x)
print(y)
###Output
_____no_output_____
###Markdown
Change the internal parameters of your module so that the weights become $W=\begin{bmatrix}1&2 \\ 3&4 \\ 5&6\end{bmatrix}$. You need to write 6 lines, one per entry to be changed.
###Code
mod.weight.data[0, 0] = 1.0
mod.weight.data[0, 1] = 2.0
mod.weight.data[1, 0] = 3.0
mod.weight.data[1, 1] = 4.0
mod.weight.data[2, 0] = 5.0
mod.weight.data[2, 1] = 6.0
print(mod.weight)
###Output
_____no_output_____
###Markdown
Feed the vector x to your module with updated parameters. What is the output?
###Code
y = mod(x)
print(y)
###Output
_____no_output_____ |
Panda.ipynb | ###Markdown
Loading in Data
###Code
import pandas as pd
!wget https://raw.githubusercontent.com/lazyprogrammer/machine_learning_examples/master/pytorch/aapl_msi_sbux.csv
df = pd.read_csv('aapl_msi_sbux.csv',error_bad_lines=False);
df = pd.read_csv('https://github.com/sammanthp007/Stock-Price-Prediction-Using-KNN-Algorithm/blob/master/sbux.csv',error_bad_lines=False);
type(df)
!head aapl_msi_sbux.csv.1
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1259 entries, 0 to 1258
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 AAPL 1259 non-null float64
1 MSI 1259 non-null float64
2 SBUX 1259 non-null float64
dtypes: float64(3)
memory usage: 29.6 KB
###Markdown
Select Columns and Rows
###Code
df.columns
df.head()
df['AAPL']
type(df['AAPL'])
type(df[['AAPL','MSI']])
df.iloc[0]
df.loc[0]
df2 = pd.read_csv('aapl_msi_sbux.csv',index_col='AAPL')
df2.head()
df2.loc[67.8542]
df[df['AAPL'] > 67]
df[df['AAPL'] != 67.8542]
import numpy as np
A = np.arange(10)
A
A[A % 2 == 0]
df.values
type(df.values)
A = df.values
df.to_csv('output.csv')
!head output.csv
df.to_csv('output2.csv',index=False)
!head output2.csv
###Output
AAPL,MSI,SBUX
67.8542,60.3,28.185
68.5614,60.9,28.07
66.8428,60.83,28.13
66.7156,60.81,27.915
66.6556,61.12,27.775
65.7371,61.43,27.17
65.7128,62.03,27.225
64.1214,61.26,26.655
63.7228,60.88,26.675
###Markdown
The apply() Function
###Code
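# apply() with axis=1 calls the function once per row, passing the row in as a Series;
# this particular function just adds 1 to each row's AAPL value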
def date_to_year(row):
return int(row['AAPL'] + 1)
df.apply(date_to_year,axis = 1)
###Output
_____no_output_____
###Markdown
Plotting with Panda
###Code
df['AAPL'].hist();
df['AAPL'].plot();
df[['AAPL','MSI','SBUX']].plot.box();
from pandas.plotting import scatter_matrix
scatter_matrix(df[['AAPL','MSI','SBUX']],alpha=0.2);
###Output
_____no_output_____ |
docs/_sources/dl/dl.ipynb | ###Markdown
 General. Below are some key concepts in ML, DL, and more.
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
 References. Gradient Descent: Coursera Stanford NLP course. Perceptrons. Multi-layer Network. Gradient Descent / Logistic regression. Sigmoid. You will learn to use logistic regression for text classification. * The sigmoid function is defined as: $$ h(z) = \frac{1}{1+e^{-z}} \tag{1}$$ It maps the input 'z' to a value that ranges between 0 and 1, and so it can be treated as a probability. Figure 1
###Code
def sigmoid(z):
'''
Input:
z: is the input (can be a scalar or an array)
Output:
h: the sigmoid of z
'''
### START CODE HERE ###
# calculate the sigmoid of z
h = None
h = 1 / (1 + np.exp(-z) )
### END CODE HERE ###
return h
###Output
_____no_output_____
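###Markdown
 A quick usage check of the sigmoid defined above (this is an added cell; the sample inputs are arbitrary):
###Code
# sigmoid maps large negative inputs toward 0, 0 to 0.5, and large positive inputs toward 1
print(sigmoid(np.array([-4.0, -1.0, 0.0, 1.0, 4.0])))
###Output
 _____no_output_____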
###Markdown
 Update the weights. To update your weight vector $\theta$, you will apply gradient descent to iteratively improve your model's predictions. The gradient of the cost function $J$ with respect to one of the weights $\theta_j$ is:$$\nabla_{\theta_j}J(\theta) = \frac{1}{m} \sum_{i=1}^m(h^{(i)}-y^{(i)})x^{(i)}_j \tag{5}$$* 'i' is the index across all 'm' training examples.* 'j' is the index of the weight $\theta_j$, so $x^{(i)}_j$ is the feature associated with weight $\theta_j$* To update the weight $\theta_j$, we adjust it by subtracting a fraction of the gradient determined by $\alpha$:$$\theta_j = \theta_j - \alpha \times \nabla_{\theta_j}J(\theta) $$* The learning rate $\alpha$ is a value that we choose to control how big a single update will be. **Note**- h = hypothesis = calculated target. - y = ground truth. Instructions: Implement the gradient descent function. * The number of iterations 'num_iters' is the number of times that you'll use the entire training set.* For each iteration, you'll calculate the cost function using all training examples (there are 'm' training examples), and for all features.* Instead of updating a single weight $\theta_i$ at a time, we can update all the weights in the column vector: $$\mathbf{\theta} = \begin{pmatrix}\theta_0\\\theta_1\\ \theta_2 \\ \vdots\\ \theta_n\end{pmatrix}$$* $\mathbf{\theta}$ has dimensions (n+1, 1), where 'n' is the number of features, and there is one more element for the bias term $\theta_0$ (note that the corresponding feature value $\mathbf{x_0}$ is 1).* The 'logits', 'z', are calculated by multiplying the feature matrix 'x' with the weight vector 'theta'. $z = \mathbf{x}\mathbf{\theta}$ * $\mathbf{x}$ has dimensions (m, n+1) * $\mathbf{\theta}$: has dimensions (n+1, 1) * $\mathbf{z}$: has dimensions (m, 1)* The prediction 'h' is calculated by applying the sigmoid to each element in 'z': $h(z) = sigmoid(z)$, and has dimensions (m,1).* The cost function $J$ is calculated by taking the dot product of the vectors 'y' and 'log(h)'. Since both 'y' and 'h' are column vectors (m,1), transpose the vector to the left, so that matrix multiplication of a row vector with a column vector performs the dot product.$$J = \frac{-1}{m} \times \left(\mathbf{y}^T \cdot log(\mathbf{h}) + \mathbf{(1-y)}^T \cdot log(\mathbf{1-h}) \right)$$* The update of theta is also vectorized. Because the dimensions of $\mathbf{x}$ are (m, n+1), and both $\mathbf{h}$ and $\mathbf{y}$ are (m, 1), we need to transpose the $\mathbf{x}$ and place it on the left in order to perform matrix multiplication, which then yields the (n+1, 1) answer we need:$$\mathbf{\theta} = \mathbf{\theta} - \frac{\alpha}{m} \times \left( \mathbf{x}^T \cdot \left( \mathbf{h-y} \right) \right)$$
###Code
def gradientDescent(x, y, theta, alpha, num_iters):
'''
Input:
x: matrix of features which is (m,n+1)
y: corresponding labels of the input matrix x, dimensions (m,1)
theta: weight vector of dimension (n+1,1)
alpha: learning rate
num_iters: number of iterations you want to train your model for
Output:
J: the final cost
theta: your final weight vector
Hint: you might want to print the cost to make sure that it is going down.
'''
### START CODE HERE ###
def _dot(x, y):
return np.dot(x, y)
def _log(x):
return np.log(x)
# get 'm', the number of rows in matrix x
m = x.shape[0]
for i in range(0, num_iters):
# get z, the dot product of x and theta
z = _dot(x, theta)
# get the sigmoid of z
h = sigmoid(z)
# calculate the cost function
J = -1/m * ( _dot( y.transpose(), _log(h) ) + _dot( (1-y).transpose(), _log(1-h) ) )
# update the weights theta
theta -= alpha/m * _dot(x.transpose(), (h-y) )
### END CODE HERE ###
J = float(J)
return J, theta
###Output
_____no_output_____
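###Markdown
 A quick sanity check of the gradient descent function above on synthetic data (an added cell; the feature matrix, labels, and hyperparameters below are made up purely for illustration):
###Code
# synthetic sanity check for gradientDescent (illustrative values only)
np.random.seed(1)
m = 100
x = np.hstack([np.ones((m, 1)), np.random.randn(m, 2)])    # bias column + 2 features
true_theta = np.array([[0.5], [2.0], [-1.0]])
y = (sigmoid(np.dot(x, true_theta)) > 0.5).astype(float)   # labels generated from a known theta
J, theta = gradientDescent(x, y, np.zeros((3, 1)), 1e-2, 700)
print("final cost:", J)
print(theta)
###Output
 _____no_output_____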
###Markdown
 Backpropagation. Cross Entropy. Softmax. The softmax function has many roots. In physics, it is known as the Boltzmann or Gibbs distribution; in statistics, it's multinomial logistic regression; and in the natural language processing (NLP) community it's known as the maximum entropy (MaxEnt) classifier. Weight Decay. Momentum. Convolution.
###Code
#
###Output
_____no_output_____ |
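###Markdown
 As a small illustration of the softmax idea mentioned above, here is a minimal NumPy sketch (an added example, not part of the original notes) of a numerically stable softmax and the cross-entropy loss computed from it:
###Code
def softmax(z):
    # subtract the row-wise max before exponentiating for numerical stability
    z = z - np.max(z, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=-1, keepdims=True)

def cross_entropy(probs, y):
    # mean negative log-likelihood of the true integer labels y
    m = probs.shape[0]
    return -np.mean(np.log(probs[np.arange(m), y] + 1e-12))

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])
probs = softmax(logits)
print(probs)
print(cross_entropy(probs, np.array([0, 1])))
###Output
 _____no_output_____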
Measurement parameters from Confusion Matrix - Reverse Engineering.ipynb | ###Markdown
 Suggestion: If you observe that this Jupyter notebook is not loading properly, please follow these steps: 1. Go to: https://nbviewer.jupyter.org/ 2. There you will see a box that says "Enter the location of a Jupyter Notebook to have it rendered here:" 3. Into that box, paste the URL of this notebook. NB: you can copy the URL from the browser address bar, and then paste it there (step 3). Purpose: Sometimes it happens that we considered only the precision score from the computed model. We saved the confusion matrix for the multiclass problem and calculated the precision score. However, after some time, you might need to calculate the Recall and F1 score from that confusion matrix. It is possible to calculate them through the equations; you can see the description of precision, recall, and F1 score at this link: https://en.wikipedia.org/wiki/Precision_and_recall. However, for multiclass classification it is often quite tedious to back-calculate everything by hand. So in this code, I have attempted to find a way to calculate all the measurement parameters from the confusion matrix. You just need to provide the confusion matrix to the code, and it will calculate the rest for you. Let's start.
###Code
import numpy as np
# input the confusion matrix (rows = actual classes, columns = predicted classes)
cc = np.array([[48,1,0],[2,86,24],[4,36,181]])
# input the label names
labels = ['Normal', 'Micro', 'Macro']
# you can put any size.
# Here in this example we have put 3x3. You can use it for any dimension NxN, where N = 2 up to your desired number
# score calculate
# recall score
recall = np.diag(cc) / np.sum(cc, axis = 1) # axis=1 sums each row, i.e. the totals of the actual classes
print(recall)
# mean of recall
mr = np.mean(recall)
print(mr)
# precision score
precision = np.diag(cc) / np.sum(cc, axis = 0) # axis=0 sums each column, i.e. the totals of the predicted classes
print(precision)
# mean of precision
mp = np.mean(precision)
print(mp)
# F1 score
F1 = 2 * (precision * recall) / (precision + recall)
print(F1)
# mean of F1 score
mf = np.mean(F1)
print(mf)
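# Added note: besides the simple (macro) means above, support-weighted averages can be
# computed from the same confusion matrix, weighting each class by its row total.
support = np.sum(cc, axis = 1)
weighted_precision = np.sum(precision * support) / np.sum(support)
weighted_recall = np.sum(recall * support) / np.sum(support)
weighted_f1 = np.sum(F1 * support) / np.sum(support)
print(weighted_precision, weighted_recall, weighted_f1)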
# plot the confusion matrix
# Plot function
def plot_confusion_matrix(cm,
target_names,
title='Confusion matrix',
cmap=None,
normalize=True):
import matplotlib.pyplot as plt
import numpy as np
import itertools
accuracy = np.trace(cm) / float(np.sum(cm))
misclass = 1 - accuracy
if cmap is None:
cmap = plt.get_cmap('Blues')
plt.figure(3)
plt.figure(figsize=(8, 6))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
if target_names is not None:
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 1.5 if normalize else cm.max() / 2
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
if normalize:
plt.text(j, i, "{:0.4f}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
else:
plt.text(j, i, "{:,}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))
plt.savefig('confusion_matrix.pdf', bbox_inches='tight')
pass
plot_confusion_matrix(cm = cc,
normalize = False,
target_names = labels,
title = "Confusion Matrix for Dataset")
###Output
_____no_output_____ |
spark.stream/sakarya_raw_tweets.ipynb | ###Markdown
 TWITTER SENTIMENT ANALYSIS PROJECT 1. APACHE SPARK SETUP
###Code
!pip install findspark
!pip install pymongo
import pymongo
myclient = pymongo.MongoClient(
"mongodb://root:[email protected]:27017/")
mydb = myclient["sakarya"]
import findspark
findspark.init("/usr/local/spark-3.2.0-bin-hadoop3.2")
from pyspark.conf import SparkConf
from pyspark import SparkContext
from pyspark.sql import SparkSession
from pyspark.sql import SQLContext
from pyspark.sql.functions import *
import html
import numpy as np
import json
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
pd.options.display.max_columns = None
pd.options.display.max_rows = 250
pd.options.display.max_colwidth = 150
sns.set(color_codes=True)
spark = SparkSession.builder\
.master("local[*]") \
.appName("ml") \
.config("spark.memory.fraction","0.8") \
.config("spark.executor.memory","8g") \
.config("spark.driver.memory","8g") \
.config("spark.sql.hive.filesourcePartitionFileCacheSize", "621440000") \
.config("spark.sql.sources.bucketing.maxBuckets", "100000") \
.config("spark.sql.shuffle.partitions", "2000") \
.config("spark.driver.maxResultSize","2g") \
.config("spark.shuffle.file.buffer","64k") \
.config("spark.scheduler.listenerbus.eventqueue.capacity", "1000") \
.config("spark.broadcast.blockSize", "8m") \
.config("spark.sql.autoBroadcastJoinThreshold", "-1") \
.getOrCreate()
###Output
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/usr/local/spark-3.2.0-bin-hadoop3.2/jars/spark-unsafe_2.12-3.2.0.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
21/12/11 19:24:29 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
###Markdown
 2. PREPARING THE TRAINING DATA
###Code
TRAIN_TWEET_PATH = "/home/jovyan/work/twitter_clients.csv"
train_df = spark.read \
.option("header", True) \
.option("inferSchema", True) \
.option("sep", ",") \
.csv(TRAIN_TWEET_PATH).dropna()
train_df=train_df.filter(~((train_df.emotionSegment != "1")
& (train_df.emotionSegment != "0")))
train_df.count()
from pyspark.sql.functions import col
from pyspark.sql.types import StringType, IntegerType
train_df = train_df.withColumn("createdAt",col("createdAt").cast(IntegerType())) \
.withColumn("emotionSegment",col("emotionSegment").cast(IntegerType())) \
.withColumn("likeCount",col("likeCount").cast(IntegerType())) \
.withColumn("name",col("name").cast(StringType())) \
.withColumn("quoteCount",col("quoteCount").cast(IntegerType())) \
.withColumn("replyCount",col("replyCount").cast(IntegerType())) \
.withColumn("retweetCount",col("retweetCount").cast(IntegerType())) \
.withColumn("text",col("text").cast(StringType()))
train_df.printSchema()
train_df = train_df.withColumn("original_text", f.col("text"))
user_regex = r"(@\w{1,15})"
hastag_regex = r"(#\w{1,})"
url_regex = r"((https?|ftp|file):\/{2,3})+([-\w+&@#/%=~|$?!:,.]*)|(www.)+([-\w+&@#/%=~|$?!:,.]*)"
email_regex = r"[\w.-]+@[\w.-]+\.[a-zA-Z]{1,}"
train_df = (
train_df
.withColumn("text", f.regexp_replace(f.col("text"), url_regex, "")) \
.withColumn("text", f.regexp_replace(f.col("text"), hastag_regex, "")) \
.withColumn("text", f.regexp_replace(f.col("text"), user_regex, "")) \
.withColumn("text", f.regexp_replace(f.col("text"), email_regex, ""))
)
@f.udf
def html_unescape(s: str):
if isinstance(s, str):
return html.unescape(s)
return s
train_df = train_df.withColumn("text", html_unescape(f.col("text")))
train_df.filter(f.col("text")=='').count()
train_df = train_df.filter(~(f.col("text")==''))
train_df_clean = (train_df \
.withColumn("text", f.regexp_replace(f.col("text"), "[^a-zA-Z']", " ")) \
.withColumn("text", f.regexp_replace(f.col("text"), " +", " ")) \
.withColumn("text", f.trim(f.col("text"))))
train_df_show = train_df_clean.sample(fraction=0.0001, seed=311564)
train_df_show.toPandas().head(20)
###Output
###Markdown
 3. PREPARING THE MACHINE LEARNING MODEL
###Code
%%time
from pyspark.ml.feature import StopWordsRemover, Tokenizer, HashingTF, IDF
from pyspark.ml.classification import LogisticRegression
from pyspark.ml import Pipeline
tokenizer = Tokenizer(inputCol="text", outputCol="words1")
stopwords_remover = StopWordsRemover(inputCol="words1",
outputCol="words2",
stopWords=StopWordsRemover.loadDefaultStopWords("turkish"))
hashing_tf = HashingTF(inputCol="words2",
outputCol="term_frequency")
idf = IDF(inputCol="term_frequency",
outputCol="features",
minDocFreq=5)
lr = LogisticRegression(maxIter = 10, labelCol="emotionSegment", featuresCol="features")
###Output
_____no_output_____
###Markdown
 4. TRAINING THE MACHINE LEARNING MODEL
###Code
(trainingData, validationData, testData) = train_df_clean.randomSplit([0.6, 0.2, 0.2], seed=896)
semantic_analysis = Pipeline(
stages=[tokenizer, stopwords_remover, hashing_tf, idf, lr])
semantic_analysis_model = semantic_analysis.fit(train_df_clean)
%%time
trained_df = semantic_analysis_model.transform(trainingData)
test_df = semantic_analysis_model.transform(testData)
val_df = semantic_analysis_model.transform(validationData)
###Output
CPU times: user 91.8 ms, sys: 28.2 ms, total: 120 ms
Wall time: 650 ms
###Markdown
 5. EVALUATING THE MACHINE LEARNING MODEL
###Code
%%time
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.evaluation import BinaryClassificationEvaluator
evaluator = MulticlassClassificationEvaluator(
labelCol="emotionSegment",
metricName="accuracy",
predictionCol="prediction")
accuracy_val = evaluator.evaluate(val_df)
accuracy_test = evaluator.evaluate(test_df)
print("Accuracy validationData: ", accuracy_val)
print("Accuracy testData: ", accuracy_test)
evaluator = MulticlassClassificationEvaluator(
labelCol="emotionSegment",
metricName="f1",
predictionCol="prediction")
f1_val = evaluator.evaluate(val_df)
f1_test = evaluator.evaluate(test_df)
print("f1 score validationData: ", f1_val)
print("f1 score testData: ", f1_test)
my_eval_lr = BinaryClassificationEvaluator(rawPredictionCol='prediction', labelCol='emotionSegment', metricName='areaUnderROC')
roc_val = my_eval_lr.evaluate(val_df)
roc_test = my_eval_lr.evaluate(test_df)
print("roc_val score validationData: ", roc_val)
print("roc_test score testData: ", roc_test)
trainingData, validationData, testData
val_df.columns
val_df.groupBy('emotionSegment', 'prediction').count().show()
# confusion counts on the validation predictions (the true-label column here is 'emotionSegment')
TN = val_df.filter('prediction = 0 AND emotionSegment = prediction').count()
TP = val_df.filter('prediction = 1 AND emotionSegment = prediction').count()
FN = val_df.filter('prediction = 0 AND emotionSegment != prediction').count()
FP = val_df.filter('prediction = 1 AND emotionSegment != prediction').count()
accuracy = (TN + TP) / (TN + TP + FN + FP)
confusion_tablo = val_df.groupBy('emotionSegment', 'prediction').count().toPandas()
confusion_tablo
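# An added illustration (not in the original notebook): Spark's MulticlassMetrics can
# compute the confusion matrix and overall accuracy directly from (prediction, label) pairs.
from pyspark.mllib.evaluation import MulticlassMetrics
pred_and_labels = val_df.select('prediction', 'emotionSegment') \
    .rdd.map(lambda r: (float(r['prediction']), float(r['emotionSegment'])))
metrics = MulticlassMetrics(pred_and_labels)
print(metrics.confusionMatrix().toArray())
print(metrics.accuracy)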
true_positive = confusion_tablo["count"][0]
true_negative = confusion_tablo["count"][3]
false_negative = confusion_tablo["count"][1]
false_positive = confusion_tablo["count"][2]
arr = np.empty((2,2))
arr[0][0] = true_negative
arr[0][1] = false_positive
arr[1][0] = false_negative
arr[1][1] = true_positive
int(true_positive/(true_positive+false_positive)*100)
int(true_negative/(false_negative+true_negative)*100)
int((false_negative/(false_negative+true_negative))*100)
int(false_positive/(true_positive+false_positive)*100)
ErrorRate = (false_negative + false_positive) / (true_positive + true_negative + false_negative + false_positive)
SensivityRate = true_positive/(false_negative + true_positive)
SpecifityRate = true_negative/(true_negative + false_positive)
PrecisionRate = true_positive/(false_positive+true_positive)
PrevalenceRate = (false_negative+true_positive) / (false_negative+true_positive+true_negative+false_positive)
RecalRate = true_positive/(true_positive+false_negative)
print(ErrorRate)
print(SensivityRate)
print(SpecifityRate)
print(PrecisionRate)
print(PrevalenceRate)
print(RecalRate)
fig, ax = plt.subplots(figsize=(10,10))
sns.heatmap(arr, annot=True, linewidths=.5, ax=ax, fmt="g")
api_info_model = {"TypeKey":"TwitterApi", "TotalRequestLimit":0,
"RemainingRequest":0, "MlResultAccuracyRate":accuracy_val*100,
"MlResultF1Rate":f1_val*100,"MlResultROCRate":roc_val*100,
"true_positive":166969,"true_negative":155888,
"false_negative":50379,"false_positive":52438,
"ErrorRate":24,"SensivityRate":76,
"SpecifityRate":74,"PrecisionRate":76,
"PrevalenceRate":51,"RecalRate":76,
"MachineLearningModelName":"LogisticRegression"}
mycol = mydb["ApiInfoModel"]
mycol.insert_one(api_info_model)
###Output
_____no_output_____
###Markdown
 6. SAVING THE TRAINED MODEL
###Code
semantic_analysis_model.save("lr_sakarya_twitter_sentiment_analysis_model.pkl")
###Output
21/12/11 19:34:19 WARN TaskSetManager: Stage 28 contains a task of very large size (4184 KiB). The maximum recommended task size is 1000 KiB.
21/12/11 19:34:21 WARN TaskSetManager: Stage 32 contains a task of very large size (1496 KiB). The maximum recommended task size is 1000 KiB.
###Markdown
 SENTIMENT ANALYSIS OF TWEETS ABOUT SAKARYA UNIVERSITY 1. LOADING THE TRAINED MODEL
###Code
from pyspark.ml import PipelineModel
MODEL_PATH = "/home/jovyan/work/model/lr_sakarya_twitter_sentiment_analysis_model.pkl"
sentiment_model = PipelineModel.load(MODEL_PATH)
###Output
_____no_output_____
###Markdown
 2. CONVERTING THE TWEETS ABOUT SAKARYA UNIVERSITY INTO A DATAFRAME
###Code
SAKARYA_TWEET_PATH = "/home/jovyan/work/sakarya_student_tweets2.csv"
sakarya_tweet_df = spark.read \
.option("header", True) \
.option("inferSchema", True) \
.option("sep", ",") \
.csv(SAKARYA_TWEET_PATH).dropna()
sakarya_tweet_df.limit(20).toPandas().head()
sakarya_tweet_df.count()
###Output
###Markdown
 3. PREPARING THE TWEETS ABOUT SAKARYA UNIVERSITY
###Code
sakarya_tweet_df = sakarya_tweet_df.withColumn("createdAt",col("createdAt").cast(IntegerType())) \
.withColumn("likeCount",col("likeCount").cast(IntegerType())) \
.withColumn("name",col("name").cast(StringType())) \
.withColumn("quoteCount",col("quoteCount").cast(IntegerType())) \
.withColumn("replyCount",col("replyCount").cast(IntegerType())) \
.withColumn("retweetCount",col("retweetCount").cast(IntegerType())) \
.withColumn("text",col("text").cast(StringType()))
sakarya_tweet_df.printSchema()
sakarya_tweet_df = sakarya_tweet_df.withColumn("original_text", f.col("text"))
user_regex = r"(@\w{1,15})"
hastag_regex = r"(#\w{1,})"
url_regex = r"((https?|ftp|file):\/{2,3})+([-\w+&@#/%=~|$?!:,.]*)|(www.)+([-\w+&@#/%=~|$?!:,.]*)"
email_regex = r"[\w.-]+@[\w.-]+\.[a-zA-Z]{1,}"
sakarya_tweet_df = (
sakarya_tweet_df
.withColumn("text", f.regexp_replace(f.col("text"), url_regex, "")) \
.withColumn("text", f.regexp_replace(f.col("text"), hastag_regex, "")) \
.withColumn("text", f.regexp_replace(f.col("text"), user_regex, "")) \
.withColumn("text", f.regexp_replace(f.col("text"), email_regex, ""))
)
import html
@f.udf
def html_unescape(s: str):
if isinstance(s, str):
return html.unescape(s)
return s
sakarya_tweet_df = sakarya_tweet_df.withColumn("text", html_unescape(f.col("text")))
sakarya_tweet_df.filter(f.col("text")=='').count()
sakarya_tweet_df = sakarya_tweet_df.filter(~(f.col("text")==''))
sakarya_tweet_df_clean = (sakarya_tweet_df \
.withColumn("text", f.regexp_replace(f.col("text"), "[^a-zA-Z']", " ")) \
.withColumn("text", f.regexp_replace(f.col("text"), " +", " ")) \
.withColumn("text", f.trim(f.col("text"))))
sakarya_tweet_df_clean.show(4)
###Output
+---------+---------+------+----------+----------+------------+--------------------+--------------------+
|createdAt|likeCount| name|quoteCount|replyCount|retweetCount| text| original_text|
+---------+---------+------+----------+----------+------------+--------------------+--------------------+
| 20211207| 1|sau***| 0| 0| 0| A Piece of Paradise|A Piece of Paradise.|
| 20211207| 0|mka***| 0| 0| 0|bol bol g k'y z n...|bol bol gök'yüzün...|
| 20211207| 34|Muf***| 0| 1| 7|Haftaya konu umuz...|Haftaya konuğumuz...|
| 20211207| 0|ADA***| 0| 0| 0|Afrikal rencilerd...|Afrikalı Öğrencil...|
+---------+---------+------+----------+----------+------------+--------------------+--------------------+
only showing top 4 rows
###Markdown
 4. RUNNING SENTIMENT ANALYSIS ON THE TWEETS ABOUT SAKARYA UNIVERSITY
###Code
raw_sentiment = sentiment_model.transform(sakarya_tweet_df_clean)
sentiment = raw_sentiment.select(
"createdAt", "likeCount", "name", "quoteCount","replyCount",
"retweetCount" ,"text", "original_text", f.col("prediction").alias("user_sentiment")
)
sentiment.limit(100).toPandas().head(100)
###Output
21/12/11 19:49:28 WARN DAGScheduler: Broadcasting large task binary with size 5.5 MiB
Traceback (most recent call last):
File "/usr/local/spark-3.2.0-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/daemon.py", line 186, in manager
File "/usr/local/spark-3.2.0-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/daemon.py", line 74, in worker
File "/usr/local/spark-3.2.0-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/worker.py", line 663, in main
if read_int(infile) == SpecialLengths.END_OF_STREAM:
File "/usr/local/spark-3.2.0-bin-hadoop3.2/python/lib/pyspark.zip/pyspark/serializers.py", line 564, in read_int
raise EOFError
EOFError
###Markdown
 5. VISUALIZING THE SAKARYA SENTIMENT ANALYSIS RESULTS
###Code
sentiment.groupBy("user_sentiment").count().toPandas()
graph_data = sentiment.groupBy("user_sentiment").count().toPandas()
sns.barplot(x = graph_data["user_sentiment"].values, y = graph_data["count"].values)
###Output
_____no_output_____
###Markdown
 6. SAVING THE DATA TO THE MongoDB DATABASE
###Code
results = sentiment.toJSON().map(lambda j: json.loads(j)).collect()
print(results[0])
mycol = mydb["twitter_sentiment"]
mycol.insert_many(results)
###Output
_____no_output_____
###Markdown
 7. STOPPING THE SPARK SESSION
###Code
spark.stop()
###Output
_____no_output_____ |
docs/src/notebooks/non-interactive/projected_spectral_as_subplots.ipynb | ###Markdown
Projected bandstructures/DOS as subplots with custom colours This example notebook provides a recipe/hack to get projectors as subplots. This functionality is not wrapped into the `dispersion` script (as of Aug 2020) as the whole module needs refactoring first. These examples assume some spectral files with the seedname "Si" in the current directory.
###Code
from matador.plotting.spectral_plotting import dispersion_plot, dos_plot
import matplotlib.pyplot as plt
%matplotlib inline
fig, axes = plt.subplots(1, 2, figsize=(10, 5), sharey=True)
colours_override = ["firebrick", "darkorchid"]
for ind, proj in enumerate([[("Si", "s", None)], [("Si", "p", None)]]):
dispersion_kwargs = {
"projectors_to_plot": proj,
"plot_pdis": True,
"pdis_interpolation_factor": 10,
"colours": [colours_override[ind]],
"colour_cycle": [colours_override[ind]]
}
dispersion_plot(
"Si",
axes[ind],
dispersion_kwargs
)
axes[1].set_ylabel(None)
plt.tight_layout()
###Output
Detected path labels from cell file
Detected path labels from cell file
###Markdown
 We can also do the same with density of states plots. Note that here the PDOS is from a different structure and calculation than the total DOS, and is shown only for illustrative purposes.
###Code
fig, axes = plt.subplots(2, 1, figsize=(12, 8), sharex=True)
colours_override = ["dodgerblue", "gold", "magenta", "olive"]
for ind, proj in enumerate([[("K", "s", None), ("K", "p", None)], [("P", "s", None), ("P", "p", None)]]):
dos_kwargs = {
"projectors_to_plot": proj,
"colours": colours_override[2*ind:],
"pdos_hide_sum": True,
"plot_pdos": True
}
dos_plot("Si", axes[ind], dos_kwargs)
axes[0].set_xlabel(None)
plt.tight_layout()
###Output
_____no_output_____ |
python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/05_02/Begin/Figures.ipynb | ###Markdown
 Figures and Subplots - figure: the container that holds all elements of the plot(s) - subplot: appears within a rectangular grid within a figure
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
my_first_figure = plt.figure("My First Figure")
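# add_subplot(rows, cols, index): a 2x3 grid where index 1 is the top-left cell and 6 is the bottom-right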
subplot_1 = my_first_figure.add_subplot(2, 3, 1)
subplot_6 = my_first_figure.add_subplot(2, 3, 6)
plt.plot(np.random.rand(50).cumsum(), 'k--')
plt.show()
subplot_2 = my_first_figure.add_subplot(2, 3, 2)
plt.plot(np.random.rand(50), 'go')
subplot_6
plt.show()
###Output
_____no_output_____ |
tensorflow/chapter_preliminaries/probability.ipynb | ###Markdown
 Probability:label:`sec_prob` Simply put, machine learning is about making predictions. Given a patient's clinical history, we might want to predict the *probability* that they suffer a heart attack in the next year. In anomaly detection for aircraft jet engines, we want to assess how likely it is that a set of engine readings corresponds to normal operation. In reinforcement learning, we want an agent to act intelligently in an environment, which means we need to think about the probability of getting a high reward under each available action. When we build recommender systems we also need to think about probability: for example, if we worked for a large online bookstore, we might want to estimate the probability that a particular user buys a particular book. For all of this we need the language of probability. Entire courses, majors, theses, careers, and even departments are devoted to probability, so our goal here is not to teach the whole subject, but to give you enough basic probability to build your first deep learning models and to explore further on your own. Now let's consider the first example more seriously: distinguishing cats from dogs based on photographs. This might sound simple, but it can be a formidable challenge for a machine. To start, the difficulty of the problem may depend on the resolution of the image. :width:`300px`:label:`fig_cat_dog` As shown in :numref:`fig_cat_dog`, while humans easily recognize cats and dogs at a resolution of $160 \times 160$ pixels, it becomes challenging at $40\times40$ pixels and next to impossible at $10 \times 10$ pixels. In other words, our ability to tell cats from dogs at a large distance (and thus low resolution) may approach uninformed guessing. Probability gives us a formal way of reasoning about our level of certainty. If we are completely sure that the image depicts a cat, we say that the *probability* that the label $y$ is "cat", denoted $P(y=$ "cat"$)$, equals $1$. If we had no evidence to suggest that $y=$ "cat" or that $y=$ "dog", then we might say that the two possibilities are equally likely, i.e., $P(y=$ "cat"$)=P(y=$ "dog"$)=0.5$. If we were reasonably confident, but not certain, that the image depicts a cat, we might assign a probability $0.5<P(y=$ "cat"$)<1$. Now consider the second example: given some weather monitoring data, we want to predict the probability that it will rain in Beijing tomorrow. If it is summer, rain might come with probability 0.5. In both cases the outcome is uncertain, but there is a key difference between the two cases. In the first case, the image really is either a cat or a dog. In the second case, the outcome is actually a random event. So probability is a flexible language for expressing our degree of certainty, and it can be applied effectively in a broad range of domains.
###Code
%matplotlib inline
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
from d2l import tensorflow as d2l
###Output
_____no_output_____
###Markdown
 In statistics, we call the process of drawing examples from probability distributions *sampling*. Broadly speaking, we can think of a *distribution* as an assignment of probabilities to events; we will give a more formal definition later. A distribution that assigns probabilities to a number of discrete choices is called a *multinomial distribution*. To draw a single sample, i.e., to cast a die, we simply pass in a vector of probabilities. The output is another vector of the same length: its value at index $i$ is the number of times that $i$ occurred in the sampling outcome.
###Code
fair_probs = tf.ones(6) / 6
tfp.distributions.Multinomial(1, fair_probs).sample()
###Output
_____no_output_____
###Markdown
 When estimating the fairness of a die, we want to generate many samples from the same distribution. Doing this with a Python for loop would be unbearably slow, so the deep learning framework provides a function that draws multiple samples at once, returning an array of independent samples in any shape we might desire.
###Code
tfp.distributions.Multinomial(10, fair_probs).sample()
###Output
_____no_output_____
###Markdown
 Now that we know how to sample rolls of a die, we can simulate 1000 rolls and then count how many times each number was rolled after those 1000 rolls. Specifically, we compute the relative frequency as an estimate of the true probability.
###Code
counts = tfp.distributions.Multinomial(1000, fair_probs).sample()
counts / 1000
###Output
_____no_output_____
###Markdown
 Because we generated the data from a fair die, we know that each outcome has true probability $\frac{1}{6}$, roughly $0.167$, so the estimates above look good. We can also visualize how these probabilities converge over time toward the true probability. Let's conduct 500 groups of experiments, where each group draws 10 samples.
###Code
counts = tfp.distributions.Multinomial(10, fair_probs).sample(500)
cum_counts = tf.cumsum(counts, axis=0)
estimates = cum_counts / tf.reduce_sum(cum_counts, axis=1, keepdims=True)
d2l.set_figsize((6, 4.5))
for i in range(6):
d2l.plt.plot(estimates[:, i].numpy(),
label=("P(die=" + str(i + 1) + ")"))
d2l.plt.axhline(y=0.167, color='black', linestyle='dashed')
d2l.plt.gca().set_xlabel('Groups of experiments')
d2l.plt.gca().set_ylabel('Estimated probability')
d2l.plt.legend();
###Output
_____no_output_____ |
1_mosaic_data_attention_experiments/3_stage_wise_training/alternate_minimization/effect on interpretability/type4_with_sparse_regulariser/10runs_entropy_001_every10_where_what.ipynb | ###Markdown
**Focus Net**
###Code
class Module1(nn.Module):
def __init__(self):
super(Module1, self).__init__()
self.fc1 = nn.Linear(2, 100)
self.fc2 = nn.Linear(100, 1)
def forward(self, z):
x = torch.zeros([batch,9],dtype=torch.float64)
y = torch.zeros([batch,2], dtype=torch.float64)
x,y = x.to("cuda"),y.to("cuda")
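        # score each of the 9 input patches with the shared helper MLP,
        # softmax the scores into attention weights (the alphas),
        # and return the attention-weighted average of the patches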
for i in range(9):
x[:,i] = self.helper(z[:,i])[:,0]
log_x = F.log_softmax(x,dim=1)
x = F.softmax(x,dim=1) # alphas
for i in range(9):
x1 = x[:,i]
y = y + torch.mul(x1[:,None],z[:,i])
return y , x , log_x
def helper(self,x):
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
###Output
_____no_output_____
###Markdown
**Classification Net**
###Code
class Module2(nn.Module):
def __init__(self):
super(Module2, self).__init__()
self.fc1 = nn.Linear(2, 100)
self.fc2 = nn.Linear(100, 3)
def forward(self,y):
y = F.relu(self.fc1(y))
y = self.fc2(y)
return y
criterion = nn.CrossEntropyLoss()
def my_cross_entropy(x, y,alpha,log_alpha,k):
# log_prob = -1.0 * F.log_softmax(x, 1)
# loss = log_prob.gather(1, y.unsqueeze(1))
# loss = loss.mean()
loss = criterion(x,y)
#alpha = torch.clamp(alpha,min=1e-10)
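    # b below is the entropy of the attention weights alpha; mixing it into the loss with
    # weight k pushes the attention toward low entropy, i.e. sparse / peaked alphas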
b = -1.0* alpha * log_alpha
b = torch.mean(torch.sum(b,dim=1))
closs = loss
entropy = b
loss = (1-k)*loss + ((k)*b)
return loss,closs,entropy
###Output
_____no_output_____
###Markdown
###Code
def calculate_attn_loss(dataloader,what,where,criter,k):
what.eval()
where.eval()
r_loss = 0
cc_loss = 0
cc_entropy = 0
alphas = []
lbls = []
pred = []
fidices = []
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels,fidx = data
lbls.append(labels)
fidices.append(fidx)
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg,alpha,log_alpha = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
alphas.append(alpha.cpu().numpy())
#ent = np.sum(entropy(alpha.cpu().detach().numpy(), base=2, axis=1))/batch
# mx,_ = torch.max(alpha,1)
# entropy = np.mean(-np.log2(mx.cpu().detach().numpy()))
# print("entropy of batch", entropy)
#loss = (1-k)*criter(outputs, labels) + k*ent
loss,closs,entropy = my_cross_entropy(outputs,labels,alpha,log_alpha,k)
r_loss += loss.item()
cc_loss += closs.item()
cc_entropy += entropy.item()
alphas = np.concatenate(alphas,axis=0)
pred = np.concatenate(pred,axis=0)
lbls = np.concatenate(lbls,axis=0)
fidices = np.concatenate(fidices,axis=0)
#print(alphas.shape,pred.shape,lbls.shape,fidices.shape)
analysis = analyse_data(alphas,lbls,pred,fidices)
return r_loss/i,cc_loss/i,cc_entropy/i,analysis
def analyse_data(alphas,lbls,predicted,f_idx):
'''
analysis data is created here
'''
batch = len(predicted)
amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0
for j in range (batch):
focus = np.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
amth +=1
else:
alth +=1
if(focus == f_idx[j] and predicted[j] == lbls[j]):
ftpt += 1
elif(focus != f_idx[j] and predicted[j] == lbls[j]):
ffpt +=1
elif(focus == f_idx[j] and predicted[j] != lbls[j]):
ftpf +=1
elif(focus != f_idx[j] and predicted[j] != lbls[j]):
ffpf +=1
#print(sum(predicted==lbls),ftpt+ffpt)
return [ftpt,ffpt,ftpf,ffpf,amth,alth]
number_runs = 10
full_analysis =[]
FTPT_analysis = pd.DataFrame(columns = ["FTPT","FFPT", "FTPF","FFPF"])
k = 0.001
every_what_epoch = 10
for n in range(number_runs):
print("--"*40)
# instantiate focus and classification Model
torch.manual_seed(n)
where = Module1().double()
torch.manual_seed(n)
what = Module2().double()
where = where.to("cuda")
what = what.to("cuda")
# instantiate optimizer
optimizer_where = optim.Adam(where.parameters(),lr =0.001)
optimizer_what = optim.Adam(what.parameters(), lr=0.001)
#criterion = nn.CrossEntropyLoss()
acti = []
analysis_data = []
loss_curi = []
epochs = 1000
# calculate zeroth epoch loss and FTPT values
running_loss ,_,_,anlys_data= calculate_attn_loss(train_loader,what,where,criterion,k)
loss_curi.append(running_loss)
analysis_data.append(anlys_data)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
# training starts
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
what.train()
where.train()
if ((epoch) % (every_what_epoch*2) ) <= every_what_epoch-1 :
print(epoch+1,"updating where_net, what_net is freezed")
print("--"*40)
elif ((epoch) % (every_what_epoch*2)) > every_what_epoch-1 :
print(epoch+1,"updating what_net, where_net is freezed")
print("--"*40)
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels,_ = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_where.zero_grad()
optimizer_what.zero_grad()
# forward + backward + optimize
avg, alpha,log_alpha = where(inputs)
outputs = what(avg)
my_loss,_,_ = my_cross_entropy(outputs,labels,alpha,log_alpha,k)
# print statistics
running_loss += my_loss.item()
my_loss.backward()
if ((epoch) % (every_what_epoch*2) ) <= every_what_epoch-1 :
optimizer_where.step()
elif ( (epoch) % (every_what_epoch*2)) > every_what_epoch-1 :
optimizer_what.step()
# optimizer_where.step()
# optimizer_what.step()
#break
running_loss,ccloss,ccentropy,anls_data = calculate_attn_loss(train_loader,what,where,criterion,k)
analysis_data.append(anls_data)
print('epoch: [%d] loss: %.3f celoss: %.3f entropy: %.3f' %(epoch + 1,running_loss,ccloss,ccentropy))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.005:
break
print('Finished Training run ' +str(n))
#break
analysis_data = np.array(analysis_data)
FTPT_analysis.loc[n] = analysis_data[-1,:4]/30
full_analysis.append((epoch, analysis_data))
correct = 0
total = 0
with torch.no_grad():
for data in train_loader:
images, labels,_ = data
images = images.double()
images, labels = images.to("cuda"), labels.to("cuda")
avg, alpha,log_alpha = where(images)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 train images: %d %%' % ( 100 * correct / total))
a,b= full_analysis[0]
print(a)
cnt=1
for epoch, analysis_data in full_analysis:
analysis_data = np.array(analysis_data)
# print("="*20+"run ",cnt,"="*20)
plt.figure(figsize=(6,6))
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0],label="ftpt")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1],label="ffpt")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2],label="ftpf")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3],label="ffpf")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.title("Training trends for run "+str(cnt))
plt.savefig(path+"where_what/every10/run"+str(cnt)+".png",bbox_inches="tight")
plt.savefig(path+"where_what/every10/run"+str(cnt)+".pdf",bbox_inches="tight")
cnt+=1
np.mean(np.array(FTPT_analysis),axis=0) #array([87.85333333, 5.92 , 0. , 6.22666667])
FTPT_analysis.to_csv(path+"where_what/FTPT_analysis_every10.csv",index=False)
FTPT_analysis
###Output
_____no_output_____ |
Programacao_Linear_pacote_pulp_ALUNA_O.ipynb | ###Markdown
###Code
# Instalar a biblioteca pulp
# importar a bilblioteca pulp
%pip install pulp
from pulp import *
# Definir um nome para o problema e especificar o que quero fazer: maximizar o lucro
model = LpProblem('Teste_maximo',LpMaximize)
# Definir as variáveis de decisão
x =LpVariable('x',lowBound=0)
y =LpVariable('y',lowBound=0)
# definir a função objetivo do problema em questão
# função objetivo
model += 5*x + 4*y
# definir as restrições do problema
model += 6*x + 4*y <= 24 # 1a restrição
model += x + 2*y <= 6 # 2a restrição
# Comando para resolver a programação linear - fç objetivo com as restrições
status = model.solve() # nome_da_fç.solve() - comando para resolver o problema
print(model)
print('Otimo valor de x: ',value(x))
print('Otimo valor de y: ',value(y))
print('Resultado da função objetivo: ',value(model.objective))
print('Qual é o estatus do problema: ',LpStatus[status])
###Output
_____no_output_____
###Markdown
- ATIVIDADE Programação Linear - pulp Quero maximar uma função objetivo - Quero obter lucro máximo em uma atividade. max Lucro = 200 * x1 + 300 * x2 subject - s.a (sujeito a): restrições x1 + x2 <= 45 (área) 3 * x1 + x2 <= 100 (empregados) 2 * x1 + 4 * x2 <= 120 (fertilizante) * x1 = quantidade de milho; x2 = quantidade de feijão* área de plantão* número de total de trabalhadores no campo* quantidade disponível de fertizante para o plantioResposta: x1 = 20x2 = 20Lucro Máximo = 10000
###Code
###Output
_____no_output_____ |
algorithms/trees/traversals.ipynb | ###Markdown
Traversals
###Code
class Node:
def __init__(self, value, left=None, right=None):
self.value = value
self.left = left
self.right = right
head = Node(
value="A",
left=Node(
value="B",
left=Node(
value="D",
left=Node(value="H"),
right=Node(value="I"),
),
right=Node(value="E"),
),
right=Node(
value="C",
left=Node(value="F"),
right=Node(
value="G",
left=Node(value="J")
),
),
)
###Output
_____no_output_____
###Markdown
Pre-order
###Code
def preorder_traversal(node):
stack = []
if node:
stack.append(node.value)
stack = stack + preorder_traversal(node.left)
stack = stack + preorder_traversal(node.right)
return stack
preorder_traversal(head)
###Output
_____no_output_____
###Markdown
In-order
###Code
def inorder_traversal(node):
stack = []
if node:
stack = inorder_traversal(node.left)
stack.append(node.value)
stack = stack + inorder_traversal(node.right)
return stack
inorder_traversal(head)
###Output
_____no_output_____
###Markdown
Post-order
###Code
def postorder_traversal(node):
stack = []
if node:
stack = postorder_traversal(node.left)
stack = stack + postorder_traversal(node.right)
stack.append(node.value)
return stack
postorder_traversal(head)
###Output
_____no_output_____ |
netflix-amazonprime-disney-hoststar-recom-system.ipynb | ###Markdown
Here we will be building content based recommendation system for movies and web shows on 3 OTT platforms viz: Netflix,Amazon Prime and Disney+ Hotstar.As these are most popular OTT platforms worldwide, one might stuck on what to watch on them. Loading Datasets
###Code
# Load netflix data
netflix = pd.read_csv("/kaggle/input/netflix-shows/netflix_titles.csv")
netflix.head()
netflix.shape
# Load Amazon Prime Data
amznprime = pd.read_csv("/kaggle/input/amazon-prime-movies-and-tv-shows/amazon_prime_titles.csv")
amznprime.head()
amznprime.shape
hotstar = pd.read_csv("/kaggle/input/disney-movies-and-tv-shows/disney_plus_titles.csv")
hotstar.head()
hotstar.shape
###Output
_____no_output_____
###Markdown
We now concatinate all three dataframes as we are building combined recommendation system.* column "show id" is common in all dataframes, we will drop it * In final dataframe we want information about on which platform recommended shows are available* We will add one column naming "platform" in all dataframes,for example :- **netflix['platform']** containing all values as "Netflix"
###Code
# Dropping "show_id" column
netflix.drop("show_id", axis=1, inplace=True)
amznprime.drop("show_id",axis=1, inplace=True)
hotstar.drop("show_id", axis=1, inplace=True)
# adding "platform" column to respective dataframe
netflix['platform'] = "Netflix"
amznprime['platform'] = "Amazon Prime"
hotstar['platform'] = "Disney+ Hotstar"
## creating final Dataframe
df = pd.concat([netflix, amznprime, hotstar], ignore_index=True)
df.shape
df.tail()
###Output
_____no_output_____
###Markdown
Analysis of final Dataframe
###Code
# Type of content available
df.type.unique()
# misssing values
df.isnull().sum()
df.info()
# we need to create new Data frame according to features required for recommendation
newdf = df.drop('rating', axis=1)
newdf.head()
dropped_nan = df.dropna()
dropped_nan.shape
###Output
_____no_output_____
###Markdown
We have large number of entries with missing valuesdropping them will result in loss of information
###Code
# removing spaces between individual terms in entries
# for ex ""
newdf['director']=newdf['director'].apply(lambda x: str(x).replace(" ",""))
newdf['cast']=newdf['cast'].apply(lambda x: str(x).replace(" ",""))
newdf['country']=newdf['country'].apply(lambda x: str(x).replace(" ",""))
newdf['listed_in']=newdf['listed_in'].apply(lambda x: str(x).replace(" ",""))
newdf['type']=newdf['type'].apply(lambda x: str(x).replace(" ",""))
# creating list of words in each string available in 'description' column
newdf['description']=newdf['description'].apply(lambda x:x.split())
# converting all rows to list
newdf['type']=newdf['type'].apply(lambda x:list(x.split(" ")))
newdf['listed_in']=newdf['listed_in'].apply(lambda x:list(x.split(" ")))
newdf['director'] = newdf['director'].apply(lambda x:list(x.split(" ")))
newdf['cast'] = newdf['cast'].apply(lambda x:list(x.split(" ")))
newdf['country'] = newdf['country'].apply(lambda x:list(x.split(" ")))
newdf['release_year'] =newdf['release_year'].apply(lambda x:list(str(x).split(" ")))
# creating new column for tags
newdf['tags'] = newdf['type']+newdf['director']+newdf['cast']+newdf['country']+newdf['release_year']+newdf['listed_in']+newdf['description']
# we have lot of entries with "nan" values, remove those values from tag list
for row in newdf['tags']:
if "nan" in row:
row.remove("nan")
# to form vectors we need strings, so convert list of tags to string
newdf['tags']=newdf['tags'].apply(lambda x:" ".join(x))
from sklearn.feature_extraction.text import TfidfVectorizer
# create instance of TFIDF vectorizer with stopwords in "english" language
#so that vectors will be formed without frequent stopwords
tv = TfidfVectorizer(max_features=30000,stop_words="english")
# create vectors and storing them into array
vectors = tv.fit_transform(newdf['tags']).toarray()
vectors.shape
from sklearn.metrics.pairwise import cosine_similarity
similarity = cosine_similarity(vectors)
similarity
dis=sorted(list(enumerate(similarity[0])), reverse=True, key=lambda x:x[1])
dis[1:6]
#newdf[newdf['title'] == "Ganglands"].index
# above code gives tuple => Int64Index([2], dtype='int64')
# we jusst want index [2] to be fetched, in 0th position in tuple
newdf[newdf['title'] == "Ganglands"].index[0]
# define recommend function
def recommend(movie):
index = newdf[newdf['title'] == movie].index[0]
distances = sorted(list(enumerate(similarity[index])), reverse=True,key=lambda x:x[1])
#for i in distances[1:6]:
#print(newdf.iloc[i[0]].title)
#print("Watch on:- ",newdf.iloc[i[0]].platform)
dic = {0: df.iloc[distances[1][0]],
1: df.iloc[distances[2][0]],
2: df.iloc[distances[3][0]],
3: df.iloc[distances[4][0]],
4: df.iloc[distances[5][0]]}
df1 = pd.DataFrame(dic).T
return df1
recommend("Kota Factory")
###Output
_____no_output_____ |
notebooks/complete/07_Loading-data.ipynb | ###Markdown
Read a text file of data using `numpy.loadtxt`* Numpy provides the function loadtxt to read and parse numeric data from a text file.* The file can be delimited with commas (a 'comma separated file'), tabs, or other common delimiters* Numerical data can be converted to floating point data or integers* Headers and comments can be ignored during the reading of the file.
###Code
!cat data/galileo_flat.txt
data = numpy.loadtxt('data/galileo_flat.txt')
print("data is ", data)
print("data shape is ", data.shape)
###Output
data is [[1500. 1000.]
[1340. 828.]
[1328. 800.]
[1172. 600.]
[ 800. 300.]]
data shape is (5, 2)
###Markdown
Read a comma separated file of data with headers using keywords* If you have a delimiter in your file (a comma, tab, vertical line), specify that with the `delimiter` keyword.* If you use a comment character consistently, using the `comments` keyword.* If you have a header you want to skip, use `skiprows`
###Code
!cat data/galileo_flat.csv
data = numpy.loadtxt('data/galileo_flat.csv', comments="#", skiprows=2, delimiter=',')
print(data)
###Output
[[1500. 1000.]
[1340. 828.]
[1328. 800.]
[1172. 600.]
[ 800. 300.]]
###Markdown
* Because the header lines are commented, I don't need `skiprows`.* Because I used the pound sign for a comment, I don't need the `comments` keyword.
###Code
data = numpy.loadtxt('data/galileo_flat.csv', delimiter=',')
print(data)
###Output
[[1500. 1000.]
[1340. 828.]
[1328. 800.]
[1172. 600.]
[ 800. 300.]]
###Markdown
Remember your data has the shape `ROWS X COLUMNS`* Your data will be shaped with the rows first.* You can change the order with transpose
###Code
print("data shape is ",data.shape)
###Output
data shape is (5, 2)
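###Markdown
For example, transposing swaps rows and columns (a short illustration added here; it reuses the `data` array loaded above):
###Code
# transpose: shape goes from (rows, columns) to (columns, rows)
data_T = data.T
print("data_T shape is ", data_T.shape)
###Output
_____no_output_____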
###Markdown
Split the data into variables using the `unpack` keyword
###Code
D,H = numpy.loadtxt('data/galileo_flat.csv', delimiter=',',unpack=True)
print(D,H)
print("D shape is ",D.shape)
print("H shape is ",H.shape)
#equivalent code
data = numpy.loadtxt('data/galileo_flat.csv', delimiter=',')
#transpose the array to columns x rows
D,H = data.T
print(D,H)
print("D shape is ",D.shape)
print("H shape is ",H.shape)
###Output
[1500. 1340. 1328. 1172. 800.] [1000. 828. 800. 600. 300.]
D shape is (5,)
H shape is (5,)
###Markdown
Save data with `numpy.savetxt`
###Code
numpy.savetxt("data/mydata.txt", data, delimiter=',')
!cat data/mydata.txt
###Output
1.500000000000000000e+03,1.000000000000000000e+03
1.340000000000000000e+03,8.280000000000000000e+02
1.328000000000000000e+03,8.000000000000000000e+02
1.172000000000000000e+03,6.000000000000000000e+02
8.000000000000000000e+02,3.000000000000000000e+02
###Markdown
Control the data format with the `fmt` keyword* The default format for the data is floating point data with many digits* You can change the format with the `fmt` keyword
###Code
numpy.savetxt("data/mydata2.txt", data,delimiter=',', fmt='%.6g')
!cat data/mydata2.txt
###Output
1500,1000
1340,828
1328,800
1172,600
800,300
###Markdown
Add a header string with `header`* Add header text to the file with the `header` keyword.* Include column titles in the `header` keyword.
###Code
header="""Distance (D), Header(H)
header lines are automatically commented out"""
newdata = numpy.vstack([D,H]).T
numpy.savetxt("data/mydata3.txt", newdata, delimiter=',', header=header, fmt='%.6g')
!cat data/mydata3.txt
###Output
# Distance (D), Header(H)
# header lines are automatically commented out
1500,1000
1340,828
1328,800
1172,600
800,300
###Markdown
More complex loadtxt commands can make your data more flexible* Using the `dtype` keyword allows fine control over the types of data you read.* Using `dtype` allows you to 'name' your data columns and reference them with the name.
###Code
data = numpy.loadtxt('data/galileo_flat.csv', comments="#", skiprows=2, delimiter=',',\
dtype={'names':("Distance","Height"), 'formats':('f4','f4')})
print("data shape is ", data.shape)
print("Distance data is ", data["Distance"])
###Output
data shape is (5,)
Distance data is [1500. 1340. 1328. 1172. 800.]
|
TFversion_TrainingAutonomousCar.ipynb | ###Markdown
Dataset
###Code
CSV = "TrainData.csv"
ImageDir = "_TrainData"
np.random.seed(0)
###Output
_____no_output_____
###Markdown
Handling dataframe
###Code
df = pd.read_csv(CSV)
df.head(2)
###Output
_____no_output_____
###Markdown
Dataset Visualization
###Code
visualY = df['Steering']
visualY = visualY * 10
visualY = np.asarray(visualY, dtype= np.int32)
unique, count = np.unique(visualY, return_counts =True)
print(np.asarray((unique, count)))
plt.bar(unique,count)
plt.show()
tempSet = df.loc[df['Steering'] <= -.2]
frames = [df, tempSet, tempSet,tempSet ,tempSet]
df = pd.concat(frames)
type(df)
visualY = df['Steering']
visualY = visualY * 10
visualY = np.asarray(visualY, dtype= np.int32)
unique, count = np.unique(visualY, return_counts =True)
print(np.asarray((unique, count)))
plt.bar(unique,count)
plt.show()
tempSet = df.loc[df['Steering'] < -.5]
tempSetPos = df.loc[df['Steering'] > 0]
frames = [df, tempSet,tempSetPos, tempSet,tempSetPos,tempSetPos,tempSetPos,tempSet]
df = pd.concat(frames)
visualY = df['Steering']
visualY = visualY * 10
visualY = np.asarray(visualY, dtype= np.int32)
unique, count = np.unique(visualY, return_counts =True)
print(np.asarray((unique, count)))
X = df[['Left','Center','Right']]
Y = df['Steering']
plt.bar(unique,count)
plt.show()
X = X.values
Y = Y.values
X.shape, Y.shape
###Output
_____no_output_____
###Markdown
Train test set
###Code
X_train, X_test, y_train, y_test = train_test_split(X,Y, test_size =0.2, random_state = None, shuffle = True)
X_train.shape, y_train.shape
X_test.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Image details
###Code
height = 32
width = 32
channel = 3
###Output
_____no_output_____
###Markdown
Common functions
###Code
def readImage(imageName):
global ImageDir
return mpimg.imread(os.path.join(ImageDir, imageName.strip()))
def getTrainBatch(batchSize,X,Y, isTraining):
global height, width, channel
indexRange = np.random.permutation(X.shape[0])
currentIndex =0
images = np.empty(shape =[batchSize, height,width,channel], dtype=np.float32)
steer = np.empty(shape = [batchSize,1],dtype=np.float32)
for index in indexRange:
if isTraining:
choiceIndex = np.random.choice(3)
_image = imageResize(readImage(X[index][choiceIndex]))
if choiceIndex ==0:
steer[currentIndex] = (Y[index] + 0.2)
elif choiceIndex ==1:
steer[currentIndex] = Y[index]
else:
steer[currentIndex] = (Y[index] - 0.2)
else:
_image= imageResize(readImage(X[index][1]))
steer[currentIndex] = Y[index]
images[currentIndex] = (_image/127.5) -1
currentIndex = currentIndex+1
if currentIndex == batchSize:
break
return images,steer
def imageResize(image):
global height, width
return cv2.resize(image, (width, height), cv2.INTER_AREA)
###Output
_____no_output_____
###Markdown
Graph Parameters
###Code
conv1_filters = 32
conv1_kSize = 5
conv1_strides = 1
conv1_padding = 'SAME'
conv2_filters = 32
conv2_kSize = 5
conv2_strides = 2
conv2_padding = 'SAME'
conv3_filters = 64
conv3_kSize = 5
conv3_strides = 2
conv3_padding = 'SAME'
conv4_filters = 64
conv4_kSize = 3
conv4_strides = 2
conv4_padding = 'SAME'
conv5_filters = 64
conv5_kSize = 3
conv5_strides = 1
conv5_padding = 'SAME'
dense1_filter = 1000
dense2_filter = 128
output_unit = 1
tf.reset_default_graph()
###Output
_____no_output_____
###Markdown
Placeholders
###Code
X = tf.placeholder(tf.float32, shape=[None, height, width, channel], name = 'X_Placeholder')
Y = tf.placeholder(tf.float32, shape=[None,1], name = 'Y_Placeholder')
X, Y
###Output
_____no_output_____
###Markdown
Graph
###Code
conv1 = tf.layers.conv2d(inputs = X, filters = conv1_filters, kernel_size = conv1_kSize, strides = conv1_strides, padding = conv1_padding)
conv1
conv2 = tf.layers.conv2d(inputs = conv1, filters = conv2_filters, kernel_size = conv2_kSize, strides = conv2_strides, padding = conv2_padding)
conv2
conv3 = tf.layers.conv2d(inputs = conv2, filters = conv3_filters, kernel_size = conv3_kSize, strides = conv3_strides, padding = conv3_padding)
conv3
conv4 = tf.layers.conv2d(inputs = conv3, filters = conv4_filters, kernel_size = conv4_kSize, strides = conv4_strides, padding = conv4_padding, name="conv4")
conv4
conv5 = tf.layers.conv2d(inputs = conv4, filters = conv5_filters, kernel_size = conv5_kSize, strides = conv5_strides, padding = conv5_padding, name="conv5")
conv5
flatten = tf.reshape(conv5, shape =[-1, conv5.shape[1].value * conv5.shape[2].value * conv5.shape[3].value])
flatten
dense1 = tf.layers.dense(inputs = flatten, units = dense1_filter, activation = tf.nn.elu)
dense1
dense2 =tf.layers.dense(inputs = dense1, units = dense2_filter, activation = tf.nn.elu)
dense2
output = tf.layers.dense(inputs = dense2, units = output_unit)
output
Y.shape, output.shape
loss = tf.losses.mean_squared_error(labels = Y, predictions = output)
optimizer = tf.train.AdamOptimizer()
train = optimizer.minimize(loss)
saver = tf.train.Saver()
init = tf.global_variables_initializer()
###Output
_____no_output_____
###Markdown
Hyperparameters
###Code
totalEpochs = 10
batchSize = 32
totalSets = (int)(len(X_train)/ batchSize)
totalSets
#graphs
TrainingLossArray = np.empty(shape=[totalEpochs],dtype = np.float32)
TestLossArray = np.empty(shape=[totalEpochs],dtype = np.float32)
###Output
_____no_output_____
###Markdown
Training
###Code
a = (int)(time.time() * 10000) + np.random.randint(100,size=1)
with tf.Session() as sess:
sess.run(init)
for epoch in range(totalEpochs):
currentIndex = 0
print("Epoch {0} started.......................".format(epoch))
for index in range(totalSets):
x, y = getTrainBatch(batchSize, X_train, y_train, True)
sess.run(train,feed_dict = {X:x, Y:y})
_loss = sess.run(loss,feed_dict = {X:x, Y:y})
if index % 100 == 0:
print(" Completed count {0} and Loss : {1}".format(index,_loss))
print(" --------------------------------------------------------------------------------")
print(" Trainign loss {0} ".format(_loss))
TrainingLossArray[epoch] = _loss
x, y = getTrainBatch(batchSize, X_test, y_test, False)
_loss = sess.run(loss,feed_dict = {X:x, Y:y})
print(" Test loss {0} ".format(_loss))
TestLossArray[epoch] = _loss
modelName = "model/Model_" + str(a[0]) +"_____" + str(epoch)
saver.save(sess,modelName)
print("Model saved : ", modelName)
x, y = getTrainBatch(1000, X_test, y_test, False)
_yPred = sess.run(output,feed_dict = {X:x, Y:y})
_loss = sess.run(loss,feed_dict = {X:x, Y:y})
print(" Test loss {0}".format(_loss))
saver.save(sess,modelName)
print("Completed")
###Output
Epoch 0 started.......................
Completed count 0 and Loss : 5.957803726196289
Completed count 100 and Loss : 0.03394746035337448
Completed count 200 and Loss : 0.03969404101371765
Completed count 300 and Loss : 0.03363872691988945
Completed count 400 and Loss : 0.018486790359020233
Completed count 500 and Loss : 0.037606559693813324
Completed count 600 and Loss : 0.029704757034778595
Completed count 700 and Loss : 0.014800004661083221
Completed count 800 and Loss : 0.025718403980135918
Completed count 900 and Loss : 0.011952178552746773
--------------------------------------------------------------------------------
Trainign loss 0.022728394716978073
Test loss 0.02102157473564148
Model saved : model/Model_15425395460637_____0
Epoch 1 started.......................
Completed count 0 and Loss : 0.018008362501859665
Completed count 100 and Loss : 0.025861207395792007
Completed count 200 and Loss : 0.012491237372159958
Completed count 300 and Loss : 0.01675018109381199
Completed count 400 and Loss : 0.014935837127268314
Completed count 500 and Loss : 0.016359511762857437
Completed count 600 and Loss : 0.02713022381067276
Completed count 700 and Loss : 0.012450508773326874
Completed count 800 and Loss : 0.013137966394424438
Completed count 900 and Loss : 0.022013578563928604
--------------------------------------------------------------------------------
Trainign loss 0.02010934054851532
Test loss 0.008556215092539787
Model saved : model/Model_15425395460637_____1
Epoch 2 started.......................
Completed count 0 and Loss : 0.024761663749814034
Completed count 100 and Loss : 0.017742976546287537
Completed count 200 and Loss : 0.014024957083165646
Completed count 300 and Loss : 0.013843515887856483
Completed count 400 and Loss : 0.01666743867099285
Completed count 500 and Loss : 0.015240740031003952
Completed count 600 and Loss : 0.018563304096460342
Completed count 700 and Loss : 0.014833538793027401
Completed count 800 and Loss : 0.020379208028316498
Completed count 900 and Loss : 0.016841385513544083
--------------------------------------------------------------------------------
Trainign loss 0.01233590766787529
Test loss 0.012782157398760319
Model saved : model/Model_15425395460637_____2
Epoch 3 started.......................
Completed count 0 and Loss : 0.01000949926674366
Completed count 100 and Loss : 0.01622285135090351
Completed count 200 and Loss : 0.00921539030969143
Completed count 300 and Loss : 0.01390217524021864
Completed count 400 and Loss : 0.012011686339974403
Completed count 500 and Loss : 0.01210271567106247
Completed count 600 and Loss : 0.012698809616267681
Completed count 700 and Loss : 0.016898345202207565
Completed count 800 and Loss : 0.010021215304732323
Completed count 900 and Loss : 0.024219246581196785
--------------------------------------------------------------------------------
Trainign loss 0.011744080111384392
Test loss 0.014946043491363525
Model saved : model/Model_15425395460637_____3
Epoch 4 started.......................
Completed count 0 and Loss : 0.012202995829284191
Completed count 100 and Loss : 0.006572953425347805
Completed count 200 and Loss : 0.01677410677075386
Completed count 300 and Loss : 0.010465624742209911
Completed count 400 and Loss : 0.010911883786320686
Completed count 500 and Loss : 0.019148729741573334
Completed count 600 and Loss : 0.008076565340161324
Completed count 700 and Loss : 0.010886328294873238
Completed count 800 and Loss : 0.01144014485180378
Completed count 900 and Loss : 0.012792317196726799
--------------------------------------------------------------------------------
Trainign loss 0.009549302980303764
Test loss 0.00735051091760397
Model saved : model/Model_15425395460637_____4
Epoch 5 started.......................
Completed count 0 and Loss : 0.014001871459186077
Completed count 100 and Loss : 0.007088589947670698
Completed count 200 and Loss : 0.00874466821551323
Completed count 300 and Loss : 0.013246467337012291
Completed count 400 and Loss : 0.017277300357818604
Completed count 500 and Loss : 0.014141371473670006
Completed count 600 and Loss : 0.01178213395178318
Completed count 700 and Loss : 0.007392230909317732
Completed count 800 and Loss : 0.015902362763881683
Completed count 900 and Loss : 0.01170360017567873
--------------------------------------------------------------------------------
Trainign loss 0.010131200775504112
Test loss 0.007869934663176537
Model saved : model/Model_15425395460637_____5
Epoch 6 started.......................
Completed count 0 and Loss : 0.013822907581925392
Completed count 100 and Loss : 0.009814513847231865
Completed count 200 and Loss : 0.011550010181963444
Completed count 300 and Loss : 0.015861298888921738
Completed count 400 and Loss : 0.0067636012099683285
Completed count 500 and Loss : 0.010816155932843685
Completed count 600 and Loss : 0.015764955431222916
Completed count 700 and Loss : 0.024696968495845795
Completed count 800 and Loss : 0.009034502319991589
Completed count 900 and Loss : 0.010379107668995857
--------------------------------------------------------------------------------
Trainign loss 0.011758310720324516
Test loss 0.007323317229747772
Model saved : model/Model_15425395460637_____6
Epoch 7 started.......................
Completed count 0 and Loss : 0.009674912318587303
Completed count 100 and Loss : 0.00991353951394558
Completed count 200 and Loss : 0.021988237276673317
Completed count 300 and Loss : 0.008852913975715637
Completed count 400 and Loss : 0.011050541885197163
Completed count 500 and Loss : 0.0131304282695055
Completed count 600 and Loss : 0.009884987026453018
Completed count 700 and Loss : 0.0070093898102641106
Completed count 800 and Loss : 0.010978732258081436
Completed count 900 and Loss : 0.008886807598173618
--------------------------------------------------------------------------------
Trainign loss 0.01353539526462555
Test loss 0.0217283945530653
Model saved : model/Model_15425395460637_____7
Epoch 8 started.......................
Completed count 0 and Loss : 0.008982986211776733
Completed count 100 and Loss : 0.010816440917551517
Completed count 200 and Loss : 0.00935292337089777
Completed count 300 and Loss : 0.009537456557154655
Completed count 400 and Loss : 0.0074629876762628555
Completed count 500 and Loss : 0.01001269742846489
Completed count 600 and Loss : 0.012569618411362171
Completed count 700 and Loss : 0.00762396352365613
Completed count 800 and Loss : 0.012749603018164635
Completed count 900 and Loss : 0.0074554854072630405
--------------------------------------------------------------------------------
Trainign loss 0.010573378764092922
Test loss 0.015552621334791183
Model saved : model/Model_15425395460637_____8
Epoch 9 started.......................
Completed count 0 and Loss : 0.006878113839775324
Completed count 100 and Loss : 0.007174311671406031
Completed count 200 and Loss : 0.014582308009266853
Completed count 300 and Loss : 0.009218496270477772
Completed count 400 and Loss : 0.007633640430867672
Completed count 500 and Loss : 0.007256672717630863
Completed count 600 and Loss : 0.01507255993783474
Completed count 700 and Loss : 0.008994203060865402
Completed count 800 and Loss : 0.009660067036747932
Completed count 900 and Loss : 0.007024089805781841
--------------------------------------------------------------------------------
Trainign loss 0.006193377077579498
Test loss 0.010820752941071987
Model saved : model/Model_15425395460637_____9
Test loss 0.010033893398940563
###Markdown
Training loss
###Code
_trainLoss = np.around(TrainingLossArray, decimals=4)
_testLoss = np.around(TestLossArray, decimals=4)
line1, = plt.plot(_trainLoss, label='Train loss')
line2, = plt.plot(_testLoss, label='Test loss')
first_legend = plt.legend(handles=[line1], loc =1)
ax = plt.gca().add_artist(first_legend)
plt.legend(handles=[line2], loc =4)
plt.show()
startPoint = 70
line1, = plt.plot(y[startPoint:startPoint+35], label='True value')
line2, = plt.plot(_yPred[startPoint:startPoint+35],label='Pred value')
first_legend = plt.legend(handles=[line1], loc =1)
ax = plt.gca().add_artist(first_legend)
plt.legend(handles=[line2], loc =4)
plt.savefig("graph1.jpg")
plt.imshow(imageResize(readImage(X_train[900][0])))
a.shape
b.shape
###Output
_____no_output_____ |
ID by county.ipynb | ###Markdown
Idaho updates daily at 2pm CDT
###Code
from selenium import webdriver
import time
import pandas as pd
import pendulum
import re
import yaml
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
#chrome_options.add_argument("--disable-extensions")
#chrome_options.add_argument("--disable-gpu")
#chrome_options.add_argument("--no-sandbox) # linux only
chrome_options.add_argument("--start-maximized")
# chrome_options.add_argument("--headless")
chrome_options.add_argument("user-agent=[Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:73.0) Gecko/20100101 Firefox/73.0]")
with open('config.yaml', 'r') as f:
config = yaml.safe_load(f.read())
state = 'ID'
scrape_timestamp = pendulum.now().strftime('%Y%m%d%H%M%S')
url = 'https://coronavirus.idaho.gov/'
def fetch():
driver = webdriver.Chrome('../20190611 - Parts recommendation/chromedriver', options=chrome_options)
driver.get(url)
time.sleep(5)
datatbl = driver.find_element_by_id('tablepress-1')
data = []
for row in datatbl.find_elements_by_css_selector('tr'):
data.append([cell.text for cell in row.find_elements_by_css_selector('td')])
d = []
for row in data:
if len(row) ==4:
d.append(row[1:])
else:
d.append(row)
page_source = driver.page_source
driver.close()
return pd.DataFrame(d, columns=['county','positive_cases','deaths']), page_source
def save(df, source):
df.to_csv(f"{config['data_folder']}/{state}_county_{scrape_timestamp}.txt", sep='|', index=False)
with open(f"{config['data_source_backup_folder']}/{state}_county_{scrape_timestamp}.html", 'w') as f:
f.write(source)
def run():
df, source = fetch()
save(df, source)
###Output
_____no_output_____ |
notebooks/audioDSP-intro_phase_vocoder.ipynb | ###Markdown
 DIGITAL PROCESSING OF AUDIO SIGNALS    Introduction to the phase vocoder algorithm
###Code
%matplotlib inline
import math
import time
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from scipy.io import wavfile
import IPython.display as ipd
###Output
_____no_output_____
###Markdown
**NOTE:** *The following two cells are only needed to download the example file. Ignore them if you are going to work with your own audio files.*
###Code
!pip install wget
import wget
###Output
_____no_output_____
###Markdown
 How to run the notebook: you can download the notebook and run it locally on your computer, or you can run it in Google Colab using the following link.   Run in Google Colab   Introduction: this exercise is an introduction to the phase vocoder algorithm. An audio signal is analyzed using the STFT and then reconstructed by inverse-transforming the DFT of each frame and combining the frames with the overlap-add method. The audio signal to be processed is loaded next.
###Code
# download audio file
wget.download('https://github.com/mrocamora/audio-dsp/blob/main/audio/singing_voice.wav?raw=true')
# read the audio file to process (from https://openairlib.net/)
filename = 'singing_voice.wav'
# filename = './audio/trumpet.wav'
# read audio file
fs, x = wavfile.read(filename)
# normalize maximum (absolute) amplitude
x = x / np.max(abs(x)) * 0.9
# time corresponding to the audio signal
time_x = np.arange(0, x.size)/fs
# plot the audio signal waveform
plt.figure(figsize=(12,6))
ax1 = plt.subplot(2, 1, 1)
plt.plot(time_x, x)
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
ipd.Audio(x, rate=fs)
###Output
_____no_output_____
###Markdown
 Part 1: this first part analyzes the effect of windowing and the constraints the window must satisfy for the **overlap-add** method. Complete the code provided below, following these steps.  1. Accumulate the analysis windows appropriately according to the **overlap-add** algorithm. 2. Compute the amplitude scaling factor $C$ for the case in which the window overlap is constant. 3. Change the time-decimation factor ($R$) and analyze the results; in particular, try $\frac{1}{4} L$ and $\frac{3}{4} L$. 4. Analyze the result for other smoothing windows (e.g. Hamming, Blackman) using the same decimation factors.
###Code
# length of the input signal
M = x.size;
# length of the analysis window in samples
L = 2048
# hop size in samples.
R = int(L/2)
# total number of analysis frames
num_frames = int(np.floor((M-L)/R))
# analysis window
window = signal.windows.get_window('hann', L)
# overlap-add (OLA) of the analysis windows
olawin = np.zeros((num_frames-1)*R+L)
# for each analysis frame
for ind in range(num_frames):
# initial index of current window
n_ini = ind * R
# overlap-add the window
# olawin[??] =
# compute the amplitude scaling factor
# C =
print("C = ", C)
print("max(olawin) = ", max(olawin))
# plot the analysis window
plt.figure(figsize=(12,6))
ax1 = plt.subplot(2, 1, 1)
plt.plot(window, 'r')
plt.ylabel('Amplitude')
plt.xlabel('Time (samples)')
plt.title('Analysis window')
# plot the overlap-add of the analysis windows
plt.figure(figsize=(12,6))
ax1 = plt.subplot(2, 1, 1)
plt.plot(olawin)
plt.xlabel('Time (samples)')
plt.ylabel('Amplitude')
plt.title('Overlap-add of the analysis windows')
###Output
_____no_output_____
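###Markdown
A possible completion of the two missing lines above is sketched here for reference (an added illustration, not the official solution; names ending in `_sol` are my own, and the variables `window`, `L`, `R` and `num_frames` are assumed to be defined in the previous cell).
###Code
# accumulate the shifted analysis windows (overlap-add) and compute the
# amplitude scaling factor C for a constant-overlap-add (COLA) window
olawin_sol = np.zeros((num_frames-1)*R + L)
for ind in range(num_frames):
    n_ini = ind * R
    olawin_sol[n_ini:n_ini + L] += window
C_sol = np.sum(window) / R
print("C = ", C_sol)
print("max(olawin) = ", np.max(olawin_sol))
###Output
_____no_output_____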
###Markdown
 Part 2: the following code implements the analysis stage of the **phase vocoder** algorithm, i.e. an STFT. Complete the code provided below, following these steps.  1. Compute the DFT of each signal frame after applying a smoothing window. 2. Compute the frequency of each bin in radians. 3. Compute the time instant, in samples, of each frame.
###Code
def analysis_STFT(x, L=2048, R=256, win='hann'):
""" compute the analysis phase of the phase vocoder, i.e. the STFT of the input audio signal
Parameters
----------
x : numpy array
input audio signal (mono) as a numpy 1D array.
L : int
window length in samples.
R : int
hop size in samples.
win : string
window type as defined in scipy.signal.windows.
Returns
-------
X_stft : numpy array
STFT of x as a numpy 2D array.
omega_stft : numpy array
frequency values in radians.
samps_stft : numpy array
time sample at the begining of each frame.
"""
# length of the input signal
M = x.size;
# number of points to compute the DFT (FFT)
N = L
# analysis window
window = signal.windows.get_window(win, L)
# total number of analysis frames
num_frames = int(np.floor((M - L) / R))
# initialize stft
X_stft = np.zeros((N, num_frames), dtype = complex)
# process each frame
for ind in range(num_frames):
# initial and ending points of the frame
n_ini = int(ind * R)
n_end = n_ini + L
# signal frame
# xr =
# save DFT of the signal frame
# X_stft[:, ind] =
# frequency values in radians
# omega_stft =
# time sample at the center of each frame
# samps_stft =
return X_stft, omega_stft, samps_stft
###Output
_____no_output_____
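###Markdown
For reference, a possible completion of `analysis_STFT` is sketched below (an added illustration, not the official solution; the `_solution` suffix marks names that are my own). The frame time is taken at the beginning of each frame, matching the docstring.
###Code
def analysis_STFT_solution(x, L=2048, R=256, win='hann'):
    M = x.size
    N = L
    window = signal.windows.get_window(win, L)
    num_frames = int(np.floor((M - L) / R))
    X_stft = np.zeros((N, num_frames), dtype=complex)
    for ind in range(num_frames):
        n_ini = int(ind * R)
        n_end = n_ini + L
        # windowed signal frame and its DFT
        xr = x[n_ini:n_end] * window
        X_stft[:, ind] = np.fft.fft(xr, N)
    # frequency of each DFT bin in radians
    omega_stft = 2 * np.pi * np.arange(N) / N
    # sample index at the beginning of each frame
    samps_stft = np.arange(num_frames) * R
    return X_stft, omega_stft, samps_stft
###Output
_____no_output_____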
###Markdown
 Part 3: once the implementation of `analysis_STFT` is complete, follow these steps. 1. Run `analysis_STFT` for different values of $L$ and $R$ and analyze the result in the spectrogram. 2. What does the parameter $L$ control? What does the parameter $R$ control? 3. What relation must $L$ and $R$ satisfy? Why?
###Code
# window length in samples
L = 2048
# hop size in samples
R = 256
# compute STFT
X_stft, omega_stft, samps_stft = analysis_STFT(x, L, R, win='hann')
# max frequency index
ind_fmax = int(X_stft.shape[0]/2)+1
# frequency values (Hz)
stft_freqs = omega_stft[:ind_fmax]*fs/(2*np.pi)
# time values of the stft
stft_time = samps_stft/fs
plt.figure(figsize=(12,8))
ax1 = plt.subplot(2, 1, 1)
plt.pcolormesh(stft_time, stft_freqs, 20*np.log10(np.abs(X_stft[:ind_fmax, :])), cmap='jet')
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [sec]')
###Output
_____no_output_____
###Markdown
 Part 4: the following code implements the synthesis stage from the STFT. Complete the code provided below, following these steps.  1. Compute the reconstruction of each signal frame. 2. Accumulate successive frames using the **overlap-add** method. 3. Scale the amplitude of the resulting signal by the windowing compensation factor.
###Code
def synthesis_STFT(X_stft, L=2048, R=256, win='hann'):
""" compute the synthesis using the IFFT of each frame combined with overlap-add
Parameters
----------
X_stft : numpy array
STFT of x as a numpy 2D array.
L : int
window length in samples.
R : int
hop size in samples.
win : string
window type as defined in scipy.signal.windows.
Returns
-------
x : numpy array
output audio signal (mono) as a numpy 1D array.
"""
# number of frequency bins
N = X_stft.shape[0];
# analysis window
window = signal.windows.get_window(win, L)
# total number of analysis frames
num_frames = X_stft.shape[1]
# initialize otuput signal in the time domain
y = np.zeros(num_frames * R + L)
# process each frame
for ind in range(num_frames):
# reconstructed signal frame
# yr =
# initial and ending points of the frame
# n_ini =
# n_end =
# overlap-add the signal frame
# y[n_ini:n_end] =
# compute the amplitude scaling factor
# C =
# compensate the amplitude scaling factor
y /= C
return y
###Output
_____no_output_____
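###Markdown
For reference, a possible completion of `synthesis_STFT` is sketched below (an added illustration; it assumes no synthesis window is applied, so the scaling factor is the same `sum(window)/R` used in Part 1, and the `_solution` suffix marks names that are my own).
###Code
def synthesis_STFT_solution(X_stft, L=2048, R=256, win='hann'):
    N = X_stft.shape[0]
    window = signal.windows.get_window(win, L)
    num_frames = X_stft.shape[1]
    y = np.zeros(num_frames * R + L)
    for ind in range(num_frames):
        # inverse DFT of the frame (keep the real part) and overlap-add
        yr = np.real(np.fft.ifft(X_stft[:, ind], N))
        n_ini = ind * R
        n_end = n_ini + L
        y[n_ini:n_end] += yr
    # compensate the amplitude scaling introduced by the analysis window
    C = np.sum(window) / R
    y /= C
    return y
###Output
_____no_output_____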
###Markdown
 Part 5: once the implementation of `synthesis_STFT` is complete, follow these steps. 1. Run `analysis_STFT` and then `synthesis_STFT` using $L=2048$ and $R=256$. 2. Evaluate the reconstruction in terms of the waveform, and evaluate it by listening. 3. Run `analysis_STFT` using $L=2048$ and $R=256$, then run `synthesis_STFT` using $L=2048$ and $R=512$. 4. Before listening to the result, state what kind of time modification you expect: a time shortening or a time stretching? 5. Evaluate the reconstruction in terms of the waveform and by listening. 6. Repeat from step 3 onward using $L=2048$ and $R=128$.
###Code
# window length in samples
L = 2048
# hop size in samples
R = 256
# compute STFT
X_stft, omega_stft, samps_stft = analysis_STFT(x, L, R, win='hann')
# hop size in samples
R = 256
# compute the synthesis from the STFT
y = synthesis_STFT(X_stft, L, R, win='hann')
# time corresponding to the audio signal
time_y = np.arange(0, y.size)/fs
# plot the audio signal waveform
plt.figure(figsize=(12,6))
ax1 = plt.subplot(2, 1, 1)
plt.plot(time_x, x)
plt.ylabel('Amplitude')
ax1 = plt.subplot(2, 1, 2)
plt.plot(time_y, y)
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
ipd.Audio(x, rate=fs)
ipd.Audio(y, rate=fs)
###Output
_____no_output_____ |
Topic Modeling/Topic Modeling.ipynb | ###Markdown
Tokenizer for Tweets
###Code
tknzr = TweetTokenizer(preserve_case=False, # Convertir a minúsculas
reduce_len=True, # Reducir caracteres repetidos
strip_handles=False) # Mostrar @usuarios
###Output
_____no_output_____
###Markdown
Create Stemmer of class Snowball
###Code
stmmr = snowballstemmer.stemmer('Spanish')
###Output
_____no_output_____
###Markdown
Stopword
###Code
spec_chars = ["…",'"',"“","/","(",")","[","]","?","¿","!","¡",
"rt",":","…",",","\n","#","\t","",".","$",
"...","-","🤢"]
# create English stop words list
en_stop = get_stop_words('es')
def clean_tweet(tmp_tweet):
"""
Eliminar tokens que:
- Estén dentro de lista_de_paro.
- Sean ligas.
- Si es una mención i.e @potus, se cambia por token genérico @usuario.
"""
return [_ for _ in tmp_tweet
if _ not in spec_chars
and not _.startswith(('http', 'htt'))]
nuevos_tweets = []
for i in tweets.find():
if "retweeted_status" in i: # Si es retweet...
tokens = tknzr.tokenize(i["retweeted_status"]['text'])
stopped_tokens = [i for i in tokens
if not i in en_stop]
stemmed_tokens = clean_tweet(stopped_tokens)
nuevos_tweets.append(stemmed_tokens)
else: # Si no es retweet...
tokens = tknzr.tokenize(i['text'])
stopped_tokens = [i for i in tokens
if not i in en_stop]
stemmed_tokens = clean_tweet(stopped_tokens)
nuevos_tweets.append(stemmed_tokens)
df = pd.DataFrame()
df["tweet"] = nuevos_tweets
df.head(100)
###Output
_____no_output_____
###Markdown
turn our tokenized documents into a id term dictionary
###Code
dictionary = corpora.Dictionary(df["tweet"])
###Output
_____no_output_____
###Markdown
convert tokenized documents into a document-term matrix
###Code
corpus = [dictionary.doc2bow(text) for text in df["tweet"]]
###Output
_____no_output_____
###Markdown
generate LDA model
###Code
ldamodel = LdaModel(corpus,
num_topics = 20,
id2word = dictionary,
passes = 20,
minimum_probability = 0.05)
print(ldamodel)
print(ldamodel.print_topics())
ldamodel.save(fname="ldamodel")
###Output
_____no_output_____ |
delhi.ipynb | ###Markdown
Number of routes
###Code
routes.route_long_name.unique().shape
# Number of routes per km2
1270 / 1484
# Number of routes per capita
1270 / 16_787_941
###Output
_____no_output_____
###Markdown
Number of stops
###Code
stops = pd.read_csv('data/india/GTFS/stops.txt')
stops.stop_id.unique().shape
# Number of stops per km2
4192 / 1484
# Number of stops per capita
4192 / 16_787_941
###Output
_____no_output_____
###Markdown
Average number of stops per route Should we make a distinction between route and trip? What if a route has two different trips that goes to a different set of stations? Why should we treat them a single route, rather than two separate routes?If we ignore routes and only count trips, the calculation becomes much easier
###Code
trips = pd.read_csv('data/india/GTFS/trips.txt')
stop_times = pd.read_csv('data/india/GTFS/stop_times.txt')
trip_to_stop_id = stop_times[['trip_id', 'stop_id']].drop_duplicates()
trip_to_stop_id
num_stops_in_every_trip = trip_to_stop_id.groupby('trip_id').stop_id.count()
num_stops_in_every_trip
sns.histplot(x=num_stops_in_every_trip, bins=30)
plt.xlabel('Number of stops')
plt.title('Delhi')
sns.despine()
plt.savefig('figures/d_nstops.png')
num_stops_in_every_trip.to_csv('gen/d_nstops.csv')
num_stops_in_every_trip.mean()
num_stops_in_every_trip.median()
###Output
_____no_output_____
###Markdown
Restore old trip to routes code
###Code
route_to_trip_id = trips[['route_id', 'trip_id']].drop_duplicates()
route_to_trip_id
def f(x):
'''
input: dataframe with columns trip_id and stop_id. the trip_id has only one unique value
output: list of all stop ids that this trip stops at
'''
return x.stop_id.to_list()
trip_stops = trip_to_stop_id.groupby('trip_id').apply(f)
trip_stops
def g(x):
'''Same as `f` above but collects trip ids instead of stop ids'''
return x.trip_id.to_list()
route_trips = route_to_trip_id.groupby('route_id').apply(g)
route_trips
def h(x):
# mean len of every trip in this route
return np.mean([len(trip_stops[trip_id]) for trip_id in x])
num_stops_for_every_route = route_trips.apply(h)
num_stops_for_every_route
# Average number of stops
num_stops_for_every_route.mean()
num_stops_for_every_route.plot.hist()
num_stops_for_every_route.to_csv('gen/d_nstops2.csv')
###Output
_____no_output_____
###Markdown
Average route length Again, we're going to consider each trip as a route by themselves
###Code
# wgs84 is in degrees, pseudo mercator is in meters
wgs84 = pyproj.CRS('EPSG:4326')
pseudo_mercator = pyproj.CRS('EPSG:3857')
project = pyproj.Transformer.from_crs(wgs84, pseudo_mercator, always_xy=True).transform
def transformer(p: Point):
return transform(project, p)
###Output
_____no_output_____
###Markdown
Need to use the stop_sequence to ensure the stops are in order
###Code
stop_times_with_loc = stop_times.merge(stops, on='stop_id')
stop_times_with_loc
###Output
_____no_output_____
###Markdown
I can't find the code for this onebut 175STLDOWN and 133DOWN shares the first half - then continues after the last stophttps://www.transsee.ca/stoplist?a=delhi&r=1102https://www.transsee.ca/stoplist?a=delhi&r=1230
###Code
stop_times_with_loc[stop_times_with_loc.trip_id == '0_20_20'].sort_values('stop_sequence')
trips[trips.trip_id == '0_20_20']
routes[routes.route_id == 0]
def f(x):
df = x.sort_values('stop_sequence')
points = df[['stop_lon', 'stop_lat']].apply(
lambda row: transformer(Point(row.stop_lon, row.stop_lat)),
axis=1
)
return LineString(points.reset_index(drop=True))
trip_lines = stop_times_with_loc.groupby('trip_id').apply(f)
trip_lines
trip_lengths = trip_lines.apply(lambda x: x.length)
trip_lengths
# Average trip length in meters
trip_lengths.mean()
trip_lengths.median()
sns.histplot(x=trip_lengths, bins=40)
sns.despine()
plt.title('Delhi')
plt.xlabel('Route length (m)')
plt.savefig('figures/d_rlengths.png')
trip_lengths.to_csv('gen/d_rlengths.csv')
trip_lengths.sort_values(ascending=False).head()
###Output
_____no_output_____
###Markdown
Restore old code again
###Code
# Very slow
def h(x):
"""x is a list of trip_ids, all belong to the same route"""
lens = []
for trip_id in x:
points = []
# for every stop in this trip, get its point coordinates
for stop in trip_stops[trip_id]:
df = stops[stops.stop_id == stop]
x_c = df.stop_lon.iloc[0]
y_c = df.stop_lat.iloc[0]
points.append(transformer(Point(x_c, y_c)))
# line length of this trip
lens.append(LineString(points).length)
# mean linestring len of every trip in this route
return np.mean(lens)
route_lengths = route_trips.apply(h)
route_lengths
# Average route length in meters
route_lengths.mean()
route_lengths.plot.hist()
route_lengths.to_csv('gen/d_rlengths2.csv')
###Output
_____no_output_____
###Markdown
Average distance between stops
###Code
def calc_distances(x):
'''
input: [(float, float)]
a list of coordinates
output: [float]
the distances between each coordinates
'''
return [Point(a).distance(Point(b)) for a, b in zip(x, x[1:])]
def f(x):
df = x.sort_values('stop_sequence')
points = df[['stop_lon', 'stop_lat']].apply(
lambda row: transformer(Point(row.stop_lon, row.stop_lat)),
axis=1
)
return calc_distances(points.reset_index(drop=True))
trip_stop_dists = stop_times_with_loc.groupby('trip_id').apply(f)
trip_stop_dists
trip_avg_stop_dists = trip_stop_dists.apply(np.mean)
trip_avg_stop_dists
# Global average distance between stops (meters)
trip_avg_stop_dists.mean()
trip_avg_stop_dists.median()
sns.histplot(x=trip_avg_stop_dists)
# The individual distances between every stops in every route
trip_stop_dists.explode().astype(float).describe()
trip_stop_dists.explode().astype(float).plot.box()
trip_avg_stop_dists.sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Median bus speed Removed because some stops don't have an associated stop time:
###Code
np.setdiff1d(stops.stop_id.unique(), stop_times.stop_id.unique())
stop_times[stop_times.stop_id == 1730]
stops[stops.stop_id == 1730]
###Output
_____no_output_____
###Markdown
Number of links Number of nodes Network diameter
###Code
def f(x):
df = x.sort_values('stop_sequence')
# networkx refuses to take Points so we need to transform it back to floats
def g(row):
p = transformer(Point(row.stop_lon, row.stop_lat))
return (p.x, p.y)
points = df[['stop_lon', 'stop_lat']].apply(g,axis=1)
# combine all stops into a single list for the single trip_id
return points.reset_index(drop=True).to_list()
trip_stop_coords = stop_times_with_loc.groupby('trip_id').apply(f)
trip_stop_coords
trip_stop_coords_dict = trip_stop_coords.to_dict()
g = nx.from_dict_of_lists(trip_stop_coords_dict)
#nx.algorithms.distance_measures.diameter(g)
###Output
_____no_output_____
###Markdown
Network density
###Code
#nx.classes.function.density(g)
###Output
_____no_output_____ |
Homework/To Do/Homework 03.ipynb | ###Markdown
Design Choices in Convolutional Neural Networks Importing packages
###Code
import numpy as np
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras import backend as K
from keras.preprocessing import image
from keras.applications.mobilenet import MobileNet
from keras.applications.vgg16 import preprocess_input, decode_predictions
from keras.models import Model
import timeit
import warnings
warnings.filterwarnings('ignore')
###Output
Using TensorFlow backend.
###Markdown
Preparing Dataset
###Code
batch_size = 128
num_classes = 10
epochs = 2
# input image dimensions
img_rows, img_cols = 28, 28
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
###Output
Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
###Markdown
Part 1: Influence of convolution size
Try the models with different convolution sizes 5x5, 7x7 and 9x9 etc.
Analyze the number of model parameters, accuracy and training time Model with (3 x 3) Convolution
###Code
K.clear_session()
start = timeit.default_timer()
model = Sequential()
model.add(Conv2D(8, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(16, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test))
end = timeit.default_timer()
print("Time Taken to run the model:",end - start, "seconds")
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:107: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 26, 26, 8) 80
_________________________________________________________________
conv2d_2 (Conv2D) (None, 24, 24, 16) 1168
_________________________________________________________________
flatten_1 (Flatten) (None, 9216) 0
_________________________________________________________________
dense_1 (Dense) (None, 32) 294944
_________________________________________________________________
dense_2 (Dense) (None, 10) 330
=================================================================
Total params: 296,522
Trainable params: 296,522
Non-trainable params: 0
_________________________________________________________________
Train on 60000 samples, validate on 10000 samples
Epoch 1/2
60000/60000 [==============================] - 36s 595us/step - loss: 0.2562 - acc: 0.9224 - val_loss: 0.0807 - val_acc: 0.9737
Epoch 2/2
60000/60000 [==============================] - 38s 625us/step - loss: 0.0735 - acc: 0.9784 - val_loss: 0.0595 - val_acc: 0.9818
Time Taken to run the model: 73.62484016500002 seconds
###Markdown
Try models with different Convolution sizes
###Code
# Write your code here. Use the same architecture as above.
# Write your code here. Use the same architecture as above.
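# A possible sketch for one of the requested variants (5x5 convolutions), added
# for illustration; the 7x7 and 9x9 models follow the same pattern with a
# different kernel_size. It mirrors the architecture given above.
K.clear_session()
start = timeit.default_timer()
model = Sequential()
model.add(Conv2D(8, kernel_size=(5, 5), activation='relu', input_shape=input_shape))
model.add(Conv2D(16, (5, 5), activation='relu'))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test))
end = timeit.default_timer()
print("Time Taken to run the model:", end - start, "seconds")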
# Write your code here. Use the same architecture as above.
###Output
_____no_output_____
###Markdown
Write your findings about activations here?
1. Finding 1
2. Finding 2
Part 2: Influence of Striding
Try the models with different stride sizes such as 2,3,4 etc.
Analyze the number of model parameters, accuracy and training time Model with Convolution with 2 Steps
###Code
start = timeit.default_timer()
model = Sequential()
model.add(Conv2D(8, kernel_size=(3, 3), strides=2, activation='relu', input_shape=input_shape))
model.add(Conv2D(16, (3, 3), strides=2, activation='relu'))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test))
end = timeit.default_timer()
print("Time Taken to run the model:",end - start, "seconds")
# Write your code here. Use the same architecture as above.
# Write your code here. Use the same architecture as above.
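# A possible sketch for a stride-3 variant, added for illustration (strides=4
# would be analogous); it keeps the same architecture as the cell above.
K.clear_session()
start = timeit.default_timer()
model = Sequential()
model.add(Conv2D(8, kernel_size=(3, 3), strides=3, activation='relu', input_shape=input_shape))
model.add(Conv2D(16, (3, 3), strides=3, activation='relu'))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test))
end = timeit.default_timer()
print("Time Taken to run the model:", end - start, "seconds")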
# Write your code here. Use the same architecture as above.
###Output
_____no_output_____
###Markdown
Write your findings about influence of striding here?
1. Finding 1
2. Finding 2
Part 3: Influence of Padding
Try the models with padding and without padding.
Analyze the number of model parameters, accuracy and training time Model with (3 x 3) Convolution with Same Padding
###Code
start = timeit.default_timer()
model = Sequential()
model.add(Conv2D(8, kernel_size=(3, 3), strides=1, padding='same', activation='relu', input_shape=input_shape))
model.add(Conv2D(16, (3, 3), strides=1, padding='same', activation='relu'))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test))
end = timeit.default_timer()
print("Time Taken to run the model:",end - start, "seconds")
# Write your code here. Use the same architecture as above.
# Write your code here. Use the same architecture as above.
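# A possible sketch (assumption): the same architecture without padding
# (padding='valid', which is also the Conv2D default), so the feature maps shrink
# at every layer; compile/fit/timing would be identical to the cell above.
model = Sequential()
model.add(Conv2D(8, kernel_size=(3, 3), strides=1, padding='valid', activation='relu', input_shape=input_shape))
model.add(Conv2D(16, (3, 3), strides=1, padding='valid', activation='relu'))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.summary()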
###Output
_____no_output_____
###Markdown
Write your findings about influence of padding here?
1. Finding 1
2. Finding 2
Part 4: Influence of Pooling
Try the models with different pooling window sizes such as 2x2, 3x3, 4x4 etc.
Analyze the number of model parameters, accuracy and training time Model with (3 x 3) Convolution with Pooling (2 x 2)
###Code
start = timeit.default_timer()
model = Sequential()
model.add(Conv2D(8, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(16, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test))
end = timeit.default_timer()
print("Time Taken to run the model:",end - start, "seconds")
###Output
_____no_output_____
###Markdown
Model with (3 x 3) Convolution with Pooling (3 x 3)
###Code
# Write your code here
# Use the same model design from the above cell
# Write your code here
# Use the same model design from the above cell
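# A possible sketch (assumption): the same design as the previous cell but with
# 3x3 max-pooling windows instead of 2x2; compile/fit/timing would be identical
# to the cell above.
model = Sequential()
model.add(Conv2D(8, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Conv2D(16, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.summary()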
###Output
_____no_output_____ |
5-ExamProblems/.src/EX1-F2020-Deploy/Exam1-Deploy-Student-Version.ipynb | ###Markdown
ENGR 1330 Exam 1 Sec 003/004 Fall 2020 Take Home Portion of Exam 1 Full name R: HEX: ENGR 1330 Exam 1 Sec 003/004 Date: Question 1 (1 pt): Run the cell below, and leave the results in your notebook (Windows users may get an error, leave the error in place)
###Code
#### RUN! the Cell ####
import sys
! hostname
! whoami
print(sys.executable) # OK if generates an exception message on Windows machines
###Output
theodores-MBP
theodore
/opt/anaconda3/bin/python
###Markdown
Question 2 (9 pts):- __When it is 8:00 in Lubbock,__ - __It is 9:00 in New York__ - __It is 14:00 in London__ - __It is 15:00 in Cairo__ - __It is 16:00 in Istanbul__ - __It is 19:00 in Hyderabad__ - __It is 22:00 in Tokyo__ __Write a function that reports the time in New York, London, Cairo, Istanbul, Hyderabad, and Tokyo based on the time in Lubbock. Use a 24-hour time format. Include error trapping that:__1- Issues a message like "Please Enter A Number from 00 to 23" if the first input is numeric but outside the range of [0,23].2- Takes any numeric input for "Lubbock time" selection , and forces it into an integer.3- Issues an appropriate message if the user's selection is non-numeric.__Check your function for these times:__- 8:00- 17:00- 0:00
###Code
# Code and run your solution here:
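# A possible sketch (not the official solution); the offsets come from the table above.
def world_clock(lubbock_time):
    offsets = {"New York": 1, "London": 6, "Cairo": 7, "Istanbul": 8, "Hyderabad": 11, "Tokyo": 14}
    try:
        hour = int(float(lubbock_time))  # force any numeric input into an integer
    except (TypeError, ValueError):
        print("Please enter a numeric value for the Lubbock time")
        return
    if hour < 0 or hour > 23:
        print("Please Enter A Number from 00 to 23")
        return
    for city, offset in offsets.items():
        print("{}: {:02d}:00".format(city, (hour + offset) % 24))

for test_time in [8, 17, 0]:  # check the function for the requested times
    print("--- Lubbock time {}:00 ---".format(test_time))
    world_clock(test_time)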
###Output
_____no_output_____
###Markdown
Question 3 (28 pts): Follow the steps below. Add comments to your script and signify when each step and each task is done. *hint: For this problem you will need the numpy and pandas libraries.- __STEP1: There are 8 digits in your R. Define a 2x4 array with these 8 digits, name it "Rarray", and print it__- __STEP2: Find the maximum value of the "Rarray" and its position__- __STEP3: Sort the "Rarray" along the rows, store it in a new array named "Rarraysort", and print the new array out__- __STEP4: Define and print a 4x4 array that has the "Rarray" as its two first rows, and "Rarraysort" as its next rows. Name this new array "DoubleRarray"__- __STEP5: Slice and print a 4x3 array from the "DoubleRarray" that contains the last three columns of it. Name this new array "SliceRarray".__- __STEP6: Define the "SliceRarray" as a panda dataframe:__ - name it "Rdataframe", - name the rows as "Row A","Row B","Row C", and "Row D" - name the columns as "Column 1", "Column 2", and "Column 3"- __STEP7: Print the first few rows of the "Rdataframe".__- __STEP8: Create a new dataframe object ("R2dataframe") by adding a column to the "Rdataframe", name it "Column X" and fill it with "None" values. Then, use the appropriate descriptor function and print the data model (data column count, names, data types) of the "R2dataframe"__- __STEP9: Replace the **'None'** in the "R2dataframe" with 0. Then, print the summary statistics of each numeric column in the data frame.__- __STEP10: Define a function based on the equation below, apply on the entire "R2dataframe", store the results in a new dataframe ("R3dataframe"), and print the results and the summary statistics again.__ $$ y = x^2 - 5x +7 $$- __STEP11: Print the number of occurrences of each unique value in "Column 3"__- __STEP12: Sort the data frame with respect to "Column 1" with a descending order and print it__- __STEP13: Write the final format of the "R3dataframe" on a CSV file, named "Rfile.csv"__- __STEP14: Read the "Rfile.csv" and print its content.__** __Make sure to attach the "Rfile.csv" file to your midterm exam submission.__
###Code
# Code and Run your solution here:
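# A possible sketch (using the digits 1 through 8 as a stand-in R number; replace with your own digits).
import numpy as np
import pandas as pd

Rarray = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])                      # STEP 1
print(Rarray)
print(Rarray.max(), np.unravel_index(Rarray.argmax(), Rarray.shape)) # STEP 2: max value and its (row, col) position
Rarraysort = np.sort(Rarray, axis=1)                                 # STEP 3: sort along the rows
print(Rarraysort)
DoubleRarray = np.vstack([Rarray, Rarraysort])                       # STEP 4
print(DoubleRarray)
SliceRarray = DoubleRarray[:, -3:]                                   # STEP 5: last three columns
print(SliceRarray)
Rdataframe = pd.DataFrame(SliceRarray,                               # STEP 6
                          index=["Row A", "Row B", "Row C", "Row D"],
                          columns=["Column 1", "Column 2", "Column 3"])
print(Rdataframe.head())                                             # STEP 7
R2dataframe = Rdataframe.copy()                                      # STEP 8
R2dataframe["Column X"] = None
R2dataframe.info()
R2dataframe = R2dataframe.fillna(0)                                  # STEP 9
print(R2dataframe.describe())
R3dataframe = R2dataframe.apply(lambda x: x**2 - 5*x + 7)            # STEP 10
print(R3dataframe)
print(R3dataframe.describe())
print(R3dataframe["Column 3"].value_counts())                        # STEP 11
print(R3dataframe.sort_values("Column 1", ascending=False))          # STEP 12
R3dataframe.to_csv("Rfile.csv")                                      # STEP 13
print(pd.read_csv("Rfile.csv"))                                      # STEP 14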
###Output
_____no_output_____
###Markdown
Problem 4 (32 pts)Graphing Functions Special Functions Consider the two functions listed below:\begin{equation}f(x) = e^{-\alpha x}\label{eqn:fofx}\end{equation}\begin{equation}g(x) = \gamma sin(\beta x)\label{eqn:gofx}\end{equation}Prepare a plot of the two functions on the same graph. Use the values in Table below for $\alpha$, $\beta$, and $\gamma$.|Parameter|Value||:---|---:||$\alpha$|0.50||$\beta$|3.00||$\gamma$|$\frac{\pi}{2}$|The plot should have $x$ values ranging from $0$ to $10$ (inclusive) in sufficiently small increments to see curvature in the two functions as well as to identify the number and approximate locations of intersections. In this problem, intersections are locations in the $x-y$ plane where the two "curves" cross one another of the two plots. By-hand evaluate f(x) for x=1, alpha = 1/2 (Simply enter your answer from a calculator)f(x) = By-hand evaluate g(x) for x=3.14, beta = 1/2, gamma = 2 (Simply enter your answer from a calculator)g(x) =
###Code
# Define the first function f(x,alpha), test the function using your by hand answer
# Define the second function g(x,beta,gamma), test the function using your by hand answer
# Built a list for x that ranges from 0 to 10, inclusive, with adjustable step sizes for plotting later on
# Build a plotting function that plots both functions on the same chart
# Using the plot as a guide, find the approximate values of x where the two curves intercept (i.e. f(x) = g(x))
# You can either use interactive input, or direct specify x values, but need to show results
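# A possible sketch (alpha = 0.50, beta = 3.00, gamma = pi/2 per the table above):
import numpy as np
import matplotlib.pyplot as plt

def f(x, alpha=0.5):
    return np.exp(-alpha * x)

def g(x, beta=3.0, gamma=np.pi / 2):
    return gamma * np.sin(beta * x)

print(f(1, alpha=0.5))             # spot-check against the by-hand value (about 0.607)
print(g(3.14, beta=0.5, gamma=2))  # spot-check against the by-hand value (about 2.0)

x = np.linspace(0, 10, 1001)       # small step size so the curvature is visible
plt.plot(x, f(x), label="f(x)")
plt.plot(x, g(x), label="g(x)")
plt.xlabel("x"); plt.ylabel("y"); plt.legend(); plt.show()

# approximate intersection locations: sign changes of f(x) - g(x)
diff = f(x) - g(x)
print("approximate intersections near x =", x[:-1][np.sign(diff[:-1]) != np.sign(diff[1:])])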
###Output
_____no_output_____
###Markdown
Bonus Problem 1. Extra Credit (You must complete the regular problems)!__create a class to compute the average grade (out of 10) of the students based on their grades in Quiz1, Quiz2, the Mid-term, Quiz3, and the Final exam.__| Student Name | Quiz 1 | Quiz 2 | Mid-term | Quiz 3 | Final Exam || ------------- | -----------| -----------| -------------| -----------| -------------|| Harry | 8 | 9 | 8 | 10 | 9 || Ron | 7 | 8 | 8 | 7 | 9 || Hermione | 10 | 10 | 9 | 10 | 10 || Draco | 8 | 7 | 9 | 8 | 9 || Luna | 9 | 8 | 7 | 6 | 5 |1. __Use docstrings to describe the purpose of the class.__2. __Create an object for each car brand and display the output as shown below.__"Student Name": **Average Grade** 3. __Create and print out a dictionary with the student names as keys and their average grades as data.__
###Code
#Code and run your solution here:
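# A possible sketch (illustrative, not the official solution):
class Student:
    """Compute the average grade (out of 10) from Quiz 1, Quiz 2, the Mid-term, Quiz 3 and the Final exam."""
    def __init__(self, name, grades):
        self.name = name
        self.grades = grades
    def average(self):
        return sum(self.grades) / len(self.grades)

roster = {"Harry": [8, 9, 8, 10, 9], "Ron": [7, 8, 8, 7, 9], "Hermione": [10, 10, 9, 10, 10],
          "Draco": [8, 7, 9, 8, 9], "Luna": [9, 8, 7, 6, 5]}
averages = {}
for name, grades in roster.items():
    student = Student(name, grades)          # one object per student
    averages[name] = student.average()
    print('"{}": {}'.format(student.name, student.average()))
print(averages)                              # dictionary of names -> average grades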
###Output
_____no_output_____
###Markdown
Bonus 2 Extra credit (You must complete the regular problems)! Write the VOLUME Function to compute the volume of Cylinders, Spheres, Cones, and Rectangular Boxes. This function should:- First, ask the user about __the shape of the object__ of interest using this statement:*"Please choose the shape of the object. Enter 1 for "Cylinder", 2 for "Sphere", 3 for "Cone", or 4 for "Rectangular Box""*- Second, based on user's choice in the previous step, __ask for the right inputs__.- Third, print out an statement with __the input values and the calculated volumes__. Include error trapping that:1. Issues a message that **"The object should be either a Cylinder, a Sphere, a Cone, or a Rectangular Box. Please Enter A Number from 1,2,3, and 4!"** if the first input is non-numeric.2. Takes any numeric input for the initial selection , and force it into an integer.4. Issues an appropriate message if the user's selection is numeric but outside the range of [1,4]3. Takes any numeric input for the shape characteristics , and force it into a float.4. Issues an appropriate message if the object characteristics are as non-numerics. Test the script for:1. __Sphere, r=10__2. __r=10 , Sphere__3. __Rectangular Box, w=5, h=10, l=0.5__- __Volume of a Cylinder = πr²h__- __Volume of a Sphere = 4(πr3)/3__- __Volume of a Cone = (πr²h)/3__- __Volume of a Rectangular Box = whl__
###Code
#Code and Run your solution here
###Output
_____no_output_____ |
HousePricePredictionDNN.ipynb | ###Markdown
House Price Prediction with Keras (DNN) Using Pandas and Matplotlib for Exploratory Analysis
###Code
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sns
pd.options.display.max_columns =None
training = pd.read_csv("train.csv")
testing = pd.read_csv('test.csv')
Id = testing[['Id']].values # we need to keep it for the submission file
training = training.drop(['Id'],axis = 1)
testing = testing.drop(['Id'],axis =1)
testing.shape
training.shape
training.head()
training['SalePrice'].describe()
sns.distplot(training['SalePrice'])
###Output
_____no_output_____
###Markdown
Analyzing Skewness and Kurtosis
###Code
print("Skewness: %f" % training['SalePrice'].skew())
print("Kurtosis: %f" % training['SalePrice'].kurt())
# The correlation matrix is one of the most useful visualization tools
# Correlation Matrix
corrmat = training.corr()
f,ax = plt.subplots(figsize = (12,9))
sns.heatmap(corrmat,vmax = .8,square = True)
# check correlation between sale price and numerical fields
# then check for variance between sales price and categorical fields
# https://www.kaggle.com/pmarcelino/comprehensive-data-exploration-with-python
k = 10
cols = corrmat.nlargest(k,'SalePrice')['SalePrice'].index
cm = np.corrcoef(training[cols].values.T)
sns.set(font_scale=1.25)
hm = sns.heatmap(cm,cbar = True, annot = True, square = True, fmt = '.2f',
annot_kws = {'size':10}, yticklabels = cols.values, xticklabels = cols.values)
plt.show()
###Output
_____no_output_____
###Markdown
The correlation matrix above shows us the top 10 numerical variables most correlated with the sale price. - Look at the variables GarageCars and GarageArea: we can infer that the number of cars able to fit in the garage is a consequence of the square footage of the garage, so let's keep the one with the higher correlation. - Drop GarageArea. Selecting Features
###Code
# Take only top ten variables correlation coef
t_numeric = training[corrmat.nlargest(k,'SalePrice')['SalePrice'].index]# defining column selection
t_numeric.head()
t_categorical = training.select_dtypes(include= 'object')
t_categorical['SalePrice'] = training['SalePrice']
###Output
/Users/franciscoromero/opt/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
"""Entry point for launching an IPython kernel.
###Markdown
The correlation ratio allows us to compute a score describing the relationship between a categorical variable and a numerical one. source: https://towardsdatascience.com/the-search-for-categorical-correlation-a1cf7f1888c9
###Code
def correlation_ratio(categories, measurements):
fcat, _ = pd.factorize(categories)
cat_num = np.max(fcat)+1
y_avg_array = np.zeros(cat_num)
n_array = np.zeros(cat_num)
for i in range(0,cat_num):
cat_measures = measurements[np.argwhere(fcat == i).flatten()]
n_array[i] = len(cat_measures)
y_avg_array[i] = np.average(cat_measures)
y_total_avg = np.sum(np.multiply(y_avg_array,n_array))/np.sum(n_array)
numerator = np.sum(np.multiply(n_array,np.power(np.subtract(y_avg_array,y_total_avg),2)))
denominator = np.sum(np.power(np.subtract(measurements,y_total_avg),2))
if numerator == 0:
eta = 0.0
else:
eta = numerator/denominator
return eta
for column in t_categorical:
if correlation_ratio(t_categorical[column],t_categorical['SalePrice']) > 0.4:
print(column)
print(correlation_ratio(t_categorical[column],t_categorical['SalePrice']))
# According to these findings we are going to use only these categories in our new dataset
t_categorical = t_categorical[['Neighborhood','ExterQual','BsmtQual','KitchenQual']]
training = pd.concat([t_numeric,t_categorical],axis =1)
training.head()
training.shape
testing.shape
###Output
_____no_output_____
###Markdown
Handling Missing Data
###Code
# Keep in mind that this data set has many features and some of them have most of their data missing
# In this case we are going to handle missing data in a simple way.
total = training.isnull().sum().sort_values(ascending=False)
percent = (training.isnull().sum()/training.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_data.head(30)
# We can see the features with the most missing values
# Following the kaggle article "Comprehensive Data Exploration with Python"
# we eliminate columns that have more than one missing value
# this is a simple but not the best approach, since filling missing values carefully
# can help increase the accuracy of the model
# below are our selected features and corresponding missing values
# Drop the columns containing more than one missing value
# you can choose to leave it and fill the missing data with the mode since less than 15% is missing
training= training.drop((missing_data[missing_data['Total']>1]).index,1)
training.isnull().sum().max()
training.head()
testing.shape
# for the test data I will remove the columns that were removed from the training data. We need to do this
# since our model needs the same features to predict the prices.
Y_training = training[['SalePrice']] # Our label
training = training.drop(['SalePrice'],axis = 1) # our Features
mylist = training.columns # columns that we want to keep in the testing data
mylist = mylist.values
testing = testing[mylist] # filter testing data to have only the desired columns
testing.isna().sum() # check for null values again
# For this test I will fill the remaining missing values (mode/mean) and drop GarageArea from both data sets
testing['KitchenQual']=testing['KitchenQual'].fillna(testing['KitchenQual'].mode()[0])
testing['TotalBsmtSF'] =testing['TotalBsmtSF'].fillna(testing['TotalBsmtSF'].mean())
testing.drop(['GarageArea'], axis =1, inplace =True)
training.drop(['GarageArea'], axis = 1, inplace = True)
testing['GarageCars'] =testing['GarageCars'].fillna(testing['GarageCars'].mode()[0])
testing.shape
training.shape
Y_training.shape
Id.shape
###Output
_____no_output_____
###Markdown
Dummy Variables: Dummies allow our model to better interpret categorical variables by converting them into simple (1,0) variables. e.g. a categorical column with 3 categories such as low, medium and high will be converted into 3 new columns, where 1 will indicate the presence of that category and 0 its absence.
###Code
# Getting Dummy variables for the model to be processed by our neural net
training = pd.get_dummies(training)
testing = pd.get_dummies(testing)
# Check the head and notice the one hot (dummies) as explained above
training.head()
# Check the head of testing too.
testing.head()
# Notice that I check the shape repeatedly. I do this to track any mistake when handling the data set
# In case of a shape discrepancy I will go back to see what I did wrong.
testing.shape
training.shape
###Output
_____no_output_____
###Markdown
Scaling The Data
###Code
## Convert the train and test data in to np arrays to work as input for the neural net
# Traininng data X and Y
X_training = training.values
X_training = X_training.astype(np.float)
Y_training = Y_training.values
Y_training = Y_training.astype(np.float)
# Testing Data X there is no Y since it is what we want to predict
X_testing = testing.values
X_testing = X_testing.astype(np.float)
###Output
_____no_output_____
###Markdown
Using MinMaxScaler or StandardScaler from the scikit-learn package makes it easy to normalize our data
###Code
from sklearn.preprocessing import MinMaxScaler
X_scaler = MinMaxScaler(feature_range=(0,1))
Y_scaler = MinMaxScaler(feature_range=(0,1))
X_scaled_training = X_scaler.fit_transform(X_training)
Y_scaled_training = Y_scaler.fit_transform(Y_training)
X_scaled_testing = X_scaler.transform(X_testing)
# To scale the data back we need to keep these constants
print(Y_scaler.scale_[0], Y_scaler.min_[0])
# Checking the shape of X and Y they must have the same number of rows
X_scaled_training.shape
Y_scaled_training.shape
X_scaled_testing.shape
###Output
_____no_output_____
###Markdown
Building The Model Keras is a high-level API written on top of TensorFlow and it is extremely beginner friendly. It has built-in layers, activation functions, optimizers and performance metrics, and it works like a charm. To begin, you can import layers from tensorflow.keras; this will make your code look a little cleaner.
###Code
from tensorflow.keras import layers
###Output
_____no_output_____
###Markdown
For this regression problem we are going to use fully connected layers, in other words "Dense" layers, with the ReLU (Rectified Linear Unit) activation function.
###Code
def build_model():
model = tf.keras.Sequential([
layers.Dense(256, activation='relu',input_shape=[41]),
layers.Dense(128,activation = 'relu'),
layers.Dense(64,activation = 'relu'),
layers.Dense(1, activation = 'linear')
])
optimizer = tf.keras.optimizers.Adam()
model.compile(optimizer = optimizer ,
loss = 'mse',
metrics =['mae','mse'])
return model
model= build_model()
model.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_4 (Dense) (None, 256) 10752
_________________________________________________________________
dense_5 (Dense) (None, 128) 32896
_________________________________________________________________
dense_6 (Dense) (None, 64) 8256
_________________________________________________________________
dense_7 (Dense) (None, 1) 65
=================================================================
Total params: 51,969
Trainable params: 51,969
Non-trainable params: 0
_________________________________________________________________
###Markdown
Fitting The Model
###Code
from tensorflow import keras
import tensorflow_docs as tfdocs
import tensorflow_docs.plots
import tensorflow_docs.modeling
# Early stopping helps prevent overfitting by stopping training when the validation score does not improve
# patience is the number of epochs that have to elapse without improvement before the callback stops the training
early_stop = keras.callbacks.EarlyStopping(monitor = 'val_loss',patience = 20)
history = model.fit(X_scaled_training,Y_scaled_training,epochs = 500, validation_split = 0.2,verbose =2,
                    callbacks = [early_stop, tfdocs.modeling.EpochDots()] )
# Visualize Training progress
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
plotter = tfdocs.plots.HistoryPlotter(smoothing_std=2)
plotter.plot({'Basic': history}, metric = "mae")
plt.ylim([0, 0.5])
plt.ylabel('MAE [SalePrice]')
# test_error_rate = model.evaluate(X_scaled_testing,Y_scaled_training,verbose = 2)
# print("The MSE for this data set is: {}".format(test_error_rate))
result = model.predict(X_scaled_testing)
result
###Output
_____no_output_____
###Markdown
Revert Scaling of The Predicted Results Note above that the results are coming out scaled. To scale back the results to the original scale of the data set we need the Y_scaler we computed earlier. Then, perform the opposite operations to scale back the data.
###Code
# re-scale the result to the original values
prediction = result + 0.048465490904041106
prediction = prediction / 1.3886960144424386e-06
prediction
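# Equivalently (and less error-prone), the fitted scaler can undo the scaling itself,
# instead of hard-coding the constants printed earlier; this should reproduce the
# same values as the manual rescaling above.
prediction = Y_scaler.inverse_transform(result)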
prediction.shape
Id.shape
###Output
_____no_output_____
###Markdown
Creating The Submission File
###Code
sub_df = pd.DataFrame({'Id':Id[:,0],'SalePrice':prediction[:,0]})
sub_df.isnull().sum()
sub_df.to_csv('Kaggle_House_Price_NN_13.csv',index = False)
###Output
_____no_output_____ |
notebooks/stack_models-Copy1.ipynb | ###Markdown
Base models
###Code
# NOTE: the imports below were added so that this cell can run on its own (an
# assumption about the intended environment). `X`, `y`, `cv`, `train_splits`,
# `test_splits`, `mase` (the scoring function) and `GBoost` are used further down
# and are assumed to be defined in cells that are not shown here.
import numpy as np
import pandas as pd
from copy import copy
from sklearn.base import BaseEstimator, RegressorMixin, TransformerMixin, clone
from sklearn.linear_model import Lasso, ElasticNet
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import RobustScaler, MinMaxScaler, FunctionTransformer
from sklearn.model_selection import KFold, cross_val_score
from sklearn.metrics import make_scorer
from catboost import CatBoostRegressor, Pool
import xgboost as xgb
import lightgbm as lgb
from lightgbm import LGBMRegressor
from tpot.builtins import StackingEstimator
from skgarden import RandomForestQuantileRegressor  # scikit-garden is assumed as the source of the quantile forest
lasso = make_pipeline(RobustScaler(), Lasso(alpha =0.005, random_state=1,max_iter = 2000))
ENet = make_pipeline(RobustScaler(), ElasticNet(alpha=0.01, l1_ratio=.9, random_state=3))
CatBoost = CatBoostRegressor(loss_function='MAE',random_seed =123,learning_rate=0.1,max_depth=8,task_type='GPU',od_type = 'Iter',od_wait= 15,iterations = 2000)
model_xgb = xgb.XGBRegressor(colsample_bytree=0.4603, gamma=0.0468,
learning_rate=0.15, max_depth=12,
n_estimators=500,
reg_alpha=0.4, reg_lambda=0.8,
subsample=0.5, silent=1,
random_state =7, nthread = -1)
model_lgb = lgb.LGBMRegressor(objective='mae',num_leaves=5,
learning_rate=0.15, n_estimators=500,
bagging_fraction = 0.8,
bagging_freq = 5, feature_fraction = 0.2319,
feature_fraction_seed=9 )
param_grids = {'n_estimators': 1000,
'max_features': 0.8, # tuned
'max_depth': 14, # tuned
}
model_qrf = RandomForestQuantileRegressor(**param_grids,
criterion = 'mae',
n_jobs = -1,
random_state =123)
n_folds = 5
def rmsle_cv(model,cv = None,X=None,y=None):
if cv is None:
cv = KFold(n_folds, shuffle=False, random_state=42).get_n_splits(X.values)
score = make_scorer(mase,greater_is_better=False)
#rmse= np.sqrt(-cross_val_score(model, X.values, y.values.reshape(-1,1), scoring=score, cv = cv))
rmse= cross_val_score(model, X.values, y.values.reshape(-1,), scoring=score, cv = cv)
return(rmse)
def rmsle_cv_gen(model,cv = None):
if cv is None:
cv = KFold(n_folds, shuffle=True, random_state=42).get_n_splits(X.values)
scores = []
for fold_n, (train_index, valid_index) in enumerate(cv.split(X,fixed_length=False, train_splits=train_splits, test_splits=test_splits)):
# print('Fold', fold_n, 'started at', time.ctime())
X_train, X_valid = X.iloc[train_index], X.iloc[valid_index]
y_train, y_valid = y.iloc[train_index], y.iloc[valid_index]
model.fit(X_train,y_train)
preds = model.predict(X_valid)
score_val = mase(preds,y_valid)
# print(f'Fold {fold_n}. Score: {score_val:.4f}.')
print('')
scores.append(score_val)
#print(f'CV mean score: {np.mean(scores):.4f}, std: {np.std(scores):.4f}.')
return(np.array(scores))
class StackingAveragedModels(BaseEstimator, RegressorMixin, TransformerMixin):
def __init__(self, base_models, meta_model, n_folds=5):
self.base_models = base_models
self.meta_model = meta_model
self.n_folds = n_folds
# We again fit the data on clones of the original models
def fit(self, X, y):
self.base_models_ = [list() for x in self.base_models]
self.meta_model_ = clone(self.meta_model)
kfold = KFold(n_splits=self.n_folds, shuffle=True, random_state=156)
# Train cloned base models then create out-of-fold predictions
# that are needed to train the cloned meta-model
out_of_fold_predictions = np.zeros((X.shape[0], len(self.base_models)))
for i, model in enumerate(self.base_models):
for train_index, holdout_index in kfold.split(X, y):
instance = clone(model)
self.base_models_[i].append(instance)
instance.fit(X[train_index], y[train_index])
y_pred = instance.predict(X[holdout_index])
out_of_fold_predictions[holdout_index, i] = y_pred
# Now train the cloned meta-model using the out-of-fold predictions as new feature
self.meta_model_.fit(out_of_fold_predictions, y)
return self
#Do the predictions of all base models on the test data and use the averaged predictions as
#meta-features for the final prediction which is done by the meta-model
def predict(self, X):
meta_features = np.column_stack([
np.column_stack([model.predict(X) for model in base_models]).mean(axis=1)
for base_models in self.base_models_ ])
return self.meta_model_.predict(meta_features)
score = rmsle_cv(lasso,cv = cv)
print("\nLasso score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))
score = rmsle_cv(lasso)
print("\nLasso score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))
score = rmsle_cv(ENet,cv=cv)
print("ElasticNet score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))
score = rmsle_cv(ENet,cv=None)
print("ElasticNet score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))
score = rmsle_cv(GBoost,cv=cv)
print("Gradient Boosting score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))
score = rmsle_cv(GBoost)
print("Gradient Boosting score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))
score = rmsle_cv(model_xgb,cv=cv)
print("Xgboost score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))
score = rmsle_cv(model_xgb)
print("Xgboost score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))
score = rmsle_cv(model_lgb,cv=cv)
print("LGBM score: {:.4f} ({:.4f})\n" .format(score.mean(), score.std()))
score = rmsle_cv(model_lgb)
print("LGBM score: {:.4f} ({:.4f})\n" .format(score.mean(), score.std()))
score = rmsle_cv(model_qrf,cv=cv)
print("QRFscore: {:.4f} ({:.4f})\n" .format(score.mean(), score.std()))
score = rmsle_cv(model_qrf)
print("QRF score: {:.4f} ({:.4f})\n" .format(score.mean(), score.std()))
pool = Pool(X, y)
params = {'iterations': 3000,
'loss_function': 'MAE',
'verbose': False,
"random_seed":123,
"learning_rate":0.1,
"max_depth":8,
"task_type":'GPU',
"od_type" : 'Iter',
"od_wait": 15
}
scores = cv(pool, params)
#print("Catboostmodels score: {:.4f} ({:.4f})".format(scores.mean(), scores.std()))
scores.tail()
stacked_averaged_models = StackingAveragedModels(base_models = (ENet, lasso,model_xgb),
meta_model = lasso)
score = rmsle_cv(stacked_averaged_models)
print("Stacking Averaged models score: {:.4f} ({:.4f})".format(score.mean(), score.std()))
###Output
C:\Users\abiryukov\AppData\Local\Continuum\anaconda3\envs\ocp\lib\site-packages\sklearn\linear_model\coordinate_descent.py:492: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
ConvergenceWarning)
###Markdown
Pull the TPOT pipelines as base models (together with XGBoost) and do a stacking prediction
###Code
pipe_a = make_pipeline(
make_union(
FunctionTransformer(copy,validate=False),
MinMaxScaler()
),
StackingEstimator(estimator=ExtraTreesRegressor(bootstrap=True, max_depth=5, max_features=0.15000000000000002, min_samples_leaf=0.055, min_samples_split=0.505, n_estimators=100)),
StackingEstimator(estimator=ExtraTreesRegressor(bootstrap=False, max_depth=9, max_features=0.15000000000000002, min_samples_leaf=0.255, min_samples_split=0.005, n_estimators=100)),
StackingEstimator(estimator=ExtraTreesRegressor(bootstrap=True, max_depth=5, max_features=0.35000000000000003, min_samples_leaf=0.20500000000000002, min_samples_split=0.35500000000000004, n_estimators=250)),
LGBMRegressor(colsample_bytree=0.75, learning_rate=0.01, max_bin=127, max_depth=4, min_child_weight=15, n_estimators=300, num_leaves=90, objective="fair", reg_alpha=0.05, subsample=0.75, subsample_freq=0, verbosity=-1,n_thread=-1)
)
pipe_b = make_pipeline(
StackingEstimator(estimator=ExtraTreesRegressor(bootstrap=True, max_depth=4, max_features=0.5500000000000002, min_samples_leaf=0.055, min_samples_split=0.455, n_estimators=300)),
MinMaxScaler(),
StackingEstimator(estimator=ExtraTreesRegressor(bootstrap=True, max_depth=5, max_features=0.5500000000000002, min_samples_leaf=0.255, min_samples_split=0.20500000000000002, n_estimators=500)),
LGBMRegressor(colsample_bytree=1.0, learning_rate=0.2, max_bin=63, max_depth=4, min_child_weight=0.001, n_estimators=250, num_leaves=90, objective="huber", reg_alpha=0.007, subsample=0.8, subsample_freq=0, verbosity=-1,n_thread=-1)
)
pipe_c = make_pipeline(
make_union(
FunctionTransformer(copy,validate=False),
make_union(
FunctionTransformer(copy,validate=False),
FunctionTransformer(copy,validate=False)
)
),
StackingEstimator(estimator=LGBMRegressor(colsample_bytree=1.0, learning_rate=0.005, max_bin=127, max_depth=5, min_child_weight=1, n_estimators=250, num_leaves=100, objective="mape", reg_alpha=0.01, subsample=0.7, subsample_freq=10, verbosity=-1)),
LGBMRegressor(colsample_bytree=1.0, learning_rate=0.001, max_bin=127, max_depth=4, min_child_weight=10, n_estimators=300, num_leaves=70, objective="mape", reg_alpha=0.05, subsample=0.9, subsample_freq=30, verbosity=-1,n_thread=-1)
)
stacked_averaged_models = StackingAveragedModels(base_models = (pipe_a, pipe_b,pipe_c,model_xgb),
meta_model = ENet)
score = rmsle_cv(stacked_averaged_models)
print("Stacking Averaged models score: {:.4f} ({:.4f})".format(score.mean(), score.std()))
print("Stacking Averaged models score: {:.4f} ({:.4f})".format(score.mean(), score.std()))
###Output
Stacking Averaged models score: -3.3235 (0.3430)
###Markdown
Final predictions:
###Code
from scipy.stats import boxcox
from scipy.special import inv_boxcox
preds_all = []
for year in [2016,2017]:
print(year)
data_dict = pd.read_pickle(f'../data/processed/data_dict_all.pkl')
X_test = data_dict[year]['X_test_ts'].copy()
tgt = "rougher.output.recovery"
X = data_dict[year]['X_train_ts'].copy()
print(f'1) Test shape: {X_test.shape}, train: {X.shape}')
y = data_dict[year]['y_train'][tgt].dropna()
y = y[(y>5) & (y <100)]
inds = X.index.intersection(y.index)
X = X.loc[inds]
y = y.loc[inds]
stacked_averaged_models.fit(X.values, y)
ypred_r = stacked_averaged_models.predict(X_test.values)
preds_r = pd.DataFrame(data = {'date':X_test.index, tgt:ypred_r}).set_index('date')
tgt = "final.output.recovery"
print(f'2) Test shape: {X_test.shape}, train: {X.shape}')
X = data_dict[year]['X_train_ts'].copy()
X_test = data_dict[year]['X_test_ts'].copy()
y = data_dict[year]['y_train'][tgt].dropna()
y = y[(y>5) & (y <100)]
inds = X.index.intersection(y.index)
X = X.loc[inds]
y = y.loc[inds]
print(f'3) Test shape: {X_test.shape}, train: {X.shape}')
stacked_averaged_models.fit(X.values, y)
ypred_f = stacked_averaged_models.predict(X_test.values)
preds_f = pd.DataFrame(data = {'date':X_test.index, tgt:ypred_f}).set_index('date')
preds_all.append(preds_r.join(preds_f))
stacked_preds_sub = pd.concat(preds_all)
stacked_preds_sub = stacked_preds_sub.reset_index()
stacked_preds_sub['date'] = stacked_preds_sub['date'].dt.strftime('%Y-%m-%dT%H:%M:%SZ')
stacked_preds_sub.set_index('date',inplace=True)
stacked_preds_sub.drop_duplicates(inplace=True)
stacked_preds_sub.to_csv('../results/stacked_sub_3_window.csv')
ypred_r
###Output
_____no_output_____ |
tensorflow_1_x/7_kaggle/notebooks/sql/raw/ex2_select.ipynb | ###Markdown
Intro: Try some **SELECT** statements of your own to see if you can answer the questions from a large dataset of air pollution measurements. Again, run the cell below to set everything up.
###Code
# Set up feedack system
from learntools.core import binder
binder.bind(globals())
from learntools.sql.ex2 import *
# import package with helper functions
import bq_helper
# create a helper object for this dataset
open_aq = bq_helper.BigQueryHelper(active_project="bigquery-public-data",
dataset_name="openaq")
print("Setup Complete")
# print list of tables in this dataset (there's only one!)
print('Tables list: {}'.format(open_aq.list_tables()))
# print look at top few rows
open_aq.head('global_air_quality')
###Output
_____no_output_____
###Markdown
Now you'll write and run the code to answer the questions below. Question 1) Which countries have reported pollution levels in units of "ppm"? In case it's useful to see an example query, here's some code from the tutorial:```query = """SELECT city FROM `bigquery-public-data.openaq.global_air_quality` WHERE country = 'US' """open_aq.query_to_pandas_safe(query)```
###Code
# Your Code Goes Here
first_query = ____
first_results = open_aq.query_to_pandas_safe(first_query)
# View top few rows of results
print(first_results.head())
q_1.check()
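# One possible approach (an illustrative sketch, assuming the `unit` column shown
# in the table preview above; not necessarily the only query the checker accepts):
# first_query = """
#               SELECT DISTINCT country
#               FROM `bigquery-public-data.openaq.global_air_quality`
#               WHERE unit = 'ppm'
#               """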
###Output
_____no_output_____
###Markdown
For the solution, uncomment the line below.
###Code
#q_1.solution()
###Output
_____no_output_____
###Markdown
2) Select all columns of the rows where pollution levels were reported to be exactly 0?
###Code
# Your Code Goes Here
zero_pollution_query = ____
zero_pollution_results = ____
print(zero_pollution_results.head())
q_2.check()
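# One possible approach (an illustrative sketch, assuming the `value` column holds
# the reported pollution level):
# zero_pollution_query = """
#                        SELECT *
#                        FROM `bigquery-public-data.openaq.global_air_quality`
#                        WHERE value = 0
#                        """
# zero_pollution_results = open_aq.query_to_pandas_safe(zero_pollution_query)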
###Output
_____no_output_____
###Markdown
For the solution, uncomment the line below:
###Code
#q_2.solution()
###Output
_____no_output_____ |
notebooks/truth_match_table.ipynb | ###Markdown
Rubin LSST DESC DC2: Accessing Truth-match Table with GCRCatalogs**Authors**: Yao-Yuan Mao (@yymao), Javier Sanchez (@fjaviersanchez), Joanne Bogart (@JoanneBogart)This notebook will illustrate the basics of accessing the Truth-match Table, which contains the basic properties of truth objects (i.e., inputs to the image simulations) and also matching information with respect to the Object Table.**Prerequisite**: Please go over the Object Table tutorial first! **Learning objectives**: After going through this notebook, you should be able to: 1. Load and efficiently access a DC2 Truth-match Table with the GCRCatalogs 2. Understand and have references for the Truth-match Table schema and data model 3. Make some validation plots using the Truth-match Table Before you startMake sure you have followed the instructions [here](https://lsstdesc.org/DC2-Public-Release/) to download the data files, install `GCRCatalogs`, and set up `root_dir` for `GCRCatalogs`. Import necessary packages
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import GCRCatalogs
from GCRCatalogs.helpers.tract_catalogs import tract_filter, sample_filter
from GCRCatalogs import GCRQuery
###Output
_____no_output_____
###Markdown
Access Truth-match Table with GCRCatalogsJust like the Object Table, the Truth-match Table is available as parquet files, and `GCRCatalogs` provides a high-level user interface for accessing it. However, while the Truth-match Table is a single set of parquet files, it appears as two entries in `GCRCatalogs` as we will see below. - The catalog entry `desc_dc2_run2.2i_dr6_truth` contains all truth information, including true sources that are not detected in the DR6 Object Table. - The catalog entry `desc_dc2_run2.2i_dr6_object_with_truth_match` contains all the identical information of the Object Table (`desc_dc2_run2.2i_dr6_object`), plus additional columns that contain information of best-match truth objects. In other words, `desc_dc2_run2.2i_dr6_object_with_truth_match` is like a "left join" of `desc_dc2_run2.2i_dr6_object` and `desc_dc2_run2.2i_dr6_truth`, but with spatial separation and magnitude difference as join keys. See the release note for further details. Which catalog entry should I use?- If you don't need any truth information, just use `desc_dc2_run2.2i_dr6_object`. - If you only need the best-match truth source information for each object (i.e., each row) in the Object Table, use `desc_dc2_run2.2i_dr6_object_with_truth_match`- If you need more truth infomation beyond the best matches (i.e., undetected sources, blended sources), you need to use `desc_dc2_run2.2i_dr6_truth`
###Code
GCRCatalogs.get_public_catalog_names()
object_cat = GCRCatalogs.load_catalog('desc_dc2_run2.2i_dr6_object')
match_cat = GCRCatalogs.load_catalog('desc_dc2_run2.2i_dr6_object_with_truth_match')
truth_cat = GCRCatalogs.load_catalog('desc_dc2_run2.2i_dr6_truth')
###Output
_____no_output_____
###Markdown
Let's first take a look at `match_cat` (which loads `desc_dc2_run2.2i_dr6_object_with_truth_match`). As discussed earlier, this catalog is just Object Table plus best match truth information. We can check if all the columns in the Object Table are present in this catalog too.
###Code
set(object_cat.list_all_quantities()).issubset(match_cat.list_all_quantities())
###Output
_____no_output_____
###Markdown
Indeed `match_cat` contains all columns in the Object Table. So what are the additional columns?
###Code
set(match_cat.list_all_quantities()).difference(object_cat.list_all_quantities())
###Output
_____no_output_____
###Markdown
These columns describe the information of the matched truth source for each object in the Object Table. Note that most of these columns have a `_truth` postfix (i.e., suffix) to avoid potential confusion with columns in the Object Table.
###Code
sorted(truth_cat.list_all_quantities())
# We can verify what we just said is indeed true:
additional_cols_in_match = set(match_cat.list_all_quantities()).difference(object_cat.list_all_quantities())
set(col[:-6] if col.endswith("_truth") else col for col in additional_cols_in_match) == set(truth_cat.list_all_quantities())
###Output
_____no_output_____
###Markdown
What are "best matches"? Recall that `match_cat` is a "left join" of the Object Table and the Truth Table, so each object is assigned a "best match" truth source. A "best match" truth source is the truth source that is within 1 arcsec of the object in consideration and has the smallest magnitude difference. If there are no truth sources within 1 arcsec of the object in consideration, or if the smallest magnitude difference is larger than 1 mag, then the "best match" is just the nearest neighbor. If a best match satisfies the two criteria (separation < 1 arcsec and magnitude difference < 1 mag), we call it a "good match". Otherwise, it would just be the nearest neighbor. Note that a "good match" could still (in fact, is likely to) be the nearest neighbor. Let's take a closer look. We can use the `is_good_match` and `is_nearest_neighbor` columns to tell whether the best match is a good match and/or the nearest neighbor.
###Code
data = match_cat.get_quantities(["mag_r_cModel", "mag_r_truth", "match_sep", "is_good_match", "is_nearest_neighbor"], native_filters=[tract_filter(3830)])
# Note that if the best match is not a good match, then it must be the nearest neighbor.
(data["is_good_match"] | data["is_nearest_neighbor"]).all()
# Most "good matches" (sep. < 1 arcsec, dmag < 1 mag) would also happen to be nearest neighbors (NN).
# "Not good" matches can be due to either large separation or large magnitude difference.
# Let's define a few masks to distinguish these cases:
dmag = np.abs(data["mag_r_truth"] - data["mag_r_cModel"])
good_and_nn = data["is_good_match"] & data["is_nearest_neighbor"] # Good matches that happen to be the nearest neighbor (NN) too
good_not_nn = data["is_good_match"] & (~data["is_nearest_neighbor"]) # Good matches that are not the nearest neighbor (NN)
not_good_small_dmag = (~data["is_good_match"]) & (dmag < 1) # Not good matches that have dmag < 1 mag (so they must have sep > 1 arcsec)
not_good_large_dmag = (~data["is_good_match"]) & ((dmag >= 1) | np.isnan(dmag)) # Not good matches that have dmag > 1 mag or undefined mag
# These four masks are mutually exclusive:
(np.vstack([good_and_nn, good_not_nn, not_good_small_dmag, not_good_large_dmag]).astype(np.int).sum(axis=0) == 1).all()
sep_bins = np.linspace(0, 5, 51)
fig, ax = plt.subplots(ncols=2, figsize=(10, 4), dpi=100)
hist = ax[0].hist(data["match_sep"], bins=sep_bins)[0]
ax[0].set_yscale("log");
ax[0].set_xlabel("separation [arcsec]");
ax[0].set_ylabel("number (per bin)");
bottom = None
for mask, color in zip(
[good_and_nn, good_not_nn, not_good_small_dmag, not_good_large_dmag],
[plt.cm.Greens(0.8), plt.cm.Greens(0.4), plt.cm.Oranges(0.4), plt.cm.Oranges(0.8)]
):
frac = np.histogram(data["match_sep"][mask], bins=sep_bins)[0] / hist
ax[1].bar(sep_bins[:-1], frac, width=np.ediff1d(sep_bins), bottom=bottom, align="edge", color=color)
if bottom is None:
bottom = frac
else:
bottom += frac
ax[1].set_xlabel("separation [arcsec]");
ax[1].set_ylabel("fraction (per bin)");
###Output
_____no_output_____
###Markdown
In the figure above, the left panel shows the distribution of spatial separation for all best matches (note that the y-axis is in log scale). The vast majority of objects have matches within 1 arcsec, with a small tail extending up to 5 arcsec. The right panel shows, in each bin of spatial separation, the fraction of the four situations that we defined earlier. The green bars denote "good matches" (sep. < 1 arcsec and dmag < 1 mag), and we can see that the majority of "good matches" are also the nearest neighbor (darker green), especially if the separation is small. The orange bars denote "not good matches" (which are all nearest neighbors by definition). Only about 20% of these nearest-neighbor (but not good) matches have a close magnitude (dmag < 1 mag). Let's also look at the measured vs. true magnitudes for these different matches.
###Code
fig, ax = plt.subplots(2, 2, figsize=(8, 8), dpi=100)
for mask, color, label, ax_this in zip(
[good_and_nn, good_not_nn, not_good_small_dmag, not_good_large_dmag],
[plt.cm.Greens(0.8), plt.cm.Greens(0.4), plt.cm.Oranges(0.4), plt.cm.Oranges(0.8)],
["good match\nnearest neighbor", "good match\nnot nearest neighbor", "not good match\nnearest neighbor\ndmag < 1", "not good match\nnearest neighbor\ndmag > 1"],
ax.flat
):
ax_this.scatter(data["mag_r_truth"][mask], data["mag_r_cModel"][mask], color=color, s=0.005, rasterized=True)
ax_this.set_xlim(15, 35)
ax_this.set_ylim(15, 35)
ax_this.plot([15, 35], [15, 35], color='grey', lw=0.5)
ax_this.text(16, 34, label, va="top")
fig.text(0.5, 0.02, "true r-band mag", ha="center");
fig.text(0.02, 0.5, "measured r-band mag", va="center", rotation=90);
###Output
_____no_output_____
###Markdown
Note that because `match_cat` uses the Object Table as the reference catalog for the match, a unique truth source may appear twice in the catalog.
###Code
truth_id = match_cat.get_quantities(["id_truth"], native_filters=tract_filter(3830))["id_truth"]
number_of_objects = len(truth_id)
number_of_unique_truth = len(np.unique(truth_id))
print("Number of objects", number_of_objects)
print("Number of unique truth sources", number_of_unique_truth)
###Output
_____no_output_____
###Markdown
Truth Table Now we switch to the full Truth Table, `truth_cat`. When using `truth_cat`, you will obtain all truth sources, including those below the detection limit or blended. You will also obtain only *unique* truth sources (unlike the case of `match_cat`, as we showed above). The Truth Table allows us to inspect, for example, undetected sources. Here, let's look at how many truth galaxies are detected (and not blended) as a function of the galaxy magnitude. To select only galaxies, we can filter on `truth_type == 1`; the truth type is 1 for galaxies, 2 for stars, and 3 for SNe.
###Code
truth_galaxies = truth_cat.get_quantities(
quantities=["match_objectId", "is_good_match", "mag_r"],
filters=["truth_type == 1"],
native_filters=[tract_filter(3830)],
)
mag_bins = np.linspace(14.5, 29.5, 46)
fig, ax = plt.subplots(figsize=(6,4), dpi=100)
ax.hist(truth_galaxies["mag_r"][truth_galaxies["is_good_match"]], mag_bins, histtype="step", label="has good match", lw=1.5, color="C2");
ax.hist(truth_galaxies["mag_r"][(~truth_galaxies["is_good_match"]) & (truth_galaxies["match_objectId"] > -1)], mag_bins, histtype="step", label="has NN (but not good) match", lw=1.5, color="C1");
ax.hist(truth_galaxies["mag_r"][truth_galaxies["match_objectId"] == -1], mag_bins, histtype="step", label="has no matches at all", lw=1.5, color="C3");
ax.set_yscale("log");
ax.legend(loc="upper left");
ax.set_xlabel("r-band mag");
ax.set_ylabel("number per bin");
ax.set_xticks(np.arange(14, 31));
###Output
_____no_output_____
###Markdown
From the plot above, we see that at around $r = 27$, about half of the sources are "detected", in the sense that a good match in the Object Table is found. Many more of the sources fainter than $r = 27$ are not detected (or a match is not found). On the bright end, the vast majority have good matches; the handful of cases with no good match, or no match at all, are interesting cases for blending studies. Finally, recall that in the Object Table tutorial, we looked at the galaxy number density function. We can now compare the truth galaxy number density function with the measured one! We will mostly follow the same procedure as in the Object Table tutorial, estimating the sky area assuming a rectangular geometry for a single tract.
###Code
truth_galaxies = truth_cat.get_quantities(
quantities=["mag_i", "ra", "dec"],
filters=["truth_type == 1"],
native_filters=[tract_filter(3830)],
)
measured_galaxies = object_cat.get_quantities(
quantities=["mag_i_cModel"],
filters=["extendedness == 1", "clean", (np.isfinite, "mag_i_cModel")],
native_filters=[tract_filter(3830)],
)
d_ra = truth_galaxies["ra"].max() - truth_galaxies["ra"].min()
d_dec = truth_galaxies["dec"].max() - truth_galaxies["dec"].min()
cos_dec = np.cos(np.deg2rad(np.median(truth_galaxies["dec"])))
sky_area_sq_arcmin = (d_ra * cos_dec * 60) * (d_dec * 60)
print(sky_area_sq_arcmin)
mag_bins = np.linspace(15, 30, 51)
cdf_truth = np.searchsorted(truth_galaxies["mag_i"], mag_bins, sorter=truth_galaxies["mag_i"].argsort())
cdf_measured = np.searchsorted(measured_galaxies["mag_i_cModel"], mag_bins, sorter=measured_galaxies["mag_i_cModel"].argsort())
fig, ax = plt.subplots(2, sharex=True, figsize=(6,5), gridspec_kw={"hspace": 0, "height_ratios": (3,1)}, dpi=100)
ax[0].semilogy(mag_bins, cdf_measured / sky_area_sq_arcmin, label="Measured")
ax[0].semilogy(mag_bins, cdf_truth / sky_area_sq_arcmin, label="Truth", ls="--")
ax[1].set_xlabel("$i$-band cModel mag");
ax[0].set_ylabel("Cumulative number per sq. arcmin");
ax[1].semilogy(mag_bins, cdf_measured/cdf_truth)
ax[1].set_ylim(0.5, 2)
ax[0].legend();
ax[0].grid(); # add grid to guide our eyes
ax[1].grid();
###Output
_____no_output_____ |
notebooks_paper_2021/Results/results_embeddings.ipynb | ###Markdown
Embedding effect on ESN - IMDB dataset Libraries
###Code
import io
import os
import re
import sys
import pickle
import time
from timeit import default_timer as timer
import numpy as np
import random
import torch
import torch.nn as nn
import torch.optim as optim
from datasets import load_dataset, Dataset, concatenate_datasets
from transformers import AutoTokenizer
from transformers import BertModel
from transformers.data.data_collator import DataCollatorWithPadding
import esntorch.core.reservoir as res
import esntorch.core.learning_algo as la
import esntorch.core.merging_strategy as ms
import esntorch.core.esn as esn
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
###Output
_____no_output_____
###Markdown
Dataset
###Code
# rename correct column as 'labels': depends on the dataset you load
def load_and_enrich_dataset(dataset_name, split, cache_dir):
    dataset = load_dataset(dataset_name, split=split, cache_dir=cache_dir)
dataset = dataset.rename_column('label', 'labels') # cf 'imdb' dataset
dataset = dataset.map(lambda e: tokenizer(e['text'], truncation=True, padding=False), batched=True)
dataset.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels'])
def add_lengths(sample):
sample["lengths"] = sum(sample["input_ids"] != 0)
return sample
dataset = dataset.map(add_lengths, batched=False)
return dataset
CACHE_DIR = '~/Data/huggignface/' # put your own path here
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
full_train_dataset = load_and_enrich_dataset('imdb', split='train', cache_dir=CACHE_DIR).sort("lengths") # toriving/sst5
train_val_datasets = full_train_dataset.train_test_split(train_size=0.8, shuffle=True)
train_dataset = train_val_datasets['train'].sort("lengths")
val_dataset = train_val_datasets['test'].sort("lengths")
test_dataset = load_and_enrich_dataset('imdb', split='test', cache_dir=CACHE_DIR).sort("lengths")
dataset_d = {
'full_train': full_train_dataset,
'train': train_dataset,
'val': val_dataset,
'test': test_dataset
}
dataloader_d = {}
for k, v in dataset_d.items():
dataloader_d[k] = torch.utils.data.DataLoader(v,
batch_size=128,#256, reduced for bi-direction
collate_fn=DataCollatorWithPadding(tokenizer))
dataset_d
###Output
_____no_output_____
###Markdown
Model
###Code
# results
acc_l = []
time_l = []
# loop over seeds
for seed in [7, 1949, 1947, 1979, 1983]:
print('\n SEED: {}'.format(seed))
# model params
esn_params = {
'embedding_weights': 'bert-base-uncased',
'distribution' : 'uniform',
'input_dim' : 768,
'reservoir_dim' : 500,
'bias_scaling' : 1.0,
'sparsity' : 0.99,
'spectral_radius' : 0.6,
'leaking_rate': 0.8,
'activation_function' : 'tanh',
'input_scaling' : 0.5,
'mean' : 0.0,
'std' : 1.0,
'learning_algo' : None,
'criterion' : None,
'optimizer' : None,
'merging_strategy' : 'mean',
'lexicon' : None,
'bidirectional' : False,
'device' : device, # this is new
'seed' : seed
}
# model
ESN = esn.EchoStateNetwork(**esn_params)
ESN.learning_algo = la.RidgeRegression(alpha=16)# , mode='normalize')
ESN = ESN.to(device)
# warm up (new)
nb_sentences = 3
for i in range(nb_sentences):
sentence = dataset_d["train"].select([i])
dataloader_tmp = torch.utils.data.DataLoader(sentence,
batch_size=1,
collate_fn=DataCollatorWithPadding(tokenizer))
for sentence in dataloader_tmp:
ESN.warm_up(sentence)
# train
t0 = timer()
LOSS = ESN.fit(dataloader_d["full_train"]) # changed back full_train => train
t1 = timer()
time_l.append(t1 - t0)
# predict
acc = ESN.predict(dataloader_d["test"], verbose=False)[1].item()
# results
acc_l.append(acc)
# delete objects
del ESN.learning_algo
del ESN.criterion
del ESN.merging_strategy
del ESN
torch.cuda.empty_cache()
acc_l2 = [88.37999725341797,
88.4280014038086,
88.50399780273438,
88.4959945678711,
88.2239990234375]
time_l2 = [172.32195102376863,
174.07607653178275,
174.04811804788187,
173.9174865661189,
174.44320438290015]
###Output
_____no_output_____
###Markdown
Plot
###Code
with open('embedding_results.pkl', 'rb') as fh:
acc_raw_d = pickle.load(fh)
# Take 5 samples out of 10
acc_d = {}
for k, v in acc_raw_d.items():
ints_l = random.sample(range(0, 10), 5)
acc_l = [v[k] for k in ints_l]
acc_d[k] = np.mean(acc_l), np.std(acc_l)
acc_d
# Add BERT results
acc_d['bert-base-uncased'] = np.mean(acc_l2), np.std(acc_l2)
acc_d
acc_d.pop('charngram.100d', None)
import matplotlib.pyplot as plt
plt.style.use('ggplot')
fig, ax = plt.subplots(figsize=(7,3))
acc_l = [v[0] for v in acc_d.values()]
std_l = [v[1] for v in acc_d.values()]
# ax.errorbar(range(len(acc_d)), acc_l, yerr=std_l, fmt='-', color='C1', linewidth=2)
ax.plot(range(len(acc_d)), acc_l, marker='o', color='C1', linewidth=3)
# ax.set_title('Test Accuracy of an ESN over IMDB dataset')
ax.set_xticks(range(len(acc_d)))
ax.set_xticklabels(acc_d.keys(), rotation=45)
ax.set_xlabel('Pre-trained embeddings')
ax.set_ylabel('Test accuracy')
plt.savefig("embeddings.pdf", bbox_inches='tight', dpi=300)
plt.show()
###Output
_____no_output_____ |
PythonOverview.ipynb | ###Markdown
___ ___ Python Crash CoursePlease note, this is not meant to be a comprehensive overview of Python or programming in general, if you have no programming experience, you should probably take my other course: [Complete Python Bootcamp](https://www.udemy.com/complete-python-bootcamp/?couponCode=PY20) instead.**This notebook is just a code reference for the videos, no written explanations here**This notebook will just go through the basic topics in order:* Data types * Numbers * Strings * Printing * Lists * Dictionaries * Booleans * Tuples * Sets* Comparison Operators* if, elif, else Statements* for Loops* while Loops* range()* list comprehension* functions* lambda expressions* map and filter* methods____ Data types Numbers
###Code
1 + 1
1 * 3
1 / 2
2 ** 4
4 % 2
5 % 2
(2 + 3) * (5 + 5)
###Output
_____no_output_____
###Markdown
Loop over 10 random numbers and test for even/odd
###Code
from random import randint
for i in range(10):
rand_num = randint(0, 10000)
    if rand_num % 2 == 0:  # even numbers leave no remainder; equivalently: if rand_num & 1 == 0:
        print("even: \t", rand_num)  # \t is a tab char
    else:
        print("odd: \t", rand_num)
###Output
even: 	 4872
even: 	 6880
even: 	 4630
even: 	 5010
even: 	 5416
odd: 	 9083
even: 	 3326
even: 	 7326
odd: 	 5229
odd: 	 3147
###Markdown
Variable Assignment
###Code
# Use lowercase; don't start with number or special characters; separate with _
name_of_var = 2
x = 2
y = 3
z = x + y
z
###Output
_____no_output_____
###Markdown
Comments Use a single `#` at the beginning of a line for a comment, or use: 2 spaces + `#` after code, or use a docstring block: """docstring block of text here, more docstring text here, a bit more here to explain the code... """ (end with the triple quotes followed by a blank line)
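A quick demo of these comment styles (added here for illustration):
###Code
# This is a full-line comment
x = 5  # two spaces + "#" gives an inline comment after code
"""
A triple-quoted string like this is often used as a docstring
or simply as a multi-line block of explanatory text.
"""
print(x)
###Output
_____no_output_____
###Markdown
Strings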
###Code
'single quotes'
"double quotes"
"wrap lot's of other quotes"
###Output
_____no_output_____
###Markdown
Printing
###Code
x = 'hello'
x
print(x)
num = 12
name = 'Sam'
print('My number is: {one}, and my name is: {two}'.format(one=num,two=name))
# advantage with the above method is that you don't have to worry about
# the order of 'one' and 'two', and you can use them multiple times, such as below.
# Note that you must use legal variable names in the {}
print('Number: {one}, name: {two}. And {two} is {one} years old!'.format(one=num, two=name))
print('My number is: {}, and my name is: {}'.format(num,name))
# the above method is certainly the easiest to write and read if you're using
# the values only once in a line. You can also do it this way:
print('Number: {0}, name: {1}. And {1} is {0} years old!'.format(num, name))
###Output
Number: 12, name: Sam. And Sam is 12 years old!
###Markdown
Lists
###Code
[1,2,3]
['hi',1,[1,2]]
my_list = ['a','b','c', 'd', 'e']
my_list.append('f') # appends single value to the end of the list
my_list.insert(5,'Z') # .insert(position_to_insert, value_to_insert)
my_list
my_list.extend(['j', 'k', 'l']) # add multiple items to your list
my_list
my_list[0]
my_list[1]
my_list[-1]
my_list[-4:-2]
my_list[-3::]
my_list[1:]
my_list[:3] # ending index number is NOT included
my_list[2:5]
my_list[0] = 'NEW'
my_list
my_list[5] = "HOLA!" # or use: my_list.insert(5, 'HOLA!')
my_list
nest = [1,2,3,[4,5,['target']], 6, 7, ['a', 'b', 'c']]
nest[3]
nest[-1]
nest[3][2]
nest[3][2][0]
nest[3][2]
nest[-1][0]
nested = [1, 2, [3, 4, [5, 6, [7, 8]]]]
nested[2]
nested[2][2]
nested[2][2][2]
nested[2][2][2][0]
###Output
_____no_output_____
###Markdown
Dictionaries
###Code
d = {'key1':'item1','key2':'item2'}
d
d['key1']
###Output
_____no_output_____
###Markdown
Booleans
###Code
True
False
###Output
_____no_output_____
###Markdown
Tuples
###Code
t = (1,2,3)
t[0]
t[0] = 'NEW'  # raises a TypeError: tuples are immutable
###Output
_____no_output_____
###Markdown
Sets
###Code
{1,2,3}
{1,2,3,1,2,1,2,3,3,3,3,2,2,2,1,1,2}
###Output
_____no_output_____
###Markdown
Comparison Operators
###Code
1 > 2
1 < 2
1 >= 1
1 <= 4
1 == 1
'hi' == 'bye'
###Output
_____no_output_____
###Markdown
Logic Operators
###Code
(1 > 2) and (2 < 3)
(1 > 2) or (2 < 3)
(1 == 2) or (2 == 3) or (4 == 4)
###Output
_____no_output_____
###Markdown
if,elif, else Statements
###Code
if 1 < 2:
print('Yep!')
if 1 < 2:
print('yep!')
if 1 < 2:
print('first')
else:
print('last')
if 1 > 2:
print('first')
else:
print('last')
if 1 == 2:
print('first')
elif 3 == 3:
print('middle')
else:
print('Last')
###Output
_____no_output_____
###Markdown
for Loops
###Code
seq = [1,2,3,4,5]
for item in seq:
print(item)
for item in seq:
print('Yep')
for jelly in seq:
print(jelly+jelly)
###Output
_____no_output_____
###Markdown
while Loops
###Code
i = 1
while i < 5:
print('i is: {}'.format(i))
i = i+1
###Output
_____no_output_____
###Markdown
range()
###Code
range(5)
for i in range(5):
print(i)
list(range(5))
###Output
_____no_output_____
###Markdown
list comprehension
###Code
x = [1,2,3,4]
out = []
for item in x:
out.append(item**2)
print(out)
[item**2 for item in x]
###Output
_____no_output_____
###Markdown
functions
###Code
def my_func(param1='default'):
"""
Docstring goes here.
"""
print(param1)
my_func
my_func()
my_func('new param')
my_func(param1='new param')
def square(x):
return x**2
out = square(2)
print(out)
###Output
_____no_output_____
###Markdown
lambda expressions
###Code
def times2(var):
return var*2
times2(2)
lambda var: var*2
###Output
_____no_output_____
###Markdown
map and filter
###Code
seq = [1,2,3,4,5]
map(times2,seq)
list(map(times2,seq))
list(map(lambda var: var*2,seq))
filter(lambda item: item%2 == 0,seq)
list(filter(lambda item: item%2 == 0,seq))
###Output
_____no_output_____
###Markdown
methods
###Code
st = 'hello my name is Sam'
st.lower()
st.upper()
st.split()
tweet = 'Go Sports! #Sports'
tweet.split('#')
tweet.split('#')[1]
d
d.keys()
d.items()
lst = [1,2,3]
lst.pop()
lst
'x' in [1,2,3]
'x' in ['x','y','z']
###Output
_____no_output_____ |
part02_SolversII.ipynb | ###Markdown
Overview of Solvers II Goal- Get an overview of various solvers in Ocean- Understand the main purpose of each solver- Get familiar with some basic solver parameters- Get familiar with embedding ProblemTo focus on sampler, we are going to create a simple BQM problem that we will solve using different solvers.
###Code
from dimod import BinaryQuadraticModel, to_networkx_graph
bqm = BinaryQuadraticModel('SPIN')
bqm.add_variable(0, -1)
bqm.add_variable(1, -1)
bqm.add_variable(4, -1)
bqm.add_variable(5, -1)
bqm.add_interaction(0, 4, 1.0)
bqm.add_interaction(0, 5, 1.0)
bqm.add_interaction(1, 4, 1.0)
bqm.add_interaction(1, 5, 1.0)
bqm.add_interaction(0, 1, 1.0)
bqm.add_interaction(4, 5, 1.0)
import networkx as nx
nx.draw(to_networkx_graph(bqm))
###Output
_____no_output_____
###Markdown
ExactSolver- Mainly for debugging purposes- Can solve problems with up to 20 variables (or more) depending on the system
###Code
from dimod import ExactSolver
solver = ExactSolver()
response = solver.sample(bqm)
print(response.truncate(10))
###Output
_____no_output_____
###Markdown
DWaveSampler- The main interface to the quantum annealing processor- It can select different quantum processors, including the most recent Advantage system- It can solve optimization problems cast as an Ising Hamiltonian- The solver API allows submitting more abstract problem forms such as QUBO and BQM
###Code
from dwave.system import DWaveSampler
sampler = DWaveSampler(solver=dict(topology__type='chimera'))
response = sampler.sample(
bqm, num_reads=10,
annealing_time=10,
auto_scale=False,
answer_mode='raw'
)
print(response)
###Output
_____no_output_____
###Markdown
What happened?- The graph of qubit connectivity is not fully connected. - Some qubit-qubit interactions do not exist- If the problem graph has interactions that don't exist, we cannot solve that problem directly- What should we do? EmbeddingComposite- As mentioned above, the graph of a problem and the processor may not be compatible- One may use a chain of qubits that are strongly connected to act as a single qubit- This is a complicated process that requires an hour long lecture- The `EmbeddingComposite` abstracts away all the complications
###Code
from dwave.system import EmbeddingComposite
from dwave.system import DWaveSampler
sampler = EmbeddingComposite(DWaveSampler(solver=dict(topology__type='chimera')))
response = sampler.sample(
bqm, num_reads=10,
annealing_time=10,
auto_scale=True,
answer_mode='raw',
return_embedding=True
)
print(response)
print(response.info.get('embedding_context').get('embedding'))
###Output
_____no_output_____
###Markdown
DWaveCliqueSampler- More abstractions- If you have a dense or fully connected problem, it's much better to use a clique sampler than heuristic embedding because heuristic embedding may not find efficient embeddings (e.g. it may find embeddings with larger chains and uneven chain length).- Even more abstraction $\rightarrow$ `DWaveCliqueSampler` is a standalone sampler
###Code
from dwave.system import DWaveCliqueSampler
sampler = DWaveCliqueSampler()
response = sampler.sample(
bqm, num_reads=10,
annealing_time=10,
answer_mode='raw'
)
print(response)
###Output
_____no_output_____
###Markdown
LeapHybridSampler- The most flexible solver - Can solve large, dense problems efficiently using classical and quantum resources- 20,000 variable fully connected (~200M biases)- 1 million variables with at most 200M biases- Only one parameter - time limit (the minimum time limit is chosen by default)
###Code
from dwave.system import LeapHybridSampler
sampler = LeapHybridSampler()
print(sampler.properties)
response = sampler.sample(
bqm, time_limit=10,
)
print(response)
###Output
_____no_output_____ |
Assign_SVM_Forest_fires.ipynb | ###Markdown
Problem Statement: classify the size_category using SVM

- month: month of the year, 'jan' to 'dec'
- day: day of the week, 'mon' to 'sun'
- FFMC: FFMC index from the FWI system, 18.7 to 96.20
- DMC: DMC index from the FWI system, 1.1 to 291.3
- DC: DC index from the FWI system, 7.9 to 860.6
- ISI: ISI index from the FWI system, 0.0 to 56.10
- temp: temperature in Celsius degrees, 2.2 to 33.30
- RH: relative humidity in %, 15.0 to 100
- wind: wind speed in km/h, 0.40 to 9.40
- rain: outside rain in mm/m2, 0.0 to 6.4
- size_category: the burned area of the forest (Small, Large)
###Code
import pandas as pd
import numpy as np
from sklearn.svm import SVC
import warnings
warnings.filterwarnings("ignore")
from sklearn.preprocessing import LabelEncoder
forest = pd.read_csv("G:/data sceince/Assignments/SVM/forestfires.csv")
forest.head()
forest.info()
forest.dtypes
forest.month.unique()
# converting the size_category column into labels by using LabelEncoder
label_encoder = LabelEncoder()
forest['size_category'] = label_encoder.fit_transform(forest.size_category)
# Dropping the month and day columns as they are not important
forest = forest.drop(['month','day'],axis = 1)
forest.head()
forest.describe()
forest.size_category.value_counts()
forest.dtypes
forest.shape
# Splitting the data into x and y
x = forest.drop(['size_category'],axis = 1)
y = forest['size_category']
x
y
###Output
_____no_output_____
###Markdown
Building the model
###Code
# Applying train and test split on the data
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size = 0.33, random_state = 0)
svm = SVC(kernel = 'linear', gamma = 1, C = 1)
svm.fit(x_train,y_train)
preds = svm.predict(x_test)
from sklearn.metrics import classification_report
print(classification_report(preds,y_test))
from sklearn import metrics
metrics.accuracy_score(y_test,preds)*100
###Output
_____no_output_____
###Markdown
Inference : Accuracy of the model is 98.83%
###Code
metrics.precision_score(y_test,preds)
metrics.f1_score(preds,y_test)
from sklearn.metrics import confusion_matrix
confusion_matrix(preds,y_test)
###Output
_____no_output_____ |
notebooks/atn_db.ipynb | ###Markdown
Functions to create and import data to the database for atn_resilience
###Code
#imports for the notebook
import os
# dir_path = os.path.dirname(os.getcwd())
dir_path = os.getcwd()
script_dir = dir_path + '/atnresilience'
os.chdir(script_dir)
import create_atn_db
#Create the paths for the database and raw files
#dir_path is the path to the main directory
db_path = '%s/data/processed/atn_db.sqlite' %(dir_path,)
raw_path = '%s/data/raw/'%(dir_path,)
processed_direc = '%s/data/processed/'%(dir_path,)
print(db_path)
print(processed_direc)
###Output
/Users/allen/Documents/atnresilience/data/processed/atn_db.sqlite
/Users/allen/Documents/atnresilience/data/processed/
###Markdown
Create the atn database from raw dataBefore running any of the following functions, create the following sub-directories in the main atnresilience directory: /data/raw/ /data/processed/ /data/graph/In the '/data/raw/' directory, include all of the raw data files for the years to be processed.If a year has already been run, it does not need to be run again. DO NOT IMPORT THE SAME RAW DATA TWICE. THIS WILL CREATE A DUPLICATE IN THE DB.
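To avoid accidental duplicates, a quick check like the following can be run first (a sketch only: it assumes the `atn_performance` table stores a `Year` column, which may differ from the actual schema created by `create_atn_db`):
###Code
# Sketch (hypothetical column name 'Year'; adjust to the real schema).
import sqlite3

def year_already_imported(db_path, year):
    """Return True if any rows for the given year are already in atn_performance."""
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute("SELECT COUNT(*) FROM atn_performance WHERE Year = ?", (year,))
        return cur.fetchone()[0] > 0

# Example usage:
# year_already_imported(db_path, 2015)
###Output
_____no_output_____
###Markdown
If the years are not yet present, specify the range and import: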
###Code
#Specify the range of years to add to the database:
min_year = 2015
max_year = 2015
atn_db_loader = create_atn_db.ATNLoader()
#Create the database and the table 'atn_performance'
#If already exists, nothing will happen
atn_db_loader.create_db()
#Run the function to append the data from the specified years to the database.
#Data for months 1-12 of that year must be in the folder '/data/raw'
year_range = list(range(min_year, max_year+1))
for i in year_range:
atn_db_loader.import_data(i)
print('Data inserted to database for specified years')
###Output
Finished inserting month 1 to DB.
Finished inserting month 2 to DB.
Finished inserting month 3 to DB.
Finished inserting month 4 to DB.
Finished inserting month 5 to DB.
Finished inserting month 6 to DB.
Finished inserting month 7 to DB.
Finished inserting month 8 to DB.
Finished inserting month 9 to DB.
Finished inserting month 10 to DB.
Finished inserting month 11 to DB.
Finished inserting month 12 to DB.
Finished inserting data for year 2015
Data inserted to database for specified years
###Markdown
Create the airport coordinate database from raw dataBefore running any of the following functions, the airport coordinate data file must be added to the raw folder: /data/raw/ The data can be downloaded from open flights: https://openflights.org/data.html . The extended dataset was used as the high-quality set does not include some airports which appear in the atn database. Header titles must first be added for at least the 'Country', 'ICAO', 'lat', and 'long' columns
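One way to add those header titles programmatically is sketched below (the file name `airports-extended.dat` and the assumption that the file follows the standard openflights extended column order are mine; adjust to your actual download):
###Code
# Sketch: add header titles to the raw openflights extended file.
# Column positions assume the documented openflights order: 3=Country, 5=ICAO, 6=lat, 7=long.
import pandas as pd

coords_raw = pd.read_csv(raw_path + 'airports-extended.dat', header=None)
coords_raw = coords_raw.rename(columns={3: 'Country', 5: 'ICAO', 6: 'lat', 7: 'long'})
coords_raw.to_csv(raw_path + 'airports-extended.dat', index=False)
###Output
_____no_output_____
###Markdown
With the headers in place, create the coordinates table and import the data: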
###Code
create_coord_db = create_atn_db.CoordinateLoader()
create_coord_db.create_coords_table()
create_coord_db.import_coords_data()
###Output
_____no_output_____ |
1_Methods.ipynb | ###Markdown
Methods 1 - A model setup The main idea of the study is to estimate an upped bound of alkalinity generation in the Wadden Sea.The calculation of alkalinity changes is based on the concept of "explicitly conservative form of total alkalinity" ($\text{TA}_{\text{ec}}$) ([Wolf-Gladrow et al., 2007]):$\text{TA}_{\text{ec}} = \lbrack\text{Na}^{+}\rbrack + 2\lbrack\text{Mg}^{2 +}\rbrack + 2\lbrack\text{Ca}^{2 +}\rbrack + \lbrack \text{K}^{+}\rbrack + 2\lbrack\text{Sr}^{2 +}\rbrack + \text{TNH}_{3} - \lbrack\text{Cl}^{-}\rbrack - \lbrack\text{Br}^{-}\rbrack - \lbrack\text{NO}_{3}^{-}\rbrack - \text{TPO}_{4} - 2\text{TSO}_{4} - \text{THF} - \text{THNO}_{2}$, where$\text{TNH}_{3} = \lbrack\text{NH}_{3}\rbrack + \lbrack\text{NH}_{4}^{+}\rbrack$,$\text{TPO}_{4} = \lbrack\text{H}_{3}\text{PO}_{4}\rbrack + \lbrack \text{H}_{2}\text{PO}_{4}^{-}\rbrack + \lbrack\text{HPO}_{4}^{2 -}\rbrack + \lbrack\text{PO}_{4}^{3 -}\rbrack$,$\text{TSO}_{4} = \lbrack\text{SO}_{4}^{2 -}\rbrack + \lbrack\text{HSO}_{4}^{-}\rbrack$,$\text{THF} = \lbrack \text{F}^{-}\rbrack + \lbrack\text{HF}\rbrack$, and$\text{THNO}_{2} = \lbrack\text{NO}_{2}^{-}\rbrack + \lbrack\text{HNO}_{2}\rbrack$.[Wolf-Gladrow et al., 2007]: https://doi.org/10.1016/j.marchem.2007.01.006 Increase or decrease of concentrations of any of the $\text{TA}_{\text{ec}}$ compounds will change alkalinity.For example, an increase of concentration of $\lbrack\text{Ca}^{2 +}\rbrack$ by 1 mole will increase TA by 2 moles.Or an increase of concentration of $\lbrack\text{NO}_{3}^{-}\rbrack$ by 1 mole will decrease TA by 1 mole.These changes can be caused by biogeochemical reactions or other sources like freshwater fluxes, riverine inputs, fluxes to and from sediments (see for example [Zeebe and Wolf-Gladrow (2001)], [Follows et al. (2006)], [Wolf-Gladrow et al. (2007)]).[Zeebe and Wolf-Gladrow (2001)]: https://www.elsevier.com/books/co2-in-seawater-equilibrium-kinetics-isotopes/zeebe/978-0-444-50946-8[Follows et al. (2006)]: https://doi.org/10.1016/j.ocemod.2005.05.004[Wolf-Gladrow et al. (2007)]: https://doi.org/10.1016/j.marchem.2007.01.006 In order to estimate alkalinity generation in the Wadden Sea, we should consider all processes going on in the Wadden Sea that can change the concentrations of species in $\text{TA}_{\text{ec}}$.These processes are biogeochemical transformations listed in [Wolf-Gladrow et al. 
(2007)] and transport processes within and between water column and sediments of the Wadden Sea, advective exchange of the Wadden Sea with the surrounding areas.The reason that we have to include sediments along with the water column is the following.Some biogeochemical transformations in the coastal area, which can change TA, are active in the water column, others are more active and sediments.For example, typically denitrification takes place in the sediments, in the absence of oxygen ([Libes, 2011]).Primary production is often higher in the water column, where sunlight is more available ([Libes, 2011]).Therefore, we should consider both the water column and sediments.In this study, to calculate alkalinity, we consider both the water column and sediments of the Wadden Sea.We use a vertically resolved 1-D box as a proxy of the Wadden Sea, we split this box into different layers (to resolve a vertical resolution), calculate the necessary biogeochemical reactions increments for each layer, and evaluate the mixing between these layers.Also, we take into consideration the exchange of the water column of the 1-D box with an external pool (the Wadden Sea surrounding areas).[Wolf-Gladrow et al. (2007)]: https://doi.org/10.1016/j.marchem.2007.01.006[Libes, 2011]: https://www.elsevier.com/books/introduction-to-marine-biogeochemistry/libes/978-0-12-088530-5 A model setup for calculations consist of:1. The 1-D Sympagic-Pelagic-Benthic transport Model, SPBM ([Yakubov et al., 2019], https://github.com/BottomRedoxModel/SPBM). It is a governing program resolving a transport equation (diffusive and vertical advective (sinking, burying) terms) between and within the water column and sediments. SPBM also parametrizes horizontal exchange with the external pool (the Wadden Sea surrounding areas).2. A biogeochemical model (https://github.com/BottomRedoxModel/brom_niva_module/tree/dev-sham). It sends sources minus sinks terms to the transport model. The biogeochemical model is explained thoroughly in the Methods 2 section.The software is written in Fortran 2003.The SPBM and biogeochemical model are linked through the Framework for Aquatic Biogeochemical Models, FABM ([Bruggeman and Bolding, 2014]).[Yakubov et al., 2019]: https://doi.org/10.3390/w11081582[Bruggeman and Bolding, 2014]: https://doi.org/10.1016/j.envsoft.2014.04.002 The grid For calculations, to study alkalinity generation in the Wadden Sea we use the vertically resolved box (the modeling domain) containing the water column (the water domain) and sediments (the sediment domain).This vertically resolved box is a proxy of the Wadden Sea.Assuming a mean depth of the Wadden Sea of 2.5 m ([van Beusekom et al., 1999]), we split the water domain into two layers of 1.25 m and 1.15 m depth.Near the bottom, we have a benthic boundary layer (BBL) consisting of 2 layers of 0.05 m depth each.The BBL is a layer with eddy diffusion coefficients decreasing linearly to zero at the SWI.The sediment domain has 40 layers of 0.01 m depth each.[van Beusekom et al., 1999]: https://link.springer.com/article/10.1007/BF02764176  **Figure M1-1**. The model grid scheme. Using the proposed grid the transport program (SPBM) updates each time step (300 sec.) the concentrations of the state variables (they are provided in the Methods 2 section) in each layer by contributions of diffusion, reaction (concentration increments from the biogeochemical model), advection, and horizontal exchange with the external pool. 
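For orientation, the layer thicknesses described above can be written down directly (a sketch for illustration only; the actual grid is configured inside SPBM, not in this notebook):
###Code
# Sketch of the vertical grid described above.
import numpy as np

water_layers = [1.25, 1.15]        # m, water column
bbl_layers = [0.05, 0.05]          # m, benthic boundary layer
sediment_layers = [0.01] * 40      # m, sediment domain
dz = np.array(water_layers + bbl_layers + sediment_layers)
dt = 300.0                         # s, transport time step
print("number of layers:", dz.size, "| total depth:", dz.sum(), "m")
###Output
_____no_output_____
###Markdown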
Forcing, initial and boundary conditions The functioning of the transport and biogeochemical models needs some forcing (for example, to calculate sources minus sinks terms the biogeochemical model requires data of seawater temperature, salinity, and photosynthetically active radiation (PAR)).Also, we have to establish state variables initial conditions and conditions on the boundaries of the modeling box.The data for forcing (seawater temperature, salinity, density) and initial conditions are averaged to one year from the World Ocean Database (WOD) for the years 2000 - 2010 from a rectangular region (the Southern North Sea, 54.35-55.37$^{\circ}$N 6.65-8.53$^{\circ}$E) that is adjacent to the North Frisian Wadden Sea.The data from WOD are stored in `wadden_sea.nc` file.The data of Chlorophyll a are taken from [Loebl et al. (2007)].The data of $\text{NH}_{4}^{+}$ are taken from [van Beusekom et al. (2009)].Boundary conditions are set up for $\text{O}_{2}$ at the surface boundary as an exchange with the atmosphere as in ([Yakushev et al., 2017]).For all other species, boundary conditions at the bottom and the surface interfaces of the model box are set to zero fluxes.[Loebl et al. (2007)]: https://doi.org/10.1016/j.seares.2007.06.003[van Beusekom et al. (2009)]: https://doi.org/10.1016/j.seares.2008.06.005[Yakushev et al., 2017]: https://doi.org/10.5194/gmd-10-453-2017 For diffusive updates, SPBM needs to know the vertical diffusion coefficients in the water column and the dispersion coefficients in sediments (which are analogs to vertical diffusion coefficients in the water column).The vertical diffusion coefficients in the water column are calculated according to the vertical density distributions following [Gargett (1984)].Vertical advective updates in the water column (sinking of the particles) are calculated according to the sinking velocities of particles.The dispersion coefficients in sediments and sinking velocities of particles are discussed in the Methods 3 section.Vertical advective updates in the sediments (burying) are neglected (no burying).[Gargett (1984)]: https://doi.org/10.1357/002224084788502756 SPBM calculates some state variables' horizontal exchange of the modeling domain with the external pool.To supply the model water domain with nutrients for the proper functioning of the phytoplankton model, we introduce horizontal exchange of concentration of phosphates, ammonium, nitrates, and silicates in the modeling domain with concentrations in the external pool.The concentrations of variables in the external pool are considered to have a permanent seasonal profile.The seasonal profiles of phosphates, nitrate, and silicates external pool concentrations are from the World Ocean Database from the same region as the data for forcing.Ammonium external pool seasonal profile concentrations are from [van Beusekom et al. 
(2009)].Horizontal exchange is controlled by the horizontal diffusivity coefficient$\ K_{h}$ ([Okubo, 1971], [Okubo, 1976]).The value of the horizontal diffusivity coefficient is discussed in the Methods 3 section.Along with concentrations of the corresponding elements, phosphate, ammonium and nitrate exchange also affects alkalinity according to $\text{TA}_{\text{ec}}$ expression.Sulfate ion ($\text{SO}_{4}^{2 -}$) is a compound of $\text{TA}_{\text{ec}}$ so it is also taken into account by horizontal exchange for the alkalinity exchange evaluation between the model water domain and the external pool.$\text{SO}_{4}^{2 -}$ affects TA according to $\text{TA}_{\text{ec}}$ expression as well.It is a major ion so its concentration in the external pool is approximated by a constant value of 25000 $\text{mM m}^{- 3}$.The advective exchange of other state variables is not considered (concentrations of these state variables in the external pool are assumed to be similar to the concentrations in the modeling domain).[van Beusekom et al. (2009)]: https://doi.org/10.1016/j.seares.2008.06.005[Okubo, 1971]: https://doi.org/10.1016/0011-7471(71)90046-5[Okubo, 1976]: https://doi.org/10.1016/0011-7471(76)90897-4 SPBM calculates the allochtonous organic matter influx to the modeling domain.To reflect the heterotrophic nature of the Wadden Sea ([van Beusekom et al., 1999]) we add an additional advective influx of particulate OM state variable ($\text{POM}$).This $\text{POM}$ inflow is adopted from the value for the net import of OM to the Sylt-Rømø basin in the North Frisian Wadden Sea (110 $\text{g}\ \text{m}^{- 2}\ \text{year}^{- 1}$) reported in ([van Beusekom et al., 1999]) as a sinusoidal curve with a maximum in May ([Joint and Pomroy, 1993]; [de Beer et al., 2005]).This value is also close to the Wadden Sea average OM input (100 $\text{g}\ \text{m}^{- 2}\ \text{year}^{- 1}$) from the North Sea ([van Beusekom et al., 1999]).[van Beusekom et al., 1999]: https://doi.org/10.1007/BF02764176[Joint and Pomroy, 1993]: https://www.int-res.com/articles/meps/99/m099p169.pdf[de Beer et al., 2005]: https://doi.org/10.4319/lo.2005.50.1.0113 IPython notebook `s_1_generate_netcdf.ipynb` reads the data from WOD (`wadden_sea.nc`) and forms another NetCDF data file `wadden_sea_out.nc` which contains the data filtered and averaged to one year, calculated diffusion coefficients, calculated theoretical surface PAR values for the region of the Wadden Sea, and calculated OM influx.The governing program SPBM uses `wadden_sea_out.nc` to get all the necessary information.There is an IPython notebook to check the data written in `wadden_sea_out.nc` - `s_2_check_data.ipynb` Preliminary evaluations for the biogeochemical model construction We have a tool to calculate the transport of the state variables in the multilayer box representing the Wadden Sea, but we still missing the biogeochemical model to update the concentrations of the state variables due to biogeochemical reactions.Here we provide the reasoning to include some reactions and skip others.There are thirteen terms in $\text{TA}_{\text{ec}}$ expression and the most abundant biogeochemical processes in the coastal ocean change the concentrations of six of them:$2\lbrack\text{Ca}^{2 +}\rbrack$, $\text{TNH}_{3}$, $\lbrack\text{NO}_{3}^{-}\rbrack$, $\text{TPO}_{4}$, $2\text{TSO}_{4}$, $\text{THNO}_{2}$.Therefore, if the biogeochemical reactions change the concentration of certain terms, they also change TA.The influence of the biogeochemical reactions on alkalinity directly 
follows from $\text{TA}_{\text{ec}}$ expression ([Wolf-Gladrow et al., 2007], see also the definition of $\text{TA}_{\text{ec}}$ in the beginning of this Section):[Wolf-Gladrow et al., 2007]: https://doi.org/10.1016/j.marchem.2007.01.006 1. Nutrient assimilation by primary producers. * Assimilation of one mole of $\text{NO}_{3}^{-}$ or $\text{NO}_{2}^{-}$ increases alkalinity by one mole, assimilation of one mole of $\text{NH}_{4}^{+}$ decrease alkalinity by one mole. * Assimilation of one mole of phosphate increases alkalinity by one mole.2. Organic matter degradation. * Oxygen respiration increases alkalinity by 15 moles ($16\text{NH}_{3} - 1\text{H}_{3}\text{PO}_{4}$) per 106 moles of $\text{CH}_{2}\text{O}$ oxidized:$(\text{CH}_{2}\text{O})_{106}(\text{NH}_{3})_{16}\text{H}_{3}\text{PO}_{4} + 106\text{O}_{2} \rightarrow 106\text{CO}_{2} + 16\text{NH}_{3} + \text{H}_{3}\text{PO}_{4} + 106\text{H}_{2}\text{O}$. * Denitrification increases alkalinity by 99.8 moles ($84.8\text{HNO}_{3} + 16\text{NH}_{3} - 1\text{H}_{3}\text{PO}_{4}$) per 106 moles of $\text{CH}_{2}\text{O}$ oxidized:$(\text{CH}_{2}\text{O})_{106}(\text{NH}_{3})_{16}\text{H}_{3}\text{PO}_{4} + 84.8\text{HNO}_{3} \rightarrow 106\text{CO}_{2} + 42.4\text{N}_{2} + 16\text{NH}_{3} + \text{H}_{3}\text{PO}_{4} + 148.4\text{H}_{2}\text{O}$. * Sulfate reduction increases alkalinity by 121 moles ($2 \cdot 53\text{SO}_{4}^{2 -} + 16\text{NH}_{3} - 1\text{H}_{3}\text{PO}_{4}$) per 106 moles of $\text{CH}_{2}\text{O}$ oxidized:$(\text{CH}_{2}\text{O})_{106}(\text{NH}_{3})_{16}\text{H}_{3}\text{PO}_{4} + 53\text{SO}_{4}^{2-} \rightarrow 106\text{HCO}_{3}^{-} + 16\text{NH}_{3} + \text{H}_{3}\text{PO}_{4} + 53\text{H}_{2}\text{S}$. * Other OM degradation reactions.3. Nitrification, which decreases alkalinity by two moles per mole of $\text{NH}_{4}^{+}$ oxidized:$\text{NH}_{4}^{+} + 1.5\text{O}_{2} \rightarrow \text{NO}_{3}^{-} + 2\text{H}^{+} + \text{H}_{2}\text{O}$. 4. Calcium carbonate precipitation and dissolution. * Precipitation of one mole of calcium carbonate decreases alkalinity by two moles:$\text{Ca}^{2+} + 2\text{HCO}_{3}^{-} \rightarrow \text{CaCO}_{3} + \text{CO}_{2} + \text{H}_{2}\text{O}$or$\text{Ca}^{2+} + \text{CO}_{3}^{-} \rightarrow \text{CaCO}_{3}$. * Calcium carbonate dissolution increases alkalinity by two moles per one mole of calcium carbonate dissolved:$\text{CaCO}_{3} + \text{CO}_{2} + \text{H}_{2}\text{O} \rightarrow \text{Ca}^{2 +} + 2\text{HCO}_{3}^{-}$. Now we can try to estimate which of these processes are the most important ones for alkalinity changes in the Wadden Sea.At first, we write down the mean concentrations of alkalinity and the mentioned six compounds ($\lbrack\text{Ca}^{2 +}\rbrack$, $\text{TNH}_{3}$, $\lbrack\text{NO}_{3}^{-}\rbrack$, $\text{TPO}_{4}$, $\text{TSO}_{4}$, $\text{THNO}_{2}$, all concentration are in $\text{mM m}^{- 3}$) in the Southern Wadden Sea.We use these mean concentrations from the Southern Wadden Sea as initial state of concentrations in the Wadden Sea before local biogeochemical transformations.So we can see the concentrations of $\text{TA}_{\text{ec}}$ compounds that correspond to $\text{TA}_{\text{ec}}$, afterwards we can track how biogeochemical transformations change alkalinity.
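For quick reference, the stoichiometric TA effects listed above can be collected in one place (a sketch that simply restates the numbers above):
###Code
# TA change per 106 moles of CH2O oxidized (Redfield organic matter), and per mole
# for nitrification and CaCO3 precipitation/dissolution, as listed above.
ta_effects = {
    "oxygen respiration (per 106 mol CH2O)": +15,     # 16 NH3 - 1 H3PO4
    "denitrification (per 106 mol CH2O)": +99.8,      # 84.8 HNO3 + 16 NH3 - 1 H3PO4
    "sulfate reduction (per 106 mol CH2O)": +121,     # 2*53 SO4 + 16 NH3 - 1 H3PO4
    "nitrification (per mol NH4+ oxidized)": -2,
    "CaCO3 precipitation (per mol CaCO3)": -2,
    "CaCO3 dissolution (per mol CaCO3)": +2,
}
for process, d_ta in ta_effects.items():
    print(f"{process}: {d_ta:+} mol TA")
###Output
_____no_output_____
###Markdown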
###Code
import src.fetch_data as fd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# get some data from the World Ocean Database (WOD)
par, temperature, no3, ammonium, po4, si, irradiance = fd.get_data()
f'NH4={ammonium.mean()}; NO3={no3.mean()}; PO4={po4.mean()}'
###Output
_____no_output_____
###Markdown
Nitrites' concentration is negligibly small, so we skip it.Also, we assume that average TA concentration equals 2300 $\text{mM m}^{- 3}$.$\text{Ca}^{2 +}$ and $\text{TSO}_{4}$ are the major ions of seawater with the following approximate concentrations (in $\text{mM m}^{- 3}$):
###Code
Ca, SO4 = (10000, 25000)
###Output
_____no_output_____
###Markdown
Initial concentrations of TA compound elements before local biogeochemical transformations:|Parameter:|$$\lbrack\text{Ca}^{2+}\rbrack$$|$$\text{TNH}_{3}$$|$$\lbrack\text{NO}_{3}^{-}\rbrack$$|$$\text{TPO}_{4}$$|$$\text{TSO}_{4}$$|$$\text{THNO}_{2}$$|$$\text{TA}$$||:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:||Values, $\text{mM m}^{- 3}$:|10000|3.4|16.1|0.6|25000|0|2300| These values correspond to each other, for example, $\text{NO}_{3}^{-}$ concentration of 16 $\text{mM m}^{- 3}$ corresponds to TA of 2300 $\text{mM m}^{- 3}$.An increase of $\text{NO}_{3}^{-}$ by one mole will decrease TA by one mole (due to negative charge sign).Thus, we can track TA changes due to changes of its compound ions.To understand how biogeochemical reactions can affect TA we make a function calculating TA changes according to $\text{TA}_{\text{ec}}$ expression:$$\delta [\text{TA}] = 2\delta [\text{Ca}^{2 +}] - 2\delta [\text{TSO}_{4}] + \delta [\text{NH}_{4}^{+}] - \delta [\text{NO}_{3}^{-}] - \delta [\text{PO}_{4}^{-}]$$For example, if $\text{NO}_{3}^{-}$ drops down to zero (from 16), it will increase alkalinity by 16 $\text{mM m}^{- 3}$.Providing a change of a particular compound we can track a TA change.
###Code
def alk_change(TA, dCa=0, dSO4=0, dNH4=0, dNO3=0, dPO4=0):
return TA + 2*dCa - 2*dSO4 + dNH4 - dNO3 - dPO4
def sinusoidal(max_value):
"""Creates a sinusoidal line with a period of 365,
minimum value of zero,
and a maximum value of max_value"""
day=np.arange(0,365,1)
return (1/2)*max_value*(1+np.sin(2*np.pi*((day-90)/365)))
###Output
_____no_output_____
###Markdown
Let's test a TA change due to a change of $\text{Ca}^{2 +}$ concentration.For example, calcifiers consume 100 $\text{mM m}^{- 3}$ of $\text{Ca}^{2 +}$ during a year, and then the equal amount of calcifiers' skeletons dissolve restoring the concentration of $\text{Ca}^{2 +}$ in the end of the year.
###Code
dCa = -sinusoidal(100)
Ca_year = Ca + dCa
TA_year = alk_change(TA = 2300, dCa = dCa)
ox = np.arange(0,365,1)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(2, 1, 1) # row-col-num
ax1 = fig.add_subplot(2, 1, 2)
ax.plot(ox, Ca_year); ax1.plot(ox, TA_year)
ax.set_ylabel('Ca$^{2+}$'); ax1.set_ylabel('TA');
ax1.set_xlabel('day')
###Output
_____no_output_____
###Markdown
**Figure M1-2**. TA response to $\text{Ca}^{2 +}$ change. We see that consumption of 100 $\text{mM m}^{- 3}$ of $\text{Ca}^{2 +}$ decreases alkalinity by 200 $\text{mM m}^{- 3}$, which follows directly from the $\text{TA}_{\text{ec}}$ expression. The local activities of calcifiers cannot increase alkalinity above 2300 $\text{mM m}^{- 3}$. To increase TA we need an input of $\text{Ca}^{2 +}$, which can come in the form of $\text{Ca}^{2 +}$ or $\text{CaCO}_3$. Additional $\text{Ca}^{2 +}$ can come with terrestrial inflow. We do not consider it here since we are interested in biogeochemical transformations occurring in the Wadden Sea. The supply of allochthonous $\text{CaCO}_3$ to the Wadden Sea has not yet been reported ([Thomas et al., 2009]). Calcium carbonate related biogeochemical processes therefore cannot increase alkalinity in the Wadden Sea. As a first approximation, and in line with the goal of calculating the maximum alkalinity generation in the Wadden Sea due to biogeochemical processes, we can skip $\text{CaCO}_3$ precipitation/dissolution while preparing the biogeochemical model.[Thomas et al., 2009]: https://doi.org/10.5194/bg-6-267-2009
###Code
dSO4 = -sinusoidal(100)
SO4_year = SO4 + dSO4
TA_year = alk_change(TA = 2300, dSO4 = dSO4)
fig = plt.figure(figsize=(8, 6))
ax, ax1 = (fig.add_subplot(2, 1, 1), fig.add_subplot(2, 1, 2))
ax.plot(ox, SO4_year); ax1.plot(ox, TA_year)
ax.set_ylabel('SO$_4^{2-}$'); ax1.set_ylabel('TA')
ax1.set_xlabel('day')
###Output
_____no_output_____ |
part_0/python_grammar_00.ipynb | ###Markdown
Python characteristics Python is a programming language created by the famous "uncle turtle" Guido van Rossum during the Christmas holidays of 1989, as a way to pass an otherwise boring Christmas.
* Purpose of a programming language: a programming language is used to write programs, and the goal is to have a computer execute them; the CPU only understands machine instructions, so every programming language has to be "translated" into machine instructions; however, different languages need different amounts of code for the same task. For example, a task that takes 1000 lines of C might take only 100 lines of Java and perhaps only 20 lines of Python, so Python is a rather high-level language; the price of writing less code is slower execution.
Characteristics of other popular programming languages:
* C: a procedural language, fast, well suited to writing operating systems and embedded programs
* Java: object-oriented with concise syntax, well suited to web apps
* JavaScript: web programming
**Every programming language has its own characteristics, so there is no single best language, only the language best suited to a particular situation.**
Apps and other applications developed with Python:
* Youtube - a video social network
* Douban (豆瓣网) - a database website for movies and other cultural products
* Zhihu (知乎) - a question-and-answer website
* AI framework - TensorFlow
Python advantages: 1- simple syntax and a huge number of third-party libraries; 2- the "official" language of AI development.
Python disadvantages: 1- slow execution: because Python is an interpreted language, your code is translated line by line into machine code the CPU can understand while it runs, and this translation is very time-consuming; a C program, by contrast, is compiled into machine code before it runs, so it is fast. 2- the code cannot be encrypted: if you want to distribute your Python program, you are effectively distributing the source code; this differs from C, where you do not need to release the source code, only the compiled machine code (the xxx.exe files you commonly see on Windows).
Python interpreters Python has several different interpreters: 1- CPython: after downloading and installing Python 3.x from the official Python website, you get the official interpreter, CPython. It is written in C, hence the name CPython (the most widely used interpreter). 2- IPython: an interactive interpreter built on top of CPython; IPython only enhances the interactive experience, and it executes Python code exactly the same way CPython does. 3- Jython: a Python interpreter that runs on the Java platform and can compile Python code directly to Java bytecode.
Running a Python file Some students ask whether a .py file can be run directly, like an .exe file. On Windows it cannot, but on Mac and Linux it can: add a special comment as the first line of the .py file, `#!/usr/bin/env python3`, followed by the code, e.g. `print('hello, world')`. Then give hello.py execute permission from the command line: `$ chmod a+x hello.py`, and run it, e.g. `./hello.py`. Note: nowadays the required Python environment is usually set up by installing Anaconda, which also makes installing and managing third-party Python packages very convenient. Programming language popularity - the TIOBE index (2020)
###Code
%%html
<img src='TIOBE排行榜.png' width=800 height=500>
###Output
_____no_output_____ |
lipschitz_estimates/ionoshpere_lipschitz_estimates.ipynb | ###Markdown
Ionosphere Dataset - Lipschitz Continuity - LIME - SHAP
###Code
print("Bismillahir Rahmanir Rahim")
###Output
Bismillahir Rahmanir Rahim
###Markdown
Imports and Paths
###Code
from IPython.display import display, HTML
from lime.lime_tabular import LimeTabularExplainer
from pprint import pprint
from scipy.spatial.distance import pdist, squareform
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, confusion_matrix
from sklearn.utils.multiclass import unique_labels
from sklearn import metrics
from sklearn.metrics import classification_report
from sklearn.metrics.pairwise import cosine_similarity
from scipy import spatial
%matplotlib inline
import glob
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pathlib
import sklearn
import seaborn as sns
import statsmodels
import eli5
import lime
import shap
shap.initjs()
###Output
_____no_output_____
###Markdown
Load and preprocess dataTrain/test split = 0.80/0.20
###Code
# Set the seed experimentations and interpretations.
np.random.seed(111)
project_path = pathlib.Path.cwd().parent.parent
import pathlib
dataset_path = str(project_path) + '/datasets/ionosphere/ionosphere.data'
# print(dataset_path)
fp = open(dataset_path, "r")
rows = []
for line in fp:
rows.append(line)
rows_sep = [sub.split(",") for sub in rows]
iono = pd.DataFrame(np.array(rows_sep))
iono_col_names = np.array(iono.columns.tolist()).astype(str)
iono_col_names = np.where(iono_col_names=='34', 'label', iono_col_names)
iono.columns = iono_col_names
iono['label'] = iono['label'].apply(lambda label: label.split('\n')[0])
labels_iono = iono['label']
labels_iono_list = labels_iono.values.tolist()
# labels_iono_codes = labels_train_iono.astype("category").cat.codes
features_iono = iono.iloc[:,:-1]
display(iono.head())
train_iono, test_iono, labels_train_iono, labels_test_iono = train_test_split(
features_iono, labels_iono, train_size=0.80)
labels_train_iono_codes = labels_train_iono.astype("category").cat.codes
labels_test_iono_codes = labels_test_iono.astype("category").cat.codes
""" This form is only compatiable with rest of the notebook code.
"""
train = train_iono.to_numpy().astype(float)
labels_train = labels_train_iono.to_numpy()
test = test_iono.to_numpy().astype(float)
labels_test = labels_test_iono.to_numpy()
x_testset = test
feature_names = features_iono.columns.values
target_names = np.unique(labels_test)
# here 0 = 'b' & 1 = 'g'
unique_targets = np.unique([0, 1])  # LIME only takes integer class labels
print("Feature names", feature_names)
print("Target names", target_names)
print("Number of uniques label or target names", unique_targets)
print("Training record", train[0:1])
print("Label for training record", labels_train[0:1])
###Output
Training record [[ 1. 0. 0.52542 -0.0339 0.94915 0.08475 0.52542 -0.16949
0.30508 -0.01695 0.50847 -0.13559 0.64407 0.28814 0.83051 -0.35593
0.54237 0.01695 0.55932 0.0339 0.59322 0.30508 0.86441 0.05085
0.40678 0.15254 0.67287 -0.00266 0.66102 -0.0339 0.83051 -0.15254
0.76271 -0.10169]]
Label for training record ['g']
###Markdown
Train and evaluate models.Train Logistic Regression and Random Forest models so these can be used as black box models when evaluating explanations methods. Fit Logistic Regression and Random Forest
###Code
lr = sklearn.linear_model.LogisticRegression(class_weight='balanced')
lr.fit(train, labels_train)
rf = RandomForestClassifier(n_estimators=500, class_weight='balanced_subsample')
rf.fit(train, labels_train)
###Output
_____no_output_____
###Markdown
Predict using logistic regression and random forest models
###Code
labels_pred_lr = lr.predict(test)
labels_pred_rf = rf.predict(test)
score_lr = metrics.accuracy_score(labels_test, labels_pred_lr)
score_rf = metrics.accuracy_score(labels_test, labels_pred_rf)
print("Logitic Regression accuracy score.", score_lr)
predict_proba_lr = lr.predict_proba(test[:5])
print("\nLogistic Regression predict probabilities\n\n", predict_proba_lr)
predict_lr = lr.predict(test[:5])
print("\nLogistic Regression predictions", predict_lr)
print("\n\n\nRandom Forest accuracy score.", score_rf)
predict_proba_rf = rf.predict_proba(test[:5])
print("\nRandom Forest predict probabilities\n\n", predict_proba_rf)
predict_rf = rf.predict(test[:5])
print("\nRandom Forest predictions", predict_rf)
###Output
Logitic Regression accuracy score. 0.8309859154929577
Logistic Regression predict probabilities
[[0.98942588 0.01057412]
[0.35666392 0.64333608]
[0.01094379 0.98905621]
[0.90375436 0.09624564]
[0.1104594 0.8895406 ]]
Logistic Regression predictions ['b' 'g' 'g' 'b' 'g']
Random Forest accuracy score. 0.9577464788732394
Random Forest predict probabilities
[[0.984 0.016]
[0.08 0.92 ]
[0.756 0.244]
[0.904 0.096]
[0.048 0.952]]
Random Forest predictions ['b' 'g' 'b' 'b' 'g']
###Markdown
Classification reports of logistic regression and random forest
###Code
report_lr = classification_report(labels_test, labels_pred_lr, target_names=target_names)
print("Logistic Regression classification report.")
print(report_lr)
report_rf = classification_report(labels_test, labels_pred_rf, target_names=target_names)
print("Random Forestclassification report.")
print(report_rf)
###Output
Logistic Regression classification report.
precision recall f1-score support
b 0.82 0.69 0.75 26
g 0.84 0.91 0.87 45
accuracy 0.83 71
macro avg 0.83 0.80 0.81 71
weighted avg 0.83 0.83 0.83 71
Random Forestclassification report.
precision recall f1-score support
b 0.93 0.96 0.94 26
g 0.98 0.96 0.97 45
accuracy 0.96 71
macro avg 0.95 0.96 0.95 71
weighted avg 0.96 0.96 0.96 71
###Markdown
Classification reports display as dataframes
###Code
total_targets = len(target_names)
report_lr = classification_report(labels_test, labels_pred_lr, target_names=target_names, output_dict=True)
report_lr = pd.DataFrame(report_lr).transpose().round(2)
report_lr = report_lr.iloc[:total_targets,:-1]
display(report_lr)
report_rf = classification_report(labels_test, labels_pred_rf, target_names=target_names, output_dict=True)
report_rf = pd.DataFrame(report_rf).transpose().round(2)
report_rf = report_rf.iloc[:total_targets,:-1]
display(report_rf)
avg_f1_lr = report_lr['f1-score'].mean()
print("Logistic Regression average f1-score", avg_f1_lr)
avg_f1_rf = report_rf['f1-score'].mean()
print("Random Forest average f1-score", avg_f1_rf)
###Output
Logistic Regression average f1-score 0.81
Random Forest average f1-score 0.955
###Markdown
Confusion matrix of logistic regression and random forest
###Code
matrix_lr = confusion_matrix(labels_test, labels_pred_lr)
matrix_lr = pd.DataFrame(matrix_lr, columns=target_names).transpose()
matrix_lr.columns = target_names
display(matrix_lr)
matrix_rf = confusion_matrix(labels_test, labels_pred_rf)
matrix_rf = pd.DataFrame(matrix_rf, columns=target_names).transpose()
matrix_rf.columns = target_names
display(matrix_rf)
###Output
_____no_output_____
###Markdown
Combine confusion matrix and classification report of logistic regression and random forest
###Code
matrix_report_lr = pd.concat([matrix_lr, report_lr], axis=1)
display(matrix_report_lr)
matrix_report_rf = pd.concat([matrix_rf, report_rf], axis=1)
display(matrix_report_rf)
###Output
_____no_output_____
###Markdown
Saving matrices and reports into csvThese CSVs can be used easily to draw tables in LaTex.
###Code
file_path = str(project_path) + '/datasets/modelling-results/'
filename = 'iono_matrix_report_lr.csv'
matrix_report_lr.to_csv(file_path + filename, index=True)
filename = 'iono_matrix_report_rf.csv'
matrix_report_rf.to_csv(file_path + filename, index=True)
###Output
_____no_output_____
###Markdown
Extract predicted target names for logistic regression and random forest
###Code
target_names = target_names
targets = unique_targets
targets_labels = dict(zip(targets, target_names))
print(targets_labels)
###Output
{0: 'b', 1: 'g'}
###Markdown
Ionosphere dataset-specific changes to extract codes Extracting integer codes such as [0, 1] for the values ['b', 'g']
###Code
dummies = pd.get_dummies(labels_pred_lr)
labels_pred_codes_lr = dummies.values.argmax(1)
dummies = pd.get_dummies(labels_pred_rf)
labels_pred_codes_rf = dummies.values.argmax(1)
labels_names_pred_lr = []
for label in labels_pred_codes_lr:
labels_names_pred_lr.append(targets_labels[label])
labels_names_pred_rf = []
for label in labels_pred_codes_rf:
labels_names_pred_rf.append(targets_labels[label])
print("Logistic Regression predicted targets and their names.\n")
print(labels_pred_codes_lr)
print(labels_names_pred_lr)
print("\n\nRandom Forest predicted targets and their names.")
print(labels_pred_codes_rf)
print(labels_names_pred_rf)
###Output
Logistic Regression predicted targets and their names.
[0 1 1 0 1 1 1 1 1 0 0 1 0 1 1 1 0 1 0 1 1 0 1 0 1 1 1 0 1 1 1 1 0 0 1 1 0
1 1 1 0 1 1 1 1 0 1 1 1 1 1 0 1 1 1 1 1 0 1 1 1 0 0 1 1 1 1 0 0 0 1]
['b', 'g', 'g', 'b', 'g', 'g', 'g', 'g', 'g', 'b', 'b', 'g', 'b', 'g', 'g', 'g', 'b', 'g', 'b', 'g', 'g', 'b', 'g', 'b', 'g', 'g', 'g', 'b', 'g', 'g', 'g', 'g', 'b', 'b', 'g', 'g', 'b', 'g', 'g', 'g', 'b', 'g', 'g', 'g', 'g', 'b', 'g', 'g', 'g', 'g', 'g', 'b', 'g', 'g', 'g', 'g', 'g', 'b', 'g', 'g', 'g', 'b', 'b', 'g', 'g', 'g', 'g', 'b', 'b', 'b', 'g']
Random Forest predicted targets and their names.
[0 1 0 0 1 1 1 1 1 0 0 1 0 1 1 0 0 1 0 1 1 1 1 0 1 1 1 0 1 0 1 1 0 0 1 0 0
1 1 1 0 1 1 1 0 0 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 0 0 1 1 1 0 0 1 0 0]
['b', 'g', 'b', 'b', 'g', 'g', 'g', 'g', 'g', 'b', 'b', 'g', 'b', 'g', 'g', 'b', 'b', 'g', 'b', 'g', 'g', 'g', 'g', 'b', 'g', 'g', 'g', 'b', 'g', 'b', 'g', 'g', 'b', 'b', 'g', 'b', 'b', 'g', 'g', 'g', 'b', 'g', 'g', 'g', 'b', 'b', 'g', 'g', 'g', 'g', 'g', 'b', 'b', 'g', 'g', 'g', 'g', 'g', 'g', 'g', 'g', 'b', 'b', 'g', 'g', 'g', 'b', 'b', 'g', 'b', 'b']
###Markdown
Interpret Black Box Models 1. Interpret Logitistic Regression and Random Forest using LIME LIME explanations util functions
###Code
def lime_explanations(index, x_testset, explainer, model, unique_targets, class_predictions):
instance = x_testset[index]
exp = explainer.explain_instance(instance,
model.predict_proba,
labels=unique_targets,
top_labels=None,
num_features=len(x_testset[index]),
num_samples=6000)
# Array class_predictions contains predicted class labels
exp_vector_predicted_class = exp.as_map()[class_predictions[index]]
return (exp_vector_predicted_class, exp.score), exp
def explanation_to_dataframe(index, x_testset, explainer, model, unique_targets, class_predictions, dataframe):
feature_imp_tuple, exp = lime_explanations(index,
x_testset,
explainer,
model,
unique_targets,
class_predictions)
exp_val = tuple(sorted(feature_imp_tuple[0]))
data = dict((x, y) for x, y in exp_val)
list_val = list(data.values())
list_val.append(feature_imp_tuple[1])
dataframe.loc[index] = list_val
return dataframe, exp
""" Define LIME Explainer
"""
explainer_lime = LimeTabularExplainer(train,
mode = 'classification',
training_labels = labels_train,
feature_names=feature_names,
verbose=False,
class_names=target_names,
feature_selection='auto',
discretize_continuous=True)
from tqdm import tqdm
col_names = list(feature_names)
col_names.append('lime_score')
###Output
_____no_output_____
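###Markdown
A quick usage sketch (added, not in the original notebook): the helper above can be called for a single test instance to inspect the LIME feature weights and the local surrogate fit before looping over the whole test set. The test index 0 is arbitrary.
###Code
# Hedged example: explain one test instance of the logistic regression model.
(single_exp_vector, single_exp_score), single_exp = lime_explanations(
    0, test, explainer_lime, lr, unique_targets, labels_pred_codes_lr)
print("Local surrogate fit (exp.score):", single_exp_score)
print("First few (feature_index, weight) pairs:", sorted(single_exp_vector)[:5])
###Output
_____no_output_____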
###Markdown
Interpret logistic regression on testset using LIME
###Code
explanations_lime_lr = pd.DataFrame(columns=col_names)
for index in tqdm(range(0,len(test))):
explanations_lime_lr, exp = explanation_to_dataframe(index,
test,
explainer_lime,
lr, # logistic regression model
unique_targets,
labels_pred_codes_lr, # logistic regression predictions
explanations_lime_lr)
print("LIME explanations on logistic regression.")
display(explanations_lime_lr.head())
display(explanations_lime_lr.iloc[:,:-1].head(1))
###Output
LIME explanations on logistic regression.
###Markdown
Interpret random forest on testset using LIME
###Code
explanations_lime_rf = pd.DataFrame(columns=col_names)
for index in tqdm(range(0,len(test))):
explanations_lime_rf, exp = explanation_to_dataframe(index,
test,
explainer_lime,
rf, # random forest model
unique_targets,
labels_pred_codes_rf, # random forest predictions
explanations_lime_rf)
print("LIME explanations on random forest.")
display(explanations_lime_rf.head())
display(explanations_lime_rf.iloc[:,:-1].head(1))
###Output
LIME explanations on random forest.
###Markdown
2. Interpret Logistic Regression and Random Forest using SHAP
###Code
def shapvalue_to_dataframe(test, labels_pred, shap_values, feature_names):
exp_shap_array = []
for test_index in range(0, len(test)):
label_pred = labels_pred[test_index]
exp_shap_array.append(shap_values[label_pred][test_index])
df_exp_shap = pd.DataFrame(exp_shap_array)
df_exp_shap.columns = feature_names
return df_exp_shap
###Output
_____no_output_____
###Markdown
Interpret logistic regression using SHAP
###Code
shap_train_summary = shap.kmeans(train, 50)
explainer_shap_lr = shap.KernelExplainer(lr.predict_proba, shap_train_summary)
# print("Shap Train Sample Summary", shap_train_summary)
shap_values_lr = explainer_shap_lr.shap_values(test, nsamples='auto')
shap_expected_values_lr = explainer_shap_lr.expected_value
print("Shapley Expected Values", shap_expected_values_lr)
shap.summary_plot(shap_values_lr, test, feature_names=feature_names)
###Output
_____no_output_____
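###Markdown
An optional single-prediction view (added sketch, not in the original notebook): SHAP's force plot shows how each feature pushes one prediction away from the expected value for its predicted class. The test index 0 is arbitrary.
###Code
# Hedged example: force plot for the predicted class of the first test instance.
pred_class = labels_pred_codes_lr[0]
shap.force_plot(shap_expected_values_lr[pred_class],
    shap_values_lr[pred_class][0],
    test[0],
    feature_names=feature_names,
    matplotlib=True)
###Output
_____no_output_____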
###Markdown
Interpret random forest using SHAP
###Code
shap_values_rf = shap.TreeExplainer(rf).shap_values(test)
shap.summary_plot(shap_values_rf, test, feature_names=feature_names)
###Output
_____no_output_____
###Markdown
Extract explanations from SHAP values computed on the logistic regression and random forest models. Preprocessing SHAP values: **_shap_values_** returns a 3D structure of the form (num_classes, num_test_instances, num_features); for this Ionosphere test set that is (len(unique_targets), len(test), len(feature_names)).
###Code
explanations_shap_lr = shapvalue_to_dataframe(test,
labels_pred_codes_lr,
shap_values_lr,
feature_names)
display(explanations_shap_lr.head())
display(explanations_shap_lr.iloc[:,:].head(1))
explanations_shap_rf = shapvalue_to_dataframe(test,
labels_pred_codes_rf,
shap_values_rf,
feature_names)
display(explanations_shap_rf.head())
display(explanations_shap_rf.iloc[:,:].head(1))
###Output
_____no_output_____
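###Markdown
A quick sanity check (added, not in the original notebook) of the assumed (num_classes, num_test_instances, num_features) structure for this Ionosphere test set.
###Code
# shap_values_* is expected to hold one (num_test_instances x num_features) block per class.
print("classes:", len(shap_values_lr))
print("per-class block shape:", shap_values_lr[0].shape)
print("expected:", (len(unique_targets), len(test), len(feature_names)))
###Output
_____no_output_____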
###Markdown
Local Lipschitz Estimates as a stability measure for LIME & SHAP Find Local Lipschitz of points L(x) Define neighborhood around anchor point x0
###Code
def norm(Xs, x0, norm=2):
# https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html
norm = np.linalg.norm(x0 - Xs, norm) # /np.linalg.norm(b[0] - b, 2)
return norm
def neighborhood_with_euclidean(x_points, anchor_index, radius):
x_i = x_points[anchor_index]
radius = radius * np.sqrt(len(x_points[anchor_index]))
x_js = x_points.tolist()
del x_js[anchor_index]
dist = (x_i - x_js)**2
dist = np.sum(dist, axis=1)
dist = np.sqrt(dist)
neighborhood_indices = []
for index in range(0, len(dist)):
if dist[index] < radius:
neighborhood_indices.append(index)
return neighborhood_indices
def neighborhood_with_KDTree(x_points, anchor_index, radius):
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.query_ball_point.html
tree = spatial.KDTree(x_points)
neighborhood_indices = tree.query_ball_point(x_points[anchor_index],
radius * np.sqrt(len(x_points[anchor_index])))
return neighborhood_indices
###Output
_____no_output_____
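###Markdown
A tiny synthetic check (added sketch, not in the original notebook) of the neighbourhood helper: points within radius * sqrt(d) of the anchor are returned, and the indices refer to the point list after the anchor has been removed.
###Code
# Hedged example on made-up 2-D points; the anchor is the origin (index 0).
demo_points = np.array([[0.0, 0.0],
    [0.1, 0.1],
    [5.0, 5.0],
    [0.2, 0.0]])
print(neighborhood_with_euclidean(demo_points, anchor_index=0, radius=0.75))
###Output
_____no_output_____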
###Markdown
Local Lipschitz of explanation methods (LIME, SHAP)
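As a rough sketch of the quantity estimated below (added note): for an anchor point $x_i$ with explanation vector $f(x_i)$ and neighbourhood $N_\epsilon(x_i)$, the local Lipschitz estimate is
$$\hat{L}(x_i) = \max_{x_j \in N_\epsilon(x_i)} \frac{\lVert f(x_i) - f(x_j) \rVert_2}{\lVert x_i - x_j \rVert_2},$$
so smaller values indicate explanations that are more stable under small input perturbations.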
###Code
def lipschitz_formula(nearby_points, nearby_points_exp, anchorX, anchorX_exp):
anchorX_norm2 = np.apply_along_axis(norm, 1, nearby_points, anchorX)
anchorX_exp_norm2 = np.apply_along_axis(norm, 1, nearby_points_exp, anchorX_exp)
anchorX_avg_norm2 = anchorX_exp_norm2/anchorX_norm2
anchorX_LC_argmax = np.argmax(anchorX_avg_norm2)
return anchorX_avg_norm2, anchorX_LC_argmax
def lipschitz_estimate(anchorX, x_points, explanations_x_points, anchor_index, neighborhood_indices):
# extract anchor point explanations
anchorX_exp = explanations_x_points[anchor_index]
# extract anchor point neighborhood's explanations
nearby_points = x_points[neighborhood_indices]
nearby_points_exp = explanations_x_points[neighborhood_indices]
# find local lipschitz estimate (lc)
anchorX_avg_norm2, anchorX_LC_argmax = lipschitz_formula(nearby_points,
nearby_points_exp,
anchorX,
anchorX_exp)
return anchorX_avg_norm2, anchorX_LC_argmax
def find_lipschitz_estimates(x_points, x_points_lime_exp, x_points_shap_exp, radii):
# https://docs.scipy.org/doc/numpy/reference/generated/numpy.apply_along_axis.html
# https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.argmax.html
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.query_ball_point.html
instances = []
anchor_x_index = []
lc_coefficient_lime = []
x_deviation_index_lime = []
x_deviation_index_shap = []
lc_coefficient_shap = []
radiuses = []
neighborhood_size = []
for radius in radii:
for anchor_index in range(0, len(x_points)):
# define neighorbood of around anchor point using radius
# neighborhood_indices = neighborhood_with_KDTree(x_points, anchor_index, radius)
# neighborhood_indices.remove(anchor_index) # remove anchor index to remove anchor point
neighborhood_indices = neighborhood_with_euclidean(x_points, anchor_index, radius)
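# NOTE (added): neighborhood_with_euclidean returns indices into the point list with the
# anchor removed, so indices >= anchor_index are shifted by one relative to
# x_points / explanations_x_points used below; neighborhood_with_KDTree does not shift.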
print(neighborhood_indices)
radiuses.append(radius)
if len(neighborhood_indices) == 0:
continue
neighborhood_size.append(len(neighborhood_indices))
# extract anchor point and its original index
anchorX = x_points[anchor_index]
instances.append(anchorX)
anchor_x_index.append(anchor_index)
# find local lipschitz estimate (lc) LIME
anchorX_avg_norm2, anchorX_LC_argmax = lipschitz_estimate(anchorX,
x_points,
x_points_lime_exp,
anchor_index,
neighborhood_indices)
lc_coefficient_lime.append(anchorX_avg_norm2[anchorX_LC_argmax])
# find deviation point from anchor point LIME explanations
deviation_point_index = neighborhood_indices[anchorX_LC_argmax]
x_deviation_index_lime.append(deviation_point_index)
# find local lipschitz estimate (lc) SHAP
anchorX_avg_norm2, anchorX_LC_argmax = lipschitz_estimate(anchorX,
x_points,
x_points_shap_exp,
anchor_index,
neighborhood_indices)
lc_coefficient_shap.append(anchorX_avg_norm2[anchorX_LC_argmax])
# find deviation point from anchor point LIME explanations
deviation_point_index = neighborhood_indices[anchorX_LC_argmax]
x_deviation_index_shap.append(deviation_point_index)
# columns_lipschitz will be reused so to avoid confusion naming convention should remain similar
columns_lipschitz = ['instance', 'anchor_x_index', 'lc_coefficient_lime', 'x_deviation_index_lime',
'lc_coefficient_shap', 'x_deviation_index_shap', 'radiuses', 'neighborhood_size']
zippedList = list(zip(instances, anchor_x_index, lc_coefficient_lime, x_deviation_index_lime,
lc_coefficient_shap, x_deviation_index_shap, radiuses, neighborhood_size))
return zippedList, columns_lipschitz
###Output
_____no_output_____
###Markdown
Prepare points from testset
###Code
X = pd.DataFrame(test)
x_points = X.copy().values
print("Testset")
# display(X.head())
# radii = [1.00, 1.25]
radii = [0.75]
###Output
Testset
###Markdown
1. Lipschitz est. using explanations generated on logistic regression model
###Code
print("LIME generated explanations")
X_lime_exp = explanations_lime_lr.iloc[:,:-1].copy()
# display(X_lime_exp.head())
print("SHAP generated explanations")
X_shap_exp = explanations_shap_lr.iloc[:,:].copy()
# display(X_shap_exp.head())
x_points_lime_exp = X_lime_exp.copy().values
x_points_shap_exp = X_shap_exp.copy().values
zippedList, columns_lipschitz = find_lipschitz_estimates(x_points,
x_points_lime_exp,
x_points_shap_exp,
radii)
lr_lipschitz = pd.DataFrame(zippedList, columns=columns_lipschitz)
###Output
[0, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 27, 29, 30, 31, 33, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 45, 46, 47, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 61, 62, 63, 64, 67, 68]
[0, 2, 3, 4, 5, 6, 7, 8, 10, 12, 13, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 67, 68]
[4, 5, 10, 12, 13, 21, 24, 46, 48, 49, 55, 57, 59, 62, 65]
[0, 1, 4, 5, 7, 10, 12, 13, 15, 16, 17, 18, 19, 20, 21, 22, 24, 30, 33, 35, 36, 38, 39, 43, 44, 45, 48, 49, 50, 53, 54, 56, 57, 61, 64, 67, 68]
[0, 1, 4, 5, 6, 10, 12, 13, 17, 20, 21, 22, 23, 24, 27, 29, 30, 31, 35, 36, 41, 42, 44, 46, 48, 49, 50, 52, 54, 55, 56, 57, 59, 61, 62, 63, 67, 68]
[0, 1, 2, 3, 4, 5, 6, 10, 12, 13, 14, 16, 17, 20, 21, 22, 23, 24, 27, 29, 30, 31, 32, 33, 35, 36, 38, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 65, 67, 68]
[0, 1, 2, 3, 4, 5, 6, 10, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 27, 29, 30, 31, 32, 33, 35, 36, 38, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 65, 67, 68]
[0, 1, 4, 5, 6, 10, 12, 13, 14, 20, 21, 23, 24, 27, 29, 30, 32, 36, 41, 42, 44, 46, 48, 49, 52, 54, 55, 56, 57, 59, 60, 61, 62, 63, 67, 68]
[0, 1, 3, 17, 18, 19, 20, 22, 24, 28, 35, 37, 38, 40, 43, 44, 45, 47, 50, 56, 58, 64, 67, 68]
[0, 1, 15, 17, 18, 20, 22, 24, 31, 35, 43, 44, 45, 50, 54, 56, 64, 67, 68]
[17]
[0, 1, 2, 3, 4, 5, 6, 7, 12, 13, 16, 17, 20, 21, 22, 23, 24, 27, 29, 30, 31, 32, 33, 35, 36, 38, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 65, 67, 68]
[0, 15, 17, 18, 20, 22, 35, 43, 44, 45, 50, 56, 64, 68]
[0, 1, 2, 3, 4, 5, 6, 7, 11, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 27, 29, 30, 31, 32, 33, 35, 36, 38, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 65, 67, 68]
[0, 1, 2, 3, 4, 5, 6, 7, 11, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 27, 29, 30, 31, 32, 33, 35, 36, 38, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 65, 67, 68]
[5, 6, 7, 13, 14, 21, 23, 24, 27, 46, 55, 57, 59, 67]
[0, 1, 3, 6, 9, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 24, 31, 33, 35, 39, 43, 44, 45, 47, 50, 53, 54, 56, 57, 58, 61, 64, 67, 68]
[0, 1, 3, 5, 6, 11, 13, 14, 16, 17, 18, 20, 21, 22, 23, 24, 25, 29, 30, 33, 34, 35, 36, 41, 42, 43, 44, 45, 46, 48, 49, 50, 53, 54, 55, 56, 57, 59, 61, 62, 63, 64, 67, 68]
[0, 1, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 27, 29, 30, 31, 33, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 61, 62, 63, 64, 66, 67, 68]
[0, 1, 3, 6, 8, 9, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 24, 25, 29, 30, 33, 35, 36, 38, 39, 40, 43, 44, 45, 47, 49, 50, 53, 54, 56, 57, 58, 61, 64, 66, 67, 68]
[0, 1, 3, 8, 16, 18, 19, 20, 22, 24, 28, 35, 37, 38, 40, 43, 44, 45, 47, 50, 54, 56, 58, 64, 67, 68]
[0, 1, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 29, 30, 31, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 61, 62, 63, 64, 66, 67, 68]
[0, 1, 2, 3, 4, 5, 6, 7, 11, 13, 14, 15, 16, 17, 18, 19, 21, 22, 23, 24, 25, 27, 29, 30, 31, 32, 33, 35, 36, 38, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 65, 67, 68]
[0, 1, 3, 4, 5, 6, 8, 9, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 27, 29, 30, 31, 33, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 61, 62, 64, 67, 68]
[0, 1, 4, 5, 6, 7, 11, 13, 14, 15, 17, 18, 21, 22, 23, 24, 27, 29, 30, 32, 35, 36, 38, 41, 42, 44, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 67, 68]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 40, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 65, 67, 68]
[1, 6, 13, 14, 17, 19, 21, 22, 25, 29, 30, 33, 36, 41, 49, 53, 54, 55, 57, 62]
[]
[0, 1, 4, 5, 6, 7, 11, 13, 14, 15, 18, 21, 22, 23, 24, 25, 29, 30, 31, 32, 35, 36, 41, 42, 44, 46, 48, 49, 50, 52, 54, 55, 56, 57, 59, 60, 61, 62, 63, 67, 68]
[8, 20, 37, 38, 40, 45, 58]
[0, 1, 4, 5, 6, 7, 11, 13, 14, 17, 18, 19, 21, 22, 23, 24, 25, 26, 28, 30, 31, 32, 33, 35, 36, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 67, 68]
[0, 1, 3, 4, 5, 6, 7, 11, 13, 14, 17, 18, 19, 21, 22, 23, 24, 25, 26, 28, 30, 33, 34, 35, 36, 38, 41, 42, 43, 44, 45, 46, 48, 49, 50, 53, 54, 55, 56, 57, 59, 61, 62, 63, 64, 67, 68]
[0, 1, 4, 5, 6, 9, 11, 13, 14, 16, 18, 21, 22, 23, 25, 28, 30, 35, 38, 42, 43, 44, 45, 46, 50, 52, 54, 55, 56, 57, 64, 67, 68]
[1, 5, 6, 7, 11, 13, 14, 22, 24, 25, 28, 30, 36, 41, 42, 46, 48, 49, 54, 55, 57, 59, 62]
[0, 1, 3, 5, 6, 11, 13, 14, 16, 17, 18, 19, 21, 22, 23, 25, 26, 30, 31, 35, 36, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 53, 54, 55, 56, 57, 59, 61, 62, 64, 67, 68]
[0, 1, 17, 18, 21, 23, 25, 31, 35, 37, 38, 43, 44, 45, 49, 51, 53, 54, 56, 64, 67, 68]
[0, 1, 3, 4, 5, 6, 8, 9, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 28, 30, 31, 32, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 53, 54, 55, 56, 57, 58, 59, 61, 62, 64, 67, 68]
[0, 1, 3, 4, 5, 6, 7, 11, 13, 14, 17, 18, 19, 21, 22, 23, 24, 25, 26, 28, 30, 31, 33, 34, 36, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 61, 62, 63, 64, 67, 68]
[0, 1, 8, 18, 20, 21, 23, 25, 29, 35, 36, 38, 40, 43, 44, 45, 50, 56, 58, 64, 67, 68]
[0, 1, 3, 5, 6, 8, 11, 13, 14, 18, 19, 20, 21, 22, 23, 24, 25, 29, 31, 32, 35, 36, 38, 40, 42, 43, 44, 45, 46, 48, 49, 50, 54, 55, 56, 57, 58, 59, 61, 62, 63, 64, 67, 68]
[0, 1, 3, 16, 18, 19, 21, 23, 36, 43, 44, 45, 50, 54, 56, 61, 64, 67, 68]
[0, 1, 8, 18, 19, 20, 21, 23, 25, 29, 36, 38, 39, 43, 44, 45, 47, 56, 58, 64, 67, 68]
[1, 4, 5, 6, 7, 11, 13, 14, 17, 21, 22, 24, 25, 26, 28, 30, 31, 33, 34, 37, 42, 44, 46, 48, 49, 50, 53, 54, 55, 56, 57, 59, 61, 62, 63, 67, 68]
[0, 1, 4, 5, 6, 7, 11, 13, 14, 17, 18, 21, 22, 23, 24, 25, 28, 30, 31, 32, 33, 34, 36, 37, 39, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 67, 68]
[0, 1, 3, 5, 6, 8, 9, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 25, 30, 31, 32, 34, 35, 36, 37, 38, 39, 40, 41, 43, 44, 45, 46, 47, 48, 49, 50, 53, 54, 55, 56, 57, 58, 61, 62, 64, 67, 68]
[0, 1, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 28, 30, 31, 32, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 61, 62, 63, 64, 66, 67, 68]
[0, 1, 3, 5, 6, 8, 9, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 25, 29, 30, 31, 32, 34, 35, 36, 37, 38, 39, 40, 41, 43, 44, 45, 46, 47, 48, 49, 50, 53, 54, 55, 56, 57, 58, 59, 61, 62, 64, 67, 68]
[0, 1, 2, 4, 5, 6, 7, 11, 13, 14, 15, 17, 18, 21, 22, 23, 24, 25, 28, 30, 31, 32, 33, 34, 36, 37, 39, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 65, 67, 68]
[0, 8, 16, 18, 19, 20, 21, 23, 34, 36, 41, 44, 45, 46, 54, 56, 58, 64, 67, 68]
[0, 1, 2, 3, 4, 5, 6, 7, 11, 13, 14, 17, 18, 21, 22, 23, 24, 25, 28, 30, 31, 33, 34, 36, 37, 39, 42, 43, 44, 45, 46, 47, 49, 50, 52, 53, 54, 55, 56, 57, 59, 61, 62, 63, 65, 67, 68]
[0, 1, 2, 3, 4, 5, 6, 7, 11, 13, 14, 17, 18, 19, 21, 22, 23, 24, 25, 26, 28, 30, 31, 33, 34, 35, 36, 37, 39, 42, 43, 44, 45, 46, 47, 49, 50, 52, 53, 54, 55, 56, 57, 59, 61, 62, 63, 64, 67, 68]
[0, 1, 3, 4, 5, 6, 8, 9, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 28, 30, 31, 32, 34, 36, 37, 38, 39, 40, 42, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 59, 61, 62, 63, 64, 67, 68]
[18, 21, 23, 35, 36, 45, 51, 56, 67, 68]
[0, 1, 4, 5, 6, 7, 11, 13, 14, 18, 21, 22, 23, 24, 25, 28, 30, 32, 37, 43, 45, 47, 49, 50, 51, 54, 55, 56, 57, 59, 61, 62, 63, 67, 68]
[0, 1, 3, 5, 6, 11, 13, 14, 16, 17, 18, 19, 21, 22, 23, 24, 25, 26, 30, 31, 34, 35, 36, 37, 42, 43, 44, 45, 46, 47, 49, 50, 51, 54, 55, 56, 57, 59, 61, 62, 63, 64, 67, 68]
[0, 1, 3, 4, 5, 6, 7, 9, 11, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 28, 30, 31, 32, 33, 34, 35, 36, 37, 39, 40, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 66, 67, 68]
[0, 1, 2, 4, 5, 6, 7, 11, 13, 14, 15, 17, 18, 21, 22, 23, 24, 25, 26, 28, 30, 31, 32, 33, 34, 36, 37, 39, 42, 43, 44, 45, 46, 47, 49, 50, 51, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 65, 67, 68]
###Markdown
2. Lipschitz est. using explanations generated on random forest model
###Code
print("LIME generated explanations")
X_lime_exp = explanations_lime_rf.iloc[:,:-1].copy()
# display(X_lime_exp.head())
print("SHAP generated explanations")
X_shap_exp = explanations_shap_rf.iloc[:,:].copy()
# display(X_shap_exp.head())
x_points_lime_exp = X_lime_exp.copy().values
x_points_shap_exp = X_shap_exp.copy().values
zippedList, columns_lipschitz = find_lipschitz_estimates(x_points,
x_points_lime_exp,
x_points_shap_exp,
radii)
rf_lipschitz = pd.DataFrame(zippedList, columns=columns_lipschitz)
###Output
[0, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 27, 29, 30, 31, 33, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 45, 46, 47, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 61, 62, 63, 64, 67, 68]
[0, 2, 3, 4, 5, 6, 7, 8, 10, 12, 13, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 67, 68]
[4, 5, 10, 12, 13, 21, 24, 46, 48, 49, 55, 57, 59, 62, 65]
[0, 1, 4, 5, 7, 10, 12, 13, 15, 16, 17, 18, 19, 20, 21, 22, 24, 30, 33, 35, 36, 38, 39, 43, 44, 45, 48, 49, 50, 53, 54, 56, 57, 61, 64, 67, 68]
[0, 1, 4, 5, 6, 10, 12, 13, 17, 20, 21, 22, 23, 24, 27, 29, 30, 31, 35, 36, 41, 42, 44, 46, 48, 49, 50, 52, 54, 55, 56, 57, 59, 61, 62, 63, 67, 68]
[0, 1, 2, 3, 4, 5, 6, 10, 12, 13, 14, 16, 17, 20, 21, 22, 23, 24, 27, 29, 30, 31, 32, 33, 35, 36, 38, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 65, 67, 68]
[0, 1, 2, 3, 4, 5, 6, 10, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 27, 29, 30, 31, 32, 33, 35, 36, 38, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 65, 67, 68]
[0, 1, 4, 5, 6, 10, 12, 13, 14, 20, 21, 23, 24, 27, 29, 30, 32, 36, 41, 42, 44, 46, 48, 49, 52, 54, 55, 56, 57, 59, 60, 61, 62, 63, 67, 68]
[0, 1, 3, 17, 18, 19, 20, 22, 24, 28, 35, 37, 38, 40, 43, 44, 45, 47, 50, 56, 58, 64, 67, 68]
[0, 1, 15, 17, 18, 20, 22, 24, 31, 35, 43, 44, 45, 50, 54, 56, 64, 67, 68]
[17]
[0, 1, 2, 3, 4, 5, 6, 7, 12, 13, 16, 17, 20, 21, 22, 23, 24, 27, 29, 30, 31, 32, 33, 35, 36, 38, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 65, 67, 68]
[0, 15, 17, 18, 20, 22, 35, 43, 44, 45, 50, 56, 64, 68]
[0, 1, 2, 3, 4, 5, 6, 7, 11, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 27, 29, 30, 31, 32, 33, 35, 36, 38, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 65, 67, 68]
[0, 1, 2, 3, 4, 5, 6, 7, 11, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 27, 29, 30, 31, 32, 33, 35, 36, 38, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 65, 67, 68]
[5, 6, 7, 13, 14, 21, 23, 24, 27, 46, 55, 57, 59, 67]
[0, 1, 3, 6, 9, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 24, 31, 33, 35, 39, 43, 44, 45, 47, 50, 53, 54, 56, 57, 58, 61, 64, 67, 68]
[0, 1, 3, 5, 6, 11, 13, 14, 16, 17, 18, 20, 21, 22, 23, 24, 25, 29, 30, 33, 34, 35, 36, 41, 42, 43, 44, 45, 46, 48, 49, 50, 53, 54, 55, 56, 57, 59, 61, 62, 63, 64, 67, 68]
[0, 1, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 27, 29, 30, 31, 33, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 61, 62, 63, 64, 66, 67, 68]
[0, 1, 3, 6, 8, 9, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 24, 25, 29, 30, 33, 35, 36, 38, 39, 40, 43, 44, 45, 47, 49, 50, 53, 54, 56, 57, 58, 61, 64, 66, 67, 68]
[0, 1, 3, 8, 16, 18, 19, 20, 22, 24, 28, 35, 37, 38, 40, 43, 44, 45, 47, 50, 54, 56, 58, 64, 67, 68]
[0, 1, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 29, 30, 31, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 61, 62, 63, 64, 66, 67, 68]
[0, 1, 2, 3, 4, 5, 6, 7, 11, 13, 14, 15, 16, 17, 18, 19, 21, 22, 23, 24, 25, 27, 29, 30, 31, 32, 33, 35, 36, 38, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 65, 67, 68]
[0, 1, 3, 4, 5, 6, 8, 9, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 27, 29, 30, 31, 33, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 61, 62, 64, 67, 68]
[0, 1, 4, 5, 6, 7, 11, 13, 14, 15, 17, 18, 21, 22, 23, 24, 27, 29, 30, 32, 35, 36, 38, 41, 42, 44, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 67, 68]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 27, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 40, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 65, 67, 68]
[1, 6, 13, 14, 17, 19, 21, 22, 25, 29, 30, 33, 36, 41, 49, 53, 54, 55, 57, 62]
[]
[0, 1, 4, 5, 6, 7, 11, 13, 14, 15, 18, 21, 22, 23, 24, 25, 29, 30, 31, 32, 35, 36, 41, 42, 44, 46, 48, 49, 50, 52, 54, 55, 56, 57, 59, 60, 61, 62, 63, 67, 68]
[8, 20, 37, 38, 40, 45, 58]
[0, 1, 4, 5, 6, 7, 11, 13, 14, 17, 18, 19, 21, 22, 23, 24, 25, 26, 28, 30, 31, 32, 33, 35, 36, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 67, 68]
[0, 1, 3, 4, 5, 6, 7, 11, 13, 14, 17, 18, 19, 21, 22, 23, 24, 25, 26, 28, 30, 33, 34, 35, 36, 38, 41, 42, 43, 44, 45, 46, 48, 49, 50, 53, 54, 55, 56, 57, 59, 61, 62, 63, 64, 67, 68]
[0, 1, 4, 5, 6, 9, 11, 13, 14, 16, 18, 21, 22, 23, 25, 28, 30, 35, 38, 42, 43, 44, 45, 46, 50, 52, 54, 55, 56, 57, 64, 67, 68]
[1, 5, 6, 7, 11, 13, 14, 22, 24, 25, 28, 30, 36, 41, 42, 46, 48, 49, 54, 55, 57, 59, 62]
[0, 1, 3, 5, 6, 11, 13, 14, 16, 17, 18, 19, 21, 22, 23, 25, 26, 30, 31, 35, 36, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 53, 54, 55, 56, 57, 59, 61, 62, 64, 67, 68]
[0, 1, 17, 18, 21, 23, 25, 31, 35, 37, 38, 43, 44, 45, 49, 51, 53, 54, 56, 64, 67, 68]
[0, 1, 3, 4, 5, 6, 8, 9, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 28, 30, 31, 32, 34, 35, 36, 37, 38, 39, 40, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 53, 54, 55, 56, 57, 58, 59, 61, 62, 64, 67, 68]
[0, 1, 3, 4, 5, 6, 7, 11, 13, 14, 17, 18, 19, 21, 22, 23, 24, 25, 26, 28, 30, 31, 33, 34, 36, 41, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 61, 62, 63, 64, 67, 68]
[0, 1, 8, 18, 20, 21, 23, 25, 29, 35, 36, 38, 40, 43, 44, 45, 50, 56, 58, 64, 67, 68]
[0, 1, 3, 5, 6, 8, 11, 13, 14, 18, 19, 20, 21, 22, 23, 24, 25, 29, 31, 32, 35, 36, 38, 40, 42, 43, 44, 45, 46, 48, 49, 50, 54, 55, 56, 57, 58, 59, 61, 62, 63, 64, 67, 68]
[0, 1, 3, 16, 18, 19, 21, 23, 36, 43, 44, 45, 50, 54, 56, 61, 64, 67, 68]
[0, 1, 8, 18, 19, 20, 21, 23, 25, 29, 36, 38, 39, 43, 44, 45, 47, 56, 58, 64, 67, 68]
[1, 4, 5, 6, 7, 11, 13, 14, 17, 21, 22, 24, 25, 26, 28, 30, 31, 33, 34, 37, 42, 44, 46, 48, 49, 50, 53, 54, 55, 56, 57, 59, 61, 62, 63, 67, 68]
[0, 1, 4, 5, 6, 7, 11, 13, 14, 17, 18, 21, 22, 23, 24, 25, 28, 30, 31, 32, 33, 34, 36, 37, 39, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 67, 68]
[0, 1, 3, 5, 6, 8, 9, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 25, 30, 31, 32, 34, 35, 36, 37, 38, 39, 40, 41, 43, 44, 45, 46, 47, 48, 49, 50, 53, 54, 55, 56, 57, 58, 61, 62, 64, 67, 68]
[0, 1, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 28, 30, 31, 32, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 61, 62, 63, 64, 66, 67, 68]
[0, 1, 3, 5, 6, 8, 9, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 25, 29, 30, 31, 32, 34, 35, 36, 37, 38, 39, 40, 41, 43, 44, 45, 46, 47, 48, 49, 50, 53, 54, 55, 56, 57, 58, 59, 61, 62, 64, 67, 68]
[0, 1, 2, 4, 5, 6, 7, 11, 13, 14, 15, 17, 18, 21, 22, 23, 24, 25, 28, 30, 31, 32, 33, 34, 36, 37, 39, 42, 43, 44, 45, 46, 48, 49, 50, 52, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 65, 67, 68]
[0, 8, 16, 18, 19, 20, 21, 23, 34, 36, 41, 44, 45, 46, 54, 56, 58, 64, 67, 68]
[0, 1, 2, 3, 4, 5, 6, 7, 11, 13, 14, 17, 18, 21, 22, 23, 24, 25, 28, 30, 31, 33, 34, 36, 37, 39, 42, 43, 44, 45, 46, 47, 49, 50, 52, 53, 54, 55, 56, 57, 59, 61, 62, 63, 65, 67, 68]
[0, 1, 2, 3, 4, 5, 6, 7, 11, 13, 14, 17, 18, 19, 21, 22, 23, 24, 25, 26, 28, 30, 31, 33, 34, 35, 36, 37, 39, 42, 43, 44, 45, 46, 47, 49, 50, 52, 53, 54, 55, 56, 57, 59, 61, 62, 63, 64, 67, 68]
[0, 1, 3, 4, 5, 6, 8, 9, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 28, 30, 31, 32, 34, 36, 37, 38, 39, 40, 42, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 59, 61, 62, 63, 64, 67, 68]
[18, 21, 23, 35, 36, 45, 51, 56, 67, 68]
[0, 1, 4, 5, 6, 7, 11, 13, 14, 18, 21, 22, 23, 24, 25, 28, 30, 32, 37, 43, 45, 47, 49, 50, 51, 54, 55, 56, 57, 59, 61, 62, 63, 67, 68]
[0, 1, 3, 5, 6, 11, 13, 14, 16, 17, 18, 19, 21, 22, 23, 24, 25, 26, 30, 31, 34, 35, 36, 37, 42, 43, 44, 45, 46, 47, 49, 50, 51, 54, 55, 56, 57, 59, 61, 62, 63, 64, 67, 68]
[0, 1, 3, 4, 5, 6, 7, 9, 11, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 28, 30, 31, 32, 33, 34, 35, 36, 37, 39, 40, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 64, 66, 67, 68]
[0, 1, 2, 4, 5, 6, 7, 11, 13, 14, 15, 17, 18, 21, 22, 23, 24, 25, 26, 28, 30, 31, 32, 33, 34, 36, 37, 39, 42, 43, 44, 45, 46, 47, 49, 50, 51, 53, 54, 55, 56, 57, 59, 60, 61, 62, 63, 65, 67, 68]
###Markdown
1. Lipschitz est. visualizations computed on logistic regression model
###Code
# NOTE: `radii` was set to [0.75] above, so these filters on 1.00 / 1.25 return empty
# frames, which is why the aggregated estimates printed below are NaN.
epsilon1 = lr_lipschitz[lr_lipschitz['radiuses'] == 1.00]
epsilon125 = lr_lipschitz[lr_lipschitz['radiuses'] == 1.25]
# display(epsilon1.head())
# display(epsilon125.head())
print("Lipschitz estimates on logistic regression model.")
epsilon1_lc_lime_aggre = np.mean(epsilon1['lc_coefficient_lime'])
epsilon1_lc_shap_aggre = np.mean(epsilon1['lc_coefficient_shap'])
print("\nLIME, epsilon 1.00, Aggregated L(x) = ", epsilon1_lc_lime_aggre)
print("SHAP, epsilon 1.00, Aggregated L(x) = ", epsilon1_lc_shap_aggre)
epsilon125_lc_lime_aggre = np.mean(epsilon125['lc_coefficient_lime'])
epsilon125_lc_shap_aggre = np.mean(epsilon125['lc_coefficient_shap'])
print("\nLIME, epsilon 1.25, Aggregated L(x) = ", epsilon125_lc_lime_aggre)
print("SHAP, epsilon 1.25, Aggregated L(x) = ", epsilon125_lc_shap_aggre)
###Output
Lipschitz estimates on logistic regression model.
LIME, epsilon 1.00, Aggregated L(x) = nan
SHAP, epsilon 1.00, Aggregated L(x) = nan
LIME, epsilon 1.25, Aggregated L(x) = nan
SHAP, epsilon 1.25, Aggregated L(x) = nan
###Markdown
2. Lipschitz est. visualizations computed on random forest model
###Code
epsilon1 = rf_lipschitz[rf_lipschitz['radiuses'] == 1.00]
epsilon125 = rf_lipschitz[rf_lipschitz['radiuses'] == 1.25]
# display(epsilon075.head())
# display(epsilon1.head())
# display(epsilon125.head())
print("Lipschitz estimates on random forest model.")
epsilon1_lc_lime_aggre = np.mean(epsilon1['lc_coefficient_lime'])
epsilon1_lc_shap_aggre = np.mean(epsilon1['lc_coefficient_shap'])
print("\nLIME, epsilon 1.00, Aggregated L(x) = ", epsilon1_lc_lime_aggre)
print("SHAP, epsilon 1.00, Aggregated L(x) = ", epsilon1_lc_shap_aggre)
epsilon125_lc_lime_aggre = np.mean(epsilon125['lc_coefficient_lime'])
epsilon125_lc_shap_aggre = np.mean(epsilon125['lc_coefficient_shap'])
print("\nLIME, epsilon 1.25, Aggregated L(x) = ", epsilon125_lc_lime_aggre)
print("SHAP, epsilon 1.25, Aggregated L(x) = ", epsilon125_lc_shap_aggre)
###Output
Lipschitz estimates on random forest model.
LIME, epsilon 1.00, Aggregated L(x) = 0.1704131061013217
SHAP, epsilon 1.00, Aggregated L(x) = 0.09775652916250081
LIME, epsilon 1.25, Aggregated L(x) = 0.1704131061013217
SHAP, epsilon 1.25, Aggregated L(x) = 0.09775652916250081
###Markdown
Visualizations
###Code
# NOTE: `epsilon075` is not defined earlier in this notebook snapshot; assuming it refers to
# the radius-0.75 subset of the random forest Lipschitz estimates computed above.
epsilon075 = rf_lipschitz[rf_lipschitz['radiuses'] == 0.75]
df1 = epsilon075.loc[:, ['lc_coefficient_lime']]
df1.rename(columns={'lc_coefficient_lime': 'Lipschitz Estimates'}, inplace=True)
df1['method'] = 'LIME'
df1['Dataset'] = 'Ionosphere'
df2 = epsilon075.loc[:, ['lc_coefficient_shap']]
df2.rename(columns={'lc_coefficient_shap': 'Lipschitz Estimates'}, inplace=True)
df2['method'] = 'SHAP'
df2['Dataset'] = 'Ionosphere'
df = df1.append(df2)
ax = sns.boxplot(x='method', y="Lipschitz Estimates", data=df)
ax = sns.boxplot(x="Dataset", y="Lipschitz Estimates",
hue="method",
data=df)
sns.despine(offset=10, trim=True)
###Output
_____no_output_____
###Markdown
LIME visualizations by single points
###Code
explainer_lime = LimeTabularExplainer(train,
mode = 'classification',
training_labels = labels_train,
feature_names=feature_names,
verbose=False,
class_names=target_names,
feature_selection='auto',
discretize_continuous=True)
# Assumption (added): compare the anchor point with the neighbour whose LIME explanation
# deviates most from it, taken from the logistic regression Lipschitz results above.
anchor_index = int(lr_lipschitz.loc[0, 'anchor_x_index'])
similar_point_index = int(lr_lipschitz.loc[0, 'x_deviation_index_lime'])
x_instance = test[anchor_index]
LR_exp_lime = explainer_lime.explain_instance(x_instance,
lr.predict_proba, # logistic regression model fitted above (LR_iris was left over from the iris version)
labels=unique_targets,
top_labels=None,
num_features=len(x_instance),
num_samples=6000)
LR_exp_lime.show_in_notebook()
x_instance = test[similar_point_index]
LR_exp_lime = explainer_lime.explain_instance(x_instance,
lr.predict_proba,
labels=unique_targets,
top_labels=None,
num_features=len(x_instance),
num_samples=6000)
LR_exp_lime.show_in_notebook()
i = np.random.randint(0, test.shape[0])
i = 0
LR_exp_lime_map = LR_exp_lime.as_map()
# pprint(LR_exp_lime_map)
print('Predicted class for i:', labels_pred_lr[i])
LR_exp_lime_list = LR_exp_lime.as_list(label=labels_pred_lr[i])
# pprint(LR_exp_lime_list)
###Output
_____no_output_____
###Markdown
Conclusions
###Code
lr_lime_iris = [2.657, 3.393, 1.495]
rf_lime_iris = [3.010, 3.783, 1.767]
lr_shap_iris = [2.716, 3.512, 1.463]
rf_shap_iris = [1.969, 3.546, 2.136]
find_min_vector = np.array([lr_lime_iris, rf_lime_iris, lr_shap_iris, rf_shap_iris])
np.amin(find_min_vector, axis=0)
from sklearn.linear_model import Ridge
import numpy as np
n_samples, n_features = 10, 5
rng = np.random.RandomState(0)
y = rng.randn(n_samples)
X = rng.randn(n_samples, n_features)
clf = Ridge(alpha=1.0)
clf.fit(X, y)
###Output
_____no_output_____ |
docs/Python/Pandas/MonthBegin_and_MonthEnd.ipynb | ###Markdown
Pandas TimeSeries MonthBegin and MonthEnd
###Code
import pandas as pd
from pandas.tseries.offsets import MonthBegin, MonthEnd
###Output
_____no_output_____
###Markdown
MonthEnd. Helpful when figuring out whether the month ends on the 28th, 29th, 30th, or 31st.
###Code
todays_date = '2021-02-07'
month_end_date = pd.to_datetime(todays_date) + MonthEnd(1)
print(month_end_date)
###Output
2021-02-28 00:00:00
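###Markdown
An edge case worth knowing (added note): when the date is already a month end, MonthEnd(1) rolls to the next month end, while MonthEnd(0) leaves it in place.
###Code
# Hedged example of the rollover behaviour.
already_month_end = pd.to_datetime('2021-02-28')
print(already_month_end + MonthEnd(0)) # expected: 2021-02-28
print(already_month_end + MonthEnd(1)) # expected: 2021-03-31
###Output
_____no_output_____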
###Markdown
MonthBegin. Helpful when figuring out the current, previous, or next month's start date.
###Code
month_start_date = pd.to_datetime(todays_date) + MonthBegin(-1)
print(month_start_date)
next_month_start_date = pd.to_datetime(todays_date) + MonthBegin(1)
print(next_month_start_date)
###Output
2021-02-01 00:00:00
2021-03-01 00:00:00
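###Markdown
Similarly (added note): when the date is already the first of the month, MonthBegin(1) jumps to the next month's first day and MonthBegin(-1) goes back a full month.
###Code
# Hedged example of the rollover behaviour.
first_of_month = pd.to_datetime('2021-03-01')
print(first_of_month + MonthBegin(1)) # expected: 2021-04-01
print(first_of_month + MonthBegin(-1)) # expected: 2021-02-01
###Output
_____no_output_____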
|
10 - Model Selection/K_Fold_Cross_Validation.ipynb | ###Markdown
ROC curves and Area Under the Curve. An ROC curve is the most commonly used way to visualize the performance of a binary classifier, and AUC is (arguably) the best way to summarize its performance in a single number. As such, gaining a deep understanding of ROC curves and AUC is beneficial for data scientists, machine learning practitioners, and medical researchers (among others).
###Code
import matplotlib.pyplot as plt  # needed for the plotting calls below
from sklearn.metrics import (confusion_matrix, precision_recall_curve, auc,
roc_curve, recall_score, classification_report, f1_score,
precision_recall_fscore_support)
fpr, tpr, thresholds = roc_curve(y_test, y_pred)
roc_auc = auc(fpr, tpr)
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, label='AUC = %0.4f'% roc_auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.001, 1])
plt.ylim([0, 1.001])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show();
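# --- Added sketch (not in the original notebook) ------------------------------------
# ROC curves are most informative when computed from continuous scores (e.g. predicted
# probabilities) rather than hard 0/1 predictions. Self-contained illustration on
# synthetic data; the names X_demo, y_demo, clf_demo are made up for this sketch.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
X_demo, y_demo = make_classification(n_samples=200, n_features=10, random_state=0)
clf_demo = LogisticRegression(max_iter=1000).fit(X_demo, y_demo)
y_score_demo = clf_demo.predict_proba(X_demo)[:, 1] # class-1 probabilities as scores
fpr_d, tpr_d, _ = roc_curve(y_demo, y_score_demo)
print('Demo AUC from probability scores: %0.4f' % auc(fpr_d, tpr_d))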
###Output
_____no_output_____ |