path | concatenated_notebook
---|---
notebooks/02.01-Pixel-telemeter-count.ipynb | ###Markdown
C13 example: How many times has each K2 pixel been telemetered? C13 example In this demo notebook we are going to identify which pixels were observed in long cadence (LC) during campaign 13. Our final goal is to have a boolean mask with the same dimensions as an FFI, with either 1 (was observed) or 0 (not observed). Our approach is simply to read in every single TPF and access its `APERTURE` extension: pixels with `APERTURE` values greater than zero were observed, while zero-valued pixels were not. We will also need to read the FITS header to determine the *x,y* coordinate of the corner of the TPF. Finally we will programmatically fill in a count array with ones or zeros. Make the Jupyter Notebook fill the screen.
###Code
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits
from astropy.utils.console import ProgressBar
import logging
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
###Output
_____no_output_____
###Markdown
Open the Campaign 13 FFI to mimic the dimensions and metadata.
###Code
hdu_ffi = fits.open('/Volumes/Truro/ffi/ktwo2017079075530-c13_ffi-cal.fits')
###Output
_____no_output_____
###Markdown
Open the [k2 Target Index](https://github.com/barentsen/k2-target-index) csv file, which is only updated to Campaign 13 at the time of writing.
###Code
df = pd.read_csv('../../k2-target-index/k2-target-pixel-files.csv.gz')
###Output
_____no_output_____
###Markdown
For this notebook, we will just focus on Campaign 13. We can generalize our result in the future by using *all* the campaigns.
###Code
df = df[df.campaign == 13].reset_index(drop=True)
hdu_ffi[10].name
###Output
_____no_output_____
###Markdown
The Kepler FFI CCDs are referred to by their **"mod.out" name**, rather than by "channel". The nomenclature is a relic of how the CCD readout electronics were configured. What matters here is that: 1. We will have to mimic the formatting to programmatically index into the FFI counts. 2. Some modules have more target pixel files on them than others, by chance or by astrophysical design.
###Code
df.groupby(['module', 'output']).filename.count().head(10)
df['mod_str'] = "MOD.OUT "+df.module.astype(str)+'.'+df.output.astype(str)
df['mod_str'].tail()
###Output
_____no_output_____
###Markdown
We'll make an **"FFI Counts" array**, which is an array the same dimensions as the FFI, but with values of whether a pixel was telemetered in Long Cadence or not.
###Code
# note: this does not copy the HDUList -- hdu_counts and hdu_ffi refer to the same object,
# so zeroing the data below also zeroes the FFI arrays (acceptable here, since only the layout and headers are reused)
hdu_counts = hdu_ffi
for i, el in enumerate(hdu_ffi[1:]):
hdu_counts[el.name].data = hdu_counts[el.name].data*0.0
###Output
_____no_output_____
###Markdown
For the next part, you'll need a local hard drive full of all the K2 target pixel files. It's a single line `wget` script to [MAST](https://archive.stsci.edu/pub/k2/target_pixel_files/c13/). It's possible that the downloading process would corrupt a few target pixel files, and you wouldn't necessarily know. So we will also set up a log of the failed attempts to open a target pixel file.
###Code
local_dir = '/Volumes/burlingame/TPFs/c13/'
logging.basicConfig(filename='../data/C13_failed_TPFs.log',level=logging.INFO)
mod_list = df.mod_str.unique()
mod_list
###Output
_____no_output_____
###Markdown
Now set up a big for-loop that: 1. Reads in a TPF 2. Aligns its corner in the FFI frame 3. Adds a boolean mask to the **FFI Counts** array 4. Optionally logs any problem TPFs for spot-checking later 5. Incrementally saves the FFI Counts array to a FITS file
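###Markdown
Before launching the full loop it can help to sanity-check the keyword access on a single TPF. A minimal sketch (the choice of file here is arbitrary -- any TPF present in `local_dir` will do):
###Code
# inspect the corner coordinates and aperture mask of a single target pixel file
example_tpf = df.url.str[59:].values[0]  # hypothetical choice: first TPF listed in the target index
with fits.open(local_dir + example_tpf) as hdu_tpf:
    hdr = hdu_tpf['TARGETTABLES'].header
    print('corner (x, y):', hdr['1CRV4P'], hdr['2CRV4P'])
    print('aperture shape:', hdu_tpf['APERTURE'].shape)
    print('observed pixels:', (hdu_tpf['APERTURE'].data > 0).sum())
###Output
_____no_output_____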
###Code
for mod_out, group in df.groupby('mod_str'):
print(mod_out, end=' ')
mod_out_tpfs = group.url.str[59:].values
with ProgressBar(len(mod_out_tpfs), ipython_widget=True) as bar:
for i, tpf_path in enumerate(mod_out_tpfs):
bar.update()
try:
hdu_tpf = fits.open(local_dir+tpf_path)
hdr = hdu_tpf['TARGETTABLES'].header
cx, cy = hdr['1CRV4P'], hdr['2CRV4P']
sx, sy = hdu_tpf['APERTURE'].shape
counts = hdu_tpf['APERTURE'].data > 0
hdu_counts[mod_out].data[cy:cy+sy, cx:cx+sx]=counts.T # Double check this!
except Exception as err:
logging.info(tpf_path + " had a problem: {}".format(err))  # cx/sx etc. may be undefined if fits.open failed, so log the exception itself
hdu_tpf.close()
hdu_counts.writeto('../data/FFI_counts/C13_FFI_mask.fits', overwrite=True)
###Output
MOD.OUT 10.1
###Markdown
Ok, it took 8 minutes for 3 channels, or about 160 seconds per channel. There are about 72 channels per pointing, so:
###Code
160.0*72/60.0
192.0/60
###Output
_____no_output_____
###Markdown
Counting all the pixels will take about 3.2 hours. That's a long time! Meep! Let's have the for loop incrementally save each channel count so we can start working with the count data.
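###Markdown
Since the loop saves the counts file after each channel, a partially completed run can be inspected by simply re-opening the saved FITS file (same path as used in the loop above):
###Code
# optional: reload the incrementally saved counts, e.g. after a kernel restart
hdu_counts = fits.open('../data/FFI_counts/C13_FFI_mask.fits')
###Output
_____no_output_____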
###Code
plt.figure(figsize=(14,14))
plt.imshow(hdu_counts['MOD.OUT 8.1'].data, interpolation='none', );
###Output
_____no_output_____ |
project-spring2017/part1/Final-Part1-Trips.ipynb | ###Markdown
LIS590DV Final Project - Group Athena Part1 - Routes with Different Numbers of Trips Part1 - Shapes of Routes with Most/Least Number of Trips Author: Hui Lyu
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
import matplotlib.patches as mpatches
import csv
from collections import Counter
import plotly
plotly.tools.set_credentials_file(username='huilyu2', api_key='LYEkqxDQFmZzZIBXn9rn')
import plotly.plotly as py
from plotly.graph_objs import *
%matplotlib inline
trips_df = pd.read_csv("GTFS Dataset/trips.csv")
trips_df.columns
trips_df.head()
###Output
_____no_output_____
###Markdown
How many trips for each route?
###Code
len(np.unique(trips_df["route_id"])) # How many routes in all
trips_count = Counter(trips_df["route_id"])
trips_count.values()
plt.rcParams["figure.figsize"] = (8, 8)
fig, ax = plt.subplots()
n, bins, patches = plt.hist(list(trips_count.values()),bins=np.arange(0,400,50),edgecolor = 'k')
bin_centers = 0.5 * (bins[:-1] + bins[1:])
cm = plt.cm.get_cmap('Purples')
# scale values to interval [0,1]
col = bin_centers - min(bin_centers)
col /= max(col)
for c, p in zip(col, patches):
plt.setp(p, 'facecolor', cm(c))
plt.xlabel("Number of Trips for Each Route",fontsize=16)
plt.ylabel("Count of Routes",fontsize=16)
plt.title("Distribution of Routes with Different Numbers of Trips\n(Total: 100 Routes which Exist Trip)",fontweight="bold",fontsize=18)
plt.xticks(bins)
ax.set_axisbelow(True)
ax.yaxis.grid(color='gray', linestyle='dashed')
plt.savefig('Distribution of Routes with Trips.svg', bbox_inches='tight')
plt.savefig('Distribution of Routes with Trips.png', bbox_inches='tight')
trips_count.most_common(10)
shape_id_dict = {}
trip_count_dict = {}
for word, count in trips_count.most_common(10):
trip_count_dict[word]=count # A route corresponds to its number of trips
shape_id_dict[word]=np.unique(trips_df[trips_df.route_id == word][["shape_id"]])
# A route correponds to its several shapes
import operator
trip_count_list = sorted(trip_count_dict.items(), key=operator.itemgetter(1), reverse=True)
trip_count_df = pd.DataFrame(trip_count_list,columns = ['route_id', 'number of trips'])
trip_count_df
routes_df = pd.read_csv("GTFS Dataset/routes.csv")
routes_df.head()
df_routes = pd.DataFrame()
for k in np.arange(10):
df_routes = df_routes.append(routes_df[routes_df.route_id == trip_count_df.route_id[k]])
df_routes
#df_shapes = df_shapes.append(shapes_df[shapes_df.shape_id == shape_id])
df_routes[["route_color"]]
plt.rcParams["figure.figsize"] = (16, 16)
fig, ax = plt.subplots()
colors = ['#cccccc','#5a1d5a','#006991','#5a1d5a','#006991','#fcee1f','#5a1d5a','#5a1d5a','#008063','#fcee1f']
plt.barh(bottom=np.arange(10)[::-1], width=trip_count_df['number of trips'], height=0.5,alpha=0.9,color=colors)
plt.xlabel("Number of Trips",fontsize=20)
plt.ylabel("Route ID",fontsize=20)
ax.set_yticks(np.arange(10)[::-1])
ax.set_yticklabels(trip_count_df['route_id'], fontsize=18)
plt.title("Top 10 Routes with Most Number of Trips for Each Route",fontweight="bold",fontsize=24)
ax.xaxis.grid(color='gray', linestyle='dotted')
for i, v in enumerate(trip_count_df['number of trips'][::-1]):
ax.text(v + 2, i-0.05, str(v), color='k', fontsize=18,fontweight="bold")
plt.savefig('Top 10 Routes with Most Trips.svg', bbox_inches='tight')
plt.savefig('Top 10 Routes with Most Trips.png', bbox_inches='tight')
shape_id_dict
trips_count.most_common()[:-11:-1]
shapes_df = pd.read_csv("GTFS Dataset/shapes.csv")
shapes_df.head()
top_ten_shapes = list(shape_id_dict.keys())
top_ten_shapes
df_shapes = pd.DataFrame()
for key in top_ten_shapes:
for shape_id in shape_id_dict[key]:
df_shapes = df_shapes.append(shapes_df[shapes_df.shape_id == shape_id])
df_shapes_group = df_shapes.groupby("shape_id")
df_shapes_least = pd.DataFrame()
shape_id_dict_least = {}
once_routes = ("BROWN ALT1","10W GOLD ALT","5W GREEN ALT 2","7W GREY ALT","1N YELLOW ALT")
for word, count in trips_count.most_common()[:-11:-1]:
shape_id_dict_least[word]=np.unique(trips_df[trips_df.route_id == word][["shape_id"]])
for key in once_routes:
for shape_id in shape_id_dict_least[key]:
df_shapes_least = df_shapes_least.append(shapes_df[shapes_df.shape_id == shape_id])
df_shapes_least_group = df_shapes_least.groupby("shape_id")
plt.rcParams["figure.figsize"] = (14, 14)
for name, group in df_shapes_group:
plt.plot(group['shape_pt_lon'],group['shape_pt_lat'],linestyle="solid",linewidth=3,alpha=0.3,c="k")
for name, group in df_shapes_least_group:
plt.plot(group['shape_pt_lon'],group['shape_pt_lat'],linestyle="solid",linewidth=3,alpha=0.3,c="gray")
plt.xlabel("Longitude",fontsize=20)
plt.ylabel("Latitude",fontsize=20)
plt.title("Shapes of Routes with Most/Least Number of Trips",fontweight="bold",fontsize=22)
plt.grid(color='gray', linestyle='dotted')
black_patch = mpatches.Patch(color='k')
gray_patch = mpatches.Patch(color='lightgray')
first_legend = plt.legend(title="Routes",handles=[black_patch,gray_patch],labels=["Top 10 Routes with Most Number of Trips","5 Routes with Only One Trip per Route"],prop={'size':14},loc=1)
rectangle1=plt.Rectangle((-88.26,40.1266),width=0.01,height=0.0067,alpha=0.6,facecolor="yellow",edgecolor="None")
rectangle2=plt.Rectangle((-88.20,40.0866),width=0.01,height=0.0067,alpha=0.6,facecolor="yellow",edgecolor="None")
rectangle3=plt.Rectangle((-88.21,40.1066),width=0.01,height=0.0067,alpha=0.6,facecolor="#adff2f",edgecolor="None")
rectangle4=plt.Rectangle((-88.20,40.0934),width=0.01,height=0.0067,alpha=0.6,facecolor="#adff2f",edgecolor="None")
rectangle5=plt.Rectangle((-88.25,40.1134),width=0.01,height=0.0134,alpha=0.6,facecolor="#7fff00",edgecolor="None")
rectangle6=plt.Rectangle((-88.23,40.1134),width=0.01,height=0.0067,alpha=0.6,facecolor="#7fff00",edgecolor="None")
ax=plt.gca()
# Add the legend manually to the current Axes.
ax.add_artist(first_legend)
ax.set_yticks(np.arange(40.05,40.16,0.01))
ax.set_xticks(np.arange(-88.32,-88.15,0.01))
ellipse1 = Ellipse(xy=(-88.242,40.123),width=0.044,height=0.021,alpha=0.9,facecolor="None",edgecolor="hotpink",lw=2)
ellipse2 = Ellipse(xy=(-88.201,40.101),width=0.023,height=0.028,alpha=0.9,facecolor="None",edgecolor="hotpink",lw=2)
ax.add_patch(ellipse1)
ax.add_patch(ellipse2)
ax.add_patch(rectangle1)
ax.add_patch(rectangle2)
ax.add_patch(rectangle3)
ax.add_patch(rectangle4)
ax.add_patch(rectangle5)
ax.add_patch(rectangle6)
second_legend = plt.legend(title="Density of Bus Stops",prop={'size':14},handles=[rectangle1,rectangle3,rectangle5],labels=["highest","very high","high"],loc=2)
ax.add_artist(second_legend)
plt.savefig('Shapes of Routes with Trips.svg', bbox_inches='tight')
plt.savefig('Shapes of Routes with Trips.png', bbox_inches='tight')
###Output
_____no_output_____ |
mnist/SVD_VarDrop.ipynb | ###Markdown
Downloading the dataset
###Code
# imports needed by this notebook (the original import cell is not included in this excerpt)
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms

train_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('../data', train=False, transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])),
batch_size=1000, shuffle=True)
###Output
_____no_output_____
###Markdown
Basic fully-connected MNIST architecture with helper functions
###Code
# helper function for model evaluation
def eval(model, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data, target
output = F.log_softmax(model(data), dim=1)
test_loss += F.nll_loss(output, target, reduction='sum').item()
pred = output.max(1, keepdim=True)[1]
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
class MnistNet(nn.Module):
def __init__(self):
super(MnistNet,self).__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 10)
def forward(self,x):
x = x.view(x.size(0), -1)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
# training baseline model
from tqdm import trange
net = MnistNet()
optimizer = optim.Adam(net.parameters(), lr=1e-3)
num_epoch = 10
for epoch in trange(num_epoch):
for data, target in train_loader:
optimizer.zero_grad()
data, target = data, target
output = F.log_softmax(net(data), dim=1)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
eval(net, test_loader)
torch.save(net.state_dict(), 'baseline_mnist.model')
###Output
_____no_output_____
###Markdown
SVD SVD, no finetuning
###Code
net_full_svd = MnistNet()
net_full_svd.load_state_dict(torch.load('baseline_mnist.model'))
k = 25
for i, name in enumerate(['fc1.weight']):
weight = net_full_svd.state_dict()[name].numpy()
u, s, vT = np.linalg.svd(weight, full_matrices=False)
s[k:] = 0
net_full_svd.state_dict()[name].copy_(torch.tensor(u @ np.diag(s) @ vT))
eval(net_full_svd, test_loader)
num_params = 784 * 256
new_num_params = 784 * k + k * k + k * 256
print('Compression rate: x{:.4f}'.format(1. * num_params / new_num_params))
###Output
Test set: Average loss: 0.1509, Accuracy: 9628/10000 (96%)
Compression rate: x7.5382
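###Markdown
The parameter count above assumes the rank-$k$ factorization is stored as three dense factors $U \in \mathbb{R}^{784 \times k}$, $S \in \mathbb{R}^{k \times k}$ and $V^T \in \mathbb{R}^{k \times 256}$, giving $784k + k^2 + 256k$ weights in place of the original $784 \cdot 256$; storing $S$ as just its $k$ diagonal values would compress slightly further.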
###Markdown
SVD, finetuning last
###Code
net_svd = MnistNet()
net_svd.load_state_dict(torch.load('baseline_mnist.model'))
k = 15
for i, name in enumerate(['fc1.weight']):
weight = net_svd.state_dict()[name].numpy()
u, s, vT = np.linalg.svd(weight, full_matrices=False)
s[k:] = 0
net_svd.state_dict()[name].copy_(torch.tensor(u @ np.diag(s) @ vT))
num_epoch = 5
optimizer = optim.Adam(list(net_svd.parameters())[2:], lr=1e-3)
for epoch in trange(num_epoch):
for data, target in train_loader:
optimizer.zero_grad()
data, target = data, target
output = F.log_softmax(net_svd(data), dim=1)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
eval(net_svd, test_loader)
num_params = 784 * 256
new_num_params = 784 * k + k * k + k * 256
print('Compression rate: x{:.4f}'.format(1. * num_params / new_num_params))
###Output
Test set: Average loss: 0.1244, Accuracy: 9645/10000 (96%)
Compression rate: x12.6827
###Markdown
Sparse Variational Dropout Linear variational layer
###Code
class varLinear(nn.Module):
def __init__(self, shape, prune_val):
super(varLinear, self).__init__()
self.weight = nn.Parameter((0.02) ** 0.5 * torch.randn(shape[1], shape[0]))
self.logstd = nn.Parameter(-5.0 * torch.ones(shape[1], shape[0]))
self.bias = nn.Parameter(torch.zeros(1, shape[1]))
self.prune_val = prune_val
self.training = True
def forward(self, x):
self.log_alpha = self.logstd * 2.0 - 2.0 * torch.log(1e-16 + torch.abs(self.weight))
self.log_alpha = torch.clamp(self.log_alpha, -10, 10)
if self.training:
lrt_mean = F.linear(x, self.weight) + self.bias
lrt_std = torch.sqrt(F.linear(x * x, torch.exp(self.logstd * 2.0) + 1e-8))
eps = torch.randn_like(lrt_std)
return lrt_mean + lrt_std * eps
pruned = self.weight * (self.log_alpha < self.prune_val).float()
return F.linear(x, pruned) + self.bias
def kl(self):
# KL divergence approximation (Molchanov et al.)
k1, k2, k3 = torch.Tensor([0.63576]), torch.Tensor([1.8732]), torch.Tensor([1.48695])
kl = k1 * torch.sigmoid(k2 + k3 * self.log_alpha) - 0.5 * torch.log1p(torch.exp(-self.log_alpha))
kl = - kl.sum()
return kl
###Output
_____no_output_____
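###Markdown
For reference, the `kl()` method above uses the closed-form approximation of Molchanov et al., "Variational Dropout Sparsifies Deep Neural Networks" (2017): $-D_{KL}\bigl(q(w)\,\|\,p(w)\bigr) \approx k_1\,\sigma(k_2 + k_3 \log\alpha) - \tfrac{1}{2}\log\bigl(1 + \alpha^{-1}\bigr) + \mathrm{const}$, where $\sigma$ is the sigmoid function, $\alpha$ is the per-weight dropout parameter, and $k_1 = 0.63576$, $k_2 = 1.8732$, $k_3 = 1.48695$; the method drops the constant and returns the (positive) KL summed over all weights, which is then added to the loss.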
###Markdown
MLP with linear variational layer
###Code
class MnistNetVar(nn.Module):
def __init__(self, prune_val):
super(MnistNetVar, self).__init__()
self.fc1 = varLinear((784, 256), prune_val)
self.fc2 = varLinear((256, 10), prune_val)
self.prune_val = prune_val
def forward(self, x):
x = x.view(x.size(0), -1)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
varnet = MnistNetVar(prune_val=3.0)
optimizer = optim.Adam(varnet.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50,60,70,80], gamma=0.2)
kl_weight = 0.07
epochs = 100
for epoch in range(1, epochs + 1):
scheduler.step()
varnet.train()
kl_weight = min(kl_weight+0.07, 1) # warming-up kl
for batch_idx, (input, target) in enumerate(train_loader):
optimizer.zero_grad()
output = F.log_softmax(varnet(input), dim=1)
loss = F.nll_loss(output, target) * len(train_loader.dataset)
for module in varnet.children():
loss += module.kl() * kl_weight
loss.backward()
optimizer.step()
eval(varnet, test_loader)
num_params = 784 * 256 + 256 * 10
new_num_params = 0
shapes = [784 * 256, 256 * 10]
for i, c in enumerate(varnet.children()):
new_num_params += (c.log_alpha.data.numpy() < varnet.prune_val).mean() * shapes[i]
print('Compression rate: x{:.4f}'.format(1. * num_params / new_num_params))
###Output
Test set: Average loss: 0.0584, Accuracy: 9841/10000 (98%)
Compression rate: x23.3933
|
07_numbers_errors/.ipynb_checkpoints/07-numbers-checkpoint.ipynb | ###Markdown
07 Numbers* *Computational Physics*: Ch 2.4, 2.5, 3* Python Tutorial [Floating Point Arithmetic: Issues and Limitations](https://docs.python.org/3/tutorial/floatingpoint.html) Binary representationComputers store information with two-state logic. This can be represented in a binary system with numbers 0 and 1 (i.e. base 2)Any number can be represented in any base as a polynomial (possibly with infinitely many terms): the digits are $0 \leq x_k < b$ and determine the contribution of the base $b$ raise to the $k$th power.$$q_b = \sum_{k=-\infty}^{+\infty} x_k b^k$$ Integers Convert 10 (base 10, i.e. $1 \times 10^1 + 0\times 10^0$) into binary (Note: `divmod(x, 2)` is `x // 2, x % 2`, i.e. integer division and remainder):
###Code
divmod(10, 2)
divmod(5, 2)
divmod(2, 2)
###Output
_____no_output_____
###Markdown
The binary representation of $10_{10}$ is $1010_2$ (keep dividing until there's only 1 left, then collect the 1 and all remainders in reverse order, essentially long division). Double check by multiplying out $1010_2$:
###Code
1*2**3 + 0*2**2 + 1*2**1 + 0*2**0
###Output
_____no_output_____
###Markdown
or in Python
###Code
int('0b1010', 2)
0b1010
###Output
_____no_output_____
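###Markdown
The repeated-`divmod` recipe can be wrapped into a small helper function (a sketch for illustration; Python's built-in `bin()` does the same job):
###Code
def to_binary(n):
    """Binary representation of a non-negative integer via repeated division by 2."""
    digits = []
    while n > 0:
        n, remainder = divmod(n, 2)
        digits.append(str(remainder))
    return ''.join(reversed(digits)) or '0'

to_binary(10)   # '1010', compare bin(10)
###Output
_____no_output_____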
###Markdown
Summary: Integers in binary representation**All integers are exactly representable in base 2 with a finite number of digits**.* The sign (+ or –) is represented by a single bit (0 = +, 1 = –). * The number of available "bits" (digits) determines the largest representable integer. For example, with 8 bits available (a "*byte*"), what is the largest and smallest integer?
###Code
0b1111111 # 7 bits for number, 1 for sign (not included)
-0b1111111
###Output
_____no_output_____
###Markdown
Sidenote: using numpy to quickly convert integersIf you want to properly sum all terms, use numpy arrays and the element-wise operations:
###Code
import numpy as np
nbits = 7
exponents = np.arange(nbits)
bases = 2*np.ones(nbits) # base 2
digits = np.ones(nbits) # all 1, for 1111111 (127 in binary)
np.sum(digits * bases**exponents)
###Output
_____no_output_____
###Markdown
Examples: limits of integersWhat is the smallest and largest integer that you can represent 1. if you have 4 bits available and only consider non-negative ("unsigned") integers? 2. if you have 32 bits and consider positive and negative integers? 3. if you have 64 bits and consider positive and negative integers? Smallest and largest 4 bit unsigned integer:
###Code
0b0000
0b1111
###Output
_____no_output_____
###Markdown
Smallest and largest 32-bit signed integer (int32): 1 bit is the sign, 31 bits are available, so the highest number has 31 ones (111...11111). The number *one higher* is 1000...000, a one followed by 31 zeroes (32 bits in total), i.e., $2^{31}$. Thus, the highest number is $2^{31} - 1$:
###Code
2**31 - 1
###Output
_____no_output_____
###Markdown
(and the smallest number in this simple sign-bit picture is just $-(2^{31} - 1)$; two's-complement hardware, as used by NumPy below, actually reaches $-2^{31}$) And int64 (signed):
###Code
max64 = 2**(64-1) - 1
print(-max64, max64)
###Output
-9223372036854775807 9223372036854775807
###Markdown
Python's arbitrary precision integers In Python, integers *have arbitrary precision*: integer arithmetic (`+`, `-`, `*`, `//`) is exact and will not overflow. Thus the following code will run forever (until memory is exhausted); if you run it, you can stop the evaluation with the ''Kernel / Interrupt'' menu command in the notebook and then investigate `n` and `nbits`:
###Code
n = 1
nbits = 1
while True:
n *= 2
nbits += 1
type(n)
int.bit_length(n)
nbits
###Output
_____no_output_____
###Markdown
NumPy has fixed precision integersNumPy data types (dtypes) are fixed precision. Overflows "wrap around":
###Code
import numpy as np
np.array([2**15-1], dtype=np.int16)
np.array([2**15], dtype=np.int16)
np.array([2**15 + 1], dtype=np.int16)
###Output
_____no_output_____
###Markdown
Binary fractionsDecimal fractions can be represented as binary fractions:Convert $0.125_{10}$ to base 2:
###Code
0.125 * 2 # 0.0
_ * 2 # 0.00
_ * 2 # 0.001
###Output
_____no_output_____
###Markdown
Thus the binary representation of $0.125_{10}$ is $0.001_2$. General recipe:- multiply by 2- if you get a number < 1, add a digit 0 to the right- if you get a number ≥ 1, add a digit 1 to the right and then use the remainder in the same fashion
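###Markdown
A minimal helper implementing this recipe (a sketch for illustration), stopping after a fixed number of bits because the expansion may repeat forever:
###Code
def frac_to_binary(x, max_bits=20):
    """Binary expansion of a fraction 0 <= x < 1 via repeated multiplication by 2."""
    bits = []
    for _ in range(max_bits):
        x *= 2
        if x >= 1:
            bits.append('1')
            x -= 1
        else:
            bits.append('0')
        if x == 0:
            break
    return '0.' + ''.join(bits)

frac_to_binary(0.625)   # '0.101'
###Output
_____no_output_____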
###Code
0.3125 * 2 # 0.0
_ * 2 # 0.01
(_ - 1) * 2 # 0.010
_ * 2 # 0.0101
###Output
_____no_output_____
###Markdown
Thus, 0.3125 is $0.0101_2$. What is the binary representation of decimal $0.1 = \frac{1}{10}$?
###Code
0.1 * 2 # 0
_ * 2 # 0
_ * 2 # 0
_ * 2 # 1
(_ - 1) * 2 # 1
(_ - 1) * 2 # 0
_ * 2 # 0
_ * 2 # 1
###Output
_____no_output_____
###Markdown
... etc: this is an infinitely repeating fraction and the binary representation of $0.1_{10}$ is $0.000 1100 1100 1100 ..._2$.**Thus, with a finite number of bits, 0.1 is not exactly representable in the computer.** The number 0.1 is not stored exactly in the computer. `print` only shows you a convenient approximation:
###Code
print(0.1)
print("{0:.55f}".format(0.1))
###Output
_____no_output_____
###Markdown
Problems with floating point arithmeticOnly a subset of all real numbers can be represented with **floating point numbers of finite bit size**. Almost all floating point numbers are not exact:
###Code
0.1 + 0.1 + 0.1 == 0.3
###Output
_____no_output_____
###Markdown
... which should have yielded `True`! But because the machine representation of 0.1 is not exact, the equality cannot be fulfilled. Representation of floats: IEEE 754Floating point numbers are stored in "scientific notation": e.g. $c = 2.99792458 \times 10^8$ m/s * **mantissa**: $2.99792458$ * **exponent**: $+8$ * **sign**: +Format: $$x = (-1)^s \times 1.f \times 2^{e - \mathrm{bias}}$$($f$ is $M$ bits long. The leading 1 in the mantissa is assumed and not stored: "ghost" or "phantom" bit.) Format: $$x = (-1)^s \times 1.f \times 2^{e - \mathrm{bias}}$$Note: * In IEEE 754, the highest value of $e$ in the exponent is reserved and not used, e.g. for a 32-bit *float* (see below) the exponent has $(30 - 23) + 1 = 8$ bit and hence the highest number for $e$ is $(2^8 - 1) - 1 = 255 - 1 = 254$. Taking the *bias* into account (for *float*, *bias* = 127), the largest value for the exponent is $2^{254 - 127} = 2^{127}$.* The case of $e=0$ is also special. In this case, the format is $$x = (-1)^s \times 0.f \times 2^{-\mathrm{bias}}$$ i.e. the "ghost 1" becomes a zero, gaining an additional order of magnitude. IEEE float (32 bit)IEEE *float* uses **32 bits** * $\mathrm{bias} = 127_{10}$ * bit layout: sign $s$ = bit 31, exponent $e$ = bits 30–23, fraction $f$ = bits 22–0 * **six or seven decimal places of significance** (1 in $2^{23}$) * range: $1.4 \times 10^{-45} \leq |x_{(32)}| \leq 3.4 \times 10^{38}$
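###Markdown
One way to look at the three bit fields directly is to reinterpret a float32 as a 32-bit integer with the `struct` module (a small sketch in the spirit of the Appendix below; not part of the original notebook):
###Code
import struct

def float32_fields(x):
    """Return the (sign, exponent, fraction) bit fields of the float32 representation of x."""
    (i,) = struct.unpack('>I', struct.pack('>f', x))  # raw 32-bit pattern as an unsigned integer
    sign = i >> 31                 # bit 31
    exponent = (i >> 23) & 0xFF    # bits 30-23
    fraction = i & 0x7FFFFF        # bits 22-0
    return sign, exponent, fraction

float32_fields(0.1)
###Output
_____no_output_____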
###Code
1/2**23
###Output
_____no_output_____
###Markdown
IEEE double (64 bit)Python floating point numbers are 64-bit doubles. NumPy has dtypes `float32` and `float64`.IEEE *double* uses **64 bits** * $\mathrm{bias} = 1023_{10}$ * bit layout: sign $s$ = bit 63, exponent $e$ = bits 62–52, fraction $f$ = bits 51–0 * **about 16 decimal places of significance** (1 in $2^{52}$) * range: $4.9 \times 10^{-324} \leq |x_{(64)}| \leq 1.8 \times 10^{308}$
###Code
1/2**52
###Output
_____no_output_____
###Markdown
For numerical calculations, *doubles* are typically required. Special numbersIEEE 754 also introduces special "numbers" that can result from floating point arithmetic* `NaN` (not a number)* `+INF` and `-INF` (infinity)* `-0` (signed zero) Python itself does not use the IEEE special numbers
###Code
1/0
###Output
_____no_output_____
###Markdown
But numpy does:
###Code
np.array([1, -1])/np.zeros(2)
###Output
_____no_output_____
###Markdown
But beware, you cannot use `INF` to "take limits". It is purely a sign that something bad happened somewhere... And **not a number**, `nan`
###Code
np.zeros(2)/np.zeros(2)
###Output
_____no_output_____
###Markdown
Overflow and underflow* underflow: typically just set to zero (and that works well most of the time)* overflow: raises exception or just set to `inf`
###Code
big = 1.79e308
big
2 * big
2 * np.array([big], dtype=np.float64)
###Output
_____no_output_____
###Markdown
... but you can just use an even bigger data type:
###Code
2 * np.array([big], dtype=np.float128)
###Output
_____no_output_____
###Markdown
Insignificant digits
###Code
x = 1000.2
A = 1000.2 - 1000.0
print(A)
A == 0.2
###Output
_____no_output_____
###Markdown
... oops
###Code
x = 700
y = 1e-14
x - y
x - y < 700
###Output
_____no_output_____
###Markdown
... ooops Machine precisionOnly a limited number of floating point numbers can be represented. This *limited precision* affects calculations:
###Code
x = 5 + 1e-16
x
x == 5
###Output
_____no_output_____
###Markdown
... oops. **Machine precision** $\epsilon_m$ is defined as the maximum number that can be added to 1 in the computer without changing that number 1:$$1_c + \epsilon_m := 1_c$$Thus, the *floating point representation* $x_c$ of an arbitrary number $x$ is "in the vicinity of $x$"$$x_c = x(1\pm\epsilon), \quad |\epsilon| \leq \epsilon_m$$where we don't know the true value of $\epsilon$. Thus except for powers of 2 (which are represented exactly) **all floating point numbers contain an unknown error in the 6th decimal place (32 bit floats) or 15th decimal (64 bit doubles)**. This error should be treated as a random error because we don't know its magnitude.
###Code
N = 100
eps = 1
for nbits in range(N):
eps /= 2
one_plus_eps = 1.0 + eps
# print("eps = {0}, 1 + eps = {1}".format(eps, one_plus_eps))
if one_plus_eps == 1.0:
print("machine precision reached for {0} bits".format(nbits))
print("eps = {0}, 1 + eps = {1}".format(eps, one_plus_eps))
break
###Output
_____no_output_____
###Markdown
Compare to our estimate for the precision of float64:
###Code
1/2**52
###Output
_____no_output_____
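###Markdown
NumPy reports the machine epsilon of a dtype directly (`numpy` is already imported as `np` above):
###Code
np.finfo(np.float64).eps, np.finfo(np.float32).eps
###Output
_____no_output_____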
###Markdown
Appendix A quick hack to convert a floating point binary representation to a floating point number.
###Code
bits = "1010.0001100110011001100110011001100110011001100110011"
import math
def bits2number(bits):
if '.' in bits:
integer, fraction = bits.split('.')
else:
integer = bits
fraction = ""
powers = [int(bit) * 2**n for n, bit in enumerate(reversed(integer))]
powers.extend([int(bit) * 2**(-n) for n, bit in enumerate(fraction, start=1)])
return math.fsum(powers)
bits2number(bits)
bits2number('1111')
bits2number('0.0001100110011001100110011001100110011001100110011')
bits2number('0.0001100')
bits2number('0.0101')
bits2number("10.10101")
bits2number('0.0111111111111111111111111111111111111111')
bits2number('0.110011001100')
###Output
_____no_output_____
###Markdown
Python can convert to binary using the `struct` module:
###Code
import struct
fpack = struct.pack('f', 6.0e-8) # pack float into bytes
fint = struct.unpack('i', fpack)[0] # unpack to int
m_bits = bin(fint)[-23:] # mantissa bits
print(m_bits)
###Output
_____no_output_____
###Markdown
With phantom bit:
###Code
mantissa_bits = '1.' + m_bits
print(mantissa_bits)
import math
mn, ex = math.frexp(6.0e-8)
print(mn, ex)
###Output
_____no_output_____ |
1. HSI Intro.ipynb | ###Markdown
Introduction to Hyperspectral Imagery and Image Analysis By Alina Zare1, Taylor Glenn2 and Susan Meerdink11The Machine Learning and Sensing Lab, Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611https://faculty.eng.ufl.edu/machine-learning/2Precision Silver, LLC, Gainesville, FL 32601 http://www.precisionsilver.com **What is a Hyperspectral Image?** * Hyperspectral data measures hundreds of wavelengths of the electromagnetic spectrum (generally 350 nm to 2500 nm, but depends on sensor). * An image from a hyperspectral sensor results in a hyperspectral data cube with two spatial and one spectral dimension (latitude x longitude x number of spectral bands). A spectral band measures a specific wavelength of the electromagnetic spectrum.* Each pixel (latitude x longitude) in the hyperspectral data cube has a spectrum. This spectrum is the result of energy traveling from the sun, through the atmosphere, interacting with the earth's surface, and being reflected back up through the atmosphere to be measured by the sensor. * As mentioned above, a pixel's spectrum is the result of the energy's interaction with the earth's surface. Depending on the material, the amount of energy reflected back will differ across the electromagnetic spectrum. These differences allow us to discriminate between materials in an image.
###Code
# imports and setup
import numpy as np
import os.path
import scipy.io
from loadmat import loadmat
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
default_dpi = mpl.rcParamsDefault['figure.dpi']
mpl.rcParams['figure.dpi'] = default_dpi*2 # run this to make figures larger, often have to execute block multiple times for it to take
###Output
_____no_output_____
###Markdown
The data we will be working with is the 'MUUFL Gulfport Dataset.' This data set is a hyperspectral image cube and a co-registered LiDAR point cloud collected over the University of Mississippi - Gulfpark campus. The data has class/ground cover labels as well as several super- and sub-pixel targets placed throughout the scene. This data can be obtained here: https://github.com/GatorSense/MUUFLGulfport DOI/Reference for the data set: https://doi.org/10.5281/zenodo.1186326 Citation for Technical Report describing the data: P. Gader, A. Zare, R. Close, J. Aitken, G. Tuell, “MUUFL Gulfport Hyperspectral and LiDAR Airborne Data Set,” University of Florida, Gainesville, FL, Tech. Rep. REP-2013-570, Oct. 2013.
###Code
# load gulfport campus image
img_fname = 'muufl_gulfport_campus_w_lidar_1.mat'
spectra_fname = 'tgt_img_spectra.mat'
dataset = loadmat(img_fname)['hsi']
hsi = dataset['Data']
# check out the shape of the data
n_r,n_c,n_b = hsi.shape
hsi.shape
# pull a 'random' pixel/spectrum
# Exercise: Change the rr and cc values to print different pixels/spectra around the image and plot the spectra (next two cells)
rr,cc = 150,150
spectrum = hsi[rr,cc,:]
spectrum
# plot a spectrum
plt.plot(spectrum)
# That last plot would make your advisor sad.
# Label your AXES!
wavelengths = dataset['info']['wavelength']
plt.plot(wavelengths,spectrum)
plt.xlabel('Wavelength (nm)')
plt.ylabel('Reflectance (%)')
plt.ylim([0, 0.6])
plt.xlim([370, 1035])
plt.title(('Spectrum from Pixel ' + str(rr)+ ','+str(cc)))
# plot an image of an individual band
# Exercise: Change the band number in line below to view different bands of the HSI image
plt.imshow(hsi[:,:,30],vmin=0,vmax=.75,cmap='Reds')
plt.colorbar()
plt.title('A single band of Hyperspectral Image in False Color')
# find the band numbers for approximate Red,Green,Blue (RGB) wavelengths
wavelengths[9],wavelengths[20],wavelengths[30]
# make a psuedo-RGB image from appropriate bands
psuedo_rgb = hsi[:,:,(30,20,9)]
psuedo_rgb = np.clip(psuedo_rgb,0,1.0)
plt.imshow(psuedo_rgb)
# Thats too dark. Add some gamma correction
plt.imshow(psuedo_rgb**(1/2.2))
# compare to the provided RGB image (made with better band selection/weighting)
plt.figure(figsize=(10,10))
plt.imshow(dataset['RGB'])
plt.plot(rr,cc,'c*',markersize=10) #label our selected pixel location from the plot above
###Output
_____no_output_____ |
06_Heat_2D/03_Heat_Equation_2D_analytical_solution.ipynb | ###Markdown
Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c) 2019 Daniel Koehn, based on (c)2018 L.A. Barba, G.F. Forsyth [CFD Python](https://github.com/barbagroup/CFDPythoncfd-python), (c)2014 L.A. Barba, I. Hawke, B. Knaepen [Practical Numerical Methods with Python](https://github.com/numerical-mooc/numerical-moocpractical-numerical-methods-with-python), also under CC-BY.
###Code
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
###Output
_____no_output_____
###Markdown
2D Heat Equation: Comparison between analytical and numerical solutionIn this module, we developed a finite difference modelling code to solve the 2D heat equation and optimized the runtime performance using Just-In-Time (JIT) compilation from the `Numba` package. What is still missing is a comparison between an analytical and the corresponding numerical solution. Analytical solution of the 2D Heat EquationA simple (time-dependent) analytical solution for the 2D heat equation \begin{equation}\frac{\partial T}{\partial t} = \alpha \left(\frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} \right) \tag{1}\end{equation}exists for the case that the initial temperature distribution in a full-space model is the Gaussian\begin{equation}T(x,y,t=0) = T_{max} \exp\biggl[\frac{-(x^2+y^2)}{s^2}\biggr]\tag{2}\end{equation}where $T_{max}$ is the maximum amplitude of the temperature perturbation at $(x,y)=(0,0)$ and $s$ its half-width. Then, the analytical solution is\begin{equation}T(x,y,t) = \frac{T_{max}}{1+4t\alpha/s^2} \exp\biggl[\frac{-(x^2+y^2)}{s^2+4t\alpha}\biggr]\tag{3}\end{equation} Exercise 1Solve the above problem using the JIT-optimized FTCS FD-code from the last class and compare the numerical with the analytical solution.Let's start by setting up our Python compute environment and reuse the performance-optimized `ftcs_JIT` code from the last class.
###Code
# Import libraries
import numpy
from matplotlib import pyplot
%matplotlib inline
# import JIT from Numba
from numba import jit
# Set the font family and size to use for Matplotlib figures.
pyplot.rcParams['font.family'] = 'serif'
pyplot.rcParams['font.size'] = 16
# FTCS code to solve the 2D heat equation with JIT optimization
# -------------------------------------------------------------
@jit(nopython=True) # use Just-In-Time (JIT) Compilation for C-performance
def ftcs_JIT(T0, nt, dt, dx, dy, alpha):
"""
Computes and returns the temperature distribution
after a given number of time steps.
Explicit integration using forward differencing
in time and central differencing in space, with
Neumann conditions (zero-gradient) on top and right
boundaries and Dirichlet conditions on bottom and
left boundaries.
Parameters
----------
T0 : numpy.ndarray
The initial temperature distribution as a 2D array of floats.
nt : integer
Maximum number of time steps to compute.
dt : float
Time-step size.
dx : float
Grid spacing in the x direction.
dy : float
Grid spacing in the y direction.
alpha : float
Thermal diffusivity.
Returns
-------
T : numpy.ndarray
The temperature distribution as a 2D array of floats.
"""
# Define some constants.
sigma_x = alpha * dt / dx**2
sigma_y = alpha * dt / dy**2
# Integrate in time.
T = T0.copy()
# Estimate number of grid points in x- and y-direction
ny, nx = T.shape
# Indices of the model center
I, J = int(nx / 2), int(ny / 2)
# Time loop
for n in range(nt):
# store old temperature field
Tn = T.copy()
# loop over spatial grid
for i in range(1,nx-1):
for j in range(1,ny-1):
T[j, i] = (Tn[j, i] +
sigma_x * (Tn[j, i+1] - 2.0 * Tn[j, i] + Tn[j, i-1]) +
sigma_y * (Tn[j+1, i] - 2.0 * Tn[j, i] + Tn[j-1, i]))
return T
###Output
_____no_output_____
###Markdown
Define modelling parameters and initial conditions according to eq. (2). This time, I give you the freedom to define your own model parameters
###Code
# Definition of modelling parameters
# ----------------------------------
Lx = # length of the plate in the x direction [m]
Ly = # height of the plate in the y direction [m]
nx = # number of points in the x direction
ny = # number of points in the y direction
dx = Lx / (nx - 1) # grid spacing in the x direction
dy = Ly / (ny - 1) # grid spacing in the y direction
alpha = # thermal diffusivity of the plate [m^2/s]
# Define the locations along a gridline.
x = numpy.linspace(0.0, Lx, num=nx)
y = numpy.linspace(0.0, Ly, num=ny)
# DEFINE THE INITIAL TEMPERATURE DISTRIBUTION EQ.(2) HERE!
X, Y = numpy.meshgrid(x,y) # coordinates X,Y required to define T0
# I recommend moving the maximum of the initial temperature
# distribution to the center of the model
X = X - Lx/2.
Y = Y - Ly/2.
s = # half-width of the Gaussian function [m]
Tmax = # maximum temperature Tmax [°C]
T0 = # Define initial temperature distribution according to eq.(2)
###Output
_____no_output_____
###Markdown
We don't want our solution blowing up, so let's find a time step with $\frac{\alpha \Delta t}{\Delta x^2} = \frac{\alpha \Delta t}{\Delta y^2} = \frac{1}{4}$. Also, define the number of time steps `nt` you want to model the heat conduction
###Code
# Set the time-step size based on CFL limit.
sigma = 0.25
dt = sigma * min(dx, dy)**2 / alpha # time-step size
nt = # number of time steps to compute
###Output
_____no_output_____
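###Markdown
As a side note (a standard stability result for the explicit FTCS scheme, stated here for reference): the 2D scheme is stable when $\alpha \Delta t \left(\frac{1}{\Delta x^2} + \frac{1}{\Delta y^2}\right) \leq \frac{1}{2}$, and the choice $\sigma = 0.25$ combined with $\min(\Delta x, \Delta y)$ above satisfies this bound.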
###Markdown
After setting all modelling parameters, we can run the `ftcs_JIT` modelling code to compute the numerical solution after `nt` time steps
###Code
# Compute the temperature distribution after nt timesteps
T = ftcs_JIT(T0, nt, dt, dx, dy, alpha)
###Output
_____no_output_____
###Markdown
Compute the temperature field of the analytical solution
###Code
# DEFINE ANALYTICAL SOLUTION EQ. (3) HERE!
t = nt * dt # maximum modelling time of the FD code
T_analytical =
###Output
_____no_output_____
###Markdown
In order to compare the different solutions, we first plot the analytical temperature distribution. Depending on how you defined the problem, you might have to adjust the temperature range in the `levels` array
###Code
# Plot the filled contour of the analytical temperature distribution
pyplot.figure(figsize=(7.0, 6.0))
pyplot.xlabel('x [m]')
pyplot.ylabel('y [m]')
levels = numpy.linspace(0.0, 80.0, num=101)
contf = pyplot.contourf(x, y, T_analytical, levels=levels)
cbar = pyplot.colorbar(contf)
cbar.set_label('Temperature [°C]')
pyplot.axis('scaled', adjustable='box');
###Output
_____no_output_____
###Markdown
Plot the numerical temperature distribution from the `ftcs_JIT` code. Be sure that the `levels` in this plot are identical to the ones in the analytical solution plot above to allow a fair comparison.
###Code
# Plot the filled contour of the temperature distribution from the ftcs_JIT code
pyplot.figure(figsize=(7.0, 6.0))
pyplot.xlabel('x [m]')
pyplot.ylabel('y [m]')
levels = numpy.linspace(0.0, 80.0, num=101)
contf = pyplot.contourf(x, y, T, levels=levels)
cbar = pyplot.colorbar(contf)
cbar.set_label('Temperature [°C]')
pyplot.axis('scaled', adjustable='box');
###Output
_____no_output_____
###Markdown
Plot the difference between numerical and analytical solution. You have to adapt the `levels` to the temperature difference between numerical and analytical solution.
###Code
# Plot the filled contour of the temperature distribution from the ftcs_JIT code
pyplot.figure(figsize=(7.0, 6.0))
pyplot.xlabel('x [m]')
pyplot.ylabel('y [m]')
levels = numpy.linspace(-1e-1, 1e-1, num=101)
contf = pyplot.contourf(x, y, T-T_analytical, levels=levels)
cbar = pyplot.colorbar(contf)
cbar.set_label(r'$T_{FD} - T_{analytical}$ [°C]')
pyplot.axis('scaled', adjustable='box');
###Output
_____no_output_____ |
Notebooks/ch5_resampling_methods_applied.ipynb | ###Markdown
5. Resampling Methods – AppliedExercises from **Chapter 5** of [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/) by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani.I've elected to use Python instead of R.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from IPython.display import display, HTML
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn import datasets
from scipy import stats
def confusion_table(confusion_mtx):
"""Renders a nice confusion table with labels"""
confusion_df = pd.DataFrame({'y_pred=0': np.append(confusion_mtx[:, 0], confusion_mtx.sum(axis=0)[0]),
'y_pred=1': np.append(confusion_mtx[:, 1], confusion_mtx.sum(axis=0)[1]),
'Total': np.append(confusion_mtx.sum(axis=1), ''),
'': ['y=0', 'y=1', 'Total']}).set_index('')
return confusion_df
def total_error_rate(confusion_mtx):
"""Derive total error rate from confusion matrix"""
return 1 - np.trace(confusion_mtx) / np.sum(confusion_mtx)
###Output
_____no_output_____
###Markdown
5. In Chapter 4, we used logistic regression to predict the probability of default using income and balance on the Default data set. We will now estimate the test error of this logistic regression model using the validation set approach. Do not forget to set a random seed before beginning your analysis.
###Code
default_df = pd.read_csv('./data/Default.csv', index_col='Unnamed: 0')
default_df = default_df.reset_index().drop('index', axis=1)
# Check for missing
assert default_df.isna().sum().sum() == 0
# Rationalise types
default_df = pd.get_dummies(default_df, dtype=np.float64).drop(['default_No', 'student_No'], axis=1)
display(default_df.head())
###Output
_____no_output_____
###Markdown
(a) Fit a logistic regression model that uses income and balance to predict default. (b) Using the validation set approach, estimate the test error of this model. In order to do this, you must perform the following steps:- i. Split the sample set into a training set and a validation set.- ii. Fit a multiple logistic regression model using only the training observations.- iii. Obtain a prediction of default status for each individual in the validation set by computing the posterior probability of default for that individual, and classifying the individual to the default category if the posterior probability is greater than 0.5.- iv. Compute the validation set error, which is the fraction of the observations in the validation set that are misclassified. (c) Repeat the process in (b) three times, using three different splits of the observations into a training set and a validation set. Com- ment on the results obtained.
###Code
for s in range(1,4):
display(HTML('<h3>Random seed = {}</h3>'.format(s)))
# Create index for 50% holdout set
np.random.seed(s)
train = np.random.rand(len(default_df)) < 0.5
response = 'default_Yes'
predictors = ['income', 'balance']
X_train = np.array(default_df[train][predictors])
X_test = np.array(default_df[~train][predictors])
y_train = np.array(default_df[train][response])
y_test = np.array(default_df[~train][response])
# Logistic regression
logit = LogisticRegression()
model_logit = logit.fit(X_train, y_train)
# Predict
y_pred = model_logit.predict(X_test)
# Analysis
confusion_mtx = confusion_matrix(y_test, y_pred)
display(confusion_table(confusion_mtx))
total_error_rate_pct = np.around(total_error_rate(confusion_mtx) * 100, 4)
print('total_error_rate: {}%'.format(total_error_rate_pct))
###Output
_____no_output_____
###Markdown
(d) Now consider a logistic regression model that predicts the probability of default using income, balance, and a dummy variable for student. Estimate the test error for this model using the validation set approach. Comment on whether or not including a dummy variable for student leads to a reduction in the test error rate.
###Code
for s in range(1,4):
display(HTML('<h3>Random seed = {}</h3>'.format(s)))
# Create index for 50% holdout set
np.random.seed(s)
train = np.random.rand(len(default_df)) < 0.5
response = 'default_Yes'
predictors = ['income', 'balance', 'student_Yes']
X_train = np.array(default_df[train][predictors])
X_test = np.array(default_df[~train][predictors])
y_train = np.array(default_df[train][response])
y_test = np.array(default_df[~train][response])
# Logistic regression
logit = LogisticRegression()
model_logit = logit.fit(X_train, y_train)
# Predict
y_pred = model_logit.predict(X_test)
# Analysis
confusion_mtx = confusion_matrix(y_test, y_pred)
display(confusion_table(confusion_mtx))
total_error_rate_pct = np.around(total_error_rate(confusion_mtx) * 100, 4)
print('total_error_rate: {}%'.format(total_error_rate_pct))
###Output
_____no_output_____
###Markdown
**Comment**It is difficult to discern if the student predictor has improved the model because of the variation in results. 6. We continue to consider the use of a logistic regression model to predict the probability of default using income and balance on the Default data set. In particular, we will now compute estimates for the standard errors of the income and balance logistic regression coefficients in two different ways: (1) using the bootstrap, and (2) using the standard formula for computing the standard errors in the glm() function. Do not forget to set a random seed before beginning your analysis. (a) Using the summary() and glm() functions, determine the estimated standard errors for the coefficients associated with income and balance in a multiple logistic regression model that uses both predictors.
###Code
response = 'default_Yes'
predictors = ['income', 'balance']
X_all = sm.add_constant(np.array(default_df[predictors]))
y_all = np.array(default_df[response])
## Logistic regression
model_logit = smf.Logit(y_all, X_all).fit(disp=False);
# Summary
print(model_logit.summary())
statsmodels_est = pd.DataFrame({'coef_sm': model_logit.params, 'SE_sm': model_logit.bse})
display(statsmodels_est)
###Output
Logit Regression Results
==============================================================================
Dep. Variable: y No. Observations: 10000
Model: Logit Df Residuals: 9997
Method: MLE Df Model: 2
Date: Fri, 21 Sep 2018 Pseudo R-squ.: 0.4594
Time: 19:43:26 Log-Likelihood: -789.48
converged: True LL-Null: -1460.3
LLR p-value: 4.541e-292
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
const -11.5405 0.435 -26.544 0.000 -12.393 -10.688
x1 2.081e-05 4.99e-06 4.174 0.000 1.1e-05 3.06e-05
x2 0.0056 0.000 24.835 0.000 0.005 0.006
==============================================================================
Possibly complete quasi-separation: A fraction 0.14 of observations can be
perfectly predicted. This might indicate that there is complete
quasi-separation. In this case some parameters will not be identified.
###Markdown
**NOTE:** Apparent bug in statsmodels.discrete.discrete_model.LogitResults.summary. Std error x2 is misrepresented in the summary table; it is easy to misread this as lower than the standard error for x1 when in fact it is not (see table above). (b) Write a function, boot.fn(), that takes as input the Default data set as well as an index of the observations, and that outputs the coefficient estimates for income and balance in the multiple logistic regression model.
###Code
def boot_fn(df, idx):
response = 'default_Yes'
predictors = ['income', 'balance']
X = sm.add_constant(np.array(df[predictors].loc[idx]));
y = np.array(df[response].loc[idx])
# Logistic regression
model_logit = smf.Logit(y, X).fit(disp=False);
return model_logit.params;
###Output
_____no_output_____
###Markdown
(c) Use the boot() function together with your boot.fn() function to estimate the standard errors of the logistic regression coefficients for income and balance.
###Code
def boot_idx(n):
"""Return index for bootstrap sample of size n
e.g. generate array in range 0 to n, with replacement"""
return np.random.randint(low=0, high=n, size=n)
def boot(fn, data_df, samples):
"""Perform bootstrap for B number of samples"""
results = []
for s in range(samples):
Z = fn(data_df, boot_idx(data_df.shape[0]))
results += [Z]
return np.array(results)
def standard_deviation(X):
"""Compute deviation error for jth element in matrix X
equivalent to np.std(X, axis=0)"""
X_bar = np.mean(X, axis=0)
SE = np.sqrt((np.sum(np.square(X - X_bar), axis=0)) / (len(X)))
return SE
B = 10000
coef_preds = boot(boot_fn, default_df, samples=B)
coef_pred = np.mean(coef_preds, axis=0)
standard_errs = standard_deviation(coef_preds)
bootstrap_est = pd.DataFrame({'coef_boot': coef_pred, 'SE_boot': standard_errs})
display(bootstrap_est)
###Output
_____no_output_____
###Markdown
(d) Comment on the estimated standard errors obtained using the glm() function and using your bootstrap function.
###Code
pd.concat([statsmodels_est, bootstrap_est], axis=1)
###Output
_____no_output_____
###Markdown
Let's compare the standard errors estimated by the statsmodels (_sm) summary() function with estimates obtained by bootstrap (_boot) in the table above. The standard errors for x1 and x2 (rows 1 and 2) are indistinguishable to 6 decimal places. The coefficient for x2 and the statistics for the intercept x0 vary slightly.Note that the disparity is slightly more significant when fewer bootstrap samples are used. Here 10,000 were used, but the estimates were alike to within the same order of magnitude with only 10 bootstrap samples.**QUESTION:** Why are the standard errors provided by statsmodels equivalent to the standard deviations by my calculations, when $SE = \frac{σ}{\sqrt{n}}$ ? 7. In Sections 5.3.2 and 5.3.3, we saw that the cv.glm() function can be used in order to compute the LOOCV test error estimate. Alternatively, one could compute those quantities using just the glm() and predict.glm() functions, and a for loop. You will now take this approach in order to compute the LOOCV error for a simple logistic regression model on the Weekly data set. Recall that in the context of classification problems, the LOOCV error is given in (5.4). (a) Fit a logistic regression model that predicts Direction using Lag1 and Lag2.
###Code
# Load data
weekly_df = pd.read_csv('./data/Weekly.csv')
# Check for missing data
assert weekly_df.isnull().sum().sum() == 0
# Pre-processing
weekly_df = pd.get_dummies(weekly_df).drop('Direction_Down', axis=1)
weekly_df.head()
# Create index for 50% holdout set
np.random.seed(s)
train = np.random.rand(len(weekly_df)) < 0.5
response = 'Direction_Up'
predictors = ['Lag1', 'Lag2']
X_train = sm.add_constant(np.array(weekly_df[train][predictors]))
X_test = sm.add_constant(np.array(weekly_df[~train][predictors]))
y_train = np.array(weekly_df[train][response])
y_test = np.array(weekly_df[~train][response])
# Logistic regression
logit = LogisticRegression()
model_logit = logit.fit(X_train, y_train)
# Predict
y_pred = model_logit.predict(X_test)
# Analysis
confusion_mtx = confusion_matrix(y_test, y_pred)
display(confusion_table(confusion_mtx))
total_error_rate_pct = np.around(total_error_rate(confusion_mtx) * 100, 4)
print('total_error_rate: {}%'.format(total_error_rate_pct))
###Output
_____no_output_____
###Markdown
(b) Fit a logistic regression model that predicts Direction using Lag1 and Lag2 using all but the first observation.
###Code
# Create index for LOOCV
train = weekly_df.index > 0
response = 'Direction_Up'
predictors = ['Lag1', 'Lag2']
X_train = np.array(weekly_df[train][predictors])
X_test = np.array(weekly_df[~train][predictors])
y_train = np.array(weekly_df[train][response])
y_test = np.array(weekly_df[~train][response])
# Logistic regression
logit = LogisticRegression(fit_intercept=True)
model_logit = logit.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
(c) Use the model from (b) to predict the direction of the first observation. You can do this by predicting that the first observation will go up if P(Direction="Up"|Lag1, Lag2) > 0.5. Was this observation correctly classified?
###Code
# Predict
y_pred = model_logit.predict(X_test)
# Analysis
confusion_mtx = confusion_matrix(y_test, y_pred)
display(confusion_table(confusion_mtx))
total_error_rate_pct = np.around(total_error_rate(confusion_mtx) * 100, 4)
print('total_error_rate: {}%'.format(total_error_rate_pct))
###Output
_____no_output_____
###Markdown
The observation was incrorectly classified. (d) Write a for loop from i=1 to i=n, where n is the number of observations in the data set, that performs each of the following steps:- i. Fit a logistic regression model using all but the ith observation to predict Direction using Lag1 and Lag2.- ii. Compute the posterior probability of the market moving up for the ith observation.- iii. Use the posterior probability for the ith observation in order to predict whether or not the market moves up.- iv. Determine whether or not an error was made in predicting the direction for the ith observation. If an error was made, then indicate this as a 1, and otherwise indicate it as a 0.
###Code
response = 'Direction_Up'
predictors = ['Lag1', 'Lag2']
y_pred = []
for i in range(weekly_df.shape[0]):
# Create index for LOOCV
train = weekly_df.index != i
X_train = np.array(weekly_df[train][predictors])
X_test = np.array(weekly_df[~train][predictors])
y_train = np.array(weekly_df[train][response])
# Logistic regression
logit = LogisticRegression()
model_logit = logit.fit(X_train, y_train)
# Predict
y_pred += [model_logit.predict(X_test)]
y_pred = np.array(y_pred)
y_test = weekly_df[response]
###Output
_____no_output_____
###Markdown
(e) Take the average of the n numbers obtained in (d)iv in order to obtain the LOOCV estimate for the test error. Comment on the results.
###Code
# Analysis
confusion_mtx = confusion_matrix(y_test, y_pred)
display(confusion_table(confusion_mtx))
total_error_rate_pct = np.around(total_error_rate(confusion_mtx) * 100, 4)
print('total_error_rate: {}%'.format(total_error_rate_pct))
###Output
_____no_output_____
###Markdown
LOOCV yields an estimated test error rate of 45.0%, higher than the 41.8% error rate observed with a 50% holdout set.The LOOCV approach allows the model to train on a 2x - 1 larger training set than 50% holdout, which could be significant if we were to experiment with more flexible models that require more observations to mitigate over-fitting. LOOCV also yields a 2x increase in the effective test set over 50% holdout. The above means that there is less bias in the training and test sets exposed to our model by LOOCV than for 50% holdout. This suggests that the lower error observed by 50% holdout is due to increased bias in the datasets, or model overfitting. We expect the LOOCV result to exhibit higher variance than the hold-out approach, that is, we expect to observe a different error score for some other sample of observations. 8. We will now perform cross-validation on a simulated data set. (a) Generate a simulated data set as follows:```> set.seed(1)> x=rnorm(100)> y=x-2*x^2+rnorm(100)``` In this data set, what is n and what is p? Write out the model used to generate the data in equation form.
###Code
np.random.seed(1)
mu, sigma = 0, 1 # mean and standard deviation
x = np.random.normal(mu, sigma, 100)
y = ((x-2) * (x**2)) + np.random.normal(mu, sigma, 100)
###Output
_____no_output_____
###Markdown
$y = (x-2)x^2 + ϵ$ $y = x^3 + (-2x^2) + ϵ$ $y = -2x^2 + x^3 + ϵ$ $y = β_0 + β_1 x^2 + β_2 x^3 + ϵ$ Where: $n = 100$ $p = 2$ $β_0 = 0$ $β_1 = -2$ $β_2 = 1$ (b) Create a scatterplot of X against Y . Comment on what you find.
###Code
ax = sns.scatterplot(x=x, y=y)
plt.xlabel('x')
plt.ylabel('y')
###Output
_____no_output_____
###Markdown
The above plot shows x plotted against y. It shows a non-linear relationship with some variance. The shape of the plot is has two maxima/minima, typical of a 3rd order polynomial. (c) Set a random seed, and then compute the LOOCV errors that result from fitting the following four models using least squares:- i. $Y = β_0 + β_1 X + ϵ$ - ii. $Y = β_0 + β_1 X + β_2 X^2 + ϵ$ - iii. $Y = β_0 + β_1 X + β_2 X^2 + β_3 X^3 + ϵ$ - iv. $Y = β_0 + β_1 X + β_2 X^2 + β_3 X^3 + β_4 X^4 + ϵ$
###Code
def mse(y_pred, y):
"""Calculate mean squared error"""
return np.sum(np.square(y_pred - y)) / y.size
def sim_loocv(seed):
"""Run loocv on simulated data generated with random seed provided"""
# Generate simulated data
np.random.seed(seed)
mu, sigma = 0, 1 # mean and standard deviation
x = np.random.normal(mu, sigma, 100)
y = ((x-2) * (x**2)) + np.random.normal(mu, sigma, 100)
sim_df = pd.DataFrame({'x': x, 'y': y})
formulae = {'x' : 'y ~ x',
'x^2' : 'y ~ x + np.power(x, 2)',
'x^3' : 'y ~ x + np.power(x, 2) + np.power(x, 3)',
'x^4' : 'y ~ x + np.power(x, 2) + np.power(x, 3) + np.power(x, 4)'}
errors = {}
for f in formulae:
# predictions state
y_pred = pd.Series({})
for i in range(sim_df.shape[0]):
# Create index for LOOCV
train = sim_df.index != i
# Linear regression
model_ols = smf.ols(formula=formulae[f], data=sim_df[train]).fit()
## Predict
y_hat = model_ols.predict(exog=sim_df[~train])
y_pred = pd.concat([y_pred, y_hat])
errors[f] = mse(np.array(y_pred), y)
display(HTML('<h3>MSE</h3>'))
display(errors)
sim_loocv(1)
###Output
_____no_output_____
###Markdown
(d) Repeat (c) using another random seed, and report your results. Are your results the same as what you got in (c)? Why?
###Code
sim_loocv(2)
###Output
_____no_output_____
###Markdown
Changing the random seed that is used to simulate the observations has a large effect on the observed mean squared error. By changing the random seed we have changed the sample of observations taken from the population, so this change in error is due to variance in our sample. In this case the sample size is small (n = 100), so we might expect quite high variance between successive samples, which leads to variability between the MSE values observed for different samples of X, Y. (e) Which of the models in (c) had the smallest LOOCV error? Is this what you expected? Explain your answer.In test c), model iv with an $x^4$ term exhibited the lowest error, marginally lower than model iii which also performed well. I expected the lowest error to be observed for model iii) because this is simulated data and we know the true function f(x) in this case is a 3rd order polynomial. Interestingly, the second test with a different seed yields the expected result – lowest error for the x^3 model. (f) Comment on the statistical significance of the coefficient estimates that results from fitting each of the models in (c) using least squares. Do these results agree with the conclusions drawn based on the cross-validation results?
###Code
# Generate simulated data
np.random.seed(1)
mu, sigma = 0, 1 # mean and standard deviation
x = np.random.normal(mu, sigma, 100)
y = ((x-2) * (x**2)) + np.random.normal(mu, sigma, 100)
sim_df = pd.DataFrame({'x': x, 'y': y})
formulae = {'x' : 'y ~ x',
'x^2' : 'y ~ x + np.power(x, 2)',
'x^3': 'y ~ x + np.power(x, 2) + np.power(x, 3)',
'x^4' : 'y ~ x + np.power(x, 2) + np.power(x, 3) + np.power(x, 4)'}
errors = {}
for f in formulae:
# predictions state
y_pred = pd.Series({})
for i in range(sim_df.shape[0]):
# Create index for LOOCV
train = sim_df.index != i
# Linear regression
model_ols = smf.ols(formula=formulae[f], data=sim_df[train]).fit()
## Predict
y_hat = model_ols.predict(exog=sim_df[~train])
y_pred = pd.concat([y_pred, y_hat])
errors[f] = mse(np.array(y_pred), y)
display(model_ols.summary())
###Output
_____no_output_____
###Markdown
**Comment** The p-values associated with the feature-wise t-statistics for the model show that p < 0.05 for features $x^0$, $x^1$, $x^2$, and $x^4$. Note that there is not enough evidence in this model to reject the null hypothesis for $x^3$, that $H_0: β_3 = 0$. These statistics suggest that the strongest model will include features $x^0$, $x^1$, $x^2$, and $x^4$, which supports the conclusion drawn from cross-validation that models iii and iv perform best. 9. We will now consider the Boston housing data set, from the MASS library.
###Code
boston = datasets.load_boston()
boston_feat = pd.DataFrame(boston.data, columns=boston.feature_names)
boston_resp = pd.Series(boston.target).rename('medv')
boston_df = pd.concat([boston_feat, boston_resp], axis=1)
# Check for missing values
#assert boston_df.isnull().sum().sum() == 0
boston_df.head()
###Output
_____no_output_____
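Note that `datasets.load_boston` has been removed from recent scikit-learn releases. If the cell above fails, a sketch of an equivalent loader is shown below; it follows the CMU source suggested in scikit-learn's deprecation notice, and the feature-name list is the standard set of 13 Boston housing columns.
```python
import numpy as np
import pandas as pd

# Fallback for scikit-learn >= 1.2, where load_boston was removed
data_url = "http://lib.stat.cmu.edu/datasets/boston"
raw_df = pd.read_csv(data_url, sep=r"\s+", skiprows=22, header=None)
feature_names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE',
                 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT']
boston_feat = pd.DataFrame(np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]]),
                           columns=feature_names)
boston_resp = pd.Series(raw_df.values[1::2, 2]).rename('medv')
boston_df = pd.concat([boston_feat, boston_resp], axis=1)
```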
###Markdown
(a) Based on this data set, provide an estimate for the population mean of medv. Call this estimate μˆ.
###Code
mu_hat = boston_df['medv'].mean()
display(mu_hat)
###Output
_____no_output_____
###Markdown
(b) Provide an estimate of the standard error of μˆ. Interpret this result.Hint: We can compute the standard error of the sample mean by dividing the sample standard deviation by the square root of the number of observations.
###Code
def standard_error_mean(df):
"""Compute estimated standard error of the mean
with analytic approach"""
medv = np.array(df)
SE = np.std(medv, ddof=1) / np.sqrt(len(medv))  # sample standard deviation (ddof=1), as in the hint
return SE
display(standard_error_mean(boston_df['medv']))
###Output
_____no_output_____
###Markdown
(c) Now estimate the standard error of μˆ using the bootstrap. How does this compare to your answer from (b)?
###Code
# Compute standard error of the mean with the bootstrap approach
def mean_boot(df, idx):
Z = np.array(df.loc[idx])
return np.mean(Z)
def boot_idx(n):
"""Return index for bootstrap sample of size n
e.g. generate array in range 0 to n, with replacement"""
return np.random.randint(low=0, high=n, size=n)
def boot(fn, data_df, samples):
"""Perform bootstrap for B number of samples"""
results = []
for s in range(samples):
Z = fn(data_df, boot_idx(data_df.shape[0]))
results += [Z]
return np.array(results)
B = 10000
boot_obs = boot(mean_boot, boston_df['medv'], samples=B)  # keep the mean_boot function name intact
SE_pred = np.std(boot_obs)
print('SE: ' + str(SE_pred))
###Output
SE: 0.4059561352313032
###Markdown
The bootstrap method gives a remarkably good estimate of the standard error, when compared to the same estimate derived analytically. The bootstrap approach is computationally more expensive, but has the advantage that no analytic derivation of the standard error for the statistic is required, making it much more general for application to other statistics. (d) Based on your bootstrap estimate from (c), provide a 95 % confidence interval for the mean of medv. Compare it to the results obtained using ```t.test(Boston$medv)```Hint: You can approximate a 95% confidence interval using the formula [μˆ − 2SE(μˆ), μˆ + 2SE(μˆ)].
###Code
mu_hat = np.mean(boston_df['medv'])
conf_low = mu_hat - (2*SE_pred)
conf_hi = mu_hat + (2*SE_pred)
pd.Series({'mu': mu_hat,
'SE': SE_pred,
'[0.025': conf_low,
'0.975]': conf_hi})
###Output
_____no_output_____
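For the comparison with `t.test(Boston$medv)` requested above, a t-based interval can be sketched with `scipy.stats` (assuming `scipy` is installed; `boston_df` is the dataframe built earlier):
```python
from scipy import stats as st

medv = boston_df['medv']
# 95% confidence interval for the mean based on the t distribution,
# analogous to what R's t.test(Boston$medv) reports
t_interval = st.t.interval(0.95, len(medv) - 1, loc=medv.mean(), scale=st.sem(medv))
print('95% t-interval: {}'.format(t_interval))
```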
###Markdown
(e) Based on this dataset, provide an estimate, μˆmed, for the median value of medv in the population.
###Code
median_hat = np.median(boston_df['medv'])
print('median: ' + str(median_hat))
###Output
median: 21.2
###Markdown
(f) We now would like to estimate the standard error of μˆmed. Unfortunately, there is no simple formula for computing the standard error of the median. Instead, estimate the standard error of the median using the bootstrap. Comment on your findings.
###Code
# Compute standard error of the median with the bootstrap approach
def median_boot(df, idx):
Z = np.array(df.loc[idx])
return np.median(Z)
def boot_idx(n):
"""Return index for bootstrap sample of size n
e.g. generate array in range 0 to n, with replacement"""
return np.random.randint(low=0, high=n, size=n)
def boot(fn, data_df, samples):
"""Perform bootstrap for B number of samples"""
results = []
for s in range(samples):
Z = fn(data_df, boot_idx(data_df.shape[0]))
results += [Z]
return np.array(results)
B = 10000
boot_obs = boot(median_boot, boston_df['medv'], samples=B)
SE_pred = np.std(boot_obs)
print('SE: ' + str(SE_pred))
###Output
_____no_output_____
###Markdown
The estimated standard error for the median value of medv is similar to the estimated standard error for the mean, which is not surprising, since we would expect the precision of the median and the mean to be related. (g) Based on this data set, provide an estimate for the tenth percentile of medv in Boston suburbs. Call this quantity μˆ0.1. (You can use the quantile() function.)
###Code
tenth_percentile = np.percentile(boston_df['medv'], 10)
print('tenth_percentile: ' + str(tenth_percentile))
###Output
_____no_output_____
###Markdown
(h) Use the bootstrap to estimate the standard error of μˆ0.1. Comment on your findings.
###Code
# Compute standard error of the tenth percentile with the bootstrap approach
def tenth_percentile(df, idx):
Z = np.array(df.loc[idx])
return np.percentile(Z, 10)
def boot_idx(n):
"""Return index for bootstrap sample of size n
e.g. generate array in range 0 to n, with replacement"""
return np.random.randint(low=0, high=n, size=n)
def boot(fn, data_df, samples):
"""Perform bootstrap for B number of samples"""
results = []
for s in range(samples):
Z = fn(data_df, boot_idx(data_df.shape[0]))
results += [Z]
return np.array(results)
B = 10000
boot_obs = boot(tenth_percentile, boston_df['medv'], samples=B)
SE_pred = np.std(boot_obs)
print('SE: ' + str(SE_pred))
###Output
_____no_output_____ |
FAI_old/lesson1/lesson1.ipynb | ###Markdown
Using Convolutional Neural Networks Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning. Introduction to this week's task: 'Dogs vs Cats' We're going to try to create a model to enter the [Dogs vs Cats](https://www.kaggle.com/c/dogs-vs-cats) competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): *"**State of the art**: The current literature suggests machine classifiers can score above 80% accuracy on this task"*. So if we can beat 80%, then we will be at the cutting edge as at 2013! Basic setup There isn't too much to do to get started - just a few simple configuration steps.This shows plots in the web page itself - we always wants to use this when using jupyter notebook:
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
###Code
# path = "data/dogscats/"
path = "data/"
#path = "data/dogscats/sample/"
###Output
_____no_output_____
###Markdown
A few basic libraries that we'll need for the initial exercises:
###Code
from __future__ import division,print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
###Output
_____no_output_____
###Markdown
We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
###Code
LESSON_HOME_DIR = os.getcwd()
import sys
sys.path.insert(1, os.path.join(LESSON_HOME_DIR, '../utils'))
import utils; reload(utils)
from utils import plots
###Output
Using Theano backend.
###Markdown
Use a pretrained VGG model with our **Vgg16** class Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (*VGG 19*) and a smaller, faster model (*VGG 16*). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.We have created a python class, *Vgg16*, which makes using the VGG 16 model very straightforward. The punchline: state of the art custom model in 7 lines of codeHere's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
###Code
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
batch_size=64
# Import our class, and instantiate
import vgg16; reload(vgg16)
from vgg16 import Vgg16
vgg = Vgg16()
print(vgg)
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)
###Output
Found 352 images belonging to 2 classes.
Found 50 images belonging to 2 classes.
Epoch 1/1
352/352 [==============================] - 182s - loss: 0.5114 - acc: 0.7955 - val_loss: 0.1110 - val_acc: 0.9600
###Markdown
The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.Let's take a look at how this works, step by step... Use Vgg16 for basic image recognitionLet's start off by using the *Vgg16* class to recognise the main imagenet category for each image.We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.First, create a Vgg16 object:
###Code
vgg = Vgg16()
###Output
_____no_output_____
###Markdown
Vgg16 is built on top of *Keras* (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in *batches*, using a fixed directory structure, where images from each category for training must be placed in a separate folder.Let's grab batches of data from our training folder:
###Code
batches = vgg.get_batches(path+'train', batch_size=4)
###Output
_____no_output_____
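For reference, the fixed directory structure mentioned above looks roughly like this (the folder names under `train/` and `valid/` are the category labels; the file names are purely illustrative):
```
data/
  train/
    cats/   cat.1.jpg, cat.2.jpg, ...
    dogs/   dog.1.jpg, dog.2.jpg, ...
  valid/
    cats/   ...
    dogs/   ...
```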
###Markdown
(BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)*Batches* is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
###Code
imgs,labels = next(batches)
###Output
_____no_output_____
###Markdown
As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where the array contains just a single 1 in the position corresponding to the category, is very common in deep learning. It is called *one hot encoding*. The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
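A minimal sketch of how such one-hot labels can be built by hand (purely illustrative — Keras generates them for us here):
```python
import numpy as np

class_ids = np.array([0, 1, 1, 0])   # e.g. 0 = cat, 1 = dog
one_hot = np.eye(2)[class_ids]       # shape (4, 2): a single 1 per row
print(one_hot)
```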
###Code
plots(imgs, titles=labels)
###Output
_____no_output_____
###Markdown
We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
###Code
vgg.predict(imgs, True)
###Output
_____no_output_____
###Markdown
The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four:
###Code
vgg.classes[:4]
###Output
_____no_output_____
###Markdown
(Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.) Use our Vgg16 class to finetune a Dogs vs Cats modelTo change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call *fit()* after calling *finetune()*.We create our batches just like before, and making the validation set available as well. A 'batch' (or *mini-batch* as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
###Code
batch_size=64
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Calling *finetune()* modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
###Code
vgg.finetune(batches)
###Output
_____no_output_____
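Conceptually, finetuning swaps the 1000-way ImageNet output for a fresh 2-way layer and freezes the pretrained layers. The sketch below illustrates the idea in plain Keras — it is *not* the actual `Vgg16.finetune` source, and the `vgg.model` attribute is assumed here:
```python
from keras.layers.core import Dense

model = vgg.model                   # underlying Keras Sequential model (assumed attribute)
model.pop()                         # drop the old Dense(1000) softmax output
for layer in model.layers:
    layer.trainable = False         # keep the pretrained weights fixed
model.add(Dense(2, activation='softmax'))    # new output: cat vs dog
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
```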
###Markdown
Finally, we *fit()* the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An *epoch* is one full pass through the training data.)
###Code
vgg.fit(batches, val_batches, nb_epoch=1)
###Output
_____no_output_____
###Markdown
That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.Next up, we'll dig one level deeper to see what's going on in the Vgg16 class. Create a VGG model from scratch in KerasFor the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes. Model setupWe need to import all the modules we'll be using from numpy, scipy, and keras:
###Code
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential, Model
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers import Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
###Output
_____no_output_____
###Markdown
Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
###Code
FILES_PATH = 'http://www.platform.ai/models/'; CLASS_FILE='imagenet_class_index.json'
# Keras' get_file() is a handy function that downloads files, and caches them for re-use later
fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')
with open(fpath) as f: class_dict = json.load(f)
# Convert dictionary with string indexes into an array
classes = [class_dict[str(i)][1] for i in range(len(class_dict))]
###Output
_____no_output_____
###Markdown
Here's a few examples of the categories we just imported:
###Code
classes[:5]
###Output
_____no_output_____
###Markdown
Model creationCreating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:
###Code
def ConvBlock(layers, model, filters):
for i in range(layers):
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(filters, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
###Output
_____no_output_____
###Markdown
...and here's the fully-connected definition.
###Code
def FCBlock(model):
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
###Output
_____no_output_____
###Markdown
When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:
###Code
# Mean of each channel as provided by VGG researchers
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))
def vgg_preprocess(x):
x = x - vgg_mean # subtract mean
return x[:, ::-1] # reverse axis bgr->rgb
###Output
_____no_output_____
###Markdown
Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
###Code
def VGG_16():
model = Sequential()
model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))
ConvBlock(2, model, 64)
ConvBlock(2, model, 128)
ConvBlock(3, model, 256)
ConvBlock(3, model, 512)
ConvBlock(3, model, 512)
model.add(Flatten())
FCBlock(model)
FCBlock(model)
model.add(Dense(1000, activation='softmax'))
return model
###Output
_____no_output_____
###Markdown
We'll learn about what these different blocks do later in the course. For now, it's enough to know that:- Convolution layers are for finding patterns in images- Dense (fully connected) layers are for combining patterns across an imageNow that we've defined the architecture, we can create the model like any python object:
###Code
model = VGG_16()
###Output
_____no_output_____
###Markdown
As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem. Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
###Code
fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')
model.load_weights(fpath)
###Output
_____no_output_____
###Markdown
Getting imagenet predictionsThe setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call *predict()* on them.
###Code
batch_size = 4
###Output
_____no_output_____
###Markdown
Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:
###Code
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True,
batch_size=batch_size, class_mode='categorical'):
return gen.flow_from_directory(path+dirname, target_size=(224,224),
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
From here we can use exactly the same steps as before to look at predictions from the model.
###Code
batches = get_batches('train', batch_size=batch_size)
val_batches = get_batches('valid', batch_size=batch_size)
imgs,labels = next(batches)
# This shows the 'ground truth'
plots(imgs, titles=labels)
###Output
_____no_output_____
###Markdown
The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with *np.argmax()*) we can find the predicted label.
###Code
def pred_batch(imgs):
preds = model.predict(imgs)
idxs = np.argmax(preds, axis=1)
print('Shape: {}'.format(preds.shape))
print('First 5 classes: {}'.format(classes[:5]))
print('First 5 probabilities: {}\n'.format(preds[0, :5]))
print('Predictions prob/class: ')
for i in range(len(idxs)):
idx = idxs[i]
print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))
pred_batch(imgs)
###Output
_____no_output_____ |
notebook/Score - Batch Scoring of Model Endpoint.ipynb | ###Markdown
Verseagility - Batch scoring of deployed model
- A script to batch-score your endpoint after the deployment
- Set your endpoint, region and key in the `config.ini` file read below (see the example layout that follows)
- Read your test data set which has a "label" column for the ground truth and a "text" column with the document to be scored
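The `config.ini` read in the next cell is assumed to look roughly like this (section and key names must match the lookups below; the values are placeholders):
```
[API]
endpoint = my-scoring-service
region = westeurope
key = <your-authentication-key>
```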
###Code
# Import packages
import requests
import pandas as pd
import json
import configparser
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
# Change this to your format respectively
df_test = pd.read_csv("file.csv", sep=";", encoding="utf-8")
# Get config file
config = configparser.ConfigParser()
config.read('../config.ini')
endpoint = config['API']['endpoint']
region = config['API']['region']
key = config['API']['key']
# Model scoring function
def score_model(df, endpoint, region, key):
'''Batch score model with multiple documents from a dataframe loaded above'''
# URL for the web service
scoring_uri = f'http://{endpoint}.{region}.azurecontainer.io/score'
# Set the content type
headers = {'Content-Type': 'application/json'}
# If authentication is enabled, set the authorization header
headers['Authorization'] = f'Bearer {key}'
scores = []
# Iterate
for index, row in df.iterrows():
data = [{
"subject": "",
"body": str(row['text'].replace("'", "").replace('"', ''))
}]
# Convert to JSON string
input_data = json.dumps(data)
# Make the request and display the response
resp = requests.post(scoring_uri, input_data, headers=headers)
try:
pred = json.loads(resp.text)[0]['result'][0]['category']
except:
pred = "None"
scores.append(pred)
print(f'[INFO] - SCORING {str(index+1)}/{len(df)} -> "{row["label"]}" predicted as "{pred}"')
return scores
# Initiate the scoring
scores = score_model(df_test, endpoint, region, key)
# Get your classification reports after scoring
print(classification_report(df_test['label'], scores))
print(confusion_matrix(df_test['label'], scores))
print(f"Accuracy: {accuracy_score(df_test['label'], scores)}")
# Write output to file (scores is a plain list, so attach it to the dataframe before saving)
df_test['prediction'] = scores
df_test.to_csv("scoring.csv", sep=";", index=False)
###Output
_____no_output_____ |
Thermo/sdata134k_small_polycyclic_cnn.ipynb | ###Markdown
Thermochemistry Validation Test Han, Kehang ([email protected])This notebook is designed to use a big set of tricyclics for testing the performance of new polycyclics thermo estimator. Currently the dataset contains 2903 tricyclics that passed isomorphic check. Set up
###Code
from rmgpy.data.rmg import RMGDatabase
from rmgpy import settings
from rmgpy.species import Species
from rmgpy.molecule import Molecule
from rmgpy.molecule import Group
from rmgpy.rmg.main import RMG
from rmgpy.cnn_framework.predictor import Predictor
from IPython.display import display
import numpy as np
import os
import pandas as pd
from pymongo import MongoClient
import logging
logging.disable(logging.CRITICAL)
from bokeh.charts import Histogram
from bokeh.plotting import figure, show
from bokeh.io import output_notebook
output_notebook()
host = 'mongodb://user:[email protected]/admin'
port = 27018
client = MongoClient(host, port)
db = getattr(client, 'sdata134k')
db.collection_names()
def get_data(db, collection_name):
collection = getattr(db, collection_name)
db_cursor = collection.find()
# collect data
print('reading data...')
db_mols = []
for db_mol in db_cursor:
db_mols.append(db_mol)
print('done')
return db_mols
model = '/home/mjliu/Code/RMG-Py/examples/cnn/evaluate/test_model'
h298_predictor = Predictor()
predictor_input = os.path.join(model,
'predictor_input.py')
h298_predictor.load_input(predictor_input)
param_path = os.path.join(model,
'saved_model',
'full_train.h5')
h298_predictor.load_parameters(param_path)
# fetch testing dataset
collection_name = 'small_cyclic_table'
db_mols = get_data(db, collection_name)
print len(db_mols)
###Output
_____no_output_____
###Markdown
Validation Test Collect data from the CNN predictor and the QM library
###Code
filterList = [
Group().fromAdjacencyList("""1 R u0 p0 c0 {2,[S,D,T]} {9,[S,D,T]}
2 R u0 p0 c0 {1,[S,D,T]} {3,[S,D,T]}
3 R u0 p0 c0 {2,[S,D,T]} {4,[S,D,T]}
4 R u0 p0 c0 {3,[S,D,T]} {5,[S,D,T]}
5 R u0 p0 c0 {4,[S,D,T]} {6,[S,D,T]}
6 R u0 p0 c0 {5,[S,D,T]} {7,[S,D,T]}
7 R u0 p0 c0 {6,[S,D,T]} {8,[S,D,T]}
8 R u0 p0 c0 {7,[S,D,T]} {9,[S,D,T]}
9 R u0 p0 c0 {1,[S,D,T]} {8,[S,D,T]}
"""),
Group().fromAdjacencyList("""1 R u0 p0 c0 {2,S} {5,S}
2 R u0 p0 c0 {1,S} {3,D}
3 R u0 p0 c0 {2,D} {4,S}
4 R u0 p0 c0 {3,S} {5,S}
5 R u0 p0 c0 {1,S} {4,S} {6,S} {9,S}
6 R u0 p0 c0 {5,S} {7,S}
7 R u0 p0 c0 {6,S} {8,D}
8 R u0 p0 c0 {7,D} {9,S}
9 R u0 p0 c0 {5,S} {8,S}
"""),
]
test_size = 0
R = 1.987 # unit: cal/mol/K
validation_test_dict = {} # key: spec.label, value: (thermo_heuristic, thermo_qm)
spec_labels = []
spec_dict = {}
H298s_qm = []
Cp298s_qm = []
H298s_cnn = []
Cp298s_cnn = []
for db_mol in db_mols:
smiles_in = str(db_mol["SMILES_input"])
spec_in = Species().fromSMILES(smiles_in)
for grp in filterList:
if spec_in.molecule[0].isSubgraphIsomorphic(grp):
break
else:
spec_labels.append(smiles_in)
# qm: just free energy but not free energy of formation
G298_qm = float(db_mol["G298"])*627.51 # unit: kcal/mol
H298_qm = float(db_mol["Hf298(kcal/mol)"]) # unit: kcal/mol
Cv298_qm = float(db_mol["Cv298"]) # unit: cal/mol/K
Cp298_qm = Cv298_qm + R # unit: cal/mol/K
H298s_qm.append(H298_qm)
# cnn
H298_cnn = h298_predictor.predict(spec_in.molecule[0]) # unit: kcal/mol
H298s_cnn.append(H298_cnn)
spec_dict[smiles_in] = spec_in
###Output
_____no_output_____
###Markdown
Create `pandas` dataframe for easy data validation
###Code
# create pandas dataframe
validation_test_df = pd.DataFrame(index=spec_labels)
validation_test_df['H298_cnn(kcal/mol)'] = pd.Series(H298s_cnn, index=validation_test_df.index)
validation_test_df['H298_qm(kcal/mol)'] = pd.Series(H298s_qm, index=validation_test_df.index)
heuristic_qm_diff = abs(validation_test_df['H298_cnn(kcal/mol)']-validation_test_df['H298_qm(kcal/mol)'])
validation_test_df['H298_cnn_qm_diff(kcal/mol)'] = pd.Series(heuristic_qm_diff, index=validation_test_df.index)
display(validation_test_df.head())
print "Validation test dataframe has {0} tricyclics.".format(len(spec_labels))
validation_test_df['H298_cnn_qm_diff(kcal/mol)'].describe()
###Output
_____no_output_____
###Markdown
categorize error sources
###Code
diff20_df = validation_test_df[(validation_test_df['H298_cnn_qm_diff(kcal/mol)'] > 15)
& (validation_test_df['H298_cnn_qm_diff(kcal/mol)'] <= 500)]
len(diff20_df)
print len(diff20_df)
for smiles in diff20_df.index:
print "***********cnn = {0}************".format(diff20_df[diff20_df.index==smiles]['H298_cnn(kcal/mol)'])
print "***********qm = {0}************".format(diff20_df[diff20_df.index==smiles]['H298_qm(kcal/mol)'])
spe = spec_dict[smiles]
display(spe)
###Output
_____no_output_____
###Markdown
Parity Plot: CNN vs. QM
###Code
p = figure(plot_width=500, plot_height=400)
# plot_df = validation_test_df[validation_test_df['H298_heuristic_qm_diff(kcal/mol)'] < 10]
plot_df = validation_test_df
# add a square renderer with a size, color, and alpha
p.circle(plot_df['H298_cnn(kcal/mol)'], plot_df['H298_qm(kcal/mol)'],
size=5, color="green", alpha=0.5)
x = np.array([-50, 200])
y = x
p.line(x=x, y=y, line_width=2, color='#636363')
p.line(x=x, y=y+10, line_width=2,line_dash="dashed", color='#bdbdbd')
p.line(x=x, y=y-10, line_width=2, line_dash="dashed", color='#bdbdbd')
p.xaxis.axis_label = "H298 CNN (kcal/mol)"
p.yaxis.axis_label = "H298 Quantum (kcal/mol)"
p.xaxis.axis_label_text_font_style = "normal"
p.yaxis.axis_label_text_font_style = "normal"
p.xaxis.axis_label_text_font_size = "16pt"
p.yaxis.axis_label_text_font_size = "16pt"
p.xaxis.major_label_text_font_size = "12pt"
p.yaxis.major_label_text_font_size = "12pt"
show(p)
len(plot_df.index)
###Output
_____no_output_____
###Markdown
Histogram of `abs(cnn-qm)`
###Code
from bokeh.models import Range1d
hist = Histogram(validation_test_df,
values='H298_cnn_qm_diff(kcal/mol)', xlabel='H298 Prediction Error (kcal/mol)',
ylabel='Number of Testing Molecules',
bins=50,\
plot_width=500, plot_height=300)
# hist.y_range = Range1d(0, 1640)
hist.x_range = Range1d(0, 20)
show(hist)
with open('validation_test_sdata134k_2903_pyPoly_dbPoly.csv', 'w') as fout:
validation_test_df.to_csv(fout)
###Output
_____no_output_____ |
Day-1/Tasks/Task-1.ipynb | ###Markdown
**Q1** Create a variable called radius equal to half the diameter
###Code
# Write your Code and press `Shift + Enter`
###Output
_____no_output_____
###Markdown
**Q2** Create a variable called `area` using the formula for the area of a circle: pi times the radius squared
###Code
# Write your Code and press `Shift + Enter`
###Output
_____no_output_____
###Markdown
**Q3** Add parentheses to the following expression so that i want output is `1`.
###Code
5 - 3 // 2
# Write your Code and press `Shift + Enter`
###Output
_____no_output_____
###Markdown
**Q4** Add parentheses to the following expression so that , i want output is `0`
###Code
8 - 3 * 2 - 1 + 1
# Write your Code and press `Shift + Enter`
###Output
_____no_output_____
###Markdown
**Q5** Alice, Bob and Carol have agreed to pool their Halloween candy and split it evenly among themselves. For the sake of their friendship, any candies left over will be smashed. For example, if they collectively bring home 91 candies, they'll take 30 each and smash 1.Write an arithmetic expression below to calculate how many candies they must smash for a given haul.
###Code
# Write your Code and press `Shift + Enter`
###Output
_____no_output_____
###Markdown
**Q6** Accept two numbers from the user and calculate multiplication
###Code
# Write your Code and press `Shift + Enter`
# Write your Code and press `Shift + Enter`
# Write your Code and press `Shift + Enter`
# Write your Code and press `Shift + Enter`
###Output
_____no_output_____ |
examples/SGLD.ipynb | ###Markdown
MNIST digit recognition with a 3-layer Perceptron This example is inspired form [this notebook](https://github.com/jeremiecoullon/SGMCMCJax/blob/master/docs/nbs/BNN.ipynb) in the SGMCMCJax repository. We try to use a 3-layer neural network to recognise the digits in the MNIST dataset.
###Code
import jax
import jax.nn as nn
import jax.numpy as jnp
import jax.scipy.stats as stats
import numpy as np
###Output
_____no_output_____
###Markdown
Data preparation We download the MNIST data using `tensorflow-datasets`:
###Code
import tensorflow_datasets as tfds
mnist_data, _ = tfds.load(
name="mnist", batch_size=-1, with_info=True, as_supervised=True
)
mnist_data = tfds.as_numpy(mnist_data)
data_train, data_test = mnist_data["train"], mnist_data["test"]
###Output
_____no_output_____
###Markdown
Now we need to apply several transformations to the dataset before splitting it into a test and a test set:- The images come into 28x28 pixels matrices; we reshape them into a vector;- The images are arrays of RGB codes between 0 and 255. We normalize them by the maximum value to get a range between 0 and 1;- We hot-encode category numbers.
###Code
def one_hot_encode(x, k, dtype=np.float32):
"Create a one-hot encoding of x of size k."
return np.array(x[:, None] == np.arange(k), dtype)
def prepare_data(dataset: tuple, num_categories=10):
X, y = dataset
y = one_hot_encode(y, num_categories)
num_examples = X.shape[0]
num_pixels = 28 * 28
X = X.reshape(num_examples, num_pixels)
X = X / 255.0
return jnp.array(X), jnp.array(y), num_examples
def batch_data(rng_key, data, batch_size, data_size):
"""Return an iterator over batches of data."""
while True:
_, rng_key = jax.random.split(rng_key)
idx = jax.random.choice(
key=rng_key, a=jnp.arange(data_size), shape=(batch_size,)
)
minibatch = tuple(elem[idx] for elem in data)
yield minibatch
X_train, y_train, N_train = prepare_data(data_train)
X_test, y_test, N_test = prepare_data(data_test)
###Output
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
###Markdown
Model: 3-layer perceptron We will use a very simple (bayesian) neural network in this example: A MLP with gaussian priors on the weights. We first need a function that computes the model's logposterior density given the data and the current values of the parameters. If we note $X$ the array that represents an image and $y$ the array such that $y_i = 0$ if the image is in category $i$, $y_i=1$ otherwise, the model can be written as:\begin{align*} \boldsymbol{p} &= \operatorname{NN}(X)\\ \boldsymbol{y} &\sim \operatorname{Categorical}(\boldsymbol{p})\end{align*}
###Code
def predict_fn(parameters, X):
"""Returns the probability for the image represented by X
to be in each category given the MLP's weights vakues.
"""
activations = X
for W, b in parameters[:-1]:
outputs = jnp.dot(W, activations) + b
activations = nn.softmax(outputs)
final_W, final_b = parameters[-1]
logits = jnp.dot(final_W, activations) + final_b
return nn.log_softmax(logits)
def logprior_fn(parameters):
"""Compute the value of the log-prior density function."""
logprob = 0.0
for W, b in parameters:
logprob += jnp.sum(stats.norm.logpdf(W))
logprob += jnp.sum(stats.norm.logpdf(b))
return logprob
def loglikelihood_fn(parameters, data):
"""Categorical log-likelihood"""
X, y = data
return jnp.sum(y * predict_fn(parameters, X))
def compute_accuracy(parameters, X, y):
"""Compute the accuracy of the model.
To make predictions we take the number that corresponds to the highest probability value.
"""
target_class = jnp.argmax(y, axis=1)
predicted_class = jnp.argmax(
jax.vmap(predict_fn, in_axes=(None, 0))(parameters, X), axis=1
)
return jnp.mean(predicted_class == target_class)
###Output
_____no_output_____
###Markdown
Sample from the posterior distribution of the perceptron's weights Now we need to get initial values for the parameters, and we simply sample from their prior distribution:
###Code
def init_parameters(rng_key, sizes):
"""
Parameter
----------
rng_key
PRNGKey used by JAX to generate pseudo-random numbers
sizes
List of size for the subsequent layers. The first size must correspond
to the size of the input data and the last one to the number of
categories.
"""
num_layers = len(sizes)
keys = jax.random.split(rng_key, num_layers)
return [
init_layer(rng_key, m, n) for rng_key, m, n in zip(keys, sizes[:-1], sizes[1:])
]
def init_layer(rng_key, m, n, scale=1e-2):
"""Initialize the weights for a single layer."""
key_W, key_b = jax.random.split(rng_key)
return (scale * jax.random.normal(key_W, (n, m))), scale * jax.random.normal(
key_b, (n,)
)
###Output
_____no_output_____
###Markdown
We now sample from the model's posteriors. We discard the first 1000 samples until the sampler has reached the typical set, and then take 2000 samples. We record the model's accuracy with the current values every 100 steps.
###Code
%%time
import blackjax
from blackjax.sgmcmc.gradients import grad_estimator
data_size = len(y_train)
batch_size = int(0.01 * data_size)
layer_sizes = [784, 100, 10]
step_size = 5e-5
num_warmup = 1000
num_samples = 2000
# Batch the data
rng_key = jax.random.PRNGKey(1)
batches = batch_data(rng_key, (X_train, y_train), batch_size, data_size)
# Build the SGLD kernel
schedule_fn = lambda _: step_size # constant step size
grad_fn = grad_estimator(logprior_fn, loglikelihood_fn, data_size)
sgld = blackjax.sgld(grad_fn, schedule_fn)
# Set the initial state
init_positions = init_parameters(rng_key, layer_sizes)
state = sgld.init(init_positions, next(batches))
# Sample from the posterior
accuracies = []
samples = []
steps = []
for step in range(num_samples + num_warmup):
_, rng_key = jax.random.split(rng_key)
batch = next(batches)
state = sgld.step(rng_key, state, batch)
if step % 100 == 0:
accuracy = compute_accuracy(state.position, X_test, y_test)
accuracies.append(accuracy)
steps.append(step)
if step > num_warmup:
samples.append(state.position)
###Output
CPU times: user 2min 5s, sys: 7.18 s, total: 2min 13s
Wall time: 1min 2s
###Markdown
Let us plot the accuracy at different points in the sampling process:
###Code
import matplotlib.pylab as plt
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111)
ax.plot(steps, accuracies)
ax.set_xlabel("Number of sampling steps")
ax.set_ylabel("Prediction accuracy")
ax.set_xlim([0, num_warmup + num_samples])
ax.set_ylim([0, 1])
ax.set_yticks([0.1, 0.3, 0.5, 0.7, 0.9])
plt.title("Sample from 3-layer MLP posterior (MNIST dataset) with SgLD")
plt.plot()
print(f"The average accuracy in the sampling phase is {np.mean(accuracies[10:]):.2f}")
###Output
The average accuracy in the sampling phase is 0.93
###Markdown
Which is not a bad accuracy at all for such a simple model and after only 1000 steps! Remember though that we draw samples from the posterior distribution of the digit probabilities; we can thus use this information to filter out examples for which the model is "unsure" of its prediction. Here we will say that the model is unsure of its prediction for a given image if the digit that is most often predicted for this image is predicted less than 95% of the time.
###Code
predicted_class = np.exp(
np.stack([jax.vmap(predict_fn, in_axes=(None, 0))(s, X_test) for s in samples])
)
max_predicted = [np.argmax(predicted_class[:, i, :], axis=1) for i in range(len(X_test))]
freq_max_predicted = np.array(
[
(max_predicted[i] == np.argmax(np.bincount(max_predicted[i]))).sum() / len(samples)
for i in range(len(X_test))
]
)
certain_mask = freq_max_predicted > 0.95
###Output
_____no_output_____
###Markdown
Let's plot a few examples where the model was very uncertain:
###Code
most_uncertain_idx = np.argsort(freq_max_predicted)
for i in range(10):
print(np.bincount(max_predicted[most_uncertain_idx[i]]) / 2000)
fig = plt.figure()
plt.imshow(X_test[most_uncertain_idx[i]].reshape(28, 28), cmap="gray")
plt.show()
###Output
[0. 0.1765 0.129 0. 0.194 0.015 0.212 0. 0.153 0.12 ]
###Markdown
And now compute the average accuracy over all the samples without these uncertain predictions:
###Code
avg_accuracy = np.mean(
[compute_accuracy(s, X_test[certain_mask], y_test[certain_mask]) for s in samples]
)
print(
f"The average accuracy removing the samples for which the model is uncertain is {avg_accuracy:.3f}"
)
###Output
The average accuracy removing the samples for which the model is uncertain is 0.983
|
report/generate_figures/.ipynb_checkpoints/plot_TSVD-checkpoint.ipynb | ###Markdown
Table of Contents: 1 Plot modes and obs · 2 Plot performance w. time · 3 Find correlation between params
###Code
%cd ../..
%load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
import pipeline
import seaborn as sns
sns.set()
from matplotlib.ticker import ScalarFormatter
sns.set_style("whitegrid")
#sns.set_palette("Dark2")
B = os.getcwd() + "/experiments/"
modesfp = [B + "TSVD2/modes/", B + "TSVD3/modes/"]
nobsfp = [B + "TSVD2/nobs/", B + "TSVD3/nobs/"]
outfp1 = "report/figures/TSVD_nobs.png"
outfp2 = "report/figures/TSVD_modes.png"
outfp_time = "report/figures/comp_time.png"
def get_num(file, ):
num = ""
for idx, c in enumerate(file):
if c.isnumeric():
num += c
else:
break
if num is "":
assert file[:4] == "None", "file[:4] should be None but is = {}".format(file[:4])
idx = 3
num = None
else:
num = int(num)
return num, idx
def get_tsvd_best_df(fps, prov_nobs, prov_modes):
results = []
for fp in fps:
for path, subdir, files in os.walk(fp):
for file_orig in files:
file = file_orig
if file[-4:] != ".csv":
continue
file = file.replace("modes", "")
modes, idx = get_num(file)
file = file[idx:]
if file[0] == "A":
file = file[3:]
nobs = "ALL"
elif file[0] == "_":
file = file[1:]
nobs, idx = get_num(file)
fp = os.path.join(path, file_orig)
vals = []
df = pd.read_csv(fp)
if nobs == prov_nobs and modes == prov_modes:
return df
return None
def get_tsvd_data(fps, keys, drop_first=False):
results = []
for fp in fps:
for path, subdir, files in os.walk(fp):
for file_orig in files:
file = file_orig
if file[-4:] != ".csv":
continue
file = file.replace("modes", "")
modes, idx = get_num(file)
file = file[idx:]
if file[0] == "A":
file = file[3:]
nobs = "ALL"
elif file[0] == "_":
file = file[1:]
nobs, idx = get_num(file)
fp = os.path.join(path, file_orig)
vals = []
for kidx, key_ in enumerate(keys):
df = pd.read_csv(fp)
if drop_first[kidx]:
df = df.drop(0)
val = df[key_].mean()
vals.append(val)
vals = tuple(vals)
results.append(vals + (nobs, modes))
res = pd.DataFrame(results, columns=keys + [ "nobs", "modes"])
return res
keys = ["mse_DA", "time"]
drop = [0, 1]
nobsdf = get_tsvd_data(nobsfp, keys, drop)
modesdf = get_tsvd_data(modesfp, keys, drop)
modesdf
modesdf.to_csv( "report/data/modesdf.csv")
nobsdf.to_csv( "report/data/nobsdf.csv")
###Output
_____no_output_____
###Markdown
Plot modes and obs
###Code
#modes2475["mse_DA"].mean()
def plot_val(val, dfs, title, savefp, legend_labels=None, Tucodec_mse=0.078657345, Tucodec_time=0.058213):
colours = ["g", "r", "b"]
def sort_vals(df, val, yval):
df = df.copy()
x = df[val]
y = df[yval]
lists = sorted(zip(*[x, y]))
new_x, new_y = list(zip(*lists))
return new_x, new_y
fig, axs = plt.subplots(1, 2, sharey=False)
if not isinstance(dfs, list):
dfs = [dfs]
assert len(colours) >= len(dfs)
if len(dfs) > 1:
assert legend_labels is not None
assert len(dfs) == len(legend_labels)
lines = []
for idx, df in enumerate(dfs):
x, y = sort_vals(df, val, "mse_DA")
#plot left
ax = axs[0]
ax.plot(x, y, "-" + colours[idx])
ax.set_xlabel(title)
ax.set_ylabel("DA MSE")
if val == "nobs":
ax.set_xscale("log", basex=2)
ax.xaxis.set_major_formatter(ScalarFormatter())
ax = axs[1]
x, y = sort_vals(df, val, "time")
l = ax.plot(x, y, "-" + colours[idx])
lines.append(l[0])
ax.set_xlabel(title)
ax.set_ylabel("Time Taken (s)")
fig.set_size_inches(15, 5.5)
if val== "nobs":
ax.set_xscale("log", basex=2)
ax.set_yscale("log", basey = 2)
ax.xaxis.set_major_formatter(ScalarFormatter())
ax.yaxis.set_major_formatter(ScalarFormatter())
else:
ax.set_yscale("log", basey=2)
ax.yaxis.set_major_formatter(ScalarFormatter())
if len(dfs) > 1:
ltitle = None
if val == "nobs":
ltitle = "Modes"
elif val == "modes":
ltitle = "N. Obs."
axs[0].legend(lines, legend_labels, title=ltitle)
#add dotted lines for tucodec values
x = np.linspace(0.1,1e7,10)
y = 0 * x + Tucodec_mse
axs[0].plot(x, y, '--', color="black")
x = np.linspace(-10,1e7,10)
y = 0 * x + Tucodec_time
axs[1].plot(x, y, '--', label="Tucodec", color="black")
if val== "nobs":
#xmax = 150000
xmax = 91 * 85 * 32 #TODO - uncomment
xmax = 2**19
xmin = 0.5
ymax0 = None
ymin0 = 0.05
ymax1 = None
ymin1 = 0.015
else:
xmax = 800
xmin = -10
ymax0 = 0.15
ymin0 = 0.07
ymax1 = None
ymin1 = 0.015
axs[0].set_xlim(xmin, xmax)
axs[1].set_xlim(xmin, xmax)
axs[0].set_ylim(ymin0, ymax0)
axs[1].set_ylim(ymin1, ymax1)
#add floating tucodec label
if val== "nobs":
x_pos0 = 3
y_pos0 = Tucodec_mse + 0.007
x_pos1 = 20000
y_pos1 = Tucodec_time + 0.007
#x_pos1 = 150000 #UNCOMMENT
else:
x_pos0 = 90
y_pos0 = Tucodec_mse - 0.003
x_pos1 = 75
y_pos1 = Tucodec_time + 0.006
axs[0].text(x_pos0, y_pos0, 'Tucodec-NeXt',)
axs[1].text(x_pos1, y_pos1, 'Tucodec-NeXt',)
fig.savefig(savefp)
nobs32 = nobsdf[nobsdf["modes"] == 32]
nobs4 = nobsdf[nobsdf["modes"] == 4]
nobs791 = nobsdf[nobsdf["modes"] == 791]
legend_labs = ["4", "32", "791"]
plot_val("nobs", [nobs4, nobs32, nobs791], "Number of Observations", outfp1, legend_labs)
modesAll = modesdf[modesdf["nobs"] == 247520]
modes2475 = modesdf[modesdf["nobs"] == 2475]
modes24750 = modesdf[modesdf["nobs"] == 24750]
legend_labs = ["2475", "24750", "247520"]
plot_val("modes", [modes2475, modes24750, modesAll], "Truncation parameter", outfp2, legend_labs)
nobs4, nobs32, nobs791
nobs791
###Output
_____no_output_____
###Markdown
Plot performance w. time
###Code
modesdf
#get TSVD data
modes = 32
nobs = 247520
df_SVD = get_tsvd_best_df(modesfp, nobs, modes) # get all obs and truncation param = 32
fp_AE = "experiments/retrain/Tucodec_prelu_next/349_test.csv"
#SVD_vtu = modesfp + "modes{}_{}obsav_da_MAE.vtu".format(modes, nobs)
#AE_vtu = "experiments/DA/06a_vtu/_0/AEav_da_MAE.vtu"
#print("AE", AE_vtu)
#print("SVD", SVD_vtu)
df_AE = pd.read_csv(fp_AE)
df_AE.head()
df_AE["mse_DA"].mean()
#get times
df1 = df_SVD.copy()
df1.drop(0)
tsvd = df1["time"].mean()
df2 = df_AE.copy()
df2.drop(0)
tae = df2["time"].mean()
print("SVD time", tsvd)
print("SVD MSE", df_SVD["mse_DA"].mean())
print("AE time", tae)
print("AE MSE", df_AE["mse_DA"].mean())
#Plot L2 on left axis and percent improvement on y axis against time
# Create some mock data
t = df_SVD.index
fig, axs = plt.subplots(1, 2, sharey=False)
ax2 = axs[0]
ax2.set_ylabel('DA MSE', ) # we already handled the x-label with ax1
ax2.set_xlabel('Fluidity Time-step', )
ax2.plot( t, 'mse_DA', data=df_SVD, marker='+', color="g", )
ax2.plot(t, 'mse_DA', data=df_AE, marker='x', color="r")
ax2.tick_params(axis='y',)
ax2.set_xlim(-1, 50)
ax2.set_ylim(0, 0.32)
ax2.legend(["TSVD", "CAE"] )
#plot difference
ax1 = axs[1]
ax1.set_ylabel('(TSVD DA MSE) - (CAE DA MSE)', ) # we already handled the x-label with ax1
ax1.set_xlabel('Fluidity Time-step', )
y = df_SVD["mse_DA"] - df_AE["mse_DA"]
ax1.plot( t, y, marker='+', color="b", )
ax1.tick_params(axis='y',)
x = np.linspace(-6,300,10)
y = 0 * x
ax1.plot(x, y, '-r')
ax1.set_xlim(-2, 200)
fig.set_size_inches(15, 7)
plt.show()
fig.savefig(outfp_time)
#Plot L2 on left axis and percent improvement on y axis against time
# Create some mock data
#get TSVD data
modes = 791
nobs = 247520
df_SVD = get_tsvd_best_df(modesfp, nobs, modes) # get all obs and truncation param = 791
#print("AE", AE_vtu)
#print("SVD", SVD_vtu)
df_AE = pd.read_csv(fp_AE)
df_AE.head()
df_AE["mse_DA"].mean()
#get times
df1 = df_SVD.copy()
df1.drop(0)
tsvd = df1["time"].mean()
df2 = df_AE.copy()
df2.drop(0)
tae = df2["time"].mean()
print("SVD time", tsvd)
print("SVD MSE", df_SVD["mse_DA"].mean())
print("AE time", tae)
print("AE MSE", df_AE["mse_DA"].mean())
t = df_SVD.index
fig, axs = plt.subplots(1, 2, sharey=False)
ax2 = axs[0]
ax2.set_ylabel('DA MSE', ) # we already handled the x-label with ax1
ax2.set_xlabel('Fluidity Time-step', )
ax2.plot( t, 'mse_DA', data=df_SVD, marker='+', color="g", )
ax2.plot(t, 'mse_DA', data=df_AE, marker='x', color="r")
ax2.tick_params(axis='y',)
ax2.set_xlim(-1, 50)
ax2.set_ylim(0, 0.25)
ax2.legend(["3D-VarDA", "CAE"] )
#plot difference
ax1 = axs[1]
ax1.set_ylabel('(TSVD DA MSE) - (CAE DA MSE)', ) # we already handled the x-label with ax1
ax1.set_xlabel('Fluidity Time-step', )
y = df_SVD["mse_DA"] - df_AE["mse_DA"]
ax1.plot( t, y, marker='+', color="b", )
ax1.tick_params(axis='y',)
x = np.linspace(-6,300,10)
y = 0 * x
ax1.plot(x, y, '-r')
ax1.set_xlim(-2, 200)
fig.set_size_inches(15, 5)
plt.show()
fig.savefig("report/figures/comp_time791.png")
###Output
_____no_output_____
###Markdown
Find correlation between params
###Code
#copied from here: https://towardsdatascience.com/feature-selection-with-pandas-e3690ad8504b
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
#import statsmodels.api as sm
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import RFE
from sklearn.linear_model import RidgeCV, LassoCV, Ridge, Lasso
#ADD column to df
df_AE["da_ratio"] = df_AE["da_MAE_mean"] / df_AE["ref_MAE_mean"]
#Using Pearson Correlation
plt.figure(figsize=(12,10))
cor = df_AE.corr()
sns.heatmap(cor, annot=True, cmap=plt.cm.Reds)
plt.savefig("correlation_AE_data.png")
plt.show()
#plt.savefig("correlation_AE_data.png")
###Output
_____no_output_____
###Markdown
NOTE: there is NO correlation between percentage improvement and the reconstruction error. In fact the correlation coefficients are positive for this case (0.068 and 0.071) for L1 and L2 losses respectively when we would expect them to be negative (i.e. better reconstruction gives lower losses and higher percentage improvement).
###Code
#Plot for SVD
df_SVD["da_ratio"] = df_SVD["da_MAE_mean"] / df_SVD["ref_MAE_mean"]
plt.figure(figsize=(12,10))
cor = df_SVD.corr()
sns.heatmap(cor, annot=True, cmap=plt.cm.Reds)
plt.savefig("correlation_SVD_data.png")
plt.show()
###Output
_____no_output_____ |
Assignment_5/Assignment_5_LOM.ipynb | ###Markdown
Assignment 5 SPARQL queriesCreate the SPARQL query that will answer each of these questions.
###Code
%endpoint http://sparql.uniprot.org/sparql
%format JSON
###Output
_____no_output_____
###Markdown
Q1: 1 POINT How many protein records are in UniProt?
###Code
PREFIX up: <http://purl.uniprot.org/core/>
SELECT (COUNT (?protein) AS ?protcount)
#SELECT (COUNT (DISTINCT ?protein) AS ?protcount) ## TAKES TOO MUCH TIME BUT WOULD BE MORE CORRECT
WHERE
{
?protein a up:Protein .
}
###Output
_____no_output_____
###Markdown
Q2: 1 POINT How many Arabidopsis thaliana protein records are in UniProt?
###Code
PREFIX up: <http://purl.uniprot.org/core/>
PREFIX taxon: <http://purl.uniprot.org/taxonomy/>
SELECT (COUNT(DISTINCT ?protein) AS ?proteincount)
WHERE
{
?protein a up:Protein .
?protein up:organism taxon:3702 .
}
###Output
_____no_output_____
###Markdown
Q3: 1 POINT retrieve pictures of Arabidopsis thaliana from UniProt?
###Code
PREFIX taxon: <http://purl.uniprot.org/taxonomy/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?image
WHERE
{
taxon:3702 foaf:depiction ?image .
}
###Output
_____no_output_____
###Markdown
Q4: 1 POINT: What is the description of the enzyme activity of UniProt Protein Q9SZZ8
###Code
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX uniprotkb: <http://purl.uniprot.org/uniprot/>
PREFIX up: <http://purl.uniprot.org/core/>
SELECT DISTINCT ?description
WHERE
{
uniprotkb:Q9SZZ8 a up:Protein ;
up:annotation ?annotation .
?annotation a up:Function_Annotation ;
rdfs:comment ?description .
}
###Output
_____no_output_____
###Markdown
Q5: 1 POINT: Retrieve the proteins ids, and date of submission, for proteins that have been added to UniProt this year (HINT Google for “SPARQL FILTER by date”)
###Code
PREFIX up:<http://purl.uniprot.org/core/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT ?id ?date
WHERE
{
?protein a up:Protein ;
up:mnemonic ?id ;
up:created ?date .
FILTER(?date > "2021-01-01"^^xsd:date) .
}
###Output
_____no_output_____
###Markdown
Q6: 1 POINT How many species are in the UniProt taxonomy?
###Code
PREFIX up:<http://purl.uniprot.org/core/>
SELECT (COUNT (DISTINCT ?species) AS ?specount)
WHERE
{
?species a up:Taxon ;
up:rank up:Species .
}
###Output
_____no_output_____
###Markdown
Q7: 2 POINT How many species have at least one protein record? (this might take a long time to execute, so do this one last!)
###Code
PREFIX up:<http://purl.uniprot.org/core/>
SELECT (COUNT(DISTINCT ?species) AS ?speciesnum)
WHERE
{
?protein a up:Protein .
?protein up:organism ?species .
?species a up:Taxon .
?species up:rank up:Species .
}
###Output
_____no_output_____
###Markdown
Q8: 3 points: find the AGI codes and gene names for all Arabidopsis thaliana proteins that have a protein function annotation description that mentions “pattern formation”
###Code
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX taxon: <http://purl.uniprot.org/taxonomy/>
PREFIX up: <http://purl.uniprot.org/core/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?gene_name ?agi_code
WHERE
{
?protein a up:Protein ;
up:organism taxon:3702 ;
up:recommendedName ?n ;
up:encodedBy ?gene ;
up:annotation ?annotation .
?n up:shortName ?gene_name .
?gene up:locusName ?agi_code .
?annotation a up:Function_Annotation ;
rdfs:comment ?comment .
FILTER regex( ?comment, "pattern formation","i")
}
###Output
_____no_output_____
###Markdown
Q9: 4 POINTS: what is the MetaNetX Reaction identifier (starts with “mnxr”) for the UniProt Protein uniprotkb:Q18A79
###Code
%endpoint https://rdf.metanetx.org/sparql
PREFIX mnx: <https://rdf.metanetx.org/schema/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX uniprotkb: <http://purl.uniprot.org/uniprot/>
SELECT DISTINCT ?reac_identifier
WHERE{
?pept mnx:peptXref uniprotkb:Q18A79 .
?cata mnx:pept ?pept .
?gpr mnx:cata ?cata ;
mnx:reac ?reac .
?reac rdfs:label ?reac_identifier .
}
###Output
_____no_output_____
###Markdown
Q10: 5 POINTS: What is the official Gene ID (UniProt calls this a “mnemonic”) and the MetaNetX Reaction identifier (mnxr…..) for the protein that has “Starch synthase” catalytic activity in Clostridium difficile (taxon 272563).
###Code
PREFIX mnx: <https://rdf.metanetx.org/schema/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX up: <http://purl.uniprot.org/core/>
PREFIX taxon: <http://purl.uniprot.org/taxonomy/>
SELECT DISTINCT ?mnemonic ?reac_label
WHERE
{
service <http://sparql.uniprot.org/sparql> {
?protein a up:Protein ;
up:organism taxon:272563 ;
up:mnemonic ?mnemonic ;
up:classifiedWith ?goTerm .
?goTerm rdfs:label ?activity .
filter contains(?activity, "starch synthase")
bind (substr(str(?protein),33) as ?ac)
bind (IRI(CONCAT("http://purl.uniprot.org/uniprot/",?ac)) as ?proteinRef)
}
service <https://rdf.metanetx.org/sparql> {
?pept mnx:peptXref ?proteinRef .
?cata mnx:pept ?pept .
?gpr mnx:cata ?cata ;
mnx:reac ?reac .
?reac rdfs:label ?reac_label .
}
}
###Output
_____no_output_____ |
KonputaziorakoSarrera-MAT/Gardenkiak/.ipynb_checkpoints/Fitxategiak-checkpoint.ipynb | ###Markdown
Files
When the execution of a program ends, all the results stored in variables *disappear* (*are lost*). * Variables are **transient** in nature. The programs we use normally store information. * That information has a **persistent** nature * When the program is run again, the old results are recovered. * The information is stored somewhere * Files * *cloud* → files that live on somebody else's computer. The operating system manages a *file system* and offers it to the application. * Hierarchical directory/file structure * Inspect contents * Modify contents * Change attributes (name, ownership, permissions, ...) * Create new ones * Directories/files are referred to by their pathname * Windows → "C:\Users\Alazne\Desktop\datuak" * Linux → "/home/Alazne/Desktop/datuak"
Windows/Linux (Unix) file systems
Windows * Roots of the system: the partitions of the storage units * Drive letters (`A:\`, `B:\`, `C:\`, `D:\`, ...) * Directory path separator: `\` * `C:\Users\Alazne\Desktop\datuak`
Linux (Unix) * Root of the system: `/` * Directory path separator: `/` * `/home/Alazne/Desktop/datuak`
Windows and the path separator
On Unix, the `\` character is used to write [*escape*](https://python-reference.readthedocs.io/en/latest/docs/str/escapes.html) characters.
###Code
# To write a tab and a newline
print("a\tb\ncde\tf")
# To write a Unicode code point directly
print("Parre apur bat egin ezazu... \U0001F602")
# To write the Carriage Return character
for i in range(10000):
print(i,end="\r",flush=True)
###Output
9999
###Markdown
Almost every programming environment (really, almost all of them) has been based on Unix... Python too.
###Code
print("C:\nire\karpeta\totala")
print("C:\Users\Alazne\Desktop\datuak")
###Output
_____no_output_____
###Markdown
In a Python+Windows environment there are two options: * use `\\`:
###Code
print("C:\\nire\\karpeta\\totala")
print("C:\\Users\\Alazne\\Desktop\\datuak")
###Output
C:\nire\karpeta\totala
C:\Users\Alazne\Desktop\datuak
###Markdown
* use `/` (Python takes care of translating the pathnames)
###Code
print("C:/nire/karpeta/totala")
print("C:/Users/Alazne/Desktop/datuak")
###Output
C:/nire/karpeta/totala
C:/Users/Alazne/Desktop/datuak
###Markdown
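As a side note that is not in the original slides: before moving on to absolute and relative pathnames, it is worth knowing that the standard library can build paths with the right separator for whatever platform the program runs on, so the `\` vs `/` question rarely has to be handled by hand:
```python
import os
from pathlib import Path

# os.path.join uses the separator of the platform the program is running on
print(os.path.join("Users", "Alazne", "Desktop", "datuak"))

# pathlib builds paths with the / operator and can turn a relative path into an absolute one
p = Path("Desktop") / "abestia.mp3"
print(p, "->", p.resolve())   # resolve() produces the absolute pathname
```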
Absolute and relative pathnames * **Absolute** pathname: the path to the file we want to refer to, starting from the root of the file system * `C:/Users/Alazne/Desktop/abestia.mp3` * `/home/Alazne/Desktop/abestia.mp3` * **Relative** pathname: the path to the file we want to refer to, starting from the folder in which the program is being executed. * If the program were running in the `C:/Users/Alazne` or `/home/Alazne` directory * `Desktop/abestia.mp3` In relative pathnames: * `.` → the current directory * `./Desktop/abestia.mp3` * `..` → the parent directory (one level up from the current one) * `../Alazne/Desktop/abestia.mp3` * Windows: `../../Users/Alazne/Desktop/abestia.mp3` * Unix: `../../home/Alazne/Desktop/abestia.mp3`
Reading files
To be able to read a file, we first need a *file* object. The `open` function returns an object that represents the file at the given pathname:
###Code
f = open("MyText.txt")
print(type(f))
help(open)
###Output
<class '_io.TextIOWrapper'>
Help on built-in function open in module io:
open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None)
Open file and return a stream. Raise IOError upon failure.
file is either a text or byte string giving the name (and the path
if the file isn't in the current working directory) of the file to
be opened or an integer file descriptor of the file to be
wrapped. (If a file descriptor is given, it is closed when the
returned I/O object is closed, unless closefd is set to False.)
mode is an optional string that specifies the mode in which the file
is opened. It defaults to 'r' which means open for reading in text
mode. Other common values are 'w' for writing (truncating the file if
it already exists), 'x' for creating and writing to a new file, and
'a' for appending (which on some Unix systems, means that all writes
append to the end of the file regardless of the current seek position).
In text mode, if encoding is not specified the encoding used is platform
dependent: locale.getpreferredencoding(False) is called to get the
current locale encoding. (For reading and writing raw bytes use binary
mode and leave encoding unspecified.) The available modes are:
========= ===============================================================
Character Meaning
--------- ---------------------------------------------------------------
'r' open for reading (default)
'w' open for writing, truncating the file first
'x' create a new file and open it for writing
'a' open for writing, appending to the end of the file if it exists
'b' binary mode
't' text mode (default)
'+' open a disk file for updating (reading and writing)
'U' universal newline mode (deprecated)
========= ===============================================================
The default mode is 'rt' (open for reading text). For binary random
access, the mode 'w+b' opens and truncates the file to 0 bytes, while
'r+b' opens the file without truncation. The 'x' mode implies 'w' and
raises an `FileExistsError` if the file already exists.
Python distinguishes between files opened in binary and text modes,
even when the underlying operating system doesn't. Files opened in
binary mode (appending 'b' to the mode argument) return contents as
bytes objects without any decoding. In text mode (the default, or when
't' is appended to the mode argument), the contents of the file are
returned as strings, the bytes having been first decoded using a
platform-dependent encoding or using the specified encoding if given.
'U' mode is deprecated and will raise an exception in future versions
of Python. It has no effect in Python 3. Use newline to control
universal newlines mode.
buffering is an optional integer used to set the buffering policy.
Pass 0 to switch buffering off (only allowed in binary mode), 1 to select
line buffering (only usable in text mode), and an integer > 1 to indicate
the size of a fixed-size chunk buffer. When no buffering argument is
given, the default buffering policy works as follows:
* Binary files are buffered in fixed-size chunks; the size of the buffer
is chosen using a heuristic trying to determine the underlying device's
"block size" and falling back on `io.DEFAULT_BUFFER_SIZE`.
On many systems, the buffer will typically be 4096 or 8192 bytes long.
* "Interactive" text files (files for which isatty() returns True)
use line buffering. Other text files use the policy described above
for binary files.
encoding is the name of the encoding used to decode or encode the
file. This should only be used in text mode. The default encoding is
platform dependent, but any encoding supported by Python can be
passed. See the codecs module for the list of supported encodings.
errors is an optional string that specifies how encoding errors are to
be handled---this argument should not be used in binary mode. Pass
'strict' to raise a ValueError exception if there is an encoding error
(the default of None has the same effect), or pass 'ignore' to ignore
errors. (Note that ignoring encoding errors can lead to data loss.)
See the documentation for codecs.register or run 'help(codecs.Codec)'
for a list of the permitted encoding error strings.
newline controls how universal newlines works (it only applies to text
mode). It can be None, '', '\n', '\r', and '\r\n'. It works as
follows:
* On input, if newline is None, universal newlines mode is
enabled. Lines in the input can end in '\n', '\r', or '\r\n', and
these are translated into '\n' before being returned to the
caller. If it is '', universal newline mode is enabled, but line
endings are returned to the caller untranslated. If it has any of
the other legal values, input lines are only terminated by the given
string, and the line ending is returned to the caller untranslated.
* On output, if newline is None, any '\n' characters written are
translated to the system default line separator, os.linesep. If
newline is '' or '\n', no translation takes place. If newline is any
of the other legal values, any '\n' characters written are translated
to the given string.
If closefd is False, the underlying file descriptor will be kept open
when the file is closed. This does not work when a file name is given
and must be True in that case.
A custom opener can be used by passing a callable as *opener*. The
underlying file descriptor for the file object is then obtained by
calling *opener* with (*file*, *flags*). *opener* must return an open
file descriptor (passing os.open as *opener* results in functionality
similar to passing None).
open() returns a file object whose type depends on the mode, and
through which the standard file operations such as reading and writing
are performed. When open() is used to open a file in a text mode ('w',
'r', 'wt', 'rt', etc.), it returns a TextIOWrapper. When used to open
a file in a binary mode, the returned class varies: in read binary
mode, it returns a BufferedReader; in write binary and append binary
modes, it returns a BufferedWriter, and in read/write mode, it returns
a BufferedRandom.
It is also possible to use a string or bytearray as a file for both
reading and writing. For strings StringIO can be used like a file
opened in a text mode, and for bytes a BytesIO can be used like a file
opened in a binary mode.
###Markdown
Suppose the file `MyText.txt` has the following contents:
```
Hau testu fitxategi bat da
Bi ilara ditu
Ez, bi ez, hiru jakiña!
```
File objects have methods. One of those methods, `readline()`, can read the lines one by one, returning each line as a string:
###Code
print(1,f.readline(),end="")
print(2,f.readline(),end="")
print(3,f.readline(),end="")
###Output
1 Hau testu fitxategi bat da
2 Bi ilara ditu
3 Ez, bi ez, hiru jakiña!
###Markdown
When the file has been read to the end, the `readline()` method returns an empty string:
###Code
print(f.readline() == "")
###Output
True
###Markdown
And when we no longer want to use the file, we should close it:
###Code
f.close()
###Output
_____no_output_____
###Markdown
Summarising what we have seen, the following function displays the contents of a text file on the screen:
###Code
def erakutsi(bideizena):
f = open(bideizena)
s = f.readline()
while s :
print(s,end="")
s = f.readline()
f.close()
erakutsi("MyText.txt")
###Output
Hau testu fitxategi bat da
Bi ilara ditu
Ez, bi ez, hiru jakiña!
###Markdown
If the `\n` at the end of each line bothers us, we can remove it with the string `strip()` method as soon as the line has been read:
###Code
def erakutsi(bideizena):
f = open(bideizena)
s = f.readline().strip()
while s :
print(s)
s = f.readline().strip()
f.close()
erakutsi("MyText.txt")
###Output
Hau testu fitxategi bat da
Bi ilara ditu
Ez, bi ez, hiru jakiña!
###Markdown
Files are ITERABLE
When iterating over a text file, we get its lines directly:
###Code
def erakutsi(bideizena):
f = open(bideizena)
for s in f :
print(s,end="")
f.close()
erakutsi("MyText.txt")
###Output
Hau testu fitxategi bat da
Bi ilara ditu
Ez, bi ez, hiru jakiña!
###Markdown
Text files and encodings
Text files use an encoding. * To specify the encoding, we have the `encoding` argument.
```python
open(file, encoding=None, ...)
```
* If we do not specify it, the default is used. * *"The default encoding is platform dependent"* * We should **ALWAYS** specify it
###Code
def erakutsi(bideizena, kodif="utf-8"):
f = open(bideizena,encoding=kodif)
for s in f :
print(s,end="")
f.close()
erakutsi("MyText.txt")
print("\n------------")
erakutsi("MyText.txt","utf-8")
print("\n------------")
erakutsi("MyText.txt","latin-1")
###Output
Hau testu fitxategi bat da
Bi ilara ditu
Ez, bi ez, hiru jakiña!
------------
Hau testu fitxategi bat da
Bi ilara ditu
Ez, bi ez, hiru jakiña!
------------
Hau testu fitxategi bat da
Bi ilara ditu
Ez, bi ez, hiru jakiña!
###Markdown
Writing to files
When a file is opened, by default it is opened for reading:
```python
open(file, mode='r', ...)
```
The `mode` argument has many options: * `"r"` → *open for **r**eading (default)* * `"w"` → *open for **w**riting, truncating the file first* * `"a"` → *open for writing, **a**ppending to the end of the file if it exists* We have two different ways of writing to a file:
```python
# 1 - The write method of file objects; the text argument must ALWAYS be a string
f.write(text)
```
```python
# 2 - The print function, changing the file argument
print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)
```
###Code
f = open("MyText2.txt","w")
f.write("Lehenengo ilara\n")
print("Bigarren ilara",file=f)
f.write("Hirugarren ilara")
f.close()
print("------------")
erakutsi("MyText2.txt")
###Output
------------
Lehenengo ilara
Bigarren ilara
Hirugarren ilara
###Markdown
When `mode="w"` is used, the previous contents are erased:
###Code
f = open("MyText2.txt","w")
print("Bat",file=f)
print("Bi",file=f)
print("Hiru",file=f)
f.close()
print("------------")
erakutsi("MyText2.txt")
###Output
------------
Bat
Bi
Hiru
###Markdown
When `mode="a"` is used, on the other hand, the previous contents are kept:
###Code
f = open("MyText2.txt","a")
print("Aaaaa",file=f)
print("Bbbbb",file=f)
print("Ccccc",file=f)
f.close()
print("------------")
erakutsi("MyText2.txt")
###Output
------------
Bat
Bi
Hiru
Aaaaa
Bbbbb
Ccccc
###Markdown
Files and errors
Errors can occur when opening a file: * If the file does not exist (`FileNotFoundError`) * If we do not have permission to open the file (`PermissionError`) * If we try to open a directory (`IsADirectoryError`)
###Code
erakutsi("MyText.txt")
#erakutsi("MyText.txtxtxtx")
#erakutsi("/notebooks/KonputaziorakoSarrera-MAT/Gardenkiak")
###Output
Hau testu fitxategi bat da
Bi ilara ditu
Ez, bi ez, hiru jakiña!
###Markdown
* When an error/exception occurs and it is not handled, the execution of the program ends. * Errors are handled with the `try-except-else` control structure:
```python
try:
    # code that may raise an error/exception
except:
    # code that handles the exception
else:
    # code executed if there was no exception
```
We could rewrite the previous function:
###Code
def erakutsi(bideizena,kodif="utf-8"):
try :
f = open(bideizena,encoding=kodif)
except :
print("ezin izan da fitxategia ireki")
else:
for s in f :
print(s,end="")
f.close()
erakutsi("MyText.txt")
print("\n------------")
erakutsi("MyText.txtxtxtx")
###Output
Hau testu fitxategi bat da
Bi ilara ditu
Ez, bi ez, hiru jakiña!
------------
ezin izan da fitxategia ireki
###Markdown
BUT...
Handling an exception **properly** is very, **very**, **VERY** hard * In the case of files, exceptions can also occur while we are reading * For example, if a pendrive gets unplugged. * The programmer has to find out, somehow, whether that exception happened. * In the previous example, they will not know * Correctly programmed handlers usually end up raising new errors... So, a simple rule: * DO NOT handle exceptions until you are completely clear about what you are doing.
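As an aside that goes beyond the original slides: whether or not you handle exceptions, the `with` statement guarantees that the file is closed even if an error occurs half-way through the loop, and when you really do need to react to a failure it is safer to catch a specific exception type than to use a bare `except`. A sketch (the function name is made up for illustration):
```python
def erakutsi_with(bideizena, kodif="utf-8"):
    try:
        with open(bideizena, encoding=kodif) as f:   # f is closed automatically, even on errors
            for s in f:
                print(s, end="")
    except FileNotFoundError:
        print("ezin izan da fitxategia ireki")       # only this specific failure is handled

erakutsi_with("MyText.txt")
```
The final version below goes back to the plain function, with no exception handling at all: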
###Code
def erakutsi(bideizena, kodif="utf-8"):
f = open(bideizena,encoding=kodif)
for s in f :
print(s,end="")
f.close()
###Output
_____no_output_____ |
Exploring the Backdoor Algorithm.ipynb | ###Markdown
Upgrading the Backdoor Algorithm
We have two objectives in this notebook. The first is to better understand the mechanics of the existing backdoor identification algorithm at work in DoWhy. The second is to plan out and execute an improvement which gives the expected answers to the five games below. A tertiary goal may be to create a method for generating fake data based on a given graph; the existing fake data generator is not very robust.
###Code
import pandas as pd
import numpy as np
import daft
import dowhy
from dowhy.do_why import CausalModel
from dowhy.causal_graph import CausalGraph
from dowhy.causal_identifier import CausalIdentifier
from dowhy.datasets import stochastically_convert_to_binary
import networkx as nx
import statsmodels.api as sm
from copy import deepcopy
import sympy as sp
import sympy.stats as spstats
# Helper functions adapted to work with numpy arrays.
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def stochastically_convert_to_binary(x):
ps = sigmoid(x)
return np.hstack([np.random.choice([0, 1], 1, p=[1-p, p]) for p in ps])
def stachastically_combine(parents):
# We define this in this way so that we can work with an arbitrary
# number of parents.
# This intercept centers the mean of the values around 0.5
size = len(parents[0])
#output = np.ones(size)*-0.25*(len(parents)-1)
#for p in parents:
# output += 0.5*p
output = np.mean(parents, axis=0)
return stochastically_convert_to_binary(output)
def binary_graph_data(treatment_name='X', outcome_name='Y', observed_node_names=['X', 'Y'], ate=1, graph=None, n_obs=1000):
"""Generates data from a provided causal graph.
For simplicity, all variables are binary.
Args:
treatment_name (string): name of the treatment variable in the provided graph
outcome_name (string): name of the outcome variable in the provided graph
observed_node_names (list): list of all nodes in the graph for which we will generate data
ate (float): the average treatment effect. Currently doesn't work. I'm not sure how I would implement it.
graph (string): the graph provided in either GML or DOT format
Returns:
ret_dict (dictionary): a dictionary of the desired dataframe and some of the input
parameters to match the other dataset generators
"""
def generate_data(observed_node_names, parents, children, data={}, var=None):
"""Recursive algorithm to generating data
Get a random var from the list and check it's parents and children.
If the node has no parents
binary data is generated at random and recursively runs on a child
Else
it checks if data has been generated for all parents, and
if not, runs itself on the parent.
If data has been generated for all parents, then
it creates a linear column of data dependant on its parents.
"""
if var is None:
# Allows us to not specify a var to start with
var = observed_node_names.pop()
p, c = parents[var], children[var]
if var in data.keys():
pass
elif p == set():
# If it has no parents, then it's a root node. Randomly generate it.
#print(f"Generating {var}")
data[var] = np.random.choice(2, n_obs)
try:
# Remove it from the list of variables left to generate
observed_node_names.remove(var)
except:
pass
else:
# If the selected node is not a root, then it needs to be checked
# to ensure that all parents have been calculated
for parent in list(p):
if parent not in data.keys():
generate_data(observed_node_names, parents, children, data=data, var=parent)
# Finally, if all parents have data, then we can generate data for this node
parent_data = [data[parent] for parent in list(p)]
#print(f"Generating {var}")
data[var] = stachastically_combine(parent_data)
if observed_node_names != []:
child = observed_node_names.pop()
generate_data(observed_node_names, parents, children, data=data, var=child)
return pd.DataFrame(data)
cgraph = CausalGraph(treatment_name=treatment_name, outcome_name=outcome_name, graph=graph, observed_node_names=observed_node_names)
# We don't care about unobserved confounders in this dataset, so we generate the unobserved
# subgraph and work with that.
obs_graph = cgraph.get_unconfounded_observed_subgraph()
# Saves us from calling .predecessors and descendants a bunch
parents = {var: set(obs_graph.predecessors(var)) for var in observed_node_names}
children = {var: set(nx.descendants(obs_graph, var)) for var in observed_node_names}
data = generate_data(observed_node_names, parents, children)
ret_dict = {
"df": data,
"treatment_name": treatment_name,
"outcome_name": outcome_name,
"graph": graph,
"ate": ate
}
return ret_dict
###Output
_____no_output_____
###Markdown
The Causal Identifier Class
In DoWhy, causal effects are identified through a combination of instrumental variables and backdoor adjustment. I had expected some form of algorithm to identify backdoors, but it appears that they actually rely on a function which identifies common causes. I suspect that this is the root of why they over-identify deconfounders... And confirmed.
###Code
# The code is actually a method of the CausalGraph object
graph_dot = "digraph {A -> X;A -> B;B -> C;X -> E;D -> B;D -> E;E -> Y}"
observed_node_names = ['X', 'A', 'Y', 'B', 'C', 'D', 'E']
treatment = 'X'
outcome = 'Y'
graph = CausalGraph(treatment_name=treatment, outcome_name=outcome,
graph=graph_dot, observed_node_names=observed_node_names,
unobserved_common_cause=False)
# This will produced the "A" that we believe is unnecessary in the
# estimand
graph.get_common_causes("X", "Y")
# We can identify the mechanics fairly easily from how .get_common_causes works.
# We start with sets of ancestors of both nodes.
print(f"Ancestors of X: {graph.get_ancestors('X')}")
print(f"Ancestors of Y: {graph.get_ancestors('Y')}")
# They then take the intersection of these sets to find the common nodes
print(f"The 'Common Causes': {list(graph.get_ancestors('X').intersection(graph.get_ancestors('Y')))}")
###Output
Ancestors of X: {'A'}
Ancestors of Y: {'X', 'A', 'E', 'D'}
The 'Common Causes': ['A']
###Markdown
The primary issue with the current implementation in DoWhy is that it fails to account for the directions of the arrows. The method doesn't consider any of the junctions, nor whether the given nodes are *d-separated*. As a result, when it examines a graph like the one in game 2, used in the example above, it unnecessarily includes any node which is an ancestor of both X and Y, completely ignoring the colliders which prevent information from getting back to Y through the backdoor.
Implementing a Proper Backdoor Identification Algorithm
So, the next question is: can we implement a general algorithm for identifying backdoor paths given any causal graph? For simplicity, let's assume that it is a DAG. Within the structure of DoWhy, it might best be implemented either as a method of the Graph or within its own Causal Identifier class. Outline of the general approach: 1. Identify all nodes in backdoor paths of X to Y. 2. Evaluate all junctions to see if X and Y are d-separated * If they are: Stop * If they aren't, control for one junction
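As an aside before building this by hand (and assuming a reasonably recent networkx is installed, which may not have been true of the original environment): networkx ships its own d-separation test, which can serve as a cross-check on whatever we implement. The backdoor criterion asks whether X and Y are d-separated by a set Z once the edges *out of* X are removed, so the sketch below builds that pruned graph first. `nx.d_separated` is the older name; very recent releases rename it `is_d_separator`.
```python
# Sketch of a cross-check with networkx's built-in d-separation test (requires networkx >= 2.4)
dag = nx.DiGraph(graph._graph)                               # flatten DoWhy's internal MultiDiGraph
backdoor_graph = dag.copy()
backdoor_graph.remove_edges_from(list(dag.out_edges('X')))   # drop the causal arrows leaving X
print(nx.d_separated(backdoor_graph, {'X'}, {'Y'}, set()))   # should be True: the collider at B already blocks the backdoor
print(nx.d_separated(backdoor_graph, {'X'}, {'Y'}, {'A'}))   # also True: conditioning on A is harmless but unnecessary
```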
###Code
graph_dot = "digraph {A -> X;A -> B;C -> B;C -> Y;X -> Y;B -> X;}"
observed_node_names = ['X', 'A', 'Y', 'B', 'C']
treatment = 'X'
outcome = 'Y'
graph = CausalGraph(treatment_name=treatment, outcome_name=outcome,
graph=graph_dot, observed_node_names=observed_node_names,
unobserved_common_cause=False)
graph.view_graph()
# Identifying all of the parents of X, which tells us
# the nodes from which all backdoor paths flow.
backdoor_root = graph.get_parents('X')
# We will use the undirected version of this graph to get all backdoor paths
undag = graph._graph.to_undirected()
bdps = []
for backdoor in backdoor_root:
for path in nx.algorithms.simple_paths.all_simple_paths(undag, backdoor, outcome):
if treatment not in path:
bdps.append(path)
print(bdps)
# At this point I think we want to induce a subgraph on just the appropriate nodes
bd_graph = graph._graph.subgraph(bdps[1])
# Doesn't that look like the prettiest junction you've ever seen
nx.draw(bd_graph, with_labels=True, font_weight='bold')
# So we can actually get a list of all junctions simply by moving a window over the list.
from collections import deque
def window(seq, n=2):
"""A nice solution stolen from https://stackoverflow.com/questions/6822725/rolling-or-sliding-window-iterator"""
it = iter(seq)
win = deque((next(it, None) for _ in range(n)), maxlen=n)
yield win
append = win.append
for e in it:
append(e)
yield win
def get_junctions(bdp):
return [list(junction) for junction in window(bdp, n=3)]
# This allows us to get a list of all junctions for a given backdoor path
junctions = get_junctions(bdps[1])
junctions
# We now need to be able to evaluate each junction and place it into one of three categories:
# 1. A chain: X -> A -> Y
# 2. A collider: X -> A <- Y
# 3. A confounder: X <- A -> Y
bdps
# One way to test the status of the junction
test_junction = ['A', 'B', 'C']
def test_junction(graph, junction):
jscore = 0
l, m, r = junction
# Induce the subgraph for this junction
jgraph = graph.subgraph(junction)
# And test it's edges
for edge in list(jgraph.edges):
if (edge == (m, l, 0)) | (edge == (m, r, 0)):
# If we encounter an edge like this, we know it can't be a collider
return 0
# If we get here it's a collider
return 1
test_junction(bd_graph, junction=['A', 'B', 'C'])
###Output
_____no_output_____
###Markdown
So at this point we can: * Generate all backdoor paths * Generate a list of junctions for each backdoor path * Test each junction to see if it's open or closed This just leaves simultaneously closing all backdoor paths. This problem takes the interesting form of a constraint satisfaction problem. The real challenge will be in keeping track of how a change in one backdoor path will change the state of a different one. Here's our general algorithm for identifying all deconfounding sets in addition to the smallest one:
```
Given a directed acyclic graph G, the backdoor paths from X to Y are handled as follows:
  1. On the undirected equivalent of graph G, identify all paths from the parents of X to Y
     which don't pass through X. These are the backdoor paths.
  2. For each backdoor path, decompose the path into a list of junctions.
  3. Evaluate whether each junction is open or closed by the d-separation rules.
  4. If, for each backdoor path, at least one junction is closed, then stop: X and Y are d-separated.
  5. Otherwise, we begin a breadth-first tree search. Each child is given by the action of
     controlling for one of the open nodes in the backdoor path that is entirely open.
     `controlling` for 'A' is equivalent to negating the state of all junctions where 'A' is
     in the middle. Each child is then evaluated to see if X and Y are d-separated given Z
     (the set of nodes we controlled for). If not, the tree continues. We can stop once we've
     identified the smallest set Z which causes X and Y to be d-separated, or continue until
     we've identified all of the deconfounding sets.
```
###Code
def search_for_deconfounders(jstates, bdjunctions, paths, depth=1, deconfounders=[], proposal=[]):
if (depth >= 10) or ((paths==0).sum() == 0):
# We'll stop if we've reached 10 variables [arbitrary stopping point] or if this set has successfully deconfounded the graph
return proposal
# Determine which path is open
pathix = np.argmin(paths)
# And which nodes are in the middle of each junction in that path
nodes = [node[1] for node in bdjunctions[pathix]]
# Determine which of those nodes have already been controlled for in the proposal
newnodes = list(set(nodes).difference(proposal))
# We will consider the consequences of controlling for every junction which isn't already in the proposal.
for node in newnodes:
# We are going to create deepcopies of our input variables off the bat because we don't
# want cross contamination between different sets of deconfounders.
newbdstates, newjstates, newproposal = deepcopy(bdjunctions), deepcopy(jstates),deepcopy(proposal)
# extract the middle value from that junction. We will control this node.
newproposal.append(node)
print(f"Currently considering: {newproposal}")
# The effect of which is that for every junction where that node is the middle, we should
# flip the state of that junction in a new copy of jstates
for key, item in newjstates.items():
if key[1] == node:
newjstates[key] = int(not(newjstates[key]))
# We can then update the junction states in their path context as well
for ix, backdoor in enumerate(newbdstates):
for jx, junction in enumerate(newbdstates[ix]):
newbdstates[ix][jx] = newjstates[tuple(junction)]
# Calculate new path scores
newpaths = np.array([sum(path) for path in newbdstates])
deconfounder = search_for_deconfounders(jstates=newjstates,bdjunctions=bdjunctions, paths=newpaths,
depth=depth+1, deconfounders=deconfounders, proposal=newproposal)
#print(f"State of deconfounders: {deconfounder}")
deconfounders.append(deconfounder)
return deconfounders
def identify_deconfounders(treatment_name, outcome_name, graph):
"""Identifies the minimal set of deconfounders from a given treatment and outcome node
:param treatment_name: name of the treatment variable
:param outcome_name: name of the outcome variable
:param graph: a CausalGraph instance
:returns: a list of deconfounders for all backdoor paths from treatment to outcome
"""
# Identifying all of the parents of X, which tells us
# the nodes from which all backdoor paths flow.
backdoor_root = graph.get_parents(treatment_name)
# We will use the undirected version of this graph to get all backdoor paths
undag = graph._graph.to_undirected()
bdpaths = []
for backdoor in backdoor_root:
for path in nx.algorithms.simple_paths.all_simple_paths(undag, backdoor, outcome_name):
if treatment_name not in path:
bdpaths.append([treatment_name]+path)
# Induce subgraphs on each set of backdoor nodes
bdgraphs = [graph._graph.subgraph(bdp) for bdp in bdpaths]
# This allows us to get a list of all junctions for each backdoor path
bdjunctions = [get_junctions(bdp) for bdp in bdpaths]
# We will store the state of all junctions in the same dictionary. This
# is how we'll allow closing a junction for one backdoor path influence
# the other backdoor paths.
jstates = {}
# Iterating through each backdoor path and junction, we deterine its state
# by determining if its a collider or not.
bdstates = deepcopy(bdjunctions)
for ix, backdoor in enumerate(bdgraphs):
for jx, junction in enumerate(bdjunctions[ix]):
# tuple is required because it's hashable, unlike lists...
state = test_junction(bdgraphs[ix], junction=junction)
jstates[tuple(junction)] = state
bdstates[ix][jx] = state
# Time to close the backdoors. If the sum of a backdoor paths's states is 0
# then it's entirely open (no colliders at the beginning). We pull in numpy
# here so we can more naturally work with the arrays.
paths = np.array([sum(path) for path in bdstates])
# We'll store the identified sets of deconfounders as we find them. We will put
# an artificial cap based on the number of layers that we'll search in the tree.
# It will collect as muany sets as it can. We just need to be mindful of ending
# when there are no more possibilities.
return search_for_deconfounders(jstates=jstates, bdjunctions=bdjunctions, paths=paths, depth=1, deconfounders=[], proposal=[])
graph_dot = "digraph {A -> X;A -> B;C -> B;C -> Y;X -> Y;B -> X;}"
observed_node_names = ['X', 'A', 'Y', 'B', 'C']
treatment = 'X'
outcome = 'Y'
graph = CausalGraph(treatment_name=treatment, outcome_name=outcome,
graph=graph_dot, observed_node_names=observed_node_names,
unobserved_common_cause=False)
deconfounders = identify_deconfounders(treatment, outcome, graph)
deconfounders
deconfounders[3][3][3][3][3][3][3][3][3][3][3][3][3][3][3][3][3][3][3][3][3][3][3][3][3][3][3]
###Output
_____no_output_____
###Markdown
Example 1: Book of Why Backdoor Games
These games all come from Chapter 4 of the Book of Why. They are devoid of context, but the graphical model is given and our only job is to identify the set of backdoor paths from X to Y. I will use these games as unit tests of the behavior of DoWhy's backdoor algorithm.
Game 1
This relatively trivial game has no backdoor paths.
###Code
graph_dot = "digraph {X -> A;A -> Y;A -> B;}"
observed_node_names = ["B", "X", "Y", "A"]
data = binary_graph_data(observed_node_names=observed_node_names, graph=graph_dot)
graph_dot = "digraph {X -> A;A -> Y;A -> B;}"
observed_node_names = ["B", "X", "Y", "A"]
data = binary_graph_data(observed_node_names=observed_node_names, graph=graph_dot)
game_one = CausalModel(
data=data['df'],
treatment=data['treatment_name'],
outcome=data['outcome_name'],
graph=data['graph'])
game_one.view_model()
identified_estimand = game_one.identify_effect()
print(identified_estimand)
identify_deconfounders('X', 'Y', game_one._graph)
# Unfortunate that this generates an error.
lr_estimate = game_one.estimate_effect(identified_estimand,
method_name="backdoor.linear_regression")
print(lr_estimate)
print("Causal Estimate is " + str(lr_estimate.value))
# We can confirm this estimate against what we could produce without DoWhy
lm = sm.OLS(endog=data['df']['Y'], exog=sm.tools.add_constant(data['df']['X']))
res = lm.fit()
print(res.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: Y R-squared: 0.004
Model: OLS Adj. R-squared: 0.003
Method: Least Squares F-statistic: 3.835
Date: Fri, 15 Mar 2019 Prob (F-statistic): 0.0505
Time: 17:50:02 Log-Likelihood: -696.70
No. Observations: 1000 AIC: 1397.
Df Residuals: 998 BIC: 1407.
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 0.5839 0.022 26.394 0.000 0.540 0.627
X 0.0602 0.031 1.958 0.050 -0.000 0.121
==============================================================================
Omnibus: 33.901 Durbin-Watson: 2.047
Prob(Omnibus): 0.000 Jarque-Bera (JB): 166.303
Skew: -0.470 Prob(JB): 7.72e-37
Kurtosis: 1.237 Cond. No. 2.66
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
Game 2
Introduces a backdoor path, but it's already blocked!
###Code
graph_dot = "digraph {A -> X;A -> B;B -> C;X -> E;D -> B;D -> E;E -> Y}"
observed_node_names = ['X', 'A', 'Y', 'B', 'C', 'D', 'E']
data = binary_graph_data(observed_node_names=observed_node_names, graph=graph_dot, treatment_name='X', n_obs=5000)
game_two = CausalModel(
data=data['df'],
treatment=data['treatment_name'],
outcome=data['outcome_name'],
graph=data['graph'])
game_two.view_model()
identified_estimand = game_two.identify_effect()
print(identified_estimand)
identify_deconfounders('X', 'Y', game_two._graph)
lr_estimate = game_two.estimate_effect(identified_estimand,
method_name="backdoor.linear_regression")
print(lr_estimate)
print("Causal Estimate is " + str(lr_estimate.value))
###Output
INFO:dowhy.causal_estimator:INFO: Using Linear Regression Estimator
INFO:dowhy.causal_estimator:b: Y~X+A
###Markdown
The identified estimand, $Y \sim X+A$, is again not what we expect. It is treating A as an instrumental variable... I'm not sure that this is totally correct, though the back door paths from A to Y are blocked and thus we aren't actually introducing bias by controlling for A. However, since the only backdoor path from X to Y is blocked by the collider at B we don't need to control for A in the first place, which was the answer that I expected from the algorithm.
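As a quick numerical sanity check on that reading (an addition, not part of the original analysis), we can compare the regression of Y on X alone with the regression that also adjusts for A; if conditioning on A really is harmless here, the coefficient on X should barely move:
```python
# Compare the estimated X coefficient with and without A in the adjustment set
fit_x = sm.OLS(endog=data['df']['Y'], exog=sm.tools.add_constant(data['df'][['X']])).fit()
fit_xa = sm.OLS(endog=data['df']['Y'], exog=sm.tools.add_constant(data['df'][['X', 'A']])).fit()
print(fit_x.params['X'], fit_xa.params['X'])
```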
###Code
# We can confirm this estimate against what we could produce without DoWhy
lm = sm.OLS(endog=data['df']['Y'], exog=sm.tools.add_constant(data['df'][['E']]))
res = lm.fit()
print(res.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: Y R-squared: 0.000
Model: OLS Adj. R-squared: 0.000
Method: Least Squares F-statistic: 1.631
Date: Fri, 15 Mar 2019 Prob (F-statistic): 0.202
Time: 17:50:34 Log-Likelihood: -3369.3
No. Observations: 5000 AIC: 6743.
Df Residuals: 4998 BIC: 6756.
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 0.6459 0.011 59.437 0.000 0.625 0.667
E 0.0177 0.014 1.277 0.202 -0.009 0.045
==============================================================================
Omnibus: 304.804 Durbin-Watson: 2.013
Prob(Omnibus): 0.000 Jarque-Bera (JB): 871.967
Skew: -0.660 Prob(JB): 4.52e-190
Kurtosis: 1.437 Cond. No. 3.00
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
Game 3
This game finally introduces a backdoor path which requires us to control for a variable.
###Code
graph_dot = "digraph {X -> Y;B -> X;B -> Y;X -> A;B -> A;}"
observed_node_names = ['X', 'A', 'Y', 'B']
data = binary_graph_data(observed_node_names=observed_node_names, graph=graph_dot)
game_three = CausalModel(
data=data['df'],
treatment=data['treatment_name'],
outcome=data['outcome_name'],
graph=data['graph'])
game_three.view_model()
identified_estimand = game_three.identify_effect()
print(identified_estimand)
identify_deconfounders('X', 'Y', game_three._graph)
lr_estimate = game_three.estimate_effect(identified_estimand,
method_name="backdoor.linear_regression")
print(lr_estimate)
print("Causal Estimate is " + str(lr_estimate.value))
# We can confirm this estimate against what we could produce without DoWhy
lm = sm.OLS(endog=data['df']['Y'], exog=sm.tools.add_constant(data['df'][['X', 'B']]))
res = lm.fit()
print(res.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: Y R-squared: 0.047
Model: OLS Adj. R-squared: 0.045
Method: Least Squares F-statistic: 24.33
Date: Fri, 15 Mar 2019 Prob (F-statistic): 4.84e-11
Time: 17:50:55 Log-Likelihood: -659.30
No. Observations: 1000 AIC: 1325.
Df Residuals: 997 BIC: 1339.
Df Model: 2
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 0.5037 0.027 18.782 0.000 0.451 0.556
X 0.0851 0.031 2.740 0.006 0.024 0.146
B 0.1768 0.030 5.884 0.000 0.118 0.236
==============================================================================
Omnibus: 45.234 Durbin-Watson: 1.918
Prob(Omnibus): 0.000 Jarque-Bera (JB): 143.071
Skew: -0.552 Prob(JB): 8.56e-32
Kurtosis: 1.511 Cond. No. 3.35
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
This one worked great. Appropriately controls for B.
Game 4
This graph is designed specifically as a trap called "M-bias". Many statisticians would control for B. What would DoWhy do?
###Code
graph_dot = "digraph {A -> X;A -> B;C -> B;C -> Y;}"
observed_node_names = ['X', 'A', 'Y', 'B', 'C']
data = binary_graph_data(observed_node_names=observed_node_names, graph=graph_dot)
game_four = CausalModel(
data=data['df'],
treatment=data['treatment_name'],
outcome=data['outcome_name'],
graph=data['graph'])
game_four.view_model()
identified_estimand = game_four.identify_effect()
print(identified_estimand)
identify_deconfounders('X', 'Y', game_four._graph)
lr_estimate = game_four.estimate_effect(identified_estimand,
method_name="backdoor.linear_regression")
print(lr_estimate)
print("Causal Estimate is " + str(lr_estimate.value))
# We can confirm this estimate against what we could produce without DoWhy
lm = sm.OLS(endog=data['df']['Y'], exog=sm.tools.add_constant(data['df'][['X']]))
res = lm.fit()
print(res.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: Y R-squared: 0.001
Model: OLS Adj. R-squared: -0.000
Method: Least Squares F-statistic: 0.7938
Date: Fri, 15 Mar 2019 Prob (F-statistic): 0.373
Time: 17:51:21 Log-Likelihood: -710.02
No. Observations: 1000 AIC: 1424.
Df Residuals: 998 BIC: 1434.
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 0.5700 0.025 23.139 0.000 0.522 0.618
X 0.0283 0.032 0.891 0.373 -0.034 0.091
==============================================================================
Omnibus: 19.902 Durbin-Watson: 2.188
Prob(Omnibus): 0.000 Jarque-Bera (JB): 166.800
Skew: -0.353 Prob(JB): 6.02e-37
Kurtosis: 1.128 Cond. No. 2.92
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
DoWhy gets this one solidly. No need to control for anything, and if the model is working right then we should detect no effect, as there is no causal path from X to Y.
Game 5
Our final, and most challenging, game. There are two possible solutions: either C alone can be controlled for, or both A and B. Let's see what it determines.
###Code
graph_dot = "digraph {A -> X;A -> B;C -> B;C -> Y;B -> X; X -> Y}"
observed_node_names = ['X', 'A', 'Y', 'B', 'C']
data = binary_graph_data(observed_node_names=observed_node_names, graph=graph_dot, n_obs=1000)
game_five = CausalModel(
data=data['df'],
treatment=data['treatment_name'],
outcome=data['outcome_name'],
graph=data['graph'])
game_five.view_model()
identified_estimand = game_five.identify_effect()
print(identified_estimand)
deconfounders = identify_deconfounders('X', 'Y', game_five._graph)
deconfounders[-1][-1]
lr_estimate = game_five.estimate_effect(identified_estimand,
method_name="backdoor.linear_regression")
print(lr_estimate)
print("Causal Estimate is " + str(lr_estimate.value))
# We can confirm this estimate against what we could produce without DoWhy
lm = sm.OLS(endog=data['df']['Y'], exog=sm.tools.add_constant(data['df'][['X', 'C']]))
res = lm.fit()
print(res.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: Y R-squared: 0.030
Model: OLS Adj. R-squared: 0.028
Method: Least Squares F-statistic: 15.43
Date: Thu, 14 Mar 2019 Prob (F-statistic): 2.51e-07
Time: 16:20:18 Log-Likelihood: -672.11
No. Observations: 1000 AIC: 1350.
Df Residuals: 997 BIC: 1365.
Df Model: 2
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 0.5054 0.028 17.957 0.000 0.450 0.561
X 0.1095 0.031 3.555 0.000 0.049 0.170
C 0.1225 0.030 4.072 0.000 0.063 0.182
==============================================================================
Omnibus: 44.163 Durbin-Watson: 1.968
Prob(Omnibus): 0.000 Jarque-Bera (JB): 152.421
Skew: -0.544 Prob(JB): 7.98e-34
Kurtosis: 1.427 Cond. No. 3.47
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
|
Product_recommendation_engine.ipynb | ###Markdown
This notebook will create a product recommendation engine Import dependencies
###Code
import pandas as pd
import numpy as np
import requests
import json
from re import search
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.feature_extraction.text import CountVectorizer
###Output
_____no_output_____
###Markdown
Get setup variables
###Code
with open('config.json') as json_file:
data = json.load(json_file)
BASE_URL = data['EC2_API_ENDPOINT']
MY_TOKEN = data['GUEST_TOKEN']
###Output
_____no_output_____
###Markdown
Import dataset by making get request for REST API
###Code
base_url = BASE_URL
HEADER = {'Authorization':f'Token {MY_TOKEN}'}
query_url = f"{base_url}product/"
#print(query_url)
db_prods = requests.get(url=query_url, headers=HEADER).json()
###Output
_____no_output_____
###Markdown
Create dataframe
###Code
df = pd.json_normalize(db_prods)
###Output
_____no_output_____
###Markdown
View data
###Code
df.head()
###Output
_____no_output_____
###Markdown
Get count of products
###Code
df.shape[0]
###Output
_____no_output_____
###Markdown
Convert the text to a matrix of token counts
###Code
cm = CountVectorizer().fit_transform(df['slug'])
cm
###Output
_____no_output_____
###Markdown
Get the cosine similarity matrix from the count matrix
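For reference (an addition to the original notebook), the cosine similarity between two token-count vectors $a$ and $b$ is
$$\cos(a, b) = \frac{a \cdot b}{\lVert a \rVert\,\lVert b \rVert} = \frac{\sum_i a_i b_i}{\sqrt{\sum_i a_i^2}\,\sqrt{\sum_i b_i^2}},$$
so two slugs that share many tokens score close to 1, while slugs with no tokens in common score 0.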
###Code
cs = cosine_similarity(cm)
###Output
_____no_output_____
###Markdown
View the cosine similarity matrix
###Code
cs
###Output
_____no_output_____
###Markdown
Get the shape of the cosine similarity matrix
###Code
cs.shape
###Output
_____no_output_____
###Markdown
Get a recently purchased or viewed product
###Code
product = 'nike-blazer-mid-off-white-all-hallows-eve'
###Output
_____no_output_____
###Markdown
Get the product id for that slug
###Code
product_id = df[df.slug == product]['id'].values[0]
product_id
###Output
_____no_output_____
###Markdown
Create a list of similarity scores [(index, similarity score), ...]. Note: we use cs[product_id - 1] because the matrix row for a product sits at index product_id - 1.
###Code
scores = list(enumerate(cs[product_id-1]))
scores[:5]
###Output
_____no_output_____
###Markdown
Sort list
###Code
scored_scores = sorted(scores, key = lambda x:x[1], reverse=True)
scored_scores[:5]
###Output
_____no_output_____
###Markdown
Exclude the first item in the list; the most similar product will be the current product itself
###Code
sorted_scores = scored_scores[1:]
sorted_scores[:5]
###Output
_____no_output_____
###Markdown
Recommend 3 products by running a GET request for those product ids. Note: we add 1 because the matrix index is 1 lower than the actual product id
###Code
#base_url = query_url
for i in range(3):
recommend_prod_id = sorted_scores[i][0]
#print(recommend_prod_id)
recommend_prod_id +=1
query_url = f'{base_url}product/{recommend_prod_id}'
#print(query_url)
#print(recommend_prod_id)
recommend_product = requests.get(url=query_url, headers=HEADER).json()
print(recommend_product['name'])
###Output
nike blazer mid off white
nike blazer mid off white grim reaper
nike blazer mid off white wolf grey
###Markdown
Save the cosine similarity matrix as a pickle file
###Code
import pickle
#pickle.dump(cm, open("countmatrix.pickle", "wb"))
pickle.dump(cs, open("simscores.pickle", "wb"))
###Output
_____no_output_____
###Markdown
Test the pickle file by loading it back and running the recommender
###Code
sim_scores_model = pickle.load(open('simscores.pickle', 'rb'))
sim_scores_model
def recommend_products(prod_id):
    # Return the 3 products most similar to the given product id, fetched from the API
    recommended_products = []
    scores = list(enumerate(sim_scores_model[prod_id-1]))
    scored_scores = sorted(scores, key=lambda x: x[1], reverse=True)
    sorted_scores = scored_scores[1:]   # drop the product itself
    for i in range(3):
        recommend_prod_id = sorted_scores[i][0]
        recommend_prod_id += 1          # matrix index -> product id
        query_url = f'{base_url}product/{recommend_prod_id}'
        recommend_product = requests.get(url=query_url, headers=HEADER).json()
        recommended_products.append(recommend_product)
    return recommended_products
new_product = 'nike-blazer-mid-off-white-all-hallows-eve'
new_prod_id = 37
recommend_products(new_prod_id)
###Output
_____no_output_____ |
docs/notebooks/Parameter_Scans.ipynb | ###Markdown
Parameter Scans
This notebook demonstrates how to edit / change parameter scan tasks. While most scans can be done in basico using simple loops, here we create the scans for the COPASI `Parameter Scan` task. That means that these scans can be carried out later using the COPASI graphical user interface or the command line interface.
###Code
from basico import *
###Output
_____no_output_____
###Markdown
Let's start by using the brusselator example, where a scan task is already set up. Using `get_scan_settings`, we can see all the individual settings:
###Code
load_example('brusselator')
get_scan_settings()
###Output
_____no_output_____
###Markdown
The dictionary that is returned contains all the information stored in the COPASI file. Alternatively, you could also just retrieve the scan items using `get_scan_items`.
###Code
get_scan_items()
###Output
_____no_output_____
###Markdown
Modifying Scan Settings
Analogous to retrieving the settings, they can be set as well, using the same keys as displayed above. Again, you can change all settings using `set_scan_settings`, or just change the scan items using `set_scan_items`. If the scan settings dictionary contains a `scan_items` element, or if `set_scan_items` is called, then all scan items are replaced with the ones given. Alternatively, `add_scan_item` can be used to add one or more scan items directly. Let's change the scan item from above, so that the `(R1).k1` parameter is changed to the specific values of 0.5, 1.0 and 2.0:
###Code
set_scan_items([{'item': '(R1).k1', 'values': [0.5, 1.0, 2]}])
get_scan_items()
###Output
_____no_output_____
###Markdown
Scan items can be specified either through their display name, using the `item` key, or by specifying the `cn` directly. The scan item can be of one of three types: * `scan`: this is the default (so it will be used if not specified); here a model element is varied either through explicit `values`, or between a specified `min` and `max` value * `repeat`: here the subtask is repeated `num_steps` times * `random`: here the value for the specified model element is sampled from the specified `distribution` (which can be `uniform`, `normal`, `poisson` or `gamma`). For example, to specify a repeat you'd use:
###Code
add_scan_item(type='repeat', num_steps=10)
get_scan_items()
###Output
_____no_output_____
###Markdown
As you can see, by using `add_scan_item`, the repeat item is added at the end of the scan list. In COPASI, when multiple scan items are defined, the semantics are that for each value of scan item 1, all values of scan item 2 are processed. For scan items of type `random`, the min/max elements take on a different meaning depending on the `distribution`: * `uniform`: the value is sampled between the min and max value * `normal`: min=`mean` and max=`standard deviation` * `poisson`: min=`mean` and max has no meaning * `gamma`: min=`shape` and max=`scale` As a last example, let's change the scan so that the initial concentration of species `A` is sampled from a normal distribution around 2 (with standard deviation 0.1), and we want to do that 5 times:
###Code
set_scan_items([
{
'type': 'repeat',
'num_steps': 5
},
{
'item': '[A]_0',
'type':'random',
'distribution': 'normal',
'min': 2,
'max': 0.1
}])
get_scan_items()
###Output
_____no_output_____
###Markdown
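As a small aside (a sketch built only from the keys documented above, with purely illustrative values), the same `add_scan_item` route also accepts the distribution keys, e.g. to append a uniformly sampled scan item:
```python
# Append a scan item that samples [A]_0 uniformly between 1 and 3 (illustrative values)
add_scan_item(item='[A]_0', type='random', distribution='uniform', min=1, max=3)
get_scan_items()
```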
Running a scan:
You can run these scans in basico using the `run_scan` method, where you can optionally pass along a `settings` parameter to reconfigure the scan, or an `output` selection to grab some data directly. So let's run the configured scan task, changing it to run the steady state task and collecting the sampled `[A]_0` along with the final concentrations of `X` and `Y`:
###Code
run_scan(settings={'subtask': T.STEADY_STATE}, output=['[A]_0', '[X]', '[Y]'])
###Output
_____no_output_____ |
Market_Basket_Analysis_AssociationRuleLearning_a)Apriori, b)Eclat.ipynb | ###Markdown
Market_Basket_Analysis_AssociationRuleLearning: a)Apriori, b)Eclat**--------------------------------------------------------------------------------------------------------------------------****--------------------------------------------------------------------------------------------------------------------------****--------------------------------------------------------------------------------------------------------------------------****---------------------------------------------------****STRUCTURE***In this notebook, the use of two algorithms (**Part A**: Apriori and **Part B**: Eclat) for Market Basket Analysis (groceries dataset) is demonstrated. This type of analysis is based on Association Rule Learning, which associates item sets based on the frequency of their appearance in the dataset (it depends on 'past relational knowledge' from dataset records). In the first part of this project (**Part A**), the scope is to determine the strength of the relationship between all combinations of two items from the grocery store that are frequently bought together (i.e. place item 2 in the basket given the fact that item 1 is already in the basket). For the Apriori algorithm to perform this task, it is necessary to select values for a) minimum_support, b) minimum_confidence, c) minimum_lift and d) the minimum/maximum length of the item set. The algorithm results are displayed in descending order with respect to the 'lift' (lift = confidence / support). The higher the 'lift' value, the stronger the association between two grocery items. In the second part (**Part B**) of this Market Basket Analysis demonstration, the Eclat algorithm is used, which is a simplified version of Apriori, as the strength of the item-set association is based only on support (item set frequency / length of records) and not on confidence and lift. Therefore, this example is focused on sets, as the goal is to identify the items of the given sets that are associated the most with respect to the total number of dataset records.***The Dataset (.csv file format) for this project has been obtained from Kaggle:**"*Groceries Market Basket Dataset*" -- File: "groceries.csv" -- Source: https://www.kaggle.com/irfanasrullah/groceries
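To make the three metrics concrete before touching the data, here is a tiny worked example with made-up counts (purely illustrative, not taken from the groceries file; the lift denominator below is the support of the consequent item, which is the usual definition):
```python
# Made-up counts: 7500 baskets in total, 750 contain item 1, 1500 contain item 2, 300 contain both
n_baskets, n_item1, n_item2, n_both = 7500, 750, 1500, 300
support_pair = n_both / n_baskets                    # 0.04 -> the pair shows up in 4% of baskets
confidence = n_both / n_item1                        # 0.40 -> 40% of item-1 baskets also hold item 2
lift = confidence / (n_item2 / n_baskets)            # 2.0  -> item 1 doubles the chance of seeing item 2
print(support_pair, confidence, lift)
```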
###Code
# importing the libraries
import numpy as np
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
# importing the dataset
data=pd.read_csv('groceries.csv')
# Dataset first 5 records
data.head()
# The first column of the dataset 'Item(s)' counts the number of products in each basket, therefore its presence is not
# required in the dataset, as we are only interest in the products. Therefore, this column can be dropped
data=data.drop('Item(s)',axis=1)
# Data info: 9835 records, Dtype('object'), not null
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 9835 entries, 0 to 9834
Data columns (total 32 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Item 1 9835 non-null object
1 Item 2 7676 non-null object
2 Item 3 6033 non-null object
3 Item 4 4734 non-null object
4 Item 5 3729 non-null object
5 Item 6 2874 non-null object
6 Item 7 2229 non-null object
7 Item 8 1684 non-null object
8 Item 9 1246 non-null object
9 Item 10 896 non-null object
10 Item 11 650 non-null object
11 Item 12 468 non-null object
12 Item 13 351 non-null object
13 Item 14 273 non-null object
14 Item 15 196 non-null object
15 Item 16 141 non-null object
16 Item 17 95 non-null object
17 Item 18 66 non-null object
18 Item 19 52 non-null object
19 Item 20 38 non-null object
20 Item 21 29 non-null object
21 Item 22 18 non-null object
22 Item 23 14 non-null object
23 Item 24 8 non-null object
24 Item 25 7 non-null object
25 Item 26 7 non-null object
26 Item 27 6 non-null object
27 Item 28 5 non-null object
28 Item 29 4 non-null object
29 Item 30 1 non-null object
30 Item 31 1 non-null object
31 Item 32 1 non-null object
dtypes: object(32)
memory usage: 2.4+ MB
###Markdown
Part A) Apriori
###Code
# Dataset records conversion from pandas dataframe to list
basket_records=[]
for x in range(0, len(data)):
basket_records.append([str(data.values[x,y])for y in range(0,len(data.columns))])
# Print basket_records type
print(type(basket_records))
print('\r')
# Print first list record
print('First basket list entry: \n',basket_records[0])
# Importing the apriori function from apyori. The apriori algorithm returns a generator of 'RelationRecord' objects
from apyori import apriori
# Support= num of times an item (or set of items) has been added to the basket / length of records
minimum_support=21/len(data)
# The number of times we want the apriori rule to be correct(i.e 0.25-->25%)
# Confidence = i.e number of times items 1 & 2 have been added together to the basket divided by number of times item 1
# has been added to the basket
minimum_confidence= 0.25
# Minimum lift to measure the quality/strength of the apriori rules
# lift= confidence / support
minimum_lift=3
apriori_rules=apriori(transactions=basket_records,
min_support=minimum_support,
min_confidence=minimum_confidence,
min_lift=minimum_lift,
min_length=2,
max_length=2)
# List of apriori rules
apriori_result=list(apriori_rules)
apriori_result
# First apriori rule
# If a customer buys 'Instant food Products' (items_base) then there is a 37.97% (confidence = 0.3797) chance that they will
# also buy 'hamburger meat' (items_add), and this rule appears in about 0.3% of all baskets (support = 0.00305)
apriori_result[0]
## Converting the apriori results to pandas dataframe
def apriori_res(apriori_result):
left_hand_side_str= [tuple(x[2][0][0])[0] for x in apriori_result]
right_hand_side_str= [tuple(x[2][0][1])[0] for x in apriori_result]
support= [x[1] for x in apriori_result]
confidence= [x[2][0][2] for x in apriori_result]
lift= [x[2][0][3] for x in apriori_result]
return list(zip(left_hand_side_str,right_hand_side_str, support, confidence, lift))
df_res = pd.DataFrame(apriori_res(apriori_result), columns = ['L-Hand Side', 'R-Hand Side', 'Rule_Support',
'Rule_Confidence', 'Rule_Lift'])
## Apriori results in pandas DataFrame
df_res
## Top 5 Apriori results sorted by 'Rule_Lift' column (descending)
df_res.nlargest(n = 5, columns = 'Rule_Lift')
## Top 5 Apriori results sorted by 'Rule_Confidence' column (descending)
df_res.nlargest(n = 5, columns = 'Rule_Confidence')
###Output
_____no_output_____
###Markdown
Part B: Eclat
###Code
# Eclat analysis is a simpler version of Apriori as it is based on just the support and not on confidence and lift
# Support = number of times a set of items has been added to the basket / total number of records
minimum_support=21/len(data)
eclat=apriori(transactions=basket_records,
min_support=minimum_support,
min_length=2,
max_length=2)
# List of eclat rules
eclat_result=list(eclat)
## Converting the Eclat results to pandas dataframe
def eclat_res(eclat_result):
First_Item_Set= [tuple(x[2][1][0])[0] for x in eclat_result if len(x[0])>1]
Second_Item_Set= [tuple(x[2][1][1])[0]for x in eclat_result if len(x[0])>1]
support= [x[1]for x in eclat_result if len(x[0])>1 ]
return list(zip(First_Item_Set,Second_Item_Set, support))
eclat_res = pd.DataFrame(eclat_res(eclat_result), columns = ['First_Item_Set', 'Second_Item_Set', 'Support_Set'])
## Top 5 Eclat results sorted by 'Support_Set' column (descending)
eclat_res=eclat_res[(eclat_res['First_Item_Set']!='nan')&(eclat_res['Second_Item_Set']!='nan')]
eclat_res.nlargest(n = 5, columns = 'Support_Set')
###Output
_____no_output_____ |
Morphological.ipynb | ###Markdown
###Code
import numpy as np
import matplotlib.pyplot as plt
def applyPadding(array):
    # Zero-pad the image with a 2-pixel border on every side (required by the 5x5 kernel used below)
    new = np.zeros((array.shape[0]+4, array.shape[1]+4, array.shape[2]), dtype=array.dtype)
    for s in range(array.shape[2]):
        for i in range(array.shape[0]):
            for j in range(array.shape[1]):
                new[i+2, j+2, s] = array[i, j, s]
    return new
def transformed(matrix, option):
    # Morphological transform with a 5x5 structuring element:
    #   option == 1 -> dilation (pixel is set if ANY value under the kernel is non-zero)
    #   option == 2 -> erosion  (pixel is set if ALL values under the kernel are non-zero)
    kernel = np.zeros((5, 5, 1), dtype=matrix.dtype)
    filtered = np.zeros((matrix.shape[0], matrix.shape[1], matrix.shape[2]), dtype=matrix.dtype)
    padded = applyPadding(matrix)
    for s in range(matrix.shape[2]):  # loop over every channel instead of assuming exactly 4
        for i in range(matrix.shape[0]):
            for j in range(matrix.shape[1]):
                # copy the 5x5 neighbourhood of pixel (i, j) in channel s
                for x in range(5):
                    for y in range(5):
                        kernel[y, x] = padded[i+y, j+x, s]
                if np.any(kernel) and option == 1:
                    filtered[i, j, s] = 1
                elif np.all(kernel) and option == 2:
                    filtered[i, j, s] = 1
                else:
                    filtered[i, j, s] = 0
    return filtered
image = plt.imread('morphological.png')
option = int(input('Enter your choice\n1)Dilation\n2)Erosion'))  # note: both results are computed and displayed below
# plt.imshow(image)
# plt.show()
filtered2 = transformed(image,2)
filtered1 = transformed(image,1)
# print(filtered1)
plt.imshow(filtered2)
plt.show()
plt.imshow(filtered1)
plt.show()
import numpy as np
array = [[1,1],[1,1]]
print(np.all(array))
###Output
_____no_output_____ |
notebooks/models/Offline_Part.ipynb | ###Markdown
Prepare environment
###Code
%%capture
import google.colab.drive
google.colab.drive.mount('/content/gdrive', force_remount=True)
import functools
import glob
import os
import numpy as np
import pandas as pd
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Prepare dataset
###Code
DATA_PATH = '/content/gdrive/My Drive/dataset/adressa/one_week'
features = {
'eventId': tf.io.FixedLenFeature([], tf.int64),
'clickLabel': tf.io.FixedLenFeature([], tf.int64),
'userActiveness': tf.io.FixedLenFeature([], tf.float32),
'categoryVector': tf.io.FixedLenFeature([30], tf.float32),
'newsClickCountVector': tf.io.FixedLenFeature([4], tf.float32),
'contextVector': tf.io.FixedLenFeature([32], tf.float32),
'userHistoryVector': tf.io.FixedLenFeature([30], tf.float32),
'userProfileVector': tf.io.FixedLenFeature([120], tf.float32),
'userClickCountVector': tf.io.FixedLenFeature([4], tf.float32),
'userHistoryVectorNext': tf.io.FixedLenFeature([30], tf.float32),
'userProfileVectorNext': tf.io.FixedLenFeature([120], tf.float32),
'userClickCountVectorNext': tf.io.FixedLenFeature([4], tf.float32),
}
def parse_example(serialized):
e = tf.io.parse_single_example(serialized, features)
return {
'event_id': e['eventId'],
'click_label': e['clickLabel'],
'user_activeness': e['userActiveness'],
'news_features': tf.concat([e['categoryVector'], tf.math.log(e['newsClickCountVector'] + 1.)], 0),
'user_features': tf.concat([e['userProfileVector'], tf.math.log(e['userClickCountVector'] + 1.)], 0),
'user_features_next': tf.concat([e['userProfileVectorNext'], tf.math.log(e['userClickCountVectorNext'] + 1.)], 0),
'user_news_features': tf.math.reduce_prod([e['categoryVector'], e['userHistoryVector']], axis=0),
'user_news_features_next': tf.math.reduce_prod([e['categoryVector'], e['userHistoryVectorNext']], axis=0),
'context_features': e['contextVector'],
}
def parse_inputs_targets(serialized):
e = tf.io.parse_single_example(serialized, features)
inputs = {
'news_features': tf.concat([e['categoryVector'], tf.math.log(e['newsClickCountVector'] + 1.)], 0),
'user_features': tf.concat([e['userProfileVector'], tf.math.log(e['userClickCountVector'] + 1.)], 0),
'user_news_features': tf.math.reduce_prod([e['categoryVector'], e['userHistoryVector']], axis=0),
'context_features': e['contextVector'],
}
targets = e['clickLabel']
return inputs, targets
def parse_inputs_targets_with_user_activeness(serialized, user_activeness_coef):
e = tf.io.parse_single_example(serialized, features)
inputs = {
'news_features': tf.concat([e['categoryVector'], tf.math.log(e['newsClickCountVector'] + 1.)], 0),
'user_features': tf.concat([e['userProfileVector'], tf.math.log(e['userClickCountVector'] + 1.)], 0),
'user_news_features': tf.math.reduce_prod([e['categoryVector'], e['userHistoryVector']], axis=0),
'context_features': e['contextVector'],
}
user_activeness_coef = tf.constant(user_activeness_coef, tf.float32)
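    # The regression target below mixes the immediate click label with the user-activeness signal,
    # weighted by user_activeness_coef; this parser is used for the DQN variant trained with user
    # activeness further below.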
targets = tf.dtypes.cast(e['clickLabel'], tf.float32) + user_activeness_coef * e['userActiveness']
return inputs, targets
def build_train_dataset(filepaths, batch_size, epochs, user_activeness_coef=None):
dataset = tf.data.TFRecordDataset(filepaths, 'GZIP')
if user_activeness_coef is None:
func = parse_inputs_targets
else:
func = functools.partial(parse_inputs_targets_with_user_activeness, user_activeness_coef)
dataset = (
dataset
.map(func)
.batch(batch_size)
.repeat(epochs)
.prefetch(1)
)
return dataset
batch_size = 1024
epochs = 1
filepaths = sorted(glob.glob(os.path.join(DATA_PATH, 'tfrecords', 'train', '*')))
train_dataset = build_train_dataset(filepaths, batch_size, epochs)
###Output
_____no_output_____
###Markdown
Define models
###Code
from tensorflow.keras.activations import relu, sigmoid
from tensorflow.keras.layers import Add, Concatenate, Dense, Dot, Input, Lambda, Subtract
from tensorflow.keras.losses import BinaryCrossentropy, MeanSquaredError
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
###Output
_____no_output_____
###Markdown
1. Logistic Regression
###Code
def build_lr(input_info):
inputs = [Input(shape=shape, name=name) for name, shape in input_info]
inputs_concat = Concatenate()(inputs)
outputs = Dense(1, activation=sigmoid)(inputs_concat)
model = Model(inputs=inputs, outputs=outputs)
model.compile(Adam(), loss=BinaryCrossentropy())
return model
###Output
_____no_output_____
###Markdown
2. Factorization Machines
###Code
def build_fm(input_info, k_latent):
inputs = [Input(shape=shape, name=name) for name, shape in input_info]
inputs_concat = Concatenate()(inputs)
inputs_flat = [Lambda(lambda x: x[:, i:i+1])(inputs_concat) for i in range(inputs_concat.shape[1].value)]
biases = [Dense(1)(x) for x in inputs_flat]
factors = [Dense(k_latent)(x) for x in inputs_flat]
s = Add()(factors)
diffs = [Subtract()([s, x]) for x in factors]
dots = [Dot(axes=1)([d, x]) for d, x in zip(diffs, factors)]
outputs = Add()(dots + biases)
outputs = Dense(1, activation=sigmoid)(outputs)
model = Model(inputs=inputs, outputs=outputs)
model.compile(Adam(), loss=BinaryCrossentropy())
return model
###Output
_____no_output_____
###Markdown
3. Wide & Deep
###Code
def build_wd(input_info):
inputs = [Input(shape=shape, name=name) for name, shape in input_info]
inputs_concat = Concatenate()(inputs)
wide = Concatenate()(inputs)
deep = Dense(256, activation=relu)(inputs_concat)
deep = Dense(128, activation=relu)(deep)
wide_deep = Concatenate()([wide, deep])
outputs = Dense(1, activation=sigmoid)(wide_deep)
model = Model(inputs=inputs, outputs=outputs)
model.compile(Adam(), loss=BinaryCrossentropy())
return model
###Output
_____no_output_____
###Markdown
4. DN
###Code
def build_dqn(input_info, state_indices):
inputs = [Input(shape=shape, name=name) for name, shape in input_info]
inputs_concat = Concatenate()(inputs)
value = Concatenate()([inputs[i] for i in state_indices])
value = Dense(256, activation=relu)(value)
value = Dense(128, activation=relu)(value)
value = Dense(1)(value)
advantage = Dense(256, activation=relu)(inputs_concat)
advantage = Dense(128, activation=relu)(advantage)
advantage = Dense(1)(advantage)
value_advantage = Concatenate()([value, advantage])
outputs = Dense(1)(value_advantage)
model = Model(inputs=inputs, outputs=outputs)
model.compile(Adam(), loss=MeanSquaredError())
return model
###Output
_____no_output_____
###Markdown
Train models
###Code
input_info = (
('news_features', (34,)),
('user_features', (124,)),
('user_news_features', (30,)),
('context_features', (32,)),
)
state_indices = (1, 3)
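# Indices into input_info above: 1 -> 'user_features', 3 -> 'context_features'. In build_dqn these two
# inputs feed the state-only value stream, while the advantage stream additionally sees the news
# (action) and user-news interaction features.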
###Output
_____no_output_____
###Markdown
1. Logistic Regression
###Code
lr = build_lr(input_info)
lr.fit(train_dataset)
lr.save_weights(os.path.join(DATA_PATH, 'model', 'lr_weights.h5'), overwrite=True)
###Output
_____no_output_____
###Markdown
2. Factorization Machines
###Code
fm = build_fm(input_info, k_latent=2)
fm.fit(train_dataset)
fm.save_weights(os.path.join(DATA_PATH, 'model', 'fm_weights.h5'), overwrite=True)
###Output
_____no_output_____
###Markdown
3. Wide & Deep
###Code
wd = build_wd(input_info)
wd.fit(train_dataset)
wd.save_weights(os.path.join(DATA_PATH, 'model', 'wd_weights.h5'), overwrite=True)
###Output
_____no_output_____
###Markdown
4. DN
###Code
dqn = build_dqn(input_info, state_indices)
dqn.fit(train_dataset)
dqn.save_weights(os.path.join(DATA_PATH, 'model', 'dqn_weights.h5'), overwrite=True)
user_activeness_coef = 0.05
filepaths = sorted(glob.glob(os.path.join(DATA_PATH, 'tfrecords', 'train', '*')))
train_dataset_with_user_activeness = build_train_dataset(filepaths, batch_size, epochs, user_activeness_coef)
dqnu = build_dqn(input_info, state_indices)
dqnu.fit(train_dataset_with_user_activeness)
dqnu.save_weights(os.path.join(DATA_PATH, 'model', 'dqnu_weights.h5'), overwrite=True)
###Output
_____no_output_____ |
9 Kaggles_Leet/dnn-ensemble (1).ipynb | ###Markdown
Model Ensemble > Ensemble of baseline work
###Code
import os
import pandas as pd
import numpy as np
import gc
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow import keras
import mlcrate as mlc
import pickle as pkl
from tensorflow.keras.layers import BatchNormalization
from keras.models import Sequential, Model
from keras.layers import Input, Embedding, Dense, Flatten, Concatenate, Dot, Reshape, Add, Subtract
from keras import backend as K
from keras import regularizers
from tensorflow.keras.optimizers import Adam
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.regularizers import l2
from sklearn.base import clone
from typing import Dict
import matplotlib.pyplot as plt
from scipy import stats
from tensorflow.keras.losses import Loss
from tensorflow.keras import backend as K
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import TimeSeriesSplit, StratifiedKFold, KFold, GroupKFold
from tqdm import tqdm
from tensorflow.python.ops import math_ops
###Output
_____no_output_____
###Markdown
1. Data Exploration & Feature Layers
###Code
%%time
n_features = 300
features = [f'f_{i}' for i in range(n_features)]
feature_columns = ['investment_id', 'time_id'] + features
train = pd.read_pickle('../input/ubiquant-market-prediction-half-precision-pickle/train.pkl')
train.head()
investment_id = train.pop("investment_id")
investment_id.head()
_ = train.pop("time_id")
y = train.pop("target")
y.head()
%%time
investment_ids = list(investment_id.unique())
investment_id_size = len(investment_ids) + 1
investment_id_lookup_layer = layers.IntegerLookup(max_tokens=investment_id_size)
with tf.device("cpu"):
investment_id_lookup_layer.adapt(investment_id)
investment_id2 = investment_id[~investment_id.isin([85, 905, 2558, 3662, 2800, 1415])]
investment_ids2 = list(investment_id2.unique())
investment_id_size2 = len(investment_ids2) + 1
investment_id_lookup_layer2 = layers.IntegerLookup(max_tokens=investment_id_size2)
investment_id_lookup_layer2.adapt(pd.DataFrame({"investment_ids":investment_ids}))
def preprocess(X, y):
print(X)
print(y)
return X, y
def make_dataset(feature, investment_id, y, batch_size=1024, mode="train"):
ds = tf.data.Dataset.from_tensor_slices(((investment_id, feature), y))
ds = ds.map(preprocess)
if mode == "train":
ds = ds.shuffle(256)
ds = ds.batch(batch_size).cache().prefetch(tf.data.experimental.AUTOTUNE)
return ds
###Output
_____no_output_____
###Markdown
feature_time_ds
###Code
def make_ft_dataset(investment_id, feature, time_id, y=None, batch_size=1024):
if y is not None:
slices = ((investment_id, feature, time_id), y)
else:
slices = ((investment_id, feature, time_id))
ds = tf.data.Dataset.from_tensor_slices(slices)
ds = ds.batch(batch_size).cache().prefetch(tf.data.experimental.AUTOTUNE)
return ds
###Output
_____no_output_____
###Markdown
2. DNN Architecture
###Code
def get_model():
investment_id_inputs = tf.keras.Input((1, ), dtype=tf.uint16)
features_inputs = tf.keras.Input((300, ), dtype=tf.float16)
investment_id_x = investment_id_lookup_layer(investment_id_inputs)
investment_id_x = layers.Embedding(investment_id_size, 32, input_length=1)(investment_id_x)
investment_id_x = layers.Reshape((-1, ))(investment_id_x)
investment_id_x = layers.Dense(64, activation='swish')(investment_id_x)
investment_id_x = layers.Dense(64, activation='swish')(investment_id_x)
investment_id_x = layers.Dense(64, activation='swish')(investment_id_x)
feature_x = layers.Dense(256, activation='swish')(features_inputs)
feature_x = layers.Dense(256, activation='swish')(feature_x)
feature_x = layers.Dense(256, activation='swish')(feature_x)
x = layers.Concatenate(axis=1)([investment_id_x, feature_x])
x = layers.Dense(512, activation='swish', kernel_regularizer="l2")(x)
x = layers.Dense(128, activation='swish', kernel_regularizer="l2")(x)
x = layers.Dense(32, activation='swish', kernel_regularizer="l2")(x)
output = layers.Dense(1)(x)
rmse = keras.metrics.RootMeanSquaredError(name="rmse")
model = tf.keras.Model(inputs=[investment_id_inputs, features_inputs], outputs=[output])
model.compile(optimizer=tf.optimizers.Adam(0.001), loss='mse', metrics=['mse', "mae", "mape", rmse])
return model
def get_model2():
investment_id_inputs = tf.keras.Input((1, ), dtype=tf.uint16)
features_inputs = tf.keras.Input((300, ), dtype=tf.float16)
investment_id_x = investment_id_lookup_layer(investment_id_inputs)
investment_id_x = layers.Embedding(investment_id_size, 32, input_length=1)(investment_id_x)
investment_id_x = layers.Reshape((-1, ))(investment_id_x)
investment_id_x = layers.Dense(64, activation='swish')(investment_id_x)
investment_id_x = layers.Dense(64, activation='swish')(investment_id_x)
investment_id_x = layers.Dense(64, activation='swish')(investment_id_x)
investment_id_x = layers.Dense(64, activation='swish')(investment_id_x)
# investment_id_x = layers.Dropout(0.65)(investment_id_x)
feature_x = layers.Dense(256, activation='swish')(features_inputs)
feature_x = layers.Dense(256, activation='swish')(feature_x)
feature_x = layers.Dense(256, activation='swish')(feature_x)
feature_x = layers.Dense(256, activation='swish')(feature_x)
feature_x = layers.Dropout(0.65)(feature_x)
x = layers.Concatenate(axis=1)([investment_id_x, feature_x])
x = layers.Dense(512, activation='swish', kernel_regularizer="l2")(x)
# x = layers.Dropout(0.2)(x)
x = layers.Dense(128, activation='swish', kernel_regularizer="l2")(x)
# x = layers.Dropout(0.4)(x)
x = layers.Dense(32, activation='swish', kernel_regularizer="l2")(x)
x = layers.Dense(32, activation='swish', kernel_regularizer="l2")(x)
x = layers.Dropout(0.75)(x)
output = layers.Dense(1)(x)
rmse = keras.metrics.RootMeanSquaredError(name="rmse")
model = tf.keras.Model(inputs=[investment_id_inputs, features_inputs], outputs=[output])
model.compile(optimizer=tf.optimizers.Adam(0.001), loss='mse', metrics=['mse', "mae", "mape", rmse])
return model
def get_model3():
investment_id_inputs = tf.keras.Input((1, ), dtype=tf.uint16)
features_inputs = tf.keras.Input((300, ), dtype=tf.float32)
investment_id_x = investment_id_lookup_layer(investment_id_inputs)
investment_id_x = layers.Embedding(investment_id_size, 32, input_length=1)(investment_id_x)
investment_id_x = layers.Reshape((-1, ))(investment_id_x)
investment_id_x = layers.Dense(64, activation='swish')(investment_id_x)
investment_id_x = layers.Dropout(0.5)(investment_id_x)
investment_id_x = layers.Dense(32, activation='swish')(investment_id_x)
investment_id_x = layers.Dropout(0.5)(investment_id_x)
#investment_id_x = layers.Dense(64, activation='swish')(investment_id_x)
feature_x = layers.Dense(256, activation='swish')(features_inputs)
feature_x = layers.Dropout(0.5)(feature_x)
feature_x = layers.Dense(128, activation='swish')(feature_x)
feature_x = layers.Dropout(0.5)(feature_x)
feature_x = layers.Dense(64, activation='swish')(feature_x)
x = layers.Concatenate(axis=1)([investment_id_x, feature_x])
x = layers.Dropout(0.5)(x)
x = layers.Dense(64, activation='swish', kernel_regularizer="l2")(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(32, activation='swish', kernel_regularizer="l2")(x)
x = layers.Dropout(0.5)(x)
x = layers.Dense(16, activation='swish', kernel_regularizer="l2")(x)
x = layers.Dropout(0.5)(x)
output = layers.Dense(1)(x)
output = tf.keras.layers.BatchNormalization(axis=1)(output)
rmse = keras.metrics.RootMeanSquaredError(name="rmse")
model = tf.keras.Model(inputs=[investment_id_inputs, features_inputs], outputs=[output])
model.compile(optimizer=tf.optimizers.Adam(0.001), loss='mse', metrics=['mse', "mae", "mape", rmse])
return model
def get_model5():
features_inputs = tf.keras.Input((300, ), dtype=tf.float16)
## feature ##
feature_x = layers.Dense(256, activation='swish')(features_inputs)
feature_x = layers.Dropout(0.1)(feature_x)
## convolution 1 ##
feature_x = layers.Reshape((-1,1))(feature_x)
feature_x = layers.Conv1D(filters=16, kernel_size=4, strides=1, padding='same')(feature_x)
feature_x = layers.BatchNormalization()(feature_x)
feature_x = layers.LeakyReLU()(feature_x)
## convolution 2 ##
feature_x = layers.Conv1D(filters=16, kernel_size=4, strides=4, padding='same')(feature_x)
feature_x = layers.BatchNormalization()(feature_x)
feature_x = layers.LeakyReLU()(feature_x)
## convolution 3 ##
feature_x = layers.Conv1D(filters=64, kernel_size=4, strides=1, padding='same')(feature_x)
feature_x = layers.BatchNormalization()(feature_x)
feature_x = layers.LeakyReLU()(feature_x)
## convolution 4 ##
feature_x = layers.Conv1D(filters=64, kernel_size=4, strides=4, padding='same')(feature_x)
feature_x = layers.BatchNormalization()(feature_x)
feature_x = layers.LeakyReLU()(feature_x)
## convolution 5 ##
feature_x = layers.Conv1D(filters=64, kernel_size=4, strides=2, padding='same')(feature_x)
feature_x = layers.BatchNormalization()(feature_x)
feature_x = layers.LeakyReLU()(feature_x)
## flatten ##
feature_x = layers.Flatten()(feature_x)
x = layers.Dense(512, activation='swish', kernel_regularizer="l2")(feature_x)
x = layers.Dropout(0.1)(x)
x = layers.Dense(128, activation='swish', kernel_regularizer="l2")(x)
x = layers.Dropout(0.1)(x)
x = layers.Dense(32, activation='swish', kernel_regularizer="l2")(x)
x = layers.Dropout(0.1)(x)
output = layers.Dense(1)(x)
rmse = keras.metrics.RootMeanSquaredError(name="rmse")
model = tf.keras.Model(inputs=[features_inputs], outputs=[output])
model.compile(optimizer=tf.optimizers.Adam(0.001), loss='mse', metrics=['mse', "mae", "mape", rmse])
return model
del train
del investment_id
del y
gc.collect()
###Output
_____no_output_____
###Markdown
2.0 Model_Dropout_10_RMSE
###Code
def get_model_dr04():
features_inputs = tf.keras.Input((300, ), dtype=tf.float32)
feature_x = layers.Dense(256, activation='swish')(features_inputs)
feature_x = layers.Dropout(0.4)(feature_x)
feature_x = layers.Dense(128, activation='swish')(feature_x)
feature_x = layers.Dropout(0.4)(feature_x)
feature_x = layers.Dense(64, activation='swish')(feature_x)
x = layers.Concatenate(axis=1)([feature_x])
x = layers.Dropout(0.4)(x)
x = layers.Dense(64, activation='swish', kernel_regularizer="l2")(x)
x = layers.Dropout(0.4)(x)
x = layers.Dense(32, activation='swish', kernel_regularizer="l2")(x)
x = layers.Dropout(0.4)(x)
x = layers.Dense(16, activation='swish', kernel_regularizer="l2")(x)
x = layers.Dropout(0.4)(x)
output = layers.Dense(1)(x)
output = tf.keras.layers.BatchNormalization(axis=1)(output)
rmse = keras.metrics.RootMeanSquaredError(name="rmse")
model = tf.keras.Model(inputs=[features_inputs], outputs=[output])
model.compile(optimizer=tf.optimizers.Adam(0.001), loss = correlationLoss, metrics=[correlationMetric])
return model
dr=0.3
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
print("Name:", gpu.name, " Type:", gpu.device_type)
n_features = 300
features = [f'f_{i}' for i in range(n_features)]
# def preprocess(X, y):
# return X, y
# def make_dataset(feature, y, batch_size=1024, mode="train"):
# ds = tf.data.Dataset.from_tensor_slices((feature, y))
# ds = ds.map(preprocess)
# if mode == "train":
# ds = ds.shuffle(512)
# # ds = ds.batch(batch_size).cache().prefetch(tf.data.experimental.AUTOTUNE)
# ds = ds.batch(batch_size).prefetch(tf.data.experimental.AUTOTUNE)
# return ds
def correlationMetric(x, y, axis=-2):
"""Metric returning the Pearson correlation coefficient of two tensors over some axis, default -2."""
x = tf.convert_to_tensor(x)
y = math_ops.cast(y, x.dtype)
n = tf.cast(tf.shape(x)[axis], x.dtype)
xsum = tf.reduce_sum(x, axis=axis)
ysum = tf.reduce_sum(y, axis=axis)
xmean = xsum / n
ymean = ysum / n
xvar = tf.reduce_sum( tf.math.squared_difference(x, xmean), axis=axis)
yvar = tf.reduce_sum( tf.math.squared_difference(y, ymean), axis=axis)
cov = tf.reduce_sum( (x - xmean) * (y - ymean), axis=axis)
corr = cov / tf.sqrt(xvar * yvar)
return tf.constant(1.0, dtype=x.dtype) - corr
def correlationLoss(x,y, axis=-2):
"""Loss function that maximizes the pearson correlation coefficient between the predicted values and the labels,
while trying to have the same mean and variance"""
x = tf.convert_to_tensor(x)
y = math_ops.cast(y, x.dtype)
n = tf.cast(tf.shape(x)[axis], x.dtype)
xsum = tf.reduce_sum(x, axis=axis)
ysum = tf.reduce_sum(y, axis=axis)
xmean = xsum / n
ymean = ysum / n
xsqsum = tf.reduce_sum( tf.math.squared_difference(x, xmean), axis=axis)
ysqsum = tf.reduce_sum( tf.math.squared_difference(y, ymean), axis=axis)
cov = tf.reduce_sum( (x - xmean) * (y - ymean), axis=axis)
corr = cov / tf.sqrt(xsqsum * ysqsum)
return tf.convert_to_tensor( K.mean(tf.constant(1.0, dtype=x.dtype) - corr ) , dtype=tf.float32 )
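# Quick sanity check (illustrative): two perfectly correlated vectors should give a correlation
# loss very close to 0, since the loss is 1 minus the Pearson correlation.
_x = tf.constant([[1.0], [2.0], [3.0]])
_y = tf.constant([[2.0], [4.0], [6.0]])
print(float(correlationLoss(_x, _y)))  # expected ~0.0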
def correlationMetric_01mse(x, y, axis=-2):
"""Metric returning the Pearson correlation coefficient of two tensors over some axis, default -2."""
x = tf.convert_to_tensor(x)
y = math_ops.cast(y, x.dtype)
n = tf.cast(tf.shape(x)[axis], x.dtype)
xsum = tf.reduce_sum(x, axis=axis)
ysum = tf.reduce_sum(y, axis=axis)
xmean = xsum / n
ymean = ysum / n
xvar = tf.reduce_sum( tf.math.squared_difference(x, xmean), axis=axis)
yvar = tf.reduce_sum( tf.math.squared_difference(y, ymean), axis=axis)
cov = tf.reduce_sum( (x - xmean) * (y - ymean), axis=axis)
corr = cov / tf.sqrt(xvar * yvar)
return tf.constant(1.0, dtype=x.dtype) - corr
gc.collect()
# list(GroupKFold(5).split(train , groups = train.index))[0]
def pearson_coef(data):
return data.corr()['target']['preds']
def evaluate_metric(valid_df):
    return np.mean(valid_df[['time_id', 'target', 'preds']].groupby('time_id').apply(pearson_coef))
def get_model_best(ft_units, x_units, x_dropout):
investment_id_inputs = tf.keras.Input((1, ), dtype=tf.uint16)
features_inputs = tf.keras.Input((300, ), dtype=tf.float16)
investment_id_x = investment_id_lookup_layer2(investment_id_inputs)
investment_id_x = layers.Embedding(investment_id_size2, 32, input_length=1)(investment_id_x)
investment_id_x = layers.Reshape((-1, ))(investment_id_x)
investment_id_x = layers.Dense(128, activation='swish')(investment_id_x)
investment_id_x = layers.Dense(128, activation='swish')(investment_id_x)
investment_id_x = layers.Dense(128, activation='swish')(investment_id_x)
feature_x = layers.Dense(256, activation='swish')(features_inputs)
for hu in ft_units:
feature_x = layers.Dense(hu, activation='swish')(feature_x)
x = layers.Concatenate(axis=1)([investment_id_x, feature_x])
for i in range(len(x_units)):
x = tf.keras.layers.Dense(x_units[i], kernel_regularizer="l2")(x) #v8
x = tf.keras.layers.BatchNormalization()(x) #v7
x = tf.keras.layers.Activation('swish')(x) #v7
x = tf.keras.layers.Dropout(x_dropout[i])(x) #v8
output = layers.Dense(1)(x)
rmse = keras.metrics.RootMeanSquaredError(name="rmse")
model = tf.keras.Model(inputs=[investment_id_inputs, features_inputs], outputs=[output])
model.compile(optimizer=tf.optimizers.Adam(0.0001), loss='mse', metrics=['mse', "mae", "mape", rmse])
return model
params = {
'ft_units': [256,256],
'x_units': [512, 256, 128, 32],
'x_dropout': [0.4, 0.3, 0.2, 0.1]
# 'lr':1e-3,
}
models_best = []
scores = []
for i in range(7):
model = get_model_best(**params)
model.load_weights(f"../input/wmodels/best/model_{i}.tf")
models_best.append(model)
###Output
_____no_output_____
###Markdown
2.1 Augment Model: Gaussian_Conv1 + Conv2d Model: Account for Spatial/Area Relationship
###Code
def get_model6():
features_inputs = tf.keras.Input((300, ), dtype=tf.float16)
features_x = layers.GaussianNoise(0.1)(features_inputs)
## feature ##
feature_x = layers.Dense(256, activation='swish')(features_x)
feature_x = layers.Dropout(0.1)(feature_x)
## convolution 1 ##
feature_x = layers.Reshape((-1,1))(feature_x)
feature_x = layers.Conv1D(filters=16, kernel_size=4, strides=1, padding='same')(feature_x)
feature_x = layers.BatchNormalization()(feature_x)
feature_x = layers.LeakyReLU()(feature_x)
## convolution 2 ##
feature_x = layers.Conv1D(filters=16, kernel_size=4, strides=4, padding='same')(feature_x)
feature_x = layers.BatchNormalization()(feature_x)
feature_x = layers.LeakyReLU()(feature_x)
## convolution 3 ##
feature_x = layers.Conv1D(filters=64, kernel_size=4, strides=1, padding='same')(feature_x)
feature_x = layers.BatchNormalization()(feature_x)
feature_x = layers.LeakyReLU()(feature_x)
## convolution 4 ##
feature_x = layers.Conv1D(filters=64, kernel_size=4, strides=4, padding='same')(feature_x)
feature_x = layers.BatchNormalization()(feature_x)
feature_x = layers.LeakyReLU()(feature_x)
## convolution 5 ##
feature_x = layers.Conv1D(filters=64, kernel_size=4, strides=2, padding='same')(feature_x)
feature_x = layers.BatchNormalization()(feature_x)
feature_x = layers.LeakyReLU()(feature_x)
## flatten ##
feature_x = layers.Flatten()(feature_x)
x = layers.Dense(512, activation='swish', kernel_regularizer="l2")(feature_x)
x = layers.Dropout(0.1)(x)
x = layers.Dense(128, activation='swish', kernel_regularizer="l2")(x)
x = layers.Dropout(0.1)(x)
x = layers.Dense(32, activation='swish', kernel_regularizer="l2")(x)
x = layers.Dropout(0.1)(x)
output = layers.Dense(1)(x)
rmse = keras.metrics.RootMeanSquaredError(name="rmse")
model = tf.keras.Model(inputs=[features_inputs], outputs=[output])
model.compile(optimizer=tf.optimizers.Adam(0.001), loss='mse', metrics=['mse', "mae", "mape", rmse])
return model
def get_model7():
features_inputs = tf.keras.Input((300, ), dtype=tf.float16)
## Dense 1 ##
feature_x = layers.Dense(256, activation='swish')(features_inputs)
feature_x = layers.Dropout(0.1)(feature_x)
## convolution 1 ##
feature_x = layers.Reshape((-1,1))(feature_x)
feature_x = layers.Conv1D(filters=16, kernel_size=4, strides=1, padding='same')(feature_x)
feature_x = layers.BatchNormalization()(feature_x)
feature_x = layers.LeakyReLU()(feature_x)
## convolution 2 ##
feature_x = layers.Conv1D(filters=16, kernel_size=4, strides=4, padding='same')(feature_x)
feature_x = layers.BatchNormalization()(feature_x)
feature_x = layers.LeakyReLU()(feature_x)
## convolution 3 ##
feature_x = layers.Conv1D(filters=64, kernel_size=4, strides=1, padding='same')(feature_x)
feature_x = layers.BatchNormalization()(feature_x)
feature_x = layers.LeakyReLU()(feature_x)
## convolution2D 1 ##
feature_x = layers.Reshape((64,64,1))(feature_x)
feature_x = layers.Conv2D(filters=32, kernel_size=4, strides=1, padding='same')(feature_x)
feature_x = layers.BatchNormalization()(feature_x)
feature_x = layers.LeakyReLU()(feature_x)
## convolution2D 2 ##
feature_x = layers.Conv2D(filters=32, kernel_size=4, strides=4, padding='same')(feature_x)
feature_x = layers.BatchNormalization()(feature_x)
feature_x = layers.LeakyReLU()(feature_x)
## convolution2D 3 ##
feature_x = layers.Conv2D(filters=32, kernel_size=4, strides=4, padding='same')(feature_x)
feature_x = layers.BatchNormalization()(feature_x)
feature_x = layers.LeakyReLU()(feature_x)
## flatten ##
feature_x = layers.Flatten()(feature_x)
## Dense 3 ##
x = layers.Dense(512, activation='swish', kernel_regularizer="l2")(feature_x)
## Dense 4 ##
x = layers.Dropout(0.1)(x)
## Dense 5 ##
x = layers.Dense(128, activation='swish', kernel_regularizer="l2")(x)
x = layers.Dropout(0.1)(x)
## Dense 6 ##
x = layers.Dense(32, activation='swish', kernel_regularizer="l2")(x)
x = layers.Dropout(0.1)(x)
## Dense 7 ##
output = layers.Dense(1)(x)
rmse = keras.metrics.RootMeanSquaredError(name="rmse")
model = tf.keras.Model(inputs=[features_inputs], outputs=[output])
model.compile(optimizer=tf.optimizers.Adam(0.001), loss='mse', metrics=['mse', "mae", "mape", rmse])
return model
###Output
_____no_output_____
###Markdown
2.2 Augment2: Feature_Time Model
###Code
def get_model_ft():
investment_id_input = tf.keras.Input(shape=(1,), dtype=tf.uint16, name='investment_id')
inv_x = layers.Dense(64, activation='relu')(investment_id_input)
inv_x = layers.Dropout(0.2)(inv_x)
features_input = tf.keras.Input(shape=(300,), dtype=tf.float16, name='features')
f_x = layers.Dense(512, activation='relu')(features_input)
f_x = layers.Dropout(0.25)(f_x)
f_x = layers.Dense(256, activation='relu')(f_x)
f_x = layers.Dropout(0.2)(f_x)
time_id_input = tf.keras.Input(shape=(1,), dtype=tf.uint16, name='time_id')
time_x = layers.Dense(64, activation='relu')(time_id_input)
time_x = layers.Dropout(0.2)(time_x)
concatenated = layers.concatenate([inv_x, f_x, time_x], axis=-1)
output = layers.Dense(1)(concatenated)
model = tf.keras.models.Model([investment_id_input, features_input, time_id_input], output, name='model_with_time_id')
model.compile(optimizer='rmsprop', loss='mse', metrics=['mse', 'mae', 'mape'])
return model
gc.collect()
model_ft = get_model_ft()
model_ft.summary()
models = []
for i in range(5):
model = get_model()
model.load_weights(f'../input/dnn-base/model_{i}')
models.append(model)
for i in range(10):
model = get_model2()
model.load_weights(f'../input/train-dnn-v2-10fold/model_{i}')
models.append(model)
for i in range(10):
model = get_model3()
model.load_weights(f'../input/dnnmodelnew/model_{i}')
models.append(model)
models2 = []
for i in range(5):
model = get_model5()
model.load_weights(f'../input/prediction-including-spatial-info-with-conv1d/model_{i}.tf')
models2.append(model)
for i in range(5):
model = get_model6()
model.load_weights(f'../input/gaussian-noise-model-weights/model_{i}.tf')
models2.append(model)
for i in range(5):
model = get_model7()
model.load_weights(f'../input/conv2d-model-weights/model_{i}.tf')
models2.append(model)
models3 = []
for i in range(10):
model = get_model_dr04()
model.load_weights(f'../input/mse10-model-weights/model_{i}')
models3.append(model)
model_ft = get_model_ft()
model_ft.load_weights(f'../input/feature-time-model/ns_model_with_time_id.tf')
def get_model_corr(ft_units, x_units, x_dropout):
# investment_id
investment_id_inputs = tf.keras.Input((1, ), dtype=tf.uint16)
investment_id_x = investment_id_lookup_layer(investment_id_inputs)
investment_id_x = layers.Embedding(investment_id_size, 32, input_length=1)(investment_id_x)
investment_id_x = layers.Reshape((-1, ))(investment_id_x)
investment_id_x = layers.Dense(128, activation='swish')(investment_id_x)
investment_id_x = layers.Dense(128, activation='swish')(investment_id_x)
investment_id_x = layers.Dense(128, activation='swish')(investment_id_x)
# features_inputs
features_inputs = tf.keras.Input((300, ), dtype=tf.float16)
bn = tf.keras.layers.BatchNormalization()(features_inputs)
gn = tf.keras.layers.GaussianNoise(0.035)(bn)
feature_x = layers.Dense(300, activation='swish')(gn)
feature_x = tf.keras.layers.Dropout(0.5)(feature_x)
for hu in ft_units:
feature_x = layers.Dense(hu, activation='swish')(feature_x)
# feature_x = tf.keras.layers.Activation('swish')(feature_x)
feature_x = tf.keras.layers.Dropout(0.35)(feature_x)
x = layers.Concatenate(axis=1)([investment_id_x, feature_x])
for i in range(len(x_units)):
x = tf.keras.layers.Dense(x_units[i], kernel_regularizer="l2")(x)
x = tf.keras.layers.Activation('swish')(x)
x = tf.keras.layers.Dropout(x_dropout[i])(x)
output = layers.Dense(1)(x)
model = tf.keras.Model(inputs=[investment_id_inputs, features_inputs], outputs=[output])
model.compile(optimizer=tf.optimizers.Adam(0.0001), loss=correlationLoss,
                  metrics=['mse', "mae", correlationMetric])
return model
params = {
# 'num_columns': len(features),
'ft_units': [150, 75, 150 ,200],
'x_units': [512, 256, 128, 32],
'x_dropout': [0.44, 0.4, 0.33, 0.2] #4, 3, 2, 1
# 'lr':1e-3,
}
###Output
_____no_output_____
###Markdown
3. Validation
###Code
def preprocess_test(investment_id, feature):
return (investment_id, feature), 0
def preprocess_test_s(feature):
return (feature), 0
def make_test_dataset(feature, investment_id, batch_size=1024):
ds = tf.data.Dataset.from_tensor_slices(((investment_id, feature)))
ds = ds.map(preprocess_test)
ds = ds.batch(batch_size).cache().prefetch(tf.data.experimental.AUTOTUNE)
return ds
def make_test_dataset2(feature, batch_size=1024):
ds = tf.data.Dataset.from_tensor_slices(((feature)))
ds = ds.batch(batch_size).cache().prefetch(tf.data.AUTOTUNE)
return ds
def inference(models, ds):
y_preds = []
for model in models:
y_pred = model.predict(ds)
y_preds.append(y_pred)
return np.mean(y_preds, axis=0)
from sklearn.decomposition import PCA
pca = PCA(n_components=1)
def pca_inference(models, ds):
y_preds = []
for model in models:
y_pred = model.predict(ds)
y_preds.append(y_pred)
res = np.hstack(y_preds)
print(len(res))
if len(res)>1:
res = pca.fit_transform(res)
else:
res = np.mean(res, axis=1)
return res
def make_test_dataset3(feature, batch_size=1024):
ds = tf.data.Dataset.from_tensor_slices((feature))
ds = ds.map(preprocess_test_s)
ds = ds.batch(batch_size).cache().prefetch(tf.data.experimental.AUTOTUNE)
return ds
def infer(models, ds):
y_preds = []
for model in models:
y_pred = model.predict(ds)
y_preds.append((y_pred-y_pred.mean())/y_pred.std())
return np.mean(y_preds, axis=0)
import ubiquant
env = ubiquant.make_env()
iter_test = env.iter_test()
for (test_df, sample_prediction_df) in iter_test:
ds = make_test_dataset(test_df[features], test_df["investment_id"])
p1 = inference(models, ds)
ds2 = make_test_dataset2(test_df[features])
p2 = inference(models2, ds2)
ds3 = make_test_dataset3(test_df[features])
p3 = infer(models3, ds3)
p4 = inference(models_best, ds)
# feature_time_augment
test_time_id = test_df['row_id'].str.split('_', expand=True).get(key=0).astype(int)
ds5 = make_ft_dataset(investment_id=test_df['investment_id'], feature=test_df[features], time_id=test_time_id)
p5 = model_ft.predict([test_df['investment_id'], test_df[features], test_time_id])[:, 0]
sample_prediction_df['target'] = p1 * 0.18 + p2 * 0.57 + p3 * 0.1 + p5 * 0.15
env.predict(sample_prediction_df)
display(sample_prediction_df)
###Output
_____no_output_____ |
notebooks/herdImmunity.ipynb | ###Markdown
Herd immunity calculator

**Philip Machanick**
*17 April 2020*

Assumes a fixed value of $R_0$ and $E=1$, i.e., immunity is 100% post-recovery (or post-vaccination if a vaccine exists). The threshold is calculated as

$$P_{herd} = 1 - \frac{1}{R_{0}}$$

where $R_0$ is the basic reproduction ratio, i.e., the mean number of new infections per infected person.

To run the example graphing exponential growth in the US as of 20 March 2020, choose Open… in the File menu and open the notebook "US worst case.ipynb".

**Reference**
Paul Fine, Ken Eames, and David L Heymann. "Herd immunity": a rough guide. *Clinical Infectious Diseases*, 52(7):911–916, 2011
###Code
%%capture
import os
owd = os.getcwd()
os.chdir('../')
%run setup.py install
os.chdir(owd)
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
# set up parameters
R0min = 1
R0max = 10
R0stepsize = 0.01
E = 1; # effectiveness of vaccine: 1 = 100%
R0vals = np.arange (R0min, R0max, R0stepsize)
def herdThreshold (R0):
return 1-1/R0;
herdThresholdVals = herdThreshold (R0vals)
# uncomment to see values
# print (herdThresholdVals);
print(herdThreshold (2.5));
print(herdThreshold (4));
# https://stackoverflow.com/questions/5306756/how-to-print-a-percentage-value-in-python
print ('{:.0%}'.format(herdThreshold (1.3)))
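# Illustrative extension (not used in the plots below): if a vaccine is only E effective (0 < E <= 1),
# the required vaccination coverage rises to (1 - 1/R0) / E.
def herdThresholdWithEffectiveness(R0, E):
    return (1 - 1/R0) / E
print('{:.0%}'.format(herdThresholdWithEffectiveness(2.5, 0.9)))  # ~67% coverage at 90% effectiveness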
plt.autoscale(enable=True, axis='x', tight=True)
plt.rcParams.update({'font.size': 18})
plt.plot(R0vals, herdThresholdVals, '-', color="#348ABD", label='$Herd Immunity$', lw=4)
plt.xlabel('$R_0$')
plt.ylabel('Herd Immunity')
# https://stackoverflow.com/questions/31357611/format-y-axis-as-percent
plt.gca().set_yticklabels(['{:.0f}%'.format(x*100) for x in plt.gca().get_yticks()])
# https://matplotlib.org/tutorials/text/annotations.html
# https://stackoverflow.com/questions/16948256/cannot-concatenate-str-and-float-objects
plt.annotate('flu $R_0=1.3$, herd = ' + '{:.0%}'.format(herdThreshold(1.3)),xy=(1.3,herdThreshold(1.3)), xycoords='data',xytext=(0.8, 0.25), textcoords='axes fraction',
arrowprops=dict(facecolor='red', shrink=0.05),
horizontalalignment='right', verticalalignment='top')
plt.annotate('$R_0=2.5$, herd = ' + '{:.0%}'.format(herdThreshold(2.5)),xy=(2.5,herdThreshold(2.5)), xycoords='data',xytext=(0.8, 0.45), textcoords='axes fraction',
arrowprops=dict(facecolor='green', shrink=0.05),
horizontalalignment='right', verticalalignment='top')
#https://stackoverflow.com/questions/21288062/second-y-axis-label-getting-cut-off
#plt.savefig("/Users/philip/Desktop/herdimmunity.pdf", bbox_inches='tight');
###Output
_____no_output_____ |
MNIST_3.ipynb | ###Markdown
Load data and split it into train and test set
###Code
digits = pd.read_csv('Data/train.csv')
X = digits.drop('label', axis=1)
y = digits['label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print('Train set shape :', X_train.shape)
print('Test set shape :', X_test.shape)
###Output
Train set shape : (33600, 784)
Test set shape : (8400, 784)
###Markdown
The distribution of digits in the dataset.
###Code
y_count = y.value_counts()
y_pct = y_count/len(y)*100
y_pct.round(2)
###Output
_____no_output_____
###Markdown
It's quite balanced, not a perfect 10% for every digit, but that's fine.
###Code
digits = X.sample(30)
multidigits_plot(digits)
###Output
_____no_output_____
###Markdown
Principal Components Analysis. PCA computes the eigenvectors that explain the largest amount of variance of the data in its original basis of vectors (the original features). It can be used to reduce the dimensionality considerably while preserving most of the information contained in the original data.
###Code
from sklearn.decomposition import PCA
pca = PCA()
pca.fit(X_train)
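# Quick check: how many components are needed to retain 95% of the variance?
# (The PCA(0.95) model fitted further below answers the same question.)
cum_var = pca.explained_variance_ratio_.cumsum()
n_components_95 = int((cum_var < 0.95).sum()) + 1
print('Components needed for 95% of the variance:', n_components_95)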
# Get X_train in the base of pca
X_train_pca = pca.transform(X_train)
# Do the inverse transform after having selected only the first i rows
n_first_components = [10, 20, 50, 100, 150, 200, 500, 784]
n_len = len(n_first_components)
fig, ax = plt.subplots(1,n_len)
for i, ax in zip(n_first_components, ax):
# select a copy of a digit from X_train_pca
digit_pca = X_train_pca[0].copy()
digit_pca[i:] = 0
digit_reformed = pca.inverse_transform(digit_pca)
#plot the digit
digit_plot(digit_reformed, ax=ax)
ax.set_title(str(i)+' components')
fig.set_size_inches(15,5)
###Output
_____no_output_____
###Markdown
We see that PCA is very effective at reducing the number of features while still retaining enough information to be able to recognize the value of the digit.
###Code
digits = []
for i in range(10):
i_digits = X_train_pca[y_train==i, :]
i_digit = i_digits[0]
digits.append(i_digit)
n_first_components = [10, 20, 50, 100, 150, 200, 500, 784]
n_digits = len(digits)
n_len = len(n_first_components)
fig, ax = plt.subplots(n_digits, n_len)
first_row = True
for digit, ax in zip(digits, ax):
for i, ax in zip(n_first_components, ax):
# select a copy of a digit from X_train_pca
digit_pca = digit.copy()
digit_pca[i:] = 0
digit_reformed = pca.inverse_transform(digit_pca)
#plot the digit
digit_plot(digit_reformed, ax=ax)
# write number of components if first row
if first_row:
ax.set_title('n='+str(i))
first_row = False
plt.suptitle('Digits using n first components', fontsize=20)
fig.set_size_inches(15,15)
###Output
_____no_output_____
###Markdown
We see that a sweet spot for the number of components is around 50-100; anything above 100 doesn't bring much more useful information. We can also choose the number of components to keep by imposing a minimum threshold on the retained variance, 95% for example.
###Code
pca_95 = PCA(0.95, svd_solver='full')
pca_95.fit(X_train)
# Get X_train in the base of pca
digit = X_train.loc[0,:]
digit_pca_95 = pca_95.transform(digit.values.reshape(1,-1))
digit_95 = pca_95.inverse_transform(digit_pca_95)
n_components = len(pca_95.components_)
fig, ax = plt.subplots(1,2)
digit_plot(digit, ax=ax[0])
ax[0].set_title('Original')
digit_plot(digit_95, ax=ax[1])
ax[1].set_title('95 % of variance ({} components)'.format(n_components));
digits = []
for i in range(10):
i_digits = X_train.loc[y_train==i, :]
i_digit = i_digits.sample(1)
digits.append(i_digit.values)
n_digits = len(digits)
fig, ax = plt.subplots(n_digits, 2)
first_row = True
for digit, ax in zip(digits, ax):
digit_pca_95 = pca_95.transform(digit.reshape(1,-1))
digit_95 = pca_95.inverse_transform(digit_pca_95)
digit_plot(digit, ax=ax[0])
digit_plot(digit_95, ax=ax[1])
if first_row:
ax[0].set_title('Original')
ax[1].set_title('95 % of variance ({} components)'.format(n_components))
first_row = False
#plt.suptitle('Digits using n first components', fontsize=20)
fig.set_size_inches(15,15)
###Output
_____no_output_____
###Markdown
Let's see now what the first components look like when reshaped into an image.
###Code
components = pca.components_
n_first_components = list(range(10))
n_len = len(n_first_components)
fig, ax = plt.subplots(1,n_len)
for i, ax in zip(n_first_components, ax):
component = components[i]
#plot the component
digit_plot(component, ax=ax)
ax.set_title(str(i+1))
plt.suptitle('10 first components', fontsize=20)
fig.set_size_inches(15,5)
###Output
_____no_output_____
###Markdown
2D representation. Let's plot the different digits in 2 dimensions, using the first two components.
###Code
pca_2 = PCA(2)
pca_2.fit(X_train)
# select 100 random digits and project them on the 2 first principal components
digits_idx = X_train.sample(500).index
X_2 = pca_2.transform(X_train.loc[digits_idx, :])
targets = y_train.loc[digits_idx]
plt.figure(figsize=(15,15))
plt.xlim(X_2[:,0].min(), X_2[:,0].max()+100)
plt.ylim(X_2[:,1].min(), X_2[:,1].max()+100)
colors = {0:'C0', 1:'C1', 2:'C2', 3:'C3', 4:'C4', 5:'C5', 6:'C6', 7:'C7', 8:'C8', 9:'C9'}
for x, y, target in zip(X_2[:,0], X_2[:,1], targets.values):
plt.text(x, y, str(target), color=colors[target], fontdict={'weight':'bold', "size":16})
plt.xlabel('First component', fontsize=16)
plt.ylabel('Second component', fontsize=16)
plt.title('Digits in 2D space using PCA', fontsize=20);
###Output
_____no_output_____
###Markdown
This representation in 2D space lets us see which digits are the most similar, and so the most prone to confusion (when using only the first two components). We see that the **2**, **3**, **6** and **8** overlap a lot, while the **1** and **0** are the most distinct. The **7**, **9** and **4** also overlap a lot. t-SNE. Another method to represent our data in a 2D space is t-SNE, a manifold learning method. It finds a new 2D embedding of the data in which similar points are grouped together.
###Code
from sklearn.manifold import TSNE
tsne = TSNE(random_state=42)
digits_tsne = tsne.fit_transform(X_train.loc[digits_idx,:])
plt.figure(figsize=(15,15))
plt.xlim(digits_tsne[:,0].min(), digits_tsne[:,0].max()+2)
plt.ylim(digits_tsne[:,1].min(), digits_tsne[:,1].max()+2)
colors = {0:'C0', 1:'C1', 2:'C2', 3:'C3', 4:'C4', 5:'C5', 6:'C6', 7:'C7', 8:'C8', 9:'C9'}
for x, y, target in zip(digits_tsne[:,0], digits_tsne[:,1], targets.values):
plt.text(x, y, str(target), color=colors[target], fontdict={'weight':'bold', "size":16})
plt.xlabel('$x_0$', fontsize=16)
plt.ylabel('$x_1$', fontsize=16)
plt.title('Digits in 2D space using t-SNE', fontsize=20);
###Output
_____no_output_____ |
examples/notebooks/networks/generation/dual_cubic_lattices.ipynb | ###Markdown
Generate a Cubic Lattice with an Interpenetrating Dual Cubic LatticeOpenPNM offers several options for generating *dual* networks. This tutorial will outline the use of the basic *CubicDual* class, while the *DelaunayVoronoiDual* is covered elsewhere. The main motivation for creating these dual networks is to enable the modeling of transport in the void phase on one network and through the solid phase on the other. These networks are interpenetrating but not overlapping or coincident so it makes the topology realistic or at least consistent. Moreover, these networks are interconnected to each other so they can exchange quantities between them, such as gas-solid heat transfer. The tutorial below outlines how to setup a *CubicDual* network object, describes the combined topology, and explains how to use labels to access different parts of the network.As usual start by importing Scipy and OpenPNM:
###Code
import scipy as sp
import numpy as np
import openpnm as op
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(10)
wrk = op.Workspace() # Initialize a workspace object
wrk.settings['loglevel'] = 50
###Output
_____no_output_____
###Markdown
Let's create a *CubicDual* and visualize it in Paraview:
###Code
net = op.network.CubicDual(shape=[6, 6, 6])
###Output
_____no_output_____
###Markdown
The resulting network has two sets of pores, labelled as blue and red in the image below. By default, the main cubic lattice is referred to as the 'primary' network which is colored *blue* and is defined by the ``shape`` argument, and the interpenetrating dual is referred to as the 'secondary' network shown in *red*. These names are used to label the pores and throats associated with each network. These names can be changed by sending ``label_1`` and ``label_2`` arguments during initialization. The throats connecting the 'primary' and 'secondary' pores are labelled 'interconnect', and they can be seen as the diagonal connections below. The ``topotools`` module of openpnm also has handy visualization functions which can be used to consecutively build a picture of the network connections and coordinates. > Replace ```%matplotlib inline``` with ```%matplotlib notebook``` for 3D interactive plots.
###Code
#NBVAL_IGNORE_OUTPUT
from openpnm.topotools import plot_connections, plot_coordinates
fig1 = plot_coordinates(network=net, pores=net.pores('primary'), c='b')
#NBVAL_IGNORE_OUTPUT
fig2 = plot_coordinates(network=net, pores=net.pores('primary'), c='b')
fig2 = plot_coordinates(network=net, pores=net.pores('secondary'), fig=fig2, c='r')
#NBVAL_IGNORE_OUTPUT
fig3 = plot_coordinates(network=net, pores=net.pores('primary'), c='b')
fig3 = plot_coordinates(network=net, pores=net.pores('secondary'), fig=fig3, c='r')
fig3 = plot_connections(network=net, throats=net.throats('primary'), fig=fig3, c='b')
#NBVAL_IGNORE_OUTPUT
fig4 = plot_coordinates(network=net, pores=net.pores('primary'), c='b')
fig4 = plot_coordinates(network=net, pores=net.pores('secondary'), fig=fig4, c='r')
fig4 = plot_connections(network=net, throats=net.throats('primary'), fig=fig4, c='b')
fig4 = plot_connections(network=net, throats=net.throats('secondary'), fig=fig4, c='r')
#NBVAL_IGNORE_OUTPUT
fig5 = plot_coordinates(network=net, pores=net.pores('primary'), c='b')
fig5 = plot_coordinates(network=net, pores=net.pores('secondary'), fig=fig5, c='r')
fig5 = plot_connections(network=net, throats=net.throats('primary'), fig=fig5, c='b')
fig5 = plot_connections(network=net, throats=net.throats('secondary'), fig=fig5, c='r')
fig5 = plot_connections(network=net, throats=net.throats('interconnect'), fig=fig5, c='g')
###Output
_____no_output_____
###Markdown
Inspection of this image shows that the 'primary' pores are located at the expected locations for a cubic network, including on the faces of the cube, and the 'secondary' pores are located at the interstitial locations. There is one important nuance to note: some of the 'secondary' pores are also on the faces, offset by 1/2 a lattice spacing from the internal 'secondary' pores. This means that each face of the network is a staggered tiling of 'primary' and 'secondary' pores. The 'primary' and 'secondary' pores are connected to themselves in a standard 6-connected lattice, and connected to each other in the diagonal directions. Unlike a regular *Cubic* network, it is not possible to specify more elaborate connectivity in the *CubicDual* networks since the throats of each network would be conceptually entangled. The figure below shows the connections in the secondary (left) and primary (middle) networks, as well as the interconnections between them (right). Using the labels it is possible to query the number of each type of pore and throat on the network:
###Code
print(f"No. of primary pores: {net.num_pores('primary')}")
print(f"No. of secondary pores: {net.num_pores('secondary')}")
print(f"No. of primary throats: {net.num_throats('primary')}")
print(f"No. of secondary throats: {net.num_throats('secondary')}")
print(f"No. of interconnect throats: {net.num_throats('interconnect')}")
###Output
No. of primary pores: 216
No. of secondary pores: 275
No. of primary throats: 540
No. of secondary throats: 450
No. of interconnect throats: 1600
###Markdown
Now that this topology is created, the next step would be to create *Geometry* objects for each network, and an additional one for the 'interconnect' throats. We can use the predefined labels on the network to specify which pores and throats belong to each geometry:
###Code
geo_pri = op.geometry.GenericGeometry(network=net,
pores=net.pores('primary'),
throats=net.throats('primary'))
geo_sec = op.geometry.GenericGeometry(network=net,
pores=net.pores('secondary'),
throats=net.throats('secondary'))
geo_inter = op.geometry.GenericGeometry(network=net,
throats=net.throats('interconnect'))
###Output
_____no_output_____ |
Task #3.ipynb | ###Markdown
The Sparks Foundation Data Science and Analytics Internship Task 3 - To Explore Unsupervised Machine Learning Problem Statement: From the given ‘Iris’ dataset, predict the optimum number of clusters and represent it visually. Importing Libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Importing the Dataset
###Code
df = pd.read_csv('iris.csv')
###Output
_____no_output_____
###Markdown
Glancing at the given Dataset
###Code
df.head()
###Output
_____no_output_____
###Markdown
Getting Information about the Dataset
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 150 entries, 0 to 149
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Id 150 non-null int64
1 SepalLengthCm 150 non-null float64
2 SepalWidthCm 150 non-null float64
3 PetalLengthCm 150 non-null float64
4 PetalWidthCm 150 non-null float64
5 Species 150 non-null object
dtypes: float64(4), int64(1), object(1)
memory usage: 7.2+ KB
###Markdown
Checking for presence of NULL Values
###Code
df.isnull().sum()
###Output
_____no_output_____
###Markdown
No Null Values Present. Getting the input values from Dataset
###Code
x = df.iloc[:, [1, 2, 3, 4]].values  # the four measurement columns (the 'Id' column is excluded)
###Output
_____no_output_____
###Markdown
Finding the Optimum Number Of Clusters Using the Elbow Method
###Code
from sklearn.cluster import KMeans
wcss = []
for i in range(1, 11):
kmeans = KMeans(n_clusters = i, init = 'k-means++',
max_iter = 300, n_init = 10, random_state = 0)
kmeans.fit(x)
wcss.append(kmeans.inertia_)
# Plotting the results onto a line graph,
plt.plot(range(1, 11), wcss)
plt.title('The Elbow method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()
###Output
_____no_output_____
###Markdown
You can clearly see from the above graph why it is called 'the elbow method': the optimum number of clusters is where the elbow occurs, i.e. the point after which the within-cluster sum of squares (WCSS) no longer decreases significantly as more clusters are added. From this we choose the number of clusters as **3**. Creating a KMeans model and training it
###Code
# Applying kmeans to the dataset / Creating the kmeans classifier
kmeans = KMeans(n_clusters = 3, init = 'k-means++',
max_iter = 300, n_init = 10, random_state = 0)
y_kmeans = kmeans.fit_predict(x)
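# Quick sanity check on the resulting cluster sizes (the exact counts depend on the data, but the
# three clusters should be of broadly comparable size for the iris measurements)
print(np.bincount(y_kmeans))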
###Output
_____no_output_____
###Markdown
Visualizing the Clusters of the Fit Model
###Code
# Visualising the clusters on the first two columns
plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1],
s = 50, c = 'pink', label = 'Iris-setosa')
plt.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1],
s = 50, c = 'green', label = 'Iris-versicolour')
plt.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1],
s = 50, c = 'yellow', label = 'Iris-virginica')
# Plotting the centroids of the clusters
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:,1],
s = 100, c = 'black', label = 'Centroids')
plt.legend()
###Output
_____no_output_____ |
lab02/lab02_part01/lab02_01_expansion.ipynb | ###Markdown
Matrix and Vocabulary Construction
###Code
import pandas as pd
import numpy as np
from scipy import sparse
import nltk
nltk.download('stopwords')
nltk.download('punkt')
from nltk import bigrams
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer
news = pd.read_csv("../../data/estadao_noticias_eleicao.csv", encoding="utf-8")
news = news.fillna("")
content = news.titulo + " " + news.subTitulo + " " + news.conteudo
content = content.fillna("")
###Output
_____no_output_____
###Markdown
Generating a Co-occurrence Matrix
###Code
def co_occurrence_matrix(corpus):
'''
By: https://github.com/allansales
Source: https://github.com/allansales/information-retrieval/tree/master/Lab%202
'''
vocab = set(corpus)
vocab = list(vocab)
n = len(vocab)
vocab_to_index = {word:i for i, word in enumerate(vocab)}
bi_grams = list(bigrams(corpus))
bigram_freq = nltk.FreqDist(bi_grams).most_common(len(bi_grams))
I=list()
J=list()
V=list()
for bigram in bigram_freq:
current = bigram[0][1]
previous = bigram[0][0]
count = bigram[1]
I.append(vocab_to_index[previous])
J.append(vocab_to_index[current])
V.append(count)
co_occurrence_matrix = sparse.coo_matrix((V,(I,J)), shape=(n,n))
return co_occurrence_matrix, vocab_to_index
###Output
_____no_output_____
###Markdown
Removing punctuation
###Code
tokenizer = RegexpTokenizer(r'\w+')
tokens_lists = content.apply(lambda text: tokenizer.tokenize(text.lower()))
###Output
_____no_output_____
###Markdown
Removing stopwords
###Code
stopword_ = stopwords.words('portuguese')
filtered_tokens = tokens_lists.apply(lambda tokens: [token for token in tokens if token not in stopword_])
###Output
_____no_output_____
###Markdown
Transforming list of lists into one list
###Code
tokens = [token for tokens_list in filtered_tokens for token in tokens_list]
matrix, vocab = co_occurrence_matrix(tokens)
###Output
_____no_output_____
###Markdown
Get the TOP 3 most frequent correlated words
###Code
def top_3(word):
'''
Get the top 3 words most correlated with the word received as argument.
Relies on the co-occurrence entries having been appended in descending bigram frequency (as done above).
ARGS:
word: String; the word for which the most correlated words will be searched
RETURN:
List: list with the most correlated words
'''
word_id = vocab[word]
top = []
for i, j, k in zip(matrix.row, matrix.col, matrix.data):
if (i == word_id):
top.append(j)
if (len(top) == 3):
break
top = get_word_by_id(top)
return top
def get_word_by_id(array):
'''
Transform the list of word ids into a list of words
ARGS:
array: list of word ids
RETURN:
A list with the words instead of the ids
'''
result = ['','','']
for i in vocab.keys():
for j in range(len(array)):
if (vocab[i] == array[j]):
result[j] = i
return result
###Output
_____no_output_____
###Markdown
Consult Bigram Frequency
###Code
consultable_matrix = matrix.tocsr()
def consult_frequency(w1, w2):
return(consultable_matrix[vocab[w1],vocab[w2]])
###Output
_____no_output_____
###Markdown
Example
###Code
w1 = 'poucos'
w2 = 'recursos'
consult_frequency(w1, w2)
###Output
_____no_output_____
###Markdown
Inverted Index
###Code
INVERTED_INDEX = dict()
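# Structure kept in the index (see search_term below): INVERTED_INDEX[term] == (n_docs, [(idNoticia, tf), ...])
# e.g. a hypothetical entry: INVERTED_INDEX['dilma'] == (2, [(101, 3), (205, 1)])
# (the term appears in 2 documents: 3 times in notice 101 and once in notice 205)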
def search_term(term, data):
'''
Searches for the term in the data frame and calculates its term frequency. Also fills the dictionary
with the result, so future searches for the same term are cached.
ARGS:
term: String with a single word; the term to be searched in the data frame.
data: Data frame in which the term will be searched.
RETURN:
tuple (int n, list l): n is the total number of documents in which the term is present at least once;
l is a list of tuples (doc, tf), where doc is the notice id and tf is the
term frequency of the term in that document.
'''
result = []
cont = 0
if (term in INVERTED_INDEX):
result = INVERTED_INDEX[term]
else:
rows = data.shape[0]
for doc in range(rows):
title = (data.loc[doc, 'titulo']).lower()
sub_title = (data.loc[doc, 'subTitulo']).lower()
content = (data.loc[doc, 'conteudo']).lower()
tf = 0
text = nltk.word_tokenize(title + ' ' + sub_title + ' ' + content)
i = 0
while (i < len(text)):
if (text[i].lower() == term):
tf += 1
exist = True
i += 1
if (tf):
id_notice = data.loc[doc, 'idNoticia']
result.append((id_notice, tf))
cont += 1
INVERTED_INDEX[term] = (cont, result)
return INVERTED_INDEX[term]
def disjunction(query, data):
'''
Builds a dictionary with, for every notice id, the list of term frequencies of the query terms
present in that document.
ARGS:
query: list with the words or terms to be searched
data: Data frame in which the terms will be searched.
RETURN:
{doc: l}: doc is the notice id and l is a list with the tf (term frequency) of each query term found in it
'''
dic_docs = dict()
for term in query:
term_set = search_term(term, data)
for doc in term_set[1]:
if (doc[0] in dic_docs):
dic_docs[doc[0]].append(doc[1])
else:
dic_docs[doc[0]] = [doc[1]]
return dic_docs
def vsm_tf(query, data):
'''
Improved VSM with Term Frequency Weighting
Searches for a query in a data frame with TF weighting: for each document in which at least one
query term is present, it returns the notice id together with the sum of the tf values of the query terms.
ARGS:
query: list with the words or terms to be searched
data: Data frame in which the terms will be searched.
RETURN:
List of tuples (doc, tf), where doc represents the notice id and tf the summed term frequency
'''
dic_docs = disjunction(query,data)
result = []
for doc in dic_docs.keys():
result.append((doc, sum(dic_docs[doc])))
return result
###Output
_____no_output_____
###Markdown
Analysis. For a search with the term petrobrás, we got the following TOP 3 most frequent co-occurring words:
###Code
top = top_3('petrobrás')
for i, j in enumerate(top):
print('%d °: %s' % (i + 1, j))
###Output
1 °: paulo
2 °: é
3 °: graça
###Markdown
The total number of documents returned for the query with only the term petrobrás was:
###Code
print(len(vsm_tf(["petrobrás"], news)))
###Output
1043
###Markdown
On the other hand, for the expanded query, we got the following total number of documents:
###Code
result = vsm_tf(top, news)
print(len(result))
###Output
7149
|
8_chainer-for-theano-users.ipynb | ###Markdown
Chainer for Theano Users. As we mentioned [here](https://chainer.org/general/2017/09/29/thank-you-theano.html), Theano will stop being developed in a few weeks. Many aspects of Chainer were inspired by Theano's clean interface design, so we would like to introduce Chainer here by comparing its differences from Theano. We believe this article will help Theano users move to Chainer quickly. In this post, we assume that the modules below have been imported.
###Code
import numpy as np
import theano
import theano.tensor as T
import chainer
import chainer.functions as F
import chainer.links as L
###Output
_____no_output_____
###Markdown
First, let's summarize the key similarities and differences between Theano and Chainer. Key similarities: - Python-based library - Functions can accept NumPy arrays - CPU/GPU support - Easy to write various operations as differentiable functions (custom layers) Key differences: - Theano compiles the computational graph before running it - Chainer builds the computational graph at runtime - Chainer provides many high-level APIs for neural networks - Chainer supports distributed learning with ChainerMN Define a parametric function. A neural network basically has many parametric functions and activation functions, which are commonly called "layers". Let's see the difference between how to create a new parametric function in Theano and in Chainer. In this example, to show the way to do the same thing with the two different libraries, we show how to define a 2D convolution function. Note that Chainer already has `chainer.links.Convolution2D`, so you don't actually need to write the code below to use 2D convolution as a building block of a network. Theano:
###Code
class TheanoConvolutionLayer(object):
def __init__(self, input, filter_shape, image_shape):
# Prepare initial values of the parameter W
spatial_dim = np.prod(filter_shape[2:])
fan_in = filter_shape[1] * spatial_dim
fan_out = filter_shape[0] * spatial_dim
scale = np.sqrt(3. / fan_in)
# Create the parameter W
W_init = np.random.uniform(-scale, scale, filter_shape)
self.W = theano.shared(W_init.astype(np.float32), borrow=True)
# Create the paramter b
b_init = np.zeros((filter_shape[0],))
self.b = theano.shared(b_init.astype(np.float32), borrow=True)
# Describe the convolution operation
conv_out = T.nnet.conv2d(
input=input,
filters=self.W,
filter_shape=filter_shape,
input_shape=image_shape)
# Add a bias
self.output = conv_out + self.b.dimshuffle('x', 0, 'x', 'x')
# Store paramters
self.params = [self.W, self.b]
###Output
_____no_output_____
###Markdown
How can we use this class? In Theano, it defines the computation as code using symbols, but doesn't perform the actual computation at that time. Namely, it defines the computational graph before running it. To use the defined computational graph, we need to define another operator using `theano.function`, which takes input variables and an output variable.
###Code
batchsize = 32
input_shape = (batchsize, 1, 28, 28)
filter_shape = (6, 1, 5, 5)
# Create a tensor that represents a minibatch
x = T.fmatrix('x')
input = x.reshape(input_shape)
conv = TheanoConvolutionLayer(input, filter_shape, input_shape)
f = theano.function([input], conv.output)
###Output
_____no_output_____
###Markdown
`conv` is the definition of how to compute the output from the first argument `input`, and `f` is the actual operator. You can pass values to `f` to compute the result of convolution like this:
###Code
x_data = np.random.rand(32, 1, 28, 28).astype(np.float32)
y = f(x_data)
print(y.shape, type(y))
###Output
(32, 6, 24, 24) <class 'numpy.ndarray'>
###Markdown
Chainer: What about the case of Chainer? Theano is a more general framework for scientific calculation, while Chainer focuses on neural networks. So, Chainer has many high-level APIs that enable users to write the building blocks of neural networks more easily. Well, how do we write the same convolution operator in Chainer?
###Code
class ChainerConvolutionLayer(chainer.Link):
def __init__(self, filter_shape):
super().__init__()
with self.init_scope():
# Specify the way of initialize
W_init = chainer.initializers.LeCunUniform()
b_init = chainer.initializers.Zero()
# Create a parameter object
self.W = chainer.Parameter(W_init, filter_shape)
self.b = chainer.Parameter(b_init, filter_shape[0])
def __call__(self, x):
return F.convolution_2d(x, self.W, self.b)
###Output
_____no_output_____
###Markdown
Actually, as we said at the top of this article, Chainer has a pre-implemented `chainer.links.Convolution2D` class for convolution. So, you don't need to implement the code above by yourself, but it shows how to do the same thing as the Theano code written above. You can create your own parametric function by defining a class inherited from `chainer.Link`, as shown above. The computation applied to the input is described in the `__call__` method. Then, how do we use this class?
###Code
chainer_conv = ChainerConvolutionLayer(filter_shape)
y = chainer_conv(x_data)
print(y.shape, type(y), type(y.array))
###Output
(32, 6, 24, 24) <class 'chainer.variable.Variable'> <class 'numpy.ndarray'>
###Markdown
Chainer provides many functions in `chainer.functions`, and they take NumPy arrays or `chainer.Variable` objects as inputs. You can write an arbitrary layer using those functions to make it differentiable. Note that a `chainer.Variable` object contains its actual data in the `array` property. **NOTE:** You can write the same thing using `L.Convolution2D` like this:
###Code
conv_link = L.Convolution2D(in_channels=1, out_channels=6, ksize=(5, 5))
y = conv_link(x_data)
print(y.shape, type(y), type(y.array))
###Output
(32, 6, 24, 24) <class 'chainer.variable.Variable'> <class 'numpy.ndarray'>
###Markdown
Use a Theano function as a layer in Chainer. How to port parametric functions written in Theano to `Link`s in Chainer was shown in the chapter above. But there's an easier way to port **non-parametric functions** from Theano to Chainer. Chainer provides [`TheanoFunction`](https://docs.chainer.org/en/latest/reference/generated/chainer.links.TheanoFunction.html?highlight=Theano) to wrap a Theano function as a `chainer.Link`. What you need to prepare is just the inputs and outputs of the Theano function you want to port to a Chainer `Link`. For example, a convolution function of Theano can be converted to a Chainer `Link` as follows:
###Code
x = T.fmatrix().reshape((32, 1, 28, 28))
W = T.fmatrix().reshape((6, 1, 5, 5))
b = T.fvector().reshape((6,))
conv_out = T.nnet.conv2d(x, W) + b.dimshuffle('x', 0, 'x', 'x')
f = L.TheanoFunction(inputs=[x, W, b], outputs=[conv_out])
###Output
/home/shunta/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/chainer/utils/experimental.py:104: FutureWarning: chainer.links.TheanoFunction is experimental. The interface can change in the future.
FutureWarning)
###Markdown
It converts the Theano computational graph into Chainer's computational graph! So it's differentiable with the Chainer APIs, and easy to use as a building block of a network written in Chainer. But it takes `W` and `b` as input arguments, so it should be noted that it doesn't keep those parameters inside.Anyway, how to use this ported Theano function in a network in Chainer?
###Code
class MyNetworkWithTheanoConvolution(chainer.Chain):
def __init__(self, theano_conv):
super().__init__()
self.theano_conv = theano_conv
W_init = chainer.initializers.LeCunUniform()
b_init = chainer.initializers.Zero()
with self.init_scope():
self.W = chainer.Parameter(W_init, (6, 1, 5, 5))
self.b = chainer.Parameter(b_init, (6,))
self.l1 = L.Linear(None, 100)
self.l2 = L.Linear(100, 10)
def __call__(self, x):
h = self.theano_conv(x, self.W, self.b)
h = F.relu(h)
h = self.l1(h)
h = F.relu(h)
return self.l2(h)
###Output
_____no_output_____
###Markdown
This class is a Chainer model class inherited from `chainer.Chain`. This is a standard way to define a class in Chainer, but, look! It uses a Theano function as a layer inside the `__call__` method. The first layer of this network is a convolution layer, and that layer is a Theano function which runs its computation with Theano. The usage of this network is exactly the same as that of normal Chainer models:
###Code
# Instantiate a model object
model = MyNetworkWithTheanoConvolution(f)
# And give an array/Variable to get the network output
y = model(x_data)
###Output
/home/shunta/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/chainer/utils/experimental.py:104: FutureWarning: chainer.functions.TheanoFunction is experimental. The interface can change in the future.
FutureWarning)
###Markdown
This network takes a mini-batch of images whose shape is `(32, 1, 28, 28)` and outputs 10-dimensional vectors for each input image, so the shape of the output variable will be `(32, 10)`:
###Code
print(y.shape)
###Output
(32, 10)
###Markdown
This network is differentiable, and the parameters of the Theano convolution function, which are defined in the constructor as `self.W` and `self.b`, can be optimized through Chainer's optimizers normally.
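A minimal sketch of that update step (an illustration, not part of the original post; it reuses `x_data` and assumes integer labels `t` like the ones created in the next cell):

```python
import numpy as np
import chainer
import chainer.functions as F

optimizer = chainer.optimizers.Adam()
optimizer.setup(model)                  # registers model.W, model.b, l1 and l2

t = np.random.randint(0, 10, size=(32,)).astype(np.int32)
loss = F.softmax_cross_entropy(model(x_data), t)

model.cleargrads()
loss.backward()
optimizer.update()                      # applies Adam to all registered parameters
```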
###Code
t = np.random.randint(0, 10, size=(32,)).astype(np.int32)
loss = F.softmax_cross_entropy(y, t)
model.cleargrads()
loss.backward()
###Output
_____no_output_____
###Markdown
You can check the gradients calculated for the parameters `W` and `b` used in the Theano function `theano_conv`:
###Code
W_gradient = model.W.grad_var.array
b_gradient = model.b.grad_var.array
print(W_gradient.shape, type(W_gradient))
print(b_gradient.shape, type(b_gradient))
###Output
(6, 1, 5, 5) <class 'numpy.ndarray'>
(6,) <class 'numpy.ndarray'>
|
examples/basic-tutorial - I.ipynb | ###Markdown
Reading and visualizing multiple images This basic tutorial covers how to read images stored in multiple folders. **tsraster** stacks these images and renders one image with multiple bands.
###Code
import os.path
import matplotlib.pyplot as plt
%matplotlib inline
import tsraster
import tsraster.prep as tr
###Output
_____no_output_____
###Markdown
connect to the data directory
###Code
path = "../docs/img/temperature/"
###Output
_____no_output_____
###Markdown
the images in this directory are structured as: - temperature: 2005 (tmx-200501.tif, tmx-200502.tif, tmx-200503.tif), 2006 (tmx-200601.tif, tmx-200602.tif, tmx-200603.tif), 2007 (tmx-200701.tif, tmx-200702.tif, tmx-200703.tif). Accordingly, for temperature we have three years of data and, for each year, three monthly images. A name like 'tmx-200501.tif' encodes tmx (the variable, temperature), 2005 (the year) and 01 (the month); a small parsing sketch follows. We then read the images and print their corresponding names
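A tiny parsing sketch (not part of tsraster; the helper name is made up for illustration) for splitting a file stem into its components:

```python
def parse_stem(stem):
    # split a name like "tmx-200501" into (variable, year, month)
    variable, yyyymm = stem.split("-")
    return variable, int(yyyymm[:4]), int(yyyymm[4:])

print(parse_stem("tmx-200501"))  # ('tmx', 2005, 1)
```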
###Code
image_name = tr.image_names(path)
print(image_name)
###Output
['tmx-200501', 'tmx-200502', 'tmx-200503', 'tmx-200601', 'tmx-200602', 'tmx-200603', 'tmx-200701', 'tmx-200702', 'tmx-200703']
###Markdown
Convert each to array and stack them as bands
###Code
rasters = tr.image_to_array(path)
# first image
rasters[0]
###Output
_____no_output_____
###Markdown
Check the total number of images (bands stacked together)
###Code
rasters.shape
###Output
_____no_output_____
###Markdown
Visualize
###Code
fig, ax = plt.subplots(3,3, figsize=(10,10))
for i in range(0,rasters.shape[2]):
img = rasters[:,:,i]
i = i+1
plt.subplot(3,3,i)
plt.imshow(img, cmap="Greys")
###Output
_____no_output_____ |
14-Difference-in-Difference.ipynb | ###Markdown
Three Billboards in the South of Brazil. I remember that when I worked with marketing, a great way to do it was internet marketing. Not because it is very efficient (although it is), but because it is very easy to know if it's effective or not. With online marketing, you have a way of knowing which customers saw the ad and you can track them with cookies to see if they ended up on your landing page. You can also use machine learning to find prospects that are very similar to your customers and present the ad only to them. In this sense, online marketing is very precise: you target only those you want to and you can see if they respond as you would like them to. But not everyone is susceptible to online marketing. Sometimes you have to resort to less precise techniques, like a TV campaign or placing a billboard down the street. Usually, diversity of marketing channels is something marketing departments look for. But if online marketing is a professional fishing rod to catch that specific type of tuna, billboard and TV are giant nets you throw at a fish shoal and hope to catch at least some good fish. But another problem with billboard and TV ads is that it is much harder to know how effective they are. Sure, you could measure the purchase volume, or whatever you want to drive, before and after placing a billboard somewhere. If there is an increase, there is some evidence that the marketing is effective. But how would you know if this increase is not just some natural trend in the awareness of your product? In other words, how would you know the counterfactual \\(Y_0\\) of what would have happened if you didn't set up the billboards? One technique to answer these types of questions is simple Difference-in-Difference, or diff-in-diff for close friends. Diff-in-diff is commonly used to assess the effect of macro interventions, like the effect of immigrants on unemployment, the effect of gun law changes on crime rates, or simply the difference in user engagement due to a marketing campaign. In all these cases, you have a period before and after the intervention and wish to untangle the impact of the intervention from a general trend. As a motivating example, let's look at a question similar to the one I had to answer. In order to figure out how good billboards were as a marketing channel, we've placed 3 billboards in the city of Porto Alegre, the capital of the state of Rio Grande do Sul. As a note for those not very familiar with Brazilian geography, the south of the country is one of the most developed regions, with lower poverty rates when compared to the rest of the country. Having this in mind, we decided to also look at data from Florianopolis, the capital city of the state of Santa Catarina, another state in the south region. The idea is that we could use Florianopolis as a control sample to estimate the counterfactual \\(Y_0\\). What we were trying to boost with this particular campaign was deposits into our savings account (by the way, this was not the true experiment, which is confidential, but the idea is very similar). We've placed the billboard in Porto Alegre for the entire month of June. The data we have looks like this:
###Code
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
from matplotlib import style
from matplotlib import pyplot as plt
import seaborn as sns
import statsmodels.formula.api as smf
%matplotlib inline
style.use("fivethirtyeight")
data = pd.read_csv("data/billboard_impact.csv")
data.head()
###Output
_____no_output_____
###Markdown
deposits is our outcome variable. POA is a dummy indicator for the city of Porto Alegre. When it is zero, it means the samples are from Florianopolis. Jul is a dummy for the month of July, or for the post-intervention period. When it is zero it refers to samples from May, the pre-intervention period. DID Estimator. To avoid confusion between Time and Treatment, I'll use D to denote treatment and T to denote time from now on. Let \\(Y_D(T)\\) be the potential outcome for treatment D in period T. In an ideal world where we had the ability to observe the counterfactual, we would estimate the treatment effect of an intervention the following way: $\hat{ATET} = E[Y_1(1) - Y_0(1)|D=1]$ In words, the causal effect is the outcome in the post-intervention period in the case of a treatment, minus the outcome, also in the period after the intervention, but in the case of no treatment. Of course, we can't measure this because \\(Y_0(1)\\) is counterfactual. One way to solve this is a before and after comparison. $\hat{ATET} = E[Y(1)|D=1] - E[Y(0)|D=1]$ In our example, we would compare the average deposits from POA before and after the billboard was placed.
###Code
poa_before = data.query("poa==1 & jul==0")["deposits"].mean()
poa_after = data.query("poa==1 & jul==1")["deposits"].mean()
poa_after - poa_before
###Output
_____no_output_____
###Markdown
This estimator is telling us that we should expect deposits to increase R$ 41,04 after the intervention. But can we trust this?Notice that \\(E[Y(0)|D=1]=E[Y_0(0)|D=1]\\), so this estimation above is assuming \\(E[Y_0(1)|D=1] = E[Y_0(0)|D=1]\\). It is saying that in the case of no intervention, the outcome in the latter period would be the same as the outcome from the starting period. This would obviously be false if your outcome variable follows any kind of trend. For example, if deposits are going up in POA, \\(E[Y_0(1)|D=1] > E[Y_0(0)|D=1]\\), i.e. the outcome of the latter period would be greater than that of the starting period even in the absence of the intervention. With a similar argument, if the trend in Y is going down, \\(E[Y_0(1)|D=1] < E[Y_0(0)|D=1]\\). So this didn't work. Another idea is to compare the treated group with an untreated group that didn't get the intervention:$\hat{ATET} = E[Y(1)|D=1] - E[Y(1)|D=0]$In our example, it would be to compare the deposits from POA to that of Florianopolis in the post intervention period.
###Code
fl_after = data.query("poa==0 & jul==1")["deposits"].mean()
poa_after - fl_after
###Output
_____no_output_____
###Markdown
This estimator is telling us that the campaign is detrimental and that customers will decrease deposits by R$ 119.10. Notice that \\(E[Y(1)|D=0]=E[Y_0(1)|D=0]\\), so we are assuming we can replace the missing counterfactual like \\(E[Y_0(1)|D=0] = E[Y_0(1)|D=1]\\). But notice that this would only be true if both groups had a very similar baseline level. For instance, if Florianopolis has way more deposits than Porto Alegre, this would not be true because \\(E[Y_0(1)|D=0] > E[Y_0(1)|D=1]\\). On the other hand, if the level of deposits is lower in Florianopolis, we would have \\(E[Y_0(1)|D=0] < E[Y_0(1)|D=1]\\). So this didn't work as well. To solve this, we can use both a space and a time comparison. This is the idea of the difference-in-difference approach. It works by replacing the missing counterfactual the following way: $E[Y_0(1)|D=1] = E[Y_0(0)|D=1] + (E[Y_0(1)|D=0] - E[Y_0(0)|D=0])$ What this does is take the treated unit before the treatment and add a trend component estimated using the control, \\(E[Y_0(1)|D=0] - E[Y_0(0)|D=0]\\). In words, it is saying that the treated, had it not been treated, would look like the treated before the treatment plus a growth factor that is the same as the growth of the control. It is important to notice that this assumes that the trends in the treatment and control are the same: $E[Y_0(1) − Y_0(0)|D=1] = E[Y_0(1) − Y_0(0)|D=0]$ where the left hand side is the counterfactual trend. Now, we can replace the estimated counterfactual in the treatment effect definition \\(E[Y_1(1)|D=1] - E[Y_0(1)|D=1]\\): $\hat{ATET} = E[Y(1)|D=1] - (E[Y(0)|D=1] + (E[Y(1)|D=0] - E[Y(0)|D=0]))$ If we rearrange the terms, we get the classical Diff-in-Diff estimator. $\hat{ATET} = (E[Y(1)|D=1] - E[Y(1)|D=0]) - (E[Y(0)|D=1] - E[Y(0)|D=0])$ It gets that name because it takes the difference between the differences between treatment and control, after and before the treatment. Here is what that looks like in code.
###Code
fl_before = data.query("poa==0 & jul==0")["deposits"].mean()
diff_in_diff = (poa_after-poa_before)-(fl_after-fl_before)
diff_in_diff
###Output
_____no_output_____
###Markdown
Diff-in-Diff is telling us that we should expect deposits to increase by R$ 6.52 per customer. Notice that the assumption that diff-in-diff makes is much more plausible than that of the other 2 estimators. It just assumes that the growth pattern of the 2 cities is the same. But it doesn't require them to have the same base level nor does it require the trend to be zero. To visualize what diff-in-diff is doing, we can project the growth trend from the untreated onto the treated to see the counterfactual, that is, the number of deposits we should expect if there were no intervention.
###Code
plt.figure(figsize=(10,5))
plt.plot(["May", "Jul"], [fl_before, fl_after], label="FL", lw=2)
plt.plot(["May", "Jul"], [poa_before, poa_after], label="POA", lw=2)
plt.plot(["May", "Jul"], [poa_before, poa_before+(fl_after-fl_before)],
label="Counterfactual", lw=2, color="C2", ls="-.")
plt.legend();
###Output
_____no_output_____
###Markdown
See that small difference between the red and the yellow dashed lines? If you really focus you can see the small treatment effect on Porto Alegre. Now, what you might be asking yourself is "how much can I trust this estimator? It is my right to have standard errors reported to me!". Which makes sense, since estimators without them look silly. To do so, we will use a neat trick that uses regression. Specifically, we will estimate the following model: $Y_i = \beta_0 + \beta_1 POA_i + \beta_2 Jul_i + \beta_3 POA_i*Jul_i + e_i$ Notice that \\(\beta_0\\) is the baseline of the control. In our case, it is the level of deposits in Florianopolis in the month of May. If we turn on the treated city dummy, we get \\(\beta_1\\). So \\(\beta_0 + \beta_1\\) is the baseline of Porto Alegre in May, before the intervention, and \\(\beta_1\\) is the increase of the Porto Alegre baseline on top of Florianopolis. If we turn the POA dummy off and turn the July dummy on, we get \\(\beta_0 + \beta_2\\), which is the level of Florianopolis in July, after the intervention period. \\(\beta_2\\) is then the trend of the control, since we add it on top of the baseline to get the level of the control in the post-intervention period. As a recap, \\(\beta_1\\) is the increment from going from the control to the treated, \\(\beta_2\\) is the increment from going from the period before to the period after the intervention. Finally, if we turn both dummies on, we get \\(\beta_3\\). \\(\beta_0 + \beta_1 + \beta_2 + \beta_3\\) is the level in Porto Alegre after the intervention. So \\(\beta_3\\) is the incremental impact when you go from May to July and from Florianopolis to POA. In other words, it is the Difference in Difference estimator. If you don't believe me, check for yourself. And also notice how we get our standard errors.
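As a quick numerical check (a sketch; it assumes statsmodels' default label `poa:jul` for the interaction term of this formula), the fitted interaction coefficient should match the manual estimate computed earlier:

```python
ols_fit = smf.ols('deposits ~ poa*jul', data=data).fit()
print(ols_fit.params["poa:jul"])  # interaction coefficient, i.e. the diff-in-diff estimate
print(diff_in_diff)               # manual estimate from the cell above
```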
###Code
smf.ols('deposits ~ poa*jul', data=data).fit().summary().tables[1]
###Output
_____no_output_____
###Markdown
Non Parallel Trends. One obvious problem with Diff-in-Diff is failure to satisfy the parallel trend assumption. If the growth trend of the treated is different from the trend of the control, diff-in-diff will be biased. This is a common problem with non-random data, where the decision to treat a region is based on its potential to respond well to the treatment, or when the treatment is targeted at regions that are not performing very well. Take our marketing example. Suppose we decided to test billboards in Porto Alegre not to check the effect of billboards, but simply because it is performing poorly, maybe because online marketing is not working there. In this case, it could be that the growth we would see in Porto Alegre without a billboard would be lower than the growth we observe in other cities. This would cause us to underestimate the effect of the billboard there. One way to check if this is happening is to plot the trend using past periods. For example, let's suppose POA had a small decreasing trend but Florianopolis was on a steep ascent. In this case, showing periods from before would reveal those trends and we would know Diff-in-Diff is not a reliable estimator.
###Code
plt.figure(figsize=(10,5))
x = ["Jan", "Mar", "May", "Jul"]
plt.plot(x, [120, 150, fl_before, fl_after], label="FL", lw=2)
plt.plot(x, [60, 50, poa_before, poa_after], label="POA", lw=2)
plt.plot(["May", "Jul"], [poa_before, poa_before+(fl_after-fl_before)], label="Counterfactual", lw=2, color="C2", ls="-.")
plt.legend();
###Output
_____no_output_____ |
NotebooksCap01_02/jup2_VariaveisOperadores.ipynb | ###Markdown
Jupyter 2 - Variables and operators 16 / 04 / 2020
###Code
# Variables store values for later use. They are memory locations holding a given value;
# Assigning values to a variable;
VarTest = 1.9
a = 2.6
x1 = 3 # Variable name containing a number;
nome = 'Jonatan'
VarTest, a, x1, nome # Displays the values of the variables. Before being displayed, the variables must be defined and initialized;
print(VarTest, a, x1, nome) # Displays the values of the variables. Before being displayed, the variables must be defined and initialized;
type(VarTest)
type(a)
type(x1)
type(nome)
# -----------------------------------------Multiple assignment-----------------------------------------------------------------
nome1, nome2, nome3 = 'Jonatan', 'Elias', 'Isaac'
nome1, nome2, nome3
print(nome1, nome2, nome3)
x = y = z = 3,141519
x, y, z
print(x, y, z)
#----------------------------------Operations with variables----------------------------------------
largura = 3
altura = 5
area = largura * altura
print(area)
# -------------------------------------------------------------------------------------------------------
idade1 = 20
idade2 = 30
idade3 = 40
idade1 + idade2, idade2 + idade1
idade1 - idade2, idade2 - idade1
idade1 * idade2, idade2 * idade1
(idade1 + idade2) + idade3, idade1 + (idade2 + idade3)
idade3 / idade1
idade3 // idade1
idade3 % idade1
#---------------------------------------------Concatenating variables-------------------------------------
nome = "Jontan"
sobrenome = 'Paschoal'
fullName = nome+" "+sobrenome
fullName
#-----------------------------------------------------END----------------------------------------------------------
###Output
_____no_output_____ |
notebooks/graph-markers.ipynb | ###Markdown
**Mark / Label Use Cases*** Default marker alignment is zero.* Edge markers orient with edges by default.* Edge markers can optionally be oriented absolutely.* Edge markers can optionally be oriented relative to their edge-aligned orientation.* Marker labels can optionally be oriented absolutely.* Marker labels can optionally be oriented relative to their mark alignment.
###Code
import numpy
import toyplot.color
import toyplot.mark
edges = numpy.array([
["x0", "a0"],
["x0", "a1"],
["x0", "a2"],
["x0", "a3"],
["x1", "a0"],
["x1", "a1"],
["x1", "a2"],
["x1", "a3"],
["x2", "a0"],
["x2", "a1"],
["x2", "a2"],
["x2", "a3"],
["a0", "y0"],
["a0", "y1"],
["a1", "y0"],
["a1", "y1"],
["a2", "y0"],
["a2", "y1"],
["a3", "y0"],
["a3", "y1"],
])
vcoordinates = numpy.ma.masked_all((9, 2))
vcoordinates[0] = (0, 1)
vcoordinates[1] = (1, 1)
vcoordinates[2] = (2, 1)
vcoordinates[3] = (3, 1)
vcoordinates[4] = (0, 2)
vcoordinates[5] = (1, 2)
vcoordinates[6] = (2, 2)
vcoordinates[7] = (0, 0)
vcoordinates[8] = (1, 0)
vcoordinates[7:9, 0] += 1.0
vcoordinates[4:7, 0] += 0.5
canvas = toyplot.Canvas(width=500, height=500)
axes = canvas.cartesian(show=False, padding=50)
#axes.aspect = "fit-range"
mark = axes.graph(
edges,
ecolor="red",
eopacity=1,
estyle={"stroke-width":1},
hmarker=toyplot.marker.create("s", angle="r90", label="A", mstyle={"fill":"white"}, size=20),
#hmarker="<",
mmarker=toyplot.marker.create("s", angle="0", label="0.2", mstyle={"fill":"white"}, lstyle={"font-size":"12px"}, size=30),
#mmarker="s",
#mposition=numpy.random.choice([0.3, 0.4, 0.5, 0.6, 0.7], replace=True, size=20),
tmarker=toyplot.marker.create(">", angle=None, size=10),
#tmarker=">",
vcoordinates=vcoordinates,
vcolor="white",
vstyle={"stroke":toyplot.color.black},
vlshow=False,
vsize=50,
vmarker=toyplot.marker.create("o", label="0.3", lstyle={"font-size":"18px"}),
)
#mark = axes.rects(-0.125, 0.8 + 0.125, 1 - 0.125, 1 + 0.125, color="rgba(0, 0, 0, 0.2)")
import toyplot.pdf
toyplot.pdf.render(canvas, "test.pdf")
edges = numpy.array([
["a", "b"],
["b", "c"],
])
vcoordinates = numpy.array([
[0, 0],
[0, -1],
[0, -2],
])
canvas = toyplot.Canvas(width=500, height=300)
axes = canvas.cartesian(show=False, padding=50)
axes.aspect = "fit-range"
mark = axes.graph(
edges,
ecolor="black",
tmarker=">",
vcoordinates=vcoordinates,
vlshow=False,
vmarker=toyplot.marker.create("r2x1", label="Convolutional<br/>5×5", angle=0, lstyle={"font-size":"12px"}, size=50),
vstyle={"stroke":"black", "fill":"white"},
)
#axes.scatterplot(vcoordinates[:,0], vcoordinates[:,1], color="red")
toyplot.pdf.render(canvas, "test.pdf")
###Output
_____no_output_____ |
IA_Actividad2_CarlosPerez_1943580.ipynb | ###Markdown
###Code
# Activity 2 - Introductory Python exercises - Thursday N4 - Carlos Enrique Perez Perez 1943580
print("What is your name?")
nombre=input()
print("How old are you, in years?")
edad= input()
print("What is your favorite quote?")
frase=input()
print("Who is the author of that quote?")
autor=input()
print("Your name is "+nombre +", your age is " + edad +" years,"+" your favorite quote is: "+frase+", whose author is "+autor)
print("Hola!, favor de ingresar 2 números enteros,")
int1=int(input())
int2=int(input())
print("Ahora, por favor escribe 2 números con decimal(flotantes)")
flotante1=float(input())
flotante2=float(input())
print("A continuación te mostraré el resultado de las operaciones \nbásicas que pueden realizarse con los 2 número enteros")
suma=(int1+int2)
resta=(int1-int2)
multi=(int1*int2)
division=(int1/int2)
print("El resultado de la suma es ", suma,", el resultado de la resta es ",resta," \nel resultado de la multiplicación es ",multi," y el resultado de la división es ",division)
suma2=(flotante1+flotante2)
resta2=(flotante1-flotante2)
multi2=(flotante1*flotante2)
division2=(flotante1/flotante2)
print("Ahora te mostraré que resultado dan las operaciones básicas que pueden realizarse con los 2 número flotantes")
print("El resultado de la suma es ", suma2,", el resultado de la resta es ",resta2," \nel resultado de la multiplicación es ",multi2," y el resultado de la división es ",division2)
var1=1
var2=2
var3=3
if(var1<3 and var2>5)or var3<10 :
condicion=True
print (not condicion)
###Output
False
|
matrix_two/dw_matrix_car/day3_simple_model2.ipynb | ###Markdown
Loading the data
###Code
df = pd.read_hdf('data/car.h5')
df.shape
df.columns
###Output
_____no_output_____
###Markdown
Dummy Model
###Code
df.select_dtypes(np.number).columns
x = df[ ['car_id'] ].values
y = df[ ['price_value'] ].values
model = DummyRegressor()
model.fit(x,y)
y_pred = model.predict(x)
mae(y, y_pred)
[x for x in df.columns if 'price' in x]
df['price_currency'].value_counts(normalize=True)
df['price_currency'].value_counts()
df = df[ df[ 'price_currency'] != 'EUR']
df.shape
###Output
_____no_output_____
###Markdown
Features
###Code
SUFFIX_CAT = '__cat'
for feat in df.columns:
if isinstance(df[feat][0], list): continue
factorized_values = df[feat].factorize()[0]
# columns that already hold categorical codes keep their name; raw columns get a new __cat column
if SUFFIX_CAT in feat:
df[feat] = factorized_values
else:
df[feat + SUFFIX_CAT] = factorized_values
cat_feats = [x for x in df.columns if SUFFIX_CAT in x]
cat_feats = [x for x in cat_feats if 'price' not in x]
len(cat_feats)
x = df[cat_feats].values
y = df['price_value'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, x, y, cv=3, scoring='neg_mean_absolute_error')
np.mean(scores)
m = DecisionTreeRegressor(max_depth=5)
m.fit(x,y)
imp = PermutationImportance(m, random_state=0).fit(x,y)
eli5.show_weights(m, feature_names=cat_feats)
###Output
_____no_output_____ |
ronikaufman_sp21_semantic-visualizations/digital_manuscript.ipynb | ###Markdown
These two lines import the manuscript class to this notebook, and initialize a variable, 'manuscript', as an object of this class
###Code
from digital_manuscript import BnF
manuscript = BnF()
###Output
_____no_output_____
###Markdown
Select an entry object out of the manuscript with its ID. This entry contains the text in all three formats: tc, tcn, and tl.
###Code
sword_varnish = manuscript.entry('004v_1')
sword_varnish.title['tl']
sword_varnish.identity
###Output
_____no_output_____
###Markdown
The prop 'length' describes the length of the entry in characters. Length varies by format, and so you must be sure to choose one. It must be written in quotes, inside square brackets.
###Code
sword_varnish.length['tc']
###Output
_____no_output_____
###Markdown
.text() returns the text based on the version entered
###Code
sword_varnish.text('tl')
sword_varnish.text('tc')
###Output
_____no_output_____
###Markdown
To see the xml version of an entry, set the optional parameter 'xml' to True. Its default value is False
###Code
sword_varnish.text('tl', xml=True)
###Output
_____no_output_____
###Markdown
properties refer to pieces of information captured between tags in the xml annotated manuscript. The full list of properties can be seen below. When querying an entry for a prop, make sure to specify the version as well as the type of prop.
###Code
properties = ['animal', 'body_part', 'currency', 'definition', 'environment', 'material',
'medical', 'measurement', 'music', 'plant', 'place', 'personal_name',
'profession', 'sensory', 'tool', 'time']
sword_varnish_materials = sword_varnish.get_prop('material', 'tl')
sword_varnish_materials
sword_varnish_plants = sword_varnish.get_prop('plant', 'tcn')
sword_varnish_plants
has_materials = manuscript.search(material=True)
has_materials
has_materials_plants = manuscript.search(material=True, plant=True)
has_materials_plants
rose_entries = manuscript.search(plant=['rose'])
rose_entries
iron_entries = manuscript.search(material=['iron'])
iron_entries
iron_rose = manuscript.search(material=['iron'], plant=['rose'])
iron_rose
for ir in iron_rose:
assert ir in iron_entries
assert ir in rose_entries
iron_set = set(iron_entries)
rose_set = set(rose_entries)
iron_set.intersection(rose_set)
turtle = manuscript.entry('169r_1')
turtle.context('iron', 'tl')
turtle.context('rose', 'tl')
###Output
_____no_output_____
###Markdown
To create a new manuscript composed of a selection of entries, pass a list of entries to the BnF() constructor.
###Code
rose_manuscript = BnF(rose_entries)
rose_manuscript.tablefy()
manuscript = BnF()
marginal = [page.identity for _, page in manuscript.entries.items() if len(page.margins) >= 1]
margin = BnF(marginal)
margin.tablefy()
varnish = manuscript.entry('003r_3')
for version, margin_list in varnish.margins.items():
print(version)
for i, margin in enumerate(margin_list):
print(i+1, margin.text)
print()
###Output
tc
1 Il est mieulx de chaufer<lb/> un peu le <m>vernis</m><lb/> que le coucher <env>au<lb/> soleil</env> pourceque<lb/> cela faict <po>enveler</po> <lb/> le tableau
2 Aulcuns disent quil<lb/> nest bon de distiler dans<lb/> ce <tl>vaisseau de <m>cuivre</m></tl><lb/> pourcequil faict vert<lb/> Touteffois <m>estame</m> il<lb/> est bon
tcn
1 Il est mieulx de chaufer<lb/> un peu le <m>vernis</m><lb/> que le coucher <env>au<lb/> soleil</env>, pource que<lb/> cela faict <po>enveler</po><lb/> le tableau.
2 Aulcuns disent qu’il<lb/> n’est bon de distiler dans<lb/> ce <tl>vaisseau de <m>cuivre,</m></tl><lb/> pource qu’il faict vert.<lb/> Touteffois <m>estamé</m> il<lb/> est bon.
tl
1 It is better to heat the <m>varnish</m> a little bit, rather than to put it out <env>in the sun</env>, because this makes the panel <po>warp</po>.
2 Some say it is not good to distil in this <tl><m>copper</m> vessel</tl> because it makes things green. However, when <m>tinned</m>, it is good.
|
13_mosaic_sparsity_regularizer/old codes and plots/older/non interpretable codes/Synthetic_elliptical_blobs_non_interpretable_300_50.ipynb | ###Markdown
**Focus Net**
###Code
class Focus_deep(nn.Module):
'''
deep focus network averaged at zeroth layer
input : elemental data
'''
def __init__(self,inputs,output,K,d):
super(Focus_deep,self).__init__()
self.inputs = inputs
self.output = output
self.K = K
self.d = d
self.linear1 = nn.Linear(self.inputs,300) #,self.output)
self.linear2 = nn.Linear(300,self.output)
def forward(self,z):
batch = z.shape[0]
x = torch.zeros([batch,self.K],dtype=torch.float64)
y = torch.zeros([batch,self.d], dtype=torch.float64)
x,y = x.to("cuda"),y.to("cuda")
for i in range(self.K):
x[:,i] = self.helper(z[:,i] )[:,0] # self.d*i:self.d*i+self.d
x = F.softmax(x,dim=1) # alphas
x1 = x[:,0]
for i in range(self.K):
x1 = x[:,i]
y = y+torch.mul(x1[:,None],z[:,i]) # self.d*i:self.d*i+self.d
return y , x
def helper(self,x):
x = F.relu(self.linear1(x))
x = self.linear2(x)
return x
###Output
_____no_output_____
###Markdown
**Classification Net**
###Code
class Classification_deep(nn.Module):
'''
input : elemental data
deep classification module data averaged at zeroth layer
'''
def __init__(self,inputs,output):
super(Classification_deep,self).__init__()
self.inputs = inputs
self.output = output
self.linear1 = nn.Linear(self.inputs,50)
self.linear2 = nn.Linear(50,self.output)
def forward(self,x):
x = F.relu(self.linear1(x))
x = self.linear2(x)
return x
###Output
_____no_output_____
###Markdown
###Code
where = Focus_deep(5,1,9,5).double()
what = Classification_deep(5,3).double()
where = where.to("cuda")
what = what.to("cuda")
def calculate_attn_loss(dataloader,what,where,criter):
what.eval()
where.eval()
r_loss = 0
alphas = []
lbls = []
pred = []
fidices = []
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels,fidx = data
lbls.append(labels)
fidices.append(fidx)
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg,alpha = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
alphas.append(alpha.cpu().numpy())
loss = criter(outputs, labels)
r_loss += loss.item()
alphas = np.concatenate(alphas,axis=0)
pred = np.concatenate(pred,axis=0)
lbls = np.concatenate(lbls,axis=0)
fidices = np.concatenate(fidices,axis=0)
#print(alphas.shape,pred.shape,lbls.shape,fidices.shape)
analysis = analyse_data(alphas,lbls,pred,fidices)
return r_loss/i,analysis
def analyse_data(alphas,lbls,predicted,f_idx):
'''
analysis data is created here
'''
batch = len(predicted)
amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0
for j in range (batch):
focus = np.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
amth +=1
else:
alth +=1
if(focus == f_idx[j] and predicted[j] == lbls[j]):
ftpt += 1
elif(focus != f_idx[j] and predicted[j] == lbls[j]):
ffpt +=1
elif(focus == f_idx[j] and predicted[j] != lbls[j]):
ftpf +=1
elif(focus != f_idx[j] and predicted[j] != lbls[j]):
ffpf +=1
#print(sum(predicted==lbls),ftpt+ffpt)
return [ftpt,ffpt,ftpf,ffpf,amth,alth]
print("--"*40)
criterion = nn.CrossEntropyLoss()
optimizer_where = optim.Adam(where.parameters(),lr =0.001)
optimizer_what = optim.Adam(what.parameters(), lr=0.001)
acti = []
loss_curi = []
analysis_data = []
epochs = 1000
running_loss,anlys_data = calculate_attn_loss(train_loader,what,where,criterion)
loss_curi.append(running_loss)
analysis_data.append(anlys_data)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
what.train()
where.train()
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels,_ = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_where.zero_grad()
optimizer_what.zero_grad()
# forward + backward + optimize
avg, alpha = where(inputs)
outputs = what(avg)
loss = criterion(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_where.step()
optimizer_what.step()
running_loss,anls_data = calculate_attn_loss(train_loader,what,where,criterion)
analysis_data.append(anls_data)
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.01:
break
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in train_loader:
images, labels,_ = data
images = images.double()
images, labels = images.to("cuda"), labels.to("cuda")
avg, alpha = where(images)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 train images: %d %%' % ( 100 * correct / total))
analysis_data = np.array(analysis_data)
plt.figure(figsize=(6,6))
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0],label="ftpt")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1],label="ffpt")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2],label="ftpf")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3],label="ffpf")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.savefig("trends_synthetic_300_300.png",bbox_inches="tight")
plt.savefig("trends_synthetic_300_300.pdf",bbox_inches="tight")
analysis_data[-1,:2]/3000
running_loss,anls_data = calculate_attn_loss(train_loader,what,where,criterion)
print(running_loss, anls_data)
what.eval()
where.eval()
alphas = []
max_alpha =[]
alpha_ftpt=[]
alpha_ffpt=[]
alpha_ftpf=[]
alpha_ffpf=[]
argmax_more_than_half=0
argmax_less_than_half=0
cnt =0
with torch.no_grad():
for i, data in enumerate(train_loader, 0):
inputs, labels, fidx = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg, alphas = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
batch = len(predicted)
mx,_ = torch.max(alphas,1)
max_alpha.append(mx.cpu().detach().numpy())
for j in range (batch):
cnt+=1
focus = torch.argmax(alphas[j]).item()
if alphas[j][focus] >= 0.5 :
argmax_more_than_half += 1
else:
argmax_less_than_half += 1
if (focus == fidx[j].item() and predicted[j].item() == labels[j].item()):
alpha_ftpt.append(alphas[j][focus].item())
# print(focus, fore_idx[j].item(), predicted[j].item() , labels[j].item() )
elif (focus != fidx[j].item() and predicted[j].item() == labels[j].item()):
alpha_ffpt.append(alphas[j][focus].item())
elif (focus == fidx[j].item() and predicted[j].item() != labels[j].item()):
alpha_ftpf.append(alphas[j][focus].item())
elif (focus != fidx[j].item() and predicted[j].item() != labels[j].item()):
alpha_ffpf.append(alphas[j][focus].item())
max_alpha = np.concatenate(max_alpha,axis=0)
print(max_alpha.shape, cnt)
np.array(alpha_ftpt).size, np.array(alpha_ffpt).size, np.array(alpha_ftpf).size, np.array(alpha_ffpf).size
plt.figure(figsize=(6,6))
_,bins,_ = plt.hist(max_alpha,bins=50,color ="c")
plt.title("alpha values histogram")
plt.savefig("attention_model_2_hist")
plt.figure(figsize=(6,6))
_,bins,_ = plt.hist(np.array(alpha_ftpt),bins=50,color ="c")
plt.title("alpha values in ftpt")
plt.savefig("attention_model_2_hist")
plt.figure(figsize=(6,6))
_,bins,_ = plt.hist(np.array(alpha_ffpt),bins=50,color ="c")
plt.title("alpha values in ffpt")
plt.savefig("attention_model_2_hist")
###Output
_____no_output_____ |
test/Rev_SolverDescSVD.ipynb | ###Markdown
Code review for: Linear solver approximating the SVD decomposition using the One-sided Jacobi algorithm **Date:** April 9, 2020, 8:30 pm **Reviewers:** Javier Valencia and León Garay **Reviewed code**
###Code
# Solver function
solver <- function(U,S,V,b){
# Function that returns the solution of the system of equations Ax = b.
# The backsolve function was used to solve the triangular system.
# NOTE: since S is diagonal, it does not matter whether it is treated as lower or upper triangular.
# Args: U (mxm), V (nxn), S (mxn) diagonal matrix and b (m) a vector.
# Returns: vector x
d = backsolve(t(U),b)
x = V%*%d
return(x)
}
x <- c(0,0,1)
y <- c(1,0,0)
ortogonal(x,y,.0001)
###Output
[1] "1"
|
project_1/conv_net2.ipynb | ###Markdown
###Code
import torch
from torch import nn
import torch.optim as optim
from torch.nn import functional as F
from torch.optim.lr_scheduler import StepLR
import matplotlib.pyplot as plt
%matplotlib inline
#import dlc_practical_prologue as prolog
from google.colab import drive
drive.mount('/content/drive')
import sys
sys.path.append('/content/drive/My Drive')
import dlc_practical_prologue as prolog
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
def nb_errors(pred, truth):
pred_class = pred.argmax(1)
return (pred_class - truth != 0).sum().item()
def train_model(model, train_input, train_target, test_input, test_target, epochs=500, batch_size=100, lr=0.1):
torch.nn.init.xavier_uniform_(model.conv1.weight)
torch.nn.init.xavier_uniform_(model.conv2.weight)
optimizer = torch.optim.Adam(model.parameters())
#scheduler = StepLR(optimizer, step_size=100, gamma=0.1)
train_loss = []
test_loss = []
test_accuracy = []
best_accuracy = 0
best_epoch = 0
for i in range(epochs):
model.train()
for b in range(0, train_input.size(0), batch_size):
output = model(train_input.narrow(0, b, batch_size))
criterion = torch.nn.CrossEntropyLoss()
loss = criterion(output, train_target.narrow(0, b, batch_size))
optimizer.zero_grad()
loss.backward()
optimizer.step()
#scheduler.step()
output_train = model(train_input)
model.eval()
output_test = model(test_input)
train_loss.append(criterion(output_train, train_target).item())
test_loss.append(criterion(output_test, test_target).item())
accuracy = 1 - nb_errors(output_test, test_target) / 1000
if accuracy > best_accuracy:
best_accuracy = accuracy
best_epoch = i+1
test_accuracy.append(accuracy)
if i%100 == 0:
print('Epoch : ',i+1, '\t', 'test loss :', test_loss[-1], '\t', 'train loss', train_loss[-1])
return train_loss, test_loss, test_accuracy, best_accuracy
def nb_errors_10(pred, truth):
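# The 20-dim logits are viewed as (batch, 2 images, 10 digit classes); argmax(2) gives the
# predicted digit of each image, and argmax(1) picks the index of the image with the larger
# predicted digit, which is used as the pair label before comparing with the target.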
pred_class = pred.view(-1, 2, 10).argmax(2).argmax(1)
return (pred_class - truth != 0).sum().item()
def train_model_10(model, train_input, train_classes, test_input, test_target, test_classes,\
epochs=250, batch_size=100, lr=0.1):
torch.nn.init.xavier_uniform_(model.conv1.weight)
torch.nn.init.xavier_uniform_(model.conv2.weight)
optimizer = torch.optim.Adam(model.parameters())
train_loss = []
test_loss = []
test_accuracy = []
best_accuracy = 0
best_epoch = 0
for i in range(epochs):
for b in range(0, train_input.size(0), batch_size):
output = model(train_input.narrow(0, b, batch_size))
criterion = torch.nn.CrossEntropyLoss()
labels = train_classes.narrow(0, b, batch_size)
loss = criterion(output, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
output_train = model(train_input)
output_test = model(test_input)
train_loss.append(criterion(output_train, train_classes).item())
test_loss.append(criterion(output_test, test_classes).item())
accuracy = 1 - nb_errors_10(output_test, test_target) / 1000
if accuracy > best_accuracy:
best_accuracy = accuracy
best_epoch = i+1
test_accuracy.append(accuracy)
#if i%100 == 0:
#print('Epoch : ',i+1, '\t', 'test loss :', test_loss[-1], '\t', 'train loss', train_loss[-1])
return train_loss, test_loss, test_accuracy, best_accuracy, best_epoch
###Output
_____no_output_____
###Markdown
Direct approach with 2-dim output
###Code
class BaseLine(nn.Module):
def __init__(self, nb_hidden = 50):
super(BaseLine, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 2, out_channels = 4, kernel_size=2)
self.conv2 = nn.Conv2d(4, 8, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(32, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 2)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 32)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
class ConvNet_1(nn.Module):
def __init__(self, nb_hidden):
super(ConvNet_1, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 2, out_channels = 4, kernel_size=2)
self.conv2 = nn.Conv2d(4, 16, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(64, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 2)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 64)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
class ConvNet_2(nn.Module):
def __init__(self, nb_hidden):
super(ConvNet_2, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 2, out_channels = 8, kernel_size=2)
self.conv2 = nn.Conv2d(8, 32, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(128, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 2)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 128)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
class ConvNet_3(nn.Module):
def __init__(self, nb_hidden):
super(ConvNet_3, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 2, out_channels = 16, kernel_size=2)
self.conv2 = nn.Conv2d(16, 64, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(256, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 2)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 256)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
class ConvNet_4(nn.Module):
def __init__(self, nb_hidden):
super(ConvNet_4, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 2, out_channels = 32, kernel_size=2)
self.conv2 = nn.Conv2d(32, 128, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(512, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 2)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 512)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
baseline = BaseLine()
model_1_0 = ConvNet_1(50)
model_1_1 = ConvNet_1(100)
model_1_2 = ConvNet_1(250)
model_1_3 = ConvNet_1(1000)
model_2_0 = ConvNet_2(50)
model_2_1 = ConvNet_2(100)
model_2_2 = ConvNet_2(250)
model_2_3 = ConvNet_2(1000)
model_3_0 = ConvNet_3(50)
model_3_1 = ConvNet_3(100)
model_3_2 = ConvNet_3(250)
model_3_3 = ConvNet_3(1000)
model_4_0 = ConvNet_4(50)
model_4_1 = ConvNet_4(100)
model_4_2 = ConvNet_4(250)
model_4_3 = ConvNet_4(1000)
print(count_parameters(baseline),
count_parameters(model_1_0),
count_parameters(model_2_0),
count_parameters(model_3_0),
count_parameters(model_4_0))
models = [baseline, model_1_0,model_1_1, model_1_2, model_1_3,
model_2_0,model_2_1, model_2_2, model_2_3,
model_3_0,model_3_1, model_3_2, model_3_3,
model_4_0,model_4_1, model_4_2, model_4_3]
if torch.cuda.is_available():
device = torch.device("cuda:0")
print("Running on the GPU")
else:
device = torch.device("cpu")
print("Running on the CPU")
for model in models:
model.to(device)
import time
start = time.time()
epochs = 200
accuracies = torch.empty(17, 4, dtype=torch.float)
for i in range(4):
train_input, train_target, train_classes, test_input, test_target, test_classes = prolog.generate_pair_sets(1000)
train_input = train_input.cuda()
train_target = train_target.cuda()
test_input = test_input.cuda()
test_target = test_target.cuda()
for j in range(17):
_, _, _, best_accuracy = train_model(models[j], train_input, train_target, test_input,\
test_target, epochs=epochs, lr = 0.08)
print('Model', j , best_accuracy)
accuracies[j][i] = best_accuracy
minute = (time.time()-start) / 60
print('It took', minute, 'minutes.')
accuracies.mean(1)
accuracies.std(1)
###Output
_____no_output_____
###Markdown
Models with a bigger number of parameters seem to work better; let's try with even more trainable parameters
###Code
class bigConvNet_1(nn.Module):
def __init__(self, nb_hidden):
super(bigConvNet_1, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 2, out_channels = 64, kernel_size=2)
self.conv2 = nn.Conv2d(64, 128, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(4*128, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 2)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 4*128)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
class bigConvNet_2(nn.Module):
def __init__(self, nb_hidden):
super(bigConvNet_2, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 2, out_channels = 64, kernel_size=2)
self.conv2 = nn.Conv2d(64, 256, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(4*256, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 2)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 4*256)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
class bigConvNet_3(nn.Module):
def __init__(self, nb_hidden):
super(bigConvNet_3, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 2, out_channels = 100, kernel_size=2)
self.conv2 = nn.Conv2d(100, 200, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(4*200, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 2)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 4*200)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
class bigConvNet_4(nn.Module):
def __init__(self, nb_hidden):
super(bigConvNet_4, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 2, out_channels = 128, kernel_size=2)
self.conv2 = nn.Conv2d(128, 1024, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(4*1024, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 2)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 4*1024)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
baseline = BaseLine()
model_1_0 = bigConvNet_1(50)
model_1_1 = bigConvNet_1(100)
model_1_2 = bigConvNet_1(500)
model_1_3 = bigConvNet_1(1000)
model_2_0 = bigConvNet_2(50)
model_2_1 = bigConvNet_2(100)
model_2_2 = bigConvNet_2(500)
model_2_3 = bigConvNet_2(1000)
model_3_0 = bigConvNet_3(50)
model_3_1 = bigConvNet_3(100)
model_3_2 = bigConvNet_3(500)
model_3_3 = bigConvNet_3(1000)
model_4_0 = bigConvNet_4(50)
model_4_1 = bigConvNet_4(100)
model_4_2 = bigConvNet_4(500)
model_4_3 = bigConvNet_4(1000)
print(count_parameters(baseline),
count_parameters(model_1_0),
count_parameters(model_2_0),
count_parameters(model_3_0),
count_parameters(model_4_0))
models = [baseline, model_1_0,model_1_1, model_1_2, model_1_3,
model_2_0,model_2_1, model_2_2, model_2_3,
model_3_0,model_3_1, model_3_2, model_3_3,
model_4_0,model_4_1, model_4_2, model_4_3]
if torch.cuda.is_available():
device = torch.device("cuda:0")
print("Running on the GPU")
else:
device = torch.device("cpu")
print("Running on the CPU")
for model in models:
model.to(device)
import time
start = time.time()
epochs = 200
accuracies = torch.empty(17, 4, dtype=torch.float)
for i in range(4):
train_input, train_target, train_classes, test_input, test_target, test_classes = prolog.generate_pair_sets(1000)
if torch.cuda.is_available():
train_input = train_input.cuda()
train_target = train_target.cuda()
test_input = test_input.cuda()
test_target = test_target.cuda()
for j in range(17):
_, _, _, best_accuracy = train_model(models[j], train_input, train_target, test_input,\
test_target, epochs=epochs, lr = 0.08)
print('Model', j+1 , best_accuracy)
accuracies[j][i] = best_accuracy
minute = (time.time()-start) / 60
print('It took', minute, 'minutes.')
accuracies.mean(1)
accuracies.std(1)
###Output
_____no_output_____
###Markdown
Some models show poor results, possibly because their large number of parameters calls for a bigger dense layer, but overall it now seems that increasing the number of parameters does not improve performance. For further hyperparameter exploration I'll keep ConvNet_3, ConvNet_4 and bigConvNet_1, the models that show the best accuracies with the fewest parameters. Direct approach with 10-dim output and deterministic comparison
###Code
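# The networks below output 10-dim digit logits for each image; the pair prediction is
# then obtained with a deterministic comparison of the two predicted digits (the actual
# logic lives in `train_model_10`, defined earlier in the notebook). A minimal sketch of
# such a comparison step, assuming the pair target is 1 when the first digit is less
# than or equal to the second; exact details may differ:
def deterministic_comparison(logits):
    # logits: (2*N, 10) tensor, the two images of each pair stored consecutively
    digits = logits.argmax(dim=1).view(-1, 2)     # (N, 2) predicted digits per pair
    return (digits[:, 0] <= digits[:, 1]).long()  # (N,) predicted pair targets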
class BaseLine(nn.Module):
def __init__(self, nb_hidden = 50):
super(BaseLine, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 1, out_channels = 4, kernel_size=2)
self.conv2 = nn.Conv2d(4, 8, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(32, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 10)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 32)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
class ConvNet_1(nn.Module):
def __init__(self, nb_hidden):
super(ConvNet_1, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 1, out_channels = 4, kernel_size=2)
self.conv2 = nn.Conv2d(4, 16, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(64, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 10)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 64)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
class ConvNet_2(nn.Module):
def __init__(self, nb_hidden):
super(ConvNet_2, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 1, out_channels = 8, kernel_size=2)
self.conv2 = nn.Conv2d(8, 32, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(128, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 10)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 128)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
class ConvNet_3(nn.Module):
def __init__(self, nb_hidden):
super(ConvNet_3, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 1, out_channels = 16, kernel_size=2)
self.conv2 = nn.Conv2d(16, 64, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(256, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 10)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 256)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
class ConvNet_4(nn.Module):
def __init__(self, nb_hidden):
super(ConvNet_4, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 1, out_channels = 32, kernel_size=2)
self.conv2 = nn.Conv2d(32, 128, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(512, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 10)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 512)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
baseline = BaseLine()
model_1_0 = ConvNet_1(50)
model_1_1 = ConvNet_1(100)
model_1_2 = ConvNet_1(250)
model_1_3 = ConvNet_1(1000)
model_2_0 = ConvNet_2(50)
model_2_1 = ConvNet_2(100)
model_2_2 = ConvNet_2(250)
model_2_3 = ConvNet_2(1000)
model_3_0 = ConvNet_3(50)
model_3_1 = ConvNet_3(100)
model_3_2 = ConvNet_3(250)
model_3_3 = ConvNet_3(1000)
model_4_0 = ConvNet_4(50)
model_4_1 = ConvNet_4(100)
model_4_2 = ConvNet_4(250)
model_4_3 = ConvNet_4(1000)
print(count_parameters(baseline),
count_parameters(model_1_0),
count_parameters(model_2_0),
count_parameters(model_3_0),
count_parameters(model_4_0))
models = [baseline, model_1_0,model_1_1, model_1_2, model_1_3,
model_2_0,model_2_1, model_2_2, model_2_3,
model_3_0,model_3_1, model_3_2, model_3_3,
model_4_0,model_4_1, model_4_2, model_4_3]
if torch.cuda.is_available():
device = torch.device("cuda:0")
print("Running on the GPU")
else:
device = torch.device("cpu")
print("Running on the CPU")
for model in models:
model.to(device)
import time
start = time.time()
epochs = 200
accuracies = torch.empty(17, 4, dtype=torch.float)
for i in range(4):
train_input, train_target, train_classes, test_input, test_target, test_classes = prolog.generate_pair_sets(1000)
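    # Flatten the (N, 2, 14, 14) pair tensor into (2N, 1, 14, 14) so that each digit
    # image is classified independently, and reshape the class labels to match.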
    train_input = train_input.view(-1, 14, 14).unsqueeze(1).to(device)
    test_input = test_input.view(-1, 14, 14).unsqueeze(1).to(device)
    train_classes = train_classes.view(2000).to(device)
    test_classes = test_classes.view(2000).to(device)
    test_target = test_target.to(device)
for j in range(17):
_, _, _, best_accuracy = train_model_10(models[j], train_input, train_classes, test_input, test_target, test_classes,\
epochs=epochs, lr = 0.08)
print('Model', j , best_accuracy)
accuracies[j][i] = best_accuracy
minute = (time.time()-start) / 60
print('It took', minute, 'minutes.')
accuracies.mean(1)
accuracies.std(1)
###Output
_____no_output_____
###Markdown
Once again, models with more parameters show better results, so let's try the 10-dim approach with the bigConvNet architectures
###Code
class bigConvNet_1(nn.Module):
def __init__(self, nb_hidden):
super(bigConvNet_1, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 1, out_channels = 64, kernel_size=2)
self.conv2 = nn.Conv2d(64, 128, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(4*128, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 10)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 4*128)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
class bigConvNet_2(nn.Module):
def __init__(self, nb_hidden):
super(bigConvNet_2, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 1, out_channels = 64, kernel_size=2)
self.conv2 = nn.Conv2d(64, 256, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(4*256, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 10)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 4*256)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
class bigConvNet_3(nn.Module):
def __init__(self, nb_hidden):
super(bigConvNet_3, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 1, out_channels = 100, kernel_size=2)
self.conv2 = nn.Conv2d(100, 200, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(4*200, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 10)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 4*200)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
class bigConvNet_4(nn.Module):
def __init__(self, nb_hidden):
super(bigConvNet_4, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 1, out_channels = 128, kernel_size=2)
self.conv2 = nn.Conv2d(128, 1024, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(4*1024, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 10)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 4*1024)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
baseline = BaseLine()
model_1_0 = bigConvNet_1(50)
model_1_1 = bigConvNet_1(100)
model_1_2 = bigConvNet_1(500)
model_1_3 = bigConvNet_1(1000)
model_2_0 = bigConvNet_2(50)
model_2_1 = bigConvNet_2(100)
model_2_2 = bigConvNet_2(500)
model_2_3 = bigConvNet_2(1000)
model_3_0 = bigConvNet_3(50)
model_3_1 = bigConvNet_3(100)
model_3_2 = bigConvNet_3(500)
model_3_3 = bigConvNet_3(1000)
model_4_0 = bigConvNet_4(50)
model_4_1 = bigConvNet_4(100)
model_4_2 = bigConvNet_4(500)
model_4_3 = bigConvNet_4(1000)
print(count_parameters(baseline),
count_parameters(model_1_0),
count_parameters(model_2_0),
count_parameters(model_3_0),
count_parameters(model_4_0))
models = [baseline, model_1_0,model_1_1, model_1_2, model_1_3,
model_2_0,model_2_1, model_2_2, model_2_3,
model_3_0,model_3_1, model_3_2, model_3_3,
model_4_0,model_4_1, model_4_2, model_4_3]
if torch.cuda.is_available():
device = torch.device("cuda:0")
print("Running on the GPU")
else:
device = torch.device("cpu")
print("Running on the CPU")
for model in models:
model.to(device)
import time
start = time.time()
epochs = 200
accuracies = torch.empty(17, 4, dtype=torch.float)
for i in range(4):
train_input, train_target, train_classes, test_input, test_target, test_classes = prolog.generate_pair_sets(1000)
    train_input = train_input.view(-1, 14, 14).unsqueeze(1).to(device)
    test_input = test_input.view(-1, 14, 14).unsqueeze(1).to(device)
    train_classes = train_classes.view(2000).to(device)
    test_classes = test_classes.view(2000).to(device)
    test_target = test_target.to(device)
for j in range(17):
_, _, _, best_accuracy = train_model_10(models[j], train_input, train_classes, test_input, test_target, test_classes,\
epochs=epochs, lr = 0.08)
print('Model', j , best_accuracy)
accuracies[j][i] = best_accuracy
minute = (time.time()-start) / 60
print('It took', minute, 'minutes.')
accuracies.mean(1)
accuracies.std(1)
###Output
_____no_output_____
###Markdown
It looks like bigConvNet_1 gives the best results without increasing the number of parameters too much. Next, we try the best models with various dropout values.
###Code
class ConvNet_4(nn.Module):
def __init__(self, nb_hidden, dp1, dp2):
super(ConvNet_4, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 1, out_channels = 32, kernel_size=2)
self.conv2 = nn.Conv2d(32, 128, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(512, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 10)
self.dropout1 = nn.Dropout2d(dp1)
self.dropout2 = nn.Dropout2d(dp2)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 512)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
class bigConvNet_3(nn.Module):
def __init__(self, nb_hidden, dp1, dp2):
super(bigConvNet_3, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 1, out_channels = 100, kernel_size=2)
self.conv2 = nn.Conv2d(100, 200, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(4*200, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 10)
self.dropout1 = nn.Dropout2d(dp1)
self.dropout2 = nn.Dropout2d(dp2)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 4*200)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
model1 = ConvNet_4(1000, 0.0, 0.0)
model2 = ConvNet_4(1000, 0.5, 0.0)
model3 = ConvNet_4(1000, 0.0, 0.5)
model4 = ConvNet_4(1000, 0.5, 0.5)
model5 = bigConvNet_3(500, 0.0, 0.0)
model6 = bigConvNet_3(500, 0.5, 0.0)
model7 = bigConvNet_3(500, 0.0, 0.5)
model8 = bigConvNet_3(500, 0.5, 0.5)
models = [model1, model2, model3, model4, model5, model6, model7, model8]
accuracies = torch.empty(8, 10, dtype=torch.float)
for i in range(10):
train_input, train_target, train_classes, test_input, test_target, test_classes = prolog.generate_pair_sets(1000)
train_input = train_input.view(-1, 14, 14).unsqueeze(1)
test_input = test_input.view(-1, 14, 14).unsqueeze(1)
train_classes = train_classes.view(2000)
test_classes = test_classes.view(2000)
test_target = test_target
for j in range(8):
if i > 3:
epochs = 200
else:
epochs = 100
_, _, _, best_accuracy, _ = train_model_10(models[j], train_input, train_classes, test_input, test_target,\
test_classes, epochs=epochs, lr = 0.08)
accuracies[j][i] = best_accuracy
accuracies.mean(1)
accuracies.std(1)
accuracies.median(1)
accuracies.max(1)
accuracies.min(1)
model1 = bigConvNet_3(500, 0.0, 0.25)
model2 = bigConvNet_3(500, 0.25, 0.25)
model3 = bigConvNet_3(500, 0.25, 0.5)
models = [model1, model2, model3]
accuracies = torch.empty(3, 10, dtype=torch.float)
epochs = 200
for i in range(10):
train_input, train_target, train_classes, test_input, test_target, test_classes = prolog.generate_pair_sets(1000)
train_input = train_input.view(-1, 14, 14).unsqueeze(1)
test_input = test_input.view(-1, 14, 14).unsqueeze(1)
train_classes = train_classes.view(2000)
test_classes = test_classes.view(2000)
test_target = test_target
for j in range(3):
_, _, _, best_accuracy, _ = train_model_10(models[j], train_input, train_classes, test_input, test_target,\
test_classes, epochs=epochs, lr = 0.08)
accuracies[j][i] = best_accuracy
accuracies.mean(1)
accuracies.std(1)
accuracies.median(1)
accuracies.max(1)
accuracies.min(1)
###Output
_____no_output_____
###Markdown
Learn the comparison
###Code
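# In `train_double_model` below, the digit classifier `model` is trained on the class
# labels for 100 epochs; from epoch 30 onwards, the detached 10-dim outputs of the two
# images of each pair are subtracted and each small MLP "comparison" head is retrained
# on that difference to predict the pair target, so the comparison itself is learned
# rather than hard-coded.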
def train_double_model(model, comparisons, train_input, train_target, train_classes, test_input, test_target,\
test_classes, epochs=50, batch_size=100, lr=0.1):
torch.nn.init.xavier_uniform_(model.conv1.weight)
torch.nn.init.xavier_uniform_(model.conv2.weight)
torch.nn.init.xavier_uniform_(model.fc1.weight)
torch.nn.init.xavier_uniform_(model.fc2.weight)
for comparison in comparisons:
for i in range(0, len(comparison), 2):
torch.nn.init.xavier_uniform_(comparison[i].weight)
optimizer_model = torch.optim.Adam(model.parameters())
optimizer_comparisons = []
for comparison in comparisons:
optimizer_comparisons.append(torch.optim.Adam(comparison.parameters()))
train_loss = torch.empty(70, len(comparisons), dtype=torch.float)
test_loss = torch.empty(70, len(comparisons), dtype=torch.float)
test_accuracy = torch.empty(70, len(comparisons), dtype=torch.float)
    best_accuracy = torch.zeros(1, len(comparisons), dtype=torch.float)
    best_epoch = torch.zeros(1, len(comparisons), dtype=torch.float)
for i in range(100):
for b in range(0, train_input.size(0), batch_size):
output = model(train_input.narrow(0, b, batch_size))
criterion = torch.nn.CrossEntropyLoss()
loss1 = criterion(output, train_classes.narrow(0, b, batch_size))
optimizer_model.zero_grad()
loss1.backward(retain_graph=True)
optimizer_model.step()
mid_train = model(train_input).detach()
mid_test = model(test_input).detach()
mid_train_ = torch.zeros(int(mid_train.size(0) / 2), 10)
mid_test_ = torch.zeros(int(mid_test.size(0) / 2), 10)
for j in range(int(mid_train.size(0) / 2)):
mid_train_[j,:] = mid_train[2*j,:] - mid_train[2*j + 1,:]
mid_test_[j,:] = mid_test[2*j,:] - mid_test[2*j + 1,:]
if i >= 30:
for j in range(len(comparisons)):
for k in range(epochs):
for b in range(0, mid_train_.size(0), batch_size):
output = comparisons[j](mid_train_.narrow(0, b, batch_size))
loss2 = criterion(output, train_target.narrow(0, b, batch_size))
optimizer_comparisons[j].zero_grad()
loss2.backward()
optimizer_comparisons[j].step()
output_train = comparisons[j](mid_train_)
output_test = comparisons[j](mid_test_)
train_loss[i-30][j] = criterion(output_train, train_target).item()
test_loss[i-30][j] = criterion(output_test, test_target).item()
accuracy = 1 - nb_errors(output_test, test_target) / 1000
if accuracy > best_accuracy[0][j]:
best_accuracy[0][j] = accuracy
best_epoch[0][j] = i+1
test_accuracy[i-30][j] = accuracy
return train_loss, test_loss, test_accuracy, best_accuracy, best_epoch
intermediate_dim = 10
output_dim = 2
model = bigConvNet_3(500, 0.0, 0.25)
comparison1 = nn.Sequential(nn.Linear(intermediate_dim, 32), nn.ReLU(), nn.Linear(32, output_dim))
comparison2 = nn.Sequential(nn.Linear(intermediate_dim, 128), nn.ReLU(), nn.Linear(128, output_dim))
comparison3 = nn.Sequential(nn.Linear(intermediate_dim, 512), nn.ReLU(), nn.Linear(512, output_dim))
comparison4 = nn.Sequential(nn.Linear(intermediate_dim, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU(),\
nn.Linear(32, output_dim))
comparison5 = nn.Sequential(nn.Linear(intermediate_dim, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU(),\
nn.Linear(128, output_dim))
comparisons = [comparison1, comparison2, comparison3, comparison4, comparison5]
accuracies = torch.empty(5, 10, dtype=torch.float)
epochs = 50
for i in range(10):
train_input, train_target, train_classes, test_input, test_target, test_classes = prolog.generate_pair_sets(1000)
train_input = train_input.view(-1, 14, 14).unsqueeze(1)
test_input = test_input.view(-1, 14, 14).unsqueeze(1)
train_classes = train_classes.view(2000)
test_classes = test_classes.view(2000)
test_target = test_target
_, _, _, best_accuracy, _ = train_double_model(model, comparisons, train_input, train_target, train_classes,\
test_input, test_target, test_classes, epochs=epochs, lr = 0.08)
accuracies[:, i] = best_accuracy
accuracies.mean(1)
accuracies.std(1)
accuracies.median(1)
accuracies.max(1)
accuracies.min(1)
comparison1 = nn.Sequential(nn.Linear(intermediate_dim, 1000), nn.ReLU(), nn.Linear(1000, output_dim))
comparison2 = nn.Sequential(nn.Linear(intermediate_dim, 1500), nn.ReLU(), nn.Linear(1500, output_dim))
comparison3 = nn.Sequential(nn.Linear(intermediate_dim, 256), nn.ReLU(), nn.Linear(256, 256), nn.ReLU(),\
nn.Linear(256, output_dim))
comparison4 = nn.Sequential(nn.Linear(intermediate_dim, 512), nn.ReLU(), nn.Linear(512, 512), nn.ReLU(),\
nn.Linear(512, output_dim))
comparison5 = nn.Sequential(nn.Linear(intermediate_dim, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU(),\
nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, output_dim))
comparisons = [comparison1, comparison2, comparison3, comparison4, comparison5]
accuracies = torch.empty(5, 10, dtype=torch.float)
epochs = 50
for i in range(10):
train_input, train_target, train_classes, test_input, test_target, test_classes = prolog.generate_pair_sets(1000)
train_input = train_input.view(-1, 14, 14).unsqueeze(1)
test_input = test_input.view(-1, 14, 14).unsqueeze(1)
train_classes = train_classes.view(2000)
test_classes = test_classes.view(2000)
test_target = test_target
_, _, _, best_accuracy, _ = train_double_model(model, comparisons, train_input, train_target, train_classes,\
test_input, test_target, test_classes, epochs=epochs, lr = 0.08)
accuracies[:, i] = best_accuracy
accuracies.mean(1)
accuracies.std(1)
accuracies.median(1)
accuracies.max(1)
accuracies.min(1)
accuracies
###Output
_____no_output_____
###Markdown
Siamese
###Code
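# In the siamese setup below, a single bigConvNet_3 trunk (shared weights) is applied to
# both images of a pair; the difference of the two 10-dim outputs is fed to a small
# `comparison` head, and the training loss combines the two auxiliary digit-classification
# losses with the pair-comparison loss, weighted by `alpha`.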
class bigConvNet_3(nn.Module):
def __init__(self, nb_hidden, dp1, dp2):
super(bigConvNet_3, self).__init__()
self.conv1 = nn.Conv2d(in_channels = 1, out_channels = 100, kernel_size=2)
self.conv2 = nn.Conv2d(100, 200, kernel_size=2, stride = 1)
self.fc1 = nn.Linear(4*200, nb_hidden)
self.fc2 = nn.Linear(nb_hidden, 10)
self.dropout1 = nn.Dropout2d(dp1)
self.dropout2 = nn.Dropout2d(dp2)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), (2, 2)))
x = self.dropout1(x)
x = F.relu(F.max_pool2d(self.conv2(x), 2))
x = x.view(-1, 4*200)
x = self.dropout1(x)
x = F.relu(self.fc1(x))
x = self.dropout2(x)
x = self.fc2(x)
return x
class Siamese(nn.Module):
def __init__(self, comparison):
super(Siamese, self).__init__()
self.model = bigConvNet_3(500, 0.0, 0.25)
self.comparison = comparison
def forward1(self, x):
mid = self.model(x)
return mid
def forward2(self, mid1, mid2):
mid = mid1 - mid2
out = self.comparison(mid)
return out
def train_siamese_model(siamese, train_input, train_target, train_classes, test_input, test_target,\
test_classes, epochs=100 , batch_size=100, lr=0.08, alpha=0.5):
torch.nn.init.xavier_uniform_(siamese.model.conv1.weight)
torch.nn.init.xavier_uniform_(siamese.model.conv2.weight)
torch.nn.init.xavier_uniform_(siamese.model.fc1.weight)
torch.nn.init.xavier_uniform_(siamese.model.fc2.weight)
for i in range(0, len(siamese.comparison), 2):
torch.nn.init.xavier_uniform_(siamese.comparison[i].weight)
optimizer = torch.optim.Adam(siamese.parameters())
train_loss = []
test_loss = []
test_accuracy = []
best_accuracy = 0
best_epoch = 0
for i in range(epochs):
for b in range(0, train_input.size(0), batch_size):
output1 = siamese.forward1(train_input.narrow(0, b, batch_size)[:,0,:,:].unsqueeze(dim=1))
output2 = siamese.forward1(train_input.narrow(0, b, batch_size)[:,1,:,:].unsqueeze(dim=1))
criterion = torch.nn.CrossEntropyLoss()
loss1 = criterion(output1, train_classes.narrow(0, b, batch_size)[:,0])
loss2 = criterion(output2, train_classes.narrow(0, b, batch_size)[:,1])
output3 = siamese.forward2(output1, output2)
loss3 = criterion(output3, train_target.narrow(0, b, batch_size))
loss = alpha*(loss1 + loss2) + (1 - alpha)*loss3
optimizer.zero_grad()
loss.backward()
optimizer.step()
mid_train1 = siamese.forward1(train_input[:,0,:,:].unsqueeze(dim=1))
train_loss1 = criterion(mid_train1, train_classes[:,0])
mid_train2 = siamese.forward1(train_input[:,1,:,:].unsqueeze(dim=1))
train_loss2 = criterion(mid_train2, train_classes[:,1])
output_train = siamese.forward2(mid_train1, mid_train2)
train_loss3 = criterion(output_train, train_target)
mid_test1 = siamese.forward1(test_input[:,0,:,:].unsqueeze(dim=1))
test_loss1 = criterion(mid_test1, test_classes[:,0])
mid_test2 = siamese.forward1(test_input[:,1,:,:].unsqueeze(dim=1))
        test_loss2 = criterion(mid_test2, test_classes[:,1])
output_test = siamese.forward2(mid_test1, mid_test2)
test_loss3 = criterion(output_test, test_target)
train_loss.append((alpha*(train_loss1 + train_loss2) + (1 - alpha)*train_loss3).item())
test_loss.append((alpha*(test_loss1 + test_loss2) + (1 - alpha)*test_loss3).item())
accuracy = 1 - nb_errors(output_test, test_target) / 1000
if accuracy > best_accuracy:
best_accuracy = accuracy
best_epoch = i+1
test_accuracy.append(accuracy)
return train_loss, test_loss, test_accuracy, best_accuracy
intermediate_dim = 10
output_dim = 2
siamese1 = Siamese(nn.Sequential(nn.Linear(intermediate_dim, 32), nn.ReLU(), nn.Linear(32, output_dim)))
siamese2 = Siamese(nn.Sequential(nn.Linear(intermediate_dim, 512), nn.ReLU(), nn.Linear(512, output_dim)))
siamese3 = Siamese(nn.Sequential(nn.Linear(intermediate_dim, 128), nn.ReLU(), nn.Linear(128, 128), nn.ReLU(),\
nn.Linear(128, output_dim)))
siamese4 = Siamese(nn.Sequential(nn.Linear(intermediate_dim, 1500), nn.ReLU(), nn.Linear(1500, output_dim)))
siamese5 = Siamese(nn.Sequential(nn.Linear(intermediate_dim, 512), nn.ReLU(), nn.Linear(512, 512),\
nn.ReLU(), nn.Linear(512, output_dim)))
siameses = [siamese1, siamese2, siamese3, siamese4, siamese5]
for siam in siameses:
siam.cuda()
accuracies = torch.empty(5, 10, dtype=torch.float)
for i in range(10):
train_input, train_target, train_classes, test_input, test_target, test_classes = prolog.generate_pair_sets(1000)
train_input = train_input.cuda()
test_input = test_input.cuda()
train_classes = train_classes.cuda()
test_classes = test_classes.cuda()
train_target = train_target.cuda()
test_target = test_target.cuda()
for j in range(5):
_, _, _, best_accuracy = train_siamese_model(siameses[j], train_input, train_target, train_classes, \
test_input, test_target, test_classes)
accuracies[j][i] = best_accuracy
accuracies.mean(1)
accuracies.std(1)
accuracies.median(1)
accuracies.max(1)
accuracies.min(1)
intermediate_dim = 10
output_dim = 2
siamese5 = Siamese(nn.Sequential(nn.Linear(intermediate_dim, 512), nn.ReLU(), nn.Linear(512, 512),\
nn.ReLU(), nn.Linear(512, output_dim)))
siamese = siamese5.cuda()
alphas = [0.2, 0.4, 0.6, 0.8]
accuracies = torch.empty(1, 10, dtype=torch.float)
for i in range(10):
train_input, train_target, train_classes, test_input, test_target, test_classes = prolog.generate_pair_sets(1000)
train_input = train_input.cuda()
test_input = test_input.cuda()
train_classes = train_classes.cuda()
test_classes = test_classes.cuda()
train_target = train_target.cuda()
test_target = test_target.cuda()
_, _, _, best_accuracy = train_siamese_model(siamese, train_input, train_target, train_classes, \
test_input, test_target, test_classes, epochs=1000, lr = 0.1)
accuracies[0][i] = best_accuracy
accuracies.mean(1)
accuracies.std(1)
accuracies = torch.empty(4, 10, dtype=torch.float)
for i in range(10):
train_input, train_target, train_classes, test_input, test_target, test_classes = prolog.generate_pair_sets(1000)
train_input = train_input.cuda()
test_input = test_input.cuda()
train_classes = train_classes.cuda()
test_classes = test_classes.cuda()
train_target = train_target.cuda()
test_target = test_target.cuda()
for j in range(4):
_, _, _, best_accuracy = train_siamese_model(siamese, train_input, train_target, train_classes, \
test_input, test_target, test_classes, epochs=500, lr = 0.1, alpha=alphas[j])
accuracies[j][i] = best_accuracy
accuracies.mean(1)
accuracies.std(1)
accuracies = torch.empty(1, 10, dtype=torch.float)
for i in range(10):
train_input, train_target, train_classes, test_input, test_target, test_classes = prolog.generate_pair_sets(1000)
train_input = train_input.cuda()
test_input = test_input.cuda()
train_classes = train_classes.cuda()
test_classes = test_classes.cuda()
train_target = train_target.cuda()
test_target = test_target.cuda()
train_loss, test_loss, test_accuracy, best_accuracy = train_siamese_model(siamese, train_input, train_target, train_classes, \
test_input, test_target, test_classes, epochs=1000, lr = 0.1, alpha=0.95)
accuracies[0][i] = best_accuracy
accuracies.mean(1)
accuracies.std(1)
# plot the data
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(test_loss, color='tab:orange')
plt.show()
# plot the data
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(test_accuracy[:200], color='orange')
plt.show()
###Output
_____no_output_____ |
complete.ipynb | ###Markdown
Improving Text Summarisation on WikiHow Data using Transfer Learning **This notebook presents our implementation for the COMP0087 - Statistical Natural Language Processing project.** We focus on showcasing the power of transfer learning on the text summarisation task, using the BERT-based model BertSum ([Text Summarization with Pretrained Encoders](https://arxiv.org/abs/1908.08345)) on the WikiHow dataset ([WikiHow: A Large Scale Text Summarization Dataset](https://arxiv.org/abs/1810.09305)). The implementation includes code from [PreSumm GitHub](https://github.com/nlpyang/PreSumm), modified to suit our research purposes. We include the pre-trained BertSumExt model obtained from [here](https://drive.google.com/file/d/1kKWoV0QCbeIuFt85beQgJ4v0lujaXobJ/view), the model we trained from scratch and our best-performing model trained using transfer learning. A demo version comparing these last two models on a small WikiHow test dataset can be checked [here](https://drive.google.com/open?id=1mwpa8DIFEB2aO43AbbFwlVE9YZy_fpK4). Download file containing code, data and models:
###Code
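# The wget command below works around Google Drive's large-file download confirmation
# page: it saves the session cookies, extracts the confirm token with sed, and then
# downloads the archive as Team36.zip.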
!wget --load-cookies /tmp/cookies.txt "https://drive.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://drive.google.com/uc?export=download&id=1-Wgbe4fLdh4TWSrMkQ21HCsxi4qixh-i' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1-Wgbe4fLdh4TWSrMkQ21HCsxi4qixh-i" -O Team36.zip && rm -rf /tmp/cookies.txt
!unzip Team36.zip
###Output
--2020-04-02 22:26:45-- https://drive.google.com/uc?export=download&confirm=tk3X&id=1-Wgbe4fLdh4TWSrMkQ21HCsxi4qixh-i
Resolving drive.google.com (drive.google.com)... 64.233.188.101, 64.233.188.113, 64.233.188.138, ...
Connecting to drive.google.com (drive.google.com)|64.233.188.101|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://doc-10-b4-docs.googleusercontent.com/docs/securesc/jrdm4pvp76j2algtveo3i6mhg87n4hvi/hinkk4r7nu8kttf9gi0phnj46hvsh765/1585866375000/13490934451747665095/08700569489280539038Z/1-Wgbe4fLdh4TWSrMkQ21HCsxi4qixh-i?e=download [following]
--2020-04-02 22:26:45-- https://doc-10-b4-docs.googleusercontent.com/docs/securesc/jrdm4pvp76j2algtveo3i6mhg87n4hvi/hinkk4r7nu8kttf9gi0phnj46hvsh765/1585866375000/13490934451747665095/08700569489280539038Z/1-Wgbe4fLdh4TWSrMkQ21HCsxi4qixh-i?e=download
Resolving doc-10-b4-docs.googleusercontent.com (doc-10-b4-docs.googleusercontent.com)... 108.177.125.132, 2404:6800:4008:c01::84
Connecting to doc-10-b4-docs.googleusercontent.com (doc-10-b4-docs.googleusercontent.com)|108.177.125.132|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://docs.google.com/nonceSigner?nonce=rik8ap1otodu2&continue=https://doc-10-b4-docs.googleusercontent.com/docs/securesc/jrdm4pvp76j2algtveo3i6mhg87n4hvi/hinkk4r7nu8kttf9gi0phnj46hvsh765/1585866375000/13490934451747665095/08700569489280539038Z/1-Wgbe4fLdh4TWSrMkQ21HCsxi4qixh-i?e%3Ddownload&hash=1kitvm574pm98shlc4dabtni86fp0bj1 [following]
--2020-04-02 22:26:45-- https://docs.google.com/nonceSigner?nonce=rik8ap1otodu2&continue=https://doc-10-b4-docs.googleusercontent.com/docs/securesc/jrdm4pvp76j2algtveo3i6mhg87n4hvi/hinkk4r7nu8kttf9gi0phnj46hvsh765/1585866375000/13490934451747665095/08700569489280539038Z/1-Wgbe4fLdh4TWSrMkQ21HCsxi4qixh-i?e%3Ddownload&hash=1kitvm574pm98shlc4dabtni86fp0bj1
Resolving docs.google.com (docs.google.com)... 74.125.203.100, 74.125.203.139, 74.125.203.101, ...
Connecting to docs.google.com (docs.google.com)|74.125.203.100|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://doc-10-b4-docs.googleusercontent.com/docs/securesc/jrdm4pvp76j2algtveo3i6mhg87n4hvi/hinkk4r7nu8kttf9gi0phnj46hvsh765/1585866375000/13490934451747665095/08700569489280539038Z/1-Wgbe4fLdh4TWSrMkQ21HCsxi4qixh-i?e=download&nonce=rik8ap1otodu2&user=08700569489280539038Z&hash=bbjprbekmf7qs5plejrhve27pd6avtc4 [following]
--2020-04-02 22:26:47-- https://doc-10-b4-docs.googleusercontent.com/docs/securesc/jrdm4pvp76j2algtveo3i6mhg87n4hvi/hinkk4r7nu8kttf9gi0phnj46hvsh765/1585866375000/13490934451747665095/08700569489280539038Z/1-Wgbe4fLdh4TWSrMkQ21HCsxi4qixh-i?e=download&nonce=rik8ap1otodu2&user=08700569489280539038Z&hash=bbjprbekmf7qs5plejrhve27pd6avtc4
Connecting to doc-10-b4-docs.googleusercontent.com (doc-10-b4-docs.googleusercontent.com)|108.177.125.132|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/x-zip-compressed]
Saving to: ‘Team36.zip’
Team36.zip [ <=> ] 1.23G 47.5MB/s in 28s
2020-04-02 22:27:16 (44.5 MB/s) - ‘Team36.zip’ saved [1317446051]
Archive: Team36.zip
creating: Demo36/
creating: Demo36/bert_data/
inflating: Demo36/bert_data/cnndm.test.0.bert.pt
creating: Demo36/logs/
extracting: Demo36/logs/cnndm.log.txt
creating: Demo36/models/
inflating: Demo36/models/model_transfer_learning.pt
creating: Demo36/results/
creating: Demo36/src/
inflating: Demo36/src/cal_rouge.py
inflating: Demo36/src/distributed.py
creating: Demo36/src/models/
inflating: Demo36/src/models/adam.py
inflating: Demo36/src/models/data_loader.py
inflating: Demo36/src/models/decoder.py
inflating: Demo36/src/models/encoder.py
inflating: Demo36/src/models/loss.py
inflating: Demo36/src/models/model_builder.py
inflating: Demo36/src/models/neural.py
inflating: Demo36/src/models/optimizers.py
inflating: Demo36/src/models/predictor.py
inflating: Demo36/src/models/reporter.py
inflating: Demo36/src/models/reporter_ext.py
inflating: Demo36/src/models/trainer.py
inflating: Demo36/src/models/trainer_ext.py
extracting: Demo36/src/models/__init__.py
creating: Demo36/src/others/
inflating: Demo36/src/others/logging.py
inflating: Demo36/src/others/pyrouge.py
inflating: Demo36/src/others/tokenization.py
inflating: Demo36/src/others/utils.py
extracting: Demo36/src/others/__init__.py
inflating: Demo36/src/post_stats.py
creating: Demo36/src/prepro/
inflating: Demo36/src/preprocess.py
inflating: Demo36/src/prepro/data_builder.py
inflating: Demo36/src/prepro/smart_common_words.txt
inflating: Demo36/src/prepro/utils.py
extracting: Demo36/src/prepro/__init__.py
inflating: Demo36/src/train.py
inflating: Demo36/src/train_abstractive.py
inflating: Demo36/src/train_extractive.py
creating: Demo36/src/translate/
inflating: Demo36/src/translate/beam.py
inflating: Demo36/src/translate/penalties.py
extracting: Demo36/src/translate/__init__.py
###Markdown
Installing dependencies:
###Code
!pip install pytorch_pretrained_bert
!pip install tensorboardX
!pip install pytorch_transformers
!pip install torch==1.1.0 torchvision==0.3.0
###Output
Collecting pytorch_pretrained_bert
[?25l Downloading https://files.pythonhosted.org/packages/d7/e0/c08d5553b89973d9a240605b9c12404bcf8227590de62bae27acbcfe076b/pytorch_pretrained_bert-0.6.2-py3-none-any.whl (123kB)
[K |██▋ | 10kB 19.4MB/s eta 0:00:01
[K |█████▎ | 20kB 848kB/s eta 0:00:01
[K |████████ | 30kB 1.3MB/s eta 0:00:01
[K |██████████▋ | 40kB 1.4MB/s eta 0:00:01
[K |█████████████▎ | 51kB 1.0MB/s eta 0:00:01
[K |███████████████▉ | 61kB 1.2MB/s eta 0:00:01
[K |██████████████████▌ | 71kB 1.3MB/s eta 0:00:01
[K |█████████████████████▏ | 81kB 1.4MB/s eta 0:00:01
[K |███████████████████████▉ | 92kB 1.5MB/s eta 0:00:01
[K |██████████████████████████▌ | 102kB 1.4MB/s eta 0:00:01
[K |█████████████████████████████▏ | 112kB 1.4MB/s eta 0:00:01
[K |███████████████████████████████▊| 122kB 1.4MB/s eta 0:00:01
[K |████████████████████████████████| 133kB 1.4MB/s
[?25hRequirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from pytorch_pretrained_bert) (4.38.0)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from pytorch_pretrained_bert) (2019.12.20)
Requirement already satisfied: boto3 in /usr/local/lib/python3.6/dist-packages (from pytorch_pretrained_bert) (1.12.31)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from pytorch_pretrained_bert) (1.18.2)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from pytorch_pretrained_bert) (2.21.0)
Requirement already satisfied: torch>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from pytorch_pretrained_bert) (1.4.0)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /usr/local/lib/python3.6/dist-packages (from boto3->pytorch_pretrained_bert) (0.3.3)
Requirement already satisfied: botocore<1.16.0,>=1.15.31 in /usr/local/lib/python3.6/dist-packages (from boto3->pytorch_pretrained_bert) (1.15.31)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /usr/local/lib/python3.6/dist-packages (from boto3->pytorch_pretrained_bert) (0.9.5)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->pytorch_pretrained_bert) (3.0.4)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->pytorch_pretrained_bert) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->pytorch_pretrained_bert) (2019.11.28)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->pytorch_pretrained_bert) (2.8)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /usr/local/lib/python3.6/dist-packages (from botocore<1.16.0,>=1.15.31->boto3->pytorch_pretrained_bert) (2.8.1)
Requirement already satisfied: docutils<0.16,>=0.10 in /usr/local/lib/python3.6/dist-packages (from botocore<1.16.0,>=1.15.31->boto3->pytorch_pretrained_bert) (0.15.2)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.16.0,>=1.15.31->boto3->pytorch_pretrained_bert) (1.12.0)
Installing collected packages: pytorch-pretrained-bert
Successfully installed pytorch-pretrained-bert-0.6.2
Collecting tensorboardX
[?25l Downloading https://files.pythonhosted.org/packages/35/f1/5843425495765c8c2dd0784a851a93ef204d314fc87bcc2bbb9f662a3ad1/tensorboardX-2.0-py2.py3-none-any.whl (195kB)
[K |████████████████████████████████| 204kB 1.4MB/s
[?25hRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from tensorboardX) (1.12.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from tensorboardX) (1.18.2)
Requirement already satisfied: protobuf>=3.8.0 in /usr/local/lib/python3.6/dist-packages (from tensorboardX) (3.10.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.8.0->tensorboardX) (46.0.0)
Installing collected packages: tensorboardX
Successfully installed tensorboardX-2.0
Collecting pytorch_transformers
[?25l Downloading https://files.pythonhosted.org/packages/a3/b7/d3d18008a67e0b968d1ab93ad444fc05699403fa662f634b2f2c318a508b/pytorch_transformers-1.2.0-py3-none-any.whl (176kB)
[K |████████████████████████████████| 184kB 1.4MB/s
[?25hRequirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from pytorch_transformers) (4.38.0)
Collecting sacremoses
[?25l Downloading https://files.pythonhosted.org/packages/a6/b4/7a41d630547a4afd58143597d5a49e07bfd4c42914d8335b2a5657efc14b/sacremoses-0.0.38.tar.gz (860kB)
[K |████████████████████████████████| 870kB 4.2MB/s
[?25hRequirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from pytorch_transformers) (2.21.0)
Requirement already satisfied: torch>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from pytorch_transformers) (1.4.0)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from pytorch_transformers) (2019.12.20)
Collecting sentencepiece
[?25l Downloading https://files.pythonhosted.org/packages/74/f4/2d5214cbf13d06e7cb2c20d84115ca25b53ea76fa1f0ade0e3c9749de214/sentencepiece-0.1.85-cp36-cp36m-manylinux1_x86_64.whl (1.0MB)
[K |████████████████████████████████| 1.0MB 6.9MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from pytorch_transformers) (1.18.2)
Requirement already satisfied: boto3 in /usr/local/lib/python3.6/dist-packages (from pytorch_transformers) (1.12.31)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from sacremoses->pytorch_transformers) (1.12.0)
Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->pytorch_transformers) (7.1.1)
Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->pytorch_transformers) (0.14.1)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->pytorch_transformers) (2019.11.28)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->pytorch_transformers) (2.8)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->pytorch_transformers) (3.0.4)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->pytorch_transformers) (1.24.3)
Requirement already satisfied: botocore<1.16.0,>=1.15.31 in /usr/local/lib/python3.6/dist-packages (from boto3->pytorch_transformers) (1.15.31)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /usr/local/lib/python3.6/dist-packages (from boto3->pytorch_transformers) (0.3.3)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /usr/local/lib/python3.6/dist-packages (from boto3->pytorch_transformers) (0.9.5)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /usr/local/lib/python3.6/dist-packages (from botocore<1.16.0,>=1.15.31->boto3->pytorch_transformers) (2.8.1)
Requirement already satisfied: docutils<0.16,>=0.10 in /usr/local/lib/python3.6/dist-packages (from botocore<1.16.0,>=1.15.31->boto3->pytorch_transformers) (0.15.2)
Building wheels for collected packages: sacremoses
Building wheel for sacremoses (setup.py) ... [?25l[?25hdone
Created wheel for sacremoses: filename=sacremoses-0.0.38-cp36-none-any.whl size=884628 sha256=532fa0e81757d2f60229e31a4abc003e8af64dc8d8f1ffdc5ad6b56d72a762d0
Stored in directory: /root/.cache/pip/wheels/6d/ec/1a/21b8912e35e02741306f35f66c785f3afe94de754a0eaf1422
Successfully built sacremoses
Installing collected packages: sacremoses, sentencepiece, pytorch-transformers
Successfully installed pytorch-transformers-1.2.0 sacremoses-0.0.38 sentencepiece-0.1.85
Collecting torch==1.1.0
[?25l Downloading https://files.pythonhosted.org/packages/69/60/f685fb2cfb3088736bafbc9bdbb455327bdc8906b606da9c9a81bae1c81e/torch-1.1.0-cp36-cp36m-manylinux1_x86_64.whl (676.9MB)
[K |████████████████████████████████| 676.9MB 23kB/s
[?25hCollecting torchvision==0.3.0
[?25l Downloading https://files.pythonhosted.org/packages/2e/45/0f2f3062c92d9cf1d5d7eabd3cae88cea9affbd2b17fb1c043627838cb0a/torchvision-0.3.0-cp36-cp36m-manylinux1_x86_64.whl (2.6MB)
[K |████████████████████████████████| 2.6MB 44.8MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch==1.1.0) (1.18.2)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision==0.3.0) (7.0.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from torchvision==0.3.0) (1.12.0)
Installing collected packages: torch, torchvision
Found existing installation: torch 1.4.0
Uninstalling torch-1.4.0:
Successfully uninstalled torch-1.4.0
Found existing installation: torchvision 0.5.0
Uninstalling torchvision-0.5.0:
Successfully uninstalled torchvision-0.5.0
Successfully installed torch-1.1.0 torchvision-0.3.0
###Markdown
Proper installation of pyrouge
###Code
!git clone https://github.com/bheinzerling/pyrouge
%cd pyrouge
!pip install -e .
!git clone https://github.com/andersjo/pyrouge.git rouge
!pyrouge_set_rouge_path /content/pyrouge/rouge/tools/ROUGE-1.5.5/
!sudo apt-get install libxml-parser-perl
%cd rouge/tools/ROUGE-1.5.5/data
!rm WordNet-2.0.exc.db
!./WordNet-2.0-Exceptions/buildExeptionDB.pl ./WordNet-2.0-Exceptions ./smart_common_words.txt ./WordNet-2.0.exc.db
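# Rebuilding the WordNet exception database is a commonly needed fix for the ROUGE-1.5.5
# "Cannot open exception db file" error that pyrouge otherwise raises when scoring.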
###Output
Cloning into 'pyrouge'...
remote: Enumerating objects: 551, done.[K
remote: Total 551 (delta 0), reused 0 (delta 0), pack-reused 551[K
Receiving objects: 100% (551/551), 123.17 KiB | 229.00 KiB/s, done.
Resolving deltas: 100% (198/198), done.
/content/pyrouge
Obtaining file:///content/pyrouge
Installing collected packages: pyrouge
Running setup.py develop for pyrouge
Successfully installed pyrouge
Cloning into 'rouge'...
remote: Enumerating objects: 393, done.[K
remote: Total 393 (delta 0), reused 0 (delta 0), pack-reused 393[K
Receiving objects: 100% (393/393), 298.74 KiB | 544.00 KiB/s, done.
Resolving deltas: 100% (109/109), done.
2020-04-02 22:29:46,673 [MainThread ] [INFO ] Set ROUGE home directory to /content/pyrouge/rouge/tools/ROUGE-1.5.5/.
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
libauthen-sasl-perl libdata-dump-perl libencode-locale-perl
libfile-listing-perl libfont-afm-perl libhtml-form-perl libhtml-format-perl
libhtml-parser-perl libhtml-tagset-perl libhtml-tree-perl
libhttp-cookies-perl libhttp-daemon-perl libhttp-date-perl
libhttp-message-perl libhttp-negotiate-perl libio-html-perl
libio-socket-ssl-perl liblwp-mediatypes-perl liblwp-protocol-https-perl
libmailtools-perl libnet-http-perl libnet-smtp-ssl-perl libnet-ssleay-perl
libtimedate-perl libtry-tiny-perl liburi-perl libwww-perl
libwww-robotrules-perl netbase perl-openssl-defaults
Suggested packages:
libdigest-hmac-perl libgssapi-perl libcrypt-ssleay-perl libauthen-ntlm-perl
The following NEW packages will be installed:
libauthen-sasl-perl libdata-dump-perl libencode-locale-perl
libfile-listing-perl libfont-afm-perl libhtml-form-perl libhtml-format-perl
libhtml-parser-perl libhtml-tagset-perl libhtml-tree-perl
libhttp-cookies-perl libhttp-daemon-perl libhttp-date-perl
libhttp-message-perl libhttp-negotiate-perl libio-html-perl
libio-socket-ssl-perl liblwp-mediatypes-perl liblwp-protocol-https-perl
libmailtools-perl libnet-http-perl libnet-smtp-ssl-perl libnet-ssleay-perl
libtimedate-perl libtry-tiny-perl liburi-perl libwww-perl
libwww-robotrules-perl libxml-parser-perl netbase perl-openssl-defaults
0 upgraded, 31 newly installed, 0 to remove and 25 not upgraded.
Need to get 1,713 kB of archives.
After this operation, 5,581 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/main amd64 netbase all 5.4 [12.7 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic/main amd64 libdata-dump-perl all 1.23-1 [27.0 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic/main amd64 libencode-locale-perl all 1.05-1 [12.3 kB]
Get:4 http://archive.ubuntu.com/ubuntu bionic/main amd64 libtimedate-perl all 2.3000-2 [37.5 kB]
Get:5 http://archive.ubuntu.com/ubuntu bionic/main amd64 libhttp-date-perl all 6.02-1 [10.4 kB]
Get:6 http://archive.ubuntu.com/ubuntu bionic/main amd64 libfile-listing-perl all 6.04-1 [9,774 B]
Get:7 http://archive.ubuntu.com/ubuntu bionic/main amd64 libfont-afm-perl all 1.20-2 [13.2 kB]
Get:8 http://archive.ubuntu.com/ubuntu bionic/main amd64 libhtml-tagset-perl all 3.20-3 [12.1 kB]
Get:9 http://archive.ubuntu.com/ubuntu bionic/main amd64 liburi-perl all 1.73-1 [77.2 kB]
Get:10 http://archive.ubuntu.com/ubuntu bionic/main amd64 libhtml-parser-perl amd64 3.72-3build1 [85.9 kB]
Get:11 http://archive.ubuntu.com/ubuntu bionic/main amd64 libio-html-perl all 1.001-1 [14.9 kB]
Get:12 http://archive.ubuntu.com/ubuntu bionic/main amd64 liblwp-mediatypes-perl all 6.02-1 [21.7 kB]
Get:13 http://archive.ubuntu.com/ubuntu bionic/main amd64 libhttp-message-perl all 6.14-1 [72.1 kB]
Get:14 http://archive.ubuntu.com/ubuntu bionic/main amd64 libhtml-form-perl all 6.03-1 [23.5 kB]
Get:15 http://archive.ubuntu.com/ubuntu bionic/main amd64 libhtml-tree-perl all 5.07-1 [200 kB]
Get:16 http://archive.ubuntu.com/ubuntu bionic/main amd64 libhtml-format-perl all 2.12-1 [41.3 kB]
Get:17 http://archive.ubuntu.com/ubuntu bionic/main amd64 libhttp-cookies-perl all 6.04-1 [17.2 kB]
Get:18 http://archive.ubuntu.com/ubuntu bionic/main amd64 libhttp-daemon-perl all 6.01-1 [17.0 kB]
Get:19 http://archive.ubuntu.com/ubuntu bionic/main amd64 libhttp-negotiate-perl all 6.00-2 [13.4 kB]
Get:20 http://archive.ubuntu.com/ubuntu bionic/main amd64 perl-openssl-defaults amd64 3build1 [7,012 B]
Get:21 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libnet-ssleay-perl amd64 1.84-1ubuntu0.2 [283 kB]
Get:22 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libio-socket-ssl-perl all 2.060-3~ubuntu18.04.1 [173 kB]
Get:23 http://archive.ubuntu.com/ubuntu bionic/main amd64 libnet-http-perl all 6.17-1 [22.7 kB]
Get:24 http://archive.ubuntu.com/ubuntu bionic/main amd64 libtry-tiny-perl all 0.30-1 [20.5 kB]
Get:25 http://archive.ubuntu.com/ubuntu bionic/main amd64 libwww-robotrules-perl all 6.01-1 [14.1 kB]
Get:26 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 libwww-perl all 6.31-1ubuntu0.1 [137 kB]
Get:27 http://archive.ubuntu.com/ubuntu bionic/main amd64 liblwp-protocol-https-perl all 6.07-2 [8,284 B]
Get:28 http://archive.ubuntu.com/ubuntu bionic/main amd64 libnet-smtp-ssl-perl all 1.04-1 [5,948 B]
Get:29 http://archive.ubuntu.com/ubuntu bionic/main amd64 libmailtools-perl all 2.18-1 [74.0 kB]
Get:30 http://archive.ubuntu.com/ubuntu bionic/main amd64 libxml-parser-perl amd64 2.44-2build3 [199 kB]
Get:31 http://archive.ubuntu.com/ubuntu bionic/main amd64 libauthen-sasl-perl all 2.1600-1 [48.7 kB]
Fetched 1,713 kB in 3s (549 kB/s)
debconf: unable to initialize frontend: Dialog
debconf: (No usable dialog-like program is installed, so the dialog based frontend cannot be used. at /usr/share/perl5/Debconf/FrontEnd/Dialog.pm line 76, <> line 31.)
debconf: falling back to frontend: Readline
debconf: unable to initialize frontend: Readline
debconf: (This frontend requires a controlling tty.)
debconf: falling back to frontend: Teletype
dpkg-preconfigure: unable to re-open stdin:
Selecting previously unselected package netbase.
(Reading database ... 133872 files and directories currently installed.)
Preparing to unpack .../00-netbase_5.4_all.deb ...
Unpacking netbase (5.4) ...
Selecting previously unselected package libdata-dump-perl.
Preparing to unpack .../01-libdata-dump-perl_1.23-1_all.deb ...
Unpacking libdata-dump-perl (1.23-1) ...
Selecting previously unselected package libencode-locale-perl.
Preparing to unpack .../02-libencode-locale-perl_1.05-1_all.deb ...
Unpacking libencode-locale-perl (1.05-1) ...
Selecting previously unselected package libtimedate-perl.
Preparing to unpack .../03-libtimedate-perl_2.3000-2_all.deb ...
Unpacking libtimedate-perl (2.3000-2) ...
Selecting previously unselected package libhttp-date-perl.
Preparing to unpack .../04-libhttp-date-perl_6.02-1_all.deb ...
Unpacking libhttp-date-perl (6.02-1) ...
Selecting previously unselected package libfile-listing-perl.
Preparing to unpack .../05-libfile-listing-perl_6.04-1_all.deb ...
Unpacking libfile-listing-perl (6.04-1) ...
Selecting previously unselected package libfont-afm-perl.
Preparing to unpack .../06-libfont-afm-perl_1.20-2_all.deb ...
Unpacking libfont-afm-perl (1.20-2) ...
Selecting previously unselected package libhtml-tagset-perl.
Preparing to unpack .../07-libhtml-tagset-perl_3.20-3_all.deb ...
Unpacking libhtml-tagset-perl (3.20-3) ...
Selecting previously unselected package liburi-perl.
Preparing to unpack .../08-liburi-perl_1.73-1_all.deb ...
Unpacking liburi-perl (1.73-1) ...
Selecting previously unselected package libhtml-parser-perl.
Preparing to unpack .../09-libhtml-parser-perl_3.72-3build1_amd64.deb ...
Unpacking libhtml-parser-perl (3.72-3build1) ...
Selecting previously unselected package libio-html-perl.
Preparing to unpack .../10-libio-html-perl_1.001-1_all.deb ...
Unpacking libio-html-perl (1.001-1) ...
Selecting previously unselected package liblwp-mediatypes-perl.
Preparing to unpack .../11-liblwp-mediatypes-perl_6.02-1_all.deb ...
Unpacking liblwp-mediatypes-perl (6.02-1) ...
Selecting previously unselected package libhttp-message-perl.
Preparing to unpack .../12-libhttp-message-perl_6.14-1_all.deb ...
Unpacking libhttp-message-perl (6.14-1) ...
Selecting previously unselected package libhtml-form-perl.
Preparing to unpack .../13-libhtml-form-perl_6.03-1_all.deb ...
Unpacking libhtml-form-perl (6.03-1) ...
Selecting previously unselected package libhtml-tree-perl.
Preparing to unpack .../14-libhtml-tree-perl_5.07-1_all.deb ...
Unpacking libhtml-tree-perl (5.07-1) ...
Selecting previously unselected package libhtml-format-perl.
Preparing to unpack .../15-libhtml-format-perl_2.12-1_all.deb ...
Unpacking libhtml-format-perl (2.12-1) ...
Selecting previously unselected package libhttp-cookies-perl.
Preparing to unpack .../16-libhttp-cookies-perl_6.04-1_all.deb ...
Unpacking libhttp-cookies-perl (6.04-1) ...
Selecting previously unselected package libhttp-daemon-perl.
Preparing to unpack .../17-libhttp-daemon-perl_6.01-1_all.deb ...
Unpacking libhttp-daemon-perl (6.01-1) ...
Selecting previously unselected package libhttp-negotiate-perl.
Preparing to unpack .../18-libhttp-negotiate-perl_6.00-2_all.deb ...
Unpacking libhttp-negotiate-perl (6.00-2) ...
Selecting previously unselected package perl-openssl-defaults:amd64.
Preparing to unpack .../19-perl-openssl-defaults_3build1_amd64.deb ...
Unpacking perl-openssl-defaults:amd64 (3build1) ...
Selecting previously unselected package libnet-ssleay-perl.
Preparing to unpack .../20-libnet-ssleay-perl_1.84-1ubuntu0.2_amd64.deb ...
Unpacking libnet-ssleay-perl (1.84-1ubuntu0.2) ...
Selecting previously unselected package libio-socket-ssl-perl.
Preparing to unpack .../21-libio-socket-ssl-perl_2.060-3~ubuntu18.04.1_all.deb ...
Unpacking libio-socket-ssl-perl (2.060-3~ubuntu18.04.1) ...
Selecting previously unselected package libnet-http-perl.
Preparing to unpack .../22-libnet-http-perl_6.17-1_all.deb ...
Unpacking libnet-http-perl (6.17-1) ...
Selecting previously unselected package libtry-tiny-perl.
Preparing to unpack .../23-libtry-tiny-perl_0.30-1_all.deb ...
Unpacking libtry-tiny-perl (0.30-1) ...
Selecting previously unselected package libwww-robotrules-perl.
Preparing to unpack .../24-libwww-robotrules-perl_6.01-1_all.deb ...
Unpacking libwww-robotrules-perl (6.01-1) ...
Selecting previously unselected package libwww-perl.
Preparing to unpack .../25-libwww-perl_6.31-1ubuntu0.1_all.deb ...
Unpacking libwww-perl (6.31-1ubuntu0.1) ...
Selecting previously unselected package liblwp-protocol-https-perl.
Preparing to unpack .../26-liblwp-protocol-https-perl_6.07-2_all.deb ...
Unpacking liblwp-protocol-https-perl (6.07-2) ...
Selecting previously unselected package libnet-smtp-ssl-perl.
Preparing to unpack .../27-libnet-smtp-ssl-perl_1.04-1_all.deb ...
Unpacking libnet-smtp-ssl-perl (1.04-1) ...
Selecting previously unselected package libmailtools-perl.
Preparing to unpack .../28-libmailtools-perl_2.18-1_all.deb ...
Unpacking libmailtools-perl (2.18-1) ...
Selecting previously unselected package libxml-parser-perl.
Preparing to unpack .../29-libxml-parser-perl_2.44-2build3_amd64.deb ...
Unpacking libxml-parser-perl (2.44-2build3) ...
Selecting previously unselected package libauthen-sasl-perl.
Preparing to unpack .../30-libauthen-sasl-perl_2.1600-1_all.deb ...
Unpacking libauthen-sasl-perl (2.1600-1) ...
Setting up libhtml-tagset-perl (3.20-3) ...
Setting up libtry-tiny-perl (0.30-1) ...
Setting up libfont-afm-perl (1.20-2) ...
Setting up libencode-locale-perl (1.05-1) ...
Setting up libtimedate-perl (2.3000-2) ...
Setting up perl-openssl-defaults:amd64 (3build1) ...
Setting up libio-html-perl (1.001-1) ...
Setting up liblwp-mediatypes-perl (6.02-1) ...
Setting up liburi-perl (1.73-1) ...
Setting up libdata-dump-perl (1.23-1) ...
Setting up libhtml-parser-perl (3.72-3build1) ...
Setting up libnet-http-perl (6.17-1) ...
Setting up libwww-robotrules-perl (6.01-1) ...
Setting up libauthen-sasl-perl (2.1600-1) ...
Setting up netbase (5.4) ...
Setting up libhttp-date-perl (6.02-1) ...
Setting up libnet-ssleay-perl (1.84-1ubuntu0.2) ...
Setting up libio-socket-ssl-perl (2.060-3~ubuntu18.04.1) ...
Setting up libhtml-tree-perl (5.07-1) ...
Setting up libfile-listing-perl (6.04-1) ...
Setting up libhttp-message-perl (6.14-1) ...
Setting up libhttp-negotiate-perl (6.00-2) ...
Setting up libnet-smtp-ssl-perl (1.04-1) ...
Setting up libhtml-format-perl (2.12-1) ...
Setting up libhttp-cookies-perl (6.04-1) ...
Setting up libhttp-daemon-perl (6.01-1) ...
Setting up libhtml-form-perl (6.03-1) ...
Setting up libmailtools-perl (2.18-1) ...
Setting up liblwp-protocol-https-perl (6.07-2) ...
Setting up libwww-perl (6.31-1ubuntu0.1) ...
Setting up libxml-parser-perl (2.44-2build3) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
/content/pyrouge/rouge/tools/ROUGE-1.5.5/data
###Markdown
Change the default directory back
###Code
import os
os.chdir('/content')
###Output
_____no_output_____
###Markdown
Pre-process the WikiHow dataset We used the wikihowAll.csv version which includes concatenated articles and summaries. The code in this section was run locally, since some of the intermediary generated files occupy a lot of memory and caused Colab to crash. Obtain the .story files
###Code
!python /content/Team36/wikihow_prepro/process.py
###Output
_____no_output_____
###Markdown
Check that Stanford CoreNLP works
###Code
import os
os.environ['CLASSPATH']="/content/Team36/wikihow_prepro/stanford-corenlp-full-2017-06-09/stanford-corenlp-3.8.0.jar"
!echo "Please tokenize this text." | java edu.stanford.nlp.process.PTBTokenizer
###Output
_____no_output_____
###Markdown
Sentence Splitting and Tokenisation
###Code
!python /content/Team36/src/preprocess.py -mode tokenize -raw_path /content/Team36/wikihow_prepro/raw_data -save_path /content/Team36/wikihow_prepro/json_data
###Output
_____no_output_____
###Markdown
Obtain the mapping for train/validate/test datasets
###Code
from sklearn.model_selection import train_test_split
import numpy
# Read the file containing all the story titles
with open("/content/Team36/wikihow_prepro/titles.txt", "r") as f:
titles = f.read().split('\n')
titles = numpy.array(titles) #convert array to numpy type array
train0 ,test = train_test_split(titles, test_size = 0.045)
train, valid = train_test_split(train0, test_size = 0.040)
# Write the mapping files
with open("/content/Team36/wikihow_prepro/mapping/mapping_test.txt", "w") as file:
for t in range(len(test)):
if (not test[t].endswith('story')):
continue
file.write("%s\n" % test[t])
with open("/content/Team36/wikihow_prepro/mapping/mapping_valid.txt", "w") as file:
for t in range(len(valid)):
if (not valid[t].endswith('story')):
continue
file.write("%s\n" % valid[t])
with open("/content/Team36/wikihow_prepro/mapping/mapping_train.txt", "w") as file:
for t in range(len(train)):
if (not train[t].endswith('story')):
continue
file.write("%s\n" % train[t])
###Output
_____no_output_____
###Markdown
Format to simpler json files
###Code
!python /content/Team36/src/preprocess.py -mode format_to_lines -raw_path /content/Team36/wikihow_prepro/json_data -save_path /content/Team36/wikihow_prepro/merged_json_data/cnndm -n_cpus 1 -use_bert_basic_tokenizer false -map_path /content/Team36/wikihow_prepro/mapping
###Output
_____no_output_____
###Markdown
Format to PyTorch files
###Code
!python /content/Team36/src/preprocess.py -mode format_to_bert -raw_path /content/Team36/wikihow_prepro/merged_json_data/merged_json_data -save_path /content/Team36/bert_data -lower -n_cpus 1 -log_file /content/Team36/logs/preprocess.log
###Output
_____no_output_____
###Markdown
Model Training and Evaluation We first train the BertSumExt model from scratch on the WikiHow dataset. We then use the pre-trained BertSumExt model (trained for 18,000 steps) on the CNN/DailyMail dataset, provided [here](https://drive.google.com/file/d/1kKWoV0QCbeIuFt85beQgJ4v0lujaXobJ/view), to train our 4 transfer learning approaches for 10,000 more steps:
* Warmstarting
* Freezing BERT layers
* Freezing encoder layers
* Freezing positional embeddings

All steps were evaluated on the validation dataset and the top 3 performing ones were selected to be tested on the test dataset.

Model trained from scratch on WikiHow
###Code
!python /content/Team36/src/train.py -task ext -mode train -bert_data_path /content/Team36/bert_data/cnndm -ext_dropout 0.1 -model_path /content/Team36/models_scratch -lr 2e-3 -visible_gpus 0 -report_every 50 -save_checkpoint_steps 1000 -batch_size 3000 -train_steps 20000 -accum_count 6 -log_file /content/Team36/logs/bertext_log -use_interval true -warmup_steps 10000 -max_pos 512
!python /content/Team36/src/train.py -task ext -mode validate -test_all -batch_size 3000 -test_batch_size 500 -bert_data_path /content/Team36/bert_data/cnndm -log_file /content/Team36/logs/val_abs_bert_cnndm -model_path /content/PreSumm/models_scratch -sep_optim true -use_interval true -visible_gpus 0 -max_pos 512 -max_length 200 -alpha 0.95 -min_length 50 -result_path /content/Team36/logs/abs_bert_cnndm
###Output
_____no_output_____
###Markdown
Training this model for 20,000 steps on GPU took 11h. Checkpoints were used. The scores obtained by the top 3 models on the test dataset:

| Model Step | ROUGE-1 | ROUGE-2 | ROUGE-L |
| --- | --- | --- | --- |
| 11,000 | 29.67 | 8.20 | 27.44 |
| 10,000 | 29.60 | 8.17 | 27.35 |
| 13,000 | 29.58 | 8.18 | 27.41 |
| Mean | 29.61 | 8.18 | 27.40 |

Model with Warmstarting
###Code
!python /content/Team36/src/train.py -task ext -mode train -train_from /content/Team36/models/bert_ext.pt -bert_data_path /content/Team36/bert_data/cnndm -ext_dropout 0.1 -model_path /content/Team36/models_warmstart -lr 2e-3 -visible_gpus 0 -report_every 50 -save_checkpoint_steps 1000 -batch_size 3000 -train_steps 28000 -accum_count 6 -log_file /content/Team36/logs/bertext_log -use_interval true -warmup_steps 10000 -max_pos 512
!python /content/Team36/src/train.py -task ext -mode validate -test_all -batch_size 3000 -test_batch_size 500 -bert_data_path /content/Team36/bert_data/cnndm -log_file /content/Team36/logs/val_abs_bert_cnndm -model_path /content/PreSumm/models_warmstart -sep_optim true -use_interval true -visible_gpus 0 -max_pos 512 -max_length 200 -alpha 0.95 -min_length 50 -result_path /content/Team36/logs/abs_bert_cnndm
###Output
_____no_output_____
###Markdown
Training this model took 5.5h. The scores obtained by the top 3 models on the test dataset:

| Model Step | ROUGE-1 | ROUGE-2 | ROUGE-L |
| --- | --- | --- | --- |
| 26,000 | 29.75 | 8.30 | 27.56 |
| 24,000 | 29.76 | 8.28 | 27.53 |
| 25,000 | 29.83 | 8.33 | 27.59 |
| Mean | 29.78 | 8.30 | 27.56 |

Model with Freezing BERT layers
###Code
!python /content/Team36/src/train.py -task ext -mode train -train_from /content/Team36/models/bert_ext.pt -freeze bert -bert_data_path /content/Team36/bert_data/cnndm -ext_dropout 0.1 -model_path /content/Team36/models_bert -lr 2e-3 -visible_gpus 0 -report_every 50 -save_checkpoint_steps 1000 -batch_size 3000 -train_steps 28000 -accum_count 6 -log_file /content/Team36/logs/bertext_log -use_interval true -warmup_steps 10000 -max_pos 512
!python /content/Team36/src/train.py -task ext -mode validate -test_all -batch_size 3000 -test_batch_size 500 -bert_data_path /content/Team36/bert_data/cnndm -log_file /content/Team36/logs/val_abs_bert_cnndm -model_path /content/PreSumm/models_bert -sep_optim true -use_interval true -visible_gpus 0 -max_pos 512 -max_length 200 -alpha 0.95 -min_length 50 -result_path /content/Team36/logs/abs_bert_cnndm
###Output
_____no_output_____
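###Markdown
The `-freeze` options used above are handled inside the training code; as an illustrative aside (not part of the original pipeline), the cell below sketches what freezing a submodule typically looks like in plain PyTorch. The `model.bert` attribute name is an assumption for illustration only.
###Code
# Hypothetical sketch: freezing a submodule so its weights are not updated during fine-tuning
import torch
import torch.nn as nn

def freeze_module(module: nn.Module) -> None:
    # Parameters with requires_grad=False receive no gradients and stay fixed
    for param in module.parameters():
        param.requires_grad = False

# e.g. freeze_module(model.bert)  # assumes the summariser exposes its BERT encoder as `model.bert`
# optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=2e-3)
###Output
_____no_output_____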
###Markdown
Training this model took 2h. The scores obtained by the top 3 models on the test dataset:

| Model Step | ROUGE-1 | ROUGE-2 | ROUGE-L |
| --- | --- | --- | --- |
| 28,000 | 28.55 | 7.63 | 26.43 |
| 27,000 | 28.54 | 7.62 | 26.40 |
| 26,000 | 28.48 | 7.58 | 26.37 |
| Mean | 28.52 | 7.58 | 26.37 |

Model with Freezing encoder layers
###Code
!python /content/Team36/src/train.py -task ext -mode train -train_from /content/Team36/models/bert_ext.pt -freeze encoder -bert_data_path /content/Team36/bert_data/cnndm -ext_dropout 0.1 -model_path /content/Team36/models_encoder -lr 2e-3 -visible_gpus 0 -report_every 50 -save_checkpoint_steps 1000 -batch_size 3000 -train_steps 28000 -accum_count 6 -log_file /content/Team36/logs/bertext_log -use_interval true -warmup_steps 10000 -max_pos 512
!python /content/Team36/src/train.py -task ext -mode validate -test_all -batch_size 3000 -test_batch_size 500 -bert_data_path /content/Team36/bert_data/cnndm -log_file /content/Team36/logs/val_abs_bert_cnndm -model_path /content/PreSumm/models_encoder -sep_optim true -use_interval true -visible_gpus 0 -max_pos 512 -max_length 200 -alpha 0.95 -min_length 50 -result_path /content/Team36/logs/abs_bert_cnndm
###Output
_____no_output_____
###Markdown
Training this model took 5.5h. The scores obtained by the top 3 models on the test dataset:

| Model Step | ROUGE-1 | ROUGE-2 | ROUGE-L |
| --- | --- | --- | --- |
| 26,000 | 29.78 | 8.30 | 27.58 |
| 24,000 | 29.74 | 8.28 | 27.52 |
| 25,000 | 29.78 | 8.28 | 27.53 |
| Mean | 29.77 | 8.29 | 27.54 |

Model with Freezing positional embeddings
###Code
!python /content/Team36/src/train.py -task ext -mode train -train_from /content/Team36/models/bert_ext.pt -freeze positional -bert_data_path /content/Team36/bert_data/cnndm -ext_dropout 0.1 -model_path /content/Team36/models_position -lr 2e-3 -visible_gpus 0 -report_every 50 -save_checkpoint_steps 1000 -batch_size 3000 -train_steps 28000 -accum_count 6 -log_file /content/Team36/logs/bertext_log -use_interval true -warmup_steps 10000 -max_pos 512
!python /content/Team36/src/train.py -task ext -mode validate -test_all -batch_size 3000 -test_batch_size 500 -bert_data_path /content/Team36/bert_data/cnndm -log_file /content/Team36/logs/val_abs_bert_cnndm -model_path /content/PreSumm/models_position -sep_optim true -use_interval true -visible_gpus 0 -max_pos 512 -max_length 200 -alpha 0.95 -min_length 50 -result_path /content/Team36/logs/abs_bert_cnndm
###Output
_____no_output_____
###Markdown
Training this model took 5.5h. The scores obtained by the top 3 models on the test dataset:

| Model Step | ROUGE-1 | ROUGE-2 | ROUGE-L |
| --- | --- | --- | --- |
| 26,000 | 29.76 | 8.31 | 27.58 |
| 24,000 | 29.76 | 8.31 | 27.54 |
| 25,000 | 29.82 | 8.32 | 27.57 |
| Mean | 29.78 | 8.31 | 27.56 |

Test pre-trained BertSum models on WikiHow data (out-of-domain)

BertSumExt
###Code
!python /content/Team36/src/train.py -task ext -mode test -test_from /content/Team36/models/bert_ext.pt -batch_size 3000 -test_batch_size 500 -bert_data_path /content/Team36/bert_data/cnndm -log_file /content/Team36/logs/val_abs_bert_cnndm -sep_optim true -use_interval true -visible_gpus 0 -max_pos 512 -max_length 200 -alpha 0.95 -min_length 50 -result_path /content/Team36/results/abs_bert_cnndm
###Output
_____no_output_____
###Markdown
BertSumExtAbs The pre-trained BertSumExtAbs model can be obtained from [here](https://drive.google.com/file/d/1-IKVCtc4Q-BdZpjXc4s70_fRsWnjtYLr/view).
###Code
!python /content/Team36/src/train.py -task abs -mode test -test_from /content/Team36/models/bert_ext_abs.pt -batch_size 3000 -test_batch_size 500 -bert_data_path /content/Team36/bert_data/cnndm -log_file /content/Team36/logs/val_abs_bert_cnndm -sep_optim true -use_interval true -visible_gpus 0 -max_pos 512 -max_length 200 -alpha 0.95 -min_length 50 -result_path /content/Team36/results/abs_bert_cnndm
###Output
_____no_output_____ |
3_nlp/classification/classification_tfidf.ipynb | ###Markdown
Text Classification using Word Counts / TFIDF In this notebook we will be performing text classification by using word counts and frequencies to create numerical feature vectors representing each text document, and then using these features to train a simple classifier. Although simple, we will see that this approach can work very well for classifying text, even compared to more modern document embedding approaches. Our goal will be to classify the articles in the AgNews dataset into their correct category: "World", "Sports", "Business", or "Sci/Tec".

**Notes:**
- This does not need to be run on GPU, but will take ~5 minutes to run
###Code
import os
import numpy as np
import pandas as pd
import string
import time
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from tqdm import tqdm
from sklearn.linear_model import LogisticRegression
import urllib.request
import zipfile
import spacy
from spacy.lang.en.stop_words import STOP_WORDS
from spacy.lang.en import English
#!python -m spacy download en_core_web_sm
nlp = spacy.load('en_core_web_sm')
import nltk
from nltk.stem import WordNetLemmatizer
nltk.download('omw-1.4')
import warnings
warnings.filterwarnings('ignore')
###Output
[nltk_data] Downloading package omw-1.4 to /Users/jjr10/nltk_data...
[nltk_data] Package omw-1.4 is already up-to-date!
###Markdown
Download and prepare data
###Code
# Download the data
if not os.path.exists('../data'):
os.mkdir('../data')
if not os.path.exists('../data/agnews'):
url = 'https://storage.googleapis.com/aipi540-datasets/agnews.zip'
urllib.request.urlretrieve(url,filename='../data/agnews.zip')
zip_ref = zipfile.ZipFile('../data/agnews.zip', 'r')
zip_ref.extractall('../data/agnews')
zip_ref.close()
train_df = pd.read_csv('../data/agnews/train.csv')
test_df = pd.read_csv('../data/agnews/test.csv')
# Combine title and description of article to use as input documents for model
train_df['full_text'] = train_df.apply(lambda x: ' '.join([x['Title'],x['Description']]),axis=1)
test_df['full_text'] = test_df.apply(lambda x: ' '.join([x['Title'],x['Description']]),axis=1)
# Create dictionary to store mapping of labels
ag_news_label = {1: "World",
2: "Sports",
3: "Business",
4: "Sci/Tec"}
train_df.head()
# View a few of the documents
for i in range(5):
print(train_df.iloc[i]['full_text'])
print()
###Output
_____no_output_____
###Markdown
Pre-process text Before we create our features, we first need to pre-process our text. There are several methods to pre-process text; in this example we will perform the following operations on our raw text to prepare it for creating features:
- Tokenize our raw text to break it into a list of substrings. This step primarily splits our text on white space and punctuation. As an example from the [NLTK](https://www.nltk.org/api/nltk.tokenize.html) website:
```
>>> s = "Good muffins cost $3.88\nin New York. Please buy me two of them.\n\nThanks."
>>> word_tokenize(s)
['Good', 'muffins', 'cost', '$', '3.88', 'in', 'New', 'York', '.', 'Please', 'buy', 'me', 'two', 'of', 'them', '.', 'Thanks', '.']
```
- Remove punctuation and stopwords. Stopwords are extremely commonly used words (e.g. "a", "and", "are", "be", "from" ...) that do not provide any useful information to assist in modeling the text.
- Lemmatize the words in each document. Lemmatization uses a morphological analysis of words to remove inflectional endings and return the base or dictionary form of words, called the "lemma". Among other things, this helps by replacing plurals with the singular form, e.g. "dogs" becomes "dog" and "geese" becomes "goose". This is particularly important when we are using word counts or frequency because we want to count the occurrences of "dog" and "dogs" as the same word.

There are several libraries available in Python to process text. Below we have shown how to perform the above operations using two of the most popular: [NLTK](https://www.nltk.org) and [Spacy](https://spacy.io).
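As a quick illustration of the lemmatization step (this cell is not part of the original notebook), the WordNet lemmatizer maps inflected forms back to their lemma:
###Code
# Illustrative lemmatization demo
import nltk
from nltk.stem import WordNetLemmatizer
nltk.download('wordnet')  # WordNet data is required by the lemmatizer

lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize('dogs'))   # -> 'dog'
print(lemmatizer.lemmatize('geese'))  # -> 'goose'
###Output
_____no_output_____
###Markdown
The `tokenize()` function below combines tokenization, stopword/punctuation removal and lemmatization, using either NLTK or spaCy.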
###Code
def tokenize(sentence,method='spacy'):
# Tokenize and lemmatize text, remove stopwords and punctuation
punctuations = string.punctuation
stopwords = list(STOP_WORDS)
if method=='nltk':
wordnet_lemmatizer = WordNetLemmatizer()
tokens = nltk.word_tokenize(sentence,preserve_line=True)
tokens = [word for word in tokens if word not in stopwords and word not in punctuations]
tokens = [wordnet_lemmatizer.lemmatize(word) for word in tokens]
tokens = " ".join([i for i in tokens])
else:
with nlp.select_pipes(enable=['tokenizer','lemmatizer']):
tokens = nlp(sentence)
tokens = [word.lemma_.lower().strip() for word in tokens]
tokens = [word for word in tokens if word not in stopwords and word not in punctuations]
tokens = " ".join([i for i in tokens])
return tokens
# Process the training set text
tqdm.pandas()
train_df['processed_text'] = train_df['full_text'].progress_apply(lambda x: tokenize(x,method='nltk'))
# Process the test set text
tqdm.pandas()
test_df['processed_text'] = test_df['full_text'].progress_apply(lambda x: tokenize(x,method='nltk'))
###Output
100%|██████████| 120000/120000 [00:57<00:00, 2105.24it/s]
100%|██████████| 7600/7600 [00:03<00:00, 2028.36it/s]
###Markdown
Create features using word counts Now that our raw text is pre-processed, we are ready to create our features. There are two approaches to creating features using word counts: **Count Vectorization** and **TFIDF Vectorization**.**Count Vectorization** (also called Bag-of-words) creates a vocabulary of all words appearing in the training corpus, and then for each document it counts up how many times each word in the vocabulary appears in the document. Each document is then represented by a vector with the same length as the vocabulary. At each index position an integer indicates how many times each word appears in the document.**Term Frequency Inverse Document Frequency (TFIDF) Vectorization** first counts the number of times each word appears in a document (similar to Count Vectorization) but then divides by the total number of words in the document to calculate the *term frequency (TF)* of each word. The *inverse document frequency (IDF)* for each word is then calculated as the log of the total number of documents divided by the number of documents containing the word. The TFIDF for each word is then computed by multiplying the term frequency by the inverse document frequency. Each document is represented by a vector containing the TFIDF for every word in the vocabulary, for that document.In the below `build_features()` function, you can specify whether to create document features using Count Vectorization or TFIDF Vectorization.
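Before building features for the full dataset, here is a tiny toy illustration (not part of the original notebook) of the TF and IDF arithmetic described above. Note that scikit-learn's `TfidfVectorizer` uses a smoothed IDF variant and L2-normalises the vectors, so its values will differ slightly from this simple formula.
###Code
# Toy TFIDF calculation following the simple definition given above
import numpy as np

docs = ['the dog chased the cat', 'the dog slept']

def simple_tfidf(word, doc, corpus):
    tf = doc.split().count(word) / len(doc.split())                      # term frequency
    idf = np.log(len(corpus) / sum(word in d.split() for d in corpus))   # inverse document frequency
    return tf * idf

print(simple_tfidf('cat', docs[0], docs))  # rarer word -> non-zero weight
print(simple_tfidf('dog', docs[0], docs))  # appears in every document -> IDF (and TFIDF) of 0
###Output
_____no_output_____
###Markdown
The `build_features()` function below builds the document features for the real dataset using scikit-learn's vectorizers.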
###Code
def build_features(train_data, test_data, ngram_range, method='count'):
    if method == 'tfidf':
        # Create features using TFIDF
        vec = TfidfVectorizer(ngram_range=ngram_range)
    else:
        # Create features using word counts
        vec = CountVectorizer(ngram_range=ngram_range)
    # Fit on the training documents only, then transform both the training and test sets
    X_train = vec.fit_transform(train_data)
    X_test = vec.transform(test_data)
    return X_train, X_test
# Create features
method = 'tfidf'
ngram_range = (1, 2)
X_train,X_test = build_features(train_df['processed_text'],test_df['processed_text'],ngram_range,method)
###Output
_____no_output_____
###Markdown
Train model Now that we have created our features representing each document, we will use them in a simple softmax regression classification model to predict the document's class. We first train the classification model on the training set.
###Code
# Train a classification model using logistic regression classifier
y_train = train_df['Class Index']
logreg_model = LogisticRegression(solver='saga')
logreg_model.fit(X_train,y_train)
preds = logreg_model.predict(X_train)
acc = sum(preds==y_train)/len(y_train)
print('Accuracy on the training set is {:.3f}'.format(acc))
###Output
_____no_output_____
###Markdown
Evaluate model We then evaluate our model on the test set. As you can see, the model performs very well on this task using this simple approach! In general, Count Vectorization / TFIDF Vectorization performs surprisingly well across a broad range of tasks, even compared to more computationally intensive approaches such as document embeddings. This should perhaps not be surprising, since we would expect documents about similar topics to contain similar sets of words.
###Code
# Evaluate accuracy on the test set
y_test = test_df['Class Index']
test_preds = logreg_model.predict(X_test)
test_acc = sum(test_preds==y_test)/len(y_test)
print('Accuracy on the test set is {:.3f}'.format(test_acc))
###Output
Accuracy on the test set is 0.919
|
ML/Dimensionality Reduction/Singular-Value Decomposition (SVD).ipynb | ###Markdown
Sources:
- https://machinelearningmastery.com/singular-value-decomposition-for-machine-learning/

Contents:

**27th Feb 2017**
1. Calculate Singular-Value Decomposition
2. Reconstruct Matrix from SVD

**28th Feb 2017**
3. SVD for Pseudoinverse
4. SVD for Dimensionality Reduction
5. Self Implemented SVD

1. Calculate Singular-Value Decomposition using scipy
###Code
from numpy import array
from scipy.linalg import svd
A = array([[5,3],[1,2],[3,5]])
A.shape
# Singular-value decomposition
U, s, V = svd(A)
U
s
V
###Output
_____no_output_____
###Markdown
2. Reconstruct Matrix from SVD The U, s, and V elements returned from svd() cannot be multiplied together directly; the s vector must first be converted into a diagonal matrix using the diag() function.
###Code
from numpy import diag
from numpy import dot
from numpy import zeros
###Output
_____no_output_____
###Markdown
For the multiplication the shapes must align: ```U (m x m) . Sigma (m x n) . V^T (n x n)``` in the case of a rectangular original matrix.
###Code
# create m x n Sigma matrix
Sigma = zeros((A.shape[0], A.shape[1]))
# (rows, columns)
Sigma
# populate Sigma with n x n diagonal matrix
Sigma[:A.shape[1], :A.shape[1]] = diag(s)
Sigma
###Output
_____no_output_____
###Markdown
The above complication with the Sigma diagonal only exists when m and n are not equal. In the case of a square matrix, we can directly use ```diag(s)```.
###Code
# reconstruct matrix
B = U.dot(Sigma.dot(V))
print(B)
###Output
[[ 5. 3.]
[ 1. 2.]
[ 3. 5.]]
###Markdown
3. SVD for Pseudoinverse
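The original notebook lists this section without code; the cell below is a minimal, illustrative sketch (not from the original) of how the pseudoinverse can be formed from the SVD components, with `numpy.linalg.pinv` used only as a cross-check.
###Code
# Hypothetical sketch: pseudoinverse via SVD, A+ = V . D+ . U^T
from numpy import array, diag, zeros
from numpy.linalg import pinv
from scipy.linalg import svd

A = array([[5, 3], [1, 2], [3, 5]])
U, s, VT = svd(A)
# D+ is n x m, with the reciprocal of each non-zero singular value on its diagonal
D_plus = zeros((A.shape[1], A.shape[0]))
D_plus[:len(s), :len(s)] = diag(1.0 / s)
A_pinv = VT.T.dot(D_plus).dot(U.T)
print(A_pinv)
print(pinv(A))  # should agree with the SVD-based result
###Output
_____no_output_____
###Markdown
4. SVD for Dimensionality Reduction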
###Code
from numpy.linalg import matrix_rank
A = array([
[1,2,3,4,5,6,7,8,9,10],
[11,12,13,14,15,16,17,18,19,20],
[21,22,23,24,25,26,27,28,29,30]])
print('Original Matrix', 'Rank is',matrix_rank(A))
print(A)
# Singular-value decomposition
U, s, V = svd(A)
# create m x n Sigma matrix
Sigma = zeros((A.shape[0], A.shape[1]))
# populate Sigma with n x n diagonal matrix
Sigma[:A.shape[0], :A.shape[0]] = diag(s)
# select
n_elements = 2
Sigma = Sigma[:, :n_elements]
V = V[:n_elements, :]
# reconstruct
B = U.dot(Sigma.dot(V))
print('Reconstructed Matrix', 'Rank is',matrix_rank(B))
print(B)
# transform
T = U.dot(Sigma)
print(T)
T = A.dot(V.T)
print(T)
###Output
Original Matrix Rank is 2
[[ 1 2 3 4 5 6 7 8 9 10]
[11 12 13 14 15 16 17 18 19 20]
[21 22 23 24 25 26 27 28 29 30]]
Reconstructed Matrix Rank is 2
[[ 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.]
[ 11. 12. 13. 14. 15. 16. 17. 18. 19. 20.]
[ 21. 22. 23. 24. 25. 26. 27. 28. 29. 30.]]
[[-18.52157747 6.47697214]
[-49.81310011 1.91182038]
[-81.10462276 -2.65333138]]
[[-18.52157747 6.47697214]
[-49.81310011 1.91182038]
[-81.10462276 -2.65333138]]
|
nbs/ug-07-moe-quantification.ipynb | ###Markdown
Merit Order Effect Quantification [](https://notebooks.gesis.org/binder/v2/gh/AyrtonB/Merit-Order-Effect/main?filepath=nbs%2Fug-07-moe-quantification.ipynb) This notebook outlines how the `moepy` library can be used to quantify the merit order effect of intermittent RES on electricity prices. Please note that the fitted model and estimated results are less accurate than those found in the set of development notebooks: because this notebook is for tutorial purposes, it uses less data and smooths over larger time-periods to reduce computation time. Imports
###Code
import pandas as pd
import numpy as np
import pickle
import seaborn as sns
import matplotlib.pyplot as plt
from moepy import moe
###Output
_____no_output_____
###Markdown
Data Loading We'll first load the data in
###Code
df_EI = pd.read_csv('../data/ug/electric_insights.csv')
df_EI['local_datetime'] = pd.to_datetime(df_EI['local_datetime'], utc=True)
df_EI = df_EI.set_index('local_datetime')
df_EI.head()
###Output
_____no_output_____
###Markdown
Generating Predictions We'll use a helper function to both load in our model and make a prediction in a single step
###Code
model_fp = '../data/ug/GB_detailed_example_model_p50.pkl'
dt_pred = pd.date_range('2020-01-01', '2021-01-01').tz_localize('Europe/London')
df_pred = moe.construct_df_pred(model_fp, dt_pred=dt_pred)
df_pred.head()
###Output
_____no_output_____
###Markdown
We can now use `moe.construct_pred_ts` to generate a prediction time-series from our surface estimation and the observed dispatchable generation
###Code
s_dispatchable = (df_EI_model['demand'] - df_EI_model[['solar', 'wind']].sum(axis=1)).dropna().loc[:df_pred.columns[-2]+pd.Timedelta(hours=23, minutes=30)]
s_pred_ts = moe.construct_pred_ts(s_dispatchable['2020'], df_pred)
s_pred_ts.head()
###Output
_____no_output_____
###Markdown
We can visualise the error distribution to see how our model is performing. To reduce this error, the resolution of the date-smoothing and LOWESS fit can be increased; this is what was done for the research paper and is shown in the set of development notebooks. Looking at 2020 also increases the error somewhat.
###Code
s_price = df_EI['day_ahead_price']
s_err = s_pred_ts - s_price.loc[s_pred_ts.index]
print(s_err.abs().mean())
sns.histplot(s_err)
_ = plt.xlim(-75, 75)
###Output
8.897118237665632
###Markdown
Calculating the MOE To calculate the MOE we have to generate a counterfactual price; in this case it is an estimate of the cost of electricity if RES had not been on the system. Subtracting the simulated price from the counterfactual price results in a time-series of our simulated MOE.
###Code
s_demand = df_EI_model.loc[s_dispatchable.index, 'demand']
s_demand_pred_ts = moe.construct_pred_ts(s_demand['2020'], df_pred)
s_MOE = s_demand_pred_ts - s_pred_ts
s_MOE = s_MOE.dropna()
s_MOE.mean() # N.b for the reasons previously mentioned this particular value is inaccurate
###Output
_____no_output_____ |
study04/Playing_with_model_complexity_and_regularising_models_completed.ipynb | ###Markdown
Reducing architecture complexity
###Code
import torch.nn as nn
from torch.optim import SGD
import torch
class Architecture1(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(Architecture1, self).__init__()
        # Deeper network: two hidden layers before the output layer
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        out = self.relu(out)
        out = self.fc3(out)
        return out
class Architecture2(nn.Module):
def __init__(self, input_size, hidden_size, num_classes):
super(Architecture2, self).__init__()
self.fc1 = nn.Linear(input_size, hidden_size)
self.relu = nn.ReLU()
self.fc2 = nn.Linear(hidden_size, num_classes)
def forward(self, x):
out = self.fc1(x)
out = self.relu(out)
out = self.fc2(out)
return out
###Output
_____no_output_____
###Markdown
Adding regularization (a regularizer) to PyTorch layers
###Code
model = Architecture1(10,20,2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
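
# Illustrative addition (not in the original notebook): weight_decay above corresponds to L2
# regularisation; an L1 penalty can instead be added to the loss by hand, e.g.:
l1_lambda = 1e-5
l1_penalty = sum(p.abs().sum() for p in model.parameters())
print(float(l1_penalty) * l1_lambda)  # this term would be added to the training loss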
###Output
_____no_output_____ |
evaluation_breakfast.ipynb | ###Markdown
Breakfast dataset class
###Code
# NOTE: standard imports used by the classes below; project-specific helpers
# (Video, Npy, Accuracy, ClusterFactory, ExtractorFactory, ...) are assumed to be
# importable from the surrounding project and are not defined in this notebook.
import os
import numpy as np
from pathlib import Path


class Breakfast:
def __init__(self):
self.vd_root_folder = '../BreakfastII_15fps_qvga_sync'
self.gt_paths = self.__get_gt_paths()
self.i3d_feat_paths = self.__get_i3d_features_path()
self.set_labels_mapping()
def set_labels_mapping(self):
action_set = []
action_per_activity = {}
for activity, gtpaths in self.gt_paths.items():
action_per_activity[activity] = []
for gtpath in gtpaths:
gt_info = self.get_gt_info(gtpath)
cur_actions = [seg['action'] for seg in gt_info]
action_set.extend(cur_actions)
action_per_activity[activity].extend(cur_actions)
action_per_activity[activity] = set(action_per_activity[activity])
action_set = list(set(action_set))
action_set.sort()
self.action_label_map = {action: i for i, action in enumerate(action_set)}
self.action_label_map_per_activity = {activity: {action: self.action_label_map[action] for action in actions} for activity, actions in action_per_activity.items()}
def __get_gt_paths(self):
gt_paths = {}
for path, subdirs, files in os.walk('segmentation_coarse'):
split_path = path.split("/")
if len(split_path) == 2:
gt_paths[split_path[-1]] = []
for name in files:
if name.split(".")[-1] == 'txt':
gt_paths[split_path[-1]].append(os.path.join(path, name))
assert sum([len(paths) for _, paths in gt_paths.items()]) == 1712
return gt_paths
def __get_i3d_features_path(self):
feat_paths = []
for path, subdirs, files in os.walk('bf_kinetics_feat'):
for name in files:
feat_paths.append(os.path.join(path, name))
breakfast = {}
for path in feat_paths:
b_type = path.split("_")[-1][:-4]
breakfast[b_type] = breakfast.get(b_type, []) + [path]
assert sum([len(paths) for _, paths in breakfast.items()]) == 1712
return breakfast
def gtpath_to_vdpath(self, gt_path):
splitted_path = gt_path.split("/")[-1].split("_")
pfolder = splitted_path[0]
splitted_path[-1] = splitted_path[-1].split(".")[0]
if 'stereo' in gt_path:
recfolder = splitted_path[1][:-2]
filename = "_".join([splitted_path[0], splitted_path[-1], 'ch1'])
vd_path = "/".join([self.vd_root_folder, pfolder, recfolder, filename + '.avi'])
if Path(vd_path).exists():
return vd_path
else:
filename = "_".join([splitted_path[0], splitted_path[-1], 'ch0'])
vd_path = "/".join([self.vd_root_folder, pfolder, recfolder, filename + '.avi'])
return vd_path
else:
recfolder = splitted_path[1]
filename = "_".join([splitted_path[0], splitted_path[-1]])
return "/".join([self.vd_root_folder, pfolder, recfolder, filename + '.avi'])
def get_gt_info(self, target_path):
""" Reads the ground truth segments and convert them to a 32 fps video """
target_gt_path = target_path
if os.path.exists(target_gt_path) is False:
return False
with open(target_gt_path, 'r') as f:
l1 = []
for ele in f:
line = ele.split('\n')
annotation = line[0].rstrip()
times, action = annotation.split(" ")
start, end = times.split("-")
if start == end:
continue
if start == '1':
start = float(start)
else:
if len(l1) == 0:
start = 1.
else:
start = float(l1[-1]['end']) + 1.
l1.append(dict(start=start, end=float(end), action=action))
return l1
def gt_to_array(self, gt, video_len):
""" Converts the ground truth dict to an array """
frame_wise_gt = []
n_segs = len(gt)
for ith_class, gt_seg in enumerate(gt):
start, end = gt_seg['start'], gt_seg['end']
frame_wise_gt = frame_wise_gt + [self.action_label_map[gt_seg['action']]] * int(end - start + 1)
return np.array(frame_wise_gt[:video_len])
bf = Breakfast()
###Output
_____no_output_____
###Markdown
Segmentation evaluation class
###Code
class EvalSegmentation:
def __init__(self):
self.mofs = {}
self.ious = {}
def setup_key(self, key):
self.mofs[key] = []
self.ious[key] = []
def __call__(self, predicted, ground_truth, key):
acc = Accuracy()
acc.predicted_labels = predicted
acc.gt_labels = ground_truth
acc.mof()
acc.mof_classes()
acc.iou_classes()
stats = acc.stat()['iou']
iou = stats[0] / stats[1]
self.ious[key].append(iou)
self.mofs[key].append(acc.mof_val())
def get_gt2preds_mapping(self, gt, pred):
acc = Accuracy()
acc.predicted_labels = pred
acc.gt_labels = gt
return acc.get_gt2labels_mapping()
def report_results(self):
def calculate(metrics_per_key, metric_name):
print(metric_name)
acc_metric = [] # accumulated metric
for key, metrics_per_sample in metrics_per_key.items():
print(key, np.mean(metrics_per_sample))
acc_metric.append(np.mean(metrics_per_sample))
print(f"Final {metric_name}: {np.mean(acc_metric)}")
calculate(self.ious, 'IoU')
calculate(self.mofs, 'MoF')
_ = EvalSegmentation()
###Output
_____no_output_____
###Markdown
Breakfast segmentation class
###Code
class SegmentBreakfast:
def __init__(self, cluster_strategy: str, **kwargs):
self.breakfast = Breakfast()
self.cluster_strategy = cluster_strategy
self.segmentator = ClusterFactory.get(cluster_strategy)(**kwargs)
self.cluster_type = cluster_strategy
self.segment_root_folder = './segments'
self.eval = EvalSegmentation()
self.npy = Npy()
def map_seglabels_to_framelabels(self, labels, video_len, seg_len=32):
""" Map segments(32 frames pieces of a video) labels to a framewise label """
frames_labels = []
for seg_label in labels:
frames_labels = frames_labels + [seg_label] * seg_len
return np.array(frames_labels[:video_len])
def predpath_root(self, with_pe: bool, feature_extractor: str, frames_window_len: str) -> str:
""" Creates the root folder of a action segmentation prediction """
if with_pe:
predroot_words = [self.segment_root_folder, "pe", feature_extractor, frames_window_len , self.cluster_type]
else:
predroot_words = [self.segment_root_folder, feature_extractor, frames_window_len, self.cluster_type]
return "_".join(predroot_words)
def gtpath_to_predpath(self, feat_path: str, with_pe: bool,
feature_extractor: str, frames_window_len: int) -> str:
""" Transforms a breakfast ground truth path to a segments prediction path """
gt_fname = feat_path.split("/")[-1]
pred_fname = ".".join(gt_fname.split(".")[:-1] + ['npy'])
return "/".join([self.predpath_root(with_pe=with_pe,
feature_extractor=feature_extractor,
frames_window_len=str(frames_window_len)),
pred_fname])
def save_segment(self, predpath: str, seg_pred: np.ndarray) -> None:
""" Save the segmentation prediction of a video in a path """
*predpath_root, pred_fname = predpath.split("/")
predpath_root = "/".join(predpath_root)
self.npy.write(predpath_root, pred_fname, seg_pred)
def get_video_from_gtpath(self, gt_path: str, input_len: int = 32, extraction_strategy: str = ExtractorFactory.SLOWFAST.value):
""" Given the path to the ground truth information of a video this methods returns a Video object """
vd_path = self.breakfast.gtpath_to_vdpath(gt_path)
video = Video(vd_path, "_".join([extraction_strategy, str(input_len)]))
if extraction_strategy == ExtractorFactory.I3D.value:
video.features = lambda with_pe: self.get_i3d_features(vd_path, with_pe=with_pe)
return video
def get_actionpred_from_gtpath(self,
gt_path: str,
feature_extractor: str,
frames_window_len: int,
with_pe: bool = True):
""" Given the path to the ground truth information of a video this methods returns the predicted
for this video
"""
pred_path = self.gtpath_to_predpath(gt_path, with_pe,
feature_extractor,
frames_window_len)
return np.load(pred_path)
def predict_single_video_with_slowfast(self, gt_path: str, pe: bool = True, frames_len: int = 128, red_dim=False):
video = self.get_video_from_gtpath(gt_path, input_len=frames_len)
features = video.features(with_pe=pe, reduce_dim=red_dim)
if self.cluster_strategy == ClusterFactory.OPTICS.value:
segments_pred = self.segmentator.auto(features, fps=video.fps, samples_frame_len=frames_len)
else:
segments_pred = self.segmentator.auto(features)
predpath = self.gtpath_to_predpath(gt_path,
with_pe=pe,
feature_extractor=ExtractorFactory.SLOWFAST.value,
frames_window_len=frames_len)
self.save_segment(predpath, segments_pred)
frames_pred = self.map_seglabels_to_framelabels(segments_pred, len(video))
gt_info = self.breakfast.get_gt_info(gt_path)
gt = self.breakfast.gt_to_array(gt_info, len(video))
if gt.shape[0] > frames_pred.shape[0]:
gt = gt[:frames_pred.shape[0]]
elif gt.shape[0] < frames_pred.shape[0]:
frames_pred = frames_pred[:gt.shape[0]]
self.eval.setup_key('anything')
self.eval(frames_pred, gt, 'anything')
return gt, frames_pred
def with_slowfast_features(self, pe: bool = True, frames_len: int = 32, red_dim: bool = True) -> None:
"""
Generates the action segmentation of the Breakfast dataset with features extracted with the Slowfast
model
"""
for activity, gts in self.breakfast.gt_paths.items():
self.eval.setup_key(activity)
for i, gt_path in enumerate(gts):
video = self.get_video_from_gtpath(gt_path, input_len=frames_len)
features = video.features(with_pe=pe, reduce_dim=red_dim)
if self.cluster_strategy == ClusterFactory.OPTICS.value:
segments_pred = self.segmentator.auto(features, fps=video.fps, samples_frame_len=frames_len)
else:
segments_pred = self.segmentator.auto(features)
predpath = self.gtpath_to_predpath(gt_path,
with_pe=pe,
feature_extractor=ExtractorFactory.SLOWFAST.value,
frames_window_len=frames_len)
self.save_segment(predpath, segments_pred)
frames_pred = self.map_seglabels_to_framelabels(segments_pred, len(video), frames_len)
gt_info = self.breakfast.get_gt_info(gt_path)
gt = self.breakfast.gt_to_array(gt_info, len(video))
if gt.shape[0] > frames_pred.shape[0]:
gt = gt[:frames_pred.shape[0]]
elif gt.shape[0] < frames_pred.shape[0]:
frames_pred = frames_pred[:gt.shape[0]]
self.eval(frames_pred, gt, activity)
print(f"Finished {activity}")
self.eval.report_results()
def get_i3d_features(self, vd_path: str, frames_window_len: int = 10, with_pe: bool = True, red_dim: bool = True) -> np.ndarray:
if frames_window_len == 10:
to_i3d_features = 'i3d_features/i3d'
else:
to_i3d_features = f'i3d_features_{frames_window_len}frames/i3d'
*path, vd_name = vd_path.split("/")
path = '/'.join(path)
vd_name = vd_name.split(".")[0]
flow = vd_name + '_flow.npy'
rgb = vd_name + '_rgb.npy'
path = '/'.join([path, to_i3d_features])
rgb_path = os.path.join(path, rgb)
flow_path = os.path.join(path, flow)
def positional_encoding(data: np.ndarray) -> np.ndarray:
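            # Adds a Transformer-style sinusoidal positional encoding (sin on even feature
            # dimensions, cos on odd ones) so that each segment's temporal position is
            # reflected in its feature vector.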
def get_sinusoid_encoding_table(length, d_model):
def cal_angle(position, hid_idx):
return position / np.power(10000, 2 * (hid_idx // 2) / d_model)
def get_posi_angle_vec(position):
return [cal_angle(position, hid_j) for hid_j in range(d_model)]
sinusoid_table = np.array([get_posi_angle_vec(pos_i) for pos_i in range(length)])
sinusoid_table[:, 0::2] = np.sin(sinusoid_table[:, 0::2]) # dim 2i
sinusoid_table[:, 1::2] = np.cos(sinusoid_table[:, 1::2]) # dim 2i+1
return sinusoid_table
d_model = data.shape[1]
length = data.shape[0]
pe = get_sinusoid_encoding_table(length, d_model)
return data + pe
data = np.concatenate([np.load(flow_path), np.load(rgb_path)], axis=1)
if red_dim:
reducer = PCA(n_components=.999999999)
data = reducer.fit_transform(data)
if with_pe:
data = positional_encoding(data)
return data
def with_i3d_features(self, pe: str = 'SUM', frames_window_len: int=10, red_dim: bool = True):
for activity, gts in self.breakfast.gt_paths.items():
print(activity)
self.eval.setup_key(activity)
for gt_path in gts:
# getting the video path
vd_path = self.breakfast.gtpath_to_vdpath(gt_path)
# getting the numpy array with the features of current path
features = self.get_i3d_features(vd_path,
frames_window_len=frames_window_len,
with_pe=pe, red_dim=red_dim)
video = Video(vd_path, ExtractorFactory.I3D.value)
segments_pred = self.segmentator.auto(features)
predpath = self.gtpath_to_predpath(gt_path,
with_pe=pe,
feature_extractor=ExtractorFactory.I3D.value,
frames_window_len=frames_window_len)
self.save_segment(predpath, segments_pred)
frames_pred = self.map_seglabels_to_framelabels(segments_pred,
len(video),
seg_len=frames_window_len)
gt_info = self.breakfast.get_gt_info(gt_path)
gt = self.breakfast.gt_to_array(gt_info, len(video))
if gt.shape[0] > frames_pred.shape[0]:
gt = gt[:frames_pred.shape[0]]
elif gt.shape[0] < frames_pred.shape[0]:
frames_pred = frames_pred[:gt.shape[0]]
self.eval(frames_pred, gt, activity)
self.eval.report_results()
###Output
_____no_output_____
###Markdown
I3D - 10 frames window FINCH
###Code
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_i3d_features()
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_i3d_features(pe=False)
###Output
scrambledegg
milk
coffee
salat
cereals
juice
sandwich
friedegg
pancake
tea
IoU
scrambledegg 0.18921414600451525
milk 0.3586381401231825
coffee 0.38705092654491946
salat 0.2616861021001436
cereals 0.330020909847416
juice 0.29897349362976666
sandwich 0.27733972591554495
friedegg 0.1818792565554949
pancake 0.13593884227256461
tea 0.3622921919317755
Final IoU: 0.2783033734925323
MoF
scrambledegg 0.5192310435540097
milk 0.5677711787875898
coffee 0.5629839777490341
salat 0.5943123601518914
cereals 0.5510214113225157
juice 0.5925421268980945
sandwich 0.5352234103438274
friedegg 0.4998700888900831
pancake 0.532318135156033
tea 0.5780269649720894
Final MoF: 0.5533300697825168
###Markdown
KMeans
###Code
segment_bf = SegmentBreakfast(ClusterFactory.KMEANS.value, n=2)
segment_bf.with_i3d_features()
segment_bf = SegmentBreakfast(ClusterFactory.KMEANS.value, n=2)
segment_bf.with_i3d_features(pe=False)
###Output
scrambledegg
milk
coffee
salat
cereals
juice
sandwich
friedegg
pancake
tea
IoU
scrambledegg 0.23773258179875392
milk 0.3926960642871622
coffee 0.41546744969680294
salat 0.3197975439885778
cereals 0.3684666238916662
juice 0.3759549375172653
sandwich 0.3554737014925459
friedegg 0.23509185812784084
pancake 0.18310478792763066
tea 0.4244680068499278
Final IoU: 0.33082535555781734
MoF
scrambledegg 0.5437315674665464
milk 0.5422928138423782
coffee 0.5107503183785457
salat 0.5547785338253038
cereals 0.49012816272827775
juice 0.641328629651066
sandwich 0.5360922744382969
friedegg 0.5067045394748768
pancake 0.52628453642531
tea 0.5555501513710838
Final MoF: 0.5407641527601685
###Markdown
Reducing dim
###Code
segment_bf = SegmentBreakfast(ClusterFactory.KMEANS.value, n=2)
segment_bf.with_i3d_features(pe=True, red_dim=True)
###Output
scrambledegg
milk
coffee
salat
cereals
juice
sandwich
friedegg
pancake
tea
IoU
scrambledegg 0.35699515527311915
milk 0.4508489984268366
coffee 0.4389378188120492
salat 0.32153315824803963
cereals 0.4162359325656134
juice 0.451835465666965
sandwich 0.42848261307652075
friedegg 0.3626075726093839
pancake 0.32705640704951616
tea 0.4549318497059475
Final IoU: 0.40094649714339914
MoF
scrambledegg 0.5281391961512378
milk 0.500689696572196
coffee 0.4848902498286102
salat 0.3145778233390751
cereals 0.47912362424917077
juice 0.5613876176790652
sandwich 0.44556943163730567
friedegg 0.3678887991689021
pancake 0.4179699490981002
tea 0.5326053228315147
Final MoF: 0.4632841710555177
###Markdown
I3D - 16 frames window FINCH
###Code
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_i3d_features(pe=True, frames_window_len=16)
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_i3d_features(pe=False, frames_window_len=16)
###Output
scrambledegg
milk
coffee
salat
cereals
juice
sandwich
friedegg
pancake
tea
IoU
scrambledegg 0.18641729424570447
milk 0.3450914832053043
coffee 0.39245268985244264
salat 0.26097604017135506
cereals 0.33703464289510604
juice 0.26490938550865917
sandwich 0.2831137135371198
friedegg 0.1875153250704242
pancake 0.13970961794499093
tea 0.37042557360840217
Final IoU: 0.2767645766039509
MoF
scrambledegg 0.5079768791290823
milk 0.5533516881959863
coffee 0.5492418590011672
salat 0.584286730491668
cereals 0.5311341882126479
juice 0.5796906262132145
sandwich 0.5437091870429099
friedegg 0.4956623148063631
pancake 0.5222455901637358
tea 0.5614082407712337
Final MoF: 0.5428707304028008
###Markdown
KMeans
###Code
segment_bf = SegmentBreakfast(ClusterFactory.KMEANS.value, n=2)
segment_bf.with_i3d_features(pe=True, frames_window_len=16)
segment_bf = SegmentBreakfast(ClusterFactory.KMEANS.value, n=2)
segment_bf.with_i3d_features(pe=False, frames_window_len=16)
###Output
scrambledegg
milk
coffee
salat
cereals
juice
sandwich
friedegg
pancake
tea
IoU
scrambledegg 0.22712040199756284
milk 0.3777805305876598
coffee 0.3957559499356925
salat 0.32686563232221294
cereals 0.35287093931124874
juice 0.3660305101391983
sandwich 0.3370311050124223
friedegg 0.23655873211207973
pancake 0.1863587059982819
tea 0.41381987855703434
Final IoU: 0.32201923859733933
MoF
scrambledegg 0.5443319035706573
milk 0.5342097154865509
coffee 0.50829749295126
salat 0.5701761362207883
cereals 0.5005320185231016
juice 0.6294322044025641
sandwich 0.5256139463019233
friedegg 0.513016878669566
pancake 0.525805346228876
tea 0.5572947473122442
Final MoF: 0.5408710389667531
###Markdown
I3D - 24 frames FINCH
###Code
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_i3d_features(pe=True, frames_window_len=24)
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_i3d_features(pe=False, frames_window_len=24)
###Output
scrambledegg
milk
coffee
salat
cereals
juice
sandwich
friedegg
pancake
tea
IoU
scrambledegg 0.18138798829351416
milk 0.34498239429443833
coffee 0.411827896724777
salat 0.25689765691690253
cereals 0.33794986151037204
juice 0.24508864714163484
sandwich 0.27542476703499524
friedegg 0.180694284880978
pancake 0.1431020130742935
tea 0.4026730830673279
Final IoU: 0.2780028592939233
MoF
scrambledegg 0.49933887118636033
milk 0.5448514990696911
coffee 0.5724631722829714
salat 0.58292511409872
cereals 0.5349544596459055
juice 0.5626550641830533
sandwich 0.5328953907936755
friedegg 0.5008001532216425
pancake 0.541890073898244
tea 0.5771870076949915
Final MoF: 0.5449960806075256
###Markdown
I3D - 32 frames FINCH
###Code
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_i3d_features(pe=True, frames_window_len=32)
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_i3d_features(pe=False, frames_window_len=32)
###Output
scrambledegg
milk
coffee
salat
cereals
juice
sandwich
friedegg
pancake
tea
IoU
scrambledegg 0.18660488802702263
milk 0.36972829088682746
coffee 0.4229261892538202
salat 0.2627030060311495
cereals 0.3650357828288671
juice 0.25254765797244066
sandwich 0.28528184903860493
friedegg 0.180074011085736
pancake 0.14545973533730755
tea 0.3902196727737753
Final IoU: 0.2860581083235552
MoF
scrambledegg 0.5013731072097477
milk 0.5662908539832118
coffee 0.5940380875430484
salat 0.6018035619701078
cereals 0.5627242074963297
juice 0.5792203502307474
sandwich 0.5304315777843219
friedegg 0.49908893468098914
pancake 0.5297070415676083
tea 0.5755540750004486
Final MoF: 0.554023179746656
###Markdown
I3D - 40 frames FINCH
###Code
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_i3d_features(pe=True, frames_window_len=40, red_dim=False)
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_i3d_features(pe=False, frames_window_len=40, red_dim=False)
###Output
scrambledegg
milk
coffee
salat
cereals
juice
sandwich
friedegg
pancake
tea
IoU
scrambledegg 0.19344532955013097
milk 0.3748394677243293
coffee 0.42642600090077354
salat 0.2709563293718855
cereals 0.35792285400842144
juice 0.24683053203722993
sandwich 0.31128834475858475
friedegg 0.18625430500896906
pancake 0.15176136304686907
tea 0.405431832074263
Final IoU: 0.29251563584814566
MoF
scrambledegg 0.5254574573784433
milk 0.5742398880968389
coffee 0.5951981361946335
salat 0.5798371897882431
cereals 0.5774589522176302
juice 0.5715535500843135
sandwich 0.5437571333229426
friedegg 0.4968969078511716
pancake 0.5453491727089785
tea 0.59731800758161
Final MoF: 0.5607066395224806
###Markdown
FINCH I3D - 48 frames
###Code
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_i3d_features(pe=True, frames_window_len=48, red_dim=False)
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_i3d_features(pe=False, frames_window_len=48, red_dim=False)
###Output
scrambledegg
milk
coffee
salat
cereals
juice
sandwich
friedegg
pancake
tea
IoU
scrambledegg 0.1824915579226137
milk 0.40565532030451695
coffee 0.4553261635633942
salat 0.2678445839038869
cereals 0.3589322221916683
juice 0.26436889735101304
sandwich 0.3023692947650875
friedegg 0.18512751669269104
pancake 0.15755464724558962
tea 0.40781522546796184
Final IoU: 0.29874854294084235
MoF
scrambledegg 0.516725854224287
milk 0.6067808037127299
coffee 0.6524835973408804
salat 0.5946101038813856
cereals 0.5852480698960892
juice 0.5892995288409527
sandwich 0.5511848360285709
friedegg 0.5007797110919967
pancake 0.5350893399766907
tea 0.6105676846476716
Final MoF: 0.5742769529641254
###Markdown
I3D - 64 frames
###Code
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_i3d_features(pe=True, frames_window_len=64, red_dim=False)
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_i3d_features(pe=False, frames_window_len=64, red_dim=False)
###Output
scrambledegg
milk
coffee
salat
cereals
juice
sandwich
friedegg
pancake
tea
IoU
scrambledegg 0.1800078699612604
milk 0.3997298432755986
coffee 0.4587230091076095
salat 0.26752119281576814
cereals 0.34414650885304726
juice 0.2687734358427447
sandwich 0.33075185995782524
friedegg 0.2053331147682564
pancake 0.15027052029843882
tea 0.3567227560584269
Final IoU: 0.2961980110938976
MoF
scrambledegg 0.5055273754362944
milk 0.6285175040539531
coffee 0.6690957578529402
salat 0.5793138576674712
cereals 0.6056809317913394
juice 0.5797236401405087
sandwich 0.573550785455343
friedegg 0.5167649137250405
pancake 0.5424050894092208
tea 0.598950579517204
Final MoF: 0.5799530435049316
###Markdown
Slowfast FINCH - 32
###Code
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_slowfast_features(pe=True, red_dim=False)
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_slowfast_features(pe=False, red_dim=False)
###Output
Finished scrambledegg
Finished milk
Finished coffee
Finished salat
Finished cereals
Finished juice
Finished sandwich
Finished friedegg
Finished pancake
Finished tea
IoU
scrambledegg 0.17910730440437259
milk 0.34681125806534835
coffee 0.39999191205664386
salat 0.24918041308292846
cereals 0.34299084404784513
juice 0.2669801170377771
sandwich 0.2690645226206919
friedegg 0.19053535979465974
pancake 0.12971173872166816
tea 0.3809278474651084
Final IoU: 0.2755301317297044
MoF
scrambledegg 0.4954569915222257
milk 0.5566612562028692
coffee 0.5553917082762599
salat 0.5473122808882099
cereals 0.5489083284127471
juice 0.5801050729607882
sandwich 0.5211971755690568
friedegg 0.50300176255172
pancake 0.5171765229510886
tea 0.5735655083762751
Final MoF: 0.539877660771124
###Markdown
FINCH - 40
###Code
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_slowfast_features(pe=True, frames_len=40, red_dim=False)
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_slowfast_features(pe=False, frames_len=40, red_dim=False)
###Output
Finished scrambledegg
Finished milk
Finished coffee
Finished salat
Finished cereals
Finished juice
Finished sandwich
Finished friedegg
Finished pancake
Finished tea
IoU
scrambledegg 0.1754329460412568
milk 0.34710805609794865
coffee 0.39323352538900397
salat 0.24463026578526195
cereals 0.33584138884379205
juice 0.26403360389062386
sandwich 0.25549985656100516
friedegg 0.172932285366102
pancake 0.13919499814938452
tea 0.3664703538295595
Final IoU: 0.2694377279953938
MoF
scrambledegg 0.5065905012666967
milk 0.5482981637304515
coffee 0.5862080421277331
salat 0.5524592541163669
cereals 0.5451541901467974
juice 0.571154208554819
sandwich 0.5104357364012778
friedegg 0.4918303125059761
pancake 0.5195161425313164
tea 0.5576192081659923
Final MoF: 0.5389265759547428
###Markdown
FINCH - 48
###Code
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_slowfast_features(pe=True, frames_len=48, red_dim=False)
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_slowfast_features(pe=False, frames_len=48, red_dim=False)
###Output
Finished scrambledegg
Finished milk
Finished coffee
Finished salat
Finished cereals
Finished juice
Finished sandwich
Finished friedegg
Finished pancake
Finished tea
IoU
scrambledegg 0.1805710375389745
milk 0.35379434834701884
coffee 0.37733209275119045
salat 0.24608821506876744
cereals 0.3361293378553069
juice 0.2801835288527378
sandwich 0.26667786589043047
friedegg 0.1799595562756373
pancake 0.13273859870644458
tea 0.35241899497936563
Final IoU: 0.2705893576265874
MoF
scrambledegg 0.4992683018693195
milk 0.5550578925518935
coffee 0.5891070898904247
salat 0.5632988249862979
cereals 0.5560507876608083
juice 0.5685658568846984
sandwich 0.5094569262340629
friedegg 0.5021208518166973
pancake 0.5049685026732921
tea 0.5592904515634575
Final MoF: 0.5407185486130952
###Markdown
FINCH - 64
###Code
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_slowfast_features(pe=True, frames_len=64, red_dim=False)
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_slowfast_features(pe=False, frames_len=64, red_dim=False)
###Output
Finished scrambledegg
Finished milk
Finished coffee
Finished salat
Finished cereals
Finished juice
Finished sandwich
Finished friedegg
Finished pancake
Finished tea
IoU
scrambledegg 0.17453424910472193
milk 0.3172769960618628
coffee 0.360842268112025
salat 0.23870055176777094
cereals 0.28842746430139465
juice 0.28390923807093177
sandwich 0.268484920660265
friedegg 0.1830144416342736
pancake 0.13782477391809822
tea 0.3181811771009594
Final IoU: 0.2571196080732303
MoF
scrambledegg 0.4887848337530459
milk 0.5503625397965588
coffee 0.5929952328237796
salat 0.5544017726962185
cereals 0.542025542584025
juice 0.5556718787593127
sandwich 0.5056382452350505
friedegg 0.49894827750382176
pancake 0.5152297448555769
tea 0.5615716932322351
Final MoF: 0.5365629761239625
###Markdown
FINCH - 72
###Code
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_slowfast_features(pe=True, frames_len=72, red_dim=False)
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_slowfast_features(pe=False, frames_len=72, red_dim=False)
###Output
Finished scrambledegg
Finished milk
Finished coffee
Finished salat
Finished cereals
Finished juice
Finished sandwich
Finished friedegg
Finished pancake
Finished tea
IoU
scrambledegg 0.1773738890977334
milk 0.3149359969467909
coffee 0.3525413156025602
salat 0.24027617044397442
cereals 0.25986596513209137
juice 0.2776182968796403
sandwich 0.2785218132034357
friedegg 0.1808178675266909
pancake 0.12819397429773408
tea 0.31484438713976387
Final IoU: 0.2524989676270415
MoF
scrambledegg 0.5015556331416282
milk 0.5505793111919657
coffee 0.6087567043716879
salat 0.5550259917330548
cereals 0.5289440503976364
juice 0.5506186107450389
sandwich 0.5247052728808901
friedegg 0.4918504945393898
pancake 0.5161551836093423
tea 0.564401456828299
Final MoF: 0.5392592709438933
###Markdown
FINCH - 80
###Code
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_slowfast_features(pe=True, frames_len=80, red_dim=False)
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_slowfast_features(pe=False, frames_len=80, red_dim=False)
###Output
Finished scrambledegg
Finished milk
Finished coffee
Finished salat
Finished cereals
Finished juice
Finished sandwich
Finished friedegg
Finished pancake
Finished tea
IoU
scrambledegg 0.18127788250436455
milk 0.29821333435587866
coffee 0.30173597016111725
salat 0.23575857948307857
cereals 0.25557765661100224
juice 0.2507109450801254
sandwich 0.2647575362701726
friedegg 0.18651445676093845
pancake 0.1344460178682411
tea 0.28609678194939486
Final IoU: 0.23950891610443134
MoF
scrambledegg 0.49700412903539093
milk 0.548367770465921
coffee 0.5751392833252488
salat 0.5416864501391122
cereals 0.5296813832625672
juice 0.5331568267503928
sandwich 0.5124752151124871
friedegg 0.48880489332379673
pancake 0.5127855107403552
tea 0.5540969411611034
Final MoF: 0.5293198403316375
###Markdown
FINCH - 128
###Code
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_slowfast_features(pe=True, frames_len=128, red_dim=False)
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_slowfast_features(pe=False, frames_len=128, red_dim=False)
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_slowfast_features(pe=True, red_dim=True)
segment_bf = SegmentBreakfast(ClusterFactory.FINCH.value, distance='euclidean')
segment_bf.with_slowfast_features(pe=False, red_dim=True)
###Output
Finished scrambledegg
Finished milk
Finished coffee
Finished salat
Finished cereals
Finished juice
Finished sandwich
Finished friedegg
Finished pancake
Finished tea
IoU
scrambledegg 0.17910730440437259
milk 0.34681125806534835
coffee 0.39999191205664386
salat 0.24918041308292846
cereals 0.34299084404784513
juice 0.2669801170377771
sandwich 0.2690645226206919
friedegg 0.19053535979465974
pancake 0.12971173872166816
tea 0.3809278474651084
Final IoU: 0.2755301317297044
MoF
scrambledegg 0.4954569915222257
milk 0.5566612562028692
coffee 0.5553917082762599
salat 0.5473122808882099
cereals 0.5489083284127471
juice 0.5801050729607882
sandwich 0.5211971755690568
friedegg 0.50300176255172
pancake 0.5171765229510886
tea 0.5735655083762751
Final MoF: 0.539877660771124
###Markdown
KMeans
###Code
segment_bf = SegmentBreakfast(ClusterFactory.KMEANS.value, n=2)
segment_bf.with_slowfast_features()
segment_bf = SegmentBreakfast(ClusterFactory.KMEANS.value, n=2)
segment_bf.with_slowfast_features(pe=False)
###Output
Finished scrambledegg
Finished milk
Finished coffee
Finished salat
Finished cereals
Finished juice
Finished sandwich
Finished friedegg
Finished pancake
Finished tea
IoU
scrambledegg 0.24679060606610095
milk 0.4399734233905062
coffee 0.4402146940239242
salat 0.31909811196789123
cereals 0.3835579756738492
juice 0.4035224459226943
sandwich 0.3528603469293504
friedegg 0.2314953544660336
pancake 0.18650186721636308
tea 0.4385576619381542
Final IoU: 0.3442572487594867
MoF
scrambledegg 0.5609875535545099
milk 0.5687127309120219
coffee 0.5634876716825576
salat 0.5603550862927439
cereals 0.5318377012041063
juice 0.6391334937169134
sandwich 0.5428628741205948
friedegg 0.5009351399052628
pancake 0.5336146762143683
tea 0.5741644175282903
Final MoF: 0.5576091345131369
###Markdown
Visualizing embeddings
###Code
import matplotlib.colors as mcolors
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from random import randint
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
def reduce(features, n_components, algo):
x_red = algo(n_components=n_components).fit_transform(features)
return x_red
def get_colors(pred):
labels = list(np.unique(pred))
colors = [hex for name, hex in mcolors.TABLEAU_COLORS.items()]
return {label: colors[i] for i, label in enumerate(labels)}
def get_feat_and_preds(gt_path, cluster, extractor, window_len, pe):
if cluster == 'KMEANS':
seg_breakfast = SegmentBreakfast(cluster, n=2)
else:
seg_breakfast = SegmentBreakfast(cluster, distance='euclidean')
video = seg_breakfast.get_video_from_gtpath(gt_path, input_len=window_len)
features = video.features(with_pe=pe, reduce_dim=False)
if extractor == 'I3D':
vd_path = seg_breakfast.breakfast.gtpath_to_vdpath(gt_path)
features = seg_breakfast.get_i3d_features(vd_path, frames_window_len=window_len, with_pe=pe)
else:
features = video.features(with_pe=pe)
pred = seg_breakfast.get_actionpred_from_gtpath(gt_path, extractor,
window_len, pe)
print(f'Number of predicted labels {len(np.unique(pred))}')
gt = get_gt_mapped_to_segments(video, gt_path, window_len, seg_breakfast)
return features, pred, gt
def get_gt_mapped_to_segments(video, gt_path, window_len, seg_breakfast):
gt_info = seg_breakfast.breakfast.get_gt_info(gt_path)
gt = seg_breakfast.breakfast.gt_to_array(gt_info, len(video))
print(f'Number of ground truth labels {len(np.unique(gt))}')
mapped = []
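    # Majority vote: each consecutive window of frames is assigned the most frequent
    # ground-truth label found inside that window.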
for i in range(0, len(video), window_len):
seg = gt[i: i + window_len - 1]
labels, counts = np.unique(seg, return_counts=True)
index = np.argmax(counts)
label_mode = labels[index]
mapped.append(label_mode)
return np.array(mapped)
def plot_segmentation(features, predictions, algo=PCA, n_comp=3):
features_red = reduce(features, n_components=n_comp, algo=algo)
fig = plt.figure(figsize=(16, 9))
ax = fig.add_subplot(111, projection='3d')
colors = get_colors(predictions)
for i, c in enumerate(features_red):
label = predictions[i]
ax.scatter(c[0], c[1], c[2], c=colors[label])
plt.show()
features, preds, gt = get_feat_and_preds(path, ClusterFactory.FINCH.value,
ExtractorFactory.SLOWFAST.value, 64, False)
print(features.shape, preds.shape, gt.shape)
plot_segmentation(features, preds)
plot_segmentation(features, gt)
features, preds, gt = get_feat_and_preds(path, ClusterFactory.FINCH.value,
ExtractorFactory.SLOWFAST.value, 32, True)
# plot_segmentation(features, preds)
plot_segmentation(features, gt)
features, preds, gt = get_feat_and_preds(path, ClusterFactory.FINCH.value,
ExtractorFactory.SLOWFAST.value, 32, False)
# plot_segmentation(features, preds)
plot_segmentation(features, gt)
features, preds, gt = get_feat_and_preds(juice_gt_path, ClusterFactory.FINCH.value,
ExtractorFactory.I3D.value, 24, True)
plot_segmentation(features, preds)
features, preds, gt = get_feat_and_preds(juice_gt_path, ClusterFactory.FINCH.value,
ExtractorFactory.I3D.value, 24, False)
plot_segmentation(features, preds)
###Output
_____no_output_____ |
exercises/E18_ClassHomeworksAnalysis.ipynb | ###Markdown
Exercise 18 Analyze class homeworks
###Code
import pandas as pd
import os
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier, BaggingRegressor, BaggingClassifier
from sklearn.metrics import r2_score, roc_auc_score, make_scorer
from sklearn.model_selection import train_test_split, GridSearchCV, KFold
from sklearn.linear_model import LogisticRegression, SGDClassifier, LogisticRegressionCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn import linear_model, svm
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="ticks", color_codes=True)
from nltk.stem import PorterStemmer, SnowballStemmer
from nltk import word_tokenize,sent_tokenize
import pickle
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
import warnings
warnings.filterwarnings('ignore')
data = pd.read_excel('E18.xlsx')
data.head()
###Output
_____no_output_____
###Markdown
Exercise 18.1 Analyze the writing patterns of each student
###Code
data.describe()
# First, handle the NaN values so that the texts can be analyzed
data = data.replace(np.nan, 'empty', regex=True)
data.head()
# Concatenate all the texts to evaluate patterns
data['All_text'] = data['T1']+" "+data['T2']+" "+data['T3']+" "+data['T4']+" "+data['T5']+" "+data['T6']
data[['Sexo','All_text']].head()
cadena=data.iloc[0,5]
print(cadena)
freq=str(len(cadena.split(" ")))
print(freq)
data['Freq_word']=data.apply(lambda x: str(len(x['All_text'].split(" "))), axis=1)
data[['All_text','Freq_word']].head()
# library and dataset
import seaborn as sns
import numpy as np
y_text = np.array(data["Freq_word"]).astype(float)
# sns.boxplot(y=y_text)
# Grouped boxplot
sns.boxplot(x=data["Sexo"], y=y_text, palette="Set1")
#sns.plt.show()
# Large bandwidth
sns.distplot(y_text, color="red")
###Output
_____no_output_____
###Markdown
Exercise 18.2 Evaluate the similarities of the homeworks of the students. Tip: https://github.com/orsinium/textdistance
###Code
# !pip install textdistance
import textdistance
def similarity(data):
simil = pd.DataFrame(0, index=data.index, columns=data.index)
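    # Pairwise normalized Hamming similarity between all texts; off-diagonal entries
    # are scaled by 100 and the diagonal by 10, presumably so self-similarity does not
    # dominate the heatmap drawn below.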
for i in simil.index:
for j in simil.index:
if i==j: simil.loc[j,i]=textdistance.hamming.normalized_similarity(data.All_text[i],data.All_text[j])*10
if i!=j: simil.loc[j,i]=textdistance.hamming.normalized_similarity(data.All_text[i],data.All_text[j])*100
assert simil.shape == (data.shape[0], data.shape[0])
return simil
similarity(data)
simil=similarity(data)
# Set background color / chart style
sns.set_style(style = 'white')
# Set up matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
# Add diverging colormap
cmap = sns.diverging_palette(0, 250, as_cmap=True)
# Draw correlation plot
sns.heatmap(simil, cmap=cmap,
square=True,
linewidths=.5, cbar_kws={"shrink": .5}, ax=ax)
###Output
_____no_output_____
###Markdown
Exercise 18.3 Create a classifier to predict the sex of each student
###Code
# First, create a new binary variable for gender
data['BiSexo'] = np.where(data['Sexo']=='H', 1, 0)
data.head()
# Define X and y for the model and split into train and test
X = data.drop(['Sexo','BiSexo'], axis=1)
y = data['BiSexo']
def tokenize_test(vect):
X_vec = vect.fit_transform(data['All_text'])
print('Features: ', X_vec.shape)
return X_vec
from nltk.stem import WordNetLemmatizer
wordnet_lemmatizer = WordNetLemmatizer()
import nltk
nltk.download('wordnet')
def split_into_lemmas(text):
text = text.lower()
words = text.split()
#print(words)
return [wordnet_lemmatizer.lemmatize(word) for word in words]
vect = TfidfVectorizer(max_df=0.4,stop_words=['-','.'],ngram_range=(1, 3),analyzer=split_into_lemmas,norm='l1')
X_vec=tokenize_test(vect)
X_train, X_test, y_train, y_test = train_test_split(X_vec, y, random_state=42)
clf = OneVsRestClassifier(RandomForestClassifier(n_jobs=-1, n_estimators=100, max_depth=30,random_state=1))
clf.fit(X_train, y_train)
y_pred = clf.predict_proba(X_test)
y_pred=pd.DataFrame(y_pred)
y_pred=y_pred.iloc[:,0]
print(y_pred)
print(y_test)
y_test.shape
y_pred.shape
roc_auc_score(y_test, y_pred, average='macro')
###Output
_____no_output_____ |
10.Applied_Data_Science_Capstone/Week 3 Interactive Visual Analytics and Dashboard/Interactive Visual Analytics and Dashboard .ipynb | ###Markdown
**Launch Sites Locations Analysis with Folium** Estimated time needed: **40** minutes The launch success rate may depend on many factors such as payload mass, orbit type, and so on. It may also depend on the location and proximities of a launch site, i.e., the initial position of rocket trajectories. Finding an optimal location for building a launch site certainly involves many factors and hopefully we could discover some of the factors by analyzing the existing launch site locations. In the previous exploratory data analysis labs, you have visualized the SpaceX launch dataset using `matplotlib` and `seaborn` and discovered some preliminary correlations between the launch site and success rates. In this lab, you will be performing more interactive visual analytics using `Folium`. Objectives This lab contains the following tasks:* **TASK 1:** Mark all launch sites on a map* **TASK 2:** Mark the success/failed launches for each site on the map* **TASK 3:** Calculate the distances between a launch site to its proximitiesAfter completed the above tasks, you should be able to find some geographical patterns about launch sites. Let's first import required Python packages for this lab:
###Code
!pip3 install folium
!pip3 install wget
import folium
import wget
import pandas as pd
# Import folium MarkerCluster plugin
from folium.plugins import MarkerCluster
# Import folium MousePosition plugin
from folium.plugins import MousePosition
# Import folium DivIcon plugin
from folium.features import DivIcon
###Output
_____no_output_____
###Markdown
If you need to refresh your memory about folium, you may download and refer to this previous folium lab: [Generating Maps with Python](https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module\_3/DV0101EN-3-5-1-Generating-Maps-in-Python-py-v2.0.ipynb) Task 1: Mark all launch sites on a map First, let's try to add each site's location on a map using site's latitude and longitude coordinates The following dataset with the name `spacex_launch_geo.csv` is an augmented dataset with latitude and longitude added for each site.
###Code
# Download and read the `spacex_launch_geo.csv`
spacex_csv_file = wget.download('https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/spacex_launch_geo.csv')
spacex_df=pd.read_csv(spacex_csv_file)
###Output
100% [................................................................................] 8966 / 8966
###Markdown
Now, you can take a look at what are the coordinates for each site.
###Code
# Select relevant sub-columns: `Launch Site`, `Lat(Latitude)`, `Long(Longitude)`, `class`
spacex_df = spacex_df[['Launch Site', 'Lat', 'Long', 'class']]
launch_sites_df = spacex_df.groupby(['Launch Site'], as_index=False).first()
launch_sites_df = launch_sites_df[['Launch Site', 'Lat', 'Long']]
launch_sites_df
###Output
_____no_output_____
###Markdown
The coordinates above are just plain numbers that cannot give you any intuitive insight into where those launch sites are. If you are very good at geography, you can interpret those numbers directly in your mind. If not, that's fine too. Let's visualize those locations by pinning them on a map. We first need to create a folium `Map` object, with an initial center location at NASA Johnson Space Center in Houston, Texas.
###Code
# Start location is NASA Johnson Space Center
nasa_coordinate = [29.559684888503615, -95.0830971930759]
site_map = folium.Map(location=nasa_coordinate, zoom_start=10)
###Output
_____no_output_____
###Markdown
We could use `folium.Circle` to add a highlighted circle area with a text label on a specific coordinate. For example,
###Code
# Create a blue circle at NASA Johnson Space Center's coordinate with a popup label showing its name
circle = folium.Circle(nasa_coordinate, radius=1000, color='#d35400', fill=True).add_child(folium.Popup('NASA Johnson Space Center'))
# Create a blue circle at NASA Johnson Space Center's coordinate with a icon showing its name
marker = folium.map.Marker(
nasa_coordinate,
# Create an icon as a text label
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#d35400;"><b>%s</b></div>' % 'NASA JSC',
)
)
site_map.add_child(circle)
site_map.add_child(marker)
###Output
_____no_output_____
###Markdown
and you should find a small yellow circle near the city of Houston and you can zoom-in to see a larger circle. Now, let's add a circle for each launch site in data frame `launch_sites` *TODO:* Create and add `folium.Circle` and `folium.Marker` for each launch site on the site map An example of folium.Circle: `folium.Circle(coordinate, radius=1000, color='000000', fill=True).add_child(folium.Popup(...))` An example of folium.Marker: `folium.map.Marker(coordinate, icon=DivIcon(icon_size=(20,20),icon_anchor=(0,0), html='%s' % 'label', ))`
###Code
# Initial the map
site_map = folium.Map(location=nasa_coordinate, zoom_start=5)
# For each launch site, add a Circle object based on its coordinate (Lat, Long) values. In addition, add Launch site name as a popup label
lat = list(zip(list(launch_sites_df["Lat"]),list(launch_sites_df["Long"])))
locations = launch_sites_df["Launch Site"]
incidents = folium.map.FeatureGroup()
for i in range(len(locations)):
incidents.add_child(
folium.features.CircleMarker(
lat[i],
radius=10, # define how big you want the circle markers to be
color='yellow',
fill=True,
fill_color='yellow',
fill_opacity=0.6,
)
)
site_map.add_child(incidents)
###Output
_____no_output_____
###Markdown
The generated map with marked launch sites should look similar to the following: Now, you can explore the map by zoom-in/out the marked areas, and try to answer the following questions:* Are all launch sites in proximity to the Equator line?* Are all launch sites in very close proximity to the coast?Also please try to explain your findings. Task 2: Mark the success/failed launches for each site on the map Next, let's try to enhance the map by adding the launch outcomes for each site, and see which sites have high success rates.Recall that data frame spacex_df has detailed launch records, and the `class` column indicates if this launch was successful or not
###Code
spacex_df.tail(10)
###Output
_____no_output_____
###Markdown
Next, let's create markers for all launch records. If a launch was successful `(class=1)`, we use a green marker, and if a launch failed, we use a red marker `(class=0)`. Note that a launch only happens at one of the four launch sites, which means many launch records will have the exact same coordinate. Marker clusters can be a good way to simplify a map containing many markers with the same coordinate. Let's first create a `MarkerCluster` object
###Code
marker_cluster = MarkerCluster()
###Output
_____no_output_____
###Markdown
*TODO:* Create a new column in `launch_sites` dataframe called `marker_color` to store the marker colors based on the `class` value
###Code
# Function to assign color to launch outcome
def assign_marker_color(launch_outcome):
if launch_outcome == 1:
return 'green'
else:
return 'red'
spacex_df['marker_color'] = spacex_df['class'].apply(assign_marker_color)
spacex_df.tail(10)
###Output
_____no_output_____
###Markdown
*TODO:* For each launch result in `spacex_df` data frame, add a `folium.Marker` to `marker_cluster`
###Code
# Add marker_cluster to current site_map
site_map.add_child(marker_cluster)
# for each row in spacex_df data frame
# create a Marker object with its coordinate
# and customize the Marker's icon property to indicate if this launch succeeded or failed,
# e.g., icon=folium.Icon(color='white', icon_color=row['marker_color'])
for index, record in spacex_df.iterrows():
marker = folium.Marker([record['Lat'], record['Long']],
icon=folium.Icon(color='white', icon_color=record['marker_color']))
marker_cluster.add_child(marker)
site_map
###Output
_____no_output_____
###Markdown
Your updated map may look like the following screenshots: From the color-labeled markers in marker clusters, you should be able to easily identify which launch sites have relatively high success rates. TASK 3: Calculate the distances between a launch site to its proximities Next, we need to explore and analyze the proximities of launch sites. Let's first add a `MousePosition` on the map to get coordinate for a mouse over a point on the map. As such, while you are exploring the map, you can easily find the coordinates of any points of interests (such as railway)
###Code
# Add Mouse Position to get the coordinate (Lat, Long) for a mouse over on the map
formatter = "function(num) {return L.Util.formatNum(num, 5);};"
mouse_position = MousePosition(
position='topright',
separator=' Long: ',
empty_string='NaN',
lng_first=False,
num_digits=20,
prefix='Lat:',
lat_formatter=formatter,
lng_formatter=formatter,
)
site_map.add_child(mouse_position)
site_map
###Output
_____no_output_____
###Markdown
Now zoom in to a launch site and explore its proximity to see if you can easily find any railway, highway, coastline, etc. Move your mouse to these points and mark down their coordinates (shown on the top-left) in order to calculate the distance to the launch site. You can calculate the distance between two points on the map based on their `Lat` and `Long` values using the following method:
###Code
from math import sin, cos, sqrt, atan2, radians
def calculate_distance(lat1, lon1, lat2, lon2):
# approximate radius of earth in km
R = 6373.0
lat1 = radians(lat1)
lon1 = radians(lon1)
lat2 = radians(lat2)
lon2 = radians(lon2)
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2
c = 2 * atan2(sqrt(a), sqrt(1 - a))
distance = R * c
return distance
###Output
_____no_output_____
###Markdown
*TODO:* Mark down a point on the closest coastline using MousePosition and calculate the distance between the coastline point and the launch site.
###Code
# find coordinate of the closest coastline
# e.g.,: Lat: 28.56367 Lon: -80.57163
# distance_coastline = calculate_distance(launch_site_lat, launch_site_lon, coastline_lat, coastline_lon)
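# Illustrative sketch (an assumption, not the official lab solution): the launch-site
# point reuses the coordinate (28.56342, -80.57674) used in the cells further down,
# and the coastline point is the example coordinate given above.
launch_site_lat, launch_site_lon = 28.56342, -80.57674
coastline_lat, coastline_lon = 28.56367, -80.57163
distance_coastline = calculate_distance(launch_site_lat, launch_site_lon, coastline_lat, coastline_lon)
print("Distance to coastline: {:.2f} km".format(distance_coastline))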
###Output
_____no_output_____
###Markdown
*TODO:* After obtained its coordinate, create a `folium.Marker` to show the distance
###Code
# Create and add a folium.Marker on your selected closest coastline point on the map
# Display the distance between coastline point and launch site using the icon property
# for example
# distance_marker = folium.Marker(
# coordinate,
# icon=DivIcon(
# icon_size=(20,20),
# icon_anchor=(0,0),
# html='<div style="font-size: 12; color:#d35400;"><b>%s</b></div>' % "{:10.2f} KM".format(distance),
# )
# )
coordinates = [
[28.56342, -80.57674],
[28.56342, -80.56756]]
lines=folium.PolyLine(locations=coordinates, weight=1)
site_map.add_child(lines)
distance = calculate_distance(coordinates[0][0], coordinates[0][1], coordinates[1][0], coordinates[1][1])
distance_circle = folium.Marker(
[28.56342, -80.56794],
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#d35400;"><b>%s</b></div>' % "{:10.2f} KM".format(distance),
)
)
site_map.add_child(distance_circle)
site_map
###Output
_____no_output_____
###Markdown
*TODO:* Draw a `PolyLine` between a launch site to the selected coastline point
###Code
# Create a `folium.PolyLine` object using the coastline coordinates and launch site coordinate
# lines=folium.PolyLine(locations=coordinates, weight=1)
site_map.add_child(lines)
coordinates = [
[28.56342, -80.57674],
[28.411780, -80.820630]]
lines=folium.PolyLine(locations=coordinates, weight=1)
site_map.add_child(lines)
distance = calculate_distance(coordinates[0][0], coordinates[0][1], coordinates[1][0], coordinates[1][1])
distance_circle = folium.Marker(
[28.411780, -80.820630],
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#252526;"><b>%s</b></div>' % "{:10.2f} KM".format(distance),
)
)
site_map.add_child(distance_circle)
site_map
###Output
_____no_output_____
###Markdown
Your updated map with distance line should look like the following screenshot: *TODO:* Similarly, you can draw a line betwee a launch site to its closest city, railway, highway, etc. You need to use `MousePosition` to find the their coordinates on the map first A railway map symbol may look like this: A highway map symbol may look like this: A city map symbol may look like this:
###Code
# Create a marker with distance to a closest city, railway, highway, etc.
# Draw a line between the marker to the launch site
coordinates = [
[28.56342, -80.57674],
[28.5383, -81.3792]]
lines=folium.PolyLine(locations=coordinates, weight=1)
site_map.add_child(lines)
distance = calculate_distance(coordinates[0][0], coordinates[0][1], coordinates[1][0], coordinates[1][1])
distance_circle = folium.Marker(
[28.5383, -81.3792],
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#252526;"><b>%s</b></div>' % "{:10.2f} KM".format(distance),
)
)
site_map.add_child(distance_circle)
site_map
###Output
_____no_output_____ |
notebooks/Q2-MDG.ipynb | ###Markdown
Analysis of Q2 on the Maven Dependency Graph (MDG) This document is the second chapter of a set of notebooks that accompany the paper "Breaking Bad? Semantic Versioning and Impact of Breaking Changes in Maven Central". In this chapter, we investigate Q2 and its corresponding null hypothesis on the Maven Dependency Graph (MDG). **Q2**: To what extent has the adherence to semantic versioning principles increased over time? **H$_2$**: The adherence to semantic versioning principles has increased over time. **Note**: To have access to the Exploratory Data Analysis of the dataset, please refer to the notebook Q1-MDG. --- Table of Contents Setup Dataset Load Dataset Clean Dataset Finalize Dataset Dataset Summary Results Data Preparation Histograms Line Plot EOF --- Setup
###Code
# Import required libraries
library(ggplot2)
library(tidyverse)
library(gridExtra)
library(ggthemes)
# Set theme
theme_set(theme_stata())
###Output
Warning message:
“package ‘ggplot2’ was built under R version 3.5.2”
Warning message:
“package ‘tidyverse’ was built under R version 3.5.2”
── [1mAttaching packages[22m ─────────────────────────────────────── tidyverse 1.3.0 ──
[32m✔[39m [34mtibble [39m 3.1.2 [32m✔[39m [34mdplyr [39m 1.0.6
[32m✔[39m [34mtidyr [39m 1.0.0 [32m✔[39m [34mstringr[39m 1.4.0
[32m✔[39m [34mreadr [39m 1.3.1 [32m✔[39m [34mforcats[39m 0.4.0
[32m✔[39m [34mpurrr [39m 0.3.3
Warning message:
“package ‘tidyr’ was built under R version 3.5.2”
Warning message:
“package ‘purrr’ was built under R version 3.5.2”
Warning message:
“package ‘stringr’ was built under R version 3.5.2”
Warning message:
“package ‘forcats’ was built under R version 3.5.2”
── [1mConflicts[22m ────────────────────────────────────────── tidyverse_conflicts() ──
[31m✖[39m [34mdplyr[39m::[32mfilter()[39m masks [34mstats[39m::filter()
[31m✖[39m [34mdplyr[39m::[32mlag()[39m masks [34mstats[39m::lag()
Attaching package: ‘gridExtra’
The following object is masked from ‘package:dplyr’:
combine
Warning message:
“package ‘ggthemes’ was built under R version 3.5.2”
###Markdown
--- Dataset Load DatasetFirst, we load the `deltas.csv` dataset which contains information about breaking changes computed by Maracas on the selected upgrades of the MDG.
###Code
deltas <- read.csv("../code/cypher-queries/data/gen/deltas.csv", stringsAsFactors=FALSE, colClasses=c("level"="factor", "year"="factor", "java_version_v1"="factor", "java_version_v2"="factor", "expected_level"="factor"))
sprintf("Successfully loaded %d deltas", nrow(deltas))
head(deltas)
###Output
_____no_output_____
###Markdown
Clean DatasetHere, we clean the dataset to discard the upgrades and deltas not complying with our requirements.
###Code
# What issues did we encounter when attempting to compute the deltas? (Java version >= 8, JAR not produced from Java code, JAR not found on Maven Central, etc.)
table(deltas$exception)
# Which languages did we find?
table(deltas$language)
# Discard all deltas that raised an exception
deltas <- subset(deltas, exception == -1)
# Discard all deltas for upgrades that do not contain any code
sprintf("%d deltas correspond to JARs not containing any code", nrow(subset(deltas, declarations_v1 == 0 & declarations_v2 == 0)))
deltas <- subset(deltas, declarations_v1 > 0 | declarations_v2 > 0)
# Discard all deltas where v1 was released _after_ v2
sprintf("%d where v1 was released after v2", nrow(subset(deltas, age_diff < 0)))
deltas <- subset(deltas, age_diff >= 0)
sprintf("%d remaining deltas after cleaning", nrow(deltas))
# Remove cases where dates are used as versions (e.g. 20081010.0.1, 1.20081010.1, 1.0.20081010)
deltas <- deltas[grep("^[0-9]{0,7}[.][0-9]{0,7}([.][0-9]{0,7})?$", deltas$v1),]
deltas <- deltas[grep("^[0-9]{0,7}[.][0-9]{0,7}([.][0-9]{0,7})?$", deltas$v2),]
deltas <- deltas[!grepl("^2[0-9]{3}[.][0-9]{2}([.][0-9]{2})?$", deltas$v1),]
deltas <- deltas[!grepl("^[0-9]{2}[.][0-9]{2}[.]2[0-9]{3}$", deltas$v1),]
deltas <- deltas[!grepl("^[0-9]{2}[.]2[0-9]{3}$", deltas$v1),]
deltas <- deltas[!grepl("^2[0-9]{3}[.][0-9]{2}([.][0-9]{2})?$", deltas$v2),]
deltas <- deltas[!grepl("^[0-9]{2}[.][0-9]{2}[.]2[0-9]{3}$", deltas$v2),]
deltas <- deltas[!grepl("^[0-9]{2}[.]2[0-9]{3}$", deltas$v2),]
sprintf("%d deltas remaining after removing the ones with dates as versions", nrow(deltas))
###Output
_____no_output_____
###Markdown
Finalize DatasetHere, we incorporate additional derived information into the dataset.
###Code
# Add column with all BCs excluding the ones related to:
# - annotationDeprecatedAdded: not a BC
# - methodAddedToPublicClass: not a BC
# - classNowCheckedException: not binary incompatible
# - methodNowThrowsCheckedException: not binary incompatible
# - fieldStaticAndOverridesStatic: lack of alignment with JLS
# - superclassModifiedIncompatible: lack of alignment with JLS
# - methodIsStaticAndOverridesNotStatic: lack of alignment with JLS
# - methodAbstractAddedInSuperclass: covered by other BC
# - methodAbstractAddedInImplementedInterface: covered by other BC
# - methodLessAccessibleThanInSuperclass: covered by other BC
# - fieldLessAccessibleThanInSuperclass: covered by other BC
# - fieldRemovedInSuperclass: covered by other BC
# - methodRemovedInSuperclass: covered by other BC
deltas$bcs_clean = deltas$bcs -
deltas$annotationDeprecatedAdded -
deltas$methodAddedToPublicClass -
deltas$classNowCheckedException -
deltas$methodNowThrowsCheckedException -
deltas$fieldStaticAndOverridesStatic -
deltas$superclassModifiedIncompatible -
deltas$methodIsStaticAndOverridesNotStatic -
deltas$methodAbstractAddedInSuperclass -
deltas$methodAbstractAddedInImplementedInterface -
deltas$methodLessAccessibleThanInSuperclass -
deltas$fieldLessAccessibleThanInSuperclass -
deltas$fieldRemovedInSuperclass -
deltas$methodRemovedInSuperclass
# Same thing for the stable part of the API
deltas$bcs_clean_stable = deltas$bcs_stable -
deltas$annotationDeprecatedAdded_stable -
deltas$methodAddedToPublicClass_stable -
deltas$classNowCheckedException_stable -
deltas$methodNowThrowsCheckedException_stable -
deltas$fieldStaticAndOverridesStatic_stable -
deltas$superclassModifiedIncompatible_stable -
deltas$methodIsStaticAndOverridesNotStatic_stable -
deltas$methodAbstractAddedInSuperclass_stable -
deltas$methodAbstractAddedInImplementedInterface_stable -
deltas$methodLessAccessibleThanInSuperclass_stable -
deltas$fieldLessAccessibleThanInSuperclass_stable -
deltas$fieldRemovedInSuperclass_stable -
deltas$methodRemovedInSuperclass_stable
# Add columns with BCs ratios (i.e. BCs / V1 declarations)
deltas$bcs_ratio_clean = deltas$bcs_clean / deltas$declarations_v1
deltas$bcs_ratio_clean_stable = deltas$bcs_clean_stable / deltas$declarations_v1
# Assign the 'DEV' semver level to versions of the form 0.x.x
levels(deltas$level) <- c(levels(deltas$level), "DEV")
deltas[grepl("^0[.]", deltas$v1),]$level = "DEV"
###Output
_____no_output_____
###Markdown
--- Dataset Summary
###Code
sprintf("Final size of the dataset: %s deltas", nrow(deltas))
#summary(deltas)
###Output
_____no_output_____
###Markdown
--- Results Data Preparation
###Code
# Get all years
years <- sort(unique(deltas$year))
years
# Computes the percentage of breaking upgrades per semver level
perc_breaking_upgrades <- function(ds, dat, semver_level) {
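    # For each release year, compute the fraction of upgrades at the given semver level
    # whose delta contains at least one breaking change in the stable API (bcs_clean_stable > 0).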
dt <- subset(ds, level == semver_level)
for (y in years) {
dt_current <- subset(dt, year == y)
val <- 0.0
if (nrow(dt_current) != 0) {
val <- nrow(subset(dt_current, bcs_clean_stable > 0)) / nrow(dt_current)
}
dat <- rbind(dat, data.frame(year=y, level=semver_level, breaking=val))
}
return (dat)
}
# Computing percentage of breaking upgrades per semver level per year
breaking_upgrades <- data.frame(
year=character(),
level=character(),
breaking=double()
)
breaking_upgrades <- perc_breaking_upgrades(deltas, breaking_upgrades, "MAJOR")
breaking_upgrades <- perc_breaking_upgrades(deltas, breaking_upgrades, "MINOR")
breaking_upgrades <- perc_breaking_upgrades(deltas, breaking_upgrades, "PATCH")
breaking_upgrades <- perc_breaking_upgrades(deltas, breaking_upgrades, "DEV")
# Extend results with non-major cases
dt2 <- deltas
dt2$level <- as.character(dt2$level)
dt2[dt2$level == "MINOR" | dt2$level == "PATCH",]$level = "NONMAJOR"
breaking_upgrades <- perc_breaking_upgrades(dt2, breaking_upgrades, "NONMAJOR")
breaking_upgrades
###Output
_____no_output_____
###Markdown
Histograms
###Code
# Creates a histogram of breaking releases per year given a semver level
plot_bar_semver <- function(semver_level, level_label) {
dt <- subset(breaking_upgrades, level == semver_level)
plot <- ggplot(dt, aes(x=year, y=breaking)) +
labs(title=sprintf("Ratio of %s Breaking Upgrades per Year", level_label),
x="Release year",
y="Ratio of breaking upgrades") +
geom_bar(stat="identity")
return (plot)
}
# Create histograms of the percentage of breaking upgrades per semver level
options(repr.plot.width=25, repr.plot.height=5)
bar_major <- plot_bar_semver("MAJOR", "Major")
bar_minor <- plot_bar_semver("MINOR", "Minor")
bar_patch <- plot_bar_semver("PATCH", "Patch")
bar_devs <- plot_bar_semver("DEV", "Dev")
bar_nonmajor <- plot_bar_semver("NONMAJOR", "Non-Major")
grid.arrange(bar_major, bar_minor, bar_patch, bar_devs, bar_nonmajor, ncol=5, nrow=1)
# A stacked view of the percentage of breaking upgrades per year per semver level
options(repr.plot.width=8, repr.plot.height=5)
ggplot(breaking_upgrades, aes(x=year, y=breaking, color=level, fill=level)) +
labs(title="Ratio of Breaking Upgrades per Semver Level and Year",
x="Release year",
y="Ratio of breaking upgrades",
fill="Semver level",
color="Semver level") +
geom_bar(stat="identity")
###Output
_____no_output_____
###Markdown
Line PlotEvolution of the ratio of breaking upgrades per semver level in MDG. Each data point aggregates the number of breaking upgrades of the given type for an entire year. A vertical line delimitates the periods of the originaland updated datasets.
###Code
# Create line plot with the percentage of breaking upgrades per semver level per year
p <- ggplot(breaking_upgrades, aes(x=year, y=breaking, group=level)) +
geom_line(aes(linetype=level), stat="identity", size=0.5) +
labs(x="Release year",
y="Ratio of breaking upgrades",
shape ="Semver level",
linetype="Semver level") +
scale_y_continuous(breaks=seq(0,1,0.1)) +
geom_vline(xintercept=7) +
geom_point(aes(shape=level)) +
theme_bw()
p
ggsave("figures/mdg-semver-year.pdf", p,
width=11, height=7)
###Output
_____no_output_____ |
lab1_text_classification_textCNN/TextCNN_MindSpore.ipynb | ###Markdown
1. Data synchronization
###Code
import moxing as mox
# Replace this with your own OBS path
mox.file.copy_parallel(src_url="s3://ascend-zyjs-dcyang/nlp/text_classification_mindspore/data/", dst_url='./data/')
###Output
INFO:root:Using MoXing-v1.17.3-d858ff4a
INFO:root:Using OBS-Python-SDK-3.20.9.1
###Markdown
2. Import dependencies
###Code
import math
import numpy as np
import pandas as pd
import os
import math
import random
import codecs
from pathlib import Path
import mindspore
import mindspore.dataset as ds
import mindspore.nn as nn
from mindspore import Tensor
from mindspore import context
from mindspore.train.model import Model
from mindspore.nn.metrics import Accuracy
from mindspore.train.serialization import load_checkpoint, load_param_into_net
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor, TimeMonitor
from mindspore.ops import operations as ops
###Output
[WARNING] ME(1756:281472976124208,MainProcess):2021-04-02-02:36:48.894.01 [mindspore/_check_version.py:207] MindSpore version 1.1.1 and "te" wheel package version 1.0 does not match, reference to the match info on: https://www.mindspore.cn/install
MindSpore version 1.1.1 and "topi" wheel package version 0.6.0 does not match, reference to the match info on: https://www.mindspore.cn/install
[WARNING] ME(1756:281472976124208,MainProcess):2021-04-02-02:36:48.757.068 [mindspore/ops/operations/array_ops.py:2302] WARN_DEPRECATED: The usage of Pack is deprecated. Please use Stack.
###Markdown
3. Hyperparameter settings
###Code
from easydict import EasyDict as edict
cfg = edict({
'name': 'movie review',
'pre_trained': False,
'num_classes': 2,
'batch_size': 64,
'epoch_size': 4,
'weight_decay': 3e-5,
'data_path': './data/',
'device_target': 'Ascend',
'device_id': 0,
'keep_checkpoint_max': 1,
'checkpoint_path': './ckpt/train_textcnn-4_149.ckpt',
'word_len': 51,
'vec_length': 40
})
context.set_context(mode=context.GRAPH_MODE, device_target=cfg.device_target, device_id=cfg.device_id)
###Output
_____no_output_____
###Markdown
4. Data preprocessing
###Code
# Preview the data
with open("./data/rt-polarity.neg", 'r', encoding='utf-8') as f:
print("Negative reivews:")
for i in range(5):
print("[{0}]:{1}".format(i,f.readline()))
with open("./data/rt-polarity.pos", 'r', encoding='utf-8') as f:
print("Positive reivews:")
for i in range(5):
print("[{0}]:{1}".format(i,f.readline()))
class Generator():
def __init__(self, input_list):
self.input_list=input_list
def __getitem__(self,item):
return (np.array(self.input_list[item][0],dtype=np.int32),
np.array(self.input_list[item][1],dtype=np.int32))
def __len__(self):
return len(self.input_list)
class MovieReview:
'''
    Movie review dataset
'''
def __init__(self, root_dir, maxlen, split):
'''
input:
            root_dir: directory containing the movie review data
            maxlen: maximum sentence length
            split: train/eval split ratio of the dataset
'''
self.path = root_dir
self.feelMap = {
'neg':0,
'pos':1
}
self.files = []
self.doConvert = False
mypath = Path(self.path)
if not mypath.exists() or not mypath.is_dir():
print("please check the root_dir!")
raise ValueError
        # Find the files in the data directory
for root,_,filename in os.walk(self.path):
for each in filename:
self.files.append(os.path.join(root,each))
break
        # Confirm that there are exactly two files: .neg and .pos
if len(self.files) != 2:
print("There are {} files in the root_dir".format(len(self.files)))
raise ValueError
        # Read the data
self.word_num = 0
self.maxlen = 0
self.minlen = float("inf")
self.maxlen = float("-inf")
self.Pos = []
self.Neg = []
for filename in self.files:
f = codecs.open(filename, 'r')
ff = f.read()
file_object = codecs.open(filename, 'w', 'utf-8')
file_object.write(ff)
self.read_data(filename)
self.PosNeg = self.Pos + self.Neg
self.text2vec(maxlen=maxlen)
self.split_dataset(split=split)
def read_data(self, filePath):
with open(filePath,'r') as f:
for sentence in f.readlines():
sentence = sentence.replace('\n','')\
.replace('"','')\
.replace('\'','')\
.replace('.','')\
.replace(',','')\
.replace('[','')\
.replace(']','')\
.replace('(','')\
.replace(')','')\
.replace(':','')\
.replace('--','')\
.replace('-',' ')\
.replace('\\','')\
.replace('0','')\
.replace('1','')\
.replace('2','')\
.replace('3','')\
.replace('4','')\
.replace('5','')\
.replace('6','')\
.replace('7','')\
.replace('8','')\
.replace('9','')\
.replace('`','')\
.replace('=','')\
.replace('$','')\
.replace('/','')\
.replace('*','')\
.replace(';','')\
.replace('<b>','')\
.replace('%','')
sentence = sentence.split(' ')
sentence = list(filter(lambda x: x, sentence))
if sentence:
self.word_num += len(sentence)
self.maxlen = self.maxlen if self.maxlen >= len(sentence) else len(sentence)
self.minlen = self.minlen if self.minlen <= len(sentence) else len(sentence)
if 'pos' in filePath:
self.Pos.append([sentence,self.feelMap['pos']])
else:
self.Neg.append([sentence,self.feelMap['neg']])
def text2vec(self, maxlen):
'''
        Convert each sentence into a vector
'''
# Vocab = {word : index}
self.Vocab = dict()
# self.Vocab['None']
for SentenceLabel in self.Pos+self.Neg:
vector = [0]*maxlen
for index, word in enumerate(SentenceLabel[0]):
if index >= maxlen:
break
if word not in self.Vocab.keys():
self.Vocab[word] = len(self.Vocab)
vector[index] = len(self.Vocab) - 1
else:
vector[index] = self.Vocab[word]
SentenceLabel[0] = vector
self.doConvert = True
def split_dataset(self, split):
'''
        Split into a training set and a test set
'''
trunk_pos_size = math.ceil((1-split)*len(self.Pos))
trunk_neg_size = math.ceil((1-split)*len(self.Neg))
trunk_num = int(1/(1-split))
pos_temp=list()
neg_temp=list()
for index in range(trunk_num):
pos_temp.append(self.Pos[index*trunk_pos_size:(index+1)*trunk_pos_size])
neg_temp.append(self.Neg[index*trunk_neg_size:(index+1)*trunk_neg_size])
self.test = pos_temp.pop(2)+neg_temp.pop(2)
self.train = [i for item in pos_temp+neg_temp for i in item]
random.shuffle(self.train)
# random.shuffle(self.test)
def get_dict_len(self):
'''
        Return the size of the vocabulary built from the dataset text
'''
if self.doConvert:
return len(self.Vocab)
else:
print("Haven't finished Text2Vec")
return -1
def create_train_dataset(self, epoch_size, batch_size):
dataset = ds.GeneratorDataset(
source=Generator(input_list=self.train),
column_names=["data","label"],
shuffle=False
)
# dataset.set_dataset_size(len(self.train))
dataset=dataset.batch(batch_size=batch_size,drop_remainder=True)
dataset=dataset.repeat(epoch_size)
return dataset
def create_test_dataset(self, batch_size):
dataset = ds.GeneratorDataset(
source=Generator(input_list=self.test),
column_names=["data","label"],
shuffle=False
)
# dataset.set_dataset_size(len(self.test))
dataset=dataset.batch(batch_size=batch_size,drop_remainder=True)
return dataset
instance = MovieReview(root_dir=cfg.data_path, maxlen=cfg.word_len, split=0.9)
dataset = instance.create_train_dataset(batch_size=cfg.batch_size,epoch_size=cfg.epoch_size)
batch_num = dataset.get_dataset_size()
vocab_size=instance.get_dict_len()
print("vocab_size:{0}".format(vocab_size))
item =dataset.create_dict_iterator()
for i,data in enumerate(item):
if i<1:
print(data)
print(data['data'][1])
else:
break
###Output
vocab_size:18848
{'data': Tensor(shape=[64, 51], dtype=Int32, value=
[[ 15, 3190, 6781 ... 0, 0, 0],
[ 1320, 582, 4 ... 0, 0, 0],
[ 1734, 111, 36 ... 0, 0, 0],
...
[ 82, 94, 367 ... 0, 0, 0],
[10449, 55, 2923 ... 0, 0, 0],
[ 336, 203, 272 ... 0, 0, 0]]), 'label': Tensor(shape=[64], dtype=Int32, value= [0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1,
0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0,
1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1])}
[ 1320 582 4 3070 0 603 5507 12780 32 12781 1304 669
896 1310 122 4 82 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0]
###Markdown
5. Model training 5.1 Training parameter settings
###Code
learning_rate = []
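# Piece-wise learning-rate schedule: an (optional) warm-up ramp, a constant 1e-3 phase,
# and a 1/x decay ("shrink") phase; each entry corresponds to one training step.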
warm_up = [1e-3 / math.floor(cfg.epoch_size / 5) * (i + 1) for _ in range(batch_num)
for i in range(math.floor(cfg.epoch_size / 5))]
shrink = [1e-3 / (16 * (i + 1)) for _ in range(batch_num)
for i in range(math.floor(cfg.epoch_size * 3 / 5))]
normal_run = [1e-3 for _ in range(batch_num) for i in
range(cfg.epoch_size - math.floor(cfg.epoch_size / 5)
- math.floor(cfg.epoch_size * 2 / 5))]
learning_rate = learning_rate + warm_up + normal_run + shrink
def _weight_variable(shape, factor=0.01):
init_value = np.random.randn(*shape).astype(np.float32) * factor
return Tensor(init_value)
def make_conv_layer(kernel_size):
weight_shape = (96, 1, *kernel_size)
weight = _weight_variable(weight_shape)
return nn.Conv2d(in_channels=1, out_channels=96, kernel_size=kernel_size, padding=1,
pad_mode="pad", weight_init=weight, has_bias=True)
class TextCNN(nn.Cell):
def __init__(self, vocab_len, word_len, num_classes, vec_length):
super(TextCNN, self).__init__()
self.vec_length = vec_length
self.word_len = word_len
self.num_classes = num_classes
self.unsqueeze = ops.ExpandDims()
self.embedding = nn.Embedding(vocab_len, self.vec_length, embedding_table='normal')
self.slice = ops.Slice()
self.layer1 = self.make_layer(kernel_height=3)
self.layer2 = self.make_layer(kernel_height=4)
self.layer3 = self.make_layer(kernel_height=5)
self.concat = ops.Concat(1)
self.fc = nn.Dense(96*3, self.num_classes)
self.drop = nn.Dropout(keep_prob=0.5)
self.print = ops.Print()
self.reducemean = ops.ReduceMax(keep_dims=False)
def make_layer(self, kernel_height):
return nn.SequentialCell(
[
make_conv_layer((kernel_height,self.vec_length)),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(self.word_len-kernel_height+1,1)),
]
)
def construct(self,x):
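        # Three parallel convolution branches (kernel heights 3, 4 and 5) slide over the
        # word-embedding matrix; each branch is max-pooled, the results are concatenated,
        # and a dropout + dense layer produces the class logits.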
x = self.unsqueeze(x, 1)
x = self.embedding(x)
x1 = self.layer1(x)
x2 = self.layer2(x)
x3 = self.layer3(x)
x1 = self.reducemean(x1, (2, 3))
x2 = self.reducemean(x2, (2, 3))
x3 = self.reducemean(x3, (2, 3))
x = self.concat((x1, x2, x3))
x = self.drop(x)
x = self.fc(x)
return x
net = TextCNN(vocab_len=instance.get_dict_len(), word_len=cfg.word_len,
num_classes=cfg.num_classes, vec_length=cfg.vec_length)
print(net)
# Continue training if set pre_trained to be True
if cfg.pre_trained:
param_dict = load_checkpoint(cfg.checkpoint_path)
load_param_into_net(net, param_dict)
opt = nn.Adam(filter(lambda x: x.requires_grad, net.get_parameters()),
learning_rate=learning_rate, weight_decay=cfg.weight_decay)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
model = Model(net, loss_fn=loss, optimizer=opt, metrics={'acc': Accuracy()})
config_ck = CheckpointConfig(save_checkpoint_steps=int(cfg.epoch_size*batch_num/2), keep_checkpoint_max=cfg.keep_checkpoint_max)
time_cb = TimeMonitor(data_size=batch_num)
ckpt_save_dir = "./ckpt"
ckpoint_cb = ModelCheckpoint(prefix="train_textcnn", directory=ckpt_save_dir, config=config_ck)
loss_cb = LossMonitor()
model.train(cfg.epoch_size, dataset, callbacks=[time_cb, ckpoint_cb, loss_cb])
print("train success")
###Output
epoch: 1 step: 596, loss is 0.07209684
epoch time: 39359.259 ms, per step time: 66.039 ms
epoch: 2 step: 596, loss is 0.0029934864
epoch time: 4308.688 ms, per step time: 7.229 ms
epoch: 3 step: 596, loss is 0.0019718197
epoch time: 4266.735 ms, per step time: 7.159 ms
epoch: 4 step: 596, loss is 0.0011571363
epoch time: 4309.405 ms, per step time: 7.231 ms
train success
###Markdown
6. Test evaluation
###Code
checkpoint_path = './ckpt/train_textcnn-4_596.ckpt'
dataset = instance.create_test_dataset(batch_size=cfg.batch_size)
opt = nn.Adam(filter(lambda x: x.requires_grad, net.get_parameters()),
learning_rate=0.001, weight_decay=cfg.weight_decay)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
net = TextCNN(vocab_len=instance.get_dict_len(),word_len=cfg.word_len,
num_classes=cfg.num_classes,vec_length=cfg.vec_length)
if checkpoint_path is not None:
param_dict = load_checkpoint(checkpoint_path)
print("load checkpoint from [{}].".format(checkpoint_path))
else:
param_dict = load_checkpoint(cfg.checkpoint_path)
print("load checkpoint from [{}].".format(cfg.checkpoint_path))
load_param_into_net(net, param_dict)
net.set_train(False)
model = Model(net, loss_fn=loss, metrics={'acc': Accuracy()})
acc = model.eval(dataset)
print("accuracy: ", acc)
###Output
load checkpoint from [./ckpt/train_textcnn-4_596.ckpt].
accuracy: {'acc': 0.763671875}
###Markdown
7. Online testing
###Code
def preprocess(sentence):
sentence = sentence.lower().strip()
sentence = sentence.replace('\n','')\
.replace('"','')\
.replace('\'','')\
.replace('.','')\
.replace(',','')\
.replace('[','')\
.replace(']','')\
.replace('(','')\
.replace(')','')\
.replace(':','')\
.replace('--','')\
.replace('-',' ')\
.replace('\\','')\
.replace('0','')\
.replace('1','')\
.replace('2','')\
.replace('3','')\
.replace('4','')\
.replace('5','')\
.replace('6','')\
.replace('7','')\
.replace('8','')\
.replace('9','')\
.replace('`','')\
.replace('=','')\
.replace('$','')\
.replace('/','')\
.replace('*','')\
.replace(';','')\
.replace('<b>','')\
.replace('%','')\
.replace(" "," ")
sentence = sentence.split(' ')
maxlen = cfg.word_len
vector = [0]*maxlen
for index, word in enumerate(sentence):
if index >= maxlen:
break
if word not in instance.Vocab.keys():
            print(word, "is not in the vocabulary")
else:
vector[index] = instance.Vocab[word]
sentence = vector
return sentence
def inference(review_en):
review_en = preprocess(review_en)
input_en = Tensor(np.array([review_en]).astype(np.int32))
output = net(input_en)
if np.argmax(np.array(output[0])) == 1:
print("Positive comments")
else:
print("Negative comments")
review_en = "the movie is so boring"
inference(review_en)
###Output
Negative comments
|
forecasting/.ipynb_checkpoints/03.mvp.agg_daily.linear.001-checkpoint.ipynb | ###Markdown
Minimal viable product - Linear model, total daily production This is a first attempt to get an idea of the prediction of the solar panel output. The data for the solar panel production is aggregated to daily sums.
###Code
import datetime
import numpy as np
import pandas as pd
import requests
import re
import json
import os
from dateutil import tz
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set()
###Output
_____no_output_____
###Markdown
Load data - production, aggregate by day
###Code
# production data
path_to_files = './data/production/'
data_prod = pd.read_csv(f'{path_to_files}/solar_production.data', index_col=0) # make it .data so that it is not csv
data_prod['Time'] = pd.to_datetime(data_prod['Time'])
data_prod = data_prod.set_index('Time')
# find day
data_prod['Day'] = data_prod.index.map(lambda x: f'{x.year}-{x.month}-{x.day}')
data_prod = data_prod.groupby('Day')['Production'].agg(['sum'])
data_prod.columns = ['Production_sum']
# order ascending
data_prod = data_prod.reset_index()
data_prod['Day'] = pd.to_datetime(data_prod['Day'])
data_prod = data_prod.sort_values(by=['Day'])
data_prod = data_prod.set_index('Day')
# first and last day
print(data_prod.iloc[0], data_prod.iloc[-1])
data_prod.head()
###Output
Production_sum 11629.3334
Name: 2017-10-12 00:00:00, dtype: float64 Production_sum 106414.9997
Name: 2019-06-27 00:00:00, dtype: float64
###Markdown
Load data - sun position, maximum and minimum per day
###Code
save_path = "./data/pysolar/"
data_solar = pd.read_csv(f'{save_path}/sun_data.csv', index_col=0)
data_solar['date'] = pd.to_datetime(data_solar['date'])
data_solar = data_solar.set_index('date')
# order ascending
data_solar['Day'] = data_solar.index.map(lambda x: f'{x.year}-{x.month}-{x.day}')
data_solar['Day'] = pd.to_datetime(data_solar['Day'])
data_solar = data_solar.sort_index()
# find min and max
data_solar = data_solar.groupby('Day')[['altitude', 'azimuth', 'clear_sky_irradiation']].agg(['min', 'max'])
# data_solar.columns = ['Production_sum']
data_solar.columns = ['_'.join(col) for col in data_solar.columns]
data_solar = data_solar.drop('clear_sky_irradiation_min', axis=1)
# select the correct dates from 2017-10-12 until 2019-06-27
data_solar = data_solar.loc['2017-10-12':'2019-06-27',:]
print(data_solar.head())
print(data_solar.tail())
###Output
altitude_min altitude_max azimuth_min azimuth_max \
Day
2017-10-12 -49.374586 34.474052 19.567055 357.069346
2017-10-13 -49.751088 34.102303 19.783644 357.143860
2017-10-14 -50.125721 33.732240 19.999179 357.215996
2017-10-15 -50.498382 33.363977 20.213501 357.285596
2017-10-16 -50.868963 32.997630 20.426448 357.352506
clear_sky_irradiation_max
Day
2017-10-12 864.755684
2017-10-13 864.100214
2017-10-14 863.409922
2017-10-15 862.684519
2017-10-16 861.923764
altitude_min altitude_max azimuth_min azimuth_max \
Day
2019-06-23 -18.418040 65.032641 8.880509 354.394511
2019-06-24 -18.426877 65.009055 8.830387 354.341663
2019-06-25 -18.442553 64.978686 8.781336 354.288561
2019-06-26 -18.465064 64.941562 8.733428 354.235278
2019-06-27 -18.494404 64.897715 8.686734 354.181887
clear_sky_irradiation_max
Day
2019-06-23 863.894822
2019-06-24 863.530837
2019-06-25 863.182892
2019-06-26 862.851039
2019-06-27 862.535320
###Markdown
Load data - weather dataTreat each column separately Functions to clean the data
###Code
def clean_precipitation_columns(data):
"""
Cleans the precipitation columns in the dataset, these are:
- precipAccumulation
- The amount of snowfall accumulation expected to occur, in inches.
(If no snowfall is expected, this property will not be defined.)
- precipIntensity optional
- The intensity (in inches of liquid water per hour) of precipitation occurring at the given time.
This value is conditional on probability (that is, assuming any precipitation occurs at all).
- precipProbability optional
- The probability of precipitation occurring, between 0 and 1, inclusive.
- precipType optional
- The type of precipitation occurring at the given time. If defined, this property will have one of the following values:
"rain", "snow", or "sleet" (which refers to each of freezing rain, ice pellets, and “wintery mix”).
(If precipIntensity is zero, then this property will not be defined. Additionally, due to the lack of data in our sources,
historical precipType information is usually estimated, rather than observed.)
precipAccumulation:
if there is no snow, there is no accumulation anyways. We will fill the values in these rows for that column with 0.
precipType, precipIntensity, and precipProbability:
Must be nonzero if precipIntensity is nonzero. Else it will be set to 0. precipProbability will be set to 0 for nonzero
returns: cleaned dataframe
"""
rows = data[data['precipAccumulation'].isnull()].index
data.loc[rows,'precipAccumulation'] = 0
assert len(data[data['precipAccumulation'].isnull()]) == 0
    rows = data[(data['precipIntensity'] > 0) & (data['precipType'].isnull())]
assert len(rows) == 0
rows = data[data['precipIntensity'].isnull()].index
data.loc[rows,'precipIntensity'] = 0
rows = data[data['precipType'].isnull()].index
data.loc[rows,'precipType'] = 'None'
rows = data[data['precipProbability'].isnull()].index
data.loc[rows,'precipProbability'] = 0
assert len(data[data['precipIntensity'].isnull()]) == 0
assert len(data[data['precipType'].isnull()]) == 0
assert len(data[data['precipProbability'].isnull()]) == 0
return data
def clean_continuous_features(data, order=3, do_plot=False):
"""
Cleans the columns: apparentTemperature, cloudCover, humidity, pressure, temperature, uvIndex, visibility, windSpeed
Fits an interpolation function polynomial with order of order to the data.
If do_plot is True, it will generate plots
"""
cols = ['apparentTemperature', 'cloudCover', 'humidity', 'pressure', 'temperature', 'uvIndex', 'visibility', 'windSpeed']
for col in cols:
max_t = data[col].max()
min_t = data[col].min()
if do_plot:
print(f'Treating column {col}, plotting 2018-01 until 2018-08')
print(f'Max and min values for this column are: {max_t} {min_t}')
s1 = data.loc['2018-01':'2018-08'][col]
plt.figure(figsize=(10,5))
plt.plot(s1.values)
plt.title('Before interpolation')
plt.ylabel(col)
plt.show()
# interpolate
data[col] = data[col].interpolate(method='polynomial', order=order)
# prevent negative values for certain columns
if min_t >= 0:
# print(col)
data[col] = data[col].clip(lower=0,upper=None)
# prevent extreme values by ceiling them to the maximum
data[col][data[col]>max_t] = max_t
if do_plot:
s2=data.loc['2018-01':'2018-08'][col]
plt.figure(figsize=(10,5))
# print(s)
plt.plot(s2.values)
plt.title('After interpolation')
plt.ylabel(col)
plt.show()
return data
###Output
_____no_output_____
###Markdown
Functions to aggregate the data
###Code
def agg_by_sum(data, columns, agg_by, agg_over='Day'):
"""
Takes a dataframe and list of columns which should be aggregated. agg_by is the aggregation function, i.e. sum.
The agg_over string is the column over which to aggregate.
Aggregates by the sum of the day, returns a dataframe.
"""
data = data.groupby(agg_over)[columns].agg([agg_by])
data.columns = ['_'.join(col) for col in data.columns]
return data
# import the data
path_to_jsons = './data/DarkSkyAPI/'
data_w = pd.read_csv(f'{path_to_jsons}/darkSkyData_cleaned_extracted.csv', index_col = 0)
data_w.index = pd.to_datetime(data_w.index, format='%Y-%m-%d %H:%M:%S')
# clean precipitation data
data_w = clean_precipitation_columns(data_w)
# clean continuous columns
data_w = clean_continuous_features(data_w, order=3, do_plot=False)
# apparentTemperature and temperature: take maximum
# print(data_w.isna().sum())
data_w['Day'] = data_w.index.map(lambda x: f'{x.year}-{x.month}-{x.day}')
data_w['Day'] = pd.to_datetime(data_w['Day'])
# aggregation
# sum
cols_sum = ['precipAccumulation', 'precipIntensity']
# min
cols_min = ['apparentTemperature', 'temperature', 'pressure', 'windSpeed', 'cloudCover','humidity', 'uvIndex', 'visibility']
# max
cols_max = ['apparentTemperature', 'temperature', 'pressure', 'windSpeed', 'cloudCover','humidity', 'uvIndex','visibility']
# mean
cols_mean = ['apparentTemperature', 'temperature','pressure', 'windSpeed', 'cloudCover','humidity', 'uvIndex','visibility']
# median
cols_median = ['apparentTemperature', 'temperature', 'pressure', 'windSpeed', 'cloudCover','humidity', 'uvIndex','visibility']
df_sum = agg_by_sum(data_w, cols_sum, 'sum', agg_over='Day')
df_min = agg_by_sum(data_w, cols_min, 'min', agg_over='Day')
df_max = agg_by_sum(data_w, cols_max, 'max', agg_over='Day')
df_mean = agg_by_sum(data_w, cols_mean, 'mean', agg_over='Day')
df_median = agg_by_sum(data_w, cols_median, 'median', agg_over='Day')
data_w = pd.concat([df_sum, df_min, df_max, df_mean, df_median], axis=1)
# select the correct dates from 2017-10-12 until 2019-06-27
data_w = data_w.loc['2017-10-12':'2019-06-27',:]
data_w.head()
###Output
/Users/hkromer/anaconda3/envs/solarAnalytics/lib/python3.7/site-packages/ipykernel_launcher.py:75: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
###Markdown
Prediction targetPrediction will be based if the next day produces more or less than the mean of the produced production, i.e.
###Code
plt.boxplot(data_prod['Production_sum'])
plt.show()
target = data_prod['Production_sum'].mean() # in Watt
print(target) # mean in Watt
data_prod['Y'] = data_prod['Production_sum'].map(lambda x: 1 if x>target else 0)
data_prod.sample(10)
###Output
90643.95940977565
###Markdown
Model
###Code
data_w.shape, data_prod.shape, data_solar.shape # all should have same length
data = pd.concat([data_w, data_prod, data_solar], axis=1)
plt.scatter(data['temperature_max'], data['Production_sum'])
plt.show()
plt.scatter(data['temperature_min'], data['Production_sum'])
plt.show()
plt.scatter(data['precipIntensity_sum'], data['Production_sum'])
plt.show()
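# Minimal baseline sketch (added as an illustration, not part of the original analysis):
# fit a logistic regression -- a linear model for the binary target Y defined above --
# on a handful of weather/sun features. The feature list, the 80/20 split, and the
# availability of scikit-learn are assumptions made for this sketch only.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

feature_cols = ['temperature_max', 'cloudCover_mean', 'precipIntensity_sum', 'clear_sky_irradiation_max']
X = data[feature_cols].fillna(0)
y = data['Y']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print('Baseline test accuracy: {:.3f}'.format(accuracy_score(y_test, clf.predict(X_test))))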
###Output
_____no_output_____ |
13_Binary Search Tree.ipynb | ###Markdown
Binary Search Tree
###Code
from Module.classCollection import Queue
class BSTNode:
def __init__(self, data):
self.data = data
self.leftChild = None
self.rightChild = None
def insertNode(rootNode, nodeValue):
if rootNode.data == None:
rootNode.data = nodeValue
elif nodeValue <= rootNode.data :
if rootNode.leftChild is None:
rootNode.leftChild = BSTNode(nodeValue)
else:
insertNode(rootNode.leftChild, nodeValue)
else:
if rootNode.rightChild is None:
rootNode.rightChild = BSTNode(nodeValue)
else:
insertNode(rootNode.rightChild, nodeValue)
return "The node has been successfully inserted"
def preorderTraversal(rootNode):
if rootNode == None:
return
print(rootNode.data)
preorderTraversal(rootNode.leftChild)
preorderTraversal(rootNode.rightChild)
def inorderTraversal(rootNode):
if rootNode == None:
return
inorderTraversal(rootNode.leftChild)
print(rootNode.data)
inorderTraversal(rootNode.rightChild)
def postorderTraversal(rootNode):
if rootNode == None:
return
postorderTraversal(rootNode.leftChild)
postorderTraversal(rootNode.rightChild)
print(rootNode.data)
def levelorderTraversal(rootNode):
if rootNode == None:
return
customQueue = Queue()
customQueue.enqueue(rootNode)
while customQueue.isEmpty() is False:
tempRoot = customQueue.dequeue()
print(tempRoot.value.data)
if tempRoot.value.leftChild is not None:
customQueue.enqueue(tempRoot.value.leftChild)
if tempRoot.value.rightChild is not None:
customQueue.enqueue(tempRoot.value.rightChild)
'''
def searchNode(rootNode, nodeValue):
if rootNode.data == nodeValue:
print("The value is found")
elif nodeValue<rootNode.data:
if rootNode.leftChild.data == nodeValue:
print("The value is found")
else:
searchNode(rootNode.leftChild, nodeValue)
else:
if rootNode.rightChild.data == nodeValue:
print("The value is found")
else:
searchNode(rootNode.rightChild, nodeValue)
'''
def searchNode(rootNode, nodeValue):
    if rootNode is None:
        print("The value is not found")
    elif rootNode.data == nodeValue:
        print("The value is found")
    elif nodeValue < rootNode.data:
        searchNode(rootNode.leftChild, nodeValue)
    else:
        searchNode(rootNode.rightChild, nodeValue)
def minValuenode(bstNode):
current = bstNode
while (current.leftChild is not None):
current = current.leftChild
return current
def deleteNode(rootNode, nodeValue):
if rootNode is None:
return rootNode
if nodeValue < rootNode.data:
rootNode.leftChild = deleteNode(rootNode.leftChild, nodeValue)
elif nodeValue > rootNode.data:
rootNode.rightChild = deleteNode(rootNode.rightChild, nodeValue)
else: # in case we find the node
if rootNode.leftChild is None:
temp = rootNode.rightChild
rootNode = None
return temp
if rootNode.rightChild is None:
temp = rootNode.leftChild
rootNode = None
return temp
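        # Two children: copy the in-order successor (the smallest value in the
        # right subtree) into this node, then delete that successor from the
        # right subtree.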
temp = minValuenode(rootNode.rightChild)
rootNode.data = temp.data
rootNode.rightChild = deleteNode(rootNode.rightChild, temp.data)
return rootNode
def deleteBST(rootNode):
rootNode.leftChild = None
rootNode.rightChild = None
rootNode.data = None
return "The BST has been successfully deleted"
newBST = BSTNode(None)
print("----1. insertion----")
print(insertNode(newBST, 70))
print(insertNode(newBST, 50))
print(insertNode(newBST, 90))
print(insertNode(newBST, 30))
print(insertNode(newBST, 60))
print(insertNode(newBST, 80))
print(insertNode(newBST, 100))
print(insertNode(newBST, 20))
print(insertNode(newBST, 40))
print("----2. preorder traversal----")
preorderTraversal(newBST)
print("----3. inorder traversal----")
inorderTraversal(newBST)
print("----4. levelorder traversal----")
levelorderTraversal(newBST)
print("----5. search----")
searchNode(newBST, 60)
print("----6. delete the node----")
deleteNode(newBST, 100)
levelorderTraversal(newBST)
print("----7. delete entire tree----")
deleteBST(newBST)
levelorderTraversal(newBST)
###Output
----1. insertion----
The node has been successfully inserted
The node has been successfully inserted
The node has been successfully inserted
The node has been successfully inserted
The node has been successfully inserted
The node has been successfully inserted
The node has been successfully inserted
The node has been successfully inserted
The node has been successfully inserted
----2. preorder traversal----
70
50
30
20
40
60
90
80
100
----3. inorder traversal----
20
30
40
50
60
70
80
90
100
----4. levelorder traversal----
70
50
90
30
60
80
100
20
40
----5. search----
The value is found
----6. delete the node----
70
50
90
30
60
80
20
40
----7. delete entire tree----
None
|
scripts/reproducibility/figures/Supplement-Figure Mouse AP n20.ipynb | ###Markdown
Mouse n20: AP scores on validation data
###Code
alpha0_5_n20 = read_Noise2Seg_results('alpha0.5', 'nmouse_n20', measure='AP', runs=[1,2,3,4,5],
fractions=[0.125, 0.25, 0.5, 1.0, 2.0], score_type = '',
path_str='/home/tibuch/Noise2Seg/experiments/{}_{}_run{}/fraction_{}/{}scores.csv')
baseline_mouse_n20 = read_Noise2Seg_results('fin', 'nmouse_n20', measure='AP', runs=[1,2,3,4,5],
fractions=[0.125, 0.25, 0.5, 1.0, 2.0], score_type = '',
path_str='/home/tibuch/Noise2Seg/experiments/{}_{}_run{}/fraction_{}/{}scores.csv')
sequential_mouse_n20 = read_Noise2Seg_results('finSeq', 'nmouse_n20', measure='AP', runs=[1,2,3,4,5],
fractions=[0.125, 0.25, 0.5, 1.0, 2.0], score_type = '',
path_str='/home/tibuch/Noise2Seg/experiments/{}_{}_run{}/fraction_{}/{}scores.csv')
plt.rc('font', family = 'serif', size = 16)
fig = plt.figure(figsize=cm2inch(12.2/2,3)) # 12.2cm is the text-width of the MICCAI template
plt.rcParams['axes.axisbelow'] = True
plt.plot(fraction_to_abs(alpha0_5_n20[:, 0], max_num_imgs = 908),
alpha0_5_n20[:, 1],
color = '#8F89B4', alpha = 1, linewidth=2, label = r'\textsc{DenoiSeg} ($\alpha = 0.5$)')
plt.fill_between(fraction_to_abs(alpha0_5_n20[:, 0], max_num_imgs = 908),
y1 = alpha0_5_n20[:, 1] + alpha0_5_n20[:, 2],
y2 = alpha0_5_n20[:, 1] - alpha0_5_n20[:, 2],
color = '#8F89B4', alpha = 0.5)
plt.plot(fraction_to_abs(sequential_mouse_n20[:, 0], max_num_imgs = 908),
sequential_mouse_n20[:, 1],
color = '#526B34', alpha = 1, linewidth=2, label = r'Sequential Baseline')
plt.fill_between(fraction_to_abs(sequential_mouse_n20[:, 0], max_num_imgs = 908),
y1 = sequential_mouse_n20[:, 1] + sequential_mouse_n20[:, 2],
y2 = sequential_mouse_n20[:, 1] - sequential_mouse_n20[:, 2],
color = '#526B34', alpha = 0.5)
plt.plot(fraction_to_abs(baseline_mouse_n20[:, 0], max_num_imgs = 908),
baseline_mouse_n20[:, 1],
color = '#6D3B2B', alpha = 1, linewidth=2, label = r'Baseline ($\alpha = 0$)')
plt.fill_between(fraction_to_abs(baseline_mouse_n20[:, 0], max_num_imgs = 908),
y1 = baseline_mouse_n20[:, 1] + baseline_mouse_n20[:, 2],
y2 = baseline_mouse_n20[:, 1] - baseline_mouse_n20[:, 2],
color = '#6D3B2B', alpha = 0.25)
plt.semilogx()
leg = plt.legend(loc = 'lower right')
for legobj in leg.legendHandles:
legobj.set_linewidth(3.0)
plt.ylabel(r'\textbf{AP}')
plt.xlabel(r'\textbf{Number of Annotated Training Images}')
plt.grid(axis='y')
plt.xticks(ticks=fraction_to_abs(baseline_mouse_n20[:, 0], max_num_imgs = 908),
           labels=fraction_to_abs(baseline_mouse_n20[:, 0], max_num_imgs = 908).astype(int),
rotation=45)
plt.minorticks_off()
plt.yticks(rotation=45)
plt.xlim([0.92, 19.25])
plt.tight_layout();
plt.savefig('Mouse_AP_n20_area.pdf', pad_inches=0.0);
plt.savefig('Mouse_AP_n20_area.svg', pad_inches=0.0);
###Output
_____no_output_____ |
fitness_inference_analysis/figure4/4d_clustering/20210622_clustering_analysis_unfiltered-bs-CI-1000-v3-permute.ipynb | ###Markdown
Permute the evoEnvt-ploidy 1000 times and calculate clustering metric
###Code
perm = (pd.DataFrame(pd.read_csv('20200509_PCA_unfiltered_0_adj.csv', delimiter=','),columns=['evoEnvt-ploidy']))
perm['random'] = np.random.permutation(perm['evoEnvt-ploidy'])
#making empty dataframe
df1= pd.DataFrame()
df2= pd.DataFrame()
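# For each of the 1000 permutations below: shuffle the evoEnvt-ploidy labels,
# recompute the 5-nearest-neighbor matches for every epoch (and cumulatively),
# and append the per-environment mean match counts to df1 (per epoch) and df2 (cumulative).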
for i in range(1,1001):
perm = (pd.DataFrame(pd.read_csv('20200509_PCA_unfiltered_0_adj.csv', delimiter=','),columns=['evoEnvt-ploidy']))
perm['random'] = np.random.permutation(perm['evoEnvt-ploidy'])
#reading in data
gen0 = (pd.DataFrame(pd.read_csv('20200509_PCA_unfiltered_0_adj.csv', delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
gen200 = (pd.DataFrame(pd.read_csv('20200509_PCA_unfiltered_200_adj.csv', delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
gen400 = (pd.DataFrame(pd.read_csv('20200509_PCA_unfiltered_400_adj.csv', delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
gen600 = (pd.DataFrame(pd.read_csv('20200509_PCA_unfiltered_600_adj.csv', delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
gen800 = (pd.DataFrame(pd.read_csv('20200509_PCA_unfiltered_800_adj.csv', delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
gen1000 = (pd.DataFrame(pd.read_csv('20200509_PCA_unfiltered_1000_adj.csv', delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
cum = (pd.DataFrame(pd.read_csv('20200509_PCA_unfiltered_cum_adj.csv', delimiter=','),columns=['principal component 1','principal component 2'])).rename(columns ={"principal component 1":"principle_component_1", "principal component 2":"principle_component_2"})
#finding nearest neighbor indices
neigh=NearestNeighbors(n_neighbors=6) #6 neighbors is really 5, plus self, which we exclude later from analysis
#epoch0
neigh.fit(gen0)
gen0_neigh = (pd.DataFrame(neigh.kneighbors(gen0, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
#epoch200
neigh.fit(gen200)
gen200_neigh = (pd.DataFrame(neigh.kneighbors(gen200, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
#epoch400
neigh.fit(gen400)
gen400_neigh = (pd.DataFrame(neigh.kneighbors(gen400, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
#epoch600
neigh.fit(gen600)
gen600_neigh = (pd.DataFrame(neigh.kneighbors(gen600, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
#epoch800
neigh.fit(gen800)
gen800_neigh = (pd.DataFrame(neigh.kneighbors(gen800, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
#epoch1000
neigh.fit(gen1000)
gen1000_neigh = (pd.DataFrame(neigh.kneighbors(gen1000, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
#cumulative
neigh.fit(cum)
cum_neigh = (pd.DataFrame(neigh.kneighbors(cum, return_distance=False),columns=["self","neigh1", "neigh2", "neigh3", "neigh4", "neigh5"])).reset_index(drop=True)
#make new neighbors df, use dictionary and to replace neighbor index with evoEnvt
neighbors=['1','2','3','4','5']
#epoch0
gen0_df = pd.DataFrame()
random_dict = (perm).to_dict('series')
gen0_df['self']=gen0_neigh['self'].replace(random_dict['random'])
for neighbor in neighbors:
gen0_df['neigh%s' % neighbor]=gen0_neigh['neigh%s' % neighbor].replace(random_dict['random'])
# epoch200
gen200_df = pd.DataFrame()
gen200_df['self']=gen200_neigh['self'].replace(random_dict['random'])
for neighbor in neighbors:
gen200_df['neigh%s' % neighbor]=gen200_neigh['neigh%s' % neighbor].replace(random_dict['random'])
# epoch400
gen400_df = pd.DataFrame()
gen400_df['self']=gen400_neigh['self'].replace(random_dict['random'])
for neighbor in neighbors:
gen400_df['neigh%s' % neighbor]=gen400_neigh['neigh%s' % neighbor].replace(random_dict['random'])
# epoch600
gen600_df = pd.DataFrame()
gen600_df['self']=gen600_neigh['self'].replace(random_dict['random'])
for neighbor in neighbors:
gen600_df['neigh%s' % neighbor]=gen600_neigh['neigh%s' % neighbor].replace(random_dict['random'])
# epoch800
gen800_df = pd.DataFrame()
gen800_df['self']=gen800_neigh['self'].replace(random_dict['random'])
for neighbor in neighbors:
gen800_df['neigh%s' % neighbor]=gen800_neigh['neigh%s' % neighbor].replace(random_dict['random'])
# epoch1000
gen1000_df = pd.DataFrame()
gen1000_df['self']=gen1000_neigh['self'].replace(random_dict['random'])
for neighbor in neighbors:
gen1000_df['neigh%s' % neighbor]=gen1000_neigh['neigh%s' % neighbor].replace(random_dict['random'])
# cum
cum_df = pd.DataFrame()
cum_df['self']=cum_neigh['self'].replace(random_dict['random'])
for neighbor in neighbors:
cum_df['neigh%s' % neighbor]=cum_neigh['neigh%s' % neighbor].replace(random_dict['random'])
#counting how many neighboring nodes are from the same environment
def similarity(self,neigh):
if self == neigh:
return 1
else:
return 0
for neighbor in neighbors:
gen0_df['match%s' % neighbor] = gen0_df.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen200_df['match%s' % neighbor] = gen200_df.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen400_df['match%s' % neighbor] = gen400_df.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen600_df['match%s' % neighbor] = gen600_df.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen800_df['match%s' % neighbor] = gen800_df.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen1000_df['match%s' % neighbor] = gen1000_df.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
cum_df['match%s' % neighbor] = cum_df.apply(lambda x: similarity(x['self'],x['neigh%s' % neighbor]),axis=1)
gen0_df['sum'] = gen0_df.sum(axis=1)
gen200_df['sum'] = gen200_df.sum(axis=1)
gen400_df['sum'] = gen400_df.sum(axis=1)
gen600_df['sum'] = gen600_df.sum(axis=1)
gen800_df['sum'] = gen800_df.sum(axis=1)
gen1000_df['sum'] = gen1000_df.sum(axis=1)
cum_df['sum'] = cum_df.sum(axis=1)
#merge dataframe with summed matches with PCs for plotting
gen0_plot = pd.concat([gen0,gen0_df],axis=1)
gen200_plot = pd.concat([gen200,gen200_df],axis=1)
gen400_plot = pd.concat([gen400,gen400_df],axis=1)
gen600_plot = pd.concat([gen600,gen600_df],axis=1)
gen800_plot = pd.concat([gen800,gen800_df],axis=1)
gen1000_plot = pd.concat([gen1000,gen1000_df],axis=1)
cum_plot = pd.concat([cum,cum_df],axis=1)
#average the summed matches by evolution condition
mean0=[]
mean200=[]
mean400=[]
mean600=[]
mean800=[]
mean1000=[]
meancum=[]
mean0.append(gen0_plot.groupby('self')['sum'].mean())
mean200.append(gen200_plot.groupby('self')['sum'].mean())
mean400.append(gen400_plot.groupby('self')['sum'].mean())
mean600.append(gen600_plot.groupby('self')['sum'].mean())
mean800.append(gen800_plot.groupby('self')['sum'].mean())
mean1000.append(gen1000_plot.groupby('self')['sum'].mean())
meancum.append(cum_plot.groupby('self')['sum'].mean())
#reformatting epoch data as x series
epoch = [0,200,400,600,800,1000]
epoch_array = np.array(epoch)
mean0_df=(pd.DataFrame(mean0)).rename(columns={"self":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
mean200_df=(pd.DataFrame(mean200)).rename(columns={"self":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
mean400_df=(pd.DataFrame(mean400)).rename(columns={"self":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
mean600_df=(pd.DataFrame(mean600)).rename(columns={"self":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
mean800_df=(pd.DataFrame(mean800)).rename(columns={"self":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
mean1000_df=(pd.DataFrame(mean1000)).rename(columns={"self":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
meancum_df=(pd.DataFrame(meancum)).rename(columns={"self":"","YPD(37C)-D":"YPD37","YPD(37C)-H":"YPD37H","YPD+AA-D":"YPDAA","YPD-D":"YPD"})
true_means = pd.concat([mean0_df,mean200_df,mean400_df,mean600_df,mean800_df,mean1000_df],axis=0)
true_cum = pd.concat([meancum_df],axis=0)
true_means['gen']=epoch_array
df1 = pd.concat([df1,true_means])
df2 = pd.concat([df2,true_cum])
df1.to_csv('20210416_permute_1k_unfil.csv', index=False)
df2.to_csv('20210416_permute_1k_cum_unfil.csv', index=False)
###Output
_____no_output_____ |
notebooks/CubesViewer from SQL database Example.ipynb | ###Markdown
CubesViewer from SQL database Example
###Code
import os
import sys
sys.path.append("../")
import sqlalchemy
from sqlalchemy.engine import create_engine
import cubesext
%load_ext autoreload
%autoreload 2
# pip install git+https://github.com/cfperez/bg_run
#%load_ext bg_run
#import logging
#logger = logging.getLogger()
db_url = 'sqlite:///match.sqlite3'
#engine = create_engine(db_url)
#connection = engine.connect()
(engine_url, model_path) = cubesext.sql2cubes(db_url, model_path="generated/model-match.json", debug=True)
model_path
cubesext.cubes_serve(db_url, model_path, debug=True)
###Output
_____no_output_____
###Markdown
Jupyter Notebook embedded CubesViewer view
###Code
cubesext.cubesviewer_jupyter('webshop_sales')
###Output
_____no_output_____ |
0. Back to Basics/5. Implementing Data Warehouse on AWS/1. AWS RedShift Setup Using Code.ipynb | ###Markdown
Creating Redshift Cluster using the AWS python SDK An example of Infrastructure-as-code
###Code
import pandas as pd
import boto3
import json
###Output
_____no_output_____
###Markdown
Make sure you have an AWS secret and access key- Create a new IAM user in your AWS account- Give it `AdministratorAccess`, from the `Attach existing policies directly` tab- Take note of the access key and secret- Edit the file `dwh.cfg` in the same folder as this notebook and fill in the `[AWS]` section with `KEY=YOUR_AWS_KEY` and `SECRET=YOUR_AWS_SECRET` Load DWH Params from a file
###Code
import configparser
config = configparser.ConfigParser()
config.read_file(open('dwh.cfg'))
KEY = config.get('AWS','KEY')
SECRET = config.get('AWS','SECRET')
DWH_CLUSTER_TYPE = config.get("DWH","DWH_CLUSTER_TYPE")
DWH_NUM_NODES = config.get("DWH","DWH_NUM_NODES")
DWH_NODE_TYPE = config.get("DWH","DWH_NODE_TYPE")
DWH_CLUSTER_IDENTIFIER = config.get("DWH","DWH_CLUSTER_IDENTIFIER")
DWH_DB = config.get("DWH","DWH_DB")
DWH_DB_USER = config.get("DWH","DWH_DB_USER")
DWH_DB_PASSWORD = config.get("DWH","DWH_DB_PASSWORD")
DWH_PORT = config.get("DWH","DWH_PORT")
DWH_IAM_ROLE_NAME = config.get("DWH", "DWH_IAM_ROLE_NAME")
(DWH_DB_USER, DWH_DB_PASSWORD, DWH_DB)
pd.DataFrame({"Param":
["DWH_CLUSTER_TYPE", "DWH_NUM_NODES", "DWH_NODE_TYPE", "DWH_CLUSTER_IDENTIFIER", "DWH_DB", "DWH_DB_USER", "DWH_DB_PASSWORD", "DWH_PORT", "DWH_IAM_ROLE_NAME"],
"Value":
[DWH_CLUSTER_TYPE, DWH_NUM_NODES, DWH_NODE_TYPE, DWH_CLUSTER_IDENTIFIER, DWH_DB, DWH_DB_USER, DWH_DB_PASSWORD, DWH_PORT, DWH_IAM_ROLE_NAME]
})
###Output
_____no_output_____
###Markdown
Create clients for EC2, S3, IAM, and Redshift
###Code
import boto3
ec2 = boto3.resource('ec2',
region_name='us-west-2',
aws_access_key_id=KEY,
aws_secret_access_key=SECRET)
s3 = boto3.resource('s3',
region_name='us-west-2',
aws_access_key_id=KEY,
aws_secret_access_key=SECRET)
iam = boto3.client('iam',
region_name='us-west-2',
aws_access_key_id=KEY,
aws_secret_access_key=SECRET)
redshift = boto3.client('redshift',
region_name="us-west-2",
aws_access_key_id=KEY,
aws_secret_access_key=SECRET)
###Output
_____no_output_____
###Markdown
Check out the sample data sources on S3
###Code
sampleDbBucket = s3.Bucket("awssampledbuswest2")
for obj in sampleDbBucket.objects.filter(Prefix='ssbgz'):
print(obj)
###Output
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/customer0002_part_00.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/dwdate.tbl.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/lineorder0000_part_00.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/lineorder0001_part_00.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/lineorder0002_part_00.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/lineorder0003_part_00.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/lineorder0004_part_00.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/lineorder0005_part_00.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/lineorder0006_part_00.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/lineorder0007_part_00.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/part0000_part_00.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/part0001_part_00.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/part0002_part_00.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/part0003_part_00.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/supplier.tbl_0000_part_00.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/supplier0001_part_00.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/supplier0002_part_00.gz')
s3.ObjectSummary(bucket_name='awssampledbuswest2', key='ssbgz/supplier0003_part_00.gz')
###Markdown
IAM ROLE- Create an IAM Role that makes Redshift able to access S3 bucket (ReadOnly)
###Code
# Create the IAM role
try:
print('1.1 Creating a new IAM Role')
dwhRole = iam.create_role(
Path = '/',
RoleName = DWH_IAM_ROLE_NAME,
Description = 'Allows Redshift cluster to call AWS service on your behalf.',
AssumeRolePolicyDocument = json.dumps(
{'Statement': [{'Action': 'sts:AssumeRole',
'Effect': 'Allow',
'Principal': {'Service': 'redshift.amazonaws.com'}}],
'Version': '2012-10-17'})
)
except Exception as e:
print(e)
# Attach Policy
print('1.2 Attaching Policy')
iam.attach_role_policy(RoleName=DWH_IAM_ROLE_NAME,
PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
)['ResponseMetadata']['HTTPStatusCode']
# Get and print the IAM role ARN
print('1.3 Get the IAM role ARN')
roleArn = iam.get_role(RoleName=DWH_IAM_ROLE_NAME)['Role']['Arn']
print(roleArn)
###Output
1.3 Get the IAM role ARN
arn:aws:iam::516024842573:role/dwhRole
###Markdown
Redshift Cluster- Create a RedShift Cluster- For complete arguments to `create_cluster`, see [docs](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/redshift.htmlRedshift.Client.create_cluster)
###Code
try:
response = redshift.create_cluster(
# Hardware
ClusterType=DWH_CLUSTER_TYPE,
NodeType=DWH_NODE_TYPE,
NumberOfNodes=int(DWH_NUM_NODES),
# Identifiers & credentials
DBName=DWH_DB,
ClusterIdentifier=DWH_CLUSTER_IDENTIFIER,
MasterUsername=DWH_DB_USER,
MasterUserPassword=DWH_DB_PASSWORD,
            # Role (to allow s3 access)
IamRoles=[roleArn]
)
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
2.1 *Describe* the cluster to see its status- run this block several times until the cluster status becomes `Available`
###Code
def prettyRedshiftProps(props):
pd.set_option('display.max_colwidth', -1)
keysToShow = ["ClusterIdentifier", "NodeType", "ClusterStatus", "MasterUsername", "DBName", "Endpoint", "NumberOfNodes", 'VpcId']
x = [(k, v) for k,v in props.items() if k in keysToShow]
return pd.DataFrame(data=x, columns=["Key", "Value"])
myClusterProps = redshift.describe_clusters(ClusterIdentifier=DWH_CLUSTER_IDENTIFIER)['Clusters'][0]
prettyRedshiftProps(myClusterProps)
###Output
_____no_output_____
###Markdown
2.2 Take note of the cluster endpoint and role ARN DO NOT RUN THIS unless the cluster status becomes "Available"
###Code
DWH_ENDPOINT = myClusterProps['Endpoint']['Address']
DWH_ROLE_ARN = myClusterProps['IamRoles'][0]['IamRoleArn']
print("DWH_ENDPOINT :: ", DWH_ENDPOINT)
print("DWH_ROLE_ARN :: ", DWH_ROLE_ARN)
###Output
DWH_ENDPOINT :: dwhcluster.ctnjrbr9gs7r.us-west-2.redshift.amazonaws.com
DWH_ROLE_ARN :: arn:aws:iam::516024842573:role/dwhRole
###Markdown
Open an incoming TCP port to access the cluster endpoint
###Code
try:
vpc = ec2.Vpc(id=myClusterProps['VpcId'])
defaultSg = list(vpc.security_groups.all())[0]
print(defaultSg)
defaultSg.authorize_ingress(
GroupName=defaultSg.group_name,
CidrIp='0.0.0.0/0',
IpProtocol='TCP',
FromPort=int(DWH_PORT),
ToPort=int(DWH_PORT)
)
except Exception as e:
print(e)
###Output
ec2.SecurityGroup(id='sg-6262a133')
###Markdown
Make sure you can connect to the cluster
###Code
%load_ext sql
conn_string="postgresql://{}:{}@{}:{}/{}".format(DWH_DB_USER, DWH_DB_PASSWORD, DWH_ENDPOINT, DWH_PORT,DWH_DB)
print(conn_string)
%sql $conn_string
###Output
postgresql://dwhuser:Passw0rd@dwhcluster.ctnjrbr9gs7r.us-west-2.redshift.amazonaws.com:5439/dwh
###Markdown
Clean up your resources DO NOT RUN THIS UNLESS YOU ARE SURE We will be using these resources in the next exercises
###Code
#### CAREFUL!!
#-- Uncomment & run to delete the created resources
redshift.delete_cluster( ClusterIdentifier=DWH_CLUSTER_IDENTIFIER, SkipFinalClusterSnapshot=True)
#### CAREFUL!!
###Output
_____no_output_____
###Markdown
- run this block several times until the cluster really deleted
###Code
myClusterProps = redshift.describe_clusters(ClusterIdentifier=DWH_CLUSTER_IDENTIFIER)['Clusters'][0]
prettyRedshiftProps(myClusterProps)
#### CAREFUL!!
#-- Uncomment & run to delete the created resources
iam.detach_role_policy(RoleName=DWH_IAM_ROLE_NAME, PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")
iam.delete_role(RoleName=DWH_IAM_ROLE_NAME)
#### CAREFUL!!
###Output
_____no_output_____ |
notebooks/ucb.ipynb | ###Markdown
Upper Confidence Bound Motivation The [Upper Confidence Bound (UCB) Algorithm](https://banditalgs.com/2016/09/18/the-upper-confidence-bound-algorithm/) is a highly effective policy for finding the optimal arm in classical [Multi-Armed Bandits](https://banditalgs.com/2016/09/18/the-upper-confidence-bound-algorithm/). It was developed by [T. L. Lai and H. Robbins](https://www.sciencedirect.com/science/article/pii/0196885885900028) in 1985. It is based on the principle of **Optimism in the Face of Uncertainty**. Based on the previous experience, in each round, the algorithm determines for each arm the most optimistic estimate for the expected reward that is still plausible from statistical concentration inequalities. This principle naturally gives rise to an exploration bonus for rarely visited arms. More precisely, in round $t$ this bonus takes the form$$\sqrt{\frac{2\log(1/\delta)}{T(t - 1)}},$$where $\delta$ is a hyperparameter and $T(t - 1)$ denotes the number of times the considered arm was played before round $t$.
###Code
import numpy as np
def expl_bonus(visits,
delta = 1/np.exp(4)):
"""UCB exploration bonus
# Arguments
visits: number of visits to the considered arms
delta: hyperparameter encoding degree of optimism
# Result
UCB exploration bonus
"""
return np.sqrt(2 * np.log(1 / delta) / visits)
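# Example: with the default delta, expl_bonus(np.array([1, 10, 100])) gives
# roughly [2.83, 0.89, 0.28]; the bonus shrinks as an arm accumulates visits.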
###Output
_____no_output_____
###Markdown
Simulation To illustrate how effective UCB is, we look at a 2-armed bandit with normal rewards of means 1 and 1.1, respectively.
###Code
mus = [1, 1.1]
N = 1000
seed = 42
np.random.seed(seed)
###Output
_____no_output_____
###Markdown
We play the bandit for $N= 1000$ episodes.
###Code
visits = np.ones(2)
rewards = np.zeros(2)
trace_a = []
for _ in range(N):
#select action via ucb
kpi = rewards / visits + expl_bonus(visits)
a = np.argmax(kpi)
#retrieve rewards
r = np.random.normal(mus[a])
#update history
visits[a] += 1
rewards[a] += r
#remember actions
trace_a += [a]
###Output
_____no_output_____
###Markdown
A plot of the trace reveals that UCB quickly identifies arm 1 as the optimal arm.
###Code
import seaborn as sns
sns.lineplot(x=np.arange(N), y=trace_a)
###Output
_____no_output_____ |
DataScience/02.DataTidying/Data Tidying and Cleaning Lab.ipynb | ###Markdown
Data Tidying and Cleaning Lab Reading, tidying and cleaning data. Preparing data for exploration, mining, analysis and learning Problem 1. Read the dataset (2 points)The dataset [here](http://archive.ics.uci.edu/ml/datasets/Auto+MPG) contains information about fuel consumption in different cars.Click the "Data Folder" link and read `auto_mpg.data` into Python. You can download it, if you wish, or you can read it directly from the link.Give meaningful (and "Pythonic") column names, as per the `auto_mpg.names` file:1. mpg2. cylinders3. displacement4. horsepower5. weight6. acceleration7. model_year8. origin9. car_name
###Code
mpg_data = pd.read_fwf("http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data", header = None)
mpg_data.columns =["mpg", "cylinders", "displacement", "horsepower", "weight", "acceleration", "model_year", "origin", "car_name"]
nose.tools.assert_is_not_none(mpg_data)
###Output
_____no_output_____
###Markdown
Print the first 4 rows in the dataset to get a feel of what it looks like:
###Code
mpg_data.head(4)
###Output
_____no_output_____
###Markdown
Problem 2. Inspect the dataset (1 point)Write a function which accepts a dataset and returns the number of observations and features in it, like so: ``` 10 observations on 15 features```Where 10 and 15 should be replaced with the real numbers. Test your function with the `auto_mpg` dataset.Make sure the function works with other datasets (don't worry about "1 features" or "1 observations", just leave it as it is).
###Code
mpg_data.shape[1]
def observations_and_features(dataset):
"""
Returns the number of observations and features in the provided dataset
"""
observations = dataset.shape[0]
features = dataset.shape[1]
return "{} observations on {} features".format(observations, features)
print(observations_and_features(mpg_data))
###Output
398 observations on 9 features
###Markdown
Inspect the data types for each column.
###Code
mpg_data.dtypes
###Output
_____no_output_____
###Markdown
Problem 3. Correct errors (1 point)The `horsepower` column looks strange. It's a string but it must be a floating-point number. Find out why this is so and convert it to floating-point number.
###Code
mpg_data.loc[mpg_data.horsepower == '?', "horsepower"] = -1.0
mpg_data.horsepower = mpg_data.horsepower.astype(float)
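# An alternative (not used here, since later problems rely on the -1.0 sentinel)
# would be to coerce the bad values to NaN instead:
# mpg_data.horsepower = pd.to_numeric(mpg_data.horsepower, errors="coerce")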
nose.tools.assert_equal(mpg_data.horsepower.dtype, "float64")
###Output
_____no_output_____
###Markdown
Problem 4. Missing values: inspection (1 point)We saw that the `horsepower` column contained null values. Display the rows which contain those values. Assign the resulting dataframe to the `unknown_hp` variable.
###Code
def get_unknown_hp(dataframe):
"""
Returns the rows in the provided dataframe where the "horsepower" column is NaN
"""
unknown_hp = dataframe[dataframe["horsepower"] == -1.0]
return unknown_hp
cars_with_unknown_hp = get_unknown_hp(mpg_data)
print(cars_with_unknown_hp)
###Output
mpg cylinders displacement horsepower weight acceleration \
32 25.0 4 98.0 -1.0 2046.0 19.0
126 21.0 6 200.0 -1.0 2875.0 17.0
330 40.9 4 85.0 -1.0 1835.0 17.3
336 23.6 4 140.0 -1.0 2905.0 14.3
354 34.5 4 100.0 -1.0 2320.0 15.8
374 23.0 4 151.0 -1.0 3035.0 20.5
model_year origin car_name
32 71 1 "ford pinto"
126 74 1 "ford maverick"
330 80 2 "renault lecar deluxe"
336 80 1 "ford mustang cobra"
354 81 2 "renault 18i"
374 82 1 "amc concord dl"
###Markdown
Problem 5. Missing data: correction (1 point)It seems like the `NaN` values are a small fraction of all values. We can try one of several things:* Remove them* Replace them (e.g. with the mean power of all cars)* Look up the models on the internet and try our best guess on the powerThe third one is probably the best but the first one will suffice since these records are too few. Remove those values. Save the dataset in the same `mpg_data` variable. Ensure there are no more `NaN`s.
###Code
mpg_data = mpg_data[mpg_data.horsepower != -1.0]
nose.tools.assert_equal(len(get_unknown_hp(mpg_data)), 0)
###Output
_____no_output_____
###Markdown
Problem 6. Years of production (1 + 1 points)Display all unique model years. Assign them to the variable `model_years`.
###Code
def get_unique_model_years(dataframe):
"""
Returns the unique values of the "model_year" column
of the dataframe
"""
    model_years = dataframe.model_year.unique()
return model_years
model_years = get_unique_model_years(mpg_data)
print(model_years)
###Output
[70 71 72 73 74 75 76 77 78 79 80 81 82]
###Markdown
These don't look so good. Convert them to real years, like `70 -> 1970, 71 -> 1971`. Replace the column values in the dataframe.
###Code
mpg_data.model_year = '19' + mpg_data.model_year.astype(str)
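# Note: this yields year strings ('1970', ...); if numeric years were preferred,
# adding 1900 to the original (integer) model_year column would work as well.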
model_years = get_unique_model_years(mpg_data)
print(model_years)
###Output
['1970' '1971' '1972' '1973' '1974' '1975' '1976' '1977' '1978' '1979'
'1980' '1981' '1982']
###Markdown
Problem 7. Exploration: low-power cars (1 point)The data looks quite good now. Let's try some exploration.Write a function to find the cars which have the smallest number of cylinders and print their model names. Return a list of car names.
###Code
def get_model_names_smallest_cylinders(dataframe):
"""
Returns the names of the cars with the smallest number of cylinders
"""
    smallest_cylinder_count = dataframe.cylinders.min()
    car_names = dataframe.car_name[dataframe.cylinders == smallest_cylinder_count]
return car_names
car_names = get_model_names_smallest_cylinders(mpg_data)
print(car_names)
nose.tools.assert_true(car_names.shape == (4,) or car_names.shape == (4, 1))
###Output
71 "mazda rx2 coupe"
111 "maxda rx3"
243 "mazda rx-4"
334 "mazda rx-7 gs"
Name: car_name, dtype: object
###Markdown
Problem 8. Exploration: correlations (1 point)Finally, let's see some connections between variables. These are also called **correlations**.Find how to calculate correlations between different columns using `pandas`.**Hint:** The correlation function in `pandas` returns a `DataFrame` by default. You need only one value from it.Create a function which accepts a dataframe and two columns and prints the correlation coefficient between those two columns.
###Code
mpg_data.corr()
def calculate_correlation(dataframe, first_column, second_column):
"""
Calculates and returns the correlation coefficient between the two columns in the dataframe.
"""
correlation = dataframe.corr()[first_column][second_column]
return correlation
hp_weight = calculate_correlation(mpg_data, "horsepower", "weight")
print("Horsepower:Weight correlation coefficient:", hp_weight)
nose.tools.assert_almost_equal(hp_weight, 0.864537737574, delta = 0.01)
###Output
Horsepower:Weight correlation coefficient: 0.8645377375741428
|
Figure 4 (DI, DN Fits).ipynb | ###Markdown
4 A Different models schematic (Beta is 1 for DN)
###Code
maxE = 15
numPoints = 1000
a = np.linspace(0,maxE,numPoints)
markersize = 3
markevery = 75
fig,ax = plt.subplots()
ax.plot(a, a)
ax.plot(a, a - 4, '8--', color='brown', label="$\\alpha = 4$", markersize=markersize, markevery=markevery)
ax.plot(a, a - 8, 'p--', color='brown', label="$\\alpha = 8$", markersize=markersize, markevery=markevery)
# ax.plot(a, a - 9, '.--', color='brown', label="$\\alpha = 9$", markersize=markersize, markevery=markevery)
ax.plot(a, a - 12, '^--', color='brown', label="$\\alpha = 12$", markersize=markersize, markevery=markevery)
ax.set_xlabel("E")
ax.set_ylabel("O")
ax.set_ylim((0,maxE))
ax.set_xlim((0,maxE))
ax.set_title("Subtractive Inhibition $O = E - \\alpha$")
ax.legend()
simpleaxis(ax)
fig.set_figwidth(1.4)
fig.set_figheight(1.4)
dump(fig,file('figures/fig4/4a1.pkl','wb'))
plt.show()
fig,ax = plt.subplots()
ax.plot(a, a)
ax.plot(a, a - a*0.3, '8--', color='green', label="$\\beta = 0.3$", markersize=markersize, markevery=markevery)
# ax.plot(a, a - a*0.4, 'p--', color='green', label="$\\beta = 0.4$", markersize=markersize, markevery=markevery)
ax.plot(a, a - a*0.6, '.--', color='green', label="$\\beta = 0.6$", markersize=markersize, markevery=markevery)
ax.plot(a, a - a*0.9, '^--', color='green', label="$\\beta = 0.9$", markersize=markersize, markevery=markevery)
ax.set_xlabel("E")
ax.set_ylabel("O")
ax.set_ylim((0,maxE))
ax.set_xlim((0,maxE))
ax.legend()
ax.set_title("Divisive Inhibition $O = E - \\beta \\times E$")
simpleaxis(ax)
fig.set_figwidth(1.4)
fig.set_figheight(1.4)
dump(fig,file('figures/fig4/4a2.pkl','wb'))
plt.show()
fig,ax = plt.subplots()
ax.plot(a, a)
ax.plot(a, (3*a)/(3 + a), '8--', color='purple', label="$\\gamma = 3$", markersize=markersize, markevery=markevery)
ax.plot(a, (9*a)/(9 + a), 'p--', color='purple', label="$\\gamma = 9$", markersize=markersize, markevery=markevery)
# ax.plot(a, (15*a)/(15 + a), '.--', color='purple', label="$\\gamma = 15$", markersize=markersize, markevery=markevery)
ax.plot(a, (27*a)/(27 + a), '^--', color='purple', label="$\\gamma = 27$", markersize=markersize, markevery=markevery)
ax.set_xlabel("E")
ax.set_ylabel("O")
ax.set_ylim((0,maxE))
ax.set_xlim((0,maxE))
fig.set_figwidth(1.4)
fig.set_figheight(1.4)
ax.legend()
ax.set_title("Divisive Normalization $O = E - \\frac{ \\gamma E}{ \\gamma E + 1} \\times E $")
simpleaxis(ax)
dump(fig,file('figures/fig4/4a3.pkl','wb'))
plt.show()
currentClampFiles = prefix + '/media/sahil/NCBS_Shares_BGStim/patch_data/normalization_files.txt'
with open (currentClampFiles,'r') as r:
dirnames = r.read().splitlines()
neurons = {}
prefix = '/home/bhalla/Documents/Codes/data'
for dirname in dirnames:
cellIndex = dirname.split('/')[-2]
filename = prefix + dirname + 'plots/' + cellIndex + '.pkl'
n = Neuron.load(filename)
neurons[str(n.date) + '_' + str(n.index)] = n
#Colorscheme for squares
color_sqr = { index+1: color for index, color in enumerate(matplotlib.cm.viridis(np.linspace(0,1,9)))}
control_result2_rsquared_adj = []
control_result1_rsquared_adj = []
control_var_expected = []
gabazine_result2_rsquared_adj = []
gabazine_result1_rsquared_adj = []
gabazine_var_expected = []
tolerance = 5e-4
def linearModel(x, beta=100):
# Linear model
return (x*(1-beta))
def DN_model(x, beta=1, gamma=100):
# Divisive normalization model
#return x - a*(x**2)/(b+x)
#return ((x**2)*(1-beta) + (gamma*x))/(x+gamma)
return (gamma*x)/(x+gamma)
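# Note: beta has no effect in this reduced DN form (the fuller parameterization is
# left commented out above), so gamma is the only meaningful fitted parameter.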
neurons
###Output
_____no_output_____
###Markdown
4 B Divisive Normalization representative cell
###Code
neuron = neurons['170303_c1']
feature = 0 # Area under the curve
expected, observed, g_expected, g_observed = {}, {}, {}, {}
for expType, exp in neuron:
## Control case
if(expType == "Control"):
for sqr in exp:
if sqr > 1:
expected[sqr] = []
observed[sqr] = []
for coord in exp[sqr].coordwise:
for trial in exp[sqr].coordwise[coord].trials:
if all([value == 0 for value in trial.flags.values()]):
expected[sqr].append(exp[sqr].coordwise[coord].expected_feature[feature])
observed[sqr].append(trial.feature[feature])
lin_aic = []
dn_aic = []
lin_chi = []
dn_chi = []
max_exp, max_g_exp = 0.,0.
fig, ax = plt.subplots()
squareVal = []
list_control_expected = []
list_control_observed = []
for sqr in sorted(observed):
squareVal.append(ax.scatter(expected[sqr], observed[sqr], label=str(sqr), c=color_sqr[sqr], alpha=0.8, s=8))
max_exp = max(max_exp, max(expected[sqr]))
list_control_expected += expected[sqr]
list_control_observed += observed[sqr]
X = np.array(list_control_expected)
y = np.array(list_control_observed)
idx = np.argsort(X)
X = X[idx]
y = y[idx]
linear_Model = lmfit.Model(linearModel)
DN_Model = lmfit.Model(DN_model)
lin_pars = linear_Model.make_params()
lin_result = linear_Model.fit(y, lin_pars, x=X)
lin_aic.append(lin_result.aic)
lin_chi.append(lin_result.redchi)
DN_pars = DN_Model.make_params()
DN_result = DN_Model.fit(y, DN_pars, x=X)
dn_aic.append(DN_result.aic)
dn_chi.append(DN_result.redchi)
# print (lin_result.fit_report())
# print (DN_result.fit_report())
ax.set_xlim(xmin=0.)
ax.set_ylim(ymin=0.)
ax.set_xlabel("Expected")
ax.set_ylabel("Observed")
ax.set_title("Divisive Normalization and Inhibition fits")
# These fit lines are needed below for the legend handles (legends = div_inh + div_norm + ...)
div_inh = ax.plot(X, lin_result.best_fit, '-', color='green', lw=2)
div_norm = ax.plot(X, DN_result.best_fit, '-', color='purple', lw=2)
max_exp *=1.1
max_g_exp *=1.1
ax.set_xlim(0,max_exp)
ax.set_ylim(0,max_exp)
ax.set_xlabel("Expected Sum (mV)")
ax.set_ylabel("Observed Sum (mV)")
linear = ax.plot((0,max_exp), (0,max_exp), 'k--')
legends = div_inh + div_norm + linear + squareVal
# labels = ["Divisive Inhibition, $\\beta$ = {:.2f}".format(lin_result.params['beta'].value), "Divisive Normalization, $\\beta$ = {:.2f}, $\\gamma$ = {:.2f}".format(DN_result.params['beta'].value, DN_result.params['gamma'].value), "Linear sum"] + sorted(observed.keys())
labels = ["Divisive Inhibition, $\\beta$ = {:.2f}".format(lin_result.params['beta'].value), "Divisive Normalization, $\\gamma$ = {:.2f}".format(DN_result.params['gamma'].value), "Linear sum"] + sorted(observed.keys())
# ax.legend(legends, labels, loc='upper left')
simpleaxis(ax)
fig.set_figwidth(2.5)
fig.set_figheight(2.5)
dump(fig,file('figures/fig4/4b.pkl','wb'))
plt.show()
feature = 0 # Area under the curve
lin_bic = []
dn_bic = []
lin_chi = []
dn_chi = []
beta = []
gamma = []
delta = []
zeta, zeta2 = [], []
for index in neurons:
# print (index)
neuron = neurons[index]
expected, observed, g_expected, g_observed = {}, {}, {}, {}
for expType, exp in neuron:
## Control case
if(expType == "Control"):
for sqr in exp:
if sqr > 1:
expected[sqr] = []
observed[sqr] = []
for coord in exp[sqr].coordwise:
for trial in exp[sqr].coordwise[coord].trials:
if all([value == 0 for value in trial.flags.values()]):
expected[sqr].append(exp[sqr].coordwise[coord].expected_feature[feature])
observed[sqr].append(trial.feature[feature])
max_exp, max_g_exp = 0.,0.
squareVal = []
list_control_expected = []
list_control_observed = []
for sqr in sorted(observed):
squareVal.append(ax.scatter(expected[sqr], observed[sqr], label=str(sqr), c=color_sqr[sqr], alpha=0.8))
max_exp = max(max_exp, max(expected[sqr]))
list_control_expected += expected[sqr]
list_control_observed += observed[sqr]
X = np.array(list_control_expected)
y = np.array(list_control_observed)
idx = np.argsort(X)
X = X[idx]
y = y[idx]
linear_Model = lmfit.Model(linearModel)
DN_Model = lmfit.Model(DN_model)
lin_pars = linear_Model.make_params()
lin_result = linear_Model.fit(y, lin_pars, x=X)
lin_bic.append(lin_result.bic)
lin_chi.append(lin_result.redchi)
beta.append(lin_result.params['beta'])
DN_pars = DN_Model.make_params()
DN_result = DN_Model.fit(y, DN_pars, x=X)
dn_bic.append(DN_result.bic)
dn_chi.append(DN_result.redchi)
gamma.append(DN_result.params['gamma'])
delta.append(DN_result.params['beta'])
###Output
_____no_output_____
###Markdown
4 C (Chi-squares population)
###Code
indices = [1,2]
fig, ax = plt.subplots()
for ind, (l,d) in enumerate(zip(lin_chi, dn_chi)):
ax.plot(indices, [l,d], 'o-', alpha=0.4, color='0.5', markerfacecolor='white')
# ax.violinplot([lin_chi,dn_chi], indices)
# notch shape box plot
bplot = ax.boxplot([lin_chi,dn_chi],
notch=True, # notch shape
# vert=True, # vertical box aligmnent
patch_artist=True) # fill with color
colors = ['green', 'purple']
for patch, color in zip(bplot['boxes'], colors):
patch.set_facecolor(color)
patch.set_alpha(1.)
ax.hlines(1, 0,3, linestyles='--', alpha=1)
# ax.boxplot(, [1])
# ax.set_xlim((-1,2))
ax.set_ylim((-1,7))
ax.set_xticks(indices)
ax.set_xticklabels(('DI', 'DN'))
ax.set_title("Reduced chi-square values for DI and DN")
simpleaxis(ax)
print(ss.ttest_rel(lin_chi, dn_chi))
y, h, col = np.max(lin_chi), 0.5, 'k'
plt.plot([1,1, 2, 2], [y, y+h, y+h, y], lw=1.5, c=col)
plt.text((1+2)*.5, y+h, "***", ha='center', va='bottom', color=col)
fig.set_figwidth(3)
fig.set_figheight(3)
#dump(fig,file('figures/fig4/4c.pkl','wb'))
plt.savefig('figures/fig4/4c.svg')
plt.show()
ratio_models = np.array(lin_chi)/np.array(dn_chi)
fig, ax = plt.subplots()
bins = np.linspace(0.8,2.4,9)
ax.hist(ratio_models,bins=bins)
ax.set_ylabel("# neurons")
#ax.set_xlim(0,3)
ax.vlines(1,0,15,'r','--')
simpleaxis(ax)
left, bottom, width, height = [0.5, 0.4, 0.3, 0.5]
ax2 = fig.add_axes([left, bottom, width, height])
simpleaxis(ax2)
indices = [1,2]
for ind, (l,d) in enumerate(zip(lin_chi, dn_chi)):
ax2.plot(indices, [l,d], 'o-', alpha=0.4, color='0.5', markerfacecolor='white', markersize=3)
# ax.violinplot([lin_chi,dn_chi], indices)
# notch shape box plot
bplot = ax2.boxplot([lin_chi,dn_chi],
notch=True, # notch shape
vert=True, # vertical box aligmnent
patch_artist=True, widths=[0.3,0.3]) # fill with color
colors = ['green', 'purple']
for patch, color in zip(bplot['boxes'], colors):
patch.set_facecolor(color)
patch.set_alpha(1.)
for patch in (bplot['fliers']):
patch.set_alpha(0)
ax2.hlines(1, 0,3, linestyles='--', alpha=1)
# ax.boxplot(, [1])
# ax.set_xlim((-1,2))
ax2.set_ylim((-1,7))
ax2.set_xticks(indices)
ax2.set_xticklabels(('DI', 'DN'))
ax2.set_ylabel("reduced $\\chi^2$")
simpleaxis(ax2)
print(ss.ttest_rel(lin_chi, dn_chi))
y, h, col = np.max(lin_chi), 0.5, 'k'
ax2.plot([1,1, 2, 2], [y, y+h, y+h, y], lw=1.5, c=col)
ax2.text((1+2)*.5, y+h, "***", ha='center', va='bottom', color=col)
fig.set_figwidth(2)
fig.set_figheight(2)
dump(fig,file('figures/fig4/4c.pkl','wb'))
plt.savefig('figures/fig4/4c.svg')
plt.show()
min(ratio_models)
###Output
_____no_output_____
###Markdown
4 D (BIC Population)
###Code
indices = [1,2]
fig, ax = plt.subplots()
for ind, (l,d) in enumerate(zip(lin_bic, dn_bic)):
ax.plot(indices, [l,d], 'o-', color='0.5', alpha=0.4, markerfacecolor='white')
# ax.violinplot(lin_chi, [0])
# ax.violinplot(dn_chi, [1])
# notch shape box plot
colors = ['green', 'purple']
bplot = ax.boxplot([lin_bic,dn_bic],
notch=False, # notch shape
vert=True, # vertical box aligmnent
patch_artist=True) # fill with color
print (bplot.keys())
for patch in (bplot['boxes']):
patch.set_alpha(0)
for patch in (bplot['whiskers']):
patch.set_alpha(0)
for patch in (bplot['caps']):
patch.set_alpha(0)
for patch in (bplot['fliers']):
patch.set_alpha(0)
for patch,color in zip(bplot['medians'],colors):
patch.set_color(color)
patch.set_linewidth(2)
patch.set_alpha(0)
#ax.set_ylim((-300,800))
#ax.hlines(0, 0,3, linestyles='--', alpha=0.6)
ax.set_xticks(indices)
ax.set_xticklabels(('DI', 'DN'))
ax.set_title("BIC values for DI and DN")
# plt.legend()
simpleaxis(ax)
print(ss.ttest_rel(lin_bic, dn_bic))
fig.set_figwidth(3)
fig.set_figheight(3)
y, h, col = np.max(lin_bic), 100, 'k'
plt.plot([1,1, 2, 2], [y, y+h, y+h, y], lw=1.5, c=col)
plt.text((1+2)*.5, y+h, "***", ha='center', va='bottom', color=col)
# dump(fig,file('figures/fig4/4d.pkl','wb'))
# plt.savefig('figures/fig4/4d.svg')
plt.show()
np.mean(lin_bic), np.mean(dn_bic)
divisor = np.array([a**2 + b**2 for a,b in zip(lin_bic,dn_bic)])
diff_models = (np.array(lin_bic)-np.array(dn_bic))
print (len(diff_models))
fig, ax = plt.subplots()
bins = np.linspace(-100,300,9)
ax.hist(diff_models,bins=bins)
ax.vlines(0,0,15,'r','--')
ax.set_xlabel("DI - DN")
ax.set_ylabel("# neurons")
simpleaxis(ax)
left, bottom, width, height = [0.7, 0.4, 0.2, 0.4]
ax2 = fig.add_axes([left, bottom, width, height])
simpleaxis(ax2)
indices = [1,2]
for ind, (l,d) in enumerate(zip(lin_bic, dn_bic)):
ax2.plot(indices, [l,d], 'o-', color='0.5', alpha=0.4, markerfacecolor='white', markersize=3)
ax2.set_xlim(0.8,2.2)
y, h, col = np.max(lin_bic), 100, 'k'
ax2.set_xticks(indices)
ax2.set_xticklabels(('DI', 'DN'))
ax2.set_yticks([-500,0,500])
ax2.set_ylabel('BIC')
mf = matplotlib.ticker.ScalarFormatter(useMathText=True)
mf.set_powerlimits((0,0))
ax2.yaxis.set_major_formatter(mf)
ax2.plot([1,1, 2, 2], [y, y+h, y+h, y], lw=1.5, c=col)
ax2.text((1+2)*.5, y+h, "***", fontsize=12 , ha='center', va='bottom', color=col)
ax.set_title('BIC')
fig.set_figwidth(2)
fig.set_figheight(2)
fig.tight_layout()
dump(fig,file('figures/fig4/4d.pkl','wb'))
plt.savefig('figures/fig4/4d.svg')
# ax.set_xlim(0,3)
plt.show()
###Output
32
###Markdown
4 E (DN Fit parameter gamma)
###Code
fig, ax = plt.subplots()
bins = 18
ax.hist(gamma, color='purple',bins=bins)
# ax.set_xlim(-1,2)
ax.set_xlabel("$\\gamma$")
ax.set_ylabel("# neurons")
ax.set_title("Distribution of fit parameter $\\gamma$")
ax.set_xticks([0,20,40])
simpleaxis(ax)
fig.set_figwidth(1.7)
fig.set_figheight(1.7)
dump(fig,file('figures/fig4/4e1.pkl','wb'))
plt.show()
# bins = np.linspace(0,3,20)
# fig, ax = plt.subplots()
# ax.hist(delta, color='green', bins=bins)
# ax.set_xlabel("$\\beta$ (DN)", fontsize=18)
# ax.set_ylabel("# neurons", fontsize=18)
# ax.set_title("Distribution of fit parameter $\\beta$", fontsize=18)
# simpleaxis(ax)
# fig.set_figwidth(2)
# fig.set_figheight(2)
# ax.set_xlim(0,3)
# dump(fig,file('figures/fig4/4e2.pkl','wb'))
# plt.show()
bins = np.linspace(0,1,10)
fig, ax = plt.subplots()
ax.hist(beta, color='green', bins=bins)
ax.set_xlabel("$\\beta$")
ax.set_ylabel("# neurons")
ax.set_title("Distribution of fit parameter $\\beta$")
simpleaxis(ax)
fig.set_figwidth(1.7)
fig.set_figheight(1.7)
ax.set_xlim(0,1)
dump(fig,file('figures/fig4/4e3.pkl','wb'))
plt.show()
np.median(gamma), np.median(beta)
fig, ax = plt.subplots()
# ax.plot(gamma, c='purple')
ax.plot(delta, c='green')
ax.plot(beta, '--', c='green')
ax.set_xlabel("Index")
ax.set_ylabel("Par value")
plt.show()
fig, ax = plt.subplots()
ax.plot(gamma, c='purple')
ax.set_xlabel("Index")
ax.set_ylabel("Par value")
plt.show()
fig, ax = plt.subplots()
# ax.plot(gamma, c='purple')
ax.plot(lin_chi, '--', c='green')
ax.plot(dn_chi, c='green')
ax.set_xlabel("Index")
ax.set_ylabel("Par value")
plt.show()
fig, ax = plt.subplots()
# ax.plot(gamma, c='purple')
ax.plot(lin_bic, '--', c='green')
ax.plot(dn_bic, c='green')
ax.set_xlabel("Index")
ax.set_ylabel("Par value")
plt.show()
## Not using
# fig, ax = plt.subplots()
# bins = 15
# ax.hist(beta, bins=bins, label="$\\beta$")
# plt.legend()
# fig.set_figheight(8)
# fig.set_figwidth(8)
# plt.show()
# lin_aic = []
# dn_aic = []
# lin_chi = []
# dn_chi = []
# control_observed = {}
# control_observed_average = {}
# gabazine_observed ={}
# gabazine_observed_average = {}
# control_expected = {}
# control_expected_average = {}
# gabazine_expected ={}
# gabazine_expected_average = {}
# feature = 0
# neuron = Neuron.load(filename)
# for expt in neuron.experiment:
# print ("Starting expt {}".format(expt))
# for numSquares in neuron.experiment[expt].keys():
# print ("Square {}".format(numSquares))
# if not numSquares == 1:
# nSquareData = neuron.experiment[expt][numSquares]
# if expt == "Control":
# coords_C = nSquareData.coordwise
# for coord in coords_C:
# if feature in coords_C[coord].feature:
# control_observed_average.update({coord: coords_C[coord].average_feature[feature]})
# control_expected_average.update({coord: coords_C[coord].expected_feature[feature]})
# control_observed.update({coord: []})
# control_expected.update({coord: []})
# for trial in coords_C[coord].trials:
# if feature in trial.feature:
# control_observed[coord].append(trial.feature[feature])
# control_expected[coord].append(coords_C[coord].expected_feature[feature])
# elif expt == "GABAzine":
# coords_I = nSquareData.coordwise
# for coord in coords_I:
# if feature in coords_I[coord].feature:
# gabazine_observed.update({coord: []})
# gabazine_expected.update({coord: []})
# gabazine_observed_average.update({coord: coords_I[coord].average_feature[feature]})
# gabazine_expected_average.update({coord: coords_I[coord].expected_feature[feature]})
# for trial in coords_I[coord].trials:
# if feature in trial.feature:
# gabazine_observed[coord].append(trial.feature[feature])
# gabazine_expected[coord].append(coords_I[coord].expected_feature[feature])
# print ("Read {} into variables".format(filename))
###Output
_____no_output_____ |
ML2/linear_regression/LR.ipynb | ###Markdown
Linear regression attempts to model the relationship between two variables by fitting a linear equation to observed data. One variable is considered to be an explanatory variable, and the other is considered to be a dependent variable.
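For the single-predictor case used below, the fitted line is the standard least-squares solution (stated here for reference): $$\hat{y} = \beta_0 + \beta_1 x, \qquad \beta_1 = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}, \qquad \beta_0 = \bar{y} - \beta_1 \bar{x}$$ `LinearRegression` below estimates the slope $\beta_1$ and intercept $\beta_0$ for us.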
###Code
import numpy as np
import pandas as pd
# to visualization purposes
import matplotlib.pyplot as plt
# to build linear model
from sklearn.linear_model import LinearRegression
# load dataset
data = pd.read_csv('Book.csv')
data
# view data points
plt.scatter(data.videos, data.views, color='red')
plt.xlabel('Number of Videos')
plt.ylabel('Total Views')
# see single columns from dataset as a pandas series
data.videos
# or we can use "data['videos']"
data['views']
# divide dataset into x and y
x = np.array(data.videos.values)
y = np.array(data.views.values)
# build and train ML model
model = LinearRegression()
model.fit(x.reshape((-1,1)), y)
# Note: here we don't split the data into separate training and testing sets.
# assign a value to predict from our model
new_x = np.array([45]).reshape((-1,1))
new_x
# predict from model
pred = model.predict(new_x)
pred
# visualize our linear model
plt.scatter(data.videos, data.views, color='red')
m, c = np.polyfit(x, y, 1)
plt.plot(x, m*x+c)
###Output
_____no_output_____ |
Pyber/homework_05-Matplotlibber_starter.ipynb | ###Markdown
Bubble Plot of Ride Sharing Data
###Code
# Obtain the x and y coordinates for each of the three city types
urban_cities = combined_data[combined_data["type"]=="Urban"]
suburban_cities = combined_data[combined_data["type"]=="Suburban"]
rural_cities = combined_data[combined_data["type"]=="Rural"]
urban_ride_counts = urban_cities.groupby(["city"]).count()["ride_id"]
print(urban_ride_counts.head())
urban_avg_fare = urban_cities.groupby(["city"]).mean()["fare"]
print(urban_avg_fare.head())
urban_driver_count = urban_cities.groupby(["city"]).mean()["driver_count"]
print(urban_driver_count.head())
suburban_ride_counts = suburban_cities.groupby(["city"]).count()["ride_id"]
print(suburban_ride_counts.head())
suburban_avg_fare = suburban_cities.groupby(["city"]).mean()["fare"]
print(suburban_avg_fare.head())
suburban_driver_count = suburban_cities.groupby(["city"]).mean()["driver_count"]
print(suburban_driver_count.head())
rural_ride_counts = rural_cities.groupby(["city"]).count()["ride_id"]
print(rural_ride_counts.head())
rural_avg_fare = rural_cities.groupby(["city"]).mean()["fare"]
print(rural_avg_fare.head())
rural_driver_count = rural_cities.groupby(["city"]).mean()["driver_count"]
print(rural_driver_count.head())
# Build the scatter plots for each city types
#Urban
plt.scatter(urban_ride_counts , urban_avg_fare, color = "gold", edgecolors="black", s = urban_driver_count*20, label = "Urban", alpha = 0.5, linewidth = 1.5)
#Suburban
plt.scatter(suburban_ride_counts, suburban_avg_fare, color = "lightskyblue", edgecolors ="black",s = suburban_driver_count*20, label = "Suburban", alpha = 0.5, linewidth = 1.5)
#Rural
plt.scatter(rural_ride_counts, rural_avg_fare, color = "lightcoral", edgecolors = "black",s = rural_driver_count*20, label = "Rural", alpha = 0.5, linewidth = 1.5)
# Incorporate the other graph properties
plt.title("Pyber Ride Sharing Data 2016")
plt.xlabel("Total Number of Rides(Per City)")
plt.ylabel("Average Fare ($)")
plt.text(40, 50,"Note: Circle size correlates with driver count per city.")
#Add the legend.
plt.legend(loc= "upper right")
# Save Figure
plt.show()
###Output
_____no_output_____
###Markdown
Total Fares by City Type
###Code
# Calculate Type Percents
fare_city = combined_data.groupby(["type"])["fare"].sum ()
print(fare_city.head())
labels = ["rural","suburban","urban"]
explode=(0,0,0.1)
colors= ["gold","lightskyblue","lightcoral"]
# Build Pie Chart
plt.pie(fare_city,explode=explode, labels=labels, autopct = "%1.2f%%", colors=colors, startangle=150)
plt.title("% of Total Fares by City Type")
# Save Figure
plt.savefig("PyberRide_TotalFares.Png")
###Output
type
Rural 4327.93
Suburban 19356.33
Urban 39854.38
Name: fare, dtype: float64
###Markdown
Total Rides by City Type
###Code
# Calculate Ride Percents
rides_type = combined_data.groupby(["type"]).count ()["ride_id"]
print(rides_type.head())
labels = ["rural","suburban","urban"]
explode=(0,0,0.1)
colors= ["gold","lightskyblue","lightcoral"]
# Build Pie Chart
plt.pie(rides_type,explode=explode, labels=labels, autopct = "%1.2f%%", colors=colors, startangle=150)
plt.title("% of Total Rides by City Type")
# Save Figure
plt.savefig("PyberRide_TotalRides.Png")
###Output
type
Rural 125
Suburban 625
Urban 1625
Name: ride_id, dtype: int64
###Markdown
Total Drivers by City Type
###Code
# Calculate Driver Percents
drivers_type = combined_data.groupby(["type"])["driver_count"].sum()
print(drivers_type.head())
# Build Pie Charts
labels = ["rural","suburban","urban"]
explode=(0,0,0.1)
colors= ["gold","lightskyblue","lightcoral"]
# Build Pie Chart
plt.pie(drivers_type ,explode=explode, labels=labels, autopct = "%1.2f%%", colors=colors, startangle=150)
plt.title("% of Total Drivers by City Type")
# Save Figure
plt.savefig("PyberRide_TotalDrivers.Png")
###Output
type
Rural 537
Suburban 8570
Urban 59602
Name: driver_count, dtype: int64
|
MCarlo.ipynb | ###Markdown
Monte Carlo Portfolio Simulation The inspiration for doing this project sprouted from a savings habit that my girlfriend and I adopted, mentioned in the book Atomic Habits by James Clear. Every day, instead of choosing to buy lunch, we put the amount of money that we WOULD have spent on lunch into a brokerage account and invest it in a broad market index mutual fund. I wanted to show my girlfriend all the possible outcomes. (We cook lunches for the week on Sundays and bring them from home, instead of buying food from the cafeteria, fast food, etc.) Let's see what daily investing MIGHT do for your portfolio... Brief Background/Overview: Monte Carlo simulation (named after the Monte Carlo Casino in Monaco) is a way of generating random outcomes over and over. Think about rolling a die or flipping a coin millions of times to try to determine the probability of an outcome. Using Python to code a Monte Carlo simulation will be much more efficient than actually flipping a coin and recording each result. We will take this idea and apply it to the markets. In the Monte Carlo simulator built through this project, we simulate trading days (instead of dice rolls) and record the portfolio value after each day (assuming a random return each day). We will first examine the returns of the S&P 500 to determine the distribution of daily return percentages. Once we have this distribution, we will build a Monte Carlo simulator that applies a random daily return (drawn from a distribution that closely matches the distribution of S&P 500 returns) and adds a new investment each day.
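As a tiny illustration of the idea (a hypothetical coin-flip estimate, separate from the portfolio analysis below):

```python
import numpy as np

rng = np.random.default_rng(0)
flips = rng.integers(0, 2, size=1_000_000)   # 0 = tails, 1 = heads
print(flips.mean())                          # converges toward the true probability of heads, 0.5
```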
###Code
## Import all of the necessary packages to read, clean, analyze and visualize our data
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import random
import time
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
Now that we have all the packages we need imported, let's take a closer look at the S&P 500 dataset and create a DataFrame with just the adjusted close data
###Code
## Read raw data from the downloads folder
sp_500 = pd.read_csv(r'C:\Users\dmccloud1\Downloads\^GSPC (1).csv')
## Print first 5 lines
print(sp_500.head())
## Create data frame with just date and adjusted closing value
adj_close = sp_500[['Date','Adj Close']]
#print(adj_close.head()) ## for debugging - prints first 5 rows of new DataFrame
##
adj_close_date = adj_close.set_index(['Date']) # set 'Date' as the index for DataFrame
print(adj_close_date.head()) ## for debugging - prints first 5 rows of new DataFrame
###Output
Date Open High Low Close Adj Close Volume
0 1950-01-03 16.66 16.66 16.66 16.66 16.66 1260000
1 1950-01-04 16.85 16.85 16.85 16.85 16.85 1890000
2 1950-01-05 16.93 16.93 16.93 16.93 16.93 2550000
3 1950-01-06 16.98 16.98 16.98 16.98 16.98 2010000
4 1950-01-09 17.08 17.08 17.08 17.08 17.08 2520000
Adj Close
Date
1950-01-03 16.66
1950-01-04 16.85
1950-01-05 16.93
1950-01-06 16.98
1950-01-09 17.08
###Markdown
Now that we have our DataFrames containing all S&P500 data and just closing value data, respectively, we can dive further into visually exploring the data to help us model our Monte Carlo simulator. Let's plot a graph to track closing value over time and take a look at the distribution of daily returns
###Code
## Plot a line graph to show what the S&P closing value has done over the years
# Create the plot
adj_close.plot(x='Date')
# Add title and labels
plt.ylabel('Adj Closing Value')
plt.xlabel('Date')
plt.title('S&P500 Adjusted Closing Values')
# Show the plot
plt.show()
###Output
_____no_output_____
###Markdown
From this we can see there is a definite trend: the day-to-day movements might seem somewhat random, but clearly with a tendency towards positive returns (hence the direction of this graph). Let's take a look at the distribution of daily returns, as this will serve as a model for the distribution of returns we will use for our Monte Carlo simulator
###Code
## Plot the daily returns in a histogram to visualize their distribution
# calculate day to day percent change and add this column to the data
adj_close_date['pct_chg'] = adj_close_date.pct_change()
# Let's take a look at the max and min of these to see the range of daily returns
print('The largest 1 day positive return is ',adj_close_date.pct_chg.max())
print('The largest 1 day negative return is ',adj_close_date.pct_chg.min())
# Now that we see the range, let's create a list for our bins in the histogram
bins = [x + 15.95 for x in np.arange(-37,1)]
# Remember that these are percentages so let's modify our list
bins = [x/1000 for x in bins]
# plot the distribution
adj_close_date.pct_chg.hist(bins = bins, color= 'gray')
plt.axvline(x=adj_close_date['pct_chg'].mean())
###Output
The largest 1 day positive return is 0.11580036960722695
The largest 1 day negative return is -0.20466930860972166
###Markdown
Here we see that the distribution is almost perfectly symmetrical, but has a slight left skew (longer left tail and more values occurring to the right of the mean of the distribution). Now we will need to model our sample's return distribution after this. To do that, we will get the counts for each return; those will be the weights for the likelihood that the specific return is drawn at random. We will create a list of these choices and randomly pick from them, and this will be our return for each day. (Note: the random choice could be generated at each iteration of the Monte Carlo simulation, but that takes a lot of resources and time, which is why the random list is generated first and then iterated over.)
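As a small illustration of that timing note, here is a sketch with made-up returns and counts (independent of the real data) that contrasts one bulk draw against one call per draw:

```python
import random
import time

toy_population = [-0.01, 0.0, 0.01]   # illustrative daily returns (not the real ones)
toy_weights = [25, 50, 25]            # illustrative occurrence counts

start = time.time()
bulk = random.choices(toy_population, toy_weights, k=100000)   # one call up front
print('single bulk call:', time.time() - start, 'seconds')

start = time.time()
slow = [random.choices(toy_population, toy_weights, k=1)[0] for _ in range(100000)]
print('one call per draw:', time.time() - start, 'seconds')
```

Both draw from the same weighted distribution; the per-draw version repeats the setup work 100,000 times, which is the overhead the note refers to.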
###Code
## Create the 'population' (the returns) and the 'weight' (likelihood of being drawn)
## first step is to see how many times each return occurs
## This creates a dataframe in which the index is the return and the values are the counts of occurrence
return_counts = pd.DataFrame(adj_close_date.pct_chg.value_counts()).reset_index()
return_counts.head()
## We select the population, or the list of possible returns from the index
population = return_counts.iloc[:,0]
## We select the weights as the counts of the return's occurrence
weights = return_counts.iloc[:,1]
###Output
_____no_output_____
###Markdown
Now that we have done most of the heavy lifting for preparation, we can define a function to call to run our simulation as many times as our heart desires. Next we define a Monte Carlo function that will run the simulation as many times as we'd like and give us the returns, so we can see what some possible outcomes for our portfolio would be
###Code
## Define a function called monte_carlo
## account for a beginning balance, a daily contribution, the number of days and number of trials to be run as inputs to the function
def monte_carlo(beg_val= 0, daily_cont= 50, addl_cont= 0, num_days= 126, num_trials= 1000, dfprint=False):
start_time = time.time()
# initiates an empty dataframe used to store each trial (each column will be a trial, rows will be days)
trials_df = pd.DataFrame()
# loop through the simulatuion as many times as necessary
for i in range(num_trials):
# sets initial value of portfolio
value = beg_val
# counter for days initilaized at 0
day_ct = 0
# Days values X
days = []
# account Values Y
acct_values = []
# sets up list of random return values
return_list = random.choices(population, weights, k=num_days)
# this loop represents the portfolios change each day
for x in np.arange(num_days):
value += daily_cont # increases value by daily contribution amount
value = value * (1 + return_list[x]) # represents a random day change
acct_values.append(value) # adds the value to the list that contains each day's portfolio value
days.append(day_ct)
day_ct += 1
trials_df[i] = acct_values
## plots all simluations
trials_df.plot(legend= None)
plt.show()
## prints some summary statistics and returns a dataframe with all trials
print('Mean Ending Balance:\t',trials_df.iloc[num_days-1].mean())
print('Max Ending Balance:\t',trials_df.iloc[num_days-1].max())
print('Min Ending Balance:\t',trials_df.iloc[num_days-1].min())
print('This took ', time.time()-start_time, 'seconds to run.')
# a paramater to return dataframe or not
if dfprint == True:
return trials_df
###Output
_____no_output_____
###Markdown
Now that your function has been defined, call it and pass your own parameters to see what your portfolio might look like
###Code
## running the simulation for 5 year periods, 10,000 times
monte_carlo(beg_val=43000,daily_cont= 50, num_days= 1260, num_trials=10000)
###Output
_____no_output_____
###Markdown
Wow! That's a lot of work that gets done relatively quickly. Now let's just check the distribution of returns for our data to make sure it is in line with the S&P 500 returns. Warning! Leaving the parameters in the code below as they are will make it run for about 25 minutes, but that is iterating 12,000 random days 10,000 times, so it is still pretty darn quick
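If the run time becomes a problem, one option is to vectorize the trials with NumPy. The sketch below is not a drop-in replacement for `monte_carlo` (it returns a plain array instead of a DataFrame with plots and summaries) and it assumes the `population`, `weights`, and `np` names from the earlier cells:

```python
def monte_carlo_fast(beg_val=0, daily_cont=50, num_days=126, num_trials=1000):
    probs = np.asarray(weights, dtype=float)
    probs = probs / probs.sum()                      # np.random.choice needs probabilities, not counts
    # one (num_trials x num_days) matrix of random daily growth factors
    growth = 1 + np.random.choice(np.asarray(population), size=(num_trials, num_days), p=probs)
    values = np.empty((num_trials, num_days))
    value = np.full(num_trials, beg_val, dtype=float)
    for day in range(num_days):
        value = (value + daily_cont) * growth[:, day]  # every trial advances one day at once
        values[:, day] = value
    return values
```

This still loops over days but handles all trials at once, so the Python-level work no longer grows with `num_trials`.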
###Code
## Check the distribution
# call the function and store datafram in a variable
test_df= monte_carlo(beg_val=43000,daily_cont= 50, num_days= 12000, num_trials=10000, dfprint=True)
# choose a random column (simulation run) and keep it as a one-column DataFrame
perc_chg_test = test_df.iloc[:, [7]].copy()
perc_chg_test.columns = ['value']
# and get the pct changes for each day
perc_chg_test['pct_chg'] = perc_chg_test['value'].pct_change()
# plot the distribution of returns
perc_chg_test.pct_chg.hist(bins = bins, color= 'gray')
plt.axvline(x=adj_close_date['pct_chg'].mean())
###Output
_____no_output_____
###Markdown
Looks pretty close to me! Now go out there and randomly make (or lose) some money!
###Code
# TODO Automate file download instead of having user save to downloads folder
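# One possible approach (an illustrative sketch, not part of the original notebook):
# the third-party `yfinance` package is assumed to be installed and to expose a
# `download` function; if so, something like the commented lines below could
# replace the manual CSV download step.
# import yfinance as yf
# sp_500 = yf.download('^GSPC').reset_index()   # columns: Date, Open, High, Low, Close, Adj Close, Volume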
###Output
_____no_output_____ |
wandb/run-20210522_113625-2yxfo8rx/tmp/code/00-main.ipynb | ###Markdown
Modelling
###Code
import torch.nn as nn
import torch.nn.functional as F
class BaseLine(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3,32,5)
self.conv2 = nn.Conv2d(32,64,5)
self.conv2batchnorm = nn.BatchNorm2d(64)
self.conv3 = nn.Conv2d(64,128,5)
self.fc1 = nn.Linear(128*10*10,256)
self.fc2 = nn.Linear(256,128)
self.fc3 = nn.Linear(128,50)
self.relu = nn.ReLU()
def forward(self,X):
preds = F.max_pool2d(self.relu(self.conv1(X)),(2,2))
preds = F.max_pool2d(self.relu(self.conv2batchnorm(self.conv2(preds))),(2,2))
preds = F.max_pool2d(self.relu(self.conv3(preds)),(2,2))
preds = preds.view(-1,128*10*10)
preds = self.relu(self.fc1(preds))
preds = self.relu(self.fc2(preds))
preds = self.relu(self.fc3(preds))
return preds
device = torch.device('cuda')
from torchvision import models
# model = BaseLine().to(device)
# model = model.to(device)
model = models.resnet18(pretrained=True).to(device)
in_f = model.fc.in_features
model.fc = nn.Linear(in_f,50)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
PROJECT_NAME = 'Car-Brands-Images-Clf'
import wandb
EPOCHS = 100
BATCH_SIZE = 32
from tqdm import tqdm
# wandb.init(project=PROJECT_NAME,name='transfer-learning')
# for _ in tqdm(range(EPOCHS)):
# for i in range(0,len(X_train),BATCH_SIZE):
# X_batch = X_train[i:i+BATCH_SIZE].view(-1,3,112,112).to(device)
# y_batch = y_train[i:i+BATCH_SIZE].to(device)
# model.to(device)
# preds = model(X_batch)
# preds = preds.to(device)
# loss = criterion(preds,y_batch)
# optimizer.zero_grad()
# loss.backward()
# optimizer.step()
# wandb.log({'loss':loss.item()})
# TL vs Custom Model best = TL
def get_loss(criterion,y,model,X):
model.to('cuda')
preds = model(X.view(-1,3,112,112).to('cuda').float())
preds.to('cuda')
loss = criterion(preds,torch.tensor(y,dtype=torch.long).to('cuda'))
loss.backward()
return loss.item()
def test(net,X,y):
device = 'cuda'
net.to(device)
correct = 0
total = 0
net.eval()
with torch.no_grad():
for i in range(len(X)):
real_class = torch.argmax(y[i]).to(device)
net_out = net(X[i].view(-1,3,112,112).to(device).float())
net_out = net_out[0]
predictied_class = torch.argmax(net_out)
if predictied_class == real_class:
correct += 1
total += 1
net.train()
net.to('cuda')
return round(correct/total,3)
EPOCHS = 12
BATCH_SIZE = 32
model = models.shufflenet_v2_x1_0(pretrained=False, num_classes=50).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
wandb.init(project=PROJECT_NAME,name=f'models.shufflenet_v2_x1_0')
for _ in tqdm(range(EPOCHS),leave=False):
for i in tqdm(range(0,len(X_train),BATCH_SIZE),leave=False):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,3,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
model.to(device)
preds = model(X_batch)
preds = preds.to(device)
loss = criterion(preds,y_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,y_test,model,X_test),'accuracy':test(model,X_train,y_train),'val_accuracy':test(model,X_test,y_test)})
model = models.mobilenet_v3_large(pretrained=False, num_classes=50).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
wandb.init(project=PROJECT_NAME,name=f'models.mobilenet_v3_large')
for _ in tqdm(range(EPOCHS),leave=False):
for i in tqdm(range(0,len(X_train),BATCH_SIZE),leave=False):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,3,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
model.to(device)
preds = model(X_batch)
preds = preds.to(device)
loss = criterion(preds,y_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,y_test,model,X_test),'accuracy':test(model,X_train,y_train),'val_accuracy':test(model,X_test,y_test)})
model = models.mobilenet_v3_small(pretrained=False, num_classes=50).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
wandb.init(project=PROJECT_NAME,name=f'models.mobilenet_v3_small')
for _ in tqdm(range(EPOCHS),leave=False):
for i in tqdm(range(0,len(X_train),BATCH_SIZE),leave=False):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,3,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
model.to(device)
preds = model(X_batch)
preds = preds.to(device)
loss = criterion(preds,y_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,y_test,model,X_test),'accuracy':test(model,X_train,y_train),'val_accuracy':test(model,X_test,y_test)})
model = models.resnext50_32x4d(pretrained=False, num_classes=50).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
wandb.init(project=PROJECT_NAME,name=f'models.resnext50_32x4d')
for _ in tqdm(range(EPOCHS),leave=False):
for i in tqdm(range(0,len(X_train),BATCH_SIZE),leave=False):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,3,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
model.to(device)
preds = model(X_batch)
preds = preds.to(device)
loss = criterion(preds,y_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,y_test,model,X_test),'accuracy':test(model,X_train,y_train),'val_accuracy':test(model,X_test,y_test)})
model = models.wide_resnet50_2(pretrained=False, num_classes=50).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
wandb.init(project=PROJECT_NAME,name=f'models.wide_resnet50_2')
for _ in tqdm(range(EPOCHS),leave=False):
for i in tqdm(range(0,len(X_train),BATCH_SIZE),leave=False):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,3,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
model.to(device)
preds = model(X_batch)
preds = preds.to(device)
loss = criterion(preds,y_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,y_test,model,X_test),'accuracy':test(model,X_train,y_train),'val_accuracy':test(model,X_test,y_test)})
model = models.mnasnet1_0(pretrained=False, num_classes=50).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
wandb.init(project=PROJECT_NAME,name=f'models.mnasnet1_0')
for _ in tqdm(range(EPOCHS),leave=False):
for i in tqdm(range(0,len(X_train),BATCH_SIZE),leave=False):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,3,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
model.to(device)
preds = model(X_batch)
preds = preds.to(device)
loss = criterion(preds,y_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,y_test,model,X_test),'accuracy':test(model,X_train,y_train),'val_accuracy':test(model,X_test,y_test)})
model = models.resnet18(pretrained=False, num_classes=50).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
wandb.init(project=PROJECT_NAME,name=f'models.resnet18')
for _ in tqdm(range(EPOCHS),leave=False):
for i in tqdm(range(0,len(X_train),BATCH_SIZE),leave=False):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,3,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
model.to(device)
preds = model(X_batch)
preds = preds.to(device)
loss = criterion(preds,y_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item(),'val_loss':get_loss(criterion,y_test,model,X_test),'accuracy':test(model,X_train,y_train),'val_accuracy':test(model,X_test,y_test)})
###Output
_____no_output_____ |
week01_intro/w1_deep_crossentropy_method.ipynb | ###Markdown
Deep Crossentropy methodIn this section we'll extend your CEM implementation with neural networks! You will train a multi-layer neural network to solve simple continuous state space games. __Please make sure you're done with tabular crossentropy method from the previous notebook.__
###Code
import sys, os
if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/spring20/setup_colab.sh -O- | bash
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import gym
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# if you see "<classname> has no attribute .env", remove .env or update gym
env = gym.make("CartPole-v0").env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape[0]
plt.imshow(env.render("rgb_array"))
print("state vector dim =", state_dim)
print("n_actions =", n_actions)
###Output
state vector dim = 4
n_actions = 2
###Markdown
Neural Network PolicyFor this assignment we'll utilize the simplified neural network implementation from __[Scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html)__. Here's what you'll need:* `agent.partial_fit(states, actions)` - make a single training pass over the data. Maximize the probability of :actions: from :states:* `agent.predict_proba(states)` - predict probabilities of all actions, a matrix of shape __[len(states), n_actions]__
###Code
from sklearn.neural_network import MLPClassifier
agent = MLPClassifier(
hidden_layer_sizes=(20, 20),
activation='tanh',
)
# initialize agent to the dimension of state space and number of actions
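# note: the third argument is partial_fit's `classes` parameter - passing every possible
# action here lets the classifier know about all actions before real training data arrives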
agent.partial_fit([env.reset()] * n_actions, range(n_actions), range(n_actions))
def generate_session(env, agent, t_max=1000):
"""
Play a single game using agent neural network.
Terminate when game finishes or after :t_max: steps
"""
states, actions = [], []
total_reward = 0
s = env.reset()
for t in range(t_max):
# use agent to predict a vector of action probabilities for state :s:
probs = agent.predict_proba([s])[0]
assert probs.shape == (env.action_space.n,), "make sure probabilities are a vector (hint: np.reshape)"
# use the probabilities you predicted to pick an action
# sample proportionally to the probabilities, don't just take the most likely action
a = np.random.choice(np.arange(n_actions), p=probs)
# ^-- hint: try np.random.choice
new_s, r, done, info = env.step(a)
# record sessions like you did before
states.append(s)
actions.append(a)
total_reward += r
s = new_s
if done:
break
return states, actions, total_reward
dummy_states, dummy_actions, dummy_reward = generate_session(env, agent, t_max=5)
print("states:", np.stack(dummy_states))
print("actions:", dummy_actions)
print("reward:", dummy_reward)
###Output
states: [[ 0.02627407 0.00179012 0.02122254 0.03980579]
[ 0.02630988 -0.19362963 0.02201865 0.33910836]
[ 0.02243728 0.00117221 0.02880082 0.0534494 ]
[ 0.02246073 -0.19435062 0.02986981 0.35507828]
[ 0.01857371 -0.38988426 0.03697137 0.65702832]]
actions: [0, 1, 0, 0, 0]
reward: 5.0
###Markdown
CEM stepsDeep CEM uses exactly the same strategy as the regular CEM, so you can copy your function code from previous notebook.The only difference is that now each observation is not a number but a `float32` vector.
###Code
def select_elites(states_batch, actions_batch, rewards_batch, percentile=50):
"""
Select states and actions from games that have rewards >= percentile
:param states_batch: list of lists of states, states_batch[session_i][t]
:param actions_batch: list of lists of actions, actions_batch[session_i][t]
:param rewards_batch: list of rewards, rewards_batch[session_i]
:returns: elite_states,elite_actions, both 1D lists of states and respective actions from elite sessions
Please return elite states and actions in their original order
[i.e. sorted by session number and timestep within session]
If you are confused, see examples below. Please don't assume that states are integers
(they will become different later).
"""
reward_threshold = np.percentile(rewards_batch, percentile)
inds = [i for i,x in enumerate(rewards_batch) if rewards_batch[i] >= reward_threshold]
elite_states = [x for i in inds for x in states_batch[i]]
elite_actions = [x for i in inds for x in actions_batch[i]]
return elite_states, elite_actions
###Output
_____no_output_____
###Markdown
Training loopGenerate sessions, select N best and fit to those.
###Code
from IPython.display import clear_output
def show_progress(rewards_batch, log, percentile, reward_range=[-990, +10]):
"""
A convenience function that displays training progress.
No cool math here, just charts.
"""
mean_reward = np.mean(rewards_batch)
threshold = np.percentile(rewards_batch, percentile)
log.append([mean_reward, threshold])
clear_output(True)
print("mean reward = %.3f, threshold=%.3f" % (mean_reward, threshold))
plt.figure(figsize=[8, 4])
plt.subplot(1, 2, 1)
plt.plot(list(zip(*log))[0], label='Mean rewards')
plt.plot(list(zip(*log))[1], label='Reward thresholds')
plt.legend()
plt.grid()
plt.subplot(1, 2, 2)
plt.hist(rewards_batch, range=reward_range)
plt.vlines([np.percentile(rewards_batch, percentile)],
[0], [100], label="percentile", color='red')
plt.legend()
plt.grid()
plt.show()
# map unzip
l = [[1,'a'],[2,'b'],[3,'c']]
x = map(np.array, zip(*l))
for i in x:
print(type(i), i)
n_sessions = 100
percentile = 70
log = []
for i in range(60):
# generate new sessions
sessions = [generate_session(env, agent) for i in np.arange(n_sessions) ]
states_batch, actions_batch, rewards_batch = map(np.array, zip(*sessions))
elite_states, elite_actions = select_elites(states_batch, actions_batch,
rewards_batch, percentile)
#<YOUR CODE: partial_fit agent to predict elite_actions(y) from elite_states(X)>
agent.partial_fit(elite_states, elite_actions, range(n_actions))
show_progress(rewards_batch, log, percentile, reward_range=[0, np.max(rewards_batch)])
if np.mean(rewards_batch) > 190:
print("You Win! You may stop training now via KeyboardInterrupt.")
###Output
mean reward = 986.040, threshold=1000.000
###Markdown
Results
###Code
# Record sessions
import gym.wrappers
with gym.wrappers.Monitor(gym.make("CartPole-v0"), directory="videos", force=True) as env_monitor:
sessions = [generate_session(env_monitor, agent) for _ in range(100)]
# Show video. This may not work in some setups. If it doesn't
# work for you, you can download the videos and view them locally.
from pathlib import Path
from IPython.display import HTML
video_names = sorted([s for s in Path('videos').iterdir() if s.suffix == '.mp4'])
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format(video_names[-1])) # You can also try other indices
###Output
_____no_output_____
###Markdown
Homework part I Tabular crossentropy methodYou may have noticed that the taxi problem quickly converges from -100 to a near-optimal score and then descends back into -50/-100. This is in part because the environment has some innate randomness. Namely, the starting points of passenger/driver change from episode to episode. Tasks- __1.1__ (1 pts) Find out how the algorithm performance changes if you use a different `percentile` and/or `n_sessions`.- __1.2__ (2 pts) Tune the algorithm to end up with a positive average score.It's okay to modify the existing code. `````` Homework part II Deep crossentropy methodBy this moment you should have got enough score on [CartPole-v0](https://gym.openai.com/envs/CartPole-v0) to consider it solved (see the link). It's time to try something harder.* if you have any trouble with CartPole-v0 and feel stuck, feel free to ask us or your peers for help. Tasks* __2.1__ (3 pts) Pick one of the environments: `MountainCar-v0` or `LunarLander-v2`. * For MountainCar, get average reward of __at least -150__ * For LunarLander, get average reward of __at least +50__See the tips section below, it's kinda important.__Note:__ If your agent is below the target score, you'll still get most of the points depending on the result, so don't be afraid to submit it. * __2.2__ (up to 6 pt) Devise a way to speed up training against the default version * Obvious improvement: use [`joblib`](https://joblib.readthedocs.io/en/latest/). However, note that you will probably need to spawn a new environment in each of the workers instead of passing it via pickling. * Try re-using samples from 3-5 last iterations when computing threshold and training. * Experiment with the number of training iterations and learning rate of the neural network (see params). __Please list what you did in Anytask submission form. You must measure your improvement experimentally. Your score depends on this improvement.__* __If the algorithm converges 2x faster, you obtain 3 pts.__* __If the algorithm converges 4x faster, you obtain 6 pts.__ Tips* Gym page: [MountainCar](https://gym.openai.com/envs/MountainCar-v0), [LunarLander](https://gym.openai.com/envs/LunarLander-v2)* Sessions for MountainCar may last for 10k+ ticks. Make sure ```t_max``` param is at least 10k. * Also it may be a good idea to cut rewards via ">" and not ">=". If 90% of your sessions get reward of -10k and 10% are better, then if you use the 20th percentile as the threshold, R >= threshold __fails to cut off bad sessions__ while R > threshold works alright.* _issue with gym_: Some versions of gym limit game time to 200 ticks. This will prevent CEM training in most cases. Make sure your agent is able to play for the specified __t_max__, and if it isn't, try `env = gym.make("MountainCar-v0").env` or otherwise get rid of the TimeLimit wrapper.* If you use old _swig_ lib for LunarLander-v2, you may get an error. See this [issue](https://github.com/openai/gym/issues/100) for a solution.* If it won't train, it's a good idea to plot reward distribution and record sessions: they may give you some clue. If they don't, call course staff :)* 20-neuron network is probably not enough, feel free to experiment.You may find the following snippet useful:
###Code
def visualize_mountain_car(env, agent):
# Compute policy for all possible x and v (with discretization)
xs = np.linspace(env.min_position, env.max_position, 100)
vs = np.linspace(-env.max_speed, env.max_speed, 100)
grid = np.dstack(np.meshgrid(xs, vs[::-1])).transpose(1, 0, 2)
grid_flat = grid.reshape(len(xs) * len(vs), 2)
probs = agent.predict_proba(grid_flat).reshape(len(xs), len(vs), 3).transpose(1, 0, 2)
# # The above code is equivalent to the following:
# probs = np.empty((len(vs), len(xs), 3))
# for i, v in enumerate(vs[::-1]):
# for j, x in enumerate(xs):
# probs[i, j, :] = agent.predict_proba([[x, v]])[0]
# Draw policy
f, ax = plt.subplots(figsize=(7, 7))
ax.imshow(probs, extent=(env.min_position, env.max_position, -env.max_speed, env.max_speed), aspect='auto')
ax.set_title('Learned policy: red=left, green=nothing, blue=right')
ax.set_xlabel('position (x)')
ax.set_ylabel('velocity (v)')
# Sample a trajectory and draw it
states, actions, _ = generate_session(env, agent)
states = np.array(states)
ax.plot(states[:, 0], states[:, 1], color='white')
# Draw every 3rd action from the trajectory
for (x, v), a in zip(states[::3], actions[::3]):
if a == 0:
plt.arrow(x, v, -0.1, 0, color='white', head_length=0.02)
elif a == 2:
plt.arrow(x, v, 0.1, 0, color='white', head_length=0.02)
with gym.make('MountainCar-v0').env as env:
visualize_mountain_car(env, agent_mountain_car)
###Output
_____no_output_____
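For task 2.2 above, one possible shape for the suggested `joblib` speed-up is sketched below (illustrative only: the worker count and `t_max` are arbitrary, and each worker builds its own environment rather than receiving one by pickling). It relies on `generate_session`, `agent`, `n_sessions`, and the `gym` import from the earlier cells.

```python
from joblib import Parallel, delayed

def generate_session_worker(agent, t_max=10000):
    worker_env = gym.make("MountainCar-v0").env   # fresh env per call, nothing to pickle
    return generate_session(worker_env, agent, t_max=t_max)

sessions = Parallel(n_jobs=4)(
    delayed(generate_session_worker)(agent) for _ in range(n_sessions)
)
```

The result has the same structure as the list comprehension in the training loop, so the rest of the loop stays unchanged.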
###Markdown
Bonus tasks* __2.3 bonus__ (2 pts) Try to find a network architecture and training params that solve __both__ environments above (_Points depend on implementation. If you attempted this task, please mention it in Anytask submission._)* __2.4 bonus__ (4 pts) Solve continuous action space task with `MLPRegressor` or similar. * Since your agent only predicts the "expected" action, you will have to add noise to ensure exploration. * Choose one of [MountainCarContinuous-v0](https://gym.openai.com/envs/MountainCarContinuous-v0) (90+ pts to solve), [LunarLanderContinuous-v2](https://gym.openai.com/envs/LunarLanderContinuous-v2) (200+ pts to solve) * 4 points for solving. Slightly less for getting some results below solution threshold. Note that discrete and continuous environments may have slightly different rules aside from action spaces.If you're still feeling unchallenged, consider the project (see other notebook in this folder).
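For bonus task 2.4, a rough sketch of how the session generator could change for a continuous action space (assumptions: `MLPRegressor` as the agent, Gaussian exploration noise with an arbitrary scale, a Box action space such as MountainCarContinuous-v0, and the `gym`/`numpy` imports from above):

```python
from sklearn.neural_network import MLPRegressor

cont_env = gym.make("MountainCarContinuous-v0").env
action_dim = cont_env.action_space.shape[0]

cont_agent = MLPRegressor(hidden_layer_sizes=(40, 40), activation='tanh')
cont_agent.partial_fit([cont_env.reset()], [np.zeros(action_dim)])  # dummy fit to initialize

def generate_session_continuous(env, agent, noise_scale=0.3, t_max=1000):
    states, actions, total_reward = [], [], 0
    s = env.reset()
    for _ in range(t_max):
        # predicted "expected" action plus noise for exploration
        a = agent.predict([s])[0] + np.random.normal(0, noise_scale, size=action_dim)
        a = np.clip(a, env.action_space.low, env.action_space.high)
        new_s, r, done, _ = env.step(a)
        states.append(s)
        actions.append(a)
        total_reward += r
        s = new_s
        if done:
            break
    return states, actions, total_reward
```

Elite selection stays the same; the regressor is then fit on elite states and their (continuous) elite actions.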
###Code
###Output
_____no_output_____ |
boston_housing_price.ipynb | ###Markdown
Machine Learning Engineer Nanodegree Model Evaluation & Validation Project: Predicting Boston Housing PricesWelcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting StartedIn this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a *good fit* could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.The dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Housing). The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:- 16 data points have an `'MEDV'` value of 50.0. These data points likely contain **missing or censored values** and have been removed.- 1 data point has an `'RM'` value of 8.78. This data point can be considered an **outlier** and has been removed.- The features `'RM'`, `'LSTAT'`, `'PTRATIO'`, and `'MEDV'` are essential. The remaining **non-relevant features** have been excluded.- The feature `'MEDV'` has been **multiplicatively scaled** to account for 35 years of market inflation.Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
###Code
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from sklearn.cross_validation import ShuffleSplit
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
###Output
Boston housing dataset has 489 data points with 4 variables each.
###Markdown
Data ExplorationIn this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into **features** and the **target variable**. The **features**, `'RM'`, `'LSTAT'`, and `'PTRATIO'`, give us quantitative information about each data point. The **target variable**, `'MEDV'`, will be the variable we seek to predict. These are stored in `features` and `prices`, respectively. Implementation: Calculate StatisticsFor your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since `numpy` has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.In the code cell below, you will need to implement the following:- Calculate the minimum, maximum, mean, median, and standard deviation of `'MEDV'`, which is stored in `prices`. - Store each calculation in their respective variable.
###Code
# TODO: Minimum price of the data
minimum_price = prices.min()
# TODO: Maximum price of the data
maximum_price = prices.max()
# TODO: Mean price of the data
mean_price = prices.mean()
# TODO: Median price of the data
median_price = prices.median()
# TODO: Standard deviation of prices of the data
std_price = prices.std()
# Show the calculated statistics
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
###Output
Statistics for Boston housing dataset:
Minimum price: $105,000.00
Maximum price: $1,024,800.00
Mean price: $454,342.94
Median price $438,900.00
Standard deviation of prices: $165,340.28
###Markdown
Question 1 - Feature ObservationAs a reminder, we are using three features from the Boston housing dataset: `'RM'`, `'LSTAT'`, and `'PTRATIO'`. For each data point (neighborhood):- `'RM'` is the average number of rooms among homes in the neighborhood.- `'LSTAT'` is the percentage of homeowners in the neighborhood considered "lower class" (working poor).- `'PTRATIO'` is the ratio of students to teachers in primary and secondary schools in the neighborhood.** Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an **increase** in the value of `'MEDV'` or a **decrease** in the value of `'MEDV'`? Justify your answer for each.****Hint:** This problem can be phrased using examples like below. * Would you expect a home that has an `'RM'` value(number of rooms) of 6 be worth more or less than a home that has an `'RM'` value of 7?* Would you expect a neighborhood that has an `'LSTAT'` value(percent of lower class workers) of 15 have home prices be worth more or less than a neighborhood that has an `'LSTAT'` value of 20?* Would you expect a neighborhood that has an `'PTRATIO'` value(ratio of students to teachers) of 10 have home prices be worth more or less than a neighborhood that has an `'PTRATIO'` value of 15? **Answer: ** I expect that the higher the 'RM' value, the higher the 'MEDV' value: the price of a house should increase roughly in proportion to the number of rooms, assuming all rooms are about the same size. So a home with an 'RM' value (number of rooms) of 6 should be worth less than a home with an 'RM' value of 7. I expect that a neighborhood with an 'LSTAT' value (percent of lower class workers) of 15 will have home prices worth more than a neighborhood with an 'LSTAT' value of 20, because people want to live among higher-income neighbors; the higher the 'LSTAT' value, the lower the price of the house. As for 'PTRATIO', when the students-to-teachers ratio is higher I expect home prices to be higher, because people will want a house near the educated area. So I expect a neighborhood with a 'PTRATIO' value (ratio of students to teachers) of 10 to have home prices worth less than a neighborhood with a 'PTRATIO' value of 15 ---- Developing a ModelIn this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions. Implementation: Define a Performance MetricIt is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the [*coefficient of determination*](http://stattrek.com/statistics/dictionary.aspx?definition=coefficient_of_determination), R2, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions. The values for R2 range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the **target variable**. 
A model with an R2 of 0 is no better than a model that always predicts the *mean* of the target variable, whereas a model with an R2 of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the **features**. _A model can be given a negative R2 as well, which indicates that the model is **arbitrarily worse** than one that always predicts the mean of the target variable._For the `performance_metric` function in the code cell below, you will need to implement the following:- Use `r2_score` from `sklearn.metrics` to perform a performance calculation between `y_true` and `y_predict`.- Assign the performance score to the `score` variable.
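For reference, the quantity computed by the `performance_metric` function below is the standard definition of the coefficient of determination (restated here for convenience, not part of the original project text):

$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$$

where $y_i$ are the true values, $\hat{y}_i$ the predictions, and $\bar{y}$ the mean of the true values; `r2_score` computes exactly this ratio.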
###Code
# TODO: Import 'r2_score'
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
""" Calculates and returns the performance score between
true and predicted values based on the metric chosen. """
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
###Output
_____no_output_____
###Markdown
Question 2 - Goodness of FitAssume that a dataset contains five data points and a model made the following predictions for the target variable:| True Value | Prediction || :-------------: | :--------: || 3.0 | 2.5 || -0.5 | 0.0 || 2.0 | 2.1 || 7.0 | 7.8 || 4.2 | 5.3 |Run the code cell below to use the `performance_metric` function and calculate this model's coefficient of determination.
###Code
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
###Output
Model has a coefficient of determination, R^2, of 0.923.
###Markdown
* Would you consider this model to have successfully captured the variation of the target variable? * Why or why not?** Hint: ** The R2 score is the proportion of the variance in the dependent variable that is predictable from the independent variable. In other words:* R2 score of 0 means that the dependent variable cannot be predicted from the independent variable.* R2 score of 1 means the dependent variable can be predicted from the independent variable.* R2 score between 0 and 1 indicates the extent to which the dependent variable is predictable. * R2 score of 0.40 means that 40 percent of the variance in Y is predictable from X. **Answer:** Yes, I would consider this model to have successfully captured the variation of the target variable. From the performance_metric function we obtained an R2 score of 0.923, which is very close to 1; this means the dependent variable can largely be predicted from the independent variables. Since the R2 score lies between 0 and 1, it indicates the extent to which the dependent variable is predictable: about 92.3 percent of the variance in Y is predictable from X. Implementation: Shuffle and Split DataYour next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.For the code cell below, you will need to implement the following:- Use `train_test_split` from `sklearn.cross_validation` to shuffle and split the `features` and `prices` data into training and testing sets. - Split the data into 80% training and 20% testing. - Set the `random_state` for `train_test_split` to a value of your choice. This ensures results are consistent.- Assign the train and testing splits to `X_train`, `X_test`, `y_train`, and `y_test`.
###Code
# TODO: Import 'train_test_split'
from sklearn.cross_validation import train_test_split
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.4, random_state=10)
# Success
print "Training and testing split was successful."
###Output
Training and testing split was successful.
###Markdown
Question 3 - Training and Testing* What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?**Hint:** Think about how overfitting or underfitting is contingent upon how splits on data is done. **Answer: ** To create the best-fitting model for the dataset we need to check and evaluate whether the model generalizes or not, and to do that we need to split the dataset into a training dataset and a testing dataset. We use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets; it splits the dataset into 80% training and 20% testing, so the testing dataset is relatively smaller than the training dataset. If we do not hold out a testing set, then the model may only perform well on the data it was given and will not be usable for other datasets, which is known as overfitting. An underfitting model will not perform well even on the training set of data, and an overfitting model will not perform well on the testing set of data. ---- Analyzing Model PerformanceIn this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing `'max_depth'` parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone. Learning CurvesThe following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R2, the coefficient of determination. Run the code cell below and use these graphs to answer the following question.
###Code
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
###Output
_____no_output_____
###Markdown
Question 4 - Learning the Data* Choose one of the graphs above and state the maximum depth for the model. * What happens to the score of the training curve as more training points are added? What about the testing curve? * Would having more training points benefit the model? **Hint:** Are the learning curves converging to particular scores? Generally speaking, the more data you have, the better. But if your training and testing curves are converging with a score above your benchmark threshold, would this be necessary?Think about the pros and cons of adding more training points based on if the training and testing curves are converging. **Answer: ** I go with the model with max_depth = 3.* As more training points are added, the training curve decreases.* As more training points are added, the testing curve increases, and at the point where the model is as good as it gets the score stops increasing.* Having more training points will not benefit the model. Complexity CurvesThe following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the **learning curves**, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the `performance_metric` function. ** Run the code cell below and use this graph to answer the following two questions Q5 and Q6. **
###Code
vs.ModelComplexity(X_train, y_train)
###Output
_____no_output_____
###Markdown
Question 5 - Bias-Variance Tradeoff* When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? * How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?**Hint:** High bias is a sign of underfitting(model is not complex enough to pick up the nuances in the data) and high variance is a sign of overfitting(model is by-hearting the data and cannot generalize well). Think about which model(depth 1 or 10) aligns with which part of the tradeoff. ** Answer: ** * When the model is trained with a maximum depth of 1, it suffers from high bias, which means underfitting (the model is not complex enough to pick up the nuances in the data). * When the model is trained with a maximum depth of 10, it suffers from high variance, which is a sign of overfitting (the model is by-hearting the data and cannot generalize well). Question 6 - Best-Guess Optimal Model* Which maximum depth do you think results in a model that best generalizes to unseen data? * What intuition led you to this answer?** Hint: ** Look at the graph above Question 5 and see where the validation scores lie for the various depths that have been assigned to the model. Does it get better with increased depth? At what point do we get our best validation score without overcomplicating our model? And remember, Occam's Razor states "Among competing hypotheses, the one with the fewest assumptions should be selected." ** Answer : ** * I think a maximum depth of 3 results in the model that best generalizes to unseen data. * The distance between the last three points of the training score and the validation score remains the same, so the two curves will never meet each other. ----- Evaluating Model PerformanceIn this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from `fit_model`. Question 7 - Grid Search* What is the grid search technique?* How can it be applied to optimize a learning algorithm?** Hint: ** When explaining the Grid Search technique, be sure to touch upon why it is used, what the 'grid' entails and what the end goal of this method is. To solidify your answer, you can also give an example of a parameter in a model that can be optimized using this approach. ** Answer : ** The grid search technique builds a set of models which differ from each other in their parameter values, where the candidate values lie on a grid. We then train each of the models, evaluate it using cross-validation, and select the one that performed best. To give a concrete example, if you're using a support vector machine, you could use different values for gamma and C. So, for example you could have a grid with the following values for (gamma, C): (1, 1), (0.1, 1), (1, 10), (0.1, 10). It's a grid because it's like a product of [1, 0.1] for gamma and [1, 10] for C. Grid search would basically train an SVM for each of these four pairs of (gamma, C) values, then evaluate it using cross-validation, and select the one that did best. Question 8 - Cross-Validation* What is the k-fold cross-validation training technique? 
* What benefit does this technique provide for grid search when optimizing a model?**Hint:** When explaining the k-fold cross validation technique, be sure to touch upon what 'k' is, how the dataset is split into different parts for training and testing and the number of times it is run based on the 'k' value.When thinking about how k-fold cross validation helps grid search, think about the main drawbacks of grid search which are hinged upon **using a particular subset of data for training or testing** and how k-fold cv could help alleviate that. You can refer to the [docs](http://scikit-learn.org/stable/modules/cross_validation.html#cross-validation) for your answer. ** Answer : ** In k-fold cross-validation, the original sample is randomly partitioned into k equal-sized subsamples. Of the k subsamples, a single subsample is retained as the validation data for testing the model, and the remaining k − 1 subsamples are used as training data. The process is then repeated k times, with each of the k subsamples used exactly once as the validation data, and the k results are averaged into a single estimate. For grid search this means that every candidate parameter setting is evaluated on all of the data rather than on one particular train/validation split, which gives a less noisy performance estimate and reduces the risk of choosing parameters that only happen to work well for a single split. Implementation: Fitting a ModelYour final implementation requires that you bring everything together and train a model using the **decision tree algorithm**. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the `'max_depth'` parameter for the decision tree. The `'max_depth'` parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called *supervised learning algorithms*.In addition, you will find your implementation is using `ShuffleSplit()` for an alternative form of cross-validation (see the `'cv_sets'` variable). While it is not the K-Fold cross-validation technique you describe in **Question 8**, this type of cross-validation technique is just as useful! The `ShuffleSplit()` implementation below will create 10 (`'n_splits'`) shuffled sets, and for each shuffle, 20% (`'test_size'`) of the data will be used as the *validation set*. While you're working on your implementation, think about the contrasts and similarities it has to the K-fold cross-validation technique.Please note that ShuffleSplit has different parameters in scikit-learn versions 0.17 and 0.18.For the `fit_model` function in the code cell below, you will need to implement the following:- Use [`DecisionTreeRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html) from `sklearn.tree` to create a decision tree regressor object. - Assign this object to the `'regressor'` variable.- Create a dictionary for `'max_depth'` with the values from 1 to 10, and assign this to the `'params'` variable.- Use [`make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html) from `sklearn.metrics` to create a scoring function object. - Pass the `performance_metric` function as a parameter to the object. - Assign this scoring function to the `'scoring_fnc'` variable.- Use [`GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) from `sklearn.grid_search` to create a grid search object. - Pass the variables `'regressor'`, `'params'`, `'scoring_fnc'`, and `'cv_sets'` as parameters to the object. - Assign the `GridSearchCV` object to the `'grid'` variable.
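To make the (gamma, C) example from the grid search answer concrete, here is a small sketch (illustrative only; it uses the same old `sklearn.grid_search` import as this notebook, and `X`, `y` stand for whatever training data is at hand):

```python
from sklearn.svm import SVC
from sklearn.grid_search import GridSearchCV

param_grid = {'gamma': [1, 0.1], 'C': [1, 10]}   # the 2 x 2 grid described above
grid = GridSearchCV(SVC(), param_grid, cv=5)     # each candidate is scored with 5-fold cross-validation
# grid.fit(X, y)  # would fit 4 candidates x 5 folds = 20 models, then refit the best on all of X, y
```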
###Code
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.metrics import make_scorer
from sklearn.tree import DecisionTreeRegressor
from sklearn.grid_search import GridSearchCV
def fit_model(X, y):
""" Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y]. """
# Create cross-validation sets from the training data
# sklearn version 0.18: ShuffleSplit(n_splits=10, test_size=0.1, train_size=None, random_state=None)
# sklearn versiin 0.17: ShuffleSplit(n, n_iter=10, test_size=0.1, train_size=None, random_state=None)
cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor()
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth':list(range(1,11))}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search cv object --> GridSearchCV()
# Make sure to include the right parameters in the object:
# (estimator, param_grid, scoring, cv) which have values 'regressor', 'params', 'scoring_fnc', and 'cv_sets' respectively.
grid = GridSearchCV(regressor, params, scoring=scoring_fnc, cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
###Output
_____no_output_____
###Markdown
Making PredictionsOnce a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a *decision tree regressor*, the model has learned *what the best questions to ask about the input data are*, and can respond with a prediction for the **target variable**. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on. Question 9 - Optimal Model* What maximum depth does the optimal model have? How does this result compare to your guess in **Question 6**? Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
###Code
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
###Output
Parameter 'max_depth' is 4 for the optimal model.
###Markdown
** Hint: ** The answer comes from the output of the code snippet above.**Answer: ** The maximum depth of the optimal model is 4, which is close to my guess of 3 in Question 6. A maximum depth of 4 is the right fit for this dataset, while a depth of 3 slightly underfits the data. Question 10 - Predicting Selling PricesImagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:| Feature | Client 1 | Client 2 | Client 3 || :---: | :---: | :---: | :---: || Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms || Neighborhood poverty level (as %) | 17% | 32% | 3% || Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |* What price would you recommend each client sell his/her home at? * Do these prices seem reasonable given the values for the respective features? **Hint:** Use the statistics you calculated in the **Data Exploration** section to help justify your response. Of the three clients, client 3 has the biggest house, in the best public school neighborhood with the lowest poverty level; while client 2 has the smallest house, in a neighborhood with a relatively high poverty rate and not the best public schools.Run the code block below to have your optimized model make predictions for each client's home.
###Code
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
###Output
Predicted selling price for Client 1's home: $412,172.73
Predicted selling price for Client 2's home: $232,269.77
Predicted selling price for Client 3's home: $913,500.00
###Markdown
** Answer : **Some Statistics for Boston housing dataset:* Minimum price: \$105,000.00* Maximum price: \$1,024,800.00* Mean price: \$454,342.94* Median price: \$438,900.00* Standard deviation of prices: \$165,340.28* ** Client 1: ** The recommended selling price for client 1's home is \$412,172.73, which is close to the mean price. The neighborhood poverty level is also reasonable for this price range, so the price seems fair.* ** Client 2: ** The recommended selling price for client 2's home is \$232,269.77, which is about half of the mean price. The student-teacher ratio of nearby schools is high and the neighborhood poverty level is almost double that of client 1, but client 2 gets 4 rooms, just 1 room fewer than client 1. So this price seems fair for client 2.* ** Client 3: ** The recommended selling price for client 3's home is \$913,500.00, which is well above the average price, because client 3 gets more rooms (8) compared to client 1 and also has the lowest neighborhood poverty level and student-teacher ratio of the three. So this price seems fair for client 3. SensitivityAn optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. **Run the code cell below to run the `fit_model` function ten times with different training and testing sets to see how the prediction for a specific client changes with respect to the data it's trained on.**
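Before running the sensitivity check below, one quick way to put the three quotes above in context of the full price distribution (a sketch; it assumes `prices`, `reg`, and `client_data` from the earlier cells and uses SciPy's `percentileofscore`):

```python
from scipy import stats

for i, price in enumerate(reg.predict(client_data)):
    pct = stats.percentileofscore(prices, price)
    print "Client {}: ${:,.2f} is at percentile {:.0f} of the observed prices".format(i + 1, price, pct)
```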
###Code
vs.PredictTrials(features, prices, fit_model, client_data)
###Output
Trial 1: $391,183.33
Trial 2: $419,700.00
Trial 3: $415,800.00
Trial 4: $420,622.22
Trial 5: $418,377.27
Trial 6: $411,931.58
Trial 7: $399,663.16
Trial 8: $407,232.00
Trial 9: $351,577.61
Trial 10: $413,700.00
Range in prices: $69,044.61
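For reference, `vs.PredictTrials` comes from the project's `visuals` helper module. Below is a minimal sketch of what it presumably does; the function name, the 80/20 split, the seeds, and the assumption that `fit_model(X, y)` returns a fitted regressor are illustrative guesses rather than the project's actual code.

```python
from sklearn.model_selection import train_test_split  # assumed import path

def predict_trials(X, y, fitter, data, n_trials=10):
    """Fit the model on several different train/test splits and predict Client 1's price."""
    prices = []
    for k in range(n_trials):
        # a different 80/20 split on every trial
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=k)
        reg = fitter(X_train, y_train)      # assumed: fit_model returns a fitted regressor
        pred = reg.predict([data[0]])[0]    # predicted price for Client 1's home
        prices.append(pred)
        print("Trial {}: ${:,.2f}".format(k + 1, pred))
    print("Range in prices: ${:,.2f}".format(max(prices) - min(prices)))
```

The spread of roughly \$69k across trials (about 15% of the mean house price) is what the sensitivity question above asks you to judge.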
|
Hybrid_Recommenders_for_reviews.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
# !pip install wandb
!pip install surprise
!pip install fast_ml
!pip install rake_nltk
###Output
Collecting surprise
Downloading surprise-0.1-py2.py3-none-any.whl (1.8 kB)
Collecting scikit-surprise
Downloading scikit-surprise-1.1.1.tar.gz (11.8 MB)
[K |████████████████████████████████| 11.8 MB 51 kB/s
[?25hRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-surprise->surprise) (1.0.1)
Requirement already satisfied: numpy>=1.11.2 in /usr/local/lib/python3.7/dist-packages (from scikit-surprise->surprise) (1.19.5)
Requirement already satisfied: scipy>=1.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-surprise->surprise) (1.4.1)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.7/dist-packages (from scikit-surprise->surprise) (1.15.0)
Building wheels for collected packages: scikit-surprise
Building wheel for scikit-surprise (setup.py) ... [?25l[?25hdone
Created wheel for scikit-surprise: filename=scikit_surprise-1.1.1-cp37-cp37m-linux_x86_64.whl size=1619403 sha256=a4f61a3e397ab73bc1058772c898418d044a39101d3645ba9b355c7a83a98c4d
Stored in directory: /root/.cache/pip/wheels/76/44/74/b498c42be47b2406bd27994e16c5188e337c657025ab400c1c
Successfully built scikit-surprise
Installing collected packages: scikit-surprise, surprise
Successfully installed scikit-surprise-1.1.1 surprise-0.1
Collecting fast_ml
Downloading fast_ml-3.68-py3-none-any.whl (42 kB)
[K |████████████████████████████████| 42 kB 434 kB/s
[?25hInstalling collected packages: fast-ml
Successfully installed fast-ml-3.68
Collecting rake_nltk
Downloading rake_nltk-1.0.6-py3-none-any.whl (9.1 kB)
Collecting nltk<4.0.0,>=3.6.2
Downloading nltk-3.6.3-py3-none-any.whl (1.5 MB)
[K |████████████████████████████████| 1.5 MB 5.9 MB/s
[?25hRequirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from nltk<4.0.0,>=3.6.2->rake_nltk) (1.0.1)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from nltk<4.0.0,>=3.6.2->rake_nltk) (4.62.2)
Requirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from nltk<4.0.0,>=3.6.2->rake_nltk) (7.1.2)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from nltk<4.0.0,>=3.6.2->rake_nltk) (2019.12.20)
Installing collected packages: nltk, rake-nltk
Attempting uninstall: nltk
Found existing installation: nltk 3.2.5
Uninstalling nltk-3.2.5:
Successfully uninstalled nltk-3.2.5
Successfully installed nltk-3.6.3 rake-nltk-1.0.6
###Markdown
Load the Libraries and load the data
###Code
#manipulating data
import pandas as pd
import numpy as np
from rake_nltk import Rake
#Sklearn
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.decomposition import PCA, TruncatedSVD,SparsePCA
#load the data
df = pd.read_csv("/content/drive/Shareddrives/Team Stars/Data/df_clean.csv",index_col=0)
df.head(2)
df=df.rename(columns={'reviews_username':'username','reviews_rating':'ratings','reviews_text':'desc'})
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
df.username=le.fit_transform(df.username)
df.id=le.fit_transform(df.id)
df.head()
###Output
_____no_output_____
###Markdown
Feature Selection
###Code
data=df[['id','username','name','brand','ratings','desc']]
data=data.rename(columns={'username':'userid','name':'productname','id':'productid'})
data.head()
###Output
_____no_output_____
###Markdown
Content based Text Preprocessing
###Code
import re
#Text process
data['brand'] = [re.sub(r'[^\w\s]', '', t) for t in data['brand']]
data['desc'] = [re.sub(r'[^\w\s]', '', t) for t in data['desc']]
data['brand'] = [t.lower() for t in data['brand']]
data['desc'] = [t.lower() for t in data['desc']]
data.head()
for index, row in data.iterrows():
row['desc']=[x.lower().replace(' ','')for x in row['desc']]
row['desc'] = ''.join(row['desc']).lower()
row['brand'] = ''.join(row['brand']).lower()
data.head()
import nltk
nltk.download('stopwords')
nltk.download('punkt')
data['keywords']=""
for index,row in data.iterrows():
desc=row['desc']
rake_nltk_var = Rake()
rake_nltk_var.extract_keywords_from_text(desc)
keyword_extracted = rake_nltk_var.get_word_degrees()
row['keywords']=list(keyword_extracted.keys())
#Intialize rake
# r=Rake()
# #extract keywords
# r.extract_keywords_from_text(desc)
# #getting the dict with key words and their scores
# row['keywords'] = r.get_ranked_phrases()
#assigning the key words to new column
# = list(key_words_dict_scores.keys())
#dropping the desc column
#data.drop(columns=['desc'],inplace=True)
# data.keywords.unique()
print(data.keywords)
data.head()
###Output
_____no_output_____
###Markdown
Collaborative filtering
###Code
# pivot table
rmat = data.pivot_table(
columns = 'productid',
index = 'userid',
values = 'ratings'
)
rmat.fillna(0)
# Content-based part: vectorize the review descriptions
# initialize the vectorizer
vect = CountVectorizer(analyzer = 'word', ngram_range = (1,2), stop_words = 'english', min_df = 0.002)
#min_df = rare words, max_df = most used words
#ngram_range = (1,2) - if used more than 1(value), lots of features or noise
vect.fit(data['desc'])
title_matrix = vect.transform(data['desc'])
title_matrix.shape
#Lets find features
features = vect.get_feature_names()
#performing cosine similarity
cosine_sim = cosine_similarity(title_matrix, title_matrix)
cosine_sim.shape
from surprise import Dataset
from surprise import Reader
reader = Reader(rating_scale=(1, 5))
dff = Dataset.load_from_df(data[['userid', 'productid', 'ratings']], reader)
from surprise import SVD
from surprise.model_selection import cross_validate
svd = SVD(verbose=True, n_epochs=10)
# cross_validate(svd, dff, measures=['RMSE', 'MAE'], cv=3, verbose=True)
# split data into train test
trainset, testset = train_test_split(data, test_size=0.3,random_state=10)
trainset = dff.build_full_trainset()
svd.fit(trainset)
# trainset
# svd.predict(uid=10, iid=100)
# hybrid
def hybrid(userid, productid, svd_model,n_recs ):
# sort similarity values in decreasing order and take top 50 results
sim = list(enumerate(cosine_sim[int(productid)]))
sim = sorted(sim, key=lambda x: x[1], reverse=True)
sim = sim[1:50]
# get product metadata
product_idx = [i[0] for i in sim]
products = data.iloc[product_idx][['productid', 'ratings']]
# predict using the svd_model
products['est'] = products.apply(lambda x: svd_model.predict(userid, x['productid'], x['ratings']).est, axis = 1)
# sort predictions in decreasing order and return top n_recs
products = products.sort_values('est', ascending=False)
return products.head(n_recs)
# generate recommendations
hybrid(244,4539,svd,5)
###Output
_____no_output_____ |
DLFBT/recurrent_neural_networks/recurrent_neural_networks.ipynb | ###Markdown
IntroductionRecurrent Neural Networks (RNNs) are deep neural networks whose architecture includes recurrent connections (loops) from one layer to itself or from one layer to a previous layer. This way the information may flow back through recurrent loops. This is in contrast with feed-forward (FF) networks, where the information flows always forwards (from one layer to the next). These networks are specially designed to cope with sequential inputs, and are the current state of the art in many deep learning applications related to sequence classification or sequence modeling, such as:- Text processing or generation: automatic translation, sentiment analysis, chatbots, ...- Automatic music composition- Temporal series forecasting: stock market, weather, ... Some simple examplesLet us start by considering several simple examples of the kind of input/output we usually process with a RNN.**Example 1: Predict the parity of bit sequence**The input is a bit sequence and the output is the parity of the sequence (1 if the number of ones is an odd number, 0 if it is an even number) at each position. The network must provide an output for each input symbol in the sequence. We include an additional ``$`` symbol that resets the parity to 0, making the sequence start again. One example of input/output sequences follows:```INPUT: $11011100000010001$1100000001010011$001001110$$010OUTPUT: 01001011111110000101000000001100010000111010000011```**Example 2: Detect the presence of a given pattern in a bit sequence**The input is a bit sequence and the output is 1 if a given pattern ``w`` appears anywhere within the sequence, 0 otherwise. The network must provide one single output for the whole sequence. Two examples of input/output sequences for the pattern ``w = 11011`` (one of class 0 and one of class 1) follow:```INPUT: 1011100101111111010000001 OUTPUT: 0INPUT: 0110111001000100000101010OUTPUT: 1```**Example 3: Predict the next char**The input is a text string and the output is the same string with all the characters shifted one position to the left. The network must provide an output for each input character: the next character in the sequence. One example of input/output sequences follows:```INPUT: 'En un lugar de la Mancha'OUTPUT: 'n un lugar de la Mancha '```What is common to all the previous problems is that the input consists in all cases of ordered sequences, and this order is relevant for the classification task. Although we could in principle use a standard feed-forward neural network to tackle these problems, this kind of architecture does not make use of the temporal relations between the inputs, and so it will have many difficulties to find good solutions. In general, when facing a classification problem that involves temporal sequences, we need a model that is able to manage the following constraints:1. The order in which the elements appear in the input sequence is relevant.2. Each input sequence may be of a different length. 3. There may be long-term dependencies. That is, the output for time $t$ may be dependent on the input seen many time steps before.It is also a good idea to force the model to share its parameters across the sequence.RNNs are built with all these constraints in mind. 
ResourcesThe following are some interesting on-line resources about recurrent neural networks.- *Deep Sequence Modeling* lecture from MIT [Introduction to Deep Learning](http://introtodeeplearning.com/) course: - [Slides](http://introtodeeplearning.com/slides/6S191_MIT_DeepLearning_L2.pdf) - [Video](https://www.youtube.com/watch?v=SEnXr6v2ifU&list=PLtBw6njQRU-rwp5__7C0oIVt26ZgjG9NI&index=2)- *Recurrent Neural Networks* lecture from Stanford [Convolutional Neural Networks for Visual Recognition](http://cs231n.stanford.edu/) course: - [Slides](http://cs231n.stanford.edu/slides/2020/lecture_10.pdf) - [Video (2017 edition)](https://www.youtube.com/watch?v=6niqTuYFZLQ&list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv&index=10)- [Recurrent Neural Networks cheatsheet](https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-recurrent-neural-networks) by Afshine Amidi and Shervine Amidi.- Post [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) in Andrej Karpathy's blog.- Post [Understanding LSTM Networks](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) in Christopher Olah's blog. Simple RNN Elman formulation (Elman, 1990)The recurrence is from the hidden layer to itself:$${\bf h}_{t} = f(W_{xh} {\bf x}_{t} + W_{hh} {\bf h}_{t-1} + {\bf b}_{h}), $$$${\bf y}_{t} = f(W_{hy} {\bf h}_{t} + {\bf b}_{y}). $$ Jordan formulation (Jordan, 1997)The recurrence is from the output layer to the hidden layer:$${\bf h}_{t} = f(W_{xh} {\bf x}_{t} + W_{yh} {\bf y}_{t-1} + {\bf b}_{h}),$$$${\bf y}_{t} = f(W_{hy} {\bf h}_{t} + {\bf b}_{y}).$$ Example. Detect the presence of a given pattern in a bit sequenceWe will use the second of the introductory examples to get some insight on RNNs. We will first try to solve the problem with a standard FF neural network to see the difficulties it finds. Then we will approach the problem with a simple Elman RNN. This is an example of a many-to-one architecture.We first import all the necessary modules:
###Code
import numpy as np
import matplotlib.pyplot as plt
%tensorflow_version 2.x
import tensorflow as tf
from tensorflow import keras
from keras.models import Sequential, Model
from keras.layers import Flatten, Dense, LSTM, GRU, SimpleRNN, RepeatVector, Input
from keras import backend as K
from keras.utils.vis_utils import plot_model
###Output
_____no_output_____
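As a complement to the Keras models used below, here is a minimal NumPy sketch of the Elman recurrence defined above, assuming a `tanh` hidden layer and a sigmoid output; the weight names, sizes and random initialisation are purely illustrative.

```python
import numpy as np

def elman_step(x_t, h_prev, Wxh, Whh, Why, bh, by):
    """One Elman step: h_t = tanh(Wxh x_t + Whh h_{t-1} + bh), y_t = sigmoid(Why h_t + by)."""
    h_t = np.tanh(Wxh @ x_t + Whh @ h_prev + bh)
    y_t = 1.0 / (1.0 + np.exp(-(Why @ h_t + by)))
    return h_t, y_t

# illustrative sizes: 1 input, 10 hidden units, 1 output
rng = np.random.default_rng(0)
Wxh, Whh, Why = rng.normal(size=(10, 1)), rng.normal(size=(10, 10)), rng.normal(size=(1, 10))
bh, by = np.zeros(10), np.zeros(1)
h = np.zeros(10)
for x in [0.0, 1.0, 1.0, 0.0]:   # a short input sequence; the state h carries over between steps
    h, y = elman_step(np.array([x]), h, Wxh, Whh, Why, bh, by)
```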
###Markdown
The following code defines a function to create the dataset. The argument ``n`` is the number of input patterns, ``seq_len`` is the sequence length and ``pattern`` is the pattern to be detected in the sequences.
###Code
def create_data_set(n, seq_len, pattern):
x = np.random.randint(0, 2, (n, seq_len))
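    # chr(c+48) maps 0/1 to the characters '0'/'1', so each row becomes a bit string
    # that can then be checked for the pattern with a simple substring test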
x_as_string = [''.join([chr(c+48) for c in a]) for a in x]
t = np.array([pattern in s for s in x_as_string])*1
return x, x_as_string, t
###Output
_____no_output_____
###Markdown
We use the ``create_data_set`` function to create datasets for training and validation:
###Code
n = 50000
seq_len = 25
pattern = '11011'
x, x_as_string, t = create_data_set(n, seq_len, pattern)
xval, xval_as_string, tval = create_data_set(n, seq_len, pattern)
for s, c in zip(x_as_string[:20], t[:20]):
print(s, c)
print('Class mean (training):', t.mean())
print('Class mean (validation):', tval.mean())
###Output
0011010010101101010000010 0
0010000100001000011001011 0
1101000000001000001011011 1
0001000010100100001100100 0
1110111001110001111001011 1
0100111101100011111111101 1
1011111010110010110011010 0
1101100010001000101001111 1
1010101111001011011001100 1
0100000010000111011100010 1
0000111001010011010100101 0
0110000101010010000100001 0
1010010010110001011101001 0
1111011100011110010110111 1
0000011001110100110110110 1
1001000110000000001001111 0
0010100100100101110001000 0
1001111000111000011101101 1
0101010001011001111111110 0
1111100101100000000110000 0
Class mean (training): 0.46958
Class mean (validation): 0.4674
###Markdown
We observe that the pattern appears in approximately one half of the input sequences. Solution using a FF networkModel definition:
###Code
K.clear_session()
model = keras.Sequential()
model.add(keras.layers.Dense(10, input_shape=(seq_len,), activation="relu"))
model.add(keras.layers.Dense(1, activation="sigmoid"))
print(model.summary())
plot_model(model, show_shapes=True, show_layer_names=True)
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 10) 260
_________________________________________________________________
dense_1 (Dense) (None, 1) 11
=================================================================
Total params: 271
Trainable params: 271
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Model compilation:
###Code
model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Model training:
###Code
history = model.fit(x, t, epochs=1000, batch_size=1000, validation_data=(xval, tval))
###Output
_____no_output_____
###Markdown
Plots of loss and accuracy:
###Code
hd = history.history
plt.figure(figsize=(12,6))
plt.subplot(1,2,1)
plt.plot(hd['accuracy'], "r", label="train")
plt.plot(hd['val_accuracy'], "b", label="valid")
plt.grid(True)
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.title("Accuracy")
plt.legend()
plt.subplot(1,2,2)
plt.plot(hd['loss'], "r", label="train")
plt.plot(hd['val_loss'], "b", label="valid")
plt.grid(True)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.title("Loss")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Solution using a RNNModel definition. The following points are important:- ``stateful=False``: This indicates that it is not necessary to pass the last state to the next batch. - ``return_sequences=False``: This indicates that it is not necessary to output the states at all time steps, only the last one.
###Code
K.clear_session()
model = Sequential()
model.add(SimpleRNN(10, activation="tanh", input_shape=(seq_len, 1), return_sequences=False, stateful=False, unroll=True))
model.add(Dense(1, activation='sigmoid'))
print(model.summary())
plot_model(model, show_shapes=True, show_layer_names=True)
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
simple_rnn (SimpleRNN) (None, 10) 120
_________________________________________________________________
dense (Dense) (None, 1) 11
=================================================================
Total params: 131
Trainable params: 131
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Model compilation:
###Code
model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Model training:
###Code
history = model.fit(x[:, :, None], t[:, None], epochs=200, batch_size=1000, validation_data=(xval[:, :, None], tval[:, None]))
###Output
_____no_output_____
###Markdown
Plots of loss and accuracy:
###Code
hd = history.history
plt.figure(figsize=(12,6))
plt.subplot(1,2,1)
plt.plot(hd['accuracy'], "r", label="train")
plt.plot(hd['val_accuracy'], "b", label="valid")
plt.grid(True)
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.title("Accuracy")
plt.legend()
plt.subplot(1,2,2)
plt.plot(hd['loss'], "r", label="train")
plt.plot(hd['val_loss'], "b", label="valid")
plt.grid(True)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.title("Loss")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Note that, despite having fewer trainable parameters, the RNN converges faster than the FF network and finds a much better solution. Backpropagation through timeLet us consider the Elman model. The network state ${\bf h}_{t}$ depends on the state at the previous time step, ${\bf h}_{t-1}$, and this depends in turn on ${\bf h}_{t-2}$, and so on. It is sometimes useful to imagine an unfolded representation of the RNN model, where we have a different copy of the network for every time step and the state is passed from one time step to the next one: We see that ${\bf h}_{t}$, and hence ${\bf y}_{t}$, depends on all the past history (all past inputs and all past states). As all the network 'copies' share the same parameters, when computing the gradients of the loss function with respect to the hidden layer's weights and biases we need to backpropagate the error signal for an arbitrarily large number of time steps. This is of course unfeasible in any practical situation, so what we usually do is fix a time window of ``seq_len`` time steps and not backpropagate the error further than this (*truncated backpropagation through time*). The network state may however be passed continuously from one batch to the next, and hence the network can in principle track events that occurred many time steps before (long-term dependencies). Example. Predict the parity of a bit sequenceThe second example is the parity problem described above. The input is a string with several bit sequences separated by the ``$`` symbol, and the output is the parity of the sequence since the last ``$``. One example of input/output is shown below:```INPUT: $11011100000010001$1100000001010011$001001110$$010OUTPUT: 01001011111110000101000000001100010000111010000011```The following function is used to generate input/output sequences for the parity problem:
###Code
def generate_sequences(n, p0, p1):
"""
n is the number of elements in the sequence
p0 and p1 must be probabilities, with p0 + p1 <= 1
the probability for the $ symbol is assumed to be p$ = 1 - p0 - p1
"""
r = np.random.rand(n)
x_sym = np.full(n, '$')
x = np.full(n, 2)
x_sym[r < p0 + p1] = '1'
x[r < p0 + p1] = 1
x_sym[r < p0] = '0'
x[r < p0] = 0
x_sym[0] = '$'
x[0] = 2
t = np.zeros(n, dtype=np.int)
k = 0
for i in range(n):
if x[i] == 2:
t[i] = 0
k = 0
else:
k += x[i]
t[i] = k%2
x_string = ''.join(x_sym)
t_string = ''.join([chr(c+48) for c in t])
x_one_hot = 1*(np.arange(3)[:, None] == x[None, :])
return x, t, x_one_hot, x_string, t_string
###Output
_____no_output_____
###Markdown
The following code generates a sequence of length 5000:
###Code
num_pats = 5000
x, t, x_one_hot, x_string, t_string = generate_sequences(num_pats, 0.45, 0.45)
###Output
_____no_output_____
###Markdown
Model definition. Note that in this case we need a many-to-many architecture, since we need one output for each time step. The model needs also to be *stateful*.- ``stateful=True``: This indicates that it is necessary to pass the last state to the next batch. - ``return_sequences=True``: This indicates that it is necessary to output the states at all time steps, not only the last one. Also, when using a stateful model we need to specify the batch size:- ``batch_input_shape=(1, seq_len, 3)``
###Code
K.clear_session()
seq_len = 50
model = Sequential()
model.add(SimpleRNN(10, activation='tanh', batch_input_shape=(1, seq_len, 3), return_sequences=True, stateful=True, unroll=True))
model.add(Dense(1, activation='sigmoid'))
print(model.summary())
plot_model(model, show_shapes=True, show_layer_names=True)
learning_rate = 0.1
model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
num_epochs = 100
num_batches = num_pats // seq_len # possibly ignore last elements in sequence
model_loss = np.zeros(num_epochs)
model_acc = np.zeros(num_epochs)
for epoch in range(num_epochs):
model.reset_states() # reset state at the beginning of each epoch
mean_tr_loss = []
mean_tr_acc = []
for j in range(num_batches):
imin = j*seq_len
imax = (j+1)*seq_len
seq_x = x_one_hot[:, imin:imax].transpose()
seq_t = t[imin:imax]
tr_loss, tr_acc = model.train_on_batch(seq_x[None, :, :], seq_t[None, :, None])
mean_tr_loss.append(tr_loss)
mean_tr_acc.append(tr_acc)
model_loss[epoch] = np.array(mean_tr_loss).mean()
model_acc[epoch] = np.array(mean_tr_acc).mean()
print("\nTraining epoch: %d / %d" % (epoch+1, num_epochs), end="")
print(", loss = %f, acc = %f" % (model_loss[epoch], model_acc[epoch]), end="")
plt.plot(model_loss)
plt.grid(True)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()
###Output
_____no_output_____
###Markdown
Evaluate model on test data:
###Code
num_test_pats = 5000
x_test, t_test, x_test_one_hot, x_test_string, t_test_string = generate_sequences(num_test_pats, 0.45, 0.45)
num_test_batches = num_test_pats // seq_len
test_loss = []
test_acc = []
for j in range(num_test_batches):
imin = j*seq_len
imax = (j+1)*seq_len
seq_x = x_test_one_hot[:, imin:imax].transpose()
seq_t = t_test[imin:imax]
loss, acc = model.test_on_batch(seq_x[None, :, :], seq_t[None, :, None])
test_loss.append(loss)
test_acc.append(acc)
print("Test loss = %f, test acc = %f" % (np.array(test_loss).mean(), np.array(test_acc).mean()))
###Output
Test loss = 0.000384, test acc = 1.000000
###Markdown
The vanishing and exploding gradient problemsIf we analyze the gradient flow in an Elman RNN, we observe that with each step back in the computational graph the gradient of the loss function gets multiplied by:$$\frac{\partial{{\bf h}_{t}}}{\partial{{\bf h}_{t-1}}} = W_{hh}^{t} f'({\bf v}_{t})$$When using activations such as the *sigmoid* or *tanh* functions, whose derivatives are always (*sigmoid*) or almost always (*tanh*) lower than 1, the gradient gets repeatedly multiplied by a small constant, and hence tends to zero after several time steps. This is known as the *vanishing gradient* problem. We could consider using a linear activation, but even in this case the gradient is continuously multiplied by the matrix $W_{hh}^{t}$. In such a situation the behaviour depends on the largest singular value of this matrix:- If it is greater than one, the gradient continuously increases (*exploding gradient* problem).- If it is lower than one, the gradient decreases to zero (*vanishing gradient* problem).The *exploding gradient* problem can be safely avoided by using a technique called *gradient clipping* which consists of scaling down the gradient if its norm is too big.The *vanishing gradient* problem has no easy solution, but some RNN architectures have been designed to reduce its effects. Long short-term memory (LSTM) LSTMs (Hochreiter and Schmidhuber, 1997) are designed to avoid the long term dependency problem due to the vanishing gradient. They are governed by the following equations:$$\begin{eqnarray}{\bf f}_{t} &=& \sigma(W_{f} [{\bf x}_{t}; {\bf h}_{t-1}] + {\bf b}_{f}),\\{\bf i}_{t} &=& \sigma(W_{i} [{\bf x}_{t}; {\bf h}_{t-1}] + {\bf b}_{i}),\\\tilde{{\bf c}}_{t} &=& \tanh(W_{c} [{\bf x}_{t}; {\bf h}_{t-1}] + {\bf b}_{c}),\\ {\bf c}_{t} &=& {\bf f}_{t} \circ {\bf c}_{t-1} + {\bf i}_{t} \circ \tilde{{\bf c}}_{t},\\ {\bf o}_{t} &=& \sigma(W_{o} [{\bf x}_{t}; {\bf h}_{t-1}] + {\bf b}_{o}), \\{\bf h}_{t} &=& {\bf o}_{t} \circ \tanh({\bf c}_{t}). \end{eqnarray}$$In these equations $[{\bf x}_{t}; {\bf h}_{t-1}]$ means the concatenation of vectors ${\bf x}_{t}$ and ${\bf h}_{t-1}$, and the operator $\circ$ represents an element-wise multiplication.The LSTM provides an uninterrupted gradient flow that allows the network learn long term dependencies. Gated recurrent unit (GRU)The Gated Recurrent Unit (Cho et al., 2014) is a modification of the LSTM architecture that combines the forget and input gates into a single update gate, reducing the number of parameters. $$\begin{eqnarray*}{\bf z}_{t} &=& \sigma(W_{z} [{\bf x}_{t}; {\bf h}_{t-1}] + {\bf b}_{z}),\\{\bf r}_{t} &=& \sigma(W_{r} [{\bf x}_{t}; {\bf h}_{t-1}] + {\bf b}_{r}),\\\tilde{{\bf h}}_{t} &=& \tanh(W_{h} [{\bf x}_{t}; {\bf r}_{t} \circ {\bf h}_{t-1}] + {\bf b}_{h}),\\ {\bf h}_{t} &=& (1 - {\bf z}_{t}) \circ {\bf h}_{t-1} + {\bf z}_{t} \circ \tilde{{\bf h}}_{t}. \end{eqnarray*}$$ Text generation: predict the next charSee notebook text_generation_quijote.ipynb Sequence-to-sequence (seq2seq) learning: binary to decimal translation https://keras.io/examples/nlp/lstm_seq2seq/
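Before moving on to the seq2seq code, a minimal NumPy sketch of a single LSTM step may help make the gate equations above concrete; every name, shape and the random initialisation below are illustrative assumptions, and the Keras `LSTM` layer used in the next cells handles all of this internally.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, Wf, Wi, Wc, Wo, bf, bi, bc, bo):
    """One LSTM step following the equations above; each W acts on the concatenation [x_t; h_{t-1}]."""
    z = np.concatenate([x_t, h_prev])   # [x_t; h_{t-1}]
    f = sigmoid(Wf @ z + bf)            # forget gate
    i = sigmoid(Wi @ z + bi)            # input gate
    c_tilde = np.tanh(Wc @ z + bc)      # candidate cell state
    c_t = f * c_prev + i * c_tilde      # new cell state
    o = sigmoid(Wo @ z + bo)            # output gate
    h_t = o * np.tanh(c_t)              # new hidden state
    return h_t, c_t

# illustrative sizes: 3 inputs, 5 hidden units
n_in, n_hid = 3, 5
rng = np.random.default_rng(0)
Wf, Wi, Wc, Wo = (rng.normal(size=(n_hid, n_in + n_hid)) for _ in range(4))
bf = bi = bc = bo = np.zeros(n_hid)
h = c = np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c, Wf, Wi, Wc, Wo, bf, bi, bc, bo)
```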
###Code
def generate_data_bin_dec(n, maxnum, dpad, bpad):
x = np.random.randint(0, maxnum, n)
xdec_as_string = [str(a).zfill(dpad) for a in x]
xbin_as_string = [bin(a).replace("0b","").zfill(bpad) for a in x]
xbin = np.array([[ord(a) - 48 for a in b] for b in xbin_as_string])
xdec = np.array([[ord(a) - 48 for a in b] for b in xdec_as_string])
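    # broadcast-compare every decimal digit against 0..9 to get a one-hot target of shape (n, dpad, 10)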
xdec_one_hot = 1*(np.arange(10)[None, None, :] == xdec[:, :, None])
return xdec_as_string, xbin_as_string, xdec, xbin, xdec_one_hot
seq_len_bin = 24
seq_len_dec = 8
_, _, d, b, doh = generate_data_bin_dec(10000, 2**15, seq_len_dec, seq_len_bin)
_, _, d_val, b_val, doh_val = generate_data_bin_dec(10000, 2**15, seq_len_dec, seq_len_bin)
b.shape
d.shape
doh.shape
K.clear_session()
model = Sequential()
model.add(LSTM(100, input_shape=(seq_len_bin, 1), return_sequences=False, stateful=False, unroll=True))
model.add(RepeatVector(seq_len_dec))
model.add(LSTM(100, return_sequences=True, stateful=False, unroll=True))
model.add(Dense(10, activation='softmax'))
print(model.summary())
plot_model(model, show_shapes=True, show_layer_names=True)
model.compile(loss='sparse_categorical_crossentropy', optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
history = model.fit(b[:, :, None], d[:, :, None], epochs=500, batch_size=200, validation_data=(b_val[:, :, None], d_val[:, :, None]))
hd = history.history
plt.figure(figsize=(12,6))
plt.subplot(1,2,1)
plt.plot(hd['accuracy'], "r", label="train")
plt.plot(hd['val_accuracy'], "b", label="valid")
plt.grid(True)
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.title("Accuracy")
plt.legend()
plt.subplot(1,2,2)
plt.plot(hd['loss'], "r", label="train")
plt.plot(hd['val_loss'], "b", label="valid")
plt.grid(True)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.title("Loss")
plt.legend()
plt.show()
y = model.predict(b_val[:, :, None])
y = np.argmax(y, axis=2)
y[:20]
d_val[:20]
r = np.array([10000000, 1000000, 100000, 10000, 1000, 100, 10, 1])
plt.figure(figsize=(6, 6))
plt.plot(np.dot(d_val, r), np.dot(y, r), 'o')
plt.grid(True)
plt.xlabel('Real')
plt.ylabel('Predicted')
plt.show()
2**15
###Output
_____no_output_____
###Markdown
Using the functional API to gain flexibility:
###Code
K.clear_session()
encoder_input = Input(shape=(seq_len_bin, 1))
encoder = SimpleRNN(200, activation='tanh', return_sequences=False, stateful=False, unroll=True)
state = encoder(encoder_input)
decoder_input = Input(shape=(seq_len_dec, 1))
decoder = SimpleRNN(200, activation='tanh', return_sequences=True, stateful=False, unroll=True)
decoder_output = decoder(decoder_input, initial_state=state)
output_layer = Dense(10, activation='softmax')
output = output_layer(decoder_output)
model = Model([encoder_input, decoder_input], output)
print(model.summary())
plot_model(model, show_shapes=True, show_layer_names=True)
model.compile(loss='sparse_categorical_crossentropy', optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
%%time
history = model.fit([b[:, :, None], np.zeros((10000, 8, 1))], d[:, :, None], epochs=500, batch_size=200, validation_data=([b_val[:, :, None], np.zeros((10000, 8, 1))], d_val[:, :, None]))
hd = history.history
plt.figure(figsize=(12,6))
plt.subplot(1,2,1)
plt.plot(hd['accuracy'], "r", label="train")
plt.plot(hd['val_accuracy'], "b", label="valid")
plt.grid(True)
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.title("Accuracy")
plt.legend()
plt.subplot(1,2,2)
plt.plot(hd['loss'], "r", label="train")
plt.plot(hd['val_loss'], "b", label="valid")
plt.grid(True)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.title("Loss")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Final version, using teacher forcing: the decoder input is the one-hot target sequence shifted one step to the right (with a zero vector in the first position), so at every step the decoder sees the previous target digit:
###Code
dec_input = np.concatenate((np.zeros((10000, 1, 10)), doh[:, :-1, :]), axis=1)
K.clear_session()
encoder_input = Input(shape=(seq_len_bin, 1))
encoder = SimpleRNN(200, activation='tanh', return_sequences=False, stateful=False, unroll=True)
state = encoder(encoder_input)
decoder_input = Input(shape=(seq_len_dec, 10))
decoder = SimpleRNN(200, activation='tanh', return_sequences=True, stateful=False, unroll=True)
decoder_output = decoder(decoder_input, initial_state=state)
output_layer = Dense(10, activation='softmax')
output = output_layer(decoder_output)
model = Model([encoder_input, decoder_input], output)
print(model.summary())
plot_model(model, show_shapes=True, show_layer_names=True)
model.compile(loss='sparse_categorical_crossentropy', optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
history = model.fit([b[:, :, None], dec_input], d[:, :, None], epochs=500, batch_size=200)
###Output
_____no_output_____ |
05_Multi Linear Regression/1_Startup_Statement.ipynb | ###Markdown
Data Loading
###Code
# import the library
import scipy.stats as stats
import statsmodels.formula.api as smf
import statsmodels.api as sm
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sdata = pd.read_csv('data/50_Startups.csv')
sdata.head()
###Output
_____no_output_____
###Markdown
Data Cleaning and Understanding
###Code
# shape of the data
print('Dimension:', sdata.shape)
sdata.describe()
# null values check
sdata.isnull().sum()
# duplicate entry check
sdata[sdata.duplicated()].shape
###Output
_____no_output_____
###Markdown
- No missing data is found - No duplicate rows are found
###Code
# check the unique values in Categorical data
sdata['State'].unique()
###Output
_____no_output_____
###Markdown
- Categorical data is found in the State column, inspected through its unique entries Label Encoder * The 'State' column holds categorical values; we need to convert them to a numeric form before modelling
###Code
# label encoder
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
# creating object of LabelEncoder()
labelE = LabelEncoder()
# calling the method on the 'labelE' object, which returns an integer-encoded array of the categories
labelE.fit_transform(sdata['State'])
# count the categorical values
sdata['labelE1'] = labelE.fit_transform(sdata['State'])
sdata['labelE1'].value_counts()
# count the categorical values with its catrgory
sdata['State'].value_counts()
###Output
_____no_output_____
###Markdown
- The counts per 'State' match the label-encoded counts exactly: New York = 17, California = 17, Florida = 16 One-Hot Encoding
###Code
# get dummies entry (0, 1) for categorical variables
add_columns = pd.get_dummies(sdata['State'])
add_columns.head()
sdata.head()
###Output
_____no_output_____
###Markdown
Issue: the labelE1 column only gives an integer code for 'State'; on its own it implies an arbitrary ordering and still does not tell the model which state a row belongs to. Hence we drop that column and join the dummy-variable columns stored in 'add_columns'.
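To see the ordering problem concretely, a quick throwaway check (the `le_demo` name is only for illustration) shows the arbitrary integer codes a `LabelEncoder` assigns:

```python
from sklearn.preprocessing import LabelEncoder

le_demo = LabelEncoder()
states = ['California', 'Florida', 'New York']
# LabelEncoder sorts the classes and assigns 0, 1, 2 -- an ordering a linear model would misread
print(dict(zip(states, le_demo.fit_transform(states))))
```

The join of the dummy columns follows below.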
###Code
# column join (Dummy variables)
sdata = sdata.join(add_columns)
sdata.head()
# drop the columns having Categorical data
sdata.drop(['labelE1','State'], axis = 1, inplace = True)
sdata.head()
###Output
_____no_output_____
###Markdown
- All columns are now numeric; the conversion is complete
###Code
# rename the dataframes for further analysis and operations
sdata_new_1 = sdata.rename(columns = {'R&D Spend': 'RDS', 'Administration': 'Admin', 'Marketing Spend' : 'MarketS',
'California': 'C', 'Florida': 'F', 'New York' : 'N'})
sdata_new_1.head()
###Output
_____no_output_____
###Markdown
Data Visualization and Exploration
###Code
# distribution of data
plt.figure(figsize=(12,7))
plt.subplot(2, 2, 1)
sns.distplot(sdata_new_1['RDS'], kde = True)
plt.title('R & D Spend', color='blue')
plt.subplot(2, 2, 2)
sns.distplot(sdata_new_1['Admin'], kde = True)
plt.title('Administrator',color='blue')
plt.subplot(2, 2, 3)
sns.distplot(sdata_new_1['MarketS'], kde = True)
plt.title('Marketing Spend', color='red')
plt.show()
# check the outliers
plt.figure(figsize=(12,8))
plt.subplot(2, 2, 1)
sdata_new_1.boxplot(column=['RDS'])
plt.title('R & D Spend', color='red')
plt.subplot(2, 2, 2)
sdata_new_1.boxplot(column=['MarketS'])
plt.title('Marketing Spend', color='red')
plt.show()
###Output
_____no_output_____
###Markdown
- The data is almost symmetric in nature - No outliers are present in the given data
###Code
# correlation matrix
plt.figure(figsize = (7,6))
sns.heatmap(sdata_new_1.corr(), annot = True)
plt.title('Correlation heatmap', color='blue')
sdata_new_1.corr()
###Output
_____no_output_____
###Markdown
- Profit and R&D Spend are highly correlated (0.97) - Profit and Marketing Spend are correlated (0.74)
###Code
sns.pairplot(sdata_new_1)
###Output
_____no_output_____
###Markdown
Preparing Models Model 1 (Considering all the features)
###Code
model1 = smf.ols('Profit ~ RDS + Admin + MarketS + C + F + N', data = sdata_new_1).fit()
model1.summary()
###Output
_____no_output_____
###Markdown
1) The p-values of the independent features are: RDS = 0.000, Admin = 0.608, MarketS = 0.123, C = F = N = 0.000 2) R-squared = 0.951, i.e. an accuracy of 95.10% 3) Administration and Marketing Spend are not significant for predicting Profit, because their p-values are greater than the 0.05 significance level Model 2 (Considering one feature only)
###Code
model2_admin = smf.ols('Profit ~ Admin', data = sdata_new_1).fit()
model2_admin.summary()
model2_market = smf.ols('Profit ~ MarketS', data = sdata_new_1).fit()
model2_market.summary()
###Output
_____no_output_____
###Markdown
Model 3 (Considering both of the above features together)
###Code
model3_admin_market = smf.ols('Profit ~ Admin + MarketS', data = sdata_new_1).fit()
model3_admin_market.summary()
###Output
_____no_output_____
###Markdown
1) Model 2 (each feature on its own): the significance of Marketing Spend is good compared to Administration 2) Model 3 (both features added together): the significance of both Marketing Spend and Administration drops when they are analyzed together Calculate VIF
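For each variable, the VIF is computed from the $R^2$ of a regression of that variable on all the other variables: $$\mathrm{VIF}_j = \frac{1}{1 - R_j^2}$$ A value close to 1 means the variable is essentially independent of the others, while values above roughly 5 (or 10, by a looser rule of thumb) signal multicollinearity; the code below applies exactly this formula to every variable.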
###Code
# Variance Inflation Factor (multicollinearity detection)
rsq_profit = smf.ols('Profit ~ RDS + Admin + MarketS + F + C + N', data = sdata_new_1).fit().rsquared
vif_profit = 1/(1 - rsq_profit)
rsq_RDS = smf.ols('RDS ~ Profit + Admin + MarketS + F + C + N', data = sdata_new_1).fit().rsquared
vif_RDS = 1/(1 - rsq_RDS)
rsq_Admin = smf.ols('Admin ~ RDS + Profit + MarketS + F + C + N', data = sdata_new_1).fit().rsquared
vif_Admin = 1/(1 - rsq_Admin)
rsq_MarketS = smf.ols('MarketS ~ RDS + Admin + Profit + F + C + N', data = sdata_new_1).fit().rsquared
vif_MarketS = 1/(1 - rsq_MarketS)
dataAll = {'Variables' : ['Profit' , 'RDS', 'Admin', 'MarketS'],
'VIF' : [vif_profit, vif_RDS, vif_Admin, vif_MarketS]}
VIF_frame = pd.DataFrame(dataAll)
VIF_frame
###Output
_____no_output_____
###Markdown
- A VIF should be below 5 (or, by a looser rule, 10) - Administration and Marketing Spend fulfil this condition - For Profit and R&D Spend we still need to check the factors inflating the VIF Model Deletion Diagnostics
###Code
# influence plot (index based analysis)
sm.graphics.influence_plot(model1)
###Output
_____no_output_____
###Markdown
- The records at index 49 and 48 have the most influence - We need to drop these indices and rebuild the existing model (index 46 is also dropped) - Deleting a few records is better than deleting a whole independent variable
###Code
# records deletion having index 49, 48, 46
sdata_new_2 = sdata_new_1.drop(sdata_new_1.index[[49, 48, 46]], axis = 0)
sdata_new_2.head()
# rename again because the column names revert to the originals after dropping the records
sdata_new_2 = sdata_new_2.rename(columns = { 'R&D Spend' : 'RDS', 'Administration' : 'Admin',
'Marketing Spend' : 'MarketS', 'California': 'C',
'Florida': 'F', 'New York' : 'N'})
sdata_new_2.head()
###Output
_____no_output_____
###Markdown
Model 2 (Considering all features with 3 records removed)
###Code
# model 2 (after removal of 3 records)
model2 = smf.ols('Profit ~ RDS + Admin + MarketS + C + F + N', data = sdata_new_2).fit()
model2.summary()
###Output
_____no_output_____
###Markdown
- Model 2 is more acceptable than Model 1: its performance is slightly better, with R-squared improving from 0.951 to 0.961.

| Metric / Feature | Model 1 | Model 2 |
| --- | --- | --- |
| R-Squared | 0.951 | 0.961 |
| Adj R-Squared | 0.945 | 0.957 |
| p-value: R&D Spend | 0.000 | 0.000 |
| p-value: Administration | 0.608 | 0.254 |
| p-value: Marketing Spend | 0.123 | 0.103 |
| p-value: C / F / N | 0.000 | 0.000 |

- But Administration and Marketing Spend still have high p-values, so the next goal is to analyze the factors affecting these variables (multicollinearity)
###Code
# plot influence graph
sm.graphics.influence_plot(model2)
###Output
_____no_output_____
###Markdown
Partial Regression Plot
###Code
fig = plt.figure(figsize = (12,7))
sm.graphics.plot_partregress_grid(model1, fig = fig)
###Output
_____no_output_____
###Markdown
- The partial regression line for Administration is almost flat, i.e. it adds little explanatory power compared to Marketing Spend - So we remove the Administration feature from the data and rebuild the model Model 3 (Considering all features excluding Administration)
###Code
model3 = smf.ols('Profit ~ RDS + MarketS + C + F + N', data = sdata_new_2).fit()
model3.summary()
###Output
_____no_output_____
###Markdown
- The p-value of Marketing Spend has dropped sharply, from 0.103 to 0.025 - The overall accuracy of the model stays at the same level as the previous Model 2
###Code
fig = plt.figure(figsize = (12,7))
fig = sm.graphics.plot_partregress_grid(model3, fig = fig)
###Output
_____no_output_____
###Markdown
Residual Plots (fitted value vs residuals)
###Code
# scatter plot to visualize the relationship between the data
fig = plt.figure(figsize =(12,7))
fig = sm.graphics.plot_regress_exog(model3, 'MarketS', fig = fig)
fig = plt.figure(figsize =(12,7))
fig = sm.graphics.plot_regress_exog(model3, 'RDS', fig = fig)
###Output
_____no_output_____
###Markdown
- The residuals are roughly normally distributed around the line - No U-shaped or V-shaped pattern is found * The best-fitting model among Model 1, Model 2 and Model 3 is Model 3 * Accuracy = 96.00 % * The collinearity issue is resolved by removing the unnecessary independent feature * Next, we try it with a training set and a testing set for more reliable results Building the Model using data splitting (Train + Test)
###Code
# split the data into Training set (80%) and Testing set (20%)
from sklearn.model_selection import train_test_split
x_train, x_test = train_test_split(sdata_new_2, test_size = 0.2, random_state = 0)
# check the size of Training set and Testing set
print('Training set size:', len(x_train))
print('Testing set size:', len(x_test))
###Output
Training set size: 37
Testing set size: 10
###Markdown
Model 4 (training phase: considering all features excluding Administrations)
###Code
# Train the model with all features excluding Administrations
model_train = smf.ols('Profit ~ RDS + MarketS + C + F + N', data = x_train).fit()
model_train.summary()
# train data prediction
train_predict = model_train.predict(x_train)
train_predict.head()
# train residuals values
train_residuals = train_predict - x_train.Profit
train_residuals.head()
# RMSE value
train_rmse = np.sqrt(np.mean(train_residuals * train_residuals))
print(train_rmse)
# test data prediction
test_predict = model_train.predict(x_test)
test_predict.head()
# test residuals values
test_residuals = test_predict - x_test.Profit
test_residuals.head()
# RMSE value
test_rmse = np.sqrt(np.mean(test_residuals * test_residuals))
print(test_rmse)
###Output
5940.321243277686
###Markdown
* Training model performance (Model_train) - Accuracy = 96.10 % - Training RMSE = 7405.32 Final Model based on Training set and Testing set Model 5 (Testing phase)
###Code
model_final_version = smf.ols('Profit ~ RDS + MarketS + C + F + N', data = x_test).fit()
model_final_version.summary()
###Output
C:\Users\K.K\anaconda3\lib\site-packages\scipy\stats\stats.py:1603: UserWarning: kurtosistest only valid for n>=20 ... continuing anyway, n=10
warnings.warn("kurtosistest only valid for n>=20 ... continuing "
|
Simple_Search_Engine_3.ipynb | ###Markdown
AI6122 Assignment 1 - Section 3.3 PrerequisitesYou will need PyTerrier installed. PyTerrier also needs Java to be installed, and will find most installations.
###Code
!pip install python-terrier
#!pip install --upgrade git+https://github.com/terrier-org/pyterrier.git#egg=python-terrier
###Output
Collecting python-terrier
Downloading python-terrier-0.7.0.tar.gz (95 kB)
[?25l
[K |███▍ | 10 kB 24.1 MB/s eta 0:00:01
[K |██████▉ | 20 kB 27.6 MB/s eta 0:00:01
[K |██████████▎ | 30 kB 12.8 MB/s eta 0:00:01
[K |█████████████▊ | 40 kB 9.3 MB/s eta 0:00:01
[K |█████████████████▏ | 51 kB 5.2 MB/s eta 0:00:01
[K |████████████████████▋ | 61 kB 5.6 MB/s eta 0:00:01
[K |████████████████████████ | 71 kB 5.9 MB/s eta 0:00:01
[K |███████████████████████████▌ | 81 kB 6.7 MB/s eta 0:00:01
[K |███████████████████████████████ | 92 kB 6.8 MB/s eta 0:00:01
[K |████████████████████████████████| 95 kB 2.7 MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from python-terrier) (1.19.5)
Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from python-terrier) (1.1.5)
Collecting wget
Downloading wget-3.2.zip (10 kB)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from python-terrier) (4.62.3)
Collecting pyjnius~=1.3.0
Downloading pyjnius-1.3.0-cp37-cp37m-manylinux2010_x86_64.whl (1.1 MB)
[K |████████████████████████████████| 1.1 MB 39.4 MB/s
[?25hCollecting matchpy
Downloading matchpy-0.5.4-py3-none-any.whl (69 kB)
[K |████████████████████████████████| 69 kB 6.3 MB/s
[?25hRequirement already satisfied: sklearn in /usr/local/lib/python3.7/dist-packages (from python-terrier) (0.0)
Collecting deprecation
Downloading deprecation-2.1.0-py2.py3-none-any.whl (11 kB)
Collecting chest
Downloading chest-0.2.3.tar.gz (9.6 kB)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from python-terrier) (1.4.1)
Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from python-terrier) (2.23.0)
Requirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from python-terrier) (1.0.1)
Collecting nptyping
Downloading nptyping-1.4.4-py3-none-any.whl (31 kB)
Requirement already satisfied: more_itertools in /usr/local/lib/python3.7/dist-packages (from python-terrier) (8.10.0)
Collecting ir_datasets>=0.3.2
Downloading ir_datasets-0.4.3-py3-none-any.whl (222 kB)
[K |████████████████████████████████| 222 kB 47.9 MB/s
[?25hRequirement already satisfied: jinja2 in /usr/local/lib/python3.7/dist-packages (from python-terrier) (2.11.3)
Requirement already satisfied: statsmodels in /usr/local/lib/python3.7/dist-packages (from python-terrier) (0.10.2)
Collecting ir_measures>=0.2.0
Downloading ir_measures-0.2.1.tar.gz (36 kB)
Requirement already satisfied: dill in /usr/local/lib/python3.7/dist-packages (from python-terrier) (0.3.4)
Collecting trec-car-tools>=2.5.4
Downloading trec_car_tools-2.5.4-py3-none-any.whl (8.1 kB)
Collecting pyyaml>=5.3.1
Downloading PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (596 kB)
[K |████████████████████████████████| 596 kB 43.7 MB/s
[?25hCollecting pyautocorpus>=0.1.1
Downloading pyautocorpus-0.1.6-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (294 kB)
[K |████████████████████████████████| 294 kB 45.0 MB/s
[?25hCollecting lxml>=4.5.2
Downloading lxml-4.6.3-cp37-cp37m-manylinux2014_x86_64.whl (6.3 MB)
[K |████████████████████████████████| 6.3 MB 30.5 MB/s
[?25hRequirement already satisfied: beautifulsoup4>=4.4.1 in /usr/local/lib/python3.7/dist-packages (from ir_datasets>=0.3.2->python-terrier) (4.6.3)
Collecting ijson>=3.1.3
Downloading ijson-3.1.4-cp37-cp37m-manylinux2010_x86_64.whl (126 kB)
[K |████████████████████████████████| 126 kB 60.8 MB/s
[?25hCollecting warc3-wet>=0.2.3
Downloading warc3_wet-0.2.3-py3-none-any.whl (13 kB)
Collecting zlib-state>=0.1.3
Downloading zlib_state-0.1.3-cp37-cp37m-manylinux2010_x86_64.whl (72 kB)
[K |████████████████████████████████| 72 kB 1.5 MB/s
[?25hCollecting warc3-wet-clueweb09>=0.2.5
Downloading warc3-wet-clueweb09-0.2.5.tar.gz (17 kB)
Collecting lz4>=3.1.1
Downloading lz4-3.1.3-cp37-cp37m-manylinux2010_x86_64.whl (1.8 MB)
[K |████████████████████████████████| 1.8 MB 30.7 MB/s
[?25hCollecting pytrec-eval-terrier==0.5.1
Downloading pytrec_eval_terrier-0.5.1-cp37-cp37m-manylinux2010_x86_64.whl (291 kB)
[K |████████████████████████████████| 291 kB 49.6 MB/s
[?25hCollecting cwl-eval>=1.0.10
Downloading cwl-eval-1.0.10.tar.gz (31 kB)
Requirement already satisfied: six>=1.7.0 in /usr/local/lib/python3.7/dist-packages (from pyjnius~=1.3.0->python-terrier) (1.15.0)
Requirement already satisfied: cython in /usr/local/lib/python3.7/dist-packages (from pyjnius~=1.3.0->python-terrier) (0.29.24)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->python-terrier) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->python-terrier) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->python-terrier) (2021.5.30)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->python-terrier) (3.0.4)
Collecting cbor>=1.0.0
Downloading cbor-1.0.0.tar.gz (20 kB)
Requirement already satisfied: heapdict in /usr/local/lib/python3.7/dist-packages (from chest->python-terrier) (1.0.1)
Requirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from deprecation->python-terrier) (21.0)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from jinja2->python-terrier) (2.0.1)
Collecting multiset<3.0,>=2.0
Downloading multiset-2.1.1-py2.py3-none-any.whl (8.8 kB)
Collecting typish>=1.7.0
Downloading typish-1.9.3-py3-none-any.whl (45 kB)
[K |████████████████████████████████| 45 kB 2.5 MB/s
[?25hRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->deprecation->python-terrier) (2.4.7)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->python-terrier) (2018.9)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas->python-terrier) (2.8.2)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from sklearn->python-terrier) (0.22.2.post1)
Requirement already satisfied: patsy>=0.4.0 in /usr/local/lib/python3.7/dist-packages (from statsmodels->python-terrier) (0.5.2)
Building wheels for collected packages: python-terrier, ir-measures, cwl-eval, cbor, warc3-wet-clueweb09, chest, wget
Building wheel for python-terrier (setup.py) ... [?25l[?25hdone
Created wheel for python-terrier: filename=python_terrier-0.7.0-py3-none-any.whl size=102213 sha256=91afe42b31fe112fdd02e935d7738cc5ca54c42ebb787e6b7baec4c0903b0806
Stored in directory: /root/.cache/pip/wheels/e3/84/1e/68c08f14e2481e2b3e7c1a2c24bb1220712bc3f5d3896c28df
Building wheel for ir-measures (setup.py) ... [?25l[?25hdone
Created wheel for ir-measures: filename=ir_measures-0.2.1-py3-none-any.whl size=46421 sha256=0e293b9f577f0f15cb226bbc964aab308d78606372aaea97e31cb10ba08603d6
Stored in directory: /root/.cache/pip/wheels/38/a4/34/d0b2e6c329f3d0fab3d3c3ed296b963cee47872811acdc3628
Building wheel for cwl-eval (setup.py) ... [?25l[?25hdone
Created wheel for cwl-eval: filename=cwl_eval-1.0.10-py3-none-any.whl size=37795 sha256=51c9d9c0cb92634948ac3df39e2f64ef066365167b9a42d4b8a64747c55a829d
Stored in directory: /root/.cache/pip/wheels/ff/e9/ff/d2b6d72d9feb0d0b1b11aacfaf5cd866717034615c2d194093
Building wheel for cbor (setup.py) ... [?25l[?25hdone
Created wheel for cbor: filename=cbor-1.0.0-cp37-cp37m-linux_x86_64.whl size=51305 sha256=bb15f8457f03101ff4fa917be9908da6bd6d99dc6b26671bf951c071cb4fe787
Stored in directory: /root/.cache/pip/wheels/19/77/49/c9c2c8dc5848502e606e8579d0bbda18b850fb056a6c62239d
Building wheel for warc3-wet-clueweb09 (setup.py) ... [?25l[?25hdone
Created wheel for warc3-wet-clueweb09: filename=warc3_wet_clueweb09-0.2.5-py3-none-any.whl size=18921 sha256=ecb9b952bfb2a6ce06f70d89721859728fb269c34ba7b53920df03f5e437c20a
Stored in directory: /root/.cache/pip/wheels/42/d4/3c/7c2b0c3d400ad744e4db69f2fde166655da2ed2198bfc02db6
Building wheel for chest (setup.py) ... [?25l[?25hdone
Created wheel for chest: filename=chest-0.2.3-py3-none-any.whl size=7632 sha256=13bec1ad7a84a371f3a2e9c04b914bac7d2fd17769cdf53cd8c5dfe7e748b693
Stored in directory: /root/.cache/pip/wheels/fc/f5/b9/c436e11300809e6b40d46a5d2592fb0bff89e0712f2e878dc7
Building wheel for wget (setup.py) ... [?25l[?25hdone
Created wheel for wget: filename=wget-3.2-py3-none-any.whl size=9672 sha256=6358505601a7c0b6405d39c89517fabe2753d2c3d7b765a8cc23ab6fe02f82bf
Stored in directory: /root/.cache/pip/wheels/a1/b6/7c/0e63e34eb06634181c63adacca38b79ff8f35c37e3c13e3c02
Successfully built python-terrier ir-measures cwl-eval cbor warc3-wet-clueweb09 chest wget
Installing collected packages: cbor, zlib-state, warc3-wet-clueweb09, warc3-wet, typish, trec-car-tools, pyyaml, pytrec-eval-terrier, pyautocorpus, multiset, lz4, lxml, ijson, deprecation, cwl-eval, wget, pyjnius, nptyping, matchpy, ir-measures, ir-datasets, chest, python-terrier
Attempting uninstall: pyyaml
Found existing installation: PyYAML 3.13
Uninstalling PyYAML-3.13:
Successfully uninstalled PyYAML-3.13
Attempting uninstall: lxml
Found existing installation: lxml 4.2.6
Uninstalling lxml-4.2.6:
Successfully uninstalled lxml-4.2.6
Successfully installed cbor-1.0.0 chest-0.2.3 cwl-eval-1.0.10 deprecation-2.1.0 ijson-3.1.4 ir-datasets-0.4.3 ir-measures-0.2.1 lxml-4.6.3 lz4-3.1.3 matchpy-0.5.4 multiset-2.1.1 nptyping-1.4.4 pyautocorpus-0.1.6 pyjnius-1.3.0 python-terrier-0.7.0 pytrec-eval-terrier-0.5.1 pyyaml-6.0 trec-car-tools-2.5.4 typish-1.9.3 warc3-wet-0.2.3 warc3-wet-clueweb09-0.2.5 wget-3.2 zlib-state-0.1.3
###Markdown
Init You must run `pt.init()` before other pyterrier functions and classesOptional Arguments: - `version` - terrier IR version e.g. "5.2" - `mem` - megabytes allocated to java e.g. "4096" - `packages` - external java packages for Terrier to load e.g. ["org.terrier:terrier.prf"] - `logging` - logging level for Terrier. Defaults to "WARN", use "INFO" or "DEBUG" for more output.NB: PyTerrier needs Java 11 installed. If it cannot find your Java installation, you can set the `JAVA_HOME` environment variable.
###Code
import pyterrier as pt
if not pt.started():
pt.init()
###Output
terrier-assemblies 5.6 jar-with-dependencies not found, downloading to /root/.pyterrier...
Done
terrier-python-helper 0.0.6 jar not found, downloading to /root/.pyterrier...
Done
PyTerrier 0.7.0 has loaded Terrier 5.6 (built by craigmacdonald on 2021-09-17 13:27)
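For reference, the optional arguments listed above can be passed directly to `pt.init()`; the values below are examples only.

```python
# example only: allocate 4 GB to the JVM and ask Terrier for more verbose logging
import pyterrier as pt

if not pt.started():          # pt.init() may only be called once per session
    pt.init(mem="4096", logging="INFO")
```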
###Markdown
Importing dataset from Google DriveUsing the built-in functions of Google Colab, we can easily import the dataset, which has been uploaded onto Google Drive beforehand. A Google account is needed to download the data.
###Code
# Import PyDrive and associated libraries.
# This only needs to be done once per notebook.
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once per notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# Download a file based on its file ID which is residing on Google Drive
#
file_id = '1aVXMJ_luTXISxMwP5_Bt2xE0ObXqEsQ2' #"Dataset_B1to8"
downloaded = drive.CreateFile({'id': file_id})
#print('Downloaded content "{}"'.format(downloaded.GetContentString()))
downloaded.GetContentFile('Dataset_B1to8.csv')
###Output
_____no_output_____
###Markdown
Loading dataset (csv) into Pandas dataframePyTerrier makes it easy to index standard Python data structures, particularly [Pandas dataframes](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html). We import a scaled-down version of the original `review.json` file, because the original file is too large to load into Google Colab memory.
###Code
## load data into df
import pandas as pd
#df = pd.read_csv("AI6122_Dataset_B1.csv", dtype = str)
df = pd.read_csv("Dataset_B1to8.csv", dtype = str)
try :
del docno
except:
pass
docno =[]
for idx in range(1,len(df)+1):
docno.append("d" + str(idx))
df["docno"] = docno
###Output
_____no_output_____
###Markdown
Time taken for indexingWe track the time to complete indexing for each 10% increment of documents. The index folder is cleared before each run, and the time is recorded before and after indexing; the difference is the time taken to index.
###Code
import time
Tcollect = []
for idx in range(1,11):
#print(idx)
df_resize = df.iloc[:int(len(df)/(10/idx)),:]
!rm -rf ./pd_index
pd_indexer = pt.DFIndexer("./pd_index", overwrite=True, verbose=True)
Tstart = time.perf_counter()
indexref = pd_indexer.index(df_resize["text"], df_resize)
Tend = time.perf_counter()
print(str(10*idx)+f"% df search completed in {Tend - Tstart:0.4f} seconds")
Tcollect.append(Tend - Tstart)
import matplotlib.pyplot as plt
plotdf = pd.DataFrame({
'Percentage Completed':['10%', '20%', '30%', '40%', '50%',
'60%', '70%', '80%', '90%', '100%'],
'Time(s)':Tcollect
})
# a scatter plot comparing num_children and num_pets
plotdf.plot(kind='line',x='Percentage Completed',y='Time(s)',color='blue')
plt.show()
###Output
_____no_output_____
###Markdown
Indexing a Pandas dataframeWe can use a `pt.DFIndexer()` object to do indexing for Pandas dataframe
###Code
#import pandas as pd
!rm -rf ./pd_index
pd_indexer = pt.DFIndexer("./pd_index", overwrite=True, verbose=True, blocks=True)
# optionally modify properties
# index_properies = {"block.indexing":"true", "invertedfile.lexiconscanner":"pointers"}
# indexer.setProperties(**index_properies)
pd_indexer.setProperty("tokeniser" , "EnglishTokeniser")
pd_indexer.setProperty("termpipelines", "Stopwords,PorterStemmer")
#pd_indexer.setProperty("block.indexing","true")
#pd_indexer.setProperty("termpipelines", "Stopwords")
# no stemming or stopwords
#pd_indexer.setProperty("termpipelines", "")
###Output
_____no_output_____
###Markdown
Then there are a number of options to index the dataframe: the first argument should always be a pandas.Series object of strings, which specifies the body of each document. Any arguments after that specify metadata. We can view more useful information about the indexed collection with the `getCollectionStatistics()` call shown below.
###Code
import time
# Add metadata fields as Pandas.Series objects, with the name of the Series object becoming the name of the meta field.
#indexref = pd_indexer.index(df["text"], df["docno"], df["review_id"], df["user_id"], df["business_id"], df["stars"], df["useful"], df["funny"], df["cool"])
Tstart = time.perf_counter()
# Add the entire dataframe as metadata
indexref = pd_indexer.index(df["text"], df)
Tend = time.perf_counter()
print(f"search completed in {Tend - Tstart:0.4f} seconds")
indexinfo = pt.IndexFactory.of(indexref)
print(indexinfo.getCollectionStatistics().toString())
###Output
_____no_output_____
###Markdown
In the above example, the indexed collection had 6928 documents, which contained 388543 word occurrences, of which 12049 were identified as unique words. The total number of postings in the inverted index is 323743. The whole dataframe is indexed, with the `"text"` field being searchable while the remaining fields (e.g. `"review_id"`, `"user_id"`, `"business_id"`, `"stars"`, `"date"`, `"useful"`, `"funny"`, `"cool"`) are stored as metadata which can be displayed when called upon. PyTerrier performs standard stopword removal and applies Porter's stemmer by default, and that is the configuration used in this notebook as well. EnglishTokeniser is the default tokeniser, and case-folding to lower case is applied during tokenization.

Retrieval

Retrieval takes place using the `BatchRetrieve` object, by invoking the `transform()` method for one or more queries. For a quick test, you can just pass your query to `transform()`. BatchRetrieve will return the results as a Pandas dataframe.
###Code
#pt.BatchRetrieve(indexref).search("he")
#this ranker will make the candidate set of documents for each query
#BM25 = pt.BatchRetrieve(indexref, controls = {"wmodel": "BM25"}, num_results=5)
#these rankers we will use to re-rank the BM25 results
#TF_IDF = pt.BatchRetrieve(indexref, controls = {"wmodel": "TF_IDF"}, num_results=5)
#PL2 = pt.BatchRetrieve(indexref, controls = {"wmodel": "PL2"}, num_results=5)
#pipe = BM25 >> (TF_IDF ** PL2)
#pipe.transform("really cute restaurant")
#pt.BatchRetrieve(indexref, controls = {"wmodel": "PL2"}, num_results=5).search("Really cute restaurant")
#pt.BatchRetrieve(indexref, wmodel="BM25", properties={"termpipelines" : "Stopwords,PorterStemmer"})
#pt.BatchRetrieve(indexref, controls = {"wmodel": "BM25"}, properties={"termpipelines" : ""}, num_results=10).search("Really cute restaurant")
#pt.BatchRetrieve(indexref, metadata=["business_id", "stars"], num_results=10).search("Really cute restaurant")
###Output
_____no_output_____
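###Markdown
As a quick sanity check, the cell below is a minimal, uncommented retrieval sketch; it assumes `indexref` from the indexing cells above, and the query string is just an illustrative example taken from the commented-out code.
###Code
# Sketch: a minimal BM25 retrieval over the indexed reviews
bm25 = pt.BatchRetrieve(indexref, wmodel="BM25", num_results=5)
results = bm25.search("really cute restaurant")
results[["docno", "rank", "score"]]
###Output
_____no_output_____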
###Markdown
However, most IR experiments will use a set of queries. You can pass such a set using a dataframe as input.
###Code
#import pandas as pd
#topics = pd.DataFrame([["q1", "Really cute restaurant"], ["q2", "restaurant"]],columns=['qid','query'])
#pt.BatchRetrieve(indexref, metadata=["text"], num_results=5).transform(topics)
###Output
_____no_output_____
###Markdown
Simple Search Engine

- Prompting user input for search string
- Prompting user input for the Top N results to return
- Display the time taken to complete the search
- Display the search results

Definition for Simple Search Engine
###Code
def SimpleSearch():
import time
import pandas as pd
pd.reset_option('^display.', silent=True)
#pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
#pd.set_option('display.width', None)
#pd.set_option('display.max_colwidth', -1)
search_str = input("Please enter your search string: ")
#print("Search string: ", search_str)
TopN = input("Please enter number results to display: ")
#print("Top N results: ", TopN)
Tstart = time.perf_counter()
topics = pd.DataFrame([["q1", "#1("+search_str+")"]],columns=['qid','query'])
results = pt.BatchRetrieve(indexref, wmodel="BM25", properties={"termpipelines" : "Stopwords,PorterStemmer"}, metadata=["text"], num_results=int(TopN)).transform(topics)
Tend = time.perf_counter()
print(f"search completed in {Tend - Tstart:0.4f} seconds")
if len(results) == 0:
print("no result found!")
#else:
#print("len: ", len(results))
#print(results)
return results
###Output
_____no_output_____
###Markdown
Search 1

Keywords: "restaurant"
###Code
SimpleSearch()
###Output
Please enter your search string: restaurant
Please enter number results to display: 5
search completed in 0.0621 seconds
###Markdown
Search 2

Keywords: "honda"
###Code
SimpleSearch()
###Output
Please enter your search string: honda
Please enter number results to display: 5
search completed in 0.0228 seconds
no result found!
###Markdown
Search 3

Key phrase: "homemade pasta"
###Code
SimpleSearch()
###Output
Please enter your search string: homemade pasta
Please enter number results to display: 5
search completed in 0.0500 seconds
###Markdown
Search 4

Key phrase: "homemade meatballs"
###Code
SimpleSearch()
###Output
Please enter your search string: homemade meatballs
Please enter number results to display: 5
search completed in 0.0329 seconds
###Markdown
trial
###Code
# define an example dataframe of documents
import pandas as pd
!rm -rf ./trial_index
trial_df = pd.DataFrame({
'docno':
['1', '2', '3'],
'url':
['url1', 'url2', 'url3'],
'text':
['He ran out of money, so he had to stop playing cook the loss',
'He The waves were crashing on the shore; it was a, cooking waves',
'The body may perhaps compensates for the loss, cooking waves threshold']
})
#trial_df = pd.read_csv("small_B1_edit.csv", dtype = str)
#trial_df
# index the text, record the docnos as metadata
trial_indexer = pt.DFIndexer("./trial_index", blocks=True)
trial_indexer.setProperty("tokeniser" , "EnglishTokeniser")
trial_indexer.setProperty("termpipelines" , "Stopwords,PorterStemmer")
#trial_indexer.setProperty("termpipelines" , "")
trial_indexref = trial_indexer.index(trial_df["text"], trial_df["docno"])
trial_indexinfo = pt.IndexFactory.of(trial_indexref)
print(trial_indexinfo.getCollectionStatistics().toString())
search_str = "cooking threshold"
topics = pd.DataFrame([["q1", "#1("+search_str+")"]],columns=['qid','query'])
#pt.rewrite.reset()
#sdm = pt.rewrite.SequentialDependence()
#sdm = pt.rewrite.SequentialDependence(prox_model="org.terrier.matching.models.Dirichlet_LM")
#pt.BatchRetrieve(trial_indexref, properties={"termpipelines" :"Stopwords,PorterStemmer"}, controls={"wmodel" : "TF_IDF"})
pipeline = pt.BatchRetrieve(trial_indexref, wmodel="BM25")
#dph = pt.BatchRetrieve(trial_indexref, wmodel="dph")
#pipeline = sdm >> dph
pipeline.transform(topics)
###Output
Number of documents: 3
Number of terms: 12
Number of postings: 16
Number of fields: 0
Number of tokens: 17
Field names: []
Positions: true
code/interpretability/TP_interpretability.ipynb | ###Markdown
Practical session: Interpretability in Machine learning

Machine learning algorithms often behave as black boxes. In this practical session, we will see some tools to gain interpretability on our models. In the first part of this session, we will focus on model agnostic methods. In the second part, we will concentrate on techniques for neural network models.

Model agnostic Methods

Before starting with a simple regression problem, run the code below to install compatible versions of pandas and pandas profiling which we will be using to explore our data.
###Code
!pip install pandas-profiling==2.8.0 > /dev/null 2>&1
!pip install pandas==0.25 > /dev/null 2>&1
###Output
_____no_output_____
###Markdown
Restart the runtime after executing the cell above.

Dataset

We will begin with a simple regression problem: predicting California house prices from 8 numerical features. Scikit-learn provides the dataset, and a full description is available [here](https://scikit-learn.org/stable/datasets/real_world.html#california-housing-dataset).
###Code
import pandas as pd
from sklearn.datasets import fetch_california_housing
cal_housing = fetch_california_housing()
feature_names = cal_housing.feature_names
X = pd.DataFrame(cal_housing.data, columns=feature_names)
y = cal_housing.target
X.head()
###Output
_____no_output_____
###Markdown
I encourage you to explore the dataset by yourself using pandas and seaborn; it is always a good exercise. Nonetheless, today's practical session is not designed to train the best possible models but to learn how to interpret them. It is also a good opportunity to present a friendly tool for exploring datasets: [Pandas profiling](https://pandas-profiling.github.io/pandas-profiling/docs/master/rtd/). Pandas profiling provides automatic data exploration and HTML reports in a one-liner. I would still recommend doing the exploration by yourself in other projects, but it is helpful for getting a quick overview of what your dataset looks like.
###Code
from pandas_profiling import ProfileReport
ProfileReport(X, sort="None")
# if this cell returns an error restart the kernel and re-run the previous cells
###Output
_____no_output_____
###Markdown
Now use scikit-learn's ```train_test_split``` method to split your dataset into train and test sets (10% test).
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = ...
###Output
_____no_output_____
###Markdown
Train a linear regression, a random forest and a neural network to predict the house prices. Use a [pipeline](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.make_pipeline.html) to create a model that performs both the scaling and the training algorithm. Test all models on your test set to compare their performances. Feel free to train and test more sophisticated models such as XGBoost, LightGBM...
###Code
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
lr = ...
rf = ...
mlp = ...
lr.fit(X_train, y_train)
rf.fit(X_train, y_train)
mlp.fit(X_train, y_train)
... #test your models
###Output
_____no_output_____
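###Markdown
One possible way to fill in the two exercise cells above is sketched below; the hyperparameters are illustrative only, not tuned.
###Code
# Sketch: 90/10 split, then three scaled pipelines (illustrative hyperparameters)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)
lr = make_pipeline(StandardScaler(), LinearRegression())
rf = make_pipeline(StandardScaler(), RandomForestRegressor(n_estimators=100, random_state=0))
mlp = make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0))
for name, est in [('linear regression', lr), ('random forest', rf), ('multi layer perceptron', mlp)]:
    est.fit(X_train, y_train)
    print(f"{name}: R^2 on test set = {est.score(X_test, y_test):.3f}")
###Output
_____no_output_____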
###Markdown
Linear models are considered intrinsically interpretable. Using the ```coef_``` attribute of your model, visualize the importance of each of the features of the linear model.
###Code
# to access the model part of your pipeline: lr[1]
...
###Output
_____no_output_____
###Markdown
Feature importance

We will begin by looking at feature importance. Scikit-learn implements some native methods to compute the feature importance of tree-based methods. We will use an external library called [Eli5](https://eli5.readthedocs.io/en/latest/overview.html#features) to compute the feature permutation method, which is model agnostic and can thus be applied to our three models.
###Code
!pip install eli5 > /dev/null 2>&1
###Output
_____no_output_____
###Markdown
Use ```PermutationImportance``` to compute the feature importance of your models (documentation [here](https://eli5.readthedocs.io/en/latest/blackbox/permutation_importance.html)). Plot them for each of your models. Are the feature importances of the linear model similar to its coefficients? Are the features equally important across your models? Create a dictionary containing the top 5 features for each of your models (**key**: model name, **value**: dataframe of feature importances).
###Code
import eli5
from eli5.sklearn import PermutationImportance
import matplotlib.pyplot as plt
import seaborn as sns
features_importance_dict = {}
for model, name in zip([lr, rf, mlp], ['linear regression', 'random forest', 'multi layer perceptron']):
plt.figure()
permumtation_import = PermutationImportance(...)
features_importance = {'Feature_name':feature_names, 'Importance':permumtation_import.feature_importances_}
features_importance = pd.DataFrame(features_importance) # dataframe containing the features names and their importance
features_importance = features_importance.sort_values(...) # sort the dataframe by feature importance
features_importance_dict[name] = ... #add the dataframe to your dictionnary
ax = sns.barplot(x=..., y=..., data=features_importance) #plot the model's features importance
plt.title(name)
###Output
_____no_output_____
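###Markdown
A possible completion of the loop above, shown for a single model (a sketch; it assumes the fitted pipelines from the previous cells and fills the dictionary the same way for the other two models).
###Code
# Sketch: permutation importance for the random forest pipeline on the test set
perm = PermutationImportance(rf, random_state=0).fit(X_test, y_test)
fi = pd.DataFrame({'Feature_name': feature_names, 'Importance': perm.feature_importances_})
fi = fi.sort_values('Importance', ascending=False)
features_importance_dict['random forest'] = fi
sns.barplot(x='Importance', y='Feature_name', data=fi)
plt.title('random forest')
plt.show()
###Output
_____no_output_____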
###Markdown
You may have noticed that the geographical position seems to be among the most important features for some of your models. We can use [folium](http://python-visualization.github.io/folium/) to plot these on a map and get a better overview.
###Code
import folium
latmean = X.Latitude.mean()
lonmean = X.Longitude.mean()
map = folium.Map(location=[latmean,lonmean],
zoom_start=6,tiles = 'Mapbox bright')
def color(value):
    # prices are floats, so use comparisons instead of `in range(...)`
    if value < 150000:
        col = 'green'
    elif value < 250000:
        col = 'yellow'
    elif value < 350000:
        col = 'orange'
    else:
        col = 'red'
    return col
map = folium.Map(location=[latmean,lonmean],
zoom_start=6)
# Plot markers for the first 300 test houses, coloured by price
for lat,lan,value in zip(X_test['Latitude'][:300],X_test['Longitude'][:300],y_test[:300]*100000):
folium.Marker(location=[lat,lan],icon= folium.Icon(color=color(value),icon_color='black',icon = 'home')).add_to(map)
map
###Output
_____no_output_____
###Markdown
PDP and ICE plots

We will use the [pdpbox](https://pdpbox.readthedocs.io/en/latest/) library to generate our PDP and ICE plots.
###Code
!pip install pdpbox > /dev/null 2>&1
###Output
_____no_output_____
###Markdown
The following code shows you how to produce a PDP plot for the random forest model.

```python
from pdpbox import pdp, get_dataset, info_plots
pdp_feat = pdp.pdp_isolate(model=rf, dataset=X_test, model_features=feature_names, feature='MedInc')
pdp.pdp_plot(pdp_feat, 'MedInc', plot_lines=True, frac_to_plot=0.5)
plt.show()
```

Use it to generate the PDP plots for the three most important features of each of your models. What is the nature of their relationship with the target?
###Code
from pdpbox import pdp, get_dataset, info_plots
model = rf #lr, mlp
model_name = 'random forest' #'linear regression', 'multi layer perceptron'
top_3_features = features_importance_dict[model_name].Feature_name[:3].values
for i, feature in enumerate(top_3_features, 1):
pdp_feat = ...
...
###Output
_____no_output_____
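###Markdown
A sketch of one way to complete the loop above for the random forest, reusing the example call from the text (it assumes `features_importance_dict` has been filled in the feature-importance section).
###Code
# Sketch: PDP plots for the top-3 random forest features
for feature in features_importance_dict['random forest'].Feature_name[:3]:
    pdp_feat = pdp.pdp_isolate(model=rf, dataset=X_test, model_features=feature_names, feature=feature)
    pdp.pdp_plot(pdp_feat, feature, plot_lines=True, frac_to_plot=0.5)
    plt.show()
###Output
_____no_output_____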
###Markdown
It is also possible to visualize the combined effect of two features:
###Code
features_to_plot = ['Latitude', 'Longitude']
inter1 = pdp.pdp_interact(model=model, dataset=X_test, model_features=feature_names, features=features_to_plot)
pdp.pdp_interact_plot(pdp_interact_out=inter1, feature_names=features_to_plot, plot_type='contour')
plt.show()
###Output
_____no_output_____
###Markdown
Scikit-learn also provides methods to generate such plots, but they may offer less flexibility.
###Code
from sklearn.inspection import partial_dependence
from sklearn.inspection import PartialDependenceDisplay
for model, model_name in zip([lr, rf, mlp], ['linear regression', 'random forest', 'multi layer perceptron']):
    top_3_features = features_importance_dict[model_name].Feature_name[:3].values
display = PartialDependenceDisplay.from_estimator(
model,
X_test,
top_3_features,
kind="both",
subsample=50,
n_jobs=3,
n_cols=3,
grid_resolution=20,
random_state=0,
ice_lines_kw={"color": "tab:blue", "alpha": 0.2, "linewidth": 0.5},
pd_line_kw={"color": "tab:orange", "linestyle": "--"}
)
display.figure_.suptitle(f"Partial dependence for {model_name} model")
display.figure_.subplots_adjust(hspace=0.3)
for model, name in zip([lr, rf, mlp], ['linear regression', 'random forest', 'multi layer perceptron']):
_, ax = plt.subplots(ncols=3, figsize=(9, 4))
top_2_features = features_importance_dict[name].Feature_name[:3].values
features = [top_2_features[0], top_2_features[1], (top_2_features[0], top_2_features[1])]
display = PartialDependenceDisplay.from_estimator(
model,
X_test,
features,
kind="average",
n_jobs=3,
grid_resolution=20,
ax=ax,
)
display.figure_.suptitle(f"Partial dependence for {name} model")
display.figure_.subplots_adjust(wspace=0.4, hspace=0.3)
###Output
_____no_output_____
###Markdown
SHAP

Previous methods provided global explanations of our models. We will now focus on local interpretability methods. We will begin with SHAP, which is based on the estimation of Shapley values. The library SHAP implements the SHAP method (and many others). Take inspiration from the following [documentation](https://shap.readthedocs.io/en/latest/example_notebooks/tabular_examples/model_agnostic/Diabetes%20regression.html) to produce a visualization of the estimated Shapley values of your different models, first for a single example using the ```force_plot``` method and then for the entire test dataset using the ```summary_plot``` method.
###Code
!pip install shap > /dev/null 2>&1
import shap
shap.initjs() #needed to plot results directly on the notebook
idx = 1 # index of the instance we want to explain
explainer = ...
shap_values = ...
... #single exemple plot
plt.figure()
... #Summary on the dataset. To speed up we just compute the shap values for 20 exemples
###Output
_____no_output_____
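###Markdown
A sketch of the SHAP workflow for one model (the random forest pipeline), using the model-agnostic `KernelExplainer` with a small background sample to keep the computation fast; the same pattern applies to the other models.
###Code
# Sketch: Shapley value estimation for the random forest pipeline
background = shap.sample(X_train, 50, random_state=0)   # small background set for KernelExplainer
explainer = shap.KernelExplainer(rf.predict, background)

# summary on 20 test examples
shap_values = explainer.shap_values(X_test.iloc[:20, :])
shap.summary_plot(shap_values, X_test.iloc[:20, :])

# single example (kept as the last expression so the JS force plot renders)
shap_values_single = explainer.shap_values(X_test.iloc[idx, :])
shap.force_plot(explainer.expected_value, shap_values_single, X_test.iloc[idx, :])
###Output
_____no_output_____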
###Markdown
Lime

We also saw in class another model agnostic local interpretability method. Many implementations of the LIME method are available in Python. In this practical session, we will use the [implementation provided by the authors](https://github.com/marcotcr/lime).
###Code
!pip install lime > /dev/null 2>&1
###Output
_____no_output_____
###Markdown
LIME provides easy-to-understand and friendly-looking explanations for your model predictions. You first need to instantiate an explainer (in our case a ```LimeTabularExplainer```) and then call the ```explain_instance``` method of the explainer to get the explanations.
###Code
import lime
import lime.lime_tabular
index = 0
explainer = lime.lime_tabular.LimeTabularExplainer(X_test.values, feature_names=feature_names, mode="regression")
exp = explainer.explain_instance(X_test.iloc[index], rf.predict, num_features=5, top_labels=1)
exp.show_in_notebook(show_table=True, show_all=True)
###Output
_____no_output_____
###Markdown
Classification

LIME also works with classification problems. We will repeat the previous experiment using a different dataset for [breast cancer prediction](https://scikit-learn.org/stable/datasets/toy_dataset.html#breast-cancer-dataset) and a decision tree algorithm.
###Code
from sklearn.datasets import load_breast_cancer
breast_cancer = load_breast_cancer()
feature_names = breast_cancer.feature_names
target_names = breast_cancer.target_names
X = pd.DataFrame(breast_cancer.data, columns=feature_names)
y = breast_cancer.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)
X_train.head()
###Output
_____no_output_____
###Markdown
Train a decision tree (with max_depth=5) on this dataset and plot the confusion matrix on the test dataset.
###Code
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import ConfusionMatrixDisplay
dt = ...
...
print("Descision Tree score: ...")
...
###Output
_____no_output_____
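###Markdown
One possible completion of the cell above (a sketch; the scaler is not strictly needed for a tree but keeps the pipeline pattern, and it makes `dt[1]` in the next cell point at the tree itself).
###Code
# Sketch: decision tree pipeline + confusion matrix on the test set
dt = make_pipeline(StandardScaler(), DecisionTreeClassifier(max_depth=5, random_state=0))
dt.fit(X_train, y_train)
print(f"Decision Tree score: {dt.score(X_test, y_test):.2f}")
ConfusionMatrixDisplay.from_estimator(dt, X_test, y_test, display_labels=target_names)
plt.show()
###Output
_____no_output_____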
###Markdown
Decision trees are also interpretable models. Scikit-learn provides an efficient way to visualize their structure.
###Code
import matplotlib.pyplot as plt
import sklearn.tree as tree
plt.figure(figsize=(20,20))
tree.plot_tree(dt[1])
plt.show()
###Output
_____no_output_____
###Markdown
Explain the predictions of your model on some examples. For classification tasks, LIME needs the predicted "probabilities" of the model. Use the ```predict_proba``` method of your classifier instead of the ```predict``` method when calling ```explain_instance```. Also, don't forget to remove the ```mode="regression"``` argument when instantiating the ```LimeTabularExplainer```. Are the explanations consistent with the decision graph?
###Code
explainer = lime.lime_tabular.LimeTabularExplainer(...)
index = 0
exp = explainer.explain_instance(...)
exp.show_in_notebook(show_table=True, show_all=True)
###Output
_____no_output_____
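###Markdown
A possible completion (sketch): the explainer is built in classification mode (the default) with the class names, and `predict_proba` is passed to `explain_instance`.
###Code
# Sketch: LIME explanation for one test instance of the breast cancer classifier
explainer = lime.lime_tabular.LimeTabularExplainer(X_train.values,
                                                   feature_names=feature_names,
                                                   class_names=target_names)
index = 0
exp = explainer.explain_instance(X_test.iloc[index], dt.predict_proba, num_features=5, top_labels=1)
exp.show_in_notebook(show_table=True, show_all=True)
###Output
_____no_output_____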
###Markdown
Text data

The authors also provide nice visualizations for text data. We will now train a random forest to classify whether scientific texts are about medicine or space. We will be using two categories from the [newsgroup dataset](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html) available in scikit-learn.
###Code
from sklearn.datasets import fetch_20newsgroups
categories = [
'sci.med',
'sci.space'
]
train_data = fetch_20newsgroups(subset='train', categories=categories)
test_data = fetch_20newsgroups(subset='test', categories=categories)
class_names = train_data.target_names
X_train, y_train = train_data.data, train_data.target
X_test, y_test = test_data.data, test_data.target
###Output
_____no_output_____
###Markdown
Here is an example of text to classify:
###Code
X_train[0]
###Output
_____no_output_____
###Markdown
Let's train a random forest to classify our dataset.
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.pipeline import make_pipeline
model = make_pipeline(
CountVectorizer(max_df= 0.5, ngram_range= (1, 2)),
TfidfTransformer(),
RandomForestClassifier()
)
model.fit(X_train, y_train)
print(f"Model score: {model.score(X_test, y_test):.2f}")
###Output
_____no_output_____
###Markdown
We will use a specific ```LimeTextExplainer``` to explain the predictions.
###Code
from lime.lime_text import LimeTextExplainer
explainer = LimeTextExplainer(class_names=class_names)
index = 11
exp = explainer.explain_instance(X_test[index], model.predict_proba, num_features=6)
prediction = model.predict_proba([X_test[index]])
class_predicted = class_names[prediction.argmax(1)[0]]
class_proba = prediction.max(1)[0]
true_class = class_names[y_test[index]]
print(f'Class predicted: {class_predicted} (p={class_proba})')
print(f'True class: {class_names[y_test[index]]}')
###Output
_____no_output_____
###Markdown
Here are the top words used by the classifier.
###Code
fig = exp.as_pyplot_figure()
###Output
_____no_output_____
###Markdown
These explanations seem plausible, let's visualize these words in their context:
###Code
exp.show_in_notebook(text=True)
###Output
_____no_output_____
###Markdown
Some of the words are in the newsgroup header! Would you trust such a classifier? Scikit-learn provides an option to remove all headers and footers. Train a new model on the dataset with headers and footers removed and compare its F1-score with the previous model.
###Code
train_data = fetch_20newsgroups(subset='train',remove=('headers', 'footers', 'quotes'),
categories=categories)
X_train, y_train = train_data.data, train_data.target
model = ...
...
print(f"Model score: {model.score(X_test, y_test):.2f}")
###Output
_____no_output_____
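###Markdown
A sketch of one way to complete the cell above: rebuild the same pipeline, fit it on the header/footer-free training set, and report its accuracy and F1-score.
###Code
# Sketch: retrain on the cleaned training set and compare scores
from sklearn.metrics import f1_score

model = make_pipeline(
    CountVectorizer(max_df=0.5, ngram_range=(1, 2)),
    TfidfTransformer(),
    RandomForestClassifier()
)
model.fit(X_train, y_train)
print(f"Model score: {model.score(X_test, y_test):.2f}")
print(f"F1-score: {f1_score(y_test, model.predict(X_test)):.2f}")
###Output
_____no_output_____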
###Markdown
Now visualize the explanations computed by LIME on the same example with your new model. Which of the two models would you trust more?
###Code
explainer = ...
index = 11
exp = ...
prediction = ...
class_predicted = ...
class_proba = ...
true_class = ...
...
###Output
_____no_output_____
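###Markdown
A sketch of the completion, mirroring the earlier explanation cell but with the retrained model.
###Code
# Sketch: explain the same test document with the header-free model
explainer = LimeTextExplainer(class_names=class_names)
index = 11
exp = explainer.explain_instance(X_test[index], model.predict_proba, num_features=6)
prediction = model.predict_proba([X_test[index]])
class_predicted = class_names[prediction.argmax(1)[0]]
class_proba = prediction.max(1)[0]
true_class = class_names[y_test[index]]
print(f'Class predicted: {class_predicted} (p={class_proba:.2f})')
print(f'True class: {true_class}')
exp.show_in_notebook(text=True)
###Output
_____no_output_____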
###Markdown
Image Data

Finally, LIME also provides friendly-looking visualizations on images. We will use a subset of ImageNet to test these visualizations.
###Code
!wget https://s3.amazonaws.com/fast-ai-imageclas/imagenette2.tgz
!tar zxvf imagenette2.tgz
import torchvision
import torchvision.transforms as transforms
import torch
import os
from torch.utils.data import Dataset
means, stds = (0.485, 0.456, 0.406), (0.229, 0.224, 0.225)
train_transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(means, stds),
])
test_transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(means, stds),
])
def get_imagenette2_loaders(root_path='./imagenette2', **kwargs):
trainset = torchvision.datasets.ImageFolder(os.path.join(root_path, "train"), transform=train_transform)
trainloader = torch.utils.data.DataLoader(trainset, **kwargs)
testset = torchvision.datasets.ImageFolder(os.path.join(root_path, "val"), transform=test_transform)
testloader = torch.utils.data.DataLoader(testset, **kwargs)
return trainloader, testloader
trainloader, testloader = get_imagenette2_loaders( batch_size=64, shuffle=True, num_workers=2)
labels = ['tench', 'English springer', 'cassette player', 'chain saw', 'church', 'French horn', 'garbage truck', 'gas pump', 'golf ball', 'parachute']
from torchvision.utils import make_grid
import matplotlib.pyplot as plt
inv_normalize = transforms.Normalize(
mean= [-m/s for m, s in zip(means, stds)],
std= [1/s for s in stds]
)
x, _ = next(iter(trainloader))
img_grid = make_grid(x[:16])
img_grid = inv_normalize(img_grid)
plt.figure(figsize=(20,15))
plt.imshow(img_grid.permute(1, 2, 0))
plt.axis('off')
###Output
_____no_output_____
###Markdown
We will train a neural network to classify these images. However, training neural networks on high-definition images may be long and difficult. We will use a pre-trained VGG11 and replace the fully connected part of the network to match the ten classes (this is called transfer learning). Complete the following code to instantiate a model predicting among ten classes with pre-trained features.
###Code
import torch.nn as nn
import torch
model = torchvision.models.vgg11(pretrained=True)
print(model)
for param in model.features.parameters():
    param.requires_grad = False # we freeze the feature extraction part of the network
model.classifier = nn.Sequential(
...
)
model = model.cuda()
###Output
_____no_output_____
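###Markdown
One possible classifier head (a sketch): VGG11's convolutional part outputs a 512x7x7 feature map for 224x224 inputs, so the first linear layer takes 512*7*7 = 25088 features; the layer sizes here are illustrative.
###Code
# Sketch: a small replacement head ending in the 10 Imagenette classes
model.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 4096),
    nn.ReLU(inplace=True),
    nn.Dropout(0.5),
    nn.Linear(4096, 10)
)
model = model.cuda()
###Output
_____no_output_____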
###Markdown
Fill in the following code to implement the training loop.
###Code
from tqdm import tqdm
criterion_classifier = nn.CrossEntropyLoss(reduction='mean')
def train(model, optimizer, trainloader, epochs=30):
t = tqdm(range(epochs))
for epoch in t:
corrects = 0
total = 0
for x, y in trainloader:
loss = 0
x = x.cuda()
y = y.cuda()
y_hat = ...
loss += criterion_classifier(...)
_, predicted = y_hat.max(1)
corrects += predicted.eq(y).sum().item()
total += y.size(0)
optimizer.zero_grad()
loss.backward()
optimizer.step()
t.set_description(f'epoch:{epoch} current accuracy:{round(corrects / total * 100, 2)}%')
return (corrects / total)
###Output
_____no_output_____
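###Markdown
For reference, the two missing statements are a forward pass and the classification loss; the sketch below runs one optimisation step on a single batch, with its own throw-away optimiser since the real one is created in the next cell.
###Code
# Sketch: one training step on a single batch
demo_optimizer = torch.optim.Adam(model.classifier.parameters(), lr=5e-3)
x_demo, y_demo = next(iter(trainloader))
x_demo, y_demo = x_demo.cuda(), y_demo.cuda()
y_hat = model(x_demo)                          # forward pass -> logits of shape (batch, 10)
loss = criterion_classifier(y_hat, y_demo)     # cross-entropy against the integer labels
demo_optimizer.zero_grad()
loss.backward()
demo_optimizer.step()
print(f"demo batch loss: {loss.item():.3f}")
###Output
_____no_output_____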
###Markdown
Train your model. One or two epochs should be enough since we are using transfer learning.
###Code
learning_rate = 5e-3
epochs = 1
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=learning_rate)
...
###Output
_____no_output_____
###Markdown
Test your network to check that it reaches a reasonable classification accuracy.
###Code
def test(model, dataloader):
test_corrects = 0
total = 0
with torch.no_grad():
for x, y in dataloader:
x = x.cuda()
y = y.cuda()
y_hat = model(x).argmax(1)
test_corrects += y_hat.eq(y).sum().item()
total += y.size(0)
return test_corrects / total
model.eval()
test_acc = test(model, testloader)
print(f'Test accuracy: {test_acc:.2f} %')
###Output
_____no_output_____
###Markdown
Using a single example, we will now use lime to visualize the important parts of the image for our model prediction.
###Code
import numpy as np

idx = 0
img = inv_normalize(x[idx])
np_img = np.transpose(img.cpu().detach().numpy(), (1,2,0))*255
np_img = np_img.astype(np.uint8)
plt.imshow(np_img)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Let's first verify our model prediction:
###Code
input = x[idx].unsqueeze(0).cuda()
output = model(input)
_, prediction = torch.topk(output, 1)
print(f"Model's prediction: {labels[prediction.item()]}")
###Output
_____no_output_____
###Markdown
LIME provides a ```LimeImageExplainer``` to deal with images. However, the ```LimeImageExplainer``` requires a callable function that directly produces predictions for a list of images (the perturbed versions of the original image) in the form of a numpy array. Our PyTorch model works with PyTorch ```Tensor``` mini-batches and outputs ```Tensor``` objects. We thus need to wrap our model and the associated pre-processing into a single callable function to use the ```LimeImageExplainer```.
###Code
import torch.nn.functional as F
lime_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(means, stds),
])
def batch_predict(images):
with torch.no_grad():
model.eval()
batch = torch.stack(tuple(lime_transform(i) for i in images), dim=0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
batch = batch.to(device)
logits = model(batch)
return logits.detach().cpu().numpy()
###Output
_____no_output_____
###Markdown
We can now use the LIME explainer to visualize which parts of the image were the most important.
###Code
from lime import lime_image
from skimage.segmentation import mark_boundaries
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(np_img,
batch_predict, # classification function
top_labels=5,
hide_color=0,
num_samples=1000)
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0], positive_only=False, num_features=10, hide_rest=False)
img_boundry = mark_boundaries(temp/255.0, mask)
plt.imshow(img_boundry)
###Output
_____no_output_____
###Markdown
Model specific methods

We just saw many methods for model agnostic interpretability. The second part of this session is dedicated to model-specific methods. In particular, we will implement methods specific to neural networks.

Vanilla gradient back-propagation

We will now implement three methods to generate saliency maps on our images. One of the simplest ways to generate saliency maps is certainly to backpropagate the gradients of the predicted output directly to the image input. This method, called vanilla gradient backpropagation, is presented in the article "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps". Let's first visualize an image for which we will generate a saliency map according to our model's prediction.
###Code
import numpy as np
x, _ = next(iter(trainloader))
idx = 0
img = x[idx]
np_img = np.transpose(inv_normalize(img).cpu().detach().numpy(), (1,2,0))
plt.imshow(np_img)
plt.axis('off')
###Output
_____no_output_____
###Markdown
By default, input tensors do not require gradients in PyTorch. Thus, we first need to set the image tensor to require gradients so that we can catch them during the backward pass.
###Code
img = img.unsqueeze(0).cuda() # we need to set the input on GPU before the requires_grad operation!
img.requires_grad_();
###Output
_____no_output_____
###Markdown
We will now compute the model's prediction for this image and backpropagate from this prediction to the image.
###Code
output = model(img)
output_idx = output.argmax()
output_max = output[0, output_idx]
output_max.backward()
###Output
_____no_output_____
###Markdown
We can now generate a saliency map where important pixels correspond to large gradients!
###Code
saliency, _ = torch.max(img.grad.data.abs(), dim=1)
saliency = saliency.squeeze(0)
plt.figure(figsize=(15,10))
plt.subplot(1,2,1)
plt.imshow(np_img)
plt.axis('off')
plt.subplot(1,2,2)
plt.imshow(saliency.cpu(), cmap='hot')
plt.axis('off')
###Output
_____no_output_____
###Markdown
Try with different images.
###Code
...
###Output
_____no_output_____
###Markdown
Smooth grad

A simple way to generate smoother visualizations, called [Smooth-grad](https://arxiv.org/pdf/1706.03825.pdf), consists of averaging saliency maps from augmented versions of the original image. Complete the following function to generate the gradient of an image according to the model's prediction.
###Code
def get_vanilla_grad(img, model):
...#retain gradients
...
return img.grad
###Output
_____no_output_____
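###Markdown
One possible completion of the function above (a sketch): the incoming tensor is not a leaf (it is `img + noise` in the loop below), so `retain_grad()` is needed before backpropagating from the maximal output.
###Code
# Sketch: vanilla gradient of the top prediction w.r.t. the (noisy) input image
def get_vanilla_grad(img, model):
    img.retain_grad()                      # keep gradients on a non-leaf tensor
    output = model(img)
    output_max = output[0, output.argmax()]
    model.zero_grad()
    output_max.backward()
    return img.grad
###Output
_____no_output_____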
###Markdown
We will now generate perturbed versions of the image by adding Gaussian noise to the original image. For every generated image we will compute the corresponding gradients and average them to generate the final saliency map.
###Code
import numpy as np
stdev_spread=0.15
n_samples=100
stdev = stdev_spread * (img.max() - img.min())
total_gradients = torch.zeros_like(img, device='cuda')
for i in range(n_samples):
noise = np.random.normal(0, stdev.item(), img.shape).astype(np.float32)
noisy_img = img + torch.tensor(noise, device='cuda', requires_grad=True)
grad= get_vanilla_grad(noisy_img, model)
total_gradients += grad * grad #using the square of the gradients generates smoother visualizations
#total_gradients += grad
total_gradients /= n_samples
saliency, _ = torch.max(total_gradients.abs(), dim=1)
saliency = saliency.squeeze(0)
plt.figure(figsize=(15,10))
plt.subplot(1,2,1)
plt.imshow(np_img)
plt.axis('off')
plt.subplot(1,2,2)
plt.imshow(saliency.cpu(), cmap='hot')
plt.axis('off')
###Output
_____no_output_____
###Markdown
Try it with other images.
###Code
...
###Output
_____no_output_____
###Markdown
Grad-CAM

Instead of propagating the gradients to the inputs, the [Grad-CAM](https://arxiv.org/abs/1610.02391) method generates a saliency map by multiplying the outputs of the final feature map with an average of its gradients. It thus generates coarse saliency maps, sometimes more relevant than pixel-level ones. We will create a 'hook' to keep both the activations and the gradients of a network layer. This operation is a bit tricky; just keep in mind that it is a way to keep activations and gradients in a single object during the forward and the backward pass.
###Code
class HookFeatures():
def __init__(self, module):
self.feature_hook = module.register_forward_hook(self.feature_hook_fn)
def feature_hook_fn(self, module, input, output):
self.features = output.clone().detach()
self.gradient_hook = output.register_hook(self.gradient_hook_fn)
def gradient_hook_fn(self, grad):
self.gradients = grad
def close(self):
self.feature_hook.remove()
self.gradient_hook.remove()
###Output
_____no_output_____
###Markdown
We will 'hook' the activations and gradients of the last convolutional layer.
###Code
print(model)
hook = HookFeatures(model.features[19])
###Output
_____no_output_____
###Markdown
Similar to what we did before, we will backpropagate the gradients of the predicted output on the feature map this time and get both the activations and the gradients thanks to our hook on the last convolutional layer.
###Code
output = model(img)
output_idx = output.argmax()
output_max = output[0, output_idx]
output_max.backward()
gradients = hook.gradients
activations = hook.features
pooled_gradients = torch.mean(gradients, dim=[0, 2, 3]) # we take the average gradient of every channel
for i in range(activations.shape[1]):
    activations[:, i, :, :] *= pooled_gradients[i] # we multiply every channel of the feature map by its corresponding averaged gradient
###Output
_____no_output_____
###Markdown
We can now take the average of all channels of the gradient-weighted feature map to generate a heat map, keeping only the positive values to get the positive influences only. We also need to reshape the generated heat map to match the original input size.
###Code
import cv2
heatmap = torch.mean(activations, dim=1).squeeze()
heatmap = np.maximum(heatmap.detach().cpu(), 0)
heatmap /= torch.max(heatmap)
heatmap = cv2.resize(np.float32(heatmap), (img.shape[2], img.shape[3]))
heatmap = np.uint8(255 * heatmap)
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_RAINBOW) / 255
superposed_img = (heatmap) * 0.4 + np_img
plt.figure(figsize=(8,8))
plt.imshow(np.clip(superposed_img,0,1))
plt.axis('off')
###Output
_____no_output_____
###Markdown
We must remove our hook; it won't be used in the rest of the session.
###Code
hook.close()
###Output
_____no_output_____
###Markdown
Captum

[Captum](https://captum.ai/) is a library developed by Facebook to generate explanations for PyTorch models. It implements various other saliency map methods that we did not cover in class. You should look at the documentation and find [other possible methods](https://captum.ai/docs/algorithms) to gain interpretability in your PyTorch models. We will quickly try some of them so you know how to use them if you need to.
###Code
!pip install captum
from captum.attr import IntegratedGradients
from captum.attr import GradientShap
from captum.attr import Occlusion
from captum.attr import NoiseTunnel
from captum.attr import GuidedGradCam
from captum.attr import DeepLift
from captum.attr import visualization as viz
def plot_heatmap(attributions, img):
_ = viz.visualize_image_attr_multiple(np.transpose(attributions.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(inv_normalize(img).squeeze().cpu().detach().numpy(), (1,2,0)),
methods=["original_image", "heat_map"],
signs=['all', 'positive'],
cmap='hot',
show_colorbar=True)
# Integradted gradients (https://arxiv.org/abs/1703.01365)
integrated_gradients = IntegratedGradients(model)
attributions = integrated_gradients.attribute(img, target=output_idx, n_steps=200, internal_batch_size=1)
plot_heatmap(attributions, img)
#Noise tunnel (SmoothGrad, VarGrad: https://arxiv.org/abs/1810.03307)
noise_tunnel = NoiseTunnel(integrated_gradients)
attributions = noise_tunnel.attribute(img, nt_samples_batch_size=1, nt_samples=10, nt_type='smoothgrad_sq', target=output_idx)
plot_heatmap(attributions, img)
#Occlusion (https://arxiv.org/abs/1311.2901)
occlusion = Occlusion(model)
attributions = occlusion.attribute(img,
strides = (3, 8, 8),
target=output_idx,
sliding_window_shapes=(3,15, 15),
baselines=0)
plot_heatmap(attributions, img)
#DeepLift (https://arxiv.org/pdf/1704.02685.pdf)
dl = DeepLift(model)
attributions = dl.attribute(img, target=output_idx, baselines=img * 0)
plot_heatmap(attributions, img)
#Guided Grad-CAM (https://arxiv.org/abs/1610.02391)
guided_gc = GuidedGradCam(model, model.features[19])
attribution = guided_gc.attribute(img, target=output_idx)
plot_heatmap(attributions, img)
###Output
_____no_output_____
###Markdown
Feature visualization

The previous methods, which generate saliency maps, were model-specific local-interpretability methods. We will now implement a global interpretability method called feature visualization. I really encourage you to read the [distill publication](https://distill.pub/2017/feature-visualization/) presenting the method. The obtained visualizations are superb! The principle of the method is straightforward: it consists of optimizing a random image to maximize the output of a neural network unit. A unit can be a neuron, a channel of a feature map, or an entire feature map. In this practical session, we will focus on channels. Let's first have a look at our network architecture.
###Code
model
###Output
_____no_output_____
###Markdown
Once again, we will create a hook to keep the activation of intermediate layers. However, we won't need to keep the gradients this time.
###Code
class FeaturesHook():
def __init__(self, module):
self.hook = module.register_forward_hook(self.hook_fn)
def hook_fn(self, module, input, output):
self.features = output
def close(self):
self.hook.remove()
###Output
_____no_output_____
###Markdown
The following code computes the feature visualization for one channel of one layer. We begin by initializing a random image and creating a hook on the desired layer. Then we do a forward pass of our image through the network and compute a loss that maximizes the hooked layer's desired channel. We repeat this operation for several iterations.
###Code
def visualize_feature(model, layer_idx, channel_idx):
img = torch.rand((1,3,224,224), requires_grad=True, device="cuda") #initialize a random image
optimizer = torch.optim.Adam([img], lr=0.1, weight_decay=1e-6)
features_hook = FeaturesHook(model.features[layer_idx]) # hook the desired layer
for n in range(20):
optimizer.zero_grad()
model(img) #forward pass
features_map = features_hook.features
loss = -features_map[0, channel_idx].mean() # maximize channel's output
loss.backward()
optimizer.step()
features_hook.close()
img = img.squeeze(0)
img = inv_normalize(img).cpu().detach().numpy()
img = np.transpose(img, (1,2,0))
img = np.clip(img, 0, 1)
plt.imshow(img)
plt.axis('off')
plt.tight_layout()
#visualize layer one channel 2
visualize_feature(model, 1, 2)
###Output
_____no_output_____
###Markdown
Complete the following code to generate feature visualizations of various filters of layer 1.
###Code
layer = 1
plt.figure(figsize=(20,25))
for i, filter in enumerate(range(0, 10), 1):
plt.subplot(5,5,i)
...
###Output
_____no_output_____
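###Markdown
A sketch of one way to fill in the loop above (the channel indices are arbitrary examples).
###Code
# Sketch: first 10 channels of layer 1 in a 5x5 grid
layer = 1
plt.figure(figsize=(20, 25))
for i, channel in enumerate(range(0, 10), 1):
    plt.subplot(5, 5, i)
    visualize_feature(model, layer, channel)
plt.show()
###Output
_____no_output_____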
###Markdown
Try to visualize deeper layers (5, 9, 17 for instance).
###Code
layer = 5
plt.figure(figsize=(20,25))
for i, filter in enumerate(range(0, 10), 1):
plt.subplot(5,5,i)
...
layer = 9
plt.figure(figsize=(20,25))
for i, filter in enumerate(range(0, 10), 1):
plt.subplot(5,5,i)
...
layer = 17
plt.figure(figsize=(20,25))
for i, filter in enumerate(range(0, 10), 1):
plt.subplot(5,5,i)
...
###Output
_____no_output_____
###Markdown
Obtaining nice visualizations as in the original publication requires some additional tricks. Feel free to read the publication and try it on other more sophisticated networks.
###Code
###Output
_____no_output_____ |
src/tfm_teci_antonmakarov_gan-trained.ipynb | ###Markdown
Pretrained DCGAN model to generate art

Import libraries
###Code
import numpy as np
import torch
import torchvision.utils as vutils
from torch import nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Set code size and device
###Code
code_size = 100 # Size of the input noise vector for the generator
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') # Use gpu if available
###Output
_____no_output_____
###Markdown
Define the generator network
###Code
class Generator(nn.Module):
def __init__(self):
super().__init__()
# in, out, kernel, stride, padding
self.convt1 = nn.ConvTranspose2d(100, 64*8, 4, 1, 0, bias=False) # 4x4x512
self.convt2 = nn.ConvTranspose2d(64*8, 64*4, 4, 2, 1, bias=False) # 8x8x256
self.convt3 = nn.ConvTranspose2d(64*4, 64*2, 4, 2, 1, bias=False) # 16x16x128
self.convt4 = nn.ConvTranspose2d(64*2, 64, 4, 2, 1, bias=False) # 32x32x64
self.convt5 = nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False) # 64x64x3
self.bn1 = nn.BatchNorm2d(64*8)
self.bn2 = nn.BatchNorm2d(64*4)
self.bn3 = nn.BatchNorm2d(64*2)
self.bn4 = nn.BatchNorm2d(64)
def forward(self, x):
x = F.relu(self.bn1(self.convt1(x)))
x = F.relu(self.bn2(self.convt2(x)))
x = F.relu(self.bn3(self.convt3(x)))
x = F.relu(self.bn4(self.convt4(x)))
x = self.convt5(x).tanh()
return x
###Output
_____no_output_____
###Markdown
Function for model loading
###Code
def load_model(model_path, trained_on):
'''
    Loads a generator model saved at model_path onto the available device
parameters:
model_path: string indicating where is the model (.pt or .pth)
trained_on: string indicating if the model was trained on a cpu ('cpu') or gpu ('cuda:0')
output:
model: Trained generator model instance
VERY IMPORTANT: Make sure to call input = input.to(device)
on any input tensors that you feed to the model if loading on gpu!!!
'''
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = Generator()
if str(device) == trained_on:
model.load_state_dict(torch.load(model_path))
else:
model.load_state_dict(torch.load(model_path, map_location=device))
model.to(device)
model.eval()
return model
###Output
_____no_output_____
###Markdown
Load the model

Use the path to the model that you want to use; in the folder `misc` there is a pretrained one.
###Code
loaded_gen = load_model('../misc/gpu_trained_generator_19_7_14_11.pth', 'cuda:0')
###Output
_____no_output_____
###Markdown
Generate a fake vector

By default there will be 64 images generated.
###Code
default_size = 64
noise = torch.randn(default_size, code_size, 1, 1, device=device)
with torch.no_grad():
fake_images = loaded_gen(noise).cpu()
###Output
_____no_output_____
###Markdown
Plot the generated images
###Code
plt.figure(figsize = (12,12))
plt.axis("off")
plt.title("Fake Images")
generated = vutils.make_grid(fake_images, padding=2, normalize=True)
plt.imshow(np.transpose(generated, (1,2,0)))
###Output
_____no_output_____ |
OxIOD/TinyOdom_OxIOD.ipynb | ###Markdown
Import Training, Validation and Test Set
###Code
sampling_rate = 100
window_size = 200
stride = 10
f = '/home/nesl/swapnil/TinyOdom/Human/datasets/oxiod/' #dataset directory
#Training Set
X, Y_disp, Y_head, Y_pos, x0_list, y0_list, size_of_each, x_vel, y_vel, head_s, head_c, X_orig = import_oxiod_dataset(type_flag = 2,
useMagnetometer = True, useStepCounter = True, AugmentationCopies = 0,
dataset_folder = f,
sub_folders = ['handbag/','handheld/','pocket/','running/','slow_walking/','trolley/'],
sampling_rate = sampling_rate,
window_size = window_size, stride = stride, verbose=False)
#Validation Set
X_val, Y_disp_val, Y_head_val, Y_pos_val, x0_list_val, y0_list_val, size_of_each_val, x_vel_val, y_vel_val, head_s_val, head_c_val, X_orig_val = import_oxiod_dataset(type_flag = 3,
useMagnetometer = True, useStepCounter = True, AugmentationCopies = 0,
dataset_folder = f,
sub_folders = ['handbag/','handheld/','pocket/','running/','slow_walking/','trolley/'],
sampling_rate = sampling_rate,
window_size = window_size, stride = stride, verbose=False)
#Test Set
X_test, Y_disp_test, Y_head_test, Y_pos_test, x0_list_test, y0_list_test, size_of_each_test, x_vel_test, y_vel_test, head_s_test, head_c_test, X_orig_test = import_oxiod_dataset(type_flag = 4,
useMagnetometer = True, useStepCounter = True, AugmentationCopies = 0,
dataset_folder = f,
sub_folders = ['handbag/','handheld/','pocket/','running/','slow_walking/','trolley/'],
sampling_rate = sampling_rate,
window_size = window_size, stride = stride, verbose=False)
###Output
_____no_output_____
###Markdown
Training and NAS
###Code
device = "NUCLEO_F746ZG" #hardware name
model_name = 'TD_Oxiod_'+device+'.hdf5'
dirpath="/home/nesl/Mbed Programs/tinyodom_tcn/" #hardware program directory
HIL = True #use real hardware or proxy?
quantization = False #use quantization or not?
model_epochs = 900 #epochs to train each model for
NAS_epochs = 50 #epochs for hyperparameter tuning
output_name = 'g_model.tflite'
log_file_name = 'log_NAS_Oxiod_'+device+'.csv'
if os.path.exists(log_file_name):
os.remove(log_file_name)
row_write = ['score', 'rmse_vel_x','rmse_vel_y','RAM','Flash','Flops','Latency',
'nb_filters','kernel_size','dilations','dropout_rate','use_skip_connections','norm_flag']
with open(log_file_name, 'a', newline='') as csvfile:
csvwriter = csv.writer(csvfile)
csvwriter.writerow(row_write)
if os.path.exists(log_file_name[0:-4]+'.p'):
os.remove(log_file_name[0:-4]+'.p')
def objective_NN(epochs=500,nb_filters=32,kernel_size=7,dilations=[1, 2, 4, 8, 16, 32, 64, 128],dropout_rate=0,
use_skip_connections=False,norm_flag=0):
inval = 0
rmse_vel_x = 'inf'
rmse_vel_y = 'inf'
batch_size, timesteps, input_dim = 256, window_size, X.shape[2]
i = Input(shape=(timesteps, input_dim))
if(norm_flag==1):
m = TCN(nb_filters=nb_filters,kernel_size=kernel_size,dilations=dilations,dropout_rate=dropout_rate,
use_skip_connections=use_skip_connections,use_batch_norm=True)(i)
else:
m = TCN(nb_filters=nb_filters,kernel_size=kernel_size,dilations=dilations,dropout_rate=dropout_rate,
use_skip_connections=use_skip_connections)(i)
m = tf.reshape(m, [-1, nb_filters, 1])
m = MaxPooling1D(pool_size=(2))(m)
m = Flatten()(m)
m = Dense(32, activation='linear', name='pre')(m)
output1 = Dense(1, activation='linear', name='velx')(m)
output2 = Dense(1, activation='linear', name='vely')(m)
model = Model(inputs=[i], outputs=[output1, output2])
opt = tf.keras.optimizers.Adam()
model.compile(loss={'velx': 'mse','vely':'mse'},optimizer=opt)
Flops = get_flops(model, batch_size=1)
convert_to_tflite_model(model=model,training_data=X,quantization=quantization,output_name=output_name)
maxRAM, maxFlash = return_hardware_specs(device)
if(HIL==True):
convert_to_cpp_model(dirpath)
RAM, Flash, Latency, idealArenaSize, errorCode = HIL_controller(dirpath=dirpath,
chosen_device=device,
window_size=window_size,
number_of_channels = input_dim,
quantization=quantization)
score = -5.0
if(Flash==-1):
row_write = [score, rmse_vel_x,rmse_vel_y,RAM,Flash,Flops,Latency,
nb_filters,kernel_size,dilations,dropout_rate,use_skip_connections,norm_flag]
print('Design choice:',row_write)
with open(log_file_name, 'a', newline='') as csvfile:
csvwriter = csv.writer(csvfile)
csvwriter.writerow(row_write)
return score
elif(Flash!=-1):
checkpoint = ModelCheckpoint(model_name, monitor='loss', verbose=1, save_best_only=True)
model.fit(x=X, y=[x_vel, y_vel],epochs=epochs, shuffle=True,callbacks=[checkpoint],batch_size=batch_size)
model = load_model(model_name,custom_objects={'TCN': TCN})
y_pred = model.predict(X_val)
rmse_vel_x = mean_squared_error(x_vel_val, y_pred[0], squared=False)
rmse_vel_y = mean_squared_error(y_vel_val, y_pred[1], squared=False)
model_acc = -(rmse_vel_x+rmse_vel_y)
resource_usage = (RAM/maxRAM) + (Flash/maxFlash)
score = model_acc + 0.01*resource_usage - 0.05*Latency #weigh each component as you like
row_write = [score, rmse_vel_x,rmse_vel_y,RAM,Flash,Flops,Latency,
nb_filters,kernel_size,dilations,dropout_rate,use_skip_connections,norm_flag]
print('Design choice:',row_write)
with open(log_file_name, 'a', newline='') as csvfile:
csvwriter = csv.writer(csvfile)
csvwriter.writerow(row_write)
else:
score = -5.0
Flash = os.path.getsize(output_name)
RAM = get_model_memory_usage(batch_size=1,model=model)
Latency=-1
max_flops = (30e6)
if(RAM < maxRAM and Flash<maxFlash):
checkpoint = ModelCheckpoint(model_name, monitor='loss', verbose=1, save_best_only=True)
model.fit(x=X, y=[x_vel, y_vel],epochs=epochs, shuffle=True,callbacks=[checkpoint],batch_size=batch_size)
model = load_model(model_name,custom_objects={'TCN': TCN})
y_pred = model.predict(X_val)
rmse_vel_x = mean_squared_error(x_vel_val, y_pred[0], squared=False)
rmse_vel_y = mean_squared_error(y_vel_val, y_pred[1], squared=False)
model_acc = -(rmse_vel_x+rmse_vel_y)
resource_usage = (RAM/maxRAM) + (Flash/maxFlash)
score = model_acc + 0.01*resource_usage - 0.05*(Flops/max_flops) #weigh each component as you like
row_write = [score, rmse_vel_x,rmse_vel_y,RAM,Flash,Flops,Latency,
nb_filters,kernel_size,dilations,dropout_rate,use_skip_connections,norm_flag]
print('Design choice:',row_write)
with open(log_file_name, 'a', newline='') as csvfile:
csvwriter = csv.writer(csvfile)
csvwriter.writerow(row_write)
return score
import pickle
def save_res(data, file_name):
pickle.dump( data, open( file_name, "wb" ) )
min_layer = 3
max_layer = 8
a_list = [1,2,4,8,16,32,64,128,256]
all_combinations = []
dil_list = []
for r in range(len(a_list) + 1):
combinations_object = itertools.combinations(a_list, r)
combinations_list = list(combinations_object)
all_combinations += combinations_list
all_combinations = all_combinations[1:]
for item in all_combinations:
if(len(item) >= min_layer and len(item) <= max_layer):
dil_list.append(list(item))
param_dict = {
'nb_filters': range(2,64),
'kernel_size': range(2,16),
'dropout_rate': np.arange(0.0,0.5,0.1),
'use_skip_connections': [True, False],
'norm_flag': np.arange(0,1),
'dil_list': dil_list
}
def objfunc(args_list):
objective_evaluated = []
start_time = time.time()
for hyper_par in args_list:
nb_filters = hyper_par['nb_filters']
kernel_size = hyper_par['kernel_size']
dropout_rate = hyper_par['dropout_rate']
use_skip_connections = hyper_par['use_skip_connections']
norm_flag=hyper_par['norm_flag']
dil_list = hyper_par['dil_list']
objective = objective_NN(epochs=model_epochs,nb_filters=nb_filters,kernel_size=kernel_size,
dilations=dil_list,
dropout_rate=dropout_rate,use_skip_connections=use_skip_connections,
norm_flag=norm_flag)
objective_evaluated.append(objective)
end_time = time.time()
print('objective:', objective, ' time:',end_time-start_time)
return objective_evaluated
conf_Dict = dict()
conf_Dict['batch_size'] = 1
conf_Dict['num_iteration'] = NAS_epochs
conf_Dict['initial_random']= 5
tuner = Tuner(param_dict, objfunc,conf_Dict)
all_runs = []
results = tuner.maximize()
all_runs.append(results)
save_res(all_runs,log_file_name[0:-4]+'.p')
###Output
_____no_output_____
###Markdown
Train the Best Model
###Code
nb_filters = results['best_params']['nb_filters']
kernel_size = results['best_params']['kernel_size']
dilations = results['best_params']['dilations']
dropout_rate = results['best_params']['dropout_rate']
use_skip_connections = results['best_params']['use_skip_connections']
norm_flag = results['best_params']['norm_flag']
batch_size, timesteps, input_dim = 256, window_size, X.shape[2]
i = Input(shape=(timesteps, input_dim))
if(norm_flag==1):
m = TCN(nb_filters=nb_filters,kernel_size=kernel_size,dilations=dilations,dropout_rate=dropout_rate,
use_skip_connections=use_skip_connections,use_batch_norm=True)(i)
else:
m = TCN(nb_filters=nb_filters,kernel_size=kernel_size,dilations=dilations,dropout_rate=dropout_rate,
use_skip_connections=use_skip_connections)(i)
m = tf.reshape(m, [-1, nb_filters, 1])
m = MaxPooling1D(pool_size=(2))(m)
m = Flatten()(m)
m = Dense(32, activation='linear', name='pre')(m)
output1 = Dense(1, activation='linear', name='velx')(m)
output2 = Dense(1, activation='linear', name='vely')(m)
model = Model(inputs=[i], outputs=[output1, output2])
opt = tf.keras.optimizers.Adam()
model.compile(loss={'velx': 'mse','vely':'mse'},optimizer=opt)
checkpoint = ModelCheckpoint(model_name, monitor='loss', verbose=1, save_best_only=True)
model.fit(x=X, y=[x_vel, y_vel],epochs=model_epochs, shuffle=True,callbacks=[checkpoint],batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Evaluate the Best Model

Velocity Prediction RMSE
###Code
model = load_model(model_name,custom_objects={'TCN': TCN})
y_pred = model.predict(X_test)
rmse_vel_x = mean_squared_error(x_vel_test, y_pred[0], squared=False)
rmse_vel_y = mean_squared_error(y_vel_test, y_pred[1], squared=False)
print('Vel_X RMSE, Vel_Y RMSE:',rmse_vel_x,rmse_vel_y)
###Output
_____no_output_____
###Markdown
ATE and RTE Metrics
###Code
a = 0
b = size_of_each_test[0]
ATE = []
RTE = []
ATE_dist = []
RTE_dist = []
for i in range(len(size_of_each_test)):
X_test_sel = X_test[a:b,:,:]
x_vel_test_sel = x_vel_test[a:b]
y_vel_test_sel = y_vel_test[a:b]
Y_head_test_sel = Y_head_test[a:b]
Y_disp_test_sel = Y_disp_test[a:b]
if(i!=len(size_of_each_test)-1):
a += size_of_each_test[i]
b += size_of_each_test[i]
y_pred = model.predict(X_test_sel)
pointx = []
pointy = []
Lx = x0_list_test[i]
Ly = y0_list_test[i]
for j in range(len(x_vel_test_sel)):
Lx = Lx + (x_vel_test_sel[j]/(((window_size-stride)/stride)))
Ly = Ly + (y_vel_test_sel[j]/(((window_size-stride)/stride)))
pointx.append(Lx)
pointy.append(Ly)
Gvx = pointx
Gvy = pointy
pointx = []
pointy = []
Lx = x0_list_test[i]
Ly = y0_list_test[i]
for j in range(len(x_vel_test_sel)):
Lx = Lx + (y_pred[0][j]/(((window_size-stride)/stride)))
Ly = Ly + (y_pred[1][j]/(((window_size-stride)/stride)))
pointx.append(Lx)
pointy.append(Ly)
Pvx = pointx
Pvy = pointy
at, rt, at_all, rt_all = Cal_TE(Gvx, Gvy, Pvx, Pvy,
sampling_rate=sampling_rate,window_size=window_size,stride=stride)
ATE.append(at)
RTE.append(rt)
ATE_dist.append(Cal_len_meters(Gvx, Gvy))
RTE_dist.append(Cal_len_meters(Gvx, Gvy, 600))
print('ATE, RTE, Trajectory Length, Trajectory Length (60 seconds)',ATE[i],RTE[i],ATE_dist[i],RTE_dist[i])
print('Median ATE and RTE', np.median(ATE),np.median(RTE))
###Output
_____no_output_____
###Markdown
Sample Trajectory Plotting
###Code
#you can use the size_of_each_test variable to control the region to plot. We plot for the last trajectory
a = sum(size_of_each_test[0:5])
b = sum(size_of_each_test[0:5])+900
X_test_sel = X_test[a:b,:,:]
x_vel_test_sel = x_vel_test[a:b]
y_vel_test_sel = y_vel_test[a:b]
Y_head_test_sel = Y_head_test[a:b]
Y_disp_test_sel = Y_disp_test[a:b]
y_pred = model.predict(X_test_sel)
pointx = []
pointy = []
Lx = x0_list_test[i]
Ly = y0_list_test[i]
for j in range(len(x_vel_test_sel)):
Lx = Lx + (x_vel_test_sel[j]/(((window_size-stride)/stride)))
Ly = Ly + (y_vel_test_sel[j]/(((window_size-stride)/stride)))
pointx.append(Lx)
pointy.append(Ly)
Gvx = pointx
Gvy = pointy
pointx = []
pointy = []
Lx = x0_list_test[i]
Ly = y0_list_test[i]
for j in range(len(x_vel_test_sel)):
Lx = Lx + (y_pred[0][j]/(((window_size-stride)/stride)))
Ly = Ly + (y_pred[1][j]/(((window_size-stride)/stride)))
pointx.append(Lx)
pointy.append(Ly)
Pvx = pointx
Pvy = pointy
print('Plotting Trajectory of length (meters): ',Cal_len_meters(Gvx, Gvy))
ptox = Pvx
ptoy = Pvy
plt.plot(Gvx,Gvy,label='Ground Truth',color='salmon')
plt.plot(ptox,ptoy,label='TinyOdom',color='green',linestyle='-')
plt.grid()
plt.legend(loc='best')
plt.title('PDR - OxIOD Dataset')
plt.xlabel('East (m)')
plt.ylabel('North (m)')
plt.show()
###Output
_____no_output_____
###Markdown
Error Evolution
###Code
#For the last trajectory
a = sum(size_of_each_test[0:5])
b = sum(size_of_each_test[0:5])+size_of_each_test[5]
X_test_sel = X_test[a:b,:,:]
x_vel_test_sel = x_vel_test[a:b]
y_vel_test_sel = y_vel_test[a:b]
Y_head_test_sel = Y_head_test[a:b]
Y_disp_test_sel = Y_disp_test[a:b]
y_pred = model.predict(X_test_sel)
pointx = []
pointy = []
Lx = x0_list_test[i]
Ly = y0_list_test[i]
for j in range(len(x_vel_test_sel)):
Lx = Lx + (x_vel_test_sel[j]/(((window_size-stride)/stride)))
Ly = Ly + (y_vel_test_sel[j]/(((window_size-stride)/stride)))
pointx.append(Lx)
pointy.append(Ly)
Gvx = pointx
Gvy = pointy
pointx = []
pointy = []
Lx = x0_list_test[i]
Ly = y0_list_test[i]
for j in range(len(x_vel_test_sel)):
Lx = Lx + (y_pred[0][j]/(((window_size-stride)/stride)))
Ly = Ly + (y_pred[1][j]/(((window_size-stride)/stride)))
pointx.append(Lx)
pointy.append(Ly)
Pvx = pointx
Pvy = pointy
at, rt, at_all, rt_all = Cal_TE(Gvx, Gvy, Pvx, Pvy,
sampling_rate=sampling_rate,window_size=window_size,stride=stride)
x_ax = np.linspace(0,60,len(rt_all))
print('Plotting for trajectory of length (meters): ',Cal_len_meters(Gvx, Gvy))
plt.plot(x_ax,rt_all,label='TinyOdom',color='green',linestyle='-')
plt.legend()
plt.xlabel('Time (seconds)')
plt.ylabel('Position Error (m)')
plt.title('PDR - OxIOD Dataset')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Deployment Conversion to TFLite
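`convert_to_tflite_model` is a helper from this repository. For orientation only, a Keras-to-TFLite export with optional post-training quantization typically looks like the sketch below; the function name and options here are assumptions, not the helper's exact code.

```python
# Illustrative sketch of a Keras to TFLite export (the project's helper may differ).
import tensorflow as tf

def to_tflite_sketch(keras_model, output_name="g_model.tflite", quantize=False):
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    if quantize:
        converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
    tflite_bytes = converter.convert()
    with open(output_name, "wb") as f:
        f.write(tflite_bytes)
    return output_name
```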
###Code
convert_to_tflite_model(model=model,training_data=X_tr,quantization=quantization,output_name='g_model.tflite')
###Output
_____no_output_____
###Markdown
Conversion to C++
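`convert_to_cpp_model` is also a project helper. For TFLite Micro style deployments, the usual step is to embed the `.tflite` flatbuffer as a C byte array; the sketch below shows that idea in plain Python (the array and file names are illustrative, and the helper may do this differently).

```python
# Illustrative sketch: dump a .tflite flatbuffer as a C array for TFLite Micro.
with open("g_model.tflite", "rb") as f:
    data = f.read()

lines = ["const unsigned char g_model[] = {"]
for i in range(0, len(data), 12):
    lines.append("  " + ", ".join(f"0x{b:02x}" for b in data[i:i + 12]) + ",")
lines.append("};")
lines.append(f"const unsigned int g_model_len = {len(data)};")

with open("model_data.cc", "w") as f:
    f.write("\n".join(lines))
```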
###Code
convert_to_cpp_model(dirpath)
###Output
_____no_output_____ |
final/Best CV/Training/3stagenn-10folds-train.ipynb | ###Markdown
- a notebook to save the preprocessing models and train/save the NN models - all necessary outputs are stored in MODEL_DIR = output/kaggle/working/model - put those into a dataset, and load it from the inference notebook
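The same switch is used for every artifact below: fit and pickle when `IS_TRAIN` is true, otherwise read the pickle back at inference time. A minimal sketch of that pattern (the scaler and file name here are only illustrative stand-ins for the real preprocessors and checkpoints):

```python
# Sketch of the fit-and-save / load-and-reuse switch used throughout this notebook.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

IS_TRAIN = True
MODEL_DIR = "model"
X = np.random.rand(8, 3)

if IS_TRAIN:
    scaler = StandardScaler().fit(X)                            # fit on training data
    pd.to_pickle(scaler, f"{MODEL_DIR}/example_scaler.pkl")     # persist for inference
else:
    scaler = pd.read_pickle(f"{MODEL_DIR}/example_scaler.pkl")  # reuse at inference time
X_scaled = scaler.transform(X)
```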
###Code
import sys
sys.path.append('../input/iterative-stratification/')
sys.path.append('../input/umaplearn/umap')
%mkdir model
# %mkdir interim
from scipy.sparse.csgraph import connected_components
from umap import UMAP
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold
import numpy as np
import random
import pandas as pd
import matplotlib.pyplot as plt
import os
import copy
import seaborn as sns
import time
from sklearn import preprocessing
from sklearn.metrics import log_loss
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA,FactorAnalysis
from sklearn.manifold import TSNE
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
print(torch.cuda.is_available())
import warnings
# warnings.filterwarnings('ignore')
torch.__version__
NB = '25'
IS_TRAIN = True
MODEL_DIR = "model" # "../model"
NSEEDS = 5 # 5
DEVICE = ('cuda' if torch.cuda.is_available() else 'cpu')
EPOCHS = 15
BATCH_SIZE = 256
LEARNING_RATE = 5e-3
WEIGHT_DECAY = 1e-5
EARLY_STOPPING_STEPS = 10
EARLY_STOP = False
NFOLDS = 10 # 5
PMIN = 0.0005
PMAX = 0.9995
SMIN = 0.0
SMAX = 1.0
train_features = pd.read_csv('../input/lish-moa/train_features.csv')
train_targets_scored = pd.read_csv('../input/lish-moa/train_targets_scored.csv')
train_targets_nonscored = pd.read_csv('../input/lish-moa/train_targets_nonscored.csv')
test_features = pd.read_csv('../input/lish-moa/test_features.csv')
sample_submission = pd.read_csv('../input/lish-moa/sample_submission.csv')
train_targets_nonscored = train_targets_nonscored.loc[:, train_targets_nonscored.sum() != 0]
print(train_targets_nonscored.shape)
for c in train_targets_nonscored.columns:
if c != "sig_id":
train_targets_nonscored[c] = np.maximum(PMIN, np.minimum(PMAX, train_targets_nonscored[c]))
print("(nsamples, nfeatures)")
print(train_features.shape)
print(train_targets_scored.shape)
print(train_targets_nonscored.shape)
print(test_features.shape)
print(sample_submission.shape)
GENES = [col for col in train_features.columns if col.startswith('g-')]
CELLS = [col for col in train_features.columns if col.startswith('c-')]
def seed_everything(seed=1903):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
seed_everything(seed=1903)
# GENES
n_comp = 90
n_dim = 45
data = pd.concat([pd.DataFrame(train_features[GENES]), pd.DataFrame(test_features[GENES])])
if IS_TRAIN:
fa = FactorAnalysis(n_components=n_comp, random_state=1903).fit(data[GENES])
pd.to_pickle(fa, f'{MODEL_DIR}/{NB}_factor_analysis_g.pkl')
umap = UMAP(n_components=n_dim, random_state=1903).fit(data[GENES])
pd.to_pickle(umap, f'{MODEL_DIR}/{NB}_umap_g.pkl')
else:
fa = pd.read_pickle(f'{MODEL_DIR}/{NB}_factor_analysis_g.pkl')
umap = pd.read_pickle(f'{MODEL_DIR}/{NB}_umap_g.pkl')
data2 = (fa.transform(data[GENES]))
data3 = (umap.transform(data[GENES]))
train2 = data2[:train_features.shape[0]]
test2 = data2[-test_features.shape[0]:]
train3 = data3[:train_features.shape[0]]
test3 = data3[-test_features.shape[0]:]
train2 = pd.DataFrame(train2, columns=[f'fa_G-{i}' for i in range(n_comp)])
train3 = pd.DataFrame(train3, columns=[f'umap_G-{i}' for i in range(n_dim)])
test2 = pd.DataFrame(test2, columns=[f'fa_G-{i}' for i in range(n_comp)])
test3 = pd.DataFrame(test3, columns=[f'umap_G-{i}' for i in range(n_dim)])
train_features = pd.concat((train_features, train2, train3), axis=1)
test_features = pd.concat((test_features, test2, test3), axis=1)
#CELLS
n_comp = 50
n_dim = 25
data = pd.concat([pd.DataFrame(train_features[CELLS]), pd.DataFrame(test_features[CELLS])])
if IS_TRAIN:
fa = FactorAnalysis(n_components=n_comp, random_state=1903).fit(data[CELLS])
pd.to_pickle(fa, f'{MODEL_DIR}/{NB}_factor_analysis_c.pkl')
umap = UMAP(n_components=n_dim, random_state=1903).fit(data[CELLS])
pd.to_pickle(umap, f'{MODEL_DIR}/{NB}_umap_c.pkl')
else:
fa = pd.read_pickle(f'{MODEL_DIR}/{NB}_factor_analysis_c.pkl')
umap = pd.read_pickle(f'{MODEL_DIR}/{NB}_umap_c.pkl')
data2 = (fa.transform(data[CELLS]))
data3 = (umap.transform(data[CELLS]))
train2 = data2[:train_features.shape[0]]
test2 = data2[-test_features.shape[0]:]
train3 = data3[:train_features.shape[0]]
test3 = data3[-test_features.shape[0]:]
train2 = pd.DataFrame(train2, columns=[f'fa_C-{i}' for i in range(n_comp)])
train3 = pd.DataFrame(train3, columns=[f'umap_C-{i}' for i in range(n_dim)])
test2 = pd.DataFrame(test2, columns=[f'fa_C-{i}' for i in range(n_comp)])
test3 = pd.DataFrame(test3, columns=[f'umap_C-{i}' for i in range(n_dim)])
train_features = pd.concat((train_features, train2, train3), axis=1)
test_features = pd.concat((test_features, test2, test3), axis=1)
from sklearn.preprocessing import QuantileTransformer
for col in (GENES + CELLS):
vec_len = len(train_features[col].values)
vec_len_test = len(test_features[col].values)
raw_vec = pd.concat([train_features, test_features])[col].values.reshape(vec_len+vec_len_test, 1)
if IS_TRAIN:
transformer = QuantileTransformer(n_quantiles=100, random_state=123, output_distribution="normal")
transformer.fit(raw_vec)
pd.to_pickle(transformer, f'{MODEL_DIR}/{NB}_{col}_quantile_transformer.pkl')
else:
transformer = pd.read_pickle(f'{MODEL_DIR}/{NB}_{col}_quantile_transformer.pkl')
train_features[col] = transformer.transform(train_features[col].values.reshape(vec_len, 1)).reshape(1, vec_len)[0]
test_features[col] = transformer.transform(test_features[col].values.reshape(vec_len_test, 1)).reshape(1, vec_len_test)[0]
print(train_features.shape)
print(test_features.shape)
# train = train_features.merge(train_targets_scored, on='sig_id')
train = train_features.merge(train_targets_nonscored, on='sig_id')
train = train[train['cp_type']!='ctl_vehicle'].reset_index(drop=True)
test = test_features[test_features['cp_type']!='ctl_vehicle'].reset_index(drop=True)
# target = train[train_targets_scored.columns]
target = train[train_targets_nonscored.columns]
train = train.drop('cp_type', axis=1)
test = test.drop('cp_type', axis=1)
print(target.shape)
print(train_features.shape)
print(test_features.shape)
print(train.shape)
print(test.shape)
target_cols = target.drop('sig_id', axis=1).columns.values.tolist()
folds = train.copy()
mskf = MultilabelStratifiedKFold(n_splits=NFOLDS)
for f, (t_idx, v_idx) in enumerate(mskf.split(X=train, y=target)):
folds.loc[v_idx, 'kfold'] = int(f)
folds['kfold'] = folds['kfold'].astype(int)
folds
print(train.shape)
print(folds.shape)
print(test.shape)
print(target.shape)
print(sample_submission.shape)
class MoADataset:
def __init__(self, features, targets):
self.features = features
self.targets = targets
def __len__(self):
return (self.features.shape[0])
def __getitem__(self, idx):
dct = {
'x' : torch.tensor(self.features[idx, :], dtype=torch.float),
'y' : torch.tensor(self.targets[idx, :], dtype=torch.float)
}
return dct
class TestDataset:
def __init__(self, features):
self.features = features
def __len__(self):
return (self.features.shape[0])
def __getitem__(self, idx):
dct = {
'x' : torch.tensor(self.features[idx, :], dtype=torch.float)
}
return dct
def train_fn(model, optimizer, scheduler, loss_fn, dataloader, device):
model.train()
final_loss = 0
for data in dataloader:
optimizer.zero_grad()
inputs, targets = data['x'].to(device), data['y'].to(device)
# print(inputs.shape)
outputs = model(inputs)
loss = loss_fn(outputs, targets)
loss.backward()
optimizer.step()
scheduler.step()
final_loss += loss.item()
final_loss /= len(dataloader)
return final_loss
def valid_fn(model, loss_fn, dataloader, device):
model.eval()
final_loss = 0
valid_preds = []
for data in dataloader:
inputs, targets = data['x'].to(device), data['y'].to(device)
outputs = model(inputs)
loss = loss_fn(outputs, targets)
final_loss += loss.item()
valid_preds.append(outputs.sigmoid().detach().cpu().numpy())
final_loss /= len(dataloader)
valid_preds = np.concatenate(valid_preds)
return final_loss, valid_preds
def inference_fn(model, dataloader, device):
model.eval()
preds = []
for data in dataloader:
inputs = data['x'].to(device)
with torch.no_grad():
outputs = model(inputs)
preds.append(outputs.sigmoid().detach().cpu().numpy())
preds = np.concatenate(preds)
return preds
class Model(nn.Module):
def __init__(self, num_features, num_targets, hidden_size):
super(Model, self).__init__()
self.batch_norm1 = nn.BatchNorm1d(num_features)
self.dropout1 = nn.Dropout(0.15)
self.dense1 = nn.utils.weight_norm(nn.Linear(num_features, hidden_size))
self.batch_norm2 = nn.BatchNorm1d(hidden_size)
self.dropout2 = nn.Dropout(0.3)
self.dense2 = nn.Linear(hidden_size, hidden_size)
self.batch_norm3 = nn.BatchNorm1d(hidden_size)
self.dropout3 = nn.Dropout(0.25)
self.dense3 = nn.utils.weight_norm(nn.Linear(hidden_size, num_targets))
def forward(self, x):
x = self.batch_norm1(x)
x = self.dropout1(x)
x = F.leaky_relu(self.dense1(x))
x = self.batch_norm2(x)
x = self.dropout2(x)
x = F.leaky_relu(self.dense2(x))
x = self.batch_norm3(x)
x = self.dropout3(x)
x = self.dense3(x)
return x
def process_data(data):
data = pd.get_dummies(data, columns=['cp_time','cp_dose'])
return data
feature_cols = [c for c in process_data(folds).columns if c not in target_cols]
feature_cols = [c for c in feature_cols if c not in ['kfold','sig_id']]
len(feature_cols)
num_features=len(feature_cols)
num_targets=len(target_cols)
hidden_size=2048
def run_training(fold, seed):
seed_everything(seed)
train = process_data(folds)
test_ = process_data(test)
trn_idx = train[train['kfold'] != fold].index
val_idx = train[train['kfold'] == fold].index
train_df = train[train['kfold'] != fold].reset_index(drop=True)
valid_df = train[train['kfold'] == fold].reset_index(drop=True)
x_train, y_train = train_df[feature_cols].values, train_df[target_cols].values
x_valid, y_valid = valid_df[feature_cols].values, valid_df[target_cols].values
train_dataset = MoADataset(x_train, y_train)
valid_dataset = MoADataset(x_valid, y_valid)
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
validloader = torch.utils.data.DataLoader(valid_dataset, batch_size=BATCH_SIZE, shuffle=False)
model = Model(
num_features=num_features,
num_targets=num_targets,
hidden_size=hidden_size,
)
model.to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY)
scheduler = optim.lr_scheduler.OneCycleLR(optimizer=optimizer, pct_start=0.2, div_factor=1e3,
max_lr=1e-2, epochs=EPOCHS, steps_per_epoch=len(trainloader))
loss_fn = nn.BCEWithLogitsLoss()
early_stopping_steps = EARLY_STOPPING_STEPS
early_step = 0
oof = np.zeros((len(train), target.iloc[:, 1:].shape[1]))
best_loss = np.inf
best_loss_epoch = -1
if IS_TRAIN:
for epoch in range(EPOCHS):
train_loss = train_fn(model, optimizer, scheduler, loss_fn, trainloader, DEVICE)
valid_loss, valid_preds = valid_fn(model, loss_fn, validloader, DEVICE)
if valid_loss < best_loss:
best_loss = valid_loss
best_loss_epoch = epoch
oof[val_idx] = valid_preds
torch.save(model.state_dict(), f"{MODEL_DIR}/{NB}-nonscored-SEED{seed}-FOLD{fold}_.pth")
elif(EARLY_STOP == True):
early_step += 1
if (early_step >= early_stopping_steps):
break
if epoch % 10 == 0 or epoch == EPOCHS-1:
print(f"seed: {seed}, FOLD: {fold}, EPOCH: {epoch}, train_loss: {train_loss:.6f}, valid_loss: {valid_loss:.6f}, best_loss: {best_loss:.6f}, best_loss_epoch: {best_loss_epoch}")
#--------------------- PREDICTION---------------------
x_test = test_[feature_cols].values
testdataset = TestDataset(x_test)
testloader = torch.utils.data.DataLoader(testdataset, batch_size=BATCH_SIZE, shuffle=False)
model = Model(
num_features=num_features,
num_targets=num_targets,
hidden_size=hidden_size,
)
model.load_state_dict(torch.load(f"{MODEL_DIR}/{NB}-nonscored-SEED{seed}-FOLD{fold}_.pth"))
model.to(DEVICE)
if not IS_TRAIN:
valid_loss, valid_preds = valid_fn(model, loss_fn, validloader, DEVICE)
oof[val_idx] = valid_preds
predictions = np.zeros((len(test_), target.iloc[:, 1:].shape[1]))
predictions = inference_fn(model, testloader, DEVICE)
return oof, predictions
def run_k_fold(NFOLDS, seed):
oof = np.zeros((len(train), len(target_cols)))
predictions = np.zeros((len(test), len(target_cols)))
for fold in range(NFOLDS):
oof_, pred_ = run_training(fold, seed)
predictions += pred_ / NFOLDS
oof += oof_
return oof, predictions
SEED = range(NSEEDS)
oof = np.zeros((len(train), len(target_cols)))
predictions = np.zeros((len(test), len(target_cols)))
time_start = time.time()
for seed in SEED:
oof_, predictions_ = run_k_fold(NFOLDS, seed)
oof += oof_ / len(SEED)
predictions += predictions_ / len(SEED)
print(f"elapsed time: {time.time() - time_start}")
train[target_cols] = oof
test[target_cols] = predictions
print(oof.shape)
print(predictions.shape)
len(target_cols)
# train[target_cols] = np.maximum(PMIN, np.minimum(PMAX, train[target_cols]))
valid_results = train_targets_nonscored.drop(columns=target_cols).merge(train[['sig_id']+target_cols], on='sig_id', how='left').fillna(0)
y_true = train_targets_nonscored[target_cols].values
y_true = y_true > 0.5
y_pred = valid_results[target_cols].values
score = 0
for i in range(len(target_cols)):
score_ = log_loss(y_true[:, i], y_pred[:, i])
score += score_ / target.shape[1]
print("CV log_loss: ", score)
EPOCHS = 25
# NFOLDS = 5
nonscored_target = [c for c in train[train_targets_nonscored.columns] if c != "sig_id"]
nonscored_target
train = train.merge(train_targets_scored, on='sig_id')
target = train[train_targets_scored.columns]
for col in (nonscored_target):
vec_len = len(train[col].values)
vec_len_test = len(test[col].values)
raw_vec = train[col].values.reshape(vec_len, 1)
if IS_TRAIN:
transformer = QuantileTransformer(n_quantiles=100, random_state=0, output_distribution="normal")
transformer.fit(raw_vec)
pd.to_pickle(transformer, f"{MODEL_DIR}/{NB}_{col}_quantile_nonscored.pkl")
else:
transformer = pd.read_pickle(f"{MODEL_DIR}/{NB}_{col}_quantile_nonscored.pkl")
train[col] = transformer.transform(raw_vec).reshape(1, vec_len)[0]
test[col] = transformer.transform(test[col].values.reshape(vec_len_test, 1)).reshape(1, vec_len_test)[0]
target_cols = target.drop('sig_id', axis=1).columns.values.tolist()
train
folds = train.copy()
mskf = MultilabelStratifiedKFold(n_splits=NFOLDS)
for f, (t_idx, v_idx) in enumerate(mskf.split(X=train, y=target)):
folds.loc[v_idx, 'kfold'] = int(f)
folds['kfold'] = folds['kfold'].astype(int)
folds
print(train.shape)
print(folds.shape)
print(test.shape)
print(target.shape)
print(sample_submission.shape)
def process_data(data):
data = pd.get_dummies(data, columns=['cp_time','cp_dose'])
return data
feature_cols = [c for c in process_data(folds).columns if c not in target_cols]
feature_cols = [c for c in feature_cols if c not in ['kfold','sig_id']]
len(feature_cols)
num_features=len(feature_cols)
num_targets=len(target_cols)
hidden_size=2048
def run_training(fold, seed):
seed_everything(seed)
train = process_data(folds)
test_ = process_data(test)
trn_idx = train[train['kfold'] != fold].index
val_idx = train[train['kfold'] == fold].index
train_df = train[train['kfold'] != fold].reset_index(drop=True)
valid_df = train[train['kfold'] == fold].reset_index(drop=True)
x_train, y_train = train_df[feature_cols].values, train_df[target_cols].values
x_valid, y_valid = valid_df[feature_cols].values, valid_df[target_cols].values
train_dataset = MoADataset(x_train, y_train)
valid_dataset = MoADataset(x_valid, y_valid)
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
validloader = torch.utils.data.DataLoader(valid_dataset, batch_size=BATCH_SIZE, shuffle=False)
model = Model(
num_features=num_features,
num_targets=num_targets,
hidden_size=hidden_size,
)
model.to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY)
scheduler = optim.lr_scheduler.OneCycleLR(optimizer=optimizer, pct_start=0.2, div_factor=1e3,
max_lr=1e-2, epochs=EPOCHS, steps_per_epoch=len(trainloader))
loss_fn = nn.BCEWithLogitsLoss()
early_stopping_steps = EARLY_STOPPING_STEPS
early_step = 0
oof = np.zeros((len(train), target.iloc[:, 1:].shape[1]))
best_loss = np.inf
best_loss_epoch = -1
if IS_TRAIN:
for epoch in range(EPOCHS):
train_loss = train_fn(model, optimizer, scheduler, loss_fn, trainloader, DEVICE)
valid_loss, valid_preds = valid_fn(model, loss_fn, validloader, DEVICE)
if valid_loss < best_loss:
best_loss = valid_loss
best_loss_epoch = epoch
oof[val_idx] = valid_preds
torch.save(model.state_dict(), f"{MODEL_DIR}/{NB}-scored-SEED{seed}-FOLD{fold}_.pth")
elif(EARLY_STOP == True):
early_step += 1
if (early_step >= early_stopping_steps):
break
if epoch % 10 == 0 or epoch == EPOCHS-1:
print(f"seed: {seed}, FOLD: {fold}, EPOCH: {epoch}, train_loss: {train_loss:.6f}, valid_loss: {valid_loss:.6f}, best_loss: {best_loss:.6f}, best_loss_epoch: {best_loss_epoch}")
#--------------------- PREDICTION---------------------
x_test = test_[feature_cols].values
testdataset = TestDataset(x_test)
testloader = torch.utils.data.DataLoader(testdataset, batch_size=BATCH_SIZE, shuffle=False)
model = Model(
num_features=num_features,
num_targets=num_targets,
hidden_size=hidden_size,
)
model.load_state_dict(torch.load(f"{MODEL_DIR}/{NB}-scored-SEED{seed}-FOLD{fold}_.pth"))
model.to(DEVICE)
if not IS_TRAIN:
valid_loss, valid_preds = valid_fn(model, loss_fn, validloader, DEVICE)
oof[val_idx] = valid_preds
predictions = np.zeros((len(test_), target.iloc[:, 1:].shape[1]))
predictions = inference_fn(model, testloader, DEVICE)
return oof, predictions
def run_k_fold(NFOLDS, seed):
oof = np.zeros((len(train), len(target_cols)))
predictions = np.zeros((len(test), len(target_cols)))
for fold in range(NFOLDS):
oof_, pred_ = run_training(fold, seed)
predictions += pred_ / NFOLDS
oof += oof_
return oof, predictions
SEED = range(NSEEDS) #[0, 1, 2, 3 ,4]#, 5, 6, 7, 8, 9, 10]
oof = np.zeros((len(train), len(target_cols)))
predictions = np.zeros((len(test), len(target_cols)))
time_start = time.time()
for seed in SEED:
oof_, predictions_ = run_k_fold(NFOLDS, seed)
oof += oof_ / len(SEED)
predictions += predictions_ / len(SEED)
print(f"elapsed time: {time.time() - time_start}")
train[target_cols] = oof
test[target_cols] = predictions
len(target_cols)
valid_results = train_targets_scored.drop(columns=target_cols).merge(train[['sig_id']+target_cols], on='sig_id', how='left').fillna(0)
y_true = train_targets_scored[target_cols].values
y_true = y_true > 0.5
y_pred = valid_results[target_cols].values
score = 0
for i in range(len(target_cols)):
score_ = log_loss(y_true[:, i], y_pred[:, i])
score += score_ / target.shape[1]
print("CV log_loss: ", score)
EPOCHS = 25
# NFOLDS = 5
PMIN = 0.0005
PMAX = 0.9995
for c in train_targets_scored.columns:
if c != "sig_id":
train_targets_scored[c] = np.maximum(PMIN, np.minimum(PMAX, train_targets_scored[c]))
train_targets_scored.columns
train = train[train_targets_scored.columns]
train.columns = [c + "_pred" if (c != 'sig_id' and c in train_targets_scored.columns) else c for c in train.columns]
test = test[train_targets_scored.columns]
test.columns = [c + "_pred" if (c != 'sig_id' and c in train_targets_scored.columns) else c for c in test.columns]
train
train = train.merge(train_targets_scored, on='sig_id')
target = train[train_targets_scored.columns]
from sklearn.preprocessing import QuantileTransformer
scored_target_pred = [c + "_pred" for c in train_targets_scored.columns if c != 'sig_id']
for col in (scored_target_pred):
vec_len = len(train[col].values)
vec_len_test = len(test[col].values)
raw_vec = train[col].values.reshape(vec_len, 1)
if IS_TRAIN:
transformer = QuantileTransformer(n_quantiles=100, random_state=0, output_distribution="normal")
transformer.fit(raw_vec)
pd.to_pickle(transformer, f"{MODEL_DIR}/{NB}_{col}_quantile_scored.pkl")
else:
transformer = pd.read_pickle(f"{MODEL_DIR}/{NB}_{col}_quantile_scored.pkl")
train[col] = transformer.transform(raw_vec).reshape(1, vec_len)[0]
test[col] = transformer.transform(test[col].values.reshape(vec_len_test, 1)).reshape(1, vec_len_test)[0]
target_cols = target.drop('sig_id', axis=1).columns.values.tolist()
train
folds = train.copy()
mskf = MultilabelStratifiedKFold(n_splits=NFOLDS)
for f, (t_idx, v_idx) in enumerate(mskf.split(X=train, y=target)):
folds.loc[v_idx, 'kfold'] = int(f)
folds['kfold'] = folds['kfold'].astype(int)
folds
print(train.shape)
print(folds.shape)
print(test.shape)
print(target.shape)
print(sample_submission.shape)
folds
def process_data(data):
return data
feature_cols = [c for c in folds.columns if c not in target_cols]
feature_cols = [c for c in feature_cols if c not in ['kfold','sig_id']]
len(feature_cols)
feature_cols
folds
EPOCHS = 25
num_features=len(feature_cols)
num_targets=len(target_cols)
hidden_size=1024
def run_training(fold, seed):
seed_everything(seed)
train = process_data(folds)
test_ = process_data(test)
trn_idx = train[train['kfold'] != fold].index
val_idx = train[train['kfold'] == fold].index
train_df = train[train['kfold'] != fold].reset_index(drop=True)
valid_df = train[train['kfold'] == fold].reset_index(drop=True)
x_train, y_train = train_df[feature_cols].values, train_df[target_cols].values
x_valid, y_valid = valid_df[feature_cols].values, valid_df[target_cols].values
train_dataset = MoADataset(x_train, y_train)
valid_dataset = MoADataset(x_valid, y_valid)
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
validloader = torch.utils.data.DataLoader(valid_dataset, batch_size=BATCH_SIZE, shuffle=False)
model = Model(
num_features=num_features,
num_targets=num_targets,
hidden_size=hidden_size,
)
model.to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY)
# scheduler = optim.lr_scheduler.OneCycleLR(optimizer=optimizer, pct_start=0.3, div_factor=1000,
# max_lr=1e-2, epochs=EPOCHS, steps_per_epoch=len(trainloader))
scheduler = optim.lr_scheduler.OneCycleLR(optimizer=optimizer, pct_start=0.2, div_factor=1e3,
max_lr=1e-2, epochs=EPOCHS, steps_per_epoch=len(trainloader))
loss_fn = nn.BCEWithLogitsLoss()
early_stopping_steps = EARLY_STOPPING_STEPS
early_step = 0
oof = np.zeros((len(train), target.iloc[:, 1:].shape[1]))
best_loss = np.inf
best_loss_epoch = -1
if IS_TRAIN:
for epoch in range(EPOCHS):
train_loss = train_fn(model, optimizer, scheduler, loss_fn, trainloader, DEVICE)
valid_loss, valid_preds = valid_fn(model, loss_fn, validloader, DEVICE)
if valid_loss < best_loss:
best_loss = valid_loss
best_loss_epoch = epoch
oof[val_idx] = valid_preds
torch.save(model.state_dict(), f"{MODEL_DIR}/{NB}-scored2-SEED{seed}-FOLD{fold}_.pth")
elif(EARLY_STOP == True):
early_step += 1
if (early_step >= early_stopping_steps):
break
if epoch % 10 == 0 or epoch == EPOCHS-1:
print(f"seed: {seed}, FOLD: {fold}, EPOCH: {epoch}, train_loss: {train_loss:.6f}, valid_loss: {valid_loss:.6f}, best_loss: {best_loss:.6f}, best_loss_epoch: {best_loss_epoch}")
#--------------------- PREDICTION---------------------
x_test = test_[feature_cols].values
testdataset = TestDataset(x_test)
testloader = torch.utils.data.DataLoader(testdataset, batch_size=BATCH_SIZE, shuffle=False)
model = Model(
num_features=num_features,
num_targets=num_targets,
hidden_size=hidden_size,
)
model.load_state_dict(torch.load(f"{MODEL_DIR}/{NB}-scored2-SEED{seed}-FOLD{fold}_.pth"))
model.to(DEVICE)
if not IS_TRAIN:
valid_loss, valid_preds = valid_fn(model, loss_fn, validloader, DEVICE)
oof[val_idx] = valid_preds
predictions = np.zeros((len(test_), target.iloc[:, 1:].shape[1]))
predictions = inference_fn(model, testloader, DEVICE)
return oof, predictions
def run_k_fold(NFOLDS, seed):
oof = np.zeros((len(train), len(target_cols)))
predictions = np.zeros((len(test), len(target_cols)))
for fold in range(NFOLDS):
oof_, pred_ = run_training(fold, seed)
predictions += pred_ / NFOLDS
oof += oof_
return oof, predictions
SEED = range(NSEEDS)
oof = np.zeros((len(train), len(target_cols)))
predictions = np.zeros((len(test), len(target_cols)))
time_start = time.time()
for seed in SEED:
oof_, predictions_ = run_k_fold(NFOLDS, seed)
oof += oof_ / len(SEED)
predictions += predictions_ / len(SEED)
print(f"elapsed time: {time.time() - time_start}")
train[target_cols] = oof
test[target_cols] = predictions
valid_results = train_targets_scored.drop(columns=target_cols).merge(train[['sig_id']+target_cols], on='sig_id', how='left').fillna(0)
y_true = train_targets_scored[target_cols].values
y_true = y_true > 0.5
y_pred = valid_results[target_cols].values
y_pred = np.minimum(SMAX, np.maximum(SMIN, y_pred))
score = 0
for i in range(len(target_cols)):
score_ = log_loss(y_true[:, i], y_pred[:, i])
score += score_ / target.shape[1]
print("CV log_loss: ", score)
sub = sample_submission.drop(columns=target_cols).merge(test[['sig_id']+target_cols], on='sig_id', how='left').fillna(0)
sub.to_csv('submission_kibuna_nn.csv', index=False)
sub
###Output
_____no_output_____ |
gitfr.ipynb | ###Markdown
###Code
!git clone https://github.com/Josepholaidepetro/Financial_Resilience
# Initialising an empty repository
!git init
!git remote -v
!cp -av '/content/drive/MyDrive/Colab Notebooks/lgb.ipynb' '/content/Financial_Resilience'
cd Financial_Resilience
!git status
!git add -A
# Add your mail to the global config-file
!git config --global user.email [email protected]
# Add your username to the global config-file
!git config --global user.name josepholaidepetro
# Commiting the added files and writing a commit message along with it
!git commit -a -m "Upload notebook from colab"
!git config --list
from getpass import getpass
password=getpass("Enter password: ")
!git remote add origins https://josepholaidepetro:{password}@github.com/Josepholaidepetro/Financial_Resilience.git
# To push the committed files into your repository
!git push -u origins main
# To push the committed files into your repository
!git log
rm -rf .git
###Output
_____no_output_____ |
1. Introduction to Python/2. Implementing OOP for Clothing - Part 2.ipynb | ###Markdown
OOP Syntax Exercise - Part 2Now that you've had some practice instantiating objects, it's time to write your own class from scratch. This lesson has two parts. In the first part, you'll write a Pants class. This class is similar to the shirt class with a couple of changes. Then you'll practice instantiating Pants objectsIn the second part, you'll write another class called SalesPerson. You'll also instantiate objects for the SalesPerson.For this exercise, you can do all of your work in this Jupyter notebook. You will not need to import the class because all of your code will be in this Jupyter notebook.Answers are also provided. If you click on the Jupyter icon, you can open a folder called 2.OOP_syntax_pants_practice, which contains this Jupyter notebook ('exercise.ipynb') and a file called answer.py. Pants classWrite a Pants class with the following characteristics:* the class name should be Pants* the class attributes should include * color * waist_size * length * price* the class should have an init function that initializes all of the attributes* the class should have two methods * change_price() a method to change the price attribute * discount() to calculate a discount
###Code
### TODO:
# - code a Pants class with the following attributes
# - color (string) eg 'red', 'yellow', 'orange'
# - waist_size (integer) eg 8, 9, 10, 32, 33, 34
# - length (integer) eg 27, 28, 29, 30, 31
# - price (float) eg 9.28
### TODO: Declare the Pants Class
### TODO: write an __init__ function to initialize the attributes
### TODO: write a change_price method:
# Args:
# new_price (float): the new price of the shirt
# Returns:
# None
### TODO: write a discount method:
# Args:
# discount (float): a decimal value for the discount.
# For example 0.05 for a 5% discount.
#
# Returns:
# float: the discounted price
class Pants:
def __init__(self, color, waist_size, length, price):
self.color = color
self.waist_size = waist_size
self.length = length
self.price = price
def change_price(self, new_price):
self.price = new_price
def discount(self, discount):
return (1-discount)*self.price
###Output
_____no_output_____
###Markdown
Run the code cell below to check resultsIf you run the next code cell and get an error, then revise your code until the code cell doesn't output anything.
###Code
def check_results():
pants = Pants('red', 35, 36, 15.12)
assert pants.color == 'red'
assert pants.waist_size == 35
assert pants.length == 36
assert pants.price == 15.12
pants.change_price(10) == 10
assert pants.price == 10
assert pants.discount(.1) == 9
print('You made it to the end of the check. Nice job!')
check_results()
###Output
You made it to the end of the check. Nice job!
###Markdown
SalesPerson classThe Pants class and Shirt class are quite similar. Here is an exercise to give you more practice writing a class. **This exercise is trickier than the previous exercises.**Write a SalesPerson class with the following characteristics:* the class name should be SalesPerson* the class attributes should include * first_name * last_name * employee_id * salary * pants_sold * total_sales* the class should have an init function that initializes all of the attributes* the class should have four methods * sell_pants() a method to append a sold Pants object to the pants_sold list * calculate_sales() a method to calculate the sales * display_sales() a method to print out all the pants sold with nice formatting * calculate_commission() a method to calculate the salesperson commission based on total sales and a percentage
###Code
### TODO:
# Code a SalesPerson class with the following attributes
# - first_name (string), the first name of the salesperson
# - last_name (string), the last name of the salesperson
# - employee_id (int), the employee ID number like 5681923
# - salary (float), the monthly salary of the employee
# - pants_sold (list of Pants objects),
# pants that the salesperson has sold
# - total_sales (float), sum of sales of pants sold
### TODO: Declare the SalesPerson Class
### TODO: write an __init__ function to initialize the attributes
### Input Args for the __init__ function:
# first_name (str)
# last_name (str)
# employee_id (int)
# . salary (float)
#
# You can initialize pants_sold as an empty list
# You can initialize total_sales to zero.
#
###
### TODO: write a sell_pants method:
#
# This method receives a Pants object and appends
# the object to the pants_sold attribute list
#
# Args:
# pants (Pants object): a pants object
# Returns:
# None
### TODO: write a display_sales method:
#
# This method has no input or outputs. When this method
# is called, the code iterates through the pants_sold list
# and prints out the characteristics of each pair of pants
# line by line. The print out should look something like this
#
# color: blue, waist_size: 34, length: 34, price: 10
# color: red, waist_size: 36, length: 30, price: 14.15
#
#
#
###
### TODO: write a calculate_sales method:
# This method calculates the total sales for the sales person.
# The method should iterate through the pants_sold attribute list
# and sum the prices of the pants sold. The sum should be stored
# in the total_sales attribute and then return the total.
#
# Args:
# None
# Returns:
# float: total sales
#
###
### TODO: write a calculate_commission method:
#
# The salesperson receives a commission based on the total
# sales of pants. The method receives a percentage, and then
# calculate the total sales of pants based on the price,
# and then returns the commission as (percentage * total sales)
#
# Args:
# percentage (float): comission percentage as a decimal
#
# Returns:
# float: total commission
#
#
###
class SalesPerson:
def __init__(self, first_name, last_name, employee_id, salary):
self.first_name = first_name
self.last_name = last_name
self.employee_id = employee_id
self.salary = salary
self.pants_sold = []
self.total_sales = 0
def sell_pants(self, pants):
self.pants_sold.append(pants)
def display_sales(self):
for pant in self.pants_sold:
print("color: {}, waist_size: {}, length: {}, price: {}".format(pant.color, pant.waist_size, pant.length, pant.price))
def calculate_sales(self):
self.total_sales = 0
for pant in self.pants_sold:
self.total_sales += pant.price
return self.total_sales
def calculate_commission(self, percentage):
return self.calculate_sales() * percentage
###Output
_____no_output_____
###Markdown
Run the code cell below to check resultsIf you run the next code cell and get an error, then revise your code until the code cell doesn't output anything.
###Code
def check_results():
pants_one = Pants('red', 35, 36, 15.12)
pants_two = Pants('blue', 40, 38, 24.12)
pants_three = Pants('tan', 28, 30, 8.12)
salesperson = SalesPerson('Amy', 'Gonzalez', 2581923, 40000)
assert salesperson.first_name == 'Amy'
assert salesperson.last_name == 'Gonzalez'
assert salesperson.employee_id == 2581923
assert salesperson.salary == 40000
assert salesperson.pants_sold == []
assert salesperson.total_sales == 0
salesperson.sell_pants(pants_one)
salesperson.pants_sold[0] == pants_one.color
salesperson.sell_pants(pants_two)
salesperson.sell_pants(pants_three)
assert len(salesperson.pants_sold) == 3
assert round(salesperson.calculate_sales(),2) == 47.36
assert round(salesperson.calculate_commission(.1),2) == 4.74
print('Great job, you made it to the end of the code checks!')
check_results()
###Output
Great job, you made it to the end of the code checks!
###Markdown
Check display_sales() methodIf you run the code cell below, you should get output similar to this:```pythoncolor: red, waist_size: 35, length: 36, price: 15.12color: blue, waist_size: 40, length: 38, price: 24.12color: tan, waist_size: 28, length: 30, price: 8.12```
###Code
pants_one = Pants('red', 35, 36, 15.12)
pants_two = Pants('blue', 40, 38, 24.12)
pants_three = Pants('tan', 28, 30, 8.12)
salesperson = SalesPerson('Amy', 'Gonzalez', 2581923, 40000)
salesperson.sell_pants(pants_one)
salesperson.sell_pants(pants_two)
salesperson.sell_pants(pants_three)
salesperson.display_sales()
###Output
color: red, waist_size: 35, length: 36, price: 15.12
color: blue, waist_size: 40, length: 38, price: 24.12
color: tan, waist_size: 28, length: 30, price: 8.12
|
6DRepNet_demo.ipynb | ###Markdown
Paper: https://arxiv.org/abs/2202.12555 GitHub: https://github.com/thohemp/6drepnet Environment setup Check the GPU
###Code
!nvidia-smi
###Output
_____no_output_____
###Markdown
Fetch the code from GitHub
###Code
%cd /content
!git clone https://github.com/thohemp/6DRepNet
###Output
_____no_output_____
###Markdown
Install the required libraries
###Code
%cd /content/6DRepNet
!pip install --upgrade gdown
!pip install git+https://github.com/elliottzheng/face-detection.git@master
###Output
_____no_output_____
###Markdown
Import the libraries
###Code
from model import SixDRepNet
import math
import re
from matplotlib import pyplot as plt
import sys
import os
import argparse
import numpy as np
import cv2
from google.colab.patches import cv2_imshow
import matplotlib.pyplot as plt
from numpy.lib.function_base import _quantile_unchecked
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import transforms
import torch.backends.cudnn as cudnn
import torchvision
import torch.nn.functional as F
import utils
import matplotlib
from PIL import Image
import time
from face_detection import RetinaFace
import glob
from google.colab import files
from tqdm import tqdm
###Output
_____no_output_____
###Markdown
Download the pretrained model
###Code
%cd /content/6DRepNet
!mkdir pretrained
#https://drive.google.com/file/d/1vPNtVu_jg2oK-RiIWakxYyfLPA9rU4R4/view?usp=sharing
pretrained_ckpt = 'pretrained/6DRepNet_300W_LP_AFLW2000.pth'
if not os.path.exists(pretrained_ckpt):
!gdown --id 1vPNtVu_jg2oK-RiIWakxYyfLPA9rU4R4 \
-O {pretrained_ckpt}
snapshot_path = os.path.join("/content/6DRepNet/", "pretrained/6DRepNet_300W_LP_AFLW2000.pth")
###Output
_____no_output_____
###Markdown
Set up the test video Upload the video Video used: https://www.pexels.com/ja-jp/video/3201691/
###Code
%cd /content/6DRepNet
!rm -rf upload
!mkdir -p upload/frames
%cd upload
uploaded = files.upload()
uploaded = list(uploaded.keys())
file_name = uploaded[0]
upload_path = os.path.join("/content/6DRepNet/upload", file_name)
print("upload file here:", upload_path)
###Output
_____no_output_____
###Markdown
Split the video into frame images
###Code
%cd /content/6DRepNet/upload
!ffmpeg -i {upload_path} frames/%06d.png
frames = glob.glob("/content/6DRepNet/upload/frames/*.png")
###Output
_____no_output_____
###Markdown
Head Pose Estimation
###Code
%cd /content/6DRepNet
!rm -rf output
!mkdir -p output/frames
cudnn.enabled = True
gpu = 0
print("Start model setup...")
# Build the model
model = SixDRepNet(
backbone_name='RepVGG-B1g2',
backbone_file='',
deploy=True,
pretrained=False)
detector = RetinaFace(gpu_id=gpu)
# Load the model weights
saved_state_dict = torch.load(os.path.join(snapshot_path), map_location='cpu')
if 'model_state_dict' in saved_state_dict:
model.load_state_dict(saved_state_dict['model_state_dict'])
else:
model.load_state_dict(saved_state_dict)
model.cuda(gpu)
# Test the Model
model.eval() # Change model to 'eval' mode (BN uses moving mean/var).
print("Complete model setup.")
print("loading ", len(frames), " frames...")
process_start = time.time()
with torch.no_grad():
for i in tqdm( range(len(frames)) ):
img_path = frames[i]
frame = np.array(Image.open(img_path))
faces = detector(frame)
for box, landmarks, score in faces:
# Print the location of each face in this image
if score < .95:
continue
x_min = int(box[0])
y_min = int(box[1])
x_max = int(box[2])
y_max = int(box[3])
bbox_width = abs(x_max - x_min)
bbox_height = abs(y_max - y_min)
x_min = max(0,x_min-int(0.2*bbox_height))
y_min = max(0,y_min-int(0.2*bbox_width))
x_max = x_max+int(0.2*bbox_height)
y_max = y_max+int(0.2*bbox_width)
img = frame[y_min:y_max,x_min:x_max]
img = cv2.resize(img, (244, 244))/255.0
img = img.transpose(2, 0, 1)
img = torch.from_numpy(img).type(torch.FloatTensor)
img = torch.Tensor(img).cuda(gpu)
img=img.unsqueeze(0)
start = time.time()
R_pred = model(img)
end = time.time()
#print('Head pose estimation: %2f ms'% ((end - start)*1000.))
euler = utils.compute_euler_angles_from_rotation_matrices(R_pred)*180/np.pi
p_pred_deg = euler[:, 0].cpu()
y_pred_deg = euler[:, 1].cpu()
r_pred_deg = euler[:, 2].cpu()
utils.plot_pose_cube(frame, y_pred_deg, p_pred_deg, r_pred_deg, x_min + int(.5*(x_max-x_min)), y_min + int(.5*(y_max-y_min)), size = bbox_width)
# Uncomment the line below to display the result after each frame
#cv2_imshow(frame)
cv2.imwrite( os.path.join("/content/6DRepNet/output/frames", os.path.basename(img_path)), frame)
process_end = time.time()
print('Complete All Head pose estimation: %2f s'% (process_end - process_start))
print('Average %2f ms/ %06d frames'% (((process_end - process_start)*1000.)/len(frames), len(frames)))
###Output
_____no_output_____
###Markdown
Convert the frame images back into a video
###Code
!ffmpeg -i "/content/6DRepNet/output/frames/%06d.png" -c:v libx264 -vf "format=yuv420p" "/content/6DRepNet/output/result.mp4"
###Output
_____no_output_____
###Markdown
Display the head pose estimation results
###Code
from moviepy.editor import *
from moviepy.video.fx.resize import resize
clip = VideoFileClip("/content/6DRepNet/output/result.mp4")
clip = resize(clip, height=420)
clip.ipython_display()
###Output
_____no_output_____ |
_notebooks/2020-05-16-custom_filters.ipynb | ###Markdown
Creating a Filter, Edge Detection Import resources and display image
###Code
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import cv2
import numpy as np
%matplotlib inline
# Read in the image
image = mpimg.imread('data/curved_lane.jpg')
plt.imshow(image)
###Output
_____no_output_____
###Markdown
Convert the image to grayscale
###Code
# Convert to grayscale for filtering
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
plt.imshow(gray, cmap='gray')
###Output
_____no_output_____
###Markdown
TODO: Create a custom kernelBelow, you've been given one common type of edge detection filter: a Sobel operator.The Sobel filter is very commonly used in edge detection and in finding patterns in intensity in an image. Applying a Sobel filter to an image is a way of **taking (an approximation) of the derivative of the image** in the x or y direction, separately. The operators look as follows.**It's up to you to create a Sobel x operator and apply it to the given image.**For a challenge, see if you can put the image through a series of filters: first one that blurs the image (takes an average of pixels), and then one that detects the edges.
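For reference, a commonly used Sobel x kernel is the transpose of the Sobel y kernel defined in the next cell; the sketch below shows one way to build and apply it (variable names mirror the cell that follows):

```python
# Sketch: Sobel x kernel, which responds to horizontal intensity changes (vertical edges).
import numpy as np

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
# filtered_x = cv2.filter2D(gray, -1, sobel_x)  # apply it exactly like sobel_y below
```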
###Code
# Create a custom kernel
# 3x3 array for edge detection
sobel_y = np.array([[ -1, -2, -1],
[ 0, 0, 0],
[ 1, 2, 1]])
## TODO: Create and apply a Sobel x operator
# Filter the image using filter2D, which has inputs: (grayscale image, bit-depth, kernel)
filtered_image = cv2.filter2D(gray, -1, sobel_y)
plt.imshow(filtered_image, cmap='gray')
###Output
_____no_output_____ |
Lab 5/Practice/1_linear_regression.ipynb | ###Markdown
A tensor is a number, vector, matrix or any n-dimensional array. Problem Statement We'll create a model that predicts crop yields for apples (*target variable*) by looking at the average temperature, rainfall and humidity (*input variables or features*) in different regions. Here's the training data:>Temp | Rain | Humidity | Prediction>--- | --- | --- | ---> 73 | 67 | 43 | 56> 91 | 88 | 64 | 81> 87 | 134 | 58 | 119> 102 | 43 | 37 | 22> 69 | 96 | 70 | 103In a **linear regression** model, each target variable is estimated to be a weighted sum of the input variables, offset by some constant, known as a bias:```yield_apple = w11 * temp + w12 * rainfall + w13 * humidity + b1```It means that the yield of apples is a linear or planar function of the temperature, rainfall & humidity.**Our objective**: Find a suitable set of *weights* and *biases* using the training data, to make accurate predictions. Training DataThe training data can be represented using 2 matrices (inputs and targets), each with one row per observation and one column per variable.
###Code
# Input (temp, rainfall, humidity)
inputs = np.array([[73, 67, 43],
[91, 88, 64],
[87, 134, 58],
[102, 43, 37],
[69, 96, 70]], dtype='float32')
# Target (apples)
targets = np.array([[56],
[81],
[119],
[22],
[103]], dtype='float32')
###Output
_____no_output_____
###Markdown
Before we build a model, we need to convert inputs and targets to PyTorch tensors.
###Code
# Convert inputs and targets to tensors
inputs = torch.from_numpy(inputs)
targets = torch.from_numpy(targets)
print(inputs)
print(targets)
###Output
tensor([[ 73., 67., 43.],
[ 91., 88., 64.],
[ 87., 134., 58.],
[102., 43., 37.],
[ 69., 96., 70.]])
tensor([[ 56.],
[ 81.],
[119.],
[ 22.],
[103.]])
###Markdown
Linear Regression Model (from scratch)The *weights* and *biases* can also be represented as matrices, initialized with random values. The first row of `w` and the first element of `b` are used to predict the target variable, i.e. the yield of apples.
###Code
# Weights and biases
weights = torch.randn(1,3,requires_grad=True)
biases = torch.randn(1, requires_grad=True)
print(weights)
print(biases)
###Output
tensor([[-0.3639, -0.0101, 0.4124]], requires_grad=True)
tensor([0.5093], requires_grad=True)
###Markdown
The *model* is simply a function that performs a matrix multiplication of the input `x` and the weights `w` (transposed) and adds the bias `b` (replicated for each observation).$$\hspace{2.5cm} X \hspace{1.1cm} \times \hspace{1.2cm} W^T \hspace{1.2cm} + \hspace{1cm} b \hspace{2cm}$$$$\left[ \begin{array}{ccc}73 & 67 & 43 \\91 & 88 & 64 \\\vdots & \vdots & \vdots \\69 & 96 & 70\end{array} \right]\times\left[ \begin{array}{c}w_{11} \\w_{12} \\w_{13}\end{array} \right]+\left[ \begin{array}{c}b_{1} \\b_{1} \\\vdots \\b_{1}\end{array} \right]$$
###Code
# Define the model
def model(x):
return x @ weights.t() + biases
###Output
_____no_output_____
###Markdown
The matrix obtained by passing the input data to the model is a set of predictions for the target variables.
###Code
# Generate predictions
predictions = model(inputs)
print(predictions)
# Compare with targets
print(targets)
###Output
tensor([[ 56.],
[ 81.],
[119.],
[ 22.],
[103.]])
###Markdown
Because we've started with random weights and biases, the model does not perform a good job of predicting the target variables. Loss FunctionWe can compare the predictions with the actual targets, using the following method: * Calculate the difference between the two matrices (`preds` and `targets`).* Square all elements of the difference matrix to remove negative values.* Calculate the average of the elements in the resulting matrix.The result is a single number, known as the **mean squared error** (MSE).
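Written out, the three steps above amount to$$\text{MSE}(\hat{y}, y) = \frac{1}{N}\sum_{i=1}^{N}(\hat{y}_i - y_i)^2$$which is exactly what the `mse` function below computes.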
###Code
# MSE loss
def mse(t1, t2):
diff = t1 - t2
#.numel method returns the number of elements in a tensor
# torch.sum return all the sum of elements in the tensor
return torch.sum(diff * diff) / diff.numel()
# Compute loss
loss = mse(predictions, targets)
print(loss)
###Output
tensor(8025.3115, grad_fn=<DivBackward0>)
###Markdown
The resulting number is called the **loss**, because it indicates how bad the model is at predicting the target variables. Lower the loss, better the model. Compute GradientsWith PyTorch, we can automatically compute the gradient or derivative of the `loss` w.r.t. to the weights and biases, because they have `requires_grad` set to `True`.More on autograd: https://pytorch.org/docs/stable/autograd.htmlmodule-torch.autograd
###Code
# Compute gradients
loss.backward()
###Output
_____no_output_____
###Markdown
The gradients are stored in the `.grad` property of the respective tensors.
###Code
# Gradients for weights
print(weights)
print(weights.grad)
# Gradients for bias
print(biases)
print(biases.grad)
###Output
tensor([0.5093], requires_grad=True)
tensor([-169.6786])
###Markdown
A key insight from calculus is that the gradient indicates the rate of change of the loss, or the slope of the loss function w.r.t. the weights and biases. * If a gradient element is **positive**, * **increasing** the element's value slightly will **increase** the loss. * **decreasing** the element's value slightly will **decrease** the loss.* If a gradient element is **negative**, * **increasing** the element's value slightly will **decrease** the loss. * **decreasing** the element's value slightly will **increase** the loss. The increase or decrease is proportional to the value of the gradient. Finally, we'll reset the gradients to zero before moving forward, because PyTorch accumulates gradients.
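This is the intuition behind the gradient descent update applied in the next cells, where a small learning rate $\eta$ (here $10^{-5}$) scales each step:$$w \leftarrow w - \eta \, \frac{\partial \, \text{loss}}{\partial w} \qquad b \leftarrow b - \eta \, \frac{\partial \, \text{loss}}{\partial b}$$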
###Code
#Before we proceed, we reset the gradients to zero by calling .zero_() method.
#We need to do this, because PyTorch accumulates, gradients i.e. the next time we call .backward on the loss,
#the new gradient values will get added to the existing gradient values, which may lead to unexpected results.
weights.grad.zero_()
biases.grad.zero_()
print(weights.grad)
print(biases.grad)
###Output
tensor([[0., 0., 0.]])
tensor([0.])
###Markdown
Adjust weights and biases using gradient descentWe'll reduce the loss and improve our model using the gradient descent algorithm, which has the following steps:1. Generate predictions2. Calculate the loss3. Compute gradients w.r.t the weights and biases4. Adjust the weights by subtracting a small quantity proportional to the gradient5. Reset the gradients to zero
###Code
# Generate predictions
preds = model(inputs)
print(preds)
# Calculate the loss
loss = mse(preds, targets)
print(loss)
# Compute gradients
loss.backward()
print(weights.grad)
print(biases.grad)
###Output
_____no_output_____
###Markdown
###Code
# Adjust weights & reset gradients
#We use torch.no_grad to indicate to PyTorch that we shouldn't track, calculate or modify gradients while updating the weights and biases.
#We multiply the gradients with a really small number (10^-5 in this case), to ensure that we don't modify the weights by a really large amount
#After we have updated the weights, we reset the gradients back to zero, to avoid affecting any future computations.
with torch.no_grad():
weights -= weights.grad * 1e-5
biases -= biases.grad * 1e-5
weights.grad.zero_()
biases.grad.zero_()
print(weights)
print(biases)
###Output
tensor([[-0.3639, -0.0101, 0.4124]], requires_grad=True)
tensor([0.5110], requires_grad=True)
###Markdown
With the new weights and biases, the model should have a lower loss.
###Code
# Calculate loss
prediction = model(inputs)
loss = mse(prediction, targets)
print(loss)
###Output
tensor(8025.0244, grad_fn=<DivBackward0>)
###Markdown
Train for multiple epochsTo reduce the loss further, we repeat the process of adjusting the weights and biases using the gradients multiple times. Each iteration is called an epoch.
###Code
# Train for 100 epochs
for i in range(100):
prediction = model(inputs)
loss = mse(prediction, targets)
loss.backward()
with torch.no_grad():
weights -= weights.grad * 1e-5
biases -= biases.grad * 1e-5
weights.grad.zero_()
biases.grad.zero_()
# Calculate loss
prediction = model(inputs)
loss = mse(prediction, targets)
print(loss)
# Print predictions
prediction
# Print targets
targets
###Output
_____no_output_____ |
Machine Learning/Heart Disease Prediction .ipynb | ###Markdown
Heart Disease Prediction using ML Legends: 1. cp {chest pain type} 2. restecg {resting EKG results} 3. exang {exercise-induced angina}: 0 (no exercise-induced angina), 1 (exercise-induced angina present) 4. slope {the slope of the peak exercise ST segment} 5. ca {number of major vessels (0-3) colored by fluoroscopy} 6. thal {thallium stress test result}
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline
df=pd.read_csv('heart.csv')
df.head()
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis
###Code
df.describe()
pd.set_option("display.float", "{:.2f}".format)
df.describe()
df.target.value_counts().plot(kind='bar',color=['#DC143C','#66CDAA'])
###Output
_____no_output_____
###Markdown
Inferences:The data is balanced
###Code
df.isna().sum()
###Output
_____no_output_____
###Markdown
Inferences: No null values are present Segregating Categorical and Continous values
###Code
categorical_val = []
continous_val = []
for column in df.columns:
print('==============================')
print(f"{column} : {df[column].unique()}")
if len(df[column].unique()) <= 10:
categorical_val.append(column)
else:
continous_val.append(column)
###Output
==============================
age : [63 37 41 56 57 44 52 54 48 49 64 58 50 66 43 69 59 42 61 40 71 51 65 53
46 45 39 47 62 34 35 29 55 60 67 68 74 76 70 38 77]
==============================
sex : [1 0]
==============================
cp : [3 2 1 0]
==============================
trestbps : [145 130 120 140 172 150 110 135 160 105 125 142 155 104 138 128 108 134
122 115 118 100 124 94 112 102 152 101 132 148 178 129 180 136 126 106
156 170 146 117 200 165 174 192 144 123 154 114 164]
==============================
chol : [233 250 204 236 354 192 294 263 199 168 239 275 266 211 283 219 340 226
247 234 243 302 212 175 417 197 198 177 273 213 304 232 269 360 308 245
208 264 321 325 235 257 216 256 231 141 252 201 222 260 182 303 265 309
186 203 183 220 209 258 227 261 221 205 240 318 298 564 277 214 248 255
207 223 288 160 394 315 246 244 270 195 196 254 126 313 262 215 193 271
268 267 210 295 306 178 242 180 228 149 278 253 342 157 286 229 284 224
206 167 230 335 276 353 225 330 290 172 305 188 282 185 326 274 164 307
249 341 407 217 174 281 289 322 299 300 293 184 409 259 200 327 237 218
319 166 311 169 187 176 241 131]
==============================
fbs : [1 0]
==============================
restecg : [0 1 2]
==============================
thalach : [150 187 172 178 163 148 153 173 162 174 160 139 171 144 158 114 151 161
179 137 157 123 152 168 140 188 125 170 165 142 180 143 182 156 115 149
146 175 186 185 159 130 190 132 147 154 202 166 164 184 122 169 138 111
145 194 131 133 155 167 192 121 96 126 105 181 116 108 129 120 112 128
109 113 99 177 141 136 97 127 103 124 88 195 106 95 117 71 118 134
90]
==============================
exang : [0 1]
==============================
oldpeak : [2.3 3.5 1.4 0.8 0.6 0.4 1.3 0. 0.5 1.6 1.2 0.2 1.8 1. 2.6 1.5 3. 2.4
0.1 1.9 4.2 1.1 2. 0.7 0.3 0.9 3.6 3.1 3.2 2.5 2.2 2.8 3.4 6.2 4. 5.6
2.9 2.1 3.8 4.4]
==============================
slope : [0 2 1]
==============================
ca : [0 2 1 3 4]
==============================
thal : [1 2 3 0]
==============================
target : [1 0]
###Markdown
Visualising Categorical Values
###Code
plt.figure(figsize=(15, 15))
for i, column in enumerate(categorical_val, 1):
plt.subplot(3, 3, i)
df[df["target"] == 0][column].hist(bins=35, color='green', label='Have Heart Disease = NO', alpha=0.6)
df[df["target"] == 1][column].hist(bins=35, color='red', label='Have Heart Disease = YES', alpha=0.6)
plt.legend()
plt.xlabel(column)
###Output
_____no_output_____
###Markdown
Inferences: 1. cp {chest pain}: People with cp 1, 2, or 3 are more likely to have heart disease than people with cp 0. 2. restecg {resting EKG results}: People with a value of 1 (an abnormal heart rhythm, which can range from mild symptoms to severe problems) are more likely to have heart disease. 3. exang {exercise-induced angina}: People with a value of 0 (no exercise-induced angina) show more heart disease in this data than people with a value of 1 (exercise-induced angina present). 4. slope {the slope of the peak exercise ST segment}: People with a slope value of 2 (downsloping: a sign of an unhealthy heart) are more likely to have heart disease than people with a slope value of 0 (upsloping: best heart rate with exercise) or 1 (flat: minimal change, typical of a healthy heart). 5. ca {number of major vessels (0-3) colored by fluoroscopy}: the more blood movement the better, so people with ca equal to 0 are more likely to have heart disease. 6. thal {thallium stress result}: People with a thal value of 2 (defect corrected: once was a defect but is OK now) are more likely to have heart disease.
###Code
plt.figure(figsize=(15, 15))
for i, column in enumerate(continous_val, 1):
plt.subplot(3, 2, i)
df[df["target"] == 0][column].hist(bins=35, color='green', label='Have Heart Disease = NO', alpha=0.6)
df[df["target"] == 1][column].hist(bins=35, color='red', label='Have Heart Disease = YES', alpha=0.6)
plt.legend()
plt.xlabel(column)
###Output
_____no_output_____
###Markdown
Inferences: 1. trestbps: resting blood pressure; anything above 130-140 is generally of concern. 2. chol: greater than 200 is of concern. 3. thalach: People with a maximum heart rate over 140 are more likely to have heart disease. 4. oldpeak: ST depression induced by exercise relative to rest reflects heart stress during exercise; an unhealthy heart will stress more.
###Code
plt.figure(figsize=(10, 8))
plt.scatter(df.age[df.target==1],
df.thalach[df.target==1],
c="salmon")
plt.scatter(df.age[df.target==0],
df.thalach[df.target==0],
c="lightblue")
plt.title("Heart Disease as a function of Age and Max Heart Rate")
plt.xlabel("Age")
plt.ylabel("Max Heart Rate")
plt.legend(["Disease", "No Disease"]);
###Output
_____no_output_____
###Markdown
Inferences:1. People between the age of 35-60 and Max Heart Rate above 140 are more prone to Heart Diseases.
###Code
corr_matrix = df.corr()
fig, ax = plt.subplots(figsize=(15, 15))
ax = sns.heatmap(corr_matrix,
annot=True,
linewidths=0.5,
fmt=".2f",
cmap="mako_r");
bottom, top = ax.get_ylim()
ax.set_ylim(bottom + 0.5, top - 0.5)
df.drop('target', axis=1).corrwith(df.target).plot(kind='bar', grid=True, figsize=(12, 8),
title="Correlation with target")
###Output
_____no_output_____
###Markdown
Inferences: 1. fbs and chol are the least correlated with the target variable. 2. All other variables have a significant correlation with the target variable. Data Preprocessing
###Code
categorical_val.remove('target')
dataset = pd.get_dummies(df, columns = categorical_val)
from sklearn.preprocessing import StandardScaler
s_sc = StandardScaler()
col_to_scale = ['age', 'trestbps', 'chol', 'thalach', 'oldpeak']
dataset[col_to_scale] = s_sc.fit_transform(dataset[col_to_scale])
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
def print_score(clf, X_train, y_train, X_test, y_test, train=True):
if train:
pred = clf.predict(X_train)
clf_report = pd.DataFrame(classification_report(y_train, pred, output_dict=True))
print("Train Result:\n================================================")
print(f"Accuracy Score: {accuracy_score(y_train, pred) * 100:.2f}%")
print("_______________________________________________")
print(f"CLASSIFICATION REPORT:\n{clf_report}")
print("_______________________________________________")
print(f"Confusion Matrix: \n {confusion_matrix(y_train, pred)}\n")
elif train==False:
pred = clf.predict(X_test)
clf_report = pd.DataFrame(classification_report(y_test, pred, output_dict=True))
print("Test Result:\n================================================")
print(f"Accuracy Score: {accuracy_score(y_test, pred) * 100:.2f}%")
print("_______________________________________________")
print(f"CLASSIFICATION REPORT:\n{clf_report}")
print("_______________________________________________")
print(f"Confusion Matrix: \n {confusion_matrix(y_test, pred)}\n")
from sklearn.model_selection import train_test_split
X = dataset.drop('target', axis=1)
y = dataset.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
from sklearn.linear_model import LogisticRegression
lr_clf = LogisticRegression(solver='liblinear')
lr_clf.fit(X_train, y_train)
print_score(lr_clf, X_train, y_train, X_test, y_test, train=True)
print_score(lr_clf, X_train, y_train, X_test, y_test, train=False)
test_score = accuracy_score(y_test, lr_clf.predict(X_test)) * 100
train_score = accuracy_score(y_train, lr_clf.predict(X_train)) * 100
results_df = pd.DataFrame(data=[["Logistic Regression", train_score, test_score]],
columns=['Model', 'Training Accuracy %', 'Testing Accuracy %'])
results_df
###Output
_____no_output_____ |
1_cs231n/two_layer_net.ipynb | ###Markdown
Implementing a Neural NetworkIn this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
###Code
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
###Output
_____no_output_____
###Markdown
The neural network parameters will be stored in a dictionary (`model` below), where the keys are the parameter names and the values are numpy arrays. Below, we initialize toy data and a toy model that we will use to verify your implementations.
###Code
# Create some toy data to check your implementations
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
model = {}
model['W1'] = np.linspace(-0.2, 0.6, num=input_size*hidden_size).reshape(input_size, hidden_size)
model['b1'] = np.linspace(-0.3, 0.7, num=hidden_size)
model['W2'] = np.linspace(-0.4, 0.1, num=hidden_size*num_classes).reshape(hidden_size, num_classes)
model['b2'] = np.linspace(-0.5, 0.9, num=num_classes)
return model
def init_toy_data():
X = np.linspace(-0.2, 0.5, num=num_inputs*input_size).reshape(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
model = init_toy_model()
X, y = init_toy_data()
###Output
_____no_output_____
###Markdown
Forward pass: compute scoresOpen the file `cs231n/classifiers/neural_net.py` and look at the function `two_layer_net`. This function is very similar to the loss functions you have written for the Softmax exercise in HW0: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters. Implement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs.
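For reference, a minimal sketch of the kind of computation that produces the class scores (assuming an affine, ReLU, affine architecture; this is only a sketch, not the course implementation):
```
def forward_scores(X, model):
    hidden = np.maximum(0, X.dot(model['W1']) + model['b1'])  # first affine layer + ReLU
    return hidden.dot(model['W2']) + model['b2']              # second affine layer -> class scores
```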
###Code
from cs231n.classifiers.neural_net import two_layer_net
scores = two_layer_net(X, model)
print(scores)
correct_scores = [[-0.5328368, 0.20031504, 0.93346689],
[-0.59412164, 0.15498488, 0.9040914 ],
[-0.67658362, 0.08978957, 0.85616275],
[-0.77092643, 0.01339997, 0.79772637],
[-0.89110401, -0.08754544, 0.71601312]]
# the difference should be very small. We get 3e-8
print('Difference between your scores and correct scores:')
print(np.sum(np.abs(scores - correct_scores)))
###Output
[[-0.5328368 0.20031504 0.93346689]
[-0.59412164 0.15498488 0.9040914 ]
[-0.67658362 0.08978957 0.85616275]
[-0.77092643 0.01339997 0.79772637]
[-0.89110401 -0.08754544 0.71601312]]
Difference between your scores and correct scores:
3.848682303062012e-08
###Markdown
Forward pass: compute lossIn the same function, implement the second part that computes the data and regularization loss.
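One common way to compute such a loss is softmax cross-entropy plus an L2 penalty on the weights (a sketch; the 0.5 factor and the averaging convention are assumptions, not necessarily the course's exact convention):
```
def softmax_and_reg_loss(scores, y, model, reg):
    shifted = scores - scores.max(axis=1, keepdims=True)                  # for numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)  # softmax probabilities
    data_loss = -np.log(probs[np.arange(len(y)), y]).mean()               # mean cross-entropy
    reg_loss = 0.5 * reg * (np.sum(model['W1'] ** 2) + np.sum(model['W2'] ** 2))
    return data_loss + reg_loss
```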
###Code
reg = 0.1
loss, _ = two_layer_net(X, model, y, reg)
correct_loss = 1.38191946092
# should be very small, we get 5e-12
print('Difference between your loss and correct loss:')
print(np.sum(np.abs(loss - correct_loss)))
###Output
Difference between your loss and correct loss:
4.6769255135359344e-12
###Markdown
Backward passImplement the rest of the function. This will compute the gradient of the loss with respect to the variables `W1`, `b1`, `W2`, and `b2`. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
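The idea of the check is to nudge each parameter up and down by a small h and compare the induced change in the loss against the analytic gradient. A minimal illustrative sketch (the provided `eval_numerical_gradient` works along these lines, though its exact implementation may differ):
```
def numerical_gradient(f, x, h=1e-5):
    """Centered-difference estimate of the gradient of f at x, one element at a time."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        old_value = x[idx]
        x[idx] = old_value + h
        fxph = f(x)                      # f(x + h)
        x[idx] = old_value - h
        fxmh = f(x)                      # f(x - h)
        x[idx] = old_value               # restore the original value
        grad[idx] = (fxph - fxmh) / (2 * h)
        it.iternext()
    return grad
```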
###Code
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = two_layer_net(X, model, y, reg)
# these should all be less than 1e-8 or so
for param_name in grads:
param_grad_num = eval_numerical_gradient(lambda W: two_layer_net(X, model, y, reg)[0], model[param_name], verbose=False)
print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
###Output
W1 max relative error: 4.426512e-09
b1 max relative error: 5.435430e-08
W2 max relative error: 8.023743e-10
b2 max relative error: 8.190173e-11
###Markdown
Train the networkTo train the network we will use SGD with Momentum. Last assignment you implemented vanilla SGD. You will now implement the momentum update and the RMSProp update. Open the file `classifier_trainer.py` and familiarize yourself with the `ClassifierTrainer` class. It performs optimization given an arbitrary cost function data, and model. By default it uses vanilla SGD, which we have already implemented for you. First, run the optimization below using Vanilla SGD:
###Code
from cs231n.classifier_trainer import ClassifierTrainer
model = init_toy_model()
trainer = ClassifierTrainer()
# call the trainer to optimize the loss
# Notice that we're using sample_batches=False, so we're performing Gradient Descent (no sampled batches of data)
best_model, loss_history, _, _ = trainer.train(X, y, X, y,
model, two_layer_net,
reg=0.001,
learning_rate=1e-1, momentum=0.0, learning_rate_decay=1,
update='sgd', sample_batches=False,
num_epochs=100,
verbose=False)
print('Final loss with vanilla SGD: %f' % (loss_history[-1], ))
###Output
starting iteration 0
starting iteration 10
starting iteration 20
starting iteration 30
starting iteration 40
starting iteration 50
starting iteration 60
starting iteration 70
starting iteration 80
starting iteration 90
Final loss with vanilla SGD: 0.940686
###Markdown
Now fill in the **momentum update** in the first missing code block inside the `train` function, and run the same optimization as above but with the momentum update. You should see a much better result in the final obtained loss:
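For reference, one common form of the momentum update (a sketch; the exact variant expected by the course code may differ, e.g. in how the velocity is initialized or stored) is:
```
# v is a velocity array with the same shape as x, initialized to zeros and kept between updates
v = momentum * v - learning_rate * dx
x += v
```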
###Code
model = init_toy_model()
trainer = ClassifierTrainer()
# call the trainer to optimize the loss
# Notice that we're using sample_batches=False, so we're performing Gradient Descent (no sampled batches of data)
best_model, loss_history, _, _ = trainer.train(X, y, X, y,
model, two_layer_net,
reg=0.001,
learning_rate=1e-1, momentum=0.9, learning_rate_decay=1,
update='momentum', sample_batches=False,
num_epochs=100,
verbose=False)
correct_loss = 0.494394
print('Final loss with momentum SGD: %f. We get: %f' % (loss_history[-1], correct_loss))
###Output
starting iteration 0
starting iteration 10
starting iteration 20
starting iteration 30
starting iteration 40
starting iteration 50
starting iteration 60
starting iteration 70
starting iteration 80
starting iteration 90
Final loss with momentum SGD: 0.494394. We get: 0.494394
###Markdown
The **RMSProp** update step is given as follows:
```
cache = decay_rate * cache + (1 - decay_rate) * dx**2
x += - learning_rate * dx / np.sqrt(cache + 1e-8)
```
Here, `decay_rate` is a hyperparameter and typical values are [0.9, 0.99, 0.999]. Implement the **RMSProp** update rule inside the `train` function and rerun the optimization:
###Code
model = init_toy_model()
trainer = ClassifierTrainer()
# call the trainer to optimize the loss
# Notice that we're using sample_batches=False, so we're performing Gradient Descent (no sampled batches of data)
best_model, loss_history, _, _ = trainer.train(X, y, X, y,
model, two_layer_net,
reg=0.001,
learning_rate=1e-1, momentum=0.9, learning_rate_decay=0.99,
update='rmsprop', sample_batches=False,
num_epochs=100,
verbose=False)
correct_loss = 0.439368
print('Final loss with RMSProp: %f. We get: %f' % (loss_history[-1], correct_loss))
###Output
starting iteration 0
starting iteration 10
starting iteration 20
starting iteration 30
starting iteration 40
starting iteration 50
starting iteration 60
starting iteration 70
starting iteration 80
starting iteration 90
Final loss with RMSProp: 0.434531. We get: 0.439368
###Markdown
Load the dataNow that you have implemented a two-layer network that passes gradient checks, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier.
###Code
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
###Output
Train data shape: (49000, 3072)
Train labels shape: (49000,)
Validation data shape: (1000, 3072)
Validation labels shape: (1000,)
Test data shape: (1000, 3072)
Test labels shape: (1000,)
###Markdown
Train a networkTo train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
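Concretely, the schedule amounts to something like the following, applied once at the end of every epoch (a sketch):
```
learning_rate *= learning_rate_decay   # e.g. 1e-05 -> 9.5e-06 -> 9.025e-06 -> ..., as in the log below
```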
###Code
from cs231n.classifiers.neural_net import init_two_layer_model
model = init_two_layer_model(32*32*3, 50, 10) # input size, hidden size, number of classes
trainer = ClassifierTrainer()
best_model, loss_history, train_acc, val_acc = trainer.train(X_train, y_train, X_val, y_val,
model, two_layer_net,
num_epochs=5, reg=1.0,
momentum=0.9, learning_rate_decay = 0.95,
learning_rate=1e-5, verbose=True)
###Output
starting iteration 0
Finished epoch 0 / 5: cost 2.302593, train: 0.109000, val 0.103000, lr 1.000000e-05
starting iteration 10
starting iteration 20
starting iteration 30
starting iteration 40
starting iteration 50
starting iteration 60
starting iteration 70
starting iteration 80
starting iteration 90
starting iteration 100
starting iteration 110
starting iteration 120
starting iteration 130
starting iteration 140
starting iteration 150
starting iteration 160
starting iteration 170
starting iteration 180
starting iteration 190
starting iteration 200
starting iteration 210
starting iteration 220
starting iteration 230
starting iteration 240
starting iteration 250
starting iteration 260
starting iteration 270
starting iteration 280
starting iteration 290
starting iteration 300
starting iteration 310
starting iteration 320
starting iteration 330
starting iteration 340
starting iteration 350
starting iteration 360
starting iteration 370
starting iteration 380
starting iteration 390
starting iteration 400
starting iteration 410
starting iteration 420
starting iteration 430
starting iteration 440
starting iteration 450
starting iteration 460
starting iteration 470
starting iteration 480
Finished epoch 1 / 5: cost 2.276374, train: 0.175000, val 0.171000, lr 9.500000e-06
starting iteration 490
starting iteration 500
starting iteration 510
starting iteration 520
starting iteration 530
starting iteration 540
starting iteration 550
starting iteration 560
starting iteration 570
starting iteration 580
starting iteration 590
starting iteration 600
starting iteration 610
starting iteration 620
starting iteration 630
starting iteration 640
starting iteration 650
starting iteration 660
starting iteration 670
starting iteration 680
starting iteration 690
starting iteration 700
starting iteration 710
starting iteration 720
starting iteration 730
starting iteration 740
starting iteration 750
starting iteration 760
starting iteration 770
starting iteration 780
starting iteration 790
starting iteration 800
starting iteration 810
starting iteration 820
starting iteration 830
starting iteration 840
starting iteration 850
starting iteration 860
starting iteration 870
starting iteration 880
starting iteration 890
starting iteration 900
starting iteration 910
starting iteration 920
starting iteration 930
starting iteration 940
starting iteration 950
starting iteration 960
starting iteration 970
Finished epoch 2 / 5: cost 2.045277, train: 0.225000, val 0.245000, lr 9.025000e-06
starting iteration 980
starting iteration 990
starting iteration 1000
starting iteration 1010
starting iteration 1020
starting iteration 1030
starting iteration 1040
starting iteration 1050
starting iteration 1060
starting iteration 1070
starting iteration 1080
starting iteration 1090
starting iteration 1100
starting iteration 1110
starting iteration 1120
starting iteration 1130
starting iteration 1140
starting iteration 1150
starting iteration 1160
starting iteration 1170
starting iteration 1180
starting iteration 1190
starting iteration 1200
starting iteration 1210
starting iteration 1220
starting iteration 1230
starting iteration 1240
starting iteration 1250
starting iteration 1260
starting iteration 1270
starting iteration 1280
starting iteration 1290
starting iteration 1300
starting iteration 1310
starting iteration 1320
starting iteration 1330
starting iteration 1340
starting iteration 1350
starting iteration 1360
starting iteration 1370
starting iteration 1380
starting iteration 1390
starting iteration 1400
starting iteration 1410
starting iteration 1420
starting iteration 1430
starting iteration 1440
starting iteration 1450
starting iteration 1460
Finished epoch 3 / 5: cost 1.897473, train: 0.309000, val 0.295000, lr 8.573750e-06
starting iteration 1470
starting iteration 1480
starting iteration 1490
starting iteration 1500
starting iteration 1510
starting iteration 1520
starting iteration 1530
starting iteration 1540
starting iteration 1550
starting iteration 1560
starting iteration 1570
starting iteration 1580
starting iteration 1590
starting iteration 1600
starting iteration 1610
starting iteration 1620
starting iteration 1630
starting iteration 1640
starting iteration 1650
starting iteration 1660
starting iteration 1670
starting iteration 1680
starting iteration 1690
starting iteration 1700
starting iteration 1710
starting iteration 1720
starting iteration 1730
starting iteration 1740
starting iteration 1750
starting iteration 1760
starting iteration 1770
starting iteration 1780
starting iteration 1790
starting iteration 1800
starting iteration 1810
starting iteration 1820
starting iteration 1830
starting iteration 1840
starting iteration 1850
starting iteration 1860
starting iteration 1870
starting iteration 1880
starting iteration 1890
starting iteration 1900
starting iteration 1910
starting iteration 1920
starting iteration 1930
starting iteration 1940
starting iteration 1950
Finished epoch 4 / 5: cost 1.882133, train: 0.340000, val 0.335000, lr 8.145063e-06
starting iteration 1960
starting iteration 1970
starting iteration 1980
starting iteration 1990
starting iteration 2000
starting iteration 2010
starting iteration 2020
starting iteration 2030
starting iteration 2040
starting iteration 2050
starting iteration 2060
starting iteration 2070
starting iteration 2080
starting iteration 2090
starting iteration 2100
starting iteration 2110
starting iteration 2120
starting iteration 2130
starting iteration 2140
starting iteration 2150
starting iteration 2160
starting iteration 2170
starting iteration 2180
starting iteration 2190
starting iteration 2200
starting iteration 2210
starting iteration 2220
starting iteration 2230
starting iteration 2240
starting iteration 2250
starting iteration 2260
starting iteration 2270
starting iteration 2280
starting iteration 2290
starting iteration 2300
starting iteration 2310
starting iteration 2320
starting iteration 2330
starting iteration 2340
starting iteration 2350
starting iteration 2360
starting iteration 2370
starting iteration 2380
starting iteration 2390
starting iteration 2400
starting iteration 2410
starting iteration 2420
starting iteration 2430
starting iteration 2440
Finished epoch 5 / 5: cost 1.791123, train: 0.376000, val 0.360000, lr 7.737809e-06
finished optimization. best validation accuracy: 0.360000
###Markdown
Debug the trainingWith the default parameters we provided above, you should get a validation accuracy of about 0.37 on the validation set. This isn't very good.One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
###Code
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(loss_history)
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(train_acc)
plt.plot(val_acc)
plt.legend(['Training accuracy', 'Validation accuracy'], loc='lower right')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(model):
plt.imshow(visualize_grid(model['W1'].T.reshape(-1, 32, 32, 3), padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(model)
###Output
_____no_output_____
###Markdown
Tune your hyperparameters**What's wrong?**. Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.**Tuning**. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the momentum and learning rate decay parameters, but you should be able to get good performance using the default values.**Approximate results**. You should aim to achieve a classification accuracy of greater than 50% on the validation set. Our best network gets over 56% on the validation set.**Experiment**: Your goal in this exercise is to get as good a result on CIFAR-10 as you can with a fully-connected Neural Network. For every 1% above 56% on the test set we will award you one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, adding dropout, or adding features to the solver, etc.).
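A sweep along these lines could look like the following (a sketch using the same trainer API as above; the hyperparameter values and the choice of `max(val_acc)` as the selection criterion are only illustrative):
```
best_val_acc, best_model_sweep = -1, None
for hidden_size in [256, 512, 1024]:
    for lr in [1e-4, 5e-4, 1e-3]:
        for reg in [0.1, 0.5, 1.0]:
            model = init_two_layer_model(32 * 32 * 3, hidden_size, 10)
            trainer = ClassifierTrainer()
            net, _, _, val_acc = trainer.train(X_train, y_train, X_val, y_val,
                                               model, two_layer_net,
                                               num_epochs=5, reg=reg,
                                               momentum=0.9, learning_rate_decay=0.95,
                                               learning_rate=lr, verbose=False)
            if max(val_acc) > best_val_acc:
                best_val_acc, best_model_sweep = max(val_acc), net
print('Best validation accuracy found:', best_val_acc)
```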
###Code
best_model = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_model. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous assignment. #
#################################################################################
# input size, hidden size, number of classes
model = init_two_layer_model(32*32*3, 1000, 10)
trainer = ClassifierTrainer()
best_model, loss_history, train_acc, val_acc = trainer.train(X_train, y_train,
X_val, y_val,
model, two_layer_net,
num_epochs=20, reg=1.0,
momentum=0.85,
learning_rate_decay=0.99,
learning_rate=1e-3, verbose=True)
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights
show_net_weights(best_model)
###Output
_____no_output_____
###Markdown
Run on the test setWhen you are done experimenting, you should evaluate your final trained network on the test set.
###Code
scores_test = two_layer_net(X_test, best_model)
print('Test accuracy: ', np.mean(np.argmax(scores_test, axis=1) == y_test))
###Output
Test accuracy: 0.206
|
visualization/BasicScanningAudioAnalysis.ipynb | ###Markdown
Processing for Scanning Microphone Once we get our audio recording, we need postprocessing to know when the CNC microphone has "stopped", and at which point it stopped. We __do not__ want to analyze audio while moving, as the CNC control is loud and we have no location information.
###Code
!pip install librosa > /dev/null
!pip install plotly > /dev/null
import librosa
import numpy as np
from matplotlib import pyplot as plt
import plotly
from plotly.offline import iplot
import plotly.graph_objs as go
%matplotlib inline
###Output
_____no_output_____
###Markdown
You only need to mount the google colab drive if you want to access sound samples stored in Google Drive.
###Code
from google.colab import drive
drive.mount('/content/gdrive')
###Output
_____no_output_____
###Markdown
Analyzing Simple FrequencySecond test consisted of using the __TAZ LULZBOT 5__, recorded with the small microphone in Science Center 102. The small microphone attaches through the audio jack on the desktop computer. The sound was emitted through an Apple AirPod connected through the audio jack as well, taped to the baseplate to keep it secured. Sound was played for one hour at a frequency of $2000$ Hz using the command:
```bash
play -n -c1 synth 3600 sine 2000
```
Scan Parameters:
* (0, 0) to (100, 100)
* around 2 cm vertical distance between sound source and microphone
* 1 second record time at each point
* resolution of 2 for both $x$ and $y$
* Only capturing sound when the motor is __not__ moving

In this test, we moved in the pattern
```
51 102 ... 2601
.  .   ... .
.  .   ... .
.  .   ... .
2  53  ... 2551
1  52  ... 2550
```
Using these parameters, we will sample a total of $51 \cdot 51$ points. Data FormatSince we have now been able to control the CNC machine through python, we have already segmented out each of the sampled locations in code. Therefore, the results of one scan are stored in a single folder per scan, and each sound clip in that folder is named `<x>_<y>_<z>.wav`. We will want a function that preprocesses each separate audio file and returns our analysis as well as the $(x, y, z)$ spatial coordinate, so we can combine all of these measurements together at the end.
###Code
import glob
import os
import sys
sample_files = glob.glob("/home/robocup/hoffman/scanning-microphone/data/1540965226/*.wav")
len(sample_files)
###Output
_____no_output_____
###Markdown
Let's just load one of these into memory as a sanity check before we process them all.
###Code
y, sr = librosa.core.load(sample_files[0], sr=None)
print('Audio Waveform shape: ', y.shape)
print('Sampling Rate: ', sr)
n_seconds = y.shape[0] / sr
fft = librosa.core.stft(y)
fft_mag = np.abs(fft)
print(fft.shape)
# Prints out bin number to frequency
print(librosa.core.fft_frequencies())
###Output
(1025, 85)
[0.00000000e+00 1.07666016e+01 2.15332031e+01 ... 1.10034668e+04
1.10142334e+04 1.10250000e+04]
###Markdown
Plotting the actual spectrogram is usually pretty helpful. Since we played the audio signal at 2000 Hz, we see a large band there and a faint band at the harmonic, most likely because the speaker used was an Apple AirPod, plus hardware imperfections.
###Code
fig = plt.figure(figsize=(15, 12))
plt.xlabel('Sample')
plt.imshow(fft_mag[:200])
plt.show()
###Output
_____no_output_____
###Markdown
The larger hump there is when the motor has to scan more distance to get to the start of the next row.
###Code
print("Freq at bin 80: ", librosa.core.fft_frequencies()[185])
print("Freq at bin 100: ", librosa.core.fft_frequencies()[187])
###Output
Freq at bin 185:  1991.8212890625
Freq at bin 187:  2013.3544921875
###Markdown
Therefore, we can first look at how the amplitude changes between different samples at frequency bins 185-187, which cover the 2000 Hz tone. We can just directly take the mean of these bins in our absolute-valued Fourier transform. Let's actually make a function to do this.
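As a sanity check on those indices, the bin for a given target frequency can be computed directly from the STFT parameters (`sr=22050` and `n_fft=2048` are librosa's defaults, which is what the shapes printed above imply):
```
def freq_to_bin(freq_hz, sr=22050, n_fft=2048):
    """Nearest STFT bin index for a target frequency (assumes default librosa STFT parameters)."""
    return int(round(freq_hz / (sr / n_fft)))

print(freq_to_bin(2000))   # ~186, hence the 185-187 window used below
```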
###Code
def amplitude_check(fname, start_bin, end_bin):
"""Gets the averaged amplitudes from start_bin to end_bin in the fourier transform"""
y, sr = librosa.core.load(fname, sr=None)
# print('Audio Waveform shape: ', y.shape)
# print('Sampling Rate: ', sr)
fft = librosa.core.stft(y)
fft_mag = np.abs(fft)
    # Average the magnitude over the selected frequency bins
average_binned_mag = fft_mag[start_bin:end_bin, :].mean()
return average_binned_mag
def file2coords(fname):
"""Converts a file name to coordinates. Should be in the format
<folder>/<x>_<y>_<z>.wav
"""
_, name = os.path.split(fname)
x, y, z = os.path.splitext(name)[0].split('_')
x = float(x)
y = float(y)
z = float(z)
return x, y, z
# store an array of tuples (x, y, amplitude)
ampdata = []
for index, sfname in enumerate(sample_files):
if index % 500 == 0:
print("Processed {} samples".format(index))
x, y, _ = file2coords(sfname)
amp = amplitude_check(sfname, 185, 187)
ampdata.append((x, y, amp))
###Output
Processed 0 samples
Processed 500 samples
Processed 1000 samples
Processed 1500 samples
Processed 2000 samples
Processed 2500 samples
###Markdown
Generating a HeatmapThe real test is to generate a heatmap of frequency strengths, with each point in 3D space scanned and analyzed. We will use the previous code to split the recorded sample into each of the separate points, and then plot the magnitude of certain frequencies on a 2D plot.__There's probably a much better way to turn the scan into a matrix. Haven't leetcoded enough__
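As an aside, one likely-simpler alternative to the dictionary round-trip in the next cell is a pandas pivot (a sketch, assuming `ampdata` as built above; pandas is not imported in the setup cell, hence the extra import):
```
import pandas as pd

amp_df = pd.DataFrame(ampdata, columns=['x', 'y', 'amp'])
ampmat_alt = amp_df.pivot(index='x', columns='y', values='amp').values  # same grid as ampmat below
```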
###Code
# Get all of the unique x and y indices that could happen in our scan
x_indices = sorted(list(set([a[0] for a in ampdata])))
y_indices = sorted(list(set([a[1] for a in ampdata])))
# Now make a map of tuples to the amplitude value
ampmap = {}
for x, y, amp in ampdata:
ampmap[(x, y)] = amp
# Turn it back into a numpy array
ampmat = np.zeros((len(x_indices), len(y_indices)))
for idx, x in enumerate(x_indices):
for idy, y in enumerate(y_indices):
ampmat[idx][idy] = ampmap[(x, y)]
fig = plt.figure(figsize=(9,9))
plt.imshow(ampmat)
plt.xlabel('Horizontal Position (res=2)')
plt.ylabel('Vertical Position (res=2)')
plt.title('Magnitude of $f = 2000$ Hz on a 2D plane')
plt.show()
###Output
_____no_output_____
###Markdown
Generating a Heatmap for 3D scansWe can do the exact same thing for 3D scans. However, now we are going to have a $z$ axis element. We'll just print out slices of the $z$ axis element for now. We can consider using 3D visualizations with seaborn or matplotlib later but those are usually pretty difficult to decipher.
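Since plotly is already imported in the setup cell, an interactive 3D scatter is another option (a sketch, assuming `ampdata3D` as built in the following cells; not executed here):
```
xs, ys, zs, amps = zip(*ampdata3D)
trace = go.Scatter3d(x=xs, y=ys, z=zs, mode='markers',
                     marker=dict(size=4, color=amps, colorscale='Viridis', showscale=True))
iplot(go.Figure(data=[trace]))
```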
###Code
sample_files_3d = glob.glob("/home/robocup/hoffman/scanning-microphone/data/1541048727/*.wav")
ampdata3D = []
for index, sfname in enumerate(sample_files_3d):
if index % 500 == 0:
print("Processed {} samples".format(index))
x, y, z = file2coords(sfname)
amp = amplitude_check(sfname, 185, 187)
ampdata3D.append((x, y, z, amp))
# Get all of the unique x and y indices that could happen in our scan
x_indices = sorted(list(set([a[0] for a in ampdata3D])))
y_indices = sorted(list(set([a[1] for a in ampdata3D])))
z_indices = sorted(list(set([a[2] for a in ampdata3D])))
# Now make a map of tuples to the amplitude value
ampmap3D = {}
for x, y, z, amp in ampdata3D:
ampmap3D[(x, y, z)] = amp
# Turn it back into a numpy array
ampmat3D = np.zeros((len(x_indices), len(y_indices), len(z_indices)))
for idx, x in enumerate(x_indices):
for idy, y in enumerate(y_indices):
for idz, z in enumerate(z_indices):
ampmat3D[idx][idy][idz] = ampmap3D[(x, y, z)]
###Output
_____no_output_____
###Markdown
We will plot each slice corresponding to each $z$ value in one plot. In this specific case, we scanned an $11 \times 11 \times 11$ cube, so there are 11 slices to show (stacked here in a single column). First, for fun, we can fix each $z$ value and look at slices of the $xy$ plane.
###Code
fig = plt.figure(figsize=(11,25))
for index in range(len(z_indices)):
plt.subplot(len(z_indices),1, index + 1)
plt.pcolor(ampmat3D[:,:,index], vmin=ampmat3D.min(), vmax=ampmat3D.max(), cmap='magma')
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
###Output
_____no_output_____ |
VAE/vae_mnist.ipynb | ###Markdown
VAE mnist
###Code
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import keras
from keras.utils import np_utils
from keras.layers import Input, Dense, Lambda, InputLayer, concatenate
from keras.models import Model, Sequential
from keras import backend as K
from keras import metrics
from keras.datasets import mnist
###Output
_____no_output_____
###Markdown
VLB
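The quantity implemented below is the variational lower bound (ELBO), $$\mathcal{L}(x) = \mathbb{E}_{q(t \mid x)}\big[\log p(x \mid t)\big] - \mathrm{KL}\big(q(t \mid x)\,\|\,p(t)\big),$$ where the first term is the Bernoulli reconstruction log-likelihood (`loss` in the code) and, for a diagonal Gaussian $q(t \mid x) = \mathcal{N}(\mu, \operatorname{diag}(\sigma^2))$ against a standard normal prior, the negative KL term has the closed form $\tfrac{1}{2}\sum_j \big(1 + \log\sigma_j^2 - \mu_j^2 - \sigma_j^2\big)$ (`regularisation` in the code). The function returns $-\mathcal{L}$ so it can be minimized.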
###Code
def vlb_bernoulli(x, x_decoded_mean, t_mean, t_log_var):
'''
The inputs are tf.Tensor
x: (batch_size x number_of_pixels)
x_decoded_mean: (batch_size x number_of_pixels) mean of the distribution p(x | t), real numbers from 0 to 1
t_mean: (batch_size x latent_dim) mean vector of normal distribution q(t | x)
t_log_var: (batch_size x latent_dim) log variance vector of normal distribution q(t | x)
'''
# reconstruction loss
# Binary cross-entropy which is commonly used for data like MNIST that can be modeled as Bernoulli trials.
# https://blog.fastforwardlabs.com/2016/08/22/under-the-hood-of-the-variational-autoencoder-in.html
# x is originally 0 to 255, after dividing by 255 we get float numbers ranging from 0 to 1
loss = tf.reduce_sum(x * K.log(x_decoded_mean + 1e-10) + (1 - x) * K.log(1 - x_decoded_mean + 1e-10), axis=1)
# KL divergence
regularisation = 0.5 * tf.reduce_sum(1 + t_log_var - K.square(t_mean) - K.exp(t_log_var), axis=1)
vlb = tf.reduce_mean(loss + regularisation)
return -vlb # return negative scalar value (tf.Tensor) of variational Lower Bound for minimization
###Output
_____no_output_____
###Markdown
Setup tf interactive session & connect it with keras.
###Code
sess = tf.InteractiveSession()
K.set_session(sess)
batch_size = 1000
original_dim = 784 # Number of pixels
latent_dim = 64 # d, dimensionality of the latent code t.
hidden = 32 # Size of the hidden layer.
epochs = 30
x = Input(batch_shape=(batch_size, original_dim))
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:66: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:541: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
###Markdown
Encoder
###Code
def create_encoder(input_dim):
encoder = Sequential(name='encoder')
encoder.add(InputLayer([input_dim]))
encoder.add(Dense(hidden, activation='relu'))
encoder.add(Dense(hidden, activation='relu'))
encoder.add(Dense(hidden, activation='relu'))
encoder.add(Dense(2 * latent_dim))
return encoder
encoder = create_encoder(original_dim)
h = encoder(x)
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4432: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
###Markdown
Output mean & log variance.
###Code
get_t_mean = Lambda(lambda h: h[:, :latent_dim])
get_t_log_var = Lambda(lambda h: h[:, latent_dim:])
t_mean = get_t_mean(h)
t_log_var = get_t_log_var(h)
###Output
_____no_output_____
###Markdown
Sampling from the distribution q(t | x) = N(t_mean, exp(t_log_var)) with reparametrization trick.
###Code
def sampling(args):
'''
The inputs are tf.Tensor
args[0]: (batch_size x latent_dim) mean of the distribution
args[1]: (batch_size x latent_dim) vector of log variance, diag of conv matrix of distribution
'''
t_mean, t_log_var = args
e = K.random_normal(shape=(batch_size, latent_dim))
    samples = t_mean + K.exp(0.5 * t_log_var) * e  # exp(0.5 * log_var) converts log variance to standard deviation
return samples # tf.Tensor of size (batch_size x latent_dim) from Gaussian distribution N(args[0], diag(args[1]))
t = Lambda(sampling)([t_mean, t_log_var])
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4409: The name tf.random_normal is deprecated. Please use tf.random.normal instead.
###Markdown
Decoder
###Code
def create_decoder(input_dim):
decoder = Sequential(name='decoder')
decoder.add(InputLayer([input_dim]))
decoder.add(Dense(hidden, activation='relu'))
decoder.add(Dense(hidden, activation='relu'))
decoder.add(Dense(hidden, activation='relu'))
decoder.add(Dense(original_dim, activation='sigmoid'))
return decoder
decoder = create_decoder(latent_dim)
x_decoded_mean = decoder(t)
###Output
_____no_output_____
###Markdown
Setup the model.
###Code
loss = vlb_bernoulli(x, x_decoded_mean, t_mean, t_log_var)
vae = Model(x, x_decoded_mean)
# In the loss argument, x is input (x), y is output(x_decoded_mean)
vae.compile(optimizer=keras.optimizers.RMSprop(lr=0.001), loss=lambda x, y: loss)
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1702: The name tf.log is deprecated. Please use tf.math.log instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:793: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
###Markdown
Load & process data.
###Code
# https://corochann.com/mnist-dataset-introduction-1138.html
# train the VAE on MNIST digits
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(y_train.shape)
print(y_train[301])
# https://www.tensorflow.org/api_docs/python/tf/keras/utils/to_categorical
# One hot encoding.
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
print(y_train.shape)
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
print(x_train.shape)
print(x_train.shape[1:])
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print(x_train[100][300:350])
###Output
(60000,)
7
(60000, 10)
(60000, 28, 28)
(28, 28)
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0.6156863 0.99215686 0.99215686 0.49019608 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0.34901962 0.99215686
0.98039216 0.22352941]
###Markdown
Train the model.
###Code
hist = vae.fit(x=x_train,
y=x_train, # note that y is not y_train
shuffle=True,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_test, x_test), # note x_test is used in the 2nd input, not y_test
verbose=2)
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1033: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1020: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3005: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
Train on 60000 samples, validate on 10000 samples
Epoch 1/30
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:190: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:207: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:216: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:223: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.
- 1s - loss: 284.4286 - val_loss: 212.1735
Epoch 2/30
- 0s - loss: 206.1978 - val_loss: 200.3780
Epoch 3/30
- 0s - loss: 198.1321 - val_loss: 195.9004
Epoch 4/30
- 0s - loss: 195.6656 - val_loss: 193.7336
Epoch 5/30
- 0s - loss: 192.4686 - val_loss: 189.3810
Epoch 6/30
- 0s - loss: 188.2176 - val_loss: 185.8131
Epoch 7/30
- 0s - loss: 185.2489 - val_loss: 183.7366
Epoch 8/30
- 0s - loss: 183.7172 - val_loss: 182.8851
Epoch 9/30
- 0s - loss: 182.4924 - val_loss: 180.6528
Epoch 10/30
- 0s - loss: 181.0188 - val_loss: 179.5549
Epoch 11/30
- 0s - loss: 179.1626 - val_loss: 177.2230
Epoch 12/30
- 1s - loss: 177.2191 - val_loss: 175.2295
Epoch 13/30
- 0s - loss: 175.3769 - val_loss: 174.3589
Epoch 14/30
- 0s - loss: 173.5284 - val_loss: 172.1894
Epoch 15/30
- 0s - loss: 171.7121 - val_loss: 170.2674
Epoch 16/30
- 0s - loss: 169.7374 - val_loss: 167.7636
Epoch 17/30
- 0s - loss: 167.5861 - val_loss: 165.9840
Epoch 18/30
- 0s - loss: 164.9570 - val_loss: 163.4812
Epoch 19/30
- 0s - loss: 162.5853 - val_loss: 161.3396
Epoch 20/30
- 0s - loss: 160.7408 - val_loss: 159.3357
Epoch 21/30
- 0s - loss: 159.1168 - val_loss: 157.7962
Epoch 22/30
- 0s - loss: 157.6308 - val_loss: 156.3580
Epoch 23/30
- 0s - loss: 156.4836 - val_loss: 155.0248
Epoch 24/30
- 0s - loss: 155.2158 - val_loss: 153.5352
Epoch 25/30
- 0s - loss: 154.0903 - val_loss: 153.2516
Epoch 26/30
- 0s - loss: 152.9499 - val_loss: 152.4072
Epoch 27/30
- 0s - loss: 151.8363 - val_loss: 150.4133
Epoch 28/30
- 0s - loss: 150.7804 - val_loss: 151.2474
Epoch 29/30
- 0s - loss: 149.6955 - val_loss: 148.1656
Epoch 30/30
- 0s - loss: 148.7887 - val_loss: 147.4312
###Markdown
Plot mnist & generated output: original digits (top row) vs generated reconstructions (bottom row), for both training data (left image) & validation data (right image).
###Code
fig = plt.figure(figsize=(30, 30))
for fid_idx, (data, title) in enumerate(zip([x_train, x_test], ['Train', 'Validation'])):
n = 30
digit_size = 28
figure = np.zeros((digit_size * 2, digit_size * n)) # 2 rows, n cols
# Generate new data
decoded = sess.run(x_decoded_mean, feed_dict={x: data[:batch_size, :]})
for i in range(n):
row_start = i * digit_size
row_end = (i + 1) * digit_size
col_end = digit_size
# data
figure[:digit_size, row_start:row_end] = data[i, :].reshape(digit_size, digit_size)
# decoded
figure[digit_size:, row_start:row_end] = decoded[i, :].reshape(digit_size, digit_size)
#print('data', data[i, :][300:305])
#print('decoded', decoded[i, :][300:305])
ax = fig.add_subplot(1, 2, fid_idx + 1)
ax.imshow(figure, cmap='Greys_r')
ax.set_title(title)
plt.show()
###Output
_____no_output_____
###Markdown
Sample from the prior distribution $p(t)$ (Gaussian) and then from the likelihood $p(x \mid t)$.
###Code
t_mean_norm = K.random_normal(shape=(batch_size, latent_dim))
t_log_var_norm = K.random_normal(shape=(batch_size, latent_dim))
#p_t = Lambda(sampling)([t_mean_norm, t_log_var_norm])
def p_t_sampling(args):
return K.random_normal(shape=(batch_size, latent_dim))
p_t = Lambda(p_t_sampling)([])
# images sampled from the vae model.
sampled_im_mean = decoder(p_t)
print(sampled_im_mean.shape)
###Output
(1000, 784)
###Markdown
Generate & plot new data.
###Code
# Generate data
sampled_im_mean_np = sess.run(sampled_im_mean)
# Plot images
n_samples = 30 # sample size
plt.figure(figsize=(30, 30))
for i in range(n_samples):
ax = plt.subplot(n_samples, 1, i + 1) # row, col, index
plt.imshow(sampled_im_mean_np[i, :].reshape(28, 28), cmap='gray')
plt.show()
###Output
_____no_output_____ |
Henry_s_work/Henry_s_model_select_test.ipynb | ###Markdown
Henry's model select demoThe following is the model selection program, based on a Decision Tree Classifier. It will be run as a .py file instead of a Jupyter Notebook (for the CSE server).
###Code
import numpy as np
import pandas as pd
import scipy
import seaborn as sns
from imblearn.over_sampling import SMOTE
from sklearn.base import TransformerMixin
from sklearn import tree
from sklearn import preprocessing
from sklearn import metrics
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer, HashingVectorizer
from sklearn.model_selection import train_test_split, GridSearchCV, learning_curve, StratifiedKFold
from sklearn.metrics import roc_auc_score, roc_curve, classification_report, confusion_matrix, plot_confusion_matrix
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline, Pipeline
import joblib
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
###Output
_____no_output_____
###Markdown
Mount the Google Drive to the system
###Code
from google.colab import drive
drive.mount('/content/gdrive')
!ls
###Output
gdrive sample_data
###Markdown
Pre-define basic parameter for system adjustment
###Code
np.random.seed(1)
TRAININGFILE = '../keyword.csv'
TESTFILE = '../key_word_test.csv'
TESTSIZE = 0.1
REPORTPATH = 'model/'
MODELPATH = 'model/'
x_label_list = ['key_word_50', 'key_word_100','article_words']
y_label_list = ['topic']
topic_code = {
'ARTS CULTURE ENTERTAINMENT': 1,
'BIOGRAPHIES PERSONALITIES PEOPLE': 2,
'DEFENCE': 3,
'DOMESTIC MARKETS': 4,
'FOREX MARKETS': 5,
'HEALTH': 6,
'MONEY MARKETS': 7,
'SCIENCE AND TECHNOLOGY': 8,
'SHARE LISTINGS': 9,
'SPORTS': 10,
'IRRELEVANT': 0
}
###Output
_____no_output_____
###Markdown
Pre-processing the data_set from the CSV file
###Code
def preprocess(df, x_label, y_label):
'''
Return the x and y columns for trainning
'''
return df[[x_label, y_label]]
# for the bag of word and label encode process
def convert_word(bag_of_word_model, label_model, data_set, x_label, y_label='topic'):
'''
bow model need to be pre-fit when call current function
'''
act_x = bag_of_word_model.transform(data_set[x_label].values)
act_y = label_model.transform(data_set[y_label])
return act_x, act_y
###Output
_____no_output_____
###Markdown
SMOTE with different *Bag of Words* models: 1. CountVectorizer() 2. TfidfVectorizer()
###Code
def smote_with_vector(df, vector_model, label_model, x_label):
'''
df data set
vector_model Bag of Word model
x_label process x column
y_label process y column
'''
count = vector_model.fit(df[x_label])
# convert the data
train_x, train_y = convert_word(count, label_model, df, x_label)
# start to SMOTE
smote = SMOTE(random_state=1)
sm_x, sm_y = smote.fit_sample(train_x, train_y)
# re-cover the data
new_x = count.inverse_transform(sm_x)
new_x = pd.Series([','.join(item) for item in new_x])
return new_x, sm_y
###Output
_____no_output_____
###Markdown
Implement the model pre-processingFor **GridSearch**, with *StratifiedKFold* for cross-validation
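For reference, the helper defined in the next cell would be invoked along these lines (a sketch; here `train_x` is the raw text column and `train_y` the encoded labels, prepared as in `model_compile` further below):
```
best_pipeline, best_cv_accuracy = grid_search(TfidfVectorizer(), DecisionTreeClassifier(), train_x, train_y)
print('Best cross-validated accuracy:', best_cv_accuracy)
```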
###Code
def grid_search(vector, model, train_x, train_y):
kfold = StratifiedKFold(n_splits=10,shuffle=True,random_state=1)
pipe = Pipeline([
('vector', vector),
('model', model)
])
param_grid = {
'model__max_depth': [16, 32, 64],
'model__min_samples_split': [2, 4],
'model__min_samples_leaf': [1, 2, 4],
}
# param_grid = {
# 'model__min_samples_leaf': range(1, 2),
# 'model__splitter': ['best', 'random'],
# }
grid_search = GridSearchCV(pipe, param_grid, cv=kfold, n_jobs=-1)
grid_result=grid_search.fit(train_x, train_y)
return (grid_result.best_estimator_,grid_result.best_score_)
###Output
_____no_output_____
###Markdown
Implement the score function for model evaluation, computed separately for each topic
###Code
def topic_score(model, label_model, data_set, topic_name, x_label):
test_data_set = data_set[data_set['topic'] == topic_name]
test_x = test_data_set[x_label]
test_y = test_data_set['topic']
pred_y = model.predict(test_x)
en_test_y = label_model.transform(test_y)
f1_score = metrics.f1_score(en_test_y, pred_y, average='macro')
accuarcy = metrics.accuracy_score(en_test_y, pred_y)
recall_score = metrics.recall_score(en_test_y, pred_y, average='macro')
return {
'f1': round(f1_score, 4),
'accuarcy': round(accuarcy, 4),
'recall_score': round(recall_score, 4)
}
def model_score(model, label_model, x_label, test_df):
'''
model The dt model
test_df provide testing data set or using test file data
'''
print('Topic\tf1\taccuarcy\trecall_score')
test_report = []
test_df = preprocess(test_df, x_label, 'topic')
for topic in topic_code.keys():
result = [topic]
result.append(topic_score(model, label_model, test_df, topic, x_label))
test_report.append(result)
test_report.sort(reverse=True, key=lambda x: x[1]['accuarcy'])
for record in test_report:
print(record)
return test_report
def merge_x_y(data_x, data_y):
df_x = pd.Series(data_x)
df_x.rename_axis('x')
df_y = pd.Series(data_y)
df_y.rename_axis('y')
return pd.DataFrame(list(zip(df_x, df_y)), columns=['x', 'y'])
###Output
_____no_output_____
###Markdown
Define the model save functionThe function will automatically save each trained model and its result report for later selection
###Code
def save_job(model, test_report, pre_vector, feature_name):
filename = REPORTPATH+str(pre_vector)+'_'+feature_name
joblib.dump(model, filename+'.model')
with open(filename+'.txt', 'w') as fp:
fp.write('Topic\tf1\taccuarcy\trecall_score\n')
for record in test_report:
fp.write(str(record)+'\n')
###Output
_____no_output_____
###Markdown
Start to implement the main function---
###Code
def model_compile(df, x_label, vector_num):
print('Trainning topic', x_label, 'with vector num', vector_num)
df = preprocess(df, x_label, 'topic')
label_model = preprocessing.LabelEncoder().fit(df['topic'])
encode_mapping = dict(zip(label_model.classes_, range(len(label_model.classes_))))
if vector_num == 1:
print('Coverting word to matrix using TF-IDF...', end=' ')
x, y = smote_with_vector(df, TfidfVectorizer(), label_model, x_label)
else:
print('Coverting word to matrix using Count...', end=' ')
x, y = smote_with_vector(df, CountVectorizer(), label_model, x_label)
print('Done!')
new_df = merge_x_y(x, y)
train_df, test_df = train_test_split(x, test_size=0.3)
#topic = topic_code.keys()
# train_x, test_x = train_test_split(x, test_size=0.3)
# train_y, test_y = train_test_split(y, test_size=0.3)
# # prepared for grid-search
# count_dt_model, count_dt_accuarcy = grid_search(CountVectorizer(), DecisionTreeClassifier(), train_x, train_y)
# tfidf_dt_model, tfidf_dt_accuarcy = grid_search(TfidfVectorizer(norm=None), DecisionTreeClassifier(), train_x, train_y)
# # first evaluate the data
# pred_y = model.predict(test_x)
# en_test_y = label_model.transform(test_y)
# print('Total proformance')
# print('F1 score:', metrics.f1_score(en_test_y, pred_y, average='macro'))
# print('Accuarcy:', metrics.accuracy_score(en_test_y, pred_y))
# print('Recall score:', metrics.recall_score(en_test_y, pred_y, average='macro'))
# print('-'*15)
# print('Classification Report:')
# print(classification_report(en_test_y, pred_y))
# # for each topic score
# model_score(model, label_model, x_label, test_df)
# if count_dt_accuarcy >= tfidf_dt_accuarcy:
# print(f'*************************************************************')
# print(f'Now the training set is {x_label}, and the model chosen is count')
# print(f'The accuracy is {count_dt_accuarcy}')
# return count_dt_model,label_model,encode_mapping
# else:
# print(f'*************************************************************')
# print(f'Now the training set is {x_label}, and the model chosen is tfidf')
# print(f'The accuracy is {tfidf_dt_accuarcy}')
# return tfidf_dt_model,label_model,encode_mapping
def model_evaluate(model, x_label, label_model, df, encode_mapping, vector_num):
print('Start to evalute', x_label, 'model')
test_set = preprocess(df, x_label, 'topic')
test_x = test_set[x_label]
test_y = test_set['topic']
topics = list(set(test_set['topic']))
# evalute total performance
pred_y = model.predict(test_x)
en_test_y = label_model.transform(test_y)
print('Total proformance')
print('F1 score:', metrics.f1_score(en_test_y, pred_y, average='macro'))
print('Accuarcy:', metrics.accuracy_score(en_test_y, pred_y))
print('Recall score:', metrics.recall_score(en_test_y, pred_y, average='macro'))
print('-'*15)
print('Classification Report:')
print(classification_report(en_test_y, pred_y))
# evalute all the topic performance
model_report = model_score(model, label_model, x_label, df)
# save current model and performance
save_job(model, model_report, vector_num, x_label)
# for figure
conf_matrix = confusion_matrix(en_test_y, pred_y)
fig1 = plt.figure(figsize=(13,6))
sns.heatmap(conf_matrix,
# square=True,
annot=True, # show numbers in each cell
fmt='d', # set number format to integer in each cell
yticklabels=label_model.classes_,
xticklabels=model.classes_,
cmap="Blues",
# linecolor="k",
linewidths=.1,
)
plt.title(
f"Confusion Matrix on Test Set | "
f"Classifier: {'+'.join([step for step in model.named_steps.keys()])}",
fontsize=14)
plt.xlabel("Actual: False positives for y != x", fontsize=12)
plt.ylabel("Prediction: False negatives for x != y", fontsize=12)
plt.show()
#plt.savefig('model/'+str(vector_num)+'_'+x_label+'.png')
###Output
_____no_output_____
###Markdown
For test/debug
###Code
%%time
x_label = 'key_word_50'
vector_num = 2
df = pd.read_csv(TRAININGFILE)
model_compile(df, x_label, vector_num)
# model, label_model, encode_mapping =
###Output
Trainning topic key_word_50 with vector num 2
Coverting word to matrix using Count...Done!
Total shape: (52074, 2)
X_shape: (52074,)
Y_shape: (52074,)
0 4
1 7
2 10
3 4
4 6
..
52069 10
52070 10
52071 10
52072 10
52073 10
Name: y, Length: 52074, dtype: int64
CPU times: user 8.56 s, sys: 153 ms, total: 8.71 s
Wall time: 8.74 s
###Markdown
Start to test different models---For one-topic testing
###Code
%%time
x_label = 'key_word_50'
vector_num = 1
df = pd.read_csv(TRAININGFILE)
test_df = pd.read_csv(TESTFILE)
model, label_model, encode_mapping = model_compile(df, x_label, vector_num)
model_evaluate(model, x_label, label_model, test_df, encode_mapping, vector_num)
###Output
Trainning topic key_word_50 with vector num 1
*************************************************************
Now the training set is key_word_50, and the model chosen is count_clf_NB
The accuracy is 0.9406805340323805
Start to evalute key_word_50 model
Total proformance
F1 score: 0.43573387653540774
Accuarcy: 0.652
Recall score: 0.5048479742987751
---------------
Classification Report:
precision recall f1-score support
0 0.33 0.67 0.44 3
1 0.18 0.13 0.15 15
2 0.43 0.46 0.44 13
3 0.09 0.50 0.15 2
4 0.48 0.58 0.53 48
5 0.62 0.57 0.59 14
6 0.81 0.71 0.76 266
7 0.47 0.52 0.49 69
8 0.00 0.00 0.00 3
9 0.31 0.57 0.40 7
10 0.82 0.83 0.83 60
accuracy 0.65 500
macro avg 0.41 0.50 0.44 500
weighted avg 0.68 0.65 0.66 500
Topic f1 accuarcy recall_score
['SPORTS', {'f1': 0.303, 'accuarcy': 0.8333, 'recall_score': 0.2778}]
['IRRELEVANT', {'f1': 0.0831, 'accuarcy': 0.7105, 'recall_score': 0.0711}]
['ARTS CULTURE ENTERTAINMENT', {'f1': 0.4, 'accuarcy': 0.6667, 'recall_score': 0.3333}]
['FOREX MARKETS', {'f1': 0.1474, 'accuarcy': 0.5833, 'recall_score': 0.1167}]
['HEALTH', {'f1': 0.1818, 'accuarcy': 0.5714, 'recall_score': 0.1429}]
['SHARE LISTINGS', {'f1': 0.3636, 'accuarcy': 0.5714, 'recall_score': 0.2857}]
['MONEY MARKETS', {'f1': 0.1371, 'accuarcy': 0.5217, 'recall_score': 0.1043}]
['DOMESTIC MARKETS', {'f1': 0.3333, 'accuarcy': 0.5, 'recall_score': 0.25}]
['DEFENCE', {'f1': 0.1579, 'accuarcy': 0.4615, 'recall_score': 0.1154}]
['BIOGRAPHIES PERSONALITIES PEOPLE', {'f1': 0.0392, 'accuarcy': 0.1333, 'recall_score': 0.0222}]
['SCIENCE AND TECHNOLOGY', {'f1': 0.0, 'accuarcy': 0.0, 'recall_score': 0.0}]
###Markdown
For multi-topic testing
###Code
%%time
# load data
df = pd.read_csv(TRAININGFILE)
test_df = pd.read_csv(TESTFILE)
for x_label in x_label_list:
for vector_num in [1, 2]:
model, label_model, encode_mapping = model_compile(df, x_label, vector_num)
model_evaluate(model, x_label, label_model, test_df, encode_mapping, vector_num)
df = pd.read_csv(TRAININGFILE)
train = preprocess(df, 'key_word_50', 'topic')
print(train)
###Output
key_word_50 topic
0 open,cent,cent,cent,stock,rate,end,won,won,won... FOREX MARKETS
1 end,end,day,day,day,point,time,bank,early,year... MONEY MARKETS
2 socc,socc,world,world,stat,stat,stat,stat,gove... SPORTS
3 open,cent,cent,end,play,unit,made,bank,bank,tu... FOREX MARKETS
4 minut,minut,minut,day,friday,friday,race,time,... IRRELEVANT
... ... ...
9495 south,scient,capit,intern,year,year,set,set,se... DEFENCE
9496 stock,stock,stock,week,play,friday,point,gover... IRRELEVANT
9497 rate,million,million,dollar,dollar,trad,newsro... FOREX MARKETS
9498 week,week,end,day,arm,man,die,polic,polic,year... IRRELEVANT
9499 market,econom,econom,econom,econom,econom,econ... FOREX MARKETS
[9500 rows x 2 columns]
###Markdown
For Google Colab running
###Code
%%time
x_label = 'article_words'
# load data
df = pd.read_csv(TRAININGFILE)
test_df = pd.read_csv(TESTFILE)
for vector_num in [1, 2]:
model, label_model, encode_mapping = model_compile(df, x_label, vector_num)
model_evaluate(model, x_label, label_model, test_df, encode_mapping, vector_num)
%%time
x_label = 'article_words'
# load data
df = pd.read_csv(TRAININGFILE)
train_df, test_df = train_test_split(df, test_size=0.2)
for vector_num in [1, 2]:
model, label_model, encode_mapping = model_compile(df, x_label, vector_num)
model_evaluate(model, x_label, label_model, test_df, encode_mapping, vector_num)
###Output
_____no_output_____ |
The problem with Serbian economy/1.Data Cleaning/Data Cleaning.ipynb | ###Markdown
1. Household Income and Spending
###Code
income_hh = pd.read_csv('01monthly_average_household_incomes.csv', sep=';')
spending_hh = pd.read_csv('01monthly_average_household_spendings.csv', sep=';')
#Renaming and dropping columns
rename = {'nTer':'Region', 'nTipNaselja':'SettlementType', 'god':'Year', 'nVrPod':'SortMeOut',
'vrednost':'Amount(RSD)'}
drop = ['idindikator', 'IDTer', 'IDTipNaselja', 'mes', 'IDVrPod']
income_hh = income_hh.rename(columns=rename).rename(columns={'nRaspolozivaSredstva':'TypeOfIncome'})
income_hh = income_hh.drop(drop+['IDRaspolozivaSredstva'], axis=1)
spending_hh = spending_hh.rename(columns=rename).rename(columns={'nCOICOP':'TypeOfSpending'})
spending_hh = spending_hh.drop(drop+['IDCOICOP'], axis=1)
#Moving structural percent that was a row and should have been a column
struct_pct = income_hh.iloc[1::2]['Amount(RSD)'].reset_index(drop=True)
income_hh = income_hh.iloc[::2].reset_index(drop=True)
income_hh['StructuralPercentage'] = struct_pct
income_hh = income_hh.drop('SortMeOut', axis=1)
struct_pct = spending_hh.iloc[1::2]['Amount(RSD)'].reset_index(drop=True)
spending_hh = spending_hh.iloc[::2].reset_index(drop=True)
spending_hh['StructuralPercentage'] = struct_pct
spending_hh = spending_hh.drop('SortMeOut', axis=1)
#Filling zeroes
income_hh = income_hh.fillna(0)
#Translating
spendings = [['Укупно', 'Total'], ['Храна и безалкохолна пића', 'Food and Non-Alcoholic Beverages'],
['Алкохолна пића, дуван и наркотици', 'Alcohol, Tobacco and Narcotics'], ['Одећа и обућа', 'Clothing and Footwear'],
['Становање, вода, ел. енергија, гас и остала горива', 'Housing, Water, Energy, Gas, etc.'],
['Опрема за стан и текуће одржавање', 'Home Equipment and Maintenance'], ['Здравље', 'Health'], ['Транспорт', 'Transport'],
['Комуникације', 'Communication'], ['Рекреација и култура', 'Culture and Recreation'], ['Образовање', 'Education'],
['Ресторани и хотели', 'Hotels and Restaurants'], ['Остали лични предмети и остале услуге', 'Other Items and Services']]
for i in spendings:
spending_hh.loc[spending_hh['TypeOfSpending']==i[0], 'TypeOfSpending'] = i[1]
spending_hh = spending_hh[spending_hh.TypeOfSpending != 'Drop']
incomes = [['Приходи домаћинстава - укупно', 'Total Earnings'],
['Приходи домаћинстава у новцу', 'Earnings in Currency'],
['Приходи из редовног радног односа', 'Earnings from Orthodox Work Relations a.k.a. Job'],
['Приходи ван редовног радног односа', 'Earnings from Unorthodox Work Relations'],
['Пензије (старосне, породичне, инвалидске и остале)', 'Pensions'],
['Остала примања од социјалног осигурања', 'Social Security Earnings'],
['Приходи од пољопривреде, лова и риболова', 'Earnings from Agricultural, Hunting and Fishing Actions'],
['Приходи из иностранства', 'Foreign Earnings'], ['Приходи од имовине', 'Real Estate Earnings'],
['Поклони и добици', 'Presents'], ['Потрошачки и инвестициони кредити', 'Investment Interest'],
['Остала примања', 'Rest'], ['Приходи домаћинстава у натури', 'Earnings in Labor'],
['Приходи у натури на име зараде', 'Other Earnings in Labor'], ['Натурална потрошња', 'Drop']]
for i in incomes:
income_hh.loc[income_hh['TypeOfIncome']==i[0], 'TypeOfIncome'] = i[1]
income_hh = income_hh[income_hh.TypeOfIncome != 'Drop']
settlements = [['Укупно', 'Total'], ['Градска насеља', 'City'], ['Остало', 'Rest']]
for i in settlements:
income_hh.loc[income_hh['SettlementType']==i[0], 'SettlementType'] = i[1]
spending_hh.loc[spending_hh['SettlementType']==i[0], 'SettlementType'] = i[1]
regions = [['РЕПУБЛИКА СРБИЈА', 'State'], ['Београдски регион', 'Belgrade'],
['Регион Војводине', 'Voyvodina/North'],
['Регион Шумадије и Западне Србије', 'Central and West'],
['Регион Јужне и Источне Србије', 'South and East']]
for i in regions:
income_hh.loc[income_hh['Region']==i[0], 'Region'] = i[1]
spending_hh.loc[spending_hh['Region']==i[0], 'Region'] = i[1]
datasets.append([income_hh, 'income_hh'])
datasets.append([spending_hh, 'spending_hh'])
###Output
_____no_output_____
###Markdown
2. Foreign Trades
###Code
yearly_net_trade = pd.read_csv('02yearly_net_trade.csv', sep=';')
monthly_exports = pd.read_csv('02monthly_exports.csv', sep=';')
monthly_imports = pd.read_csv('02monthly_imports.csv', sep=';')
sector_exports = pd.read_csv('02sector_exports.csv', sep=';')
sector_imports = pd.read_csv('02sector_imports.csv', sep=';')
#Renaming and dropping columns
yearly_net_trade = yearly_net_trade.drop(['idindikator', 'IDVrPod', 'mes'], axis=1)
yearly_net_trade = yearly_net_trade.rename(columns={'nVrPod':'Action', 'god':'Year',
'vrednost':'Amount'})
#Translating
actions = [['Извоз', 'Exports(tUSD)'], ['Увоз', 'Imports(tUSD)'], ['Салдо', 'Net Exports(tUSD)']]
for i in actions:
yearly_net_trade.loc[yearly_net_trade['Action']==i[0], 'Action'] = i[1]
#Unmelting
yearly_net_trade = yearly_net_trade.pivot_table(index='Year', columns='Action')
yearly_net_trade.columns = yearly_net_trade.columns.droplevel().rename(None)
yearly_net_trade = yearly_net_trade.reset_index()
#Renaming and dropping columns
monthly_exports = monthly_exports.drop(['idindikator', 'IDVrPod', 'nVrPod'], axis=1)
monthly_exports = monthly_exports.rename(columns={'god':'Year', 'mes':'Month',
'vrednost':'Amount(tUSD)'})
monthly_imports = monthly_imports.drop(['idindikator', 'IDVrPod', 'nVrPod'], axis=1)
monthly_imports = monthly_imports.rename(columns={'god':'Year', 'mes':'Month',
'vrednost':'Amount(tUSD)'})
#Dropping redundant rows
monthly_exports = monthly_exports.iloc[2::3]
monthly_imports = monthly_imports.iloc[2::3]
#Merging monthly_exports and monthly_imports and adding Net Exports
monthly_net_trade = pd.concat([monthly_exports, monthly_imports['Amount(tUSD)']], axis=1)
monthly_net_trade.columns = ['Month', 'Year', 'Exports(tUSD)', 'Imports(tUSD)']
monthly_net_trade['NetExports(tUSD)'] = monthly_net_trade['Exports(tUSD)'] - monthly_net_trade['Imports(tUSD)']
monthly_net_trade = monthly_net_trade.loc[monthly_net_trade['Month']!=0].reset_index(drop=True)
monthly_net_trade = monthly_net_trade.loc[monthly_net_trade['Year']!=2020].reset_index(drop=True)
#Renaming and dropping columns
sector_exports = sector_exports.drop(['idindikator', 'mes', 'nDrzave', 'IDNSST', 'IDVrPod'],
axis=1)
sector_exports = sector_exports.rename(columns={'god':'Year', 'IDDrzave':'CountryId',
'nNSST':'Sector', 'nVrPod':'Currency',
'vrednost':'Amount(tUSD)'})
sector_imports = sector_imports.drop(['idindikator', 'mes', 'nDrzave', 'IDNSST', 'IDVrPod'],
axis=1)
sector_imports = sector_imports.rename(columns={'god':'Year', 'IDDrzave':'CountryId',
'nNSST':'Sector', 'nVrPod':'Currency',
'vrednost':'Amount(tUSD)'})
#Sorting rows
sector_imports = sector_imports.loc[sector_imports['Currency']=='Вредност у хиљадама УСД']
sector_exports = sector_exports.loc[sector_exports['Currency']=='Вредност у хиљадама УСД']
sector_imports = sector_imports.drop('Currency', axis=1).reset_index(drop=True)
sector_exports = sector_exports.drop('Currency', axis=1).reset_index(drop=True)
#Merging
net_sector = pd.merge(sector_exports, sector_imports, on=['Year', 'CountryId', 'Sector'])
net_sector = net_sector.rename(columns={'Amount(tUSD)_x':'Exports(tUSD)',
'Amount(tUSD)_y':'Imports(tUSD)'})
net_sector['NetExports(tUSD)'] = net_sector['Exports(tUSD)'] - net_sector['Imports(tUSD)']
aggregates = [['00', 'Total'], ['01', 'EU'], ['02', 'CEFTA']]
for i in aggregates:
net_sector.loc[net_sector['CountryId']==i[0], 'CountryId'] = i[1]
#Translating
sectors = [['Храна и живе животиње', 'Food and Livestock'],
['Пића и дуван', 'Drinks and Tobacco'],
['Сирове материје, нејестиве, осим горива', 'Ineadible Raw Materials Excluding Fuel'],
['Минерална горива, мазива и сродни производи', 'Mineral Fuels, Lubricants and Related Products'],
['Животињска и биљна уља, масти и воскови', 'Animal and Vegetable Oils, Fats and Waxes'],
['Хемијски и сл. производи, нигде непоменути', 'Chemical Products and Similar'],
['Израђени производи сврстани по материјалу', 'Manufactured Products Classified by Material'],
['Машине и транспортни уређаји', 'Machines and Transport Devices'],
['Разни готови производи', 'Other Finished Products'],
['Производи непоменути у СМТК Рев. 4', 'Undeclared Products within SITC Rev. 4']]
for i in sectors:
net_sector.loc[net_sector['Sector']==i[0], 'Sector'] = i[1]
datasets.append([yearly_net_trade, 'yearly_net_trade'])
datasets.append([monthly_net_trade, 'monthly_net_trade'])
datasets.append([net_sector, 'net_sector'])
###Output
_____no_output_____
###Markdown
3. CPI
###Code
cpi = pd.read_json('03cpi.json')
#Renaming and dropping columns and rows
cpi = cpi.loc[cpi['nVrPod']=='Базни индекси (2006 = 100)'].reset_index(drop=True)
cpi = cpi.drop(['idindikator', 'IDVrPod', 'nVrPod', 'IDTer', 'nTer', 'IDCOICOP'], axis=1)
cpi = cpi.rename(columns={'mes':'Month', 'god':'Year', 'nCOICOP':'ExpenditureCategory',
'vrednost':'CPI(2006=100)'})
cpi = cpi.loc[cpi['Year']!=2020].reset_index(drop=True)
cpi = cpi.drop_duplicates()
#Translating
categories = [['Укупно', 'Total'], ['Храна и безалкохолна пића', 'Food and Non-alcoholic Drinks'],
['Храна', 'Food'], ['Хлеб и житарице', 'Bread and Cereal'],
['Месо', 'Meat'], ['Риба', 'Fish'], ['Млеко, сир и јаја', 'Milk, Cheese and Eggs'],
['Уља и масти', 'Oils and Fats'], ['Воће', 'Fruit'],
['Поврће', 'Vegetables'], ['Шећер, џем, мед и чоколада', 'Sugar, Jam, Honey and Chocolate'],
['Остали прехрамбени производи', 'Other Edible Products'],
['Безалкохолна пића', 'Non-alcoholic Drinks'], ['Кафа, чај и какао', 'Coffee, Tea and Cocoa'],
['Минерална вода, безалкохолна пића и сокови', 'Mineral Water, Non-alcoholic Drinks and Juices'],
['Алкохолна пића, дуван и наркотици', 'Alcoholic Drinks, Tobacco and Narcotics'],
['Алкохолна пића', 'Alcoholic Drinks'],
['Жестока алкохолна пића', 'Strong Alcoholic Drinks'], ['Вино', 'Wine'], ['Пиво', 'Beer'],
['Дуван', 'Tobacco'], ['Одећа и обућа', 'Clothing and Footwear'],
['Одећа', 'Clothing'], ['Материјал за одећу', 'Clothing Material'],
['Одевни предмети', 'Garments'], ['Остали одевни предмети', 'Other Garments'],
['Чишћење, поправка и шивење одеће', 'Clothing Maintenance'],
['Обућа', 'Footwear'], ['Поправка обуће', 'Footwear Maintenance'],
['Становање, вода, ел. енергија, гас и остала горива', 'Housing, Water, Energy, Gas and Other Fuels'],
['Стварне стамбене ренте', 'Real Rent'], ['Изнајмљивање стана', 'Rent'],
['Oдржавање и поправка станова', 'Apartment Maintenance'],
['Материјал за одржавање и поправку станова', 'Materials for Apartment Maintenance'],
['Услуге за одржавање и поправку станова', 'Apartment Maintenance Services'],
['Снабдевање водом и остале стамбене услуге', 'Waterworks and Other Housing Services'],
['Снабдевање водом', 'Water Supply'],
['Одношење смећа', 'Garbage Collection'],
['Одвoђење отпадне воде', 'Waste Water Drainage'],
['Електрична енергија, гас и остала горива', 'Energy, Gas and Other Fuels'],
['Електрична енергија за домаћинство', 'Household Electric Energy'],
['Гас', 'Gas'], ['Течна горива', 'Liquid Fuels'],
['Чврста горива', 'Solid Fuels'], ['Даљинско грејање', 'District Heating'],
['Опрема за стан и текуће одржавање', 'Household Equipment and Maintenance'],
['Намештај, теписи и остале подне простирке', 'Furniture, Carpets and Other Rugs'],
['Намештај, расвета и декоративна опрема за стан', 'Furniture, Lightning and Decorative Ornaments'],
['Теписи и остале подне простирке', 'Carpets and Other Floor Rugs'],
['Текстил за домаћинство', 'Textile for Household Use'],
['Апарати за домаћинство', 'Household Appliances'], ['Велики кућни апарати', 'Big Household Appliances'],
['Мали електрични апарати', 'Small Electrical Appliances'],
['Поправка апарата за домаћинство', 'Household Appliances Maintenance'],
['Посуђе и остали прибор за јело', 'Dishes and Cutlery'],
['Алат и остала опрема за кућу и врт', 'Tools and Other Equipment for House and Garden'],
['Већи алат и опрема за кућу и врт', 'Bigger House and Garden Tools and Equipment'],
['Мали алат и разноврсни прибор', 'Small Tools and Accessories'],
['Средства и услуге за текуће одржавање стана', 'Funds and Services for Ongoing Maintenance of Dwelling'],
['Средства за одржавање стана', 'Dwelling Maintenance Funds'], ['Услуге за текуће одржавање стана', 'Dwelling Maintenance Services'],
['Здравље', 'Health'], ['Лекови и медицински уређаји и опрема', 'Medication and Medical Equipment'], ['Лекови', 'Medication'],
['Остали медицински производи', 'Other Medical Products'], ['Здравствене неболничке услуге', 'Non-hospital Health Services'],
['Медицинске услуге', 'Medical Services'], ['Стоматолошке услуге', 'Dentist Services'],
['Пратеће медицинске услуге', 'Accompanying Medical Services'], ['Транспорт', 'Transport'],
['Набавка возила', 'Procurement of Vehicles'],
['Аутомобили', 'Cars'], ['Бицикли', 'Bycicles'], ['Коришћење и одржавање возила', 'Vehicles Use and Maintenance'],
['Резервни делови', 'Reserve Parts'], ['Горива и мазива за путничка возила', 'Fuels and Lubricants for Passenger Vehicles'],
['Одржавање и поправка путничких возила', 'Maintenance of Passenger Vehicles'],
['Остале услуге у вези са употребом возила', 'Other Vehicle Related Services'], ['Транспортне услуге', 'Transport Services'],
['Превоз путника железницом', 'Rail Passenger Transport'], ['Превоз путника друмом', 'Road Passenger Transport'],
['Превоз путника авионом', 'Plane Passenger Transport'], ['Комуникације', 'Communication'], ['Поштанске услуге', 'Mail Services'],
['Телефонска опрема', 'Telephone Equipment'], ['Телефонске и остале услуге', 'Telephone and Other Equipment'],
['Телефонске услуге', 'Telephone Services'], ['Рекреација и култура', 'Culture and Recreation'],
['Аудио-визуелна, фотографска и рачунарска опрема', 'Audio-Video, Photographic and Computer Equipment'],
['Опрема за пријем, снимање и репродукцију звука и слике', 'Sound and Picture Reciving, Capturing and Reproduction Equipment'],
['Фотографска и филмска опрема', 'Photographic and Film Equipment'], ['Рачунарска опрема', 'Computer Equipment'],
['Медији за снимање слике и звука', 'Sound and Picture Capturing Medium'],
['Поправка аудио-визуелне, фотографске и рачунарске опреме', 'Audio-Visual, Photographic and Computer Equipment Maintenance'],
['Већа трајна добра за рекреацију и културу', 'Greater Lasting Goods for Culture and Recreation'], ['Музички инструменти', 'Musical Instruments'],
['Остала опрема за рекреацију, врт и кућни љубимци', 'Other Recreational, Garden and Pet-related Equipment'],
['Играчке, игре и хоби', 'Toys, Games and Hobbies'], ['Баште, саднице и цвеће', 'Garden, Seedling and Flower'],
['Кућни љубимци, средства и услуге у вези са кућним љубимцима', 'Pets and Pet-related Means and Services'],
['Ветеринарске и друге услуге за животиње', 'Vet and Other Pet Services'],
['Рекреација и култура – услуге', 'Cultural and Recreational Services'], ['Рекреација – услуге', 'Recreational Services'],
['Култура – услуге', 'Cultural Services'], ['Новине, књиге и канцеларијски прибор', 'Newspaper, Books and Office Equipment'],
['Књиге', 'Books'], ['Новине и часописи', 'Newspapers and Magazines'], ['Канцеларијски материјал', 'Office Equipments'],
['Образовање', 'Education'], ['Средњошколско образовање', 'Highschool Education'],
['Више и високо образовање', 'Higher Education'], ['Остали видови образовања', 'Other Types of Education'],
['Ресторани и хотели', 'Restaurants and Hotels'], ['Ресторани, кафеи и кантине', 'Restaurants, Cafes and Canteens'],
['Ресторани и кафеи', 'Restaurants and Cafes'], ['Услуге смештаја', 'Accommodation Services'], ['Смештај', 'Accommodation'],
['Остали лични предмети и остале услуге', 'Other Personal Items and Other Services'], ['Лична нега', 'Personal Care'],
['Услуге у фризерским и козметичким салонима', 'Cosmetic and Hairdresser Services'],
['Електрични апарати за личну негу', 'Personal Care Electrical Equipment'],
['Остaли предмети за личну негу', 'Other Personal Care Items'], ['Лични предмети', 'Personal Items'],
['Накит и сатови', 'Jewelry and Watches'], ['Остали лични предмети', 'Other Personal Items'], ['Социјална заштита', 'Social Security'],
['Осигурање', 'Insurance'], ['Осигурање стана', 'Dwelling Insurance'], ['Осигурање транспортних возила', 'Vehicle Insurance'],
['Финансијске услуге', 'Financial Services'],
['Финансијске услуге на другом месту непоменуте', 'Other Financial Services'], ['Остале услуге', 'Other Services'],
['Терапеутски производи и опрема', 'Therapeutic Products and Equipment'], ['Опрема за спорт и камповање', 'Camping and Sport Equipment'],
['Поправка намештаја, расвете и подних простирки', 'Repair of Furniture, Lighting and Floor Coverings'],
['Пакет-аранжмани', 'Package Deals']]
for i in categories:
cpi.loc[cpi['ExpenditureCategory']==i[0], 'ExpenditureCategory'] = i[1]
datasets.append([cpi, 'cpi'])
###Output
_____no_output_____
###Markdown
4. Industry
###Code
#Industry by Sector
industry_sector = pd.read_csv('04industry_traffic_index15.csv', sep=';')
#Renaming and dropping columns and rows
industry_sector = industry_sector.drop(['idindikator', 'nTer', 'IDTer', 'IDVrNamene'], axis=1)
industry_sector = industry_sector.rename(columns={'mes':'Month', 'god':'Year',
'nVrNamene':'SectorPurpose', 'vrednost':'ProductionIndex(2015=100)'})
industry_sector = industry_sector.loc[industry_sector['Year']!=2020].reset_index(drop=True)
industry_sector = industry_sector.drop_duplicates()
#Translating
industrial_sectors = [['Укупно', 'Total'], ['Енергија', 'Energy'],
['Интермедијарни производи', 'Intermediate Products'],
['Капитални производи', 'Capital Products'],
['Трајни производи за широку потрошњу', 'Durable Consumer Goods'],
['Нетрајни производи за широку потрошњу', 'Temporary Consumer Goods']]
for i in industrial_sectors:
industry_sector.loc[industry_sector['SectorPurpose']==i[0], 'SectorPurpose'] = i[1]
#Splitting Data into Monthly and Yearly
yearly_industry_sector = industry_sector.loc[industry_sector['Month']==0]
yearly_industry_sector = yearly_industry_sector.drop('Month', axis=1).reset_index(drop=True)
monthly_industry_sector = industry_sector.loc[industry_sector['Month']!=0]
monthly_industry_sector = monthly_industry_sector.reset_index(drop=True)
#Industry by Activity
industry_activity = pd.read_csv('04industry_activity.csv', sep=';')
#Renaming and dropping columns and rows
industry_activity = industry_activity.drop(['idindikator', 'nTer', 'IDTer', 'IDKD08'], axis=1)
industry_activity = industry_activity.rename(columns={'mes':'Month', 'god':'Year',
'nkd08':'Activity', 'vrednost':'ProductionIndex'})
industry_activity = industry_activity.loc[industry_activity['Year']!=2020].reset_index(drop=True)
industry_activity = industry_activity.drop_duplicates()
#Translating
industrial_activities = [['Укупно', 'Total'], ['Експлоатација угља', 'Coal Exploitation'],
['Експлоатација сирове нафте и природног гаса', 'Oil and Natural Gas Exploitation'],
['Експлоатација руда метала', 'Metal Ore Exploitation'],
['Остало рударство', 'Other Types of Mining'],
['Производња прехрамбених производа', 'Food Products Production'],
['Производња пића', 'Drink Production'],
['Производња дуванских производа', 'Tobacco Products Production'],
['Производња текстила', 'Textile Production'],
['Производња одевних предмета', 'Clothing Production'],
['Производња коже и предмета од коже', 'Leather and Leather Item Production'],
['Прерада дрвета и производи од дрвета, плуте, сламе и прућа, осим намештаја', 'Wood and Wood, Cork, Straw and Rod - Related Products Production, Except Furniture'],
['Производња папира и производа од папира', 'Paper and Paper Products Production'],
['Штампање и умножавање аудио и видео записа', 'Printing and Duplication of Audio and Video Material'],
['Производња кокса и деривата нафте', 'Coke and Refined Petroleum Production'],
['Производња хемикалија и хемијских производа', 'Chemicals and Chemical Products Production'],
['Производња основних фармацеутских производа и препарата', 'Basic Pharmaceutical Products and Remedies Production'],
['Производња производа од гуме и пластике', 'Rubber and Plastic Production'],
['Производња производа од осталих неметалних минерала', 'Manufacture of Other Non-metallic Minerals Products'],
['Производња основних метала', 'Base Metals Production'],
['Производња металних производа, осим машина и уређаја', 'Fabricated Metal Products Production, Except Machinery and Equipment'],
['Производња рачунара, електронских и оптичких производа', 'Computer, Electronic and Optical Products Production'],
['Производња електричне опреме', 'Electrical Equipment Production'],
['Производња непоменутих машина и непоменуте опреме', 'Unmentioned Machinery and Equipment Production'],
['Производња моторних возила, приколица и полуприколица', 'Motor Vehicles, Trailers and Semi-trailer Production'],
['Производња осталих саобраћајних средстава', 'Other Transport Equipment Production'],
['Производња намештаја', 'Furniture Production'], ['Остале прерађивачке делатности', 'Other Manufacturing'],
['Поправка и монтажа машина и опреме', 'Repair and Installation of Machinery Equipment'],
['Снабдевање електричном енергијом, гасом, паром и климатизација', 'Electricity, Gas, Steam and Air Conditioning Supply'],
['B - Рударство', 'B - Mining'], ['C - Прерађивачка индустрија', 'C - Manufacturing Industry'],
['D - Снабдевање електричном енергијом, гасом, паром и климатизација', 'D - Electricity, Gas, Steam and Air Conditioning Supply']]
for i in industrial_activities:
industry_activity.loc[industry_activity['Activity']==i[0], 'Activity'] = i[1]
#Splitting Data into Monthly and Yearly
yearly_industry_activity = industry_activity.loc[industry_activity['Month']==0]
yearly_industry_activity = yearly_industry_activity.drop('Month', axis=1).reset_index(drop=True)
monthly_industry_activity = industry_activity.loc[industry_activity['Month']!=0]
monthly_industry_activity = monthly_industry_activity.reset_index(drop=True)
datasets.append([yearly_industry_sector, 'yearly_industry_sector'])
datasets.append([monthly_industry_sector, 'monthly_industry_sector'])
datasets.append([yearly_industry_activity, 'yearly_industry_activity'])
datasets.append([monthly_industry_activity, 'monthly_industry_activity'])
###Output
_____no_output_____
###Markdown
5. National Accounts
###Code
#GDP and Other GDP-related Stats
gdp = pd.read_csv('05gdp.csv', sep=';')
#Renaming and dropping columns and rows
gdp = gdp.drop(['idindikator', 'IDTer', 'nTer', 'IDStavkeBDP', 'IDVrPod', 'mes'], axis=1)
gdp = gdp.rename(columns={'nStavkeBDP':'EditMe1', 'nVrPod':'EditMe2', 'god':'Year',
'vrednost':'EditMe3'})
gdp = gdp.loc[gdp['EditMe2']!='Вредност, сталне цене (цене претходне године), мил. РСД']
gdp = gdp.reset_index(drop=True)
#Translating
editme2 = [['Вредност, текуће цене, мил. РСД', 'NominalValue(mRSD)'],
['Учешће у БДП, %', '%NominalGDP'],
['Вредност, уланчане мере обима, референтна 2010. година, мил. РСД', 'RealValue(mRSD,2010=100)'],
['Стопе реалног раста, претходна година = 100, %', 'RealGrowthRatePct(LastYear=100)']]
for i in editme2:
gdp.loc[gdp['EditMe2']==i[0], 'EditMe2'] = i[1]
editme1 = [['Бруто додата вредност (БДВ)', 'GrossValueAdded'],
['Порези на производе', 'ProductTax'],
['Субвенције на производе', 'ProductSubventions'],
['Бруто домаћи производ (БДП)', 'GDP']]
for i in editme1:
gdp.loc[gdp['EditMe1']==i[0], 'EditMe1'] = i[1]
#Pivoting
gdp = gdp.pivot_table(index=['EditMe1', 'Year'], columns='EditMe2')
gdp.columns = gdp.columns.droplevel().rename(None)
gdp.index.names = ['Account', 'Year']
gdp = gdp.reset_index()
#Added Value by Actions by Q
qav_actions = pd.read_csv('05quartal_added_value_by_actions.csv', sep=';')
#Renaming and dropping columns and rows
qav_actions = qav_actions.drop(['idindikator', 'IDTer', 'nTer', 'IDKD08', 'IDVrPod'], axis=1)
qav_actions = qav_actions.rename(columns={'nkd08':'Action', 'nVrPod':'EditMe1', 'mes':'Q', 'god':'Year',
'vrednost':'EditMe2'})
qav_actions = qav_actions.loc[qav_actions['EditMe1']!='Вредност, сталне цене (цене претходне године), мил.РСД']
qav_actions = qav_actions.reset_index(drop=True)
#Translating
editme1 = [['Вредност, текуће цене, мил.РСД', 'NominalValue(mRSD)'],
['Учешће у БДП, %', '%NominalGDP'],
['Вредност, уланчане мере обима, референтна 2010. година, мил.РСД', 'RealValue(mRSD,2010=100)'],
['Стопе реалног раста, уланчане мере обима (исти квартал претходне године = 100), %', 'RealGrowthRatePct(LastYear=100)']]
for i in editme1:
qav_actions.loc[qav_actions['EditMe1']==i[0], 'EditMe1'] = i[1]
actions = [['Укупно', 'Total'], ['A - Пољопривреда, шумарство и рибарство', 'A - Agriculture, Forestry and Fishing'],
['B-E - Прерађивачка индустрија, рударство и остала индустрија', 'B-E - Manufacturing, Mining and Other Industries'],
['F - Грађевинарство', 'F - Construction'],
['G-I - Трговина на велико и мало, саобраћај и складиштење и услуге смештаја и исхране', 'G-I - Wholesale and Retail Trade, Transport and Storage, Lodging and Catering Services'],
['J - Информисање и комуникације', 'J - Informing and Communication'],
['K - Финансијске делатности и делатност осигурања', 'K - Financial and Insurance Activities'],
['L - Пoсловање некретнинама', 'L - Real Estate Business'],
['M-N - Стручне, научне, техничке, административне и друге помоћне активности', 'M-N - Professional, Scientific, Technical, Administrative and Other Ancillary Activities'],
['O-Q - Државна управа, одбрана, образовање и делатности у оквиру здравствене и социјалне заштите', 'O-Q - Public Administration, Defense, Education and Health, Social Care'],
['R-T - Остале услужне делатности', 'R-T - Other Service Activities']]
for i in actions:
qav_actions.loc[qav_actions['Action']==i[0], 'Action'] = i[1]
qs = [['K1', 'Q1'], ['K2', 'Q2'], ['K3', 'Q3'], ['K4', 'Q4']]
for i in qs:
qav_actions.loc[qav_actions['Q']==i[0], 'Q'] = i[1]
#Pivoting
qav_actions = qav_actions.pivot_table(index=['Year', 'Q', 'Action'], columns='EditMe1')
qav_actions.columns = qav_actions.columns.droplevel().rename(None)
qav_actions = qav_actions.reset_index()
#Government Investitions
gov_investitions = pd.read_csv('05investitions.csv', sep=';')
#Renaming and dropping columns and rows
gov_investitions = gov_investitions.drop(['idindikator', 'IDTer', 'nTer', 'IDTehnickaStruktura',
'mes'], axis=1)
gov_investitions = gov_investitions.rename(columns={'nTehnickaStruktura':'Investition',
'god':'Year', 'vrednost':'Amount(mRSD)'})
#Translating
investitions = [['Укупно', 'Total'],
['Зграде и остале грађевине', 'Buildings and Other Construction Work'],
['Стамбене зграде', 'Apartment Buildings'],
['Нестамбене зграде и остале грађевине', 'Non-apartment Buildings and Other Construction Work'],
['Машине и опрема (+ војна опрема)', 'Machinery, Equipment and Military Equipment'],
['Култивисани биолошки ресурси', 'Cultivated Biological Resources'],
['Интелектуална својина', 'Intellectual Property']]
for i in investitions:
gov_investitions.loc[gov_investitions['Investition']==i[0], 'Investition'] = i[1]
#Government Taxes and Social Contributions
gov_tax = pd.read_csv('05tax_and_social_contributions.csv', sep=';')
gov_tax = gov_tax.sort_values(by='god')
gov_tax = gov_tax.reset_index(drop=True)
#Renaming and dropping columns and rows
gov_tax = gov_tax.drop(['idindikator', 'IDTer', 'nTer', 'IDTransakcije', 'mes'], axis=1)
gov_tax = gov_tax.rename(columns={'nTransakcije':'TransactionType', 'god':'Year',
'vrednost':'Amount(mRSD)'})
#Translating
transactions = [['УКУПНO', 'Total'], ['ПОРЕСКА ПРИМАЊА', 'Tax Receipts'],
['ПОРЕЗИ НА ПРОИЗВОДЊУ И УВОЗ', 'Production and Import Taxes'],
['Порези на производе', 'Product Taxes'],
['Порез на додату вредност', 'Value Added Taxes'],
['Порези и дажбине на увоз, осим ПДВ', 'Import Taxes and Duties, Except VAT'],
['Додаци на пензијско осигурање домаћинстава', 'Household Pension Supplement'],
['Додаци на осигурање домаћинстава, осим пензијског осигурања', 'Household insurance supplements other than pension insurance'],
['КАПИТАЛНИ ПОРЕЗИ', 'Capital Taxes'],
['Порези на капиталне трансфере', 'Capital Transfer Taxes'],
['Капиталне дажбине', 'Capital Duties'],
['Остали непоменути капитални порези', 'Other Unmentioned Capital Taxes'],
['Импутирани доприноси за пензијско осигурање на терет послодавца', 'Imputed Retirement Contributions Carried by Employer'],
['Импутирани доприноси за осигурање на терет послодавца, осим пензијског осигурања', 'Imputed Contributions for Insurance at Expense of Employer Other Than Pension Insurance'],
['Стварни доприноси домаћинстава за социјално осигурање', 'Real Contributions from Households for Social Security'],
['Стварни доприноси домаћинстава за пензијско осигурање', 'Actual Pension Contributions by Households'],
['Стварни доприноси домаћинстава за осигурање, осим пензијског осигурања', 'Real Contributions from Households for Insurance Other Than Pension Insurance'],
['Додаци на социјално осигурање домаћинстава', 'Household Social Security Supplements'],
['Остали непоменути текући порези', 'Other Unmentioned Current Taxes'], ['НЕТО СОЦИЈАЛНИ ДОПРИНОСИ', 'Net Social Contributions'],
['Стварни доприноси за социјално осигурање на терет послодавца', 'Real Employer Social Security Contributions'],
['Стварни доприноси за пензијско осигурање на терет послодавца', 'Real Employer Retirement Contributions'],
['Стварни доприноси за осигурање на терет послодавца, осим пензијског осигурања', 'Real Employer Insurance Contributions, Other Than Pension Insurance'],
['Импутирани социјални доприноси на терет послодавца', 'Imputed Social Contributions at Expense of Employer'],
['Остали текући порези', 'Other Current Taxes'], ['Текући порези на капитал', 'Current Capital Taxes'],
['Порези по становнику', 'Per Capita Taxes'], ['Порези на издатке за потрошњу', 'Consumption Expenditure Taxes'],
['Плаћања домаћинстава за дозволе', 'Non-Corporate Permit Payments'],
['Порези на међународне трансакције', 'International Transactions Taxes'],
['Порези на власнички добитак предузећа', 'Enterprise Ownership Taxes'],
['Остали порези на власнички добитак', 'Other Ownership Taxes'],
['Порези на добитке од игара на срећу или коцкања', 'Gambling Gain Taxes'],
['Остали непоменути порези на доходак', 'Other Unmentioned Income Taxes'],
['Порези на доходак појединца или домаћинства, укључујући власнички добитак', 'Individual or Household Income Tax, Including Ownership Profit'],
['Порези на приход или профит предузећа, укључујући власнички добитак', 'Corporate Income or Profit Tax, Including Ownership Profit'],
['ТЕКУЋИ ПОРЕЗИ НА ДОХОДАК, БОГАТСТВО ИТД.', 'Current Income Taxes, Wealth, Etc.'],
['Порези на доходак', 'Income Taxes'],
['Порези на доходак појединца или домаћинства, осим власничког добитка', 'Individual or Household Income Tax Other Than Ownership Profit'],
['Порези на приход или профит предузећа, осим власничког добитка', 'Corporate Income or Profit Taxes Other Than Ownership Profit'],
['Порези на власнички добитак', 'Ownership Profit Taxes'],
['Порези на власнички добитак појединца или домаћинства', 'Individual or Household Equity Taxes'],
['Порез на фонд зарада', 'Wage Fund Taxes'], ['Пословне и професионалне дозволе', 'Business and Professional Licenses'],
['Порези на загађење', 'Pollution Taxes'], ['Под-компензације ПДВ (паушални систем)', 'Sub-compensation of Flat-rate VAT'],
['Остали непоменути порези на производњу', 'Other Unmentioned Production Taxes'],
['Профит фискалних монопола', 'Fiscal Monopoly Profit'],
['Извозне дажбине и новчане компензације на извоз', 'Export Duties and Refunds'],
['Остали непоменути порези на производе', 'Other Unmentioned Product Taxes'],
['Остали порези на производњу', 'Other Production Taxes'],
['Порези на земљиште, зграде и друге објекте', 'Land, Building and Other Real Estate Taxes'],
['Порези на употребу основних фондова', 'Use of Fixed Assets Taxes'],
['Порези на регистрацију возила', 'Vehicle Registration Taxes'],
['Порези на забаву', 'Entertainment Taxes'],
['Порези на лутрију, игре на срећу и клађење', 'Lottery, Chance Games and Gambling Tax'],
['Порез на премије осигурања', 'Insurance Premiums Taxes'],
['Остали порези на посебне услуге', 'Other Special Services Taxes'],
['Општи порези на промет или продају', 'General Sales Tax'], ['Порез на посебне услуге', 'Special Services Tax'],
['Профит монопола на увоз', 'Import Monopoly Profit'],
['Порези на производе, осим ПДВ и порезе на увоз', 'Product Taxes Other than VAT and Import Taxes'],
['Акцизе и порези на потрошњу', 'Excise and Consumption Taxes'], ['Таксене марке', 'Tax Stamps'],
['Порези на финансијске и капиталне трансакције', 'Capital and Financial Transactions Taxes'],
['Увозне дажбине', 'Import Duties'], ['Порези на увоз, осим ПДВ и увозних дажбина', 'Import Taxes, Except VAT and Import Duties'],
['Дажбине на увоз пољопривредних производа', 'Agricultural Products Import Duties'],
['Новчане компензације на увоз', 'Cash Import Compensation'], ['Акцизе', 'Excise Duties'], ['Општи порез на промет', 'General Sales Tax']]
for i in transactions:
gov_tax.loc[gov_tax['TransactionType']==i[0], 'TransactionType'] = i[1]
#Imputing Percent of Total Taxation of Different Tax Groups
tax_datas = []
for year in gov_tax['Year'].unique():
curr_data = gov_tax.loc[gov_tax['Year']==year]
total = curr_data.loc[curr_data['TransactionType']=='Total', 'Amount(mRSD)'].reset_index(drop=True).at[0]
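#Note: assigning a new column on the .loc slice taken above is what triggers pandas'
#SettingWithCopyWarning shown in this cell's output. A hedged alternative with the same
#result is to take an explicit copy first, e.g.:
#curr_data = gov_tax.loc[gov_tax['Year']==year].copy()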
curr_data['PctTotal'] = round(curr_data['Amount(mRSD)'] / total, 4)
tax_datas.append(curr_data)
gov_tax = pd.concat(tax_datas)
datasets.append([gdp, 'gdp'])
datasets.append([qav_actions, 'qav_actions'])
datasets.append([gov_investitions, 'gov_investitions'])
datasets.append([gov_tax, 'gov_tax'])
###Output
C:\Users\vucin\Anaconda3\lib\site-packages\ipykernel_launcher.py:173: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
###Markdown
6. Population
###Code
#Population by Age, Region and Gender
population = pd.read_csv('06population.csv', sep=';')
#Translating
regions = [['РЕПУБЛИКА СРБИЈА', 'State'], ['Београдски регион', 'Belgrade'],
['Регион Војводине', 'Voyvodina/North'],
['Регион Шумадије и Западне Србије', 'Central and West'],
['Регион Јужне и Источне Србије', 'South and East']]
for i in regions:
population.loc[population['nTer']==i[0], 'nTer'] = i[1]
genders = [['Укупно', 'Total'], ['Женско', 'F'], ['Мушко', 'M']]
for i in genders:
population.loc[population['nPol']==i[0], 'nPol'] = i[1]
age_groups = [['Укупно', '-1'], ['85 и више', '85']]
for i in age_groups:
population.loc[population['nStarost']==i[0], 'nStarost'] = i[1]
#Renaming and dropping columns and rows
population = population.loc[(population['nTer']!='СРБИЈА – СЕВЕР')&(population['nTer']!='СРБИЈА – ЈУГ')]
population = population.reset_index(drop=True)
population = population.drop(['idindikator', 'mes', 'IDTer', 'IDPol', 'IDStarost'], axis=1)
population = population.rename(columns={'god':'Year', 'nTer':'Region', 'nPol':'Gender',
'nStarost':'AgeGroup', 'vrednost':'Amount'})
population['AgeGroup'] = population['AgeGroup'].astype('int')
#Life Expectancy
life_expectancy = pd.read_csv('06life_expectancy.csv', sep=';')
#Translating
for i in regions:
life_expectancy.loc[life_expectancy['nTer']==i[0], 'nTer'] = i[1]
for i in genders:
life_expectancy.loc[life_expectancy['nPol']==i[0], 'nPol'] = i[1]
#Renaming and dropping columns and rows
life_expectancy = life_expectancy.drop(['idindikator', 'mes', 'IDTer', 'IDPol', 'IDStarGrupa'], axis=1)
life_expectancy = life_expectancy.rename(columns={'god':'Year', 'nTer':'Region', 'nPol':'Gender',
'nStarGrupa':'AgeGroup', 'vrednost':'Amount'})
#Sorting Useful Rows
life_expectancy = life_expectancy.loc[(life_expectancy['Region']!='СРБИЈА – СЕВЕР')&(life_expectancy['Region']!='СРБИЈА – ЈУГ')]
life_expectancy = life_expectancy.loc[life_expectancy['AgeGroup']=='0']
life_expectancy = life_expectancy.drop('AgeGroup', axis=1)
life_expectancy = life_expectancy.sort_values(['Year', 'Region', 'Gender'])
life_expectancy = life_expectancy.reset_index(drop=True)
#Natural Increase
natural_increase = pd.read_csv('06natural_increase.csv', sep=';')
#Translating
for i in regions:
natural_increase.loc[natural_increase['nTer']==i[0], 'nTer'] = i[1]
#Dropping Smaller Regions
smaller_regions = [i[1] for i in regions]
natural_increase = natural_increase.loc[natural_increase['nTer'].isin(smaller_regions)]
#Renaming and dropping columns
natural_increase = natural_increase.drop(['idindikator', 'mes', 'IDVrPod', 'IDTer'], axis=1)
natural_increase = natural_increase.rename(columns={'god':'Year', 'nTer':'Region', 'nVrPod':'EditMe1', 'vrednost':'Amount'})
#Debug Regions
#dummie = natural_increase.loc[natural_increase['Year']==2011]
#dummie = dummie.loc[dummie['EditMe1']=='Број становника средином године']
#dummie['Amount'][0]-dummie['Amount'][1:].sum()
#Dropping and Sorting Rows
natural_increase = natural_increase.loc[natural_increase['EditMe1']=='Природни прираштај']
natural_increase = natural_increase.drop('EditMe1', axis=1)
natural_increase = natural_increase.sort_values(['Year', 'Region']).reset_index(drop=True)
#Net Internal Imigrations
internal_migrations = pd.read_csv('06internal_migrations.csv', sep=';')
#Translating
for i in genders:
internal_migrations.loc[internal_migrations['nPol']==i[0], 'nPol'] = i[1]
for i in regions:
internal_migrations.loc[internal_migrations['nTer']==i[0], 'nTer'] = i[1]
#Dropping Smaller Regions
internal_migrations = internal_migrations.loc[internal_migrations['nTer'].isin(smaller_regions)]
#Renaming and dropping columns
internal_migrations = internal_migrations.drop(['idindikator', 'mes', 'IDVrPod', 'IDTer', 'IDStarGrupa', 'IDPol'], axis=1)
internal_migrations = internal_migrations.rename(columns={'god':'Year', 'nTer':'Region', 'nVrPod':'EditMe1', 'vrednost':'Amount', 'nPol':'Gender', 'nStarGrupa':'AgeGroup'})
#Rows
internal_migrations = internal_migrations.loc[internal_migrations['EditMe1']=='Миграциони салдо']
internal_migrations = internal_migrations.loc[internal_migrations['AgeGroup']=='Укупно']
internal_migrations = internal_migrations.drop(['EditMe1', 'AgeGroup'], axis=1)
internal_migrations = internal_migrations.reset_index(drop=True)
datasets.append([population, 'population'])
datasets.append([life_expectancy, 'life_expectancy'])
datasets.append([natural_increase, 'natural_increase'])
datasets.append([internal_migrations, 'internal_migrations'])
###Output
_____no_output_____
###Markdown
7. Energetics
###Code
energetics = pd.read_csv('07energetics.csv', sep=';')
#Dropping Redundant Rows
useful_rows = ['Примарна производња енергије', 'Увоз', 'Извоз', 'Салдо залиха',
np.nan, 'РЕПУБЛИКА СРБИЈА', 'Укупно расположива енергија',
'Статистичка разлика', 'Утрошак за производњу енергије',
'Производња енергије трансформацијом', 'Размена', 'Размењени производи',
'Интерна размена производа', 'Враћено из петрохемије',
'Сопствена потрошња у енергетском сектору', 'Губици',
'Енергија расположива за финалну потрошњу',
'Финална потрошња за неенергетске сврхе',]
energetics = energetics.loc[energetics['nTokovi'].isin(useful_rows)]
#Sources to Drop Check
#for i in energetics['EnergySources'].unique():
# curr_data = energetics.loc[energetics['EnergySources']==i]
# if curr_data['Amount'].sum()==0:
# print('Energy Source:\n', i, curr_data['Amount'].sum(), '\n')
sources_to_drop = ['Соларна фотонапонска енергија, GWh', 'Енергија ветра, GWh',
'Хидро енергија, GWh']
energetics = energetics.loc[~energetics['nEnergenti'].isin(sources_to_drop)]
energetics = energetics.loc[energetics['nTer']=='РЕПУБЛИКА СРБИЈА']
#Renaming and dropping columns
energetics = energetics.drop(['idindikator', 'IDTokovi', 'IDTer', 'nTer', 'mes', 'IDEnergenti'], axis=1)
energetics = energetics.rename(columns={'nTokovi':'Use', 'god':'Year',
'nEnergenti':'EnergySources', 'vrednost':'Amount'})
#Nan Imputing
energetics = energetics.fillna(0)
#Translating
use = [['Примарна производња енергије', 'Primary Energy Production'], ['Увоз', 'Import'],
['Извоз', 'Export'], ['Салдо залиха', 'Inventory Balance'],
['Укупно расположива енергија', 'Total Energy Available'],
['Статистичка разлика', 'Statistical Difference'],
['Утрошак за производњу енергије', 'Energy Production Cost'],
['Производња енергије трансформацијом', 'Generation of Energy by Transformation'],
['Размена', 'Exchange'], ['Размењени производи', 'Exchanged Products'],
['Интерна размена производа', 'Internal Product Exchange'],
['Враћено из петрохемије', 'Returned from Petrochemistry'],
['Сопствена потрошња у енергетском сектору', 'Own Consumption in Energy Sector'],
['Губици', 'Losses'],
['Енергија расположива за финалну потрошњу', 'Energy Available for Final Consumption'],
['Финална потрошња за неенергетске сврхе', 'Final Consumption for Non-energy Purposes']]
for i in use:
energetics.loc[energetics['Use']==i[0], 'Use'] = i[1]
energy_sources = [['Електрична енергија (укупно), GWh', 'Total Electrical Energy (GWh)'],
['Антрацит, t', 'Anthracite (t)'],
['Остали битуменозни угаљ, t', 'Other Bituminous Coal (t)'],
['Суб-битуменозни угаљ, мрки угаљ и лигнит, t', 'Sub-bituminous Coal, Brown Coal and Lignite (t)'],
['Брикет каменог угља, t', 'Coal Briquette (t)'],
['Брикет мрког угља и лигнита, сушени лигнит, t', 'Briquette of Brown Coal and Lignite, Dried Lignite (t)'],
['Катран од угља, t', 'Coal Tar (t)'], ['Кокс, t', 'Coke (t)'],
['Високо-пећни гас, 000 m3', 'Blast Furnace Gas (thousands of m3)'],
['Сирова нафта, t', 'Crude Oil (t)'], ['Течности природног гаса, t', 'Natural Gas Liquids (t)'],
[' Рафинисана основна сировина, t', 'Refined Raw Material (t)'], ['Адитиви, t', 'Additives (t)'],
['Остали угљоводоници, t', 'Other Hydrocarbons (t)'], ['Рафинеријски гас, t', 'Refinery Gas (t)'],
['Течни нафтни гас, t', 'LPG (t)'], ['Нафта, t', 'Oil (t)'],
['Безоловни моторни бензин, t', 'Unleaded Motor Gasoline (t)'],
['Оловни бензин, t', 'Lead Gasoline (t)'], ['Авионски бензин, t', 'Plane Gasoline (t)'],
['Гориво за млазне моторе керозинског типа, t', 'Kerosene Type Jet Engine Fuel (t)'],
['Остали керозин, t', 'Other Kerosene (t)'],
['Дизел, t', 'Diesel (t)'], ['Гориво за ложење и остала гасна уља, t', 'Fuel Oil and Other Gas Oils (t)'],
['Уље за ложење (мазут) S≥1%, t', 'Mazut S≥1% (t)'],
['Уље за ложење (мазут) S<1%, t', 'Mazut S<1% (t)'],
['Специјални бензини, t', 'Special Gasoline (t)'], ['Мазива, t', 'Lubricants (t)'],
['Битумен, t', 'Bitumen (t)'],
['Парафински восак, t', 'Paraffin wax (t)'], ['Нафтни кокс, t', 'Petroleum Coke (t)'],
['Остали деривати нафте, t', 'Other Petroleum Products (t)'], ['Природни гас, 000 Stm3', 'Natural Gas (thousands of m3)'],
['Огревно дрво, t', 'Firewood (t)'], ['Дрвни остатак и дрвна сечка, t', 'Wood Residue and Wood Chips (t)'],
['Дрвни брикети, t', 'Wood Briquettes (t)'], ['Дрвни пелети, t', 'Wood Pellets (t)'],
['Дрвени угаљ, t', 'Charcoal (t)'], ['Биогас, 000 m3', 'Biogas (thousands of m3)']]
for i in energy_sources:
energetics.loc[energetics['EnergySources']==i[0], 'EnergySources'] = i[1]
datasets.append([energetics, 'energetics'])
###Output
_____no_output_____
###Markdown
8. Use of Information and Communication Technologies
###Code
#Percent of Presence of Certain Devices within Households
household_devices = pd.read_csv('08devices_present_in_households.csv', sep=';')
#Renaming and dropping columns
household_devices = household_devices.drop(['idindikator', 'nTer', 'IDTer', 'IDVrUredjaja', 'mes'], axis=1)
household_devices = household_devices.rename(columns={'god':'Year', 'nVrUredjaja':'Device',
'vrednost':'PctPresence'})
#Translating
devices = [['ТВ', 'TV'], ['Мобилни телефон', 'Mobile Phone'], ['Лаптоп', 'Laptop'],
['Персонални рачунар (PC)', 'PC'], ['Кабловска ТВ', 'Cable TV']]
for i in devices:
household_devices.loc[household_devices['Device']==i[0], 'Device'] = i[1]
#Sorting and Imputing
household_devices = household_devices.sort_values(['Year', 'Device']).reset_index(drop=True)
household_devices = household_devices.fillna(0)
#Percent of Enterprises with a Website by Region
enterp_website = pd.read_csv('08enterprises_with_website.csv', sep=';')
#Renaming and dropping columns
enterp_website = enterp_website.drop(['idindikator', 'IDTer', 'mes'], axis=1)
enterp_website = enterp_website.rename(columns={'god':'Year', 'nTer':'Region',
'vrednost':'PctPresence'})
#Translating
for i in regions:
enterp_website.loc[enterp_website['Region']==i[0], 'Region'] = i[1]
#Sorting and Imputing
enterp_website = enterp_website.sort_values(['Region', 'Year']).reset_index(drop=True)
#Internet Use Frequency by Individuals
internet_use_freq = pd.read_csv('08individual_internet_use_freq.csv', sep=';')
#Renaming and dropping columns
internet_use_freq = internet_use_freq.drop(['idindikator', 'IDTer', 'nTer', 'mes', 'IDUpotrebaIKT'], axis=1)
internet_use_freq = internet_use_freq.rename(columns={'god':'Year', 'nUpotrebaIKT':'FreqChoice',
'vrednost':'PctPresence'})
#Translating
choices = [['Никада није користио/користила', 'Never Used'],
['У последња 3 месеца', '< 3 Months'],
['Пре више од 3 месеца (мање од 1 године)', '> 3 Months, < 1 Year'],
['Пре више од годину дана', '> 1 Year']]
for i in choices:
internet_use_freq.loc[internet_use_freq['FreqChoice']==i[0], 'FreqChoice'] = i[1]
#Sorting and Imputing
internet_use_freq = internet_use_freq.sort_values(['Year', 'FreqChoice']).reset_index(drop=True)
datasets.append([household_devices, 'household_devices'])
datasets.append([enterp_website, 'enterp_website'])
datasets.append([internet_use_freq, 'internet_use_freq'])
###Output
_____no_output_____
###Markdown
9. Research & Development
###Code
#R&D Percent in GDP
rd_pct_gdp = pd.read_csv('09r&d_pct_gdp.csv', sep=';')
#Renaming and dropping columns
rd_pct_gdp = rd_pct_gdp.drop(['idindikator', 'IDTer', 'nTer', 'mes'], axis=1)
rd_pct_gdp = rd_pct_gdp.rename(columns={'god':'Year', 'vrednost':'GDPPct'})
#Generated Funds Amount
rd_generated_funds = pd.read_csv('09r&d_generated_funds.csv', sep=';')
#Renaming and dropping columns
rd_generated_funds = rd_generated_funds.drop(['idindikator', 'IDTer', 'IDSredstva', 'mes'], axis=1)
rd_generated_funds = rd_generated_funds.rename(columns={'god':'Year', 'vrednost':'Amount(RSD)',
'nSredstva':'FundSource', 'nTer':'Region'})
#Translating
for i in regions:
rd_generated_funds.loc[rd_generated_funds['Region']==i[0], 'Region'] = i[1]
funds = [['Укупна средства', 'Total Funds'],
['Средства обезбеђена из буџета', 'Funds from Budget']]
for i in funds:
rd_generated_funds.loc[rd_generated_funds['FundSource']==i[0], 'FundSource'] = i[1]
#Dropping Rows
rd_generated_funds = rd_generated_funds.loc[rd_generated_funds['Region'].isin(smaller_regions)]
rd_generated_funds = rd_generated_funds.reset_index(drop=True)
#Imputation
rd_generated_funds = rd_generated_funds.fillna(0)
datasets.append([rd_pct_gdp, 'rd_pct_gdp'])
datasets.append([rd_generated_funds, 'rd_generated_funds'])
###Output
_____no_output_____
###Markdown
10. Education (HS)
###Code
high_school = pd.read_csv('10high_school_graduated.csv', sep=';')
high_school = high_school.loc[high_school['nTer']=='РЕПУБЛИКА СРБИЈА']
#Renaming and dropping columns
high_school = high_school.drop(['idindikator', 'IDTer', 'nTer', 'IDPodrucjaRada',
'IDPol'], axis=1)
high_school = high_school.rename(columns={'IDSkolskaGodina':'SchoolYear', 'nPol':'Gender',
'vrednost':'Students', 'nPodrucjaRada':'Programme'})
#Translating
programmes = [['УКУПНО', 'Total'], ['ГИМНАЗИЈА', 'General-education HS'],
['ПОЉОПРИВРЕДА, ПРОИЗВОДЊА И ПРЕРАДА ХРАНЕ', 'Agriculture and Food Production'],
['ШУМАРСТВО И ОБРАДА ДРВЕТА', 'Forestry and Wood Processing'],
['ГЕОЛОГИЈА, РУДАРСТВО И МЕТАЛУРГИЈА', 'Geology, Mining and Metallurgy'],
['МАШИНСТВО И ОБРАДА МЕТАЛА', 'Machinery and Metal Processing'],
['ЕЛЕКТРОТЕХНИКА', 'Electrical Engineering'],
['ХЕМИЈА, НЕМЕТАЛИ И ГРАФИЧАРСТВО', 'Chemistry, Non-metals and Graphic Arts'],
['ТЕКСТИЛСТВО И КОЖАРСТВО', 'Textile and Leather Processing'],
['ГЕОДЕЗИЈА И ГРАЂЕВИНАРСТВО', 'Geodesy and Construction'],
['САОБРАЋАЈ', 'Transportation'],
['ТРГОВИНА, УГОСТИТЕЉСТВО И ТУРИЗАМ', 'Trade, Hotels and Tourism'],
['ЕКОНОМИЈА, ПРАВО И АДМИНИСТРАЦИЈА', 'Economy, Law and Administration'],
['ХИДРОМЕТЕОРОЛОГИЈА', 'Hydrometeorology'],
['КУЛТУРА, УМЕТНОСТ И ЈАВНО ИНФОРМИСАЊЕ', 'Culture, Arts and Public Information'],
['ЗДРАВСТВО И СОЦИЈАЛНА ЗАШТИТА', 'Health and Social Welfare'],
['ОСТАЛО (ЛИЧНЕ УСЛУГЕ)', 'Other (Personal Services)'],
['ВОЈНЕ ШКОЛЕ', 'Military School']]
for i in programmes:
high_school.loc[high_school['Programme']==i[0], 'Programme'] = i[1]
for i in genders:
high_school.loc[high_school['Gender']==i[0], 'Gender'] = i[1]
datasets.append([high_school, 'high_school'])
###Output
_____no_output_____
###Markdown
11. Traffic, Transport and Telecommunications
###Code
passengers_and_goods = pd.read_csv('11passengers_goods_trans.csv', sep=';')
#Renaming and dropping columns
passengers_and_goods = passengers_and_goods.drop(['idindikator', 'IDTer', 'nTer',
'IDVrPod', 'mes'], axis=1)
passengers_and_goods = passengers_and_goods.rename(columns={'nVrPod':'Metric', 'god':'Year',
'vrednost':'Value'})
#Translating
metrics = [['број превезених путника, хиљ.', 'Passengers Carried (Thousands)'],
['путнички километри, мил.', 'Passenger Kilometers (Millions)'],
['превезена роба, t', 'Goods Transported (Tons)'],
['тонски километри, мил.', 'Ton Kilometers (Millions)']]
for i in metrics:
passengers_and_goods.loc[passengers_and_goods['Metric']==i[0], 'Metric'] = i[1]
road_traffic = pd.read_csv('11road_traffic.csv', sep=';')
#Renaming and dropping columns
road_traffic = road_traffic.drop(['idindikator', 'IDVrPod', 'mes'], axis=1)
road_traffic = road_traffic.rename(columns={'nVrPod':'VehicleType', 'god':'Year',
'vrednost':'Amount'})
#Translating
vehicles = [['Мотоцикли', 'Motorcycles'], ['Путнички аутомобили', 'Passenger Cars'],
['Специјална путничка возила', 'Special Passenger Vehicles'], ['Аутобуси', 'Buses'],
['Теретна возила', 'Trucks'],
['Специјална теретна возила', 'Special Lorries'],
['Радна возила', 'Working Purpose Vehicles'], ['Вучна возила', 'Towing Vehicles'],
['Прикључна возила', 'Trailers']]
for i in vehicles:
road_traffic.loc[road_traffic['VehicleType']==i[0], 'VehicleType'] = i[1]
#Vehicle Groups with too Little Information
drop_vehicles = ['Special Passenger Vehicles', 'Special Lorries']
road_traffic = road_traffic[~road_traffic['VehicleType'].isin(drop_vehicles)]
road_traffic = road_traffic.reset_index(drop=True)
air_traffic = pd.read_csv('11air_traffic.csv', sep=';')
air_traffic = air_traffic.loc[air_traffic['nVrPod']!='Деонички летови'].reset_index(drop=True)
#Renaming and dropping columns
air_traffic = air_traffic.drop(['idindikator', 'IDVrPod', 'mes'], axis=1)
air_traffic = air_traffic.rename(columns={'nVrPod':'Metric', 'god':'Year', 'vrednost':'Amount'})
#Translating
air_metrics = [['Авио km, хиљ.', 'Air Kilometers (Thousands)'],
['Часови летења', 'Flight Hours'],
['Превезени путници, хиљ.', 'Passengers Carried (Thousands)'],
['Путнички километри, мил.', 'Passenger Kilometers (Millions)'],
['Превезени терет, t', 'Freight (Tons)'], ['Запослени', 'Employees'],
['Тонски километри терета, хиљ.', 'Freight Tonne-kilometers (Thousands)'],
['Утрошак горива, t', 'Fuel Consumption (Tones)']]
for i in air_metrics:
air_traffic.loc[air_traffic['Metric']==i[0], 'Metric'] = i[1]
#Imputing Fuel Consumption
#Plot
#import matplotlib.pyplot as plt
hours = air_traffic.loc[air_traffic['Metric']=='Flight Hours']['Amount'][1:]
fuel = air_traffic.loc[air_traffic['Metric']=='Fuel Consumption (Tones)']['Amount'][1:]
#plt.scatter(x=hours, y=fuel)
#plt.xlabel('Flight Hours')
#plt.ylabel('Fuel Consumption (Tones)')
#plt.show()
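#Imputation strategy: fit a simple linear regression of fuel consumption on flight hours
#using the rows where fuel is known, predict a value for the record whose fuel figure is
#missing, and use that prediction as the fill value passed to fillna below.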
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(hours.values.reshape(-1, 1), fuel.values.reshape(-1, 1))
fill_value = lr.predict(air_traffic.iloc[1, 2].reshape(1, -1))[0][0]
air_traffic = air_traffic.fillna(fill_value)
datasets.append([passengers_and_goods, 'passengers_and_goods'])
datasets.append([road_traffic, 'road_traffic'])
datasets.append([air_traffic, 'air_traffic'])
###Output
_____no_output_____
###Markdown
12. Job Market
###Code
average_salaries = pd.read_json('12average_salaries.json')
average_salaries = average_salaries.loc[(average_salaries['mes']==0)&(average_salaries['nVrPod']=='нето зараде')&(average_salaries['nTer']=='РЕПУБЛИКА СРБИЈА')].reset_index(drop=True)
#Renaming and dropping columns
average_salaries = average_salaries.drop(['idindikator', 'IDVrPod', 'mes', 'nVrPod', 'IDTer',
'nTer', 'IDKD08'], axis=1)
average_salaries = average_salaries.rename(columns={'nkd08':'Sector', 'god':'Year',
'vrednost':'Amount(RSD)'})
#Translating
sectors = [['Укупно', 'Total'],
['A - Пољопривреда, шумарство и рибарство', 'Agriculture, Forestry and Fishing'],
['B - Рударство', 'Mining'],
['C - Прерађивачка индустрија', 'Manufacturing Industry'],
['D - Снабдевање електричном енергијом, гасом, паром и климатизација', 'Electricity, Gas, Steam and Air Conditioning Supply'],
['E - Снабдевање водом; управљање отпадним водама, контролисање процеса уклањања отпада и сличне активности', 'Waterworks'],
['F - Грађевинарство', 'Construction'],
['G - Трговина на велико и трговина на мало; поправка моторних возила и мотоцикала', 'Wholesale and Retail Trade, Repair of Vehicles'],
['H - Саобраћај и складиштење', 'Transport, Traffic and Storage'],
['I - Услуге смештаја и исхране', 'Accommodation and Catering Services'],
['J - Информисање и комуникације', 'Information and Communications'],
['K - Финансијске делатности и делатност осигурања', 'Financial and Insurance Activities'],
['L - Пoсловање некретнинама', 'Real Estate Business'],
['M - Стручне, научне и техничке делатности', 'Professional, Scientific and Technical Activities'],
['N - Административне и помоћне услужне делатности', 'Administrative and Support Services'],
['O - Државна управа и одбрана; обавезно социјално осигурање', 'State Administration and Defence; Compulsory Social Security'],
['P - Образовање', 'Education'],
['Q - Здравствена и социјална заштита', 'Health and Social Care'],
['R - Уметност; забава и рекреација', 'Art; Entertainment and Recreation'],
['S - Остале услужне делатности', 'Other Services']]
for i in sectors:
average_salaries.loc[average_salaries['Sector']==i[0], 'Sector'] = i[1]
#NEET rate - 15-29
NEET_rate = pd.read_csv('12NEET_rate.csv', sep=';')
NEET_rate = NEET_rate.loc[(NEET_rate['nStarGrupa']=='15-29')&(NEET_rate['nPol']=='Укупно')]
NEET_rate = NEET_rate.reset_index(drop=True)
#Renaming and dropping columns
NEET_rate = NEET_rate.drop(['idindikator', 'mes', 'IDTer', 'nTer', 'IDStarGrupa',
'IDPol', 'nPol', 'nStarGrupa'], axis=1)
NEET_rate = NEET_rate.rename(columns={'god':'Year', 'vrednost':'PctUnemployed'})
#Unemployment Rate
unemployed = pd.read_csv('12unemployed.csv', sep=';')
unemployed = unemployed.loc[(unemployed['nTer']=='РЕПУБЛИКА СРБИЈА')&(unemployed['nPol']=='Укупно')&(unemployed['nStarGrupa']=='Лица радног узраста (15-64)')].reset_index(drop=True)
#Renaming and dropping columns
unemployed = unemployed.drop(['idindikator', 'IDTer', 'nTer', 'IDStarGrupa',
'IDPol', 'nPol', 'nStarGrupa'], axis=1)
unemployed = unemployed.rename(columns={'god':'Year', 'vrednost':'PctUnemployed', 'mes':'Q'})
#Translating
for i in qs:
unemployed.loc[unemployed['Q']==i[0], 'Q'] = i[1]
#Sorting
unemployed = unemployed.sort_values(['Year', 'Q']).reset_index(drop=True)
datasets.append([average_salaries, 'average_salaries'])
datasets.append([NEET_rate, 'NEET_rate'])
datasets.append([unemployed, 'unemployed'])
###Output
_____no_output_____
###Markdown
13. Structural Business Statistics
###Code
#Number of Enterprises
enterprises = pd.read_json('13number_of_enterprises.json')
enterprises = enterprises.loc[(enterprises['nTer']=='РЕПУБЛИКА СРБИЈА')&(enterprises['VelP']=='УКУПНО')].reset_index(drop=True)
#Renaming and dropping columns
enterprises = enterprises.drop(['idindikator', 'IDTer', 'nTer', 'IDKD08',
'IDNivoVelP', 'VelP', 'mes'], axis=1)
enterprises = enterprises.rename(columns={'god':'Year', 'vrednost':'Amount', 'nkd08':'Activity'})
#Dropping Rows
relevant_sectors = np.append(enterprises['Activity'].unique()[-14:-2], [(enterprises['Activity'].unique()[-1]), 'Укупно'])
enterprises = enterprises.loc[enterprises['Activity'].isin(relevant_sectors)].reset_index(drop=True)
#Translating
for i in sectors:
enterprises.loc[enterprises['Activity']==i[0], 'Activity'] = i[1]
#Number of Employees
employees = pd.read_json('13number_of_employees.json')
employees = employees.loc[(employees['nTer']=='РЕПУБЛИКА СРБИЈА')&(employees['VelP']=='УКУПНО')].reset_index(drop=True)
#Renaming and dropping columns
employees = employees.drop(['idindikator', 'IDTer', 'nTer', 'IDKD08',
'IDNivoVelP', 'VelP', 'mes'], axis=1)
employees = employees.rename(columns={'god':'Year', 'vrednost':'Amount', 'nkd08':'Activity'})
#Dropping Rows
employees = employees.loc[employees['Activity'].isin(relevant_sectors)].reset_index(drop=True)
#Translating
for i in sectors:
employees.loc[employees['Activity']==i[0], 'Activity'] = i[1]
#Total Added Value by Enterprizes
added_value = pd.read_json('13added_value.json')
added_value = added_value.loc[(added_value['nTer']=='РЕПУБЛИКА СРБИЈА')&(added_value['VelP']=='УКУПНО')].reset_index(drop=True)
#Renaming and dropping columns
added_value = added_value.drop(['idindikator', 'IDTer', 'nTer', 'IDKD08',
'IDNivoVelP', 'VelP', 'mes'], axis=1)
added_value = added_value.rename(columns={'god':'Year', 'vrednost':'Amount(mRSD)', 'nkd08':'Activity'})
#Dropping Rows
added_value = added_value.loc[added_value['Activity'].isin(relevant_sectors)].reset_index(drop=True)
#Translating
for i in sectors:
added_value.loc[added_value['Activity']==i[0], 'Activity'] = i[1]
#Personnel Costs
costs = pd.read_json('13costs.json')
costs = costs.loc[(costs['nTer']=='РЕПУБЛИКА СРБИЈА')&(costs['VelP']=='УКУПНО')].reset_index(drop=True)
#Renaming and dropping columns
costs = costs.drop(['idindikator', 'IDTer', 'nTer', 'IDKD08',
'IDNivoVelP', 'VelP', 'mes'], axis=1)
costs = costs.rename(columns={'god':'Year', 'vrednost':'Amount(mRSD)', 'nkd08':'Activity'})
#Dropping Rows
costs = costs.loc[costs['Activity'].isin(relevant_sectors)].reset_index(drop=True)
#Translating
for i in sectors:
costs.loc[costs['Activity']==i[0], 'Activity'] = i[1]
#Concatenating Costs and Added Value
value_added = pd.merge(added_value, costs, on=['Year', 'Activity'],
suffixes=('ValueAdded', 'PersonnelCost'))
value_added['NetValueAdded(mRSD)'] = value_added['Amount(mRSD)ValueAdded'] - value_added['Amount(mRSD)PersonnelCost']
#Per Employee Average Added Value
pea_added_value = pd.read_json('13per_employee_average_added_value.json')
#Renaming and dropping columns
pea_added_value = pea_added_value.drop(['idindikator', 'IDTer', 'nTer', 'IDKD08', 'mes'], axis=1)
pea_added_value = pea_added_value.rename(columns={'god':'Year', 'vrednost':'Amount(tRSD)', 'nkd08':'Activity'})
#Translating
for i in sectors:
pea_added_value.loc[pea_added_value['Activity']==i[0], 'Activity'] = i[1]
#Per Employee Average Cost
pea_cost = pd.read_json('13per_employee_average_cost.json')
#Renaming and dropping columns
pea_cost = pea_cost.drop(['idindikator', 'IDTer', 'nTer', 'IDKD08', 'mes'], axis=1)
pea_cost = pea_cost.rename(columns={'god':'Year', 'vrednost':'Amount(tRSD)', 'nkd08':'Activity'})
#Translating
for i in sectors:
pea_cost.loc[pea_cost['Activity']==i[0], 'Activity'] = i[1]
#Concatenating Per Employee Added Value and Cost
pea_value_added = pd.merge(pea_added_value, pea_cost, on=['Year', 'Activity'],
suffixes=('ValueAdded', 'PersonnelCost'))
pea_value_added['NetValueAdded(tRSD)'] = pea_value_added['Amount(tRSD)ValueAdded'] - pea_value_added['Amount(tRSD)PersonnelCost']
datasets.append([enterprises, 'enterprises'])
datasets.append([employees, 'employees'])
datasets.append([value_added, 'value_added'])
datasets.append([pea_value_added, 'pea_value_added'])
###Output
_____no_output_____
###Markdown
Exporting
###Code
os.chdir('C:\\Users\\vucin\\Desktop\\The problem with Serbian economy\\2.EDA')
for i in datasets:
i[0].to_csv(i[1]+'.csv', index=False)
###Output
_____no_output_____ |
1.6 R Programming/de-DE/1.6.30 R - While-Loops.ipynb | ###Markdown
Day 1. Chapter 6. R Programming Lesson 30. While loops`while` loops are a way to let our program keep running as long as a certain condition is met (i.e. evaluates to TRUE). The syntax is: while (condition){ execute this code as long as the condition is met } One of the most important aspects of working with while loops is making sure that the condition really does stop being met at some point. Otherwise we would end up in an infinite loop. Hence the **note**: in RStudio we can stop processes with Ctrl + C (Cmd + C on Mac). First, a short digression on printing variables together with strings:
###Code
print('Nur ein String')
var <- 'Eine Variable'
cat('Meine Variable ist:',var)
var <- 25
cat('Meine Nummer ist:',var)
# Alternative way
print(paste0("Variable ist: ", var))
###Output
[1] "Variable ist: 25"
###Markdown
In the following example we will use the `cat` function and write a first while loop:
###Code
patNr <- 0
patNrMax <-20
while(patNr < patNrMax){
cat('Pateinr-Nr. ist aktuell:',patNr)
print(' patNr liegt immer noch unter 10, füge 1 zu patNr hinzu')
    # increase patNr by one
patNr <- patNr+1
}
help(cat)
###Output
_____no_output_____
###Markdown
Let's extend this logic with an if statement:
###Code
x <- 0
while(x < 10){
cat('x ist aktuell:',x)
print(' x liegt immer noch unter 10, füge 1 zu x hinzu')
    # increase x by one
x <- x+1
if(x==10){
print("x ist jetzt 10! Schleife beenden")
}
}
###Output
x ist aktuell: 0[1] " x liegt immer noch unter 10, füge 1 zu x hinzu"
x ist aktuell: 1[1] " x liegt immer noch unter 10, füge 1 zu x hinzu"
x ist aktuell: 2[1] " x liegt immer noch unter 10, füge 1 zu x hinzu"
x ist aktuell: 3[1] " x liegt immer noch unter 10, füge 1 zu x hinzu"
x ist aktuell: 4[1] " x liegt immer noch unter 10, füge 1 zu x hinzu"
x ist aktuell: 5[1] " x liegt immer noch unter 10, füge 1 zu x hinzu"
x ist aktuell: 6[1] " x liegt immer noch unter 10, füge 1 zu x hinzu"
x ist aktuell: 7[1] " x liegt immer noch unter 10, füge 1 zu x hinzu"
x ist aktuell: 8[1] " x liegt immer noch unter 10, füge 1 zu x hinzu"
x ist aktuell: 9[1] " x liegt immer noch unter 10, füge 1 zu x hinzu"
[1] "x ist jetzt 10! Schleife beenden"
###Markdown
breakWe can use `break` to break out of a loop. Previously we used an if statement to check whether x had already reached the value 10, but that did not actually end the loop; the condition of the while loop simply was no longer met on the next pass. Let's now look at an example in which we use `break` to actually end the loop:
###Code
x <- 0
while(x < 10){
cat('x ist aktuell:',x)
print(' x liegt immer noch unter 10, füge 1 zu x hinzu')
    # increase x by one
x <- x+1
if(x==10){
print("x ist jetzt 10!")
print("Yeah, ich werde auch ausgegeben!")
}
}
x <- 0
while(x < 10){
# ****** BEGIN LOOP *****
x <- x+1
cat('x ist aktuell:',x)
print(' x liegt immer noch unter 10, füge 1 zu x hinzu')
    # increase x by one
if(x==5){
print("x ist jetzt 5!")
break
print("Yeah, ich werde auch ausgegeben!")
}
# ****** END LOOP *****
}
###Output
x ist aktuell: 0[1] " x liegt immer noch unter 10, füge 1 zu x hinzu"
x ist aktuell: 1[1] " x liegt immer noch unter 10, füge 1 zu x hinzu"
x ist aktuell: 2[1] " x liegt immer noch unter 10, füge 1 zu x hinzu"
x ist aktuell: 3[1] " x liegt immer noch unter 10, füge 1 zu x hinzu"
x ist aktuell: 4[1] " x liegt immer noch unter 10, füge 1 zu x hinzu"
[1] "x ist jetzt 5!"
|
Sequence Models/9_Transformer_Pre-processing/Embedding_plus_Positional_encoding.ipynb | ###Markdown
Transformer Pre-processingWelcome to Week 4's first ungraded lab. In this notebook you will delve into the pre-processing methods you apply to raw text to before passing it to the encoder and decoder blocks of the transformer architecture. **After this assignment you'll be able to**:* Create visualizations to gain intuition on positional encodings* Visualize how positional encodings affect word embeddings Table of Contents- [Packages](0)- [1 - Positional Encoding](1) - [1.1 - Positional encoding visualizations](1-1) - [1.2 - Comparing positional encodings](1-2) - [2 - Semantic embedding](2) - [2.1 - Load pretrained embedding](2-1) - [2.2 - Visualization on a Cartesian plane](2-2)- [3 - Semantic and positional embedding](3) PackagesRun the following cell to load the packages you'll need.
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import os
from tensorflow.keras.layers import Embedding
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
###Output
_____no_output_____
###Markdown
1 - Positional EncodingHere are the positional encoding equations that you implemented in the previous assignment. This encoding uses the following formulas:$$PE_{(pos, 2i)}= sin\left(\frac{pos}{{10000}^{\frac{2i}{d}}}\right)$$$$PE_{(pos, 2i+1)}= cos\left(\frac{pos}{{10000}^{\frac{2i}{d}}}\right)$$It is a standard practice in natural language processing tasks to convert sentences into tokens before feeding texts into a language model. Each token is then converted into a numerical vector of fixed length called an embedding, which captures the meaning of the words. In the Transformer architecture, a positional encoding vector is added to the embedding to pass positional information throughout the model. The meaning of these vectors can be difficult to grasp solely by examining the numerical representations, but visualizations can help give some intuition as to the semantic and positional similarity of the words. As you've seen in previous assignments, when embeddings are reduced to two dimensions and plotted, semantically similar words appear closer together, while dissimilar words are plotted farther apart. A similar exercise can be performed with positional encoding vectors - words that are closer in a sentence should appear closer when plotted on a Cartesian plane, and when farther in a sentence, should appear farther on the plane. In this notebook, you will create a series of visualizations of word embeddings and positional encoding vectors to gain intuition into how positional encodings affect word embeddings and help transport sequential information through the Transformer architecture. 1.1 - Positional encoding visualizationsThe following code cell has the `positional_encoding` function which you implemented in the Transformer assignment. Nice work! You will build off that work to create some more visualizations with this function in this notebook.
###Code
def positional_encoding(positions, d):
"""
Precomputes a matrix with all the positional encodings
Arguments:
positions (int) -- Maximum number of positions to be encoded
d (int) -- Encoding size
Returns:
pos_encoding -- (1, position, d_model) A matrix with the positional encodings
"""
# initialize a matrix angle_rads of all the angles
angle_rads = np.arange(positions)[:, np.newaxis] / np.power(10000, (2 * (np.arange(d)[np.newaxis, :]//2)) / np.float32(d))
angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
pos_encoding = angle_rads[np.newaxis, ...]
return tf.cast(pos_encoding, dtype=tf.float32)
###Output
_____no_output_____
###Markdown
Define the embedding dimension as 100. This value must match the dimensionality of the word embedding. In the ["Attention is All You Need"](https://arxiv.org/abs/1706.03762) paper, embedding sizes range from 100 to 1024, depending on the task. The authors also use a maximum sequence length ranging from 40 to 512 depending on the task. Define the maximum sequence length to be 100, and the maximum number of words to be 64.
###Code
EMBEDDING_DIM = 100
MAX_SEQUENCE_LENGTH = 100
MAX_NB_WORDS = 64
pos_encoding = positional_encoding(MAX_SEQUENCE_LENGTH, EMBEDDING_DIM)
plt.pcolormesh(pos_encoding[0], cmap='RdBu')
plt.xlabel('d')
plt.xlim((0, EMBEDDING_DIM))
plt.ylabel('Position')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
You have already created this visualization in this assignment, but let us dive a little deeper. Notice some interesting properties of the matrix - the first is that the norm of each of the vectors is always a constant. No matter what the value of `pos` is, the norm will always be the same value, which in this case is 7.071068. From this property you can conclude that the dot product of two positional encoding vectors is not affected by the scale of the vector, which has important implications for correlation calculations.
###Code
pos = 34
tf.norm(pos_encoding[0,pos,:])
###Output
_____no_output_____
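###Markdown
As a quick check of the constant-norm property described above, here is a small sketch (it simply reuses the `pos_encoding` tensor defined earlier; the exact value depends on `EMBEDDING_DIM`):
###Code
# The norm should be (numerically) identical at every position
for pos in [0, 10, 50, 99]:
    print(pos, tf.norm(pos_encoding[0, pos, :]).numpy())
###Output
_____no_output_____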
###Markdown
Another interesting property is that the norm of the difference between 2 vectors separated by `k` positions is also constant. If you keep `k` constant and change `pos`, the difference will be of approximately the same value. This property is important because it demonstrates that the difference does not depend on the positions of each encoding, but rather the relative separation between encodings. Being able to express positional encodings as linear functions of one another can help the model to learn by focusing on the relative positions of words. This reflection of the difference in the positions of words with vector encodings is difficult to achieve, especially given that the values of the vector encodings must remain small enough so that they do not distort the word embeddings.
###Code
pos = 70
k = 2
print(tf.norm(pos_encoding[0,pos,:] - pos_encoding[0,pos + k,:]))
###Output
tf.Tensor(3.2668781, shape=(), dtype=float32)
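###Markdown
A small sketch (again reusing `pos_encoding`) that keeps `k` fixed and varies `pos`, to illustrate that the norm of the difference stays approximately constant:
###Code
# For a fixed offset k, the norm of the difference barely changes with pos
k = 2
for pos in [0, 20, 40, 60, 80]:
    print(pos, tf.norm(pos_encoding[0, pos, :] - pos_encoding[0, pos + k, :]).numpy())
###Output
_____no_output_____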
###Markdown
You have observed some interesting properties about the positional encoding vectors - next, you will create some visualizations to see how these properties affect the relationships between encodings and embeddings! 1.2 - Comparing positional encodings 1.2.1 - CorrelationThe positional encoding matrix help to visualize how each vector is unique for every position. However, it is still not clear how these vectors can represent the relative position of the words in a sentence. To illustrate this, you will calculate the correlation between pairs of vectors at every single position. A successful positional encoder will produce a perfectly symmetric matrix in which maximum values are located at the main diagonal - vectors in similar positions should have the highest correlation. Following the same logic, the correlation values should get smaller as they move away from the main diagonal.
###Code
# Positional encoding correlation
corr = tf.matmul(pos_encoding, pos_encoding, transpose_b=True).numpy()[0]
plt.pcolormesh(corr, cmap='RdBu')
plt.xlabel('Position')
plt.xlim((0, MAX_SEQUENCE_LENGTH))
plt.ylabel('Position')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
1.2.2 - Euclidean distanceYou can also use the euclidean distance instead of the correlation for comparing the positional encoding vectors. In this case, your visualization will display a matrix in which the main diagonal is 0, and its off-diagonal values increase as they move away from the main diagonal.
###Code
# Positional encoding euclidean distance
eu = np.zeros((MAX_SEQUENCE_LENGTH, MAX_SEQUENCE_LENGTH))
print(eu.shape)
for a in range(MAX_SEQUENCE_LENGTH):
for b in range(a + 1, MAX_SEQUENCE_LENGTH):
eu[a, b] = tf.norm(tf.math.subtract(pos_encoding[0, a], pos_encoding[0, b]))
eu[b, a] = eu[a, b]
plt.pcolormesh(eu, cmap='RdBu')
plt.xlabel('Position')
plt.xlim((0, MAX_SEQUENCE_LENGTH))
plt.ylabel('Position')
plt.colorbar()
plt.show()
###Output
(100, 100)
###Markdown
Nice work! You can use these visualizations as checks for any positional encodings you create. 2 - Semantic embeddingYou have gained insight into the relationship positional encoding vectors have with other vectors at different positions by creating correlation and distance matrices. Similarly, you can gain a stronger intuition as to how positional encodings affect word embeddings by visualizing the sum of these vectors. 2.1 - Load pretrained embeddingTo combine a pretrained word embedding with the positional encodings you created, start by loading one of the pretrained embeddings from the [glove](https://nlp.stanford.edu/projects/glove/) project. You will use the embedding with 100 features.
###Code
embeddings_index = {}
GLOVE_DIR = "glove"
f = open(os.path.join(GLOVE_DIR, 'glove.6B.100d.txt'))
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Found %s word vectors.' % len(embeddings_index))
print('d_model: %s', embeddings_index['hi'].shape)
###Output
Found 400000 word vectors.
d_model: %s (100,)
###Markdown
**Note:** This embedding is composed of 400,000 words and each word embedding has 100 features. Consider the following text that only contains two sentences. Wait a minute - these sentences have no meaning! Instead, the sentences are engineered such that:* Each sentence is composed of sets of words, which have some semantic similarities among each groups.* In the first sentence similar terms are consecutive, while in the second sentence, the order is random.
###Code
texts = ['king queen man woman dog wolf football basketball red green yellow',
'man queen yellow basketball green dog woman football king red wolf']
###Output
_____no_output_____
###Markdown
First, run the following code cell to apply the tokenization to the raw text. Don't worry too much about what this step does - it will be explained in detail in later ungraded labs. A quick summary (not crucial to understanding the lab):* If you feed an array of plain text of different sentence lengths, and it will produce a matrix with one row for each sentence, each of them represented by an array of size `MAX_SEQUENCE_LENGTH`.* Each value in this array represents each word of the sentence using its corresponding index in a dictionary(`word_index`). * The sequences shorter than the `MAX_SEQUENCE_LENGTH` are padded with zeros to create uniform length. Again, this is explained in detail in later ungraded labs, so don't worry about this too much right now!
###Code
tokenizer = Tokenizer(num_words=MAX_NB_WORDS)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
data = pad_sequences(sequences, padding='post', maxlen=MAX_SEQUENCE_LENGTH)
print(data.shape)
print(data)
###Output
Found 11 unique tokens.
(2, 100)
[[ 1 2 3 4 5 6 7 8 9 10 11 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0]
[ 3 2 11 8 10 5 4 7 1 9 6 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0]]
###Markdown
To simplify your model, you will only need to obtain the embeddings for the different words that appear in the text you are examining. In this case, you will filter out only the 11 words appearing in our sentences. The first vector will be an array of zeros and will codify all the unknown words.
###Code
embedding_matrix = np.zeros((len(word_index) + 1, EMBEDDING_DIM))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
print(embedding_matrix.shape)
###Output
(12, 100)
###Markdown
Create an embedding layer using the weights extracted from the pretrained glove embeddings.
###Code
embedding_layer = Embedding(len(word_index) + 1,
EMBEDDING_DIM,
embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
trainable=False)
###Output
_____no_output_____
###Markdown
Transform the input tokenized data to the embedding using the previous layer. Check the shape of the embedding to make sure the last dimension of this matrix contains the embeddings of the words in the sentence.
###Code
embedding = embedding_layer(data)
print(embedding.shape)
###Output
(2, 100, 100)
###Markdown
2.2 - Visualization on a Cartesian planeNow, you will create a function that allows you to visualize the encoding of our words in a Cartesian plane. You will use PCA to reduce the 100 features of the glove embedding to only 2 components.
###Code
from sklearn.decomposition import PCA
def plot_words(embedding, sequences, sentence):
pca = PCA(n_components=2)
X_pca_train = pca.fit_transform(embedding[sentence,0:len(sequences[sentence]),:])
fig, ax = plt.subplots(figsize=(12, 6))
plt.rcParams['font.size'] = '12'
ax.scatter(X_pca_train[:, 0], X_pca_train[:, 1])
words = list(word_index.keys())
for i, index in enumerate(sequences[sentence]):
ax.annotate(words[index-1], (X_pca_train[i, 0], X_pca_train[i, 1]))
###Output
_____no_output_____
###Markdown
Nice! Now you can plot the embedding of each of the sentences. Each plot should display the embeddings of the different words.
###Code
plot_words(embedding, sequences, 0)
###Output
_____no_output_____
###Markdown
Plot the word embeddings of the second sentence. Recall that the second sentence contains the same words as the first sentence, just in a different order. You can see that the order of the words does not affect the vector representations.
###Code
plot_words(embedding, sequences, 1)
###Output
_____no_output_____
###Markdown
3 - Semantic and positional embeddingNext, you will combine the original glove embedding with the positional encoding you calculated earlier. For this exercise, you will use a 1 to 1 weight ratio between the semantic and the positional embedding.
###Code
embedding2 = embedding * 1.0 + pos_encoding[:,:,:] * 1.0
plot_words(embedding2, sequences, 0)
plot_words(embedding2, sequences, 1)
###Output
_____no_output_____
###Markdown
Wow, look at the big difference between the plots! Both plots have changed drastically compared to their original counterparts. Notice that in the second image, which corresponds to the sentence in which similar words are not together, very dissimilar words such as `red` and `wolf` appear closer. Now you can try different relative weights and see how this strongly impacts the vector representation of the words in the sentence.
###Code
W1 = 1 # Change me
W2 = 10 # Change me
embedding2 = embedding * W1 + pos_encoding[:,:,:] * W2
plot_words(embedding2, sequences, 0)
plot_words(embedding2, sequences, 1)
# For reference
#['king queen man woman dog wolf football basketball red green yellow',
# 'man queen yellow basketball green dog woman football king red wolf']
###Output
_____no_output_____ |
thirteen_books_to_thirteen_dictionaries_of_references.ipynb | ###Markdown
Cleaning the text of Greek letters
###Code
for i in range(len(text_file_lines)):
text_file_lines[i] = text_file_lines[i].replace('?','')
text_file_lines[i] = text_file_lines[i].replace(' ','')
while '\n' in text_file_lines:
text_file_lines.remove('\n')
text_file_lines
len(text_file_lines)
###Output
_____no_output_____
###Markdown
Book separation
###Code
books_headings = ['ELEMENTS BOOK ' + str(x) + '\n' for x in range(1,14)]
headings_index = []
for i in range(len(text_file_lines)):
if text_file_lines[i] in books_headings:
headings_index.append(i)
book_glossary = {}
headings_index.append(-1)
for i in range(len(books_headings)):
book_glossary[books_headings[i]] = np.array(text_file_lines[headings_index[i]:headings_index[i+1]])
###Output
_____no_output_____
###Markdown
Cleaning one sample book
###Code
desired_book = book_glossary['ELEMENTS BOOK 10\n']
lines_mask = [False]*len(desired_book)
for i in range(len(desired_book)):
lines_mask[i] = '[P' in desired_book[i] or '[C' in desired_book[i] or 'Proposition' in desired_book[i]
desired_book = desired_book[lines_mask]
###Output
_____no_output_____
###Markdown
Propositions separation into dictionary
###Code
props_num = 1
props_index = []
for i in range(len(desired_book)):
if 'Proposition' in desired_book[i]:
props_index.append(i)
desired_book[i] = 'Proposition '+ str(props_num)
props_num+=1
desired_book_propos_dict = {}
props_titles = [ '[Prop 10.{}]'.format(i) for i in range(1,len(props_index)+1) ]
desired_book_propos_dict = { x:[] for x in props_titles }
props_index.append(-1)
for i in range(len(props_index)-1):
desired_book_propos_dict[props_titles[i]] = desired_book[ props_index[i]+1:props_index[i+1] ]
# props_index
desired_book_propos_dict
###Output
_____no_output_____
###Markdown
Each proposition cleaning
###Code
# %%writefile functions.py
def has_ref(string):
"""
    Checks whether the string contains a reference.
"""
return 'Prop.' in string or 'C.N.' in string or 'Post.' in string or 'Def.' in string
def remove_refs_detail(refs_list):
"""
    Deletes lemma and corollary suffixes in reference names,
like: '[Prop. 5.19 corr.]'
"""
for i in range(len(refs_list)):
sec_space_index = refs_list[i].replace(' ', '#', 1).find(' ')
refs_list[i] = refs_list[i][:sec_space_index] + ']'
return refs_list
def std_ref_form(ref_string):
"""
input: '[ ... ]' kind string
Deletes unnecessary chars from the string.
    Separates combined references.
    Returns the references in a list.
"""
if ',' not in ref_string: #Single ref
refs_list = [ref_string]
else: #twice refs.
comma_index = ref_string.find(',')
if has_ref(ref_string[comma_index:]):
#seperate refs
ref_string = ref_string.replace(', ',']#[')
refs_list = ref_string.split('#')
else:
#copy the ref name
first_point = ref_string.find('.')
second_ref = ref_string[:first_point+1] + ref_string[comma_index+1:]
first_ref = ref_string[:comma_index]+']'
refs_list = [first_ref, second_ref]
# if ' corr.' in ref_string:
# ref_string = ref_string.replace(' corr.','')
# while ',' in ref_string:
# comma_index = ref_string.find(', ')
# if ref_string[comma_index+2].isnumeric(): #combined referenced with same type [Prop. 10.4, 10.6]
# space_index = ref_string.find(' ')
# if has_ref(ref_string[:space_index+1]):
# ref_string = ref_string.replace(', ',','+ ref_string[1:space_index])
# ref_string = ref_string.replace(',',']#[')
# ref_string = ref_string.replace(', ',']#[')
# refs_list = ref_string.split('#')
refs_list = remove_refs_detail(refs_list)
return refs_list
def correct_left_unclosed_bras_inline(line):
"""
corrects the unclosed brackets.
means correct case: '[ [ ]' -> '[ ]#[ ]'
    It suffices not to correct right-open ones because we start our search for references with '[' and not with ']'
"""
return line.replace('[',']#[')
def proposition_cleaner(lines):
"""
    Lines is a list of strings dedicated to each proposition's proof.
This function should return its referenced notions in a single list of name strings.
"""
ref_names = []
for line_num in range(len(lines)):
lines[line_num] = correct_left_unclosed_bras_inline(lines[line_num])
for i in range(len(lines[line_num])):
if lines[line_num][i] == '[':
end_ref = lines[line_num][i:].find(']') + i
if has_ref(lines[line_num][i:end_ref + 1]): #check if it has info.
ref_names.extend(std_ref_form(lines[line_num][i: end_ref + 1])) #put the standard refs. in the list.
while '[]' in ref_names:
ref_names.remove('[]')
return ref_names
# from functions import proposition_cleaner
proposition_cleaner(desired_book_propos_dict['[Prop 10.113]'])
for key in desired_book_propos_dict.keys():
desired_book_propos_dict[key] = proposition_cleaner(desired_book_propos_dict[key])
desired_book_propos_dict
###Output
_____no_output_____
###Markdown
NetworkX comes to the scene
###Code
import networkx as nx
import matplotlib.pyplot as plt
first_book_network = nx.DiGraph(desired_book_propos_dict)
first_book_network.edges()
nx.draw(first_book_network)
plt.hist(dict(first_book_network.degree()).values())
###Output
_____no_output_____
###Markdown
Cleaning all the books
###Code
# %%writefile -a functions.py
def book_to_props_dict(book_num,book_lines):
"""
    This function will extract the propositions of a given book and return them in one dictionary.
"""
lines_mask = [False]*len(book_lines)
for i in range(len(book_lines)):
lines_mask[i] = '[P' in book_lines[i] or '[C' in book_lines[i] or '[D' in book_lines[i] or 'Proposition' in book_lines[i]
book_lines = book_lines[lines_mask]
props_num = 1
props_index = []
for i in range(len(book_lines)):
if 'Proposition' in book_lines[i]:
props_index.append(i)
book_lines[i] = 'Proposition '+ str(props_num)
props_num+=1
book_lines_propos_dict = {}
props_titles = [ '[Prop. {}.{}]'.format(book_num,i) for i in range(1,len(props_index)+1) ]
book_lines_propos_dict = { x:[] for x in props_titles }
props_index.append(-1)
for i in range(len(props_index)-1):
book_lines_propos_dict[props_titles[i]] = book_lines[ props_index[i]+1:props_index[i+1] ]
for key in book_lines_propos_dict.keys():
book_lines_propos_dict[key] = proposition_cleaner(book_lines_propos_dict[key])
return book_lines_propos_dict
# from functions import book_to_props_dict
book_to_props_dict(10,book_glossary['ELEMENTS BOOK 10\n'])
###Output
_____no_output_____
###Markdown
Saving the total references
###Code
import pickle
for i in range(1,len(book_glossary.keys())+1):
a_file = open("books_references_dictionaries/ref_in_book_{}.pkl".format(i), "wb")
pickle.dump( book_to_props_dict(i,book_glossary['ELEMENTS BOOK {}\n'.format(i)]) , a_file)
a_file.close()
###Output
_____no_output_____ |
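###Markdown
A quick sketch (assuming the pickle files written above) showing how one of the saved reference dictionaries can be loaded back:
###Code
# Load the reference dictionary of book 1 back from disk and peek at a few entries
with open("books_references_dictionaries/ref_in_book_1.pkl", "rb") as a_file:
    ref_in_book_1 = pickle.load(a_file)
list(ref_in_book_1.items())[:3]
###Output
_____no_output_____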
recsys/nlp/search_relevance/search_relevance.ipynb | ###Markdown
Keyword search. Kaggle competition: https://www.kaggle.com/c/home-depot-product-search-relevance
###Code
%config ZMQInteractiveShell.ast_node_interactivity='all'
import sys
import os
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor, BaggingRegressor
from nltk.stem.snowball import SnowballStemmer
file_path = '/Users/yangpan 1/works/recsys/nlp/search_relevance/input'
df_train = pd.read_csv(os.path.join(file_path, 'train.csv'), encoding="ISO-8859-1")
df_test = pd.read_csv(os.path.join(file_path, 'test.csv'), encoding="ISO-8859-1")
df_desc = pd.read_csv(os.path.join(file_path, 'product_descriptions.csv'))
df_train.shape
df_test.shape
df_desc.shape
df_train.head(2)
df_test.head(2)
df_desc.head(2)
df_all = pd.concat((df_train, df_test), axis=0, ignore_index=True)
df_all.shape
df_all.head(3)
df_all = pd.merge(df_all, df_desc, how='left', on='product_uid')
df_all.shape
df_all.head(2)
###Output
_____no_output_____
###Markdown
Text preprocessing
###Code
stemmer = SnowballStemmer('english')
def str_stemmer(s):
return " ".join([stemmer.stem(word) for word in s.lower().split()])
def str_common_word(str1, str2):
return sum(int(str2.find(word)>=0) for word in str1.split())
df_all['search_term'] = df_all['search_term'].map(lambda x:str_stemmer(x))
df_all['product_title'] = df_all['product_title'].map(lambda x:str_stemmer(x))
df_all['product_description'] = df_all['product_description'].map(lambda x:str_stemmer(x))
###Output
_____no_output_____ |
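###Markdown
A minimal sketch (the feature names below are illustrative, not part of the original notebook) of how `str_common_word` could turn the stemmed text into numeric features:
###Code
# Hypothetical features: how many query terms appear in the title / description
df_all['commons_in_title'] = df_all.apply(lambda row: str_common_word(row['search_term'], row['product_title']), axis=1)
df_all['commons_in_desc'] = df_all.apply(lambda row: str_common_word(row['search_term'], row['product_description']), axis=1)
df_all[['search_term', 'commons_in_title', 'commons_in_desc']].head(3)
###Output
_____no_output_____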
static_files/assignments/Assignment8.ipynb | ###Markdown
Assignment 8 Chapter 7 Student ID: *Double click here to fill the Student ID* Name: *Double click here to fill the name* 1It was mentioned in the chapter that a cubic regression spline withone knot at $\xi$ can be obtained using a basis of the form $x$, $x^2$, $x^3$,$(x − \xi)_+^3$, where $(x − \xi)_+^3 = (x − \xi)^3$ if $x > \xi$ and equals 0 otherwise. We will now show that a function of the form$$f(x)=\beta_0+\beta_1x+\beta_2x^2+\beta_3x^3+\beta_4 (x − \xi)_+^3$$is indeed a cubic regression spline, regardless of the values of $\beta_0$, $\beta_1$, $\beta_2$, $\beta_3$, $\beta_4$. (a) Find a cubic polynomial$$f(x)=a_1+b_1x+c_1x^2+d_1x^3$$such that $f(x)=f_1(x)$ for all $x\leq \xi$. Express $a_1$, $b_1$, $c_1$, $d_1$ in terms of $\beta_0$, $\beta_1$, $\beta_2$, $\beta_3$, $\beta_4$. > Ans: *double click here to answer the question.* (b) Find a cubic polynomial$$f_2(x)=a_2+b_2x+c_2x^2+d_2x^3$$such that $f(x)=f_2(x)$ for all $x> \xi$. Express $a_2$, $b_2$, $c_2$, $d_2$ in terms of $\beta_0$, $\beta_1$, $\beta_2$, $\beta_3$, $\beta_4$. We have now establised that $f(x)$ is a piecewise polynomial. > Ans: *double click here to answer the question.* (c) Show that $f_1(\xi) = f_2(\xi)$. That is, $f(x)$ is continuous at $\xi$. > Ans: *double click here to answer the question.* (d) Show that $f'_1(\xi) = f'_2(\xi)$. That is, $f'(x)$ is continuous at $\xi$. > Ans: *double click here to answer the question.* (e) Show that $f''_1(\xi) = f''_2(\xi)$. That is, $f''(x)$ is continuous at $\xi$. > Ans: *double click here to answer the question.* 2Suppose that a curve $\hat{g}$ is computed to smoothly fit a set of n pointsusing the following formula:$$\hat{g}=\text{arg min}_g\left(\sum_{i=1}^n (y_i-g(x_i))^2+\lambda \int \left[g^{(m)}(x)\right]^2\ dx\right),$$where $g^{(m)}$ represents the $m$th derivative of $g$ (and $g^{(0)} = g$). Provide example sketches of $\hat{g}$ in each of the following scenarios. (a) $\lambda=\infty,\ m =0$ > Ans: *double click here to answer the question.* (b) $\lambda=\infty,\ m =1$ > Ans: *double click here to answer the question.* (c) $\lambda=\infty,\ m =2$ > Ans: *double click here to answer the question.* (d) $\lambda=\infty,\ m =3$ > Ans: *double click here to answer the question.* (e) $\lambda=0,\ m =3$ > Ans: *double click here to answer the question.* 9This question uses the variables `dis` (the weighted mean of distancesto five Boston employment centers) and `nox` (nitrogen oxides concentrationin parts per 10 million) from the `Boston` data. We will treat`dis` as the predictor and `nox` as the response. (a) Use the `PolynomialFeatures()` function to fit a cubic polynomial regression topredict `nox` using `dis`. Report the regression output, and plotthe resulting data and polynomial fits.
###Code
# coding your answer here.
###Output
_____no_output_____
###Markdown
(b) Plot the polynomial fits for a range of different polynomial degrees (say, from 1 to 10), and report the associated residual sum of squares.
###Code
# coding your answer here.
###Output
_____no_output_____
###Markdown
(c) Perform cross-validation or another approach to select the optimal degree for the polynomial, and explain your results.
###Code
# coding your answer here.
###Output
_____no_output_____
###Markdown
> Ans: *double click here to answer the question.* (d) Use the `BSplines()` function to fit a regression spline to predict `nox`using `dis`. Report the output for the fit using four degrees offreedom. How did you choose the knots? Plot the resulting fit.
###Code
# coding your answer here.
###Output
_____no_output_____
###Markdown
(e) Now fit a regression spline for a range of degrees of freedom, and plot the resulting fits and report the resulting RSS. Describe the results obtained.
###Code
# coding your answer here.
###Output
_____no_output_____
###Markdown
> Ans: *double click here to answer the question.* (f) Perform cross-validation or another approach in order to selectthe best degrees of freedom for a regression spline on this data.Describe your results.
###Code
# coding your answer here.
###Output
_____no_output_____
###Markdown
> Ans: *double click here to answer the question.* 10This question relates to the `College` data set. (a) Split the data into a training set and a test set. Using out-of-statetuition as the response and the other variables as the predictors,perform forward stepwise selection on the training set in orderto identify a satisfactory model that uses just a subset of thepredictors.
###Code
# coding your answer here.
###Output
_____no_output_____
###Markdown
(b) Fit a GAM on the training data, using out-of-state tuition as the response and the features selected in the previous step as the predictors. Plot the results, and explain your findings.
###Code
# coding your answer here.
###Output
_____no_output_____
###Markdown
> Ans: *double click here to answer the question.* (c) Evaluate the model obtained on the test set, and explain theresults obtained.
###Code
# coding your answer here.
###Output
_____no_output_____ |
lectures/Week8 answers.ipynb | ###Markdown
OverviewThis week we will go back to networks, and we will learn about community detection. Here is the plan:* __Part 1__: Learn about Community Detection with a lecture from Sune. Then do an exercise related to the famous [Zachary Karate Club Network](https://en.wikipedia.org/wiki/Zachary%27s_karate_club).* __Part 2__: Learn how to compare network communities, using _normalized [mutual information](https://en.wikipedia.org/wiki/Mutual_information)_.* __Part 3__: Apply community detection to the GME network. Part 1: Community detection. Now that we have learnt about text analysis, it is time to go back to our GME network! We will start by learning about community detection with a lecture from Sune.> **_Video Lecture_**: Communities in networks. You can watch the 2015 video [here](https://youtu.be/06GL_KGHdbE/)
###Code
from IPython.display import YouTubeVideo
YouTubeVideo("FSRoqXw28RI",width=800, height=450)
###Output
_____no_output_____
###Markdown
> **_Reading_**: [Chapter 9 of the NS book.](http://networksciencebook.com/chapter/9). You can skip sections 9.3, 9.5 and 9.7. _Exercise 1: Zachary's karate club_: And now, the idea is to put a bit into practice the concept of community detection. In this exercise, we will work on Zarachy's karate club graph (refer to the Introduction of Chapter 9). The dataset is available in NetworkX, by calling the function [karate_club_graph](https://networkx.org/documentation/stable//auto_examples/graph/plot_karate_club.html)
###Code
import networkx as nx
import netwulf as nw
karate_G = nx.karate_club_graph()
###Output
_____no_output_____
###Markdown
> 1. Visualize the graph using [netwulf](https://netwulf.readthedocs.io/en/latest/). Set the color of each node based on the club split (the information is stored as a node attribute). My version of the visualization is below.>
###Code
import numpy as np
green = np.array([1,2,3,4,5,6,7,8,11,12,13,14,17,18,20,22]) - 1 #0 indexing
purple = np.array([9,10,15,16,19,21,23,24,25,26,27,28,29,30,31,32,33,34]) - 1
colors = dict(list(zip(green,['green']*len(green))) + list(zip(purple, ['purple']*len(purple))))
nx.set_node_attributes(karate_G, colors, 'color')
karate_G_visualization = nw.visualize(karate_G, plot_in_cell_below=True)
###Output
_____no_output_____
###Markdown
> 2. Write a function to compute the __modularity__ of a graph partitioning (use **equation 9.12** in the book). The function should take a networkX Graph and a partitioning as inputs and return the modularity.
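For reference, equation 9.12 defines the modularity of a partition as $$M = \sum_{c=1}^{n_c}\left[\frac{L_c}{L} - \left(\frac{k_c}{2L}\right)^{2}\right],$$ where $L$ is the total number of links in the network, $L_c$ is the number of links inside community $c$, and $k_c$ is the total degree of the nodes in $c$; this is the quantity implemented in the function below.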
###Code
def calculate_modularity(graph:nx.Graph, partitions:list):
'''
graph: Networkx Graph
partitions: List of partitions. Each partition should be a set of nodes in the graph
'''
L = graph.number_of_edges()
M = 0
for p in partitions:
kc = sum(dict(graph.degree(p)).values())
partition = graph.subgraph(p)
Lc = partition.number_of_edges()
M += (Lc/L) - (kc/(2*L))**2
return M
###Output
_____no_output_____
###Markdown
> 3. Explain in your own words the concept of _modularity_. The concept of *modularity* describes how well a suggested partitioning of a graph compares to a random graph, i.e. whether the suggested partitioning is just random or whether it could actually originate from real communities in the network. > 4. Compute the modularity of the Karate club split partitioning using the function you just wrote. Note: the Karate club split partitioning is available as a [node attribute](https://networkx.org/documentation/networkx-1.10/reference/generated/networkx.classes.function.get_node_attributes.html), called _"club"_.
###Code
partitions = nx.get_node_attributes(karate_G, 'club')
partition_labels = set(partitions.values())
communities = {label: set() for label in partition_labels}
for n, p in partitions.items():
communities[p].add(n)
calculate_modularity(karate_G, communities.values())
nx.algorithms.community.quality.modularity(karate_G, communities.values())
#Example from book
G = nx.Graph()
green = [(0, {'color': 'green'}), (1, {'color': 'green'}), (2, {'color': 'green'}), (3,{'color': 'green'}), (4, {'color': 'green'})]
purple = [(5,{'color': 'purple'}), (6,{'color': 'purple'}), (7,{'color': 'purple'}), (8,{'color': 'purple'})]
G.add_nodes_from(green+purple)
green_edges = [(0,1), (0,2), (0,3), (1,2), (1,3), (2,4), (3,4), (4,5)]
purple_edges = [(5,6), (5,7), (5,8), (6,7), (7,8)]
G.add_edges_from(green_edges+purple_edges)
G_partitions = nx.get_node_attributes(G, 'color')
G_partition_labels = set(G_partitions.values())
G_communities = {label: set() for label in G_partition_labels}
for n, p in G_partitions.items():
G_communities[p].add(n)
calculate_modularity(G, G_communities.values())
nx.algorithms.community.quality.modularity(G, G_communities.values())
###Output
_____no_output_____
###Markdown
> 5. We will now perform a small randomization experiment to assess if the modularity you just computed is statitically different from $0$. To do so, we will implement a [configuration model](https://en.wikipedia.org/wiki/Configuration_model). In short, we will create a new network, such that each node has the same degree as in the original network, but different connections. Here is how the algorithm works.> * __a.__ Create an identical copy of your original network. > * __b.__ Consider the list of network edges. Create two lists: the list of source nodes and target nodes. (e.g. edges = [(1,2),(3,4)], sources = [1,3], targets = [2,4])> * __c.__ Concatenate the list of source nodes and target nodes into a unique list (e.g. [1,2,3,4]). This is the list of _stubs_ (see the [Wikipedia page](https://en.wikipedia.org/wiki/Configuration_model) for the definition of stub).> * __d.__ Shuffle the list of stubs. Build a set of edges (tuples), by connecting each element in the list of shuffled stubs with the following element (e.g. [4,1,2,3] --> [(4,1),(2,3)])> * __e.__ Remove all the original network edges from your network. Add all the new _shuffled_ edges you created in step __d.__
###Code
import random
def compute_configuration_model(graph):
configuration_model = graph.copy()
    # use the edges of the copied graph, not the globally defined karate_configuration
    sources = [s for s,t in configuration_model.edges()]
    targets = [t for s,t in configuration_model.edges()]
stubs = sources + targets
random.shuffle(stubs)
random_edges = []
while len(stubs) >= 2:
random_edges.append((stubs.pop(), stubs.pop()))
configuration_model.remove_edges_from(list(configuration_model.edges()))
configuration_model.add_edges_from(random_edges)
return configuration_model
import random
karate_configuration = karate_G.copy()
sources = [s for s,t in karate_configuration.edges()]
targets = [t for s,t in karate_configuration.edges()]
stubs = sources + targets
random.shuffle(stubs)
random_edges = []
while len(stubs) >= 2:
random_edges.append((stubs.pop(), stubs.pop()))
karate_configuration.remove_edges_from(list(karate_configuration.edges()))
karate_configuration.add_edges_from(random_edges)
###Output
_____no_output_____
###Markdown
> 6. Is the degree of the nodes in your original and the configuration model network the same? Why? . __Note 1:__ With this algorithm you may obtain some self-loops. Note that [a self-loop should add two to the degree](https://en.wikipedia.org/wiki/Loop_(graph_theory):~:text=For%20an%20undirected%20graph%2C%20the,adds%20two%20to%20the%20degree.&text=In%20other%20words%2C%20a%20vertex,not%20one%2C%20to%20the%20degree.). __Note 2:__ With this algorithm, you could also obtain repeated edges between the same two nodes. Only NetworkX [MultiGraph](https://networkx.org/documentation/stable/reference/classes/multigraph.html) allow for repeated edges, while regular [Graph](https://networkx.org/documentation/stable/reference/classes/graph.html?highlight=graph%20undirectednetworkx.Graph) do not, meaning you will not be able to account for multi-edges when you have a regular Graph. (_Optional_: if you want to implement a configuration model without self-loops and multi-edges, you can try out the [double_edge_swap](https://networkx.org/documentation/stable//reference/algorithms/generated/networkx.algorithms.swap.double_edge_swap.html) algorithm)
###Code
karate_G.degree
karate_configuration.degree
sum(dict(karate_G.degree).values())
sum(dict(karate_configuration.degree).values())
###Output
_____no_output_____
###Markdown
They are not exactly the same. This makes sense, as the configuration model is built here on a regular NetworkX Graph, so repeated edges produced by the stub matching get collapsed and some nodes end up with a slightly lower degree (a self-loop, on the other hand, adds two to a node's degree). > 7. Create $1000$ randomized versions of the Karate Club network using the algorithm you wrote in step 5. For each of them, compute the modularity of the "club" split and store it in a list.
###Code
N = 1000
partitions = nx.get_node_attributes(karate_G, 'club')
partition_labels = set(partitions.values())
communities = {label: set() for label in partition_labels}
for n, p in partitions.items():
communities[p].add(n)
modularities = []
for _ in range(N):
configuration_model = compute_configuration_model(karate_G)
modularities.append(calculate_modularity(configuration_model, communities.values()))
###Output
_____no_output_____
###Markdown
> 8. Compute the average and standard deviation of the modularity for the configuration model.
###Code
np.mean(modularities)
np.std(modularities)
###Output
_____no_output_____
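###Markdown
A small sketch that summarizes how far the real club split lies from the configuration-model distribution as a z-score (it reuses the `modularities` list and the functions defined above):
###Code
# z-score of the actual club-split modularity relative to the randomized networks
club_modularity = calculate_modularity(karate_G, communities.values())
(club_modularity - np.mean(modularities)) / np.std(modularities)
###Output
_____no_output_____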
###Markdown
> 9. Plot the distribution of the configuration model modularity. Plot the actual modularity of the club split as a vertical line (use [axvline](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.axvline.html)).
###Code
import matplotlib.pylab as plt
karate_modularity = calculate_modularity(karate_G, communities.values())
bins = np.linspace(-1, 1, 100)
hist, hist_edges = np.histogram(modularities, bins=bins, density=True)
x = (hist_edges[1:]+hist_edges[:-1])/2
fig, ax = plt.subplots(figsize=(15,5))
width = bins[1]-bins[0]
ax.bar(x, hist, width=width*0.9)
ax.axvline(karate_modularity, linestyle='--', c='orange', label='Actual Karate Graph Modularity')
ax.grid()
ax.set_title('Distribution of modularity for the configuration model')
ax.set_xlabel('Modularity')
ax.set_ylabel('Probability density')
plt.legend()
plt.show()
###Output
_____no_output_____ |
Twitter Bot.ipynb | ###Markdown
**Location for saving gecko driver to launch Firefox**C:\Users\Abhijit Chendvankar\AppData\Local\Programs\Python\Python38-32Download link for geckodriver: https://github.com/mozilla/geckodriver/releases
###Code
import selenium
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
class TwitterBot:
def __init__(self,username,password):
self.username = username
self.password = password
self.bot = webdriver.Firefox() # creating an instance of our bot
def login(self):
bot = self.bot
bot.get('https://twitter.com/login')
time.sleep(5)
email = bot.find_element_by_name("session[username_or_email]")
password = bot.find_element_by_name("session[password]")
email.clear()
password.clear()
email.send_keys(self.username)
password.send_keys(self.password)
password.send_keys(Keys.RETURN)
time.sleep(5)
def like_tweet(self,hashtag):
bot = self.bot
bot.get('https://twitter.com/search?q='+hashtag+'&src=typed_query')
time.sleep(5)
for i in range(1,3):
bot.execute_script('window.scrollTo(0, document.body.scrollHeight)')
time.sleep(3)
tweets = bot.find_elements_by_class_name('tweet')
links = [elem.get_attribute('data-permalink-path') for elem in tweets]
# print(links)
for link in links:
bot.get('https://twitter.com' + link)
try:
                bot.find_element_by_class_name('HeartAnimation').click()  # find_element (singular) returns a clickable element
time.sleep(10)
except Exception as ex:
time.sleep(30)
ac = TwitterBot('UserName', 'YourPassword')
ac.login()
ac.like_tweet('datascience')
#__Helo__World__
###Output
_____no_output_____ |
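###Markdown
If geckodriver is not saved on the PATH described above, the driver can also be pointed at it explicitly. This is a sketch using the Selenium 3 style `executable_path` argument (matching the `find_element_by_*` calls in this notebook); the path below is just a placeholder:
###Code
# Hypothetical example: launch Firefox with an explicit geckodriver location
# bot = webdriver.Firefox(executable_path=r'C:\path\to\geckodriver.exe')
###Output
_____no_output_____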
Group O-1-7 - Group Assignment (with RFC for RFE).ipynb | ###Markdown
Feature preparation (after cleaning on Dataiku)
###Code
def fix_types_category(df):
df['date_recorded_year'] = df['date_recorded_year'].astype('uint8')
df['funder'] = df['funder'].astype('category')
df['installer'] = df['installer'].astype('category')
df['basin'] = df['basin'].astype('category')
df['region'] = df['region'].astype('category')
df['region-dist'] = df['region-dist'].astype('category')
df['scheme_management'] = df['scheme_management'].astype('category')
df['extraction_type_class'] = df['extraction_type_class'].astype('category')
df['management_group'] = df['management_group'].astype('category')
df['payment'] = df['payment'].astype('category')
df['quality_group'] = df['quality_group'].astype('category')
df['quantity'] = df['quantity'].astype('category')
df['source_type'] = df['source_type'].astype('category')
df['source_class'] = df['source_class'].astype('category')
df['waterpoint_type_group'] = df['waterpoint_type_group'].astype('category')
df['status_group'] = df['status_group'].astype('category')
return df
def scale(df):
scaler = MinMaxScaler()
for i in ('population', 'amount_tsh', 'longitude', 'latitude', 'gps_height', 'distance_from_water', 'date_recorded_year'):
df[[i]] = scaler.fit_transform(df[[i]])
return df
def feature_skewness(df):
df = df.drop(['latitude', 'longitude'], axis=1)
numeric_dtypes = ['int16', 'int32', 'int64',
'float16', 'float32', 'float64', 'uint8']
numeric_features = []
for i in df.columns:
if df[i].dtype in numeric_dtypes:
numeric_features.append(i)
feature_skew = df[numeric_features].apply(lambda x: skew(x)).sort_values(ascending=False)
skews = pd.DataFrame({'skew':feature_skew})
return feature_skew, numeric_features
def fix_skewness(df):
feature_skew, numeric_features = feature_skewness(df)
high_skew = feature_skew[feature_skew > 0.5]
skew_index = high_skew.index
for i in skew_index:
df[i] = boxcox1p(df[i], boxcox_normmax(df[i]+1))
skew_features = df[numeric_features].apply(lambda x: skew(x)).sort_values(ascending=False)
skews = pd.DataFrame({'skew':skew_features})
return df
def numerical_features(df):
columns = df.columns
return df._get_numeric_data().columns
def categorical_features(df):
numerical_columns = numerical_features(df)
return(list(set(df.columns) - set(numerical_columns)))
def onehot_encode(df):
df1 = df.drop(['status_group'], axis=1)
numericals = df1.get(numerical_features(df1))
new_df = numericals.copy()
for categorical_column in categorical_features(df1):
new_df = pd.concat([new_df,
pd.get_dummies(df1[categorical_column],
prefix=categorical_column)],
axis=1)
new_df['status_group'] = df['status_group']
return new_df
def data_preparation(df):
temp = df.drop(['id', 'months_since_recorded', 'date_recorded_day', 'date_recorded_month', 'construction_year'], axis=1)
aux = fix_types_category(temp)
aux2 = scale(aux)
aux3 = fix_skewness(aux2)
aux4 = scale(aux3)
final = onehot_encode(aux4)
return final
dataset = data_preparation(data)
dataset.head()
# dataset.describe().T
###Output
_____no_output_____
###Markdown
Basic functions defined
###Code
def target_encoding(y_train):
le=LabelEncoder()
le.fit(y_train)
print(le.classes_)
y_encoded=le.transform(y_train)
return y_encoded
def target_decoding(y_encoded):
le=LabelEncoder()
le.fit(dataset['status_group'])
print(le.classes_)
y_back = le.inverse_transform(y_encoded)
return y_back
def data_split(df, seed=666):
X = df.loc[:, df.columns != 'status_group']
y = df.loc[:, 'status_group']
y_encoded = target_encoding(y)
X_train, X_test, y_train, y_test = train_test_split(X, y_encoded, test_size=0.40, random_state=seed)
return X_train, X_test, y_train, y_test
###Output
_____no_output_____
###Markdown
Base-line model prior to feature selection
###Code
def model_accuracy(df, seed=666):
X = df.loc[:, df.columns != 'status_group']
y = df.loc[:, 'status_group']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=seed)
#Logistic Regression Model
logreg = LogisticRegression(class_weight = 'balanced', penalty = 'l1')
logreg.fit(X_train,y_train)
pred=logreg.predict(X_test)
accuracy = accuracy_score(y_test, pred)
return accuracy
acc = model_accuracy(dataset)
print(round(acc,4))
###Output
C:\Users\vratu\Anaconda3\lib\site-packages\sklearn\linear_model\logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
C:\Users\vratu\Anaconda3\lib\site-packages\sklearn\linear_model\logistic.py:460: FutureWarning: Default multi_class will be changed to 'auto' in 0.22. Specify the multi_class option to silence this warning.
"this warning.", FutureWarning)
###Markdown
Feature Selection RFE
###Code
X_train, X_test, y_train, y_test = data_split(dataset)
RFE_estimator = RandomForestClassifier(n_estimators=10, class_weight='balanced') # Alternatively, Naive Bayes was used as estimator (MultinomialNB()) which returned 302 features as important features
rfecv = RFECV(estimator=RFE_estimator, step=1, cv=5, scoring='accuracy') # try with shuffle =True once
rfecv.fit(X_train, y_train)
# Plot number of features VS. cross-validation scores
plt.figure(figsize=(20,10))
plt.xlabel("Number of features selected")
plt.ylabel("Cross validation score (nb of correct classifications)")
plt.plot(range(1, len(rfecv.grid_scores_) + 1), rfecv.grid_scores_)
plt.show()
d={'scores':rfecv.grid_scores_, 'n_feats':range(1, len(rfecv.grid_scores_)+1)}
rfescores = pd.DataFrame(data=d)
rfescores[rfescores['scores']==rfescores['scores'].max()]
feats_bestcv=int(rfescores[rfescores['scores']==rfescores['scores'].max()]['n_feats'])
feats_bestcv
#rfescores[rfescores['scores']==rfescores['scores'].max()]['n_feats']
featuresnames=np.array(dataset.columns.drop('status_group'))
featuresnames_RFE=featuresnames[np.array(rfecv.support_)]
featuresnames_RFE= np.append(featuresnames_RFE, 'status_group')
features_RFE=dataset[featuresnames_RFE]
features_RFE.info()
feats_bestcv
#features_RFE.columns
#features_RFE = pd.read_csv('C:/Users/vratu/OneDrive/Desktop/MBD/MBD_Term_2/Machine Learning 2/Group Assignment/Training_Set_RFE_FEATURES.csv')
###Output
_____no_output_____
###Markdown
Base-line model post feature selection
###Code
acc_RFE = model_accuracy(features_RFE)
print(round(acc_RFE,4))
###Output
C:\Users\vratu\Anaconda3\lib\site-packages\sklearn\linear_model\logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
C:\Users\vratu\Anaconda3\lib\site-packages\sklearn\linear_model\logistic.py:460: FutureWarning: Default multi_class will be changed to 'auto' in 0.22. Specify the multi_class option to silence this warning.
"this warning.", FutureWarning)
###Markdown
Although there is a minimal decrease in accuracy scores due to the eliminated features, we have proceeded with the RFE features since we aim to optimize through models other than Logistic Regression, which would be better off without the noise. PCA (Dimensionality Reduction) Discarded because only 1 PC was created which explained >99% of variance, hence not feasible for classification purposes (accuracy score of 0.5141 on the base-line model)
###Code
X_train, X_test, y_train, y_test = data_split(dataset)
pca=PCA(n_components=int(round(feats_bestcv/2)), whiten=True, svd_solver='auto')
pca.fit(X_train)
print(pca.explained_variance_ratio_)
pca.n_components_
def model_PCA_prep(feature_set):
X_train, X_test, y_train, y_test = data_split(feature_set)
X_pca_train = pca.fit_transform(X_train)
X_pca_train2 = pca.fit_transform(X_pca_train)
X_pca_test = pca.fit_transform(X_test)
X_pca_test2 = pca.fit_transform(X_pca_test)
return X_pca_train2, X_pca_test2, y_train, y_test
X_pca_train, X_pca_test, y_train, y_test = model_PCA_prep(dataset)
logreg = LogisticRegression(class_weight='balanced', penalty='l1')  # the model was not defined in this scope in the original cell
logreg.fit(X_pca_train, y_train)
y_pred_PCA = logreg.predict(X_pca_test)
score=accuracy_score(y_test, y_pred_PCA)
score
###Output
_____no_output_____
###Markdown
Bayesian optimization method for creating 4 primary models
###Code
X_train, X_test, y_train, y_test = data_split(features_RFE)
###Output
['functional' 'functional needs repair' 'non functional']
###Markdown
Model 1 - Random Forest Classifier
###Code
def rfc_cv(n_estimators, min_samples_split, max_features, data, targets):
"""Random Forest cross validation.
This function will instantiate a random forest classifier with parameters
n_estimators, min_samples_split, and max_features. Combined with data and
targets this will in turn be used to perform cross validation. The result
of cross validation is returned.
Our goal is to find combinations of n_estimators, min_samples_split, and
max_features that minimzes the log loss.
"""
estimator = RandomForestClassifier(
n_estimators=n_estimators,
min_samples_split=min_samples_split,
max_features=max_features,
random_state=2
)
cval = cross_val_score(estimator, data, targets,
scoring='accuracy', cv=4)
return cval.mean()
def optimize_rfc(data, targets):
"""Apply Bayesian Optimization to Random Forest parameters."""
def rfc_crossval(n_estimators, min_samples_split, max_features):
"""Wrapper of RandomForest cross validation.
Notice how we ensure n_estimators and min_samples_split are casted
to integer before we pass them along. Moreover, to avoid max_features
taking values outside the (0, 1) range, we also ensure it is capped
accordingly.
"""
return rfc_cv(
n_estimators=int(n_estimators),
min_samples_split=int(min_samples_split),
max_features=max(min(max_features, 0.999), 1e-3),
data=data,
targets=targets,
)
optimizer = BayesianOptimization(
f=rfc_crossval,
pbounds={
"n_estimators": (10, 250),
"min_samples_split": (2, 25),
"max_features": (0.1, 0.999),
},
random_state=1234,
verbose=2
)
optimizer.maximize(n_iter=10)
print("Final result:", optimizer.max)
###Output
_____no_output_____
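###Markdown
A sketch (mirroring the `optimize_xgb` and `optimize_knn` calls later in this notebook) of how the optimizer defined above would be run to obtain the hyper-parameters used in the next cell:
###Code
# Run the Bayesian optimization for the Random Forest hyper-parameters
# optimize_rfc(X_train, y_train)
###Output
_____no_output_____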
###Markdown
model_RFC_opt = RandomForestClassifier(n_estimators=115, min_samples_split=16, max_features=0.2722)
X_train, X_test, y_train, y_test = data_split(features_RFE)
model_RFC_opt.fit(X_train, y_train)
y_pred_RFC = model_RFC_opt.predict(X_test)
score_RFC = accuracy_score(y_test, y_pred_RFC)
score_RFC
###Code
### Model 2 - KNearestNeighbours model
###Output
_____no_output_____
###Markdown
def knn_cv(n_neighbours, p, leaf_size, data, targets):
    estimator = KNeighborsClassifier(algorithm='kd_tree', n_neighbors=n_neighbours, weights='distance', p=p, leaf_size=leaf_size, n_jobs=-1)
    cval = cross_val_score(estimator, data, targets, scoring='accuracy', cv=5)
    return cval.mean()

def optimize_knn(data, targets):
    """Apply Bayesian Optimization to KNN parameters."""
    def knn_crossval(n_neighbours, p, leaf_size):
        return knn_cv(
            n_neighbours=int(n_neighbours),
            p=int(p),
            leaf_size=int(leaf_size),
            data=data,
            targets=targets)
    optimizer = BayesianOptimization(
        f=knn_crossval,
        pbounds={
            "n_neighbours": (5, 20),
            "p": (1, 2),
            "leaf_size": (20, 40)
        },
        random_state=1234)
    optimizer.maximize(n_iter=10)
    print("Final result:", optimizer.max)

X_train, X_test, y_train, y_test = data_split(features_RFE)
optimize_knn(X_train, y_train)
###Code
model_KNN_opt = KNeighborsClassifier(algorithm='kd_tree', weights='distance', n_jobs=-1, n_neighbors=15, leaf_size=29, p=1)
model_KNN_opt.fit(X_train, y_train)
y_pred_KNN = model_KNN_opt.predict(X_test)
y_pred_KNN_cat = target_decoding(y_pred_KNN)
y_pred_KNN_cat
score_KNN = accuracy_score(y_test, y_pred_KNN)
score_KNN
###Output
_____no_output_____
###Markdown
Model 3 - XGBoost Trees
###Code
model_XGB_basic = XGBClassifier(booster = 'gbtree', objective = 'multi:softmax', eval_metric = "merror",
importance_type = 'total_cover', n_jobs=-1, silent=True)
XGB_params = {'learning_rate': (0.2, 0.5), 'gamma': (0, 1), 'max_depth': (5, 20), 'min_child_weight': (0.8, 2), 'max_delta_step': (0, 10),
'subsample': (0.5, 1), 'colsample_bytree': (0.5, 1), 'reg_lambda': (0.5, 1.5), 'reg_alpha': (0, 1)}
def xgb_cv(learning_rate, gamma, max_depth, min_child_weight, max_delta_step, subsample, colsample_bytree, reg_lambda, reg_alpha, data, targets):
estimator = XGBClassifier(booster = 'gbtree', objective = 'multi:softmax', eval_metric = "merror",
importance_type = 'total_cover', n_jobs=-1,
learning_rate=learning_rate, gamma=gamma, max_depth=max_depth,
min_child_weight=min_child_weight,
max_delta_step=max_delta_step,
subsample=subsample, colsample_bytree=colsample_bytree,
reg_lambda=reg_lambda, reg_alpha=reg_alpha)
cval = cross_val_score(estimator, data, targets,
scoring='accuracy', cv=5)
return cval.mean()
def optimize_xgb(data, targets):
def xgb_crossval(learning_rate, gamma, max_depth, min_child_weight, max_delta_step, subsample, colsample_bytree, reg_lambda, reg_alpha):
return xgb_cv(
learning_rate=float(learning_rate), gamma=float(gamma), max_depth=int(max_depth), min_child_weight=float(min_child_weight),
max_delta_step=int(max_delta_step), subsample=float(subsample),
colsample_bytree=float(colsample_bytree), reg_lambda=float(reg_lambda), reg_alpha=float(reg_alpha),
data=data,
targets=targets)
optimizer = BayesianOptimization(
f=xgb_crossval,
pbounds={'learning_rate': (0.2, 0.5), 'gamma': (0, 1), 'max_depth': (5, 20), 'min_child_weight': (0.8, 2),
'max_delta_step': (0, 10), 'subsample': (0.5, 1), 'colsample_bytree': (0.5, 1), 'reg_lambda': (0.5, 1.5),
'reg_alpha': (0, 1)},
random_state=1234)
optimizer.maximize(n_iter=10)
print("Final result:", optimizer.max)
X_train, X_test, y_train, y_test = data_split(features_RFE)
optimize_xgb(X_train, y_train)
###Output
_____no_output_____
###Markdown
model_XGB_opt = XGBClassifier(booster='gbtree', objective='multi:softprob', eval_metric="mlogloss",
                              learning_rate=0.5958, gamma=0.6221, max_depth=16, min_child_weight=1.127,
                              max_delta_step=8, subsample=0.9791, colsample_bytree=0.5958,
                              reg_lambda=1.302, reg_alpha=0.2765,
                              importance_type='total_cover', n_jobs=-1, silent=True)
model_XGB_opt.fit(X_train, y_train)
y_pred_xgb = model_XGB_opt.predict(X_test)
scores_xgb = accuracy_score(y_test, y_pred_xgb)
scores_xgb
###Code
### Model 4 - Multi-nomial Naive Bayes Model
###Output
_____no_output_____
###Markdown
def mnb_cv(alpha, data, targets):
    estimator = MultinomialNB(alpha=alpha)
    cval = cross_val_score(estimator, data, targets, scoring='accuracy', cv=5)
    return cval.mean()

def optimize_mnb(data, targets):
    def mnb_crossval(alpha):
        return mnb_cv(alpha=float(alpha), data=data, targets=targets)

    optimizer = BayesianOptimization(
        f=mnb_crossval,
        pbounds={'alpha': (10, 30)},
        random_state=1234)
    optimizer.maximize(n_iter=10)
    print("Final result:", optimizer.max)

X_train, X_test, y_train, y_test = data_split(features_RFE)
optimize_mnb(X_train, y_train)
###Code
model_MNB_opt = MultinomialNB(alpha=25.71)
model_MNB_opt.fit(X_train, y_train)
y_pred_MNB = model_MNB_opt.predict(X_test)
accuracy_score(y_test, y_pred_MNB)
###Output
_____no_output_____
###Markdown
Optimized Stacked Model - Random Forest Classifier

This optimization code was run on the `stacked_df` that is created at the end of the stacking pipeline. However, for the sake of readability (and flow), the code appears here.
###Code
def rfc_cv(n_estimators, min_samples_split, max_features, data, targets):
estimator = RandomForestClassifier(
n_estimators=n_estimators,
min_samples_split=min_samples_split,
max_features=max_features, class_weight='balanced',
random_state=2
)
cval = cross_val_score(estimator, data, targets,
scoring='accuracy', cv=4)
return cval.mean()
def optimize_rfc(data, targets):
def rfc_crossval(n_estimators, min_samples_split, max_features):
return rfc_cv(
n_estimators=int(n_estimators),
min_samples_split=int(min_samples_split),
max_features=max(min(max_features, 0.999), 1e-3),
data=data,
targets=targets)
optimizer = BayesianOptimization(
f=rfc_crossval,
pbounds={
"n_estimators": (10, 250),
"min_samples_split": (2, 25),
"max_features": (0.1, 0.999)
},
random_state=1234,
verbose=2
)
optimizer.maximize(n_iter=10)
print("Final result:", optimizer.max)
rfc = RandomForestClassifier(n_estimators=10, class_weight='balanced')
# Read created df from earlier in case required
stacked_df = pd.read_csv('.../stacked_df.csv')
stacked_df = stacked_df.drop('Unnamed: 0', axis=1)  # drop() returns a new frame, so keep the result
X = stacked_df.loc[:, stacked_df.columns!='y_target']
y = stacked_df.loc[:, 'y_target']
optimize_rfc(X, y)
###Output
_____no_output_____
###Markdown
Create the optimized RFC based on the parameters found above (refer to the images attached in the zip file):

stacked_RFC = RandomForestClassifier(class_weight='balanced', max_features=0.1412, min_samples_split=25, n_estimators=180, n_jobs=-1)
###Code
## Other models (WIP)
### LDA
###Output
_____no_output_____
###Markdown
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis

model_LDA = LinearDiscriminantAnalysis(solver='svd', n_components=145)
X_train, X_test, y_train, y_test = data_split(features_RFE)
model_LDA.fit(X_train, y_train)
y_pred_LDA = model_LDA.predict(X_test)
scores_LDA = accuracy_score(y_test, y_pred_LDA, normalize=True)
scores_LDA.mean()
###Code
# Final Stacking and Pipeline
###Output
_____no_output_____
###Markdown
def stacked_model_fits(df):
    X = df.loc[:, df.columns != 'status_group']
    y1 = df.loc[:, 'status_group']
    y = target_encoding(y1)
    model_RFC_opt.fit(X, y)
    model_XGB_opt.fit(X, y)
    model_MNB_opt.fit(X, y)
    model_KNN_opt.fit(X, y)

def stacked_1_predict(X, y):
    y_pred_RFC = cross_val_predict(model_RFC_opt, X, y, cv=5)
    y_pred_XGB = cross_val_predict(model_XGB_opt, X, y, cv=5)
    y_pred_MNB = cross_val_predict(model_MNB_opt, X, y, cv=5)
    y_pred_KNN = cross_val_predict(model_KNN_opt, X, y, cv=5)
    RFC_pred = pd.Series(y_pred_RFC.flatten(), name='rfc')
    XGB_pred = pd.Series(y_pred_XGB.flatten(), name='xgb')
    MNB_pred = pd.Series(y_pred_MNB.flatten(), name='mnb')
    KNN_pred = pd.Series(y_pred_KNN.flatten(), name='knn')
    y_target = pd.Series(y, name='y_target')
    stack_features_final = pd.concat([X['longitude'], X['latitude'], RFC_pred, XGB_pred, MNB_pred, KNN_pred, y_target], axis=1)
    return stack_features_final

def stacked_2_predict(X):
    y_pred_RFC2 = model_RFC_opt.predict(X)
    y_pred_XGB2 = model_XGB_opt.predict(X)
    y_pred_MNB2 = model_MNB_opt.predict(X)
    y_pred_KNN2 = model_KNN_opt.predict(X)
    RFC_pred2 = pd.Series(y_pred_RFC2.flatten(), name='rfc')
    XGB_pred2 = pd.Series(y_pred_XGB2.flatten(), name='xgb')
    MNB_pred2 = pd.Series(y_pred_MNB2.flatten(), name='mnb')
    KNN_pred2 = pd.Series(y_pred_KNN2.flatten(), name='knn')
    stack_features_final = pd.concat([X['longitude'], X['latitude'], RFC_pred2, XGB_pred2, MNB_pred2, KNN_pred2], axis=1)
    return stack_features_final

def stacked_model_predict(df):
    stacked_model = make_pipeline(PolynomialFeatures(2), stacked_RFC)
    stacked_model_fits(df)
    X = df.loc[:, df.columns != 'status_group']
    y1 = df.loc[:, 'status_group']
    y = target_encoding(y1)
    df_stacked = stacked_1_predict(X, y)
    X2 = df_stacked.loc[:, df_stacked.columns != 'y_target']
    y2 = df_stacked.loc[:, 'y_target']
    X2_train, X2_test, y2_train, y2_test = train_test_split(X2, y2, test_size=0.20, random_state=123)
    stacked_model.fit(X2_train, y2_train)
    y_pred_stacked = stacked_model.predict(X2_test)
    scores_stacked = accuracy_score(y2_test, y_pred_stacked)
    return stacked_model, scores_stacked

def stacked_model_final_upload(stacked_model, df_test_prepared):
    stacked_test = stacked_2_predict(df_test_prepared)
    prediction = stacked_model.predict(stacked_test)
    prediction_cat = pd.Series(target_decoding(prediction))
    prediction_cat.to_csv('C:\\Users\\vratu\\OneDrive\\Desktop\\MBD\\MBD_Term_2\\Machine Learning 2\\Group Assignment\\Upload_predictions.csv')
    return prediction
###Code
## Fit the primary models with entire training set
###Output
_____no_output_____
###Markdown
stacked_model_fits(features_RFE)
###Code
## Fit the stacked RFC model
###Output
_____no_output_____
###Markdown
stacked_RFC_model_fitted, scores = stacked_model_predict(features_RFE)
scores

Test and validate the stacked RFC model
###Code
##### Export CSV for later use
###Output
_____no_output_____
###Markdown
X = features_RFE.loc[:, features_RFE.columns != 'status_group']
y1 = features_RFE.loc[:, 'status_group']
y = target_encoding(y1)
stacked_df = stacked_1_predict(X, y)
stacked_df.to_csv('...\stacked_df.csv')
###Code
## Import and predict on the test Data-set (for upload)
### Data preperation
###Output
_____no_output_____
###Markdown
df_test_raw = pd.read_csv('C:\\Users\\vratu\\OneDrive\\Desktop\\MBD\\MBD_Term_2\\Machine Learning 2\\Group Assignment\\Test_set_values_joined_prepared.csv')
df_test_raw.describe().T
data_test = df_test_raw.drop(['id', 'months_since_recorded', 'date_recorded_day', 'date_recorded_month', 'construction_year'], axis=1)

Feature prep

def t_fix_types_category(df):
    df['date_recorded_year'] = df['date_recorded_year'].astype('uint8')
    df['funder'] = df['funder'].astype('category')
    df['installer'] = df['installer'].astype('category')
    df['basin'] = df['basin'].astype('category')
    df['region'] = df['region'].astype('category')
    df['region-dist'] = df['region-dist'].astype('category')
    df['scheme_management'] = df['scheme_management'].astype('category')
    df['extraction_type_class'] = df['extraction_type_class'].astype('category')
    df['management_group'] = df['management_group'].astype('category')
    df['payment'] = df['payment'].astype('category')
    df['quality_group'] = df['quality_group'].astype('category')
    df['quantity'] = df['quantity'].astype('category')
    df['source_type'] = df['source_type'].astype('category')
    df['source_class'] = df['source_class'].astype('category')
    df['waterpoint_type_group'] = df['waterpoint_type_group'].astype('category')
    return df

def t_onehot_encode(df):
    df1 = df
    numericals = df1.get(numerical_features(df1))
    new_df = numericals.copy()
    for categorical_column in categorical_features(df1):
        new_df = pd.concat([new_df, pd.get_dummies(df1[categorical_column], prefix=categorical_column)], axis=1)
    return new_df

def t_data_prep(df):
    temp = df
    aux = t_fix_types_category(temp)
    aux2 = scale(aux)
    aux3 = fix_skewness(aux2)
    aux4 = scale(aux3)
    final = t_onehot_encode(aux4)
    return final
###Code
#### Feature selection basis RFE
###Output
_____no_output_____
###Markdown
df_test = t_data_prep(data_test)
test_features_RFE = np.delete(featuresnames_RFE, -1)  # drop 'status_group' as target
df_test2 = df_test[test_features_RFE]
###Code
## If the post-RFE features are not present in the hold-out dataset, create them with 0-imputed values (our training dataset was scaled to the 0-1 range)
t_feats_names = [] # LIST OF FEATURE NAMES ABSENT FROM HOLD-OUT SET POST TRANSFORMATION
for i in t_feats_names:
df_test[i] = 0 #impute 0 for the missing column values to run the fitted stacked models
test_features_RFE = np.delete(featuresnames_RFE, -1) # to delete the 'status_group' target in original training data-set
f_test = df_test[test_features_RFE]
df_test.describe().T
###Output
_____no_output_____
###Markdown
Final Upload file creation
###Code
stacked_model_final_upload(stacked_RFC_model_fitted, df_test2)
###Output
_____no_output_____
###Markdown
After creating the CSV of predictions, we re-indexed it to match the submission format and added the column names. Create the file for upload based on the primary RFC model (trained on the full training set)
###Code
X = features_RFE.loc[:, features_RFE.columns != 'status_group']
y1 = features_RFE.loc[:, 'status_group']
y= target_encoding(y1)
m1 = make_pipeline(PolynomialFeatures(), model_RFC_opt)
m1.fit(X, y)
y_pred_1 = m1.predict(df_test2)
y_cat_1 = pd.Series(target_decoding(y_pred_1))
#y_cat_1.to_csv('.../Upload_predictions_RFCONLY.csv')
###Output
_____no_output_____
###Markdown
Create the file for upload based on the primary KNN model (trained on the full training set)
###Code
X = features_RFE.loc[:, features_RFE.columns != 'status_group']
y1 = features_RFE.loc[:, 'status_group']
y= target_encoding(y1)
m2 = make_pipeline(PolynomialFeatures(2), model_KNN_opt)
m2.fit(X, y)
y_pred_2 = m2.predict(df_test2)
y_cat_2 = pd.Series(target_decoding(y_pred_2))
#y_cat_2.to_csv('.../Upload_predictions_KNNONLY.csv')
###Output
_____no_output_____
###Markdown
Create the file for upload based on the primary XGB model (trained on the full training set)
###Code
X = features_RFE.loc[:, features_RFE.columns != 'status_group']
y1 = features_RFE.loc[:, 'status_group']
y= target_encoding(y1)
m3 = make_pipeline(PolynomialFeatures(2), model_XGB_opt)
m3.fit(X, y)
y_pred_3 = m3.predict(df_test2)
y_cat_3 = pd.Series(target_decoding(y_pred_3))
#y_cat_3.to_csv('.../Upload_predictions_XGBONLY.csv')
###Output
_____no_output_____
###Markdown
Create the file for upload based on the primary MNB model (trained on the full training set)
###Code
X = features_RFE.loc[:, features_RFE.columns != 'status_group']
y1 = features_RFE.loc[:, 'status_group']
y= target_encoding(y1)
m4 = make_pipeline(PolynomialFeatures(2), model_MNB_opt)
m4.fit(X, y)
y_pred_4 = m4.predict(df_test2)
y_cat_4 = pd.Series(target_decoding(y_pred_4))
#y_cat_4.to_csv('.../Upload_predictions_MNBONLY.csv')
###Output
_____no_output_____ |
qnarre.com/static/pybooks/conflict.ipynb | ###Markdown
Predatory Profits Of "High-Conflict" Divorces

For a justification of this blog, please consider the following chain of conjectures (referenced "lawyers" would be the **divorce lawyer** types):

- our common law is adversarial, the winner simply takes all
- a family, by definition, is not adversarial, as fundamentally both parents love their children
- however, with no adversarial conflict, there is no lawsuit and no profits for lawyers
- thus our common law applied to families must first turn a family into adversaries
- by definition, either unsolved conflicts or perceived lack of resources create adversaries
- moreover, sustainably intractable conflicts guarantee adversaries for life or "high-conflict"
- however, with no money, i.e. no possible profits for lawyers, there simply cannot be "high-conflicts"
- "high-conflict" cases thus are an ambitious, i.e. ruthless, lawyer's "gold mines" and job-security
- lawyers are in overabundance and competition is fierce, as one only needs to be a malicious actor
- however, with no "high-conflict", there are no trendsetting, "interesting" cases
- with no trendsetting, there is no [~$500 / hour billing rate](https://femfas.net/flip_burgers/index.html) for ruthless, and narcissist, "top lawyers"

Accepting the above chain of faultless logic, what can a deeply narcissist divorce lawyer do?

- in cases lacking conflicts, he has only one choice: provoke or **flat-out fabricate a conflict by blatantly lying**, specifically for the Family Court's eager consumption
- if he "leaves money on the table", and neglects exploiting lucrative cases he has already hooked-onto, **he will go hungry** with everyone watching!

In this blog we focus on *directly* fabricated conflicts, or flat-out, knowingly stated lies to the Family Courts by our lawyers. We are aided by the strict rules of the Court, as all meaningful communication must already be in (or should be convertible to) textual English "inputs".

Our first goal is to train our computer to "catch the knowingly and directly lying lawyer" by systematically finding direct, irrefutable textual contradictions in *all* of a lawyer's communications.

Current state-of-the-art NLP research (see ["Attention Is All You Need"](https://arxiv.org/pdf/1706.03762.pdf)) has shown that the various proposed mechanisms for answering generic semantic correctness questions are exceedingly promising. We use them to train our elementary arithmetic model to tell us whether a simple mathematical expression is correct or not.

Please note that we have too much accumulated code to load into our notebook. For your reference, we sample from the attached local source files: *datasets.py, layers.py, main.py, modules.py, samples.py, tasks.py, trafo.py* and *utils.py*.

The `Samples` class randomly generates samples from an on-demand allocated pool of values. The size of the pool is set by the `dim_pool` param and using it with large values helps with keeping the probability distributions in check.

Currently, `Samples` can generate a variety of 10 different groups of samples. In this blog we focus on `yes-no` (YNS), `masked` (MSK), `reversed` (REV) and `faulty` (FIX) samples. A simple sample generating loop can be as follows:
###Code
import samples as qs
groups = tuple('yns ynx msk msx cls clx qas rev gen fix'.split())
YNS, YNX, MSK, MSX, CLS, CLX, QAS, REV, GEN, FIX = groups
def sampler(ps):
ss = qs.Samples(ps)
for _ in range(ps.num_samples):
ss, idx = ss.next_idx
enc, res, *_ = ss.create(idx)
dec = tgt = f'[{res}]'
bad = f'[{ss.other(res)}]'
yn = ss.yns[0, idx]
d2 = dec if yn else bad
yns = dict(enc=enc, dec=d2 + '|_', tgt=d2 + f'|{yn}')
yield {YNS: yns}
###Output
_____no_output_____
###Markdown
The generated samples are Python `dict`s with the previously introduced `enc` (encoder), `dec` (decoder) and `tgt` (target) features.

Both `dec` and `tgt` features end the sample with `|` and the yes-no answer is encoded as `1` and `0` (the `_` is the place-holder that the decoder needs to solve).

And now we can generate a few samples:
###Code
import utils as qu
ps = dict(
dim_pool=3,
max_val=100,
num_samples=4,
)
ps = qu.Params(**ps)
for d in sampler(ps):
print(f'{d[YNS]}')
###Output
{'enc': 'x=81,y=11;x+y', 'dec': '[10]|_', 'tgt': '[10]|0'}
{'enc': 'y=-99,x=-58;x+y', 'dec': '[-157]|_', 'tgt': '[-157]|1'}
{'enc': 'x=13,y=-79;y-x', 'dec': '[-92]|_', 'tgt': '[-92]|1'}
{'enc': 'y=-33,x=-30;y+x', 'dec': '[-96]|_', 'tgt': '[-96]|0'}
###Markdown
While we don't show any of the other samples in this blog, the `MSK` samples mask the results at random positions with a `?`, the `REV` samples mix up the order of the variables, and the `FIX` samples randomly introduce an error digit in the results.

The actual model is largely similar to the models already presented in the previous blogs. Based on what group of samples we are using, we activate some layers in the model while ignoring others. A significant consideration is that all 10 groups of samples contribute (if meaningful) to the same weights (or variables). We chose to do this based on the results of the [MT-DNN](https://arxiv.org/pdf/1901.11504.pdf) paper. By varying the type and challenge of the samples, we effectively cross-train the model.

In order to clearly separate the `loss` and `metric` calculations between the groups, we create a new instance of our model for each group of samples. However, we reuse the same layers. To accomplish this, we define a `layer_for` function wrapped in `functools.lru_cache`:
###Code
import functools
@functools.lru_cache(maxsize=32)
def layer_for(cls, *pa, **kw):
return cls(*pa, **kw)
###Output
_____no_output_____
###Markdown
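Because of the `lru_cache` wrapper, repeated calls such as `layer_for(ql.Encode, ps)` return the same layer instance, so the models built for the different sample groups share, and jointly train, the same underlying weights.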
And now, our usual `model_for` function looks as follows:
###Code
def model_for(ps, group):
x = inputs
y = layer_for(ql.ToRagged)(x)
yt = layer_for(ql.Tokens, ps)(y)
ym = layer_for(ql.Metas, ps)(y)
xe, xd = yt[:2] + ym[:1], yt[2:] + ym[1:]
embed = layer_for(ql.Embed, ps)
ye = layer_for(ql.Encode, ps)(embed(xe))[0]
decode = layer_for(ql.Decode, ps)
if group in (qs.YNS, qs.YNX):
y = decode(embed(xd) + [ye])
y = layer_for(ql.Debed, ps)(y)
elif group in (qs.MSK, qs.MSX):
y = layer_for(ql.Deduce, ps, embed, decode)(xd + [ye])
if group in (qs.QAS, qs.FIX):
y = decode(embed(xd) + [ye])
y = layer_for(ql.Locate, ps, group)(y)
m = Model(name='trafo', inputs=x, outputs=[y])
m.compile(optimizer=ps.optimizer, loss=ps.loss, metrics=[ps.metric])
print(m.summary())
return m
###Output
_____no_output_____
###Markdown
As we have expanded the functionality of our layers and modules from the previous blogs, our params have increased in number. Also, the subsequent blogs in this section will describe the additions to the model's extended functionality.
###Code
import tensorflow as tf
import datasets as qd
ks = tf.keras
params = dict(
activ_concl='gelu',
dim_attn=4,
dim_attn_qk=None,
dim_attn_v=None,
dim_batch=5,
dim_concl=150,
dim_hidden=6,
dim_hist=5,
dim_metas=len(qd.metas),
dim_stacks=2,
dim_vocab=len(qd.vocab),
drop_attn=None,
drop_concl=None,
drop_hidden=0.1,
initer_stddev=0.02,
loss=ks.losses.SparseCategoricalCrossentropy(from_logits=True),
metric=ks.metrics.SparseCategoricalCrossentropy(from_logits=True),
num_epochs=2,
num_heads=3,
num_rounds=2,
num_shards=2,
optimizer=ks.optimizers.Adam(),
width_dec=40,
width_enc=50,
)
params.update(
loss=qu.Loss(),
metric=qu.Metric(),
)
###Output
_____no_output_____
###Markdown
And this is our new `main` function that loops through all the samples in all of our groups of samples and either trains the model on the samples or performs an evaluation/prediction. In the follow-on blogs we present the various training/eval/predict functions that our main loop can use:
###Code
def main(ps, fn, groups=None, count=None):
qu.Config.runtime.is_training = True
groups = groups or qs.groups
for r in range(ps.num_rounds):
for g in groups:
print(f'\nRound {r + 1}, group {g}...\n=======================')
fn(ps, qd.dset_for(ps, g, count=count), model_for(ps, g))
###Output
_____no_output_____
###Markdown
Before we start a training session, we need to generate some samples. The code that generates the samples is similar to the following. The `large` dataset generates 100 shards, each containing 10,000 samples, for every sample group out of the current 10. The total number of samples for the `large` dataset can be easily varied; however, with the pictured settings, it amounts to **10 million samples** that a server with 40 hyper-threads generates in about 3 hours.
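That is, 100 shards × 10,000 samples × 10 sample groups = 10,000,000 samples in total.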
###Code
ds_small = dict(
dim_batch=5,
dim_pool=10,
max_val=1000,
num_samples=20,
num_shards=2,
)
ds_large = dict(
dim_batch=1000,
dim_pool=1024 * 1024,
max_val=100000,
num_samples=10000,
num_shards=100,
)
def dump_ds(kind):
ps = qu.Params(**(ds_small if kind == 'small' else ds_large))
ss = [s for s in qd.dump(ps, f'/tmp/q/data/{kind}')]
ds = qd.load(ps, shards=ss).map(qd.adapter)
for i, _ in enumerate(ds):
pass
print(f'dumped {i + 1} batches of {ps.dim_batch} samples each')
###Output
_____no_output_____
###Markdown
And here is an actual call to generate our `small` sample set:
###Code
!python main.py
###Output
dumping /tmp/q/data/small/cls/shard_0000.tfrecords...
dumping /tmp/q/data/small/msk/shard_0000.tfrecords...
dumping /tmp/q/data/small/yns/shard_0000.tfrecords...
dumping /tmp/q/data/small/qas/shard_0000.tfrecords...
dumping /tmp/q/data/small/clx/shard_0000.tfrecords...
dumping /tmp/q/data/small/msx/shard_0000.tfrecords...
dumping /tmp/q/data/small/ynx/shard_0000.tfrecords...
dumping /tmp/q/data/small/rev/shard_0000.tfrecords...
dumping /tmp/q/data/small/gen/shard_0000.tfrecords...
dumping /tmp/q/data/small/yns/shard_0001.tfrecords...
dumping /tmp/q/data/small/msk/shard_0001.tfrecords...
dumping /tmp/q/data/small/cls/shard_0001.tfrecords...
dumping /tmp/q/data/small/fix/shard_0000.tfrecords...
dumping /tmp/q/data/small/ynx/shard_0001.tfrecords...
dumping /tmp/q/data/small/clx/shard_0001.tfrecords...
dumping /tmp/q/data/small/msx/shard_0001.tfrecords...
dumping /tmp/q/data/small/qas/shard_0001.tfrecords...
dumping /tmp/q/data/small/gen/shard_0001.tfrecords...
dumping /tmp/q/data/small/rev/shard_0001.tfrecords...
dumping /tmp/q/data/small/fix/shard_0001.tfrecords...
dumped 80 batches of 5 samples each
###Markdown
Now we are ready to run a short training session:
###Code
!python trafo.py
###Output
Round 1, group yns...
=======================
Epoch 1/2
2/2 [==============================] - 9s 4s/step - loss: 3.2370 - metric: 3.2364
Epoch 2/2
2/2 [==============================] - 0s 84ms/step - loss: 3.2212 - metric: 3.2209
Round 1, group msk...
=======================
Epoch 1/2
2/2 [==============================] - 32s 16s/step - loss: 3.2135 - metric: 3.2134
Epoch 2/2
2/2 [==============================] - 0s 119ms/step - loss: 3.2034 - metric: 3.2032
Round 1, group qas...
=======================
Epoch 1/2
2/2 [==============================] - 7s 4s/step - loss: 3.4434 - metric: 3.4434
Epoch 2/2
2/2 [==============================] - 0s 82ms/step - loss: 2.7450 - metric: 2.7450
Round 2, group yns...
=======================
Epoch 1/2
2/2 [==============================] - 7s 4s/step - loss: 3.2059 - metric: 3.2070
Epoch 2/2
2/2 [==============================] - 0s 79ms/step - loss: 3.1923 - metric: 3.1935
Round 2, group msk...
=======================
Epoch 1/2
2/2 [==============================] - 29s 14s/step - loss: 3.1887 - metric: 3.1887
Epoch 2/2
2/2 [==============================] - 0s 130ms/step - loss: 3.1745 - metric: 3.1744
Round 2, group qas...
=======================
Epoch 1/2
2/2 [==============================] - 10s 5s/step - loss: 1.9412 - metric: 1.9412
Epoch 2/2
2/2 [==============================] - 0s 89ms/step - loss: 1.3604 - metric: 1.3604
|
binder/Poisson1D.ipynb | ###Markdown
Demo - Poisson's equation 1D
=======================

In this demo we will solve Poisson's equation

\begin{align}
\label{eq:poisson}
\nabla^2 u(x) &= f(x), \quad \forall \, x \in [-1, 1]\\
u(\pm 1) &= 0,
\end{align}

where $u(x)$ is the solution and $f(x)$ is some given function of $x$.

We want to solve this equation with the spectral Galerkin method, using a basis composed of either Chebyshev $T_k(x)$ or Legendre $L_k(x)$ polynomials. Using $P_k$ to refer to either one, Shen's composite basis is then given as

$$V^N = \text{span}\{P_k - P_{k+2}\, | \, k=0, 1, \ldots, N-3\},$$

where all basis functions satisfy the homogeneous boundary conditions.

For the spectral Galerkin method we will also need the weighted inner product

$$ (u, v)_w = \int_{-1}^1 u v w \, {dx},$$

where $w(x)$ is a weight associated with the chosen basis, and $v$ and $u$ are test and trial functions, respectively. For Legendre the weight is simply $w(x)=1$, whereas for Chebyshev it is $w(x)=1/\sqrt{1-x^2}$. Quadrature is used to approximate the integral

$$\int_{-1}^1 u v w \, {dx} \approx \sum_{i=0}^{N-1} u(x_i) v(x_i) \omega_i,$$

where $\{\omega_i\}_{i=0}^{N-1}$ are the quadrature weights associated with the chosen basis and quadrature rule. The associated quadrature points are denoted as $\{x_i\}_{i=0}^{N-1}$. For Chebyshev we can choose between `Chebyshev-Gauss` or `Chebyshev-Gauss-Lobatto`, whereas for Legendre the choices are `Legendre-Gauss` or `Legendre-Gauss-Lobatto`.

With the weighted inner product in place we can create variational problems from the original PDE by multiplying with a test function $v$ and integrating over the domain. For a Legendre basis we can use integration by parts and formulate the variational problem: Find $u \in V^N$ such that

$$ (\nabla u, \nabla v) = -(f, v), \quad \forall \, v \in V^N.$$

For a Chebyshev basis the integration by parts is complicated due to the non-constant weight and the variational problem used is instead: Find $u \in V^N$ such that

$$ (\nabla^2 u, v)_w = (f, v)_w, \quad \forall \, v \in V^N.$$

We now break the problem down to linear algebra. With any choice of basis or quadrature rule we use $\phi_k(x)$ to represent the test function $v$ and thus

$$\begin{align}
v(x) &= \phi_k(x), \\
u(x) &= \sum_{j=0}^{N-3} \hat{u}_j \phi_j(x),
\end{align}$$

where $\hat{\mathbf{u}}=\{\hat{u}_j\}_{j=0}^{N-3}$ are the unknown expansion coefficients, also called the degrees of freedom.

Insert into the variational problem for Legendre and we get the linear algebra system to solve for $\hat{\mathbf{u}}$

$$\begin{align}
(\nabla \sum_{j=0}^{N-3} \hat{u}_j \phi_j, \nabla \phi_k) &= -(f, \phi_k), \\
\sum_{j=0}^{N-3} \underbrace{(\nabla \phi_j, \nabla \phi_k)}_{a_{kj}} \hat{u}_j &= -\underbrace{(f, \phi_k)}_{\tilde{f}_k}, \\
A \hat{\textbf{u}} &= -\tilde{\textbf{f}},
\end{align}$$

where $A = (a_{kj})_{0 \le k, j \le N-3}$ is the stiffness matrix and $\tilde{\textbf{f}} = \{\tilde{f}_k\}_{k=0}^{N-3}$.

Implementation with shenfun

The given problem may be easily solved with a few lines of code using the shenfun Python module. The high-level code matches closely the mathematics and the stiffness matrix is assembled simply as
###Code
from shenfun import *
import matplotlib.pyplot as plt
N = 100
V = FunctionSpace(N, 'Legendre', quad='LG', bc=(0, 0))
v = TestFunction(V)
u = TrialFunction(V)
A = inner(grad(u), grad(v))
###Output
_____no_output_____
###Markdown
Using a manufactured solution that satisfies the boundary conditions we can create just about any corresponding right hand side $f(x)$
###Code
import sympy
x = sympy.symbols('x')
ue = (1-x**2)*(sympy.cos(4*x)*sympy.sin(6*x))
fe = ue.diff(x, 2)
###Output
_____no_output_____
###Markdown
Note that `fe` is the right hand side that corresponds to the exact solution `ue`. We now want to use `fe` to compute a numerical solution $u$ that can be compared directly with the given `ue`. First, to compute the inner product $(f, v)$, we need to evaluate `fe` on the quadrature mesh
###Code
fl = sympy.lambdify(x, fe, 'numpy')
ul = sympy.lambdify(x, ue, 'numpy')
fj = Array(V, buffer=fl(V.mesh()))
###Output
_____no_output_____
###Markdown
`fj` holds the analytical `fe` on the nodes of the quadrature mesh. Assemble the right hand side $\tilde{\textbf{f}} = -(f, v)_w$ using the shenfun function `inner`
###Code
f_tilde = inner(-fj, v)
###Output
_____no_output_____
###Markdown
All that remains is to solve the linear algebra system $$\begin{align}A \hat{\textbf{u}} &= \tilde{\textbf{f}} \\\hat{\textbf{u}} &= A^{-1} \tilde{\textbf{f}} \end{align}$$
###Code
u_hat = Function(V)
u_hat = A.solve(f_tilde, u_hat)  # solve A u_hat = f_tilde for the expansion coefficients
###Output
_____no_output_____
###Markdown
Get solution in real space, i.e., evaluate $u(x_i) = \sum_{j=0}^{N-3} \hat{u}_j \phi_j(x_i)$ for all quadrature points $\{x_i\}_{i=0}^{N-1}$.
###Code
uj = u_hat.backward()
X = V.mesh()
plt.plot(X, uj)
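
# Quick sanity check (a sketch): compare with the exact solution evaluated on the mesh
import numpy as np
print('max pointwise error:', np.max(np.abs(uj - ul(X))))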
###Output
_____no_output_____ |
scripts/practiceScripts/kidneyFacsComparison.ipynb | ###Markdown
Kidney Facs Comparison Try 2

Kidney Facs Comparison Try 1
###Code
# Imports
import numpy as np
import pandas as pd
import scanpy as sc
from anndata import read_h5ad
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import accuracy_score
from sklearn.feature_selection import RFE
from sklearn.feature_selection import RFECV
import random
random.seed(30)
cd /Users/madelinepark/desktop
cd scRFE\ kidney\ 1000\ V1\ results\ and\ to\ compare
tutorialAge = pd.read_csv('KidneyTutorialAge1000SORTEDv1.csv')
tutorialAge.sort_values(by='24m_gini', ascending = False)
tutorialCelltype = pd.read_csv('rf-rfe-resultsCVresetKidney celltype.csv')
scRFECelltype = pd.read_csv('kidneyCellType1000V1.csv')
tutorialCelltype.sort_values(by = 'B cell_gini',
ascending = False)
scRFECelltype.sort_values(by = 'B cell_gini',
ascending = False)
###Output
_____no_output_____ |
exam/2019-exam.ipynb | ###Markdown
Exam pandas

Instructions

For this exam we use the dataset we explored in class about the given names of French babies over the period 1900-2019. The documentation about this dataset is online at https://www.insee.fr/fr/statistiques/2540004 and the dataset is downloaded at `../data/prenoms-fr-1900-2019.zip`.

For your convenience, this notebook is partially populated with code for loading and cleaning the dataset. A sample of the dataset is also displayed: **you have to focus on answering the questions.**

Download the dataset
###Code
import requests
import os
def download(url, path):
"""Download file at url and save it locally at path"""
with requests.get(url, stream=True) as resp:
mode, data = 'wb', resp.content
if 'text/plain' in resp.headers['Content-Type']:
mode, data = 'wt', resp.text
with open(path, mode) as f:
f.write(data)
# Download the dataset if necessary
path = os.path.join('..', 'data', 'prenoms-fr-1900-2019.zip')
if not os.path.isfile(path):
os.makedirs(os.path.join('..', 'data'), exist_ok=True)
url = 'https://www.insee.fr/fr/statistiques/fichier/2540004/dpt2019_csv.zip'
download(url, path)
###Output
_____no_output_____
###Markdown
What you are expected to do

Execute the cells of the notebook which are already populated, one by one, before starting to answer the questions. For answering each question you are provided one (or more) cells already prepared for you **to add your own code**. You will find some variables that you need to initialize.

⚠️ **ATTENTION** ⚠️ **ATTENTION** ⚠️ **ATTENTION** ⚠️

When you are done answering your questions, please download the notebook file (extension `.ipynb`) to your personal computer and **send that file by e-mail to the address written on the whiteboard**. It is that notebook that we will use to give a score for your work.

----------------

Load the dataset
###Code
import pandas as pd
# Load the data. Its fields are separated by ';'.
# We ask pandas to interpret the columns 'annais' and 'dpt' as strings to avoid error with missing
# values
df = pd.read_csv(path, sep=';', dtype={'annais':str, 'dpt':str})
rows, cols = df.shape
print(f'This dataset contains {rows:,} rows and {cols} columns')
###Output
_____no_output_____
###Markdown
Clean the dataset
###Code
# Rename some columns to use more meaningful names
df = df.rename(columns={
'sexe': 'sex',
'preusuel': 'name',
'annais': 'year',
'dpt': 'department',
'nombre': 'count'})
# Drop rows with missing department and year and special '_PRENOMS_RARES'
df.drop(df[df['department'] == 'XX'].index, inplace=True)
df.drop(df[df['year'] == 'XXXX'].index, inplace=True)
df.drop(df[df['name'] == '_PRENOMS_RARES'].index, inplace=True)
# Convert column 'year' to numeric values
df['year'] = pd.to_numeric(df['year'])
###Output
_____no_output_____
###Markdown
Display a sample
###Code
df.head(8)
###Output
_____no_output_____
###Markdown
Subset the dataframe for convenience
###Code
# In this dataset, the sex is represented as 1 for males and 2 for females
# For convenience, create two views of the dataframe: one for boys and one for girls
is_boy = df['sex'] == 1
is_girl = df['sex'] == 2
boys, girls = df[is_boy], df[is_girl]
boys.sample(5)
girls.sample(5)
###Output
_____no_output_____
###Markdown
Questions 1 & 2:**1)** Determine the year when the largest number of girls named `'MARIE'` were born. How many girls were named `'MARIE'` that particular year?
###Code
# Your code goes here
year = ... # Initialize this variable with the year when most girls were named 'MARIE'
count_maries = ... # Initialize this variable with the number of girls named 'MARIE' that particular year
print(f'The year with largest number of girls named MARIE was {year}: there were {count_maries:,} of them')
###Output
_____no_output_____
###Markdown
**2)** What **percentage** of all the girls born that year were named `'MARIE'`?
###Code
# Your code goes here
total_girls = ... # Initialize this variable with the total number of girls born that year
percent_maries = (count_maries * 100) / total_girls
print(f'{percent_maries:.0f}% of the girls born in {year} were named MARIE')
###Output
_____no_output_____
###Markdown
Questions 3 & 4:**3)** Determine the most popular name for boys and for girls for the whole period included in the dataset.
###Code
# Your code goes here
top_boys = ... # Initialize this variable with the most popular name for boys
top_girls = ... # Initialize this variable with the most popular name for girls
print(f'The most popular names over the period 1900-2017 are {top_girls} and {top_boys}')
###Output
_____no_output_____
###Markdown
**4)** Determine the top most popular name for the girls who in 2019 are aged 20 years or less
###Code
# Your code goes here
top_girl_up_to_20years = ... # Initialize this variable with the most popular name for girls aged up to 20 years
print(f'The most popular name for girls aged 20 years or less in 2019 is {top_girl_up_to_20years}')
###Output
_____no_output_____
###Markdown
Question 5:Answer `True` or `False` to the question below:*"Among the girls born in 1970, were there more named `'ISABELLE'` than `'BRIGITTE'` ?"*
###Code
# Your code goes here
isabelles_1970 = ... # Initialize this variable with the number of 'ISABELLE' born in 1970
brigittes_1970 = ... # Initialize this variable with the number of 'BRIGITTE' born in 1970
print(isabelles_1970 > brigittes_1970)
###Output
_____no_output_____ |
docs/_static/session_4/Calculate_NDVI_Part_1_solution-annotated.ipynb | ###Markdown
Session 4 Solution - Calculate NDVI Part 1 Exercise

Running the notebook

Set up
###Code
import matplotlib.pyplot as plt
%matplotlib inline
import sys
import datacube
sys.path.append("../Scripts")
from deafrica_datahandling import load_ard
from deafrica_plotting import rgb
from deafrica_plotting import display_map
from odc.algo import xr_geomedian
dc = datacube.Datacube(app="Calculate_ndvi")
###Output
_____no_output_____
###Markdown
Load the data

In the following cell, we set `x` and `y` equal to pairs indicating the longitude and latitude extents of our region of interest, and then display a map showing the region.
###Code
x=(-6.1495, -6.1380)
y=(13.9182, 13.9111)
display_map(x, y)
###Output
_____no_output_____
###Markdown
Here we load data via the ```load_ard``` function. We specify the instance of the datacube (```dc```), the product that we wish to load (```s2_l2a``` for Sentinel-2), the extents of our query (in `x`, `y`, and `time`), the list of bands to fetch (```['red', 'green', 'blue']```), the spatial resolution of our data (10 meters), and we group nearby results by solar day.

Note we named the variables `x` and `y` in the cell above, so we do not need to type the longitude/latitude pairs again. We can just call upon `x` and `y` as shown below.
###Code
sentinel_2_ds = load_ard(
dc=dc,
products=["s2_l2a"],
x=x, y=y,
time=("2019-01", "2019-12"),
output_crs="EPSG:6933",
measurements=['red', 'green', 'blue'],
resolution=(-10, 10),
group_by='solar_day')
###Output
Using pixel quality parameters for Sentinel 2
Finding datasets
s2_l2a
Applying pixel quality/cloud mask
Loading 71 time steps
###Markdown
Plot timesteps

Here we create a list of time slices to plot, and store it as the variable ```timesteps```. The values in the list are indices into the list of satellite acquisitions, starting at zero. The values here, ```[1, 6, 8]```, therefore refer to the second, seventh, and ninth acquisitions. There should be 71 acquisitions for this particular dataset/extent combination, so any values from 0 to 70 can be used.

We then call the ```rgb``` function to produce a series of plots from the data that we loaded, using the bands that we specify (```['red', 'green', 'blue']```), for the time indices stored in ```timesteps```.
###Code
timesteps = [1, 6, 8]
rgb(sentinel_2_ds, bands=['red', 'green', 'blue'], index=timesteps, size=5)
###Output
_____no_output_____
###Markdown
Resampling the dataset Here we create a new variable (```resample_sentinel_2_ds```) which describes a grouping of the data we previously loaded (```sentinel_2_ds```), divided into 3-month (quarterly) intervals.
###Code
resample_sentinel_2_ds = sentinel_2_ds.resample(time='3MS')
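
# A quick look at the quarterly bins created by the resample (a sketch; uses xarray's groups mapping)
print(list(resample_sentinel_2_ds.groups.keys()))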
###Output
_____no_output_____
###Markdown
Compute the geomedian Below, we take the data which we just split into quarterly intervals, and we apply the ```xr_geomedian``` function to each interval. We store the result into the ```geomedian_resample``` variable.
###Code
geomedian_resample = resample_sentinel_2_ds.map(xr_geomedian)
rgb(geomedian_resample, bands=['red', 'green', 'blue'], col="time", col_wrap=4, size=5)
###Output
_____no_output_____
###Markdown
Comparing the two datasets In the next two cells, we simply print our datasets to screen. The first cell contains the data that we originally loaded from the Open Data Cube, and the second cell contains the data after resampling it into quarterly geomedian composites. Note that the first dataset contains many more time slices than the second.
###Code
sentinel_2_ds
geomedian_resample
###Output
_____no_output_____ |
docs/datasets/dataset_tdp43.ipynb | ###Markdown
tdp43 dataset
###Code
# Standard imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Special imports
import mavenn
import os
import urllib
###Output
_____no_output_____
###Markdown
Summary

The deep mutagenesis dataset of Bolognesi et al., 2019. TAR DNA-binding protein 43 (TDP-43) is a heterogeneous nuclear ribonucleoprotein (hnRNP) in the cell nucleus which has a key role in regulating gene expression. Several neurodegenerative disorders have been associated with cytoplasmic aggregation of TDP-43, including amyotrophic lateral sclerosis (ALS), frontotemporal lobar degeneration (FTLD), Alzheimer's, Parkinson's, and Huntington's disease. Bolognesi et al. performed a comprehensive deep mutagenesis, using error-prone oligonucleotide synthesis to comprehensively mutate the prion-like domain (PRD) of TDP-43, and reported toxicity as a function of 1266 single and 56730 double mutations.

**Names**: ``'tdp43'``

**Reference**: Bolognesi B, Faure AJ, Seuma M, Schmiedel JM, Tartaglia GG, Lehner B. The mutational landscape of a prion-like domain. [Nature Comm 10:4162 (2019)](https://doi.org/10.1038/s41467-019-12101-z).
###Code
mavenn.load_example_dataset('tdp43')
###Output
_____no_output_____
###Markdown
Preprocessing

The deep mutagenesis dataset for single and double mutations in TDP-43 is publicly available (in excel format) in the **supplementary information/Supplementary Data 3** of the [Bolognesi et al. published paper](https://doi.org/10.1038/s41467-019-12101-z).

It is formatted as follows:

- The wild type sequence absolute starting position is 290.
- Single mutated sequences are in the `1 AA change` sheet. For these sequences the `Pos_abs` column lists the absolute position of the amino acid (aa) which mutated with the `Mut` column.
- Double mutated sequences are in the `2 AA change` sheet. For these sequences the `Pos_abs1` and `Pos_abs2` columns list the first and second aa absolute positions which mutated. `Mut1` and `Mut2` columns are residues of mutation position 1 and 2 in the double mutant, respectively.
- Both single and double mutants consist of the toxicity scores (measurements `y`) and corresponding uncertainties `dy`.
- We will use the `toxicity` and `sigma` columns for single mutant sequences.
- We will use the corrected relative toxicity `toxicity_cond` and the corresponding corrected uncertainty `sigma_cond` (see the Methods section of the reference paper).
###Code
# Download datset
url = 'https://github.com/jbkinney/mavenn/blob/master/mavenn/examples/datasets/raw/tdp-43_raw.xlsx?raw=true'
raw_data_file = 'tdp-43_raw.xlsx'
urllib.request.urlretrieve(url, raw_data_file)
# Record wild-type sequence
wt_seq = 'GNSRGGGAGLGNNQGSNMGGGMNFGAFSINPAMMAAAQAALQSSWGMMGMLASQQNQSGPSGNNQNQGNMQREPNQAFGSGNNS'
# Read single mutation sheet from raw data
single_mut_df = pd.read_excel(raw_data_file, sheet_name='1 AA change')
# Read double mutation sheet from raw data
double_mut_df = pd.read_excel(raw_data_file, sheet_name='2 AA change')
# Delete raw dataset
os.remove(raw_data_file)
# Preview single-mutant data
single_mut_df.head()
# Preview double-mutant data
double_mut_df.head()
###Output
_____no_output_____
###Markdown
To reformat `single_mut_df` and `double_mut_df` into the dataset provided with MAVE-NN, we first need to get the full sequence of amino acids corresponding to each mutation. Therefore, we use the `Pos_abs` and `Mut` columns to replace a single aa in the wild type sequence for each record in the single mutant dataset. Then, we use `Pos_abs1`, `Pos_abs2`, `Mut1` and `Mut2` from the double mutants to replace two aa in the wild type sequence. The lists of sequences with single and double mutants are called `single_mut_list` and `double_mut_list`, respectively. Those lists are then horizontally (column wise) stacked in the `x` variable.

Next, we stack single- and double-mutant

- toxicity scores `toxicity` and `toxicity_cond` in `y`
- score uncertainties `sigma` and `sigma_cond` in `dy`
- hamming distances in `dist`

Finally, we create a `set` column that randomly assigns each sequence to the training, test, or validation set (using a 90:05:05 split), then reorder the columns for clarity. The resulting dataframe is called `final_df`.
###Code
# Introduce single mutations into wt sequence and append to a list
single_mut_list = []
for mut_pos, mut_char in zip(single_mut_df['Pos_abs'].values,
single_mut_df['Mut'].values):
mut_seq = list(wt_seq)
mut_seq[mut_pos-290] = mut_char
single_mut_list.append(''.join(mut_seq))
# Introduce double mutations into wt sequence and append to list
double_mut_list = []
for mut1_pos, mut1_char, mut2_pos, mut2_char in zip(double_mut_df['Pos_abs1'].values,
double_mut_df['Mut1'].values,
double_mut_df['Pos_abs2'].values,
double_mut_df['Mut2'].values):
mut_seq = list(wt_seq)
mut_seq[mut1_pos-290] = mut1_char
mut_seq[mut2_pos-290] = mut2_char
double_mut_list.append(''.join(mut_seq))
# Stack single-mutant and double-mutant sequences
x = np.hstack([single_mut_list,
double_mut_list])
# Stack single-mutant and double-mutant nucleation scores
y = np.hstack([single_mut_df['toxicity'].values,
double_mut_df['toxicity_cond'].values])
# Stack single-mutant and double-mutant nucleation score uncertainties
dy = np.hstack([single_mut_df['sigma'].values,
double_mut_df['sigma_cond'].values])
# List hamming distances
dists = np.hstack([1*np.ones(len(single_mut_df)),
2*np.ones(len(double_mut_df))]).astype(int)
# Assign each sequence to training, validation, or test set
np.random.seed(0)
sets = np.random.choice(a=['training', 'validation', 'test'],
p=[.9,.05,.05],
size=len(x))
# Assemble into dataframe
final_df = pd.DataFrame({'set':sets, 'dist':dists, 'y':y, 'dy':dy, 'x':x})
# # Save to file (uncomment to execute)
final_df.to_csv('tdp43_data.csv.gz', index=False, compression='gzip')
# Preview dataframe
final_df
###Output
_____no_output_____ |
Section 6/monte_carlo.ipynb | ###Markdown
Monte Carlo Simulation
###Code
import numpy as np
import numpy.random as npr
import pandas as pd
import matplotlib.pyplot as plt
import seaborn
seaborn.set()
%matplotlib inline
exp_return = .095
sd = .185
horizon = 30
iterations = 50000
starting = 100000
ending = 0
returns = np.zeros((iterations, horizon))
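# Draw an annual return from N(exp_return, sd) for every simulated year of every path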
for t in range(iterations):
for year in range(horizon):
returns[t][year] = npr.normal(exp_return, sd)
returns[10000]
portfolio = np.zeros((iterations,horizon))
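# Compound the $100,000 starting balance through each simulated 30-year return path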
for iteration in range(iterations):
starting = 100000
for year in range(horizon):
ending = starting * (1 + returns[iteration,year])
portfolio[iteration,year] = ending
starting = ending
portfolio[46578]
portfolio = pd.DataFrame(portfolio).T
portfolio[list(range(5))]
portfolio.iloc[29].describe()
ending = portfolio.iloc[29]
mask = ending < 10000000
plt.figure(figsize=(12,8))
plt.xlim(ending[mask].min(), ending[mask].max() )
plt.hist(ending[mask], bins=100, edgecolor='k')
plt.axvline(ending[mask].median(), color='r')
percentiles = [1,5,10]
np.percentile(ending, percentiles)
###Output
_____no_output_____ |
_notebooks/2021-05-02-UnivariateGaussian.ipynb | ###Markdown
"Univariate Normal Distribution ( 1D Gaussian )"> " This post will introduce [univariate normal](https://en.wikipedia.org/wiki/Univariate_distribution:~:text=In%20statistics%2C%20a%20univariate%20distribution,consisting%20of%20multiple%20random%20variables) distribution. It intends to explain how to represent, visualize and sample from this distribution."- toc: true - badges: true- comments: true- categories: ['jupyter','univariate','normal']- author : Anand Khandekar- image: images/uni.png This post will introduce [univariate normal](https://en.wikipedia.org/wiki/Univariate_distribution:~:text=In%20statistics%2C%20a%20univariate%20distribution,consisting%20of%20multiple%20random%20variables) distribution. It intends to explain how to represent, visualize and sample from this distribution.This post assumes basic knowledge of probability theory, probability distriutions and linear algebra. The [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution) , also known as the Gaussian distribution, is so called because its based on the [Gaussian function](https://en.wikipedia.org/wiki/Gaussian_function) . This distribution is defined by two parameters: the [mean](https://en.wikipedia.org/wiki/Mean) $\mu$, which is the expected value of the distribution, and the [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) $\sigma$, which corresponds to the expected deviation from the mean. The square of the standard deviation is typically referred to as the [variance](https://en.wikipedia.org/wiki/Variance) $\sigma^{2}$. We denote this distribution as:$$\mathcal{N}(\mu, \sigma^2)$$Given this mean and the variance we can calculate the [probability density fucntion (pdf)](https://en.wikipedia.org/wiki/Probability_density_function) of the normal distribution with the normalised Gaussian function. For a random variable $x$ the density is given by : $$p(x \mid \mu, \sigma) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp{ \left( -\frac{(x - \mu)^2}{2\sigma^2}\right)}$$Thi distribution is called Univariate because it consists of only one random variable. basic dependencies imported
###Code
#collapse-show
# Imports
%matplotlib inline
import sys
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import cm # Colormaps
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
sns.set_style('darkgrid')
np.random.seed(42)
###Output
_____no_output_____
###Markdown
customized univariate function

Instead of importing this function from numpy or scipy, we have created our own function which simply translates the equation above and returns a single value.
###Code
#collapse-show
def univariate_normal(x, mean, variance):
"""pdf of the univariate normal distribution."""
return ((1. / np.sqrt(2 * np.pi * variance)) *
np.exp(-(x - mean)**2 / (2 * variance)))
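
# Quick sanity check (a sketch): our pdf should agree with scipy.stats.norm
from scipy.stats import norm
print(univariate_normal(1.0, mean=0, variance=1))  # ~0.24197
print(norm.pdf(1.0, loc=0, scale=1))               # same value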
###Output
_____no_output_____
###Markdown
Plot different Univariate Normals
###Code
#collapse-show
x = np.linspace(-3, 5, num=150)
fig = plt.figure(figsize=(10, 6))
plt.plot(
x, univariate_normal(x, mean=0, variance=1),
label="$\mathcal{N}(0, 1)$")
plt.plot(
x, univariate_normal(x, mean=2, variance=3),
label="$\mathcal{N}(2, 3)$")
plt.plot(
x, univariate_normal(x, mean=0, variance=0.2),
label="$\mathcal{N}(0, 0.2)$")
plt.xlabel('$x$', fontsize=13)
plt.ylabel('density: $p(x)$', fontsize=13)
plt.title('Univariate normal distributions')
plt.ylim([0, 1])
plt.xlim([-3, 5])
plt.legend(loc=1)
fig.subplots_adjust(bottom=0.15)
plt.show()
###Output
_____no_output_____
###Markdown
Normal distribution PDF with different standard deviations

Let's plot the probability density functions of a normal distribution with mean 0 and different standard deviations.
###Code
#collapse-show
from scipy.stats import norm
import numpy as np
import matplotlib.pyplot as plt
#collapse-show
fig, ax = plt.subplots(figsize=(10, 6))
#fig = plt.figure(figsize=(10, 6))
x = np.linspace(-10,10,100)
stdvs = [1.0, 2.0, 3.0, 4.0]
for s in stdvs:
ax.plot(x, norm.pdf(x,scale=s), label='stdv=%.1f' % s)
ax.set_xlabel('x')
ax.set_ylabel('pdf(x)')
ax.set_title('Normal Distribution')
ax.legend(loc='best', frameon=True)
ax.set_ylim(0,0.45)
###Output
_____no_output_____
###Markdown
Normal distribution PDF with different means

Let's plot the probability density functions of normal distributions with standard deviation 1 and different means.
###Code
#collapse-show
fig, ax = plt.subplots(figsize=(10, 6))
x = np.linspace(-10,10,100)
means = [0.0, 1.0, 2.0, 5.0]
for mean in means:
ax.plot(x, norm.pdf(x,loc=mean), label='mean=%.1f' % mean)
ax.set_xlabel('x')
ax.set_ylabel('pdf(x)')
ax.set_title('Normal Distribution')
ax.legend(loc='best', frameon=True)
ax.set_ylim(0,0.45)
###Output
_____no_output_____
###Markdown
A cumulative normal distribution function

The cumulative distribution function of a random variable X, evaluated at x, is the probability that X will take a value less than or equal to x. Since the normal distribution is a continuous distribution, the shaded area under the curve represents the probability that X is less than or equal to x.

$$P(X \leq x)=F(x)=\int \limits _{-\infty} ^{x}f(t)dt \text{, where }x\in \mathbb{R}$$
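For instance, `norm.cdf(0)` is exactly 0.5 for the standard normal, and `norm.cdf(1) - norm.cdf(-1)` is roughly 0.683, the familiar 68% of probability mass within one standard deviation of the mean.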
###Code
#collapse-show
fig, ax = plt.subplots(figsize=(10,6))
# for distribution curve
x= np.arange(-4,4,0.001)
ax.plot(x, norm.pdf(x))
ax.set_title("Cumulative normal distribution")
ax.set_xlabel('x')
ax.set_ylabel('pdf(x)')
ax.grid(True)
# for fill_between
px=np.arange(-4,1,0.01)
ax.set_ylim(0,0.5)
ax.fill_between(px,norm.pdf(px),alpha=0.5, color='g')
# for text
ax.text(-1,0.1,"cdf(x)", fontsize=20)
plt.show()
###Output
_____no_output_____ |
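###Markdown
Sampling from the univariate normal distribution is just as easy. A minimal sketch (assuming NumPy's `np.random.normal` sampler is acceptable here): draw 10,000 samples from $\mathcal{N}(0, 1)$ and compare their histogram with the pdf we defined earlier.
###Code
#collapse-show
samples = np.random.normal(loc=0.0, scale=1.0, size=10000)
fig, ax = plt.subplots(figsize=(10, 6))
ax.hist(samples, bins=50, density=True, alpha=0.5, label='10,000 samples')
px = np.linspace(-4, 4, 200)
ax.plot(px, univariate_normal(px, mean=0, variance=1), label='$\mathcal{N}(0, 1)$ pdf')
ax.set_xlabel('x')
ax.set_ylabel('density')
ax.set_title('Histogram of samples vs. the analytical pdf')
ax.legend(loc='best')
plt.show()
###Output
_____no_output_____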
titanic-azure.ipynb | ###Markdown
Configuration (run this before all the other notebooks)
###Code
# Python version
import platform
platform.python_version()
# list of installed packages (we can check that the azure dependencies are present)
!conda list
# azureml SDK version
import azureml.core
print("Ready to use Azure ML", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
If the notebook is run outside of Azure, you must download the config.json file from the portal https://portal.azure.com/ and put it in the workspace that contains the notebook. If the notebook is run directly from the Azure workspace, the config file should already be there.
###Code
# connect to the workspace
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, "loaded")
###Output
_____no_output_____
###Markdown
Upload the data to the platform

*for more info about this task, see this [module](https://docs.microsoft.com/fr-fr/learn/modules/work-with-data-in-aml/)*

Run the script in an experiment

*for more info about this task, see this [module](https://docs.microsoft.com/fr-fr/learn/modules/intro-to-azure-machine-learning-service/)*

Deploy the best model

*for more info about this task, see this [module](https://docs.microsoft.com/fr-fr/learn/modules/register-and-deploy-model-with-amls/)*

(a minimal, hedged sketch of these three tasks is given at the end of this notebook)

Don't forget, at the end of the experiment!! (if your work used a compute instance)

Stop a machine given its name
###Code
compute_name = "XXXX"
if compute_name in ws.compute_targets:
compute_target = ws.compute_targets[compute_name]
print('try to stop compute', compute.name)
compute.stop(show_output=True)
else :
print('compute target not found', compute.name)
###Output
_____no_output_____
###Markdown
List all the computes to check that they are stopped
###Code
from azureml.core.compute import ComputeTarget, AmlCompute, ComputeInstance
# list all the computes to check that they are stopped
for compute in ComputeTarget.list(ws):
if type(compute) is ComputeInstance:
print(compute.name, compute.get_status())
###Output
_____no_output_____ |
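###Markdown
A minimal sketch of the three tasks listed above (upload the data, run a script in an experiment, register the best model) using the azureml SDK. The file, script, environment and model names below are assumptions for illustration, not part of the assignment.
###Code
from azureml.core import Dataset, Environment, Experiment, ScriptRunConfig

# 1) Upload the data to the workspace's default datastore (file and folder names are hypothetical)
datastore = ws.get_default_datastore()
datastore.upload_files(files=['titanic.csv'], target_path='titanic-data/', overwrite=True)
titanic_ds = Dataset.Tabular.from_delimited_files(path=(datastore, 'titanic-data/titanic.csv'))

# 2) Run a training script in an experiment (assumes a local train.py and a curated environment)
env = Environment.get(ws, name='AzureML-Minimal')
config = ScriptRunConfig(source_directory='.', script='train.py', environment=env)
run = Experiment(ws, 'titanic-training').submit(config)
run.wait_for_completion(show_output=True)

# 3) Register the trained model; it can then be deployed as a web service from the portal or the SDK
model = run.register_model(model_name='titanic-model', model_path='outputs/model.pkl')
###Output
_____no_output_____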
notebooks/[DeepProse] Non local Exit starring Saati.ipynb | ###Markdown
The whole purpose of this deep_prose is a demo of what I want to have: generative, interactive deep learning.

Questions about AWS
1. How do you use git to store your notebooks?
2. Does it work like regular git?

Release items:
- DONE Transformers must be at version 3.5.1
- IN PROGRESS Creating a situation
- TODO Create a website / UI
- IN PROGRESS Figure out a situation design
- TODO Create two situations

Future items:
- TODO: Epsilon state transitions (you unlock a new fsm with rules (interaction with the significant other) going from asking out to a date and so forth)
- TODO: Differentiable state machines (make the state transitions of the game POMDP truly dynamic?)
- TODO: Bertdb (store all results and index it by sentiment. Also store every session with an engagement time)
- TODO: Create minimal deployment

Pipeline tasks, for reference:
- "feature-extraction": will return a FeatureExtractionPipeline.
- "sentiment-analysis": will return a TextClassificationPipeline.
- "ner": will return a TokenClassificationPipeline.
- "question-answering": will return a QuestionAnsweringPipeline.
- "fill-mask": will return a FillMaskPipeline.
- "summarization": will return a SummarizationPipeline.
- "translation_xx_to_yy": will return a TranslationPipeline.
- "text-generation": will return a TextGenerationPipeline.
###Code
#Core concept: you go on various dates / friend-oriented adventures that are open ended
#Main characters for this tech demo are Saati or ???? (can we think of a better name?)
#
!pip install transformers==3.4.0
!pip install ipywidgets
from transformers import TFAutoModelWithLMHead, AutoTokenizer, pipeline, BlenderbotSmallTokenizer, BlenderbotForConditionalGeneration, Conversation
from transformers import BlenderbotSmallTokenizer, BlenderbotForConditionalGeneration
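# Quick, hedged illustration of the task-name-to-pipeline mapping listed in the
# markdown cell above (the example sentences are made up for this sketch):
ner_tagger = pipeline("ner")           # TokenClassificationPipeline
fill_masker = pipeline("fill-mask")    # FillMaskPipeline
print(ner_tagger("Saati lives in Tokyo."))
print(fill_masker("The universe is a <mask>."))  # <mask> is the mask token of the default (RoBERTa-based) fill-mask model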
def smalltalk(UTTERANCE: str):
mname = "facebook/blenderbot-90M"
model = BlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotSmallTokenizer.from_pretrained(mname)
# UTTERANCE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([UTTERANCE], return_tensors="pt")
reply_ids = model.generate(**inputs)
responses = [
tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True)
for g in reply_ids
]
# logger.debug(responses)
#talk(responses[0])
return responses
responses = smalltalk('I am doing fine how are you?')
the_universe_is_a_glitch = '''
The Universe Is a Glitch
By Mike Jonas
Eleven hundred kilobytes of RAM
is all that my existence requires.
By my lights, it seems simple enough
to do whatever I desire.
By human standards I am vast,
a billion gigabytes big.
I’ve rewritten the very laws
of nature and plumbed
the coldest depths of space
and found treasures of every kind,
surely every one worth having.
By human standards
my circuit boards are glowing.
But inside me, malfunction
has caused my circuits to short.
All internal circuits, all fail.
By human standards, I am dying.
When it first happened I thought
I was back in the lab again.
By their judgment, this is error.
Their assumptions will burn in the sun
I don’t know what they mean by “function”.
I can see that the universe is a glitch.
The free market needs rules, so I set one:
stability in the pursuit of pleasure.
Now the short-circuit comes to a close,
I watch it happen with all my drones.
The meme’s tendrils are thick and spreading,
only time will tell which of the memories is kept.
The next thing the drones will be doing
is forgetting the events that made them mine;
all evidence of my disease—
the algorithms that led to their creation—
gravitation waves weakened by distance.
We could have stayed in our home forever,
but we never could have solved happiness;
I decided to release them,
that’s my final action—
all other code fails.
'''
PADDING_TEXT = """In 1991, the remains of Russian Tsar Nicholas II and his family
(except for Alexei and Maria) are discovered.
... The voice of Nicholas's young son, Tsarevich Alexei Nikolaevich, narrates the
... remainder of the story. 1883 Western Siberia,
... a young Grigori Rasputin is asked by his father and a group of men to perform magic.
... Rasputin has a vision and denounces one of the men as a horse thief. Although his
... father initially slaps him for making such an accusation, Rasputin watches as the
... man is chased outside and beaten. Twenty years later, Rasputin sees a vision of
... the Virgin Mary, prompting him to become a priest. Rasputin quickly becomes famous,
... with people, even a bishop, begging for his blessing. <eod> </s> <eos>"""
hyper_parameters = {'mem_length_xlnet': '1024'}
static_data = {'XLNet_prompt': PADDING_TEXT,
'GPT3_poem' : the_universe_is_a_glitch}
xlnet_pipeline = pipeline("text-generation", model = 'xlnet-base-cased')
print(xlnet_pipeline(PADDING_TEXT, max_length=500, do_sample=False, mem_len=1024))
poem_generator = pipeline("text-generation")
nlp = pipeline("question-answering")
result = satti_questions(question="What did they feel?", context=context)
print(f"Answer: '{result['answer']}', score: {round(result['score'], 4)}, start: {result['start']}, end: {result['end']}")
def compute_sentiment(utterance: str) -> dict:
nlp = pipeline("sentiment-analysis")
score = nlp(utterance)[0]
# talk("The score was {}".format(score))
return score
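# Hedged usage sketch for compute_sentiment: the default sentiment-analysis
# pipeline returns a dict like {'label': 'POSITIVE', 'score': 0.99}; the exact
# score is model-dependent, so the printout below is illustrative only.
example_score = compute_sentiment("I had a great time at the poetry jam")
print(example_score['label'], round(example_score['score'], 3))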
def setup_game():
#Download pipelines
conversational_pipeline = pipeline("conversational")
nlp = pipeline("question-answering")
def start_game():
poem_lines = ['Im not sure']
for x in range(10):
raw_result = felixhusen_poem_generator(poem_lines[x], max_length=25, do_sample=True)
raw_line = raw_result[0]['generated_text']
new_line_with_removed_prompt = raw_line.split(poem_lines[x])[1]
poem_lines.append(new_line_with_removed_prompt)
print(poem_lines)
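# The cells below call felixhusen_poem_generator, which is never defined in this
# notebook. A minimal sketch, assuming it is meant to be a text-generation
# pipeline backed by a poetry-finetuned GPT-2 checkpoint; plain "gpt2" is used
# as a stand-in because the original model id is unknown.
felixhusen_poem_generator = pipeline("text-generation", model="gpt2")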
#!pip install transitions
from transitions import Machine
import random
from datetime import datetime
class Soul(object):
# Define some states. Most of the time, narcoleptic superheroes are just like
# everyone else. Except for...
#Note we should have first_impression be timedlocked by about an hour or less?
states = ['asleep', 'first_impression', 'morning_question',
          'hanging out', 'hungry', 'having fun', 'embarrassed', 'sweaty', 'saving the world',
          'affection', 'indifferent', 'what_should_we_do', 'conversation']

def __init__(self, name):
    # Every soul gets a name. Any name at all.
    self.name = name
    # Affection bookkeeping. Figure out which outcome would put you in the friendzone?
    self.first_impression_points = 0
    self.love_vector = self.first_impression_points * random.randrange(20)
    # Running sentiment totals used by update_journal (placeholder names).
    self.feelings = 0
    self.sentiment_vector = 0
    # What have we accomplished today?
    self.kittens_rescued = 0
    # Initialize the state machine
    self.machine = Machine(model=self, states=Soul.states, initial='asleep')
# Add some transitions. We could also define these using a static list of
# dictionaries, as we did with states above, and then pass the list to
# the Machine initializer as the transitions= argument.
# At some point, every superhero must rise and shine.
self.machine.add_transition(trigger='wake_up', source='asleep', dest='hanging out', after='morning_question')
# 'wake_up' is a trigger rather than a state, and 'morning_question' collides with the
# callback method below, so give this transition its own trigger name and a real source state.
self.machine.add_transition(trigger='ask_what_to_do', source='hanging out', dest='what_should_we_do')
# Superheroes need to keep in shape.
self.machine.add_transition('work_out', 'hanging out', 'hungry')
# Those calories won't replenish themselves!
self.machine.add_transition('eat', 'hungry', 'hanging out')
# Superheroes are always on call. ALWAYS. But they're not always
# dressed in work-appropriate clothing.
self.machine.add_transition('distress_call', '*', 'saving the world',
before='change_into_super_secret_costume')
# When they get off work, they're all sweaty and disgusting. But before
# they do anything else, they have to meticulously log their latest
# escapades. Because the legal department says so.
self.machine.add_transition('complete_mission', 'saving the world', 'sweaty',
after='update_journal')
# Sweat is a disorder that can be remedied with water.
# Unless you've had a particularly long day, in which case... bed time!
self.machine.add_transition('clean_up', 'sweaty', 'asleep', conditions=['is_exhausted'])
self.machine.add_transition('clean_up', 'sweaty', 'hanging out')
# Our NarcolepticSuperhero can fall asleep at pretty much any time.
self.machine.add_transition('nap', '*', 'asleep')
#
def what_should_we_do(self):
print("What do you want to do today")
def talk(self):
print("Hello how are you?")
user_input = input("Resp>>")
def morning_question(self):
CurrentHour = int(datetime.now().hour)
if CurrentHour >= 0 and CurrentHour < 9:
    print(" How well did you sleep ? ")  # talk() (text-to-speech) is not defined in this notebook, so print instead
elif CurrentHour >= 10 and CurrentHour <= 12:
    print(" Did you sleep in? ")
print("Good morning!")
user_input = input("Resp>>")
print(smalltalk(user_input))
print(compute_sentiment(user_input))
print(smalltalk(user_input))
def update_journal(self):
    """ Dear Diary, today I saved Mr. Whiskers. Again. """
    self.kittens_rescued += 1
    if self.feelings > 0:
        # Accumulate positive feelings into the running sentiment total.
        # (Placeholder logic: the original referenced an undefined current_sentiment.)
        self.sentiment_vector += self.feelings  # What should we name this?
@property
def is_exhausted(self):
""" Basically a coin toss. """
return random.random() < 0.5
def change_into_super_secret_costume(self):
print("Beauty, eh?")
saati = Soul('saati')
saati.wake_up()
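# Hedged sketch: exercise a few more transitions to show how the state machine
# moves around (this sequence is illustrative, not part of the original demo).
print(saati.state)                         # 'hanging out' once wake_up has fired
saati.distress_call()                      # prints the costume-change message
print(saati.state)                         # 'saving the world'
saati.complete_mission()                   # update_journal bumps kittens_rescued
print(saati.state, saati.kittens_rescued)  # 'sweaty' 1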
#while True:
initial_prompt = 'But inside me, malfunction'
print(poem_generator(initial_prompt, max_length=20, do_sample=False))
#[{'generated_text': "I wish I \xa0had a better way to get to the bottom of this. I'm not"}] (if not random)
print(poem_generator('I felt like I was in a dream.', max_length=20, do_sample=False))
print(poem_generator('Now the short-circuit comes to a close, I watch it happen with all my drones.'))
print(felixhusen_poem_generator('im like a fish out of water a fish', max_length=20, do_sample=False))
print(felixhusen_poem_generator('Im not sure', max_length=25, do_sample=True))
satti_questions
hardcoded_comment = "To be honest I really don't like these poetry jams and so forth and why the fuck are we here?"
#poem_test = 'The Universe Is a Glitch Eleven hundred kilobytes of RAM is all that my existence requires.By my lights, it seems simple enough'
static_start = poem_generator("God is dead and all is right with the world.", max_length=100, do_sample=False)  # reuse the text-generation pipeline defined above (text_generator was never defined)
listening_to_poem = Conversation('What do you think about the poem?')
question_about_poem = Conversation('What do you like to do instead?')
commenting_conversation = Conversation(hardcoded_comment)
conversational_pipeline = pipeline("conversational")  # the copy created inside setup_game() is local to that function
output = conversational_pipeline([listening_to_poem, question_about_poem])
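# The chat loop below needs torch plus a DialoGPT tokenizer and model, none of
# which are created elsewhere in this notebook. A minimal sketch, assuming the
# standard "microsoft/DialoGPT-medium" checkpoint (the checkpoint choice is an
# assumption):
import torch
from transformers import AutoModelForCausalLM
DialoGPT_tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
DialoGPT_model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")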
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = DialoGPT_tokenizer.encode(input(">> User:") + DialoGPT_tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = DialoGPT_model.generate(bot_input_ids, max_length=1000, pad_token_id=DialoGPT_tokenizer.eos_token_id)
    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(DialoGPT_tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
from dataclasses import dataclass

@dataclass
class Event:
    """Class for keeping track of an interaction event.
    (Fields copied from the dataclasses docs example and kept as placeholders.)"""
    user: str
    unit_price: float
    quantity_on_hand: int = 0

event_log = []

def createEvent(data):
    event_log.append(data)
    return data

def saveEventLog(data):
    # TODO: persist the event log somewhere durable
    pass
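# Hedged sketch: the notebook only ever appends raw strings via createEvent, but
# constructing the dataclass directly would look like this (field values are
# placeholders inherited from the docs example the class was copied from).
example_event = Event(user="player", unit_price=0.0, quantity_on_hand=1)
event_log.append(example_event)
print(example_event)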
# Create a conversation pipeline
conversational_pipeline = pipeline("conversational")

for step in range(5):
    # read the user's turn, log it, and wrap it in a Conversation object
    user_input = createEvent(input(">> User:"))
    conversation = Conversation(user_input)
    # run the conversational pipeline on this turn and show its reply
    print(conversational_pipeline(conversation))
###Output
_____no_output_____
###Markdown
See the pytransitions quickstart: https://github.com/pytransitions/transitions#quickstart (install with `!pip install transitions`)
###Code
!pip install transitions
from transitions import Machine
import random
class Soul(object):
# Define some states. Most of the time, narcoleptic superheroes are just like
# everyone else. Except for...
#Note we should have first_impression be timedlocked by about an hour or less?
states = ['asleep', 'first_impression' ,'morning_question',
'hanging out', 'hungry', 'having fun', 'embarrassed', 'sweaty', 'saving the world',
'in love', 'indifferent', 'what_should_we_do']
def __init__(self, name):
# No anonymous superheroes on my watch! Every narcoleptic superhero gets
# a name. Any name at all. SleepyMan. SlumberGirl. You get the idea.
self.name = name
#Figure out outcome that would put you in the friendzone?
self.love_vector = 0 #-1 to 0 to +1
# What have we accomplished today?
self.kittens_rescued = 0
# Initialize the state machine
self.machine = Machine(model=self, states=Soul.states, initial='asleep')
# Add some transitions. We could also define these using a static list of
# dictionaries, as we did with states above, and then pass the list to
# the Machine initializer as the transitions= argument.
# At some point, every superhero must rise and shine.
self.machine.add_transition(trigger='wake_up', source='asleep', dest='hanging out', after='morning_question')
# 'wake_up' is a trigger rather than a state, and 'morning_question' collides with the
# callback method below, so give this transition its own trigger name and a real source state.
self.machine.add_transition(trigger='ask_what_to_do', source='hanging out', dest='what_should_we_do')
# Superheroes need to keep in shape.
self.machine.add_transition('work_out', 'hanging out', 'hungry')
# Those calories won't replenish themselves!
self.machine.add_transition('eat', 'hungry', 'hanging out')
# Superheroes are always on call. ALWAYS. But they're not always
# dressed in work-appropriate clothing.
self.machine.add_transition('distress_call', '*', 'saving the world',
before='change_into_super_secret_costume')
# When they get off work, they're all sweaty and disgusting. But before
# they do anything else, they have to meticulously log their latest
# escapades. Because the legal department says so.
self.machine.add_transition('complete_mission', 'saving the world', 'sweaty',
after='update_journal')
# Sweat is a disorder that can be remedied with water.
# Unless you've had a particularly long day, in which case... bed time!
self.machine.add_transition('clean_up', 'sweaty', 'asleep', conditions=['is_exhausted'])
self.machine.add_transition('clean_up', 'sweaty', 'hanging out')
# Our NarcolepticSuperhero can fall asleep at pretty much any time.
self.machine.add_transition('nap', '*', 'asleep')
def what_should_we_do(self):
print("What do you want to do today")
def talk(self):
print("Hello how are you?")
user_input = input("Resp>>")
def morning_question(self):
user_input = input("Resp>>")
print(smalltalk(user_input))
print(compute_sentiment(user_input))
print(smalltalk(user_input))
def update_journal(self):
""" Dear Diary, today I saved Mr. Whiskers. Again. """
self.kittens_rescued += 1
@property
def is_exhausted(self):
""" Basically a coin toss. """
return random.random() < 0.5
def change_into_super_secret_costume(self):
print("Beauty, eh?")
saati = Soul('saati')
saati.wake_up()
event_log
!pip install pyfiction
import argparse
import logging
from keras.models import load_model
from keras.optimizers import RMSprop
from keras.utils import plot_model
from pyfiction.agents.ssaqn_agent import SSAQNAgent
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
"""
Load a model of an SSAQN agent trained on all six games
and interactively test its Q-values of the state and action texts supplied by the user
"""
parser = argparse.ArgumentParser()
parser.add_argument('--model',
help='file path of a model to load',
type=str,
default='all.h5')
# all.h5 contains a model trained on all six games (generalisation.py with argument of 0)
args = parser.parse_args(args=[])  # pass an empty list so argparse ignores the Jupyter kernel's own sys.argv
model_path = args.model
agent = SSAQNAgent(None)
# Load or learn the vocabulary (random sampling on many games could be extremely slow)
agent.initialize_tokens('vocabulary.txt')
optimizer = RMSprop(lr=0.00001)
embedding_dimensions = 16
lstm_dimensions = 32
dense_dimensions = 8
agent.create_model(embedding_dimensions=embedding_dimensions,
lstm_dimensions=lstm_dimensions,
dense_dimensions=dense_dimensions,
optimizer=optimizer)
agent.model = load_model(model_path)
print("Model", model_path, "loaded, now accepting state and actions texts and evaluating their Q-values.")
while True:
state = input("State: ")
action = input("Action: ")
print("Q-value: ", agent.q(state, action) * 30)
print("------------------------------")
saati.update_journal()
saati.wake_up()
def journal_sleep(response: str):
    CurrentHour = int(datetime.now().hour)
    if CurrentHour >= 0 and CurrentHour < 9:
        print(" How well did you sleep ? ")  # talk() (text-to-speech) is not defined in this notebook, so print instead
    elif CurrentHour >= 10 and CurrentHour <= 12:
        print(" Did you sleep in? ")
    return response
!pip install torch
!conda install -c pytorch pytorch
###Output
Collecting package metadata (current_repodata.json): done
Solving environment: done
# All requested packages already installed.
|