path | concatenated_notebook
---|---|
WineRec_TrainingSet_overview.ipynb | ###Markdown
###Code
#Import library
import pandas as pd
#Read both train and test files into separate data structures
train_file = pd.read_csv('train_70.csv',
sep=',',
names = ['user_id','item_id','ts',
'vintage',
'type_encoded',
'producer_encoded',
'variety_encoded',
'designation_encoded',
'vineyard_encoded',
'country_encoded',
'region_encoded',
'subregion_encoded',
'appellation_encoded',
'rating'
],
header=0)
print(train_file.head())
print(train_file.tail(),'\n\n')
###Output
user_id item_id ts vintage type_encoded \
0 4198 3026 1293148800000000000 2001 2
1 1235 629 1423094400000000000 2010 10
2 3271 3941 1339718400000000000 2007 2
3 1093 2133 1187395200000000000 2005 2
4 4934 2937 1243382400000000000 2006 10
producer_encoded variety_encoded designation_encoded vineyard_encoded \
0 694 180 963 0
1 889 70 9 0
2 300 194 9 0
3 449 221 927 0
4 294 236 301 0
country_encoded region_encoded subregion_encoded appellation_encoded \
0 1 39 0 303
1 0 2 0 10
2 1 80 28 191
3 2 56 0 203
4 1 69 0 310
rating
0 88
1 81
2 90
3 87
4 88
user_id item_id ts vintage type_encoded \
11018 1438 3732 1135814400000000000 0 13
11019 2444 862 1390694400000000000 2010 2
11020 2533 3160 1252886400000000000 2006 2
11021 5117 4098 1303171200000000000 2009 2
11022 3433 3128 1287014400000000000 2006 2
producer_encoded variety_encoded designation_encoded \
11018 69 71 173
11019 469 68 585
11020 11 176 9
11021 845 35 9
11022 294 24 302
vineyard_encoded country_encoded region_encoded subregion_encoded \
11018 0 1 84 0
11019 0 2 14 0
11020 0 1 66 0
11021 0 3 57 0
11022 0 1 69 0
appellation_encoded rating
11018 292 89
11019 8 87
11020 227 83
11021 0 74
11022 58 89
###Markdown
Let's get some stats out of the sample. The following code gets the total number of unique instances for each column, i.e. how many different users, items, timestamps, vintages, producers, varieties, ratings and so on there are:
###Code
print(train_file.nunique())
###Output
user_id 4486
item_id 3469
ts 4216
vintage 24
type_encoded 15
producer_encoded 1022
variety_encoded 244
designation_encoded 1142
vineyard_encoded 106
country_encoded 4
region_encoded 88
subregion_encoded 46
appellation_encoded 338
rating 44
dtype: int64
###Markdown
This is nice: we have a large sample with many items and users. We have more users than items (as expected), although it would be nicer if there were more. There are almost 45 distinct rating values; we guess they accumulate around a mean and some scores are simply missing (ratings below 60, for example, since no wine is bad enough to score that low). Summary statistics: now let's take a deeper look at the shape of the data using the statistical functions provided by the pandas library. It is important to notice that the overall mean rating is 87 points, the lowest 25% of the ratings are below 86, and the highest 25% are above 89.
###Code
train_file['rating'].describe()
train_file['rating'].var()
train_file['rating'].std()
###Output
_____no_output_____
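Note that when `describe()`, `var()` and `std()` share one cell, Jupyter only echoes the last expression, which is why no output is recorded above. A minimal sketch (same `train_file`) that prints all three:
```
print(train_file['rating'].describe())
print('variance:', train_file['rating'].var())
print('std dev:', train_file['rating'].std())
```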
###Markdown
Skewness is a measure of symmetry, or rather of the lack of it. Symmetric data should have a skewness near zero: negative values indicate data that are skewed left and positive values indicate data that are skewed right. By skewed left, we mean that the left tail is long relative to the right tail, and the skewness of our data reflects this.
###Code
# Get Skewness of rating values
train_file['rating'].skew()
###Output
_____no_output_____
###Markdown
Kurtosis is a measure of whether the data are heavy-tailed or light-tailed relative to a normal distribution, whose excess kurtosis is zero (pandas reports excess kurtosis). Data sets with high kurtosis (positive values) tend to have heavy tails, or outliers, and data sets with low kurtosis (negative values) tend to have light tails, or a lack of outliers.
###Code
train_file['rating'].kurt()
###Output
_____no_output_____
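These moment statistics can be cross-checked outside pandas; a sketch assuming `scipy` is available (it is not imported anywhere in this notebook) and using the bias-corrected estimators, which should agree closely with the `skew()`/`kurt()` calls above:
```
from scipy import stats

ratings = train_file['rating'].to_numpy()
# Bias-corrected skewness and excess kurtosis (Fisher definition)
print('skewness:', stats.skew(ratings, bias=False))
print('excess kurtosis:', stats.kurtosis(ratings, fisher=True, bias=False))
```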
###Markdown
All of the characteristics mentioned can also be seen in the following figure. We can see that the ratings are biased towards 85 points and tend to accumulate around that value.
###Code
# Rating density plot
ax = train_file.rating.plot(kind='hist')
ax.set_title('Density for Overall Ratings', fontsize=18)
ax.set_xlabel('Rating', fontsize=14)
train_file.rating.plot(kind='kde', ax=ax, secondary_y=True)
###Output
_____no_output_____
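Because histogram counts and the KDE live on different scales, the cell above puts the KDE on a secondary y-axis. An alternative sketch (same `train_file`, bin count chosen arbitrarily) normalizes the histogram so both curves share one density axis:
```
ax = train_file.rating.plot(kind='hist', bins=20, density=True, alpha=0.5)
train_file.rating.plot(kind='kde', ax=ax)
ax.set_title('Density for Overall Ratings', fontsize=18)
ax.set_xlabel('Rating', fontsize=14)
```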
###Markdown
Dataset Density. Now, let's look at the tendencies summarized by the numbers we just examined. First we plot the density function of the user ratings. The following plot depicts the frequency (y-axis) of reviews per user together with a probability density function.
###Code
ax = train_file.user_id.plot(kind='hist')
ax.set_title('Density for User observations', fontsize=18)
ax.set_xlabel('User ID', fontsize=14)
train_file.user_id.plot(kind='kde', ax=ax, secondary_y=True)
train_file.user_id.value_counts().describe()
ax = train_file.user_id.value_counts().plot(kind='hist') # just to obtain the ax
ax.set_title('Density for Users reviews observations', fontsize=18)
ax.set_xlabel('Number of reviews', fontsize=14)
#ax.set_xticks(range(0, 50), minor=False)
train_file.user_id.value_counts().plot(kind='kde', ax=ax, secondary_y=True)
train_file.item_id.value_counts().describe()
ax = train_file.item_id.value_counts().plot(kind='hist') # just to obtain the ax
ax.set_title('Density for Item reviews observations', fontsize=18)
ax.set_xlabel('Number of reviews', fontsize=14)
#ax.set_xticks(range(0, 50), minor=False)
train_file.item_id.value_counts().plot(kind='kde', ax=ax, secondary_y=True)
###Output
_____no_output_____ |
workspace/mlb/monte-carlo-basic.ipynb | ###Markdown
Monte Carlo - Basic Example Description: Simulate a baseball inning where we have a 30% chance of hitting a HR / 70% chance of getting OUT. Question(s): How many HRs can we expect to get per inning?
###Code
from enum import Enum
import numpy as np
import matplotlib.pyplot as plt
class AT_BAT_OUTCOME(Enum):
OUT = 1
HR = 2
def at_bat():
probability = np.random.rand()
## 30% chance of hitting a HR; otherwise an out.
if probability <= .3:
return AT_BAT_OUTCOME.HR
return AT_BAT_OUTCOME.OUT
def play_inning():
outs = 0
hrs = 0
while outs < 3:
outcome = at_bat()
if outcome == AT_BAT_OUTCOME.OUT:
outs += 1
else:
hrs += 1
return hrs
simulations = 100000
outcomes = [
play_inning()
for _
in range(simulations)
]
mu = np.mean(outcomes)
print('# of hrs per inning:', mu)
plt.figure(figsize=(10, 5))
plt.title('Histogram of HRs per inning')
plt.xlabel('HR')
heights, _, _ = plt.hist(outcomes, bins=10, alpha=0.6)
plt.vlines(
[mu],
ymin=min(heights),
ymax=max(heights),
colors='r'
)
plt.show()
###Output
# of hrs per inning: 1.2879
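The simulated mean can be sanity-checked analytically: the number of HRs before the third out is negative-binomially distributed, so with p = 0.3 the expectation is 3 * p / (1 - p) ≈ 1.286, which matches the 1.2879 above. A short sketch of that check:
```
# Analytic expectation of HRs before 3 outs (negative binomial)
p_hr, outs = 0.3, 3
print('analytic # of hrs per inning:', outs * p_hr / (1 - p_hr))  # ~1.2857
```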
|
notebooks/figures/table2_spectral_series.ipynb | ###Markdown
Table 2 (Journal of Climate submission; Molina et al.) Table 2. Spectral analysis of area-weighted averages of monthly SSTs (°C) across the tropical Pacific (10.5°S, 170.5°W, 10.5°N, 120.5°W), as in Figure 5. Years 201-500 were considered for the Global and Pacific experiments and years 1,001-1,300 were considered for the CESM1 control for correspondence to the sensitivity experiments during AMOC collapse. Years 101-250 were considered for the Pacific Salt experiment, which were the years PMOC was active. **Table by: Maria J. Molina, NCAR**
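`weighted_mean` and `pacific_lon` below come from the author's `climatico.util` helpers, which are not shown here. Purely as an illustration of the idea (not the actual implementation), an area-weighted SST mean with xarray typically weights each grid cell by the cosine of its latitude; the dimension names `lat`/`lon` are assumed:
```
import numpy as np

def cos_lat_weighted_mean(da, lat_name='lat', lon_name='lon'):
    # Weight grid cells by cos(latitude), then average over the region
    weights = np.cos(np.deg2rad(da[lat_name]))
    return da.weighted(weights).mean(dim=[lat_name, lon_name])
```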
###Code
# imports
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import cftime
from climatico.util import weighted_mean, pacific_lon
import subprocess
import copy
from datetime import timedelta
from config import directory_figs, directory_data
# list of filenames to do this for
file_g02sv = 'b1d.e11.B1850LENS.f09_g16.FWAtSalG02Sv.pop.h.SST.*.nc'
file_g04sv = 'b1d.e11.B1850LENS.f09_g16.FWAtSalG04Sv.pop.h.SST.*.nc'
file_p02sv = 'b1d.e11.B1850LENS.f09_g16.FWAtSalP02Sv.pop.h.SST.*.nc'
file_p04sv = 'b1d.e11.B1850LENS.f09_g16.FWAtSalP04Sv.pop.h.SST.*.nc'
file_psalt = 'b1d.e11.B1850LENS.f09_g16.FWPaSalP04Sv.pop.h.SST.*.nc'
file_cntrl = 'b1d.e11.B1850C5CN.f09_g16.005.pop.h.SST.*.nc'
ds_cntrl = xr.open_mfdataset(f'{directory_data}{file_cntrl}', combine='by_coords')
ds_cntrl = ds_cntrl.assign_coords(time=ds_cntrl.coords['time'] - timedelta(days=17))
ds_cntrl = ds_cntrl.isel(z_t=0)['SST'].sel(time=slice(cftime.DatetimeNoLeap(1001, 1, 1, 0, 0),cftime.DatetimeNoLeap(1301, 1, 1, 0, 0)))
ds_g02sv = xr.open_mfdataset(f'{directory_data}{file_g02sv}', combine='by_coords')
ds_g02sv = ds_g02sv.assign_coords(time=ds_g02sv.coords['time'] - timedelta(days=17))
ds_g02sv = ds_g02sv.isel(z_t=0)['SST'].sel(time=slice(cftime.DatetimeNoLeap(201, 1, 1, 0, 0),cftime.DatetimeNoLeap(501, 1, 1, 0, 0)))
ds_g04sv = xr.open_mfdataset(f'{directory_data}{file_g04sv}', combine='by_coords')
ds_g04sv = ds_g04sv.assign_coords(time=ds_g04sv.coords['time'] - timedelta(days=17))
ds_g04sv = ds_g04sv.isel(z_t=0)['SST'].sel(time=slice(cftime.DatetimeNoLeap(201, 1, 1, 0, 0),cftime.DatetimeNoLeap(501, 1, 1, 0, 0)))
ds_p02sv = xr.open_mfdataset(f'{directory_data}{file_p02sv}', combine='by_coords')
ds_p02sv = ds_p02sv.assign_coords(time=ds_p02sv.coords['time'] - timedelta(days=17))
ds_p02sv = ds_p02sv.isel(z_t=0)['SST'].sel(time=slice(cftime.DatetimeNoLeap(201, 1, 1, 0, 0),cftime.DatetimeNoLeap(501, 1, 1, 0, 0)))
ds_p04sv = xr.open_mfdataset(f'{directory_data}{file_p04sv}', combine='by_coords')
ds_p04sv = ds_p04sv.assign_coords(time=ds_p04sv.coords['time'] - timedelta(days=17))
ds_p04sv = ds_p04sv.isel(z_t=0)['SST'].sel(time=slice(cftime.DatetimeNoLeap(201, 1, 1, 0, 0),cftime.DatetimeNoLeap(501, 1, 1, 0, 0)))
ds_psalt = xr.open_mfdataset(f'{directory_data}{file_psalt}', combine='by_coords')
ds_psalt = ds_psalt.assign_coords(time=ds_psalt.coords['time'] - timedelta(days=17))
ds_psalt = ds_psalt.isel(z_t=0)['SST'].sel(time=slice(cftime.DatetimeNoLeap(101, 1, 1, 0, 0),cftime.DatetimeNoLeap(251, 1, 1, 0, 0)))
def grab_wghtmean(ds_cntrl, ds_g02sv, ds_g04sv, ds_p02sv, ds_p04sv, ds_psalt,
lon1 = 170.5, lon2 = -150.5, lat1 = 30.5, lat2 = 40.5):
"""
Compute weighted mean for the select region.
Args:
ds_cntrl: Xarray data array for control.
ds_g02sv: Xarray data array for 0.2 Sv global experiment.
ds_g04sv: Xarray data array for 0.4 Sv global experiment.
ds_p02sv: Xarray data array for 0.2 Sv pacific experiment.
ds_p04sv: Xarray data array for 0.4 Sv pacific experiment.
ds_psalt: Xarray data array for pacific salt experiment.
lon1 (float): Lower left corner longitude. Defaults to ``170.5``.
lon2 (float): Upper right corner longitude. Defaults to ``-150.5``.
lat1 (float): Lower left corner latitude. Defaults to ``30.5``.
lat2 (float): Upper right corner latitude. Defaults to ``40.5``.
"""
ds_cntrl_box = weighted_mean(ds_cntrl.sel(
lon=slice(pacific_lon(lon1, to180=False), pacific_lon(lon2, to180=False)),
lat=slice(lat1, lat2)), lat_name='lat')
ds_g02sv_box = weighted_mean(ds_g02sv.sel(
lon=slice(pacific_lon(lon1, to180=False), pacific_lon(lon2, to180=False)),
lat=slice(lat1, lat2)), lat_name='lat')
ds_g04sv_box = weighted_mean(ds_g04sv.sel(
lon=slice(pacific_lon(lon1, to180=False), pacific_lon(lon2, to180=False)),
lat=slice(lat1, lat2)), lat_name='lat')
ds_p02sv_box = weighted_mean(ds_p02sv.sel(
lon=slice(pacific_lon(lon1, to180=False), pacific_lon(lon2, to180=False)),
lat=slice(lat1, lat2)), lat_name='lat')
ds_p04sv_box = weighted_mean(ds_p04sv.sel(
lon=slice(pacific_lon(lon1, to180=False), pacific_lon(lon2, to180=False)),
lat=slice(lat1, lat2)), lat_name='lat')
ds_psalt_box = weighted_mean(ds_psalt.sel(
lon=slice(pacific_lon(lon1, to180=False), pacific_lon(lon2, to180=False)),
lat=slice(lat1, lat2)), lat_name='lat')
return ds_cntrl_box, ds_g02sv_box, ds_g04sv_box, ds_p02sv_box, ds_p04sv_box, ds_psalt_box
def grab_specx(da, variable='SST'):
"""
Calling to ncl directory to use spectral analysis ncl scripts.
Input your directory into the function.
Use ``os.path.dirname(os.getcwd())+'/ncl/'`` to find your respective path.
Args:
da: Xarray data array .
variable (str): Variable name. Defaults to ``SST``.
"""
da.to_dataset(name=variable).to_netcdf('/glade/u/home/molina/python_scripts/climatico/ncl/box_sst.nc')
subprocess.call([f'ml intel/18.0.5; ml ncl; ncl /glade/u/home/molina/python_scripts/climatico/ncl/specx_anal.ncl'], shell=True)
spcx = xr.open_dataset("~/python_scripts/climatico/ncl/spcx.nc")
frqx = xr.open_dataset("~/python_scripts/climatico/ncl/frq.nc")
spcxa = copy.deepcopy(spcx['spcx'].squeeze().values)
frqxa = copy.deepcopy(frqx['frq'].squeeze().values)
del spcx
del frqx
return spcxa, frqxa
%%capture
# equatorial pacific
ds_cntrl_box, ds_g02sv_box, ds_g04sv_box, ds_p02sv_box, ds_p04sv_box, ds_psalt_box = grab_wghtmean(
ds_cntrl, ds_g02sv, ds_g04sv, ds_p02sv, ds_p04sv, ds_psalt, lon1 = -170.5, lon2 = -120.5, lat1 = -5.5, lat2 = 5.5)
# control
spcx_cntrl3, frqx_cntrl3 = grab_specx(ds_cntrl_box)
# 2svg
spcx_g02sv3, frqx_g02sv3 = grab_specx(ds_g02sv_box)
# 4svg
spcx_g04sv3, frqx_g04sv3 = grab_specx(ds_g04sv_box)
# 2svp
spcx_p02sv3, frqx_p02sv3 = grab_specx(ds_p02sv_box)
# 4svp
spcx_p04sv3, frqx_p04sv3 = grab_specx(ds_p04sv_box)
#psalt
spcx_psalt3, frqx_psalt3 = grab_specx(ds_psalt_box)
print('tropical pac; frequency of max variance')
print(str(np.round(frqx_cntrl3[np.argmax(spcx_cntrl3)],3)))
print(str(np.round(frqx_g02sv3[np.argmax(spcx_g02sv3)],3)))
print(str(np.round(frqx_g04sv3[np.argmax(spcx_g04sv3)],3)))
print(str(np.round(frqx_p02sv3[np.argmax(spcx_p02sv3)],3)))
print(str(np.round(frqx_p04sv3[np.argmax(spcx_p04sv3)],3)))
print(str(np.round(frqx_psalt3[np.argmax(spcx_psalt3)],3)))
print('tropical pac; variance of annual cycle')
print(str(np.round(spcx_cntrl3[np.argwhere(frqx_cntrl3==1/12)],2)))
print(str(np.round(spcx_g02sv3[np.argwhere(frqx_g02sv3==1/12)],2)))
print(str(np.round(spcx_g04sv3[np.argwhere(frqx_g04sv3==1/12)],2)))
print(str(np.round(spcx_p02sv3[np.argwhere(frqx_p02sv3==1/12)],2)))
print(str(np.round(spcx_p04sv3[np.argwhere(frqx_p04sv3==1/12)],2)))
print(str(np.round(spcx_psalt3[np.argwhere(frqx_psalt3==1/12)],2)))
print('tropical pac; variance of semi-annual cycle')
print(str(np.round(spcx_cntrl3[np.argwhere(frqx_cntrl3==2/12)],2)))
print(str(np.round(spcx_g02sv3[np.argwhere(frqx_g02sv3==2/12)],2)))
print(str(np.round(spcx_g04sv3[np.argwhere(frqx_g04sv3==2/12)],2)))
print(str(np.round(spcx_p02sv3[np.argwhere(frqx_p02sv3==2/12)],2)))
print(str(np.round(spcx_p04sv3[np.argwhere(frqx_p04sv3==2/12)],2)))
print(str(np.round(spcx_psalt3[np.argwhere(frqx_psalt3==2/12)],2)))
###Output
tropical pac; variance of semi-annual cycle
[[31.4]]
[[26.46]]
[[20.38]]
[[30.44]]
[[25.59]]
[[18.45]]
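The spectra in this table come from the NCL `specx_anal.ncl` script invoked through `subprocess`. As a rough, purely illustrative Python alternative (assuming a 1-D numpy array `sst_box` of monthly area-averaged SSTs, which is not a variable defined in this notebook), a periodogram gives the same kind of frequency-of-maximum-variance readout:
```
from scipy.signal import periodogram
import numpy as np

# sst_box: hypothetical 1-D array of monthly SSTs; fs=1 gives frequencies in cycles per month
freqs, power = periodogram(sst_box, fs=1.0, detrend='linear')
peak = freqs[np.argmax(power[1:]) + 1]  # skip the zero frequency
print('frequency of max variance (cycles/month):', round(peak, 3))
```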
|
Pipeline progression/22_Write comments in fit_poly.ipynb | ###Markdown
Here is the code so far:
###Code
import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(frame):
def cal_undistort(img):
# Reads mtx and dist matrices, performs image distortion correction and returns the undistorted image
import pickle
# Read in the saved matrices
my_dist_pickle = pickle.load( open( "output_files/calib_pickle_files/dist_pickle.p", "rb" ) )
mtx = my_dist_pickle["mtx"]
dist = my_dist_pickle["dist"]
undistorted_img = cv2.undistort(img, mtx, dist, None, mtx)
#undistorted_img = cv2.cvtColor(undistorted_img, cv2.COLOR_BGR2RGB) #Use if you use cv2 to import image. ax.imshow() needs RGB image
return undistorted_img
def yellow_threshold(img, sxbinary):
# Convert to HLS color space and separate the S channel
# Note: img is the undistorted image
hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
s_channel = hls[:,:,2]
h_channel = hls[:,:,0]
# Threshold color channel
s_thresh_min = 100
s_thresh_max = 255
#On a 360-degree hue scale my yellow values ranged between 35 and 50; OpenCV uses half that scale, so these are halved
h_thresh_min = 10
h_thresh_max = 25
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh_min) & (s_channel <= s_thresh_max)] = 1
h_binary = np.zeros_like(h_channel)
h_binary[(h_channel >= h_thresh_min) & (h_channel <= h_thresh_max)] = 1
# Combine the two binary thresholds
yellow_binary = np.zeros_like(s_binary)
yellow_binary[(((s_binary == 1) | (sxbinary == 1) ) & (h_binary ==1))] = 1
return yellow_binary
def xgrad_binary(img, thresh_min=30, thresh_max=100):
# Grayscale image
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Sobel x
sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold x gradient
#thresh_min = 30 #Already given above
#thresh_max = 100
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1
return sxbinary
def white_threshold(img, sxbinary, lower_white_thresh = 170):
r_channel = img[:,:,0]
g_channel = img[:,:,1]
b_channel = img[:,:,2]
# Threshold color channel
r_thresh_min = lower_white_thresh
r_thresh_max = 255
r_binary = np.zeros_like(r_channel)
r_binary[(r_channel >= r_thresh_min) & (r_channel <= r_thresh_max)] = 1
g_thresh_min = lower_white_thresh
g_thresh_max = 255
g_binary = np.zeros_like(g_channel)
g_binary[(g_channel >= g_thresh_min) & (g_channel <= g_thresh_max)] = 1
b_thresh_min = lower_white_thresh
b_thresh_max = 255
b_binary = np.zeros_like(b_channel)
b_binary[(b_channel >= b_thresh_min) & (b_channel <= b_thresh_max)] = 1
white_binary = np.zeros_like(r_channel)
white_binary[((r_binary ==1) & (g_binary ==1) & (b_binary ==1) & (sxbinary==1))] = 1
return white_binary
def thresh_img(img):
#sxbinary = xgrad_binary(img, thresh_min=30, thresh_max=100)
sxbinary = xgrad_binary(img, thresh_min=25, thresh_max=130)
yellow_binary = yellow_threshold(img, sxbinary) #(((s) | (sx)) & (h))
white_binary = white_threshold(img, sxbinary, lower_white_thresh = 150)
# Combine the two binary thresholds
combined_binary = np.zeros_like(sxbinary)
combined_binary[((yellow_binary == 1) | (white_binary == 1))] = 1
out_img = np.dstack((combined_binary, combined_binary, combined_binary))*255
return out_img
def perspective_transform(img):
# Define calibration box in source (original) and destination (desired or warped) coordinates
img_size = (img.shape[1], img.shape[0])
"""Notice the format used for img_size. Yaha bhi ulta hai. x axis aur fir y axis chahiye.
Apne format mein rows(y axis) and columns (x axis) hain"""
# Four source coordinates
# Order of points: top left, top right, bottom right, bottom left
src = np.array(
[[435*img.shape[1]/960, 350*img.shape[0]/540],
[530*img.shape[1]/960, 350*img.shape[0]/540],
[885*img.shape[1]/960, img.shape[0]],
[220*img.shape[1]/960, img.shape[0]]], dtype='f')
# Next, we'll define a desired rectangle plane for the warped image.
# We'll choose 4 points where we want source points to end up
# This time we'll choose our points by eyeballing a rectangle
dst = np.array(
[[290*img.shape[1]/960, 0],
[740*img.shape[1]/960, 0],
[740*img.shape[1]/960, img.shape[0]],
[290*img.shape[1]/960, img.shape[0]]], dtype='f')
#Compute the perspective transform, M, given source and destination points:
M = cv2.getPerspectiveTransform(src, dst)
#Warp an image using the perspective transform, M; using linear interpolation
#Interpolating points is just filling in missing points as it warps an image
# The input image for this function can be a colored image too
warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)
return warped, src, dst
def rev_perspective_transform(img, src, dst):
img_size = (img.shape[1], img.shape[0])
#Compute the perspective transform, M, given source and destination points:
Minv = cv2.getPerspectiveTransform(dst, src)
#Warp an image using the perspective transform, M; using linear interpolation
#Interpolating points is just filling in missing points as it warps an image
# The input image for this function can be a colored image too
un_warped = cv2.warpPerspective(img, Minv, img_size, flags=cv2.INTER_LINEAR)
return un_warped, Minv
def draw_polygon(img1, img2, src, dst):
src = src.astype(int) #Very important step (Pixels cannot be in decimals)
dst = dst.astype(int)
cv2.polylines(img1, [src], True, (255,0,0), 3)
cv2.polylines(img2, [dst], True, (255,0,0), 3)
def histogram_bottom_peaks (warped_img):
# This will detect the bottom point of our lane lines
# Take a histogram of the bottom half of the image
bottom_half = warped_img[((2*warped_img.shape[0])//5):,:,0] # Collect the pixels in the lower 3/5 of the image (a bit more than half)
histogram = np.sum(bottom_half, axis=0) # Summing them along y axis (or along columns)
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
midpoint = np.int(histogram.shape[0]//2) # histogram is a 1D array, so its shape has only one entry (index 0)
#print(np.shape(histogram)) #OUTPUT:(1280,)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
return leftx_base, rightx_base
def find_lane_pixels(warped_img):
leftx_base, rightx_base = histogram_bottom_peaks(warped_img)
# HYPERPARAMETERS
# Choose the number of sliding windows
nwindows = 9
# Set the width of the windows +/- margin. So width = 2*margin
margin = 90
# Set minimum number of pixels found to recenter window
minpix = 1000 #I've changed this from 50 as given in lectures
# Set height of windows - based on nwindows above and image shape
window_height = np.int(warped_img.shape[0]//nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = warped_img.nonzero() #gives the coordinates of all nonzero pixels as 2 separate arrays
nonzeroy = np.array(nonzero[0]) # Y coordinates come as a 1D array, arranged in pixel order
nonzerox = np.array(nonzero[1])
# Current positions to be updated later for each window in nwindows
leftx_current = leftx_base #set initially; updated at the end of the for loop below
rightx_current = rightx_base
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = [] # We'll collect the indices of lane pixels here.
# Indexing the 'nonzerox' array with them gives back the coordinates
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = warped_img.shape[0] - (window+1)*window_height
win_y_high = warped_img.shape[0] - window*window_height
"""### TO-DO: Find the four below boundaries of the window ###"""
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
"""
# Create an output image to draw on and visualize the result
out_img = np.copy(warped_img)
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low),
(win_xleft_high,win_y_high),(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low),
(win_xright_high,win_y_high),(0,255,0), 2)
"""
### TO-DO: Identify the nonzero pixels in x and y within the window ###
#The full explanation of this is written on a separate page
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# If you found > minpix pixels, recenter next window on the mean position of the pixels in your current window (re-centre)
if len(good_left_inds) > minpix:
leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = np.int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices (previously was a list of lists of pixels)
try:
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
except ValueError:
# Avoids an error if the above is not implemented fully
pass
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
"""return leftx, lefty, rightx, righty, out_img""" #agar rectangles bana rahe ho toh out_image rakhna
return leftx, lefty, rightx, righty
def fit_polynomial(warped_img, leftx, lefty, rightx, righty, fit_history, variance_history, rad_curv_history):
"""This will fit a parabola on each lane line, give back lane curve coordinates, radius of curvature """
#Fit a second order polynomial to each using `np.polyfit` ###
left_fit = np.polyfit(lefty,leftx,2)
right_fit = np.polyfit(righty,rightx,2)
# We'll plot x as a function of y
ploty = np.linspace(0, warped_img.shape[0]-1, warped_img.shape[0])
"""Primary coefficient detection: 1st level of curve fitting where frame naturally detects poins and fit's curve"""
#Steps: find a,b,c for our parabola: x=a(y^2)+b(y)+c
"""
1.a) If lane pixels found, fit a curve and get the coefficients for the left and right lane
1.b) If #pixels are insufficient and a curve couldn't be fit, use the curve from the previous frame if you have that data
(In case of lack of points in 1st frame, fit an arbitrary parabola with all coeff=1: Expected to improve later on)
2) Using the coefficients we'll fit a parabola. We'll improve it with the following techniques later on:
- Variance of lane pixels from the parabola (to account for how far the detected points sit from the fitted curve)
- Shape and position of parabolas in the previous frame,
- Trends in radius of curvature,
- Frame mirroring (fine tuning one lane in the frame wrt the other)
"""
try:
a1_new= left_fit[0]
b1_new= left_fit[1]
c1_new= left_fit[2]
a2_new= right_fit[0]
b2_new= right_fit[1]
c2_new= right_fit[2]
#Calculate the x-coordinates of the parabola. Here x is the dependendent variable and y is independent
left_fitx = a1_new*ploty**2 + b1_new*ploty + c1_new
right_fitx = a2_new*ploty**2 + b2_new*ploty + c2_new
status = True
except TypeError:
# Avoids an error if `left` and `right_fit` are still none or incorrect
print('The function failed to fit a line!')
if(len(lane.curve_fit)!=5): #If you dont have any values in the history
left_fitx = 1*ploty**2 + 1*ploty #This is a senseless curve. If it was the 1st frame, we need to do something
right_fitx = 1*ploty**2 + 1*ploty
else: #replicate lane from previous frame if you have history
left_fitx = fit_history[0][4][0]*ploty**2 + fit_history[0][4][1]*ploty + fit_history[0][4][2]
right_fitx = fit_history[1][4][0]*ploty**2 + fit_history[1][4][1]*ploty + fit_history[1][4][2]
lane.count=-1 #Restart your search in next frame. At the end of this frame, 1 gets added. Hence we'll get net 0.
status = False
"""VARIANCE: Average distance of lane pixels from our curve which we have fit"""
# Calculating variance for both lanes in the current frame
left_sum = 0
for index in range(len(leftx)):
left_sum+= abs(leftx[index]-(a1_new*lefty[index]**2 + b1_new*lefty[index] + c1_new))
left_variance_new=left_sum/len(leftx)
right_sum=0
for index in range(len(rightx)):
right_sum+= abs(rightx[index]-(a2_new*righty[index]**2 + b2_new*righty[index] + c2_new))
right_variance_new=right_sum/len(rightx)
#If you have history for variance and curve coefficients
if((len(lane.curve_fit)==5)&(len(lane.variance)==5)):
left_variance_old = sum([(0.2*((5-index)**3)*element) for index,element in enumerate(variance_history[0])])/sum([0.2*((5-index)**3) for index in range(0,5)])
right_variance_old = sum([(0.2*((5-index)**3)*element) for index,element in enumerate(variance_history[1])])/sum([0.2*((5-index)**3) for index in range(0,5)])
# Finding weighted average for the previous elements data within fit_history
a1_old= sum([(0.2*(index+1)*element[0]) for index,element in enumerate(fit_history[0])])/sum([0.2*(index+1) for index in range(0,5)])
b1_old= sum([(0.2*(index+1)*element[1]) for index,element in enumerate(fit_history[0])])/sum([0.2*(index+1) for index in range(0,5)])
c1_old= sum([(0.2*(index+1)*element[2]) for index,element in enumerate(fit_history[0])])/sum([0.2*(index+1) for index in range(0,5)])
a2_old= sum([(0.2*(index+1)*element[0]) for index,element in enumerate(fit_history[1])])/sum([0.2*(index+1) for index in range(0,5)])
b2_old= sum([(0.2*(index+1)*element[1]) for index,element in enumerate(fit_history[1])])/sum([0.2*(index+1) for index in range(0,5)])
c2_old= sum([(0.2*(index+1)*element[2]) for index,element in enumerate(fit_history[1])])/sum([0.2*(index+1) for index in range(0,5)])
a1_new = (a1_new*((left_variance_old)**2) + a1_old*((left_variance_new)**2))/(((left_variance_old)**2) + ((left_variance_new)**2))
b1_new = (b1_new*((left_variance_old)**2) + b1_old*((left_variance_new)**2))/(((left_variance_old)**2) + ((left_variance_new)**2))
c1_new = (c1_new*((left_variance_old)**2) + c1_old*((left_variance_new)**2))/(((left_variance_old)**2) + ((left_variance_new))**2)
a2_new = (a2_new*((right_variance_old)**2) + a2_old*((right_variance_new)**2))/(((right_variance_old)**2) + ((right_variance_new))**2)
b2_new = (b2_new*((right_variance_old)**2) + b2_old*((right_variance_new)**2))/(((right_variance_old)**2) + ((right_variance_new))**2)
c2_new = (c2_new*((right_variance_old)**2) + c2_old*((right_variance_new)**2))/(((right_variance_old)**2) + ((right_variance_new))**2)
### Tracking the difference in curve fit coefficients over the frame
# from last to last frame -> last frame
del_a1_old = lane.coeff_diff[0][0]
del_b1_old = lane.coeff_diff[0][1]
del_c1_old = lane.coeff_diff[0][2]
del_a2_old = lane.coeff_diff[1][0]
del_b2_old = lane.coeff_diff[1][1]
del_c2_old = lane.coeff_diff[1][2]
# from last frame -> current frame
del_a1 = abs(a1_new - fit_history[0][4][0])
del_b1 = abs(b1_new - fit_history[0][4][1])
del_c1 = abs(c1_new - fit_history[0][4][2])
del_a2 = abs(a2_new - fit_history[1][4][0])
del_b2 = abs(b2_new - fit_history[1][4][1])
del_c2 = abs(c2_new - fit_history[1][4][2])
lane.coeff_diff = [[del_a1, del_b1, del_c1], [del_a2, del_b2, del_c2]]
# Now compute the delta for each coefficient and apply that weighting formula to every element
a1_new = (a1_new*(del_a1_old) + a1_old*(del_a1))/((del_a1_old) + (del_a1))
b1_new = (b1_new*(del_b1_old) + b1_old*(del_b1))/((del_b1_old) + (del_b1))
c1_new = (c1_new*(del_c1_old) + c1_old*(del_c1))/((del_c1_old) + (del_c1))
"""
#print("a2_new",a2_new)
#print("b2_new",b2_new)
#print("c2_new",c2_new)
"""
a2_new = (a2_new*(del_a2_old) + a2_old*(del_a2))/((del_a2_old) + (del_a2))
b2_new = (b2_new*(del_b2_old) + b2_old*(del_b2))/((del_b2_old) + (del_b2))
c2_new = (c2_new*(del_c2_old) + c2_old*(del_c2))/((del_c2_old) + (del_c2))
"""
#print("")
#print("a2_old",a2_old)
#print("b2_old",b2_old)
#print("c2_old",c2_old)
#print("")
#print("a2_new",a2_new)
#print("b2_new",b2_new)
#print("c2_new",c2_new)
"""
y_eval = np.max(ploty)
# Calculation of R_curve (radius of curvature)
left_curverad = (((1 + (2*a1_new*y_eval + b1_new)**2)**1.5) / (2*a1_new))
right_curverad = (((1 + (2*a2_new*y_eval + b2_new)**2)**1.5) / (2*a2_new))
if(len(lane.rad_curv)==5):
"""How to check series is decreasing or increasing"""
slope_avg=0
for i in range(0,4):
slope_avg += ((slope_avg*i) + (rad_curv_history[0][i+1] - rad_curv_history[0][i]))/(i+1)
# If this is not the point of inflection, and still the radius of curvature changes sign, discard the curve
# Left
if (((rad_curv_history[0][4]>0) & (left_curverad<0) & (slope_avg>=0)) | ((rad_curv_history[0][4]<0) & (left_curverad>0) & (slope_avg<=0))):
a1_new = fit_history[0][4][0]
b1_new = fit_history[0][4][1]
c1_new = fit_history[0][4][2]
# Right
if (((rad_curv_history[1][4]>0) & (right_curverad<0) & (slope_avg>=0)) | ((rad_curv_history[1][4]<0) & (right_curverad>0) & (slope_avg<=0))):
a2_new = fit_history[1][4][0]
b2_new = fit_history[1][4][1]
c2_new = fit_history[1][4][2]
"""FRAME MIRRORING: Fine tuning one lane wrt to the other same as they'll have similar curvature"""
#Steps:
"""
1) Weighted average of the coefficients related to curve shape (a,b) to make both parabolas a bit similar
2) Adjusting the 'c' coefficient using the lane centre of the previous frame and the lane width according to the current frame
"""
# We'll use lane centre for the previous frame to fine tune c of the parabola. First frame won't have a history so
# Just for the 1st frame, we'll define it according to itself and use. Won't make any impact but will set a base for the next frames
if (lane.count==0):
lane.lane_bottom_centre = (((a2_new*(warped_img.shape[0]-1)**2 + b2_new*(warped_img.shape[0]-1) + c2_new) + (a1_new*(warped_img.shape[0]-1)**2 + b1_new*(warped_img.shape[0]-1) + c1_new))/2)
# We'll find lane width according to the latest curve coefficients till now
lane.lane_width = (((lane.lane_width*lane.count)+(a2_new*(warped_img.shape[0]-1)**2 + b2_new*(warped_img.shape[0]-1) + c2_new) - (a1_new*(warped_img.shape[0]-1)**2 + b1_new*(warped_img.shape[0]-1) + c1_new))/(lane.count+1))
a1 = 0.8*a1_new + 0.2*a2_new
b1 = 0.8*b1_new + 0.2*b2_new
a2 = 0.2*a1_new + 0.8*a2_new
b2 = 0.2*b1_new + 0.8*b2_new
#c1 = 0.8*c1_new + 0.2*c2_new
#c2 = 0.2*c1_new + 0.8*c2_new
#Taking the lane centre from the previous frame and finding "c" such that both lanes are equidistant from it.
c1_mirror = ((lane.lane_bottom_centre - (lane.lane_width/2))-(a1*(warped_img.shape[0]-1)**2 + b1*(warped_img.shape[0]-1)))
c2_mirror = ((lane.lane_bottom_centre + (lane.lane_width/2))-(a2*(warped_img.shape[0]-1)**2 + b2*(warped_img.shape[0]-1)))
c1= 0.7*c1_new + 0.3*c1_mirror
c2 = 0.7*c2_new + 0.3*c2_mirror
# Now we'll find the lane centre of this frame and overwrite the global variable so that the next frame can use this value
lane.lane_bottom_centre = (((a2*(warped_img.shape[0]-1)**2 + b2*(warped_img.shape[0]-1) + c2) + (a1*(warped_img.shape[0]-1)**2 + b1*(warped_img.shape[0]-1) + c1))/2)
#print("lane.lane_width",lane.lane_width)
#print("lane.lane_bottom_centre",lane.lane_bottom_centre)
left_curverad = (((1 + (2*a1*y_eval + b1)**2)**1.5) / (2*a1))
right_curverad = (((1 + (2*a2*y_eval + b2)**2)**1.5) / (2*a2))
left_fitx = a1*ploty**2 + b1*ploty + c1
right_fitx = a2*ploty**2 + b2*ploty + c2
"""
print("left_curverad",left_curverad)
print("left_curverad",right_curverad)
print("a2",a2)
print("b2",b2)
print("c2",c2)
print("")
"""
return [[a1,b1,c1], [a2,b2,c2]], left_fitx, right_fitx, status, [left_variance_new, right_variance_new], [left_curverad,right_curverad], ploty
# out_img here has boxes drawn and the pixels are colored
#return [[a1_new,b1_new,c1_new], [a2_new,b2_new,c2_new]], left_fitx, right_fitx, status, [left_variance_new, right_variance_new], ploty
def color_pixels_and_curve(out_img, leftx, lefty, rightx, righty, left_fitx, right_fitx):
ploty = np.linspace(0, warped_img.shape[0]-1, warped_img.shape[0])
## Visualization ##
# Colors in the left and right lane regions
out_img[lefty, leftx] = [255, 0, 0]
out_img[righty, rightx] = [0, 0, 255]
# Converting the coordinates of our line into integer values as index of the image can't take decimals
left_fitx_int = left_fitx.astype(np.int32)
right_fitx_int = right_fitx.astype(np.int32)
ploty_int = ploty.astype(np.int32)
# Coloring the curve as yellow
out_img[ploty_int,left_fitx_int] = [255,255,0]
out_img[ploty_int,right_fitx_int] = [255,255,0]
# To thicken the curve
out_img[ploty_int,left_fitx_int+1] = [255,255,0]
out_img[ploty_int,right_fitx_int+1] = [255,255,0]
out_img[ploty_int,left_fitx_int-1] = [255,255,0]
out_img[ploty_int,right_fitx_int-1] = [255,255,0]
out_img[ploty_int,left_fitx_int+2] = [255,255,0]
out_img[ploty_int,right_fitx_int+2] = [255,255,0]
out_img[ploty_int,left_fitx_int-2] = [255,255,0]
out_img[ploty_int,right_fitx_int-2] = [255,255,0]
def search_around_poly(warped_img, left_fit, right_fit):
# HYPERPARAMETER
# Choose the width of the margin around the previous polynomial to search
# The quiz grader expects 100 here, but feel free to tune on your own!
margin = 100
# Grab activated pixels
nonzero = warped_img.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
### TO-DO: Set the area of search based on activated x-values ###
### within the +/- margin of our polynomial function ###
### Hint: consider the window areas for the similarly named variables ###
### in the previous quiz, but change the windows to our new search area ###
left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy +
left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) +
left_fit[1]*nonzeroy + left_fit[2] + margin)))
right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy +
right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) +
right_fit[1]*nonzeroy + right_fit[2] + margin)))
# Again, extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
return leftx, lefty, rightx, righty
def modify_array(array, new_value):
if len(array)!=5:
for i in range(0,5):
array.append(new_value)
else:
dump_var=array[0]
array[0]=array[1]
array[1]=array[2]
array[2]=array[3]
array[3]=array[4]
array[4]=new_value
return array
undist_img = cal_undistort(frame)
thresh_img = thresh_img(undist_img) # Note: This is not a binary image. It has been stacked already within the function
warped_img, src, dst = perspective_transform(thresh_img)
#draw_polygon(frame, warped_img, src, dst) #the first image is the original image that you import into the system
#print("starting count",lane.count)
# Making the curve coefficient and variance history ready for our new frame
left_fit_previous = [i[0] for i in lane.curve_fit]
right_fit_previous = [i[1] for i in lane.curve_fit]
fit_history=[left_fit_previous, right_fit_previous]
left_variance_previous = [i[0] for i in lane.variance]
right_variance_previous = [i[1] for i in lane.variance]
variance_history=[left_variance_previous, right_variance_previous]
left_rad_curv_prev = [i[0] for i in lane.rad_curv]
right_rad_curv_prev = [i[1] for i in lane.rad_curv]
rad_curv_history = [left_rad_curv_prev, right_rad_curv_prev]
#print(rad_curv_history)
if (lane.count == 0):
leftx, lefty, rightx, righty = find_lane_pixels(warped_img) # Find our lane pixels first
elif (lane.count > 0):
leftx, lefty, rightx, righty = search_around_poly(warped_img, left_fit_previous[4], right_fit_previous[4])
curve_fit_new, left_fitx, right_fitx, status, variance_new, rad_curv_new,ploty = fit_polynomial(warped_img, leftx, lefty, rightx, righty, fit_history, variance_history,rad_curv_history)
lane.rad_curv = modify_array(lane.rad_curv, rad_curv_new)
lane.detected = status
lane.curve_fit = modify_array(lane.curve_fit, curve_fit_new)
lane.variance = modify_array(lane.variance, variance_new)
#print(lane.variance)
# Now we'll color the lane pixels and plot the identified curve over the image
color_pixels_and_curve(warped_img, leftx, lefty, rightx, righty, left_fitx, right_fitx)
unwarped_img, Minv = rev_perspective_transform(warped_img, src, dst)
# Create an image to draw the lines on
color_warp = np.zeros_like(warped_img).astype(np.uint8)
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
# Warp the blank back to original image space using inverse perspective matrix (Minv)
newwarp = cv2.warpPerspective(color_warp, Minv, (frame.shape[1], frame.shape[0]))
# Combine the result with the original image
result = cv2.addWeighted(undist_img, 1, newwarp, 0.3, 0)
lane.count = lane.count+1
#return warped_img
#return color_warp
return result
#return unwarped_img
###Output
_____no_output_____
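The `left_curverad`/`right_curverad` values computed inside `fit_polynomial` use the standard curvature formula for a parabola fitted as x = a*y^2 + b*y + c. Restated on its own (pixel units; the sign of `a` is deliberately kept so the pipeline can detect sign flips in curvature):
```
def radius_of_curvature(a, b, y_eval):
    # R = (1 + (dx/dy)**2)**1.5 / (d2x/dy2), with dx/dy = 2*a*y + b and d2x/dy2 = 2*a
    return ((1 + (2 * a * y_eval + b) ** 2) ** 1.5) / (2 * a)
```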
###Markdown
Let's try classes
###Code
# Define a class to receive the characteristics of each line detection
class Line():
def __init__(self):
#Let's count the number of consecutive frames
self.count = 0
# was the line detected in the last iteration?
self.detected = False
#polynomial coefficients for the most recent fit
self.curve_fit = []
# Tracking variance for each lane
self.variance = []
#difference in fit coefficients between last and new fits. Just store the difference in coefficients for the last frame
self.coeff_diff = [[0,0,0],[0,0,0]]
#Lane width measured at the start of reset
self.lane_width = 0
#Let's track the midpoint of the previous frame
self.lane_bottom_centre = 0
#radius of curvature of the line in some units
self.rad_curv = []
# x values of the curve that we fit initially
#self.current_xfitted = []
# x values for detected line pixels
#self.allx = []
# y values for detected line pixels
#self.ally = []
# x values of the last n fits of the line
self.recent_xfitted = []
#average x values of the fitted line over the last n iterations
self.bestx = None
#polynomial coefficients averaged over the last n iterations
self.best_fit = None
#radius of curvature of the line in some units
self.radius_of_curvature = None
#distance in meters of vehicle center from the line
self.line_base_pos = None
lane=Line()
frame1= mpimg.imread("my_test_images/Highway_snaps/image (1).jpg")
frame2= mpimg.imread("my_test_images/Highway_snaps/image (2).jpg")
frame3= mpimg.imread("my_test_images/Highway_snaps/image (3).jpg")
frame4= mpimg.imread("my_test_images/Highway_snaps/image (4).jpg")
frame5= mpimg.imread("my_test_images/Highway_snaps/image (5).jpg")
frame6= mpimg.imread("my_test_images/Highway_snaps/image (6).jpg")
frame7= mpimg.imread("my_test_images/Highway_snaps/image (7).jpg")
frame8= mpimg.imread("my_test_images/Highway_snaps/image (8).jpg")
frame9= mpimg.imread("my_test_images/Highway_snaps/image (9).jpg")
(process_image(frame1))
(process_image(frame2))
plt.imshow(process_image(frame3))
(process_image(frame4))
(process_image(frame5))
(process_image(frame6))
(process_image(frame7))
(process_image(frame8))
(process_image(frame9))
matrix = [[[-8.8661072804927e-05, 0.14226773448007834, 526.2761730836886], [-8.8661072804927e-05, 0.14226773448007834, 526.2761730836886], [-8.8661072804927e-05, 0.14226773448007834, 526.2761730836886], [-8.8661072804927e-05, 0.14226773448007834, 526.2761730836886], [-8.8661072804927e-05, 0.14226773448007837, 526.2761730836886]], [[-0.0001476233231556424, 0.23167536974077843, 1331.6306119167443], [-0.0001476233231556424, 0.23167536974077843, 1331.6306119167443], [-0.0001476233231556424, 0.23167536974077843, 1331.6306119167443], [-0.0001476233231556424, 0.23167536974077843, 1331.6306119167443], [-0.0001476233231556424, 0.23167536974077843, 1331.6306119167443]]]
matrix[1]
###Output
_____no_output_____
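The `Line` class keeps five-frame histories (`curve_fit`, `variance`, `rad_curv`) that `modify_array` shifts by hand. Just as an alternative sketch, the same rolling buffer can be expressed with `collections.deque`; note that the original also seeds all five slots with the first value, which a plain deque does not do:
```
from collections import deque

history = deque(maxlen=5)      # oldest entry is dropped automatically
for value in [1, 2, 3, 4, 5, 6]:
    history.append(value)
print(list(history))           # [2, 3, 4, 5, 6], matching modify_array once the buffer is full
```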
###Markdown
Video test
###Code
# Define a class to receive the characteristics of each line detection
class Line():
def __init__(self):
#Let's count the number of consecutive frames
self.count = 0
# was the line detected in the last iteration?
self.detected = False
#polynomial coefficients for the most recent fit
self.curve_fit = []
# Tracking variance for each lane
self.variance = []
#difference in fit coefficients between last and new fits. Just store the difference in coefficients for the last frame
self.coeff_diff = [[0,0,0],[0,0,0]]
#Lane width measured at the start of reset
self.lane_width = 0
#Let's track the midpoint of the previous frame
self.lane_bottom_centre = 0
#radius of curvature of the line in some units
self.rad_curv = []
# x values of the curve that we fit initially
#self.current_xfitted = []
# x values for detected line pixels
#self.allx = []
# y values for detected line pixels
#self.ally = []
# x values of the last n fits of the line
self.recent_xfitted = []
#average x values of the fitted line over the last n iterations
self.bestx = None
#polynomial coefficients averaged over the last n iterations
self.best_fit = None
#radius of curvature of the line in some units
self.radius_of_curvature = None
#distance in meters of vehicle center from the line
self.line_base_pos = None
lane=Line()
project_output = 'output_files/video_clips/project_video_with_history_initial_condition_removed.mp4'
clip1 = VideoFileClip("project_video.mp4")
project_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!
%time project_clip.write_videofile(project_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(project_output))
###Output
_____no_output_____
###Markdown
.
###Code
# Define a class to receive the characteristics of each line detection
class Line():
def __init__(self):
#Let's count the number of consecutive frames
self.count = 0
# was the line detected in the last iteration?
self.detected = False
#polynomial coefficients for the most recent fit
self.curve_fit = []
# Tracking variance for each lane
self.variance = []
#difference in fit coefficients between last and new fits. Just store the difference in coefficients for the last frame
self.coeff_diff = [[0,0,0],[0,0,0]]
#Lane width measured at the start of reset
self.lane_width = 0
#Let's track the midpoint of the previous frame
self.lane_bottom_centre = 0
#radius of curvature of the line in some units
self.rad_curv = []
# x values of the curve that we fit initially
#self.current_xfitted = []
# x values for detected line pixels
#self.allx = []
# y values for detected line pixels
#self.ally = []
# x values of the last n fits of the line
self.recent_xfitted = []
#average x values of the fitted line over the last n iterations
self.bestx = None
#polynomial coefficients averaged over the last n iterations
self.best_fit = None
#radius of curvature of the line in some units
self.radius_of_curvature = None
#distance in meters of vehicle center from the line
self.line_base_pos = None
lane=Line()
project_output = 'output_files/video_clips/challenge_vid_newpipeline.mp4'
clip1 = VideoFileClip("challenge_video.mp4")
project_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!
%time project_clip.write_videofile(project_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(project_output))
###Output
_____no_output_____ |
Codes/riverlog_for_gis.ipynb | ###Markdown
山河事件簿 (RiverLog) analysis tool. riverlog_for_gis.py needs to be in the same directory. For the documentation, see: https://docs.google.com/document/d/1iM_-YdZ8LFFbPkcL-4Irp9Z0laTXgUL2MzfwE4SWtFQ/editheading=h.lumlll4mf62a
Example calls for every API download, each saved to CSV:
```
api_to_csv("rain-dailySum",[2020])
api_to_csv("rain-10minSum",["2020-09-01"])
api_to_csv("rain-station",None)
api_to_csv("rain-rainData",["2020-09-01","23","24","121","122"])
api_to_csv("waterLevel-station",None)
api_to_csv("waterLevel-waterLevelData",["2020-09-01","23","24","121","122"])
api_to_csv("waterLevelDrain-station",None)
api_to_csv("waterLevelDrain-waterLevelDrainData",["2019-12-03","23","24","120","122"])
api_to_csv("waterLevelAgri-station",None)
api_to_csv("waterLevelAgri-waterLevelAgriData",["2019-12-03","23","24","120","122"])
api_to_csv("sewer-station",None)
api_to_csv("sewer-sewerData",["2019-12-02","24","25","121","122"])
api_to_csv("tide-station",None)
api_to_csv("tide-tideData",["2020-09-01","23","24","121","122"])
api_to_csv("pump-station",None)
api_to_csv("pump-pumpData",["2019-12-03","25","26","121","122"])
api_to_csv("reservoir-info",None)
api_to_csv("reservoir-reservoirData",["2020-09-01"])
api_to_csv("flood-station",None)
api_to_csv("flood-floodData",["2020-09-01"])
api_to_csv("alert-alertData",["2020-09-01"])
api_to_csv("alert-alertStatistic",[2020])
api_to_csv("alert-typhoonData",["2020-09-01"])
api_to_csv("elev-gridData",["7","23","24","120","121"])
api_to_csv("statistic-waterUseAgriculture",None)
api_to_csv("statistic-waterUseCultivation",None)
api_to_csv("statistic-waterUseLivestock",None)
api_to_csv("statistic-waterUseLiving",None)
api_to_csv("statistic-waterUseIndustry",None)
api_to_csv("statistic-waterUseOverview",None)
api_to_csv("statistic-monthWaterUse",None)
api_to_csv("statistic-reservoirUse",None)
api_to_csv("statistic-reservoirSiltation",None)
```
Init All
###Code
from riverlog_for_gis import *
from datetime import date
gd = {}
def get_value_by_index(df,keyvalue, target_col):
"""
find df's column(key) = value, return value of target_col
keyvalue: col_name=value
"""
cols = keyvalue.split("=")
if len(cols)!=2:
return ""
keyvalue_key = cols[0]
keyvalue_value = cols[1]
if not target_col in df.columns:
return ""
values = df[df[keyvalue_key]==keyvalue_value][target_col].values.tolist()
if len(values)>0:
value = values[0]
else:
value = ""
return value
###Output
_____no_output_____
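A usage sketch for `get_value_by_index` with a small hypothetical table (the column names and values here are invented for illustration; real frames come from `api_to_csv`):
```
import pandas as pd

df_demo = pd.DataFrame({'stationName': ['StationA', 'StationB'],
                        'basin': ['BasinX', 'BasinY']})
# Look up the 'basin' of the row whose stationName equals 'StationA'
print(get_value_by_index(df_demo, 'stationName=StationA', 'basin'))  # -> BasinX
```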
###Markdown
Multi-reservoir storage-percentage analysis. Get/Load Data: the time range needed can be set here.
###Code
def reservoir_load(bag,date_start, date_end):
df_info = api_to_csv("reservoir-info",None)
filename=api_to_csv_range(date_start,date_end,"reservoir-reservoirData",None,"ObservationTime")
dest_name="%s_GMT8.csv" %(filename[:-4])
df=csv_add_gmt8(filename,"ObservationTime", dest_name )
#handle info
df_info=df_info[df_info['Year']==105]
df_info.drop_duplicates(subset="id")
df_info["id"] = pd.to_numeric(df_info["id"])
#merge/filter
df2=df.merge(df_info, how='left', left_on='ReservoirIdentifier', right_on='id')
df2=df2.drop_duplicates(subset=["ObservationTime","ReservoirIdentifier"],keep='last')
df2=df2[df2['ReservoirIdentifier'].isin([10405,10201,10205])] #,20101,20201
#Calculate, Pivot
df2["ObservationTimeGMT8"] = pd.to_datetime(df2['ObservationTimeGMT8'])
df2['percent']=df2['EffectiveWaterStorageCapacity']/df2['EffectiveCapacity']*100
df2=df2[df2['percent']<=100]
df3 = df2.pivot(index='ObservationTimeGMT8', columns='ReservoirName', values='percent')
bag['reservoir-info']=df_info
bag['reservoir-reservoirData']=df2
bag['reservoir_pivot']=df3
def reservoir_plot(bag):
#plot
%matplotlib notebook
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
myfont = FontProperties(fname=r'/Library/Fonts/Microsoft/SimSun.ttf')
df = bag['reservoir_pivot']
df.plot()
plt.title("多水庫2021庫容比例",fontproperties=myfont)
plt.legend(prop=myfont)
plt.xticks(fontname = 'SimSun',size=8)
plt.yticks(fontname = 'SimSun',size=8)
plt.xlabel('時間',fontproperties=myfont)
plt.ylabel('百分比',fontproperties=myfont)
plt.show
reservoir_load(gd,"2021-01-01","2021-06-06")
reservoir_plot(gd)
###Output
reservoir-info: output/reservoir-info.csv saved, shape = (152, 22)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-01.csv saved, shape = (274, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-02.csv saved, shape = (296, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-03.csv saved, shape = (304, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-04.csv saved, shape = (310, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-05.csv saved, shape = (318, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-06.csv saved, shape = (321, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-07.csv saved, shape = (321, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-08.csv saved, shape = (322, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-09.csv saved, shape = (301, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-10.csv saved, shape = (306, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-11.csv saved, shape = (320, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-12.csv saved, shape = (319, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-13.csv saved, shape = (302, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-14.csv saved, shape = (218, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-15.csv saved, shape = (203, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-16.csv saved, shape = (208, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-17.csv saved, shape = (208, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-18.csv saved, shape = (211, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-19.csv saved, shape = (211, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-20.csv saved, shape = (174, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-21.csv saved, shape = (269, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-22.csv saved, shape = (268, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-23.csv saved, shape = (250, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-24.csv saved, shape = (279, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-25.csv saved, shape = (315, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-26.csv saved, shape = (322, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-27.csv saved, shape = (323, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-28.csv saved, shape = (322, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-29.csv saved, shape = (324, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-30.csv saved, shape = (305, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-01-31.csv saved, shape = (286, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-01.csv saved, shape = (292, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-02.csv saved, shape = (297, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-03.csv saved, shape = (313, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-04.csv saved, shape = (323, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-05.csv saved, shape = (322, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-06.csv saved, shape = (303, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-07.csv saved, shape = (305, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-08.csv saved, shape = (322, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-09.csv saved, shape = (323, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-10.csv saved, shape = (315, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-11.csv saved, shape = (317, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-12.csv saved, shape = (317, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-13.csv saved, shape = (315, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-14.csv saved, shape = (306, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-15.csv saved, shape = (307, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-16.csv saved, shape = (289, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-17.csv saved, shape = (309, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-18.csv saved, shape = (300, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-19.csv saved, shape = (316, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-20.csv saved, shape = (322, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-21.csv saved, shape = (303, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-22.csv saved, shape = (315, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-23.csv saved, shape = (323, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-24.csv saved, shape = (302, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-25.csv saved, shape = (309, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-26.csv saved, shape = (321, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-27.csv saved, shape = (302, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-02-28.csv saved, shape = (305, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-01.csv saved, shape = (316, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-02.csv saved, shape = (321, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-03.csv saved, shape = (319, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-04.csv saved, shape = (318, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-05.csv saved, shape = (321, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-06.csv saved, shape = (299, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-07.csv saved, shape = (306, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-08.csv saved, shape = (324, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-09.csv saved, shape = (324, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-10.csv saved, shape = (321, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-11.csv saved, shape = (290, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-12.csv saved, shape = (264, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-13.csv saved, shape = (258, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-14.csv saved, shape = (262, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-15.csv saved, shape = (315, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-16.csv saved, shape = (323, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-17.csv saved, shape = (322, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-18.csv saved, shape = (322, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-19.csv saved, shape = (323, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-20.csv saved, shape = (301, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-21.csv saved, shape = (312, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-22.csv saved, shape = (321, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-23.csv saved, shape = (320, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-24.csv saved, shape = (322, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-25.csv saved, shape = (323, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-26.csv saved, shape = (322, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-27.csv saved, shape = (303, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-28.csv saved, shape = (308, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-29.csv saved, shape = (323, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-30.csv saved, shape = (320, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-03-31.csv saved, shape = (321, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-04-01.csv saved, shape = (323, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-04-02.csv saved, shape = (300, 4)
reservoir-reservoirData: output/reservoir-reservoirData_2021-04-03.csv saved, shape = (300, 4)
###Markdown
Today's flooding: see which stations have recorded flooding today. The station county/township lookup must be prepared in advance as flood-station_縣市鄉鎮.csv in the same directory; see the documentation for how to prepare it.
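As a quick sanity check before running the functions below, here is a minimal sketch (not part of the original notebook) that loads the prepared lookup file and verifies it has the columns the merge code relies on; only the column names `_id`, `COUNTYNAME`, and `TOWNNAME` are taken from the code below, the rest is illustrative.

```python
import pandas as pd

# Minimal sketch: confirm the prepared lookup file has the columns used by flood_load().
lookup = pd.read_csv("flood-station_縣市鄉鎮.csv")
required = {"_id", "COUNTYNAME", "TOWNNAME"}
missing = required - set(lookup.columns)
if missing:
    raise ValueError(f"lookup file is missing columns: {missing}")
print(lookup.head())
```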
###Code
def flood_load(bag,date_str,limit=0):
    #load the station county/township supplementary data
df_info_縣市鄉鎮 = pd.read_csv("flood-station_縣市鄉鎮.csv")
#get data, process
df_info=api_to_csv("flood-station",None)
df_info=df_info.merge(df_info_縣市鄉鎮, how='left', left_on='_id', right_on='_id')
df_info
#date_str = date.today() # 2021-06-07
print("Today is %s" %(date_str))
df = api_to_csv("flood-floodData",[date_str])
df["timeGMT8"] = df['time'].apply(date_to_gmt8)
df["timeGMT8"] = pd.to_datetime(df['timeGMT8'])
df=df.merge(df_info_縣市鄉鎮, how='left', left_on='stationID', right_on='_id')
df=df.drop_duplicates(subset=["time","stationName"],keep='last')
df['stationName_city']=df['COUNTYNAME'] + '|' + df['TOWNNAME'] + '|' + df['stationName']
#filter, sort
    df=df[df['value']>=limit] #adjustable flood-depth threshold; raise it when there are many flood records so the output stays manageable
    df=df.sort_values(by=['timeGMT8'])
bag['flood-station_縣市鄉鎮']=df_info_縣市鄉鎮
bag['flood-station']=df_info
bag['flood-floodData']=df
def flood_plot(bag,date_str):
%matplotlib notebook
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
df = bag['flood-floodData']
myfont = FontProperties(fname=r'/Library/Fonts/Microsoft/SimSun.ttf')
df2 = df.pivot(index='timeGMT8', columns='stationName_city', values='value')
df2.plot(style='.-')
title = "今日 %s 淹水感測器淹水值" %(date_str)
plt.title(title,fontproperties=myfont)
plt.legend(prop=myfont)
plt.xticks(fontname = 'SimSun',size=8)
plt.yticks(fontname = 'SimSun',size=8)
plt.xlabel('時間',fontproperties=myfont)
plt.ylabel('公分',fontproperties=myfont)
fig = plt.gcf()
fig.set_size_inches(8.5, 4.5)
    plt.show()
#list the flooded stations
def flood_list(bag):
df = bag['flood-floodData']
ary = df['stationName_city'].unique()
for name in ary:
print(name)
flood_load(gd,'2021-06-05',20)
flood_plot(gd,'2021-06-05')
#flood_list(gd)
###Output
flood-station: output/flood-station.csv saved, shape = (1269, 4)
Today is 2021-06-05
flood-floodData: output/flood-floodData_2021-06-05.csv saved, shape = (47309, 3)
###Markdown
Rain gauge stations
###Code
#list the Hsinchu rain stations
def rain_station_view():
df_info = api_to_csv("rain-station",None)
filter_city = df_info['city']=='新竹縣'
df_info = df_info[filter_city]
return df_info
def rain_load(bag, date_str,limit=0,reload=False):
df_info = api_to_csv("rain-station",None)
#date_str = date.today() # 2021-06-07
print("Today is %s" %(date_str))
df=api_to_csv("rain-rainData",[date_str,"20","26","120","122"],reload)
df["timeGMT8"] = df['time'].apply(date_to_gmt8)
df["timeGMT8"] = pd.to_datetime(df['timeGMT8'])
df=df.merge(df_info, how='left', left_on='stationID', right_on='stationID')
df=df.drop_duplicates(subset=["timeGMT8","stationID"],keep='last')
df['stationName']=df['city'] + '|' + df['town'] + '|' + df['name'] + '|' + df['stationID']
#filter, sort
    df=df[df['now']>=limit] #adjustable rainfall threshold; raise it when there is a lot of data so the output stays manageable
df=df.sort_values(by=['timeGMT8','stationID'])
bag['rain-station']=df_info
bag['rain-rainData']=df
#pivot of today's rainfall
def rain_plot(bag,date_str):
%matplotlib notebook
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
df = bag['rain-rainData']
myfont = FontProperties(fname=r'/Library/Fonts/Microsoft/SimSun.ttf')
df2 = df.pivot(index='timeGMT8', columns='stationName', values='now')
df2.plot(style='.-')
title = "今日 %s 雨量站值" %(date_str)
plt.title(title,fontproperties=myfont)
plt.legend(prop=myfont)
plt.xticks(fontname = 'SimSun',size=8)
plt.yticks(fontname = 'SimSun',size=8)
plt.xlabel('時間',fontproperties=myfont)
plt.ylabel('mm',fontproperties=myfont)
fig = plt.gcf()
fig.set_size_inches(8.5, 4.5)
    plt.show()
def rain_hourdiff(bag, time_set,station_city):
    #hourly rainfall
df_info = gd['rain-station']
df = gd['rain-rainData']
#df_info.head()
f1=df_info['city'].isin(station_city)
#df_info[f1].values.tolist()
#df_info[f1]['city'].unique()
stations = df_info[f1]['stationID'].tolist()
#print(stations)
#df.head()
#time_set=['2021-06-10 15:00:00','2021-06-10 16:00:00']
f_time=df['timeGMT8'].isin(time_set)
f_station=df['stationID'].isin(stations)
df_f = df[f_station & f_time]
#df[f_station]
#df['city'].unique()
df_pivot = df_f.pivot(index='stationName', columns='timeGMT8', values='now')
df_pivot['rain_1hour']=df_pivot[time_set[1]]-df_pivot[time_set[0]]
bag['rain-hourdiff']=df_pivot
def to_slot_10min(t_src):
#t_now = datetime.now()
#t_added = timedelta(minutes = 10)
#t_slot= t_src - t_added
slot_min=int(int(t_src.minute/10)*10)
date_str="%i-%i-%i %i:%02i:00" %(t_src.year,t_src.month,t_src.day,t_src.hour,slot_min)
return date_str
def get_2slot(t_src,hour):
#print("t_src=%s" %(t_src))
date_str = to_slot_10min(t_src)
date_obj = datetime.strptime(date_str, "%Y-%m-%d %H:%M:%S")
date_obj2 = date_obj + timedelta(hours = hour)
date_str2 = date_obj2.strftime("%Y-%m-%d %H:%M:%S")
return [date_str,date_str2]
def rain_alarm_hour(bag,station_city,limit):
    rain_load(gd, date.today(), reload=True)  # pass reload by keyword; the third positional argument of rain_load is limit
#time_set=['2021-06-10 15:00:00','2021-06-10 16:00:00']
time_now = datetime.now()
time_set = get_2slot(time_now-timedelta(minutes = 80),1)
#print(time_set)
#station_city=['新竹縣','新竹市']
rain_hourdiff(gd,time_set,station_city)
df_pivot = gd['rain-hourdiff']
df_pivot=df_pivot[df_pivot['rain_1hour']>limit]
df_pivot=df_pivot.sort_values(by=['rain_1hour'],ascending=False)
print("-----\nMonitor time: %s : %s 雨量站時雨量 > %i mm -----\n" %(time_now.strftime("%Y-%m-%d %H:%M:%S"), station_city,limit))
print(df_pivot)
def rain_day_max(bag,date_str,station_city):
    rain_load(bag, date_str, reload=True)  # pass reload by keyword; the third positional argument of rain_load is limit
#station_city=['新竹縣','新竹市']
df_info = gd['rain-station']
df = gd['rain-rainData']
#df_info.head()
f1=df_info['city'].isin(station_city)
#df_info[f1].values.tolist()
#df_info[f1]['city'].unique()
stations = df_info[f1]['stationID'].tolist()
#f_time=df['timeGMT8'].isin(time_set)
f_station=df['stationID'].isin(stations)
df_f = df[f_station]
df_agg=df_f.groupby('stationName').agg({'now': ['max']})
bag['rain_day_max']=df_agg
#rain_station_view()
rain_load(gd, date.today(),20,True)
rain_plot(gd,date.today())
if 0:
#time_set=['2021-06-10 15:00:00','2021-06-10 16:00:00']
time_set = get_2slot(datetime.now()-timedelta(minutes = 80),1)
print(time_set)
station_city=['新竹縣','新竹市']
rain_hourdiff(gd,time_set,station_city)
df_pivot = gd['rain-hourdiff']
df_pivot=df_pivot[df_pivot['rain_1hour']>0]
df_pivot=df_pivot.sort_values(by=['rain_1hour'],ascending=False)
print(df_pivot)
if 0:
    #today's rainfall
rain_day_max(gd,date.today(),['臺東縣'])
df_agg=gd['rain_day_max']
print(df_agg.sort_values([('now', 'max')], ascending=False).head())
###Output
rain-station: output/rain-station.csv saved, shape = (2149, 6)
Today is 2021-06-12
rain-rainData: output/rain-rainData_2021-06-12.csv saved, shape = (55155, 3)
###Markdown
Monitor
###Code
#Alarm: hourly-rainfall monitoring
station_city=['臺東縣','屏東縣']
while True:
#print(datetime.now())
rain_alarm_hour(gd,station_city,1)
time.sleep(600)
###Output
rain-station: output/rain-station.csv saved, shape = (2149, 6)
Today is 2021-06-12
rain-rainData: output/rain-rainData_2021-06-12.csv saved, shape = (51897, 3)
-----
Monitor time: 2021-06-12 08:28:04 : ['臺東縣', '屏東縣'] 雨量站時雨量 > 1 mm -----
Empty DataFrame
Columns: [2021-06-12 07:00:00, 2021-06-12 08:00:00, rain_1hour]
Index: []
###Markdown
Water level stations
###Code
#look up water level stations
#frequently used stations: '河川水位測站-內灣-1300H013', '河川水位測站-經國橋-1300H017', '河川水位測站-上坪-1300H014'
def waterLevel_view():
df_info = api_to_csv("waterLevel-station",None) #'BasinIdentifier',ObservatoryName
filter_river = df_info['RiverName']=='上坪溪'
#filter_name = df_info['BasinIdentifier']=='1300H017'
    # look up the station name from an ID, or the ID from a name (as used below)
#value=get_value_by_index(df_info,"BasinIdentifier=1140H037", 'ObservatoryName')
value=get_value_by_index(df_info,"ObservatoryName=河川水位測站-內灣-1300H013", 'BasinIdentifier')
print(value)
return df_info[filter_river]
#prepare today's data
def waterLevel_load(bag,date_str,reload=False):
df_info = api_to_csv("waterLevel-station",None) #'BasinIdentifier',ObservatoryName
#date_str=date.today() #2021-06-08
#date_str='2021-06-08'
df=api_to_csv("waterLevel-waterLevelData",[date_str,"23","25","120","123"],reload) #'RecordTime', 'StationIdentifier', 'WaterLevel'
df["RecordTimeGMT8"] = df['RecordTime'].apply(date_to_gmt8)
df["RecordTimeGMT8"] = pd.to_datetime(df['RecordTimeGMT8'])
df=df.merge(df_info, how='left', left_on='StationIdentifier', right_on='BasinIdentifier')
df=df.drop_duplicates(subset=["RecordTimeGMT8","StationIdentifier"],keep='last')
df['stationName']=df['StationIdentifier'] + '|' + df['ObservatoryName']
#filter, sort
    df=df[df['WaterLevel']>0] #adjustable water-level threshold; raise it when there is a lot of data so the output stays manageable
df=df.sort_values(by=['RecordTimeGMT8','StationIdentifier'])
bag['waterLevel-station']=df_info
bag['waterLevel-waterLevelData']=df
#plot a single station
def waterLevel_plotA(bag, StationId):
%matplotlib notebook
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
df = bag['waterLevel-waterLevelData']
filter_river = df['StationIdentifier']==StationId
#filter_river = df['RiverName'].str.contains('頭前溪', na=False)
df = df[filter_river]
myfont = FontProperties(fname=r'/Library/Fonts/Microsoft/SimSun.ttf')
df2 = df.pivot(index='RecordTimeGMT8', columns='stationName', values='WaterLevel')
df2.plot(style='.-')
title = "今日 %s 水位值" %(date_str)
plt.title(title,fontproperties=myfont)
plt.legend(prop=myfont)
plt.xticks(fontname = 'SimSun',size=8)
plt.yticks(fontname = 'SimSun',size=8)
plt.xlabel('時間',fontproperties=myfont)
plt.ylabel('米',fontproperties=myfont)
fig = plt.gcf()
fig.set_size_inches(8.0, 4.5)
    plt.show()
#plot two water level stations on one figure
def waterLevel_plotB(bag, river_pair):
%matplotlib notebook
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
df_info = bag['waterLevel-station']
df = bag['waterLevel-waterLevelData']
#river_pair=['河川水位測站-內灣-1300H013','河川水位測站-經國橋-1300H017' ] #河川水位測站-上坪-1300H014
river_pair.append(get_value_by_index(df_info,"ObservatoryName="+river_pair[0], 'BasinIdentifier'))
river_pair.append(get_value_by_index(df_info,"ObservatoryName="+river_pair[1], 'BasinIdentifier'))
river1 = df['BasinIdentifier']==river_pair[2+0]
river2 = df['BasinIdentifier']==river_pair[2+1]
river_both = river1 | river2
df = df[river_both]
myfont = FontProperties(fname=r'/Library/Fonts/Microsoft/SimSun.ttf')
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.set_ylabel(river_pair[0],fontproperties=myfont) #,loc="top"
df[river1].plot(x='RecordTimeGMT8',y='WaterLevel',ax=ax1,label=river_pair[0],color='red',legend=None) #no need to specify for first axis
ax2.set_ylabel(river_pair[1],fontproperties=myfont)
df[river2].plot(x='RecordTimeGMT8',y='WaterLevel',ax=ax2,label=river_pair[1],color='blue',legend=None)
title = "今日 %s 水位值, 紅左藍右" %(date_str)
plt.title(title,fontproperties=myfont)
plt.xticks(fontname = 'SimSun',size=8)
plt.yticks(fontname = 'SimSun',size=8)
fig = plt.gcf()
fig.set_size_inches(7.0, 5)
    plt.show()
gd={}
#waterLevel_view()
waterLevel_load(gd,'2021-06-08')
#gd['waterLevel-station']
#waterLevel_plotA(gd, '1300H017')
river_pair=['河川水位測站-上坪-1300H014','河川水位測站-經國橋-1300H017']
waterLevel_plotB(gd, river_pair)
###Output
waterLevel-station: output/waterLevel-station.csv saved, shape = (5176, 13)
waterLevel-waterLevelData: output/waterLevel-waterLevelData_2021-06-08.csv saved, shape = (5316, 3)
###Markdown
Combined plot of one rain gauge and one water level station, to observe the relationship between rainfall and water level
###Code
def rain_load1(bag, date_str,reload=False):
df_info = api_to_csv("rain-station",None)
#date_str = date.today() # 2021-06-07
print("Today is %s" %(date_str))
df=api_to_csv("rain-rainData",[date_str,"23","25","121","122"],reload)
df["timeGMT8"] = df['time'].apply(date_to_gmt8)
df["timeGMT8"] = pd.to_datetime(df['timeGMT8'])
df=df.merge(df_info, how='left', left_on='stationID', right_on='stationID')
df=df.drop_duplicates(subset=["timeGMT8","stationID"],keep='last')
df['stationName']=df['city'] + '|' + df['town'] + '|' + df['name'] + '|' + df['stationID']
#filter, sort
#df=df[df['now']>10] #可改雨量值, 有很多淹水資料時,改高一點比較不會太多
df=df.sort_values(by=['timeGMT8','stationID'])
bag['rain-station']=df_info
bag['rain-rainData']=df
def rain_waterLevel_plot(bag,pair):
%matplotlib notebook
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
#pair=['內灣國小',河川水位測站-內灣-1300H013']
myfont = FontProperties(fname=r'/Library/Fonts/Microsoft/SimSun.ttf')
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
    #drawing ax2 first makes it render correctly [FIXME]
ax2.set_ylabel(pair[1],fontproperties=myfont)
bag['df2'].plot(x='RecordTimeGMT8',y='WaterLevel',ax=ax2,label=pair[1],color='blue',legend=None)
ax1.set_ylabel(pair[2+0],fontproperties=myfont) #,loc="top"
bag['df1'].plot(x='timeGMT8',y='now',ax=ax1,label=pair[2+0],color='red',legend=None) #no need to specify for first axis
title = "今日 %s 雨量/水位值, 紅左藍右" %(date_str)
plt.title(title,fontproperties=myfont)
plt.xticks(fontname = 'SimSun',size=8)
plt.yticks(fontname = 'SimSun',size=8)
fig = plt.gcf()
fig.set_size_inches(7.0, 5)
    plt.show()
date_str='2021-06-09'
pair=['81D650','河川水位測站-內灣-1300H013'] #rain station 81D650: 內灣國小
#rain
rain_load1(gd,date_str,True)
df_rain = gd['rain-rainData']
df_rain_info = gd['rain-station']
gd['df1']=df_rain[df_rain['stationID']==pair[0]]
#waterLevel
waterLevel_load(gd,date_str,False)
df_waterLevel_info=gd['waterLevel-station']
df_waterLevel = gd['waterLevel-waterLevelData']
pair.append(get_value_by_index(df_rain,"stationID="+pair[0], 'stationName'))
pair.append(get_value_by_index(df_waterLevel_info,"ObservatoryName="+pair[1], 'BasinIdentifier'))
filter1 = df_waterLevel['BasinIdentifier']==pair[2+1]
gd['df2']=df_waterLevel[filter1]
rain_waterLevel_plot(gd,pair)
#gd['df1']
#df_rain_info
###Output
rain-station: output/rain-station.csv saved, shape = (2149, 6)
Today is 2021-06-09
rain-rainData: output/rain-rainData_2021-06-09.csv saved, shape = (46782, 3)
waterLevel-station: output/waterLevel-station.csv saved, shape = (5176, 13)
waterLevel-waterLevelData: output/waterLevel-waterLevelData_2021-06-09.csv saved, shape = (4950, 3)
###Markdown
Change over a time window
###Code
# today's water-level range (daily max minus min)
def mydiff(series):
values = series.tolist()
return max(values)-min(values)
def waterLevel_diff(bag,date_str,filter_def=None): #BasinIdentifier,RiverName,stationName
waterLevel_load(gd,date_str,True)
df_waterLevel_info=gd['waterLevel-station']
df_waterLevel = gd['waterLevel-waterLevelData']
#print(df_waterLevel.columns)
if filter_def:
cols=filter_def.split("=")
#f1=df_waterLevel[cols[0]]==cols[1]
        f1=df_waterLevel[cols[0]].str.contains(cols[1], na=False) #the RiverName column values seem unreliable, so use substring matching instead of equality
df=df_waterLevel[f1]
else:
df=df_waterLevel
df_agg=df.groupby('stationName').agg({'WaterLevel': ['max',mydiff]})
return df_agg
df_agg = waterLevel_diff(gd,date.today(),"RiverName=頭前溪") #"BasinIdentifier=1300H013"
df_agg.sort_values([('WaterLevel', 'mydiff')], ascending=False)
###Output
waterLevel-station: output/waterLevel-station.csv saved, shape = (5176, 13)
waterLevel-waterLevelData: output/waterLevel-waterLevelData_2021-06-12.csv saved, shape = (1605, 3)
###Markdown
Alerts. For geocodes, refer to 郵遞區號.csv (the postal-code file).
###Code
date_str=date.today()
df=api_to_csv("alert-alertData",[date_str])
if 0:
    #today's rainfall
station_city=['屏東縣','臺東縣']
rain_day_max(gd,date.today(),station_city)
df_agg=gd['rain_day_max']
print("今日 %s 最高雨量" %(station_city))
print(df_agg.sort_values([('now', 'max')], ascending=False).head())
df
###Output
alert-alertData: output/alert-alertData_2021-06-12.csv saved, shape = (2, 14)
rain-station: output/rain-station.csv saved, shape = (2149, 6)
Today is 2021-06-12
rain-rainData: output/rain-rainData_2021-06-12.csv saved, shape = (50811, 3)
今日 ['屏東縣', '臺東縣'] 最高雨量
now
max
stationName
臺東縣|太麻里鄉|多良|81S860 27.5
屏東縣|滿州鄉|檳榔|C0R280 24.0
屏東縣|滿州鄉|佳樂水|C0R680 21.5
臺東縣|綠島鄉|綠島|C0S730 21.5
臺東縣|太麻里鄉|賓茂國小|O1S660 19.0
|
Corpus analysis-Finished!!.ipynb | ###Markdown
Some basic analysis on the corpus:
###Code
with open('BIGDATA.txt','r') as myfile:
data_string=myfile.read().replace('\n','')
print("This string has", len(data_string), "characters.")
import nltk
len(data_string)
WordTokens=nltk.word_tokenize(data_string)
type(WordTokens)
###Output
_____no_output_____
###Markdown
We can see here that the corpus has not been lowercased yet:
###Code
print(WordTokens.count("natural"),"",WordTokens.count("Natural"))
###Output
293 10
###Markdown
Let's lowercase the corpus:
###Code
LowerWordTokens=[]
for token in WordTokens:
if token.isalpha():
LowerWordTokens.append(token.lower())
LowerWordTokens[:50]
LowerFreqs=nltk.FreqDist(LowerWordTokens)
LowerFreqs
LowerFreqs.tabulate(10)
%matplotlib inline
LowerFreqs.plot(30)
###Output
ERROR:root:Line magic function `%mtplotlib` not found.
###Markdown
Lots of stopwords are in there, so the next step is to get rid of them:
###Code
stopwords=nltk.corpus.stopwords.words("english")
print(stopwords)
LowerContentWordTokens=[]
for token in LowerWordTokens:
if token not in stopwords:
LowerContentWordTokens.append(token)
LowerContentWordTokens[:50]
LowerContentFreqs=nltk.FreqDist(LowerContentWordTokens)
LowerContentFreqs
import matplotlib.pyplot as plt
plt.xlabel('Words')
plt.ylabel('Counts')
plt.title('Top Frequency Content Terms in the corpus')
LowerContentFreqs.plot(30)
###Output
_____no_output_____ |
KCWI/.ipynb_checkpoints/KCWI_calcs-checkpoint.ipynb | ###Markdown
KCWI_calcs.ipynb functions from Busola Alabi, Apr 2018
###Code
from __future__ import division
import glob
import re
import os, sys
from astropy.io.fits import getheader, getdata
from astropy.wcs import WCS
import astropy.units as u
import numpy as np
from scipy import interpolate
import logging
from time import time
import matplotlib.pyplot as plt
from pylab import *
import matplotlib as mpl
import matplotlib.ticker as mtick
from scipy.special import gamma
def make_obj(flux, grat_wave, f_lam_index):
'''
'''
w = np.arange(3000)+3000.
p_A = flux/(2.e-8/w)*(w/grat_wave)**f_lam_index
return w, p_A
def inst_throughput(wave, grat):
'''
'''
eff_bl = np.asarray([0.1825,0.38,0.40,0.46,0.47,0.44])
eff_bm = np.asarray([0.1575, 0.33, 0.36, 0.42, 0.48, 0.45])
eff_bh1 = np.asarray([0., 0.0, 0.0, 0.0, 0.0, 0.])
eff_bh2 = np.asarray([0., 0.18, 0.3, 0.4, 0.28, 0.])
eff_bh3 = np.asarray([0., 0., 0., 0.2, 0.29, 0.31])
wave_0 = np.asarray([355.,380.,405.,450.,486.,530.])*10.
wave_bl = np.asarray([355., 530.])*10.
wave_bm = np.asarray([355., 530.])*10.
wave_bh1 = np.asarray([350., 450.])*10.
wave_bh2 = np.asarray([405., 486.])*10.
wave_bh3 = np.asarray([405., 530.])*10.
trans_atmtel = np.asarray([0.54, 0.55, 0.56, 0.56, 0.56, 0.55])
if grat=='BL':
eff = eff_bl*trans_atmtel
wave_range = wave_bl
if grat=='BM':
eff = eff_bm*trans_atmtel
wave_range = wave_bm
if grat=='BH1':
eff = eff_bh1*trans_atmtel
wave_range = wave_bh1
if grat=='BH2':
eff = eff_bh2*trans_atmtel
wave_range = wave_bh2
if grat=='BH3':
eff = eff_bh3*trans_atmtel
wave_range = wave_bh3
wave1 = wave
interpfunc = interpolate.interp1d(wave_0, eff, fill_value="extrapolate") #this is the only way I've gotten this interpolation to work
eff_int = interpfunc(wave1)
idx = np.where((wave1 <= wave_range[0]) | (wave1 > wave_range[1]))
eff_int[idx] = 0.
return eff_int
def obj_cts(w, f0, grat, exposure_time):
'''
'''
A_geo = np.pi/4.*(10.e2)**2
eff = inst_throughput(w, grat)
cts = eff*A_geo*exposure_time*f0
return cts
def sky(wave):
'''
'''
with open('mk_sky.dat') as f:
lines = (line for line in f if not line.startswith('#'))
skydata = np.loadtxt(lines, skiprows=2)
ws = skydata[:,0]
fs = skydata[:,1]
f_nu_data = getdata('lris_esi_skyspec_fnu_uJy.fits')
f_nu_hdr = getheader('lris_esi_skyspec_fnu_uJy.fits')
dw = f_nu_hdr["CDELT1"]
w0 = f_nu_hdr["CRVAL1"]
ns = len(fs)
ws = np.arange(ns)*dw + w0
f_lam = f_nu_data[:len(ws)]*1e-29*3.*1e18/ws/ws
interpfunc = interpolate.interp1d(ws,f_lam, fill_value="extrapolate")
fs_int = interpfunc(wave)
return fs_int
def sky_mk(wave):
'''
'''
with open('mk_sky.dat') as f:
lines = (line for line in f if not line.startswith('#'))
skydata = np.loadtxt(lines, skiprows=2)
ws = skydata[:,0]
fs = skydata[:,1]
f_nu_data = getdata('lris_esi_skyspec_fnu_uJy.fits')
f_nu_hdr = getheader('lris_esi_skyspec_fnu_uJy.fits')
dw = f_nu_hdr["CDELT1"]
w0 = f_nu_hdr["CRVAL1"]
ns = len(fs)
ws = np.arange(ns)*dw + w0
f_lam = f_nu_data[:len(ws)]*1e-29*3.*1e18/ws/ws
p_lam = f_lam/(2.e-8/ws)
interpfunc = interpolate.interp1d(ws,p_lam, fill_value="extrapolate") #using linear since argument not set in idl
ps_int = interpfunc(wave)
return ps_int
def sky_cts(w, grat, exposure_time, airmass=1.2, area=1.0):
'''
'''
A_geo = np.pi/4.*(10.e2)**2
eff = inst_throughput(w, grat)
cts = eff*A_geo*exposure_time*sky_mk(w)*airmass*area
return cts
def ETC(slicer, grating, grat_wave, f_lam_index, seeing, exposure_time, ccd_bin, spatial_bin=[],
spectral_bin=None, nas=True, sb=True, mag_AB=None, flux=None, Nframes=1, emline_width=None):
"""
Parameters
==========
slicer: str
L/M/S (Large, Medium or Small)
grating: str
BH1, BH2, BH3, BM, BL
grating wavelength: float or int
3400. < ref_wave < 6000.
f_lam_index: float
source f_lam ~ lam^f_lam_index, default = 0
seeing: float
arcsec
exposure_time: float
seconds for source image (total) for all frames
ccd_bin: str
'1x1','2x2'"
spatial_bin: list
[dx,dy] bin in arcsec x arcsec for binning extended emission flux. if sb=True then default is 1 x 1 arcsec^2'
spectral_bin: float or int
Ang to bin for S/N calculation, default=None
nas: boolean
nod and shuffle
sb: boolean
surface brightness m_AB in mag arcsec^2; flux = cgs arcsec^-2'
mag_AB: float or int
continuum AB magnitude at wavelength (ref_wave)'
flux: float
erg cm^-2 s^-1 Ang^1 (continuum source [total]); erg cm^-2 s^1 (point line source [total]) [emline = width in Ang]
EXTENDED: erg cm^-2 s^-1 Ang^1 arcsec^-2 (continuum source [total]); erg cm^-2 s^1 arcsec^-2 (point line source [total]) [emline = width in Ang]
Nframes: int
number of frames (default is 1)
emline_width: float
flux is for an emission line, not continuum flux (only works for flux), and emission line width is emline_width Ang
"""
# logger = logging.getLogger(__name__)
logger.info('Running KECK/ETC')
t0 = time()
slicer_OPTIONS = ('L', 'M','S')
grating_OPTIONS = ('BH1', 'BH2', 'BH3', 'BM', 'BL')
if slicer not in slicer_OPTIONS:
raise ValueError("slicer must be L, M, or S, wrongly entered {}".format(slicer))
logger.info('Using SLICER=%s', slicer)
if grating not in grating_OPTIONS:
raise ValueError("grating must be L, M, or S, wrongly entered {}".format(grating))
logger.info('Using GRATING=%s', grating)
if grat_wave < 3400. or grat_wave > 6000:
raise ValueError('wrong value for grating wavelength')
logger.info('Using reference wavelength=%.2f', grat_wave)
if len(spatial_bin) != 2 and len(spatial_bin) !=0:
raise ValueError('wrong spatial binning!!')
logger.info('Using spatial binning, spatial_bin=%s', str(spatial_bin[0])+'x'+str(spatial_bin[1]))
bin_factor = 1.
if ccd_bin == '2x2':
bin_factor = 0.25
if ccd_bin == '2x2' and slicer == 'S':
        print '******** WARNING: DO NOT USE 2x2 BINNING WITH SMALL SLICER'
read_noise = 2.7 # electrons
Nf = Nframes
chsz = 3 #what is this????
nas_overhead = 10. #seconds per half cycle
seeing1 = seeing
seeing2 = seeing
pixels_per_arcsec = 1./0.147
if slicer == 'L':
seeing2 = 1.38
snr_spatial_bin = seeing1*seeing2
pixels_spectral = 8
arcsec_per_slice = 1.35
if slicer == 'M':
seeing2 = max(0.69,seeing)
snr_spatial_bin = seeing1*seeing2
pixels_spectral = 4
arcsec_per_slice = 0.69
if slicer == 'S':
seeing2 = seeing
snr_spatial_bin = seeing1*seeing2
pixels_spectral = 2
arcsec_per_slice = 0.35
N_slices = seeing/arcsec_per_slice
if len(spatial_bin) == 2:
N_slices = spatial_bin[1]/arcsec_per_slice
snr_spatial_bin = spatial_bin[0]*spatial_bin[1]
pixels_spatial_bin = pixels_per_arcsec * N_slices
print "GRATING :", grating
if grating == 'BL':
A_per_pixel = 0.625
if grating == 'BM':
A_per_pixel = 0.28
if grating == 'BH2' or grating == 'BH3':
A_per_pixel = 0.125
print 'A_per_pixel', A_per_pixel
logger.info('f_lam ~ lam = %.2f',f_lam_index)
logger.info('SEEING: %.2f, %s', seeing, ' arcsec')
logger.info('Ang/pixel: %.2f', A_per_pixel)
logger.info('spectral pixels in 1 spectral resolution element: %.2f',pixels_spectral)
A_per_spectral_bin = pixels_spectral*A_per_pixel
logger.info('Ang/resolution element: =%.2f',A_per_spectral_bin)
if spectral_bin is not None:
snr_spectral_bin = spectral_bin
else:
snr_spectral_bin = A_per_spectral_bin
logger.info('Ang/SNR bin: %.2f', snr_spectral_bin)
pixels_per_snr_spec_bin = snr_spectral_bin/A_per_pixel
logger.info('Pixels/Spectral SNR bin: %.2f', pixels_per_snr_spec_bin)
logger.info('SNR Spatial Bin [arcsec^2]: %.2f', snr_spatial_bin)
logger.info('SNR Spatial Bin [pixels^2]: %.2f', pixels_spatial_bin)
flux1 = 0
if flux is not None:
flux1 = flux
if flux is not None and emline_width is not None:
flux1 = flux/emline_width
if flux1 == 0 and emline_width is not None:
raise ValueError('Dont use mag_AB for emission line')
if mag_AB is not None:
flux1 = (10**(-0.4*(mag_AB+48.6)))*(3.e18/grat_wave)/grat_wave
w, p_A = make_obj(flux1,grat_wave, f_lam_index)
if sb==False and mag_AB is not None:
flux_input = ' mag_AB'
logger.info('OBJECT mag: %.2f, %s', mag_AB,flux_input)
if sb==True and mag_AB is not None:
flux_input = ' mag_AB / arcsec^2'
logger.info('OBJECT mag: %.2f, %s',mag_AB,flux_input)
if flux is not None and sb==False and emline_width is None:
flux_input = 'erg cm^-2 s^-1 Ang^-1'
if flux is not None and sb==False and emline_width is not None:
flux_input = 'erg cm^-2 s^-1 in '+ str(emline_width) +' Ang'
if flux is not None and sb and emline_width is None:
flux_input = 'erg cm^-2 s^-1 Ang^-1 arcsec^-2'
if flux is not None and sb and emline_width is not None:
flux_input = 'erg cm^-2 s^-1 arcsec^-2 in '+ str(emline_width) +' Ang'
if flux is not None:
logger.info('OBJECT Flux %.2f, %s',flux,flux_input)
if emline_width is not None:
logger.info('EMISSION LINE OBJECT --> flux is not per unit Ang')
t_exp = exposure_time
if nas==False:
c_o = obj_cts(w,p_A,grating,t_exp)*snr_spatial_bin*snr_spectral_bin
c_s = sky_cts(w,grating,exposure_time,airmass=1.2,area=1.0)*snr_spatial_bin*snr_spectral_bin
c_r = Nf*read_noise**2*pixels_per_snr_spec_bin*pixels_spatial_bin*bin_factor
snr = c_o/np.sqrt(c_s+c_o+c_r)
if nas==True:
n_cyc = np.floor((exposure_time-nas_overhead)/2./(nas+nas_overhead)+0.5)
total_exposure = (2*n_cyc*(nas+nas_overhead))+nas_overhead
        logger.info('NAS: Rounding up to %s cycles of NAS for total exposure of %s s', n_cyc, total_exposure)
t_exp = n_cyc*nas
c_o = obj_cts(w,p_A,grating,t_exp)*snr_spatial_bin*snr_spectral_bin
c_s = sky_cts(w,grating,t_exp,airmass=1.2,area=1.0)*snr_spatial_bin*snr_spectral_bin
c_r = 2.*Nf*read_noise**2*pixels_per_snr_spec_bin*pixels_spatial_bin*bin_factor
snr = c_o/np.sqrt(2.*c_s+c_o+c_r)
fig=figure(num=1, figsize=(12, 16), dpi=80, facecolor='w', edgecolor='k')
subplots_adjust(hspace=0.001)
ax0 = fig.add_subplot(611)
ax0.plot(w, snr, 'k-')
ax0.minorticks_on()
ax0.tick_params(axis='both',which='minor',direction='in', length=5,width=2)
ax0.tick_params(axis='both',which='major',direction='in', length=8,width=2,labelsize=8)
ylabel('SNR / %.1f'%snr_spectral_bin+r'$\rm \ \AA$', fontsize=12)
ax1 = fig.add_subplot(612)
ax1.plot(w,c_o, 'k--')
ax1.minorticks_on()
ax1.tick_params(axis='both',which='minor',direction='in',length=5,width=2)
ax1.tick_params(axis='both',which='major',direction='in',length=8,width=2,labelsize=12)
ylabel('Obj cts / %.1f'%snr_spectral_bin+r'$\rm \ \AA$', fontsize=12)
ax2 = fig.add_subplot(613)
ax2.plot(w,c_s, 'k--')
ax2.minorticks_on()
ax2.tick_params(axis='both',which='minor',direction='in', length=5,width=2)
ax2.tick_params(axis='both',which='major',direction='in', length=8,width=2,labelsize=12)
ylabel('Sky cts / %.1f'%snr_spectral_bin+r'$\rm \ \AA$', fontsize=12)
ax3 = fig.add_subplot(614)
ax3.plot(w,c_r*np.ones(len(w)), 'k--')
ax3.minorticks_on()
ax3.tick_params(axis='both',which='minor', direction='in', length=5,width=2)
ax3.tick_params(axis='both',which='major', direction='in', length=8,width=2,labelsize=12)
ylabel('Rd. Noise cts / %.1f'%snr_spectral_bin+r'$\rm \ \AA$', fontsize=12)
ax4 = fig.add_subplot(615)
yval = w[c_s > 0]
num = c_o[c_s > 0]
den = c_s[c_s > 0]
ax4.plot(yval, num/den, 'k--') #some c_s are zeros
ax4.minorticks_on()
xlim(min(w), max(w)) #only show show valid data!
ax4.tick_params(axis='both',which='minor', direction='in', length=5,width=2)
ax4.tick_params(axis='both',which='major', direction='in', length=8,width=2,labelsize=12)
ylabel('Obj/Sky cts /%.1f'%snr_spectral_bin+r'$\rm \ \AA$', fontsize=12)
ax5 = fig.add_subplot(616)
ax5.plot(w,p_A, 'k--')
ax5.yaxis.set_major_formatter(mtick.FormatStrFormatter('%.1e'))
ax5.minorticks_on()
ax5.tick_params(axis='both',which='minor',direction='in', length=5,width=2)
ax5.tick_params(axis='both',which='major',direction='in', length=8,width=2,labelsize=12)
ylabel('Flux ['r'$\rm ph\ cm^{-2}\ s^{-1}\ \AA^{-1}$]', fontsize=12)
xlabel('Wavelength ['r'$\rm \AA$]', fontsize=12)
show()
fig.savefig('{}.pdf'.format('KCWI_ETC_calc'), format='pdf', transparent=True, bbox_inches='tight')
logger.info('KCWI/ETC run successful!')
logging.basicConfig(level=logging.INFO, format='[%(levelname)s] %(message)s', stream=sys.stdout)
logger = logging.getLogger(__name__)
if __name__ == '__main__':
print("KCWI/ETC...python version")
###Output
KCWI/ETC...python version
###Markdown
Simulate the DF44 observation; begin by figuring out the Sérsic model conversions. See toy_jeans4.ipynb for more detailed Sérsic calculations.
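For reference, the conversion coded in the next cell uses the standard Sérsic relations between the central surface brightness and the mean surface brightness within the effective radius (a restatement of the math in the code; the exact Graham & Driver equation numbers are left unspecified, as in the original comment):

$$ b_n \approx 1.9992\,n - 0.3271, \qquad \mu_e = \mu_0 + \frac{2.5\,b_n}{\ln 10}, \qquad \langle\mu\rangle_e = \mu_e - 2.5\log_{10}\!\left[\frac{n\,e^{b_n}\,\Gamma(2n)}{b_n^{2n}}\right] $$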
###Code
# n, R_e, M_g = 0.85, 7.1, 19.05 # van Dokkum+16 (van Dokkum+17 is slightly different)
n, mu_0, a_e, R_e = 0.94, 24.2, 9.7, 7.9 # van Dokkum+17; some guesses
b_n = 1.9992*n - 0.3271
mu_m_e = mu_0 - 2.5*log10(n*exp(b_n)/b_n**(2.0*n)*gamma(2.0*n)) + 2.5*b_n / log(10.0) # mean SB, using Graham & Driver eqns 6 and ?
print '<mu>_e =', mu_m_e
ETC('M','BM', 5110., 0., 0.75, 3600., '2x2', spatial_bin=[14.0,14.0], spectral_bin=None, nas=False, sb=True, mag_AB=25.2, flux=None, Nframes=1, emline_width=None)
# S/N ~ 20/Ang, binned over ~1 R_e aperture
###Output
[INFO] Running KECK/ETC
[INFO] Using SLICER=M
[INFO] Using GRATING=BM
[INFO] Using reference wavelength=5110.00
[INFO] Using spatial binning, spatial_bin=14.0x14.0
GRATING : BM
A_per_pixel 0.28
[INFO] f_lam ~ lam = 0.00
[INFO] SEEING: 0.75, arcsec
[INFO] Ang/pixel: 0.28
[INFO] spectral pixels in 1 spectral resolution element: 4.00
[INFO] Ang/resolution element: =1.12
[INFO] Ang/SNR bin: 1.12
[INFO] Pixels/Spectral SNR bin: 4.00
[INFO] SNR Spatial Bin [arcsec^2]: 196.00
[INFO] SNR Spatial Bin [pixels^2]: 138.03
[INFO] OBJECT mag: 25.20, mag_AB / arcsec^2
###Markdown
Simulate VCC 1287 observation:
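The next cell circularizes the effective radius and converts the GALFIT total magnitude into a mean surface brightness within $R_e$; written out, the relation being applied is

$$ R_e = a_e\sqrt{q}, \qquad \langle\mu\rangle_e = m_{\rm tot} + 2.5\log_{10}2 + 2.5\log_{10}\!\left(\pi R_e^2\right), $$

where the factor of 2 accounts for only half of the total light falling inside $R_e$.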
###Code
n, a_e, q, m_i = 0.6231, 46.34, 0.809, 15.1081 # Viraj Pandya sci_gf_i.fits header GALFIT results, sent 19 Mar 2018
R_e = a_e * sqrt(q)
g_i = 0.72 # note this is a CFHT g-i not SDSSS g-i
mu_m_e = m_i + 2.5*log10(2.0) + 2.5*log10(pi*R_e**2)
print '<mu>_e (i-band) =', mu_m_e
mu_m_e += g_i
print '<mu>_e (g-band) =', mu_m_e
b_n = 1.9992*n - 0.3271
mu_0 = mu_m_e + 2.5*log10(n*exp(b_n)/b_n**(2.0*n)*gamma(2.0*n)) - 2.5*b_n / log(10.0) # mean SB, using Graham & Driver eqns 6 and ?
print 'mu_0 (g-band) =', mu_0
ETC('M','BM', 5092., 0., 0.75, 3600., '2x2', spatial_bin=[16.5,20.4], spectral_bin=None, nas=False, sb=True, mag_AB=25.5, flux=None, Nframes=1, emline_width=None)
# S/N ~ 20/Ang, binned over full FOV
###Output
[INFO] Running KECK/ETC
[INFO] Using SLICER=M
[INFO] Using GRATING=BM
[INFO] Using reference wavelength=5092.00
[INFO] Using spatial binning, spatial_bin=16.5x20.4
GRATING : BM
A_per_pixel 0.28
[INFO] f_lam ~ lam = 0.00
[INFO] SEEING: 0.75, arcsec
[INFO] Ang/pixel: 0.28
[INFO] spectral pixels in 1 spectral resolution element: 4.00
[INFO] Ang/resolution element: =1.12
[INFO] Ang/SNR bin: 1.12
[INFO] Pixels/Spectral SNR bin: 4.00
[INFO] SNR Spatial Bin [arcsec^2]: 336.60
[INFO] SNR Spatial Bin [pixels^2]: 201.12
[INFO] OBJECT mag: 25.50, mag_AB / arcsec^2
###Markdown
Simulate Hubble VII observation:
###Code
R_e = 0.9
m_V = 15.8
mue = 15.8 + 2.5*log10(2) + 2.5*log10(pi*R_e**2)
print '<mu_V>_e = ', mue
side = sqrt(pi * R_e**2)
print 'box size = %f arcsec' % (side)
ETC('S','BM', 4500., 0., 0.75, 900., '1x1', spatial_bin=[side,side], spectral_bin=None, nas=False, sb=True, mag_AB=mue, flux=None, Nframes=3, emline_width=None)
###Output
<mu_V>_e = 17.5666622181
box size = 1.595208 arcsec
[INFO] Running KECK/ETC
[INFO] Using SLICER=S
[INFO] Using GRATING=BM
[INFO] Using reference wavelength=4500.00
[INFO] Using spatial binning, spatial_bin=1.59520846581x1.59520846581
GRATING : BM
A_per_pixel 0.28
[INFO] f_lam ~ lam = 0.00
[INFO] SEEING: 0.75, arcsec
[INFO] Ang/pixel: 0.28
[INFO] spectral pixels in 1 spectral resolution element: 2.00
[INFO] Ang/resolution element: =0.56
[INFO] Ang/SNR bin: 0.56
[INFO] Pixels/Spectral SNR bin: 2.00
[INFO] SNR Spatial Bin [arcsec^2]: 2.54
[INFO] SNR Spatial Bin [pixels^2]: 31.01
[INFO] OBJECT mag: 17.57, mag_AB / arcsec^2
|
Machine Learning/00_Importing_and_Storing_data.ipynb | ###Markdown
Problem Statement: Extract data from the given SalaryGender CSV file and store the data from each column in a separate NumPy array.
###Code
import numpy as np
import pandas as pd
df = pd.read_csv('C:/data/SalaryGender.csv',delimiter = ',')
Salary = np.array(df['Salary'])
Gender = np.array(df['Gender'])
Age = np.array(df['Age'])
print(df)
df.head()
df.tail()
df.describe()
df.isna().sum()
### no missing values in the dataset
###Output
_____no_output_____ |
Part5Improve/05-Linear-Regression/04-Vectorization/04-Vectorization.ipynb | ###Markdown
Vectorized SimpleLinearRegression: deriving the vectorized solution for a and b. The expressions for a and b obtained from the derivation in Section 2 are shown below. b is already simple enough and needs no further simplification, while the expression for a can be simplified into a vector dot product; the derivation steps and the final dot-product form of a follow.
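Since the `playML.SimpleLinearRegression` class imported below is not shown in this notebook, here is a minimal sketch (an illustrative assumption, not the actual course file) of what its vectorized `fit` presumably looks like, with the loop sums replaced by dot products:

```python
import numpy as np

class SimpleLinearRegressionSketch:
    """Sketch: a = (x - x_mean).(y - y_mean) / (x - x_mean).(x - x_mean), b = y_mean - a * x_mean"""

    def fit(self, x_train, y_train):
        x_mean, y_mean = np.mean(x_train), np.mean(y_train)
        # vectorized dot products instead of an explicit for loop over the samples
        self.a_ = (x_train - x_mean).dot(y_train - y_mean) / (x_train - x_mean).dot(x_train - x_mean)
        self.b_ = y_mean - self.a_ * x_mean
        return self

    def predict(self, x_predict):
        return self.a_ * x_predict + self.b_
```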
###Code
import numpy as np
import matplotlib.pyplot as plt
x = np.array([1., 2., 3., 4., 5.])
y = np.array([1., 3., 2., 3., 5.])
from playML.SimpleLinearRegression import SimpleLinearRegression
regression = SimpleLinearRegression()
regression.fit(x, y)
regression.a_
regression.b_
y_predict = regression.predict(x)
plt.scatter(x, y)
plt.plot(x, y_predict, color="r")
plt.axis([0,6,0,6])
plt.show()
m = 1000000
big_x = np.random.random(size=m)
big_y = big_x*2.0 + 3.0 + np.random.normal(size=m) # add some random noise at the end
###Output
_____no_output_____
###Markdown
> The for-loop implementation from Section 3 took over 900 ms; the vectorized version below is about 50 times faster. My god, vector dot products really are that much faster!!
###Code
%timeit regression.fit(big_x, big_y) # the for-loop version from Section 3 took over 900 ms; that is nearly a 50x difference. My god, vector dot products really are that much faster!!
###Output
10 loops, best of 3: 19.3 ms per loop
|
Machine Learning & Data Science Masterclass - JP/08-Linear-Regression-Models/02-Polynomial-Regression.ipynb | ###Markdown
Polynomial Regression with SciKit-Learn Why do we need polynomial features?- Sometimes linear regression is not enough for a feature that behaves like log(x), which is not a straight line, so we need higher-order terms to make the relationship linear.- Another use case is the interaction (synergy) between one feature and another. Example: the newspaper channel alone doesn't increase sales, while the TV channel alone makes some sales. However, if the newspaper channel is added on top of the TV channel, sales increase further. This may be because people were reminded about the product in the newspaper after they had watched it on TV.- Polynomial features allow us to create new features based on the original ones, by increasing the order (2, 3, etc.)
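As a tiny illustration of what a degree-2 expansion contains, using the first row's advertising spend from this dataset (the ordering scikit-learn produces may differ from the one written out here):

```python
import numpy as np

x1, x2, x3 = 230.1, 37.8, 69.2           # TV, radio, newspaper spend for the first row
degree2 = np.array([x1, x2, x3,          # the original features
                    x1**2, x2**2, x3**2, # squared terms
                    x1*x2, x1*x3, x2*x3]) # interaction terms
print(degree2.shape)  # 9 expanded features from the 3 original ones
```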
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('Data/Advertising.csv')
df.head()
# seperate features and labels
X = df.drop('sales', axis=1)
y = df['sales']
###Output
_____no_output_____
###Markdown
Polynomial Regression
###Code
from sklearn.preprocessing import PolynomialFeatures
###Output
_____no_output_____
###Markdown
The converter below is not the model; it is just a feature converter.- it transforms the original features by raising them up to the specified degree.- include_bias adds a bias column of ones, which we turn off here (include_bias=False). Creating Polynomial Features$$\hat{y} = \beta_0 + \beta_1x_1 + \beta_2x^2_1 + ... + \beta_dx^d_1 + \epsilon$$
###Code
ploynomial_converter = PolynomialFeatures(degree=2, include_bias=False)
ploynomial_converter.fit(X) #converter tries to explore the features by reading every X columns
poly_features = ploynomial_converter.transform(X) # converter actually makes the transformation
# ploynomial_converter.fit_transform(X)
# same as above 2 seperate lines of fit() and transform()
poly_features
###Output
_____no_output_____
###Markdown
We can see that the number of feature columns increased after the transformation.
###Code
poly_features.shape # after transformation 9 columns
X.shape #original ones 3 columns
X.iloc[0]
poly_features[0]
###Output
_____no_output_____
###Markdown
------
###Code
poly_features[0][:3] # 3 original ones
poly_features[0][:3] ** 2 # 3 squared ones
###Output
_____no_output_____
###Markdown
3 interaction terms $$x_1 \cdot x_2 \text{ and } x_1 \cdot x_3 \text{ and } x_2 \cdot x_3 $$
###Code
230.1*37.8
230.1*69.2
37.8*69.2
###Output
_____no_output_____
###Markdown
------ Train | Test Split
###Code
ploynomial_converter.fit_transform(X)
from sklearn.model_selection import train_test_split
# help(train_test_split)
# using poly features
X_train, X_test, y_train, y_test = train_test_split(poly_features, y, test_size=0.3, random_state=101)
###Output
_____no_output_____
###Markdown
Linear Regression Model
###Code
from sklearn.linear_model import LinearRegression
lr_model = LinearRegression()
lr_model.fit(X_train, y_train)
test_predictions = lr_model.predict(X_test)
###Output
_____no_output_____
###Markdown
Evaluation on the Test Set
###Code
# performance
from sklearn.metrics import mean_absolute_error, mean_squared_error
MAE = mean_absolute_error(y_test, test_predictions)
MSE = mean_squared_error(y_test, test_predictions)
RMSE = np.sqrt(MSE)
MAE
RMSE
###Output
_____no_output_____
###Markdown
Comparison with Simple Linear Regression**Results on the Test Set (Note: Use the same Random Split to fairly compare!)*** Simple Linear Regression: * MAE: 1.213 * RMSE: 1.516* Polynomial 2-degree: * MAE: 0.4896 * RMSE: 0.664
###Code
X.iloc[0]
poly_features[0]
lr_model.coef_
###Output
_____no_output_____
###Markdown
As we can see from coefficient, model is not even considering newspaper (the last one) which is squared of 69.2.4.788640e+03 is tied to -3.04715806e-05 ( which is almost 0)
###Code
69.2 ** 2
###Output
_____no_output_____
###Markdown
------- --- Choosing a Model Adjusting Parameters
###Code
# create the different order polynomial
# split ploy features : train / test
# fit on train set
# save RMSE for BOTH train & test sets
# PLOT the results (error vs poly order)
train_rmse_errors = []
test_rmse_errors = []
for deg in range(1, 10):
# create poloynomial features on different degree
poly_converter = PolynomialFeatures(degree=deg, include_bias=False)
poly_features = poly_converter.fit_transform(X)
# split poly featurs: train/test
X_train, X_test, y_train, y_test = train_test_split(poly_features, y, test_size=0.3, random_state=101)
# model fit
lr_model = LinearRegression(fit_intercept=True)
lr_model.fit(X_train, y_train)
# prediction
train_pred = lr_model.predict(X_train)
test_pred = lr_model.predict(X_test)
# calcuate RMSE
train_RMSE = np.sqrt(mean_squared_error(y_train, train_pred))
test_RMSE = np.sqrt(mean_squared_error(y_test, test_pred))
# store RMSE
train_rmse_errors.append(train_RMSE)
test_rmse_errors.append(test_RMSE)
train_rmse_errors
test_rmse_errors
###Output
_____no_output_____
###Markdown
Plot the train/test RMSE As we can see from the plot below, the error explodes around polynomial degree 4 for the TEST set.- this is overfitting- for the train set, the error gets lower and lower as the polynomial degree gets higher.- for the test set, the error gets higher and higher as the polynomial degree gets higher (beyond a certain degree).
###Code
plt.plot(range(1,6), train_rmse_errors[:5], label='train RMSE')
plt.plot(range(1,6), test_rmse_errors[:5], label='test RMSE')
plt.xlabel('Degree of Polynomail')
plt.ylabel('RMSE')
plt.legend();
plt.plot(range(1,10), train_rmse_errors, label='train RMSE')
plt.plot(range(1,10), test_rmse_errors, label='test RMSE')
plt.xlabel('Degree of Polynomail')
plt.ylabel('RMSE')
plt.legend();
###Output
_____no_output_____
###Markdown
----- Finalizing Model Choice As the final model, we will choose the 3rd-degree polynomial
###Code
final_converter = PolynomialFeatures(degree=3, include_bias=False)
final_model = LinearRegression()
full_converted_X = final_converter.fit_transform(X)
final_model.fit(full_converted_X, y)
from joblib import dump, load
dump(final_model, 'Models/mdl_poly.pkl')
dump(final_converter, 'Models/converter_poly.pkl')
###Output
_____no_output_____
###Markdown
------------ Prediction on New Data
###Code
loaded_converter = load('Models/converter_poly.pkl')
loaded_model = load('Models/mdl_poly.pkl')
campaign = [[149, 22, 12]]
transformed_data = loaded_converter.fit_transform(campaign)
loaded_model.predict(transformed_data)
###Output
_____no_output_____ |
code/rabbits-mine.ipynb | ###Markdown
Modeling and Simulation in PythonRabbit exampleCopyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# If you want the figures to appear in the notebook,
# and you want to interact with them, use
# %matplotlib notebook
# If you want the figures to appear in the notebook,
# and you don't want to interact with them, use
# %matplotlib inline
# If you want the figures to appear in separate windows, use
# %matplotlib qt5
# To switch from one to another, you have to select Kernel->Restart
%matplotlib inline
from modsim import *
###Output
_____no_output_____
###Markdown
This notebook develops a simple growth model, like the ones in Chapter 3, and uses it to demonstrate a parameter sweep.The system we'll model is a rabbit farm. Suppose you start with an initial population of rabbits and let them breed. For simplicity, we'll assume that all rabbits are on the same breeding cycle, and we'll measure time in "seasons", where a season is the reproductive time of a rabbit.If we provide all the food, space and other resources a rabbit might need, we expect the number of new rabbits each season to be proportional to the current population, controlled by a parameter, `birth_rate`, that represents the number of new rabbits per existing rabbit, per season. As a starting place, I'll assume `birth_rate = 0.9`.Sadly, during each season, some proportion of the rabbits die. In a detailed model, we might keep track of each rabbit's age, because the chance of dying is probably highest for young and old rabbits, and lowest in between. But for simplicity, we'll model the death process with a single parameter, `death_rate`, that represent the numberof deaths per rabbit per season. As a starting place, I'll assume `death_rate = 0.5`.Here's a system object that contains these parameters as well as:* The initial population, `p0`,* The initial time, `t0`, and* The duration of the simulation, `t_end`, measured in seasons.
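Written as an update rule, the model described above is

$$ p_{t+1} = p_t + b\,p_t - d\,p_t = (1 + b - d)\,p_t \quad\Longrightarrow\quad p_t = p_0\,(1 + b - d)^t, $$

with $b$ = `birth_rate` and $d$ = `death_rate`. Only the difference $b - d$ enters, which is worth keeping in mind when we sweep the parameters later.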
###Code
system = System(t0 = 0,
t_end = 10,
p0 = 10,
birth_rate = 0.9,
death_rate = 0.5)
system
###Output
_____no_output_____
###Markdown
Here's a version of run_simulation, similar to the one in Chapter 3, with both births and deaths proportional to the current population.
###Code
def run_simulation(system):
"""Runs a proportional growth model.
Adds TimeSeries to `system` as `results`.
system: System object with t0, t_end, p0,
birth_rate and death_rate
"""
results = TimeSeries()
results[system.t0] = system.p0
for t in linrange(system.t0, system.t_end):
births = system.birth_rate * results[t]
deaths = system.death_rate * results[t]
results[t+1] = results[t] + births - deaths
system.results = results
###Output
_____no_output_____
###Markdown
Now we can run the simulation and display the results:
###Code
run_simulation(system)
system.results
###Output
_____no_output_____
###Markdown
Notice that the simulation actually runs one season past `t_end`. That's an off-by-one error that I'll fix later, but for now we don't really care.The following function plots the results.
###Code
def plot_results(system, title=None):
"""Plot the estimates and the model.
system: System object with `results`
"""
newfig()
plot(system.results, 'bo', label='rabbits')
decorate(xlabel='Season',
ylabel='Rabbit population',
title=title)
###Output
_____no_output_____
###Markdown
And here's how we call it.
###Code
plot_results(system, title='Proportional growth model')
###Output
_____no_output_____
###Markdown
Let's suppose our goal is to maximize the number of rabbits, so the metric we care about is the final population. We can extract it from the results like this:
###Code
def final_population(system):
t_end = system.results.index[-1]
return system.results[t_end]
###Output
_____no_output_____
###Markdown
And call it like this:
###Code
final_population(system)
###Output
_____no_output_____
###Markdown
To explore the effect of the parameters on the results, we'll define `make_system`, which takes the system parameters as function parameters(!) and returns a `System` object:
###Code
def make_system(birth_rate=0.9, death_rate=0.5):
system = System(t0 = 0,
t_end = 10,
p0 = 10,
birth_rate = birth_rate,
death_rate = death_rate)
return system
###Output
_____no_output_____
###Markdown
Now we can make a `System`, run a simulation, and extract a metric:
###Code
system = make_system()
run_simulation(system)
final_population(system)
###Output
_____no_output_____
###Markdown
To see the relationship between `birth_rate` and final population, we'll define `sweep_birth_rate`:
###Code
def sweep_birth_rate(birth_rates, death_rate=0.5):
for birth_rate in birth_rates:
system = make_system(birth_rate=birth_rate,
death_rate=death_rate)
run_simulation(system)
p_end = final_population(system)
plot(birth_rate, p_end, 'gs', label='rabbits')
decorate(xlabel='Births per rabbit per season',
ylabel='Final population')
###Output
_____no_output_____
###Markdown
The first parameter of `sweep_birth_rate` is supposed to be an array; we can use `linspace` to make one.
###Code
birth_rates = linspace(0, 1, 21)
birth_rates
###Output
_____no_output_____
###Markdown
Now we can call `sweep_birth_rate`.The resulting figure shows the final population for a range of values of `birth_rate`.Confusingly, the results from a parameter sweep sometimes resemble a time series. It is very important to remember the difference. One way to avoid confusion: LABEL THE AXES.In the following figure, the x-axis is `birth_rate`, NOT TIME.
###Code
birth_rates = linspace(0, 1, 21)
sweep_birth_rate(birth_rates)
###Output
_____no_output_____
###Markdown
The code to sweep the death rate is similar.
###Code
def sweep_death_rate(death_rates, birth_rate=0.9):
for death_rate in death_rates:
system = make_system(birth_rate=birth_rate,
death_rate=death_rate)
run_simulation(system)
p_end = final_population(system)
plot(death_rate, p_end, 'r^', label='rabbits')
decorate(xlabel='Deaths per rabbit per season',
ylabel='Final population')
###Output
_____no_output_____
###Markdown
And here are the results. Again, the x-axis is `death_rate`, NOT TIME.
###Code
death_rates = linspace(0.1, 1, 20)
sweep_death_rate(death_rates)
###Output
_____no_output_____
###Markdown
In the previous sweeps, we hold one parameter constant and sweep the other.You can also sweep more than one variable at a time, and plot multiple lines on a single axis.To keep the figure from getting too cluttered, I'll reduce the number of values in `birth_rates`:
###Code
birth_rates = linspace(0.4, 1, 4)
birth_rates
###Output
_____no_output_____
###Markdown
By putting one for loop inside another, we can enumerate all pairs of values.The results show 4 lines, one for each value of `birth_rate`.(I did not plot the lines between the data points because of a limitation in `plot`.)
###Code
for birth_rate in birth_rates:
for death_rate in death_rates:
system = make_system(birth_rate=birth_rate,
death_rate=death_rate)
run_simulation(system)
p_end = final_population(system)
plot(death_rate, p_end, 'c^', label='rabbits')
decorate(xlabel='Deaths per rabbit per season',
ylabel='Final population')
###Output
_____no_output_____
###Markdown
If you suspect that the results depend on the difference between `birth_rate` and `death_rate`, you could run the same loop, plotting the "net birth rate" on the x axis.If you are right, the results will fall on a single curve, which means that knowing the difference is sufficient to predict the outcome; you don't actually have to know the two parameters separately.
###Code
for birth_rate in birth_rates:
for death_rate in death_rates:
system = make_system(birth_rate=birth_rate,
death_rate=death_rate)
run_simulation(system)
p_end = final_population(system)
net_birth_rate = birth_rate - death_rate
plot(net_birth_rate, p_end, 'mv', label='rabbits')
decorate(xlabel='Net births per rabbit per season',
ylabel='Final population')
###Output
_____no_output_____
###Markdown
On the other hand, if you guess that the results depend on the ratio of the parameters, rather than the difference, you could check by plotting the ratio on the x axis.If the results don't fall on a single curve, that suggests that the ratio alone is not sufficient to predict the outcome.
###Code
for birth_rate in birth_rates:
for death_rate in death_rates:
system = make_system(birth_rate=birth_rate,
death_rate=death_rate)
run_simulation(system)
p_end = final_population(system)
birth_ratio = birth_rate / death_rate
plot(birth_ratio, p_end, 'y>', label='rabbits')
decorate(xlabel='Ratio of births to deaths',
ylabel='Final population')
###Output
_____no_output_____ |
notebook/Sample2ManualAnnotationSampling.ipynb | ###Markdown
Sample 2 Manual Annotation Sampling===
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
import os
from tqdm import tqdm
import bz2
import gzip
import json
import re
import hashlib
from datetime import datetime
import nltk
import scipy.stats
import para
from itertools import groupby
from collections import Counter
git_root_dir = !git rev-parse --show-toplevel
git_root_dir = git_root_dir[0]
git_root_dir
raw_data_dir = "/export/scratch2/wiki_data"
derived_data_dir = os.path.join(git_root_dir, "data", "derived")
raw_data_dir, derived_data_dir
stub_history_dir = os.path.join(derived_data_dir, 'stub-history-all-revisions')
stub_history_dir
working_dir = os.path.join(derived_data_dir, 'sample2-manual-annotation-samples')
os.makedirs(working_dir, exist_ok=True)
working_dir
start_date = datetime.fromisoformat('2014-04-01')
start_timestamp = int(start_date.timestamp())
end_date = datetime.fromisoformat('2020-01-01')
end_timestamp = int(end_date.timestamp())
start_timestamp, end_timestamp
###Output
_____no_output_____
###Markdown
Load sample 2
###Code
# read in the sample dataframe
revision_sample_dir = os.path.join(derived_data_dir, 'revision_sample')
sample2_filepath = os.path.join(revision_sample_dir, 'sample2_1M.pkl')
rev_df = pd.read_pickle(sample2_filepath)
len(rev_df)
# read in the ORES scores
revision_sample_dir = os.path.join(derived_data_dir, 'revision_sample')
sample2_filepath = os.path.join(revision_sample_dir, 'sample2_ores_scores.csv')
ores_df = pd.read_csv(sample2_filepath, header=None, names=['rev_id', 'damaging_prob', 'damaging_pred', 'goodfaith_prob', 'goodfaith_pred'])
len(ores_df)
rev_df = pd.merge(rev_df, ores_df, on='rev_id', how='inner')
len(rev_df)
rev_df.head()
###Output
_____no_output_____
###Markdown
Sample for Bruce Liu. Response to Haiyi's email on Wed, Apr 8, 12:08 PM: "Would you please share the revision dataset you generated (the revision id, the ORES score, and the community response - whether the revision was reverted or not) with Bruce?"
###Code
# write out the full sample as a CSV
sample_subset_filepath = os.path.join(working_dir, f"sample2_2018_bruceliu.csv")
with open(sample_subset_filepath, 'w') as outfile:
outfile.write("page_id,rev_id,rev_timestamp,is_reverted,is_reverting,damaging_prob,goodfaith_prob\n")
for t in tqdm(rev_df.itertuples(), total=len(rev_df)):
url = f"https://en.wikipedia.org/wiki/?diff={t.rev_id}"
line = f"{t.page_id},{t.rev_id},{t.rev_timestamp},{t.is_reverted},{t.is_reverting},{t.damaging_prob},{t.goodfaith_prob}\n"
outfile.write(line)
###Output
100%|██████████| 1000000/1000000 [00:08<00:00, 124728.00it/s]
###Markdown
Samples from expected corners
###Code
# write out a sample of likelygood reverted revisions
n = 100
likelygood_threshold = 0.329
verylikelybad_threshold = 0.919
likelybad_threshold = 0.641
sample_subset_filepath = os.path.join(working_dir, f"sample2_likelygood_reverted_random{n}.csv")
with open(sample_subset_filepath, 'w') as outfile:
outfile.write("page_id,rev_id,rev_timestamp,rev_date,is_reverted,is_reverting,damaging_prob,diff_url\n")
subset = rev_df[(rev_df.damaging_prob <= likelygood_threshold)&(rev_df.is_reverted == 1)]
print(f"{len(subset)} likelygood reverted revisions")
subset = subset.sample(n=n, random_state=2)
for t in subset.itertuples():
url = f"https://en.wikipedia.org/wiki/?diff={t.rev_id}"
rev_date = datetime.utcfromtimestamp(t.rev_timestamp).strftime("%Y-%m-%d")
line = f"{t.page_id},{t.rev_id},{t.rev_timestamp},{rev_date},{t.is_reverted},{t.is_reverting},{t.damaging_prob},{url}\n"
outfile.write(line)
# write out a sample of verylikelybad reverted revisions
n = 100
likelygood_threshold = 0.329
verylikelybad_threshold = 0.919
likelybad_threshold = 0.641
sample_subset_filepath = os.path.join(working_dir, f"sample2_verylikelybad_nonreverted_random{n}.csv")
with open(sample_subset_filepath, 'w') as outfile:
outfile.write("page_id,rev_id,rev_timestamp,rev_date,is_reverted,is_reverting,damaging_prob,diff_url\n")
subset = rev_df[(rev_df.damaging_prob >= verylikelybad_threshold)&(rev_df.is_reverted == 0)]
print(f"{len(subset)} verylikelybad nonreverted revisions")
subset = subset.sample(n=n, random_state=2)
for t in subset.itertuples():
url = f"https://en.wikipedia.org/wiki/?diff={t.rev_id}"
rev_date = datetime.utcfromtimestamp(t.rev_timestamp).strftime("%Y-%m-%d")
line = f"{t.page_id},{t.rev_id},{t.rev_timestamp},{rev_date},{t.is_reverted},{t.is_reverting},{t.damaging_prob},{url}\n"
outfile.write(line)
###Output
303 verylikelybad nonreverted revisions
|
_notebooks/2019-12-12-Exploratory-Data-Analysis-Rossman-Data.ipynb | ###Markdown
Exploratory-Data-Analysis-Rossman-Data Data preparation / Feature engineering In addition to the provided data, we will be using external datasets put together by participants in the Kaggle competition. You can download all of them [here](http://files.fast.ai/part2/lesson14/rossmann.tgz). Then you should untar them in the directory to which `PATH` is pointing below. For completeness, the implementation used to put them together is included below.
###Code
untar_data('http://files.fast.ai/part2/lesson14/rossmann',dest='/content/data/rossmann')
(Config().data_path()/'rossmann').ls()
(Config().data_path()/'rossmann')
PATH=(Config().data_path()/'rossmann')
table_names = ['train', 'store', 'store_states', 'state_names', 'googletrend', 'weather', 'test']
type(table_names)
??pd.read_csv
#PATH=Config().data_path()/Path('rossmann/')
tables = [pd.read_csv(PATH/f'{fname}.csv', low_memory=False) for fname in table_names]
train, store, store_states, state_names, googletrend, weather, test = tables
len(train),len(test)
??train.isnull()
###Output
_____no_output_____
###Markdown
**Define functions for Data Exploration**
###Code
def data_shape_and_head(df):
pd.set_option('float_format', '{:f}'.format)
print(f"Data Frame Shape: {df.shape}")
return df.head()
def percentage_of_null_data(df):
pd.options.mode.use_inf_as_na=True
total = df.isnull().sum()
percent = (df.isnull().sum()/df.isnull().count()*100)
tt = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
types = []
for col in df.columns:
dtype = str(df[col].dtype)
types.append(dtype)
tt['Types'] = types
return(np.transpose(tt))
def unique_values_eachcolumn(df):
for column in df.columns.values:
print(f"[df] Unique values of '{column}' : {df[column].nunique()}")
def plot_col_count_4_top20(column_name, title, df, size=6):
# Displays the count of records for each value of the column.
# Displays data for first 20 most frequent values
import seaborn as sns
f, ax = plt.subplots(1,1, figsize=(4*size,4))
total = float(len(df))
    g = sns.countplot(df[column_name], order = df[column_name].value_counts().index[:20], palette='Dark2')
g.set_title("Number and Percentage of {}".format(title))
if(size > 2):
plt.xticks(rotation=90, size=8)
for p in ax.patches:
height = p.get_height()
ax.text(p.get_x()+p.get_width()/2.,
height + 3,
'{:1.2f}%'.format(100*height/total),
ha="center")
plt.show()
def plot_most_or_least_populated_data(df,most=True):
import seaborn as sns
total = df.isnull().count() - df.isnull().sum()
percent = 100 - (df.isnull().sum()/df.isnull().count()*100)
tt = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
tt = pd.DataFrame(tt.reset_index())
tt= (tt.sort_values(['Total'], ascending=most))
plt.figure(figsize=(10, 8))
sns.set(style='darkgrid')
ax = sns.barplot(x='Percent', y='index', data=tt.head(30), color='DarkOrange')
plt.title(('Most' if most else 'Least' ) + ' frequent columns/features in the dataframe')
plt.ylabel('Features/Columns')
plt.show()
###Output
_____no_output_____
###Markdown
**Do the data exploration for the train DataFrame**
###Code
data_shape_and_head(train)
train.describe()
train.describe().T
train.count()
train['DayOfWeek'].count()
unique_values_eachcolumn(train)
plot_col_count_4_top20('Date', 'Counts', train,size=6)
plot_most_or_least_populated_data(train)
train.DayOfWeek.nunique()
train.StateHoliday.unique()
percentage_of_null_data(train)
train.Date
train.groupby('Date').max().Sales
train.sample(5, random_state=300).groupby('Date').max().Customers
#train['Date', 'Sales']
train.sample(20, random_state=5).groupby('Date').max().Sales.plot(kind='barh')
train.groupby('DayOfWeek').DayOfWeek
(train.groupby('DayOfWeek').sum())
train[0:10]
###Output
_____no_output_____
###Markdown
**`We looked at the train data. Similarly, we can look at the other tables and figure out which features we can use.`** --- We turn state Holidays into booleans, to make them more convenient for modeling. We can do calculations on pandas fields using notation very similar (often identical) to numpy.
###Code
train.StateHoliday = train.StateHoliday!='0'
test.StateHoliday = test.StateHoliday!='0'
###Output
_____no_output_____
###Markdown
`join_df` is a function for joining tables on specific fields. By default, we'll be doing a left outer join of `right` on the `left` argument using the given fields for each table.Pandas does joins using the `merge` method. The `suffixes` argument describes the naming convention for duplicate fields. We've elected to leave the duplicate field names on the left untouched, and append a "\_y" to those on the right.
###Code
??pd.merge()
def join_df(left, right, left_on, right_on=None, suffix='_y'):
if right_on is None: right_on = left_on
return left.merge(right, how='left', left_on=left_on, right_on=right_on,
suffixes=("", suffix))
###Output
_____no_output_____
###Markdown
Join weather/state names.
###Code
weather = join_df(weather, state_names, "file", "StateName")
weather
###Output
_____no_output_____
###Markdown
In pandas you can add new columns to a dataframe by simply defining it. We'll do this for googletrends by extracting dates and state names from the given data and adding those columns.We're also going to replace all instances of state name 'NI' to match the usage in the rest of the data: 'HB,NI'. This is a good opportunity to highlight pandas indexing. We can use `.loc[rows, cols]` to select a list of rows and a list of columns from the dataframe. In this case, we're selecting rows w/ statename 'NI' by using a boolean list `googletrend.State=='NI'` and selecting "State".
###Code
googletrend.index[5]
googletrend.week.str.split(' - ', expand=True)[1]
googletrend
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
googletrend.loc[googletrend.State=='NI', "State"]
googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
googletrend
###Output
_____no_output_____
###Markdown
The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals.You should *always* consider this feature extraction step when working with date-time. Without expanding your date-time into these additional fields, you can't capture any trend/cyclical behavior as a function of time at any of these granularities. We'll add to every table with a date field.
###Code
def add_datepart(df, fldname, drop=True, time=False):
"Helper function that adds columns relevant to a date."
fld = df[fldname]
fld_dtype = fld.dtype
if isinstance(fld_dtype, pd.core.dtypes.dtypes.DatetimeTZDtype):
fld_dtype = np.datetime64
if not np.issubdtype(fld_dtype, np.datetime64):
df[fldname] = fld = pd.to_datetime(fld, infer_datetime_format=True)
targ_pre = re.sub('[Dd]ate$', '', fldname)
attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear',
'Is_month_end', 'Is_month_start', 'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
if time: attr = attr + ['Hour', 'Minute', 'Second']
for n in attr: df[targ_pre + n] = getattr(fld.dt, n.lower())
df[targ_pre + 'Elapsed'] = fld.astype(np.int64) // 10 ** 9
if drop: df.drop(fldname, axis=1, inplace=True)
add_datepart(weather, "Date", drop=False)
add_datepart(googletrend, "Date", drop=False)
add_datepart(train, "Date", drop=False)
add_datepart(test, "Date", drop=False)
###Output
_____no_output_____
###Markdown
The Google trends data has a special category for the whole of Germany - we'll pull that out so we can use it explicitly.
###Code
trend_de = googletrend[googletrend.file == 'Rossmann_DE']
###Output
_____no_output_____
###Markdown
Now we can outer join all of our data into a single dataframe. Recall that in outer joins, every time a value in the joining field on the left table does not have a corresponding value on the right table, the corresponding row in the new table has Null values for all right table fields. One way to check that all records are consistent and complete is to check for Null values post-join, as we do here. *Aside*: Why not just do an inner join? If you are assuming that all records are complete and match on the field you desire, an inner join will do the same thing as an outer join. However, in the event you are wrong or a mistake is made, an outer join followed by a null-check will catch it. (Comparing the number of rows before/after an inner join is equivalent, but requires keeping track of those row counts. An outer join is easier.)
###Code
store = join_df(store, store_states, "Store")
len(store[store.State.isnull()])
??join_df
joined = join_df(train, store, "Store")
joined_test = join_df(test, store, "Store")
len(joined[joined.StoreType.isnull()]),len(joined_test[joined_test.StoreType.isnull()])
joined = join_df(joined, googletrend, ["State","Year", "Week"])
joined_test = join_df(joined_test, googletrend, ["State","Year", "Week"])
len(joined[joined.trend.isnull()]),len(joined_test[joined_test.trend.isnull()])
joined = joined.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
joined_test = joined_test.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
len(joined[joined.trend_DE.isnull()]),len(joined_test[joined_test.trend_DE.isnull()])
joined = join_df(joined, weather, ["State","Date"])
joined_test = join_df(joined_test, weather, ["State","Date"])
len(joined[joined.Mean_TemperatureC.isnull()]),len(joined_test[joined_test.Mean_TemperatureC.isnull()])
joined.describe()
for df in (joined, joined_test):
for c in df.columns:
if c.endswith('_y'):
if c in df.columns:
print(c)
for df in (joined, joined_test):
for c in df.columns:
if c.endswith('_y'):
if c in df.columns: df.drop(c, inplace=True, axis=1)
###Output
_____no_output_____
###Markdown
Next we'll fill in missing values to avoid complications with `NA`'s. `NA` (not available) is how Pandas indicates missing values; many models have problems when missing values are present, so it's always important to think about how to deal with them. In these cases, we are picking an arbitrary *signal value* that doesn't otherwise appear in the data.
###Code
joined.shape
for column in joined.columns:
print(column)
len(joined.loc[joined.StateName.isna() , "StateName"])
for df in (joined,joined_test):
df['CompetitionOpenSinceYear'] = df.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
df['CompetitionOpenSinceMonth'] = df.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
df['Promo2SinceYear'] = df.Promo2SinceYear.fillna(1900).astype(np.int32)
df['Promo2SinceWeek'] = df.Promo2SinceWeek.fillna(1).astype(np.int32)
###Output
_____no_output_____
###Markdown
Next we'll extract features "CompetitionOpenSince" and "CompetitionDaysOpen". Note the use of `apply()` in mapping a function across dataframe values.
###Code
for df in (joined,joined_test):
df["CompetitionOpenSince"] = pd.to_datetime(dict(year=df.CompetitionOpenSinceYear,
month=df.CompetitionOpenSinceMonth, day=15))
df["CompetitionDaysOpen"] = df.Date.subtract(df.CompetitionOpenSince).dt.days
###Output
_____no_output_____
###Markdown
We'll replace some erroneous / outlying data.
###Code
for df in (joined,joined_test):
df.loc[df.CompetitionDaysOpen<0, "CompetitionDaysOpen"] = 0
df.loc[df.CompetitionOpenSinceYear<1990, "CompetitionDaysOpen"] = 0
###Output
_____no_output_____
###Markdown
We add "CompetitionMonthsOpen" field, limiting the maximum to 2 years to limit number of unique categories.
###Code
for df in (joined,joined_test):
df["CompetitionMonthsOpen"] = df["CompetitionDaysOpen"]//30
df.loc[df.CompetitionMonthsOpen>24, "CompetitionMonthsOpen"] = 24
joined.CompetitionMonthsOpen.unique()
###Output
_____no_output_____
###Markdown
Same process for Promo dates. You may need to install the `isoweek` package first.
###Code
# Install isoweek if it is not already available:
! pip install isoweek
from isoweek import Week
for df in (joined,joined_test):
df["Promo2Since"] = pd.to_datetime(df.apply(lambda x: Week(
x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1))
df["Promo2Days"] = df.Date.subtract(df["Promo2Since"]).dt.days
joined
for df in (joined,joined_test):
df.loc[df.Promo2Days<0, "Promo2Days"] = 0
df.loc[df.Promo2SinceYear<1990, "Promo2Days"] = 0
df["Promo2Weeks"] = df["Promo2Days"]//7
df.loc[df.Promo2Weeks<0, "Promo2Weeks"] = 0
df.loc[df.Promo2Weeks>25, "Promo2Weeks"] = 25
df.Promo2Weeks.unique()
df.Promo2Weeks.unique()
joined.to_pickle(PATH/'joined')
joined_test.to_pickle(PATH/'joined_test')
###Output
_____no_output_____
###Markdown
Durations It is common when working with time series data to extract data that explains relationships across rows as opposed to columns, e.g.:* Running averages* Time until next event* Time since last eventThis is often difficult to do with most table manipulation frameworks, since they are designed to work with relationships across columns. As such, we've created a class to handle this type of data.We'll define a function `get_elapsed` for cumulative counting across a sorted dataframe. Given a particular field `fld` to monitor, this function will start tracking time since the last occurrence of that field. When the field is seen again, the counter is set to zero.Upon initialization, this will result in datetime na's until the field is encountered. This is reset every time a new store is seen. We'll see how to use this shortly.
###Code
def get_elapsed(fld, pre):
day1 = np.timedelta64(1, 'D')
last_date = np.datetime64()
last_store = 0
res = []
for s,v,d in zip(df.Store.values,df[fld].values, df.Date.values):
if s != last_store:
last_date = np.datetime64()
last_store = s
if v: last_date = d
res.append(((d-last_date).astype('timedelta64[D]') / day1))
df[pre+fld] = res
###Output
_____no_output_____
###Markdown
We'll be applying this to a subset of columns:
###Code
columns = ["Date", "Store", "Promo", "StateHoliday", "SchoolHoliday"]
#df = train[columns]
df = train[columns].append(test[columns])
###Output
_____no_output_____
###Markdown
Let's walk through an example. Say we're looking at School Holiday. We'll first sort by Store, then Date, and then call `get_elapsed('SchoolHoliday', 'After')`. This will: * be applied to every row of the dataframe in order of store and date * add to the dataframe the days since last seeing a School Holiday * count the days until the next holiday if we sort in the other direction.
###Code
??df.sort_values()
fld = 'SchoolHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
df
###Output
_____no_output_____
###Markdown
We'll do this for two more fields.
###Code
fld = 'StateHoliday'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
fld = 'Promo'
df = df.sort_values(['Store', 'Date'])
get_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
get_elapsed(fld, 'Before')
###Output
_____no_output_____
###Markdown
We're going to set the active index to Date.
###Code
df = df.set_index("Date")
###Output
_____no_output_____
###Markdown
Then set null values from elapsed field calculations to 0.
###Code
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
for o in ['Before', 'After']:
for p in columns:
a = o+p
df[a] = df[a].fillna(0).astype(int)
###Output
_____no_output_____
###Markdown
Next we'll demonstrate window functions in pandas to calculate rolling quantities.Here we're sorting by date (`sort_index()`) and counting the number of events of interest (`sum()`) defined in `columns` in the following week (`rolling()`), grouped by Store (`groupby()`). We do the same in the opposite direction.
###Code
bwd = df[['Store']+columns].sort_index().groupby("Store").rolling(7, min_periods=1).sum()
fwd = df[['Store']+columns].sort_index(ascending=False
).groupby("Store").rolling(7, min_periods=1).sum()
###Output
_____no_output_____
###Markdown
Next we want to drop the Store indices grouped together in the window function.Often in pandas, there is an option to do this in place. This is time and memory efficient when working with large datasets.
###Code
bwd.drop('Store',1,inplace=True)
bwd.reset_index(inplace=True)
fwd.drop('Store',1,inplace=True)
fwd.reset_index(inplace=True)
df.reset_index(inplace=True)
###Output
_____no_output_____
###Markdown
Now we'll merge these values onto the df.
###Code
df = df.merge(bwd, 'left', ['Date', 'Store'], suffixes=['', '_bw'])
df = df.merge(fwd, 'left', ['Date', 'Store'], suffixes=['', '_fw'])
df.drop(columns,1,inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
It's usually a good idea to back up large tables of extracted / wrangled features before you join them onto another one; that way you can go back to them easily if you need to make changes.
###Code
df.to_pickle(PATH/'df')
df["Date"] = pd.to_datetime(df.Date)
df.columns
joined = pd.read_pickle(PATH/'joined')
joined_test = pd.read_pickle(PATH/f'joined_test')
joined = join_df(joined, df, ['Store', 'Date'])
joined_test = join_df(joined_test, df, ['Store', 'Date'])
###Output
_____no_output_____
###Markdown
The authors also removed all instances where the store had zero sales / was closed. We speculate that this may have cost them a higher standing in the competition. One reason this may be the case is that a little exploratory data analysis reveals that there are often periods where stores are closed, typically for refurbishment. Before and after these periods, there are natural spikes in sales, as one might expect. By omitting this data from their training, the authors gave up the ability to leverage information about these periods to predict this otherwise volatile behavior.
###Code
joined = joined[joined.Sales!=0]
###Output
_____no_output_____
###Markdown
We'll back this up as well.
###Code
joined.reset_index(inplace=True)
joined_test.reset_index(inplace=True)
joined.to_pickle(PATH/'train_clean')
joined_test.to_pickle(PATH/'test_clean')
###Output
_____no_output_____ |
30_de_septiembre.ipynb | ###Markdown
A PALINDROME is a word that reads the same forwards and backwards, for example: 1. sugus 1. oso 1. reconocer Problem statement: we want to find all the palindromes that exist within the time range of a full day, taking 00:00 as the starting time and 23:59 as the final time. The algorithm must display on screen every palindrome that exists in that range and, at the end, show the total count of palindromes found.
###Code
# Solution
h=["00","01","02","03","04","05","06","07","08","09","10","11","12",
"13","14","15","16","17","18","19","20","21","22","23"]
m=["00","01","02","03","04","05","06","07","08","09",
"10","11","12","13","14","15","16","17","18","19",
"20","21","22","23","24","25","26","27","28","29",
"30","31","32","33","34","35","36","37","38","39",
"40","41","42","43","44","45","46","47","48","49",
"50","51","52","53","54","55","56","57","58","59"]
arr=[]
palindromos=[]
contador=0
for i in range(len(h)):
for j in range(len(m)):
arr.append(h[i]+":"+m[j])
print(arr)
for k in range(0,len(arr)):
if (arr[k][0]==arr[k][4]) and (arr[k][1]==arr[k][3]):
print(arr[k])
contador+=1
print("los palindromos encontrados fueron:")
print(contador)
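# A more compact alternative (a sketch of the same idea, added for illustration):
# build each time string with an f-string and compare it with its reverse.
count = 0
for hour in range(24):
    for minute in range(60):
        time_str = f"{hour:02d}:{minute:02d}"
        if time_str == time_str[::-1]:
            count += 1
print("Palindromes found with the compact check:", count)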
###Output
_____no_output_____ |
day1/geometry_excercies.ipynb | ###Markdown
Question: Can you try to modify the parameters and plot the line x = 25? You may use either of the two methods above. Is it possible to represent and plot the line in the current ax+by+c=0 form?
###Code
# Write code here to decide the apt. parameters
### Start Code Here ###
a = 1
b = 2
c = -25
### End Code Here ###
# Write code here to plot appropriately using one of the above methods discussed.
X = np.linspace(start= -200, stop=200, num=200)
Y = ((-a*X)-c)*1.0/b
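# Note (added remark): a vertical line such as x = 25 corresponds to a = 1, b = 0,
# c = -25 in ax + by + c = 0; with b = 0 the y = f(x) expression above divides by
# zero, so a vertical line has to be plotted as x = f(y) instead (as done for
# x = 10 and x = 20 further below).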
%matplotlib inline
plt.plot(X,Y)
# Task : plot Y = 50
a = 0
b = 1
c = -50
Y = np.ones(X.shape)*50
%matplotlib inline
plt.plot(X,Y)
w = [a,b]
%matplotlib inline
plt.plot(X,Y,label='line')
plt.plot([0, w[0]], [0,w[1]], 'r-', label='w-vector')
plt.legend(loc = 'lower left')
#scaling w
scale_factor = 70
w = scale_factor* np.array([a,b])
%matplotlib inline
plt.plot(X,Y,label='line')
plt.plot([0, w[0]], [0,w[1]], 'r-', label='w-vector')
plt.legend(loc = 'lower left')
# task
scale_factor = 70
a = 1
b = 0
c= -10
Y1 = np.linspace(start= -200, stop=200, num=200)
X1 = (-b*Y-c)/a
w = scale_factor* np.array([a,b])
%matplotlib inline
plt.plot(X1,Y1,label='x= 10')
c2 = -20
Y2 = np.linspace(start= -200, stop=200, num=200)
X2 = (-b*Y2-c2)/a
plt.plot(X2,Y2,label='x= 20')
plt.plot([0, w[0]], [0,w[1]], 'r-', label='w-vector')
plt.legend(loc = 'upper left')
###Output
_____no_output_____
###Markdown
Question: Can you write code to find distance of the line x=50 and y=25 from the origin?Can you derive the formula to compute distance between two parallel lines from the formulas provided in the previous section?
###Code
def distance_from_origin(a,b,c):
    return abs(c)/np.sqrt(pow(a,2)+pow(b,2))
def distance_between_pll_lines(a1,b1,c1,a2,b2,c2):
assert(a1==a2)
assert(b1==b2)
    return (abs(c1-c2)/np.sqrt(pow(a1,2)+pow(b1,2)))
distance_from_origin(1,2,3)
distance_between_pll_lines(1,2,3,1,2,7)
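# Answering the question above (illustrative addition): in ax + by + c = 0 form,
# x = 50 is (a=1, b=0, c=-50) and y = 25 is (a=0, b=1, c=-25).
print(distance_from_origin(1, 0, -50)) # expected 50.0
print(distance_from_origin(0, 1, -25)) # expected 25.0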
#plotting any function using linspace
def plo(x,y):
%matplotlib inline
plt.plot(x,y,label=str(y))
plt.show()
x = np.linspace(-10,10,200)
sigmoid = 1.0/(1.0 + np.exp(-x))
plo(x,sigmoid)
tanh = (np.exp(x) - np.exp(-x))/ (np.exp(x) + np.exp(-x))
plo(x,tanh)
gaussian = np.exp(-pow(x,2))
plo(x,gaussian)
z = np.zeros(x.shape)
ReLU = np.maximum(z,x)
plo(x,ReLU)
Leaky_ReLU = np.maximum(.1*x,x)
plo(x,Leaky_ReLU)
step_ind = x >= 0
step = np.zeros(x.shape)
step[step_ind] = 1
plo(x,step)
natural_log = np.log(x)
plo(x,natural_log)
expo = np.exp(x)
plo(x,expo)
###Output
_____no_output_____
###Markdown
challenge question: Can you write the relation between tanh and sigmoid functions?
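One way to derive it (sketch): with $s=\sigma(x)=\frac{1}{1+e^{-x}}$ we have $e^{-x}=\frac{1-s}{s}$, so $\tanh(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}=\frac{s^{2}-(1-s)^{2}}{s^{2}+(1-s)^{2}}=\frac{2s-1}{2s^{2}-2s+1}$, which is exactly the expression checked numerically in the next cell.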
###Code
derived_tanh = (2*sigmoid -1 )/(2*pow(sigmoid,2) - 2*sigmoid +1)
plo(x,derived_tanh)
plo(x,tanh)
###Output
_____no_output_____
###Markdown
Both of the above graphs are the same; hence the relation is verified. Question: Write code to plot a square, a rectangle and an ellipse
###Code
def plot_rec(startx,starty,l,b):
xs = np.linspace(startx, l, 100)
ys = np.linspace(starty, b, 100)
%matplotlib inline
plt.plot(xs,np.zeros(ys.shape))
plt.plot(np.zeros(xs.shape),ys)
plt.plot(xs,np.zeros(ys.shape)+b)
    plt.plot(np.zeros(ys.shape)+l,ys)
plt.show()
# square
a = 5 #length of square
startx=0
starty=0
plot_rec(startx,starty,a,a)
# rectangle
l = 5
b = 10
startx=0
starty=0
plot_rec(startx,starty,l,b)
from math import pi, sqrt
u=1. #x-position of the center
v=0.5 #y-position of the center
a=2. #radius on the x-axis
b=1.5 #radius on the y-axis
t = np.linspace(0, 2*pi, 100)
plt.plot( u+a*np.cos(t) , v+b*np.sin(t) )
plt.grid(color='lightgray',linestyle='--')
plt.show()
%matplotlib inline
x = np.linspace(-2, 2, 200)
y = np.linspace(-2, 2, 200)
X, Y = np.meshgrid(x, y)
f = np.exp(-X*X - Y*Y)
C = plt.contour(X, Y, f)
plt.clabel(C)
# Plotting a 3D surface
from mpl_toolkits.mplot3d import Axes3D # Importing this is necessary when you plot in 3D
a = 1
b = 1
c = 3
d = 10
##############################
xx = np.linspace(-10, 10, 20)
yy = np.linspace(-10, 10, 20)
X, Y = np.meshgrid(xx, yy)
zz = (-a*X - b*Y - d) * 1.0 / c
plt3d = plt.figure().gca(projection='3d')
plt3d.plot_surface(X, Y, zz)
###Output
_____no_output_____
###Markdown
Question: Can you try plotting the plane characterised by the following parameters: a=1,b=1,c=0,d=10
###Code
from mpl_toolkits.mplot3d import Axes3D
a = 1
b = 1
c = 0
d = 10
xx = np.linspace(-10,10,20)
zz = np.linspace(-10,10,20)
X, Z = np.meshgrid(xx, zz)
yy = -(d + a*X + c*Z)/b
plt3d = plt.figure().gca(projection='3d')
plt3d.plot_surface(X,yy,Z)
# Do not modify this Cell
from sklearn.datasets import make_swiss_roll
X, _ = make_swiss_roll(1000)
xs = X[:, 0]
ys = X[:, 1]
zs = X[:, 2]
from mpl_toolkits.mplot3d import Axes3D
%matplotlib qt
plt3d = plt.figure().gca(projection='3d')
plt3d.scatter(xs,ys,zs)
# Write code here to plot the 3D scatter-plot(~1 line)
###Output
Warning: Cannot change to a different GUI toolkit: qt. Using gtk3 instead.
###Markdown
Question: Can you plot the surface of a 2D Gaussian function on a 3D plot?
###Code
%matplotlib qt
xx = np.linspace(-10,10,200)
yy = np.linspace(-10,10,200)
X,Y = np.meshgrid(xx,yy)
Z = np.exp(-X**2-Y**2)
plt3d = plt.figure().gca(projection='3d')
plt3d.plot_surface(X,Y,Z)
###Output
Warning: Cannot change to a different GUI toolkit: qt. Using gtk3 instead.
|
FaceRecognition_for_assignment.ipynb | ###Markdown
For data collection purposes
###Code
# # Move LFW Images to the following repository data/negative
# for directory in os.listdir('lfw'):
# for file in os.listdir(os.path.join('lfw', directory)):
# EX_PATH = os.path.join('lfw', directory, file)
# NEW_PATH = os.path.join(NEG_PATH, file)
# os.replace(EX_PATH, NEW_PATH)
# Import uuid library to generate unique image names
# import uuid
# os.path.join(ANC_PATH, '{}.jpg'.format(uuid.uuid1()))
# # Establish a connection to the webcam
# cap = cv2.VideoCapture(0)
# while cap.isOpened():
# ret, frame = cap.read()
# # Cut down frame to 250x250px
# frame = frame[120:120+250,200:200+250, :]
# # Collect anchors
# if cv2.waitKey(1) & 0XFF == ord('a'):
# # Create the unique file path
# imgname = os.path.join(ANC_PATH, '{}.jpg'.format(uuid.uuid1()))
# # Write out anchor image
# cv2.imwrite(imgname, frame)
# # Collect positives
# if cv2.waitKey(1) & 0XFF == ord('p'):
# # Create the unique file path
# imgname = os.path.join(POS_PATH, '{}.jpg'.format(uuid.uuid1()))
# # Write out positive image
# cv2.imwrite(imgname, frame)
# # Show image back to screen
# cv2.imshow('Image Collection', frame)
# # Breaking gracefully
# if cv2.waitKey(1) & 0XFF == ord('q'):
# break
# # Release the webcam
# cap.release()
# # Close the image show frame
# cv2.destroyAllWindows()
# def data_aug(img):
# data = []
# for i in range(9):
# img = tf.image.stateless_random_brightness(img, max_delta=0.02, seed=(1,2))
# img = tf.image.stateless_random_contrast(img, lower=0.6, upper=1, seed=(1,3))
# # img = tf.image.stateless_random_crop(img, size=(20,20,3), seed=(1,2))
# img = tf.image.stateless_random_flip_left_right(img, seed=(np.random.randint(100),np.random.randint(100)))
# img = tf.image.stateless_random_jpeg_quality(img, min_jpeg_quality=90, max_jpeg_quality=100, seed=(np.random.randint(100),np.random.randint(100)))
# img = tf.image.stateless_random_saturation(img, lower=0.9,upper=1, seed=(np.random.randint(100),np.random.randint(100)))
# data.append(img)
# return data
# img_path = os.path.join(ANC_PATH, '924e839c-135f-11ec-b54e-a0cec8d2d278.jpg')
# img = cv2.imread(img_path)
# augmented_images = data_aug(img)
# for image in augmented_images:
# cv2.imwrite(os.path.join(ANC_PATH, '{}.jpg'.format(uuid.uuid1())), image.numpy())
# for file_name in os.listdir(os.path.join(POS_PATH)):
# img_path = os.path.join(POS_PATH, file_name)
# img = cv2.imread(img_path)
# augmented_images = data_aug(img)
# for image in augmented_images:
# cv2.imwrite(os.path.join(POS_PATH, '{}.jpg'.format(uuid.uuid1())), image.numpy())
###Output
_____no_output_____
###Markdown
Training
###Code
anc_path = '/content/gdrive/MyDrive/Job/face_recognizer_assignment/nic/data/anchor'
pos_path = '/content/gdrive/MyDrive/Job/face_recognizer_assignment/nic/data/positive'
neg_path = '/content/gdrive/MyDrive/Job/face_recognizer_assignment/nic/data/negative'
os.getcwd()
anchor = tf.data.Dataset.list_files('/content/gdrive/MyDrive/Job/face_recognizer_assignment/nic/data/anchor'+'/*.jpg').take(3000)
positive = tf.data.Dataset.list_files('/content/gdrive/MyDrive/Job/face_recognizer_assignment/nic/data/positive'+'/*.jpg').take(3000)
negative = tf.data.Dataset.list_files('/content/gdrive/MyDrive/Job/face_recognizer_assignment/nic/data/negative'+'/*.jpg').take(3000)
dir_test = anchor.as_numpy_iterator()
print(dir_test.next())
def preprocess(file_path):
# Read in image from file path
byte_img = tf.io.read_file(file_path)
# Load in the image
img = tf.io.decode_jpeg(byte_img)
# Preprocessing steps - resizing the image to be 100x100x3
img = tf.image.resize(img, (100,100))
# Scale image to be between 0 and 1
img = img / 255.0
# Return image
return img
positives = tf.data.Dataset.zip((anchor, positive, tf.data.Dataset.from_tensor_slices(tf.ones(len(anchor)))))
negatives = tf.data.Dataset.zip((anchor, negative, tf.data.Dataset.from_tensor_slices(tf.zeros(len(anchor)))))
data = positives.concatenate(negatives)
samples = data.as_numpy_iterator()
exampple = samples.next()
exampple
def preprocess_twin(input_img, validation_img, label):
return(preprocess(input_img), preprocess(validation_img), label)
res = preprocess_twin(*exampple)
plt.imshow(res[0])
# Build dataloader pipeline
data = data.map(preprocess_twin)
data = data.cache()
data = data.shuffle(buffer_size=10000)
# Training partition
train_data = data.take(round(len(data)*.7))
train_data = train_data.batch(16)
train_data = train_data.prefetch(8)
# Testing partition
test_data = data.skip(round(len(data)*.7))
test_data = test_data.take(round(len(data)*.3))
test_data = test_data.batch(16)
test_data = test_data.prefetch(8)
round(len(data)*.7)
def make_embedding():
inp = Input(shape=(100,100,3), name='input_image')
# First block
c1 = Conv2D(64, (10,10), activation='relu')(inp)
m1 = MaxPooling2D(64, (2,2), padding='same')(c1)
# Second block
c2 = Conv2D(128, (7,7), activation='relu')(m1)
m2 = MaxPooling2D(64, (2,2), padding='same')(c2)
# Third block
c3 = Conv2D(128, (4,4), activation='relu')(m2)
m3 = MaxPooling2D(64, (2,2), padding='same')(c3)
# Final embedding block
c4 = Conv2D(256, (4,4), activation='relu')(m3)
f1 = Flatten()(c4)
d1 = Dense(4096, activation='sigmoid')(f1)
return Model(inputs=[inp], outputs=[d1], name='embedding')
embedding = make_embedding()
embedding.summary()
# Siamese L1 Distance class
class L1Dist(Layer):
# Init method - inheritance
def __init__(self, **kwargs):
super().__init__()
# Magic happens here - similarity calculation
def call(self, input_embedding, validation_embedding):
return tf.math.abs(input_embedding - validation_embedding)
l1 = L1Dist()
#l1(anchor_embedding, validation_embedding)
def make_siamese_model():
# Anchor image input in the network
input_image = Input(name='input_img', shape=(100,100,3))
# Validation image in the network
validation_image = Input(name='validation_img', shape=(100,100,3))
# Combine siamese distance components
siamese_layer = L1Dist()
siamese_layer._name = 'distance'
distances = siamese_layer(embedding(input_image), embedding(validation_image))
# Classification layer
classifier = Dense(1, activation='sigmoid')(distances)
return Model(inputs=[input_image, validation_image], outputs=classifier, name='SiameseNetwork')
siamese_model = make_siamese_model()
siamese_model.summary()
binary_cross_loss = tf.losses.BinaryCrossentropy()
opt = tf.keras.optimizers.Adam(1e-4) # 0.0001
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, 'ckpt')
checkpoint = tf.train.Checkpoint(opt=opt, siamese_model=siamese_model)
manager = tf.train.CheckpointManager(checkpoint, './tf_ckpts', max_to_keep=3)
@tf.function
def train_step(batch):
# Record all of our operations
with tf.GradientTape() as tape:
# Get anchor and positive/negative image
X = batch[:2]
# Get label
y = batch[2]
# Forward pass
yhat = siamese_model(X, training=True)
# Calculate loss
loss = binary_cross_loss(y, yhat)
print(loss)
# Calculate gradients
grad = tape.gradient(loss, siamese_model.trainable_variables)
# Calculate updated weights and apply to siamese model
opt.apply_gradients(zip(grad, siamese_model.trainable_variables))
# Return loss
return loss
# Import metric calculations
from tensorflow.keras.metrics import Precision, Recall
def train(data, EPOCHS):
# Loop through epochs
for epoch in range(1, EPOCHS+1):
print('\n Epoch {}/{}'.format(epoch, EPOCHS))
progbar = tf.keras.utils.Progbar(len(data))
# Creating a metric object
r = Recall()
p = Precision()
# Loop through each batch
for idx, batch in enumerate(data):
# Run train step here
loss = train_step(batch)
yhat = siamese_model.predict(batch[:2])
r.update_state(batch[2], yhat)
p.update_state(batch[2], yhat)
progbar.update(idx+1)
print(loss.numpy(), r.result().numpy(), p.result().numpy())
# Save checkpoints
if epoch % 10 == 0:
checkpoint.save(file_prefix=checkpoint_prefix)
EPOCHS = 50
train(train_data, EPOCHS)
###Output
Epoch 1/500
Tensor("binary_crossentropy/weighted_loss/value:0", shape=(), dtype=float32)
Tensor("binary_crossentropy/weighted_loss/value:0", shape=(), dtype=float32)
42/43 [============================>.] - ETA: 1sTensor("binary_crossentropy/weighted_loss/value:0", shape=(), dtype=float32)
43/43 [==============================] - 525s 1s/step
0.0711672 0.7535014 1.0
Epoch 2/500
43/43 [==============================] - 46s 1s/step
0.0020223379 0.9877301 0.9877301
Epoch 3/500
43/43 [==============================] - 47s 1s/step
0.0012368661 0.99715906 1.0
Epoch 4/500
43/43 [==============================] - 45s 1s/step
0.0010802895 1.0 1.0
Epoch 5/500
43/43 [==============================] - 46s 1s/step
0.0005421682 1.0 1.0
Epoch 6/500
43/43 [==============================] - 46s 1s/step
0.022434464 1.0 1.0
Epoch 7/500
43/43 [==============================] - 45s 1s/step
0.009069897 1.0 1.0
Epoch 8/500
43/43 [==============================] - 48s 1s/step
3.432327e-05 1.0 1.0
Epoch 9/500
43/43 [==============================] - 45s 1s/step
0.10575259 0.9446064 0.9908257
Epoch 10/500
43/43 [==============================] - 45s 1s/step
0.002517462 0.99421966 0.9971014
Epoch 11/500
43/43 [==============================] - 44s 1s/step
0.007182942 0.9941176 1.0
Epoch 12/500
43/43 [==============================] - 45s 1s/step
0.0024996113 0.99714285 1.0
Epoch 13/500
43/43 [==============================] - 43s 1s/step
6.7694106e-07 1.0 1.0
Epoch 14/500
43/43 [==============================] - 45s 1s/step
0.014686133 1.0 1.0
Epoch 15/500
43/43 [==============================] - 43s 993ms/step
2.0861798e-06 1.0 1.0
Epoch 16/500
43/43 [==============================] - 45s 1s/step
0.00032752688 1.0 1.0
Epoch 17/500
43/43 [==============================] - 44s 1s/step
3.5296736e-05 1.0 1.0
Epoch 18/500
43/43 [==============================] - 43s 1s/step
9.986716e-05 1.0 1.0
Epoch 19/500
43/43 [==============================] - 43s 1s/step
0.00011794722 1.0 1.0
Epoch 20/500
43/43 [==============================] - 44s 1s/step
0.00018182388 1.0 1.0
Epoch 21/500
43/43 [==============================] - 45s 1s/step
0.00077983673 1.0 1.0
Epoch 22/500
43/43 [==============================] - 43s 996ms/step
8.2043545e-05 1.0 1.0
Epoch 23/500
43/43 [==============================] - 45s 1s/step
0.00017132217 1.0 1.0
Epoch 24/500
43/43 [==============================] - 45s 1s/step
6.620422e-05 1.0 1.0
Epoch 25/500
43/43 [==============================] - 46s 1s/step
2.085065e-05 1.0 1.0
Epoch 26/500
43/43 [==============================] - 45s 1s/step
0.00036985351 1.0 1.0
Epoch 27/500
43/43 [==============================] - 45s 1s/step
0.0001416481 1.0 1.0
Epoch 28/500
43/43 [==============================] - 45s 1s/step
5.21806e-05 1.0 1.0
Epoch 29/500
43/43 [==============================] - 44s 1s/step
0.00059130794 1.0 1.0
Epoch 30/500
43/43 [==============================] - 43s 991ms/step
1.2228498e-05 1.0 1.0
Epoch 31/500
43/43 [==============================] - 45s 1s/step
4.48745e-06 1.0 1.0
Epoch 32/500
43/43 [==============================] - 43s 997ms/step
3.0095009e-05 1.0 1.0
Epoch 33/500
43/43 [==============================] - 45s 1s/step
3.1079603e-07 1.0 1.0
Epoch 34/500
43/43 [==============================] - 45s 1s/step
8.5149505e-09 1.0 1.0
Epoch 35/500
43/43 [==============================] - 42s 986ms/step
1.9712256e-06 1.0 1.0
Epoch 36/500
43/43 [==============================] - 45s 1s/step
5.5310793e-05 1.0 1.0
Epoch 37/500
43/43 [==============================] - 45s 1s/step
1.7029901e-08 1.0 1.0
Epoch 38/500
43/43 [==============================] - 46s 1s/step
3.7428694e-05 1.0 1.0
Epoch 39/500
43/43 [==============================] - 44s 1s/step
1.724291e-06 1.0 1.0
Epoch 40/500
43/43 [==============================] - 44s 1s/step
0.0003581268 1.0 1.0
Epoch 41/500
43/43 [==============================] - 45s 1s/step
0.00029178942 1.0 1.0
Epoch 42/500
43/43 [==============================] - 41s 949ms/step
1.5028997e-06 1.0 1.0
Epoch 43/500
43/43 [==============================] - 43s 1s/step
3.0750405e-05 1.0 1.0
Epoch 44/500
43/43 [==============================] - 44s 1s/step
0.00011264669 1.0 1.0
Epoch 45/500
43/43 [==============================] - 45s 1s/step
4.5520264e-05 1.0 1.0
Epoch 46/500
43/43 [==============================] - 45s 1s/step
0.0002128595 1.0 1.0
Epoch 47/500
43/43 [==============================] - 43s 1s/step
-0.0 1.0 1.0
Epoch 48/500
43/43 [==============================] - 44s 1s/step
-0.0 1.0 1.0
Epoch 49/500
43/43 [==============================] - 44s 1s/step
4.0868596e-05 1.0 1.0
Epoch 50/500
43/43 [==============================] - 43s 1s/step
7.4080344e-07 1.0 1.0
Epoch 51/500
43/43 [==============================] - 44s 1s/step
8.15796e-05 1.0 1.0
Epoch 52/500
43/43 [==============================] - 44s 1s/step
2.5417562e-06 1.0 1.0
Epoch 53/500
43/43 [==============================] - 43s 995ms/step
0.00015705943 1.0 1.0
Epoch 54/500
43/43 [==============================] - 44s 1s/step
0.00017930775 1.0 1.0
Epoch 55/500
43/43 [==============================] - 42s 985ms/step
0.00010372749 1.0 1.0
Epoch 56/500
43/43 [==============================] - 43s 1s/step
5.960466e-08 1.0 1.0
Epoch 57/500
43/43 [==============================] - 43s 995ms/step
0.00013909324 1.0 1.0
Epoch 58/500
43/43 [==============================] - 43s 1s/step
8.5149505e-09 1.0 1.0
Epoch 59/500
43/43 [==============================] - 45s 1s/step
1.592308e-06 1.0 1.0
Epoch 60/500
43/43 [==============================] - 44s 1s/step
1.2517082e-06 1.0 1.0
Epoch 61/500
43/43 [==============================] - 45s 1s/step
8.940701e-08 1.0 1.0
Epoch 62/500
43/43 [==============================] - 43s 986ms/step
1.617842e-07 1.0 1.0
Epoch 63/500
43/43 [==============================] - 44s 1s/step
2.2673126e-05 1.0 1.0
Epoch 64/500
43/43 [==============================] - 43s 1s/step
5.3983884e-05 1.0 1.0
Epoch 65/500
43/43 [==============================] - 43s 994ms/step
1.941435e-06 1.0 1.0
Epoch 66/500
43/43 [==============================] - 46s 1s/step
5.5347183e-08 1.0 1.0
Epoch 67/500
43/43 [==============================] - 46s 1s/step
1.0430869e-06 1.0 1.0
Epoch 68/500
43/43 [==============================] - 43s 1s/step
2.2990388e-07 1.0 1.0
Epoch 69/500
43/43 [==============================] - 42s 976ms/step
9.303163e-06 1.0 1.0
Epoch 70/500
43/43 [==============================] - 43s 1s/step
2.277186e-05 1.0 1.0
Epoch 71/500
43/43 [==============================] - 45s 1s/step
4.3973552e-05 1.0 1.0
Epoch 72/500
43/43 [==============================] - 44s 1s/step
5.3219587e-06 1.0 1.0
Epoch 73/500
43/43 [==============================] - 43s 999ms/step
1.660417e-07 1.0 1.0
Epoch 74/500
43/43 [==============================] - 44s 1s/step
1.0686335e-06 1.0 1.0
Epoch 75/500
43/43 [==============================] - 43s 996ms/step
3.8705235e-05 1.0 1.0
Epoch 76/500
43/43 [==============================] - 43s 1s/step
1.1921029e-06 1.0 1.0
Epoch 77/500
43/43 [==============================] - 44s 1s/step
2.5374998e-06 1.0 1.0
Epoch 78/500
43/43 [==============================] - 44s 1s/step
7.1527006e-06 1.0 1.0
Epoch 79/500
43/43 [==============================] - 42s 988ms/step
3.4059806e-08 1.0 1.0
Epoch 80/500
43/43 [==============================] - 45s 1s/step
2.2650124e-06 1.0 1.0
Epoch 81/500
43/43 [==============================] - 44s 1s/step
7.3743263e-06 1.0 1.0
Epoch 82/500
43/43 [==============================] - 42s 971ms/step
1.0337463e-05 1.0 1.0
Epoch 83/500
43/43 [==============================] - 44s 1s/step
7.635601e-05 1.0 1.0
Epoch 84/500
43/43 [==============================] - 44s 1s/step
4.8535315e-07 1.0 1.0
Epoch 85/500
43/43 [==============================] - 45s 1s/step
1.8563767e-05 1.0 1.0
Epoch 86/500
43/43 [==============================] - 45s 1s/step
1.5247644e-05 1.0 1.0
Epoch 87/500
43/43 [==============================] - 44s 1s/step
4.6832238e-08 1.0 1.0
Epoch 88/500
43/43 [==============================] - 45s 1s/step
3.9812334e-05 1.0 1.0
Epoch 89/500
43/43 [==============================] - 45s 1s/step
-0.0 1.0 1.0
Epoch 90/500
43/43 [==============================] - 45s 1s/step
1.8501116e-05 1.0 1.0
Epoch 91/500
43/43 [==============================] - 45s 1s/step
-0.0 1.0 1.0
Epoch 92/500
43/43 [==============================] - 45s 1s/step
4.9387572e-06 1.0 1.0
Epoch 93/500
43/43 [==============================] - 43s 987ms/step
1.5903437e-05 1.0 1.0
Epoch 94/500
43/43 [==============================] - 43s 994ms/step
2.3841898e-07 1.0 1.0
Epoch 95/500
43/43 [==============================] - 43s 1s/step
2.4073173e-05 1.0 1.0
Epoch 96/500
43/43 [==============================] - 45s 1s/step
-0.0 1.0 1.0
Epoch 97/500
43/43 [==============================] - 46s 1s/step
3.1037323e-06 1.0 1.0
Epoch 98/500
43/43 [==============================] - 43s 1s/step
1.941435e-06 1.0 1.0
Epoch 99/500
43/43 [==============================] - 45s 1s/step
7.6634585e-08 1.0 1.0
Epoch 100/500
43/43 [==============================] - 45s 1s/step
8.5149505e-09 1.0 1.0
Epoch 101/500
43/43 [==============================] - 45s 1s/step
9.1110536e-07 1.0 1.0
Epoch 102/500
43/43 [==============================] - 44s 1s/step
2.98881e-06 1.0 1.0
Epoch 103/500
43/43 [==============================] - 44s 1s/step
3.49973e-06 1.0 1.0
Epoch 104/500
43/43 [==============================] - 44s 1s/step
3.5185923e-05 1.0 1.0
Epoch 105/500
43/43 [==============================] - 44s 1s/step
4.5402998e-05 1.0 1.0
Epoch 106/500
43/43 [==============================] - 44s 1s/step
2.8761e-05 1.0 1.0
Epoch 107/500
43/43 [==============================] - 44s 1s/step
8.893529e-05 1.0 1.0
Epoch 108/500
43/43 [==============================] - 43s 1s/step
-0.0 1.0 1.0
Epoch 109/500
43/43 [==============================] - 45s 1s/step
9.919951e-07 1.0 1.0
Epoch 110/500
43/43 [==============================] - 44s 1s/step
2.5544892e-07 1.0 1.0
Epoch 111/500
43/43 [==============================] - 45s 1s/step
2.7248238e-06 1.0 1.0
Epoch 112/500
43/43 [==============================] - 44s 1s/step
3.976573e-06 1.0 1.0
Epoch 113/500
43/43 [==============================] - 44s 1s/step
1.1154671e-06 1.0 1.0
Epoch 114/500
43/43 [==============================] - 42s 988ms/step
1.583798e-06 1.0 1.0
Epoch 115/500
43/43 [==============================] - 44s 1s/step
1.4603195e-06 1.0 1.0
Epoch 116/500
43/43 [==============================] - 43s 997ms/step
-0.0 1.0 1.0
Epoch 117/500
43/43 [==============================] - 43s 988ms/step
3.7040107e-07 1.0 1.0
Epoch 118/500
43/43 [==============================] - 44s 1s/step
2.2522297e-06 1.0 1.0
Epoch 119/500
43/43 [==============================] - 44s 1s/step
6.207575e-06 1.0 1.0
Epoch 120/500
43/43 [==============================] - 45s 1s/step
3.448559e-07 1.0 1.0
Epoch 121/500
43/43 [==============================] - 46s 1s/step
1.0005127e-06 1.0 1.0
Epoch 122/500
43/43 [==============================] - 45s 1s/step
-0.0 1.0 1.0
Epoch 123/500
43/43 [==============================] - 45s 1s/step
2.8355346e-06 1.0 1.0
Epoch 124/500
43/43 [==============================] - 44s 1s/step
2.2990723e-06 1.0 1.0
Epoch 125/500
43/43 [==============================] - 46s 1s/step
9.273382e-06 1.0 1.0
Epoch 126/500
43/43 [==============================] - 44s 1s/step
1.0260554e-06 1.0 1.0
Epoch 127/500
43/43 [==============================] - 46s 1s/step
8.132016e-06 1.0 1.0
Epoch 128/500
43/43 [==============================] - 43s 1s/step
8.268301e-06 1.0 1.0
Epoch 129/500
43/43 [==============================] - 44s 1s/step
8.77045e-07 1.0 1.0
Epoch 130/500
43/43 [==============================] - 45s 1s/step
9.766998e-06 1.0 1.0
Epoch 131/500
43/43 [==============================] - 44s 1s/step
5.2367914e-06 1.0 1.0
Epoch 132/500
43/43 [==============================] - 43s 1s/step
1.2346688e-07 1.0 1.0
Epoch 133/500
43/43 [==============================] - 43s 996ms/step
2.8440302e-06 1.0 1.0
Epoch 134/500
43/43 [==============================] - 44s 1s/step
2.2053948e-06 1.0 1.0
Epoch 135/500
43/43 [==============================] - 43s 988ms/step
1.6178421e-07 1.0 1.0
Epoch 136/500
43/43 [==============================] - 44s 1s/step
1.209131e-06 1.0 1.0
Epoch 137/500
43/43 [==============================] - 45s 1s/step
3.661476e-06 1.0 1.0
Epoch 138/500
43/43 [==============================] - 44s 1s/step
1.3623932e-07 1.0 1.0
Epoch 139/500
43/43 [==============================] - 44s 1s/step
1.5454726e-06 1.0 1.0
Epoch 140/500
43/43 [==============================] - 43s 996ms/step
4.257481e-07 1.0 1.0
Epoch 141/500
43/43 [==============================] - 45s 1s/step
3.1207503e-06 1.0 1.0
Epoch 142/500
43/43 [==============================] - 45s 1s/step
3.9254996e-06 1.0 1.0
Epoch 143/500
43/43 [==============================] - 44s 1s/step
1.0558615e-06 1.0 1.0
Epoch 144/500
43/43 [==============================] - 43s 981ms/step
1.038827e-06 1.0 1.0
Epoch 145/500
43/43 [==============================] - 44s 1s/step
-0.0 1.0 1.0
Epoch 146/500
43/43 [==============================] - 44s 1s/step
8.174398e-07 1.0 1.0
Epoch 147/500
43/43 [==============================] - 42s 980ms/step
4.7897415e-06 1.0 1.0
Epoch 148/500
43/43 [==============================] - 44s 1s/step
2.5544852e-08 1.0 1.0
Epoch 149/500
43/43 [==============================] - 44s 1s/step
3.567853e-06 1.0 1.0
Epoch 150/500
43/43 [==============================] - 43s 1s/step
5.1089717e-08 1.0 1.0
Epoch 151/500
43/43 [==============================] - 46s 1s/step
4.1723374e-07 1.0 1.0
Epoch 152/500
43/43 [==============================] - 43s 1s/step
2.967491e-06 1.0 1.0
Epoch 153/500
43/43 [==============================] - 43s 998ms/step
-0.0 1.0 1.0
Epoch 154/500
43/43 [==============================] - 43s 997ms/step
-0.0 1.0 1.0
Epoch 155/500
43/43 [==============================] - 44s 1s/step
2.1713153e-07 1.0 1.0
Epoch 156/500
43/43 [==============================] - 44s 1s/step
9.426336e-06 1.0 1.0
Epoch 157/500
43/43 [==============================] - 43s 996ms/step
2.4267952e-06 1.0 1.0
Epoch 158/500
43/43 [==============================] - 45s 1s/step
1.3113131e-06 1.0 1.0
Epoch 159/500
43/43 [==============================] - 45s 1s/step
6.8119625e-08 1.0 1.0
Epoch 160/500
43/43 [==============================] - 45s 1s/step
1.1495274e-06 1.0 1.0
Epoch 161/500
43/43 [==============================] - 45s 1s/step
3.5762847e-07 1.0 1.0
Epoch 162/500
43/43 [==============================] - 46s 1s/step
7.0930982e-06 1.0 1.0
Epoch 163/500
43/43 [==============================] - 43s 1s/step
-0.0 1.0 1.0
Epoch 164/500
43/43 [==============================] - 45s 1s/step
5.1431393e-06 1.0 1.0
Epoch 165/500
43/43 [==============================] - 44s 1s/step
-0.0 1.0 1.0
Epoch 166/500
43/43 [==============================] - 43s 997ms/step
2.8950885e-07 1.0 1.0
Epoch 167/500
43/43 [==============================] - 44s 1s/step
3.2314626e-06 1.0 1.0
Epoch 168/500
43/43 [==============================] - 42s 979ms/step
3.5380247e-06 1.0 1.0
Epoch 169/500
43/43 [==============================] - 44s 1s/step
7.152593e-07 1.0 1.0
Epoch 170/500
43/43 [==============================] - 42s 980ms/step
8.4298284e-07 1.0 1.0
Epoch 171/500
43/43 [==============================] - 44s 1s/step
1.0047708e-06 1.0 1.0
Epoch 172/500
43/43 [==============================] - 45s 1s/step
8.514953e-08 1.0 1.0
Epoch 173/500
43/43 [==============================] - 46s 1s/step
6.8119625e-08 1.0 1.0
Epoch 174/500
43/43 [==============================] - 43s 990ms/step
8.600126e-07 1.0 1.0
Epoch 175/500
43/43 [==============================] - 44s 1s/step
1.907369e-06 1.0 1.0
Epoch 176/500
43/43 [==============================] - 44s 1s/step
-0.0 1.0 1.0
Epoch 177/500
43/43 [==============================] - 45s 1s/step
1.7285556e-06 1.0 1.0
Epoch 178/500
43/43 [==============================] - 43s 1s/step
7.2377435e-07 1.0 1.0
Epoch 179/500
43/43 [==============================] - 43s 1s/step
1.9584411e-07 1.0 1.0
Epoch 180/500
43/43 [==============================] - 42s 983ms/step
-0.0 1.0 1.0
Epoch 181/500
43/43 [==============================] - 48s 1s/step
1.4603221e-06 1.0 1.0
Epoch 182/500
43/43 [==============================] - 46s 1s/step
3.4059806e-08 1.0 1.0
Epoch 183/500
43/43 [==============================] - 43s 993ms/step
5.449579e-07 1.0 1.0
Epoch 184/500
43/43 [==============================] - 46s 1s/step
2.333132e-06 1.0 1.0
Epoch 185/500
43/43 [==============================] - 44s 1s/step
2.7247884e-07 1.0 1.0
Epoch 186/500
43/43 [==============================] - 45s 1s/step
1.0643762e-06 1.0 1.0
Epoch 187/500
43/43 [==============================] - 43s 1s/step
1.7200261e-06 1.0 1.0
Epoch 188/500
43/43 [==============================] - 43s 989ms/step
-0.0 1.0 1.0
Epoch 189/500
43/43 [==============================] - 44s 1s/step
-0.0 1.0 1.0
Epoch 190/500
43/43 [==============================] - 42s 982ms/step
-0.0 1.0 1.0
Epoch 191/500
43/43 [==============================] - 45s 1s/step
3.9594565e-07 1.0 1.0
Epoch 192/500
43/43 [==============================] - 43s 999ms/step
-0.0 1.0 1.0
Epoch 193/500
43/43 [==============================] - 45s 1s/step
6.8119625e-08 1.0 1.0
Epoch 194/500
43/43 [==============================] - 43s 1s/step
3.4059806e-08 1.0 1.0
Epoch 195/500
27/43 [=================>............] - ETA: 16s
###Markdown
Evaluation
###Code
# Import metric calculations
from tensorflow.keras.metrics import Precision, Recall
# Get a batch of test data
test_input, test_val, y_true = test_data.as_numpy_iterator().next()
y_hat = siamese_model.predict([test_input, test_val])
# Post processing the results
[1 if prediction > 0.5 else 0 for prediction in y_hat ]
y_true
# Creating a metric object
m = Recall()
# Calculating the recall value
m.update_state(y_true, y_hat)
# Return Recall Result
m.result().numpy()
# Creating a metric object
m = Precision()
# Calculating the recall value
m.update_state(y_true, y_hat)
# Return Recall Result
m.result().numpy()
r = Recall()
p = Precision()
for test_input, test_val, y_true in test_data.as_numpy_iterator():
yhat = siamese_model.predict([test_input, test_val])
r.update_state(y_true, yhat)
p.update_state(y_true,yhat)
print(r.result().numpy(), p.result().numpy())
# Set plot size
plt.figure(figsize=(10,8))
# Set first subplot
plt.subplot(1,2,1)
plt.imshow(test_input[0])
# Set second subplot
plt.subplot(1,2,2)
plt.imshow(test_val[0])
# Renders cleanly
plt.show()
###Output
_____no_output_____
###Markdown
The rest of the code is for **implementation purposes**
###Code
# Import standard dependencies
import cv2
import os
import numpy as np
from tensorflow.keras.layers import Layer
import tensorflow as tf
import uuid
def face_crop(image):
    # Detect faces with the Haar cascade; return whether a face was found and a
    # 250x250 crop of the last detected face (or the original image if none).
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 4)
    face = image
    face_detected = False
    for (x, y, w, h) in faces:
        face_detected = True
        face = image[y:y+h, x:x+w]
        face = cv2.resize(face, (250, 250))
        cv2.imshow("face", face)
    return face_detected, face
def preprocess(file_path):
# Read in image from file path
byte_img = tf.io.read_file(file_path)
# Load in the image
img = tf.io.decode_jpeg(byte_img)
# Preprocessing steps - resizing the image to be 100x100x3
img = tf.image.resize(img, (100, 100))
# Scale image to be between 0 and 1
img = img / 255.0
# Return image
return img
# Siamese L1 Distance class
class L1Dist(Layer):
# Init method - inheritance
def __init__(self, **kwargs):
super().__init__()
def call(self, input_embedding, validation_embedding):
return tf.math.abs(input_embedding - validation_embedding)
def verify(model, detection_threshold, verification_threshold):
input_img = preprocess(os.path.join('application_data', 'input_image', 'input_image.jpg'))
#print("Before Outer Loop")
for folder_name in os.listdir("application_data"):
if folder_name == "input_image":
continue
# Build results array
#Finiding total employee count
#Walking through each directory and checking image of that directory to input image
results = []
#print("Before Inner Loop")
for image in os.listdir(os.path.join('application_data', folder_name)):
validation_img = preprocess(os.path.join('application_data', folder_name, image))
# Make Predictions
result = model.predict(list(np.expand_dims([input_img, validation_img], axis=1)))
results.append(result)
# Detection Threshold: Metric above which a prediciton is considered positive
detection = np.sum(np.array(results) > detection_threshold)
# Verification Threshold: Proportion of positive predictions / total positive samples
verification = detection / len(os.listdir(os.path.join('application_data', folder_name)))
# print("Checking to verify")
if verification > verification_threshold:
verified = True
print("Returning values")
print("Folder name: ", folder_name)
print("result : ", results)
return results, verified, folder_name
print("Folder name: ",folder_name)
print("result : ",results)
return results, False, "Not found"
def add_person(name_id):
#path to main folder + individual person photo folder
path = 'D:\\path\\to\\main\\folder\\' + name_id #path to main folder + individual person photo folder
print(path)
if(os.path.isdir(path)):
print("alreay in database")
return
else:
os.makedirs(path)
# Establish a connection to the webcam
cap = cv2.VideoCapture(0)
i=1
while cap.isOpened():
ret, frame = cap.read()
if(i>50):
break
cv2.imshow('Image Collection', frame)
# Cut down frame to 250x250px
face_detect, frame = face_crop(frame)
print(face_detect)
# Collect anchors
if cv2.waitKey(1) & 0XFF == ord('a'):
if(face_detect):
# Create the unique file path
imgname = os.path.join(path, '{}.jpg'.format(uuid.uuid1()))
# Write out anchor image
cv2.imwrite(imgname, frame)
i = i + 1
# Breaking gracefully
if cv2.waitKey(1) & 0XFF == ord('q'):
break
# Release the webcam
cap.release()
# Close the image show frame
cv2.destroyAllWindows()
def verify_a_person():
cap = cv2.VideoCapture(0)
while cap.isOpened():
ret, frame = cap.read()
cv2.imshow('Verification', frame)
face_detect,frame = face_crop(frame)
if(not face_detect):
continue
# Verification trigger
if cv2.waitKey(10) & 0xFF == ord('v'):
cv2.imwrite(os.path.join('application_data', 'input_image', 'input_image.jpg'), frame)
# Run verification
results, verified, folder_name = verify(siamese_model, 0.4, 0.4)
print(verified)
# print(results)
print(folder_name)
if cv2.waitKey(10) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
# Reload model
siamese_model = tf.keras.models.load_model('siamesemodelv2.h5',
custom_objects={'L1Dist': L1Dist, 'BinaryCrossentropy': tf.losses.BinaryCrossentropy})
while(1):
choice = input("What do you want to do : \n1.Add a new person\n2.Verify person\n3.Quit\n")
global face_cascade
#path to haar cascade file
haar_file = 'D:\\path\\to\\haar_cascade_file\\haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(haar_file)
if(choice == '1'):
name_id = input("Enter person's identification: ")
add_person(name_id)
elif(choice == '2'):
verify_a_person()
else:
print(choice)
print("Thankyou")
break
###Output
_____no_output_____ |
Google-MLCC-Exercises/mlcc-exercises-ipynb-files_cn/14_multi_class_classification_of_handwritten_digits.ipynb | ###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Classifying Handwritten Digits with Neural Networks  **Learning Objectives:** * Train both a linear model and a neural network to classify handwritten digits from the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset * Compare the performance of the linear and neural-network classification models * Visualize the weights of a neural-network hidden layer Our goal is to map each input image to the correct numeric digit. We will create a neural network with a few hidden layers and a Softmax layer at the top to select the winning class. Setup First, let's download the data set, import TensorFlow and other utilities, and load the data into a *Pandas* `DataFrame`. Note that this data is a sample of the original MNIST training data; we've taken 20000 rows at random.
###Code
from __future__ import print_function
import glob
import math
import os
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
mnist_dataframe = pd.read_csv(
"https://download.mlcc.google.cn/mledu-datasets/mnist_train_small.csv",
sep=",",
header=None)
# Use just the first 10,000 records for training/validation.
mnist_dataframe = mnist_dataframe.head(10000)
mnist_dataframe = mnist_dataframe.reindex(np.random.permutation(mnist_dataframe.index))
mnist_dataframe.head()
###Output
_____no_output_____
###Markdown
The first column contains the class label. The remaining columns contain the feature values, one per pixel for the `28×28=784` pixel values. Most of these pixel values are zero; you may want to take a minute to confirm that they aren't *all* zero.  These examples are relatively low-resolution, high-contrast images of handwritten digits. Each of the ten digits `0-9` is represented by a unique class label, so this is a multi-class classification problem with 10 classes. Now, let's parse out the labels and features and look at a few examples. Note the use of `loc`, which allows us to pull out columns based on their original position, since there is no header row in this data set.
###Code
def parse_labels_and_features(dataset):
"""Extracts labels and features.
This is a good place to scale or transform the features if needed.
Args:
dataset: A Pandas `Dataframe`, containing the label on the first column and
monochrome pixel values on the remaining columns, in row major order.
Returns:
A `tuple` `(labels, features)`:
labels: A Pandas `Series`.
features: A Pandas `DataFrame`.
"""
labels = dataset[0]
# DataFrame.loc index ranges are inclusive at both ends.
features = dataset.loc[:,1:784]
# Scale the data to [0, 1] by dividing out the max value, 255.
features = features / 255
return labels, features
training_targets, training_examples = parse_labels_and_features(mnist_dataframe[:7500])
training_examples.describe()
validation_targets, validation_examples = parse_labels_and_features(mnist_dataframe[7500:10000])
validation_examples.describe()
###Output
_____no_output_____
###Markdown
Show a random example and its corresponding label.
###Code
rand_example = np.random.choice(training_examples.index)
_, ax = plt.subplots()
ax.matshow(training_examples.loc[rand_example].values.reshape(28, 28))
ax.set_title("Label: %i" % training_targets.loc[rand_example])
ax.grid(False)
###Output
_____no_output_____
###Markdown
Task 1: Build a Linear Model for MNIST First, let's create a baseline model to compare against. The `LinearClassifier` provides a set of *k* one-vs-all classifiers, one for each of the *k* classes. You'll notice that, in addition to reporting accuracy and plotting log loss over time, we also display a [**confusion matrix**](https://en.wikipedia.org/wiki/Confusion_matrix). The confusion matrix shows which classes were misclassified as other classes. Which digits get confused with each other? Also note that we track the model's error with the `log_loss` function, which should not be confused with the internal loss function of `LinearClassifier` that is used for training.
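To make the error metric concrete, here is a tiny self-contained sketch (assuming only scikit-learn and NumPy, with made-up predictions for a 3-class toy problem) of how `log_loss` and a row-normalized confusion matrix behave; the training function below does the same thing on the real model output:

```python
import numpy as np
from sklearn import metrics

# Hypothetical true labels and one-hot "probabilities" for a toy 3-class problem.
y_true = [0, 1, 2, 2]
y_pred_one_hot = np.array([
    [1, 0, 0],   # correct
    [0, 1, 0],   # correct
    [0, 0, 1],   # correct
    [0, 1, 0],   # class 2 misclassified as class 1
])

print(metrics.log_loss(y_true, y_pred_one_hot))

# Row-normalized confusion matrix: each row shows where that true class went.
cm = metrics.confusion_matrix(y_true, np.argmax(y_pred_one_hot, axis=1))
print(cm.astype("float") / cm.sum(axis=1)[:, np.newaxis])
```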
###Code
def construct_feature_columns():
"""Construct the TensorFlow Feature Columns.
Returns:
A set of feature columns
"""
# There are 784 pixels in each image.
return set([tf.feature_column.numeric_column('pixels', shape=784)])
###Output
_____no_output_____
###Markdown
In this exercise we use separate input functions for training and for prediction, nesting them inside `create_training_input_fn()` and `create_predict_input_fn()` respectively. That way we can invoke these factory functions to return the corresponding `_input_fn`, which we then pass to the `.train()` and `.predict()` calls.
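The pattern is simply a closure: an outer factory captures the data and returns an `_input_fn` with the signature the Estimator expects. A minimal sketch of the idea (toy data only, independent of the real MNIST functions defined below):

```python
def make_input_fn(features, labels):
    """Factory: capture the data and return a no-argument input_fn."""
    def _input_fn():
        # The real versions below build a tf.data.Dataset and return
        # (feature_batch, label_batch); this sketch just returns the captured data.
        return features, labels
    return _input_fn

# Hypothetical toy data, only to show the call pattern.
toy_input_fn = make_input_fn({"pixels": [0.0, 0.5, 1.0]}, [7, 3, 1])
print(toy_input_fn())
```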
###Code
def create_training_input_fn(features, labels, batch_size, num_epochs=None, shuffle=True):
"""A custom input_fn for sending MNIST data to the estimator for training.
Args:
features: The training features.
labels: The training labels.
batch_size: Batch size to use during training.
Returns:
A function that returns batches of training features and labels during
training.
"""
def _input_fn(num_epochs=None, shuffle=True):
# Input pipelines are reset with each call to .train(). To ensure model
# gets a good sampling of data, even when number of steps is small, we
# shuffle all the data before creating the Dataset object
idx = np.random.permutation(features.index)
raw_features = {"pixels":features.reindex(idx)}
raw_targets = np.array(labels[idx])
ds = Dataset.from_tensor_slices((raw_features,raw_targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
feature_batch, label_batch = ds.make_one_shot_iterator().get_next()
return feature_batch, label_batch
return _input_fn
def create_predict_input_fn(features, labels, batch_size):
"""A custom input_fn for sending mnist data to the estimator for predictions.
Args:
features: The features to base predictions on.
labels: The labels of the prediction examples.
Returns:
A function that returns features and labels for predictions.
"""
def _input_fn():
raw_features = {"pixels": features.values}
raw_targets = np.array(labels)
ds = Dataset.from_tensor_slices((raw_features, raw_targets)) # warning: 2GB limit
ds = ds.batch(batch_size)
# Return the next batch of data.
feature_batch, label_batch = ds.make_one_shot_iterator().get_next()
return feature_batch, label_batch
return _input_fn
def train_linear_classification_model(
learning_rate,
steps,
batch_size,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a linear classification model for the MNIST digits dataset.
In addition to training, this function also prints training progress information,
a plot of the training and validation loss over time, and a confusion
matrix.
Args:
learning_rate: A `float`, the learning rate to use.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
training_examples: A `DataFrame` containing the training features.
training_targets: A `DataFrame` containing the training labels.
validation_examples: A `DataFrame` containing the validation features.
validation_targets: A `DataFrame` containing the validation labels.
Returns:
The trained `LinearClassifier` object.
"""
periods = 10
steps_per_period = steps / periods
# Create the input functions.
predict_training_input_fn = create_predict_input_fn(
training_examples, training_targets, batch_size)
predict_validation_input_fn = create_predict_input_fn(
validation_examples, validation_targets, batch_size)
training_input_fn = create_training_input_fn(
training_examples, training_targets, batch_size)
# Create a LinearClassifier object.
my_optimizer = tf.train.AdagradOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
classifier = tf.estimator.LinearClassifier(
feature_columns=construct_feature_columns(),
n_classes=10,
optimizer=my_optimizer,
config=tf.estimator.RunConfig(keep_checkpoint_max=1)
)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("LogLoss error (on validation data):")
training_errors = []
validation_errors = []
for period in range (0, periods):
# Train the model, starting from the prior state.
classifier.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute probabilities.
training_predictions = list(classifier.predict(input_fn=predict_training_input_fn))
training_probabilities = np.array([item['probabilities'] for item in training_predictions])
training_pred_class_id = np.array([item['class_ids'][0] for item in training_predictions])
training_pred_one_hot = tf.keras.utils.to_categorical(training_pred_class_id,10)
validation_predictions = list(classifier.predict(input_fn=predict_validation_input_fn))
validation_probabilities = np.array([item['probabilities'] for item in validation_predictions])
validation_pred_class_id = np.array([item['class_ids'][0] for item in validation_predictions])
validation_pred_one_hot = tf.keras.utils.to_categorical(validation_pred_class_id,10)
# Compute training and validation errors.
training_log_loss = metrics.log_loss(training_targets, training_pred_one_hot)
validation_log_loss = metrics.log_loss(validation_targets, validation_pred_one_hot)
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, validation_log_loss))
# Add the loss metrics from this period to our list.
training_errors.append(training_log_loss)
validation_errors.append(validation_log_loss)
print("Model training finished.")
# Remove event files to save disk space.
_ = list(map(os.remove, glob.glob(os.path.join(classifier.model_dir, 'events.out.tfevents*'))))  # list() forces the lazy map to run in Python 3
# Calculate final predictions (not probabilities, as above).
final_predictions = classifier.predict(input_fn=predict_validation_input_fn)
final_predictions = np.array([item['class_ids'][0] for item in final_predictions])
accuracy = metrics.accuracy_score(validation_targets, final_predictions)
print("Final accuracy (on validation data): %0.2f" % accuracy)
# Output a graph of loss metrics over periods.
plt.ylabel("LogLoss")
plt.xlabel("Periods")
plt.title("LogLoss vs. Periods")
plt.plot(training_errors, label="training")
plt.plot(validation_errors, label="validation")
plt.legend()
plt.show()
# Output a plot of the confusion matrix.
cm = metrics.confusion_matrix(validation_targets, final_predictions)
# Normalize the confusion matrix by row (i.e by the number of samples
# in each class).
cm_normalized = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
ax = sns.heatmap(cm_normalized, cmap="bone_r")
ax.set_aspect(1)
plt.title("Confusion matrix")
plt.ylabel("True label")
plt.xlabel("Predicted label")
plt.show()
return classifier
###Output
_____no_output_____
###Markdown
**Spend 5 minutes seeing how well you can do on accuracy with a linear model of this form. For this exercise, limit yourself to experimenting with just three hyperparameters: batch size, learning rate, and number of steps.** Stop if you get an accuracy of roughly 0.9 or better from any of these trials.
###Code
classifier = train_linear_classification_model(
learning_rate=0.02,
steps=100,
batch_size=10,
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Solution Click below to see one possible solution. Here is a set of parameters that should give an accuracy of roughly 0.9.
###Code
_ = train_linear_classification_model(
learning_rate=0.03,
steps=1000,
batch_size=30,
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Task 2: Replace the Linear Classifier with a Neural Network **Replace the LinearClassifier above with a [`DNNClassifier`](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNClassifier) and find a parameter combination that gives 0.95 or better accuracy.** You may wish to experiment with additional regularization methods such as dropout. These additional regularization methods are documented in the comments for the `DNNClassifier` class.
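As a starting point, here is a hedged sketch (not the official solution, which appears further down) of how the estimator construction might change; the `hidden_units` and `dropout` values are arbitrary starting points to tune, and the rest of the training loop can stay as in Task 1:

```python
# Sketch only: reuses construct_feature_columns() defined earlier in this notebook.
my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.05)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
classifier = tf.estimator.DNNClassifier(
    feature_columns=construct_feature_columns(),
    n_classes=10,
    hidden_units=[100, 100],   # depth/width are values to tune
    dropout=0.1,               # optional extra regularization
    optimizer=my_optimizer,
    config=tf.estimator.RunConfig(keep_checkpoint_max=1),
)
```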
###Code
#
# YOUR CODE HERE: Replace the linear classifier with a neural network.
#
###Output
_____no_output_____
###Markdown
Once you have a good model, double-check that you haven't overfit the validation set by evaluating on the test data that we'll load below.
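Once the test data in the next cell has been loaded, one way to fill in the "calculate accuracy" step is the following sketch (assuming the trained estimator is still bound to the name `classifier`; the full solution appears later in the notebook):

```python
# Sketch: score the trained estimator on the held-out test set.
predict_test_input_fn = create_predict_input_fn(
    test_examples, test_targets, batch_size=100)

test_predictions = classifier.predict(input_fn=predict_test_input_fn)
test_pred_class_id = np.array([item['class_ids'][0] for item in test_predictions])

print("Accuracy on test data: %0.2f"
      % metrics.accuracy_score(test_targets, test_pred_class_id))
```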
###Code
mnist_test_dataframe = pd.read_csv(
"https://download.mlcc.google.cn/mledu-datasets/mnist_test.csv",
sep=",",
header=None)
test_targets, test_examples = parse_labels_and_features(mnist_test_dataframe)
test_examples.describe()
#
# YOUR CODE HERE: Calculate accuracy on the test set.
#
###Output
_____no_output_____
###Markdown
Solution Click below to see a possible solution. Apart from the neural-network-specific configuration, such as the hyperparameter for hidden units, the code below is almost identical to the original `LinearClassifier` training code.
###Code
def train_nn_classification_model(
learning_rate,
steps,
batch_size,
hidden_units,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a neural network classification model for the MNIST digits dataset.
In addition to training, this function also prints training progress information,
a plot of the training and validation loss over time, as well as a confusion
matrix.
Args:
learning_rate: A `float`, the learning rate to use.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
hidden_units: A `list` of int values, specifying the number of neurons in each layer.
training_examples: A `DataFrame` containing the training features.
training_targets: A `DataFrame` containing the training labels.
validation_examples: A `DataFrame` containing the validation features.
validation_targets: A `DataFrame` containing the validation labels.
Returns:
The trained `DNNClassifier` object.
"""
periods = 10
# Caution: input pipelines are reset with each call to train.
# If the number of steps is small, your model may never see most of the data.
# So with multiple `.train` calls like this you may want to control the length
# of training with num_epochs passed to the input_fn. Or, you can do a really-big shuffle,
# or since it's in-memory data, shuffle all the data in the `input_fn`.
steps_per_period = steps / periods
# Create the input functions.
predict_training_input_fn = create_predict_input_fn(
training_examples, training_targets, batch_size)
predict_validation_input_fn = create_predict_input_fn(
validation_examples, validation_targets, batch_size)
training_input_fn = create_training_input_fn(
training_examples, training_targets, batch_size)
# Create feature columns.
feature_columns = [tf.feature_column.numeric_column('pixels', shape=784)]
# Create a DNNClassifier object.
my_optimizer = tf.train.AdagradOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
classifier = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
n_classes=10,
hidden_units=hidden_units,
optimizer=my_optimizer,
config=tf.estimator.RunConfig(keep_checkpoint_max=1)
)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("LogLoss error (on validation data):")
training_errors = []
validation_errors = []
for period in range (0, periods):
# Train the model, starting from the prior state.
classifier.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute probabilities.
training_predictions = list(classifier.predict(input_fn=predict_training_input_fn))
training_probabilities = np.array([item['probabilities'] for item in training_predictions])
training_pred_class_id = np.array([item['class_ids'][0] for item in training_predictions])
training_pred_one_hot = tf.keras.utils.to_categorical(training_pred_class_id,10)
validation_predictions = list(classifier.predict(input_fn=predict_validation_input_fn))
validation_probabilities = np.array([item['probabilities'] for item in validation_predictions])
validation_pred_class_id = np.array([item['class_ids'][0] for item in validation_predictions])
validation_pred_one_hot = tf.keras.utils.to_categorical(validation_pred_class_id,10)
# Compute training and validation errors.
training_log_loss = metrics.log_loss(training_targets, training_pred_one_hot)
validation_log_loss = metrics.log_loss(validation_targets, validation_pred_one_hot)
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, validation_log_loss))
# Add the loss metrics from this period to our list.
training_errors.append(training_log_loss)
validation_errors.append(validation_log_loss)
print("Model training finished.")
# Remove event files to save disk space.
_ = list(map(os.remove, glob.glob(os.path.join(classifier.model_dir, 'events.out.tfevents*'))))  # list() forces the lazy map to run in Python 3
# Calculate final predictions (not probabilities, as above).
final_predictions = classifier.predict(input_fn=predict_validation_input_fn)
final_predictions = np.array([item['class_ids'][0] for item in final_predictions])
accuracy = metrics.accuracy_score(validation_targets, final_predictions)
print("Final accuracy (on validation data): %0.2f" % accuracy)
# Output a graph of loss metrics over periods.
plt.ylabel("LogLoss")
plt.xlabel("Periods")
plt.title("LogLoss vs. Periods")
plt.plot(training_errors, label="training")
plt.plot(validation_errors, label="validation")
plt.legend()
plt.show()
# Output a plot of the confusion matrix.
cm = metrics.confusion_matrix(validation_targets, final_predictions)
# Normalize the confusion matrix by row (i.e by the number of samples
# in each class).
cm_normalized = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis]
ax = sns.heatmap(cm_normalized, cmap="bone_r")
ax.set_aspect(1)
plt.title("Confusion matrix")
plt.ylabel("True label")
plt.xlabel("Predicted label")
plt.show()
return classifier
classifier = train_nn_classification_model(
learning_rate=0.05,
steps=1000,
batch_size=30,
hidden_units=[100, 100],
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Next, let's verify the accuracy on the test set.
###Code
mnist_test_dataframe = pd.read_csv(
"https://download.mlcc.google.cn/mledu-datasets/mnist_test.csv",
sep=",",
header=None)
test_targets, test_examples = parse_labels_and_features(mnist_test_dataframe)
test_examples.describe()
predict_test_input_fn = create_predict_input_fn(
test_examples, test_targets, batch_size=100)
test_predictions = classifier.predict(input_fn=predict_test_input_fn)
test_predictions = np.array([item['class_ids'][0] for item in test_predictions])
accuracy = metrics.accuracy_score(test_targets, test_predictions)
print("Accuracy on test data: %0.2f" % accuracy)
###Output
_____no_output_____
###Markdown
Task 3: Visualize the weights of the first hidden layer. Let's take a few minutes to dig into our neural network and see what it has learned by inspecting the model's `weights_` attribute. The input layer of the model has `784` weights, corresponding to the `28×28` pixel input images. The first hidden layer will have `784×N` weights, where `N` is the number of nodes in that layer. We can turn those weights back into `28×28` images by *reshaping* the `N` `1×784` arrays of weights into `N` arrays of size `28×28`. Run the following cell to plot the weights. Note that this cell requires that the `DNNClassifier` named "classifier" has already been trained.
###Code
print(classifier.get_variable_names())
weights0 = classifier.get_variable_value("dnn/hiddenlayer_0/kernel")
print("weights0 shape:", weights0.shape)
num_nodes = weights0.shape[1]
num_rows = int(math.ceil(num_nodes / 10.0))
fig, axes = plt.subplots(num_rows, 10, figsize=(20, 2 * num_rows))
for coef, ax in zip(weights0.T, axes.ravel()):
# Weights in coef is reshaped from 1x784 to 28x28.
ax.matshow(coef.reshape(28, 28), cmap=plt.cm.pink)
ax.set_xticks(())
ax.set_yticks(())
plt.show()
###Output
_____no_output_____ |
DATA_ANALYST/ML-2/m2_part2_normalization.ipynb | ###Markdown
Normalizing data and removing data
###Code
import pandas as pd
import numpy as np
test_data = pd.DataFrame([[1, 2, np.nan], [3, np.nan, 417],
[0, 10, -212]], columns=['one', 'two', 'three'])
test_data
###Output
_____no_output_____
###Markdown
Normalization: theory Some algorithms are sensitive to the scale of the variables: keeping features on a common scale helps the algorithm (for example, gradient descent) converge better. To achieve this we normalize the data, i.e. bring the variables to a single scale. In addition, if there are several datasets of the same nature but of different size, they need to be normalized so that the influence of other features can be compared. Even though some algorithms work independently of feature scale, normalization usually does not hurt. When we talk about normalization, we are talking about numbers. We will look at how the normalization methods from the `sklearn` library work. We will feed in a `pandas.DataFrame` and get back an `np.ndarray`; the information about the structure of the `pandas` table is lost. minmax normalization One of the standard ways to normalize is `minmax` normalization. This kind of normalization independently maps every feature to a value between 0 and 1. How does it work? For each feature the algorithm finds the minimum ($x_{min}$) and maximum ($x_{max}$) value, after which the feature `x` is transformed into $$x := \frac{x - x_{min}}{x_{max} - x_{min}}$$
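The same transformation can be written by hand with pandas, which is a quick way to check what `MinMaxScaler` is doing (a sketch on the `test_data` frame defined above; the NaN filling mirrors what the next cell does before applying the scaler):

```python
# Manual min-max scaling, column by column; on the NaN-filled frame this
# matches what MinMaxScaler produces in the next cell.
filled = test_data.fillna(0)
manual_minmax = (filled - filled.min()) / (filled.max() - filled.min())
print(manual_minmax)
```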
###Code
test_data = test_data.fillna(0)
test_data
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit_transform(test_data)
###Output
_____no_output_____
###Markdown
std normalization (standard normalization) `std` normalization (also called `standard normalization` or `zero mean, unit variance`) is another kind of feature normalization. How does it work? For each feature the algorithm independently finds the mean value ($x_{mean}$) and the standard deviation ($x_{std}$), after which the feature `x` is transformed into $$x := \frac{x - x_{mean}}{x_{std}}$$
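Again, the same thing by hand, to see the formula directly (note that `StandardScaler` uses the population standard deviation, so `ddof=0` is needed to match it exactly):

```python
# Manual standardization, column by column (matches StandardScaler with ddof=0).
manual_std = (test_data - test_data.mean()) / test_data.std(ddof=0)
print(manual_std)
```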
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit_transform(test_data)
###Output
_____no_output_____
###Markdown
`MinMaxScaler` and `StandardScaler` store the parameters they use for normalization. This means that after normalizing the features of the training set you need to apply the same normalization to the validation and test data; we will talk about validation later. A short sketch of that workflow is shown below, before the `.drop` examples. Removing unneeded rows and columns Sometimes the data contains features (columns) that carry no useful information or were read in by mistake. They can be removed with the `.drop(column_names, axis=1)` method, passing either the name of a single feature (column) or a list of feature (column) names:
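A minimal sketch of the train/test scaling workflow (with hypothetical `train_df` and `test_df` frames): fit the scaler on the training data only, then reuse the stored parameters for everything else. The next cell goes back to the `.drop` examples introduced above.

```python
from sklearn.preprocessing import StandardScaler
import pandas as pd

# Hypothetical train/test frames, only to illustrate the pattern.
train_df = pd.DataFrame({"one": [1.0, 2.0, 3.0], "two": [10.0, 20.0, 30.0]})
test_df = pd.DataFrame({"one": [1.5, 2.5], "two": [15.0, 25.0]})

scaler = StandardScaler()
train_scaled = scaler.fit_transform(train_df)   # learns mean/std from train only
test_scaled = scaler.transform(test_df)         # reuses the same mean/std
```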
###Code
test_data
test_data.drop('one', axis=1)
test_data.drop(['one', 'three'], axis=1)
###Output
_____no_output_____
###Markdown
If you pass `0` to the `axis` parameter of the same method, it will remove the rows whose indices you pass (either a single index or a list of indices):
###Code
test_data.drop(0, axis=0)
test_data.drop([0, 2], axis=0)
###Output
_____no_output_____ |
analysis/rpq_analysis.ipynb | ###Markdown
Experiment report Input data: the `LUBM300`, `LUBM500`, `LUBM1.5M`, `LUBM1M` datasets; the graphs were taken unchanged, and the regular expressions were preprocessed by Vadim Abzalov et al. so that they can be conveniently parsed by the `pyformlang` library. Timings were taken on a machine running Ubuntu 18.04 with an Intel Core i3-6006U 2.00 GHz (3072 KB cache) and 8 GB DDR3 RAM. For every measurement in nanoseconds the algorithm was run 5 times, and the summary table records the mean of all attempts, rounded to milliseconds. For readability the plots report time in seconds. Caching was disabled. The control figures are the `reachable_pairs` column of the summary dataframe: these are the numbers of reachable vertex pairs in the graph obtained by intersecting the input graph with the regular expression. The control figures match both when the transitive closure of the intersection is built by repeated squaring and when it is built by multiplication with the adjacency matrix. In the results the regular expressions are regrouped into 16 sets of similar structure to make the plots easier to read. The summary table obtained after running the benchmarks on the five datasets:
###Code
import pandas as pd
import seaborn as sns
import numpy as np
import re
sns.set(rc={'figure.figsize':(13,9)})
def group_regexes(regex_name):
group_name = re.match('^q(_([0-9]*)|([0-9]*))_', regex_name).group().replace('_', '')
if len(group_name) < 3:
group_name = group_name.replace('q', 'q0')
return group_name
df = pd.read_csv('benchmark_lubm.csv')
df['regex'] = df['regex'].apply(group_regexes)
df['closure_time_s'] = df['closure_time_ms'] / 1e3
df['intersection_time_s'] = df['intersection_time_ms'] / 1e3
df['inference_time_s'] = df['inference_time_ms'] / 1e3
df = df.drop(['inference_time_ms', 'closure_time_ms', 'intersection_time_ms'], axis=1)
df.head()
###Output
_____no_output_____
###Markdown
In all runs the time to output the pairs was below one millisecond:
###Code
df['inference_time_s'].value_counts()
###Output
_____no_output_____
###Markdown
To present the data and compare the running time of the two closure-construction algorithms (repeated squaring versus multiplication by the adjacency matrix) we use a convenient kind of plot, the boxplot. It clearly shows the median (the line inside the "box"), the 25% and 75% quartiles (the box itself), the 2% and 98% percentiles (the "whiskers"), and possible outliers (points lying beyond the whiskers).
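The numbers behind each box can also be pulled out of the dataframe directly, for example for one dataset (a small sketch assuming the `df` built above):

```python
# Quantiles behind the boxplot for a single graph (illustrative sketch).
subset = df[df.graph == 'LUBM300']['closure_time_s']
print(subset.quantile([0.02, 0.25, 0.50, 0.75, 0.98]))
```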
###Code
def get_boxplots_for_algo(graph_name, df):
order = np.sort(df['regex'].unique())
bp = sns.boxplot(x='regex', y='closure_time_s', order=order, hue='algo', data=df[df.graph == graph_name])
bp.set_title(graph_name)
bp.set(xlabel='Query', ylabel='Building closure time in seconds')
return bp
###Output
_____no_output_____
###Markdown
LUBM300
###Code
get_boxplots_for_algo('LUBM300', df)
###Output
_____no_output_____
###Markdown
LUBM500
###Code
get_boxplots_for_algo('LUBM500', df)
###Output
_____no_output_____
###Markdown
LUBM1M
###Code
get_boxplots_for_algo('LUBM1M', df)
###Output
_____no_output_____
###Markdown
LUBM1.5M
###Code
get_boxplots_for_algo('LUBM1.5M', df)
###Output
_____no_output_____
###Markdown
LUBM1.9M
###Code
get_boxplots_for_algo('LUBM1.9M', df)
###Output
_____no_output_____
###Markdown
The plot below clearly shows the difference in the time needed to compute the tensor (Kronecker) product for datasets with graphs of different sizes.
###Code
def get_stripplot_for_algo(df):
order = np.sort(df['regex'].unique())
sp = sns.stripplot(x='regex', y='intersection_time_s', order=order, hue='graph', data=df)
sp.set_title('Kronecker product')
sp.set(xlabel='Query', ylabel='Building intersection time in seconds')
sns.set_palette("Paired")
get_stripplot_for_algo(df)
###Output
_____no_output_____ |
docs/notebooks/intro-to-pymc3.ipynb | ###Markdown
A quick intro to PyMC3 for exoplaneteers
###Code
%run notebook_setup
###Output
_____no_output_____
###Markdown
[Hamiltonian Monte Carlo (HMC)](https://en.wikipedia.org/wiki/Hamiltonian_Monte_Carlo) methods haven't been widely used in astrophysics, but they are the standard methods for probabilistic inference using Markov chain Monte Carlo (MCMC) in many other fields.*exoplanet* is designed to provide the building blocks for fitting many exoplanet datasets using this technology, and this tutorial presents some of the basic features of the [PyMC3](https://docs.pymc.io/) modeling language and inference engine.The [documentation for PyMC3](https://docs.pymc.io/) includes many other tutorials that you should check out to get more familiar with the features that are available.In this tutorial, we will go through two simple examples of fitting some data using PyMC3.The first is the classic fitting a line to data with unknown error bars, and the second is a more relevant example where we fit a radial velocity model to the public radial velocity observations of [51 Peg](https://en.wikipedia.org/wiki/51_Pegasi).You can read more about fitting lines to data [in the bible of line fitting](https://arxiv.org/abs/1008.4686) and you can see another example of fitting the 51 Peg data using HMC (this time using [Stan](http://mc-stan.org)) [here](https://dfm.io/posts/stan-c++/). Hello world (AKA fitting a line to data)My standard intro to a new modeling language or inference framework is to fit a line to data.So. Let's do that with PyMC3.To start, we'll generate some fake data using a linear model.Feel free to change the random number seed to try out a different dataset.
###Code
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)
true_m = 0.5
true_b = -1.3
true_logs = np.log(0.3)
x = np.sort(np.random.uniform(0, 5, 50))
y = true_b + true_m * x + np.exp(true_logs) * np.random.randn(len(x))
plt.plot(x, y, ".k")
plt.ylim(-2, 2)
plt.xlabel("x")
plt.ylabel("y");
###Output
_____no_output_____
###Markdown
To fit a model to these data, our model will have 3 parameters: the slope $m$, the intercept $b$, and the log of the uncertainty $\log(\sigma)$.To start, let's choose broad uniform priors on these parameters:$$\begin{eqnarray}p(m) &=& \left\{\begin{array}{ll}1/10 & \mathrm{if}\,-5 < m < 5 \\0 & \mathrm{otherwise} \\\end{array}\right. \\p(b) &=& \left\{\begin{array}{ll}1/10 & \mathrm{if}\,-5 < b < 5 \\0 & \mathrm{otherwise} \\\end{array}\right. \\p(\log(\sigma)) &=& \left\{\begin{array}{ll}1/10 & \mathrm{if}\,-5 < b < 5 \\0 & \mathrm{otherwise} \\\end{array}\right.\end{eqnarray}$$Then, the log-likelihood function will be$$\log p(\{y_n\}\,|\,m,\,b,\,\log(\sigma)) = -\frac{1}{2}\sum_{n=1}^N \left[\frac{(y_n - m\,x_n - b)^2}{\sigma^2} + \log(2\,\pi\,\sigma^2)\right]$$[**Note:** the second normalization term is needed in this model because we are fitting for $\sigma$ and the second term is *not* a constant.]Another way of writing this model that might not be familiar is the following:$$\begin{eqnarray}m &\sim& \mathrm{Uniform}(-5,\,5) \\b &\sim& \mathrm{Uniform}(-5,\,5) \\\log(\sigma) &\sim& \mathrm{Uniform}(-5,\,5) \\y_n &\sim& \mathrm{Normal}(m\,x_n+b,\,\sigma)\end{eqnarray}$$This is the way that a model like this is often defined in statistics and it will be useful when we implement out model in PyMC3 so take a moment to make sure that you understand the notation.Now, let's implement this model in PyMC3.The documentation for the distributions available in PyMC3's modeling language can be [found here](https://docs.pymc.io/api/distributions/continuous.html) and these will come in handy as you go on to write your own models.
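Before the PyMC3 implementation, it can be useful to see that the log-likelihood above is just a few lines of NumPy (a sketch using the simulated `x`, `y` from above; it is not needed for the fit, it only makes the normalization term concrete):

```python
def log_likelihood(m, b, logs, x, y):
    """Direct NumPy translation of the Gaussian log-likelihood written above."""
    sigma2 = np.exp(2 * logs)
    resid = y - (m * x + b)
    return -0.5 * np.sum(resid**2 / sigma2 + np.log(2 * np.pi * sigma2))

print(log_likelihood(true_m, true_b, true_logs, x, y))
```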
###Code
import pymc3 as pm
with pm.Model() as model:
# Define the priors on each parameter:
m = pm.Uniform("m", lower=-5, upper=5)
b = pm.Uniform("b", lower=-5, upper=5)
logs = pm.Uniform("logs", lower=-5, upper=5)
# Define the likelihood. A few comments:
# 1. For mathematical operations like "exp", you can't use
# numpy. Instead, use the mathematical operations defined
# in "pm.math".
# 2. To condition on data, you use the "observed" keyword
# argument to any distribution. In this case, we want to
# use the "Normal" distribution (look up the docs for
# this).
pm.Normal("obs", mu=m*x+b, sd=pm.math.exp(logs), observed=y)
# This is how you will sample the model. Take a look at the
# docs to see that other parameters that are available.
trace = pm.sample(draws=1000, tune=1000, chains=2)
###Output
_____no_output_____
###Markdown
Now since we now have samples, let's make some diagnostic plots.The first plot to look at is the "traceplot" implemented in PyMC3.In this plot, you'll see the marginalized distribution for each parameter on the left and the trace plot (parameter value as a function of step number) on the right.In each panel, you should see two lines with different colors.These are the results of different independent chains and if the results are substantially different in the different chains then there is probably something going wrong.
###Code
pm.traceplot(trace, varnames=["m", "b", "logs"]);
###Output
_____no_output_____
###Markdown
It's also good to quantify that "looking substantially different" argument.This is implemented in PyMC3 as the "summary" function.In this table, some of the key columns to look at are `n_eff` and `Rhat`.* `n_eff` shows an estimate of the number of effective (or independent) samples for that parameter. In this case, `n_eff` should probably be around 500 per chain (there should have been 2 chains run).* `Rhat` shows the [Gelman–Rubin statistic](https://docs.pymc.io/api/diagnostics.htmlpymc3.diagnostics.gelman_rubin) and it should be close to 1.
###Code
pm.summary(trace, varnames=["m", "b", "logs"])
###Output
_____no_output_____
###Markdown
The last diagnostic plot that we'll make here is the [corner plot made using corner.py](https://corner.readthedocs.io).The easiest way to do this using PyMC3 is to first convert the trace to a [Pandas DataFrame](https://pandas.pydata.org/) and then pass that to `corner.py`.
###Code
import corner # https://corner.readthedocs.io
samples = pm.trace_to_dataframe(trace, varnames=["m", "b", "logs"])
corner.corner(samples, truths=[true_m, true_b, true_logs]);
###Output
_____no_output_____
###Markdown
**Extra credit:** Here are a few suggestions for things to try out while getting more familiar with PyMC3:1. Try initializing the parameters using the `testval` argument to the distributions. Does this improve performance in this case? It will substantially improve performance in more complicated examples.2. Try changing the priors on the parameters. For example, try the "uninformative" prior [recommended by Jake VanderPlas on his blog](http://jakevdp.github.io/blog/2014/06/14/frequentism-and-bayesianism-4-bayesian-in-python/Prior-on-Slope-and-Intercept).3. What happens as you substantially increase or decrease the simulated noise? Does the performance change significantly? Why? A more realistic example: radial velocity exoplanetsWhile the above example was cute, it doesn't really fully exploit the power of PyMC3 and it doesn't really show some of the real issues that you will face when you use PyMC3 as an astronomer.To get a better sense of how you might use PyMC3 in Real Life™, let's take a look at a more realistic example: fitting a Keplerian orbit to radial velocity observations.One of the key aspects of this problem that I want to highlight is the fact that PyMC3 (and the underlying model building framework [Theano](http://deeplearning.net/software/theano/)) don't have out-of-the-box support for the root-finding that is required to solve Kepler's equation.As part of the process of computing a Keplerian RV model, we must solve the equation:$$M = E - e\,\sin E$$for the eccentric anomaly $E$ given some mean anomaly $M$ and eccentricity $e$.There are commonly accepted methods of solving this equation using [Newton's method](https://en.wikipedia.org/wiki/Newton%27s_method), but if we want to expose that to PyMC3, we have to define a [custom Theano operation](http://deeplearning.net/software/theano/extending/extending_theano.html) with a custom gradient.I won't go into the details of the math (because [I blogged about it](https://dfm.io/posts/stan-c++/)) and I won't go into the details of the implementation (because [you can take a look at it on GitHub](https://github.com/dfm/exoplanet/tree/master/exoplanet/theano_ops/kepler)).So, for this tutorial, we'll use the custom Kepler solver that is implemented as part of *exoplanet* and fit the publicly available radial velocity observations of the famous exoplanetary system 51 Peg using PyMC3.First, we need to download the data from the exoplanet archive:
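To make the root-finding step concrete, here is a tiny stand-alone Newton iteration for Kepler's equation in NumPy (a sketch for illustration only; the notebook itself relies on the compiled solver shipped with *exoplanet*, which also provides the gradients that PyMC3 needs). With that intuition in hand, the next cell downloads the 51 Peg data introduced above.

```python
import numpy as np

def solve_kepler(M, e, tol=1e-10, max_iter=50):
    """Solve M = E - e*sin(E) for E with Newton's method (illustrative sketch)."""
    E = M + e * np.sin(M)  # reasonable starting guess
    for _ in range(max_iter):
        delta = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= delta
        if np.all(np.abs(delta) < tol):
            break
    return E

E = solve_kepler(np.linspace(0, 2 * np.pi, 5), 0.3)
print(E - 0.3 * np.sin(E))  # should reproduce the input mean anomalies
```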
###Code
import requests
import pandas as pd
import matplotlib.pyplot as plt
# Download the dataset from the Exoplanet Archive:
url = "https://exoplanetarchive.ipac.caltech.edu/data/ExoData/0113/0113357/data/UID_0113357_RVC_001.tbl"
r = requests.get(url)
if r.status_code != requests.codes.ok:
r.raise_for_status()
data = np.array([l.split() for l in r.text.splitlines()
if not l.startswith("\\") and not l.startswith("|")],
dtype=float)
t, rv, rv_err = data.T
t -= np.mean(t)
# Plot the observations "folded" on the published period:
# Butler et al. (2006) https://arxiv.org/abs/astro-ph/0607493
lit_period = 4.230785
plt.errorbar((t % lit_period)/lit_period, rv, yerr=rv_err, fmt=".k", capsize=0)
plt.xlim(0, 1)
plt.ylim(-110, 110)
plt.annotate("period = {0:.6f} days".format(lit_period),
xy=(1, 0), xycoords="axes fraction",
xytext=(-5, 5), textcoords="offset points",
ha="right", va="bottom", fontsize=12)
plt.ylabel("radial velocity [m/s]")
plt.xlabel("phase");
###Output
_____no_output_____
###Markdown
Now, here's the implementation of a radial velocity model in PyMC3.Some of this will look familiar after the Hello World example, but things are a bit more complicated now.Take a minute to take a look through this and see if you can follow it.There's a lot going on, so I want to point out a few things to pay attention to:1. All of the mathematical operations (for example `exp` and `sqrt`) are being performed using Theano instead of NumPy.2. All of the parameters have initial guesses provided. This is an example where this makes a big difference because some of the parameters (like period) are very tightly constrained.3. Some of the lines are wrapped in `Deterministic` distributions. This can be useful because it allows us to track values as the chain progresses even if they're not parameters. For example, after sampling, we will have a sample for `bkg` (the background RV trend) for each step in the chain. This can be especially useful for making plots of the results.4. Similarly, at the end of the model definition, we compute the RV curve for a single orbit on a fine grid. This can be very useful for diagnosing fits gone wrong.5. For parameters that specify angles (like $\omega$, called `w` in the model below), it can be inefficient to sample in the angle directly because of the fact that the value wraps around at $2\pi$. Instead, it can be better to sample the unit vector specified by the angle. In practice, this can be achieved by sampling a 2-vector from an isotropic Gaussian and normalizing the components by the norm. This is implemented as part of *exoplanet* in the :class:`exoplanet.distributions.Angle` class.
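Point 5 is easier to see outside of PyMC3: instead of sampling the angle directly, sample an unconstrained 2-vector and read the angle off of it, which removes the wrap-around at $2\pi$. A small NumPy illustration of the idea behind :class:`exoplanet.distributions.Angle` (a sketch, not the actual implementation):

```python
# Sample 2-vectors from an isotropic Gaussian and convert them to angles.
xy = np.random.randn(1000, 2)
unit = xy / np.linalg.norm(xy, axis=1, keepdims=True)  # points on the unit circle
angles = np.arctan2(unit[:, 1], unit[:, 0])            # uniform on (-pi, pi]
```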
###Code
import theano.tensor as tt
from exoplanet.orbits import get_true_anomaly
from exoplanet.distributions import Angle
with pm.Model() as model:
# Parameters
logK = pm.Uniform("logK", lower=0, upper=np.log(200),
testval=np.log(0.5*(np.max(rv) - np.min(rv))))
logP = pm.Uniform("logP", lower=0, upper=np.log(10),
testval=np.log(lit_period))
phi = pm.Uniform("phi", lower=0, upper=2*np.pi, testval=0.1)
e = pm.Uniform("e", lower=0, upper=1, testval=0.1)
w = Angle("w")
logjitter = pm.Uniform("logjitter", lower=-10, upper=5,
testval=np.log(np.mean(rv_err)))
rv0 = pm.Normal("rv0", mu=0.0, sd=10.0, testval=0.0)
rvtrend = pm.Normal("rvtrend", mu=0.0, sd=10.0, testval=0.0)
# Deterministic transformations
n = 2*np.pi*tt.exp(-logP)
P = pm.Deterministic("P", tt.exp(logP))
K = pm.Deterministic("K", tt.exp(logK))
cosw = tt.cos(w)
sinw = tt.sin(w)
s2 = tt.exp(2*logjitter)
t0 = (phi + w) / n
# The RV model
bkg = pm.Deterministic("bkg", rv0 + rvtrend * t / 365.25)
M = n * t - (phi + w)
# This is the line that uses the custom Kepler solver
f = get_true_anomaly(M, e + tt.zeros_like(M))
rvmodel = pm.Deterministic(
"rvmodel", bkg + K * (cosw*(tt.cos(f) + e) - sinw*tt.sin(f)))
# Condition on the observations
err = tt.sqrt(rv_err**2 + tt.exp(2*logjitter))
pm.Normal("obs", mu=rvmodel, sd=err, observed=rv)
# Compute the phased RV signal
phase = np.linspace(0, 1, 500)
M_pred = 2*np.pi * phase - (phi + w)
f_pred = get_true_anomaly(M_pred, e + tt.zeros_like(M_pred))
rvphase = pm.Deterministic(
"rvphase", K * (cosw*(tt.cos(f_pred) + e) - sinw*tt.sin(f_pred)))
###Output
_____no_output_____
###Markdown
In this case, I've found that it is useful to first optimize the parameters to find the "maximum a posteriori" (MAP) parameters and then start the sampler from there.This is useful here because MCMC is not designed to *find* the maximum of the posterior; it's just meant to sample the shape of the posterior.The performance of all MCMC methods can be really bad when the initialization isn't good (especially when some parameters are very well constrained).To find the maximum a posteriori parameters using PyMC3, you can use the :func:`exoplanet.optimize` function:
###Code
from exoplanet import optimize
with model:
map_params = optimize()
###Output
_____no_output_____
###Markdown
Let's make a plot to check that this initialization looks reasonable.In the top plot, we're looking at the RV observations as a function of time with the initial guess for the long-term trend overplotted in blue.In the lower panel, we plot the "folded" curve where we have wrapped the observations onto the best-fit period and the prediction for a single overplotted in orange. If this doesn't look good, try adjusting the initial guesses for the parameters and see if you can get a better fit.**Exercise:** Try changing the initial guesses for the parameters (as specified by the `testval` argument) and see how sensitive the results are to these values. Are there some parameters that are less important? Why is this?
###Code
fig, axes = plt.subplots(2, 1, figsize=(8, 8))
period = map_params["P"]
ax = axes[0]
ax.errorbar(t, rv, yerr=rv_err, fmt=".k")
ax.plot(t, map_params["bkg"], color="C0", lw=1)
ax.set_ylim(-110, 110)
ax.set_ylabel("radial velocity [m/s]")
ax.set_xlabel("time [days]")
ax = axes[1]
ax.errorbar(t % period, rv - map_params["bkg"], yerr=rv_err, fmt=".k")
ax.plot(phase * period, map_params["rvphase"], color="C1", lw=1)
ax.set_ylim(-110, 110)
ax.set_ylabel("radial velocity [m/s]")
ax.set_xlabel("phase [days]")
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Now let's sample the posterior starting from our MAP estimate.
###Code
with model:
trace = pm.sample(draws=2000, tune=1000, start=map_params, chains=2)
###Output
_____no_output_____
###Markdown
As above, it's always a good idea to take a look at the summary statistics for the chain.If everything went as planned, there should be more than 1000 effective samples per chain and the Rhat values should be close to 1.(Not too bad for less than 30 seconds of run time!)
###Code
pm.summary(trace, varnames=["logK", "logP", "phi", "e", "w", "logjitter", "rv0", "rvtrend"])
###Output
_____no_output_____
###Markdown
Similarly, we can make the corner plot again for this model.
###Code
samples = pm.trace_to_dataframe(trace, varnames=["K", "P", "e", "w"])
corner.corner(samples);
###Output
_____no_output_____
###Markdown
Finally, the last plot that we'll make here is of the posterior predictive density.In this case, this means that we want to look at the distribution of predicted models that are consistent with the data.As above, the top plot shows the raw observations as black error bars and the RV trend model is overplotted in blue.But, this time, the blue line is actually composed of 25 lines that are samples from the posterior over trends that are consistent with the data.In the bottom panel, the orange lines indicate the same 25 posterior samples for the RV curve of one orbit.
###Code
fig, axes = plt.subplots(2, 1, figsize=(8, 8))
period = map_params["P"]
ax = axes[0]
ax.errorbar(t, rv, yerr=rv_err, fmt=".k")
ax.set_ylabel("radial velocity [m/s]")
ax.set_xlabel("time [days]")
ax = axes[1]
ax.errorbar(t % period, rv - map_params["bkg"], yerr=rv_err, fmt=".k")
ax.set_ylabel("radial velocity [m/s]")
ax.set_xlabel("phase [days]")
for i in np.random.randint(len(trace) * trace.nchains, size=25):
axes[0].plot(t, trace["bkg"][i], color="C0", lw=1, alpha=0.3)
axes[1].plot(phase * period, trace["rvphase"][i], color="C1", lw=1, alpha=0.3)
axes[0].set_ylim(-110, 110)
axes[1].set_ylim(-110, 110)
plt.tight_layout()
###Output
_____no_output_____ |
Advanced Physics Lab I/Dynamics of Rotational Motion/drm.ipynb | ###Markdown
Part 1
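The cells in this lab assume a standard set of imports plus two column-label strings, `xaxis` and `yaxis`, that do not appear in this dump. A minimal hedged setup sketch is given below; the label strings are assumptions inferred from how they are used in the later tables, not the original definitions.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from IPython.display import display, Latex
from uncertainties import ufloat

# Assumed column labels, matching the explicit ones used in the m60/m90 tables below.
xaxis = 'Height [$cm$]'
yaxis = r'$\overline{t} [s]$'
```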
###Code
m30d = {xaxis : [i for i in [80] * 4] + [i for i in [70] * 4] + [i for i in [60] * 4] + [i for i in [50] * 4] + [i for i in [40] * 4] + [i for i in [30] * 4],
yaxis : [0.1303, 0.1323, 0.1320, 0.1360,
0.1380, 0.1403, 0.1400, 0.1444,
0.1512, 0.1527, 0.1538, 0.1595,
0.1613, 0.1644, 0.1646, 0.1703,
0.1783, 0.1819, 0.1824, 0.1894,
0.2091, 0.2127, 0.2159, 0.2267]
}
m30 = pd.DataFrame(m30d)
m30 = m30.groupby([xaxis], as_index = False).mean()
m30['T'] = m30[yaxis] * 8
m30["$\omega^2$"] = (4 * (np.pi ** 2))/ ((m30["T"]) ** 2)
m30
print(m30.to_latex(index=True, escape=False, caption="Table of $h$ vs $\omega^2$ for $m=30g$")) # Do you want to show index in table?
X = np.array(m30['$\omega^2$']).reshape(-1, 1)
Y = np.array(m30[xaxis]/100).reshape(-1, 1)
reg = LinearRegression().fit(X, Y)
Y_pred = reg.predict(X)
reg_value = reg.score(X, Y)
intercept = reg.intercept_
coef = reg.coef_
I = (30/1000) * (2 * 9.8 * coef - ((22.54/1000)**2))
plt.scatter(X, Y)
plt.plot(X, Y_pred, color='blue')
plt.xlabel("Height [$cm$]")
plt.ylabel("$\omega^2$")
plt.title("Graph of $h$ vs $\omega^2$")
plt.grid()
plt.show()
display(Latex("$R^2$ = {}".format(reg_value)))
display(Latex("$f(x) = {}x {}$".format(coef[0, 0], intercept[0])))
display(Latex("$I$ = {:10.4f} $kgm^2$".format(I[0, 0])))
m30.sem()
plt.scatter(X, Y)
plt.errorbar(X, Y, yerr=m30.sem(axis=0)[1]) #plots error bar in terms of std of t, i.e 0.012500
plt.grid() # That means this can be used for statistical error calculation.
m30.sem(axis=0)[1]
m60d = {'Height [$cm$]': [i for i in [80] * 4] + [i for i in [70] * 4] + [i for i in [60] * 4] + [i for i in [50] * 4] + [i for i in [40] * 4] + [i for i in [30] * 4],
'$\overline{t} [s]$': [0.0867, 0.0874, 0.0870, 0.0885,
0.0931, 0.0939, 0.0934, 0.0952,
0.1001, 0.1005, 0.1010, 0.1035,
0.1092, 0.1098, 0.1105, 0.1132,
0.1214, 0.1221, 0.1231, 0.1264,
0.1376, 0.1396, 0.1395, 0.1433,
]
}
m60 = pd.DataFrame(m60d)
m60 = m60.groupby(['Height [$cm$]'], as_index = False).mean()
m60['T [$s$]'] = m60['$\overline{t} [s]$'] * 8
m60["$\omega^2$"] = (4 * (np.pi ** 2))/ ((m60["T [$s$]"]) ** 2)
m60
X = np.array(m60['$\omega^2$']).reshape(-1, 1)
Y = np.array(m60["Height [$cm$]"]/100).reshape(-1, 1)
reg = LinearRegression().fit(X, Y)
Y_pred = reg.predict(X)
reg_value = reg.score(X, Y)
intercept = reg.intercept_
coef = reg.coef_
I = (60/1000) * (2 * 9.8 * coef - ((22.54/1000)**2))
plt.scatter(X, Y)
plt.plot(X, Y_pred, color='blue')
plt.ylabel("Height [$cm$]")
plt.xlabel("$\omega^2 [rads^{-1}]$")
plt.title("Graph of $h$ vs $\omega^2$")
plt.grid()
#plt.show()
display(Latex("$R^2$ = {}".format(reg_value)))
display(Latex("$f(x) = {}x {}$".format(coef[0, 0], intercept[0])))
display(Latex("$I$ = {:10.4f} $kgm^2$".format(I[0, 0])))
m90d = {'Height [$cm$]': [i for i in [80] * 4] + [i for i in [70] * 4] + [i for i in [60] * 4] + [i for i in [50] * 4] + [i for i in [40] * 4] + [i for i in [30] * 4],
'$\overline{t} [s]$': [0.0696, 0.0697, 0.0698, 0.0712,
0.0734, 0.074, 0.0735, 0.0749,
0.08, 0.0801, 0.0804, 0.0820,
0.087, 0.0874, 0.0873, 0.0895,
0.0953, 0.0963, 0.0957, 0.0981,
0.1101, 0.1114, 0.1110, 0.1139]
}
m90 = pd.DataFrame(m90d)
m90 = m90.groupby(['Height [$cm$]'], as_index = False).mean()
m90['T [$s$]'] = m90['$\overline{t} [s]$'] * 8
m90["$\omega^2$"] = (4 * (np.pi ** 2))/ ((m90["T [$s$]"]) ** 2)
m90
X = np.array(m90['$\omega^2$']).reshape(-1, 1)
Y = np.array(m90["Height [$cm$]"]/100).reshape(-1, 1)
reg = LinearRegression().fit(X, Y)
Y_pred = reg.predict(X)
reg_value = reg.score(X, Y)
intercept = reg.intercept_
coef = reg.coef_
I = (90 /1000) * (2 * 9.8 * coef - ((22.54/1000)**2))
plt.scatter(X, Y)
plt.plot(X, Y_pred, color='blue')
plt.xlabel("Height [$cm$]")
plt.ylabel("$\omega^2$")
plt.title("Graph of $h$ vs $\omega^2$")
plt.grid()
#plt.show()
display(Latex("$R^2$ = {}".format(reg_value)))
display(Latex("$f(x) = {}x {}$".format(coef[0, 0], intercept[0])))
display(Latex("$I$ = {:10.4f} $kgm^2$".format(I[0, 0])))
###Output
_____no_output_____
###Markdown
Part 4
###Code
pred = {'$T$': [i for i in [24.4-4.4] * 4] + [i for i in [18.7-1.7] * 4] + [i for i in [15.8-1.1] * 4] + [i for i in [25.3-2.9] * 4] + [i for i in [26.3-2.8] * 4] + [i for i in [25.6-4.1] * 4] + [i for i in [15.7-1.7] * 4] + [i for i in [17.9-4.6] * 4],
'$T_1$': [0.0175, 0.0176, 0.0178, 0.0178,
0.0214, 0.0217, 0.0216, 0.0216,
0.0255, 0.0257, 0.0262, 0.0265,
0.0153, 0.0155, 0.0155, 0.0153,
0.0144, 0.0147, 0.0145, 0.0143,
0.0168, 0.0165, 0.0167, 0.0167,
0.0270, 0.027, 0.0275, 0.0278,
0.0291, 0.0294, 0.0292, 0.0291],
'$T_2$': [0.0271, 0.0272, 0.0272, 0.0273,
0.0315, 0.032, 0.0314, 0.0317,
0.0368, 0.0366, 0.0372, 0.0369,
0.0247, 0.0245, 0.0249, 0.0246,
0.0234, 0.0233, 0.0239, 0.0237,
0.0262, 0.0256, 0.0256, 0.0259,
0.0384, 0.038, 0.038, 0.0381,
0.0407, 0.0409, 0.0406, 0.0404]}
p20 = pd.DataFrame(pred)
p20 = p20.groupby(['$T$'], as_index = False).mean()
p20['$T_p$'] = p20['$T$'] * 2
p20['$T_{r1}$'] = p20['$T_1$'] * 8
p20['$T_{r2}$'] = p20['$T_2$']
p20
X = np.array(p20['$T_p$']).reshape(-1, 1)
Y = np.array(1/p20["$T_{r1}$"]).reshape(-1, 1)
reg = LinearRegression().fit(X, Y)
Y_pred = reg.predict(X)
reg_value = reg.score(X, Y)
intercept = reg.intercept_
coef = reg.coef_
m = 20/1000
g = 9.8
r = 26/100
I = (m * g * r) / (4 * (np.pi ** 2) * coef)
plt.scatter(X, Y)
plt.plot(X, Y_pred, color='blue')
plt.title("Graph of $T_p$ vs 1/Tr")
plt.xlabel("$T_p$")
plt.grid()
#plt.show()
display(Latex("$R^2$ = {}".format(reg_value)))
display(Latex("$f(x) = {}x {}$".format(coef[0, 0], intercept[0])))
display(Latex("$I$ = {:10.4f}".format(I[0, 0])))
X = np.array(p20['$T_p$']).reshape(-1, 1)
Y = np.array(1/p20["$T_{r2}$"]).reshape(-1, 1)
reg = LinearRegression().fit(X, Y)
Y_pred = reg.predict(X)
reg_value = reg.score(X, Y)
intercept = reg.intercept_
coef = reg.coef_
m = 20/1000
g = 9.8
r = 26/100
I = (m * g * r) / (4 * (np.pi ** 2) * coef)
plt.scatter(X, Y)
plt.plot(X, Y_pred, color='blue')
plt.xlabel("$T_p$")
plt.title("Graph of $T_p$ and 1/Tr when measure from second light barrier")
plt.grid()
#plt.show()
display(Latex("$R^2$ = {}".format(reg_value)))
display(Latex("$f(x) = {}x {}$".format(coef[0, 0], intercept[0])))
display(Latex("$I$ = {:10.4f} $kgm^2$".format(I[0, 0])))
###Output
_____no_output_____
###Markdown
Part 5 Error Analysis
###Code
r = ufloat(14, 0)
m = ufloat(0.03, 0.01)
e = r / m
print(e)
pred40 = {'$T$': [i for i in [24.5-4.5] * 4] + [i for i in [18.8-1.8] * 4] + [i for i in [15.9-1.2] * 4] + [i for i in [25.4-3] * 4] + [i for i in [26.4-2.9] * 4] + [i for i in [25.7-4.3] * 4] + [i for i in [15.8-1.8] * 4] + [i for i in [18-4.7] * 4],
'$T_1$': [0.0175, 0.0176, 0.0178, 0.0178,
0.0214, 0.0217, 0.0216, 0.0216,
0.0255, 0.0258, 0.0262, 0.0265,
0.0153, 0.0154, 0.0155, 0.0153,
0.0144, 0.0147, 0.0145, 0.0143,
0.0168, 0.0165, 0.0167, 0.0167,
0.0270, 0.027, 0.0275, 0.0278,
0.0291, 0.0294, 0.0292, 0.0291],
'$T_2$': [0.0271, 0.0272, 0.0272, 0.0273,
0.032, 0.0314, 0.0317, 0.0322,
0.0366, 0.0372, 0.0369, 0.0368,
0.0247, 0.0245, 0.0249, 0.0246,
0.0234, 0.0233, 0.0239, 0.0237,
0.0262, 0.026, 0.026, 0.0259,
0.0377, 0.0384, 0.038, 0.038,
0.0407, 0.0409, 0.0406, 0.0404]}
p40 = pd.DataFrame(pred40)
p40['$T_{r2}$'] = p40['$T_2$'] * 8
p40 = p40.groupby(['$T$'], as_index = False).mean()
p40['$T_p$'] = p40['$T$'] * 2
p40['$T_{r1}$'] = p40['$T_1$'] * 8
p40
X = np.array(p40['$T_p$']).reshape(-1, 1)
Y = np.array(1/p40["$T_{r1}$"]).reshape(-1, 1)
reg = LinearRegression().fit(X, Y)
Y_pred = reg.predict(X)
reg_value = reg.score(X, Y)
intercept = reg.intercept_
coef = reg.coef_
m = 20/1000
g = 9.8
r = 26/100
I = (m * g * r) / (4 * (np.pi ** 2) * coef)
plt.scatter(X, Y)
plt.plot(X, Y_pred, color='blue')
plt.title("Graph of $T_p$ vs 1/Tr")
plt.xlabel("$T_p$")
plt.grid()
#plt.show()
display(Latex("$R^2$ = {}".format(reg_value)))
display(Latex("$f(x) = {}x {}$".format(coef[0, 0], intercept[0])))
display(Latex("$I$ = {:10.4f}".format(I[0, 0])))
X = np.array(p40['$T_p$']).reshape(-1, 1)
Y = np.array(1/p40["$T_{r2}$"]).reshape(-1, 1)
reg = LinearRegression().fit(X, Y)
Y_pred = reg.predict(X)
reg_value = reg.score(X, Y)
intercept = reg.intercept_
coef = reg.coef_
m = 20/1000
g = 9.8
r = 26/100
I = (m * g * r) / (4 * (np.pi ** 2) * coef)
plt.scatter(X, Y)
plt.plot(X, Y_pred, color='blue')
plt.title("Graph of $T_p$ vs 1/Tr")
plt.xlabel("$T_p$")
plt.grid()
#plt.show()
display(Latex("$R^2$ = {}".format(reg_value)))
display(Latex("$f(x) = {}x {}$".format(coef[0, 0], intercept[0])))
display(Latex("$I$ = {:10.4f}".format(I[0, 0])))
###Output
_____no_output_____ |
Notebook_4.ipynb | ###Markdown
Lab 9 - Plotting Vectors using NumPy and MatPlotLib In this laboratory we will discuss the basics of numerical and scientific programming by working with vectors using NumPy and MatPlotLib. ObjectivesAt the end of this activity you will be able to:1. Be familiar with the Python libraries for numerical and scientific programming.2. Visualize vectors through Python programming.3. Perform simple vector operations through code. NumPy NumPy, or Numerical Python, is mainly used for matrix and vector operations. It is capable of declaring, computing with, and representing matrices. Most Python scientific programming libraries use NumPy as their base.ScalarsRepresent magnitude, or a single valueVectorsRepresent magnitude with direction Representing Vectors Now that you know how to represent vectors in their component and matrix forms, we can hard-code them in Python. Let's say that you have the vectors: $$ A = 3\hat{x} + 2\hat{y} \\B = 1\hat{x} - 4\hat{y}\\C = 3a_x + 2a_y - 1a_z \\D = 1\hat{i} - 1\hat{j} + 2\hat{k}$$ whose matrix (column and row) equivalents are: $$ A = \begin{bmatrix} 3 \\ 2\end{bmatrix} , B = \begin{bmatrix} 1 \\ -4\end{bmatrix} , C = \begin{bmatrix} 3 \\ 2 \\ -1 \end{bmatrix}, D = \begin{bmatrix} 1 \\ -1 \\ 2\end{bmatrix}$$$$ A = \begin{bmatrix} 3 & 2\end{bmatrix} , B = \begin{bmatrix} 1 & -4\end{bmatrix} , C = \begin{bmatrix} 3 & 2 & -1\end{bmatrix} , D = \begin{bmatrix} 1 & -1 & 2\end{bmatrix} $$ We can then start writing NumPy code for this:
###Code
## Importing necessary libraries
import numpy as np ## 'np' here is short-hand name of the library (numpy) or a nickname.
L = np.array([3, 2])
M = np.array([1, -6])
N = np.array([
[3],
[2],
[-3]
])
O = np.array ([[2],
[-3],
[2]])
print('Vector L is ', L)
print('Vector M is ', M)
print('Vector N is ', N)
print('Vector O is ', O)
###Output
Vector L is [3 2]
Vector M is [ 1 -6]
Vector N is [[ 3]
[ 2]
[-3]]
Vector O is [[ 2]
[-3]
[ 2]]
###Markdown
Describing vectors in NumPy Describing vectors is very important if we want to perform basic to advanced operations with them. The fundamental ways of describing vectors are knowing their shape, size, and dimensions.
###Code
### Checking shapes
### Shapes tells us how many elements are there on each row and column
L.shape
H = np.array([2, 1, 3, 6, -1.3, 1])
H.shape
N.shape
### Checking size
### The size tells us the total number of elements in the vector
O.size
### Checking dimensions
### The dimension or rank tells us how many dimensions the vector has.
O.ndim
###Output
_____no_output_____
###Markdown
Great! Now let's explore performing operations with these vectors. *Addition* The addition rule is simple: we just add the elements of the matrices according to their index. So in this case, if we add vector $L = \begin{bmatrix} 3 & 2\end{bmatrix}$ and vector $M = \begin{bmatrix} 1 & -6\end{bmatrix}$ we get the resulting vector: $$J = 4\hat{x}-4\hat{y} \\ \\or \\ \\ J = \begin{bmatrix} 4 \\ -4\end{bmatrix} $$ So let's try to do that in NumPy in a number of ways:
###Code
J = np.add(L, M) ## this is the functional method using the numpy library
K = np.add(N, O)
J = L + M ## this is the explicit method, since Python does a value-reference so it can
## know that these variables would need to do array operations.
J
K = N + O
K
pos1 = np.array([1,1,1])
pos2 = np.array([1,2,4])
pos3 = np.array([0,4,-1])
pos4 = np.array([6,-2,2])
#J = pos1 + pos2 + pos3 + pos4
#J = np.multiply(pos1, pos4)
R = pos3 / pos4
R
pos1 = np.array([1,1,1])
pos2 = np.array([1,2,4])
pos3 = np.array([0,4,-1])
pos4 = np.array([6,-2,2])
R = pos1 + pos2 + pos3 + pos4
#R = np.multiply(pos3, pos4)
#R = pos2/ pos4
R
pos1 = np.array([1,1,1])
pos2 = np.array([1,2,4])
pos3 = np.array([0,4,-1])
pos4 = np.array([6,-2,2])
#R = pos1 + pos2 + pos3 + pos4
R = np.multiply(pos3, pos4)
#R = pos2 / pos4
R
###Output
_____no_output_____
###Markdown
Try for yourself! Try to implement subtraction, multiplication, and division with vectors $L$ and $M$!
###Code
### Try out your code here!
### Subtraction
D = np.subtract (L,M)
D
### Multiplication
E = np.multiply (L,M)
E
### Division
F = np.divide (L,M)
F
###Output
_____no_output_____
###Markdown
Scaling Scaling, or scalar multiplication, takes a scalar value and multiplies it with a vector. Let's take the example below: $$S = 7 \cdot L$$ We can do this in NumPy through:
###Code
#S = 7 * L
S = np.multiply(7, L)
S
###Scaling with two vectors
#S = 7 * L
#S = 7 * M
L = np.array ([2,1])
M = np.array ([11,5])
S = np.multiply (7,L)
S
###Scaling with two vectors
#S = 7 * L
#S = 7 * M
L = np.array ([2,1])
M = np.array ([11,5])
S = np.multiply (7,M)
S
###Output
_____no_output_____
###Markdown
MatPlotLib MatPlotLib, or the MATLAB Plotting Library, is Python's take on MATLAB's plotting features. MatPlotLib can be used for everything from graphing values to visualizing several dimensions of data. Visualizing Data It's not enough just to solve these vectors; we may also need to visualize them. So we'll use MatPlotLib for that. We'll need to import it first.
###Code
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
X = [0, -2]
Y = [4, -2]
#ax = plt.axes(projection='3d')
plt.scatter(X[0], X[1], label='X', c='Green')
plt.scatter(Y[0], Y[1], label='Y', c='Black')
plt.grid()
plt.legend()
plt.show()
A = np.array([2, 0])
B = np.array([2, 6])
R = A + B
Magnitude = np.sqrt(np.sum(R**2))
plt.title("Resultant Vector\nMagnitude:{}" .format(Magnitude))
plt.xlim(-6, 4)
plt.ylim(-6, 4)
plt.quiver(0, 0, A[0], A[1], angles='xy', scale_units='xy', scale=1, color='Blue')
plt.quiver(A[0], A[1], B[0], B[1], angles='xy', scale_units='xy', scale=1, color='Red')
J = X + Y
plt.quiver(0, 0, R[0], R[1], angles='xy', scale_units='xy', scale=1, color='Magenta')
plt.grid()
plt.show()
print(J)
print(Magnitude)
Slope = R[1]/R[0]
print(Slope)
Angle = (np.arctan(Slope))*(180/np.pi)
print(Angle)
n = A.shape[0]
plt.xlim(-10, 10)
plt.ylim(-10, 10)
plt.quiver(0,0, A[0], A[1], angles='xy', scale_units='xy',scale=1)
plt.quiver(A[0],A[1], B[0], B[1], angles='xy', scale_units='xy',scale=1)
plt.quiver(0,0, R[0], R[1], angles='xy', scale_units='xy',scale=1)
plt.show()
####Three vectors
Pengu = np.array([5, 7])
Ponggo = np.array([6, 8])
Precious = Pengu + Ponggo
Akia = np.array([-11, 13])
akiko = Akia + Pengu + Ponggo
magnitude = np.sqrt(np.sum(akiko**2))
plt.title("Resultant Vector\nMagnitude: {} \n Resultant: {}".format(magnitude, akiko))
plt.xlim(-10, 40)
plt.ylim(-10, 40)
plt.quiver(0, 0, Pengu[0], Pengu[1], angles='xy', scale_units='xy', scale=1, color='Pink')
plt.quiver(Pengu[0], Pengu[1], Ponggo[0], Ponggo[1], angles='xy', scale_units='xy', scale=1, color='Green')
plt.quiver(Precious[0], Precious[1], Akia[0], Akia[1], angles='xy', scale_units='xy', scale=1, color='Blue')
plt.quiver(0, 0, akiko[0], akiko[1], angles='xy', scale_units='xy', scale=1, color='red')
plt.grid()
plt.show()
Slope = R[1]/R[0]
print(Slope)
Angle = (np.arctan(Slope))*(180/np.pi)
print(Angle)
###Output
_____no_output_____ |
homeworks/Part 2/Homework 1 [general].ipynb | ###Markdown
Homework #1 - Applying NLP methods. In this homework we will work with data from the competition: [Toxic comment classification challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge) Given the text of a comment, the task is to predict the probabilities of the following categories: - toxic - severe_toxic - obscene - threat - insult - identity_hate As in the competition, we will use the ROC AUC metric for validation everywhere. _Note that a single comment can carry labels of several different classes._ What needs to be done? 1. Preparation __[10%]__: - Download the data and perform an initial EDA: class balance, class overlap, and so on. - Devise and justify a validation strategy. - Preprocess the data. Decide what should be done with symbols and capital letters. Perform lemmatization or stemming. 2. Apply any embedding (word2vec or GloVe) __[5%]__ 3. Build the following models (for each one you must choose the optimal number of layers and architecture yourself, assess quality and overfitting, plot training and validation curves, and draw conclusions about applying the model): - 1D convolutions __[20%]__ - LSTM or GRU __[20%]__ - Bidirectional LSTM __[20%]__ 4. Try applying BERT or GPT-2 to this problem. The choice of the optimal number of layers and the architecture is up to you (but don't forget to justify it). Assess the quality and other characteristics of the model. __[25%]__ Extra 50% 5. Based on the results obtained, build your best model and make a Late Submission on the test data of the [challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge). Don't forget to attach a screenshot of your score; paste the screenshot directly into the solution notebook or print it to stdout. _Rules for getting the extra points:_ - you can get from 20% to 50% depending on the quality metric achieved on the leaderboard relative to the other participants of our course - To get the minimum 20% you need to: - Fully solve the main tasks - Justify the solution you submitted - The proposed model must differ from the ones built in tasks 2-4 Upload the finished notebook via this form: [http://bit.ly/dafe_hw](http://bit.ly/dafe_hw)
###Code
# import ruquired libraries
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
#settings
color = sns.color_palette()
sns.set_style("dark")
warnings.filterwarnings("ignore")
%matplotlib inline
# load dataset
train_path = 'jigsaw-toxic-comment-classification-challenge/train.csv'
test_path = 'jigsaw-toxic-comment-classification-challenge/test.csv'
df_train = pd.read_csv(train_path)
df_test = pd.read_csv(test_path)
###Output
_____no_output_____
###Markdown
General EDA
###Code
df_train.head()
print(f'train rows: {df_train.shape[0]} \n'
f'test rows: {df_test.shape[0]}')
x = df_train.iloc[:,2:].sum()
#marking comments without any tags as "clean"
rowsums = df_train.iloc[:,2:].sum(axis=1)
df_train['clean'] = (rowsums == 0)
#count number of clean entries
df_train['clean'].sum()
print(f"Total comments = {len(df_train)}")
print(f"Total clean comments = {df_train['clean'].sum()}")
print(f"Total tags = {x.sum()}")
print("Check for missing values in Train dataset")
null_check = df_train.isnull().sum()
print(null_check)
print("Check for missing values in Test dataset")
null_check = df_test.isnull().sum()
print(null_check)
print("filling NA with \"unknown\"")
df_train["comment_text"].fillna("unknown", inplace=True)
df_test["comment_text"].fillna("unknown", inplace=True)
x = df_train.iloc[:,2:].sum()
#plot
plt.figure(figsize=(8,4))
ax = sns.barplot(x.index, x.values, alpha=0.8)
plt.title("Amount per class")
plt.ylabel('Amount of occurrences', fontsize=12)
plt.xlabel('Type ', fontsize=12)
#adding the text labels
rects = ax.patches
labels = x.values
for rect, label in zip(rects, labels):
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2, height + 5, label, ha='center', va='bottom')
plt.show()
x = rowsums.value_counts()
#plot
plt.figure(figsize=(8,4))
ax = sns.barplot(x.index, x.values, alpha=0.8,color=color[2])
plt.title("Multiple tags per comment")
plt.ylabel('Amount of occurrences', fontsize=12)
plt.xlabel('Amount of tags ', fontsize=12)
#adding the text labels
rects = ax.patches
labels = x.values
for rect, label in zip(rects, labels):
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2, height + 5, label, ha='center', va='bottom')
plt.show()
temp_df = df_train.iloc[:,2:-1]
# filter temp by removing clean comments
# temp_df = temp_df[~df_train.clean]
corr = temp_df.corr()
plt.figure(figsize=(10,8))
sns.heatmap(corr,
xticklabels=corr.columns.values,
yticklabels=corr.columns.values,
annot=True)
plt.show()
###Output
_____no_output_____ |
Projects/ALL_FastAI_VGG19.ipynb | ###Markdown
Peter Moss Acute Myeloid & Lymphoblastic Leukemia AI Research Project ALL FastAI VGG19 Classifier**Using The ALL Image Database for Image Processing & The Leukemia Blood Cell Image Classification Using Convolutional Neural Network Research Paper** The ALL FastAI VGG19 Classifier was created by [Salvatore Raieli](https://github.com/salvatorera) based on his [Resnet50](https://github.com/AMLResearchProject/ALL-FastAI-2019/blob/master/Projects/ALL_FastAI_Resnet_50.ipynb) project. The classifier provides a Google Colab notebook that uses FastAI with VGG19 and ALL_IDB2 from the [Acute Lymphoblastic Leukemia Image Database for Image Processing dataset](https://homes.di.unimi.it/scotti/all/). FastAI VGG19 Classifier Project Contributors- [Salvatore Raieli](https://github.com/salvatorera "Salvatore Raieli") - PhD Immunology / Bioinformatician, Bologna, Italy DISCLAIMERThese projects should be used for research purposes only. The purpose of the projects is to show the potential of Artificial Intelligence for medical support systems such as diagnosis systems.Although the classifiers are accurate and show good results both on paper and in real world testing, they are not meant to be an alternative to professional medical diagnosis.Salvatore Raieli is a bioinformatician researcher and PhD in Immunology, but does not work in medical diagnosis. Please use these systems responsibly. ALL Image Database for Image Processing by Fabio ScottiThe [Acute Lymphoblastic Leukemia Image Database for Image Processing](https://homes.di.unimi.it/scotti/all/) dataset created by [Fabio Scotti, Associate Professor Dipartimento di Informatica, Università degli Studi di Milano](https://homes.di.unimi.it/scotti/) is used in this notebook.Although in the [Leukemia Blood Cell Image Classification Using Convolutional Neural Network](http://www.ijcte.org/vol10/1198-H0012.pdf "Leukemia Blood Cell Image Classification Using Convolutional Neural Network") paper the ALL_IDB1 dataset is used, in this notebook you will use the ALL_IDB2 dataset. After removing 10 images per class for further testing and demonstrations, the dataset will be split into 80% and 20% for training and testing respectively. Clone the ALL FastAI 2020 repositoryFirst of all you should clone the [ALL-FastAI-2020](https://github.com/AMLResearchProject/ALL-FastAI-2020 "ALL-FastAI-2020") repository from the [Peter Moss Acute Myeloid & Lymphoblastic Leukemia AI Research Project](https://github.com/AMLResearchProject "Peter Moss Acute Myeloid & Lymphoblastic Leukemia AI Research Project") Github Organization. To do this, make sure you have Git installed, navigate to the location you want to clone the repository to on your device using terminal/commandline, and then use the following command:``` $ git clone https://github.com/AMLResearchProject/ALL-FastAI-2020.git```Once you have used the command above you will see a directory called **ALL-FastAI-2020** in the location you chose to clone to. In terminal, navigate to the **ALL-FastAI-2020** directory; this is your project root directory. Google Drive / ColabThis tutorial assumes you have access to [Google Drive](https://www.google.com/drive/) with enough space to save the dataset and related files. It is also assumed that you have access to [Google Colab](https://colab.research.google.com). ![Google Colab]()
Import data to Google DriveYou need to import **ALL_IDB2** from the [Acute Lymphoblastic Leukemia Image Database for Image Processing dataset](https://homes.di.unimi.it/scotti/all/); to do this you need to request permission from Fabio Scotti, the creator of the dataset. You can request permission by following the steps provided on [this page](https://homes.di.unimi.it/scotti/all/download). Once you have permission you need to upload the negative and positive examples provided in **ALL_IDB2** to your Google Drive. In this tutorial we assume you have uploaded your copy of the dataset to a folder located on your Google drive with the location: *AML-ALL-Classifiers/Python/_FastAI*. Once you have uploaded the dataset you can continue with this tutorial. Google Colab **You should now be running this tutorial on Google Colab; if not, please read this tutorial from the beginning.** First we need to import the Google Colab Drive library, mount our dataset drive from Google Drive, and set the path to the ALL_IDB2 folder on your drive. Run the following code block to do this. You will be asked to click a link that will authorize the application with the permissions it needs to mount your drive etc. Follow the steps and then paste the authorization key into this application.
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
dataset_dir = "/content/gdrive/My Drive/fastai-v3/ALL_IDB2"
###Output
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly
Enter your authorization code:
··········
Mounted at /content/gdrive
###Markdown
Import required librariesWe need to import the relevant FastAI libraries; running the following code block will do this and get the paths to the dataset files.
###Code
from fastai.vision import *
from fastai.metrics import error_rate
fileNames = get_image_files(dataset_dir)
fileNames[:5]
###Output
_____no_output_____
###Markdown
Import datasetNow we need to import the dataset into this notebook. Run the following code blocks to import the ALL_IDB2 dataset as a FastAI DataBunch. In the ImageDataBunch.from_name_re function we can see that we pass the *dataset_dir* we created earlier in the tutorial, the fileNames we collected earlier, the filename pattern, some augmentations, the image size needed to replicate the VGG19 input size, and the batch size. For more information about getting datasets ready with FastAI you can check out [this article](https://docs.fast.ai/vision.data.htmlQuickly-get-your-data-ready-for-training).
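As a side note on the *pattern* argument: the regular expression pulls the class label out of each file name. A minimal sketch of what it matches (the file name below is only an illustrative example of the ALL_IDB2 naming scheme):

```python
import re

pattern = r'/\w+_(\d)\.tif$'
example = '/content/gdrive/My Drive/fastai-v3/ALL_IDB2/Im001_1.tif'  # illustrative path
match = re.search(pattern, example)
print(match.group(1))  # -> '1', the class label used by ImageDataBunch.from_name_re
```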
###Code
np.random.seed(2)
pattern = r'/\w+_(\d)\.tif$'
data = ImageDataBunch.from_name_re(dataset_dir, fileNames, pattern, ds_tfms=get_transforms(),
size=224, bs=32).normalize(imagenet_stats)
data
###Output
_____no_output_____
###Markdown
data.show_batch()Now we use the data.show_batch() function to show a batch of our data. Run the following code block to do this and view the results.
###Code
data.show_batch(rows=3, figsize=(7,6))
###Output
_____no_output_____
###Markdown
View classes infoNow we can run the following code block which will print out the classes list and lengths.
###Code
print(data.classes)
len(data.classes),data.c
###Output
['0', '1']
###Markdown
The VGG19 model What and why to use transfer learning?Transfer learning means using a pre-trained model to build our classifier. A pre-trained model is a model that has been previously trained on a dataset; it comes with its already-updated weights and biases. Using a pre-trained model saves time and computational resources. Another advantage is that pre-trained models often perform better than architectures designed from scratch. To better understand this point, suppose you want to build a classifier able to sort different sailboat types. A model pre-trained on ships would have already captured some boat features in its first layers, so it learns faster and with better accuracy among the different sailboat types. The VGG19 architecture**VGG19** was proposed in 2014 by the Visual Geometry Group (University of Oxford). They proposed a very deep architecture with 16 (or 19) layers, which was at the time much deeper than what had been used in prior models. They used 3×3 filters in all convolutional layers (stride equal to 1) to reduce the number of parameters in the network. Concretely, there are 16 convolutional layers, followed by 2 fully connected layers (4096 neurons in each of the two layers). The last layer is a dense layer (1000 neurons, each representing one of the ImageNet categories). [Original research article](https://arxiv.org/pdf/1409.1556.pdf) Test the VGG19 architecture with our datasetNow we are going to test how the FastAI implementation of VGG19 works with the ALL_IDB2 dataset.Create the convolutional neural networkFirst we will create the convolutional neural network based on VGG19; to do this we use the following code block, which uses the FastAI cnn_learner (previously create_cnn) function. We pass the loaded data, specify the VGG19 model, pass error_rate & accuracy as a list for the metrics parameter specifying we want to see both error_rate and accuracy, and finally specify a weight decay of 1e-1 (0.1).
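To see the layer structure described above, a minimal sketch using torchvision (which fastai pulls in) is shown below; it only inspects the architecture and is not part of the original notebook:

```python
from torchvision import models

vgg = models.vgg19_bn(pretrained=False)  # same backbone that fastai exposes as models.vgg19_bn
print(vgg.features)    # 16 conv layers (3x3 kernels) with batch norm and max pooling
print(vgg.classifier)  # two 4096-unit fully connected layers plus the final 1000-way layer
```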
###Code
learn = cnn_learner(data, models.vgg19_bn, metrics=[error_rate,accuracy], wd=1e-1)
###Output
Downloading: "https://download.pytorch.org/models/vgg19_bn-c79401a0.pth" to /root/.cache/torch/checkpoints/vgg19_bn-c79401a0.pth
###Markdown
learn.lr_find() & learn.recorder.plot()Now we will use the learn.lr_find() function to run the LR Finder. The LR Finder helps find the best learning rate to use with our network; for more information see the [original paper](https://arxiv.org/pdf/1506.01186.pdf). The learn.recorder.plot() function plots the loss against the learning rate. Run the following code block to view the graph. The best learning rate should be chosen as the value where the curve is the steepest. You may try different learning rate values in order to pick the best one.
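A minimal sketch, assuming a fastai v1 release recent enough to expose the `suggestion` option on the recorder plot, which marks the point of steepest descent and stores a numeric suggestion:

```python
learn.lr_find()
learn.recorder.plot(suggestion=True)   # marks the point of steepest gradient on the curve
print(learn.recorder.min_grad_lr)      # numeric suggestion stored by the call above
```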
###Code
learn.lr_find()
learn.recorder.plot()
###Output
_____no_output_____
###Markdown
learn.fit_one_cycle() & learn.recorder.plot_losses()The learn.fit_one_cycle() function can be used to fit the model. Fit one cycle reaches a comparable accuracy faster than the *fit* function when training complex models. Instead of keeping the learning rate fixed for all iterations, fit one cycle linearly increases the learning rate and then decreases it again (this process is what is called one cycle). Moreover, this learning rate variation helps prevent overfitting. We use 5 for the parameter *cyc_len* to specify the number of cycles to run (one cycle can be considered equivalent to an epoch), and *max_lr* to specify the maximum learning rate to use, which we set to *0.001*. Fit one cycle varies the learning rate starting from ten-fold less than the selected maximum learning rate. For more information about fit one cycle: [article](https://arxiv.org/pdf/1803.09820.pdf). We then use learn.recorder.plot_losses() to plot the losses from *fit_one_cycle* as a graph.
###Code
lr = 1e-3
learn.fit_one_cycle(cyc_len=5, max_lr=lr)
learn.recorder.plot_losses()
###Output
_____no_output_____
###Markdown
Save the modelWe can save the model once it has been trained.
###Code
learn.save('VGG19_model')
###Output
_____no_output_____
###Markdown
learn.recorder.plot_lr()We use learn.recorder.plot_lr() to plot the learning rate.
###Code
learn.recorder.plot_lr()
###Output
_____no_output_____
###Markdown
ClassificationInterpretation()We use [ClassificationInterpretation()](https://docs.fast.ai/vision.learner.htmlClassificationInterpretation) to visualize interpretations of our model.
###Code
preds, y, losses = learn.get_preds(with_loss=True)
interp = ClassificationInterpretation(learn, preds, y, losses)
###Output
_____no_output_____
###Markdown
interp.plot_top_losses()We can use [interp.plot_top_losses()](https://docs.fast.ai/vision.learner.htmlplot_top_losses) to view our top losses and their details.
###Code
interp.plot_top_losses(9, figsize=(7,7))
###Output
_____no_output_____
###Markdown
interp.plot_confusion_matrix()Now we will use [interp.plot_confusion_matrix()](https://docs.fast.ai/vision.learner.htmlClassificationInterpretation.plot_confusion_matrix) to display a [confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix). Below, the top left square represents true negatives, the top right square represents false positives, the bottom left square represents false negatives, and the bottom right square represents true positives.
###Code
interp.plot_confusion_matrix()
###Output
_____no_output_____
###Markdown
learn.unfreeze()Next we use learn.unfreeze() to unfreeze the model. The VGG19 model was trained on ImageNet to classify images among 1000 categories. None of these categories is a leukemia cell; for this reason the fast.ai *cnn_learner* function substitutes, behind the scenes, the last layer with 2 new layers. The last layer is a matrix whose size matches our number of classes (*data.c*). So far we only trained these two layers, while the other layers of the model were still keeping the downloaded weights. Unfreezing our model allows us to also train these other layers and update their weights.
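To see the head that cnn_learner attached, a minimal sketch is shown below (it assumes the fastai v1 convention where the learner's model is a Sequential made of the pre-trained body followed by the new head):

```python
print(learn.model[-1])  # the custom head; its final Linear layer outputs data.c (= 2) values
```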
###Code
learn.unfreeze()
###Output
_____no_output_____
###Markdown
Train the entire (unfrozen) modelNow that we have unfrozen our model, we will use the following code blocks to train the whole model.
###Code
learn.lr_find()
learn.recorder.plot()
###Output
_____no_output_____
###Markdown
Slice parameterInitial layers are activated by simple patterns (edges, lines, circles, etc.) while the following layers acquire the ability to recognize more sophisticated patterns. Updating the weights of these early layers too much would probably decrease our accuracy. The point of transfer learning is to exploit this ability of a pre-trained model to recognize particular patterns and to adapt it to our dataset. The parameter *slice* allows us to apply **discriminative learning rates**. In other words, we apply a smaller learning rate (in this case, 1e-4) to the earlier layers and a higher learning rate to the last layers.
###Code
nlr = slice(1e-4, 1e-3)
learn.fit_one_cycle(5, nlr)
learn.recorder.plot_losses()
preds,y,losses = learn.get_preds(with_loss=True)
interp = ClassificationInterpretation(learn, preds, y, losses)
interp.plot_top_losses(9, figsize=(7,7))
interp.plot_confusion_matrix()
###Output
_____no_output_____
###Markdown
Save the modelWe save our model after unfreezing and fine-tuning it.
###Code
learn.save('VGG19_unfreeze')
###Output
_____no_output_____ |
Smart_Pad_Scheduler_(Cloud_Version).ipynb | ###Markdown
**Smart PAD Scheduler** This notebook has been built to take an existing full field development and optimize the development sequence of the pads. It was born from an attempt to minimize the parent-child interactions (time gap) while honouring the development schedule hard constraints (e.g.: break-up season, water availability, start date, etc). Assessing the viability of a pad sequence takes seconds instead of the traditional days. This significant time improvement allows running multiple iterations and opens the door to identifying an optimal schedule.---It handles hard constraints like drilling and completion windows. Soft constraints are also tracked in order to identify optimized development sequences. ---This Jupyter notebook was built on the Google Colab platform and its import and export functionalities are linked to it. LIBRARIES: Import
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import pyplot
import matplotlib.cm as cm
from matplotlib.ticker import MultipleLocator, AutoMinorLocator, FuncFormatter, MaxNLocator, FormatStrFormatter
import matplotlib.dates as mdates
%matplotlib inline
import shapely.geometry
import shapely.affinity
from shapely.geometry import Polygon
from math import atan
!pip install descartes
from descartes import PolygonPatch
import datetime
from datetime import timedelta
from IPython.display import clear_output
import timeit
import warnings
warnings.filterwarnings("ignore")
#Local drive fetching
from google.colab import files
import io
# Code to read Google Drive file into Colaboratory:
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
###Output
Requirement already satisfied: descartes in /usr/local/lib/python3.6/dist-packages (1.1.0)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from descartes) (3.1.3)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->descartes) (0.10.0)
Requirement already satisfied: numpy>=1.11 in /usr/local/lib/python3.6/dist-packages (from matplotlib->descartes) (1.17.5)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->descartes) (1.1.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->descartes) (2.4.6)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->descartes) (2.6.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from cycler>=0.10->matplotlib->descartes) (1.12.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib->descartes) (45.1.0)
###Markdown
INPUTS **PAD Input (CSV)**This input is used to assign some pad specific info to the wells and provide (x,y) coordinates for label display. Only one line per pad is allowed. That being said, a pad can be subdivided into groups of wells as long as they have a different "id" and "id_short" (i.e.: xxx_N and xxx_S)Columns needed:* "id" -> a unique id for the pad* "id_short" -> a reduced character id that will be used for label display in map view and Gantt chart* "basin" -> the watershed the pad is located into* "x_mid" -> the x coordinate (preferably UTM Easting) to display the "id_short" in map view* "y_mid" -> the y coordinate (preferably UTM Northing) to display the "id_short" in map view* "drill_days" -> the number of days estimated to drill a given well on the pad.* "freehold_wells" -> the number of wells on the pad with Freehold status. This is subsequently used as a metric to compare various runs.* "expiry" -> Date by which a given pad must be on production. This is also subsequently used as a metric to compare various runs.**WELL Input (CSV)**This input is used to assign spatial attributes to the wells and subsequently create a spatial representation of the pads. One line per well is allowed.Columns needed:* "well" -> a unique id assigned to the well.* "pad_id" -> an id that must match the "id" column within the PAD input. This is used to assign the wells back to the pad they belong to.* "toe_x" -> the spatial x coordinate of the well toe (preferably UTM Easting).* "toe_y" -> the spatial y coordinate of the well toe (preferably UTM Northing).* "heel_x" -> the spatial x coordinate of the well heel (preferably UTM Easting).* "heel_y" -> the spatial y coordinate of the well heel (preferably UTM Northing).* "heel_md" & "toe_md" -> These 2 metrics are used to calculate the lateral length of the stick.**DRILLED Wells Input (CSV)**This input is used for visually displaying existing wells. Multiple lines (survey stations) can be given per well. Columns needed:* "x" -> Preferably UTM Easting* "y" -> Preferably UTM Northing* "uwi" -> a unique id used to aggregate the multiple lines (survey stations) **FILE IMPORT: From Local Desktop**
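A minimal sketch of a column check for the three inputs (the column lists simply mirror the description above; the helper name is hypothetical and not part of the original workflow):

```python
import pandas as pd

REQUIRED_COLUMNS = {
    'pad': ['id', 'id_short', 'basin', 'x_mid', 'y_mid', 'drill_days', 'freehold_wells', 'expiry'],
    'well': ['well', 'pad_id', 'toe_x', 'toe_y', 'heel_x', 'heel_y', 'heel_md', 'toe_md'],
    'drilled': ['x', 'y', 'uwi'],
}

def check_columns(df: pd.DataFrame, kind: str) -> None:
    # Raise early if an uploaded CSV is missing a column the scheduler relies on.
    missing = [c for c in REQUIRED_COLUMNS[kind] if c not in df.columns]
    if missing:
        raise ValueError(f"{kind} input is missing required columns: {missing}")
```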
###Code
# Fetch PAD Input (CSV)
pad_upload = files.upload()
#Fetch WELL Input (CSV)
well_upload = files.upload()
#Fetch DRILLED Wells Input (CSV)
drilled_upload = files.upload()
# Store PAD dataset under df_pad
df_pad = pd.read_csv(io.BytesIO(pad_upload['Dataset_Pad.csv']))
# Store WELL dataset under df_well
df_well = pd.read_csv(io.BytesIO(well_upload['Dataset_Wells.csv']))
# Store DRILLED dataset under df_drilled
df_drilled = pd.read_csv(io.BytesIO(drilled_upload['Dataset_Drilled.csv']))
###Output
_____no_output_____
###Markdown
**FILE IMPORT: From Google Drive**
###Code
#Mount Google Drive.
from google.colab import drive
drive.mount('/content/gdrive')
#Store dataset from Google Drive in a Pandas Dataframe
df_pad = pd.read_csv('gdrive/My Drive/Colab Notebooks/DataSets/Dataset_Pad.csv')
df_well = pd.read_csv('gdrive/My Drive/Colab Notebooks/DataSets/Dataset_Wells.csv')
df_drilled = pd.read_csv('gdrive/My Drive/Colab Notebooks/DataSets/Dataset_Drilled.csv')
###Output
_____no_output_____
###Markdown
PARAMETERS: Customisation
###Code
#Development Start date
dev_start = datetime.date(2021, 1, 1) #year, month, day.
#Number of crews
n_rig = 2 #Drilling Rig
n_crew = 2 #Completion Crew
#Drilling BREAK_UP
bu_month_start, bu_day_start = 4, 1 #April 1st
bu_month_end, bu_day_end = 6, 15 #June 15th
#Completion WINTER season
winter_month_start, winter_day_start = 12, 15 #December 15th
winter_month_end, winter_day_end = 5, 31 #May 31st
#River Withdrawal Period
withdraw_month_start, withdraw_day_start = 4, 15 #April 15th
withdraw_month_end, withdraw_day_end = 10, 31 #October 31st
#Completion Parameters
compl_day_lag = timedelta(days = 30) #Number of days before winter season that a completion crew can start operations
water_threshold = 0.8 #Fraction of total completion water needed available in pond before to start on a given pad.
stage_per_day = 8 #Number of stages per day
stage_per_day_handicap = -1 #Reduction in stages per day (handicap) for first few pads to account for learning curve
stage_handicap_length = 2 #Number of pad before handicap goes away
compl_mob_demob = timedelta(days = 7) #Time added to completion for mobilizing and demobilizing the crew.
soak_time = timedelta(days = 30) #Days added to completion before pad is on-stream
#SIMOPS
well_spacing = 300 #Approximate distance between wells
width_incr_margin = 200 #Width added to the box perpendicular to the well (to generate overlapp with neighboor).
long_incr_margin = 300 #Length to add to the box in the direction of the well
#Basin/Pond Specs
basin_specs = {'basin': ['Red Deer', 'Clearwater'],
'pond_capa': [950000, 950000], #Pond capacity (m3)
'pond_vol': [0, 0], #Volume at t0 (m3).
'max_withdraw_rate': [0.1, 0.1], #m3/s during withdrawal period
'yearly_max': [2000000, 1800000], #Maximum yearly withdrawal allowance (m3)
'yearly_vol': [0, 0]} #Volume already withdrawed at t0 (m3).
#Number of pad that do not get shuffle by the optimizer. These are the 'n' first of the schedule. Feature meant to capture the licensed or built pads.
index_shuffle = 5
#Break-up length
bu_length = timedelta(days = round((bu_month_end - bu_month_start) * 30.5 + (bu_day_end - bu_day_start)))
###Output
_____no_output_____
###Markdown
SPATIAL REPRESENTATIONExtract pad spatial information from the well DataFrame. Heel and toe coordinates are used together with the SIMOPS parameters to create a 2D spatial representation of each pad.
###Code
#Create well entity (Shapely representation) and stored it in df_well
df_well['entity'] = 0
df_well['length'] = 0
df_well['azi'] = 0
for n, item in enumerate(df_well.well):
heel_x = df_well.heel_x[n]
heel_y = df_well.heel_y[n]
toe_x = df_well.toe_x[n]
toe_y = df_well.toe_y[n]
cx = (heel_x + toe_x) / 2
cy = (heel_y + toe_y) / 2
length = df_well.toe_md[n] - df_well.heel_md[n]
df_well['length'][n] = length #Store length in df_well
angle = 90 - (57.2958 * atan((toe_y - heel_y) / (toe_x - heel_x)))
df_well['azi'][n] = angle #Store azi in df_well
w = well_spacing + width_incr_margin
h = length + long_incr_margin
c = shapely.geometry.box(-w/2.0, -h/2.0, w/2.0, h/2.0)
rc = shapely.affinity.rotate(c, 180 - angle)
shape = shapely.affinity.translate(rc, cx, cy)
df_well['entity'][n] = shape
#Create pad entity (Shapely representation) and stored it in df_pad
df_pad['entity'] = "Undefined"
for n, item in enumerate(df_well.well):
pad_entity = df_pad[df_pad.id == df_well.pad_id[n]].entity.iloc[0]
if pad_entity == "Undefined":
    pad_entity = df_well.entity[n] #Pad entity equals the first well shape the first time we pull this pad
else:
    pad_entity = pad_entity.union(df_well.entity[n]) #Pad entity equals the union with subsequent well shapes
pad_index = df_pad[df_pad.id == df_well.pad_id[n]].index[0]
df_pad.set_value(pad_index, 'entity', pad_entity) #df_pad is updated with latest entity
#Add len_tot, stage_tot and water_tot to df_pad
len_tot = df_well.groupby(['pad_id']).sum().length.rename('len_tot')
df_pad = pd.merge (df_pad, len_tot, left_on= ['id'], right_index=True)
df_pad['stage_tot'] = df_pad.len_tot.div(60).round(0)
df_pad['water_tot'] = df_pad.stage_tot.multiply(1500)
#Add n_wells to df_pad
n_wells = df_well.groupby(['pad_id']).count().well.rename('n_wells')
df_pad = pd.merge (df_pad, n_wells, left_on= ['id'], right_index=True)
#Add Azimuth to df_pad
azi = df_well.groupby(['pad_id']).mean().round(0).azi.rename('azi')
df_pad = pd.merge (df_pad, azi, left_on= ['id'], right_index=True)
###Output
_____no_output_____
###Markdown
FUNCTIONS
###Code
def tracker_ini():
"""
Instantiate the 5 tracking DataFrames that will be used in the main function and the scheduling section.
"""
#Build the "Rig Tracking Dataframe"
rig_tracker = pd.DataFrame(columns=['rig', 'status', 'drill_end'])
for rig in range(0, n_rig):
rig_tracker.loc[rig] = [rig, 'Free', 'NA']
#Build the "Crew Tracking Dataframe"
crew_tracker = pd.DataFrame(columns=['crew', 'status', 'compl_end'])
for crew in range(0, n_crew):
crew_tracker.loc[crew] = [crew, 'Free', 'NA']
#Build the "Pad Tracking DataFrame"
pad_tracker = pd.DataFrame(columns=['id', 'status', 'ops_end', 'drill_start', 'drill_end', 'compl_start', 'compl_end'])
for n, pad in enumerate(df.id):
pad_tracker.loc[n] = [pad, 'Undrilled', "NA", "NA", "NA", "NA", "NA"]
#Build the "Pond Tracking DataFrame"
water_tracker = pd.DataFrame(basin_specs, columns =['basin', 'pond_capa', 'pond_vol', 'max_withdraw_rate', 'yearly_max', 'yearly_vol'])
water_tracker.set_index(keys = 'basin', inplace = True)
#Time Tracker
time_tracker = pd.DataFrame(columns=['date', 'clearwater_pond', 'red_deer_pond'])
return rig_tracker, crew_tracker, pad_tracker, water_tracker, time_tracker
# ---------------------------- TIME FUNCTIONS -------------------------
def in_between(date, period):
"""
Take a Python date (yyyy, mm, dd) and return a boolean statement if it falls within the period (mm, dd). Can handle start being the year before (end_month < start_month).
"""
#Assign month_start and month_end
if period == "winter":
month_start, month_end, day_start, day_end = winter_month_start, winter_month_end, winter_day_start, winter_day_end
elif period == "break_up":
month_start, month_end, day_start, day_end = bu_month_start, bu_month_end, bu_day_start, bu_day_end
elif period == "withdraw_allowance":
month_start, month_end, day_start, day_end = withdraw_month_start, withdraw_month_end, withdraw_day_start, withdraw_day_end
#Assign year_start
if month_start <= month_end:
year_start = date.year #Period started same year as it finish
else:
year_start = date.year - 1 #Period started the previous year
#Instantiate start and end date
start_date = datetime.date(year = year_start, month = month_start, day = day_start)
end_date = datetime.date(year = date.year, month = month_end, day = day_end)
#Run logics
if (start_date < date < end_date):
return True
elif (month_start > month_end) and (date > datetime.date(year = date.year, month = month_start, day = day_start)):
return True #handle the case where period start (i.e. Dec 15) < date (i.e. Dec 28) < end of year (i.e. Dec 31)
else:
return False
def max_withdraw_days(previous_date, date):
"""
Return the number of days between 2 dates that are inside the pre-determined river withdraw allowance period.
"""
dates_range = [previous_date + datetime.timedelta(days=x) for x in range(0, (date - previous_date).days)]
counter = 0
for day in dates_range:
counter += in_between(day, "withdraw_allowance")
return counter
def enough_time_before_winter(date):
"""
Check if there is enough time to complete any pad before winter. The comp_day_lag is set in parameter initialisation. Return a boolean statement.
"""
winter_start_option_a = datetime.date(year = date.year, month = winter_month_start, day = winter_day_start)
winter_start_option_b = datetime.date(year = date.year + 1, month = winter_month_start, day = winter_day_start)
#This part deals with the change in year
if (winter_start_option_a - date) < timedelta(days =0): #if option_a (current date.year) is in the past, default to option_b
winter_start = winter_start_option_b
else:
winter_start = winter_start_option_a
#Check if we have enough days to squeeze completion in
if date <= (winter_start - compl_day_lag):
return True
else:
return False
def drill_end_date(date, drill_time):
"""
Take a current date with a drill_time and return the date at which drilling is finished. If end date is in break-up, break-up length is added.
"""
if in_between(date + drill_time, "break_up") == False:
return date + drill_time
else:
return date + drill_time + bu_length
def increment_date(date, pad_tracker):
"""
Increment date to next relevant time. This can be eiher the next ops_end, bu_end or winter_end. Whichever one comes first.
"""
ops_end_date = pad_tracker.ops_end[pad_tracker.ops_end != 'NA']
#Calculate end of next Break-Up
bu_end_option_a = datetime.date(year = date.year, month = bu_month_end, day = bu_day_end)
bu_end_option_b = datetime.date(year = date.year + 1, month = bu_month_end, day = bu_day_end)
if (bu_end_option_a - date) < timedelta(days =0):
bu_end = bu_end_option_b + timedelta(days = 1) #if option_a compared to date is in the past, default to option_b
else:
bu_end = bu_end_option_a + timedelta(days = 1) #if option_a is in the future
#Calculate end of next Winter
winter_end_option_a = datetime.date(year = date.year, month = winter_month_end, day = winter_day_end)
winter_end_option_b = datetime.date(year = date.year + 1, month = winter_month_end, day = winter_day_end)
if (winter_end_option_a - date) < timedelta(days = 0): #if option_a compared to date is in the past, default to option_b
winter_end = winter_end_option_b + timedelta(days = 1)
else:
winter_end = winter_end_option_a + timedelta(days = 1)
if pad_tracker.ops_end[pad_tracker.ops_end != 'NA'].size == 0: #Handle the case where all ops_end are NA.
return min(bu_end, winter_end)
else:
return min(pad_tracker.ops_end[pad_tracker.ops_end != 'NA'].min(), bu_end, winter_end)
# ---------------------------- SPATIAL FUNCTIONS -------------------------
def overlap(pad_a, pad_b):
"""
Take 2 pad ID and return a boolean statement if there is an overlap between them.
"""
if df.entity[df.id == pad_a].iloc[0].intersection(df.entity[df.id == pad_b].iloc[0]).area < 1:
return False
else:
return True
# ---------------------------- HYBRID FUNCTIONS -------------------------
def pick_compl_pad(pad_tracker, water_tracker):
"""
Find the first "Drilled" pad that is away from any "Drilling" pad.
  The chosen pad must have enough water to start completion, otherwise the FOR LOOP continues to the next "Drilled" pad.
  Return "None" if no pad meets the criteria: "Drilled" and away from any adjacent "Drilling" SIMOPS.
"""
for uncompl_pad in range (0, pad_tracker.status[pad_tracker.status == 'Drilled'].size): #Loop through all uncompleted pads
pad_a = pad_tracker.id[pad_tracker.status == 'Drilled'].iloc[uncompl_pad]
basin = df.basin[df.id == pad_a].values #Return which basin pad_a is in
if water_tracker.pond_vol[basin].iloc[0] >= (df.water_tot[df.id == pad_a].iloc[0] * water_threshold): #Only enter if enough water in pond to do completion
if pad_tracker.status[pad_tracker.status == 'Drilling'].size == 0: #If no rig active we can pick first uncompl_pad
return pad_a
else:
counter = 0 # Initialize counter
for drilling_pad in range (0, pad_tracker.status[pad_tracker.status == 'Drilling'].size):
pad_b = pad_tracker.id[pad_tracker.status == 'Drilling'].iloc[drilling_pad]
if overlap(pad_a, pad_b) == False:
counter +=1
else:
break
if counter == pad_tracker.status[pad_tracker.status == 'Drilling'].size: #if "Drilled" pad_a doesn't have a frac hits with any "Drilling" pad.
return pad_a
def pick_drill_pad(pad_tracker):
"""
  Find the first "Undrilled" pad that is away from all "Completing" pads. Return "None" if no pad meets the criteria.
"""
for undrilled_pad in range (0, pad_tracker.status[pad_tracker.status == 'Undrilled'].size): #Loop through all "Undrilled" pads
pad_a = pad_tracker.id[pad_tracker.status == 'Undrilled'].iloc[undrilled_pad]
if pad_tracker.status[pad_tracker.status == 'Completing'].size == 0: #If no crew active we can pick first uncompl_pad
return pad_a
counter = 0 # Initialize counter
for completing_pad in range (0, pad_tracker.status[pad_tracker.status == 'Completing'].size): #Loop through all "Completing" pads
pad_b = pad_tracker.id[pad_tracker.status == 'Completing'].iloc[completing_pad]
if overlap(pad_a, pad_b) == False:
counter +=1
else:
#print ('Drilling has a frac hits risk:', pad_a, ' with ', pad_b)
break
if counter == pad_tracker.status[pad_tracker.status == 'Completing'].size: #if "Undrilled" pad_a doesn't have a frac hits with any "Completing" pad.
return pad_a
def parent_child_metric(pad_tracker):
"""
Sum & max of time (gap) from all parents that came on stream before the child.
Add the 2 metric columns to the pad_tracker
"""
pad_tracker['parent_child_gap_sum'] = 0
pad_tracker['parent_child_gap_max'] = 0
for pad_a in pad_tracker.id: #Assign pad_a
for pad_b in pad_tracker.id: #Assign pad_b
if pad_a == pad_b: #skip if pad_a equal pad_b
continue
if overlap(pad_a, pad_b): #If the pads are neighboor
parent_child_gap = (pad_tracker.compl_end[pad_tracker.id == pad_a].iloc[0] - pad_tracker.compl_end[pad_tracker.id == pad_b].iloc[0]).days #since soaking time is constant per pad, it doesn't need to be acocunted for here.
if parent_child_gap >= 1: #if pad_b precede pad_a
pad_tracker.parent_child_gap_sum[pad_tracker.id == pad_a] += parent_child_gap #Increment metrics (SUM)
if pad_tracker.parent_child_gap_max[pad_tracker.id == pad_a].iloc[0] < parent_child_gap: #if new value greater than old, replace with new value (MAX)
pad_tracker.parent_child_gap_max[pad_tracker.id == pad_a] = parent_child_gap
def compl_simops_metric(pad_tracker):
"""
Sum of time with 2 completion crews next to each other. Completion SIMOPS.
  Add 1 metric column to pad_tracker.
  Adjacent completion can be seen as a good thing since it doesn't require shutting in the offset being completed. Alternatively, it can also be considered detrimental if tied to induced seismicity.
"""
pad_tracker['compl_simops'] = 0
for pad_a in pad_tracker.id: #Assign pad_a
for pad_b in pad_tracker.id: #Assign pad_b
if pad_a == pad_b: #skip if pad_a equal pad_b
continue
if overlap(pad_a, pad_b): #If the pads are neighboors
a_start = pad_tracker.compl_start[pad_tracker.id == pad_a].iloc[0]
a_end = pad_tracker.compl_end[pad_tracker.id == pad_a].iloc[0]
b_start = pad_tracker.compl_start[pad_tracker.id == pad_b].iloc[0]
b_end = pad_tracker.compl_end[pad_tracker.id == pad_b].iloc[0]
if a_start < b_start < b_end < a_end: #if pad_b is within pad_a duration
pad_tracker.compl_simops[pad_tracker.id == pad_a] += (b_end - b_start).days
elif b_start < a_start < a_end < b_end: #if pad_a is within pad_b duration
pad_tracker.compl_simops[pad_tracker.id == pad_a] += (a_end - a_start).days
elif a_start < b_start < a_end < b_end: # if pad_a started before pad_b but also finished before pad_b
pad_tracker.compl_simops[pad_tracker.id == pad_a] += (a_end - b_start).days
elif b_start < a_start < b_end < a_end: # if pad_b started before pad_a but also finished before pad_a
pad_tracker.compl_simops[pad_tracker.id == pad_a] += (b_end - a_start).days
def land_metric(pad_tracker):
"""
Calculate land metrics.
-Was the pad on stream prior to the land expiring (boolean)?
  -What is the Freehold score? This score is an index-based one. The earlier the Freehold wells are drilled, the better. This is meant to capture the overall lower royalties from freehold land agreements.
"""
from datetime import datetime
pad_tracker['expired'] = 0
for n, date in enumerate(df.expiry):
if pad_tracker.compl_end[n] + soak_time > datetime.strptime(date, '%d/%m/%Y').date(): #if "on-stream" > expiry date
pad_tracker.expired[n] = df.n_wells[n]
pad_tracker['fh_score'] = 0
for m, fh_wells in enumerate (df.freehold_wells):
if fh_wells != 0:
pad_tracker.fh_score[m] += (df.shape[0] - m) * fh_wells
#---------------------------- VISUAL DISPLAY FUNCTIONS ------------------------
def plot_dev_rectangles(pad_tracker, metric_label):
"""
Take the pad_tracker and return a display of the dev program
A 'metric string' must be passed in order to plot the third graph
Descartes & pyplot is being used for display
"""
metric = pad_tracker[metric_label]
#Figure size & subplot
dx = max(df_well.toe_x.max(), df_well.heel_x.max()) - min(df_well.toe_x.min(), df_well.heel_x.min()) +1000 #Since we add +500 on the axes extend, we need to add +1000 to maintain proportions.
dy = max(df_well.toe_y.max(), df_well.heel_y.max()) - min(df_well.toe_y.min(), df_well.heel_y.min()) +1000
dim_ratio = dx/dy
dim = 10
fig = pyplot.figure(1, figsize=(3 * dim * dim_ratio, dim))
drill_plot = fig.add_subplot(131)
compl_plot = fig.add_subplot(132)
metric_plot = fig.add_subplot(133)
plots = [drill_plot, compl_plot, metric_plot]
#Axes and labels
for plot in plots:
#Set axes limit
plot.set_xlim(min(df_well.toe_x.min(), df_well.heel_x.min()) - 500, max(df_well.toe_x.max(), df_well.heel_x.max()) + 500)
plot.set_ylim(min(df_well.toe_y.min(), df_well.heel_y.min()) - 500, max(df_well.toe_y.max(), df_well.heel_y.max()) + 500)
#Set axes labels
plot.set_xlabel('Easting (m)')
drill_plot.set_ylabel('Northing (m)')
#Title
drill_plot.set_title('Drilling', fontsize=15, fontweight='bold')
compl_plot.set_title('Completion', fontsize=15, fontweight='bold')
metric_plot.set_title(metric_label, fontsize=15, fontweight='bold')
#Colors
keys = np.arange(dev_start.year, pad_tracker.compl_end.max().year + 1, 1)
#values = cm.rainbow(np.linspace(0, 1, keys.size)) #This color bar is better when more than six colors are needed.
values = ['r', 'y', 'g', 'c', 'royalblue', 'm', 'm', 'm', 'm', 'm'] #After 6 years, colors will default to purple.
color_dict = dict(zip(keys, values))
metric_keys = np.arange(0, metric.max() + 1, 1) #+1 makes it inclusive
metric_values = cm.Reds(np.linspace(0, 1, metric_keys.size))
metric_color_dict = dict(zip(metric_keys, metric_values))
#Legend
markers = [plt.Line2D([0,0],[0,0],color=color, marker='o', linestyle='') for color in color_dict.values()] #Create a "fake dot" for every color in color_dict
drill_plot.legend(markers, color_dict.keys(), numpoints=1)
compl_plot.legend(markers, color_dict.keys(), numpoints=1)
#Drilled Wells
for plot in plots:
for well in df_drilled.uwi.unique():
x = df_drilled.x[df_drilled.uwi == well]
y = df_drilled.y[df_drilled.uwi == well]
plot.plot(x,y, color = 'black', linewidth = 3, alpha = 0.8)
#Data
for pad in pad_tracker.id:
drill_color = color_dict[pad_tracker.drill_end[pad_tracker.id == pad].iloc[0].year]
drill_plot.add_patch(PolygonPatch(df.entity[df.id == pad].iloc[0], color=drill_color, alpha=0.8))
compl_color = color_dict[pad_tracker.compl_end[pad_tracker.id == pad].iloc[0].year]
compl_plot.add_patch(PolygonPatch(df.entity[df.id == pad].iloc[0], color=compl_color, alpha=0.8))
metric_color = metric_color_dict[metric[pad_tracker.id == pad].iloc[0]]
metric_plot.add_patch(PolygonPatch(df.entity[df.id == pad].iloc[0], color = metric_color, ec='lightgrey', alpha=0.8))
#Annotations
for plot in plots:
plot.annotate(df_pad.id_short[df_pad.id == pad].iloc[0],
xy=(df.x_mid[df.id == pad].iloc[0], df.y_mid[df.id == pad].iloc[0]),
fontsize=10,
ha='center',
va='center',
rotation= 90 - df.azi[df.id == pad].iloc[0]) #(90 - Azi) allows to convert well azimuth to text azimuth.
pyplot.show()
def format_fn(tick_val, tick_pos):
"""
Replace Y axis of gant chart with Pad ID labels
"""
if int(tick_val) in range(pad_tracker.shape[0]):
return list(pad_tracker.id)[int(tick_val)]
def plot_schedule():
"""
Produce a gant chart with drilling, completion and soaking dates
"""
fig, ax = plt.subplots(figsize=(15, 8))
#Data
for pad_id in pad_tracker.index: #Loop through each pad to create a point set (start and end). The tick is the line connecitong the 2 points.
y = [pad_id, pad_id]
dt = datetime.timedelta(days=6) #dt is used to trim the period. This reduce the overlap between bars because of the line width.
drill_period = [pad_tracker.drill_start[pad_id] + dt, pad_tracker.drill_end[pad_id] - dt]
plt.plot_date(drill_period, y, linestyle='-', linewidth=8, marker=None, color = 'blue')
compl_period = [pad_tracker.compl_start[pad_id] + dt, pad_tracker.compl_end[pad_id] - dt + compl_mob_demob]
plt.plot_date(compl_period, y, linestyle='-', linewidth=8, marker=None, color = 'red')
soak_period = [pad_tracker.compl_end[pad_id] + dt + compl_mob_demob, pad_tracker.compl_end[pad_id] + compl_mob_demob + soak_time - dt]
plt.plot_date(soak_period, y, linestyle='-', linewidth=8, marker=None, color = 'green')
#Title & Labels
plt.ylabel('Pad ID')
ax.set_title('Development Schedule', fontsize=15, fontweight='bold')
#Xaxis Formatting
plt.grid(which = 'major', axis = 'x', linewidth = 4 )
plt.grid(which = 'minor', axis = 'x', linewidth = 1 )
ax.xaxis.set_tick_params(rotation=0, labelsize=10)
ax.set_xticks(['2021-01-01', '2022-01-01', '2023-01-01', '2024-01-01', '2025-01-01', '2026-01-01', '2027-01-01', '2028-01-01', '2029-01-01'], minor=False)
ax.xaxis.set_minor_locator(MultipleLocator(30.5))
plt.xlim(dev_start, pad_tracker.compl_end.max() + soak_time) #soak_time is added so it doesn't get truncated
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d'))
#Yaxis Formating
plt.ylim(pad_tracker.shape[0], -0.5)
plt.grid(which = 'major', axis = 'y', linewidth=1 )
ax.yaxis.set_major_locator(MultipleLocator(1))
ax.yaxis.set_major_formatter(FormatStrFormatter('%.0f'))
ax.yaxis.set_major_formatter(FuncFormatter(format_fn))
#Legend
color_dict = {'Drilling': 'blue', 'Completion': 'red', 'Soaking': 'green'}
markers = [plt.Line2D([0,0],[0,0],color=color, marker='o', linestyle='') for color in color_dict.values()] #Create a "fake dot" for every color in color_dict
plt.legend(markers, color_dict.keys(), numpoints=1, fontsize=15)
"""
#Option to add water over time.
ax2 = ax.twinx()
plt.plot_date(time_tracker.date, time_tracker.clearwater_pond, linestyle='-', linewidth = 1, marker=None, color = 'royalblue')
plt.plot_date(time_tracker.date, time_tracker.red_deer_pond, linestyle='-', linewidth = 1, marker=None, color = 'mediumpurple')
plt.ylim(0, 1000000)
plt.ylabel('Pond_Volume (m3)')
plt.legend(loc='center right', fontsize=12)
"""
def plot_water():
"""
Show water over time. Best visual when plotted directly below plot_schedule.
"""
fig, ax = plt.subplots(figsize=(15, 4))
#Title
ax.set_title('Water Schedule', fontsize=15, fontweight='bold')
#Data
plt.plot_date(time_tracker.date, time_tracker.clearwater_pond, linestyle='-', linewidth = 1, marker=None, color = 'royalblue')
plt.plot_date(time_tracker.date, time_tracker.red_deer_pond, linestyle='-', linewidth = 1, marker=None, color = 'mediumpurple')
#Legend
plt.legend(loc='upper right', fontsize=12)
#Yaxis Formatting
plt.ylim(0, 1000000)
plt.ylabel('Pond_Volume (m', rotation=0, fontsize=15, color='white' ) #Tricks to move plot to the right
#Xaxis Formatting
plt.grid(which = 'major', axis = 'x', linewidth = 4 )
plt.grid(which = 'minor', axis = 'x', linewidth = 1 )
ax.xaxis.set_tick_params(rotation=0, labelsize=10)
ax.set_xticks(['2021-01-01', '2022-01-01', '2023-01-01', '2024-01-01', '2025-01-01', '2026-01-01', '2027-01-01', '2028-01-01', '2029-01-01'], minor=False)
ax.xaxis.set_minor_locator(MultipleLocator(30.5))
plt.xlim(dev_start, pad_tracker.compl_end.max() + soak_time) #soak_time is added so it doesn't get truncated
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d'))
###Output
_____no_output_____
###Markdown
FUNCTION: Main
###Code
def main(df):
"""
Nested while/for LOOPs.
  Level 1 continues until all pads have a "Completed" status.
Level 2 - Pond for LOOP
Increase water in pond based on latest time increment
Level 2 - Completion for LOOP
Try to assign all completion crews that are not active.
Level 2 - Drilling for LOOP
Try to assign all drilling rigs that are not active.
  Level 2 - Time increment function moving the clock forward before starting again at the beginning of the while LOOP (level 1)
Calculate metrics to quantify soft constraints (parent-child interactions, land expiries, etc)
"""
#Instantiate variables
date = dev_start #Initialize time
previous_date = date #Instantiate previous_date
rig_tracker, crew_tracker, pad_tracker, water_tracker, time_tracker = tracker_ini() #Initialize rig_tracker, crew_tracker, pad_tracker, water_tracker & time_tracker
n_pad_completed = 0 #Initialize n_pad_completed counter
#------------------------------------------------
while (pad_tracker.status == 'Completed').sum() - pad_tracker.status.size < 0: #This script will increment time until all pads are completed.
#Change pad status if drill_end equals ops_end.
for n, drill_end in enumerate (pad_tracker.drill_end):
if drill_end == date:
pad_tracker.ops_end.loc[n] = 'NA'
pad_tracker.status.loc[n] = 'Drilled'
#Change pad status if compl_end equals ops_end.
for n, compl_end in enumerate (pad_tracker.compl_end):
if compl_end == date:
pad_tracker.ops_end.loc[n] = 'NA'
pad_tracker.status.loc[n] = 'Completed'
#Change rig status if date equals drill_end.
for n, drill_end in enumerate (rig_tracker.drill_end):
if drill_end == date:
rig_tracker.drill_end.loc[n] = 'NA'
rig_tracker.status.loc[n] = 'Free'
#Change crew status if date equals compl_end.
for n, compl_end in enumerate (crew_tracker.compl_end):
if compl_end == date:
crew_tracker.compl_end.loc[n] = 'NA'
crew_tracker.status.loc[n] = 'Free'
#Reset pond yearly_vol to zero when year change
if previous_date.year != date.year:
water_tracker.yearly_vol[:] = 0
#------------------------------------------------
#Water LOOP. Place ahead of completion allows to add water to pond accumulated during the previous increment_date.
for pond in water_tracker.index: #Loop over each basin/pond
if water_tracker.pond_vol[pond] < water_tracker.pond_capa[pond]: #If the pond is not full
if water_tracker.yearly_vol[pond] < water_tracker.yearly_max[pond]: #If the yearly limit is not reached
withdraw_days = max_withdraw_days(previous_date, date) #Max days we can withdraw
if withdraw_days >= 1:
river_ability_vol = withdraw_days * water_tracker.max_withdraw_rate[pond] * 3600 * 24 #Days * withdraw_rate
pond_ability = water_tracker.pond_capa[pond] - water_tracker.pond_vol[pond] #Room in the pond
allowance_ability = water_tracker.yearly_max[pond] - water_tracker.yearly_vol[pond] #Room in our allowance limit
#Find minimum volume
incr_vol = min(river_ability_vol, pond_ability, allowance_ability)
#Increment pond and allowance volume
water_tracker.pond_vol[pond] += incr_vol
water_tracker.yearly_vol[pond] += incr_vol
#Completion LOOP. It is placed ahead of the drilling loop to give an uncompleted pad preference over an undrilled neighboor.
if (in_between(date, "winter") == False) and (enough_time_before_winter(date) == True): #Are we outside of Winter?
for crew in range(0, crew_tracker.status.str.contains('Free').sum()): #loop over crew with status free
if (pad_tracker.status == 'Drilled').sum() >= 1: #Check if there is at least a Drilled pad
new_pad = pick_compl_pad(pad_tracker, water_tracker)
if new_pad == None: #Handle the case where "pick_compl_pad()" did not find any pad
break
pad_index = pad_tracker[pad_tracker.id == new_pad].index[0] #Return index of the pad
pad_tracker.set_value(pad_index, 'status', 'Completing') #Change status to 'Completing'
pad_tracker.set_value(pad_index, 'compl_start', date) #Set compl_start to date
#Setting crew status to active
crew_index = crew_tracker[crew_tracker.status == 'Free'].iloc[0].crew #Find index of free rig
crew_tracker.set_value(crew_index, 'status', 'Active') #Assign status of crew to active
#Calculate completion time
if n_pad_completed >= stage_handicap_length:
completion_time = timedelta(days = int(df[df.id == new_pad].stage_tot.iloc[0] / stage_per_day)) #Calculate compl_time (stage_tot / stage_per_day)
else:
completion_time = timedelta(days = int(df[df.id == new_pad].stage_tot.iloc[0] / (stage_per_day + stage_per_day_handicap)))
#Setting end dates on pad and crew
compl_end_date = date + completion_time
pad_tracker.set_value(pad_index, 'compl_end', compl_end_date)
pad_tracker.set_value(pad_index, 'ops_end', compl_end_date)
crew_tracker.set_value(crew_index, 'compl_end', compl_end_date)
n_pad_completed += 1 #Increment pad completed counter
#Taking water out of pond in new_pad basin
water_tracker.pond_vol[[df.basin[df.id == new_pad].iloc[0]]] -= df.water_tot[df.id == new_pad].iloc[0]
time_tracker = time_tracker.append({'date': date, 'clearwater_pond': water_tracker.loc['Clearwater'].pond_vol, 'red_deer_pond': water_tracker.loc['Red Deer'].pond_vol}, ignore_index=True)
#Drilling LOOP
if in_between(date, "break_up") == False: #Are we outside of BU?
for rig in range(0, rig_tracker.status.str.contains('Free').sum()): #loop over rig with status free
if (pad_tracker.status == 'Undrilled').sum() >= 1: #Check if there is at least an undrilled pad
new_pad = pick_drill_pad(pad_tracker)
if new_pad == None: #Handle the case where "pick_drill_pad()" did not find any pad
break
#Setting pad_tracker info
pad_index = pad_tracker[pad_tracker.id == new_pad].index[0] #Return index of the pad
pad_tracker.set_value(pad_index, 'status', 'Drilling') #Change status to 'Drilling'
pad_tracker.set_value(pad_index, 'drill_start', date) #Set drill_start to date
#Setting rig status to active
rig_index = rig_tracker[rig_tracker.status == 'Free'].iloc[0].rig #Find index of free rig
rig_tracker.set_value(rig_index, 'status', 'Active') #Assign status of rig to active
#Setting end dates on pad and rig
drill_time = timedelta(days = int(df[df.id == new_pad].drill_days.iloc[0] * df[df.id == new_pad].n_wells.iloc[0])) #Calculate drill_time (days per well * n_wells)
end_date = drill_end_date(date, drill_time)
pad_tracker.set_value(pad_index, 'drill_end', end_date)
pad_tracker.set_value(pad_index, 'ops_end', end_date)
rig_tracker.set_value(rig_index, 'drill_end', end_date)
#Time increment
previous_date = date #This previous_date variable is used by the water_tracker to reset yearly_vol
date = increment_date(date, pad_tracker)
#Calculate metrics
parent_child_metric(pad_tracker) #Add the 2 parent child columns
compl_simops_metric(pad_tracker) #Add the 1 compl simops column
land_metric(pad_tracker) #Add 2 land expiry/FH columns
pad_tracker.pop('ops_end') #remove the ops_end column
pad_tracker.pop('status') #remove the status column
return pad_tracker, time_tracker
###Output
_____no_output_____
###Markdown
SCHEDULER: Default VersionThe PAD sequence is determined by the PAD csv input file. The sequence will be honoured unless a SIMOPS conflict arises.
###Code
%%time
df = df_pad
#Shuffle second part of a dataframe. Uncommment the next 3 lines to run on shuffle mode.
# df_a = df[: index_shuffle]
# df_b = df[index_shuffle :].sample(frac=1)
# df = df_a.append(df_b).reset_index(drop=True)
#Calling main function
pad_tracker, time_tracker = main(df)
#Plotting output
print(pad_tracker)
print()
print('Development schedule will take ', round((pad_tracker.compl_end.max() - dev_start).days / 365, 1), ' years')
print('The average parent-child gap is ', round(pad_tracker.parent_child_gap_max.mean(), 0), ' days')
print()
plot_dev_rectangles(pad_tracker, 'parent_child_gap_max')
print()
plot_schedule()
print()
plot_water()
print()
###Output
id drill_start drill_end ... compl_simops expired fh_score
0 Pad_0 2021-01-01 2021-07-15 ... 0 5 0
1 Pad_1 2021-01-01 2021-07-15 ... 0 5 0
2 Pad_2 2022-03-12 2022-07-10 ... 0 5 36
3 Pad_3 2021-07-15 2021-11-12 ... 0 0 51
4 Pad_4 2021-07-15 2021-11-12 ... 0 0 32
5 Pad_5 2022-03-12 2022-07-10 ... 0 5 60
6 Pad_6 2021-11-12 2022-03-12 ... 0 0 0
7 Pad_7 2021-11-12 2022-03-12 ... 0 0 39
8 Pad_8 2022-11-07 2023-03-07 ... 0 5 24
9 Pad_9 2022-11-07 2023-03-07 ... 0 0 0
10 Pad_10 2023-03-07 2023-07-05 ... 0 5 60
11 Pad_11 2022-07-10 2022-11-07 ... 0 0 0
12 Pad_12 2022-07-10 2022-11-07 ... 0 5 0
13 Pad_13 2023-03-07 2023-07-05 ... 0 5 0
14 Pad_14 2023-07-05 2023-11-02 ... 0 5 0
15 Pad_15 2023-11-02 2024-03-01 ... 0 0 0
16 Pad_16 2023-12-04 2024-06-16 ... 17 5 0
17 Pad_17 2024-03-01 2024-06-29 ... 17 0 0
18 Pad_18 2023-07-05 2023-11-02 ... 0 0 0
19 Pad_19 2024-07-03 2024-10-31 ... 0 0 0
[20 rows x 10 columns]
Development schedule will take 5.5 years
The average parent-child gap is 513.0 days
###Markdown
SCHEDULER: Ordered VersionThe PAD sequence can be passed as an ordered list of pad ids. The sequence will be honoured unless a SIMOPS conflict arises.
###Code
df = df_pad
ordered_list =['Pad_0',
'Pad_1',
'Pad_2',
'Pad_3',
'Pad_4',
'Pad_5',
'Pad_6',
'Pad_7',
'Pad_8',
'Pad_9',
'Pad_10',
'Pad_11',
'Pad_12',
'Pad_13',
'Pad_14',
'Pad_15',
'Pad_16',
'Pad_17',
'Pad_18',
'Pad_19']
df_list = []
for i in ordered_list:
df_list.append(df_pad[df_pad.id == i])
df = pd.concat(df_list)
#Calling main function
pad_tracker, time_tracker = main(df)
#Plotting output
print(pad_tracker)
print()
print('Development schedule will take ', round((pad_tracker.compl_end.max() - dev_start).days / 365, 1), ' years')
print('The average parent-child gap is ', round(pad_tracker.parent_child_gap_max.mean(), 0), ' days')
print()
plot_dev_rectangles(pad_tracker, 'parent_child_gap_sum')
print()
plot_schedule()
print()
plot_water()
print()
###Output
id drill_start drill_end ... compl_simops expired fh_score
0 Pad_0 2021-01-01 2021-07-15 ... 0 5 0
1 Pad_1 2021-01-01 2021-07-15 ... 0 5 0
2 Pad_2 2022-03-12 2022-07-10 ... 0 5 36
3 Pad_3 2021-07-15 2021-11-12 ... 0 0 51
4 Pad_4 2021-07-15 2021-11-12 ... 0 0 32
5 Pad_5 2022-03-12 2022-07-10 ... 0 5 60
6 Pad_6 2021-11-12 2022-03-12 ... 0 0 0
7 Pad_7 2021-11-12 2022-03-12 ... 0 0 39
8 Pad_8 2022-11-07 2023-03-07 ... 0 5 24
9 Pad_9 2022-11-07 2023-03-07 ... 0 0 0
10 Pad_10 2023-03-07 2023-07-05 ... 0 5 60
11 Pad_11 2022-07-10 2022-11-07 ... 0 0 0
12 Pad_12 2022-07-10 2022-11-07 ... 0 5 0
13 Pad_13 2023-03-07 2023-07-05 ... 0 5 0
14 Pad_14 2023-07-05 2023-11-02 ... 0 5 0
15 Pad_15 2023-11-02 2024-03-01 ... 0 0 0
16 Pad_16 2023-12-04 2024-06-16 ... 17 5 0
17 Pad_17 2024-03-01 2024-06-29 ... 17 0 0
18 Pad_18 2023-07-05 2023-11-02 ... 0 0 0
19 Pad_19 2024-07-03 2024-10-31 ... 0 0 0
[20 rows x 10 columns]
Development schedule will take 5.5 years
The average parent-child gap is 513.0 days
###Markdown
OPTIMIZER: Random SamplingThe SCHEDULER default version is used. The tail portion of the input sequence is shuffled. The "end" variable determines how many iterations are run. The "index_shuffle" variable in the parameters customization section determines which part of the sequence tail can be mixed and which part is locked. The soft constraint metrics tracked for each permutation are stored in the "scenario_tracker".
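As a toy illustration of the head/tail split (a sketch only, separate from the scheduler code; `index_shuffle_demo` is a made-up value standing in for the real `index_shuffle` parameter):
```
import pandas as pd

toy = pd.DataFrame({'id': ['Pad_0', 'Pad_1', 'Pad_2', 'Pad_3', 'Pad_4', 'Pad_5']})
index_shuffle_demo = 2                                   # hypothetical lock point
head = toy[: index_shuffle_demo]                         # locked portion, order preserved
tail = toy[index_shuffle_demo :].sample(frac=1)          # tail portion, shuffled
toy_shuffled = pd.concat([head, tail]).reset_index(drop=True)  # equivalent to the append call below
print(toy_shuffled.id.tolist())
```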
###Code
%%time
start = timeit.default_timer() #Start timer
#Set df
df = df_pad.copy() #create a copy (not a reference), so df_pad itself is never modified
#Instantiate Scenario_tracker
columns = ['order', 'duration', 'pc_sum_max', 'pc_sum_avg', 'pc_max_max', 'pc_max_avg', 'avg_ops_length', 'compl_simops', 'fh_score', 'expiry']
scenario_tracker = pd.DataFrame(columns=columns)
n = 0
end = 10000
while n < end:
#Shuffle second part of a dataframe.
df_a = df[: index_shuffle]
df_b = df[index_shuffle :].sample(frac=1)
df = df_a.append(df_b).reset_index(drop=True)
#Calling main function
pad_tracker, time_tracker = main(df)
#Metrics
data = {'order': [str(list(pad_tracker.id))],
'duration': round((pad_tracker.compl_end.max() + soak_time - dev_start).days / 365, 2), #from last soak to dev start.
'pc_sum_max': pad_tracker.parent_child_gap_sum.max(),
'pc_sum_avg': int(pad_tracker.parent_child_gap_sum.mean()),
'pc_max_max': pad_tracker.parent_child_gap_max.max(),
'pc_max_avg': int(pad_tracker.parent_child_gap_max.mean()),
'avg_ops_length': (pad_tracker.compl_end - pad_tracker.drill_start + soak_time).mean().days,
            'compl_simops': int(pad_tracker.compl_simops.sum()/2), #Division by 2 tracks the actual number of days that 2 crews were next to each other. This prevents double counting.
'fh_score': int(pad_tracker.fh_score.sum()),
'expiry': int(pad_tracker.expired.sum())
}
scenario = pd.DataFrame(data, columns = columns)
scenario_tracker = scenario_tracker.append(scenario)
#Print Time Progress
clear_output(wait=True)
stop = timeit.default_timer()
    if n < 2: #if fewer than 2 iterations have completed
expected_time = "Calculating..."
else:
expected_time = np.round(((stop - start) / (n / end)) / 60, 2)
print("Current progress:", np.round(n / end * 100, 2), "%")
print("Current Run Time:", np.round((stop - start) / 60, 2), "minutes")
print("Expected Run Time:", expected_time, "minutes")
n += 1
#Save result to csv
scenario_tracker.reset_index(drop=True).to_csv('scenario_tracker.csv') #Reset index before to save as csv
!cp scenario_tracker.csv /content/gdrive/My\ Drive/Colab\ Notebooks/Outputs
#Refresh the 'Files' tab, go under 'content' folder and right click for export to local drive.
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
OPTIMIZER: Hill ClimbingThe SCHEDULER: Ordered Version is used. Every pair is swapped (1-2, 1-3, 1-4, 1-5 [...] 2-3, 2-4, 2-5 [...] 3-4, 3-5 [...]). This creates n(n-1)/2 permutations, where n is the number of PADs that are free to move. The metrics tracked for each permutation are stored in the "scenario_tracker".
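A quick way to see where the n(n-1)/2 count comes from (a toy sketch, independent of the scheduler code; the value of `n_free` is arbitrary):
```
from itertools import combinations

n_free = 6                                     # hypothetical number of PADs free to move
pairs = list(combinations(range(n_free), 2))   # every unordered pair, i.e. every single swap
print(len(pairs), n_free * (n_free - 1) // 2)  # both print 15
```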
###Code
df = df_pad
ordered_list =['Pad_0',
'Pad_1',
'Pad_2',
'Pad_3',
'Pad_4',
'Pad_5',
'Pad_6',
'Pad_7',
'Pad_8',
'Pad_9',
'Pad_10',
'Pad_11',
'Pad_12',
'Pad_13',
'Pad_14',
'Pad_15',
'Pad_16',
'Pad_17',
'Pad_18',
'Pad_19']
df_list = []
for i in ordered_list:
df_list.append(df_pad[df_pad.id == i])
df = pd.concat(df_list)
start = timeit.default_timer() #Start timer
#Instantiate Scenario_tracker
columns = ['order', 'duration', 'pc_sum_max', 'pc_sum_avg', 'pc_max_max', 'pc_max_avg', 'avg_ops_length', 'compl_simops', 'fh_score', 'expiry']
scenario_tracker = pd.DataFrame(columns=columns)
#Create a copy of df we can use to generate df after each shuffle
df_copy = df.copy()
n = 0
end = (df_pad.shape[0] - index_shuffle) * (df_pad.shape[0] - index_shuffle - 1)/2 #number of unordered pairs: n*(n-1)/2
#Nested for loops run over the n*(n-1)/2 pairs
for a in np.arange(index_shuffle, df_pad.shape[0] -1):
for b in np.arange(a + 1, df_pad.shape[0]):
        df = df_copy.copy() #restart from the base order so each swap is applied to the original sequence
swap_a, swap_b = df.iloc[b].copy(), df.iloc[a].copy()
df.iloc[b],df.iloc[a] = swap_b, swap_a
#Calling main function
pad_tracker, time_tracker = main(df)
#Metrics
data = {'order': [str(list(pad_tracker.id))],
'duration': round((pad_tracker.compl_end.max() + soak_time - dev_start).days / 365, 2), #from last soak to dev start.
'pc_sum_max': pad_tracker.parent_child_gap_sum.max(),
'pc_sum_avg': int(pad_tracker.parent_child_gap_sum.mean()),
'pc_max_max': pad_tracker.parent_child_gap_max.max(),
'pc_max_avg': int(pad_tracker.parent_child_gap_max.mean()),
'avg_ops_length': (pad_tracker.compl_end - pad_tracker.drill_start + soak_time).mean().days,
                'compl_simops': int(pad_tracker.compl_simops.sum()/2), #Division by 2 tracks the actual number of days that 2 crews were next to each other. This prevents double counting.
'fh_score': int(pad_tracker.fh_score.sum()),
'expiry': int(pad_tracker.expired.sum())
}
scenario = pd.DataFrame(data, columns = columns)
scenario_tracker = scenario_tracker.append(scenario)
#Print Time Progress
clear_output(wait=True)
stop = timeit.default_timer()
        if n < 3: #if fewer than 3 iterations have completed
expected_time = "Calculating..."
else:
expected_time = np.round(((stop - start) / (n / end)) / 60, 2)
print("Current progress:", np.round(n / end * 100, 2), "%")
print("Current Run Time:", np.round((stop - start) / 60, 2), "minutes")
print("Expected Run Time:", expected_time, "minutes")
n += 1
#Save result to csv
scenario_tracker.reset_index(drop=True).to_csv('scenario_tracker.csv') #Reset index before to save as csv
!cp scenario_tracker.csv /content/gdrive/My\ Drive/Colab\ Notebooks/Outputs
#Refresh the 'Files' tab, go under 'content' folder and right click for export to local drive.
###Output
_____no_output_____ |
Part-II/02_arctic_insitu_pts/tutorial_02_with_exercises.ipynb | ###Markdown
Search for datasets coincident with a list of points__This notebook is an expanded version of `tutorial_02_demo.ipynb`, containing some exercises to practice some of the concepts and methods described here.__A physical oceanographer is interested in obtaining ICESat-2 sea ice height in Baffin Bay close to ARGO floats. This kind of search could be done using EarthData Search, first by getting the coordinates of the ARGO floats and then typing the coordinates into the search box. However, this workflow could get tedious, especially if the search needs to be repeated. Furthermore, the search is not easily made reproducible. Reproducibility is critical if you need to completely redo your analysis yourself, or if others want to recreate your analysis. By capturing the search in code, either in a notebook such as this one or in a script, you or anyone else can reproduce the search and any subsequent analysis.Similar use cases would be to select data coincident with a cruise, with ice mass balance buoys in the Arctic and Antarctic, or with the MOSAiC experiment.In this tutorial, we will use Python, but a similar approach could be taken using R, Matlab or IDL. You will convert a list of coordinates for ARGO floats into a GeoJSON file; use this file to write a query to the CMR API and order data. Finally, we will visualize the data to produce a plot similar to the one below. Learning objectives1. Convert a list of coordinates into a GeoJSON file.2. Write a query for the NASA CMR API.3. Submit the query and interpret the response.4. Order datasets returned by the query.5. Visualize the results. Import modulesThe Python ecosystem is organized into modules. A module must be imported before the contents of that module can be used. It is good practice to import modules in the first code cell of a notebook or at the top of your script. Not only does this make it clear which modules are being used, but it also ensures that the code fails at the beginning if one of the modules is not installed, rather than halfway through after crunching a load of data.For some modules, it is common practice to shorten the module names according to accepted conventions. For example, the plotting module `matplotlib.pyplot` is shortened to `plt`. It is best to stick to these conventions rather than making up your own short names so that people reading your code see immediately what you are doing.
###Code
import json
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import pandas as pd
import geopandas as gpd
import tutorial_cmr
###Output
_____no_output_____
###Markdown
Convert a list of coordinates to a GeoJSON fileThere are two steps to this: first, read the list of coordinates; second, write the coordinates as a GeoJSON file. We'll use `pandas` to read the file containing the coordinates because it offers a simple way to read comma separated text files (`csv`). The `GeoPandas` package, which extends `pandas` into the spatial realm, is then used to write a GeoJSON file._If you are not familiar with `pandas`, it's worth exploring._ What is GeoJSON?[__GeoJSON__](https://geojson.org/) is an open standard data format for simple geographic data and non-spatial attributes, such as points, lines and polygons. Before reading a file, it is always useful to have a look at it, especially with text files, because they might not be formatted nicely or may have some strange characters that you need to deal with. If you are working in JupyterLab you can use the Unix command `head` from within a notebook (see below) or in a terminal, even if you are running Mac or Windows. If you are running Windows and not working in JupyterLab, you can open the file in a text editor such as `notepad`, but make sure you don't save the file and __do not use a word processor__, as it will likely change the file._I use `head`. In Jupyter notebooks the `!` at the beginning of a line allows a shell command to be run_
###Code
!head argo_locations.csv
###Output
_____no_output_____
###Markdown
We can learn a number of things from the file listing above. The file has a header row, and the columns are separated by whitespace. This whitespace could be multiple spaces or tabs. `pandas.read_csv` can deal with this if the `delim_whitespace` keyword argument is set to `True`. Setting `header=0` tells `pandas.read_csv` to use row 0 as column headings.
###Code
argo_df = pd.read_csv('argo_locations.csv', header=0, delim_whitespace=True) # df is shorthand for Dataframe
argo_df.head() # df.tail() prints the last few lines
###Output
_____no_output_____
###Markdown
__Exercise:__ Take a look at `pstrack.dat` using `head`, and use `pandas.read_csv` to read the file into a pandas DataFrame. Converting the `pandas.DataFrame` to a GeoPandas dataframe is done simply using the `geopandas.GeoDataFrame` method. We need to tell this method which columns of `argo_df` contain spatial geometry information. Note, in the argument to `geopandas.points_from_xy`, the x coordinate is _Longitude_ and the y coordinate is _Latitude_.To complete the geographic information, we need to specify the coordinate reference system (CRS). Because we use latitude and longitude, the data are _unprojected_. However, latitude and longitude are on the World Geodetic System 1984 ellipsoid (WGS84) datum. We set the CRS using an EPSG code. EPSG stands for European Petroleum Survey Group. The code for WGS84 is 4326.
###Code
argo_gdf = gpd.GeoDataFrame(argo_df, geometry=gpd.points_from_xy(argo_df.Longitude, argo_df.Latitude), crs="EPSG:4326")
argo_gdf.head()
###Output
_____no_output_____
###Markdown
`argo_gdf` looks similar to `argo_df` but it has a __geometry__ column. This is the magic sauce that turns a dataframe into a geospatial dataframe. It's worth taking a quick look at the GeoJSON object, if only to take the mystery out of it. You can see that the object contains a collection of _features_. Each of these _features_ is information about an ARGO float on a given date. The column entries (_attributes_) for each float are listed as properties and the spatial information is the _geometry_.
###Code
# print(json.dumps(json.loads(argo_gdf.to_json()), indent=1))
###Output
_____no_output_____
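###Markdown
If the full dump is too long to read, you can peek at a single feature instead. This is just a convenience sketch using the same `json` round-trip as the commented-out line above; `'features'` is a standard key in any GeoJSON FeatureCollection.
```
geojson_dict = json.loads(argo_gdf.to_json())
print(json.dumps(geojson_dict['features'][0], indent=1))
```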
###Markdown
`argo_gdf` can be written to a GeoJSON formatted file using the `to_file` method.
###Code
argo_gdf.to_file('argo-data.geojson', driver='GeoJSON')
###Output
_____no_output_____
###Markdown
While we've gone through this step by step, coordinate data can be converted from a text file to a GeoJSON file in three lines of code.```argo_df = pd.read_csv('argo_locations.csv', header=0, delim_whitespace=True)argo_gdf = gpd.GeoDataFrame(argo_df, geometry=gpd.points_from_xy(argo_df.Longitude, argo_df.Latitude), crs="EPSG:4326")argo_gdf.to_file('argo-data.geojson', driver='GeoJSON')``` __Exercise:__ Create a GeoPandas Dataframe from `pstrack.dat` and then write this Dataframe to a GeoJSON file. Submit a query via the CMR API_CMR_ is the __Common Metadata Repository__. It is a metadata system that catalogs Earth Science data and associated service metadata records. These metadata records can be discovered and accessed through programmatic interfaces leveraging standard protocols and an Application Programming Interface or API. An API takes a request or set of instructions from a device, your computer, to a service, in this case NASA's CMR, and returns a response. This short [video](https://www.youtube.com/watch?v=s7wmiS2mSXY) gives a nice explanation of APIs.There are a number of Python modules that provide a stripped down interface with the CMR API:- [`pyCMR`](https://github.com/nasa/pyCMR);- [`python-cmr`](https://github.com/jddeal/python-cmr);- [`icepyx`](https://github.com/icesat2py/icepyx). `pyCMR` and `python-cmr` search the CMR. `icepyx` is a tool designed specifically for ICESat-2 data. However, these modules do not allow access to all of the CMR API functionality, so we have written an ad-hoc module `tutorial_cmr` for search and download just for these tutorials. `tutorial_cmr` is imported along with the other modules at the top of this notebook. The module uses the `requests` package. Useful overviews of `requests` can be found [here](https://requests.readthedocs.io/en/master/user/quickstart/) and [here](https://realpython.com/python-requests/). Take a look at `tutorial_cmr.py` if you want to find out more about how we use `requests` with the CMR API. __Hint:__ In Python, to find out how to use a function you can type `help()` or `?`. If the function has a _docstring_ (__All functions should have one__), it will be printed.
###Code
help(tutorial_cmr.search_granules)
###Output
_____no_output_____
###Markdown
__Exercise:__ See what output you get when you type `tutorial_cmr.search_granules?` `tutorial_cmr.search_granules` takes a dictionary of [CMR search parameters](https://cmr.earthdata.nasa.gov/search/site/docs/search/api.htmlgranule-search-by-parameters) and an optional GeoJSON file if you specify a spatial search.In this example, we are searching for version 3, ICESat-2 sea ice surface height, which has the `short_name` ATL07, for the first three days in January 2020, coincident with the locations of our selected ARGO floats.
###Code
search_parameters = {
"short_name": "ATL07",
"version": "003", # CMR searches for most recent version
"temporal": "2020-01-01T00:00:00Z,2020-01-03T23:59:59Z",
}
search_results = tutorial_cmr.search_granules(search_parameters, geojson="argo-data.geojson")
###Output
_____no_output_____
###Markdown
We can find 3 granules that match these criteria. By default, `tutorial_cmr.search_granules` returns a decoded JSON object. This is a Python dictionary object.Python dictionaries are collections of `key: value` pairs. Values can be numbers, strings, dictionaries, lists and other Python objects. Values in dictionaries are accessed with the following syntax```dictionary[key]``` __Exercise:__ Type `search_results` to see the structure of the response from `tutorial_cmr.search_granules`. __Exercise:__ Find the _key_ `'entry'` and display the first entry. Note that the _value_ for `'entry'` is a `list`. Lists can be accessed with an index, for example:```a_list[0]``` will print the first element of `a_list`.__Hint:__ `'entry'` is part of a _nested_ dictionary. You can access a _value_ of a nested dictionary by tagging the appropriate _key_ of the nested dictionary onto the command to access the _parent_ dictionary, as follows:```parent_dictionary[key][key_for_nested_dict]``` There is a lot of useful information in the JSON structure returned from `tutorial_cmr.search_granules`. You can use the methods from the two exercises above to access fields in this information.Fields of immediate interest are likely to be the date and the polygon containing the granule, as well as the url for the H5 file containing the actual data. `tutorial_cmr` contains helper functions to access time and spatial information, and to print the url for the H5 file for each granule.
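Before moving on, here is a toy example of the nested access pattern described above. The dictionary and its keys are invented purely for illustration; the real CMR response has its own structure, which the exercises ask you to explore.
```
toy = {'outer': {'inner': [{'name': 'first'}, {'name': 'second'}]}}
print(toy['outer']['inner'][0]['name'])  # prints 'first'
```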
###Code
tutorial_cmr.print_urls(search_results)
###Output
_____no_output_____
###Markdown
It is also useful to check that the granules are for the correct domain. `tutorial_cmr.results_to_geodataframe` returns a GeoDataFrame that can be used to plot the spatial extent of the granules. See `tutorial_cmr.py` for how this is done.
###Code
results_gdf = tutorial_cmr.results_to_geodataframe(search_results)
results_gdf
###Output
_____no_output_____
###Markdown
We can use `cartopy` and `matplotlib` to plot the ARGO float locations and granule extent polygons. If you don't know these modules, it is worth learning them because they are very useful.One thing to note is that we change the projection of `results_gdf` using the `to_crs()` method. By default polygon coordinates are unprojected latitudes and longitudes on the WGS84 datum. Many, but not all, plotting routines have trouble plotting polygons and lines that cross the poles. Re-projecting the geometries to a projected grid, such as the [NSIDC North Polar Stereographic grid](https://nsidc.org/data/polar-stereo/ps_grids.html), avoids this issue.To see the problem, try replacing `results_gdf.to_crs("EPSG:3413").plot(ax=ax, transform=NSIDCNorthPolarStereo)` with`results_gdf.plot(ax=ax, transform=ccrs.PlateCarree())`
###Code
# Define NSIDC North Polar Stereographic projection
NSIDCNorthPolarStereo = ccrs.Stereographic(central_longitude=-45., central_latitude=90., globe=None)
map_extent = [-5000000.0, 5000000.0, -5000000.0, 5000000.0]
fig = plt.figure(figsize=(7,7))
ax = fig.add_subplot(projection=NSIDCNorthPolarStereo)
ax.set_extent(map_extent, NSIDCNorthPolarStereo)
ax.add_feature(cfeature.LAND)
results_gdf.to_crs("EPSG:3413").plot(ax=ax, transform=NSIDCNorthPolarStereo)
argo_gdf.to_crs("EPSG:3413").plot(c="r", ax=ax, transform=NSIDCNorthPolarStereo)
results_gdf
###Output
_____no_output_____
###Markdown
We have already seen how to get a list of urls for the data files returned by a search using `tutorial_cmr.filter_urls`. To download the files from NSIDC we use `tutorial_cmr.download`. Downloading files requires your EarthData username and password. You should __never__ store login credentials in a notebook, script or program. One way around this is to create a `.netrc` (on Unix/Linux platforms) or `_netrc` (on Windows) file. On Unix/Linux machines, `.netrc` is stored in your home directory. A simple `.netrc` file with a single entry for EarthData will look like:```machine urs.earthdata.nasa.gov login password ```On a Windows machine it is kept in `C:\Users\"username"` and has the following format:```machine urs.earthdata.nasa.govlogin password ````tutorial_cmr.download` first looks for a `netrc` file. If it doesn't find one, it will prompt for a username and password. So don't worry about setting one up right now. However, it is worth doing so in the future.
###Code
%%time
urls = tutorial_cmr.filter_urls(search_results)
for i, url in enumerate(urls):
print(f"{i} {url}")
#tutorial_cmr.download(urls[4:]) # Downloads the last two files in urls - urls[4:] "slices" array from 4th element to end of array
tutorial_cmr.download2(urls[4:]) # Alternative download function that uses urllib
###Output
_____no_output_____
###Markdown
Read ICESat-2 ATL07 granule using `xarray`ICESat-2 data are served in _Hierarchical Data Format version 5_ (HDF5) files. You can find information about HDF5 from this [NASA page](https://earthdata.nasa.gov/esdis/eso/standards-and-references/hdf5). Information about the structure of ATL07 HDF5 files can be found [here](https://nsidc.org/data/ATL07).You can view the structure of any HDF5 file using `h5dump -H`, which displays the header information of the file. `h5dump -n` returns a list of file contents. `h5dump` is a powerful tool that can be used to explore an HDF5 file and subset the file. The full range of options for `h5dump` can be seen by typing `h5dump -h`.__Note:__ _`h5dump` is a shell command. To run a shell command from a Jupyter Notebook, type `!` at the beginning of the line._ Running the cell below lists the contents of the HDF5 file; the output can be long.__Hint:__ _In Jupyter Notebooks, clicking on the blue vertical bar to the left of an output cell collapses that cell. You might need to click on the output cell contents to see the vertical blue line_
###Code
!h5dump -n ATL07-01_20200103213055_01250601_003_02.h5
###Output
_____no_output_____
###Markdown
ATL07 data can be read into any number of Python objects, including `numpy` arrays and `pandas` Dataframes. I'm a big fan of `xarray`, which is a Python package designed to work with multi-dimensional arrays. See the [xarray website](http://xarray.pydata.org/en/stable/) for more information. You can find examples of using `pandas` to work with ICESat-2 data in `04_melt_pond/tutorial_helper_functions.py`.`xarray` creates Dataset objects that have a similar structure to NetCDF files. Variables can have dimensions, coordinates and attributes. _Pandas_ does not have this feature.The function below reads a selection of sea ice height variables from an ATL07 granule into an `xarray.Dataset`.
###Code
import h5py
import xarray as xr
def parse_attrs(attrs):
"""Unpacks HDF5 attributes"""
result = {}
for k, v in attrs.items():
if isinstance(v, np.bytes_):
result[k] = v.astype(str)
elif k == "_FillValue":
result[k] = v[0]
else:
result[k] = v
return result
def read_atl07(filepath, beam='gt2l'):
"""Read ATL07 (Sea Ice Height)"""
f = h5py.File(filepath, 'r')
ds = xr.Dataset({
'height': (['x'],
f[beam]['sea_ice_segments']['heights']['height_segment_height'][:],
parse_attrs(f[beam]['sea_ice_segments']['heights']['height_segment_height'].attrs)),
'surface_type': (['x'],
f[beam]['sea_ice_segments']['heights']['height_segment_type'][:],
parse_attrs(f[beam]['sea_ice_segments']['heights']['height_segment_type'].attrs)),
'segment_length': (['x'],
f[beam]['sea_ice_segments']['heights']['height_segment_length_seg'][:],
parse_attrs(f[beam]['sea_ice_segments']['heights']['height_segment_length_seg'].attrs)),
'segment_quality': (['x'],
f[beam]['sea_ice_segments']['heights']['height_segment_quality'][:],
parse_attrs(f[beam]['sea_ice_segments']['heights']['height_segment_quality'].attrs)),
'geoseg_beg': (['x'],
f[beam]['sea_ice_segments']['geoseg_beg'][:],
parse_attrs(f[beam]['sea_ice_segments']['geoseg_beg'].attrs)),
'geoseg_end': (['x'],
f[beam]['sea_ice_segments']['geoseg_end'][:],
parse_attrs(f[beam]['sea_ice_segments']['geoseg_end'].attrs)),
'latitude': (['x'],
f[beam]['sea_ice_segments']['latitude'][:],
parse_attrs(f[beam]['sea_ice_segments']['latitude'].attrs)),
'longitude': (['x'],
f[beam]['sea_ice_segments']['longitude'][:],
parse_attrs(f[beam]['sea_ice_segments']['longitude'].attrs)),
'segment_id': (['x'],
f[beam]['sea_ice_segments']['height_segment_id'][:],
parse_attrs(f[beam]['sea_ice_segments']['height_segment_id'].attrs)),
},
coords={
'x': (['x'],
f[beam]['sea_ice_segments']['seg_dist_x'][:],
parse_attrs(f[beam]['sea_ice_segments']['seg_dist_x'].attrs)),
})
return ds
###Output
_____no_output_____
###Markdown
We'll read `ATL07-01_20200103213055_01250601_003_02.h5`. This is the ICESat-2 track that crosses Baffin Bay.`read_atl07` returns an `xarray.Dataset` object. If you are familiar with NetCDF, you'll notice that the structure of the Dataset is similar to a NetCDF file with dimensions, coordinates and variables. `read_atl07` is a custom function to read an ATL07 granule for this tutorial. If you want to read different variables, you can easily modify `read_atl07` to read those variables.
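If you prefer not to edit the function, a small helper like the sketch below can pull out any single dataset from the file. The helper name is ours, not part of the tutorial module, and the `group_path` example in the docstring uses a dataset we already know exists because `read_atl07` reads it; check the `h5dump -n` output for other names.
```
import h5py

def read_extra_variable(filepath, beam, group_path):
    """Read one additional dataset from an ATL07 file.

    group_path is a list of group/dataset names, e.g.
    ['sea_ice_segments', 'heights', 'height_segment_height'].
    """
    with h5py.File(filepath, 'r') as f:
        node = f[beam]
        for name in group_path:
            node = node[name]   # walk down the group hierarchy
        return node[:]          # read the dataset into memory
```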
###Code
ds = read_atl07('ATL07-01_20200103213055_01250601_003_02.h5')
ds
###Output
_____no_output_____
###Markdown
Sea ice surface height can be plotted using the following code. Using the `xarray` plot method automatically labels the x and y axes of the plot.Even though the ICESat-2 ground track crosses Baffin Bay, there are missing height values. This is because ATL07 sea ice height is only processed for returns with > 15% ice concentration.
###Code
fig, ax = plt.subplots(figsize=(20,7))
ds.height.plot(ax=ax, linestyle='', marker='o', markersize=2)
###Output
_____no_output_____
###Markdown
It is useful to see surface height with respect to other parameters; for example `segment_quality` and `surface_type`. Unfortunately, `xarray` doesn't have this facility but we can use `matplotlib` to show these features.
###Code
from matplotlib.colors import ListedColormap, BoundaryNorm
quality_cmap = ListedColormap(['lightslategray', 'cyan'])
surface_cmap = ListedColormap(['slategray', 'cyan', 'blue'])
bounds = [-0.5, 0.5, 1.5, 9.5]
surface_norm = BoundaryNorm(bounds, ncolors=3, clip=True)
fig, ax = plt.subplots(2, 1, figsize=(20,7))
quality_plot = ax[0].scatter(ds.x, ds.height, c=ds.segment_quality, cmap=quality_cmap, s=2)
quality_legend = ax[0].legend(*quality_plot.legend_elements(), loc="upper left", title="Quality")
surface_plot = ax[1].scatter(ds.x, ds.height, c=ds.surface_type, cmap=surface_cmap, norm=surface_norm, s=2)
surf_handles, surf_labels = surface_plot.legend_elements()
surface_legend = ax[1].legend(surf_handles, surf_labels, #["Cloud covered", "Other", "Lead"],
loc="upper left", title="Surface Type")
###Output
_____no_output_____
###Markdown
The plot legends show the values of flag for `segment_quality` and `surface_type`. You can access the meanings for these flags in the attributes of each variable.Attributes for each variable in an `xarray.Dataset` are accessed by `ds.variable_name.attrs`. This is a dictionary. The `flag_values` attribute is an array of integers. However, `flag_meanings` is a string of meanings. This string needs to be split using the `.split()` string method. The resulting array of strings corresponding to each flag can be joined (or __zipped__) with the flag values and printed out.
###Code
for flag_value, flag_meaning in zip(ds.surface_type.attrs['flag_values'], ds.surface_type.attrs['flag_meanings'].split()):
print(f"{flag_value} {flag_meaning}")
###Output
_____no_output_____
###Markdown
We can also plot surface height on a map. We could plot the whole Arctic but because we are interested in sea ice height in Baffin Bay, close to the ARGO floats, we'll "zoom" in on this area. I want to center the plot on the ARGO floats. The code below finds the bounding box of the floats using the `total_bounds` method of the `argo_gdf` `geopandas` object. Then with a little trial and error, I have chosen a distance `dx` and `dy` around this center point. I then set `baffin_extent`.
###Code
bounds = argo_gdf.to_crs("EPSG:3413").total_bounds # Returns [minx, miny, maxx, maxy]
bounds
xcenter = 0.5 * (bounds[0] + bounds[2])
ycenter = 0.5 * (bounds[1] + bounds[3])
dx = 1750000.
dy = 2000000.
baffin_extent = [xcenter-dx, xcenter+dx, ycenter-dy, ycenter+dy]
fig = plt.figure(figsize=(7,7))
ax = fig.add_subplot(projection=NSIDCNorthPolarStereo)
ax.set_extent(baffin_extent, NSIDCNorthPolarStereo)
ax.add_feature(cfeature.LAND)
ax.set_title("ICESat-2 Sea Ice Height and ARGO float position")
ds.plot.scatter('longitude', 'latitude', hue='height', ax=ax, transform=ccrs.PlateCarree(), vmin=0., vmax=.6)
argo_gdf.to_crs("EPSG:3413").plot(c="r", ax=ax, transform=NSIDCNorthPolarStereo, label='ARGO Float')
ax.legend()
#fig.savefig('tutorial_03_intro_plot.png')
###Output
_____no_output_____ |
07_3_Machine_Learning_Models.ipynb | ###Markdown
Machine Learning -- Model Training and Evaluation----- IntroductionIn this tutorial, we'll discuss how to formulate a policy problem or a social science question in the machine learning framework; how to transform raw data into something that can be fed into a model; how to build, evaluate, compare, and select models; and how to reasonably and accurately interpret model results. You'll also get hands-on experience using the `scikit-learn` package in Python. This tutorial is based on chapter "Machine Learning" of [Big Data and Social Science](https://coleridge-initiative.github.io/big-data-and-social-science/). Setup
###Code
import pandas as pd
import numpy as np
import sqlite3
import sklearn
import seaborn as sns
import matplotlib.pyplot as plt
from dateutil.parser import parse
from sklearn.metrics import precision_recall_curve, roc_curve, auc, confusion_matrix, accuracy_score, precision_score, recall_score
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.tree import DecisionTreeClassifier
sns.set_style("white")
DB = 'ncdoc.db'
conn = sqlite3.connect(DB)
cur = conn.cursor()
###Output
_____no_output_____
###Markdown
Problem Formulation--- Our Machine Learning Problem>Of all prisoners released, we would like to predict who is likely to reenter jail within *5* years of the day we make our prediction. For instance, say it is Jan 1, 2009 and we want to identify which >prisoners are likely to re-enter jail between now and end of 2013. We can run our predictive model and identify who is most likely at risk. The is an example of a *binary classification* problem. Note the outcome window of 5 years is completely arbitrary. You could use a window of 5, 3, 1 years or 1 day. In order to predict recidivism, we will be using data from the `inmate` and `sentences` table to create labels (predictors, or independent variables, or $X$ variables) and features (dependent variables, or $Y$ variables). We need to munge our data into **labels** (1_Machine_Learning_Labels.ipynb) and **features** (2_Machine_Learning_Features.ipynb) before we can train and evaluate **machine learning models** (3_Machine_Learning_Models.ipynb).This notebook assumes that you have already worked through the `1_Machine_Learning_Labels` and `2_Machine_Learning_Features` notebooks. If that is not the case, the following functions allow you to bring in the functions developed in those notebooks from `.py` scripts.
###Code
# We are bringing in the create_labels and create_features functions covered in previous notebooks
# Note that these are brought in from scripts rather than from external packages
from create_labels import create_labels
from create_features import create_features
# These functions make sure that the tables have been created in the database.
create_labels('2008-12-31', '2009-01-01', '2013-12-31', conn)
create_labels('2013-12-31', '2014-01-01', '2018-12-31', conn)
create_features('2008-12-31', '2009-01-01', '2013-12-31', conn)
create_features('2013-12-31', '2014-01-01', '2018-12-31', conn)
###Output
_____no_output_____
###Markdown
Create Training and Test Sets--- Our Training SetWe create a training set that takes people at the beginning of 2009 and defines the outcome based on data from 2009-2013 (`recidivism_labels_2009_2013`). The features for each person are based on data up to the end of 2008 (`features_2000_2008`).*Note:* It is important to segregate your data based on time when creating features. Otherwise there can be "leakage", where you accidentally use information that you would not have known at the time.
###Code
sql_string = "drop table if exists train_matrix;"
cur.execute(sql_string)
sql_string = "create table train_matrix as "
sql_string += "select l.inmate_doc_number, l.recidivism, f.num_admits, f.length_longest_sentence, f.age_first_admit, f.age "
sql_string += "from recidivism_labels_2009_2013 l "
sql_string += "left join features_2000_2008 f on f.inmate_doc_number = l.inmate_doc_number "
sql_string += ";"
cur.execute(sql_string)
###Output
_____no_output_____
###Markdown
We then load the training data into `df_training`.
###Code
sql_string = "SELECT *"
sql_string += "FROM train_matrix "
sql_string += ";"
df_training = pd.read_sql(sql_string, con = conn)
df_training.head(5)
###Output
_____no_output_____
###Markdown
Our Test (Validation) SetIn the machine learning process, we want to build models on the training set and evaluate them on the test set. Our test set will use labels from 2014-2018 (`recidivism_labels_2014_2018`), and our features will be based on data up to the end of 2013 (`features_2000_2013`).
###Code
sql_string = "drop table if exists test_matrix;"
cur.execute(sql_string)
sql_string = "create table test_matrix as "
sql_string += "select l.inmate_doc_number, l.recidivism, f.num_admits, f.length_longest_sentence, f.age_first_admit, f.age "
sql_string += "from recidivism_labels_2014_2018 l "
sql_string += "left join features_2000_2013 f on f.inmate_doc_number = l.inmate_doc_number "
sql_string += ";"
cur.execute(sql_string)
###Output
_____no_output_____
###Markdown
We load the test data into `df_test`.
###Code
sql_string = "SELECT *"
sql_string += "FROM test_matrix "
sql_string += ";"
df_test = pd.read_sql(sql_string, con = conn)
df_test.head()
###Output
_____no_output_____
###Markdown
Data CleaningBefore we proceed to model training, we need to clean our training data. First, we check the percentage of missing values.
###Code
isnan_training_rows = df_training.isnull().any(axis=1)
nrows_training = df_training.shape[0]
nrows_training_isnan = df_training[isnan_training_rows].shape[0]
print('% of rows with NaNs: {}'.format(float(nrows_training_isnan)/nrows_training))
###Output
_____no_output_____
###Markdown
We see that about 1% of the rows in our data have missing values. In the following, we will drop rows with missing values. Note, however, that better ways of dealing with missing values exist.
###Code
df_training = df_training[~isnan_training_rows]
###Output
_____no_output_____
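###Markdown
Dropping rows is the simplest approach. A common alternative, shown here only as a sketch (we do not apply it in this notebook), is to impute the missing values before dropping anything, for example with `scikit-learn`'s `SimpleImputer`:
```
from sklearn.impute import SimpleImputer

# Sketch: replace NaNs in the numeric feature columns with the column median.
feature_cols = ['num_admits', 'length_longest_sentence', 'age_first_admit', 'age']
imputer = SimpleImputer(strategy='median')
imputed_features = imputer.fit_transform(df_training[feature_cols])
```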
###Markdown
Let's check if the values of the ages at first admit are reasonable.
###Code
np.unique( df_training['age_first_admit'] )
###Output
_____no_output_____
###Markdown
Looks like this needs some cleaning. We will drop any rows that have age 99.
###Code
keep = (df_training['age_first_admit'] >= 14) & (df_training['age_first_admit'] < 99)  # drop the age-99 rows noted above
df_training = df_training[keep]
###Output
_____no_output_____
###Markdown
Let's check how much data we still have and how many examples of recidivism are in our training dataset. When it comes to model evaluation, it is good to know what the "baseline" is in our dataset.
###Code
print('Number of rows: {}'.format(df_training.shape[0]))
df_training['recidivism'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
We have about 155,000 examples, and about 25% of those are *positive* examples (recidivist), which is what we're trying to identify. About 75% of the examples are *negative* examples (non-recidivist).Next, let's take a look at the test set.
###Code
isnan_test_rows = df_test.isnull().any(axis=1)
nrows_test = df_test.shape[0]
nrows_test_isnan = df_test[isnan_test_rows].shape[0]
print('% of rows with NaNs: {}'.format(float(nrows_test_isnan)/nrows_test))
###Output
_____no_output_____
###Markdown
We see that about 1% of the rows in our test set have missing values. This matches what we'd expect based on what we saw in the training set.
###Code
df_test = df_test[~isnan_test_rows]
###Output
_____no_output_____
###Markdown
As before, we drop cases with age 99.
###Code
keep = (df_test['age_first_admit'] >= 14) & (df_test['age_first_admit'] < 99)  # drop the age-99 rows, as in the training set
df_test = df_test[keep]
###Output
_____no_output_____
###Markdown
We also check the number of observations and the outcome distribution for our test data.
###Code
print('Number of rows: {}'.format(df_test.shape[0]))
df_test['recidivism'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Split into features and labelsHere we select our features and outcome variable.
###Code
sel_features = ['num_admits', 'length_longest_sentence', 'age_first_admit', 'age']
sel_label = 'recidivism'
###Output
_____no_output_____
###Markdown
We can now create an X- and y-training and X- and y-test object to train and evaluate prediction models with `scikit-learn`.
###Code
X_train = df_training[sel_features].values
y_train = df_training[sel_label].values
X_test = df_test[sel_features].values
y_test = df_test[sel_label].values
###Output
_____no_output_____
###Markdown
Model Training--- On this basis, we can now build a prediction model that learns the relationship between our predictors (`X_train`) and recidivism (`y_train`) in the training data. We start with using logistic regression as our first model.
###Code
model = LogisticRegression(penalty = 'none')
model.fit( X_train, y_train )
print(model)
###Output
_____no_output_____
###Markdown
When we print the model object, we see different model settings that can be adjusted. To adjust these parameters, one would alter the call that creates the `LogisticRegression()` model instance, passing it one or more of these parameters with a value other than the default. So, to re-fit the model with `penalty` of "elasticnet", `C` of 0.01, and `intercept_scaling` of 2 (as an example), you'd create your model as follows: model = LogisticRegression(penalty = 'elasticnet', C = 0.01, intercept_scaling = 2)The basic way to choose values for, or "tune," these parameters is the same as the way you choose a model: fit the model to your training data with a variety of parameters, and see which perform the best on the test set. However, an obvious drawback is that you can also *overfit* to your test set. In this case, you can (and should) alter the validation method (e.g., split the data into a training, validation and test set or run cross-validation in the training set).Let's look at what the model learned, i.e. what the coefficients are.
###Code
model.coef_[0]
model.intercept_
###Output
_____no_output_____
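###Markdown
As an aside on tuning: rather than trying parameter values directly against the test set, a safer routine is cross-validation within the training data. The sketch below is illustrative only; the parameter grid is small and arbitrary, not a recommendation.
```
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression

param_grid = {'C': [0.01, 0.1, 1.0], 'penalty': ['l2']}
grid = GridSearchCV(LogisticRegression(max_iter=1000), param_grid, cv=5, scoring='precision')
grid.fit(X_train, y_train)
print(grid.best_params_)
```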
###Markdown
Model Evaluation ---Machine learning models usually do not produce a prediction (0 or 1) directly. Rather, models produce a score (that can sometimes be interpreted as a probability) between 0 and 1, which lets you more finely rank all of the examples from *most likely* to *least likely* to have label 1 (positive). This score is then turned into a 0 or 1 based on a user-specified threshold. For example, you might label all examples that have a score greater than 0.5 as positive (1), but there's no reason that has to be the cutoff.
###Code
y_scores = model.predict_proba(X_test)[:,1]
###Output
_____no_output_____
###Markdown
Let's take a look at the distribution of scores and see if it makes sense to us.
###Code
sns.distplot(y_scores, kde=False, rug=False)
###Output
_____no_output_____
###Markdown
Our distribution of scores is skewed, with the majority of scores on the lower end of the scale. We expect this because 75% of the training data is made up of non-recidivists, so we'd guess that a higher proportion of the examples in the test set will be negative (meaning they should have lower scores).
###Code
df_test['y_score'] = y_scores
###Output
_____no_output_____
###Markdown
Tools like `scikit-learn` often have a default threshold of 0.5, but a good threshold is selected based on the data, model and the specific problem you are solving. As a trial run, let's set a threshold of 0.5.
###Code
calc_threshold = lambda x,y: 0 if x < y else 1
predicted = np.array( [calc_threshold(score,0.5) for score in y_scores] )
expected = y_test
###Output
_____no_output_____
###Markdown
Confusion MatrixOnce we have turned our scores into 0 or 1 for classification, we create a *confusion matrix*, which has four cells: true negatives, true positives, false negatives, and false positives. If an example was predicted to be negative and is negative, it's a true negative. If an example was predicted to be positive and is positive, it's a true positive. If an example was predicted to be negative and is positive, it's a false negative. If an example was predicted to be positive and is negative, it's a false positive.
###Code
conf_matrix = confusion_matrix(expected,predicted)
print(conf_matrix)
###Output
_____no_output_____
###Markdown
The count of true negatives is `conf_matrix[0,0]`, false negatives `conf_matrix[1,0]`, true positives `conf_matrix[1,1]`, and false_positives `conf_matrix[0,1]`.
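Equivalently, since `scikit-learn` orders the rows and columns by label value (0, then 1), the four counts can be unpacked in one line:
```
tn, fp, fn, tp = conf_matrix.ravel()
print('TN:', tn, 'FP:', fp, 'FN:', fn, 'TP:', tp)
```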
###Code
accuracy = accuracy_score(expected, predicted)
print( "Accuracy = " + str( accuracy ) )
###Output
_____no_output_____
###Markdown
We get an accuracy score of 84%. Recall that our test set had 84.5% non-recidivists and 15.5% recidivists. If we had just labeled all the examples as negative and guessed non-recidivist every time, we would have had an accuracy of 84.5%, so our basic model is not doing better than a "dumb classifier". We therefore want to explore other prediction methods in later sections. For now, let's look at the precision and recall scores of our model, still using the default classification threshold.
###Code
precision = precision_score(expected, predicted)
recall = recall_score(expected, predicted)
print( "Precision = " + str( precision ) )
print( "Recall= " + str(recall))
###Output
_____no_output_____
###Markdown
AUC-PR and AUC-ROCIf we care about the whole precision-recall space, we can consider a metric known as the area under the precision-recall curve (AUC-PR). The maximum AUC-PR is 1.
###Code
precision_curve, recall_curve, pr_thresholds = precision_recall_curve(expected, y_scores)
auc_val = auc(recall_curve,precision_curve)
###Output
_____no_output_____
###Markdown
Here we plot the PR curve and print the corresponding AUC-PR score.
###Code
plt.plot(recall_curve, precision_curve)
plt.xlabel('Recall')
plt.ylabel('Precision')
print('AUC-PR: {0:1f}'.format(auc_val))
plt.show()
###Output
_____no_output_____
###Markdown
A related performance metric is the area under the receiver operating characteristic curve (AUC-ROC). It also has a maximum of 1, with 0.5 representing a non-informative model.
###Code
fpr, tpr, thresholds = roc_curve(expected, y_scores)
roc_auc = auc(fpr, tpr)
###Output
_____no_output_____
###Markdown
Here we plot the ROC curve and print the corresponding AUC-ROC score.
###Code
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
###Output
_____no_output_____
###Markdown
Precision and Recall at k%If we only care about a specific part of the precision-recall curve we can focus on more fine-grained metrics. For instance, say there is a special program for people likely to be recidivists, but only 5% can be admitted. In that case, we would want to prioritize the 5% who were *most likely* to end up back in jail, and it wouldn't matter too much how accurate we were on the 80% or so who weren't very likely to end up back in jail. Let's say that, out of the approximately 200,000 prisoners, we can intervene on 5% of them, or the "top" 10,000 prisoners (where "top" means highest predicted risk of recidivism). We can then focus on optimizing our precision at 5%. For this, we first define a function (`plot_precision_recall_n`) that computes and plots precision and recall for any percentage cutoff (k).
###Code
def plot_precision_recall_n(y_true, y_prob, model_name):
"""
y_true: ls
y_prob: ls
model_name: str
"""
y_score = y_prob
precision_curve, recall_curve, pr_thresholds = precision_recall_curve(y_true, y_score)
precision_curve = precision_curve[:-1]
recall_curve = recall_curve[:-1]
pct_above_per_thresh = []
number_scored = len(y_score)
for value in pr_thresholds:
num_above_thresh = len(y_score[y_score>=value])
pct_above_thresh = num_above_thresh / float(number_scored)
pct_above_per_thresh.append(pct_above_thresh)
pct_above_per_thresh = np.array(pct_above_per_thresh)
plt.clf()
fig, ax1 = plt.subplots()
ax1.plot(pct_above_per_thresh, precision_curve, 'b')
ax1.set_xlabel('percent of population')
ax1.set_ylabel('precision', color='b')
ax1.set_ylim(0,1.05)
ax2 = ax1.twinx()
ax2.plot(pct_above_per_thresh, recall_curve, 'r')
ax2.set_ylabel('recall', color='r')
ax2.set_ylim(0,1.05)
name = model_name
plt.title(name)
plt.show()
plt.clf()
###Output
_____no_output_____
###Markdown
We can now plot the precision and recall scores for the full range of k values (this might take some time to run).
###Code
plot_precision_recall_n(expected,y_scores, 'LR')
###Output
_____no_output_____
###Markdown
Here we define another function, `precision_at_k`, which returns the precision score for specific values of k.
###Code
def precision_at_k(y_true, y_scores,k):
threshold = np.sort(y_scores)[::-1][int(k*len(y_scores))]
y_pred = np.asarray([1 if i >= threshold else 0 for i in y_scores ])
return precision_score(y_true, y_pred)
###Output
_____no_output_____
###Markdown
We can now compute, e.g., precision at top 1% and precision at top 5%.
###Code
p_at_1 = precision_at_k(expected,y_scores, 0.01)
print('Precision at 1%: {:.2f}'.format(p_at_1))
p_at_5 = precision_at_k(expected,y_scores, 0.05)
print('Precision at 5%: {:.2f}'.format(p_at_5))
###Output
_____no_output_____
###Markdown
Baseline Finally, it is important to check our model against a reasonable baseline to know how well our model is doing. Without any context, 84% accuracy can sound really great... but it's not so great when you remember that you could do better by declaring everyone a non-recidivist, which would be a useless model. This baseline would be called the *no information rate*.In addition to the no information rate, we can check against a *random* baseline by assigning every example a label (positive or negative) completely at random. We can then compute the precision at top 5% for the random model.
###Code
random_score = [np.random.uniform(0,1) for i in enumerate(y_test)]
random_predicted = np.array( [calc_threshold(score,0.5) for score in random_score] )
random_p_at_5 = precision_at_k(expected,random_predicted, 0.05)
random_p_at_5
###Output
_____no_output_____
###Markdown
More models---We have only scratched the surface of what we can do with `scikit-learn`. We've only tried one method (logistic regression), and there are plenty more classification algorithms. In the following, we consider decision trees (`DT`), random forests (`RF`), extremely randomized trees (`ET`) and gradient boosting (`GB`) as additional prediction methods.
###Code
clfs = {'DT': DecisionTreeClassifier(max_depth=3),
'RF': RandomForestClassifier(n_estimators=500, n_jobs=-1),
'ET': ExtraTreesClassifier(n_estimators=250, n_jobs=-1, criterion='entropy'),
'GB': GradientBoostingClassifier(learning_rate=0.05, subsample=0.7, max_depth=3, n_estimators=250)}
sel_clfs = ['DT', 'RF', 'ET', 'GB']
###Output
_____no_output_____
###Markdown
We will use these methods in a loop that trains one model for each method with the training data and plots the corresponding precision and recall at top k figures with the test data.
###Code
max_p_at_k = 0
for clfNM in sel_clfs:
clf = clfs[clfNM]
clf.fit( X_train, y_train )
print(clf)
y_score = clf.predict_proba(X_test)[:,1]
predicted = np.array(y_score)
expected = np.array(y_test)
plot_precision_recall_n(expected,predicted, clfNM)
p_at_5 = precision_at_k(expected,y_score, 0.05)
if max_p_at_k < p_at_5:
max_p_at_k = p_at_5
print('Precision at 5%: {:.2f}'.format(p_at_5))
###Output
_____no_output_____
###Markdown
Let's explore the models we just built. We can, e.g., extract the decision tree result from the list of fitted models.
###Code
clf = clfs[sel_clfs[0]]
print(clf)
###Output
_____no_output_____
###Markdown
We can then print and plot the feature importances for this model, which are stored as the attribute `feature_importances_`. Note that you can explore other models (e.g. the random forest) by extracting the corresponding result from `clfs` as above.
###Code
importances = clf.feature_importances_
indices = np.argsort(importances)[::-1]
print ("Feature ranking")
for f in range(X_test.shape[1]):
print ("%d. %s (%f)" % (f + 1, sel_features[f], importances[indices[f]]))
plt.figure
plt.title ("Feature Importances")
plt.bar(range(X_test.shape[1]), importances[indices], color='r', align = "center")
plt.xticks(range(X_test.shape[1]), sel_features, rotation=90)
plt.xlim([-1, X_test.shape[1]])
plt.show
###Output
_____no_output_____
###Markdown
Our ML modeling pipeline can be extended in various ways. Further steps may include: - Creating more features- Trying more models- Trying different parameters for our models
###Code
cur.close()
conn.close()
###Output
_____no_output_____ |
01 - Basics/08-OOP.ipynb | ###Markdown
- Not only does Python support multi-level inheritance, it also supports multiple inheritance- https://en.wikipedia.org/wiki/Multiple_inheritance - Classes B & C inherit class A. Both B & C override the same method (called `greet` in the code below). - Class D inherits both classes B & C (multiple inheritance)- Which method (B's `greet` or C's `greet`) will be called when we call it from class D's object? - the one from whichever class is inherited first, as determined by Python's Method Resolution Order (MRO)
###Code
class A:
def __init__(self):
pass
def greet(self):
print('A')
class B(A):
def __init__(self):
pass
def greet(self):
print('B')
class C(A):
def __init__(self):
pass
def greet(self):
print('C')
class D(B,C):
def __init__(self):
pass
def getter(self):
print('D')
D().greet()
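#The answer comes from the Method Resolution Order (MRO): Python searches
#D, then B, then C, then A, so B's greet is used for class D(B,C).
print(D.__mro__)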
class D(C,B):
def __init__(self):
pass
def getter(self):
C.greet(self) #calling method of super class(using self parameter)
print('D')
D().greet()
D().getter()
class Person:
def __init__(self):
print('Inside Person Constructor')
def greet(self):
print('hello')
class Person2:
def __init__(self):
print('Inside Person2 constructor')
class Student(Person2,Person):
def __init__(self):
print('Inside Student Constructor')
super().__init__() #notice - not using self parameter
def greet(self):
super().greet() #not using self parameter
print('hi')
Student()
Student().greet()
class Rectangle:
def __init__(self, x1, y1, x2, y2):
self.x1 = x1
self.y1 = y1
self.x2 = x2
self.y2 = y2
def wth(self):
return self.x2 - self.x1
def hgt(self):
return self.y2 - self.y1
def area(self):
return self.wth() * self.hgt()
class Square(Rectangle):
def __init__(self,x1,y1,length):
super().__init__(x1,y1,x1+length,y1+length)
print(super().area())
_ = Square(2,3,4)
###Output
16
|
python-sdk/nuscenes/can_bus/tutorial.ipynb | ###Markdown
nuScenes CAN bus tutorialThis page describes how to use the nuScenes CAN bus expansion data.The CAN bus is a vehicle bus over which information such as position, velocity, acceleration, steering, lights, battery and many more are transmitted.We recommend you start by reading the [README](https://github.com/nutonomy/nuscenes-devkit/tree/master/python-sdk/nuscenes/can_bus/README.md). SetupTo install the CAN bus expansion, please download the files from https://www.nuscenes.org/download and copy the files into your nuScenes CAN bus folder, e.g. `/data/sets/nuscenes/can_bus`. You will also need to update your `nuscenes-devkit`. InitializationTo initialize the CAN bus API, run the following:
###Code
from nuscenes.can_bus.can_bus_api import NuScenesCanBus
nusc_can = NuScenesCanBus(dataroot='/data/sets/nuscenes')
###Output
_____no_output_____
###Markdown
OverviewLet us get an overview of all the CAN bus messages and some basic statistics (min, max, mean, stdev, etc.). We will pick an arbitrary scene for that.
###Code
scene_name = 'scene-0001'
nusc_can.print_can_message_stats(scene_name)
###Output
_____no_output_____
###Markdown
VisualizationNext we plot the values in a CAN bus message over time. As an example let us pick the steering angle feedback message and the key called "value" as described in the [README](https://github.com/nutonomy/nuscenes-devkit/tree/master/python-sdk/nuscenes/can_bus/README.md). The plot below shows the steering angle. It seems like the scene starts with a strong left turn and then continues more or less straight.
###Code
nusc_can.plot_message_data(scene_name, 'steeranglefeedback', 'value')
###Output
_____no_output_____
###Markdown
If the data we want to plot is multi-dimensional, we need to provide an additional argument to select the dimension. Here we plot the acceleration along the lateral dimension (y-axis). We can see that initially this acceleration is higher.
###Code
nusc_can.plot_message_data(scene_name, 'pose', 'accel', dimension=1)
###Output
_____no_output_____
###Markdown
Now let us render the baseline route for this scene. The blue line below shows the baseline route extended by 50m beyond the start and end of the scene. The orange line indicates the ego vehicle pose. To differentiate the start and end point we highlight the start with a red cross. We can see that there is a slight deviation of the actual poses from the route.
###Code
nusc_can.plot_baseline_route(scene_name)
###Output
_____no_output_____
###Markdown
Error handlingPlease note that some scenes are not well aligned with the baseline route. This can be due to diversions or because the human driver was not following a route. We compute all misaligned routes by checking if each ego pose has a baseline route within 5m.
###Code
print(nusc_can.list_misaligned_routes())
###Output
_____no_output_____
###Markdown
Furthermore a small number of scenes have no CAN bus data at all. These can therefore not be used.
###Code
print(nusc_can.can_blacklist)
###Output
_____no_output_____ |
nbs/PointNetSeg.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
root_dir = "/content/gdrive/My Drive/PointNet3D"
!pip install path.py;
from path import Path
import sys
sys.path.append(root_dir)
import plotly.graph_objects as go
import numpy as np
import scipy.spatial.distance
import math
import random
import utils
class10_dir = "/datasets/ModelNet10txt/ModelNet10/ModelNet10/"
import random
def read_pts(file):
verts = np.genfromtxt(file)
return utils.cent_norm(verts)
#return verts
def read_seg(file):
verts = np.genfromtxt(file, dtype= (int))
return verts
def sample_2000(pts, pts_cat):
    # Stack the per-point labels as a 4th column, draw 2000 points at random
    # (with replacement), then split back into coordinates and labels.
    res1 = np.concatenate((pts,np.reshape(pts_cat, (pts_cat.shape[0], 1))), axis= 1)
    res = np.asarray(random.choices(res1, weights=None, cum_weights=None, k=2000))
    images = res[:, 0:3]
    categories = res[:, 3]
    categories-=np.ones(categories.shape)  # shift labels from 1-based to 0-based
    return images, categories
###Output
_____no_output_____
###Markdown
Model
###Code
import torch
import torch.nn as nn
import numpy as np
import torch.nn.functional as F
class Tnet(nn.Module):
def __init__(self, k=3):
super().__init__()
self.k=k
self.conv1 = nn.Conv1d(k,64,1)
self.conv2 = nn.Conv1d(64,128,1)
self.conv3 = nn.Conv1d(128,1024,1)
self.fc1 = nn.Linear(1024,512)
self.fc2 = nn.Linear(512,256)
self.fc3 = nn.Linear(256,k*k)
self.bn1 = nn.BatchNorm1d(64)
self.bn2 = nn.BatchNorm1d(128)
self.bn3 = nn.BatchNorm1d(1024)
self.bn4 = nn.BatchNorm1d(512)
self.bn5 = nn.BatchNorm1d(256)
def forward(self, input):
      # input has shape (bs, k, n): k channels first, as expected by Conv1d
bs = input.size(0)
xb = F.relu(self.bn1(self.conv1(input)))
xb = F.relu(self.bn2(self.conv2(xb)))
xb = F.relu(self.bn3(self.conv3(xb)))
pool = nn.MaxPool1d(xb.size(-1))(xb)
flat = nn.Flatten(1)(pool)
xb = F.relu(self.bn4(self.fc1(flat)))
xb = F.relu(self.bn5(self.fc2(xb)))
#initialize as identity
init = torch.eye(self.k, requires_grad=True).repeat(bs,1,1)
if xb.is_cuda:
init=init.cuda()
matrix = self.fc3(xb).view(-1,self.k,self.k) + init
return matrix
class Transform(nn.Module):
def __init__(self):
super().__init__()
self.input_transform = Tnet(k=3)
self.feature_transform = Tnet(k=128)
self.fc1 = nn.Conv1d(3,64,1)
self.fc2 = nn.Conv1d(64,128,1)
self.fc3 = nn.Conv1d(128,128,1)
self.fc4 = nn.Conv1d(128,512,1)
self.fc5 = nn.Conv1d(512,2048,1)
self.bn1 = nn.BatchNorm1d(64)
self.bn2 = nn.BatchNorm1d(128)
self.bn3 = nn.BatchNorm1d(128)
self.bn4 = nn.BatchNorm1d(512)
self.bn5 = nn.BatchNorm1d(2048)
def forward(self, input):
n_pts = input.size()[2]
matrix3x3 = self.input_transform(input)
xb = torch.bmm(torch.transpose(input,1,2), matrix3x3).transpose(1,2)
outs = []
out1 = F.relu(self.bn1(self.fc1(xb)))
outs.append(out1)
out2 = F.relu(self.bn2(self.fc2(out1)))
outs.append(out2)
out3 = F.relu(self.bn3(self.fc3(out2)))
outs.append(out3)
matrix128x128 = self.feature_transform(out3)
out4 = torch.bmm(torch.transpose(out3,1,2), matrix128x128).transpose(1,2)
outs.append(out4)
out5 = F.relu(self.bn4(self.fc4(out4)))
outs.append(out5)
xb = self.bn5(self.fc5(out5))
xb = nn.MaxPool1d(xb.size(-1))(xb)
out6 = nn.Flatten(1)(xb).repeat(n_pts,1,1).transpose(0,2).transpose(0,1)#.repeat(1, 1, n_pts)
outs.append(out6)
return outs, matrix3x3, matrix128x128
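# Segmentation head: concatenates the per-point and global features returned above
# (64 + 128 + 128 + 128 + 512 + 2048 = 3008 channels) and predicts per-point
# log-probabilities over the 4 airplane part classes.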
class PointNetSeg(nn.Module):
def __init__(self, classes = 10):
super().__init__()
self.transform = Transform()
self.fc1 = nn.Conv1d(3008,256,1)
self.fc2 = nn.Conv1d(256,256,1)
self.fc3 = nn.Conv1d(256,128,1)
self.fc4 = nn.Conv1d(128,4,1)
self.bn1 = nn.BatchNorm1d(256)
self.bn2 = nn.BatchNorm1d(256)
self.bn3 = nn.BatchNorm1d(128)
self.bn4 = nn.BatchNorm1d(4)
self.logsoftmax = nn.LogSoftmax(dim=1)
def forward(self, input):
inputs, matrix3x3, matrix128x128 = self.transform(input)
stack = torch.cat(inputs,1)
xb = F.relu(self.bn1(self.fc1(stack)))
xb = F.relu(self.bn2(self.fc2(xb)))
xb = F.relu(self.bn3(self.fc3(xb)))
output = F.relu(self.bn4(self.fc4(xb)))
return self.logsoftmax(output), matrix3x3, matrix128x128
###Output
_____no_output_____
###Markdown
Dataset
###Code
from __future__ import print_function, division
import os
import torch
import pandas as pd
from skimage import io, transform
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
from torch.utils.data.dataset import random_split
import utils
class Data(Dataset):
"""Face Landmarks dataset."""
def __init__(self, root_dir, valid=False, transform=None):
self.root_dir = root_dir
self.files = []
self.valid=valid
newdir = root_dir + '/datasets/airplane_part_seg/02691156/expert_verified/points_label/'
for file in os.listdir(newdir):
o = {}
o['category'] = newdir + file
o['img_path'] = root_dir + '/datasets/airplane_part_seg/02691156/points/'+ file.replace('.seg', '.pts')
self.files.append(o)
def __len__(self):
return len(self.files)
def __getitem__(self, idx):
img_path = self.files[idx]['img_path']
category = self.files[idx]['category']
with open(img_path, 'r') as f:
image1 = read_pts(f)
with open(category, 'r') as f:
category1 = read_seg(f)
image2, category2 = sample_2000(image1, category1)
if not self.valid:
theta = random.random()*360
image2 = utils.rotation_z(utils.add_noise(image2), theta)
return {'image': np.array(image2, dtype="float32"), 'category': category2.astype(int)}
dset = Data(root_dir , transform=None)
train_num = int(len(dset) * 0.95)
val_num = int(len(dset) *0.05)
if int(len(dset)) - train_num - val_num >0 :
train_num = train_num + 1
elif int(len(dset)) - train_num - val_num < 0:
train_num = train_num -1
#train_dataset, val_dataset = random_split(dset, [3000, 118])
train_dataset, val_dataset = random_split(dset, [train_num, val_num])
val_dataset.valid=True
print('######### Dataset class created #########')
print('Number of images: ', len(dset))
print('Sample image shape: ', dset[0]['image'].shape)
#print('Sample image points categories', dset[0]['category'], end='\n\n')
train_loader = DataLoader(dataset=train_dataset, batch_size=64)
val_loader = DataLoader(dataset=val_dataset, batch_size=64)
#dataloader = torch.utils.data.DataLoader(dset, batch_size=4, shuffle=True, num_workers=4)
###Output
######### Dataset class created #########
Number of images: 2690
Sample image shape: (2000, 3)
###Markdown
Training loop
###Code
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
pointnet = PointNetSeg()
pointnet.to(device);
optimizer = torch.optim.Adam(pointnet.parameters(), lr=0.001)
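# Loss: negative log-likelihood over per-point predictions plus an orthogonality
# regulariser (weighted by alpha) on the predicted 3x3 and 128x128 transform matrices,
# as proposed in the PointNet paper.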
def pointnetloss(outputs, labels, m3x3, m128x128, alpha = 0.0001):
criterion = torch.nn.NLLLoss()
bs=outputs.size(0)
id3x3 = torch.eye(3, requires_grad=True).repeat(bs,1,1)
id128x128 = torch.eye(128, requires_grad=True).repeat(bs,1,1)
if outputs.is_cuda:
id3x3=id3x3.cuda()
id128x128=id128x128.cuda()
diff3x3 = id3x3-torch.bmm(m3x3,m3x3.transpose(1,2))
diff128x128 = id128x128-torch.bmm(m128x128,m128x128.transpose(1,2))
return criterion(outputs, labels) + alpha * (torch.norm(diff3x3)+torch.norm(diff128x128)) / float(bs)
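# Training loop: for each epoch, optimise on the training batches (logging the running
# loss every 10 mini-batches), then measure per-point accuracy on the validation set
# and save a checkpoint named after the epoch and its validation accuracy.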
def train(model, train_loader, val_loader=None, epochs=15, save=True):
for epoch in range(epochs):
pointnet.train()
running_loss = 0.0
for i, data in enumerate(train_loader, 0):
inputs, labels = data['image'].to(device), data['category'].to(device)
optimizer.zero_grad()
outputs, m3x3, m128x128 = pointnet(inputs.transpose(1,2))
loss = pointnetloss(outputs, labels, m3x3, m128x128)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 10 == 9: # print every 10 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 10))
running_loss = 0.0
pointnet.eval()
correct = total = 0
# validation
if val_loader:
with torch.no_grad():
for data in val_loader:
inputs, labels = data['image'].to(device), data['category'].to(device)
outputs, __, __ = pointnet(inputs.transpose(1,2))
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0) * labels.size(1) # count every point: batch size * points per cloud
correct += (predicted == labels).sum().item()
val_acc = 100 * correct / total
print('Valid accuracy: %d %%' % val_acc)
# save the model
if save:
torch.save(pointnet.state_dict(), root_dir+"/modelsSeg/"+str(epoch)+"_"+str(val_acc))
train(pointnet, train_loader, val_loader, save=True)
###Output
[1, 10] loss: 1.203
[1, 20] loss: 0.924
[1, 30] loss: 0.844
[1, 40] loss: 0.800
Valid accuracy: 78 %
[2, 10] loss: 0.769
[2, 20] loss: 0.741
[2, 30] loss: 0.732
[2, 40] loss: 0.729
Valid accuracy: 82 %
[3, 10] loss: 0.709
[3, 20] loss: 0.685
[3, 30] loss: 0.680
[3, 40] loss: 0.676
Valid accuracy: 85 %
[4, 10] loss: 0.663
[4, 20] loss: 0.642
[4, 30] loss: 0.637
[4, 40] loss: 0.636
Valid accuracy: 87 %
[5, 10] loss: 0.626
[5, 20] loss: 0.617
[5, 30] loss: 0.610
[5, 40] loss: 0.613
Valid accuracy: 86 %
[6, 10] loss: 0.604
[6, 20] loss: 0.587
[6, 30] loss: 0.580
[6, 40] loss: 0.583
Valid accuracy: 86 %
[7, 10] loss: 0.574
[7, 20] loss: 0.563
[7, 30] loss: 0.558
[7, 40] loss: 0.565
Valid accuracy: 86 %
[8, 10] loss: 0.553
[8, 20] loss: 0.539
[8, 30] loss: 0.538
[8, 40] loss: 0.543
Valid accuracy: 87 %
[9, 10] loss: 0.533
[9, 20] loss: 0.525
[9, 30] loss: 0.516
[9, 40] loss: 0.521
Valid accuracy: 87 %
[10, 10] loss: 0.522
[10, 20] loss: 0.506
[10, 30] loss: 0.503
[10, 40] loss: 0.507
Valid accuracy: 87 %
[11, 10] loss: 0.501
[11, 20] loss: 0.495
[11, 30] loss: 0.485
[11, 40] loss: 0.497
Valid accuracy: 88 %
[12, 10] loss: 0.484
[12, 20] loss: 0.477
[12, 30] loss: 0.474
[12, 40] loss: 0.477
Valid accuracy: 88 %
[13, 10] loss: 0.472
[13, 20] loss: 0.458
[13, 30] loss: 0.456
[13, 40] loss: 0.464
Valid accuracy: 87 %
[14, 10] loss: 0.454
[14, 20] loss: 0.452
[14, 30] loss: 0.446
[14, 40] loss: 0.458
Valid accuracy: 87 %
[15, 10] loss: 0.444
[15, 20] loss: 0.433
[15, 30] loss: 0.432
[15, 40] loss: 0.440
Valid accuracy: 88 %
###Markdown
Test
###Code
pointnet = PointNetSeg()
pointnet.load_state_dict(torch.load(root_dir+"/modelsSeg/"+"14_88.01940298507462"))
pointnet.eval()
batch = next(iter(val_loader))
pred = pointnet(batch['image'].transpose(1,2))
pred_np = np.array(torch.argmax(pred[0],1));
pred_np
batch['image'][0].shape
pred_np==np.array(batch['category'])
acc = (pred_np==np.array(batch['category']))
resulting_acc = np.sum(acc, axis=1) / 2000
resulting_acc
pred_np
x,y,z=np.array(batch['image'][0]).T
c = np.array(batch['category'][0]).T
fig = go.Figure(data=[go.Scatter3d(x=x, y=y, z=z,
mode='markers',
marker=dict(
size=30,
color=c, # set color to an array/list of desired values
colorscale='Viridis', # choose a colorscale
opacity=1.0
))])
fig.update_traces(marker=dict(size=2,
line=dict(width=2,
color='DarkSlateGrey')),
selector=dict(mode='markers'))
fig.show()
###Output
_____no_output_____ |
code/01-Intro/oeis.ipynb | ###Markdown
Series
###Code
import pandas as pd
from oeis.sequence import OEIS_Sequence
from matplotlib import pyplot as plt
# Instantiate a sequence to plot (assumed example: A000290, the square numbers).
Sequence = OEIS_Sequence('A000290')
plt.plot(Sequence.terms)
plt.title(Sequence.description)
plt.show()
def formula_latex(k, floor=True):
latex = r"$$\left\lfloor\frac{n^2}{" + str(k) + r"}\right\rfloor$$"
if not floor:
latex = latex.replace("floor", "ceil")
return latex
OEIS_URL = 'https://oeis.org/' # base URL of the OEIS, used to build the markdown links
def oeis_md_link(id_):
return f'[{id_}]({OEIS_URL}{id_})'
SEQ_LIST = ['A000290', 'A007590', 'A000212',
'A002620', 'A118015', 'A056827',
'A056834', 'A130519', 'A056838',
'A056865']
series_table = pd.DataFrame(columns= ['k',
'Secuencia',
'Fórmula',
'Descripción',
'Términos'])
MAX_TERMS = 15
for num, id_ in enumerate(SEQ_LIST):
Seq = OEIS_Sequence(id_)
series_table = series_table.append({'k': num + 1,
'Secuencia': oeis_md_link(id_),
'Fórmula': formula_latex(num + 1),
'Descripción': Seq.description,
'Términos': Seq.terms[:MAX_TERMS]
}, ignore_index=True)
series_table
# Tabla en markdown para incluir en el capítulo
print(series_table.to_markdown(index=False))
for k, seq in enumerate(SEQ_LIST):
lst = SEQ_LIST.copy()
del lst[k]
print(k, seq, lst)
V = ['A000290', 'A007590', 'A000212', 'A002620', 'A118015', 'A056827', 'A056834', 'A130519', 'A056865']
txt = ""
for x in V:
txt = txt + ", " + x
txt
from math import floor
def f(n, k):
return floor((n ** 2) / k)
lst = []
acc = 0
for n in range(20):
acc += f(n, 2)
lst.append(acc)
print(lst)
lista = []
for n in range(20):
lista.append(floor((2 * (n ** 3) + 3 * (n ** 2) - 2 * n)/12))
print(lista)
from sympy import Sum, symbols, simplify
i, k, n = symbols('i k n', integer=True)
simplify(Sum((i ** 2) / k , (i, 1, n)).doit())
###Output
_____no_output_____ |
Functional_Thinking/Lab/26D-High_Order_Function.ipynb | ###Markdown
The operator module - operators as regular functions
Let's take our old friend: the factorial function!
###Code
l_factorial = lambda n: 1 if n == 0 else n*l_factorial(n-1)
###Output
_____no_output_____
###Markdown
Chaining functions and combining return values
Say that we want to call this function a number of times, with different arguments, and do something with the return values. How can we do that?
###Code
def chain_mul(*what):
"""Takes a list of (function, argument) tuples. Calls each
function with its argument, multiplies up the return values,
(starting at 1) and returns the total."""
total = 1
for (fnc, arg) in what:
total *= fnc(arg)
return total
chain_mul( (l_factorial, 2), (l_factorial, 3) )
###Output
_____no_output_____
###Markdown
Operators as regular functions
The function above is not very general: it can only multiply values, not divide or subtract them. Ideally, we would pass an operator to the function as well. But `*` is syntax and not an object that we can pass! Fortunately, Python's built-in `operator` module offers all operators as regular functions.
###Code
import operator
def chain(how, *what):
total = 1
for (fnc, arg) in what:
total = how(total, fnc(arg))
return total
chain(operator.truediv, (l_factorial, 2), (l_factorial, 3) )
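# As a minimal sketch, the same chain works with any binary operator, e.g. addition
# (note that the running total still starts at 1):
chain(operator.add, (l_factorial, 2), (l_factorial, 3) )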
###Output
_____no_output_____ |
notebooks/GE2E-Seungwonpark-ExtractSpeakerEmbedding.ipynb | ###Markdown
This is a notebook used to generate the speaker embeddings with the GE2E model.
###Code
import sys
sys.path.insert(0, "../")
from utils.audio_processor import WrapperAudioProcessor as AudioProcessor
from utils.generic_utils import load_config
import librosa
import os
import numpy as np
import torch
from glob import glob
from tqdm import tqdm
#Download encoder Checkpoint
#!wget https://github.com/Edresson/GE2E-Speaker-Encoder/releases/download/checkpoints/checkpoint-voicefilter-seungwonpark.pt -O embedder.pt
# speaker_encoder parameters
num_mels = 40
n_fft = 512
emb_dim = 256
lstm_hidden = 768
lstm_layers = 3
window = 80
stride = 40
checkpoint_dir = "embedder.pt"
import torch
import torch.nn as nn
class LinearNorm(nn.Module):
def __init__(self, lstm_hidden, emb_dim):
super(LinearNorm, self).__init__()
self.linear_layer = nn.Linear(lstm_hidden, emb_dim)
def forward(self, x):
return self.linear_layer(x)
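# GE2E-style speaker encoder: a 3-layer LSTM over 40-dim mel frames. The utterance is
# split into overlapping windows (80 frames, stride 40); each window contributes one
# L2-normalised d-vector taken from the last LSTM frame, and the window embeddings are
# averaged into a single utterance-level embedding.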
class SpeakerEncoder(nn.Module):
def __init__(self, num_mels, lstm_layers, lstm_hidden, window, stride):
super(SpeakerEncoder, self).__init__()
self.lstm = nn.LSTM(num_mels, lstm_hidden,
num_layers=lstm_layers,
batch_first=True)
self.proj = LinearNorm(lstm_hidden, emb_dim)
self.num_mels = num_mels
self.lstm_layers = lstm_layers
self.lstm_hidden = lstm_hidden
self.window = window
self.stride = stride
def forward(self, mel):
# (num_mels, T)
mels = mel.unfold(1, self.window, self.stride) # (num_mels, T', window)
mels = mels.permute(1, 2, 0) # (T', window, num_mels)
x, _ = self.lstm(mels) # (T', window, lstm_hidden)
x = x[:, -1, :] # (T', lstm_hidden), use last frame only
x = self.proj(x) # (T', emb_dim)
x = x / torch.norm(x, p=2, dim=1, keepdim=True) # (T', emb_dim)
x = x.sum(0) / x.size(0) # (emb_dim), average pooling over time frames
return x
embedder = SpeakerEncoder(num_mels, lstm_layers, lstm_hidden, window, stride).cuda()
chkpt_embed = torch.load(checkpoint_dir)
embedder.load_state_dict(chkpt_embed)
embedder.eval()
# Set constants
DATA_ROOT_PATH = '../../../LibriSpeech/voicefilter_bugfix_data/'
TRAIN_DATA = os.path.join(DATA_ROOT_PATH, 'train')
TEST_DATA = os.path.join(DATA_ROOT_PATH, 'test')
glob_re_wav_emb = '*-ref_emb.wav'
glob_re_emb = '*-emb.pt'
# load an audio processor (ap) compatible with the speaker encoder
config = {"backend":"voicefilter", "mel_spec": False, "audio_len": 3,
"voicefilter":{"n_fft": 1200,"num_mels":40,"num_freq": 601,"sample_rate": 16000,"hop_length": 160,
"win_length": 400,"min_level_db": -100.0, "ref_level_db": 20.0, "preemphasis": 0.97,
"power": 1.5, "griffin_lim_iters": 60}}
ap = AudioProcessor(config)
#os.listdir(TEST_DATA)
#Preprocess dataset
train_files = sorted(glob(os.path.join(TRAIN_DATA, glob_re_wav_emb)))
test_files = sorted(glob(os.path.join(TEST_DATA, glob_re_wav_emb)))
if len(train_files) == 0 or len(test_files) == 0:
print("check train and test paths: no files found in directory")
files = train_files+test_files
for i in tqdm(range(len(files))):
try:
wave_file_path = files[i]
wav_file_name = os.path.basename(wave_file_path)
# Extract Embedding
emb_wav, _ = librosa.load(wave_file_path, sr=16000)
mel = torch.from_numpy(ap.get_mel(emb_wav)).cuda()
#print(mel.shape)
file_embedding = embedder(mel).cpu().detach().numpy()
except:
# if it is not possible to extract the embedding (e.g. the wav is too short)
file_embedding = np.array([0]) # placeholder; this will cause an error during training
print("Embedding reference is too short")
output_name = wave_file_path.replace(glob_re_wav_emb.replace('*',''),'')+glob_re_emb.replace('*','')
torch.save(torch.from_numpy(file_embedding.reshape(-1)), output_name)
###Output
_____no_output_____ |
docs/examples/3-chromatography/3c-Separators.ipynb | ###Markdown
Separator Calculations
###Code
import pandas as pd
import numpy as np
from pvtpy.compositional import Chromatography, Component, properties_df
from pvtpy.units import Pressure, Temperature
properties_df.index
d1 = {
'comp': ['carbon-dioxide','nitrogen','methane','ethane','propane','isobutane','butane','isopentane','pentane','n-hexane'],
'mole_fraction':[0.0008,0.0164,0.2840,0.0716,0.1048,0.042,0.042,0.0191,0.01912,0.0405]
}
c7_plus = Component(
name = 'C7+',
molecular_weight=252,
specific_gravity = 0.8429,
mole_fraction=0.3597,
critical_pressure=140,
critical_pressure_unit='psi',
critical_temperature=1279.8,
critical_temperature_unit='rankine',
params = {'acentric_factor':0.5067}
)
ch1 = Chromatography()
ch1.from_df(pd.DataFrame(d1),name='comp')
ch1.plus_fraction = c7_plus
ch1.df()['mole_fraction']
ma = ch1.apparent_molecular_weight()
print(f'Aparent Molecular weight {ma}')
rho = 44.794
###Output
_____no_output_____
###Markdown
Stage 1
###Code
p1 = Pressure(value=400, unit='psi')
t1 = Temperature(value=72, unit='farenheit')
ch1.equilibrium_ratios(p1,t1,method='whitson')
fsh1, phase1 = ch1.flash_calculations(p1,t1)
fsh1.index.name = 'component'
print(fsh1)
print(fsh1[['xi','yi']].sum())
moles_stage1 = ch1.phase_moles(p1,t1)
moles_stage1
###Output
_____no_output_____
###Markdown
Stage 2
###Code
p2 = Pressure(value=350, unit='psi')
t2 = Temperature(value=72, unit='farenheit')
ch2 = Chromatography()
ch2.from_df(fsh1, mole_fraction='xi')
c7_plus1 = Component(
name = 'C7+',
molecular_weight=252,
specific_gravity = 0.8429,
mole_fraction=fsh1.loc['C7+','xi'],
critical_pressure=140,
critical_pressure_unit='psi',
critical_temperature=1279.8,
critical_temperature_unit='rankine',
params = {'acentric_factor':0.5067}
)
ch2.plus_fraction = c7_plus1
ch2.df()['mole_fraction']
moles_stage2 = ch2.phase_moles(p2,t2)
moles_stage2
fsh2, phase2 = ch2.flash_calculations(p2,t2)
fsh2.index.name = 'component'
print(fsh2)
print(fsh2[['xi','yi']].sum())
###Output
mole_fraction xi yi k
component
carbon-dioxide 0.000592 0.000580 0.001458 2.512167
nitrogen 0.001713 0.001191 0.041154 34.548143
methane 0.069850 0.060335 0.789504 13.085251
ethane 0.063289 0.062753 0.103813 1.654311
propane 0.130747 0.131831 0.048744 0.369743
isobutane 0.056641 0.057287 0.007785 0.135895
butane 0.057502 0.058191 0.005391 0.092649
isopentane 0.026688 0.027029 0.000952 0.035219
pentane 0.026801 0.027145 0.000718 0.026438
n-hexane 0.057142 0.057892 0.000479 0.008281
C7+ 0.509035 0.515766 0.000002 0.000005
xi 1.0
yi 1.0
dtype: float64
###Markdown
Stage 3
###Code
p3 = Pressure(value=14.7, unit='psi')
t3 = Temperature(value=60, unit='farenheit')
ch3 = Chromatography()
ch3.from_df(fsh2.reset_index(),name = fsh2.index.name, mole_fraction='xi')
c7_plus3 = Component(
name = 'C7+',
molecular_weight=252,
specific_gravity = 0.8429,
mole_fraction=fsh2.loc['C7+','xi'],
critical_pressure=140,
critical_pressure_unit='psi',
critical_temperature=1279.8,
critical_temperature_unit='rankine',
params = {'acentric_factor':0.5067}
)
ch3.plus_fraction = c7_plus3
ch3.df()['mole_fraction']
moles_stage3 = ch3.phase_moles(p3,t3)
moles_stage3
fsh3, phase3 = ch3.flash_calculations(p3,t3)
fsh3.index.name = 'component'
print(fsh3)
print(fsh3[['xi','yi']].sum())
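# Each stage reports liquid moles per mole of feed entering that stage, so the
# stock-tank liquid per mole of original feed is the product over all stages and the
# liberated gas is the complement (nv = 1 - nl).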
moles_stages = [moles_stage1,moles_stage2,moles_stage3]
nl = 1
for i in moles_stages:
nl *= i['liquid_moles']
nv = 1 - nl
print(f'liquid Moles Stock Tank {nl}\nLiberated Gas Moles {nv}')
ch4 = Chromatography()
ch4.from_df(fsh3.reset_index(),name = fsh3.index.name, mole_fraction='xi')
c7_plus4 = Component(
name = 'C7+',
molecular_weight=252,
specific_gravity = 0.8429,
mole_fraction=fsh3.loc['C7+','xi'],
critical_pressure=140,
critical_pressure_unit='psi',
critical_temperature=1279.8,
critical_temperature_unit='rankine',
params = {'acentric_factor':0.5067}
)
ch4.plus_fraction = c7_plus4
ch4.df()['mole_fraction']
ch4.apparent_molecular_weight()
## Separator Functions
from pvtpy.compositional import Stage, SeparatorTest
stage1 = Stage(
pressure=p1,
temperature = t1
)
stage2 = Stage(
pressure=p2,
temperature = t2
)
stage3 = Stage(
pressure=p3,
temperature = t3
)
list_stages = [stage1, stage2, stage3]
sep = SeparatorTest(
initial_chromatography = ch1,
stages = list_stages
)
sep.solve()
sep.stages[-1].phase_moles
###Output
_____no_output_____
###Markdown
Calculate the apparent molecular weight of the stock-tank oil from its composition, to give:
###Code
sep.stages[-1].chromatography.apparent_molecular_weight()
###Output
_____no_output_____
###Markdown
Calculate the actual number of moles of the liquid phase at the stock-tank conditions. Calculate the total number of moles of the liberated gas.
###Code
sep.final_moles()
sep.final_molecular_weight()
rho = 50.920
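# Assumption: 50.920 is used here as the stock-tank oil density and 44.794 (defined
# earlier) as the reservoir-liquid density, presumably both in lb/ft3, for Rs and Bo below.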
sep.gas_solubility(rho=50.920)
sep.volumetric_factor(44.794,50.920)
###Output
_____no_output_____ |
Glove_Word_Embedding.ipynb | ###Markdown
Import necessary packages
###Code
import os
import urllib.request
import matplotlib.pyplot as plt
from scipy import spatial
from sklearn.manifold import TSNE
import numpy as np
import pandas as pd
import pathlib
###Output
_____no_output_____
###Markdown
Preparing the Tobacco dataset
###Code
data_root = pathlib.Path('/content/drive/MyDrive/tobaco_OCR/')
print(data_root)
for item in data_root.iterdir():
print(item)
def get_file_paths_and_labels(data_root):
text_paths = [str(path) for path in data_root.glob('*/*.txt')]
labels = [p.split("/")[-2] for p in text_paths]
return text_paths, labels
text_paths, labels = get_file_paths_and_labels(data_root)
print(text_paths)
print(labels)
print(len(text_paths))
print(len(labels))
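# Minimal sketch: quick sanity check of the class balance of the collected labels.
from collections import Counter
print(Counter(labels))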
###Output
['/content/drive/MyDrive/tobaco_OCR/Resume/50521422-1423.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50630631-0632.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/40028776-8777.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/10150247_10150256.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50538795-8796.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50201456-1467.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50719747-9748.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50264605-4625.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50590985-0986.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50296092-6093.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50535699-5699.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50239379-9380.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50467009-7010.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50425154-5155.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/40010133-0134.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50439450-9451.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50483835-3836.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50506096-6097.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50607543-7544.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50378394-8395.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50429381-9382.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50654827-4828.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50546335-6336.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/40005130-5131.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50538978-8979.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50386532-6533.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50371713-1714.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50557910-7911.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50442704-2713.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50489607-9608.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50476159-6160.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50616851-6852.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50602320-2321.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50582484-2485.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/2028975962_2028975964.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50525533-5540.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/10036815_10036823.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50422920-2921.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50405403-5404.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50450234-0234.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/40035380-5380.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/85655567_85655583.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50487659-7660.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50106572-6573.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/11300115-0116.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50646644-6645.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50493176-3177.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50271743-1746.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50444074-4074.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50589377-9378.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50449946-9947.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50559165-9166.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50491022-1023.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50509820-9820.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50650671-0672.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50617225-7226.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/40047516-7517.txt', 
'/content/drive/MyDrive/tobaco_OCR/Resume/50426112-6114.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/40049902-9903.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50593387-3387.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50023848_50023850.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50722487-2488.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50459853-9854.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50457000-7001.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50513051-3052.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50500260-0261.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50649595-9596.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50618083-8085.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50639278-9279.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50387716-7716.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50690163-0164.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50339930-9931.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50501854-1854.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50617273-7274.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50479508-9509.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/0000153377.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50608376-8377.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/2023100295_2023100303.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50513847-3848.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50453779-3780.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50549883-9884.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50258985-8987.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50463741-3741.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50579200-9201.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50653196-3196.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/10087799_10087801.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/CTRCONTRACTS021884-1.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50267812-7820.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50591446-1447.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50565399-5401.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50510506-0507.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/60016011_60016015.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50586528-6531.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50308888-8889.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50410607-0615.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50529207-9208.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50583715-3716.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50294272-4272.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50549067-9068.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50515916-5917.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50721981-1982.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/40002609-2610.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50477507-7508.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50479406-9407.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50597708-7709.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50441969-1970.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50357861-7862.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50636686-6687.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50536499-6500.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50371016-1016.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50497931-7932.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50570560-0563.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/60037207_60037209.txt', 
'/content/drive/MyDrive/tobaco_OCR/Resume/40019153-9154.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50602461-2462.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50421027-1027.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50638712-8712.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50313867-3869.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50515818-5819.txt', '/content/drive/MyDrive/tobaco_OCR/Resume/50553047-3048.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/89209498_9500.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1001889415.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000237100.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/04366602_04366604.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2073893399.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2057274390_2057274393.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2082514837.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/85026674_85026676.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/11766882.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2042111525_2042111534.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/01247680.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2072877887.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2024986135.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2047315478.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0013005994.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/98161549.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1002404720_1002404721.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/87856898_87856902.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/96018383.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2082742674.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2025877604.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/00377005.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2060167949.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1002977842_1002977847.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2024850061_2024850070.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2025875649.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/88889211.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/88116311_88116313.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2050258326.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/80722057_2058.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000193526.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1003635090.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2073365529_5534.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2045482589_2590.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2054524189.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000554052.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/89446070_89446076.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2041853500_2041853508.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/89841477.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/03654458_03654459.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/04405847_04405848.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/80502221_80502224.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/93252933.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2048309984.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2076128735_8736.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/85067598.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/82918230_82918239.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/04107922_04107926.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000124690.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051123179_2051123180.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/87048757_87048766.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2063435963.txt', 
'/content/drive/MyDrive/tobaco_OCR/Memo/0000393586.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2073420633.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1003390961.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/tob13820.34.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2501065502.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000237399.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2064835504.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1001858525_1001858526.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000104448.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2020268309.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2071760875.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/03364017_03364022.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051122847.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/93444732a.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2070370405.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2076900993.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/91288318_91288320.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1331806.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051083199.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2065335800.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2058089097.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1368724.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/03523378.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2054621429.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/13055817.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/ton05123.21.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/tob08906.50.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2072205854.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2061639550.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000168771.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2081803353.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1003726944.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/11886054.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2045476428_2045476436.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2057718509_2057718511.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2024412663.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2074123192.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2030471856.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000474595.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2060347872_2060347873.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000211869.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2060084593_2060084599.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/01586188.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2022254267.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/89449655.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/ton00631.03_ton00631.05.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/titx0625.92_titx0626.16.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2073437503_7504.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/91971720_1721.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2078755206.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/87325335_87325337.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2504048594_2504048611.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2063695266_5268.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/82919105.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2026164697_2026164699.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1000790663.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2074847095.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2060353871-a_2060353872.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2072514128_4129.txt', 
'/content/drive/MyDrive/tobaco_OCR/Memo/2023408782_2023408790.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051110023.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2041102766.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/86546074.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051117086.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/93208121.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2056550084_2056550086.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2024072051_2024072053.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2078795106.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2022923966.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2082621396.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/87051325.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2022181033.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/tob01919.07.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1000799097.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051443365.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000208865.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/603677.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2072613990.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2046538342.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/92757436.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/87915745_87915751.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/ti04960317.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2057762122_2057762125.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2054767991_7992.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2085618986.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/04101854.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/92707788.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2045373494_2045373496.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2040929289.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/tob10601.37.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000083559.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2057710690.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/91765155.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2025887469.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/82227170_7183.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/tob07720.02.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/98862271.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051989880_2051989881.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/611829.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2058006466.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2046912122_2123.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051063550.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000141695.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2030062998.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2058090026.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/03370692_0695.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2054924695.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2074865778_5785.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2010067199.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/92064329_92064330.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/87293861_3880.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2030545412_2030545413.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2031263025.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2064724557_4560.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2501027943.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2031400439_2031400441.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051798475.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051044337.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051575345_2051575346.txt', 
'/content/drive/MyDrive/tobaco_OCR/Memo/92638744.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2025614641_2025614644.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/44007097.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2073336543.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/98779515.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/80726625_80726632.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2024817982_2024817996.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2054544305.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2049050527_2049050536.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2054547221.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1003723216_1003723218.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2031562974.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2024836429_2024836430.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2073932597.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/88142422.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000008957.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2042471428.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1000251492.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2056174385_4386.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/00275226_00275227.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2022164378.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2061654420.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2021266006.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2056133713_2056133715.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2083776812.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000015571.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2050531873_2050531874.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000118560.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2040287760_2040287794.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1000767360.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2041874888.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000059503.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2000621910.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/92631332_1338.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2064509602_9604.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/91979761_9767.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/86448705_8716.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2012581990.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2058122315_2058122318.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/82782147.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/91757213.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/04004176.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2025851048_2025851049.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2045187675_2045187679.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/03683731.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2042404241.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/86235667_5668.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2022249818.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000408323.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/88898679.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/ti00290170_0176.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2023118657_2023118670.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/82581670.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000328495.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/83448568_8579.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2054643904.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051115864.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/86002263.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2000766495_2000766497.txt', 
'/content/drive/MyDrive/tobaco_OCR/Memo/2024462000_2024462016.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2050984554.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2041563228_2041563229.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/81634973.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1003194341_1003194342.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2065268936_8938.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2022165302_2022165304.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2057274729_2057274731.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/89955590.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/80413408_80413420.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2061678550.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/96222726.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2040465704.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2061839050.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2065281482_1483.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2073005257.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/99115867_5869.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2041268603_2041268605.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2072173651_3652.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000135087.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/98864336.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2022180658_0659.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2044175001_2044175002.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000170925.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2043230684_2043230686.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1003040251_1003040252.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/96653443.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2057702349.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1001906916.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/83504123_4129.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1000798221.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/85560397.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/13313809.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2041837000_2041837001.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2041356925.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/03615288_03615289.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2054838139_2054838140.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2074110669.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/96630332.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2073883296.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/86264944_4947.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2022142525.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2050941553.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1003640109.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/91983999.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/00137810_00137811.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/tob17612.72_tob17612.74.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/87398712.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2045083823_2045083824.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2046000414_2046000415.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/88990360.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000108319.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2072665602.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2501536718_2501536725.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2054927900_2054927901.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/93299184.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2047934116.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2022210456_2022210457.txt', 
'/content/drive/MyDrive/tobaco_OCR/Memo/0000386602.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2057154439_2057154440.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/91981329_1338.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2060384942_2060384946.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2071860205_0207.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2048305604.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/80911283_80911286.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2047038707_8709.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2056706291.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/85615141.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2042506790_2042506791.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2062427064_2062427071.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2050804253_2050804255.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2049308509.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/85622275.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/tob03508.13.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/98341362.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/96008379.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2012580836.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/tcal0367773.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2042374171_2042374174.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/03726299_03726300.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000347162.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/81745625_5631.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2061670897.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/89286069.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1003120857.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051111206.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2023012813.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/89618674.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2500120257_2500120259.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1000874341_1000874344.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/85146382.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000169583.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/94011032_94011033.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/03335936.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2024257952.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1003255575.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2043616262_2043616266.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2074737024_7028.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2078388379.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2048965327_2048965331.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1002404594.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1005020149_1005020151.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2061648199.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2042017254.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2060133571.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1000767446.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1001861874.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2022233786.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2075008304.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1001854313.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2042790056_2042790059.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2020134234.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/00450130.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000043910.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2064233234.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2015017717_2015017719.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/12850443.txt', 
'/content/drive/MyDrive/tobaco_OCR/Memo/85580479_85580482.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/89841379_1381.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2020157078.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2075169250_9251.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2054504061_2054504063.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2076790792.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/81331173_81331174.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/87348404.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/86265143.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2023431005.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000029906.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/tob15809.19.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2050959395_2050959399.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2020280503.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/88063780.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2030016727_2030016729.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2072813043.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/tnwl0049688_9693.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/87842606_2608.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2021548067.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2026374867_2026374868.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/88141193.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/92292808_2811.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/dun00608.51_dun00608.52.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2057775423.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/88899010.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/87800043_87800056.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1004865799_1004865800.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2020340083.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2040324939.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2023118626_2023118627.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2054964454.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2042014987.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/93802240.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0013140948.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/95666861_6867.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2021514201_2021514203.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000249984.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2026459463.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2078645121_5122.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1001850798.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/82411349_1362.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/98201361_1362.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/82777112.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2047441654.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1001857064_1001857068.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/83594945_4946.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2055649545.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000547826.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/83644858.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000296750.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051034159.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2041060506_2041060507.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/81153042_81153046.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000936338.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/98754928.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051691699_2051691700.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/633619.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2040358117.txt', 
'/content/drive/MyDrive/tobaco_OCR/Memo/2020238777.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/98468802_8809.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1349980.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/87328513_87328514.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2074663413_3414.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/89002796.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2062210337.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2073000609_0614.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2044393287.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/00000725.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1002907300.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2040907509.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2024821651_1652.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2029198047_2029198056.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2075067412.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/89096277_6278.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2082852748_2749.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2048714647.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/titx1529.40_titx1529.41.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/98416803_6804.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000262962.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/83664478.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2062304594.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/03854099.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2025449419.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000146942.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2042813526_2042813527.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/93164272_93164273.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2073008759_8760.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/98215328_5336.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/87644162_87644163.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1001866123_1001866124.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1000843369_1000843370.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/00119510.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/92672244.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/82866939.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2047932359_2047932361.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051080096.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/ti01480811_0814.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2040733938.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2072956847.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2020136618_2020136619.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2056291607.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2048124824_4825.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/82252355_2356.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/89678340.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2065224640_4641.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/91878674_91878675.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2024547864_2024547866.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1000775306.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/01046565.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/85014287_4299.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1003387594.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2043894005.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2044093019.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2040759890.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2048228137.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/89481112_1122.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/91863222.txt', 
'/content/drive/MyDrive/tobaco_OCR/Memo/24008317.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2057362082.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2043230492_2043230503.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2071493775_3778.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2023539362_2023539363.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2055043947_2055043948.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2044801453.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051064987.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000456149.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2022184512_2022184519.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/89282835.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/04242498_04242510.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000146507.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2022948982.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/04145740.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2044300004.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000391492.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2049298600_2049298603.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/01591430.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2078250319.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2025612606_2609.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2054857059.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2045845326.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2505338622.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2080486842.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2070416152.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1001809385_1001809386.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2056172030_2056172032.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2000457615_2000457620.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/11005495.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2057143978_2057143979.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000065944.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2061880058_0070.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2049461223.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2069625942_5947.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/85069315.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2043215175.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2050794145.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2081932182.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/tob07006.81.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2046579981_2046579987.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2045186816.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/01329768.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2504078584_2504078585.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2021601265.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2042389568_9570.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1001854931.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2065449140_9141.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2048153405_2048153423.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2031015329_2031015330.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2040885396.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2041400194.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/tob12300.33_tob12300.34.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2071386961.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/88980966.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000229184.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1001823466.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2070135879.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/88260444_88260447.txt', 
'/content/drive/MyDrive/tobaco_OCR/Memo/0000488853.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/71146504.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2073497485_7486.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2073114345.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/01390279.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2070268662.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2072607314_7316.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2015023714.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000131535.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/04146448_6449.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2049386508_2049386509.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2000621665.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2022244803_2022244805.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051035077_5081.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/83098512_8517.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2048240768_2048240769.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2054641494.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1003479904.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1005095578.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2047523173_3191.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2080896959_6964.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/01777521.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2083136135_6136.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/82088755_8759.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1396849.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2047929185.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2072940944.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1001767165.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/tob00500.02.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/87718409.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2501316792_2501316793.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2047921425.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2072006278.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2078202885_2886.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/89939582.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1003659451_1003659454.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/91637208_91637211.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/91975408_5411.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2028584557_2028584558.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2053522412.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2053471549.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2023551065_2023551066.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2023437101.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000264417.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2044198652.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2060528454_2060528455.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1000300824.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2000485447.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/00005729.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/tob18717.65.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2057722185_2057722186.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/80317519_7522.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000159555.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/96018853_8858.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/03000416_03000417.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000073229.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2024939330.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2022209162.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/91377755_91377757.txt', 
'/content/drive/MyDrive/tobaco_OCR/Memo/2065322958.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2074163081.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/89282762.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/87746801_87746802.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2051124321_2051124322.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/03551612.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2048129510.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/13497827.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/85418261.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/ti07830376.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2054620205_2054620206.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2057729267_2057729278.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2022161599_2022161600.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2050801870_1873.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2073453930.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2030974392.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/86280048_0055.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/80552294.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000045457.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/98419843.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/00104455_00104456.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/85004412.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2022177049_2022177054.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2074356039_6048.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2042812605.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/01337755.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/01329196.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/1000816599.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/86373252_3255.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2054608832_2054608833.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/2024816564_2024816570.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/12997349.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/87222823_2825.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000022175.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/82456991_6998.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000384019.txt', '/content/drive/MyDrive/tobaco_OCR/Memo/0000067671.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505311822.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50128017-8017.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50004413_50004414.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507622558+-2558.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/512753125.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502684440.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/512452246.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505486448_505486456.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/689417.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50519386-9386.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/500883632.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/516015914+-5914.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tob13613.26.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/CTRSP-FILES026628-66.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505031983_505031985.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518288917+-8918.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505051340_505051343.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506063133_506063136.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501725136_501725137.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518476232+-6232.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60015669.txt', 
'/content/drive/MyDrive/tobaco_OCR/Letter/512077487+-7488.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ti17120611_0612.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50554660-4660.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50499170-9170.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508407516_508407518.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/517098140_517098145.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/513988895_513988898.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10375498.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506658322+-8323.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/503664474_503664475.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/520693484+-3486.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10377432.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/522399399+-9399.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/500254004+-4006.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/514963180+-3180.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501871470.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508904759_508904761.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518532897+-2897.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/520865580+-5583.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50224756-4756.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50088409.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50083690.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/512245743_512245744.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507198806+-8809.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/514839018+-9019.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/CTRCONTRACTS009745-9.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508542996_508542997.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508976810_508976811.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11025683.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504441998_504442001.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502348783_502348784.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50005450.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/CTRCONTRACTS015206-5.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ti16400391.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/513130256.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505828969.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50222862-2868.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502266196_502266198.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/513262199.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504034961_504034962.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50166929-6929.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507387897_507387901.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/522028451+-8451.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50144521-4521.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/521435346+-5346.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tob02711.67.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60023115.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50437836-7836.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518768911+-8911.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50436174-6174.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/520815209+-5209.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/titx0224.70.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502050562.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504740366.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60010441.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/512289677+-9679.txt', 
'/content/drive/MyDrive/tobaco_OCR/Letter/507135769_507135770.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/513292964_513292967.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506366336.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/520070902+-0902.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60009061.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ti16860527_0528.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50156846-6846.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/515682605.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518057258_518057259.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507007456.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508865270+-5274.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505081291+-1291.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/516394905_516394908.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502450770_502450771.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/522305211+-5211.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/656421.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/503864315_503864318.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10426975.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505802830.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507063202_507063206.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/870648.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/519878148+-8149.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/500908611_500908612.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506354567_506354570.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508063247.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/500219724+-9725.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/512011799+-1799.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50093420-3421.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/516889156+-9156.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/513206214_513206217.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504419327_504419329.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60024487.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50042965_50042966.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11284250.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502429559_502429560.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505649215_505649221.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505876423.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/509696284+-6284.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504009806.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/516695861+-5861.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50330646-0646.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/515792806+-2806.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tcal0447628.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11001828_11001829.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ti16321601.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501498016+-8018.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11004530.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50203102-3102.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/512088115.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506014187.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505622748_505622751.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/00000831.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/515842254.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/511406901+-6901.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10164982.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/timo0003651_3652.txt', 
'/content/drive/MyDrive/tobaco_OCR/Letter/508161842.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504880625_504880626.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/512760291.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/524413197+-3197.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tnwl0002422.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/40007049-7050.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/517444539+-4540.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50585499-5499.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504463629_504463631.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11288120.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tob17621.69.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/503808253.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/titl0003493_3506.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501443933_501443934.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/523186064+-6066.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/titx0417.71.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60008309.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/653346.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tob07201.29.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/40044277-4277.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505815955_505815972.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505278562.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50027038.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60009710.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/titx0320.20.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/71119336.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/500784993+-4993.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506193791+-3792.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10021964.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518074129_518074130.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60021014.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/512540914_512540915.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/dun00318.93_dun00318.93.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/511011264_511011268.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/503242511.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/516628601+-8601.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506121204.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10431489.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ti31110620.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/13037818.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60021705.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60017528.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50129909-9909.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10429748.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50162077-2078.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/70118949-8950.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/512330245_512330247.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504381073.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507051676.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11010532.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50327024-7025.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50445206-5206.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502298366+-8366.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/513421755+-1755.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/520709850+-9850.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/500392757+-2757.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501470966.txt', 
'/content/drive/MyDrive/tobaco_OCR/Letter/50101792-1792.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506279786_506279793.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501358163_501358166.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10381924.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ti16671221_1223.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11026865.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/511241254.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/522612220+-2222.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506010719.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ton04021.20.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/727296.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10403212.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505632516_505632517.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/515360458.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50737588-7588.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504236557.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/500920901.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/500370717.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504210190_504210191.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/500320204.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507608307_507608308.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/512291680.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/520546820+-6823.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504301618+-1618.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/517783754+-3754.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50046879.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/509244518.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60007551.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508077193.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50704129-4129.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502794513_502794518.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508146503+-6507.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/70013040-3040.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507360836+-0837.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11242995.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11291368.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/509157078+-7082.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502303662_502303663.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/521479934+-9934.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501941108.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505988421_505988422.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518424618+-4618.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50151938-1938.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/520140971+-0972.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/524569241+-9242.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501542245.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/515809872_515809874.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501565193.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tob14232.72_tob14232.73.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50193176-3176.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518641553_518641555.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tob08602.49.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50661869-1869.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506464139.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11301944.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11276252.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/521115354+-5356.txt', 
'/content/drive/MyDrive/tobaco_OCR/Letter/44006199_44006202.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/500068563.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/40017854-7854.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504739223+-9228.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/CTRCONTRACTS026722-6.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505966068_505966069.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50067193.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10230391.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502756777_502756778.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506226060_506226061.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50177282-7282.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50615547-5548.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/521431153+-1153.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518519963+-9966.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ti16310640.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502349926i-9926j.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501152235_501152236.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518630372+-0375.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50404285-4285.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50079373.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/13003851.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50050161.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/503885478.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50739956-9956.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/510017822.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50030592.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/516465142_516465144.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ton01402.63_ton01402.65.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518500076+-0077.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504923090_504923091.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501754088_501754092.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11301143.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504081353_504081354.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60023807.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/503584077.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10430661.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507419113.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60016059.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/510112796+-2797.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11331272.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50019787.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/509559488.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50373820-3820.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11220408_11220409.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505993188_505993190.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11007947.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11290264.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/520222764+-2764.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/71153406.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506623600_506623602.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504334188_504334189.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tnjb0007854.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10235850.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/515121473+-1475.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/509452348+-2348.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ti06390634.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/500187028.txt', 
'/content/drive/MyDrive/tobaco_OCR/Letter/504115148.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/713402.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518193540.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50534026-4026.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502822581+-2585.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/513160165.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/600552.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50073615.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/13010826_13010827.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11281614.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10223631.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/523170476+-0476.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50572661-2661.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508131603_508131608.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10162340_10162342.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505449159_505449162.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/94000570.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50320092-0092.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502406921_502406922.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tob03000.60.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501420153_501420155.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50375676-5676.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50370017-0018.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/628017.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/509104355+-4360.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506453583+-3584.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ti11211794_1795.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50551250-1251.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/500976067.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50133805-3805.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/511397696.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10032894.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11277226.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/679995.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505239182+-9182.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/511111743+-1743.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501190520+-0520.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50014372.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518203735.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10432401.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507724696.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/509793995_509793996.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506434158_506434163.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501859325.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504033154+-3154.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506881197_506881198.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506263567_506263572.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505688958+-8963.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50321901-1901.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502056568.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504013747+-3748.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/CTRSP-FILES027316-73.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502364579.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504262724_504262728.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/509153956.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502393490+-3490.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505492811_505492812.txt', 
'/content/drive/MyDrive/tobaco_OCR/Letter/504789202_504789204.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50007953.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60018901.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11013274_11013275.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tob07507.45.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tob02614.42.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11246589.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ton03017.04_ton03017.05.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11019361.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507407645.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/512997470.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11002448.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tnwl0035736.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/586670.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/500409797.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508046753.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60012400.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504774700_504774702.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/517591807+-1809.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501943572+-3574.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501476059.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60006851.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/510623450.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/515766856+-6858.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50450839-0839.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/503673274_503673275.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501343042_501343047.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508500970.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50153588-3588.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507056895.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50108151-8151.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504732186+-2187.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11303519.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tnwl0028046.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502379814_502379820.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/512195767_512195769.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tcal0173460.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50006758.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/514116293+-6293.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/CTRSP-FILES014766-47.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tnwl0029158.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50052281.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11292395.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tnwl0019853.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506050726_506050727.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/525022027+-2027.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50462735-2735.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11289117.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508228211_508228220.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11279456_11279457.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501995871_501995872.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506079937.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502805196_502805204.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/500522645.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10390058.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518445181+-5181.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/0055501936.txt', 
'/content/drive/MyDrive/tobaco_OCR/Letter/ton02314.98.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506718643_506718644.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ti04111260_1261.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11302743.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50025487.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50135293-5293.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504213969+-3971.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504473130.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506873664.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508923688.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507966367+-6367.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/510072684_510072685.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501334959_501334973.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/12745912.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505889574+-9575.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50218257-8257.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11016728.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/512609438_512609439.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505499053_505499060.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50011833.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/512015167+-5168.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10399765.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/CTRSP-FILES026390-63.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11014972.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50673098-3098.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ti11491978.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501698636+-8636.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/524945610+-5611.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505575420_505575421.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/510221250_510221253.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50419732-9732.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/512315558.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505954765_505954768.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505454926h_505454926i.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50361273-1273.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/517600807+-0807.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50365515-5515.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10392535.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50017430.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/509282179_509282180.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/519585825+-5829.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ti16570359.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/513901494_513901500.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ti03840086_0087.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ti16601162.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506163412_506163415.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60020317.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/510336889.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/500505654_500505665.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/514421927+-1927.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50500997-0997.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/513415616.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50110821-0821.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60018214.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50708841-8841.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60022426.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/12721591.txt', 
'/content/drive/MyDrive/tobaco_OCR/Letter/50095371-5371.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504660175+-0176.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11280472.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518036678+-6680.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507124268+-4270.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505354097_505354098.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508511388.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508217217_508217221.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11286228.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504275731.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/509887153.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504409810_504409811.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/503500305_503500306.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50382699-2699.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10389325.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501547796+-7797.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518654143_518654144.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507404690_507404692.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/70015348-5349.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/517572982+-2983.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/2077871785_1786.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/522046763+-6770.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/502523402.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/500041544.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506028258.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10427746.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518695806+-5806.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508047850+-7850.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507084664_507084673.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/516737076+-7077.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tob12322.82.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/520527284+-7285.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518451410.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518617506+-7506.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60006786.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/503856809_503856811.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60013409.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506080143+-0149.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518241192+-1192.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508847380.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/511023955.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/515034743+-4743.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/501896772_501896773.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/511994948+-4948.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/509158416+-8419.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507386652+-6652.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ton04200.95.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11318657.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508951029+-1033.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/674889.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50034842_50034843.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11223019.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11287239_11287240.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/509666724_509666725.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505601044_505601051.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50284479-4479.txt', 
'/content/drive/MyDrive/tobaco_OCR/Letter/50163650-3650.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50071348_50071349.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50199028-9028.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/13149719.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/510700728.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/518533499+-3499.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/507995262.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/titx0100.29.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ton00908.15.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/513232721_513232722.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50467771-7771.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/CTRSP-FILES003409-34.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11285202.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/512948144.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/524237789+-7789.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50048797.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506913284.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50606546-6546.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/71210664.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/504361271_504361273.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60015620.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tob10623.04.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/50343351-3351.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/11293403.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/40015160-5160.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/tob16529.92.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10005191.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/60012482_60012483.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ti15732103.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/10424400.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/94002714.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/506538874_506538875.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/ti14861233.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/516028154+-8154.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/505316432+-6432.txt', '/content/drive/MyDrive/tobaco_OCR/Letter/508232590.txt', '/content/drive/MyDrive/tobaco_OCR/Report/510809052_510809074.txt', '/content/drive/MyDrive/tobaco_OCR/Report/504168509_504168510.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506521794_506521796.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506767138_506767143.txt', '/content/drive/MyDrive/tobaco_OCR/Report/503026367_503026370.txt', '/content/drive/MyDrive/tobaco_OCR/Report/11239042_11239045.txt', '/content/drive/MyDrive/tobaco_OCR/Report/519363746+-3756.txt', '/content/drive/MyDrive/tobaco_OCR/Report/510832749.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506252257_506252258.txt', '/content/drive/MyDrive/tobaco_OCR/Report/ti10691080_1083.txt', '/content/drive/MyDrive/tobaco_OCR/Report/10232045_10232047.txt', '/content/drive/MyDrive/tobaco_OCR/Report/500929359.txt', '/content/drive/MyDrive/tobaco_OCR/Report/518308532+-8533.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507778811_507778812.txt', '/content/drive/MyDrive/tobaco_OCR/Report/503478032+-8034.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507523129+-3129.txt', '/content/drive/MyDrive/tobaco_OCR/Report/508757575_508757594.txt', '/content/drive/MyDrive/tobaco_OCR/Report/525291085+-1088.txt', '/content/drive/MyDrive/tobaco_OCR/Report/ti12030045.txt', '/content/drive/MyDrive/tobaco_OCR/Report/509847620_509847621.txt', 
'/content/drive/MyDrive/tobaco_OCR/Report/tim00403.90_tim00403.91.txt', '/content/drive/MyDrive/tobaco_OCR/Report/500272469+-2470.txt', '/content/drive/MyDrive/tobaco_OCR/Report/522528735+-8736.txt', '/content/drive/MyDrive/tobaco_OCR/Report/71103533.txt', '/content/drive/MyDrive/tobaco_OCR/Report/515321531+-1533.txt', '/content/drive/MyDrive/tobaco_OCR/Report/500985711.txt', '/content/drive/MyDrive/tobaco_OCR/Report/ton03413.40.txt', '/content/drive/MyDrive/tobaco_OCR/Report/ti16740794_0797.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507995082_507995083.txt', '/content/drive/MyDrive/tobaco_OCR/Report/514217348+-7349.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505016658_505016663.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tob08823.31_tob08823.32.txt', '/content/drive/MyDrive/tobaco_OCR/Report/501656496.txt', '/content/drive/MyDrive/tobaco_OCR/Report/503428622.txt', '/content/drive/MyDrive/tobaco_OCR/Report/513966426+-6429.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505706456+-6459.txt', '/content/drive/MyDrive/tobaco_OCR/Report/titx1316.19_titx1316.21.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tob00112.50_tob00112.58.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506875666_506875672.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507367473e_507367473f.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507896104_507896105.txt', '/content/drive/MyDrive/tobaco_OCR/Report/504310925.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506631896.txt', '/content/drive/MyDrive/tobaco_OCR/Report/512974158_512974160.txt', '/content/drive/MyDrive/tobaco_OCR/Report/503276148_503276153.txt', '/content/drive/MyDrive/tobaco_OCR/Report/501866197+-6204.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507181418_507181422.txt', '/content/drive/MyDrive/tobaco_OCR/Report/512697956_512697958.txt', '/content/drive/MyDrive/tobaco_OCR/Report/503445062_503445063.txt', '/content/drive/MyDrive/tobaco_OCR/Report/504330344_504330348.txt', '/content/drive/MyDrive/tobaco_OCR/Report/523365592+-5592.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tob11819.95_tob11819.96.txt', '/content/drive/MyDrive/tobaco_OCR/Report/0001414986.txt', '/content/drive/MyDrive/tobaco_OCR/Report/517133786.txt', '/content/drive/MyDrive/tobaco_OCR/Report/502790213_502790214.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507017446+-7455.txt', '/content/drive/MyDrive/tobaco_OCR/Report/502813081.txt', '/content/drive/MyDrive/tobaco_OCR/Report/508301539.txt', '/content/drive/MyDrive/tobaco_OCR/Report/50255591-5591.txt', '/content/drive/MyDrive/tobaco_OCR/Report/503165138.txt', '/content/drive/MyDrive/tobaco_OCR/Report/504805602_504805609.txt', '/content/drive/MyDrive/tobaco_OCR/Report/502417535.txt', '/content/drive/MyDrive/tobaco_OCR/Report/502339200+-9201.txt', '/content/drive/MyDrive/tobaco_OCR/Report/513206103+-6103.txt', '/content/drive/MyDrive/tobaco_OCR/Report/522040863+-0863.txt', '/content/drive/MyDrive/tobaco_OCR/Report/12304582.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506247497_506247498.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505798058.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505869261_505869265.txt', '/content/drive/MyDrive/tobaco_OCR/Report/11298506_11298508.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tob01615.44_tob01615.45.txt', '/content/drive/MyDrive/tobaco_OCR/Report/520516240+-6257.txt', '/content/drive/MyDrive/tobaco_OCR/Report/509823077.txt', '/content/drive/MyDrive/tobaco_OCR/Report/509468162_509468163.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tim00629.17_tim00629.20.txt', 
'/content/drive/MyDrive/tobaco_OCR/Report/516516589+-6590.txt', '/content/drive/MyDrive/tobaco_OCR/Report/501263595_501263601.txt', '/content/drive/MyDrive/tobaco_OCR/Report/515867026_515867029.txt', '/content/drive/MyDrive/tobaco_OCR/Report/71225399.txt', '/content/drive/MyDrive/tobaco_OCR/Report/0000958516.txt', '/content/drive/MyDrive/tobaco_OCR/Report/524485243+-5245.txt', '/content/drive/MyDrive/tobaco_OCR/Report/50027604_50027607.txt', '/content/drive/MyDrive/tobaco_OCR/Report/0011987564.txt', '/content/drive/MyDrive/tobaco_OCR/Report/60013476.txt', '/content/drive/MyDrive/tobaco_OCR/Report/515395510_515395524.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507975870_507975893.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506812605.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506934156_506934157.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506348523_506348524.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506992129.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507745286_507745288.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505683667+-3670.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506238235_506238236.txt', '/content/drive/MyDrive/tobaco_OCR/Report/10423065_10423068.txt', '/content/drive/MyDrive/tobaco_OCR/Report/510907182_510907183.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506650717+-0718.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507320874+-0882.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tob06632.98_tob06632.99.txt', '/content/drive/MyDrive/tobaco_OCR/Report/524257370+-7371.txt', '/content/drive/MyDrive/tobaco_OCR/Report/503983498.txt', '/content/drive/MyDrive/tobaco_OCR/Report/50258031-8032.txt', '/content/drive/MyDrive/tobaco_OCR/Report/500195739_500195747.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505127415_505127418.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tnwl0002042_2045.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507828129_507828145.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tnwl0000110_0112.txt', '/content/drive/MyDrive/tobaco_OCR/Report/50118021-8021.txt', '/content/drive/MyDrive/tobaco_OCR/Report/508536414+-6418.txt', '/content/drive/MyDrive/tobaco_OCR/Report/504641839_504641846.txt', '/content/drive/MyDrive/tobaco_OCR/Report/ton02204.35_ton02204.37.txt', '/content/drive/MyDrive/tobaco_OCR/Report/501115325+-5327.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507786984_507786991.txt', '/content/drive/MyDrive/tobaco_OCR/Report/509808805+-8811.txt', '/content/drive/MyDrive/tobaco_OCR/Report/511421807.txt', '/content/drive/MyDrive/tobaco_OCR/Report/500735305.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505375034_505375035.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505228919.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506131656_506131659.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507767713_507767722.txt', '/content/drive/MyDrive/tobaco_OCR/Report/508599869_508599898.txt', '/content/drive/MyDrive/tobaco_OCR/Report/511408565+-8565.txt', '/content/drive/MyDrive/tobaco_OCR/Report/501557447_501557455.txt', '/content/drive/MyDrive/tobaco_OCR/Report/500877285_500877288.txt', '/content/drive/MyDrive/tobaco_OCR/Report/512687521+-7522.txt', '/content/drive/MyDrive/tobaco_OCR/Report/513103226+-3228.txt', '/content/drive/MyDrive/tobaco_OCR/Report/50216661-6661.txt', '/content/drive/MyDrive/tobaco_OCR/Report/518027091_518027095.txt', '/content/drive/MyDrive/tobaco_OCR/Report/517992376+-2378.txt', '/content/drive/MyDrive/tobaco_OCR/Report/514948201+-8201.txt', 
'/content/drive/MyDrive/tobaco_OCR/Report/500649033_500649045.txt', '/content/drive/MyDrive/tobaco_OCR/Report/515610282+-0283.txt', '/content/drive/MyDrive/tobaco_OCR/Report/50244186-4186.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tob09000.76.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506153379_506153380.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507963083_507963090.txt', '/content/drive/MyDrive/tobaco_OCR/Report/522629373+-9373.txt', '/content/drive/MyDrive/tobaco_OCR/Report/24010243_24010244.txt', '/content/drive/MyDrive/tobaco_OCR/Report/509611113_509611115.txt', '/content/drive/MyDrive/tobaco_OCR/Report/ti12911987_1989.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506296226+-6227.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tnwl0053832_3842.txt', '/content/drive/MyDrive/tobaco_OCR/Report/508032020_508032021.txt', '/content/drive/MyDrive/tobaco_OCR/Report/514725860_514725861.txt', '/content/drive/MyDrive/tobaco_OCR/Report/501326452_501326456.txt', '/content/drive/MyDrive/tobaco_OCR/Report/501620093_501620119.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tob06201.15_tob06201.19.txt', '/content/drive/MyDrive/tobaco_OCR/Report/517155795+-5795.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507061337+-1350.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tob15511.67_tob15511.69.txt', '/content/drive/MyDrive/tobaco_OCR/Report/511988428_511988437.txt', '/content/drive/MyDrive/tobaco_OCR/Report/504988903+-8904.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506297104_506297105.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506030259.txt', '/content/drive/MyDrive/tobaco_OCR/Report/522843979+-3982.txt', '/content/drive/MyDrive/tobaco_OCR/Report/524363056+-3063.txt', '/content/drive/MyDrive/tobaco_OCR/Report/10176813.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506583220_506583221.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tob09314.17.txt', '/content/drive/MyDrive/tobaco_OCR/Report/ton01220.33_ton01220.60.txt', '/content/drive/MyDrive/tobaco_OCR/Report/11004538.txt', '/content/drive/MyDrive/tobaco_OCR/Report/519792044+-2049.txt', '/content/drive/MyDrive/tobaco_OCR/Report/502071859_502071866.txt', '/content/drive/MyDrive/tobaco_OCR/Report/524518395+-8418.txt', '/content/drive/MyDrive/tobaco_OCR/Report/500610469_500610475.txt', '/content/drive/MyDrive/tobaco_OCR/Report/502829678_502829693.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507676370+-6377.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506176307_506176326.txt', '/content/drive/MyDrive/tobaco_OCR/Report/516052362+-2363.txt', '/content/drive/MyDrive/tobaco_OCR/Report/525672323+-2324.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tob10829.35.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tim00404.83_tim00404.84.txt', '/content/drive/MyDrive/tobaco_OCR/Report/50168370-8370.txt', '/content/drive/MyDrive/tobaco_OCR/Report/ti13910019_0020.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505939350.txt', '/content/drive/MyDrive/tobaco_OCR/Report/503958547_503958548.txt', '/content/drive/MyDrive/tobaco_OCR/Report/0000972342.txt', '/content/drive/MyDrive/tobaco_OCR/Report/594375.txt', '/content/drive/MyDrive/tobaco_OCR/Report/504783010_504783011.txt', '/content/drive/MyDrive/tobaco_OCR/Report/512481148_512481149.txt', '/content/drive/MyDrive/tobaco_OCR/Report/518039321.txt', '/content/drive/MyDrive/tobaco_OCR/Report/10167656.txt', '/content/drive/MyDrive/tobaco_OCR/Report/508270873_508270880.txt', '/content/drive/MyDrive/tobaco_OCR/Report/ti11512619_2624.txt', 
'/content/drive/MyDrive/tobaco_OCR/Report/511440042_511440053.txt', '/content/drive/MyDrive/tobaco_OCR/Report/514120277.txt', '/content/drive/MyDrive/tobaco_OCR/Report/511188446+-8449.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506135672_506135674.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507240573_507240574.txt', '/content/drive/MyDrive/tobaco_OCR/Report/519046949+-6954.txt', '/content/drive/MyDrive/tobaco_OCR/Report/504841352.txt', '/content/drive/MyDrive/tobaco_OCR/Report/ti06071122_1124.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tob14515.92_tob14516.06.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505830872+-0875.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tob08110.23_tob08110.30.txt', '/content/drive/MyDrive/tobaco_OCR/Report/515245506+-5506.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505731955+-1955.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506040089_506040092.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505515167_505515171.txt', '/content/drive/MyDrive/tobaco_OCR/Report/513904111+-4118.txt', '/content/drive/MyDrive/tobaco_OCR/Report/511052379_511052380.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tcal0230966.txt', '/content/drive/MyDrive/tobaco_OCR/Report/503565734+-5736.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506189288_506189298.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507783114+-3116.txt', '/content/drive/MyDrive/tobaco_OCR/Report/504221495_504221497.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tob15907.46.txt', '/content/drive/MyDrive/tobaco_OCR/Report/10127889_10127890.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505968942.txt', '/content/drive/MyDrive/tobaco_OCR/Report/10103478_10103482.txt', '/content/drive/MyDrive/tobaco_OCR/Report/11225239_11225242.txt', '/content/drive/MyDrive/tobaco_OCR/Report/13091906.txt', '/content/drive/MyDrive/tobaco_OCR/Report/587167.txt', '/content/drive/MyDrive/tobaco_OCR/Report/504239537+-9540.txt', '/content/drive/MyDrive/tobaco_OCR/Report/ti09910500_0513.txt', '/content/drive/MyDrive/tobaco_OCR/Report/517285472+-5474.txt', '/content/drive/MyDrive/tobaco_OCR/Report/522929206+-9206.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505136134_505136135.txt', '/content/drive/MyDrive/tobaco_OCR/Report/503646128+-6129.txt', '/content/drive/MyDrive/tobaco_OCR/Report/500206087_500206093.txt', '/content/drive/MyDrive/tobaco_OCR/Report/10044882_10044909.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tim01047.91.txt', '/content/drive/MyDrive/tobaco_OCR/Report/501713950_501713953.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506023805_506023808.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506233399+-3399.txt', '/content/drive/MyDrive/tobaco_OCR/Report/515242497_515242498.txt', '/content/drive/MyDrive/tobaco_OCR/Report/502853056+-3063.txt', '/content/drive/MyDrive/tobaco_OCR/Report/503117428_503117430.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507692234+-2243.txt', '/content/drive/MyDrive/tobaco_OCR/Report/504164620_504164624.txt', '/content/drive/MyDrive/tobaco_OCR/Report/569669.txt', '/content/drive/MyDrive/tobaco_OCR/Report/24006371_24006372.txt', '/content/drive/MyDrive/tobaco_OCR/Report/ti13540621_0622.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505938120+-8126.txt', '/content/drive/MyDrive/tobaco_OCR/Report/0001251167.txt', '/content/drive/MyDrive/tobaco_OCR/Report/521513467+-3500.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506316706_506316722.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506888300_506888301.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505026775.txt', 
'/content/drive/MyDrive/tobaco_OCR/Report/ti10310980_0982.txt', '/content/drive/MyDrive/tobaco_OCR/Report/ti30749189_9190.txt', '/content/drive/MyDrive/tobaco_OCR/Report/517574050+-4065.txt', '/content/drive/MyDrive/tobaco_OCR/Report/500515948.txt', '/content/drive/MyDrive/tobaco_OCR/Report/522868227+-8227.txt', '/content/drive/MyDrive/tobaco_OCR/Report/500272849_500272900.txt', '/content/drive/MyDrive/tobaco_OCR/Report/515575465+-5466.txt', '/content/drive/MyDrive/tobaco_OCR/Report/tob07526.89.txt', '/content/drive/MyDrive/tobaco_OCR/Report/519626567+-6574.txt', '/content/drive/MyDrive/tobaco_OCR/Report/503999149+-9150.txt', '/content/drive/MyDrive/tobaco_OCR/Report/504855605+-5605.txt', '/content/drive/MyDrive/tobaco_OCR/Report/0001166781.txt', '/content/drive/MyDrive/tobaco_OCR/Report/518405084+-5085.txt', '/content/drive/MyDrive/tobaco_OCR/Report/60033390_60033392.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505983529_505983530.txt', '/content/drive/MyDrive/tobaco_OCR/Report/ti02550955_0958.txt', '/content/drive/MyDrive/tobaco_OCR/Report/507838537_507838544.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505856025_505856027.txt', '/content/drive/MyDrive/tobaco_OCR/Report/509227491+-7492.txt', '/content/drive/MyDrive/tobaco_OCR/Report/506173089.txt', '/content/drive/MyDrive/tobaco_OCR/Report/505835991_505835992.txt', '/content/drive/MyDrive/tobaco_OCR/Report/520843662+-3668.txt', '/content/drive/MyDrive/tobaco_OCR/Report/516597850+-7850.txt', '/content/drive/MyDrive/tobaco_OCR/Report/511085054_511085060.txt', '/content/drive/MyDrive/tobaco_OCR/Report/502798863_502798869.txt', '/content/drive/MyDrive/tobaco_OCR/Report/512377084_512377092.txt', '/content/drive/MyDrive/tobaco_OCR/Report/502980775_502980776.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085134480.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078454149.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2067683989.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081766375.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081500243b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078758393a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078791240b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2074848927.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527904737+-4737.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2070106870a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085753145a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078626411.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085724363.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085270500a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528803013+-3014.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2080185536.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085777318.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085282002.txt', '/content/drive/MyDrive/tobaco_OCR/Email/525220448+-0448.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527795578+-5578.txt', '/content/drive/MyDrive/tobaco_OCR/Email/531327349+-7349.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2080436553a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2077816903.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078424580.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078711737a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/531313268+-3272.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527970475+-0476.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078386156b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2073826651.txt', '/content/drive/MyDrive/tobaco_OCR/Email/531298458+-8472.txt', 
'/content/drive/MyDrive/tobaco_OCR/Email/2076179775b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2082986649c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085115771c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078633079.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081500121b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078345432.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085109849a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078852235.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528017710+-7711.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2077615469c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078312584.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085121643c_1644.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528802826+-2826.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085134821a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085726251a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505520136.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2070163951.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085760833a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505217313a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505225305a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085056272a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085111504a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085113097a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081700495a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2075317216b_7217.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085678310c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085125038c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2064213021d.txt', '/content/drive/MyDrive/tobaco_OCR/Email/81882169_2170.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2070383449b_3450.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072482454.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078373762.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085761260b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085697684b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527925861+-5862.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2070975926c_5927.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2071863501a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2074678024a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2077982703a_2704.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081917771.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078870030.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528624468+-4468.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2077620852b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078881416.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078873797.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085078244.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078772581.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2080731955c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078172039.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528833979+-3980.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2071863734c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505508423.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085802279b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2071346649d.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078565900.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2082057595.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528796992+-6995.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528859565+-9566.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085797595.txt', 
'/content/drive/MyDrive/tobaco_OCR/Email/2078748816a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085135535b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527896358+-6358.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078629286.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084325895.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527974846+-4846.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2074731290a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2071340730a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084288847c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528449937+-9938.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078796699a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085262852.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528483841+-3842.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527993750+-3750.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085697107e_7108.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085119305a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2067314284_2067314285.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085574918a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083319138c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/530218757+-8757.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2082375116.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2073258365.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083647688d.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083319500a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072568903a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2075558303.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527901046+-1046.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078700602c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083647487b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2073979455.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085273457_3464.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528986985+-6985.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2074655972.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528029471+-9471.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2071862965a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2071972378.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2070045633c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2071106472b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072356497d.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528373026+-3027.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085764028a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078585095.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085789783b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2071862304d_2305.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085542857a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083292221.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085103910d_3911.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505591406.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084375937.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528010701+-0701p.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081500771.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083318854a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2071565220a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2076110608a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2067708396.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2076342380b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085779941.txt', '/content/drive/MyDrive/tobaco_OCR/Email/99410006.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084391010a.txt', 
'/content/drive/MyDrive/tobaco_OCR/Email/527890842+-0842.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2071862760a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085787493a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2077189218.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2074824190a_4191.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2071787140a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527929912+-9912.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078702965.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2074686186a_6187.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072597944b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085108289.txt', '/content/drive/MyDrive/tobaco_OCR/Email/530596715+-6715.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078586412.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078802755a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528986461+-6462.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528853599+-3600.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085760404.txt', '/content/drive/MyDrive/tobaco_OCR/Email/530886918+-6923.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085124806a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2075733578.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085801251.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081629036b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085793441.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2075569307.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527799804+-9805.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527948778+-8779.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2080891364d_1365.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078168491.txt', '/content/drive/MyDrive/tobaco_OCR/Email/530038367+-8367.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2714401005.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505170172.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2071862544b_2545.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083183167c_3168.txt', '/content/drive/MyDrive/tobaco_OCR/Email/530281906+-1906.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528749429+-9430.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2080829018.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2082369729.txt', '/content/drive/MyDrive/tobaco_OCR/Email/530760166+-0167.txt', '/content/drive/MyDrive/tobaco_OCR/Email/81841302.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085316049.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081933672.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084391171.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2075311121.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527943453+-3453.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2079132129.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085754049c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085786967c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085775268d.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085792209c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085764954b_4956.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081703215a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2082058428.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527790672+-0672.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085111870a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084055453b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078853416b_3418.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084391198a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085770392c_0393.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2070998233a.txt', 
'/content/drive/MyDrive/tobaco_OCR/Email/2072389090c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083157156.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072787482.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085160699.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085041601b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078790784.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078607589.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2708506046a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2075057831a_7832.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078800112b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2070045922a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085109520c_9521.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078307912.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505476088a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072945297a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085724563a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527802957+-2958.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085136133c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078337049d.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528600874+-0874.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078305770.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085794222a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078851793.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528377798+-7798.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083634169.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072060377a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084029676.txt', '/content/drive/MyDrive/tobaco_OCR/Email/531487926+-7926.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072747154b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/522873332+-3332.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085123442.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085133627b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2082023149.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2064213091c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085114044b_4045.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078783765.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528433655+-3655.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528807943+-7949.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2070045276d.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078463387.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083318596a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527966261+-6261.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528806197+-6199.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078787051a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083292414a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078526278.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2082564294a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505229081a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2075808091a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072569303.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078346945.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085126533a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085763719.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528008233+-8233.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085752507d.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2076807350.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527990161+-0162.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527885883+-5883.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085725088c.txt', 
'/content/drive/MyDrive/tobaco_OCR/Email/2084031123b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2063075627.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078560147.txt', '/content/drive/MyDrive/tobaco_OCR/Email/530298970+-8976.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083618277a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084029934a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083647887a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085110291d.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085134154c_4155.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085289249.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078854139.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084289636b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2080150556b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527793256+-3258.txt', '/content/drive/MyDrive/tobaco_OCR/Email/530919692+-9693.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527888831+-8831.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078174733_4734.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085792518a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2073812839.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085559043a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2080116567a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078875153.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085116177c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083648375a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505940096.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084391924.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083182848d.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085776551.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085785523.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2082110412.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2073761095a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085781094b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085788705.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2064334043b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078209429a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078880328b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505413292.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2074977936.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078434728.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528430695+-0695.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528374671+-4671.txt', '/content/drive/MyDrive/tobaco_OCR/Email/80909413.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085136346b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085107848.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081501359a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/530489046+-9046.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2069751488.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084289084b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/531635588+-5589.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072389522a_9523.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078376823a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085726478a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505226958.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2074221587a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085074946a_4947.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527804396+-4397.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085756034b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505944249.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528581385+-1389.txt', 
'/content/drive/MyDrive/tobaco_OCR/Email/2085136611b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/529243252+-3252.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2070361099.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527834557+-4557.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2070091787d.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078617243.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084392262.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085761790a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/530237627+-7627.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2075285166b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083432603a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2064936813.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078337799.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527852044+-2044.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085029287.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528023553+-3553.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2070046454a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2067238739.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078281731.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083204707.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2073528906.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081970957a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072313825b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072389709c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2070046723a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2079129860.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505423846b_3847.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2074515234a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528678480+-8481.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528698555+-8560.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2075711719.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085697275a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2082835068c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085135750d_5751.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2079066936.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081961826a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085751613a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085120731.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085116708.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084125478a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085798744.txt', '/content/drive/MyDrive/tobaco_OCR/Email/81887335.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2080325954a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505398309c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081409793.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528678082+-8082.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078467148.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527862259+-2259.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2079043035.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085270849b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084033434.txt', '/content/drive/MyDrive/tobaco_OCR/Email/530465535+-5536.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2080257713.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2071773443a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083634674c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2071977534a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085757193.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085782641.txt', '/content/drive/MyDrive/tobaco_OCR/Email/529216154+-6154.txt', 
'/content/drive/MyDrive/tobaco_OCR/Email/2083648580a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078868519.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2082400874.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2076958821a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085791853c_1854.txt', '/content/drive/MyDrive/tobaco_OCR/Email/530609086+-9086.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2076803208.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2064207315c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072341081d_1082.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085103737c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/529202087+-2087.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527812852+-2853.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2073789018b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2080441685b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085801762.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085108596a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085780454d.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085793905a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2076956842a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527880321+-0321.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078874121.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078706985.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2075247914.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528744679+-4679.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085795423.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078309558.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085790906b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078736964.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2080777017a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084289442a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2076406705.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084030293.txt', '/content/drive/MyDrive/tobaco_OCR/Email/529293099+-3108.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072948072a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084029834b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2080109802c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078611775.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072389206d_9207.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528348661+-8661.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078802227c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085782239a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081184286.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527911602+-1602.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2067640632.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2064207054c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085141399a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/524449506+-9506.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078879163a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528408504+-8504.txt', '/content/drive/MyDrive/tobaco_OCR/Email/531607072+-7076.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527865354+-5354.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528954681+-4681.txt', '/content/drive/MyDrive/tobaco_OCR/Email/530979244+-9246.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2067665158.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2079180199.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078379610a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084390753b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072334938a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085753598a.txt', 
'/content/drive/MyDrive/tobaco_OCR/Email/527825113+-5113.txt', '/content/drive/MyDrive/tobaco_OCR/Email/530489831+-9834.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081179722a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083648195a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085778079.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078802747_2748.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2069750432_0433.txt', '/content/drive/MyDrive/tobaco_OCR/Email/529046054+-6061.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085121166.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085114496a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2082063281b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528436790+-6790.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2071863289a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078336709.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2080265356.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078339485c_9486.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505232397a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085272616a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2074179164.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2067311301_2067311302.txt', '/content/drive/MyDrive/tobaco_OCR/Email/530908549+-8551.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2070044942d.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2073428230a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078718698c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/530469306+-9307.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085768458a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078637110.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528759181+-9183.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085759967a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085697517.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2076857864.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078340010b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081422072.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2082224692c_4693.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085756613a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2074602877.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085125283a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083647284d_7285.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2071594545b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528679489+-9489.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528741590+-1590.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2082042530a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/529196125+-6125.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2082102932.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081980036a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078801610c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078315311.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072705831.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527840749+-0749.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528405175+-5175.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2075574247c_4248.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528348589+-8589.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505169075.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078705877_5880.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078785333.txt', '/content/drive/MyDrive/tobaco_OCR/Email/529191296+-1304.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085246969.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078208162b.txt', 
'/content/drive/MyDrive/tobaco_OCR/Email/528003796+-3796.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2080650521c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528051803+-1803.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078718342.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2067649639.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2077377563.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527820130+-0130.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085696947f_6948.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081565975c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085774142.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2069753721_3722.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072949212.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085724771a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085532615_2616.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085123878b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085775606.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085319449.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085696767b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2077619372a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078310078.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078866065.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2077480206a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085774571d.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2076012709d_2710.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2076071228b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/529232712+-2723.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528408427+-8427.txt', '/content/drive/MyDrive/tobaco_OCR/Email/531678159+-8159.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078349619.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083635833c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2080536914.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081510216c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078876269.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2074589755b_9756.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528014900+-4900.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081520879a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083635605b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2082782963.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085115503.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085782997a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528012479+-2479.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505287241a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085272008a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505322245.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081499563a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078865373.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078721312.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085269729.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081193524.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085800016d.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081465234a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2070361546a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/521442658+-2658.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528896972+-6972.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078616645.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2064850090a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2073356804b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/529247216+-7216.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083318296a.txt', 
'/content/drive/MyDrive/tobaco_OCR/Email/2071863841.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078871620.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085120453b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085111332a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085135313a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072389400b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/529222042+-2043.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528416668+-6668.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2505141976.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2702800465.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2075574004c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085786587a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/529316773+-6782.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081008405.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2072356326b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/528345139+-5140.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2073624878.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081762672.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2078607866.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2075690818a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2081350001b.txt', '/content/drive/MyDrive/tobaco_OCR/Email/522871570+-1570.txt', '/content/drive/MyDrive/tobaco_OCR/Email/527928243+-8244.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085765256.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2067374252.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2080960195c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2067227128.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2079131244.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085269110.txt', '/content/drive/MyDrive/tobaco_OCR/Email/531440864+-0985.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2079118800a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085177226a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084391536.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2084194412.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2083287881.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2079175162.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085122028a.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085542332c.txt', '/content/drive/MyDrive/tobaco_OCR/Email/2085133858b.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/514109075+-9076.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2070501595.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2084426710_6711.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2065199983_9984.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/505576130+-6130.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2084425560_5561.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502597916.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502593053.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502610513+-0516.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/503781513.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502471549.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/91689217.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/501131762+-1762.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/tob19002.28_tob19002.30.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502474424.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/500248654+-8654.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/501303452+-3452.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/509132587+-2587.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2071388237.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/503783160+-3160.txt', 
'/content/drive/MyDrive/tobaco_OCR/ADVE/2041076526.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/71408946.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2045830670.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2040811409.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2040697247.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502610107+-0107.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/1002763015.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502607162+-7162.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2040183521.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/664383.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/503783036+-3036.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/0030048095.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502141224.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/92221516_1518.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502472358.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/0030049569.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2070262428_2429.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/512696082+-6082.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/509134595+-4596.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2083306564.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/518050854_518050855.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2084513422.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/1002761179.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/527844182+-4182.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2041511979.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/91551808.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2058501023.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/04106546.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/1002761668.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/87063899.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2073971431.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/12779971.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/500161305.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/509134160.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2044759026.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2084396927.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/86122854.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/521151227+-1227.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2071466317_6318.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2084396082.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/93330837.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2043025626.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502594404.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2064004681.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/82262347_2348.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/12779302.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/96324864.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2058502871.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/501947737_501947738.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502599068.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2085043981.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/501157471+-7480.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2047920303.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2070712842_2843.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/92206285.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/503960838.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502472939.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2084510807.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/71178054.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2067540350_2067540352.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/503961738+-1738.txt', 
'/content/drive/MyDrive/tobaco_OCR/ADVE/2501030007.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/1002761869.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502593762.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/71895703.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/521150302+-0302.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2058501637.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2049398699.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2084427165_7166.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2061190138.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2073487737.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/13581913.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2084412816_2817.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/tob01701.23.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2045081234.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502597344.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2084420704.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/tob06517.41_tob06517.42.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/89872610.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502595294.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/1005116329_6330.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/99332057_2058.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2070711761.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/12320856.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/03567810.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/0030048989.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502600009+-0009.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2061141775.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/71896384.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502219477.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/0000435350.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/503960254.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2061003642.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/93422915.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2045630056.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/507806606.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502590268.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502595869.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2061000301.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2040144258.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2071465080_5081.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/509137948+-7949.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502613661a-3662.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/tob08311.50.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502596443.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/521150892+-0895.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/503950104.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/0000136188.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502217247+-7247.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2070494554_4555.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/71374678.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/85251705.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/93332074_2082.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2042333574.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/517950663+-0671.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2070715949_5950.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502339758+-9761.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2070717320_7321.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2045573975.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502108541+-8546.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2042029706.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/500323180+-3180.txt', 
'/content/drive/MyDrive/tobaco_OCR/ADVE/92111083.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502611995a-1996.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/522934065+-4065.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/1002760819.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/509131202.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/03722789.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/517502631+-2631.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/500076275_500076282.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/91656417.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2061012559.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2069506477.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/1002325458.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2042348217.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2058502246.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/503146051+-6051.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/514766071+-6072.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2070500515_0516.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/87064470.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/71895027.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2025016432.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/509131306+-1306.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/509136909+-6910.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/91505342_5343.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/87064857.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2021284642.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/500713757_500713758.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2070718633.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502607827+-7827.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2061199558.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502218620.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2048729226.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502592126.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/03496270.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2063550295.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2023738160.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2045731638.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2058503524.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/0000556056.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502599535+-9535.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/509130720+-0721.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2058503900.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2061002797.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2501053670.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/506672255_506672256.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/99342431_2432.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/1002762773.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2072998094.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2501015790.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/515127169_515127171.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/04412344.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/517508450+-8453.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/517501237+-1237.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/92678691.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2023086360.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/92236848.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/71329566.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/87066788.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/ton01906.84.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/509139618+-9618.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2084426012_6013.txt', 
'/content/drive/MyDrive/tobaco_OCR/ADVE/2050834062.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2061002005.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502612183+-2183.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2070717402_7403.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/94054618.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/92086756.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2070711254_1255.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/87003967_87003968.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2048509434.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/04233037_04233039.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2064932937.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/517500165+-0165.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/514968966+-8967.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/503524860_503524863.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/503782317.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/509138972+-8972.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/1005150029.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2058504564.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2084396137_6138.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2072281108.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/502473843.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2041089398.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/517508300+-8301.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/2024182848.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/512725630.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/509137457+-7458.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/93341527.txt', '/content/drive/MyDrive/tobaco_OCR/ADVE/04102204.txt', '/content/drive/MyDrive/tobaco_OCR/News/1003044581-a.txt', '/content/drive/MyDrive/tobaco_OCR/News/2023272505.txt', '/content/drive/MyDrive/tobaco_OCR/News/10030149.txt', '/content/drive/MyDrive/tobaco_OCR/News/2072201232.txt', '/content/drive/MyDrive/tobaco_OCR/News/2085577788.txt', '/content/drive/MyDrive/tobaco_OCR/News/2023962145.txt', '/content/drive/MyDrive/tobaco_OCR/News/1003044360-a.txt', '/content/drive/MyDrive/tobaco_OCR/News/2083779691.txt', '/content/drive/MyDrive/tobaco_OCR/News/2010030882.txt', '/content/drive/MyDrive/tobaco_OCR/News/87023168_3169.txt', '/content/drive/MyDrive/tobaco_OCR/News/2083779313.txt', '/content/drive/MyDrive/tobaco_OCR/News/0000162076.txt', '/content/drive/MyDrive/tobaco_OCR/News/2070397531.txt', '/content/drive/MyDrive/tobaco_OCR/News/2046288261.txt', '/content/drive/MyDrive/tobaco_OCR/News/2072552877.txt', '/content/drive/MyDrive/tobaco_OCR/News/2073204566.txt', '/content/drive/MyDrive/tobaco_OCR/News/2057077308.txt', '/content/drive/MyDrive/tobaco_OCR/News/10031147.txt', '/content/drive/MyDrive/tobaco_OCR/News/2063832954.txt', '/content/drive/MyDrive/tobaco_OCR/News/2025028443.txt', '/content/drive/MyDrive/tobaco_OCR/News/2015036663.txt', '/content/drive/MyDrive/tobaco_OCR/News/2025028269.txt', '/content/drive/MyDrive/tobaco_OCR/News/2080731994.txt', '/content/drive/MyDrive/tobaco_OCR/News/85879135_85879138.txt', '/content/drive/MyDrive/tobaco_OCR/News/2077315759.txt', '/content/drive/MyDrive/tobaco_OCR/News/0000330171.txt', '/content/drive/MyDrive/tobaco_OCR/News/2025028501.txt', '/content/drive/MyDrive/tobaco_OCR/News/2047577340.txt', '/content/drive/MyDrive/tobaco_OCR/News/1003044181.txt', '/content/drive/MyDrive/tobaco_OCR/News/00625137.txt', '/content/drive/MyDrive/tobaco_OCR/News/2023798955.txt', '/content/drive/MyDrive/tobaco_OCR/News/85650285.txt', '/content/drive/MyDrive/tobaco_OCR/News/1002402586.txt', 
'/content/drive/MyDrive/tobaco_OCR/News/1003044160-e.txt', '/content/drive/MyDrive/tobaco_OCR/News/2083513848.txt', '/content/drive/MyDrive/tobaco_OCR/News/10226460.txt', '/content/drive/MyDrive/tobaco_OCR/News/94682942.txt', '/content/drive/MyDrive/tobaco_OCR/News/2046407013_7014.txt', '/content/drive/MyDrive/tobaco_OCR/News/2080714838.txt', '/content/drive/MyDrive/tobaco_OCR/News/2026261439.txt', '/content/drive/MyDrive/tobaco_OCR/News/2073724138.txt', '/content/drive/MyDrive/tobaco_OCR/News/2046117930.txt', '/content/drive/MyDrive/tobaco_OCR/News/2070910722.txt', '/content/drive/MyDrive/tobaco_OCR/News/2083547815_7816.txt', '/content/drive/MyDrive/tobaco_OCR/News/1003043510-a.txt', '/content/drive/MyDrive/tobaco_OCR/News/2073582086.txt', '/content/drive/MyDrive/tobaco_OCR/News/1002402743a.txt', '/content/drive/MyDrive/tobaco_OCR/News/2070051956.txt', '/content/drive/MyDrive/tobaco_OCR/News/2501374290.txt', '/content/drive/MyDrive/tobaco_OCR/News/2025854306.txt', '/content/drive/MyDrive/tobaco_OCR/News/2070897169.txt', '/content/drive/MyDrive/tobaco_OCR/News/2070877098.txt', '/content/drive/MyDrive/tobaco_OCR/News/1003043256-a.txt', '/content/drive/MyDrive/tobaco_OCR/News/50288669-8669.txt', '/content/drive/MyDrive/tobaco_OCR/News/2505370845.txt', '/content/drive/MyDrive/tobaco_OCR/News/2046593592.txt', '/content/drive/MyDrive/tobaco_OCR/News/1003042516-b.txt', '/content/drive/MyDrive/tobaco_OCR/News/0000940821.txt', '/content/drive/MyDrive/tobaco_OCR/News/2023269862.txt', '/content/drive/MyDrive/tobaco_OCR/News/2071991216.txt', '/content/drive/MyDrive/tobaco_OCR/News/2001222207.txt', '/content/drive/MyDrive/tobaco_OCR/News/2025500007.txt', '/content/drive/MyDrive/tobaco_OCR/News/2071778497.txt', '/content/drive/MyDrive/tobaco_OCR/News/1003537661.txt', '/content/drive/MyDrive/tobaco_OCR/News/92859262.txt', '/content/drive/MyDrive/tobaco_OCR/News/2046546822a.txt', '/content/drive/MyDrive/tobaco_OCR/News/2083780649.txt', '/content/drive/MyDrive/tobaco_OCR/News/10031395.txt', '/content/drive/MyDrive/tobaco_OCR/News/0000114117.txt', '/content/drive/MyDrive/tobaco_OCR/News/2016003493.txt', '/content/drive/MyDrive/tobaco_OCR/News/2071620452_0454.txt', '/content/drive/MyDrive/tobaco_OCR/News/10030834.txt', '/content/drive/MyDrive/tobaco_OCR/News/2065340065.txt', '/content/drive/MyDrive/tobaco_OCR/News/2083784934.txt', '/content/drive/MyDrive/tobaco_OCR/News/1003042513-b.txt', '/content/drive/MyDrive/tobaco_OCR/News/2083780285.txt', '/content/drive/MyDrive/tobaco_OCR/News/1003538199-a.txt', '/content/drive/MyDrive/tobaco_OCR/News/2072097814_7815.txt', '/content/drive/MyDrive/tobaco_OCR/News/2025028981-e.txt', '/content/drive/MyDrive/tobaco_OCR/News/0011927386.txt', '/content/drive/MyDrive/tobaco_OCR/News/2080736802.txt', '/content/drive/MyDrive/tobaco_OCR/News/1005150791.txt', '/content/drive/MyDrive/tobaco_OCR/News/60000259.txt', '/content/drive/MyDrive/tobaco_OCR/News/2044771242.txt', '/content/drive/MyDrive/tobaco_OCR/News/2072411039_1040.txt', '/content/drive/MyDrive/tobaco_OCR/News/2070109534_9535.txt', '/content/drive/MyDrive/tobaco_OCR/News/1002403141.txt', '/content/drive/MyDrive/tobaco_OCR/News/2501047925.txt', '/content/drive/MyDrive/tobaco_OCR/News/11311025.txt', '/content/drive/MyDrive/tobaco_OCR/News/10030014.txt', '/content/drive/MyDrive/tobaco_OCR/News/2046585258.txt', '/content/drive/MyDrive/tobaco_OCR/News/2064814764.txt', '/content/drive/MyDrive/tobaco_OCR/News/2023175414.txt', '/content/drive/MyDrive/tobaco_OCR/News/2025382018.txt', 
... (remaining file-path entries truncated for readability: they continue through the News, Note, Form and Scientific folders under /content/drive/MyDrive/tobaco_OCR/, one .txt path per document)]
['Resume', ..., 'Memo', ..., 'Letter', ... (parallel list of class labels, one label per file path above; long runs of repeated labels truncated),
'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Report', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 
'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'Email', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 
'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'ADVE', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'News', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Note', 'Form', 'Form', 'Form', 
'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Form', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 
'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific', 'Scientific']
3482
3482
###Markdown
Utility Functions for preprocessing
###Code
import re
def preprocess_text(text_string):
preprocessed_string = re.sub(r'[^\w\s]','',text_string)
preprocessed_string = preprocessed_string.replace('\n',' ')
preprocessed_string = preprocessed_string.replace('_',' ')
preprocessed_string = re.sub(' +', ' ', preprocessed_string)
return preprocessed_string
## Tokenize, Lemmatize, stopwords removal
import spacy
import nltk
# older spaCy versions accept the "en" shortcut; newer ones expect the full model name "en_core_web_sm"
nlp = spacy.load("en", disable=['parser', 'tagger', 'ner'])
from nltk.corpus import stopwords
nltk.download('stopwords')
stops = stopwords.words("english")
def normalize(comment, lowercase, remove_stopwords):
if lowercase:
comment = comment.lower()
comment = nlp(comment)
lemmatized = list()
for word in comment:
lemma = word.lemma_.strip()
if lemma:
if not remove_stopwords or (remove_stopwords and lemma not in stops):
lemmatized.append(lemma)
return " ".join(lemmatized)
normalize("counting playing the Home", lowercase=True, remove_stopwords=True)
def get_text_from_path(path):
with open(path) as f:
lines = f.readlines()
lines = ' '.join(lines)
f.close()
return lines
out_text = get_text_from_path('/content/drive/MyDrive/tobaco_OCR/ADVE/0000435350.txt')
# out_text = preprocess_text(out_text)
print(out_text)
###Output
TE che fitm
m66400 7127
KOOLS are the only cigarettes that taste
good when you have &® cold. They taste even
better when you don't.
Job No, K-2978
‘Mevapapars—300 iner—Mareh & April, 1956
(5 9-4 in 4 108 ines) Pinel Proof (7) March 15, 1956
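###Markdown
A quick usage sketch of the two helpers above (the sample string here is made up, not from the dataset): `preprocess_text` strips punctuation, underscores and extra whitespace, and `normalize` then lemmatizes and drops English stopwords.
###Code
sample = "KOOLS are the only cigarettes!!!\nthat taste_good when you have a cold."
cleaned = preprocess_text(sample)
print(cleaned)
print(normalize(cleaned, lowercase=True, remove_stopwords=True))
###Output
_____no_output_____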
###Markdown
Retrieve the actual texts from the files
###Code
texts = []
for i, this_path in enumerate(text_paths):
    texts.append(get_text_from_path(this_path))
    print(i, end=" ")  # progress counter
###Output
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 
916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1021 1022 1023 1024 1025 1026 1027 1028 1029 1030 1031 1032 1033 1034 1035 1036 1037 1038 1039 1040 1041 1042 1043 1044 1045 1046 1047 1048 1049 1050 1051 1052 1053 1054 1055 1056 1057 1058 1059 1060 1061 1062 1063 1064 1065 1066 1067 1068 1069 1070 1071 1072 1073 1074 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1200 1201 1202 1203 1204 1205 1206 1207 1208 1209 1210 1211 1212 1213 1214 1215 1216 1217 1218 1219 1220 1221 1222 1223 1224 1225 1226 1227 1228 1229 1230 1231 1232 1233 1234 1235 1236 1237 1238 1239 1240 1241 1242 1243 1244 1245 1246 1247 1248 1249 1250 1251 1252 1253 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378 1379 1380 1381 1382 1383 1384 1385 1386 1387 1388 1389 1390 1391 1392 1393 1394 1395 1396 1397 1398 1399 1400 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415 1416 1417 1418 1419 1420 1421 1422 1423 1424 1425 1426 1427 1428 1429 1430 1431 1432 1433 1434 1435 1436 1437 1438 1439 1440 1441 1442 1443 1444 1445 1446 1447 1448 1449 1450 1451 1452 1453 1454 1455 1456 1457 1458 1459 1460 1461 1462 1463 1464 1465 1466 1467 1468 1469 1470 1471 1472 1473 1474 1475 1476 1477 1478 1479 1480 1481 1482 1483 1484 1485 1486 1487 1488 1489 1490 1491 1492 1493 1494 1495 1496 1497 1498 1499 1500 1501 1502 1503 1504 1505 1506 1507 1508 1509 1510 1511 1512 1513 1514 1515 1516 1517 1518 1519 1520 1521 1522 1523 1524 1525 1526 1527 1528 1529 1530 1531 1532 1533 1534 1535 1536 1537 1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 
1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 1696 1697 1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 1790 1791 1792 1793 1794 1795 1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 1808 1809 1810 1811 1812 1813 1814 1815 1816 1817 1818 1819 1820 1821 1822 1823 1824 1825 1826 1827 1828 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 1843 1844 1845 1846 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857 1858 1859 1860 1861 1862 1863 1864 1865 1866 1867 1868 1869 1870 1871 1872 1873 1874 1875 1876 1877 1878 1879 1880 1881 1882 1883 1884 1885 1886 1887 1888 1889 1890 1891 1892 1893 1894 1895 1896 1897 1898 1899 1900 1901 1902 1903 1904 1905 1906 1907 1908 1909 1910 1911 1912 1913 1914 1915 1916 1917 1918 1919 1920 1921 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 2031 2032 2033 2034 2035 2036 2037 2038 2039 2040 2041 2042 2043 2044 2045 2046 2047 2048 2049 2050 2051 2052 2053 2054 2055 2056 2057 2058 2059 2060 2061 2062 2063 2064 2065 2066 2067 2068 2069 2070 2071 2072 2073 2074 2075 2076 2077 2078 2079 2080 2081 2082 2083 2084 2085 2086 2087 2088 2089 2090 2091 2092 2093 2094 2095 2096 2097 2098 2099 2100 2101 2102 2103 2104 2105 2106 2107 2108 2109 2110 2111 2112 2113 2114 2115 2116 2117 2118 2119 2120 2121 2122 2123 2124 2125 2126 2127 2128 2129 2130 2131 2132 2133 2134 2135 2136 2137 2138 2139 2140 2141 2142 2143 2144 2145 2146 2147 2148 2149 2150 2151 2152 2153 2154 2155 2156 2157 2158 2159 2160 2161 2162 2163 2164 2165 2166 2167 2168 2169 2170 2171 2172 2173 2174 2175 2176 2177 2178 2179 2180 2181 2182 2183 2184 2185 2186 2187 2188 2189 2190 2191 2192 2193 2194 2195 2196 2197 2198 2199 2200 2201 2202 2203 2204 2205 2206 2207 2208 2209 2210 2211 2212 2213 2214 2215 2216 2217 2218 2219 2220 2221 2222 2223 2224 2225 2226 2227 2228 2229 2230 2231 2232 2233 2234 2235 2236 2237 2238 2239 2240 2241 2242 2243 2244 2245 2246 2247 2248 2249 2250 2251 2252 2253 2254 2255 2256 2257 2258 2259 2260 2261 2262 2263 2264 2265 2266 2267 2268 2269 2270 2271 2272 2273 2274 2275 2276 2277 2278 2279 2280 2281 2282 2283 2284 2285 2286 2287 2288 2289 2290 2291 2292 2293 2294 2295 2296 2297 2298 2299 2300 2301 2302 2303 2304 2305 2306 2307 2308 2309 2310 2311 2312 2313 2314 2315 2316 2317 2318 2319 2320 2321 2322 2323 2324 2325 2326 2327 2328 2329 2330 2331 2332 2333 2334 2335 2336 2337 2338 2339 2340 2341 2342 2343 2344 2345 2346 2347 2348 2349 2350 2351 2352 2353 
2354 2355 2356 2357 2358 2359 2360 2361 2362 2363 2364 2365 2366 2367 2368 2369 2370 2371 2372 2373 2374 2375 2376 2377 2378 2379 2380 2381 2382 2383 2384 2385 2386 2387 2388 2389 2390 2391 2392 2393 2394 2395 2396 2397 2398 2399 2400 2401 2402 2403 2404 2405 2406 2407 2408 2409 2410 2411 2412 2413 2414 2415 2416 2417 2418 2419 2420 2421 2422 2423 2424 2425 2426 2427 2428 2429 2430 2431 2432 2433 2434 2435 2436 2437 2438 2439 2440 2441 2442 2443 2444 2445 2446 2447 2448 2449 2450 2451 2452 2453 2454 2455 2456 2457 2458 2459 2460 2461 2462 2463 2464 2465 2466 2467 2468 2469 2470 2471 2472 2473 2474 2475 2476 2477 2478 2479 2480 2481 2482 2483 2484 2485 2486 2487 2488 2489 2490 2491 2492 2493 2494 2495 2496 2497 2498 2499 2500 2501 2502 2503 2504 2505 2506 2507 2508 2509 2510 2511 2512 2513 2514 2515 2516 2517 2518 2519 2520 2521 2522 2523 2524 2525 2526 2527 2528 2529 2530 2531 2532 2533 2534 2535 2536 2537 2538 2539 2540 2541 2542 2543 2544 2545 2546 2547 2548 2549 2550 2551 2552 2553 2554 2555 2556 2557 2558 2559 2560 2561 2562 2563 2564 2565 2566 2567 2568 2569 2570 2571 2572 2573 2574 2575 2576 2577 2578 2579 2580 2581 2582 2583 2584 2585 2586 2587 2588 2589 2590 2591 2592 2593 2594 2595 2596 2597 2598 2599 2600 2601 2602 2603 2604 2605 2606 2607 2608 2609 2610 2611 2612 2613 2614 2615 2616 2617 2618 2619 2620 2621 2622 2623 2624 2625 2626 2627 2628 2629 2630 2631 2632 2633 2634 2635 2636 2637 2638 2639 2640 2641 2642 2643 2644 2645 2646 2647 2648 2649 2650 2651 2652 2653 2654 2655 2656 2657 2658 2659 2660 2661 2662 2663 2664 2665 2666 2667 2668 2669 2670 2671 2672 2673 2674 2675 2676 2677 2678 2679 2680 2681 2682 2683 2684 2685 2686 2687 2688 2689 2690 2691 2692 2693 2694 2695 2696 2697 2698 2699 2700 2701 2702 2703 2704 2705 2706 2707 2708 2709 2710 2711 2712 2713 2714 2715 2716 2717 2718 2719 2720 2721 2722 2723 2724 2725 2726 2727 2728 2729 2730 2731 2732 2733 2734 2735 2736 2737 2738 2739 2740 2741 2742 2743 2744 2745 2746 2747 2748 2749 2750 2751 2752 2753 2754 2755 2756 2757 2758 2759 2760 2761 2762 2763 2764 2765 2766 2767 2768 2769 2770 2771 2772 2773 2774 2775 2776 2777 2778 2779 2780 2781 2782 2783 2784 2785 2786 2787 2788 2789 2790 2791 2792 2793 2794 2795 2796 2797 2798 2799 2800 2801 2802 2803 2804 2805 2806 2807 2808 2809 2810 2811 2812 2813 2814 2815 2816 2817 2818 2819 2820 2821 2822 2823 2824 2825 2826 2827 2828 2829 2830 2831 2832 2833 2834 2835 2836 2837 2838 2839 2840 2841 2842 2843 2844 2845 2846 2847 2848 2849 2850 2851 2852 2853 2854 2855 2856 2857 2858 2859 2860 2861 2862 2863 2864 2865 2866 2867 2868 2869 2870 2871 2872 2873 2874 2875 2876 2877 2878 2879 2880 2881 2882 2883 2884 2885 2886 2887 2888 2889 2890 2891 2892 2893 2894 2895 2896 2897 2898 2899 2900 2901 2902 2903 2904 2905 2906 2907 2908 2909 2910 2911 2912 2913 2914 2915 2916 2917 2918 2919 2920 2921 2922 2923 2924 2925 2926 2927 2928 2929 2930 2931 2932 2933 2934 2935 2936 2937 2938 2939 2940 2941 2942 2943 2944 2945 2946 2947 2948 2949 2950 2951 2952 2953 2954 2955 2956 2957 2958 2959 2960 2961 2962 2963 2964 2965 2966 2967 2968 2969 2970 2971 2972 2973 2974 2975 2976 2977 2978 2979 2980 2981 2982 2983 2984 2985 2986 2987 2988 2989 2990 2991 2992 2993 2994 2995 2996 2997 2998 2999 3000 3001 3002 3003 3004 3005 3006 3007 3008 3009 3010 3011 3012 3013 3014 3015 3016 3017 3018 3019 3020 3021 3022 3023 3024 3025 3026 3027 3028 3029 3030 3031 3032 3033 3034 3035 3036 3037 3038 3039 3040 3041 3042 3043 3044 3045 3046 3047 3048 3049 3050 3051 3052 3053 3054 3055 3056 3057 3058 3059 3060 3061 3062 3063 3064 
3065 3066 3067 3068 3069 3070 3071 3072 3073 3074 3075 3076 3077 3078 3079 3080 3081 3082 3083 3084 3085 3086 3087 3088 3089 3090 3091 3092 3093 3094 3095 3096 3097 3098 3099 3100 3101 3102 3103 3104 3105 3106 3107 3108 3109 3110 3111 3112 3113 3114 3115 3116 3117 3118 3119 3120 3121 3122 3123 3124 3125 3126 3127 3128 3129 3130 3131 3132 3133 3134 3135 3136 3137 3138 3139 3140 3141 3142 3143 3144 3145 3146 3147 3148 3149 3150 3151 3152 3153 3154 3155 3156 3157 3158 3159 3160 3161 3162 3163 3164 3165 3166 3167 3168 3169 3170 3171 3172 3173 3174 3175 3176 3177 3178 3179 3180 3181 3182 3183 3184 3185 3186 3187 3188 3189 3190 3191 3192 3193 3194 3195 3196 3197 3198 3199 3200 3201 3202 3203 3204 3205 3206 3207 3208 3209 3210 3211 3212 3213 3214 3215 3216 3217 3218 3219 3220 3221 3222 3223 3224 3225 3226 3227 3228 3229 3230 3231 3232 3233 3234 3235 3236 3237 3238 3239 3240 3241 3242 3243 3244 3245 3246 3247 3248 3249 3250 3251 3252 3253 3254 3255 3256 3257 3258 3259 3260 3261 3262 3263 3264 3265 3266 3267 3268 3269 3270 3271 3272 3273 3274 3275 3276 3277 3278 3279 3280 3281 3282 3283 3284 3285 3286 3287 3288 3289 3290 3291 3292 3293 3294 3295 3296 3297 3298 3299 3300 3301 3302 3303 3304 3305 3306 3307 3308 3309 3310 3311 3312 3313 3314 3315 3316 3317 3318 3319 3320 3321 3322 3323 3324 3325 3326 3327 3328 3329 3330 3331 3332 3333 3334 3335 3336 3337 3338 3339 3340 3341 3342 3343 3344 3345 3346 3347 3348 3349 3350 3351 3352 3353 3354 3355 3356 3357 3358 3359 3360 3361 3362 3363 3364 3365 3366 3367 3368 3369 3370 3371 3372 3373 3374 3375 3376 3377 3378 3379 3380 3381 3382 3383 3384 3385 3386 3387 3388 3389 3390 3391 3392 3393 3394 3395 3396 3397 3398 3399 3400 3401 3402 3403 3404 3405 3406 3407 3408 3409 3410 3411 3412 3413 3414 3415 3416 3417 3418 3419 3420 3421 3422 3423 3424 3425 3426 3427 3428 3429 3430 3431 3432 3433 3434 3435 3436 3437 3438 3439 3440 3441 3442 3443 3444 3445 3446 3447 3448 3449 3450 3451 3452 3453 3454 3455 3456 3457 3458 3459 3460 3461 3462 3463 3464 3465 3466 3467 3468 3469 3470 3471 3472 3473 3474 3475 3476 3477 3478 3479 3480 3481
###Markdown
Create the dataframe
###Code
df = pd.DataFrame(list(zip(text_paths, texts, labels)),
columns =['text_path','texts', 'data_label'])
df.head()
###Output
_____no_output_____
###Markdown
Apply the preprocessing utility functions
###Code
df['texts'] = [preprocess_text(this_text) for this_text in df['texts']]
df.head()
df['texts'] = [normalize(this_text, lowercase=True, remove_stopwords=True) for this_text in df['texts']]
df.head()
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
df['data_label']= le.fit_transform(df['data_label'])
df.head()
###Output
_____no_output_____
###Markdown
Get the overall vocabulary of our dataset
###Code
our_vocab = []
max_doc_len = 0
min_doc_len = float('inf')  # start high so the first document sets the true minimum
for this_str in df['texts']:
    tmp = this_str.split(" ")
    if len(tmp) > max_doc_len:
        max_doc_len = len(tmp)
    if len(tmp) < min_doc_len:
        min_doc_len = len(tmp)
    our_vocab.extend(tmp)
print(len(our_vocab))
print(our_vocab[1:100])
print(max_doc_len)
print(min_doc_len)
print(len(set(our_vocab)))
###Output
986
0
77615
###Markdown
Train test Split
###Code
from sklearn.model_selection import train_test_split
label = df['data_label']
X_train, X_test, y_train, y_test = train_test_split(df['texts'], df['data_label'] , test_size=0.2, random_state = 42)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
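# A hedged variant (not used by the rest of the notebook): passing the `label`
# series defined above to `stratify` keeps the proportions of the ten document
# classes roughly equal in both splits
X_train_s, X_test_s, y_train_s, y_test_s = train_test_split(
    df['texts'], df['data_label'], test_size=0.2, random_state=42, stratify=label)
print(y_train_s.value_counts(normalize=True).round(2))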
###Output
(2785,)
(2785,)
(697,)
(697,)
###Markdown
GloVe preparation: tokenize and pad the train and test datasets separately
###Code
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
max_len = 500
# note: num_words caps the vocabulary at the most frequent len(X_train) (= number of training docs) words
tokenizer = Tokenizer(num_words=len(X_train))
tokenizer.fit_on_texts(X_train)
train_sequence = tokenizer.texts_to_sequences(X_train)
train_padded = pad_sequences(
train_sequence, maxlen = max_len, truncating = "post", padding = "post"
)
test_sequence = tokenizer.texts_to_sequences(X_test)
test_padded = pad_sequences(
test_sequence, maxlen = max_len, truncating = "post", padding = "post"
)
###Output
_____no_output_____
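###Markdown
A toy sketch (the two sentences below are made up) of what the tokenizer and padding produce: each text becomes a list of integer word indices (words the tokenizer never saw are dropped), padded or truncated to a fixed length.
###Code
toy_texts = ["the committee will review the report", "please file the memo"]
toy_seq = tokenizer.texts_to_sequences(toy_texts)
print(toy_seq)
print(pad_sequences(toy_seq, maxlen=10, truncating="post", padding="post"))
###Output
_____no_output_____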
###Markdown
GloVe Embeddings: Downloading and Unzipping
###Code
import urllib.request  # required for urlretrieve below
urllib.request.urlretrieve('https://nlp.stanford.edu/data/glove.6B.zip','glove.6B.zip')
!unzip "/content/glove.6B.zip" -d "/content/"
emmbed_dict = {}
with open('/content/glove.6B.200d.txt','r') as f:
for line in f:
values = line.split()
word = values[0]
vector = np.asarray(values[1:],'float32')
emmbed_dict[word]=vector
f.close()
word_index = tokenizer.word_index
len(word_index)
num_words = len(word_index) + 1
embedding_matrix = np.zeros((num_words, 200))
for word, i in word_index.items():
if i < num_words:
emb_vec = emmbed_dict.get(word)
if emb_vec is not None:
embedding_matrix[i] = emb_vec
embedding_matrix.shape
###Output
_____no_output_____
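###Markdown
A quick sanity check (a sketch using the objects built above) of how much of our vocabulary is covered by the GloVe vectors; any word without a pretrained vector is left as a zero row in `embedding_matrix`.
###Code
covered = sum(1 for word in word_index if word in emmbed_dict)
print(f"{covered} of {len(word_index)} vocabulary words have GloVe vectors "
      f"({covered / len(word_index):.1%} coverage)")
###Output
_____no_output_____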
###Markdown
CNN-1D
###Code
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense, Dropout, Conv1D, GlobalMaxPooling1D, MaxPooling1D
from keras.initializers import Constant
from tensorflow.keras.optimizers import Adam
cnn = Sequential()
cnn.add(
Embedding(
num_words,
200,
embeddings_initializer = Constant(embedding_matrix),
trainable = False
)
)
cnn.add(Dropout(0.2))
cnn.add(Conv1D(filters=100,kernel_size=3,padding='valid',activation='relu',strides=1))
cnn.add(MaxPooling1D())
cnn.add(Conv1D(filters=200,kernel_size=3,padding='valid',activation='relu',strides=1))
cnn.add(MaxPooling1D())
cnn.add(Conv1D(filters=300,kernel_size=3,padding='valid',activation='relu',strides=1))
cnn.add(MaxPooling1D())
cnn.add(Conv1D(filters=512,kernel_size=3,padding='valid',activation='relu',strides=1))
cnn.add(GlobalMaxPooling1D())
cnn.add(Dense(64, activation = 'relu'))
cnn.add(Dropout(0.4))
cnn.add(Dense(10, activation='softmax'))
optimizer = Adam(learning_rate = 1e-3)
cnn.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer, metrics = ["accuracy"])
cnn.summary()
history = cnn.fit(
train_padded,
y_train,
epochs = 20,
validation_data = (test_padded, y_test),
verbose=1,
)
sequence = tokenizer.texts_to_sequences(X_test)
padded = pad_sequences(sequence, maxlen = max_len, truncating = "post", padding = "post")
y_pred = cnn.predict(padded)
y_pred = [np.argmax(i) for i in y_pred]
from sklearn import metrics
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test,y_pred))
###Output
_____no_output_____ |
Numpy/2-Datatypes-of-Numpy-arrays.ipynb | ###Markdown
Datatypes of Numpy arrays: Python does a good job of identifying the types of array elements, but some conversions might still be required.
###Code
import numpy as np
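# A short illustrative sketch (values are made up): NumPy infers the dtype from
# the elements it is given, and astype() performs explicit conversions
ints = np.array([1, 2, 3])
floats = np.array([1.0, 2.5, 3.0])
print(ints.dtype, floats.dtype)        # e.g. int64 float64 (platform dependent)
print(ints.astype(np.float32).dtype)   # float32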
###Output
_____no_output_____ |
data-analysis/pandas/4_groupby.ipynb | ###Markdown
Groupby: The groupby method allows you to group rows of data together and call aggregate functions
###Code
import pandas as pd
# Create dataframe
data = {'Company':['GOOG','GOOG','MSFT','MSFT','FB','FB'],
'Person':['Sam','Charlie','Amy','Vanessa','Carl','Sarah'],
'Sales':[200,120,340,124,243,350]}
df = pd.DataFrame(data)
df
###Output
_____no_output_____
###Markdown
**Now you can use the `.groupby()` method to group rows together based off of a column name. For instance, let's group based off of Company. This will create a DataFrameGroupBy object:**
###Code
df.groupby('Company')
###Output
_____no_output_____
###Markdown
You can save this object as a new variable:
###Code
by_comp = df.groupby("Company")
###Output
_____no_output_____
###Markdown
And then call aggregate methods off the object:
###Code
by_comp.mean()
df.groupby('Company').mean()
###Output
_____no_output_____
###Markdown
More examples of aggregate methods:
###Code
by_comp.std()
by_comp.min()
by_comp.max()
by_comp.count()
by_comp.describe()
by_comp.describe().transpose()
by_comp.describe().transpose()['GOOG']
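# A further hedged example: .agg() applies several aggregations in one call
# (the column names match the toy dataframe defined above)
by_comp['Sales'].agg(['mean', 'std', 'min', 'max'])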
###Output
_____no_output_____ |
_notebooks/2021-01-21-Sign-Language-Inference-with-WebCam.ipynb | ###Markdown
Real Time Sign Language Classification> Creating an App to Run Inference with.- toc: false- branch: master- badges: false- comments: true- categories: [projects, tutorial]- image: images/header-asl.jpg Introduction After training our model in [Part A](https://jimmiemunyi.github.io/blog/tutorial/2021/01/20/Sign-Language-Classification-with-Deep-Learning.html), we are now going to develop an application to run inference with on new data. I am going to be utilizing `opencv` to get live video from my webcam, then run our model against each frame in the video and get the prediction of what Sign Language Letter I am holding up. Here is an example of what the output will look like:> youtube: https://youtu.be/-nggi8EwfOA The whole code + training notebooks from Part A can be found in this [github repo](https://github.com/jimmiemunyi/Sign-Language-App). This tutorial assumes some basic understanding of the `cv2` library and general understanding of how to run inference using a model. The Full Code Here is the full code of making the App if you just want the code. I will explain each part of the code and my thinking behind it in the next section.
###Code
from collections import deque, Counter
import cv2
from fastai.vision.all import *
print('Loading our Inference model...')
# load our inference model
inf_model = load_learner('model/sign_language.pkl')
print('Model Loaded')
# define a deque to get rolling average of predictions
# I go with the last 10 predictions
rolling_predictions = deque([], maxlen=10)
# get the most common item in the deque
def most_common(D):
data = Counter(D)
return data.most_common(1)[0][0]
def hand_area(img):
# specify where hand should go
hand = img[50:324, 50:324]
# the images in the model were trainind on 200x200 pixels
hand = cv2.resize(hand, (200,200))
return hand
# capture video on the webcam
cap = cv2.VideoCapture(0)
# get the dimensions on the frame
frame_width = int(cap.get(3))
frame_height = int(cap.get(4))
# define codec and create our VideoWriter to save the video
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter('output/sign-language.mp4', fourcc, 12, (frame_width, frame_height))
# read video
while True:
# capture each frame of the video
ret, frame = cap.read()
# flip frame to feel more 'natural' to webcam
frame = cv2.flip(frame, flipCode = 1)
# draw a blue rectangle where to place hand
cv2.rectangle(frame, (50, 50), (324, 324), (255, 0, 0), 2)
# get the image
inference_image = hand_area(frame)
# get the current prediction on the hand
pred = inf_model.predict(inference_image)
# append the current prediction to our rolling predictions
rolling_predictions.append(pred[0])
# our prediction is going to be the most common letter
# in our rolling predictions
prediction_output = f'The predicted letter is {most_common(rolling_predictions)}'
# show predicted text
cv2.putText(frame, prediction_output, (10, 350), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (255, 0, 0), 2)
# show the frame
cv2.imshow('frame', frame)
# save the frames to out file
out.write(frame)
# press `q` to exit
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# release VideoCapture()
cap.release()
# release out file
out.release()
# close all frames and video windows
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
Explaining the Code Imports Install [fastai](https://github.com/fastai/fastai) and [opencv-python](https://pypi.org/project/opencv-python/).Next, this are the packages I utilize for this App. `fastai` is going to be used to run Inference with, `cv2` is going to handle all the WebCam functionality and we are going to utilize `deque` and `Counter` from collections to apply a nifty trick I am going to show you.
###Code
from collections import deque, Counter
import cv2
from fastai.vision.all import *
###Output
_____no_output_____
###Markdown
Loading our Inference Model
###Code
print('Loading our Inference model...')
# load our inference model
inf_model = load_learner('model/sign_language.pkl')
print('Model Loaded')
###Output
_____no_output_____
###Markdown
The next part of our code loads the model we pickled in Part A and prints some useful information. Rolling Average Predictions: When I first made the App, I noticed one problem when using it. A slight movement of my hand was changing the predictions. This is known as `flickering`. The video below shows how flickering affects our App. The video you saw in the beginning shows how 'stable' our model is after using rolling predictions.
###Code
# define a deque to get rolling average of predictions
# I go with the last 10 predictions
rolling_predictions = deque([], maxlen=10)
# get the most common item in the deque
def most_common(D):
data = Counter(D)
return data.most_common(1)[0][0]
###Output
_____no_output_____
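###Markdown
A tiny sketch (with made-up letters) of how the rolling deque behaves: once more than 10 predictions have been appended, the oldest ones fall off, and `most_common` reports the majority letter.
###Code
demo = deque([], maxlen=10)
for letter in ['A', 'A', 'B', 'A', 'A', 'A', 'C', 'A', 'A', 'B', 'A', 'A']:
    demo.append(letter)
print(demo)               # only the last 10 letters are kept
print(most_common(demo))  # 'A'
###Output
_____no_output_____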
###Markdown
To solve this, I utilized the deque from collections. I used 10 as the maximum length (`maxlen`) of the deque since I wanted the App, when running inference, to output the most common prediction out of the last 10 predictions. This makes it more stable than using only the current one. The function `most_common` will return the most common item in our deque.
###Code
def hand_area(img):
# specify where hand should go
hand = img[50:324, 50:324]
# the images in the model were trainind on 200x200 pixels
hand = cv2.resize(hand, (200,200))
return hand
###Output
_____no_output_____
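###Markdown
A shape check (a sketch on a dummy array, not a real webcam capture) to confirm what `hand_area` returns: a 274x274 crop of the frame resized to the 200x200 input size the model was trained on.
###Code
import numpy as np
dummy_frame = np.zeros((480, 640, 3), dtype=np.uint8)  # a typical webcam frame size
print(hand_area(dummy_frame).shape)  # (200, 200, 3)
###Output
_____no_output_____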
###Markdown
Next, we define a function that tells our model which part of the video to run inference on. We do not want to run inference on the whole frame, which would include our face! We will eventually draw a blue rectangle in this area so that you'll know where to place your hand. Capture Video on the WebCam and Define Our Writer
###Code
# capture video on the webcam
cap = cv2.VideoCapture(0)
# get the dimensions on the frame
frame_width = int(cap.get(3))
frame_height = int(cap.get(4))
# define codec and create our VideoWriter to save the video
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter('sign-language.mp4', fourcc, 12, (frame_width, frame_height))
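# Optional hedged sketch: if index 0 is not your webcam, you can probe the next few
# indices and see which ones open (a device already opened above may refuse to
# re-open on some systems, so treat this purely as a diagnostic)
for idx in range(1, 3):
    probe = cv2.VideoCapture(idx)
    if probe.isOpened():
        print(f"another camera found at index {idx}")
    probe.release()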
###Output
_____no_output_____
###Markdown
Here, we define a `VideoCapture` that will record our video. The parameter 0 means capture on the first WebCam it finds. If you have multiple WebCams, this is the parameter you want to play around with until you find the correct one. Next, we get the dimensions of the frame being recorded by the VideoCapture. We are going to use these dimensions when writing (outputting) the recorded video. Finally, we create a `VideoWriter` that we are going to use to output the video and write it to our Hard Disk. To do that, opencv requires us to define a codec to use, and so we create a `VideoWriter_fourcc` exactly for that purpose and we use 'mp4v' with it. In our writer, we first pass the name we want for the output file, here I use 'sign-language.mp4' which will be written in the current directory. You can change this location if you wish to. Next we pass in the codec. After that you pass in your fps (frames per second). I found that 12 worked best with my configuration but you probably want to play around with that until you get the best one for you. Finally, we pass in the frame sizes, which we had gotten earlier. The Main Video Loop
###Code
# read video
while True:
# capture each frame of the video
ret, frame = cap.read()
# flip frame to feel more 'natural' to webcam
frame = cv2.flip(frame, flipCode = 1)
# draw a blue rectangle where to place hand
cv2.rectangle(frame, (50, 50), (324, 324), (255, 0, 0), 2)
# get the image
inference_image = hand_area(frame)
# get the current prediction on the hand
pred = inf_model.predict(inference_image)
# append the current prediction to our rolling predictions
rolling_predictions.append(pred[0])
# our prediction is going to be the most common letter
# in our rolling predictions
prediction_output = f'The predicted letter is {most_common(rolling_predictions)}'
# show predicted text
cv2.putText(frame, prediction_output, (10, 350), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (255, 0, 0), 2)
# show the frame
cv2.imshow('frame', frame)
# save the frames to out file
out.write(frame)
# press `q` to exit
if cv2.waitKey(1) & 0xFF == ord('q'):
break
###Output
_____no_output_____
###Markdown
This is a long piece of code, so let's break it down bit by bit:
###Code
# read video
while True:
# capture each frame of the video
_ , frame = cap.read()
# flip frame to feel more 'natural' to webcam
frame = cv2.flip(frame, flipCode = 1)
# ......
# truncated code here
# ......
# show the frame
cv2.imshow('frame', frame)
# save the frames to out file
out.write(frame)
# press `q` to exit
if cv2.waitKey(1) & 0xFF == ord('q'):
break
###Output
_____no_output_____
###Markdown
We create an infinite `while` loop that keeps running until the user presses the 'q' key on the keyboard, as defined by the `if` statement at the very bottom of the loop. Inside the loop, we use the capture we created earlier and call `cap.read()` on it, which returns the current frame of the video along with another variable that we are not going to use. A little intuition on how video works: a frame is essentially a single static image, and a video is just these single frames played one after the other quickly, at something like 30-60 frames per second, creating the illusion of continuous motion. So for our App, we take each frame, run it through our model (which expects an image as input, so this works) and get the current prediction. This is also why we decided to use rolling predictions rather than just the current one: it reduces the flickering that can occur when each new frame produces a different prediction. Next:``` frame = cv2.flip(frame, flipCode = 1)```This flips our frame to make it feel more natural. Without flipping, the output image felt mirrored: if I raised my left arm it looked like I was raising my right. Try running the App with this line commented out and you'll see what I mean. The following shows the frames one after the other and writes them to the output file:``` cv2.imshow('frame', frame) # save the frames to out file out.write(frame)```
###Code
# read video
while True:
# ......
# truncated code here
# ......
# draw a blue rectangle where to place hand
cv2.rectangle(frame, (50, 50), (324, 324), (255, 0, 0), 2)
# get the image
inference_image = hand_area(frame)
# get the current prediction on the hand
pred = inf_model.predict(inference_image)
# append the current prediction to our rolling predictions
rolling_predictions.append(pred[0])
# our prediction is going to be the most common letter
# in our rolling predictions
prediction_output = f'The predicted letter is {most_common(rolling_predictions)}'
# show predicted text
cv2.putText(frame, prediction_output, (10, 350), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (255, 0, 0), 2)
# ......
# truncated code here
# ......
# press `q` to exit
if cv2.waitKey(1) & 0xFF == ord('q'):
break
###Output
_____no_output_____
###Markdown
Next, we draw a blue rectangle where the user should place the hand. The first parameter is the current frame, telling OpenCV where to draw the rectangle. The next two parameters describe the area where we want our rectangle to be. Note that these dimensions are exactly the same as those in the `hand_area` function we created earlier, which makes sure we run inference on the correct area. Lastly we pass in the color of the rectangle (in BGR format) and the thickness of the line (2).```cv2.rectangle(frame, (50, 50), (324, 324), (255, 0, 0), 2)```Next, from the whole frame, we extract just the hand area and store it. This is the image we are going to pass to our model.```inference_image = hand_area(frame)```Next, we pass our extracted image to our inference model, get the prediction and append it to our rolling predictions deque. Remember that this deque only holds the most recent 10 predictions and discards everything else.```pred = inf_model.predict(inference_image)rolling_predictions.append(pred[0])```We get the most common letter predicted in our deque and use OpenCV to write that letter onto the video. The parameters are similar to those of the rectangle code, with a slight variation since here we also pass in the font (Hershey Simplex) and font scale (0.9).```prediction_output = f'The predicted letter is {most_common(rolling_predictions)}'cv2.putText(frame, prediction_output, (10, 350), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (255, 0, 0), 2)``` The final part of the code just releases the resources we acquired initially: the video capture and the video writer, and then destroys all the windows created.
###Code
# release VideoCapture()
cap.release()
# release out file
out.release()
# close all frames and video windows
cv2.destroyAllWindows()
###Output
_____no_output_____ |
JobSatisfaction.ipynb | ###Markdown
Business UnderstandingFor the third question I want to find out which factors are related to job satisfaction. I use the data from the Stack Overflow survey answered by more than 64,000 respondents, covering personal information, coding experience, attitude towards coding and so on. To answer this question we need the data related to job satisfaction, such as employment status, education level, remote work policy, company type, company size and so on. Data UnderstandingTo get started, let's read in the necessary libraries and take a look at some of our columns of interest.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
import seaborn as sns
%matplotlib inline
df = pd.read_csv('./survey_results_public.csv')
df.head()
###Output
_____no_output_____
###Markdown
We pick the factors considered to be related to Job Satisfaction. Note that I don't choose Expected Salary because it's mutually exclusive with Salary and has a low amount of non-null data.
###Code
# Pick relevent columns
rel_col = [
'JobSatisfaction', 'CareerSatisfaction', 'EmploymentStatus',
'FormalEducation', 'HomeRemote', 'CompanySize', 'CompanyType', 'HoursPerWeek',
'Overpaid', 'Gender', 'Race', 'Salary'
]
df_rel = df[rel_col]
###Output
_____no_output_____
###Markdown
Let's look into the quantitative variables first. From the description below it seems all the quantitative variables have null values.
###Code
print('Total rows:', len(df_rel))
df_rel.describe()
###Output
Total rows: 51392
###Markdown
The above are the variables that pandas treats as numeric, and therefore we could send them into our linear model directly to predict the response. Let's take a quick look at our data first.
###Code
df_rel.hist();
sns.heatmap(df_rel.corr(), annot=True, fmt=".2f");
###Output
_____no_output_____
###Markdown
Also check the distribution of values for the categorical variables. From the result below it seems some answers need to be converted to null values. Also, some of the categories are too specific, so we can combine the answer options into broader categories.
###Code
# Check what kinds of values do the categorical variables contain
for col in df_rel.select_dtypes(include = ['object']).columns:
print(col)
print(df_rel[col].value_counts())
print()
###Output
EmploymentStatus
Employed full-time 36148
Independent contractor, freelancer, or self-employed 5233
Employed part-time 3180
Not employed, and not looking for work 2791
Not employed, but looking for work 2786
I prefer not to say 1086
Retired 168
Name: EmploymentStatus, dtype: int64
FormalEducation
Bachelor's degree 21609
Master's degree 11141
Some college/university study without earning a bachelor's degree 8129
Secondary school 5908
Doctoral degree 1308
I prefer not to answer 1109
Primary/elementary school 1047
Professional degree 715
I never completed any formal education 426
Name: FormalEducation, dtype: int64
HomeRemote
A few days each month 15454
Never 13975
All or almost all the time (I'm full-time remote) 4905
Less than half the time, but at least one day each week 4147
More than half, but not all, the time 1909
It's complicated 1849
About half the time 1769
Name: HomeRemote, dtype: int64
CompanySize
20 to 99 employees 8587
100 to 499 employees 7274
10,000 or more employees 5680
10 to 19 employees 4103
1,000 to 4,999 employees 3831
Fewer than 10 employees 3807
500 to 999 employees 2486
5,000 to 9,999 employees 1604
I don't know 869
I prefer not to answer 681
Name: CompanySize, dtype: int64
CompanyType
Privately-held limited company, not in startup mode 16709
Publicly-traded corporation 5871
I don't know 3233
Sole proprietorship or partnership, not in startup mode 2831
Government agency or public school/university 2451
Venture-funded startup 2387
I prefer not to answer 1816
Pre-series A startup 1288
Non-profit/non-governmental organization or private school/university 1225
State-owned company 670
Something else 342
Name: CompanyType, dtype: int64
Overpaid
Somewhat underpaid 6017
Neither underpaid nor overpaid 4837
Greatly underpaid 1555
Somewhat overpaid 877
Greatly overpaid 101
Name: Overpaid, dtype: int64
Gender
Male 31589
Female 2600
Other 225
Male; Other 171
Gender non-conforming 160
Male; Gender non-conforming 65
Female; Transgender 56
Transgender 55
Female; Gender non-conforming 29
Male; Female 15
Male; Female; Transgender; Gender non-conforming; Other 15
Transgender; Gender non-conforming 15
Male; Transgender 11
Female; Transgender; Gender non-conforming 8
Male; Female; Transgender; Gender non-conforming 7
Gender non-conforming; Other 4
Male; Female; Transgender 4
Male; Transgender; Gender non-conforming 4
Male; Gender non-conforming; Other 3
Male; Female; Other 2
Male; Female; Transgender; Other 1
Female; Gender non-conforming; Other 1
Male; Female; Gender non-conforming 1
Female; Other 1
Male; Transgender; Other 1
Male; Female; Gender non-conforming; Other 1
Transgender; Other 1
Female; Transgender; Other 1
Female; Transgender; Gender non-conforming; Other 1
Name: Gender, dtype: int64
Race
White or of European descent 23415
South Asian 2657
Hispanic or Latino/Latina 1289
East Asian 1285
Middle Eastern 899
...
Black or of African descent; Hispanic or Latino/Latina; South Asian; I don’t know 1
East Asian; Native American, Pacific Islander, or Indigenous Australian; I don’t know 1
Black or of African descent; Hispanic or Latino/Latina; White or of European descent; I don’t know 1
East Asian; Middle Eastern; South Asian 1
Native American, Pacific Islander, or Indigenous Australian; White or of European descent; I don’t know 1
Name: Race, Length: 97, dtype: int64
###Markdown
Prepare DataLet's begin by cleaning the categorical variables. It seems some of the categorical variables' values need to be limited. For example, we can limit the answers to EmploymentStatus to employed respondents only, since those who are unemployed or retired don't have a job and thus have no job satisfaction.
###Code
# Some of the categorical variables' values need to be limited
# for EmploymentStatus, exclude those not employed or retired
possible_vals = [
    "Employed full-time",
    "Independent contractor, freelancer, or self-employed", "Employed part-time",
    "I prefer not to say"
]
df_rel = df_rel[df_rel.EmploymentStatus.isin(possible_vals)]
# Replace "I prefer not to say" to NaN
di = {"I prefer not to say" : None}
df_rel = df_rel.replace({"EmploymentStatus" : di})
###Output
_____no_output_____
###Markdown
Also note that there are some answers like "I prefer not to answer" and "I don't know", which can be recognized as null values.
###Code
# for FormalEducation, too many categories, merge into broader categories
# Replace "I prefer not to answer" with NaN
di = {
    "I prefer not to answer" : None,
    "Primary/elementary school" : "Below Secondary School",
    "I never completed any formal education" : "Below Secondary School",
    "Professional degree" : "Master's degree"
}
df_rel = df_rel.replace({"FormalEducation": di})
# for HomeRemote, replace "It's complicated" to NaN
di = {"It's complicated" : None}
df_rel = df_rel.replace({"HomeRemote" : di})
# for CompanySize, replace "I don't know" and "I prefer not to answer" to NaN
di = {
"I don't know" : None,
"I prefer not to answer" : None
}
df_rel = df_rel.replace({"CompanySize" : di})
# for CompanyType, replace "I don't know", "Something else" and "I prefer not to answer" to NaN
# trim the types into Private, Public, Government, and Startup
di = {
"I don't know" : None, "I prefer not to answer" : None, "Something else" : None,
"Privately-held limited company, not in startup mode" : "Private",
"Publicly-traded corporation" : "Public", "Sole proprietorship or partnership, not in startup mode" : "Private",
"Government agency or public school/university" : "Government", "Venture-funded startup" : "Startup",
"Pre-series A startup" : "Startup", "State-owned company" : "Government"
}
df_rel = df_rel.replace({"CompanyType": di})
###Output
_____no_output_____
###Markdown
From the distribution of values we can also identify some columns that are not suitable for predicting job satisfaction.
###Code
# drop gender since there's a large imbalance between male and female
df_rel = df_rel.drop('Gender', axis=1)
# drop race since there's large imbalance between White and others
df_rel = df_rel.drop('Race', axis=1)
# remove career satisfaction since it has a strong correlation with job satisfaction
df_rel = df_rel.drop('CareerSatisfaction', axis=1)
###Output
_____no_output_____
###Markdown
Let's see what the categorical variables look like after the cleaning.
###Code
# Check what kinds of values do the categorical variables contain after the cleaning
for col in df_rel.select_dtypes(include = ['object']).columns:
print(col)
print(df_rel[col].value_counts())
print()
###Output
EmploymentStatus
Employed full-time 36148
Independent contractor, freelancer, or self-employed 5233
Employed part-time 3180
Name: EmploymentStatus, dtype: int64
FormalEducation
Bachelor's degree 20318
Master's degree 11386
Some college/university study without earning a bachelor's degree 7134
Secondary school 3912
Doctoral degree 1260
Below Secondary School 786
Name: FormalEducation, dtype: int64
HomeRemote
A few days each month 15452
Never 13972
All or almost all the time (I'm full-time remote) 4905
Less than half the time, but at least one day each week 4147
More than half, but not all, the time 1907
About half the time 1767
Name: HomeRemote, dtype: int64
CompanySize
20 to 99 employees 8587
100 to 499 employees 7273
10,000 or more employees 5680
10 to 19 employees 4103
1,000 to 4,999 employees 3831
Fewer than 10 employees 3806
500 to 999 employees 2486
5,000 to 9,999 employees 1604
Name: CompanySize, dtype: int64
CompanyType
Private 19539
Public 5871
Startup 3674
Government 3121
Non-profit/non-governmental organization or private school/university 1225
Name: CompanyType, dtype: int64
Overpaid
Somewhat underpaid 6017
Neither underpaid nor overpaid 4837
Greatly underpaid 1555
Somewhat overpaid 877
Greatly overpaid 101
Name: Overpaid, dtype: int64
###Markdown
To handle the null values, I defined a function. For the quantitative variables, it simply replaces the null values with the mean value of the column. For the categorical variables, it just creates dummy variables, ignoring the null values.
###Code
# Define the function to clean data: replace NaN with mean value for quantitative variables, create dummies for
# categorical variables, and return df
def clean_data(df):
'''
INPUT
df - pandas dataframe
OUTPUT
df - cleaned dataframe
This function cleans df using the following steps to produce df:
1. Drop all the rows with no Job Satisfaction
2. For each numeric variable in df, fill the column with the mean value of the column.
3. Create dummy columns for all the categorical variables in df, drop the original columns
'''
    # Drop rows with missing JobSatisfaction values
df = df.dropna(subset=['JobSatisfaction'], axis=0)
# Fill numeric columns with the mean
num_vars = df.select_dtypes(include=['float', 'int']).columns
for col in num_vars:
df[col].fillna((df[col].mean()), inplace=True)
# Dummy the categorical variables
cat_vars = df.select_dtypes(include=['object']).copy().columns
for var in cat_vars:
# for each cat add dummy var, drop original column
df = pd.concat([df.drop(var, axis=1), pd.get_dummies(df[var], prefix=var, prefix_sep='_', drop_first=True)], axis=1)
return df
df_rel = clean_data(df_rel)
###Output
/Users/clin/opt/anaconda3/lib/python3.7/site-packages/pandas/core/generic.py:6245: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
self._update_inplace(new_data)
###Markdown
Data ModelingTo best train our model, I first fit a linear model and evaluate it, and then try a random forest model and parameter optimization methods to improve on it. I also use cutoffs to determine the best number of features to use for modeling. Linear Model and the EvaluationFirst, fit the linear model.
###Code
# Train model and predict
X = df_rel.drop('JobSatisfaction', axis=1)
y = df_rel['JobSatisfaction']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .30, random_state=42)
lm_model = LinearRegression(normalize=True)
lm_model.fit(X_train, y_train) # fit the model on the training set; we evaluate it
                               # on the held-out test set below
# predict on the test set
y_test_preds = lm_model.predict(X_test)
# scores
print(r2_score(y_test, y_test_preds)) #In this case we are predicting a continuous, numeric response. Therefore, common
print(mean_squared_error(y_test, y_test_preds)) #metrics to assess fit include Rsquared and MSE.
preds_vs_act = pd.DataFrame(np.hstack([y_test.values.reshape(y_test.size,1), y_test_preds.reshape(y_test.size,1)]))
preds_vs_act.columns = ['actual', 'preds']
preds_vs_act['diff'] = preds_vs_act['actual'] - preds_vs_act['preds']
preds_vs_act.head()
### plot how far our predictions are from the actual values compared to the predicted
plt.plot(preds_vs_act['preds'], preds_vs_act['diff'], 'bo');
plt.xlabel('predicted');
plt.ylabel('difference');
###Output
_____no_output_____
###Markdown
It seems there is quite a big bias when the score is low or high. Let's see what the best number of features to use is, based on the test set performance.
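To make the cutoff idea in the next function concrete: since the dummy columns are 0/1 indicators, `X.sum()` counts how many respondents fall into each category, so keeping only the columns whose sum exceeds a cutoff keeps only the categories with enough support. A toy sketch (the column names here are made up for illustration):

```python
import numpy as np
import pandas as pd

# toy dummy matrix: the column sums are 3, 1 and 2 respondents
toy_X = pd.DataFrame({
    'CompanyType_Startup': [1, 1, 1, 0],
    'CompanyType_Other':   [0, 0, 1, 0],
    'Overpaid_Somewhat':   [1, 0, 0, 1],
})

cutoff = 1
# keep only columns whose sum exceeds the cutoff (the same trick used in the function below)
reduced = toy_X.iloc[:, np.where((toy_X.sum() > cutoff) == True)[0]]
print(list(reduced.columns))  # ['CompanyType_Startup', 'Overpaid_Somewhat']
```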
###Code
### Let's see what the best number of features to use is, based on the test set performance
def find_optimal_lm_mod(X, y, cutoffs, test_size = .30, random_state=42, plot=True):
'''
INPUT
X - pandas dataframe, X matrix
y - pandas dataframe, response variable
cutoffs - list of ints, cutoff for number of non-zero values in dummy categorical vars
test_size - float between 0 and 1, default 0.3, determines the proportion of data as test data
random_state - int, default 42, controls random state for train_test_split
    plot - boolean, default True, whether to plot the result
OUTPUT
r2_scores_test - list of floats of r2 scores on the test data
r2_scores_train - list of floats of r2 scores on the train data
lm_model - model object from sklearn
X_train, X_test, y_train, y_test - output from sklearn train test split used for optimal model
'''
r2_scores_test, r2_scores_train, num_feats, results = [], [], [], dict()
for cutoff in cutoffs:
#reduce X matrix
reduce_X = X.iloc[:, np.where((X.sum() > cutoff) == True)[0]]
num_feats.append(reduce_X.shape[1])
#split the data into train and test
X_train, X_test, y_train, y_test = train_test_split(reduce_X, y, test_size = test_size, random_state=random_state)
#fit the model and obtain pred response
lm_model = LinearRegression(normalize=True)
lm_model.fit(X_train, y_train)
y_test_preds = lm_model.predict(X_test)
y_train_preds = lm_model.predict(X_train)
#append the r2 value from the test set
r2_scores_test.append(r2_score(y_test, y_test_preds))
r2_scores_train.append(r2_score(y_train, y_train_preds))
results[str(cutoff)] = r2_score(y_test, y_test_preds)
if plot:
plt.plot(num_feats, r2_scores_test, label="Test", alpha=.5)
plt.plot(num_feats, r2_scores_train, label="Train", alpha=.5)
plt.xlabel('Number of Features')
plt.ylabel('Rsquared')
plt.title('Rsquared by Number of Features')
plt.legend(loc=1)
plt.show()
best_cutoff = max(results, key=results.get)
#reduce X matrix
reduce_X = X.iloc[:, np.where((X.sum() > int(best_cutoff)) == True)[0]]
num_feats.append(reduce_X.shape[1])
#split the data into train and test
X_train, X_test, y_train, y_test = train_test_split(reduce_X, y, test_size = test_size, random_state=random_state)
#fit the model
lm_model = LinearRegression(normalize=True)
lm_model.fit(X_train, y_train)
return r2_scores_test, r2_scores_train, lm_model, X_train, X_test, y_train, y_test
cutoffs = [5000, 3500, 2500, 1000, 100, 50, 30, 20, 10, 5]
r2_scores_test, r2_scores_train, lm_model, X_train, X_test, y_train, y_test = find_optimal_lm_mod(X, y, cutoffs)
###Output
_____no_output_____
###Markdown
We can see that the more features are used, the higher the test set performance.
###Code
coefs_df = pd.DataFrame()
coefs_df['est_int'] = X_train.columns
coefs_df['coefs'] = lm_model.coef_
coefs_df['abs_coefs'] = np.abs(lm_model.coef_)
# check the scale of coefficients
coefs_df.sort_values('abs_coefs', ascending=False)
###Output
_____no_output_____
###Markdown
Interesting: it seems the most important factor for Job Satisfaction is whether the employee feels overpaid; also, employees in startups seem happier, those who work for big companies are less happy, and the more time spent working from home, the higher the satisfaction. Random Forest Model and the Evaluation
###Code
### Use randomforest instead of linear model
from sklearn.ensemble import RandomForestRegressor
### Let's see what the best number of features to use is, based on the test set performance
def find_optimal_rf_mod(X, y, cutoffs, test_size = .30, random_state=42, plot=True):
'''
INPUT
X - pandas dataframe, X matrix
y - pandas dataframe, response variable
cutoffs - list of ints, cutoff for number of non-zero values in dummy categorical vars
test_size - float between 0 and 1, default 0.3, determines the proportion of data as test data
random_state - int, default 42, controls random state for train_test_split
    plot - boolean, default True, whether to plot the result
OUTPUT
r2_scores_test - list of floats of r2 scores on the test data
r2_scores_train - list of floats of r2 scores on the train data
rf_model - model object from sklearn
X_train, X_test, y_train, y_test - output from sklearn train test split used for optimal model
'''
r2_scores_test, r2_scores_train, num_feats, results = [], [], [], dict()
for cutoff in cutoffs:
#reduce X matrix
reduce_X = X.iloc[:, np.where((X.sum() > cutoff) == True)[0]]
num_feats.append(reduce_X.shape[1])
#split the data into train and test
X_train, X_test, y_train, y_test = train_test_split(reduce_X, y, test_size = test_size, random_state=random_state)
#fit the model and obtain pred response
rf_model = RandomForestRegressor() #no normalizing here, but could tune other hyperparameters
rf_model.fit(X_train, y_train)
y_test_preds = rf_model.predict(X_test)
y_train_preds = rf_model.predict(X_train)
#append the r2 value from the test set
r2_scores_test.append(r2_score(y_test, y_test_preds))
r2_scores_train.append(r2_score(y_train, y_train_preds))
results[str(cutoff)] = r2_score(y_test, y_test_preds)
if plot:
plt.plot(num_feats, r2_scores_test, label="Test", alpha=.5)
plt.plot(num_feats, r2_scores_train, label="Train", alpha=.5)
plt.xlabel('Number of Features')
plt.ylabel('Rsquared')
plt.title('Rsquared by Number of Features')
plt.legend(loc=1)
plt.show()
best_cutoff = max(results, key=results.get)
#reduce X matrix
reduce_X = X.iloc[:, np.where((X.sum() > int(best_cutoff)) == True)[0]]
num_feats.append(reduce_X.shape[1])
#split the data into train and test
X_train, X_test, y_train, y_test = train_test_split(reduce_X, y, test_size = test_size, random_state=random_state)
#fit the model
rf_model = RandomForestRegressor()
rf_model.fit(X_train, y_train)
return r2_scores_test, r2_scores_train, rf_model, X_train, X_test, y_train, y_test
cutoffs = [5000, 3500, 2500, 1000, 100, 50, 30, 20, 10, 5]
r2_test, r2_train, rf_model, X_train, X_test, y_train, y_test = find_optimal_rf_mod(X, y, cutoffs)
###Output
_____no_output_____
###Markdown
Oops, it seems that if we use random forest it overfits.
###Code
y_test_preds = rf_model.predict(X_test)
preds_vs_act = pd.DataFrame(np.hstack([y_test.values.reshape(y_test.size,1), y_test_preds.reshape(y_test.size,1)]))
preds_vs_act.columns = ['actual', 'preds']
preds_vs_act['diff'] = preds_vs_act['actual'] - preds_vs_act['preds']
plt.plot(preds_vs_act['preds'], preds_vs_act['diff'], 'bo');
plt.xlabel('predicted');
plt.ylabel('difference');
###Output
_____no_output_____
###Markdown
Improve the Random Forest Model using parameter optimization.
###Code
# use GridSearchCV to search for optimal hyper parameters
from sklearn.model_selection import GridSearchCV
### Let's see what the best number of features to use is, based on the test set performance
def find_optimal_rf_mod(X, y, cutoffs, test_size = .30, random_state=42, plot=True, param_grid=None):
'''
INPUT
X - pandas dataframe, X matrix
y - pandas dataframe, response variable
cutoffs - list of ints, cutoff for number of non-zero values in dummy categorical vars
test_size - float between 0 and 1, default 0.3, determines the proportion of data as test data
random_state - int, default 42, controls random state for train_test_split
    plot - boolean, default True, whether to plot the result
    param_grid - dict, default None, hyperparameter grid to pass to GridSearchCV
OUTPUT
r2_scores_test - list of floats of r2 scores on the test data
r2_scores_train - list of floats of r2 scores on the train data
rf_model - model object from sklearn
X_train, X_test, y_train, y_test - output from sklearn train test split used for optimal model
'''
r2_scores_test, r2_scores_train, num_feats, results = [], [], [], dict()
for cutoff in cutoffs:
#reduce X matrix
reduce_X = X.iloc[:, np.where((X.sum() > cutoff) == True)[0]]
num_feats.append(reduce_X.shape[1])
#split the data into train and test
X_train, X_test, y_train, y_test = train_test_split(reduce_X, y, test_size = test_size, random_state=random_state)
#fit the model and obtain pred response
if param_grid==None:
rf_model = RandomForestRegressor() #no normalizing here, but could tune other hyperparameters
else:
rf_inst = RandomForestRegressor(n_jobs=-1, verbose=1)
rf_model = GridSearchCV(rf_inst, param_grid, n_jobs=-1)
rf_model.fit(X_train, y_train)
y_test_preds = rf_model.predict(X_test)
y_train_preds = rf_model.predict(X_train)
#append the r2 value from the test set
r2_scores_test.append(r2_score(y_test, y_test_preds))
r2_scores_train.append(r2_score(y_train, y_train_preds))
results[str(cutoff)] = r2_score(y_test, y_test_preds)
if plot:
plt.plot(num_feats, r2_scores_test, label="Test", alpha=.5)
plt.plot(num_feats, r2_scores_train, label="Train", alpha=.5)
plt.xlabel('Number of Features')
plt.ylabel('Rsquared')
plt.title('Rsquared by Number of Features')
plt.legend(loc=1)
plt.show()
best_cutoff = max(results, key=results.get)
#reduce X matrix
reduce_X = X.iloc[:, np.where((X.sum() > int(best_cutoff)) == True)[0]]
num_feats.append(reduce_X.shape[1])
#split the data into train and test
X_train, X_test, y_train, y_test = train_test_split(reduce_X, y, test_size = test_size, random_state=random_state)
#fit the model
if param_grid==None:
rf_model = RandomForestRegressor() #no normalizing here, but could tune other hyperparameters
else:
rf_inst = RandomForestRegressor(n_jobs=-1, verbose=1)
rf_model = GridSearchCV(rf_inst, param_grid, n_jobs=-1)
rf_model.fit(X_train, y_train)
return r2_scores_test, r2_scores_train, rf_model, X_train, X_test, y_train, y_test
cutoffs = [5000, 3500, 2500, 1000, 100, 50, 30, 20, 10, 5]
params = {'n_estimators': [10, 100, 1000], 'max_depth': [1, 5, 10, 100]}
r2_test, r2_train, rf_model, X_train, X_test, y_train, y_test = find_optimal_rf_mod(X, y, cutoffs, param_grid=params)
###Output
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=-1)]: Done 18 tasks | elapsed: 0.1s
[Parallel(n_jobs=-1)]: Done 168 tasks | elapsed: 0.3s
[Parallel(n_jobs=-1)]: Done 418 tasks | elapsed: 0.6s
[Parallel(n_jobs=-1)]: Done 768 tasks | elapsed: 1.1s
[Parallel(n_jobs=-1)]: Done 1000 out of 1000 | elapsed: 1.4s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 168 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 418 tasks | elapsed: 0.1s
[Parallel(n_jobs=16)]: Done 768 tasks | elapsed: 0.1s
[Parallel(n_jobs=16)]: Done 1000 out of 1000 | elapsed: 0.1s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 168 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 418 tasks | elapsed: 0.1s
[Parallel(n_jobs=16)]: Done 768 tasks | elapsed: 0.1s
[Parallel(n_jobs=16)]: Done 1000 out of 1000 | elapsed: 0.2s finished
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=-1)]: Done 18 tasks | elapsed: 0.1s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed: 0.2s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 100 out of 100 | elapsed: 0.0s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 100 out of 100 | elapsed: 0.0s finished
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=-1)]: Done 18 tasks | elapsed: 0.1s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed: 0.2s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 100 out of 100 | elapsed: 0.0s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 100 out of 100 | elapsed: 0.0s finished
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=-1)]: Done 18 tasks | elapsed: 0.1s
[Parallel(n_jobs=-1)]: Done 168 tasks | elapsed: 0.4s
[Parallel(n_jobs=-1)]: Done 418 tasks | elapsed: 1.0s
[Parallel(n_jobs=-1)]: Done 768 tasks | elapsed: 1.7s
[Parallel(n_jobs=-1)]: Done 1000 out of 1000 | elapsed: 2.3s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 168 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 418 tasks | elapsed: 0.1s
[Parallel(n_jobs=16)]: Done 768 tasks | elapsed: 0.1s
[Parallel(n_jobs=16)]: Done 1000 out of 1000 | elapsed: 0.2s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 168 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 418 tasks | elapsed: 0.1s
[Parallel(n_jobs=16)]: Done 768 tasks | elapsed: 0.2s
[Parallel(n_jobs=16)]: Done 1000 out of 1000 | elapsed: 0.2s finished
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=-1)]: Done 18 tasks | elapsed: 0.1s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed: 0.3s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 100 out of 100 | elapsed: 0.0s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 100 out of 100 | elapsed: 0.0s finished
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=-1)]: Done 18 tasks | elapsed: 0.1s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed: 0.3s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 100 out of 100 | elapsed: 0.0s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 100 out of 100 | elapsed: 0.0s finished
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=-1)]: Done 18 tasks | elapsed: 0.1s
[Parallel(n_jobs=-1)]: Done 168 tasks | elapsed: 0.4s
[Parallel(n_jobs=-1)]: Done 418 tasks | elapsed: 1.0s
[Parallel(n_jobs=-1)]: Done 768 tasks | elapsed: 1.8s
[Parallel(n_jobs=-1)]: Done 1000 out of 1000 | elapsed: 2.3s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 168 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 418 tasks | elapsed: 0.1s
[Parallel(n_jobs=16)]: Done 768 tasks | elapsed: 0.1s
[Parallel(n_jobs=16)]: Done 1000 out of 1000 | elapsed: 0.2s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 168 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 418 tasks | elapsed: 0.1s
[Parallel(n_jobs=16)]: Done 768 tasks | elapsed: 0.1s
[Parallel(n_jobs=16)]: Done 1000 out of 1000 | elapsed: 0.2s finished
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=-1)]: Done 18 tasks | elapsed: 0.1s
[Parallel(n_jobs=-1)]: Done 168 tasks | elapsed: 0.4s
[Parallel(n_jobs=-1)]: Done 418 tasks | elapsed: 1.0s
[Parallel(n_jobs=-1)]: Done 768 tasks | elapsed: 1.8s
[Parallel(n_jobs=-1)]: Done 1000 out of 1000 | elapsed: 2.2s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 168 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 418 tasks | elapsed: 0.1s
[Parallel(n_jobs=16)]: Done 768 tasks | elapsed: 0.1s
[Parallel(n_jobs=16)]: Done 1000 out of 1000 | elapsed: 0.2s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 168 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 418 tasks | elapsed: 0.1s
[Parallel(n_jobs=16)]: Done 768 tasks | elapsed: 0.1s
[Parallel(n_jobs=16)]: Done 1000 out of 1000 | elapsed: 0.2s finished
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=-1)]: Done 18 tasks | elapsed: 0.1s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed: 0.3s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 100 out of 100 | elapsed: 0.0s finished
[Parallel(n_jobs=16)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=16)]: Done 18 tasks | elapsed: 0.0s
[Parallel(n_jobs=16)]: Done 100 out of 100 | elapsed: 0.0s finished
[Parallel(n_jobs=-1)]: Using backend ThreadingBackend with 16 concurrent workers.
[Parallel(n_jobs=-1)]: Done 18 tasks | elapsed: 0.1s
###Markdown
It seems the linear model has better performance than random forest here.
###Code
features = X_train.columns
importances = rf_model.best_estimator_.feature_importances_
indices = np.argsort(importances)
plt.figure(figsize=(15,8))
plt.title('Feature Importances')
plt.barh(range(len(indices)), importances[indices], color='b', align='center')
plt.yticks(range(len(indices)), [features[i] for i in indices])
plt.xlabel('Relative Importance')
plt.show()
# R-squared not very good
###Output
_____no_output_____ |
pyne/pyne_dataset.ipynb | ###Markdown
Create a decay dataset suitable for radioactivedecay from PyNEThis notebook creates a set of decay dataset files suitable for radioactivedecay `v0.4.5+` from the decay data in PyNE `v0.7.6`. The PyNE data is based on the [191004 ENDSF](https://github.com/pyne/pyne/pull/1216) release.First import the necessary modules.
###Code
import math, pickle
from pyne import nucname, data
from pyne.material import Material
import pyne
import numpy as np
import pandas as pd
from scipy import sparse
from sympy import Integer, S, log, Matrix
from sympy.matrices import SparseMatrix
print("Using PyNE version:", pyne.__version__)
###Output
Using PyNE version: 0.7.6
###Markdown
Create a DataFrame containing the PyNE decay dataCreate a list of all the ground state (non-metastable) radionuclides in PyNE. Note we exclude the metastable states as the PyNE treatment for decay chains passing through metastable states is [incorrect](https://github.com/pyne/pyne/issues/739) as of `v0.7.6`. We also exclude radionuclides with undefined half-lives.
###Code
pyne_nonmetastable_ids = []
for z in range(1,120):
for a in range(1,300):
try:
id = z*10000000+a*10000
hl = data.half_life(id)
except:
continue
if hl == float("inf"): continue # ignore stable nuclides
        elif math.isnan(hl): continue # ignore nuclides whose half-life is undefined
pyne_nonmetastable_ids.append(id)
print("Total number of radionuclides:", len(pyne_nonmetastable_ids))
###Output
Total number of radionuclides: 2920
###Markdown
Define functions to fill a Pandas DataFrame with the decay data from PyNE.
###Code
def add_hyphen(name):
"""Add hypen to radionuclide name string e.g. H3 to H-3."""
for i in range(1, len(name)):
if not name[i].isdigit():
continue
name = name[:i] + "-" + name[i:]
break
return name
def create_rows(ids):
"""Create a list of dictionaries which will become rows of the DataFrame of decay data."""
rows = []
for id in ids:
name = add_hyphen(nucname.name(id))
Z, A = nucname.znum(id), nucname.anum(id)
hl = data.half_life(id)
children = list(data.decay_children(id))
bf = []
modes = []
atomic_mass = data.atomic_mass(id)
for c in children:
bf.append(data.branch_ratio(id, c))
cZ, cA = nucname.znum(c), nucname.anum(c)
if Z == cZ and A == cA: modes.append("IT")
elif Z-2 == cZ and A-4 == cA: modes.append("α")
elif Z+1 == cZ and A == cA: modes.append("β-")
elif Z-1 == cZ and A == cA: modes.append("β+ or EC")
else: modes.append("SF or other")
rows.append({"Radionuclide": name, "id": id, "Z": Z, "A": A, "Half-life_s": hl,
"Num_decay_modes": len(children), "Progeny": children, "Branching_fractions": bf,
"Modes": modes, "Atomic_mass": atomic_mass})
return rows
###Output
_____no_output_____
###Markdown
Add all the PyNE decay data to a DataFrame.
###Code
col_names = ["Radionuclide", "id", "Z", "A", "Half-life_s", "Num_decay_modes",
"Progeny", "Branching_fractions", "Modes", "Atomic_mass"]
pyne_full = pd.DataFrame(create_rows(pyne_nonmetastable_ids), columns=col_names)
pyne_full.set_index("Radionuclide", inplace=True)
pyne_full.to_csv("pyne_full.csv", index=True)
pyne_full.head(n=10)
###Output
_____no_output_____
###Markdown
Order the DataFrame so all progeny are located below their parentThe radionuclides in the DataFrame need to be ordered so that progeny (decay children) are always located lower than their parent. This is so the subsequent matrices that we create are lower triangular.To achieve this we first count how many times each radioactive decay mode occurs in the dataset.
###Code
modes = pd.Series(np.concatenate(pyne_full.Modes))
print("β+ or electron capture:", modes.value_counts()["β+ or EC"])
print("β-:", modes.value_counts()["β-"])
print("α:", modes.value_counts()["α"])
print("Spontaneous Fission or other:", modes.value_counts()["SF or other"])
print("Total number of decay modes:", pyne_full.Num_decay_modes.sum())
###Output
β+ or electron capture: 1143
β-: 1133
α: 580
Spontaneous Fission or other: 1257
Total number of decay modes: 4113
###Markdown
We order by decreasing mass number (A), followed by decreasing atomic number (Z), as there are more β+ and EC decays than β- decays.
###Code
pyne_full.sort_values(by=["A", "Z"], inplace=True, ascending=[False, False])
pyne_full.head(n=10)
###Output
_____no_output_____
###Markdown
Now it is necessary to correct the positions of the remaining radionuclides that are not ordered correctly. We do this by looping over all the radionuclides in the DataFrame, and checking if their progeny are located below. If not, the positions of the parent and progeny rows in the DataFrame are switched. This process takes a few passes until all the parents and progeny are correctly ordered.
###Code
nuclide_list = list(pyne_full.index)
id_list = list(pyne_full.id)
swapping = 1
while swapping >= 1:
swaps = 0
for parent in nuclide_list:
for c, mode, bf in zip(pyne_full.at[parent, "Progeny"],
pyne_full.at[parent, "Modes"],
pyne_full.at[parent, "Branching_fractions"]):
if data.decay_const(c) == 0.0 or c not in id_list:
continue
j = nuclide_list.index(parent)
k = id_list.index(c)
if j > k:
nuclide_list[j], nuclide_list[k] = nuclide_list[k], nuclide_list[j]
id_list[j], id_list[k] = id_list[k], id_list[j]
pyne_full = pyne_full.reindex(index=nuclide_list)
swaps +=1
print("Iteration", swapping, "number of swaps:", swaps)
swapping += 1
if swaps == 0: swapping = 0
pyne_full.head(n=10)
###Output
Iteration 1 number of swaps: 901
Iteration 2 number of swaps: 632
Iteration 3 number of swaps: 425
Iteration 4 number of swaps: 262
Iteration 5 number of swaps: 135
Iteration 6 number of swaps: 53
Iteration 7 number of swaps: 16
Iteration 8 number of swaps: 1
Iteration 9 number of swaps: 0
###Markdown
Now make the dataset files for radioactivedecayThe process of making datasets for radioactivedecay is as follows. We first make the sparse lower triangular matrix *Λ*, which captures the decay relationships and branching fractions between parents and their immediate (first) progeny. We then make the sparse matrix _C_, which is used in decay calculations, and from this make its inverse *C-1*.First we define some functions used for making *Λ*, _C_ and *C-1*.
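As background on why these particular matrices are useful (this is the standard diagonalisation of the decay matrix which, to my understanding, is what radioactivedecay uses internally): if *Λ* is the lower triangular matrix built below and *Λd* is its diagonal part (the negative decay constants), then _C_ and *C-1* diagonalise *Λ*, so the full Bateman solution can be written in matrix form:

$$
\Lambda = C \, \Lambda_d \, C^{-1}, \qquad \frac{\mathrm{d}N}{\mathrm{d}t} = \Lambda N \;\Longrightarrow\; N(t) = C \, e^{\Lambda_d t} \, C^{-1} N(0)
$$

where *N(t)* is the vector of the number of atoms of each nuclide at time *t*. The recurrence coded in `make_C` below is just the column-by-column construction of these eigenvectors.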
###Code
def make_lambda_mat(df):
"""Make the lambda matrix and a list of the decay constants."""
rows = np.array([], dtype=np.int64)
cols = np.array([], dtype=np.int64)
values = np.array([], dtype=np.float64)
lambdas = []
ln2 = np.log(2)
nuclide_list = list(df.index)
id_list = list(df.id)
for parent in nuclide_list:
j = nuclide_list.index(parent)
rows = np.append(rows, [j])
cols = np.append(cols, [j])
lambd = ln2/df.at[parent, "Half-life_s"]
values = np.append(values, -lambd)
lambdas = np.append(lambdas, lambd)
for progeny, bf in zip(df.at[parent, "Progeny"], df.at[parent, "Branching_fractions"]):
if (progeny not in id_list): continue
i = id_list.index(progeny)
rows = np.append(rows, [i])
cols = np.append(cols, [j])
values = np.append(values, [lambd*bf])
return sparse.csc_matrix((values, (rows, cols))), lambdas
def prepare_C_inv_C(df):
"""Prepare data structures needed to make C and inv_C."""
nuclide_list = list(df.index)
num_nuclides = len(nuclide_list)
rows_dict = {}
for i in range(num_nuclides-1, -1, -1):
a,_ = lambda_mat[:,i].nonzero()
b = a
for j in a:
if j > i:
b = np.unique(np.concatenate((b,rows_dict[j])))
rows_dict[i] = b
rows_C = np.array([], dtype=np.int64)
cols_C = np.array([], dtype=np.int64)
for i in range(0, num_nuclides):
rows_C = np.concatenate((rows_C,rows_dict[i]))
cols_C = np.concatenate((cols_C,np.array([i]*len(rows_dict[i]))))
C = sparse.csc_matrix((np.array([0.0]*rows_C.size, dtype=np.float64), (rows_C, cols_C)))
inv_C = sparse.csc_matrix((np.array([0.0]*rows_C.size, dtype=np.float64), (rows_C, cols_C)))
return rows_dict, rows_C, cols_C, C, inv_C
def make_C(rows_dict, rows_C, cols_C, C, lambda_mat, df):
"""Calculate C. Report cases of radionuclides with identical or similar half-lives in the same decay chain."""
nuclide_list = list(df.index)
for index in range(0, rows_C.size):
i = rows_C[index]
j = cols_C[index]
if i == j: C[i,i] = 1.0
else:
sigma = 0.0
for k in rows_dict[j]:
if k == i: break
sigma += lambda_mat[i,k]*C[k,j]
if lambda_mat[j,j]==lambda_mat[i,i]:
print("equal decay constants:", nuclide_list[i], nuclide_list[j])
C[i,j] = sigma/(lambda_mat[j,j]-lambda_mat[i,i])
if abs((lambda_mat[j,j]-lambda_mat[i,i])/lambda_mat[j,j]) < 1E-4:
print("rel_diff of decay constants < 1E-4:", nuclide_list[i], nuclide_list[j])
return C
def make_inv_C(rows_dict, rows_C, cols_C, C, inv_C):
"""Calculate inv_C."""
for index in range(0, rows_C.size):
i = rows_C[index]
j = cols_C[index]
if i == j: inv_C[i,i] = 1.0
else:
sigma = 0.0
for k in rows_dict[j]:
if k == i: break
sigma -= C[i,k]*inv_C[k,j]
inv_C[i,j] = sigma
return inv_C
###Output
_____no_output_____
###Markdown
The process of making _Λ_, _C_ and *C-1* is complicated as PyNE includes some decay chains where two radionuclides have identical half-lives. PyNE has [special routines](https://pyne.io/theorymanual/decay.html) to cope with this, but radioactivedecay currently does not. Fortunately these cases are limited to some fairly obscure radionuclides which are unlikely to be relevant to most users.The following is a first pass through at making *Λ* and *C*. It highlights the cases where radionuclides in the same chain have identical half-lives, and also cases where radionuclides in the same chain have similar half-lives (relative difference < 1E-4).
###Code
lambda_mat, lambdas = make_lambda_mat(pyne_full)
rows_dict, rows_C, cols_C, C, inv_C = prepare_C_inv_C(pyne_full)
C = make_C(rows_dict, rows_C, cols_C, C, lambda_mat, pyne_full)
###Output
equal decay constants: Os-179 Pt-183
rel_diff of decay constants < 1E-4: Os-179 Pt-183
###Markdown
So there are radionuclides with identical half-lives in the chains containing 183Pt, 172Ir and 153Lu. These cases notwithstanding, there are no other chains containing radionuclides whose decay constants differ by a relative difference of less than 1E-4. It turns out that there is a [bug](https://github.com/pyne/pyne/issues/1342) in PyNE `v0.7.6` causing it to incorrectly calculate decayed activities for the chains containing 183Pt, 172Ir and 153Lu. The bug affects all radionuclides in the chains upwards from these three radionuclides. Because of this bug, and the fact that radioactivedecay does not support chains with radionuclides with equal half-lives, we remove the affected radionuclides from the decay dataset. Note that even after doing this, decay calculation results are unaffected for chains starting with radionuclides below 183Pt, 172Ir and 153Lu. This function finds the radionuclides to remove.
###Code
def find_affected_radionuclides(nuclide_list, lambda_mat, nuclide):
"""Find radionuclides higher in decay chain than nuclide."""
s1 = {nuclide_list.index(nuclide)}
index = 0
while index < len(nuclide_list):
s2 = set(lambda_mat.getcol(index).indices)
if len(s1.intersection(s2)) > 0:
s2 = set([s for s in list(s2) if s <= index])
if s2.issubset(s1):
index += 1
continue
s1 = s2.union(s1)
index = 0
continue
index +=1
return [nuclide_list[nuclide] for nuclide in s1]
nuclide_list = list(pyne_full.index)
affected = find_affected_radionuclides(nuclide_list, lambda_mat, "Pt-183")
print("Radionuclides affected for Pt-183:", affected)
remove = affected
affected = find_affected_radionuclides(nuclide_list, lambda_mat, "Ir-172")
print("Radionuclides affected for Ir-172:", affected)
remove.extend(affected)
affected = find_affected_radionuclides(nuclide_list, lambda_mat, "Lu-153")
print("Radionuclides affected for Lu-153:", affected)
remove.extend(affected)
###Output
Radionuclides affected for Pt-183: ['Po-191', 'Pt-183', 'Bi-191', 'Pb-187', 'Tl-187', 'Rn-195', 'At-195', 'Hg-183', 'Au-183']
Radionuclides affected for Ir-172: ['Pb-180', 'Hg-176', 'Tl-177', 'Pt-172', 'Ir-172']
Radionuclides affected for Lu-153: ['Lu-153', 'Ta-157']
###Markdown
In total there are 16 radionuclides to be removed from the decay dataset.
###Code
pyne_truncated = pyne_full.copy()
pyne_truncated = pyne_truncated.drop(labels=remove)
pyne_truncated.to_csv("pyne_truncated.csv", index=True)
###Output
_____no_output_____
###Markdown
Now this is done, we can make the matrices _C_ and *C-1* used by radioactivedecay.
###Code
lambda_mat, lambdas = make_lambda_mat(pyne_truncated)
rows_dict, rows_C, cols_C, C, inv_C = prepare_C_inv_C(pyne_truncated)
C = make_C(rows_dict, rows_C, cols_C, C, lambda_mat, pyne_truncated)
inv_C = make_inv_C(rows_dict, rows_C, cols_C, C, inv_C)
###Output
_____no_output_____
###Markdown
Calculate SymPy versions of the matrices for arbitrary-precision calculationsWe now calculate SymPy versions of _C_ and *C-1* for arbitrary-precision calculations. First define some functions for processing the data into SymPy objects:
###Code
year_sympy = S(36525)/10000
def to_rational(number):
"""
Converts half-life string to SymPy object.
"""
if number == '0.0':
return S(0)
if 'e' in number or 'E' in number:
if 'e' in number:
end = number.split('e')[1]
number = number.split('e')[0]
else:
end = number.split('E')[1]
number = number.split('E')[0]
parts = number.split('.')
if len(parts) == 1: parts.append('')
if end[0] == '+':
multiply = 1
factor = S(10**int(end.lstrip('+')))
else:
multiply = 0
factor = S(10**int(end.lstrip('-')))
denom = S(10**len(parts[1]))
parts[0] = parts[0].lstrip('0')
if len(parts[0]) == 0: parts[1] = parts[1].lstrip('0')
if multiply == 1:
return S(parts[0]+parts[1])*factor/denom
else: return S(parts[0]+parts[1])/(denom*factor)
parts = number.split('.')
if len(parts) == 1: parts.append('')
denom = S(10**len(parts[1]))
parts[0] = parts[0].lstrip('0')
if len(parts[0]) == 0: parts[1] = parts[1].lstrip('0')
return S(parts[0]+parts[1])/denom
###Output
_____no_output_____
###Markdown
Now make a SymPy version of the *Λ* matrix:
###Code
num_nuclides = len(pyne_truncated)
lambda_mat_sympy = SparseMatrix.zeros(num_nuclides, num_nuclides)
lambdas_sympy = Matrix.zeros(num_nuclides, 1)
nuclide_list = list(pyne_truncated.index)
id_list = list(pyne_truncated.id)
masses_sympy = Matrix.zeros(num_nuclides, 1)
for parent in nuclide_list:
j = nuclide_list.index(parent)
hl_sympy = to_rational(str(pyne_truncated.at[parent, "Half-life_s"]))
lambd = log(2)/hl_sympy
lambda_mat_sympy[j, j] = -lambd
lambdas_sympy[j] = lambd
for progeny, bf in zip(pyne_truncated.at[parent, "Progeny"], pyne_truncated.at[parent, "Branching_fractions"]):
if (progeny not in id_list): continue
i = id_list.index(progeny)
lambda_mat_sympy[i, j] = lambd*to_rational(str(bf))
masses_sympy[j] = to_rational(str(pyne_truncated.at[parent, "Atomic_mass"]))
###Output
_____no_output_____
###Markdown
Now make a SymPy version of the _C_ and *C-1* matrix:
###Code
c_sympy = SparseMatrix.zeros(num_nuclides, num_nuclides)
c_inv_sympy = SparseMatrix.zeros(num_nuclides, num_nuclides)
for index in range(0, rows_C.size):
i = rows_C[index]
j = cols_C[index]
if i == j: c_sympy[i, i] = Integer(1)
else:
sigma = Integer(0)
for k in rows_dict[j]:
if k == i: break
sigma += lambda_mat_sympy[i, k]*c_sympy[k, j]
c_sympy[i, j] = sigma/(lambda_mat_sympy[j, j]-lambda_mat_sympy[i, i])
for index in range(0, rows_C.size):
i = rows_C[index]
j = cols_C[index]
if i == j: c_inv_sympy[i, i] = Integer(1)
else:
sigma = Integer(0)
for k in rows_dict[j]:
if k == i: break
sigma -= c_sympy[i, k]*c_inv_sympy[k, j]
c_inv_sympy[i, j] = sigma
###Output
_____no_output_____
###Markdown
Save the outputsWrite output files containing _C_ and *C-1* in SciPy and SymPy sparse formats, and other files needed to create a dataset suitable for radioactivedecay `v0.4.0+`.
###Code
hldata = np.array([(np.float64(hl), 's', str(hl) + ' s') for hl in pyne_truncated["Half-life_s"]], dtype=object)
prog_bfs_modes = np.array([{}]*len(pyne_truncated.index))
i = 0
for parent in list(pyne_truncated.index):
progeny = [add_hyphen(nucname.name(id)) for id in pyne_truncated.at[parent, "Progeny"]]
bfs = dict(zip(progeny, pyne_truncated.at[parent, "Branching_fractions"]))
modes = dict(zip(progeny, pyne_truncated.at[parent, "Modes"]))
bfs = {key: value for key, value in sorted(bfs.items(), key=lambda x: x[1], reverse=True)}
prog_bfs_modes[i] = {progeny: [bf, modes[progeny]] for progeny, bf in bfs.items()}
i += 1
np.savez_compressed("./decay_data.npz", radionuclides=np.array(nuclide_list),
masses=np.array(list(pyne_truncated["Atomic_mass"])),
hldata=hldata, prog_bfs_modes=prog_bfs_modes,
year_conv=365.25)
# Write out SciPy sparse matrices (convert to CSR format)
sparse.save_npz("./c_scipy.npz", C.tocsr())
sparse.save_npz("./c_inv_scipy.npz", inv_C.tocsr())
import pkg_resources, sympy
if pkg_resources.parse_version(sympy.__version__) >= pkg_resources.parse_version('1.9'):
pickle_type = '1.9'
else:
pickle_type = '1.8'
# Write out SymPy objects to pickle files
with open(f"c_sympy_{pickle_type}.pickle", "wb") as outfile:
outfile.write(pickle.dumps(c_sympy))
with open(f"c_inv_sympy_{pickle_type}.pickle", "wb") as outfile:
outfile.write(pickle.dumps(c_inv_sympy))
with open(f"atomic_masses_sympy_{pickle_type}.pickle", "wb") as outfile:
outfile.write(pickle.dumps(masses_sympy))
with open(f"decay_consts_sympy_{pickle_type}.pickle", "wb") as outfile:
outfile.write(pickle.dumps(lambdas_sympy))
with open(f"year_conversion_sympy_{pickle_type}.pickle", "wb") as outfile:
outfile.write(pickle.dumps(year_sympy))
###Output
_____no_output_____
###Markdown
Create a decay dataset suitable for radioactivedecay from PyNEThis notebook creates a set of decay dataset files suitable for radioactivedecay `v0.4.0+` from the decay data in PyNE `v0.7.5`. The PyNE data is based on the [191004 ENDSF](https://github.com/pyne/pyne/pull/1216) release.First import the necessary modules.
###Code
import math, pickle
from pyne import nucname, data
from pyne.material import Material
import pyne
import numpy as np
import pandas as pd
from scipy import sparse
from sympy import Integer, S, log, Matrix
from sympy.matrices import SparseMatrix
print("Using PyNE version:", pyne.__version__)
###Output
Using PyNE version: 0.7.1
###Markdown
Create a DataFrame containing the PyNE decay dataCreate a list of all the ground state (non-metastable) radionuclides in PyNE. Note we exclude the metastable states as the PyNE treatment for decay chains passing through metastable states is [incorrect](https://github.com/pyne/pyne/issues/739) as of `v0.7.5`. We also exclude radionuclides with undefined half-lives.
###Code
pyne_nonmetastable_ids = []
for z in range(1,120):
for a in range(1,300):
try:
id = z*10000000+a*10000
hl = data.half_life(id)
except:
continue
if hl == float("inf"): continue # ignore stable nuclides
        elif math.isnan(hl): continue # ignore nuclides whose half-life is undefined
pyne_nonmetastable_ids.append(id)
print("Total number of radionuclides:", len(pyne_nonmetastable_ids))
###Output
Total number of radionuclides: 2920
###Markdown
Define functions to fill a Pandas DataFrame with the decay data from PyNE.
###Code
def add_hyphen(name):
"""Add hypen to radionuclide name string e.g. H3 to H-3."""
for i in range(1, len(name)):
if not name[i].isdigit():
continue
name = name[:i] + "-" + name[i:]
break
return name
def create_rows(ids):
"""Create a list of dictionaries which will become rows of the DataFrame of decay data."""
rows = []
for id in ids:
name = add_hyphen(nucname.name(id))
Z, A = nucname.znum(id), nucname.anum(id)
hl = data.half_life(id)
children = list(data.decay_children(id))
bf = []
modes = []
atomic_mass = data.atomic_mass(id)
for c in children:
bf.append(data.branch_ratio(id, c))
cZ, cA = nucname.znum(c), nucname.anum(c)
if Z == cZ and A == cA: modes.append("IT")
elif Z-2 == cZ and A-4 == cA: modes.append("α")
elif Z+1 == cZ and A == cA: modes.append("β-")
elif Z-1 == cZ and A == cA: modes.append("β+ or EC")
else: modes.append("SF or other")
rows.append({"Radionuclide": name, "id": id, "Z": Z, "A": A, "Half-life_s": hl,
"Num_decay_modes": len(children), "Progeny": children, "Branching_fractions": bf,
"Modes": modes, "Atomic_mass": atomic_mass})
return rows
###Output
_____no_output_____
###Markdown
Add all the PyNE decay data to a DataFrame.
###Code
col_names = ["Radionuclide", "id", "Z", "A", "Half-life_s", "Num_decay_modes",
"Progeny", "Branching_fractions", "Modes", "Atomic_mass"]
pyne_full = pd.DataFrame(create_rows(pyne_nonmetastable_ids), columns=col_names)
pyne_full.set_index("Radionuclide", inplace=True)
pyne_full.to_csv("pyne_full.csv", index=True)
pyne_full.head(n=10)
###Output
_____no_output_____
###Markdown
Order the DataFrame so all progeny are located below their parentThe radionuclides in the DataFrame need to be ordered so that progeny (decay children) are always located lower than their parent. This is so the subsequent matrices that we create are lower triangular.To achieve this we first count how many times each radioactive decay mode occurs in the dataset.
###Code
modes = pd.Series(np.concatenate(pyne_full.Modes))
print("β+ or electron capture:", modes.value_counts()["β+ or EC"])
print("β-:", modes.value_counts()["β-"])
print("α:", modes.value_counts()["α"])
print("Spontaneous Fission or other:", modes.value_counts()["SF or other"])
print("Total number of decay modes:", pyne_full.Num_decay_modes.sum())
###Output
β+ or electron capture: 1143
β-: 1133
α: 580
Spontaneous Fission or other: 1257
Total number of decay modes: 4113
###Markdown
We order by decreasing mass number (A), followed by decreasing atomic number (Z), as there are more β+ and EC decays than β- decays.
###Code
pyne_full.sort_values(by=["A", "Z"], inplace=True, ascending=[False, False])
pyne_full.head(n=10)
###Output
_____no_output_____
###Markdown
Now it is necessary to correct the positions of the remaining radionuclides that are not ordered correctly. We do this by looping over all the radionuclides in the DataFrame, and checking if their progeny are located below. If not, the positions of the parent and progeny rows in the DataFrame are switched. This process takes a few passes until all the parents and progeny are correctly ordered.
###Code
nuclide_list = list(pyne_full.index)
id_list = list(pyne_full.id)
swapping = 1
while swapping >= 1:
swaps = 0
for parent in nuclide_list:
for c, mode, bf in zip(pyne_full.at[parent, "Progeny"],
pyne_full.at[parent, "Modes"],
pyne_full.at[parent, "Branching_fractions"]):
if data.decay_const(c) == 0.0 or c not in id_list:
continue
j = nuclide_list.index(parent)
k = id_list.index(c)
if j > k:
nuclide_list[j], nuclide_list[k] = nuclide_list[k], nuclide_list[j]
id_list[j], id_list[k] = id_list[k], id_list[j]
pyne_full = pyne_full.reindex(index=nuclide_list)
swaps +=1
print("Iteration", swapping, "number of swaps:", swaps)
swapping += 1
if swaps == 0: swapping = 0
pyne_full.head(n=10)
###Output
Iteration 1 number of swaps: 901
Iteration 2 number of swaps: 632
Iteration 3 number of swaps: 425
Iteration 4 number of swaps: 262
Iteration 5 number of swaps: 135
Iteration 6 number of swaps: 53
Iteration 7 number of swaps: 16
Iteration 8 number of swaps: 1
Iteration 9 number of swaps: 0
###Markdown
Now make the dataset files for radioactivedecay. The process of making datasets for radioactivedecay is as follows. We first make the sparse lower triangular matrix *Λ*, which captures the decay relationships and branching fractions between parents and their immediate (first) progeny. We then make the sparse matrix _C_, which is used in decay calculations, and from this make its inverse *C-1*. First we define some functions used for making *Λ*, _C_ and *C-1*.
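For context (summarising the standard analytical approach these matrices support): with $\Lambda_d$ the diagonal matrix of the $-\lambda_i$, the number of atoms evolves as $$\mathbf{N}(t) = C\, e^{\Lambda_d t}\, C^{-1}\, \mathbf{N}(0),$$ which is why both _C_ and its inverse *C-1* are precomputed and stored.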
###Code
def make_lambda_mat(df):
"""Make the lambda matrix and a list of the decay constants."""
rows = np.array([], dtype=np.int64)
cols = np.array([], dtype=np.int64)
values = np.array([], dtype=np.float64)
lambdas = []
ln2 = np.log(2)
nuclide_list = list(df.index)
id_list = list(df.id)
for parent in nuclide_list:
j = nuclide_list.index(parent)
rows = np.append(rows, [j])
cols = np.append(cols, [j])
lambd = ln2/df.at[parent, "Half-life_s"]
values = np.append(values, -lambd)
lambdas = np.append(lambdas, lambd)
for progeny, bf in zip(df.at[parent, "Progeny"], df.at[parent, "Branching_fractions"]):
if (progeny not in id_list): continue
i = id_list.index(progeny)
rows = np.append(rows, [i])
cols = np.append(cols, [j])
values = np.append(values, [lambd*bf])
return sparse.csc_matrix((values, (rows, cols))), lambdas
def prepare_C_inv_C(df):
"""Prepare data structures needed to make C and inv_C."""
nuclide_list = list(df.index)
num_nuclides = len(nuclide_list)
rows_dict = {}
for i in range(num_nuclides-1, -1, -1):
a,_ = lambda_mat[:,i].nonzero()
b = a
for j in a:
if j > i:
b = np.unique(np.concatenate((b,rows_dict[j])))
rows_dict[i] = b
rows_C = np.array([], dtype=np.int64)
cols_C = np.array([], dtype=np.int64)
for i in range(0, num_nuclides):
rows_C = np.concatenate((rows_C,rows_dict[i]))
cols_C = np.concatenate((cols_C,np.array([i]*len(rows_dict[i]))))
C = sparse.csc_matrix((np.array([0.0]*rows_C.size, dtype=np.float64), (rows_C, cols_C)))
inv_C = sparse.csc_matrix((np.array([0.0]*rows_C.size, dtype=np.float64), (rows_C, cols_C)))
return rows_dict, rows_C, cols_C, C, inv_C
def make_C(rows_dict, rows_C, cols_C, C, lambda_mat, df):
"""Calculate C. Report cases of radionuclides with identical or similar half-lives in the same decay chain."""
nuclide_list = list(df.index)
for index in range(0, rows_C.size):
i = rows_C[index]
j = cols_C[index]
if i == j: C[i,i] = 1.0
else:
sigma = 0.0
for k in rows_dict[j]:
if k == i: break
sigma += lambda_mat[i,k]*C[k,j]
if lambda_mat[j,j]==lambda_mat[i,i]:
print("equal decay constants:", nuclide_list[i], nuclide_list[j])
C[i,j] = sigma/(lambda_mat[j,j]-lambda_mat[i,i])
if abs((lambda_mat[j,j]-lambda_mat[i,i])/lambda_mat[j,j]) < 1E-4:
print("rel_diff of decay constants < 1E-4:", nuclide_list[i], nuclide_list[j])
return C
def make_inv_C(rows_dict, rows_C, cols_C, C, inv_C):
"""Calculate inv_C."""
for index in range(0, rows_C.size):
i = rows_C[index]
j = cols_C[index]
if i == j: inv_C[i,i] = 1.0
else:
sigma = 0.0
for k in rows_dict[j]:
if k == i: break
sigma -= C[i,k]*inv_C[k,j]
inv_C[i,j] = sigma
return inv_C
###Output
_____no_output_____
###Markdown
The process of making _Λ_, _C_ and *C-1* is complicated as PyNE includes some decay chains where two radionuclides have identical half-lives. PyNE has [special routines](https://pyne.io/theorymanual/decay.html) to cope with this, but radioactivedecay currently does not. Fortunately these cases are limited to some fairly obscure radionuclides which are unlikely to be relevant to most users. The following is a first pass at making *Λ* and *C*. It highlights the cases where radionuclides in the same chain have identical half-lives, and also cases where radionuclides in the same chain have similar half-lives (relative difference < 1E-4).
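To make the difficulty concrete (using the same notation as the code below), each off-diagonal element is computed as $$C_{ij} = \frac{\sum_{j \leq k < i} \Lambda_{ik}\, C_{kj}}{\Lambda_{jj} - \Lambda_{ii}},$$ so the denominator vanishes whenever two members of the same chain share a decay constant.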
###Code
lambda_mat, lambdas = make_lambda_mat(pyne_full)
rows_dict, rows_C, cols_C, C, inv_C = prepare_C_inv_C(pyne_full)
C = make_C(rows_dict, rows_C, cols_C, C, lambda_mat, pyne_full)
###Output
equal decay constants: Os-179 Pt-183
rel_diff of decay constants < 1E-4: Os-179 Pt-183
###Markdown
So there are radionuclides with identical half-lives in the chains containing 183Pt, 172Ir and 153Lu. These cases aside, there are no other chains containing radionuclides whose decay constants differ by a relative difference of less than 1E-4. It turns out that there is a [bug](https://github.com/pyne/pyne/issues/1342) in PyNE `v0.7.5` causing it to incorrectly calculate decayed activities for the chains containing 183Pt, 172Ir and 153Lu. The bug affects all radionuclides in the chains upwards from these three radionuclides. Because of this bug, and the fact that radioactivedecay does not support chains containing radionuclides with equal half-lives, we remove the affected radionuclides from the decay dataset. Note that even after doing this, decay calculation results are unaffected for chains starting with radionuclides below 183Pt, 172Ir and 153Lu. This function finds the radionuclides to remove.
###Code
def find_affected_radionuclides(nuclide_list, lambda_mat, nuclide):
"""Find radionuclides higher in decay chain than nuclide."""
s1 = {nuclide_list.index(nuclide)}
index = 0
while index < len(nuclide_list):
s2 = set(lambda_mat.getcol(index).indices)
if len(s1.intersection(s2)) > 0:
s2 = set([s for s in list(s2) if s <= index])
if s2.issubset(s1):
index += 1
continue
s1 = s2.union(s1)
index = 0
continue
index +=1
return [nuclide_list[nuclide] for nuclide in s1]
nuclide_list = list(pyne_full.index)
affected = find_affected_radionuclides(nuclide_list, lambda_mat, "Pt-183")
print("Radionuclides affected for Pt-183:", affected)
remove = affected
affected = find_affected_radionuclides(nuclide_list, lambda_mat, "Ir-172")
print("Radionuclides affected for Ir-172:", affected)
remove.extend(affected)
affected = find_affected_radionuclides(nuclide_list, lambda_mat, "Lu-153")
print("Radionuclides affected for Lu-153:", affected)
remove.extend(affected)
###Output
Radionuclides affected for Pt-183: ['Po-191', 'Pt-183', 'Bi-191', 'Pb-187', 'Tl-187', 'Rn-195', 'At-195', 'Hg-183', 'Au-183']
Radionuclides affected for Ir-172: ['Pb-180', 'Hg-176', 'Tl-177', 'Pt-172', 'Ir-172']
Radionuclides affected for Lu-153: ['Lu-153', 'Ta-157']
###Markdown
In total there are 16 radionuclides to be removed from the decay dataset.
###Code
pyne_truncated = pyne_full.copy()
pyne_truncated = pyne_truncated.drop(labels=remove)
pyne_truncated.to_csv("pyne_truncated.csv", index=True)
###Output
_____no_output_____
###Markdown
Now that this is done, we can make the matrices _C_ and *C-1* used by radioactivedecay.
###Code
lambda_mat, lambdas = make_lambda_mat(pyne_truncated)
rows_dict, rows_C, cols_C, C, inv_C = prepare_C_inv_C(pyne_truncated)
C = make_C(rows_dict, rows_C, cols_C, C, lambda_mat, pyne_truncated)
inv_C = make_inv_C(rows_dict, rows_C, cols_C, C, inv_C)
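# Optional sanity check (illustrative addition, not in the original notebook): C and inv_C should be
# inverses of each other, e.g. np.allclose((C @ inv_C).toarray(), np.eye(C.shape[0])); tolerances may
# need loosening for chains whose members have nearly equal half-lives.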
###Output
_____no_output_____
###Markdown
Calculate SymPy versions of the matrices for arbitrary-precision calculations. We now calculate SymPy versions of _C_ and *C-1* for arbitrary-precision calculations. First define some functions for processing the data into SymPy objects:
###Code
year_sympy = S(36525)/10000
def to_rational(number):
"""
Converts half-life string to SymPy object.
"""
if number == '0.0':
return S(0)
if 'e' in number or 'E' in number:
if 'e' in number:
end = number.split('e')[1]
number = number.split('e')[0]
else:
end = number.split('E')[1]
number = number.split('E')[0]
parts = number.split('.')
if len(parts) == 1: parts.append('')
if end[0] == '+':
multiply = 1
factor = S(10**int(end.lstrip('+')))
else:
multiply = 0
factor = S(10**int(end.lstrip('-')))
denom = S(10**len(parts[1]))
parts[0] = parts[0].lstrip('0')
if len(parts[0]) == 0: parts[1] = parts[1].lstrip('0')
if multiply == 1:
return S(parts[0]+parts[1])*factor/denom
else: return S(parts[0]+parts[1])/(denom*factor)
parts = number.split('.')
if len(parts) == 1: parts.append('')
denom = S(10**len(parts[1]))
parts[0] = parts[0].lstrip('0')
if len(parts[0]) == 0: parts[1] = parts[1].lstrip('0')
return S(parts[0]+parts[1])/denom
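# Illustrative examples (not in the original): to_rational('1.5e-3') returns the exact rational 3/2000,
# and to_rational('365.25') returns 1461/4, so no floating-point rounding enters the SymPy matrices.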
###Output
_____no_output_____
###Markdown
Now make a SymPy version of the *Λ* matrix:
###Code
num_nuclides = len(pyne_truncated)
lambda_mat_sympy = SparseMatrix.zeros(num_nuclides, num_nuclides)
lambdas_sympy = Matrix.zeros(num_nuclides, 1)
nuclide_list = list(pyne_truncated.index)
id_list = list(pyne_truncated.id)
masses_sympy = Matrix.zeros(num_nuclides, 1)
for parent in nuclide_list:
j = nuclide_list.index(parent)
hl_sympy = to_rational(str(pyne_truncated.at[parent, "Half-life_s"]))
lambd = log(2)/hl_sympy
lambda_mat_sympy[j, j] = -lambd
lambdas_sympy[j] = lambd
for progeny, bf in zip(pyne_truncated.at[parent, "Progeny"], pyne_truncated.at[parent, "Branching_fractions"]):
if (progeny not in id_list): continue
i = id_list.index(progeny)
lambda_mat_sympy[i, j] = lambd*to_rational(str(bf))
masses_sympy[j] = to_rational(str(pyne_truncated.at[parent, "Atomic_mass"]))
###Output
_____no_output_____
###Markdown
Now make a SymPy version of the _C_ and *C-1* matrix:
###Code
c_sympy = SparseMatrix.zeros(num_nuclides, num_nuclides)
c_inv_sympy = SparseMatrix.zeros(num_nuclides, num_nuclides)
for index in range(0, rows_C.size):
i = rows_C[index]
j = cols_C[index]
if i == j: c_sympy[i, i] = Integer(1)
else:
sigma = Integer(0)
for k in rows_dict[j]:
if k == i: break
sigma += lambda_mat_sympy[i, k]*c_sympy[k, j]
c_sympy[i, j] = sigma/(lambda_mat_sympy[j, j]-lambda_mat_sympy[i, i])
for index in range(0, rows_C.size):
i = rows_C[index]
j = cols_C[index]
if i == j: c_inv_sympy[i, i] = Integer(1)
else:
sigma = Integer(0)
for k in rows_dict[j]:
if k == i: break
sigma -= c_sympy[i, k]*c_inv_sympy[k, j]
c_inv_sympy[i, j] = sigma
###Output
_____no_output_____
###Markdown
Save the outputs. Write output files containing _C_ and *C-1* in SciPy and SymPy sparse format, and other files needed to create a dataset suitable for radioactivedecay `v0.4.0+`.
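Once the cell below has run, the saved archive can be reopened to verify its contents. This is a minimal sketch added for illustration, assuming the file names written by the cell below:
```python
import numpy as np
from scipy import sparse

archive = np.load("./decay_data.npz", allow_pickle=True)  # object arrays require allow_pickle
print(archive["radionuclides"][:5], float(archive["year_conv"]))
C_check = sparse.load_npz("./c_scipy.npz")  # CSR matrix with one row/column per radionuclide
```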
###Code
hldata = np.array([(np.float64(hl), 's', str(hl) + ' s') for hl in pyne_truncated["Half-life_s"]], dtype=object)
prog_bfs_modes = np.array([{}]*len(pyne_truncated.index))
i = 0
for parent in list(pyne_truncated.index):
progeny = [add_hyphen(nucname.name(id)) for id in pyne_truncated.at[parent, "Progeny"]]
bfs = dict(zip(progeny, pyne_truncated.at[parent, "Branching_fractions"]))
modes = dict(zip(progeny, pyne_truncated.at[parent, "Modes"]))
bfs = {key: value for key, value in sorted(bfs.items(), key=lambda x: x[1], reverse=True)}
prog_bfs_modes[i] = {progeny: [bf, modes[progeny]] for progeny, bf in bfs.items()}
i += 1
np.savez_compressed("./decay_data.npz", radionuclides=np.array(nuclide_list),
masses=np.array(list(pyne_truncated["Atomic_mass"])),
hldata=hldata, prog_bfs_modes=prog_bfs_modes,
year_conv=365.25)
# Write out SciPy sparse matrices (convert to CSR format)
sparse.save_npz("./c_scipy.npz", C.tocsr())
sparse.save_npz("./c_inv_scipy.npz", inv_C.tocsr())
# Write out SymPy objects to pickle files
with open("c_sympy.pickle", "wb") as outfile:
outfile.write(pickle.dumps(c_sympy))
with open("c_inv_sympy.pickle", "wb") as outfile:
outfile.write(pickle.dumps(c_inv_sympy))
with open("atomic_masses_sympy.pickle", "wb") as outfile:
outfile.write(pickle.dumps(masses_sympy))
with open("decay_consts_sympy.pickle", "wb") as outfile:
outfile.write(pickle.dumps(lambdas_sympy))
with open("year_conversion_sympy.pickle", "wb") as outfile:
outfile.write(pickle.dumps(year_sympy))
###Output
_____no_output_____ |
hsi/notebooks/hsi_using_r2py_k8s_cluster.ipynb | ###Markdown
Instance type: `m5.8xlarge`. Container using: 24 cores and 120 Gi.
###Code
import os
import subprocess
import glob
from IPython import get_ipython
ipython = get_ipython()
specie = "pan_onca"
dir_specie = "Ponca_DV_loc"
file_specie = "poncadav2"
dir_mask_specie = "Ponca_DV"
file_mask_specie = "poncamask.tif"
dir_years = "forest_jEquihua_mar"
date_of_processing = "02_06_2021"
bucket_with_data = "hsi-kale"
input_dir_data = "/shared_volume/input_data"
if not os.path.exists(input_dir_data):
os.makedirs(input_dir_data)
cmd_subprocess = ["aws", "s3", "cp",
"s3://" + bucket_with_data,
input_dir_data,
"--recursive"]
subprocess.run(cmd_subprocess)
###Output
_____no_output_____
###Markdown
```R
# Localities of the species in a shapefile, including the years
poncaloc <- rgdal::readOGR("/shared_volume/Ponca_DV_loc/", "poncadav2")
```
###Code
#
ipython.magic("load_ext rpy2.ipython")
#
string_libraries = """R library(rgdal); library(raster)"""
ipython.magic(string_libraries)
##assignment statements to build string
variable_specie_loc = "specie_loc"
variable_mask_specie = "specie_mask"
string1 = "R " + variable_specie_loc + " <- rgdal::readOGR("
string2 = os.path.join(input_dir_data, dir_specie)
string3 = variable_mask_specie + " <- raster::raster("
string4 = os.path.join(input_dir_data, dir_mask_specie, file_mask_specie)
string_data_input = "".join([string1, "\"", string2, "\",",
"\"", file_specie, "\"",");",
string3, "\"", string4, "\"", ")"])
##(end) assignment statements to build string
ipython.magic(string_data_input)
specie_loc = ipython.magic("Rget " + variable_specie_loc)
specie_mask = ipython.magic("Rget " + variable_mask_specie)
###Output
_____no_output_____
###Markdown
```R
poncaloc_transf <- sp::spTransform(poncaloc, CRSobj = "+proj=lcc +lat_1=17.5 +lat_2=29.5 +lat_0=12 +lon_0=-102 +x_0=2500000 +y_0=0 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0")
```
###Code
#
ipython.magic("load_ext rpy2.ipython")
print(specie_loc)
ipython.magic("Rpush " + variable_specie_loc)
#
string_libraries = """R library(rgdal)"""
ipython.magic(string_libraries)
##assignment statements to build string
variable_specie_loc_transf = "specie_loc_transf"
string1 = "R " + variable_specie_loc_transf + " <- sp::spTransform("
string2 = "CRSobj = \"+proj=lcc +lat_1=17.5 +lat_2=29.5 +lat_0=12 +lon_0=-102 +x_0=2500000 +y_0=0 +datum=WGS84 +units=m +no_defs +ellps=WGS84 +towgs84=0,0,0\")"
string_transform = "".join([string1, variable_specie_loc, ",",
string2])
##(end) assignment statements to build string
ipython.magic(string_transform)
specie_loc_transf = ipython.magic("Rget " + variable_specie_loc_transf)
###Output
_____no_output_____
###Markdown
```R
test_sp <- sp_temporal_data(occs = poncaloc_transf, longitude = "coords.x1", latitude = "coords.x2",
                            sp_year_var = "Year", layers_by_year_dir = "/shared_volume/forest_jEquihua_mar/",
                            layers_ext = "*.tif$", reclass_year_data = T)
```
###Code
#
ipython.magic("load_ext rpy2.ipython")
print(specie_loc_transf)
ipython.magic("Rpush " + variable_specie_loc_transf)
#
string_libraries = """R library(hsi)"""
ipython.magic(string_libraries)
##assignment statements to build string
variable_test_sp = "test_sp"
string1 = "R " + variable_test_sp + " <- sp_temporal_data(occs="
string2 = "longitude = \"coords.x1\",latitude = \"coords.x2\",sp_year_var=\"Year\",layers_by_year_dir ="
string3 = os.path.join(input_dir_data, dir_years)
string4 = "layers_ext = \"*.tif$\",reclass_year_data = T)"
string_test = "".join([string1, variable_specie_loc_transf, ",",
string2, "\"", string3 , "\",",
string4])
##(end) assignment statements to build string
ipython.magic(string_test)
test_sp = ipython.magic("Rget " + variable_test_sp)
###Output
_____no_output_____
###Markdown
```R
# Filter the localities to be used by applying the mask
test_sp_mask <- occs_filter_by_mask(test_sp, ponca_mask)
# Remove localities duplicated within the same year
test_sp_clean <- clean_dup_by_year(this_species = test_sp_mask, threshold = res(ponca_mask)[1])
e_test <- extract_by_year(this_species = test_sp_clean, layers_pattern = "_mar")
```
###Code
#
ipython.magic("load_ext rpy2.ipython")
string_libraries = """R library(hsi);library(raster)"""
ipython.magic(string_libraries)
print(test_sp)
print(specie_mask)
ipython.magic("Rpush " + variable_test_sp)
ipython.magic("Rpush " + variable_mask_specie)
#
##assignment statements to build string
variable_test_sp_mask = "test_sp_mask"
string1 = "R " + variable_test_sp_mask + " <- occs_filter_by_mask("
string_filter = "".join([string1, variable_test_sp, ",",
variable_mask_specie,
")"])
##(end)assignment statements to build string
ipython.magic(string_filter)
##assignment statements to build string
variable_test_sp_clean = "test_sp_clean"
string1 = "R " + variable_test_sp_clean + " <- clean_dup_by_year(this_species = "
string2 = ", threshold = res("
string3 = ")[1])"
string_clean_test = "".join([string1, variable_test_sp_mask,
string2, variable_mask_specie,
string3])
##(end)assignment statements to build string
ipython.magic(string_clean_test)
##assignment statements to build string
variable_e_test = "e_test"
string1 = "R " + variable_e_test + " <- extract_by_year(this_species="
string2 = ",layers_pattern=\"_mar\")"
string_extract = "".join([string1, variable_test_sp_clean, string2])
##(end)assignment statements to build string
ipython.magic(string_extract)
e_test = ipython.magic("Rget " + variable_e_test)
###Output
_____no_output_____
###Markdown
```R
best_model_2004 <- find_best_model(this_species = e_test, cor_threshold = 0.8, ellipsoid_level = 0.975,
                                   nvars_to_fit = 3, E = 0.05, RandomPercent = 70, NoOfIteration = 1000,
                                   parallel = TRUE, n_cores = 24, plot3d = FALSE)
```
###Code
#
ipython.magic("load_ext rpy2.ipython")
print(e_test)
ipython.magic("Rpush " + variable_e_test)
#
string_libraries = """R library(hsi)"""
ipython.magic(string_libraries)
##assignment statements to build string
variable_best_model_2004 = "best_model_2004"
string1 = "R " + variable_best_model_2004 + " <- find_best_model(this_species ="
string2 = ", cor_threshold = 0.8, ellipsoid_level = 0.975,nvars_to_fit = 3,E = 0.05,RandomPercent = 70,NoOfIteration = 1000,parallel = TRUE,n_cores = 24,plot3d = FALSE)"
string_best_model = "".join([string1, variable_e_test, string2])
##(end)assignment statements to build string
ipython.magic(string_best_model)
best_model_2004 = ipython.magic("Rget " + variable_best_model_2004)
###Output
_____no_output_____
###Markdown
```R
temporal_projection(this_species = best_model_2004, save_dir = "/shared_volume/new_model_parallel/27_05_2021/",
                    sp_mask = ponca_mask, crs_model = NULL, sp_name = "pan_onca", plot3d = FALSE)
```
###Code
#
ipython.magic("load_ext rpy2.ipython")
string_libraries = """R library(hsi);library(raster)"""
ipython.magic(string_libraries)
print(best_model_2004)
print(specie_mask)
ipython.magic("Rpush " + variable_best_model_2004)
ipython.magic("Rpush " + variable_mask_specie)
#
dir_results = "/shared_volume/new_model_parallel"
save_dir = os.path.join(dir_results, date_of_processing)
##assignment statements to build string
string1 = "R temporal_projection(this_species = "
string2 = ",save_dir = "
string3 = "sp_mask = "
string4 = ",crs_model = NULL,sp_name ="
string5 = ",plot3d = FALSE)"
string_temporal_proj = "".join([string1, variable_best_model_2004,
string2, "\"", save_dir, "\",",
string3, variable_mask_specie,
string4, "\"", specie, "\"", string5])
##(end)assignment statements to build string
if not os.path.exists(save_dir):
os.makedirs(save_dir)
ipython.magic(string_temporal_proj)
#temporal_projection = ipython.magic("Rget temporal_projection")
dir_to_upload = glob.glob(save_dir + '*')[0]
bucket_results = "s3://hsi-kale-results"
bucket_path_uploading = os.path.join(bucket_results, date_of_processing)
cmd_subprocess = ["aws", "s3", "cp",
dir_to_upload,
bucket_path_uploading,
"--recursive"]
subprocess.run(cmd_subprocess)
###Output
_____no_output_____ |
content/posts/Bartlett's Test for Equality of Variances.ipynb | ###Markdown
Bartlett's test, developed by [Maurice Stevenson Bartlett](https://en.wikipedia.org/wiki/M._S._Bartlett), is a statistical procedure for testing if $k$ population samples have equal variances. Equality of variances in population samples is assumed in commonly used comparison of means tests, such as [Student's t-test](https://en.wikipedia.org/wiki/Student%27s_t-test) and [analysis of variance](https://en.wikipedia.org/wiki/Analysis_of_variance). Therefore, a procedure such as Bartlett's test can be conducted to accept or reject the assumption of equal variances across group samples. Levene's test is an alternative to Bartlett's test and is less sensitive to non-normal samples. Thus, Levene's test is generally preferred in most cases, particularly when the underlying distribution of the samples is not known to be normal, while Bartlett's test performs better for data that are approximately normally distributed. Equality of variances is also known as [homoscedasticity](https://en.wikipedia.org/wiki/Homoscedasticity) or homogeneity of variances. Bartlett's test statistic, denoted $\chi^2$ (also sometimes denoted as $T$), is approximately chi-square distributed with $k - 1$ degrees of freedom, where $k$ is the number of sample groups. The chi-square approximation may not hold well when a group's sample size is small ($n_i \leq 5$). The test statistic is defined as:$$ \chi^2 = \frac{(n - k) \ln(S^2_p) - \sum^k_{i=1} (n_i - 1) \ln(S^2_i)}{1 + \frac{1}{3(k - 1)} \left(\sum^k_{i=1} (\frac{1}{n_i - 1}) - \frac{1}{n - k} \right)} $$where, * $n$ is the total number of samples across all groups* $k$ is the number of groups* $S^2_i$ are the sample variances.$S^2_p$, the pooled estimate of the samples' variance, is defined as:$$ S^2_p = \frac{1}{n - k} \sum_i (n_i - 1) S^2_i $$ Bartlett's Test in Python
###Code
import numpy as np
import pandas as pd
from scipy.stats import chi2
import numpy_indexed as npi
###Output
_____no_output_____
###Markdown
The [`PlantGrowth`](https://vincentarelbundock.github.io/Rdatasets/doc/datasets/PlantGrowth.html) dataset is available in [R](https://www.r-project.org/) as part of its standard datasets and can also be downloaded [here](https://vincentarelbundock.github.io/Rdatasets/csv/datasets/PlantGrowth.csv). After downloading the data, we load it into memory with pandas' [`read_csv`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) function. Once the data is loaded, we transform the resulting `DataFrame` into a [`numpy`](https://numpy.org/) array with the [`.to_numpy`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_numpy.html) method. The first three rows of the dataset are then printed to get a sense of what the data contains. We also confirm there are indeed three sample groups in the data using numpy's [`unique`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html) function.
###Code
plants = pd.read_csv('../../data/PlantGrowth.csv')
plants = plants.to_numpy()
print(plants[:3])
print(list(np.unique(plants[:,2])))
###Output
[[1 4.17 'ctrl']
[2 5.58 'ctrl']
[3 5.18 'ctrl']]
['ctrl', 'trt1', 'trt2']
###Markdown
Implementing the test procedure in Python is comparatively straightforward. First, we require the total number of samples, $n$, and the number of sample groups, $k$. The number of samples can be found by indexing the first value returned from the `plants` array [`.shape`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) attribute, while the number of unique groups is obtained by taking the length of the array returned by the `unique` function. We also require the total number of samples within each group and the group's variance, which we compute by grouping the `plants` array and applying the `len` and `numpy` [`var`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.var.html) functions to each group. The [`numpy-indexed`](https://pypi.org/project/numpy-indexed/) library and its `group_by` function are convenient for performing this grouping operation on `numpy` arrays.
###Code
n = plants.shape[0]
k = len(np.unique(plants[:,2]))
group_n = np.array([i for _, i in npi.group_by(plants[:, 2], plants[:, 1], len)])
group_variance = np.array([i for _, i in npi.group_by(plants[:, 2], plants[:, 1], np.var)])
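# Note: np.var defaults to ddof=0 (population variance); the textbook sample variance S_i^2 uses ddof=1.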
###Output
_____no_output_____
###Markdown
The sample pooled variance, $S^2_p$, is then computed.
###Code
pool_var = 1 / (n - k) * np.sum((group_n - 1) * group_variance)
###Output
_____no_output_____
###Markdown
As the Bartlett test statistic equation is rather hefty, we split the computation into two variables, the numerator and denominator of the test statistic, to help keep the code easier to read. The numerator and denominator are then divided, which returns the computed test statistic.
###Code
x2_num = (n - k) * np.log(pool_var) - np.sum((group_n - 1) * np.log(group_variance))
x2_den = 1 + 1 / (3 * (k - 1)) * (np.sum(1 / (group_n - 1)) - 1 / (n - k))
x2 = x2_num / x2_den
x2
###Output
_____no_output_____
###Markdown
The Bartlett test statistic, $\chi^2$, is approximately $2.279$. To find the associated p-value of the test statistic, we use the `.cdf` method of `scipy.stats`'s [`chi2`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi2.html) distribution with $k - 1$ degrees of freedom.
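As a quick cross-check (an illustrative addition, not part of the original analysis), SciPy ships a reference implementation, `scipy.stats.bartlett`, which uses ddof=1 sample variances and therefore returns a slightly larger statistic (roughly 2.88 with p ≈ 0.24 for these data):
```python
from scipy.stats import bartlett
groups = [plants[plants[:, 2] == g][:, 1].astype(float) for g in np.unique(plants[:, 2])]
stat, p_value = bartlett(*groups)
```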
###Code
p = 1 - chi2.cdf(x2, k - 1)
p
###Output
_____no_output_____ |
Quiz_3_Prob.ipynb | ###Markdown
Import the necessary libraries Download packages
###Code
install.packages("ggplot2")
install.packages("psych")
install.packages("dplyr")
install.packages("readxl")
###Output
Installing package into ‘/usr/local/lib/R/site-library’
(as ‘lib’ is unspecified)
Installing package into ‘/usr/local/lib/R/site-library’
(as ‘lib’ is unspecified)
Warning message:
“dependency ‘mnormt’ is not available”
Warning message in install.packages("psych"):
“installation of package ‘psych’ had non-zero exit status”
Installing package into ‘/usr/local/lib/R/site-library’
(as ‘lib’ is unspecified)
Installing package into ‘/usr/local/lib/R/site-library’
(as ‘lib’ is unspecified)
###Markdown
Load packages
###Code
library("ggplot2")
library("psych")
library("dplyr")
library("readxl")
X <- read.csv('Base_Banco.csv')
head(X)
###Output
_____no_output_____ |
eda/explore-vaccines-tweets-eda-labelled - Copy.ipynb | ###Markdown
Explore Vaccines Tweets - Labelled data Introduction. The dataset we are using here was collected from the Twitter API using the **tweepy** Python package. The following vaccines are included: * Pfizer/BioNTech; * Sinopharm; * Sinovac; * Moderna; * Oxford/AstraZeneca; * Covaxin; * Sputnik V. Data preparation Load packages
###Code
! pip install spellchecker wordcloud textblob nltk plotly pyspellchecker neattext missingno lightgbm tensorflow
import numpy as np
import pandas as pd
import matplotlib as mp
import seaborn as sns
import matplotlib.pyplot as plt
from textblob import TextBlob
%matplotlib inline
from wordcloud import WordCloud, STOPWORDS
##
import plotly.express as px
import plotly.graph_objs as go
from plotly.offline import init_notebook_mode, iplot
##
import warnings
warnings.simplefilter("ignore")
###Output
_____no_output_____
###Markdown
Load data
###Code
tweets_df = pd.read_csv("covid-19_vaccine_tweets_with_sentiment.csv", encoding='latin1')
###Output
_____no_output_____
###Markdown
Data exploration Glimpse the data
###Code
print(f"data shape: {tweets_df.shape}")
tweets_df.info()
tweets_df.describe()
tweets_df.head()
###Output
_____no_output_____
###Markdown
Missing data
###Code
def missing_data(data):
total = data.isnull().sum()
percent = (data.isnull().sum()/data.isnull().count()*100)
tt = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
types = []
for col in data.columns:
dtype = str(data[col].dtype)
types.append(dtype)
tt['Types'] = types
return(np.transpose(tt))
missing_data(tweets_df)
missed = pd.DataFrame()
missed['column'] = tweets_df.columns
missed['percent'] = [round(100* tweets_df[col].isnull().sum() / len(tweets_df), 2) for col in tweets_df.columns]
missed = missed.sort_values('percent',ascending=False)
missed = missed[missed['percent']>0]
print(missed)
#fig = sns.barplot(
# x=missed['percent'],
# y=missed["column"],
# orientation='horizontal'
#).set_title('Missed values percent for every column')
###Output
Empty DataFrame
Columns: [column, percent]
Index: []
###Markdown
Unique values
###Code
def unique_values(data):
total = data.count()
tt = pd.DataFrame(total)
tt.columns = ['Total']
uniques = []
for col in data.columns:
unique = data[col].nunique()
uniques.append(unique)
tt['Uniques'] = uniques
return(np.transpose(tt))
unique_values(tweets_df)
###Output
_____no_output_____
###Markdown
Most frequent values
###Code
def most_frequent_values(data):
total = data.count()
tt = pd.DataFrame(total)
tt.columns = ['Total']
items = []
vals = []
for col in data.columns:
itm = data[col].value_counts().index[0]
val = data[col].value_counts().values[0]
items.append(itm)
vals.append(val)
tt['Most frequent item'] = items
tt['Frequence'] = vals
tt['Percent from total'] = np.round(vals / total * 100, 3)
return(np.transpose(tt))
most_frequent_values(tweets_df)
###Output
_____no_output_____
###Markdown
Visualize the data distribution Tweet source
###Code
#plot heatmap to see the correlation between features
plt.subplots(figsize=(9, 9))
sns.heatmap(tweets_df.corr(), annot=True, square=True)
plt.show()
stopwords = set(STOPWORDS)
def show_wordcloud(data, title = None):
wordcloud = WordCloud(
background_color='white',
stopwords=stopwords,
max_words=50,
max_font_size=40,
scale=5,
random_state=1
).generate(str(data))
fig = plt.figure(1, figsize=(10,10))
plt.axis('off')
if title:
fig.suptitle(title, fontsize=20)
fig.subplots_adjust(top=2.3)
plt.imshow(wordcloud)
plt.show()
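# Note: show_wordcloud is redefined below; that later definition overrides this one and is the version
# used by the subsequent calls.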
from wordcloud import WordCloud, STOPWORDS
def show_wordcloud(data, title=""):
text = " ".join(t for t in data.dropna())
stopwords = set(STOPWORDS)
stopwords.update(["t", "co", "https", "amp", "U"])
wordcloud = WordCloud(stopwords=stopwords, scale=4, max_font_size=50, max_words=500,background_color="black").generate(text)
fig = plt.figure(1, figsize=(16,16))
plt.axis('off')
fig.suptitle(title, fontsize=20)
fig.subplots_adjust(top=2.3)
plt.imshow(wordcloud, interpolation='bilinear')
plt.show()
###Output
_____no_output_____
###Markdown
Text wordclouds
###Code
show_wordcloud(tweets_df['tweet_text'], title = 'Prevalent words in tweets')
#@labels=tweets_df.groupby("label").agg({'tweet_text':'count'}).rename(columns={'tweet_text':'tweet_count'}).sort_values(by="tweet_count", ascending=False)
labels = tweets_df.groupby('label').count()['tweet_text'].reset_index().sort_values(by='label',ascending=True)
labels.style.background_gradient(cmap='gist_earth_r')
plt.figure(figsize=(5,5))
sns.countplot(x='label',data=tweets_df)
fig = go.Figure(go.Funnelarea(
text =labels.label,
values = labels.tweet_text,
title = {"position": "top center", "text": "Funnel-Chart of Sentiment Distribution"}
))
fig.show()
tweets_df
tweets_df.drop('tweet_id',inplace=True,axis=1)
tweets_df
###Output
_____no_output_____
###Markdown
Data processing
###Code
import neattext as ntx
tweets_df['clean_data']=tweets_df['tweet_text']
# Cleaning the data using neattext library
tweets_df['clean_data']=tweets_df['clean_data'].apply(ntx.remove_hashtags)
tweets_df['clean_data']=tweets_df['clean_data'].apply(ntx.remove_urls)
tweets_df['clean_data']=tweets_df['clean_data'].apply(ntx.remove_userhandles)
tweets_df['clean_data']=tweets_df['clean_data'].apply(ntx.remove_multiple_spaces)
#tweets_df['clean_data']=tweets_df['clean_data'].apply(ntx.remove_special_characters)#
tweets_df['clean_data']=tweets_df['clean_data'].str.replace("[^a-zA-Z#]", " ")
tweets_df['clean_data']=tweets_df['clean_data'].apply(ntx.remove_numbers)
tweets_df['clean_data']=tweets_df['clean_data'].apply(ntx.remove_puncts)
tweets_df['clean_data']=tweets_df['clean_data'].apply(ntx.remove_emojis)
tweets_df['clean_data']=tweets_df['clean_data'].str.lower()
tweets_df[['clean_data','tweet_text']].head()
import nltk
from nltk.corpus import stopwords
nltk.download('stopwords')
nltk.download('wordnet')
remove_words=lambda x : ' '.join([word for word in x.split() if word not in stopwords.words('english')])
tweets_df['clean_data']=tweets_df['clean_data'].apply(remove_words)
pd.set_option('display.max_colwidth', 1000)
tweets_df[['clean_data','tweet_text']]
from nltk.tokenize import TweetTokenizer
from nltk.stem import PorterStemmer
def tokenize(tweet_text):
tokenizer = TweetTokenizer()
tweet_tokens = tokenizer.tokenize(tweet_text)
tweets_clean = []
stemmer = PorterStemmer()
for word in tweet_tokens:
stem_word = stemmer.stem(word) # stemming word
tweets_clean.append(stem_word)
return ' '.join(tweets_clean)
tweets_df['clean_data']=tweets_df['clean_data'].apply(tokenize)
pd.set_option('display.max_colwidth', 500)
tweets_df[['clean_data','tweet_text']]
'''
import neattext.functions as ntf
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
import re
from nltk.tokenize import TweetTokenizer
import string
def pre_process_tweet(tweet_data):
"""
Process tweet function.
Input:
tweet: a string containing a tweet
Returns:
tweets_clean: a list of words containing the processed tweet
"""
stemmer = PorterStemmer()
stopwords_english = stopwords.words('english')
# remove old style retweet text "RT"
tweet_data = re.sub(r'^RT[\s]+', '', str(tweet_data))
# remove hashtags
ntf.remove_hashtags(tweet_data)
# remove Urls
ntf.remove_urls(tweet_data)
# remove_userhandles
ntf.remove_multiple_spaces(tweet_data)
ntf.remove_numbers(tweet_data)
ntf.remove_puncts(tweet_data)
ntf.remove_special_characters(tweet_data)
ntf.remove_emojis(tweet_data)
ntf.remove_userhandles(tweet_data)
return tweet_data
# tokenize tweets
tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True,
reduce_len=True)
tweet_tokens = tokenizer.tokenize(tweet_data)
tweets_clean = []
for word in tweet_tokens:
if (word not in stopwords_english and word not in string.punctuation): # remove punctuation
# tweets_clean.append(word)
stem_word = stemmer.stem(word) # stemming word
tweets_clean.append(stem_word)
return " ".join(tweets_clean) '''
"""
def removeStopwords(text):
stemmer = PorterStemmer()
stopwords_english = stopwords.words('english')
tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True,
reduce_len=True)
tweet_tokens = tokenizer.tokenize(text)
tweets_clean = []
for word in tweet_tokens:
if (word not in stopwords_english and word not in string.punctuation): # remove punctuation
tweets_clean.append(word)
#stem_word = stemmer.stem(word) # stemming word
#tweets_clean.append(stem_word)
return " ".join(tweets_clean)
def removePunctuations(text):
return text.translate(str.maketrans('', '', string.punctuation)).replace(" s ", " ")
def removeLinks(text):
clean_text = re.sub('https?://\S+|www\.\S+', '', text)
return clean_text
def removeNumbers(text):
clean_text = re.sub(r'\d+', '', text)
return clean_text
def removenewline(text):
clean_text = re.sub(r'(\n+)', '', text)
return clean_text
import emoji
#does the text contain an emoji?
def text_has_emoji(text):
for character in text:
if character in emoji.UNICODE_EMOJI:
return True
return False
#remove the emoji
def deEmojify(inputString):
return inputString.encode('ascii', 'ignore').decode('ascii')
"""
"""def clean_text(text):
if text_has_emoji(text):
text=deEmojify(text)
text = text.lower()
text =removenewline(text)
text = removeStopwords(text)
text = removePunctuations(text)
text = removeNumbers(text)
text = removeLinks(text)
return text
"""
#tweets_df['clean_data'] = tweets_df['clean_data'].apply(clean_text)
#tweetsDF['label'] = tweetsDF['label'].map({1:0, 2:1, 3:2})# renumbering labels to avoid error in the one hot encoding process
#tweetsDF.drop("tweet_text",axis=1).to_csv('mycsvfile.csv',index=False)
###Output
_____no_output_____
###Markdown
Dropping columns not needed
###Code
tweets_df.drop('tweet_text',inplace=True,axis=1)
###Output
_____no_output_____
###Markdown
tweets_df.head() For SSL, splitting the data 70-30, where the 30% portion will be used for the final prediction task
###Code
# seperate off train and test
train = tweets_df.iloc[:4200, :]
test = tweets_df.iloc[4200:, :]
###Output
_____no_output_____
###Markdown
Classification Tasks
###Code
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
#import gensim
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB,BernoulliNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.metrics import classification_report, f1_score, confusion_matrix,recall_score,precision_score,make_scorer
from sklearn.model_selection import StratifiedKFold, train_test_split, learning_curve,cross_val_score
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer, HashingVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from statistics import mean, stdev
import lightgbm as lgb
# target variable
y = train["label"].values
no_of_splits=5
# initializing Kfold
skf = StratifiedKFold(n_splits=no_of_splits, shuffle=True, random_state=24)
# count vectorizer transformation
count_vect = CountVectorizer()
count_vect.fit(tweets_df["clean_data"].values.tolist())
train_count_vect = count_vect.transform(train["clean_data"])
# tfidf vectorizer transformation
tfidf_vect = TfidfVectorizer()
tfidf_vect.fit(tweets_df["clean_data"].values.tolist())
train_tfidf_vect = tfidf_vect.transform(train["clean_data"])
# light gbm parameters
lgbm_params = {
"learning_rate": 0.02,
"random_state": 24,
"metric": "auc_mu",
"n_estimators": 2000,
"objective": "multiclass"
}
# models
models= {
"svm": SVC(),
"logistic_regression": LogisticRegression(),
"naive_bayes": MultinomialNB(),
"SGD": SGDClassifier(),
"random_forest": RandomForestClassifier(),
#"BernoulliNB": BernoulliNB(),
"DecisionTreeClassifier": DecisionTreeClassifier(),
"KNeighborsClassifier": KNeighborsClassifier(),
"LGBM":lgb.LGBMClassifier(**lgbm_params)
}
# current vectors
vectors = {
"count_vectorizer": train_count_vect,
"tfidf_vectorizer": train_tfidf_vect
}
from sklearn import metrics
#scoring methods that calculate the precision, recall and f-mesure values for every fold of k-fold cross validation
def precEval(y_true, y_pred):
fold_report = metrics.classification_report(y_true, y_pred, output_dict=True)
prec = fold_report['macro avg']['precision']
return prec
def recEval(y_true, y_pred):
fold_report = metrics.classification_report(y_true, y_pred, output_dict=True)
rec = fold_report['macro avg']['recall']
return rec
def f1Eval(y_true, y_pred):
fold_report = metrics.classification_report(y_true, y_pred, output_dict=True)
f1 = fold_report['macro avg']['f1-score']
return f1
def stratified_kfold(clf:str, vect_type:str, y, kfold):
"""
Perform Kfold Cross-Validation
:param model: the model used to make predictions
:param X: the train features being used
:param y: the target feature,
:param kfold: the cross validation strategy
:return: dictionary with model name key and results as the values
"""
results = {}
# store the name of the model in dictionary
results["modelname_vectorType"] = clf + "_" + vect_type
# call the model and training data
model = models[clf]
X = vectors[vect_type]
#prec = cross_val_score(model, X, y, cv=kfold, scoring=make_scorer(precEval))
#rec = cross_val_score(model, X, y, cv=kfold, scoring=make_scorer(recEval))
#f1 = cross_val_score(model, X, y, cv=kfold, scoring=make_scorer(f1Eval))
#results["Precision"] = "%.3f%%" % (prec.mean() * 100)
#results["Recall"] = "%.3f%%" % (rec.mean() * 100)
f1score_list= []
lst_accu_stratified = []
# perfrom kfold cv
for fold, (train_idx, valid_idx) in enumerate(kfold.split(X, y)):
#print(f"\nCurrently Training: {results['modelname_vectorType']}... Fold: {fold+1}")
X_train, X_valid = X[train_idx], X[valid_idx]
y_train, y_valid = y[train_idx], y[valid_idx]
# train on seen data, predict on unseen
model.fit(X_train, y_train)
y_preds = model.predict(X_valid)
#results["Accuracy"].append(model.score(X_valid, y_valid))
lst_accu_stratified.append(model.score(X_valid, y_valid))
f1score_list.append(f1_score(y_valid, y_preds,average='weighted'))
# lst_accu_stratified.append(lr.score(x_test_fold, y_test_fold))
#print(fold, f1_score(y_valid, y_preds, average='micro'))
#print(fold, recall_score(y_valid, y_preds, average='micro'))
#print(fold, precision_score(y_valid, y_preds, average='micro'))
#if fold == 0 :
#print"fold_{} f1-score is ".format(fold+1), f1_score(y_valid, y_preds, average='micro')
#results["fold_{}".format(fold+2)] = recall_score(y_valid, y_preds, average='micro')
#print(fold)
#if fold == no_of_splits-1:
# results["F1-Score"] ="%.3f%%" % (f1_score(y_valid, y_preds, average='micro')*100)
# results["Recall"] = "%.3f%%" % (recall_score(y_valid, y_preds,average=='micro')* 100)
# results["Precision"] = "%.3f%%" % (precision_score(y_valid, y_preds,average=='micro') *100)
results["Accuracy"] = "%.3f%%" % (mean(lst_accu_stratified) * 100)
results["F1-Score"] = "%.3f%%" % (mean(f1score_list)*100)
return results
def stratified_kfold_lbgm(clf:str, vect_type:str, y, kfold):
"""
Perform Kfold Cross-Validation
:param model: the model used to make predictions
:param X: the train features being used
:param y: the target feature,
:param kfold: the cross validation strategy
:return: dictionary with model name key and results as the values
"""
results = {}
# store the name of the model in dictionary
results["modelname_vectorType"] = clf + "_" + vect_type
# call the model and training data
model = models[clf]
X = vectors[vect_type]
f1score_list= []
lst_accu_stratified = []
# perfrom kfold cv
for fold, (train_idx, valid_idx) in enumerate(kfold.split(X, y)):
print(f"\nCurrently Training: {results['modelname_vectorType']}... Fold: {fold+1}")
X_train, X_valid= X[train_idx].astype(np.float64), X[valid_idx].astype(np.float64)
y_train, y_valid= y[train_idx].astype(np.float64), y[valid_idx].astype(np.float64)
# train on seen data, predict on unseen
model.fit(X_train,
y_train,
eval_set=[(X_valid, y_valid)],
verbose=100,
early_stopping_rounds=100)
y_preds = model.predict(X_valid)
lst_accu_stratified.append(model.score(X_valid, y_valid))
f1score_list.append(f1_score(y_valid, y_preds,average='weighted'))
results["Accuracy"] = "%.3f%%" % (mean(lst_accu_stratified) * 100)
results["F1-Score"] = "%.3f%%" % (mean(f1score_list)*100)
return results
# store all models
all_models = []
for clf in models:
for vect in vectors:
if clf == "LGBM":
all_models.append(stratified_kfold_lbgm(clf, vect, y, skf))
else:
all_models.append(stratified_kfold(clf, vect, y, skf))
print(f"Current Model: {clf}_{vect}...\n")
models_df = pd.DataFrame(all_models)
models_df
import lightgbm as lgb
#train then validate lightgbm with count vectors
model_dict = {}
model_dict["modelname_vectorType"] = "lgbm_count_vectorizer"
lst_accu_stratified1 = []
f1score_list= []
lgbm = lgb.LGBMClassifier(**lgbm_params)
for fold, (train_idx, valid_idx) in enumerate(skf.split(train_count_vect, y)):
print(f"\nCurrently Training: {model_dict['modelname_vectorType']}... Fold: {fold+1}")
X_train, X_valid= train_count_vect[train_idx].astype(np.float64), train_count_vect[valid_idx].astype(np.float64)
y_train, y_valid= y[train_idx].astype(np.float64), y[valid_idx].astype(np.float64)
# training
lgbm.fit(X_train,
y_train,
eval_set=[(X_valid, y_valid)],
verbose=100,
early_stopping_rounds=100)
lst_accu_stratified1.append(lgbm.score(X_valid, y_valid))
# predictions
y_preds = lgbm.predict(X_valid)
#model_dict["fold_{}".format(fold+1)] = f1_score(y_valid, y_preds)
f1score_list.append(f1_score(y_valid, y_preds,average='weighted'))
model_dict["Accuracy"] = "%.3f%%" % (mean(lst_accu_stratified1) * 100)
model_dict["F1-Score"] = "%.3f%%" % (mean(f1score_list) * 100)
print(lst_accu_stratified1)
print(f1score_list)
# adding results to models df
new_model = pd.DataFrame(model_dict, columns=models_df.columns, index=[0])
models_df = pd.concat([models_df, new_model], ignore_index=True)
models_df
###Output
_____no_output_____
###Markdown
Word2Vec Embeddings
###Code
! pip install gensim
import gensim.downloader as api
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation, Bidirectional
from tensorflow.keras import initializers, regularizers, constraints, optimizers, layers, Sequential
def get_word2vec_enc(corpus:list, vocab_size:int, embedding_size:int, gensim_pretrained_emb:str) -> list:
"""
Get the embeddings value for each word withing
:param corpus: The text we want to get embeddings for
:param vocab_list: The size of the vocabulary
:param embedding_size: The dimensions of the embedding
:param gensim_pretrained_emb: The pretrained embedding from gensim
:return: words encoded as vectors
"""
word_vecs = api.load(gensim_pretrained_emb)
embedding_weights = np.zeros((vocab_size, embedding_size))
for word, i in corpus:
if word in word_vecs:
embedding_weights[i] = word_vecs[word]
return embedding_weights
n_epochs = 8
embedding_size = 200
max_length = 202
pretrained_embedding_file = "glove-twitter-200"
# tokenizer
tokenizer = Tokenizer(oov_token="<unk>")
tokenizer.fit_on_texts(train["clean_data"].values)
train_tokenized_list = tokenizer.texts_to_sequences(train["clean_data"].values)
# store vocab size
vocab_size = len(tokenizer.word_index) + 1
# padding sequences
X_padded = pad_sequences(train_tokenized_list, maxlen=max_length)
# get the pretrained word embeddings and prepare embedding layer
embedding_matrix = get_word2vec_enc(corpus=tokenizer.word_index.items(),
vocab_size=vocab_size,
embedding_size=embedding_size,
gensim_pretrained_emb=pretrained_embedding_file)
embedding_layer = Embedding(input_dim=vocab_size,
output_dim=embedding_size,
weights=[embedding_matrix],
input_length=max_length,
trainable=False)
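# Note: the labels take three distinct values here, yet my_LSTM below ends in a single sigmoid unit trained
# with binary_crossentropy; for a 3-class target a Dense(3, activation='softmax') output with
# sparse_categorical_crossentropy (and labels remapped to 0-2, as the commented-out mapping earlier hints)
# would be the usual choice. This mismatch explains the negative losses in the training log further down.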
def my_LSTM(embedding_layer):
print('Creating model...')
model = Sequential()
model.add(embedding_layer)
model.add(Bidirectional(LSTM(units=64, dropout=0.1, recurrent_dropout=0.1)))
model.add(Dense(50, activation="relu"))
model.add(Dropout(0.1))
model.add(Dense(1, activation = "sigmoid"))
print('Compiling...')
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=["accuracy"])
return model
# stratified kfold with LSTM
model_dict = {}
model_dict["model_name"] = "lstm_word_2_vec"
for fold, (train_idx, val_idx) in enumerate(skf.split(X=X_padded, y=y)):
print(f"\nCurrently Training: {model_dict['model_name']}... Fold: {fold+1}")
X_train, X_val = X_padded[train_idx], X_padded[val_idx]
y_train, y_val = y[train_idx], y[val_idx]
# train the model
clf = my_LSTM(embedding_layer)
clf.fit(X_train,
y_train,
epochs=n_epochs,
verbose=1)
# make predictions
y_preds = clf.predict_classes(X_val, verbose=-1)
model_dict["fold_{}".format(fold+1)] = f1_score(y_val, y_preds,average='weighted')
# adding results to models df
new_model = pd.DataFrame(model_dict, columns=models_df.columns, index=[0])
models_df = pd.concat([models_df, new_model], ignore_index=True)
###Output
Currently Training: lstm_word_2_vec... Fold: 1
Creating model...
Compiling...
Epoch 1/8
105/105 [==============================] - 112s 1s/step - loss: -22.4008 - accuracy: 0.0575
Epoch 2/8
105/105 [==============================] - 98s 931ms/step - loss: -228.7584 - accuracy: 0.0602
Epoch 3/8
105/105 [==============================] - 88s 845ms/step - loss: -603.7898 - accuracy: 0.0612
Epoch 4/8
105/105 [==============================] - 104s 997ms/step - loss: -1161.2539 - accuracy: 0.0654
Epoch 5/8
105/105 [==============================] - 101s 959ms/step - loss: -1916.6482 - accuracy: 0.0611
Epoch 6/8
105/105 [==============================] - 67s 641ms/step - loss: -2859.1679 - accuracy: 0.0656
Epoch 7/8
105/105 [==============================] - 122s 1s/step - loss: -3946.4701 - accuracy: 0.0618
Epoch 8/8
105/105 [==============================] - 100s 955ms/step - loss: -5219.9205 - accuracy: 0.0669
Currently Training: lstm_word_2_vec... Fold: 2
Creating model...
Compiling...
Epoch 1/8
105/105 [==============================] - 96s 881ms/step - loss: -20.4561 - accuracy: 0.0553
Epoch 2/8
105/105 [==============================] - 97s 929ms/step - loss: -214.3598 - accuracy: 0.0626
Epoch 3/8
105/105 [==============================] - 96s 916ms/step - loss: -596.5754 - accuracy: 0.0611
Epoch 4/8
105/105 [==============================] - 96s 917ms/step - loss: -1189.5259 - accuracy: 0.0623
Epoch 5/8
105/105 [==============================] - 109s 1s/step - loss: -1947.4669 - accuracy: 0.0631
Epoch 6/8
105/105 [==============================] - 110s 1s/step - loss: -2938.0007 - accuracy: 0.0611
Epoch 7/8
105/105 [==============================] - 121s 1s/step - loss: -4079.5851 - accuracy: 0.0632
Epoch 8/8
105/105 [==============================] - 117s 1s/step - loss: -5378.1726 - accuracy: 0.0640
Currently Training: lstm_word_2_vec... Fold: 3
Creating model...
Compiling...
Epoch 1/8
105/105 [==============================] - 110s 1s/step - loss: -32.2482 - accuracy: 0.0589
Epoch 2/8
105/105 [==============================] - 109s 1s/step - loss: -303.6089 - accuracy: 0.0621
Epoch 3/8
105/105 [==============================] - 140s 1s/step - loss: -795.2278 - accuracy: 0.0609
Epoch 4/8
105/105 [==============================] - 90s 860ms/step - loss: -1524.6240 - accuracy: 0.0612
Epoch 5/8
105/105 [==============================] - 124s 1s/step - loss: -2490.3639 - accuracy: 0.0589
Epoch 6/8
105/105 [==============================] - 120s 1s/step - loss: -3671.7935 - accuracy: 0.0651
Epoch 7/8
105/105 [==============================] - 93s 879ms/step - loss: -5160.0478 - accuracy: 0.0583
Epoch 8/8
105/105 [==============================] - 137s 1s/step - loss: -6721.2592 - accuracy: 0.0621
Currently Training: lstm_word_2_vec... Fold: 4
Creating model...
Compiling...
Epoch 1/8
105/105 [==============================] - 146s 1s/step - loss: -25.9635 - accuracy: 0.0644
Epoch 2/8
105/105 [==============================] - 86s 820ms/step - loss: -236.9544 - accuracy: 0.0666
Epoch 3/8
105/105 [==============================] - 83s 790ms/step - loss: -625.3382 - accuracy: 0.0544
Epoch 4/8
105/105 [==============================] - 100s 949ms/step - loss: -1160.1259 - accuracy: 0.0721
Epoch 5/8
105/105 [==============================] - 88s 838ms/step - loss: -1949.7948 - accuracy: 0.0597
Epoch 6/8
105/105 [==============================] - 131s 1s/step - loss: -2855.7373 - accuracy: 0.0588
Epoch 7/8
105/105 [==============================] - 101s 964ms/step - loss: -3947.3775 - accuracy: 0.0678
Epoch 8/8
105/105 [==============================] - 71s 676ms/step - loss: -5194.4412 - accuracy: 0.0614
Currently Training: lstm_word_2_vec... Fold: 5
Creating model...
Compiling...
Epoch 1/8
105/105 [==============================] - 100s 912ms/step - loss: -24.3597 - accuracy: 0.0571
Epoch 2/8
105/105 [==============================] - 103s 981ms/step - loss: -245.3793 - accuracy: 0.0634
Epoch 3/8
105/105 [==============================] - 118s 1s/step - loss: -648.3476 - accuracy: 0.0650
Epoch 4/8
105/105 [==============================] - 97s 927ms/step - loss: -1263.2248 - accuracy: 0.0597
Epoch 5/8
105/105 [==============================] - 102s 977ms/step - loss: -2047.5069 - accuracy: 0.0679
Epoch 6/8
105/105 [==============================] - 136s 1s/step - loss: -3091.9784 - accuracy: 0.0625
Epoch 7/8
105/105 [==============================] - 113s 1s/step - loss: -4312.4374 - accuracy: 0.0611
Epoch 8/8
105/105 [==============================] - 90s 854ms/step - loss: -5640.6171 - accuracy: 0.0645
|
Riga_Proc.ipynb | ###Markdown
Data from e021 by week:
###Code
df1['proc'].groupby(df1.date.dt.week).mean()
###Output
_____no_output_____
###Markdown
Data from the rest:
###Code
df2['proc'].loc[df2['routingKey'] == 'event.E011T'].groupby(df2.date.dt.week).mean()
df2.groupby(['routingKey',df2.date.dt.week]).mean()
df3['proc'].loc[df3['routingKey'] == 'event.E011'].groupby(df3.date.dt.week).mean()
df3.groupby(['routingKey',df3.date.dt.week]).mean()
df3.loc[df3['routingKey'] == 'event.E011']
df4['all'] = df4['has_plate'] + df4['no_plate']
df4['proc'] = df4['has_plate']/df4['all']
df4.date = pd.to_datetime(df4.date.astype('object'), format = '%Y-%m-%d')
df4.groupby(['routingKey',df4.date.dt.week]).mean()
df4.loc[df4['routingKey'] == 'event.E011']
###Output
_____no_output_____ |
adsp-05-code.ipynb | ###Markdown
ADSP-05: Introduction to OOP in Python (Part 02: inheritance) (C) Taufik Sutanto https://tau-data.id/adsp-05/ Example of a Class and Object from the previous lesson:
###Code
# Tambah fungsi rata-rata nilai kelas
class Mahasiswa:
def __init__(self, nama, nilai):
self.nama = nama
self.nilai = nilai
def nilai_mahasiswa(self):
return self.nilai
class Kuliah:
def __init__(self, nama, max_mahasiswa):
self.nama = nama
self.max_mahasiswa = max_mahasiswa
self.mahasiswa = []
def tambah_mahasiswa(self, nama):
if len(self.mahasiswa) < self.max_mahasiswa:
self.mahasiswa.append(nama)
return True
else:
return "Error: Maaf kelas Penuh"
def rerata_nilai(self):
sum_ = 0
for siswa in self.mahasiswa:
sum_ += siswa.nilai_mahasiswa()
# perhatikan disini kita melakukan ini karena siswa adalah objek
# objek siswa punya methode "nilai_mahasiswa"
return sum_/len(self.mahasiswa)
m1 = Mahasiswa('Udin', 77)
m2 = Mahasiswa('Ucok', 67)
m3 = Mahasiswa('Asep', 87)
kelas = Kuliah('Kalkulus', 2)
kelas.tambah_mahasiswa(m1), kelas.tambah_mahasiswa(m2)
'Nilai rata-rata kelas ', kelas.nama, ' adalah = ', kelas.rerata_nilai()
###Output
_____no_output_____
###Markdown
Outline of Lesson ADSP-05:* Inheritance* Super Function* Method Overriding/Overwriting Inheritance in OOP (Python)* When a *child* class inherits the *properties* of its *parent* class, this is called **inheritance** Why use inheritance?* Code reusability: think of it like a "template".* Transition & Readability: good for teamwork.* Real-world Relationship: relationships between classes/objects
###Code
# Contoh paling sederhana inheritance
class Ortu:
def pungsi1(self):
print("ini fungsi di orang tua")
class Anak(Ortu):
def pungsi2(self):
print("ini fungsi di anak")
sulung = Anak()
# PERHATIKAN "sulung" memiliki fungsi dari "Ortu"
sulung.pungsi1(), sulung.pungsi2()
# Menggunakan init seperti lesson sebelumnya (ADSP-04)
class Ortu:
def __init__(self, nama='Bambang', umur='40'):
self.nama = nama
self.umur = umur
def pungsi1(self):
print("ini fungsi di orang tua")
def info(self):# Method dari class seperti Lesson sebelumnya
print("Nama = {}, Umur = {}".format(self.nama, self.umur))
class Anak(Ortu):
def __init__(self, nama, umur, anakKe):
Ortu.__init__(self, nama, umur)
self.anakKe = anakKe
def pungsi2(self):
print("ini fungsi di anak")
def info(self):
print("Nama = {}, Umur = {}, anak Ke-{}".format(self.nama, self.umur, self.anakKe))
sulung = Anak("Budi", 5, 2) # Property/Method "Ortu" di OVERWRITE oleh "Anak"
print(sulung.info())
# Contoh Multiple Inheritance
class Ayah:
def __init__(self, nama='Bambang', umur='40'):
self.nama = nama
self.umur = umur
def pungsiAyah(self):
print("ini fungsi di Ayah")
def info(self):
print("Nama = {}, Umur = {}".format(self.nama, self.umur))
class Ibu:
def __init__(self, nama='Wati', umur='40'):
self.nama = nama
self.umur = umur
def pungsiIbu(self):
print("ini fungsi di Ibu")
    def info(self): # class method, as in the previous lecture
print("Nama = {}, Umur = {}".format(self.nama, self.umur))
class Anak(Ayah, Ibu):
def __init__(self, nama, umur, anakKe):
Ayah.__init__(self, nama, umur)
self.anakKe = anakKe
def pungsiAnak(self):
print("ini fungsi di anak")
def info(self):
print("Nama = {}, Umur = {}, anak Ke-{}".format(self.nama, self.umur, self.anakKe))
sulung = Anak("Budi", 5, 2) # Property/method "Ayah & Ibu" diwariskan ke "Anak"
print(sulung.pungsiAyah(), sulung.pungsiIbu())
# Example of Multilevel Inheritance
class Kakek:
def __init__(self, nama='Iwan', umur='40'):
self.nama = nama
self.umur = umur
def pungsiKakek(self):
print("ini fungsi di Kakek")
    def info(self): # class method, as in the previous lecture
print("Nama = {}, Umur = {}".format(self.nama, self.umur))
class Ortu(Kakek):
def __init__(self, nama='Parto', umur='40'):
self.nama = nama
self.umur = umur
def pungsiOrtu(self):
print("ini fungsi di Ortu")
def info(self):
print("Nama = {}, Umur = {}".format(self.nama, self.umur))
class Anak(Ortu):
def __init__(self, nama, umur, anakKe):
        Ortu.__init__(self, nama, umur) # fixed: call the direct parent (Ortu), not Ayah from the earlier example
self.anakKe = anakKe
def pungsiAnak(self):
print("ini fungsi di anak")
def info(self):
print("Nama = {}, Umur = {}, anak Ke-{}".format(self.nama, self.umur, self.anakKe))
sulung = Anak("Budi", 5, 2) # Property/method "Ortu dan Kakek" diwariskan ke "Anak"
print(sulung.pungsiKakek())
# Example of Hierarchical Multilevel Inheritance
class Kakek:
def __init__(self, nama='Iwan', umur='40'):
self.nama = nama
self.umur = umur
def pungsiKakek(self):
print("ini fungsi di Kakek")
def info(self):
print("Nama = {}, Umur = {}".format(self.nama, self.umur))
class Ortu(Kakek):
def __init__(self, nama='Parto', umur='40'):
self.nama = nama
self.umur = umur
def pungsiOrtu(self):
print("ini fungsi di Ortu")
def info(self):
print("Nama = {}, Umur = {}".format(self.nama, self.umur))
class Paman():
def __init__(self, nama='Parto', umur='40'):
self.nama = nama
self.umur = umur
def pungsiPaman(self):
print("ini fungsi di Paman")
def info(self):
print("Nama = {}, Umur = {}".format(self.nama, self.umur))
class Anak(Paman, Ortu):
def __init__(self, nama, umur, anakKe):
Paman.__init__(self, nama, umur)
self.anakKe = anakKe
def pungsiAnak(self):
print("ini fungsi di anak")
def info(self):
print("Nama = {}, Umur = {}, anak Ke-{}".format(self.nama, self.umur, self.anakKe))
sulung = Anak("Budi", 5, 2)
print(sulung.pungsiPaman(), sulung.pungsiKakek())
###Output
ini fungsi di Paman
ini fungsi di Kakek
None None
###Markdown
Super Function
###Code
class Ortu():
def pungsiOrtu(self):
print("ini fungsi di Ortu")
class Anak(Ortu):
def pungsiAnak(self):
super().pungsiOrtu()
print("ini di dalam fungsi Anak")
sulung = Anak()
sulung.pungsiAnak()
###Output
ini fungsi di Ortu
ini di dalam fungsi Anak
###Markdown
Method Overriding/Overwriting* MO replaces (overrides) a function defined in the Parent class.
###Code
class Ortu():
def pungsi(self):
print("ini fungsi di Ortu")
class Anak(Ortu):
    def pungsi(self): # note: the function name is the same as in the parent
print("ini di dalam fungsi Anak")
sulung = Anak()
sulung.pungsi()
###Output
ini di dalam fungsi Anak
|
Guia Practica Nro 2/184651_Guia_2_Taxonomia_de_Flynn.ipynb | ###Markdown
The following code allows every piece of code executed in this Colab notebook to be timed.
###Code
!pip install ipython-autotime
%load_ext autotime
print(sum(range(10)))
###Output
45
time: 1.01 ms
###Markdown
Question 1: What fraction of one second is the printed value? Microseconds
###Code
print(sum(range(10)))
###Output
45
time: 896 µs
###Markdown
Milliseconds
---
Next, we have a Python library called **numba** that performs automatic parallelization. With it, we can verify that using prange() gives a better execution time than using range().
###Code
from numba import njit, prange
import numpy as np
A = np.arange(5, 14000000)
@njit(parallel=True)
def prange_test(A):
s = 0
# Without "parallel=True" in the jit-decorator
# the prange statement is equivalent to range
for i in prange(A.shape[0]):
s += A[i]
return s
print(prange_test(A))
from numba import njit, prange
import numpy as np
A = np.arange(5, 14000000)
#@njit(parallel=True)
def prange_test(A):
s = 0
# Without "parallel=True" in the jit-decorator
# the prange statement is equivalent to range
for i in range(A.shape[0]):
s += A[i]
return s
print(prange_test(A))
###Output
97999992999990
time: 3.79 s
###Markdown
Question 2: identify other values for A such that, running serially, we get a better result than running in parallel
---
Flynn's Taxonomy defines 4 types of architectures for parallel computing: SISD, SIMD, MISD, and MIMD.
---
Question 3: What type is the last code that was executed? SIMD
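To make the distinction concrete, here is a small illustrative sketch (added as an aside; the array size is an arbitrary assumption): the explicit Python loop processes one data element per instruction, in the spirit of SISD, while the NumPy vectorized call applies a single instruction across many data elements, in the spirit of SIMD.
###Code
import numpy as np

A = np.arange(5, 1_000_000)

# SISD-style: one instruction stream operating on one data element at a time
total_sisd = 0
for x in A:
    total_sisd += x

# SIMD-style: a single vectorized instruction applied to the whole array
total_simd = A.sum()

print(total_sisd, total_simd)
###Output
_____no_output_____
###Markdown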
---
Question 4: what type is the following parallel code? Comment the code to justify your answer
###Code
import threading
import time
# A single instruction that displays the count and its timestamp
def print_time(name, n):
count = 0
print("Para el Hilo: %s, en el momento: %s, su valor de count es: %s" % ( name, time.ctime(), count))
while count < 5:
time.sleep(n)
count+=1
print("%s: %s. count %s" % ( name, time.ctime(), count))
# Passing in 2 data items (two threads)
t1 = threading.Thread(target=print_time, args=("Thread-1", 0, ) )
t2 = threading.Thread(target=print_time, args=("Thread-2", 0, ) )
t1.start()
t2.start()
###Output
Para el Hilo: Thread-1, en el momento: Wed Dec 9 15:10:55 2020, su valor de count es: 0
Thread-1: Wed Dec 9 15:10:55 2020. count 1
Thread-1: Wed Dec 9 15:10:55 2020. count 2
Thread-1: Wed Dec 9 15:10:55 2020. count 3
Thread-1: Wed Dec 9 15:10:55 2020. count 4
Thread-1: Wed Dec 9 15:10:55 2020. count 5
Para el Hilo: Thread-2, en el momento: Wed Dec 9 15:10:55 2020, su valor de count es: 0time: 14.1 ms
Thread-2: Wed Dec 9 15:10:55 2020. count 1
Thread-2: Wed Dec 9 15:10:55 2020. count 2
Thread-2: Wed Dec 9 15:10:55 2020. count 3
Thread-2: Wed Dec 9 15:10:55 2020. count 4
Thread-2: Wed Dec 9 15:10:55 2020. count 5
###Markdown
---
SIMD: there is only one single instruction.
For different data, different results are shown for thread 1 and thread 2 according to the data passed in. A MIMD-type parallel computer is used more in distributed computing, e.g. clusters. The following desktop Python code shows how that works
###Code
#greeting-server.py
import Pyro4
@Pyro4.expose
class GreetingMaker(object):
def get_fortune(self, name):
return "Hello, {0}. Here is your fortune message:\n" \
"Behold the warranty -- the bold print giveth and the fine print taketh away.".format(name)
daemon = Pyro4.Daemon() # make a Pyro daemon
uri = daemon.register(GreetingMaker) # register the greeting maker as a Pyro object
print("Ready. Object uri =", uri) # print the uri so we can use it in the client later
daemon.requestLoop() # start the event loop of the server to wait for calls
#greeting-client.py
import Pyro4
uri = input("What is the Pyro uri of the greeting object? ").strip()
name = input("What is your name? ").strip()
greeting_maker = Pyro4.Proxy(uri) # get a Pyro proxy to the greeting object
print(greeting_maker.get_fortune(name)) # call method normally
###Output
_____no_output_____
###Markdown
Question 5: Explain what this MIMD-type code does
---
Proposed Exercise: Create an example that shows MISD-type parallel computation
###Code
#input: a single data item, which is the thread's counter
#procedure: 2 processes for a single data item
#- first, the thread's value increases one by one
#- second, the thread's value is multiplied by 2, constrained by the while loop
#output: shows 2 processes
#- the first shows the thread's "count" value adding 1 at a time
#- the second shows the thread's "count" value multiplying by 2 each time
import threading
import time
def print_time1(name, n):
count = 0
print("Para el Hilo: %s, en el momento: %s, su valor de count es: %s" % ( name, time.ctime(), count))
while count < 10:
count+=1
print("%s: %s. count %s" % ( name, time.ctime(), count))
def print_time2(name, n):
count = 1
print("Para el Hilo: %s, en el momento: %s, su valor de count es: %s" % ( name, time.ctime(), count))
while count < 100:
count*=2
print("%s: %s. count %s" % ( name, time.ctime(), count))
t1 = threading.Thread(target=print_time1, args=("Thread-1", 0, ) )
t1.start()
t1 = threading.Thread(target=print_time2, args=("Thread-1", 0, ) )
t1.start()
###Output
_____no_output_____ |
covid_19/Era5-by-political-boundaries.ipynb | ###Markdown
Subsetting ERA5 to political boundaries==================In this notebook, we download ERA5 precipitation and temperature data and subset it to political boundaries (countries and states/provinces). We write these out to two csv files.
###Code
%matplotlib inline
import warnings
import cartopy
import cdsapi
import geopandas
import hvplot.pandas # noqa
import regionmask
import pandas as pd
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
from cartopy import crs as ccrs
from tqdm.notebook import tqdm
xr.set_options(display_style='html')
warnings.simplefilter("ignore", category=RuntimeWarning)
assert regionmask.__version__ == '0.5.0'
###Output
_____no_output_____
###Markdown
Download the latest ERA5 data. This requires authenticating with the Copernicus Climate Data Store. You'll need a file named `.cdsapirc` in your home directory with the following details:```url: https://cds.climate.copernicus.eu/api/v2key: XXXX:1234567-1234-1234-1234-1234567890```Be sure to replace the `XXXX` part with your UID and the remainder with your key.
###Code
# url: https://cds.climate.copernicus.eu/api/v2
# key: XXXX:1234567-1234-1234-1234-1234567890
c = cdsapi.Client()
###Output
_____no_output_____
###Markdown
This next cell will download data from the CDS, placing a new file (`download.nc`) in your working directory.
###Code
c.retrieve(
'reanalysis-era5-single-levels-monthly-means',
{
'format': 'netcdf',
'product_type': 'monthly_averaged_reanalysis',
'variable': [
'2m_temperature', 'total_precipitation',
],
'year': [
'2019', '2020',
],
'month': [
'01', '02', '03',
'04', '05', '06',
'07', '08', '09',
'10', '11', '12',
],
'time': '00:00',
},
'download.nc')
###Output
_____no_output_____
###Markdown
Now we can open the ERA5 dataset. This dataset comes with two experiments; we'll merge them since they don't overlap in their forecast time.
###Code
# open the dataset
ds = xr.open_dataset('download.nc')
# merge the experiments
ds = ds.bfill('expver').isel(expver=0)
ds
# plot the last timestep
ds['t2m'].isel(time=-1).plot()
# plot a timeseries at one location
ds['t2m'].sel(longitude=114.3055, latitude=30.5928, method='nearest').plot()
###Output
_____no_output_____
###Markdown
Subsetting ERA5 by political boundariesThe ERA5 data we plotted above is still in its gridded format. If we want to subset this data by political boundary, we need to bring in a shapefile. Below we define two functions that will help us do this subsetting.
###Code
def get_gdf(resolution='50m', category='cultural', name='admin_0_countries'):
'''return a geopandas.GeoDataFrame of boundaries
More info: https://www.naturalearthdata.com/downloads/
'''
fname = cartopy.io.shapereader.natural_earth(
resolution=resolution,
category=category,
name=name)
gdf = geopandas.GeoDataFrame.from_file(fname)
gdf['cent_lon'] = gdf.geometry.centroid.x
gdf['cent_lon'].values[gdf['cent_lon'].values < 0] += 360.
gdf['cent_lat'] = gdf.geometry.centroid.y
return gdf
def subset_by_gdf(ds, gdf, var='t2m'):
'''Subset a dataset (ds) using the shapes defined in a GeoDataFrame (gdf)
Returns
-------
final_df : geopandas.GeoDataFrame
'''
# create masks
print('Creating masks...')
shapes = regionmask.Regions(gdf.geometry)
mask = shapes.mask(ds, lon_name='longitude', lat_name='latitude', wrap_lon=True)
print('looping over shapes...')
# loop over shapes
df = pd.DataFrame(index=ds.indexes['time'])
for val, row in tqdm(gdf.iterrows()):
if not (mask == val).values.any():
data = ds[var].sel(latitude=row['cent_lat'], longitude=row['cent_lon'], method='nearest', tolerance=1)
else:
data = ds[var].where(mask == val).mean(('latitude', 'longitude'))
df[val] = data.to_series()
if var == 't2m':
df[val] -= 273.15
# setup final dataframe
df.index = df.index.to_period()
df = df['2019-11-01':].transpose()
final_df = gdf.merge(df, right_index=True, left_index=True)
final_df.crs = ccrs.PlateCarree().proj4_init
return final_df
gdf = get_gdf(resolution='110m')
countries = subset_by_gdf(ds, gdf, var='t2m')
display(countries.head())
countries.hvplot(c='2019-12', cmap='viridis', hover_cols=['ADMIN'])
gdf = get_gdf(resolution='10m', name='admin_1_states_provinces')
states = subset_by_gdf(ds, gdf, var='t2m')
display(states.head())
states.hvplot(c='2019-12', cmap='viridis') # TODO fix the hover tools here
# write to csv
pd.DataFrame(countries).drop(columns=['geometry']).to_csv('era5_ne_countries.csv', encoding='utf-8')
pd.DataFrame(states).drop(columns=['geometry']).to_csv('era5_ne_states.csv', encoding='utf-8')
###Output
_____no_output_____ |
NEU_ADS_Student_Project_Portfolio_Examples/NBA MVP Prediction with Principal Component Analysis/Project/PredictMVP/FinalPredictingMVP.ipynb | ###Markdown
Threshold and Requirements*Every player must put up a PER of at least 18.5 in the season.*Every player needs an average of at least 0.20 WS/48 in the season.*Every player must play at least 1500 minutes in the season.
###Code
data_traintest = data_traintest[data_traintest.MP >= 1500]
data_traintest = data_traintest[data_traintest.PER >= 18.5]
data_traintest = data_traintest[data_traintest.WS/48 > 0.2]
data_predict = data_predict[data_predict.MP >= 1500]
data_predict = data_predict[data_predict.PER >= 18.5]
data_predict = data_predict[data_predict.WS/48 > 0.2]
data_result = data_result[data_result.MP >= 1500]
data_result = data_result[data_result.PER >= 18.5]
data_result = data_result[data_result.WS/48 > 0.2]
###Output
_____no_output_____
###Markdown
Because our target value, the voting rate, is a small decimal, we scale it up by a factor of 1,000 to make the results easier to read.
###Code
data_traintest['Voting'] = data_traintest['Voting']*1000
data_traintest
# Get dimensions of train&test dataset
n = data_traintest.shape[0]
p = data_traintest.shape[1]
# Make data a np.array
data_traintest = data_traintest.values
data_predict = data_predict.values
data_result = data_result.values
#Get training and testing data
#Set start index and end index of training and testing data
train_start = 0
train_end = int(np.floor(0.8*n))
test_start = train_end + 1
test_end = n
data_train = data_traintest[np.arange(train_start, train_end), :]
data_test = data_traintest[np.arange(test_start, test_end), :]
#Build X and y
X_train = data_train[:, 1:]
y_train = data_train[:, 0]
X_test = data_test[:, 1:]
y_test = data_test[:, 0]
#Start creating the neural network model and setting variables
# Number of players in the training data
n_player = X_train.shape[1]
# Neurons
n_neurons_1 = 2048
n_neurons_2 = 1024
n_neurons_3 = 512
n_neurons_4 = 256
n_neurons_5 = 128
# Session
net = tf.InteractiveSession()
# Placeholder
X = tf.placeholder(dtype=tf.float32, shape=[None, n_player])
Y = tf.placeholder(dtype=tf.float32, shape=[None])
# Initializers--tf.zeros_initializer
# Initializers are used to initialize the network’s variables before training.
sigma = 1
weight_initializer = tf.variance_scaling_initializer(mode="fan_avg", distribution="uniform", scale=sigma)
bias_initializer = tf.zeros_initializer()
# Hidden weights and biases
W_hidden_1 = tf.Variable(weight_initializer([n_player, n_neurons_1]))
bias_hidden_1 = tf.Variable(bias_initializer([n_neurons_1]))
W_hidden_2 = tf.Variable(weight_initializer([n_neurons_1, n_neurons_2]))
bias_hidden_2 = tf.Variable(bias_initializer([n_neurons_2]))
W_hidden_3 = tf.Variable(weight_initializer([n_neurons_2, n_neurons_3]))
bias_hidden_3 = tf.Variable(bias_initializer([n_neurons_3]))
W_hidden_4 = tf.Variable(weight_initializer([n_neurons_3, n_neurons_4]))
bias_hidden_4 = tf.Variable(bias_initializer([n_neurons_4]))
W_hidden_5 = tf.Variable(weight_initializer([n_neurons_4, n_neurons_5]))
bias_hidden_5 = tf.Variable(bias_initializer([n_neurons_5]))
# Output weights and biases
W_out = tf.Variable(weight_initializer([n_neurons_5, 1]))
bias_out = tf.Variable(bias_initializer([1]))
# Hidden layer(leaky_relu)
hidden_1 = tf.nn.leaky_relu(tf.add(tf.matmul(X, W_hidden_1), bias_hidden_1))#
hidden_2 = tf.nn.leaky_relu(tf.add(tf.matmul(hidden_1, W_hidden_2), bias_hidden_2))
hidden_3 = tf.nn.leaky_relu(tf.add(tf.matmul(hidden_2, W_hidden_3), bias_hidden_3))
hidden_4 = tf.nn.leaky_relu(tf.add(tf.matmul(hidden_3, W_hidden_4), bias_hidden_4))
hidden_5 = tf.nn.leaky_relu(tf.add(tf.matmul(hidden_4, W_hidden_5), bias_hidden_5))
# Output layer (transpose!)
out = tf.transpose(tf.add(tf.matmul(hidden_5, W_out), bias_out))
# Cost function--user defined
# MSE computes the average squared deviation between predictions and targets
mse = tf.reduce_mean(tf.squared_difference(out, Y))
# Optimizer--Adam
# Used to compute and adapt weights and biases
opt = tf.train.AdamOptimizer().minimize(mse)
#initialize variables
init = tf.global_variables_initializer()
# Setup plot
plt.ion()
fig = plt.figure()
ax1 = fig.add_subplot(111)
line1, = ax1.plot(y_test)
#line2, = ax1.plot(y_test * 2)
plt.show()
###Output
_____no_output_____
###Markdown
Fitting the neural network model
###Code
# Number of iterations or training cycles
epochs = 4000
#Run the model
#Display the changing process of mse every 500 epochs
with tf.Session() as sess:
init.run()
for e in range(epochs):
sess.run(opt, feed_dict={X: X_train, Y: y_train})
if e % 500 == 0:
loss = mse.eval(feed_dict={X: X_train, Y: y_train})
print(e, "\tMSE:", loss)
y_pred = sess.run(out, feed_dict={X: X_test})
# Predict MVP of new season
MVP = sess.run(out, feed_dict={X: data_predict})
#First observe the accuracy of the predicting results and actual values of test data.
#Make the plot
plt.title("Pridiction VS Actual", fontsize=14)
plt.plot(pd.Series(np.ravel(y_test)), "bo", markersize = 5, label="Actual")
plt.plot(pd.Series(np.ravel(y_pred)), "r.", markersize = 5, label="Pridicting")
plt.legend(loc="upper left")
plt.xlabel("Players")
plt.show()
###Output
_____no_output_____
###Markdown
Copyright 2017 Sebastian HeinzPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Predict MVP in 17-18 season
###Code
#Make the plot of predicting results of new season data
plt.title("Pridiction MVP in 17-18 season", fontsize=14)
plt.plot(pd.Series(np.ravel(MVP)))
plt.ylabel("Predicting MVP PTS", fontsize=14)
plt.xlabel("Players", fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
Show specific players intuitively
###Code
#Because the shape of MVP value array and Player array is different, transpose it.
MVP = MVP.T
#Combine the arrays and match the predicting results to the players.
Players = np.concatenate((data_result,MVP),axis=1)
#Sort players in desending order for MVP value
Players = Players[Players[:,51].argsort()][::-1]
Players
###Output
_____no_output_____ |
_docs/nbs/T714933-MetaTL-for-Cold-start-users-on-Amazon-Electronics-dataset.ipynb | ###Markdown
MetaTL for Cold-start users on Amazon Electronics dataset Introduction A fundamental challenge for sequential recommenders is to capture the sequential patterns of users toward modeling how users transit among items. In many practical scenarios, however, there are a great number of cold-start users with only minimal logged interactions. As a result, existing sequential recommendation models will lose their predictive power due to the difficulties in learning sequential patterns over users with only limited interactions. In this work, we aim to improve sequential recommendation for cold-start users with a novel framework named MetaTL, which learns to model the transition patterns of users through meta-learning.Specifically, the proposed MetaTL:1. formulates sequential recommendation for cold-start users as a few-shot learning problem;2. extracts the dynamic transition patterns among users with a translation-based architecture; and3. adopts meta transitional learning to enable fast learning for cold-start users with only limited interactions, leading to accurate inference of sequential interactions. Background Sequential RecommendersOne of the first approaches for sequential recommendation is the use of Markov Chains to model the transitions of users among items. More recently, TransRec embeds items in a “transition space” and learns a translation vector for each user. With the advance in neural networks, many different neural structures including Recurrent Neural Networks, Convolutional Neural Networks, Transformers and Graph Neural Networks, have been adopted to model the dynamic preferences of users over their behavior sequences. While these methods aim to improve the overall performance via representation learning for sequences, they suffer from weak prediction power for cold-start users with short behavior sequences. Meta LearningThis line of research aims to learn a model which can adapt and generalize to new tasks and new environments with a few training samples. To achieve the goal of “learning-to-learn”, there are three types of different approaches. Metric-based methods are based on a similar idea to the nearest neighbors algorithm with a well-designed metric or distance function, prototypical networks or Siamese Neural Network. Model-based methods usually perform a rapid parameter update with an internal architecture or are controlled by another meta-learner model. As for the optimization-based approaches, by adjusting the optimization algorithm, the models can be efficiently updated with a few examples. Cold-Start Meta RecommendersMetaRec proposes a meta-learning strategy to learn user-specific logistic regression. There are also methods including MetaCF, Warm-up and MeLU, adopting Model-Agnostic Meta-Learning (MAML) methods to learn a model to achieve fast adaptation for cold-start users. Cold-Start Meta Sequential Recommenderscold-start sequential recommendation targets a setting where no additional auxiliary knowledge can be accessed due to privacy issues, and more importantly, the user-item interactions are sequentially dependent. A user’s preferences and tastes may change over time and such dynamics are of great significance in sequential recommendation. Hence, it is necessary to develop a new sequential recommendation framework that can distill short-range item transitional dynamics, and make fast adaptation to those cold-start users with limited user-item interactions. 
Problem Statement Let $I = \{i_1, i_2, \dots, i_P\}$ and $U = \{u_1, u_2, \dots, u_G\}$ represent the item set and the user set in the platform, respectively. Each item is mapped to a trainable embedding associated with its ID. There is no auxiliary information for users or items. In sequential recommendation, given the sequence of items $Seq_u = (i_{u,1}, i_{u,2}, \dots, i_{u,n})$ that user $u$ has interacted with in chronological order, the model aims to infer the next interesting item $i_{u,n+1}$. That is to say, we need to predict the preference score for each candidate item based on $Seq_u$ and thus recommend the top-N items with the highest scores. In our task, we train the model on $U_{train}$, which contains users with various numbers of logged interactions. Then, given $u$ in a separate test set $U_{test}$ with $U_{train} \cap U_{test} = \emptyset$, the model can quickly learn the user's transition patterns from the $K$ initial interactions and thus infer the subsequent interactions. Note that the size of a user's initial interactions (i.e., $K$) is assumed to be a small number (e.g., 2, 3 or 4) considering the cold-start scenario. Setup Imports
###Code
import os
import sys
import copy
import json
import random
import shutil
import logging
import numpy as np
from collections import defaultdict, Counter, OrderedDict
from multiprocessing import Process, Queue
import torch
import torch.nn as nn
from torch.nn import functional as F
###Output
_____no_output_____
###Markdown
Params
###Code
class Args:
dataset = "electronics"
seed = None
K = 3 #NUMBER OF SHOT
embed_dim = 100
batch_size = 1024
learning_rate = 0.001
epoch = 1000
print_epoch = 100
eval_epoch = 100
beta = 5
margin = 1
dropout_p = 0.5
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
params = dict(Args.__dict__)
params
if params['seed'] is not None:
SEED = params['seed']
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
np.random.seed(SEED)
random.seed(SEED)
###Output
_____no_output_____
###Markdown
Dataset ***Electronics*** is adapted from the public Amazon review dataset, which includes reviews ranging from May 1996 to July 2014 on Amazon products belonging to the “Electronics” category. We filter out items with fewer than 10 interactions. We split each dataset with a corresponding cutting timestamp $T$, such that we construct $U_{train}$ with users who have interactions before $T$ and construct $U_{test}$ with users who start their first interactions after $T$. When evaluating few-shot sequential recommendation for a choice of $K$ (i.e., the number of initial interactions), we keep $K$ interactions as initialization for each user in $U_{test}$ and predict the user's next interactions.
###Code
!wget -q --show-progress https://github.com/sparsh-ai/coldstart-recsys/raw/main/data/electronics/electronics_train.csv
!wget -q --show-progress https://github.com/sparsh-ai/coldstart-recsys/raw/main/data/electronics/electronics_test_new_user.csv
# sampler for batch generation
def random_neq(l, r, s):
t = np.random.randint(l, r)
while t in s:
t = np.random.randint(l, r)
return t
def trans_to_cuda(variable):
if torch.cuda.is_available():
return variable.cuda()
else:
return variable
def trans_to_cpu(variable):
if torch.cuda.is_available():
return variable.cpu()
else:
return variable
# train/val/test data generation
def data_load(fname, num_sample):
usernum = 0
itemnum = 0
user_train = defaultdict(list)
# assume user/item index starting from 1
f = open('%s_train.csv' % (fname), 'r')
for line in f:
u, i, t = line.rstrip().split('\t')
u = int(u)
i = int(i)
usernum = max(u, usernum)
itemnum = max(i, itemnum)
user_train[u].append(i)
f.close()
# read in new users for testing
user_input_test = {}
user_input_valid = {}
user_valid = {}
user_test = {}
User_test_new = defaultdict(list)
f = open('%s_test_new_user.csv' % (fname), 'r')
for line in f:
u, i, t = line.rstrip().split('\t')
u = int(u)
i = int(i)
User_test_new[u].append(i)
f.close()
for user in User_test_new:
if len(User_test_new[user]) > num_sample:
if random.random()<0.3:
user_input_valid[user] = User_test_new[user][:num_sample]
user_valid[user] = []
user_valid[user].append(User_test_new[user][num_sample])
else:
user_input_test[user] = User_test_new[user][:num_sample]
user_test[user] = []
user_test[user].append(User_test_new[user][num_sample])
return [user_train, usernum, itemnum, user_input_test, user_test, user_input_valid, user_valid]
class DataLoader(object):
def __init__(self, user_train, user_test, itemnum, parameter):
self.curr_rel_idx = 0
self.bs = parameter['batch_size']
self.maxlen = parameter['K']
self.valid_user = []
for u in user_train:
if len(user_train[u]) < self.maxlen or len(user_test[u]) < 1: continue
self.valid_user.append(u)
self.num_tris = len(self.valid_user)
self.train = user_train
self.test = user_test
self.itemnum = itemnum
def next_one_on_eval(self):
if self.curr_tri_idx == self.num_tris:
return "EOT", "EOT"
u = self.valid_user[self.curr_tri_idx]
self.curr_tri_idx += 1
seq = np.zeros([self.maxlen], dtype=np.int32)
pos = np.zeros([self.maxlen - 1], dtype=np.int32)
neg = np.zeros([self.maxlen - 1], dtype=np.int32)
idx = self.maxlen - 1
ts = set(self.train[u])
for i in reversed(self.train[u]):
seq[idx] = i
if idx > 0:
pos[idx - 1] = i
if i != 0: neg[idx - 1] = random_neq(1, self.itemnum + 1, ts)
idx -= 1
if idx == -1: break
curr_rel = u
support_triples, support_negative_triples, query_triples, negative_triples = [], [], [], []
for idx in range(self.maxlen-1):
support_triples.append([seq[idx],curr_rel,pos[idx]])
support_negative_triples.append([seq[idx],curr_rel,neg[idx]])
rated = ts
rated.add(0)
query_triples.append([seq[-1],curr_rel,self.test[u][0]])
for _ in range(100):
t = np.random.randint(1, self.itemnum + 1)
while t in rated: t = np.random.randint(1, self.itemnum + 1)
negative_triples.append([seq[-1],curr_rel,t])
support_triples = [support_triples]
support_negative_triples = [support_negative_triples]
query_triples = [query_triples]
negative_triples = [negative_triples]
return [support_triples, support_negative_triples, query_triples, negative_triples], curr_rel
###Output
_____no_output_____
###Markdown
Sampling
###Code
def sample_function_mixed(user_train, usernum, itemnum, batch_size, maxlen, result_queue, SEED):
def sample():
if random.random()<0.5:
user = np.random.randint(1, usernum + 1)
while len(user_train[user]) <= 1: user = np.random.randint(1, usernum + 1)
seq = np.zeros([maxlen], dtype=np.int32)
pos = np.zeros([maxlen], dtype=np.int32)
neg = np.zeros([maxlen], dtype=np.int32)
if len(user_train[user]) < maxlen:
nxt_idx = len(user_train[user]) - 1
else:
nxt_idx = np.random.randint(maxlen,len(user_train[user]))
nxt = user_train[user][nxt_idx]
idx = maxlen - 1
ts = set(user_train[user])
for i in reversed(user_train[user][min(0, nxt_idx - 1 - maxlen) : nxt_idx - 1]):
seq[idx] = i
pos[idx] = nxt
if nxt != 0: neg[idx] = random_neq(1, itemnum + 1, ts)
nxt = i
idx -= 1
if idx == -1: break
curr_rel = user
support_triples, support_negative_triples, query_triples, negative_triples = [], [], [], []
for idx in range(maxlen-1):
support_triples.append([seq[idx],curr_rel,pos[idx]])
support_negative_triples.append([seq[idx],curr_rel,neg[idx]])
query_triples.append([seq[-1],curr_rel,pos[-1]])
negative_triples.append([seq[-1],curr_rel,neg[-1]])
return support_triples, support_negative_triples, query_triples, negative_triples, curr_rel
else:
user = np.random.randint(1, usernum + 1)
while len(user_train[user]) <= 1: user = np.random.randint(1, usernum + 1)
seq = np.zeros([maxlen], dtype=np.int32)
pos = np.zeros([maxlen], dtype=np.int32)
neg = np.zeros([maxlen], dtype=np.int32)
list_idx = random.sample([i for i in range(len(user_train[user]))], maxlen + 1)
list_item = [user_train[user][i] for i in sorted(list_idx)]
nxt = list_item[-1]
idx = maxlen - 1
ts = set(user_train[user])
for i in reversed(list_item[:-1]):
seq[idx] = i
pos[idx] = nxt
if nxt != 0: neg[idx] = random_neq(1, itemnum + 1, ts)
nxt = i
idx -= 1
if idx == -1: break
curr_rel = user
support_triples, support_negative_triples, query_triples, negative_triples = [], [], [], []
for idx in range(maxlen-1):
support_triples.append([seq[idx],curr_rel,pos[idx]])
support_negative_triples.append([seq[idx],curr_rel,neg[idx]])
query_triples.append([seq[-1],curr_rel,pos[-1]])
negative_triples.append([seq[-1],curr_rel,neg[-1]])
return support_triples, support_negative_triples, query_triples, negative_triples, curr_rel
np.random.seed(SEED)
while True:
one_batch = []
for i in range(batch_size):
one_batch.append(sample())
support, support_negative, query, negative, curr_rel = zip(*one_batch)
result_queue.put(([support, support_negative, query, negative], curr_rel))
class WarpSampler(object):
def __init__(self, User, usernum, itemnum, batch_size=64, maxlen=10, n_workers=1):
self.result_queue = Queue(maxsize=n_workers * 10)
self.processors = []
for i in range(n_workers):
self.processors.append(
Process(target=sample_function_mixed, args=(User,
usernum,
itemnum,
batch_size,
maxlen,
self.result_queue,
np.random.randint(2e9)
)))
self.processors[-1].daemon = True
self.processors[-1].start()
def next_batch(self):
return self.result_queue.get()
def close(self):
for p in self.processors:
p.terminate()
p.join()
###Output
_____no_output_____
###Markdown
Model Definition
###Code
class Embedding(nn.Module):
def __init__(self, num_ent, parameter):
super(Embedding, self).__init__()
self.device = parameter['device']
self.es = parameter['embed_dim']
self.embedding = nn.Embedding(num_ent + 1, self.es)
nn.init.xavier_uniform_(self.embedding.weight)
def forward(self, triples):
idx = [[[t[0], t[2]] for t in batch] for batch in triples]
idx = torch.LongTensor(idx).to(self.device)
return self.embedding(idx)
class MetaLearner(nn.Module):
def __init__(self, K, embed_size=100, num_hidden1=500, num_hidden2=200, out_size=100, dropout_p=0.5):
super(MetaLearner, self).__init__()
self.embed_size = embed_size
self.K = K
self.out_size = out_size
self.rel_fc1 = nn.Sequential(OrderedDict([
('fc', nn.Linear(2*embed_size, num_hidden1)),
('bn', nn.BatchNorm1d(K)),
('relu', nn.LeakyReLU()),
('drop', nn.Dropout(p=dropout_p)),
]))
self.rel_fc2 = nn.Sequential(OrderedDict([
('fc', nn.Linear(num_hidden1, num_hidden2)),
('bn', nn.BatchNorm1d(K)),
('relu', nn.LeakyReLU()),
('drop', nn.Dropout(p=dropout_p)),
]))
self.rel_fc3 = nn.Sequential(OrderedDict([
('fc', nn.Linear(num_hidden2, out_size)),
('bn', nn.BatchNorm1d(K)),
]))
nn.init.xavier_normal_(self.rel_fc1.fc.weight)
nn.init.xavier_normal_(self.rel_fc2.fc.weight)
nn.init.xavier_normal_(self.rel_fc3.fc.weight)
def forward(self, inputs):
size = inputs.shape
x = inputs.contiguous().view(size[0], size[1], -1)
x = self.rel_fc1(x)
x = self.rel_fc2(x)
x = self.rel_fc3(x)
x = torch.mean(x, 1)
return x.view(size[0], 1, 1, self.out_size)
class EmbeddingLearner(nn.Module):
def __init__(self):
super(EmbeddingLearner, self).__init__()
def forward(self, h, t, r, pos_num):
score = -torch.norm(h + r - t, 2, -1).squeeze(2)
p_score = score[:, :pos_num]
n_score = score[:, pos_num:]
return p_score, n_score
class MetaTL(nn.Module):
def __init__(self, itemnum, parameter):
super(MetaTL, self).__init__()
self.device = parameter['device']
self.beta = parameter['beta']
self.dropout_p = parameter['dropout_p']
self.embed_dim = parameter['embed_dim']
self.margin = parameter['margin']
self.embedding = Embedding(itemnum, parameter)
self.relation_learner = MetaLearner(parameter['K'] - 1, embed_size=100, num_hidden1=500,
num_hidden2=200, out_size=100, dropout_p=self.dropout_p)
self.embedding_learner = EmbeddingLearner()
self.loss_func = nn.MarginRankingLoss(self.margin)
self.rel_q_sharing = dict()
def split_concat(self, positive, negative):
pos_neg_e1 = torch.cat([positive[:, :, 0, :],
negative[:, :, 0, :]], 1).unsqueeze(2)
pos_neg_e2 = torch.cat([positive[:, :, 1, :],
negative[:, :, 1, :]], 1).unsqueeze(2)
return pos_neg_e1, pos_neg_e2
def forward(self, task, iseval=False, curr_rel=''):
# transfer task string into embedding
support, support_negative, query, negative = [self.embedding(t) for t in task]
K = support.shape[1] # num of K
num_sn = support_negative.shape[1] # num of support negative
num_q = query.shape[1] # num of query
num_n = negative.shape[1] # num of query negative
rel = self.relation_learner(support)
rel.retain_grad()
rel_s = rel.expand(-1, K+num_sn, -1, -1)
if iseval and curr_rel != '' and curr_rel in self.rel_q_sharing.keys():
rel_q = self.rel_q_sharing[curr_rel]
else:
sup_neg_e1, sup_neg_e2 = self.split_concat(support, support_negative)
p_score, n_score = self.embedding_learner(sup_neg_e1, sup_neg_e2, rel_s, K)
y = torch.Tensor([1]).to(self.device)
self.zero_grad()
loss = self.loss_func(p_score, n_score, y)
loss.backward(retain_graph=True)
grad_meta = rel.grad
rel_q = rel - self.beta*grad_meta
self.rel_q_sharing[curr_rel] = rel_q
rel_q = rel_q.expand(-1, num_q + num_n, -1, -1)
que_neg_e1, que_neg_e2 = self.split_concat(query, negative)
p_score, n_score = self.embedding_learner(que_neg_e1, que_neg_e2, rel_q, num_q)
return p_score, n_score
###Output
_____no_output_____
###Markdown
Training and Inference Meta-learning aims to learn a model which can adapt to new tasks (i.e., new users) with a few training samples. To enable meta-learning in sequential recommendation for cold-start users, we formulate training a sequential recommender as solving a new few-shot learning problem (i.e., the meta-testing task) by training on many sampled similar tasks (i.e., the meta-training tasks). Each task includes a support set $S$ and a query set $Q$, which can be regarded as the “training” set and “testing” set of the task. For example, while constructing a task $T_n$, given user $u_j$ with initial interactions in sequence (e.g., $i_A \rightarrow_{u_j} i_B \rightarrow_{u_j} i_C$), we will have the set of transition pairs $\{ i_A \rightarrow_{u_j} i_B, i_B \rightarrow_{u_j} i_C \}$ as support and predict for the query $i_C \rightarrow_{u_j} ?$. When testing on a new user $u_{test}$, we first construct the support set $S_{test}$ based on the user’s initial interactions. The model $f_\theta$ is fine-tuned with all the transition pairs in $S_{test}$ and updated to $f_{\theta_{test}'}$, which can be used to generate the updated $tr_{test}$. Given the test query $i_o \rightarrow_{u_{test}} ?$, the preference score for item $i_p$ (as the next interaction) is calculated as $-\|i_o + tr_{test} - i_p\|^2$.
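As a small illustrative sketch (added here for clarity; the toy item IDs, embedding size and variable names are assumptions, not values from the implementation below), this is how a K=3 interaction sequence turns into support transition pairs plus a query pair, and how a translation-style preference score could be computed:
###Code
import numpy as np

# assumed toy sequence of item IDs for one cold-start user (K = 3 initial interactions)
seq = [101, 205, 309]          # i_A -> i_B -> i_C
candidate_item = 412           # a candidate item for the query i_C -> ?

# support set: consecutive transition pairs built from the K initial interactions
support_pairs = [(seq[i], seq[i + 1]) for i in range(len(seq) - 1)]   # [(101, 205), (205, 309)]
# query: predict the item that follows the last observed interaction
query_pair = (seq[-1], candidate_item)

# translation-style scoring with toy embeddings: score = -|| i_o + tr - i_p ||^2
rng = np.random.default_rng(0)
emb = {item: rng.normal(size=8) for item in seq + [candidate_item]}
tr_user = rng.normal(size=8)   # user-specific transition vector produced by the meta-learner

i_o, i_p = emb[query_pair[0]], emb[query_pair[1]]
score = -np.sum((i_o + tr_user - i_p) ** 2)
print(support_pairs, query_pair, score)
###Output
_____no_output_____
###Markdown
The Trainer below implements the actual meta transitional learning loop over these support and query sets.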
###Code
class Trainer:
def __init__(self, data_loaders, itemnum, parameter):
self.parameter = parameter
# data loader
self.train_data_loader = data_loaders[0]
self.dev_data_loader = data_loaders[1]
self.test_data_loader = data_loaders[2]
# parameters
self.batch_size = parameter['batch_size']
self.learning_rate = parameter['learning_rate']
self.epoch = parameter['epoch']
self.print_epoch = parameter['print_epoch']
self.eval_epoch = parameter['eval_epoch']
self.device = parameter['device']
self.MetaTL = MetaTL(itemnum, parameter)
self.MetaTL.to(self.device)
self.optimizer = torch.optim.Adam(self.MetaTL.parameters(), self.learning_rate)
def rank_predict(self, data, x, ranks):
# query_idx is the idx of positive score
query_idx = x.shape[0] - 1
# sort all scores with descending, because more plausible triple has higher score
_, idx = torch.sort(x, descending=True)
rank = list(idx.cpu().numpy()).index(query_idx) + 1
ranks.append(rank)
# update data
if rank <= 10:
data['Hits@10'] += 1
data['NDCG@10'] += 1 / np.log2(rank + 1)
if rank <= 5:
data['Hits@5'] += 1
data['NDCG@5'] += 1 / np.log2(rank + 1)
if rank == 1:
data['Hits@1'] += 1
data['NDCG@1'] += 1 / np.log2(rank + 1)
data['MRR'] += 1.0 / rank
def do_one_step(self, task, iseval=False, curr_rel=''):
loss, p_score, n_score = 0, 0, 0
if not iseval:
self.optimizer.zero_grad()
p_score, n_score = self.MetaTL(task, iseval, curr_rel)
y = torch.Tensor([1]).to(self.device)
loss = self.MetaTL.loss_func(p_score, n_score, y)
loss.backward()
self.optimizer.step()
elif curr_rel != '':
p_score, n_score = self.MetaTL(task, iseval, curr_rel)
y = torch.Tensor([1]).to(self.device)
loss = self.MetaTL.loss_func(p_score, n_score, y)
return loss, p_score, n_score
def train(self):
# initialization
best_epoch = 0
best_value = 0
bad_counts = 0
# training by epoch
for e in range(self.epoch):
# sample one batch from data_loader
train_task, curr_rel = self.train_data_loader.next_batch()
loss, _, _ = self.do_one_step(train_task, iseval=False, curr_rel=curr_rel)
# print the loss on specific epoch
if e % self.print_epoch == 0:
loss_num = loss.item()
print("Epoch: {}\tLoss: {:.4f}".format(e, loss_num))
# do evaluation on specific epoch
if e % self.eval_epoch == 0 and e != 0:
print('Epoch {} Validating...'.format(e))
valid_data = self.eval(istest=False, epoch=e)
print('Epoch {} Testing...'.format(e))
test_data = self.eval(istest=True, epoch=e)
print('Finish')
def eval(self, istest=False, epoch=None):
self.MetaTL.eval()
self.MetaTL.rel_q_sharing = dict()
if istest:
data_loader = self.test_data_loader
else:
data_loader = self.dev_data_loader
data_loader.curr_tri_idx = 0
# initial return data of validation
data = {'MRR': 0, 'Hits@1': 0, 'Hits@5': 0, 'Hits@10': 0, 'NDCG@1': 0, 'NDCG@5': 0, 'NDCG@10': 0}
ranks = []
t = 0
temp = dict()
while True:
# sample all the eval tasks
eval_task, curr_rel = data_loader.next_one_on_eval()
# at the end of sample tasks, a symbol 'EOT' will return
if eval_task == 'EOT':
break
t += 1
_, p_score, n_score = self.do_one_step(eval_task, iseval=True, curr_rel=curr_rel)
x = torch.cat([n_score, p_score], 1).squeeze()
self.rank_predict(data, x, ranks)
# print current temp data dynamically
for k in data.keys():
temp[k] = data[k] / t
sys.stdout.write("{}\tMRR: {:.3f}\tNDCG@10: {:.3f}\tNDCG@5: {:.3f}\tNDCG@1: {:.3f}\tHits@10: {:.3f}\tHits@5: {:.3f}\tHits@1: {:.3f}\r".format(
t, temp['MRR'], temp['NDCG@10'], temp['NDCG@5'], temp['NDCG@1'], temp['Hits@10'], temp['Hits@5'], temp['Hits@1']))
sys.stdout.flush()
# print overall evaluation result and return it
for k in data.keys():
data[k] = round(data[k] / t, 3)
if istest:
print("TEST: \tMRR: {:.3f}\tNDCG@10: {:.3f}\tNDCG@5: {:.3f}\tNDCG@1: {:.3f}\tHits@10: {:.3f}\tHits@5: {:.3f}\tHits@1: {:.3f}\r".format(
temp['MRR'], temp['NDCG@10'], temp['NDCG@5'], temp['NDCG@1'], temp['Hits@10'], temp['Hits@5'], temp['Hits@1']))
else:
print("VALID: \tMRR: {:.3f}\tNDCG@10: {:.3f}\tNDCG@5: {:.3f}\tNDCG@1: {:.3f}\tHits@10: {:.3f}\tHits@5: {:.3f}\tHits@1: {:.3f}\r".format(
temp['MRR'], temp['NDCG@10'], temp['NDCG@5'], temp['NDCG@1'], temp['Hits@10'], temp['Hits@5'], temp['Hits@1']))
return data
user_train, usernum_train, itemnum, user_input_test, user_test, user_input_valid, user_valid = data_load(params['dataset'], params['K'])
sampler = WarpSampler(user_train, usernum_train, itemnum, batch_size=params['batch_size'], maxlen=params['K'], n_workers=2)
sampler_test = DataLoader(user_input_test, user_test, itemnum, params)
sampler_valid = DataLoader(user_input_valid, user_valid, itemnum, params)
trainer = Trainer([sampler, sampler_valid, sampler_test], itemnum, params)
trainer.train()
sampler.close()
###Output
Epoch: 0 Loss: 1.0004
Epoch: 100 Loss: 0.7276
Epoch: 200 Loss: 0.6102
Epoch: 300 Loss: 0.6143
Epoch: 400 Loss: 0.5779
Epoch: 500 Loss: 0.5438
Epoch: 600 Loss: 0.5271
Epoch: 700 Loss: 0.5430
Epoch: 800 Loss: 0.5642
Epoch: 900 Loss: 0.5107
Epoch: 1000 Loss: 0.5222
Epoch 1000 Validating...
VALID: MRR: 0.302 NDCG@10: 0.345 NDCG@5: 0.303 NDCG@1: 0.187 Hits@10: 0.542 Hits@5: 0.410 Hits@1: 0.187
Epoch 1000 Testing...
TEST: MRR: 0.286 NDCG@10: 0.327 NDCG@5: 0.286 NDCG@1: 0.172 Hits@10: 0.523 Hits@5: 0.397 Hits@1: 0.172
Epoch: 1100 Loss: 0.5396
Epoch: 1200 Loss: 0.5236
Epoch: 1300 Loss: 0.4934
Epoch: 1400 Loss: 0.4948
Epoch: 1500 Loss: 0.5047
Epoch: 1600 Loss: 0.4933
Epoch: 1700 Loss: 0.5068
Epoch: 1800 Loss: 0.4833
Epoch: 1900 Loss: 0.5245
Epoch: 2000 Loss: 0.4982
Epoch 2000 Validating...
VALID: MRR: 0.307 NDCG@10: 0.353 NDCG@5: 0.308 NDCG@1: 0.188 Hits@10: 0.561 Hits@5: 0.422 Hits@1: 0.188
Epoch 2000 Testing...
TEST: MRR: 0.295 NDCG@10: 0.338 NDCG@5: 0.296 NDCG@1: 0.177 Hits@10: 0.538 Hits@5: 0.406 Hits@1: 0.177
Epoch: 2100 Loss: 0.4960
Epoch: 2200 Loss: 0.4943
Epoch: 2300 Loss: 0.4477
Epoch: 2400 Loss: 0.4481
Epoch: 2500 Loss: 0.4429
Epoch: 2600 Loss: 0.4885
Epoch: 2700 Loss: 0.4485
Epoch: 2800 Loss: 0.4438
Epoch: 2900 Loss: 0.4456
Epoch: 3000 Loss: 0.4484
Epoch 3000 Validating...
VALID: MRR: 0.317 NDCG@10: 0.360 NDCG@5: 0.317 NDCG@1: 0.197 Hits@10: 0.558 Hits@5: 0.425 Hits@1: 0.197
Epoch 3000 Testing...
TEST: MRR: 0.302 NDCG@10: 0.347 NDCG@5: 0.304 NDCG@1: 0.182 Hits@10: 0.551 Hits@5: 0.418 Hits@1: 0.182
Epoch: 3100 Loss: 0.4422
Epoch: 3200 Loss: 0.4398
Epoch: 3300 Loss: 0.4230
Epoch: 3400 Loss: 0.3967
Epoch: 3500 Loss: 0.4214
Epoch: 3600 Loss: 0.4144
Epoch: 3700 Loss: 0.3635
Epoch: 3800 Loss: 0.3918
Epoch: 3900 Loss: 0.4223
Epoch: 4000 Loss: 0.4319
Epoch 4000 Validating...
VALID: MRR: 0.322 NDCG@10: 0.371 NDCG@5: 0.326 NDCG@1: 0.195 Hits@10: 0.584 Hits@5: 0.445 Hits@1: 0.195
Epoch 4000 Testing...
TEST: MRR: 0.309 NDCG@10: 0.357 NDCG@5: 0.310 NDCG@1: 0.189 Hits@10: 0.567 Hits@5: 0.423 Hits@1: 0.189
Epoch: 4100 Loss: 0.3717
Epoch: 4200 Loss: 0.3762
Epoch: 4300 Loss: 0.3786
Epoch: 4400 Loss: 0.3803
Epoch: 4500 Loss: 0.3884
Epoch: 4600 Loss: 0.3833
Epoch: 4700 Loss: 0.3913
Epoch: 4800 Loss: 0.4011
Epoch: 4900 Loss: 0.3760
Epoch: 5000 Loss: 0.4257
Epoch 5000 Validating...
VALID: MRR: 0.329 NDCG@10: 0.378 NDCG@5: 0.336 NDCG@1: 0.200 Hits@10: 0.591 Hits@5: 0.462 Hits@1: 0.200
Epoch 5000 Testing...
TEST: MRR: 0.321 NDCG@10: 0.367 NDCG@5: 0.322 NDCG@1: 0.201 Hits@10: 0.575 Hits@5: 0.435 Hits@1: 0.201
Epoch: 5100 Loss: 0.3676
Epoch: 5200 Loss: 0.3505
Epoch: 5300 Loss: 0.3675
Epoch: 5400 Loss: 0.3786
Epoch: 5500 Loss: 0.3471
Epoch: 5600 Loss: 0.3569
Epoch: 5700 Loss: 0.3753
Epoch: 5800 Loss: 0.3767
Epoch: 5900 Loss: 0.3100
Epoch: 6000 Loss: 0.3656
Epoch 6000 Validating...
VALID: MRR: 0.343 NDCG@10: 0.392 NDCG@5: 0.351 NDCG@1: 0.216 Hits@10: 0.607 Hits@5: 0.479 Hits@1: 0.216
Epoch 6000 Testing...
TEST: MRR: 0.329 NDCG@10: 0.377 NDCG@5: 0.334 NDCG@1: 0.207 Hits@10: 0.585 Hits@5: 0.452 Hits@1: 0.207
Epoch: 6100 Loss: 0.3711
Epoch: 6200 Loss: 0.3548
Epoch: 6300 Loss: 0.3829
Epoch: 6400 Loss: 0.3478
Epoch: 6500 Loss: 0.3661
Epoch: 6600 Loss: 0.3433
Epoch: 6700 Loss: 0.3506
Epoch: 6800 Loss: 0.3107
Epoch: 6900 Loss: 0.3364
Epoch: 7000 Loss: 0.3267
Epoch 7000 Validating...
VALID: MRR: 0.344 NDCG@10: 0.395 NDCG@5: 0.351 NDCG@1: 0.216 Hits@10: 0.615 Hits@5: 0.479 Hits@1: 0.216
Epoch 7000 Testing...
TEST: MRR: 0.329 NDCG@10: 0.378 NDCG@5: 0.335 NDCG@1: 0.204 Hits@10: 0.588 Hits@5: 0.458 Hits@1: 0.204
Epoch: 7100 Loss: 0.3744
Epoch: 7200 Loss: 0.3236
Epoch: 7300 Loss: 0.3446
Epoch: 7400 Loss: 0.3261
Epoch: 7500 Loss: 0.3212
Epoch: 7600 Loss: 0.3229
Epoch: 7700 Loss: 0.3204
Epoch: 7800 Loss: 0.3168
Epoch: 7900 Loss: 0.3125
Epoch: 8000 Loss: 0.3491
Epoch 8000 Validating...
VALID: MRR: 0.349 NDCG@10: 0.398 NDCG@5: 0.354 NDCG@1: 0.228 Hits@10: 0.611 Hits@5: 0.473 Hits@1: 0.228
Epoch 8000 Testing...
TEST: MRR: 0.326 NDCG@10: 0.375 NDCG@5: 0.333 NDCG@1: 0.197 Hits@10: 0.588 Hits@5: 0.457 Hits@1: 0.197
Epoch: 8100 Loss: 0.3372
Epoch: 8200 Loss: 0.3157
Epoch: 8300 Loss: 0.3285
Epoch: 8400 Loss: 0.3266
Epoch: 8500 Loss: 0.3086
Epoch: 8600 Loss: 0.3008
Epoch: 8700 Loss: 0.3207
Epoch: 8800 Loss: 0.3412
Epoch: 8900 Loss: 0.3214
Epoch: 9000 Loss: 0.3146
Epoch 9000 Validating...
VALID: MRR: 0.351 NDCG@10: 0.401 NDCG@5: 0.355 NDCG@1: 0.229 Hits@10: 0.615 Hits@5: 0.474 Hits@1: 0.229
Epoch 9000 Testing...
TEST: MRR: 0.336 NDCG@10: 0.383 NDCG@5: 0.341 NDCG@1: 0.214 Hits@10: 0.588 Hits@5: 0.458 Hits@1: 0.214
|
cross_validation_pipeline.ipynb | ###Markdown
Use Sagemaker Pipelines To Orchestrate End To End Cross Validation Model Training Workflow Amazon SageMaker Pipelines simplifies ML workflow orchestration across each step of the ML process, from exploratory data analysis and preprocessing to model training and model deployment. With Sagemaker Pipelines, you can develop a consistent, reusable workflow that integrates with CI/CD pipelines for improved quality and reduced errors throughout the development lifecycle. SageMaker Pipelines An ML workflow built using Sagemaker Pipelines is made up of a series of Steps defined as a directed acyclic graph (DAG). The pipeline is expressed as a JSON definition that captures the relationships between the steps of your pipeline. Here is the terminology used in Sagemaker Pipelines for defining an ML workflow.* Pipelines - Top level definition of a pipeline. It encapsulates name, parameters, and steps. A pipeline is scoped within an account and region. * Parameters - Parameters are defined in the pipeline definition. They introduce variables that can be provided to the pipeline at execution time. Parameters support string, float and integer types. * Pipeline Steps - Define the actions that the pipeline takes and the relationships between steps using properties. Sagemaker Pipelines supports the following step types: Processing, Training, Transform, CreateModel, RegisterModel, Condition, Callback. Notebook Overview This notebook implements a complete Cross Validation ML model workflow using a custom built docker image, HyperparameterTuner for automatic hyperparameter optimization, and the SKLearn framework for the K fold split and model training. The workflow is defined and orchestrated using Sagemaker Pipelines. Here are the main steps involved in the end to end workflow: Defines a list of parameters, with default values, to be used throughout the pipeline. Defines a ProcessingStep with an SKLearn processor to perform KFold cross validation splits. Defines a ProcessingStep that orchestrates cross validation model training with HyperparameterTuner integration. Defines a ConditionStep that validates the model performance against the baseline. Defines a TrainingStep to train the model with the hyperparameters suggested by HyperparameterTuner using the full dataset. Creates a Model package and defines RegisterModel to register the model trained in the previous step with the Sagemaker Model Registry. Dataset The Iris flower data set is a multivariate data set introduced by the British statistician, eugenicist, and biologist Ronald Fisher in his 1936 [paper](https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1469-1809.1936.tb02137.x). The data set consists of 50 samples from each of 3 species of Iris:* Iris setosa * Iris virginica * Iris versicolor There are 4 features available in each sample: the length and the width of the sepals and petals, measured in centimeters. Based on the combination of these four features, we are going to train an SVM-based multiclass classification model to distinguish the species from each other.
###Code
import boto3
import sagemaker
region = boto3.Session().region_name
sagemaker_session = sagemaker.session.Session()
###Output
_____no_output_____
###Markdown
Defines Pipeline Parameters With Pipeline Parameters, you can introduce variables to the pipeline that are specific to the pipeline run. The supported parameter types include: ParameterString - represents a str Python type; ParameterInteger - represents an int Python type; ParameterFloat - represents a float Python type. Additionally, parameters support default values, which can be useful for scenarios where only a subset of the defined parameters need to change. For example, for training a model that uses the k fold Cross Validation method, you could provide the desired k value at pipeline execution time. Here are the parameters for the workflow used in this notebook:* ProcessingInstanceCount - number of instances for a Sagemaker Processing job in the preprocessing step.* ProcessingInstanceType - instance type used for a Sagemaker Processing job in the preprocessing step.* TrainingInstanceType - instance type used for the Sagemaker Training job.* TrainingInstanceCount - number of instances for a Sagemaker Training job.* InferenceInstanceType - instance type for hosting the deployment of the Sagemaker trained model.* HPOTunerScriptInstanceType - instance type for the script processor that triggers the hyperparameter tuning job.* ModelApprovalStatus - the initial approval status for the trained model in the Sagemaker Model Registry.* ExecutionRole - IAM role to use throughout the specific pipeline execution.* DefaultS3Bucket - default S3 bucket name as the object storage for the target pipeline execution.* BaselineModelObjectiveValue - the minimum objective metric used for model evaluation.* S3BucketPrefix - bucket prefix for the pipeline execution.* ImageURI - docker image URI (ECR) for triggering cross validation model training with HyperparameterTuner.* KFold - the value of k to be used in k fold cross validation.* MaxTrainingJobs - maximum number of model training jobs to trigger in a single hyperparameter tuner job.* MaxParallelTrainingJobs - maximum number of parallel model training jobs to trigger in a single hyperparameter tuner job.* MinimumC, MaximumC - hyperparameter range for the SVM 'c' parameter.* MinimumGamma, MaximumGamma - hyperparameter range for the SVM 'gamma' parameter.
###Code
from sagemaker.workflow.parameters import (
ParameterInteger,
ParameterString,
ParameterFloat
)
processing_instance_count = ParameterInteger(name="ProcessingInstanceCount", default_value=1)
processing_instance_type = ParameterString(name="ProcessingInstanceType", default_value="ml.m5.xlarge")
training_instance_type = ParameterString(name="TrainingInstanceType", default_value="ml.m5.xlarge")
training_instance_count = ParameterInteger(name="TrainingInstanceCount", default_value=1)
inference_instance_type = ParameterString(name="InferenceInstanceType", default_value="ml.m5.large")
hpo_tuner_instance_type = ParameterString(name="HPOTunerScriptInstanceType", default_value="ml.t3.medium")
model_approval_status = ParameterString(name="ModelApprovalStatus", default_value="PendingManualApproval")
role = ParameterString(name='ExecutionRole', default_value=sagemaker.get_execution_role())
default_bucket = ParameterString(name="DefaultS3Bucket", default_value=sagemaker_session.default_bucket())
baseline_model_objective_value = ParameterFloat(name='BaselineModelObjectiveValue', default_value=0.6)
bucket_prefix = ParameterString(name="S3BucketPrefix", default_value="cross_validation_iris_classification")
image_uri = ParameterString(name="ImageURI")
k = ParameterInteger(name="KFold", default_value=3)
max_jobs = ParameterInteger(name="MaxTrainingJobs", default_value=3)
max_parallel_jobs = ParameterInteger(name="MaxParallelTrainingJobs", default_value=1)
min_c = ParameterInteger(name="MinimumC", default_value=0)
max_c = ParameterInteger(name="MaximumC", default_value=1)
min_gamma = ParameterFloat(name="MinimumGamma", default_value=0.0001)
max_gamma = ParameterFloat(name="MaximumGamma", default_value=0.001)
gamma_scaling_type = ParameterString(name="GammaScalingType", default_value="Logarithmic")
# Variables / Constants used throughout the pipeline
model_package_group_name="IrisClassificationCrossValidatedModel"
framework_version = "0.23-1"
s3_bucket_base_path=f"s3://{default_bucket}/{bucket_prefix}"
s3_bucket_base_path_train = f"{s3_bucket_base_path}/train"
s3_bucket_base_path_test = f"{s3_bucket_base_path}/test"
s3_bucket_base_path_evaluation = f"{s3_bucket_base_path}/evaluation"
s3_bucket_base_path_jobinfo = f"{s3_bucket_base_path}/jobinfo"
s3_bucket_base_path_output = f"{s3_bucket_base_path}/output"
###Output
_____no_output_____
###Markdown
Preprocessing Step The first step in the K Fold cross validation model workflow is to split the training dataset into k batches randomly. We are going to use the Sagemaker SKLearnProcessor with a preprocessing script to perform the dataset splits and upload the results to the specified S3 bucket for the model training step, as sketched below.
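The split logic itself lives in code/preprocessing.py, which is not shown in this notebook. As a rough sketch only (the fold count, file names and output layout here are assumptions, not the contents of the real script), a K fold split of the Iris data inside the processing container could look like this:
###Code
# Hypothetical sketch of a preprocessing script; not the actual code/preprocessing.py.
import os
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold

# local paths inside the Sagemaker Processing container
train_dir = "/opt/ml/processing/train"
test_dir = "/opt/ml/processing/test"
os.makedirs(f"{train_dir}/all", exist_ok=True)
os.makedirs(test_dir, exist_ok=True)

iris = load_iris(as_frame=True)
df = pd.concat([iris["target"], iris["data"]], axis=1)

# full dataset, used later by the model selection training job
df.to_csv(f"{train_dir}/all/all.csv", index=False, header=False)

# one train/test pair per fold for cross validation (fold count assumed)
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kfold.split(df)):
    df.iloc[train_idx].to_csv(f"{train_dir}/train_{fold}.csv", index=False, header=False)
    df.iloc[test_idx].to_csv(f"{test_dir}/test_{fold}.csv", index=False, header=False)
###Output
_____no_output_____
###Markdown
The pipeline's actual preprocessing step is defined next.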
###Code
from sagemaker.sklearn.processing import SKLearnProcessor
sklearn_processor = SKLearnProcessor(
framework_version=framework_version,
instance_type=processing_instance_type,
instance_count=processing_instance_count,
base_job_name="kfold-crossvalidation-split",
role=role
)
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.workflow.steps import ProcessingStep
step_process = ProcessingStep(
name="PreprocessStep",
processor=sklearn_processor,
outputs=[
ProcessingOutput(output_name="train",
source="/opt/ml/processing/train",
destination=s3_bucket_base_path_train),
ProcessingOutput(output_name="test",
source="/opt/ml/processing/test",
destination=s3_bucket_base_path_test),
],
code="code/preprocessing.py"
)
###Output
_____no_output_____
###Markdown
Cross Validation Model Training Step In the Cross Validation Model Training workflow, a script processor is used to orchestrate k training jobs in parallel; each of the k jobs is responsible for training a model using the specified split samples. Additionally, the script processor leverages the Sagemaker HyperparameterTuner to optimize the hyperparameters and passes these values to the k training jobs. The script processor monitors all training jobs. Once the jobs are complete, the script processor captures key metrics, including the training accuracy and the hyperparameters from the best training job, then uploads the results to the specified S3 bucket location to be used in the model evaluation and model selection steps. The components involved in orchestrating the cross validation model training, hyperparameter optimization and key metrics capture are:* PropertyFile - EvaluationReport, contains the performance metrics from the HyperparameterTuner job, expressed in JSON format.* PropertyFile - JobInfo, contains information about the best training job and the corresponding hyperparameters used for training, expressed in JSON format.* ScriptProcessor - a python script that orchestrates a hyperparameter tuning job for the cross validation model trainings. Custom Docker Image In order to facilitate k fold cross validation training jobs through Sagemaker Automatic Model Tuning, we need to create a custom docker image that includes both the python script that manages the k fold cross validation training jobs and the actual training script that each of the k training jobs submits. For details about adapting custom docker containers to work with Sagemaker, please follow this [link](https://docs.aws.amazon.com/sagemaker/latest/dg/docker-containers-adapt-your-own.html). The docker image used in the pipeline was built using the [Dockerfile](code/Dockerfile) included in this project. Following are the steps for working with [ECR](https://aws.amazon.com/ecr/) for authentication, image building and pushing to the ECR registry for Sagemaker training (follow this [link](https://docs.aws.amazon.com/AmazonECR/latest/userguide/getting-started-cli.html) for official AWS guidance on working with ECR). Prerequisites* [docker](https://docs.docker.com/get-docker/) * [git client](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) * [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) Note: If you use [AWS Cloud9](https://aws.amazon.com/cloud9/) as the CLI terminal, the prerequisites described above are met by default; there is no need to install any additional tools. Steps* Open a new terminal* git clone this project* cd to the code directory* ./build-and-push-docker.sh [aws_acct_id] [aws_region]* capture the ECR repository name printed by the script after a successful run. You'll need to provide the image name at pipeline execution time. Here's an example format of an ECR repo name: [aws_acct_id].dkr.ecr.region.amazonaws.com/sagemaker-cross-validation-pipeline:latest
###Code
from sagemaker.processing import ScriptProcessor
from sagemaker.workflow.properties import PropertyFile
evaluation_report = PropertyFile(
name="EvaluationReport", output_name="evaluation", path="evaluation.json"
)
jobinfo = PropertyFile(
name="JobInfo", output_name="jobinfo", path="jobinfo.json"
)
script_tuner = ScriptProcessor(
image_uri=image_uri,
command=["python3"],
instance_type=hpo_tuner_instance_type,
instance_count=1,
base_job_name="KFoldCrossValidationHyperParameterTuner",
role=role
)
step_cv_train_hpo = ProcessingStep(
name="HyperParameterTuningStep",
processor=script_tuner,
code="code/cross_validation_with_hpo.py",
outputs=[
ProcessingOutput(output_name="evaluation",
source="/opt/ml/processing/evaluation",
destination=s3_bucket_base_path_evaluation),
ProcessingOutput(output_name="jobinfo",
source="/opt/ml/processing/jobinfo",
destination=s3_bucket_base_path_jobinfo)
],
job_arguments=["-k", str(k),
"--image-uri", image_uri,
"--train", s3_bucket_base_path_train,
"--test", s3_bucket_base_path_test,
"--instance-type", training_instance_type,
"--instance-count", str(training_instance_count),
"--output-path", s3_bucket_base_path_output,
"--max-jobs", str(max_jobs),
"--max-parallel-jobs" , str(max_parallel_jobs),
"--min-c", str(min_c),
"--max-c", str(max_c),
"--min-gamma", str(min_gamma),
"--max-gamma", str(max_gamma),
"--gamma-scaling-type", str(gamma_scaling_type),
"--region", str(region)],
property_files=[evaluation_report],
depends_on=['PreprocessStep'])
###Output
_____no_output_____
###Markdown
Model Selection StepModel selection is the final step in cross validation model training workflow. Based on the metrics and hyperparameters acquired from the cross validation steps orchestrated through ScriptProcessor, a Training Step is defined to train a model with the same algorithm used in cross validation training, with all available training data. The model artifacts created from the training process will be used for model registration, deployment and inferences. Components involved in the model selection step: * SKLearn Estimator - A Sagemaker Estimator used in training a final model.* TrainingStep - Workflow step that triggers the model selection process.
###Code
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.steps import TrainingStep
from sagemaker.sklearn.estimator import SKLearn
sklearn_estimator = SKLearn("scikit_learn_iris.py",
framework_version=framework_version,
instance_type=training_instance_type,
py_version='py3',
source_dir="code",
output_path=s3_bucket_base_path_output,
role=role)
step_model_selection = TrainingStep(
name="ModelSelectionStep",
estimator=sklearn_estimator,
inputs={
"train": TrainingInput(
s3_data=f'{step_process.arguments["ProcessingOutputConfig"]["Outputs"][0]["S3Output"]["S3Uri"]}/all',
content_type="text/csv"
),
"jobinfo": TrainingInput(
s3_data=f"{s3_bucket_base_path_jobinfo}",
content_type="application/json"
)
}
)
###Output
_____no_output_____
###Markdown
Register Model With Model RegistryOnce the model selection step is complete, the trained model artifact can be registered with the SageMaker Model Registry.The Model Registry catalogs the trained model to enable model versioning, performance metrics and approval status capture. Additionally, models versioned in the Model Registry can be deployed through CI/CD. Here's a link for more information about the Model Registry: https://docs.aws.amazon.com/sagemaker/latest/dg/model-registry.html Components involved in registering a trained model with the Model Registry:* Model - Model object that contains metadata for the trained model. * CreateModelInput - An object that encapsulates the parameters used to create a SageMaker Model.* CreateModelStep - Workflow Step that creates a SageMaker Model.* ModelMetrics - Captures metadata, including metrics statistics, data constraints, bias and explainability, for the trained model.* RegisterModel - Workflow Step that registers the model with the Model Registry.
###Code
from sagemaker.model import Model
model = Model(
image_uri=sklearn_estimator.image_uri,
model_data=step_model_selection.properties.ModelArtifacts.S3ModelArtifacts,
sagemaker_session=sagemaker_session,
role=role,
)
from sagemaker.model_metrics import MetricsSource, ModelMetrics
from sagemaker.workflow.step_collections import RegisterModel
model_metrics = ModelMetrics(
model_statistics=MetricsSource(
s3_uri="{}/evaluation.json".format(
step_cv_train_hpo.arguments["ProcessingOutputConfig"]["Outputs"][0]["S3Output"]["S3Uri"]
),
content_type="application/json",
)
)
step_register_model = RegisterModel(
name="RegisterModelStep",
estimator=sklearn_estimator,
model_data=step_model_selection.properties.ModelArtifacts.S3ModelArtifacts,
content_types=["text/csv"],
response_types=["text/csv"],
inference_instances=["ml.t2.medium", "ml.m5.xlarge"],
transform_instances=["ml.m5.xlarge"],
model_package_group_name=model_package_group_name,
approval_status=model_approval_status,
model_metrics=model_metrics,
)
###Output
_____no_output_____
###Markdown
Condition StepSagemaker Pipelines supports condition steps for evaluating the conditions of step properties to determine the next action.In the context of cross validation model workflow, a condition step is defined to evaluate model metrics captured in the Cross Validation Training Step to determine whether the model selection step should take place. This step evaluates a ConditionGreaterThanOrEqualTo based on a given baseline model objective value to determine the next steps.Components involved in defining a Condition Step:ConditionGreaterThanOrEqualTo - A condition that defines the evaluation criteria for the given model objective value and model performance metrics captured in the evaluation report. This condition returns True if the model performance metrics is greater or equals to the baseline model objective value, False otherwise.ConditionStep - Workflow Step that performs the evaluation based on the criteria defined in ConditionGreaterThanOrEqualTo
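The JsonGet expression below reads multiclass_classification_metrics.accuracy.value from the EvaluationReport property file. That path presupposes an evaluation.json shaped roughly as in the following hedged sketch (the real file is written to /opt/ml/processing/evaluation/evaluation.json by code/cross_validation_with_hpo.py, which is not shown here, and the number is made up):

```python
# Hypothetical shape of evaluation.json produced by the tuning script;
# only the accuracy value is consumed by the condition step below.
import json

evaluation_report_example = {
    "multiclass_classification_metrics": {
        "accuracy": {
            "value": 0.93,  # made-up best cross-validation accuracy
        }
    }
}
print(json.dumps(evaluation_report_example, indent=2))
```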
###Code
from sagemaker.workflow.conditions import ConditionGreaterThanOrEqualTo
from sagemaker.workflow.condition_step import (
ConditionStep,
JsonGet,
)
cond_gte = ConditionGreaterThanOrEqualTo(
left=JsonGet(
step=step_cv_train_hpo,
property_file=evaluation_report,
json_path="multiclass_classification_metrics.accuracy.value",
),
right=baseline_model_objective_value,
)
step_cond = ConditionStep(
name="ModelEvaluationStep",
conditions=[cond_gte],
if_steps=[step_model_selection, step_register_model],
else_steps=[],
)
###Output
_____no_output_____
###Markdown
Define A PipelineWith the pipeline components defined, we can create the SageMaker Pipeline by associating the Parameters, Steps and Conditions created in this notebook.The pipeline definition encodes a pipeline using a directed acyclic graph (DAG) with relationships between each step of the pipeline. The structure of a pipeline's DAG is determined by either data dependencies between steps, or custom dependencies defined in the Steps.For the CrossValidation training pipeline, relationships between the components in the DAG are specified in the depends_on attribute of the Steps.A pipeline instance is composed of a name, parameters, and steps.
###Code
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.pipeline_experiment_config import PipelineExperimentConfig
from sagemaker.workflow.execution_variables import ExecutionVariables
pipeline_name = f"CrossValidationTrainingPipeline"
pipeline = Pipeline(
name=pipeline_name,
parameters=[
processing_instance_count,
processing_instance_type,
training_instance_type,
training_instance_count,
inference_instance_type,
hpo_tuner_instance_type,
model_approval_status,
role,
default_bucket,
baseline_model_objective_value,
bucket_prefix,
image_uri,
k,
max_jobs,
max_parallel_jobs,
min_c,
max_c,
min_gamma,
max_gamma,
gamma_scaling_type
],
pipeline_experiment_config=PipelineExperimentConfig(
ExecutionVariables.PIPELINE_NAME,
ExecutionVariables.PIPELINE_EXECUTION_ID),
steps=[step_process, step_cv_train_hpo, step_cond],
)
###Output
_____no_output_____
###Markdown
Examine Pipeline DefinitionBefore triggering a pipeline run, it's a good practice to examine the JSON pipeline definition to ensure that it's well-formed.
###Code
import json
json.loads(pipeline.definition())
###Output
_____no_output_____
###Markdown
Pipeline CreationSubmit the pipeline definition to the SageMaker Pipelines service to create a pipeline if it doesn't exist, or update the pipeline if it does. The role passed in is used by SageMaker Pipelines to create all of the jobs defined in the steps.
###Code
pipeline.upsert(role_arn=role)
###Output
_____no_output_____
###Markdown
Trigger Pipeline ExecutionAfter creating a pipeline definition, you can submit it to SageMaker to start an execution, optionally providing the parameters specific to the run.
###Code
# Before triggering the pipeline, make sure to override the ImageURI parameter value with
# one created the previous step.
execution = pipeline.start(
parameters=dict(
BaselineModelObjectiveValue=0.8,
MinimumC=0,
MaximumC=1,
MaxTrainingJobs=3,
ImageURI="041158455166.dkr.ecr.us-east-1.amazonaws.com/sagemaker-cross-validation-pipeline:latest"
))
###Output
_____no_output_____
###Markdown
Examine a Pipeline ExecutionExamine the pipeline execution at runtime by using the SageMaker SDK.
###Code
execution.describe()
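# Added illustration (not part of the original notebook): besides describe(), the
# SageMaker SDK's pipeline execution object can report per-step status, which is
# handy while the HPO and condition steps are still running.
execution.list_steps()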
###Output
_____no_output_____
###Markdown
Wait For The Pipeline Execution To Complete Pipeline execution supports waiting for the job to complete synchronously.
###Code
execution.wait()
###Output
_____no_output_____ |
notebooks/PyTorch with Skorch.ipynb | ###Markdown
PyTorch/TensorFlow are extremely verbose packages. The good thing about that is that you learn by having to do every single step yourself. The obvious bad thing is that you have to do every single step yourself, and whenever you have to write a lot of code, bugs appear... There are many packages that shorten the amount of code people have to write. For example:- Ignite --> https://pytorch.org/ignite/ | https://github.com/pytorch/ignite (3.2k stars)- Pytorch-lightning --> https://www.pytorchlightning.ai/ (11.3k stars)- Skorch --> https://github.com/skorch-dev/skorch (3.7k stars)- Poutyne --> https://poutyne.org/ | https://github.com/GRAAL-Research/poutyne (422 stars)- Catalyst --> https://github.com/catalyst-team/catalyst (2.4k stars) An example of what these frameworks are for:
###Code
import numpy as np
from sklearn.datasets import make_classification
from torch import nn
from skorch import NeuralNetClassifier
X, y = make_classification(1000, 24, n_informative=10, random_state=0)
X = X.astype(np.float32)
y = y.astype(np.int64)
class MyNet(nn.Module):
def __init__(self, num_units=10, nonlin=nn.ReLU()):
super(MyNet, self).__init__()
self.dense0 = nn.Linear(24, num_units)
self.nonlin = nonlin
self.dropout = nn.Dropout(0.5)
self.dense1 = nn.Linear(num_units, num_units)
self.output = nn.Linear(num_units, 2)
self.softmax = nn.Softmax(dim=-1)
def forward(self, x, **kwargs):
X = self.nonlin(self.dense0(x))
X = self.dropout(X)
X = self.nonlin(self.dense1(X))
X = self.softmax(self.output(X))
return X
net = NeuralNetClassifier(
MyNet,
max_epochs=10,
lr=0.1,
# Shuffle training data on each epoch
#iterator_train__shuffle=True,
)
net.fit(X, y)
y_proba = net.predict_proba(X)
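# Added illustration (not part of the original notebook): because the skorch wrapper
# is a scikit-learn compatible estimator, it plugs straight into sklearn tooling.
# A minimal grid search over the learning rate and hidden-layer size, following the
# parameter naming convention from skorch's documentation ('module__<init arg>'):
from sklearn.model_selection import GridSearchCV

search_params = {
    'lr': [0.05, 0.1],
    'module__num_units': [10, 20],
}
gs = GridSearchCV(net, search_params, refit=False, cv=3, scoring='accuracy')
gs.fit(X, y)
print(gs.best_score_, gs.best_params_)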
from pycaret.datasets import get_data
from pycaret.classification import *
data = get_data('diabetes')
clf1 = setup(data, target = 'Class variable')
# The transformed data has shape (_, 24)
create_model(net)
###Output
_____no_output_____ |
Notebooks/Antonio Marinho/gaussian_similaridade.ipynb | ###Markdown
Spatial Filters - Gaussian / Image Similarity Antonio Marinho Importing libraries
###Code
import cv2
import numpy as np
from matplotlib import pyplot as plt
from skimage.measure import compare_ssim
from skimage.measure import compare_mse
%matplotlib inline
def mse(imageA, imageB):
err = np.sum((imageA.astype("float") - imageB.astype("float")) ** 2)
err /= float(imageA.shape[0] * imageA.shape[1])
return err
# loading the image
img = cv2.imread('lena_gray.jpg',0)
blur= cv2.GaussianBlur(img,(5,5),0)
###Output
_____no_output_____
###Markdown
Displaying the image
###Code
plt.rcParams['figure.figsize'] = (11,7)
plt.subplot(1,2,1),plt.imshow(img,cmap = 'gray')
plt.title('Original'), plt.xticks([]), plt.yticks([])
plt.subplot(1,2,2),plt.imshow(blur,cmap='gray')
plt.title('Blur Gaussian'), plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
Similarity between images
###Code
# using Mean Squared Error (MSE)
compare_mse(img,blur)
# using Structural Similarity (SSIM)
(score, diff) = compare_ssim(img, blur, full=True)
score
compare_mse(img,blur)
###Output
_____no_output_____
###Markdown
Image resulting from the difference
###Code
plt.imshow(diff,cmap = 'gray')
###Output
_____no_output_____ |
Bifurcation_and_outputs.ipynb | ###Markdown
This code is under MIT license. See the License.txt file. Loading the libraries and dependencies
###Code
## Libraries ##
import matplotlib.pyplot as plt
from matplotlib import animation
import numpy as np
import time as libtime
import seaborn as sns; sns.set()
from decimal import Decimal
import gc
plt.style.use('default')
## Home made function
from Main import * ## The model per se
from Numerical_functions import * ## Few helpful functions
###Output
_____no_output_____
###Markdown
Bifurcation
###Code
### Here is a description of the parameters of interest of the model:
"""
Cell:
- rc : cell radius
- Vc : cell volume
- Qc : structural biomass of the cell (depends on rc)
Reproduction and death:
- gmax : maximum of g, the division rate (of a sigmoïd function of the log-internal state of the cell)
- thresh : inflexion point of g, the rate, at which g=gmax/2 (depends on rc)
- slope : slope in of g at g(thresh)
- kd : maximum metabolism related decay rate (when metabolism is insufficient to fuel cell maintenance; see below)
- mort : intrinsic mortality rate; can be seen as a senescence term or an accidental death term.
Metabolism:
- qmax : maximum metabolic rate (Michaelis-Menten; depends on rc)
- ks : half-saturation constant of the metabolism (Michaelis-Menten)
- mg : minimum metabolic rate to fulfill the energetic requirement for cell maintenance (depends on rc)
Environment:
- QH, QC, QN, QG : inward/outward fluxes constants. From SBL (diffusion and solubilisation) for H2 (H), CO2 (C), and CH4 (G).
From hydrothermal vents for NH4 (N).
- Hinf, Cinf, Ninf, Ginf : abiotic equilibrium values (fixed for the moment assuming no feedback onto
atmospheric composition). Equals to 0 for Xo (the dead biomass)
- Still need a term describing the externalization of Xo (dead biomass). For the moment it is described by a constant
(burial?).
"""
### Default parametrization
## t is time (...):
tmax = 1000
dt = 0.0025
## Trait
rc = 1e-6 # µm
Vc = (4/3)*pi*rc**3 # µm3
Qc = (18E-12*Vc**(0.94))/10 # molX.Cell-1 Menden-Deuer and Lessard 2000
ks = 1e-12 # molX.L-1 arbitrary
qmax = 1e-1 # (d.(molX.L-1))-1 Gonzalez Cabaleiro 2015 PLOS
qmax = qmax*Qc/Vc # (d.Cell)-1
mg = 4500 # J.(molX.h-1) Gonzalez Cabaleiro 2015 PLOS
mg = 4500*24*Qc # J.(Cell.d-1)
kd = 1 # d-1 Batstone et al 2002 in GC 2015 ISME
mort = 0.1 # d-1 arbitrary
thresh = 10*Qc # molX.Cell-1 arbitrary
slope = 10 # arbitrary
gmax = 1 # d-1 arbitrary
## Environment (more details in Environment.py)
QH = 1.1e-1 # m(x100).d-1 Kharecha
QC = 4.1e-2 # m(x100).d-1 Kharecha
QN = 1E-6 # m(x100).d-1 arbitrary
QG = 3.9e-2 # m(x100).d-1 Kharecha
Hinf = 7.8e-7 # mol.L-1 Kharecha
Cinf = 2.5e-6 # mol.L-1 Kharecha
Ninf = 1e-8 # mol.L-1 arbitrary
Ginf = 1.4e-8 # mol.L-1 Kharecha
### Vectors of parameter values explored during bifurcation:
"""
qmax_vec = np.exp(np.arange(np.log(1E-2),np.log(1E1),(np.log(1E1)-np.log(1E-2))/200))*Qc/Vc
qmax_vec = list(reversed(qmax_vec))
ks_vec = np.exp(np.arange(np.log(1E-6),np.log(4E-4),(np.log(4E-4)-np.log(1E-6))/100))
ks_vec = list(reversed(ks_vec))
ks_vec = np.exp(np.arange(np.log(1E-10),np.log(1E5),(np.log(1E5)-np.log(1E-10))/200))
#ks_vec = list(reversed(ks_vec))
gmax_vec = np.exp(np.arange(np.log(1E-1),np.log(3E0),(np.log(3E0)-np.log(1E-1))/200))
gmax_vec = list(reversed(gmax_vec))
thresh_vec = np.exp(np.arange(np.log(8E-13),np.log(8E-9),(np.log(8E-9)-np.log(8E-13))/100))
#thresh_vec = list(reversed(gmax_vec))
Hinf_vec = np.exp(np.arange(np.log(1E-9),np.log(1E-5),(np.log(1E-5)-np.log(1E-9))/100))
Hinf_vec = list(reversed(Hinf_vec))
Cinf_vec = np.exp(np.arange(np.log(8E-10),np.log(1E15),(np.log(1E15)-np.log(8E-10))/200))
#Cinf_vec = list(reversed(Cinf_vec))
kd_vec = np.exp(np.arange(np.log(9E-1),np.log(1E2),(np.log(1E2)-np.log(9E-1))/100))
#kd_vec = list(reversed(kd_vec))
mort_vec = np.exp(np.arange(np.log(1E-2),np.log(0.9),(np.log(0.9)-np.log(1E-2))/200))
#mort_vec = list(reversed(mort_vec))
rc_vec = np.exp(np.arange(np.log(1E-7),np.log(1E-3),(np.log(1E-3)-np.log(1E-7))/20))
rc_vec = list(reversed(rc_vec)) # Choose if the bifurcation is upward or downward
"""
### Vector of the bifurcation actually performed
Hinf_vec = np.exp(np.arange(np.log(1E-9),np.log(1E-5),(np.log(1E-5)-np.log(1E-9))/100))
Hinf_vec = list(reversed(Hinf_vec))
par_vec = Hinf_vec
### Arrays storing the results of the bifurcation
NC_mat = []
X_mat = []
H_mat = []
C_mat = []
N_mat = []
G_mat = []
Xo_mat = []
i = 0
init = [Hinf,Cinf,1e2,1E-10,0,1e4,thresh]
#init = [Hinf,par_vec[i],1e2,1E-10,0,1e4,thresh] ## to use when the bifurcation is performed on an environmental parameter
for i in range(0,len(par_vec)):
Hinf = par_vec[i] # Replace by the parameter on which the bifurcation is performed
## switch-off the parameter on which the bifurcation is performed
rc = 1e-6 # µm
Vc = (4/3)*pi*rc**3 # µm3
Qc = (18E-12*Vc**(0.94))/10 # molX.Cell-1 Menden-Deuer and Lessard 2000
ks = 1e-12 # molX.L-1 arbitrary
qmax = 1e-1 # (d.(molX.L-1))-1 Gonzalez Cabaleiro 2015 PLOS
qmax = qmax*Qc/Vc # (d.Cell)-1
mg = 4500 # J.(molX.h-1) Gonzalez Cabaleiro 2015 PLOS
mg = 4500*24*Qc # J.(Cell.d-1)
kd = 1 # d-1 Batstone et al 2002 in GC 2015 ISME
mort = 0.1 # d-1 arbitrary
thresh = 10*Qc # molX.Cell-1 arbitrary
slope = 10 # arbitrary
#gmax = 1 # d-1 arbitrary
QH = 1.1e-1 # m(x100).d-1 Kharecha
QC = 4.1e-2 # m(x100).d-1 Kharecha
QN = 1E-6 # m(x100).d-1 arbitrary
QG = 3.9e-2 # m(x100).d-1 Kharecha
#Hinf = 7.8e-7 # mol.L-1 Kharecha -- switched off: Hinf is the bifurcation parameter here (par_vec = Hinf_vec)
Cinf = 2.5e-6 # mol.L-1 Kharecha
Ninf = 1e-8 # mol.L-1 arbitrary
Ginf = 1.4e-8 # mol.L-1 Kharecha
Env = [Hinf,Cinf,Ninf,Ginf,QH,QC,QN,QG]
starters = [rc,Vc,Qc,ks,qmax,mg,kd,mort,thresh,slope,gmax]
if i>0:
tmax = 100
time = np.arange(0,tmax,dt)
NCT,XT,HT,CT,NT,GT,XoT,D,time = Run_Profile(init,starters,Env,tmax=tmax,T=TS,dt = dt)
NCT = np.array(NCT)
XT = np.array(XT)
HT = np.array(HT)
CT = np.array(CT)
NT = np.array(NT)
GT = np.array(GT)
XoT = np.array(XoT)
D = np.array(D)
NC_mat.append(NCT)
X_mat.append(XT)
H_mat.append(HT)
C_mat.append(CT)
N_mat.append(NT)
G_mat.append(GT)
Xo_mat.append(XoT)
init = [HT[len(HT)-1],CT[len(CT)-1],max([NT[len(NT)-1],1E-1]),GT[len(GT)-1],XoT[len(XoT)-1],NCT[len(NCT)-1],max([Qc/10,XT[len(XT)-1]])]
print("Progress => {:2.1%}".format((i+1)/len(par_vec)), end="\r")
gc.collect()
optNCT_array = []
for i in range(0,len(par_vec)):
temp_NCT = NC_mat[i]
endNCT = temp_NCT[int(np.floor(len(temp_NCT))*0.5) : len(temp_NCT)]
deltaNCT = diff(endNCT)
optNCT = np.abs(diff(np.sign(deltaNCT)))
t_optNCT = find(optNCT,2)
if not t_optNCT:
t_optNCT = [len(endNCT)-3,len(endNCT)-3]
optNCT = endNCT[np.array(t_optNCT)+2]
optNCT_array.append(optNCT)
optCT_array = []
for i in range(0,len(par_vec)):
temp_CT = C_mat[i]
endCT = temp_CT[int(np.floor(len(temp_CT))*0.5) : len(temp_CT)]
deltaCT = diff(endCT)
optCT = np.abs(diff(np.sign(deltaCT)))
t_optCT = find(optCT,2)
if not t_optCT:
t_optCT = [len(endCT)-3,len(endCT)-3]
optCT = endCT[np.array(t_optCT)+2]
optCT_array.append(optCT)
optHT_array = []
for i in range(0,len(par_vec)):
temp_HT = H_mat[i]
endHT = temp_HT[int(np.floor(len(temp_HT))*0.5) : len(temp_HT)]
deltaHT = diff(endHT)
optHT = np.abs(diff(np.sign(deltaHT)))
t_optHT = find(optHT,2)
if not t_optHT:
t_optHT = [len(endHT)-3,len(endHT)-3]
optHT = endHT[np.array(t_optHT)+2]
optHT_array.append(optHT)
optGT_array = []
for i in range(0,len(par_vec)):
temp_GT = G_mat[i]
endGT = temp_GT[int(np.floor(len(temp_GT))*0.5) : len(temp_GT)]
deltaGT = diff(endGT)
optGT = np.abs(diff(np.sign(deltaGT)))
t_optGT = find(optGT,2)
if not t_optGT:
t_optGT = [len(endGT)-3,len(endGT)-3]
optGT = endGT[np.array(t_optGT)+2]
optGT_array.append(optGT)
optXoT_array = []
for i in range(0,len(par_vec)):
temp_XoT = Xo_mat[i]
endXoT = temp_XoT[int(np.floor(len(temp_XoT))*0.5) : len(temp_XoT)]
deltaXoT = diff(endXoT)
optXoT = np.abs(diff(np.sign(deltaXoT)))
t_optXoT = find(optXoT,2)
if not t_optXoT:
t_optXoT = [len(endXoT)-3,len(endXoT)-3]
optXoT = endXoT[np.array(t_optXoT)+2]
optXoT_array.append(optXoT)
optXT_array = []
for i in range(0,len(par_vec)):
temp_XT = X_mat[i]*NC_mat[i]
endXT = temp_XT[int(np.floor(len(temp_XT))*0.5) : len(temp_XT)]
deltaXT = diff(endXT)
optXT = np.abs(diff(np.sign(deltaXT)))
t_optXT = find(optXT,2)
if not t_optXT:
t_optXT = [len(endXT)-3,len(endXT)-3]
optXT = endXT[np.array(t_optXT)+2]
optXT_array.append(optXT)
optNT_array = []
for i in range(0,len(par_vec)):
temp_NT = N_mat[i]
endNT = temp_NT[int(np.floor(len(temp_NT))*0.5) : len(temp_NT)]
deltaNT = diff(endNT)
optNT = np.abs(diff(np.sign(deltaNT)))
t_optNT = find(optNT,2)
if not t_optNT:
t_optNT = [len(endNT)-3,len(endNT)-3]
optNT = endNT[np.array(t_optNT)+2]
optNT_array.append(optNT)
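# --- Added note (not part of the original code) -------------------------------
# The seven nearly identical blocks above extract the local extrema of the second
# half of each time series (sign changes of its first difference). A single helper,
# sketched here with plain numpy instead of the diff/find utilities, could replace them:
def steady_state_extrema(series):
    """Local extrema of the last half of `series`; falls back to a late value
    (duplicated) when the tail is monotonic, mirroring the blocks above."""
    tail = np.asarray(series)[len(series) // 2:]
    turning = np.abs(np.diff(np.sign(np.diff(tail))))
    idx = np.where(turning == 2)[0]
    if idx.size == 0:
        idx = np.array([len(tail) - 3, len(tail) - 3])
    return tail[idx + 2]
# e.g. optNCT_array = [steady_state_extrema(NC_mat[i]) for i in range(len(par_vec))]
# ------------------------------------------------------------------------------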
fig, ax = plt.subplots(2,4,sharex=False)
for i in range(len(par_vec)):
#Cinf = par_vec[i]
ax[0,0].loglog([par_vec[i]]*len(optHT_array[i]),optHT_array[i],'.',color = 'royalblue', ms = 1, alpha = 1)
ax[0,0].loglog([par_vec[i]]*len(optHT_array[i]),QH*(Hinf-optHT_array[i]),'.',color = 'red', ms = 1, alpha = 1)
ax[0,0].loglog(par_vec[i],Hinf,'.',ms=1,color='black')
ax[0,0].set_title('H2')
ax[0,0].set_ylim([1E-8,1E-5])
ax[0,0].set_xlim([min(par_vec)/8,max(par_vec)*2])
ax[0,1].loglog([par_vec[i]]*len(optCT_array[i]),optCT_array[i],'.',color = 'royalblue', ms = 1, alpha = 1)
ax[0,1].loglog([par_vec[i]]*len(optCT_array[i]),QC*(Cinf-optCT_array[i]),'.',color = 'red', ms = 1, alpha = 1)
ax[0,1].loglog(par_vec[i],Cinf,'.',ms=1,color='black')
ax[0,1].set_title('CO2')
ax[0,1].set_ylim([1E-8,1E-5])
ax[0,1].set_xlim([min(par_vec)/8,max(par_vec)*2])
ax[0,2].loglog([par_vec[i]]*len(optGT_array[i]),optGT_array[i],'.',color = 'royalblue', ms = 1)
ax[0,2].loglog([par_vec[i]]*len(optGT_array[i]),abs(QG*(Ginf-optGT_array[i])),'.',color = 'red', ms = 1)
ax[0,2].loglog(par_vec[i],Ginf,'.',ms=1,color='black')
ax[0,2].set_title('CH4')
ax[0,2].set_ylim([5E-9,5E-6])
ax[0,2].set_xlim([min(par_vec)/8,max(par_vec)*2])
ax[0,3].loglog([par_vec[i]]*len(optNT_array[i]),optNT_array[i],'.',color = 'royalblue', ms = 1)
ax[0,3].set_title('NH4')
ax[0,3].set_ylim([5E1,2E2])
ax[0,3].set_xlim([min(par_vec)/8,max(par_vec)*2])
ax[1,0].loglog([par_vec[i]]*len(optNCT_array[i]),optNCT_array[i],'.',color = 'royalblue', ms = 1)
ax[1,0].set_title('Cells')
ax[1,0].set_ylim([1E0,1E20])
ax[1,0].set_xlim([min(par_vec)/8,max(par_vec)*2])
ax[1,1].loglog([par_vec[i]]*len(optXT_array[i]),optXT_array[i],'.',color = 'royalblue', ms = 1)
ax[1,1].set_title('Organic Biomass')
ax[1,1].set_ylim([1E-14,1E-10])
ax[1,1].set_xlim([min(par_vec)/8,max(par_vec)*2])
ax[1,2].loglog([par_vec[i]]*len(optXoT_array[i]),optXoT_array[i],'.',color = 'royalblue', ms = 1)
ax[1,2].set_title('Dead biomass')
ax[1,2].set_ylim([1E-12,1E-10])
ax[1,2].set_xlim([min(par_vec)/8,max(par_vec)*2])
ax[1,3].axis('off')
ax[1,3].axes.get_xaxis().set_visible(False)
fig.subplots_adjust(hspace=0.5,wspace=0.35)
fig.set_figwidth(15)
fig.add_subplot(111, frameon=False)
plt.tick_params(labelcolor='none', top='off', bottom='off', left='off', right='off')
plt.xlabel('$[H_2]_{oc.}^{abiotic eq.}$ ($mol.L_{-1}$)',labelpad=20,fontsize=15)
plt.savefig('Bif_gmax.png', dpi=500, format = 'png', transparent = True, bbox_inches='tight')
plt.show()
i = 50
plt.semilogy(Xo_mat[i])
plt.show()
np.where([1,3,2])
###Output
_____no_output_____ |
Models/Pair2/ESCORTS_Linear_Regression_Model.ipynb | ###Markdown
Data Analytics Project - Models Pair 2 - ESCORTS Linear Regression Model--- 1. Import required modules
###Code
import numpy as np
import pandas as pd
from fastai.tabular.core import add_datepart
from sklearn.linear_model import LinearRegression
from sklearn import metrics
from regressors import stats
###Output
/home/varun487/.local/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
###Markdown
--- 2. Get Pair 2 Orders Dataset 2.1. Get the orders
###Code
orders_df = pd.read_csv('../../Preprocess/Pair2/Pair2_orders.csv')
orders_df.head()
orders_df.tail()
###Output
_____no_output_____
###Markdown
2.2. Visualize the orders
###Code
# Plotting the zscore of the Spread
orders_plt = orders_df.plot(x='Date', y='zscore', figsize=(30,15))
# Plotting the lines at mean, 1 and 2 std. dev.
orders_plt.axhline(0, c='black')
orders_plt.axhline(1, c='red', ls = "--")
orders_plt.axhline(-1, c='red', ls = "--")
# Extracting orders
Orders = orders_df['Orders']
# Plot vertical lines where orders are placed
for order in range(len(Orders)):
if Orders[order] != "FLAT":
# GREEN line for a long position
if Orders[order] == "LONG":
orders_plt.axvline(x=order, c = "green")
# RED line for a short position
elif Orders[order] == "SHORT":
orders_plt.axvline(x=order, c = "red")
# BLACK line for getting out of all positions at that point
else:
orders_plt.axvline(x=order, c = "black")
orders_plt.set_ylabel("zscore")
###Output
_____no_output_____
###Markdown
__In the figure above:__- __Blue line__ - zscore of the Spread- __Black horizontal line__ at 0 - Mean- __Red dotted horizontal lines__ - at +1 and -1 standard deviations- __Green vertical line__ - represents long position taken on that day- __Red vertical line__ - represents short position taken on that day- __Black vertical line__ - represents getting out of all open positions till that point 2.3 Visualize the close prices of both stocks
###Code
orders_df_plt = orders_df.plot(x='Date', y=['BEML_Close', 'ESCORTS_Close'], figsize=(30,15))
orders_df_plt.set_xlabel("Date")
orders_df_plt.set_ylabel("Price")
###Output
_____no_output_____
###Markdown
--- 3. ESCORTS Linear Regression Model 3.1. Get the Complete ESCORTS dataset
###Code
escorts_df = pd.read_csv("../../Storage/Companies_with_names_exchange/ESCORTSNSE.csv")
escorts_df.head()
###Output
_____no_output_____
###Markdown
- We can see that we have data from 2017-01-02 3.2. Get ESCORTS training data 3.2.1 Get complete escorts dataset
###Code
escorts_df = escorts_df.drop(columns=['High', 'Low', 'Open', 'Volume', 'Adj Close', 'Company', 'Exchange'])
escorts_df.head()
###Output
_____no_output_____
###Markdown
- We can see that the period where the stocks are correlated and co-integration starts from 2018-09-04.- Thus the test data for which we need to make predictions is from 2018-09-04 to when the period ends at 2018-12-03.- We take 1 year's worth of training data for our model, which means that the time period of our training data is from 2017-09-03 to 2018-09-04. 3.2.2. Crop dataset within training range
###Code
escorts_df_train = escorts_df[escorts_df['Date'] >= '2017-09-03']
escorts_df_train.head()
escorts_df_train = escorts_df_train[escorts_df_train['Date'] <= '2018-09-04']
escorts_df_train.tail()
###Output
_____no_output_____
###Markdown
3.2.3 Add extra date columns to the training data
###Code
add_datepart(escorts_df_train, 'Date')
###Output
_____no_output_____
###Markdown
3.2.4 Get the training data and labels
###Code
escorts_train_X = escorts_df_train.copy()
escorts_train_X = escorts_train_X.reset_index(drop=True)
escorts_train_X_plot = escorts_train_X.copy()
escorts_train_X = escorts_train_X.drop(columns=["Elapsed", "Close"])
escorts_train_X.head()
escorts_train_X.tail()
escorts_train_y = escorts_df[(escorts_df['Date'] >= '2017-09-04') & (escorts_df['Date'] <= '2018-09-04')]['Close']
escorts_train_y
len(escorts_train_X)
len(escorts_train_y)
###Output
_____no_output_____
###Markdown
3.3. Get ESCORTS Test Data
###Code
escorts_test_df = orders_df.copy()
escorts_test_df = escorts_df[(escorts_df['Date'] >= '2018-09-04') & (escorts_df['Date'] <= '2018-12-03')].copy()
escorts_test_df.head()
escorts_test_df.tail()
add_datepart(escorts_test_df, 'Date')
escorts_test_df.head()
escorts_test_X = escorts_test_df.copy()
escorts_test_X = escorts_test_X.drop(columns=['Close', "Elapsed"])
escorts_test_X.reset_index(drop=True, inplace=True)
escorts_test_X.index += 251
escorts_test_X.head()
escorts_test_X.tail()
escorts_test_y = escorts_df[(escorts_df['Date'] >= '2018-09-04') & (escorts_df['Date'] <= '2018-12-03')]
escorts_test_y.reset_index(drop=True, inplace=True)
escorts_test_y.index += 251
escorts_test_y = escorts_test_y['Close']
escorts_test_y
len(escorts_test_X)
len(escorts_test_y)
###Output
_____no_output_____
###Markdown
3.4 Create and Train ESCORTS Model
###Code
model = LinearRegression()
model = model.fit(escorts_train_X, escorts_train_y)
###Output
_____no_output_____
###Markdown
3.5. Get predictions
###Code
predictions = model.predict(escorts_test_X)
predictions_df = pd.DataFrame(predictions, columns=['predictions'])
predictions_df.index += 251
predictions_df
predictions_df['test_data'] = escorts_test_y
predictions_df
predictions = predictions_df['predictions']
predictions
print('Mean Absolute Error:', metrics.mean_absolute_error(escorts_test_y, predictions))
print('Mean Squared Error:', metrics.mean_squared_error(escorts_test_y, predictions))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(escorts_test_y, predictions)))
print('R2 Score:', metrics.r2_score(escorts_test_y, predictions))
###Output
Mean Absolute Error: 247.21142817893997
Mean Squared Error: 66280.512967018
Root Mean Squared Error: 257.45002032825323
R2 Score: -13.734528239591599
###Markdown
3.6. Visualize the predictions vs test data
###Code
escorts_model_plt = escorts_train_X_plot.plot(y=['Close'], figsize=(30,15))
escorts_model_plt.plot(predictions_df['test_data'])
escorts_model_plt.plot(predictions_df['predictions'])
###Output
_____no_output_____
###Markdown
__In the graph above:__- We can see the training data in blue- The test data in orange- The predictions made by the models in green 4. Put the results into a file
###Code
escorts_predictions_data = {'Date': orders_df['Date'], 'Actual_Close': orders_df['ESCORTS_Close']}
escorts_predictions_df = pd.DataFrame(escorts_predictions_data)
escorts_predictions_df.head()
predictions_df = predictions_df.reset_index()
escorts_predictions_df['Linear_regression_Close'] = predictions_df['predictions']
escorts_predictions_df.head()
escorts_predictions_df.to_csv('Escorts_predicitions.csv', index=False)
###Output
_____no_output_____ |
bronze/.ipynb_checkpoints/B96_Homework-checkpoint.ipynb | ###Markdown
prepared by Abuzer Yakaryilmaz (QuSoft@Riga) | December 09, 2018 I have some macros here. If there is a problem with displaying mathematical formulas, please run me to load these macros.$ \newcommand{\bra}[1]{\langle #1\rvert} $$ \newcommand{\ket}[1]{\lvert#1\rangle} $$ \newcommand{\braket}[2]{\langle #1\lvert#2\rangle} $$ \newcommand{\inner}[2]{\langle #1,#2\rangle} $$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $$ \newcommand{\mypar}[1]{\left( #1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ Homework (Rotations) Deadline: January 14, 2019. Send your solutions to [email protected]. Feel free to ask questions by e-mail. Decision problems on streaming inputs 1. Suppose that you read a series of symbols from an alphabet $ \Sigma $. For example, $ \Sigma = \{a,b\} $, and your inputs can be $ aaabbbabababababab $ or $ aaaaaaa $ or $ bbbbbba $, etc.2. You may use one or more qubits for solving the given task. 3. At the beginning, each qubit is set to $ \ket{0} $.4. For each symbol, you fix certain operators and apply them to the quantum register whenever you read this symbol. For example, for each $ a $, you may apply the x-gate on each qubit; and, for each $ b $, you may apply the z-gate and then the h-gate on each qubit.5. After reading the whole input, you make a measurement. You should make a decision on the given input. There will be two possible outcomes. So, you divide all possible outcomes into two sets, and give your decisions accordingly. Example 1 Let $ \Sigma = \{a\} $.We decide whether the length of the given input is odd or even.We use a single qubit. For each symbol, we apply the x-gate.If we observe $ 0 $ (resp., $1$) at the end, we output "even" (resp., "odd"). We test our program on 10 randomly generated strings of length less than 50.
###Code
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
def parity_check(input):
qreg = QuantumRegister(1)
creg = ClassicalRegister(1)
mycircuit = QuantumCircuit(qreg,creg)
for i in range(len(input)):
mycircuit.x(qreg[0])
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=100)
counts = job.result().get_counts(mycircuit)
return counts
from random import randrange
for i in range(10):
length = randrange(50)
input = ""
for j in range(length):
input = input + "a"
counts = parity_check(input)
print("the input is",input)
print("its length is",length)
print(counts)
for key in counts:
if key=="0":
print("the output 'even' is given",counts["0"],"times")
if key=="1":
print("the output 'odd' is given",counts["1"],"times")
print()
###Output
_____no_output_____
###Markdown
Example 2 Let $ \Sigma = \{a,b\} $.We decide whether the input contains an odd number of $a$s and an odd number of $b$s.We use two qubits. For each $a$, we apply the x-gate to the first qubit.For each $b$, we apply the x-gate to the second qubit.If we observe $ 11 $ at the end, we output "yes". Otherwise, we output "no". We test our program on 20 randomly generated strings of length less than 40.
###Code
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer
def double_odd(input):
qreg = QuantumRegister(2)
creg = ClassicalRegister(2)
mycircuit = QuantumCircuit(qreg,creg)
for i in range(len(input)):
if input[i]=="a":
mycircuit.x(qreg[0])
if input[i]=="b":
mycircuit.x(qreg[1])
mycircuit.measure(qreg,creg)
job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=100)
counts = job.result().get_counts(mycircuit)
return counts
from random import randrange
for i in range(20):
length = randrange(40)
input = ""
number_of_as=0
number_of_bs=0
for j in range(length):
if randrange(2)==0:
input = input + "a"
number_of_as = number_of_as + 1
else:
input = input + "b"
number_of_bs = number_of_bs + 1
counts = double_odd(input)
print("the input is",input)
print("the number of as is",number_of_as)
print("the number of bs is",number_of_bs)
print(counts)
number_of_yes = 0
number_of_no = 0
for key in counts:
if key=="11":
number_of_yes = counts["11"]
elif key=="00":
number_of_no = number_of_no + counts["00"]
elif key=="01":
number_of_no = number_of_no + counts["01"]
elif key=="11":
number_of_no = number_of_no + counts["10"]
print("number of yes is",number_of_yes,"and number of no is",number_of_no)
print()
###Output
_____no_output_____
###Markdown
Task 1 Let $ \Sigma = \{a\} $.You will read an input of length which is a multiple of $ 8 $: $ 8i \in \{8,16,24,\ldots\} $.Use a single qubit and determine whether the multiple ($ i $) is odd or even.For each $a$, you can apply a rotation.Test your program with the inputs of lengths $ 8, 16, 24, 32, 40, 48, 56, 64, 72, 80 $.
###Code
#
# your solution
#
###Output
_____no_output_____
###Markdown
Task 2 Let $ \Sigma= \{a\} $.Determine whether the length of the input is a multiple of 7 or not in the following manner:1. If it is a multiple of 7, then output "yes" with probability 1.2. If it is not a multiple of 7, then output "yes" with probability less than 1.For each $a$, you can apply a rotation.Test your program with all inputs of lengths less than 29.Determine the inputs for which you output "yes" nearly three times less than the output "no".
###Code
#
# your solution
#
###Output
_____no_output_____
###Markdown
Task 3 Write down six different possible rotation angles that would work for Task 2. Rotations:(Double click this cell to edit.)1. $ \cdot\frac{\pi}{\cdot} $2. $ \cdot\frac{\pi}{\cdot} $3. $ \cdot\frac{\pi}{\cdot} $4. $ \cdot\frac{\pi}{\cdot} $5. $ \cdot\frac{\pi}{\cdot} $6. $ \cdot\frac{\pi}{\cdot} $ Task 4 Experimentally test each of these rotations for Task 2.
###Code
#
# your solution
#
###Output
_____no_output_____
###Markdown
Task 5We can improve the algorithm for Task 2.Let $ \Sigma= \{a\} $.Determine whether the length of the input is a multiple of 91.There are 90 different rotations that you can use.Randomly pick four of these rotations and fix them.Use four qubits. In each qubit, apply one of these rotations.Test your program with all inputs of lengths less than 92.If the input length is 91, then your program should output "yes" with probability 1.If the input length is not 91, then your program should output "yes" with probability no more than $ \epsilon $, where $ \epsilon < \frac{1}{2}$. [*]Experimentally verify both cases, and also determine the approximate value of $\epsilon$.[*] Remark that the randomly picked rotations would very likely work. But there is still a small chance that $\epsilon$ can be more than $ \frac{1}{2} $ because of certain sets of rotations.
###Code
#
# your solution
#
###Output
_____no_output_____
###Markdown
Task 6 Repeat Task 5 with five and then six rotations by using five and six qubits, respectively.The value of $ \epsilon $ is expected to decrease if we use more rotations and qubits.
###Code
#
# your solution
#
###Output
_____no_output_____ |
notebooks/Resources.ipynb | ###Markdown
Reading Crisp framework
###Code
from IPython.display import IFrame, display
#filepath = "http://wikipedia.org" # works with websites too!
filepath = "../data/Images/CrispFramework.PNG"
IFrame(filepath, width=800, height=300)
###Output
_____no_output_____
###Markdown
Business UnderstandingWe would like to track the spread of the coronavirus across countries in order to help stop the spread and to inform the general public. Data Understanding - Johns Hopkins GitHub Data https://github.com/CSSEGISandData/COVID-19.git - REST API services to retrieve Data www.smartable.ai GitHub Data
###Code
import git
import pandas as pd

# Approach 1
# # automating the pull of the data after cloning
# github_pull = subprocess.Popen("/usr/bin/git pull",
# cwd = os.path.dirname('../data/raw/COVID-19/'),
# shell = True,
# stdout = subprocess.PIPE,
# stderr = subprocess.PIPE)
# (out,error) = github_pull.communicate()
# print("Error:"+str(error))
# print("out:"+str(out))
# Approach 2
# automating the pull of the data after cloning
github_pull = git.cmd.Git('../data/raw/COVID-19/')
github_pull.pull()
data_path='../data/raw/COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv'
pd_raw = pd.read_csv(data_path)
pd_raw.head()
###Output
_____no_output_____
###Markdown
REST API Datawww.smartable.ai
###Code
import json
import requests

# getting data from API
url = "https://coronavirus-smartable.p.rapidapi.com/stats/v1/CA/"
headers = {
'x-rapidapi-key': "572e4af04cmsh3c40e9f74b8b78ap1146f4jsn75ead2590b80",
'x-rapidapi-host': "coronavirus-smartable.p.rapidapi.com"
}
response = requests.request("GET", url, headers=headers)
print(response)
CA_dict=json.loads(response.content) # imports string
with open('../data/raw/SMARTABLE/CA_data.txt', 'w') as outfile:
json.dump(CA_dict, outfile,indent=2)
print(json.dumps(CA_dict,indent=2)) #string dump
df.head()
###Output
_____no_output_____ |
old/NCOV19_X_ray_Classifier.ipynb | ###Markdown
Coronavirus 2019 (COVID-19) Classifier using Posteroanterior views (PA) of Chest Radiograph Images (CXR)Accompanying information [here](https://towardsdatascience.com/using-deep-learning-to-detect-ncov-19-from-x-ray-images-1a89701d1acd).NOTICE: This notebook is provided as-is with no guarantee of accurate diagnosis. The model was trained on heavily skewed data and is not suitable for deployment. It is currently meant to be a proof of concept only. All images used were publicly accessible and usable at the time of training.
###Code
!git clone https://github.com/ieee8023/covid-chestxray-dataset.git
import pandas as pd
import numpy as np
import os, shutil
from fastai.vision import *
from fastai.widgets import ClassConfusion
###Output
_____no_output_____
###Markdown
PreprocessingExtracting Images
###Code
metadata_path='covid-chestxray-dataset/metadata.csv'
df=pd.read_csv(metadata_path)
#types we're interested in
covid_patients=df['finding']=='COVID-19'
CT=df['view']=='CT'
PA=df['view']=='PA'
# %%
df[covid_patients & CT].shape
df[covid_patients & PA].shape
# %%
PA_covid=df[covid_patients & PA]
Others=df[~covid_patients & PA]
covid_files=[files for files in PA_covid['filename']]
other_files=[files for files in Others['filename']]
#our test folder. manually included files via upload.
test_files=[file for file in sorted(os.listdir('test'))]
df_test=pd.DataFrame(test_files, columns=['filename'])
#create data folder and positive & negative cases folder, and test folder
destpath = 'data/covid','data/other', 'data/test'
srcpath = 'covid-chestxray-dataset/images'
for root, dirs, files in os.walk(srcpath):
if not os.path.isdir(destpath[0]):
os.makedirs(destpath[0])
if not os.path.isdir(destpath[1]):
os.makedirs(destpath[1])
if not os.path.isdir(destpath[2]):
os.makedirs(destpath[2])
for file in files:
if file in covid_files:
shutil.copy(os.path.join(root, file),destpath[0])
if file in other_files:
shutil.copy(os.path.join(root,file),destpath[1])
if file in test_files:
shutil.copy(os.path.join(root,file),destpath[2])
#see number of files
path, dirs, files2 = os.walk("data/other").__next__()
path, dirs, files1 = os.walk("data/covid").__next__()
path, dirs, files3 = os.walk("data/test").__next__()
print("Number of images in Other: {}".format(len(files2)),"Number of images in Covid: {}".format(len(files1)),"Number of images in Test Set: {}".format(len(files3)) )
###Output
Number of images in Other: 226 Number of images in Covid: 35
###Markdown
Loading and Splitting DataWe first declare the labels to be used (corresponding to the folder names). We then wrap the data in a fastai DataBunch. We allocate 25% of the data for validation, and we reserve a test set from a folder called "test". We resize all images to 512 x 512 pixels.
###Code
classes=['covid','other']
#include a test folder named test before running this block
#function assumes test set is located in the path (first arg) by default
data = ImageDataBunch.from_folder('data', train=".", valid_pct=0.25,test='test',
ds_tfms=get_transforms(), bs=8, size=512, num_workers=4).normalize(imagenet_stats)
data.classes
#show size of our datasets
print(len(data.train_ds),len(data.valid_ds),len(data.test_ds.x))
#sample of our images with labels
data.show_batch(rows=5, figsize=(7,8))
###Output
_____no_output_____
###Markdown
TrainingWe use a ResNet-50 for transfer learning.Initially we run the fit-one-cycle policy for a few epochs and then use fastai's **lr_find** to find an optimal range for our learning rate.We use precision and recall to measure the incidence of false positives and false negatives, as well as AUC to account for performance given the skewed data.
###Code
precision=Precision()
recall=Recall()
AUC=AUROC()
learn = cnn_learner(data, models.resnet50, metrics=(accuracy,precision,recall,AUC))
learn.fit_one_cycle(1)
###Output
_____no_output_____
###Markdown
At this stage, we realize the model is underfitting, so we continue to progressively increase the number of epochs from here on in an effort to reduce training loss while maintaining the low validation loss.
###Code
learn.fit_one_cycle(2)
learn.lr_find()
learn.recorder.plot()
#@title Defining custom checkpoints
#Customizing where our checkpoints are saved and loaded
#if not os.path.isdir('checkpoints'):
# os.mkdir('checkpoints')
os.mkdir('check')
def custom_path_save(self, name:PathOrStr, path='check', return_path:bool=False, with_opt:bool=True):
"Save model and optimizer state (if `with_opt`) with `name` to `self.model_dir`."
# delete # path = self.path/self.model_dir/f'{name}.pth'
# my addition: start
if path=='': path = self.path/self.model_dir/f'{name}.pth'
else: path = f'{path}/{name}.pth'
# end
if not with_opt: state = get_model(self.model).state_dict()
else: state = {'model': get_model(self.model).state_dict(), 'opt':self.opt.state_dict()}
torch.save(state, path)
if return_path: return path
def custom_path_load(self, name:PathOrStr, path='check', device:torch.device=None, strict:bool=True, with_opt:bool=None,purge=False):
"Load model and optimizer state (if `with_opt`) `name` from `self.model_dir` using `device`."
if device is None: device = self.data.device
# delete # state = torch.load(self.path/self.model_dir/f'{name}.pth', map_location=device)
# my addition: start
if path=='': path = self.path/self.model_dir/f'{name}.pth'
else: path = f'{path}/{name}.pth'
state = torch.load(path, map_location=device)
# end
if set(state.keys()) == {'model', 'opt'}:
get_model(self.model).load_state_dict(state['model'], strict=strict)
if ifnone(with_opt,True):
if not hasattr(self, 'opt'): opt = self.create_opt(defaults.lr, self.wd)
try: self.opt.load_state_dict(state['opt'])
except: pass
else:
if with_opt: warn("Saved filed doesn't contain an optimizer state.")
get_model(self.model).load_state_dict(state, strict=strict)
return self
learn.save = custom_path_save.__get__(learn)
learn.load = custom_path_load.__get__(learn)
model_path ='check'
learn.save('Corona_model_stage1')
#learn.load('Corona_model_stage1')
learn.unfreeze()
learn.fit_one_cycle(10, max_lr=slice(9e-07,1e-06))
learn.save('Corona_model_stage2')
#confusion matrix for the first 2 iterations
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
ClassConfusion(interp, classes, is_ordered=False, figsize=(8,8))
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(30, max_lr=slice(6e-07,7e-06))
learn.save('Corona_model_stage3')  # custom save already appends .pth
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(40, max_lr=slice(8e-06,1e-05))
learn.save('Corona_model_stage4')  # custom save already appends .pth
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(5, max_lr=2e-06)
learn.save('Corona_model_stage5')
###Output
_____no_output_____
###Markdown
Results on Validation Set and Predictions on Test Set
###Code
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
preds, _ = learn.get_preds(ds_type = DatasetType.Test, ordered=True)
df_test
#WARNING: PREDICTIONS ARE NOT SORTED AND DO NOT MATCH THEIR ACTUAL CORRESPONDING IMAGES
'''
model_classes = learn.data.classes
preds = preds.tolist()
confidences = [{c: p for c,p*100 in zip(model_classes, probs)} for probs in preds]
final_df = pd.DataFrame({'ID_code': df_test['filename'], 'target': confidences})
final_df.to_csv('NCOV_test_results.csv', header=True, index=False)
'''
#safer to use a dictionary data structure
#save predictions on test set in csv
images={filename:open_image('data/test/'+filename) for filename in test_files}
results={filename:learn.predict(images[filename]) for filename in test_files}
final_df=pd.DataFrame.from_dict(results,orient='index')
final_df.to_csv('NCOV_test_results.csv', header=True)
###Output
_____no_output_____ |
notebooks/Industry-Steel.ipynb | ###Markdown
Steel Regression Model
###Code
import pandas as pd
import numpy as np
from fbprophet import Prophet
from pandas.tseries.offsets import MonthEnd
###Output
_____no_output_____
###Markdown
read in historical data
###Code
df_prod = pd.read_csv('../data/raw/Industry/SteelHistorical.csv')
df_prod.head()
df_prod.info()
###Output
_____no_output_____
###Markdown
Make year column YYYY-MM-DD format for Prophet
###Code
df_prod = df_prod.set_index(['Economy'])
df_prod.head()
df_prod['ds'] = pd.to_datetime(df_prod['Year'], format="%Y") + MonthEnd(12)
df_prod.head()
###Output
_____no_output_____
###Markdown
read in historical macro data
###Code
df_macro = pd.read_csv('../data/raw/Industry/MacroHistorical.csv')
df_macro.head()
df_macro['ds']=pd.to_datetime(df_macro['Year'],format='%Y')
df_macro['ds'] = pd.to_datetime(df_macro['ds'], format="%Y%m") + MonthEnd(12)
df_macro = df_macro.set_index(['Economy'])
df_macro.head()
df_macro['GDP_per_capita'] = df_macro['GDP'].div(df_macro['Population'])
df = pd.merge(df_prod,df_macro,how='left',on=['Economy','ds','Year'])
df.head()
df['ln_prod_per_cap'] = df['SteelConsumption'].div(df['Population'])
df['ln_prod_per_cap'] = np.log(df['ln_prod_per_cap'])
df['ln_GDP_per_cap'] = np.log(df['GDP_per_capita'])
df = df.rename(columns={"ln_prod_per_cap":"y"})
df.head()
economies = df.index.unique()
economies
economies
models ={}
for economy in economies:
m = Prophet(daily_seasonality=False,
weekly_seasonality=False,
yearly_seasonality=False,
seasonality_mode='additive',
growth='linear')
m.add_regressor('ln_GDP_per_cap')
models[economy] = m
models
###Output
_____no_output_____
###Markdown
fit models
###Code
for economy,model in models.items():
model.fit(df.loc[economy])
###Output
_____no_output_____
###Markdown
add future macro data
###Code
df_future_macro = pd.read_csv('../data/raw/Industry/MacroAssumptions.csv',
index_col=['Economy'])
df_future_macro['GDP_per_capita'] = df_future_macro['GDP'].div(df_future_macro['Population'])
df_future_macro['ln_GDP_per_cap'] = np.log(df_future_macro['GDP_per_capita'])
df_future_macro.head()
df_future_macro['ds'] = pd.to_datetime(df_future_macro['Year'], format="%Y") + MonthEnd(12)
df_future_macro.head()
df_future_macro.tail()
###Output
_____no_output_____
###Markdown
create regressors for 1990-2050
###Code
regressors_hist = df
regressors_fut = df_future_macro
#regressors_hist = df.drop(columns=['Year','SteelConsumption','GDP','Population','GDP_per_capita','y'])
#regressors_fut = df_future_macro.drop(columns=['Year','GDP','Population','GDP_per_capita'])
_regressors_list =[]
for economy in economies:
_regressors = pd.concat([regressors_hist.loc[economy],regressors_fut.loc[economy]],
ignore_index=False, sort=False)
_regressors_list.append(_regressors)
regressors = pd.concat(_regressors_list)
###Output
_____no_output_____
###Markdown
run model (make prediction)
###Code
pred_list =[]
for economy,model in models.items():
forecast = model.predict(regressors.loc[economy])
forecast.insert(loc=0,column='Economy',value=economy)
forecast = forecast.set_index(['Economy'])
pred_list.append(forecast)
results = pd.concat(pred_list, sort=False)
results[['ds', 'yhat', 'yhat_lower', 'yhat_upper']]
results['Year'] = results['ds'].dt.year
#results[['Year', 'yhat', 'yhat_lower', 'yhat_upper']].to_csv ('../data/final/steel_results.csv', header=True)
###Output
_____no_output_____
###Markdown
plot results
###Code
for economy,model in models.items():
fig1 = model.plot(results.loc[economy])
results.info()
regressors.info()
_a = results[['ds', 'yhat', 'yhat_lower', 'yhat_upper']]
_b = regressors[['ds','Year','GDP','Population']]
_b.head()
#final_results = pd.merge(_a,_b,how='outer',on='ds')
final_results = pd.merge(_a,_b,left_index=True, right_index=True)
final_results
final_results['estimated production - thousand tons per capita'] = (np.exp(final_results['yhat'])).div(1000)
final_results['estimated production - tons'] = np.multiply(final_results['estimated production - thousand tons per capita'],final_results['Population'])
final_results.to_csv ('../data/final/steel_results.csv', header=True)
###Output
_____no_output_____ |
docs/notebooks/Moebius Strip And The Field of Coefficients.ipynb | ###Markdown
Moebius Strip And The Field of CoefficientsIn this notebook we will explore an example in which the field of coefficients impacts the answer that 1 dimensional homology gives. This example demonstrates that contrary to common conventions which say to always use $\mathbb{Z} / 2$ (binary) coefficients there may be good reasons to use other fields, especially when there are *twists.*First, we do all of the necessary imports as usual
###Code
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from ripser import ripser
from persim import plot_diagrams
###Output
_____no_output_____
###Markdown
Now we will create a closed loop which lives on a 2-torus embedded in 3 dimensions. First, we will have the loop travel twice around the small circle (the "inner tube" part) for every one revolution around the large circle. Given a radius $R$ for the big loop and a radius $r$ for the small loop, we sample the following parametric curve$x(t) = (R + r \cos(2t)) \cos(t)$$y(t) = (R + r \cos(2t)) \sin(t)$$z(t) = r \sin(2t)$We then compute persistent homology using both $\mathbb{Z} / 2$ and $\mathbb{Z} / 3$ coefficients
###Code
## Step 1: Setup curve
N = 100 # Number of points to sample
R = 4 # Big radius of torus
r = 1 # Little radius of torus
X = np.zeros((N, 3))
t = np.linspace(0, 2*np.pi, N)
X[:, 0] = (R + r*np.cos(2*t))*np.cos(t)
X[:, 1] = (R + r*np.cos(2*t))*np.sin(t)
X[:, 2] = r*np.sin(2*t)
## Step 2: Compute persistent homology
dgms2 = ripser(X, coeff=2)['dgms']
dgms3 = ripser(X, coeff=3)['dgms']
fig = plt.figure(figsize=(9, 3))
ax = fig.add_subplot(131, projection='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2])
ax.set_aspect('equal')
plt.title("Generated Loop")
plt.subplot(132)
plot_diagrams(dgms2)
plt.title("$\mathbb{Z} / 2$")
plt.subplot(133)
plot_diagrams(dgms3)
plt.title("$\mathbb{Z} / 3$")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Although the loop is very curved, we still see just one class forming in H1, and the persistence diagrams are the same for $\mathbb{Z} / 2$ and $\mathbb{Z} / 3$ coefficients; there is one class born at around $0$ which dies at $r + R$.If, on the other hand, we have the loop go twice around the big circle for every once around the small circle; that is, the following parametric curve$x(t) = (R + r \cos(t)) \cos(2t)$$y(t) = (R + r \cos(t)) \sin(2t)$$z(t) = r \sin(t)$
###Code
X[:, 0] = (R + r*np.cos(t))*np.cos(2*t)
X[:, 1] = (R + r*np.cos(t))*np.sin(2*t)
X[:, 2] = r*np.sin(t)
## Step 2: Compute persistent homology
dgms2 = ripser(X, coeff=2)['dgms']
dgms3 = ripser(X, coeff=3)['dgms']
fig = plt.figure(figsize=(9, 3))
ax = fig.add_subplot(131, projection='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2])
ax.set_aspect('equal')
plt.title("Generated Loop")
plt.subplot(132)
plot_diagrams(dgms2)
plt.title("$\mathbb{Z} / 2$")
plt.subplot(133)
plot_diagrams(dgms3)
plt.title("$\mathbb{Z} / 3$")
plt.show()
###Output
_____no_output_____ |
DataManagement/1_DataManagement.ipynb | ###Markdown
Workshop: Data Management with Python Prerequisites Installation:The minimum requirement is Python 3.I personally prefer installing the Anaconda Distribution: https://www.anaconda.com/products/individual Basically all packages you need to get started are included, as well as Jupyter and the Spyder editor. Also, when installing new packages with conda ```conda install PACKAGE_NAME``` the compatibility is taken care of.
###Code
# in case a package is missing and you recognise when you want to import it,
# you can do the installation also from within the notebook
!pip install pandas
#!conda install pandas
# For installation on Windows the command might need to look like:
!{sys.executable} -m pip install pandas
!{sys.executable} -m pip install xl...
###Output
_____no_output_____
###Markdown
Packages:For the data management tutorial we need packages for importing and exporting the data and for data wrangling. The most common package, which covers both, is the **pandas module**. This Python open source library provides high-performance, easy-to-use data structures and data analysis tools.Another tool we need is the **numpy module**. This Python open source library makes handling of vectors, matrices and big multidimensional arrays easy.So after the installation of the numpy and pandas packages, you have to import them at the beginning of your python script or notebook to use their functionalities.Good Practice tip: import as "np" and "pd" because it's faster to write "pd" than "pandas"```pythonimport pandas as pdimport numpy as np``` Data:At the very beginning we will look at a minimal Excel example Basics.xlsx to get to know some basic operations.Later on we will take a look at two examples of experimental data:- Tensile Test- Bubble Column Plan B:If something went wrong with the installation or you cannot access the datafiles, you can view this notebook at nbviewer (passive) or at binder (interactive). Also you can view it directly on github.Links:- binder https://mybinder.org/v2/gh/astridwalle/python_jupyter_basics/HEAD?filepath=DataManagement%2F1_DataManagement.ipynb- nbviewer https://nbviewer.jupyter.org/github/astridwalle/python_jupyter_basics/blob/main/DataManagement/1_DataManagement.ipynb- github https://github.com/astridwalle/python_jupyter_basics Now let's get started! What is this about? Why Python? Separation of Data and Analysis!For example you have performed experimental analyses and now you have loads of raw data:There is actually no need to copy this data for analyzing and to store different analysis versions...Well-known scenarios are:- Multiple Excel files floating around on share drives- Lots of versions and local copies- Hard or even impossible to keep overview and trace changes- Changing the underlying data (add columns, change format, ...) can break everything Typical Usecases for which Python has advantages over Excel, Matlab, ...For large datasets Excel becomes unwieldy and slow. This is no problem for python due to the paradigm of separating the data from the analysis. If you want to try and store multiple approaches and analyses, this becomes a problem in Excel due to large file copies.With python your different analysis files stay small (just textfiles) and can also be version controlled (e.g. with git), which is great for collaboration.You don't end up with ..._final ..._final_final_With Python:- Import data and with that- Decouple Business Logic, Computations, Visualisation from the data- Access databases, xlsx tables and more sources without changing them.- Export your analysis afterwards to every format you like (database, xlsx, ...) Huge community of users and contributorsYou can google everything! Jupyter Basics to get started:Check out the following short-cut and menu bar functionalities: - Markdown vs. Code- Coding Environment- auto-complete ```tab```- Running a cell: ```Run``` or ```Shift+Enter``` - ? in front of function call to get some help Important for this course:The intention is to have an interactive working document, so- Add as many code cells as you like!- Try everything!- Don't be shy, you cannot break anything! 
Example files for this classBasic sample data: ```Data/Basics.xlsx```Real data from testing:- Tensile Testing```Data/TensileTest/AlMg3.txt Data/TensileTest/HDPE.txt Data/TensileTest/Stahl.txt```- Bubble column```Data/BubbleColumn/Test_01.xlsx Data/BubbleColumn/Test_02.xlsx Data/BubbleColumn/Test_03.xlsx ``` Importing basic libraries**pandas module**Python open source library providing high-performance, easy-to-use data structures and data analysis tools. Built-in great capabilities for data import and export! (csv, xlsx, ...)Good Practice tip: import as "np" and "pd" because it's faster to call "pd" than "pandas"**numpy module** (We don't need it today)Python open source library for easy handling of vectors, matrices, big multidimensional arrays and mathematical operations General notes ModulesYou can either import a complete module, or just single classes or functions of a module.```pythonimport pandas as pdpd.read_csv("file.csv")``````pythonimport matplotlib.pyplot as pltplt.plot(x,y)``` Classes and functionsPython is a multi-paradigm programming language. You can do object-oriented programming, but also structured, sequential programming. You can create objects with classes or just use functions. Let's go!At first just one statement in a code cell to get a nice output. (This enables the print of multiple variables per coding cell.)
###Code
# make all print statements in a cell appear in output
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
###Output
_____no_output_____
###Markdown
Now it is your turn!1. Import the modules2. The functions of these modules are then called with pd.XXX3. Check which functions are available and how to use them4. Try pd. tab/autocomplete to see all available options5. Try inserting a ? in front of the function name Press ```+``` and get started :)
###Code
# import module
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Our first Testcases - Basics.xlsx / Tensile TestingIf you have multiple tables, e.g. from different test runs, which you want to combine into one datasource for analyzing, that is pretty easy.We will do this for:1. One xlsx file with multiple sheets as raw data2. Multiple txt files as raw dataStitching them together rowwise or columnwise is easy. Exploratory Data AnalysisWe always start with an exploratory data analysis to ensure we know what our data looks like. Our main datastructure: Pandas DataFrameFor data analysis the most common module is pandas with the main tabular data structure DataFrame, which enables endless operations.
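As a quick, made-up illustration of the DataFrame structure (a generic sketch, not based on the workshop files), a table can also be built directly from a Python dictionary:
```python
import pandas as pd

# each dictionary key becomes one column of the table
df_example = pd.DataFrame({
    "test_run": [1, 2, 3],
    "force_N": [10.2, 11.5, 9.8],
})
print(df_example.shape)   # (number of rows, number of columns)
print(df_example.head())  # first rows of the table
```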
###Code
# We start with reading in an Excel File.
# To check the usage of the function we can put the questionmark in front of the function.
#
#?pd.ExcelFile
#??pd.ExcelFile
# 1. Read xlsx file - Variant 1 --> reads all the sheets in the file
xls1=pd.ExcelFile("../Data/Basics.xlsx")
xls1.sheet_names
# 2. Read xlsx file - Variant 2 --> reads just the first sheet!
xls2=pd.read_excel("../Data/Basics.xlsx")
xls2
# We use Variant 1 --> Parse the sheets into dataframes
df_test1=xls1.parse("Test1")
df_test2=xls1.parse("Test2")
df_test3=xls1.parse("Test3")
# Check the results of the parsing on your own!
# Always use autocompletion!
# show results: have a look at the resulting dataframes
# --> check the additional index column, which was added automatically
df_test1
df_test2
df_test3
# show no of columns and rows with df.shape
df_test1.shape
df_test2.shape
df_test3.shape
###Output
_____no_output_____
###Markdown
pd.concat - join dataframes**Task**At first we want to join the two dataframes df_test1 and df_test2 into one big DataFrame, which has 1. all columns2. all rowsLet's find out the caveats here...**hints**- have a look again at the dataframes: - just type the name of the df and shift+enter - have a look at number of rows and columns using df.shape- use **?** to find out how to use pd.concat- ask google
###Code
df_test1
df_test2
# 1. all columns
df_all_cols=pd.concat([df_test1,df_test2],axis=1,ignore_index=False)
df_all_cols
# show resulting dataframe
df_all_rows=pd.concat([df_test1,df_test2],axis=0,ignore_index=True)
df_all_rows
###Output
_____no_output_____
###Markdown
Your turn: try around with- ?pd.concat- Change the axis- Change ignore_index- Try other options with autocomplete... Combine different testruns into one dataframeIf you want to merge different "cases" that are present in different tables, e.g. each representing another test or another parameter variation, pd.concat can join them into one df, adding an additional index, a so-called MultiIndex.We will do this now with our test tables:
###Code
#df_all_Params=pd.concat([df_Param1,df_Param2,df_Param3],axis=0,ignore_index=True)
df_all_Params=pd.concat([df_test1,df_test2],keys=["Test1","Test2"],axis=0,names=["Param","Row_Index"],ignore_index=False)
df_all_Params
###Output
_____no_output_____
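###Markdown
Once the MultiIndex is in place, a single test can be pulled out again. A minimal sketch, re-using the `df_all_Params` frame created above:
```python
# select all rows belonging to one key of the outer index level
df_all_Params.loc["Test1"]

# the same selection as a cross-section, naming the index level explicitly
df_all_Params.xs("Test1", level="Param")
```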
###Markdown
Your turn: try out join / merge: Join and merge are also extremely interesting functionalities for dataframes. --> check it out now or later with```df_test1.join()df_test1.merge()```Ask google or add **?** to find out more. Use autocompletion! Now let's have a look at some real testing data```Data/TensileTest/AlMg3.txt``````Data/TensileTest/HDPE.txt``````Data/TensileTest/Stahl.txt``` Your turn:- Read in all 3 files with the function pd.read_csv()- What are the caveats here?- Hint: read_csv can detect commas and whitespaces automatically, but here we have tabs...
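Before moving on to the files, here is a small self-contained sketch for the join/merge suggestion above (the two tiny frames are invented purely for illustration and are not part of the workshop data):
```python
import pandas as pd

left = pd.DataFrame({"sample_id": [1, 2, 3], "force_N": [10.2, 11.5, 9.8]})
right = pd.DataFrame({"sample_id": [2, 3, 4], "temp_C": [21.0, 22.5, 20.1]})

# merge matches rows on a common column; how="inner" keeps only sample_ids present in both frames
merged = left.merge(right, on="sample_id", how="inner")

# join aligns on the index instead of a column
joined = left.set_index("sample_id").join(right.set_index("sample_id"), how="outer")
print(merged)
print(joined)
```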
###Code
# 1. Read in multiple txt files
# Try autocomplete- for everything! Paths, Variable names, functions, options...
df_AlMg3=pd.read_csv("../Data/TensileTest/AlMg3.txt", sep="\t", skiprows=0, header=None)
# See the whole dataframe
df_AlMg3
# See the beginning or the end
df_AlMg3.head(15)
df_AlMg3.tail(10)
# See a slice of the df
df_AlMg3[5:15]
# This looks a bit messy, so let's do some data wrangling
# Split it up into metadata and actual data
df_AlMg3_meta=df_AlMg3[1:9]
df_AlMg3_data=df_AlMg3[12::]
df_AlMg3_meta
df_AlMg3_data
# And some more wrangling
# Set meaningful column names
df_AlMg3_meta.columns=["key","value","unit"]
df_AlMg3_data.columns=["Standardweg [mm]","Standardkraft [N]","None"]
df_AlMg3_meta
df_AlMg3_data
# and at last we will cut the last column from the data, as this is empty anyway:
df_AlMg3_data.drop(columns=["None"]).reset_index()
# ATTENTION! Make sure that the action was really APPLIED and not just executed!
df_AlMg3_data
# so either you write the df new, or set the inplace option or copy
# df_AlMg3=df_AlMg3_data.drop(columns=["None"]).reset_index()
df_AlMg3_data=df_AlMg3_data.drop(columns=["None"])
df_AlMg3_data.reset_index(inplace=True)
# And we take a final look at the cleaned data df
df_AlMg3_data
###Output
_____no_output_____
###Markdown
Now that the data looks good, we can apply the transforming steps to the other datasets as well: Option 1. Read one by one Option 2. Read files in a loop
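A sketch of Option 1, reading one file explicitly (this simply mirrors the AlMg3 steps above and assumes the HDPE file has the same layout); the loop of Option 2 follows in the next cells:
```python
df_HDPE = pd.read_csv("../Data/TensileTest/HDPE.txt", sep="\t", skiprows=0, header=None)
df_HDPE_meta = df_HDPE[1:9]    # metadata block of the file
df_HDPE_data = df_HDPE[12::]   # measurement block of the file
df_HDPE_meta.columns = ["key", "value", "unit"]
df_HDPE_data.columns = ["Standardweg [mm]", "Standardkraft [N]", "None"]
df_HDPE_data = df_HDPE_data.drop(columns=["None"]).reset_index()
```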
###Code
! ls -al ../Data/TensileTest/
import os
# we can assign dynamic variable names with the loop index, but this is a bad idea. It is better
# to write it into a so called dictionary. Later on we can access it by the name
tests={}
#for i in ["HDPE","Stahl"]:
for i in os.listdir("../Data/TensileTest/"):
    i=os.path.splitext(i)[0]  # split off the ".txt" extension (str.strip would remove characters, not the suffix)
df=pd.read_csv("../Data/TensileTest/"+i+".txt", sep="\t", skiprows=0, header=None)
df_meta=df[1:9]
df_data=df[12::]
df_meta.columns=["key","value","unit"]
df_data.columns=["Standardweg [mm]","Standardkraft [N]","None"]
df_data=df_data.drop(columns=["None"])
df_data.reset_index(inplace=True)
key_data=i+"_data"
value_data=df_data
key_meta=i+"_meta"
value_meta=df_meta
tests[key_data]=value_data
tests[key_meta]=value_meta
# Now we check the keys of the dictionary
tests["AlMg3_meta"]["key"]
# And now we combine all the testing data into one dataframe, using just the data values of the dictionary.
df_tensile_tests=pd.concat([tests["AlMg3_data"],tests["HDPE_data"],tests["Stahl_data"]],keys=["AlMg3","HDPE","Stahl"],axis=0,
names=["Material","Row_Index"],ignore_index=False)
# Let's check the result
df_tensile_tests
###Output
_____no_output_____
###Markdown
Example 2: Data Exploration with large datafiles...We always start by looking at the data. Get to know your data**Does the data look ok?****Does it look as we expected it to be?**In the following we will list a few commands, which are helpful for data exploration!For this example we will use some data from the bubblecolumn.
###Code
df_bub=pd.read_excel("../Data/BubbleColumn/Test_01.xlsx")
df_bub
###Output
_____no_output_____
###Markdown
The column names do not look nice. Let's check the original file and adapt our function call.We need to give some extra options for reading in the files. --> Check the read_excel function with ?
###Code
?pd.read_excel
# we give the first 2 lines as header --> then a multiindex is created.
df_bub=pd.read_excel("../Data/BubbleColumn/Test_01.xlsx",header=[0,1])
df_bub
###Output
_____no_output_____
###Markdown
Now we will do some data exploration- shape- describe- slicing- accessing columns / rows
###Code
df_bub.shape
df_bub.describe()
df_bub.columns
# To get the "correct" names for slicing and for looping
df_bub.columns.values
# access specific columns --> there are multiple ways to do so.
df_bub.cam0
df_bub.cam0["Waddel Disk Diameter"]
df_bub["cam0"]["Waddel Disk Diameter"]
###Output
_____no_output_____
###Markdown
Get an overview
###Code
df_bub.shape
df_bub.head(5)
df_bub.tail(5)
df_bub.index
len(df_bub)
###Output
_____no_output_____
###Markdown
Slicing
###Code
# 1 row
df_bub[4:5]
#multiple rows
df_bub[10:20]
#ignore last 100 rows
df_bub[:-100]
# 1 column
df_bub["cam1"]["Bounding \nleft"]
# multiple columns
df_bub[df_bub.columns.values[:3]]
###Output
_____no_output_____
###Markdown
Filtering
###Code
df_bub[df_bub["cam1"]["Bounding \nleft"]>32]
###Output
_____no_output_____
###Markdown
Your task: Try filtering options with Boolean Algebra:- - == / !=- multiple options combined with ```&``` Some more filtering - groupbyFilter data by categorical valuesApplies if you want to get single dataframes for specific groups.Example: RKI Covid Case Data - 1 row per day per Landkreis. To get all rows only for one Landkreis, you can use groupby.
###Code
# you can also read the csv directly from url!
df_rki=pd.read_csv("https://www.arcgis.com/sharing/rest/content/items/f10774f1c63e40168479a1feb6c7ca74/data")
df_rki
# Grouping works great to get separate DataFrames for different categories.
df_grouped=df_rki.groupby("Landkreis")
for name, dataframe in df_grouped:
print(name, len(dataframe))
###Output
_____no_output_____
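###Markdown
To pull out the rows of a single group without looping, `get_group` can be used on the grouped object. A small sketch based on the grouping above (the group name is taken from the data itself, so no value has to be hard-coded):
```python
# pick an arbitrary Landkreis name that is guaranteed to exist in the data
some_landkreis = df_rki["Landkreis"].iloc[0]

# all rows for just that Landkreis, returned as a regular DataFrame
df_one = df_grouped.get_group(some_landkreis)
df_one.head()
```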
###Markdown
Min / Max / Mean
###Code
df_bub["cam1"]["Bounding \nleft"].min()
df_bub["cam1"]["Bounding \nleft"].max()
df_bub["cam1"]["Bounding \nleft"].mean()
###Output
_____no_output_____
###Markdown
Find unique values
###Code
# Only makes sense for categorical values...
df_bub["cam1"]["Bounding \nleft"].unique()
df_rki["Bundesland"].unique()
###Output
_____no_output_____
###Markdown
Get specific rows, columns, elementsBy names (loc), indices (iloc)
###Code
# loc - gets you data by column and row name
# get one specific element by column_name and row_index
df_bub.loc[6,("cam1","Bounding \nleft")]
# get numerical index of column:
idx=df_bub.columns.get_loc(("cam1","Bounding \nleft"))
idx
# iloc - gets you data by index
# get one specific element by column index and row index
df_bub.iloc[6,idx]
###Output
_____no_output_____
###Markdown
And what do we now do with that? More ideas... Try it! Add other tests and combine them into one big dataframe Add columns with postprocessed values (see the sketch below) Plotting ... Visualize resultsFor the bubble column test we plot v_pins over time.More about data visualization in the next session! Hint: You can also plot only a portion of the original data and apply the filtering functions upfront.
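A sketch for "Add columns with postprocessed values", using the bubble-column column names that appear in the plotting cell below (note the trailing space in "z_bild ", which is part of the original header):
```python
# store the finite-difference velocity as a new column under the "erg" level
df_bub[("erg", "v_pins")] = (
    (df_bub[("erg", "z_bild ")].shift(1) - df_bub[("erg", "z_bild ")])
    / (df_bub[("erg", "t_Bilder LabV")].shift(1) - df_bub[("erg", "t_Bilder LabV")])
)
df_bub[("erg", "v_pins")].head()
```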
###Code
df_bub.columns.values
import matplotlib.pyplot as plt
%matplotlib inline
import datetime
x=df_bub["erg","Zeit [ms]"]
y1=(df_bub["erg","z_bild "].shift(1)-df_bub["erg","z_bild "])/(df_bub["erg","t_Bilder LabV"].shift(1)-df_bub["erg","t_Bilder LabV"])
y2=df_bub["erg","z_bild "]
plt.figure(figsize=(15, 10))
plt.scatter(x,y1)
plt.ylim(0,0.5)
plt.xlabel("Zeit [ms]")
plt.ylabel("v_pins")
###Output
_____no_output_____
###Markdown
Export to Excel
###Code
df_all_cols
# just a tiny example. Of course you can do all kinds of formatting etc...
writer = pd.ExcelWriter("../Data/df_all_columns.xlsx",engine='xlsxwriter',options={'remove_timezone': True})
df_all_cols.to_excel(writer,sheet_name="all cols",startrow=1 , startcol=1, index=False)
writer.save()
###Output
_____no_output_____
###Markdown
Export notebooks also as- pdf- latex- py--> check out in the menu: File --> Export More python:Some useful functionalities:- zip() - combines multiple iterables into one data structure by grouping them together according to their index (0 pairs with 0, 1 with 1, etc) - map() - map iterates over an array and executes a function on each element. It's an elegant and concise way to loop through data - filter() - filters through your array and returns all elements who pass your condition - reduce() - you can perform cumulative tasks on the elements of your list, for example the sum of all elements or calculating the product of all entries (has to be imported from functools for python 3) - lambdas - Lambdas are locally defined functions you can use without having to define them globally
###Code
%%time
from functools import reduce
arr = [1, 2, 3, 4]
letters = ['A', 'B', 'C', 'D']
def someFunction(arg1, arg2):
result = arg1 ** arg2
return(result)
print(someFunction(2, 3))
outputZip = list(zip(arr, letters))
print(outputZip)
outputMap = list(map(lambda x: x*2, arr))
print(outputMap)
outputFilter = list(filter(lambda x: x % 2 == 0, arr))
print(outputFilter)
outputReduce = reduce(lambda x, y: x + y, arr)
print(outputReduce)
###Output
_____no_output_____
###Markdown
More hacks and best practices: Command mode```esc``` and then navigate around with arrows Shell commands```python!ls``` Use virtual environments for more complex projectsOne big disadavantage of python is it's volatility and dynamic. So lots of functions keep changing and packages are not compatible with each other, depending on the versions.```pythonpython3 -m venv --system-site-packages NAME_ENV``` Use the virtual env with jupyter notebook:```pythonpip install --user ipykernelpython -m ipykernel install --user --name=myenvsource env/bin/activate``` Get a working environmentrequirements.txtpip freezeBesides Reuse the same structure for your projects --> Cookiecutter templatesThe way from raw to processed data is well documented, comprehensible and repeatable. Hacks:- https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/- https://github.com/astridwalle/python_jupyter_basics/blob/main/JupyterHacks/Jupyter-Hacks.ipynb Combination of tools:https://sehyoun.com/blog/20180904_using-matlab-with-jupyter-notebook.html Check out markdown possibilitiese.g. include pics and gif's easily
###Code
# Check out your variables of a specific type, here now we check all DataFrames in our notebook
%who DataFrame
###Output
_____no_output_____ |
notebooks/open-source-dream.ipynb | ###Markdown
Creation of an open source model for DREAM descriptor prediction Preliminaries and Imports
###Code
%load_ext autoreload
%autoreload 2
from missingpy import KNNImputer
import mordred
import opc_python.utils.loading as dream_loading
import numpy as np
import pandas as pd
import pickle
import pyrfume
from pyrfume.odorants import from_cids, all_smiles
from pyrfume.features import smiles_to_mordred, smiles_to_morgan, smiles_to_morgan_sim
from rickpy import ProgressBar
import rdkit
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import cross_val_score, cross_validate
from sklearn.multioutput import MultiOutputRegressor
###Output
_____no_output_____
###Markdown
Load the perceptual data from Keller and Vosshall, 2016
###Code
kv2016_perceptual_data = dream_loading.load_raw_bmc_data()
kv2016_perceptual_data = dream_loading.format_bmc_data(kv2016_perceptual_data,
only_dream_subjects=False, # Whether to only keep DREAM subjects
only_dream_descriptors=True, # Whether to only keep DREAM descriptors
only_dream_molecules=True) # Whether to only keep DREAM molecules)
# Get the list of PubChem IDs from this data
kv_cids = list(kv2016_perceptual_data.index.get_level_values('CID').unique())
# Get information from PubChem about these molecules
info = from_cids(kv_cids)
# Make a Pandas series relating PubChem IDs to SMILES strings
smiles = pd.Series(index=kv_cids, data=[x['IsomericSMILES'] for x in info])
smiles.head()
###Output
[-----------------------100%---------------------] 476 out of 476 complete (Retrieved 400 through 475)
###Markdown
Compute physicochemical features for the DREAM data (and some other molecules)
###Code
# Get a list of all SMILES strings from the Pyrfume library
ref_smiles = list(set(all_smiles()))
mordred_features_ref = smiles_to_mordred(ref_smiles)
# A KNN imputer instance
imputer_knn = KNNImputer(n_neighbors=10, col_max_missing=1)
X = mordred_features_ref.astype(float)
imputer_knn.fit(X)
# Compute Mordred features from these SMILES strings
mordred_features = smiles_to_mordred(smiles.values)
# The computed Mordred features as floats (so errors become NaNs)
X = mordred_features.astype(float)
# Whether a column (one feature, many molecules) has at least 50% non-NaN values
is_good_col = X.isnull().mean() < 0.5
# The list of such "good" columns (i.e. well-behaved features)
good_cols = is_good_col[is_good_col].index
# Impute the missing (NaN) values
X[:] = imputer_knn.fit_transform(X)
# Restrict Mordred features to those from the good columns, even after imputation
X = X[good_cols]
# Put back into a dataframe
mordred_features_knn = pd.DataFrame(index=mordred_features.index, columns=good_cols, data=X)
# Compute Morgan fingerprint similarities from these SMILES strings
morgan_sim_features = smiles_to_morgan_sim(smiles.values, ref_smiles)
len(ref_smiles), len(set(ref_smiles))
len(list(morgan_sim_features)), len(set(morgan_sim_features))
# Combine Mordred (after imputation) and Morgan features into one dataframe
all_features = mordred_features_knn.join(morgan_sim_features, lsuffix="mordred_", rsuffix="morgan_")
assert len(all_features.index) == len(all_features.index.unique())
assert list(all_features.index) == list(smiles.values)
all_features.index = smiles.index
all_features.index.name = 'PubChem CID'
all_features.head()
len(list(all_features)), len(set(all_features))
###Output
_____no_output_____
###Markdown
Organize perceptual data
###Code
# Compute the descriptor mean across subjects
data_mean = kv2016_perceptual_data.mean(axis=1)
# Compute the subject-averaged descriptor mean across replicates
data_mean = data_mean.unstack('Descriptor').reset_index().groupby(['CID', 'Dilution']).mean().iloc[:, 1:]
# Fix the index for joining
data_mean.index = data_mean.index.rename(['PubChem CID', 'Dilution'])
# Show the dataframe
data_mean.head()
###Output
_____no_output_____
###Markdown
Join the features and the descriptors and split again for prediction
###Code
# Create a joined data frame with perceptual descriptors and physicochemical features
df = data_mean.join(all_features, how='inner')
# Add a column for dilution (used in prediction)
df['Dilution'] = df.index.get_level_values('Dilution')
# Make a list of all the columns that will be used in prediction
predictor_columns = [col for col in list(df) if col not in list(data_mean)]
# Make a list of all the columns that must be predicted
data_columns = list(data_mean)
# Create the feature matrix and the target matrix
X = df[predictor_columns]
Y = df[data_columns]
# Each feature name is only used once
assert pd.Series(predictor_columns).value_counts().max()==1
###Output
_____no_output_____
###Markdown
Verify that this model gets reasonable out-of-sample performance
###Code
# A function to compute the correlation between the predicted and observed ratings
# (for a given descriptor columns)
def get_r(Y, Y_pred, col=0):
pred = Y_pred[:, col]
obs = Y.iloc[:, col]
return np.corrcoef(pred, obs)[0, 1]
# A series of scorers, one for each descriptor
scorers = {desc: make_scorer(get_r, col=i) for i, desc in enumerate(Y.columns)}
# The number of splits to use in cross-validation
n_splits = 5
# The number of descriptors in the perceptual data
n_descriptors = Y.shape[1]
# A vanilla Random Forest model with only 10 trees (performance will increase with more trees)
rfr = RandomForestRegressor(n_estimators=10, random_state=0)
# A multioutput regressor used to fit one model per descriptor, in parallel
mor = MultiOutputRegressor(rfr, n_jobs=n_descriptors)
# Check the cross-validation performance of this kind of model
%time cv_scores = cross_validate(mor, X, Y, scoring=scorers, cv=n_splits)
# An empty dataframe to hold the cross-validation summary
rs = pd.DataFrame(index=list(Y))
# Compute the mean and standard deviation across cross-validation splits
rs['Mean'] = [cv_scores['test_%s' % desc].mean() for desc in list(Y)]
rs['StDev'] = [cv_scores['test_%s' % desc].std() for desc in list(Y)]
# Show the results
rs
###Output
_____no_output_____
###Markdown
Fit the final model and save it
###Code
# A random forest regressor with more trees
rfr = RandomForestRegressor(n_estimators=250, random_state=0)
# Wrap in a class that will fit one model per descriptor
mor = MultiOutputRegressor(rfr, n_jobs=n_descriptors)
# Fit the model
%time mor.fit(X, Y);
len(list(X)), len(set(list(X)))
# Save the fitted model
path = pyrfume.DATA_DIR / 'keller_2017' / 'open-source-dream.pkl'
with open(path, 'wb') as f:
pickle.dump([mor, list(X), list(Y), imputer_knn], f)
###Output
_____no_output_____
###Markdown
Demonstration: using the fitted model (can be run independently if the above has been run at some point)
###Code
from pyrfume.odorants import from_cids
from pyrfume.predictions import load_dream_model, smiles_to_features, predict
novel_cids = [14896, 228583] # Beta-pinene and 2-Furylacetone
novel_info = from_cids(novel_cids)
novel_smiles = [x['IsomericSMILES'] for x in novel_info]
model_, use_features_, descriptors_, imputer_ = load_dream_model()
features_ = smiles_to_features(novel_smiles, use_features_, imputer_)
predict(model_, features_, descriptors_)
Out[1].to_dict('records')
###Output
_____no_output_____ |
employee_exit_survey_gender.ipynb | ###Markdown
Employee Exit Survey The ClientTAFE and DETE are vocational colleges in Australia. They have been doing exit surveys for a while and have now gathered a dataset of about 1600 results which they would like analysed. The client is focussed on internal contributing factors. Aims of Analysis: DissatisfactionThe client has asked for a report to help them understand the results of their recent exit survey.They wish to understand the profile of employees who cite dissatisfaction as a contributing factor to their exit from the organisation.Leadership wants to understand where to target retention improvement strategies. Conclusions: Gender and DissatisfactionWomen make up a significant majority of the employee demographic at TAFE and DETE, where the demographics of gender are very similar.Analysis revealed:- Men cite dissatisfaction as a reason for leaving 12% more often than women. VisualisationsThe correlation analysis determined men are more likely to be dissatisfied than women. They make up about 30% of the workforce, but cite dissatisfaction as a contributing factor to their leaving DETE or TAFE 12% more often than women. A Story in Pie ChartsPie charts make an excellent visualisation tool.At the end of this notebook there is a useful infographic to communicate the above analysis.The series of pie charts progresses the viewer left to right from the gender distribution in the whole workforce, to the dissatisfied gender distribution, and then to a chart which explodes the key statistic. Notebooks and ReportsThe following notebooks and documents are part of this analysis: Jupyter Notebook Filename: Summary- [employee_exit_survey_cleaning_1.ipynb](https://github.com/jholidayscott/employee_exit_survey/blob/main/employee_exit_survey_cleaning_1.ipynb): Column drops, missing data, renaming columns, tidying data for consistency- [employee_exit_survey_cleaning_2.ipynb](https://github.com/jholidayscott/employee_exit_survey/blob/main/employee_exit_survey_cleaning_2.ipynb): Adding calculated columns, adding category columns, further drops- [employee_exit_survey_correlation.ipynb](https://github.com/jholidayscott/employee_exit_survey/blob/main/employee_exit_survey_correlation.ipynb): Investigating correlations to guide analysis- [employee_exit_survey_gender.ipynb](https://github.com/jholidayscott/employee_exit_survey/blob/main/employee_exit_survey_gender.ipynb): Aggregation by pivot_table of gender subsets, visualisations- [employee_exit_survey_age.ipynb](https://github.com/jholidayscott/employee_exit_survey/blob/main/employee_exit_survey_age.ipynb): Aggregation by pivot_table of age subsets, visualisations- [employee_exit_survey_conflict.ipynb](https://github.com/jholidayscott/employee_exit_survey/blob/main/employee_exit_survey_conflict.ipynb): Exploration of conflict as a contributory factor
###Code
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.style as style
import numpy as np
exit_survey = pd.read_csv('employee_exit_survey_clean_final.csv')
pv_gender = exit_survey.pivot_table(values='cf_dept_or_job_dissatisfaction', index='gender', aggfunc=np.sum, margins=True)
pv_gender['perc_gender'] = (pv_gender['cf_dept_or_job_dissatisfaction']/pv_gender['cf_dept_or_job_dissatisfaction'].sum())*100
all_gender = exit_survey['gender'].value_counts()
pv_gender['all_perc_gender'] = (all_gender/all_gender.sum())*100
#pv_gender.drop('All', axis=0, inplace=True)
pv_gender
pv_gender.plot(kind='pie', y='perc_gender')
pv_gender.plot(kind='pie', y='all_perc_gender')
diff = pv_gender.loc['Male','all_perc_gender'] - pv_gender.loc['Male','perc_gender']
gender_diff = {'cf_dept_or_job_dissatisfaction':0, 'perc_gender':0, 'all_perc_gender': diff}
df_row = pd.DataFrame(data=gender_diff, index=['Male_x'])
pv_gender_diff = pv_gender
pv_gender_diff = pd.concat([pv_gender_diff,df_row])
pv_gender_diff
pv_gender_diff.plot(kind='pie', y='all_perc_gender')
fig1, (ax1,ax2,ax3) = plt.subplots(ncols=3, nrows=1, figsize=((12,5)))
# Styles
style.use('default')
style.use('fivethirtyeight')
# Plots
explode = (0, 0, 0.1)
ax1.pie(x=pv_gender['all_perc_gender'], startangle=23, colors=['#8aa672','#628247','#e5ae38'])
ax2.pie(x=pv_gender['perc_gender'], startangle=47, colors=['#8aa672','#628247','#e5ae38'])
ax3.pie(x=pv_gender_diff['all_perc_gender'], startangle=50, explode=explode, colors=['#8aa672','#628247','#e5ae38'])
ax1.axis('equal')
ax2.axis('equal')
ax3.axis('equal')
#Title
ax1.text(x=-1, y=1.2, s='Employee Dissatisfaction by Gender',size=35,color='#8b8b8b', weight='bold')
# Gender Labels
x_pos = 0.4
y_pos = -0.3
axes=[ax1,ax2,ax3]
for ax in axes:
ax.text(x=x_pos, y=y_pos, s='M', size=15,color='white', weight='bold')
ax.text(x=x_pos-1, y=y_pos, s='F',size=15,color='white', weight='bold')
# Exploded Section
explode_value = str(round(diff))+'%'
ax3.text(x=0.6, y=0.4, s=explode_value,size=14, weight='bold',color='white')
# Subtitles
ax1.text(x=-0.7, y=-1.5, s='Gender Split\nTAFE & DETE', size=20,color='#8b8b8b', weight='bold')
ax2.text(x=-0.7, y=-1.5, s='Gender Split\nDissatisfied', size=20,color='#8b8b8b', weight='bold')
ax3.text(x=-0.7, y=-1.5, s='Gender Most\nDissatisfied', size=20,color='#8b8b8b', weight='bold')
# Footer
ax1.text(x=-1.2, y=-1.9, s='James Holiday-Scott, 2021' + ' '*177, backgroundcolor='grey', color='white', size=11)
plt.show()
###Output
_____no_output_____ |
! Dissertation/*LSTM/LSTM Prediction - Univariate & Multivariate 2 - Completed, weird pred.ipynb | ###Markdown
Univariate LSTM - Single asset (in-sample) prediction 1. Import Libraries
###Code
#Libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from pandas_datareader import data
import math
import datetime as dt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
#LSTM RNN
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout, GRU, Bidirectional
from keras.optimizers import SGD
from keras.callbacks import EarlyStopping
#Check for stationarity
from sklearn.metrics import mean_squared_error
plt.style.use('seaborn-darkgrid')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Some helpful resources: [Multi stock prediction w single nn](https://www.kaggle.com/humamfauzi/multiple-stock-prediction-using-single-nn), [LSTM math](https://medium.com/deep-math-machine-learning-ai/chapter-10-1-deepnlp-lstm-long-short-term-memory-networks-with-math-21477f8e4235) Root of mean squared error (helper fn)
###Code
def rmse_return(test,predicted):
rmse = math.sqrt(mean_squared_error(test, predicted))
print("The root mean squared error is {}.".format(rmse))
#Import single asset: GLD
def GetData(asset_name):
return pd.read_csv('Asset_Dataset/'+asset_name+'.csv', usecols=['Date','Adj Close'], parse_dates=True, index_col='Date' ).astype('float32').dropna()
def GetVol(asset_name):
return pd.read_csv('Asset_Dataset/'+asset_name+'.csv', usecols=['Date','Volume'], parse_dates=True, index_col='Date' ).astype('float32').dropna()
data_GLD = GetData('GLD')
data_GLD.plot()
plt.title('GLD Closing Price')
plt.ylabel('Asset Price USD')
plt.ylim(90,140)
plt.xlabel('Time')
###Output
_____no_output_____
###Markdown
Univariate LSTM - Single Asset
###Code
#train-test split
split_date = '2018-01-01'
train = data_GLD[data_GLD.index<split_date]
test = data_GLD[data_GLD.index>=split_date]
plt.plot(data_GLD)
plt.axvspan(data_GLD.index[0], split_date, color='blue', alpha=0.1)
plt.axvspan(split_date, data_GLD.index[-1], color='red', alpha=0.1)
plt.xlim(data_GLD.index[0],data_GLD.index[-1])
plt.title('Train (Blue) & Test (Red) Split')
plt.legend(['Closing Price USD'])
plt.xlabel('Year')
plt.ylabel('USD')
plt.show()
print(' Training set consists of {}% of data'.format(round(train.shape[0]/data_GLD.shape[0],2)*100))
###Output
_____no_output_____
###Markdown
Use MinMaxScaler() to scale the values fed to the network to between 0 and 1. Then, reshape so that the shapes match.* Q is the week-ahead horizon for prediction
###Code
mm = MinMaxScaler()
train = np.reshape(train.values, (len(train), 1))
train = mm.fit_transform(train)
test = np.reshape(test.values, (len(test),1))
test = mm.transform(test)
Q = 1
X_train = train[0:len(train)-Q]
y_train = train[Q:len(train)]
X_test = test[0:len(test)-Q]
y_test = test[Q:len(test)]
X_train = np.reshape(X_train, (len(X_train), 1, X_train.shape[1]))
X_test = np.reshape(X_test, (len(X_test), 1, X_test.shape[1]))
###Output
_____no_output_____
###Markdown
* defines early stopping patience and lstm node count* use MSE instead of MAE due to potentially small spread in errors
###Code
patience = 15
lstm_nodes = 32
Univar_LSTM = 'reset'
# designing NN
Univar_LSTM = Sequential()
Univar_LSTM.add(LSTM(lstm_nodes, activation='relu', input_shape=(X_train.shape[1], X_train.shape[2])))
Univar_LSTM.add(Dense(1))
Univar_LSTM.compile(optimizer='adam', loss='mse')
# fit NN
history_univar = Univar_LSTM.fit(X_train, y_train, batch_size=1, epochs=100,
                           validation_data=(X_test, y_test), callbacks = [EarlyStopping(monitor='val_loss', patience=patience)],
verbose=-1)
# plot history
plt.plot(history_univar.history['loss'], label='train')
plt.plot(history_univar.history['val_loss'], label='test')
plt.title('Univariate Training History')
plt.xlabel('Epochs')
plt.ylabel('Mean Squared Error')
plt.legend()
plt.show()
#finding predictions for testing data
predicted_univar = Univar_LSTM.predict(X_test)
predicted_univar = mm.inverse_transform(predicted_univar)[:,0]
y_true = mm.inverse_transform(y_test)
#building dataframe of predictions values
pred = pd.DataFrame({'True':y_true.flatten(),'Pred_Univar':predicted_univar.flatten()})
pred.index = data_GLD[data_GLD.index>=split_date][:-1].index
#percent change per day, and the difference between the true and predicted change
change_pct = pred.pct_change()[1:]
change_pct = change_pct[1:]
change_pct['Univar'] = change_pct['True'] - change_pct['Pred_Univar']
change_pct['Tru'] = 0
change_pct.index = data_GLD[data_GLD.index>=split_date][1:-2].index
fig = plt.figure(figsize=[15, 10])
plt.suptitle('Univariate LSTM Predictions')
plt.subplot(221)
pred.Pred_Univar.plot(c='orange')
pred['True'].plot(c='blue')
plt.legend(['Prediction','True'])
plt.title('GLD True vs Univariate Prediction')
plt.ylabel('USD')
plt.xlabel('Date')
plt.subplot(222)
plt.title('Percent Change by Day, difference between True and Predicted')
plt.ylabel('Difference in Percent')
plt.xlabel('Date')
change_pct.Univar.plot(c='orange')
change_pct.Tru.plot(c= 'blue')
plt.legend(['Univariate Prediction','True']);
plt.show()
###Output
_____no_output_____
###Markdown
Observations:* Univariate LSTM tends to under-estimate GLD values* To rectify: * Increase the number of features * Increase the training to overcome over- or under-fitting MULTIVARIATE LSTM - Single asset (in-sample) prediction
###Code
#import market and econ data
%store -r data_gdp
%store -r data_savings
%store -r data_vix
GLD_vol = GetVol('GLD')
#market and econ data
data_m = pd.concat([data_savings,data_vix],axis=1)
data_m = data_m.fillna(method='ffill')
data_all = pd.concat([data_GLD,GLD_vol,data_m],axis=1).dropna()
data_all.columns = ['Adj_Close','Volume','Savings','VIX']
data_all
# convert time series to supervised learning
# Using one lag observation as input (x)
# Using one observation as output (y)
def convert_ts_to_supervised(data_in):
    n_vars = 1 if type(data_in) is list else data_in.shape[1]  # use the input's own column count
df = pd.DataFrame(data_in)
y = list()
names = list()
# Build input sequence
y.append(df.shift(1))
names += [('var%d(t-%d)' % (j+1, 1)) for j in range(n_vars)]
# Build forecast sequence
y.append(df.shift(-1))
names += [('var%d(t+%d)' % (j+1, 1)) for j in range(n_vars)]
# Combine input and forecast sequence
combined_data = pd.concat(y, axis=1)
combined_data.columns = names
# Remove missing values
combined_data.dropna(inplace=True)
return combined_data
def plot_features(data):
    # Plot only the features:
    # GLD close price, Savings, VIX close price
    # (column indices chosen to match these columns in data_all)
    num_features = [0, 2, 3]
    i = 1
    plt.figure(figsize=(10,8))
    for n in num_features:
        plt.subplot(len(num_features), 1, i)
        plt.plot(data.values[:, n])
        plt.title(data.columns[n], y=0.6, loc='left')
        i += 1
    plt.show()
#Engineer the features: normalization and transformation
scaler = MinMaxScaler(feature_range=(0,1))
scaled_in = scaler.fit_transform(data_all)
print(scaled_in)
#Convert TS to supervised learning model
reframed = convert_ts_to_supervised(scaled_in)
print(reframed.head())
# Predict only y=GLD_Close(t+1)
# Drop columns Savings(t+1) and VIX_Close(t+1)
reframed.drop(reframed.columns[[4,5]], axis=1, inplace=True)
print(reframed.head())
# Split into 80% train and 20% test data
values = reframed.values
train_80pct = int(len(values)* 0.8)
train = values[:train_80pct, :]
test = values[train_80pct:, :]
# Split training and test data into input(x) and output(y)
train_X, train_y = train[:, :-1], train[:, -1]
test_X, test_y = test[:, :-1], test[:, -1]
# Reshape for LSTM network: [samples, timesteps, features]
train_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))
test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))
# Design network and fit the model
model = Sequential()
model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
fitted = model.fit(train_X, train_y, epochs=200, batch_size=100, validation_data=(test_X, test_y), verbose=2, shuffle=False)
# Plot history
plt.plot(fitted.history['loss'], label='train')
plt.plot(fitted.history['val_loss'], label='test')
plt.legend()
plt.show()
# Predict GLD Close Price
yhat = model.predict(test_X)
test_X = test_X.reshape((test_X.shape[0], test_X.shape[2]))
# invert scaling for forecast
inv_yhat = np.concatenate((yhat, test_X[:, 1:]), axis=1)
inv_yhat = scaler.inverse_transform(inv_yhat)
inv_yhat = inv_yhat[:,0]
# Reverse scaling to get actual value
test_y = test_y.reshape((len(test_y), 1))
rev_y = np.concatenate((test_y, test_X[:, 1:]), axis=1)
rev_y = scaler.inverse_transform(rev_y)
rev_y = rev_y[:,0]
print (rev_y)
# Calculate and print RMSE
rmse = math.sqrt(mean_squared_error(rev_y, inv_yhat))
print('RMSE Result: %.5f' % rmse)
###Output
RMSE Result: 5.81737
|
notebooks/B2-mol2vec.ipynb | ###Markdown
Learning Mol2vec IntroductionIn the previous notebook, RDKit provided some fingerprinting methods so that you can make chemical comparisons, using _numerical vectors_. Depending on your application, however, these fingerprints may not _encode_ the information relevant to your task: this corresponds to a relatively broad area of machine learning, which is "representation learning"; how machines learn and use concrete representations of typically abstract concepts. In this case, we're concerned with _how molecules should be represented_, tailored to your purpose. For example, maybe you need to know not only which functional groups are present, but also _how many_ and whether they are redundant. Another case may be if you need to know whether an aromatic ring is next to a particular functional group. These features/aspects may not be captured, or at least not at a low enough level for a machine learning method to extract.In this notebook, we'll look at using `mol2vec` to generate "encodings" or representations of molecules. `mol2vec` is inspired by `word2vec`, which is a widely used algorithm in natural language processing. Before we explain what `mol2vec` does, it's more instructive to start with `word2vec` as it is likely more relatable. Word2VecIn natural language, we form sentences using words. Each word carries a specific meaning, and a sentence of words establishes context that modifies the meaning of words. From a computing perspective, we could easily encode each word, or even letter, as vectors:```pythonhello = [1, 0, 0, 0, 0, 0]goodbye = [0, 1, 0, 0, 0, 0]apple = [0, 0, 1, 0, 0, 0]banana = [0, 0, 0, 1, 0, 0]...```You can then encode an entire dictionary of words into this format, where every word has 1 at a specific position (one-hot encoding). These words then make up a _corpus_, or your vocabulary. Now this is useful if you only cared about the words themselves, but in a sentence each word is independent of one another: this encoding does not reflect the fact that "goodbye" should come after "hello". In this corpus, the words hello/goodbye in English should also be more similar in semantics as they are greetings, compared to apple/banana which are fruits. `word2vec` was developed to generate encodings like the ones above, except in a more semantically useful manner, by training a machine learning model to convert words into vectors, where it learns how words are related to one another based on common usage. For example, we can take a textbook and throw it into `word2vec`, where it then learns how certain words are used in sentences more frequently than others. On a grander scale, you can do the same with Wikipedia, and the model will learn a larger corpus/vocabulary, and a different set of semantics. Alternatively, you can also train it on Tweets, which is guaranteed to produce racist and poorly formed sentences.There are two ways of doing this, although I'll only explain one. The training method is to show the model sentences where one word is omitted out of say ten words (a window), and the job of the model is to predict which word in the corpus has been omitted. The workflow is something like this: using an example sentence "Jane is visiting ____ in September", the model has to find/recommend the most likely word to appear based on the training dataset. Mol2vecThe `mol2vec` package uses the same machinery as `word2vec`, but applied to molecules. We train a corpus/vocabulary of molecules, where molecules are to sentences as atoms are to words. 
Instead of training a model to predict words, the model now predicts atoms in molecules. In the same way how words can now carry context, atoms can also carry information about their local environment.
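To make the word-and-sentence analogy concrete, here is a tiny, self-contained `gensim` sketch (the sentences are made up; note that the embedding-size argument is called `vector_size` in gensim 4 and `size` in older releases):
```python
from gensim.models import Word2Vec

# a miniature "corpus": each sentence is a list of tokens
sentences = [
    ["hello", "goodbye", "friend"],
    ["apple", "banana", "fruit"],
    ["hello", "friend", "apple"],
]

# train a small skip-gram model; every token gets a dense vector
toy_model = Word2Vec(sentences, vector_size=16, window=2, min_count=1, sg=1, epochs=50)

print(toy_model.wv["hello"].shape)          # the learned 16-dimensional embedding
print(toy_model.wv.most_similar("hello"))   # nearest tokens in the embedding space
```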
###Code
from pathlib import Path
from tempfile import NamedTemporaryFile
import fileinput
import os
import pandas as pd
from mol2vec import features
from rdkit import Chem
from rdkit.Chem.Draw import IPythonConsole
from gensim.models import word2vec
benzene = Chem.MolFromSmiles("c1ccccc1")
###Output
_____no_output_____
###Markdown
The `mol2alt_sentence` function generates Morgan identifiers for each atom; the identifier takes into account the atom type, as well as its local environment.
###Code
benzene_sentence = features.mol2alt_sentence(benzene, 1)
print(benzene_sentence)
###Output
['3218693969', '98513984', '3218693969', '98513984', '3218693969', '98513984', '3218693969', '98513984', '3218693969', '98513984', '3218693969', '98513984']
###Markdown
Organizing SMILES for corpus generationHere we're going to rely on two big datasets to build our corpus. It's important to keep in mind that all of our results will depend heavily on this step: the choice of molecules that will compose our corpus, and the number of examples for training. We'll be combining large, commonly used datasets: the QM9 set and a subset of ZINC15 containing molecules up to 200 amu in mass. Our primary focus here is on small-ish molecules (on the scale of biomolecules), and as such we do not want to swamp our corpus with extremely large molecules. Finally, we're going to also include molecules from the KIDA reaction network, which are astronomically relevant. Take a look at notebook "B1" for combining this data. Generating a corpus for training
###Code
CORPUSNAME = "mol2vec_corpus.dat"
RADIUS = 1
NJOBS = 4
# create a corpus from the SMILES
features.generate_corpus("collected_smiles.smi", CORPUSNAME, RADIUS, sentence_type="alt", n_jobs=NJOBS, sanitize=False)
###Output
[Parallel(n_jobs=4)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=4)]: Done 123 tasks | elapsed: 0.9s
[Parallel(n_jobs=4)]: Done 59396 tasks | elapsed: 6.3s
[Parallel(n_jobs=4)]: Done 187396 tasks | elapsed: 18.4s
[Parallel(n_jobs=4)]: Done 366596 tasks | elapsed: 37.3s
[Parallel(n_jobs=4)]: Done 596996 tasks | elapsed: 1.0min
[Parallel(n_jobs=4)]: Done 878596 tasks | elapsed: 1.5min
[Parallel(n_jobs=4)]: Done 1211396 tasks | elapsed: 2.1min
[Parallel(n_jobs=4)]: Done 1537432 out of 1537432 | elapsed: 2.7min finished
###Markdown
Training the `mol2vec` model
###Code
model = features.train_word2vec_model(CORPUSNAME, "mol2vec_model.pkl", vector_size=300, min_count=1, n_jobs=NJOBS)
###Output
Runtime: 5.77 minutes
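###Markdown
As a final sketch of how the trained model could be used (assuming `train_word2vec_model` returns a gensim Word2Vec object, which may differ between mol2vec and gensim versions), a molecule can be embedded by summing the vectors of its Morgan-identifier "words":
```python
import numpy as np

# benzene's "sentence" of Morgan identifiers, as computed earlier
sentence = features.mol2alt_sentence(benzene, RADIUS)

# sum the word vectors of the identifiers the model has actually seen
vec = np.sum([model.wv[token] for token in sentence if token in model.wv], axis=0)
print(vec.shape)
```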
|
per-project-usage-shardingsphere.ipynb | ###Markdown
Job executions per monthMaximum available value is 180 * 24 * days == 129600 (30 days) .. 133920 (31 days)
###Code
actions[["jobhours"]].groupby(actions.month).agg({"jobhours":["sum","mean", "max", "count"]})
###Output
_____no_output_____
###Markdown
Number of jobs executed by git repositories (last month)
###Code
actions[actions.month == last_month][["repo","jobhours"]].groupby("repo").agg({"jobhours":["sum","mean", "max"]}).sort_values(('jobhours',"sum"), ascending=False).head(20)
###Output
_____no_output_____
###Markdown
Job hour statustics per workflows
###Code
actions[actions.month == last_month][["repo","workflowid","jobhours"]].groupby(["repo","workflowid"]).agg({"jobhours":["sum","mean", "max"]}).sort_values(('jobhours',"sum"), ascending=False)
###Output
_____no_output_____
###Markdown
Slowest workflow runs
###Code
actions.sort_values("jobhours", ascending=False).head(25)
job = pd.read_csv("github-action-job.csv.gz")
job.startedat = pd.to_datetime(job.startedat * 1000000, utc = True)
job.completedat = pd.to_datetime(job.completedat * 1000000, utc = True)
job["project"] = job.repo.apply(asf_project)
job["jobhours"] = (job.completedat - job.startedat).dt.seconds / 60 / 60
job = job[job.project == project]
###Output
_____no_output_____
###Markdown
Slowest job executions by job names
###Code
job[["jobhours"]].groupby([job.org,job.repo, job.name]).sum().reset_index().sort_values("jobhours", ascending=False).head(25)
###Output
_____no_output_____
###Markdown
Number of job executions per status
###Code
job[["id"]].groupby([job.org,job.repo, job.conclusion]).count().reset_index().sort_values("id", ascending=False).head(25)
# Build an event table: each job contributes a +1 event at its start time ...
start = job.loc[:,["org","repo","project","id","runid","startedat"]]
start["value"] = 1
start = start.rename(columns={"startedat":"date"})
# ... and a -1 event at its completion time
end = job.loc[:,["org","repo","project","id","runid","completedat"]]
end["value"] = -1
end = end.rename(columns={"completedat":"date"})
# Sorting all events by time and taking the cumulative sum gives the number of jobs running at any moment
events = pd.concat([start, end]).sort_values("date")
events["running"] = events.value.cumsum()
###Output
_____no_output_____
###Markdown
Average (12h window) parallel running/queued job at a given time
###Code
r = events.set_index('date')
r = r.sort_index()
r = r.resample("12H").mean().fillna(0)
plt.figure(figsize=(20,8))
plt.plot(r.index,r.running)
plt.show()
## Max (12h window) parallel running/queued job at a given time
r = events.set_index('date')
r = r.sort_index()
r = r.resample("12H").max().fillna(0)
plt.figure(figsize=(20,8))
plt.plot(r.index,r.running)
plt.show()
###Output
_____no_output_____ |
present/bi2/2020/ubb/az_en_jupyter2_mappam/PythonDataScienceHandbook/05.13-Kernel-Density-Estimation.ipynb | ###Markdown
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).**The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!* In-Depth: Kernel Density Estimation In the previous section we covered Gaussian mixture models (GMM), which are a kind of hybrid between a clustering estimator and a density estimator.Recall that a density estimator is an algorithm which takes a $D$-dimensional dataset and produces an estimate of the $D$-dimensional probability distribution which that data is drawn from.The GMM algorithm accomplishes this by representing the density as a weighted sum of Gaussian distributions.*Kernel density estimation* (KDE) is in some senses an algorithm which takes the mixture-of-Gaussians idea to its logical extreme: it uses a mixture consisting of one Gaussian component *per point*, resulting in an essentially non-parametric estimator of density.In this section, we will explore the motivation and uses of KDE.We begin with the standard imports:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
###Output
_____no_output_____
###Markdown
Motivating KDE: HistogramsAs already discussed, a density estimator is an algorithm which seeks to model the probability distribution that generated a dataset.For one dimensional data, you are probably already familiar with one simple density estimator: the histogram.A histogram divides the data into discrete bins, counts the number of points that fall in each bin, and then visualizes the results in an intuitive manner.For example, let's create some data that is drawn from two normal distributions:
###Code
def make_data(N, f=0.3, rseed=1):
rand = np.random.RandomState(rseed)
x = rand.randn(N)
x[int(f * N):] += 5
return x
x = make_data(1000)
###Output
_____no_output_____
###Markdown
We have previously seen that the standard count-based histogram can be created with the ``plt.hist()`` function.By specifying the ``density`` parameter of the histogram, we end up with a normalized histogram where the height of the bins does not reflect counts, but instead reflects probability density:
###Code
hist = plt.hist(x, bins=30, density=True)
###Output
_____no_output_____
###Markdown
Notice that for equal binning, this normalization simply changes the scale on the y-axis, leaving the relative heights essentially the same as in a histogram built from counts.This normalization is chosen so that the total area under the histogram is equal to 1, as we can confirm by looking at the output of the histogram function:
###Code
density, bins, patches = hist
widths = bins[1:] - bins[:-1]
(density * widths).sum()
###Output
_____no_output_____
###Markdown
One of the issues with using a histogram as a density estimator is that the choice of bin size and location can lead to representations that have qualitatively different features.For example, if we look at a version of this data with only 20 points, the choice of how to draw the bins can lead to an entirely different interpretation of the data!Consider this example:
###Code
x = make_data(20)
bins = np.linspace(-5, 10, 10)
fig, ax = plt.subplots(1, 2, figsize=(12, 4),
sharex=True, sharey=True,
subplot_kw={'xlim':(-4, 9),
'ylim':(-0.02, 0.3)})
fig.subplots_adjust(wspace=0.05)
for i, offset in enumerate([0.0, 0.6]):
ax[i].hist(x, bins=bins + offset, density=True)
ax[i].plot(x, np.full_like(x, -0.01), '|k',
markeredgewidth=1)
###Output
_____no_output_____
###Markdown
On the left, the histogram makes clear that this is a bimodal distribution.On the right, we see a unimodal distribution with a long tail.Without seeing the preceding code, you would probably not guess that these two histograms were built from the same data: with that in mind, how can you trust the intuition that histograms confer?And how might we improve on this?Stepping back, we can think of a histogram as a stack of blocks, where we stack one block within each bin on top of each point in the dataset.Let's view this directly:
###Code
fig, ax = plt.subplots()
bins = np.arange(-3, 8)
ax.plot(x, np.full_like(x, -0.1), '|k',
markeredgewidth=1)
for count, edge in zip(*np.histogram(x, bins)):
for i in range(count):
ax.add_patch(plt.Rectangle((edge, i), 1, 1,
alpha=0.5))
ax.set_xlim(-4, 8)
ax.set_ylim(-0.2, 8)
###Output
_____no_output_____
###Markdown
The problem with our two binnings stems from the fact that the height of the block stack often reflects not on the actual density of points nearby, but on coincidences of how the bins align with the data points.This mis-alignment between points and their blocks is a potential cause of the poor histogram results seen here.But what if, instead of stacking the blocks aligned with the *bins*, we were to stack the blocks aligned with the *points they represent*?If we do this, the blocks won't be aligned, but we can add their contributions at each location along the x-axis to find the result.Let's try this:
###Code
x_d = np.linspace(-4, 8, 2000)
density = sum((abs(xi - x_d) < 0.5) for xi in x)
plt.fill_between(x_d, density, alpha=0.5)
plt.plot(x, np.full_like(x, -0.1), '|k', markeredgewidth=1)
plt.axis([-4, 8, -0.2, 8]);
###Output
_____no_output_____
###Markdown
The result looks a bit messy, but is a much more robust reflection of the actual data characteristics than is the standard histogram.Still, the rough edges are not aesthetically pleasing, nor are they reflective of any true properties of the data.In order to smooth them out, we might decide to replace the blocks at each location with a smooth function, like a Gaussian.Let's use a standard normal curve at each point instead of a block:
###Code
from scipy.stats import norm
x_d = np.linspace(-4, 8, 1000)
density = sum(norm(xi).pdf(x_d) for xi in x)
plt.fill_between(x_d, density, alpha=0.5)
plt.plot(x, np.full_like(x, -0.1), '|k', markeredgewidth=1)
plt.axis([-4, 8, -0.2, 5]);
###Output
_____no_output_____
###Markdown
This smoothed-out plot, with a Gaussian distribution contributed at the location of each input point, gives a much more accurate idea of the shape of the data distribution, and one which has much less variance (i.e., changes much less in response to differences in sampling).These last two plots are examples of kernel density estimation in one dimension: the first uses a so-called "tophat" kernel and the second uses a Gaussian kernel.We'll now look at kernel density estimation in more detail. Kernel Density Estimation in PracticeThe free parameters of kernel density estimation are the *kernel*, which specifies the shape of the distribution placed at each point, and the *kernel bandwidth*, which controls the size of the kernel at each point.In practice, there are many kernels you might use for a kernel density estimation: in particular, the Scikit-Learn KDE implementation supports one of six kernels, which you can read about in Scikit-Learn's [Density Estimation documentation](http://scikit-learn.org/stable/modules/density.html).While there are several versions of kernel density estimation implemented in Python (notably in the SciPy and StatsModels packages), I prefer to use Scikit-Learn's version because of its efficiency and flexibility.It is implemented in the ``sklearn.neighbors.KernelDensity`` estimator, which handles KDE in multiple dimensions with one of six kernels and one of a couple dozen distance metrics.Because KDE can be fairly computationally intensive, the Scikit-Learn estimator uses a tree-based algorithm under the hood and can trade off computation time for accuracy using the ``atol`` (absolute tolerance) and ``rtol`` (relative tolerance) parameters.The kernel bandwidth, which is a free parameter, can be determined using Scikit-Learn's standard cross validation tools as we will soon see.Let's first show a simple example of replicating the above plot using the Scikit-Learn ``KernelDensity`` estimator:
###Code
from sklearn.neighbors import KernelDensity
# instantiate and fit the KDE model
kde = KernelDensity(bandwidth=1.0, kernel='gaussian')
kde.fit(x[:, None])
# score_samples returns the log of the probability density
logprob = kde.score_samples(x_d[:, None])
plt.fill_between(x_d, np.exp(logprob), alpha=0.5)
plt.plot(x, np.full_like(x, -0.01), '|k', markeredgewidth=1)
plt.ylim(-0.02, 0.22)
###Output
_____no_output_____
###Markdown
The result here is normalized such that the area under the curve is equal to 1. Selecting the bandwidth via cross-validation The choice of bandwidth within KDE is extremely important to finding a suitable density estimate, and is the knob that controls the bias–variance trade-off in the estimate of density: too narrow a bandwidth leads to a high-variance estimate (i.e., over-fitting), where the presence or absence of a single point makes a large difference. Too wide a bandwidth leads to a high-bias estimate (i.e., under-fitting) where the structure in the data is washed out by the wide kernel. There is a long history in statistics of methods to quickly estimate the best bandwidth based on rather stringent assumptions about the data: if you look up the KDE implementations in the SciPy and StatsModels packages, for example, you will see implementations based on some of these rules. In machine learning contexts, we've seen that such hyperparameter tuning often is done empirically via a cross-validation approach. With this in mind, the ``KernelDensity`` estimator in Scikit-Learn is designed such that it can be used directly within Scikit-Learn's standard grid search tools. Here we will use ``GridSearchCV`` to optimize the bandwidth for the preceding dataset. Because we are looking at such a small dataset, we will use leave-one-out cross-validation, which minimizes the reduction in training set size for each cross-validation trial:
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import LeaveOneOut
bandwidths = 10 ** np.linspace(-1, 1, 100)
grid = GridSearchCV(KernelDensity(kernel='gaussian'),
{'bandwidth': bandwidths},
cv=LeaveOneOut())
grid.fit(x[:, None]);
###Output
_____no_output_____
###Markdown
Now we can find the choice of bandwidth which maximizes the score (which in this case defaults to the log-likelihood):
###Code
grid.best_params_
###Output
_____no_output_____
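###Markdown
It can also be instructive to look at the whole cross-validation curve rather than just the single best value. The following is a minimal sketch; it reuses the `grid` and `bandwidths` objects fit above and the `plt` alias already imported in this notebook:
###Code
# Sketch: cross-validated score as a function of bandwidth
scores = grid.cv_results_['mean_test_score']
plt.semilogx(bandwidths, scores)
plt.xlabel('bandwidth')
plt.ylabel('mean cross-validated log-likelihood');
###Output
_____no_output_____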
###Markdown
The optimal bandwidth happens to be very close to what we used in the example plot earlier, where the bandwidth was 1.0 (i.e., the default width of ``scipy.stats.norm``). Example: KDE on a Sphere Perhaps the most common use of KDE is in graphically representing distributions of points. For example, in the Seaborn visualization library (see [Visualization With Seaborn](04.14-Visualization-With-Seaborn.ipynb)), KDE is built in and automatically used to help visualize points in one and two dimensions. Here we will look at a slightly more sophisticated use of KDE for visualization of distributions. We will make use of some geographic data that can be loaded with Scikit-Learn: the geographic distributions of recorded observations of two South American mammals, *Bradypus variegatus* (the Brown-throated Sloth) and *Microryzomys minutus* (the Forest Small Rice Rat). With Scikit-Learn, we can fetch this data as follows:
###Code
from sklearn.datasets import fetch_species_distributions
# this step might fail based on permissions and network access
# if in Docker, specify --network=host
# if in docker-compose specify version 3.4 and build -> network: host
data = fetch_species_distributions()
# Get matrices/arrays of species IDs and locations
latlon = np.vstack([data.train['dd lat'],
data.train['dd long']]).T
species = np.array([d.decode('ascii').startswith('micro')
for d in data.train['species']], dtype='int')
###Output
_____no_output_____
###Markdown
With this data loaded, we can use the Basemap toolkit (mentioned previously in [Geographic Data with Basemap](04.13-Geographic-Data-With-Basemap.ipynb)) to plot the observed locations of these two species on the map of South America.
###Code
# !conda install -c conda-forge basemap-data-hires -y
# RESTART KERNEL
#Hack to fix missing PROJ4 env var
import os
import conda
conda_file_dir = conda.__file__
conda_dir = conda_file_dir.split('lib')[0]
proj_lib = os.path.join(os.path.join(conda_dir, 'share'), 'proj')
os.environ["PROJ_LIB"] = proj_lib
from mpl_toolkits.basemap import Basemap
from sklearn.datasets.species_distributions import construct_grids
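# NOTE: in more recent scikit-learn releases this helper module is private/relocated,
# so the import above may need adjusting depending on the installed version.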
xgrid, ygrid = construct_grids(data)
# plot coastlines with basemap
m = Basemap(projection='cyl', resolution='c',
llcrnrlat=ygrid.min(), urcrnrlat=ygrid.max(),
llcrnrlon=xgrid.min(), urcrnrlon=xgrid.max())
m.drawmapboundary(fill_color='#DDEEFF')
m.fillcontinents(color='#FFEEDD')
m.drawcoastlines(color='gray', zorder=2)
m.drawcountries(color='gray', zorder=2)
# plot locations
m.scatter(latlon[:, 1], latlon[:, 0], zorder=3,
c=species, cmap='rainbow', latlon=True);
###Output
_____no_output_____
###Markdown
Unfortunately, this doesn't give a very good idea of the density of the species, because points in the species range may overlap one another. You may not realize it by looking at this plot, but there are over 1,600 points shown here! Let's use kernel density estimation to show this distribution in a more interpretable way: as a smooth indication of density on the map. Because the coordinate system here lies on a spherical surface rather than a flat plane, we will use the ``haversine`` distance metric, which will correctly represent distances on a curved surface. There is a bit of boilerplate code here (one of the disadvantages of the Basemap toolkit) but the meaning of each code block should be clear:
###Code
# Set up the data grid for the contour plot
X, Y = np.meshgrid(xgrid[::5], ygrid[::5][::-1])
land_reference = data.coverages[6][::5, ::5]
land_mask = (land_reference > -9999).ravel()
xy = np.vstack([Y.ravel(), X.ravel()]).T
xy = np.radians(xy[land_mask])
# Create two side-by-side plots
fig, ax = plt.subplots(1, 2)
fig.subplots_adjust(left=0.05, right=0.95, wspace=0.05)
species_names = ['Bradypus Variegatus', 'Microryzomys Minutus']
cmaps = ['Purples', 'Reds']
for i, axi in enumerate(ax):
axi.set_title(species_names[i])
# plot coastlines with basemap
m = Basemap(projection='cyl', llcrnrlat=Y.min(),
urcrnrlat=Y.max(), llcrnrlon=X.min(),
urcrnrlon=X.max(), resolution='c', ax=axi)
m.drawmapboundary(fill_color='#DDEEFF')
m.drawcoastlines()
m.drawcountries()
# construct a spherical kernel density estimate of the distribution
kde = KernelDensity(bandwidth=0.03, metric='haversine')
kde.fit(np.radians(latlon[species == i]))
# evaluate only on the land: -9999 indicates ocean
Z = np.full(land_mask.shape[0], -9999.0)
Z[land_mask] = np.exp(kde.score_samples(xy))
Z = Z.reshape(X.shape)
# plot contours of the density
levels = np.linspace(0, Z.max(), 25)
axi.contourf(X, Y, Z, levels=levels, cmap=cmaps[i])
###Output
_____no_output_____
###Markdown
Compared to the simple scatter plot we initially used, this visualization paints a much clearer picture of the geographical distribution of observations of these two species. Example: Not-So-Naive Bayes This example looks at Bayesian generative classification with KDE, and demonstrates how to use the Scikit-Learn architecture to create a custom estimator. In [In Depth: Naive Bayes Classification](05.05-Naive-Bayes.ipynb), we took a look at naive Bayesian classification, in which we created a simple generative model for each class, and used these models to build a fast classifier. For Gaussian naive Bayes, the generative model is a simple axis-aligned Gaussian. With a density estimation algorithm like KDE, we can remove the "naive" element and perform the same classification with a more sophisticated generative model for each class. It's still Bayesian classification, but it's no longer naive. The general approach for generative classification is this:
1. Split the training data by label.
2. For each set, fit a KDE to obtain a generative model of the data. This allows you for any observation $x$ and label $y$ to compute a likelihood $P(x~|~y)$.
3. From the number of examples of each class in the training set, compute the *class prior*, $P(y)$.
4. For an unknown point $x$, the posterior probability for each class is $P(y~|~x) \propto P(x~|~y)P(y)$. The class which maximizes this posterior is the label assigned to the point.

The algorithm is straightforward and intuitive to understand; the more difficult piece is couching it within the Scikit-Learn framework in order to make use of the grid search and cross-validation architecture. This is the code that implements the algorithm within the Scikit-Learn framework; we will step through it following the code block:
###Code
from sklearn.base import BaseEstimator, ClassifierMixin
class KDEClassifier(BaseEstimator, ClassifierMixin):
"""Bayesian generative classification based on KDE
Parameters
----------
bandwidth : float
the kernel bandwidth within each class
kernel : str
the kernel name, passed to KernelDensity
"""
def __init__(self, bandwidth=1.0, kernel='gaussian'):
self.bandwidth = bandwidth
self.kernel = kernel
def fit(self, X, y):
self.classes_ = np.sort(np.unique(y))
training_sets = [X[y == yi] for yi in self.classes_]
self.models_ = [KernelDensity(bandwidth=self.bandwidth,
kernel=self.kernel).fit(Xi)
for Xi in training_sets]
self.logpriors_ = [np.log(Xi.shape[0] / X.shape[0])
for Xi in training_sets]
return self
def predict_proba(self, X):
logprobs = np.array([model.score_samples(X)
for model in self.models_]).T
result = np.exp(logprobs + self.logpriors_)
return result / result.sum(1, keepdims=True)
def predict(self, X):
return self.classes_[np.argmax(self.predict_proba(X), 1)]
###Output
_____no_output_____
###Markdown
The anatomy of a custom estimator Let's step through this code and discuss the essential features:

```python
from sklearn.base import BaseEstimator, ClassifierMixin

class KDEClassifier(BaseEstimator, ClassifierMixin):
    """Bayesian generative classification based on KDE

    Parameters
    ----------
    bandwidth : float
        the kernel bandwidth within each class
    kernel : str
        the kernel name, passed to KernelDensity
    """
```

Each estimator in Scikit-Learn is a class, and it is most convenient for this class to inherit from the ``BaseEstimator`` class as well as the appropriate mixin, which provides standard functionality. For example, among other things, here the ``BaseEstimator`` contains the logic necessary to clone/copy an estimator for use in a cross-validation procedure, and ``ClassifierMixin`` defines a default ``score()`` method used by such routines. We also provide a doc string, which will be captured by IPython's help functionality (see [Help and Documentation in IPython](01.01-Help-And-Documentation.ipynb)). Next comes the class initialization method:

```python
    def __init__(self, bandwidth=1.0, kernel='gaussian'):
        self.bandwidth = bandwidth
        self.kernel = kernel
```

This is the actual code that is executed when the object is instantiated with ``KDEClassifier()``. In Scikit-Learn, it is important that *initialization contains no operations* other than assigning the passed values by name to ``self``. This is due to the logic contained in ``BaseEstimator`` required for cloning and modifying estimators for cross-validation, grid search, and other functions. Similarly, all arguments to ``__init__`` should be explicit: i.e. ``*args`` or ``**kwargs`` should be avoided, as they will not be correctly handled within cross-validation routines. Next comes the ``fit()`` method, where we handle training data:

```python
    def fit(self, X, y):
        self.classes_ = np.sort(np.unique(y))
        training_sets = [X[y == yi] for yi in self.classes_]
        self.models_ = [KernelDensity(bandwidth=self.bandwidth,
                                      kernel=self.kernel).fit(Xi)
                        for Xi in training_sets]
        self.logpriors_ = [np.log(Xi.shape[0] / X.shape[0])
                           for Xi in training_sets]
        return self
```

Here we find the unique classes in the training data, train a ``KernelDensity`` model for each class, and compute the class priors based on the number of input samples. Finally, ``fit()`` should always return ``self`` so that we can chain commands. For example:

```python
label = model.fit(X, y).predict(X)
```

Notice that each persistent result of the fit is stored with a trailing underscore (e.g., ``self.logpriors_``). This is a convention used in Scikit-Learn so that you can quickly scan the members of an estimator (using IPython's tab completion) and see exactly which members are fit to training data. Finally, we have the logic for predicting labels on new data:

```python
    def predict_proba(self, X):
        logprobs = np.vstack([model.score_samples(X)
                              for model in self.models_]).T
        result = np.exp(logprobs + self.logpriors_)
        return result / result.sum(1, keepdims=True)

    def predict(self, X):
        return self.classes_[np.argmax(self.predict_proba(X), 1)]
```

Because this is a probabilistic classifier, we first implement ``predict_proba()`` which returns an array of class probabilities of shape ``[n_samples, n_classes]``. Entry ``[i, j]`` of this array is the posterior probability that sample ``i`` is a member of class ``j``, computed by multiplying the likelihood by the class prior and normalizing. Finally, the ``predict()`` method uses these probabilities and simply returns the class with the largest probability.
Using our custom estimator Let's try this custom estimator on a problem we have seen before: the classification of hand-written digits. Here we will load the digits, and compute the cross-validation score for a range of candidate bandwidths using the ``GridSearchCV`` meta-estimator (refer back to [Hyperparameters and Model Validation](05.03-Hyperparameters-and-Model-Validation.ipynb)):
###Code
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
digits = load_digits()
bandwidths = 10 ** np.linspace(0, 2, 100)
grid = GridSearchCV(KDEClassifier(), {'bandwidth': bandwidths})
grid.fit(digits.data, digits.target)
# scores = [val.mean_validation_score for val in grid.grid_scores_]
scores = grid.cv_results_['mean_test_score']
###Output
_____no_output_____
###Markdown
Next we can plot the cross-validation score as a function of bandwidth:
###Code
plt.semilogx(bandwidths, scores)
plt.xlabel('bandwidth')
plt.ylabel('accuracy')
plt.title('KDE Model Performance')
print(grid.best_params_)
print('accuracy =', grid.best_score_)
###Output
{'bandwidth': 6.135907273413174}
accuracy = 0.9677298050139276
###Markdown
We see that this not-so-naive Bayesian classifier reaches a cross-validation accuracy of just over 96%; this is compared to around 80% for the naive Bayesian classification:
###Code
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
cross_val_score(GaussianNB(), digits.data, digits.target).mean()
###Output
_____no_output_____ |
notebooks/local_versions/MagLook_dask-Local.ipynb | ###Markdown
First Look at Magnetics Data Import Data Files
###Code
%reset -f
import pandas as pd
import hvplot.pandas
import numpy as np
import matplotlib.dates as dates
import warnings
warnings.filterwarnings('ignore')
import holoviews as hv
from holoviews import dim, opts
import hvplot.dask
hv.extension('bokeh')
import glob, os
import dask.dataframe as dd
from time import sleep
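# Toy helper functions (not used in the analysis below)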
def inc(x):
sleep(1)
return x + 1
def add(x, y):
sleep(1)
return x + y
###Output
_____no_output_____
###Markdown
Start a Dask Cluster
###Code
from dask.distributed import Client
client = Client("tcp://10.0.132.32:44559")
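# Note: the address above assumes a Dask scheduler is already running there;
# calling Client() with no arguments would instead start a local cluster.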
client
import datetime
def dateparse (date_string):
return datetime.datetime.strptime(date_string, '%d-%m-%Y %H:%M:%S')
#!ls /home/jovyan/data/bravoseis_data/MAGNETOMETRO/
!head -n 5 /home/jovyan/data/bravoseis_data/MAGNETOMETRO/190127_2000.mag
!head -n 10 /home/jovyan/data/bravoseis_data/MAGNETOMETRO/190127_2000.XYZ
#!head /home/jovyan/data/bravoseis_data/MAGNETOMETRO/Linea_1_001.XYZ
#!head /home/jovyan/data/bravoseis_data/MAGNETOMETRO/Linea_4.mag
from dask import delayed
import dask.array as dsa
###Output
_____no_output_____
###Markdown
Read Gravity Files
###Code
df_grav=dd.read_csv('/home/jovyan/data/bravoseis_data/SADO/jan_2019/gravimetro_bruto.proc/*.proc',
parse_dates=['fecha'], date_parser=dateparse,
dtype = {'fecha': object,'status': np.float64,
'gravimetria_bruta': np.float64, 'spring_tension': np.float64,
'longitud': np.float64, 'latitud': np.float64,
'velocidad': np.float64,'rumbo': np.float64 })
#df.partitions[5].compute()
df_grav=df_grav.set_index("fecha")
del df_grav['fecha_telegrama']
del df_grav['rumbo']
del df_grav['velocidad']
del df_grav['spring_tension']
del df_grav['status']
df_grav.head()
df_grav.index.head()
df_grav = df_grav.resample('s').mean().compute()
# After .compute() this is a plain pandas DataFrame, so Dask-specific attributes such as .partitions are no longer available
df_grav.head()
###Output
_____no_output_____
###Markdown
Read Bathy Files
###Code
df_bath=dd.read_csv('/home/jovyan/data/bravoseis_data/SADO/jan_2019/posicion.proc/*.proc',
parse_dates=True, date_parser=dateparse,
dtype = {'Date': object,'longitud': np.float64,
'latitud': np.float64, 'rumbo': np.float64,
'velocidad': np.float64, 'profundidad': np.float64,
'cog': np.float64,'sog': np.float64 })
#df.partitions[5].compute()
df_bath=df_bath.set_index("fecha")
del df_bath['fecha_telegrama']
del df_bath['rumbo']
del df_bath['velocidad']
df_bath.head()
#df_bath.index = pd.to_datetime(df_bath.index.values)
###Output
_____no_output_____
###Markdown
Merge Dataframes
###Code
#test = pd.merge(df_bath, df_grav,how='inner', indicator=True,left_index=True, right_index=True, suffixes=('_B', '_G'))
test = dd.merge(df_bath, df_grav, how='inner', right_index=True, left_index=True, suffixes=('_B', '_G'), indicator=True).compute()  # indicator=True adds the '_merge' column used below
test.head()
df_gravMerge = test[test['_merge'] == 'both']
del df_gravMerge['_merge']
df_gravMerge['longitud'] = df_gravMerge['longitud_G']
df_gravMerge['latitud'] = df_gravMerge['latitud_G']
del df_gravMerge['longitud_B']
del df_gravMerge['latitud_B']
del df_gravMerge['longitud_G']
del df_gravMerge['latitud_G']
df_gravMerge.head()
#df_gravMerge.size
###Output
_____no_output_____
###Markdown
Downsample the data
###Code
df_minuteGrav = pd.DataFrame()
df_minuteGrav['proc_gravity'] = df_gravMerge.gravimetria_bruta.resample('min').mean()
df_minuteGrav['eotvos'] = df_gravMerge.eotvos.resample('min').mean()
df_minuteGrav['grav_corr'] = df_gravMerge.gravimetria_bruta.resample('min').mean() + df_gravMerge.eotvos.resample('min').mean()
df_minuteGrav['lon'] = df_gravMerge.longitud.resample('min').mean()
df_minuteGrav['lat'] = df_gravMerge.latitud.resample('min').mean()
df_minuteGrav['sog'] = df_gravMerge.sog.resample('min').mean()
df_minuteGrav['cog'] = df_gravMerge.cog.resample('min').mean()
df_minuteGrav['depth'] = df_gravMerge.profundidad.resample('min').mean()
df_minuteGrav.tail()
df_minuteGrav.size
df_minuteGrav2=df_minuteGrav.loc['2019-01-20 00:00:00':'2019-01-24 00:00:00']
df_temp=df_minuteGrav.loc['2019-01-26 21:00:00':'2019-02-05 23:58:00']
df_minuteGrav2=df_minuteGrav2.append(df_temp)
df_minuteGrav2.hvplot.points('lon', 'lat',
height=500,
color='proc_gravity',
cmap='colorwheel',
size=3,
hover_cols=['depth'], title= 'proc_gravity',
fontsize={'title': 16, 'labels': 14, 'xticks': 12, 'yticks': 12})
#df_minuteGrav2.hvplot.heatmap(x='lon', y='lat', C='proc_gravity', reduce_function=np.mean, colorbar=True)
###Output
_____no_output_____
###Markdown
Things to notice:
1. The depth signature is visible.
2. Examine crossing paths... there is a directional dependence to our readings related to ship direction.
3. Is the difference between these lines just the Eötvös correction, or are there other corrections that need to be applied?
4. Would you please share the processing stream?
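As a rough cross-check on point 3, the textbook Eötvös correction can be sketched directly from the ship's speed and course. This is only a sketch of the standard formula (speed over ground in knots, course in degrees from north), not the processing stream used on board, so the unit assumptions should be verified against the SADO documentation:
###Code
# Sketch of the Eotvos correction in mGal: 7.503*V*cos(lat)*sin(course) + 0.004154*V^2
# Assumes sog is in knots and cog in degrees from north; both assumptions need checking.
eotvos_sketch = (7.503 * df_minuteGrav2["sog"]
                 * np.cos(np.radians(df_minuteGrav2["lat"]))
                 * np.sin(np.radians(df_minuteGrav2["cog"]))
                 + 0.004154 * df_minuteGrav2["sog"]**2)
eotvos_sketch.describe()
###Output
_____no_output_____
###Markdown
Returning to the processed gravity time series itself: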
###Code
df_minuteGrav2.hvplot.points('index', 'proc_gravity', color='proc_gravity',
cmap='colorwheel', size=.5,
hover_cols=['cog'], title= 'proc_gravity')
df_minuteGrav2.head(1)
cond1 = df_minuteGrav2["lat"] < -62.44
cond2 = df_minuteGrav2["lat"] > -62.45
cond3 = df_minuteGrav2["lon"] > -58.42
cond4 = df_minuteGrav2["lon"] < -58.36
df_minuteGrav3 = df_minuteGrav2[cond1 & cond2 & cond3 & cond4]
del df_minuteGrav3['eotvos']
del df_minuteGrav3['grav_corr']
df_minuteGrav3.head()
df_minuteGrav3.hvplot.scatter('lon', 'lat',
height=500,
color='proc_gravity',
cmap='colorwheel',
size=50,
hover_cols=['depth'], title= 'proc_gravity subset').opts(bgcolor='grey')
df_minuteGrav3.to_csv('proc_gravity_subset.csv')
###Output
_____no_output_____
###Markdown
The gravitational constant in SI units ($m^3\,kg^{-1}\,s^{-2}$) is GRAVITATIONAL_CONST = 0.00000000006673. Bouguer Correction The mass of the material between the gravity station and the datum also causes a variation of gravity with elevation (Figure 1). This mass effect causes gravity at higher stations to be greater than at stations with lower elevations and thus partly offsets the Free Air effect. To calculate the effect of this mass, a model of the topography must be constructed and its density must be estimated. The traditional approach is crude but has been proven to be effective. In this approach, each station is assumed to sit on a slab of material that extends to infinity laterally and to the elevation datum vertically (Figure 1). The formula for the gravitational attraction of this infinite slab is derived by employing a volume integral to calculate its mass. The resulting correction is named for the French geodesist Pierre Bouguer: Bouguer Correction $= BC = 2\pi G \rho h$, where $G$ is the international gravitational constant, $\rho$ is the density, and $h$ = (elevation - datum elevation). As discussed below, the need to estimate density for the calculation of the Bouguer correction is a significant source of uncertainty in gravity studies. With $G_{obs}$ being observed gravity corrected for drift and tides, the Bouguer anomaly (BA) is then defined as: $BA = G_{obs} - G_t + FAC - BC$, where $G_t$ is the theoretical gravity and $FAC$ the free-air correction. If terrain corrections (see below) are not applied, the term simple Bouguer anomaly is used. If they have been, the term complete Bouguer anomaly is used. A second order correction to account for the curvature of the Earth is often added to this calculation.
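As a back-of-the-envelope illustration (not the correction applied to this survey), the infinite-slab term can be written out directly. The 2670 kg/m^3 density below is only the conventional crustal value and is an assumption:
###Code
# Minimal sketch of the infinite-slab Bouguer correction, in mGal (1 mGal = 1e-5 m/s^2).
# A crustal density of 2670 kg/m^3 is assumed; G matches the constant quoted above.
GRAVITATIONAL_CONST = 6.673e-11  # m^3 kg^-1 s^-2

def bouguer_correction(height_m, density=2670.0):
    """Return 2*pi*G*rho*h in mGal for a station height (m) above the datum."""
    return 2 * np.pi * GRAVITATIONAL_CONST * density * height_m * 1e5

bouguer_correction(100.0)  # roughly 11.2 mGal for a 100 m slab
###Output
_____no_output_____
###Markdown
The cell below converts geodetic latitude and height to a geocentric latitude and radius; it is wrapped in a function here so that it can actually be called (it relies on a `get_ellipsoid()` helper from the gravity library in use, which is not defined in this notebook):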
###Code
def geodetic_to_spherical(latitude, height):
    """Convert geodetic latitude (degrees) and height (m) to geocentric latitude (degrees) and radius (m)."""
    # get_ellipsoid() must be provided by the gravity library in use; it is not defined in this notebook
    ellipsoid = get_ellipsoid()
    # Convert latitude to radians
    latitude_rad = np.radians(latitude)
    prime_vertical_radius = ellipsoid.semimajor_axis / np.sqrt(
        1 - ellipsoid.first_eccentricity ** 2 * np.sin(latitude_rad) ** 2)
    # Instead of computing X and Y, we only compute the projection on the XY plane:
    # xy_projection = sqrt( X**2 + Y**2 )
    xy_projection = (height + prime_vertical_radius) * np.cos(latitude_rad)
    z_cartesian = (height + (1 - ellipsoid.first_eccentricity ** 2) * prime_vertical_radius) * np.sin(latitude_rad)
    radius = np.sqrt(xy_projection ** 2 + z_cartesian ** 2)
    geocentric_latitude = 180 / np.pi * np.arcsin(z_cartesian / radius)
    return geocentric_latitude, radius
###Output
_____no_output_____ |
Code/.ipynb_checkpoints/1_WALIS_Data_Extraction-checkpoint.ipynb | ###Markdown
Data extraction, formatting, and export from the WALIS database This notebook contains scripts that allow querying and extracting data from the "World Atlas of Last Interglacial Shorelines" (WALIS) database. The notebook calls scripts contained in the /scripts folder. After downloading the database (internet connection required), field headers are renamed, and field values are substituted, following 1:n or n:n relationships. The tables composing the database are then saved in CSV, Xls (multi-sheet), and geoJSON formats. Dependencies and packagesThis notebook calls various scripts that are included in the \scripts folder. The following is a list of the python libraries needed to run this notebook.
###Code
import pandas as pd
import MySQLdb
import pandas.io.sql as psql
import numpy as np
import xlsxwriter as writer
from datetime import date
import tqdm
from tqdm.notebook import tqdm_notebook
from IPython.display import *
import ipywidgets as widgets
from ipywidgets import *
import matplotlib.pyplot as plt
from shapely.geometry import Point
import geopandas
import os
import glob
import shutil
import contextily as ctx
import folium
from shapely.geometry import box
import folium.plugins as plugins
from folium.plugins import MarkerCluster
from folium.plugins import Search
import seaborn as sns
import math
from mpl_toolkits.axes_grid1 import make_axes_locatable
from scipy import optimize
from scipy import stats
import functools
import warnings
from bokeh.tile_providers import get_provider, Vendors
from bokeh.io import output_file, output_notebook, show
from bokeh.plotting import figure, ColumnDataSource
from bokeh.palettes import Spectral6
from bokeh.transform import linear_cmap
import bokeh.layouts
from bokeh.layouts import gridplot
from bokeh.models import ColorBar, ColumnDataSource
from bokeh.plotting import figure, output_file, save
from bokeh.models import BoxZoomTool
from ipywidgets import Box
# Ignore warning 'FutureWarning'
warnings.simplefilter(action='ignore', category=FutureWarning)
#pandas options for debugging
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
#Set a date string for exported file names
date=date.today()
dt_string = date.strftime("_%d_%m_%Y")
warnings.filterwarnings('ignore')
###Output
/Users/alessiorovere/opt/anaconda3/envs/WALIS_Database/lib/python3.7/site-packages/geopandas/_compat.py:115: UserWarning: The Shapely GEOS version (3.8.0-CAPI-1.13.1 ) is incompatible with the GEOS version PyGEOS was compiled with (3.9.1-CAPI-1.14.2). Conversions between both will be slow.
shapely_geos_version, geos_capi_version_string
###Markdown
Import database Connect to the online MySQL database containing WALIS data and download data into a series of pandas data frames.
###Code
## Connect to the WALIS database server
%run -i scripts/connection.py
## Import data tables and show progress bar
with tqdm_notebook(total=len(SQLtables),desc='Importing tables from WALIS') as pbar:
for i in range(len(SQLtables)):
query = "SELECT * FROM {}".format(SQLtables[i])
walis_dict[i] = psql.read_sql(query, con=db)
query2 = "SHOW FULL COLUMNS FROM {}".format(SQLtables[i])
walis_cols[i] = psql.read_sql(query2, con=db)
pbar.update(1)
###Output
_____no_output_____
###Markdown
Delete all data in the output folder and save a csv file containing table column descriptions.
###Code
%run -i scripts/create_outfolder.py
###Output
_____no_output_____
###Markdown
Query the database Now, the data is ready to be queried according to a user input. There are three ways to extract data of interest from WALIS. Run either one and proceed.
1. [Select by author](Query-option-1---Select-by-author)
2. [Select by geographic coordinates](Query-option-2---Select-by-geographic-extent)
3. [Select by country](Query-Option-3---Select-by-country)

Query option 1 - Select by author This option compiles data from multiple users who collaborated to create regional datasets for the WALIS Special Issue in ESSD. Select "WALIS Admin" in the dropdown menu if you want to extract the entire database. **NOTE: If you want to change users, just re-run this cell and select a different set of values**
###Code
%run -i scripts/select_user.py
multiUsr
###Output
_____no_output_____
###Markdown
Once the selection is done, run the following cell to query the database and extract only the data inserted by the selected user(s)
###Code
%run -i scripts/multi_author_query.py
###Output
_____no_output_____
###Markdown
Query option 2 - Select by geographic extent This option allows the download of data by geographic extent, defined as maximum-minimum bounds on Latitude and Longitude. Use this website to quickly find bounding coordinates: http://bboxfinder.com.
###Code
# bounding box coordinates in decimal degrees (x=Lon, y=Lat)
xmin=-100
xmax=50
ymin=-80
ymax=80
# From the dictionary in connection.py, extract the dataframes
%run -i scripts/geoextent_query.py
###Output
_____no_output_____
###Markdown
Query Option 3 - Select by country This option allows compiling data from one or more countries.
###Code
%run -i scripts/select_country.py
select_country
%run -i scripts/country_query.py
###Output
_____no_output_____
###Markdown
Substitute data codes The following code makes joins between the data, substituting numerical or comma-separated codes with the corresponding text values. **WARNING - MODIFICATIONS TO THE ORIGINAL DATA** The following adjustments to the data are made:
1. If there is an age in ka, but the uncertainty field is empty, the age uncertainty is set to 30%.
2. If the "timing constraint" is missing, the "MIS limit" is taken. If still empty, it is set to "Equal to".
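For illustration only, the first rule could be expressed along these lines on a toy table (the real logic lives in `scripts/substitutions.py`, and the column names below are placeholders, not the actual WALIS field names):
###Code
# Hypothetical sketch of rule 1: where an age exists but its uncertainty is missing, set it to 30% of the age.
toy = pd.DataFrame({"age_ka": [125.0, 81.0], "age_uncert_ka": [5.0, np.nan]})
mask = toy["age_ka"].notna() & toy["age_uncert_ka"].isna()
toy.loc[mask, "age_uncert_ka"] = 0.3 * toy.loc[mask, "age_ka"]
toy
###Output
_____no_output_____
###Markdown
Running the actual substitution and summary scripts: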
###Code
%run -i scripts/substitutions.py
%run -i scripts/make_summary.py
###Output
_____no_output_____
###Markdown
Write output The following scripts save the data in Xlsx, CSV, and geoJSON format (for use in GIS software).
###Code
%run -i scripts/write_spreadsheets.py
%run -i scripts/write_geojson.py
print ('Done!')
###Output
_____no_output_____ |
Group DYE Code.ipynb | ###Markdown
Data Project 1. Data Collection 1.1 Importing required packages We start off by importing the packages that we will be using in our project.
###Code
import numpy as np
import pandas_datareader
import datetime
import pydst
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
1.2 Importing Data The data required for this analysis is obtained from *Danmarks Statistik*. We will be using the **NAN1** dataset from Danmarks Statistik, which contains yearly key figures for the Danish economy from 1966-2018.
###Code
# Inspect variables
Dst = pydst.Dst(lang="da")
Dst.get_variables(table_id="NAN1")["values"][0][:20]
###Output
_____no_output_____
###Markdown
By using the information obtained from `Dst.get_variables(table_id="NAN1")["values"][0][:20]` we are able to import the variables of interest. Each imported variable from NAN1 is assigned to a variable that we can call upon later in our code. We import the full time range, and the values are given in 2010 prices.
###Code
#Importing desired variables from NAN1
gdp = Dst.get_data(table_id = "NAN1", variables = {"TRANSAKT":["B1GQK"], "PRISENHED":["LAN_M"], "Tid":["*"]})
priv_cons = Dst.get_data(table_id = "NAN1", variables = {"TRANSAKT":["P31S1MD"], "PRISENHED":["LAN_M"], "Tid":["*"]})
publ_cons = Dst.get_data(table_id = "NAN1", variables = {"TRANSAKT":["P3S13D"], "PRISENHED":["LAN_M"], "Tid":["*"]})
inv = Dst.get_data(table_id = "NAN1", variables = {"TRANSAKT":["P51GD"], "PRISENHED":["LAN_M"], "Tid":["*"]})
exp = Dst.get_data(table_id = "NAN1", variables = {"TRANSAKT":["P6D"], "PRISENHED":["LAN_M"], "Tid":["*"]})
imp = Dst.get_data(table_id = "NAN1", variables = {"TRANSAKT":["P7K"], "PRISENHED":["LAN_M"], "Tid":["*"]})
###Output
_____no_output_____
###Markdown
1.3 Cleaning and preparing data We will now create lists containing our variables from the previous section. These will be used later in our loops.
###Code
variable_list = ("B1GQK", "P31S1MD", "P3S13D", "P51GD", "P6D", "P7K")
var_list = (gdp, priv_cons, publ_cons, inv, exp, imp)
var_list_string = ("gdp", "priv_cons", "publ_cons", "inv", "exp", "imp")
for i in var_list:
"""The loop below drops the unwanted columns from the var_list variables for the purpose of cleaning the dataset
"""
i.drop(["TRANSAKT", "PRISENHED"], axis = 1, inplace = True)
for i in var_list:
"""We can now index our data on time, this will ensure that it is treated as time-series.
"""
i.index = i["TID"]
###Output
_____no_output_____
###Markdown
Now, when inspecting our data, we observe that the column names are the same for all variables.
###Code
print(gdp.head()); print(inv.head())
###Output
TID INDHOLD
TID
1991 1991 1306,6
1992 1992 1332,2
2018 2018 2050,5
1993 1993 1332,3
1994 1994 1403,3
TID INDHOLD
TID
1990 1990 221,5
1991 1991 216,3
2018 2018 451,4
1992 1992 215,2
1993 1993 209,5
###Markdown
We can fix the above issue by renaming the columns for our variables. This is done by using the `.rename()` method on the DataFrame. It takes the original column name as input and transforms it to the desired output.
###Code
# Rename variables
gdp = gdp.rename(columns = {"TID":"year", "INDHOLD":"gdp"})
priv_cons = priv_cons.rename(columns = {"TID":"year", "INDHOLD":"priv_cons"})
publ_cons = publ_cons.rename(columns = {"TID":"year", "INDHOLD":"publ_cons"})
inv = inv.rename(columns = {"TID":"year", "INDHOLD":"inv"})
exp = exp.rename(columns = {"TID":"year", "INDHOLD":"exp"})
imp = imp.rename(columns = {"TID":"year", "INDHOLD":"imp"})
"""Dataframe.rename(colums = {"original name":"new name"})
"""
###Output
_____no_output_____
###Markdown
2. Creating a single dataframe We will now construct a single dataframe, indexed on *time*, containing all necessary variables to be used in our analysis. We use the `pd.DataFrame()` function to create an empty pandas dataframe containing only a time index. This will make it easier to merge all sub-dataframes into a single dataframe.
###Code
# Create empty dataframe
data = pd.DataFrame(index=range(1966,2019), columns = ["year"])
# Specify index
data["year"] = range(1966, 2019)
# View empty frame
data.head()
###Output
_____no_output_____
###Markdown
We use the `pandas.merge()` function to merge every subset of our data into a single dataframe. The function merges two dataframes together. It takes the names of the two dataframes as inputs, as well as a specification of how they should be merged (i.e. "inner" in our case) and on what reference column (i.e. "year" in our case). As we can only merge two dataframes at a time, we do multiple partial merges to combine all our sub-dataframes into a single one.
###Code
# Merge Dataset
data = pd.merge(data, gdp, how = "inner", on = ["year"])
""" how explains the type of merge, while on specifies the column we reference to
"""
data = pd.merge(data, priv_cons, how = "inner", on = ["year"])
data = pd.merge(data, publ_cons, how = "inner", on = ["year"])
data = pd.merge(data, inv, how = "inner", on = ["year"])
data = pd.merge(data, exp, how = "inner", on = ["year"])
data = pd.merge(data, imp, how = "inner", on = ["year"])
# View first five rows of dataframe
data.head()
###Output
_____no_output_____
###Markdown
2.1 Preparing dataframe for analysis We will now be cleaning our data, in order to prepare it for analysis. We start off by indexing our final dataframe on "*year*" and deleting the `year` column, as this is no longer of use:
###Code
# Indexing on time
data.index = data["year"]
# Delete "year" column
del data["year"]
data.head()
###Output
_____no_output_____
###Markdown
As we are only interested in values from 1980 to 2018, we will be removing the rows before the year 1980. This is done with the `.loc[]` indexer and stored in the existing dataset as below:
###Code
data = data.loc[1980 :]
data.head()
###Output
_____no_output_____
###Markdown
We will now convert the decimal separator from "," to ".", as Python does not use "," as a decimal separator:
###Code
# Correcting for comma separator and conver to floats
for i in var_list_string:
""" This code is designed in order to change comma-separator
"""
data[i] = data[i].replace(",",".", regex=True)
""" This converts the variables from "strings" to "floats"
"""
data[i] = data[i].astype(float)
data.head()
###Output
_____no_output_____
###Markdown
2.2 Adding additional variables For our analysis, variables such as *Import* and *Export* can be summarized as *Net exports*, which is defined as: $$Net\,export=Export-Import$$ We will now construct a new variable in our data frame named ```nx``` which captures Gross Net Exports, 2010-chained value (billion kr.).
###Code
# Create new column named nx, denoting netexports by subtracting export from import
data["nx"] = data["exp"] - data["imp"]
# We assign the new column to a variable, nx, which will be used in later analysis
nx = data["nx"]
###Output
_____no_output_____
###Markdown
We will now add the respective percentage change for all variables in ```data``` to our dataset. The percentage changes are useful when conducting the analysis and are calculated as follows: $${pct\,change\,in\,x} = \frac{x_t-x_{t-1}}{x_{t-1}}*100$$
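For example, GDP goes from 1306.6 billion DKK in 1991 to 1332.2 billion DKK in 1992 in the preview above, which gives (1332.2 - 1306.6)/1306.6 * 100 ≈ 2.0 %.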
###Code
# Update var_list and var_list_string to include "nx"
var_list = (gdp, priv_cons, publ_cons, inv, exp, imp, nx)
var_list_string = ("gdp", "priv_cons", "publ_cons", "inv", "exp", "imp", "nx")
# Generate percentage change of variables
data_pct_change = data.pct_change()
data_pct_change = data_pct_change * 100 # in order to obtain percentage values
# Rename columns to indicate percentage change
for i in var_list_string:
data_pct_change = data_pct_change.rename(columns = {i:"pct. change in "+i})
# Merge dataset with original dataset
data = pd.merge(data, data_pct_change, how = "inner", on = ["year"])
round(data.head(), 1)
###Output
_____no_output_____
###Markdown
3. Presenting data We will now visually analyse the development of the GDP components, in order to see how these have changed compared to GDP. We start off by comparing the weights of each GDP component from 1980 and 2018 in order to see if anything has changed.
###Code
# calculations for piechart to compare compomnents in 1980 to 2018
sizes_1980 = [data.loc[1980, "priv_cons"], data.loc[1980, "publ_cons"], data.loc[1980, "inv"], data.loc[1980, "nx"]]
sizes_2018 = [data.loc[2018, "priv_cons"], data.loc[2018, "publ_cons"], data.loc[2018, "inv"], data.loc[2018, "nx"]]
labels = ["Private consumption", "Goverment spending", "Investment", "Net exports"]
fig1, ax = plt.subplots(1,2)
plt.style.use("tableau-colorblind10") #This is for cosmetic purposes
# 1980
ax[0].pie(sizes_1980, labels = labels, autopct='%1.1f%%',
shadow=True, startangle=90)
ax[0].axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
ax[0].set_title("GDP components 1980")
#2018
ax[1].pie(sizes_2018, labels = labels, autopct='%1.1f%%',
shadow=True, startangle=90)
ax[1].axis('equal')
ax[1].set_title("GDP components 2018")
#adjustments
plt.subplots_adjust(wspace=1)
#add source
plt.annotate('Source: Danmarks Statistik - http://www.statistikbanken.dk/NAN1', (0,0), (0,0), fontsize = 7.5, xycoords = 'axes fraction', textcoords = 'offset points', va = 'top')
plt.show()
###Output
_____no_output_____
###Markdown
We observe that there have not been any significant changes to the components as pct. of GDP. Investments have increased as a pct. of GDP, from 15.3 % in 1980 to 22.2 % in 2018, whilst Net exports have increased from 3.2 % to 5 %. Private consumption was 51.7 % but is now 47.1 %, whilst government spending has fallen from 29.8 % to 25.7 %. As we will see next, this is not due to a lack of change in the respective components, as there has been an overall increase in GDP and all of its components. We will now analyse the change in the respective components and compare it to the increase in Gross Domestic Product. We define a function in order to reduce the repetitive task of making new figures for each variable. We will show later what the function does.
###Code
def my_graph(var1, varname1, var2 = "gdp", varname2 = "GDP", df = data):
    """
    df - a pandas dataframe containing the variables
    var1 - a string character that indicates the first variable of choice in our dataframe
    var2 - a string character that indicates the second variable of choice in our dataframe
    varname1 - a string character that indicates what we want the first variable to be called
    varname2 - a string character that indicates what we want the second variable to be called
    """
    fig, ax = plt.subplots(1, 2)
    plt.style.use("tableau-colorblind10")

    # Figure 1: levels. The twin (right-hand) axis is created once and reused,
    # so labels and legends end up on the same axis as the plotted line.
    ax0_right = ax[0].twinx()
    ax[0].plot(df.index, df[var2], linestyle = "--", label = varname2)
    ax0_right.plot(df.index, df[var1], label = varname1)
    ax[0].set_xlabel("Years")
    ax[0].set_ylabel(varname2+" (Billion DKK)")
    ax0_right.set_ylabel(varname1+" (Billion DKK)")
    ax[0].set_title(varname2+" and "+varname1)
    ax[0].legend(loc = 4, frameon = False)
    ax0_right.legend(loc = 8, frameon = False)

    # Figure 2: growth rates
    ax1_right = ax[1].twinx()
    ax[1].plot(df.index, df["pct. change in "+var2], linestyle = "--", label = varname2)
    ax1_right.plot(df.index, df["pct. change in "+var1], label = varname1)
    ax[1].set_xlabel("Years")
    ax[1].set_ylabel(varname2+" (growth rate)")
    ax1_right.set_ylabel(varname1+" (growth rate)")
    ax[1].set_title(varname2+" and "+varname1)
    ax[1].legend(loc = 4, frameon = False)
    ax1_right.legend(loc = 8, frameon = False)

    #Adjust size of plot
    plt.subplots_adjust(right = 2, wspace = 0.5, hspace = 0)
    #make annotation for source
    plt.annotate('Source: Danmarks Statistik - http://www.statistikbanken.dk/NAN1', (0,0), (0,-45), fontsize = 7.5,
                 xycoords = 'axes fraction', textcoords = 'offset points', va = 'top')

    plt.show()
###Output
_____no_output_____
###Markdown
We will also create a table containing key statistics about the growth of GDP and its key components. This table will contain the percentage change in the components and the average annual growth rate. The statistics are computed below.
###Code
#Pct change from 1980 to 2018
stats = round((((data.loc[2018]-data.loc[1980])/data.loc[1980])*100), 2)
stats = stats.dropna()
stats = pd.DataFrame(stats, columns = ["Pct. increase"])
# average pct increase
stats["Average pct. increase"] = [np.mean(data["pct. change in gdp"]), np.mean(data["pct. change in priv_cons"]), np.mean(data["pct. change in publ_cons"]), np.mean(data["pct. change in inv"]), np.mean(data["pct. change in exp"]), np.mean(data["pct. change in imp"]), np.mean(data["pct. change in nx"])]
stats["Average pct. increase"] = round(stats["Average pct. increase"],2)
stats.rename(index={"gdp":"Gross Domestic Product"}, inplace=True)
stats.rename(index={"priv_cons":"Private Consumption"}, inplace=True)
stats.rename(index={"publ_cons":"Goverment Expenditure"}, inplace=True)
stats.rename(index={"inv":"Investment"}, inplace=True)
stats.rename(index={"exp":"Export"}, inplace=True)
stats.rename(index={"imp":"Import"}, inplace=True)
stats.rename(index={"nx":"Net Export"}, inplace=True)
# Beginning value and end value
val_1980 = [data.loc[1980, "gdp"], data.loc[1980, "priv_cons"], data.loc[1980, "publ_cons"], data.loc[1980, "inv"], data.loc[1980, "exp"], data.loc[1980, "imp"], data.loc[1980, "nx"]]
val_2018 = [data.loc[2018, "gdp"], data.loc[2018, "priv_cons"], data.loc[2018, "publ_cons"], data.loc[2018, "inv"], data.loc[2018, "exp"], data.loc[2018, "imp"], data.loc[2018, "nx"]]
stats["1980 (bn DKK)"] = val_1980
stats["2018 (bn DKK)"] = val_2018
stats
###Output
_____no_output_____
###Markdown
3.1 Private consumption
###Code
my_graph("priv_cons", "Private consumption")
###Output
No handles with labels found to put in legend.
No handles with labels found to put in legend.
###Markdown
Consumption has increased from 538.8 billion DKK in 1980 to 958.8 billion DKK in 2018. This translates to an increase of 77.95 % during the entire period, with an average annual growth rate of 1.55 %. GDP went from 1048.1 billion DKK to 2050.5 billion DKK, thus increasing by 95.64 % during the entire period, meaning an average annual growth rate of 1.8 %. As the figure above shows, the increase in consumption has closely followed that of GDP, with the exception of 1997-2003, where consumption seemed to stall while GDP increased. Both show a significant increase during the entire period, except for a few drops in consumption in 1987, 1994 and a larger drop around the period of the 2008 financial crisis. 3.2 Government expenditure
###Code
my_graph("publ_cons", "Goverment Expenditure")
###Output
No handles with labels found to put in legend.
No handles with labels found to put in legend.
###Markdown
Government expenditure has increased from 310.2 billion DKK in 1980 to 522.4 billion DKK in 2018. This is an overall increase of 68.41 %, with an average growth rate of 1.39 % per year. Again, GDP and public spending follow the same development during the time period. We see that the only period where this is not true is in 2008, where GDP drops but government spending increases. This might be partially due to fiscal policy. 3.3 Investment
###Code
my_graph("inv", "Investment")
###Output
No handles with labels found to put in legend.
No handles with labels found to put in legend.
###Markdown
Investments have increased significantly during the period. Gross investments have increased from 159.8 billion DKK in 1980 to 451.4 billion DKK, a total increase of 182.48 %, corresponding to an average growth rate of 3 %. We also saw earlier that investments make up 22.2 % of GDP, thus showcasing the change in the Danish economy in regards to investments. Investments did drop significantly during the late 80's and early 90's, as well as during the financial crisis of 2008, dropping by 15 %. They have, however, increased from there on and are currently at their highest level ever. 3.4 Net Exports 3.4a Export and Import
###Code
fig, ax = plt.subplots()
plt.style.use("tableau-colorblind10")
ax.plot(data.index,data["exp"])
ax.plot(data.index,data["imp"], linestyle = "--")
ax.set_title("Export and Import")
ax.set_xlabel("Years")
ax.set_ylabel("(Billion DKK)")
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The figure above shows the change in export and import from 1980-2018. Exports have increased from 245.3 billion DKK in 1980 to 1165.5 billion DKK. This corresponds to a staggering 375.13 % increase in exports in a (roughly) 40-year period. This increase in exports is accompanied by an even larger increase in imports, which have gone from 211.9 billion DKK to 1064.5 billion DKK in 2018, an increase of 402.36 %. These numbers show an overwhelming increase in trade. We will next examine the change in the trade balance (net exports) during the same period. 3.4b Net Exports
###Code
my_graph("nx", "Net exports")
###Output
No handles with labels found to put in legend.
No handles with labels found to put in legend.
###Markdown
Net exports have gone from 33.4 billion DKK in 1980 to 101 billion DKK in 2018, corresponding to a 202.4 % increase, with an average annual growth of 4.6 %. These are impressive numbers considering the short time period and show overall progress in the Danish economy as a whole. 4. Conclusion
###Code
# Drop non-used variables
stats2 = stats.drop(["Gross Domestic Product", "Export", "Import"])
# Figure
fig, ax = plt.subplots()
plt.style.use("tableau-colorblind10")
# Define Bars
ax.bar(stats2.index, stats2["1980 (bn DKK)"], label = "1980", width = 0.5)
ax.bar(stats2.index, stats2["2018 (bn DKK)"], label = "2018", width = 0.25, align = "edge")
# Axis Labels
ax.set_xticklabels(stats2.index, rotation = 90) #Rotating x-axis labels if long names
ax.set_ylabel("Billion DKK")
ax.set_title("GDP Components 1980 & 2018")
ax.legend()
plt.show()
###Output
_____no_output_____ |
experiments/tl_3v2/A/cores-oracle.run1.framed/trials/10/trial.ipynb | ###Markdown
Transfer Learning Template
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
###Output
_____no_output_____
###Markdown
Allowed Parameters These are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
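For reference, a run might be launched with Papermill roughly as follows (the file names here are placeholders, not paths from this repository):
###Code
# Example papermill invocation (placeholder file names), overriding two injected parameters:
# papermill trial.ipynb trial_out.ipynb -p lr 0.0001 -p seed 7
###Output
_____no_output_____
###Markdown
The cell below declares the parameter names that must be supplied.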
###Code
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_3Av2:cores -> oracle.run1.framed",
"device": "cuda",
"lr": 0.0001,
"x_shape": [2, 200],
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 200]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 16000, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10.",
"1-11.",
"1-15.",
"1-16.",
"1-17.",
"1-18.",
"1-19.",
"10-4.",
"10-7.",
"11-1.",
"11-14.",
"11-17.",
"11-20.",
"11-7.",
"13-20.",
"13-8.",
"14-10.",
"14-11.",
"14-14.",
"14-7.",
"15-1.",
"15-20.",
"16-1.",
"16-16.",
"17-10.",
"17-11.",
"17-2.",
"19-1.",
"19-16.",
"19-19.",
"19-20.",
"19-3.",
"2-10.",
"2-11.",
"2-17.",
"2-18.",
"2-20.",
"2-3.",
"2-4.",
"2-5.",
"2-6.",
"2-7.",
"2-8.",
"3-13.",
"3-18.",
"3-3.",
"4-1.",
"4-10.",
"4-11.",
"4-19.",
"5-5.",
"6-15.",
"7-10.",
"7-14.",
"8-18.",
"8-20.",
"8-3.",
"8-8.",
],
"domains": [1, 2, 3, 4, 5],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/cores.stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "take_200"],
"episode_transforms": [],
"domain_prefix": "C_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": ["unit_mag", "take_200", "resample_20Msps_to_25Msps"],
"episode_transforms": [],
"domain_prefix": "O_",
},
],
"seed": 7,
"dataset_seed": 7,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
p.x_shape = [2,256] # Default to this if we don't supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assesment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
###Output
_____no_output_____ |
Email Marketing Campaign.ipynb | ###Markdown
Load Data
###Code
email_opened = pd.read_csv("../Collection of DS take home challenges/data collection-Product dataset数据挑战数据集/ML Email Marketing Campaign/email_opened_table.csv")
email = pd.read_csv("../Collection of DS take home challenges/data collection-Product dataset数据挑战数据集/ML Email Marketing Campaign/email_table.csv")
link_clicked = pd.read_csv("../Collection of DS take home challenges/data collection-Product dataset数据挑战数据集/ML Email Marketing Campaign/link_clicked_table.csv")
email.head()
email.info()
for i in ["email_text", "email_version", "hour", "weekday", "user_country"]:
uniques = sorted(email[i].unique())
print("{0:20s} {1:10d}\t {2}".format(i, len(uniques), uniques[:10]))
email_opened.head()
len(email_opened["email_id"].unique())
link_clicked.head()
len(link_clicked["email_id"].unique())
email_opened["open"] = 1
link_clicked["click"] = 1
email = email.merge(email_opened, how = "left", on = "email_id").merge(link_clicked, how = "left", on = "email_id")
email = email.fillna(0)
print("open rate: {}%".format(email.open.mean()*100))
print("click rate: {}%".format(email.click.mean()*100))
print("{}% of opened email was clicked".format(round(100*sum(email.click)/sum(email.open), 2)))
###Output
20.48% of opened email was clicked
###Markdown
EDA
###Code
email.head()
fig, axs = plt.subplots(1, 3, figsize = (18,6))
sns.countplot(x = "email_text", data = email, ax = axs[0])
sns.barplot(x = "email_text", y = "open", data = email, ax = axs[1])
sns.barplot(x = "email_text", y = "click", data = email, ax = axs[2])
plt.show()
fig, axs = plt.subplots(1, 3, figsize = (18,6))
sns.countplot(x = "email_version", data = email, ax = axs[0])
sns.barplot(x = "email_version", y = "open", data = email, ax = axs[1])
sns.barplot(x = "email_version", y = "click", data = email, ax = axs[2])
plt.show()
fig, axs = plt.subplots(1, 3, figsize = (18,6))
sns.countplot(x = "hour", data = email, ax = axs[0])
sns.barplot(x = "hour", y = "open", data = email, ax = axs[1])
sns.barplot(x = "hour", y = "click", data = email, ax = axs[2])
plt.show()
fig, axs = plt.subplots(1, 3, figsize = (18,6))
sns.countplot(x = "weekday", data = email, ax = axs[0])
sns.barplot(x = "weekday", y = "open", data = email, ax = axs[1])
sns.barplot(x = "weekday", y = "click", data = email, ax = axs[2])
plt.show()
fig, axs = plt.subplots(1, 3, figsize = (18,6))
sns.countplot(x = "user_country", data = email, ax = axs[0])
sns.barplot(x = "user_country", y = "open", data = email, ax = axs[1])
sns.barplot(x = "user_country", y = "click", data = email, ax = axs[2])
plt.show()
fig, axs = plt.subplots(1, 3, figsize = (18,6))
sns.countplot(x = "user_past_purchases", data = email, ax = axs[0])
sns.barplot(x = "user_past_purchases", y = "open", data = email, ax = axs[1])
sns.barplot(x = "user_past_purchases", y = "click", data = email, ax = axs[2])
plt.show()
###Output
_____no_output_____
###Markdown
Model
###Code
email
h2o.init()
h2o.remove_all()
h2o_df = H2OFrame(email)
h2o_df["click"] = h2o_df["click"].asfactor()
h2o_df.summary()
strat_split = h2o_df["click"].stratified_split(test_frac = 0.25)
train = h2o_df[strat_split == "train"]
test = h2o_df[strat_split == "test"]
features = ['email_text', 'email_version', 'hour', 'weekday', 'user_country', 'user_past_purchases']
target = "click"
model = H2ORandomForestEstimator(balance_classes = True)
model.train(x = features, y = target, training_frame = train)
_ = model.varimp_plot()
train_true = train.as_data_frame()["click"]
test_true = test.as_data_frame()["click"]
train_pred = model.predict(train).as_data_frame()["p1"]
test_pred = model.predict(test).as_data_frame()["p1"]
print(classification_report(test_true, (test_pred>0.5).astype(int)))
test_fpr, test_tpr, test_thresh = roc_curve(test_true, test_pred)
train_fpr, train_tpr, train_thresh = roc_curve(train_true, train_pred)
# ROC curves
fig, ax = plt.subplots(figsize=(8, 6))
ax.plot(train_fpr, train_tpr, label='train')
ax.plot(test_fpr, test_tpr, label='test')
ax.set_xlabel('False Positive Rate', fontsize=12)
ax.set_ylabel('True Positive Rate', fontsize=12)
ax.legend(fontsize=12)
plt.show()
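# Editor's note (assumption, not from the original notebook): the threshold
# 0.02143389361115466 used below appears to be read off the ROC curve above;
# one common way to pick such a value is Youden's J statistic, e.g.:
#   best_thresh = test_thresh[np.argmax(test_tpr - test_fpr)]
# (shown only as a hedged sketch of where the number may come from)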
print(classification_report(train_true, (train_pred>0.02143389361115466).astype(int)))
print(classification_report(test_true, (test_pred>0.02143389361115466).astype(int)))
h2o.cluster().shutdown()
###Output
H2O session _sid_a42e closed.
|
netflix-shows.ipynb | ###Markdown
Analysis:1. Which country has the most movies?2. Comparision of total TV shows vs Movies count?3. Who are the most active director?4. What are the trend of movies counts for past 10 years?5. Which types of movie ratings are more popular?
###Code
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
root = '/kaggle/input/netflix-shows/'
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
df = pd.read_csv(
os.path.join(root, 'netflix_titles.csv'))
df
sns.heatmap(df.isnull(), cmap = 'inferno')
df.dropna(subset=['date_added'],axis = 0, inplace = True)
df[df['rating'].isnull()]
replace_rating = {67: 'TV-14', 2359: 'TV-14', 3660: 'PG-13', 3736: 'TV-14', 3737: 'TV-14', 3738: 'TV-14', 4323: 'TV-14'}
for i, rating in replace_rating.items():
df.loc[i, 'rating'] = rating
df.nunique()/df.shape[0]*100
#Adding values to missing elements for country
df['country'] = df['country'].fillna('United States')
df['first_country_in_agg'] = df['country'].apply(lambda x: x.split(",")[0]) #when there are multiple countries in a cell
#Adding an column for season_count
df['season_count'] = df['duration'].apply(lambda x : x.split(" ")[0] if "Season" in x else "")
df['season_count']
# Top10 Countries Based on the Movie Count
from collections import Counter
import matplotlib.pyplot as plt
country_movies = df[df.type=="Movie"].country.value_counts()
#Creating a series for a multiple countries
multiple_labels = country_movies[country_movies.index.str.contains(",")]
multiple_labels = multiple_labels.index.str.split(", ")
#Creating an array separating group values into a single row
a=[]
for i in range(len(multiple_labels)):
for j in range(len(multiple_labels[i])):
a.append(multiple_labels[i][j])
a = country_movies.append(pd.Series(Counter(a)))
b = a.groupby(by= a.index).sum()
country_movies_df = b[~b.index.str.contains(",")]
country_movies_df = country_movies_df.sort_values(ascending = False)[0:10]
plt.figure(figsize=(12, 3))
top10country = country_movies_df.sort_values(ascending = False)[0:10]
sns.barplot(x = top10country.index, y= top10country.values)
plt.show()
movie = df[df['type']== 'Movie']['type'].count()
TV = df[df['type']== 'TV Show']['type'].count()
plt.bar(['Movie', 'TV'], height=[movie, TV], color=['red','green'], visible = True)
plt.title('Comparison between TV and Movie shows count')
plt.xlabel('Medium')
plt.ylabel('Count')
plt.show()
df_movie = df[df.type=='Movie']
df_movie_graph = df_movie.groupby('director', as_index= False).count()[['director','show_id']].sort_values(by='show_id', ascending=False)[:8]
plt.figure(figsize=(15,10))
plt.bar(df_movie_graph['director'], df_movie_graph['show_id'], color=['blue'], visible = True)
plt.title('Most movies by director')
plt.xlabel('Director')
plt.ylabel('Count')
plt.show()
df_movie_graph = df_movie.groupby('release_year', as_index= False).count()[['release_year','show_id']].sort_values(by='release_year', ascending=False)[:10]
plt.bar(df_movie_graph['release_year'], df_movie_graph['show_id'], color=['blue'], visible = True)
plt.title('Movies count for last 10 years')
plt.xlabel('Year')
plt.ylabel('Movie')
plt.show()
df_movie_graph = df_movie.groupby('rating', as_index= False).count()[['rating','show_id']].sort_values(by='show_id', ascending=False)[:10]
plt.bar(df_movie_graph['rating'], df_movie_graph['show_id'], color=['blue'], visible = True)
plt.title('Movies with top rating count')
plt.xlabel('Rating')
plt.ylabel('Movie')
plt.show()
###Output
_____no_output_____ |
present/sapientia1/.ipynb_checkpoints/test-checkpoint.ipynb | ###Markdown
Sapientia data science course series, part one: data visualization Importing the required packages:
###Code
import pandas as pd
import html5lib
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Downloading the population of Romania from INSSE: we use the path of a previously downloaded file.
###Code
csv_path='exportPivot_POP105A.csv' # PATH TO YOUR OWN CSV FILE
df=pd.read_csv(csv_path)
df.head()
###Output
_____no_output_____
###Markdown
Downloading Wikipedia tables
###Code
wiki_path="http://hu.wikipedia.org/wiki/Csíkszereda"
###Output
_____no_output_____
###Markdown
If we get an `html5lib not found` error message, we install it by opening a console (`Command Prompt`) and running the `conda install html5lib` or `pip install html5lib` command. After that, `Jupyter` has to be restarted.
###Code
df2=pd.read_html(wiki_path)
df2[4]
###Output
_____no_output_____
###Markdown
From the list of tables we only need the 5th one (that is, the one with index 4, since indexing starts from 0). Let's save it into the `gf` variable, whose type will be a `pandas` dataframe.
###Code
gf=df2[4]
gf
###Output
_____no_output_____
###Markdown
We only need the rows from 1 to 4, the rest are dropped. Then we set row 0 of the selection as the header. Once that is done, we drop it from the rows as well.
###Code
ef=gf[1:4]
ef.columns=ef.loc[ef.index[0]]
ef=ef.drop(1)
ef=ef.set_index(ef.columns[0])
ef=ef.drop(u'Év',axis=1)
ef
###Output
_____no_output_____
###Markdown
Let's transpose the table:
###Code
rf=ef.T
rf.head(2)
###Output
_____no_output_____
###Markdown
We save the content of the table in a `json` format that can be loaded into D3plus. We achieve this by iterating over the values of the table, row by row and then column by column. Watch out for the Hungarian characters, which is why conversion to `unicode` is important. The values stored in the table are `string`s; we convert these to integers, taking into account the format of positive/negative values.
###Code
#uj=[[] for i in range(len(rf.columns))]
d3=[]
ujnevek=['ujmax','ujmin']
for k in range(len(rf.index)):
i=rf.index[k]
seged={}
for j in range(len(rf.loc[i])):
uc=unicode(rf.loc[i][j])
if ',' in uc:
ertek=-int(uc[1:-2])
else:
ertek=int(uc[0:-1])
#uj[j].append(ertek)
seged[ujnevek[j]]=ertek
seged["honap"]=rf.index[k]
seged["honap2"]=k+1
d3.append(seged)
###Output
_____no_output_____
###Markdown
The result:
###Code
d3
###Output
_____no_output_____
###Markdown
We save the file:
###Code
import json
file('uj.json','w').write(json.dumps(d3))
###Output
_____no_output_____ |
docs/nhypergeom_sims.ipynb | ###Markdown
(page:nufe)= An exact test for non-unity null odds ratios
###Code
from pkg.utils import set_warnings
set_warnings()
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from giskard.plot import set_theme, subuniformity_plot
from matplotlib.figure import Figure
from myst_nb import glue
from scipy.stats import binom, fisher_exact
from pkg.stats import fisher_exact_nonunity
set_theme()
def my_glue(name, variable):
glue(name, variable, display=False)
if isinstance(variable, Figure):
plt.close()
###Output
_____no_output_____
###Markdown
Simulation setup Here, we investigate the performance of this test on simulated independent binomials. The model we use is as follows: Let $$X \sim Binomial(m_x, p_x)$$ independently, $$Y \sim Binomial(m_y, p_y)$$ Fix $m_x = $ {glue:text}`m_x`, $m_y = $ {glue:text}`m_y`, $p_y = $ {glue:text}`p_y`. Let $p_x = \omega p_y$, where $\omega$ is some positive real number, not necessarily equal to 1, such that $p_x \in (0, 1)$.
###Code
m_x = 1000
my_glue("m_x", m_x)
m_y = 2000
my_glue("m_y", m_y)
upper_omegas = np.linspace(1, 4, 7)
lower_omegas = 1 / upper_omegas
lower_omegas = lower_omegas[1:]
omegas = np.sort(np.concatenate((upper_omegas, lower_omegas)))
my_glue("upper_omega", max(omegas))
my_glue("lower_omega", min(omegas))
p_y = 0.01
my_glue("p_y", p_y)
n_sims = 200
my_glue("n_sims", n_sims)
alternative = "two-sided"
###Output
_____no_output_____
###Markdown
Experiment Below, we'll sample from the model described above, varying $\omega$ from {glue:text}`lower_omega` to {glue:text}`upper_omega`. For each value of $\omega$, we'll draw {glue:text}`n_sims` samples of $(x,y)$ from the model described above. For each draw, we'll test the following two hypotheses: $$H_0: p_x = p_y \quad H_A: p_x \neq p_y$$ using Fisher's exact test, and $$H_0: p_x = \omega_0 p_y \quad H_A: p_x \neq \omega_0 p_y$$ using a modified version of Fisher's exact test, which uses [Fisher's noncentral hypergeometric distribution](https://en.wikipedia.org/wiki/Fisher%27s_noncentral_hypergeometric_distribution) as the null distribution. Note that we can re-write this null as $$H_0: \frac{p_x}{p_y} = \omega_0$$ to easily see that this is a test for a posited odds ratio $\omega_0$. For this experiment, we set $\omega_0 = \omega$ - in other words, we assume the true odds ratio is known. Below, we'll call the first hypothesis test **FE** (Fisher's Exact), and the second **NUFE (Non-unity Fisher's Exact)**.
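As an aside (added for clarity, not part of the original analysis), the non-unity null can in principle be tested directly with `scipy.stats.nchypergeom_fisher`; the sketch below illustrates that idea and is not the `fisher_exact_nonunity` implementation imported from `pkg.stats` above.
###Code
# Hedged sketch (not the pkg.stats implementation): a two-sided exact test of
# H0: odds ratio == null_odds, built on Fisher's noncentral hypergeometric
# distribution. Assumes scipy >= 1.6 for scipy.stats.nchypergeom_fisher.
import numpy as np
from scipy.stats import nchypergeom_fisher
def sketch_fisher_exact_nonunity(table, null_odds=1.0):
    (x, a), (y, b) = table            # table = [[x, m_x - x], [y, m_y - y]]
    M = x + a + y + b                 # total number of objects
    n = x + a                         # number of objects in the first row (m_x)
    N = x + y                         # total number of "successes" observed
    dist = nchypergeom_fisher(M, n, N, null_odds)
    support = np.arange(max(0, N - (M - n)), min(n, N) + 1)
    pmf = dist.pmf(support)
    # two-sided p-value: total mass of outcomes no more likely than the observed one
    return min(pmf[pmf <= dist.pmf(x) * (1 + 1e-7)].sum(), 1.0)
###Output
_____no_output_____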
###Code
# definitions following https://en.wikipedia.org/wiki/Fisher%27s_noncentral_hypergeometric_distribution
rows = []
for omega in omegas:
# params
p_x = omega * p_y
omega_x = p_x / (1 - p_x)
omega_y = p_y / (1 - p_y)
for sim in range(n_sims):
# sample
x = binom.rvs(m_x, p_x)
y = binom.rvs(m_y, p_y)
n = x + y
table = np.array([[x, m_x - x], [y, m_y - y]])
_, vanilla_pvalue = fisher_exact(table, alternative=alternative)
_, nu_pvalue = fisher_exact_nonunity(
table, alternative=alternative, null_odds=omega
)
rows.append(
{
"method": "FE",
"omega": omega,
"pvalue": vanilla_pvalue,
"sim": sim,
}
)
rows.append({"method": "NUFE", "omega": omega, "pvalue": nu_pvalue, "sim": sim})
results = pd.DataFrame(rows)
###Output
_____no_output_____
###Markdown
Results
###Code
colors = sns.color_palette()
palette = {"FE": colors[0], "NUFE": colors[1]}
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
sns.lineplot(data=results, x="omega", y="pvalue", hue="method", ax=ax, palette=palette)
ax.axvline(1, color="darkred", linestyle="--")
ax.set(ylabel="p-value", xlabel=r"$\omega$")
my_glue("fig_mean_pvalue_by_omega", fig)
###Output
_____no_output_____
###Markdown
```{glue:figure} fig_mean_pvalue_by_omega :name: "fig-mean-pvalue-by-omega" Plot of mean p-values by the true odds ratio $\omega$. Each point in the lineplot denotes the mean over {glue:text}`n_sims` trials, and the shaded region denotes 95% bootstrap CI estimates. Fisher's Exact test (FE) tends to reject for non-unity odds ratios, while the modified, non-unity Fisher's exact test (NUFE) does not. Note that NUFE is testing the null hypothesis that the odds ratio ($\omega_0$) is equal to the true odds ratio in the simulation ($\omega$).```
###Code
def plot_select_pvalues(method, omega, ax):
subuniformity_plot(
results[(results["method"] == method) & (results["omega"] == omega)][
"pvalue"
].values,
color=palette[method],
ax=ax,
)
ax.set(title=r"$\omega = $" + f"{omega}, method = {method}", xlabel="p-value")
fig, axs = plt.subplots(1, 2, figsize=(12, 6))
plot_select_pvalues(method="FE", omega=1, ax=axs[0])
plot_select_pvalues(method="NUFE", omega=1, ax=axs[1])
my_glue("fig_pvalue_dist_omega_1", fig)
fig, axs = plt.subplots(1, 2, figsize=(12, 6))
plot_select_pvalues(method="FE", omega=3, ax=axs[0])
plot_select_pvalues(method="NUFE", omega=3, ax=axs[1])
my_glue("fig_pvalue_dist_omega_3", fig)
###Output
_____no_output_____ |
experiments/tl_2v2/oracle.run1.framed-cores_wisig/trials/10/trial.ipynb | ###Markdown
Transfer Learning Template
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
###Output
_____no_output_____
###Markdown
Allowed ParametersThese are allowed parameters, not defaultsEach of these values need to be present in the injected parameters (the notebook will raise an exception if they are not present)Papermill uses the cell tag "parameters" to inject the real parameters below this cell.Enable tags to see what I mean
###Code
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_2v2:oracle.run1.framed -> cores+wisig",
"device": "cuda",
"lr": 0.0001,
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10.",
"1-11.",
"1-15.",
"1-16.",
"1-17.",
"1-18.",
"1-19.",
"10-4.",
"10-7.",
"11-1.",
"11-14.",
"11-17.",
"11-20.",
"11-7.",
"13-20.",
"13-8.",
"14-10.",
"14-11.",
"14-14.",
"14-7.",
"15-1.",
"15-20.",
"16-1.",
"16-16.",
"17-10.",
"17-11.",
"17-2.",
"19-1.",
"19-16.",
"19-19.",
"19-20.",
"19-3.",
"2-10.",
"2-11.",
"2-17.",
"2-18.",
"2-20.",
"2-3.",
"2-4.",
"2-5.",
"2-6.",
"2-7.",
"2-8.",
"3-13.",
"3-18.",
"3-3.",
"4-1.",
"4-10.",
"4-11.",
"4-19.",
"5-5.",
"6-15.",
"7-10.",
"7-14.",
"8-18.",
"8-20.",
"8-3.",
"8-8.",
],
"domains": [1, 2, 3, 4, 5],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/cores.stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": ["unit_mag"],
"episode_transforms": [],
"domain_prefix": "C_",
},
{
"labels": [
"1-10",
"1-12",
"1-14",
"1-16",
"1-18",
"1-19",
"1-8",
"10-11",
"10-17",
"10-4",
"10-7",
"11-1",
"11-10",
"11-19",
"11-20",
"11-4",
"11-7",
"12-19",
"12-20",
"12-7",
"13-14",
"13-18",
"13-19",
"13-20",
"13-3",
"13-7",
"14-10",
"14-11",
"14-12",
"14-13",
"14-14",
"14-19",
"14-20",
"14-7",
"14-8",
"14-9",
"15-1",
"15-19",
"15-6",
"16-1",
"16-16",
"16-19",
"16-20",
"17-10",
"17-11",
"18-1",
"18-10",
"18-11",
"18-12",
"18-13",
"18-14",
"18-15",
"18-16",
"18-17",
"18-19",
"18-2",
"18-20",
"18-4",
"18-5",
"18-7",
"18-8",
"18-9",
"19-1",
"19-10",
"19-11",
"19-12",
"19-13",
"19-14",
"19-15",
"19-19",
"19-2",
"19-20",
"19-3",
"19-4",
"19-6",
"19-7",
"19-8",
"19-9",
"2-1",
"2-13",
"2-15",
"2-3",
"2-4",
"2-5",
"2-6",
"2-7",
"2-8",
"20-1",
"20-12",
"20-14",
"20-15",
"20-16",
"20-18",
"20-19",
"20-20",
"20-3",
"20-4",
"20-5",
"20-7",
"20-8",
"3-1",
"3-13",
"3-18",
"3-2",
"3-8",
"4-1",
"4-10",
"4-11",
"5-1",
"5-5",
"6-1",
"6-15",
"6-6",
"7-10",
"7-11",
"7-12",
"7-13",
"7-14",
"7-7",
"7-8",
"7-9",
"8-1",
"8-13",
"8-14",
"8-18",
"8-20",
"8-3",
"8-8",
"9-1",
"9-7",
],
"domains": [1, 2, 3, 4],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/wisig.node3-19.stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": ["unit_mag"],
"episode_transforms": [],
"domain_prefix": "W_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag"],
"episode_transforms": [],
"domain_prefix": "O_",
},
],
"seed": 7,
"dataset_seed": 7,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
p.x_shape = [2,256] # Default to this if we dont supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assesment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
###Output
_____no_output_____ |
Regression/Decision_Tree_Regression.ipynb | ###Markdown
Decision Tree Regression Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('CA_housing.csv')
dataset = dataset.dropna(axis=0)
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [-1])], remainder='passthrough')
X = pd.concat([dataset.iloc[:, :-2], dataset.iloc[:, -1]], axis=1).values
X = np.array(ct.fit_transform(X))
y = dataset.iloc[:, -2:-1].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
###Output
_____no_output_____
###Markdown
Training the Decision Tree Regression model on the whole dataset
###Code
from sklearn.tree import DecisionTreeRegressor
regressor = DecisionTreeRegressor(random_state = 0)
regressor.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predicting a new result
###Code
regressor.predict([[0.0, 1.0, 0.0, 0.0, 0.0, -121.24, 39.37, 16.0, 2785.0, 616.0,
1387.0, 530.0, 2.3886]])
y[-1]
###Output
_____no_output_____
###Markdown
Visualising the Decision Tree Regression results (higher resolution)
###Code
# X_grid = np.arange(min(X), max(X), 0.01)
# X_grid = X_grid.reshape((len(X_grid), 1))
# plt.scatter(X, y, color = 'red')
# plt.plot(X_grid, regressor.predict(X_grid), color = 'blue')
# plt.title('Truth or Bluff (Decision Tree Regression)')
# plt.xlabel('Position level')
# plt.ylabel('Salary')
# plt.show()
y_pred = regressor.predict(X_test)
np.set_printoptions(precision=2)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
df = pd.DataFrame(data=np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1), columns=['Predicted ($)', 'Actual ($)'])
df
import dataframe_image as dfi
dfi.export(df, 'act_pred_dtr.png', max_rows=5)
px = np.linspace(0, max(y_test), int(max(y_test)))
py = np.linspace(0, max(y_test), int(max(y_test)))
plt.figure(figsize=(10,6))
import seaborn as sns
sns.set()
plt.scatter(y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1), color = 'red')
plt.plot(px, py, color='blue')
plt.title('True vs Predicted Median Home Values (DTR)')
plt.xlabel('Predicted Values')
plt.ylabel('True Values')
plt.show()
# plt.savefig('act_pred_svr_scatter.png')
from sklearn.metrics import r2_score
print('R2: ', r2_score(y_test, y_pred))
print('Adjusted R2: ', 1-(1-r2_score(y_test, y_pred))*((len(X_test)-1)/(len(X_test)-len(X_test[0])-1)))
from sklearn.metrics import mean_squared_error
import math
mean_squared_error(y_test, y_pred, squared=False)
cols = np.array(dataset.columns)
cols
cols = cols[cols!='median_house_value']
cols = cols[cols!='ocean_proximity']
np.concatenate(['1 2 3 4 5'.split(), cols])
# from pydotplus.graphviz import graph_from_dot_data
# from sklearn.tree import export_graphviz
# dot_data = export_graphviz( # Create dot data
# regressor, filled=True, rounded=True,
# class_names=['Setosa', 'Versicolor','Virginica'],
# feature_names=np.concatenate(['1 2 3 4 5'.split(), cols]),
# out_file=None
# )
# graph = graph_from_dot_data(dot_data) # Create graph from dot data
# graph.write_png('tree.png')
###Output
dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.033598 to fit
|
Datasets/fifa-world-cup/Untitled.ipynb | ###Markdown
FIFA World Cup 1930
###Code
import pandas as pd

# worldCupMatches is assumed to be loaded beforehand, e.g.:
#   worldCupMatches = pd.read_csv('WorldCupMatches.csv')
columns = ['Group 1', 'Group 2', 'Group 3', 'Group 4', 'Group 5', 'Group 6', 'Group 7']
# Select the matches played at the 1930 World Cup
groups = worldCupMatches[worldCupMatches.Year == 1930]
# Small example frame with custom index and column labels (shape adjusted to be valid)
pd.DataFrame([[1, 2, 3, 4, 5]], index=[0], columns=[11, 22, 33, 44, 55])
###Output
_____no_output_____ |
python/161226_how_to_import_to_notebook_from_file_in_different_directory/notebook_directory/import_code_into_this_notebook.ipynb | ###Markdown
Purpose This notebook shows how to import code from a `.py` file located in a different directory (`code_to_import` below) into a Jupyter notebook. Note that it is not necessary to have a `__init__.py` file in the other directory. Steps: 1. Make an absolute path to the directory that contains the `.py` file 1. Insert this path into `sys.path` in position `1` 1. Use either `from <module> import <function1>, <function2>, ...` or `from <module> import *` to import the desired functions from the `.py` file Here's the code
###Code
import os
import sys
module_path = os.path.abspath(os.path.join('../code_to_import'))
if module_path not in sys.path:
sys.path.insert(1,module_path)
from test_import import printhello, printcurrentdirectory
printhello()
printcurrentdirectory()
###Output
/Users/nordin/Documents/Projects/Python/notes_to_self/161226_how_to_import_to_notebook_from_file_in_different_directory/notebook_directory
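###Markdown
As a hedged aside added here (the note above uses the `sys.path` approach), the same file could also be loaded without modifying `sys.path` at all by using `importlib`:
###Code
# Hedged alternative: load the module straight from its file path via importlib.
# Assumes the same ../code_to_import/test_import.py used in this notebook.
import importlib.util

spec = importlib.util.spec_from_file_location(
    "test_import", "../code_to_import/test_import.py")
test_import = importlib.util.module_from_spec(spec)
spec.loader.exec_module(test_import)
# test_import.printhello and test_import.printcurrentdirectory are now available
###Output
_____no_output_____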
###Markdown
Check to see what we've done
###Code
sys.path
dir()
###Output
_____no_output_____ |
dd_1/Part 4/Section 02 - Classes/07 - Initializing Class Instances.ipynb | ###Markdown
Initializing Class Instances When we create a new instance of a class two separate things are happening: 1. The object instance is **created** 2. The object instance is then further **initialized** We can "intercept" both the creation and initialization phases, by using the special methods `__new__` and `__init__`. We'll come back to `__new__` later. For now we'll focus on `__init__`. What's important to remember is that `__init__` is an **instance method**. By the time `__init__` is called, the new object has **already** been created, and our `__init__` function defined in the class is now treated like a **method** bound to the instance.
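As a small added illustration (not part of the original lesson code), both phases can be seen by implementing the two special methods side by side; `Demo` here is just a hypothetical class:
###Code
# Illustrative aside: __new__ creates the instance, __init__ then initializes it
class Demo:
    def __new__(cls, *args, **kwargs):
        print('__new__ called - creating the instance')
        return super().__new__(cls)

    def __init__(self, name):
        print('__init__ called - initializing the instance')
        self.name = name

d = Demo('Alex')
###Output
__new__ called - creating the instance
__init__ called - initializing the instance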
###Code
class Person:
def __init__(self):
print(f'Initializing a new Person object: {self}')
p = Person()
###Output
Initializing a new Person object: <__main__.Person object at 0x7f80a022b0f0>
###Markdown
And we can see that `p` has the same memory address:
###Code
hex(id(p))
###Output
_____no_output_____
###Markdown
Because `__init__` is an instance method, we have access to the object (instance) state within the method, so we can use it to manipulate the object state:
###Code
class Person:
def __init__(self, name):
self.name = name
p = Person('Eric')
p.__dict__
###Output
_____no_output_____
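###Markdown
As an added illustrative aside, the call Python makes behind the scenes can be reproduced by hand with the `Person` class just defined:
###Code
# Illustrative aside: doing explicitly what Python does for us on Person('Eric')
p2 = Person.__new__(Person)     # step 1: create the bare instance
Person.__init__(p2, 'Eric')     # step 2: initialize it
p2.__dict__
###Output
_____no_output_____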
###Markdown
What actually happens is that after the new instance has been created, Python sees and automatically calls `Person.__init__(self, *args, **kwargs)` on it. So this is no different than if we had done it this way:
###Code
class Person:
def initialize(self, name):
self.name = name
p = Person()
p.__dict__
p.initialize('Eric')
p.__dict__
###Output
_____no_output_____ |
Pytorch_example.ipynb | ###Markdown
###Code
!git clone https://github.com/dongjun-Lee/text-summarization-tensorflow
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
# Mount Google Drive
from google.colab import drive # import drive from google colab
ROOT = "/content/drive" # default location for the drive
print(ROOT) # print content of ROOT (Optional)
drive.mount(ROOT) # we mount the google drive at /content/drive
#drive.mount("/content/drive", force_remount=True)
%ls -l
%cd /content/drive/MyDrive/ColabNotebooks
%ls -l
%cd '/content/drive/MyDrive/ColabNotebooks/NLP/text_summarization'
%ls -l
# Clone github repository setup
# import join used to join ROOT path and MY_GOOGLE_DRIVE_PATH
from os.path import join
# path to your project on Google Drive
MY_GOOGLE_DRIVE_PATH = '/content/drive/My Drive/ColabNotebooks/NLP'
# replace with your Github username
GIT_USERNAME = "BoilerToad"
# definitely replace with your
GIT_TOKEN = "1664e3d5c3bbf4630a6ca516b2473785c2d6842a"
# Replace with your github repository in this case we want
# to clone deep-learning-v2-pytorch repository
GIT_REPOSITORY = "text-summarization-tensorflow"
PROJECT_PATH = join(ROOT, MY_GOOGLE_DRIVE_PATH)
# It's good to print out the value if you are not sure
print("PROJECT_PATH: ", PROJECT_PATH)
# In case we haven't created the folder already; we will create a folder in the project path
!mkdir "{PROJECT_PATH}"
#GIT_PATH = "https://{GIT_TOKEN}@github.com/{GIT_USERNAME}/{GIT_REPOSITORY}.git" this return 400 Bad Request for me
GIT_PATH = "https://" + GIT_TOKEN + "@github.com/" + GIT_USERNAME + "/" + GIT_REPOSITORY + ".git"
print("GIT_PATH: ", GIT_PATH)
%cd '/content/drive/My Drive/ColabNotebooks/NLP'
#%cd "{PROJECT_PATH}" # Change directory to the location defined in project_path
!git clone "{GIT_PATH}" # clone the github repository
# Clone github repository setup
# import join used to join ROOT path and MY_GOOGLE_DRIVE_PATH
from os.path import join
# path to your project on Google Drive
MY_GOOGLE_DRIVE_PATH = '/content/drive/My Drive/ColabNotebooks/NLP'
# replace with your Github username
GIT_USERNAME = "BoilerToad"
# definitely replace with your
GIT_TOKEN = "1664e3d5c3bbf4630a6ca516b2473785c2d6842a"
# Replace with your github repository in this case we want
# to clone deep-learning-v2-pytorch repository
GIT_REPOSITORY = "sent-summary"
PROJECT_PATH = join(ROOT, MY_GOOGLE_DRIVE_PATH)
# It's good to print out the value if you are not sure
print("PROJECT_PATH: ", PROJECT_PATH)
# In case we haven't created the folder already; we will create a folder in the project path
!mkdir "{PROJECT_PATH}"
#GIT_PATH = "https://{GIT_TOKEN}@github.com/{GIT_USERNAME}/{GIT_REPOSITORY}.git" this return 400 Bad Request for me
GIT_PATH = "https://" + GIT_TOKEN + "@github.com/" + GIT_USERNAME + "/" + GIT_REPOSITORY + ".git"
print("GIT_PATH: ", GIT_PATH)
!git clone "{GIT_PATH}" # clone the github repository
%cd text-summarization-tensorflow
%ls -l
!git status
!pip install -r requirements.txt
!python prep_data.py --glove
import nltk
nltk.download('punkt')
!python train.py --glove
!python test.py
%ls -l
###Output
total 284868
drwx------ 2 root root 4096 Feb 7 00:28 [0m[01;34mglove[0m/
-rw------- 1 root root 1068 Feb 6 18:18 LICENSE
-rw------- 1 root root 6939 Feb 6 18:18 model.py
-rw------- 1 root root 7187 Feb 7 00:06 model-upgraded.py
-rw------- 1 root root 999 Feb 6 18:18 prep_data.py
drwx------ 2 root root 4096 Feb 7 00:31 [01;34m__pycache__[0m/
-rw------- 1 root root 5109 Feb 6 18:18 README.md
-rw------- 1 root root 21965 Feb 7 00:06 report.txt
-rw------- 1 root root 36 Feb 6 18:18 requirements.txt
-rw------- 1 root root 765674 Feb 6 18:18 sample_data.zip
drwx------ 6 root root 4096 Apr 12 2016 [01;34msumdata[0m/
-rw------- 1 root root 290866023 Feb 6 18:35 summary.tar.gz
-rw------- 1 root root 1771 Feb 6 18:18 test.py
-rw------- 1 root root 4388 Feb 6 18:18 train.py
-rw------- 1 root root 4428 Feb 7 00:00 train-upgraded.py
-rw------- 1 root root 4023 Feb 6 18:18 utils.py
|
notebooks/Solutions/DATAPREP_03d_MV_Handling_MachineLearningImputation_Lab_Solution.ipynb | ###Markdown
In data science, modeling unknown relationships between attributes has been achieved using machine learning models. The same process can be applied to predict MVs. Using the KNN algorithm, every time an MV is found in an instance, KNN Imputation computes the k nearest neighbors and a value from them is imputed. For nominal values the most common value among all neighbors is taken; for numerical values the average value is used. Impute the missing values of the provided array x, applying KNN Imputation with k=2!
###Code
import numpy as np
from sklearn.impute import KNNImputer
x = np.array([[1, 2, np.nan], [3, 4, 3], [np.nan, 6, 5], [8, 8, 7]])
print("Original data: \n",x)
n=2
imputer = KNNImputer(n_neighbors=n, weights="uniform")
transformed_x= imputer.fit_transform(x)
print("\nMissing values are imputed based on values of ",n," nearest neighbors")
print("Transformed data (knn imputation): \n",transformed_x)
###Output
Original data:
[[ 1. 2. nan]
[ 3. 4. 3.]
[nan 6. 5.]
[ 8. 8. 7.]]
Missing values are imputed based on values of 2 nearest neighbors
Transformed data (knn imputation):
[[1. 2. 4. ]
[3. 4. 3. ]
[5.5 6. 5. ]
[8. 8. 7. ]]
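###Markdown
 As a quick sketch of a design variation (an illustration, not part of the exercise): the same imputer can weight the neighbors by distance instead of uniformly, so that closer neighbors contribute more to the imputed value:
###Code
# Hypothetical variation: distance-weighted KNN imputation (closer neighbors count more)
imputer_dist = KNNImputer(n_neighbors=n, weights="distance")
print("Transformed data (distance-weighted knn imputation): \n", imputer_dist.fit_transform(x))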
###Markdown
 KMeans Clustering is another ML algorithm, which can be used for MV Imputation. Attributes which have no MVs are used to define clusters of similar examples. Then the missing values are calculated based on existing values of the examples from the same cluster. Apply KMeans Clustering Imputation on data frame x: 1. Drop features with MV 2. Run k Means on the reduced data frame x 3. Set up an object for a simple mean imputation (remember basic imputation approaches) 4. Apply the mean imputation to the examples of each cluster separately 5. Print the completed data set
###Code
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.impute import SimpleImputer
x = pd.DataFrame([[1, 2,5], [1,0,6],[1, np.nan,6], [10, 0,20],[10, 2,21], [100, 40,220], [100, 50,230]],columns=['A1','A2','A3'])
print("Original data: \n",x)
#Feature deletion
x_clean=x.dropna(axis=1)
print("\nData after deleting features with missing values: \n",x_clean)
#Run kmeans Clustering without MV feature
n=3
kmeans = KMeans(n_clusters=n, random_state=0).fit(x_clean)
x['Cluster']=kmeans.labels_
print("\nOriginal data with Cluster-ID: \n",x)
# Set up an object for average Imputation using strategy='mean'
imp = SimpleImputer(missing_values=np.nan, strategy='mean')
#Intitialize transformed data set (only data of Cluster 0)
transformed_x=pd.DataFrame(imp.fit_transform(x[x['Cluster']==0]),columns=['A1','A2','A3','Cluster'])
for i in range(1,n):
append_x=pd.DataFrame(imp.fit_transform(x[x['Cluster']==i]),columns=['A1','A2','A3','Cluster'])
transformed_x=transformed_x.append(append_x)
#Print the completed data set
print("\nTransformed data (mean imputation): \n",transformed_x)
###Output
Original data:
A1 A2 A3
0 1 2.0 5
1 1 0.0 6
2 1 NaN 6
3 10 0.0 20
4 10 2.0 21
5 100 40.0 220
6 100 50.0 230
Data after deleting features with missing values:
A1 A3
0 1 5
1 1 6
2 1 6
3 10 20
4 10 21
5 100 220
6 100 230
Original data with Cluster-ID:
A1 A2 A3 Cluster
0 1 2.0 5 2
1 1 0.0 6 2
2 1 NaN 6 2
3 10 0.0 20 0
4 10 2.0 21 0
5 100 40.0 220 1
6 100 50.0 230 1
Transformed data (mean imputation):
A1 A2 A3 Cluster
0 10.0 0.0 20.0 0.0
1 10.0 2.0 21.0 0.0
0 100.0 40.0 220.0 1.0
1 100.0 50.0 230.0 1.0
0 1.0 2.0 5.0 2.0
1 1.0 0.0 6.0 2.0
2 1.0 1.0 6.0 2.0
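###Markdown
 A side note on the concatenation step above (an observation, not part of the exercise): `DataFrame.append` is deprecated in recent pandas releases, and the per-cluster results can equivalently be combined with `pd.concat`:
###Code
# Hypothetical pd.concat-based variant of the per-cluster mean imputation loop above
frames = [pd.DataFrame(imp.fit_transform(x[x['Cluster'] == i]), columns=['A1', 'A2', 'A3', 'Cluster'])
          for i in range(n)]
print("Transformed data (mean imputation via pd.concat): \n", pd.concat(frames))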
|
Course 4 - Convolutional Neural Networks/1. Convolutional Model/Convolution model - Application - v1.ipynb | ###Markdown
Convolutional Neural Networks: ApplicationWelcome to Course 4's second assignment! In this notebook, you will:- Implement helper functions that you will use when implementing a TensorFlow model- Implement a fully functioning ConvNet using TensorFlow **After this assignment you will be able to:**- Build and train a ConvNet in TensorFlow for a classification problem We assume here that you are already familiar with TensorFlow. If you are not, please refer the *TensorFlow Tutorial* of the third week of Course 2 ("*Improving deep neural networks*"). 1.0 - TensorFlow modelIn the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call. As usual, we will start by loading in the packages.
###Code
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
%matplotlib inline
np.random.seed(1)
###Output
_____no_output_____
###Markdown
Run the next cell to load the "SIGNS" dataset you are going to use.
###Code
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
###Output
_____no_output_____
###Markdown
As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of `index` below and re-run to see different examples.
###Code
# Example of a picture
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
###Output
y = 2
###Markdown
In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.To get started, let's examine the shapes of your data.
###Code
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {}
###Output
number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)
###Markdown
1.1 - Create placeholdersTensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.**Exercise**: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. To do so, you could use "None" as the batch size, it will give you the flexibility to choose it later. Hence X should be of dimension **[None, n_H0, n_W0, n_C0]** and Y should be of dimension **[None, n_y]**. [Hint](https://www.tensorflow.org/api_docs/python/tf/placeholder).
###Code
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_H0, n_W0, n_C0, n_y):
"""
Creates the placeholders for the tensorflow session.
Arguments:
n_H0 -- scalar, height of an input image
n_W0 -- scalar, width of an input image
n_C0 -- scalar, number of channels of the input
n_y -- scalar, number of classes
Returns:
X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
"""
### START CODE HERE ### (≈2 lines)
X = tf.placeholder(tf.float32, (None, n_H0, n_W0, n_C0))
Y = tf.placeholder(tf.float32, (None, n_y))
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
###Output
X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32)
Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32)
###Markdown
**Expected Output** X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32) Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32) 1.2 - Initialize parametersYou will initialize weights/filters $W1$ and $W2$ using `tf.contrib.layers.xavier_initializer(seed = 0)`. You don't need to worry about bias variables as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment.**Exercise:** Implement initialize_parameters(). The dimensions for each group of filters are provided below. Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use:```pythonW = tf.get_variable("W", [1,2,3,4], initializer = ...)```[More Info](https://www.tensorflow.org/api_docs/python/tf/get_variable).
###Code
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes weight parameters to build a neural network with tensorflow. The shapes are:
W1 : [4, 4, 3, 8]
W2 : [2, 2, 8, 16]
Returns:
parameters -- a dictionary of tensors containing W1, W2
"""
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 2 lines of code)
W1 = tf.get_variable('W1', [4,4,3,8], initializer=tf.contrib.layers.xavier_initializer(seed=0))
W2 = tf.get_variable('W2', [2,2,8,16], initializer=tf.contrib.layers.xavier_initializer(seed=0))
### END CODE HERE ###
parameters = {"W1": W1,
"W2": W2}
return parameters
tf.reset_default_graph()
with tf.Session() as sess_test:
parameters = initialize_parameters()
init = tf.global_variables_initializer()
sess_test.run(init)
print("W1 = " + str(parameters["W1"].eval()[1,1,1]))
print("W2 = " + str(parameters["W2"].eval()[1,1,1]))
###Output
W1 = [ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394
-0.06847463 0.05245192]
W2 = [-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058
-0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228
-0.22779644 -0.1601823 -0.16117483 -0.10286498]
###Markdown
** Expected Output:** W1 = [ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 -0.06847463 0.05245192] W2 = [-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 -0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 -0.22779644 -0.1601823 -0.16117483 -0.10286498] 1.2 - Forward propagationIn TensorFlow, there are built-in functions that carry out the convolution steps for you.- **tf.nn.conv2d(X,W1, strides = [1,s,s,1], padding = 'SAME'):** given an input $X$ and a group of filters $W1$, this function convolves $W1$'s filters on X. The third input ([1,s,s,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). You can read the full documentation [here](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d)- **tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'):** given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. You can read the full documentation [here](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool)- **tf.nn.relu(Z1):** computes the elementwise ReLU of Z1 (which can be any shape). You can read the full documentation [here.](https://www.tensorflow.org/api_docs/python/tf/nn/relu)- **tf.contrib.layers.flatten(P)**: given an input P, this function flattens each example into a 1D vector it while maintaining the batch-size. It returns a flattened tensor with shape [batch_size, k]. You can read the full documentation [here.](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/flatten)- **tf.contrib.layers.fully_connected(F, num_outputs):** given a the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation [here.](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/fully_connected)In the last function above (`tf.contrib.layers.fully_connected`), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters. **Exercise**: Implement the `forward_propagation` function below to build the following model: `CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED`. You should use the functions above. In detail, we will use the following parameters for all the steps: - Conv2D: stride 1, padding is "SAME" - ReLU - Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME" - Conv2D: stride 1, padding is "SAME" - ReLU - Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME" - Flatten the previous output. - FULLYCONNECTED (FC) layer: Apply a fully connected layer without an non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost.
###Code
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "W2"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
W2 = parameters['W2']
### START CODE HERE ###
# CONV2D: stride of 1, padding 'SAME'
Z1 = tf.nn.conv2d(X, W1, strides=[1,1,1,1], padding='SAME')
# RELU
A1 = tf.nn.relu(Z1)
    # MAXPOOL: window 8x8, stride 8, padding 'SAME'
P1 = tf.nn.max_pool(A1, ksize=[1,8,8,1], strides=[1,8,8,1], padding='SAME')
# CONV2D: filters W2, stride 1, padding 'SAME'
Z2 = tf.nn.conv2d(P1, W2, strides=[1,1,1,1], padding='SAME')
# RELU
A2 = tf.nn.relu(Z2)
# MAXPOOL: window 4x4, stride 4, padding 'SAME'
P2 = tf.nn.max_pool(A2, ksize=[1,4,4,1], strides=[1,4,4,1], padding='SAME')
# FLATTEN
P2 = tf.contrib.layers.flatten(P2)
    # FULLY-CONNECTED without non-linear activation function (do not call softmax).
# 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
Z3 = tf.contrib.layers.fully_connected(P2, 6, activation_fn=None)
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
print("Z3 = " + str(a))
###Output
Z3 = [[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064]
[-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]]
###Markdown
**Expected Output**: Z3 = [[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064] [-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]] 1.3 - Compute costImplement the compute cost function below. You might find these two functions helpful: - **tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y):** computes the softmax entropy loss. This function both computes the softmax activation function as well as the resulting loss. You can check the full documentation [here.](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits)- **tf.reduce_mean:** computes the mean of elements across dimensions of a tensor. Use this to sum the losses over all the examples to get the overall cost. You can check the full documentation [here.](https://www.tensorflow.org/api_docs/python/tf/reduce_mean)** Exercise**: Compute the cost below using the function above.
###Code
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
"""
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (number of examples, 6)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Z3, labels=Y))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})
print("cost = " + str(a))
###Output
cost = 2.91034
###Markdown
**Expected Output**: cost = 2.91034 1.4 Model Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset. You have implemented `random_mini_batches()` in the Optimization programming assignment of course 2. Remember that this function returns a list of mini-batches. **Exercise**: Complete the function below. The model below should:- create placeholders- initialize parameters- forward propagate- compute the cost- create an optimizerFinally you will create a session and run a for loop for num_epochs, get the mini-batches, and then for each mini-batch you will optimize the function. [Hint for initializing the variables](https://www.tensorflow.org/api_docs/python/tf/global_variables_initializer)
###Code
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,
num_epochs = 100, minibatch_size = 64, print_cost = True):
"""
Implements a three-layer ConvNet in Tensorflow:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X_train -- training set, of shape (None, 64, 64, 3)
Y_train -- test set, of shape (None, n_y = 6)
X_test -- training set, of shape (None, 64, 64, 3)
Y_test -- test set, of shape (None, n_y = 6)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
train_accuracy -- real number, accuracy on the train set (X_train)
test_accuracy -- real number, testing accuracy on the test set (X_test)
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep results consistent (tensorflow seed)
seed = 3 # to keep results consistent (numpy seed)
(m, n_H0, n_W0, n_C0) = X_train.shape
n_y = Y_train.shape[1]
costs = [] # To keep track of the cost
# Create Placeholders of the correct shape
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables globally
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the optimizer and the cost, the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , temp_cost = sess.run([optimizer, cost], {X: minibatch_X, Y: minibatch_Y})
### END CODE HERE ###
minibatch_cost += temp_cost / num_minibatches
            # Print the cost every 5 epochs (and record it every epoch for the plot)
if print_cost == True and epoch % 5 == 0:
print ("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if print_cost == True and epoch % 1 == 0:
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# Calculate the correct predictions
predict_op = tf.argmax(Z3, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
return train_accuracy, test_accuracy, parameters
###Output
_____no_output_____
###Markdown
Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
###Code
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
###Output
Cost after epoch 0: 1.917929
Cost after epoch 5: 1.506757
Cost after epoch 10: 0.955359
Cost after epoch 15: 0.845802
Cost after epoch 20: 0.701174
Cost after epoch 25: 0.571977
Cost after epoch 30: 0.518435
Cost after epoch 35: 0.495806
Cost after epoch 40: 0.429827
Cost after epoch 45: 0.407291
Cost after epoch 50: 0.366394
Cost after epoch 55: 0.376922
Cost after epoch 60: 0.299491
Cost after epoch 65: 0.338870
Cost after epoch 70: 0.316400
Cost after epoch 75: 0.310413
Cost after epoch 80: 0.249549
Cost after epoch 85: 0.243457
Cost after epoch 90: 0.200031
Cost after epoch 95: 0.175452
###Markdown
 **Expected output**: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease. **Cost after epoch 0 =** 1.917929 **Cost after epoch 5 =** 1.506757 **Train Accuracy =** 0.940741 **Test Accuracy =** 0.783333 Congratulations! You have finished the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or using regularization (as this model clearly has a high variance). Once again, here's a thumbs up for your work!
###Code
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image)
###Output
_____no_output_____ |
Lectures/2_BayesianInference/00_Bayesian_Inference.ipynb | ###Markdown
ML Course, Bogotá, Colombia (© Josh Bloom; June 2019)
###Code
%run ../talktools.py
###Output
_____no_output_____
###Markdown
Approaches to statistical inference- For many of us working with data, the role of inference is to draw quantitative conclusions from noisy data.- Historically, approaches to inference can be divided into two camps termed ‘frequentist’ and ‘Bayesian’. The frequentist interpretation of probability is expressed in terms of repeated trials, while Bayesian interpret probability as a degree of belief.- Statiscian Larry Wasserman has [written about the distinction](https://normaldeviate.wordpress.com/2012/11/17/what-is-bayesianfrequentist-inference/) emphasizing that the two camps are defined by what outcome they hope to achieve rather than defined by what methods they use. Mike Jordan (UC Berkeley) has talked about the distinction analogous to wave-particle nature of light. - The methodological implications of this distinction are profound and subtle. The notion of hypothesis testing is drawn from frequentist statistics, where propositions are evaluated by the possibility of their being false (e.g., p-values). On the other hands, concepts such as likelihood and evidence arise from within Bayesian statistics. - While the distinction is not considered a major rift within statistics today, the application of inference within scientific fields is often surprisingly one-sided: e.g., within cosmology Bayesian inference is standard, in particle physics frequentist statistics are the norm. Astronomy as a whole is more mixed, what about other fields? What about in industry? Approaches to statistical inference in (data) science• Ultimately, the difference between frequentist and Bayesian statistics is not of the highest practical importance for scientists. The kinds of questions that matter to scientists are:- “I have these data with some error bars (that, between the two of us, I do not trust). I want to publish in Nature. What do I do in between?” - Or, more specifically: “How do I fit a model to these data, or decide which of two models is better?” - Or, even more specifically: “How do I take this numpy array and find maximum likelihood parameters with an associated covariance matrix and/or joint probability distributions?”Approaches to statistical inference with Python (ie., actually deal with data). That is packages that interface with numpy, return ‘optimized’ numbers (that are probablyarguments in a function call), as well as some description of the probability distribution from which they are drawn (an array? a function to draw samples?) Bayesian parameter inference: formalismWhen embarking upon an experiment, we almost always have some prior expectation about the outcome. Bayesian inference is the process by which this expectation (or "belief") is updated to account for new data we obtain. Information about parameters is expressed in terms of probability distributions: In Bayesian statistics, we perform inferences with posterior probability distributions on parameters of interest, θ, given some data X Brief History of Bayesian Stats1. Thomas Bayes (1702–1761), a minister & amateur mathematician, proved a case of Bayes’ Theorem in a 1763 paper.2. Pierre-Simon Laplace (1749–1827) introduced a general version of the theorem and applied it to several fields, including celestial mechanics & medicine. When insufficient knowledge was available to specify an informed prior, Laplace used uniform priors, according to his “principle of insufficient reason”3. Fell out of favor in the early 20th century, where Frequentist Statistics of Fisher, Neyman, and Pearson dominated the field4. 
Around 1950, statisticians began to advocate Bayesian methods to overcome the limitations of frequentist approach: L.J. Savage, Bruno de Finetti, Dennis Lindley5. Bayesian Statistics did not become popular until the 1980’s and 90’s:the Bayesian approach requires evaluation of complex, multi-dimensional integrals Faster and cheaper computing along with efficient sampling algorithms led to the revitalization of the field and wide-spread acceptance Objective versus Subjective Bayes- An essential ingredient to obtaining the posterior, $p(\theta | {\rm data})$, is the prior distribution, $p(\theta)$, symbolizing our belief in $\theta$ before collecting or observing any data- The prior can have a large impact on the inferences and opens one up to charges of non-objectivity- However, by the same argument, the choice of Likelihood function (probability model for the data, given the model parameters) used by both Bayesians and Frequentists is also subjective- There is a lot of work attempting to minimize the effect of the prior on resulting inference. These are non-informative or reference priors- “Subjective” Bayesians believe in the complete subjectivity of the interpretation of probability and believe that informative priors should always be used, if available Steps in Bayesian Analysis:1. Specify likelihood and prior (before looking at the data!)2. Compute the posterior distribution for the parameter(s) of interest given the particular X that we observed. - In cases where direct derivation of the posterior is impossible, we instead draw samples from the posterior3. Check that the model fits well (posterior predictive checks)4. Perform statistical inferences (parameter estimation, predictions on new data, model comparison)Reading and References- ["Bayesian Methods for Hackers"](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers): An introduction to Bayesian methods + probabilistic programming with a computation/understanding-first, mathematics-second point of view. All in pure Python.- Salvatier, J, Wiecki TV, and Fonnesbeck C. (2016) Probabilistic programming in Python using PyMC3. PeerJ Computer Science 2:e55 https://doi.org/10.7717/peerj-cs.55 Presidental Approval Ratings In a recent poll by Datexco (April 12, 2019), there where $n=900$ respondents (http://www.wradio.com.co/noticias/actualidad/imagen-favorable-del-presidente-ivan-duque-es-de-30-segun-pulso-pais/20190412/nota/3890168.aspx)**We want to estimate the true proportion, $\theta$, of Americans that approve of the way Iván Duque is handling his job.**What is a sensible likelihood $p(X | \theta$)? Answer: the **[Binomial distribution](https://en.wikipedia.org/wiki/Binomial_distribution)**$$p(X | \theta) = \binom{n}{a} \theta^a (1 - \theta)^{n - a}$$where $n$ is the total number in the poll and $a$ is the number that approve. $$\binom{n}{a} = \frac{n!}{a! (n - a)!} $$ What's a sensible prior $p(\theta)$? We could choose a flat prior (equal probability of $\theta$ between 0 and 1) but that's probably not reasonable. Note that if we choose a prior of the form $\propto \theta^r (1 - \theta)^{s}$, then our **posterior** will have the same form. A common prior distribution is the [**Beta distribution**](https://en.wikipedia.org/wiki/Beta_distribution).$${\rm Beta}(\alpha, \beta) \propto \theta^{1 - \alpha} (1 - \theta)^{1 - \beta}$$
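 As a quick numerical sketch of the binomial likelihood above (an illustration added here, using the poll's n=900 and a=270), one can evaluate it for a few candidate values of theta before turning to the prior:
###Code
# Evaluate the binomial likelihood p(X | theta) at a few candidate theta values -- illustrative only
from scipy.stats import binom
for theta in [0.2, 0.3, 0.4]:
    print(theta, binom.pmf(270, 900, theta))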
###Code
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_context("poster")
alpha_betas = [(0.5,0.5), (1,1), (3,3), (4,1), (10,10), (100,100)]
x_theta = np.linspace(0, 1, 101)
plt.figure(figsize=(10,6))
for alpha, beta in alpha_betas:
p_theta = stats.beta(alpha, beta).pdf(x_theta)
plt.plot(x_theta, p_theta, linewidth=3.,label=f"{alpha}, {beta}")
plt.legend()
plt.ylim([0, max(p_theta)])
plt.xlabel("Theta")
plt.ylabel("p(Theta)")
plt.title("Beta distributions")
###Output
_____no_output_____
###Markdown
 $$p(\theta | X ) \propto \theta^a (1 - \theta)^{n - a} \times \theta^{\alpha - 1} (1 - \theta)^{\beta - 1}$$$$\propto \theta^{\alpha - 1 + a} (1 - \theta)^{\beta - 1 + n - a}$$$$\propto {\rm Beta(\alpha + a, \beta + n - a)}$$ In the poll noted above, there were a=270 "approve" of the way the President is doing his job out of n=900
###Code
a = 270
n = 900
alpha_betas = [(0.5,0.5), (1,1), (3,3), (4,1), (10,10), (100,100)]
x_theta = np.linspace(0, 1, 101)
plt.figure(figsize=(10,6))
for alpha, beta in alpha_betas:
p_theta = stats.beta(alpha + a, beta + n - a).pdf(x_theta)
plt.plot(x_theta, p_theta, linewidth=3.,label=f"{alpha}, {beta}")
plt.legend()
plt.ylim([0, max(p_theta)])
plt.xlabel(r"$\theta$")
plt.ylabel(r"$p(\theta)$")
plt.title("Posterior distribution: Duque Approval April 12, 2019")
###Output
_____no_output_____
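###Markdown
 As a small follow-up sketch (an addition, assuming the same poll numbers a=270, n=900 and a flat Beta(1,1) prior), point and interval estimates can be read directly off the Beta posterior:
###Code
# Posterior summary under a flat Beta(1,1) prior -- illustrative follow-up, not part of the original lecture
a, n = 270, 900
posterior = stats.beta(1 + a, 1 + n - a)
print("Posterior mean:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))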
###Markdown
Notice that our choice of most priors had little to no effect. This is exactly what we expect when we have a lot of data. What if we only polled 10 people (say n=10, a=3)?
###Code
a = 3
n = 10
alpha_betas = [(0.5,0.5), (1,1), (3,3), (4,1), (10,10)]
x_theta = np.linspace(0, 1, 101)
plt.figure(figsize=(10,6))
for alpha, beta in alpha_betas:
p_theta = stats.beta(alpha + a, beta + n - a).pdf(x_theta)
plt.plot(x_theta, p_theta, linewidth=3.,label=f"{alpha}, {beta}")
plt.legend()
plt.ylim([0, max(p_theta)])
plt.xlabel(r"$\theta$")
plt.ylabel(r"$p(\theta)$")
plt.title("Posterior distribution: Duque Approval (small sample)")
###Output
_____no_output_____ |
notebooks/03.04 - Data - Calendar Data - Compile.ipynb | ###Markdown
03.04 - Calendar Data Imports & setup
###Code
import pathlib
import datetime
import dateutil
from os import PathLike
from typing import Union
#import simplegeneric
import pandas as pd
import numpy as np
from astral import Astral
import matplotlib.pyplot as plt
plt.style.use('grayscale')
from matplotlib.dates import DateFormatter
import matplotlib.dates as mdates
import palettable
%matplotlib inline
PROJECT_DIR = pathlib.Path.cwd().parent.resolve()
IMPUTED_DATA_DIR_DEMAND = PROJECT_DIR / 'data' / '03-imputed' / 'demand'
CALCULATED_FEATURES_DATA_DIR = PROJECT_DIR / 'data' / '03-calculated-features' / 'calendar'
###Output
_____no_output_____
###Markdown
Load
###Code
demand_df = pd.read_csv(IMPUTED_DATA_DIR_DEMAND / 'demand.csv', index_col=0, parse_dates=True,
date_parser=dateutil.parser.parse)
#demand_df.index.tz_localize(None)
demand_df.info()
features_df = demand_df.copy(deep=True)
features_df['hour_of_day'] = features_df.index.hour
features_df['year'] = features_df.index.year
features_df['month'] = features_df.index.month
features_df['day_of_week'] = features_df.index.dayofweek
features_df['day_of_year'] = features_df.index.dayofyear
features_df['week_of_year'] = features_df.index.weekofyear
features_df['quarter'] = features_df.index.quarter
features_df.drop(columns=['ont_demand'], inplace=True)
features_df.head()
import holidays
hols = holidays.Canada(state='ON') # Ontario (ON) statutory holidays
print(features_df.loc['2018-01-01'].index.date[0] in hols)
print(features_df.loc['2018-12-27'].index.date[0] in hols)
features_df['stat_hol'] = pd.Series(features_df.index.date).apply(lambda x: x in hols).values
features_df.head()
features_df.tail()
from astral import Astral
a = Astral()
city_name='Toronto'
city = a[city_name]
#city.latitude
sun = city.sun(date=datetime.date(2019, 7, 2), local=True)
print(sun['sunrise'])
print(sun['sunset'])
print(type(sun['sunrise']))
print(features_df.loc['2018-01-01'].index[0])
print(features_df.loc['2018-12-27'].index[0])
features_df.head()
def get_daylight_hours(row, city):
sun = city.sun(date=row.name, local=True)
sunrise = sun['sunrise'].replace(tzinfo=None) ; sunset = sun['sunset'].replace(tzinfo=None)
bool_val = (row.name > sunrise) & (row.name < sunset)
return bool_val
a = Astral()
city = a['Toronto']
features_df['day_light_hours'] = features_df.apply(get_daylight_hours, city=city, axis=1)
features_df.head()
features_df.tail()
features_df.to_csv(CALCULATED_FEATURES_DATA_DIR / 'calendar.csv')
features_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 222840 entries, 1994-01-01 00:00:00 to 2019-06-03 23:00:00
Data columns (total 9 columns):
hour_of_day 222840 non-null int64
year 222840 non-null int64
month 222840 non-null int64
day_of_week 222840 non-null int64
day_of_year 222840 non-null int64
week_of_year 222840 non-null int64
quarter 222840 non-null int64
stat_hol 222840 non-null bool
day_light_hours 222840 non-null bool
dtypes: bool(2), int64(7)
memory usage: 24.0 MB
|
assign2-a.ipynb | ###Markdown
 Intelligent Systems Assignment 2: Bayes' net inference. **Names:** **IDs:**
###Code
class Directions:
NORTH = 'North'
SOUTH = 'South'
EAST = 'East'
WEST = 'West'
STOP = 'Stop'
###Output
_____no_output_____
###Markdown
 a. Bayes' net for instant perception and position. Build a Bayes' net that represents the relationships between the random variables. Based on it, write an expression for the joint probability distribution of all the variables. $P(X, E_{N}, E_{S}, E_{W},E_{E}) = P(X)P(E_{N}|X)P(E_{S}|X)P(E_{W}|X)P(E_{E}|X)$ b. Probability functions calculated from the instant model. Assuming a uniform distribution for the Pacman position probability, write functions to calculate the following probabilities: i. $P(X=x|E_{N}=e_{N},E_{S}=e_{S}) = \dfrac{P(X=x)P(E_{N}=e_{N}|X=x)P(E_{S}=e_{S}|X=x)}{\sum\limits_{x} P(X=x)P(E_{N}=e_{N}|X=x)P(E_{S}=e_{S}|X=x)}$
###Code
def getMapa():
mapa = [[0] * 6 for i in range(1, 6)]
mapa[1][1] = 1
mapa[1][3] = 1
mapa[1][4] = 1
mapa[3][1] = 1
mapa[3][3] = 1
mapa[3][4] = 1
return mapa
def getMap():
mapa = getMapa()
matriz = [[None] * 6 for i in range(1, 6)]
px = 1 / float(24)
for x in range(0, 5):
for y in range(0, 6):
if(mapa [x][y] == 1):
p = 0.0
else:
p = px
if(x == 0):
n = True
elif(mapa[x - 1][y] == 1):
n = True
else:
n = False
if(x == 4):
s = True
elif(mapa[x + 1][y] == 1):
s = True
else:
s = False
if(y == 0):
l = True
elif(mapa[x][y - 1] == 1):
l = True
else:
l = False
if(y == 5):
r = True
elif(mapa[x][y + 1] == 1):
r = True
else:
r = False
matriz[x][y] = [n, l, p, r, s]
return matriz
def P_1(eps, E_N, E_S):
'''
Calculates: P(X=x|E_{N}=e_{N},E_{S}=e_{S})
Arguments: E_N, E_S \in {True,False}
0 <= eps <= 1 (epsilon)
'''
truePerception = 1 - eps;
falsePerception = eps;
matrix = getMap()
den = 0
for i in range(len(matrix)):
row = matrix[i]
for j in range(len(row)):
n, l, p, r, s = row[j]
pn = falsePerception
ps = falsePerception
if n == E_N:
pn = truePerception
if s == E_S:
ps = truePerception
den += (p * pn * ps)
pd = {(x, y):0 for x in range(1, 7) for y in range(1, 6)}
for i in range(len(matrix)):
row = matrix[i]
for j in range(len(row)):
n, l, p, r, s = row[j]
pn = falsePerception
ps = falsePerception
if n == E_N:
pn = truePerception
if s == E_S:
ps = truePerception
p = (p * pn * ps) / den
row[j] = [n, l, p, r, s]
# Cambiar a coordenadas cartesianas
pd[(j + 1, 5 - i)] = p
return pd
P_1(0.0, True, False)
###Output
_____no_output_____
###Markdown
 ii. $P(E_{E}=e_{E}|E_{N}=e_{N},E_{S}=e_{S})$
###Code
def P_2(eps, E_N, E_S):
'''
Calculates: P(E_{E}=e_{E}|E_{N}=e_{N},E_{S}=E_{S})
Arguments: E_N, E_S \in {True,False}
0 <= eps <= 1
'''
truePerception = 1 - eps;
falsePerception = eps;
mapa = getMapa()
matrix = getMap()
den = 0
for i in range(len(matrix)):
row = matrix[i]
for j in range(len(row)):
n, l, p, r, s = row[j]
pn = falsePerception
ps = falsePerception
pr = truePerception
if n == E_N:
pn = truePerception
if s == E_S:
ps = truePerception
if r == True:
pr = truePerception
else:
pr = falsePerception
if mapa[i][j]==0:
wall=1
else:
wall=0
pr = truePerception
den += (pr* pn * ps * wall)
#print den
# print 'den ',den
count=0
for i in range(len(matrix)):
row = matrix[i]
for j in range(len(row)):
n, l, p, r, s = row[j]
pn = falsePerception
ps = falsePerception
pr = falsePerception
if n == E_N:
pn = truePerception
if s == E_S:
ps = truePerception
# print r
if mapa[i][j]==0:
wall=1
else:
wall=0
if r == True:
pr = truePerception
else:
pr = falsePerception
pr = (pr * pn * ps*wall)
#/den
#print i,' ',j,' ',pr,' ',pr/den
count += pr
# print count
pr = count/den
# print pr
pd = {True:pr, False:(1-pr)}
return pd
P_2(0.0, True, False)
###Output
den 3.0
###Markdown
iii. $P(S)$, where $S\subseteq\{e_{N},e_{S},e_{E},e_{W}\}$
###Code
def P_3(eps, S):
'''
Calculates: P(S), where S\subseteq\{e_{N},e_{S},e_{E},e_{W}\}
Arguments: S a dictionary with keywords in Directions and values in
{True,False}
0 <= eps <= 1
'''
# for i in range(len(S)):
# print S[i]
mapa = getMapa()
matrix = getMap()
truePerception = 1 - eps;
falsePerception = eps;
pb=0
if(len(S)==1):
for i in range(len(matrix)):
row = matrix[i]
for j in range(len(row)):
n, l, p, r, s = row[j]
pr = falsePerception
pn = falsePerception
pl = falsePerception
ps = falsePerception
if mapa[i][j]==0:
wall=1
else:
wall=0
if S.get(Directions.EAST) != None:
if r == S.get(Directions.EAST):
pr = truePerception
pb += (pr*wall*p)
elif S.get(Directions.WEST) != None:
if l == S.get(Directions.WEST):
pl = truePerception
pb += (pl*wall*p)
elif S.get(Directions.SOUTH) != None:
if s == S.get(Directions.SOUTH):
ps = truePerception
pb += (ps*wall*p)
elif S.get(Directions.NORTH) != None:
if n == S.get(Directions.NORTH):
pn = truePerception
pb += (pn*wall*p)
elif(len(S)==2):
for i in range(len(matrix)):
row = matrix[i]
for j in range(len(row)):
n, l, p, r, s = row[j]
pr = falsePerception
pn = falsePerception
pl = falsePerception
ps = falsePerception
if mapa[i][j]==0:
wall=1
else:
wall=0
if S.get(Directions.EAST) != None and S.get(Directions.WEST) != None:
if r == S.get(Directions.EAST):
pr = truePerception
if l == S.get(Directions.WEST):
pl = truePerception
pb += (pr*pl*wall*p)
elif S.get(Directions.EAST) != None and S.get(Directions.SOUTH) != None:
if r == S.get(Directions.EAST):
pr = truePerception
if s == S.get(Directions.SOUTH):
ps = truePerception
pb += (pr*ps*wall*p)
elif S.get(Directions.EAST) != None and S.get(Directions.NORTH) != None:
if r == S.get(Directions.EAST):
pr = truePerception
if n == S.get(Directions.NORTH):
pn = truePerception
pb += (pr*pn*wall*p)
elif S.get(Directions.WEST) != None and S.get(Directions.SOUTH) != None:
if l == S.get(Directions.WEST):
pl = truePerception
if s == S.get(Directions.SOUTH):
ps = truePerception
pb += (pl*ps*wall*p)
elif S.get(Directions.WEST) != None and S.get(Directions.NORTH) != None:
if l == S.get(Directions.WEST):
pl = truePerception
if n == S.get(Directions.NORTH):
pn = truePerception
pb += (pl*pn*wall*p)
elif S.get(Directions.NORTH) != None and S.get(Directions.SOUTH) != None:
if n == S.get(Directions.NORTH):
pn = truePerception
if s == S.get(Directions.SOUTH):
ps = truePerception
pb += (pn*ps*wall*p)
elif(len(S)==3):
for i in range(len(matrix)):
row = matrix[i]
for j in range(len(row)):
n, l, p, r, s = row[j]
pr = falsePerception
pn = falsePerception
pl = falsePerception
ps = falsePerception
if mapa[i][j]==0:
wall=1
else:
wall=0
if S.get(Directions.EAST) != None and S.get(Directions.WEST) != None and S.get(Directions.SOUTH) != None:
if r == S.get(Directions.EAST):
pr = truePerception
if l == S.get(Directions.WEST):
pl = truePerception
if s == S.get(Directions.SOUTH):
ps = truePerception
pb += (pr*pl*ps*wall*p)
elif S.get(Directions.EAST) != None and S.get(Directions.WEST) and S.get(Directions.NORTH) != None:
if r == S.get(Directions.EAST):
pr = truePerception
if l == S.get(Directions.WEST):
pl = truePerception
if n == S.get(Directions.NORTH):
pn = truePerception
pb += (pr*pl*pn*wall*p)
elif S.get(Directions.EAST) != None and S.get(Directions.NORTH) != None and S.get(Directions.SOUTH) != None:
if r == S.get(Directions.EAST):
pr = truePerception
if n == S.get(Directions.NORTH):
pn = truePerception
if s == S.get(Directions.SOUTH):
ps = truePerception
pb += (pr*pn*ps*wall*p)
elif S.get(Directions.WEST) != None and S.get(Directions.NORTH) != None and S.get(Directions.SOUTH) != None:
if l == S.get(Directions.WEST):
pl = truePerception
if n == S.get(Directions.NORTH):
pn = truePerception
if s == S.get(Directions.SOUTH):
ps = truePerception
pb += (pl*pn*ps*wall*p)
elif(len(S)==4):
for i in range(len(matrix)):
row = matrix[i]
for j in range(len(row)):
n, l, p, r, s = row[j]
pr = falsePerception
pn = falsePerception
pl = falsePerception
ps = falsePerception
if mapa[i][j]==0:
wall=1
else:
wall=0
if S.get(Directions.EAST) != None and S.get(Directions.WEST) != None and S.get(Directions.SOUTH) != None and S.get(Directions.NORTH) != None:
if r == S.get(Directions.EAST):
pr = truePerception
if l == S.get(Directions.WEST):
pl = truePerception
if s == S.get(Directions.SOUTH):
ps = truePerception
if n == S.get(Directions.NORTH):
pn = truePerception
pb += (pr*pl*ps*pn*wall*p)
# print pb
return pb
P_3(0.0, {Directions.EAST: True, Directions.WEST: True})
###Output
_____no_output_____
###Markdown
 c. Bayes' net for dynamic perception and position. Now we will consider a scenario where the Pacman moves a finite number of steps $n$. In this case we have $n$ different variables for the positions $X_{1},\dots,X_{n}$, as well as for each one of the perceptions, e.g. $E_{N_{1}},\dots,E_{N_{n}}$ for the north perception. For the initial Pacman position, assume a uniform distribution among the valid positions. Also assume that at each time step the Pacman chooses, to move, one of the valid neighbor positions with uniform probability. Draw the corresponding Bayes' net for $n=4$. d. Probability functions calculated from the dynamic model. Assuming a uniform distribution for the Pacman position probability, write functions to calculate the following probabilities: i. $P(X_{4}=x_{4}|E_{1}=e_{1},E_{3}=e_{3})$
###Code
def P_4(eps, E_1, E_3):
    '''
    Calculates: P(X_{4}=x_{4}|E_{1}=e_{1},E_{3}=e_{3})
    Arguments: E_1, E_3 dictionaries of type Directions --> {True,False}
               0 <= eps <= 1
    '''
    import numpy as np
    truePerception = 1 - eps
    falsePerception = eps
    mapa = getMapa()
    matrix = getMap()
    # Partial attempt: pick a random cell index among the 24 valid positions
    pos = np.random.randint(24)
    count = 0
    x1 = 1 / float(24)
    for i in range(len(matrix)):
        row = matrix[i]
        for j in range(len(row)):
            n, l, p, r, s = row[j]
            count += 1
            pr = falsePerception
            pn = falsePerception
            pl = falsePerception
            ps = falsePerception
            if mapa[i][j] == 0:
                wall = 1
            else:
                wall = 0
            if count == pos:
                # The transition model from X_1 to X_4 is not implemented yet
                pass
    pd = {(x, y): 0 for x in range(1, 7) for y in range(1, 6)}
    return pd
E_1 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
E_3 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
P_4(0.1, E_1, E_3)
###Output
20
###Markdown
ii. $P(X_{2}=x_{2}|E_{2}=e_{2},E_{3}=e_{3},E_{4}=e_{4})$
###Code
def P_5(eps, E_2, E_3, E_4):
'''
Calculates: P(X_{2}=x_{2}|E_{2}=e_{2},E_{3}=e_{3},E_{4}=e_{4})
Arguments: E_2, E_3, E_4 dictionaries of type Directions --> {True,False}
0 <= eps <= 1
'''
pd = {(x,y):0 for x in range(1,7) for y in range(1,6)}
return pd
E_2 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
E_3 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
E_4 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
P_5(0.1, E_2, E_3, E_4)
###Output
_____no_output_____
###Markdown
iii. $P(E_{4}=e_{4}|E_{1}=e_{1},E_{2}=e_{2},E_{3}=e_{3})$
###Code
def P_6(eps, E_1, E_2, E_3):
'''
Calculates: P(E_{4}=e_{4}|E_{1}=e_{1},E_{2}=e_{2},E_{3}=e_{3})
Arguments: E_1, E_2, E_3 dictionaries of type Directions --> {True,False}
0 <= eps <= 1
'''
pd = {(n, s, e, w): 0 for n in [False, True] for s in [False, True]
for e in [False, True] for w in [False, True]}
return pd
E_1 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
E_2 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
E_3 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
P_6(0.1, E_1, E_2, E_3)
###Output
_____no_output_____
###Markdown
 iv. $P(E_{E_{2}}=e_{E_{2}}|E_{N_{2}}=e_{N_{2}},E_{S_{2}}=e_{S_{2}})$
###Code
def P_7(eps, E_N, E_S):
'''
Calculates: P(E_{E_{2}}=e_{E_{2}}|E_{N_{2}}=e_{N_{2}},E_{S_{2}}=E_{S_{2}})
Arguments: E_N_2, E_S_2 \in {True,False}
0 <= eps <= 1
'''
pd = {True:0, False:0}
return pd
P_7(0.1, True, False)
###Output
_____no_output_____
###Markdown
Test functionsYou can use the following functions to test your solutions.
###Code
def approx_equal(val1, val2):
return abs(val1-val2) <= 0.00001
def test_P_1():
pd = P_1(0.0, True, True)
assert approx_equal(pd[(2, 1)], 0.1111111111111111)
assert approx_equal(pd[(3, 1)], 0)
pd = P_1(0.3, True, False)
assert approx_equal(pd[(2, 1)], 0.03804347826086956)
assert approx_equal(pd[(3, 1)], 0.016304347826086956)
def test_P_2():
pd = P_2(0.0, True, True)
assert approx_equal(pd[False], 1.0)
pd = P_2(0.3, True, False)
assert approx_equal(pd[False], 0.5514492753623188)
def test_P_3():
pd = P_3(0.1, {Directions.EAST: True, Directions.WEST: True})
assert approx_equal(pd, 0.2299999999999999)
pd = P_3(0.1, {Directions.EAST: True})
assert approx_equal(pd, 0.3999999999999999)
pd = P_3(0.2, {Directions.EAST: False, Directions.WEST: True, Directions.SOUTH: True})
assert approx_equal(pd, 0.0980000000000000)
def test_P_4():
E_1 = {Directions.NORTH: False, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: True}
E_3 = {Directions.NORTH: False, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: True}
pd = P_4(0.0, E_1, E_3)
assert approx_equal(pd[(6, 3)], 0.1842105263157895)
assert approx_equal(pd[(4, 3)], 0.0)
pd = P_4(0.2, E_1, E_3)
assert approx_equal(pd[(6, 3)], 0.17777843398830864)
assert approx_equal(pd[(4, 3)], 0.000578430282649176)
E_1 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
E_3 = {Directions.NORTH: False, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False}
pd = P_4(0.0, E_1, E_3)
assert approx_equal(pd[(6, 2)], 0.3333333333333333)
assert approx_equal(pd[(4, 3)], 0.0)
def test_P_5():
E_2 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
E_3 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: False, Directions.WEST: False}
E_4 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False}
pd = P_5(0, E_2, E_3, E_4)
assert approx_equal(pd[(2, 5)], 0.5)
assert approx_equal(pd[(4, 3)], 0.0)
pd = P_5(0.3, E_2, E_3, E_4)
assert approx_equal(pd[(2, 5)], 0.1739661245168835)
assert approx_equal(pd[(4, 3)], 0.0787991740545979)
def test_P_7():
pd = P_7(0.0, True, False)
assert approx_equal(pd[False], 0.7142857142857143)
pd = P_7(0.3, False, False)
assert approx_equal(pd[False], 0.5023529411764706)
test_P_1()
###Output
None
|
Anita Mburu-WT-21-022/22.ipynb | ###Markdown
 Transforming and Combining Data. In the previous module you worked on a dataset that combined two different `World Health Organization datasets: population and the number of deaths due to tuberculosis`. They could be combined because they share a `common attribute: the countries`. This week you will learn the techniques behind the creation of such a combined dataset.
###Code
import warnings
warnings.simplefilter('ignore', FutureWarning)
import pandas as pd
table = [
['UK', 2678454886796.7], # 1st row
['USA', 16768100000000.0], # 2nd row
['China', 9240270452047.0], # and so on...
['Brazil', 2245673032353.8],
['South Africa', 366057913367.1]
]
headings = ['Country', 'GDP (US$)']
gdp = pd.DataFrame(columns=headings, data=table)
headings = ['Country name', 'Life expectancy (years)']
table = [
['China', 75],
['Russia', 71],
['United States', 79],
['India', 66],
['United Kingdom', 81]
]
life = pd.DataFrame(columns=headings, data=table)
def roundToMillions (value):
return round(value / 1000000)
def usdToGBP (usd):
return usd / 1.564768 # average rate during 2013
def expandCountry (name):
if name == 'UK':
return 'United Kingdom'
elif name == 'USA':
return 'United States'
else:
return name
def expandCountry (name):
if name == 'UK':
name = 'United Kingdom'
if name == 'USA':
name = 'United States'
return name
gdp['Country name'] = gdp['Country'].apply(expandCountry)
gdp['GDP (£m)'] = gdp['GDP (US$)'].apply(usdToGBP).apply(roundToMillions)
gdp['GDP (US$)'].apply(roundToMillions).apply(usdToGBP).apply(round)
headings = ['Country name', 'GDP (£m)']
gdp = gdp[headings]
###Output
_____no_output_____
###Markdown
 Joining left, right and centre. At this point, both tables have a common column, 'Country name', with fully expanded country names. Let’s take stock for a moment. There’s the original, unchanged table (with full country names) about the life expectancy:
###Code
life
###Output
_____no_output_____
###Markdown
… and a table with the GDP in millions of pounds and also full country names.
###Code
gdp
###Output
_____no_output_____
###Markdown
 Both tables have a common column with a common name (‘Country name’). We can **join** the two tables on that common column, using the **merge()** function. Merging basically puts all columns of the two tables together, without duplicating the common column, and joins any rows that have the same value in the common column. There are four possible ways of joining, depending on which rows we want to include in the resulting table. If we want to include only those countries appearing in the GDP table, we call the **merge()** function. A **left join** takes the rows of the left table and adds the columns of the right table.
###Code
pd.merge(gdp, life, on='Country name', how='left')
###Output
_____no_output_____
###Markdown
 The first two arguments are the tables to be merged, with the first table being called the ‘left’ table and the second being the ‘right’ table. The on argument is the name of the common column, i.e. both tables must have a column with that name. The **how** argument states we want a **left join**, i.e. the resulting rows are dictated by the left (GDP) table. You can easily see that India and Russia, which appear only in the right (expectancy) table, don’t show up in the result. You can also see that Brazil and South Africa, which appear only in the left table, have an undefined life expectancy. (Remember that ‘NaN’ stands for ‘not a number’.) A **right join** will instead take the rows from the right table, and add the columns of the left table. Therefore, countries not appearing in the left table will have undefined values for the left table’s columns. A **right join** takes the rows from the right table, and adds the columns of the left table.
###Code
pd.merge(gdp, life, on='Country name', how='right')
###Output
_____no_output_____
###Markdown
 The third possibility is an **outer join** which takes all countries, i.e. whether they are in the left or right table. The result has all the rows of the left and right joins. An **outer join** takes the union of the rows, i.e. it has all the rows of the left and right joins.
###Code
pd.merge(gdp, life, on='Country name', how='outer')
###Output
_____no_output_____
###Markdown
 The last possibility is an **inner join** which takes only those countries common to both tables, i.e. for which I know the GDP and the life expectancy. That’s the join we want, to avoid any undefined values: An **inner join** takes the intersection of the rows (i.e. the common rows) of the left and right joins.
###Code
gdpVsLife = pd.merge(gdp, life, on='Country name', how='inner')
gdpVsLife
###Output
_____no_output_____
###Markdown
 Task: Join your population dataframe from the previous task with `gdpVsLife`, in four different ways, and note the differences. A sketch of one possible approach is shown after the next code cell.
###Code
gdpVsLife = pd.merge(gdp, life, on='Country name', how='inner')
gdpVsLife
###Output
_____no_output_____
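###Markdown
 A sketch of one way to approach the task above (assuming a hypothetical `population` dataframe with a 'Country name' column, built along the same lines as `gdp`; the population figures below are illustrative placeholders only):
###Code
# Hypothetical population table (millions of people) -- values are illustrative placeholders
population = pd.DataFrame(columns=['Country name', 'Population (millions)'],
                          data=[['United Kingdom', 64.1], ['United States', 316.5],
                                ['China', 1357.0], ['India', 1252.0], ['Brazil', 200.4]])
# Compare how many rows each kind of join keeps
for how in ['left', 'right', 'outer', 'inner']:
    print(how, 'join:', len(pd.merge(gdpVsLife, population, on='Country name', how=how)), 'rows')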
###Markdown
 Constant variables. You may have noticed that the same column names appear over and over in the code. If, someday, we decide one of the new columns should be called `‘GDP (million GBP)’` instead of `‘GDP (£m)’` to make clear which currency is meant (because various countries use the pound symbol), we need to change the string in every line of code it occurs. Laziness is the mother of invention. If we assign the string to a variable and then use the variable everywhere instead of the string, whenever we wish to change the string, we only have to edit one line of code, where it’s assigned to the variable. A second advantage of using names instead of values is that we can use the name completion facility of Jupyter notebooks by pressing **‘TAB’**. Writing code becomes much faster… `gdpInGbp = 'GDP (million GBP)'; gdpInUsd = 'GDP (US$)'; country = 'Country name'; gdp[gdpInGbp] = gdp[gdpInUsd].apply(usdToGbp); headings = [country, gdpInGbp]; gdp = gdp[headings]` Such variables are meant to be assigned once. They are called **constants**, because their value never changes. However, if someone else takes our code and wishes to adapt and extend it, they may not realise those variables are supposed to remain constant. Even we may forget it and try to assign a new value further down in the code! To help prevent such slip-ups the Python convention is to write names of constants in uppercase letters, with words separated by underscores. Thus, any further assignment to a variable in uppercase will ring an alarm bell `(in your head, the computer remains silent)`. Constants are used to represent fixed values (e.g. strings and numbers) that occur frequently in a program. Constant names are conventionally written in uppercase, with underscores to separate multiple words.
###Code
GDP_USD = 'GDP (US$)'
GDP_GBP = 'GDP (£m)'
GDP_USD
COUNTRY = 'Country name'
gdp[GDP_GBP] = gdp[GDP_USD].apply(usdToGbp)
headings = [COUNTRY, GDP_GBP]
gdp = gdp[headings]
###Output
_____no_output_____ |
Analysis/Wine_Review_Analysis.ipynb | ###Markdown
Wine Review Data AnalysisAnalysis of 130k different wines from various countries reviewed by different tasters and giving a score on them. Importing required librariesSetting the parameters for these libraries aswell
###Code
import pandas as pd
pd.set_option("display.max_rows", 15)
###Output
_____no_output_____
###Markdown
Read the file and assign to a variable named `reviews`, Check the size of the file and its dimensions aswell
###Code
reviews = pd.read_csv(r'Input\Data\winemag-130k-v2.csv',  # raw string keeps the Windows path backslashes literal
index_col=0)
# Check the shape and size of the Data Frame
reviews.shape
reviews.size
# Check the data
reviews.head()
###Output
_____no_output_____
###Markdown
Getting to know the columns of the database, so it can be used more efficiently later
###Code
reviews.columns
###Output
_____no_output_____
###Markdown
Calling the country column from the data
###Code
reviews.country
reviews['country'][1467]
###Output
_____no_output_____
###Markdown
Understanding the statistics of the data including the categorical variables
###Code
# Descriptive Statistics
reviews.describe(include='all')
###Output
_____no_output_____
###Markdown
Descriptive Statistics of a column taster_name
###Code
reviews.taster_name.describe()
###Output
_____no_output_____
###Markdown
The mean of the points in the data
###Code
reviews.points.mean()
###Output
_____no_output_____
###Markdown
The unique reviewers in the data
###Code
reviews.taster_name.unique()
###Output
_____no_output_____
###Markdown
Counting how many reviews each taster has contributed
###Code
reviews.taster_name.value_counts()
###Output
_____no_output_____
###Markdown
Index based Selection (iloc): pandas's `iloc` is row-first, column-second, which is the opposite of the native Python style of indexing a DataFrame column first and then row
###Code
reviews.iloc[4]
# Just the first column
reviews.iloc[:, 0]
# Just the second and third row entries of first column
reviews.iloc[1:3, 0]
# Select first, second, third rows of first column as well
reviews.iloc[[0, 1, 2], 0]
###Output
_____no_output_____
###Markdown
Label Based Selection (loc): `loc` is similar to `iloc` but selects rows and columns by the labels in the index rather than by numerical position as `iloc` does
###Code
reviews.loc[3, 'country']
# Getting select columns using loc
reviews.loc[:, ['taster_name', 'points']]
# Conditional Selection
print('Asking if each wine satisfies a specific condition')
reviews.country == 'Italy'
print('Find which wines are from Italy with higher than average points')
reviews.loc[(reviews.country == 'Italy') & (reviews.points >= 90)]
###Output
Find which wines are from Italy with higher than average points
###Markdown
Select wines only from italy or france
###Code
reviews.loc[reviews.country.isin(['Italy', 'France'])]
###Output
_____no_output_____
###Markdown
Mapping of the data: we try some mapping functionality to get more insight into the data, using the `map` and `apply` functions
###Code
print('Scores of wines after re-meaning:')
reviews_point_mean = reviews.points.mean()
reviews.points.map(lambda p: p - reviews_point_mean)
# Can also be achieved with in-built pandas functionality
reviews.points - reviews_point_mean
###Output
_____no_output_____
###Markdown
`apply` takes a `DataFrame` as input and works like `map` across the entire dataframe, while `map` takes only a `Series` as input
###Code
def remean_points(srs):
srs.points = srs.points - reviews_point_mean
return srs
reviews.apply(remean_points, axis='columns')
###Output
_____no_output_____
###Markdown
Combining country and region information
###Code
reviews.country + "-" + reviews.region_1
fruity_wine = reviews.description.map(lambda x: 'fruity' in x).value_counts()
fruity_wine
reviews.loc[(reviews.points/reviews.price).idxmax()].title
###Output
_____no_output_____
###Markdown
Exploring the description of wines to get some insight
###Code
tropical_wine = reviews.description.map(lambda x: 'tropical' in x).value_counts()
tropical_wine
fruity_wine = reviews.description.map(lambda x: 'fruity' in x).value_counts()
fruity_wine
###Output
_____no_output_____
###Markdown
Lets create a series based on this information
###Code
pd.Series([tropical_wine[True], fruity_wine[True]], index=['tropical', 'fruity'], name='Wine Types')
###Output
_____no_output_____
###Markdown
Extracting information related to countries and their varieties of wines
###Code
temp = reviews.loc[(reviews.country.notnull()) & (reviews.variety.notnull())]
temp
country_variety = temp.apply(lambda x: x.country + '-' + x.variety, axis='columns')
country_variety.value_counts()
###Output
_____no_output_____
###Markdown
Grouping of the data: we group the data with `groupby` to get more information about the dataset
###Code
reviews.groupby('points').points.count()
###Output
_____no_output_____
###Markdown
Cheapest wine in each point value category
###Code
reviews.groupby('points').price.min()
###Output
_____no_output_____
###Markdown
Name of the first wine reviewed from each winery in the dataset
###Code
reviews.groupby('winery').apply(lambda x: x.title.iloc[0])
###Output
_____no_output_____
###Markdown
We can get more control over the data to be displayed, as shown below, by grouping by more than one column: here, the best wine by country and province
###Code
reviews.groupby(['country', 'province']).apply(lambda x: x.loc[x.points.idxmax()])
###Output
_____no_output_____
###Markdown
Using `agg` we can run multiple different functions on the `DataFrame` simultaneously
###Code
reviews.groupby(['country']).price.agg([len, min, max])
###Output
_____no_output_____
###Markdown
Multi Indexing: using a multi-index we can gain more insight into the data.
###Code
countries_reviewed = reviews.groupby(['country', 'province']).description.agg([len])
countries_reviewed
###Output
_____no_output_____
###Markdown
In general the MultiIndex method we will use most often is the one for converting back to a regular index, the `reset_index` method
###Code
countries_reviewed.reset_index()
###Output
_____no_output_____
###Markdown
Sorting: Looking again at `countries_reviewed` we can see that grouping returns data in index order, not in value order. That is to say, when outputting the result of a groupby, the order of the rows is dependent on the values in the index, not in the data. To get the data in the order we want, we can sort it ourselves. The `sort_values` method is handy for this.
###Code
countries_reviewed = countries_reviewed.reset_index()
countries_reviewed.sort_values(by='len')
###Output
_____no_output_____
###Markdown
`sort_values` defaults to an ascending sort, where the lowest values go first. Most of the time we want a descending sort however, where the higher numbers go first. That goes thusly:
###Code
countries_reviewed.sort_values(by='len', ascending=False)
###Output
_____no_output_____
###Markdown
To sort by index values, use the companion method `sort_index`. This method has the same arguments and default order:
###Code
countries_reviewed.sort_index()
###Output
_____no_output_____
###Markdown
We can sort by more than one column at a time:
###Code
countries_reviewed.sort_values(by=['country', 'len'])
###Output
_____no_output_____
###Markdown
Finding the most common wine reviewers in the dataset
###Code
common_wine_reviewers = reviews.groupby('taster_twitter_handle').taster_twitter_handle.count()
# Sort the values in descending order
common_wine_reviewers = common_wine_reviewers.sort_values(ascending=False)
common_wine_reviewers
###Output
_____no_output_____
###Markdown
Best wine to buy from a given amount of money
###Code
reviews.groupby('price').points.max().sort_index()
###Output
_____no_output_____
###Markdown
Maximum and minimum prices for each `variety` of wine, and the mean points awarded by each taster
###Code
reviews.groupby('variety').price.agg([max, min])
reviews.groupby('taster_name').points.mean()
###Output
_____no_output_____
###Markdown
The most expensive wine varieties.
###Code
reviews.groupby('variety').price.agg([min, max]).sort_values(by=['min', 'max'], ascending=False)
###Output
_____no_output_____
###Markdown
Combination of countries and varieties which are most common
###Code
reviews['n'] = 0
reviews.groupby(['country', 'variety']).n.count().sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Missing Data and Checking Data Types: here we identify missing values, handle them, and also check the data types within the dataset.
###Code
# Checking dtype property of the specific column
reviews.price.dtype
# dtypes returns dtype of all columns in the dataset
reviews.dtypes
###Output
_____no_output_____
###Markdown
Converting data types of columns to make more sense and correctly adjusting the data.
###Code
reviews.points.astype('float64')
###Output
_____no_output_____
###Markdown
Indexes have dtypes of their own as well
###Code
reviews.index.dtype
###Output
_____no_output_____
###Markdown
Missing values in the data are denoted by `NaN`, short for 'Not a Number'. `NaN` values always have the `float64` dtype for technical reasons. We can use `pd.isnull` or `pd.notnull` to find the missing entries.
###Code
reviews[reviews.country.isnull()]
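# Complementary sketch: `notnull` selects the rows that do have a value
reviews[reviews.country.notnull()].head()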
###Output
_____no_output_____
###Markdown
We can deal with missing values by replacing them with various values such as the mean, the median, or the most frequent value, depending on the situation, by using `fillna`
###Code
# Replacing missing values with 'unknown' as value
reviews.region_2.fillna('unknown')
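# Sketch: a numeric column can instead be filled with a statistic such as its mean
reviews.price.fillna(reviews.price.mean())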
###Output
_____no_output_____
###Markdown
Kerin O'Keefe has changed her Twitter handle from `@kerinokeefe` to `@kerino`. One way to reflect this in the dataset is using the `replace` method:
###Code
reviews.taster_twitter_handle.replace('@kerinokeefe', '@kerino')
###Output
_____no_output_____
###Markdown
Some wines do not list a price. How often does this occur? We can determine this by generating a boolean `Series` that states, for each review in the dataset, whether the wine reviewed has a null `price`
###Code
reviews.price.isnull()
###Output
_____no_output_____
###Markdown
The most common wine-producing regions: we can find these by creating a `Series` counting the number of times each value occurs in the `region_1` field. This field is often missing data, so replace missing values with `Unknown`. Sort in descending order.
###Code
reviews.region_1.fillna("Unknown").value_counts()
###Output
_____no_output_____
###Markdown
The `sum()` of a list of boolean values will return how many times `True` appears in that list. Create a `pandas` `Series` showing how many times each of the columns in the dataset contains null values.
###Code
reviews.isnull().sum().sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Renaming We can rename indexes, column names etc. using the function `rename`
###Code
# Changing points column to score
reviews.rename(columns={'points': 'score'})
reviews.rename_axis('wines', axis='rows').rename_axis('fields', axis='columns')
###Output
_____no_output_____
###Markdown
Combining: we can combine DataFrames and Series using the three main functions `concat`, `join` and `merge`; the simplest of these is `concat` (a small `concat` sketch is included in the code cell below). Separately, `region_1` and `region_2` are pretty uninformative names for locale columns in the dataset, so let's rename these columns to `region` and `locale`.
###Code
reviews.rename(columns={'region_1': 'region', 'region_2': 'locale'})
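# Minimal `concat` sketch (assumption: re-stacking two slices of `reviews` stands in
# for combining two separate DataFrames that share the same columns):
pd.concat([reviews.iloc[:100], reviews.iloc[100:200]])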
###Output
_____no_output_____
###Markdown
Method Chaining with reference: chaining operations on a `DataFrame` or `Series` avoids redundant intermediate steps and reduces the lines of code. Method chaining is advantageous for several reasons. One is that it lessens the need for creating and mentally tracking temporary variables. Another is that it emphasizes a correctly structured iterative approach to working with data, where each operation is a "next step" after the last. Debugging is easy: just comment out operations that don't work until you get to one that does, and then start stepping forward again. To fill the `region_1` field with the `province` field wherever `region_1` is null (useful if we're mixing in our own categories), we would do the following. The `assign` method lets you create new columns or modify old ones inside of a DataFrame inline.
###Code
reviews.assign(region_1=reviews.apply(lambda srs: srs.region_1 if pd.notnull(srs.region_1) else srs.province,
axis='columns'))
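# A minimal sketch of a longer chain: each step feeds the next, so no temporary
# variables are needed (all columns used here exist in `reviews`).
(reviews
 .assign(region_1=reviews.region_1.fillna(reviews.province))
 .groupby('country')
 .points.mean()
 .sort_values(ascending=False)
 .head())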
###Output
_____no_output_____
###Markdown
`pipe` lets you perform an operation on the entire `DataFrame` at once, and replaces the current `DataFrame` with the output of your pipe.
###Code
def name_index(df):
df.index.name = 'review_id'
return df
reviews.pipe(name_index)
###Output
_____no_output_____ |
extract_box.ipynb | ###Markdown
CoastWatch Python Exercises Python Basics: A tutorial for the NOAA Satellite Workshop> history | updated August 2021 > owner | NOAA CoastWatch In this exercise, you will use Python to download data and metadata from ERDDAP. The exercise demonstrates the following skills: * Using Python to retrieve information about a dataset from ERDDAP* Downloading satellite data from ERDDAP in netCDF format* Extracting data with Python About the dataset used in this exercise* For the examples in this exercise we will use the NOAA GeoPolar Sea Surface Temperature dataset from the CoastWatch West Coast Node * ERDDAP ID = nesdisGeoPolarSSTN5SQNRT* https://coastwatch.pfeg.noaa.gov/erddap/griddap/nesdisGeoPolarSSTN5SQNRT.graph * The dataset contains monthly composites of SST* The low spatial resolution (0.05 degrees) will allow for small download size and help prevent overloading the internet bandwidth during the class Look for python modules you might not have installed* Use the pkg_resources module to check for installed modules* We will be using the xarray, numpy, and pandas modules for this exercise. * Make sure that they are installed in your Python 3 environment. * A quick way to do this is with the script below
###Code
import pkg_resources
# Create a set 'curly brackets' of the modules to look for
# You can put any modules that you want to in the set
required = {'xarray', 'numpy', 'pandas'}
# look for the installed packages
installed = {pkg.key for pkg in pkg_resources.working_set}
# Find which modules are missing
missing = required - installed
if len(missing)==0:
print('All modules are installed')
else:
print('These modules are missing', ', '.join(missing))
###Output
All modules are installed
###Markdown
Import the primary modules for this tutorial* numpy is used for matrix operations* numpy.ma specifically is used for masked arrays* pandas is used for tabular data* xarray is used for opening the gridded dataset
###Code
import numpy as np
import numpy.ma as ma
import pandas as pd
import xarray as xr
###Output
_____no_output_____
###Markdown
Get information about a dataset from ERDDAP
We will use the xarray `open_dataset` function to access data and metadata from ERDDAP. Here we describe how to create the url for the `open_dataset` function and demonstrate a function to generate the ERDDAP url. Open a pointer to an ERDDAP dataset using the xarray `open_dataset` function: > xr.open_dataset('full_url_to_erddap_dataset') Where the `full_url_to_erddap_dataset` is the base url of the ERDDAP you are using plus the ERDDAP dataset id. So, for our dataset: * base_URL = 'https://coastwatch.pfeg.noaa.gov/erddap/griddap' * dataset_id = 'nesdisGeoPolarSSTN5SQNRT' * full_URL = 'https://coastwatch.pfeg.noaa.gov/erddap/griddap/nesdisGeoPolarSSTN5SQNRT'
__A simple `open_dataset` example__
```python
base_URL = 'https://coastwatch.pfeg.noaa.gov/erddap/griddap'
dataset_id = 'nesdisGeoPolarSSTN5SQNRT'
full_URL = '/'.join([base_URL, dataset_id])
print(full_URL)
da = xr.open_dataset(full_URL)
```
__Make this more versatile by putting it into a function__
```python
def point_to_dataset(dataset_id, base_url='https://coastwatch.pfeg.noaa.gov/erddap/griddap'):
    base_url = base_url.rstrip('/')
    full_url = '/'.join([base_url, dataset_id])
    return xr.open_dataset(full_url)
```
* __dataset_id__ is the ERDDAP id for the dataset of interest. For this example: 'nesdisGeoPolarSSTN5SQNRT'
* __base_url__ is the url of the ERDDAP you are pulling data from. For this example, the West Coast Node ERDDAP at 'https://coastwatch.pfeg.noaa.gov/erddap/griddap'
* __full_url__ is the full URL to the ERDDAP dataset created by joining base_url and dataset_id
* The pointer to the dataset is returned
* The default base_url is the West Coast Node ERDDAP 'https://coastwatch.pfeg.noaa.gov/erddap/griddap'.
* Examples of passing dataset_id and base_url to the function:
```python
point_to_dataset('nesdisGeoPolarSSTN5SQNRT')
point_to_dataset(dataset_id='nesdisGeoPolarSSTN5SQNRT')
point_to_dataset('nesdisGeoPolarSSTN5SQNRT', 'https://upwell.pfeg.noaa.gov/erddap/griddap')
point_to_dataset(dataset_id='nesdisGeoPolarSSTN5SQNRT', base_url='https://upwell.pfeg.noaa.gov/erddap/griddap')
```
###Code
def point_to_dataset(dataset_id, base_url='https://coastwatch.pfeg.noaa.gov/erddap/griddap'):
base_url = base_url.rstrip('/')
full_url = '/'.join([base_url, dataset_id])
return xr.open_dataset(full_url)
da = point_to_dataset('nesdisGeoPolarSSTN5SQNRT')
da
###Output
_____no_output_____
###Markdown
Examine the metadata Examine the coordinate variables and dimensions* The code below lists the coordinate variables and their sizes. * The dataset is a 3D array with: * 6793 values in the time dimension (as of 4/15/2021 but that increases each day) * 3600 values in the latitude dimension * 7200 values in the longitude dimension
###Code
display(da.coords)
display(da.dims)
###Output
_____no_output_____
###Markdown
Examine the data variables* The code below lists the data variables. * There are several variables in the dataset. We are interested in "analysed_sst".
###Code
print ('data variables', list(da.keys()))
###Output
data variables ['analysed_sst', 'analysis_error', 'sea_ice_fraction', 'mask']
###Markdown
Examine the Global Attributes. Global attributes provide information about a dataset as a whole. A few of the global attributes are important for helping you to decide if the dataset will work for your application: * `geospatial_lat_min`, `geospatial_lat_max`, `geospatial_lon_min` and `geospatial_lon_max` provide the geographical range of the dataset * `time_coverage_start` and `time_coverage_end` provide the time range covered by the dataset * `geospatial_lat_resolution` and `geospatial_lon_resolution` provide the spatial resolution * Attributes like `comment`, `summary`, and `references` provide more information about: * how the dataset was generated * how you may use the data * the people and organizations to acknowledge if you use the data. The code below lists these global attributes.
###Code
print('Latitude range:', da.geospatial_lat_min, 'to', da.geospatial_lat_max)
print('Longitude range:', da.geospatial_lon_min, 'to', da.geospatial_lon_max)
print('Time range:', da.time_coverage_start, 'to', da.time_coverage_end)
print('Spatial resolution (degrees):',
'lat', np.around(da.geospatial_lat_resolution, decimals=3),
'lon', np.around(da.geospatial_lon_resolution, decimals=3)
)
print(' ')
print('Dataset summary:')
print(da.summary)
###Output
Latitude range: -89.975 to 89.975
Longitude range: -179.975 to 179.975
Time range: 2002-09-01T12:00:00Z to 2022-01-09T12:00:00Z
Spatial resolution (degrees): lat 0.05 lon 0.05
Dataset summary:
This dataset is an aggregation of Science Quality STAR data (2002-2016) and Near Real Time OSPO data (2017-present). Analysed blended sea surface temperature over the global ocean using night only input data. An SST estimation scheme which combines multi-satellite retrievals of sea surface temperature datasets available from polar orbiters, geostationary InfraRed (IR) and microwave sensors into a single global analysis. This global SST ananlysis provide a daily gap free map of the foundation sea surface temperature at 0.05� spatial resolution.
###Markdown
Download data from ERDDAP
For this exercise, the area we are interested in includes Monterey Bay, CA: * Latitude range: 32N, 39N * Longitude range: -124E, -117E * Time range: June 3, 2020 to June 7, 2020. The xarray module makes it really easy to request a subset of a dataset using latitude, longitude, and time ranges. Here is an example using the `sel` method with the `slice` function.
```python
sst = da['analysed_sst'].sel(
    latitude=slice(32., 39.),
    longitude=slice(-124, -117),
    time=slice('2020-06-03T12:00:00', '2020-06-07T12:00:00')
)
```
> Note: If the dataset has an altitude dimension, an altitude slice would need to be added, e.g. `altitude=slice(0.0)`.
Create a subsetting function: use these xarray features in a function to make it more versatile. * `my_da` is the data array produced from `open_dataset` * `my_var` is the name of the variable * the other inputs are the geographic and time ranges
###Code
def get_data(my_da, my_var,
my_lt_min, my_lt_max,
my_ln_min, my_ln_max,
my_tm_min, my_tm_max
):
my_data = my_da[my_var].sel(
latitude=slice(my_lt_min, my_lt_max),
longitude=slice(my_ln_min, my_ln_max),
time=slice(my_tm_min, my_tm_max)
)
return my_data
###Output
_____no_output_____
###Markdown
Run the subsetting function with our geographical and time rangesThe returned sst data array is a subset of `da`
###Code
lat_min = 32.
lat_max = 39.
lon_min = -124.
lon_max = -117.
time_min = '2020-06-03T12:00:00' # written in ISO format
time_max = '2020-06-07T12:00:00' # written in ISO format
my_var = 'analysed_sst'
sst = get_data(
da, my_var,
lat_min, lat_max,
lon_min, lon_max,
time_min, time_max
)
print(sst.dims)
print('Dimension size:', sst.shape)
sst
###Output
('time', 'latitude', 'longitude')
Dimension size: (5, 140, 140)
###Markdown
Visualizing the satellite SST data. Make a simple plot: Xarray makes it easy to quickly visualize the data as a map. * Use the isel method to pick a time slice by its index number * Use the imshow method to plot the data * We have 5 time steps so the index numbers are 0 to 4. Plot the first time step: ```sst.isel(time=0).plot.imshow()```
###Code
%matplotlib inline
sst.isel(time=0).plot.imshow()
###Output
_____no_output_____
###Markdown
Use a loop to plot all 5 time steps
###Code
import matplotlib.pyplot as plt
for i in range(0,5):
ax = plt.subplot()
sst.isel(time=i).plot.imshow()
plt.show()
###Output
_____no_output_____
###Markdown
Calculate the mean SST over the region for each day * Use the `numpy mean()` method to take the mean of the latitude longitude grid ( axis=(1,2) ) for each time slice.* Use the `matplotlib.pyplot.plot_date()` plot routine, which formats the x axis labels as dates* Use `sst.time` as the x axis values and the mean as the Y axis values
###Code
plt.plot_date(sst.time, sst.mean(axis=(1,2)), 'o')
# auto format the date label positions on the x axis
plt.gcf().autofmt_xdate()
###Output
_____no_output_____ |
8_Grouping and Aggregating/1_Groupby/4_When_to_use_Groupby.ipynb | ###Markdown
https://www.youtube.com/watch?v=qy0fDqoMJx8 When should I use a "groupby" in pandas?
###Code
import pandas as pd
drinks = pd.read_csv("http://bit.ly/drinksbycountry", sep=",")
drinks.head()
drinks.beer_servings.mean()
drinks['continent'].value_counts()
drinks.groupby('continent').beer_servings.mean()
drinks.groupby('continent').beer_servings.max()
drinks.groupby('continent').beer_servings.min()
drinks[drinks.continent=='Africa']
drinks[drinks.continent=='Africa'].beer_servings.mean()
drinks[drinks.continent=='Europe'].beer_servings.mean()
drinks.groupby('continent').beer_servings.agg(['count','min','max','mean'])
%matplotlib inline
drinks.groupby('continent').mean().plot()
drinks.groupby('continent').mean().plot(kind='bar')
###Output
_____no_output_____ |
Lab Activities Submission/ProblemSet1/Tests/58089_PrelimPS_Tadeo.ipynb | ###Markdown
Topic02a : Prelim Problem Set I Case 1Represent the following representations into its vectorized form using LaTeX.> **Problem 1.a. System of Linear Equations**$$\left\{ \begin{array}{cc} -y+z=\frac{1}{32}\\ \frac{1}{2}x -2y=0 \\ -x + \frac{3}{7}z=\frac{4}{5} \end{array}\right. $$> **Problem 1.b. Linear Combination**$$ \cos{(\theta)}\hat{i} + \sin{(\theta)}\hat{j} - \csc{(2\theta)}\hat{k}$$> **Problem 1.c. Scenario**>>A conference has 200 student attendees, 45 professionals, and has 15 members of the panel. There is a team of 40 people on the organizing committee. Represent the *percent* composition of each *attendee* type of the conference in matrix form.Express your answers in LaTeX in the answer area. Problem 1.a System of Linear Equations$$ \begin{bmatrix} z \\ y\end{bmatrix} = \begin{bmatrix} 0 \\ -\frac{1}{32} \end{bmatrix} + t\begin{bmatrix} 1 \\ 1 \end{bmatrix} $$ $$ \begin{bmatrix} x \\ y\end{bmatrix} = \begin{bmatrix} 0 \\0 \end{bmatrix} + t\begin{bmatrix} 4 \\ 1 \end{bmatrix} $$$$ \begin{bmatrix} x \\ z\end{bmatrix} = \begin{bmatrix} 3 \\ 7 \end{bmatrix} + t\begin{bmatrix} 0 \\ \frac{28}{35} \end{bmatrix}$$Problem 1.b Linear Combination$$\text{X} = \begin{bmatrix} \cos{[\theta]} & sin{[\theta]} & -csc{[2\theta]}\\ \end{bmatrix} ,\omega = \begin{bmatrix}\hat{i} \\ \hat{j} \\ \hat{k}\end{bmatrix}$$$$\cos{[\theta]} * direction_\hat{i} + sin{[\theta]} * direction_\hat{j} - csc{[2\theta]}*direction_\hat{k}$$$$\text{X} \cdot \omega $$Problem 1.c Scenario$$ Students (x): \frac{200}{260} \times 100 = 76.92\% $$$$ Professionals (y): \frac{45}{260} \times 100 = 17.31\% $$$$ Panel (z): \frac{15}{260} \times 100 = 5.77\% $$$$ \begin{bmatrix} \frac{10}{13} \\ \frac{9}{52} \\ \frac{1}{52} \end{bmatrix} \cdot \begin{bmatrix} x \\ y\\ z \end{bmatrix}$$ Case 2> **Problem 2.a: Vector Magnitude**>The magnitude of a vector is usually computed as:$$||v|| = \sqrt{a_0^2 + a_1^2 + ... +a_n^2}$$Whereas $v$ is any vector and $a_k$ are its elements wherein $k$ is the size of $v$.Re-formulate $||v||$ as a function of an inner product. Further discuss this concept and provide your user-defined function.> **Problem 2.b: Angle Between Vectors**> Inner products can also be related to the Law of Cosines. The property suggests that:$$u\cdot v = ||u||\cdot||v||\cos(\theta)$$Whereas $u$ and $v$ are vectors that have the same sizes and $\theta$ is the angle between $u$ and $v$.> Explain the behavior of the dot product when the two vectors are perpendicular and when they are parallel.
###Code
import numpy as np
theta = np.pi / 6  # assumed example angle, chosen only for illustration
matA = np.array([np.cos(theta), np.sin(theta), -1/np.sin(2*theta)])  # csc(2θ) = 1/sin(2θ)
matB = np.eye(3)  # rows stand in for the unit vectors i, j, k
np.dot(matA, matB)
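# Problem 2.a sketch: ||v|| re-formulated through the inner product, ||v|| = sqrt(<v, v>)
def magnitude(v):
    v = np.asarray(v, dtype=float)
    return np.sqrt(np.inner(v, v))

magnitude([3, 4])  # expected 5.0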
###Output
_____no_output_____ |
Code/Peroxisome_dynamics/Experimental_Omega.ipynb | ###Markdown
Overview of implementing omega on real experimental dataset Steps1. Determine biochemical process that generated the experimental data and program this into Omega 1. An example of this is a Gillespie simulation (although it doesn't need to be)2. This implementation can run simulations which follow the same biochemical laws as the experiment 1. For a given set of cells $X$ we can generate traces over some time period $T$ 2. For each time point $t_i$ in $T$ we can describe the cell abundance as $P(x_1, x_2, .., x_n|t_i)$3. We can condition the implementation on our experimental data to recreate the experimental state $$\sum\nolimits_{t_i \in T} P(x_1, x_2,.., x_n|\text{Data}, t_i)$$4. Use Omega's functionality (ie replace) on the conditioned trace to ask counterfactual queries about the experiment Biochemical model of organelle dynamicsQuick reminder on what our experimental data looks like.
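Step 1 above mentions a Gillespie simulation as one way to encode the generating process. As a point of reference, a minimal Gillespie (stochastic simulation algorithm) sketch for a toy birth-death process is shown below; the rate constants and initial count are illustrative assumptions, not the fitted peroxisome model.
```python
import numpy as np

def gillespie_birth_death(k_birth=1.0, k_death=0.1, x0=10, t_max=100.0, seed=0):
    """Toy Gillespie SSA for a birth-death process (illustrative parameters only)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_max:
        propensities = np.array([k_birth, k_death * x])  # birth rate, death rate
        total = propensities.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)                # waiting time to next event
        x += 1 if rng.random() < propensities[0] / total else -1
        times.append(t)
        counts.append(x)
    return np.array(times), np.array(counts)

times, counts = gillespie_birth_death()
```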
###Code
import pandas as pd
day1 = pd.read_csv('../../../../../Research/Causal_Inference/SDE_inference/Experimental_Data/Data/Day1/all.dat', sep=',').iloc[:, 1:]
day1.head()
###Output
_____no_output_____ |
Exercise_sessions/Lab Chapter 5.ipynb | ###Markdown
Lab: Cross-Validation and the Bootstrap. In this lab, we explore the resampling techniques covered in this chapter. Some of the commands in this lab may take a while to run on your computer. 1. The Validation Set Approach. We explore the use of the validation set approach in order to estimate the test error rates that result from fitting various linear models on the Auto data set. Before we begin, we use the set.seed() function in order to set a seed for R's random number generator, so that the reader of this book will obtain precisely the same results as those shown below. It is generally a good idea to set a random seed when performing an analysis such as cross-validation that contains an element of randomness, so that the results obtained can be reproduced precisely at a later time. We begin by using the sample() function to split the set of observations into two halves, by selecting a random subset of 196 observations out of the original 392 observations. We refer to these observations as the training set.
###Code
# install.packages('ISLR', repos='http://cran.us.r-project.org')
library(ISLR)
set.seed(1)
train = sample(392,196)
###Output
_____no_output_____
###Markdown
(Here we use a shortcut in the sample command; see ?sample for details.) We then use the subset option in lm() to fit a linear regression using only the observations corresponding to the training set.
###Code
lm.fit =lm(mpg~horsepower ,data=Auto , subset = train )
###Output
_____no_output_____
###Markdown
We now use the predict() function to estimate the response for all 392 observations, and we use the mean() function to calculate the MSE of the 196 observations in the validation set. Note that the -train index below selects only the observations that are not in the training set.
###Code
attach(Auto)
mean((mpg - predict(lm.fit ,Auto))[-train]^2)
###Output
_____no_output_____
###Markdown
Therefore, the estimated test MSE for the linear regression fit is 23.27. We can use the poly() function to estimate the test error for the quadratic and cubic regressions.
###Code
lm.fit2=lm(mpg~poly(horsepower,2) ,data=Auto , subset =train )
mean((mpg - predict(lm.fit2 ,Auto))[-train]^2)
lm.fit3=lm(mpg~poly(horsepower,3) ,data=Auto , subset =train )
mean((mpg - predict(lm.fit3 ,Auto))[-train]^2)
###Output
_____no_output_____
###Markdown
These error rates are 18.72 and 18.79, respectively. If we choose a different training set instead, then we will obtain somewhat different errors on the validation set.
###Code
set.seed(2)
train = sample(392,196)
lm.fit = lm(mpg~horsepower, data=Auto, subset=train)
mean((mpg - predict(lm.fit, Auto))[-train]^2)
lm.fit2 = lm(mpg~poly(horsepower,2), data=Auto, subset=train)
mean((mpg - predict(lm.fit2, Auto))[-train]^2)
lm.fit3 = lm(mpg~poly(horsepower,3), data=Auto, subset=train)
mean((mpg - predict(lm.fit3, Auto))[-train]^2)
###Output
_____no_output_____
###Markdown
Using this split of the observations into a training set and a validation set, we find that the validation set error rates for the models with linear, quadratic, and cubic terms are 25.73, 19.95, and 19.98, respectively. These results are consistent with our previous findings: a model that predicts mpg using a quadratic function of horsepower performs better than a model that involves only a linear function of horsepower, and there is little evidence in favor of a model that uses a cubic function of horsepower. 2. Leave-One-Out Cross-Validation. The LOOCV estimate can be automatically computed for any generalized linear model using the glm() and cv.glm() functions. In the lab for Chapter 4, we used the glm() function to perform logistic regression by passing in the family="binomial" argument. But if we use glm() to fit a model without passing in the family argument, then it performs linear regression, just like the lm() function. So for instance,
###Code
glm.fit =glm(mpg~horsepower , data= Auto)
coef(glm.fit )
# and
lm.fit =lm(mpg~horsepower , data= Auto)
coef(lm.fit )
###Output
_____no_output_____
###Markdown
yield identical linear regression models. In this lab, we will perform linear regression using the glm() function rather than the lm() function because the former can be used together with cv.glm(). The cv.glm() function is part of the boot library.
###Code
library(boot)
glm.fit =glm(mpg~horsepower, data= Auto)
cv.err =cv.glm(Auto, glm.fit)
cv.err$delta
###Output
_____no_output_____
###Markdown
The cv.glm() function produces a list with several components. The two numbers in the delta vector contain the cross-validation results. In this case the numbers are identical (up to two decimal places) and correspond to the LOOCV statistic given in (5.1). Below, we discuss a situation in which the two numbers differ. Our cross-validation estimate for the test error is approximately 24.23. We can repeat this procedure for increasingly complex polynomial fits. To automate the process, we use the for() function to initiate a for loop which iteratively fits polynomial regressions for polynomials of order i = 1 to i = 5, computes the associated cross-validation error, and stores it in the ith element of the vector cv.error. We begin by initializing the vector. This command will likely take a couple of minutes to run.
###Code
cv.error =rep(0 ,5)
for(i in 1:5){
glm.fit =glm(mpg~poly(horsepower, i),data=Auto)
cv.error[i]=cv.glm(Auto,glm.fit)$delta[1]
}
cv.error
###Output
_____no_output_____
###Markdown
Here we see a sharp drop in the estimated test MSE between the linear and quadratic fits, but then no clear improvement from using higher-order polynomials. 3. k-Fold Cross-Validation. The cv.glm() function can also be used to implement k-fold CV. Below we use k = 10, a common choice for k, on the Auto data set. We once again set a random seed and initialize a vector in which we will store the CV errors corresponding to the polynomial fits of orders one to ten.
###Code
set.seed(17)
cv.error.10= rep(0 ,10)
for(i in 1:10){
glm.fit=glm(mpg~poly(horsepower, i), data=Auto)
cv.error.10[i] = cv.glm(Auto, glm.fit,K=10)$delta[1]
}
cv.error.10
###Output
_____no_output_____
###Markdown
Notice that the computation time is much shorter than that of LOOCV. (In principle, the computation time for LOOCV for a least squares linear model should be faster than for k-fold CV, due to the availability of the formula (5.2) for LOOCV; however, unfortunately the cv.glm() function does not make use of this formula.) We still see little evidence that using cubic or higher-order polynomial terms leads to lower test error than simply using a quadratic fit. We saw in Section 5.3.2 that the two numbers associated with delta are essentially the same when LOOCV is performed. When we instead perform k-fold CV, then the two numbers associated with delta differ slightly. The first is the standard k-fold CV estimate, as in (5.3). The second is a bias-corrected version. On this data set, the two estimates are very similar to each other. 4. The Bootstrap. We illustrate the use of the bootstrap in the simple example of Section 5.2, as well as on an example involving estimating the accuracy of the linear regression model on the Auto data set. Estimating the Accuracy of a Statistic of Interest. One of the great advantages of the bootstrap approach is that it can be applied in almost all situations. No complicated mathematical calculations are required. Performing a bootstrap analysis in R entails only two steps. First, we must create a function that computes the statistic of interest. Second, we use the boot() function, which is part of the boot library, to perform the bootstrap by repeatedly sampling observations from the data set with replacement. The Portfolio data set in the ISLR package is described in Section 5.2. To illustrate the use of the bootstrap on this data, we must first create a function, alpha.fn(), which takes as input the (X, Y) data as well as a vector indicating which observations should be used to estimate α. The function then outputs the estimate for α based on the selected observations.
###Code
alpha.fn=function(data,index){
X = data$X[index]
Y = data$Y[index]
return((var(Y)-cov(X,Y))/(var(X)+var(Y)-2*cov(X,Y)))
}
###Output
_____no_output_____
###Markdown
This function returns, or outputs, an estimate for α based on applying (5.7) to the observations indexed by the argument index. For instance, the following command tells R to estimate α using all 100 observations.
###Code
alpha.fn(Portfolio ,1:100)
###Output
_____no_output_____
###Markdown
The next command uses the sample() function to randomly select 100 observations from the range 1 to 100, with replacement. This is equivalent to constructing a new bootstrap data set and recomputing $\hat{\alpha}$ based on the new data set.
###Code
set.seed(1)
alpha.fn(Portfolio ,sample(100 ,100 , replace =T))
###Output
_____no_output_____
###Markdown
We can implement a bootstrap analysis by performing this command many times, recording all of the corresponding estimates for α, and computing the resulting standard deviation. However, the boot() function automates this approach. Below we produce R = 1,000 bootstrap estimates for α.
###Code
boot(Portfolio,alpha.fn,R=1000)
###Output
_____no_output_____
###Markdown
The final output shows that using the original data, $\hat{\alpha} = 0.5758$, and that the bootstrap estimate for SE($\hat{\alpha}$) is 0.0937. Estimating the Accuracy of a Linear Regression Model. The bootstrap approach can be used to assess the variability of the coefficient estimates and predictions from a statistical learning method. Here we use the bootstrap approach in order to assess the variability of the estimates for $\beta_0$ and $\beta_1$, the intercept and slope terms for the linear regression model that uses horsepower to predict mpg in the Auto data set. We will compare the estimates obtained using the bootstrap to those obtained using the formulas for SE($\hat{\beta}_0$) and SE($\hat{\beta}_1$) described in Section 3.1.2. We first create a simple function, boot.fn(), which takes in the Auto data set as well as a set of indices for the observations, and returns the intercept and slope estimates for the linear regression model. We then apply this function to the full set of 392 observations in order to compute the estimates of $\beta_0$ and $\beta_1$ on the entire data set using the usual linear regression coefficient estimate formulas from Chapter 3. Note that we do not need the { and } at the beginning and end of the function because it is only one line long.
###Code
boot.fn= function (data ,index)
return (coef(lm(mpg~horsepower , data=data , subset = index )))
boot.fn(Auto, 1:392)
###Output
_____no_output_____
###Markdown
The boot.fn() function can also be used in order to create bootstrap estimates for the intercept and slope terms by randomly sampling from among the observations with replacement. Here we give two examples.
###Code
set.seed(1)
boot.fn(Auto,sample(392,392,replace=T))
boot.fn(Auto,sample(392,392, replace=T))
###Output
_____no_output_____
###Markdown
Next, we use the boot() function to compute the standard errors of 1,000 bootstrap estimates for the intercept and slope terms.
###Code
boot(Auto,boot.fn ,1000)
###Output
_____no_output_____
###Markdown
This indicates that the bootstrap estimate for SE($\hat{\beta}_0$) is 0.84, and that the bootstrap estimate for SE($\hat{\beta}_1$) is 0.0073. As discussed in Section 3.1.2, standard formulas can be used to compute the standard errors for the regression coefficients in a linear model. These can be obtained using the summary() function.
###Code
summary(lm(mpg~horsepower ,data =Auto))$coef
###Output
_____no_output_____
###Markdown
The standard error estimates for $\hat{\beta}_0$ and $\hat{\beta}_1$ obtained using the formulas from Section 3.1.2 are 0.717 for the intercept and 0.0064 for the slope. Interestingly, these are somewhat different from the estimates obtained using the bootstrap. Does this indicate a problem with the bootstrap? In fact, it suggests the opposite. Recall that the standard formulas given in Equation 3.8 on page 66 rely on certain assumptions. For example, they depend on the unknown parameter $\sigma^2$, the noise variance. We then estimate $\sigma^2$ using the RSS. Now although the formula for the standard errors does not rely on the linear model being correct, the estimate for $\sigma^2$ does. We see in Figure 3.8 on page 91 that there is a non-linear relationship in the data, and so the residuals from a linear fit will be inflated, and so will $\hat{\sigma}^2$. Secondly, the standard formulas assume (somewhat unrealistically) that the $x_i$ are fixed, and all the variability comes from the variation in the errors $\epsilon_i$. The bootstrap approach does not rely on any of these assumptions, and so it is likely giving a more accurate estimate of the standard errors of $\hat{\beta}_0$ and $\hat{\beta}_1$ than is the summary() function. Below we compute the bootstrap standard error estimates and the standard linear regression estimates that result from fitting the quadratic model to the data. Since this model provides a good fit to the data (Figure 3.8), there is now a better correspondence between the bootstrap estimates and the standard estimates of SE($\hat{\beta}_0$), SE($\hat{\beta}_1$) and SE($\hat{\beta}_2$).
###Code
boot.fn= function (data ,index )
coefficients(lm(mpg~horsepower+I(horsepower^2),
data=data, subset=index))
set.seed(1)
boot(Auto,boot.fn,1000)
summary(lm(mpg~horsepower +I(horsepower^2) ,data= Auto))$coef
###Output
_____no_output_____ |
cython/notebooks/Cython2.ipynb | ###Markdown
Additional Cython Features Automatic Type Inference Using Cython's `infer_types`
###Code
import numpy as np
from random import random
import Cython
%load_ext Cython
%%cython -a
from random import random
from cython cimport infer_types
cdef inline double my_rand():
return random()
@infer_types(True)
cpdef pi_mc_inferred(n=1000):
'''Calculate PI using Monte Carlo method'''
in_circle = 0
for i in range(n):
x = my_rand()
y = my_rand()
if x * x + y * y <= 1.0:
in_circle += 1
return 4.0 * in_circle / n
%time pi_mc_inferred(10000000)
###Output
_____no_output_____
###Markdown
Cython Extensions Types
###Code
class PyRectangle:
def __init__(self, x, y):
self.x = x
self.y = y
def area(self):
return self.x * self.y
def perimeter(self):
return 2.0 * (self.x + self.y)
%%cython
cdef class CyRectangle:
cdef:
double x, y
def __cinit__(self, x, y):
self.x = x
self.y = y
cpdef double area(self):
return self.x * self.y
cpdef double perimeter(self):
return 2.0 * (self.x + self.y)
a = CyRectangle(1, 2)
print(a.area(), a.perimeter())
%%cython
from random import random
cdef class CyRectangle:
cdef:
double x, y
def __cinit__(self, x, y):
self.x = x
self.y = y
cpdef double area(self):
return self.x * self.y
cpdef double perimeter(self):
return 2.0 * (self.x + self.y)
cdef class CyRectangles:
cdef:
list rectangles
def __cinit__(self, int n):
cdef unsigned int i
self.rectangles = []
for i in range(n):
self.rectangles.append(CyRectangle(random(), random()))
cpdef double total_area(self):
cdef CyRectangle rect
cdef double area = 0.0
for rect in self.rectangles:
area += rect.area()
return area
a = CyRectangles(100000)
a.total_area()
###Output
_____no_output_____
###Markdown
C-like Allocation/Deallocation
###Code
%%cython
from libc.stdlib cimport malloc, free
cdef class CyRangeVector:
cdef:
int *data
int size
def __cinit__(self, int start, int end):
cdef unsigned int i
if start >= end:
raise Exception(f'{start} >= {end}')
self.size = end - start
self.data = <int*>malloc(self.size * sizeof(int))
for i in range(start, end):
self.data[i - start] = i
def __getitem__(self, int i):
if i >= self.size or i < 0:
return -1
return self.data[i]
def __dealloc__(self):
free(self.data)
my_range = CyRangeVector(10, 11000)
my_range[2]
###Output
_____no_output_____
###Markdown
Interacting with the C++ Standard Template LibraryAs long as we start using the C++ STL from inside Cython we have to switch to `language=c++`
###Code
%%cython
# distutils: language=c++
from libcpp.vector cimport vector
cdef class CyRangeVector:
cdef:
vector[int] data
def __cinit__(self, int start, int end):
cdef unsigned int i
if start >= end:
raise Exception(f'{start} >= {end}')
for i in range(start, end):
self.data.push_back(i)
def __getitem__(self, int i):
if i >= self.data.size() or i < 0:
return None
return self.data[i]
v = CyRangeVector(1, 20)
print(v[1])
%%cython
# distutils: language=c++
from libcpp.vector cimport vector
cpdef vector[int] cy_range(int start, int end):
cdef vector[int] v
cdef unsigned int i
for i in range(start, end):
v.push_back(i)
return v
x = cy_range(1, 10)
print(x, type(x))
###Output
_____no_output_____
notebooks/36. Fix ambiguous acceptor.ipynb | ###Markdown
IntroductionPreviously we observed that both the ACEDIA and SELCYS reactions have an ambiguosly defined electron acceptor associated to them. Here I will look further into each reaction, and per Ben's advice solve the issues with mass balance here.
###Code
import cameo
import pandas as pd
import cobra.io
from cobra import Reaction, Metabolite
model = cobra.io.read_sbml_model('../model/p-thermo.xml')
###Output
_____no_output_____
###Markdown
ACEDIAThis reaction converts acetolactate to diacetyl and co2 in a spontaneous reaction. It is associated with an electron acceptor but there is no information about which acceptor this may be. Therefore, Ben recommended that we keep the acceptor as an ambiguous entity, and allow it to accept electrons in this reaction.Then we can add a paired reaction where the acceptor donates its electrons and proton to NAD, regenerating NADH. For this specific reaction we chose this co-factor as it is the most logical considering the role this reaction plays. If ever more information about this reaction appears one can change it.Also it should be noted that whenever a pathway is analyzed that relies on the end of this pathway, to investigate what the effect would be of removing the cofactor requirement at all. To analyze the role of this in cofactor (re)generation.
###Code
model.add_metabolites(Metabolite(id='acc_c'))
model.metabolites.acc_c.name = 'Acceptor'
model.metabolites.acc_c.formula = 'R'
model.metabolites.acc_c.compartment = 'c'
model.metabolites.acc_c.charge = 0
model.metabolites.acc_c.annotation['kegg.compound'] = 'C00028'
model.metabolites.acc_c.annotation['chebi'] = 'CHEBI:15339'
model.metabolites.hacc_c.formula = 'HR'
model.metabolites.hacc_c.charge = -1
model.metabolites.hacc_c.annotation['kegg.compound'] = 'C00030'
model.metabolites.hacc_c.annotation['chebi'] = 'CHEBI:17499'
model.reactions.ACEDIA.add_metabolites({
model.metabolites.acc_c:-1,
model.metabolites.h_c: -1
})
#make irreversible as it is spontaneous decarboxylation
model.reactions.ACEDIA.bounds = (0,1000)
#Add regeneration reactions
model.add_reaction(Reaction(id='ACCR'))
model.reactions.ACCR.name = 'acceptor regeneration'
model.reactions.ACCR.notes['NOTES'] = 'Assumed regeneration with NADH'
model.reactions.ACCR.annotation['sbo'] = 'SBO:0000176'
model.reactions.ACCR.add_metabolites({
model.metabolites.hacc_c: -1,
model.metabolites.acc_c:1,
model.metabolites.nad_c:-1,
model.metabolites.nadh_c:1
})
#save&commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
###Output
_____no_output_____
###Markdown
SELCYSLY the SELCYSLY reaction has a similar issue to the ACEDIA reaction. In Kegg it is annotated with an ambiguous electron acceptor. Pyridoxal-phosphate is believed to act as a cofactor catalyst. Therefore it should not be present in the reaction. Instead it needs the input of some electrons.There is no real proof of what co-factor is used here. However as it requires electron input it would be more realistic to couple this reaction to NADP(H) consumption. (As NADPH is the general donor of reducing equivalents.)To solve this problem i will add another ambiguous electron acceptor pair that will be regenerated. I will not use the same because we want to prevent the model from unintentionally coupling the ACEDIA and SELCYSLY reactions together.Again, when you are particularly analyzing this pathway for a specific purpose, always check the effect of removing the co-factors. This would show you if this reaction is responsible for some unrealistic production or consumption of cofactors.
###Code
model = cobra.io.read_sbml_model('../model/p-thermo.xml')
model.add_metabolites(Metabolite(id='acc2_c'))
model.metabolites.acc2_c.name = 'Acceptor 2'
model.metabolites.acc2_c.formula = 'R'
model.metabolites.acc2_c.compartment = 'c'
model.metabolites.acc2_c.charge = 0
model.metabolites.acc2_c.annotation['kegg.compound'] = 'C00028'
model.metabolites.acc2_c.annotation['chebi'] = 'CHEBI:15339'
model.add_metabolites(Metabolite(id='hacc2_c'))
model.metabolites.hacc2_c.name = 'Hydrogen-Acceptor 2'
model.metabolites.hacc2_c.formula = 'HR'
model.metabolites.hacc2_c.compartment = 'c'
model.metabolites.hacc2_c.charge = -1
model.metabolites.hacc2_c.annotation['kegg.compound'] = 'C00030'
model.metabolites.hacc2_c.annotation['chebi'] = 'CHEBI:17499'
model.reactions.SELCYSLY.add_metabolites({
model.metabolites.hacc2_c:-1,
model.metabolites.acc2_c:1,
model.metabolites.h_c:-1
})
#add regeneration reaction
model.add_reaction(Reaction(id='ACCR2'))
model.reactions.ACCR2.name = 'acceptor regeneration variant 2'
model.reactions.ACCR2.notes['NOTES'] = 'Assumed regeneration with NADPH'
model.reactions.ACCR2.annotation['sbo'] = 'SBO:0000176'
model.reactions.ACCR2.add_metabolites({
model.metabolites.acc2_c:-1,
model.metabolites.hacc2_c: 1,
model.metabolites.nadph_c: -1,
model.metabolites.nadp_c:1
})
#save&commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
###Output
_____no_output_____ |
notebooks/CF Tracker.ipynb | ###Markdown
Correlation Filter (CF) based Tracker This tracker is a first initial implementation of the ideas describes in the following 3 papers regarding template tracking using adaptive correlation filters:- David S. Bolme, J. Ross Beveridge, Bruce A. Draper and Yui Man Lui. "Visual Object Tracking using Adaptive Correlation Filters". CVPR, 2010- Hamed Kiani Galoogahi, Terence Sim, Simon Lucey. "Multi-Channel Correlation Filters". ICCV, 2013.- J. F. Henriques, R. Caseiro, P. Martins, J. Batista. "High-Speed Tracking with Kernelized Correlation Filters". TPAMI, 2015. Load and manipulate basket ball video Read, pre-process and store a particular number of frames of the provided basket ball video.
###Code
video_path = '../data/video.mp4'
cam = cv2.VideoCapture(video_path)
print 'Is video capture opened?', cam.isOpened()
n_frames = 500
resolution = (640, 360)
frames = []
for _ in range(n_frames):
# read frame
frame = cam.read()[1]
# scale down
frame = cv2.resize(frame, resolution)
# bgr to rgb
frame = frame[..., ::-1]
# pixel values from 0 to 1
frame = np.require(frame, dtype=np.double)
frame /= 255
# roll channel axis to the front
frame = np.rollaxis(frame, -1)
# build menpo image and turn it to grayscale
frame = Image(frame)
# append to frame list
frames.append(frame)
cam.release()
visualize_images(frames)
###Output
_____no_output_____
###Markdown
Define the position and size of the target on the first frame. Note that we need to do this manually!
###Code
# first frame
frame0 = frames[0]
# manually define target centre
target_centre0 = PointCloud(np.array([168.0, 232.0])[None])
# manually define target size
target_shape = (31.0, 31.0)
# build bounding box containing the target
target_bb = generate_bounding_box(target_centre0, target_shape)
# add target centre and bounding box as frame landmarks
frame0.landmarks['target_centre'] = target_centre0
frame0.landmarks['target_bb'] = target_bb
# visualize initialization
frame0.view_widget()
###Output
_____no_output_____
###Markdown
Track basket ball video Create and initialize the correlation filter based tracker by giving it the first frame and the target position and size on the first frame.
###Code
# set options
# specify the kind of filters to be learned and incremented
learn_filter = learn_mccf # learn_mosse or learn_mccf
increment_filter = increment_mccf # increment_mosse or increment_mccf; should match with the previous learn filter!
# specify image representation used for tracking
features = no_op # no_op, greyscale, greyscale_hog
tracker = CFTracker(frame0, target_centre0, target_shape, learn_filter=learn_filter,
increment_filter=increment_filter, features=features)
###Output
_____no_output_____
###Markdown
Visualize the learned correlation filters.
###Code
# only the up to the first 5 channels are shown
n_channels = np.minimum(5, tracker.filter.shape[0])
fig_size = (3*n_channels, 3*n_channels)
fig = plt.figure()
fig.set_size_inches(fig_size)
for j, c in enumerate(tracker.filter[:n_channels]):
plt.subplot(1, n_channels, j+1)
plt.title('CF in spatial domain')
plt.imshow(tracker.filter[j])
fig = plt.figure()
fig.set_size_inches(fig_size)
for j, c in enumerate(tracker.filter[:n_channels]):
plt.subplot(1, n_channels, j+1)
plt.title('CF in frequency domain')
plt.imshow(np.abs(fftshift(fft2(tracker.filter[j]))))
###Output
_____no_output_____
###Markdown
Track the previous frames.
###Code
# set options
# filter adaptive parameter; values close to 0 give more weight to filters derive from the last tracked frames,
# values close to 0 give more weight to the initial filter
nu = 0.125
# specifies a threshold on the peak to sidelobe measure below which there is too much uncertainty wrt the target
# position and consequently filters are not updated based on the current frame
psr_threshold = 5
# specifies how the next target position is obtained given the filter response
compute_peak = compute_max_peak # compute_max_peak or compute_meanshift_peak
target_centre = target_centre0
filters = []
targets = []
psrs = []
rs = []
for j, frame in enumerate(frames):
# track target
target_centre, psr, r = tracker.track(frame, target_centre, nu=nu,
psr_threshold=psr_threshold,
compute_peak=compute_peak)
# add target centre and its bounding box as landmarks
frame.landmarks['tracked_centre'] = target_centre
frame.landmarks['tracked_bb'] = generate_bounding_box(target_centre, target_shape)
# add psr to list
psrs.append(psr)
rs.append(r)
# print j
###Output
_____no_output_____
###Markdown
Explore tracked frames.
###Code
visualize_images(frames)
###Output
_____no_output_____
###Markdown
Show peak to sidelobe ratio (PSR) over the entire sequence.
###Code
plt.title('Peak to sidelobe ratio (PSR)')
plt.plot(range(len(psrs)), psrs)
###Output
_____no_output_____ |
.ipynb_checkpoints/5_Use_NaiveBayes_to_NewsText_Classifier-checkpoint.ipynb | ###Markdown
1. Module Imports
###Code
#coding:utf-8
#python3
import os
import time
import random
import jieba
import sklearn
from sklearn.naive_bayes import MultinomialNB
import numpy as np
import pylab as pl
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
2. Remove duplicate words from the stopwords file
###Code
def remove_deplicated_words(words):
    words_set = set() # a set keeps only unique elements
with open(words,'r',encoding = 'utf-8') as readwords:
for line in readwords.readlines():
word = line.strip() # word is string
            if len(word)>0 and word not in words_set: # add the word only if it is not already in the set
words_set.add(word)
return words_set
#folder_path = r'C:\AI\github\Ai_Lab_\Ai_Lab_NLP\data\Database\SogouC\Sample'
folder_path = './data/Database/SogouC/Sample'
folder_list = os.listdir(folder_path) # listdir lists all files and folders under the given path
data_list = []
label_list = []
#print(folder_list)
index = 0
for folder in folder_list:
new_folder_path = os.path.join(folder_path,folder)
    files = os.listdir(new_folder_path) # read every file inside this folder
index += 1
label_list.append(folder)
#print("index = %s: files= %s: it's label = %s " %(index,files,label_list) )
import os
path = "F:/gts/gtsdate/"
b = os.path.join(path,"abc")
b
###Output
_____no_output_____
###Markdown
3. Text processing - generating training and test samples
###Code
def Examples_Generated(folder_path,test_dataset_percentage = 0.20):
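    """
    Read every document under folder_path, segment it with jieba, and split the
    corpus into train/test sets. Returns the vocabulary sorted by frequency,
    the train/test data and their corresponding labels.
    """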
    folder_list = os.listdir(folder_path) # listdir - lists all files and folders under the given path
data_list = []
label_list = []
for folder in folder_list:
new_folder_path = os.path.join(folder_path,folder)
        files = os.listdir(new_folder_path) # list all files under this folder
# Reading Files
jindex = 1 # index initialization
for file in files:
            # cap the number of files read per folder to avoid memory issues
            if jindex >= 100:
                print("jindex >= 100, break now!")
                break
            jindex += 1
            # read the file from its full path
with open(os.path.join(new_folder_path,file),'r',encoding = 'utf-8') as open_handel:
read_rawdata = open_handel.read()
                # jieba cut - word segmentation
                rawdata_cut = jieba.cut(read_rawdata,cut_all = False) # cut_all=False uses the precise segmentation mode
rawdata_cut_list = list(rawdata_cut)
data_list.append(rawdata_cut_list) # data
label_list.append(folder) # folder name is the label of the data in this folder
# label_list.append(folder.decode('utf-8'))
    # manually divide the dataset into train and test sets
    data_label_list = list(zip(data_list,label_list))# combine the two lists into a list of (data, label) tuples
    random.shuffle(data_label_list) # shuffle the samples
    # manual split point - index_boundary
index_boundary = int(len(data_label_list)* test_dataset_percentage)
test_dataset = data_label_list[0:index_boundary]
train_dataset = data_label_list[index_boundary: ]
    # use zip(*) to unpack the combined list back into two separate lists: one for data, one for labels
test_data,test_data_lable = list(zip(*test_dataset))
train_data,train_data_lable = list(zip(*train_dataset))
# transform to list type
train_data_lable = list(train_data_lable)
test_data_lable = list(test_data_lable)
test_data = list(test_data)
train_data = list(train_data)
# using sklearn train_test_split to split train and test data
#from sklearn.model_selection import train_test_split
#x_trian,x_test,y_train,y_test = train_test_split(train_dataset,test_dataset,random_state = 200)
    # count word frequencies into statics_wordsfrequency_dict
statics_wordsfrequency_dict = {}
for word in train_data: # every single 'word' in train_data
#print("word = ", word)
for character in word: # every single 'character' in word
#print("character = ", character)
if character in statics_wordsfrequency_dict: # python3.x deleted has_key, has_key only used in python 2.x
                statics_wordsfrequency_dict[character] += 1 # dict[key] = value assignment
else:
statics_wordsfrequency_dict[character] = 1
    # key = lambda x:x[1] - sort by each element's second value (the count); reverse=True gives descending order
order_statics_list = sorted(statics_wordsfrequency_dict.items(),key = lambda x:x[1], reverse =True)
    all_words_list = list(list(zip(*order_statics_list))[0]) # zip(*) returns an iterator in python 3.x, so convert to list
    # return the results we need
return all_words_list, train_data, test_data, train_data_lable, test_data_lable
###Output
_____no_output_____
###Markdown
4. Extract feature words
###Code
def feature_extraction (all_words_list,deletN, stopwords_set = set() ):
# feature extraction
feature_words = []
n = 1 # initialization n index
for t in range(deletN,len(all_words_list),1):
        if n>10000: # cap on the dimensionality of feature_words
            print("n is bigger than 10000, break!")
            break
        # isdigit() returns True if the string consists only of digits
if not all_words_list[t].isdigit() and all_words_list[t] not in stopwords_set and 1< len(all_words_list[t])<5:
feature_words.append(all_words_list[t])
n += 1
return feature_words
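# example usage (mirrors the call made later in this notebook):
# feature_words = feature_extraction(all_words_list, 1000, stopwords_set)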
###Output
_____no_output_____
###Markdown
5. Text features
###Code
def text_features(train_data,test_data,feature_words,flag = 'nltk'):
def text_features(text,feature_words):
text_words = set(text)
##---------
if flag == 'nltk':
features = {word: 1 if word in text_words else 0 for word in feature_words }
elif flag == 'sklearn':
features = [1 if word in text_words else 0 for word in feature_words]
else:
features = []
##---------
return features
train_feature_list = [text_features(text,feature_words) for text in train_data]
test_feature_list = [text_features(text,feature_words) for text in test_data]
return train_feature_list, test_feature_list
###Output
_____no_output_____
###Markdown
6. Train the model and output its accuracy
###Code
def text_model_classificaton(train_feature_list,test_feature_list,train_data_lable,test_data_lable,flag = 'nltk'):
## -----
if flag == 'nltk':
        ## use the nltk classifier
train_list = zip(train_feature_list,train_data_lable)
test_list = zip(test_feature_list,test_data_lable)
classifier = nltk.classify.NaiveBayesClassifier.train(train_list) # NaiveBayesClassifier
accuracy_test = nltk.classify.accuracy(classifier,test_list)
# -----
elif flag == 'sklearn':
#classifier = MultinomialNB()
print("train_feature_list is ",type(train_feature_list))
print("train_data_lable is ",type(train_data_lable))
classifier = MultinomialNB().fit(train_feature_list,train_data_lable)
accuracy_test = classifier.score(test_feature_list,test_data_lable)
else:
accuracy_test = []
return accuracy_test
train_data = ['thshesieeiiiiie ','hedd','thisedusiie']
statics_wordsfrequency_dict = {}
for word in train_data: # every single 'word' in train_data
#print("word = ", word)
for character in word: # every single 'character' in word
#print("character = ", character)
if character in statics_wordsfrequency_dict: # python3.x deleted has_key, has_key only used in python 2.x
            statics_wordsfrequency_dict[character] += 1 # dict[key] = value assignment
else:
statics_wordsfrequency_dict[character] = 1
# key = lambda x:x[1] - sort by each element's second value (the count); reverse=True gives descending order
order_statics_list = sorted(statics_wordsfrequency_dict.items(),key = lambda x:x[1], reverse =True) #
#statics_wordsfrequency_dict
e = list(list(zip(*order_statics_list))[0])
###Output
_____no_output_____
###Markdown
1 - 6 Processing the Data....
###Code
import nltk
print("Starting ... ")
# Text pre-processing
folder_path = "./data/Database/SogouC/Sample"
all_words_list,train_data,test_data,train_data_lable,test_data_lable = Examples_Generated(folder_path,test_dataset_percentage = 0.20)
print(len(all_words_list))
print(type(train_data))
print(type(test_data))
print(type(train_data_lable))
print(type(test_data_lable))
# Generated stop words
stopwords_txtfile = './data/stopwords_cn_in5NavBayesTextClassifier.txt'
stopwords_set = remove_deplicated_words(stopwords_txtfile)
# Text Extraction and Classification
flag = 'sklearn'
deleteN = 1000
accuracy_test_list = []
#for deleteN in deleteNs:
feature_words = feature_extraction(all_words_list,deleteN,stopwords_set)
train_feature_list, test_feature_list = text_features(train_data,test_data,feature_words)
import numpy as np
"""print(np.array(train_feature_list).reshape(-1,1))
print(np.array(train_data_lable).shape)
print(np.array(test_feature_list).shape)
print(np.array(test_data_lable).shape)
accuracy_test = text_model_classificaton(train_feature_list,test_feature_list,train_data_lable,test_data_lable,flag)
accuracy_test_list.append(accuracy_test)
print("Accuracy score is :" , accuracy_test_list)"""
# Result
'''plt.figure()
plt.plot(deleteNs,accuracy_test_list)
plt.title("relationship between deleteNs and accuracy_test_list")
plt.xlabel("deleteNs")
plt.ylabel("accuracy_test_list")
plt.show()
print("Completed!")'''
import numpy as np
dssec= [[1,4,43],[12,3,4]]
#dssec.reshape(1,-1)
dssecreshape = np.array(dssec).reshape(1,-1)
print(np.array(dssec).reshape(1,-1))
print("dssecreshape = ", dssecreshape)
###Output
_____no_output_____ |
intro_to_ml/machine_learning.ipynb | ###Markdown
Scenario:

Mobile carrier Megaline has 2 newer plans, Smart or Ultra, but many of their subscribers still use a legacy plan. As analysts for Megaline, we've been asked to create a machine learning model that recommends an appropriate plan based on data about the behavior of those subscribers who've already switched. Accuracy counts. Our model needs an **accuracy >= 75%**.

This is a classification task because our **target (is_ultra)** is categorical: Ultra - 1, Smart - 0.

Our plan:
- download the data
- investigate the data (it should already be preprocessed)
- split the data into train, validation, and test data sets
- create models / test different hyperparameters
- check the accuracy using the test data set
- sanity check the model
- discuss findings

Because this is a business classification task where accuracy is most important, we will start with the Random Forest Classifier and test other models if needed.

Our question becomes: Can we predict which plan to recommend based on the behavior of users who've switched to one of the new plans?
###Code
# import libraries
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import mean_squared_error
from sklearn.dummy import DummyClassifier
# import sys and insert code to ignore warnings
import sys
if not sys.warnoptions:
import warnings
warnings.simplefilter("ignore")
# load the data
try:
df = pd.read_csv('/datasets/users_behavior.csv')
except:
print('ERROR: Unable to find or access file.')
df.head()
# check info
df.info()
# check for na values
df.isna().sum()
# check for duplicates
df[df.duplicated()]
df.shape
###Output
_____no_output_____
###Markdown
Data description: No missing values, duplicate rows, or other issues noted across the 5 columns and 3214 rows.
- calls — number of calls
- minutes — total call duration in minutes
- messages — number of text messages
- mb_used — Internet traffic used in MB
- is_ultra — plan for the current month (Ultra - 1, Smart - 0)
###Code
# split data into train, valid, test data sets (3:1:1)
# first split train test into df_train, df_valid, then divide into df_train, df_test
df_train, df_valid = train_test_split(df, test_size=0.2, random_state=12345)
# print(len(df), len(df_train), len(df_valid))
df_train, df_test = train_test_split(df_train, test_size=0.25, random_state=12345)
print('Verify sizes of newly divided dataframes\n')
print('train valid test\n')
print(len(df_train), len(df_valid), len(df_test))
print('\nCalculate means of is_ultra in each data set')
print('train valid test\n')
print(df_train.is_ultra.mean(), df_valid.is_ultra.mean(), df_test.is_ultra.mean())
###Output
Verify sizes of newly divided dataframes
train valid test
1928 643 643
Calculate means of is_ultra in each data set
train valid test
0.30549792531120334 0.3048211508553655 0.3110419906687403
###Markdown
Our original data frame is divided into 3 new data frames with a ratio of train(3):valid(1):test(1). In other words, 60% of the sample is in the train data set, 20% in the valid and 20% in the test. We also note that in each data set around 30% of the users have the Ultra plan. This distribution verifies that the df dataset has been divided appropriately, at least as far as is_ultra is concerned.
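If we wanted to enforce this class balance explicitly rather than rely on a purely random split, one option is a stratified split. Below is a minimal sketch (our own illustration, not used in the rest of the notebook; the `stratify` argument and the `_s` variable names are our additions):
###Code
# illustrative sketch only: stratify on the target so each split keeps roughly the same share of Ultra users
df_train_s, df_valid_s = train_test_split(
    df, test_size=0.2, stratify=df['is_ultra'], random_state=12345)
print(df_train_s.is_ultra.mean(), df_valid_s.is_ultra.mean())
###Output
_____no_output_____
###Markdown
The rest of the analysis keeps the original random split; the stratified version above is shown only for reference.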
###Code
# create features dfs where is_ultra, the target is dropped
# create target dfs with only is_ultra
print('Verify rows and columns of train and valid sets\n')
features_train = df_train.drop(['is_ultra'], axis=1)
target_train = df_train['is_ultra']
print('features_train', features_train.shape)
print('target_train', target_train.shape)
features_valid = df_valid.drop(['is_ultra'], axis=1)
target_valid = df_valid['is_ultra']
print('features_valid', features_valid.shape)
print('target_valid', target_valid.shape)
features_test = df_test.drop(['is_ultra'], axis=1)
target_test = df_test['is_ultra']
print('features_test', features_test.shape)
print('target_test', target_test.shape)
# create random forest classifier model
# create loop for n_estimators
print('Accuracy for random forest classifier model\n')
print('n_estimators accuracy')
# set up list for accuracy score
accuracy_list = []
# find the accuracy score when n_estimators is between 1 and 100
for n in range(1, 101):
# notice need random_state=12345 here
model = RandomForestClassifier(random_state=12345, n_estimators = n)
# train the model/fit model
model.fit(features_train, target_train)
# find the predictions using validation set
# notice not using score...
predictions_valid = model.predict(features_valid)
# calculate accuracy score
acc_score = accuracy_score(target_valid, predictions_valid)
# print n value and accuracy score
print("n_estimators =", n, ": ", acc_score)
# add n value and accuracy score to list
accuracy_list.append(acc_score)
# find the max n_estimator and save it as best_n_estimator
max_accuracy = max(accuracy_list)
# add one to calculation because index begins at 0
best_n_estimator = accuracy_list.index(max_accuracy) + 1
# print n_estimator and accuracy score
print("The best performing n_estimators =", best_n_estimator, ": ", max_accuracy)
print('')
print('Our first choice to make this model is the random forest classifier because '
'of the high accuracy. We create a loop to run through n_estimators between 1 and 100. '
'We note the accuracy score is generally 78% to 79%. \nThe best result occurs when the '
'n-estimators =', best_n_estimator, 'with an accuracy of: {:.2%}'.format(max_accuracy))
print('We will use this n_estimators for a final test.')
# test random forest classifier model using best result
# and compare with train data set, test data set
# notice need random_state=12345 here
model = RandomForestClassifier(random_state=12345, n_estimators = best_n_estimator)
# train the model/fit model
model.fit(features_train, target_train)
# find the predictions using validation set
predictions_valid = model.predict(features_valid)
valid_accuracy = accuracy_score(target_valid, predictions_valid)
predictions_train = model.predict(features_train)
predictions_test = model.predict(features_test)
# write code for training set calculations here
accuracy = accuracy_score(target_train, predictions_train)
# write code for test set calculations here
test_accuracy = accuracy_score(target_test, predictions_test)
print('Accuracy\n')
print('Validation set: {:.2%}'.format(valid_accuracy))
print('Training set: {:.2%}'.format(accuracy))
print('Test set: {:.2%}'.format(test_accuracy))
###Output
Accuracy
Validation set: 79.32%
Training set: 99.95%
Test set: 79.78%
###Markdown
As we expect, the model scores almost 100% on the training set. Both the validation set and the test set are over 75%, our threshold, so this may be a good choice for a model to use. However, we would also like to examine the decision tree classifier model (generally known for lower accuracy but greater speed) and the logistic regression model (known for medium accuracy).
###Code
# create decision tree classifier model
# create loop for max_depth
print('Accuracy for decision tree classifier model\n')
print('max_depth accuracy')
# set up list for accuracy score
accuracy_list = []
for depth in range(1, 21):
# create a model, specify max_depth=depth
# notice need random_state=12345 here
model = DecisionTreeClassifier(random_state=12345, max_depth = depth)
# train the model/fit model
model.fit(features_train, target_train)
# find the predictions using validation set
# notice not using score...
predictions_valid = model.predict(features_valid)
# calculate accuracy score
acc_score = accuracy_score(target_valid, predictions_valid)
# print n value and accuracy score
print("max_depth =", depth, ": ", acc_score)
# add n value and accuracy score to list
accuracy_list.append(acc_score)
# find the max depth and save it as best_max_depth
max_accuracy = max(accuracy_list)
# add one to calculation because index begins at 0
best_max_depth = accuracy_list.index(max_accuracy) + 1
# print best max depth and accuracy score
print("The best performing max_depth =", best_max_depth, ": ", max_accuracy)
print('We create a loop to run through max_depths between 1 and 20 for the decision tree classifier. '
'We note the accuracy score peaks around 78%. \nThe best result occurs when the '
      'max_depth =', best_max_depth, 'with an accuracy of: {:.2%}'.format(max_accuracy))
print('We will use this best_max_depth for a final test.')
# test decision tree classifier model using the best max_depth found above
# and compare with train data set, test data set
# notice need random_state=12345 here
model = DecisionTreeClassifier(random_state=12345, max_depth = best_max_depth)
# train the model/fit model
model.fit(features_train, target_train)
# find the predictions using validation set
predictions_valid = model.predict(features_valid)
valid_accuracy = accuracy_score(target_valid, predictions_valid)
predictions_train = model.predict(features_train)
predictions_test = model.predict(features_test)
# write code for training set calculations here
accuracy = accuracy_score(target_train, predictions_train)
# write code for test set calculations here
test_accuracy = accuracy_score(target_test, predictions_test)
print('Accuracy\n')
print('Validation set: {:.2%}'.format(valid_accuracy))
print('Training set: {:.2%}'.format(accuracy))
print('Test set: {:.2%}'.format(test_accuracy))
###Output
Accuracy
Validation set: 78.85%
Training set: 82.73%
Test set: 75.89%
###Markdown
Once again we note the highest accuracy is for the training set, but it is far less than the 99% of the random forest classifier. Even though the validation and test sets are over 75%, we still believe the best model is the random forest classifier. Finally, we will check out the logistic regression model.
###Code
# create logistic regression model
model = LogisticRegression(random_state=12345, solver='liblinear')
# train the model/fit model
model.fit(features_train, target_train)
# find the predictions using validation set
# notice not using score...
predictions_valid = model.predict(features_valid)
# train the model/fit model
model.fit(features_train, target_train)
# find the predictions using validation set
predictions_valid = model.predict(features_valid)
valid_accuracy = accuracy_score(target_valid, predictions_valid)
predictions_train = model.predict(features_train)
predictions_test = model.predict(features_test)
# write code for training set calculations here
accuracy = accuracy_score(target_train, predictions_train)
# write code for test set calculations here
test_accuracy = accuracy_score(target_test, predictions_test)
print('Accuracy\n')
print('Validation set: {:.2%}'.format(valid_accuracy))
print('Training set: {:.2%}'.format(accuracy))
print('Test set: {:.2%}'.format(test_accuracy))
###Output
Accuracy
Validation set: 70.30%
Training set: 70.38%
Test set: 69.67%
###Markdown
The results of the logistic regression model are disappointing and don't even reach our 75% threshold.We recommend the RandomForestClassifier model using the best performing n_estimators value. We will perform a sanity check on the selected test data below:
###Code
# sanity check the test data
# we are using the test data, divided and filtered as below:
# features_test = df_test.drop(['is_ultra'], axis=1)
# target_test = df_test['is_ultra']
dummy_clf = DummyClassifier(strategy="most_frequent")
dummy_clf.fit(features_test, target_test)
dummy_clf.predict(features_test)
dummy_clf.score(features_test, target_test)
sanity_score = dummy_clf.score(features_test, target_test)
print('Sanity check of test data: {:.2%}'.format(sanity_score))
print('The RandomForestClassifier (random_state=12345, n_estimators =', best_n_estimator,') '
'reliably (over 75% of the time) predicts which plan to recommend based on the behavior '
'of users who\'ve switched to one of the new plans. \n\nOur selected model passes '
'the sanity check when we use the dummy classifier to determine the percent correct '
      'by chance alone for this classification/categorical problem.'
'\n\nOur score, {:.2%}'.format(max_accuracy), 'is greater than the '
'sanity score {:.2%}'.format(sanity_score))
###Output
The RandomForestClassifier (random_state=12345, n_estimators = 84 ) reliably (over 75% of the time) predicts which plan to recommend based on the behavior of users who've switched to one of the new plans.
Our selected model passes the sanity check when we use the dummy classifier to determine the percent correct by chance alone for this classification/categorical problem.
Our score, 78.85% is greater than the sanity score 68.90%
###Markdown
References
- [Ways to divide a data set in 3 proportions](https://stackoverflow.com/questions/38250710/how-to-split-data-into-3-sets-train-validation-and-test)
- DummyClassifier
###Code
# alternative way to divide
# train, valid, test = \
# np.split(df.sample(frac=1, random_state=12345),
# [int(.6*len(df)), int(.8*len(df))])
# print(len(train), len(valid), len(test))
# results 1928 643 643
###Output
_____no_output_____ |
recsys/calibration/calibrated_reco.ipynb | ###Markdown
Table of Contents
1 Calibrated Recommendations
 1.1 Preparation
 1.2 Deep Dive Into Calibrated Recommendation
  1.2.1 Calibration Metric
  1.2.2 Generating Calibrated Recommendations
 1.3 End Note
2 Reference
###Code
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(css_style='custom2.css', plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import os
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.sparse import csr_matrix
from implicit.bpr import BayesianPersonalizedRanking
from implicit.evaluation import train_test_split, precision_at_k
%watermark -a 'Ethen' -d -t -v -p scipy,numpy,pandas,matplotlib,implicit
###Output
Ethen 2018-10-17 10:09:55
CPython 3.6.4
IPython 6.4.0
scipy 1.1.0
numpy 1.14.1
pandas 0.23.0
matplotlib 2.2.2
implicit 0.3.8
###Markdown
Calibrated Recommendations

When a user has watched, say, 70% romance movies and 30% action movies in the past, then it is reasonable to expect the personalized list of recommended movies to be comprised of 70% romance and 30% action movies as well, since we would like to cover the user's diverse set of interests. A recommendation that actually reflects most if not all of the user's interests is considered a **Calibrated Recommendation**. But the question is, does our recommendation exhibit this trait?

A recommendation algorithm provides a personalized user experience based on the user's past historical interaction with the product/system/website. However, when serving recommendations, such as the top 10 movies that we think the user might be interested in, a recommendation engine that is solely measured on ranking metrics can easily generate recommendations that focus on the user's main area of interest, leaving the user's other areas of interest under-represented, or worse, absent from the final recommendation.

To drive the notion home, using the example above, given a user that has watched 70% romance movies and 30% action movies, if we were to solely measure the metric based on precision, we could achieve the best performance by predicting the majority genre, i.e. we would recommend 100% romance movies and we can expect the user to interact with those recommendations 70% of the time. On the other hand, if we were to recommend 70% romance movies and 30% action movies, then we would expect our recommendation to only be correct 0.7 * 0.7 + 0.3 * 0.3 = 58% of the time.

Throughout the rest of this notebook, we will take a look at whether the phenomenon of crowding out the user's sub-interests occurs with our recommendation, develop a quantitative metric to measure how severe this issue is, and implement a post-processing logic that is agnostic of the underlying recommendation algorithm to ensure the recommendation becomes more calibrated.

Preparation

We'll be using the publicly available movielens-20m dataset throughout this experiment. We can download it via the following [link](https://www.kaggle.com/grouplens/movielens-20m-dataset). There are multiple files under that folder; we can select download all to make things easier.

The algorithm that we will be using to generate the recommendation is Bayesian Personalized Ranking, which is a matrix factorization based collaborative filtering algorithm. The reader doesn't need to be acquainted with this model per se to continue with the rest of this notebook as the discussion is model-agnostic and we'll be explaining the syntax. That said, this [link](https://github.com/ethen8181/machine-learningrecsys--20161217) contains some resources on this algorithm if it is of interest.

Given the dataset and the algorithm, the preparation steps we'll be doing in the next few code chunks are:
- The raw `rating.csv` contains each user's rating for each movie. Here, we will focus on implicit data, and follow the usual procedure of simulating binary implicit feedback data (i.e. whether the user enjoyed the movie) by retaining only ratings of 4 stars and higher, while dropping lower ratings.
- The raw `movie.csv` contains each movie's genre tags. We will also eliminate movies that had no genre information attached and create a mapping that stores each movie's genre distribution.
In this dataset, each movie $i$ typically has several genres $g$ associated with it, thus we assign equal probabilities $p(g|i)$ to each genre such that $\sum_g p(g|i) = 1$ for each movie $i$. This genre distribution will play a strong role in determining whether our recommendation is well calibrated or not.
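For example (a toy illustration of the rule above), a movie tagged with four genres would get $p(g|i) = 0.25$ for each of those four genres.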
###Code
data_dir = 'movielens-20m-dataset'
# we are working with movie data, but we'll name
# the movie as item to make it more generic to
# all use-cases
user_col = 'userId'
item_col = 'movieId'
value_col = 'rating'
time_col = 'timestamp'
rating_path = os.path.join(data_dir, 'rating.csv')
df_raw = pd.read_csv(rating_path)
print('dimension: ', df_raw.shape)
df_raw.head()
title_col = 'title'
genre_col = 'genres'
item_info_path = os.path.join(data_dir, 'movie.csv')
df_item = pd.read_csv(item_info_path)
df_item = df_item[df_item[genre_col] != '(no genres listed)']
print('dimension: ', df_item.shape)
df_item.head()
class Item:
"""
Data holder for our item.
Parameters
----------
id : int
title : str
genre : dict[str, float]
The item/movie's genre distribution, where the key
represents the genre and value corresponds to the
ratio of that genre.
score : float
Score for the item, potentially generated by some
recommendation algorithm.
"""
def __init__(self, _id, title, genres, score=None):
self.id = _id
self.title = title
self.score = score
self.genres = genres
def __repr__(self):
return self.title
def create_item_mapping(df_item, item_col, title_col, genre_col):
"""Create a dictionary of item id to Item lookup."""
item_mapping = {}
for row in df_item.itertuples():
item_id = getattr(row, item_col)
item_title = getattr(row, title_col)
item_genre = getattr(row, genre_col)
splitted = item_genre.split('|')
genre_ratio = 1. / len(splitted)
item_genre = {genre: genre_ratio for genre in splitted}
item = Item(item_id, item_title, item_genre)
item_mapping[item_id] = item
return item_mapping
item_mapping = create_item_mapping(df_item, item_col, title_col, genre_col)
item_mapping[1]
# convert to implicit feedback data and filter out
# movies that doesn't have any genre
df_rating = df_raw[df_raw[value_col] >= 4.0].copy()
df_rating = df_rating.merge(df_item, on=item_col)
for col in (user_col, item_col):
df_rating[col] = df_rating[col].astype('category')
# the original id are converted to indices to create
# the sparse matrix, so we keep track of the mappings here
# e.g. a userId 1 will correspond to index 0 in our sparse matrix
index2user = df_rating[user_col].cat.categories
index2item = df_rating[item_col].cat.categories
print('dimension: ', df_rating.shape)
df_rating.head()
###Output
dimension: (9995306, 6)
###Markdown
Given this dataframe we will use the `userId`, `movieId` and `rating` to construct a sparse matrix, perform the random train/test split (we can split based on the time if preferred) and feed the training set into a collaborative filtering based algorithm to train the model, so we can generate item recommendations for users.
###Code
def create_user_item_csr_matrix(data, user_col, item_col, value_col):
rows = data[user_col].cat.codes
cols = data[item_col].cat.codes
values = data[value_col].astype(np.float32)
return csr_matrix((values, (rows, cols)))
user_item = create_user_item_csr_matrix(df_rating, user_col, item_col, value_col)
user_item
np.random.seed(1234)
user_item_train, user_item_test = train_test_split(user_item, train_percentage=0.8)
user_item_train
user_item_test
# the model expects item-user sparse matrix,
# i.e. the rows represents item and the column
# represents users
np.random.seed(1234)
bpr = BayesianPersonalizedRanking(iterations=70)
bpr.fit(user_item_train.T.tocsr())
###Output
100%|██████████| 70/70 [01:26<00:00, 1.16s/it, correct=89.30%, skipped=12.78%]
###Markdown
We will look at the precision_at_k metric just to make sure our recommender is reasonable; feel free to tune the model's hyperparameters to squeeze out more performance, but that is not the focus here.
###Code
precision = precision_at_k(bpr, user_item_train, user_item_test, K=10)
precision
###Output
100%|██████████| 138287/138287 [02:07<00:00, 1081.09it/s]
###Markdown
Deep Dive Into Calibrated Recommendation

We will take the first user as an example to see whether our recommendations are calibrated or not. Once we're familiar with the procedure for one user, we can repeat the process for all of the users if we'd like to.

Let's start off by defining the problem. We are given the distribution over genres $g$ for each movie $i$, $p(g|i)$, and what we are interested in is whether $p(g|u)$ is similar to $q(g|u)$. Where:

- $p(g|u)$ is the distribution over genre $g$ of the set of movies $H$ played by user $u$ in the past.

\begin{align}
p(g|u) = \frac{1}{|H|} \sum_{i \in H} p(g|i)
\end{align}

- $q(g|u)$ is the distribution over genre $g$ of the set of movies $I$ we recommended to user $u$.

\begin{align}
q(g|u) = \frac{1}{|I|} \sum_{i \in I} p(g|i)
\end{align}

For these distributions, we can have a weighted version if we'd like to get sophisticated, e.g. $p(g|i)$ can be weighted by recency, saying something like an item/movie interaction matters more if it is a more recent interaction, indicating that the item/movie's genre should also be weighted more, but let's not go there yet.

Let's first look at some code to generate this information.
###Code
# look a the first user
user_id = 0
# find the index that the user interacted with,
# we can then map this to a list of Item, note that we need to first
# map the recommended index to the actual itemId/movieId first
interacted_ids = user_item_train[user_id].nonzero()[1]
interacted_items = [item_mapping[index2item[index]] for index in interacted_ids]
interacted_items[:10]
###Output
_____no_output_____
###Markdown
For the same user, we can use the .recommend method to generate the topn recommendations for him/her. Note that we also passed in the original sparse matrix, and by default, the items/movies that the user has already played will be filtered from the list (controlled by a `filter_already_liked_items` argument, which defaults to `True`).
###Code
# it returns the recommended index and their corresponding score
topn = 20
reco = bpr.recommend(user_id, user_item_train, N=topn)
reco[:10]
# map the index to Item
reco_items = [item_mapping[index2item[index]] for index, _ in reco]
reco_items[:10]
###Output
_____no_output_____
###Markdown
The next code chunk defines a function to obtain the genre distribution for a list of items. Given that we now have the list of interacted items and the list of recommended items, we can pass each of them to the function to obtain the two genre distributions.
###Code
def compute_genre_distr(items):
"""Compute the genre distribution for a given list of Items."""
distr = {}
for item in items:
for genre, score in item.genres.items():
genre_score = distr.get(genre, 0.)
distr[genre] = genre_score + score
# we normalize the summed up probability so it sums up to 1
# and round it to three decimal places, adding more precision
# doesn't add much value and clutters the output
for item, genre_score in distr.items():
normed_genre_score = round(genre_score / len(items), 3)
distr[item] = normed_genre_score
return distr
# we can check that the probability does in fact add up to 1
# np.array(list(interacted_distr.values())).sum()
interacted_distr = compute_genre_distr(interacted_items)
interacted_distr
reco_distr = compute_genre_distr(reco_items)
reco_distr
# change default style figure and font size
plt.rcParams['figure.figsize'] = 10, 8
plt.rcParams['font.size'] = 12
def distr_comparison_plot(interacted_distr, reco_distr, width=0.3):
# the value will automatically be converted to a column with the
# column name of '0'
interacted = pd.DataFrame.from_dict(interacted_distr, orient='index')
reco = pd.DataFrame.from_dict(reco_distr, orient='index')
df = interacted.join(reco, how='outer', lsuffix='_interacted')
n = df.shape[0]
index = np.arange(n)
plt.barh(index, df['0_interacted'], height=width, label='interacted distr')
plt.barh(index + width, df['0'], height=width, label='reco distr')
plt.yticks(index, df.index)
plt.legend(bbox_to_anchor=(1, 0.5))
plt.title('Genre Distribution between User Historical Interaction v.s. Recommendation')
plt.ylabel('Genre')
plt.show()
distr_comparison_plot(interacted_distr, reco_distr)
###Output
_____no_output_____
###Markdown
Calibration Metric

Looking at the results above, we can see that according to $p(g|u)$, the user has interacted with genres such as War and Western, however, they are nowhere to be seen in the topn recommendation to the user, hence we can argue based on the output that our recommendation might not be that well calibrated to the user's past interaction.

To scale this type of comparison, we'll now define our calibration metric $C$. There are various methods to compare whether two distributions are similar to each other, and one popular choice is KL-divergence.

\begin{align}
C(p,q) = D_{KL}(p || q) = \sum_{g} p(g|u) \cdot \log \frac{p(g|u)}{\tilde{q}(g|u)}
\end{align}

The denominator in the formula should be $q(g|u)$, but given that the formula would be undefined if $q(g|u) = 0$ and $p(g|u) > 0$ for a genre $g$, we instead use:

\begin{align}
\tilde{q}(g|u) = (1 - \alpha) \cdot q(g|u) + \alpha \cdot p(g|u)
\end{align}

with a small $\alpha$ such as 0.01, so that $q(g|u) \approx \tilde{q}(g|u)$.
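As a quick sanity check with toy numbers of our own (not taken from the data): if $p = (0.7, 0.3)$ over two genres and the recommendation only covers the majority genre, $q = (1.0, 0.0)$, then with $\alpha = 0.01$ we get $\tilde{q} = (0.997, 0.003)$ and $C(p, q) = 0.7 \log_2 \frac{0.7}{0.997} + 0.3 \log_2 \frac{0.3}{0.003} \approx 1.64$, whereas a perfectly calibrated $q = p$ gives 0, so lower values indicate better calibration.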
###Code
def compute_kl_divergence(interacted_distr, reco_distr, alpha=0.01):
"""
KL (p || q), the lower the better.
alpha is not really a tuning parameter, it's just there to make the
computation more numerically stable.
"""
kl_div = 0.
for genre, score in interacted_distr.items():
reco_score = reco_distr.get(genre, 0.)
reco_score = (1 - alpha) * reco_score + alpha * score
kl_div += score * np.log2(score / reco_score)
return kl_div
compute_kl_divergence(interacted_distr, reco_distr)
###Output
_____no_output_____
###Markdown
Generating Calibrated Recommendations

Being able to compute the calibration metric between $p(g|u)$ and $q(g|u)$ is all well and good, but how we can generate a recommendation list that is more calibrated becomes the next important and interesting question.

Different recommendation algorithms' objective functions might be completely different, thus instead of going the hard route of incorporating it into the objective function right off the bat and spending two weeks writing a customized algorithm in an efficient manner, we will start with an alternative approach of re-ranking the predicted list of a recommender system in a post-processing step.

To determine the optimal set $I^*$ of $N$ recommended items, we'll be using maximum marginal relevance.

\begin{align}
I^* = \underset{I, |I|=N}{\text{argmax}} \; (1 - \lambda) \cdot s(I) - \lambda \cdot C(p, q(I))
\end{align}

Where

- $s(i)$ is the score of item $i \in I$ predicted by the recommender system and $s(I) = \sum_{i \in I} s(i)$, i.e. the sum of all the items' scores in the recommendation list.
- $\lambda \in [0, 1]$ is a tuning parameter that determines the trade-off between the score generated by the recommender and the calibration score. Notice that since the calibration score is measured by KL-divergence, which is a lower-is-better metric, we use its negative in the maximization formula.

Finding the optimal set $I^*$ is a combinatorial optimization problem and can be a topic by itself. We won't do a deep dive into it, but instead leverage a popular greedy submodular optimization approach to solve this problem. The process is as follows:

- We start out with the empty set.
- Iteratively append one item $i$ at a time; at step $n$, when we already have the set $I_{n-1}$ comprised of $n - 1$ items, the item $i$ that maximizes the objective function defined above for the set $I_{n-1} \cup \{i\}$ is added to obtain $I_n$.
- Repeat the process to generate the full $I^*$ of size $N$.

From a theoretical standpoint, this greedy procedure guarantees a solution whose score is within a factor of $1 - 1/e \approx 0.63$ of the optimal set's score.

With this information at hand, let's look at the implementation part:
###Code
def generate_item_candidates(model, user_item, user_id, index2item, item_mapping,
filter_already_liked_items=True):
"""
For a given user, generate the list of items that we can recommend, during this
step, we will also attach the recommender's score to each item.
"""
n_items = user_item.shape[1]
# this is how implicit's matrix factorization generates
# the scores for each item for a given user, modify this
# part of the logic if we were to use a completely different
# algorithm to generate the ranked items
user_factor = model.user_factors[user_id]
scores = model.item_factors.dot(user_factor)
liked = set()
if filter_already_liked_items:
liked = set(user_item[user_id].indices)
item_ids = set(np.arange(n_items))
item_ids -= liked
items = []
for item_id in item_ids:
item = item_mapping[index2item[item_id]]
item.score = scores[item_id]
items.append(item)
return items
items = generate_item_candidates(bpr, user_item_train, user_id, index2item, item_mapping)
print('number of item candidates:', len(items))
items[:5]
def compute_utility(reco_items, interacted_distr, lmbda=0.5):
"""
Our objective function for computing the utility score for
the list of recommended items.
lmbda : float, 0.0 ~ 1.0, default 0.5
Lambda term controls the score and calibration tradeoff,
the higher the lambda the higher the resulting recommendation
will be calibrated. Lambda is keyword in Python, so it's
lmbda instead ^^
"""
reco_distr = compute_genre_distr(reco_items)
kl_div = compute_kl_divergence(interacted_distr, reco_distr)
total_score = 0.0
for item in reco_items:
total_score += item.score
# kl divergence is the lower the better, while score is
# the higher the better so remember to negate it in the calculation
utility = (1 - lmbda) * total_score - lmbda * kl_div
return utility
def calib_recommend(items, interacted_distr, topn, lmbda=0.5):
"""
start with an empty recommendation list,
loop over the topn cardinality, during each iteration
update the list with the item that maximizes the utility function.
"""
calib_reco = []
for _ in range(topn):
max_utility = -np.inf
for item in items:
if item in calib_reco:
continue
utility = compute_utility(calib_reco + [item], interacted_distr, lmbda)
if utility > max_utility:
max_utility = utility
best_item = item
calib_reco.append(best_item)
return calib_reco
start = time.time()
calib_reco_items = calib_recommend(items, interacted_distr, topn, lmbda=0.99)
elapsed = time.time() - start
print('elapsed: ', elapsed)
calib_reco_items
###Output
elapsed: 18.013550996780396
###Markdown
In the code chunk above, we turned the $\lambda$ knob extremely high to generate the most calibrated recommendation list possible. Let's now compare the calibrated recommendation, the original recommendation (which only optimizes for the score, $s$) and the user's interaction distribution.
###Code
calib_reco_distr = compute_genre_distr(calib_reco_items)
calib_reco_kl_div = compute_kl_divergence(interacted_distr, calib_reco_distr)
reco_kl_div = compute_kl_divergence(interacted_distr, reco_distr)
print('\noriginal reco kl-divergence score:', reco_kl_div)
print('calibrated reco kl-divergence score:', calib_reco_kl_div)
distr_comparison_plot(interacted_distr, calib_reco_distr)
###Output
original reco kl-divergence score: 1.3345197164266038
calibrated reco kl-divergence score: 0.025585815111015126
###Markdown
Printing out the genre distribution from the calibrated recommendation list shows that this list covers more genres and its distribution closely resembles the distribution of the user's past historical interaction, and our quantitative calibration metric, KL-divergence, also confirms this, i.e. the calibrated recommendation's KL-divergence is lower than the original recommendation's. Thankfully, from the results above, it seems that the re-ranked recommendation list that aims to maximize the calibration score does in fact generate a more calibrated list. But the question is at what cost? Do other ranking metrics that recommender systems often optimize for drop? Let's take a look at precision_at_k. Here the number for `k` is the `topn` parameter that we've defined earlier, i.e. the number of recommendations to generate for the user.
###Code
def precision(user_item, user_id, reco_items, index2item):
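    # precision@k for this user against the held-out test interactions; the denominator
    # uses min(#recommended, #held-out likes) so users with fewer held-out items than k
    # are not unfairly penalised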
indptr = user_item.indptr
indices = user_item.indices
reco_ids = {item.id for item in reco_items}
likes = {index2item[indices[i]] for i in range(indptr[user_id], indptr[user_id + 1])}
relevant = len(reco_ids & likes)
total = min(len(reco_items), len(likes))
return relevant / total
reco_precision = precision(user_item_test, user_id, reco_items, index2item)
calib_reco_precision = precision(user_item_test, user_id, calib_reco_items, index2item)
print('original reco precision score:', reco_precision)
print('calibrated reco precision score:', calib_reco_precision)
###Output
original reco precision score: 0.1875
calibrated reco precision score: 0.125
###Markdown
Well ..., it's not a surprise that the calibrated recommendation list's precision score is a bit disappointing compared to the original recommendation. But let's see what happens if we try a different value of $\lambda$, this time turning it down a bit to strike a balance between calibration and precision.
###Code
start = time.time()
calib_reco_items = calib_recommend(items, interacted_distr, topn, lmbda=0.5)
elapsed = time.time() - start
print('elapsed: ', elapsed)
calib_reco_items
calib_reco_distr = compute_genre_distr(calib_reco_items)
calib_reco_kl_div = compute_kl_divergence(interacted_distr, calib_reco_distr)
calib_reco_precision = precision(user_item_test, user_id, calib_reco_items, index2item)
print('calibrated reco kl-divergence score:', calib_reco_kl_div)
print('calibrated reco precision score:', calib_reco_precision)
calib_reco_distr = compute_genre_distr(calib_reco_items)
distr_comparison_plot(interacted_distr, calib_reco_distr)
###Output
_____no_output_____
###Markdown
Well, well, well. It turns out calibration can be improved considerably while accuracy is reduced only slightly if we find the sweet spot for the tuning parameter $\lambda$.

The following code chunk collects all the code to generate the calibrated recommendation and the original recommendation, and to compare them with the user's historical interaction, in one place for ease of tracking the flow. This process is outlined for 1 user; feel free to modify the code to perform this comparison across all users. Due to the randomness in the recommendation algorithm, the results might differ across runs, but the underlying trend should remain the same.
###Code
topn = 20
user_id = 0
lmbda = 0.99
reco = bpr.recommend(user_id, user_item_train, N=topn)
reco_items = [item_mapping[index2item[index]] for index, _ in reco]
reco_distr = compute_genre_distr(reco_items)
interacted_ids = user_item_train[user_id].nonzero()[1]
interacted_items = [item_mapping[index2item[index]] for index in interacted_ids]
interacted_distr = compute_genre_distr(interacted_items)
items = generate_item_candidates(bpr, user_item_train, user_id, index2item, item_mapping)
calib_reco_items = calib_recommend(items, interacted_distr, topn, lmbda)
calib_reco_distr = compute_genre_distr(calib_reco_items)
calib_reco_kl_div = compute_kl_divergence(interacted_distr, calib_reco_distr)
calib_reco_precision = precision(user_item_test, user_id, calib_reco_items, index2item)
print('calibrated reco kl-divergence score:', calib_reco_kl_div)
print('calibrated reco precision score:', calib_reco_precision)
distr_comparison_plot(interacted_distr, calib_reco_distr)
reco_kl_div = compute_kl_divergence(interacted_distr, reco_distr)
reco_precision = precision(user_item_test, user_id, reco_items, index2item)
print('original reco kl-divergence score:', reco_kl_div)
print('original reco precision score:', reco_precision)
distr_comparison_plot(interacted_distr, reco_distr)
###Output
calibrated reco kl-divergence score: 0.025585815111015126
calibrated reco precision score: 0.125
|
src/notebooks/finance/interventions/zhang.ipynb | ###Markdown
Adversarial debiasing - Adult data

This notebook contains a simple implementation of the algorithm presented in [Mitigating Unwanted Biases with Adversarial Learning](https://dl.acm.org/doi/10.1145/3278721.3278779) by Zhang et al.

We train a model in tandem with an adversary that tries to predict sensitive data from the model outputs. By training the model not only to perform well, but also to fool the adversary, we achieve fairness. By varying what we allow the adversary to see, we can achieve different notions of fairness with an otherwise very similar setup. In this notebook we demonstrate demographic parity, conditional demographic parity and equalised odds.

For simplicity, we'll focus on mitigating bias with respect to `sex`.
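Concretely, writing $\lambda$ for the discriminator loss weight used later in the notebook, the model's weights are trained to minimise

\begin{align}
(1 - \lambda) \cdot L_{\text{task}} - \lambda \cdot L_{\text{adv}}
\end{align}

while the adversary's weights are trained to minimise $L_{\text{adv}}$ directly; this mirrors how the losses are combined in the `make_training_steps` function defined below.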
###Code
from pathlib import Path
import joblib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from fairlearn.metrics import (
demographic_parity_difference,
demographic_parity_ratio,
equalized_odds_difference,
equalized_odds_ratio,
)
from helpers.metrics import (
accuracy,
conditional_demographic_parity_difference,
conditional_demographic_parity_ratio,
)
from helpers.finance import bin_hours_per_week
from helpers.plot import group_box_plots
from tqdm.auto import tqdm # noqa
from helpers import export_plot
###Output
_____no_output_____
###Markdown
The sigmoid function normalises numbers to the range $(0, 1)$, and is useful for constraining model outputs to be probabilities.
###Code
def sigmoid(arr):
return 1 / (1 + np.exp(-arr))
###Output
_____no_output_____
###Markdown
Here we set some global hyperparameters for easy reference. Feel free to experiment with different values.
###Code
BATCH_SIZE = 512
ITERATIONS = 5000
WARMUP_ITERATIONS = 2000
# number of discriminator training steps per model training step
DISCRIMINATOR_STEPS = 5
MODEL_HIDDEN_UNITS = [50, 50]
MODEL_ACTIVATION = "relu"
MODEL_LEARNING_RATE = 1e-4
DISCRIMINATOR_HIDDEN_UNITS = [50, 50]
DISCRIMINATOR_ACTIVATION = "relu"
DISCRIMINATOR_LEARNING_RATE = 1e-2
DISCRIMINATOR_LOSS_WEIGHT = 0.9
###Output
_____no_output_____
###Markdown
Location of artifacts (model and data)
###Code
artifacts_dir = Path("../../../artifacts")
# override data_dir in source notebook
# this is stripped out for the hosted notebooks
artifacts_dir = Path("../../../../artifacts")
###Output
_____no_output_____
###Markdown
Load the data. Check out the preprocessing notebook for details on how this data was obtained. Tensorflow expects float32 data, so we cast all columns on load.
###Code
data_dir = artifacts_dir / "data" / "adult"
train_oh = pd.read_csv(data_dir / "processed" / "train-one-hot.csv").astype(
np.float32
)
val_oh = pd.read_csv(data_dir / "processed" / "val-one-hot.csv").astype(
np.float32
)
test_oh = pd.read_csv(data_dir / "processed" / "test-one-hot.csv").astype(
np.float32
)
# unscaled data for making plots
train = pd.read_csv(data_dir / "processed" / "train.csv")
val = pd.read_csv(data_dir / "processed" / "val.csv")
test = pd.read_csv(data_dir / "processed" / "test.csv")
###Output
_____no_output_____
###Markdown
Create NumPy arrays of relevant data.
###Code
train_features = train_oh.drop(columns=["sex", "salary"]).values
train_sex = train_oh[["sex"]].values
train_salary = train_oh["salary"].values
val_features = val_oh.drop(columns=["sex", "salary"]).values
val_sex = val_oh[["sex"]].values
val_salary = val_oh["salary"].values
test_features = test_oh.drop(columns=["sex", "salary"]).values
test_sex = test_oh[["sex"]].values
test_salary = test_oh["salary"].values
###Output
_____no_output_____
###Markdown
We'll also load the baseline adult model to compare results against.
###Code
baseline_model = joblib.load(
artifacts_dir / "models" / "finance" / "baseline.pkl"
)
###Output
_____no_output_____
###Markdown
Demographic parity

Build a model and an adversary. We use simple feed-forward networks in each case.
###Code
dp_model = tf.keras.Sequential(
[
tf.keras.layers.Dense(units, activation=MODEL_ACTIVATION)
for units in MODEL_HIDDEN_UNITS
],
name="model",
)
# no activation in last layer, model outputs logits not probabilities.
dp_model.add(tf.keras.layers.Dense(1))
dp_discriminator = tf.keras.Sequential(
[
tf.keras.layers.Dense(units, activation=DISCRIMINATOR_ACTIVATION)
for units in DISCRIMINATOR_HIDDEN_UNITS
],
name="discriminator",
)
# also no activation function here.
dp_discriminator.add(tf.keras.layers.Dense(1))
###Output
_____no_output_____
###Markdown
Build a pipeline to manage training. This pipeline contains the original model, and feeds the outputs of the model to the discriminator.
###Code
features = tf.keras.Input(train_features.shape[1])
attribute = tf.keras.Input(1)
# concatenate features and protected data to pass to model
model_inputs = tf.keras.layers.concatenate([features, attribute])
model_outputs = dp_model(model_inputs)
# pass model outputs to discriminator
discriminator_outputs = dp_discriminator(model_outputs)
# pipeline outputs both model and discriminator outputs
dp_pipeline = tf.keras.Model(
inputs=[features, attribute],
outputs=[model_outputs, discriminator_outputs],
)
###Output
_____no_output_____
###Markdown
We build Tensorflow datasets from the data. These will handle batching and shuffling of the data during training.
###Code
train_data = (
tf.data.Dataset.from_tensor_slices(
((train_features, train_sex), train_salary)
)
.shuffle(buffer_size=BATCH_SIZE * 16, reshuffle_each_iteration=True)
.batch(BATCH_SIZE)
.repeat()
)
val_data = (
tf.data.Dataset.from_tensor_slices(((val_features, val_sex), val_salary))
.batch(val_features.shape[0])
.repeat()
)
test_data = (
tf.data.Dataset.from_tensor_slices(
((test_features, test_sex), test_salary)
)
.batch(test_features.shape[0])
.repeat()
)
###Output
_____no_output_____
###Markdown
This function builds the relevant training steps. Since we'll reuse very similar training steps later, we make a function that takes the pipeline as an argument and returns the training steps plus the metrics that get logged.
###Code
def make_training_steps(
pipeline, model_learning_rate, discriminator_learning_rate
):
# separate optimisers for the model and discriminator
model_optim = tf.optimizers.Adam(model_learning_rate)
discriminator_optim = tf.optimizers.Adam(discriminator_learning_rate)
# use binary cross entropy for losses, note from_logits=True as we
# have not normalised the model outputs into probabilities.
binary_cross_entropy = tf.losses.BinaryCrossentropy(from_logits=True)
# lists of variables that will be updated during training.
model_vars = pipeline.get_layer("model").trainable_variables
discriminator_vars = pipeline.get_layer(
"discriminator"
).trainable_variables
# create a dictionary of metrics for easy tracking of losses
metrics = {
"performance_loss": tf.metrics.Mean(
"performance-loss", dtype=tf.float32
),
"val_performance_loss": tf.metrics.Mean(
"val-performance-loss", dtype=tf.float32
),
"discriminator_loss": tf.metrics.Mean(
"discriminator-loss", dtype=tf.float32
),
"val_discriminator_loss": tf.metrics.Mean(
"val-discriminator-loss", dtype=tf.float32
),
"loss": tf.metrics.Mean("loss", dtype=tf.float32),
"val_loss": tf.metrics.Mean("val-loss", dtype=tf.float32),
}
@tf.function
def model_training_step(x_train, y_train, discriminator_loss_weight):
"""
The weights of the model are trained by minimising.
(1 - dlw) * model_loss - dlw * discriminator_loss
The minus sign in front of the discriminator loss means we try to
maximise it, thereby removing information about the protected
attribute from the model outputs.
"""
with tf.GradientTape() as tape:
fair_logits, discriminator_logits = pipeline(x_train)
performance_loss = binary_cross_entropy(y_train, fair_logits)
discriminator_loss = binary_cross_entropy(
x_train[1], discriminator_logits
)
loss = (
(1 - discriminator_loss_weight) * performance_loss
- discriminator_loss_weight * discriminator_loss
)
metrics["performance_loss"](performance_loss)
metrics["discriminator_loss"](discriminator_loss)
metrics["loss"](loss)
# compute gradients and pass to optimiser
grads = tape.gradient(loss, model_vars)
model_optim.apply_gradients(zip(grads, model_vars))
@tf.function
def discriminator_training_step(x_train):
"""
The weights of the discriminator are simply trained by minimising
the discriminator loss directly.
"""
with tf.GradientTape() as tape:
_, discriminator_logits = pipeline(x_train)
discriminator_loss = binary_cross_entropy(
x_train[1], discriminator_logits
)
grads = tape.gradient(discriminator_loss, discriminator_vars)
discriminator_optim.apply_gradients(zip(grads, discriminator_vars))
@tf.function
def val_step(x_val, y_val, discriminator_loss_weight):
fair_logits, discriminator_logits = pipeline(x_val)
performance_loss = binary_cross_entropy(y_val, fair_logits)
discriminator_loss = binary_cross_entropy(
x_val[1], discriminator_logits
)
loss = (
(1 - discriminator_loss_weight) * performance_loss
- discriminator_loss_weight * discriminator_loss
)
metrics["val_performance_loss"](performance_loss)
metrics["val_discriminator_loss"](discriminator_loss)
metrics["val_loss"](loss)
return model_training_step, discriminator_training_step, val_step, metrics
###Output
_____no_output_____
###Markdown
Make the training steps for demographic parity
###Code
(
model_training_step,
discriminator_training_step,
val_step,
metrics,
) = make_training_steps(
dp_pipeline, MODEL_LEARNING_RATE, DISCRIMINATOR_LEARNING_RATE
)
###Output
_____no_output_____
###Markdown
Training this model typically takes a couple of minutes, so we load a trained model from disk here, but all the code used to train the model we're loading is included below.
###Code
dp_pipeline = tf.keras.models.load_model(
artifacts_dir / "models" / "finance" / "adversarial-dp.h5"
)
###Output
_____no_output_____
###Markdown
We now have everything we need to train the model. We'll manually track the losses with a list since our setup is not too complicated, but we could also log metrics to [TensorBoard](https://www.tensorflow.org/tensorboard/) here.
###Code
# ds = iter(train_data)
# val_ds = iter(val_data)
# perf_losses = []
# disc_losses = []
# losses = []
# val_perf_losses = []
# val_disc_losses = []
# val_losses = []
###Output
_____no_output_____
###Markdown
We start by warming up the model without a fairness constraint to help optimisation later. Since the fairness and performance objectives are in tension, it's helpful to first roughly optimise for performance before bringing in the fairness constraint.

To train we'll simply loop over the training data and apply the model training step with the discriminator weight set to 0.
###Code
# for i in tqdm(range(WARMUP_ITERATIONS)):
# x_train_batch, y_train_batch = next(ds)
# model_training_step(x_train_batch, y_train_batch, 0.0)
# if i % 25 == 0:
# x_val_batch, y_val_batch = next(val_ds)
# val_step(x_val_batch, y_val_batch, 0.0)
# # log metrics every 25 iterations
# perf_losses.append(metrics["performance_loss"].result())
# metrics["performance_loss"].reset_states()
# val_perf_losses.append(metrics["val_performance_loss"].result())
# metrics["val_performance_loss"].reset_states()
# disc_losses.append(metrics["discriminator_loss"].result())
# metrics["discriminator_loss"].reset_states()
# val_disc_losses.append(metrics["val_discriminator_loss"].result())
# metrics["val_discriminator_loss"].reset_states()
# losses.append(metrics["loss"].result())
# metrics["loss"].reset_states()
# val_losses.append(metrics["val_loss"].result())
# metrics["val_loss"].reset_states()
###Output
_____no_output_____
###Markdown
We can validate training by making some simple plots of the loss curves. These are plots we'll make repeatedly, so we extract them into a reusable function.

In this case everything looks good.
###Code
def plot_losses(
losses,
val_losses,
perf_losses,
val_perf_losses,
disc_losses,
val_disc_losses,
):
"""
Compare loss curves on train and validation sets.
"""
f, ax = plt.subplots(ncols=3, figsize=(16, 5))
def plot_loss_curves(ls, vls, ax, title):
ax.plot([i * 25 for i, _ in enumerate(ls)], ls, label="train")
ax.plot([i * 25 for i, _ in enumerate(vls)], vls, label="val")
ax.set_title(title)
ax.set_xlabel("Iteration")
ax.legend()
plot_loss_curves(losses, val_losses, ax[0], "Loss")
plot_loss_curves(perf_losses, val_perf_losses, ax[1], "Performance loss")
plot_loss_curves(disc_losses, val_disc_losses, ax[2], "Discriminator loss")
# plot_losses(
# losses, val_losses, perf_losses, val_perf_losses, disc_losses, val_disc_losses
# )
###Output
_____no_output_____
###Markdown
Having warmed up, we now train the model against the adversary to remove discrimination.
###Code
# # full training
# for i in tqdm(range(ITERATIONS)):
# x_train_batch, y_train_batch = next(ds)
# model_training_step(
# x_train_batch, y_train_batch, DISCRIMINATOR_LOSS_WEIGHT
# )
# for j in range(DISCRIMINATOR_STEPS):
# x_train_batch, _ = next(ds)
# discriminator_training_step(x_train_batch)
# if i % 25 == 0:
# x_val_batch, y_val_batch = next(val_ds)
# val_step(x_val_batch, y_val_batch, DISCRIMINATOR_LOSS_WEIGHT)
# # log metrics every 25 iterations
# perf_losses.append(metrics["performance_loss"].result())
# metrics["performance_loss"].reset_states()
# val_perf_losses.append(metrics["val_performance_loss"].result())
# metrics["val_performance_loss"].reset_states()
# disc_losses.append(metrics["discriminator_loss"].result())
# metrics["discriminator_loss"].reset_states()
# val_disc_losses.append(metrics["val_discriminator_loss"].result())
# metrics["val_discriminator_loss"].reset_states()
# losses.append(metrics["loss"].result())
# metrics["loss"].reset_states()
# val_losses.append(metrics["val_loss"].result())
# metrics["val_loss"].reset_states()
###Output
_____no_output_____
###Markdown
Again we plot the loss curves to check that training has proceeded roughly as expected. Notice there's a step change when we change the weighting in the loss.
###Code
# plot_losses(
# losses, val_losses, perf_losses, val_perf_losses, disc_losses, val_disc_losses
# )
###Output
_____no_output_____
###Markdown
We now calculate some metrics on the test set and compare them to the same metrics for the baseline model. Both the score-level and decision-level measures of demographic parity are drastically reduced, at the cost of a small reduction in accuracy.
###Code
mask = test_sex.flatten() == 1
# baseline metrics
bl_test_probs = baseline_model.predict_proba(
test_oh.drop(columns="salary").values
)[:, 1]
bl_test_pred = bl_test_probs >= 0.5
bl_test_acc = accuracy(test_salary, bl_test_probs)
bl_test_dpd = demographic_parity_difference(
test_oh.salary, bl_test_pred, sensitive_features=test_sex.flatten(),
)
bl_test_dpr = demographic_parity_ratio(
test_oh.salary, bl_test_pred, sensitive_features=test_sex.flatten(),
)
# new model metrics
test_logits, _ = dp_pipeline((test_features, test_sex))
test_probs = sigmoid(test_logits.numpy().flatten())
test_pred = test_probs >= 0.5
test_acc = accuracy(test_salary, test_probs)
test_dpd = demographic_parity_difference(
test_oh.salary, test_pred, sensitive_features=test_sex.flatten(),
)
test_dpr = demographic_parity_ratio(
test_oh.salary, test_pred, sensitive_features=test_sex.flatten(),
)
print(f"Baseline accuracy: {bl_test_acc:.3f}")
print(f"Accuracy: {test_acc:.3f}\n")
print(f"Baseline demographic parity difference: {bl_test_dpd:.3f}")
print(f"Demographic parity difference: {test_dpd:.3f}\n")
print(f"Baseline demographic parity ratio: {bl_test_dpr:.3f}")
print(f"Demographic parity ratio: {test_dpr:.3f}")
###Output
_____no_output_____
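###Markdown
To make concrete what the two demographic parity metrics above measure, the following sketch recomputes them directly from per-group selection rates; it assumes `test_pred` and `test_sex` from the previous cell. For a binary protected attribute the difference is the gap between the male and female selection rates and the ratio is the smaller rate divided by the larger, so these manual numbers should agree with the values printed above.
###Code
# Hedged sketch: demographic parity recomputed from per-group selection rates.
# Assumes test_pred and test_sex as defined in the previous cell.
sex = test_sex.flatten()
female_rate = test_pred[sex == 0].mean()  # P(predicted positive | female)
male_rate = test_pred[sex == 1].mean()  # P(predicted positive | male)
manual_dpd = abs(male_rate - female_rate)
manual_dpr = min(female_rate, male_rate) / max(female_rate, male_rate)
print(f"Selection rates: female {female_rate:.3f}, male {male_rate:.3f}")
print(f"Manual demographic parity difference: {manual_dpd:.3f}")
print(f"Manual demographic parity ratio: {manual_dpr:.3f}")
###Output
_____no_output_____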
###Markdown
We can further visualise the improvement with a box plot.
###Code
dp_box = group_box_plots(
np.concatenate([bl_test_probs, test_probs]),
np.tile(test_oh.sex.map(lambda x: "Male" if x else "Female"), 2),
groups=np.concatenate(
[np.zeros_like(bl_test_probs), np.ones_like(test_probs)]
),
group_names=["Baseline", "Adversarial model"],
title="Scores by sex",
xlabel="Score",
ylabel="Method",
)
dp_box
export_plot(dp_box, "adversarial-dp.json")
###Output
_____no_output_____
###Markdown
The mean female and male scores are now relatively close, and we have also preserved accuracy reasonably well. Conditional demographic parity. We'll now repeat the process for conditional demographic parity, where we use `hours_per_week` as a legitimate risk factor when predicting someone's salary. As you'll see, we don't need to make many modifications to the code; the principal difference is that the discriminator gets direct access to `hours_per_week`. This means the model gets no benefit from removing information about `hours_per_week` from its outputs.
###Code
cdp_model = tf.keras.Sequential(
[
tf.keras.layers.Dense(units, activation=MODEL_ACTIVATION)
for units in MODEL_HIDDEN_UNITS
],
name="model",
)
# no activation in last layer, model outputs logits not probabilities.
cdp_model.add(tf.keras.layers.Dense(1))
cdp_discriminator = tf.keras.Sequential(
[
tf.keras.layers.Dense(units, activation=DISCRIMINATOR_ACTIVATION)
for units in DISCRIMINATOR_HIDDEN_UNITS
],
name="discriminator",
)
# also no activation function here.
cdp_discriminator.add(tf.keras.layers.Dense(1))
###Output
_____no_output_____
###Markdown
Build a pipeline to manage training. This pipeline contains the original model, and feeds the outputs of the model to the discriminator. We now also pass the legitimate risk factors to the discriminator directly.
###Code
features = tf.keras.Input(train_features.shape[1] - 1)
attribute = tf.keras.Input(1)
legitimate_risk_factors = tf.keras.Input(1)
# features, protected attribute and legitimate risk factors all passed to model
model_inputs = tf.keras.layers.concatenate(
    [features, legitimate_risk_factors, attribute]
)
model_outputs = cdp_model(model_inputs)
# discriminator receives model outputs and legitimate risk factors
discriminator_inputs = tf.keras.layers.concatenate(
    [model_outputs, legitimate_risk_factors]
)
discriminator_outputs = cdp_discriminator(discriminator_inputs)
# pipeline outputs both model and discriminator outputs. Input order matches
# the dataset batches built below: (features, protected attribute, hours_per_week).
cdp_pipeline = tf.keras.Model(
    inputs=[features, attribute, legitimate_risk_factors],
    outputs=[model_outputs, discriminator_outputs],
)
###Output
_____no_output_____
###Markdown
We once again build TensorFlow datasets from the data. These will handle batching and shuffling of the data during training. Note that we now separate hours per week from the rest of the features so that we can pass it to the discriminator.
###Code
train_cdp_features = train_oh.drop(
columns=["sex", "salary", "hours_per_week"]
).values
val_cdp_features = val_oh.drop(
columns=["sex", "salary", "hours_per_week"]
).values
test_cdp_features = test_oh.drop(
columns=["sex", "salary", "hours_per_week"]
).values
train_hpw = train_oh[["hours_per_week"]].values
val_hpw = val_oh[["hours_per_week"]].values
test_hpw = test_oh[["hours_per_week"]].values
train_data = (
tf.data.Dataset.from_tensor_slices(
((train_cdp_features, train_sex, train_hpw), train_salary)
)
.shuffle(buffer_size=BATCH_SIZE * 16, reshuffle_each_iteration=True)
.batch(BATCH_SIZE)
.repeat()
)
val_data = (
tf.data.Dataset.from_tensor_slices(
((val_cdp_features, val_sex, val_hpw), val_salary)
)
.batch(val_features.shape[0])
.repeat()
)
test_data = (
tf.data.Dataset.from_tensor_slices(
((test_cdp_features, test_sex, test_hpw), test_salary)
)
.batch(test_features.shape[0])
.repeat()
)
###Output
_____no_output_____
###Markdown
Training steps. These are as before, but we use the `cdp_pipeline` instead of the `dp_pipeline`.
###Code
(
model_training_step,
discriminator_training_step,
val_step,
metrics,
) = make_training_steps(
cdp_pipeline, MODEL_LEARNING_RATE, DISCRIMINATOR_LEARNING_RATE
)
###Output
_____no_output_____
###Markdown
Training this model typically takes a couple of minutes, so we load a trained model from disk here, but all the code used to train the model we're loading is included below.
###Code
cdp_pipeline = tf.keras.models.load_model(
artifacts_dir / "models" / "finance" / "adversarial-cdp.h5"
)
###Output
_____no_output_____
###Markdown
We now have everything we need to train the model. We'll manually track the losses with a list since our setup is not too complicated, but we could also log metrics to [TensorBoard](https://www.tensorflow.org/tensorboard/) here.
###Code
# ds = iter(train_data)
# val_ds = iter(val_data)
# perf_losses = []
# disc_losses = []
# losses = []
# val_perf_losses = []
# val_disc_losses = []
# val_losses = []
###Output
_____no_output_____
###Markdown
We start by warming up the model without a fairness constraint to help optimisation later. Since the fairness and performance objectives are in tension, it's helpful to first roughly optimise for performance before bringing in the fairness constraint. To train we'll simply loop over the training data and apply the model training step with the discriminator weight set to 0.
###Code
# for i in tqdm(range(WARMUP_ITERATIONS)):
# x_train_batch, y_train_batch = next(ds)
# model_training_step(x_train_batch, y_train_batch, 0.0)
# if i % 25 == 0:
# x_val_batch, y_val_batch = next(val_ds)
# val_step(x_val_batch, y_val_batch, 0.0)
# # log metrics every 25 iterations
# perf_losses.append(metrics["performance_loss"].result())
# metrics["performance_loss"].reset_states()
# val_perf_losses.append(metrics["val_performance_loss"].result())
# metrics["val_performance_loss"].reset_states()
# disc_losses.append(metrics["discriminator_loss"].result())
# metrics["discriminator_loss"].reset_states()
# val_disc_losses.append(metrics["val_discriminator_loss"].result())
# metrics["val_discriminator_loss"].reset_states()
# losses.append(metrics["loss"].result())
# metrics["loss"].reset_states()
# val_losses.append(metrics["val_loss"].result())
# metrics["val_loss"].reset_states()
###Output
_____no_output_____
###Markdown
We can validate training by making some simple plots of the loss curves. In this case everything looks good.
###Code
# plot_losses(
# losses, val_losses, perf_losses, val_perf_losses, disc_losses, val_disc_losses
# )
###Output
_____no_output_____
###Markdown
Having warmed up, we now train the model against the adversary to remove discrimination.
###Code
# # full training
# for i in tqdm(range(ITERATIONS)):
# x_train_batch, y_train_batch = next(ds)
# model_training_step(
# x_train_batch, y_train_batch, DISCRIMINATOR_LOSS_WEIGHT
# )
# for j in range(DISCRIMINATOR_STEPS):
# x_train_batch, _ = next(ds)
# discriminator_training_step(x_train_batch)
# if i % 25 == 0:
# x_val_batch, y_val_batch = next(val_ds)
# val_step(x_val_batch, y_val_batch, DISCRIMINATOR_LOSS_WEIGHT)
# # log metrics every 25 iterations
# perf_losses.append(metrics["performance_loss"].result())
# metrics["performance_loss"].reset_states()
# val_perf_losses.append(metrics["val_performance_loss"].result())
# metrics["val_performance_loss"].reset_states()
# disc_losses.append(metrics["discriminator_loss"].result())
# metrics["discriminator_loss"].reset_states()
# val_disc_losses.append(metrics["val_discriminator_loss"].result())
# metrics["val_discriminator_loss"].reset_states()
# losses.append(metrics["loss"].result())
# metrics["loss"].reset_states()
# val_losses.append(metrics["val_loss"].result())
# metrics["val_loss"].reset_states()
###Output
_____no_output_____
###Markdown
Again we plot the loss curves to check that training has proceeded roughly as expected. Notice there's a step change when we change the weighting in the loss.
###Code
# plot_losses(
# losses, val_losses, perf_losses, val_perf_losses, disc_losses, val_disc_losses
# )
###Output
_____no_output_____
###Markdown
We compute demographic parity conditioned on binned values of `hours_per_week` and compare against the baseline. Once again we see a major improvement but a slight drop in accuracy as a result.
###Code
mask = test_sex.flatten() == 1
test_binned_hpw = test.hours_per_week.map(bin_hours_per_week).values
# baseline metrics
bl_test_probs = baseline_model.predict_proba(
test_oh.drop(columns="salary").values
)[:, 1]
bl_test_pred = bl_test_probs >= 0.5
bl_test_acc = accuracy(test_salary, bl_test_probs)
bl_test_dpd = conditional_demographic_parity_difference(
test_oh.salary, bl_test_pred, test_sex.flatten(), test_binned_hpw,
)
bl_test_dpr = conditional_demographic_parity_ratio(
test_oh.salary, bl_test_pred, test_sex.flatten(), test_binned_hpw,
)
# new model metrics
test_logits, _ = cdp_pipeline((test_cdp_features, test_sex, test_hpw))
test_probs = sigmoid(test_logits.numpy().flatten())
test_pred = test_probs >= 0.5
test_acc = accuracy(test_salary, test_probs)
test_dpd = conditional_demographic_parity_difference(
test_oh.salary, test_pred, test_sex.flatten(), test_binned_hpw,
)
test_dpr = conditional_demographic_parity_ratio(
test_oh.salary, test_pred, test_sex.flatten(), test_binned_hpw,
)
print(f"Baseline accuracy: {bl_test_acc:.3f}")
print(f"Accuracy: {test_acc:.3f}\n")
print(f"Baseline cond. dem. parity difference: {bl_test_dpd:.3f}")
print(f"Cond. dem. parity difference: {test_dpd:.3f}\n")
print(f"Baseline cond. dem. parity ratio: {bl_test_dpr:.3f}")
print(f"Cond. dem. parity ratio: {test_dpr:.3f}")
###Output
_____no_output_____
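###Markdown
As a rough illustration of the conditioning, the sketch below prints the male/female selection-rate gap within each hours-per-week bin; it assumes `test_pred`, `test_sex` and `test_binned_hpw` from the previous cell. This is illustrative only: the conditional metrics above aggregate per-bin disparities of this kind, but we don't rely on their exact aggregation here.
###Code
# Hedged sketch: per-bin selection-rate gaps between sexes.
# Assumes test_pred, test_sex and test_binned_hpw as defined in the previous cell.
sex = test_sex.flatten()
for hpw_bin in np.unique(test_binned_hpw):
    in_bin = test_binned_hpw == hpw_bin
    female_rate = test_pred[in_bin & (sex == 0)].mean()
    male_rate = test_pred[in_bin & (sex == 1)].mean()
    print(
        f"hours_per_week bin {hpw_bin}: "
        f"female {female_rate:.3f}, male {male_rate:.3f}, "
        f"gap {abs(male_rate - female_rate):.3f}"
    )
###Output
_____no_output_____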
###Markdown
We can also visualise the improvement with a box plot.
###Code
bl_cdp_box = group_box_plots(
bl_test_probs,
test_oh.sex.map({0: "Female", 1: "Male"}),
groups=test.hours_per_week.map(bin_hours_per_week),
group_names=["0-30", "30-40", "40-50", "50+"],
title="Adversarial scores by sex and hours worked per week",
xlabel="Score",
ylabel="Years of experience",
)
bl_cdp_box
cdp_box = group_box_plots(
test_probs,
test_oh.sex.map({0: "Female", 1: "Male"}),
groups=test.hours_per_week.map(bin_hours_per_week),
group_names=["0-30", "30-40", "40-50", "50+"],
title="Adversarial scores by sex and hours worked per week",
xlabel="Score",
ylabel="Years of experience",
)
cdp_box
export_plot(bl_cdp_box, "bl-adversarial-cdp.json")
export_plot(cdp_box, "adversarial-cdp.json")
###Output
_____no_output_____
###Markdown
Equal opportunity. Finally we repeat the process for equal opportunity. Once again the code is similar; all that changes is that we now pass the labels to the discriminator. This means that the model gets no benefit from removing from its outputs information about the protected attribute that is contained in the labels. On this dataset equal opportunity seems harder to achieve, so we use a slightly more complex model and increase the discriminator weight.
###Code
ITERATIONS = 10000
BATCH_SIZE = 2048
DISCRIMINATOR_STEPS = 10
MODEL_HIDDEN_UNITS = [50, 50, 50]
DISCRIMINATOR_HIDDEN_UNITS = [50, 50, 50]
DISCRIMINATOR_LOSS_WEIGHT = 0.975
eo_model = tf.keras.Sequential(
[
tf.keras.layers.Dense(units, activation=MODEL_ACTIVATION)
for units in MODEL_HIDDEN_UNITS
],
name="model",
)
eo_model.add(tf.keras.layers.Dense(1))
eo_discriminator = tf.keras.Sequential(
[
tf.keras.layers.Dense(units, activation=DISCRIMINATOR_ACTIVATION)
for units in DISCRIMINATOR_HIDDEN_UNITS
],
name="discriminator",
)
eo_discriminator.add(tf.keras.layers.Dense(1))
###Output
_____no_output_____
###Markdown
Build a pipeline to manage training. This pipeline contains the original model, and feeds the outputs of the model to the discriminator. We now also pass the labels to the discriminator directly.
###Code
features = tf.keras.Input(train_features.shape[1])
salary = tf.keras.Input(1)
attribute = tf.keras.Input(1)
# features and protected attribute passed to model, NOT labels!
model_inputs = tf.keras.layers.concatenate([features, attribute])
model_outputs = eo_model(model_inputs)
# model outputs and labels passed to discriminator
discriminator_inputs = tf.keras.layers.concatenate([model_outputs, salary])
discriminator_outputs = eo_discriminator(discriminator_inputs)
eo_pipeline = tf.keras.Model(
inputs=[features, attribute, salary],
outputs=[model_outputs, discriminator_outputs],
)
###Output
_____no_output_____
###Markdown
We once again build TensorFlow datasets from the data. These will handle batching and shuffling of the data during training. Note that we now pass the labels in as part of the inputs so that we can feed them to the discriminator.
###Code
train_data = (
tf.data.Dataset.from_tensor_slices(
(
(train_features, train_sex, train_salary.reshape(-1, 1)),
train_salary,
)
)
.shuffle(buffer_size=BATCH_SIZE * 16, reshuffle_each_iteration=True)
.batch(BATCH_SIZE)
.repeat()
)
val_data = (
tf.data.Dataset.from_tensor_slices(
((val_features, val_sex, val_salary.reshape(-1, 1)), val_salary)
)
.batch(val_features.shape[0])
.repeat()
)
test_data = (
tf.data.Dataset.from_tensor_slices(
((test_features, test_sex, test_salary.reshape(-1, 1)), test_salary)
)
.batch(test_features.shape[0])
.repeat()
)
###Output
_____no_output_____
###Markdown
Training steps. These are as before, but we use the `eo_pipeline`.
###Code
(
model_training_step,
discriminator_training_step,
val_step,
metrics,
) = make_training_steps(
eo_pipeline, MODEL_LEARNING_RATE, DISCRIMINATOR_LEARNING_RATE
)
###Output
_____no_output_____
###Markdown
Training this model typically takes a couple of minutes, so we load a trained model from disk here, but all the code used to train the model we're loading is included below.
###Code
eo_pipeline = tf.keras.models.load_model(
artifacts_dir / "models" / "finance" / "adversarial-eo.h5"
)
###Output
_____no_output_____
###Markdown
We now have everything we need to train the model. We'll manually track the losses with a list since our setup is not too complicated, but we could also log metrics to [TensorBoard](https://www.tensorflow.org/tensorboard/) here.
###Code
# ds = iter(train_data)
# val_ds = iter(val_data)
# perf_losses = []
# disc_losses = []
# losses = []
# val_perf_losses = []
# val_disc_losses = []
# val_losses = []
###Output
_____no_output_____
###Markdown
We start by warming up the model without a fairness constraint to help optimisation later. Since the fairness and performance objectives are in tension, it's helpful to first roughly optimise for performance before brining in the fairness constraint.To train we'll simply loop over the training data and apply the model training step with the discriminator weight set to 0.
###Code
# for i in tqdm(range(WARMUP_ITERATIONS)):
# x_train_batch, y_train_batch = next(ds)
# model_training_step(x_train_batch, y_train_batch, 0.0)
# if i % 25 == 0:
# x_val_batch, y_val_batch = next(val_ds)
# val_step(x_val_batch, y_val_batch, 0.0)
# # log metrics every 25 iterations
# perf_losses.append(metrics["performance_loss"].result())
# metrics["performance_loss"].reset_states()
# val_perf_losses.append(metrics["val_performance_loss"].result())
# metrics["val_performance_loss"].reset_states()
# disc_losses.append(metrics["discriminator_loss"].result())
# metrics["discriminator_loss"].reset_states()
# val_disc_losses.append(metrics["val_discriminator_loss"].result())
# metrics["val_discriminator_loss"].reset_states()
# losses.append(metrics["loss"].result())
# metrics["loss"].reset_states()
# val_losses.append(metrics["val_loss"].result())
# metrics["val_loss"].reset_states()
###Output
_____no_output_____
###Markdown
We can validate training by making some simple plots of the loss curves. In this case everything looks good.
###Code
# plot_losses(
# losses, val_losses, perf_losses, val_perf_losses, disc_losses, val_disc_losses
# )
###Output
_____no_output_____
###Markdown
Having warmed up, we now train the model against the adversary to remove discrimination.
###Code
# # full training
# for i in tqdm(range(ITERATIONS)):
# x_train_batch, y_train_batch = next(ds)
# model_training_step(
# x_train_batch, y_train_batch, DISCRIMINATOR_LOSS_WEIGHT
# )
# for j in range(DISCRIMINATOR_STEPS):
# x_train_batch, _ = next(ds)
# discriminator_training_step(x_train_batch)
# if i % 25 == 0:
# x_val_batch, y_val_batch = next(val_ds)
# val_step(x_val_batch, y_val_batch, DISCRIMINATOR_LOSS_WEIGHT)
# # log metrics every 25 iterations
# perf_losses.append(metrics["performance_loss"].result())
# metrics["performance_loss"].reset_states()
# val_perf_losses.append(metrics["val_performance_loss"].result())
# metrics["val_performance_loss"].reset_states()
# disc_losses.append(metrics["discriminator_loss"].result())
# metrics["discriminator_loss"].reset_states()
# val_disc_losses.append(metrics["val_discriminator_loss"].result())
# metrics["val_discriminator_loss"].reset_states()
# losses.append(metrics["loss"].result())
# metrics["loss"].reset_states()
# val_losses.append(metrics["val_loss"].result())
# metrics["val_loss"].reset_states()
###Output
_____no_output_____
###Markdown
We again plot the loss curves. In this case, we found that there was quite a bit of instability compared to the other definitions of fairness.
###Code
# plot_losses(
# losses, val_losses, perf_losses, val_perf_losses, disc_losses, val_disc_losses
# )
###Output
_____no_output_____
###Markdown
Comparing metrics to the baseline, not much has changed: accuracy stayed roughly the same, and the baseline actually performed slightly better on one fairness metric and worse on the other. Genuinely optimising for equalised odds is going to take more effort.
###Code
# baseline metrics
bl_test_probs = baseline_model.predict_proba(
test_oh.drop(columns="salary").values
)[:, 1]
bl_test_pred = bl_test_probs >= 0.5
bl_test_acc = accuracy(test_salary, bl_test_probs)
bl_test_eod = equalized_odds_difference(
test_salary, bl_test_pred, sensitive_features=test_sex.flatten(),
)
bl_test_eor = equalized_odds_ratio(
test_salary, bl_test_pred, sensitive_features=test_sex.flatten(),
)
# new model metrics
test_logits, _ = eo_pipeline((test_features, test_sex, test_salary.reshape(-1, 1)))
test_probs = sigmoid(test_logits.numpy().flatten())
test_pred = test_probs >= 0.5
test_acc = accuracy(test_salary, test_probs)
test_eod = equalized_odds_difference(
test_salary, test_pred, sensitive_features=test_sex.flatten(),
)
test_eor = equalized_odds_ratio(
test_salary, test_pred, sensitive_features=test_sex.flatten(),
)
print(f"Baseline accuracy: {bl_test_acc:.3f}")
print(f"Accuracy: {test_acc:.3f}\n")
print(f"Baseline equalised odds (dist.): {bl_test_eod:.3f}")
print(f"Equalised odds (dist.): {test_eod:.3f}\n")
print(f"Baseline equalised odds (prob.): {bl_test_eor:.3f}")
print(f"Equalised odds (prob.): {test_eor:.3f}")
bl_eo_box = group_box_plots(
bl_test_probs,
test_oh.sex.map({0: "Female", 1: "Male"}),
groups=test_oh.salary,
group_names=["Not employed", "Employed"],
title="Baseline scores by sex and outcome",
xlabel="Score",
ylabel="Outcome",
)
bl_eo_box
eo_box = group_box_plots(
test_probs,
test.sex.map({0: "Female", 1: "Male"}),
groups=test_oh.salary,
group_names=["Not employed", "Employed"],
title="Adversarial scores by sex and outcome",
xlabel="Score",
ylabel="Outcome",
)
eo_box
export_plot(bl_eo_box, "bl-adversarial-eo.json")
export_plot(eo_box, "adversarial-eo.json")
###Output
_____no_output_____ |
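###Markdown
As a final check on what the equalised odds metrics capture, the sketch below computes the gaps in true positive rate and false positive rate between the sexes; it assumes `test_pred`, `test_salary` and `test_sex` from the previous cell. With fairlearn's `equalized_odds_difference`, which appears to be what's used above, the reported number is the larger of these two gaps, so this gives a direct view of which error rate drives the disparity.
###Code
# Hedged sketch: equalised odds viewed as gaps in TPR and FPR between groups.
# Assumes test_pred, test_salary and test_sex as defined in the previous cell.
sex = test_sex.flatten()
y_true = test_salary.flatten().astype(bool)
def positive_rate(mask):
    """Fraction of positive predictions among rows selected by the mask."""
    return test_pred[mask].mean()
tpr_gap = abs(positive_rate(y_true & (sex == 1)) - positive_rate(y_true & (sex == 0)))
fpr_gap = abs(positive_rate(~y_true & (sex == 1)) - positive_rate(~y_true & (sex == 0)))
print(f"TPR gap between sexes: {tpr_gap:.3f}")
print(f"FPR gap between sexes: {fpr_gap:.3f}")
print(f"Larger gap (cf. equalised odds difference): {max(tpr_gap, fpr_gap):.3f}")
###Output
_____no_output_____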