path (string, length 7-265) | concatenated_notebook (string, length 46-17M)
---|---
LQR_4D_reduced_sampling_space.ipynb | ###Markdown
2D numerical experiment of the inverted pendulum
###Code
import gpytorch
import torch
from src.TVBO import TimeVaryingBOModel
from src.objective_functions_LQR import lqr_objective_function_4D
###Output
_____no_output_____
###Markdown
Hyperparameters
###Code
# parameters regarding the objective function
objective_function_options = {'objective_function': lqr_objective_function_4D,
# optimize the 4D feedback gain
'spatio_dimensions': 4,
# approximate noise level from the objective function
'noise_lvl': 0.005,
# feasible set for the optimization SAME AS INITIAL SET
'feasible_set': torch.tensor([[-3, -6, -50, -4],
[-2, -4, -25, -2]],
dtype=torch.float),
# initial feasible set consisting of only controllers
'initial_feasible_set': torch.tensor([[-3, -6, -50, -4],
[-2, -4, -25, -2]],
dtype=torch.float),
# scaling \theta to have approximately equal lengthscales in each dimension
'scaling_factors': torch.tensor([1 / 8, 1 / 4, 3, 1 / 4])}
# parameters regarding the model
model_options = {'constrained_dimensions': None, # later specified for each variation
'forgetting_type': None, # later specified for each variation
'forgetting_factor': None, # later specified for each variation
# specification for the constraints (cf. Agrell 2019)
'nr_samples': 10000,
'xv_points_per_dim': 4, # VOPs per dimension
'truncation_bounds': [0, 2],
# specification of prior
'prior_mean': 0.,
'lengthscale_constraint': gpytorch.constraints.Interval(0.5, 6),
'lengthscale_hyperprior': gpytorch.priors.GammaPrior(6, 1 / 0.3),
'outputscale_constraint_spatio': gpytorch.constraints.Interval(0, 20),
'outputscale_hyperprior_spatio': None, }
###Output
_____no_output_____
###Markdown
Specify variations
###Code
# UI -> UI-TVBO, B2P_OU -> TV-GP-UCB
variations = [
# in the paper color blue
{'forgetting_type': 'UI', 'forgetting_factor': 0.03, 'constrained_dims': []},
# in the paper color red
{'forgetting_type': 'B2P_OU', 'forgetting_factor': 0.03, 'constrained_dims': []}, ]
###Output
_____no_output_____
###Markdown
Start optimization
###Code
trials_per_variation = 25 # number of different runs
for variation in variations:
# update variation specific parameters
model_options['forgetting_type'] = variation['forgetting_type']
model_options['forgetting_factor'] = variation['forgetting_factor']
model_options['constrained_dimensions'] = variation['constrained_dims']
tvbo_model = TimeVaryingBOModel(objective_function_options=objective_function_options,
model_options=model_options,
post_processing_options={},
add_noise=False, ) # noise is added during the simulation of the pendulum
# specify name to save results
method_name = model_options['forgetting_type']
forgetting_factor = model_options['forgetting_factor']
string = 'constrained' if model_options['constrained_dimensions'] else 'unconstrained'
NAME = f"{method_name}_2DLQR_{string}_forgetting_factor_{forgetting_factor}".replace('.', '_')
# run optimization
for trial in range(1, trials_per_variation + 1):
tvbo_model.run_TVBO(n_initial_points=30,
time_horizon=300,
safe_name=NAME,
trial=trial, )
print('Finished.')
###Output
_____no_output_____ |
Michelle-Pichardo-Sep-Tutorial-UDF-with 3-color-image.ipynb | ###Markdown
Tutorial with UDF - Michelle Pichardo. This tutorial shows the basic steps of using SEP to detect objects in an image and perform some basic photometry. * Photometry: > a branch of science that deals with measurement of the intensity of light. Import Packages * Numpy: (package) element-by-element operations, used for speed * SEP: Python library for Source Extraction and Photometry > * Command-line program for segmentation and analysis of astronomical images. Reads FITS files and performs several tasks. > https://sep.readthedocs.io/en/v1.0.x/index.html * astropy.io: (package) provides access to FITS files > * Flexible Image Transport System > > A portable file standard used in the astronomy community to store images and tables * Matplotlib: library for creating visualizations with Python > * rcParams: Matplotlib defines rc (runtime configuration) parameters containing default styles for every plot element created. The configurations can be modified with the rc parameters. > https://www.data-blogger.com/2017/11/15/python-matplotlib-pyplot-a-perfect-combination/ * %matplotlib inline: renders the figure in the notebook instead of opening it in a separate window.
###Code
#packages are used to read the test image
#and display plots
import numpy as np
import sep
from astropy.io import fits
import matplotlib.pyplot as plt
from matplotlib import rcParams
%matplotlib inline
#from astropy.io.fits import getdata (not used)
#Setting the parameters for all future figures:
#selecting figure function and figsize argument
#figsize takes(float,float)=(width,height) in inches
#w= 10.in h=8.in
#default was: 6.4,4.8
rcParams['figure.figsize'] = [10.,8.]
check_info_to_convertAB = 'hlsp_hudf12_hst_wfc3ir_udfmain_f105w_v1.0_drz.fits'
hdu_list = fits.open(check_info_to_convertAB)
hdu_list.info()
image_data = hdu_list[0].data
image_header = hdu_list[0].header
print(image_header)
###Output
_____no_output_____
###Markdown
Read an example image from a FITS file and display it. Information on the following blocks: * Verify the image is 256 x 256 pixels * More info: https://docs.astropy.org/en/stable/generated/examples/io/plot_fits-image.html#sphx-glr-generated-examples-io-plot-fits-image-py * ext = 0 > * This argument selects extension 0 (the primary HDU) > * Without further arguments, getdata returns only the data from that extension * fits.getdata: > * returns: array, record array or groups of data object > https://docs.astropy.org/en/stable/io/fits/api/files.html#astropy.io.fits.getdata
###Code
#call image
data = fits.getdata('hlsp_hudf12_hst_wfc3ir_udfmain_f105w_v1.0_drz.fits', ext =0)
#verify the shape:
print("Our array is a 2-D array with the following elements: ")
print(data.shape)
#checking type:
#print(type(data))
###Output
_____no_output_____
###Markdown
Show the image. Information on the following blocks: * np.mean and np.std: > compute the mean value and standard deviation of a set of values * plt.imshow: > function with parameters (X, interpolation, cmap, vmin, vmax, origin) > * https://www.geeksforgeeks.org/matplotlib-pyplot-imshow-in-python/ > * The input may either be actual RGB(A) data, or 2D scalar data * X: the data of the image * cmap: a colormap instance (selection of colors) > https://matplotlib.org/3.1.0/tutorials/colors/colormaps.html * vmin, vmax: the color bar range * interpolation: interpolation used to display an image * origin: place the [0,0] index in a corner of the axes
###Code
#check the mean and std values
print('Mean values from data: ')
print(np.mean(data))
print('\nStandard Deviation values from data: ')
print(np.std(data))
#assign the mean and standard deviation to variables
m, s = np.mean(data), np.std(data)
plt.imshow(data, interpolation='nearest', cmap='gray',
vmin=m-s, vmax=m+s, origin='lower')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Background subtraction * This step is needed before sources can be detected * In SEP, background estimation and source detection are two separate steps * sep.Background: > Representation of a spatially variable image background and noise > * arguments: (data) - not the same as our `data` variable; it must be a 2-d array > * methods: background, background rms, subfrom > * https://sep.readthedocs.io/en/v1.0.x/api/sep.Background.html > * returns: rms, an array with the same dimensions as the original
###Code
#measure a spatially varying background on the image
#assign the background data to a variable
data = data.byteswap().newbyteorder()
bkg = sep.Background(data)
###Output
_____no_output_____
###Markdown
sep.Background(): * returns a Background object that holds information on the spatially varying background and the spatially varying background noise level. The values do not match the tutorial * Confirmed by prof, this is OK
###Code
#get a 'global' mean and noise of the image background:
# print the global background level
print(bkg.globalback)
# print the global background Root Mean Square
print(bkg.globalrms)
# evaluate background as 2-d array, same size as original image
#set background data to a 2-d array
bkg_image = np.array(bkg)
# show the background
plt.imshow(bkg_image, interpolation='nearest',
cmap='gray', origin='lower')
plt.colorbar()
# evaluate the background noise as 2-d array, same size as original image
#Modify the background data to be the rms of the background (noise)
bkg_rms = bkg.rms()
#show the noise (bkg_rms)
plt.imshow(bkg_rms, interpolation='nearest',
cmap='gray', origin='lower')
plt.colorbar()
# subtract the background
#data(total info) - bkg(noise)
data_sub = data - bkg
###Output
_____no_output_____
###Markdown
Object detection Now that we've subtracted the background, we can run object detection on the background-subtracted data. You can see the background noise is pretty flat, so we're setting the detection threshold to a constant value of 1.5σ, where σ is the global background RMS. * sep.extract: > Extracts sources from an image > arguments: (data, thresh, err) > https://sep.readthedocs.io/en/v1.0.x/api/sep.extract.html * data: 2-d array * thresh: float, the threshold value for detection * err: error or variance; this can be used to specify a pixel-by-pixel detection threshold * returns: extracted object parameters (the threshold is applied at each object location when err is given)
###Code
# extract data, set threshold to 1.5
objects = sep.extract(data_sub, 1.5, err = bkg.globalrms)
#view an entry and it's x,y positions
#print(objects[0])
#print(objects['x'])
#print(objects['y'])
###Output
_____no_output_____
###Markdown
Length of objects found
###Code
# how many objects were detected
print("The number of sources found: " + str(len(objects)))
###Output
_____no_output_____
###Markdown
objects['x'] and objects['y'] will give the centroid coordinates of the objects. Just to check where the detected objects are, we'll over-plot the object coordinates with some basic shape parameters on the image: About the following block: * matplotlib.patches: > patches contains classes to pull from > https://matplotlib.org/api/patches_api.html > Ellipse: is a class in patches, a scale-free ellipse * Ellipse(xy, width, height[, angle]) > https://matplotlib.org/api/_as_gen/matplotlib.patches.Ellipse.html#matplotlib.patches.Ellipse * plt.subplots(): > Creates a figure and a set of subplots > * Returns: fig: figure, and ax: axes.Axes object or array of Axes objects > https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.subplots.html * Key to showing the image is the Axes (ax) > * ax.imshow(): >> * Displays data as an image on a 2-D raster (bitmap image: individual pixels as squares) >> The input is 2D scalar data, which will be rendered as a pseudocolor image. The number of pixels used to render an image is set by the axes size and the dpi (dots per inch) of the figure >> * vmin and vmax are related to the cmap (colormap), where m is the mean and s is the standard deviation
###Code
# import package for displaying ellipses
from matplotlib.patches import Ellipse
# plot background-subtracted image
# define the figure and axes
fig, ax = plt.subplots()
# define the mean and standard deviation values
#from the new background without noise
m, s = np.mean(data_sub), np.std(data_sub)
# Display the Axes defined by our array data_sub
#define the color scale using the mean and standard deviation
im = ax.imshow(data_sub, interpolation='nearest', cmap='gray',
vmin=m-s, vmax=m+s, origin='lower')
# plot an ellipse for each object
for i in range(len(objects)): #loop through [0,69-1] total of 69
#define the ellipse
#xy = (x,y) takes the first elements and assigns the coordinate
#width= total length of the horizontal axis
#height = total length of the vertical
#angle = rotation anticlockwise
#facecolor = no fill
#edge = perimeter
e = Ellipse(xy=(objects['x'][i], objects['y'][i]),
width=6*objects['a'][i],
height=6*objects['b'][i],
angle=objects['theta'][i] * 180. / np.pi)
e.set_facecolor('none')
e.set_edgecolor('yellow')
ax.add_artist(e)
# available fields
#these are other data types within objects that we can use
objects.dtype.names
###Output
_____no_output_____
###Markdown
Aperture Photometry Finally, we'll perform simple circular aperture photometry with a 3 pixel radius at the locations of the objects: * measurement of brightness in the aperture * We kept the parameters from the tutorial Important notes See image below: The information extracted is from the FITS header. Steps taken: 1. We originally went to the trouble of installing some packages from this website > * https://www.stsci.edu/hst/instrumentation/acs/data-analysis/zeropoints 2. We were under the impression it was necessary to do this and gathered information on the functions > * https://acstools.readthedocs.io/en/latest/acszpt.html 3. We noticed our detector and filter are not used in these newer packages > * there are explicit messages noting only specific filters and instruments would work. 4. We then found a website specific to the IR F105W filter > * this gave specific values for ABmag > * https://www.stsci.edu/hst/instrumentation/wfc3/data-analysis/photometric-calibration/ir-photometric-calibration#section-cc19dbfc-8f60-4870-8765-43810de39924 5. We needed to account for the correction from 0.2" to infinity according to STScI > * Another issue arose: the website did not have a value for F105W, only F105LB. We could not find any other figures with this data, so we approximated the correction as 0.877 > * https://www.stsci.edu/hst/instrumentation/acs/data-analysis/aperture-corrections 6. Since we were unable to use the function specific to STScI, we opted to take the values given and form the equation provided. 7. We noticed issues in the log10 function and ran a for loop to adjust any negative values. I'm not too comfortable with flux, but I assumed the negative was relative.
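In equation form, the conversion applied in the next cell (using the 26.2687 zeropoint and the assumed 0.877 aperture correction quoted above) is roughly $$m_{AB} = -2.5\,\log_{10}\!\left(\frac{|f|}{0.877}\right) + 26.2687$$ where the absolute value stands in for the sign workaround applied to negative fluxes in the loop below.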
###Code
# Compute initial flux
#flux, fluxerr, and flag are all 1d arrays with one entry per object
#this computes the sum of pixels withing the given radius
#our error or variance is Bkg (the noise) rms
flux, fluxerr, flag = sep.sum_circle(data_sub, objects['x'], objects['y'],
3.0, err=bkg.globalrms, gain=1.0)
#Check to see all objects have their flux values:
#for i in range(len(objects)):
#print("object {:d}: flux = {:f} +/- {:f}".format(i, flux[i], fluxerr[i]))
#Only print 10, as per the tutorial
for i in range(10):
print("object {:d}: flux = {:f} +/- {:f}".format(i, flux[i], fluxerr[i]))
###Output
_____no_output_____
###Markdown
Convert flux to AB magnitude * From objects = sep.extract > * flux: sum of member pixels in unconvolved data
###Code
#correct the Flux array values
correction_inf = 0.877
flux_inf = flux/correction_inf
#convert instrumental fluxes to physical fluxes and magnitudes
# using a for loop to account for negative values
for i in range(len(flux_inf)):
if flux_inf[i]> 0:
n = -2.5 * np.log10(flux_inf[i]) + 26.2687
flux_inf[i] = n
#print(n)
elif flux_inf[i]<0:
n = -2.5 * np.log10(-flux_inf[i]) + 26.2687
flux_inf[i] = n
#print(n)
#Check to see all objects have their flux values:
#for i in range(len(objects)):
#print("object {:d}: flux = {:f} +/- {:f}".format(i, flux[i], fluxerr[i]))
#Only print 10, as per the tutorial
for i in range(10):
print("object {:d}: flux = {:f} +/- {:f}".format(i, flux_inf[i], fluxerr[i]))
###Output
_____no_output_____
###Markdown
Differences in graphs * we noticed differences in our histograms * Depending on the method chosen, the histograms will differ; the method used for this one is described above.
###Code
#Create the histogram
histogram = plt.hist(flux_inf, bins='auto')
###Output
_____no_output_____
###Markdown
Making a 3 Color image * RGB Imaging * I wasn't able to figure out the formatting from lecture, but Joey was able to help compose an image using lupton_rgb
###Code
#call images, rename.
image_f105w = "hlsp_hudf12_hst_wfc3ir_udfmain_f105w_v1.0_drz.fits"
image_f125w = "hlsp_hudf12_hst_wfc3ir_udfmain_f125w_v1.0_drz.fits"
image_f160w = "hlsp_hudf12_hst_wfc3ir_udfmain_f160w_v1.0_drz.fits"
f105w_data = fits.getdata(image_f105w)
f125w_data = fits.getdata(image_f125w)
f160w_data = fits.getdata(image_f160w)
#Import necessary module
#astropy.visualization: visuals of data
# framework for plotting against matplotlib coordinates
# RGB color creation from separate images and custom plot styles
from astropy.visualization import make_lupton_rgb
# use make_lupton_rgb function
# return a red/green/blue color image from up to 3 images
#using parameters of function:
# min: intensity that should be mapped to black
# stretch: the linear stretch of the image
# Q the asinh softening parameter
# filename= to save image
rgb_image = make_lupton_rgb(f160w_data,f125w_data,f105w_data,Q=0.2,
stretch=0.005,filename="hduf_rgb_3-color_image.png")
plt.imshow(rgb_image, interpolation='nearest', origin='lower')
plt.colorbar()
#zooming in the data, shown to me by joey as well
#adjust the values of the images
#not really sure how this works out
#we have the data[:,:]
#I assume we take the 2d array and adjust the portions observed
#data[10,10] becomes data[2:5,2:5]
# meaning start at index 2 and stop before 5 along both axes; this crops the array, nothing is rotated
rgb = make_lupton_rgb(f160w_data[1000:2500,1000:2500],
f125w_data[1000:2500,1000:2500],
f105w_data[1000:2500, 1000:2500],
Q=0.02, stretch=0.005,
filename='hduf_rgb_3-color(zoomed-in)_image.png')
plt.imshow(rgb, origin='lower')
plt.colorbar()
###Output
_____no_output_____ |
nbs/train_language_model/ULMFit.ipynb | ###Markdown
Setup
###Code
!pip install sentencepiece
!pip install fastai
!pip install nbd-colab
from nbd_colab import *
drive_setup()
home_dir()
repo_name = 'bonltk'
change_dir(f'/content/drive/My Drive/Notebooks/Esukhia/{repo_name}')
###Output
Requirement already satisfied: sentencepiece in /usr/local/lib/python3.6/dist-packages (0.1.85)
Requirement already satisfied: fastai in /usr/local/lib/python3.6/dist-packages (1.0.60)
Requirement already satisfied: numexpr in /usr/local/lib/python3.6/dist-packages (from fastai) (2.7.1)
Requirement already satisfied: torch>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from fastai) (1.4.0)
Requirement already satisfied: bottleneck in /usr/local/lib/python3.6/dist-packages (from fastai) (1.3.2)
Requirement already satisfied: fastprogress>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from fastai) (0.2.3)
Requirement already satisfied: numpy>=1.15 in /usr/local/lib/python3.6/dist-packages (from fastai) (1.18.2)
Requirement already satisfied: nvidia-ml-py3 in /usr/local/lib/python3.6/dist-packages (from fastai) (7.352.0)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from fastai) (3.13)
Requirement already satisfied: Pillow in /usr/local/lib/python3.6/dist-packages (from fastai) (7.0.0)
Requirement already satisfied: spacy>=2.0.18 in /usr/local/lib/python3.6/dist-packages (from fastai) (2.2.4)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from fastai) (1.4.1)
Requirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from fastai) (1.0.3)
Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from fastai) (20.3)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from fastai) (2.21.0)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from fastai) (3.2.1)
Requirement already satisfied: torchvision in /usr/local/lib/python3.6/dist-packages (from fastai) (0.5.0)
Requirement already satisfied: dataclasses; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from fastai) (0.7)
Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.6/dist-packages (from fastai) (4.6.3)
Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from spacy>=2.0.18->fastai) (2.0.3)
Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.6/dist-packages (from spacy>=2.0.18->fastai) (1.0.2)
Requirement already satisfied: plac<1.2.0,>=0.9.6 in /usr/local/lib/python3.6/dist-packages (from spacy>=2.0.18->fastai) (1.1.3)
Requirement already satisfied: srsly<1.1.0,>=1.0.2 in /usr/local/lib/python3.6/dist-packages (from spacy>=2.0.18->fastai) (1.0.2)
Requirement already satisfied: catalogue<1.1.0,>=0.0.7 in /usr/local/lib/python3.6/dist-packages (from spacy>=2.0.18->fastai) (1.0.0)
Requirement already satisfied: tqdm<5.0.0,>=4.38.0 in /usr/local/lib/python3.6/dist-packages (from spacy>=2.0.18->fastai) (4.38.0)
Requirement already satisfied: blis<0.5.0,>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from spacy>=2.0.18->fastai) (0.4.1)
Requirement already satisfied: preshed<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from spacy>=2.0.18->fastai) (3.0.2)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from spacy>=2.0.18->fastai) (46.1.3)
Requirement already satisfied: wasabi<1.1.0,>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from spacy>=2.0.18->fastai) (0.6.0)
Requirement already satisfied: thinc==7.4.0 in /usr/local/lib/python3.6/dist-packages (from spacy>=2.0.18->fastai) (7.4.0)
Requirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas->fastai) (2.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->fastai) (2018.9)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from packaging->fastai) (1.12.0)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->fastai) (2.4.7)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->fastai) (2.8)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->fastai) (2020.4.5.1)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->fastai) (3.0.4)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->fastai) (1.24.3)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->fastai) (0.10.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->fastai) (1.2.0)
Requirement already satisfied: importlib-metadata>=0.20; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from catalogue<1.1.0,>=0.0.7->spacy>=2.0.18->fastai) (1.6.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata>=0.20; python_version < "3.8"->catalogue<1.1.0,>=0.0.7->spacy>=2.0.18->fastai) (3.1.0)
Requirement already satisfied: nbd-colab in /usr/local/lib/python3.6/dist-packages (0.0.10)
Requirement already satisfied: fastcore in /usr/local/lib/python3.6/dist-packages (from nbd-colab) (0.1.16)
Requirement already satisfied: nbdev in /usr/local/lib/python3.6/dist-packages (from nbd-colab) (0.2.17)
Requirement already satisfied: dataclasses>='0.7'; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from fastcore->nbd-colab) (0.7)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from fastcore->nbd-colab) (1.18.2)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from nbdev->nbd-colab) (3.13)
Requirement already satisfied: nbconvert>=5.6.1 in /usr/local/lib/python3.6/dist-packages (from nbdev->nbd-colab) (5.6.1)
Requirement already satisfied: nbformat>=4.4.0 in /usr/local/lib/python3.6/dist-packages (from nbdev->nbd-colab) (5.0.5)
Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from nbdev->nbd-colab) (20.3)
Requirement already satisfied: fastscript in /usr/local/lib/python3.6/dist-packages (from nbdev->nbd-colab) (0.1.4)
Requirement already satisfied: jinja2>=2.4 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev->nbd-colab) (2.11.1)
Requirement already satisfied: defusedxml in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev->nbd-colab) (0.6.0)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev->nbd-colab) (4.6.3)
Requirement already satisfied: mistune<2,>=0.8.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev->nbd-colab) (0.8.4)
Requirement already satisfied: testpath in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev->nbd-colab) (0.4.4)
Requirement already satisfied: bleach in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev->nbd-colab) (3.1.4)
Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev->nbd-colab) (4.3.3)
Requirement already satisfied: pygments in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev->nbd-colab) (2.1.3)
Requirement already satisfied: pandocfilters>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev->nbd-colab) (1.4.2)
Requirement already satisfied: entrypoints>=0.2.2 in /usr/local/lib/python3.6/dist-packages (from nbconvert>=5.6.1->nbdev->nbd-colab) (0.3)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.6/dist-packages (from nbformat>=4.4.0->nbdev->nbd-colab) (0.2.0)
Requirement already satisfied: jsonschema!=2.5.0,>=2.4 in /usr/local/lib/python3.6/dist-packages (from nbformat>=4.4.0->nbdev->nbd-colab) (2.6.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from packaging->nbdev->nbd-colab) (1.12.0)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->nbdev->nbd-colab) (2.4.7)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2>=2.4->nbconvert>=5.6.1->nbdev->nbd-colab) (1.1.1)
Requirement already satisfied: webencodings in /usr/local/lib/python3.6/dist-packages (from bleach->nbconvert>=5.6.1->nbdev->nbd-colab) (0.5.1)
Requirement already satisfied: decorator in /usr/local/lib/python3.6/dist-packages (from traitlets>=4.2->nbconvert>=5.6.1->nbdev->nbd-colab) (4.4.2)
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly
Enter your authorization code:
··········
###Markdown
Imports
###Code
from pathlib import Path
from fastai.basic_data import *
from fastai.text import *
import sentencepiece as spm
###Output
_____no_output_____
###Markdown
Config
###Code
# Paths
corpus_path = Path('.bonltk/data/corpora/base')
tokenizer_models_path = Path('.bonltk/models/tokenizers')
lm_path = Path('.bonltk/models/lm')
lm_path.mkdir(exist_ok=True)
seq_len = 150
seed = 43
valid_pct = 0.1
###Output
_____no_output_____
###Markdown
Model
###Code
class BoyigTokenizer(BaseTokenizer):
def __init__(self, lang):
self.lang = lang
self.sp = spm.SentencePieceProcessor()
self.sp.Load(str(tokenizer_models_path/f'classical-unigram.model'))
self.vocab = [self.sp.IdToPiece(int(i)) for i in range(30000)]
def tokenizer(self, t):
return self.sp.EncodeAsPieces(t)
tok = BoyigTokenizer('bo')
boyig_cls_vocab = Vocab(tok.vocab)
tokenizer = Tokenizer(tok_func=BoyigTokenizer, lang='bo')
#data_lm = TextLMDataBunch.from_folder(path=corpus_path, tokenizer=tokenizer, vocab=boyig_cls_vocab)
#data_lm.save()
data_lm = load_data(corpus_path, 'data_save.pkl')
data_lm.show_batch()
learn = language_model_learner(data_lm, AWD_LSTM, pretrained=False)
learn.lr_find()
learn.recorder.plot(show_moms=True)
learn.fit_one_cycle(2, 7e-3, moms=(0.8,0.7), callbacks=[callbacks.SaveModelCallback(learn, every='improvement', monitor='accuracy', name='model')])
learn.save('2epoch')
learn = learn.load('last-epoch')
learn.fit_one_cycle(8, 7e-3, moms=(0.8,0.7), callbacks=[callbacks.SaveModelCallback(learn, every='improvement', monitor='accuracy', name='model')])
learn.save(f'last-epoch')
learn.save_encoder('finetuned')
TEXT = '༄༅། །འདུལ་བ་ང་བཞུགས་སོ།།'
N_WORDS = 40
N_SENTENCES = 2
preds = [learn.predict(TEXT, N_WORDS, temperature=0.75)
for _ in range(N_SENTENCES)]
print("\n".join([pred.replace('▁', '').replace('xxunk', '').replace(' ', '') for pred in preds]))
list(cls_ug_tok(['༄༅། །འདུལ་བ་ང་བཞུགས་སོ།།']))
###Output
_____no_output_____ |
embedding/text-test.ipynb | ###Markdown
Datasets and Dataloaders
###Code
def create_report_examples(path):
raw_reports = np.load(path)
dirty_reports = [report['body'] for report in raw_reports]
clean_reports, _ = tu.clean_report(dirty_reports, clean=1) # first pass removes \n's and weird characters
tokenised_reports, report_vocab = tu.clean_report(clean_reports, clean=2) # second pass tokenises and builds vocab
vocab, embeddings = tu.load_glove('/home/rohanmirchandani/glove/glove.6B.50d.w2vformat.txt', report_vocab, 50)
vocab['<SOS>'] = embeddings.shape[0]
embeddings = np.vstack((embeddings, np.zeros((1, 50))))
vocab['<EOS>'] = embeddings.shape[0]
embeddings = np.vstack((embeddings, np.ones((1, 50))))
vocab['<UNK>'] = embeddings.shape[0]
embeddings = np.vstack((embeddings, -np.ones((1, 50))))
for i, tokens in enumerate(tokenised_reports): # should multithread this at some point
tokens = ['<SOS>'] + tokens + ['<EOS>']
length = len(tokens)
if length > 300 or length < 10:
continue
vecs = np.array([[vocab[token] if token in vocab.keys() else vocab['<UNK>'] for token in tokens]]).transpose()
print(vecs.shape)
padding_size = 300 - vecs.shape[0]
padding = np.zeros((padding_size, 1))
vecs = np.vstack((vecs, padding))
data = {'tokens': tokens, 'vectors': vecs}
name = "example_{}".format(i)
np.save(os.path.join('/home/rohanmirchandani/maxwell-pt-test/examples/', name), data)
create_report_examples(path='/home/rohanmirchandani/maxwell-pt-test/points.npy')
###Output
_____no_output_____
###Markdown
LSTM Testing
###Code
class EncoderGRU(nn.Module):
def __init__(self, input_dim, hidden_dim):
super(EncoderGRU, self).__init__()
self.input_dim = input_dim
self.hidden_dim = hidden_dim
self.gru = nn.GRU(self.input_dim, self.hidden_dim)
def forward(self, x, hidden):
output, hidden = self.gru(x, hidden)
return output, hidden
def init_hidden(self, bs):
result = Variable(torch.zeros(1, bs, self.hidden_dim))
return result
class DecoderGRU(nn.Module):
def __init__(self, output_dim, hidden_dim):
super(DecoderGRU, self).__init__()
self.output_dim = output_dim
self.hidden_dim = hidden_dim
self.gru = nn.GRU(self.hidden_dim, self.hidden_dim)
self.out = nn.Linear(self.hidden_dim, self.output_dim)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, x, hidden):
output, hidden = self.gru(x, hidden)
output = self.out(output[0])
output = self.softmax(output)
return output, hidden
def init_hidden(self, bs):
result = Variable(torch.zeros(1, bs, self.hidden_dim))
return result
dataloader = tu.create_dataloader('/home/rohanmirchandani/maxwell-pt-test/examples/', batch_size=1)
iterator = iter(dataloader)
tokens, vectors = next(iterator)
print(len(tokens))
print(vectors.shape)
E = EncoderGRU(50, 50)
D = DecoderGRU(50, 50)
e_hidden = E.init_hidden(bs=vectors.shape[1])
d_hidden = D.init_hidden(bs=vectors.shape[1])
inputs = Variable(vectors.float())
output, z_hidden = E(inputs, e_hidden)
output, final_hidden = D(z_hidden, d_hidden)
output
###Output
_____no_output_____
###Markdown
Training
###Code
epochs = 1
criterion = nn.MSELoss()
embedding_dim = 50
E = nn.DataParallel(EncoderGRU(50, 50).cuda())
D = nn.DataParallel(DecoderGRU(50, 50).cuda())
optm = optim.Adam(list(E.parameters()) + list(D.parameters()))
dataloader = tu.create_dataloader('/home/rohanmirchandani/maxwell-pt-test/examples/', batch_size=1)
for epoch in range(epochs):
for tokens, vectors in dataloader:
e_hidden = Variable(torch.zeros(1, vectors.shape[1], embedding_dim)).cuda()
d_hidden = Variable(torch.zeros(1, vectors.shape[1], embedding_dim)).cuda()
optm.zero_grad()
inputs = Variable(vectors.float()).cuda()
z_output, e_hidden = E(inputs, e_hidden)
outputs, d_hidden = D(e_hidden, d_hidden)
loss = criterion(outputs, inputs)
print(loss)
loss.backward()
optm.step()
###Output
_____no_output_____ |
P2/Proyecto2-RL.ipynb | ###Markdown
Project 2 Used Vehicle Price Prediction Introduction- 1.2 Million listings scraped from TrueCar.com - Price, Mileage, Make, Model dataset from Kaggle: [data](https://www.kaggle.com/jpayne/852k-used-car-listings)- Each observation represents the price of a used car
###Code
import os
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import datetime
import joblib
from random import sample
from sklearn.linear_model import LinearRegression as LR
from sklearn.metrics import mean_squared_error as mse
from sklearn.model_selection import train_test_split as tts
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
data = pd.read_csv('https://raw.githubusercontent.com/albahnsen/AdvancedMethodsDataAnalysisClass/master/datasets/dataTrain_carListings.zip')
data[['State','Make','Model']] = data[['State','Make','Model']].astype('category')
data.head()
data_dummy = pd.get_dummies(data)
data.info(), data_dummy.info()
data.shape
data_dummy.shape
data.Price.describe()
data.plot(kind='scatter', y='Price', x='Year')
data.plot(kind='scatter', y='Price', x='Mileage')
data.columns
###Output
_____no_output_____
###Markdown
Exercise P2.1 (50%) Develop a machine learning model that predicts the price of the car using as input ['Year', 'Mileage', 'State', 'Make', 'Model'] Evaluation: - 25% - Performance of the models using a manually implemented K-Fold (K=10) cross-validation - 25% - Notebook explaining the process for selecting the best model. You must specify how the calibration of each of the parameters is done and how these change the performance of the model. It is expected that a clear comparison will be made of all implemented models. Present the most relevant conclusions about the whole process.
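As a schematic reference (not the exact code used below), a manually implemented K-Fold split with K=10 can be organized like this; `X_tr`, `y_tr`, and `model` are placeholder names rather than objects defined in this notebook:

```python
import numpy as np

def manual_kfold_indices(n_samples, k=10):
    """Yield (train_idx, val_idx) index pairs for a manual K-Fold split."""
    fold_size = n_samples // k
    for i in range(k):
        val_idx = np.arange(i * fold_size, (i + 1) * fold_size)   # i-th block is the validation fold
        train_idx = np.setdiff1d(np.arange(n_samples), val_idx)   # everything else is used for training
        yield train_idx, val_idx

# Usage sketch (X_tr, y_tr are placeholders; LR and mse come from the imports cell above):
# for train_idx, val_idx in manual_kfold_indices(len(X_tr), k=10):
#     model = LR().fit(X_tr[train_idx], y_tr[train_idx])
#     rmse = mse(y_tr[val_idx], model.predict(X_tr[val_idx])) ** 0.5
```

The cells below apply the same idea, keeping the model from the best-scoring fold for each of the 20 subsampled training sets.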
###Code
y = data_dummy.Price.values
X = data_dummy.drop('Price', axis=1).values
X_train, X_test, y_train, y_test = tts(X, y, test_size=0.8, random_state=123)
del data
del data_dummy
cants = 50000 # number of samples drawn from the training data for each model
cantmdl = 20 # number of models to train (each with 10-fold CV)
mdls = list()
rmses = list()
for j in range(cantmdl): #Models specified in cantmdl var
Xi = sample(range(1, len(X_train)), cants)
X_train1 = X_train[Xi]
y_train1 = y_train[Xi]
bestrmse = 9999999999
bestmdl = 0
kf = 10
ks = np.round(cants/kf,0).astype(np.int)
for i in range(kf): #CV with K-Fold defined in KF var
X_train2 = pd.DataFrame(X_train1)
y_train2 = pd.DataFrame(y_train1)
X_train2 = X_train2[np.invert(X_train2.index.isin(np.arange(i*ks,(i+1)*ks)))]
y_train2 = y_train2[np.invert(y_train2.index.isin(np.arange(i*ks,(i+1)*ks)))]
X_test2 = X_train1[np.arange(i*ks,(i+1)*ks)]
y_test2 = y_train1[np.arange(i*ks,(i+1)*ks)]
mdl = LR()
mdl.fit(X_train2,y_train2)
rmse = mse(mdl.predict(X_test2),y_test2)**0.5
if rmse < bestrmse:
bestrmse = rmse;
bestmdl = mdl;
mdls.append(bestmdl)
rmses.append(bestrmse)
score = np.zeros(cantmdl)
for i in range(cantmdl):
score[i]=(mse(mdls[i].predict(X_test),y_test)**0.5)
alpha = (1 - score) / (1 - score).sum()
alpha = (1/alpha)/sum(1/alpha)
pred = pd.DataFrame(0, index=np.arange(len(y_test)), columns=range(cantmdl))
for i in range(cantmdl):
pred[i] = (pd.DataFrame(mdls[i].predict(X_test)))
#pred[cantmdl] = pred.iloc[:,0:cantmdl].mean(axis = 1, skipna = True)
mse((pred * alpha).sum(axis=1),y_test)**0.5
joblib.dump(mdls, 'model_lr.pkl', compress=3)
pd.DataFrame(alpha).to_csv('alpha.csv')
from numpy import zeros as zr
Year = 2019
Mileage = 5000
State = 'OK'
Make = 'Audi'
Model = 'A8'
datax=pd.DataFrame({'Year':[Year],
'Mileage':[Mileage],
'State':[State],
'Make':[Make],
'Model':[Model]})
states = ['AK','AL','AR','AZ','CA','CO','CT','DC','DE','FL','GA','HI','IA','ID','IL','IN','KS','KY','LA','MA','MD','ME','MI','MN','MO','MS','MT','NC','ND','NE','NH','NJ','NM','NV','NY','OH','OK','OR','PA','RI','SC','SD','TN','TX','UT','VA','VT','WA','WI','WV','WY']
makes = ['Acura','Audi','BMW','Bentley','Buick','Cadillac','Chevrolet','Chrysler','Dodge','FIAT','Ford','Freightliner','GMC','Honda','Hyundai','INFINITI','Jaguar','Jeep','Kia','Land','Lexus','Lincoln','MINI','Mazda','Mercedes-Benz','Mercury','Mitsubishi','Nissan','Pontiac','Porsche','Ram','Scion','Subaru','Suzuki','Tesla','Toyota','Volkswagen','Volvo']
models = ['1','15002WD','15004WD','1500Laramie','1500Tradesman','200LX','200Limited','200S','200Touring','25002WD','25004WD','3','300300C','300300S','3004dr','300Base','300Limited','300Touring','35004WD','350Z2dr','4Runner2WD','4Runner4WD','4Runner4dr','4RunnerLimited','4RunnerRWD','4RunnerSR5','4RunnerTrail','5','500Pop','6','7','911','9112dr','A34dr','A44dr','A64dr','A8','AcadiaAWD','AcadiaFWD','Accent4dr','Accord','AccordEX','AccordEX-L','AccordLX','AccordLX-S','AccordSE','Altima4dr','Armada2WD','Armada4WD','Avalanche2WD','Avalanche4WD','Avalon4dr','AvalonLimited','AvalonTouring','AvalonXLE','Azera4dr','Boxster2dr','C-Class4dr','C-ClassC','C-ClassC300','C-ClassC350','C702dr','CC4dr','CR-V2WD','CR-V4WD','CR-VEX','CR-VEX-L','CR-VLX','CR-VSE','CR-ZEX','CT','CTCT','CTS','CTS-V','CTS4dr','CX-7FWD','CX-9AWD','CX-9FWD','CX-9Grand','CX-9Touring','Caliber4dr','Camaro2dr','CamaroConvertible','CamaroCoupe','Camry','Camry4dr','CamryBase','CamryL','CamryLE','CamrySE','CamryXLE','Canyon2WD','Canyon4WD','CanyonCrew','CanyonExtended','CayenneAWD','Cayman2dr','Challenger2dr','ChallengerR/T','Charger4dr','ChargerSE','ChargerSXT','CherokeeLimited','CherokeeSport','Civic','CivicEX','CivicEX-L','CivicLX','CivicSi','Cobalt2dr','Cobalt4dr','Colorado2WD','Colorado4WD','ColoradoCrew','ColoradoExtended','Compass4WD','CompassLatitude','CompassLimited','CompassSport','Continental','Cooper','Corolla4dr','CorollaL','CorollaLE','CorollaS','Corvette2dr','CorvetteConvertible','CorvetteCoupe','CruzeLT','CruzeSedan','DTS4dr','Dakota2WD','Dakota4WD','Durango2WD','Durango4dr','DurangoAWD','DurangoSXT','E-ClassE','E-ClassE320','E-ClassE350','ES','ESES','Eclipse3dr','Econoline','EdgeLimited','EdgeSE','EdgeSEL','EdgeSport','Elantra','Elantra4dr','ElantraLimited','Element2WD','Element4WD','EnclaveConvenience','EnclaveLeather','EnclavePremium','Eos2dr','EquinoxAWD','EquinoxFWD','Escalade','Escalade2WD','Escalade4dr','EscaladeAWD','Escape4WD','Escape4dr','EscapeFWD','EscapeLImited','EscapeLimited','EscapeS','EscapeSE','EscapeXLT','Excursion137"','Expedition','Expedition2WD','Expedition4WD','ExpeditionLimited','ExpeditionXLT','Explorer','Explorer4WD','Explorer4dr','ExplorerBase','ExplorerEddie','ExplorerFWD','ExplorerLimited','ExplorerXLT','Express','F-1502WD','F-1504WD','F-150FX2','F-150FX4','F-150King','F-150Lariat','F-150Limited','F-150Platinum','F-150STX','F-150SuperCrew','F-150XL','F-150XLT','F-250King','F-250Lariat','F-250XL','F-250XLT','F-350King','F-350Lariat','F-350XL','F-350XLT','FJ','FX35AWD','FiestaS','FiestaSE','FitSport','FlexLimited','FlexSE','FlexSEL','Focus4dr','Focus5dr','FocusS','FocusSE','FocusSEL','FocusST','FocusTitanium','Forester2.5X','Forester4dr','Forte','ForteEX','ForteLX','ForteSX','Frontier','Frontier2WD','Frontier4WD','Fusion4dr','FusionHybrid','FusionS','FusionSE','FusionSEL','G35','G37','G64dr','GLI4dr','GS','GSGS','GTI2dr','GTI4dr','GX','GXGX','Galant4dr','Genesis','Golf','Grand','Highlander','Highlander4WD','Highlander4dr','HighlanderBase','HighlanderFWD','HighlanderLimited','HighlanderSE','IS','ISIS','Impala4dr','ImpalaLS','ImpalaLT','Impreza','Impreza2.0i','ImprezaSport','Jetta','JourneyAWD','JourneyFWD','JourneySXT','LS','LSLS','LX','LXLX','LaCrosse4dr','LaCrosseAWD','LaCrosseFWD','Lancer4dr','Land','Legacy','Legacy2.5i','Legacy3.6R','Liberty4WD','LibertyLimited','LibertySport','Lucerne4dr','M-ClassML350','MDX4WD','MDXAWD','MKXAWD','MKXFWD','MKZ4dr','MX5','Malibu','Malibu1LT','Malibu4dr','MalibuLS','MalibuLT','Matrix5dr','Maxima4dr','Mazda34dr','Mazda35dr','Mazda64dr','Milan4dr','Model','Monte'
,'Murano2WD','MuranoAWD','MuranoS','Mustang2dr','MustangBase','MustangDeluxe','MustangGT','MustangPremium','MustangShelby','Navigator','Navigator2WD','Navigator4WD','Navigator4dr','New','OdysseyEX','OdysseyEX-L','OdysseyLX','OdysseyTouring','Optima4dr','OptimaEX','OptimaLX','OptimaSX','Outback2.5i','Outback3.6R','Outlander','Outlander2WD','Outlander4WD','PT','PacificaLimited','PacificaTouring','Passat','Passat4dr','Pathfinder2WD','Pathfinder4WD','PathfinderS','PathfinderSE','Patriot4WD','PatriotLatitude','PatriotLimited','PatriotSport','Pilot2WD','Pilot4WD','PilotEX','PilotEX-L','PilotLX','PilotSE','PilotTouring','Prius','Prius5dr','PriusBase','PriusFive','PriusFour','PriusOne','PriusThree','PriusTwo','Q5quattro','Q7quattro','QX562WD','QX564WD','Quest4dr','RAV4','RAV44WD','RAV44dr','RAV4Base','RAV4FWD','RAV4LE','RAV4Limited','RAV4Sport','RAV4XLE','RDXAWD','RDXFWD','RX','RX-84dr','RXRX','Ram','Ranger2WD','Ranger4WD','RangerSuperCab','Regal4dr','RegalGS','RegalPremium','RegalTurbo','RidgelineRTL','RidgelineSport','RioLX','RogueFWD','Rover','S2000Manual','S44dr','S60T5','S804dr','SC','SL-ClassSL500','SLK-ClassSLK350','SRXLuxury','STS4dr','Santa','Savana','Sedona4dr','SedonaEX','SedonaLX','Sentra4dr','Sequoia4WD','Sequoia4dr','SequoiaLimited','SequoiaPlatinum','SequoiaSR5','Sienna5dr','SiennaLE','SiennaLimited','SiennaSE','SiennaXLE','Sierra','Silverado','Sonata4dr','SonataLimited','SonataSE','SonicHatch','SonicSedan','Sorento2WD','SorentoEX','SorentoLX','SorentoSX','Soul+','SoulBase','Sportage2WD','SportageAWD','SportageEX','SportageLX','SportageSX','Sprinter','Suburban2WD','Suburban4WD','Suburban4dr','Super','TL4dr','TLAutomatic','TSXAutomatic','TT2dr','Tacoma2WD','Tacoma4WD','TacomaBase','TacomaPreRunner','Tahoe2WD','Tahoe4WD','Tahoe4dr','TahoeLS','TahoeLT','Taurus4dr','TaurusLimited','TaurusSE','TaurusSEL','TaurusSHO','TerrainAWD','TerrainFWD','Tiguan2WD','TiguanS','TiguanSE','TiguanSEL','Titan','Titan2WD','Titan4WD','Touareg4dr','Town','Transit','TraverseAWD','TraverseFWD','TucsonAWD','TucsonFWD','TucsonLimited','Tundra','Tundra2WD','Tundra4WD','TundraBase','TundraLimited','TundraSR5','VeracruzAWD','VeracruzFWD','Versa4dr','Versa5dr','Vibe4dr','WRXBase','WRXLimited','WRXPremium','WRXSTI','Wrangler','Wrangler2dr','Wrangler4WD','WranglerRubicon','WranglerSahara','WranglerSport','WranglerX','X1xDrive28i','X3AWD','X3xDrive28i','X5AWD','X5xDrive35i','XC60AWD','XC60FWD','XC60T6','XC704dr','XC90AWD','XC90FWD','XC90T6','XF4dr','XJ4dr','XK2dr','Xterra2WD','Xterra4WD','Xterra4dr','Yaris','Yaris4dr','YarisBase','YarisLE','Yukon','Yukon2WD','Yukon4WD','Yukon4dr','tC2dr','xB5dr','xD5dr']
for state in states:
datax['State_' + state] = datax.State.str.contains(state).astype(int)
for make in makes:
datax['Make_' + make] = datax.Make.str.contains(make).astype(int)
for model in models:
datax['Model_' + model] = datax.Model.str.contains(model).astype(int)
datax.drop(['State','Make','Model'], axis=1, inplace=True)
reg = mdls
alpha2 = pd.read_csv('alpha.csv', index_col=0).values  # index_col=0 drops the index column written by to_csv
cant2 = len(alpha2)
pred2 = zr(cant2)
for i in range(cant2):
pred2[i] = (reg[i].predict(datax))*alpha2[i]
p1 = np.sum(pred2)
p1
np.sum(pred2)
###Output
_____no_output_____
###Markdown
Exercise P2.2 (50%) Create an API of the model. Example: Evaluation: - 40% - API hosted on a cloud service - 10% - Show screenshots of the model doing the predictions on the local machine
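One possible shape for such an API — a minimal Flask sketch, not the deployed implementation. It assumes the `model_lr.pkl` and `alpha.csv` artifacts saved in P2.1, and a hypothetical `feature_lists` module exposing the same `states`, `makes`, and `models` lists used in the prediction cell above:

```python
# Minimal sketch only (assumptions: Flask is installed; model_lr.pkl and alpha.csv come from P2.1;
# feature_lists is a hypothetical module holding the states/makes/models lists from the cell above).
import joblib
import numpy as np
import pandas as pd
from flask import Flask, request, jsonify
from feature_lists import states, makes, models   # hypothetical module, not defined in this notebook

app = Flask(__name__)
ensemble = joblib.load('model_lr.pkl')                           # list of LR models saved in P2.1
weights = pd.read_csv('alpha.csv', index_col=0).values.ravel()   # ensemble weights saved in P2.1

def build_features(year, mileage, state, make, model):
    # Rebuild the same dummy columns used for training: Year, Mileage, State_*, Make_*, Model_*.
    row = {'Year': year, 'Mileage': mileage}
    row.update({'State_' + s: int(s == state) for s in states})
    row.update({'Make_' + m: int(m == make) for m in makes})
    row.update({'Model_' + m: int(m == model) for m in models})
    return pd.DataFrame([row])

@app.route('/predict', methods=['GET'])
def predict():
    datax = build_features(int(request.args['Year']),
                           int(request.args['Mileage']),
                           request.args['State'],
                           request.args['Make'],
                           request.args['Model'])
    preds = np.array([m.predict(datax)[0] for m in ensemble])
    return jsonify({'price': float(np.sum(preds * weights))})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```

A request like `/predict?Year=2019&Mileage=5000&State=OK&Make=Audi&Model=A8` would then return the weighted ensemble prediction, mirroring the local prediction cell above.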
###Code
joblib.dump(mdls, 'model_deployment/model_lr.pkl', compress=3)  # persist the trained models (mdls) from P2.1 for the API
###Output
_____no_output_____ |
huanghaiguang_code/ex1-linear regression/.ipynb_checkpoints/1.linear_regreesion_v1-checkpoint.ipynb | ###Markdown
linear regression Note: the Python version is 3.6; to install TensorFlow: pip install tensorflow
###Code
import pandas as pd
import seaborn as sns
sns.set(context="notebook", style="whitegrid", palette="dark")
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
df = pd.read_csv('ex1data1.txt', names=['population', 'profit'])  # read the data and assign column names
df.head()  # look at the first five rows
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 97 entries, 0 to 96
Data columns (total 2 columns):
population 97 non-null float64
profit 97 non-null float64
dtypes: float64(2)
memory usage: 1.6 KB
###Markdown
*** Have a look at the raw data
###Code
sns.lmplot('population', 'profit', df, size=6, fit_reg=False)
plt.show()
def get_X(df):  # read the features
# """
# use concat to add an intercept feature and avoid side effects
# not efficient for big datasets though
# """
ones = pd.DataFrame({'ones': np.ones(len(df))})  # ones is an m-row, 1-column dataframe
data = pd.concat([ones, df], axis=1)  # merge the data column-wise
return data.iloc[:, :-1].as_matrix()  # this returns an ndarray, not a matrix
def get_y(df):  # read the labels
# '''assume the last column is the target'''
return np.array(df.iloc[:, -1])  # df.iloc[:, -1] is the last column of df
def normalize_feature(df):
# """Applies function along input axis(default 0) of DataFrame."""
return df.apply(lambda column: (column - column.mean()) / column.std())  # feature scaling
###Output
_____no_output_____
###Markdown
The multivariate hypothesis h is written as: \\[{{h}_{\theta }}\left( x \right)={{\theta }_{0}}+{{\theta }_{1}}{{x}_{1}}+{{\theta }_{2}}{{x}_{2}}+...+{{\theta }_{n}}{{x}_{n}}\\] This formula has n+1 parameters and n variables. To simplify it a little, introduce ${{x}_{0}}=1$, so the formula becomes: \\[{{h}_{\theta }}\left( x \right)={{\theta }_{0}}{{x}_{0}}+{{\theta }_{1}}{{x}_{1}}+{{\theta }_{2}}{{x}_{2}}+...+{{\theta }_{n}}{{x}_{n}}\\] Now the model parameters form an (n+1)-dimensional vector, every training example is also an (n+1)-dimensional vector, and the feature matrix X has dimensions m*(n+1). The formula can therefore be simplified to ${{h}_{\theta }}\left( x \right)={{\theta }^{T}}X$, where the superscript T denotes the matrix transpose.
###Code
def linear_regression(X_data, y_data, alpha, epoch, optimizer=tf.train.GradientDescentOptimizer):  # this function was written by Lucas Shen
# placeholder for graph input
X = tf.placeholder(tf.float32, shape=X_data.shape)
y = tf.placeholder(tf.float32, shape=y_data.shape)
# construct the graph
with tf.variable_scope('linear-regression'):
W = tf.get_variable("weights",
(X_data.shape[1], 1),
initializer=tf.constant_initializer()) # n*1
y_pred = tf.matmul(X, W) # m*n @ n*1 -> m*1
loss = 1 / (2 * len(X_data)) * tf.matmul((y_pred - y), (y_pred - y), transpose_a=True) # (m*1).T @ m*1 = 1*1
opt = optimizer(learning_rate=alpha)
opt_operation = opt.minimize(loss)
# run the session
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
loss_data = []
for i in range(epoch):
_, loss_val, W_val = sess.run([opt_operation, loss, W], feed_dict={X: X_data, y: y_data})
loss_data.append(loss_val[0, 0]) # because every loss_val is 1*1 ndarray
if len(loss_data) > 1 and np.abs(loss_data[-1] - loss_data[-2]) < 10 ** -9: # early break when it's converged
# print('Converged at epoch {}'.format(i))
break
# clear the graph
tf.reset_default_graph()
return {'loss': loss_data, 'parameters': W_val} # just want to return in row vector format
data = pd.read_csv('ex1data1.txt', names=['population', 'profit'])  # read the data and assign column names
data.head()  # look at the first 5 rows
###Output
_____no_output_____
###Markdown
Compute the cost function $$J\left( \theta \right)=\frac{1}{2m}\sum\limits_{i=1}^{m}{{{\left( {{h}_{\theta }}\left( {{x}^{(i)}} \right)-{{y}^{(i)}} \right)}^{2}}}$$ where: \\[{{h}_{\theta }}\left( x \right)={{\theta }^{T}}X={{\theta }_{0}}{{x}_{0}}+{{\theta }_{1}}{{x}_{1}}+{{\theta }_{2}}{{x}_{2}}+...+{{\theta }_{n}}{{x}_{n}}\\]
###Code
X = get_X(data)
print(X.shape, type(X))
y = get_y(data)
print(y.shape, type(y))
# check the data dimensions
theta = np.zeros(X.shape[1])  # X.shape[1]=2, the number of features n
def lr_cost(theta, X, y):
# """
# X: R(m*n), m = number of samples, n = number of features
# y: R(m)
# theta : R(n), the linear regression parameters
# """
m = X.shape[0]  # m is the number of samples
inner = X @ theta - y  # R(m*1); X @ theta is equivalent to X.dot(theta)
# 1*m @ m*1 = 1*1 in matrix multiplication
# but you know numpy didn't do transpose in 1d array, so here is just a
# vector inner product to itselves
square_sum = inner.T @ inner
cost = square_sum / (2 * m)
return cost
lr_cost(theta, X, y)  # compute the cost for the initial theta
###Output
_____no_output_____
###Markdown
batch gradient descent$${{\theta }_{j}}:={{\theta }_{j}}-\alpha \frac{\partial }{\partial {{\theta }_{j}}}J\left( \theta \right)$$
###Code
def gradient(theta, X, y):
m = X.shape[0]
inner = X.T @ (X @ theta - y) # (m,n).T @ (m, 1) -> (n, 1); X @ theta is equivalent to X.dot(theta)
return inner / m
def batch_gradient_decent(theta, X, y, epoch, alpha=0.01):
# fit linear regression, return the parameters and the cost history
# epoch: number of batch-gradient-descent iterations
# """
cost_data = [lr_cost(theta, X, y)]
_theta = theta.copy()  # make a copy so we don't modify the original theta
for _ in range(epoch):
_theta = _theta - alpha * gradient(_theta, X, y)
cost_data.append(lr_cost(_theta, X, y))
return _theta, cost_data
# run batch gradient descent
epoch = 500
final_theta, cost_data = batch_gradient_decent(theta, X, y, epoch)
final_theta
# the final theta
cost_data
# look at the cost history
# compute the final cost
lr_cost(final_theta, X, y)
###Output
_____no_output_____
###Markdown
visualize cost data
###Code
ax = sns.tsplot(cost_data, time=np.arange(epoch+1))
ax.set_xlabel('epoch')
ax.set_ylabel('cost')
plt.show()
# the cost changes a lot over the first couple of epochs and then flattens out
b = final_theta[0] # intercept on the y-axis
m = final_theta[1] # slope
plt.scatter(data.population, data.profit, label="Training data")
plt.plot(data.population, data.population*m + b, label="Prediction")
plt.legend(loc=2)
plt.show()
###Output
_____no_output_____
###Markdown
3 - Optional section
###Code
raw_data = pd.read_csv('ex1data2.txt', names=['square', 'bedrooms', 'price'])
raw_data.head()
###Output
_____no_output_____
###Markdown
The simplest way to standardize the data is to set \\[{{x}_{n}}=\frac{{{x}_{n}}-{{\mu }_{n}}}{{{s}_{n}}}\\] where ${{\mu }_{n}}$ is the mean and ${{s}_{n}}$ is the standard deviation.
###Code
def normalize_feature(df):
# """Applies function along input axis(default 0) of DataFrame."""
return df.apply(lambda column: (column - column.mean()) / column.std())
data = normalize_feature(raw_data)
data.head()
###Output
_____no_output_____
###Markdown
2. multi-var batch gradient descent
###Code
X = get_X(data)
print(X.shape, type(X))
y = get_y(data)
print(y.shape, type(y))  # check the shape and type of the data
alpha = 0.01  # learning rate
theta = np.zeros(X.shape[1])  # X.shape[1]: the number of features n
epoch = 500  # number of iterations
final_theta, cost_data = batch_gradient_decent(theta, X, y, epoch, alpha=alpha)
sns.tsplot(time=np.arange(len(cost_data)), data = cost_data)
plt.xlabel('epoch', fontsize=18)
plt.ylabel('cost', fontsize=18)
plt.show()
final_theta
###Output
_____no_output_____
###Markdown
3. learning rate
###Code
base = np.logspace(-1, -5, num=4)
candidate = np.sort(np.concatenate((base, base*3)))
print(candidate)
epoch=50
fig, ax = plt.subplots(figsize=(16, 9))
for alpha in candidate:
_, cost_data = batch_gradient_decent(theta, X, y, epoch, alpha=alpha)
ax.plot(np.arange(epoch+1), cost_data, label=alpha)
ax.set_xlabel('epoch', fontsize=18)
ax.set_ylabel('cost', fontsize=18)
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set_title('learning rate', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
4. normal equation The normal equation finds the parameters that minimize the cost function by solving $\frac{\partial }{\partial {{\theta }_{j}}}J\left( {{\theta }_{j}} \right)=0$. Assume the training feature matrix is X (including ${{x}_{0}}=1$) and the training targets form the vector y; then the normal equation gives the solution $\theta ={{\left( {{X}^{T}}X \right)}^{-1}}{{X}^{T}}y$. The superscript T denotes the matrix transpose and the superscript -1 denotes the matrix inverse. Letting $A={{X}^{T}}X$, we have ${{\left( {{X}^{T}}X \right)}^{-1}}={{A}^{-1}}$. Comparison of gradient descent and the normal equation: Gradient descent: requires choosing a learning rate α and many iterations, but still works reasonably well when the number of features n is large, and applies to many types of models. Normal equation: no learning rate α and no iterations, but it requires computing ${{\left( {{X}^{T}}X \right)}^{-1}}$, which is expensive when n is large since matrix inversion costs roughly O(n³); it is usually acceptable for n below 10,000, and it only applies to linear models (not, for example, logistic regression).
###Code
# normal equation
def normalEqn(X, y):
theta = np.linalg.inv(X.T@X)@X.T@y  # X.T@X is equivalent to X.T.dot(X)
return theta
final_theta2 = normalEqn(X, y)  # note: slightly different from the theta found by batch gradient descent
final_theta2
###Output
_____no_output_____
###Markdown
run the tensorflow graph over several optimizers
###Code
X_data = get_X(data)
print(X_data.shape, type(X_data))
y_data = get_y(data).reshape(len(X_data), 1) # special treatment for tensorflow input data
print(y_data.shape, type(y_data))
epoch = 2000
alpha = 0.01
optimizer_dict={'GD': tf.train.GradientDescentOptimizer,
'Adagrad': tf.train.AdagradOptimizer,
'Adam': tf.train.AdamOptimizer,
'Ftrl': tf.train.FtrlOptimizer,
'RMS': tf.train.RMSPropOptimizer
}
results = []
for name in optimizer_dict:
res = linear_regression(X_data, y_data, alpha, epoch, optimizer=optimizer_dict[name])
res['name'] = name
results.append(res)
###Output
_____no_output_____
###Markdown
Plot the results
###Code
fig, ax = plt.subplots(figsize=(16, 9))
for res in results:
loss_data = res['loss']
# print('for optimizer {}'.format(res['name']))
# print('final parameters\n', res['parameters'])
# print('final loss={}\n'.format(loss_data[-1]))
ax.plot(np.arange(len(loss_data)), loss_data, label=res['name'])
ax.set_xlabel('epoch', fontsize=18)
ax.set_ylabel('cost', fontsize=18)
ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
ax.set_title('different optimizer', fontsize=18)
plt.show()
###Output
_____no_output_____ |
notebooks/Prototype.ipynb | ###Markdown
Reference [Peter Norvig](https://norvig.com/lispy.html) Let's just make a simple calculator. We want to be able to use it like:```(define r 10)(* pi (* r r))``` ```>>> program = "(begin (define r 10) (* pi (* r r)))">>> parse(program)['begin', ['define', 'r', 10], ['*', 'pi', ['*', 'r', 'r']]]>>> eval(parse(program))314.1592653589793``` Type Definitions
###Code
Symbol = str # Symbol is implemented as a Python str
Number = (int, float) # Number is implemented as either a Python int or float
Atom = (Symbol, Number) # An Atom is a Symbol or Number
List = list # List is implemented as a Python list
Exp = (Atom, List) # An expression is either an Atom or List
Env = dict # An environment is a mapping of {variable: value}. Dict for now; we'll expand later.
def tokenize(chars: str) -> list:
"""
Convert a string of characters into a list of tokens.
"""
return chars.replace('(', ' ( ').replace(')', ' ) ').split()
program = "(begin (define r 10) (* pi (* r r)))"
print(tokenize(program))
def parse(program: str) -> Exp:
"""
Read a string and turn it into an Expression.
"""
return read_from_tokens(tokenize(program))
def read_from_tokens(tokens: list) -> Exp:
"""
Read an expression from a sequence of tokens.
"""
if len(tokens) == 0:
raise SyntaxError('unexpected EOF')
token = tokens.pop(0)
if token == '(':
L = []
while tokens[0] != ')':
L.append(read_from_tokens(tokens))
tokens.pop(0) # pop off ')'
return L
if token == ')':
raise SyntaxError('unexpected )')
return atom(token)
def atom(token: str) -> Atom:
"""
Numbers remain numbers; every other token becomes a symbol.
"""
try:
return int(token)
except ValueError:
try:
return float(token)
except ValueError:
return Symbol(token)
program
parse(program)
###Output
_____no_output_____
###Markdown
Environments
###Code
#*TODO: make this a class
#*TODO: consider other ways to update the global space and expand it by importing modules
import math
import operator as op
def standard_env() -> Env:
"An environment with some Scheme standard procedures."
env = Env()
env.update(vars(math)) # sin, cos, sqrt, pi, ...
env.update({
'+':op.add, '-':op.sub, '*':op.mul, '/':op.truediv,
'>':op.gt, '<':op.lt, '>=':op.ge, '<=':op.le, '=':op.eq,
'abs': abs,
'append': op.add,
'apply': lambda proc, args: proc(*args),
'begin': lambda *x: x[-1],
'car': lambda x: x[0],
'cdr': lambda x: x[1:],
'cons': lambda x,y: [x] + y,
'eq?': op.is_,
'expt': pow,
'equal?': op.eq,
'length': len,
'list': lambda *x: List(x),
'list?': lambda x: isinstance(x, List),
'map': map,
'max': max,
'min': min,
'not': op.not_,
'null?': lambda x: x == [],
'number?': lambda x: isinstance(x, Number),
'print': print,
'procedure?': callable,
'round': round,
'symbol?': lambda x: isinstance(x, Symbol),
})
return env
global_env = standard_env()
###Output
_____no_output_____
###Markdown
Evaluation: eval
###Code
def eval(x: Exp, env=global_env) -> Exp:
"""
Evaluate an expression in an environment.
"""
if isinstance(x, Symbol): # variable reference
return env[x]
if isinstance(x, Number): # constant number
return x
if x[0] == 'if': # conditional
(_, test, conseq, alt) = x
result = eval(test, env)
exp = (conseq if result else alt)
return eval(exp, env)
if x[0] == 'define': # definition
(_, symbol, exp) = x
env[symbol] = eval(exp, env)
return None
# Procedure call.
proc = eval(x[0], env)
args = [eval(arg, env) for arg in x[1:]]
return proc(*args)
eval(parse(program))
def run(program: str) -> Exp:
return eval(parse(program))
print(program)
run(program)
###Output
_____no_output_____
###Markdown
Interaction: REPL
###Code
def repl(prompt='> '):
"""
A prompt-read-eval-print loop.
"""
while True:
text = input(prompt)
if text == 'exit':
break
val = eval(parse(text))
if val is not None:
print(unparse(val))
def unparse(exp):
"""
Convert an expression's internal representation Python object back into a parsable string.
"""
if isinstance(exp, List):
return '(' + ' '.join(map(unparse, exp)) + ')'
else:
return str(exp)
###Output
_____no_output_____
###Markdown
Try this:```repl()(+ 1 2)exit```
###Code
repl()
###Output
> (+ 1 2)
3
> exit
###Markdown
Try this:```repl()> (define r 10)> (* pi (* r r))314.1592653589793> (if (> (* 11 11) 12) (* 7 6) oops)```
###Code
run('(if (> 1 2) 1 0)')
run("""
(if (> (* 11 11) 12) (* 7 6) oops)
""")
run("""
(list (+ 1 1) (+ 2 2) (* 2 3) (expt 2 3))
""")
###Output
_____no_output_____
###Markdown
Make Env a Class
###Code
class Env(dict):
"""
An environment is a dict of name value pairs, with an outer Env.
"""
def __init__(self, parms=(), args=(), outer: Env=None):
"""
@param parms: Variable names to bind arguments to.
@param args: Values to bind the variable names to.
"""
self.update(zip(parms, args))
self.outer = outer
def find(self, var: str) -> Env:
"""
Finds the innermost Env where the given name appears.
@param var: The name of the variable we're looking for.
@returns Env: The environment in which this name appears.
"""
if var in self:
return self
return self.outer.find(var)
# change global_env over to the new Env class.
global_env = standard_env()
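# Illustration of the outer-chain lookup:
#   inner = Env(parms=('x',), args=(1,), outer=global_env)
#   inner.find('x') is inner itself, while inner.find('pi') walks out to global_env.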
###Output
_____no_output_____
###Markdown
User-defined procedures
###Code
class Procedure:
"""
A user-defined procedure with variable name bindings.
"""
def __init__(self, parms, body, env):
self.parms = parms
self.body = body
self.env = env
def __call__(self, *args):
# Create an environment with bindings for this one invocation.
env = Env(self.parms, args, self.env)
return eval(self.body, env)
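# Because each call's Env has self.env (the environment where the lambda was
# defined) as its outer, procedures are lexically scoped closures; this is what
# lets make-account below keep a private, mutable `balance`.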
###Output
_____no_output_____
###Markdown
**TODO**: Figure out how to enable plugins for `eval()`
###Code
def eval(x: Exp, env=global_env) -> Exp:
"""
Evaluate an expression in an environment.
"""
if isinstance(x, Symbol): # variable reference
return env.find(x)[x]
if not isinstance(x, List): # constant number
return x
op, *args = x
if op == 'quote': # Quote an expression without evaluating it
return args[0]
if op == 'if': # conditional
(test, conseq, alt) = args
result = eval(test,env)
if result:
exp = conseq
else:
exp = alt
return eval(exp, env)
if op == 'define': # definition
(symbol, exp) = args
env[symbol] = eval(exp, env)
return None
if op == 'set!': # assignment
(symbol, exp) = args
env.find(symbol)[symbol] = eval(exp, env)
return None
if op == 'lambda': # procedure
(parms, body) = args
return Procedure(parms, body, env)
# Procedure call.
proc = eval(x[0], env)
args = [eval(arg, env) for arg in x[1:]]
return proc(*args)
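# Example of the lambda + procedure-call path:
# eval(parse("((lambda (x) (* x x)) 7)"))  ->  49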
run("""
(define make-account
(lambda (balance)
(lambda (amt)
(begin (set! balance (+ balance amt))
balance))))
(define account1 (make-account 100))
(account1 -20)
""")
###Output
define args ['make-account', ['lambda', ['balance'], ['lambda', ['amt'], ['begin', ['set!', 'balance', ['+', 'balance', 'amt']], 'balance']]]]
###Markdown
```
(define make-account
  (lambda (balance)
    (lambda (amt)
      (begin (set! balance (+ balance amt))
             balance))))
(define account1 (make-account 100.00))
(account1 -20.00)
(define account1 (make-account 100.00))
(account1 -20.00)
(account1 -20.00)
(define account2 (make-account 100.00))
(account2 40)
(account2 -10)
```
###Code
repl()
###Output
> (define make-account (lambda (balance) (lambda (amt) (begin (set! balance (+ balance amt)) balance))))
> (define account1 (make-account 100.00))
> account1
<__main__.Procedure object at 0x000002B8E6E84848>
> (account1 -20.00)
80.0
> (account1 -20.00)
60.0
> (define account2 (make-account 100.00))
> (account2 40)
140.0
> (account2 -10)
130.0
> exit
###Markdown
Next steps

First let's assemble a list of things we'd like to do. Our goal is to write something non-trivial like Asteroids. To do that there are a few things we need to add.

Must Have

* [Tail Call Optimization](https://en.wikipedia.org/wiki/Tail_call)
  * To write a simple game I need to iterate infinitely. This means either TCO, or cheating by implementing an explicit loop structure and introducing a new keyword. (A rough sketch of TCO appears at the end of this notebook.)
* [Data Structures](https://www.csie.ntu.edu.tw/~course/10420/Resources/lp/node50.html)
  * [Association Lists](https://www.csie.ntu.edu.tw/~course/10420/Resources/lp/node51.html)
* Cleanup/exception handling.
  * To write a game, I need to create graphics structures such as windows which need to be de-allocated if the game halts unexpectedly.
* Stack traces
  * We'll need to be able to report what the interpreter was doing when we, for example, reference a variable that doesn't exist. To do that we'd probably change the parser to attach properties to each token detailing the file, line number, and character position.
* Macros
* Plugin/module architecture.
  * I don't like needing to update `eval()` when we add new structures. We should be able to load a module that uses either Lisp or Python functions that hook into `eval()`.
* Interpreter object. I'd like to have a top-level object to encapsulate a lot of these global variables. This would also enable me to have multiple interpreters that don't interfere with each other.

Take a moment and consider what we'd need to do to implement stack traces for error messages. That's going to involve diving deep into the parser and tokenizer, maybe changing it to take objects instead of normal strings. Some of these will require pretty significant rework of the code, and it'll be easy to get it wrong and break it. It's time to make unit tests. What should our unit tests look like? We could code them in Python, but let's code the above test in Lisp.
###Code
code = """
(define make-account
(lambda (balance)
(lambda (amt)
(begin (set! balance (+ balance amt))
balance))))
(define account1 (make-account 100))
(expect-equal 80.0 (account1 -20))
(expect-equal 60.0 (account1 -20))
(define account2 (make-account 100))
(expect-equal 140.0 (account2 40))
(expect-equal 130.0 (account2 -10))
"""
###Output
_____no_output_____ |
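###Markdown
A rough sketch of the tail-call optimization item from the list above. This is only one possible approach, not a final design, and the name `eval_tco` is ours: instead of recursing on the tail expression of `if` and on a `Procedure` body, the evaluator rebinds `x` and `env` and loops, so deep Lisp-level recursion no longer grows the Python stack.
```
def eval_tco(x: Exp, env=global_env) -> Exp:
    "eval() with tail calls turned into iteration."
    while True:
        if isinstance(x, Symbol):                        # variable reference
            return env.find(x)[x]
        if not isinstance(x, List):                      # constant
            return x
        op, *args = x
        if op == 'quote':
            return args[0]
        if op == 'if':                                   # tail position: loop instead of recursing
            (test, conseq, alt) = args
            x = conseq if eval_tco(test, env) else alt
            continue
        if op == 'define':
            (symbol, exp) = args
            env[symbol] = eval_tco(exp, env)
            return None
        if op == 'set!':
            (symbol, exp) = args
            env.find(symbol)[symbol] = eval_tco(exp, env)
            return None
        if op == 'lambda':
            (parms, body) = args
            return Procedure(parms, body, env)
        proc = eval_tco(op, env)
        vals = [eval_tco(arg, env) for arg in args]
        if isinstance(proc, Procedure):                  # tail call: rebind and loop
            x, env = proc.body, Env(proc.parms, vals, proc.env)
            continue
        return proc(*vals)
```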
projects/Diversity-Matters/scripts/evaluation.ipynb | ###Markdown
WRN28x10 on CIFAR-100
###Code
DATA = []
###Output
_____no_output_____
###Markdown
DeepEns-4
###Code
ensemble_size = 4
# load config file
cfg = get_cfg()
cfg.merge_from_file("../configs/C100_WRN28x10_SGD.yaml", allow_unsafe=True)
cfg.NUM_GPUS = 1
# build model
model = build_model(cfg).cuda().eval()
# build dataloaders
dataloaders = build_dataloaders(cfg, root="../datasets")
# configure path for deep ensembles
get_ith_weight_file = lambda idx: os.path.join(f"../outputs/C100_WRN28x10_SGD_{idx}", "best_acc1.pth.tar")
# disable grad
torch.set_grad_enabled(False)
# make predictions on valid split
val_pred_logits, val_true_labels = get_de_predictions(
model, dataloaders["val_loader"], ensemble_size, get_ith_weight_file
)
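# val_pred_logits is indexed as [example, ensemble member, class] (inferred from the
# dim=2 softmax here and the .mean(1) ensemble averaging below).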
val_confidences = torch.softmax(val_pred_logits, dim=2)
# make predictions on test split
tst_pred_logits, tst_true_labels = get_de_predictions(
model, dataloaders["tst_loader"], ensemble_size, get_ith_weight_file
)
tst_confidences = torch.softmax(tst_pred_logits, dim=2)
# make evaluation results
for e in range(ensemble_size):
t_opt = get_optimal_temperature(val_confidences[:, :e+1, :].mean(1), val_true_labels)
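    # Temperature scaling: divide the averaged log-probabilities by T (fit on the
    # validation split) and renormalize; the cNLL/cBS/cECE columns below use this.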
DATA.append([
f"DeepEns-{e+1}",
evaluate_acc( tst_confidences[:, :e+1, :].mean(1), tst_true_labels) * 100,
evaluate_nll( tst_confidences[:, :e+1, :].mean(1), tst_true_labels),
evaluate_bs( tst_confidences[:, :e+1, :].mean(1), tst_true_labels),
evaluate_ece( tst_confidences[:, :e+1, :].mean(1), tst_true_labels),
evaluate_nll(torch.log_softmax(tst_confidences[:, :e+1, :].mean(1).log() / t_opt, dim=1).exp(), tst_true_labels),
evaluate_bs( torch.log_softmax(tst_confidences[:, :e+1, :].mean(1).log() / t_opt, dim=1).exp(), tst_true_labels),
evaluate_ece(torch.log_softmax(tst_confidences[:, :e+1, :].mean(1).log() / t_opt, dim=1).exp(), tst_true_labels),
])
print(tabulate(DATA, headers=["Label", "ACC", "NLL", "BS", "ECE", "cNLL", "cBS", "cECE",], floatfmt=["", ".2f",] + [".3f"] * 6))
print()
###Output
Label ACC NLL BS ECE cNLL cBS cECE
--------- ----- ----- ----- ----- ------ ----- ------
DeepEns-1 80.22 0.789 0.282 0.042 0.789 0.282 0.041
DeepEns-2 81.90 0.713 0.261 0.033 0.708 0.260 0.031
DeepEns-3 82.46 0.684 0.253 0.032 0.673 0.251 0.027
DeepEns-4 82.54 0.670 0.249 0.033 0.655 0.246 0.026
###Markdown
BatchEns-4
###Code
ensemble_size = 4
# load config file
cfg = get_cfg()
cfg.merge_from_file("../configs/C100_WRN28x10_BE4.yaml", allow_unsafe=True)
cfg.NUM_GPUS = 1
# build model
model = build_model(cfg).cuda().eval()
model.load_state_dict(torch.load("../outputs/C100_WRN28x10_BE4_KD_0/best_acc1.pth.tar", map_location="cpu")["model_state_dict"])
# build dataloaders
dataloaders = build_dataloaders(cfg, root="../datasets")
# disable grad
torch.set_grad_enabled(False)
# make predictions on valid split
val_pred_logits, val_true_labels = get_be_predictions(model, dataloaders["val_loader"], ensemble_size)
val_confidences = torch.softmax(val_pred_logits, dim=2)
t_opt = get_optimal_temperature(val_confidences.mean(1), val_true_labels)
# make predictions on test split
tst_pred_logits, tst_true_labels = get_be_predictions(model, dataloaders["tst_loader"], ensemble_size)
tst_confidences = torch.softmax(tst_pred_logits, dim=2)
DATA.append([
"BatchEns-4 (KD)",
evaluate_acc( tst_confidences.mean(1), tst_true_labels) * 100,
evaluate_nll( tst_confidences.mean(1), tst_true_labels),
evaluate_bs( tst_confidences.mean(1), tst_true_labels),
evaluate_ece( tst_confidences.mean(1), tst_true_labels),
evaluate_nll(torch.log_softmax(tst_confidences.mean(1).log() / t_opt, dim=1).exp(), tst_true_labels),
evaluate_bs( torch.log_softmax(tst_confidences.mean(1).log() / t_opt, dim=1).exp(), tst_true_labels),
evaluate_ece(torch.log_softmax(tst_confidences.mean(1).log() / t_opt, dim=1).exp(), tst_true_labels),
])
print(tabulate(DATA, headers=["Label", "ACC", "NLL", "BS", "ECE", "cNLL", "cBS", "cECE",], floatfmt=["", ".2f",] + [".3f"] * 6))
print()
ensemble_size = 4
# load config file
cfg = get_cfg()
cfg.merge_from_file("../configs/C100_WRN28x10_BE4.yaml", allow_unsafe=True)
cfg.NUM_GPUS = 1
# build model
model = build_model(cfg).cuda().eval()
model.load_state_dict(torch.load("../outputs/C100_WRN28x10_BE4_KDGaussian_0/best_acc1.pth.tar", map_location="cpu")["model_state_dict"])
# build dataloaders
dataloaders = build_dataloaders(cfg, root="../datasets")
# disable grad
torch.set_grad_enabled(False)
# make predictions on valid split
val_pred_logits, val_true_labels = get_be_predictions(model, dataloaders["val_loader"], ensemble_size)
val_confidences = torch.softmax(val_pred_logits, dim=2)
t_opt = get_optimal_temperature(val_confidences.mean(1), val_true_labels)
# make predictions on test split
tst_pred_logits, tst_true_labels = get_be_predictions(model, dataloaders["tst_loader"], ensemble_size)
tst_confidences = torch.softmax(tst_pred_logits, dim=2)
DATA.append([
"BatchEns-4 (KD + Gaussian)",
evaluate_acc( tst_confidences.mean(1), tst_true_labels) * 100,
evaluate_nll( tst_confidences.mean(1), tst_true_labels),
evaluate_bs( tst_confidences.mean(1), tst_true_labels),
evaluate_ece( tst_confidences.mean(1), tst_true_labels),
evaluate_nll(torch.log_softmax(tst_confidences.mean(1).log() / t_opt, dim=1).exp(), tst_true_labels),
evaluate_bs( torch.log_softmax(tst_confidences.mean(1).log() / t_opt, dim=1).exp(), tst_true_labels),
evaluate_ece(torch.log_softmax(tst_confidences.mean(1).log() / t_opt, dim=1).exp(), tst_true_labels),
])
print(tabulate(DATA, headers=["Label", "ACC", "NLL", "BS", "ECE", "cNLL", "cBS", "cECE",], floatfmt=["", ".2f",] + [".3f"] * 6))
print()
ensemble_size = 4
# load config file
cfg = get_cfg()
cfg.merge_from_file("../configs/C100_WRN28x10_BE4.yaml", allow_unsafe=True)
cfg.NUM_GPUS = 1
# build model
model = build_model(cfg).cuda().eval()
model.load_state_dict(torch.load("../outputs/C100_WRN28x10_BE4_KDODS_0/best_acc1.pth.tar", map_location="cpu")["model_state_dict"])
# build dataloaders
dataloaders = build_dataloaders(cfg, root="../datasets")
# disable grad
torch.set_grad_enabled(False)
# make predictions on valid split
val_pred_logits, val_true_labels = get_be_predictions(model, dataloaders["val_loader"], ensemble_size)
val_confidences = torch.softmax(val_pred_logits, dim=2)
t_opt = get_optimal_temperature(val_confidences.mean(1), val_true_labels)
# make predictions on test split
tst_pred_logits, tst_true_labels = get_be_predictions(model, dataloaders["tst_loader"], ensemble_size)
tst_confidences = torch.softmax(tst_pred_logits, dim=2)
DATA.append([
"BatchEns-4 (KD + ODS)",
evaluate_acc( tst_confidences.mean(1), tst_true_labels) * 100,
evaluate_nll( tst_confidences.mean(1), tst_true_labels),
evaluate_bs( tst_confidences.mean(1), tst_true_labels),
evaluate_ece( tst_confidences.mean(1), tst_true_labels),
evaluate_nll(torch.log_softmax(tst_confidences.mean(1).log() / t_opt, dim=1).exp(), tst_true_labels),
evaluate_bs( torch.log_softmax(tst_confidences.mean(1).log() / t_opt, dim=1).exp(), tst_true_labels),
evaluate_ece(torch.log_softmax(tst_confidences.mean(1).log() / t_opt, dim=1).exp(), tst_true_labels),
])
print(tabulate(DATA, headers=["Label", "ACC", "NLL", "BS", "ECE", "cNLL", "cBS", "cECE",], floatfmt=["", ".2f",] + [".3f"] * 6))
print()
ensemble_size = 4
# load config file
cfg = get_cfg()
cfg.merge_from_file("../configs/C100_WRN28x10_BE4.yaml", allow_unsafe=True)
cfg.NUM_GPUS = 1
# build model
model = build_model(cfg).cuda().eval()
model.load_state_dict(torch.load("../outputs/C100_WRN28x10_BE4_KDConfODS_0/best_acc1.pth.tar", map_location="cpu")["model_state_dict"])
# build dataloaders
dataloaders = build_dataloaders(cfg, root="../datasets")
# disable grad
torch.set_grad_enabled(False)
# make predictions on valid split
val_pred_logits, val_true_labels = get_be_predictions(model, dataloaders["val_loader"], ensemble_size)
val_confidences = torch.softmax(val_pred_logits, dim=2)
t_opt = get_optimal_temperature(val_confidences.mean(1), val_true_labels)
# make predictions on test split
tst_pred_logits, tst_true_labels = get_be_predictions(model, dataloaders["tst_loader"], ensemble_size)
tst_confidences = torch.softmax(tst_pred_logits, dim=2)
DATA.append([
"BatchEns-4 (KD + ConfODS)",
evaluate_acc( tst_confidences.mean(1), tst_true_labels) * 100,
evaluate_nll( tst_confidences.mean(1), tst_true_labels),
evaluate_bs( tst_confidences.mean(1), tst_true_labels),
evaluate_ece( tst_confidences.mean(1), tst_true_labels),
evaluate_nll(torch.log_softmax(tst_confidences.mean(1).log() / t_opt, dim=1).exp(), tst_true_labels),
evaluate_bs( torch.log_softmax(tst_confidences.mean(1).log() / t_opt, dim=1).exp(), tst_true_labels),
evaluate_ece(torch.log_softmax(tst_confidences.mean(1).log() / t_opt, dim=1).exp(), tst_true_labels),
])
print(tabulate(DATA, headers=["Label", "ACC", "NLL", "BS", "ECE", "cNLL", "cBS", "cECE",], floatfmt=["", ".2f",] + [".3f"] * 6))
print()
###Output
Label ACC NLL BS ECE cNLL cBS cECE
-------------------------- ----- ----- ----- ----- ------ ----- ------
DeepEns-1 80.22 0.789 0.282 0.042 0.789 0.282 0.041
DeepEns-2 81.90 0.713 0.261 0.033 0.708 0.260 0.031
DeepEns-3 82.46 0.684 0.253 0.032 0.673 0.251 0.027
DeepEns-4 82.54 0.670 0.249 0.033 0.655 0.246 0.026
BatchEns-4 (KD) 80.40 0.804 0.286 0.072 0.750 0.277 0.021
BatchEns-4 (KD + Gaussian) 80.04 0.816 0.288 0.075 0.760 0.277 0.020
BatchEns-4 (KD + ODS) 81.92 0.685 0.258 0.026 0.682 0.258 0.026
BatchEns-4 (KD + ConfODS) 82.25 0.670 0.253 0.023 0.665 0.252 0.023
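###Markdown
The four BatchEns cells above differ only in the checkpoint path and the row label, so the evaluation could be collapsed into a helper. The sketch below is only an illustration that reuses the helpers already used in this notebook (`get_cfg`, `build_model`, `build_dataloaders`, `get_be_predictions`, `get_optimal_temperature`, and the `evaluate_*` metrics); the function name and signature are our own.
```
def evaluate_be_checkpoint(label, checkpoint_path, ensemble_size=4):
    # Build the model and dataloaders exactly as in the cells above.
    cfg = get_cfg()
    cfg.merge_from_file("../configs/C100_WRN28x10_BE4.yaml", allow_unsafe=True)
    cfg.NUM_GPUS = 1
    model = build_model(cfg).cuda().eval()
    model.load_state_dict(torch.load(checkpoint_path, map_location="cpu")["model_state_dict"])
    dataloaders = build_dataloaders(cfg, root="../datasets")
    with torch.no_grad():
        val_logits, val_labels = get_be_predictions(model, dataloaders["val_loader"], ensemble_size)
        tst_logits, tst_labels = get_be_predictions(model, dataloaders["tst_loader"], ensemble_size)
    val_conf = torch.softmax(val_logits, dim=2)
    tst_conf = torch.softmax(tst_logits, dim=2)
    t_opt = get_optimal_temperature(val_conf.mean(1), val_labels)
    scaled = torch.log_softmax(tst_conf.mean(1).log() / t_opt, dim=1).exp()
    return [
        label,
        evaluate_acc(tst_conf.mean(1), tst_labels) * 100,
        evaluate_nll(tst_conf.mean(1), tst_labels),
        evaluate_bs( tst_conf.mean(1), tst_labels),
        evaluate_ece(tst_conf.mean(1), tst_labels),
        evaluate_nll(scaled, tst_labels),
        evaluate_bs( scaled, tst_labels),
        evaluate_ece(scaled, tst_labels),
    ]

# e.g. DATA.append(evaluate_be_checkpoint(
#     "BatchEns-4 (KD)", "../outputs/C100_WRN28x10_BE4_KD_0/best_acc1.pth.tar"))
```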
|
raw/kEffNet/CIFAR-10/JP30B28_32x32_50Epochs-RAM-32x32-kType2-13.ipynb | ###Markdown
You might need to install this on your system:

apt-get install python3-opencv git
###Code
import os
#"""
# !rm k -r
if not os.path.isdir('k'):
!git clone -b development12 https://github.com/joaopauloschuler/k-neural-api.git k
else:
!cd k && git pull
#"""
!cd k && pip install .
import cai.layers
import cai.datasets
import cai.models
import cai.densenet
import cai.efficientnet
import numpy as np
from tensorflow import keras
from tensorflow.keras import mixed_precision
import gc
import multiprocessing
import random
import tensorflow as tf
print("Tensorflow version:", tf.version.VERSION)
print("Keras version:", keras.__version__)
print("CPU cores:", multiprocessing.cpu_count())
import psutil
print('RAM:', (psutil.virtual_memory().total / 1e9),'GB')
print(tf.config.list_physical_devices('GPU'))
import matplotlib.pylab as plt
from sklearn.metrics import classification_report
!nvidia-smi
mixed_precision.set_global_policy('mixed_float16')
from google.colab import drive
drive.mount('/content/drive/')
dataset=tf.keras.datasets.cifar10
verbose=True
lab=False
bipolar=False
base_model_name='JP30B28'
x_train, y_train, x_test, y_test = cai.datasets.load_dataset(dataset, verbose=verbose, lab=lab, bipolar=bipolar, base_model_name=base_model_name)
print(x_train.shape)
print(y_train.shape)
num_classes = 10
batch_size = 64
epochs = 50
target_size_x = 32
target_size_y = 32
seed = 12
from tensorflow.python.profiler.model_analyzer import profile
from tensorflow.python.profiler.option_builder import ProfileOptionBuilder
def get_flops(model):
forward_pass = tf.function(
model.call,
input_signature=[tf.TensorSpec(shape=(1,) + model.input_shape[1:])])
graph_info = profile(forward_pass.get_concrete_function().graph,
options=ProfileOptionBuilder.float_operation())
# The //2 is necessary since `profile` counts multiply and accumulate
    # as two flops; here we report the total number of multiply-accumulate ops
flops = graph_info.total_float_ops // 2
return flops
train_datagen = cai.util.create_image_generator(validation_split=0.1, rotation_range=20, width_shift_range=0.3, height_shift_range=0.3, channel_shift_range=0.0)
test_datagen = cai.util.create_image_generator_no_augmentation()
cpus_num = max([multiprocessing.cpu_count(), 8])
def cyclical_adv_lrscheduler25(epoch):
"""CAI Cyclical and Advanced Learning Rate Scheduler.
# Arguments
epoch: integer with current epoch count.
# Returns
float with desired learning rate.
"""
base_learning = 0.001
local_epoch = epoch % 25
if local_epoch < 7:
return base_learning * (1 + 0.5*local_epoch)
else:
return (base_learning * 4) * ( 0.85**(local_epoch-7) )
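# Sample values of the schedule (it repeats every 25 epochs):
#   epoch 0 -> 0.001, ..., epoch 6 -> 0.004 (linear warm-up),
#   epoch 7 -> 0.004, epoch 8 -> 0.0034, ... (0.85 decay per epoch)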
def work_on_efficientnet(show_model=False, run_fit=False, test_results=False, calc_f1=False):
monitor='val_accuracy'
if (calc_f1):
test_results=True
if (show_model):
input_shape = (target_size_x, target_size_y, 3)
else:
input_shape = (None, None, 3)
for kType in [2, 13]: #
basefilename = '/content/drive/MyDrive/output/JP30B28-EfficientNet-CIFAR10-'+str(kType)
best_result_file_name = basefilename+'-best_result.hdf5'
print('Running: '+basefilename)
model = cai.efficientnet.kEfficientNetB0(
include_top=True,
skip_stride_cnt=3,
input_shape=input_shape,
classes=num_classes,
kType=kType)
optimizer = keras.optimizers.RMSprop()
optimizer = mixed_precision.LossScaleOptimizer(optimizer)
model.compile(
loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
if (show_model):
model.summary()
print('model flops:',get_flops(model))
save_best = keras.callbacks.ModelCheckpoint(
filepath=best_result_file_name,
monitor=monitor,
verbose=1,
save_best_only=True,
save_weights_only=False,
mode='max',
save_freq='epoch')
if (run_fit):
train_flow = train_datagen.flow(
x_train, y_train,
batch_size=batch_size,
shuffle=True,
seed=seed,
subset='training'
)
validation_flow = train_datagen.flow(
x_train, y_train,
batch_size=batch_size,
shuffle=True,
seed=seed,
subset='validation'
)
history = model.fit(
x = train_flow,
epochs=epochs,
batch_size=batch_size,
validation_data=validation_flow,
callbacks=[save_best, tf.keras.callbacks.LearningRateScheduler(cyclical_adv_lrscheduler25)],
workers=cpus_num,
max_queue_size=128
)
plt.figure()
plt.ylabel("Accuracy (training and validation)")
plt.xlabel("Epochs")
plt.ylim([0,1])
plt.plot(history.history["accuracy"])
plt.plot(history.history["val_accuracy"])
if (test_results):
test_flow = test_datagen.flow(
x_test, y_test,
batch_size=batch_size,
shuffle=True,
seed=seed
)
print('Best Model Results: '+best_result_file_name)
model = cai.models.load_kereas_model(best_result_file_name)
evaluated = model.evaluate(
x=test_flow,
batch_size=batch_size,
use_multiprocessing=False,
workers=cpus_num
)
for metric, name in zip(evaluated,["loss","acc"]):
print(name,metric)
if (calc_f1):
model = cai.models.load_kereas_model(best_result_file_name)
pred_y = model.predict(x_test)
print("Predicted Shape:", pred_y.shape)
pred_classes_y = np.array(list(np.argmax(pred_y, axis=1)))
test_classes_y = np.array(list(np.argmax(y_test, axis=1)))
print("Pred classes shape:",pred_classes_y.shape)
print("Test classes shape:",test_classes_y.shape)
report = classification_report(test_classes_y, pred_classes_y, digits=4)
print(report)
print('Finished: '+basefilename)
###Output
_____no_output_____
###Markdown
Show Models
###Code
work_on_efficientnet(show_model=True, run_fit=False, test_results=False)
###Output
Running: /content/drive/MyDrive/output/JP30B28-EfficientNet-CIFAR10-2
Model: "kEffNet-b0"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 32, 32, 3)] 0
__________________________________________________________________________________________________
k_stem_conv_pad (ZeroPadding2D) (None, 33, 33, 3) 0 input_1[0][0]
__________________________________________________________________________________________________
k_stem_conv (Conv2D) (None, 31, 31, 32) 864 k_stem_conv_pad[0][0]
__________________________________________________________________________________________________
k_stem_bn (BatchNormalization) (None, 31, 31, 32) 128 k_stem_conv[0][0]
__________________________________________________________________________________________________
k_stem_activation (Activation) (None, 31, 31, 32) 0 k_stem_bn[0][0]
__________________________________________________________________________________________________
k_block1a__0dwconv (DepthwiseCo (None, 31, 31, 32) 288 k_stem_activation[0][0]
__________________________________________________________________________________________________
k_block1a__0bn (BatchNormalizat (None, 31, 31, 32) 128 k_block1a__0dwconv[0][0]
__________________________________________________________________________________________________
k_block1a__0activation (Activat (None, 31, 31, 32) 0 k_block1a__0bn[0][0]
__________________________________________________________________________________________________
k_block1a__0se_squeeze (GlobalA (None, 32) 0 k_block1a__0activation[0][0]
__________________________________________________________________________________________________
k_block1a__0se_reshape (Reshape (None, 1, 1, 32) 0 k_block1a__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block1a__0se_reduce_conv (Con (None, 1, 1, 8) 136 k_block1a__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block1a__0se_reduce (Activati (None, 1, 1, 8) 0 k_block1a__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block1a__0se_reduce_group_int (None, 1, 1, 8) 0 k_block1a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block1a__0se_reduce_group_int (None, 1, 1, 8) 40 k_block1a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block1a__0se_reduce_group_int (None, 1, 1, 8) 0 k_block1a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block1a__0se_reduce_inter_gro (None, 1, 1, 8) 0 k_block1a__0se_reduce_group_inter
k_block1a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block1a__0se_expand_conv (Con (None, 1, 1, 32) 288 k_block1a__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block1a__0se_expand (Activati (None, 1, 1, 32) 0 k_block1a__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block1a__0se_excite (Multiply (None, 31, 31, 32) 0 k_block1a__0activation[0][0]
k_block1a__0se_expand[0][0]
__________________________________________________________________________________________________
k_block1a__0project_conv_conv ( (None, 31, 31, 16) 256 k_block1a__0se_excite[0][0]
__________________________________________________________________________________________________
k_block1a__0project_conv_bn (Ba (None, 31, 31, 16) 64 k_block1a__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block1a__0project_conv_group_ (None, 31, 31, 16) 0 k_block1a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block1a__0project_conv_group_ (None, 31, 31, 16) 128 k_block1a__0project_conv_group_in
__________________________________________________________________________________________________
k_block1a__0project_conv_group_ (None, 31, 31, 16) 64 k_block1a__0project_conv_group_in
__________________________________________________________________________________________________
k_block1a__0project_conv_inter_ (None, 31, 31, 16) 0 k_block1a__0project_conv_group_in
k_block1a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block2a__0expand_conv (Conv2D (None, 31, 31, 96) 1536 k_block1a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block2a__0expand_bn (BatchNor (None, 31, 31, 96) 384 k_block2a__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block2a__0expand (Activation) (None, 31, 31, 96) 0 k_block2a__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block2a__0dwconv (DepthwiseCo (None, 31, 31, 96) 864 k_block2a__0expand[0][0]
__________________________________________________________________________________________________
k_block2a__0bn (BatchNormalizat (None, 31, 31, 96) 384 k_block2a__0dwconv[0][0]
__________________________________________________________________________________________________
k_block2a__0activation (Activat (None, 31, 31, 96) 0 k_block2a__0bn[0][0]
__________________________________________________________________________________________________
k_block2a__0se_squeeze (GlobalA (None, 96) 0 k_block2a__0activation[0][0]
__________________________________________________________________________________________________
k_block2a__0se_reshape (Reshape (None, 1, 1, 96) 0 k_block2a__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block2a__0se_reduce_conv (Con (None, 1, 1, 4) 100 k_block2a__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block2a__0se_reduce (Activati (None, 1, 1, 4) 0 k_block2a__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block2a__0se_reduce_group_int (None, 1, 1, 4) 8 k_block2a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block2a__0se_reduce_group_int (None, 1, 1, 4) 0 k_block2a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block2a__0se_reduce_inter_gro (None, 1, 1, 4) 0 k_block2a__0se_reduce_group_inter
k_block2a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block2a__0se_expand_conv (Con (None, 1, 1, 96) 480 k_block2a__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block2a__0se_expand (Activati (None, 1, 1, 96) 0 k_block2a__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block2a__0se_excite (Multiply (None, 31, 31, 96) 0 k_block2a__0activation[0][0]
k_block2a__0se_expand[0][0]
__________________________________________________________________________________________________
k_block2a__0project_conv_conv ( (None, 31, 31, 24) 384 k_block2a__0se_excite[0][0]
__________________________________________________________________________________________________
k_block2a__0project_conv_bn (Ba (None, 31, 31, 24) 96 k_block2a__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block2a__0project_conv_group_ (None, 31, 31, 24) 0 k_block2a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block2a__0project_conv_group_ (None, 31, 31, 24) 96 k_block2a__0project_conv_group_in
__________________________________________________________________________________________________
k_block2a__0project_conv_group_ (None, 31, 31, 24) 96 k_block2a__0project_conv_group_in
__________________________________________________________________________________________________
k_block2a__0project_conv_inter_ (None, 31, 31, 24) 0 k_block2a__0project_conv_group_in
k_block2a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block2b__0expand_conv (Conv2D (None, 31, 31, 144) 3456 k_block2a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block2b__0expand_bn (BatchNor (None, 31, 31, 144) 576 k_block2b__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block2b__0expand (Activation) (None, 31, 31, 144) 0 k_block2b__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block2b__0dwconv (DepthwiseCo (None, 31, 31, 144) 1296 k_block2b__0expand[0][0]
__________________________________________________________________________________________________
k_block2b__0bn (BatchNormalizat (None, 31, 31, 144) 576 k_block2b__0dwconv[0][0]
__________________________________________________________________________________________________
k_block2b__0activation (Activat (None, 31, 31, 144) 0 k_block2b__0bn[0][0]
__________________________________________________________________________________________________
k_block2b__0se_squeeze (GlobalA (None, 144) 0 k_block2b__0activation[0][0]
__________________________________________________________________________________________________
k_block2b__0se_reshape (Reshape (None, 1, 1, 144) 0 k_block2b__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block2b__0se_reduce_conv (Con (None, 1, 1, 6) 150 k_block2b__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block2b__0se_reduce (Activati (None, 1, 1, 6) 0 k_block2b__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block2b__0se_reduce_group_int (None, 1, 1, 6) 12 k_block2b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block2b__0se_reduce_group_int (None, 1, 1, 6) 0 k_block2b__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block2b__0se_reduce_inter_gro (None, 1, 1, 6) 0 k_block2b__0se_reduce_group_inter
k_block2b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block2b__0se_expand_conv (Con (None, 1, 1, 144) 1008 k_block2b__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block2b__0se_expand (Activati (None, 1, 1, 144) 0 k_block2b__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block2b__0se_excite (Multiply (None, 31, 31, 144) 0 k_block2b__0activation[0][0]
k_block2b__0se_expand[0][0]
__________________________________________________________________________________________________
k_block2b__0project_conv_conv ( (None, 31, 31, 24) 432 k_block2b__0se_excite[0][0]
__________________________________________________________________________________________________
k_block2b__0project_conv_bn (Ba (None, 31, 31, 24) 96 k_block2b__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block2b__0project_conv_group_ (None, 31, 31, 24) 0 k_block2b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block2b__0project_conv_group_ (None, 31, 31, 24) 72 k_block2b__0project_conv_group_in
__________________________________________________________________________________________________
k_block2b__0project_conv_group_ (None, 31, 31, 24) 96 k_block2b__0project_conv_group_in
__________________________________________________________________________________________________
k_block2b__0project_conv_inter_ (None, 31, 31, 24) 0 k_block2b__0project_conv_group_in
k_block2b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block2b__0drop (Dropout) (None, 31, 31, 24) 0 k_block2b__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block2b__0add (Add) (None, 31, 31, 24) 0 k_block2b__0drop[0][0]
k_block2a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block3a__0expand_conv (Conv2D (None, 31, 31, 144) 3456 k_block2b__0add[0][0]
__________________________________________________________________________________________________
k_block3a__0expand_bn (BatchNor (None, 31, 31, 144) 576 k_block3a__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block3a__0expand (Activation) (None, 31, 31, 144) 0 k_block3a__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block3a__0dwconv (DepthwiseCo (None, 31, 31, 144) 3600 k_block3a__0expand[0][0]
__________________________________________________________________________________________________
k_block3a__0bn (BatchNormalizat (None, 31, 31, 144) 576 k_block3a__0dwconv[0][0]
__________________________________________________________________________________________________
k_block3a__0activation (Activat (None, 31, 31, 144) 0 k_block3a__0bn[0][0]
__________________________________________________________________________________________________
k_block3a__0se_squeeze (GlobalA (None, 144) 0 k_block3a__0activation[0][0]
__________________________________________________________________________________________________
k_block3a__0se_reshape (Reshape (None, 1, 1, 144) 0 k_block3a__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block3a__0se_reduce_conv (Con (None, 1, 1, 6) 150 k_block3a__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block3a__0se_reduce (Activati (None, 1, 1, 6) 0 k_block3a__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block3a__0se_reduce_group_int (None, 1, 1, 6) 12 k_block3a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block3a__0se_reduce_group_int (None, 1, 1, 6) 0 k_block3a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block3a__0se_reduce_inter_gro (None, 1, 1, 6) 0 k_block3a__0se_reduce_group_inter
k_block3a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block3a__0se_expand_conv (Con (None, 1, 1, 144) 1008 k_block3a__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block3a__0se_expand (Activati (None, 1, 1, 144) 0 k_block3a__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block3a__0se_excite (Multiply (None, 31, 31, 144) 0 k_block3a__0activation[0][0]
k_block3a__0se_expand[0][0]
__________________________________________________________________________________________________
k_block3a__0project_conv_conv ( (None, 31, 31, 40) 720 k_block3a__0se_excite[0][0]
__________________________________________________________________________________________________
k_block3a__0project_conv_bn (Ba (None, 31, 31, 40) 160 k_block3a__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block3a__0project_conv_group_ (None, 31, 31, 40) 0 k_block3a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block3a__0project_conv_group_ (None, 31, 31, 40) 200 k_block3a__0project_conv_group_in
__________________________________________________________________________________________________
k_block3a__0project_conv_group_ (None, 31, 31, 40) 160 k_block3a__0project_conv_group_in
__________________________________________________________________________________________________
k_block3a__0project_conv_inter_ (None, 31, 31, 40) 0 k_block3a__0project_conv_group_in
k_block3a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block3b__0expand_conv (Conv2D (None, 31, 31, 240) 4800 k_block3a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block3b__0expand_bn (BatchNor (None, 31, 31, 240) 960 k_block3b__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block3b__0expand (Activation) (None, 31, 31, 240) 0 k_block3b__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block3b__0expand_group_interl (None, 31, 31, 240) 0 k_block3b__0expand[0][0]
__________________________________________________________________________________________________
k_block3b__0dwconv (DepthwiseCo (None, 31, 31, 240) 6000 k_block3b__0expand_group_interlea
__________________________________________________________________________________________________
k_block3b__0bn (BatchNormalizat (None, 31, 31, 240) 960 k_block3b__0dwconv[0][0]
__________________________________________________________________________________________________
k_block3b__0activation (Activat (None, 31, 31, 240) 0 k_block3b__0bn[0][0]
__________________________________________________________________________________________________
k_block3b__0se_squeeze (GlobalA (None, 240) 0 k_block3b__0activation[0][0]
__________________________________________________________________________________________________
k_block3b__0se_reshape (Reshape (None, 1, 1, 240) 0 k_block3b__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block3b__0se_reduce_conv (Con (None, 1, 1, 10) 250 k_block3b__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block3b__0se_reduce (Activati (None, 1, 1, 10) 0 k_block3b__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block3b__0se_reduce_group_int (None, 1, 1, 10) 20 k_block3b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block3b__0se_reduce_group_int (None, 1, 1, 10) 0 k_block3b__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block3b__0se_reduce_inter_gro (None, 1, 1, 10) 0 k_block3b__0se_reduce_group_inter
k_block3b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block3b__0se_expand_conv (Con (None, 1, 1, 240) 2640 k_block3b__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block3b__0se_expand (Activati (None, 1, 1, 240) 0 k_block3b__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block3b__0se_excite (Multiply (None, 31, 31, 240) 0 k_block3b__0activation[0][0]
k_block3b__0se_expand[0][0]
__________________________________________________________________________________________________
k_block3b__0project_conv_conv ( (None, 31, 31, 40) 960 k_block3b__0se_excite[0][0]
__________________________________________________________________________________________________
k_block3b__0project_conv_bn (Ba (None, 31, 31, 40) 160 k_block3b__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block3b__0project_conv_group_ (None, 31, 31, 40) 0 k_block3b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block3b__0project_conv_group_ (None, 31, 31, 40) 160 k_block3b__0project_conv_group_in
__________________________________________________________________________________________________
k_block3b__0project_conv_group_ (None, 31, 31, 40) 160 k_block3b__0project_conv_group_in
__________________________________________________________________________________________________
k_block3b__0project_conv_inter_ (None, 31, 31, 40) 0 k_block3b__0project_conv_group_in
k_block3b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block3b__0drop (Dropout) (None, 31, 31, 40) 0 k_block3b__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block3b__0add (Add) (None, 31, 31, 40) 0 k_block3b__0drop[0][0]
k_block3a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block4a__0expand_conv (Conv2D (None, 31, 31, 240) 4800 k_block3b__0add[0][0]
__________________________________________________________________________________________________
k_block4a__0expand_bn (BatchNor (None, 31, 31, 240) 960 k_block4a__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block4a__0expand (Activation) (None, 31, 31, 240) 0 k_block4a__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block4a__0expand_group_interl (None, 31, 31, 240) 0 k_block4a__0expand[0][0]
__________________________________________________________________________________________________
k_block4a__0dwconv_pad (ZeroPad (None, 33, 33, 240) 0 k_block4a__0expand_group_interlea
__________________________________________________________________________________________________
k_block4a__0dwconv (DepthwiseCo (None, 16, 16, 240) 2160 k_block4a__0dwconv_pad[0][0]
__________________________________________________________________________________________________
k_block4a__0bn (BatchNormalizat (None, 16, 16, 240) 960 k_block4a__0dwconv[0][0]
__________________________________________________________________________________________________
k_block4a__0activation (Activat (None, 16, 16, 240) 0 k_block4a__0bn[0][0]
__________________________________________________________________________________________________
k_block4a__0se_squeeze (GlobalA (None, 240) 0 k_block4a__0activation[0][0]
__________________________________________________________________________________________________
k_block4a__0se_reshape (Reshape (None, 1, 1, 240) 0 k_block4a__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block4a__0se_reduce_conv (Con (None, 1, 1, 10) 250 k_block4a__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block4a__0se_reduce (Activati (None, 1, 1, 10) 0 k_block4a__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block4a__0se_reduce_group_int (None, 1, 1, 10) 20 k_block4a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block4a__0se_reduce_group_int (None, 1, 1, 10) 0 k_block4a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block4a__0se_reduce_inter_gro (None, 1, 1, 10) 0 k_block4a__0se_reduce_group_inter
k_block4a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block4a__0se_expand_conv (Con (None, 1, 1, 240) 2640 k_block4a__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block4a__0se_expand (Activati (None, 1, 1, 240) 0 k_block4a__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block4a__0se_excite (Multiply (None, 16, 16, 240) 0 k_block4a__0activation[0][0]
k_block4a__0se_expand[0][0]
__________________________________________________________________________________________________
k_block4a__0project_conv_conv ( (None, 16, 16, 80) 1920 k_block4a__0se_excite[0][0]
__________________________________________________________________________________________________
k_block4a__0project_conv_bn (Ba (None, 16, 16, 80) 320 k_block4a__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block4a__0project_conv_group_ (None, 16, 16, 80) 0 k_block4a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block4a__0project_conv_group_ (None, 16, 16, 80) 640 k_block4a__0project_conv_group_in
__________________________________________________________________________________________________
k_block4a__0project_conv_group_ (None, 16, 16, 80) 320 k_block4a__0project_conv_group_in
__________________________________________________________________________________________________
k_block4a__0project_conv_inter_ (None, 16, 16, 80) 0 k_block4a__0project_conv_group_in
k_block4a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block4b__0expand_conv (Conv2D (None, 16, 16, 480) 7680 k_block4a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block4b__0expand_bn (BatchNor (None, 16, 16, 480) 1920 k_block4b__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block4b__0expand (Activation) (None, 16, 16, 480) 0 k_block4b__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block4b__0expand_group_interl (None, 16, 16, 480) 0 k_block4b__0expand[0][0]
__________________________________________________________________________________________________
k_block4b__0dwconv (DepthwiseCo (None, 16, 16, 480) 4320 k_block4b__0expand_group_interlea
__________________________________________________________________________________________________
k_block4b__0bn (BatchNormalizat (None, 16, 16, 480) 1920 k_block4b__0dwconv[0][0]
__________________________________________________________________________________________________
k_block4b__0activation (Activat (None, 16, 16, 480) 0 k_block4b__0bn[0][0]
__________________________________________________________________________________________________
k_block4b__0se_squeeze (GlobalA (None, 480) 0 k_block4b__0activation[0][0]
__________________________________________________________________________________________________
k_block4b__0se_reshape (Reshape (None, 1, 1, 480) 0 k_block4b__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block4b__0se_reduce_conv (Con (None, 1, 1, 20) 500 k_block4b__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block4b__0se_reduce (Activati (None, 1, 1, 20) 0 k_block4b__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block4b__0se_reduce_group_int (None, 1, 1, 20) 40 k_block4b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block4b__0se_reduce_group_int (None, 1, 1, 20) 0 k_block4b__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block4b__0se_reduce_inter_gro (None, 1, 1, 20) 0 k_block4b__0se_reduce_group_inter
k_block4b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block4b__0se_expand_conv (Con (None, 1, 1, 480) 10080 k_block4b__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block4b__0se_expand (Activati (None, 1, 1, 480) 0 k_block4b__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block4b__0se_excite (Multiply (None, 16, 16, 480) 0 k_block4b__0activation[0][0]
k_block4b__0se_expand[0][0]
__________________________________________________________________________________________________
k_block4b__0project_conv_conv ( (None, 16, 16, 80) 1920 k_block4b__0se_excite[0][0]
__________________________________________________________________________________________________
k_block4b__0project_conv_bn (Ba (None, 16, 16, 80) 320 k_block4b__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block4b__0project_conv_group_ (None, 16, 16, 80) 0 k_block4b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block4b__0project_conv_group_ (None, 16, 16, 80) 320 k_block4b__0project_conv_group_in
__________________________________________________________________________________________________
k_block4b__0project_conv_group_ (None, 16, 16, 80) 320 k_block4b__0project_conv_group_in
__________________________________________________________________________________________________
k_block4b__0project_conv_inter_ (None, 16, 16, 80) 0 k_block4b__0project_conv_group_in
k_block4b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block4b__0drop (Dropout) (None, 16, 16, 80) 0 k_block4b__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block4b__0add (Add) (None, 16, 16, 80) 0 k_block4b__0drop[0][0]
k_block4a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block4c__0expand_conv (Conv2D (None, 16, 16, 480) 7680 k_block4b__0add[0][0]
__________________________________________________________________________________________________
k_block4c__0expand_bn (BatchNor (None, 16, 16, 480) 1920 k_block4c__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block4c__0expand (Activation) (None, 16, 16, 480) 0 k_block4c__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block4c__0expand_group_interl (None, 16, 16, 480) 0 k_block4c__0expand[0][0]
__________________________________________________________________________________________________
k_block4c__0dwconv (DepthwiseCo (None, 16, 16, 480) 4320 k_block4c__0expand_group_interlea
__________________________________________________________________________________________________
k_block4c__0bn (BatchNormalizat (None, 16, 16, 480) 1920 k_block4c__0dwconv[0][0]
__________________________________________________________________________________________________
k_block4c__0activation (Activat (None, 16, 16, 480) 0 k_block4c__0bn[0][0]
__________________________________________________________________________________________________
k_block4c__0se_squeeze (GlobalA (None, 480) 0 k_block4c__0activation[0][0]
__________________________________________________________________________________________________
k_block4c__0se_reshape (Reshape (None, 1, 1, 480) 0 k_block4c__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block4c__0se_reduce_conv (Con (None, 1, 1, 20) 500 k_block4c__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block4c__0se_reduce (Activati (None, 1, 1, 20) 0 k_block4c__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block4c__0se_reduce_group_int (None, 1, 1, 20) 40 k_block4c__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block4c__0se_reduce_group_int (None, 1, 1, 20) 0 k_block4c__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block4c__0se_reduce_inter_gro (None, 1, 1, 20) 0 k_block4c__0se_reduce_group_inter
k_block4c__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block4c__0se_expand_conv (Con (None, 1, 1, 480) 10080 k_block4c__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block4c__0se_expand (Activati (None, 1, 1, 480) 0 k_block4c__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block4c__0se_excite (Multiply (None, 16, 16, 480) 0 k_block4c__0activation[0][0]
k_block4c__0se_expand[0][0]
__________________________________________________________________________________________________
k_block4c__0project_conv_conv ( (None, 16, 16, 80) 1920 k_block4c__0se_excite[0][0]
__________________________________________________________________________________________________
k_block4c__0project_conv_bn (Ba (None, 16, 16, 80) 320 k_block4c__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block4c__0project_conv_group_ (None, 16, 16, 80) 0 k_block4c__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block4c__0project_conv_group_ (None, 16, 16, 80) 320 k_block4c__0project_conv_group_in
__________________________________________________________________________________________________
k_block4c__0project_conv_group_ (None, 16, 16, 80) 320 k_block4c__0project_conv_group_in
__________________________________________________________________________________________________
k_block4c__0project_conv_inter_ (None, 16, 16, 80) 0 k_block4c__0project_conv_group_in
k_block4c__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block4c__0drop (Dropout) (None, 16, 16, 80) 0 k_block4c__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block4c__0add (Add) (None, 16, 16, 80) 0 k_block4c__0drop[0][0]
k_block4b__0add[0][0]
__________________________________________________________________________________________________
k_block5a__0expand_conv (Conv2D (None, 16, 16, 480) 7680 k_block4c__0add[0][0]
__________________________________________________________________________________________________
k_block5a__0expand_bn (BatchNor (None, 16, 16, 480) 1920 k_block5a__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block5a__0expand (Activation) (None, 16, 16, 480) 0 k_block5a__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block5a__0expand_group_interl (None, 16, 16, 480) 0 k_block5a__0expand[0][0]
__________________________________________________________________________________________________
k_block5a__0dwconv (DepthwiseCo (None, 16, 16, 480) 12000 k_block5a__0expand_group_interlea
__________________________________________________________________________________________________
k_block5a__0bn (BatchNormalizat (None, 16, 16, 480) 1920 k_block5a__0dwconv[0][0]
__________________________________________________________________________________________________
k_block5a__0activation (Activat (None, 16, 16, 480) 0 k_block5a__0bn[0][0]
__________________________________________________________________________________________________
k_block5a__0se_squeeze (GlobalA (None, 480) 0 k_block5a__0activation[0][0]
__________________________________________________________________________________________________
k_block5a__0se_reshape (Reshape (None, 1, 1, 480) 0 k_block5a__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block5a__0se_reduce_conv (Con (None, 1, 1, 20) 500 k_block5a__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block5a__0se_reduce (Activati (None, 1, 1, 20) 0 k_block5a__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block5a__0se_reduce_group_int (None, 1, 1, 20) 40 k_block5a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block5a__0se_reduce_group_int (None, 1, 1, 20) 0 k_block5a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block5a__0se_reduce_inter_gro (None, 1, 1, 20) 0 k_block5a__0se_reduce_group_inter
k_block5a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block5a__0se_expand_conv (Con (None, 1, 1, 480) 10080 k_block5a__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block5a__0se_expand (Activati (None, 1, 1, 480) 0 k_block5a__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block5a__0se_excite (Multiply (None, 16, 16, 480) 0 k_block5a__0activation[0][0]
k_block5a__0se_expand[0][0]
__________________________________________________________________________________________________
k_block5a__0project_conv_conv ( (None, 16, 16, 112) 3360 k_block5a__0se_excite[0][0]
__________________________________________________________________________________________________
k_block5a__0project_conv_bn (Ba (None, 16, 16, 112) 448 k_block5a__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block5a__0project_conv_group_ (None, 16, 16, 112) 0 k_block5a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block5a__0project_conv_group_ (None, 16, 16, 112) 784 k_block5a__0project_conv_group_in
__________________________________________________________________________________________________
k_block5a__0project_conv_group_ (None, 16, 16, 112) 448 k_block5a__0project_conv_group_in
__________________________________________________________________________________________________
k_block5a__0project_conv_inter_ (None, 16, 16, 112) 0 k_block5a__0project_conv_group_in
k_block5a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block5b__0expand_conv (Conv2D (None, 16, 16, 672) 10752 k_block5a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block5b__0expand_bn (BatchNor (None, 16, 16, 672) 2688 k_block5b__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block5b__0expand (Activation) (None, 16, 16, 672) 0 k_block5b__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block5b__0expand_group_interl (None, 16, 16, 672) 0 k_block5b__0expand[0][0]
__________________________________________________________________________________________________
k_block5b__0dwconv (DepthwiseCo (None, 16, 16, 672) 16800 k_block5b__0expand_group_interlea
__________________________________________________________________________________________________
k_block5b__0bn (BatchNormalizat (None, 16, 16, 672) 2688 k_block5b__0dwconv[0][0]
__________________________________________________________________________________________________
k_block5b__0activation (Activat (None, 16, 16, 672) 0 k_block5b__0bn[0][0]
__________________________________________________________________________________________________
k_block5b__0se_squeeze (GlobalA (None, 672) 0 k_block5b__0activation[0][0]
__________________________________________________________________________________________________
k_block5b__0se_reshape (Reshape (None, 1, 1, 672) 0 k_block5b__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block5b__0se_reduce_conv (Con (None, 1, 1, 28) 700 k_block5b__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block5b__0se_reduce (Activati (None, 1, 1, 28) 0 k_block5b__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block5b__0se_reduce_group_int (None, 1, 1, 28) 56 k_block5b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block5b__0se_reduce_group_int (None, 1, 1, 28) 0 k_block5b__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block5b__0se_reduce_inter_gro (None, 1, 1, 28) 0 k_block5b__0se_reduce_group_inter
k_block5b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block5b__0se_expand_conv (Con (None, 1, 1, 672) 19488 k_block5b__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block5b__0se_expand (Activati (None, 1, 1, 672) 0 k_block5b__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block5b__0se_excite (Multiply (None, 16, 16, 672) 0 k_block5b__0activation[0][0]
k_block5b__0se_expand[0][0]
__________________________________________________________________________________________________
k_block5b__0project_conv_conv ( (None, 16, 16, 112) 2688 k_block5b__0se_excite[0][0]
__________________________________________________________________________________________________
k_block5b__0project_conv_bn (Ba (None, 16, 16, 112) 448 k_block5b__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block5b__0project_conv_group_ (None, 16, 16, 112) 0 k_block5b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block5b__0project_conv_group_ (None, 16, 16, 112) 448 k_block5b__0project_conv_group_in
__________________________________________________________________________________________________
k_block5b__0project_conv_group_ (None, 16, 16, 112) 448 k_block5b__0project_conv_group_in
__________________________________________________________________________________________________
k_block5b__0project_conv_inter_ (None, 16, 16, 112) 0 k_block5b__0project_conv_group_in
k_block5b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block5b__0drop (Dropout) (None, 16, 16, 112) 0 k_block5b__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block5b__0add (Add) (None, 16, 16, 112) 0 k_block5b__0drop[0][0]
k_block5a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block5c__0expand_conv (Conv2D (None, 16, 16, 672) 10752 k_block5b__0add[0][0]
__________________________________________________________________________________________________
k_block5c__0expand_bn (BatchNor (None, 16, 16, 672) 2688 k_block5c__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block5c__0expand (Activation) (None, 16, 16, 672) 0 k_block5c__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block5c__0expand_group_interl (None, 16, 16, 672) 0 k_block5c__0expand[0][0]
__________________________________________________________________________________________________
k_block5c__0dwconv (DepthwiseCo (None, 16, 16, 672) 16800 k_block5c__0expand_group_interlea
__________________________________________________________________________________________________
k_block5c__0bn (BatchNormalizat (None, 16, 16, 672) 2688 k_block5c__0dwconv[0][0]
__________________________________________________________________________________________________
k_block5c__0activation (Activat (None, 16, 16, 672) 0 k_block5c__0bn[0][0]
__________________________________________________________________________________________________
k_block5c__0se_squeeze (GlobalA (None, 672) 0 k_block5c__0activation[0][0]
__________________________________________________________________________________________________
k_block5c__0se_reshape (Reshape (None, 1, 1, 672) 0 k_block5c__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block5c__0se_reduce_conv (Con (None, 1, 1, 28) 700 k_block5c__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block5c__0se_reduce (Activati (None, 1, 1, 28) 0 k_block5c__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block5c__0se_reduce_group_int (None, 1, 1, 28) 56 k_block5c__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block5c__0se_reduce_group_int (None, 1, 1, 28) 0 k_block5c__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block5c__0se_reduce_inter_gro (None, 1, 1, 28) 0 k_block5c__0se_reduce_group_inter
k_block5c__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block5c__0se_expand_conv (Con (None, 1, 1, 672) 19488 k_block5c__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block5c__0se_expand (Activati (None, 1, 1, 672) 0 k_block5c__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block5c__0se_excite (Multiply (None, 16, 16, 672) 0 k_block5c__0activation[0][0]
k_block5c__0se_expand[0][0]
__________________________________________________________________________________________________
k_block5c__0project_conv_conv ( (None, 16, 16, 112) 2688 k_block5c__0se_excite[0][0]
__________________________________________________________________________________________________
k_block5c__0project_conv_bn (Ba (None, 16, 16, 112) 448 k_block5c__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block5c__0project_conv_group_ (None, 16, 16, 112) 0 k_block5c__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block5c__0project_conv_group_ (None, 16, 16, 112) 448 k_block5c__0project_conv_group_in
__________________________________________________________________________________________________
k_block5c__0project_conv_group_ (None, 16, 16, 112) 448 k_block5c__0project_conv_group_in
__________________________________________________________________________________________________
k_block5c__0project_conv_inter_ (None, 16, 16, 112) 0 k_block5c__0project_conv_group_in
k_block5c__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block5c__0drop (Dropout) (None, 16, 16, 112) 0 k_block5c__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block5c__0add (Add) (None, 16, 16, 112) 0 k_block5c__0drop[0][0]
k_block5b__0add[0][0]
__________________________________________________________________________________________________
k_block6a__0expand_conv (Conv2D (None, 16, 16, 672) 10752 k_block5c__0add[0][0]
__________________________________________________________________________________________________
k_block6a__0expand_bn (BatchNor (None, 16, 16, 672) 2688 k_block6a__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block6a__0expand (Activation) (None, 16, 16, 672) 0 k_block6a__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block6a__0expand_group_interl (None, 16, 16, 672) 0 k_block6a__0expand[0][0]
__________________________________________________________________________________________________
k_block6a__0dwconv_pad (ZeroPad (None, 19, 19, 672) 0 k_block6a__0expand_group_interlea
__________________________________________________________________________________________________
k_block6a__0dwconv (DepthwiseCo (None, 8, 8, 672) 16800 k_block6a__0dwconv_pad[0][0]
__________________________________________________________________________________________________
k_block6a__0bn (BatchNormalizat (None, 8, 8, 672) 2688 k_block6a__0dwconv[0][0]
__________________________________________________________________________________________________
k_block6a__0activation (Activat (None, 8, 8, 672) 0 k_block6a__0bn[0][0]
__________________________________________________________________________________________________
k_block6a__0se_squeeze (GlobalA (None, 672) 0 k_block6a__0activation[0][0]
__________________________________________________________________________________________________
k_block6a__0se_reshape (Reshape (None, 1, 1, 672) 0 k_block6a__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block6a__0se_reduce_conv (Con (None, 1, 1, 28) 700 k_block6a__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block6a__0se_reduce (Activati (None, 1, 1, 28) 0 k_block6a__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block6a__0se_reduce_group_int (None, 1, 1, 28) 56 k_block6a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block6a__0se_reduce_group_int (None, 1, 1, 28) 0 k_block6a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block6a__0se_reduce_inter_gro (None, 1, 1, 28) 0 k_block6a__0se_reduce_group_inter
k_block6a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block6a__0se_expand_conv (Con (None, 1, 1, 672) 19488 k_block6a__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block6a__0se_expand (Activati (None, 1, 1, 672) 0 k_block6a__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block6a__0se_excite (Multiply (None, 8, 8, 672) 0 k_block6a__0activation[0][0]
k_block6a__0se_expand[0][0]
__________________________________________________________________________________________________
k_block6a__0project_conv_conv ( (None, 8, 8, 192) 4032 k_block6a__0se_excite[0][0]
__________________________________________________________________________________________________
k_block6a__0project_conv_bn (Ba (None, 8, 8, 192) 768 k_block6a__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block6a__0project_conv_group_ (None, 8, 8, 192) 0 k_block6a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block6a__0project_conv_group_ (None, 8, 8, 192) 1152 k_block6a__0project_conv_group_in
__________________________________________________________________________________________________
k_block6a__0project_conv_group_ (None, 8, 8, 192) 768 k_block6a__0project_conv_group_in
__________________________________________________________________________________________________
k_block6a__0project_conv_inter_ (None, 8, 8, 192) 0 k_block6a__0project_conv_group_in
k_block6a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block6b__0expand_conv (Conv2D (None, 8, 8, 1152) 18432 k_block6a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block6b__0expand_bn (BatchNor (None, 8, 8, 1152) 4608 k_block6b__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block6b__0expand (Activation) (None, 8, 8, 1152) 0 k_block6b__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block6b__0expand_group_interl (None, 8, 8, 1152) 0 k_block6b__0expand[0][0]
__________________________________________________________________________________________________
k_block6b__0dwconv (DepthwiseCo (None, 8, 8, 1152) 28800 k_block6b__0expand_group_interlea
__________________________________________________________________________________________________
k_block6b__0bn (BatchNormalizat (None, 8, 8, 1152) 4608 k_block6b__0dwconv[0][0]
__________________________________________________________________________________________________
k_block6b__0activation (Activat (None, 8, 8, 1152) 0 k_block6b__0bn[0][0]
__________________________________________________________________________________________________
k_block6b__0se_squeeze (GlobalA (None, 1152) 0 k_block6b__0activation[0][0]
__________________________________________________________________________________________________
k_block6b__0se_reshape (Reshape (None, 1, 1, 1152) 0 k_block6b__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block6b__0se_reduce_conv (Con (None, 1, 1, 48) 1200 k_block6b__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block6b__0se_reduce (Activati (None, 1, 1, 48) 0 k_block6b__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block6b__0se_reduce_group_int (None, 1, 1, 48) 96 k_block6b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block6b__0se_reduce_group_int (None, 1, 1, 48) 0 k_block6b__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block6b__0se_reduce_inter_gro (None, 1, 1, 48) 0 k_block6b__0se_reduce_group_inter
k_block6b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block6b__0se_expand_conv (Con (None, 1, 1, 1152) 19584 k_block6b__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block6b__0se_expand (Activati (None, 1, 1, 1152) 0 k_block6b__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block6b__0se_expand_group_int (None, 1, 1, 1152) 0 k_block6b__0se_expand[0][0]
__________________________________________________________________________________________________
k_block6b__0se_excite (Multiply (None, 8, 8, 1152) 0 k_block6b__0activation[0][0]
k_block6b__0se_expand_group_inter
__________________________________________________________________________________________________
k_block6b__0project_conv_conv ( (None, 8, 8, 192) 3456 k_block6b__0se_excite[0][0]
__________________________________________________________________________________________________
k_block6b__0project_conv_bn (Ba (None, 8, 8, 192) 768 k_block6b__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block6b__0project_conv_group_ (None, 8, 8, 192) 0 k_block6b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block6b__0project_conv_group_ (None, 8, 8, 192) 576 k_block6b__0project_conv_group_in
__________________________________________________________________________________________________
k_block6b__0project_conv_group_ (None, 8, 8, 192) 768 k_block6b__0project_conv_group_in
__________________________________________________________________________________________________
k_block6b__0project_conv_inter_ (None, 8, 8, 192) 0 k_block6b__0project_conv_group_in
k_block6b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block6b__0drop (Dropout) (None, 8, 8, 192) 0 k_block6b__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block6b__0add (Add) (None, 8, 8, 192) 0 k_block6b__0drop[0][0]
k_block6a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block6c__0expand_conv (Conv2D (None, 8, 8, 1152) 18432 k_block6b__0add[0][0]
__________________________________________________________________________________________________
k_block6c__0expand_bn (BatchNor (None, 8, 8, 1152) 4608 k_block6c__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block6c__0expand (Activation) (None, 8, 8, 1152) 0 k_block6c__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block6c__0expand_group_interl (None, 8, 8, 1152) 0 k_block6c__0expand[0][0]
__________________________________________________________________________________________________
k_block6c__0dwconv (DepthwiseCo (None, 8, 8, 1152) 28800 k_block6c__0expand_group_interlea
__________________________________________________________________________________________________
k_block6c__0bn (BatchNormalizat (None, 8, 8, 1152) 4608 k_block6c__0dwconv[0][0]
__________________________________________________________________________________________________
k_block6c__0activation (Activat (None, 8, 8, 1152) 0 k_block6c__0bn[0][0]
__________________________________________________________________________________________________
k_block6c__0se_squeeze (GlobalA (None, 1152) 0 k_block6c__0activation[0][0]
__________________________________________________________________________________________________
k_block6c__0se_reshape (Reshape (None, 1, 1, 1152) 0 k_block6c__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block6c__0se_reduce_conv (Con (None, 1, 1, 48) 1200 k_block6c__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block6c__0se_reduce (Activati (None, 1, 1, 48) 0 k_block6c__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block6c__0se_reduce_group_int (None, 1, 1, 48) 96 k_block6c__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block6c__0se_reduce_group_int (None, 1, 1, 48) 0 k_block6c__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block6c__0se_reduce_inter_gro (None, 1, 1, 48) 0 k_block6c__0se_reduce_group_inter
k_block6c__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block6c__0se_expand_conv (Con (None, 1, 1, 1152) 19584 k_block6c__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block6c__0se_expand (Activati (None, 1, 1, 1152) 0 k_block6c__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block6c__0se_expand_group_int (None, 1, 1, 1152) 0 k_block6c__0se_expand[0][0]
__________________________________________________________________________________________________
k_block6c__0se_excite (Multiply (None, 8, 8, 1152) 0 k_block6c__0activation[0][0]
k_block6c__0se_expand_group_inter
__________________________________________________________________________________________________
k_block6c__0project_conv_conv ( (None, 8, 8, 192) 3456 k_block6c__0se_excite[0][0]
__________________________________________________________________________________________________
k_block6c__0project_conv_bn (Ba (None, 8, 8, 192) 768 k_block6c__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block6c__0project_conv_group_ (None, 8, 8, 192) 0 k_block6c__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block6c__0project_conv_group_ (None, 8, 8, 192) 576 k_block6c__0project_conv_group_in
__________________________________________________________________________________________________
k_block6c__0project_conv_group_ (None, 8, 8, 192) 768 k_block6c__0project_conv_group_in
__________________________________________________________________________________________________
k_block6c__0project_conv_inter_ (None, 8, 8, 192) 0 k_block6c__0project_conv_group_in
k_block6c__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block6c__0drop (Dropout) (None, 8, 8, 192) 0 k_block6c__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block6c__0add (Add) (None, 8, 8, 192) 0 k_block6c__0drop[0][0]
k_block6b__0add[0][0]
__________________________________________________________________________________________________
k_block6d__0expand_conv (Conv2D (None, 8, 8, 1152) 18432 k_block6c__0add[0][0]
__________________________________________________________________________________________________
k_block6d__0expand_bn (BatchNor (None, 8, 8, 1152) 4608 k_block6d__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block6d__0expand (Activation) (None, 8, 8, 1152) 0 k_block6d__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block6d__0expand_group_interl (None, 8, 8, 1152) 0 k_block6d__0expand[0][0]
__________________________________________________________________________________________________
k_block6d__0dwconv (DepthwiseCo (None, 8, 8, 1152) 28800 k_block6d__0expand_group_interlea
__________________________________________________________________________________________________
k_block6d__0bn (BatchNormalizat (None, 8, 8, 1152) 4608 k_block6d__0dwconv[0][0]
__________________________________________________________________________________________________
k_block6d__0activation (Activat (None, 8, 8, 1152) 0 k_block6d__0bn[0][0]
__________________________________________________________________________________________________
k_block6d__0se_squeeze (GlobalA (None, 1152) 0 k_block6d__0activation[0][0]
__________________________________________________________________________________________________
k_block6d__0se_reshape (Reshape (None, 1, 1, 1152) 0 k_block6d__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block6d__0se_reduce_conv (Con (None, 1, 1, 48) 1200 k_block6d__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block6d__0se_reduce (Activati (None, 1, 1, 48) 0 k_block6d__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block6d__0se_reduce_group_int (None, 1, 1, 48) 96 k_block6d__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block6d__0se_reduce_group_int (None, 1, 1, 48) 0 k_block6d__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block6d__0se_reduce_inter_gro (None, 1, 1, 48) 0 k_block6d__0se_reduce_group_inter
k_block6d__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block6d__0se_expand_conv (Con (None, 1, 1, 1152) 19584 k_block6d__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block6d__0se_expand (Activati (None, 1, 1, 1152) 0 k_block6d__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block6d__0se_expand_group_int (None, 1, 1, 1152) 0 k_block6d__0se_expand[0][0]
__________________________________________________________________________________________________
k_block6d__0se_excite (Multiply (None, 8, 8, 1152) 0 k_block6d__0activation[0][0]
k_block6d__0se_expand_group_inter
__________________________________________________________________________________________________
k_block6d__0project_conv_conv ( (None, 8, 8, 192) 3456 k_block6d__0se_excite[0][0]
__________________________________________________________________________________________________
k_block6d__0project_conv_bn (Ba (None, 8, 8, 192) 768 k_block6d__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block6d__0project_conv_group_ (None, 8, 8, 192) 0 k_block6d__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block6d__0project_conv_group_ (None, 8, 8, 192) 576 k_block6d__0project_conv_group_in
__________________________________________________________________________________________________
k_block6d__0project_conv_group_ (None, 8, 8, 192) 768 k_block6d__0project_conv_group_in
__________________________________________________________________________________________________
k_block6d__0project_conv_inter_ (None, 8, 8, 192) 0 k_block6d__0project_conv_group_in
k_block6d__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block6d__0drop (Dropout) (None, 8, 8, 192) 0 k_block6d__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block6d__0add (Add) (None, 8, 8, 192) 0 k_block6d__0drop[0][0]
k_block6c__0add[0][0]
__________________________________________________________________________________________________
k_block7a__0expand_conv (Conv2D (None, 8, 8, 1152) 18432 k_block6d__0add[0][0]
__________________________________________________________________________________________________
k_block7a__0expand_bn (BatchNor (None, 8, 8, 1152) 4608 k_block7a__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block7a__0expand (Activation) (None, 8, 8, 1152) 0 k_block7a__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block7a__0expand_group_interl (None, 8, 8, 1152) 0 k_block7a__0expand[0][0]
__________________________________________________________________________________________________
k_block7a__0dwconv (DepthwiseCo (None, 8, 8, 1152) 10368 k_block7a__0expand_group_interlea
__________________________________________________________________________________________________
k_block7a__0bn (BatchNormalizat (None, 8, 8, 1152) 4608 k_block7a__0dwconv[0][0]
__________________________________________________________________________________________________
k_block7a__0activation (Activat (None, 8, 8, 1152) 0 k_block7a__0bn[0][0]
__________________________________________________________________________________________________
k_block7a__0se_squeeze (GlobalA (None, 1152) 0 k_block7a__0activation[0][0]
__________________________________________________________________________________________________
k_block7a__0se_reshape (Reshape (None, 1, 1, 1152) 0 k_block7a__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block7a__0se_reduce_conv (Con (None, 1, 1, 48) 1200 k_block7a__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block7a__0se_reduce (Activati (None, 1, 1, 48) 0 k_block7a__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block7a__0se_reduce_group_int (None, 1, 1, 48) 96 k_block7a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block7a__0se_reduce_group_int (None, 1, 1, 48) 0 k_block7a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block7a__0se_reduce_inter_gro (None, 1, 1, 48) 0 k_block7a__0se_reduce_group_inter
k_block7a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block7a__0se_expand_conv (Con (None, 1, 1, 1152) 19584 k_block7a__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block7a__0se_expand (Activati (None, 1, 1, 1152) 0 k_block7a__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block7a__0se_expand_group_int (None, 1, 1, 1152) 0 k_block7a__0se_expand[0][0]
__________________________________________________________________________________________________
k_block7a__0se_excite (Multiply (None, 8, 8, 1152) 0 k_block7a__0activation[0][0]
k_block7a__0se_expand_group_inter
__________________________________________________________________________________________________
k_block7a__0project_conv_conv ( (None, 8, 8, 320) 5760 k_block7a__0se_excite[0][0]
__________________________________________________________________________________________________
k_block7a__0project_conv_bn (Ba (None, 8, 8, 320) 1280 k_block7a__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block7a__0project_conv_group_ (None, 8, 8, 320) 0 k_block7a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block7a__0project_conv_group_ (None, 8, 8, 320) 1600 k_block7a__0project_conv_group_in
__________________________________________________________________________________________________
k_block7a__0project_conv_group_ (None, 8, 8, 320) 1280 k_block7a__0project_conv_group_in
__________________________________________________________________________________________________
k_block7a__0project_conv_inter_ (None, 8, 8, 320) 0 k_block7a__0project_conv_group_in
k_block7a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_top_conv_conv (Conv2D) (None, 8, 8, 1280) 20480 k_block7a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_top_conv_bn (BatchNormalizati (None, 8, 8, 1280) 5120 k_top_conv_conv[0][0]
__________________________________________________________________________________________________
k_top_conv_group_interleaved (I (None, 8, 8, 1280) 0 k_top_conv_bn[0][0]
__________________________________________________________________________________________________
k_avg_pool (GlobalAveragePoolin (None, 1280) 0 k_top_conv_group_interleaved[0][0
__________________________________________________________________________________________________
k_top_dropout (Dropout) (None, 1280) 0 k_avg_pool[0][0]
__________________________________________________________________________________________________
k_probs (Dense) (None, 10) 12810 k_top_dropout[0][0]
==================================================================================================
Total params: 685,334
Trainable params: 639,702
Non-trainable params: 45,632
__________________________________________________________________________________________________
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/nn_ops.py:5063: tensor_shape_from_node_def_name (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.tensor_shape_from_node_def_name`
model flops: 84833890
Finished: /content/drive/MyDrive/output/JP30B28-EfficientNet-CIFAR10-2
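The "model flops: 84833890" figure reported above is the kind of count that is commonly obtained by freezing the Keras model into a constant graph and running the TF1 profiler's float-operation counter over it; the `tensor_shape_from_node_def_name` deprecation warning logged just before it is typical of that profiling path. A minimal sketch of that approach, assuming a hypothetical `count_flops` helper rather than the notebook's own code:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

def count_flops(model):
    # Wrap the Keras model in a concrete function with a fixed batch size of 1.
    spec = tf.TensorSpec([1] + list(model.inputs[0].shape[1:]), model.inputs[0].dtype)
    concrete = tf.function(lambda x: model(x)).get_concrete_function(spec)
    # Freeze variables into constants so the profiler sees a static graph.
    frozen = convert_variables_to_constants_v2(concrete)
    # Count float operations with the TF1 profiler; this code path is a common
    # source of the tensor_shape_from_node_def_name deprecation warning above.
    opts = tf.compat.v1.profiler.ProfileOptionBuilder.float_operation()
    info = tf.compat.v1.profiler.profile(
        graph=frozen.graph, run_meta=tf.compat.v1.RunMetadata(), options=opts)
    return info.total_float_ops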
Running: /content/drive/MyDrive/output/JP30B28-EfficientNet-CIFAR10-13
Model: "kEffNet-b0"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_2 (InputLayer) [(None, 32, 32, 3)] 0
__________________________________________________________________________________________________
k_stem_conv_pad (ZeroPadding2D) (None, 33, 33, 3) 0 input_2[0][0]
__________________________________________________________________________________________________
k_stem_conv (Conv2D) (None, 31, 31, 32) 864 k_stem_conv_pad[0][0]
__________________________________________________________________________________________________
k_stem_bn (BatchNormalization) (None, 31, 31, 32) 128 k_stem_conv[0][0]
__________________________________________________________________________________________________
k_stem_activation (Activation) (None, 31, 31, 32) 0 k_stem_bn[0][0]
__________________________________________________________________________________________________
k_block1a__0dwconv (DepthwiseCo (None, 31, 31, 32) 288 k_stem_activation[0][0]
__________________________________________________________________________________________________
k_block1a__0bn (BatchNormalizat (None, 31, 31, 32) 128 k_block1a__0dwconv[0][0]
__________________________________________________________________________________________________
k_block1a__0activation (Activat (None, 31, 31, 32) 0 k_block1a__0bn[0][0]
__________________________________________________________________________________________________
k_block1a__0se_squeeze (GlobalA (None, 32) 0 k_block1a__0activation[0][0]
__________________________________________________________________________________________________
k_block1a__0se_reshape (Reshape (None, 1, 1, 32) 0 k_block1a__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block1a__0se_reduce_conv (Con (None, 1, 1, 8) 264 k_block1a__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block1a__0se_reduce (Activati (None, 1, 1, 8) 0 k_block1a__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block1a__0se_expand_conv (Con (None, 1, 1, 32) 288 k_block1a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block1a__0se_expand (Activati (None, 1, 1, 32) 0 k_block1a__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block1a__0se_excite (Multiply (None, 31, 31, 32) 0 k_block1a__0activation[0][0]
k_block1a__0se_expand[0][0]
__________________________________________________________________________________________________
k_block1a__0project_conv_conv ( (None, 31, 31, 16) 512 k_block1a__0se_excite[0][0]
__________________________________________________________________________________________________
k_block1a__0project_conv_bn (Ba (None, 31, 31, 16) 64 k_block1a__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block2a__0expand_conv (Conv2D (None, 31, 31, 96) 1536 k_block1a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block2a__0expand_bn (BatchNor (None, 31, 31, 96) 384 k_block2a__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block2a__0expand (Activation) (None, 31, 31, 96) 0 k_block2a__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block2a__0dwconv (DepthwiseCo (None, 31, 31, 96) 864 k_block2a__0expand[0][0]
__________________________________________________________________________________________________
k_block2a__0bn (BatchNormalizat (None, 31, 31, 96) 384 k_block2a__0dwconv[0][0]
__________________________________________________________________________________________________
k_block2a__0activation (Activat (None, 31, 31, 96) 0 k_block2a__0bn[0][0]
__________________________________________________________________________________________________
k_block2a__0se_squeeze (GlobalA (None, 96) 0 k_block2a__0activation[0][0]
__________________________________________________________________________________________________
k_block2a__0se_reshape (Reshape (None, 1, 1, 96) 0 k_block2a__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block2a__0se_reduce_conv (Con (None, 1, 1, 4) 196 k_block2a__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block2a__0se_reduce (Activati (None, 1, 1, 4) 0 k_block2a__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block2a__0se_reduce_group_int (None, 1, 1, 4) 0 k_block2a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block2a__0se_reduce_group_int (None, 1, 1, 4) 12 k_block2a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block2a__0se_reduce_group_int (None, 1, 1, 4) 0 k_block2a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block2a__0se_reduce_inter_gro (None, 1, 1, 4) 0 k_block2a__0se_reduce_group_inter
k_block2a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block2a__0se_expand_conv (Con (None, 1, 1, 96) 480 k_block2a__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block2a__0se_expand (Activati (None, 1, 1, 96) 0 k_block2a__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block2a__0se_excite (Multiply (None, 31, 31, 96) 0 k_block2a__0activation[0][0]
k_block2a__0se_expand[0][0]
__________________________________________________________________________________________________
k_block2a__0project_conv_conv ( (None, 31, 31, 24) 768 k_block2a__0se_excite[0][0]
__________________________________________________________________________________________________
k_block2a__0project_conv_bn (Ba (None, 31, 31, 24) 96 k_block2a__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block2a__0project_conv_group_ (None, 31, 31, 24) 0 k_block2a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block2a__0project_conv_group_ (None, 31, 31, 24) 192 k_block2a__0project_conv_group_in
__________________________________________________________________________________________________
k_block2a__0project_conv_group_ (None, 31, 31, 24) 96 k_block2a__0project_conv_group_in
__________________________________________________________________________________________________
k_block2a__0project_conv_inter_ (None, 31, 31, 24) 0 k_block2a__0project_conv_group_in
k_block2a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block2b__0expand_conv (Conv2D (None, 31, 31, 144) 3456 k_block2a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block2b__0expand_bn (BatchNor (None, 31, 31, 144) 576 k_block2b__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block2b__0expand (Activation) (None, 31, 31, 144) 0 k_block2b__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block2b__0dwconv (DepthwiseCo (None, 31, 31, 144) 1296 k_block2b__0expand[0][0]
__________________________________________________________________________________________________
k_block2b__0bn (BatchNormalizat (None, 31, 31, 144) 576 k_block2b__0dwconv[0][0]
__________________________________________________________________________________________________
k_block2b__0activation (Activat (None, 31, 31, 144) 0 k_block2b__0bn[0][0]
__________________________________________________________________________________________________
k_block2b__0se_squeeze (GlobalA (None, 144) 0 k_block2b__0activation[0][0]
__________________________________________________________________________________________________
k_block2b__0se_reshape (Reshape (None, 1, 1, 144) 0 k_block2b__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block2b__0se_reduce_conv (Con (None, 1, 1, 6) 294 k_block2b__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block2b__0se_reduce (Activati (None, 1, 1, 6) 0 k_block2b__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block2b__0se_reduce_group_int (None, 1, 1, 6) 0 k_block2b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block2b__0se_reduce_group_int (None, 1, 1, 6) 18 k_block2b__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block2b__0se_reduce_group_int (None, 1, 1, 6) 0 k_block2b__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block2b__0se_reduce_inter_gro (None, 1, 1, 6) 0 k_block2b__0se_reduce_group_inter
k_block2b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block2b__0se_expand_conv (Con (None, 1, 1, 144) 1008 k_block2b__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block2b__0se_expand (Activati (None, 1, 1, 144) 0 k_block2b__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block2b__0se_excite (Multiply (None, 31, 31, 144) 0 k_block2b__0activation[0][0]
k_block2b__0se_expand[0][0]
__________________________________________________________________________________________________
k_block2b__0project_conv_conv ( (None, 31, 31, 24) 864 k_block2b__0se_excite[0][0]
__________________________________________________________________________________________________
k_block2b__0project_conv_bn (Ba (None, 31, 31, 24) 96 k_block2b__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block2b__0project_conv_group_ (None, 31, 31, 24) 0 k_block2b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block2b__0project_conv_group_ (None, 31, 31, 24) 144 k_block2b__0project_conv_group_in
__________________________________________________________________________________________________
k_block2b__0project_conv_group_ (None, 31, 31, 24) 96 k_block2b__0project_conv_group_in
__________________________________________________________________________________________________
k_block2b__0project_conv_inter_ (None, 31, 31, 24) 0 k_block2b__0project_conv_group_in
k_block2b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block2b__0drop (Dropout) (None, 31, 31, 24) 0 k_block2b__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block2b__0add (Add) (None, 31, 31, 24) 0 k_block2b__0drop[0][0]
k_block2a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block3a__0expand_conv (Conv2D (None, 31, 31, 144) 3456 k_block2b__0add[0][0]
__________________________________________________________________________________________________
k_block3a__0expand_bn (BatchNor (None, 31, 31, 144) 576 k_block3a__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block3a__0expand (Activation) (None, 31, 31, 144) 0 k_block3a__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block3a__0dwconv (DepthwiseCo (None, 31, 31, 144) 3600 k_block3a__0expand[0][0]
__________________________________________________________________________________________________
k_block3a__0bn (BatchNormalizat (None, 31, 31, 144) 576 k_block3a__0dwconv[0][0]
__________________________________________________________________________________________________
k_block3a__0activation (Activat (None, 31, 31, 144) 0 k_block3a__0bn[0][0]
__________________________________________________________________________________________________
k_block3a__0se_squeeze (GlobalA (None, 144) 0 k_block3a__0activation[0][0]
__________________________________________________________________________________________________
k_block3a__0se_reshape (Reshape (None, 1, 1, 144) 0 k_block3a__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block3a__0se_reduce_conv (Con (None, 1, 1, 6) 294 k_block3a__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block3a__0se_reduce (Activati (None, 1, 1, 6) 0 k_block3a__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block3a__0se_reduce_group_int (None, 1, 1, 6) 0 k_block3a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block3a__0se_reduce_group_int (None, 1, 1, 6) 18 k_block3a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block3a__0se_reduce_group_int (None, 1, 1, 6) 0 k_block3a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block3a__0se_reduce_inter_gro (None, 1, 1, 6) 0 k_block3a__0se_reduce_group_inter
k_block3a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block3a__0se_expand_conv (Con (None, 1, 1, 144) 1008 k_block3a__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block3a__0se_expand (Activati (None, 1, 1, 144) 0 k_block3a__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block3a__0se_excite (Multiply (None, 31, 31, 144) 0 k_block3a__0activation[0][0]
k_block3a__0se_expand[0][0]
__________________________________________________________________________________________________
k_block3a__0project_conv_conv ( (None, 31, 31, 40) 1440 k_block3a__0se_excite[0][0]
__________________________________________________________________________________________________
k_block3a__0project_conv_bn (Ba (None, 31, 31, 40) 160 k_block3a__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block3a__0project_conv_group_ (None, 31, 31, 40) 0 k_block3a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block3a__0project_conv_group_ (None, 31, 31, 40) 400 k_block3a__0project_conv_group_in
__________________________________________________________________________________________________
k_block3a__0project_conv_group_ (None, 31, 31, 40) 160 k_block3a__0project_conv_group_in
__________________________________________________________________________________________________
k_block3a__0project_conv_inter_ (None, 31, 31, 40) 0 k_block3a__0project_conv_group_in
k_block3a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block3b__0expand_conv (Conv2D (None, 31, 31, 240) 9600 k_block3a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block3b__0expand_bn (BatchNor (None, 31, 31, 240) 960 k_block3b__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block3b__0expand (Activation) (None, 31, 31, 240) 0 k_block3b__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block3b__0dwconv (DepthwiseCo (None, 31, 31, 240) 6000 k_block3b__0expand[0][0]
__________________________________________________________________________________________________
k_block3b__0bn (BatchNormalizat (None, 31, 31, 240) 960 k_block3b__0dwconv[0][0]
__________________________________________________________________________________________________
k_block3b__0activation (Activat (None, 31, 31, 240) 0 k_block3b__0bn[0][0]
__________________________________________________________________________________________________
k_block3b__0se_squeeze (GlobalA (None, 240) 0 k_block3b__0activation[0][0]
__________________________________________________________________________________________________
k_block3b__0se_reshape (Reshape (None, 1, 1, 240) 0 k_block3b__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block3b__0se_reduce_conv (Con (None, 1, 1, 10) 490 k_block3b__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block3b__0se_reduce (Activati (None, 1, 1, 10) 0 k_block3b__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block3b__0se_reduce_group_int (None, 1, 1, 10) 0 k_block3b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block3b__0se_reduce_group_int (None, 1, 1, 10) 30 k_block3b__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block3b__0se_reduce_group_int (None, 1, 1, 10) 0 k_block3b__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block3b__0se_reduce_inter_gro (None, 1, 1, 10) 0 k_block3b__0se_reduce_group_inter
k_block3b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block3b__0se_expand_conv (Con (None, 1, 1, 240) 2640 k_block3b__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block3b__0se_expand (Activati (None, 1, 1, 240) 0 k_block3b__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block3b__0se_excite (Multiply (None, 31, 31, 240) 0 k_block3b__0activation[0][0]
k_block3b__0se_expand[0][0]
__________________________________________________________________________________________________
k_block3b__0project_conv_conv ( (None, 31, 31, 40) 1920 k_block3b__0se_excite[0][0]
__________________________________________________________________________________________________
k_block3b__0project_conv_bn (Ba (None, 31, 31, 40) 160 k_block3b__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block3b__0project_conv_group_ (None, 31, 31, 40) 0 k_block3b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block3b__0project_conv_group_ (None, 31, 31, 40) 320 k_block3b__0project_conv_group_in
__________________________________________________________________________________________________
k_block3b__0project_conv_group_ (None, 31, 31, 40) 160 k_block3b__0project_conv_group_in
__________________________________________________________________________________________________
k_block3b__0project_conv_inter_ (None, 31, 31, 40) 0 k_block3b__0project_conv_group_in
k_block3b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block3b__0drop (Dropout) (None, 31, 31, 40) 0 k_block3b__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block3b__0add (Add) (None, 31, 31, 40) 0 k_block3b__0drop[0][0]
k_block3a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block4a__0expand_conv (Conv2D (None, 31, 31, 240) 9600 k_block3b__0add[0][0]
__________________________________________________________________________________________________
k_block4a__0expand_bn (BatchNor (None, 31, 31, 240) 960 k_block4a__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block4a__0expand (Activation) (None, 31, 31, 240) 0 k_block4a__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block4a__0dwconv_pad (ZeroPad (None, 33, 33, 240) 0 k_block4a__0expand[0][0]
__________________________________________________________________________________________________
k_block4a__0dwconv (DepthwiseCo (None, 16, 16, 240) 2160 k_block4a__0dwconv_pad[0][0]
__________________________________________________________________________________________________
k_block4a__0bn (BatchNormalizat (None, 16, 16, 240) 960 k_block4a__0dwconv[0][0]
__________________________________________________________________________________________________
k_block4a__0activation (Activat (None, 16, 16, 240) 0 k_block4a__0bn[0][0]
__________________________________________________________________________________________________
k_block4a__0se_squeeze (GlobalA (None, 240) 0 k_block4a__0activation[0][0]
__________________________________________________________________________________________________
k_block4a__0se_reshape (Reshape (None, 1, 1, 240) 0 k_block4a__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block4a__0se_reduce_conv (Con (None, 1, 1, 10) 490 k_block4a__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block4a__0se_reduce (Activati (None, 1, 1, 10) 0 k_block4a__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block4a__0se_reduce_group_int (None, 1, 1, 10) 0 k_block4a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block4a__0se_reduce_group_int (None, 1, 1, 10) 30 k_block4a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block4a__0se_reduce_group_int (None, 1, 1, 10) 0 k_block4a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block4a__0se_reduce_inter_gro (None, 1, 1, 10) 0 k_block4a__0se_reduce_group_inter
k_block4a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block4a__0se_expand_conv (Con (None, 1, 1, 240) 2640 k_block4a__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block4a__0se_expand (Activati (None, 1, 1, 240) 0 k_block4a__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block4a__0se_excite (Multiply (None, 16, 16, 240) 0 k_block4a__0activation[0][0]
k_block4a__0se_expand[0][0]
__________________________________________________________________________________________________
k_block4a__0project_conv_conv ( (None, 16, 16, 80) 3840 k_block4a__0se_excite[0][0]
__________________________________________________________________________________________________
k_block4a__0project_conv_bn (Ba (None, 16, 16, 80) 320 k_block4a__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block4a__0project_conv_group_ (None, 16, 16, 80) 0 k_block4a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block4a__0project_conv_group_ (None, 16, 16, 80) 1280 k_block4a__0project_conv_group_in
__________________________________________________________________________________________________
k_block4a__0project_conv_group_ (None, 16, 16, 80) 320 k_block4a__0project_conv_group_in
__________________________________________________________________________________________________
k_block4a__0project_conv_inter_ (None, 16, 16, 80) 0 k_block4a__0project_conv_group_in
k_block4a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block4b__0expand_conv (Conv2D (None, 16, 16, 480) 19200 k_block4a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block4b__0expand_bn (BatchNor (None, 16, 16, 480) 1920 k_block4b__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block4b__0expand (Activation) (None, 16, 16, 480) 0 k_block4b__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block4b__0expand_group_interl (None, 16, 16, 480) 0 k_block4b__0expand[0][0]
__________________________________________________________________________________________________
k_block4b__0dwconv (DepthwiseCo (None, 16, 16, 480) 4320 k_block4b__0expand_group_interlea
__________________________________________________________________________________________________
k_block4b__0bn (BatchNormalizat (None, 16, 16, 480) 1920 k_block4b__0dwconv[0][0]
__________________________________________________________________________________________________
k_block4b__0activation (Activat (None, 16, 16, 480) 0 k_block4b__0bn[0][0]
__________________________________________________________________________________________________
k_block4b__0se_squeeze (GlobalA (None, 480) 0 k_block4b__0activation[0][0]
__________________________________________________________________________________________________
k_block4b__0se_reshape (Reshape (None, 1, 1, 480) 0 k_block4b__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block4b__0se_reduce_conv (Con (None, 1, 1, 20) 980 k_block4b__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block4b__0se_reduce (Activati (None, 1, 1, 20) 0 k_block4b__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block4b__0se_reduce_group_int (None, 1, 1, 20) 0 k_block4b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block4b__0se_reduce_group_int (None, 1, 1, 20) 60 k_block4b__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block4b__0se_reduce_group_int (None, 1, 1, 20) 0 k_block4b__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block4b__0se_reduce_inter_gro (None, 1, 1, 20) 0 k_block4b__0se_reduce_group_inter
k_block4b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block4b__0se_expand_conv (Con (None, 1, 1, 480) 10080 k_block4b__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block4b__0se_expand (Activati (None, 1, 1, 480) 0 k_block4b__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block4b__0se_excite (Multiply (None, 16, 16, 480) 0 k_block4b__0activation[0][0]
k_block4b__0se_expand[0][0]
__________________________________________________________________________________________________
k_block4b__0project_conv_conv ( (None, 16, 16, 80) 3840 k_block4b__0se_excite[0][0]
__________________________________________________________________________________________________
k_block4b__0project_conv_bn (Ba (None, 16, 16, 80) 320 k_block4b__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block4b__0project_conv_group_ (None, 16, 16, 80) 0 k_block4b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block4b__0project_conv_group_ (None, 16, 16, 80) 640 k_block4b__0project_conv_group_in
__________________________________________________________________________________________________
k_block4b__0project_conv_group_ (None, 16, 16, 80) 320 k_block4b__0project_conv_group_in
__________________________________________________________________________________________________
k_block4b__0project_conv_inter_ (None, 16, 16, 80) 0 k_block4b__0project_conv_group_in
k_block4b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block4b__0drop (Dropout) (None, 16, 16, 80) 0 k_block4b__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block4b__0add (Add) (None, 16, 16, 80) 0 k_block4b__0drop[0][0]
k_block4a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block4c__0expand_conv (Conv2D (None, 16, 16, 480) 19200 k_block4b__0add[0][0]
__________________________________________________________________________________________________
k_block4c__0expand_bn (BatchNor (None, 16, 16, 480) 1920 k_block4c__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block4c__0expand (Activation) (None, 16, 16, 480) 0 k_block4c__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block4c__0expand_group_interl (None, 16, 16, 480) 0 k_block4c__0expand[0][0]
__________________________________________________________________________________________________
k_block4c__0dwconv (DepthwiseCo (None, 16, 16, 480) 4320 k_block4c__0expand_group_interlea
__________________________________________________________________________________________________
k_block4c__0bn (BatchNormalizat (None, 16, 16, 480) 1920 k_block4c__0dwconv[0][0]
__________________________________________________________________________________________________
k_block4c__0activation (Activat (None, 16, 16, 480) 0 k_block4c__0bn[0][0]
__________________________________________________________________________________________________
k_block4c__0se_squeeze (GlobalA (None, 480) 0 k_block4c__0activation[0][0]
__________________________________________________________________________________________________
k_block4c__0se_reshape (Reshape (None, 1, 1, 480) 0 k_block4c__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block4c__0se_reduce_conv (Con (None, 1, 1, 20) 980 k_block4c__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block4c__0se_reduce (Activati (None, 1, 1, 20) 0 k_block4c__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block4c__0se_reduce_group_int (None, 1, 1, 20) 0 k_block4c__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block4c__0se_reduce_group_int (None, 1, 1, 20) 60 k_block4c__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block4c__0se_reduce_group_int (None, 1, 1, 20) 0 k_block4c__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block4c__0se_reduce_inter_gro (None, 1, 1, 20) 0 k_block4c__0se_reduce_group_inter
k_block4c__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block4c__0se_expand_conv (Con (None, 1, 1, 480) 10080 k_block4c__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block4c__0se_expand (Activati (None, 1, 1, 480) 0 k_block4c__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block4c__0se_excite (Multiply (None, 16, 16, 480) 0 k_block4c__0activation[0][0]
k_block4c__0se_expand[0][0]
__________________________________________________________________________________________________
k_block4c__0project_conv_conv ( (None, 16, 16, 80) 3840 k_block4c__0se_excite[0][0]
__________________________________________________________________________________________________
k_block4c__0project_conv_bn (Ba (None, 16, 16, 80) 320 k_block4c__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block4c__0project_conv_group_ (None, 16, 16, 80) 0 k_block4c__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block4c__0project_conv_group_ (None, 16, 16, 80) 640 k_block4c__0project_conv_group_in
__________________________________________________________________________________________________
k_block4c__0project_conv_group_ (None, 16, 16, 80) 320 k_block4c__0project_conv_group_in
__________________________________________________________________________________________________
k_block4c__0project_conv_inter_ (None, 16, 16, 80) 0 k_block4c__0project_conv_group_in
k_block4c__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block4c__0drop (Dropout) (None, 16, 16, 80) 0 k_block4c__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block4c__0add (Add) (None, 16, 16, 80) 0 k_block4c__0drop[0][0]
k_block4b__0add[0][0]
__________________________________________________________________________________________________
k_block5a__0expand_conv (Conv2D (None, 16, 16, 480) 19200 k_block4c__0add[0][0]
__________________________________________________________________________________________________
k_block5a__0expand_bn (BatchNor (None, 16, 16, 480) 1920 k_block5a__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block5a__0expand (Activation) (None, 16, 16, 480) 0 k_block5a__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block5a__0expand_group_interl (None, 16, 16, 480) 0 k_block5a__0expand[0][0]
__________________________________________________________________________________________________
k_block5a__0dwconv (DepthwiseCo (None, 16, 16, 480) 12000 k_block5a__0expand_group_interlea
__________________________________________________________________________________________________
k_block5a__0bn (BatchNormalizat (None, 16, 16, 480) 1920 k_block5a__0dwconv[0][0]
__________________________________________________________________________________________________
k_block5a__0activation (Activat (None, 16, 16, 480) 0 k_block5a__0bn[0][0]
__________________________________________________________________________________________________
k_block5a__0se_squeeze (GlobalA (None, 480) 0 k_block5a__0activation[0][0]
__________________________________________________________________________________________________
k_block5a__0se_reshape (Reshape (None, 1, 1, 480) 0 k_block5a__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block5a__0se_reduce_conv (Con (None, 1, 1, 20) 980 k_block5a__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block5a__0se_reduce (Activati (None, 1, 1, 20) 0 k_block5a__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block5a__0se_reduce_group_int (None, 1, 1, 20) 0 k_block5a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block5a__0se_reduce_group_int (None, 1, 1, 20) 60 k_block5a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block5a__0se_reduce_group_int (None, 1, 1, 20) 0 k_block5a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block5a__0se_reduce_inter_gro (None, 1, 1, 20) 0 k_block5a__0se_reduce_group_inter
k_block5a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block5a__0se_expand_conv (Con (None, 1, 1, 480) 10080 k_block5a__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block5a__0se_expand (Activati (None, 1, 1, 480) 0 k_block5a__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block5a__0se_excite (Multiply (None, 16, 16, 480) 0 k_block5a__0activation[0][0]
k_block5a__0se_expand[0][0]
__________________________________________________________________________________________________
k_block5a__0project_conv_conv ( (None, 16, 16, 112) 6720 k_block5a__0se_excite[0][0]
__________________________________________________________________________________________________
k_block5a__0project_conv_bn (Ba (None, 16, 16, 112) 448 k_block5a__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block5a__0project_conv_group_ (None, 16, 16, 112) 0 k_block5a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block5a__0project_conv_group_ (None, 16, 16, 112) 1568 k_block5a__0project_conv_group_in
__________________________________________________________________________________________________
k_block5a__0project_conv_group_ (None, 16, 16, 112) 448 k_block5a__0project_conv_group_in
__________________________________________________________________________________________________
k_block5a__0project_conv_inter_ (None, 16, 16, 112) 0 k_block5a__0project_conv_group_in
k_block5a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block5b__0expand_conv (Conv2D (None, 16, 16, 672) 37632 k_block5a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block5b__0expand_bn (BatchNor (None, 16, 16, 672) 2688 k_block5b__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block5b__0expand (Activation) (None, 16, 16, 672) 0 k_block5b__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block5b__0expand_group_interl (None, 16, 16, 672) 0 k_block5b__0expand[0][0]
__________________________________________________________________________________________________
k_block5b__0dwconv (DepthwiseCo (None, 16, 16, 672) 16800 k_block5b__0expand_group_interlea
__________________________________________________________________________________________________
k_block5b__0bn (BatchNormalizat (None, 16, 16, 672) 2688 k_block5b__0dwconv[0][0]
__________________________________________________________________________________________________
k_block5b__0activation (Activat (None, 16, 16, 672) 0 k_block5b__0bn[0][0]
__________________________________________________________________________________________________
k_block5b__0se_squeeze (GlobalA (None, 672) 0 k_block5b__0activation[0][0]
__________________________________________________________________________________________________
k_block5b__0se_reshape (Reshape (None, 1, 1, 672) 0 k_block5b__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block5b__0se_reduce_conv (Con (None, 1, 1, 28) 1372 k_block5b__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block5b__0se_reduce (Activati (None, 1, 1, 28) 0 k_block5b__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block5b__0se_reduce_group_int (None, 1, 1, 28) 0 k_block5b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block5b__0se_reduce_group_int (None, 1, 1, 28) 84 k_block5b__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block5b__0se_reduce_group_int (None, 1, 1, 28) 0 k_block5b__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block5b__0se_reduce_inter_gro (None, 1, 1, 28) 0 k_block5b__0se_reduce_group_inter
k_block5b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block5b__0se_expand_conv (Con (None, 1, 1, 672) 19488 k_block5b__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block5b__0se_expand (Activati (None, 1, 1, 672) 0 k_block5b__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block5b__0se_excite (Multiply (None, 16, 16, 672) 0 k_block5b__0activation[0][0]
k_block5b__0se_expand[0][0]
__________________________________________________________________________________________________
k_block5b__0project_conv_conv ( (None, 16, 16, 112) 4704 k_block5b__0se_excite[0][0]
__________________________________________________________________________________________________
k_block5b__0project_conv_bn (Ba (None, 16, 16, 112) 448 k_block5b__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block5b__0project_conv_group_ (None, 16, 16, 112) 0 k_block5b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block5b__0project_conv_group_ (None, 16, 16, 112) 784 k_block5b__0project_conv_group_in
__________________________________________________________________________________________________
k_block5b__0project_conv_group_ (None, 16, 16, 112) 448 k_block5b__0project_conv_group_in
__________________________________________________________________________________________________
k_block5b__0project_conv_inter_ (None, 16, 16, 112) 0 k_block5b__0project_conv_group_in
k_block5b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block5b__0drop (Dropout) (None, 16, 16, 112) 0 k_block5b__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block5b__0add (Add) (None, 16, 16, 112) 0 k_block5b__0drop[0][0]
k_block5a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block5c__0expand_conv (Conv2D (None, 16, 16, 672) 37632 k_block5b__0add[0][0]
__________________________________________________________________________________________________
k_block5c__0expand_bn (BatchNor (None, 16, 16, 672) 2688 k_block5c__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block5c__0expand (Activation) (None, 16, 16, 672) 0 k_block5c__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block5c__0expand_group_interl (None, 16, 16, 672) 0 k_block5c__0expand[0][0]
__________________________________________________________________________________________________
k_block5c__0dwconv (DepthwiseCo (None, 16, 16, 672) 16800 k_block5c__0expand_group_interlea
__________________________________________________________________________________________________
k_block5c__0bn (BatchNormalizat (None, 16, 16, 672) 2688 k_block5c__0dwconv[0][0]
__________________________________________________________________________________________________
k_block5c__0activation (Activat (None, 16, 16, 672) 0 k_block5c__0bn[0][0]
__________________________________________________________________________________________________
k_block5c__0se_squeeze (GlobalA (None, 672) 0 k_block5c__0activation[0][0]
__________________________________________________________________________________________________
k_block5c__0se_reshape (Reshape (None, 1, 1, 672) 0 k_block5c__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block5c__0se_reduce_conv (Con (None, 1, 1, 28) 1372 k_block5c__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block5c__0se_reduce (Activati (None, 1, 1, 28) 0 k_block5c__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block5c__0se_reduce_group_int (None, 1, 1, 28) 0 k_block5c__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block5c__0se_reduce_group_int (None, 1, 1, 28) 84 k_block5c__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block5c__0se_reduce_group_int (None, 1, 1, 28) 0 k_block5c__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block5c__0se_reduce_inter_gro (None, 1, 1, 28) 0 k_block5c__0se_reduce_group_inter
k_block5c__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block5c__0se_expand_conv (Con (None, 1, 1, 672) 19488 k_block5c__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block5c__0se_expand (Activati (None, 1, 1, 672) 0 k_block5c__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block5c__0se_excite (Multiply (None, 16, 16, 672) 0 k_block5c__0activation[0][0]
k_block5c__0se_expand[0][0]
__________________________________________________________________________________________________
k_block5c__0project_conv_conv ( (None, 16, 16, 112) 4704 k_block5c__0se_excite[0][0]
__________________________________________________________________________________________________
k_block5c__0project_conv_bn (Ba (None, 16, 16, 112) 448 k_block5c__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block5c__0project_conv_group_ (None, 16, 16, 112) 0 k_block5c__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block5c__0project_conv_group_ (None, 16, 16, 112) 784 k_block5c__0project_conv_group_in
__________________________________________________________________________________________________
k_block5c__0project_conv_group_ (None, 16, 16, 112) 448 k_block5c__0project_conv_group_in
__________________________________________________________________________________________________
k_block5c__0project_conv_inter_ (None, 16, 16, 112) 0 k_block5c__0project_conv_group_in
k_block5c__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block5c__0drop (Dropout) (None, 16, 16, 112) 0 k_block5c__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block5c__0add (Add) (None, 16, 16, 112) 0 k_block5c__0drop[0][0]
k_block5b__0add[0][0]
__________________________________________________________________________________________________
k_block6a__0expand_conv (Conv2D (None, 16, 16, 672) 37632 k_block5c__0add[0][0]
__________________________________________________________________________________________________
k_block6a__0expand_bn (BatchNor (None, 16, 16, 672) 2688 k_block6a__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block6a__0expand (Activation) (None, 16, 16, 672) 0 k_block6a__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block6a__0expand_group_interl (None, 16, 16, 672) 0 k_block6a__0expand[0][0]
__________________________________________________________________________________________________
k_block6a__0dwconv_pad (ZeroPad (None, 19, 19, 672) 0 k_block6a__0expand_group_interlea
__________________________________________________________________________________________________
k_block6a__0dwconv (DepthwiseCo (None, 8, 8, 672) 16800 k_block6a__0dwconv_pad[0][0]
__________________________________________________________________________________________________
k_block6a__0bn (BatchNormalizat (None, 8, 8, 672) 2688 k_block6a__0dwconv[0][0]
__________________________________________________________________________________________________
k_block6a__0activation (Activat (None, 8, 8, 672) 0 k_block6a__0bn[0][0]
__________________________________________________________________________________________________
k_block6a__0se_squeeze (GlobalA (None, 672) 0 k_block6a__0activation[0][0]
__________________________________________________________________________________________________
k_block6a__0se_reshape (Reshape (None, 1, 1, 672) 0 k_block6a__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block6a__0se_reduce_conv (Con (None, 1, 1, 28) 1372 k_block6a__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block6a__0se_reduce (Activati (None, 1, 1, 28) 0 k_block6a__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block6a__0se_reduce_group_int (None, 1, 1, 28) 0 k_block6a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block6a__0se_reduce_group_int (None, 1, 1, 28) 84 k_block6a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block6a__0se_reduce_group_int (None, 1, 1, 28) 0 k_block6a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block6a__0se_reduce_inter_gro (None, 1, 1, 28) 0 k_block6a__0se_reduce_group_inter
k_block6a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block6a__0se_expand_conv (Con (None, 1, 1, 672) 19488 k_block6a__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block6a__0se_expand (Activati (None, 1, 1, 672) 0 k_block6a__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block6a__0se_excite (Multiply (None, 8, 8, 672) 0 k_block6a__0activation[0][0]
k_block6a__0se_expand[0][0]
__________________________________________________________________________________________________
k_block6a__0project_conv_conv ( (None, 8, 8, 192) 8064 k_block6a__0se_excite[0][0]
__________________________________________________________________________________________________
k_block6a__0project_conv_bn (Ba (None, 8, 8, 192) 768 k_block6a__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block6a__0project_conv_group_ (None, 8, 8, 192) 0 k_block6a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block6a__0project_conv_group_ (None, 8, 8, 192) 2304 k_block6a__0project_conv_group_in
__________________________________________________________________________________________________
k_block6a__0project_conv_group_ (None, 8, 8, 192) 768 k_block6a__0project_conv_group_in
__________________________________________________________________________________________________
k_block6a__0project_conv_inter_ (None, 8, 8, 192) 0 k_block6a__0project_conv_group_in
k_block6a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block6b__0expand_conv (Conv2D (None, 8, 8, 1152) 36864 k_block6a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block6b__0expand_bn (BatchNor (None, 8, 8, 1152) 4608 k_block6b__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block6b__0expand (Activation) (None, 8, 8, 1152) 0 k_block6b__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block6b__0expand_group_interl (None, 8, 8, 1152) 0 k_block6b__0expand[0][0]
__________________________________________________________________________________________________
k_block6b__0dwconv (DepthwiseCo (None, 8, 8, 1152) 28800 k_block6b__0expand_group_interlea
__________________________________________________________________________________________________
k_block6b__0bn (BatchNormalizat (None, 8, 8, 1152) 4608 k_block6b__0dwconv[0][0]
__________________________________________________________________________________________________
k_block6b__0activation (Activat (None, 8, 8, 1152) 0 k_block6b__0bn[0][0]
__________________________________________________________________________________________________
k_block6b__0se_squeeze (GlobalA (None, 1152) 0 k_block6b__0activation[0][0]
__________________________________________________________________________________________________
k_block6b__0se_reshape (Reshape (None, 1, 1, 1152) 0 k_block6b__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block6b__0se_reduce_conv (Con (None, 1, 1, 48) 2352 k_block6b__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block6b__0se_reduce (Activati (None, 1, 1, 48) 0 k_block6b__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block6b__0se_reduce_group_int (None, 1, 1, 48) 0 k_block6b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block6b__0se_reduce_group_int (None, 1, 1, 48) 144 k_block6b__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block6b__0se_reduce_group_int (None, 1, 1, 48) 0 k_block6b__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block6b__0se_reduce_inter_gro (None, 1, 1, 48) 0 k_block6b__0se_reduce_group_inter
k_block6b__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block6b__0se_expand_conv (Con (None, 1, 1, 1152) 56448 k_block6b__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block6b__0se_expand (Activati (None, 1, 1, 1152) 0 k_block6b__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block6b__0se_excite (Multiply (None, 8, 8, 1152) 0 k_block6b__0activation[0][0]
k_block6b__0se_expand[0][0]
__________________________________________________________________________________________________
k_block6b__0project_conv_conv ( (None, 8, 8, 192) 6912 k_block6b__0se_excite[0][0]
__________________________________________________________________________________________________
k_block6b__0project_conv_bn (Ba (None, 8, 8, 192) 768 k_block6b__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block6b__0project_conv_group_ (None, 8, 8, 192) 0 k_block6b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block6b__0project_conv_group_ (None, 8, 8, 192) 1152 k_block6b__0project_conv_group_in
__________________________________________________________________________________________________
k_block6b__0project_conv_group_ (None, 8, 8, 192) 768 k_block6b__0project_conv_group_in
__________________________________________________________________________________________________
k_block6b__0project_conv_inter_ (None, 8, 8, 192) 0 k_block6b__0project_conv_group_in
k_block6b__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block6b__0drop (Dropout) (None, 8, 8, 192) 0 k_block6b__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block6b__0add (Add) (None, 8, 8, 192) 0 k_block6b__0drop[0][0]
k_block6a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block6c__0expand_conv (Conv2D (None, 8, 8, 1152) 36864 k_block6b__0add[0][0]
__________________________________________________________________________________________________
k_block6c__0expand_bn (BatchNor (None, 8, 8, 1152) 4608 k_block6c__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block6c__0expand (Activation) (None, 8, 8, 1152) 0 k_block6c__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block6c__0expand_group_interl (None, 8, 8, 1152) 0 k_block6c__0expand[0][0]
__________________________________________________________________________________________________
k_block6c__0dwconv (DepthwiseCo (None, 8, 8, 1152) 28800 k_block6c__0expand_group_interlea
__________________________________________________________________________________________________
k_block6c__0bn (BatchNormalizat (None, 8, 8, 1152) 4608 k_block6c__0dwconv[0][0]
__________________________________________________________________________________________________
k_block6c__0activation (Activat (None, 8, 8, 1152) 0 k_block6c__0bn[0][0]
__________________________________________________________________________________________________
k_block6c__0se_squeeze (GlobalA (None, 1152) 0 k_block6c__0activation[0][0]
__________________________________________________________________________________________________
k_block6c__0se_reshape (Reshape (None, 1, 1, 1152) 0 k_block6c__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block6c__0se_reduce_conv (Con (None, 1, 1, 48) 2352 k_block6c__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block6c__0se_reduce (Activati (None, 1, 1, 48) 0 k_block6c__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block6c__0se_reduce_group_int (None, 1, 1, 48) 0 k_block6c__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block6c__0se_reduce_group_int (None, 1, 1, 48) 144 k_block6c__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block6c__0se_reduce_group_int (None, 1, 1, 48) 0 k_block6c__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block6c__0se_reduce_inter_gro (None, 1, 1, 48) 0 k_block6c__0se_reduce_group_inter
k_block6c__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block6c__0se_expand_conv (Con (None, 1, 1, 1152) 56448 k_block6c__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block6c__0se_expand (Activati (None, 1, 1, 1152) 0 k_block6c__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block6c__0se_excite (Multiply (None, 8, 8, 1152) 0 k_block6c__0activation[0][0]
k_block6c__0se_expand[0][0]
__________________________________________________________________________________________________
k_block6c__0project_conv_conv ( (None, 8, 8, 192) 6912 k_block6c__0se_excite[0][0]
__________________________________________________________________________________________________
k_block6c__0project_conv_bn (Ba (None, 8, 8, 192) 768 k_block6c__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block6c__0project_conv_group_ (None, 8, 8, 192) 0 k_block6c__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block6c__0project_conv_group_ (None, 8, 8, 192) 1152 k_block6c__0project_conv_group_in
__________________________________________________________________________________________________
k_block6c__0project_conv_group_ (None, 8, 8, 192) 768 k_block6c__0project_conv_group_in
__________________________________________________________________________________________________
k_block6c__0project_conv_inter_ (None, 8, 8, 192) 0 k_block6c__0project_conv_group_in
k_block6c__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block6c__0drop (Dropout) (None, 8, 8, 192) 0 k_block6c__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block6c__0add (Add) (None, 8, 8, 192) 0 k_block6c__0drop[0][0]
k_block6b__0add[0][0]
__________________________________________________________________________________________________
k_block6d__0expand_conv (Conv2D (None, 8, 8, 1152) 36864 k_block6c__0add[0][0]
__________________________________________________________________________________________________
k_block6d__0expand_bn (BatchNor (None, 8, 8, 1152) 4608 k_block6d__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block6d__0expand (Activation) (None, 8, 8, 1152) 0 k_block6d__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block6d__0expand_group_interl (None, 8, 8, 1152) 0 k_block6d__0expand[0][0]
__________________________________________________________________________________________________
k_block6d__0dwconv (DepthwiseCo (None, 8, 8, 1152) 28800 k_block6d__0expand_group_interlea
__________________________________________________________________________________________________
k_block6d__0bn (BatchNormalizat (None, 8, 8, 1152) 4608 k_block6d__0dwconv[0][0]
__________________________________________________________________________________________________
k_block6d__0activation (Activat (None, 8, 8, 1152) 0 k_block6d__0bn[0][0]
__________________________________________________________________________________________________
k_block6d__0se_squeeze (GlobalA (None, 1152) 0 k_block6d__0activation[0][0]
__________________________________________________________________________________________________
k_block6d__0se_reshape (Reshape (None, 1, 1, 1152) 0 k_block6d__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block6d__0se_reduce_conv (Con (None, 1, 1, 48) 2352 k_block6d__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block6d__0se_reduce (Activati (None, 1, 1, 48) 0 k_block6d__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block6d__0se_reduce_group_int (None, 1, 1, 48) 0 k_block6d__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block6d__0se_reduce_group_int (None, 1, 1, 48) 144 k_block6d__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block6d__0se_reduce_group_int (None, 1, 1, 48) 0 k_block6d__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block6d__0se_reduce_inter_gro (None, 1, 1, 48) 0 k_block6d__0se_reduce_group_inter
k_block6d__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block6d__0se_expand_conv (Con (None, 1, 1, 1152) 56448 k_block6d__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block6d__0se_expand (Activati (None, 1, 1, 1152) 0 k_block6d__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block6d__0se_excite (Multiply (None, 8, 8, 1152) 0 k_block6d__0activation[0][0]
k_block6d__0se_expand[0][0]
__________________________________________________________________________________________________
k_block6d__0project_conv_conv ( (None, 8, 8, 192) 6912 k_block6d__0se_excite[0][0]
__________________________________________________________________________________________________
k_block6d__0project_conv_bn (Ba (None, 8, 8, 192) 768 k_block6d__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block6d__0project_conv_group_ (None, 8, 8, 192) 0 k_block6d__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block6d__0project_conv_group_ (None, 8, 8, 192) 1152 k_block6d__0project_conv_group_in
__________________________________________________________________________________________________
k_block6d__0project_conv_group_ (None, 8, 8, 192) 768 k_block6d__0project_conv_group_in
__________________________________________________________________________________________________
k_block6d__0project_conv_inter_ (None, 8, 8, 192) 0 k_block6d__0project_conv_group_in
k_block6d__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block6d__0drop (Dropout) (None, 8, 8, 192) 0 k_block6d__0project_conv_inter_gr
__________________________________________________________________________________________________
k_block6d__0add (Add) (None, 8, 8, 192) 0 k_block6d__0drop[0][0]
k_block6c__0add[0][0]
__________________________________________________________________________________________________
k_block7a__0expand_conv (Conv2D (None, 8, 8, 1152) 36864 k_block6d__0add[0][0]
__________________________________________________________________________________________________
k_block7a__0expand_bn (BatchNor (None, 8, 8, 1152) 4608 k_block7a__0expand_conv[0][0]
__________________________________________________________________________________________________
k_block7a__0expand (Activation) (None, 8, 8, 1152) 0 k_block7a__0expand_bn[0][0]
__________________________________________________________________________________________________
k_block7a__0expand_group_interl (None, 8, 8, 1152) 0 k_block7a__0expand[0][0]
__________________________________________________________________________________________________
k_block7a__0dwconv (DepthwiseCo (None, 8, 8, 1152) 10368 k_block7a__0expand_group_interlea
__________________________________________________________________________________________________
k_block7a__0bn (BatchNormalizat (None, 8, 8, 1152) 4608 k_block7a__0dwconv[0][0]
__________________________________________________________________________________________________
k_block7a__0activation (Activat (None, 8, 8, 1152) 0 k_block7a__0bn[0][0]
__________________________________________________________________________________________________
k_block7a__0se_squeeze (GlobalA (None, 1152) 0 k_block7a__0activation[0][0]
__________________________________________________________________________________________________
k_block7a__0se_reshape (Reshape (None, 1, 1, 1152) 0 k_block7a__0se_squeeze[0][0]
__________________________________________________________________________________________________
k_block7a__0se_reduce_conv (Con (None, 1, 1, 48) 2352 k_block7a__0se_reshape[0][0]
__________________________________________________________________________________________________
k_block7a__0se_reduce (Activati (None, 1, 1, 48) 0 k_block7a__0se_reduce_conv[0][0]
__________________________________________________________________________________________________
k_block7a__0se_reduce_group_int (None, 1, 1, 48) 0 k_block7a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block7a__0se_reduce_group_int (None, 1, 1, 48) 144 k_block7a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block7a__0se_reduce_group_int (None, 1, 1, 48) 0 k_block7a__0se_reduce_group_inter
__________________________________________________________________________________________________
k_block7a__0se_reduce_inter_gro (None, 1, 1, 48) 0 k_block7a__0se_reduce_group_inter
k_block7a__0se_reduce[0][0]
__________________________________________________________________________________________________
k_block7a__0se_expand_conv (Con (None, 1, 1, 1152) 56448 k_block7a__0se_reduce_inter_group
__________________________________________________________________________________________________
k_block7a__0se_expand (Activati (None, 1, 1, 1152) 0 k_block7a__0se_expand_conv[0][0]
__________________________________________________________________________________________________
k_block7a__0se_excite (Multiply (None, 8, 8, 1152) 0 k_block7a__0activation[0][0]
k_block7a__0se_expand[0][0]
__________________________________________________________________________________________________
k_block7a__0project_conv_conv ( (None, 8, 8, 320) 11520 k_block7a__0se_excite[0][0]
__________________________________________________________________________________________________
k_block7a__0project_conv_bn (Ba (None, 8, 8, 320) 1280 k_block7a__0project_conv_conv[0][
__________________________________________________________________________________________________
k_block7a__0project_conv_group_ (None, 8, 8, 320) 0 k_block7a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_block7a__0project_conv_group_ (None, 8, 8, 320) 3200 k_block7a__0project_conv_group_in
__________________________________________________________________________________________________
k_block7a__0project_conv_group_ (None, 8, 8, 320) 1280 k_block7a__0project_conv_group_in
__________________________________________________________________________________________________
k_block7a__0project_conv_inter_ (None, 8, 8, 320) 0 k_block7a__0project_conv_group_in
k_block7a__0project_conv_bn[0][0]
__________________________________________________________________________________________________
k_top_conv_conv (Conv2D) (None, 8, 8, 1280) 40960 k_block7a__0project_conv_inter_gr
__________________________________________________________________________________________________
k_top_conv_bn (BatchNormalizati (None, 8, 8, 1280) 5120 k_top_conv_conv[0][0]
__________________________________________________________________________________________________
k_top_conv_group_interleaved (I (None, 8, 8, 1280) 0 k_top_conv_bn[0][0]
__________________________________________________________________________________________________
k_avg_pool (GlobalAveragePoolin (None, 1280) 0 k_top_conv_group_interleaved[0][0
__________________________________________________________________________________________________
k_top_dropout (Dropout) (None, 1280) 0 k_avg_pool[0][0]
__________________________________________________________________________________________________
k_probs (Dense) (None, 10) 12810 k_top_dropout[0][0]
==================================================================================================
Total params: 1,104,802
Trainable params: 1,059,202
Non-trainable params: 45,600
__________________________________________________________________________________________________
model flops: 138410206
Finished: /content/drive/MyDrive/output/JP30B28-EfficientNet-CIFAR10-13
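###Markdown
The summary above ends with a "model flops" figure. As a hedged illustration only (the notebook's own FLOPs helper is defined elsewhere and may differ), one common way to estimate FLOPs for a Keras model is to freeze it into a constant graph and run the TF1 profiler over it, as sketched below.
###Code
# Hedged sketch: a common way to estimate FLOPs for a Keras model.
# The helper that printed "model flops" above may be implemented differently.
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

def estimate_flops(model):
    # Assumes the model has a fixed (non-None) spatial input size.
    spec = tf.TensorSpec([1] + list(model.inputs[0].shape[1:]),
                         model.inputs[0].dtype)
    # Freeze the model into a constant graph for a single-sample input.
    concrete = tf.function(lambda x: model(x)).get_concrete_function(spec)
    frozen = convert_variables_to_constants_v2(concrete)
    # Count floating-point operations with the TF1 profiler.
    opts = tf.compat.v1.profiler.ProfileOptionBuilder.float_operation()
    info = tf.compat.v1.profiler.profile(graph=frozen.graph,
                                         run_meta=tf.compat.v1.RunMetadata(),
                                         cmd='op', options=opts)
    return info.total_float_ops
###Output
_____no_output_____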
###Markdown
Fitting
###Code
work_on_efficientnet(show_model=False, run_fit=True, test_results=True)
###Output
Running: /content/drive/MyDrive/output/JP30B28-EfficientNet-CIFAR10-2
Epoch 1/50
704/704 [==============================] - 289s 324ms/step - loss: 2.3518 - accuracy: 0.2361 - val_loss: 1.9592 - val_accuracy: 0.2974
Epoch 00001: val_accuracy improved from -inf to 0.29740, saving model to /content/drive/MyDrive/output/JP30B28-EfficientNet-CIFAR10-2-best_result.hdf5
###Markdown
Test Results
###Code
work_on_efficientnet(show_model=False, run_fit=False, test_results=True)
work_on_efficientnet(show_model=False, run_fit=False, test_results=False, calc_f1=True)
###Output
Running: /content/drive/MyDrive/output/JP30B28-EfficientNet-CIFAR10-2
Best Model Results: /content/drive/MyDrive/output/JP30B28-EfficientNet-CIFAR10-2-best_result.hdf5
157/157 [==============================] - 20s 67ms/step - loss: 0.2307 - accuracy: 0.9246
loss 0.23065903782844543
acc 0.9246000051498413
Predicted Shape: (10000, 10)
Pred classes shape: (10000,)
Test classes shape: (10000,)
precision recall f1-score support
0 0.9147 0.9440 0.9291 1000
1 0.9323 0.9780 0.9546 1000
2 0.9189 0.8950 0.9068 1000
3 0.8621 0.8380 0.8499 1000
4 0.9210 0.9440 0.9323 1000
5 0.9108 0.8580 0.8836 1000
6 0.9147 0.9650 0.9392 1000
7 0.9689 0.9350 0.9517 1000
8 0.9576 0.9490 0.9533 1000
9 0.9437 0.9390 0.9414 1000
accuracy 0.9245 10000
macro avg 0.9245 0.9245 0.9242 10000
weighted avg 0.9245 0.9245 0.9242 10000
Finished: /content/drive/MyDrive/output/JP30B28-EfficientNet-CIFAR10-2
Running: /content/drive/MyDrive/output/JP30B28-EfficientNet-CIFAR10-13
Best Model Results: /content/drive/MyDrive/output/JP30B28-EfficientNet-CIFAR10-13-best_result.hdf5
157/157 [==============================] - 19s 62ms/step - loss: 0.2137 - accuracy: 0.9361
loss 0.21367250382900238
acc 0.9361000061035156
Predicted Shape: (10000, 10)
Pred classes shape: (10000,)
Test classes shape: (10000,)
precision recall f1-score support
0 0.9509 0.9490 0.9499 1000
1 0.9418 0.9870 0.9639 1000
2 0.9338 0.9170 0.9253 1000
3 0.8621 0.8630 0.8626 1000
4 0.9483 0.9530 0.9506 1000
5 0.9266 0.8590 0.8915 1000
6 0.9139 0.9760 0.9439 1000
7 0.9627 0.9550 0.9588 1000
8 0.9703 0.9470 0.9585 1000
9 0.9521 0.9550 0.9536 1000
accuracy 0.9361 10000
macro avg 0.9363 0.9361 0.9359 10000
weighted avg 0.9363 0.9361 0.9359 10000
Finished: /content/drive/MyDrive/output/JP30B28-EfficientNet-CIFAR10-13
|
Section-05-Oversampling/05-06-ADASYN.ipynb | ###Markdown
ADASYNCreates new samples by interpolating between minority-class samples and their closest minority-class neighbours. It generates more synthetic samples for the points that are harder to classify.
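For intuition, here is a minimal sketch of the generation step with toy values (the density-based weighting that decides how many synthetic points each minority sample receives is handled internally by imblearn and is not shown here):
###Code
# Illustrative sketch only (not imblearn's internal implementation):
# a synthetic sample lies on the segment between a minority sample x_i
# and one of its minority-class neighbours x_zi:
#   x_new = x_i + lam * (x_zi - x_i), with lam drawn uniformly from [0, 1]
import numpy as np
rng = np.random.default_rng(42)
x_i = np.array([0.0, 0.0])   # toy minority sample
x_zi = np.array([1.0, 2.0])  # toy minority-class neighbour
lam = rng.uniform()
x_new = x_i + lam * (x_zi - x_i)
x_new
###Output
_____no_output_____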
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import make_blobs
from imblearn.over_sampling import ADASYN
###Output
_____no_output_____
###Markdown
Create datahttps://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_blobs.htmlWe will create 2 classes, one majority one minority, clearly separated to facilitate the demonstration.
###Code
# Configuration options
blobs_random_seed = 42
centers = [(0, 0), (5, 5)]
cluster_std = 1.5
num_features_for_samples = 2
num_samples_total = 1600
# Generate X
X, y = make_blobs(
n_samples=num_samples_total,
centers=centers,
n_features=num_features_for_samples,
cluster_std=cluster_std)
# transform arrays to pandas formats
X = pd.DataFrame(X, columns=['VarA', 'VarB'])
y = pd.Series(y)
# create an imbalanced dataset
# (make blobs creates same number of obs per class
# we need to downsample manually)
X = pd.concat([
X[y == 0],
X[y == 1].sample(200, random_state=42)
], axis=0)
y = y.loc[X.index]
# display size
X.shape, y.shape
sns.scatterplot(
data=X, x="VarA", y="VarB", hue=y, alpha=0.5
)
plt.title('Toy dataset')
plt.show()
###Output
_____no_output_____
###Markdown
ADASYN[ADASYN](https://imbalanced-learn.org/stable/references/generated/imblearn.over_sampling.ADASYN.html)
###Code
ada = ADASYN(
sampling_strategy='auto', # samples only the minority class
random_state=0, # for reproducibility
n_neighbors=5,
n_jobs=4
)
X_res, y_res = ada.fit_resample(X, y)
# size of original data
X.shape, y.shape
# size of undersampled data
X_res.shape, y_res.shape
# number of minority class observations
y.value_counts(), y_res.value_counts()
# plot of original data
sns.scatterplot(
data=X, x="VarA", y="VarB", hue=y,alpha=0.5
)
plt.title('Original dataset')
plt.show()
# plot of over-sampled data
sns.scatterplot(
data=X_res, x="VarA", y="VarB", hue=y_res, alpha=0.5
)
plt.title('Over-sampled dataset')
plt.show()
###Output
_____no_output_____ |
lesson_notes/Customer_Clustering_c01_Metrics_#27.ipynb | ###Markdown
**PA005: Customer Clustering** Solution Planning Input - Business Problem * Select the most valuable customers to create a loyalty program called Insiders- Data * One year of e-commerce sales Output * A list of customers that will be part of Insiders* A report answering business questions 1. Who are the eligible customers to participate in the Insiders program? 2. How many customers will be part of the program? 3. What are the main characteristics of these customers? 4. What revenue percentage comes from Insiders? 5. What is the Insiders' expected revenue for the coming months? 6. What are the conditions for a customer to be eligible for the Insiders program? 7. What are the conditions for a customer to be removed from the Insiders program? 8. What is the guarantee that the Insiders program is better than the regular customer database? 9. What actions can the marketing team take to increase revenue? Tasks * A report answering business questions: 1. Who are the eligible customers to participate in the Insiders program? - Understand the criteria for an eligible customer. - Criteria examples: * Revenue * High average ticket * High LTV (lifetime value) * Low recency * High basket size * Low churn probability * Expenses * Return rate * Buying Experience * High average review scores 2. How many customers will be part of the program? - Calculate the percentage of customers that belong to the Insiders program over the total number of customers. 3. What are the main characteristics of these customers? - Indicate customer characteristics: * Age * City * Education level * Location, etc. - Indicate consumption characteristics: * Cluster attributes 4. What revenue percentage comes from Insiders? - Calculate the percentage of Insiders revenue over the total revenue. 5. What is the Insiders' expected revenue for the coming months? - Calculate Insiders' LTV - Perform a cohort analysis. 6. What are the conditions for a customer to be eligible for the Insiders program? - Define verification periodicity (monthly, quarterly, etc.) - The customer must be similar to a customer on Insiders. 7. What are the conditions for a customer to be removed from the Insiders program? - Define verification periodicity (monthly, quarterly, etc.) - The customer must be dissimilar to a customer on Insiders. 8. What is the guarantee that the Insiders program is better than the regular customer database? - Perform A/B Test - Perform A/B Bayesian Test - Perform Hypothesis Test 9. What actions can the marketing team take to increase revenue? - Discount - Buying preferences - Shipping options - Promote a visit to the company, etc. * Solution Benchmark - Desk Research * INSERT EXAMPLES APPLIED IN THE MARKET * Imports
###Code
import numpy as np
import pandas as pd
import seaborn as sns
from IPython.core.display import HTML
from matplotlib import pyplot as plt
from sklearn import cluster as c
from yellowbrick.cluster import KElbowVisualizer
###Output
_____no_output_____
###Markdown
Helper Functions
###Code
def personal_settings():
    # matplotlib settings
plt.style.use( 'bmh' )
plt.rcParams['figure.figsize'] = [20, 10]
plt.rcParams['font.size'] = 24
# notebook settings
display(HTML('<style>.container{width:90% !important;}</style>'))
np.set_printoptions(suppress=True)
pd.set_option('display.float_format', '{:.2f}'.format)
# seaborn settings
sns.set(rc={'figure.figsize':(20,10)})
sns.set_theme(style = 'darkgrid', font_scale = 1.5)
personal_settings()
###Output
_____no_output_____
###Markdown
Load Dataset
###Code
df_raw = pd.read_csv(r'../data/raw/ecommerce.csv', encoding='unicode_escape')
display(df_raw.head())
# drop 'unnamed: 8' column
df_raw = df_raw.drop(columns=['Unnamed: 8'], axis =1)
###Output
_____no_output_____
###Markdown
Data Description
###Code
df2 = df_raw.copy()
###Output
_____no_output_____
###Markdown
Rename Columns
###Code
df_raw.columns
cols_new = ['invoice_no','stock_code','description','quantity','invoice_date','unit_price','customer_id','country']
df2.columns = cols_new
df2.head()
###Output
_____no_output_____
###Markdown
Data Dimensions
###Code
print('Number of rows: {}'.format(df2.shape[0]))
print('Number of cols: {}'.format(df2.shape[1]))
###Output
Number of rows: 541909
Number of cols: 8
###Markdown
Data Types
###Code
print(df2.dtypes)
display(df2.head())
###Output
invoice_no object
stock_code object
description object
quantity int64
invoice_date object
unit_price float64
customer_id float64
country object
dtype: object
###Markdown
Check NA
###Code
df2.isna().sum()
###Output
_____no_output_____
###Markdown
Replace NA
###Code
# c01 metrics - removing NA
df2 = df2.dropna(subset=['description','customer_id'])
print ('Removed data: {:.2f}'.format(1 - df2.shape[0]/df_raw.shape[0]))
print('Remaining rows: {}'.format(df2.shape[0]))
df2.isna().sum()
###Output
Removed data: 0.25
Remaining rows: 406829
###Markdown
Change dtypes
###Code
print(df2.dtypes)
display(df2.head())
# checking 'invoice_no' by forcing change to integer
df2['invoice_no'] =df2['invoice_no'].astype(int)
# note: error indicates that this feature also has letters, therefore must remain as 'object'
# changing 'invoice_date' format
df2['invoice_date'] = pd.to_datetime (df2['invoice_date'], format='%d-%b-%y')
df2.head()
# checking 'customer_id' by forcing change to integer
df2['customer_id'] = df2['customer_id'].astype('int64')
df2.head()
# checking final dtypes
df2.dtypes
###Output
_____no_output_____
###Markdown
Descriptive Statistics
###Code
# c01 metrics - nothing
###Output
_____no_output_____
###Markdown
Feature Engineering
###Code
df3 = df2.copy()
###Output
_____no_output_____
###Markdown
Feature Creation
###Code
# data reference
df_ref = df3.drop(['invoice_no','stock_code','description','quantity','invoice_date','unit_price','country'],
axis=1).drop_duplicates(ignore_index=True)
print('Data reference shape:', df_ref.shape)
df_ref.head()
# === MONETARY
# creating 'gross_revenue' (= quantity * price)
df3['gross_revenue'] = df3['quantity']*df3['unit_price']
# creating 'monetary'
df_monetary = df3[['customer_id','gross_revenue']].groupby('customer_id').sum().reset_index()
# merging dataframes
df_ref = pd.merge(df_ref, df_monetary, on='customer_id', how='left')
print('Checking NA: \n\n', df_ref.isna().sum(),'\n\n')
print('Data reference shape:', df_ref.shape)
df_ref.head()
# === RECENCY (last day of purchase)
df_recency = df3[['customer_id','invoice_date']].groupby('customer_id').max().reset_index() # selecting last date from each customer
df_recency['recency_days'] = (df3['invoice_date'].max() - df_recency['invoice_date']).dt.days # dt vectorize the series to apply 'days' command
df_recency = df_recency[['customer_id','recency_days']].copy()
# merging dataframes
df_ref = pd.merge(df_ref, df_recency, on='customer_id',how='left')
print('Checking NA: \n\n', df_ref.isna().sum(),'\n\n')
print('Data reference shape:', df_ref.shape)
df_ref.head()
# === FREQUENCY (number of purchases)
df_freq = df3[['customer_id','invoice_no']].drop_duplicates().groupby('customer_id').count().reset_index()
df_freq = df_freq.rename(columns={'customer_id': 'customer_id','invoice_no': 'invoice_freq'}) # changing columns names
# merging dataframes
df_ref = pd.merge(df_ref, df_freq, on='customer_id', how='left')
print('Checking NA: \n\n', df_ref.isna().sum(),'\n\n')
print('Data reference shape:', df_ref.shape)
df_ref.head()
df_ref.head()
###Output
_____no_output_____
###Markdown
Variable Filtering
###Code
df4 = df_ref.copy()
###Output
_____no_output_____
###Markdown
EDA (Exploratory Data Analysis)
###Code
df5 = df4.copy()
###Output
_____no_output_____
###Markdown
Data Preparation
###Code
df6 = df5.copy()
###Output
_____no_output_____
###Markdown
Feature Selection
###Code
df7 = df6.copy()
###Output
_____no_output_____
###Markdown
Hyperparameter Fine-Tuning
###Code
X = df7.drop(columns=['customer_id'])
X.head()
clusters = [2,3,4,5,6]
###Output
_____no_output_____
###Markdown
Within-Cluster Sum of Squares (WSS)
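For a partition into clusters $C_k$ with centroids $\mu_k$, $WSS = \sum_k \sum_{x \in C_k} \lVert x - \mu_k \rVert^2$; this is what `kmeans.inertia_` returns below, and the elbow of the WSS-vs-K curve is used to suggest a number of clusters.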
###Code
wss = []
for k in clusters:
# model definition
kmeans = c.KMeans(init='random',
n_clusters=k,
    n_init=10, # init='random' initializes the centroids randomly; n_init is the number of runs with different centroid seeds
max_iter=300,
    random_state=42) # random_state fixes the random seed for reproducibility
# model training
kmeans.fit(X)
# validation
wss.append(kmeans.inertia_) # generates a wss value for each k
# wss plot - elbow method
plt.plot(clusters, wss, linestyle='--', marker='o', color='b')
plt.xlabel('K') # number of clusters
plt.ylabel('Within-Cluster Sum of Square')
plt.title('WSS vs. K')
print(wss)
# yellow brick
kmeans_y = KElbowVisualizer(c.KMeans(), k=clusters, timings=False);
kmeans_y.fit(X);
kmeans_y.show();
###Output
_____no_output_____
###Markdown
Silhouette Score
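For each sample, with $a$ the mean distance to the other points in its own cluster and $b$ the mean distance to the points of the nearest other cluster, the silhouette coefficient is $s = \frac{b - a}{\max(a, b)}$; values close to 1 indicate well-separated clusters, and the score below averages $s$ over all samples.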
###Code
# yellow brick
kmeans_y = KElbowVisualizer(c.KMeans(), k=clusters, metric='silhouette', timings=False);
kmeans_y.fit(X);
kmeans_y.show();
###Output
_____no_output_____ |
notebooks/ruta-training/Chapter 1 - Language elements/Exercise 3 - Complex Annotations.ipynb | ###Markdown
Exercise 3: Complex AnnotationsThis exercise provides an introduction to more complex annotations with features. Setup First, we define some input text for the following examples.
###Code
%%documentText
Peter works for Frank.
10€ are less than 100$.
DECLARE Employer, Employee;
"Peter"-> Employee;
"Frank"-> Employer;
###Output
_____no_output_____
###Markdown
Complex Annotations We declare a new annotation type `WorksFor` with the two features `Employee` and `Employer` of a suitable type. Then, we create `WorksFor` annotations with feature values using three different approaches.
###Code
// Switching display mode for inspecting feature values.
%displayMode DYNAMIC_HTML
%dynamicHtmlAllowedTypes WorksFor Employee Employer
DECLARE WorksFor (Employee employee, Employer employer);
// Approach 1: CREATE is able to assign feature values by directly referencing the Type
(Employee # Employer){-> CREATE(WorksFor, "employee"=Employee, "employer"=Employer)};
// Approach 2: GATHER can use the index of a rule element for the assignment
// Employee (index=1), Wildcard (#) (index=2), Employer (index=3)
(Employee # Employer){-> GATHER(WorksFor, "employee"=1, "employer"=3)};
// Approach 3: We can also use an implicit action for this task
(e1:Employee # e2:Employer){-> wf:WorksFor, wf.employee=e1, wf.employer=e2};
###Output
_____no_output_____
###Markdown
Now we declare a new annotation type `MoneyAmount` with an INT feature `amount` and a STRING feature `currency`.We create annotations for mentions of amounts of money and fill the features with correct values. `PARSE` is used to parse the number as an Integer.
###Code
%displayMode DYNAMIC_HTML
%dynamicHtmlAllowedTypes Currency MoneyAmount
// Helper type for currencies
DECLARE Currency;
"$" {-> Currency};
"€" {-> Currency};
DECLARE MoneyAmount(INT amount, STRING currency);
// We need a variable for the PARSE condition, i.e. for storing the amount as integer.
INT value;
(NUM{PARSE(value)} c:Currency){-> CREATE(MoneyAmount, "amount"=value, "currency"=c.ct)};
###Output
_____no_output_____ |
packages/syft/examples/duet/mnist/MNIST_Syft_Data_Scientist.ipynb | ###Markdown
MNIST - Syft Duet - Data Scientist 🥁 PART 1: Connect to a Remote Duet ServerAs the Data Scientist, you want to perform data science on data that is sitting in the Data Owner's Duet server in their Notebook. In order to do this, we must run the code that the Data Owner sends us, which importantly includes their Duet Session ID. The code will look like this, importantly with their real Server ID.```import syft as sy; duet = sy.duet('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')```This will create a direct connection from my notebook to the remote Duet server. Once the connection is established all traffic is sent directly between the two nodes. Paste the code or Server ID that the Data Owner gives you and run it in the cell below. It will return your Client ID which you must send to the Data Owner to enter into Duet so it can pair your notebooks.
###Code
import syft as sy
duet = sy.join_duet(loopback=True)
###Output
_____no_output_____
###Markdown
Checkpoint 0 : Now STOP and run the Data Owner notebook until the next checkpoint. PART 2: Setting up a Model and our DataThis notebook is mainly based on the original pytorch [example](https://github.com/pytorch/examples/tree/master/mnist/). The `duet` variable is now your reference to a whole world of remote operations including supported libraries like torch. Let's take a look at the duet.torch attribute.```duet.torch```
###Code
duet.torch
###Output
_____no_output_____
###Markdown
Let's create a model just like the one in the MNIST example. We do this in almost exactly the same way as in PyTorch. The main difference is that we inherit from sy.Module instead of nn.Module and need to pass in a variable called torch_ref, which we will use internally for any calls that would normally go to torch.
###Code
class SyNet(sy.Module):
def __init__(self, torch_ref):
super(SyNet, self).__init__(torch_ref=torch_ref)
self.conv1 = self.torch_ref.nn.Conv2d(1, 32, 3, 1)
self.conv2 = self.torch_ref.nn.Conv2d(32, 64, 3, 1)
self.dropout1 = self.torch_ref.nn.Dropout2d(0.25)
self.dropout2 = self.torch_ref.nn.Dropout2d(0.5)
self.fc1 = self.torch_ref.nn.Linear(9216, 128)
self.fc2 = self.torch_ref.nn.Linear(128, 10)
def forward(self, x):
x = self.conv1(x)
x = self.torch_ref.nn.functional.relu(x)
x = self.conv2(x)
x = self.torch_ref.nn.functional.relu(x)
x = self.torch_ref.nn.functional.max_pool2d(x, 2)
x = self.dropout1(x)
x = self.torch_ref.flatten(x, 1)
x = self.fc1(x)
x = self.torch_ref.nn.functional.relu(x)
x = self.dropout2(x)
x = self.fc2(x)
output = self.torch_ref.nn.functional.log_softmax(x, dim=1)
return output
# lets import torch and torchvision just as we normally would
import torch
import torchvision
# now we can create the model and pass in our local copy of torch
local_model = SyNet(torch)
###Output
_____no_output_____
###Markdown
Next we can get our MNIST Test Set ready using our local copy of torch.
###Code
# we need some transforms for the MNIST data set
local_transform_1 = torchvision.transforms.ToTensor() # this converts PIL images to Tensors
local_transform_2 = torchvision.transforms.Normalize(0.1307, 0.3081) # this normalizes the dataset
# compose our transforms
local_transforms = torchvision.transforms.Compose([local_transform_1, local_transform_2])
# Lets define a few settings which are from the original MNIST example command-line args
batch_size = 64
test_batch_size = 1000
args = {
"batch_size": batch_size,
"test_batch_size": test_batch_size,
"epochs": 14,
"lr": 1.0,
"gamma": 0.7,
"no_cuda": False,
"dry_run": False,
"seed": 42, # the meaning of life
"log_interval": 10,
"save_model": True,
}
from syft.util import get_root_data_path
# we will configure the test set here locally since we want to know if our Data Owner's
# private training dataset will help us reach new SOTA results for our benchmark test set
test_kwargs = {
"batch_size": args["test_batch_size"],
}
test_data = torchvision.datasets.MNIST(str(get_root_data_path()), train=False, download=True, transform=local_transforms)
test_loader = torch.utils.data.DataLoader(test_data,**test_kwargs)
test_data_length = len(test_loader.dataset)
print(test_data_length)
###Output
_____no_output_____
###Markdown
Now it's time to send the model to our partner's Duet Server. Note: You can load normal torch model weights before sending your model. Try training the model and saving it at the end of the notebook and then coming back and reloading the weights here, or you can train the same model once using the original script in the `original` dir and load it here as well.
###Code
# local_model.load("./duet_mnist.pt")
model = local_model.send(duet)
###Output
_____no_output_____
###Markdown
Let's create an alias for our partner's torch called `remote_torch` so we can refer to the local torch as `torch` and any operation we want to do remotely as `remote_torch`. Remember, the return values from `remote_torch` are `Pointers`, not the real objects. They mostly act the same when used with other `Pointers`, but you can't mix them with local torch objects.
###Code
remote_torch = duet.torch
# lets ask to see if our Data Owner has CUDA
has_cuda = False
has_cuda_ptr = remote_torch.cuda.is_available()
has_cuda = bool(has_cuda_ptr.get(
request_block=True,
reason="To run test and inference locally",
timeout_secs=5, # change to something slower
))
print(has_cuda)
use_cuda = not args["no_cuda"] and has_cuda
# now we can set the seed
remote_torch.manual_seed(args["seed"])
device = remote_torch.device("cuda" if use_cuda else "cpu")
print(f"Data Owner device is {device.type.get()}")
# if we have CUDA lets send our model to the GPU
if has_cuda:
model.cuda(device)
else:
model.cpu()
###Output
_____no_output_____
###Markdown
Let's get our params, set up an optimizer and a scheduler just as in the PyTorch MNIST example
###Code
params = model.parameters()
optimizer = remote_torch.optim.Adadelta(params, lr=args["lr"])
scheduler = remote_torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=args["gamma"])
###Output
_____no_output_____
###Markdown
Next we need a training loop so we can improve our remote model. Since we want to train on remote data, we should first check that the model is remote, because we will be using remote_torch in this function. To check whether a model is local or remote, simply use the `.is_local` attribute.
###Code
def train(model, torch_ref, train_loader, optimizer, epoch, args, train_data_length):
# + 0.5 lets us math.ceil without the import
train_batches = round((train_data_length / args["batch_size"]) + 0.5)
print(f"> Running train in {train_batches} batches")
if model.is_local:
print("Training requires remote model")
return
model.train()
for batch_idx, data in enumerate(train_loader):
data_ptr, target_ptr = data[0], data[1]
optimizer.zero_grad()
output = model(data_ptr)
loss = torch_ref.nn.functional.nll_loss(output, target_ptr)
loss.backward()
optimizer.step()
loss_item = loss.item()
train_loss = loss_item.resolve_pointer_type()
if batch_idx % args["log_interval"] == 0:
local_loss = None
local_loss = train_loss.get(
reason="To evaluate training progress",
request_block=True,
timeout_secs=5
)
if local_loss is not None:
print("Train Epoch: {} {} {:.4}".format(epoch, batch_idx, local_loss))
else:
print("Train Epoch: {} {} ?".format(epoch, batch_idx))
if batch_idx >= train_batches - 1:
print("batch_idx >= train_batches, breaking")
break
if args["dry_run"]:
break
###Output
_____no_output_____
###Markdown
Now we can define a simple test loop very similar to the original PyTorch MNIST example.This function should expect a remote model from our outer epoch loop, so internally we can call `get` to download the weights to do an evaluation on our machine with our local test set. Remember, if we have trained on private data, our model will require permission to download, so we should use request_block=True and make sure the Data Owner approves our requests. For the rest of this function, we will use local `torch` as we normally would.
###Code
def test_local(model, torch_ref, test_loader, test_data_length):
# download remote model
if not model.is_local:
local_model = model.get(
request_block=True,
reason="test evaluation",
timeout_secs=5
)
else:
local_model = model
# + 0.5 lets us math.ceil without the import
test_batches = round((test_data_length / args["test_batch_size"]) + 0.5)
print(f"> Running test_local in {test_batches} batches")
local_model.eval()
test_loss = 0.0
correct = 0.0
with torch_ref.no_grad():
for batch_idx, (data, target) in enumerate(test_loader):
output = local_model(data)
iter_loss = torch_ref.nn.functional.nll_loss(output, target, reduction="sum").item()
test_loss = test_loss + iter_loss
pred = output.argmax(dim=1)
total = pred.eq(target).sum().item()
correct += total
if args["dry_run"]:
break
if batch_idx >= test_batches - 1:
print("batch_idx >= test_batches, breaking")
break
accuracy = correct / test_data_length
print(f"Test Set Accuracy: {100 * accuracy}%")
###Output
_____no_output_____
###Markdown
Finally, just for demonstration purposes, we will get the built-in MNIST dataset, but on the Data Owner's side via `remote_torchvision`.
###Code
# we need some transforms for the MNIST data set
remote_torchvision = duet.torchvision
transform_1 = remote_torchvision.transforms.ToTensor() # this converts PIL images to Tensors
transform_2 = remote_torchvision.transforms.Normalize(0.1307, 0.3081) # this normalizes the dataset
remote_list = duet.python.List() # create a remote list to add the transforms to
remote_list.append(transform_1)
remote_list.append(transform_2)
# compose our transforms
transforms = remote_torchvision.transforms.Compose(remote_list)
# The DO has kindly let us initialise a DataLoader for their training set
train_kwargs = {
"batch_size": args["batch_size"],
}
train_data_ptr = remote_torchvision.datasets.MNIST(str(get_root_data_path()), train=True, download=True, transform=transforms)
train_loader_ptr = remote_torch.utils.data.DataLoader(train_data_ptr,**train_kwargs)
# normally we would not necessarily know the length of a remote dataset so lets ask for it
# so we can pass that to our training loop and know when to stop
def get_train_length(train_data_ptr):
train_data_length = len(train_data_ptr)
return train_data_length
try:
if train_data_length is None:
train_data_length = get_train_length(train_data_ptr)
except NameError:
train_data_length = get_train_length(train_data_ptr)
print(f"Training Dataset size is: {train_data_length}")
###Output
_____no_output_____
###Markdown
PART 3: Training
###Code
import time
args["dry_run"] = True # comment to do a full train
print("Starting Training")
for epoch in range(1, args["epochs"] + 1):
epoch_start = time.time()
print(f"Epoch: {epoch}")
# remote training on model with remote_torch
train(model, remote_torch, train_loader_ptr, optimizer, epoch, args, train_data_length)
# local testing on model with local torch
test_local(model, torch, test_loader, test_data_length)
scheduler.step()
epoch_end = time.time()
print(f"Epoch time: {int(epoch_end - epoch_start)} seconds")
if args["dry_run"]:
break
print("Finished Training")
local_model = None
if args["save_model"]:
local_model = model.get(
request_block=True,
reason="test evaluation",
timeout_secs=5
).save("./duet_mnist.pt")
###Output
_____no_output_____
###Markdown
PART 4: Inference A model would be no fun without the ability to do inference. The following code shows some examples of how we can do this either remotely or locally.
###Code
import matplotlib.pyplot as plt
def draw_image_and_label(image, label):
fig = plt.figure()
plt.tight_layout()
plt.imshow(image, cmap="gray", interpolation="none")
plt.title("Ground Truth: {}".format(label))
def prep_for_inference(image):
image_batch = image.unsqueeze(0).unsqueeze(0)
image_batch = image_batch * 1.0
return image_batch
def classify_local(image, model):
if not model.is_local:
print("model is remote try .get()")
return -1, torch.Tensor([-1])
image_tensor = torch.Tensor(prep_for_inference(image))
output = model(image_tensor)
preds = torch.exp(output)
local_y = preds
local_y = local_y.squeeze()
pos = local_y == max(local_y)
index = torch.nonzero(pos, as_tuple=False)
class_num = index.squeeze()
return class_num, local_y
def classify_remote(image, model):
if model.is_local:
print("model is local try .send()")
return -1, remote_torch.Tensor([-1])
image_tensor_ptr = remote_torch.Tensor(prep_for_inference(image))
output = model(image_tensor_ptr)
preds = remote_torch.exp(output)
preds_result = preds.get(
request_block=True,
reason="To see a real world example of inference",
timeout_secs=10
)
if preds_result is None:
print("No permission to do inference, request again")
return -1, torch.Tensor([-1])
else:
# now we have the local tensor we can use local torch
local_y = torch.Tensor(preds_result)
local_y = local_y.squeeze()
pos = local_y == max(local_y)
index = torch.nonzero(pos, as_tuple=False)
class_num = index.squeeze()
return class_num, local_y
# lets grab something from the test set
import random
total_images = test_data_length # 10000
index = random.randint(0, total_images)
print("Random Test Image:", index)
count = 0
batch = index // test_kwargs["batch_size"]
batch_index = index % int(total_images / len(test_loader))
for tensor_ptr in test_loader:
data, target = tensor_ptr[0], tensor_ptr[1]
if batch == count:
break
count += 1
print(f"Displaying {index} == {batch_index} in Batch: {batch}/{len(test_loader)}")
if batch_index > len(data):
batch_index = 0
image_1 = data[batch_index].reshape((28, 28))
label_1 = target[batch_index]
draw_image_and_label(image_1, label_1)
# classify remote
class_num, preds = classify_remote(image_1, model)
print(f"Prediction: {class_num} Ground Truth: {label_1}")
print(preds)
if local_model is None:
local_model = model.get(
request_block=True,
reason="To run test and inference locally",
timeout_secs=5,
)
# classify local
class_num, preds = classify_local(image_1, local_model)
print(f"Prediction: {class_num} Ground Truth: {label_1}")
print(preds)
# We can also download an image from the web and run inference on that
from PIL import Image, ImageEnhance
import PIL.ImageOps
import os
def classify_url_image(image_url):
filename = os.path.basename(image_url)
os.system(f'curl -O {image_url}')
im = Image.open(filename)
im = PIL.ImageOps.invert(im)
# im = im.resize((28,28), Image.ANTIALIAS)
im = im.convert('LA')
enhancer = ImageEnhance.Brightness(im)
im = enhancer.enhance(3)
print(im.size)
fig = plt.figure()
plt.tight_layout()
plt.imshow(im, cmap="gray", interpolation="none")
    # classify the downloaded image locally (use im, not image_1 from the test set)
    im_tensor = torch.Tensor(list(im.convert("L").resize((28, 28)).getdata())).reshape(28, 28)
    class_num, preds = classify_local(im_tensor, local_model)
print(f"Prediction: {class_num}")
print(preds)
# image_url = "https://raw.githubusercontent.com/kensanata/numbers/master/0018_CHXX/0/number-100.png"
# classify_url_image(image_url)
###Output
_____no_output_____ |
Matrix-Diagonal-Sum.ipynb | ###Markdown
Matrix Diagonal SumGiven a square matrix mat, return the sum of the matrix diagonals.Only include the sum of all the elements on the primary diagonal and all the elements on the secondary diagonal that are not part of the primary diagonal. AnalysisProblem source: [LeetCode - Matrix Diagonal Sum - 1572](https://leetcode.com/problems/matrix-diagonal-sum/). This is a very simple problem: it looks like it needs a nested loop, but the column indices can be computed directly from the row index `x` -- they are `x` and `len(mat) - x - 1` -- and we only need to handle the centre of an odd-sized matrix at the end.
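For example, with `mat = [[1,2,3],[4,5,6],[7,8,9]]` the primary diagonal contributes 1 + 5 + 9 = 15 and the secondary diagonal adds 3 + 7 = 10 (the shared centre element 5 is counted only once), so the expected result is 25.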
###Code
def diagonalSum(mat):
total = 0
for main in range(0,len(mat)):
total += mat[main][main]
total += mat[main][len(mat) - main - 1]
if len(mat) % 2 != 0:
        total -= mat[len(mat)//2][len(mat)//2]  # centre element was counted twice; use integer division for the index
return total
print(diagonalSum([[1,1,1,1],[1,1,1,1],[1,1,1,1],[1,1,1,1]]))
###Output
_____no_output_____ |
demos/AutoTable.ipynb | ###Markdown
What needs to be justified?**Model Dependency**Fix the dataset. Metric: the number of distinct clusters (i.e., how many different clusters the different models produce)**Randomization**Just the average variance over the whole curve? Separately -- the intersections (Jaccard?) of the optima, and the variance inside the optima (over all points that fall into at least one optimum) **Methods' disagreement**It seems that here, for each dataset and model, we need to compute the intervals of the number of topics and report statistics on how empty their intersection is?> and against the a-priori assumptions about the number of topics/categories, when the dataset has them. It seems we should point at WRef220 here?**Objectivity concerns**???**Synthetic corpus**??? Properties of studied metrics **Diversity**Diversity vs max(AIC): how many times larger; which diversity to take? let's take all of them for now and discuss later; each cell is a dataset and a model**Information-theoretic**Model x Dataset: was it able to produce an estimate? comma-separated**expected results**WRef, 20NG: metric, model, value of $T$. Mark in bold where it is OK**TODO:** fix Outside and try plateaus
###Code
import glob
import itertools
import os
import time
import sys
import matplotlib.pyplot as plt
%matplotlib inline
sys.path.insert(0, '../OptimalNumberOfTopics/') # topnum
sys.path.insert(1, '/home/bulatov/bb_topicnet/') # topicnet
%load_ext autoreload
%autoreload 2
from topicnet.cooking_machine.models import TopicModel
from topicnet.cooking_machine.dataset import Dataset
from topnum.data.vowpal_wabbit_text_collection import VowpalWabbitTextCollection
from topnum.search_methods.optimize_scores_method import OptimizeScoresMethod
from topnum.utils import (
read_corpus_config, split_into_train_test,
build_every_score, monotonity_and_std_analysis,
trim_config, classify_curve, SCORES_DIRECTION, load_models_from_disk
)
from topnum.model_constructor import KnownModel, PARAMS_EXPLORED
from topnum.utils import estimate_num_iterations_for_convergence
from collections import defaultdict
from topnum.utils import magic_clutch
magic_clutch()
import topnum.scores.base_custom_score as base_custom_score
base_custom_score.__NO_LOADING_DATASET__[0] = True
EXPERIMENTS_DICT = {
"20NewsGroups": "/data/_tmp_alekseev/OptNumExperiments/AllDatasets/20NG_20NG_NEW",
# "RuWikiGood":
"StackOverflow": "/data/_tmp_alekseev/OptNumExperiments/AllDatasets/SO_SO_NEW",
"WikiRef220": "/data/_tmp_alekseev/OptNumExperiments/AllDatasets/WRef_NEW/",
"PostNauka": "/data/_tmp_alekseev/OptNumExperiments/AllDatasets/PN_PN_NEW",
# "Reuters": "/data/_tmp_alekseev/OptNumExperiments/AllDatasets/"
"Brown": "/data/_tmp_alekseev/OptNumExperiments/AllDatasets/Brown_Brown_NEW",
}
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
# %%prun -s cumulative -q -l 600 -T prun0
EXPERIMENT_NAME_TEMPLATE = "_{mfv}_{param_id}_{seed}"
configs_dir = os.path.join('..', 'OptimalNumberOfTopics', 'topnum', 'configs')
configs_mask = os.path.join(configs_dir, '*.yml')
data_results = []
optimum_tolerance = 0.07
from time import time
start = time()
start2 = time()
for config_file in glob.glob(configs_mask):
config = read_corpus_config(config_file)
if config['name'] in EXPERIMENTS_DICT:
print(config['name'])
experiment_directory = EXPERIMENTS_DICT[config['name']]
for model_family in KnownModel:
#if model_family != KnownModel.ARTM:
# continue
# print(model_family, end=", ")
print(model_family, (time() - start)/60)
start = time()
tmp = "WRef_test" if config['name'] == "WikiRef220" else config['batches_prefix']
template = tmp + EXPERIMENT_NAME_TEMPLATE.format(
mfv=model_family.value, param_id="{}", seed="{}"
)
details = defaultdict(dict)
all_subexperems_mask = os.path.join(
experiment_directory, template.format("*", "*")
)
for entry in glob.glob(all_subexperems_mask):
experiment_name = entry.split("/")[-1]
result, detailed_result = load_models_from_disk(
experiment_directory, experiment_name
)
for score in detailed_result.keys():
if SCORES_DIRECTION[score] is not None:
details[score][experiment_name] = detailed_result[score].T
for score in details.keys():
for experiment_name, data in details[score].items():
*name_base, param_id, seed = experiment_name.split("_")
seed = int(seed)
my_data = data.T.mean(axis=0)
score_direction = SCORES_DIRECTION[score]
colored_values, curve_type = classify_curve(my_data, optimum_tolerance, score_direction)
data_results.append(
[
config['name'], model_family.value, param_id, seed, score,
str(curve_type).split(".")[1],
list(colored_values[colored_values.notna()].index),
data.values.T.tolist()[0], data.index.tolist(),
config['max_num_topics'],
config['min_num_topics'],
]
)
print()
end = time()
print((end-start2)/60)
import pandas as pd
rwg_df = pd.read_pickle("rwg_df.pkl")
rwg_df
df = pd.DataFrame(data=data_results, columns=["corpus", "model_family", "parameters_id", "seed", "score", "curve_type", "optimums", 'numeric_values', 'T_index', 'max_num_topics', 'min_num_topics'])
df
df = pd.concat([df, rwg_df])
# df.pivot_table(index=['score', 'curve_type'], aggfunc='count')
'''
table = df.query("model_family != 'ARTM'").pivot_table(
values="seed",
index=['score'], columns=['curve_type'], aggfunc='count',
fill_value=0
)
'''
# df.query("curve_type == 'PEAK' and score == 'toptok1'")
###Output
_____no_output_____
###Markdown
**What we see:** the top-tokens score has a sharp maximum, but it does not coincide across restarts. Stability under randomization: just the average variance over the whole curve? Separately -- the intersections (Jaccard?) of the optima, and the variance inside the optima (over all points that fall into at least one optimum)
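As a reminder, the disagreement between the optimum sets $A$ and $B$ found in different restarts is measured here with the Jaccard distance $1 - \frac{|A \cap B|}{|A \cup B|}$, which is what `calc_stability` below computes (generalised to the union and intersection over all seeds).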
###Code
from functools import reduce
def calc_stability(gdf):
optima_union = list(reduce(lambda a, b: set(a) | set(b), gdf.optimums))
optima_intersect = list(reduce(lambda a, b: set(a) & set(b), gdf.optimums))
tmp = pd.DataFrame.from_records(gdf.numeric_values.values).T
tmp['T_index'] = gdf.T_index.iloc[0]
tmp.set_index("T_index", inplace=True)
scale = tmp.max().max() - tmp.min().min()
relative_delta = (tmp.max(axis=1) - tmp.min(axis=1)) / scale
if optima_union:
jaccard = 1 - len(optima_intersect)/len(optima_union)
else:
jaccard = float("nan")
return relative_delta.loc[optima_union].mean(), relative_delta.mean(), jaccard
#groupee = df.query("curve_type != 'OUTSIDE' and curve_type != 'EMPTY'").groupby(["corpus", "model_family", 'parameters_id', 'score'])
groupee = df.groupby(["corpus", "model_family", 'parameters_id', 'score'])
calculated_stab = pd.DataFrame(groupee.apply(calc_stability))
calculated_stab["avg_rel_delta_opt"] = calculated_stab[0].apply(lambda x: x[0])
calculated_stab["avg_rel_delta_all"] = calculated_stab[0].apply(lambda x: x[1])
calculated_stab["jaccard"] = calculated_stab[0].apply(lambda x: x[2])
calculated_stab.drop(columns=[0], inplace=True)
calculated_stab.groupby(['score', 'corpus']).agg('mean')['jaccard'].unstack()#.sort_values(by=['jaccard'])
df.query("score == '[email protected]_purity' and model_family == 'PLSA' and corpus == 'RuWikiGood'")
row = df.query("curve_type == 'PEAK'").iloc[0]
row
int_len = df.optimums.apply(len)
df[int_len < 3].query("curve_type != 'OUTSIDE' and curve_type != 'PEAK'")
def peak_hack(row):
fixed_val = row.curve_type
if len(row.optimums) < 3:
if len(row.optimums) == 1:
if row.optimums[0] in [row.T_index[0], row.T_index[-1]]:
fixed_val = "OUTSIDE"
if len(row.optimums) == 2:
if set(row.optimums) == set(row.T_index[:2]):
fixed_val = "OUTSIDE"
if set(row.optimums) == set(row.T_index[-2:]):
fixed_val = "OUTSIDE"
return fixed_val
df.curve_type = df.apply(peak_hack, axis=1)
calculated_stab.groupby(['score', 'model_family']).agg('mean')['jaccard'].unstack()#.sort_values(by=['jaccard'])
calculated_stab.groupby(['score', 'corpus']).agg('mean')['jaccard'].unstack().mean(axis=1).sort_values()
RESULTS = pd.DataFrame()
RESULTS["avg_jaccard"] = calculated_stab.groupby(['score', 'corpus']).agg('mean')['jaccard'].unstack().mean(axis=1)
RESULTS
calculated_stab.groupby(['score', 'corpus']).agg('mean')['jaccard'].unstack().max(axis=1)
calculated_stab.groupby(['score']).agg('mean').sort_values(by=['avg_rel_delta_opt'])
###Output
_____no_output_____
###Markdown
Checking overall adequacy
###Code
adeq_table = df.groupby("score").apply(
lambda gdf: gdf.curve_type.value_counts()
).unstack(fill_value=0)
for score in ["TopicKernel@{}.average_contrast", "TopicKernel@{}.average_purity", "SparsityPhiScore@{}"]:
print(score)
adeq_table.loc[score.format("ALL"), :] = adeq_table.loc[score.format("lemmatized"), :] + adeq_table.loc[score.format("word"), :]
adeq_table.drop(score.format("word"), inplace=True)
adeq_table.drop(score.format("lemmatized"), inplace=True)
adeq_table['IP'] = adeq_table['INTERVAL'] + adeq_table['PEAK']
adeq_table['IP'] / (adeq_table.sum(axis=1) - adeq_table['IP'])
RESULTS["informativity"] = adeq_table['IP'] / (adeq_table.sum(axis=1) - adeq_table['IP'])
adeq_table.sort_values(by=["INTERVAL"], ascending=False)
df
adeq_table.sum(axis=1)
###Output
_____no_output_____
###Markdown
Checking specific adequacy
###Code
df.corpus.unique()
EXPERIMENTS_EXPECTED_T = {
'20NewsGroups': list(range(15,21)),
'StackOverflow': [],
'WikiRef220': [5],
'PostNauka': list(range(15,31)),
'Brown': list(range(10,21)),
'RuWikiGood': list(range(7,14)) + list(range(80,100)),
# 'RuWikiGood': list(range(80,100)),
# 'Reuters': list(range(15, 50))
}
def calc_if_row_succeeded(row):
corpus = row.corpus
# print(corpus, set(EXPERIMENTS_EXPECTED_T[corpus]))
result = bool(set(row.optimums) & set(EXPERIMENTS_EXPECTED_T[corpus]))
result &= (row.curve_type != "EMPTY") & (row.curve_type != "OUTSIDE")
return result
df['managed'] = df.apply(calc_if_row_succeeded, axis=1)
df.query("corpus == 'RuWikiGood' and managed == True").score.value_counts()
df.query("corpus == 'RuWikiGood' and managed == True").score.value_counts()
df.query("corpus == 'Reuters'")
groupee = df.groupby(["score", "corpus"])
res = groupee.apply(lambda gdf: gdf.managed.sum())
total = groupee.apply(lambda gdf: gdf.managed.count())
pivot_res = pd.DataFrame(res / total).unstack()
groupee2 = df.groupby(["score", "model_family"])
res2 = groupee2.apply(lambda gdf: gdf.managed.sum())
total2 = groupee2.apply(lambda gdf: gdf.managed.count())
pivot_res
res2['toptok1']
total2
pivot_res2 = pd.DataFrame(res2 / total2).unstack()
pivot_res2
RESULTS["expected"] = pivot_res.drop(columns=[(0, "StackOverflow")]).mean(axis=1)
pivot_res2.sort_values(by=[(0,'PLSA')], ascending=False)
#pivot_res2.sort_values(by=['avg'], ascending=False)
(pd.DataFrame(res).unstack().sum(axis=1) / pd.DataFrame(total).unstack().sum(axis=1)).sort_values(ascending=False)
df.query("managed == True and score == 'toptok1' and model_family != 'ARTM'")
RESULTS
FILTERED_RESULTS = RESULTS.copy()
FILTERED_RESULTS.index
to_remain = ['AIC_sparsity_False', 'AIC_sparsity_True', 'BIC_sparsity_False',
'BIC_sparsity_True', 'MDL_sparsity_False', 'MDL_sparsity_True',
'arun', 'calhar', 'diversity_cosine_False', 'diversity_cosine_True',
'diversity_euclidean_False', 'diversity_euclidean_True',
'diversity_hellinger_False', 'diversity_hellinger_True',
'diversity_jensenshannon_False', 'diversity_jensenshannon_True',
'lift', 'new_holdout_perp', 'perp',
'renyi_0.5', 'renyi_1', 'renyi_2', 'silh', 'toptok1',
'uni_theta_divergence']
FILTERED_RESULTS = FILTERED_RESULTS.loc[to_remain]
['AIC_sparsity_False', 'AIC_sparsity_True', 'BIC_sparsity_False',
'BIC_sparsity_True', 'MDL_sparsity_False', 'MDL_sparsity_True',
'arun', 'calhar', 'diversity_cosine_False', 'diversity_cosine_True',
'diversity_euclidean_False', 'diversity_euclidean_True',
'diversity_hellinger_False', 'diversity_hellinger_True',
'diversity_jensenshannon_False', 'diversity_jensenshannon_True',
'lift', 'new_holdout_perp', 'perp',
'renyi_0.5', 'renyi_1', 'renyi_2', 'silh', 'toptok1',
'uni_theta_divergence']
renamer = {}
for elem in to_remain:
renamed_elem = None
if "_sparsity_" in elem:
parts = elem.split("_")
if parts[-1] == "False":
renamed_elem = parts[0]
else:
renamed_elem = "sparse " + parts[0]
if "diversity_" in elem:
parts = elem.split("_")
t = {"cosine": "COS", "euclidean": "L2", "hellinger": "H", "jensenshannon": "JH"}
if parts[-1] == "False":
renamed_elem = f"D-" + "avg-" + t[parts[1]]
else:
renamed_elem = f"D-" + "cls-" + t[parts[1]]
if elem == 'new_holdout_perp':
renamed_elem = "holdout_perplexity"
if elem == 'perp':
renamed_elem = "perplexity"
if "renyi" in elem or "lift" in elem or "uni_theta_divergence" in elem:
renamed_elem = elem
if elem == 'arun':
renamed_elem = "D-Spectral"
if elem == 'calhar':
renamed_elem = "CHI"
if elem == 'silh':
renamed_elem = "SilhC"
if elem == 'toptok1':
renamed_elem = "average coherence"
renamer[elem] = renamed_elem.replace("_", "-")
FILTERED_RESULTS.rename(index=renamer)
FILTERED_RESULTS.rename(index=renamer, inplace=True)
print(FILTERED_RESULTS.to_latex(float_format="%.3f"))
FILTERED_RESULTS.sort_values(by="avg_jaccard").head(7)
FILTERED_RESULTS.sort_values(by="informativity").tail(7)
FILTERED_RESULTS.sort_values(by="expected").tail(11)
FILTERED_RESULTS.shape
###Output
_____no_output_____
###Markdown
Investigating the Reuters issue (WIP)
###Code
!ls -lh /data/_tmp_alekseev/OptNumExperiments/AllDatasets/Reuters_Reuters_NEW
from collections import Counter
EXPERIMENT_NAME_TEMPLATE = "_{mfv}_{param_id}_{seed}"
configs_dir = os.path.join('..', 'OptimalNumberOfTopics', 'topnum', 'configs')
configs_mask = os.path.join(configs_dir, '*.yml')
for config_file in glob.glob(configs_mask):
config = read_corpus_config(config_file)
if config['name'] == "Reuters":
break
for model_family in KnownModel:
tmp = "WRef_test" if config['name'] == "WikiRef220" else config['batches_prefix']
template = tmp + EXPERIMENT_NAME_TEMPLATE.format(
mfv=model_family.value, param_id="{}", seed="{}"
)
experiment_directory = '/data/_tmp_alekseev/OptNumExperiments/AllDatasets/Reuters_Reuters_NEW'
details = defaultdict(dict)
all_subexperems_mask = os.path.join(
experiment_directory, template.format("*", "*")
)
print(all_subexperems_mask)
for entry in glob.glob(all_subexperems_mask):
print(entry)
experiment_name = entry.split("/")[-1]
print(experiment_name)
masks = [
f"{experiment_directory}/{experiment_name}_*",
f"{experiment_directory}/{experiment_name}/*"
]
for new_exp_format, mask in enumerate(masks):
if not len(glob.glob(mask)):
continue
print(experiment_name, len(glob.glob(mask)))
cnt = Counter()
print(
Counter(
os.stat(folder).st_mode
for folder in glob.glob(mask)
)
)
for folder2 in glob.glob(mask):
# if os.stat(folder2).st_mode == 17901:
if os.stat(folder2).st_mode == 16893:
dm = DummyTopicModel.load(folder2)
break
break
break
#result, detailed_result = load_models_from_disk(
# experiment_directory, experiment_name
#)
folder, os.stat(folder).st_mode
test_dataset = Dataset(
'/home/alekseev/topicnet/tests/test_data/test_dataset.csv',
internals_folder_path="./DELETE_ME_PLZ"
)
_ = build_every_score(test_dataset, test_dataset, config)
from topnum.scores import (
IntratextCoherenceScore, SophisticatedTopTokensCoherenceScore
)
IntratextCoherenceScore("jbi", test_dataset)
SophisticatedTopTokensCoherenceScore("sds", test_dataset)
tm = TopicModel.load(folder)
tm._model.num_phi_updates
tm2 = TopicModel.load(folder2)
tm2._model.num_phi_updates
! cp /home/alekseev/OptimalNumberOfTopics/demos/*_train.csv .
for folder in glob.glob(mask):
print(os.path.basename(folder), os.stat(folder).st_mode)
experiment_directory
config
experiment_directory, experiment_name
base_experiment_name = experiment_name
###Output
_____no_output_____ |
Tutorial 2 - Calculations in Bulk/Data Generation for Noisy Linear Data to CSV.ipynb | ###Markdown
Choose the amount of data you wish to generate.
###Code
# imports needed by the cells below
from scipy import stats
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

number_of_datapoints = 150
###Output
_____no_output_____
###Markdown
Choose the highest x value you'd like.
###Code
x_max = 100
###Output
_____no_output_____
###Markdown
Uniformly generate the data points on [0, 1], then scale by x max. Subtracting 0.5 from the uniform values before scaling by your biggest x value is an easy way to create a set of x values symmetric around 0 that includes negative values (e.g. with x max = 100 the values land in [-50, 50]).
###Code
negatives = 0.5*True #False means your data's x values lie from [0,x max]. True means x is on [-x max/2, x max/2].
data = (stats.uniform().rvs(number_of_datapoints) - negatives)*x_max
###Output
_____no_output_____
###Markdown
The noise is simply gaussian with whichever mean and standard deviation you'd like.
###Code
noise_mean = 0
noise_stDeviation = 15
noise = np.random.normal(noise_mean,noise_stDeviation,number_of_datapoints)
###Output
_____no_output_____
###Markdown
Choose the parameters for your underlying linear data.
###Code
slope = 2
intercept = 0
###Output
_____no_output_____
###Markdown
The dependent data, or your y values, are now y = mx+b with noise, or some error.
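In symbols, each observation is $y_i = m\,x_i + b + \varepsilon_i$, where the noise term $\varepsilon_i \sim \mathcal{N}(\mu, \sigma^2)$ uses the `noise_mean` and `noise_stDeviation` chosen above.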
###Code
dependent_variable = slope*data + intercept + noise
###Output
_____no_output_____
###Markdown
Verify your data looks the way it should.
###Code
plt.scatter(data,dependent_variable)
###Output
_____no_output_____
###Markdown
Create a Pandas dataframe to export the data as a csv, since these are pretty easy to work with.
###Code
columns_for_table = {'x values':data,'y values': dependent_variable}
generated_data = pd.DataFrame(columns_for_table)
filename = 'simulated data.csv'
generated_data.to_csv(filename, encoding='utf-8', index=False)
###Output
_____no_output_____ |
PDR_VGG19_Testing.ipynb | ###Markdown
Plant Disease Recognition using VGG19 on a modified version of the PlantVillage dataset. Importing necessary libraries
###Code
import tensorflow as tf
print(tf.__version__)
from tensorflow.keras.layers import Input, Dense, Flatten
from tensorflow.keras.applications.vgg19 import VGG19 as PretrainedModel, preprocess_input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import BatchNormalization
from glob import glob
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sys, os
###Output
_____no_output_____
###Markdown
Downloading and unzipping the modified dataset available on Google Drive. If you don't have the gdown module, install it with `pip install gdown`.
###Code
!gdown --id 1Mj6wsKBZN2ycAyyIMs2lI361deuCJqBI --output pv0.zip
!unzip pv0.zip
###Output
_____no_output_____
###Markdown
Check if the folder has been unzipped.
###Code
!ls
###Output
pv0 pv0.zip sample_data
###Markdown
Setting up paths for the Keras data generators
###Code
train_path = '/content/pv0/train'
valid_path = '/content/pv0/test'
# useful for getting number of files
image_files = glob(train_path + '/*/*.JPG')
valid_image_files = glob(valid_path + '/*/*.JPG')
# useful for getting number of classes
folders = glob(train_path + '/*')
len(folders)
###Output
_____no_output_____
###Markdown
Specify input image size.
###Code
IMAGE_SIZE = [256, 256]
# sneak peek at a random image
plt.imshow(image.load_img(np.random.choice(image_files)))
plt.show()
###Output
_____no_output_____
###Markdown
Configuring the pretrained model as per our needs.
###Code
ptm = PretrainedModel(
input_shape=IMAGE_SIZE + [3],
weights='imagenet',
include_top=False)
# freeze pretrained model weights
ptm.trainable = False
K = len(folders) # number of classes
#model definition
x = Flatten()(ptm.output)
x= BatchNormalization()(x)
x= Dense(512,activation='relu')(x)
x = Dense(K, activation='softmax')(x)
# create a model object
model = Model(inputs=ptm.input, outputs=x)
# view the structure of the model
model.summary()
#view the number of layers in the model
len(model.layers)
# create an instance of ImageDataGenerator
# Keras generators return one-hot encoded labels and provide data augmentation.
gen_train = ImageDataGenerator(
rotation_range=90,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.1,
zoom_range=0.2,
horizontal_flip=True,
preprocessing_function=preprocess_input
)
gen_test = ImageDataGenerator(
preprocessing_function=preprocess_input
)
#batch size is the number of examples that are run through the model at once.
batch_size = 300
# create generators
train_generator = gen_train.flow_from_directory(
train_path,
shuffle=True,
target_size=IMAGE_SIZE,
batch_size=batch_size,
)
valid_generator = gen_test.flow_from_directory(
valid_path,
target_size=IMAGE_SIZE,
batch_size=batch_size,
)
###Output
Found 46141 images belonging to 38 classes.
Found 8162 images belonging to 38 classes.
###Markdown
Since Keras no longer provides some metrics out of the box, we define them ourselves. Here, we define the F1 score, precision and recall.
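For reference, in terms of true-positive, false-positive and false-negative counts these are $precision = \frac{TP}{TP + FP}$, $recall = \frac{TP}{TP + FN}$ and $F_1 = \frac{2 \cdot precision \cdot recall}{precision + recall}$; the Keras-backend implementations below compute element-wise, batch-wise approximations of these quantities.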
###Code
from keras import backend as Ke
def recall_m(y_true, y_pred):
true_positives = Ke.sum(Ke.round(Ke.clip(y_true * y_pred, 0, 1)))
possible_positives = Ke.sum(Ke.round(Ke.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + Ke.epsilon())
return recall
def precision_m(y_true, y_pred):
true_positives = Ke.sum(Ke.round(Ke.clip(y_true * y_pred, 0, 1)))
predicted_positives = Ke.sum(Ke.round(Ke.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + Ke.epsilon())
return precision
def f1_m(y_true, y_pred):
precision = precision_m(y_true, y_pred)
recall = recall_m(y_true, y_pred)
return 2*((precision*recall)/(precision+recall+Ke.epsilon()))
###Output
_____no_output_____
###Markdown
This block creates a learning-rate scheduler with step decay; since it was not as effective as using Adam directly, it is left commented out for experimentation.
###Code
# from keras.optimizers import SGD
# import math
# def step_decay(epoch):
# initial_lrate = 1e-4
# drop = 0.5
# epochs_drop = 10.0
# lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
# return lrate
# sgd = SGD(lr=0.0, momentum=0.9)
# # learning schedule callback
# from keras.callbacks import LearningRateScheduler
# lrate = LearningRateScheduler(step_decay)
# callbacks_list = [lrate]
###Output
_____no_output_____
###Markdown
Compiling our model with loss, optimizer and metrics (including our custom defined ones).
###Code
model.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy',f1_m,precision_m, recall_m]
)
###Output
_____no_output_____
###Markdown
The fit function is called to start training.
###Code
# fit the model
r = model.fit(
train_generator,
validation_data=valid_generator,
epochs=5,
steps_per_epoch=int(np.ceil(len(image_files) / batch_size)),
validation_steps=int(np.ceil(len(valid_image_files) / batch_size)),
)
###Output
Epoch 1/5
150/150 [==============================] - 795s 5s/step - loss: 0.7822 - accuracy: 0.8552 - f1_m: 0.8575 - precision_m: 0.8668 - recall_m: 0.8489 - val_loss: 0.6398 - val_accuracy: 0.9083 - val_f1_m: 0.9091 - val_precision_m: 0.9116 - val_recall_m: 0.9065
Epoch 2/5
150/150 [==============================] - 768s 5s/step - loss: 0.3244 - accuracy: 0.9265 - f1_m: 0.9274 - precision_m: 0.9322 - recall_m: 0.9227 - val_loss: 0.2975 - val_accuracy: 0.9395 - val_f1_m: 0.9406 - val_precision_m: 0.9434 - val_recall_m: 0.9379
Epoch 3/5
150/150 [==============================] - 758s 5s/step - loss: 0.2359 - accuracy: 0.9396 - f1_m: 0.9411 - precision_m: 0.9459 - recall_m: 0.9364 - val_loss: 0.2293 - val_accuracy: 0.9496 - val_f1_m: 0.9502 - val_precision_m: 0.9531 - val_recall_m: 0.9474
Epoch 4/5
150/150 [==============================] - 755s 5s/step - loss: 0.1881 - accuracy: 0.9482 - f1_m: 0.9491 - precision_m: 0.9533 - recall_m: 0.9449 - val_loss: 0.1689 - val_accuracy: 0.9570 - val_f1_m: 0.9567 - val_precision_m: 0.9590 - val_recall_m: 0.9544
Epoch 5/5
150/150 [==============================] - 754s 5s/step - loss: 0.1611 - accuracy: 0.9543 - f1_m: 0.9547 - precision_m: 0.9588 - recall_m: 0.9507 - val_loss: 0.1994 - val_accuracy: 0.9543 - val_f1_m: 0.9552 - val_precision_m: 0.9576 - val_recall_m: 0.9528
###Markdown
Saving our model in HDF5 format.
###Code
model.save("model.h5")
print("Saved model to disk")
###Output
Saved model to disk
###Markdown
Graphs for our metrics
###Code
# loss
plt.plot(r.history['loss'], label='train loss')
plt.plot(r.history['val_loss'], label='val loss')
plt.legend()
plt.show()
# accuracies
plt.plot(r.history['accuracy'], label='train acc')
plt.plot(r.history['val_accuracy'], label='val acc')
plt.legend()
plt.show()
# f1_score
plt.plot(r.history['f1_m'], label='train f1_m')
plt.plot(r.history['val_f1_m'], label='val f1_m')
plt.legend()
plt.show()
# precision
plt.plot(r.history['precision_m'], label='train precision_m')
plt.plot(r.history['val_precision_m'], label='val precision_m')
plt.legend()
plt.show()
# recall
plt.plot(r.history['recall_m'], label='train recall_m')
plt.plot(r.history['val_recall_m'], label='val recall_m')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Next we evaluate the model on our test set again.
###Code
# evaluate the model
valid_generator = gen_test.flow_from_directory(valid_path,target_size=IMAGE_SIZE,batch_size=batch_size,)
loss, accuracy, f1_score, precision, recall = model.evaluate(valid_generator, steps=int(np.ceil(len(valid_image_files)/ batch_size)))
###Output
Found 8162 images belonging to 38 classes.
27/27 [==============================] - 57s 2s/step - loss: 0.1991 - accuracy: 0.9547 - f1_m: 0.9554 - precision_m: 0.9577 - recall_m: 0.9531
###Markdown
Printing our metrics
###Code
print('loss : ',loss)
print('accuracy : ',accuracy)
print('f1_score :',f1_score)
print('precision:',precision)
print('recall :',recall)
###Output
loss : 0.19907443225383759
accuracy : 0.9546913504600525
f1_score : 0.9553820490837097
precision: 0.9576932191848755
recall : 0.9530863761901855
|
Copy_of_Attention_Basics.ipynb | ###Markdown
Attention BasicsIn this notebook, we look at how attention is implemented. We will focus on implementing attention in isolation from a larger model. That's because when implementing attention in a real-world model, a lot of the focus goes into piping the data and juggling the various vectors rather than the concepts of attention themselves.We will implement attention scoring as well as calculating an attention context vector. Attention Scoring Inputs to the scoring functionLet's start by looking at the inputs we'll give to the scoring function. We will assume we're in the first step of the decoding phase. The first input to the scoring function is the hidden state of the decoder (assuming a toy RNN with three hidden nodes -- not usable in real life, but easier to illustrate):
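In symbols, for the dot-product scoring used in this notebook: the raw score for annotation $h_i$ is $e_i = h_{dec}^\top h_i$, the attention weights are $\alpha = \mathrm{softmax}(e)$, and the context vector is $c = \sum_i \alpha_i\, h_i$.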
###Code
dec_hidden_state = [5,1,20]
###Output
_____no_output_____
###Markdown
Let's visualize this vector:
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Let's visualize our decoder hidden state
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(dec_hidden_state)), annot=True, cmap=sns.light_palette("purple", as_cmap=True), linewidths=1)
###Output
_____no_output_____
###Markdown
Our first scoring function will score a single annotation (encoder hidden state), which looks like this:
###Code
annotation = [3,12,45] #e.g. Encoder hidden state
# Let's visualize the single annotation
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(annotation)), annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
###Output
_____no_output_____
###Markdown
IMPLEMENT: Scoring a Single AnnotationLet's calculate the dot product of a single annotation. Numpy's [dot()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) is a good candidate for this operation
###Code
def single_dot_attention_score(dec_hidden_state, enc_hidden_state):
# TODO: return the dot product of the two vectors
return np.dot(dec_hidden_state, enc_hidden_state)
single_dot_attention_score(dec_hidden_state, annotation)
###Output
_____no_output_____
###Markdown
Annotations MatrixLet's now look at scoring all the annotations at once. To do that, here's our annotation matrix:
###Code
annotations = np.transpose([[3,12,45], [59,2,5], [1,43,5], [4,3,45.3]])
###Output
_____no_output_____
###Markdown
And it can be visualized like this (each column is a hidden state of an encoder time step):
###Code
# Let's visualize our annotation (each column is an annotation)
ax = sns.heatmap(annotations, annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
###Output
_____no_output_____
###Markdown
IMPLEMENT: Scoring All Annotations at OnceLet's calculate the scores of all the annotations in one step using matrix multiplication. Let's continue to use the dot scoring methodTo do that, we'll have to transpose `dec_hidden_state` and [matrix multiply](https://docs.scipy.org/doc/numpy/reference/generated/numpy.matmul.html) it with `annotations`.
###Code
def dot_attention_score(dec_hidden_state, annotations):
# TODO: return the product of dec_hidden_state transpose and enc_hidden_states
return np.matmul(np.transpose(dec_hidden_state), annotations)
attention_weights_raw = dot_attention_score(dec_hidden_state, annotations)
attention_weights_raw
###Output
_____no_output_____
###Markdown
Looking at these scores, can you guess which of the four vectors will get the most attention from the decoder at this time step? SoftmaxNow that we have our scores, let's apply softmax:
###Code
def softmax(x):
x = np.array(x, dtype=np.float128)
e_x = np.exp(x)
return e_x / e_x.sum(axis=0)
attention_weights = softmax(attention_weights_raw)
attention_weights
###Output
_____no_output_____
###Markdown
Even when knowing which annotation will get the most focus, it's interesting to see how drastically softmax reshapes the final scores. The first and last annotations had the respective scores of 927 and 929. But after softmax, the attention they'll get is 0.12 and 0.88 respectively. Applying the scores back on the annotationsNow that we have our scores, let's multiply each annotation by its score to proceed closer to the attention context vector. This is the multiplication part of this formula (we'll tackle the summation part in the later cells)
###Code
def apply_attention_scores(attention_weights, annotations):
# TODO: Multiply the annotations by their weights
return attention_weights * annotations
applied_attention = apply_attention_scores(attention_weights, annotations)
applied_attention
###Output
_____no_output_____
###Markdown
Let's visualize how the context vector looks now that we've applied the attention scores back on it:
###Code
# Let's visualize our annotations after applying attention to them
ax = sns.heatmap(applied_attention, annot=True, cmap=sns.light_palette("orange", as_cmap=True), linewidths=1)
###Output
_____no_output_____
###Markdown
Contrast this with the raw annotations visualized earlier in the notebook, and we can see that the second and third annotations (columns) have been nearly wiped out. The first annotation maintains some of its value, and the fourth annotation is the most pronounced. Calculating the Attention Context VectorAll that remains to produce our attention context vector now is to sum up the four columns to produce a single attention context vector
###Code
def calculate_attention_vector(applied_attention):
return np.sum(applied_attention, axis=1)
attention_vector = calculate_attention_vector(applied_attention)
attention_vector
# Let's visualize the attention context vector
plt.figure(figsize=(1.5, 4.5))
sns.heatmap(np.transpose(np.matrix(attention_vector)), annot=True, cmap=sns.light_palette("Blue", as_cmap=True), linewidths=1)
###Output
_____no_output_____ |
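###Markdown
As a recap, the three steps above (dot-product scoring, softmax, weighted sum) can be wrapped into one helper. This is only a consolidation sketch of the code already in this notebook, reusing the softmax function and the dec_hidden_state and annotations defined above.
###Code
def attention_context(dec_hidden_state, annotations):
    # 1. Dot-product scores between the decoder state and every annotation
    scores = np.matmul(np.transpose(dec_hidden_state), annotations)
    # 2. Normalize the scores into attention weights
    weights = softmax(scores)
    # 3. Weighted sum of the annotations gives the attention context vector
    return np.sum(weights * annotations, axis=1)

attention_context(dec_hidden_state, annotations)
###Output
_____no_output_____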
BBY162 Not Defteri Hafta2.ipynb | ###Markdown
Bölüm 00: Python'a Giriş Yazar HakkındaMihriban Civaroğlu Çalışma Defteri HakkındaBu çalışma defteri Google'ın Jupyter Notebook platformuna benzer özellikler taşıyan Google Colab üzerinde oluşturulmuştur. Google Colab, herhangi bir altyapı düzenlemesine ihtiyaç duymadan Web tabanlı olarak Python kodları yazmanıza ve çalıştırmanıza imkan veren ücretsiz bir platformdur. Platform ile ilgili detaylı bilgiye [https://colab.research.google.com/notebooks/intro.ipynb](https://colab.research.google.com/notebooks/intro.ipynb) adresinden ulaşabilirsiniz.Python'a giriş seviyesinde 10 dersten oluşan bu çalışma defteri daha önce kodlama deneyimi olmayan öğrenenler için hazırlanmıştır. Etkileşimli yapısından dolayı hem konu anlatımlarının hem de çalıştırılabilir örneklerin bir arada olduğu bu yapı, sürekli olarak güncellenebilecek bir altyapıya sahiptir. Bu açıdan çalışma defterinin güncel sürümünü aşağıdaki adresten kontrol etmenizi tavsiye ederim.Sürüm 1.0: [Python'a Giriş](https://github.com/orcunmadran/hu-bby162-2020/blob/master/BBY162_Python_a_Giris.ipynb)İyi çalışmalar ve başarılar :) Kullanım ŞartlarıBu çalışma defteri aşağıda belirtilen şartlar altında, katkıda bulunanlara Atıf vermek ve aynı lisansla paylaşmak kaydıyla ticari amaç dahil olmak üzere her şekilde dağıtabilir, paylaşabilir, üzerinde değişiklik yapılarak yeniden kullanılabilir.---Bu çalışma defteri Jetbrains'in "Introduction to Python" dersi temel alınarak hazırlanmış ve Creative Commons [Atıf-AynıLisanslaPaylaş 4.0 Uluslararası Lisansı](http://creativecommons.org/licenses/by-sa/4.0/) ile lisanslanmıştır.--- Bölüm 01: GirişBu bölümde:* İlk bilgisayar programımız,* Yorumlar yer almaktadır. İlk Bilgisayar ProgramımızGeleneksel olarak herhangi bir programlama dilinde yazılan ilk program "Merhaba Dünya!"'dır. **Örnek Uygulama:**```print("Merhaba Dünya!")```
###Code
# Örnek uygulamayı çalıştır
print("Merhaba Dünya!")
###Output
Merhaba Dünya!
###Markdown
**Görev:** Kendinizi dünyaya tanıtacak ilk bilgisayar programını yazın!
###Code
print("Merhaba Dünya Ve Bütün O Balıklar İçin Teşekkürler")
###Output
Merhaba Dünya Ve Bütün O Balıklar İçin Teşekkürler
###Markdown
YorumlarPython'daki yorumlar "hash" karakteriyle başlar ve fiziksel çizginin sonuna kadar uzanır. Yorum yapmak için kullanılan "hash" karakteri kod satırlarını geçici olarak devre dışı bırakmak amacıyla da kullanılabilir. **Örnek Uygulama:**``` Bu ilk bilgisayar programım için ilk yorumumprint(" bu bir yorum değildir")print("Merhaba!") yorumlar kod satırının devamında da yapılabilir.print("Bu kod geçici olarak devre dışı bırakılmıştır.")```
###Code
# Örnek uygulamayı çalıştır
# Bu ilk bilgisayar programım için ilk yorumum
print("# bu bir yorum değildir")
print("Merhaba!") # yorumlar kod satırının devamında da yapılabilir.
# print("Bu kod geçici olarak devre dışı bırakılmıştır.")
###Output
# bu bir yorum değildir
Merhaba!
###Markdown
**Görev:** Python kodunuza yeni bir yorum ekleyin, mevcut satıra yorum ekleyin, yazılmış olan bir kod satırını geçici olarak devre dışı bırakın!
###Code
print("Bu satırın devamına bir yorum ekleyin") #yorum eklendi
#print("Bu satırı devre dışı bırakın!")
###Output
Bu satırın devamına bir yorum ekleyin
###Markdown
Bölüm 02: DeğişkenlerBu bölümde:* Değişken nedir?,* Değişken tanımlama,* Değişken türleri,* Değişken türü dönüştürme,* Aritmetik operatörler,* Artıtılmış atama operatörleri,* Boolean operatörleri,* Karşılaştırma operatörleri yer almaktadır. Değişken Nedir?Değişkenler değerleri depolamak için kullanılır. Böylece daha sonra bu değişkenler program içinden çağırılarak atanan değer tekrar ve tekrar kullanılabilir. Değişkenlere metinler ve / veya sayılar atanabilir. Sayı atamaları direkt rakamların yazılması ile gerçekleştirilirken, metin atamalarında metin tek tırnak içinde ( 'abc' ) ya da çift tırnak ( "abc" ) içinde atanır.Değişkenler etiketlere benzer ve atama operatörü olarak adlandırılan eşittir ( = ) operatörü ile bir değişkene bir değer atanabilir. Bir değer ataması zincirleme şeklinde gerçekleştirilebilir. Örneğin: a = b = 2 **Örnek Uygulama 1**Aşağıda bir "zincir atama" örneği yer almaktadır. Değer olarak atanan 2 hem "a" değişkenine, hem de "b" değişkenine atanmaktadır.```a = b = 2print("a = " + str(a))print("b = " + str(b))```"a" ve "b" değişkenleri başka metinler ile birlikte ekrana yazdırılmak istendiğinde metin formatına çevrilmesi gerekmektedir. Bu bağlamda kullanılan "str(a)" ve "str(b)" ifadeleri eğitimin ilerleyen bölümlerinde anlatılacaktır.
###Code
# Örnek uygulamayı çalıştır
a = b = 2
print("a = " + str(a))
print("b = " + str(b))
###Output
a = 2
b = 2
###Markdown
**Örnek Uygulama 2**```adSoyad = "Orçun Madran"print("Adı Soyadı: " + adSoyad)```
###Code
# Örnek uygulamayı çalıştır
adSoyad = "Orçun Madran"
print("Adı Soyadı: " + adSoyad)
###Output
Adı Soyadı: Orçun Madran
###Markdown
**Görev:** "eposta" adlı bir değişken oluşturun. Oluşturduğunuz bu değişkene bir e-posta adresi atayın. Daha sonra atadığınız bu değeri ekrana yazdırın. Örneğin: "E-posta: orcun[at]madran.net"
###Code
# Ekrana e-posta yazdır
eposta = "[email protected]"
print("E-posta: " + eposta)
###Output
E-posta: [email protected]
###Markdown
Değişken TanımlamaDeğişken isimlerinde uyulması gereken bir takım kurallar vardır:* Rakam ile başlayamaz.* Boşluk kullanılamaz.* Alt tire ( _ ) haricinde bir noktalama işareti kullanılamaz.* Python içinde yerleşik olarak tanımlanmış anahtar kelimeler kullanılamaz (ör: print).* Python 3. sürümden itibaren latin dışı karakter desteği olan "Unicode" desteği gelmiştir. Türkçe karakterler değişken isimlerinde kullanılabilir. **Dikkat:** Değişken isimleri büyük-küçük harfe duyarlıdır. Büyük harfle başlanan isimlendirmeler genelde *sınıflar* için kullanılır. Değişken isimlerinin daha anlaşılır olması için deve notasyonu (camelCase) ya da alt tire kullanımı tavsiye edilir. **Örnek Uygulama:**```degisken = 1kullaniciAdi = "orcunmadran"kul_ad = "rafet"``` Henüz tanımlanmamış bir değişken kullanıldığında derleyicinin döndürdüğü hatayı kodu çalıştırarak gözlemleyin!
###Code
degisken1 = "Veri"
print(degisken2)
###Output
_____no_output_____
###Markdown
**Görev:** Tanımladığınız değişkeni ekrana yazdırın!
###Code
degisken3 = 'Yeni veri'
print(degisken3)
###Output
Yeni veri
###Markdown
Değişken TürleriPython'da iki ana sayı türü vardır; tam sayılar ve ondalık sayılar.**Dikkat:** Ondalık sayıların yazımında Türkçe'de *virgül* (,) kullanılmasına rağmen, programlama dillerinin evrensel yazım kuralları içerisinde ondalık sayılar *nokta* (.) ile ifade edilir. **Örnek Uygulama:**```tamSayi = 5print(type(tamSayi)) tamSayi değişkeninin türünü yazdırırondalikSayi = 7.4print(type(ondalikSayi) ondalikSayi değişkeninin türünü yazdırır```
###Code
# Örnek uygulamayı çalıştır
tamSayi = 5
print(type(tamSayi))
ondalikSayi = 7.4
print(type(ondalikSayi))
###Output
<class 'int'>
<class 'float'>
###Markdown
**Görev:** "sayi" değişkeninin türünü belirleyerek ekrana yazdırın!
###Code
sayi = 9.0
print(type(sayi))
###Output
<class 'float'>
###Markdown
Değişken Türü DönüştürmeBir veri türünü diğerine dönüştürmenize izin veren birkaç yerleşik fonksiyon (built-in function) vardır. Bu fonksiyonlar ("int()", "str()", "float()") uygulandıkları değişkeni dönüştürerek yeni bir nesne döndürürler. **Örnek Uygulama**```sayi = 6.5print(type(sayi)) "sayi" değişkeninin türünü ondalık olarak yazdırırprint(sayi)sayi = int(sayi) Ondalık sayı olan "sayi" değişkenini tam sayıya dönüştürürprint(type(sayi))print(sayi)sayi = float(sayi) Tam sayı olan "sayi" değişkenini ondalık sayıya dönüştürürprint(type(sayi))print(sayi)sayi = str(sayi) "sayi" değişkeni artık düz metin halini almıştırprint(type(sayi))print(sayi)```
###Code
# Örnek uygulamayı çalıştır
sayi = 6.5
print(type(sayi))
print(sayi)
sayi = int(sayi)
print(type(sayi))
print(sayi)
sayi = float(sayi)
print(type(sayi))
print(sayi)
sayi = str(sayi)
print(type(sayi))
print(sayi)
###Output
_____no_output_____
###Markdown
**Görev:** Ondalık sayıyı tam sayıya dönüştürün ve ekrana değişken türünü ve değeri yazdırın!
###Code
sayi = 3.14
sayi = int(sayi) #ondalık sayı tam sayıya dönüştürüldü
print(sayi) #değişkenin değeri yazdırıldı
print(type(sayi)) #değişken türü yazdırıldı
###Output
3
<class 'int'>
###Markdown
Aritmetik OperatörlerDiğer tüm programlama dillerinde olduğu gibi, toplama (+), çıkarma (-), çarpma (yıldız) ve bölme (/) operatörleri sayılarla kullanılabilir. Bunlarla birlikte Python'un üs (çift yıldız) ve mod (%) operatörleri vardır.**Dikkat:** Matematik işlemlerinde geçerli olan aritmetik operatörlerin öncelik sıralamaları (çarpma, bölme, toplama, çıkarma) ve parantezlerin önceliği kuralları Python içindeki matematiksel işlemler için de geçerlidir. **Örnek Uygulama:**``` Toplama işlemisayi = 7.0sonuc = sayi + 3.5print(sonuc) Çıkarma işlemisayi = 200sonuc = sayi - 35print(sonuc) Çarpma işlemisayi = 44sonuc = sayi * 10print(sonuc) Bölme işlemisayi = 30sonuc = sayi / 3print(sonuc) Üs alma işlemisayi = 30sonuc = sayi ** 3print(sonuc) Mod alma işlemi sayi = 35sonuc = sayi % 4print(sonuc)```
###Code
# Örnek uygulamayı çalıştır
# Toplama işlemi
sayi = 7.0
sonuc = sayi + 3.5
print(sonuc)
# Çıkarma işlemi
sayi = 200
sonuc = sayi - 35
print(sonuc)
# Çarpma işlemi
sayi = 44
sonuc = sayi * 10
print(sonuc)
# Bölme işlemi
sayi = 30
sonuc = sayi / 3
print(sonuc)
# Üs alma işlemi
sayi = 30
sonuc = sayi ** 3
print(sonuc)
# Mod alma işlemi
sayi = 35
sonuc = sayi % 4
print(sonuc)
###Output
10.5
165
440
10.0
27000
3
###Markdown
**Görev:** Aşağıda değer atamaları tamamlanmış olan değişkenleri kullanarak ürünlerin peşin satın alınma bedelini TL olarak hesaplayınız ve ürün adı ile birlikte ekrana yazdırınız! İpucu: Ürün adını ve ürün bedelini tek bir satırda yazdırmak isterseniz ürün bedelini str() fonksiyonu ile düz metin değişken türüne çevirmeniz gerekir.
###Code
urunAdi = "Bisiklet"
urunBedeliAvro = 850
pariteAvroTL = 7
urunAdet = 3
pesinAdetIndirimTL = 500
urunBedeliTl = urunBedeliAvro * pariteAvroTL # yukarıda tanımlanan değişkenler kullanılarak TL'ye çevrildi
print(urunBedeliTl) #tek ürünün TL cinsinden değeri
toplamPesinTl = (urunBedeliTl - pesinAdetIndirimTL) * urunAdet
print(toplamPesinTl) # bisikletin TL cinsinden toplam indirimli bedeli
print((urunAdi) + ":"+ str(toplamPesinTl)) #ürün adıyla birlikte toplam bedel yazıldı
###Output
5950
16350
Bisiklet:16350
###Markdown
Artırılmış Atama OperatörleriArtırılmış atama, bir değişkenin mevcut değerine belirlenen değerin eklenerek ( += ) ya da çıkartılarak ( -= ) atanması işlemidir. **Örnek Uygulama**```sayi = 8sayi += 4 Mevcut değer olan 8'e 4 daha ekler.print(sayi) sayi -= 6 Mevcut değer olan 12'den 6 eksiltir.print("Sayı = " + str(sayi))```
###Code
# Örnek uygulama çalıştır
sayi = 8
sayi += 4
print(sayi)
sayi -= 6
print("Sayı = " + str(sayi))
###Output
_____no_output_____
###Markdown
**Görev:** Artırılmış atama operatörleri kullanarak "sayi" değişkenine 20 ekleyip, 10 çıkartarak değişkenin güncel değerini ekrana yazdırın!
###Code
sayi = 55
sayi+=20
sayi-=10
print("Sayı:" + str(sayi))
###Output
Sayı:65
###Markdown
Boolean OperatörleriBoolean, yalnızca **Doğru (True)** veya **Yanlış (False)** olabilen bir değer türüdür. Eşitlik (==) operatörleri karşılaştırılan iki değişkenin eşit olup olmadığını kontrol eder ve *True* ya da *False* değeri döndürür. **Örnek Uygulama:**```deger1 = 10deger2 = 10esitMi = (deger1 == deger2) Eşit olup olmadıkları kontrol ediliyorprint(esitMi) Değişken "True" olarak dönüyordeger1 = "Python"deger2 = "Piton"esitMi = (deger1 == deger2) Eşit olup olmadıkları kontrol ediliyorprint(esitMi) Değişken "False" olarak dönüyor```
###Code
# Örnek uygulama çalıştır
deger1 = 10
deger2 = 10
esitMi = (deger1 == deger2)
print(esitMi)
deger1 = "Python"
deger2 = "Piton"
esitMi = (deger1 == deger2)
print(esitMi)
###Output
_____no_output_____
###Markdown
**Görev:** Atamaları yapılmış olan değişkenler arasındaki eşitliği kontrol edin ve sonucu ekrana yazıdırın!
###Code
sifre = "Python2020"
sifreTekrar = "Piton2020"
esitMi = (sifre == sifreTekrar)
print(esitMi)
print("Şifreler birbirine eşit değildir.")
###Output
False
Şifreler birbirine eşit değildir.
###Markdown
Karşılaştırma OperatörleriPython'da, >=, <=, >, < vb. dahil olmak üzere birçok operatör bulunmaktadır. Python'daki tüm karşılaştırma operatörleri aynı önceliğe sahiptir. Karşılaştırma sonucunda boole değerleri (*True* ya da *False*) döner. Karşılaştırma operatörleri isteğe bağlı olarak arka arkaya da (zincirlenerek) kullanılabilir. **Örnek Uygulama:**```deger1 = 5deger2 = 7deger3 = 9print(deger1 < deger2 < deger3) Sonuç "True" olarak dönecektir```
###Code
# Örnek uygulama çalıştır
deger1 = 5
deger2 = 7
deger3 = 9
print(deger1 < deger2 < deger3)
###Output
True
###Markdown
**Görev:** Aşağıda değer atamaları tamamlanmış olan değişkenleri kullanarak ürünlerin peşin satın alınma bedelini TL olarak hesaplayın. Toplam satın alma bedeli ile bütçenizi karşılaştırın. Satın alma bedelini ve bütçenizi ekrana yazdırın. Ödeme bütçenizi aşıyorsa ekrana "False", aşmıyorsa "True" yazdırın.
###Code
urunAdi = "Bisiklet"
urunBedeliAvro = 850
kurAvro = 7
urunAdet = 3
pesinAdetIndirimTL = 500
butce = 15000
satinAlmaBedeliTl = (((urunBedeliAvro * kurAvro) - (pesinAdetIndirimTL)) * (urunAdet))
print(satinAlmaBedeliTl)
print(butce)
butceyiAsarMi = satinAlmaBedeliTl < butce
print(butceyiAsarMi)
print("Satın alma bedeli 16350 TL iken bütçe 15000 TL 'dir. Fiyat bütçeyi aşıyor.")
###Output
16350
15000
False
Satın alma bedeli 16350 TL iken bütçe 15000 TL 'dir. Fiyat bütçeyi aşıyor.
###Markdown
Bölüm 03: Metin KatarlarıBu bölümde:* Birbirine bağlama,* Metin katarı çarpımı,* Metin katarı dizinleme,* Metin katarı negatif dizinleme,* Metin katarı dilimleme,* In operatörü,* Metin katarının uzunluğu,* Özel karakterlerden kaçma,* Basit metin katarı metodları,* Metin katarı biçimlendirme yer almaktadır. Birbirine BağlamaBirbirine bağlama artı (+) işlemini kullanarak iki metin katarının birleştirilmesi işlemine denir. **Örnek Uygulama**```deger1 = "Merhaba"deger2 = "Dünya"selamlama = deger1 + " " + deger2print(selamlama) Çıktı: Merhaba Dünya```
###Code
# Örnek uygulamayı çalışıtır
deger1 = "Merhaba"
deger2 = "Dünya"
selamlama = deger1 + " " + deger2
print(selamlama)
###Output
_____no_output_____
###Markdown
**Görev:** *ad*, *soyad* ve *hitap* değişkenlerini tek bir çıktıda birleştirecek kodu yazın!
###Code
hitap = "Öğr. Gör."
ad = "Orçun"
soyad = "Madran"
# Çıktı: Öğr. Gör. Orçun Madran
###Output
_____no_output_____
###Markdown
Metin Katarı ÇarpımıPython, metin katarlarının çarpım sayısı kadar tekrar ettirilmesini desteklemektedir. **Örnek Uygulama**```metin = "Hadi! "metniCarp = metin * 4print(metniCarp) Çıktı: Hadi! Hadi! Hadi! Hadi! ```
###Code
# Örnek uygulamayı çalıştır
metin = "Hadi! "
metniCarp = metin * 4
print(metniCarp)
###Output
_____no_output_____
###Markdown
**Görev:** Sizi sürekli bekleten arkadaşınızı uyarabilmek için istediğiniz sayıda "Hadi!" kelimesini ekrana yazdırın!
###Code
metin = "Hadi! "
# Çıktı: Hadi! Hadi! Hadi! Hadi! ... Hadi!
###Output
_____no_output_____
###Markdown
Metin Katarı DizinlemeKonumu biliniyorsa, bir metin katarındaki ilgili karaktere erişilebilir. Örneğin; str[index] metin katarındaki indeks numarasının karşılık geldiği karakteri geri döndürecektir. İndekslerin her zaman 0'dan başladığı unutulmamalıdır. İndeksler, sağdan saymaya başlamak için negatif sayılar da olabilir. -0, 0 ile aynı olduğundan, negatif indeksler -1 ile başlar. **Örnek Uygulama**```metin = "Python Programlama Dili"print("'h' harfini yakala: " + metin[3]) Çıktı: 'h' harfini yakala: h"```
###Code
# örnek uygulama çalıştır
metin = "Python Programlama Dili"
print("'h' harfini yakala: " + metin[3])
###Output
_____no_output_____
###Markdown
**Görev:** İndeks numarasını kullanarak metin katarındaki ikinci "P" harfini ekrana yazdırın!
###Code
metin = "Python Programlama Dili"
# Çıktı: P
###Output
_____no_output_____
###Markdown
Metin Katarı Negatif DizinlemeMetin katarının sonlarında yer alan bir karaktere daha rahat erişebilmek için indeks numarası negatif bir değer olarak belirlenebilir. **Örnek Uygulama**```metin = "Python Programlama Dili"dHarfi = metin[-4]print(dHarfi) Çıktı: D```
###Code
# Örnek uygulama çalıştır
metin = "Python Programlama Dili"
dHarfi = metin[-4]
print(dHarfi)
###Output
_____no_output_____
###Markdown
**Görev:** Metin katarının sonunda yer alan "i" harfini ekrana yazdırın!
###Code
metin = "Python Programlama Dili"
#Çıktı: i
###Output
_____no_output_____
###Markdown
Metin Katarı DilimlemeDilimleme, bir metin katarından birden çok karakter (bir alt katar oluşturmak) almak için kullanılır. Söz dizimi indeks numarası ile bir karaktere erişmeye benzer, ancak iki nokta üst üste işaretiyle ayrılmış iki indeks numarası kullanılır. Ör: str[ind1:ind2].İki nokta işaretinin solundaki indeks numarası belirlenmezse ilk karakterden itibaren (ilk karakter dahil) seçimin yapılacağı anlamına gelir. Ör: str[:ind2]İki nokta işaretinin sağındaki indeks numarası belirlenmezse son karaktere kadar (son karakter dahil) seçimin yapılacağı anlamına gelir. Ör: str[ind1:] **Örnek Uygulama**```metin = "Python Programlama Dili"dilimle = metin[:6] print(dilimle) Çıktı: Pythonmetin = "Python Programlama Dili" print(metin[7:]) Çıktı: Programlama Dili```
###Code
# Örnek uygulama çalıştır
metin = "Python Programlama Dili"
dilimle = metin[:6]
print(dilimle)
metin = "Python Programlama Dili"
print(metin[7:])
###Output
_____no_output_____
###Markdown
**Görev:** Metin katarını dilimleyerek katarda yer alan üç kelimeyi de ayrı ayrı (alt alta) ekrana yazdırın!
###Code
metin = "Python Programlama Dili"
# Çıktı:
# Python
# Programlama
# Dili
###Output
_____no_output_____
###Markdown
In OperatörüBir metin katarının belirli bir harf ya da bir alt katar içerip içermediğini kontrol etmek için, in anahtar sözcüğü kullanılır. **Örnek Uygulama**```metin = "Python Programlama Dili"print("Programlama" in metin) Çıktı: True``` **Görev:** Metin katarında "Python" kelimesinin geçip geçmediğini kontrol ederek ekrana yazdırın!
###Code
metin = "Python Programlama Dili"
###Output
_____no_output_____
###Markdown
Metin Katarının UzunluğuBir metin katarının kaç karakter içerdiğini saymak için len() yerleşik fonksiyonu kullanılır. **Örnek Uygulama**```metin = "Python programlama dili"print(len(metin)) Çıktı: 23```
###Code
# Örnek uygulamayı çalıştır
metin = "Python programlama dili"
print(len(metin))
###Output
_____no_output_____
###Markdown
**Görev:** Metin katarındaki cümlenin ilk yarısını ekrana yazdırın! Yazılan kod cümlenin uzunluğundan bağımsız olarak cümleyi ikiye bölmelidir.
###Code
metin = "Python programlama dili, dünyada eğitim amacıyla en çok kullanılan programlama dillerinin başında gelir."
# Çıktı: Python programlama dili, dünyada eğitim amacıyla en
###Output
_____no_output_____
###Markdown
Özel Karakterlerden KaçmaMetin katarları içerisinde tek ve çift tırnak kullanımı kimi zaman sorunlara yol açmaktadır. Bu karakterin metin katarları içerisinde kullanılabilmesi için "Ters Eğik Çizgi" ile birlikte kullanılırlar. Örneğin: 'Önümüzdeki ay "Ankara'da Python Eğitimi" gerçekleştirilecek' cümlesindeki tek tırnak kullanımı soruna yol açacağından 'Önümüzdeki ay "Ankara\'da Python Eğitimi" gerçekleştirilecek' şeklinde kullanılmalıdır.**İpucu:** Tek tırnaklı metin katarlarından kaçmak için çift tırnak ya da tam tersi kullanılabilir. **Örnek Uygulama**```metin = 'Önümüzdeki ay "Ankara\'da Python Eğitimi" gerçekleştirilecektir.'print(metin) Çıktı: Önümüzdeki ay "Ankara'da Python Eğitimi" gerçekleştirilecektir.metin = 'Önümüzdeki ay "Ankara'da Python Eğitimi" gerçekleştirilecektir.'print(metin) Çıktı: Geçersiz söz dizimi hatası dönecektir. ```
###Code
# Örnek uygulamayı çalıştır
metin = 'Önümüzdeki ay "Ankara\'da Python Eğitimi" gerçekleştirilecektir.'
print(metin)
# Örnek uygulamadaki hatayı gözlemle
metin = 'Önümüzdeki ay "Ankara'da Python Eğitimi" gerçekleştirilecektir.'
print(metin)
###Output
_____no_output_____
###Markdown
**Görev:** Metin katarındaki cümlede yer alan noktalama işaretlerinden uygun şekilde kaçarak cümleyi ekrana yazdırın!
###Code
metin = 'Bilimsel çalışmalarda "Python" kullanımı Türkiye'de çok yaygınlaştı!'
print(metin)
###Output
_____no_output_____
###Markdown
Basit Metin Katarı MetodlarıPython içinde birçok yerleşik metin katarı fonksiyonu vardır. En çok kullanılan fonksiyonlardan bazıları olarak;* tüm harfleri büyük harfe dönüştüren *upper()*,* tüm harfleri küçük harfe dönüştüren *lower()*,* sadece cümlenin ilk harfini büyük hale getiren *capitalize()* sayılabilir.**İpucu:** Python'daki yerleşik fonksiyonların bir listesini görüntüleyebilmek için metin katarından sonra bir nokta (.) koyulur ve uygun olan fonksiyonlar arayüz tarafından otomatik olarak listelenir. Bu yardımcı işlevi tetiklemek için CTRL + Boşluk tuş kombinasyonu da kullanılabilir. **Örnek Uygulama**```metin = "Python Programlama Dili"print(metin.lower()) Çıktı: python programlama diliprint(metin.upper()) Çıktı: PYTHON PROGRAMLAMA DILIprint(metin.capitalize()) Çıktı: Python programlama dili```
###Code
# Örnek uygulamayı çalıştır
metin = "Python Programlama Dili"
print(metin.lower())
print(metin.upper())
print(metin.capitalize())
###Output
_____no_output_____
###Markdown
**Görev:** *anahtarKelime* ve *arananKelime* değişkenlerinde yer alan metinler karşılaştırıldığında birbirlerine eşit (==) olmalarını sağlayın ve dönen değerin "True" olmasını sağlayın!
###Code
anahtarKelime = "Makine Öğrenmesi"
arananKelime = "makine öğrenmesi"
print(anahtarKelime == arananKelime) # Çıktı: True
###Output
_____no_output_____
###Markdown
Metin Katarı BiçimlendirmeBir metin katarından sonraki % operatörü, bir metin katarını değişkenlerle birleştirmek için kullanılır. % operatörü, bir metin katarındaki %s öğesini, arkasından gelen değişkenle değiştirir. %d sembolü ise, sayısal veya ondalık değerler için yer tutucu olarak kullanılır. **Örnek Uygulama**```adsoyad = "Orçun Madran"dogumTarihi = 1976print("Merhaba, ben %s!" % adsoyad) Çıktı: Merhaba, ben Orçun Madran!print("Ben %d doğumluyum" % dogumTarihi) Ben 1976 doğumluyum.ad = "Orçun"soyad = "Madran"print("Merhaba, ben %s %s!" % (ad, soyad)) Çıktı: Merhaba, ben Orçun Madran!```
###Code
# Örnek uygulamayı çalıştır
adsoyad = "Orçun Madran"
dogumTarihi = 1976
print("Merhaba, ben %s!" % adsoyad)
print("Ben %d doğumluyum" % dogumTarihi)
# Örnek uygulamayı çalıştır
ad = "Orçun"
soyad = "Madran"
print("Merhaba, ben %s %s!" % (ad, soyad))
###Output
_____no_output_____
###Markdown
**Görev:** "Merhaba Orçun Madran, bu dönemki dersiniz 'Programlama Dilleri'. Başarılar!" cümlesini ekrana biçimlendirmeyi kullanarak (artı işaretini kullanmadan) yazdırın!
###Code
ad = "Orçun"
soyad = "Madran"
ders = "Programlama Dilleri"
# Çıktı: Merhaba Orçun Madran, bu dönemki dersiniz "Programlama Dilleri". Başarılar!
###Output
_____no_output_____
###Markdown
Bölüm 04: Veri YapılarBu bölümde:* Listeler,* Liste işlemleri,* Liste öğeleri,* Demetler (Tuples),* Sözlükler,* Sözlük değerleri ve anahtarları,* In anahtar kelimesinin kullanımı yer almaktadır. ListelerListe, birden fazla değeri tek bir değişken adı altında saklamak için kullanabileceğiniz bir veri yapısıdır. Bir liste köşeli parantez arasında virgülle ayrılmış değerler dizisi olarak yazılır. Ör: liste = [deger1, deger2].Listeler farklı türden öğeler içerebilir, ancak genellikle listedeki tüm öğeler aynı türdedir. Metin katarları gibi listeler de dizine eklenebilir ve dilimlenebilir. (Bkz. Bölüm 3). **Örnek Uygulama**```acikListe = ["Açık Bilim", "Açık Erişim", "Açık Veri", "Açık Eğitim", "Açık Kaynak"] acikListe adında yeni bir liste oluştururprint(acikListe) Çıktı: ['Açık Bilim', 'Açık Erişim', 'Açık Veri', 'Açık Eğitim', 'Açık Kaynak']```
###Code
# Örnek uygulamayı çalıştır
acikListe = ["Açık Bilim", "Açık Erişim", "Açık Veri", "Açık Eğitim", "Açık Kaynak"]
print(acikListe)
###Output
_____no_output_____
###Markdown
**Görev 1:** acikListe içinde yer alan 3. liste öğesini ekrana yazıdırın!
###Code
acikListe = ["Açık Bilim", "Açık Erişim", "Açık Veri", "Açık Eğitim", "Açık Kaynak"]
###Output
_____no_output_____
###Markdown
**Görev 2:** acikListe içinde yer alan 4. ve 5. liste öğesini ekrana yazıdırın!
###Code
acikListe = ["Açık Bilim", "Açık Erişim", "Açık Veri", "Açık Eğitim", "Açık Kaynak"]
###Output
_____no_output_____
###Markdown
Liste İşlemleriappend() fonksiyonunu kullanarak ya da artırılmış atama operatörü ( += ) yardımıyla listenin sonuna yeni öğeler (değerler) eklenebilir. Listelerin içindeki öğeler güncellenebilir, yani liste[indeksNo] = yeni_deger kullanarak içeriklerini değiştirmek mümkündür. **Örnek Uygulama**```acikListe = ["Açık Bilim", "Açık Erişim", "Açık Veri", "Açık Eğitim", "Açık Kaynak"] acikListe adında yeni bir liste oluştururprint(acikListe) Çıktı: ['Açık Bilim', 'Açık Erişim', 'Açık Veri', 'Açık Eğitim', 'Açık Kaynak']acikListe += ["Açık Donanım", "Açık İnovasyon"] listeye iki yeni öğe eklerprint(acikListe) Çıktı: ['Açık Bilim', 'Açık Erişim', 'Açık Veri', 'Açık Eğitim', 'Açık Kaynak', 'Açık Donanım', 'Açık İnovasyon']acikListe.append("Açık Veri Gazeteciliği") listeye yeni bir öğe eklerprint(acikListe) Çıktı: ['Açık Bilim', 'Açık Erişim', 'Açık Veri', 'Açık Eğitim', 'Açık Kaynak', 'Açık Donanım', 'Açık İnovasyon', 'Açık Veri Gazeteciliği']acikListe[4] = "Açık Kaynak Kod" listenin 5. öğesini değiştirirprint(acikListe) Çıktı: ['Açık Bilim', 'Açık Erişim', 'Açık Veri', 'Açık Eğitim', 'Açık Kaynak Kod', 'Açık Donanım', 'Açık İnovasyon', 'Açık Veri Gazeteciliği']```
###Code
# Örnek uygulamayı çalıştır
acikListe = ["Açık Bilim", "Açık Erişim", "Açık Veri", "Açık Eğitim", "Açık Kaynak"]
print(acikListe)
acikListe += ["Açık Donanım", "Açık İnovasyon"]
print(acikListe)
acikListe.append("Açık Veri Gazeteciliği")
print(acikListe)
acikListe[4] = "Açık Kaynak Kod"
print(acikListe)
###Output
_____no_output_____
###Markdown
**Görev:** bilgiBilim adlı bir liste oluşturun. Bu listeye bilgi bilim disiplini ile ilgili 3 adet anahtar kelime ya da kavram ekleyin. Bu listeyi ekrana yazdırın. Listeye istediğiniz bir yöntem ile (append(), +=) 2 yeni öğe ekleyin. Ekrana listenin son durumunu yazdırın. Listenizdeki son öğeyi değiştirin. Listenin son halini ekrana yazıdırn.
###Code
#bilgiBilim
###Output
_____no_output_____
###Markdown
Liste Öğeleri Liste öğelerini dilimleme (slice) yaparak da atamak mümkündür. Bu bir listenin boyutunu değiştirebilir veya listeyi tamamen temizleyebilir. **Örnek Uygulama**```acikListe = ["Açık Bilim", "Açık Erişim", "Açık Veri", "Açık Eğitim", "Açık Kaynak"] acikListe adında yeni bir liste oluştururprint(acikListe) Çıktı: ['Açık Bilim', 'Açık Erişim', 'Açık Veri', 'Açık Eğitim', 'Açık Kaynak']acikListe[2:4] = ["Açık İnovasyon"] "Açık Veri" ve "Açık Eğitim" öğelerinin yerine tek bir öğe eklerprint(acikListe) Çıktı: ["Açık Bilim", "Açık Erişim", "Açık İnovasyon", "Açık Kaynak"]acikListe[:2] = [] listenin ilk iki öğesini silerprint(acikListe) Çıktı: ["Açık İnovasyon", "Açık Kaynak"]acikListe[:] = [] listeyi temizler print(acikListe) Çıktı: []```
###Code
# Örnek uygulamayı çalıştır
acikListe = ["Açık Bilim", "Açık Erişim", "Açık Veri", "Açık Eğitim", "Açık Kaynak"]
print(acikListe)
acikListe[2:4] = ["Açık İnovasyon"]
print(acikListe)
acikListe[:2] = []
print(acikListe)
acikListe[:] = []
print(acikListe)
###Output
_____no_output_____
###Markdown
**Görev:** Önceki görevde oluşturulan "bilgiBilim" adlı listenin istediğiniz öğesini silerek listenin güncel halini ekrana yazdırın. Listeyi tamamen temizleyerek listenin güncel halini ekrana yazdırın.
###Code
#bilgiBilim
###Output
_____no_output_____
###Markdown
Demetler (Tuples) Demetler neredeyse listelerle aynı. Demetler ve listeler arasındaki tek önemli fark, demetlerin değiştirilememesidir. Demetlere öğe eklenmez, öğe değiştirilmez veya demetlerden öğe silinemez. Demetler, parantez içine alınmış bir virgül operatörü tarafından oluşturulur. Ör: demet = ("deger1", "deger2", "deger3"). Tek bir öğe demetinde ("d",) gibi bir virgül olmalıdır. **Örnek Uygulama**```ulkeKodlari = ("TR", "US", "EN", "JP")print(ulkeKodlari) Çıktı: ('TR', 'US', 'EN', 'JP')```
###Code
# Örnek uygulamayı çalıştır
ulkeKodlari = ("TR", "US", "EN", "JP")
print(ulkeKodlari)
###Output
_____no_output_____
###Markdown
**Görev:** Kongre Kütüphanesi konu başlıkları listesinin kodlarından oluşan bir demet oluşturun ve ekrana yazdırın! Oluşturulan demet içindeki tek bir öğeyi ekrana yazdırın!
###Code
#konuBasliklari
###Output
_____no_output_____
###Markdown
SözlüklerSözlük, listeye benzer, ancak sözlük içindeki değerlere indeks numarası yerine bir anahtara ile erişilebilir. Bir anahtar herhangi bir metin katarı veya rakam olabilir. Sözlükler ayraç içine alınır. Ör: sozluk = {'anahtar1': "değer1", 'anahtar2': "değer2"}. **Örnek Uygulama**```adresDefteri = {"Hacettepe Üniversitesi": "hacettepe.edu.tr", "ODTÜ": "odtu.edu.tr", "Bilkent Üniversitesi": "bilkent.edu.tr"} yeni bir sözlük oluştururprint(adresDefteri) Çıktı: {'Hacettepe Üniversitesi': 'hacettepe.edu.tr', 'ODTÜ': 'odtu.edu.tr', 'Bilkent Üniversitesi': 'bilkent.edu.tr'}adresDefteri["Ankara Üniversitesi"] = "ankara.edu.tr" sözlüğe yeni bir öğe eklerprint(adresDefteri) Çıktı: {'Hacettepe Üniversitesi': 'hacettepe.edu.tr', 'ODTÜ': 'odtu.edu.tr', 'Bilkent Üniversitesi': 'bilkent.edu.tr', 'Ankara Üniversitesi': 'ankara.edu.tr'}del adresDefteri ["Ankara Üniversitesi"] sözlükten belirtilen öğeyi silerprint(adresDefteri) Çıktı: {'Hacettepe Üniversitesi': 'hacettepe.edu.tr', 'ODTÜ': 'odtu.edu.tr', 'Bilkent Üniversitesi': 'bilkent.edu.tr'}```
###Code
# Örnek uygulamayı çalıştır
adresDefteri = {"Hacettepe Üniversitesi": "hacettepe.edu.tr", "ODTÜ": "odtu.edu.tr", "Bilkent Üniversitesi": "bilkent.edu.tr"}
print(adresDefteri)
adresDefteri["Ankara Üniversitesi"] = "ankara.edu.tr"
print(adresDefteri)
del adresDefteri ["Ankara Üniversitesi"]
print(adresDefteri)
###Output
_____no_output_____
###Markdown
**Görev:** İstediğin herhangi bir konuda 5 öğeye sahip bir sözlük oluştur. Sözlüğü ekrana yazdır. Sözlükteki belirli bir öğeyi ekrana yazdır. Sözlükteki belirli bir öğeyi silerek sözlüğün güncel halini ekrana yazdır!
###Code
#sozluk
###Output
_____no_output_____
###Markdown
Sözlük Değerleri ve AnahtarlarıSözlüklerde values() ve keys() gibi birçok yararlı fonksiyon vardır. Bir sözlük adı ve ardından noktadan sonra çıkan listeyi kullanarak geri kalan fonksiyonlar incelenebilir. **Örnek Uygulama**```adresDefteri = {"Hacettepe Üniversitesi": "hacettepe.edu.tr", "ODTÜ": "odtu.edu.tr", "Bilkent Üniversitesi": "bilkent.edu.tr"} yeni bir sözlük oluştururprint(adresDefteri) Çıktı: {'Hacettepe Üniversitesi': 'hacettepe.edu.tr', 'ODTÜ': 'odtu.edu.tr', 'Bilkent Üniversitesi': 'bilkent.edu.tr'}print(adresDefteri.values()) Çıktı: dict_values(['hacettepe.edu.tr', 'odtu.edu.tr', 'bilkent.edu.tr'])print(adresDefteri.keys()) Çıktı: dict_keys(['Hacettepe Üniversitesi', 'ODTÜ', 'Bilkent Üniversitesi'])```
###Code
# Örnek uygulamayı çalıştır
adresDefteri = {"Hacettepe Üniversitesi": "hacettepe.edu.tr", "ODTÜ": "odtu.edu.tr", "Bilkent Üniversitesi": "bilkent.edu.tr"}
print(adresDefteri)
print(adresDefteri.values())
print(adresDefteri.keys())
###Output
_____no_output_____
###Markdown
**Görev:** İstediğin bir konuda istediğin öğe saysına sahip bir sözlük oluştur. Sözlükler ile ilgili farklı fonksiyoları dene. Sonuçları ekrana yazdır!
###Code
#yeniSozluk
###Output
_____no_output_____
###Markdown
In Anahtar Kelimesi"In" anahtar sözcüğü, bir listenin veya sözlüğün belirli bir öğe içerip içermediğini kontrol etmek için kullanılır. Daha önce metin katarlarındaki kullanıma benzer bir kullanımı vardır. "In" anahtar sözcüğü ile öğe kontrolü yapıldıktan sonra sonuç, öğe listede ya da sözlükte yer alıyorsa *True* yer almıyorsa *False* olarak geri döner.**Dikkat**: Aranan öğe ile liste ya da sözlük içinde yer alan öğelerin karşılaştırılması sırasında büyük-küçük harf duyarlılığı bulunmaktadır. Ör: "Bilgi" ve "bilgi" iki farklı öğe olarak değerlendirilir. **Örnek Uygulama**```bilgiKavramları = ["indeks", "erişim", "koleksiyon"] yeni bir liste oluştururprint("Erişim" in bilgiKavramları) Çıktı: FalsebilgiSozlugu = {"indeks": "index", "erişim": "access", "koleksiyon": "collection"} yeni bir sozluk oluştururprint("koleksiyon" in bilgiSozlugu.keys()) çıktı: True```
###Code
# Örnek uygulamayı çalıştır
bilgiKavramları = ["indeks", "erişim", "koleksiyon"]
print("Erişim" in bilgiKavramları)
bilgiSozlugu = {"indeks": "index", "erişim": "access", "koleksiyon": "collection"}
print("koleksiyon" in bilgiSozlugu.keys())
###Output
_____no_output_____
###Markdown
**Görev:** Bir liste ve bir sözlük oluşturun. Liste içinde istediğiniz kelimeyi aratın ve sonucunu ekrana yazdırın! Oluşturduğunuz sözlüğün içinde hem anahtar kelime (keys()) hem de değer (values()) kontrolü yaptırın ve sonucunu ekrana yazdırın!
###Code
#yeniListe
#yeniSozluk
###Output
_____no_output_____
###Markdown
Bölüm 05: Koşullu İfadelerBu bölümde:* Mantıksal operatörler,* If cümleciği,* Else ve elif kullanımı yer almaktadır. Mantıksal OperatörlerMantıksal operatörler ifadeleri karşılaştırır ve sonuçları *True* ya da *False* değerleriyle döndürür. Python'da üç tane mantıksal operatör bulunur:1. "and" operatörü: Her iki yanındaki ifadeler doğru olduğunda *True* değerini döndürür.2. "or" operatörü: Her iki tarafındaki ifadelerden en az bir ifade doğru olduğunda "True" değerini döndürür.3. "not" operatörü: İfadenin tam tersi olarak değerlendirilmesini sağlar. **Örnek Uygulama**```kullaniciAdi = "orcunmadran"sifre = 123456print(kullaniciAdi == "orcunmadran" and sifre == 123456) Çıktı: TruekullaniciAdi = "orcunmadran"sifre = 123456print(kullaniciAdi == "orcunmadran" and not sifre == 123456) Çıktı: FalsecepTel = "05321234567"ePosta = "[email protected]"print(cepTel == "" or ePosta == "[email protected]" ) Çıktı: True```
###Code
# Örnek uygulamayı çalıştır
kullaniciAdi = "orcunmadran"
sifre = 123456
print(kullaniciAdi == "orcunmadran" and sifre == 123456)
kullaniciAdi = "orcunmadran"
sifre = 123456
print(kullaniciAdi == "orcunmadran" and not sifre == 123456)
cepTel = "05321234567"
ePosta = "[email protected]"
print(cepTel == "" or ePosta == "[email protected]" )
###Output
_____no_output_____
###Markdown
**Görev:** Klavyeden girilen kullanıcı adı ve şifrenin kayıtlı bulunan kullanıcı adı ve şifre ile uyuşup uyuşmadığını kontrol edin ve sonucu ekrana yazdırın!
###Code
#Sistemde yer alan bilgiler:
sisKulAdi = "yonetici"
sisKulSifre = "bby162"
#Klavyeden girilen bilgiler:
girKulAdi = input("Kullanıcı Adı: ")
girKulSifre = input("Şifre: ")
#Kontrol
sonuc = (girKulAdi == sisKulAdi and girKulSifre == sisKulSifre) # girilen bilgiler kayıtlı bilgilerle karşılaştırılıyor
#Sonuç
print(sonuc)
###Output
_____no_output_____
###Markdown
If Cümleciği"If" anahtar sözcüğü, verilen ifadenin doğru olup olmadığını kontrol ettikten sonra belirtilen kodu çalıştıran bir koşullu ifade oluşturmak için kullanılır. Python'da kod bloklarının tanımlanması için girinti kullanır. **Örnek Uygulama**```acikKavramlar = ["bilim", "erişim", "veri", "eğitim"]kavram = input("Bir açık kavramı yazın: ")if kavram in acikKavramlar: print(kavram + " açık kavramlar listesinde yer alıyor!")```
###Code
# Örnek uygulamayı çalıştır
acikKavramlar = ["bilim", "erişim", "veri", "eğitim"]
kavram = input("Bir açık kavramı yazın: ")
if kavram in acikKavramlar:
print(kavram + " açık kavramlar listesinde yer alıyor!")
###Output
_____no_output_____
###Markdown
**Görev:** "acikSozluk" içinde yer alan anahtarları (keys) kullanarak eğer klavyeden girilen anahtar kelime sözlükte varsa açıklamasını ekrana yazdırın!
###Code
acikSozluk = {
"Açık Bilim" : "Bilimsel bilgi kamu malıdır. Bilimsel yayınlara ve verilere açık erişim bir haktır." ,
"Açık Erişim" : "Kamu kaynakları ile yapılan araştırmalar sonucunda üretilen yayınlara ücretsiz erişim" ,
"Açık Veri" : "Kamu kaynakları ile yapılan araştırma sonucunda üretilen verilere ücretsiz ve yeniden kullanılabilir biçimde erişim"
}
anahtar = input("Anahtar Kelime: ")
#If
###Output
_____no_output_____
###Markdown
Else ve Elif Kullanımı"If" cümleciği içinde ikinci bir ifadenin doğruluğunun kontrolü için "Elif" ifadesi kullanılır. Doğruluğu sorgulanan ifadelerden hiçbiri *True* döndürmediği zaman çalışacak olan kod bloğu "Else" altında yer alan kod bloğudur. **Örnek Uygulama**```gunler = ["Pazartesi", "Çarşamba", "Cuma"]girilen = input("Gün giriniz: ")if girilen == gunler[0]: print("Programlama Dilleri")elif girilen == gunler[1]: print("Kataloglama")elif girilen == gunler[2]: print("Bilimsel İletişim")else : print("Kayıtlı bir gün bilgisi girmediniz!")```
###Code
# Örnek uygulamayı çalıştır
gunler = ["Pazartesi", "Çarşamba", "Cuma"]
girilen = input("Gün giriniz: ")
if girilen == gunler[0]:
print("Programlama Dilleri")
elif girilen == gunler[1]:
print("Kataloglama")
elif girilen == gunler[2]:
print("Bilimsel İletişim")
else :
print("Kayıtlı bir gün bilgisi girmediniz!")
###Output
_____no_output_____
###Markdown
**Görev:** Klavyeden girilen yaş bilgisini kullanarak ekrana aşağıdaki mesajları yazdır:* 21 yaş altı ve 64 yaş üstü kişilere: "Sokağa çıkma yasağı bulunmaktadır!"* Diğer tüm kişilere: "Sokağa çıkma yasağı yoktur!"* Klavyeden yaş harici bir bilgi girişi yapıldığında: "Yaşınızı rakam olarak giriniz!"
###Code
yas = int(input("Yaşınızı giriniz: "))
###Output
_____no_output_____
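###Markdown
Aşağıda bu görev için olası bir çözüm taslağı yer almaktadır; yalnızca örnek amaçlıdır, kendi çözümünüz farklı olabilir. Rakam kontrolü için isdigit() kullanıldığından giriş bu taslakta önce metin olarak alınmıştır.
###Code
# Olası bir çözüm taslağı (örnek amaçlıdır)
girilen = input("Yaşınızı giriniz: ")
if not girilen.isdigit():
    print("Yaşınızı rakam olarak giriniz!")
elif int(girilen) < 21 or int(girilen) > 64:
    print("Sokağa çıkma yasağı bulunmaktadır!")
else:
    print("Sokağa çıkma yasağı yoktur!")
###Output
_____no_output_____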
###Markdown
Bölüm 06: DöngülerBu bölümde:* for döngüsü,* Metin katarlarında for döngüsü kullanımı,* while döngüsü,* break anahtar kelimesi,* continue anahtar kelimesi yer almaktadır. for Döngüsüfor döngüleri belirli komut satırını ya da satırlarını yinelemek (tekrar etmek) için kullanılır. Her yinelemede, for döngüsünde tanımlanan değişken listedeki bir sonraki değere otomatik olarak atanacaktır. **Örnek Uygulama**```for i in range(5): i değerine 0-4 arası indeks değerleri otomatik olarak atanır print(i) Çıktı: Bu komut satırı toplam 5 kere tekrarlanır ve her satırda yeni i değeri yazdırılırkonular = ["Açık Bilim", "Açık Erişim", "Açık Veri"] yeni bir liste oluştururfor konu in konular: print(konu) Çıktı: Her bir liste öğesi alt alta satırlara yazdırılır```
###Code
# Örnek uygulmayı çalıştır
for i in range(5):
print(i)
# Örnek uygulmayı çalıştır
konular = ["Açık Bilim", "Açık Erişim", "Açık Veri"]
for konu in konular:
print(konu)
###Output
_____no_output_____
###Markdown
**Görev:** Bir liste oluşturun. Liste öğelerini "for" döngüsü kullanarak ekrana yazdırın!
###Code
#liste
###Output
_____no_output_____
###Markdown
Metin Katarlarında for Döngüsü KullanımıMetin Katarları üzerinde gerçekleştirilebilecek işlemler Python'daki listelerle büyük benzerlik taşırlar. Metin Katarını oluşturan öğeler (harfler) liste elemanları gibi "for" döngüsü yardımıyla ekrana yazdırılabilir. **Örnek Uygulama**```cumle = "Bisiklet hem zihni hem bedeni dinç tutar!"for harf in cumle: Cümledeki her bir harfi ekrana satır satır yazdırır print(harf)```
###Code
# Örnek uygulamayı çalıştır
cumle = "Bisiklet hem zihni, hem bedeni dinç tutar!"
for harf in cumle:
print(harf)
###Output
_____no_output_____
###Markdown
**Görev:** İçinde metin katarı bulunan bir değişken oluşturun. Bu değişkende yer alan her bir harfi bir satıra gelecek şekilde "for" döngüsü ile ekrana yazdırın!
###Code
#degisken
###Output
_____no_output_____
###Markdown
while Döngüsü"While" döngüsü "if" cümleciğinin ifade şekline benzer. Koşul doğruysa döngüye bağlı kod satırı ya da satırları yürütülür (çalıştırılır). Temel fark, koşul doğru (True) olduğu sürece bağlı kod satırı ya da satırlarının çalışmaya devam etmesidir. **Örnek Uygulama**```deger = 1while deger <= 10: print(deger) Bu satır 10 kez tekrarlanacak deger += 1 Bu satır da 10 kez tekrarlanacakprint("Program bitti") Bu satır sadece bir kez çalıştırılacak```
###Code
# Örnek uygulamayı çalıştır
deger = 1
while deger <= 10:
print(deger)
deger += 1
print("Program bitti")
###Output
_____no_output_____
###Markdown
break Anahtar KelimesiAsla bitmeyen döngüye sonsuz döngü adı verilir. Döngü koşulu daima doğru (True) olursa, böyle bir döngü sonsuz olur. "Break" anahtar kelimesi geçerli döngüden çıkmak için kullanılır. **Örnek Uygulama**```sayi = 0while True: bu döngü sonsuz bir döngüdür print(sayi) sayi += 1 if sayi >= 5: break sayı değeri 5 olduğunda döngü otomatik olarak sonlanır```
###Code
# Örnek Uygulamayı çalıştır
sayi = 0
while True:
print(sayi)
sayi += 1
if sayi >= 5:
break
###Output
_____no_output_____
###Markdown
continue Anahtar Kelimesi"continue" anahtar kelimesi, o anda yürütülen döngü için döngü içindeki kodun geri kalanını atlamak ve "for" veya "while" deyimine geri dönmek için kullanılır. ```for i in range(5): if i == 3: continue i değeri 3 olduğu anda altta yer alan "print" komutu atlanıyor. print(i)```
###Code
# Örnek Uygulamayı çalıştır
for i in range(5):
if i == 3:
continue
print(i)
###Output
_____no_output_____
###Markdown
**Görev: Tahmin Oyunu**"while" döngüsü kullanarak bir tahmin oyunu tasarla. Bu tahmin oyununda, önceden belirlenmiş olan kelime ile klavyeden girilen kelime karşılaştırılmalı, tahmin doğru ise oyun "Bildiniz..!" mesajı ile sonlanmalı, yanlış ise tahmin hakkı bir daha verilmeli.
###Code
#Tahmin Oyunu
kelime = "bilgi"
tahmin = ""
###Output
_____no_output_____
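###Markdown
Aşağıda bu görev için olası bir çözüm taslağı yer almaktadır; kelime ve mesajlar örnek amaçlıdır, kendi çözümünüz farklı olabilir.
###Code
# Olası bir çözüm taslağı (örnek amaçlıdır)
kelime = "bilgi"
while True:
    tahmin = input("Tahmininiz: ")
    if tahmin == kelime:
        print("Bildiniz..!")
        break  # doğru tahminde döngü sonlanır
    print("Yanlış tahmin, tekrar deneyin!")
###Output
_____no_output_____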
###Markdown
Bölüm 07: Fonksiyonlar Fonksiyon Tanımlama (Definition)Fonksiyonlar, yazılan kodu faydalı bloklara bölmenin, daha okunabilir hale getirmenin ve tekrar kullanmaya yardımcı olmanın kullanışlı bir yoludur. Fonksiyonlar "def" anahtar sözcüğü ve ardından fonksiyonun adı kullanılarak tanımlanır. **Örnek Uygulama**```def merhaba_dunya(): fonksiyon tanımlama, isimlendirme print("Merhaba Dünya!") fonksiyona dahil kod satırlarıfor i in range(5): merhaba_dunya() fonksiyon 5 kere çağırılacak```
###Code
# Örnek uygulamayı çalıştır
def merhaba_dunya(): # fonksiyon tanımlama, isimlendirme
print("Merhaba Dünya!") #fonksiyona dahil kod satırları
for i in range(5):
merhaba_dunya() # fonksiyon 5 kere çağırılacak
###Output
_____no_output_____
###Markdown
Fonksiyonlarda Parametre KullanımıFonksiyon parametreleri, fonksiyon adından sonra parantez () içinde tanımlanır. Parametre, iletilen bağımsız değişken için değişken adı görevi görür. **Örnek Uygulama**```def foo(x): x bir fonksiyon parametresidir print("x = " + str(x))foo(5) 5 değeri fonksiyona iletilir ve değer olarak kullanılır.```
###Code
# Örnek uygulamayı çalıştır
def foo(x):
print("x = " + str(x))
foo(5)
###Output
_____no_output_____
###Markdown
**Görev:** *karsila* fonksiyonunun tetiklenmesi için gerekli kod ve parametleri ekle!
###Code
def karsila(kAd, kSoyad):
print("Hoşgeldin, %s %s" % (kAd, kSoyad))
###Output
_____no_output_____
###Markdown
Return DeğeriFonksiyonlar, "return" anahtar sözcüğünü kullanarak fonksiyon sonucunda bir değer döndürebilir. Döndürülen değer bir değişkene atanabilir veya sadece örneğin değeri yazdırmak için kullanılabilir. **Örnek Uygulama**```def iki_sayi_topla(a, b): return a + b hesaplama işleminin sonucu değer olarak döndürülüyorprint(iki_sayi_topla(3, 12)) ekrana işlem sonucu yazdırılacak```
###Code
# Örnek uygulamayı çalıştır
def iki_sayi_topla(a, b):
return a + b
print(iki_sayi_topla(3, 12))
###Output
_____no_output_____
###Markdown
Varsayılan ParametrelerBazen bir veya daha fazla fonksiyon parametresi için varsayılan bir değer belirtmek yararlı olabilir. Bu, ihtiyaç duyulan parametrelerden daha az argümanla çağrılabilen bir fonksiyon oluşturur. **Örnek Uygulama**```def iki_sayi_carp(a, b=2): return a * bprint(iki_sayi_carp(3, 47)) verilen iki degeri de kullanır print(iki_sayi_carp(3)) verilmeyen 2. değer yerine varsayılanı kullanır```
###Code
# Örnek uygulamayı çalıştır
def iki_sayi_carp(a, b=2):
return a * b
print(iki_sayi_carp(3, 47))
print(iki_sayi_carp(3))
###Output
_____no_output_____
###Markdown
**Örnek Uygulama: Sayısal Loto**Aşağıda temel yapısı aynı olan iki *sayısal loto* uygulaması bulunmaktadır: Fonksiyonsuz ve fonksiyonlu.İlk sayısal loto uygulamasında herhangi bir fonksiyon kullanımı yoktur. Her satırda 1-49 arası 6 adet sayının yer aldığı 6 satır oluşturur.İkinci sayısal loto uygulamasında ise *tahminEt* isimli bir fonksiyon yer almaktadır. Bu fonksiyon varsayılan parametrelere sahiptir ve bu parametreler fonksiyon çağırılırken değiştirilebilir. Böylece ilk uygulamadan çok daha geniş seçenekler sunabilir hale gelmiştir.
###Code
#Sayısal Loto örnek uygulama (fonksiyonsuz)
from random import randint
i = 0
secilenler = [0,0,0,0,0,0]
for rastgele in secilenler:
while i < len(secilenler):
secilen = randint(1, 49)
if secilen not in secilenler:
secilenler[i] = secilen
i+=1
print(sorted(secilenler))
i=0
#Sayısal Loto örnek uygulama (fonksiyonlu)
from random import randint
def tahminEt(rakam=6, satir=6, baslangic=1, bitis=49):
i = 0
secilenler = []
for liste in range(rakam):
secilenler.append(0)
for olustur in range(satir):
while i < len(secilenler):
secilen = randint(baslangic, bitis)
if secilen not in secilenler:
secilenler[i] = secilen
i+=1
print(sorted(secilenler))
i=0
tahminEt(10,6,1,60)
###Output
_____no_output_____
###Markdown
**Görev:** Bu görev genel olarak fonksiyon bölümünü kapsamaktadır.Daha önce yapmış olduğunuz "Adam Asmaca" projesini (ya da aşağıda yer alan örneği) fonksiyonlar kullanarak oyun bittiğinde tekrar başlatmaya gerek duyulmadan yeniden oynanabilmesine imkan sağlayacak şekilde yeniden kurgulayın.Oyunun farklı sekansları için farklı fonksiyonlar tanımlayarak oyunu daha optimize hale getirmeye çalışın.Aşağıda bir adam asmaca oyununun temel özellikerine sahip bir örnek yer almaktadır.
###Code
#Fonksiyonsuz Adam Asmaca
from random import choice
adamCan = 3
kelimeler = ["bisiklet", "triatlon", "yüzme", "koşu"]
secilenKelime = choice(kelimeler)
print(secilenKelime)
dizilenKelime = []
for diz in secilenKelime:
dizilenKelime.append("_")
print(dizilenKelime)
while adamCan > 0:
girilenHarf = input("Bir harf giriniz: ")
canKontrol = girilenHarf in secilenKelime
if canKontrol == False:
adamCan-=1
i = 0
for kontrol in secilenKelime:
if secilenKelime[i] == girilenHarf:
dizilenKelime[i] = girilenHarf
i+=1
print(dizilenKelime)
print("Kalan can: "+ str(adamCan))
#Fonksiyonlu Adam Asmaca
###Output
_____no_output_____
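###Markdown
Aşağıda bu görev için olası bir çözüm taslağı yer almaktadır: oyun mantığı fonksiyonlara bölünmüş ve oyun bittiğinde yeniden başlatılabilecek şekilde kurgulanmıştır. Fonksiyon adları ve mesajlar örnek amaçlıdır; örnekteki temel sürümden farklı olarak kazanma koşulu da eklenmiştir.
###Code
# Fonksiyonlu Adam Asmaca - olası bir çözüm taslağı (örnek amaçlıdır)
from random import choice

def kelime_sec(kelimeler):
    # Listeden rastgele bir kelime seçer ve gizlenmiş halini hazırlar
    secilen = choice(kelimeler)
    return secilen, ["_"] * len(secilen)

def harf_isle(secilen, dizilen, harf):
    # Girilen harfi kelimede arar; bulunan konumları açar, bulunamazsa False döner
    bulundu = False
    for i in range(len(secilen)):
        if secilen[i] == harf:
            dizilen[i] = harf
            bulundu = True
    return bulundu

def oyna(kelimeler, can=3):
    # Tek bir oyun turunu yönetir
    secilen, dizilen = kelime_sec(kelimeler)
    while can > 0 and "_" in dizilen:
        print(dizilen, "Kalan can:", can)
        harf = input("Bir harf giriniz: ")
        if not harf_isle(secilen, dizilen, harf):
            can -= 1
    print("Kazandınız!" if "_" not in dizilen else "Kaybettiniz! Kelime: " + secilen)

kelimeler = ["bisiklet", "triatlon", "yüzme", "koşu"]
while input("Oynamak ister misiniz? (e/h): ") == "e":
    oyna(kelimeler)
###Output
_____no_output_____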
###Markdown
Bölüm 08: Sınıflar ve NesnelerBu bölümde:* Sınıf ve nesne tanımlama,* Değişkenlere erişim,* self parametresi,* init metodu yer almaktadır. Sınıf ve Nesne TanımlamaBir nesne değişkenleri ve fonksiyonları tek bir varlıkta birleştirir. Nesneler değişkenlerini ve fonksiyonlarını sınıflardan alır. Sınıflar bir anlamda nesnelerinizi oluşturmak için kullanılan şablonlardır. Bir nesneyi, fonksiyonların yanı sıra veri içeren tek bir veri yapısı olarak düşünebilirsiniz. Nesnelerin fonksiyonlarına yöntem (metod) denir.**İpucu:** Sınıf isimlerinin baş harfi büyük yazılarak Python içindeki diğer öğelerden (değişken, fonksiyon vb.) daha rahat ayırt edilmeleri sağlanır. **Örnek Uygulama**```class BenimSinifim: yeni bir sınıfın tanımlanması bsDegisken = 4 sınıf içinde yer alan bir değişken def bsFonksiyon(self): sınıf içinde yer alan bir fonksiyon print("Benim sınıfımın fonksiyonundan Merhaba!")benimNesnem = BenimSinifim()``` Değişkenlere ve Fonksiyonlara ErişimSınıftan örneklenen bir nesnenin içindeki bir değişkene ya da fonksiyona erişmek için öncelikle nesnenin adı daha sonra ise değişkenin ya da fonkiyonun adı çağırılmalıdır (Ör: nesneAdi.degiskenAdi). Bir sınıfın farklı örnekleri (nesneleri) içinde tanımlanan değişkenlerin değerleri değiştirebilir. **Örnek Uygulama 1**```class BenimSinifim: yeni bir sınıf oluşturur bsDegisken = 3 sınıfın içinde bir değişken tanımlar def bsFonksiyon(self): sınıfın içinde bir fonksiyon tanımlar print("Benim sınıfımın fonksiyonundan Merhaba!")benimNesnem = BenimSinifim() sınıftan yeni bir nesne oluştururfor i in range(benimNesnem.bsDegisken): oluşturulan nesne üzerinden değişkene ve fonksiyona ulaşılır benimNesnem.bsFonksiyon()benimNesnem.bsDegisken = 5 sınıfın içinde tanımlanan değişkene yeni değer atanmasıfor i in range(benimNesnem.bsDegisken): benimNesnem.bsFonksiyon()```
###Code
# Örnek uygulama 1'i gözlemleyelim
class BenimSinifim:
bsDegisken = 3
def bsFonksiyon(self):
print("Benim sınıfımın fonksiyonundan Merhaba!")
benimNesnem = BenimSinifim()
for i in range(benimNesnem.bsDegisken):
benimNesnem.bsFonksiyon()
benimNesnem.bsDegisken = 5
for i in range(benimNesnem.bsDegisken):
benimNesnem.bsFonksiyon()
###Output
_____no_output_____
###Markdown
**Örnek Uygulama 2**```class Bisiklet: renk = "Kırmızı" vites = 1 def ozellikler(self): ozellikDetay = "Bu bisiklet %s renkli ve %d viteslidir." % (self.renk, self.vites) return ozellikDetaybisiklet1 = Bisiklet()bisiklet2 = Bisiklet()print("Bisiklet 1: " + bisiklet1.ozellikler())bisiklet2.renk = "Sarı"bisiklet2.vites = 22print("Bisiklet 2: " + bisiklet2.ozellikler())```
###Code
# Örnek uygulama 2'i gözlemleyelim
class Bisiklet:
renk = "Kırmızı"
vites = 1
def ozellikler(self):
ozellikDetay = "Bu bisiklet %s renkli ve %d viteslidir." % (self.renk, self.vites)
return ozellikDetay
bisiklet1 = Bisiklet()
bisiklet2 = Bisiklet()
print("Bisiklet 1: " + bisiklet1.ozellikler())
bisiklet2.renk = "Sarı"
bisiklet2.vites = 22
print("Bisiklet 2: " + bisiklet2.ozellikler())
###Output
_____no_output_____
###Markdown
self Parametresi"self" parametresi bir Python kuralıdır. "self", herhangi bir sınıf yöntemine iletilen ilk parametredir. Python, oluşturulan nesneyi belirtmek için self parametresini kullanır. **Örnek Uygulama**Aşağıdaki örnek uygulamada **Bisiklet** sınıfının değişkenleri olan *renk* ve *bisiklet*, sınıf içindeki fonksiyonda **self** parametresi ile birlikte kullanılmaktadır. Bu kullanım şekli sınıftan oluşturulan nesnelerin tanımlanmış değişkenlere ulaşabilmeleri için gereklidir.```class Bisiklet: renk = "Kırmızı" vites = 1 def ozellikler(self): ozellikDetay = "Bu bisiklet %s renkli ve %d viteslidir." % (self.renk, self.vites) return ozellikDetay```
###Code
# Örnek uygulamada "self" tanımlaması yapılmadığı zaman döndürülen hata kodunu inceleyin
class Bisiklet:
renk = "Kırmızı"
vites = 1
def ozellikler(self):
ozellikDetay = "Bu bisiklet %s renkli ve %d viteslidir." % (renk, vites) #tanımlama eksik
return ozellikDetay
bisiklet1 = Bisiklet()
bisiklet2 = Bisiklet()
print("Bisiklet 1: " + bisiklet1.ozellikler())
bisiklet2.renk = "Sarı"
bisiklet2.vites = 22
print("Bisiklet 2: " + bisiklet2.ozellikler())
###Output
_____no_output_____
###Markdown
__init__ Metodu__init__ fonksiyonu, oluşturduğu nesneleri başlatmak için kullanılır. init "başlat" ın kısaltmasıdır. __init__() her zaman yaratılan nesneye atıfta bulunan en az bir argüman alır: "self". **Örnek Uygulama**Aşağıdaki örnek uygulamada *sporDali* sınıfının içinde tanımlanan **init** fonksiyonu, sınıf oluşturulduğu anda çalışmaya başlamaktadır. Fonksiyonun ayrıca çağırılmasına gerek kalmamıştır.```class sporDali: sporlar = ["Yüzme", "Bisiklet", "Koşu"] def __init__(self): for spor in self.sporlar: print(spor + " bir triatlon branşıdır.")triatlon = sporDali()```
###Code
# Örnek uygulamayı çalıştır
class sporDali:
sporlar = ["Yüzme", "Bisiklet", "Koşu"]
def __init__(self):
for spor in self.sporlar:
print(spor + " bir triatlon branşıdır.")
triatlon = sporDali()
###Output
_____no_output_____
###Markdown
Bölüm 09: Modüller ve Paketler Modülün İçe AktarılmasıPython'daki modüller, Python tanımlarını (sınıflar, fonksiyonlar vb.) ve ifadelerini (değişkenler, listeler, sözlükler vb.) içeren .py uzantısına sahip Python dosyalarıdır.Modüller, *import* anahtar sözcüğü ve uzantı olmadan dosya adı kullanılarak içe aktarılır. Bir modül, çalışan bir Python betiğine ilk kez yüklendiğinde, modüldeki kodun bir kez çalıştırılmasıyla başlatılır. **Örnek Uygulama**```bisiklet.py adlı modülün içeriği"""Bu modül içinde Bisiklet sınıfı yer almaktadır."""class Bisiklet: renk = "Kırmızı" vites = 1 def ozellikler(self): ozellikDetay = "Bu bisiklet %s renkli ve %d viteslidir." % (self.renk, self.vites) return ozellikDetay``````bisikletler.py adlı Python dosyasının içeriğiimport bisikletbisiklet1 = bisiklet.Bisiklet()print("Bisiklet 1: " + bisiklet1.ozellikler())``` **PyCharm Örneği** bisiklet.py---bisikletler.py Colab'de Modülün İçe AktarılmasıBir önceki bölümde (Modülün İçe Aktarılması) herhangi bir kişisel bilgisayarın sabit diski üzerinde çalışırken yerleşik olmayan (kendi yazdığımız) modülün içe aktarılması yer aldı.Bu bölümde ise Colab üzerinde çalışırken yerleşik olmayan bir modülü nasıl içe aktarılacağı yer almakta. **Örnek Uygulama**Aşağıda içeriği görüntülenen *bisiklet.py* adlı Python dosyası Google Drive içerisinde "BBY162_Python_a_Giris.ipynb" dosyasının ile aynı klasör içinde bulunmaktadır.```bisiklet.py adlı modülün içeriği"""Bu modül içinde Bisiklet sınıfı yer almaktadır."""class Bisiklet: renk = "Kırmızı" vites = 1 def ozellikler(self): ozellikDetay = "Bu bisiklet %s renkli ve %d viteslidir." % (self.renk, self.vites) return ozellikDetay```
###Code
# Mount Google Drive as a disk
from google.colab import drive
drive.mount('gdrive') # the mounted drive is referred to by the name 'gdrive'
import sys # find the physical path of the mounted drive and add it to the module search path
sys.path.append('/content/gdrive/My Drive/Colab Notebooks/BBY162 - Programlama ve Algoritmalar/')
import bisiklet # import the bisiklet module (bisiklet.py), which contains the Bisiklet class
bisiklet1 = bisiklet.Bisiklet()
print("Bisiklet 1: " + bisiklet1.ozellikler())
###Output
_____no_output_____
###Markdown
Built-in Modules: Python comes with the standard modules listed at the link below. It is enough to call these modules with the *import* keyword; there is no need to install them separately. [Python Standard Modules](https://docs.python.org/3/library/)

**Example Application**
```
import datetime
print(datetime.datetime.today())
```
###Code
# Run the example application
import datetime
print(datetime.datetime.today())
###Output
_____no_output_____
###Markdown
Using from ... import: Another way to use the import statement is with the *from* keyword. With the *from* statement, names are taken from inside the module/package and made ready for direct use. This way, what is imported is used directly, without the module_name prefix.

**Example Application**
```
# contents of the module named bisiklet.py
"""Bu modül içinde Bisiklet sınıfı yer almaktadır."""
class Bisiklet:
    renk = "Kırmızı"
    vites = 1
    def ozellikler(self):
        ozellikDetay = "Bu bisiklet %s renkli ve %d viteslidir." % (self.renk, self.vites)
        return ozellikDetay
```
###Code
# Mount Google Drive as a disk
from google.colab import drive
drive.mount('gdrive') # the mounted drive is referred to by the name 'gdrive'
import sys # find the physical path of the mounted drive and add it to the module search path
sys.path.append('/content/gdrive/My Drive/Colab Notebooks/BBY162 - Programlama ve Algoritmalar/')
from bisiklet import Bisiklet # import the Bisiklet class from bisiklet.py
bisiklet1 = Bisiklet() # no need for the bisiklet. module prefix anymore
print("Bisiklet 1: " + bisiklet1.ozellikler())
###Output
_____no_output_____
###Markdown
Chapter 10: File Operations. Reading Files: Python has a number of built-in functions for reading information from and writing information to a file on your computer. The **open** function is used to open a file. A file can be opened in read mode (using "r" as the second argument) or in write mode (using "w" as the second argument). The **open** function returns a file object. The file must be closed for its contents to be saved properly.

**Example Application**
```
# Google Drive connection
from google.colab import drive
drive.mount('/gdrive')

dosya = "/gdrive/My Drive/Colab Notebooks/BBY162 - Programlama ve Algoritmalar/metin.txt"
f = open(dosya, "r")
for line in f.readlines():
    print(line)
f.close()
```
For the file to be read correctly, the Google Drive connection must be established and the full path of the file to be read must be specified.
###Code
# Google Drive connection
from google.colab import drive
drive.mount('/gdrive')
dosya = "/gdrive/My Drive/Colab Notebooks/BBY162 - Programlama ve Algoritmalar/metin.txt"
f = open(dosya, "r")
for line in f.readlines():
print(line)
f.close()
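# Alternative shown for illustration (not part of the original lesson): a "with" block
# closes the file automatically, even if an error occurs while reading
with open(dosya, "r") as f2:
    for line in f2.readlines():
        print(line)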
###Output
_____no_output_____
###Markdown
Writing Files: If you open a file using "w" (write) as the second argument, a new empty file is created. Note that if another file with the same name already exists, it will be deleted. If you want to append content to an existing file, you should use the "a" (append) modifier. **Example Application** In the example below, because the file is opened with the 'w' parameter, the existing contents of the file are erased and the new data is written. If you want to keep the data already in the file and add new data, the file should be opened with the 'a' parameter.
```
# Google Drive connection
from google.colab import drive
drive.mount('/gdrive')

dosya = "/gdrive/My Drive/Colab Notebooks/BBY162 - Programlama ve Algoritmalar/cikti.txt"
f = open(dosya, 'w')  # to append to the existing data, use the parameter 'a' instead
f.write("test")       # to write each new entry on a new line, use "test\n"
f.close()
```
After the code runs, if a file named *cikti.txt* does not exist, it is created automatically and the desired content is written.
###Code
# Google Drive connection
from google.colab import drive
drive.mount('/gdrive')
dosya = "/gdrive/My Drive/Colab Notebooks/BBY162 - Programlama ve Algoritmalar/cikti.txt"
f = open(dosya, 'w') # to append to the existing data, use the parameter 'a' instead
f.write("test") # to write each new entry on a new line, use "test\n"
f.close()
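# Illustrative follow-up (not in the original notebook): reopen the same file in
# append mode ('a') so the existing content is kept and new data is added at the end
f = open(dosya, 'a')
f.write("\ntest 2")
f.close()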
###Output
_____no_output_____ |
src/01_variaveis_tipos_estruturas/05_dicionarios.ipynb | ###Markdown
Dictionaries
###Code
# This is a list
estudantes_lst = ['Mateus', 24, 'Fernanda', 22, 'Tamires', 26, 'Cristiano', 25]
# Print the list of students
print(estudantes_lst)
# Create a dictionary; the difference is subtle
estudantes_dic = {'Mateus' : 24, 'Fernanda' : 22, 'Tamires' : 26, 'Cristiano' : 25}
# Print the dictionary
print(estudantes_dic)
# Now we can use the keys as indices to retrieve the values
estudantes_dic['Mateus']
# Change a value according to a key
# The key Pedro does not exist in the dictionary, but Python will create a new one
estudantes_dic['Pedro'] = 24
# Print the dictionary
print(estudantes_dic)
# Clear the data in a dictionary
estudantes_dic.clear()
# Check that it was cleared
print(estudantes_dic)
# Delete the dictionary
del estudantes_dic
# Let's recreate the dictionary
dic = { 'Mateus' : 24, 'Fernanda' : 22, 'Tamires' : 26, 'Cristiano' : 25 }
# Print the dictionary
print(dic)
# Check the size of the dictionary
# The size will be 4, because each key-value pair counts as one item
len(dic)
# Extract only the keys
dic.keys()
# And we can also extract only the values
dic.values()
# Return the data as items
dic.items()
# Let's create another dictionary
estudantes2 = { 'Erika' : 28, 'Maria' : 26, 'Milton' : 27 }
# Update the dic dictionary, bringing in the data from the estudantes2 dictionary
dic.update(estudantes2)
# Check the update
dic
# Create an empty dictionary
dic1 = {}
dic1
# Add items to the dictionary
dic1['key_one'] = 1
print(dic1)
# Add keys and values as numbers
dic1[10] = 5
print(dic1)
dic1[8.2] = 'Python'
print(dic1)
# Create one more dictionary
dc = {}
dc['teste'] = 10
dc['key'] = 'teste'
# In the print below, note that the word teste is the key of one item and the value of another item
# which is not a good practice
print(dc)
# Create another dictionary
dc2 = {}
dc2['key1'] = 'Big Data'
dc2['key2'] = 10
dc2['key3'] = 5.6
dc2
# Assign the values to variables
a = dc2['key1']
b = dc2['key2']
c = dc2['key3']
# Print the variables
print(a)
print(b)
print(c)
###Output
Big Data
10
5.6
###Markdown
Dictionary of Lists
###Code
dc3 = { 'key1' : 1230, 'key2' : [22, 243, 73, 4], 'key3' : ['leite', 'maça', 'batata'] }
print(dc3)
# Print the value of key2
print(dc3['key2'])
# Access a value inside the list stored as a dictionary value, and also convert it to uppercase
dc3['key3'][0].upper()
# Operations with list items inside the dictionary
var1 = dc3['key2'][0] - 2
# Print
print(var1)
# Another option is to use compound assignment operators to assign a new value
dc3['key2'][0] -= 2
# Check that the value decreased by 2
print(dc3)
###Output
{'key1': 1230, 'key2': [18, 243, 73, 4], 'key3': ['leite', 'maça', 'batata']}
###Markdown
Creating Nested Dictionaries
###Code
dic_anin = { 'key1' : {'key2_aninhada' : {'key3_aninhado' : 'dicionário aninhado em Python'}} }
print(dic_anin)
# Retrieve the value stored under the 'key3_aninhado' key
dic_anin['key1']['key2_aninhada']['key3_aninhado']
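# Complementary example (not in the original notebook): get() returns None (or a
# given default) instead of raising a KeyError when a key does not exist
print(dic_anin.get('key1'))
print(dic_anin.get('key9'))
print(dic_anin.get('key9', 'missing key'))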
###Output
_____no_output_____ |
ipynb/bac_genome/Ecoli/.ipynb_checkpoints/Dataset-checkpoint.ipynb | ###Markdown
Description:* Getting the needed dataset Setting variables
###Code
workDir = '/home/nick/notebook/SIPSim/dev/Ecoli/'
SIPSimExe = '/home/nick/notebook/SIPSim/SIPSim'
###Output
_____no_output_____
###Markdown
Init
###Code
import os,sys
import numpy as np
import pandas as pd
from ggplot import *
import matplotlib.pyplot as plt
%load_ext rpy2.ipython
%matplotlib inline
if not os.path.isdir(workDir):
os.mkdir(workDir)
genomeDir = os.path.join(workDir, 'genomes')
if not os.path.isdir(genomeDir):
os.mkdir(genomeDir)
###Output
_____no_output_____
###Markdown
Downloading genome
###Code
!cd $genomeDir; \
seqDB_tools accession-GI2fasta < ../accession.txt > Ecoli_O157H7.fna
###Output
Starting batch: 1
Starting trial: 1
--------------------- WARNING ---------------------
MSG: No whitespace allowed in FASTA ID [AE005174|Escherichia coli O157:H7 EDL933, complete genome.]
---------------------------------------------------
--------------------- WARNING ---------------------
MSG: No whitespace allowed in FASTA ID [AE005174|Escherichia coli O157:H7 EDL933, complete genome.]
---------------------------------------------------
###Markdown
Genome info
###Code
!cd $genomeDir; \
seq_tools fasta_info --tl --tgc --header Ecoli_O157H7.fna
###Output
total_seq_length total_GC
5528445 50.38
###Markdown
Indexing genome
###Code
# list of all genomes files and their associated names
!cd $genomeDir; \
find . -name "*fna" | \
perl -pe 's/.+\///' | \
perl -pe 's/(.+)(\.[^.]+)/\$1\t\$1\$2/' > genome_index.txt
!cd $genomeDir; \
$SIPSimExe indexGenomes genome_index.txt --fp .
###Output
Indexing: "Ecoli_O157H7"
0
0: 1.81%, 0:00:00.885690
0: 3.62%, 0:00:01.596740
0: 5.43%, 0:00:02.329085
|
Regression/Linear Models/PassiveAggressiveRegressor_StandardScaler_QuantileTransformer.ipynb | ###Markdown
PassiveAggressiveRegressor with StandardScaler & Quantile Transformer This code template is for regression analysis using a Passive-Aggressive regressor, with StandardScaler as the feature rescaling technique and QuantileTransformer as the feature transformation technique, combined in a pipeline. Required Packages
###Code
import warnings as wr
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler,QuantileTransformer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import PassiveAggressiveRegressor
from sklearn.metrics import mean_squared_error, r2_score,mean_absolute_error
wr.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
InitializationFilepath of CSV file
###Code
#filepath
file_path= ""
###Output
_____no_output_____
###Markdown
List of features which are required for model training .
###Code
#x_values
features=[]
###Output
_____no_output_____
###Markdown
Target feature for prediction.
###Code
#y_value
target=''
###Output
_____no_output_____
###Markdown
Data FetchingPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry.
###Code
df=pd.read_csv(file_path) #reading file
df.head()#displaying initial entries
print('Number of rows are :',df.shape[0], ',and number of columns are :',df.shape[1])
df.columns.tolist()
###Output
_____no_output_____
###Markdown
Data PreprocessingSince most of the machine learning models in the sklearn library don't handle string/categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values, if any exist, and convert string-category data in the dataset by encoding it into dummy (one-hot) columns.
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
###Output
_____no_output_____
###Markdown
Correlation MapIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
###Code
plt.figure(figsize = (15, 10))
corr = df.corr()
mask = np.triu(np.ones_like(corr, dtype = bool))
sns.heatmap(corr, mask = mask, linewidths = 1, annot = True, fmt = ".2f")
plt.show()
correlation = df[df.columns[1:]].corr()[target][:]
correlation
###Output
_____no_output_____
###Markdown
Feature SelectionIt is the process of reducing the number of input variables when developing a predictive model, used both to lower the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X and the target/outcome to Y.
###Code
#spliting data into X(features) and Y(Target)
X=df[features]
Y=df[target]
###Output
_____no_output_____
###Markdown
Calling preprocessing functions on the feature and target set.
###Code
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
###Output
_____no_output_____
###Markdown
Data SplittingThe train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
###Code
# we can choose random_state and test_size as per our requirement
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 1) #performing datasplitting
###Output
_____no_output_____
###Markdown
Data Scaling**Used StandardScaler*** Standardize features by removing the mean and scaling to unit variance The standard score of a sample x is calculated as:z = (x - u) / s* Where u is the mean of the training samples or zero if with_mean=False, and s is the standard deviation of the training samples or one if with_std=False. Feature Transformation**QuantileTransformer :**This method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers: this is therefore a robust preprocessing scheme.More about QuantileTransformer module https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.QuantileTransformer.html Model**passive-aggressive regressor**The passive-aggressive algorithms are a family of algorithms for large-scale learning. They are similar to the Perceptron in that they do not require a learning rate. However, contrary to the Perceptron, they include a regularization parameter C* **C ->** Maximum step size (regularization). Defaults to 1.0.* **max_iter ->** The maximum number of passes over the training data (aka epochs). It only impacts the behavior in the fit method, and not the partial_fit method.* **tol->** The stopping criterion. If it is not None, the iterations will stop when (loss > previous_loss - tol).* **early_stopping->** Whether to use early stopping to terminate training when validation. score is not improving. If set to True, it will automatically set aside a fraction of training data as validation and terminate training when validation score is not improving by at least tol for n_iter_no_change consecutive epochs.* **validation_fraction->** The proportion of training data to set aside as validation set for early stopping. Must be between 0 and 1. Only used if early_stopping is True.* **n_iter_no_change->** Number of iterations with no improvement to wait before early stopping.* **shuffle->** Whether or not the training data should be shuffled after each epoch.* **loss->** The loss function to be used: epsilon_insensitive: equivalent to PA-I in the reference paper. squared_epsilon_insensitive: equivalent to PA-II in the reference paper.* **epsilon->** If the difference between the current prediction and the correct label is below this threshold, the model is not updated.
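As a sketch of how the hyperparameters described above could be spelled out explicitly (the values shown are the library defaults plus the same random_state used in the next cell, not values tuned for any particular dataset), the pipeline could equivalently be written as:
```
model = make_pipeline(
    StandardScaler(),
    QuantileTransformer(output_distribution='uniform'),
    PassiveAggressiveRegressor(C=1.0, max_iter=1000, tol=1e-3,
                               early_stopping=False, validation_fraction=0.1,
                               n_iter_no_change=5, shuffle=True,
                               loss='epsilon_insensitive', epsilon=0.1,
                               random_state=1))
```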
###Code
#training the PassiveAggressiveRegressor
model = make_pipeline(StandardScaler(),QuantileTransformer(),PassiveAggressiveRegressor(random_state=1))
model.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
Model AccuracyFor a regressor, the score() method returns the coefficient of determination R² of the prediction on the given test data and labels (rather than classification accuracy), so the percentage printed below is the R² score expressed as a percentage.
###Code
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))
#prediction on testing set
prediction=model.predict(X_test)
###Output
_____no_output_____
###Markdown
Model evaluation**r2_score:** The r2_score function computes the fraction of variability in the target explained by our model.**MAE:** The mean absolute error function calculates the total error as the average absolute distance between the real data and the predicted data.**MSE:** The mean squared error function squares the errors, which penalizes the model more heavily for large errors.
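For reference, with $y_i$ the true values, $\hat{y}_i$ the predictions, and $\bar{y}$ the mean of the true values, the three metrics are:

$$\mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\lvert y_i-\hat{y}_i\rvert,\qquad \mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2,\qquad R^2=1-\frac{\sum_{i}(y_i-\hat{y}_i)^2}{\sum_{i}(y_i-\bar{y})^2}$$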
###Code
print('Mean Absolute Error:', mean_absolute_error(y_test, prediction))
print('Mean Squared Error:', mean_squared_error(y_test, prediction))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, prediction)))
print("R-squared score : ",r2_score(y_test,prediction))
#ploting actual and predicted
red = plt.scatter(np.arange(0,80,5),prediction[0:80:5],color = "red")
green = plt.scatter(np.arange(0,80,5),y_test[0:80:5],color = "green")
plt.title("Comparison of Regression Algorithms")
plt.xlabel("Index of Candidate")
plt.ylabel("target")
plt.legend((red,green),('PassiveAggressiveRegressor', 'REAL'))
plt.show()
###Output
_____no_output_____
###Markdown
Prediction PlotWe plot the actual test observations and the model's predictions for the same records against the record index, so the predicted values can be compared visually with the true values of the target.
###Code
plt.figure(figsize=(10,6))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(X_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
###Output
_____no_output_____ |
Next17 Classifying Manhattan with ML Engine.ipynb | ###Markdown
Classifying Manhattan with BigQuery and TensorFlow Clear all Cells Importing the training data from BigQuery
###Code
%%sql -d standard
SELECT
timestamp,
borough,
latitude,
longitude
FROM
`bigquery-public-data.new_york.nypd_mv_collisions`
ORDER BY
timestamp DESC
LIMIT
15
###Output
_____no_output_____
###Markdown
Preprocess the training data on BigQuery
###Code
%%sql --module nyc_collisions
SELECT
IF(borough = 'MANHATTAN', 1, 0) AS is_mt,
latitude,
longitude
FROM
`bigquery-public-data.new_york.nypd_mv_collisions`
WHERE
LENGTH(borough) > 0
AND latitude IS NOT NULL AND latitude != 0.0
AND longitude IS NOT NULL AND longitude != 0.0
AND borough != 'BRONX'
ORDER BY
RAND()
LIMIT
10000
###Output
_____no_output_____
###Markdown
Import the BigQuery SQL result as NumPy array
###Code
import datalab.bigquery as bq
nyc_cols = bq.Query(nyc_collisions).to_dataframe(dialect='standard').as_matrix()
import numpy as np
is_mt = nyc_cols[:,0].astype(np.int32)
latlng = nyc_cols[:,1:3].astype(np.float32)
print("Is Manhattan: " + str(is_mt))
print("\nLat/Lng: \n\n" + str(latlng))
print("\nLoaded " + str(is_mt.size) + " rows.")
###Output
_____no_output_____
###Markdown
Feature scaling and plotting
###Code
# standardization
from sklearn.preprocessing import StandardScaler
latlng_std = StandardScaler().fit_transform(latlng)
# plotting
import matplotlib.pyplot as plt
lat = latlng_std[:,0]
lng = latlng_std[:,1]
plt.scatter(lng[is_mt == 1], lat[is_mt == 1], c='b') # plot points in Manhattan in blue
plt.scatter(lng[is_mt == 0], lat[is_mt == 0], c='y') # plot points outside Manhattan in yellow
plt.show()
###Output
_____no_output_____
###Markdown
Split the data into "Training Data" and "Test Data"
###Code
# 8,000 pairs for training
latlng_train = latlng_std[0:8000]
is_mt_train = is_mt[0:8000]
# 2,000 pairs for test
latlng_test = latlng_std[8000:10000]
is_mt_test = is_mt[8000:10000]
###Output
_____no_output_____
###Markdown
Define a neural network
###Code
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR) # suppress warning messages
# define two feature columns with real values
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=2)]
# create a neural network
dnnc = tf.contrib.learn.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[20, 20, 20, 20],
n_classes=2)
dnnc
###Output
_____no_output_____
###Markdown
Check the accuracy of the neural network
###Code
# plot a predicted map of Manhattan
def plot_predicted_map(classifier):
is_mt_pred = classifier.predict(latlng_std, as_iterable=False) # an array of prediction results
plt.scatter(lng[is_mt_pred == 1], lat[is_mt_pred == 1], c='b')
plt.scatter(lng[is_mt_pred == 0], lat[is_mt_pred == 0], c='y')
plt.show()
# print the accuracy of the neural network
def print_accuracy(classifier):
accuracy = classifier.evaluate(x=latlng_test, y=is_mt_test)["accuracy"]
print('Accuracy: {:.2%}'.format(accuracy))
# train the model just for 1 step and print the accuracy
dnnc.fit(x=latlng_train, y=is_mt_train, steps=1)
plot_predicted_map(dnnc)
print_accuracy(dnnc)
###Output
_____no_output_____
###Markdown
Train the neural network
###Code
steps = 20
for i in range (1, 6):
dnnc.fit(x=latlng_train, y=is_mt_train, steps=steps)
plot_predicted_map(dnnc)
print('Steps: ' + str(i * steps))
print('\nTraining Finished.')
print_accuracy(dnnc)
###Output
_____no_output_____ |
exercises/data_wrangling_json/sliderule_dsi_json_exercise.ipynb | ###Markdown
JSON examples and exercise****+ get familiar with packages for dealing with JSON+ study examples with JSON strings and files + work on exercise to be completed and submitted ****+ reference: http://pandas.pydata.org/pandas-docs/stable/io.htmlio-json-reader+ data source: http://jsonstudio.com/resources/****
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
imports for Python, Pandas
###Code
import json
from pandas.io.json import json_normalize
###Output
_____no_output_____
###Markdown
JSON example, with string+ demonstrates creation of normalized dataframes (tables) from nested json string+ source: http://pandas.pydata.org/pandas-docs/stable/io.htmlnormalization
###Code
# define json string
data = [{'state': 'Florida',
'shortname': 'FL',
'info': {'governor': 'Rick Scott'},
'counties': [{'name': 'Dade', 'population': 12345},
{'name': 'Broward', 'population': 40000},
{'name': 'Palm Beach', 'population': 60000}]},
{'state': 'Ohio',
'shortname': 'OH',
'info': {'governor': 'John Kasich'},
'counties': [{'name': 'Summit', 'population': 1234},
{'name': 'Cuyahoga', 'population': 1337}]}]
# use normalization to create tables from nested element
json_normalize(data, 'counties')
# further populate tables created from nested element
json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])
###Output
_____no_output_____
###Markdown
**** JSON example, with file+ demonstrates reading in a json file as a string and as a table+ uses small sample file containing data about projects funded by the World Bank + data source: http://jsonstudio.com/resources/
###Code
# load json as string
json.load((open('data/world_bank_projects_less.json')))
# load as Pandas dataframe
sample_json_df = pd.read_json('data/world_bank_projects_less.json')
sample_json_df
###Output
_____no_output_____
###Markdown
**** JSON exerciseUsing data in file 'data/world_bank_projects.json' and the techniques demonstrated above,1. Find the 10 countries with most projects2. Find the top 10 major project themes (using column 'mjtheme_namecode')3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
###Code
bank = pd.read_json('data/world_bank_projects.json')
bank.head()
###Output
_____no_output_____
###Markdown
1. Find the 10 countries with most projects
###Code
bank.countryname.value_counts().head(10)
###Output
_____no_output_____
###Markdown
2. Find the top 10 major project themes (using column 'mjtheme_namecode')
###Code
names = []
for i in bank.index:
namecode = bank.loc[i,'mjtheme_namecode']
names.extend(list(json_normalize(namecode)['name']))
pd.Series(names).value_counts().head(10).drop('', axis=0)
###Output
_____no_output_____
###Markdown
3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
###Code
codes_names = pd.DataFrame(columns=['code', 'name'])
for i in bank.index:
namecode = bank.loc[i,'mjtheme_namecode']
codes_names = pd.concat([codes_names, json_normalize(namecode)])
codes_names_dict = (codes_names[codes_names.name != '']
.drop_duplicates()
.to_dict())
for i in bank.index:
namecode = bank.loc[i,'mjtheme_namecode']
cell = json_normalize(namecode).replace('', np.nan)
cell = cell.fillna(codes_names_dict)
bank.set_value(i, 'mjtheme_namecode', cell.to_dict(orient='record'))
###Output
_____no_output_____ |
ICPs/ICP4/L3_Differential_Privacy_(Exercise).ipynb | ###Markdown
Toy Differential Privacy - Simple Database Queries In the context of Deep Learning, Differential Privacy ensures that the DL algorithm learns only what it is supposed to learn from the data while ignoring what it is not supposed to learn from the data. This is called Differential Privacy (DP). But what is DP?**Differential Privacy (DP):** The main goal is to ensure that statistical analysis techniques do not compromise the privacy of any particular individual in the dataset. * Classical privacy definition: privacy is preserved if, after the analysis is done, the analyzer does not know anything about the people in the dataset. (No information leaks from the data.)- The Netflix Prize (anonymized simply! Attacked using the Linkage Attack)- Anonymization is not enough to protect our privacy. A modern definition by Cynthia Dwork: DP describes a promise made by the data holder to a data subject that: "you will not be affected by allowing your data to be used in any study or analysis, no matter what other studies, datasets, or information sources are available".**DP** is a mathematical definition of privacy. In the simplest setting, consider an algorithm that analyzes a dataset and computes statistics about it (such as the data's mean, variance, median, mode, etc.). Such an algorithm is said to be differentially private if, by looking at the output, one cannot tell whether any individual's data was included in the original dataset or not. ---In this section we're going to play around with Differential Privacy in the context of a database query. The database is going to be a VERY simple database with only one boolean vector. Each row corresponds to a person. Each value corresponds to whether or not that person has a certain private attribute (such as whether they have a certain disease, or whether they are above/below a certain age). We are then going to learn how to know whether a database query over such a small database is differentially private or not - and more importantly - what techniques are at our disposal to ensure various levels of privacy. First We Create a Simple DatabaseStep one is to create our database - we're going to do this by initializing a random list of 1s and 0s (which are the entries in our database). Note - the number of entries directly corresponds to the number of people in our database.
###Code
import torch
# the number of entries in our database
num_entries = 5000
db = torch.rand(num_entries) > 0.5 # returns a new Boolean tensor with each value > 0.5 set to 1 otherwise set to 0
db
###Output
_____no_output_____
###Markdown
Project: Generate Parallel DatabasesKey to the definition of differential privacy is the ability to ask the question "When querying a database, if I removed someone from the database, would the output of the query be any different?". Thus, in order to check this, we must construct what we term "parallel databases" which are simply databases with one entry removed. In this first project, I want you to create a list of every parallel database to the one currently contained in the "db" variable. Then, I want you to create a function which both:- creates the initial database (db)- creates all parallel databases
###Code
import torch
def create_db(entries):
return torch.rand(entries) > 0.5
db = create_db(5000)
db
# create a function to create paraller db
def get_parallel_dbs(db):
parallel_dbs = list()
for i in range(len(db)):
pdb = torch.cat((db[0:i], db[i+1:]))
parallel_dbs.append(pdb)
#print(f'A databse of size {db.shape} was created')
return parallel_dbs
pdb = get_parallel_dbs(db)
# a helper function for the future
def get_db_and_parallel(num_entries):
db = torch.rand(num_entries) > 0.5
pdb = get_parallel_dbs(db)
return db, pdb
###Output
_____no_output_____
###Markdown
Lesson: Towards Evaluating The Differential Privacy of a FunctionIntuitively, we want to be able to query our database and evaluate whether or not the result of the query is leaking "private" information. As mentioned previously, this is about evaluating whether the output of a query changes when we remove someone from the database. Specifically, we want to evaluate the *maximum* amount the query changes when someone is removed (maximum over all possible people who could be removed). So, in order to evaluate how much privacy is leaked, we're going to iterate over each person in the database and measure the difference in the output of the query relative to when we query the entire database. Just for the sake of argument, let's make our first "database query" a simple sum. Aka, we're going to count the number of 1s in the database.
###Code
sensitivity = 0 # maximum difference between query(db) and each query(pdb)
db, pdb = get_db_and_parallel(5000)
print(db.shape)
print(pdb[0].shape)
# return the value of sensitivity
# loop on each pdb in the pdbs and calculate its distance
max_diff = 0
total = torch.sum(db)
print(f'Total:{total}')
for element in pdb:
pdb_total = torch.sum(element)
diff = total - pdb_total
max_diff = diff if diff > max_diff else max_diff
print(f"Maximum diff: {max_diff}")
# find the maximum distance <-- that's your sensitivity
sensitivity = max_diff
sensitivity
###Output
_____no_output_____
###Markdown
Exercise - Evaluating the Privacy of a FunctionIn the last section, we measured the difference between each parallel db's query result and the query result for the entire database and then calculated the max value (which was 1). This value is called "sensitivity", and it corresponds to the function we chose for the query. Namely, the "sum" query will always have a sensitivity of exactly 1. However, we can also calculate sensitivity for other functions as well.First, create a function that will accept (1) a query function and (2) a num_items to generate the db and the pdb: **Hint:** def sensitivity(query, num_items)
###Code
# define a function to calculate the sensitivity of a query
def sensitivity(query, num_items):
    # generate the db and the pdbs
    db, pdbs = get_db_and_parallel(num_items)
    # query the full db
    db_query = query(db)
    # track the maximum distance between query(db) and each query(pdb)
    max_distance = 0
    # loop on each pdb in the pdbs and calculate its distance
    for pdb in pdbs:
        pdb_query = query(pdb)
        distance = abs(float(db_query) - float(pdb_query))
        if distance > max_distance:
            # the maximum distance <-- that's the sensitivity
            max_distance = distance
    return max_distance
# recreate the sum query we created earlier
query = torch.sum
# test your sensitivity function with the sum query
sensitivity(query, 5000)
for i in range(10):
s = sensitivity(query, 10)
print(s)
# create a new query function, which finds the mean rather than the sum
def calc_mean(db):
return db.float().mean()
query = calc_mean
# test your sensitivity function of the Mean query
# how much the output value (query) is using informaiton frmo each of the participating observations
sensitivity(query, 1000)
for i in range(10):
s = sensitivity(query, 10)
print(s)
###Output
_____no_output_____
###Markdown
Wow! That sensitivity is WAY lower. Note the intuition here. "Sensitivity" is measuring how sensitive the output of the query is to a person being removed from the database. For a simple sum, this is always 1, but for the mean, removing a person is going to change the result of the query by rougly 1 divided by the size of the database (which is much smaller). Thus, "mean" is a VASTLY less "sensitive" function (query) than SUM. Exercise: Calculate L1 Sensitivity For ThresholdIn this exercise, you need to calculate the sensitivty for the "threshold" function. - First compute the sum over the database (i.e. sum(db)) and return whether that sum is greater than a certain threshold.- Then, create databases of size 10 and threshold of 5 and calculate the sensitivity of the function. - Finally, re-initialize the database 10 times and calculate the sensitivity each time.
###Code
# define a function that queries the sum and returns whether the sum is greater than a threshold
def threshold(db):
return sum(db) > 5
query = threshold
# the sensitivity with db of size 10 and threshold of 5
s = sensitivity(query, 10)
print(s)
# repeat for 10 times
for i in range(10):
s = sensitivity(threshold, 10)
print(s)
###Output
_____no_output_____
###Markdown
A Basic Differencing AttackSadly none of the functions we've looked at so far are differentially private (despite them having varying levels of sensitivity). The most basic type of attack can be done as follows.Let's say we wanted to figure out a specific person's value in the database. All we would have to do is query for the sum of the entire database and then the sum of the entire database without that person! Exercise: Perform a Differencing Attack on Row 10In this project, construct a database and then demonstrate how you can use different queries (sum, threshold, and mean) to expose the value of the person represented by row 10 in the database (note, you'll need to use a database with at least 10 rows)
###Code
# Perform a differencing attack using the sum query on row 10
db_size = 10
print(f"Creating database of {db_size} rows")
db = create_db(db_size)
print(db)
db_sum = db.sum()
pdb_sum = db[:-1].sum()
# take the difference
diff_sum = db_sum - pdb_sum
print(diff_sum)
# reverse engineer the total, by checking what is left from the total.
print(f"The 10th row's value is {int(db[-1])}")
correct = "yes" if int(db[-1]) == diff_sum else "no"
print(f"Did it predict it? {correct}")
# Perform a differencing attack using the mean query on row 10
db_size = 10
print(f"Creating database of {db_size} rows")
db = create_db(db_size)
pdb = db[:-1]
print(db)
result = 0 if (sum(db).float() / len(db)) - (sum(pdb.float() /len(pdb))) < 0 else 1
print(result)
# Perform a differencing attack using the threshold query on row 10
db_size = 10
print(f"Creating database of {db_size} rows")
db = create_db(db_size)
print(f"db{db}")
pdb = db[:-1]
print(f"pdb{pdb}")
result = (sum(db) > sum(db) - 1) - (sum(pdb) > sum(db) - 1)
print(result)
###Output
Creating database of 10 rows
dbtensor([0, 1, 1, 0, 0, 0, 0, 1, 0, 1], dtype=torch.uint8)
pdbtensor([0, 1, 1, 0, 0, 0, 0, 1, 0], dtype=torch.uint8)
tensor(1, dtype=torch.uint8)
###Markdown
Project: Local Differential PrivacyAs you can see, the basic sum query is not differentially private at all! In truth, differential privacy always requires a form of randomness added to the query. Let me show you what I mean. Randomized Response (Local Differential Privacy)Let's say I have a group of people I wish to survey about a very taboo behavior which I think they will lie about (say, I want to know if they have ever committed a certain kind of crime). I'm not a policeman, I'm just trying to collect statistics to understand the higher level trend in society. So, how do we do this? One technique is to add randomness to each person's response by giving each person the following instructions (assuming I'm asking a simple yes/no question):- Flip a coin 2 times.- If the first coin flip is heads, answer honestly- If the first coin flip is tails, answer according to the second coin flip (heads for yes, tails for no)!Thus, each person is now protected with "plausible deniability". If they answer "Yes" to the question "have you committed X crime?", then it might be because they actually did, or it might be because they are answering according to a random coin flip. Each person has a high degree of protection. Furthermore, we can recover the underlying statistics with some accuracy, as the "true statistics" are simply averaged with a 50% probability. Thus, if we collect a bunch of samples and it turns out that 60% of people answer yes, then we know that the TRUE distribution is actually centered around 70%, because 70% averaged with 50% (a coin flip) is 60% which is the result we obtained. However, it should be noted that, especially when we only have a few samples, this comes at the cost of accuracy. This tradeoff exists across all of Differential Privacy. The greater the privacy protection (plausible deniability) the less accurate the results. Let's implement this local DP for our database from before!
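To make the de-skewing concrete: with a fair first coin, each reported answer equals the true answer half the time and a fair second coin flip the other half, so

$$\mathbb{E}[\text{reported mean}]=\tfrac{1}{2}\,(\text{true mean})+\tfrac{1}{2}\cdot\tfrac{1}{2},\qquad \text{true mean}=2\left(\mathbb{E}[\text{reported mean}]-\tfrac{1}{4}\right)$$

which is how an observed 60% "yes" rate de-skews to the true 70% mentioned above.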
###Code
import torch
def create_db(entries):
return torch.rand(entries) > 0.5
def create_noisy_db(db, noise=0.5):
# make a "flip" tensor
flip1 = torch.rand(len(db)) > noise
# make a second "flip" tensor
flip2 = torch.rand(len(db)) > 1 - noise
# make a result tensor
noisy = db.clone().detach()
# go through the db and update the noise tensor
for i in range(len(db)):
if flip1[i] == 1:
# keep
noisy[i] = db[i]
else:
noisy[i] = flip2[i]
return noisy
def find_truth(noisy_mean, noise=0.5):
return (noisy_mean - ((1- noise) *noise )) / .5
#return (1.0/noise)*(noisy_mean - ((1-noise) * 0.5))
#return (noisy_mean - (0.5) * ( 1 - noise)) / (0.5)
def perform_analysis(entities, noise, query):
# create the original db
db_x = create_db(entities)
# create the noisy db.
noisy_x = create_noisy_db(db_x, noise)
# perform the query on the original dataset.
db_x_query = query(db_x)
# perform the query on the noisy dataset, and get the truth(based on probability)
noisy_x_mean = query(noisy_x)
truth_x = find_truth(noisy_x_mean, noise)
# return the query from the original and the truth.
return (db_x_query, truth_x)
def query(db):
return db.float().mean()
noise = 0.5
database_sizes = [10, 100, 1000, 10000, 10000000]
for size in database_sizes:
    (db_output, truth_output) = perform_analysis(size, noise, query)
print(f"Database Size:{size}")
print(db_output)
print(truth_output)
print()
# before I refactored above, this was my answer...
noise = 0.5
# database size 10
db1 = create_db(10)
noisy1 = create_noisy_db(db1, noise)
db1_mean = query(db1)
noisy1_mean = query(noisy1)
truth1 = find_truth(noisy1_mean, noise)
print(f"db1.mean(): {db1_mean}")
print(f"truth1: {truth1}")
print(f"db1.mean() - truth1 : {db1_mean - truth1}")
print()
# database size 100
db2 = create_db(100)
noisy2 = create_noisy_db(db2, noise)
db2_mean = db2.float().mean()
noisy2_mean = noisy2.float().mean()
truth2 = find_truth(noisy2_mean, noise)
print(f"db2.mean(): {db2_mean}")
print(f"truth2: {truth2}")
print(f"db2.mean() - truth2 : {db2_mean - truth2}")
print()
# database size 1000
db3 = create_db(1000)
noisy3 = create_noisy_db(db3, noise)
db3_mean = db3.float().mean()
noisy3_mean = noisy3.float().mean()
truth3 = find_truth(noisy3_mean, noise)
print(f"db3.mean(): {db3_mean}")
print(f"truth3: {noisy3_mean}")
print(f"db3.mean() - truth3 : {db3_mean - truth3}")
print()
###Output
_____no_output_____
###Markdown
Project: Varying Amounts of NoiseIn this project, I want you to augment the randomized response query (the one we just wrote) to allow for varying amounts of randomness to be added. Specifically, I want you to bias the coin flip to be higher or lower and then run the same experiment. Note - this one is a bit trickier than you might expect. You need to both adjust the likelihood of the first coin flip AND the de-skewing at the end (where we create the "augmented_result" variable).
###Code
import torch
def create_db(entries):
return torch.rand(entries) > 0.5
def create_noisy_db(db, noise=0.5):
# make a "flip" tensor
flip1 = torch.rand(len(db)) > noise
# make a second "flip" tensor
flip2 = torch.rand(len(db)) > 1 - noise
# make a result tensor
noisy = db.clone().detach()
# go through the db and update the noise tensor
for i in range(len(db)):
if flip1[i] == 1:
# keep
noisy[i] = db[i]
else:
noisy[i] = flip2[i]
return noisy
def find_truth(noisy_mean, noise=0.5):
return (noisy_mean - (0.5) * ( 1 - noise)) / (0.5)
def perform_analysis(entities, noise, query):
# create the original db
db_x = create_db(entities)
# create the noisy db.
noisy_x = create_noisy_db(db_x, noise)
# perform the query on the original dataset.
db_x_query = query(db_x)
# perform the query on the noisy dataset, and get the truth(based on probability)
noisy_x_mean = query(noisy_x)
truth_x = find_truth(noisy_x_mean, noise)
# return the query from the original and the truth.
return (db_x_query, truth_x)
def query(db):
return db.float().mean()
noises = [0.2, 0.4, 0.5, 0.6, 0.8, 1.0]
database_sizes = [10, 100, 1000, 10000, 10000000]
for noise in noises:
for size in database_sizes:
        (db_output, truth_output) = perform_analysis(size, noise, query)
print(f"Database Size:{size}")
print(f"Noise: {noise}")
print(db_output)
print(truth_output)
print(f"difference: {abs(db_output - truth_output)}")
print()
###Output
_____no_output_____
###Markdown
Lesson: The Formal Definition of Differential PrivacyThe previous method of adding noise was called "Local Differential Privacy" because we added noise to each datapoint individually. This is necessary for some situations wherein the data is SO sensitive that individuals do not trust noise to be added later. However, it comes at a very high cost in terms of accuracy. However, alternatively we can add noise AFTER data has been aggregated by a function. This kind of noise can allow for similar levels of protection with a lower effect on accuracy. However, participants must be able to trust that no-one looked at their datapoints _before_ the aggregation took place. In some situations this works out well, in others (such as an individual hand-surveying a group of people), this is less realistic.Nevertheless, global differential privacy is incredibly important because it allows us to perform differential privacy on smaller groups of individuals with lower amounts of noise. Let's revisit our sum functions.
###Code
# use the helper defined earlier to create the db and its parallel dbs
db, pdbs = get_db_and_parallel(100)

def query(db):
    return torch.sum(db.float())

# M is the noisy mechanism: the true query result plus some noise (defined below)
def M(db, noise):
    return query(db) + noise

query(db)
###Output
_____no_output_____
###Markdown
So the idea here is that we want to add noise to the output of our function. We actually have two different kinds of noise we can add - Laplacian Noise or Gaussian Noise. However, before we do so, at this point we need to dive into the formal definition of Differential Privacy. _Image From: "The Algorithmic Foundations of Differential Privacy" - Cynthia Dwork and Aaron Roth - https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf_ This definition does not _create_ differential privacy; instead, it is a measure of how much privacy is afforded by a query M. Specifically, it's a comparison between running the query M on a database (x) and a parallel database (y). As you remember, parallel databases are defined to be the same as a full database (x) with one entry/person removed. Thus, this definition says that FOR ALL parallel databases, the maximum distance between a query on database (x) and the same query on database (y) will be e^epsilon, but that occasionally this constraint won't hold with probability delta. Thus, this theorem is called "epsilon delta" differential privacy. Epsilon: Let's unpack the intuition of this for a moment. Epsilon Zero: If a query satisfied this inequality where epsilon was set to 0, then that would mean that the query for all parallel databases output the exact same value as the full database. As you may remember, when we calculated the "threshold" function, often the Sensitivity was 0. In that case, the epsilon also happened to be zero. Epsilon One: If a query satisfied this inequality with epsilon 1, then the maximum distance between all queries would be 1 - or more precisely - the maximum distance between the two random distributions M(x) and M(y) is 1 (because all these queries have some amount of randomness in them, just like we observed in the last section). Delta: Delta is basically the probability that epsilon breaks. Namely, sometimes the epsilon is different for some queries than it is for others. For example, you may remember when we were calculating the sensitivity of threshold, most of the time sensitivity was 0 but sometimes it was 1. Thus, we could calculate this as "epsilon zero but non-zero delta", which would say that epsilon is perfect except for some probability of the time when it's arbitrarily higher. Note that this expression doesn't represent the full tradeoff between epsilon and delta.
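The inequality the cited image refers to is the standard $(\epsilon, \delta)$ condition from Dwork and Roth: a randomized mechanism $M$ is $(\epsilon, \delta)$-differentially private if, for all parallel databases $x$ and $y$ (differing in a single entry) and all sets of outputs $S$,

$$\Pr[M(x)\in S]\;\le\;e^{\epsilon}\,\Pr[M(y)\in S]+\delta$$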
Laplacian noise is increased/decreased according to a "scale" parameter b. We choose "b" based on the following formula.b = sensitivity(query) / epsilonIn other words, if we set b to be this value, then we know that we will have a privacy leakage of <= epsilon. Furthermore, the nice thing about Laplace is that it guarantees this with delta == 0. There are some tunings where we can have very low epsilon where delta is non-zero, but we'll ignore them for now. Querying Repeatedly- if we query the database multiple times - we can simply add the epsilons (Even if we change the amount of noise and their epsilons are not the same).
###Code
###Output
_____no_output_____
###Markdown
Project: Create a Differentially Private QueryIn this project, I want you to take what you learned in the previous lesson and create a query function which sums over the database and adds just the right amount of noise such that it satisfies an epsilon constraint. Write a query for both "sum" and for "mean". Ensure that you use the correct sensitivity measures for both.
###Code
# try this project here!
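# A hedged sketch of one possible solution (an assumption, not the only valid approach):
# add Laplacian noise with scale b = sensitivity / epsilon to the query output.
# For this binary database, the sum query has sensitivity 1 and the mean query
# has sensitivity roughly 1 / len(db).
import numpy as np

def laplacian_mechanism(db, query, sensitivity, epsilon=0.5):
    beta = sensitivity / epsilon  # b = sensitivity(query) / epsilon
    noise = torch.tensor(np.random.laplace(0, beta, 1))
    return query(db) + noise

def sum_query(db):
    return db.sum().float()

def mean_query(db):
    return db.float().mean()

# example usage on the db created earlier:
# laplacian_mechanism(db, sum_query, sensitivity=1, epsilon=0.5)
# laplacian_mechanism(db, mean_query, sensitivity=1/len(db), epsilon=0.5)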
###Output
_____no_output_____
###Markdown
Lesson: Differential Privacy for Deep LearningSo in the last lessons you may have been wondering - what does all of this have to do with Deep Learning? Well, these same techniques we were just studying form the core primitives for how Differential Privacy provides guarantees in the context of Deep Learning. Previously, we defined perfect privacy as "a query to a database returns the same value even if we remove any person from the database", and used this intuition in the description of epsilon/delta. In the context of deep learning we have a similar standard: Training a model on a dataset should return the same model even if we remove any person from the dataset. Thus, we've replaced "querying a database" with "training a model on a dataset". In essence, the training process is a kind of query. However, one should note that this adds two points of complexity which database queries did not have: 1. do we always know where "people" are referenced in the dataset? 2. neural models rarely, if ever, train to the same output model, even on identical data. The answer to (1) is to treat each training example as a single, separate person. Strictly speaking, this is often overly zealous as some training examples have no relevance to people and others may have multiple/partial (consider an image with multiple people contained within it). Thus, localizing exactly where "people" are referenced, and thus how much your model would change if people were removed, is challenging.The answer to (2) is also an open problem - but several interesting proposals have been made. We're going to focus on one of the most popular proposals, PATE. An Example Scenario: A Health Neural NetworkFirst we're going to consider a scenario - you work for a hospital and you have a large collection of images about your patients. However, you don't know what's in them. You would like to use these images to develop a neural network which can automatically classify them, however since your images aren't labeled, they aren't sufficient to train a classifier. However, being a cunning strategist, you realize that you can reach out to 10 partner hospitals which DO have annotated data. It is your hope to train your new classifier on their datasets so that you can automatically label your own. While these hospitals are interested in helping, they have privacy concerns regarding information about their patients. Thus, you will use the following technique to train a classifier which protects the privacy of patients in the other hospitals.- 1) You'll ask each of the 10 hospitals to train a model on their own datasets (All of which have the same kinds of labels)- 2) You'll then use each of the 10 partner models to predict on your local dataset, generating 10 labels for each of your datapoints- 3) Then, for each local data point (now with 10 labels), you will perform a DP query to generate the final true label. This query is a "max" function, where "max" is the most frequent label across the 10 labels. We will need to add laplacian noise to make this Differentially Private to a certain epsilon/delta constraint.- 4) Finally, we will retrain a new model on our local dataset which now has labels. This will be our final "DP" model.So, let's walk through these steps. I will assume you're already familiar with how to train/predict a deep neural network, so we'll skip steps 1 and 2 and work with example data.
We'll focus instead on step 3, namely how to perform the DP query for each example using toy data.So, let's say we have 10,000 training examples, and we've got 10 labels for each example (from our 10 "teacher models" which were trained directly on private data). Each label is chosen from a set of 10 possible labels (categories) for each image.
###Code
import numpy as np
num_teachers = 10 # we're working with 10 partner hospitals
num_examples = 10000 # the size of OUR dataset
num_labels = 10 # number of lablels for our classifier
preds = (np.random.rand(num_teachers, num_examples) * num_labels).astype(int).transpose(1,0) # fake predictions
new_labels = list()
for an_image in preds:
label_counts = np.bincount(an_image, minlength=num_labels)
epsilon = 0.1
beta = 1 / epsilon
for i in range(len(label_counts)):
label_counts[i] += np.random.laplace(0, beta, 1)
new_label = np.argmax(label_counts)
new_labels.append(new_label)
# new_labels
###Output
_____no_output_____
###Markdown
PATE Analysis
###Code
labels = np.array([9, 9, 3, 6, 9, 9, 9, 9, 8, 2])
counts = np.bincount(labels, minlength=10)
query_result = np.argmax(counts)
query_result
from syft.frameworks.torch.differential_privacy import pate
num_teachers, num_examples, num_labels = (100, 100, 10)
preds = (np.random.rand(num_teachers, num_examples) * num_labels).astype(int) #fake preds
indices = (np.random.rand(num_examples) * num_labels).astype(int) # true answers
preds[:,0:10] *= 0
data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5)
assert data_dep_eps < data_ind_eps
data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5)
print("Data Independent Epsilon:", data_ind_eps)
print("Data Dependent Epsilon:", data_dep_eps)
preds[:,0:50] *= 0
data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5, moments=20)
print("Data Independent Epsilon:", data_ind_eps)
print("Data Dependent Epsilon:", data_dep_eps)
###Output
_____no_output_____
###Markdown
Where to Go From HereRead:* Algorithmic Foundations of Differential Privacy: https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf* Deep Learning with Differential Privacy: https://arxiv.org/pdf/1607.00133.pdf* The Ethical Algorithm: https://www.amazon.com/Ethical-Algorithm-Science-Socially-Design/dp/0190948205 Topics:* The Exponential Mechanism* The Moment's Accountant* Differentially Private Stochastic Gradient Descent
###Code
###Output
_____no_output_____
###Markdown
Section Project:For the final project for this section, you're going to train a DP model using this PATE method on the MNIST dataset, provided below.
###Code
import torchvision.datasets as datasets
mnist_trainset = datasets.MNIST(root='./data', train=True, download=True, transform=None)
train_data = mnist_trainset.train_data
train_targets = mnist_trainset.train_labels
test_data = mnist_trainset.test_data
test_targets = mnist_trainset.test_labels
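# A possible next step for the project (illustrative sketch only): split the
# training data among a number of hypothetical "teacher" models, each of which
# would be trained on its own disjoint subset, as in the PATE approach above
import torch
num_teachers = 10
teacher_data = torch.chunk(train_data, num_teachers)
teacher_targets = torch.chunk(train_targets, num_teachers)
print([len(t) for t in teacher_data])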
###Output
_____no_output_____ |
Module-17-Challenge-Resources/Starter_Code/credit_risk_resampling.ipynb | ###Markdown
Credit Risk Resampling Techniques
###Code
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
from pathlib import Path
from collections import Counter
###Output
_____no_output_____
###Markdown
Read the CSV and Perform Basic Data Cleaning
###Code
columns = [
"loan_amnt", "int_rate", "installment", "home_ownership",
"annual_inc", "verification_status", "issue_d", "loan_status",
"pymnt_plan", "dti", "delinq_2yrs", "inq_last_6mths",
"open_acc", "pub_rec", "revol_bal", "total_acc",
"initial_list_status", "out_prncp", "out_prncp_inv", "total_pymnt",
"total_pymnt_inv", "total_rec_prncp", "total_rec_int", "total_rec_late_fee",
"recoveries", "collection_recovery_fee", "last_pymnt_amnt", "next_pymnt_d",
"collections_12_mths_ex_med", "policy_code", "application_type", "acc_now_delinq",
"tot_coll_amt", "tot_cur_bal", "open_acc_6m", "open_act_il",
"open_il_12m", "open_il_24m", "mths_since_rcnt_il", "total_bal_il",
"il_util", "open_rv_12m", "open_rv_24m", "max_bal_bc",
"all_util", "total_rev_hi_lim", "inq_fi", "total_cu_tl",
"inq_last_12m", "acc_open_past_24mths", "avg_cur_bal", "bc_open_to_buy",
"bc_util", "chargeoff_within_12_mths", "delinq_amnt", "mo_sin_old_il_acct",
"mo_sin_old_rev_tl_op", "mo_sin_rcnt_rev_tl_op", "mo_sin_rcnt_tl", "mort_acc",
"mths_since_recent_bc", "mths_since_recent_inq", "num_accts_ever_120_pd", "num_actv_bc_tl",
"num_actv_rev_tl", "num_bc_sats", "num_bc_tl", "num_il_tl",
"num_op_rev_tl", "num_rev_accts", "num_rev_tl_bal_gt_0",
"num_sats", "num_tl_120dpd_2m", "num_tl_30dpd", "num_tl_90g_dpd_24m",
"num_tl_op_past_12m", "pct_tl_nvr_dlq", "percent_bc_gt_75", "pub_rec_bankruptcies",
"tax_liens", "tot_hi_cred_lim", "total_bal_ex_mort", "total_bc_limit",
"total_il_high_credit_limit", "hardship_flag", "debt_settlement_flag"
]
target = ["loan_status"]
# Load the data
file_path = Path('../LoanStats_2019Q1.csv')
df = pd.read_csv(file_path, skiprows=1)[:-2]
df = df.loc[:, columns].copy()
# Drop the null columns where all values are null
df = df.dropna(axis='columns', how='all')
# Drop the null rows
df = df.dropna()
# Remove the `Issued` loan status
issued_mask = df['loan_status'] != 'Issued'
df = df.loc[issued_mask]
# convert interest rate to numerical
df['int_rate'] = df['int_rate'].str.replace('%', '')
df['int_rate'] = df['int_rate'].astype('float') / 100
#dropping the hardship flag and debt settlement flag
df=df.drop('hardship_flag', axis=1)
df=df.drop('debt_settlement_flag', axis=1)
# Convert the target column values to low_risk and high_risk based on their values
# map the 'Current' status to low_risk
x = {'Current': 'low_risk'}
df = df.replace(x)
# map the late, default, and grace-period statuses to high_risk
x = dict.fromkeys(['Late (31-120 days)', 'Late (16-30 days)', 'Default', 'In Grace Period'], 'high_risk')
df = df.replace(x)
df.reset_index(inplace=True, drop=True)
df.head(20)
###Output
_____no_output_____
###Markdown
Split the Data into Training and Testing
###Code
# Create our features
X = df.copy()
X = X.drop('loan_status', axis=1)
# Create our target
y = df['loan_status'].values
X.describe()
# Check the balance of our target values
Counter(y)
# Create X_train, X_test, y_train, y_test
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
print(X_train.shape)
print(X_test.shape)
###Output
(51612, 83)
(17205, 83)
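###Markdown
`StandardScaler` is imported above but never used in this starter code. If feature scaling is wanted, a reasonable pattern (shown here only as a sketch; the cells below keep using the unscaled features as provided) is to fit the scaler on the numeric columns of the training split and reuse it for the test split.
###Code
# fit on the training split only, then transform both splits with the same scaler
num_cols = X_train.select_dtypes(include='number').columns
scaler = StandardScaler().fit(X_train[num_cols])
X_train_scaled = scaler.transform(X_train[num_cols])
X_test_scaled = scaler.transform(X_test[num_cols])
###Output
_____no_output_____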
###Markdown
Oversampling. In this section, you will compare two oversampling algorithms to determine which algorithm results in the best performance. You will oversample the data using the naive random oversampling algorithm and the SMOTE algorithm. For each algorithm, be sure to complete the following steps: 1. View the count of the target classes using `Counter` from the collections library. 2. Use the resampled data to train a logistic regression model. 3. Calculate the balanced accuracy score from sklearn.metrics. 4. Print the confusion matrix from sklearn.metrics. 5. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn. Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests. Naive Random Oversampling
###Code
# Resample the training data with the RandomOversampler
from imblearn.over_sampling import RandomOverSampler
ros = RandomOverSampler(random_state=1)
X_resampled, y_resampled = ros.fit_resample(X_train, y_train)
Counter(y_resampled)
# Train the Logistic Regression model using the resampled data
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_resampled, y_resampled)
# Calculated the balanced accuracy score
from sklearn.metrics import balanced_accuracy_score
y_pred=model.predict(X_test)
balanced_accuracy_score(y_test, y_pred)
# Display the confusion matrix
from sklearn.metrics import confusion_matrix
y_pred = model.predict(X_test)
confusion_matrix(y_test, y_pred)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
print(classification_report_imbalanced(y_test, y_pred))
###Output
_____no_output_____
###Markdown
SMOTE Oversampling
###Code
# Resample the training data with SMOTE
from imblearn.over_sampling import SMOTE
X_resampled, y_resampled = SMOTE(random_state=1,
sampling_strategy='auto').fit_resample(X_train, y_train)
Counter(y_resampled)
# Train the Logistic Regression model using the resampled data
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_resampled, y_resampled)
y_pred = model.predict(X_test)
# Calculated the balanced accuracy score
from sklearn.metrics import balanced_accuracy_score
balanced_accuracy_score(y_test, y_pred)
# Display the confusion matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
print(classification_report_imbalanced(y_test, y_pred))
###Output
pre rec spe f1 geo iba sup
high_risk 0.01 0.70 0.70 0.03 0.70 0.49 101
low_risk 1.00 0.70 0.70 0.82 0.70 0.49 17104
avg / total 0.99 0.70 0.70 0.82 0.70 0.49 17205
###Markdown
Undersampling. In this section, you will test an undersampling algorithm to determine whether it results in better performance than the oversampling algorithms above. You will undersample the data using the Cluster Centroids algorithm and complete the following steps: 1. View the count of the target classes using `Counter` from the collections library. 2. Use the resampled data to train a logistic regression model. 3. Calculate the balanced accuracy score from sklearn.metrics. 4. Print the confusion matrix from sklearn.metrics. 5. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn. Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests.
###Code
# Resample the data using the ClusterCentroids resampler
from imblearn.under_sampling import ClusterCentroids
cc = ClusterCentroids(random_state=1)
X_resampled, y_resampled = cc.fit_resample(X_train, y_train)
Counter(y_resampled)
# Train the Logistic Regression model using the resampled data
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_resampled, y_resampled)
y_pred = model.predict(X_test)
# Calculate the balanced accuracy score
from sklearn.metrics import balanced_accuracy_score
balanced_accuracy_score(y_test, y_pred)
# Display the confusion matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
print(classification_report_imbalanced(y_test, y_pred))
###Output
pre rec spe f1 geo iba sup
high_risk 0.01 0.81 0.47 0.02 0.62 0.40 101
low_risk 1.00 0.47 0.81 0.64 0.62 0.37 17104
avg / total 0.99 0.48 0.81 0.64 0.62 0.37 17205
###Markdown
Combination (Over and Under) Sampling. In this section, you will test a combination over- and under-sampling algorithm to determine whether it results in better performance than the other sampling algorithms above. You will resample the data using the SMOTEENN algorithm and complete the following steps: 1. View the count of the target classes using `Counter` from the collections library. 2. Use the resampled data to train a logistic regression model. 3. Calculate the balanced accuracy score from sklearn.metrics. 4. Print the confusion matrix from sklearn.metrics. 5. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn. Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests.
###Code
# Resample the training data with SMOTEENN
from imblearn.combine import SMOTEENN
smote_enn = SMOTEENN(random_state=1)
X_resampled, y_resampled = smote_enn.fit_resample(X_train, y_train)
Counter(y_resampled)
# Train the Logistic Regression model using the resampled data
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_resampled, y_resampled)
y_pred = model.predict(X_test)
# Calculate the balanced accuracy score
from sklearn.metrics import balanced_accuracy_score
balanced_accuracy_score(y_test, y_pred)
# Display the confusion matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
print(classification_report_imbalanced(y_test, y_pred))
###Output
pre rec spe f1 geo iba sup
high_risk 0.01 0.71 0.68 0.03 0.70 0.49 101
low_risk 1.00 0.68 0.71 0.81 0.70 0.48 17104
avg / total 0.99 0.68 0.71 0.81 0.70 0.48 17205
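###Markdown
To compare the four strategies side by side, the sampler/model pairs can be refit in a loop and ranked by balanced accuracy. This is a sketch rather than part of the starter notebook: it assumes the feature matrix prepared above is fully numeric (as the earlier model-fitting cells already require) and simply repeats the work of the previous sections.
###Code
from collections import OrderedDict
from imblearn.over_sampling import RandomOverSampler, SMOTE
from imblearn.under_sampling import ClusterCentroids
from imblearn.combine import SMOTEENN
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

samplers = OrderedDict([
    ('RandomOverSampler', RandomOverSampler(random_state=1)),
    ('SMOTE', SMOTE(random_state=1)),
    ('ClusterCentroids', ClusterCentroids(random_state=1)),
    ('SMOTEENN', SMOTEENN(random_state=1)),
])

scores = {}
for name, sampler in samplers.items():
    X_res, y_res = sampler.fit_resample(X_train, y_train)
    clf = LogisticRegression(solver='lbfgs', random_state=1).fit(X_res, y_res)
    scores[name] = balanced_accuracy_score(y_test, clf.predict(X_test))

pd.Series(scores).sort_values(ascending=False)
###Output
_____no_output_____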
|
notebooks/Figure. CNV eQTL Characteristics.ipynb | ###Markdown
Figure. CNV eQTL Characteristics
###Code
import copy
import cPickle
import os
import subprocess
import cdpybio as cpb
import matplotlib as mpl
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
pd.options.mode.chained_assignment = None # default='warn'
import pybedtools as pbt
import scipy.stats as stats
import seaborn as sns
import ciepy
import cardipspy as cpy
%matplotlib inline
%load_ext rpy2.ipython
dy_name = 'figure_cnv_eqtl_characteristics'
outdir = os.path.join(ciepy.root, 'output', dy_name)
cpy.makedir(outdir)
private_outdir = os.path.join(ciepy.root, 'private_output', dy_name)
cpy.makedir(private_outdir)
import socket
if socket.gethostname() == 'fl-hn1' or socket.gethostname() == 'fl-hn2':
dy = os.path.join(ciepy.root, 'sandbox', 'tmp', dy_name)
cpy.makedir(dy)
pbt.set_tempdir(dy)
###Output
_____no_output_____
###Markdown
Each figure should be able to fit on a single 8.5 x 11 inch page. Please do not send figure panels as individual files. We use three standard widths for figures: 1 column, 85 mm; 1.5 column, 114 mm; and 2 column, 174 mm (the full width of the page). Although your figure size may be reduced in the print journal, please keep these widths in mind. For Previews and other three-column formats, these widths are also applicable, though the width of a single column will be 55 mm.
###Code
fn = os.path.join(ciepy.root, 'output', 'mcnv_analysis', 'sig.tsv')
mcnv_sig = pd.read_table(fn, index_col=0)
fn = os.path.join(ciepy.root, 'output/cnv_analysis/cnv_gene_variants.pickle')
cnv_gv = cPickle.load(open(fn))
fn = os.path.join(ciepy.root, 'output/cnv_analysis/combined_info.pickle')
combined_info = cPickle.load(open(fn))
sig_cnvs = set(cnv_gv.cnv_id)
not_sig_cnvs = set(combined_info.index) - sig_cnvs
cnv_lead_vars = cnv_gv[cnv_gv.cnv_is_lead]
cnv_lead_vars = cnv_lead_vars.sort_values(by='pvalue').drop_duplicates(subset=['gene_id'])
p = sum(cnv_lead_vars.beta > 0) / float(cnv_lead_vars.shape[0])
print('{:.2f}% of lead CNV eQTLs are positively associated with gene expression.'.format(p * 100))
sns.set_style('whitegrid')
s,p = stats.mannwhitneyu(np.log10(combined_info.ix[sig_cnvs, 'svlen'].abs()),
np.log10(combined_info.ix[not_sig_cnvs, 'svlen'].abs()))
print('Length of CNV eQTLs vs. CNVs that are not eQTLs is different (p={:.3e}, Mann Whitney U).'.format(p))
s,p = stats.mannwhitneyu(np.log10(combined_info.ix[sig_cnvs, 'nearest_tss_dist'].abs() + 1),
np.log10(combined_info.ix[not_sig_cnvs, 'nearest_tss_dist'].abs() + 1))
print('Distance from nearest TSS of CNV eQTLs vs. CNVs that are not eQTLs '
'is different (p={:.3e}, Mann Whitney U).'.format(p))
fig = plt.figure(figsize=(6.85, 6), dpi=300)
gs = gridspec.GridSpec(1, 1)
ax = fig.add_subplot(gs[0, 0])
ax.text(0, 0, 'Figure S3',
size=16, va='bottom')
ciepy.clean_axis(ax)
ax.set_xticks([])
ax.set_yticks([])
gs.tight_layout(fig, rect=[0, 0.90, 0.5, 1])
gs = gridspec.GridSpec(3, 2)
# Lead CNV effect sizes
ax = fig.add_subplot(gs[0, 0])
bins = np.arange(-3, 3.1, 0.1)
cnv_lead_vars.drop_duplicates('gene_id').beta.hist(bins=bins, histtype='stepfilled', lw=0)
print('{:,} lead CNVs.'.format(cnv_lead_vars.drop_duplicates('gene_id').shape[0]))
p = stats.binom_test((cnv_lead_vars.drop_duplicates('gene_id').beta > 0).value_counts())
print('Effect sizes for lead CNVs are biased '
'(p={:.3e}, binomial test).'.format(p))
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(8)
ax.set_xlabel('$\\beta$', fontsize=8)
ax.set_ylabel('Number of lead CNVs', fontsize=8)
ax.set_xlim(-3, 3)
# Length
ax = fig.add_subplot(gs[0, 1])
se = np.log10(combined_info.ix[sig_cnvs, 'svlen'].abs())
weights = np.ones_like(se) / float(se.shape[0])
se.hist(ax=ax, label='eQTL'.format(se.shape[0]),
alpha=0.5, weights=weights, histtype='stepfilled',
bins=np.arange(0, 6.1, 0.1))
se = np.log10(combined_info.ix[not_sig_cnvs, 'svlen'].abs())
weights = np.ones_like(se) / float(se.shape[0])
se.hist(ax=ax, label='Not eQTL'.format(se.shape[0]),
alpha=0.5, weights=weights, histtype='stepfilled',
bins=np.arange(0, 6.1, 0.1))
ax.legend(fontsize=8, loc='upper left', fancybox=True, frameon=True)
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(8)
ax.set_xlabel('$\\log_{10}$ CNV length', fontsize=8)
ax.set_ylabel('Density', fontsize=8)
# TSS distance.
ax = fig.add_subplot(gs[1, 0])
se = np.log10(combined_info.ix[sig_cnvs, 'nearest_tss_dist'].abs() + 1)
weights = np.ones_like(se) / float(se.shape[0])
se.hist(ax=ax, label='eQTL'.format(se.shape[0]),
alpha=0.5, weights=weights, histtype='stepfilled',
bins=np.arange(0, 6.1, 0.1))
se = np.log10(combined_info.ix[not_sig_cnvs, 'nearest_tss_dist'].abs() + 1)
weights = np.ones_like(se) / float(se.shape[0])
se.hist(ax=ax, label='Not eQTL'.format(se.shape[0]),
alpha=0.5, weights=weights, histtype='stepfilled',
bins=np.arange(0, 6.1, 0.1))
ax.legend(fontsize=8, loc='upper left', fancybox=True, frameon=True)
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(8)
ax.set_xlabel('$\\log_{10}$ distance to nearest TSS', fontsize=8)
ax.set_ylabel('Density', fontsize=8)
tdf = mcnv_sig.sort_values(by=['overlap_gene_cons', 'pvalue']).drop_duplicates(subset='gene')
# Genic mCNV
ax = fig.add_subplot(gs[1, 1])
ax.set_ylabel('Number of genic\nlead mCNVs', fontsize=8)
ax.set_xlabel('$\\beta$', fontsize=8)
tdf[tdf.overlap_gene_cons].beta.hist(bins=np.arange(-1, 1.1, 0.1), ax=ax, histtype='stepfilled', lw=0)
ax.grid(axis='x')
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(8)
print('{:,} genic mCNV eGenes.'.format(tdf[tdf.overlap_gene_cons].shape[0]))
p = stats.binom_test((tdf[tdf.overlap_gene_cons].beta > 0).value_counts())
print('Effect sizes for genic lead mCNVs are biased '
'(p={:.3e}, binomial test).'.format(p))
# Intergenic mCNV
ax = fig.add_subplot(gs[2, 0])
tdf[tdf.overlap_gene_cons == False].beta.hist(bins=np.arange(-1, 1.1, 0.1), ax=ax, histtype='stepfilled', lw=0)
ax.set_ylabel('Number of intergenic\nlead mCNVs', fontsize=8)
ax.set_xlabel('$\\beta$', fontsize=8)
ax.grid(axis='x')
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(8)
p = stats.binom_test((tdf[tdf.overlap_gene_cons == False].beta > 0).value_counts())
print('{:,} lead intergenic mCNV eGenes.'.format(tdf[tdf.overlap_gene_cons == False].shape[0]))
print('Effect sizes for intergenic lead mCNVs are biased '
'(p={:.3e}, binomial test).'.format(p))
t = fig.text(0.005, 0.87, 'A', weight='bold',
size=12)
t = fig.text(0.5, 0.87, 'B', weight='bold',
size=12)
t = fig.text(0.005, 0.58, 'C', weight='bold',
size=12)
t = fig.text(0.5, 0.58, 'D', weight='bold',
size=12)
t = fig.text(0.005, 0.29, 'E', weight='bold',
size=12)
gs.tight_layout(fig, rect=[0, 0, 1, 0.9])
fig.savefig(os.path.join(outdir, 'cnv_eqtl_chars.pdf'))
fig.savefig(os.path.join(outdir, 'cnv_eqtl_chars.png'), dpi=300)
fig = plt.figure(figsize=(6.85, 5.4), dpi=300)
gs = gridspec.GridSpec(3, 2)
# Lead CNV effect sizes
ax = fig.add_subplot(gs[0, 0])
bins = np.arange(-3, 3.1, 0.1)
cnv_lead_vars.drop_duplicates('gene_id').beta.hist(bins=bins, histtype='stepfilled', lw=0)
print('{:,} lead CNVs.'.format(cnv_lead_vars.drop_duplicates('gene_id').shape[0]))
p = stats.binom_test((cnv_lead_vars.drop_duplicates('gene_id').beta > 0).value_counts())
print('Effect sizes for lead CNVs are biased '
'(p={:.3e}, binomial test).'.format(p))
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(8)
ax.set_xlabel('$\\beta$', fontsize=8)
ax.set_ylabel('Number of lead CNVs', fontsize=8)
ax.set_xlim(-3, 3)
# Length
ax = fig.add_subplot(gs[0, 1])
se = np.log10(combined_info.ix[sig_cnvs, 'svlen'].abs())
weights = np.ones_like(se) / float(se.shape[0])
se.hist(ax=ax, label='eQTL'.format(se.shape[0]),
alpha=0.5, weights=weights, histtype='stepfilled',
bins=np.arange(0, 6.1, 0.1))
se = np.log10(combined_info.ix[not_sig_cnvs, 'svlen'].abs())
weights = np.ones_like(se) / float(se.shape[0])
se.hist(ax=ax, label='Not eQTL'.format(se.shape[0]),
alpha=0.5, weights=weights, histtype='stepfilled',
bins=np.arange(0, 6.1, 0.1))
ax.legend(fontsize=8, loc='upper left', fancybox=True, frameon=True)
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(8)
ax.set_xlabel('$\\log_{10}$ CNV length', fontsize=8)
ax.set_ylabel('Density', fontsize=8)
# TSS distance.
ax = fig.add_subplot(gs[1, 0])
se = np.log10(combined_info.ix[sig_cnvs, 'nearest_tss_dist'].abs() + 1)
weights = np.ones_like(se) / float(se.shape[0])
se.hist(ax=ax, label='eQTL'.format(se.shape[0]),
alpha=0.5, weights=weights, histtype='stepfilled',
bins=np.arange(0, 6.1, 0.1))
se = np.log10(combined_info.ix[not_sig_cnvs, 'nearest_tss_dist'].abs() + 1)
weights = np.ones_like(se) / float(se.shape[0])
se.hist(ax=ax, label='Not eQTL'.format(se.shape[0]),
alpha=0.5, weights=weights, histtype='stepfilled',
bins=np.arange(0, 6.1, 0.1))
ax.legend(fontsize=8, loc='upper left', fancybox=True, frameon=True)
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(8)
ax.set_xlabel('$\\log_{10}$ distance to nearest TSS', fontsize=8)
ax.set_ylabel('Density', fontsize=8)
tdf = mcnv_sig.sort_values(by=['overlap_gene_cons', 'pvalue']).drop_duplicates(subset='gene')
# Genic mCNV
ax = fig.add_subplot(gs[1, 1])
ax.set_ylabel('Number of genic\nlead mCNVs', fontsize=8)
ax.set_xlabel('$\\beta$', fontsize=8)
tdf[tdf.overlap_gene_cons].beta.hist(bins=np.arange(-1, 1.1, 0.1), ax=ax, histtype='stepfilled', lw=0)
ax.grid(axis='x')
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(8)
print('{:,} genic mCNV eGenes.'.format(tdf[tdf.overlap_gene_cons].shape[0]))
p = stats.binom_test((tdf[tdf.overlap_gene_cons].beta > 0).value_counts())
print('Effect sizes for genic lead mCNVs are biased '
'(p={:.3e}, binomial test).'.format(p))
# Intergenic mCNV
ax = fig.add_subplot(gs[2, 0])
tdf[tdf.overlap_gene_cons == False].beta.hist(bins=np.arange(-1, 1.1, 0.1), ax=ax, histtype='stepfilled', lw=0)
ax.set_ylabel('Number of intergenic\nlead mCNVs', fontsize=8)
ax.set_xlabel('$\\beta$', fontsize=8)
ax.grid(axis='x')
for t in ax.get_xticklabels() + ax.get_yticklabels():
t.set_fontsize(8)
p = stats.binom_test((tdf[tdf.overlap_gene_cons == False].beta > 0).value_counts())
print('{:,} lead intergenic mCNV eGenes.'.format(tdf[tdf.overlap_gene_cons == False].shape[0]))
print('Effect sizes for intergenic lead mCNVs are biased '
'(p={:.3e}, binomial test).'.format(p))
t = fig.text(0.005, 0.96, 'A', weight='bold',
size=12)
t = fig.text(0.5, 0.96, 'B', weight='bold',
size=12)
t = fig.text(0.005, 0.64, 'C', weight='bold',
size=12)
t = fig.text(0.5, 0.64, 'D', weight='bold',
size=12)
t = fig.text(0.005, 0.32, 'E', weight='bold',
size=12)
gs.tight_layout(fig, rect=[0, 0, 1, 1])
fig.savefig(os.path.join(outdir, 'cnv_eqtl_chars_no_label.pdf'))
fig.savefig(os.path.join(outdir, 'cnv_eqtl_chars_no_label.png'), dpi=300)
###Output
108 lead CNVs.
Effect sizes for lead CNVs are biased (p=2.075e-13, binomial test).
33 genic mCNV eGenes.
Effect sizes for genic lead mCNVs are biased (p=1.309e-07, binomial test).
57 lead intergenic mCNV eGenes.
Effect sizes for intergenic lead mCNVs are biased (p=3.202e-03, binomial test).
|
related_alg/correlation_analysis.ipynb | ###Markdown
Correlation Analysis. Correlation analysis is a method of statistical evaluation used to study the strength of a relationship between two numerically measured, continuous variables (e.g. height and weight). * The two variables have a linear relationship. * When one variable increases, does the other increase along with it? 1. Method 1: compute the Pearson correlation coefficient with numpy's corrcoef
###Code
from numpy import corrcoef
list1 = [1,2,3,4,5,6]
list2 = [3,4,5,6,7,8]
corrcoef(list1, list2)[0,1]
list1 = [2,2,7,4,5,6]
list2 = [3,4,5,6,1,8]
corrcoef(list1, list2)[0,1]
###Output
_____no_output_____
###Markdown
Analysis: * The larger the value, the stronger the correlation (as list1 keeps increasing, list2 keeps increasing along with it). 2. Method 2: compute the Pearson correlation coefficient with scipy's pearsonr
###Code
from scipy.stats.stats import pearsonr
list1 = [1,2,3,4,5,6]
list2 = [3,4,5,6,7,8]
pearsonr(list1, list2)  # correlation coefficient and the p-value
from scipy.stats.stats import pearsonr
list1 = [2,2,7,4,5,6]
list2 = [3,4,5,6,1,8]
pearsonr(list1, list2)  # correlation coefficient and the p-value
###Output
_____no_output_____
###Markdown
Analysis: same as above. 3. A simple classification of correlation coefficient values * 0.8-1.0: very strong correlation * 0.6-0.8: strong correlation * 0.4-0.6: moderate correlation * 0.2-0.4: weak correlation * 0.0-0.2: very weak or no correlation 4. When correlation analysis applies **It is suited to questions like this:** Suppose the gross national products of five countries are 1, 2, 3, 5, and 8 (in billions of dollars), and suppose their poverty rates are 11%, 12%, 13%, 15%, and 18%. Is there a correlation between gross national product and poverty rate?
###Code
from numpy import corrcoef
list1 = [1,2,3,5,8]
list2 = [11,12,13,15,18]
corrcoef(list1, list2)[0,1]
# the result is 1, an extremely strong correlation
###Output
_____no_output_____ |
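###Markdown
For reference, Pearson's r is just the sample covariance of the two lists divided by the product of their standard deviations. The quick check below (not part of the original notebook) computes it directly and should match `corrcoef`/`pearsonr` for the GNP example above.
###Code
import numpy as np

def pearson_r(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # covariance of x and y divided by the product of their standard deviations
    return np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())

print(pearson_r([1, 2, 3, 5, 8], [11, 12, 13, 15, 18]))  # 1.0, matching corrcoef
###Output
_____no_output_____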
Notebooks/Probability_101.ipynb | ###Markdown
Probability of an ORF. For getting started, we'll solve a simple version. Assume the RNA length is a multiple of 3. Only count ORFs in frame 1. Assume every codon is a random uniform selection from 64 choices. Solve one exact length in codons. Include every ORF of length >= codons/2.
###Code
import time
def show_time():
t = time.time()
print(time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)))
show_time()
###Output
2021-08-10 14:08:49 EDT
###Markdown
Let W = P(NonStart) = 63/64, M = P(Start) = 1/64, S = P(Stop) = 3/64, A = P(NonStop) = 61/64, C = P(AnyCodon) = 64/64, n = length of RNA in codons, L = ORF length in codons (includes start, excludes stop), H = ceil(n/2), I = floor(n/2), Porf(n) = probability of an ORF with L >= n/2. $P_{orf}(n)=\sum_{i=1}^{I}\sum_{j=i}^{L}[W^{i-1}MA^{j}S]$
###Code
import math
P_NoStart = 63/64
P_Start = 1/64
P_Stop = 3/64
P_Amino = 61/64
P_Codon = 64/64
def get_prob_by_sum(rna_len):
psum = 0.0 # sum of probs
min_cds = math.ceil(rna_len/2)
min_amino = min_cds-2
max_start = math.floor(rna_len/2)
for start_pos in range(1,max_start+1):
for stop_pos in range(start_pos+min_cds,rna_len+1):
num_pre = start_pos-1
num_amino = stop_pos-start_pos-1
pone = (P_NoStart**num_pre)*P_Start*(P_Amino**num_amino)*P_Stop
psum += pone
return psum
print(" RNA CODONS PROB")
for c in range(1,11,1):
rna_codons=c
rna_len=rna_codons*3
porf = get_prob_by_sum(rna_codons)
print("%5d %6d %.5f"%(rna_len,rna_codons,porf))
print(" RNA CODONS PROB")
for c in range(33,700,33):
rna_codons=c
rna_len=rna_codons*3
porf = get_prob_by_sum(rna_codons)
print("%5d %6d %.5e"%(rna_len,rna_codons,porf))
###Output
RNA CODONS PROB
99 33 3.40404e-02
198 66 4.72581e-02
297 99 3.45162e-02
396 132 2.19696e-02
495 165 1.17732e-02
594 198 6.27716e-03
693 231 3.04329e-03
792 264 1.51496e-03
891 297 7.03198e-04
990 330 3.38947e-04
1089 363 1.53960e-04
1188 396 7.29797e-05
1287 429 3.27679e-05
1386 462 1.53909e-05
1485 495 6.86598e-06
1584 528 3.20822e-06
1683 561 1.42591e-06
1782 594 6.64280e-07
1881 627 2.94608e-07
1980 660 1.37007e-07
2079 693 6.06855e-08
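###Markdown
As a sanity check (not part of the original notebook), the same model can be simulated directly: draw each codon uniformly from 64 values, treat one designated value as the start codon and three values as stop codons, find the first start, then the first stop after it, and count the trial as an ORF when the coding stretch covers at least half the codons. The estimate should agree with `get_prob_by_sum` to within Monte Carlo error; the particular codon encoding below is an arbitrary choice for the simulation.
###Code
import math
import numpy as np

def orf_prob_by_simulation(rna_codons, trials=100000, seed=0):
    rng = np.random.default_rng(seed)
    min_cds = math.ceil(rna_codons / 2)
    hits = 0
    for _ in range(trials):
        codons = rng.integers(0, 64, size=rna_codons)  # 0 = start, 1-3 = stops
        starts = np.flatnonzero(codons == 0)
        if starts.size == 0:
            continue
        i = starts[0]                      # first start codon
        stops_after = np.flatnonzero(np.isin(codons[i+1:], (1, 2, 3)))
        if stops_after.size == 0:
            continue
        j = i + 1 + stops_after[0]         # first stop codon after the start
        if j - i >= min_cds:               # ORF length counts the start, excludes the stop
            hits += 1
    return hits / trials

print(get_prob_by_sum(33), orf_prob_by_simulation(33))
###Output
_____no_output_____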
|
doc/ipython_notebooks_src/dev-cvode-nsteps-only.ipynb | ###Markdown
Developers notes: CVODE - limit the number of steps for time integration. It is useful to get control back from CVODE 'every now and then' to save restart data and provide some output to the user. We can do this by limiting (and adjusting) the maximum number of iterations that we allow sundials to carry out. The principle is demonstrated here. More detailed information can be found in the [CVODE manual, pdf](https://computation.llnl.gov/casc/sundials/documentation/cv_guide.pdf)
###Code
# set up an example
import numpy as np
from finmag.util.ode import cvode
import finmag.native.sundials as sundials
integrator = sundials.cvode(sundials.CV_ADAMS, sundials.CV_FUNCTIONAL)
def rhs(t, y, ydot):
ydot[:] = 0.5 * y
return 0
# new function that does the time integration for max_steps only
def advance_time(integrator, tout, yout, max_steps=None):
    """
    *Arguments*
    ``tout`` - target time (float)
    ``yout`` - state vector (numpy array)
    ``max_steps`` - maximum number of steps (integer)
    Given the integrator object, a target time tout, and a state vector yout,
    this function integrates towards tout. If max_steps is given and more than
    max_steps steps would be needed for the integration, we interrupt the
    calculation and return (False, t_reached).
    If tout is reached within the allowed number of steps, it returns (True, tout).
    """
    if max_steps is not None:
        integrator.set_max_num_steps(max_steps)
reached_tout = True
tout_actual = tout
try:
integrator.advance_time(tout, yout)
except RuntimeError, msg:
# if we have reached max_num_steps, the error message will read something like
# expected_error = "Error in CVODE:CVode (CV_TOO_MUCH_WORK): At t = 0.258733, mxstep steps taken before reaching tout.'"
if "CV_TOO_MUCH_WORK" in msg.message:
reached_tout = False
print ("not reached t_out")
# in this case, return cvode current time
tout_actual = integrator.get_current_time()
else: # don't know what this is, raise error again
raise
return reached_tout, tout_actual
###Output
_____no_output_____
###Markdown
And here we define a function that uses the n-steps only code above:
###Code
def test_advance_time_nsteps():
"""Wrap up the functionality above to have regression test. We may not need this anymore with Max new testing tool."""
import numpy as np
from finmag.util.ode import cvode
import finmag.native.sundials as sundials
integrator = sundials.cvode(sundials.CV_ADAMS, sundials.CV_FUNCTIONAL)
def rhs(t, y, ydot):
ydot[:] = 0.5 * y
return 0
yout = np.zeros(1)
ts = np.linspace(0.1, 1, 10)*0.1
integrator.init(rhs, 0, np.array([1.]))
integrator.set_scalar_tolerances(1e-9, 1e-9)
for i, t in enumerate(ts):
retval, tout_actual = advance_time(integrator, t, yout, 2)
#assert retval == 0.0
print("t={:6.4}, yout = {:14}".format(t,yout)),
print("current_time = {:15.10}".format(integrator.get_current_time())),
print("num_steps = {:6}".format(integrator.get_num_steps())),
print("cur_step = {:6}".format(integrator.get_current_step())),
print("rhsevals = {:6}".format(integrator.get_num_rhs_evals())),
absdiff = abs(yout[0] - np.exp(tout_actual*0.5))
print("absdiff = {}".format(absdiff))
assert absdiff < 2e-9
return integrator
###Output
_____no_output_____
###Markdown
And then we need to call it:
###Code
integrator = test_advance_time_nsteps()
integrator.get_actual_init_step() # the step size that was attempted as the very first step
integrator.get_last_step() # the step size used in the last step
###Output
_____no_output_____
###Markdown
There is a convenient function to get a number of statistics in one shot:
###Code
stats = integrator.get_integrator_stats()
nsteps, nfevals, nlinsetups, netfails, qlast, qcur, hinused, hlast, hcur, tcur = stats
print stats
###Output
(10, 26, 0, 3, 4, 4, 6.324555320338399e-05, 0.02745619524875278, 0.02745619524875278, 0.0795298134904037)
|
notebooks/basic_functionality.ipynb | ###Markdown
Basic Placekey Functionality Install and load the Placekey library If placekey is not installed on your system you can install it and other dependencies for the notebooks in this repo by running```pip install -r requirements.txt```
###Code
import placekey as pk
import h3 as h3
###Output
_____no_output_____
###Markdown
Conversion between Placekeys and latitude and longitude. The most basic functionality of the library is converting between Placekeys and latitude/longitude.
###Code
geo = (37.779351, -122.418655) # The front door of SF City Hall
placekey = pk.geo_to_placekey(*geo)
print('The Placekey for the location of SF City Hall is "{}".'.format(placekey))
centroid_lat, centroid_long = pk.placekey_to_geo(placekey)
print('The latitude and longitude for the center of "{}" is ({}, {}).'.format(placekey, centroid_lat, centroid_long))
###Output
The Placekey for the location of SF City Hall is "@5vg-7gq-tjv".
The latitude and longitude for the center of "@5vg-7gq-tjv" is (37.77988951810222, -122.41864762076004).
###Markdown
Conversion between Placekeys and H3 indices. Since the location portion (Where Part) of a Placekey is based on H3, there are also functions for converting back and forth between Placekeys and H3 indices. These H3 indices are always resolution 10. There is also support for working with the integer representation of H3 indices as well as the string representation.
###Code
h3_for_placekey = pk.placekey_to_h3(placekey)
print('The H3 index corresponding to "{}" is "{}".'.format(placekey, h3_for_placekey))
print('"{}" has resolution {}.'.format(h3_for_placekey, h3.h3_get_resolution(h3_for_placekey)))
h3_int_for_placekey = pk.placekey_to_h3_int(placekey)
print('The integer H3 index corresponding to "{}" is {}.'.format(placekey, h3_int_for_placekey))
###Output
The H3 index corresponding to "@5vg-7gq-tjv" is "8a2830828747fff".
"8a2830828747fff" has resolution 10.
The integer H3 index corresponding to "@5vg-7gq-tjv" is 622203769592250367.
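###Markdown
Because the Where Part sits on the H3 grid, spatial operations from the h3 library can be applied to Placekeys as well. The sketch below (assuming `pk.h3_to_placekey` as the inverse of `pk.placekey_to_h3`, and the h3 3.x `k_ring` function) lists the Placekeys of the hexagon containing SF City Hall and its immediate neighbors.
###Code
# k_ring(h, 1) returns the cell itself plus its six neighbors
neighbor_h3 = h3.k_ring(h3_for_placekey, 1)
neighbor_placekeys = [pk.h3_to_placekey(h) for h in neighbor_h3]
print(neighbor_placekeys)
###Output
_____no_output_____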
###Markdown
Converting Placekeys to spatial geometry formats. Often when working with Placekeys it is useful to be able to visualize the corresponding hexagon or to operate with that hexagon and other spatial geometries. To that end we've provided functionality to convert Placekeys into several formats for specifying geometric shapes: 1. Hexagon boundary coordinates, 2. WKT (Well-Known Text) string for the hexagon boundary, 3. GeoJSON dictionary for the hexagon boundary, 4. Shapely Polygon object for the hexagon boundary. There are also functions for converting geometric shapes in these formats into lists of Placekeys. See the [advanced functionality notebook]() for examples.
###Code
pk.placekey_to_hex_boundary(placekey)
pk.placekey_to_wkt(placekey)
pk.placekey_to_geojson(placekey)
pk.placekey_to_polygon(placekey)
###Output
_____no_output_____ |
communities/Getting CAS Action Help from Python.ipynb | ###Markdown
Getting CAS Action Help from Python. As with most things in programming, there are multiple ways of displaying help information about CAS action sets and actions from Python. We'll outline each of those methods in this article. The first thing we need is a connection to CAS.
###Code
import swat
conn = swat.CAS(host, port, username, password)
###Output
_____no_output_____
###Markdown
Using the `help` Action. The CAS server has a built-in help system that will tell you about action sets and actions. To get help for all of the loaded action sets and a description of all of the actions in those action sets, you just call the **help** action with no parameters. In this case, we are storing the output of the action in a variable. That result contains the same information as the printed notes, but the information is encapsulated in DataFrame structures. Unless you are going to use the action set information programmatically, there isn't much reason to have it printed twice.
###Code
out = conn.help()
###Output
NOTE: Available Action Sets and Actions:
NOTE: accessControl
NOTE: assumeRole - Assumes a role
NOTE: dropRole - Relinquishes a role
NOTE: showRolesIn - Shows the currently active role
NOTE: showRolesAllowed - Shows the roles that a user is a member of
NOTE: isInRole - Shows whether a role is assumed
NOTE: isAuthorized - Shows whether access is authorized
NOTE: isAuthorizedActions - Shows whether access is authorized to actions
NOTE: isAuthorizedTables - Shows whether access is authorized to tables
NOTE: isAuthorizedColumns - Shows whether access is authorized to columns
NOTE: listAllPrincipals - Lists all principals that have explicit access controls
NOTE: whatIsEffective - Lists effective access and explanations (Origins)
NOTE: listAcsData - Lists access controls for caslibs, tables, and columns
NOTE: listAcsActionSet - Lists access controls for an action or action set
NOTE: repAllAcsCaslib - Replaces all access controls for a caslib
NOTE: repAllAcsTable - Replaces all access controls for a table
NOTE: repAllAcsColumn - Replaces all access controls for a column
NOTE: repAllAcsActionSet - Replaces all access controls for an action set
NOTE: repAllAcsAction - Replaces all access controls for an action
NOTE: updSomeAcsCaslib - Adds, deletes, and modifies some access controls for a caslib
NOTE: updSomeAcsTable - Adds, deletes, and modifies some access controls for a table
NOTE: updSomeAcsColumn - Adds, deletes, and modifies some access controls for a column
NOTE: updSomeAcsActionSet - Adds, deletes, and modifies some access controls for an action set
NOTE: updSomeAcsAction - Adds, deletes, and modifies some access controls for an action
NOTE: remAllAcsData - Removes all access controls for a caslib, table, or column
NOTE: remAllAcsActionSet - Removes all access controls for an action set or action
NOTE: operTableMd - Adds, deletes, and modifies table metadata
NOTE: operColumnMd - Adds, deletes, and modifies column metadata
NOTE: operActionSetMd - Adds, deletes, and modifies action set metadata
NOTE: operActionMd - Adds, deletes, and modifies action metadata
NOTE: operAdminMd - Assigns users and groups to roles and modifies administrator metadata
NOTE: listMetadata - Lists the metadata for caslibs, tables, columns, action sets, actions, or administrators
NOTE: persistMetadata - Persists the access control metadata
NOTE: createBackup - Creates a backup if one is not in progress
NOTE: completeBackup - Flags a backup as complete
NOTE: operBWPaths - Configures a blacklist or whitelist of paths
NOTE: deleteBWList - Deletes a blacklist or a whitelist
NOTE: builtins
NOTE: addNode - Adds a machine to the server
NOTE: removeNode - Remove one or more machines from the server
NOTE: help - Shows the parameters for an action or lists all available actions
NOTE: listNodes - Shows the host names used by the server
NOTE: loadActionSet - Loads an action set for use in this session
NOTE: installActionSet - Loads an action set in new sessions automatically
NOTE: log - Shows and modifies logging levels
NOTE: queryActionSet - Shows whether an action set is loaded
NOTE: queryName - Checks whether a name is an action or action set name
NOTE: reflect - Shows detailed parameter information for an action or all actions in an action set
NOTE: serverStatus - Shows the status of the server
NOTE: about - Shows the status of the server
NOTE: shutdown - Shuts down the server
NOTE: getUsers - Shows the users from the authentication provider
NOTE: getGroups - Shows the groups from the authentication provider
NOTE: userInfo - Shows the user information for your connection
NOTE: actionSetInfo - Shows the build information from loaded action sets
NOTE: history - Shows the actions that were run in this session
NOTE: casCommon - Provides parameters that are common to many actions
NOTE: ping - Sends a single request to the server to confirm that the connection is working
NOTE: echo - Prints the supplied parameters to the client log
NOTE: modifyQueue - Modifies the action response queue settings
NOTE: getLicenseInfo - Shows the license information for a SAS product
NOTE: refreshLicense - Refresh SAS license information from a file
NOTE: httpAddress - Shows the HTTP address for the server monitor
NOTE: configuration
NOTE: getServOpt - displays the value of a server option
NOTE: listServOpts - Displays the server options and server values
NOTE: dataPreprocess
NOTE: rustats - Computes robust univariate statistics, centralized moments, quantiles, and frequency distribution statistics
NOTE: impute - Performs data matrix (variable) imputation
NOTE: outlier - Performs outlier detection and treatment
NOTE: binning - Performs unsupervised variable discretization
NOTE: discretize - Performs supervised and unsupervised variable discretization
NOTE: histogram - Generates histogram bins and simple bin-based statistics for numeric variables
NOTE: transform - Performs pipelined variable imputation, outlier detection and treatment, functional transformation, binning, and robust univariate statistics to evaluate the quality of the transformation
NOTE: kde - Computes kernel density estimation
NOTE: dataStep
NOTE: runCode - Runs DATA step code
NOTE: percentile
NOTE: percentile - Calculate quantiles and percentiles
NOTE: boxPlot - Calculate quantiles, high and low whiskers, and outliers
NOTE: assess - Assess and compare models
NOTE: search
NOTE: searchIndex - Searches for a query against an index and retrieves records, documents, and tuples that are relevant to that query
NOTE: searchAggregate - Aggregates certain fields in a table that is usually generated by searchIndex
NOTE: valueCount - value count for multiple fields
NOTE: buildIndex - Creates an empty index using a schema (the first step of Search)
NOTE: getSchema - Gets the schema of an index
NOTE: appendIndex - Loads data to an index after the buildIndex action is performed
NOTE: deleteDocuments - Delete a portion of documents from index
NOTE: session
NOTE: listSessions - Displays a list of the sessions on the server
NOTE: addNodeStatus - Lists details about machines currently being added to the server
NOTE: timeout - Changes the time-out for a session
NOTE: endSession - Ends the current session
NOTE: sessionId - Displays the name and UUID of the current session
NOTE: sessionName - Changes the name of the current session
NOTE: sessionStatus - Displays the status of the current session
NOTE: listresults - Lists the saved results for a session
NOTE: batchresults - Change current action to batch results
NOTE: fetchresult - Fetch the specified saved result for a session
NOTE: flushresult - Flush the saved result for this session
NOTE: setLocale - Changes the locale for the current session
NOTE: metrics - Displays the metrics for each action after it executes
NOTE: sessionProp
NOTE: setSessOpt - Sets a session option
NOTE: getSessOpt - Displays the value of a session option
NOTE: listSessOpts - Displays the session options and session values
NOTE: addFmtLib - Adds a format library
NOTE: listFmtLibs - Lists the format libraries that are associated with the session
NOTE: setFmtSearch - Sets the format libraries to search
NOTE: listFmtSearch - Shows the format library search order
NOTE: dropFmtLib - Drops a format library from global scope for all sessions
NOTE: deleteFormat - Deletes a format from a format library
NOTE: addFormat - Adds a format to a format library
NOTE: listFmtValues - Shows the values for a format
NOTE: saveFmtLib - Saves a format library
NOTE: promoteFmtLib - Promotes a format library to global scope for all sessions
NOTE: listFmtRanges - Displays the range information for a format
NOTE: simple
NOTE: mdSummary - Calculates multidimensional summaries of numeric variables
NOTE: numRows - Shows the number of rows in a Cloud Analytic Services table
NOTE: summary - Generates descriptive statistics of numeric variables such as the sample mean, sample variance, sample size, sum of squares, and so on
NOTE: correlation - Generates a matrix of Pearson product-moment correlation coefficients
NOTE: regression - Performs a linear regression up to 3rd-order polynomials
NOTE: crossTab - Performs one-way or two-way tabulations
NOTE: distinct - Computes the distinct number of values of the variables in the variable list
NOTE: topK - Returns the top-K and bottom-K distinct values of each variable included in the variable list based on a user-specified ranking order
NOTE: groupBy - Builds BY groups in terms of the variable value combinations given the variables in the variable list
NOTE: freq - Generates a frequency distribution for one or more variables
NOTE: paraCoord - Generates a parallel coordinates plot of the variables in the variable list
NOTE: table
NOTE: view - Creates a view from files or tables
NOTE: attribute - Manages extended table attributes
NOTE: upload - Transfers binary data to the server to create objects like tables
NOTE: loadTable - Loads a table from a caslib's data source
NOTE: tableExists - Checks whether a table has been loaded
NOTE: columnInfo - Shows column information
NOTE: fetch - Fetches rows from a table or view
NOTE: save - Saves a table to a caslib's data source
NOTE: addTable - Add a table by sending it from the client to the server
NOTE: tableInfo - Shows information about a table
NOTE: tableDetails - Get detailed information about a table
NOTE: dropTable - Drops a table
NOTE: deleteSource - Delete a table or file from a caslib's data source
NOTE: fileInfo - Lists the files in a caslib's data source
NOTE: promote - Promote a table to global scope
NOTE: addCaslib - Adds a new caslib to enable access to a data source
NOTE: dropCaslib - Drops a caslib
NOTE: caslibInfo - Shows caslib information
NOTE: queryCaslib - Checks whether a caslib exists
NOTE: partition - Partitions a table
NOTE: recordCount - Shows the number of rows in a Cloud Analytic Services table
NOTE: loadDataSource - Loads one or more data source interfaces
NOTE: update - Updates rows in a table
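###Markdown
The sketch below (assuming the usual swat behavior, where a CASResults object is an ordered, dict-like collection of DataFrames) shows how the stored result can be inspected programmatically instead of re-reading the printed notes.
###Code
# the keys should mirror the action sets listed in the notes above
print(list(out.keys()))
# each entry is a DataFrame describing the actions in that action set
out[list(out.keys())[0]].head()
###Output
_____no_output_____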
###Markdown
If you only want to see the help for a single action set, you can specify the action set name as a parameter.
###Code
out = conn.help(actionset='simple')
###Output
NOTE: Information for action set 'simple':
NOTE: simple
NOTE: mdSummary - Calculates multidimensional summaries of numeric variables
NOTE: numRows - Shows the number of rows in a Cloud Analytic Services table
NOTE: summary - Generates descriptive statistics of numeric variables such as the sample mean, sample variance, sample size, sum of squares, and so on
NOTE: correlation - Generates a matrix of Pearson product-moment correlation coefficients
NOTE: regression - Performs a linear regression up to 3rd-order polynomials
NOTE: crossTab - Performs one-way or two-way tabulations
NOTE: distinct - Computes the distinct number of values of the variables in the variable list
NOTE: topK - Returns the top-K and bottom-K distinct values of each variable included in the variable list based on a user-specified ranking order
NOTE: groupBy - Builds BY groups in terms of the variable value combinations given the variables in the variable list
NOTE: freq - Generates a frequency distribution for one or more variables
NOTE: paraCoord - Generates a parallel coordinates plot of the variables in the variable list
###Markdown
You can also specify a single action as a parameter. Calling the **help** action this way will also print descriptions of all of the action parameters.
###Code
out = conn.help(action='summary')
###Output
NOTE: Information for action 'simple.summary':
NOTE: The following parameters are accepted. Default values are shown.
NOTE: list table={
NOTE: specifies the table name, caslib, and other common parameters.
NOTE: string name=NULL (required),
NOTE: specifies the name of the table to use.
NOTE: string caslib=NULL,
NOTE: specifies the caslib containing the table that you want to use with the action. By default, the active caslib is used. Specify a value only if you need to access a table from a different caslib.
NOTE: string where=NULL,
NOTE: specifies an expression for subsetting the input data.
NOTE: array of groupBy={
NOTE: specifies the names of the variables to use for grouping results.
NOTE: string name=NULL (required),
NOTE: specifies the name for the variable.
NOTE: string label=NULL,
NOTE: specifies the descriptive label for the variable.
NOTE: int32 formattedLength=0,
NOTE: specifies the format field length plus the format precision length.
NOTE: string format=NULL,
NOTE: specifies the format to apply to the variable.
NOTE: int32 nfl=0,
NOTE: specifies the format field length.
NOTE: int32 nfd=0
NOTE: specifies the format precision length.
NOTE: },
NOTE: specifies the names of the variables to use for grouping results.
NOTE: array of orderBy={
NOTE: specifies the variables to use for ordering observations within partitions. This parameter applies to partitioned tables or it can be combined with groupBy variables when groupByMode is set to REDISTRIBUTE.
NOTE: string name=NULL (required),
NOTE: specifies the name for the variable.
NOTE: string label=NULL,
NOTE: specifies the descriptive label for the variable.
NOTE: int32 formattedLength=0,
NOTE: specifies the format field length plus the format precision length.
NOTE: string format=NULL,
NOTE: specifies the format to apply to the variable.
NOTE: int32 nfl=0,
NOTE: specifies the format field length.
NOTE: int32 nfd=0
NOTE: specifies the format precision length.
NOTE: },
NOTE: specifies the variables to use for ordering observations within partitions. This parameter applies to partitioned tables or it can be combined with groupBy variables when groupByMode is set to REDISTRIBUTE.
NOTE: array of computedVars={
NOTE: specifies the names of the computed variables to create. Specify an expression for each variable in the computedVarsProgram parameter.
NOTE: string name=NULL (required),
NOTE: specifies the name for the variable.
NOTE: string label=NULL,
NOTE: specifies the descriptive label for the variable.
NOTE: int32 formattedLength=0,
NOTE: specifies the format field length plus the format precision length.
NOTE: string format=NULL,
NOTE: specifies the format to apply to the variable.
NOTE: int32 nfl=0,
NOTE: specifies the format field length.
NOTE: int32 nfd=0
NOTE: specifies the format precision length.
NOTE: } (alias: compVars),
NOTE: specifies the names of the computed variables to create. Specify an expression for each variable in the computedVarsProgram parameter.
NOTE: string computedVarsProgram=NULL (alias: compPgm),
NOTE: specifies an expression for each computed variable that you include in the computedVars parameter.
NOTE: enum groupByMode='NOSORT' ('NOSORT', 'REDISTRIBUTE'),
NOTE: specifies how the server creates groups.
NOTE: boolean computedOnDemand=false (alias: compOnDemand),
NOTE: when set to True, the computed variables are created when the table is loaded instead of when the action begins.
NOTE: boolean singlePass=false,
NOTE: when set to True, the data does not create a transient table in the server. Setting this parameter to True can be efficient, but the data might not have stable ordering upon repeated runs.
NOTE: alternative list importOptions={
NOTE: specifies the settings for reading a table from a data source.
NOTE: list : {
NOTE: enum : 'auto' ('auto') (alias: ft)
NOTE: specifies the file type based on the filename suffix. Default values for the parameters related to the file type are used.
NOTE: },
NOTE: list : {
NOTE: enum : 'hdat' ('hdat') (required),
NOTE: specifies to import a SASHDAT table.
NOTE: alternative list : {
NOTE: specifies a password for encrypting or decrypting stored data.
NOTE: string : NULL,
NOTE: blob
NOTE: }
NOTE: specifies a password for encrypting or decrypting stored data.
NOTE: },
NOTE: list : {
NOTE: enum : 'csv' ('csv', 'delimited') (required),
NOTE: specifies to import a delimited file.
NOTE: int64 : 20 (value >= 1),
NOTE: specifies the number of rows to scan in order to determine data types for variables. Specify 0 to scan all rows.
NOTE: string : ',',
NOTE: specifies the character to use as the field delimiter.
NOTE: array of : {
NOTE: specifies the names, types, formats, and other metadata for variables.
NOTE: string : NULL,
NOTE: specifies the name for the variable.
NOTE: string : NULL,
NOTE: specifies the descriptive label for the variable.
NOTE: int32 : 8 (value >= 1),
NOTE: specifies the unformatted length of the variable. This parameter applies to fixed-length character variables (type="CHAR") only.
NOTE: enum : 'double' ('binary', 'char', 'date', 'datetime', 'decquad', 'decsext', 'double', 'int32', 'int64', 'time', 'varbinary', 'varchar'),
NOTE: specifies the data type for the variable.
NOTE: int32 : 0,
NOTE: specifies the format field length plus the format precision length.
NOTE: string : NULL,
NOTE: specifies the format to apply to the variable.
NOTE: int32 : 0,
NOTE: specifies the format field length.
NOTE: int32 : 0
NOTE: specifies the format precision length.
NOTE: },
NOTE: specifies the names, types, formats, and other metadata for variables.
NOTE: boolean : true,
NOTE: when set to True, the values in the first line of the file are used as variable names.
NOTE: boolean : true,
NOTE: when set to True, variable-length strings are used for character variables.
NOTE: int32 : 0 (value >= 0),
NOTE: specifies the number of threads to use on each machine in the server. By default, the server uses one thread for each CPU that is licensed to use SAS software.
NOTE: boolean : false,
NOTE: removes leading and trailing blanks from character variables.
NOTE: string : 'utf-8',
NOTE: specifies the text encoding of the file.
NOTE: string : NULL,
NOTE: specifies the locale for interpreting data in the file
NOTE: boolean : true
NOTE: specifies that truncation of character data that is too long for a given field is allowed.
NOTE: },
NOTE: list : {
NOTE: enum : 'excel' ('excel') (required),
NOTE: imports a Microsoft Excel workbook.
NOTE: string : NULL,
NOTE: specifies the name of the worksheet to import.
NOTE: string : NULL,
NOTE: specifies a subset of the cells to import. For example, the range B2..E8 is the range address for a rectangular block of 12 cells, where the top left cell is B2 and the bottom right cell is E8.
NOTE: boolean : true
NOTE: when set to True, the values in the first line of the file are used as variable names.
NOTE: },
NOTE: list : {
NOTE: enum : 'jmp' ('jmp') (required)
NOTE: imports a JMP file.
NOTE: },
NOTE: list : {
NOTE: enum : 'spss' ('spss') (required)
NOTE: imports an SPSS file.
NOTE: },
NOTE: list : {
NOTE: enum : 'dta' ('dta') (required)
NOTE: imports a STATA file.
NOTE: },
NOTE: list : {
NOTE: enum : 'esp' ('esp') (required),
NOTE: imports a window from SAS Event Stream Processing.
NOTE: boolean : false,
NOTE: when set to True, the table is created with character and float data types only. When set to False, the table uses the same data types that are used in the ESP event.
NOTE: int32 : 5
NOTE: specifies the number of seconds to receive data from SAS Event Stream Processing before declaring an end of file on the stream. The data is read until an end of file is found, and then the action stops running.
NOTE: },
NOTE: list : {
NOTE: enum : 'lasr' ('lasr') (required),
NOTE: imports a table from SAS LASR Analytic Server.
NOTE: string list : {
NOTE: specifies the variables to use in the action.
NOTE: },
NOTE: specifies the variables to use in the action.
NOTE: string : NULL,
NOTE: specifies an expression for subsetting the input data.
NOTE: array of : {
NOTE: specifies the names of the computed variables to create. Specify an expression for each variable in the computedVarsProgram parameter.
NOTE: string : NULL (required),
NOTE: specifies the name for the variable.
NOTE: string : NULL,
NOTE: specifies the descriptive label for the variable.
NOTE: int32 : 0,
NOTE: specifies the format field length plus the format precision length.
NOTE: string : NULL,
NOTE: specifies the format to apply to the variable.
NOTE: int32 : 0,
NOTE: specifies the format field length.
NOTE: int32 : 0
NOTE: specifies the format precision length.
NOTE: },
NOTE: specifies the names of the computed variables to create. Specify an expression for each variable in the computedVarsProgram parameter.
NOTE: string : NULL,
NOTE: specifies an expression for each computed variable that you include in the computedVars parameter.
NOTE: boolean : false,
NOTE: when set to True, variable-length strings are used for character variables.
NOTE: int32 : 0 (value >= 0),
NOTE: specifies the number of threads to use on each machine in the server. By default, the server uses one thread for each CPU that is licensed to use SAS software.
NOTE: enum : 'Fallback' ('Fallback', 'Force', 'None'),
NOTE: specifies how the table is transferred from SAS LASR Analytic Server to SAS Cloud Analytic Services.
NOTE: boolean : false
NOTE: when set to True, the rows are inserted into the new table in the same order as they are received from the SAS LASR Analytic Server. Creating the table is less efficient when this parameter is used.
NOTE: },
NOTE: list : {
NOTE: enum : 'basesas' ('basesas') (required),
NOTE: specifies the settings for importing a SAS data set.
NOTE: alternative list : {
NOTE: specifies a password for encrypting or decrypting stored data.
NOTE: string : NULL,
NOTE: blob
NOTE: },
NOTE: specifies a password for encrypting or decrypting stored data.
NOTE: string : NULL,
NOTE: specifies the password for a password-protected data set. Use this parameter if the data set is password-protected or uses SAS proprietary encryption.
NOTE: string : NULL,
NOTE: specifies the Read password for the SAS data set.
NOTE: enum : 'AUTO' ('AUTO', 'SERIAL', 'PARALLEL') (alias: dtm),
NOTE: specifies how data is transferred between the data source and SAS Cloud Analytic Services.
NOTE: double : 1 (1 <= value <= 5)
NOTE: specifies a multiplier value to expand fixed-width character variables that might require transcoding. The lengths are increased to avoid character data truncation. The lengths are increased by multiplying the length by the specified value.
NOTE: },
NOTE: list : {
NOTE: enum : 'mva' ('mva') (required)
NOTE: imports a table from a Base SAS session.
NOTE: },
NOTE: list : {
NOTE: enum : 'xls' ('xls') (required),
NOTE: imports a Microsoft Excel workbook with an XLS file extension.
NOTE: string : NULL,
NOTE: specifies the name of the worksheet to import.
NOTE: string : NULL,
NOTE: specifies a subset of the cells to import. For example, the range B2..E8 is the range address for a rectangular block of 12 cells, where the top left cell is B2 and the bottom right cell is E8.
NOTE: boolean : true
NOTE: when set to True, the values in the first line of the file are used as variable names.
NOTE: },
NOTE: list : {
NOTE: enum : 'fmt' ('fmt') (required),
NOTE: imports SAS formats from a file.
NOTE: string : NULL
NOTE: specifies a file system path and filename. The file must be a SAS item store that includes the formats to import.
NOTE: }
NOTE: } (alias: import),
NOTE: specifies the settings for reading a table from a data source.
NOTE: boolean onDemand=true,
NOTE: when set to True, table access is less aggressive with virtual memory use.
NOTE: array of vars={
NOTE: specifies the variables to use in the action.
NOTE: string name=NULL (required),
NOTE: specifies the name for the variable.
NOTE: string label=NULL,
NOTE: specifies the descriptive label for the variable.
NOTE: int32 formattedLength=0,
NOTE: specifies the format field length plus the format precision length.
NOTE: string format=NULL,
NOTE: specifies the format to apply to the variable.
NOTE: int32 nfl=0,
NOTE: specifies the format field length.
NOTE: int32 nfd=0
NOTE: specifies the format precision length.
NOTE: }
NOTE: specifies the variables to use in the action.
NOTE: } (required),
NOTE: specifies the table name, caslib, and other common parameters.
NOTE: list groupbyTable={
NOTE: specifies an input table that contains the groups to use in a group-by analysis.
NOTE: string name=NULL (required),
NOTE: specifies the name of the table to use.
NOTE: string casLib=NULL,
NOTE: specifies the caslib containing the table that you want to use with the action. By default, the active caslib is used. Specify a value only if you need to access a table from a different caslib.
NOTE: string where=NULL
NOTE: specifies an expression for subsetting the input data.
NOTE: },
NOTE: specifies an input table that contains the groups to use in a group-by analysis.
NOTE: array of attributes={
NOTE: string name=NULL (required),
NOTE: specifies the name for the variable.
NOTE: string label=NULL,
NOTE: specifies the descriptive label for the variable.
NOTE: int32 formattedLength=0,
NOTE: specifies the format field length plus the format precision length.
NOTE: string format=NULL,
NOTE: specifies the format to apply to the variable.
NOTE: int32 nfl=0,
NOTE: specifies the format field length.
NOTE: int32 nfd=0
NOTE: specifies the format precision length.
NOTE: } (alias: attribute) (alias: attrs) (alias: attr) (alias: varAttrs),
NOTE: array of inputs={
NOTE: string name=NULL (required),
NOTE: specifies the name for the variable.
NOTE: string label=NULL,
NOTE: specifies the descriptive label for the variable.
NOTE: int32 formattedLength=0,
NOTE: specifies the format field length plus the format precision length.
NOTE: string format=NULL,
NOTE: specifies the format to apply to the variable.
NOTE: int32 nfl=0,
NOTE: specifies the format field length.
NOTE: int32 nfd=0
NOTE: specifies the format precision length.
NOTE: } (alias: input),
NOTE: int64 groupByLimit=9223372036854775807 (value >= 1),
NOTE: specifies the maximum number of levels in a group-by set. When the server determines this number of levels, the server stops and does not return a result. Specify this parameter if you want to avoid creating large result sets in group-by operations.
NOTE: string list orderBy={
NOTE: specifies a list of variables by which to order the result set.
NOTE: },
NOTE: specifies a list of variables by which to order the result set.
NOTE: int64 list orderByAgg={
NOTE: specifies one or more aggregators by which to order the result set.
NOTE: } ('CSS', 'CV', 'MAX', 'MEAN', 'MIN', 'N', 'NMISS', 'PROBT', 'STD', 'STDERR', 'SUM', 'TSTAT', 'USS', 'VAR'),
NOTE: specifies one or more aggregators by which to order the result set.
NOTE: string list orderByDesc={
NOTE: arranges the results in descending order.
NOTE: },
NOTE: arranges the results in descending order.
NOTE: boolean orderByGbyRaw=false (alias: orderByRaw),
NOTE: when set to True, the ordering of the group-by variables is based on the raw values of the variables, not the formatted values.
NOTE: int32 resultLimit=0 (0 <= value <= 2147483647) (alias: limit),
NOTE: specifies the maximum size of the result set returned to the client.
NOTE: list casOut={
NOTE: specifies the settings for an output table.
NOTE: string name=NULL,
NOTE: specifies the name to associate with the table.
NOTE: string caslib=NULL,
NOTE: specifies the name of the caslib to use.
NOTE: string timeStamp=NULL,
NOTE: specifies the timestamp to apply to the table. Specify the value in the form that is appropriate for your session locale.
NOTE: boolean compress=false,
NOTE: when set to True, data compression is applied to the table.
NOTE: boolean replace=false,
NOTE: specifies whether to overwrite an existing table with the same name.
NOTE: int32 replication=1 (value >= 0),
NOTE: specifies the number of copies of the table to make for fault tolerance. Larger values result in slower performance and use more memory, but provide high availability for data in the event of a node failure.
NOTE: string label=NULL,
NOTE: specifies the descriptive label to associate with the table.
NOTE: int64 maxMemSize=0,
NOTE: specifies the maximum amount of physical memory, in bytes, to allocate for the table. After this threshold is reached, the server uses temporary files and operating system facilities for memory management.
NOTE: boolean promote=false,
NOTE: when set to True, the output table is added with a global scope. This enables other sessions to access the table, subject to access controls. The target caslib must also have a global scope.
NOTE: boolean onDemand=true
NOTE: when set to True, table access is less aggressive with virtual memory use.
NOTE: },
NOTE: specifies the settings for an output table.
NOTE: boolean repeat=false,
NOTE: when set to True, the action is repeated.
NOTE: string freq=NULL (alias: frequency),
NOTE: specifies a frequency variable.
NOTE: double ciAlpha=0.05 (0 < value < 1),
NOTE: specifies the level of significance for 100*(1-ciAlpha)% confidence intervals. The default value of 0.05 results in 95% confidence intervals.
NOTE: enum ciType='TWOSIDED' ('LEFT', 'LOWER', 'RIGHT', 'TWOSIDED', 'UPPER'),
NOTE: specifies the type of confidence interval.
NOTE: int64 list subSet={
NOTE: specifies the summary statistics to generate.
NOTE: } ('CSS', 'CV', 'MAX', 'MEAN', 'MIN', 'N', 'NMISS', 'PROBT', 'STD', 'STDERR', 'SUM', 'TSTAT', 'USS', 'VAR') (unique) (alias: summarySubset) (alias: statistics),
NOTE: specifies the summary statistics to generate.
NOTE: string weight=NULL
NOTE: specifies a numeric variable whose values weight the values of the analysis variables.
###Markdown
Using Python's `help` Function In addition to the **help** action, you can also use Python's built-in **help** function. In this case, you pass it an action set or action attribute of the connection. In the code below, we get the help for the **simple** action set. In addition to the actions in the action set, the output also includes information about the action set's Python class.
###Code
help(conn.simple)
###Output
Help on Simple in module swat.cas.actions object:
class Simple(CASActionSet)
| Analytics
|
| Actions
| -------
| simple.correlation : Generates a matrix of Pearson product-moment correlation
| coefficients
| simple.crosstab : Performs one-way or two-way tabulations
| simple.distinct : Computes the distinct number of values of the variables in
| the variable list
| simple.freq : Generates a frequency distribution for one or more
| variables
| simple.groupby : Builds BY groups in terms of the variable value
| combinations given the variables in the variable list
| simple.mdsummary : Calculates multidimensional summaries of numeric variables
| simple.numrows : Shows the number of rows in a Cloud Analytic Services table
| simple.paracoord : Generates a parallel coordinates plot of the variables in
| the variable list
| simple.regression : Performs a linear regression up to 3rd-order polynomials
| simple.summary : Generates descriptive statistics of numeric variables such
| as the sample mean, sample variance, sample size, sum of
| squares, and so on
| simple.topk : Returns the top-K and bottom-K distinct values of each
| variable included in the variable list based on a user-
| specified ranking order
|
| Method resolution order:
| Simple
| CASActionSet
| builtins.object
|
| Data and other attributes defined here:
|
| actions = {'correlation': <class 'swat.cas.actions.simple.Correlation'...
|
| ----------------------------------------------------------------------
| Methods inherited from CASActionSet:
|
| __call__(self, *args, **kwargs)
|
| __dir__(self)
|
| __getattr__(self, name)
|
| ----------------------------------------------------------------------
| Class methods inherited from CASActionSet:
|
| from_reflection(asinfo, connection) from builtins.type
| Create a CASActionSet class from reflection information
|
| Parameters
| ----------
| asinfo : dict
| Reflection information from the server
| connection : CAS object
| The connection object to associate with the CASActionSet
|
| Returns
| -------
| CASActionSet class
|
| get_connection() from builtins.type
| Retrieve the registered connection
|
| Since the connection is only held using a weak reference,
| this method will raise a SWATError if the connection object
| no longer exists.
|
| Returns
| -------
| CAS object
| The registered connection object
|
| Raises
| ------
| SWATError
| If the connection object no longer exists
|
| ----------------------------------------------------------------------
| Data descriptors inherited from CASActionSet:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
|
| ----------------------------------------------------------------------
| Data and other attributes inherited from CASActionSet:
|
| trait_names = None
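###Markdown
 Because action sets and actions are exposed as ordinary Python attributes of the connection, standard Python introspection works on them as well. The cell below is a minimal sketch (it assumes only the `conn` connection created earlier in this notebook) that lists the action names available on the **simple** action set.
###Code
# List the public attributes of the simple action set; these include the
# action names seen in the help output above (summary, freq, topk, and so on)
[name for name in dir(conn.simple) if not name.startswith('_')]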
###Markdown
Alternatively, you can pass a particular action attribute to `help` rather than the whole action set. This prints the descriptions of the action's parameters along with information about the Python action class.
###Code
help(conn.simple.summary)
###Output
Help on simple.Summary in module swat.cas.actions object:
class simple.Summary(CASAction)
| Generates descriptive statistics of numeric variables such as the sample mean, sample variance, sample size, sum of squares, and so on
|
| Parameters
| ----------
| table : dict or CASTable
| specifies the table name, caslib, and other common parameters.
|
| table.name : string or CASTable
| specifies the name of the table to use.
|
| table.caslib : string, optional
| specifies the caslib containing the table that you want to use
| with the action. By default, the active caslib is used. Specify a
| value only if you need to access a table from a different caslib.
|
| table.where : string, optional
| specifies an expression for subsetting the input data.
|
| table.groupby : list of dicts, optional
| specifies the names of the variables to use for grouping
| results.
|
| table.groupby[*].name : string
| specifies the name for the variable.
|
| table.groupby[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.groupby[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.groupby[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.groupby[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.groupby[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.groupbyfmts : list, optional
| specifies the format to apply to each group-by variable. To
| avoid specifying a format for a group-by variable, use "" (no
| format).
| Default: []
|
| table.orderby : list of dicts, optional
| specifies the variables to use for ordering observations within
| partitions. This parameter applies to partitioned tables or it
| can be combined with groupBy variables when groupByMode is set to
| REDISTRIBUTE.
|
| table.orderby[*].name : string
| specifies the name for the variable.
|
| table.orderby[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.orderby[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.orderby[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.orderby[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.orderby[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.computedvars : list of dicts, optional
| specifies the names of the computed variables to create. Specify
| an expression for each variable in the computedVarsProgram
| parameter.
|
| table.computedvars[*].name : string
| specifies the name for the variable.
|
| table.computedvars[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.computedvars[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.computedvars[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.computedvars[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.computedvars[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.computedvarsprogram : string, optional
| specifies an expression for each computed variable that you
| include in the computedVars parameter.
|
| table.groupbymode : string, optional
| specifies how the server creates groups.
| Default: NOSORT
| Values: NOSORT, REDISTRIBUTE
|
| table.computedondemand : boolean, optional
| when set to True, the computed variables are created when the
| table is loaded instead of when the action begins.
| Default: False
|
| table.singlepass : boolean, optional
| when set to True, the data does not create a transient table in
| the server. Setting this parameter to True can be efficient, but
| the data might not have stable ordering upon repeated runs.
| Default: False
|
| table.importoptions : dict, optional
| specifies the settings for reading a table from a data source.
|
| table.importoptions.filetype : string
| Default: auto
| Values: auto, hdat, csv, delimited, excel, jmp, spss, dta,
| esp, lasr, basesas, mva, xls, fmt
|
| table.ondemand : boolean, optional
| when set to True, table access is less aggressive with virtual
| memory use.
| Default: True
|
| table.vars : list of dicts, optional
| specifies the variables to use in the action.
|
| table.vars[*].name : string
| specifies the name for the variable.
|
| table.vars[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.vars[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.vars[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.vars[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.vars[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| nthreads : int32, optional
| specifies the number of threads to use on each machine in the
| server. By default, the server uses one thread for each CPU that is
| licensed to use SAS software.
| Default: 0
| Note: Value range is 0 <= n <= 64
|
| groupbytable : dict or CASTable, optional
| specifies an input table that contains the groups to use in a
| group-by analysis.
|
| groupbytable.name : string or CASTable
| specifies the name of the table to use.
|
| groupbytable.caslib : string, optional
| specifies the caslib containing the table that you want to use
| with the action. By default, the active caslib is used. Specify a
| value only if you need to access a table from a different caslib.
|
| groupbytable.where : string, optional
| specifies an expression for subsetting the input data.
|
| attributes : list of dicts, optional
|
| attributes[*].name : string
| specifies the name for the variable.
|
| attributes[*].label : string, optional
| specifies the descriptive label for the variable.
|
| attributes[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| attributes[*].format : string, optional
| specifies the format to apply to the variable.
|
| attributes[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| attributes[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| inputs : list of dicts, optional
|
| inputs[*].name : string
| specifies the name for the variable.
|
| inputs[*].label : string, optional
| specifies the descriptive label for the variable.
|
| inputs[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| inputs[*].format : string, optional
| specifies the format to apply to the variable.
|
| inputs[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| inputs[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| groupbylimit : int64, optional
| specifies the maximum number of levels in a group-by set. When the
| server determines this number of levels, the server stops and does
| not return a result. Specify this parameter if you want to avoid
| creating large result sets in group-by operations.
| Note: Value range is 1 <= n < 9223372036854775807
|
| orderby : list of strings, optional
| specifies a list of variables by which to order the result set.
| Default: []
|
| orderbyagg : list, optional
| specifies one or more aggregators by which to order the result set.
| Default: []
| Values: CSS, CV, MAX, MEAN, MIN, N, NMISS, PROBT, STD, STDERR, SUM,
| TSTAT, USS, VAR
|
| orderbydesc : list of strings, optional
| arranges the results in descending order.
| Default: []
|
| orderbygbyraw : boolean, optional
| when set to True, the ordering of the group-by variables is based on
| the raw values of the variables, not the formatted values.
| Default: False
|
| resultlimit : int32, optional
| specifies the maximum size of the result set returned to the client.
| Default: 0
| Note: Value range is 0 <= n <= 2147483647
|
| casout : dict or CASTable, optional
| specifies the settings for an output table.
|
| casout.name : string or CASTable, optional
| specifies the name to associate with the table.
|
| casout.caslib : string, optional
| specifies the name of the caslib to use.
|
| casout.timestamp : string, optional
| specifies the timestamp to apply to the table. Specify the value
| in the form that is appropriate for your session locale.
|
| casout.compress : boolean, optional
| when set to True, data compression is applied to the table.
| Default: False
|
| casout.replace : boolean, optional
| specifies whether to overwrite an existing table with the same
| name.
| Default: False
|
| casout.replication : int32, optional
| specifies the number of copies of the table to make for fault
| tolerance. Larger values result in slower performance and use
| more memory, but provide high availability for data in the event
| of a node failure.
| Default: 1
| Note: Value range is 0 <= n < 2147483647
|
| casout.threadblocksize : int64, optional
| specifies the number of bytes to use for blocks that are read by
| threads. Increase this value only if you have a large table and
| CPU utilization by threads shows thread starvation.
| Note: Value range is 0 <= n < 9223372036854775807
|
| casout.label : string, optional
| specifies the descriptive label to associate with the table.
|
| casout.maxmemsize : int64, optional
| specifies the maximum amount of physical memory, in bytes, to
| allocate for the table. After this threshold is reached, the
| server uses temporary files and operating system facilities for
| memory management.
| Default: 0
|
| casout.promote : boolean, optional
| when set to True, the output table is added with a global scope.
| This enables other sessions to access the table, subject to
| access controls. The target caslib must also have a global scope.
| Default: False
|
| casout.ondemand : boolean, optional
| when set to True, table access is less aggressive with virtual
| memory use.
| Default: True
|
| repeat : boolean, optional
| when set to True, the action is repeated.
| Default: False
|
| freq : string, optional
| specifies a frequency variable.
|
| cialpha : double, optional
| specifies the level of significance for 100*(1-ciAlpha)% confidence
| intervals. The default value of 0.05 results in 95% confidence
| intervals.
| Default: 0.05
| Note: Value range is 0.0 < n < 1.0
|
| citype : string, optional
| specifies the type of confidence interval.
| Default: TWOSIDED
| Values: LEFT, LOWER, RIGHT, TWOSIDED, UPPER
|
| subset : list, optional
| specifies the summary statistics to generate.
| Default: []
| Values: CSS, CV, MAX, MEAN, MIN, N, NMISS, PROBT, STD, STDERR, SUM,
| TSTAT, USS, VAR
|
| weight : string, optional
| specifies a numeric variable whose values weight the values of the
| analysis variables.
|
| Returns
| -------
| Summary object
|
| Method resolution order:
| simple.Summary
| CASAction
| swat.cas.utils.params.ParamManager
| builtins.object
|
| Methods defined here:
|
| __call__(_self_, table=None, nthreads=None, groupbytable=None, attributes=None, inputs=None, groupbylimit=None, orderby=None, orderbyagg=None, orderbydesc=None, orderbygbyraw=None, resultlimit=None, casout=None, repeat=None, freq=None, cialpha=None, citype=None, subset=None, weight=None, **kwargs)
| Generates descriptive statistics of numeric variables such as the sample mean, sample variance, sample size, sum of squares, and so on
|
| Parameters
| ----------
| table : dict or CASTable
| specifies the table name, caslib, and other common parameters.
|
| table.name : string or CASTable
| specifies the name of the table to use.
|
| table.caslib : string, optional
| specifies the caslib containing the table that you want to use
| with the action. By default, the active caslib is used. Specify a
| value only if you need to access a table from a different caslib.
|
| table.where : string, optional
| specifies an expression for subsetting the input data.
|
| table.groupby : list of dicts, optional
| specifies the names of the variables to use for grouping
| results.
|
| table.groupby[*].name : string
| specifies the name for the variable.
|
| table.groupby[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.groupby[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.groupby[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.groupby[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.groupby[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.groupbyfmts : list, optional
| specifies the format to apply to each group-by variable. To
| avoid specifying a format for a group-by variable, use "" (no
| format).
| Default: []
|
| table.orderby : list of dicts, optional
| specifies the variables to use for ordering observations within
| partitions. This parameter applies to partitioned tables or it
| can be combined with groupBy variables when groupByMode is set to
| REDISTRIBUTE.
|
| table.orderby[*].name : string
| specifies the name for the variable.
|
| table.orderby[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.orderby[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.orderby[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.orderby[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.orderby[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.computedvars : list of dicts, optional
| specifies the names of the computed variables to create. Specify
| an expression for each variable in the computedVarsProgram
| parameter.
|
| table.computedvars[*].name : string
| specifies the name for the variable.
|
| table.computedvars[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.computedvars[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.computedvars[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.computedvars[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.computedvars[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.computedvarsprogram : string, optional
| specifies an expression for each computed variable that you
| include in the computedVars parameter.
|
| table.groupbymode : string, optional
| specifies how the server creates groups.
| Default: NOSORT
| Values: NOSORT, REDISTRIBUTE
|
| table.computedondemand : boolean, optional
| when set to True, the computed variables are created when the
| table is loaded instead of when the action begins.
| Default: False
|
| table.singlepass : boolean, optional
| when set to True, the data does not create a transient table in
| the server. Setting this parameter to True can be efficient, but
| the data might not have stable ordering upon repeated runs.
| Default: False
|
| table.importoptions : dict, optional
| specifies the settings for reading a table from a data source.
|
| table.importoptions.filetype : string
| Default: auto
| Values: auto, hdat, csv, delimited, excel, jmp, spss, dta,
| esp, lasr, basesas, mva, xls, fmt
|
| table.ondemand : boolean, optional
| when set to True, table access is less aggressive with virtual
| memory use.
| Default: True
|
| table.vars : list of dicts, optional
| specifies the variables to use in the action.
|
| table.vars[*].name : string
| specifies the name for the variable.
|
| table.vars[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.vars[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.vars[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.vars[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.vars[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| nthreads : int32, optional
| specifies the number of threads to use on each machine in the
| server. By default, the server uses one thread for each CPU that is
| licensed to use SAS software.
| Default: 0
| Note: Value range is 0 <= n <= 64
|
| groupbytable : dict or CASTable, optional
| specifies an input table that contains the groups to use in a
| group-by analysis.
|
| groupbytable.name : string or CASTable
| specifies the name of the table to use.
|
| groupbytable.caslib : string, optional
| specifies the caslib containing the table that you want to use
| with the action. By default, the active caslib is used. Specify a
| value only if you need to access a table from a different caslib.
|
| groupbytable.where : string, optional
| specifies an expression for subsetting the input data.
|
| attributes : list of dicts, optional
|
| attributes[*].name : string
| specifies the name for the variable.
|
| attributes[*].label : string, optional
| specifies the descriptive label for the variable.
|
| attributes[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| attributes[*].format : string, optional
| specifies the format to apply to the variable.
|
| attributes[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| attributes[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| inputs : list of dicts, optional
|
| inputs[*].name : string
| specifies the name for the variable.
|
| inputs[*].label : string, optional
| specifies the descriptive label for the variable.
|
| inputs[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| inputs[*].format : string, optional
| specifies the format to apply to the variable.
|
| inputs[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| inputs[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| groupbylimit : int64, optional
| specifies the maximum number of levels in a group-by set. When the
| server determines this number of levels, the server stops and does
| not return a result. Specify this parameter if you want to avoid
| creating large result sets in group-by operations.
| Note: Value range is 1 <= n < 9223372036854775807
|
| orderby : list of strings, optional
| specifies a list of variables by which to order the result set.
| Default: []
|
| orderbyagg : list, optional
| specifies one or more aggregators by which to order the result set.
| Default: []
| Values: CSS, CV, MAX, MEAN, MIN, N, NMISS, PROBT, STD, STDERR, SUM,
| TSTAT, USS, VAR
|
| orderbydesc : list of strings, optional
| arranges the results in descending order.
| Default: []
|
| orderbygbyraw : boolean, optional
| when set to True, the ordering of the group-by variables is based on
| the raw values of the variables, not the formatted values.
| Default: False
|
| resultlimit : int32, optional
| specifies the maximum size of the result set returned to the client.
| Default: 0
| Note: Value range is 0 <= n <= 2147483647
|
| casout : dict or CASTable, optional
| specifies the settings for an output table.
|
| casout.name : string or CASTable, optional
| specifies the name to associate with the table.
|
| casout.caslib : string, optional
| specifies the name of the caslib to use.
|
| casout.timestamp : string, optional
| specifies the timestamp to apply to the table. Specify the value
| in the form that is appropriate for your session locale.
|
| casout.compress : boolean, optional
| when set to True, data compression is applied to the table.
| Default: False
|
| casout.replace : boolean, optional
| specifies whether to overwrite an existing table with the same
| name.
| Default: False
|
| casout.replication : int32, optional
| specifies the number of copies of the table to make for fault
| tolerance. Larger values result in slower performance and use
| more memory, but provide high availability for data in the event
| of a node failure.
| Default: 1
| Note: Value range is 0 <= n < 2147483647
|
| casout.threadblocksize : int64, optional
| specifies the number of bytes to use for blocks that are read by
| threads. Increase this value only if you have a large table and
| CPU utilization by threads shows thread starvation.
| Note: Value range is 0 <= n < 9223372036854775807
|
| casout.label : string, optional
| specifies the descriptive label to associate with the table.
|
| casout.maxmemsize : int64, optional
| specifies the maximum amount of physical memory, in bytes, to
| allocate for the table. After this threshold is reached, the
| server uses temporary files and operating system facilities for
| memory management.
| Default: 0
|
| casout.promote : boolean, optional
| when set to True, the output table is added with a global scope.
| This enables other sessions to access the table, subject to
| access controls. The target caslib must also have a global scope.
| Default: False
|
| casout.ondemand : boolean, optional
| when set to True, table access is less aggressive with virtual
| memory use.
| Default: True
|
| repeat : boolean, optional
| when set to True, the action is repeated.
| Default: False
|
| freq : string, optional
| specifies a frequency variable.
|
| cialpha : double, optional
| specifies the level of significance for 100*(1-ciAlpha)% confidence
| intervals. The default value of 0.05 results in 95% confidence
| intervals.
| Default: 0.05
| Note: Value range is 0.0 < n < 1.0
|
| citype : string, optional
| specifies the type of confidence interval.
| Default: TWOSIDED
| Values: LEFT, LOWER, RIGHT, TWOSIDED, UPPER
|
| subset : list, optional
| specifies the summary statistics to generate.
| Default: []
| Values: CSS, CV, MAX, MEAN, MIN, N, NMISS, PROBT, STD, STDERR, SUM,
| TSTAT, USS, VAR
|
| weight : string, optional
| specifies a numeric variable whose values weight the values of the
| analysis variables.
|
| Returns
| -------
| CASResults object
|
| __init__(_self_, table=None, nthreads=None, groupbytable=None, attributes=None, inputs=None, groupbylimit=None, orderby=None, orderbyagg=None, orderbydesc=None, orderbygbyraw=None, resultlimit=None, casout=None, repeat=None, freq=None, cialpha=None, citype=None, subset=None, weight=None, **kwargs)
| Generates descriptive statistics of numeric variables such as the sample mean, sample variance, sample size, sum of squares, and so on
|
| Parameters
| ----------
| table : dict or CASTable
| specifies the table name, caslib, and other common parameters.
|
| table.name : string or CASTable
| specifies the name of the table to use.
|
| table.caslib : string, optional
| specifies the caslib containing the table that you want to use
| with the action. By default, the active caslib is used. Specify a
| value only if you need to access a table from a different caslib.
|
| table.where : string, optional
| specifies an expression for subsetting the input data.
|
| table.groupby : list of dicts, optional
| specifies the names of the variables to use for grouping
| results.
|
| table.groupby[*].name : string
| specifies the name for the variable.
|
| table.groupby[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.groupby[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.groupby[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.groupby[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.groupby[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.groupbyfmts : list, optional
| specifies the format to apply to each group-by variable. To
| avoid specifying a format for a group-by variable, use "" (no
| format).
| Default: []
|
| table.orderby : list of dicts, optional
| specifies the variables to use for ordering observations within
| partitions. This parameter applies to partitioned tables or it
| can be combined with groupBy variables when groupByMode is set to
| REDISTRIBUTE.
|
| table.orderby[*].name : string
| specifies the name for the variable.
|
| table.orderby[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.orderby[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.orderby[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.orderby[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.orderby[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.computedvars : list of dicts, optional
| specifies the names of the computed variables to create. Specify
| an expression for each variable in the computedVarsProgram
| parameter.
|
| table.computedvars[*].name : string
| specifies the name for the variable.
|
| table.computedvars[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.computedvars[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.computedvars[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.computedvars[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.computedvars[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.computedvarsprogram : string, optional
| specifies an expression for each computed variable that you
| include in the computedVars parameter.
|
| table.groupbymode : string, optional
| specifies how the server creates groups.
| Default: NOSORT
| Values: NOSORT, REDISTRIBUTE
|
| table.computedondemand : boolean, optional
| when set to True, the computed variables are created when the
| table is loaded instead of when the action begins.
| Default: False
|
| table.singlepass : boolean, optional
| when set to True, the data does not create a transient table in
| the server. Setting this parameter to True can be efficient, but
| the data might not have stable ordering upon repeated runs.
| Default: False
|
| table.importoptions : dict, optional
| specifies the settings for reading a table from a data source.
|
| table.importoptions.filetype : string
| Default: auto
| Values: auto, hdat, csv, delimited, excel, jmp, spss, dta,
| esp, lasr, basesas, mva, xls, fmt
|
| table.ondemand : boolean, optional
| when set to True, table access is less aggressive with virtual
| memory use.
| Default: True
|
| table.vars : list of dicts, optional
| specifies the variables to use in the action.
|
| table.vars[*].name : string
| specifies the name for the variable.
|
| table.vars[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.vars[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.vars[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.vars[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.vars[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| nthreads : int32, optional
| specifies the number of threads to use on each machine in the
| server. By default, the server uses one thread for each CPU that is
| licensed to use SAS software.
| Default: 0
| Note: Value range is 0 <= n <= 64
|
| groupbytable : dict or CASTable, optional
| specifies an input table that contains the groups to use in a
| group-by analysis.
|
| groupbytable.name : string or CASTable
| specifies the name of the table to use.
|
| groupbytable.caslib : string, optional
| specifies the caslib containing the table that you want to use
| with the action. By default, the active caslib is used. Specify a
| value only if you need to access a table from a different caslib.
|
| groupbytable.where : string, optional
| specifies an expression for subsetting the input data.
|
| attributes : list of dicts, optional
|
| attributes[*].name : string
| specifies the name for the variable.
|
| attributes[*].label : string, optional
| specifies the descriptive label for the variable.
|
| attributes[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| attributes[*].format : string, optional
| specifies the format to apply to the variable.
|
| attributes[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| attributes[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| inputs : list of dicts, optional
|
| inputs[*].name : string
| specifies the name for the variable.
|
| inputs[*].label : string, optional
| specifies the descriptive label for the variable.
|
| inputs[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| inputs[*].format : string, optional
| specifies the format to apply to the variable.
|
| inputs[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| inputs[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| groupbylimit : int64, optional
| specifies the maximum number of levels in a group-by set. When the
| server determines this number of levels, the server stops and does
| not return a result. Specify this parameter if you want to avoid
| creating large result sets in group-by operations.
| Note: Value range is 1 <= n < 9223372036854775807
|
| orderby : list of strings, optional
| specifies a list of variables by which to order the result set.
| Default: []
|
| orderbyagg : list, optional
| specifies one or more aggregators by which to order the result set.
| Default: []
| Values: CSS, CV, MAX, MEAN, MIN, N, NMISS, PROBT, STD, STDERR, SUM,
| TSTAT, USS, VAR
|
| orderbydesc : list of strings, optional
| arranges the results in descending order.
| Default: []
|
| orderbygbyraw : boolean, optional
| when set to True, the ordering of the group-by variables is based on
| the raw values of the variables, not the formatted values.
| Default: False
|
| resultlimit : int32, optional
| specifies the maximum size of the result set returned to the client.
| Default: 0
| Note: Value range is 0 <= n <= 2147483647
|
| casout : dict or CASTable, optional
| specifies the settings for an output table.
|
| casout.name : string or CASTable, optional
| specifies the name to associate with the table.
|
| casout.caslib : string, optional
| specifies the name of the caslib to use.
|
| casout.timestamp : string, optional
| specifies the timestamp to apply to the table. Specify the value
| in the form that is appropriate for your session locale.
|
| casout.compress : boolean, optional
| when set to True, data compression is applied to the table.
| Default: False
|
| casout.replace : boolean, optional
| specifies whether to overwrite an existing table with the same
| name.
| Default: False
|
| casout.replication : int32, optional
| specifies the number of copies of the table to make for fault
| tolerance. Larger values result in slower performance and use
| more memory, but provide high availability for data in the event
| of a node failure.
| Default: 1
| Note: Value range is 0 <= n < 2147483647
|
| casout.threadblocksize : int64, optional
| specifies the number of bytes to use for blocks that are read by
| threads. Increase this value only if you have a large table and
| CPU utilization by threads shows thread starvation.
| Note: Value range is 0 <= n < 9223372036854775807
|
| casout.label : string, optional
| specifies the descriptive label to associate with the table.
|
| casout.maxmemsize : int64, optional
| specifies the maximum amount of physical memory, in bytes, to
| allocate for the table. After this threshold is reached, the
| server uses temporary files and operating system facilities for
| memory management.
| Default: 0
|
| casout.promote : boolean, optional
| when set to True, the output table is added with a global scope.
| This enables other sessions to access the table, subject to
| access controls. The target caslib must also have a global scope.
| Default: False
|
| casout.ondemand : boolean, optional
| when set to True, table access is less aggressive with virtual
| memory use.
| Default: True
|
| repeat : boolean, optional
| when set to True, the action is repeated.
| Default: False
|
| freq : string, optional
| specifies a frequency variable.
|
| cialpha : double, optional
| specifies the level of significance for 100*(1-ciAlpha)% confidence
| intervals. The default value of 0.05 results in 95% confidence
| intervals.
| Default: 0.05
| Note: Value range is 0.0 < n < 1.0
|
| citype : string, optional
| specifies the type of confidence interval.
| Default: TWOSIDED
| Values: LEFT, LOWER, RIGHT, TWOSIDED, UPPER
|
| subset : list, optional
| specifies the summary statistics to generate.
| Default: []
| Values: CSS, CV, MAX, MEAN, MIN, N, NMISS, PROBT, STD, STDERR, SUM,
| TSTAT, USS, VAR
|
| weight : string, optional
| specifies a numeric variable whose values weight the values of the
| analysis variables.
|
| Returns
| -------
| Summary object
|
| get_param(_self_, key)
| Get the value of an action parameter
|
| Parameters
| ----------
| key : string
| The fully-qualified name (e.g., table.name) of the parameter to retrieve.
|
| Valid Parameters
| ----------------
| table : dict or CASTable
| specifies the table name, caslib, and other common parameters.
|
| table.name : string or CASTable
| specifies the name of the table to use.
|
| table.caslib : string, optional
| specifies the caslib containing the table that you want to use
| with the action. By default, the active caslib is used. Specify a
| value only if you need to access a table from a different caslib.
|
| table.where : string, optional
| specifies an expression for subsetting the input data.
|
| table.groupby : list of dicts, optional
| specifies the names of the variables to use for grouping
| results.
|
| table.groupby[*].name : string
| specifies the name for the variable.
|
| table.groupby[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.groupby[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.groupby[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.groupby[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.groupby[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.groupbyfmts : list, optional
| specifies the format to apply to each group-by variable. To
| avoid specifying a format for a group-by variable, use "" (no
| format).
| Default: []
|
| table.orderby : list of dicts, optional
| specifies the variables to use for ordering observations within
| partitions. This parameter applies to partitioned tables or it
| can be combined with groupBy variables when groupByMode is set to
| REDISTRIBUTE.
|
| table.orderby[*].name : string
| specifies the name for the variable.
|
| table.orderby[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.orderby[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.orderby[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.orderby[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.orderby[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.computedvars : list of dicts, optional
| specifies the names of the computed variables to create. Specify
| an expression for each variable in the computedVarsProgram
| parameter.
|
| table.computedvars[*].name : string
| specifies the name for the variable.
|
| table.computedvars[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.computedvars[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.computedvars[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.computedvars[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.computedvars[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.computedvarsprogram : string, optional
| specifies an expression for each computed variable that you
| include in the computedVars parameter.
|
| table.groupbymode : string, optional
| specifies how the server creates groups.
| Default: NOSORT
| Values: NOSORT, REDISTRIBUTE
|
| table.computedondemand : boolean, optional
| when set to True, the computed variables are created when the
| table is loaded instead of when the action begins.
| Default: False
|
| table.singlepass : boolean, optional
| when set to True, the data does not create a transient table in
| the server. Setting this parameter to True can be efficient, but
| the data might not have stable ordering upon repeated runs.
| Default: False
|
| table.importoptions : dict, optional
| specifies the settings for reading a table from a data source.
|
| table.importoptions.filetype : string
| Default: auto
| Values: auto, hdat, csv, delimited, excel, jmp, spss, dta,
| esp, lasr, basesas, mva, xls, fmt
|
| table.ondemand : boolean, optional
| when set to True, table access is less aggressive with virtual
| memory use.
| Default: True
|
| table.vars : list of dicts, optional
| specifies the variables to use in the action.
|
| table.vars[*].name : string
| specifies the name for the variable.
|
| table.vars[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.vars[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.vars[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.vars[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.vars[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| nthreads : int32, optional
| specifies the number of threads to use on each machine in the
| server. By default, the server uses one thread for each CPU that is
| licensed to use SAS software.
| Default: 0
| Note: Value range is 0 <= n <= 64
|
| groupbytable : dict or CASTable, optional
| specifies an input table that contains the groups to use in a
| group-by analysis.
|
| groupbytable.name : string or CASTable
| specifies the name of the table to use.
|
| groupbytable.caslib : string, optional
| specifies the caslib containing the table that you want to use
| with the action. By default, the active caslib is used. Specify a
| value only if you need to access a table from a different caslib.
|
| groupbytable.where : string, optional
| specifies an expression for subsetting the input data.
|
| attributes : list of dicts, optional
|
| attributes[*].name : string
| specifies the name for the variable.
|
| attributes[*].label : string, optional
| specifies the descriptive label for the variable.
|
| attributes[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| attributes[*].format : string, optional
| specifies the format to apply to the variable.
|
| attributes[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| attributes[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| inputs : list of dicts, optional
|
| inputs[*].name : string
| specifies the name for the variable.
|
| inputs[*].label : string, optional
| specifies the descriptive label for the variable.
|
| inputs[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| inputs[*].format : string, optional
| specifies the format to apply to the variable.
|
| inputs[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| inputs[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| groupbylimit : int64, optional
| specifies the maximum number of levels in a group-by set. When the
| server determines this number of levels, the server stops and does
| not return a result. Specify this parameter if you want to avoid
| creating large result sets in group-by operations.
| Note: Value range is 1 <= n < 9223372036854775807
|
| orderby : list of strings, optional
| specifies a list of variables by which to order the result set.
| Default: []
|
| orderbyagg : list, optional
| specifies one or more aggregators by which to order the result set.
| Default: []
| Values: CSS, CV, MAX, MEAN, MIN, N, NMISS, PROBT, STD, STDERR, SUM,
| TSTAT, USS, VAR
|
| orderbydesc : list of strings, optional
| arranges the results in descending order.
| Default: []
|
| orderbygbyraw : boolean, optional
| when set to True, the ordering of the group-by variables is based on
| the raw values of the variables, not the formatted values.
| Default: False
|
| resultlimit : int32, optional
| specifies the maximum size of the result set returned to the client.
| Default: 0
| Note: Value range is 0 <= n <= 2147483647
|
| casout : dict or CASTable, optional
| specifies the settings for an output table.
|
| casout.name : string or CASTable, optional
| specifies the name to associate with the table.
|
| casout.caslib : string, optional
| specifies the name of the caslib to use.
|
| casout.timestamp : string, optional
| specifies the timestamp to apply to the table. Specify the value
| in the form that is appropriate for your session locale.
|
| casout.compress : boolean, optional
| when set to True, data compression is applied to the table.
| Default: False
|
| casout.replace : boolean, optional
| specifies whether to overwrite an existing table with the same
| name.
| Default: False
|
| casout.replication : int32, optional
| specifies the number of copies of the table to make for fault
| tolerance. Larger values result in slower performance and use
| more memory, but provide high availability for data in the event
| of a node failure.
| Default: 1
| Note: Value range is 0 <= n < 2147483647
|
| casout.threadblocksize : int64, optional
| specifies the number of bytes to use for blocks that are read by
| threads. Increase this value only if you have a large table and
| CPU utilization by threads shows thread starvation.
| Note: Value range is 0 <= n < 9223372036854775807
|
| casout.label : string, optional
| specifies the descriptive label to associate with the table.
|
| casout.maxmemsize : int64, optional
| specifies the maximum amount of physical memory, in bytes, to
| allocate for the table. After this threshold is reached, the
| server uses temporary files and operating system facilities for
| memory management.
| Default: 0
|
| casout.promote : boolean, optional
| when set to True, the output table is added with a global scope.
| This enables other sessions to access the table, subject to
| access controls. The target caslib must also have a global scope.
| Default: False
|
| casout.ondemand : boolean, optional
| when set to True, table access is less aggressive with virtual
| memory use.
| Default: True
|
| repeat : boolean, optional
| when set to True, the action is repeated.
| Default: False
|
| freq : string, optional
| specifies a frequency variable.
|
| cialpha : double, optional
| specifies the level of significance for 100*(1-ciAlpha)% confidence
| intervals. The default value of 0.05 results in 95% confidence
| intervals.
| Default: 0.05
| Note: Value range is 0.0 < n < 1.0
|
| citype : string, optional
| specifies the type of confidence interval.
| Default: TWOSIDED
| Values: LEFT, LOWER, RIGHT, TWOSIDED, UPPER
|
| subset : list, optional
| specifies the summary statistics to generate.
| Default: []
| Values: CSS, CV, MAX, MEAN, MIN, N, NMISS, PROBT, STD, STDERR, SUM,
| TSTAT, USS, VAR
|
| weight : string, optional
| specifies a numeric variable whose values weight the values of the
| analysis variables.
|
| Returns
| -------
| any
| The value of the speciifed parameter.
|
| get_params(_self_, *keys)
| Get the value of one or more action parameters
|
| Parameters
| ----------
| *keys : one or more strings
| The fully-qualified names (e.g., table.name) of the parameters to retrieve.
|
| Valid Parameters
| ----------------
| table : dict or CASTable
| specifies the table name, caslib, and other common parameters.
|
| table.name : string or CASTable
| specifies the name of the table to use.
|
| table.caslib : string, optional
| specifies the caslib containing the table that you want to use
| with the action. By default, the active caslib is used. Specify a
| value only if you need to access a table from a different caslib.
|
| table.where : string, optional
| specifies an expression for subsetting the input data.
|
| table.groupby : list of dicts, optional
| specifies the names of the variables to use for grouping
| results.
|
| table.groupby[*].name : string
| specifies the name for the variable.
|
| table.groupby[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.groupby[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.groupby[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.groupby[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.groupby[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.groupbyfmts : list, optional
| specifies the format to apply to each group-by variable. To
| avoid specifying a format for a group-by variable, use "" (no
| format).
| Default: []
|
| table.orderby : list of dicts, optional
| specifies the variables to use for ordering observations within
| partitions. This parameter applies to partitioned tables or it
| can be combined with groupBy variables when groupByMode is set to
| REDISTRIBUTE.
|
| table.orderby[*].name : string
| specifies the name for the variable.
|
| table.orderby[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.orderby[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.orderby[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.orderby[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.orderby[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.computedvars : list of dicts, optional
| specifies the names of the computed variables to create. Specify
| an expression for each variable in the computedVarsProgram
| parameter.
|
| table.computedvars[*].name : string
| specifies the name for the variable.
|
| table.computedvars[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.computedvars[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.computedvars[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.computedvars[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.computedvars[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.computedvarsprogram : string, optional
| specifies an expression for each computed variable that you
| include in the computedVars parameter.
|
| table.groupbymode : string, optional
| specifies how the server creates groups.
| Default: NOSORT
| Values: NOSORT, REDISTRIBUTE
|
| table.computedondemand : boolean, optional
| when set to True, the computed variables are created when the
| table is loaded instead of when the action begins.
| Default: False
|
| table.singlepass : boolean, optional
| when set to True, the data does not create a transient table in
| the server. Setting this parameter to True can be efficient, but
| the data might not have stable ordering upon repeated runs.
| Default: False
|
| table.importoptions : dict, optional
| specifies the settings for reading a table from a data source.
|
| table.importoptions.filetype : string
| Default: auto
| Values: auto, hdat, csv, delimited, excel, jmp, spss, dta,
| esp, lasr, basesas, mva, xls, fmt
|
| table.ondemand : boolean, optional
| when set to True, table access is less aggressive with virtual
| memory use.
| Default: True
|
| table.vars : list of dicts, optional
| specifies the variables to use in the action.
|
| table.vars[*].name : string
| specifies the name for the variable.
|
| table.vars[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.vars[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.vars[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.vars[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.vars[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| nthreads : int32, optional
| specifies the number of threads to use on each machine in the
| server. By default, the server uses one thread for each CPU that is
| licensed to use SAS software.
| Default: 0
| Note: Value range is 0 <= n <= 64
|
| groupbytable : dict or CASTable, optional
| specifies an input table that contains the groups to use in a
| group-by analysis.
|
| groupbytable.name : string or CASTable
| specifies the name of the table to use.
|
| groupbytable.caslib : string, optional
| specifies the caslib containing the table that you want to use
| with the action. By default, the active caslib is used. Specify a
| value only if you need to access a table from a different caslib.
|
| groupbytable.where : string, optional
| specifies an expression for subsetting the input data.
|
| attributes : list of dicts, optional
|
| attributes[*].name : string
| specifies the name for the variable.
|
| attributes[*].label : string, optional
| specifies the descriptive label for the variable.
|
| attributes[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| attributes[*].format : string, optional
| specifies the format to apply to the variable.
|
| attributes[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| attributes[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| inputs : list of dicts, optional
|
| inputs[*].name : string
| specifies the name for the variable.
|
| inputs[*].label : string, optional
| specifies the descriptive label for the variable.
|
| inputs[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| inputs[*].format : string, optional
| specifies the format to apply to the variable.
|
| inputs[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| inputs[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| groupbylimit : int64, optional
| specifies the maximum number of levels in a group-by set. When the
| server determines this number of levels, the server stops and does
| not return a result. Specify this parameter if you want to avoid
| creating large result sets in group-by operations.
| Note: Value range is 1 <= n < 9223372036854775807
|
| orderby : list of strings, optional
| specifies a list of variables by which to order the result set.
| Default: []
|
| orderbyagg : list, optional
| specifies one or more aggregators by which to order the result set.
| Default: []
| Values: CSS, CV, MAX, MEAN, MIN, N, NMISS, PROBT, STD, STDERR, SUM,
| TSTAT, USS, VAR
|
| orderbydesc : list of strings, optional
| arranges the results in descending order.
| Default: []
|
| orderbygbyraw : boolean, optional
| when set to True, the ordering of the group-by variables is based on
| the raw values of the variables, not the formatted values.
| Default: False
|
| resultlimit : int32, optional
| specifies the maximum size of the result set returned to the client.
| Default: 0
| Note: Value range is 0 <= n <= 2147483647
|
| casout : dict or CASTable, optional
| specifies the settings for an output table.
|
| casout.name : string or CASTable, optional
| specifies the name to associate with the table.
|
| casout.caslib : string, optional
| specifies the name of the caslib to use.
|
| casout.timestamp : string, optional
| specifies the timestamp to apply to the table. Specify the value
| in the form that is appropriate for your session locale.
|
| casout.compress : boolean, optional
| when set to True, data compression is applied to the table.
| Default: False
|
| casout.replace : boolean, optional
| specifies whether to overwrite an existing table with the same
| name.
| Default: False
|
| casout.replication : int32, optional
| specifies the number of copies of the table to make for fault
| tolerance. Larger values result in slower performance and use
| more memory, but provide high availability for data in the event
| of a node failure.
| Default: 1
| Note: Value range is 0 <= n < 2147483647
|
| casout.threadblocksize : int64, optional
| specifies the number of bytes to use for blocks that are read by
| threads. Increase this value only if you have a large table and
| CPU utilization by threads shows thread starvation.
| Note: Value range is 0 <= n < 9223372036854775807
|
| casout.label : string, optional
| specifies the descriptive label to associate with the table.
|
| casout.maxmemsize : int64, optional
| specifies the maximum amount of physical memory, in bytes, to
| allocate for the table. After this threshold is reached, the
| server uses temporary files and operating system facilities for
| memory management.
| Default: 0
|
| casout.promote : boolean, optional
| when set to True, the output table is added with a global scope.
| This enables other sessions to access the table, subject to
| access controls. The target caslib must also have a global scope.
| Default: False
|
| casout.ondemand : boolean, optional
| when set to True, table access is less aggressive with virtual
| memory use.
| Default: True
|
| repeat : boolean, optional
| when set to True, the action is repeated.
| Default: False
|
| freq : string, optional
| specifies a frequency variable.
|
| cialpha : double, optional
| specifies the level of significance for 100*(1-ciAlpha)% confidence
| intervals. The default value of 0.05 results in 95% confidence
| intervals.
| Default: 0.05
| Note: Value range is 0.0 < n < 1.0
|
| citype : string, optional
| specifies the type of confidence interval.
| Default: TWOSIDED
| Values: LEFT, LOWER, RIGHT, TWOSIDED, UPPER
|
| subset : list, optional
| specifies the summary statistics to generate.
| Default: []
| Values: CSS, CV, MAX, MEAN, MIN, N, NMISS, PROBT, STD, STDERR, SUM,
| TSTAT, USS, VAR
|
| weight : string, optional
| specifies a numeric variable whose values weight the values of the
| analysis variables.
|
| Returns
| -------
| dict
| A dictionary of key value pairs containing the requested parameters.
|
| set_param(_self_, *args, **kwargs)
| Set one or more action parameters
|
| Parameters
| ----------
| *args : string / any pairs, optional
| Parameters can be specified as fully-qualified names (e.g, table.name)
| and values as subsequent arguments. Any number of name / any pairs
| can be specified.
| **kwargs : any, optional
| Parameters can be specified as any number of keyword arguments.
|
| Examples
| --------
| #
| # String / any pairs
| #
| > summ = s.simple.Sumamry()
| > summ.set_param('table.name', 'iris',
| 'table.singlepass', True,
| 'casout.name', 'iris_summary')
| > print(summ)
| ?.simple.Summary(table={'name': 'iris', 'singlepass': True},
| casout={'name': 'iris_summary'})
|
| #
| # Keywords
| #
| > summ.set_param(casout=dict(name='iris_out'))
| > print(summ)
| ?.simple.Summary(table={'name': 'iris', 'singlepass': True},
| casout={'name': 'iris_out'})
|
| Valid Parameters
| ----------------
| table : dict or CASTable
| specifies the table name, caslib, and other common parameters.
|
| table.name : string or CASTable
| specifies the name of the table to use.
|
| table.caslib : string, optional
| specifies the caslib containing the table that you want to use
| with the action. By default, the active caslib is used. Specify a
| value only if you need to access a table from a different caslib.
|
| table.where : string, optional
| specifies an expression for subsetting the input data.
|
| table.groupby : list of dicts, optional
| specifies the names of the variables to use for grouping
| results.
|
| table.groupby[*].name : string
| specifies the name for the variable.
|
| table.groupby[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.groupby[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.groupby[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.groupby[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.groupby[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.groupbyfmts : list, optional
| specifies the format to apply to each group-by variable. To
| avoid specifying a format for a group-by variable, use "" (no
| format).
| Default: []
|
| table.orderby : list of dicts, optional
| specifies the variables to use for ordering observations within
| partitions. This parameter applies to partitioned tables or it
| can be combined with groupBy variables when groupByMode is set to
| REDISTRIBUTE.
|
| table.orderby[*].name : string
| specifies the name for the variable.
|
| table.orderby[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.orderby[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.orderby[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.orderby[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.orderby[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.computedvars : list of dicts, optional
| specifies the names of the computed variables to create. Specify
| an expression for each variable in the computedVarsProgram
| parameter.
|
| table.computedvars[*].name : string
| specifies the name for the variable.
|
| table.computedvars[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.computedvars[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.computedvars[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.computedvars[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.computedvars[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.computedvarsprogram : string, optional
| specifies an expression for each computed variable that you
| include in the computedVars parameter.
|
| table.groupbymode : string, optional
| specifies how the server creates groups.
| Default: NOSORT
| Values: NOSORT, REDISTRIBUTE
|
| table.computedondemand : boolean, optional
| when set to True, the computed variables are created when the
| table is loaded instead of when the action begins.
| Default: False
|
| table.singlepass : boolean, optional
| when set to True, the data does not create a transient table in
| the server. Setting this parameter to True can be efficient, but
| the data might not have stable ordering upon repeated runs.
| Default: False
|
| table.importoptions : dict, optional
| specifies the settings for reading a table from a data source.
|
| table.importoptions.filetype : string
| Default: auto
| Values: auto, hdat, csv, delimited, excel, jmp, spss, dta,
| esp, lasr, basesas, mva, xls, fmt
|
| table.ondemand : boolean, optional
| when set to True, table access is less aggressive with virtual
| memory use.
| Default: True
|
| table.vars : list of dicts, optional
| specifies the variables to use in the action.
|
| table.vars[*].name : string
| specifies the name for the variable.
|
| table.vars[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.vars[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.vars[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.vars[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.vars[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| nthreads : int32, optional
| specifies the number of threads to use on each machine in the
| server. By default, the server uses one thread for each CPU that is
| licensed to use SAS software.
| Default: 0
| Note: Value range is 0 <= n <= 64
|
| groupbytable : dict or CASTable, optional
| specifies an input table that contains the groups to use in a
| group-by analysis.
|
| groupbytable.name : string or CASTable
| specifies the name of the table to use.
|
| groupbytable.caslib : string, optional
| specifies the caslib containing the table that you want to use
| with the action. By default, the active caslib is used. Specify a
| value only if you need to access a table from a different caslib.
|
| groupbytable.where : string, optional
| specifies an expression for subsetting the input data.
|
| attributes : list of dicts, optional
|
| attributes[*].name : string
| specifies the name for the variable.
|
| attributes[*].label : string, optional
| specifies the descriptive label for the variable.
|
| attributes[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| attributes[*].format : string, optional
| specifies the format to apply to the variable.
|
| attributes[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| attributes[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| inputs : list of dicts, optional
|
| inputs[*].name : string
| specifies the name for the variable.
|
| inputs[*].label : string, optional
| specifies the descriptive label for the variable.
|
| inputs[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| inputs[*].format : string, optional
| specifies the format to apply to the variable.
|
| inputs[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| inputs[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| groupbylimit : int64, optional
| specifies the maximum number of levels in a group-by set. When the
| server determines this number of levels, the server stops and does
| not return a result. Specify this parameter if you want to avoid
| creating large result sets in group-by operations.
| Note: Value range is 1 <= n < 9223372036854775807
|
| orderby : list of strings, optional
| specifies a list of variables by which to order the result set.
| Default: []
|
| orderbyagg : list, optional
| specifies one or more aggregators by which to order the result set.
| Default: []
| Values: CSS, CV, MAX, MEAN, MIN, N, NMISS, PROBT, STD, STDERR, SUM,
| TSTAT, USS, VAR
|
| orderbydesc : list of strings, optional
| arranges the results in descending order.
| Default: []
|
| orderbygbyraw : boolean, optional
| when set to True, the ordering of the group-by variables is based on
| the raw values of the variables, not the formatted values.
| Default: False
|
| resultlimit : int32, optional
| specifies the maximum size of the result set returned to the client.
| Default: 0
| Note: Value range is 0 <= n <= 2147483647
|
| casout : dict or CASTable, optional
| specifies the settings for an output table.
|
| casout.name : string or CASTable, optional
| specifies the name to associate with the table.
|
| casout.caslib : string, optional
| specifies the name of the caslib to use.
|
| casout.timestamp : string, optional
| specifies the timestamp to apply to the table. Specify the value
| in the form that is appropriate for your session locale.
|
| casout.compress : boolean, optional
| when set to True, data compression is applied to the table.
| Default: False
|
| casout.replace : boolean, optional
| specifies whether to overwrite an existing table with the same
| name.
| Default: False
|
| casout.replication : int32, optional
| specifies the number of copies of the table to make for fault
| tolerance. Larger values result in slower performance and use
| more memory, but provide high availability for data in the event
| of a node failure.
| Default: 1
| Note: Value range is 0 <= n < 2147483647
|
| casout.threadblocksize : int64, optional
| specifies the number of bytes to use for blocks that are read by
| threads. Increase this value only if you have a large table and
| CPU utilization by threads shows thread starvation.
| Note: Value range is 0 <= n < 9223372036854775807
|
| casout.label : string, optional
| specifies the descriptive label to associate with the table.
|
| casout.maxmemsize : int64, optional
| specifies the maximum amount of physical memory, in bytes, to
| allocate for the table. After this threshold is reached, the
| server uses temporary files and operating system facilities for
| memory management.
| Default: 0
|
| casout.promote : boolean, optional
| when set to True, the output table is added with a global scope.
| This enables other sessions to access the table, subject to
| access controls. The target caslib must also have a global scope.
| Default: False
|
| casout.ondemand : boolean, optional
| when set to True, table access is less aggressive with virtual
| memory use.
| Default: True
|
| repeat : boolean, optional
| when set to True, the action is repeated.
| Default: False
|
| freq : string, optional
| specifies a frequency variable.
|
| cialpha : double, optional
| specifies the level of significance for 100*(1-ciAlpha)% confidence
| intervals. The default value of 0.05 results in 95% confidence
| intervals.
| Default: 0.05
| Note: Value range is 0.0 < n < 1.0
|
| citype : string, optional
| specifies the type of confidence interval.
| Default: TWOSIDED
| Values: LEFT, LOWER, RIGHT, TWOSIDED, UPPER
|
| subset : list, optional
| specifies the summary statistics to generate.
| Default: []
| Values: CSS, CV, MAX, MEAN, MIN, N, NMISS, PROBT, STD, STDERR, SUM,
| TSTAT, USS, VAR
|
| weight : string, optional
| specifies a numeric variable whose values weight the values of the
| analysis variables.
|
| Returns
| -------
| None
|
| set_params(_self_, *args, **kwargs)
| Set one or more action parameters
|
| Parameters
| ----------
| *args : string / any pairs, optional
| Parameters can be specified as fully-qualified names (e.g, table.name)
| and values as subsequent arguments. Any number of name / any pairs
| can be specified.
| **kwargs : any, optional
| Parameters can be specified as any number of keyword arguments.
|
| Examples
| --------
| #
| # String / any pairs
| #
| > summ = s.simple.Sumamry()
| > summ.set_param('table.name', 'iris',
| 'table.singlepass', True,
| 'casout.name', 'iris_summary')
| > print(summ)
| ?.simple.Summary(table={'name': 'iris', 'singlepass': True},
| casout={'name': 'iris_summary'})
|
| #
| # Keywords
| #
| > summ.set_param(casout=dict(name='iris_out'))
| > print(summ)
| ?.simple.Summary(table={'name': 'iris', 'singlepass': True},
| casout={'name': 'iris_out'})
|
| Valid Parameters
| ----------------
| table : dict or CASTable
| specifies the table name, caslib, and other common parameters.
|
| table.name : string or CASTable
| specifies the name of the table to use.
|
| table.caslib : string, optional
| specifies the caslib containing the table that you want to use
| with the action. By default, the active caslib is used. Specify a
| value only if you need to access a table from a different caslib.
|
| table.where : string, optional
| specifies an expression for subsetting the input data.
|
| table.groupby : list of dicts, optional
| specifies the names of the variables to use for grouping
| results.
|
| table.groupby[*].name : string
| specifies the name for the variable.
|
| table.groupby[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.groupby[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.groupby[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.groupby[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.groupby[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.groupbyfmts : list, optional
| specifies the format to apply to each group-by variable. To
| avoid specifying a format for a group-by variable, use "" (no
| format).
| Default: []
|
| table.orderby : list of dicts, optional
| specifies the variables to use for ordering observations within
| partitions. This parameter applies to partitioned tables or it
| can be combined with groupBy variables when groupByMode is set to
| REDISTRIBUTE.
|
| table.orderby[*].name : string
| specifies the name for the variable.
|
| table.orderby[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.orderby[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.orderby[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.orderby[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.orderby[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.computedvars : list of dicts, optional
| specifies the names of the computed variables to create. Specify
| an expression for each variable in the computedVarsProgram
| parameter.
|
| table.computedvars[*].name : string
| specifies the name for the variable.
|
| table.computedvars[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.computedvars[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.computedvars[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.computedvars[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.computedvars[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| table.computedvarsprogram : string, optional
| specifies an expression for each computed variable that you
| include in the computedVars parameter.
|
| table.groupbymode : string, optional
| specifies how the server creates groups.
| Default: NOSORT
| Values: NOSORT, REDISTRIBUTE
|
| table.computedondemand : boolean, optional
| when set to True, the computed variables are created when the
| table is loaded instead of when the action begins.
| Default: False
|
| table.singlepass : boolean, optional
| when set to True, the data does not create a transient table in
| the server. Setting this parameter to True can be efficient, but
| the data might not have stable ordering upon repeated runs.
| Default: False
|
| table.importoptions : dict, optional
| specifies the settings for reading a table from a data source.
|
| table.importoptions.filetype : string
| Default: auto
| Values: auto, hdat, csv, delimited, excel, jmp, spss, dta,
| esp, lasr, basesas, mva, xls, fmt
|
| table.ondemand : boolean, optional
| when set to True, table access is less aggressive with virtual
| memory use.
| Default: True
|
| table.vars : list of dicts, optional
| specifies the variables to use in the action.
|
| table.vars[*].name : string
| specifies the name for the variable.
|
| table.vars[*].label : string, optional
| specifies the descriptive label for the variable.
|
| table.vars[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| table.vars[*].format : string, optional
| specifies the format to apply to the variable.
|
| table.vars[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| table.vars[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| nthreads : int32, optional
| specifies the number of threads to use on each machine in the
| server. By default, the server uses one thread for each CPU that is
| licensed to use SAS software.
| Default: 0
| Note: Value range is 0 <= n <= 64
|
| groupbytable : dict or CASTable, optional
| specifies an input table that contains the groups to use in a
| group-by analysis.
|
| groupbytable.name : string or CASTable
| specifies the name of the table to use.
|
| groupbytable.caslib : string, optional
| specifies the caslib containing the table that you want to use
| with the action. By default, the active caslib is used. Specify a
| value only if you need to access a table from a different caslib.
|
| groupbytable.where : string, optional
| specifies an expression for subsetting the input data.
|
| attributes : list of dicts, optional
|
| attributes[*].name : string
| specifies the name for the variable.
|
| attributes[*].label : string, optional
| specifies the descriptive label for the variable.
|
| attributes[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| attributes[*].format : string, optional
| specifies the format to apply to the variable.
|
| attributes[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| attributes[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| inputs : list of dicts, optional
|
| inputs[*].name : string
| specifies the name for the variable.
|
| inputs[*].label : string, optional
| specifies the descriptive label for the variable.
|
| inputs[*].formattedlength : int32, optional
| specifies the format field length plus the format precision
| length.
| Default: 0
|
| inputs[*].format : string, optional
| specifies the format to apply to the variable.
|
| inputs[*].nfl : int32, optional
| specifies the format field length.
| Default: 0
|
| inputs[*].nfd : int32, optional
| specifies the format precision length.
| Default: 0
|
| groupbylimit : int64, optional
| specifies the maximum number of levels in a group-by set. When the
| server determines this number of levels, the server stops and does
| not return a result. Specify this parameter if you want to avoid
| creating large result sets in group-by operations.
| Note: Value range is 1 <= n < 9223372036854775807
|
| orderby : list of strings, optional
| specifies a list of variables by which to order the result set.
| Default: []
|
| orderbyagg : list, optional
| specifies one or more aggregators by which to order the result set.
| Default: []
| Values: CSS, CV, MAX, MEAN, MIN, N, NMISS, PROBT, STD, STDERR, SUM,
| TSTAT, USS, VAR
|
| orderbydesc : list of strings, optional
| arranges the results in descending order.
| Default: []
|
| orderbygbyraw : boolean, optional
| when set to True, the ordering of the group-by variables is based on
| the raw values of the variables, not the formatted values.
| Default: False
|
| resultlimit : int32, optional
| specifies the maximum size of the result set returned to the client.
| Default: 0
| Note: Value range is 0 <= n <= 2147483647
|
| casout : dict or CASTable, optional
| specifies the settings for an output table.
|
| casout.name : string or CASTable, optional
| specifies the name to associate with the table.
|
| casout.caslib : string, optional
| specifies the name of the caslib to use.
|
| casout.timestamp : string, optional
| specifies the timestamp to apply to the table. Specify the value
| in the form that is appropriate for your session locale.
|
| casout.compress : boolean, optional
| when set to True, data compression is applied to the table.
| Default: False
|
| casout.replace : boolean, optional
| specifies whether to overwrite an existing table with the same
| name.
| Default: False
|
| casout.replication : int32, optional
| specifies the number of copies of the table to make for fault
| tolerance. Larger values result in slower performance and use
| more memory, but provide high availability for data in the event
| of a node failure.
| Default: 1
| Note: Value range is 0 <= n < 2147483647
|
| casout.threadblocksize : int64, optional
| specifies the number of bytes to use for blocks that are read by
| threads. Increase this value only if you have a large table and
| CPU utilization by threads shows thread starvation.
| Note: Value range is 0 <= n < 9223372036854775807
|
| casout.label : string, optional
| specifies the descriptive label to associate with the table.
|
| casout.maxmemsize : int64, optional
| specifies the maximum amount of physical memory, in bytes, to
| allocate for the table. After this threshold is reached, the
| server uses temporary files and operating system facilities for
| memory management.
| Default: 0
|
| casout.promote : boolean, optional
| when set to True, the output table is added with a global scope.
| This enables other sessions to access the table, subject to
| access controls. The target caslib must also have a global scope.
| Default: False
|
| casout.ondemand : boolean, optional
| when set to True, table access is less aggressive with virtual
| memory use.
| Default: True
|
| repeat : boolean, optional
| when set to True, the action is repeated.
| Default: False
|
| freq : string, optional
| specifies a frequency variable.
|
| cialpha : double, optional
| specifies the level of significance for 100*(1-ciAlpha)% confidence
| intervals. The default value of 0.05 results in 95% confidence
| intervals.
| Default: 0.05
| Note: Value range is 0.0 < n < 1.0
|
| citype : string, optional
| specifies the type of confidence interval.
| Default: TWOSIDED
| Values: LEFT, LOWER, RIGHT, TWOSIDED, UPPER
|
| subset : list, optional
| specifies the summary statistics to generate.
| Default: []
| Values: CSS, CV, MAX, MEAN, MIN, N, NMISS, PROBT, STD, STDERR, SUM,
| TSTAT, USS, VAR
|
| weight : string, optional
| specifies a numeric variable whose values weight the values of the
| analysis variables.
|
| Returns
| -------
| None
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| all_params = set(['attributes', 'attributes[*].format', 'attributes[*]...
|
| param_names = ['table', 'nthreads', 'groupbytable', 'attributes', 'inp...
|
| ----------------------------------------------------------------------
| Methods inherited from CASAction:
|
| __iter__(self)
| Call the action and iterate over the results
|
| invoke(self, **kwargs)
| Invoke the action
|
| Parameters
| ----------
| **kwargs : any, optional
| Arbitrary key/value pairs to add to the arguments sent to the
| action. These key/value pairs are not added to the collection
| of parameters set on the action object. They are only used in
| this call.
|
| Returns
| -------
| self
| Returns the CASAction object itself
|
| retrieve = __call__(self, **kwargs)
| Call the action
|
| Parameters
| ----------
| **kwargs : any, optional
| Arbitrary key/value pairs to add to the arguments sent to the
| action. These key/value pairs are not added to the collection
| of parameters set on the action object. They are only used in
| this call.
|
| Returns
| -------
| CASResults object
| Collection of results from the action call
|
| ----------------------------------------------------------------------
| Class methods inherited from CASAction:
|
| from_reflection(asname, actinfo, connection) from builtins.type
| Construct a CASAction class from reflection information
|
| Parameters
| ----------
| asname : string
| The action set name
| actinfo : dict
| The reflection information for the action
| connection : CAS object
| The connection to associate with the CASAction
| defaults : dict
| Default parameters for the action
|
| Returns
| -------
| CASAction class
|
| get_connection() from builtins.type
| Return the registered connection
|
| The connection is only held by a weak reference. If the
| connection no longer exists, a SWATError is raised.
|
| Raises
| ------
| SWATError
| If the registered connection no longer exists
|
| ----------------------------------------------------------------------
| Data and other attributes inherited from CASAction:
|
| trait_names = None
|
| ----------------------------------------------------------------------
| Methods inherited from swat.cas.utils.params.ParamManager:
|
| __delattr__(self, name)
| Delete an attribute
|
| __enter__(self)
|
| __exit__(self, type, value, traceback)
|
| __getattr__(self, name)
| Get named attribute
|
| __repr__(self)
|
| __setattr__(self, name, value)
| Set an attribute
|
| __str__(self)
|
| del_param = del_params(self, *keys)
| Delete parameters
|
| Parameters
| ----------
| *keys : strings
| Names of parameters to delete
|
| Returns
| -------
| None
|
| del_params(self, *keys)
| Delete parameters
|
| Parameters
| ----------
| *keys : strings
| Names of parameters to delete
|
| Returns
| -------
| None
|
| has_param = has_params(self, *keys)
| Return a boolean indicating whether or not the parameters exist
|
| Parameters
| ----------
| *keys : one or more strings
| Names of parameters
|
| Returns
| -------
| True or False
|
| has_params(self, *keys)
| Return a boolean indicating whether or not the parameters exist
|
| Parameters
| ----------
| *keys : one or more strings
| Names of parameters
|
| Returns
| -------
| True or False
|
| to_dict(self)
| Return the parameters as a dictionary
|
| to_json(self, *args, **kwargs)
| Convert parameters to JSON
|
| Parameters
| ----------
| *args : any, optional
| Additional arguments to json.dumps
| **kwargs : any, optional
| Additional arguments to json.dumps
|
| Returns
| -------
| string
|
| to_params = to_dict(self)
| Return the parameters as a dictionary
|
| ----------------------------------------------------------------------
| Data descriptors inherited from swat.cas.utils.params.ParamManager:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
###Markdown
Using IPython's ? Operator: The IPython environment has a way of invoking help as well. It is more useful in the notebook environment, where the help content will pop up in a separate pane of the browser. To bring up help for an action set, you simply add a **?** after the action set attribute name.
###Code
conn.simple?
###Output
_____no_output_____
###Markdown
The **?** operator also works with action names.
###Code
conn.simple.summary?
###Output
_____no_output_____
###Markdown
Conclusion: Which of the methods described above you use to get help on CAS actions really depends on what type of information you are looking for and what environment you are in. If you commonly work in IPython, the **?** operator is likely your best bet. If you simply want to see which actions are available in an action set, you can call the **help** action directly. And if you are looking for information about the action as well as the Python action class methods, then Python's **help** function is what you need.
###Code
conn.close()
###Output
_____no_output_____ |
challengerStatsNA2016.ipynb | ###Markdown
I've crawled all matches played by Challenger-tier players in NA during Season 2016 using 'crawlTierMatches.py'. The S7 season has not started yet, so let's explore this data first. Load Data
###Code
import pickle
import os
path = os.path.join(os.path.dirname('getMatchList.ipynb'), 'data')
with open(os.path.join(path, 'league_match_history_2016_na.pickle'), 'rb') as f:
match_ids = pickle.load(f)
matches = pickle.load(f)
print(len(matches))
import pandas as pd
matches_stats = pd.DataFrame.from_dict(matches)
matches_stats.head()
###Output
_____no_output_____
###Markdown
What Lane Do People Play Most? First we need to separate duo_carry and duo_support.
###Code
matches_stats.loc[matches_stats['role'] == 'DUO_CARRY', 'lane'] = 'BOT_ADC'
matches_stats.loc[matches_stats['role'] == 'DUO_SUPPORT', 'lane'] = 'BOT_SUP'
# drop those BOTTOM that are neither DUO_CARRY nor DUO_SUPPORT
matches_stats = matches_stats[matches_stats['lane'] != 'BOTTOM']
matches_stats.head()
lane_stats = matches_stats.groupby(['lane', 'queue']).size()
lane_stats = lane_stats.unstack()
lane_stats
ax = lane_stats.plot.bar(stacked=True, legend=False);
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
# Put a legend to the right of the current axis
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
ax.set_xticklabels(ax.xaxis.get_majorticklabels(), rotation=0)
pass
###Output
_____no_output_____
###Markdown
What Champion Do People Play Most?
###Code
champ_stats = matches_stats.groupby(['lane', 'champion']).size()
champ_stats = champ_stats.unstack()
champ_stats.head()
from lolcrawler_util import get_champion_name
import matplotlib.pyplot as plt
import numpy as np
f, axarr = plt.subplots(3, 2, figsize=(15,15))
plt_cnt = 0
for lane, row in champ_stats.iterrows():
sorted_row = row.sort_values(ascending=False)
# print(sorted_row[:5].values)
    # use integer division (//) so the subplot indices are ints under Python 3
    axarr[plt_cnt//2, plt_cnt%2].bar(np.arange(10), sorted_row[:10].values)
    axarr[plt_cnt//2, plt_cnt%2].title.set_text(lane)
    champion_name = []
    for c_id in sorted_row[:10].index.values:
        champion_name.append(get_champion_name(c_id))
    axarr[plt_cnt//2, plt_cnt%2].xaxis.set_ticks(np.arange(10))
    axarr[plt_cnt//2, plt_cnt%2].set_xticklabels(champion_name, rotation=45)
plt_cnt += 1
f.subplots_adjust(hspace=0.5)
axarr[-1, -1].axis('off')
plt.show()
###Output
_____no_output_____ |
assets/ta_asset.ipynb | ###Markdown
ALEXI Grid Properties
###Code
alexi_coll = ee.ImageCollection('projects/disalexi/alexi/CONUS')
# print(ee.Image(alexi_coll.first()).projection().getInfo()['transform'])
# print(ee.Image(alexi_coll.first()).projection().getInfo()['crs'])
# print(ee.Image(alexi_coll.first()).getInfo()['bands'][0]['dimensions'])
alexi_cs = 0.04
alexi_geo = [0.04, 0.0, -125.04, 0.0, -0.04, 49.82]
alexi_shape = [1456, 625]
alexi_crs = 'EPSG:4326'
alexi_extent = [
alexi_geo[2], alexi_geo[5] + alexi_geo[4] * alexi_shape[1],
alexi_geo[2] + alexi_geo[0] * alexi_shape[0], alexi_geo[5]]
alexi_geo_str = '[' + ','.join(list(map(str, alexi_geo))) + ']'
alexi_shape_str = '{0}x{1}'.format(*alexi_shape)
# print(alexi_geo_str)
# print(alexi_shape_str)
###Output
_____no_output_____
###Markdown
Study Area Properties
###Code
output_crs = 'EPSG:4326'
output_cs = 0.04 # ALEXI cellsize
# # Study areas
# output_extent = [-121.9, 38.8, -121.7, 38.9] # Study Area
# output_extent = [-122.0, 38.7, -121.6, 39.0] # Study Area
# output_extent = [-122.0, 38.0, -121.0, 39.0] # 1 x 1 deg
output_extent = [-123.0, 37.8, -120.4, 40.0] # LC08_044033_20150711
# output_extent = [-123, 35, -118.5, 40] # Central Valley
# output_extent = [-125, 32, -114, 42] # California / Nevada
# output_extent = [-125, 25, -65, 50] # CONUS
# Computed output transform, extent, and shape
output_geom = ee.Geometry.Rectangle(output_extent, output_crs, False)
output_region = output_geom.bounds(1, output_crs).coordinates().getInfo()[0][:-1]
output_xmin = min(x for x, y in output_region)
output_ymin = min(y for x, y in output_region)
output_xmax = max(x for x, y in output_region)
output_ymax = max(y for x, y in output_region)
# Expand extent when snapping/aligning to ALEXI grid
output_xmin = math.floor((output_xmin - alexi_extent[0]) / output_cs) * output_cs + alexi_extent[0]
output_ymin = math.floor((output_ymin - alexi_extent[3]) / output_cs) * output_cs + alexi_extent[3]
output_xmax = math.ceil((output_xmax - alexi_extent[0]) / output_cs) * output_cs + alexi_extent[0]
output_ymax = math.ceil((output_ymax - alexi_extent[3]) / output_cs) * output_cs + alexi_extent[3]
output_extent = [output_xmin, output_ymin, output_xmax, output_ymax]
output_geo = [output_cs, 0.0, output_xmin, 0.0, -output_cs, output_ymax]
# Convert to strings for export calls
output_geo_str = '[' + ','.join(list(map(str, output_geo))) + ']'
output_shape_str = '{0}x{1}'.format(
int(abs(output_extent[2] - output_extent[0]) / output_cs),
int(abs(output_extent[3] - output_extent[1]) / output_cs))
output_cs_str = '{:0.3f}'.format(output_cs).replace('.', 'p')
print(output_geo_str)
print(output_shape_str)
print(output_cs_str)
###Output
[0.04,0.0,-123.0,0.0,-0.04,40.019999999999996]
65x55
0p040
###Markdown
Landsat Image, Collection, and Properties
###Code
landsat_coll_id = 'LANDSAT/LC08/C01/T1_RT_TOA'
landsat_id = 'LC08_044033_20150711'
landsat_img = ee.Image('LANDSAT/LC08/C01/T1_RT_TOA/LC08_044033_20150711')
landsat_crs = landsat_img.select(['B2']).projection().getInfo()['crs']
landsat_geo = landsat_img.select(['B2']).projection().getInfo()['transform']
landsat_shape = landsat_img.select(['B2']).getInfo()['bands'][0]['dimensions']
landsat_geo_str = '[' + ','.join(list(map(str, landsat_geo))) + ']'
landsat_shape_str = '{0}x{1}'.format(*landsat_shape)
print(landsat_crs)
print(landsat_geo_str)
print(landsat_shape_str)
# Eventually try mapping the functions over a collection of images
landsat_coll = ee.ImageCollection(landsat_coll_id) \
.filterDate('2015-07-11', '2015-07-12') \
.filterBounds(output_geom) \
.filterMetadata('CLOUD_COVER_LAND', 'less_than', 70)
# .filterMetadata('WRS_PATH', 'equals', 44) \
# .filterMetadata('WRS_ROW', 'not_greater_than', 34) \
# .filterMetadata('WRS_ROW', 'not_less_than', 33) \
# pprint.pprint(list(landsat_coll.aggregate_histogram('system:index').getInfo().keys()))
###Output
_____no_output_____
###Markdown
Ta Function
###Code
def get_affine_transform(image):
return ee.List(ee.Dictionary(ee.Algorithms.Describe(image.projection())).get('transform'))
def ta_landsat_func(l_img):
"""Compute air temperature at the Landsat scale (don't aggregate)"""
input_img = ee.Image(landsat.Landsat(l_img).prep())
# Use the CONUS ALEXI ET but the global landcover and elevation products
d_obj = disalexi.Image(
input_img,
iterations=10,
elevation=ee.Image('USGS/SRTMGL1_003').rename(['elevation']),
landcover=ee.Image(ee.ImageCollection('users/cgmorton/GlobeLand30')
.filterBounds(l_img.geometry().bounds(1)).mosaic()) \
.divide(10).floor().multiply(10).rename(['landcover']),
lc_type='GLOBELAND30',
tair_values=list(range(273, 321, 1)),
)
return d_obj.compute_ta()
# return d_obj.compute_ta().reproject(
# crs=landsat_img.select(['B2']).projection().crs(),
# crsTransform=get_affine_transform(landsat_img.select(['B2'])))
def ta_coarse_func(l_img):
"""Compute air temperature averaged/aggregated to the ALEXI grid"""
input_img = ee.Image(landsat.Landsat(l_img).prep())
# Use the CONUS ALEXI ET but the global landcover and elevation products
d_obj = disalexi.Image(
input_img,
iterations=10,
elevation=ee.Image('USGS/SRTMGL1_003').rename(['elevation']),
landcover=ee.Image(ee.ImageCollection('users/cgmorton/GlobeLand30')
.filterBounds(l_img.geometry().bounds(1)).mosaic()) \
.divide(10).floor().multiply(10).rename(['landcover']),
lc_type='GLOBELAND30',
tair_values=list(range(273, 321, 1)),
)
# Was testing to see if setting the crsTransform in the reproject helped (it didn't)
landsat_crs = landsat_img.select(['B2']).projection().crs()
# landsat_geo = get_affine_transform(l_img.select(['B2']))
# .reproject(crs=landsat_crs, crsTransform=landsat_geo)\
return d_obj.compute_ta()\
.reproject(crs=landsat_crs, scale=30)\
.reduceResolution(reducer=ee.Reducer.mean(), maxPixels=65535) \
.reproject(crs=output_crs, crsTransform=output_geo) \
.updateMask(1)
###Output
_____no_output_____
###Markdown
Ta Export - Aggregated/averaged to ALEXI gridExport the aggregated air temperature asset. This fails with an internal error.
###Code
export_id = 'LC08_044033_20150711'
export_img = ee.Image('{}/{}'.format('LANDSAT/LC08/C01/T1_RT_TOA', export_id))
asset_id = '{coll}/{image}_{cs}'.format(coll='projects/disalexi/ta/CONUS', image=export_id, cs=output_cs_str)
task_id = 'disalexi_tair_coarse_{image}_{cs}'.format(image=export_id, cs=output_cs_str)
ta_coarse_img = ta_coarse_func(export_img)\
.setMulti({
'DATE_INGESTED': datetime.datetime.now().strftime('%Y-%m-%d'),
'DISALEXI_VERSION': openet.disalexi.__version__,
'DATE': ee.Date(export_img.get('system:time_start')).format('YYYY-mm-dd'),
})
task = ee.batch.Export.image.toAsset(
image=ee.Image(ta_coarse_img).toFloat(),
description=task_id,
assetId=asset_id,
crs=output_crs,
crsTransform=output_geo_str,
dimensions=output_shape_str,
)
# task.start()
# time.sleep(1)
# print(task.status())
###Output
_____no_output_____
###Markdown
Ta Export - Landsat Scale (30 or 60m). Export the Landsat-scale air temperature asset. This completes successfully for a 30 or 60 m cellsize.
###Code
export_id = 'LC08_044033_20150711'
export_img = ee.Image('{}/{}'.format('LANDSAT/LC08/C01/T1_RT_TOA', export_id))
# Switch output cellsize to 60m
export_cs = 30
export_crs = export_img.select(['B2']).projection().getInfo()['crs']
export_geo = export_img.select(['B2']).projection().getInfo()['transform']
export_geo[0] = export_cs
export_geo[4] = -export_cs
export_shape = export_img.select(['B2']).getInfo()['bands'][0]['dimensions']
export_shape[0] = int(export_shape[0] / (export_cs / 30) + 0.5)
export_shape[1] = int(export_shape[1] / (export_cs / 30) + 0.5)
# print(export_crs)
# print(export_geo)
# print(export_shape)
asset_id = '{coll}_{cs}m/{image}'.format(
coll='projects/disalexi/ta/landsat', image=export_id, cs=export_cs)
task_id = 'disalexi_tair_landsat_{image}_{cs}m'.format(image=export_id, cs=export_cs)
ta_landsat_img = ta_landsat_func(export_img)\
.setMulti({
'DATE_INGESTED': datetime.datetime.now().strftime('%Y-%m-%d'),
'DISALEXI_VERSION': openet.disalexi.__version__,
'DATE': ee.Date(export_img.get('system:time_start')).format('YYYY-mm-dd'),
})
task = ee.batch.Export.image.toAsset(
image=ee.Image(ta_landsat_img).toFloat(),
description=task_id,
assetId=asset_id,
crs=export_crs,
crsTransform='[' + ','.join(list(map(str, export_geo))) + ']',
dimensions='{0}x{1}'.format(*export_shape),
)
# task.start()
# time.sleep(1)
# print(task.status())
###Output
_____no_output_____
###Markdown
Ta Export - Mosaiced Landsat Images (for one WRS path, date, UTM zone). Start with a small mosaic that is only 2 or 3 images in the same path. Eventually the exports may need to be for all images in the path, or by path and UTM zone.
###Code
# export_id = 'LC08_044033_20150711'
# export_img = ee.Image('{}/{}'.format(l8_coll_id, export_id))
# Try calling the function for a mosaiced a collection of images in the same UTM zone, WRS Path, and date
export_coll = ee.ImageCollection('LANDSAT/LC08/C01/T1_RT_TOA') \
.filterDate('2015-07-11', '2015-07-12') \
.filterMetadata('WRS_PATH', 'equals', 44) \
.filterMetadata('WRS_ROW', 'not_greater_than', 34) \
.filterMetadata('WRS_ROW', 'not_less_than', 33) \
.filterMetadata('CLOUD_COVER_LAND', 'less_than', 70)
# print(export_coll.aggregate_histogram('system:index').getInfo())
export_id = '20150711_p044'
export_img = landsat_coll.mean()\
.set('system:time_start', ee.Date.fromYMD(2015, 7, 11).millis())
landsat_crs = 'EPSG:32610'
landsat_cs = 60
def ta_func(l_img):
input_img = ee.Image(landsat.Landsat(l_img).prep())
# Use the CONUS ALEXI ET but the global landcover and elevation products
d_obj = disalexi.Image(
input_img,
iterations=10,
elevation=ee.Image('USGS/SRTMGL1_003').rename(['elevation']),
landcover=ee.Image(ee.ImageCollection('users/cgmorton/GlobeLand30')
.filterBounds(l_img.geometry().bounds(1)).mosaic()) \
.divide(10).floor().multiply(10).rename(['landcover']),
lc_type='GLOBELAND30',
tair_values=list(range(273, 321, 1)),
)
return d_obj.compute_ta()
# .reproject(crs=landsat_img.select(['B2']).projection().crs(), scale=landsat_cs)\
# .reduceResolution(reducer=ee.Reducer.mean(), maxPixels=65535) \
# .reproject(crs=output_crs, crsTransform=output_geo) \
# .updateMask(1)
asset_id = '{coll}_{cs}m/{image}'.format(
coll='projects/disalexi/ta/landsat', image=export_id, cs=export_cs)
task_id = 'disalexi_tair_landsat_{image}_{cs}m'.format(image=export_id, cs=export_cs)
ta_landsat_img = ta_landsat_func(export_img)\
.setMulti({
'DATE_INGESTED': datetime.datetime.now().strftime('%Y-%m-%d'),
'DISALEXI_VERSION': openet.disalexi.__version__,
'DATE': ee.Date(export_img.get('system:time_start')).format('YYYY-mm-dd'),
})
task = ee.batch.Export.image.toAsset(
image=ee.Image(ta_landsat_img).toFloat(),
description=task_id,
assetId=asset_id,
crs=landsat_crs,
scale=30
# crsTransform='[' + ','.join(list(map(str, export_geo))) + ']',
# dimensions='{0}x{1}'.format(*export_shape),
)
# task.start()
# time.sleep(1)
# print(task.status())
###Output
_____no_output_____
###Markdown
Thumbnails
###Code
# # thumbnail_crs = 'EPSG:4326'
# # thumbnail_cs = 0.005
# thumbnail_crs = 'EPSG:32610'
# thumbnail_cs = 120
# thumbnail_xy = output_geom.bounds(1, thumbnail_crs).coordinates().getInfo()[0]
# thumbnail_xmin = int(min(x for x, y in thumbnail_xy) / thumbnail_cs) * thumbnail_cs
# thumbnail_ymin = int(min(y for x, y in thumbnail_xy) / thumbnail_cs) * thumbnail_cs
# thumbnail_xmax = int(max(x for x, y in thumbnail_xy) / thumbnail_cs) * thumbnail_cs + thumbnail_cs
# thumbnail_ymax = int(max(y for x, y in thumbnail_xy) / thumbnail_cs) * thumbnail_cs + thumbnail_cs
# thumbnail_geo = [thumbnail_cs, 0.0, thumbnail_xmin, 0.0, thumbnail_cs, thumbnail_ymax]
# thumnbail_region = [[], [], [], []]
# thumbnail_shape_str = '{0}x{1}'.format(
# int(abs(thumbnail_xmax - thumbnail_xmin) / thumbnail_cs),
# int(abs(thumbnail_ymax - thumbnail_ymin) / thumbnail_cs))
# print(thumbnail_crs)
# print(thumbnail_geo)
# print(thumbnail_shape_str)
# landsat_url = landsat_img.select(['B4', 'B3', 'B2'])\
# .reproject(crs=thumbnail_crs, crsTransform=thumbnail_geo)\
# .getThumbURL({'region': output_region, 'min': 0, 'max': 0.30})
# # print(landsat_url)
# Image(url=landsat_url)
# landsat_url = landsat_coll.filterDate('2015-07-11', '2015-07-12')\
# .median().select(['B4', 'B3', 'B2'])\
# .reproject(crs=thumbnail_crs, crsTransform=thumbnail_geo)\
# .getThumbURL({'region': output_region, 'min': 0, 'max': 0.30})
# # print(landsat_url)
# Image(url=landsat_url)
# landsat_url = landsat_coll.filterDate('2015-07-11', '2015-07-12')\
# .mean().select(['B4', 'B3', 'B2'])\
# .reproject(crs=output_crs, crsTransform=output_geo)\
# .getThumbURL({'region': output_region, 'min': 0, 'max': 0.30})
# # print(landsat_url)
# Image(url=landsat_url)
###Output
_____no_output_____
###Markdown
Ta Image
###Code
# ta_landsat_url = ta_landsat_func(landsat_img)\
# .reproject(crs=thumbnail_crs, crsTransform=thumbnail_geo)\
# .getThumbURL({
# 'region': output_region, 'min': 273, 'max': 325,
# 'palette': ','.join(['FF0000', 'FFFF00', '00FFFF', '0000FF'])})
# print(ta_landsat_url)
# Image(url=ta_landsat_url)
# ta_coarse_url = ta_coarse_func(landsat_img)\
# .reproject(crs=thumbnail_crs, crsTransform=thumbnail_geo)\
# .getThumbURL({'region': output_region, 'min': 273, 'max': 325,
# 'palette': ','.join(['FF0000', 'FFFF00', '00FFFF', '0000FF'])})
# print(ta_coarse_url)
# Image(url=ta_coarse_url)
###Output
_____no_output_____ |
d2l/mxnet/chapter_convolutional-neural-networks/pooling.ipynb | ###Markdown
Pooling :label:`sec_pooling` Often, when we process images, we want to gradually reduce the spatial resolution of our hidden representations and aggregate information, so that the higher we go in the network, the larger the receptive field (in the input) to which each neuron is sensitive. Moreover, our machine learning tasks usually relate to global questions about the image (e.g., "does the image contain a cat?"), so the neurons of our final layer should be globally sensitive to the entire input. By gradually aggregating information to produce coarser and coarser maps, we accomplish the goal of ultimately learning a global representation, while keeping all the advantages of convolutional layers at the intermediate layers. In addition, when detecting lower-level features (such as the edges discussed in :numref:`sec_conv_layer`), we often want these features to be somewhat invariant to translation. For example, if we take an image `X` with a sharp delineation between black and white and shift the whole image one pixel to the right, i.e., `Z[i, j] = X[i, j + 1]`, then the output for the new image `Z` may be vastly different. In reality, as the camera angle moves, hardly any object ever occurs at exactly the same pixel. Even when photographing a stationary object with a tripod, camera vibration caused by the movement of the shutter may shift everything by a pixel or so (high-end cameras are equipped with special features to address this problem). This section introduces *pooling* layers, which serve the dual purposes of mitigating the sensitivity of convolutional layers to location and of spatially downsampling representations. Maximum Pooling and Average Pooling: Like convolutional layers, pooling operators consist of a fixed-shape window that slides over all regions of the input according to its stride, computing a single output for each location traversed by the fixed-shape window (sometimes called the *pooling window*). However, unlike the cross-correlation computation between the inputs and the kernel in a convolutional layer, the pooling layer contains no parameters. Instead, pooling operators are deterministic, typically computing either the maximum or the average of all elements in the pooling window. These operations are called *maximum pooling* and *average pooling*, respectively. In both cases, as with the cross-correlation operator, the pooling window starts at the upper-left of the input tensor and slides across it from left to right and top to bottom. At each location the pooling window reaches, it computes the maximum or average value of the input subtensor in the window, depending on whether maximum or average pooling is used.:label:`fig_pooling` The output tensor in :numref:`fig_pooling` has a height of $2$ and a width of $2$. The four elements are the maximum value in each pooling window:$$\max(0, 1, 3, 4)=4,\\\max(1, 2, 4, 5)=5,\\\max(3, 4, 6, 7)=7,\\\max(4, 5, 7, 8)=8.\\$$A pooling layer with a pooling window of shape $p \times q$ is called a $p \times q$ pooling layer, and the pooling operation is called $p \times q$ pooling. Returning to the object edge detection example mentioned at the beginning of this section, we now use the output of the convolutional layer as the input for $2\times 2$ maximum pooling. Let the convolutional layer input be `X` and the pooling layer output be `Y`. Whether or not the values of `X[i, j]` and `X[i, j + 1]` differ, or `X[i, j + 1]` and `X[i, j + 2]` differ, the pooling layer always outputs `Y[i, j] = 1`. That is, using the $2\times 2$ maximum pooling layer, the convolutional layer can still recognize the pattern even if it shifts by one element in height or width. In the `pool2d` function in the code below, we (**implement the forward propagation of the pooling layer**). This function is similar to the `corr2d` function in :numref:`sec_conv_layer`. However, here we have no kernel, and the output is the maximum or average of each region in the input.
###Code
from mxnet import np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l
npx.set_np()
def pool2d(X, pool_size, mode='max'):
p_h, p_w = pool_size
Y = np.zeros((X.shape[0] - p_h + 1, X.shape[1] - p_w + 1))
for i in range(Y.shape[0]):
for j in range(Y.shape[1]):
if mode == 'max':
Y[i, j] = X[i: i + p_h, j: j + p_w].max()
elif mode == 'avg':
Y[i, j] = X[i: i + p_h, j: j + p_w].mean()
return Y
###Output
_____no_output_____
###Markdown
We can construct the input tensor `X` in :numref:`fig_pooling` to [**validate the output of the two-dimensional maximum pooling layer**].
###Code
X = np.array([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])
pool2d(X, (2, 2))
###Output
_____no_output_____
###Markdown
In addition, we can also (**validate the average pooling layer**).
###Code
pool2d(X, (2, 2), 'avg')
###Output
_____no_output_____
###Markdown
[**Padding and Stride**] As with convolutional layers, pooling layers can also change the output shape. And as before, we can adjust the padding and stride to achieve a desired output shape. Below, we demonstrate the use of padding and stride in pooling layers via the built-in two-dimensional maximum pooling layer from the deep learning framework. We first construct an input tensor `X` with four dimensions, where the number of examples and the number of channels are both 1.
###Code
X = np.arange(16, dtype=np.float32).reshape((1, 1, 4, 4))
X
###Output
_____no_output_____
###Markdown
By default, (**the stride in the deep learning framework is the same as the size of the pooling window**). Therefore, if we use a pooling window of shape `(3, 3)`, by default we get a stride of shape `(3, 3)`.
###Code
pool2d = nn.MaxPool2D(3)
# Since the pooling layer has no parameters, there is no need to call an initialization function
pool2d(X)
###Output
_____no_output_____
###Markdown
[**Padding and stride can be specified manually**].
###Code
pool2d = nn.MaxPool2D(3, padding=1, strides=2)
pool2d(X)
###Output
_____no_output_____
###Markdown
Of course, we can specify an arbitrary rectangular pooling window and set the padding and stride for height and width separately.
###Code
pool2d = nn.MaxPool2D((2, 3), padding=(0, 1), strides=(2, 3))
pool2d(X)
###Output
_____no_output_____
###Markdown
Multiple Channels: When processing multi-channel input data, [**the pooling layer pools each input channel separately**], rather than summing the inputs over channels as a convolutional layer does. This means that the number of output channels of the pooling layer is the same as the number of input channels. Below, we will concatenate the tensors `X` and `X + 1` along the channel dimension to construct an input with 2 channels.
###Code
X = np.concatenate((X, X + 1), 1)
X
###Output
_____no_output_____
###Markdown
As shown below, the number of output channels is still 2 after pooling.
###Code
pool2d = nn.MaxPool2D(3, padding=1, strides=2)
pool2d(X)
###Output
_____no_output_____ |
examples/notebooks/upload-tool-demo.ipynb | ###Markdown
Connect to local server. This notebook demonstrates how to use the upload tool that is included in the hoss client library. For these demo notebooks, it's assumed you're running against a system running in dev mode and able to connect to localhost. We start by connecting to the "local" server. If using a different server, be sure to change the `.connect()` arg.
###Code
import os
import tempfile

import hoss

server_local = hoss.connect('http://localhost')
print("Existing Namespaces:")
print(server_local.list_namespaces())
###Output
_____no_output_____
###Markdown
Create a dataset. First load the default namespace and then create a dataset inside the namespace.
###Code
ns = server_local.get_namespace('default')
ds = ns.create_dataset("upload-test", "A dataset for an upload tool example")
ds.display()
###Output
_____no_output_____
###Markdown
Write test data to upload. The upload tool operates on a directory of files. Create a test directory of dummy data.
###Code
temp_dir = tempfile.TemporaryDirectory()
for cnt in range(5):
with open(os.path.join(temp_dir.name, f"file{cnt}.dat"), 'wt') as fh:
fh.write('dummy data' * 5000000)
###Output
_____no_output_____
###Markdown
Run upload tool. You can run the upload tool as a function, which even works in Jupyter. You can also run the upload tool from the command line. When you pip install the hoss client library, the program `hoss` is installed. The command line interface has the form `hoss upload ...` (see the `-h` output below for the exact positional arguments). You can optionally write metadata key-value pairs using the `-m` flag (e.g. `-m subject_id=123`); multiple `-m` args are supported. You can optionally filter out files to upload using a regex string with the `--skip` arg. You can specify the endpoint (defaults to localhost) using the `--endpoint` arg.
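For illustration, here is a minimal sketch of composing such a command from Python using only the flags documented above (`-m`, `--skip`, `--endpoint`). The positional arguments, dataset name, directory, and metadata values are placeholders and should be checked against the `-h` output in the next cell.

```python
import subprocess

# Placeholder positional arguments -- confirm the exact order with `hoss upload -h`
positional = ["upload-test", "/tmp/my-upload-dir"]

cmd = ["hoss", "upload"] + positional + [
    "-m", "subject_id=123",            # metadata key-value pair; repeat -m for more pairs
    "-m", "site=demo",
    "--skip", r".*\.tmp$",             # regex of files to skip during upload
    "--endpoint", "http://localhost",  # server endpoint (defaults to localhost)
]

print(" ".join(cmd))                   # inspect the composed command
# subprocess.run(cmd, check=True)      # uncomment to actually launch the upload
```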
###Code
!hoss upload -h
# Try uploading by using the function directly
# We can populate most args using the client library objects we've already created
hoss.tools.upload.upload_directory(ds.dataset_name, temp_dir.name, ns.name, server_local.base_url, num_processes=1,
skip=None, max_concurrency=10, multipart_threshold=48, multipart_chunk_size=48, metadata={"my-upload-test": "foo"})
###Output
_____no_output_____
###Markdown
Verify the files uploaded successfully
###Code
for f in (ds / "my-test").iterdir():
print(f)
###Output
_____no_output_____
###Markdown
Clean up this example. Run these cells to remove the resources created during the test.
###Code
temp_dir.cleanup()
ns.delete_dataset("upload-test")
###Output
_____no_output_____ |
Aperture and PSF Photometry.ipynb | ###Markdown
Electromagnetic follow-up of Gravitational Wave events (EMGW). Main Motive: Measuring the brightness of astronomical sources, i.e. understanding the concept of photometry. Key steps: - Extracting sources from the image. - Cross-matching with an external catalogue to get zeropoints. - Calculating magnitudes using aperture photometry and standardising the magnitudes. - Performing PSF-fit photometry. **Here are a few important notes before we get started:** - A python3 environment is recommended for this notebook, with the following modules installed (you can also make use of conda to create such an environment): - numpy - matplotlib - astropy - photutils - astroquery. If any of these modules are not installed, a simple pip install might do the job, i.e. `pip install <module name>`. You can also use conda to install these modules if you are working in a conda environment. If you are working with a conda environment, make sure that your environment is active and that pip is installed within your working conda environment. **We also require a few additional astrometric software dependencies:** - SExtractor (source code download link: https://www.astromatic.net/software) - PSFEx (source code download link: https://www.astromatic.net/software) Let's get started. Once again we start by importing the necessary python modules. Do not run the next cell if you have all packages installed.
###Code
! pip install astroquery
! pip install astroscrappy
! pip install astropy
! sudo apt-get install psfex
! sudo add-apt-repository universe
! sudo apt-get install alien
! wget http://www.astromatic.net/download/sextractor/sextractor-2.19.5-1.x86_64.rpm
! alien -i sextractor-2.19.5-1.x86_64.rpm
! pip install photutils
import os
import sys  # needed by the sys.exit() calls in the error handlers below
import glob
import numpy as np
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as plt
import subprocess
import astropy.units as u
from astropy.io import fits
from astropy.table import Table
from astropy.stats import sigma_clipped_stats, sigma_clip
from photutils import SkyCircularAperture, SkyCircularAnnulus, aperture_photometry
from astroquery.vizier import Vizier
from astropy.io import ascii
from astropy.wcs import WCS
from astropy.coordinates import SkyCoord
from astropy.table import Table
from tqdm import tqdm_notebook as tqdm
def check_dependency(dep, i, alternate_names):
try:
subprocess.check_output(dep, stderr=subprocess.PIPE, shell=True)
print("{} is installed as {}.".format(dep, dep))
return 0
except:
try:
subprocess.check_output(alternate_names[i], stderr=subprocess.PIPE, shell=True)
print("{} is installed as {}.".format(dep, alternate_names[i]))
return 0
except subprocess.CalledProcessError:
output = "{} is not installed properly.".format(dep)
n_len = len(output)
print("%s"%("-" * n_len))
print(output)
print("%s"%("-" * n_len))
return 1
dependencies = ['sextractor','psfex']
Alt_names = ['sex', 'PSFEx']
for i, dep in enumerate(dependencies):
status = check_dependency(dep, i, Alt_names)
if status != 0:
print("Dependency is not insatlled properly. Please check for alternative names for dependency or contact the tutors for help.")
else:
print("All set. Let's fly :-) ")
## Simple decorative display function. Please ignore
class color:
PURPLE = '\033[95m'
CYAN = '\033[96m'
DARKCYAN = '\033[36m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
END = '\033[0m'
def display_text(text):
print( color.GREEN + '*'+'-'*(10+len(text))+'*' )
print('*'+('-'*3)+(' '*2)+ color.GREEN+ color.PURPLE+str(text)+ color.GREEN+(' '*2)+('-'*3)+'*')
print('*'+'-'*(10+len(text))+'*' +"\n")
def wait_request():
print('This step may take a while. Please wait 🙏.\n')
###Output
_____no_output_____
###Markdown
In our last notebook, we calibrated the data and made it ready for the actual science. It's time to use it. Do you remember where that data is? Let's start by finding the calibrated data. All the calibrated / reduced data sits in the reduced directory, so the reduced path is the one to use for most of the processing in this notebook.
###Code
# mounting google drive to import data files
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
##Finding data 🧐
cwd = "/content/drive/MyDrive"
science_path = os.path.join(cwd,'data','science')
reduced_path = os.path.join(cwd,'reduced') # all processed data sits here.
###Output
_____no_output_____
###Markdown
- Visualise the image to confirm that we are working with the correct data. Check whether it looks good or not. We have a few images available. You are free to use any of the image in theory. But, let's use the same image to compare the results.
###Code
os.chdir(reduced_path)
file_list = glob.glob("*.proc.fits")
print("found following {} files: {}".format(len(file_list), file_list))
image = file_list[3]
hdu = fits.open(image)
data = hdu[0].data
header = hdu[0].header
mean, med, std = sigma_clipped_stats(data)
plt.figure(figsize= (10,10))
plt.imshow(data, vmin = med - 10*std, vmax = med + 10*std)
plt.colorbar()
###Output
_____no_output_____
###Markdown
Photometry: photo (photons) + metry (measurement)
Photometry is a technique by which we measure the flux, or intensity, of the light emitted by a source. In other words, it is a method to measure the brightness of sources. We detect photons from the source on a CCD camera and measure the number of photons collected in a given time. An estimate of the photon flux from a particular source gives us its brightness. In astronomy we represent it in terms of the magnitude of a source, defined as:
m = -2.5 * log10(Flux)
Here `m` is called the "instrumental magnitude" of a source (a tiny numerical sketch of this formula is given at the end of this cell). As the formula shows, it depends on the flux, i.e. the number of photons collected by the camera for a particular source. The number of photons collected greatly depends on the camera's specifications and the telescope assembly, so this magnitude corresponds to a particular camera assembly. Different cameras may register different fluxes for the same source, and hence the instrumental magnitude may vary from camera to camera. Because the instrumental magnitude is not a standard quantity, we cannot use it directly on a global scale; we have to standardise it. We will do that later. Let's first understand how to estimate the instrumental magnitude.
Two types of photometry: aperture photometry and PSF-fit photometry.
Extracting sources from the image
Now, we will extract sources from our image using `SExtractor`. `SExtractor` is a very versatile piece of software, widely used by the astro community for detecting sources in FITS images. Along with detecting sources, it can also perform aperture and PSF photometry on them if provided the necessary parameters, which are usually stored in a configuration file. `SExtractor`'s source-detection methods have been tuned on images from many telescopes and are quite reliable. Although better photometry algorithms exist, hardly any other software is as reliable as `SExtractor` at detecting sources.
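A tiny numerical sketch of the magnitude formula above (the flux values are invented purely for illustration):

```python
import numpy as np

def instrumental_magnitude(flux):
    """m = -2.5 * log10(flux): convert measured counts into an instrumental magnitude."""
    return -2.5 * np.log10(flux)

# A source 100x brighter comes out 5 magnitudes more negative (i.e. brighter)
print(instrumental_magnitude(1e4))  # -10.0
print(instrumental_magnitude(1e6))  # -15.0
```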
###Code
# Let's define input and output file names
conf_file = 'config.sex' # configuration file for SExtractor. This files consist of values of differet params
parameter_file = 'apr.param' # parameter file which tells what params to be stored in catalogue file.
out_cat = image + ".cat" # Resulted catalogue from SExtractor consisting the sources info in the image.
command = ['sex', image, '-c', conf_file, '-CATALOG_NAME', out_cat, '-PARAMETERS_NAME', parameter_file]
print('SExtractor command is : %s' % command)
try:
display_text("Running SExtractor")
rval = subprocess.call(command)
display_text("Process complete")
except subprocess.CalledProcessError as err:
    print('An error occurred while running SExtractor. Please try to run it manually from the terminal.')
sys.exit(1)
def load_catalogue(catalogue, frames=1):
"""
Load the sextractor generated catalogue in form of an astropy table.
"""
if frames >0:
frames = frames*2
source_table= Table.read(catalogue, hdu=frames)
return source_table
local_sources = load_catalogue(out_cat)
print(local_sources.colnames)
print(local_sources)
###Output
_____no_output_____
###Markdown
Source selection
Let's select the good sources from the SExtractor catalogue. SExtractor flags the sources using 8 flag bits depending on various factors. You can read about the SExtractor flags in detail here: https://sextractor.readthedocs.io/en/latest/Flagging.html
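The next cell keeps only sources with `FLAGS == 0`, i.e. no flag bits set at all. If you ever want a less strict cut, for instance keeping blended detections while still rejecting saturated or truncated ones, you can test individual bits instead. A minimal sketch (the bit values follow the SExtractor flagging documentation linked above, and `local_sources` is the catalogue loaded in the previous cell):

```python
# Bit value 4 = at least one pixel is saturated, bit value 8 = object truncated at the
# image boundary (see the SExtractor docs linked above for the full list of flag bits).
relaxed_sources = local_sources[(local_sources['FLAGS'] & (4 + 8)) == 0]
print(len(relaxed_sources), 'sources pass the relaxed flag cut')
```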
###Code
unflagged_sources = local_sources[(local_sources['FLAGS'] == 0)]
print((unflagged_sources))
from matplotlib.patches import Circle
fig = plt.figure(figsize=(20,20))
ax = fig.gca()
plt.imshow(data, vmin=med-3*std, vmax=med+3*std)
# Marking the position of sources in image.
stars = [Circle((obj['XWIN_IMAGE'], obj['YWIN_IMAGE']), radius = 15, edgecolor='w', facecolor='None') for obj in unflagged_sources]
for star in stars:
ax.add_artist(star)
plt.show()
w = WCS(header)
[ra_cent, dec_cent] = w.all_pix2world(header["NAXIS2"]/2, header["NAXIS1"]/2, 1) # centeral ra dec of the image.
query_radius = 23 # catalogue query radius around central ra dec in arcmins.
minmag = 14 # maximum magnitude cut to get rid of very bright stars
maxmag = 18 # minimum magnitude cut to get rid of very faint stars
# Keep in mind: more the magnitude of the star, less brighter it is.
pan_catnum = "II/349" #This is the catalog number of PanSTARRS DR1 in Vizier.
# Use 'V/147' to query SDSS catalogue in Vizier. You have to replace colomn header names in last line as those are different for SDSS.
display_text('Making a vizier query for %s catalogue number in %.f arcmin radius around Ra %.5f, Dec %.5f'%(pan_catnum, query_radius, ra_cent, dec_cent))
wait_request()
try:
v = Vizier(columns=['*'], column_filters={"gmag":"<%.1f"%maxmag, "e_gmag":"<<1.086/3", "Nd":">6"}, row_limit=-1)
Q = v.query_region(SkyCoord(ra = ra_cent, dec = dec_cent, unit = (u.deg, u.deg)), radius = str(query_radius)+'m', catalog=pan_catnum, cache=False)
good_stars = Q[0][Q[0]['gmag'] > minmag] # Neglecting very bright stars from the queried catalogue.
print("\n\n Vizier query resulted %.f sources in the queried field after applying the mentioned filtering criteria."%len(good_stars))
except:
print('An error occured in querying vizier database. Please check whether your internet conection is working or not.')
# Convert the queried source positions to image pixel position using world2pix conversion for later use.
cat_localcoords = w.all_world2pix(good_stars['RAJ2000'], good_stars['DEJ2000'], 1) # position of queried sources in the image.
print(good_stars)
###Output
_____no_output_____
###Markdown
You can also check the relationship between the instrumental magnitude and the magnitude from PanSTARRS; they should follow a linear trend. This brings us to the next exercise! Plot the instrumental magnitude vs the PS1 magnitudes and see whether they follow a linear trend or not.
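A related sketch (the arrays below are hypothetical placeholders, not variables defined in this notebook): if the trend is linear with slope 1, the constant offset between the two magnitude scales is the photometric zeropoint, which is usually estimated with a sigma-clipped statistic:

```python
import numpy as np
from astropy.stats import sigma_clipped_stats

instr_mag = np.array([-10.2, -9.8, -11.1, -10.5])   # instrumental magnitudes of cross-matched stars
catalog_mag = np.array([16.3, 16.7, 15.4, 16.0])    # corresponding PanSTARRS magnitudes

zp_mean, zp_median, zp_std = sigma_clipped_stats(catalog_mag - instr_mag)
print('zeropoint = %.2f +/- %.2f' % (zp_median, zp_std))
```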
###Code
# Exercise 2 solution here:
plt.figure(figsize=(8,8))
plt.scatter()
plt.scatter()
plt.xlabel('PanSTARRS mag', fontsize=20)
plt.ylabel('Instrumental psf mag', fontsize=20)
plt.ylim()
plt.legend()
plt.show()
ra = 324.8157058 # RA of the source
dec = 46.7339800 # Dec of the source
position = SkyCoord(ra = ra, dec = dec, unit = u.deg, frame = 'icrs')
radii_max =int(np.round(medFWHM, 0))
aperture_radii= np.arange(1,radii_max+1)
print(aperture_radii)
apertures = [SkyCircularAperture(position, r = r * u.pix) for r in aperture_radii]
pix_apertures = [a.to_pixel(w) for a in apertures]
phot_table = aperture_photometry(data, pix_apertures)
for col in phot_table.colnames:
phot_table[col].info.format = '%.8g'
print(phot_table)
# Create the annulus aperture for background estimation
anuRadius = int(np.round(4*medFWHM, 0))
anuWidth = 3
annulus_aperture = SkyCircularAnnulus(position, r_in = anuRadius * u.pix, r_out = (anuRadius + anuWidth) * u.pix)
pix_annulus_aperture = annulus_aperture.to_pixel(w)
#Measuring the flux inside an aperture annulus
error = np.sqrt(data) # error array for each pixel value.
annulus_phot_table = aperture_photometry(data, pix_annulus_aperture, error = error)
for col in annulus_phot_table.colnames:
annulus_phot_table[col].info.format = '%.8g'
#print the output
print(annulus_phot_table)
bkg_mean = annulus_phot_table['aperture_sum'] / pix_annulus_aperture.area
bkg_flux = bkg_mean * pix_apertures[-1].area
print('aperture_sum_%d'%(radii_max-1))
source_flux = phot_table['aperture_sum_%d'%(radii_max-1)] - bkg_flux
int_mag_err=2.5*np.log10(1 + annulus_phot_table['aperture_sum_err'] / source_flux)
e_mag= np.sqrt(int_mag_err**2+zero_points[-1]['zp_std']**2)
source_mag = zero_points[-1]['zp_median'] - 2.5 * np.log10(source_flux)
print('Found source magnitude of %.2f +/- %0.2f for aperture of radius %d pixels'%(source_mag, e_mag, aperture_radii[-1]))
###Output
_____no_output_____
###Markdown
PSF-fit photometry:
###Code
psf_config = 'config.psfex'
psfex_command = ['psfex', '-c', psf_config, out_cat]
print('PSFEx command is : %s'%psfex_command)
try:
display_text("Running PSFEx")
wait_request()
rval = subprocess.call(psfex_command)
display_text("Process complete")
except subprocess.CalledProcessError as err:
    print('An error occurred while running PSFEx on %s. Please try to run it manually from the terminal.'%image)
sys.exit(1)
psf_hdu = fits.open('moffat_'+image+'.fits')[0]
psf_data = psf_hdu.data
psf_mean, psf_median, psf_std = sigma_clipped_stats(psf_data)
plt.figure(figsize=(6,6))
plt.imshow(psf_data, vmin = psf_median - 3*psf_std, vmax = psf_median + 10*psf_std)
plt.show()
psf_model = image + '.psf'
out_psfcat = image+'.psf.cat'
parameter_file = 'photomPSF.param' # psf photometry parameter file for SExtrator
# Let's run SExtractor again, but this time with the PSF model as input to perform PSF photometry.
command = ['sex', image, '-c', conf_file, '-CATALOG_NAME', out_psfcat, '-PARAMETERS_NAME', parameter_file, '-PSF_NAME', psf_model]
try:
display_text("Running SExtractor")
wait_request()
rval = subprocess.call(command)
display_text("Process complete")
except subprocess.CalledProcessError as err:
    print('An error occurred while running SExtractor. Please try to run it manually from the terminal.')
sys.exit(1)
local_psfsources = load_catalogue(out_psfcat)
# Spend a minute comparing the new and older SExtractor catalogue table columns and see what is new in the recent catalogue.
print(local_psfsources.colnames)
print("\n\nFound {} sources".format(len(local_psfsources)))
# Selecting good sources:
unflagged_psfsources = local_psfsources[(local_psfsources['FLAGS']==0) & (local_psfsources['FLAGS_MODEL']==0) & (local_psfsources['FWHM_WORLD']*3600 < 5)]
local_psf_catcoords = SkyCoord(ra=unflagged_psfsources['ALPHAWIN_J2000'], dec=unflagged_psfsources['DELTAWIN_J2000'], frame='icrs', unit='degree')
cross_match_radius = 0.676
local_psfidx, pan_psfidx, d2d, d3d = pan_catcoords.search_around_sky(local_psf_catcoords, cross_match_radius*u.arcsec)
print('Found %d good cross-matches'%len(local_psfidx))
plt.figure(figsize=(8,8))
plt.plot(good_stars['gmag'][ pan_psfidx], unflagged_psfsources['MAG_POINTSOURCE'][local_psfidx] , 'go', alpha = 0.5)
plt.xlabel('PanSTARRS mag', fontsize=20)
plt.ylabel('Instrumental psf mag', fontsize=20)
plt.ylim(-12.5, -8)
plt.show()
# Get the zeropoint of the image by crossd matching the psf photometry of sources.
psfoffsets = np.ma.array(good_stars['gmag'][ pan_psfidx] - unflagged_psfsources['MAG_POINTSOURCE'][local_psfidx])
zp_psfmean, zp_psfmed, zp_psfstd = sigma_clipped_stats(psfoffsets)
print('PSF mean zp: %.3f, PSF median zp: %.3f, PSF std zp: %.3f'%(zp_psfmean, zp_psfmed, zp_psfstd))
target_coords = SkyCoord(ra=[ra], dec=[dec], frame='icrs', unit='degree')
idx_target, local_idx_psf_target, d2d, d3d = local_psf_catcoords.search_around_sky(target_coords, cross_match_radius*u.arcsec)
if len(local_idx_psf_target) > 0:
    print('Source found in SExtractor catalogue')
else:
print("Unable to locate source in SExtrator catalogue")
int_psf_mag = unflagged_psfsources[local_idx_psf_target]['MAG_POINTSOURCE'][0]
int_psf_magerr = unflagged_psfsources[local_idx_psf_target]['MAGERR_POINTSOURCE'][0]
psfmag = int_psf_mag + zp_psfmed
e_psfmag = np.sqrt(int_psf_magerr**2 + zp_psfstd**2)
print('PSF magnitude of target is %.2f +/- %.2f'%(psfmag, e_psfmag))
###Output
_____no_output_____ |
.ipynb_checkpoints/Wine recommendation by taster-checkpoint.ipynb | ###Markdown
Wine Recommendation by taster
Analyzing the database of wine reviews: [Wine Reviews](https://www.kaggle.com/zynicide/wine-reviews), inspired by [wine-recommender](https://www.kaggle.com/sudhirnl7/wine-recommender/notebook).
Import and read csv
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from wordcloud import WordCloud,STOPWORDS
%matplotlib inline
plt.style.use('fivethirtyeight')
plt.rcParams.update({'font.size':12})
path = '2-Data/wine_130k.csv'
wines = pd.read_csv(path, low_memory=False)
print(wines.shape)
wines.head()
###Output
(120915, 14)
###Markdown
Taster Name
###Code
f,ax = plt.subplots(1,2, figsize = (16,8))
ax1,ax2 = ax.flatten()
sns.countplot(y = wines['taster_name'], palette = 'Set2', ax =ax1)
ax1.set_title('Taster Name')
ax1.set_xlabel('')
ax1.set_ylabel('')
sns.countplot(y = wines['taster_twitter_handle'], palette = 'Set2', ax =ax2)
ax2.set_title('Taster Twitter Handle')
ax2.set_xlabel('')
ax2.set_ylabel('');
plt.figure(figsize = (16,6))
cnt = wines.groupby(['country','taster_name',]).count().reset_index()
sns.countplot(x = cnt['country'], palette='Set2')
plt.xticks(rotation = 90);
###Output
_____no_output_____
###Markdown
Description
###Code
plt.figure(figsize= (16,8))
plt.title('Word cloud of Description')
wc = WordCloud(max_words=1000,max_font_size=40,background_color='black', stopwords = STOPWORDS,colormap='Set1')
wc.generate(' '.join(wines['description']))
plt.imshow(wc,interpolation="bilinear")
plt.axis('off')
###Output
_____no_output_____ |
sample_program_8_7_4_rf_classification.ipynb | ###Markdown
Easy Chemistry and Chemical Engineering with Python, Chapter 8: Build a model y = f(x) and estimate y for new samples. 8.7.4 Random Forest (RF) classification.
Summary of useful Jupyter Notebook shortcuts:
- Esc: switch to command mode (cell border turns blue)
- Enter: switch to edit mode (cell border turns green)
- In command mode, M: change the cell to Markdown (for notes and explanations)
- In command mode, Y: change the cell to Code (for Python code)
- In command mode, H: show help
- In command mode, A: insert an empty cell **above**
- In command mode, B: insert an empty cell **below**
- In command mode, DD: delete the cell
- Ctrl+Enter: run the cell
- Shift+Enter: run the cell and move to the cell below
The iris dataset (iris_with_species.csv): the famous [Fisher's Iris Data](https://en.wikipedia.org/wiki/Iris_flower_data_set). For 150 iris flowers, the sepal length, sepal width, petal length, and petal width have been measured.
###Code
import pandas as pd # pandas のインポート
dataset = pd.read_csv('iris_with_species.csv', index_col=0, header=0) # あやめのデータセットの読み込み
###Output
_____no_output_____
###Markdown
As with DT (decision trees), RF has no problem handling three classes. We will classify the three classes setosa, versicolor, and virginica.
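As a quick sanity check (a minimal sketch; it only uses the `dataset` DataFrame loaded above), you can confirm that the three classes are present and balanced:

```python
# Count the samples per iris species (the species label is the first column of the dataset)
print(dataset.iloc[:, 0].value_counts())  # expected: 50 each for setosa, versicolor, virginica
```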
###Code
# y と x に分割
y = dataset.iloc[:,0]
x = dataset.iloc[:,1:]
###Output
_____no_output_____
###Markdown
Splitting into training data and test data
###Code
from sklearn.model_selection import train_test_split
# ランダムにトレーニングデータとテストデータとに分割。random_state に数字を与えることで、別のときに同じ数字を使えば、ランダムとはいえ同じ結果にすることができます
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=50, stratify=y, shuffle=True, random_state=3)
###Output
_____no_output_____
###Markdown
As with DT models, the x variables are generally not standardized (autoscaled) for RF models either. Running RF
###Code
from sklearn.ensemble import RandomForestClassifier # クラス分類用の RF の実行に使用
model = RandomForestClassifier(n_estimators=500, max_features=0.5, oob_score=True) # RFモデルの宣言
model.fit(x_train, y_train) # DTモデル構築
###Output
_____no_output_____
###Markdown
Importance of the explanatory variables x in the constructed RF model
###Code
model.feature_importances_ # 特徴量の重要度。array 型で出力されます
importances = pd.DataFrame(model.feature_importances_) # pandas の DataFrame 型に変換
importances.index = x_train.columns # 説明変数に対応する名前を、元のデータの説明変数名に
importances.columns = ['importances'] # 列名を変更
importances # 念のため確認
importances.to_csv('importances.csv') # csv ファイルに保存。同じ名前のファイルがあるときは上書きされますので注意してください
###Output
_____no_output_____
###Markdown
Accuracy on the Out-Of-Bag (OOB) samples
###Code
model.oob_score_
###Output
_____no_output_____
###Markdown
Estimating the classes of the training data
###Code
estimated_y_train = pd.DataFrame(model.predict(x_train), index=x_train.index, columns=['estimated_class']) # 推定し、pandas の DataFrame 型に変換
estimated_y_train.to_csv('estimated_y_train.csv') # csv ファイルに保存。同じ名前のファイルがあるときは上書きされますので注意してください
###Output
_____no_output_____
###Markdown
Confusion matrix for the training data
###Code
from sklearn import metrics # 混同行列の作成、正解率の計算に使用
class_types = list(set(y_train)) # リスト型に変換。これで混同行列における縦と横のクラスの順番を定めます
class_types.sort() # アルファベット順に並び替え
confusion_matrix_train = pd.DataFrame(metrics.confusion_matrix(y_train, estimated_y_train, labels=class_types)) # 混同行列を作成し、pandas の DataFrame 型に変換
confusion_matrix_train.index = class_types # 行の名前を、定めたクラスの名前に
confusion_matrix_train.columns = class_types # 列の名前、定めたクラスの名前に
confusion_matrix_train # 確認
confusion_matrix_train.to_csv('confusion_matrix_train.csv') # csv ファイルに保存。同じ名前のファイルがあるときは上書きされますので注意してください
metrics.accuracy_score(y_train, estimated_y_train) # 正解率
###Output
_____no_output_____
###Markdown
Estimating the classes of the test data. Just replace the training data with the test data; the steps are the same as for the training data.
###Code
estimated_y_test = pd.DataFrame(model.predict(x_test), index=x_test.index, columns=['estimated_class']) # 推定し、pandas の DataFrame 型に変換
estimated_y_test # 念のため確認
estimated_y_test.to_csv('estimated_y_test.csv') # csv ファイルに保存。同じ名前のファイルがあるときは上書きされますので注意してください
###Output
_____no_output_____
###Markdown
Confusion matrix for the test data
###Code
confusion_matrix_test = pd.DataFrame(metrics.confusion_matrix(y_test, estimated_y_test, labels=class_types)) # 混同行列を作成し、pandas の DataFrame 型に変換
confusion_matrix_test.index = class_types # 行の名前を、定めたクラスの名前に
confusion_matrix_test.columns = class_types # 列の名前、定めたクラスの名前に
confusion_matrix_test # 確認
confusion_matrix_test.to_csv('confusion_matrix_test.csv') # csv ファイルに保存。同じ名前のファイルがあるときは上書きされますので注意してください
metrics.accuracy_score(y_test, estimated_y_test) # 正解率
###Output
_____no_output_____
###Markdown
Optimizing the fraction of explanatory variables x using OOB
###Code
import numpy as np # NumPy のインポート
ratios_of_x = np.arange(0.1, 1.1, 0.1) # 用いる説明変数の割合の候補
ratios_of_x # 念のため確認
accuracy_oob = [] # 空の list。説明変数の数の割合ごとに、OOB における正解率を入れていきます
for ratio_of_x in ratios_of_x:
model = RandomForestClassifier(n_estimators=500, max_features=ratio_of_x, oob_score=True, random_state=1)
model.fit(x_train, y_train)
accuracy_oob.append(model.oob_score_)
import matplotlib.pyplot as plt # 図の描画に使用
# 結果の確認
plt.rcParams['font.size'] = 18
plt.scatter(ratios_of_x, accuracy_oob)
plt.xlabel('ratio of x')
plt.ylabel('accuracy for OOB')
plt.show()
optimal_ratio_of_x = ratios_of_x[accuracy_oob.index(max(accuracy_oob))] # OOB における正解率が最大となる選択する x の割合
optimal_ratio_of_x # 念のため確認
###Output
_____no_output_____
###Markdown
Building the RF model and making predictions
###Code
model = RandomForestClassifier(n_estimators=500, max_features=optimal_ratio_of_x, oob_score=True) # RFモデルの宣言
model.fit(x_train, y_train) # RF モデル構築
###Output
_____no_output_____
###Markdown
Importance of the explanatory variables x in the constructed RF model
###Code
model.feature_importances_ # 特徴量の重要度。array 型で出力されます
importances = pd.DataFrame(model.feature_importances_) # pandas の DataFrame 型に変換
importances.index = x_train.columns # 説明変数に対応する名前を、元のデータの説明変数名に
importances.columns = ['importances'] # 列名を変更
importances # 念のため確認
importances.to_csv('importances.csv') # csv ファイルに保存。同じ名前のファイルがあるときは上書きされますので注意してください
###Output
_____no_output_____
###Markdown
Accuracy on the Out-Of-Bag (OOB) samples
###Code
model.oob_score_
###Output
_____no_output_____
###Markdown
Estimating the classes of the training data
###Code
estimated_y_train = pd.DataFrame(model.predict(x_train), index=x_train.index, columns=['estimated_class']) # 推定し、pandas の DataFrame 型に変換
estimated_y_train.to_csv('estimated_y_train.csv') # csv ファイルに保存。同じ名前のファイルがあるときは上書きされますので注意してください
###Output
_____no_output_____
###Markdown
Confusion matrix for the training data
###Code
from sklearn import metrics # 混同行列の作成、正解率の計算に使用
class_types = list(set(y_train)) # リスト型に変換。これで混同行列における縦と横のクラスの順番を定めます
class_types.sort() # アルファベット順に並び替え
confusion_matrix_train = pd.DataFrame(metrics.confusion_matrix(y_train, estimated_y_train, labels=class_types)) # 混同行列を作成し、pandas の DataFrame 型に変換
confusion_matrix_train.index = class_types # 行の名前を、定めたクラスの名前に
confusion_matrix_train.columns = class_types # 列の名前、定めたクラスの名前に
confusion_matrix_train # 確認
confusion_matrix_train.to_csv('confusion_matrix_train.csv') # csv ファイルに保存。同じ名前のファイルがあるときは上書きされますので注意してください
metrics.accuracy_score(y_train, estimated_y_train) # 正解率
###Output
_____no_output_____
###Markdown
Estimating the classes of the test data. Just replace the training data with the test data; the steps are the same as for the training data.
###Code
estimated_y_test = pd.DataFrame(model.predict(x_test), index=x_test.index, columns=['estimated_class']) # 推定し、pandas の DataFrame 型に変換
estimated_y_test # 念のため確認
estimated_y_test.to_csv('estimated_y_test.csv') # csv ファイルに保存。同じ名前のファイルがあるときは上書きされますので注意してください
###Output
_____no_output_____
###Markdown
Confusion matrix for the test data
###Code
confusion_matrix_test = pd.DataFrame(metrics.confusion_matrix(y_test, estimated_y_test, labels=class_types)) # 混同行列を作成し、pandas の DataFrame 型に変換
confusion_matrix_test.index = class_types # 行の名前を、定めたクラスの名前に
confusion_matrix_test.columns = class_types # 列の名前、定めたクラスの名前に
confusion_matrix_test # 確認
confusion_matrix_test.to_csv('confusion_matrix_test.csv') # csv ファイルに保存。同じ名前のファイルがあるときは上書きされますので注意してください
metrics.accuracy_score(y_test, estimated_y_test) # 正解率
###Output
_____no_output_____ |
notebook/8_Model_Optimization/8_6_beta_dynamic_quantization_on_bert_jp.ipynb | ###Markdown
"Dynamic Quantization on BERT (beta)"
[Original title] (beta) Dynamic Quantization on BERT
[Original author] [Jianyu Huang](https://github.com/jianyuh)
[Reviewer] [Raghuraman Krishnamoorthi](https://github.com/raghuramank100)
[Editor] [Jessica Lin](https://github.com/jlin27)
[Original URL] https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html
[Translation] 櫻井 亮佑, HCM Division, Dentsu International Information Services (ISID)
[Date] February 6, 2020
[Tutorial overview] In this tutorial we apply dynamic quantization to the BERT model from HuggingFace Transformers.
**Hint** To get the most out of this tutorial, we recommend using this [version of Colab](https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/dynamic_quantization_bert_tutorial.ipynb).
Introduction
In this tutorial we apply dynamic quantization to the BERT model from [HuggingFace Transformers](https://github.com/huggingface/transformers), explaining step by step how to convert a well-known, state-of-the-art model such as BERT into a dynamically quantized model.
- BERT, or Bidirectional Embedding Representations from Transformers, is a new method of pre-trained language representations that achieves state-of-the-art accuracy on many natural language processing (NLP) tasks such as question answering and document classification.
The original paper can be found [here](https://arxiv.org/pdf/1810.04805.pdf).
- Dynamic quantization, as supported in PyTorch, converts a float model into a quantized model with static int8 or float16 types for the weights and dynamic quantization for the activations.
The activations are quantized dynamically (per batch) to int8, while the weights are quantized to int8.
PyTorch provides the [torch.quantization.quantize_dynamic API](https://pytorch.org/docs/stable/quantization.html#torch.quantization.quantize_dynamic), which replaces only the weights of the specified modules with quantized versions and outputs the quantized model (a minimal sketch is given right after this list).
- We evaluate accuracy and inference performance on the [Microsoft Research Paraphrase Corpus (MRPC) task](https://www.microsoft.com/en-us/download/details.aspx?id=52398) from the General Language Understanding Evaluation benchmark [(GLUE)](https://gluebenchmark.com/).
MRPC (Dolan and Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations indicating whether the sentences in each pair are semantically equivalent.
Because the MRPC classes are imbalanced (68% positive, 32% negative), we follow common practice and use the [F1 score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) as the metric.
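As referenced in the list above, here is a minimal, self-contained sketch of what `torch.quantization.quantize_dynamic` does, using a toy model instead of BERT (the layer sizes are arbitrary):

```python
import torch

toy_model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 4),
)

# Replace every nn.Linear with a version whose weights are stored as quantized int8
quantized_toy = torch.quantization.quantize_dynamic(
    toy_model, {torch.nn.Linear}, dtype=torch.qint8
)

print(quantized_toy)
print(quantized_toy(torch.randn(1, 16)).shape)  # inference still takes and returns float tensors
```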
As shown below, MRPC is a common NLP task for classifying pairs of sentences.
1. Setup
1.1 Installing PyTorch and HuggingFace Transformers
To get started with this tutorial, follow the installation instructions for [PyTorch](https://github.com/pytorch/pytorch/#installation) and the [HuggingFace Github repository](https://github.com/huggingface/transformers#installation). We also install [scikit-learn](https://github.com/scikit-learn/scikit-learn) in order to use its built-in F1-score function.
###Code
!pip install sklearn
!pip install transformers
###Output
Requirement already satisfied: sklearn in /usr/local/lib/python3.6/dist-packages (0.0)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from sklearn) (0.22.2.post1)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->sklearn) (1.0.0)
Requirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->sklearn) (1.4.1)
Requirement already satisfied: numpy>=1.11.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->sklearn) (1.19.5)
Collecting transformers
[?25l Downloading https://files.pythonhosted.org/packages/98/87/ef312eef26f5cecd8b17ae9654cdd8d1fae1eb6dbd87257d6d73c128a4d0/transformers-4.3.2-py3-none-any.whl (1.8MB)
[K |████████████████████████████████| 1.8MB 5.6MB/s
[?25hCollecting sacremoses
[?25l Downloading https://files.pythonhosted.org/packages/7d/34/09d19aff26edcc8eb2a01bed8e98f13a1537005d31e95233fd48216eed10/sacremoses-0.0.43.tar.gz (883kB)
[K |████████████████████████████████| 890kB 16.6MB/s
[?25hRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.6/dist-packages (from transformers) (4.41.1)
Requirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers) (3.0.12)
Requirement already satisfied: dataclasses; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from transformers) (0.8)
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.6/dist-packages (from transformers) (1.19.5)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.6/dist-packages (from transformers) (2019.12.20)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from transformers) (3.4.0)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from transformers) (2.23.0)
Collecting tokenizers<0.11,>=0.10.1
[?25l Downloading https://files.pythonhosted.org/packages/fd/5b/44baae602e0a30bcc53fbdbc60bd940c15e143d252d658dfdefce736ece5/tokenizers-0.10.1-cp36-cp36m-manylinux2010_x86_64.whl (3.2MB)
[K |████████████████████████████████| 3.2MB 26.1MB/s
[?25hRequirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from transformers) (20.9)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (1.15.0)
Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (7.1.2)
Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (1.0.0)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->transformers) (3.4.0)
Requirement already satisfied: typing-extensions>=3.6.4; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < "3.8"->transformers) (3.7.4.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2020.12.5)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->transformers) (2.4.7)
Building wheels for collected packages: sacremoses
Building wheel for sacremoses (setup.py) ... [?25l[?25hdone
Created wheel for sacremoses: filename=sacremoses-0.0.43-cp36-none-any.whl size=893261 sha256=f9c7195f69375d43d5563efe3738c44005f7936733ed5cec7a255b7145f9a0c1
Stored in directory: /root/.cache/pip/wheels/29/3c/fd/7ce5c3f0666dab31a50123635e6fb5e19ceb42ce38d4e58f45
Successfully built sacremoses
Installing collected packages: sacremoses, tokenizers, transformers
Successfully installed sacremoses-0.0.43 tokenizers-0.10.1 transformers-4.3.2
###Markdown
Because we will be using features of PyTorch that are in beta, we recommend installing the latest versions of torch and torchvision. The most up-to-date instructions for a local install are available [here](https://pytorch.org/get-started/locally/). For example, to install on a Mac you would use the commands below.
###Code
# !yes y | pip uninstall torch tochvision
# !yes y | pip install --pre torch -f https://download.pytorch.org/whl/nightly/cu101/torch_nightly.html
###Output
_____no_output_____
###Markdown
1.2 Importing the required modules
In this step we import the Python modules required to work through the tutorial.
###Code
from __future__ import absolute_import, division, print_function
import logging
import numpy as np
import os
import random
import sys
import time
import torch
from argparse import Namespace
from torch.utils.data import (DataLoader, RandomSampler, SequentialSampler,
TensorDataset)
from tqdm import tqdm
from transformers import (BertConfig, BertForSequenceClassification, BertTokenizer,)
from transformers import glue_compute_metrics as compute_metrics
from transformers import glue_output_modes as output_modes
from transformers import glue_processors as processors
from transformers import glue_convert_examples_to_features as convert_examples_to_features
# ログの準備
logger = logging.getLogger(__name__)
logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt = '%m/%d/%Y %H:%M:%S',
level = logging.WARN)
logging.getLogger("transformers.modeling_utils").setLevel(
logging.WARN) # ログの削減
print(torch.__version__)
###Output
1.7.0+cu101
###Markdown
We set the number of threads so that we can compare single-thread performance between FP32 and INT8. At the end of this tutorial, you will be able to set a different number of threads by building PyTorch with an appropriate parallel backend.
###Code
torch.set_num_threads(1)
print(torch.__config__.parallel_info())
###Output
ATen/Parallel:
at::get_num_threads() : 1
at::get_num_interop_threads() : 1
OpenMP 201511 (a.k.a. OpenMP 4.5)
omp_get_max_threads() : 1
Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
mkl_get_max_threads() : 1
Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
std::thread::hardware_concurrency() : 2
Environment variables:
OMP_NUM_THREADS : [not set]
MKL_NUM_THREADS : [not set]
ATen parallel backend: OpenMP
###Markdown
1.3 Learning about the helper functions
The transformers library ships with built-in helper functions. In this tutorial we mainly use two of them: one converts text examples into feature vectors, and the other measures the F1 score of the predictions.
The [glue_convert_examples_to_features](https://github.com/huggingface/transformers/blob/master/transformers/data/processors/glue.py) function converts the texts into input features (a small sketch follows the list below). Specifically, it:
- Tokenizes the input sequences.
- Inserts [CLS] at the beginning.
- Inserts [SEP] between the first and the second sentence, and at the end.
- Generates token type ids indicating whether a token belongs to the first sentence or to the second sentence.
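A minimal sketch of what that looks like in practice with the `bert-base-uncased` tokenizer used in this tutorial (the two sentences are made up):

```python
from transformers import BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")

# Encode a sentence pair the way the GLUE helper does: [CLS] A [SEP] B [SEP]
enc = tok("The cat sat on the mat.", "A cat was sitting on a mat.",
          max_length=32, padding="max_length", truncation=True)

print(tok.convert_ids_to_tokens(enc["input_ids"])[:12])  # begins with [CLS]; [SEP] separates the sentences
print(enc["token_type_ids"][:12])                        # 0 for the first sentence, 1 for the second
```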
The [glue_compute_metrics](https://github.com/huggingface/transformers/blob/master/transformers/data/processors/glue.py) function provides the [F1 score](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) metric, which can be interpreted as a weighted average of precision and recall. The F1 score reaches its best value at 1 and its worst at 0, and the relative contributions of precision and recall to the F1 score are equal.
- The formula for the F1 score is:
$$F1 = 2 * (\text{precision} * \text{recall}) / (\text{precision} + \text{recall})$$
1.4 Downloading the dataset
Before running the MRPC task, download the [GLUE data](https://gluebenchmark.com/tasks) by running [this script](https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e) and unpack it into the `glue_data` directory.
###Code
!python download_glue_data.py --data_dir='glue_data' --tasks='MRPC'
# Translator's note (Japanese edition): this step does not work properly here...
###Output
_____no_output_____
###Markdown
2. Fine-tuning the BERT model
The idea behind BERT is to pre-train language representations and then fine-tune the deep bidirectional representations with a minimal number of task-dependent parameters, achieving state-of-the-art results on a variety of tasks. In this tutorial we focus on fine-tuning a pre-trained BERT model to classify the semantically equivalent sentence pairs in the MRPC task. To fine-tune the pre-trained BERT model (the `bert-base-uncased` model in HuggingFace transformers) for the MRPC task, we follow the commands in [this example](https://github.com/huggingface/transformers/tree/master/examples#mrpc).
###Code
!export GLUE_DIR=./glue_data
!export TASK_NAME=MRPC
!export OUT_DIR=./$TASK_NAME/
!python ./run_glue.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--do_lower_case \
--data_dir $GLUE_DIR/$TASK_NAME \
--max_seq_length 128 \
--per_gpu_eval_batch_size=8 \
--per_gpu_train_batch_size=8 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--save_steps 100000 \
--output_dir $OUT_DIR
###Output
_____no_output_____
###Markdown
A BERT model already fine-tuned for the MRPC task is provided [here](https://download.pytorch.org/tutorial/MRPC.zip). To save time, you can download the model files (~400 MB) into your local `$OUT_DIR` folder.
2.1 Global configuration
We set up the global configuration used to evaluate the fine-tuned BERT model before and after dynamic quantization.
###Code
configs = Namespace()
# ファインチューン済みモデルを出力するディレクトリ、$OUT_DIR
configs.output_dir = "./MRPC/"
# GLUEベンチマークのMRPCタスクのデータを格納するディレクトリ、$GLUE_DIR/$TASK_NAME
configs.data_dir = "./glue_data/MRPC"
# 事前訓練済みモデルのモデル名、またはパス
configs.model_name_or_path = "bert-base-uncased"
# 入力系列の最大長
configs.max_seq_length = 128
# GLUEタスクの準備
configs.task_name = "MRPC".lower()
configs.processor = processors[configs.task_name]()
configs.output_mode = output_modes[configs.task_name]
configs.label_list = configs.processor.get_labels()
configs.model_type = "bert".lower()
configs.do_lower_case = True
# デバイス、バッチサイズ、トポロジ(GPU数やマシンのランク)、及びキャッシュフラグを設定
configs.device = "cpu"
configs.per_gpu_eval_batch_size = 8
configs.n_gpu = 0
configs.local_rank = -1
configs.overwrite_cache = False
# 再現性のための乱数シードの設定
def set_seed(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
set_seed(42)
###Output
_____no_output_____
###Markdown
2.2 Loading the fine-tuned BERT model
We load the tokenizer and the fine-tuned BERT sequence classifier model (FP32) from `configs.output_dir`.
###Code
tokenizer = BertTokenizer.from_pretrained(
configs.output_dir, do_lower_case=configs.do_lower_case)
model = BertForSequenceClassification.from_pretrained(configs.output_dir)
model.to(configs.device)
###Output
_____no_output_____
###Markdown
2.3 Defining the tokenize and evaluation functions
We reuse the tokenize and evaluation functions from [Huggingface](https://github.com/huggingface/transformers/blob/master/examples/run_glue.py).
###Code
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
def evaluate(args, model, tokenizer, prefix=""):
# MNLIの2つの評価(一致、不一致)を処理するためのループ
eval_task_names = ("mnli", "mnli-mm") if args.task_name == "mnli" else (args.task_name,)
eval_outputs_dirs = (args.output_dir, args.output_dir + '-MM') if args.task_name == "mnli" else (args.output_dir,)
results = {}
for eval_task, eval_output_dir in zip(eval_task_names, eval_outputs_dirs):
eval_dataset = load_and_cache_examples(args, eval_task, tokenizer, evaluate=True)
if not os.path.exists(eval_output_dir) and args.local_rank in [-1, 0]:
os.makedirs(eval_output_dir)
args.eval_batch_size = args.per_gpu_eval_batch_size * max(1, args.n_gpu)
# DistributedSamplerはランダムにサンプリングする点に留意してください。
eval_sampler = SequentialSampler(eval_dataset) if args.local_rank == -1 else DistributedSampler(eval_dataset)
eval_dataloader = DataLoader(eval_dataset, sampler=eval_sampler, batch_size=args.eval_batch_size)
# マルチGPUの確認
if args.n_gpu > 1:
model = torch.nn.DataParallel(model)
# 評価
logger.info("***** Running evaluation {} *****".format(prefix))
logger.info(" Num examples = %d", len(eval_dataset))
logger.info(" Batch size = %d", args.eval_batch_size)
eval_loss = 0.0
nb_eval_steps = 0
preds = None
out_label_ids = None
for batch in tqdm(eval_dataloader, desc="Evaluating"):
model.eval()
batch = tuple(t.to(args.device) for t in batch)
with torch.no_grad():
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'labels': batch[3]}
if args.model_type != 'distilbert':
inputs['token_type_ids'] = batch[2] if args.model_type in ['bert', 'xlnet'] else None # XLM、DistilBERT、そしてRoBERTaはsegment_idsを使用しません。
outputs = model(**inputs)
tmp_eval_loss, logits = outputs[:2]
eval_loss += tmp_eval_loss.mean().item()
nb_eval_steps += 1
if preds is None:
preds = logits.detach().cpu().numpy()
out_label_ids = inputs['labels'].detach().cpu().numpy()
else:
preds = np.append(preds, logits.detach().cpu().numpy(), axis=0)
out_label_ids = np.append(out_label_ids, inputs['labels'].detach().cpu().numpy(), axis=0)
eval_loss = eval_loss / nb_eval_steps
if args.output_mode == "classification":
preds = np.argmax(preds, axis=1)
elif args.output_mode == "regression":
preds = np.squeeze(preds)
result = compute_metrics(eval_task, preds, out_label_ids)
results.update(result)
output_eval_file = os.path.join(eval_output_dir, prefix, "eval_results.txt")
with open(output_eval_file, "w") as writer:
logger.info("***** Eval results {} *****".format(prefix))
for key in sorted(result.keys()):
logger.info(" %s = %s", key, str(result[key]))
writer.write("%s = %s\n" % (key, str(result[key])))
return results
def load_and_cache_examples(args, task, tokenizer, evaluate=False):
if args.local_rank not in [-1, 0] and not evaluate:
torch.distributed.barrier() # 分散訓練内の最初のプロセスのみがデータセットを処理し、その他のプロセスはキャッシュを使用するように担保します。
processor = processors[task]()
output_mode = output_modes[task]
# キャッシュ、またはデータセットのファイルからデータの特徴量を読み込み
cached_features_file = os.path.join(args.data_dir, 'cached_{}_{}_{}_{}'.format(
'dev' if evaluate else 'train',
list(filter(None, args.model_name_or_path.split('/'))).pop(),
str(args.max_seq_length),
str(task)))
if os.path.exists(cached_features_file) and not args.overwrite_cache:
logger.info("Loading features from cached file %s", cached_features_file)
features = torch.load(cached_features_file)
else:
logger.info("Creating features from dataset file at %s", args.data_dir)
label_list = processor.get_labels()
if task in ['mnli', 'mnli-mm'] and args.model_type in ['roberta']:
# ハック:RoBERTaの訓練済みモデルではラベルのインデックスが反転しています。
label_list[1], label_list[2] = label_list[2], label_list[1]
examples = processor.get_dev_examples(args.data_dir) if evaluate else processor.get_train_examples(args.data_dir)
features = convert_examples_to_features(examples,
tokenizer,
label_list=label_list,
max_length=args.max_seq_length,
output_mode=output_mode,
pad_on_left=bool(args.model_type in ['xlnet']), # xlnetでは左部にパディングを挿入します。
pad_token=tokenizer.convert_tokens_to_ids([tokenizer.pad_token])[0],
pad_token_segment_id=4 if args.model_type in ['xlnet'] else 0,
)
if args.local_rank in [-1, 0]:
logger.info("Saving features into cached file %s", cached_features_file)
torch.save(features, cached_features_file)
if args.local_rank == 0 and not evaluate:
torch.distributed.barrier() # 分散訓練内の最初のプロセスのみがデータセットを処理し、その他のプロセスはキャッシュを使用するように担保します。
# テンソルに変換し、データセットを構築します。
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
if output_mode == "classification":
all_labels = torch.tensor([f.label for f in features], dtype=torch.long)
elif output_mode == "regression":
all_labels = torch.tensor([f.label for f in features], dtype=torch.float)
dataset = TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_labels)
return dataset
###Output
_____no_output_____
###Markdown
3. Applying dynamic quantization
We call `torch.quantization.quantize_dynamic` on the model to apply dynamic quantization to the HuggingFace BERT model. Specifically, we set the arguments so that:
- the torch.nn.Linear modules in the model are quantized;
- the weights are converted to quantized int8 values.
###Code
quantized_model = torch.quantization.quantize_dynamic(
model, {torch.nn.Linear}, dtype=torch.qint8
)
print(quantized_model)
###Output
_____no_output_____
###Markdown
3.1 Checking the model size
Let's first check the model size. We can observe a significant reduction (FP32 total size: 438 MB; INT8 total size: 181 MB).
###Code
def print_size_of_model(model):
torch.save(model.state_dict(), "temp.p")
print('Size (MB):', os.path.getsize("temp.p")/1e6)
os.remove('temp.p')
print_size_of_model(model)
print_size_of_model(quantized_model)
###Output
_____no_output_____
###Markdown
The BERT model used in this tutorial (`bert-base-uncased`) has a vocabulary size of V=30522. With an embedding size of 768, the total size of the embedding table is therefore about 4 (Bytes/FP32) * 30522 * 768 = 90 MB. Thanks to quantization, the size of the non-embedding-table part of the model is reduced from 350 MB (FP32 model) to 90 MB (INT8 model).
3.2 Evaluating inference accuracy and time
Next, we compare inference time and accuracy between the original FP32 model and the INT8 model produced by dynamic quantization.
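(Before running the evaluation, a quick back-of-the-envelope check of the ~90 MB embedding-table figure quoted in the previous paragraph; this is plain arithmetic only:)

```python
vocab_size = 30522      # V for bert-base-uncased
embedding_dim = 768     # hidden size
bytes_per_fp32 = 4

print(f"{bytes_per_fp32 * vocab_size * embedding_dim / 1e6:.1f} MB")  # ~93.8 MB, i.e. roughly the 90 MB quoted above
```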
###Code
def time_model_evaluation(model, configs, tokenizer):
eval_start_time = time.time()
result = evaluate(configs, model, tokenizer, prefix="")
eval_end_time = time.time()
eval_duration_time = eval_end_time - eval_start_time
print(result)
print("Evaluate total time (seconds): {0:.1f}".format(eval_duration_time))
# 元のFP32型のBERTモデルを評価
time_model_evaluation(model, configs, tokenizer)
# 動的量子化後のINT8型のBERTモデルを評価
time_model_evaluation(quantized_model, configs, tokenizer)
###Output
_____no_output_____
###Markdown
Running the code above without quantization on a MacBook Pro, inference (over all 408 examples in the MRPC dataset) takes about 160 seconds; with quantization it takes only about 90 seconds. The results of running inference with the quantized BERT model on a MacBook Pro are summarized below.

| Prec | F1 score | Model Size | 1 thread | 4 threads |
|------|----------|------------|----------|-----------|
| FP32 | 0.9019   | 438 MB     | 160 sec  | 85 sec    |
| INT8 | 0.902    | 181 MB     | 90 sec   | 46 sec    |

After applying post-training dynamic quantization to the BERT model fine-tuned on the MRPC task, the F1 score stays within about 0.6% of the original. For comparison, a [recent paper](https://arxiv.org/pdf/1910.06188.pdf) (Table 1) achieves 0.8788 with post-training dynamic quantization and 0.8956 with quantization-aware training. The main difference is that that paper only supports symmetric quantization, whereas PyTorch supports asymmetric quantization.
Note that in this tutorial the number of threads is set to 1 in order to compare single-thread performance. Parallelized execution of these quantized INT8 operators is also supported: users can enable multithreading with `torch.set_num_threads(N)` (where N is the number of threads to parallelize over). One additional requirement for enabling the parallelization support is to build PyTorch with an appropriate [backend](https://pytorch.org/docs/stable/notes/cpu_threading_torchscript_inference.html#build-options) such as OpenMP, Native, or TBB. You can check the parallelization settings with `torch.__config__.parallel_info()`. Incidentally, on the same MacBook Pro with PyTorch built with the Native parallel backend, evaluating the MRPC dataset took about 46 seconds.
3.3 Serializing the quantized model
After tracing the model, the quantized model can be serialized and saved with `torch.jit.save`.
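(Relating to the multithreading note above, a minimal sketch of switching to several threads before re-running the evaluation; 4 is an arbitrary choice:)

```python
import torch

torch.set_num_threads(4)                  # allow intra-op parallelism for the quantized INT8 operators
print(torch.get_num_threads())            # confirm the setting
print(torch.__config__.parallel_info())   # shows which parallel backend this PyTorch build uses
```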
###Code
# ids_tensor is not defined in this notebook; a minimal stand-in (an assumption) that mimics
# the helper from the transformers test utilities: random integer ids of the given shape.
def ids_tensor(shape, vocab_size):
    return torch.randint(0, vocab_size, shape, dtype=torch.long)

input_ids = ids_tensor([8, 128], 2)
token_type_ids = ids_tensor([8, 128], 2)
attention_mask = ids_tensor([8, 128], vocab_size=2)
dummy_input = (input_ids, attention_mask, token_type_ids)
traced_model = torch.jit.trace(quantized_model, dummy_input)
torch.jit.save(traced_model, "bert_traced_eager_quant.pt")
###Output
_____no_output_____
###Markdown
To load the quantized model back, `torch.jit.load` can be used.
###Code
loaded_quantized_model = torch.jit.load("bert_traced_eager_quant.pt")
###Output
_____no_output_____ |
0.12/_downloads/plot_dipole_fit.ipynb | ###Markdown
Source localization with single dipole fit
This shows how to fit a dipole using mne-python.
For a comparison of fits between MNE-C and mne-python, see: https://gist.github.com/Eric89GXL/ca55f791200fe1dc3dd2
Note that for 3D graphics you may need to choose a specific IPython backend, such as: `%matplotlib qt` or `%matplotlib wx`
###Code
from os import path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.forward import make_forward_dipole
from mne.evoked import combine_evoked
from mne.simulation import simulate_evoked
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_ave = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
fname_cov = op.join(data_path, 'MEG', 'sample', 'sample_audvis-cov.fif')
fname_bem = op.join(subjects_dir, 'sample', 'bem', 'sample-5120-bem-sol.fif')
fname_trans = op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
fname_surf_lh = op.join(subjects_dir, 'sample', 'surf', 'lh.white')
###Output
_____no_output_____
###Markdown
Let's localize the N100m (using MEG only)
###Code
evoked = mne.read_evokeds(fname_ave, condition='Right Auditory',
baseline=(None, 0))
evoked.pick_types(meg=True, eeg=False)
evoked_full = evoked.copy()
evoked.crop(0.07, 0.08)
# Fit a dipole
dip = mne.fit_dipole(evoked, fname_cov, fname_bem, fname_trans)[0]
# Plot the result in 3D brain
dip.plot_locations(fname_trans, 'sample', subjects_dir)
###Output
_____no_output_____
###Markdown
Calculate and visualise magnetic field predicted by dipole with maximum GOF and compare to the measured data, highlighting the ipsilateral (right) source
###Code
fwd, stc = make_forward_dipole(dip, fname_bem, evoked.info, fname_trans)
pred_evoked = simulate_evoked(fwd, stc, evoked.info, None, snr=np.inf)
# find the time point with the highest GOF to plot
best_idx = np.argmax(dip.gof)
best_time = dip.times[best_idx]
# remember to create a subplot for the colorbar
fig, axes = plt.subplots(nrows=1, ncols=4, figsize=[10., 3.4])
vmin, vmax = -400, 400 # make sure each plot has same colour range
# first plot the topography at the time of the best fitting (single) dipole
plot_params = dict(times=best_time, ch_type='mag', outlines='skirt',
colorbar=False)
evoked.plot_topomap(time_format='Measured field', axes=axes[0], **plot_params)
# compare this to the predicted field
pred_evoked.plot_topomap(time_format='Predicted field', axes=axes[1],
**plot_params)
# Subtract predicted from measured data (apply equal weights)
diff = combine_evoked([evoked, pred_evoked], [1, -1])
plot_params['colorbar'] = True
diff.plot_topomap(time_format='Difference', axes=axes[2], **plot_params)
plt.suptitle('Comparison of measured and predicted fields '
'at {:.0f} ms'.format(best_time * 1000.), fontsize=16)
###Output
_____no_output_____
###Markdown
Estimate the time course of a single dipole with fixed position and orientation (the one that maximized GOF) over the entire interval
###Code
dip_fixed = mne.fit_dipole(evoked_full, fname_cov, fname_bem, fname_trans,
pos=dip.pos[best_idx], ori=dip.ori[best_idx])[0]
dip_fixed.plot()
###Output
_____no_output_____ |
sympy_ntheory.ipynb | ###Markdown
###Code
###Output
_____no_output_____
###Markdown
A detour into sympy.ntheory
https://docs.sympy.org/latest/modules/ntheory.html#module-sympy.ntheory.modular
###Code
# Sieve of Eratosthenes
from sympy import sieve
sieve._reset()
display(25 in sieve) # is 25 in the prime sieve? => False
display(sieve) # note: sieve is not just a simple list of primes!
display(type(sieve))
display(type(sieve._list))
display(sieve._list)
display(sieve[3])
display([i for i in sieve._list])
# same as 30 in sieve without return
from sympy import sieve
sieve._reset()
sieve.extend(30)
display(sieve[3]==5)
display(sieve._list)
# nth prime
from sympy import sieve, prime
sieve._reset()
n=20
sieve.extend_to_no(n)
display(sieve[n])
display(prime(n))
display(sieve._list)
# isprime
from sympy import sieve,prime,isprime
sieve._reset()
display(isprime(1299709))
display(sieve._list)
display(sieve[10])
display(sieve._list)
# mobiusrange(a,b) # b is outside of range
# The example below shows the Mobius function values for 7,8,9,10,11,12,13,14,15,16,17
# The Mobius function is 0 when the factorization contains a square => 8,9,12,16
# It is -1 when there is an odd number of distinct prime factors => 7,11,13,17
# It is 1 when there is an even number of distinct prime factors => 10,14,15
# So it is not directly a primality test
from sympy import sieve
print([i for i in sieve.mobiusrange(7, 18)])
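# A quick cross-check of the values above, using only factorint (which is also used later in
# this notebook): mu(n) = 0 if n has a squared prime factor, else (-1)**(number of distinct primes).
from sympy import factorint
def mobius_check(n):
    f = factorint(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)
print([mobius_check(i) for i in range(7, 18)])  # should match sieve.mobiusrange(7, 18)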
# primerange
from sympy import sieve
print([i for i in sieve.primerange(7, 18)])
# write the equivalent of primerange by hand
from sympy import sieve
n=30
sieve.extend_to_no(n)
display([i for i in sieve._list if (i >= 7) & (i < 18)])
# search: returns, as a tuple, the positions of the primes just below and above the given value
from sympy import sieve
n = 8
display(sieve.search(n))
a, b = sieve.search(n)
display(sieve[a], sieve[b])
###Output
_____no_output_____
###Markdown
totient
Euler's totient function is the arithmetic function $\varphi$ that, for a positive integer $n$, gives the number $\varphi (n)$ of natural numbers between 1 and $n$ that are coprime to $n$.
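A small illustration using sympy's built-in `totient` (the values are easy to verify by hand):

```python
from sympy import totient

print(totient(9))   # 6: 1, 2, 4, 5, 7, 8 are coprime to 9
print(totient(10))  # 4: 1, 3, 7, 9
print(totient(7))   # 6: for a prime p, totient(p) == p - 1
```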
###Code
# totientrange: returns the totient values over the given range
# for a prime n the value is n-1
from sympy import sieve
print([i for i in sieve.totientrange(7, 18)])
# nth prime: the n-th prime number
from sympy import prime
display(prime(10))
display(prime(1))
display(prime(100000))
# primepi: the number of primes less than or equal to n; primepi(10) => 2,3,5,7 => 4
from sympy import *
n = 10
primepi(n)
# logarithmic integral function (Euler's logarithmic integral)
from sympy import *
n = 10
print(li(n))
li(n).evalf()
# nextprime
from sympy import nextprime
[(i,nextprime(i)) for i in range (10,15)]
# prevprime
from sympy import prevprime
[(i,prevprime(i)) for i in range (10,15)]
# primerange, sieve
from sympy import primerange, sieve
print([i for i in primerange(1,30)])
print([i for i in sieve.primerange(1,30)])
print(list(sieve.primerange(1,30)))
# randprime, isprime
from sympy import randprime, isprime
print(randprime(1,30))
print(isprime(randprime(1,30)))
from sympy.ntheory.generate import primorial, primerange
from sympy import factorint, Mul, primefactors, sqrt
print(primorial(4)) # the first 4 primes are 2, 3, 5, 7
print(primorial(4, nth=False)) # primes <= 4 are 2 and 3
print(primorial(1))
print(primorial(1, nth=False))
print(sqrt(101).evalf())
print(primorial(sqrt(101), nth=False))
print(factorint(primorial(4) + 1))
print(factorint(210))
print(factorint(primorial(4) - 1))
p = list(primerange(10, 20)) # generate all primes in a given range
print(p)
print(sorted(set(primefactors(Mul(*p) + 1)).difference(set(p))))
from sympy.ntheory.generate import cycle_length
func = lambda i: (i**2 + 1) % 51
print(next(cycle_length(func, 4)))
n = cycle_length(func, 4, values=True)
print(list(ni for ni in n))
print(next(cycle_length(func,4,nmax=4)))
print([ni for ni in cycle_length(func,4,nmax=4,values=True)])
from sympy import composite
print(composite(36))
print(composite(1))
print(composite(17737))
from sympy import compositepi
print(compositepi(25))
print(compositepi(1))
print(compositepi(1000))
from sympy.ntheory.factor_ import smoothness
print(smoothness(2**7*3**2))
print(smoothness(2**4*13))
print(smoothness(2))
from sympy.ntheory.factor_ import smoothness_p
print(smoothness_p(10431, m=1))
print(smoothness_p(10431))
print(smoothness_p(10431,power=1))
print(smoothness_p(21477639576571, visual=1))
print(factorint(17*9))
print(smoothness_p(factorint(17*9)))
print(smoothness_p(smoothness_p(factorint(17*9))))
from sympy import trailing
print(trailing(128))
print(trailing(63))
from sympy.ntheory import multiplicity
from sympy.core.numbers import Rational as R
print([multiplicity(5, n) for n in [8, 5, 25, 125, 250]])
print(multiplicity(3, R(1, 9)))
###Output
[0, 1, 2, 3, 3]
-2
.ipynb_checkpoints/strip_chart-checkpoint.ipynb | ###Markdown
Oscilloscope
Emulates an oscilloscope.
###Code
import numpy as np
from matplotlib.lines import Line2D
import matplotlib.pyplot as plt
import matplotlib.animation as animation
class Scope:
def __init__(self, ax, maxt=2, dt=0.02):
self.ax = ax
self.dt = dt
self.maxt = maxt
self.tdata = [0]
self.ydata = [0]
self.line = Line2D(self.tdata, self.ydata)
self.ax.add_line(self.line)
self.ax.set_ylim(-.1, 1.1)
self.ax.set_xlim(0, self.maxt)
def update(self, y):
lastt = self.tdata[-1]
if lastt > self.tdata[0] + self.maxt: # reset the arrays
self.tdata = [self.tdata[-1]]
self.ydata = [self.ydata[-1]]
self.ax.set_xlim(self.tdata[0], self.tdata[0] + self.maxt)
self.ax.figure.canvas.draw()
t = self.tdata[-1] + self.dt
self.tdata.append(t)
self.ydata.append(y)
self.line.set_data(self.tdata, self.ydata)
return self.line,
def emitter(p=0.1):
"""Return a random value in [0, 1) with probability p, else 0."""
while True:
v = np.random.rand(1)
if v > p:
yield 0.
else:
yield np.random.rand(1)
# Fixing random state for reproducibility
np.random.seed(19680801 // 10)
fig, ax = plt.subplots()
scope = Scope(ax)
# pass a generator in "emitter" to produce data for the update func
ani = animation.FuncAnimation(fig, scope.update, emitter, interval=50,
blit=True)
plt.show()
###Output
_____no_output_____ |
notebooks/Natural Language Patient Finder - One Fragment - DREAm Team - Final.ipynb | ###Markdown
Natural Language Patient Finder
Identify patients stored in a FHIR server that match a natural language fragment.
###Code
import requests
import json
###Output
_____no_output_____
###Markdown
Set target services
Services usually share a namespace. Specify namespace to be used.
###Code
ingress_subdomain = ""
namespace = "hackathon" # input("Enter a service namespace: ")
#Learned Intent endpoint
intentURL = "https://" + namespace + "-learned-intent." + ingress_subdomain + "/?text="
#Key Concept Extractor endpoint
extractConceptsURL = "https://" + namespace + "-key-concept-extractor." + ingress_subdomain
#Generate CQL endpoint
generateCQLURL = "https://" + namespace + "-cql-generator." + ingress_subdomain + "/generate"
#libraries
cohortURL = "https://hackathon-ingestion-cohort-service." + ingress_subdomain + "/cohort-service/libraries"
#/libraries/nameoflibrary (get or delete)
#/libraries (post new library)
#/libraries/nameoflibrary/patients (return list of patients)
#/libraries/nameoflibrary/patientIds (return list of patient ids)
###Output
_____no_output_____
###Markdown
Provide fragment to be analyzed
Examples:
- Female patients > 18 years old
- Patient has diabetes mellitus
- Hypertriglyceridemia
- nonproliferative diabetic retinopathy
- no gestational diabetes
- Patient is able to return to Johns Hopkins for follow-up appointments.
###Code
fragment = input("Enter a fragment: ")
###Output
_____no_output_____
###Markdown
Identify Intent of fragment
###Code
url = intentURL + "\"" + fragment + "\""
resp = requests.get(url=url)
intent = resp.content.decode("utf-8")
print("Learned intent is: ", intent)
###Output
_____no_output_____
###Markdown
Extract key entities for given fragment and intent
###Code
params = {'intent': intent, 'text': fragment}
# sending get request and saving the response as response object
concept_set = requests.get(url = extractConceptsURL, params = params).content.decode("utf-8")
parsed = json.loads(concept_set)
print(json.dumps(parsed, indent=4, sort_keys=True))
###Output
_____no_output_____
###Markdown
(Optional) show SnoMed concept expansion
###Code
snomed_code = input("Enter SnoMed code to expand: ")
snomed_expansion_url = "https://snowstorm-fhir.snomedtools.org/fhir/CodeSystem/$lookup?code=" + snomed_code;
expansion_response = requests.get(url=snomed_expansion_url)
parsed = json.loads( expansion_response.content.decode("utf-8"))
print("\nSnoMed expansion:")
print(json.dumps(parsed, indent=4, sort_keys=True))
###Output
_____no_output_____
###Markdown
Generate CQL from concepts and identified type
###Code
cql_response = requests.post(url=generateCQLURL, data={'conceptSet': concept_set})
cql = cql_response.content.decode("utf-8")
print("\nCQL Generator produced:\n\n", cql)
###Output
_____no_output_____
###Markdown
Push CQL to Cohort Service and Run
###Code
library_name = cql.split('"')[1]
print("name:", library_name)
library_version = cql.split("'")[1]
print("version:", library_version)
library_endpoint = library_name + "-" + library_version
print("endpoint extension:", library_endpoint)
#verify no existing CQL exists with this name. Delete if it does.
resp = requests.delete(cohortURL + "/" + library_endpoint)
print(resp.content.decode("utf-8"))
#Push CQL to cohort service
headers = {}
headers["Content-Type"] = "text/plain"
headers["Accept"] = "application/json"
resp = requests.post(url = cohortURL, data = cql, headers = headers)
print("Posting CQL to Cohort Service: " + str(resp))
# Run CQL to find matching patient ID's
run_endpoint = cohortURL + "/" + library_endpoint + "/patientIds"
resp = requests.get(url=run_endpoint)
print("Running CQL on Cohort Service: " + str(resp))
result = resp.content.decode("utf-8")
ids = (eval(result))
print(len(ids), "patients returned")
for id in ids:
print(id)
###Output
_____no_output_____
###Markdown
Run CQL on cohort service to identify matching patients
###Code
run_endpoint = cohortURL + "/" + library_endpoint + "/patients"
total_resp = requests.get(url=run_endpoint)
print(json.dumps(total_resp.json(), indent=4, sort_keys=True))
###Output
_____no_output_____ |
semi_s_training/2. Store as numpy.ipynb | ###Markdown
Reading DICOM scans and saving as numpy arrays
###Code
import random
import os
from pathlib import Path
from collections import Counter
import numpy as np
import pandas as pd
from tqdm.auto import tqdm
from lunglens.data import *
data_dir = Path('../data/extracted')
dest_root_dir = Path('../data/prepared')
###Output
_____no_output_____
###Markdown
Kaggle: OSIC Pulmonary Fibrosis Progression
###Code
ds_dir = data_dir/'osic-pulmonary-fibrosis-progression'
dest_dir = dest_root_dir/'osic-pulmonary-fibrosis-progression/'
all_scans = list(ds_dir.rglob('ID0*'))
convert_dicoms2np(all_scans, dest_dir)
all_files = list(dest_dir.glob('*/*.npy'))
scans_folders = [f.parent.name for f in all_files]
scan_sizes = list(Counter(scans_folders).values())
print_hist(np.array(scan_sizes))
###Output
_____no_output_____
###Markdown
COVID19_1110 dataset
###Code
ds_dir = data_dir/'COVID19_1110'
dest_dir = dest_root_dir/'COVID19_1110'
all_files = list(ds_dir.rglob('*.nii'))
len(all_files)
convert_dicoms2np(all_files, dest_dir)
###Output
_____no_output_____
###Markdown
Kaggle: covid19-ct-scans
###Code
ds_dir = data_dir / 'covid19-ct-scans'
dest_dir = dest_root_dir / 'covid19-ct-scans'
(dest_dir / 'infected').mkdir(parents=True, exist_ok=True)
(dest_dir / 'healthy').mkdir(parents=True, exist_ok=True)
df = pd.read_csv(ds_dir / 'metadata.csv')
df[['ct_scan', 'infection_mask']].head()
rel_path_exp = 'covid19-ct-scans/(.+)'
df['mask_path'] = df.infection_mask.str.extract(rel_path_exp)
df['scan_path'] = df.ct_scan.str.extract(rel_path_exp)
scan2mask = dict(zip(df.scan_path, df.mask_path))
dataset = []
for scan_id in tqdm(df.scan_path):
scan_path = ds_dir / scan_id
mask_path = ds_dir / scan2mask[scan_id]
mask, _ = read_dicom_file(mask_path)
scan, _ = read_dicom_file(scan_path)
# apply lung CT window and normalize
scan = appply_window(scan, normalize=True)
infection_per_slice = mask.sum(axis=(1,2))
for i, infection in enumerate(infection_per_slice):
lbl = 'infected' if infection > 0 else 'healthy'
rel_path = f'{lbl}/{scan_path.stem}-{i:03}.npy'
slice_dest_path = dest_dir / rel_path
np.save(str(slice_dest_path), scan[i])
dataset.append({
'path': rel_path,
'label': lbl
})
df = pd.DataFrame(dataset)
df.to_csv(dest_dir / 'metadata.csv', index=False)
###Output
_____no_output_____
###Markdown
Looking at stored data
###Code
infected_slices = list((dest_dir / 'infected').glob('*.npy'))
len(infected_slices)
data = np.load(str(random.choice(infected_slices)))
print_slice(data)
random_h_slices = random.choice(list((dest_dir / 'healthy').glob('*.npy')))
data = np.load(str(random_h_slices))
print_slice(data)
###Output
_____no_output_____ |
notebooks/Supplementary-Table-2.ipynb | ###Markdown
Forest Offsets Paper - Supplementary Table 2
###Code
import os
import fsspec
import json
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load the data
###Code
with fsspec.open(
"https://carbonplan.blob.core.windows.net/carbonplan-forests/offsets/archive/results/reclassification-labels.json",
"r",
) as f:
data = json.load(f)
with fsspec.open(
"https://carbonplan.blob.core.windows.net/carbonplan-forests/offsets/archive/results/reclassification-crediting-error.json",
"r",
) as f:
analysis = json.load(f)
###Output
_____no_output_____
###Markdown
Plot the table
###Code
df = pd.DataFrame()
df["Project"] = [d["id"] for d in data]
df["Supersection"] = [d["ss_id"] for d in data]
df["Assessment Area"] = [d["aa_id"] for d in data]
df["Species"] = [
"\n".join(
[
str(s["name"]).capitalize() + " : " + "%.1f" % (s["fraction"] * 100) + "%"
for s in d["species"]
]
)
for d in data
]
df["Classification"] = [
"\n".join(
[str(s[0]).capitalize() + " : " + "%.1f" % (s[1] * 100) + "%" for s in d["classification"]]
)
for d in data
]
###Output
_____no_output_____
###Markdown
Filter to only include projects used in our primary analysis
###Code
df = df[[d[1]["Project"] in analysis.keys() for d in df.iterrows()]]
df.drop_duplicates().style.set_properties(
**{
"white-space": "pre-wrap",
}
).hide_index()
###Output
_____no_output_____ |
sagemaker-kubernetes/sagemaker-components-kubeflow-pipelines/kfp-sagemaker-script-mode.ipynb | ###Markdown
Amazon SageMaker Components for Kubeflow Pipelines - script mode
In this example we'll build a Kubeflow pipeline where every component calls a different Amazon SageMaker feature.
Our simple pipeline will perform:
1. Hyperparameter optimization
1. Select best hyperparameters and increase epochs
1. Training model on the best hyperparameters
1. Create an Amazon SageMaker model
1. Deploy model
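As a condensed preview of how the `cifar10_hpo_train_deploy` pipeline defined further below chains these five steps, each component's output feeds the next one (arguments omitted here; this is only a schematic of the real code that follows):
```python
# Schematic only - the full argument lists appear in the pipeline definition below
# hpo          = sagemaker_hpo_op(...)                                          # 1. hyperparameter tuning
# training_hyp = get_best_hyp_op(hpo.outputs['best_hyperparameters'])           # 2. keep best config, raise epochs
# training     = sagemaker_train_op(hyperparameters=training_hyp.output, ...)   # 3. full training run
# create_model = sagemaker_model_op(model_artifact_url=training.outputs['model_artifact_url'], ...)  # 4. model
# prediction   = sagemaker_deploy_op(model_name_1=create_model.output, ...)     # 5. deploy endpoint
```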
###Code
import kfp
from kfp import components
from kfp.components import func_to_container_op
from kfp import dsl
import time, os, json
###Output
_____no_output_____
###Markdown
The SageMaker component definitions used below are loaded from this repository: https://github.com/kubeflow/pipelines/tree/master/components/aws/sagemaker
###Code
sagemaker_hpo_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/cb36f87b727df0578f4c1e3fe9c24a30bb59e5a2/components/aws/sagemaker/hyperparameter_tuning/component.yaml')
sagemaker_train_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/cb36f87b727df0578f4c1e3fe9c24a30bb59e5a2/components/aws/sagemaker/train/component.yaml')
sagemaker_model_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/cb36f87b727df0578f4c1e3fe9c24a30bb59e5a2/components/aws/sagemaker/model/component.yaml')
sagemaker_deploy_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/cb36f87b727df0578f4c1e3fe9c24a30bb59e5a2/components/aws/sagemaker/deploy/component.yaml')
import sagemaker
import boto3
sess = boto3.Session()
sm = sess.client('sagemaker')
role = sagemaker.get_execution_role()
sagemaker_session = sagemaker.Session(boto_session=sess)
###Output
_____no_output_____
###Markdown
Prepare training datasets and upload to Amazon S3
###Code
bucket_name = sagemaker_session.default_bucket()
job_folder = 'jobs'
dataset_folder = 'datasets'
local_dataset = 'cifar10'
!python generate_cifar10_tfrecords.py --data-dir {local_dataset}
datasets = sagemaker_session.upload_data(path='cifar10', key_prefix='datasets/cifar10-dataset')
# If dataset is already in S3 use the dataset's path:
# datasets = 's3://{bucket_name}/{dataset_folder}/cifar10-dataset'
###Output
_____no_output_____
###Markdown
Upload training scripts to Amazon S3
###Code
!tar cvfz sourcedir.tar.gz --exclude=".ipynb*" -C code .
source_s3 = sagemaker_session.upload_data(path='sourcedir.tar.gz', key_prefix='training-scripts')
print('\nUploaded to S3 location:')
print(source_s3)
###Output
_____no_output_____
###Markdown
Create a custom pipeline op
Takes the results from a hyperparameter tuning job and increases the number of epochs for the next training job.
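For example (the hyperparameter values here are made up for illustration), once the cell below has run, the helper simply merges an `epochs` entry into the tuning result:
```python
example = '{"learning-rate": "0.0005", "optimizer": "adam", "batch-size": "128"}'  # illustrative values
print(update_best_model_hyperparams(example))
# {"learning-rate": "0.0005", "optimizer": "adam", "batch-size": "128", "epochs": "80"}
```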
###Code
def update_best_model_hyperparams(hpo_results, best_model_epoch = "80") -> str:
import json
r = json.loads(str(hpo_results))
return json.dumps(dict(r,epochs=best_model_epoch))
get_best_hyp_op = func_to_container_op(update_best_model_hyperparams)
###Output
_____no_output_____
###Markdown
Create a pipeline
###Code
@dsl.pipeline(
name='cifar10 hpo train deploy pipeline',
description='cifar10 hpo train deploy pipeline using sagemaker'
)
def cifar10_hpo_train_deploy(region='us-west-2',
training_input_mode='File',
train_image='763104351884.dkr.ecr.us-west-2.amazonaws.com/tensorflow-training:1.15.2-gpu-py36-cu100-ubuntu18.04',
serving_image='763104351884.dkr.ecr.us-west-2.amazonaws.com/tensorflow-inference:1.15.2-cpu',
volume_size='50',
max_run_time='86400',
instance_type='ml.p3.2xlarge',
network_isolation='False',
traffic_encryption='False',
spot_instance='False',
channels='[ \
{ \
"ChannelName": "train", \
"DataSource": { \
"S3DataSource": { \
"S3DataType": "S3Prefix", \
"S3Uri": "'+datasets+'/train", \
"S3DataDistributionType": "FullyReplicated" \
} \
}, \
"CompressionType": "None", \
"RecordWrapperType": "None" \
}, \
{ \
"ChannelName": "validation", \
"DataSource": { \
"S3DataSource": { \
"S3DataType": "S3Prefix", \
"S3Uri": "'+datasets+'/validation", \
"S3DataDistributionType": "FullyReplicated" \
} \
}, \
"CompressionType": "None", \
"RecordWrapperType": "None" \
}, \
{ \
"ChannelName": "eval", \
"DataSource": { \
"S3DataSource": { \
"S3DataType": "S3Prefix", \
"S3Uri": "'+datasets+'/eval", \
"S3DataDistributionType": "FullyReplicated" \
} \
}, \
"CompressionType": "None", \
"RecordWrapperType": "None" \
} \
]'
):
# Component 1
hpo = sagemaker_hpo_op(
region=region,
image=train_image,
training_input_mode=training_input_mode,
strategy='Bayesian',
metric_name='val_acc',
metric_definitions='{"val_acc": "val_acc: ([0-9\\\\.]+)"}',
metric_type='Maximize',
static_parameters='{ \
"epochs": "10", \
"momentum": "0.9", \
"weight-decay": "0.0002", \
"model_dir":"s3://'+bucket_name+'/jobs", \
"sagemaker_program": "cifar10-training-sagemaker.py", \
"sagemaker_region": "us-west-2", \
"sagemaker_submit_directory": "'+source_s3+'" \
}',
continuous_parameters='[ \
{"Name": "learning-rate", "MinValue": "0.0001", "MaxValue": "0.1", "ScalingType": "Logarithmic"} \
]',
categorical_parameters='[ \
{"Name": "optimizer", "Values": ["sgd", "adam"]}, \
{"Name": "batch-size", "Values": ["32", "128", "256"]}, \
{"Name": "model-type", "Values": ["resnet", "custom"]} \
]',
channels=channels,
output_location=f's3://{bucket_name}/jobs',
instance_type=instance_type,
instance_count='1',
volume_size=volume_size,
max_num_jobs='16',
max_parallel_jobs='4',
max_run_time=max_run_time,
network_isolation=network_isolation,
traffic_encryption=traffic_encryption,
spot_instance=spot_instance,
role=role
)
# Component 2
training_hyp = get_best_hyp_op(hpo.outputs['best_hyperparameters'])
# Component 3
training = sagemaker_train_op(
region=region,
image=train_image,
training_input_mode=training_input_mode,
hyperparameters=training_hyp.output,
channels=channels,
instance_type=instance_type,
instance_count='1',
volume_size=volume_size,
max_run_time=max_run_time,
model_artifact_path=f's3://{bucket_name}/jobs',
network_isolation=network_isolation,
traffic_encryption=traffic_encryption,
spot_instance=spot_instance,
role=role,
)
# Component 4
create_model = sagemaker_model_op(
region=region,
model_name=training.outputs['job_name'],
image=serving_image,
model_artifact_url=training.outputs['model_artifact_url'],
network_isolation=network_isolation,
role=role
)
# Component 5
prediction = sagemaker_deploy_op(
region=region,
model_name_1=create_model.output,
instance_type_1='ml.m5.large'
)
kfp.compiler.Compiler().compile(cifar10_hpo_train_deploy,'sm-hpo-train-deploy-pipeline.zip')
client = kfp.Client()
aws_experiment = client.create_experiment(name='sm-kfp-experiment')
exp_name = f'cifar10-hpo-train-deploy-kfp-{time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())}'
my_run = client.run_pipeline(aws_experiment.id, exp_name, 'sm-hpo-train-deploy-pipeline.zip')
import json, boto3, numpy as np
client = boto3.client('runtime.sagemaker')
file_name = '1000_dog.png'
with open(file_name, 'rb') as f:
payload = f.read()
response = client.invoke_endpoint(EndpointName='Endpoint-20200522021801-DR5P',
ContentType='application/x-image',
Body=payload)
pred = json.loads(response['Body'].read())['predictions']
labels = ['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck']
for l,p in zip(labels, pred[0]):
print(l,"{:.4f}".format(p*100))
###Output
_____no_output_____ |
notebooks/candidates/next_sentence_prediction.ipynb | ###Markdown
Next Sentence Prediction
###Code
!which python
!pip freeze|grep transformers
from tqdm.notebook import tqdm
import json
import pandas as pd
import numpy as np
import pickle
from smart_open import open
import os
import sys
import random
import json
import pickle
import logging
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix, precision_recall_fscore_support
from IPython.core.display import display
from collections import defaultdict
import itertools
#sys.path.append(os.path.dirname(os.getcwd()))
from experiments.environment import get_env
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
level=logging.INFO,
)
logger = logging.getLogger(__name__)
os.environ['WANDB_DISABLED'] = 'true'
env = get_env()
!nvidia-smi
#os.environ['CUDA_VISIBLE_DEVICES'] = '1'
os.environ['CUDA_VISIBLE_DEVICES'] = '1,2,3,5,6,7'
os.environ['CUDA_VISIBLE_DEVICES'] = '2,3,5,6'
from itertools import combinations
from transformers import Trainer, TrainingArguments
from typing import Dict
from transformers.data import metrics
from transformers import EvalPrediction
from sklearn.metrics import classification_report, precision_recall_fscore_support, f1_score
from scipy.special import softmax
import torch
from torch.utils.data import Dataset
from transformers import BertForNextSentencePrediction, BertTokenizerFast, BertForSequenceClassification
output_dir = './output/nsp'
data_dir = '/data/experiments/hensel/storytelling-candidates/data'
train_batch_size = 24
eval_batch_size = 28
model_name = 'bert-base-cased'
#model_name = 's2orc-scibert'
model_name_or_path = os.path.join(env['bert_dir'], model_name)
model_output_dir = os.path.join(output_dir, model_name)
model_output_dir
###Output
_____no_output_____
###Markdown
Prepare dataset
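The dataset below encodes sentence pairs for BERT's next-sentence-prediction head, where label 0 means sentence B really follows sentence A and label 1 means B was drawn from another document. A minimal sketch of the pair encoding (the two sentences are made up; it uses the `tokenizer` loaded a few cells below):
```python
enc = tokenizer(text="The patient was admitted on Monday.",
                text_pair="She was discharged two days later.",
                return_token_type_ids=True, truncation=True, max_length=32)
print(enc["token_type_ids"])  # 0s mark sentence A tokens, 1s mark sentence B tokens
```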
###Code
meta_df = pd.read_csv(os.path.join(data_dir, 'meta_data-12-02-21.tsv'), sep='\t', index_col=0)
meta_df
train_doc_ids, test_doc_ids = train_test_split(meta_df.doc_id.unique().tolist(), test_size=0.2, random_state=1)
logger.info(f'Train: {len(train_doc_ids)}, Test: {len(test_doc_ids)}')
# sub sample
#doc_ids = random.sample(meta_df.doc_id.unique().tolist(), 1000)
#len(doc_ids)
tokenizer = BertTokenizerFast.from_pretrained(model_name_or_path)
# Load model
model = BertForNextSentencePrediction.from_pretrained(model_name_or_path)
#model = BertForSequenceClassification.from_pretrained(model_name_or_path, num_labels=2)
class NSPDataset(Dataset):
"""
0 indicates sequence B is a continuation of sequence A,
1 indicates sequence B is a random sequence.
"""
max_length = 256
neg_ratio = 1.0
positive_label = 0
negative_label = 1
def __init__(self, df, doc_ids, hf_tokenizer, return_labels=True, sample_n=0):
self.df = df
self.doc_ids = doc_ids
self.tokenizer = hf_tokenizer
self.return_labels = return_labels
self.sample_n = sample_n
self.inputs = []
self.samples = []
def load(self):
logger.info(f'Dataframe size: {len(self.df)}')
# sub-sample
if self.sample_n > 0:
logger.info('Sub-sampling..')
self.doc_ids = random.sample(self.doc_ids, self.sample_n)
# filter by doc ids
self.df = self.df[self.df.doc_id.isin(self.doc_ids)]
# positives
positive_pairs = set()
for doc_id, doc_df in self.df.groupby('doc_id'):
# positive samples
prev_sentence = None
for sent in doc_df.sort_values('start')['text']:
if len(sent) > 50:
if prev_sentence is not None:
pair = (prev_sentence, sent)
positive_pairs.add(pair)
prev_sentence = sent
logger.info(f'Postive samples: {len(positive_pairs)}')
# negatives
negative_pairs = set()
neg_needed = int(self.neg_ratio * len(positive_pairs))
tries = 0
logger.info(f'Randomly selecting {neg_needed} negative samples (ratio={self.neg_ratio})')
sents = meta_df[['doc_id', 'text']].values.tolist()
random.shuffle(sents)
rand_sents = sents
rand_pairs = iter([rand_sents[i:i+2] for i in range(0, len(rand_sents), 2)])
while len(negative_pairs) < neg_needed:
(a_doc_id, a_sent), (b_doc_id, b_sent) = next(rand_pairs)
#print(a_doc_id)
pair = (a_sent, b_sent)
# TODO # and pair not in positive_pairs and pair not in negative_pairs
if a_doc_id != b_doc_id:
negative_pairs.add(pair)
else:
tries += 1
logger.info(f'done after {tries} invalid tries')
logger.info(f'Tokenize... {len(positive_pairs):,} + {len(negative_pairs):,} samples')
self.inputs = self.tokenizer(
text=[a for a, b in positive_pairs] + [a for a, b in negative_pairs],
text_pair=[b for a, b in positive_pairs] + [b for a, b in negative_pairs],
add_special_tokens=True,
return_attention_mask=True,
return_tensors='pt',
padding='max_length',
max_length=self.max_length,
truncation=True,
return_token_type_ids=True
)
if self.return_labels:
labels = [self.positive_label] * len(positive_pairs)
labels += [self.negative_label] * len(negative_pairs)
self.inputs['labels'] = torch.tensor(labels)
logger.info('Dataset loaded')
def __getitem__(self, idx):
return {k: v[idx] for k, v in self.inputs.items()}
def __len__(self):
return len(self.inputs['input_ids'])
train_ds = NSPDataset(meta_df, train_doc_ids, tokenizer, return_labels=True, sample_n=5000)
train_ds.load()
test_ds = NSPDataset(meta_df, test_doc_ids, tokenizer, return_labels=True, sample_n=500)
test_ds.load()
###Output
2021-02-19 11:47:33 - INFO - __main__ - Dataframe size: 175720
2021-02-19 11:47:33 - INFO - __main__ - Sub-sampling..
2021-02-19 11:47:33 - INFO - __main__ - Postive samples: 4823
2021-02-19 11:47:33 - INFO - __main__ - Randomly selecting 4823 negative samples (ratio=1.0)
2021-02-19 11:47:34 - INFO - __main__ - done after 1 invalid tries
2021-02-19 11:47:34 - INFO - __main__ - Tokenize... 4,823 + 4,823 samples
2021-02-19 11:47:35 - INFO - __main__ - Dataset loaded
###Markdown
Training
###Code
def compute_metrics(predict_out: EvalPrediction) -> Dict:
y_true = predict_out.label_ids
y_pred = np.argmax(predict_out.predictions, axis=1)
return {
"f1_micro": f1_score(y_true, y_pred, average='micro'),
"f1_macro": f1_score(y_true, y_pred, average='macro'),
"classification_report": classification_report(
y_true, y_pred,
labels=[0,1],
target_names=['pos','neg'],
output_dict=True,
),
"classification_report_str": classification_report(
y_true, y_pred,
labels=[0,1],
target_names=['pos','neg'],
)
}
trainer = Trainer(
model=model,
args=TrainingArguments(
output_dir=model_output_dir,
overwrite_output_dir=True,
learning_rate=2e-5,
do_train=True,
num_train_epochs=3,
per_device_train_batch_size=train_batch_size,
per_device_eval_batch_size=eval_batch_size,
save_steps=0,
save_total_limit=3,
eval_steps=50,
do_eval=True,
),
train_dataset=train_ds,
eval_dataset=test_ds,
compute_metrics=compute_metrics,
)
train_out = trainer.train()
train_out
trainer.save_model()
tokenizer.save_pretrained(model_output_dir)
predict_out = trainer.predict(test_ds)
print(compute_metrics(predict_out)['classification_report_str'])
###Output
precision recall f1-score support
pos 0.94 0.91 0.93 4823
neg 0.91 0.94 0.93 4823
accuracy 0.93 9646
macro avg 0.93 0.93 0.93 9646
weighted avg 0.93 0.93 0.93 9646
###Markdown
Predict for sentences with high similarity
###Code
# Create your own dataset
class PredictNSPDataset(Dataset):
max_length = 256
def __init__(self, sent_id2text, sim_df, hf_tokenizer, min_sim=0., max_sim=1., sample_n=0):
self.sent_id2text = sent_id2text
self.sim_df = sim_df
self.tokenizer = hf_tokenizer
self.min_sim = min_sim
self.max_sim = max_sim
self.sample_n = sample_n
self.inputs = []
self.samples = []
def load(self):
self.sent1_ids = []
self.sent2_ids = []
texts1 = []
texts2 = []
logger.info(f'sent_id2text size: {len(self.sent_id2text)}')
if self.sim_df is None:
# Random pairs
sent_ids = list(self.sent_id2text.keys())
sent_id_pairs = combinations(sent_ids, 2)
logger.info(f'Possible pairs: {(len(sent_ids) *(len(sent_ids)-1))/2:,}')
# logger.info('Sub-sampling..')
# sent_id_pairs = random.sample(list(sent_id_pairs), self.sample_n)
for a, b in sent_id_pairs:
if self.sample_n > 0 and len(texts1) > self.sample_n:
logger.info('stop...')
break
self.sent1_ids.append(a)
self.sent2_ids.append(b)
texts1.append(self.sent_id2text[a])
texts2.append(self.sent_id2text[b])
else:
# Using similarity as candidate filter
logger.info(f'Similarity dataframe size: {len(self.sim_df)}')
# filter
self.sim_df = self.sim_df[(self.sim_df.similarity >= self.min_sim) & (self.sim_df.similarity <= self.max_sim)]
# sub-sample
if self.sample_n > 0:
logger.info('Sub-sampling..')
self.sim_df = self.sim_df.sample(self.sample_n)
for a, b in self.sim_df[['sent1_id', 'sent2_id']].values:
if a in self.sent_id2text and b in self.sent_id2text:
self.sent1_ids.append(a)
self.sent2_ids.append(b)
texts1.append(self.sent_id2text[a])
texts2.append(self.sent_id2text[b])
logger.info(f'Tokenize... {len(texts1):,} samples')
self.inputs = self.tokenizer(
text=texts1,
text_pair=texts2,
add_special_tokens=True,
return_attention_mask=True,
return_tensors='pt',
padding='max_length',
max_length=self.max_length,
truncation=True,
return_token_type_ids=True
)
logger.info('Dataset loaded')
def __getitem__(self, idx):
return {k: v[idx] for k, v in self.inputs.items()}
def __len__(self):
return len(self.inputs['input_ids'])
# Load similarity dataframe
sim_df = pd.read_csv(os.path.join(data_dir, 'sentence_pairs-12-02-21.tsv'), sep='\t')
sent_id2text = {sent_id: row['text'] for sent_id, row in meta_df.iterrows()}
sim_df
# Tokenize dataset
#pred_ds = PredictNSPDataset(sent_id2text, sim_df, tokenizer, min_sim=0, max_sim=1, sample_n=0)
#pred_ds.load()
# Tokenize dataset / without similarity
pred_ds = PredictNSPDataset(sent_id2text, None, tokenizer, sample_n=1_000_000)
pred_ds.load()
# Load previously trained model from disk
model = BertForNextSentencePrediction.from_pretrained('./output/nsp/bert-base-cased')
pred_trainer = Trainer(
model=model,
args=TrainingArguments(
output_dir='./output/nsp__predict/bert-base-cased',
per_device_eval_batch_size=512,
),
)
pred_out = pred_trainer.predict(pred_ds)
from scipy.special import softmax
nsp_df = pd.DataFrame(dict(
sent1_id=pred_ds.sent1_ids,
sent2_id=pred_ds.sent2_ids,
is_next_sentence=softmax(pred_out.predictions[:,0]),
is_not_next_sentence=softmax(pred_out.predictions[:,1]),
))
nsp_df
min_score = 0.1
nsp_df[nsp_df.is_next_sentence > nsp_df.is_not_next_sentence]
nsp_df.to_csv('./output/nsp.1m.csv', index=False)
###Output
_____no_output_____ |
Butterfly_Pytorch.ipynb | ###Markdown
Butterfly Images - Pytorch
In June 2016, I read an article about TensorFlow applied to butterfly images, right after presenting my Master's capstone project, which I had changed from Convolutional Neural Networks to Enterprise Architecture. At the time I didn't know that neural networks were the building blocks of Deep Learning, or that the blog I was reading was about Deep Learning and Computer Vision. That article triggered the start of my learning journey in Deep Learning, Computer Vision, and Artificial Intelligence. You can find the blog post here: [a poet does tensorflow](https://www.oreilly.com/learning/a-poet-does-tensorflow)
###Code
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms, models
from torch.utils.data.sampler import SubsetRandomSampler
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
```dataset = datasets.ImageFolder('path/to/data', transform=transform)```
Load Data
###Code
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of data set to use as test
test_size = 0.2
transform = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
data_set = datasets.ImageFolder(root="data",transform=transform)
dataloader = torch.utils.data.DataLoader(data_set, batch_size=4,shuffle=True,num_workers=2)
# obtain training indices that will be used for test
num_data = len(data_set)
indices = list(range(num_data))
np.random.shuffle(indices)
split = int(np.floor(test_size * num_data))
train_idx, test_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
test_sampler = SubsetRandomSampler(test_idx)
# prepare data loaders
trainloader = torch.utils.data.DataLoader(data_set, batch_size=batch_size,
sampler = train_sampler, num_workers=num_workers)
testloader = torch.utils.data.DataLoader(data_set, batch_size=batch_size,
sampler = test_sampler, num_workers=num_workers)
classes = ('blurry','clear')
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
###Output
_____no_output_____
###Markdown
Train Images
Here we define our network architecture and train it. The first linear layer takes 150528 inputs because each RGB image is resized to 224x224 and then flattened (3 x 224 x 224 = 150528).
###Code
model = nn.Sequential(nn.Linear(150528, 500),
nn.Linear(500, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.003)
epochs = 30
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
# Flatten images into a long vector
images = images.view(images.shape[0], -1)
# Training pass
optimizer.zero_grad()
output = model.forward(images)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
print(f"Training loss: {running_loss/len(trainloader)}")
torch.save(model.state_dict(), 'checkpoint.pth')
state_dict = torch.load('checkpoint.pth')
print(state_dict.keys())
model.load_state_dict(state_dict)
###Output
_____no_output_____
###Markdown
Transfer Learning
For transfer learning, let's use one of the most famous pre-trained models: ResNet (here `resnet152`). The pre-trained backbone is frozen and only a small new classification head is trained.
###Code
from torchvision import models
model_resnet152 = models.resnet152(pretrained=True)
model_resnet152
# Freeze parameters so we don't backprop through them
for param in model_resnet152.parameters():
param.requires_grad = False
from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(2048, 500)),  # resnet152's final feature size is 2048
    ('relu', nn.ReLU()),
    ('fc2', nn.Linear(500, 2)),
    ('output', nn.LogSoftmax(dim=1))
]))
# torchvision ResNets expose their final layer as `fc`; replacing it puts the new head in the forward pass
model_resnet152.fc = classifier
# only the new head is trainable, since the backbone was frozen above
optimizer = optim.SGD(model_resnet152.fc.parameters(), lr=0.003)
criterion = nn.NLLLoss()
epochs = 30
steps = 0
running_loss = 0
print_every = 5
device = 'cpu'
for epoch in range(epochs):
for inputs, labels in trainloader:
steps += 1
# Move input and label tensors to the default device
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
logps = model_resnet152.forward(inputs)
loss = criterion(logps, labels)
        loss.backward()  # backpropagate the loss before taking the optimizer step
        optimizer.step()
running_loss += loss.item()
if steps % print_every == 0:
test_loss = 0
accuracy = 0
model_resnet152.eval()
with torch.no_grad():
for inputs, labels in testloader:
inputs, labels = inputs.to(device), labels.to(device)
logps = model_resnet152.forward(inputs)
batch_loss = criterion(logps, labels)
test_loss += batch_loss.item()
# Calculate accuracy
ps = torch.exp(logps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
print(f"Epoch {epoch+1}/{epochs}.. "
f"Train loss: {running_loss/print_every:.3f}.. "
f"Test loss: {test_loss/len(testloader):.3f}.. "
f"Test accuracy: {accuracy/len(testloader):.3f}")
running_loss = 0
model_resnet152.train()
torch.save(model_resnet152.state_dict(), 'resnet_checkpoint.pth')
###Output
_____no_output_____ |
Pandas/Pandas Advanced/.ipynb_checkpoints/Solution_simao-checkpoint.ipynb | ###Markdown
Phospho seems to happen every time.
Let's get the distribution of amino acids at the phospho positions.
###Code
# first get a cleaned version of the sequence column
PTMS = (PTMS
.assign(cleaned_sequence = lambda row: (row
.Sequence
.apply(lambda seq : seq.split('.')[1])
)
)
)
PTMS.head(3)
def capture_phospho_positions(modification):
"""
Given a modification description of the form:
"Acetyl: 1; Oxidation: 4; Phospho: 6", captures the positions of the Phospho component only
Returns a list of the positions.
Example:
example = "Acetyl: 1; Oxidation: 4; Phospho: 6"
capture_phospho_positions(example)
>>> [6]
example_2 = "Acetyl: 5; Oxidation: 2; Phospho: 6, 9"
capture_phospho_positions(example)
>>> [6, 9]
"""
events = modification.split(';')
phospho_info = [s for s in events if 'Phospho' in s][0]
# remove unecessary info ('Phospho:')
positions_str = phospho_info[phospho_info.find(':')+1:]
# remove whitespaces and get list of positions
positions_list = positions_str.replace(" ", "").split(',')
# convert positions to ints
positions_list = [int(e) -1 for e in positions_list]
return positions_list
PTMS['Letras'] = PTMS.apply(lambda row: np.array([char for char in row.cleaned_sequence])[capture_phospho_positions(row.Modifications)],axis=1)
PTMS.assign(TotalLetras = PTMS.Letras.apply(lambda x: len(x)),
S = PTMS.Letras.apply(lambda letras: len([s for s in letras if s == 'S'])),
T = PTMS.Letras.apply(lambda letras: len([s for s in letras if s == 'T'])),
Y = PTMS.Letras.apply(lambda letras: len([s for s in letras if s == 'Y'])))
###Output
_____no_output_____ |
pca_yale_facerecog.ipynb | ###Markdown
Implementation of face recognition using neural net
###Code
%matplotlib inline
import cv2
import numpy as np
import os
from skimage import io
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn releases
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report,accuracy_score
from sklearn.neural_network import MLPClassifier
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras import metrics
###Output
_____no_output_____
###Markdown
Listing the path of all the images
###Code
DatasetPath = []
for i in os.listdir("yalefaces"):
DatasetPath.append(os.path.join("yalefaces", i))
###Output
_____no_output_____
###Markdown
Reading each image and assigning respective labels
###Code
imageData = []
imageLabels = []
for i in DatasetPath:
imgRead = io.imread(i,as_grey=True)
imageData.append(imgRead)
labelRead = int(os.path.split(i)[1].split(".")[0].replace("subject", "")) - 1
imageLabels.append(labelRead)
###Output
_____no_output_____
###Markdown
Preprocessing: Face Detection using OpenCV and cropping the image to a size of 150 * 150
###Code
faceDetectClassifier = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
imageDataFin = []
for i in imageData:
facePoints = faceDetectClassifier.detectMultiScale(i)
x,y = facePoints[0][:2]
cropped = i[y: y + 150, x: x + 150]
imageDataFin.append(cropped)
c = np.array(imageDataFin)
c.shape
###Output
_____no_output_____
###Markdown
Splitting Dataset into train and test
###Code
X_train, X_test, y_train, y_test = train_test_split(np.array(imageDataFin),np.array(imageLabels), train_size=0.7, random_state = 20)
X_train = np.array(X_train)
X_test = np.array(X_test)
X_train.shape
X_test.shape
nb_classes = 15
y_train = np.array(y_train)
y_test = np.array(y_test)
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
###Output
_____no_output_____
###Markdown
Converting each 2d image into 1D vector
###Code
X_train = X_train.reshape(X_train.shape[0], X_train.shape[1]*X_train.shape[2])
X_test = X_test.reshape(X_test.shape[0], X_test.shape[1]*X_test.shape[2])
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# normalize the data
X_train /= 255
X_test /= 255
###Output
_____no_output_____
###Markdown
Preprocessing - PCA
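The curve plotted below is the cumulative explained-variance ratio of the first $k$ principal components, where $\lambda_i$ are the eigenvalues of the training-data covariance matrix:

$$\mathrm{ratio}(k) = \frac{\sum_{i=1}^{k} \lambda_i}{\sum_{i=1}^{d} \lambda_i}$$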
###Code
computed_pca = PCA(n_components = 20,whiten=True).fit(X_train)
XTr_pca = computed_pca.transform(X_train)
print("Plot of amount of variance explained vs pcs")
plt.plot(range(len(computed_pca.explained_variance_)),np.cumsum(computed_pca.explained_variance_ratio_))
plt.show()
XTs_pca = computed_pca.transform(X_test)
print("Training PCA shape",XTr_pca.shape)
print("Test PCA shape",XTs_pca.shape)
def plot_eigenfaces(images, h, w, rows=5, cols=4):
plt.figure()
for i in range(rows * cols):
plt.subplot(rows, cols, i + 1)
plt.imshow(images[i].reshape((h, w)), cmap=plt.cm.gray)
plt.xticks(())
plt.yticks(())
plot_eigenfaces(computed_pca.components_,150,150)
print("Eigen Faces")
print("Original Training matrix shape", X_train.shape)
print("Original Testing matrix shape", X_test.shape)
print("Fitting the classifier to the training set")
clf = MLPClassifier(hidden_layer_sizes=(1024,), batch_size=64, verbose=True, early_stopping=True).fit(XTr_pca, y_train)
y_pred = clf.predict(XTs_pca)
#print(y_pred,y_test)
print(classification_report(y_test, y_pred))
print("Accuracy: ",accuracy_score(y_test, y_pred))
# Visualization
def plot_gallery(images, titles, h, w, rows=3, cols=3):
plt.figure()
for i in range(rows * cols):
plt.subplot(rows, cols, i + 1)
plt.imshow(images[i].reshape((h, w)), cmap=plt.cm.gray)
plt.title(titles[i])
plt.xticks(())
plt.yticks(())
def titles(y_pred, y_test):
for i in range(y_pred.shape[0]):
pred_name = y_pred[i]
true_name = y_test[i]
yield 'predicted: {0}\ntrue: {1}'.format(pred_name, true_name)
prediction_titles = list(titles(y_pred, y_test))
plot_gallery(X_test, prediction_titles, 150, 150)
###Output
_____no_output_____
###Markdown
Defining the model
###Code
model = Sequential()
model.add(Dense(512,input_shape=(XTr_pca.shape[1],)))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer="adam", metrics=[metrics.mae,metrics.categorical_accuracy])
###Output
_____no_output_____
###Markdown
Training
###Code
model.fit(XTr_pca, Y_train, batch_size=64, epochs=50, verbose=1, validation_data=(XTs_pca, Y_test))
###Output
Train on 115 samples, validate on 50 samples
Epoch 1/50
115/115 [==============================] - 0s 4ms/step - loss: 2.6741 - mean_absolute_error: 0.1238 - categorical_accuracy: 0.1565 - val_loss: 2.4677 - val_mean_absolute_error: 0.1218 - val_categorical_accuracy: 0.5400
Epoch 2/50
115/115 [==============================] - 0s 202us/step - loss: 2.3206 - mean_absolute_error: 0.1195 - categorical_accuracy: 0.6087 - val_loss: 2.2317 - val_mean_absolute_error: 0.1184 - val_categorical_accuracy: 0.7000
Epoch 3/50
115/115 [==============================] - 0s 190us/step - loss: 1.9277 - mean_absolute_error: 0.1124 - categorical_accuracy: 0.8087 - val_loss: 1.9980 - val_mean_absolute_error: 0.1138 - val_categorical_accuracy: 0.7600
Epoch 4/50
115/115 [==============================] - 0s 181us/step - loss: 1.6127 - mean_absolute_error: 0.1042 - categorical_accuracy: 0.9217 - val_loss: 1.7571 - val_mean_absolute_error: 0.1073 - val_categorical_accuracy: 0.8000
Epoch 5/50
115/115 [==============================] - 0s 195us/step - loss: 1.2708 - mean_absolute_error: 0.0919 - categorical_accuracy: 0.9652 - val_loss: 1.5162 - val_mean_absolute_error: 0.0986 - val_categorical_accuracy: 0.8000
Epoch 6/50
115/115 [==============================] - 0s 175us/step - loss: 0.9849 - mean_absolute_error: 0.0774 - categorical_accuracy: 0.9565 - val_loss: 1.2834 - val_mean_absolute_error: 0.0881 - val_categorical_accuracy: 0.8000
Epoch 7/50
115/115 [==============================] - 0s 156us/step - loss: 0.7236 - mean_absolute_error: 0.0615 - categorical_accuracy: 0.9739 - val_loss: 1.0737 - val_mean_absolute_error: 0.0766 - val_categorical_accuracy: 0.8400
Epoch 8/50
115/115 [==============================] - 0s 142us/step - loss: 0.5419 - mean_absolute_error: 0.0483 - categorical_accuracy: 0.9739 - val_loss: 0.8976 - val_mean_absolute_error: 0.0655 - val_categorical_accuracy: 0.8400
Epoch 9/50
115/115 [==============================] - 0s 157us/step - loss: 0.4025 - mean_absolute_error: 0.0375 - categorical_accuracy: 0.9826 - val_loss: 0.7627 - val_mean_absolute_error: 0.0559 - val_categorical_accuracy: 0.8400
Epoch 10/50
115/115 [==============================] - 0s 145us/step - loss: 0.2823 - mean_absolute_error: 0.0268 - categorical_accuracy: 0.9826 - val_loss: 0.6686 - val_mean_absolute_error: 0.0485 - val_categorical_accuracy: 0.8400
Epoch 11/50
115/115 [==============================] - 0s 139us/step - loss: 0.2078 - mean_absolute_error: 0.0213 - categorical_accuracy: 1.0000 - val_loss: 0.6051 - val_mean_absolute_error: 0.0430 - val_categorical_accuracy: 0.8600
Epoch 12/50
115/115 [==============================] - 0s 144us/step - loss: 0.1530 - mean_absolute_error: 0.0162 - categorical_accuracy: 1.0000 - val_loss: 0.5648 - val_mean_absolute_error: 0.0391 - val_categorical_accuracy: 0.8800
Epoch 13/50
115/115 [==============================] - 0s 152us/step - loss: 0.1056 - mean_absolute_error: 0.0118 - categorical_accuracy: 1.0000 - val_loss: 0.5412 - val_mean_absolute_error: 0.0365 - val_categorical_accuracy: 0.8600
Epoch 14/50
115/115 [==============================] - 0s 157us/step - loss: 0.0873 - mean_absolute_error: 0.0100 - categorical_accuracy: 1.0000 - val_loss: 0.5252 - val_mean_absolute_error: 0.0345 - val_categorical_accuracy: 0.8600
Epoch 15/50
115/115 [==============================] - 0s 156us/step - loss: 0.0696 - mean_absolute_error: 0.0081 - categorical_accuracy: 1.0000 - val_loss: 0.5168 - val_mean_absolute_error: 0.0330 - val_categorical_accuracy: 0.8600
Epoch 16/50
115/115 [==============================] - 0s 153us/step - loss: 0.0595 - mean_absolute_error: 0.0068 - categorical_accuracy: 1.0000 - val_loss: 0.5147 - val_mean_absolute_error: 0.0320 - val_categorical_accuracy: 0.8600
Epoch 17/50
115/115 [==============================] - 0s 160us/step - loss: 0.0377 - mean_absolute_error: 0.0047 - categorical_accuracy: 1.0000 - val_loss: 0.5138 - val_mean_absolute_error: 0.0312 - val_categorical_accuracy: 0.8600
Epoch 18/50
115/115 [==============================] - 0s 157us/step - loss: 0.0379 - mean_absolute_error: 0.0046 - categorical_accuracy: 1.0000 - val_loss: 0.5138 - val_mean_absolute_error: 0.0305 - val_categorical_accuracy: 0.8600
Epoch 19/50
115/115 [==============================] - 0s 162us/step - loss: 0.0259 - mean_absolute_error: 0.0032 - categorical_accuracy: 1.0000 - val_loss: 0.5097 - val_mean_absolute_error: 0.0297 - val_categorical_accuracy: 0.8400
Epoch 20/50
115/115 [==============================] - 0s 153us/step - loss: 0.0267 - mean_absolute_error: 0.0033 - categorical_accuracy: 1.0000 - val_loss: 0.5064 - val_mean_absolute_error: 0.0290 - val_categorical_accuracy: 0.8800
Epoch 21/50
115/115 [==============================] - 0s 150us/step - loss: 0.0211 - mean_absolute_error: 0.0027 - categorical_accuracy: 1.0000 - val_loss: 0.5036 - val_mean_absolute_error: 0.0283 - val_categorical_accuracy: 0.8800
Epoch 22/50
115/115 [==============================] - 0s 152us/step - loss: 0.0161 - mean_absolute_error: 0.0021 - categorical_accuracy: 1.0000 - val_loss: 0.5013 - val_mean_absolute_error: 0.0278 - val_categorical_accuracy: 0.8800
Epoch 23/50
115/115 [==============================] - 0s 163us/step - loss: 0.0128 - mean_absolute_error: 0.0017 - categorical_accuracy: 1.0000 - val_loss: 0.5005 - val_mean_absolute_error: 0.0274 - val_categorical_accuracy: 0.8800
Epoch 24/50
115/115 [==============================] - 0s 164us/step - loss: 0.0133 - mean_absolute_error: 0.0017 - categorical_accuracy: 1.0000 - val_loss: 0.5004 - val_mean_absolute_error: 0.0271 - val_categorical_accuracy: 0.8800
Epoch 25/50
115/115 [==============================] - 0s 161us/step - loss: 0.0112 - mean_absolute_error: 0.0014 - categorical_accuracy: 1.0000 - val_loss: 0.5012 - val_mean_absolute_error: 0.0269 - val_categorical_accuracy: 0.8400
Epoch 26/50
115/115 [==============================] - 0s 174us/step - loss: 0.0125 - mean_absolute_error: 0.0016 - categorical_accuracy: 1.0000 - val_loss: 0.5019 - val_mean_absolute_error: 0.0267 - val_categorical_accuracy: 0.8400
Epoch 27/50
115/115 [==============================] - 0s 208us/step - loss: 0.0081 - mean_absolute_error: 0.0011 - categorical_accuracy: 1.0000 - val_loss: 0.5020 - val_mean_absolute_error: 0.0266 - val_categorical_accuracy: 0.8400
Epoch 28/50
115/115 [==============================] - 0s 215us/step - loss: 0.0107 - mean_absolute_error: 0.0014 - categorical_accuracy: 1.0000 - val_loss: 0.5018 - val_mean_absolute_error: 0.0265 - val_categorical_accuracy: 0.8400
Epoch 29/50
115/115 [==============================] - 0s 193us/step - loss: 0.0094 - mean_absolute_error: 0.0012 - categorical_accuracy: 1.0000 - val_loss: 0.5013 - val_mean_absolute_error: 0.0264 - val_categorical_accuracy: 0.8200
Epoch 30/50
115/115 [==============================] - 0s 318us/step - loss: 0.0078 - mean_absolute_error: 0.0010 - categorical_accuracy: 1.0000 - val_loss: 0.5014 - val_mean_absolute_error: 0.0264 - val_categorical_accuracy: 0.8200
Epoch 31/50
115/115 [==============================] - 0s 388us/step - loss: 0.0091 - mean_absolute_error: 0.0012 - categorical_accuracy: 1.0000 - val_loss: 0.5003 - val_mean_absolute_error: 0.0263 - val_categorical_accuracy: 0.8200
Epoch 32/50
115/115 [==============================] - 0s 296us/step - loss: 0.0071 - mean_absolute_error: 9.2563e-04 - categorical_accuracy: 1.0000 - val_loss: 0.4994 - val_mean_absolute_error: 0.0263 - val_categorical_accuracy: 0.8200
Epoch 33/50
115/115 [==============================] - 0s 297us/step - loss: 0.0046 - mean_absolute_error: 6.1262e-04 - categorical_accuracy: 1.0000 - val_loss: 0.4989 - val_mean_absolute_error: 0.0262 - val_categorical_accuracy: 0.8200
Epoch 34/50
115/115 [==============================] - 0s 255us/step - loss: 0.0064 - mean_absolute_error: 8.3428e-04 - categorical_accuracy: 1.0000 - val_loss: 0.4980 - val_mean_absolute_error: 0.0261 - val_categorical_accuracy: 0.8400
Epoch 35/50
115/115 [==============================] - 0s 294us/step - loss: 0.0059 - mean_absolute_error: 7.7176e-04 - categorical_accuracy: 1.0000 - val_loss: 0.4974 - val_mean_absolute_error: 0.0261 - val_categorical_accuracy: 0.8400
###Markdown
Evaluating the performance
###Code
loss,mean_absolute_error,accuracy = model.evaluate(XTs_pca,Y_test, verbose=0)
print("Loss:", loss)
print("Categorical Accuracy: ", accuracy)
print("Mean absolute error: ", mean_absolute_error)
predicted_classes = model.predict_classes(XTs_pca)
correct_classified_indices = np.nonzero(predicted_classes == y_test)[0]
incorrect_classified_indices = np.nonzero(predicted_classes != y_test)[0]
correct_classified_indices
incorrect_classified_indices
prediction_titles = list(titles(predicted_classes, y_test))
plot_gallery(X_test, prediction_titles, 150, 150)
###Output
_____no_output_____ |
Andrew Ng - Coursera/Week 1/Exos/Cost Function Wgt Hgt.ipynb | ###Markdown
1. Prepare the data
**Convert to Metric System**
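As a quick sanity check of the conversion factors used below (a worked example, not a row from the dataset), a 150 lb, 65 in person becomes roughly

$$\frac{150\ \mathrm{lb}}{2.2} \approx 68\ \mathrm{kg}, \qquad \frac{65\ \mathrm{in} \times 2.54}{100} \approx 1.65\ \mathrm{m}$$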
###Code
df.head(3)
df['Weight'] = round(df['Weight']/2.2)
df.head(3)
df['Height'] = round((df['Height']*2.54)/100,2)
df.head(3)
###Output
_____no_output_____
###Markdown
Plot the data
###Code
X = df[['Weight']]
y = df[['Height']]
ax = sns.jointplot(x=X,y=y,kind='reg',joint_kws={'line_kws':{'color':'red'}})
###Output
_____no_output_____
###Markdown
2. Splitting into Training and Testing Data
Divide the data into training and testing sets.
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.33,random_state=42)
len(X_train)
len(X_test)
###Output
_____no_output_____
###Markdown
3. Fitting the model
###Code
from sklearn import linear_model
lr = linear_model.LinearRegression()
X_train = np.asanyarray(X_train)
y_train = np.asanyarray(y_train)
lr.fit(X_train,y_train)
print('Slope : ',lr.coef_)
print('Intercept : ',lr.intercept_)
###Output
Slope : [[0.00687299]]
Intercept : [1.16832316]
###Markdown
**Plot the training Model**
###Code
plt.scatter(X_train, y_train, color='blue')
plt.plot(X_train, lr.coef_[0][0]*X_train + lr.intercept_[0], '-r')
plt.xlabel("Weight in KG")
plt.ylabel("Height in Meters")
###Output
_____no_output_____
###Markdown
4. Make Prediction
###Code
X_test = np.asanyarray(X_test)
y_test = np.asanyarray(y_test)
test_y_hat = lr.predict(X_test)
X_test[:5]
test_y_hat[:5]
y_test[:5]
###Output
_____no_output_____
###Markdown
**Plot the testing Model**
The points lie exactly on the line because we scatter the model's own predictions (`test_y_hat`) against `X_test`.
###Code
plt.scatter(X_test, test_y_hat, color='blue')
plt.plot(X_test, lr.coef_[0][0]*X_test + lr.intercept_[0], '-r')
plt.xlabel("Weight in KG")
plt.ylabel("Height in Meters")
###Output
_____no_output_____
###Markdown
5. Evaluating the Model: Residual sum of squares
###Code
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_hat - y_test) ** 2))
###Output
Residual sum of squares (MSE): 0.00
###Markdown
Bonus : Cost Function
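This is the squared-error cost function from the course, evaluated over the $m$ test points with predictions $h_\theta(x^{(i)})$; note that it is exactly half of the MSE printed above:

$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 = \frac{\mathrm{MSE}}{2}$$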
###Code
m = len(y_test)
s = 0
for i,j in zip(test_y_hat,y_test):
s = s + (i - j)**2
print(s)
print(" Cost Function : ",1/(2*m) * s)
###Output
Cost Function : [0.00069775]
|
notebooks/tax_declarations.ipynb | ###Markdown
Environment Set Up
###Code
! pip install awswrangler
import awswrangler as wr
import pandas as pd
###Output
_____no_output_____
###Markdown
Create Athena Tables
Data downloaded from Ministero delle Finanze: https://www1.finanze.gov.it/finanze3/analisi_stat/index.php?search_class[0]=cCOMUNE&opendata=yes
1) Tax Declarations Comuni
###Code
comuni_path = 'Redditi_e_principali_variabili_IRPEF_su_base_comunale_CSV_2019.csv'
comuni = pd.read_csv(comuni_path, sep = ";")
# Fixing columns empty names issues
cols = comuni.columns[:-1]
new_comuni = comuni.reset_index().iloc[:,:-2]
new_comuni.columns = cols
# Uploading to Athena and S3
wr.s3.to_parquet(
df=new_comuni,
path="s3://gimi-data/out/italy/tax-declarations/tax-declarations-comuni/",
dataset=True,
database="gimi",
table="irpef-comuni",
)
###Output
_____no_output_____
###Markdown
2) Tax Declarations Comuni Sub (CAP level for cities)
###Code
comuni_sub_path = 'Redditi_e_principali_variabili_IRPEF_su_base_subcomunale_CSV_2019.csv'
comuni_sub = pd.read_csv(comuni_sub_path, sep = ";")
# Fixing columns empty names issues
cols = comuni_sub.columns[:-1]
new_comuni_sub = comuni_sub.reset_index().iloc[:,:-2]
new_comuni_sub.columns = cols
# Uploading to Athena and S3
wr.s3.to_parquet(
df=new_comuni_sub,
path="s3://gimi-data/out/italy/tax-declarations/tax-declarations-comuni-sub/",
dataset=True,
database="gimi",
table="irpef-comuni-sub",
)
# Sample Row cap level
sub = new_comuni_sub[new_comuni_sub['CAP'] == 20144]
sub.T
###Output
_____no_output_____ |
notebooks/test_model.ipynb | ###Markdown
Load Model and Data
###Code
# Load (ensemble) models
RUN_VERSIONS = [0, 1, 2, 3, 4]
ensemble_models = load_ensemble_models(HD_MODELS_PATH / 'scheduled_masked_ensembles', [f'model_{idx}.ckpt' for idx in RUN_VERSIONS])
"""
model_dir_path = HD_MODELS_PATH / 'scheduled_masked_ensembles'
file_names = [f'model_{idx}.ckpt' for idx in [0, 1, 2, 3, 4]]
ensembles = load_ensemble_models(dir_path=model_dir_path, file_names=file_names)
"""
model = load_vae_baur_model(Path('/mnt/2TB_internal_HD/lightning_logs/beta_test/version_2/checkpoints/last.ckpt'))
#model = load_vae_baur_model(Path('/media/1TB_SSD/lightning_logs/camcan_beta/version_0/checkpoints/last.ckpt'))
# Masked Model!
#model = load_vae_baur_model(Path('/mnt/2TB_internal_HD/lightning_logs/schedule_mask/version_3/checkpoints/last.ckpt'))
dataloader_dict = default_dataloader_dict_factory(batch_size=8, num_workers=0, shuffle_val=True)
###Output
_____no_output_____
###Markdown
Plot Inference Reconstruction
###Code
plot_n_batches = 1
#plot_dataloader_dict = filter_dataloader_dict(dataloader_dict, contains=['BraTS'], exclude=[])
plot_dataloader_dict = {name: dataloader_dict[name] for name in ['BraTS T2']}
#plot_dataloader_dict = filter_dataloader_dict(dataloader_dict, contains=['BraTS T2', 'VFlip'])
for dataloader_name, dataloader in plot_dataloader_dict.items():
print(f'Loader {dataloader_name}, Dataset: {dataloader.dataset.name}')
batch_generator = yield_inference_batches(dataloader, model, residual_fn=residual_l1_max, residual_threshold=0.70,
manual_seed_val=None)
plot_stacked_scan_reconstruction_batches(batch_generator, plot_n_batches, nrow=32,
cmap='gray', axis='off', figsize=(15, 15), mask_background=False,
save_dir_path=None, )
###Output
_____no_output_____
###Markdown
Pixel-Wise Anomaly Detection Performance (ROC & PRC)
###Code
from uncertify.evaluation.configs import EvaluationConfig, EvaluationResult
from uncertify.evaluation.evaluation_pipeline import OUT_DIR_PATH, PixelAnomalyDetectionResult, SliceAnomalyDetectionResults, OODDetectionResults, print_results
eval_cfg = EvaluationConfig()
eval_cfg.use_n_batches = 1
eval_dataloader = dataloader_dict['BraTS T2 HM']
results = EvaluationResult(OUT_DIR_PATH, eval_cfg, PixelAnomalyDetectionResult(), SliceAnomalyDetectionResults(), OODDetectionResults())
results.make_dirs()
results.pixel_anomaly_result.best_threshold = 0.70
results = run_anomaly_detection_performance(eval_cfg, model, eval_dataloader, results)
print_results(results)
###Output
_____no_output_____
###Markdown
Segmentation Scores
###Code
from uncertify.evaluation.model_performance import mean_std_dice_scores, mean_std_iou_scores
from uncertify.visualization.model_performance import plot_segmentation_performance_vs_threshold
try:
tqdm._instances.clear()
except:
pass
###Output
_____no_output_____
###Markdown
Only run with one pre-defined threshold
###Code
max_n_batches = 30
residual_threshold = 0.70
eval_dataloader = dataloader_dict['BraTS T2']
best_mean_dice_score, best_std_dice_score = mean_std_dice_scores(eval_dataloader,
model,
[residual_threshold],
max_n_batches)
LOG.info(f'Dice score (t={residual_threshold:.2f}) for {eval_dataloader.dataset.name}: '
f'{best_mean_dice_score[0]:.2f} +- {best_std_dice_score[0]:.2f}')
###Output
_____no_output_____
###Markdown
Check over multiple thresholds
###Code
n_thresholds = 10
max_n_batches = 15
pixel_thresholds = np.linspace(0.2, 1.2, n_thresholds)
eval_dataloader = dataloader_dict['BraTS T2 HM']
mean_dice_scores, std_dice_scores = mean_std_dice_scores(eval_dataloader, model, residual_thresholds=pixel_thresholds, max_n_batches=max_n_batches)
best_dice_idx, best_dice_score = max(enumerate(mean_dice_scores), key=operator.itemgetter(1))
print(f'Best dice score: {best_dice_score:.2f}+-{std_dice_scores[best_dice_idx]} with threshold {pixel_thresholds[best_dice_idx]}.')
fig = plot_segmentation_performance_vs_threshold(pixel_thresholds, dice_scores=mean_dice_scores, dice_stds=std_dice_scores, iou_scores=None,
train_set_threshold=None, figsize=(12, 6));
fig.savefig(DATA_DIR_PATH / 'plots' / 'dice_iou_vs_threshold.png')
###Output
_____no_output_____
###Markdown
Sample-wise Loss Term Histograms
###Code
from sklearn.neighbors import KernelDensity
from uncertify.visualization.histograms import plot_loss_histograms
try:
tqdm._instances.clear()
except:
pass
max_n_batches = 30
select_dataloaders = ['CamCAN T2', 'BraTS T2', 'BraTS T2 HM',]
output_generators = []
for dataloader_name in select_dataloaders:
dataloader = dataloader_dict[dataloader_name]
output_generators.append(yield_inference_batches(dataloader, model, max_n_batches,
progress_bar_suffix=f'{dataloader_name}',
manual_seed_val=None))
figs_axes = plot_loss_histograms(output_generators=output_generators, names=select_dataloaders,
figsize=(12, 3.0), ylabel='Frequency', plot_density=True, show_data_ticks=False,
kde_bandwidth=[0.009, 0.009*5.5], show_histograms=False)
for idx, (fig, _) in enumerate(figs_axes):
save_fig(fig, DATA_DIR_PATH / 'plots' / f'loss_term_distributions_{idx}.png')
###Output
_____no_output_____
###Markdown
Threshold calculation
###Code
from uncertify.visualization.threshold_search import plot_fpr_vs_residual_threshold
from uncertify.evaluation.evaluation_pipeline import run_residual_threshold_evaluation, EvaluationResult, PixelAnomalyDetectionResult, SliceAnomalyDetectionResults, OODDetectionResults
from uncertify.evaluation.configs import EvaluationConfig, PixelThresholdSearchConfig
from uncertify.evaluation.evaluation_pipeline import OUT_DIR_PATH
try:
tqdm._instances.clear()
except:
pass
eval_cfg = EvaluationConfig()
eval_cfg.use_n_batches = 15
eval_cfg.do_plots = True
results = EvaluationResult(OUT_DIR_PATH, eval_cfg, PixelAnomalyDetectionResult(), SliceAnomalyDetectionResults(), OODDetectionResults())
results.make_dirs()
results = run_residual_threshold_evaluation(model, dataloader_dict['CamCAN T2'], eval_cfg, results)
###Output
_____no_output_____
###Markdown
Plot MNIST reconstructions
Run various MNIST examples (batches consisting of samples of a single digit) through the model and plot inputs and reconstructions.
###Code
plot_n_batches = 1
batch_size = 8
for n in range(0, 10):
_, mnist_val_dataloader = dataloader_factory(DatasetType.MNIST,
batch_size=batch_size,
transform=torchvision.transforms.Compose([
torchvision.transforms.Resize((128, 128)),
torchvision.transforms.ToTensor()]),
mnist_label=n)
batch_generator = yield_inference_batches(mnist_val_dataloader, model, residual_threshold=1.8)
plot_stacked_scan_reconstruction_batches(batch_generator, plot_n_batches,
cmap='hot', axis='off', figsize=(15, 15), save_dir_path=DATA_DIR_PATH/'reconstructions')
###Output
_____no_output_____ |
recipes/aud/EquivalentPER.ipynb | ###Markdown
Acoustic unit to phone mapping
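Each discovered acoustic unit $u$ is relabelled with the reference phone it is most often aligned with at the frame level; with $C(u, p)$ the frame-level co-occurrence counts built below, the mapping is

$$\mathrm{map}(u) = \arg\max_{p} \, C(u, p)$$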
###Code
counts = defaultdict(lambda: defaultdict(int))
for utt in ref_align:
for ref_unit, hyp_unit in zip(ref_align[utt], hyp_align[utt]):
if ref_unit == '#':
print(utt, ' '.join(ref_align[utt]))
counts[hyp_unit][ref_unit] += 1
au_map = {au: max(label_counts, key=label_counts.get) for au, label_counts in counts.items()}
len(au_map)
###Output
_____no_output_____
###Markdown
Equivalent phone error rate
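The score below accumulates, over utterances $u$, the edit distance between the mapped hypothesis and the reference phone sequences (assuming the imported `wer` helper returns the Levenshtein distance), normalised by the total number of reference phones:

$$\mathrm{PER} = 100 \times \frac{\sum_{u} d\left(\mathrm{ref}_u, \mathrm{hyp}_u\right)}{\sum_{u} \left| \mathrm{ref}_u \right|}$$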
###Code
ref_trans = map_trans(ref, phonemap)
hyp_trans = map_trans(hyp, au_map)
hyp_trans = map_trans(hyp_trans, phonemap)
hyp_trans = {utt: [x[0] for x in groupby(trans)] for utt, trans in hyp_trans.items()}
# remove sil
#ref_trans = {utt: list(filter(lambda a: a != 'sil', trans)) for utt, trans in ref_trans.items()}
#hyp_trans = {utt: list(filter(lambda a: a != 'sil', trans)) for utt, trans in hyp_trans.items()}
acc_wer = 0
nwords = 0
for utt in ref_trans:
try:
ref_t, hyp_t = ref_trans[utt], hyp_trans[utt]
acc_wer += wer(ref_t, hyp_t)
nwords += len(ref_t)
except KeyError:
pass
print(f'Phone Error Rate: {100 * acc_wer / nwords:.2f}')
import random
for utt in random.choices(list(ref.keys()), k=1):
print('(ref)', utt, ' '.join(ref_trans[utt]))
print('(hyp)', utt, ' '.join(hyp[utt]))
print('(hyp)', utt, ' '.join(hyp_trans[utt]))
###Output
(ref) mdks0_si1696 sil n aa sil t w ih n sh iy sil w ey dx ih sil s ow l aa ng aa l r eh sil d iy sil
(hyp) mdks0_si1696 sil au45 au18 au36 au65 au27 au58 au74 au76 au65 au20 au72 au12 au27 au74 au47 au12 au72 au60 au78 au40 au58 au40 au21 au81 au40 au24 au13 au6 au60 au50 au30 au1 sil
(hyp) mdks0_si1696 sil aa n l ay l sil n sh aa iy l ey iy aa s ow ay ow l m ow aa r s iy s f sil
###Markdown
Normalized Mutual Information
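With $X$ the reference phone labels and $Y$ the discovered units (both estimated from $\alpha$-smoothed co-occurrence counts), the cells below compute

$$I(X;Y) = H(X) - H(X \mid Y), \qquad \mathrm{NMI} = \frac{I(X;Y)}{H(X)}$$

where $H(X) = -\sum_x p(x)\log_2 p(x)$ and $H(X \mid Y) = -\sum_y p(y) \sum_x p(x \mid y)\log_2 p(x \mid y)$.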
###Code
def align(T_ref, T_new, labels, clusters):
counts_labels = np.zeros(len(labels))
counts_clusters = np.zeros(len(clusters))
counts = np.zeros((len(clusters), len(labels))) + 1
for utt in T_ref.keys():
data_ref = T_ref[utt]
data_new = T_new[utt]
ref_labels = []
mu = []
for t in data_ref:
label, start, stop, _, _ = t
mu.append(start + 0.5*(stop-start))
ref_labels.append(label)
idx = labels.index(label)
counts_labels[idx] += 1
mu = np.asarray(mu)
if len(mu) == 0:
print(utt)
for t in data_new:
cluster, start, stop, _, _ = t
idx = clusters.index(cluster)
counts_clusters[idx] += 1
x = start + 0.5 * (stop-start)
closest_label = ((x-mu)**2).argmin()
i = clusters.index(cluster)
j = labels.index(ref_labels[closest_label])
counts[i,j] += 1
return counts, counts_labels, counts_clusters
def timing(align):
timing_trans = {}
for utt, trans in align.items():
label = trans[0]
start = 0
timings = []
for i, next_label in enumerate(trans[1:], 1):
if label != next_label:
timings.append((label, start, i, None, None))
label = next_label
start = i
timing_trans[utt] = timings
timings.append((label, start, len(trans) -1, None, None))
return timing_trans
ref_t_trans = timing(map_trans(ref_align, phonemap))
hyp_t_trans = timing(hyp_align)
counts = defaultdict(lambda: defaultdict(int))
ref_align39 = map_trans(ref_align, phonemap)
for utt in ref_align:
for ref_unit, hyp_unit in zip(ref_align39[utt], hyp_align[utt]):
counts[hyp_unit][ref_unit] += 1
aus = list(counts.keys())
phones = set()
for phonecount in counts.values():
for phone in phonecount:
phones.add(phone)
phones = list(phones)
M, counts_labels, counts_clusters = align(ref_t_trans, hyp_t_trans, phones, aus)
def probability_matrix(counts, alpha=1):
    c = np.array(counts + alpha, dtype=float)
    return (c.T/c.sum(axis=1)).T
alpha = 1
p_X_given_Y = probability_matrix(M, alpha=alpha)
#M = np.zeros((len(aus), len(phones)))
#for i in range(len(aus)):
# for j in range(len(phones)):
# M[i, j] += counts[aus[i]][phones[j]]
#p_X_given_Y = probability_matrix(M, alpha=alpha)
# Estimate the marginal distribution of the reference cluster labels.
#-----------------------------------------------------------------------
p_X = np.asarray(M.sum(axis=0), dtype=float) + alpha
p_X /= p_X.sum()
p_Y = np.asarray(M.sum(axis=1), dtype=float) + alpha
p_Y /= p_Y.sum()
# Evaluate the conditional and marginal entropy.
#-----------------------------------------------------------------------
H_X_given_Y = -(p_Y.dot((p_X_given_Y*np.log2(p_X_given_Y)).sum(axis=1)))
H_X = -p_X.dot(np.log2(p_X))
H_Y = -p_Y.dot(np.log2(p_Y))
# Evaluate the mutual information between reference labels and clusters.
#-----------------------------------------------------------------------
I_XY = H_X - H_X_given_Y
#print('H(X):', H_X)
#print('H(Y):', H_Y)
#print('2*I(X;Y)/(H(X) + H(Y)):', 100*2*I_XY/(H_X + H_Y), '%')
print('Normalized Mutual Information')
print('-----------------------------')
print('# refs units:', len(phones))
print('# proposed units:', len(aus))
print('I(X;Y)/ H(x) =', 100 * I_XY/(H_X), '%')
print('I(X;Y) =', I_XY)
print('H(Y) =', H_Y)
print('H(X) =', H_X)
print('counts =', M.sum())
def get_durations(trans, max_duration=100):
current = trans[0]
durations = np.zeros(max_duration)
duration = 1
for token in trans[1:]:
if token == current:
duration += 1
else:
current = token
durations[min(duration, max_duration - 1)] += 1
duration = 1
return durations
ref = load_transcript('exp/timit/monophone_mbn_babel/align_ac1.0/train/trans')
durations = np.zeros(100)
for utt, trans in ref.items():
trans = list(filter(lambda a: a != 'sil', trans))
durations += get_durations(trans, len(durations))
durations /= durations.sum()
hyp = load_transcript('exp/timit/subspace_aud_mbn_babel_ldim100/decode_perframe_ac1.0/train/trans')
hyp_durations = np.zeros(len(durations))
for utt, trans in hyp.items():
trans = list(filter(lambda a: a != 'sil', trans))
hyp_durations += get_durations(trans, len(hyp_durations))
hyp_durations /= hyp_durations.sum()
hyp = load_transcript('exp/timit/aud_8g/decode_perframe_ac1.0/train/trans')
#hyp = load_transcript('exp/timit/aud_4g/decode_perframe_ac1.0/train/trans')
hyp_durations2 = np.zeros(len(durations))
for utt, trans in hyp.items():
trans = list(filter(lambda a: a != 'sil', trans))
hyp_durations2 += get_durations(trans, len(hyp_durations2))
hyp_durations2 /= hyp_durations2.sum()
fig = figure(x_range=(0, 40))
fig.vbar(x=range(len(durations)), top=durations, width=0.9, alpha=0.5)
fig.vbar(x=range(len(hyp_durations)), top=hyp_durations, width=0.9, alpha=0.5, color='red')
#fig.vbar(x=range(len(hyp_durations2)), top=hyp_durations2, width=0.9, alpha=0.5, color='green')
show(fig)
###Output
_____no_output_____ |
notebook/.ipynb_checkpoints/test_skynet_env-checkpoint.ipynb | ###Markdown
 Build skynet as a shared library
```sh
cd ../skynet-master
# compile
cc -g -O2 -Wall -fPIC -dynamiclib -Wl,-undefined,dynamic_lookup -I3rd/lua -o libskynet.so skynet-src/skynet_main.c skynet-src/skynet_handle.c skynet-src/skynet_module.c skynet-src/skynet_mq.c skynet-src/skynet_server.c skynet-src/skynet_start.c skynet-src/skynet_timer.c skynet-src/skynet_error.c skynet-src/skynet_harbor.c skynet-src/skynet_env.c skynet-src/skynet_monitor.c skynet-src/skynet_socket.c skynet-src/socket_server.c skynet-src/malloc_hook.c skynet-src/skynet_daemon.c skynet-src/skynet_log.c 3rd/lua/liblua.a -Iskynet-src -I3rd/jemalloc/include/jemalloc -lpthread -lm -ldl -DNOUSE_JEMALLOC
```
###Code
from cffi import FFI
ffi = FFI()
ffi.cdef("""
const char * skynet_getenv(const char *key);
void skynet_setenv(const char *key, const char *value);
void skynet_env_init();
""", override=True)
lib = ffi.dlopen('../skynet-master/libskynet.so')
lib.skynet_env_init()
# 定义变量
k = ffi.new("char[]", b"v1")
v = ffi.new("char[]", b"100")
k = b"host"
v = b"localhost"
lib.skynet_setenv(k, v)
ffi.string(lib.skynet_getenv(k))
# set values
for i in range(10):
k = b"V%d"%i
v = b"%d"%i
lib.skynet_setenv(k, v)
# get values
for i in range(10):
k = b"V%d"%i
print(ffi.string(lib.skynet_getenv(k)))
###Output
b'0'
b'1'
b'2'
b'3'
b'4'
b'5'
b'6'
b'7'
b'8'
b'9'
|
Final/Quasi_Newton_Demo.ipynb | ###Markdown
 Quasi-Newton method for KL divergence
The main idea behind this method is to compute, at each iteration, the gradient and inverse Hessian of $D_{KL}(V,WH) = \sum\limits_{i=1}^M \sum\limits_{j=1}^K V_{ij}\ln \frac{V_{ij}}{(WH)_{ij}} - V_{ij} + (WH)_{ij}$ with respect to the elements of $W$ and $H$. The equations for this method follow [this article](http://www.bsp.brain.riken.jp/~zdunek/ZdCich_ICAISCP06.pdf):$$ W \leftarrow \max (\varepsilon, W-H_W^{-1} \nabla_W D_{KL}),\ H \leftarrow \max (\varepsilon, H-H_H^{-1} \nabla_H D_{KL}),$$where $\nabla_W D_{KL} = (J_{M \times K} - V \oslash (WH))\,H^T$ and $\nabla_H D_{KL} = W^T (J_{M \times K} - V \oslash (WH))$ are the gradients, $H_W = \text{diag} \{h_{W,m},\ m=1,\ldots,M\}$ with $h_{W,m} = H\ \text{diag} \{[V \oslash (Q \otimes Q)]_{m,:}\}\ H^T$ and $H_H = \text{diag}\{h_{H,k},\ k=1,\ldots,K\}$ with $h_{H,k} = W^T\ \text{diag}\{[V \oslash (Q \otimes Q)]_{:,k}\}\ W$ are the block-diagonal Hessians, $Q = WH$, $J_{M \times K}$ is the all-ones matrix, $\oslash$ and $\otimes$ denote element-wise division and multiplication, $V \in R^{M \times K}$, $W \in R^{M \times R}$, $H \in R^{R \times K}$.
The main difficulty in implementing the method was debugging, because the article is vague in places. For example, the inverse Hessian and the gradient do not have compatible dimensions for a direct matrix product, so the gradient has to be vectorized. Also, each iteration of the method is costly, so applying it to the spectrogram of a full signal takes a lot of time and memory. As an example, we show how it works on small matrices.
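For illustration, here is a minimal NumPy sketch of a single quasi-Newton update of $W$ and $H$ following the equations above. It is an independent re-implementation for clarity, not the `nmf.klquasinewton` routine used below; the damping constant `lam` and the positivity floor `eps` are assumptions added for numerical stability.
###Code
import numpy as np

def qn_step(V, W, H, eps=1e-9, lam=1e-8):
    """One illustrative quasi-Newton update of W and H for D_KL(V || WH)."""
    W, H = W.copy(), H.copy()
    R = W.shape[1]
    # Row-wise update of W: each row m has its own R x R Hessian block h_{W,m}.
    Q = W.dot(H)
    G = 1.0 - V / Q              # J - V ./ (WH), so grad_W = G H^T
    S = V / (Q * Q)              # V ./ (WH)^2, the weights inside the Hessian blocks
    for m in range(W.shape[0]):
        grad_m = G[m, :].dot(H.T)                # gradient of row m, shape (R,)
        hess_m = (H * S[m, :]).dot(H.T)          # H diag(S[m,:]) H^T, shape (R, R)
        step = np.linalg.solve(hess_m + lam * np.eye(R), grad_m)
        W[m, :] = np.maximum(eps, W[m, :] - step)
    # Column-wise update of H with the freshly updated W.
    Q = W.dot(H)
    G = 1.0 - V / Q
    S = V / (Q * Q)
    for k in range(H.shape[1]):
        grad_k = W.T.dot(G[:, k])                # gradient of column k, shape (R,)
        hess_k = (W.T * S[:, k]).dot(W)          # W^T diag(S[:,k]) W, shape (R, R)
        step = np.linalg.solve(hess_k + lam * np.eye(R), grad_k)
        H[:, k] = np.maximum(eps, H[:, k] - step)
    return W, H
###Output
_____no_output_____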
###Code
import numpy as np
from nmf import klquasinewton as qn
from matplotlib import pyplot as plt
%matplotlib inline
test1 = np.array([[1.,2.,3.,4.,5.],[6.,7.,8.,9.,10.],[11.,12.,13.,14.,15.],[16.,17.,18.,19.,20.],[21.,22.,23.,24.,25.]])
print 'test matrix 1:\n', test1
W,H,kl_div = qn(test1, max_iter = 100,rank = 3)
print '\ntest1 - WH:\n', test1 - W.dot(H)
plt.title("KL divergence for 100 iterations of QNM", fontsize=10)
plt.xlabel("Iterations", fontsize=10)
plt.ylabel("KL divergence", fontsize=10)
plt.plot(kl_div)
plt.show()
###Output
test matrix 1:
[[ 1. 2. 3. 4. 5.]
[ 6. 7. 8. 9. 10.]
[11. 12. 13. 14. 15.]
[16. 17. 18. 19. 20.]
[21. 22. 23. 24. 25.]]
test1 - WH:
[[-3.58728606e-05 5.55559446e-05 8.52032361e-05 2.97229657e-05
-1.37841086e-04]
[ 6.15752840e-03 -5.68868269e-03 -6.52117059e-03 -1.82888751e-03
7.96486197e-03]
[ 9.45703165e-05 1.30516917e-04 -2.86924091e-04 -3.15967861e-04
3.76506530e-04]
[-6.75948734e-03 5.69643073e-03 6.05683233e-03 1.58591360e-03
-6.61778274e-03]
[-1.40175753e-02 1.13203836e-02 1.25909535e-02 3.65332217e-03
-1.36116685e-02]]
###Markdown
As we see, the KL divergence significantly drops to almost zero after several iterations, although there is some peak in the beginning. Let's plot KLD for the first 20 iterations and the last 80.
###Code
plt.title("KL divergence for the first 20 iterations of QNM", fontsize=10)
plt.xlabel("Iterations", fontsize=10)
plt.ylabel("KL divergence", fontsize=10)
plt.plot(kl_div[0:20])
plt.show()
plt.xlabel("Iterations", fontsize=10)
plt.ylabel("KL divergence", fontsize=10)
plt.title("KL divergence for the last 80 iterations of QNM", fontsize=10)
plt.plot(kl_div[20:])
plt.show()
print 'KL divergence on the last iteration:', kl_div[-1]
###Output
_____no_output_____
###Markdown
Let's do it one more time for another matrix, but this time let's try different sizes of $W$ and $H$.
###Code
test2 = np.array([[5.,10.,10.,4.,7.,4.],[10.,7.,10.,15.,3.,8.],[10.,10.,10.,17.,6.,15.],[9.,15.,6.,10.,5.,16.],[12.,11.,10.,9.,8.,7.]])
W1,H1,kl_div1 = qn(test2,max_iter = 10,rank = 1)
W2,H2,kl_div2 = qn(test2,max_iter = 10,rank = 2)
W3,H3,kl_div3 = qn(test2,max_iter = 10,rank = 3)
W4,H4,kl_div4 = qn(test2,max_iter = 10,rank = 4)
W5,H5,kl_div5 = qn(test2,max_iter = 10,rank = 5)
plt.title("KL divergence for 10 iterations of QNM", fontsize=10)
plt.xlabel("Iterations", fontsize=10)
plt.ylabel("KL divergence", fontsize=10)
plt.plot(kl_div1, color='black', label='R=1')
plt.plot(kl_div2, color='blue', label='R=2')
plt.plot(kl_div3, color='green', label='R=3')
plt.plot(kl_div4, color='brown', label='R=4')
plt.plot(kl_div5, color='red', label='R=5')
plt.legend()
plt.show()
plt.title("KL divergence for the last 4 iterations of QNM", fontsize=10)
plt.xlabel("Iterations", fontsize=10)
plt.ylabel("KL divergence", fontsize=10)
plt.plot(kl_div1[6:], color='black', label='R=1')
plt.plot(kl_div2[6:], color='blue', label='R=2')
plt.plot(kl_div3[6:], color='green', label='R=3')
plt.plot(kl_div4[6:], color='brown', label='R=4')
plt.plot(kl_div5[6:], color='red', label='R=5')
plt.legend()
plt.show()
print 'KL divergence on the last iteration:'
print ' R=1:',kl_div1[-1]
print ' R=2:',kl_div2[-1]
print ' R=3:',kl_div3[-1]
print ' R=4:',kl_div4[-1]
print ' R=5:',kl_div5[-1]
###Output
_____no_output_____ |
Kaggle/Courses/Pandas/6-exercise-renaming-and-combining.ipynb | ###Markdown
 **This notebook is an exercise in the [Pandas](https://www.kaggle.com/learn/pandas) course. You can reference the tutorial at [this link](https://www.kaggle.com/residentmario/renaming-and-combining).**
---
 Introduction
Run the following cell to load your data and some utility functions.
###Code
import pandas as pd
reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
from learntools.core import binder; binder.bind(globals())
from learntools.pandas.renaming_and_combining import *
print("Setup complete.")
###Output
Setup complete.
###Markdown
 Exercises
View the first several lines of your data by running the cell below:
###Code
reviews.head()
###Output
_____no_output_____
###Markdown
1.`region_1` and `region_2` are pretty uninformative names for locale columns in the dataset. Create a copy of `reviews` with these columns renamed to `region` and `locale`, respectively.
###Code
renamed = reviews.rename(columns={'region_1':'region', 'region_2':'locale'})
renamed.head()
# Your code here
renamed = reviews.rename(columns={'region_1':'region', 'region_2':'locale'})
# Check your answer
q1.check()
#q1.hint()
#q1.solution()
###Output
_____no_output_____
###Markdown
2.Set the index name in the dataset to `wines`.
###Code
reindexed = reviews.rename_axis('wines')
reindexed
reindexed = reviews.rename_axis('wines')
# Check your answer
q2.check()
#q2.hint()
#q2.solution()
###Output
_____no_output_____
###Markdown
 3.The [Things on Reddit](https://www.kaggle.com/residentmario/things-on-reddit/data) dataset includes product links from a selection of top-ranked forums ("subreddits") on reddit.com. Run the cell below to load a dataframe of products mentioned on the */r/gaming* subreddit and another dataframe for products mentioned on the */r/movies* subreddit.
###Code
gaming_products = pd.read_csv("../input/things-on-reddit/top-things/top-things/reddits/g/gaming.csv")
gaming_products['subreddit'] = "r/gaming"
movie_products = pd.read_csv("../input/things-on-reddit/top-things/top-things/reddits/m/movies.csv")
movie_products['subreddit'] = "r/movies"
len(gaming_products)
gaming_products.head()
len(movie_products)
movie_products.head()
###Output
_____no_output_____
###Markdown
Create a `DataFrame` of products mentioned on *either* subreddit.
###Code
combined_products = pd.concat([gaming_products,movie_products])
combined_products
combined_products = pd.concat([gaming_products,movie_products])
# Check your answer
q3.check()
#q3.hint()
#q3.solution()
###Output
_____no_output_____
###Markdown
4.The [Powerlifting Database](https://www.kaggle.com/open-powerlifting/powerlifting-database) dataset on Kaggle includes one CSV table for powerlifting meets and a separate one for powerlifting competitors. Run the cell below to load these datasets into dataframes:
###Code
powerlifting_meets = pd.read_csv("../input/powerlifting-database/meets.csv")
powerlifting_competitors = pd.read_csv("../input/powerlifting-database/openpowerlifting.csv")
len(powerlifting_meets)
powerlifting_meets.head()
len(powerlifting_competitors)
powerlifting_competitors.head()
###Output
_____no_output_____
###Markdown
Both tables include references to a `MeetID`, a unique key for each meet (competition) included in the database. Using this, generate a dataset combining the two tables into one.
###Code
left = powerlifting_meets.set_index('MeetID')
right = powerlifting_competitors.set_index('MeetID')
left.join(right)
powerlifting_combined = left.join(right)
# Check your answer
q4.check()
#q4.hint()
#q4.solution()
###Output
_____no_output_____ |
IGTI_Módulo_5_[Desafio_Final].ipynb | ###Markdown
 **Problem statement**
 In this final challenge, we will apply a good part of the concepts presented throughout all modules of the **Bootcamp** to the analysis and classification of vehicles from the well-known **"cars"** dataset.
 > This dataset contains a set of information about several surveyed vehicles.
 > There are data, for example, on engine power, origin, and cubic-inch displacement.
 ____
 For this analysis, we will use dimensionality reduction with **PCA**, clustering with **K-Means**, and classification with supervised algorithms. 1. Access the link below and download the **cars.csv** file.
###Code
import numpy as np
import pandas as pd
import requests
def transpose(d):
return pd.DataFrame(d).transpose()
url ='https://drive.google.com/uc?export=download&id=1Gjumb68_WrOOJr-7YUKH3yFk6rNJrHH2'
cars = pd.read_csv(url)
cars_original = cars.copy()
cars.describe().T
cars.info()
cars.tail(3)
###Output
_____no_output_____
###Markdown
 2. For the implementation of the algorithms, use the definitions below:
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
random_state = 42
scaler = StandardScaler()
pca = PCA(n_components=7)
kmeans = KMeans(n_clusters=3,random_state=random_state)
dtc = DecisionTreeClassifier(random_state=random_state)
lr = LogisticRegression(random_state=random_state)
###Output
_____no_output_____
###Markdown
 3. For the questions that involve building supervised models, you must use the original dataset to define vehicle efficiency.
 In addition, you must use the variables **['cylinders', 'cubicinches', 'hp', 'weightlbs', 'time-to-60']** as input.
 The output must be the vehicle's efficiency classification.
###Code
#cars = cars[['cylinders' ,'cubicinches' ,'hp' ,'weightlbs' ,'time-to-60']]
###Output
_____no_output_____
###Markdown
 1. After using the pandas library to read the data, regarding the values read, it is CORRECT to state that:
###Code
transpose(cars.isna().sum())
transpose(cars.dtypes)
###Output
_____no_output_____
###Markdown
 2. Convert the **"cubicinches"** and "weightlbs" columns from **"string"** to numeric type using **pd.to_numeric()** with the parameter errors='coerce'. After this transformation, it is CORRECT to state:
###Code
cars.cubicinches = pd.to_numeric(cars.cubicinches,errors='coerce')
transpose(cars.isna().sum())
###Output
_____no_output_____
###Markdown
 3. Indicate which indices of the values present in the dataset **"forced"** pandas to interpret the **"cubicinches"** variable as a string.
###Code
#coerce = invalid parsing will be set as NaN.
cars_original[cars_original.cubicinches==' '].index
###Output
_____no_output_____
###Markdown
 4. After converting the "string" variables to numeric values, how many null values (cells in the dataframe) now exist in the dataset?
###Code
cars.weightlbs = pd.to_numeric(cars.weightlbs,errors='coerce')
transpose(cars.isna().sum())
###Output
_____no_output_____
###Markdown
 5. Replace the null values introduced in the dataset by the transformation with the mean value of the columns. What is the new mean value of the **"weightlbs"** column?
###Code
cars.loc[cars.cubicinches.isna(),'cubicinches'] = cars.cubicinches.mean()
cars.loc[cars.weightlbs.isna(),'weightlbs'] = cars.weightlbs.mean()
cars.weightlbs.mean()
###Output
_____no_output_____
###Markdown
 6. After replacing the null values with the column means, select the columns **['mpg', 'cylinders', 'cubicinches', 'hp', 'weightlbs', 'time-to-60', 'year']**. What is the median value of the 'mpg' feature?
###Code
cars.mpg.median()
###Output
_____no_output_____
###Markdown
 7. Which statement about the value 14.00 for the "time-to-60" variable is CORRECT?
###Code
transpose(cars['time-to-60'].describe())
###Output
_____no_output_____
###Markdown
 8. Regarding the Pearson correlation coefficient between the "cylinders" and "mpg" variables, all of the following are correct, EXCEPT:
###Code
cars[['cylinders','mpg']].corr()
###Output
_____no_output_____
###Markdown
 9. Regarding the boxplot of the "hp" variable, all of the following are correct, EXCEPT:
###Code
cars.hp.plot.box()
###Output
_____no_output_____
###Markdown
 10. After normalizing with the StandardScaler() function, what is the largest value of the "hp" variable?
###Code
print('Antes:\t',cars.hp.max())
cars.hp = scaler.fit_transform(cars.hp.values.reshape(-1,1))
print('Depois:\t',cars.hp.max())
###Output
Antes: 230
Depois: 3.05870398977614
###Markdown
 11. Applying PCA as defined above, what is the explained variance of the first principal component?
###Code
pca.fit(cars[cars.columns[:-1]])
pca.explained_variance_
pca.components_
###Output
_____no_output_____
###Markdown
 12. Use the first three principal components to build K-means with 3 clusters. Regarding the clusters, it is INCORRECT to state that:
###Code
k = kmeans.fit(pca.components_[:3])
k.cluster_centers_
###Output
_____no_output_____
###Markdown
 13. After all the processing done in the previous items, create a column containing the vehicle-efficiency variable. Vehicles that travel more than 25 miles per gallon ("mpg" > 25) should be considered efficient. Use the columns ['cylinders', 'cubicinches', 'hp', 'weightlbs', 'time-to-60'] as inputs and the created efficiency column as output.
 Using the decision tree as shown, what is the accuracy of the model?
###Code
cars['eficientes'] = 0
cars.loc[cars.mpg>25,'eficientes']=1
x = cars[['cylinders' ,'cubicinches' ,'hp' ,'weightlbs','time-to-60']]
x = scaler.transform(x)
y = cars['eficientes']
xtrain, xtest, ytrain, ytest = train_test_split(x, y, test_size=0.3,random_state=random_state)
from sklearn.metrics import accuracy_score
dtc = DecisionTreeClassifier(random_state=random_state)
dtc = dtc.fit(xtrain,ytrain)
predicts = dtc.predict(xtest)
accuracy_score(ytest,predicts)
###Output
_____no_output_____
###Markdown
 14. Regarding the confusion matrix obtained after applying the decision tree, as shown earlier, it is INCORRECT to state:
###Code
from sklearn.metrics import confusion_matrix
verdadeiro_negativo, falso_positivo, falso_negativo, verdadeiro_positivo = confusion_matrix(ytest,predicts).ravel()
print('verdadeiro_negativo:\t',verdadeiro_negativo)
print('falso_positivo:\t\t',falso_positivo)
print('falso_negativo:\t\t',falso_negativo)
print('verdadeiro_positivo:\t',verdadeiro_positivo)
###Output
verdadeiro_negativo: 33
falso_positivo: 8
falso_negativo: 2
verdadeiro_positivo: 36
###Markdown
 15. Using the same train/test split used in the previous analysis, apply the logistic regression model as shown in the assignment description.
 Comparing the results with the decision tree model, it is INCORRECT to state that:
###Code
lr = LogisticRegression(random_state=random_state)
lr = lr.fit(xtrain,ytrain)
predicts = lr.predict(xtest)
accuracy_score(ytest,predicts)
###Output
_____no_output_____ |
04_Advection_1D/02_CFLCondition.ipynb | ###Markdown
Content under Creative Commons Attribution license CC-BY 4.0, code under MIT license (c)2019 Daniel Koehn based on (c)2014 L.A. Barba, G.F. Forsyth, C. Cooper [CFDPython](https://github.com/barbagroup/CFDPython), (c)2013 L.A. Barba, also under CC-BY license.
###Code
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, 'r').read())
###Output
_____no_output_____
###Markdown
 1D Linear Advection Problem: Stability and CFL condition
In the first lesson of this lecture, we studied the numerical solution of the linear and non-linear advection equations, using the finite-difference method. Did you experiment there using different parameter choices? If you did, you probably ran into some unexpected behavior. Did your solution ever blow up (sometimes in a cool way!)? In this Jupyter Notebook, we will explore why changing the discretization parameters can affect your solution in such a drastic way. With the solution parameters we initially suggested, the spatial grid had 41 points and the time-step size was 0.025. Now, we're going to experiment with the number of points in the grid. The code below corresponds to the linear advection case, but written into a function so that we can easily examine what happens as we adjust just one variable: **the grid size**.
###Code
import numpy
from matplotlib import pyplot
%matplotlib inline
# Set the font family and size to use for Matplotlib figures.
pyplot.rcParams['font.family'] = 'serif'
pyplot.rcParams['font.size'] = 16
def linear_advection(nx, L=2.0, c=1.0, dt=0.025, nt=20):
"""
Solves the 1D linear convection equation
with constant speed c in the domain [0, L]
and plots the solution (along with the initial conditions).
Parameters
----------
nx : integer
Number of grid points to discretize the domain.
L : float, optional
Length of the domain; default: 2.0.
c : float, optional
advection speed; default: 1.0.
dt : float, optional
Time-step size; default: 0.025.
nt : integer, optional
Number of time steps to compute; default: 20.
"""
# Discretize spatial grid.
dx = L / (nx - 1)
x = numpy.linspace(0.0, L, num=nx)
# Set initial conditions.
u0 = numpy.ones(nx)
mask = numpy.where(numpy.logical_and(x >= 0.5, x <= 1.0))
u0[mask] = 2.0
# Integrate the solution in time.
u = u0.copy()
for n in range(1, nt):
#u[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1])
un = u.copy()
for i in range(1, nx):
u[i] = un[i] - c * dt / dx * (un[i] - un[i - 1])
# Plot the solution along with the initial conditions.
pyplot.figure(figsize=(4.0, 4.0))
pyplot.xlabel('x')
pyplot.ylabel('u')
pyplot.grid()
pyplot.plot(x, u0, label='Initial',
color='C0', linestyle='--', linewidth=2)
pyplot.plot(x, u, label='nt = {}'.format(nt),
color='C1', linestyle='-', linewidth=2)
pyplot.legend()
pyplot.xlim(0.0, L)
pyplot.ylim(0.0, 2.5);
###Output
_____no_output_____
###Markdown
Now let's examine the results of the linear advection problem with an increasingly fine mesh. We'll try 41, 61 and 71 points ... then we'll shoot for 85. See what happens:
###Code
linear_advection(41) # solve using 41 spatial grid points
linear_advection(61)
linear_advection(71)
###Output
_____no_output_____
###Markdown
So far so good—as we refine the spatial grid, the wave is more square, indicating that the discretization error is getting smaller. But what happens when we refine the grid even further? Let's try 85 grid points.
###Code
linear_advection(85)
###Output
_____no_output_____
###Markdown
Oops. This doesn't look anything like our original hat function. Something has gone awry. It's the same code that we ran each time, so it's not a bug! What happened? To answer that question, we have to think a little bit about what we're actually implementing in the code when we solve the linear convection equation with the forward-time/backward-space method. In each iteration of the time loop, we use the existing data about the solution at time $n$ to compute the solution in the subsequent time step, $n+1$. In the first few cases, the increase in the number of grid points returned more accurate results. There was less discretization error and the translating wave looked more like a square wave than it did in our first example. Each iteration of the time loop advances the solution by a time-step of length $\Delta t$, which had the value 0.025 in the examples above. During this iteration, we evaluate the solution $u$ at each of the $x_i$ points on the grid. But in the last plot, something has clearly gone wrong. What has happened is that over the time period $\Delta t$, the wave is travelling a distance which is greater than `dx`, and we say that the solution becomes *unstable* in this situation (this statement can be proven formally, see below). The length `dx` of grid spacing is inversely proportional to the number of total points `nx`: we asked for more grid points, so `dx` got smaller. Once `dx` got smaller than the $c\Delta t$—the distance travelled by the numerical solution in one time step—it's no longer possible for the numerical scheme to solve the equation correctly!  Graphical interpretation of the CFL condition. Consider the illustration above. The green triangle represents the _domain of dependence_ of the numerical scheme. Indeed, for each time step, the variable $u_i^{n+1}$ only depends on the values $u_i^{n}$ and $u_{i-1}^{n}$. When the distance $c\Delta t$ is smaller than $\Delta x$, the characteristic line traced from the grid coordinate $i, n+1$ lands _between_ the points $i-1,n$ and $i,n$ on the grid. We then say that the _mathematical domain of dependence_ of the solution of the original PDE is contained in the _domain of dependence_ of the numerical scheme. On the contrary, if $\Delta x$ is smaller than $c\Delta t$, then the information about the solution needed for $u_i^{n+1}$ is not available in the _domain of dependence_ of the numerical scheme, because the characteristic line traced from the grid coordinate $i, n+1$ lands _behind_ the point $i-1,n$ on the grid. The following condition thus ensures that the domain of dependence of the differential equation is contained in the _numerical_ domain of dependence: $$\begin{equation}\sigma = \frac{c \Delta t}{\Delta x} \leq 1\end{equation}$$As can be proven formally, for example using the [von Neumann analysis](https://de.wikipedia.org/wiki/Von-Neumann-Stabilit%C3%A4tsanalyse), stability of the numerical solution requires that step size `dt` is calculated with respect to the size of `dx` to satisfy the condition above. The value of $c\Delta t/\Delta x$ is called the [**Courant-Friedrichs-Lewy number**](https://gdz.sub.uni-goettingen.de/id/PPN235181684_0100?tify={%22pages%22:[36],%22panX%22:0.519,%22panY%22:0.646,%22view%22:%22info%22,%22zoom%22:0.44}) (CFL number), often denoted by $\sigma$. 
The value $\sigma_{\text{max}}$ that will ensure stability depends on the discretization used; for the forward-time/backward-space scheme, the condition for stability is $\sigma<1$.
In a new version of our code—written _defensively_—we'll use the CFL number to calculate the appropriate time-step `dt` depending on the size of `dx`. Furthermore, we also define a maximum modelling time `Tmax`.
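Before looking at the new version, a quick check makes the blow-up above concrete: evaluating $\sigma = \frac{c\Delta t}{\Delta x}$ for the grid sizes we just tried (with the default values $c=1$, $\Delta t=0.025$, $L=2$) shows that the CFL number crosses 1 exactly at 85 grid points.
###Code
c = 1.0
dt = 0.025
L = 2.0
for nx in (41, 61, 71, 85):
    dx = L / (nx - 1)
    print('nx = {:3d}  dx = {:.4f}  sigma = {:.3f}'.format(nx, dx, c * dt / dx))
###Output
nx =  41  dx = 0.0500  sigma = 0.500
nx =  61  dx = 0.0333  sigma = 0.750
nx =  71  dx = 0.0286  sigma = 0.875
nx =  85  dx = 0.0238  sigma = 1.050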
###Code
def linear_advection_cfl(nx, Tmax, L=2.0, c=1.0, sigma=0.5):
"""
Solves the 1D linear advection equation
with constant speed c in the domain [0, L]
and plots the solution (along with the initial conditions).
Here, the time-step size is calculated based on a CFL constraint.
Parameters
----------
nx : integer
Number of grid points to discretize the domain.
Tmax : float
Maximum integration time
L : float, optional
Length of the domain; default: 2.0.
c : float, optional
Advection speed; default: 1.0.
sigma : float, optional
CFL constraint; default: 0.5.
"""
# Discretize spatial grid.
dx = L / (nx - 1)
x = numpy.linspace(0.0, L, num=nx)
# Compute the time-step size based on the CFL constraint.
dt = sigma * dx / c
# Compute number of time steps based on Tmax and dt
nt = (int)(Tmax/dt)
# Set initial conditions.
u0 = numpy.ones(nx)
mask = numpy.where(numpy.logical_and(x >= 0.5, x <= 1.0))
u0[mask] = 2.0
# Integrate the solution in time.
u = u0.copy()
for n in range(1, nt):
un = u.copy()
for i in range(1, nx):
u[i] = un[i] - c * dt / dx * (un[i] - un[i - 1])
# Plot the solution along with the initial conditions.
pyplot.figure(figsize=(4.0, 4.0))
pyplot.xlabel('x')
pyplot.ylabel('u')
pyplot.grid()
pyplot.plot(x, u0, label='Initial',
color='C0', linestyle='--', linewidth=2)
pyplot.plot(x, u, label='nt = {}'.format(nt),
color='C1', linestyle='-', linewidth=2)
pyplot.legend()
pyplot.xlim(0.0, L)
pyplot.ylim(0.0, 2.5);
###Output
_____no_output_____
###Markdown
Now, it doesn't matter how many points we use for the spatial grid: the solution will always be stable!
###Code
# Define maximum time Tmax
Tmax = 0.5
linear_advection_cfl(85,Tmax)
linear_advection_cfl(441,Tmax)
linear_advection_cfl(2000,Tmax)
###Output
_____no_output_____ |
Yandex data science/4/Week 2/.ipynb_checkpoints/stat.two_proportions_diff_test-checkpoint.ipynb | ###Markdown
 **Correctness verified on Python 3.7:**
+ pandas 0.23.0
+ numpy 1.14.5
+ scipy 1.1.0
+ statsmodels 0.9.0
 Z-test for two proportions
###Code
import numpy as np
import pandas as pd
import scipy
from statsmodels.stats.weightstats import *
from statsmodels.stats.proportion import proportion_confint
import scipy
import statsmodels
print(np.__version__)
print(pd.__version__)
print(scipy.__version__)
print(statsmodels.__version__)
###Output
1.17.4
0.25.3
1.3.2
0.10.1
###Markdown
 Loading the data
###Code
data = pd.read_csv('banner_click_stat.txt', header = None, sep = '\t')
data.columns = ['banner_a', 'banner_b']
data.head()
data.describe()
###Output
_____no_output_____
###Markdown
 Interval estimates of the proportions (Wilson confidence interval) $$\frac1{ 1 + \frac{z^2}{n} } \left( \hat{p} + \frac{z^2}{2n} \pm z \sqrt{ \frac{ \hat{p}\left(1-\hat{p}\right)}{n} + \frac{z^2}{4n^2} } \right), \;\; z \equiv z_{1-\frac{\alpha}{2}}$$
###Code
conf_interval_banner_a = proportion_confint(sum(data.banner_a),
data.shape[0],
method = 'wilson')
conf_interval_banner_b = proportion_confint(sum(data.banner_b),
data.shape[0],
method = 'wilson')
print('95%% confidence interval for a click probability, banner a: [%f, %f]' % conf_interval_banner_a)
print('95%% confidence interval for a click probability, banner b [%f, %f]' % conf_interval_banner_b)
###Output
95% confidence interval for a click probability, banner a: [0.026961, 0.050582]
95% confidence interval for a click probability, banner b [0.040747, 0.068675]
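###Markdown
As a sanity check, the Wilson interval can also be computed directly from the formula above and compared with the `proportion_confint(..., method='wilson')` output for banner a (an illustrative sketch added here, not part of the original analysis):
###Code
z = scipy.stats.norm.ppf(1 - 0.05 / 2.)
p_hat = data.banner_a.mean()
n = data.shape[0]
center = (p_hat + z**2 / (2. * n)) / (1. + z**2 / n)
half_width = z * np.sqrt(p_hat * (1. - p_hat) / n + z**2 / (4. * n**2)) / (1. + z**2 / n)
print('manual Wilson interval for banner a: [%f, %f]' % (center - half_width, center + half_width))
###Output
_____no_output_____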
###Markdown
 Z-test for the difference of proportions (independent samples)

|        | $X_1$ | $X_2$ |
|--------|-------|-------|
| 1      | a     | b     |
| 0      | c     | d     |
| $\sum$ | $n_1$ | $n_2$ |

$$ \hat{p}_1 = \frac{a}{n_1}$$
$$ \hat{p}_2 = \frac{b}{n_2}$$
$$\text{Confidence interval for }p_1 - p_2\colon \;\; \hat{p}_1 - \hat{p}_2 \pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \frac{\hat{p}_2(1 - \hat{p}_2)}{n_2}}$$
$$\text{Z-statistic: } Z(X_1, X_2) = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{P(1 - P)(\frac{1}{n_1} + \frac{1}{n_2})}}$$
$$P = \frac{\hat{p}_1 n_1 + \hat{p}_2 n_2}{n_1 + n_2}$$
###Code
def proportions_diff_confint_ind(sample1, sample2, alpha = 0.05):
z = scipy.stats.norm.ppf(1 - alpha / 2.)
p1 = float(sum(sample1)) / len(sample1)
p2 = float(sum(sample2)) / len(sample2)
left_boundary = (p1 - p2) - z * np.sqrt(p1 * (1 - p1)/ len(sample1) + p2 * (1 - p2)/ len(sample2))
right_boundary = (p1 - p2) + z * np.sqrt(p1 * (1 - p1)/ len(sample1) + p2 * (1 - p2)/ len(sample2))
return (left_boundary, right_boundary)
def proportions_diff_z_stat_ind(sample1, sample2):
n1 = len(sample1)
n2 = len(sample2)
p1 = float(sum(sample1)) / n1
p2 = float(sum(sample2)) / n2
P = float(p1*n1 + p2*n2) / (n1 + n2)
return (p1 - p2) / np.sqrt(P * (1 - P) * (1. / n1 + 1. / n2))
def proportions_diff_z_test(z_stat, alternative = 'two-sided'):
if alternative not in ('two-sided', 'less', 'greater'):
raise ValueError("alternative not recognized\n"
"should be 'two-sided', 'less' or 'greater'")
if alternative == 'two-sided':
return 2 * (1 - scipy.stats.norm.cdf(np.abs(z_stat)))
if alternative == 'less':
return scipy.stats.norm.cdf(z_stat)
if alternative == 'greater':
return 1 - scipy.stats.norm.cdf(z_stat)
print("95%% confidence interval for a difference between proportions: [%f, %f]" %\
proportions_diff_confint_ind(data.banner_a, data.banner_b))
print("p-value: %f" % proportions_diff_z_test(proportions_diff_z_stat_ind(data.banner_a, data.banner_b)))
print("p-value: %f" % proportions_diff_z_test(proportions_diff_z_stat_ind(data.banner_a, data.banner_b), 'less'))
###Output
p-value: 0.042189
###Markdown
 Z-test for the difference of proportions (related samples)

| $X_1$ \ $X_2$ | 1     | 0     | $\sum$ |
|---------------|-------|-------|--------|
| 1             | e     | f     | e + f  |
| 0             | g     | h     | g + h  |
| $\sum$        | e + g | f + h | n      |

$$ \hat{p}_1 = \frac{e + f}{n}$$
$$ \hat{p}_2 = \frac{e + g}{n}$$
$$ \hat{p}_1 - \hat{p}_2 = \frac{f - g}{n}$$
$$\text{Confidence interval for }p_1 - p_2\colon \;\; \frac{f - g}{n} \pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{f + g}{n^2} - \frac{(f - g)^2}{n^3}}$$
$$\text{Z-statistic: } Z(X_1, X_2) = \frac{f - g}{\sqrt{f + g - \frac{(f-g)^2}{n}}}$$
###Code
def proportions_diff_confint_rel(sample1, sample2, alpha = 0.05):
z = scipy.stats.norm.ppf(1 - alpha / 2.)
sample = list(zip(sample1, sample2))
n = len(sample)
f = sum([1 if (x[0] == 1 and x[1] == 0) else 0 for x in sample])
g = sum([1 if (x[0] == 0 and x[1] == 1) else 0 for x in sample])
left_boundary = float(f - g) / n - z * np.sqrt(float((f + g)) / n**2 - float((f - g)**2) / n**3)
right_boundary = float(f - g) / n + z * np.sqrt(float((f + g)) / n**2 - float((f - g)**2) / n**3)
return (left_boundary, right_boundary)
def proportions_diff_z_stat_rel(sample1, sample2):
sample = list(zip(sample1, sample2))
n = len(sample)
f = sum([1 if (x[0] == 1 and x[1] == 0) else 0 for x in sample])
g = sum([1 if (x[0] == 0 and x[1] == 1) else 0 for x in sample])
return float(f - g) / np.sqrt(f + g - float((f - g)**2) / n )
print("95%% confidence interval for a difference between proportions: [%f, %f]" \
% proportions_diff_confint_rel(data.banner_a, data.banner_b))
print("p-value: %f" % proportions_diff_z_test(proportions_diff_z_stat_rel(data.banner_a, data.banner_b)))
print("p-value: %f" % proportions_diff_z_test(proportions_diff_z_stat_rel(data.banner_a, data.banner_b), 'less'))
###Output
p-value: 0.001675
|
notebooks/intermediate_frequency/loop_hydrodynamics.ipynb | ###Markdown
Loop Hydrodynamics: Intermediate-frequency Nanoflares
###Code
import os
import sys
import subprocess
import multiprocessing
import numpy as np
from scipy.optimize import curve_fit,brentq
import astropy.units as u
import matplotlib.pyplot as plt
import synthesizAR
from synthesizAR.interfaces import EbtelInterface
sys.path.append('../../scripts')
from constrained_heating_model import CustomHeatingModel
%matplotlib inline
field = synthesizAR.Field.restore('/storage-home/w/wtb2/data/timelag_synthesis_v2/base_noaa1158/')
heating_options = {
'duration': 200.0,
'duration_rise': 100.0,
'duration_decay': 100.0,
'stress_level': 1.,
'power_law_slope': -2.5,
'frequency_parameter': 1.
}
heating_model = CustomHeatingModel(heating_options)
ih = synthesizAR.util.InputHandler('/storage-home/w/wtb2/codes/ebtelPlusPlus/config/ebtel.example.cfg.xml')
base_config = ih.lookup_vars()
base_config['c1_cond0'] = 6.0
base_config['total_time'] = 3e4
base_config['use_adaptive_solver'] = True
base_config['use_flux_limiting'] = True
base_config['calculate_dem'] = False
base_config['heating']['partition'] = 1.0
base_config['heating']['background'] = 1e-6
base_config['force_single_fluid'] = False
base_config['tau_max'] = 200.0
ebtel_interface = EbtelInterface(base_config,heating_model,
'/storage-home/w/wtb2/data/timelag_synthesis_v2/intermediate_frequency/hydro_config/',
'/storage-home/w/wtb2/data/timelag_synthesis_v2/intermediate_frequency/hydro_results/')
heating_model.constrain_distribution(field,
tolerance=1e-3,
ar_flux_constraint=1e7,
sigma_increase=1.,sigma_decrease=1e-6,
verbose=True)
###Output
Iteration 0 with error=0.3272736265973286 and phi=1.3272736265973286
Iteration 1 with error=0.09766036751636864 and phi=0.9023396324836314
Iteration 2 with error=0.020273130179691456 and phi=0.9797268698203085
Iteration 3 with error=0.00033944150774822823 and phi=1.0003394415077482
###Markdown
Check that we are obeying the constraint and take a look at the distribution of $\epsilon$ values, i.e. for each event, what fraction of the energy is being extracted from the field. This value should be close to 1 as the average flux over time and over all strands should be $\approx 10^7$ erg cm$^{-2}$ s$^{-1}$, the constraint from WN77
###Code
tot = 0.
energies = []
loop_energies = []
for l in field.loops:
energies += (heating_model.power_law_distributions[l.name] / ((l.field_strength.value.mean()**2)/8./np.pi)).tolist()
loop_energies.append((heating_model.power_law_distributions[l.name] / ((l.field_strength.value.mean()**2)/8./np.pi)))
tot += heating_model.power_law_distributions[l.name].sum()*l.full_length.to(u.cm).value
print(tot / len(field.loops) / base_config['total_time'] / 1e7)
def mle(x,xmin,xmax,alpha_bounds=[1.1,10]):
#define mle function
def f_mle(alpha,xi,x_min,x_max):
n = len(xi)
term1 = -np.sum(np.log(xi))
term2 = n/(alpha - 1.0)
term3a = n/(x_min**(1.0-alpha) - x_max**(1.0-alpha))
term3b = x_min**(1.0-alpha)*np.log(x_min) - x_max**(1.0-alpha)*np.log(x_max)
return term1 + term2 + term3a*term3b
x0,r = brentq(f_mle,alpha_bounds[0],alpha_bounds[1],args=(x,xmin,xmax),full_output=True)
if r.converged:
return x0
else:
print('Minimization not sucessful. Returning None')
return None
hist,bins,_ = plt.hist(np.array(energies),
bins=np.logspace(-3,0.1,100),
lw=2,histtype='step',density=False,);
def fit_func(x,a,b):
return a*x + b
power_law_slopes = np.zeros((len(field.loops),))
for i,le in enumerate(loop_energies):
#print(i)
try:
power_law_slopes[i] = mle(le,le.min(),le.max(),)
except (RuntimeError,ValueError):
power_law_slopes[i] = np.nan
bin_centers = (bins[1:] + bins[:-1])/2.
bin_centers = bin_centers[hist>0]
popt,pcov = curve_fit(fit_func, np.log10(bin_centers), np.log10(hist[hist>0]),)
plt.plot(bin_centers,(10.**popt[1])*(bin_centers**popt[0]),color='C1')
#plt.axvline(x=0.3,ls=':',color='k')
#plt.axvline(x=0.1,ls=':',color='k')
plt.xscale('log')
plt.yscale('log')
plt.xlim(1e-3,3)
plt.ylim(1,5e4)
#plt.title(r'$\alpha$={:.3f}'.format(popt[0]));
plt.hist(power_law_slopes[~np.isnan(power_law_slopes)],bins='scott',histtype='step',lw=2);
plt.axvline(x=2.5,ls=':',color='K')
plt.xlim(0,3)
field.configure_loop_simulations(ebtel_interface)
def ebtel_runner(loop):
subprocess.call([os.path.join('/storage-home/w/wtb2/codes/','ebtelPlusPlus/bin/ebtel++.run'),
'-c',loop.hydro_configuration['config_filename']])
pool = multiprocessing.Pool()
runs = pool.map_async(ebtel_runner,field.loops)
runs.wait()
field.load_loop_simulations(ebtel_interface,
savefile='/storage-home/w/wtb2/data/timelag_synthesis_v2/intermediate_frequency/loop_parameters.h5'
)
fig,axes = plt.subplots(2,1,figsize=(20,10),sharex=True)
plt.subplots_adjust(hspace=0.)
for loop in field.loops[::5]:
axes[0].plot(loop.time,loop.electron_temperature[:,0].to(u.MK),color='C0',alpha=0.05)
axes[0].plot(loop.time,loop.ion_temperature[:,0].to(u.MK),color='C2',alpha=0.05)
axes[1].plot(loop.time,loop.density[:,0]/1e9,color='C0',alpha=0.1)
axes[0].set_xlim(0,base_config['total_time'])
axes[0].set_ylim(0,25)
axes[1].set_ylim(0,50)
axes[0].set_ylabel(r'$T$ [MK]')
axes[1].set_ylabel(r'$n$ [10$^9$ cm$^{-3}$]')
axes[1].set_xlabel(r'$t$ [s]')
field.save('/storage-home/w/wtb2/data/timelag_synthesis_v2/intermediate_frequency/field_checkpoint')
###Output
WARNING: VerifyWarning: Invalid 'BLANK' keyword in header. The 'BLANK' keyword is only applicable to integer data, and will be ignored in this HDU. [astropy.io.fits.hdu.image]
|
notebook_examples/Ex_svm.ipynb | ###Markdown
 Example SVM
First, we'll import the 'ML' module (to use its 'Classifier' class), os, and TQDM, which is a handy pip-installable package that gives us nice loading bars.
###Code
import ML, os
from tqdm import tqdm
###Output
_____no_output_____
###Markdown
 Set your paths!
- 'patient_path' points to our 'condition-positive' dataset; in this example it points to spectral data in the 'ref pain' study folder, using the P300 task data, with 500-sample-long contig windows and all channels.
- 'reference_path' points to a folder containing healthy control data study folders.
###Code
patient_path = "/wavi/EEGstudies/CANlab/spectra/P300_500_1111111111111111111_0"
reference_path = "/wavi/EEGstudies"
###Output
_____no_output_____
###Markdown
 Instantiate a 'Classifier' Object
'Classifier' takes one positional argument, currently either "spectra" or "contigs".
###Code
myclf = ML.Classifier("spectra")
###Output
_____no_output_____
###Markdown
Load Patient (Condition-Positive) Data
###Code
for fname in tqdm(os.listdir(patient_path)):
myclf.LoadData(patient_path+"/"+fname)
###Output
100%|██████████| 207/207 [00:00<00:00, 264.89it/s]
###Markdown
 Load Control (Condition-Negative) Data
Using the 'Balance' method of 'Classifier', the dataset will automatically add healthy control data found in the reference folders.
*Note:* there are currently few scans in the 81+ age group, so it won't balance completely and will not finish the loop; it's balanced to within 1% or so.
###Code
myclf.Balance(reference_path)
###Output
100%|██████████| 7/7 [00:00<00:00, 8.66it/s]
###Markdown
 Run the 'SVM' method of 'Classifier'
This method will structure the input classes (in this case, 'Spectra' objects). Optional parameters include:
- C: (float > 0) default 1, regularization parameter
- iterations: (int) default 1000, maximum number of iterations to be run
- kernel type: ('rbf', 'linear') default 'linear'
- normalize: (None, 'standard', 'minmax') default None, z-score normalize input data (features)
- plot_PR: (bool) default 'False', plot precision-recall curve
- plot_Features: (bool) default 'False', plot features, and selected features if feat_select set to 'True'
- lowbound: (int) default 3, in Hz, lowest frequency included in the model
- highbound: (int) default 20, in Hz, highest frequency included in the model
- feat_select: (bool) default 'False', univariate feature selection
- num_feats: (int) default 10, number of features selected with feat_select set to 'True'
- tt_split: (float) default 0.33, ratio of test samples to train samples
###Code
myclf.SVM(kernel='linear', iterations=10000, normalize="standard", num_feats=30, plot_Features=True, lowbound=0, highbound=25)
###Output
Number of negative outcomes: 210
Number of positive outcomes: 207
Number of samples in train: 268
Number of samples in test: 149
Classification accuracy on validation data: 0.725
|
02_jupyter_notebook/notebook.ipynb | ###Markdown
 Jupyter Notebook
Congratulations on opening your first Jupyter Notebook! For the rest of this, we'll be working in Jupyter Notebook to help you become more familiar with it, provide both code and notes side-by-side, and provide places for you to try out your own code.
 Cells
Jupyter Notebook has the concept of cells, likely an idea taken from the cells of Microsoft Excel spreadsheets. There are three kinds of cells. We'll go over each one. Everything you see in a notebook is a cell. You can double-click a cell to edit it. To finalize a cell, click the play button at the top. In programming, the play button acts as the "Run" button. Basically, just click it to make stuff happen :)
You can also reorder cells. Each cell can be clicked and dragged to swap places with other cells. Unfortunately, it doesn't seem like you can place cells side-by-side. This is only for vertical ordering. Lastly, you can select the type of a cell with the dropdown up above. With that, let's dive into each type of cell.
 Raw
Raw cells are simply raw text. They're unformatted, simple text. See the one below:
###Code
This is raw text.
Such raw.
So text.
Wow.
###Output
_____no_output_____
###Markdown
 You'll notice it looks slightly different from what else is written so far. This is because we've been writing in Markdown to this point. Markdown can get complicated, so we're covering it last. Raw text is good for when you want to show some sort of text without any specific formatting. This is good for CSV data, JSON data, or anything that shouldn't be considered more "human" information. At least, that's what I can gather. Often, gray text in a console font is meant for showing data or code.
 Code
Yep, you guessed it. Code cells are where you write your Python code. When you click the play button up top, Jupyter Notebook will execute that code and show any printed output beneath the code. Check it out.
###Code
print('Hello world!')
###Output
Hello world!
|
module4-makefeatures/Arturo_Obregon_LS_DS_114_Make_Features_Assignment.ipynb | ###Markdown
 ASSIGNMENT
- Replicate the lesson code.
  - This means that if you haven't followed along already, type out the things that we did in class. Forcing your fingers to hit each key will help you internalize the syntax of what we're doing.
  - [Lambda Learning Method for DS - By Ryan Herr](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit?usp=sharing)
- Convert the `term` column from string to integer.
- Make a column named `loan_status_is_great`. It should contain the integer 1 if `loan_status` is "Current" or "Fully Paid." Else it should contain the integer 0.
- Make `last_pymnt_d_month` and `last_pymnt_d_year` columns.
###Code
##### Begin Working Here #####
!wget https://resources.lendingclub.com/LoanStats_2018Q4.csv.zip
!unzip LoanStats_2018Q4.csv.zip
!tail LoanStats_2018Q4.csv
!head LoanStats_2018Q4.csv
import pandas as pd
#set pandas display options
pd.set_option('display.max_rows',500)
pd.set_option('display.max_columns',500)
#removed the extra lines of text from the header using (header=1)
#removed the extra lines of text from the footer using (skipfooter = 2)
df = pd.read_csv('LoanStats_2018Q4.csv', header=1, na_values=['n/a'], skipfooter=2)
df.head(10)
#view the column headers of my dataset
list(df.columns)
#I can use the shape method to get the number of rows or columns
# by using [0] I can refer to either the column or rows of the output
#[rows, columns]
df.shape[0]
#sorts the columns with most NaN in ascending order
df.isnull().sum().sort_values(ascending=False)
df = df.drop(columns=['id', 'member_id','desc','url'])
df.head()
#resorted to have NaN columns in ascedning order
df.isnull().sum().sort_values(ascending=False)
df['int_rate']
df.dtypes
def remove_month_to_integer(string):
return int(string.strip('months'))
df['term'] = df['term'].apply(remove_month_to_integer)
df.head()
df.dtypes
# 1 if loan_status is "Current" or "Fully Paid", else 0
df['loan_status_is_great'] = df['loan_status'].isin(['Current', 'Fully Paid']).astype(int)
df['loan_status_is_great']
###Output
_____no_output_____
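###Markdown
For the last assignment item, here is a minimal sketch for deriving `last_pymnt_d_month` and `last_pymnt_d_year`. It assumes `last_pymnt_d` holds strings like "Dec-2018" (the usual LendingClub month-year format); adjust the `format` string if the raw values differ.
###Code
# Parse "Mon-YYYY" strings; anything unparseable becomes NaT.
last_pymnt = pd.to_datetime(df['last_pymnt_d'], format='%b-%Y', errors='coerce')
df['last_pymnt_d_month'] = last_pymnt.dt.month
df['last_pymnt_d_year'] = last_pymnt.dt.year
df[['last_pymnt_d', 'last_pymnt_d_month', 'last_pymnt_d_year']].head()
###Output
_____no_output_____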
###Markdown
 STRETCH OPTIONS
You can do more with the LendingClub or Instacart datasets.
LendingClub options:
- There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values.
- Modify the `emp_title` column to replace titles with 'Other' if the title is not in the top 20.
- Take initiative and work on your own ideas!
Instacart options:
- Read [Instacart Market Basket Analysis, Winner's Interview: 2nd place, Kazuki Onodera](http://blog.kaggle.com/2017/09/21/instacart-market-basket-analysis-winners-interview-2nd-place-kazuki-onodera/), especially the **Feature Engineering** section. (Can you choose one feature from his bulleted lists, and try to engineer it with pandas code?)
- Read and replicate parts of [Simple Exploration Notebook - Instacart](https://www.kaggle.com/sudalairajkumar/simple-exploration-notebook-instacart). (It's the Python Notebook with the most upvotes for this Kaggle competition.)
- Take initiative and work on your own ideas!
You can uncomment and run the cells below to re-download and extract the Instacart data.
###Code
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
# %cd instacart_2017_05_01
###Output
_____no_output_____ |
Seq2Seq-Convolution.ipynb | ###Markdown
Setup
###Code
# OPTIONS:
# ENGLISH - en,
# GERMAN - de,
# FRENCH - fr,
# CZECH - cs
lang1 = 'de'
lang2 = 'en'
train_sentences, test_sentences = load_data(lang1, lang2)
train_sentences = (train_sentences[0][:500], train_sentences[1][:500])
TEST_SIZE=0.2
BATCH_SIZE=64
VALID_BATCH_SIZE=128
MAX_VOCAB=20000
src_vocab, tgt_vocab, train_loader, valid_loader = make_dataset(train_sentences, test_sentences, BATCH_SIZE, VALID_BATCH_SIZE, MAX_VOCAB)
print(f"Number of training examples: {len(train_loader.dataset)}")
print(f"Number of validation examples: {len(valid_loader.dataset)}")
print(f"Training Batches {len(train_loader)}\tValidation Batches {len(valid_loader)}")
print(f"Unique tokens in source ({lang1}) vocabulary: {len(src_vocab)}")
print(f"Unique tokens in target ({lang2}) vocabulary: {len(tgt_vocab)}")
###Output
Unique tokens in source (de) vocabulary: 1348
Unique tokens in target (en) vocabulary: 1224
###Markdown
Make the Model
###Code
# ENCODER ARGS
ENC_UNITS = 32 # 512
ENC_EMBEDDING = 32 # 256
SRC_VOCAB_SIZE = len(src_vocab)
ENC_NUM_LAYERS = 10 # 10
ENC_KERNEL_SIZE = 3 # ODD
DROPOUT = 0.25
# DECODER ARGS
DEC_UNITS = ENC_UNITS
DEC_EMBEDDING = ENC_EMBEDDING
TGT_VOCAB_SIZE = len(tgt_vocab)
DEC_NUM_LAYERS = ENC_NUM_LAYERS
DEC_KERNEL_SIZE = 3 # EVEN OR ODD
PAD_IDX = tgt_vocab.PAD_token
# SEQ2SEQ ARGS
MAX_LENGTH = max(train_loader.dataset.tensors[1].size(1), train_loader.dataset.tensors[0].size(1)) + 3
SOS_TOKEN = tgt_vocab.SOS_token
TEACHER_FORCING = 1.0
encoder = Encoder(SRC_VOCAB_SIZE, ENC_EMBEDDING, ENC_UNITS, ENC_NUM_LAYERS, ENC_KERNEL_SIZE, DROPOUT, MAX_LENGTH)
decoder = Decoder(DEC_UNITS, DEC_EMBEDDING, TGT_VOCAB_SIZE, DEC_NUM_LAYERS, DEC_KERNEL_SIZE, DROPOUT, PAD_IDX, MAX_LENGTH)
seq2seq = Seq2Seq(encoder, decoder, TEACHER_FORCING, MAX_LENGTH, SOS_TOKEN)
print(f'The model has {count_parameters(seq2seq):,} trainable parameters')
print(seq2seq)
criterion = MaskedCrossEntropyLoss(pad_tok=tgt_vocab.PAD_token)
optimizer = optim.Adam(seq2seq.parameters())
###Output
_____no_output_____
###Markdown
Train
###Code
# valid_loss = evaluate(seq2seq, valid_loader, criterion)
# valid_loss
idx = 55
src_sentence = train_loader.dataset.tensors[0][idx:idx+1][:, :20]
tgt_sentence = train_loader.dataset.tensors[1][idx:idx+1][:, :20]
print(src_sentence[:, :19])
print(tgt_sentence[:, :21])
print(src_sentence.size(), tgt_sentence.size())
print(src_vocab.to_string(src_sentence))
print(tgt_vocab.to_string(tgt_sentence))
out, attention = seq2seq(src_sentence)
out.size(), attention.size()
translation = tgt_vocab.to_string(out.argmax(dim=-1))[0]
translation
N_EPOCHS = 50
CLIP = 1
# seq2seq.teacher_forcing = 1.0
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
print(f'Epoch: {epoch+1:02}')
train_loss = train(seq2seq, train_loader, optimizer, criterion, CLIP, src_vocab.PAD_token)
valid_loss = evaluate(seq2seq, train_loader, criterion)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(seq2seq.state_dict(), 'models/seq2seq_conv.pt')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
def evaluate_translate(model, iterator, criterion, pad_tok=0):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, (src, tgt) in enumerate(tqdm(iterator, file=sys.stdout)):
# src.shape = (batch_size, src_seq_len)
# tgt.shape = (batch_size, tgt_seq_len)
src_mask = create_padding_mask(src, pad_tok)
if model.type == 'rnn':
output, attention = model(src, None, src_mask) #turn off teacher forcing
# output.shape == (batch_size, max_length, tgt_vocab_size)
# print(output)
# output = output[:, 1:, :]
tgt = tgt[:, 1:]
elif model.type == 'conv':
output, attention = model(src, None) #turn off teacher forcing
tgt = tgt[:, 1:]
loss = criterion(output, tgt) # masked loss automatically slices for you
epoch_loss += loss.item()
return epoch_loss / len(iterator)
valid_loss = evaluate_translate(seq2seq, train_loader, criterion)
valid_loss, math.exp(valid_loss)
idx = 0
src_sentence = train_loader.dataset.tensors[0][idx:idx+1]
tgt_sentence = train_loader.dataset.tensors[1][idx:idx+1]
src_sentence = src_vocab.to_string(src_sentence, remove_special=True)[0]
tgt_sentence = tgt_vocab.to_string(tgt_sentence, remove_special=True)[0]
translation, attention = translate(src_sentence, seq2seq, src_vocab, tgt_vocab, src_vocab.PAD_token)
print(f"> {src_sentence}")
print(f"= {tgt_sentence}")
print(f"< {translation}")
src_vocab.PAD_token
plot_attention(attention, src_sentence, translation)
attention
# valid_loss = evaluate(seq2seq, valid_loader, criterion)
###Output
_____no_output_____ |
Network Analysis/La Liga Player Properties.ipynb | ###Markdown
Load Data
###Code
%load_ext autoreload
%autoreload 2
import os; import sys; sys.path.append('../')
import pandas as pd
import tqdm
import warnings
import copy
warnings.simplefilter(action='ignore', category=pd.errors.PerformanceWarning)
import networkx as nx
import numpy as np
from collections import Counter
from collections import OrderedDict
import matplotlib.pyplot as plt
import csv
## Configure file and folder names
datafolder = "../data"
spadl_h5 = os.path.join(datafolder,"spadl-statsbomb.h5")
games = pd.read_hdf(spadl_h5,"games")
games = games[games.competition_name == "La Liga"]
print("nb of games:", len(games))
###Output
nb of games: 348
###Markdown
Helper Functions
###Code
def players_in_pos(pos):
contribution_action = ['pass', 'dribble', 'throw_in', 'corner_crossed', 'freekick_crossed', 'cross', 'shot',
'freekick_short', 'goalkick', 'corner_short', 'shot_penalty']
pos_players = []
team = None
for play in pos:
player = play['player_name']
if play['type_name'] in contribution_action and play['result_name'] == 'success':
team = play['team_name']
if player not in pos_players:
pos_players.append(player)
return pos_players, team
def change_possession(action, action_team, possession_team, result):
end_pos = ['bad_touch', 'foul']
change_team = ['pass', 'dribble', 'throw_in', 'corner_crossed', 'freekick_crossed', 'cross', 'shot',
'freekick_short', 'goalkick', 'corner_short', 'shot_penalty', 'keeper_pick_up']
success_change = ['tackle', 'interception', 'take_on', 'clearance', 'keeper_claim', 'keeper_save',
'keeper_punch']
if possession_team == None:
if result == 'success':
if action in change_team:
possession_team = action_team
else:
return False, None
if action in end_pos:
return True, None
if action_team != possession_team:
if action in change_team:
return True, action_team
if result == 'success':
if action in success_change:
return True, action_team
return False, possession_team
def extract_possessions(actions):
all_possessions = []
curr_possession = []
team1 = []
team2 = []
possessing_team = actions.loc[0]["team_name"]
team1_name = actions.loc[0]["team_name"]
for i in range(len(actions)):
# Extract possession
action = actions.loc[i]["type_name"]
action_team = actions.loc[i]["team_name"]
if action_team != team1_name:
team2_name = action_team
result = actions.loc[i]["result_name"]
end_pos, possessing_team = change_possession(action, action_team, possessing_team, result)
if end_pos:
all_possessions.append(copy.deepcopy(curr_possession))
curr_possession = []
curr_possession.append(actions.loc[i])
# Identify players
if (len(team1) == 14 and len(team2) == 14):
continue
player = actions.loc[i]["player_name"]
if action_team == team1_name:
if player not in team1:
team1.append(player)
else:
if player not in team2:
team2.append(player)
return all_possessions, team1, team2, team1_name, team2_name
def pos_pass_list(pos):
edges = []
pass_action = ['pass', 'throw_in', 'corner_crossed', 'freekick_crossed', 'cross',
'freekick_short', 'goalkick', 'corner_short']
for i in range(len(pos)):
action = pos[i]
if action["type_name"] in pass_action:
if action["result_name"] == 'success':
passer = action["player_name"]
team = action["team_name"]
j = 1
while i+j < len(pos) and (pos[i+j]["team_name"] != team):
j += 1
try:
passer = action["player_name"]
receiver = pos[i+j]["player_name"]
edges.append((passer, receiver))
except:
continue
return edges
def create_graph(passes):
G = nx.DiGraph((x, y, {'weight': v}) for (x, y), v in Counter(passes).items())
return G
def get_total_links(G):
DV = G.degree(weight='weight')
return sum(deg for n, deg in DV)/2.0
def get_metrics(G):
total_links = get_total_links(G)
density = nx.density(G)
return total_links, density
def compute_average(player_metrics):
average = {}
for player in player_metrics:
if len(player_metrics[player][0]) < 5:
continue
average[player] = [np.mean(player_metrics[player][0]), np.mean(player_metrics[player][1])]
return average
def compute_difference(pos_metrics, team_props, roster):
difference = {}
for player in pos_metrics:
if len(pos_metrics[player][0]) < 150:
continue
try:
team = roster[player]
diff1 = np.mean(pos_metrics[player][0]) - np.mean(team_props[team][0])
diff2 = np.mean(pos_metrics[player][1]) - np.mean(team_props[team][1])
norm1 = diff1 / np.std(team_props[team][0])
norm2 = diff2 / np.std(team_props[team][1])
difference[player] = [norm1, norm2]
except:
continue
return difference
def la_liga_team_placements():
placements = {}
with open('./Contributions/La_Liga_Standings.csv', newline='', encoding='utf-8') as f:
reader = csv.reader(f)
for row in reader:
if row[-1] != "Average":
placements[row[0]] = float(row[-1])
return placements
la_liga_team_placements()
###Output
_____no_output_____
###Markdown
Compute Network Metrics
###Code
players = pd.read_hdf(spadl_h5,"players")
teams = pd.read_hdf(spadl_h5,"teams")
actiontypes = pd.read_hdf(spadl_h5, "actiontypes")
bodyparts = pd.read_hdf(spadl_h5, "bodyparts")
results = pd.read_hdf(spadl_h5, "results")
pos_metrics = {}
not_pos_metrics = {}
team_props = {}
roster = {}
for game in tqdm.tqdm(list(games.itertuples())):
actions = pd.read_hdf(spadl_h5,f"actions/game_{game.game_id}")
actions = (
actions.merge(actiontypes)
.merge(results)
.merge(bodyparts)
.merge(players,"left",on="player_id")
.merge(teams,"left",on="team_id")
.sort_values(["period_id", "time_seconds", "timestamp"])
.reset_index(drop=True)
)
possessions, team1, team2, team1_name, team2_name = extract_possessions(actions)
for player in team1:
roster[player] = team1_name
for player in team2:
roster[player] = team2_name
for pos in possessions:
pos_players, pos_team = players_in_pos(pos)
if pos_team is None:
continue
passes = pos_pass_list(pos)
if len(passes) < 3:
continue
G = create_graph(passes)
total_links, density = get_metrics(G)
if pos_team in team_props:
team_props[pos_team][0].append(total_links)
team_props[pos_team][1].append(density)
else:
team_props[pos_team] = [[total_links], [density]]
if pos_team == team1_name:
for player in team1:
if player in pos_players:
if player in pos_metrics:
pos_metrics[player][0].append(total_links)
pos_metrics[player][1].append(density)
else:
pos_metrics[player] = [[total_links], [density]]
else:
if player in not_pos_metrics:
not_pos_metrics[player][0].append(total_links)
not_pos_metrics[player][1].append(density)
else:
not_pos_metrics[player] = [[total_links], [density]]
else:
for player in team2:
if player in pos_players:
if player in pos_metrics:
pos_metrics[player][0].append(total_links)
pos_metrics[player][1].append(density)
else:
pos_metrics[player] = [[total_links], [density]]
else:
if player in not_pos_metrics:
not_pos_metrics[player][0].append(total_links)
not_pos_metrics[player][1].append(density)
else:
not_pos_metrics[player] = [[total_links], [density]]
team_avg = compute_average(team_props)
difference = compute_difference(pos_metrics, team_props, roster)
placements = la_liga_team_placements()
###Output
_____no_output_____
###Markdown
Total Links
###Code
count = 21
ordered_players = OrderedDict(sorted(difference.items(), key=lambda x: x[1][0], reverse=True))
for player in ordered_players:
team = roster[player]
if count > 0:
print(player + " (" + team + ") : " + str(ordered_players[player][0]))
#count -= 1
###Output
Andreu Fontàs Prat (Celta Vigo) : 2.2155685544675845
Juan Isaac Cuenca López (Granada) : 2.0649445891702247
Adriano Correia Claro (Sevilla) : 1.916933893042736
David Villa Sánchez (Valencia) : 1.7458058380365242
Martín Montoya Torralbo (Real Betis) : 1.5584519390093914
Claudio Andrés Bravo Muñoz (Real Sociedad) : 1.3787242956237495
Ibrahim Afellay (Barcelona) : 0.7701211242311883
Thomas Vermaelen (Barcelona) : 0.6580363651843211
Marc Bartra Aregall (Barcelona) : 0.5054098690548072
Juan Francisco Torres Belén (Osasuna) : 0.480850096392671
Fernando Navarro i Corbacho (Sevilla) : 0.4783256581652563
Luka Modrić (Real Madrid) : 0.4714183881259725
Thiago Alcântara do Nascimento (Barcelona) : 0.437881918297847
Javier Alejandro Mascherano (Barcelona) : 0.43361432038041026
Pedro Eliezer Rodríguez Ledesma (Barcelona) : 0.42477592040704787
Sergi Roberto Carnicer (Barcelona) : 0.41569838302051515
Jordi Alba Ramos (Barcelona) : 0.4121011504469722
Rafael Alcântara do Nascimento (Barcelona) : 0.4052766940555768
Víctor Ruíz Torre (Villarreal) : 0.4052451116406301
Gorka Iraizoz Moreno (Athletic Bilbao) : 0.3947384049114669
Alexandre Dimitri Song-Billong (Barcelona) : 0.389028922748187
Gabriel Fernández Arenas (Real Zaragoza) : 0.38306734442362034
Alexis Alejandro Sánchez Sánchez (Barcelona) : 0.37691455088139475
Maxwell Scherrer Cabelino Andrade (Barcelona) : 0.374732331894703
Gerard Piqué Bernabéu (Barcelona) : 0.371452955995816
Munir El Haddadi Mohamed (Barcelona) : 0.3683782915153768
Cristian Tello Herrera (Barcelona) : 0.3640504119712949
Jorge Resurrección Merodio (Atlético Madrid) : 0.36215935827600787
Kléper Laveran Lima Ferreira (Real Madrid) : 0.3621414673210969
Seydou Kéita (Barcelona) : 0.3512905766881604
Jérémy Mathieu (Barcelona) : 0.34934281680262635
Jeffren Isaac Suárez Bermúdez (Barcelona) : 0.34727380025547183
David Albelda Aliqués (Valencia) : 0.3389062710157323
Marcelo Vieira da Silva Júnior (Real Madrid) : 0.3337601634691116
Sergio Busquets i Burgos (Barcelona) : 0.33279728857842744
Francesc Fàbregas i Soler (Barcelona) : 0.3242332470354592
Daniel Parejo Muñoz (Valencia) : 0.3169073386088711
Ivan Rakitić (Barcelona) : 0.31675766736923855
Neymar da Silva Santos Junior (Barcelona) : 0.316382275309344
Carlos Marchena López (Valencia) : 0.31312427775117746
Tiago Cardoso Mendes (Atlético Madrid) : 0.3089338350937979
Arda Turan (Barcelona) : 0.3029776397766654
Aleix Vidal Parreu (Barcelona) : 0.2982614408511054
Sergio Ramos García (Real Madrid) : 0.2916746050058941
Joaquín Sánchez Rodríguez (Valencia) : 0.28386026979413986
Antonio López Guerrero (Atlético Madrid) : 0.2806753769360472
Rubén Gracia Calmache (Villarreal) : 0.2747309446771786
Daniel Alves da Silva (Barcelona) : 0.2737024765390546
Ander Herrera Agüera (Athletic Bilbao) : 0.26980732845909905
Carles Puyol i Saforcada (Barcelona) : 0.26636001393926484
Lionel Andrés Messi Cuccittini (Barcelona) : 0.2616434709758167
Bruno Soriano Llido (Villarreal) : 0.25889282064043406
Luis Alberto Suárez Díaz (Barcelona) : 0.257654854203333
Luís Miguel Brito Garcia Monteiro (Valencia) : 0.2546706114508145
Eric-Sylvain Bilal Abidal (Barcelona) : 0.24743792173803186
Karim Benzema (Real Madrid) : 0.24401612750497034
Dmytro Chygrynskiy (Barcelona) : 0.24356171300257015
Markel Susaeta Laskurain (Athletic Bilbao) : 0.24129588829336793
Mehdi Lacen (Racing Santander) : 0.24107803515456155
Andrés Iniesta Luján (Barcelona) : 0.23323701310755324
Jesús Navas González (Sevilla) : 0.23068989703780335
Javier Martínez Aginaga (Athletic Bilbao) : 0.22877885692775185
Marcos Antonio Senna da Silva (Villarreal) : 0.22386590679716242
Lilian Thuram (Barcelona) : 0.21458710805023298
Cristiano Ronaldo dos Santos Aveiro (Real Madrid) : 0.21211442503744493
Francisco Puñal Martínez (Osasuna) : 0.2119558478556382
Óscar de Marcos Arana (Athletic Bilbao) : 0.19972275865091474
Víctor Valdés Arribas (Barcelona) : 0.19539178276303706
Xavier Hernández Creus (Barcelona) : 0.19268021243628444
David Josué Jiménez Silva (Valencia) : 0.19069142518393348
José Manuel Pinto Colorado (Barcelona) : 0.18590342698113682
Bojan Krkíc Pérez (Barcelona) : 0.1822953935165069
Zlatan Ibrahimović (Barcelona) : 0.18208848343764555
Gianluca Zambrotta (Barcelona) : 0.18053277540869248
Roberto Trashorras Gayoso (Rayo Vallecano) : 0.16848507544354815
Andoni Iraola Sagarna (Athletic Bilbao) : 0.15110548657937414
Xabier Prieto Argarate (Real Sociedad) : 0.1369054056708884
Gnégnéri Yaya Touré (Barcelona) : 0.10875980113283915
Gabriel Alejandro Milito (Barcelona) : 0.10717316918654661
Éver Maximiliano David Banega (Atlético Madrid) : 0.1012332895919412
Giovanni van Bronckhorst (Barcelona) : 0.0870143852510159
Joan Verdú Fernández (Deportivo La Coruna) : 0.08414788778428312
Sylvio Mendes Campos Junior (Barcelona) : 0.07984434362368587
Frédéric Oumar Kanouté (Sevilla) : 0.07493206009098208
Raúl García Escudero (Atlético Madrid) : 0.07358537718881789
Oleguer Presas Renom (Barcelona) : 0.05030897345497231
Rafael Márquez Álvarez (Barcelona) : 0.04551277632948183
Thiago Motta (Barcelona) : 0.036892740856643964
Eiður Smári Guðjohnsen (Barcelona) : 0.023875522141944588
Juan Francisco García García (Real Zaragoza) : 0.015554131147982122
Samuel Eto"o Fils (Barcelona) : -0.008620552936099802
Thierry Henry (Barcelona) : -0.011040117835384439
José Martín Cáceres Silva (Barcelona) : -0.025257858131625094
Ludovic Giuly (Barcelona) : -0.028364261633817153
Simão Pedro Fonseca Sabrosa (Atlético Madrid) : -0.029191910305635314
Ronaldo de Assis Moreira (Barcelona) : -0.030369570130217162
José Edmílson Gomes de Moraes (Barcelona) : -0.034496234785600696
Anderson Luís de Souza (Barcelona) : -0.05369640501515415
Xabier Alonso Olano (Real Madrid) : -0.05573290678591975
Sergio García De La Fuente (Real Betis) : -0.059007877686288214
Giovani dos Santos Ramírez (Barcelona) : -0.07391667868967745
Juliano Haus Belletti (Barcelona) : -0.10417842921098239
Mark van Bommel (Barcelona) : -0.1916791029036273
###Markdown
Density
###Code
count = 21
ordered_players = OrderedDict(sorted(difference.items(), key=lambda x: x[1][1], reverse=False))
for player in ordered_players:
team = roster[player]
if count > 0:
print(player + " (" + team + ") : " + str(ordered_players[player][1]))
count -= 1
###Output
Claudio Andrés Bravo Muñoz (Real Sociedad) : -0.4664342437352757
Juan Isaac Cuenca López (Granada) : -0.44938667699608775
Adriano Correia Claro (Sevilla) : -0.41209687825727154
Andreu Fontàs Prat (Celta Vigo) : -0.37873677210108664
Víctor Ruíz Torre (Villarreal) : -0.36743426840030435
David Villa Sánchez (Valencia) : -0.3592681274530591
Martín Montoya Torralbo (Real Betis) : -0.3492346343112816
Jeffren Isaac Suárez Bermúdez (Barcelona) : -0.34633625761940884
Gorka Iraizoz Moreno (Athletic Bilbao) : -0.333864183390192
Marc Bartra Aregall (Barcelona) : -0.31305997047820855
Luis Alberto Suárez Díaz (Barcelona) : -0.30736491248025394
Munir El Haddadi Mohamed (Barcelona) : -0.2997958274945992
José Manuel Pinto Colorado (Barcelona) : -0.2968336909473386
Javier Alejandro Mascherano (Barcelona) : -0.2962098370551698
Víctor Valdés Arribas (Barcelona) : -0.2938993799748879
Rafael Alcântara do Nascimento (Barcelona) : -0.2868586412853839
Sergio Ramos García (Real Madrid) : -0.2861570440779701
Alexandre Dimitri Song-Billong (Barcelona) : -0.28110989295122685
Thomas Vermaelen (Barcelona) : -0.2751279234541338
Jérémy Mathieu (Barcelona) : -0.27484639864536375
Neymar da Silva Santos Junior (Barcelona) : -0.27143176287001763
###Markdown
La Liga Team Possession Regression
###Code
from scipy import stats
ordered_teams = OrderedDict(sorted(team_avg.items(), key=lambda x: x[1][0], reverse=True))
metrics = ["Total Links", "Density"]
for i in range(2):
X = []
y = []
for team in ordered_teams:
X.append(ordered_teams[team][i])
y.append(placements[team])
slope, intercept, r_value, p_value, std_err = stats.linregress(X,y)
yPred1 = [intercept + slope * x for x in X]
plt.scatter(X, y,alpha=0.5)
plt.plot(X, yPred1, 'r', label="Linear")
plt.title("L2 Linear Regression: " + metrics[i])
plt.ylabel("Average Placement")
plt.xlabel(metrics[i])
plt.show()
print("slope:", slope)
print("r:", r_value)
print("p:", p_value)
print("std_err", std_err)
print()
###Output
_____no_output_____ |
SAT/.ipynb_checkpoints/sat_Regression-checkpoint.ipynb | ###Markdown
SAT Scaled Scores vs. Raw Scores Analysis For this analysis, SAT tests have been classified as comparatively "easy", "hard", or "normal". "Easy" tests tend to have harsher "curves" (a given raw score converts to a lower scaled score), while "hard" tests have more forgiving "curves", and "normal" tests fall somewhere in between.
###Code
import numpy as np
from datascience import *
import matplotlib.patches as mpatches
import matplotlib.pyplot as plt
%matplotlib inline
# funcs
def to_standard_units(array):
"""Converts values to standrad units"""
array = (array - np.average(array)) / np.std(array)
return array
def calculate_r(x,y):
"""Calculates coefficient of correlation"""
xstd, ystd = to_standard_units(x), to_standard_units(y)
product = xstd*ystd
return np.mean(product)
def linear_reg(x, y):
"""Creates a regression line.
Parameters
----------
x : numpy.ndarray
x-axis values
y : numpy.ndarray
y-axis values
Returns
-------
tuple
x-axis and y-axis values as NumPy Arrays (x,y)
"""
# calculating r
r = calculate_r(x,y)
# regression line
std = np.std(y)
mean = np.mean(y)
y = to_standard_units(x)* r * std + mean
return x, y
###Output
_____no_output_____
###Markdown
Math
###Code
# getting data from files
scores = Table().read_table("./data/mathtrain.csv")
# building arrays for math
x0,y0,x1,y1,x2,y2,x,y = [[] for i in range(8)]
for i in scores.columns[1:]:
for j in range(len(i)-1):
x.append(scores.column(0)[j])
y.append(i[j])
if i[-1] == 0:
x0.append(scores.column(0)[j])
y0.append(i[j])
elif i[-1] == 1:
x1.append(scores.column(0)[j])
y1.append(i[j])
elif i[-1] == 0.5:
x2.append(scores.column(0)[j])
y2.append(i[j])
# plotting regression lines for math
plt.figure(figsize=(15,9))
# arrays
a,b = linear_reg(x0,y0)
c,d = linear_reg(x1,y1)
e,f = linear_reg(x,y)
h,i = np.linspace(0,58,58),np.linspace(200,800,58)
j,k = linear_reg(x2,y2)
# patches
red_patch = mpatches.Patch(color='red', label='hard test')
blue_patch = mpatches.Patch(color='blue', label='easy test')
purple_patch = mpatches.Patch(color='purple', label='normal test')
green_patch = mpatches.Patch(color='green', label='all tests')
orange_patch = mpatches.Patch(color='orange', label='Constant Slope')
plt.legend(handles=[red_patch, blue_patch,purple_patch,green_patch,orange_patch])
# plots
plt.plot(a,b, color="blue")
plt.plot(c,d,color="red")
plt.plot(e,f,color="green")
plt.plot(h,i,color="orange")
plt.plot(j,k, color="purple")
plt.scatter(x0,y0, color="blue", s=8, alpha=0.5)
plt.scatter(x1,y1, color="red", s=8, alpha=0.5)
plt.scatter(x2,y2, color="purple", s=8, alpha = 0.5)
plt.title("SAT Math Curve Regression")
plt.xlabel("Raw Score")
plt.ylabel("Scaled Score")
###Output
_____no_output_____
###Markdown
Reading
###Code
scores = Table().read_table("./data/readingtrain.csv")
# building arrays for reading
x0,y0,x1,y1,x2,y2,x,y = [[] for i in range(8)]
for i in scores.columns[1:]:
for j in range(len(i)-1):
x.append(scores.column(0)[j])
y.append(i[j])
if i[-1] == 0:
x0.append(scores.column(0)[j])
y0.append(i[j])
elif i[-1] == 1:
x1.append(scores.column(0)[j])
y1.append(i[j])
elif i[-1] == 0.5:
x2.append(scores.column(0)[j])
y2.append(i[j])
# plotting regression lines for reading
plt.figure(figsize=(15,9))
# arrays
a,b = linear_reg(x0,y0)
c,d = linear_reg(x1,y1)
e,f = linear_reg(x,y)
h,i = np.linspace(0,52,52),np.linspace(10,40,52)
j,k = linear_reg(x2,y2)
# patches
red_patch = mpatches.Patch(color='red', label='hard test')
blue_patch = mpatches.Patch(color='blue', label='easy test')
purple_patch = mpatches.Patch(color='purple', label='normal test')
green_patch = mpatches.Patch(color='green', label='all tests')
orange_patch = mpatches.Patch(color='orange', label='Constant Slope')
plt.legend(handles=[red_patch, blue_patch,purple_patch,green_patch,orange_patch])
# plots
plt.plot(a,b, color="blue")
plt.plot(c,d,color="red")
plt.plot(e,f,color="green")
plt.plot(h,i,color="orange")
plt.plot(j,k, color="purple")
plt.scatter(x0,y0, color="blue", s=8, alpha=0.5)
plt.scatter(x1,y1, color="red", s=8, alpha=0.5)
plt.scatter(x2,y2, color="purple", s=8, alpha = 0.5)
plt.title("SAT Reading Curve Regression")
plt.xlabel("Raw Score")
plt.ylabel("Scaled Score")
###Output
_____no_output_____
###Markdown
Writing
###Code
scores = Table().read_table("./data/writingtrain.csv")
# building arrays for writing
x0,y0,x1,y1,x2,y2,x,y = [[] for i in range(8)]
for i in scores.columns[1:]:
for j in range(len(i)-1):
x.append(scores.column(0)[j])
y.append(i[j])
if i[-1] == 0:
x0.append(scores.column(0)[j])
y0.append(i[j])
elif i[-1] == 1:
x1.append(scores.column(0)[j])
y1.append(i[j])
elif i[-1] == 0.5:
x2.append(scores.column(0)[j])
y2.append(i[j])
# plotting regression lines for writing
plt.figure(figsize=(15,9))
# arrays
a,b = linear_reg(x0,y0)
c,d = linear_reg(x1,y1)
e,f = linear_reg(x,y)
h,i = np.linspace(0,44,44),np.linspace(10,40,44)
j,k = linear_reg(x2,y2)
# patches
red_patch = mpatches.Patch(color='red', label='hard test')
blue_patch = mpatches.Patch(color='blue', label='easy test')
purple_patch = mpatches.Patch(color='purple', label='normal test')
green_patch = mpatches.Patch(color='green', label='all tests')
orange_patch = mpatches.Patch(color='orange', label='Constant Slope')
plt.legend(handles=[red_patch, blue_patch,purple_patch,green_patch,orange_patch])
# plots
plt.plot(a,b, color="blue")
plt.plot(c,d,color="red")
plt.plot(e,f,color="green")
plt.plot(h,i,color="orange")
plt.plot(j,k, color="purple")
plt.scatter(x0,y0, color="blue", s=8, alpha=0.5)
plt.scatter(x1,y1, color="red", s=8, alpha=0.5)
plt.scatter(x2,y2, color="purple", s=8, alpha = 0.5)
plt.title("SAT Writing Curve Regression")
plt.xlabel("Raw Score")
plt.ylabel("Scaled Score")
###Output
_____no_output_____ |
param_estimator.ipynb | ###Markdown
Brightness
###Code
import glob
import os
import sys

import numpy as np
import tensorflow as tf
from sklearn.linear_model import LinearRegression

from timbral_models import timbral_brightness, tf_timbral_brightness_2
# timbral_util (used below for file_read) is assumed to ship with the same
# (possibly forked) timbral_models package that provides tf_timbral_brightness_2.
from timbral_models import timbral_util

data_dir = "/home/ubuntu/Documents/code/data/"
tt = 128*128
fps = glob.glob(os.path.join(
data_dir, "**/*.wav"), recursive=True)
error = []
nn = len(fps)
grad = True
params = []
yy = []
for i, fname in enumerate(fps):
audio_samples, fs = timbral_util.file_read(
fname, 0, phase_correction=False)
audio_samples_t = tf.convert_to_tensor(
[audio_samples[:tt]], dtype=tf.float32)
audio_samples_t = tf.expand_dims(audio_samples_t, -1)
acm_score = np.array(timbral_brightness(fname, dev_output=False))
yy.append(acm_score)
tf_score = tf_timbral_brightness_2(
audio_samples_t, fs=fs, dev_output=True)
params.append(np.array(tf_score))
print("File {}/{}".format(i+1, nn), end='\r')
sys.stdout.flush()
#error.append(100 * (acm_score - tf_score.numpy()) / acm_score)
print("All done !")
params = np.array(params)[:,:,0]
yy = np.array(yy)
reg = LinearRegression().fit(params, yy)
print("R",reg.score(params, yy))
print("estimated", reg.coef_,reg.intercept_)
print("og", [4.613128018020465, 17.378889309312974, 17.434733750553022])
###Output
coeff 0.9978800316081435 [ 4.5712724 17.355247 ] 17.197018
og [4.613128018020465, 17.378889309312974, 17.434733750553022]
|
content/HW/HW5/.ipynb_checkpoints/cs109b_hw5_rnn_gael-checkpoint.ipynb | ###Markdown
CS109B Data Science 2: Advanced Topics in Data Science Homework 5 - Recurrent Neural Networks**Harvard University****Spring 2021****Instructors**: Mark Glickman, Pavlos Protopapas, and Chris Tanner
###Code
#RUN THIS
import requests
from IPython.core.display import HTML
styles = requests.get(
"https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/"
"content/styles/cs109.css"
).text
HTML(styles)
###Output
_____no_output_____
###Markdown
INSTRUCTIONS- To submit your assignment follow the instructions given in Canvas.- Please restart the kernel and run the entire notebook again before you submit.- Running cells out of order is a common pitfall in Jupyter Notebooks. To make sure your code works restart the kernel and run the whole notebook again before you submit. - We have tried to include all the libraries you may need to do the assignment in the imports cell provided below. **Please use only the libraries provided in those imports.**- Please use .head() when viewing data. Do not submit a notebook that is **excessively long**. - In questions that require code to answer, such as "calculate the $R^2$", do not just output the value from a cell. Write a `print()` function that clearly labels the output, includes a reference to the calculated value, and rounds it to a reasonable number of digits. **Do not hard code values in your printed output**. For example, this is an appropriate print statement:```pythonprint(f"The R^2 is {R:.4f}")```- Your plots should be clearly labeled, including clear labels for the $x$ and $y$ axes as well as a descriptive title ("MSE plot" is NOT a descriptive title; "95% confidence interval of coefficients of polynomial degree 5" on the other hand is descriptive).
###Code
import json
import os
import pickle
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.metrics import f1_score, confusion_matrix
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow.keras import backend
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Input, SimpleRNN, Embedding, Dense, \
TimeDistributed, GRU, Dropout, Bidirectional, \
Conv1D, BatchNormalization
from tensorflow.keras.models import model_from_json
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical
plt.style.use("tableau-colorblind10")
print(f"Using TensorFlow version: {tf.__version__}")
print(f"Using TensorFlow Keras version: {tf.keras.__version__}")
devices = tf.config.experimental.get_visible_devices()
print(f"Devices: {devices}\n")
print(
f"Logical Devices: {tf.config.experimental.list_logical_devices('GPU')}\n"
)
print(f"GPU Available: {tf.config.list_physical_devices('GPU')}\n")
print(f"All Pysical Devices: {tf.config.list_physical_devices()}")
# Set seed for repeatable results
np.random.seed(123)
tf.random.set_seed(456)
###Output
Devices: [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
Logical Devices: [LogicalDevice(name='/device:GPU:0', device_type='GPU')]
GPU Available: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
All Pysical Devices: [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
###Markdown
Notebook Contents- [**PART 1 [ 22 pts ]: Data**](part1) - [Overview](part1intro) - [Questions](part1questions) - [Solutions](part1solutions)- [**PART 2 [ 38 pts ]: Modelling**](part2) - [Overview](part2intro) - [Questions](part2questions) - [Solutions](part2solutions)- [**PART 3 [ 40 pts ]: Analysis**](part3) - [Overview](part3intro) - [Questions](part3questions) - [Solutions](part3solutions) About this Homework The named entity recognition challenge seeks to locate and classify named entities present in unstructured text into predefined categories such as organizations, locations, expressions of times, names of persons, etc. This technique is often used in real use cases such as classifying content for news providers, efficient search algorithms over large corpora, and content-based recommendation systems. NER represents an interesting "many-to-many" problem, and in this homework, it allows us to experiment with recurrent architectures and compare their performances against other models. --> PART 1 [ 22 pts ]: Data[Return to contents](contents) Overview[Return to contents](contents)**First, we will read `data/HW5_data.csv` into a pandas dataframe using the code provided below:**
###Code
# RUN THIS CELL
datapath = "./data/HW5_data.csv"
data = pd.read_csv(datapath, encoding="latin1")
data = data.fillna(method="ffill")
data.head(15)
###Output
_____no_output_____
###Markdown
**As you can see above,** we have a dataset with sentences (as indicated by the `Sentence ` column), each composed of words (shown in the `Word` column) with part-of-speech tagging (shown in the `POS` tagging column) and inside–outside–beginning (IOB) named entity tags attached (shown in the `Tag` column). **`POS` will NOT be used for this homework. We will predict `Tag` using only the words themselves.****Essential info about entities:*** geo = Geographical Entity* org = Organization* per = Person* gpe = Geopolitical Entity* tim = Time indicator* art = Artifact* eve = Event* nat = Natural Phenomenon**IOB prefix:*** B: beginning of named entity* I: inside of named entity* O: outside of named entity PART 1: Questions [Return to contents](contents)**[1.1:](s11)** Create a list of unique words found in the `Word` column and sort it in alphabetic order (do not modify the word capitalization, nor remove any numeric or special characters). Then append the special word `"ENDPAD"` to the end of the list, and assign it to the variable `words`. Store the length of this list as `n_words`. **Print your results for `n_words`****[1.2:](s12)** Create a list of unique tags and sort it in alphabetic order. Then append the special word `"PAD"` to the end of the list, and assign it to the variable `tags`. Store the length of this list as `n_tags`. **Print your results for `n_tags`****[1.3:](s13)** Create a list of lists where each sentence in the data is a list of `(word, tag)` tuples. Here is an example of how the first sentence in the list should look:```[('Thousands', 'O'), ('of', 'O'), ('demonstrators', 'O'), ('have', 'O'),('marched', 'O'), ('through', 'O'), ('London', 'B-geo'), ('to', 'O'),('protest', 'O'), ('the', 'O'), ('war', 'O'), ('in', 'O'),('Iraq', 'B-geo'), ('and', 'O'), ('demand', 'O'), ('the', 'O'), ('withdrawal', 'O'), ('of', 'O'), ('British', 'B-gpe'), ('troops', 'O'),('from', 'O'), ('that', 'O'), ('country', 'O'), ('.', 'O')]```**[1.4:](s14)** Find out the number of words in the longest sentence, and store it to variable `max_len`. **Print your results for `max_len`.****[1.5:](s15)** It is now time to convert the sentences data in a suitable format for our RNN training and evaluation procedures. Create a `word2idx` dictionary that maps distinct words from the dataset into distinct integers. Also create an `idx2word` dictionary.**[1.6:](s16)** Prepare the predictors matrix `X` as a list of lists, where each inner list is a sequence of words mapped into integers according to the `word2idx` dictionary. **[1.7:](s17)** Apply the Keras `pad_sequences` function to create standard length observations. You should retrieve a matrix with all padded sentences and length equal to the `max_len` previously computed. The dimensionality of your resulting `X` matrix should therefore be equal to `( of sentences, max_len)`. Run the provided cell to print your results. 
Your `X[i]` now should be something similar to this:``` [ 8193 27727 31033 33289 22577 33464 23723 16665 33464 31142 31319 28267 27700 33246 28646 16052 21 16915 17349 7924 32879 32985 18238 23555 24 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178]``` **[1.8:](s18)** Create a `tag2idx` dictionary mapping distinct named entity tags from the dataset into distinct integers. Also create a `idx2tag` dictionary.**[1.9:](s19)** Prepare the targets matrix `Y` as a list of lists, where each inner list is a sequence of tags mapped into integers according to the `tag2idx` dictionary.**[1.10:](s110)** Apply the Keras `pad_sequences` function to standardize the targets. Inject the `PAD` tag integer value for the padding words. Your result should be a `Y` matrix with all padded sentences' tags and length equal to the `max_len` previously computed. **[1.11:](s111)** Use the Keras `to_categorical` function to one-hot-encode the tags. The dimensionality of your resulting `Y` matrix should be equal to `( of sentences, max_len, n_tags)`. Run the provided cell to print your results.**[1.12:](s112)** Split the dataset into train and test sets with a 10% test split using `109` for your random state. Assign your training data to the variables `X_tr` and `y_tr` and your test data to the variables `X_te` and `y_te`. PART 1: Solutions[Return to contents](contents) **[1.1:](q11)** Create a list of unique words found in the `Word` column and sort it in alphabetic order (do not modify the word capitalization, nor remove any numeric or special characters). Then append the special word `"ENDPAD"` to the end of the list, and assign it to the variable `words`. Store the length of this list as `n_words`. **Print your results for `n_words`**
###Code
# your code here
words = sorted(np.unique(data['Word']))
words.append('ENDPAD')
n_words = len(words)
# Run this cell to show your results for n_words
print("There are {:,} unique words found in our dataset.".format(n_words))
###Output
There are 35,179 unique words found in our dataset.
###Markdown
**[1.2:](q12)** Create a list of unique tags and sort it in alphabetic order. Then append the special word `"PAD"` to the end of the list, and assign it to the variable `tags`. Store the length of this list as `n_tags`. **Print your results for `n_tags`**
###Code
# your code here
tags = list(sorted(np.unique(data['Tag'])))
tags.append("PAD")
n_tags = len(tags)
# Run this cell to show your results for n_tags
print("There are {} unique tags found in our dataset.".format(n_tags))
###Output
There are 18 unique tags found in our dataset.
###Markdown
**[1.3:](q13)** Create a list of lists where each sentence in the data is a list of `(word, tag)` tuples. Here is an example of how the first sentence in the list should look:```[('Thousands', 'O'), ('of', 'O'), ('demonstrators', 'O'), ('have', 'O'),('marched', 'O'), ('through', 'O'), ('London', 'B-geo'), ('to', 'O'),('protest', 'O'), ('the', 'O'), ('war', 'O'), ('in', 'O'),('Iraq', 'B-geo'), ('and', 'O'), ('demand', 'O'), ('the', 'O'), ('withdrawal', 'O'), ('of', 'O'), ('British', 'B-gpe'), ('troops', 'O'),('from', 'O'), ('that', 'O'), ('country', 'O'), ('.', 'O')]```
###Code
# your code here
sentences = []
data['Sentence number'] = data['Sentence #'].apply(lambda x: int(x.split(':')[1]))
grouping = data.groupby('Sentence number')
grouped_word = grouping['Word']
grouped_tag = grouping['Tag']
for word, tag in zip(grouped_word, grouped_tag):
sentences.append([(w, t) for (w, t) in zip(word[1].values, tag[1].values)])
###Output
_____no_output_____
###Markdown
**[1.4:](q14)** Find out the number of words in the longest sentence, and store it to variable `max_len`. **Print your results for `max_len`.**
###Code
# your code here
counts = data.groupby('Sentence #').count()['Word']
max_len = counts.max()
# Run this cell to show your results for max_len
print("The number of words in our longest sentence is: {}".format(max_len))
###Output
The number of words in our longest sentence is: 104
###Markdown
**[1.5:](q15)** It is now time to convert the sentences data in a suitable format for our RNN training and evaluation procedures. Create a `word2idx` dictionary that maps distinct words from the dataset into distinct integers. Also create an `idx2word` dictionary.
###Code
# your code here
word2idx = {word:i for i, word in enumerate(words)}
idx2word = {i:word for i, word in enumerate(words)}
###Output
_____no_output_____
###Markdown
**[1.6:](q16)** Prepare the predictors matrix `X` as a list of lists, where each inner list is a sequence of words mapped into integers according to the `word2idx` dictionary.
###Code
# your code here
X = []
for sentence in sentences:
X.append([word2idx[word[0]] for word in sentence])
###Output
_____no_output_____
###Markdown
**[1.7:](q17)** Apply the Keras `pad_sequences` function to create standard length observations. You should retrieve a matrix with all padded sentences and length equal to the `max_len` previously computed. The dimensionality of your resulting `X` matrix should therefore be equal to `( of sentences, max_len)`. Run the provided cell to print your results. Your `X[i]` now should be something similar to this:``` [ 8193 27727 31033 33289 22577 33464 23723 16665 33464 31142 31319 28267 27700 33246 28646 16052 21 16915 17349 7924 32879 32985 18238 23555 24 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178]```
###Code
# your code here
X = pad_sequences(X, maxlen=max_len, value=word2idx['ENDPAD'], padding='post')
# Run this cell to show your results
print("The index of word 'Harvard' is: {}\n".format(word2idx["Harvard"]))
print("Sentence 1: {}\n".format(X[1]))
print("The shape of the X array is: {}".format(X.shape))
###Output
The index of word 'Harvard' is: 7506
Sentence 1: [ 6283 27700 31967 25619 24853 33246 19981 25517 33246 29399 34878 19044
18095 34971 32712 31830 17742 1 4114 11464 11631 14985 1 17364
1 14484 33246 3881 24 1 35178 35178 35178 35178 35178 35178
35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178
35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178
35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178
35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178
35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178 35178
35178 35178 35178 35178 35178 35178 35178 35178]
The shape of the X array is: (47959, 104)
###Markdown
**[1.8:](q18)** Create a `tag2idx` dictionary mapping distinct named entity tags from the dataset into distinct integers. Also create a `idx2tag` dictionary.
###Code
# your code here
tag2idx = {tag:i for i, tag in enumerate(tags)}
idx2tag = {i:tag for i, tag in enumerate(tags)}
###Output
_____no_output_____
###Markdown
**[1.9:](q19)** Prepare the targets matrix `Y` as a list of lists, where each inner list is a sequence of tags mapped into integers according to the `tag2idx` dictionary.
###Code
# your code here
Y = []
for sentence in sentences:
Y.append([tag2idx[word[1]] for word in sentence])
###Output
_____no_output_____
###Markdown
**[1.10:](q110)** Apply the Keras `pad_sequences` function to standardize the targets. Inject the `PAD` tag integer value for the padding words. Your result should be a `Y` matrix with all padded sentences' tags and length equal to the `max_len` previously computed.
###Code
# your code here
Y = pad_sequences(Y, max_len, value = tag2idx['PAD'], padding='post')
print("The shape of the Y array is: {}".format(Y.shape))
###Output
The shape of the Y array is: (47959, 104)
###Markdown
**[1.11:](q111)** Use the Keras `to_categorical` function to one-hot-encode the tags. The dimensionality of your resulting `Y` matrix should be equal to `( of sentences, max_len, n_tags)`. Run the provided cell to print your results.
###Code
# your code here
Y = to_categorical(Y)
# Run this cell to show your results
print("The index of tag 'B-gpe' is: {}\n".format(tag2idx["B-gpe"]))
print("The tag of the last word in Sentence 1: {}\n".format(Y[0][-1]))
print("The shape of the Y array is: {}".format(Y.shape))
###Output
The index of tag 'B-gpe' is: 3
The tag of the last word in Sentence 1: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
The shape of the Y array is: (47959, 104, 18)
###Markdown
**[1.12:](q112)** Split the dataset into train and test sets with a 10% test split using `109` for your random state. Assign your training data to the variables `X_tr` and `y_tr` and your test data to the variables `X_te` and `y_te`.
###Code
# your code here
X_tr, X_te, y_tr, y_te = train_test_split(X, Y, test_size=0.1, random_state=109)
# Run this cell to show your results
print(
"The shapes of the resulting train-test splits are:\n\n"
"\tX_train\t{}\n\ty_train\t{}\n\n\tX_test\t{}\n\ty_test\t{}\n"
"".format(X_tr.shape, y_tr.shape, X_te.shape, y_te.shape)
)
###Output
The shapes of the resulting train-test splits are:
X_train (43163, 104)
y_train (43163, 104, 18)
X_test (4796, 104)
y_test (4796, 104, 18)
###Markdown
--> PART 2 [ 38 pts ]: Modelling[Return to contents](contents) Overview[Return to contents](contents) **After preparing the train and test sets, we are ready to build five models:**1. Frequency-based (Baseline) 2. Feed forward neural network (FNN)3. Recurrent neural network (RNN)4. Gated recurrent neural network (GRU)5. Bidirectional gated recurrent neural network (Bidirectional GRU)More details are given about the desired architectures in each model's section in [PART 2: Questions](part2questions) below. The input and output dimensions (i.e. the shapes of the inputs and outputs) will be the same for all models:- input: `[ of sentences, max_len]`- output: `[ of sentences, max_len, n_tags]`Follow the information in each model's section to set up the architecture of the model. And, after training each model, use the given `store_keras_model` function to store the weights and architectures in the `./models` path for later testing. A `load_keras_model` function is also provided to you.A further `plot_training_history` helper function is given to illustrate the training history.**Here are the provided helper functions described above:**
###Code
# RUN THIS CELL
# Store model
def store_keras_model(model, model_name):
"""Save model and weights as model_name in models folder
:param model: trained Keras model object
:param model_name: str, name under which to save the model
"""
# serialize model to JSON
model_json = model.to_json()
if not os.path.exists("models"):
os.mkdir("models")
with open("./models/{}.json".format(model_name), "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("./models/{}.h5".format(model_name))
print("Saved model to disk")
# Load model
def load_keras_model(model_name):
"""Load model_name from models folder in working directory
:param model_name: str, name of saved model
:return: Keras model object loaded from disk
"""
# Load json and create model
json_file = open("./models/{}.json".format(model_name), "r")
loaded_model_json = json_file.read()
json_file.close()
model = tf.keras.models.model_from_json(loaded_model_json)
# Load weights into new model
model.load_weights("./models/{}.h5".format(model_name))
return model
# Plot history
def plot_training_history(
history, model_title, loss_name="Categorical Cross-entropy"
):
"""Plot training and validation loss over all trained epochs
:param history: Keras model training history object
:param model_title: str, descriptive model name for use in plot title
:param loss_name: str, name of loss type used in model for labeling
y-axis (default="Categorical Cross-entropy")
"""
loss = history.history["loss"]
val_loss = history.history["val_loss"]
epochs = range(1,len(loss)+1)
fig, ax = plt.subplots(figsize=(7,4))
plt.plot(epochs, loss, "k--", label="Training loss")
plt.plot(epochs, val_loss, "ko-", label="Validation loss")
plt.title(
"{}\nTraining and Validation Loss".format(model_title), fontsize=14
)
plt.xlabel("Epoch", fontsize=12)
plt.ylabel("{}".format(loss_name), fontsize=12)
if len(loss)<31:
plt.xticks(range(1,len(loss)+1))
plt.grid(":", alpha=0.4)
plt.legend(fontsize=11)
plt.tight_layout()
plt.show();
###Output
_____no_output_____
###Markdown
PART 2: Questions Predict the named entity tag of a word to be its most frequently-seen tag in the training set.[Return to contents](contents)**[2.1:](s21)** **MODEL 1: Baseline**Predict the named entity tag of a word to be its most frequently-seen tag in the training set.- For example, let's say the word "Apple" appears 10 times in the training set and 7 times it was tagged as "Corporate" and 3 times it was tagged as "Fruit". If we encounter the word "Apple" in the test set, our Baseline model should predict it as "Corporate".**Create an np.array `baseline` of length [n_words]** where the $i$-th element `baseline[i]` is the index of the most commonly seen named entity tag of word $i$ summarized from the training set (e.g. `[16, 16, 16, ..., 0, 16, 16]`). For words that aren't present in the training set, use the default tag `"O"`.**[2.2:](s22)** **MODEL 2: Feed Forward Neural Network**This model is provided for you. Please pay attention to the architecture of this neural network, especially the input and output dimensionalities and the Embedding layer.- **[2.2.a:](s22a)** Explain what the Embedding layer is and why we need it here.- **[2.2.b:](s22b)** Explain why the Param of the Embedding layer is 1,758,950 (as shown in `print(model.summary())`).- **[2.2.c:](s22c)** In addition to our models' final results, we often want to inspect intermediate results. For this, we can get outputs from a hidden layer and reduce the dimensionality of those outputs using PCA so that we can visualize them in 2-dimensional space. - Using the code provided to you in this question, visualize outputs from the Embedding layer in your feed-forward neural network, with one subplot for **B-tags** and one subplot for **I-tags**. (Please note that you should be able to generate these plots by simply running the code provided to you.) - Comment on the patterns you observe in the plotted output.**[2.3:](s23)** **MODEL 3: RNN**Set up a simple RNN model by stacking the following layers in sequence: - an Input "layer" - a simple Embedding layer transforming integer words into vectors - a Dropout layer to regularize the model - a SimpleRNN layer - a TimeDistributed layer with an inner Dense layer which output dimensionality is equal to `n_tag` For hyperparameters in this model as well as the subsequent models in 2.4 and 2.5, please use those provided to you in MODEL 2.- **[2.3.a:](s23a)** Define, compile, and train an RNN model. Use the provided code to save the model and plot the training history.- **[2.3.b:](s23b)** Using the functions provided to you [in 2.2.c](s22c), visualize outputs from the SimpleRNN layer, one subplot for **B-tags** and one subplot for **I-tags**. Comment on the patterns you observed.**[2.4:](s24)** **MODEL 4: GRU**- **[2.4.a:](s24a)** Briefly explain what a GRU is and how it is different from a simple RNN.- **[2.4.b:](s24b)** Define, compile, and train a GRU architecture by replacing the SimpleRNN cell with a GRU one. Use the provided code to save the model and plot the training history.- **[2.4.c:](s24c)** Using the functions provided to you [in 2.2.c](s22c), visualize outputs from the GRU layer, one subplot for **B-tags** and one subplot for **I-tags**. Comment on the patterns you observed.**[2.5:](s25)** **MODEL 5: Bidirectional GRU**- **[2.5.a:](s25a)** Explain how a Bidirectional GRU differs from the GRU model above.- **[2.5.b:](s25b)** Define, compile, and train a Bidirectional GRU by wrapping your GRU layer in a Bidirectional one. 
Use the provided code to save the model and plot the training history.- **[2.5.c:](s25c)** Using the functions provided to you [in 2.2.c](s22c), visualize outputs from the Bidirectional GRU layer, one subplot for **B-tags** and one subplot for **I-tags**. Comment on the patterns you observed. PART 2: Solutions[Return to contents](contents) **[2.1:](q21)** **MODEL 1: Baseline**Predict the named entity tag of a word to be its most frequently-seen tag in the training set.- For example, let's say the word "Apple" appears 10 times in the training set and 7 times it was tagged as "Corporate" and 3 times it was tagged as "Fruit". If we encounter the word "Apple" in the test set, our Baseline model should predict it as "Corporate".**Create an np.array `baseline` of length [n_words]** where the $i$-th element `baseline[i]` is the index of the most commonly seen named entity tag of word $i$ summarized from the training set (e.g. `[16, 16, 16, ..., 0, 16, 16]`). For words that aren't present in the training set, use the default tag `"O"`.
###Code
# your code here
from collections import Counter
words_to_tags = {}
for sentence in sentences:
for word in sentence:
if words_to_tags.get(word[0]) is None:
words_to_tags[word[0]] = [word[1]]
else:
words_to_tags[word[0]].append(word[1])
for word in words_to_tags:
counter = Counter(words_to_tags[word])
    most_frequent_tag = counter.most_common(1)[0][0]  # most frequently seen tag, not just the first one encountered
words_to_tags[word] = most_frequent_tag
baseline = []
for index in range(n_words):
word = idx2word.get(index)
most_frequent_tag_index = words_to_tags.get(word, None)
if most_frequent_tag_index is None:
baseline.append(tag2idx.get('O'))
else:
baseline.append(int(tag2idx.get(most_frequent_tag_index)))
baseline = np.array(baseline)
# Run this cell to show your results
print("The baseline array is shape: {}".format(baseline.shape))
print(
"The training predictions array is shape: {}\n".format(baseline[X_tr].shape)
)
print("Sentence:\n {}\n".format([idx2word[w] for w in X_tr[0]]))
print("Predicted Tags:\n {}".format([idx2tag[int(i)] for i in baseline[X_tr[0]]]))
###Output
The baseline array is shape: (35179,)
The training predictions array is shape: (43163, 104)
Sentence:
['Mr.', 'Abbas', 'heads', 'the', 'Fatah', 'faction', ',', 'which', 'is', 'a', 'fierce', 'rival', 'of', 'Hamas', '.', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD', 'ENDPAD']
Predicted Tags:
['B-per', 'I-per', 'O', 'O', 'B-org', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-org', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
###Markdown
**[2.2:](q22)** **MODEL 2: Feed Forward Neural Network**This model is provided for you. Please pay attention to the architecture of this neural network, especially the input and output dimensionalities and the Embedding layer. Use these hyperparameters for all NN models
###Code
n_units = 100
drop_rate = .1
dim_embed = 50
optimizer = "rmsprop"
loss = "categorical_crossentropy"
metrics = ["accuracy"]
batch_size = 32
epochs = 10
validation_split = 0.1
verbose = 1
# Define model
model_title = "Feed-forward Neural Network (FFNN)"
model = tf.keras.Sequential()
model.add(
tf.keras.layers.Embedding(
input_dim=n_words, output_dim=dim_embed, input_length=max_len
)
)
model.add(tf.keras.layers.Dropout(drop_rate))
model.add(tf.keras.layers.Dense(n_tags, activation="softmax"))
# Compile model
model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
print(model.summary())
%%time
# Train model
history = model.fit(X_tr, y_tr, batch_size=batch_size, epochs=epochs,
validation_split=validation_split, verbose=verbose)
store_keras_model(model, "model_FFNN")
plot_training_history(history, model_title)
###Output
_____no_output_____
###Markdown
**[2.2.a:](q22a)** Explain what the Embedding layer is and why we need it here. **INTERPRETATION:** An embedding layer is a 'representational layer': it projects the inputs into a lower-dimensional space in order to obtain a better representation of them. *Why is it better?* Our initial input is sparse, padded, and not normalized (integer indices between 1 and roughly 35,000). Projecting this input into a lower-dimensional space (here $50$) has several advantages:- We get representations in a lower-dimensional space, which lets us use Dense layers without needing tens of millions of parameters. Indeed, if we fed one-hot encoded words into a dense layer with 500 hidden units before the output layer, the number of parameters would be roughly $35000 \times 500 + 500 \times 18$ (leaving aside biases), which is far greater.- We get **dense representations**: our initial representations were quite sparse because of the padding, while the Embedding layer produces dense vectors.- We obtain a representation for the input words that could also serve other downstream tasks.- Projecting the inputs into a lower-dimensional space tends to improve model performance (provided the embedding dimension is well chosen). **[2.2.b:](q22b)** Explain why the Param of the Embedding layer is 1,758,950 (as shown in `print(model.summary())`). **INTERPRETATION:** The parameters of the embedding layer are the weights of the projection matrix (equivalently, a lookup table). The layer projects the input into a subspace: its action is $H = U^T X$, i.e. a row lookup in $U$, where the shape of $U$ is $(n_{words}, dim_{embed}) = (35179, 50)$, giving $35179 \times 50 = 1{,}758{,}950$ parameters.
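As a purely illustrative sketch of this lookup-table view (random stand-in weights and hand-picked indices; not part of the graded solution), "embedding" a padded sentence is just fancy row indexing:

```python
import numpy as np

U = np.random.randn(35179, 50)        # stands in for the learned embedding weights (n_words x dim_embed)
sentence = np.full(104, 35178)        # a max_len sentence filled with the ENDPAD index
sentence[:3] = [8193, 27727, 31033]   # a few real word indices, as in the padded X shown earlier

H = U[sentence]                       # the embedding lookup
print(H.shape)                        # (104, 50)
print(U.size)                         # 1758950 trainable weights, matching model.summary()
```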
###Code
n_words*dim_embed
###Output
_____no_output_____
###Markdown
**[2.2.c:](q22c)** In addition to our models' final results, we often want to inspect intermediate results. For this, we can get outputs from a hidden layer and reduce the dimensionality of those outputs using PCA so that we can visualize them in 2-dimensional space. - Using the code provided to you in this question, visualize outputs from the Embedding layer in your feed-forward neural network, with one subplot for **B-tags** and one subplot for **I-tags**. (Please note that you should be able to generate these plots by simply running the code provided to you.) - Comment on the patterns you observe in the plotted output.
###Code
def get_hidden_output_PCA(
model, X_test, layer_index, out_dimension, model_title
):
"""Generate hidden layer output PCA transformation
Captures the output of a specific layer in a Keras model and then
returns a transformed PCA object. Also, prints the variance explained
by the first two principal components.
:param model: Keras trained model object
:param X_test: np.array, X test data
:param layer_index: int, index of model layer for which to inspect output
:param out_dimension: int, output embedding dimension of chosen layer
:param model_title: str, descriptive model name for use in printed output
:return: Fitted and transformed sklearn PCA model object
"""
output = tf.keras.backend.function(
[model.layers[0].input],[model.layers[layer_index].output]
)
hidden_feature = np.array(output([X_test]))
hidden_feature = hidden_feature.reshape(-1, out_dimension)
pca = PCA(n_components=2)
pca_result = pca.fit_transform(hidden_feature)
print(
"{}\nHidden features' variance explained by PCA first 2 "
"components: {:.4f}\n".format(
model_title, np.sum(pca.explained_variance_ratio_)
)
)
return pca_result
def visualize_B_I(pca_result, y_test, model_title):
"""Visualize the first 2 PCA dimensions, labeled by tag
Constructs two subplots showing the first two principal components of
the `B-tags` and `I-tags` in the transformed PCA object provided
:param pca_result: sklearn PCA object
:param y_test: np.array, y test data
:param model_title: str, descriptive model name for use in plot title
"""
category = np.argmax(y_test.reshape(-1,18), axis=1)
fig, ax = plt.subplots(1,2, sharey=True, sharex=True, figsize=(11, 6.5))
titles=["B-tags", "I-tags"]
for i in range(2):
for cat in range(8*i,8*(i+1)):
indices = np.where(category==cat)[0]
ax[i].scatter(
pca_result[indices,0],
pca_result[indices, 1],
label=idx2tag[cat],
s=10,
alpha=0.6,
)
ax[i].legend(markerscale=2, facecolor="w", framealpha=1, fontsize=11)
ax[i].grid(":", alpha=0.4)
ax[i].set_xlabel("First principal component", fontsize=12)
ax[i].set_title(titles[i], fontsize=14)
ax[0].set_ylabel("Second principal component", fontsize=12)
fig.suptitle(
"Visualization of hidden features on first two PCA components:\n"
"{}".format(model_title),
fontsize=16,
y=1,
)
plt.tight_layout()
plt.show()
# Run this cell to show your results
FFNN = load_keras_model("model_FFNN")
h_pca = get_hidden_output_PCA(FFNN, X_te, 1, 50, model_title)
visualize_B_I(h_pca, y_te, model_title)
###Output
Feed-forward Neural Network (FFNN)
Hidden features' variance explained by PCA first 2 components: 0.9345
###Markdown
**INTERPRETATION:** We can see that, within each tag family ($B$ or $I$), the different tags are quite well separated. However, the corresponding $B$ and $I$ labels occupy similar regions in the embedding space, which suggests that the network relies on the combination of these features to make a prediction. **[2.3:](q23)** **MODEL 3: RNN**Set up a simple RNN model by stacking the following layers in sequence: - an Input "layer" - a simple Embedding layer transforming integer words into vectors - a Dropout layer to regularize the model - a SimpleRNN layer - a TimeDistributed layer with an inner Dense layer whose output dimensionality is equal to `n_tags` For hyperparameters in this model as well as the subsequent models in 2.4 and 2.5, please use those provided to you in MODEL 2.- **[2.3.a:](q23a)** Define, compile, and train an RNN model. Use the provided code to save the model and plot the training history.
###Code
# your code here
model = tf.keras.Sequential()
model.add(tf.keras.layers.Embedding(input_dim=n_words, output_dim=dim_embed, input_length=max_len))
model.add(tf.keras.layers.Dropout(drop_rate))
model.add(SimpleRNN(units=100, return_sequences=True))
model.add(tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_tags, activation="softmax")))
# Compile model
model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
%%time
# Train model
history = model.fit(X_tr, y_tr, batch_size=batch_size, epochs=epochs,
validation_split=validation_split, verbose=verbose)
# Run this cell to save your model
store_keras_model(model, "model_RNN")
# Run this cell to show your results
print(model.summary())
# Run this cell to show your results
plot_training_history(history, "model_RNN")
###Output
_____no_output_____
###Markdown
**[2.3.b:](q23b)** Using the functions provided to you [in 2.2.c](s22c), visualize outputs from the SimpleRNN layer, one subplot for **B-tags** and one subplot for **I-tags**. Comment on the patterns you observed.
###Code
# your code here
RNN = load_keras_model("model_RNN")
h_pca = get_hidden_output_PCA(RNN, X_te, 2, 100, "Recurrent Neural Network (RNN)")  # layer 2 = the SimpleRNN layer (100 units)
visualize_B_I(h_pca, y_te, "Recurrent Neural Network (RNN)")
###Output
Recurrent Neural Network (RNN)
Hidden features' variance explained by PCA first 2 components: 0.9852
###Markdown
**INTERPRETATION:** Unlike for the FFNN, we can see that within $B$ and $I$ the tags are less well separated and that a good fraction of them overlap. The hidden features' variance explained is similar to what we obtained for our previous network, the FFNN. Regarding the corresponding tags across $B$ and $I$, most of them are similarly distributed, but one B-tag is more present on the left of the PCA plot. We can still conclude that our model mostly relies on the combination of these features to make its predictions. **[2.4:](q24)** **MODEL 4: GRU**- **[2.4.a:](q24a)** Briefly explain what a GRU is and how it is different from a simple RNN. **INTERPRETATION:** The GRU was motivated by the observation that simple RNNs struggle to capture long-term dependencies in long sequences; the reason is numerical instability when backpropagating information through many time steps. A GRU lets the network select what it wants to retain or forget thanks to a gating mechanism. It uses two gates: a reset gate and an update gate. The reset gate is used to produce a new candidate memory, and the update gate produces a score controlling how much of the previous memory is retained (a standard formulation of these gate equations is sketched just below). What is crucially different from simple RNNs is the update of the hidden state: a leaky, gated update lets information flow more easily across time steps and therefore mitigates the numerical instabilities when propagating information. **[2.4.b:](q24b)** Define, compile, and train a GRU architecture by replacing the SimpleRNN cell with a GRU one. Use the provided code to save the model and plot the training history.
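For concreteness, a minimal sketch of the standard GRU update equations (biases omitted; the exact convention for which of $z_t$ or $1-z_t$ multiplies the old state differs between references and the Keras implementation):

$$ z_t = \sigma(W_z x_t + U_z h_{t-1}), \qquad r_t = \sigma(W_r x_t + U_r h_{t-1}) $$

$$ \tilde{h}_t = \tanh\big(W_h x_t + U_h (r_t \odot h_{t-1})\big), \qquad h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t $$

The reset gate $r_t$ controls how much of the previous state enters the candidate memory $\tilde{h}_t$, while the update gate $z_t$ interpolates between keeping the old state and adopting the candidate; this is the leaky, gated update described above.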
###Code
# your code here
model = tf.keras.Sequential()
model.add(tf.keras.layers.Embedding(input_dim=n_words, output_dim=dim_embed, input_length=max_len))
model.add(tf.keras.layers.Dropout(drop_rate))
model.add(tf.keras.layers.GRU(100, return_sequences=True))
model.add(tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_tags, activation="softmax")))
# Compile model
model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
%%time
# Train model
history = model.fit(X_tr, y_tr, batch_size=batch_size, epochs=epochs,
validation_split=validation_split, verbose=verbose)
# Run this cell to save your model
store_keras_model(model, "model_GRU")
# Run this cell to show your results
print(model.summary())
# Run this cell to show your results
plot_training_history(history, "model_GRU")
###Output
_____no_output_____
###Markdown
**[2.4.c:](q24c)** Using the functions provided to you [in 2.2.c](s22c), visualize outputs from the GRU layer, one subplot for **B-tags** and one subplot for **I-tags**. Comment on the patterns you observed.
###Code
# your code here
GRU = load_keras_model("model_GRU")
h_pca = get_hidden_output_PCA(GRU, X_te, 1, 50, "Gated Recurrent Unit (GRU)")
visualize_B_I(h_pca, y_te, "Gated Recurrent Unit (GRU)")
###Output
Gated Recurrent Unit (GRU)
Hidden features' variance explained by PCA first 2 components: 0.9787
###Markdown
**INTERPRETATION:** This PCA plot is very similar to the one previously plotted for the RNN. Tags within $B$ and $I$ are not as well separated as in the FNN's PCA plot; however, the hidden features' variance is similar, and again one tag appears more on the $B$-tags' PCA plot than on the $I$-tags' one. We can draw the same interpretation as for the previous network. **[2.5:](q25)** **MODEL 5: Bidirectional GRU**- **[2.5.a:](q25a)** Explain how a Bidirectional GRU differs from the GRU model above. **INTERPRETATION:** In a GRU model, we still process the inputs in a sequential way, meaning that $h_t = f(x_t, h_{t-1})$ and then $y_t = g(x_t, h_{t-1})$. Bidirectional structures leverage the fact that one would also want to use the information coming after the current position in the sequence. Therefore, in the bidirectional GRU, we use two GRUs: one forward (where at time $t$ the hidden state is a function of $h_{1:t-1}$ and $x_t$) and one backward (where at time $t$ the hidden state is a function of $h_{t+1:T}$ and $x_t$). Lastly, the resulting hidden state of the bidirectional GRU is the concatenation of the hidden states from both networks. **[2.5.b:](q25b)** Define, compile, and train a Bidirectional GRU by wrapping your GRU layer in a Bidirectional one. Use the provided code to save the model and plot the training history.
###Code
# your code here
model = tf.keras.Sequential()
model.add(tf.keras.layers.Embedding(input_dim=n_words, output_dim=dim_embed, input_length=max_len))
model.add(tf.keras.layers.Dropout(drop_rate))
model.add(tf.keras.layers.Bidirectional(layer = tf.keras.layers.GRU(units=100, return_sequences=True))) # concat by default
model.add(tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_tags, activation="softmax")))
# Compile model
model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
%%time
# Train model
history = model.fit(X_tr, y_tr, batch_size=batch_size, epochs=epochs,
validation_split=validation_split, verbose=verbose)
# Run this cell to save your model
store_keras_model(model, "model_BiGRU")
# Run this cell to show your results
print(model.summary())
# Run this cell to show your results
plot_training_history(history, "model_BiGRU")
###Output
_____no_output_____
###Markdown
**[2.5.c:](q25c)** Using the functions provided to you [in 2.2.c](s22c), visualize outputs from the Bidirectional GRU layer, one subplot for **B-tags** and one subplot for **I-tags**. Comment on the patterns you observed.
###Code
# your code here
BiGRU = load_keras_model("model_BiGRU")
h_pca = get_hidden_output_PCA(BiGRU, X_te, 1, 50, "Bidirectional Gated Recurrent Unit (BiGRU)")
visualize_B_I(h_pca, y_te, "Bidirectional Gated Recurrent Unit (BiGRU)")
###Output
Bidirectional Gated Recurrent Unit (BiGRU)
Hidden features' variance explained by PCA first 2 components: 0.8185
###Markdown
**INTERPRETATION:** Both for $B$ and $I$ tags are better separated than for RNN and GRU but there is still some overlapping compared to the FNN model. Hidden features' variance is smaller than the two previous model and the additional tag observed for the $B$-tags is very less present. --> PART 3 [ 40 pts ]: Analysis[Return to contents](contents) Overview[Return to contents](contents) Now that we have built, trained, and visualized our 5 different models, in this section, we will further investigate the results of each model and then seek to improve the results of our most promising model.For this section, we will be using $F_1$ score as our evaluative metric. If you are unfamiliar with this metric, $F_1$ is the harmonic mean of precision and recall. Some basic background on this metric [can be found here](https://en.wikipedia.org/wiki/F1_score). PART 3: Questions [Return to contents](contents) **[3.1:](s31)** For each model, iteratively:- Load the model using the given function `load_keras_model`- Apply the model to the test dataset- Compute an $F_1$ score for each `Tag` and store it **[3.2:](s32)** Plot the $F_1$ score per Tag and per model, including all on a single grouped barplot. Include a horizontal reference line at $F_1=0.8$ on your plot.**[3.3:](s33)** Briefly discuss the performance of each model.**[3.4:](s34)** Which tags have the lowest $F_1$ score? For instance, you may find from the plot above that the test performance on `"B-art"` and `"I-art"` is very low (just an example, your case may be different). Here is an example when models failed to predict these tags right:**[3.5:](s35)** Write functions to output another example test sentence in which the lowest scoring tags you identified in 3.4 were predicted wrong in a sentence (be certain to include both `"B-xxx"` and `"I-xxx"` tags). Store the results in a DataFrame (same format as the above example) and use the styling function provided below to display your DataFrame so that misclassified tags are shown with red text similar to the example provided in the image above. (**Please note:** The red text of your styled DataFrame will not persist between Jupyter notebook sessions. That is perfectly fine and to be expected.)**[3.6:](s36)** Choose one of the most promising models you have built and improve that model to achieve an $F_1$ score greater than $0.8$ for as many tags as possible (you have lots of options here, e.g. data balancing, hyperparameter tuning, changing the structure of the NN, using a different optimizer, etc.).**[3.7:](s37)** For your final improved model, illustrate your results with a bar plot similarly formatted to the one you created in 3.2, and be certain to include a horizontal line at $F_1=0.8$ to make interpretation easier. Interpret your results and clearly explain why you chose to change certain elements of the model and how effective those adjustments were. PART 3: Solutions[Return to contents](contents) **[3.1:](q31)** For each model, iteratively:- Load the model using the given function `load_keras_model`- Apply the model to the test dataset- Compute an $F_1$ score for each `Tag` and store it
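For reference, the metric used throughout this part is the per-tag $F_1$ score, $F_1 = 2\cdot\frac{\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}$, computed below with scikit-learn's `f1_score` (assumed to have been imported earlier in the notebook).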
###Code
# your code here
models = ['model_FFNN', 'model_RNN', 'model_GRU', 'model_BiGRU']
f1_models = []
y_test = np.argmax(y_te, axis=-1)
for name in models:
model = load_keras_model(name)
y_pred = np.argmax(model(X_te), axis=-1)
f1s = []
for tag in tag2idx.values():
y_pred_tag = 1*(y_pred==tag).reshape(1, -1)
y_true_tag = 1*(y_test==tag).reshape(1, -1)
f1_tag = f1_score(y_true_tag[0], y_pred_tag[0])
f1s.append(f1_tag)
f1_models.append(f1s)
f1_models = np.array(f1_models)
tags_baseline = []
for sentence in X_te:
for word in sentence:
if baseline[word]==tag2idx.get('O'):
tags_baseline.append(0)
else:
tags_baseline.append(int(baseline[word]))
tags_baseline=np.array(tags_baseline)
f1_baseline = []
f1 = f1_score(y_test.reshape(1, -1)[0], tags_baseline, average=None)
###Output
_____no_output_____
###Markdown
**[3.2:](q32)** Plot the $F_1$ score per Tag and per model, including all on a single grouped barplot. Include a horizontal reference line at $F_1=0.8$ on your plot.
###Code
# your code here
fig, ax = plt.subplots(1, figsize=(20, 10))
width = 0.35
labels = list(tag2idx.keys())
x = np.arange(len(labels))
plt.bar(x = x - 3*width/4, height=f1, width=width, color='lightpink', label='Baseline')
plt.bar(x = x - width/2, height=f1_models[0, :], width=width, color='lightcoral', label='FFNN')
plt.bar(x = x - width/4, height=f1_models[1, :], width=width, color='lightblue', label='Vanilla RNN')
plt.bar(x = x + width/4, height=f1_models[2, :], width=width, color='gold', label='GRU')
plt.bar(x = x + width/2, height=f1_models[3, :], width=width, color='lightgreen', label='BiGRU')
ax.set_xticks(x)
ax.axhline(y=0.8, color='k', linestyle='--', label='Threshold')
ax.set_xlabel('Tag')
ax.set_ylabel('F1 score')
ax.set_title('F1 score per model and per tag')
ax.set_xticklabels(labels)
ax.legend()
plt.show(fig)
###Output
_____no_output_____
###Markdown
**[3.3:](q33)** Briefly discuss the performance of each model. **INTERPRETATION:** The trends appearing in the previous plot are:- The Bidirectional GRU is consistently better than all the other models. I guess one could infer that the reason why this happens is because the biGRU has the ability to look at the entire sequence.- Some classes are so underrespresented that no networks detect them, except the Baseline that uses a rule that does not depend on the representativity of one class. - The baseline outperforms the FFNN for a big number of tags. This explains how a naive method can outperform a FFNN, not taking into account the sequential aspect of the problem, thus limited in its predictive power. - The GRU wonsistently outperforms the RNN and the BiGRU consistently outperforms the Vanilla RNN and GRUs, which is expected **[3.4:](q34)** Which tags have the lowest $F_1$ score? For instance, you may find from the plot above that the test performance on `"B-art"` and `"I-art"` is very low (just an example, your case may be different). Here is an example when models failed to predict these tags right: **INTERPRETATION:** The tags that have the lowest $F1$ scores are 'B-art', 'I-art' and 'I-nat'. **[3.5:](q35)** Write functions to output another example test sentence in which the lowest scoring tags you identified in 3.4 were predicted wrong in a sentence (be certain to include both `"B-xxx"` and `"I-xxx"` tags). Store the results in a DataFrame (same format as the above example) and use the styling function provided below to display your DataFrame so that misclassified tags are shown with red text similar to the example provided in the image above. (**Please note:** The red text of your styled DataFrame will not persist between Jupyter notebook sessions. That is perfectly fine and to be expected.)
###Code
def highlight_errors(s):
"""Highlights misclassified values when applied to Pandas styler
See the `pandas.io.formats.style.Styler.apply` documentation
for more information.
"""
is_max = s == s.y_true
return [
"" if v or key=="Word" else "color: red"
for key, v in is_max.iteritems()
]
# your code here
models = ['model_FFNN', 'model_RNN', 'model_GRU', 'model_BiGRU']
model_tags = []
selected_sentence = None
indexes_misclassified = set([0, 8]) # lowest scoring B tag and scoring I tag (B-art and I-art)
for i, sentence in enumerate(y_test):
if indexes_misclassified.issubset(set(sentence)): # we found a sentence where the 2 tags are present
for name in models:
model = load_keras_model(name)
y_pred = np.argmax(model(X_te[i].reshape(1, -1)), axis=-1)
tags_predicted = [idx2tag[idx] for idx in y_pred[0]]
model_tags.append(tags_predicted)
break
baseline_tags = []
for word in X_te[i]:
baseline_tags.append(int(baseline[word]))
baseline_tags = np.array(baseline_tags)
baseline_tags = [idx2tag[idx] for idx in baseline_tags]
true_sentence = [idx2word[idx] for idx in X_te[i]]
true_tags = [idx2tag[idx] for idx in sentence]
columns = ['Word', 'y_true', 'baseline', 'model_FFNN', 'model_RNN', 'model_GRU', 'model_BiGRU']
data = {'Word': true_sentence, 'y_true': true_tags,
'baseline': baseline_tags, 'model_FFNN': model_tags[0],
'model_RNN': model_tags[1], 'model_GRU': model_tags[2], 'model_BiGRU': model_tags[3]}
df = pd.DataFrame(data = data)
df.style.apply(lambda x: highlight_errors(x), axis=1)
###Output
_____no_output_____
###Markdown
**[3.6:](q36)** Choose one of the most promising models you have built and improve that model to achieve an $F_1$ score greater than $0.8$ for as many tags as possible (you have lots of options here, e.g. data balancing, hyperparameter tuning, changing the structure of the NN, using a different optimizer, etc.).
###Code
%%time
!pip install imbalanced-learn
!pip install delayed
from imblearn.over_sampling import SMOTE #SMOTE synthesizes new examples for the minority class to avoid imbalanced classification
from imblearn.over_sampling import RandomOverSampler
from sklearn.datasets import make_classification
A, B = make_classification(n_samples=5000, n_features=2, n_informative=2,
n_redundant=0, n_repeated=0, n_classes=3,
n_clusters_per_class=1,
weights=[0.01, 0.05, 0.94],
class_sep=0.8, random_state=0)
print(A.shape,B.shape)
A_r, B_r = SMOTE().fit_resample(A, B)
# your code here
#We select the BiGRU model
model_bigru = tf.keras.Sequential()
model_bigru.add(tf.keras.layers.InputLayer(input_shape=(104,), batch_size=batch_size))
#model_bigru.add(tf.keras.layers.UpSampling2D(interpolation="bilinear"))
model_bigru.add(tf.keras.layers.Embedding(input_dim=n_words, output_dim=dim_embed, input_length=max_len))
model_bigru.add(tf.keras.layers.Dropout(drop_rate))
model_bigru.add(tf.keras.layers.Bidirectional(layer = tf.keras.layers.GRU(units=64, return_sequences=True))) # concat by default
model_bigru.add(tf.keras.layers.Dropout(drop_rate))
model_bigru.add(tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_tags, activation="softmax")))
# Compile model
model_bigru.compile(optimizer='adam', loss=loss, metrics=metrics)
sm = SMOTE()
#y_trr=np.argmax(y_tr,axis=-1)
#print(X_tr.reshape(-1,2,2,1))
#X_tr, y_trr = sm.fit_resample(X_tr, y_trr)
# Train model
history = model_bigru.fit(X_tr, y_tr, batch_size=batch_size, epochs=3,
validation_split=validation_split, verbose=verbose)
X = []
Y = []
for sentence in sentences:
X.append([word2idx[word[0]] for word in sentence])
Y.append([tag2idx[word[1]] for word in sentence])
X = pad_sequences(X, max_len)
Y = pad_sequences(Y, max_len, value = tag2idx['PAD'])
Y = to_categorical(Y)
X_tr, X_te, y_tr, y_te = train_test_split(X, Y, test_size=0.1, random_state=109)
history = model_bigru.fit(X_tr, y_tr, batch_size=batch_size, epochs=epochs,
validation_split=validation_split, verbose=verbose)
y_pred_bigru = np.argmax(model_bigru(X_te), axis=-1)  # predictions from the retrained ("enhanced") BiGRU
y_test = np.argmax(y_te, axis=-1)
f1_bigru = []
for tag in tag2idx.values():
y_pred_tag = 1*(y_pred_bigru==tag).reshape(1, -1)
y_true_tag = 1*(y_test==tag).reshape(1, -1)
f1_tag = f1_score(y_true_tag[0], y_pred_tag[0])
f1_bigru.append(f1_tag)
###Output
_____no_output_____
###Markdown
Before trying to improve the $F_1$ score of our tags we had x tags above the given threshold already. The most promising model in terms of $F_1$ was clearly the BiGRU one. We tried different techniques and approaches in order to improve the tags' scores:* **Data balancing**: data balancing was our first idea. Indeed, the number of sentences that belong to each tag appears to be greatly unbalanced, so we decided to increase the number of sentences with underrepresented tags and retrain the model afterwards. We used the SMOTE resampler from the imbalanced-learn library. However, this did not improve the scores at all; we couldn't see any difference.* **Hyperparameter tuning**: this part was very tedious. We first tried to tune the parameters of the fit function (`verbose`, `batch_size`) to see if the scores would improve; they did not. We then changed the number of units of the GRU layer, but without any success.* **Changing the structure of the NN**: we figured that, to improve the $F_1$ score, a good idea could be to add a Dense layer and/or a new Dropout layer, to counterbalance the underrepresentation of certain tags. * **Using a different optimizer**: we tried every available optimizer, but this resulted only in tiny differences, nothing significant. We concluded that our model was already finely tuned and yielded a good $F_1$ score for the tags. **[3.7:](q37)** For your final improved model, illustrate your results with a bar plot similarly formatted to the one you created in 3.2, and be certain to include a horizontal line at $F_1=0.8$ to make interpretation easier. Interpret your results and clearly explain why you chose to change certain elements of the model and how effective those adjustments were.
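As an alternative to SMOTE (which is not well suited to padded token sequences), sentence-level oversampling of the rarest tags could be tried before refitting — a minimal sketch, assuming `X_tr`/`y_tr` as built above; the tag indices in `rare_tags` are placeholders, not values taken from this dataset:
```python
import numpy as np

rare_tags = {0, 8}                       # hypothetical indices of the rarest tags
y_tr_labels = np.argmax(y_tr, axis=-1)   # (n_sentences, max_len) integer tags

# sentences containing at least one rare tag
rare_idx = [i for i, sent in enumerate(y_tr_labels) if rare_tags & set(sent.tolist())]

# duplicate those sentences a few times, then shuffle and refit the model
X_bal = np.concatenate([X_tr] + [X_tr[rare_idx]] * 3, axis=0)
y_bal = np.concatenate([y_tr] + [y_tr[rare_idx]] * 3, axis=0)
```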
###Code
# your code here
fig, ax = plt.subplots(1, figsize=(20, 10))
width = 0.5
labels = list(tag2idx.keys())
x = np.arange(len(labels))
plt.bar(x = x - width/3, height=f1_bigru, width=width, color='lightgreen', label='enhanced BiGRU')
plt.bar(x = x + width/3, height=f1_models[3, :], width=width, color='green', label='original BiGRU')
ax.set_xticks(x)
ax.axhline(y=0.8, color='k', linestyle='--', label='Threshold')
ax.set_xlabel('Tag')
ax.set_ylabel('F1 score')
ax.set_title('F1 score of the BiGRU model')
ax.set_xticklabels(labels)
ax.legend()
plt.show(fig)
###Output
_____no_output_____ |
projects/novelty/ocsvm_1.ipynb | ###Markdown
My first public Kaggle notebook. Using recall and precision to judge the predictions. Trying out some ideas for novelty / outlier detection. I implemented my own multivariate Gaussian outlier detection function and compare it to scikit-learn's OneClassSVM. I reach 97% recall with 0.01 precision. This corresponds to catching 97% of all frauds, but giving a false alert 99% of the time. Any feedback on this result is much appreciated. Note: I don't think using accuracy as a measure of how well your prediction algorithm works is useful here. If we simply set all predictions to "No Fraud", we obtain an accuracy of over 99%. For more information you can read this: https://tryolabs.com/blog/2013/03/25/why-accuracy-alone-bad-measure-classification-tasks-and-what-we-can-do-about-it/
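To illustrate the point about accuracy, a minimal sketch (assuming the same `creditcard.csv` layout that is loaded below, with `Class` equal to 1 for fraud):
```python
import pandas as pd

data = pd.read_csv("../input/creditcard.csv")
y = data["Class"]

# Trivial "classifier" that flags nothing as fraud
accuracy = (y == 0).mean()   # > 0.99, because frauds are so rare
recall = 0.0                 # ...yet it catches zero frauds
print(f"accuracy = {accuracy:.4f}, recall = {recall}")
```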
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn import svm
from sklearn.model_selection import train_test_split
import seaborn as sns
%matplotlib inline
#load data
data = pd.read_csv("../input/creditcard.csv")
data.head()
len(data['Class'][data.Class==1]), len(data['Class'][data.Class==0]), len(data)
len(data['Class'][data.Class==0]) + len(data['Class'][data.Class==1]) == len(data)
data.tail()
data.groupby(("Class")).mean()
def plot_decision_function(model, ax, sv=True):
# mesh-grid to evaluate the model
xx, yy = np.meshgrid(np.linspace(-4, 4, 100),
np.linspace(-4, 4, 100))
# evaluating the model
Z = model.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
## plots the margin
ax = plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors='darkred')
ax = plt.contourf(xx, yy, Z, levels=[0, Z.max()], colors='palevioletred')
ax = plt.contourf(xx, yy, Z, levels=np.linspace(Z.min(), 0, 6), cmap=plt.cm.PuBu)
if sv:
ax = plt.scatter(model.support_vectors_[:, 0],
model.support_vectors_[:, 1],
s=300, linewidth=1, facecolors='none', edgecolors='black')
def plot_new_observations(X_train, X_new, anomaly, ax):
_ = plt.scatter(X_train[:,0], X_train[:,1],
axes=ax, color='w', s=40, edgecolors='k',
label='Training data')
_ = plt.scatter(X_new[:,0], X_new[:,1], axes=ax,
color='violet', s=40, edgecolors='k',
label='New regular observations')
_ = plt.scatter(anomaly[:,0], anomaly[:,1], axes=ax,
color='gold', s=40, edgecolors='k',
label='New abnormal observations')
_ = ax.legend()
_ = ax.set_xlim([-4, 4])
_ = ax.set_ylim([-4, 4])
n_points = 200
## generate a cluster of points for the training
np.random.seed(42)
X_train = 0.5 * np.random.randn(n_points, 2)
X_train.shape
## plot
fig, ax = plt.subplots(figsize=(12,8))
_ = plt.scatter(X_train[:,0], X_train[:,1], axes=ax, color='w', s=40, edgecolors='k')
_ = ax.set_xlim([-4, 4])
_ = ax.set_ylim([-4, 4])
model = svm.OneClassSVM(nu=0.05, kernel="rbf", gamma=0.7)
model.fit(X_train)
fig, ax = plt.subplots(figsize=(11,7))
# Plot the decision function
plot_decision_function(model, ax, sv=True)
# Add the training points
_ = plt.scatter(X_train[:,0], X_train[:,1], axes=ax,
color='w', s=40, edgecolors='k',label='Training data')
_ = ax.legend()
# compute the empirical error
y_train = model.predict(X_train)
err_emp_1 = y_train[y_train == -1].size
print("Training error = {}/{}".format(err_emp_1, n_points))
# Reduce nu, i.e. weight more the slack variables
model = svm.OneClassSVM(nu=0.005, kernel="rbf", gamma=0.7)
model.fit(X_train)
fig, ax = plt.subplots(figsize=(11,7))
# plot the decision function
plot_decision_function(model, ax, sv=True)
# add the training points
_ = plt.scatter(X_train[:,0], X_train[:,1], axes=ax,
color='w', s=40, edgecolors='k',label='Training data')
_ = ax.legend()
# Compute the empirical error
y_train = model.predict(X_train)
err_emp_2 = y_train[y_train == -1].size
print("Training error = {}/{}".format(err_emp_2, n_points))
###Output
Training error = 5/200
###Markdown
Error is reduced but risk of overfitting increases by reducing the value of nu. Anomaly
###Code
new_observation = 25
new_anomaly = 10
# generate new observation from the same distribution
np.random.seed(42)
X_new = 0.5 * np.random.randn(new_observation, 2)
# generate outliers
anomaly = np.random.uniform(low=-3, high=3, size=(new_anomaly, 2))
# plot
fig, ax = plt.subplots(figsize=(11,7))
plot_new_observations(X_train, X_new, anomaly, ax)
y_new = model.predict(X_new)
y_anomaly = model.predict(anomaly)
y_new, y_anomaly
err_new = y_new[y_new == -1].size
err_anomaly = y_anomaly[y_anomaly == -1].size
err_new, err_anomaly
print("Fraction of new regular observations misclassified = {}/{}".format(err_new, new_observation))
print("Fraction of new abnormal observations correctly classified = {}/{}".format(err_anomaly, new_anomaly))
fig, ax = plt.subplots(figsize=(11,7))
# plot the decision function
plot_decision_function(model, ax, sv=False)
plot_new_observations(X_train, X_new, anomaly, ax)
# generate two cluster for the training
np.random.seed(42)
X_train1 = 0.5 * np.random.randn(n_points//2, 2)+1.5
X_train2 = 0.5 * np.random.randn(n_points//2, 2)-1.5
X_train1, X_train2
X_train = np.r_[X_train1, X_train2]
X_train
X_train.shape
# plot
fig, ax = plt.subplots(figsize=(11,7))
_ = plt.scatter(X_train[:,0], X_train[:,1], axes=ax, color='w', s=40, edgecolors='k')
_ = ax.set_xlim([-4, 4])
_ = ax.set_ylim([-4, 4])
model = svm.OneClassSVM(nu=0.06, kernel="rbf", gamma=0.5)
model.fit(X_train)
fig, ax = plt.subplots(figsize=(11,7))
# plot the decision function
plot_decision_function(model, ax, sv=True)
# add the training points
_ = plt.scatter(X_train[:,0], X_train[:,1], axes=ax,
color='w', s=40, edgecolors='k',label='Training data')
_ = ax.legend()
# compute the empirical error
y_train = model.predict(X_train)
err_emp_1 = y_train[y_train == -1].size
print("Training error = {}/{}".format(err_emp_1, n_points))
new_observation = 50
new_anomaly = 20
# generate new observation from the same distribution
np.random.seed(42)
X_new_1 = 0.5 * np.random.randn(new_observation//2, 2)+1.5
X_new_2 = 0.5 * np.random.randn(new_observation//2, 2)-1.5
X_new = np.r_[X_new_1, X_new_2]
# generate outliers
anomaly = np.random.uniform(low=-4, high=4, size=(new_anomaly, 2))
# plot
fig, ax = plt.subplots(figsize=(11,7))
plot_new_observations(X_train, X_new, anomaly, ax)
y_new = model.predict(X_new)
y_anomaly = model.predict(anomaly)
err_new = y_new[y_new == -1].size
err_anomaly = y_anomaly[y_anomaly == -1].size
print("Fraction of new regular observations misclassified = {}/{}".format(err_new, new_observation))
print("Fraction of new abnormal observations correctly classified = {}/{}".format(err_anomaly, new_anomaly))
fig, ax = plt.subplots(figsize=(11,7))
# plot the decision function
plot_decision_function(model, ax, sv=False)
plot_new_observations(X_train, X_new, anomaly, ax)
# We can try to move the cluster closer
np.random.seed(42)
X_train1 = 0.5 * np.random.randn(n_points//2, 2)+0.9
X_train2 = 0.5 * np.random.randn(n_points//2, 2)-0.9
X_train = np.r_[X_train1, X_train2]
# plot
fig, ax = plt.subplots(figsize=(11,7))
_ = plt.scatter(X_train[:,0], X_train[:,1], axes=ax, color='w', s=40, edgecolors='k')
_ = ax.set_xlim([-4, 4])
_ = ax.set_ylim([-4, 4])
model = svm.OneClassSVM(nu=0.05, kernel="rbf", gamma=0.1)
model.fit(X_train)
fig, ax = plt.subplots(figsize=(11,7))
# plot the decision function
plot_decision_function(model, ax, sv=True)
# add the training points
_ = plt.scatter(X_train[:,0], X_train[:,1], axes=ax,
color='w', s=40, edgecolors='k',label='Training data')
_ = ax.legend()
# compute the empirical error
y_train = model.predict(X_train)
err_emp_1 = y_train[y_train == -1].size
print("Training error = {}/{}".format(err_emp_1, n_points))
model = svm.OneClassSVM(nu=0.3, kernel="rbf", gamma=1)
model.fit(X_train)
fig, ax = plt.subplots(figsize=(11,7))
# plot the decision function
plot_decision_function(model, ax, sv=True)
# add the training points
_ = plt.scatter(X_train[:,0], X_train[:,1], axes=ax,
color='w', s=40, edgecolors='k',label='Training data')
_ = ax.legend()
# compute the empirical error
y_train = model.predict(X_train)
err_emp_1 = y_train[y_train == -1].size
print("Training error = {}/{}".format(err_emp_1, n_points))
###Output
Training error = 60/200
###Markdown
Seems like the values for V1, V2, etc. are on average much farther from 0 for fraud. Let's check out some correlation matrices.
###Code
#correlation matrix
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(data.drop(['Amount','Time'],1).corr(), vmax=.8, square=True);
###Output
_____no_output_____
###Markdown
* Class correlates most with V1 - V18 and not (or barely) with V19 - V28
###Code
#correlation matrix for only Fraud
f, (ax1, ax2) = plt.subplots(1,2,figsize=(13, 5))
sns.heatmap(data.query("Class==1").drop(['Class','Time'],1).corr(), vmax=.8, square=True, ax=ax1)
ax1.set_title('Fraud')
sns.heatmap(data.query("Class==0").drop(['Class','Time'],1).corr(), vmax=.8, square=True, ax=ax2);
ax2.set_title('Legit')
plt.show()
###Output
_____no_output_____
###Markdown
* Strong correlations between the different V for Fraud data * Much less correlation for Legit data * Correlation between the data seems to be an important key (this should be captured by the Multivariate Gaussian) * Seems like Amount correlates as well. Thus, I should perhaps include it... Check out some distributions They should ideally be Gaussian for non-fraud examples
###Code
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(10,3))
data.query("Class==1").hist(column='V6',bins=np.linspace(-10,10,20),ax=ax1,label='Fraud')
ax1.legend()
data.query("Class==0").hist(column='V6',bins=np.linspace(-10,10,20),ax=ax2,label='Legit')
plt.legend()
plt.show()
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(10,3))
data.query("Class==1").hist(column='V2',bins=np.linspace(-10,10,20),ax=ax1,label='Fraud')
ax1.legend()
data.query("Class==0").hist(column='V2',bins=np.linspace(-10,10,20),ax=ax2,label='Legit')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
(Try it for different Vi.) For Legit transactions, the Vi are centered around 0 and look roughly Gaussian. For frauds, they are off-center.
###Code
bins=np.linspace(-10,50,40)
data.query("Class==1").hist(column='Amount',bins=bins)
data.query("Class==0").hist(column="Amount",bins=bins)
data.query("Class==1").hist(column="Time")#,bins=np.linspace(-10,10,20))
data.query("Class==0").hist(column="Time")#,bins=np.linspace(-10,10,20))
###Output
_____no_output_____
###Markdown
* **TIME** makes no difference, apparently * **AMOUNT** is hard to say... It does not really look like a Gaussian. Frauds seem to have a tendency toward low amounts. For now drop "Amount"
###Code
X_Legit = data.query("Class==0").drop(["Amount","Class","Time"],1)
y_Legit = data.query("Class==0")["Class"]
X_Fraud = data.query("Class==1").drop(["Amount","Class","Time"],1)
y_Fraud = data.query("Class==1")["Class"]
#split data into training and cv set
X_train, X_test, y_train, y_test = train_test_split(X_Legit, y_Legit, test_size=0.33, random_state=42)
print(len(X_test))
X_test = X_test.append(X_Fraud)
print(len(X_Fraud),' ', len(X_test))
y_test = y_test.append(y_Fraud)
X_test.head()
data.plot.scatter("V21","V22",c="Class")
###Output
_____no_output_____
###Markdown
V22 and V21 are definitely anticorrelated. The distribution looks like a Gaussian. Multivariate Gaussian (OneClassSVM is further below)
###Code
# Write my own Multivariate Gaussian outlier detection
X = X_train
m = len(X)
mu = X.mean()  # column-wise sample mean, i.e. (1/m) * sum over the m examples
Sigma=0
for i in range(m):
Sigma += np.outer((X.iloc[i]-mu) , (X.iloc[i]-mu))
Sigma*=1./m
Sig_inv = np.linalg.inv(Sigma)
Sig_det = np.linalg.det(Sigma)
np.matrix(Sigma).shape
# This function calculates the probability for a Gaussian distribution
def prob(x_example):
n=len(Sigma)
xminusmu = x_example - mu
return 1./((2*np.pi)**(n/2.) * Sig_det**0.5) * np.exp(-0.5* xminusmu.dot(Sig_inv).dot(xminusmu))
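# For reference (a hedged aside, not used below): scipy ships an equivalent
# multivariate normal density, vectorized over rows, which should match prob()
# applied row by row.
from scipy.stats import multivariate_normal
p_scipy = multivariate_normal(mean=mu, cov=Sigma).pdf(X_Fraud.head(10))
print(p_scipy)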
Sigma.diagonal()
# Check out some resulting probablilities for Fraud examples
for i in range(10):
print(prob(X_Fraud.iloc[i]))
# Check out some resulting probablilities for NON-Fraud examples
for i in range(10):
print(prob(X_train.iloc[i]))
# Picking out 100 training examples to test how many are misclassified as false positive
ptrain_result = np.apply_along_axis(prob, 1, X_train.head(100))
sum(ptrain_result < 1e-13)
###Output
_____no_output_____
###Markdown
With an epsilon of 1e-13, roughly 50% of the sampled (legitimate) training examples are falsely classified as fraud. Let's see how many frauds are classified correctly using that epsilon.
###Code
# Copying this to a variable with a new name because i am using the same
# variable below with 'Amount' included as feature
pTest_result = np.apply_along_axis(prob, 1, X_test)
pTest_result_prev = np.copy(pTest_result)
epsilon = 1e-13
yTest_result_prev = (pTest_result_prev < epsilon)
tp = sum(yTest_result_prev & y_test)
tn = sum((~ yTest_result_prev) & (~ y_test))
fp = sum((yTest_result_prev) & (~ y_test))
fn = sum((~ yTest_result_prev) & ( y_test))
print("true_pos ",tp)
print("true_neg ",tn)
print("false_pos ",fp)
print("false_neg ",fn)
recall = tp / (tp + fn)
precision = tp / (tp + fp)
F1 = 2*recall*precision/(recall+precision)
print("recall=",recall,"\nprecision=",precision)
print("F1=",F1)
###Output
true_pos 479
true_neg 43451
false_pos 50373
false_neg 13
recall= 0.9735772357723578
precision= 0.009419491858727288
F1= 0.018658460579619823
###Markdown
Thus, I obtain a recall of 97%, but a low precision of 0.01, which means only 1 out of 100 fraud 'detections' is an actual fraud. Rescale "Amount" and use it
###Code
data["Amountresc"] = (data["Amount"])/data["Amount"].var()  # scale the raw amount down by its variance
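# A hedged alternative (not applied here): standardize to zero mean and unit
# standard deviation instead of dividing by the variance:
# data["Amountresc"] = (data["Amount"] - data["Amount"].mean()) / data["Amount"].std()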
X_Legit = data.query("Class==0").drop(["Amount","Class","Time"],1)
y_Legit = data.query("Class==0")["Class"]
X_Fraud = data.query("Class==1").drop(["Amount","Class","Time"],1)
y_Fraud = data.query("Class==1")["Class"]
#split data into training and cv set
X_train, X_test, y_train, y_test = train_test_split(X_Legit, y_Legit, test_size=0.33, random_state=42)
X_test = X_test.append(X_Fraud)
y_test = y_test.append(y_Fraud)
X_test.head()
# Use my outlier detection
X = X_train
m = len(X)
mu = X.mean()  # column-wise sample mean, i.e. (1/m) * sum over the m examples
Sigma=0
for i in range(m):
Sigma += np.outer((X.iloc[i]-mu) , (X.iloc[i]-mu))
Sigma*=1./m
Sig_inv = np.linalg.inv(Sigma)
Sig_det = np.linalg.det(Sigma)
pTest_result = np.apply_along_axis(prob, 1, X_test)
epsilon = 2e-11
yTest_result = (pTest_result < epsilon)
tp = sum(yTest_result & y_test)
tn = sum((~ yTest_result) & (~ y_test))
fp = sum((yTest_result) & (~ y_test))
fn = sum((~ yTest_result) & ( y_test))
print("true_pos ",tp)
print("true_neg ",tn)
print("false_pos ",fp)
print("false_neg ",fn)
recall = tp / (tp + fn)
precision = tp / (tp + fp)
F1 = 2*recall*precision/(recall+precision)
print("recall=",recall,"\nprecision=",precision)
print("F1=",F1)
###Output
true_pos 479
true_neg 41984
false_pos 51840
false_neg 13
recall= 0.9735772357723578
precision= 0.009155373764789082
F1= 0.018140160193899
###Markdown
Conclusion: No real improvement. Note that I am using a larger epsilon. Next up: I can try the novelty detection algorithm from scikit-learn.
###Code
len(X_train)
## Use only part of training set, otherwise it takes very long
Xsmall = X_train.head(20000)
# fit the model
clf = svm.OneClassSVM(kernel="rbf", nu=0.01, gamma=0.3)
clf.fit(Xsmall)
y_pred_test = clf.predict(X_test)
y_pred_test = np.array([y==-1 for y in y_pred_test])
tp = sum(y_pred_test & y_test)
tn = sum((~ y_pred_test) & (~ y_test))
fp = sum((y_pred_test) & (~ y_test))
fn = sum((~ y_pred_test) & ( y_test))
print("true_pos ",tp)
print("true_neg ",tn)
print("false_pos ",fp)
print("false_neg ",fn)
recall = tp / (tp + fn)
precision = tp / (tp + fp)
F1 = 2*recall*precision/(recall+precision)
print("recall=",recall,"\nprecision=",precision)
print("F1=",F1)
# ## Some results from different test runs
# # nu=0.01, gamma=0.3
# true_pos 478
# true_neg 55794
# false_pos 38030
# false_neg 14
# recall= 0.971544715447
# precision= 0.0124130050899
# F1= 0.0245128205128
# # nu=0.05, gamma=0.3
# true_pos 478
# true_neg 55774
# false_pos 38050
# false_neg 14
# recall= 0.971544715447
# precision= 0.0124065614618
# F1= 0.0245002562788
# # nu=0.05, gamma=0.2
# true_pos 463
# true_neg 68954
# false_pos 24870
# false_neg 29
# recall= 0.941056910569
# precision= 0.0182765562705
# F1= 0.0358567279768
# # nu=0.5, gamma=0.5
# true_pos 487
# true_neg 36001
# false_pos 57823
# false_neg 5
# recall= 0.989837398374
# precision= 0.00835191219345
# F1= 0.0165640624469
# # nu=0.5, gamma=0.1
# true_pos 478
# true_neg 47316
# false_pos 46508
# false_neg 14
# recall= 0.971544715447
# precision= 0.0101732430937
# F1= 0.0201356417709
###Output
_____no_output_____
###Markdown
- With nu=0.5, gamma=0.1: 97% of frauds are detected, but only 1 in 100 detections is an actual fraud (i.e. 99% false alerts). Seems fine to me... But: it's not better than the Multivariate Gaussian (and much slower). - With a smaller nu we get a larger F1 and higher precision BUT smaller recall... - The same goes for gamma... A larger gamma means larger recall and smaller precision. - TODO: Plot precision, recall and F1 as a function of nu and gamma. - TODO (?): Add new features V1xV3, V1xV5, V1xV7 to make use of the strong pairwise correlations for fraud detection and see if it improves results. For the multivariate Gaussian, the correlations should already be included. I am not sure if this is also the case for the OneClassSVM with a Gaussian kernel... Try another classification algorithm - IsolationForest
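A minimal sketch of the first TODO above (assuming `Xsmall`, `X_test`, and `y_test` as defined earlier; the grid values are only illustrative and the scan is slow):
```python
import numpy as np
import pandas as pd

y_true = (y_test == 1).values
rows = []
for nu in [0.01, 0.05, 0.1, 0.5]:
    for gamma in [0.1, 0.3, 0.5]:
        clf = svm.OneClassSVM(kernel="rbf", nu=nu, gamma=gamma)
        clf.fit(Xsmall)
        pred = clf.predict(X_test) == -1           # True = flagged as fraud
        tp = np.sum(pred & y_true)
        fp = np.sum(pred & ~y_true)
        fn = np.sum(~pred & y_true)
        recall = tp / (tp + fn)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        f1 = 2 * recall * precision / (recall + precision) if (recall + precision) else 0.0
        rows.append((nu, gamma, recall, precision, f1))

pd.DataFrame(rows, columns=["nu", "gamma", "recall", "precision", "F1"])
```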
###Code
from sklearn.ensemble import IsolationForest
rng = np.random.RandomState(42)
clf = IsolationForest(max_samples=10, random_state=rng)
clf.fit(X_train.head(100000))
y_pred_train = clf.predict(X_train)
y_pred_test = clf.predict(X_test)
#y_pred_outliers = clf.predict(X_outliers)
y_pred_test = clf.predict(X_test)
y_pred_test = np.array([y==-1 for y in y_pred_test])
tp = sum(y_pred_test & y_test)
tn = sum((~ y_pred_test) & (~ y_test))
fp = sum((y_pred_test) & (~ y_test))
fn = sum((~ y_pred_test) & ( y_test))
print("true_pos ",tp)
print("true_neg ",tn)
print("false_pos ",fp)
print("false_neg ",fn)
recall = tp / (tp + fn)
precision = tp / (tp + fp)
F1 = 2*recall*precision/(recall+precision)
print("recall=",recall,"\nprecision=",precision)
print("F1=",F1)
###Output
true_pos 418
true_neg 84420
false_pos 9404
false_neg 74
recall= 0.8495934959349594
precision= 0.04255752392588068
F1= 0.0810548768663952
|
cards.ipynb | ###Markdown
Teaching Loops With Cards and Python Create decks of cards
###Code
from cards.cards import *
# Create a single card using its suit and rank
Card("spades", "A")
# Create a full ordered deck of cards
print(full_deck())
# Create a small random deck of a given size
safe_small_random_deck(5)
# Create a deck from the cards you have in your hand
# ** REPLACE WITH YOUR OWN DECK **
deck = deck_from_list([
("diamonds", "A"),
("hearts", "4")
])
deck
###Output
_____no_output_____
###Markdown
1. Find the highest value Heart in your deck
###Code
"""
Finds the highest ranked hearts card in the deck
"""
def find_highest_heart(deck: List[Card]) -> Card:
# prepare variable to store intermediate answers
highest_heart = None
# loop over all the cards in you deck
# ** ADD CODE HERE **
# check the suit
if card.suit=='hearts':
# the first heart is automatically the highest
if highest_heart == None:
highest_heart = card
else:
# otherwise, need to check if it's bigger than the previously seen heart
if card > highest_heart:
highest_heart = card
return highest_heart
find_highest_heart(deck)
###Output
_____no_output_____
###Markdown
2. Find the highest value Diamond in your deck that is in an even location (index)The first card at the top is at location/index 0.
###Code
def find_highest_even_diamond(deck: List[Card]) -> Card:
# prepare variable to store intermediate answers
highest_even_diamond = None
#loop over the indices of your list
# ** ADD CODE HERE **
# check if index is even
if index%2==0:
card = deck[index]
# the rest is the same as the hearts example above (but with diamonds)
if card.suit=="diamonds":
if highest_even_diamond == None:
highest_even_diamond = card
else:
                    if card > highest_even_diamond:
highest_even_diamond = card
return highest_even_diamond
find_highest_even_diamond(deck)
###Output
_____no_output_____
###Markdown
3. Find the highest value card in the first seven of your deck.
###Code
def find_highest_in_seven(deck: List[Card]) -> Card:
# prepare variable to store intermediate answers
highest = None
# loop over 7 indices (HINT: range may be helpful here)
# ** ADD CODE HERE **
# grab the card in that location
card = # ** ADD CODE HERE **
# first card is automatically highest
if highest == None:
highest = card
# otherwise, needs to be bigger than previous highest card
elif card > highest:
highest = card
return highest
find_highest_in_seven(deck)
###Output
_____no_output_____
###Markdown
4. Separate your cards into 3 piles. Count the number of cards in each pile.
###Code
def count_three_piles(piles: List[List[Card]]) -> List[Card]:
# prepare to collect counts
counts = []
# loop over piles
# ** ADD CODE HERE **
# set up counter for this pile
count = 0
# loop over cards in this pile
# ** ADD CODE HERE **
# count each card
count += 1
# store the length of this pile
counts.append(count)
return counts
###Output
_____no_output_____
###Markdown
5. Look at the value of the top card in your deck. Put it on the bottom. Forever.
###Code
def infinite_loop(deck):
# the check will always be True
# ** ADD CODE HERE **
# grab the first card
        first_card = deck.pop(0)
        # put it on the bottom
        deck.append(first_card)
###Output
_____no_output_____
###Markdown
6. Draw cards until the sum of their values reaches (or passes) 21.
###Code
d = shuffle(deck.copy())
def at_least_21(deck):
# set up counter
counter = 0
# keep going if 21 has not been reached
# ** ADD CODE HERE **
# get the top card from the deck
card = deck.pop(0)
# add its value to your counter (HINT: use card.value)
# ** ADD CODE HERE **
return counter
at_least_21(d)
###Output
_____no_output_____
###Markdown
7. Deal 5 cards to each of 3 players.
###Code
def deal_5_to_3(deck):
# prepare to grab decks
player_decks = []
# loop over players
# ** ADD CODE HERE **
# ** ADD CODE HERE **
# loop over cards in your deck (or make sure each player has 5 cards)
# ** ADD CODE HERE **
# ** ADD CODE HERE **
return player_decks
###Output
_____no_output_____
###Markdown
8. CHALLENGE: The Game of WarTwo players each get a deck of card. They both flip the top card in their deck. Whosever card has the highest value (assume A=1, J=11, Q=12, K=13) adds both cards to the bottom of their deck. If the values are the same, set the cards aside. The game ends when one person is out of cards.
###Code
d1 = small_random_deck(5)
d2 = small_random_deck(5)
def game_of_war(deck1, deck2):
# how do you know whether to keep going?
# what do you do at each step?
return winner
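# One possible solution sketch for the rules described above (an illustration only,
# not the unique answer; assumes each Card exposes a numeric .value as in exercise 6):
def game_of_war_sketch(deck1, deck2):
    # keep playing while both players still have cards
    while len(deck1) > 0 and len(deck2) > 0:
        card1 = deck1.pop(0)
        card2 = deck2.pop(0)
        if card1.value > card2.value:
            deck1.extend([card1, card2])       # higher card takes both, added to the bottom
        elif card2.value > card1.value:
            deck2.extend([card1, card2])
        # on a tie, both cards are simply set aside
    # the game ends when one player is out of cards
    return "player 1" if len(deck1) > 0 else "player 2"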
###Output
_____no_output_____ |
Exercise 03 Vietnam weather data.ipynb | ###Markdown
Data Analysis with Python and Pandas tutorial Exercise 3: Vietnam weather data For this exercise you need the VN-humidity.xlsx and VN-temperature.xlsx files. These are available at: https://1drv.ms/u/s!AgtH78k0_cuvglZTACkxCrOl0vr0?e=nHd1kK
###Code
# import the pandas library
import pandas as pd
# read the VN-temperature.xlsx file into a dataframe (df1)
df1 = pd.read_excel('VN-temperature.xlsx')
# print the shape of the dataframe
df1.shape
# print a few rows (head)
df1.head()
# define a column mapping to rename columns to english and lowercase
col_mapping = {
'DIAPHUONG': 'city',
'NHIETDO': 'temperature',
'THANG': 'month',
'NAM': 'year',
'VUNG': 'region',
'DO_AM': 'humidity'
}
# optionally, print the mapping to review it
col_mapping
# apply the column mapping to rename the columns
# do the renaming inplace
df1.rename(columns=col_mapping, inplace=True)
# print head again
df1.head(2)
df2 = pd.read_excel('VN-humidity.xlsx')
df2.shape
df1.info()
df1.region.value_counts()
df2.rename(columns=col_mapping, inplace=True)
df2.head()
df = pd.merge(df1, df2, on=['year', 'month', 'region', 'city'], how='outer')
df.info()
df.head()
df[(df.year == 2011) & (df.month == 2)]
df[df.temperature.isna()]
df.loc[(df.city == 'Lai Chau') & (df.year == 2011) & (df.month == 2) & (df.region == 'BAC'), 'temperature'] = 20.0
df1.set_index(['year', 'month', 'region', 'city'], inplace=True)
df2.set_index(['year', 'month', 'region', 'city'], inplace=True)
df = pd.concat([df1, df2], axis=1) # , keys=['year', 'month', 'region', 'locale'])
df1
###Output
_____no_output_____ |
3-ml-classification/Week_3-assignment-1.ipynb | ###Markdown
Identifying safe loans with decision trees The [LendingClub](https://www.lendingclub.com/) is a peer-to-peer lending company that directly connects borrowers and potential lenders/investors. In this notebook, you will build a classification model to predict whether or not a loan provided by LendingClub is likely to [default](https://en.wikipedia.org/wiki/Default_%28finance%29). In this notebook you will use data from the LendingClub to predict whether a loan will be paid off in full or the loan will be [charged off](https://en.wikipedia.org/wiki/Charge-off) and possibly go into default. In this assignment you will:* Use SFrames to do some feature engineering.* Train a decision-tree on the LendingClub dataset.* Predict whether a loan will default along with prediction probabilities (on a validation set).* Train a complex tree model and compare it to a simple tree model. Let's get started! Fire up Turi Create Make sure you have the latest version of Turi Create. If you don't find the decision tree module, then you would need to upgrade Turi Create using``` pip install turicreate --upgrade```
###Code
import turicreate
###Output
_____no_output_____
###Markdown
Load LendingClub dataset We will be using a dataset from the [LendingClub](https://www.lendingclub.com/). A parsed and cleaned form of the dataset is available [here](https://github.com/learnml/machine-learning-specialization-private). Make sure you **download the dataset** before running the following command.
###Code
loans = turicreate.SFrame('lending-club-data.sframe/')
###Output
_____no_output_____
###Markdown
Exploring some features Let's quickly explore what the dataset looks like. First, let's print out the column names to see what features we have in this dataset.
###Code
loans.column_names()
###Output
_____no_output_____
###Markdown
Here, we see that we have some feature columns that have to do with grade of the loan, annual income, home ownership status, etc. Let's take a look at the distribution of loan grades in the dataset.
###Code
loans['grade'].show()
###Output
_____no_output_____
###Markdown
We can see that over half of the loan grades are assigned values `B` or `C`. Each loan is assigned one of these grades, along with a more finely discretized feature called `sub_grade` (feel free to explore that feature column as well!). These values depend on the loan application and credit report, and determine the interest rate of the loan. More information can be found [here](https://www.lendingclub.com/public/rates-and-fees.action).Now, let's look at a different feature.
###Code
loans['home_ownership'].show()
###Output
_____no_output_____
###Markdown
This feature describes whether the loanee is mortaging, renting, or owns a home. We can see that a small percentage of the loanees own a home. Exploring the target columnThe target column (label column) of the dataset that we are interested in is called `bad_loans`. In this column **1** means a risky (bad) loan **0** means a safe loan.In order to make this more intuitive and consistent with the lectures, we reassign the target to be:* **+1** as a safe loan, * **-1** as a risky (bad) loan. We put this in a new column called `safe_loans`.
###Code
# safe_loans = 1 => safe
# safe_loans = -1 => risky
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
###Output
_____no_output_____
###Markdown
Now, let us explore the distribution of the column `safe_loans`. This gives us a sense of how many safe and risky loans are present in the dataset.
###Code
loans['safe_loans'].show()
###Output
_____no_output_____
###Markdown
You should have:* Around 81% safe loans * Around 19% risky loans It looks like most of these loans are safe loans (thankfully). But this does make our problem of identifying risky loans challenging. Features for the classification algorithm In this assignment, we will be using a subset of features (categorical and numeric). The features we will be using are **described in the code comments** below. If you are a finance geek, the [LendingClub](https://www.lendingclub.com/) website has a lot more details about these features.
###Code
features = ['grade', # grade of the loan
'sub_grade', # sub-grade of the loan
'short_emp', # one year or less of employment
'emp_length_num', # number of years of employment
'home_ownership', # home_ownership status: own, mortgage or rent
'dti', # debt to income ratio
'purpose', # the purpose of the loan
'term', # the term of the loan
'last_delinq_none', # has borrower had a delinquincy
'last_major_derog_none', # has borrower had 90 day or worse rating
'revol_util', # percent of available credit being used
'total_rec_late_fee', # total late fees received to day
]
target = 'safe_loans' # prediction target (y) (+1 means safe, -1 is risky)
# Extract the feature columns and target column
loans = loans[features + [target]]
###Output
_____no_output_____
###Markdown
What remains now is a **subset of features** and the **target** that we will use for the rest of this notebook. Sample data to balance classes As we explored above, our data is disproportionally full of safe loans. Let's create two datasets: one with just the safe loans (`safe_loans_raw`) and one with just the risky loans (`risky_loans_raw`).
###Code
safe_loans_raw = loans[loans[target] == +1]
risky_loans_raw = loans[loans[target] == -1]
print("Number of safe loans : %s" % len(safe_loans_raw))
print("Number of risky loans : %s" % len(risky_loans_raw))
###Output
Number of safe loans : 99457
Number of risky loans : 23150
###Markdown
Now, write some code to compute below the percentage of safe and risky loans in the dataset and validate these numbers against what was given using `.show` earlier in the assignment:
###Code
total_loans = len(safe_loans_raw) + len(risky_loans_raw)
print("Percentage of safe loans :", (len(safe_loans_raw)/total_loans) * 100)
print("Percentage of risky loans :", (len(risky_loans_raw)/total_loans) * 100)
###Output
Percentage of safe loans : 81.11853319957262
Percentage of risky loans : 18.881466800427383
###Markdown
One way to combat class imbalance is to undersample the larger class until the class distribution is approximately half and half. Here, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We used `seed=1` so everyone gets the same results.
###Code
# Since there are fewer risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
risky_loans = risky_loans_raw
safe_loans = safe_loans_raw.sample(percentage, seed=1)
# Append the risky_loans with the downsampled version of safe_loans
loans_data = risky_loans.append(safe_loans)
###Output
_____no_output_____
###Markdown
Now, let's verify that the resulting percentage of safe and risky loans are each nearly 50%.
###Code
print("Percentage of safe loans :", len(safe_loans) / float(len(loans_data)))
print("Percentage of risky loans :", len(risky_loans) / float(len(loans_data)))
print("Total number of loans in our new dataset :", len(loans_data))
###Output
Percentage of safe loans : 0.5022361744216048
Percentage of risky loans : 0.4977638255783951
Total number of loans in our new dataset : 46508
###Markdown
**Note:** There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this [paper](http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5128907&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F69%2F5173046%2F05128907.pdf%3Farnumber%3D5128907 ). For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods. Split data into training and validation sets We split the data into training and validation sets using an 80/20 split and specifying `seed=1` so everyone gets the same results.**Note**: In previous assignments, we have called this a **train-test split**. However, the portion of data that we don't train on will be used to help **select model parameters** (this is known as model selection). Thus, this portion of data should be called a **validation set**. Recall that examining performance of various potential models (i.e. models with different parameters) should be on validation set, while evaluation of the final selected model should always be on test data. Typically, we would also save a portion of the data (a real test set) to test our final model on or use cross-validation on the training set to select our final model. But for the learning purposes of this assignment, we won't do that.
###Code
train_data, validation_data = loans_data.random_split(.8, seed=1)
###Output
_____no_output_____
###Markdown
Use decision tree to build a classifier Now, let's use the built-in Turi Create decision tree learner to create a loan prediction model on the training data. (In the next assignment, you will implement your own decision tree learning algorithm.) Our feature columns and target column have already been decided above. Use `validation_set=None` to get the same results as everyone else.
###Code
decision_tree_model = turicreate.decision_tree_classifier.create(train_data,
validation_set=None,
target = target,
features = features)
###Output
_____no_output_____
###Markdown
Building a smaller tree Typically the max depth of the tree is capped at 6. However, such a tree can be hard to visualize graphically, and moreover, it may overfit. Here, we instead learn a smaller model with **max depth of 2** to gain some intuition and to understand the learned tree more.
###Code
small_model = turicreate.decision_tree_classifier.create(train_data,
validation_set=None,
target = target,
features = features,
max_depth = 2)
###Output
_____no_output_____
###Markdown
Making predictions Let's consider two positive and two negative examples **from the validation set** and see what the model predicts. We will do the following:* Predict whether or not a loan is safe.* Predict the probability that a loan is safe.
###Code
validation_safe_loans = validation_data[validation_data[target] == 1]
validation_risky_loans = validation_data[validation_data[target] == -1]
sample_validation_data_risky = validation_risky_loans[0:2]
sample_validation_data_safe = validation_safe_loans[0:2]
sample_validation_data = sample_validation_data_safe.append(sample_validation_data_risky)
sample_validation_data
###Output
_____no_output_____
###Markdown
Explore label predictions Now, we will use our model to predict whether or not a loan is likely to default. For each row in the **sample_validation_data**, use the **decision_tree_model** to predict whether or not the loan is classified as a **safe loan**. **Hint:** Be sure to use the `.predict()` method.
###Code
sample_preds = decision_tree_model.predict(sample_validation_data)
(sample_preds == sample_validation_data['safe_loans']).sum()/len(sample_preds) * 100
###Output
_____no_output_____
###Markdown
**Quiz Question:** What percentage of the predictions on `sample_validation_data` did `decision_tree_model` get correct? Explore probability predictions For each row in the **sample_validation_data**, what is the probability (according to **decision_tree_model**) of a loan being classified as **safe**? **Hint:** Set `output_type='probability'` to make **probability** predictions using **decision_tree_model** on `sample_validation_data`:
###Code
sample_preds = decision_tree_model.predict(sample_validation_data, output_type='probability')
sample_preds
###Output
_____no_output_____
###Markdown
**Quiz Question:** Which loan has the highest probability of being classified as a **safe loan**? **Checkpoint:** Can you verify that for all the predictions with `probability >= 0.5`, the model predicted the label **+1**? Tricky predictions! Now, we will explore something pretty interesting. For each row in the **sample_validation_data**, what is the probability (according to **small_model**) of a loan being classified as **safe**? **Hint:** Set `output_type='probability'` to make **probability** predictions using **small_model** on `sample_validation_data`:
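One quick way to check the checkpoint above — a small sketch using the SArray comparisons that Turi Create predictions support:
```python
probs = decision_tree_model.predict(sample_validation_data, output_type='probability')
labels = decision_tree_model.predict(sample_validation_data)
# every prediction with probability >= 0.5 should carry the label +1
print((labels == +1) == (probs >= 0.5))
```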
###Code
small_model.predict(sample_validation_data, output_type='probability')
###Output
_____no_output_____
###Markdown
**Quiz Question:** Notice that the probability predictions are the **exact same** for the 2nd and 3rd loans. Why would this happen? Evaluating accuracy of the decision tree model Recall that the accuracy is defined as follows:$$\mbox{accuracy} = \frac{\mbox{ correctly classified examples}}{\mbox{ total examples}}$$Let us start by evaluating the accuracy of the `small_model` and `decision_tree_model` on the training data
###Code
print(small_model.evaluate(train_data)['accuracy'])
print(decision_tree_model.evaluate(train_data)['accuracy'])
###Output
0.6135020416935311
0.6405813453685794
###Markdown
**Checkpoint:** You should see that the **small_model** performs worse than the **decision_tree_model** on the training data. Now, let us evaluate the accuracy of the **small_model** and **decision_tree_model** on the entire **validation_data**, not just the subsample considered above.
###Code
print(small_model.evaluate(validation_data)['accuracy'])
print(decision_tree_model.evaluate(validation_data)['accuracy'])
###Output
0.6193451098664369
0.6367944851357173
###Markdown
**Quiz Question:** What is the accuracy of `decision_tree_model` on the validation set, rounded to the nearest .01? Evaluating accuracy of a complex decision tree model Here, we will train a large decision tree with `max_depth=10`. This will allow the learned tree to become very deep, and result in a very complex model. Recall that in lecture, we prefer simpler models with similar predictive power. This will be an example of a more complicated model which has similar predictive power, i.e. something we don't want.
###Code
big_model = turicreate.decision_tree_classifier.create(train_data, validation_set=None,
target = target, features = features, max_depth = 10)
###Output
_____no_output_____
###Markdown
Now, let us evaluate **big_model** on the training set and validation set.
###Code
print(big_model.evaluate(train_data)['accuracy'])
print(big_model.evaluate(validation_data)['accuracy'])
###Output
0.665538362346873
0.6274235243429557
###Markdown
**Checkpoint:** We should see that **big_model** has even better performance on the training set than **decision_tree_model** did on the training set. **Quiz Question:** How does the performance of **big_model** on the validation set compare to **decision_tree_model** on the validation set? Is this a sign of overfitting? Quantifying the cost of mistakesEvery mistake the model makes costs money. In this section, we will try and quantify the cost of each mistake made by the model.Assume the following:* **False negatives**: Loans that were actually safe but were predicted to be risky. This results in an oppurtunity cost of losing a loan that would have otherwise been accepted. * **False positives**: Loans that were actually risky but were predicted to be safe. These are much more expensive because it results in a risky loan being given. * **Correct predictions**: All correct predictions don't typically incur any cost.Let's write code that can compute the cost of mistakes made by the model. Complete the following 4 steps:1. First, let us compute the predictions made by the model.1. Second, compute the number of false positives.2. Third, compute the number of false negatives.3. Finally, compute the cost of mistakes made by the model by adding up the costs of true positives and false positives.First, let us make predictions on `validation_data` using the `decision_tree_model`:
###Code
predictions = decision_tree_model.predict(validation_data)
###Output
_____no_output_____
###Markdown
**False positives** are predictions where the model predicts +1 but the true label is -1. Complete the following code block for the number of false positives:
###Code
def compute_cost(y_true, y_pred):
    '''
    Return the total dollar cost of the prediction mistakes in y_pred relative to y_true,
    charging each false negative $10,000 and each false positive $20,000.
    '''
import numpy as np
fn_cost = 1e4
fp_cost = 2e4
y_true = np.array(y_true)
y_pred = np.array(y_pred)
total_fp = len(np.where(np.logical_and(y_true == -1, y_pred == +1))[0])
total_fn = len(np.where(np.logical_and(y_true == +1, y_pred == -1))[0])
return (fp_cost * total_fp) + (fn_cost * total_fn)
###Output
_____no_output_____
###Markdown
**False negatives** are predictions where the model predicts -1 but the true label is +1. Complete the following code block for the number of false negatives:
###Code
compute_cost(validation_data['safe_loans'], predictions)
###Output
_____no_output_____ |
cylinder/cylgrid1new_sst_iddes_07_alphaupw1p00_upwfactor0p00/plotCp.ipynb | ###Markdown
Plot Cp distribution grid1 case, compare upw_factor=1 with upw_factor=0Generate the `cylpressure.dat` file using```bash$ python3 ../utilities/pp_cyl.py -m rundir/out01/cylinder.e -t 60```
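For reference, the normalization applied in the cells below is the dynamic head

$$Q = \tfrac{1}{2}\rho U^2 = \tfrac{1}{2}\cdot 1.225 \cdot 20^2 = 245\ \mathrm{Pa},$$

so the plotted quantity is $C_p = p/Q$, assuming the fourth column of `cylpressure.dat` holds pressure relative to the freestream (gauge pressure).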
###Code
%%capture
import sys
sys.path.insert(1, '../utilities')
import litCpData
import numpy as np
import matplotlib.pyplot as plt
# Define a useful function for pull stuff out of dicts
getparam = lambda keylabel, pdict, default: pdict[keylabel] if keylabel in pdict else default
# Basic problem parameters
D = 6 # Cylinder diameter
U = 20 # Freestream velocity
Lspan = 24 # Spanwise length
A = D*Lspan # frontal area
rho = 1.225 # density
Q = 0.5*rho*U*U # Dynamic head
# Index of all runs here
runlist=[
# Name, cylpressure file, style dict
['Nalu-Wind IDDES (upw_factor=1)', '../cylgrid1new_sst_iddes_01/cylpressure03.dat', {'color':'k', 'lw':2, 'lstyle':'--'}],
['Nalu-Wind IDDES (upw_factor=0)', './cylpressure.dat', {'color':'k', 'lw':2, 'lstyle':'-'}],
]
# Load the pressure data
P = np.loadtxt('cylpressure.dat', skiprows=1, delimiter=',')
# Construct Theta vs Cp
XYtoDeg = lambda x, y: np.arctan2(y,x)*180.0/np.pi+180.0
X=np.array([XYtoDeg(P[:,0], P[:,1]), P[:,3]/Q]).transpose()
thetaCp=X[X[:,0].argsort()]
# Save the data
np.savetxt('CpDistribution.dat', thetaCp)
# Plot Cp distribution
plt.rc('font', size=16)
plt.figure(figsize=(10,8))
# Plot other people's values
litCpData.plotEXP()
litCpData.plotCFD()
for run in runlist:
label = run[0]
filename = run[1]
rundict = run[2]
P = np.loadtxt(filename, skiprows=1, delimiter=',')
# Construct Theta vs Cp
XYtoDeg = lambda x, y: np.arctan2(y,x)*180.0/np.pi+180.0
X=np.array([XYtoDeg(P[:,0], P[:,1]), P[:,3]/Q]).transpose()
thetaCp=X[X[:,0].argsort()]
lstyle = getparam('lstyle', rundict, '-')
lw = getparam('lw', rundict, 1.25)
color = getparam('color', rundict, 'b')
plt.plot(thetaCp[:,0], thetaCp[:,1],linestyle=lstyle, color=color, linewidth=lw, label=label)
plt.xlim([0, 179])
plt.legend()
plt.xlabel(r'Theta $\theta$')
plt.ylabel(r'$C_p$')
plt.grid()
plt.title(r'$C_p$ distribution [grid1, new BC/new Code]')
plt.tight_layout()
###Output
_____no_output_____ |
BackgroundInjections/SimInject5/WN_only_BI_v1.ipynb | ###Markdown
Names and Directories
###Code
current_path = os.getcwd()
splt_path = current_path.split("/")
top_path_idx = splt_path.index('nanograv')
top_dir = "/".join(splt_path[0:top_path_idx+1])
background_injection_dir = top_dir + '/NANOGrav/BackgroundInjections'
pta_sim_dir = top_dir + '/pta_sim/pta_sim'
runname = '/simGWB_1'
#Where the everything should be saved to (chains,etc.)
simdir = current_path + '/SimRuns'
outdir = simdir + runname
if os.path.exists(simdir) == False:
os.mkdir(simdir)
if os.path.exists(outdir) == False:
os.mkdir(outdir)
#The pulsars
psrs_wn_only_dir = background_injection_dir + '/FakePTA/'
#noise11yr_path = background_injection_dir + '/nano11/noisefiles_new/'
#psrlist11yr_path = background_injection_dir + '/nano11/psrlist_Tg3yr.txt'
###Output
_____no_output_____
###Markdown
Load Jeff's sim_gw function from pta_sim
###Code
sys.path.insert(0,pta_sim_dir)
import sim_gw as SG
###Output
_____no_output_____
###Markdown
Get par and tim files
###Code
parfiles = sorted(glob.glob(psrs_wn_only_dir+'*.par'))
timfiles = sorted(glob.glob(psrs_wn_only_dir+'*.tim'))
###Output
_____no_output_____
###Markdown
Instantiate a "Simulation class"
###Code
sim1 = SG.Simulation(parfiles,timfiles)
###Output
PSR J2317+1439 loaded.
###Markdown
Inject 2 backgrounds
###Code
background_amp_1 = 1.3e-15
background_amp_2 = 5.0e-15
background_gamma_1 = 13./3.
background_gamma_2 = 7./3.
background_seed_1 = 1986
background_seed_2 = 1667
#LT.createGWB(sim1.libs_psrs, A_gwb, gamma_gw, seed=seed)
LT.createGWB(sim1.libs_psrs,background_amp_1,background_gamma_1,seed=background_seed_1)
LT.createGWB(sim1.libs_psrs,background_amp_2,background_gamma_2,seed=background_seed_2,noCorr=True)
#sim1.createGWB(background_amp_1,gamma_gw=background_gamma_1,seed=background_seed_1)
#sim1.createGWB(background_amp_2,gamma_gw=background_gamma_2,seed=background_seed_2)
injection_parameters = {}
injection_parameters['Background_1'] = {'log_10_amp':np.log10(background_amp_1),\
'gamma':background_gamma_1,\
'seed':background_seed_1}
injection_parameters['Background_2'] = {'log_10_amp':np.log10(background_amp_2),\
'gamma':background_gamma_2,\
'seed':background_seed_2}
print(injection_parameters['Background_1'])
print(injection_parameters['Background_2'])
injection_parameters = [['background_amp_1 = %e' %background_amp_1,\
'background_gamma_1 = %f' %background_gamma_1,\
'background_seed_1 = %i' %background_seed_1]]
injection_parameters.append(['background_amp_2 = %e' %background_amp_2,\
'background_gamma_2 = %f' %background_gamma_2,\
'background_seed_2 = %i' %background_seed_2])
print(injection_parameters)
###Output
['background_amp_1 = 1.300000e-15', 'background_gamma_1 = 4.333333', 'background_seed_1 = 1986', ['background_amp_2 = 5.000000e-15', 'background_gamma_2 = 2.333333', 'background_seed_2 = 1667']]
###Markdown
Get pulsars as enterprise pulsars
###Code
sim1.init_ePulsars()
###Output
WARNING: enterprise.pulsar: WARNING: Could not find pulsar distance for PSR J0218+4232. Setting value to 1 with 20% uncertainty.
WARNING: enterprise.pulsar: WARNING: Could not find pulsar distance for PSR J0621+1002. Setting value to 1 with 20% uncertainty.
WARNING: enterprise.pulsar: WARNING: Could not find pulsar distance for PSR J0751+1807. Setting value to 1 with 20% uncertainty.
WARNING: enterprise.pulsar: WARNING: Could not find pulsar distance for PSR J0900-3144. Setting value to 1 with 20% uncertainty.
WARNING: enterprise.pulsar: WARNING: Could not find pulsar distance for PSR J1738+0333. Setting value to 1 with 20% uncertainty.
WARNING: enterprise.pulsar: WARNING: Could not find pulsar distance for PSR J1741+1351. Setting value to 1 with 20% uncertainty.
WARNING: enterprise.pulsar: WARNING: Could not find pulsar distance for PSR J1751-2857. Setting value to 1 with 20% uncertainty.
WARNING: enterprise.pulsar: WARNING: Could not find pulsar distance for PSR J1853+1303. Setting value to 1 with 20% uncertainty.
WARNING: enterprise.pulsar: WARNING: Could not find pulsar distance for PSR J1955+2908. Setting value to 1 with 20% uncertainty.
WARNING: enterprise.pulsar: WARNING: Could not find pulsar distance for PSR J2019+2425. Setting value to 1 with 20% uncertainty.
###Markdown
Use Simple 2 GWB model to instantiate enterprise PTA
###Code
background_gammas = [background_gamma_1, background_gamma_2]
pta1 = SG.model_simple_multiple_gwbs(sim1.psrs,gammas=background_gammas)
#pta1 = SG.model_simple_multiple_gwbs(sim1.psrs,gammas=[background_gamma_2])
###Output
_____no_output_____
###Markdown
Save params for plotting
###Code
with open(outdir + '/parameters.json', 'w') as fp:
json.dump(pta1.param_names, fp)
###Output
_____no_output_____
###Markdown
Set up sampler and initial samples
###Code
#Pick random initial sampling
xs1 = {par.name: par.sample() for par in pta1.params}
# dimension of parameter space
ndim1 = len(xs1)
# initial jump covariance matrix
cov1 = np.diag(np.ones(ndim1) * 0.01**2)
groups1 = model_utils.get_parameter_groups(pta1)
groups1.append([ndim1-2,ndim1-1])
# intialize sampler
sampler = ptmcmc(ndim1, pta1.get_lnlikelihood, pta1.get_lnprior, cov1, groups=groups1, outDir = outdir,resume=False)
###Output
_____no_output_____
###Markdown
Sample!
###Code
# sampler for N steps
N = int(1e5)
x0 = np.hstack(p.sample() for p in pta1.params)
sampler.sample(x0, N, SCAMweight=30, AMweight=15, DEweight=50)
###Output
Finished 10.00 percent in 152.708713 s Acceptance rate = 0.11629Adding DE jump with weight 50
Finished 99.00 percent in 1558.130296 s Acceptance rate = 0.262737
Run Complete
|
labs/FFT/HLS/FFT.ipynb | ###Markdown
FFT TESTBENCHThis notebook takes two inputs (real and imaginary) and gives the real and imaginary parts of the FFT outputs using AXI4. The result is then compared with a software (NumPy) version of the FFT.
###Code
from pynq import Overlay
import numpy as np
from pynq import Xlnk
from pynq.lib import dma
from scipy.linalg import dft
import matplotlib.pyplot as plt
import time
ol=Overlay('fft.bit')
NUM_SAMPLES = 1024
real_error=np.zeros(NUM_SAMPLES)
imag_error=np.zeros(NUM_SAMPLES)
ind=np.arange(NUM_SAMPLES)
real_rmse=np.zeros(NUM_SAMPLES)
imag_rmse=np.zeros(NUM_SAMPLES)
xlnk = Xlnk()
in_r = xlnk.cma_array(shape=(NUM_SAMPLES,), dtype=np.float32)
in_i = xlnk.cma_array(shape=(NUM_SAMPLES,), dtype=np.float32)
out_r = xlnk.cma_array(shape=(NUM_SAMPLES,), dtype=np.float32)
out_i = xlnk.cma_array(shape=(NUM_SAMPLES,), dtype=np.float32)
a = [i for i in range(NUM_SAMPLES)]
a=np.cos(a)
real=a.real # Change input real and imaginary value here
img=a.imag
np.copyto(in_r, real)
np.copyto(in_i, img)
fft_ip = ol.hls_fft_float_0
fft_ip.write(0x10,in_r.physical_address)
fft_ip.write(0x18,in_i.physical_address)
fft_ip.write(0x20,out_r.physical_address)
fft_ip.write(0x28,out_i.physical_address)
v=time.time()
fft_ip.write(0x00,1)
print(time.time()-v)
###Output
0.0008766651153564453
###Markdown
Verifying Functionality
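(The cell below checks the hardware result against NumPy's FFT element by element. For reference, the same RMSE could be computed in a vectorised way — a sketch of an equivalent check that could be run once the cell below has defined `golden_op`:)

```python
import numpy as np
# vectorised alternative to the per-element error loops below
real_rmse = np.sqrt(np.mean((np.array(out_r) - golden_op.real) ** 2))
imag_rmse = np.sqrt(np.mean((np.array(out_i) - golden_op.imag) ** 2))
```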
###Code
c=time.time()
golden_op=np.fft.fft(a)
print(time.time()-c)
for i in range(NUM_SAMPLES):
real_error[i]="{0:.6f}".format(abs(out_r[i]-golden_op.real[i]))
imag_error[i]="{0:.6f}".format(abs(out_i[i]-golden_op.imag[i]))
sum_sq_real=0
sum_sq_imag=0
for i in range(NUM_SAMPLES):
sum_sq_real =sum_sq_real+(real_error[i]*real_error[i])
real_rmse = np.sqrt(sum_sq_real / (i+1))
sum_sq_imag =sum_sq_imag+(imag_error[i]*imag_error[i])
imag_rmse = np.sqrt(sum_sq_imag / (i+1))
print("Real Part RMSE: ", real_rmse, "Imaginary Part RMSE:", imag_rmse)
if real_rmse<0.001 and imag_rmse<0.001:
print("PASS")
else:
print("FAIL")
###Output
Real Part RMSE: 2.04459558196e-05 Imaginary Part RMSE: 1.80089224848e-05
PASS
###Markdown
Displaying Error and Output
###Code
plt.figure(figsize=(10, 5))
plt.subplot(1,2,1)
plt.bar(ind,real_error)
plt.title("Real Part Error")
plt.xlabel("Index")
plt.ylabel("Error")
#plt.xticks(ind)
plt.tight_layout()
plt.subplot(1,2,2)
plt.bar(ind,imag_error)
plt.title("Imaginary Part Error")
plt.xlabel("Index")
plt.ylabel("Error")
#plt.xticks(ind)
plt.tight_layout()
freq=np.fft.fftfreq(1024)
plt.figure(figsize=(7, 4))
plt.subplot(1,2,1)
plt.plot(freq,out_r,label='real')
plt.plot(freq,out_i,label='imag')
plt.title("1024-DFT")
plt.xlabel("Frequency")
plt.ylabel("DFT real and imaginary data")
plt.legend()
plt.tight_layout()
plt.subplot(1,2,2)
plt.plot(freq,golden_op.real,label='real')
plt.plot(freq,golden_op.imag,label='imag')
plt.title("1024-FFT -Numpy")
plt.xlabel("Frequency")
plt.ylabel("FFT real and imaginary data")
plt.legend()
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
Python-Programming/Python-3-Bootcamp/16-Bonus Material - Introduction to GUIs/.ipynb_checkpoints/05-Widget Styling-checkpoint.ipynb | ###Markdown
Widget StylingIn this lecture we will learn about the various ways to style widgets! `style` vs. `layout`There are two ways to change the appearance of widgets in the browser. The first is through the `layout` attribute which exposes layout-related CSS properties for the top-level DOM element of widgets, such as margins and positioning. The second is through the `style` attribute which exposes non-layout related attributes like button color and font weight. While `layout` is general to all widgets and containers of widgets, `style` offers tools specific to each type of widget.Thorough understanding of all that `layout` has to offer requires knowledge of front-end web development, including HTML and CSS. This section provides a brief overview of things that can be adjusted using `layout`. However, the full set of tools are provided in the separate notebook **Advanced Widget Styling with Layout**.To learn more about web development, including HTML and CSS, check out the course [Python and Django Full Stack Web Developer Bootcamp](https://www.udemy.com/python-and-django-full-stack-web-developer-bootcamp/)Basic styling is more intuitive as it relates directly to each type of widget. Here we provide a set of helpful examples of the `style` attribute. The `layout` attributeJupyter interactive widgets have a `layout` attribute exposing a number of CSS properties that impact how widgets are laid out. These properties map to the values of the CSS properties of the same name (underscores being replaced with dashes), applied to the top DOM elements of the corresponding widget. Sizes* `height`* `width`* `max_height`* `max_width`* `min_height`* `min_width` Display* `visibility`* `display`* `overflow`* `overflow_x`* `overflow_y` Box model* `border`* `margin`* `padding` Positioning* `top`* `left`* `bottom`* `right` Flexbox* `order`* `flex_flow`* `align_items`* `flex`* `align_self`* `align_content`* `justify_content` A quick example of `layout`We've already seen what a slider looks like without any layout adjustments:
###Code
import ipywidgets as widgets
from IPython.display import display
w = widgets.IntSlider()
display(w)
###Output
_____no_output_____
###Markdown
Let's say we wanted to change two of the properties of this widget: `margin` and `height`. We want to center the slider in the output area and increase its height. This can be done by adding `layout` attributes to **w**
###Code
w.layout.margin = 'auto'
w.layout.height = '75px'
###Output
_____no_output_____
###Markdown
Notice that the slider changed positions on the page immediately!Layout settings can be passed from one widget to another widget of the same type. Let's first create a new IntSlider:
###Code
x = widgets.IntSlider(value=15,description='New slider')
display(x)
###Output
_____no_output_____
###Markdown
Now assign **w**'s layout settings to **x**:
###Code
x.layout = w.layout
###Output
_____no_output_____
###Markdown
That's it! For a complete set of instructions on using `layout`, visit the **Advanced Widget Styling - Layout** notebook. Predefined stylesBefore we investigate the `style` attribute, it should be noted that many widgets offer a list of pre-defined styles that can be passed as arguments during creation.For example, the `Button` widget has a `button_style` attribute that may take 5 different values:* `'primary'`* `'success'`* `'info'`* `'warning'`* `'danger'`besides the default empty string `''`.
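(The next cell shows two of these values; to preview all five at once, a quick sketch using `display`:)

```python
from IPython.display import display
import ipywidgets as widgets

for style in ['primary', 'success', 'info', 'warning', 'danger']:
    display(widgets.Button(description=style.capitalize() + ' Button', button_style=style))
```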
###Code
import ipywidgets as widgets
widgets.Button(description='Ordinary Button', button_style='')
widgets.Button(description='Danger Button', button_style='danger')
###Output
_____no_output_____
###Markdown
The `style` attribute While the `layout` attribute only exposes layout-related CSS properties for the top-level DOM element of widgets, the `style` attribute is used to expose non-layout related styling attributes of widgets. However, the properties of the `style` attribute are specific to each widget type.
###Code
b1 = widgets.Button(description='Custom color')
b1.style.button_color = 'lightgreen'
b1
###Output
_____no_output_____
###Markdown
You can get a list of the style attributes for a widget with the `keys` property.
###Code
b1.style.keys
###Output
_____no_output_____
###Markdown
Note that `widgets.Button().style.keys` also works. Just like the `layout` attribute, widget styles can be assigned to other widgets.
###Code
b2 = widgets.Button()
b2.style = b1.style
b2
###Output
_____no_output_____
###Markdown
Note that only the style was picked up by **b2**, not any other parameters like `description`. Widget styling attributes are specific to each widget type.
###Code
s1 = widgets.IntSlider(description='Blue handle')
s1.style.handle_color = 'lightblue'
s1
###Output
_____no_output_____ |
documentation/source/usersGuide/usersGuide_21_sorting.ipynb | ###Markdown
User's Guide, Chapter 21: Ordering and Sorting of Stream Elements Inside a stream, each object has a position and thus an order in the :class:`~music21.stream.Stream`. Up until now we've seen two different ways to describe the position of an element (such as a :class:`~music21.note.Note`) in a stream. The first is the index of the object in the stream (a number in square brackets) and the second is the `offset`.Let's take a simple Stream:
###Code
from music21 import *
s = stream.Measure()
ts1 = meter.TimeSignature('3/4')
s.insert(0, ts1)
s.insert(0, key.KeySignature(2))
s.insert(0, clef.TrebleClef())
s.insert(0, note.Note('C#4'))
s.insert(1, note.Note('D#4'))
###Output
_____no_output_____
###Markdown
We have inserted three elements that take up no space (a TimeSignature, KeySignature, and a Clef) and two elements that take up 1.0 quarter notes (the default length of a Note object). You might notice that the signatures and clef were inserted in a strange order. Don't worry, we'll get to that in a bit. In addition to inserting elements at particular places in a Stream, we can append an element to the end of the Stream:
###Code
e = note.Note('E4')
s.append(e)
s.show()
###Output
_____no_output_____
###Markdown
Now we're pretty sure that the C will be the fourth element in the Stream, which is referred to as `[3]` and the D will be the fifth, or `[4]`
###Code
s[3]
s[4]
###Output
_____no_output_____
###Markdown
The E will be `[5]` but we can also get it by saying it's the last element, or `[-1]`
###Code
s[-1]
###Output
_____no_output_____
###Markdown
The other way to describe the position of an element is by its offset.
###Code
e.offset
###Output
_____no_output_____
###Markdown
You may recall from previous discussions that the `offset` of an element is its position within the last referenced Stream it was attached to. Thus, if you want to know the offset of an element within a particular Stream, it is always safer to use one the following methods: `.getOffsetBySite(stream)`:
###Code
e.getOffsetBySite(s)
###Output
_____no_output_____
###Markdown
Or to call `stream.elementOffset(element)`. This is a little bit faster so it's what we use internally. It will always give the same result if `e` is in `s`, but if `e` might not be in `s` but be derived from an element in `s` then `.getOffsetBySite` will trace the `.derivation.chain()` to find it.
###Code
s.elementOffset(e)
###Output
_____no_output_____
###Markdown
If you want to find all the elements at a particular offset, call `.getElementsByOffset` on the Stream. Note that if any elements are found it returns a `StreamIterator`, so you will need to use the square bracket index to reference it:
###Code
s.getElementsByOffset(2.0)[0]
###Output
_____no_output_____
###Markdown
This description might seem a bit obnoxious, but it is necessary because you can get multiple elements back, such as with an offset range:
###Code
y = s.getElementsByOffset(1.0, 3.0)
(y[0], y[1])
###Output
_____no_output_____
###Markdown
At this point, you might think that you know everything about how elements are positioned in a Stream, but there are a few more points that are important and point to the power of `music21`. Let's show the Stream as a text file:
###Code
s.show('text')
###Output
{0.0} <music21.clef.TrebleClef>
{0.0} <music21.key.KeySignature of 2 sharps>
{0.0} <music21.meter.TimeSignature 3/4>
{0.0} <music21.note.Note C#>
{1.0} <music21.note.Note D#>
{2.0} <music21.note.Note E>
###Markdown
Something has happened: the `TrebleClef` object which was inserted third has now become the first element of the Stream. The `KeySignature` and `TimeSignature` objects have also switched position. Now all three are in the order we'd expect to see them in a score:
###Code
(s[0], s[1], s[2])
###Output
_____no_output_____
###Markdown
Even though they have the same `.offset`, each of these objects knows its place in the Stream, because of something called `.classSortOrder`. Each Class and each instance of the class has a default sort order so that if it is at the same offset as a member of a different class, one will sort before the other:
###Code
(s[0].classSortOrder, s[1].classSortOrder, s[2].classSortOrder)
###Output
_____no_output_____
###Markdown
In fact, `classSortOrder` is present not just on objects but on classes:
###Code
(clef.Clef.classSortOrder, key.KeySignature.classSortOrder, meter.TimeSignature.classSortOrder)
###Output
_____no_output_____
###Markdown
Notes have a higher `classSortOrder` and thus sort even later, hence why the C appears after the clefs and signatures:
###Code
(note.Note.classSortOrder, base.Music21Object.classSortOrder)
###Output
_____no_output_____
###Markdown
There are a few elements that sort even lower than Clefs because they usually refer to the area of the composition that precedes the clef:
###Code
(bar.Barline.classSortOrder, instrument.Instrument.classSortOrder, metadata.Metadata.classSortOrder)
###Output
_____no_output_____
###Markdown
The numbers are actually completely arbitrary (it could be -6.432 instead of -5), only the order of numbers (-25 is less than -5) matters.If we put a second TimeSignature into the stream at offset 0 (like some pieces do with multiple interpretations for meter), it will have a tie for its .offset and .classSortOrder. Which one will come first? It's the first one inserted:
###Code
ts2 = meter.TimeSignature('6/8')
s.insert(0, ts2)
s.show('text')
###Output
{0.0} <music21.clef.TrebleClef>
{0.0} <music21.key.KeySignature of 2 sharps>
{0.0} <music21.meter.TimeSignature 3/4>
{0.0} <music21.meter.TimeSignature 6/8>
{0.0} <music21.note.Note C#>
{1.0} <music21.note.Note D#>
{2.0} <music21.note.Note E>
###Markdown
If we wanted to make sure that the two TimeSignatures appeared in a particular order regardless of when they were inserted, there is one way to do so: set the `.priority` attribute on the TimeSignature. Every Music21Object has a `priority` attribute, and the default is `0`. Negative numbers make an element sort before a default element. Positive numbers sort after. Let us insert two more notes into the stream, at offsets 1 and 2, but we'll make the note at offset 1 come before the D and the one at offset 2 come after the E, so we have a chromatic scale fragment:
###Code
d = note.Note('D')
d.priority = -10
eis = note.Note('E#')
eis.priority = 10
s.insert(1.0, d)
s.insert(2.0, eis)
s.show('text')
###Output
{0.0} <music21.clef.TrebleClef>
{0.0} <music21.key.KeySignature of 2 sharps>
{0.0} <music21.meter.TimeSignature 3/4>
{0.0} <music21.meter.TimeSignature 6/8>
{0.0} <music21.note.Note C#>
{1.0} <music21.note.Note D>
{1.0} <music21.note.Note D#>
{2.0} <music21.note.Note E>
{2.0} <music21.note.Note E#>
###Markdown
Some things to note about priority:(1) Priority changes immediately affect the sorting of the Stream (in v.3 or above). Before that if you wanted to change the priority of an object, you'd need to remove it and then reinsert it.
###Code
d.priority = 20
s.remove(d)
s.insert(1.0, d)
s.show('text')
###Output
{0.0} <music21.clef.TrebleClef>
{0.0} <music21.key.KeySignature of 2 sharps>
{0.0} <music21.meter.TimeSignature 3/4>
{0.0} <music21.meter.TimeSignature 6/8>
{0.0} <music21.note.Note C#>
{1.0} <music21.note.Note D#>
{1.0} <music21.note.Note D>
{2.0} <music21.note.Note E>
{2.0} <music21.note.Note E#>
###Markdown
(2) Priority is currently a global property that affects all Streams that an object is in. This is behavior that may change in later versions.(3) Priority overrides `classSortOrder`. So if we wanted to move the 6/8 TimeSignature `(ts2)` to sort before the 3/4 `(ts1)`, it is not enough to shift the priority of `ts2` and reinsert it:
###Code
ts2.priority = -5
s.remove(ts2)
s.insert(0.0, ts2)
s.show('text')
###Output
{0.0} <music21.meter.TimeSignature 6/8>
{0.0} <music21.clef.TrebleClef>
{0.0} <music21.key.KeySignature of 2 sharps>
{0.0} <music21.meter.TimeSignature 3/4>
{0.0} <music21.note.Note C#>
{1.0} <music21.note.Note D#>
{1.0} <music21.note.Note D>
{2.0} <music21.note.Note E>
{2.0} <music21.note.Note E#>
###Markdown
Now the 6/8 TimeSignature appears before the clef and key signature. A fix for this would involve assigning some priority to each object connected to its sort order:
###Code
for el in s.getElementsByOffset(0.0):
el.priority = el.classSortOrder
ts2.priority = 3 # between KeySignature (priority = 2) and TimeSignature (priority = 4)
s.show('text')
###Output
{0.0} <music21.clef.TrebleClef>
{0.0} <music21.key.KeySignature of 2 sharps>
{0.0} <music21.meter.TimeSignature 6/8>
{0.0} <music21.meter.TimeSignature 3/4>
{0.0} <music21.note.Note C#>
{1.0} <music21.note.Note D#>
{1.0} <music21.note.Note D>
{2.0} <music21.note.Note E>
{2.0} <music21.note.Note E#>
###Markdown
This is enough about sorting for most of purposes, so feel free to move on to :ref:`Chapter 22: Graphing Music21 Streams `, but for anyone who wants to go into more depth, there's a "behind the scenes" tour below. Advanced Sorting and the `sortTuple` How does sorting actually work? `Music21` uses six attributes to determine which elements go before or after each other. The six-element tuple that determines sort order can be accessed on any `Music21Object` by calling the method `.sortTuple()`:
###Code
#_DOCS_SHOW ts1.sortTuple()
ts1.sortTuple().modify(insertIndex=0) #_DOCS_HIDE
#_DOCS_SHOW ts2.sortTuple()
ts2.sortTuple().modify(insertIndex=118) #_DOCS_HIDE
###Output
_____no_output_____
###Markdown
A :class:`~music21.sorting.SortTuple` is a lightweight class derived from the `NamedTuple` object that can be compared using the `>` and `<` operators. Each of the elements is compared from left to right; if there is a tie on one attribute then the next one becomes important:
###Code
ts1.sortTuple() > ts2.sortTuple()
###Output
_____no_output_____
###Markdown
`SortTuples` live in their own module :ref:`moduleSorting` and have a few cool features. Since the main point of comparison is offset, SortTuples can compare against plain integers or floats or Fractions by comparing their offsets (and `atEnd`, which we'll get to in a second).
###Code
st = sorting.SortTuple(atEnd=0, offset=10.0, priority=1, classSortOrder=4, isNotGrace=1, insertIndex=5)
st > 8.0
###Output
_____no_output_____
###Markdown
Because they can be unwieldy to display, `SortTuple`s have a `.shortRepr()` call which summarizes the main information in them: the offset, the priority, the classSortOrder, and the insertIndex.
###Code
st.shortRepr()
###Output
_____no_output_____
###Markdown
In this case, the third element, priority, decides the order. The first attribute, atEnd, is 0 for normal elements, and 1 for an element stored at the end of a Stream. Let's add a courtesy KeySignature change at the end of `s`:
###Code
ks2 = key.KeySignature(-3)
s.storeAtEnd(ks2)
#_DOCS_SHOW ks2.sortTuple()
ks2.sortTuple().modify(insertIndex=120) #_DOCS_HIDE
###Output
_____no_output_____
###Markdown
Putting a rightBarline on a Measure has the same effect:
###Code
rb = bar.Barline('double')
s.rightBarline = rb
#_DOCS_SHOW rb.sortTuple()
rb.sortTuple().modify(insertIndex=121) #_DOCS_HIDE
#_DOCS_SHOW rb.sortTuple().shortRepr()
rb.sortTuple().modify(insertIndex=121).shortRepr() #_DOCS_HIDE
###Output
_____no_output_____
###Markdown
The next three attributes (offset, priority, classSortOrder) have been described. `isNotGrace` is 0 if the note is a grace note, 1 (default) if it is any other note or not a note. Grace notes sort before other notes at the same offset and priority. The last attribute is an ever increasing index of the number of elements that have had SiteReferences added to it. (Advanced topic: the order that elements were inserted is used in order to make sure that elements do not shift around willy-nilly, but it's not something to use often or to rely on for complex calculations. For this reason, we have not exposed it as something easy to get, but if you need to access it, here's the formula:)
###Code
(ts1.sites.siteDict[id(s)].globalSiteIndex, ts2.sites.siteDict[id(s)].globalSiteIndex)
###Output
_____no_output_____
###Markdown
Streams have an attribute to cache whether they have been sorted, so that `.sort()` only needs to be called when a change has been made that alters the sort order.
###Code
s.isSorted
###Output
_____no_output_____
###Markdown
Calling a command that needs a particular order (`.show()`, `.getElementsByClass()`, `[x]`, etc.) automatically sorts the Stream:
###Code
s[0]
s.isSorted
###Output
_____no_output_____
###Markdown
There is one more way that elements in a Stream can be returned, for advanceduses only. Each Stream has an `autoSort` property. By default it is On. Butif you turn it off, then elements are returned in the order they are addedregardless of offset, priority, or classSortOrder. Here is an example of that:
###Code
s.autoSort = False
ts1.setOffsetBySite(s, 20.0)
s.show('text')
###Output
{0.0} <music21.clef.TrebleClef>
{0.0} <music21.key.KeySignature of 2 sharps>
{0.0} <music21.meter.TimeSignature 6/8>
{20.0} <music21.meter.TimeSignature 3/4>
{0.0} <music21.note.Note C#>
{1.0} <music21.note.Note D#>
{1.0} <music21.note.Note D>
{2.0} <music21.note.Note E>
{2.0} <music21.note.Note E#>
{20.0} <music21.bar.Barline type=double>
{20.0} <music21.key.KeySignature of 3 flats>
###Markdown
The setting `autoSort = False` can speed up some operations if you already know that all the notes are in order. Inside the stream/core.py module you’ll see even faster operations such as `coreInsert()` and `coreAppend()`, which we use when translating from one format to another. After running a `coreInsert()` operation, the Stream is in an unusable state until `coreElementsChanged()` is run, which lets the Stream ruminate over its new state as if a normal `insert()` or `append()` operation had been done. Mixing `coreInsert()` and `coreAppend()` commands without running `coreElementsChanged()` is likely to have disastrous consequences. Use one or the other; a short sketch of this pattern follows below. If you want to get back to the sorted state, just turn `autoSort = True`:
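(A minimal sketch of that low-level pattern — purely illustrative, and assuming a music21 version where `coreInsert` and `coreElementsChanged` keep the behaviour described above:)

```python
fast = stream.Stream()
for i in range(4):
    fast.coreInsert(float(i), note.Note('C4'))  # skips the usual sorting and bookkeeping
fast.coreElementsChanged()  # tell the Stream that its contents changed
fast.show('text')
```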
###Code
s.autoSort = True
s.isSorted = False
s.show('text')
###Output
{0.0} <music21.clef.TrebleClef>
{0.0} <music21.key.KeySignature of 2 sharps>
{0.0} <music21.meter.TimeSignature 6/8>
{0.0} <music21.note.Note C#>
{1.0} <music21.note.Note D#>
{1.0} <music21.note.Note D>
{2.0} <music21.note.Note E>
{2.0} <music21.note.Note E#>
{20.0} <music21.meter.TimeSignature 3/4>
{20.0} <music21.bar.Barline type=double>
{20.0} <music21.key.KeySignature of 3 flats>
###Markdown
Note that this is a destructive operation. Turning `autoSort` back to `False` won’t get you back the earlier order:
###Code
s.autoSort = False
s.show('text')
###Output
{0.0} <music21.clef.TrebleClef>
{0.0} <music21.key.KeySignature of 2 sharps>
{0.0} <music21.meter.TimeSignature 6/8>
{0.0} <music21.note.Note C#>
{1.0} <music21.note.Note D#>
{1.0} <music21.note.Note D>
{2.0} <music21.note.Note E>
{2.0} <music21.note.Note E#>
{20.0} <music21.meter.TimeSignature 3/4>
{20.0} <music21.bar.Barline type=double>
{20.0} <music21.key.KeySignature of 3 flats>
|
CVND_Exercises/3_6_Matrices_and_transformation_of_state/4_matrix_addition.ipynb | ###Markdown
Matrix AdditionIn this exercise, you will write a function that accepts two matrices and outputs their sum. Think about how you could do this with a for loop nested inside another for loop.
###Code
### TODO: Write a function called matrix_addition that
### calculate the sum of two matrices
###
### INPUTS:
### matrix A _ an m x n matrix
### matrix B _ an m x n matrix
###
### OUTPUT:
### matrixSum _ sum of matrix A + matrix B
def matrix_addition(matrixA, matrixB):
# initialize matrix to hold the results
matrixSum = []
# matrix to hold a row for appending sums of each element
# row = []
# TODO: write a for loop within a for loop to iterate over
# the matrices
# TODO: As you iterate through the matrices, add matching
# elements and append the sum to the row variable
# TODO: When a row is filled, append the row to matrixSum.
# Then reinitialize row as an empty list
for i in range(len(matrixA)):
row = []
for j in range(len(matrixA[i])):
row.append(matrixA[i][j] + matrixB[i][j])
matrixSum.append(row)
return matrixSum
### When you run this code cell, your matrix addition function
### will run on the A and B matrix.
A = [
[2,5,1],
[6,9,7.4],
[2,1,1],
[8,5,3],
[2,1,6],
[5,3,1]
]
B = [
[7, 19, 5.1],
[6.5,9.2,7.4],
[2.8,1.5,12],
[8,5,3],
[2,1,6],
[2,33,1]
]
matrix_addition(A, B)
###Output
_____no_output_____
###Markdown
Vectors versus MatricesWhat happens if you run the cell below? Here you are adding two vectors together. Does your code still work?
###Code
matrix_addition([4, 2, 1], [5, 2, 7])
###Output
_____no_output_____
###Markdown
Why did this error occur? Because your code assumes that a matrix is a two-dimensional grid represented by a list of lists. But a horizontal vector, which can also be considered a matrix, is a one-dimensional grid represented by a single list.What happens if you store a vector as a list of lists like [[4, 2, 1]] and [[5, 2, 7]]? Does your function work? Run the code cell below to find out.
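(To see concretely where the flat-list call breaks — a purely illustrative sketch, separate from the exercise cells:)

```python
flat = [4, 2, 1]
flat[0]          # 4 -- an int, not a row
# len(flat[0])   # would raise TypeError: object of type 'int' has no len()

nested = [[4, 2, 1]]
len(nested[0])   # 3 -- nested[0] is a row, so the nested loops work
```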
###Code
matrix_addition([[4, 2, 1]], [[5, 2, 7]])
###Output
_____no_output_____
###Markdown
Test your CodeRun the cell below. If there is no output, then your results are as expected.
###Code
assert matrix_addition([
[1, 2, 3]],
[[4, 5, 6]]) == [[5, 7, 9]]
assert matrix_addition([
[4]], [
[5]]) == [[9]]
assert matrix_addition([[1, 2, 3],
[4, 5, 6]],
[[7, 8, 9],
[10, 11, 12]]) == [[8, 10, 12],
[14, 16, 18]]
###Output
_____no_output_____ |
RS-BP/python/Superconductors-Solution.ipynb | ###Markdown
Exercise 2: Hypothesis testingIn this exercise we will test if 10 different measurements of the same quantity are well-described by a single average value or not. In the lab ten nominally identical samples of superconducting material are made and tested using the same procedure. Due to factors that are not under control in the experiment it could be that slightly different materials are being produced. For each sample of superconducting material the critical temperature $T_c$ is determined using the same setup and same criterion to identify the transition. The uncertainty in the transition temperature introduced by this method is $\pm$ 0.2 K.The transition temperature for the 10 samples is found to be:
###Code
import numpy as np
Tc=np.array([10.2, 10.4, 9.8, 10.5, 9.9, 9.8, 10.3, 10.1, 10.3, 9.9])
###Output
_____no_output_____
###Markdown
Plot the dataIt is always a good idea to plot the data before you start any calculation
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# Some default styling for the figures; best solution is once at the beginning of the code
# See https://matplotlib.org/3.1.3/tutorials/introductory/customizing.html
# These settings assume that you have used import matplotlib.pyplot as plt
# Smallest font size is a 10 point font for a 4 inch wide figure.
# font sizes and figure size are scaled by a factor 2 to have a large figure on the screen
SMALL_SIZE = 10*2
MEDIUM_SIZE = 12*2
BIGGER_SIZE = 14*2
plt.rc('font', size=SMALL_SIZE, family='serif') # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE, direction='in') # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE, direction='in') # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
plt.rc('figure', figsize='8, 6') # size of the figure, used to be '4, 3' in inches
# data
x = np.arange(1, 11, 1)
err=np.full((10),0.2)
plt.figure()
plt.errorbar(x, Tc, yerr=err, ls='None', marker='o', markersize=7, capsize=7)
plt.xticks(x) # Simply put a label for each sample
plt.xlabel('Sample number')
plt.yticks((9.0,9.5,10.0,10.5,11.0))
plt.ylabel('Critical Temperature $T_c$ (K)')
plt.ylim(9,11)
plt.show()
###Output
_____no_output_____
###Markdown
The hypothesis is that all measurements of $T_c$ correspond to the same true value, with the differences between measurements explained by measurement errors. a) Calculate the average value of $T_c$ and the error bar
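As a quick cross-check of the expected uncertainty: for $N$ independent measurements with equal error $\sigma$, the error on the mean is

$$\sigma_{\bar{T}_c} = \frac{\sigma}{\sqrt{N}} = \frac{0.2}{\sqrt{10}} \approx 0.063\ \mathrm{K}.$$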
###Code
Tc_avg=np.average(Tc) # With equal error bars the inverse-variance weighted average reduces to the plain mean
Tc_err=np.sqrt(1.0/(10.0/0.2**2)) # Statistical error on the mean: sigma/sqrt(N) with N=10 measurements and sigma=0.2 K
print("The average value of T_c is %6.2f +/- %2.2f K \n" % (Tc_avg,Tc_err))
plt.figure()
plt.errorbar(x, Tc, yerr=err, ls='None', marker='o', markersize=7, capsize=7)
xfit=np.linspace(0,12,100) # I want the fit function to look smooth
plt.plot(xfit,xfit*0.0+Tc_avg, ls='solid', color='red')
plt.xticks(x) # Simply put a label for each sample
plt.xlabel('Sample number')
plt.yticks((9.0,9.5,10.0,10.5,11.0))
plt.ylabel('Critical Temperature $T_c$ (K)')
plt.ylim(9,11)
plt.xlim(0.5,10.5)
plt.show()
###Output
The average value of T_c is 10.12 +/- 0.06 K
###Markdown
This value minimizes the least squares defined by$$\chi^2 = \sum_i (y_i - \bar{y})^2$$and is thus by definition the best fit to the data. b) Calculate this minimum value of $\chi^2$
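A quick check that the mean is indeed the minimizer: setting the derivative with respect to $\bar{y}$ to zero gives

$$\frac{\partial \chi^2}{\partial \bar{y}} = -2\sum_i (y_i - \bar{y}) = 0 \quad\Rightarrow\quad \bar{y} = \frac{1}{N}\sum_i y_i.$$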
###Code
x2=np.sum((Tc-Tc_avg)**2)
print('The minimum value of chi-squared is %4.3f' % x2)
# Let's show that it is indeed a minimum
Ta=np.linspace(9.5,11.0,50)
x2array=np.ones(50)
for i in range(len(Ta)):
x2array[i] = np.sum((Tc-Ta[i])**2)
plt.figure(figsize=(3,3))
plt.plot(Ta,x2array)
plt.plot(Tc_avg,x2, marker='o', markersize=8)
plt.ylabel('$\chi^2$')
plt.xlabel('$T_{avg}$')
plt.ylim(0,2.0)
plt.xlim(9.8,10.4)
plt.xticks((10.0,10.25))
plt.show()
###Output
The minimum value of chi-squared is 0.596
###Markdown
c) Determine the number of degrees of freedom
###Code
# The number of degrees of freedom is equal to the number of datapoints minus the number of fit parameters
# In this case the 'fit' parameter is the average value
dof=len(Tc)-1
print('There are %d degrees of freedom' % dof)
###Output
There are 9 degrees of freedom
###Markdown
d) Given the value of $\chi^2$, do you believe that the samples are indeed identical? Hint: calculate the probability that you find a value of $\chi^2$ smaller than or equal to the value obtained in b) by taking into account the error bar on the measurement and the number of degrees of freedom. First attempt from the plotOne can plot the average and confidence intervals using the error bar on the datapoints of 0.2 K. Note that we should take the error on the measurement and not the standard deviation for the best estimate of the average. One can see that all values are within 2 sigma of the average. However, 6 points are more than 1 standard deviation away from the average and only 4 points are within one standard deviation. This is a little suspicious. We need to quantify this.
###Code
plt.figure()
plt.errorbar(x, Tc, yerr=err, ls='None', marker='o', markersize=7, capsize=7)
xfit=np.linspace(0,12,100) # I want the fit function to look smooth
plt.plot(xfit,xfit*0.0+Tc_avg, ls='solid', color='red')
plt.xticks(x) # Simply put a label for each sample
plt.xlabel('Sample number')
plt.yticks((9.0,9.5,10.0,10.5,11.0))
plt.ylabel('Critical Temperature $T_c$ (K)')
plt.ylim(9,11)
plt.xlim(0.5,10.5)
# Add confidence intervals
nstd = 1.0 # to draw nstd-sigma intervals
fit_up = xfit*0.0+Tc_avg+nstd*0.2
fit_dw = xfit*0.0+Tc_avg-nstd*0.2
plt.fill_between(xfit, fit_up, fit_dw, color='red', alpha=.25)
nstd = 2.0 # to draw nstd-sigma intervals
fit_up = xfit*0.0+Tc_avg+nstd*0.2
fit_dw = xfit*0.0+Tc_avg-nstd*0.2
plt.fill_between(xfit, fit_up, fit_dw, color='red', alpha=.25)
plt.show()
###Output
_____no_output_____
###Markdown
Second attempt with the value of $\chi^2$The problem with the values of $\chi^2$ that we calculated is that they are not yet properly normalized, which makes the absolute value difficult to interpret. What should we compare the value to?We should normalize using the error bar by calculating the normalized $\chi^2$$$\chi_n^2 = \sum_i \frac{(y_i - \bar{y})^2}{\sigma_i^2} = \frac{1}{\sigma^2} \sum_i (y_i - \bar{y})^2 = \frac{\chi^2}{\sigma^2} $$where the second equality uses that the error bar of each data point is identical. We can then compare this normalized value to its expected value: the expectation value of $\chi_n^2$ is equal to the number of degrees of freedom. If we approximate the $\chi^2$ distribution as Gaussian (valid for a large number of degrees of freedom) the variance should be twice the number of degrees of freedom.
###Code
# Normalized chi-squared value
print('The normalized value of the minimum chi-squared is %4.2f \n' % (x2/(0.2**2)))
print('The expected value of the minimum chi-squared is equal to the degree of freedom. DOF = %4.2f \n' % dof)
print('If we approximate the distribution as Gaussian the standard deviation should be %4.2f \n' % np.sqrt(2*dof))
###Output
The normalized value of the minimum chi-squared is 14.90
The expected value of the minimum chi-squared is equal to the degree of freedom. DOF = 9.00
If we approximate the distribution as Gaussian the standard deviation should be 4.24
###Markdown
The normalized minimum $\chi^2$ value is quite a bit higher than the expectation value, but it is only slightly more than a standard deviation away from it. There is no need to worry in this case. Since we are using Python we can actually calculate the probability of finding this value or a higher one using the real $\chi^2$ distribution. For 9 degrees of freedom this distribution is not so Gaussian.
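(Small aside: `scipy.stats` also exposes the survival function directly, which avoids the `1 - cdf` subtraction — a one-line sketch using the numbers from this notebook:)

```python
from scipy.stats import chi2
p_value = chi2.sf(14.9, 9)  # same quantity as 1 - chi2.cdf(14.9, 9)
```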
###Code
from scipy.stats import chi2,norm
mean, var, skew, kurt = chi2.stats(dof, moments='mvsk')
print(mean) # Find the average value of the distribution
print(var) # Find the variance
# The problem is not the expectation value or the variance of the distribution.
# The distribution is not Gaussian, i.e. it has higher-order moments
x = np.linspace(0, 30, 100)
xx = np.linspace((x2/(0.2**2)),30,100)
plt.plot(x, chi2.pdf(x, dof), ls='solid', color='red')
plt.fill_between(xx, xx*0.0, chi2.pdf(xx, dof), color='red', alpha=.25)
plt.plot(x, norm.pdf(x, dof,np.sqrt(2*dof)), ls='dashed', color='blue')
plt.xlim(0,30)
plt.xlabel('Value of $\chi_n^2$')
plt.ylim(0,0.12)
plt.ylabel('Probability density')
plt.yticks((0,0.04,0.08, 0.12))
plt.show()
# We need to calculate the probability corresponding to the red area. This is the probability that the value
# of chi-square is larger than the value we found. This is best done via the cumulative distribution function
print('The probability that the value of chi-squared is larger than %4.1f is %4.3f' %
((x2/0.2**2),(1.0-chi2.cdf(x2/0.2**2, dof))))
###Output
9.0
18.0
|
PA-A01-DataWranglingAndRegression.ipynb | ###Markdown
Data Wrangling and Regression(c) Hochschule Aalen 2020, Matthias Nutz, Martin Heckmann
###Code
import pandas as pd, numpy as np
###Output
_____no_output_____ |
06_Linear_algebra_Solutions.ipynb | ###Markdown
Linear algebra
###Code
import numpy as np
np.__version__
###Output
_____no_output_____
###Markdown
Matrix and vector products Q1. Predict the results of the following code.
###Code
x = [1,2]
y = [[4, 1], [2, 2]]
print np.dot(x, y)
print np.dot(y, x)
print np.matmul(x, y)
print np.inner(x, y)
print np.inner(y, x)
###Output
[8 5]
[6 6]
[8 5]
[6 6]
[6 6]
###Markdown
Q2. Predict the results of the following code.
###Code
x = [[1, 0], [0, 1]]
y = [[4, 1], [2, 2], [1, 1]]
print np.dot(y, x)
print np.matmul(y, x)
###Output
[[4 1]
[2 2]
[1 1]]
[[4 1]
[2 2]
[1 1]]
###Markdown
Q3. Predict the results of the following code.
###Code
x = np.array([[1, 4], [5, 6]])
y = np.array([[4, 1], [2, 2]])
print np.vdot(x, y)
print np.vdot(y, x)
print np.dot(x.flatten(), y.flatten())
print np.inner(x.flatten(), y.flatten())
print (x*y).sum()
###Output
30
30
30
30
30
###Markdown
Q4. Predict the results of the following code.
###Code
x = np.array(['a', 'b'], dtype=object)
y = np.array([1, 2])
print np.inner(x, y)
print np.inner(y, x)
print np.outer(x, y)
print np.outer(y, x)
###Output
abb
abb
[['a' 'aa']
['b' 'bb']]
[['a' 'b']
['aa' 'bb']]
###Markdown
Decompositions Q5. Get the lower-trianglular `L` in the Cholesky decomposition of x and verify it.
###Code
x = np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]], dtype=np.int32)
L = np.linalg.cholesky(x)
print L
assert np.array_equal(np.dot(L, L.T.conjugate()), x)
###Output
[[ 2. 0. 0.]
[ 6. 1. 0.]
[-8. 5. 3.]]
###Markdown
Q6. Compute the qr factorization of x and verify it.
###Code
x = np.array([[12, -51, 4], [6, 167, -68], [-4, 24, -41]], dtype=np.float32)
q, r = np.linalg.qr(x)
print "q=\n", q, "\nr=\n", r
assert np.allclose(np.dot(q, r), x)
###Output
q=
[[-0.85714287 0.39428571 0.33142856]
[-0.42857143 -0.90285712 -0.03428571]
[ 0.2857143 -0.17142858 0.94285715]]
r=
[[ -14. -21. 14.]
[ 0. -175. 70.]
[ 0. 0. -35.]]
###Markdown
Q7. Factor x by Singular Value Decomposition and verify it.
###Code
x = np.array([[1, 0, 0, 0, 2], [0, 0, 3, 0, 0], [0, 0, 0, 0, 0], [0, 2, 0, 0, 0]], dtype=np.float32)
U, s, V = np.linalg.svd(x, full_matrices=False)
print "U=\n", U, "\ns=\n", s, "\nV=\n", V
assert np.allclose(np.dot(U, np.dot(np.diag(s), V)), x)
###Output
U=
[[ 0. 1. 0. 0.]
[ 1. 0. 0. 0.]
[ 0. 0. 0. -1.]
[ 0. 0. 1. 0.]]
s=
[ 3. 2.23606801 2. 0. ]
V=
[[ 1. 0. 0.]
[ 0. 1. 0.]
[ 0. 0. 1.]]
###Markdown
Matrix eigenvalues Q8. Compute the eigenvalues and right eigenvectors of x. (Name them eigenvals and eigenvecs, respectively)
###Code
x = np.diag((1, 2, 3))
eigenvals = np.linalg.eig(x)[0]
eigenvals_ = np.linalg.eigvals(x)
assert np.array_equal(eigenvals, eigenvals_)
print "eigenvalues are\n", eigenvals
eigenvecs = np.linalg.eig(x)[1]
print "eigenvectors are\n", eigenvecs
###Output
eigenvalues are
[ 1. 2. 3.]
eigenvectors are
[[ 1. 0. 0.]
[ 0. 1. 0.]
[ 0. 0. 1.]]
###Markdown
Q9. Predict the results of the following code.
###Code
print np.array_equal(np.dot(x, eigenvecs), eigenvals * eigenvecs)
###Output
True
###Markdown
Norms and other numbers Q10. Calculate the Frobenius norm and the condition number of x.
###Code
x = np.arange(1, 10).reshape((3, 3))
print np.linalg.norm(x, 'fro')
print np.linalg.cond(x, 'fro')
###Output
16.8819430161
4.56177073661e+17
###Markdown
Q11. Calculate the determinant of x.
###Code
x = np.arange(1, 5).reshape((2, 2))
out1 = np.linalg.det(x)
out2 = x[0, 0] * x[1, 1] - x[0, 1] * x[1, 0]
assert np.allclose(out1, out2)
print out1
###Output
-2.0
###Markdown
Q12. Calculate the rank of x.
###Code
x = np.eye(4)
out1 = np.linalg.matrix_rank(x)
out2 = np.linalg.svd(x)[1].size
assert out1 == out2
print out1
###Output
4
###Markdown
Q13. Compute the sign and natural logarithm of the determinant of x.
###Code
x = np.arange(1, 5).reshape((2, 2))
sign, logdet = np.linalg.slogdet(x)
det = np.linalg.det(x)
assert sign == np.sign(det)
assert logdet == np.log(np.abs(det))
print sign, logdet
###Output
-1.0 0.69314718056
###Markdown
Q14. Return the sum along the diagonal of x.
###Code
x = np.eye(4)
out1 = np.trace(x)
out2 = x.diagonal().sum()
assert out1 == out2
print out1
###Output
4.0
###Markdown
Solving equations and inverting matrices Q15. Compute the inverse of x.
###Code
x = np.array([[1., 2.], [3., 4.]])
out1 = np.linalg.inv(x)
assert np.allclose(np.dot(x, out1), np.eye(2))
print out1
###Output
[[-2. 1. ]
[ 1.5 -0.5]]
|
notes/reference/moocs/fast.ai/dl-2018/lesson03-spreadsheet-cnn.ipynb | ###Markdown
Lesson 3 - Recreating the spreadsheet CNN in PyTorch
###Code
PATH = Path('./data/mnist')
PATH.mkdir(exist_ok=True)
###Output
_____no_output_____
###Markdown
Dataset
###Code
!kaggle competitions download -c digit-recognizer --path={PATH}
df = pd.read_csv(PATH/'train.csv')
df.head(10)
###Output
_____no_output_____
###Markdown
Kaggle provides us the MNIST data all contained within the CSV file. Each row represents an image, with every column after the first holding a pixel value.We can load all pixels for a single image as follows:
###Code
img_pixels = df.loc[7, [c for c in df.columns if c.startswith('pixel')]]
img_pixels.shape
img_arr = np.array([int(i) for i in img_pixels])
img_arr.shape
###Output
_____no_output_____
###Markdown
We can reshape it into a square, then take a look at one of the images
###Code
img_arr = img_arr.reshape((28, 28))
img_arr.shape
plt.imshow(img_arr, cmap='gray')
###Output
_____no_output_____
###Markdown
One thing to note about the intensity values of images: they should either be ints in the range 0 to 255, or floats in the range 0 to 1.
###Code
img_arr.dtype
pd.DataFrame(img_arr)
plt.imshow(img_arr, cmap='gray')
img_arr_float = img_arr / 255.
pd.DataFrame(img_arr_float).round(2)
plt.imshow(img_arr_float, cmap='gray')
###Output
_____no_output_____
###Markdown
Since PyTorch expects each image to have at least 1 channel, I'll need to force a "channel" on the image. Usually channels are the final dimension of a multi dimensional array, but PyTorch wants them at the start.
###Code
img_arr_float = img_arr_float.reshape(1, 28, 28)
img_arr_float.shape
###Output
_____no_output_____
###Markdown
I also need to convert any Numpy array into Torch tensors:
###Code
img_tensor = torch.from_numpy(img_arr_float)
img_tensor.shape
###Output
_____no_output_____
###Markdown
Lastly, I need to add a batch dimension to it using `unsqueeze(0)`, since all data fed into a PyTorch model should be batched:
###Code
img_tensor = img_tensor.unsqueeze(0).cpu().float()
img_tensor.shape
###Output
_____no_output_____
###Markdown
ConvNet step through Let's start with the first ConvLayer in Jeremy's spreadsheet example.It expects an image with a single input channel, then outputs an image with 2 channels. It uses a kernel size of 3 and leaves the stride at the default of 1.We can use the ``nn.Conv2d`` class to create a convolution.
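For reference, the spatial output size of a convolution is

$$n_{\text{out}} = \left\lfloor \frac{n_{\text{in}} - k + 2p}{s} \right\rfloor + 1,$$

so with $n_{\text{in}} = 28$, $k = 3$, $p = 0$ and $s = 1$ we expect $26 \times 26$ feature maps from this first layer.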
###Code
conv1 = nn.Conv2d(in_channels=1, out_channels=2, kernel_size=3)
###Output
_____no_output_____
###Markdown
Note that a convolution is simply a matrix which is "convolved" over some input image.
###Code
conv1.weight.shape
img_output = conv1(Variable(img_tensor))
img_output[0].shape
###Output
_____no_output_____
###Markdown
Since we didn't specify any padding, we end up with an image 2 pixels smaller on both dimensions. We now have an image-like thing with 2 channels.These are the first 2 outputs on Jeremy's spreadsheet. Let's perform the 2nd Conv operation:
###Code
conv2 = nn.Conv2d(in_channels=2, out_channels=2, kernel_size=3)
conv2.weight.shape
img_output2 = conv2(img_output)
img_output2.shape
###Output
_____no_output_____
###Markdown
When we look at the output values, we can see there are a lot of negative values.
###Code
img_output2[0]
###Output
_____no_output_____
###Markdown
We can remove those with the Relu operation:
###Code
img_output_relu = F.relu(img_output2)
img_output_relu[0]
###Output
_____no_output_____
###Markdown
We can also create a maxpool with a 2x2 kernel:
###Code
maxpool = nn.MaxPool2d(kernel_size=2)
img_output_maxpool = maxpool(img_output_relu)
img_output_maxpool.shape
###Output
_____no_output_____
###Markdown
Notice how it halves the height and width of the output image? The last step in the CNN that Jeremy explains in the spreadsheet is to pass it to a Linear layer, aka a Fully Connected layer, aka a dot product.To pass our image to a linear layer, we'll first need to flatten it out into a vector:
###Code
img_flatten = img_output_maxpool.view(1, -1)
img_flatten.shape
###Output
_____no_output_____
###Markdown
We can then create a linear layer with 288 input dimensions and 10 output dimensions (the number of classes in the MNIST problem):
###Code
linear = nn.Linear(288, 10)
###Output
_____no_output_____
###Markdown
A linear layer is really just a matrix which can be used to perform a dot product with the input vector:
###Code
linear.weight.transpose(1, 0).shape
output = linear(img_flatten)
###Output
_____no_output_____
###Markdown
We now have a set of predictions returned from our model:
###Code
output
###Output
_____no_output_____
###Markdown
The last thing to do is to pass it through a SoftMax layer, which converts the outputs into probability-like numbers (they sum to 1 and each is between 0 and 1) and tends to push most of the weight onto a single class.
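For reference, softmax maps a vector of scores $z$ to

$$\mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_j e^{z_j}},$$

which is why the values below are all positive and sum to 1.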
###Code
F.softmax(output, dim=1)
###Output
_____no_output_____
###Markdown
Putting it all together
###Code
class SimpleCNN(nn.Module):
def __init__(self, in_channels=1):
super(SimpleCNN, self).__init__()
self.conv1 = nn.Conv2d(in_channels=in_channels, out_channels=2, kernel_size=3)
self.conv2 = nn.Conv2d(in_channels=2, out_channels=2, kernel_size=3)
self.max_pool = nn.MaxPool2d(kernel_size=2)
self.fc1 = nn.Linear(in_features=288, out_features=10)
def forward(self, img_batch):
conv1_output = self.conv1(img_batch)
conv1_output = F.relu(conv1_output)
conv2_output = self.conv2(conv1_output)
conv2_output = F.relu(conv2_output)
maxpool_output = self.max_pool(conv2_output)
flattened_output = maxpool_output.view(img_batch.size(0), -1)
output = self.fc1(flattened_output)
return F.log_softmax(output, dim=1)
cnn = SimpleCNN()
torch.exp(cnn.forward(Variable(img_tensor)))
###Output
_____no_output_____
###Markdown
Train with Fast.ai ConvLearner I'm going to loop through the rows in the DataFrame, convert each to a 28x28 matrix, then stack those 3 times to make a 28x28x3 image. I'll then save it to disk, so I can use the Fast.ai stuff we've learned so far.
###Code
from pathlib import Path
train_path = PATH/'train'
train_path.mkdir(exist_ok=True)
from PIL import Image
img_ids = []
labels = []
for idx, row in tqdm_notebook(df.iterrows(), total=len(df)):
label = row[0]
img_vect = np.array(row[1:], dtype=np.uint8)
img_arr = img_vect.reshape(28, 28)
# Convert to 3 channels
img_arr = np.stack((img_arr,) * 3, axis=-1)
plt.imsave(str(train_path/f'{idx}.jpg'), img_arr)
img_ids.append(idx)
labels.append(label)
img = plt.imread(train_path/'10.jpg')
plt.imshow(img, cmap='gray')
train_df = pd.DataFrame({'id': img_ids, 'label': labels})
train_df.to_csv(PATH/'train_prepared.csv', index=False)
###Output
_____no_output_____
###Markdown
I can then create an ImageClassifierData object as usual, and use the `from_model_data` constructor to create a `ConvLearner` from my custom model.
###Code
val_idx = get_cv_idxs(len(train_df))
cnn = SimpleCNN(in_channels=3)
data = ImageClassifierData.from_csv(
PATH, 'train', PATH/'train_prepared.csv', tfms=tfms_from_model(cnn, 28), val_idxs=val_idx, suffix='.jpg')
conv_learner = ConvLearner.from_model_data(cnn, data)
conv_learner.lr_find()
conv_learner.sched.plot()
conv_learner.fit(0.02, 3)
conv_learner.fit(0.001, 3)
test_df = pd.read_csv(PATH/'test.csv')
img_1 = test_df.loc[5, [c for c in df.columns if c.startswith('pixel')]]
img_arr = np.array(img_1)
img_arr = img_arr.reshape((28, 28))
img_float = np.array(img_arr) * (1/255)
img_float = np.stack((img_float,) * 3, axis=-1)
img_float = img_float.transpose((2, 0, 1))
img_tensor = torch.from_numpy(img_float)
img_tensor = img_tensor.unsqueeze(0).cpu().float()
preds = torch.exp(conv_learner.model(Variable(img_tensor)))
plt.imshow(img_arr, cmap='gray')
preds
np.argmax(torch.exp(preds.data).numpy())
###Output
_____no_output_____ |
codes/mapreduce_practice.ipynb | ###Markdown
MapReduceThe MapReduce programming technique was designed to analyze massive data sets across a cluster. In this Jupyter notebook, you'll get a sense for how Hadoop MapReduce works; however, this notebook will run locally rather than on a cluster.The biggest difference between Hadoop and Spark is that Spark tries to do as many calculations as possible in memory, which avoids moving data back and forth across a cluster. Hadoop writes intermediate calculations out to disk, which can be less efficient. Hadoop is an older technology than Spark and one of the cornerstone big data technologies.If you click on the Jupyter notebook logo at the top of the workspace, you'll be taken to the workspace directory. There you will see a file called "songplays.txt". This is a text file where each line represents a song that was played in the Sparkify app. The MapReduce code will count how many times each song was played. In other words, the code counts how many times the song title appears in the list. MapReduce versus Hadoop MapReduceDon't get confused by the terminology! MapReduce is a programming technique. Hadoop MapReduce is a specific implementation of the programming technique.Some of the syntax will look a bit funny, so be sure to read the explanation and comments for each section. You'll learn more about the syntax in later lessons. Run each of the code cells below to see the output.
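Before the Hadoop-style version, here is a minimal pure-Python sketch of the same map → shuffle → reduce idea, using a few made-up plays — purely illustrative, no cluster involved:

```python
from collections import defaultdict

plays = ["Deep Dreams", "Data House Rock", "Deep Dreams"]  # hypothetical sample

# map: emit a (song, 1) pair for every play
mapped = [(song, 1) for song in plays]

# shuffle: group the 1s by song title
groups = defaultdict(list)
for song, count in mapped:
    groups[song].append(count)

# reduce: sum the counts for each song
totals = {song: sum(counts) for song, counts in groups.items()}
print(totals)  # {'Deep Dreams': 2, 'Data House Rock': 1}
```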
###Code
# Install mrjob library. This package is for running MapReduce jobs with Python
# In Jupyter notebooks, "!" runs terminal commands from inside notebooks
! pip install mrjob
%%file wordcount.py
# %%file is an Ipython magic function that saves the code cell as a file
from mrjob.job import MRJob # import the mrjob library
class MRSongCount(MRJob):
# the map step: each line in the txt file is read as a key, value pair
# in this case, each line in the txt file only contains a value but no key
# _ means that in this case, there is no key for each line
def mapper(self, _, song):
# output each line as a tuple of (song_names, 1)
yield (song, 1)
# the reduce step: combine all tuples with the same key
# in this case, the key is the song name
# then sum all the values of the tuple, which will give the total song plays
def reducer(self, key, values):
yield (key, sum(values))
if __name__ == "__main__":
MRSongCount.run()
# run the code as a terminal command
! python wordcount.py songplays.txt
###Output
No configs found; falling back on auto-configuration
No configs specified for inline runner
Creating temp directory /tmp/wordcount.root.20200418.224351.121567
Running step 1 of 1...
job output is in /tmp/wordcount.root.20200418.224351.121567/output
Streaming final output from /tmp/wordcount.root.20200418.224351.121567/output...
"Broken Networks" 510
"Data House Rock" 828
"Deep Dreams" 1131
Removing temp directory /tmp/wordcount.root.20200418.224351.121567...
|
jupyterbook/content-de/python/lab/ex10-exceptions.ipynb | ###Markdown
Exercise 10 - Exception handlingBy now, you will almost certainly have encountered 'exceptions' - the error messages that appear when you ask Python to do something that it doesn't like. For example, the following code will raise an exception:```pythona = [1, 2, 3]print(a[4])```Attempting to execute this code results in some text similar to this:```text---------------------------------------------------------------------------IndexError Traceback (most recent call last) in () 1 a = [1, 2, 3]----> 2 print(a[4])IndexError: list index out of range```The error here is that we have tried to access the 5th element of `a` (remember, counting starts from zero!), but `a` only contains three entries.Another example might be```pythona = 1 + 'hello'```which generates```text---------------------------------------------------------------------------TypeError Traceback (most recent call last) in ()----> 1 1 + 'hello'TypeError: unsupported operand type(s) for +: 'int' and 'str'```Notice that these two error messages have different headlines: the first is an `IndexError`, whereas the second is a `TypeError`. You will notice that a variety of other kinds of error exist.If these errors are simply coding mistakes, it is useful to have the program terminate immediately, so we can fix it. However, in 'real' code these sorts of problem may arise for reasons beyond the programmer's control - perhaps the user has provided an incorrect set of inputs, for example. It is therefore often useful to be able to 'catch' and 'handle' exceptions in a graceful manner.To do this, Python provides the `try... except...` construct. This looks like:```pythontry: [code that may fail]except: [code to handle the error]```When a `try...except...` construct is encountered, Python first attempts to execute all the code within the indented `try` block. If this is successful, the code within the `except` block is never executed. However, as soon as an error is encountered, Python stops attempting to execute the `try` block, and jumps immediately to the first line in the `except` block. It executes everything in the `except` block, and then (assuming no more errors arise) continues with the first line *after* the `try...except...` construct.So, for example:```pythontry: x = float(input('Please enter a number: ')) print("The next number is: ", x+1)except: print("Sorry, that is not a valid number")```will gracefully handle cases where the user types text into the input field.**&10148; Try it out!** Compare how Python behaves with, and without, the `try...except...` construct.
###Code
# Try it here!
###Output
_____no_output_____
###Markdown
This kind of error handling is sometimes referred to as the 'EAFP' model: "easier to ask forgiveness than permission". Rather than attempting to verify that everything is correct *before* carrying out an operation - a process which can be tedious and computationally inefficient - we start by assuming everything will work, and then deal with any mess that we create.Our `try... except...` statement above will handle *any* kind of error that might arise. This may seem superficially attractive, but it can lead to confusion. For example, suppose we had made a typo in our code, referring to a variable `z` (which doesn't exist):```pythontry: x = float(input('Please enter a number: ')) print("The next number is: ", z+1)except: print("Sorry, that is not a valid number")```Now, this will always complain that we have entered an invalid number - even though this is not the real problem. If we remove the `try... except...` we see that this code is triggerring a `NameError`, rather than the `ValueError` that we intended to avoid. If this were 'real' code, we might waste a lot of time trying to understand why Python thought we were entering invalid numbers.**&10148; Try it out!**
###Code
# Try it here!
###Output
_____no_output_____
###Markdown
To avoid this, we can specify what sort of error(s) the `except` block is intended to handle:
```python
try:
    x = float(input('Please enter a number: '))
    print("The next number is: ", z+1)
except ValueError:
    print("Sorry, that is not a valid number")
```
Now, our typo will be obvious when we try and run the code, but once it is fixed everything will work as expected. If necessary, we can have one `except` that catches multiple types of exception
```python
try:
    [code]
except (ValueError, TypeError):
    [code]
```
and we can have multiple `except` blocks to handle different errors in different ways:
```python
try:
    [code]
except ValueError:
    [code]
except TypeError:
    [code]
except:
    [code]
```
In the above example, the 'bare' `except` at the end is optional, and will catch all errors that do not match one of the 'named' exception handlers. For example
```python
try:
    x = float(input('Please enter a number: '))
    print("The next number is: ", z+1)
except ValueError:
    print("Sorry, that is not a valid number")
except:
    print("Something unexpected happened")
```
will catch the error arising from our typo.**&10148; Try it out!**
###Code
# Try it here!
###Output
_____no_output_____
###Markdown
Exception handling can sometimes be a central part of your code design. Suppose you need to write a piece of code to sum up the entries in a list (and you have forgotten that Python's `sum()` function exists to do this). One solution (the cleanest, and so the best) would be to loop over the entries in the array:```pythona = [1, 3, 6]s = 0for x in a: s += xprint(s)```However, you could also write something like:```pythona = [1, 3, 6]i = 0s = 0while True: try: s += a[i] except IndexError: break i+=1print(s)```While this is unnecessarily complicated for such a straightfoward example, it illustrates how exception-handling can be used to control the flow of a program.
###Code
# Try it here!
###Output
_____no_output_____
###Markdown
It is tempting to overuse `try...except...` clauses to suppress Python's built-in error messages. Generally this will be a mistake, as it will make it harder to identify the causes of bugs. It is best to only use `try...except...` when necessary to handle 'predictable' error cases, or in production code. **&10148; In Exercise 4, you made a guessing game. Using `try...except...`, adapt it so that if the user enters anything other than an integer, they are prompted to 'try again'.**
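One possible sketch of the input-validation part; the secret number and the exact messages here are placeholders, and your game from Exercise 4 may be structured differently:

```python
secret = 7  # placeholder secret number

while True:
    try:
        guess = int(input('Guess the number: '))
    except ValueError:
        print('That was not an integer, try again')
        continue  # go back and ask again
    if guess == secret:
        print('Correct!')
        break
    elif guess < secret:
        print('Too low')
    else:
        print('Too high')
```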
###Code
# Try it here!
###Output
_____no_output_____
###Markdown
Sometimes, it is useful to be able to access more information about the exact error that occurred. This can be achieved by modifying the `except` statement:
```python
try:
    [code]
except [ExceptionType] as [variable]:
    [code]
```
As an example,
```python
try:
    x = 1 + 'hello'
except TypeError as err:
    print("There is an error")
    print(err)
```
Now, if a TypeError is raised, Python creates the variable `err` and sets it to contain some more detailed information about the error. We can then use this to give a more detailed report, or to help us handle the problem. Different types of error may store different information within the variable.**&10148; Try it out!**
###Code
# Try it here!
###Output
_____no_output_____
###Markdown
Sometimes, there may be code that you want to execute regardless of whether an error occurs or not. For example, you might wish to save some information about the stage your program has reached, or delete temporary files. To help with this, Python provides a variant of `try...except...`:```pythontry: [code]finally: [code]```The code within the `finally` block is *always* executed, either after everything in `try` has been successfully completed, or *before* an error is propagated. For example:```pythontry: s = 0 for x in [1, 2, 3, 'x']: s += xfinally: print("This line is printed *before* the error is raised...") print(s)```If the `try...finally...` occurs in a function, and `finally` contains a `return `statement the error is never raised. Similarly, if `try...finally...` occurs in a loop, and `finally` contains a `break` statement, the error is discarded.**&10148; Try it out!**
###Code
# Try it here!
###Output
_____no_output_____
###Markdown
Python also allows you to `raise` errors within your code, triggering the error-handling mechanisms already described. This is achieved by the command
```python
raise [ExceptionType]
```
or
```python
raise [ExceptionType](arguments)
```
For example,
```python
raise IndexError
```
or
```python
raise IndexError("This is just an example")
```
A file `some_data.txt` is present in this folder. Use the skills you acquired during the last exercises to build a function that loads this file and multiplies the two columns together. Do the sum for each line. If the sum is different from 100, raise a ValueError with a message saying that the sum of the columns should be 100.**&10148; Try it out!**
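One possible sketch; it assumes `some_data.txt` holds two whitespace-separated numeric columns and reads the exercise as checking that the two values on each line add up to 100. Adjust the parsing and the check to match the actual file:

```python
def check_file(filename='some_data.txt'):
    """Raise ValueError if the two columns on any line do not sum to 100."""
    with open(filename, 'r') as fp:
        for line in fp:
            if not line.strip():
                continue  # skip blank lines
            a, b = [float(value) for value in line.split()]
            if a + b != 100:
                raise ValueError('The sum of the columns should be 100')

check_file()
```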
###Code
# Try it here!
###Output
_____no_output_____
###Markdown
In conjunction with `try...except...`, `raise` can allow an effective mechanism for controlling program flow, since an exception raised within a function (within another function, within...) can be caught and handled at the top-most level. For example, one might write something like:```pythondef check_consistency(datafile_lines): Checks whether datafile contents are self-consistent [...] if [...]: File is good return True else: return False def load_datafile(...): Load data from file with open(datafile, 'r') as fp: lines = fp.readlines() [...] if not check_consistency(lines): raise IOError("Datafile is not self-consistent") def restart_calculation(...): Attempt to resume interrupted calculation [...] load_datafile(...) [...]def program_startup(...): [...] try: restart_calcuation(...) except IOError: start_new_calculation(...)```Here, the `program_startup` routine attempts to restart an existing, previous calculation based on information in a file, but if reading this file fails for any reason, it will simply start the calculation afresh. **➤ Earlier, you wrote some code to compute the sum of columns of a dataset. Using the structure described above, adapt this so that if the file doesn't exist, the dataset `[[1.,99.],[2.,98.],[3.,97.]]` is processed instead.**
###Code
# Try it here!
###Output
_____no_output_____ |
models/notebooks/FFNN/04-FFNN_with_pickle_reply.ipynb | ###Markdown
Load Data
###Code
path = f'{conf.dataset_mini_path}/train'
train = read_data(path)
path = f'{conf.dataset_mini_path}/test'
test = read_data(path)
path = f'{conf.dataset_mini_path}/valid'
valid = read_data(path)
TARGET = 'reply'
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
def set_dataframe_types(df, train):
df['id'] = np.arange( df.shape[0] )
df['id'] = df['id'].astype(np.uint32)
if train:
df['reply_timestamp'] = df['reply_timestamp'].fillna(0)
df['retweet_timestamp'] = df['retweet_timestamp'].fillna(0)
df['comment_timestamp'] = df['comment_timestamp'].fillna(0)
df['like_timestamp'] = df['like_timestamp'].fillna(0)
df['reply_timestamp'] = df['reply_timestamp'].astype(np.uint32)
df['retweet_timestamp'] = df['retweet_timestamp'].astype(np.uint32)
df['comment_timestamp'] = df['comment_timestamp'].astype(np.uint32)
df['like_timestamp'] = df['like_timestamp'].astype(np.uint32)
df['tweet_timestamp'] = df['tweet_timestamp'].astype( np.uint32 )
df['creator_follower_count'] = df['creator_follower_count'].astype( np.uint32 )
df['creator_following_count'] = df['creator_following_count'].astype( np.uint32 )
df['creator_account_creation']= df['creator_account_creation'].astype( np.uint32 )
df['engager_follower_count'] = df['engager_follower_count'].astype( np.uint32 )
df['engager_following_count'] = df['engager_following_count'].astype( np.uint32 )
df['engager_account_creation']= df['engager_account_creation'].astype( np.uint32 )
return df
def preprocess(df, target, train):
df = set_dataframe_types(df, train)
# df = df.set_index('id')
# df.columns = conf.raw_features + conf.labels
df = df.drop('text_tokens', axis=1)
df = feature_extraction(df, features=conf.used_features, train=train) # extract 'used_features'
cols = []
return df
train = preprocess(train, TARGET, True)
valid = preprocess(valid, TARGET, True)
test = preprocess(test, TARGET, True)
train
###Output
_____no_output_____
###Markdown
pickle matching language
###Code
pickle_path = conf.dict_path
user_main_language_path = pickle_path + "user_main_language.pkl"
if os.path.exists(user_main_language_path) :
with open(user_main_language_path, 'rb') as f :
user_main_language = pickle.load(f)
language_dict_path = pickle_path + "language_dict.pkl"
if os.path.exists(language_dict_path ) :
with open(language_dict_path , 'rb') as f :
language_dict = pickle.load(f)
train['language'] = train.apply(lambda x : language_dict[x['language']], axis = 1)
test['language'] = test.apply(lambda x : language_dict[x['language']], axis = 1)
valid['language'] = valid.apply(lambda x : language_dict[x['language']], axis = 1)
del language_dict
train['creator_main_language'] = train['creator_id'].map(user_main_language)
valid['creator_main_language'] = valid['creator_id'].map(user_main_language)
test['creator_main_language'] = test['creator_id'].map(user_main_language)
train['engager_main_language'] = train['engager_id'].map(user_main_language)
valid['engager_main_language'] = valid['engager_id'].map(user_main_language)
test['engager_main_language'] = test['engager_id'].map(user_main_language)
train['creator_and_engager_have_same_main_language'] = train.apply(lambda x : 1 if x['creator_main_language'] == x['engager_main_language'] else 0, axis = 1)
valid['creator_and_engager_have_same_main_language'] = valid.apply(lambda x : 1 if x['creator_main_language'] == x['engager_main_language'] else 0, axis = 1)
test['creator_and_engager_have_same_main_language'] = test.apply(lambda x : 1 if x['creator_main_language'] == x['engager_main_language'] else 0, axis = 1)
train['is_tweet_in_creator_main_language'] = train.apply(lambda x : 1 if x['creator_main_language'] == x['language'] else 0, axis = 1)
valid['is_tweet_in_creator_main_language'] = valid.apply(lambda x : 1 if x['creator_main_language'] == x['language'] else 0, axis = 1)
test['is_tweet_in_creator_main_language'] = test.apply(lambda x : 1 if x['creator_main_language'] == x['language'] else 0, axis = 1)
train['is_tweet_in_engager_main_language'] = train.apply(lambda x : 1 if x['engager_main_language'] == x['language'] else 0, axis = 1)
valid['is_tweet_in_engager_main_language'] = valid.apply(lambda x : 1 if x['engager_main_language'] == x['language'] else 0, axis = 1)
test['is_tweet_in_engager_main_language'] = test.apply(lambda x : 1 if x['engager_main_language'] == x['language'] else 0, axis = 1)
del user_main_language
train.head()
###Output
_____no_output_____
###Markdown
engagements
###Code
engagement_like_path = pickle_path + "engagement-like.pkl"
if os.path.exists(engagement_like_path ) :
with open(engagement_like_path , 'rb') as f :
engagement_like = pickle.load(f)
train['engager_feature_number_of_previous_like_engagement'] = train.apply(lambda x : engagement_like[x['engager_id']], axis = 1)
valid['engager_feature_number_of_previous_like_engagement'] = valid.apply(lambda x : engagement_like[x['engager_id']], axis = 1)
test['engager_feature_number_of_previous_like_engagement'] = test.apply(lambda x : engagement_like[x['engager_id']], axis = 1)
engagement_like_path = pickle_path + "creator-engagement-like.pkl"
if os.path.exists(engagement_like_path ) :
with open(engagement_like_path , 'rb') as f :
creator_engagement_like = pickle.load(f)
train['creator_feature_number_of_previous_like_engagement'] = train.apply(lambda x : creator_engagement_like[x['creator_id']], axis = 1)
valid['creator_feature_number_of_previous_like_engagement'] = valid.apply(lambda x : creator_engagement_like[x['creator_id']], axis = 1)
test['creator_feature_number_of_previous_like_engagement'] = test.apply(lambda x : creator_engagement_like[x['creator_id']], axis = 1)
del engagement_like
del creator_engagement_like
engagement_reply_path = pickle_path + "engagement-reply.pkl"
if os.path.exists(engagement_reply_path ) :
with open(engagement_reply_path , 'rb') as f :
engagement_reply = pickle.load(f)
train['engager_feature_number_of_previous_reply_engagement'] = train.apply(lambda x : engagement_reply[x['engager_id']], axis = 1)
valid['engager_feature_number_of_previous_reply_engagement'] = valid.apply(lambda x : engagement_reply[x['engager_id']], axis = 1)
test['engager_feature_number_of_previous_reply_engagement'] = test.apply(lambda x : engagement_reply[x['engager_id']], axis = 1)
engagement_reply_path = pickle_path + "creator-engagement-reply.pkl"
if os.path.exists(engagement_reply_path ) :
with open(engagement_reply_path , 'rb') as f :
creator_engagement_reply = pickle.load(f)
train['creator_feature_number_of_previous_reply_engagement'] = train.apply(lambda x : creator_engagement_reply[x['creator_id']], axis = 1)
valid['creator_feature_number_of_previous_reply_engagement'] = valid.apply(lambda x : creator_engagement_reply[x['creator_id']], axis = 1)
test['creator_feature_number_of_previous_reply_engagement'] = test.apply(lambda x : creator_engagement_reply[x['creator_id']], axis = 1)
del engagement_reply
del creator_engagement_reply
engagement_retweet_path = pickle_path + "engagement-retweet.pkl"
if os.path.exists(engagement_retweet_path ) :
with open(engagement_retweet_path , 'rb') as f :
engagement_retweet = pickle.load(f)
train['engager_feature_number_of_previous_retweet_engagement'] = train.apply(lambda x : engagement_retweet[x['engager_id']], axis = 1)
valid['engager_feature_number_of_previous_retweet_engagement'] = valid.apply(lambda x : engagement_retweet[x['engager_id']], axis = 1)
test['engager_feature_number_of_previous_retweet_engagement'] = test.apply(lambda x : engagement_retweet[x['engager_id']], axis = 1)
engagement_retweet_path = pickle_path + "creator-engagement-retweet.pkl"
if os.path.exists(engagement_retweet_path ) :
with open(engagement_retweet_path , 'rb') as f :
creator_engagement_retweet = pickle.load(f)
train['creator_feature_number_of_previous_retweet_engagement'] = train.apply(lambda x : creator_engagement_retweet[x['creator_id']], axis = 1)
valid['creator_feature_number_of_previous_retweet_engagement'] = valid.apply(lambda x : creator_engagement_retweet[x['creator_id']], axis = 1)
test['creator_feature_number_of_previous_retweet_engagement'] = test.apply(lambda x : creator_engagement_retweet[x['creator_id']], axis = 1)
del engagement_retweet
del creator_engagement_retweet
engagement_comment_path = pickle_path + "engagement-comment.pkl"
if os.path.exists(engagement_comment_path ) :
with open(engagement_comment_path , 'rb') as f :
engagement_comment = pickle.load(f)
train['engager_feature_number_of_previous_comment_engagement'] = train.apply(lambda x : engagement_comment[x['engager_id']], axis = 1)
valid['engager_feature_number_of_previous_comment_engagement'] = valid.apply(lambda x : engagement_comment[x['engager_id']], axis = 1)
test['engager_feature_number_of_previous_comment_engagement'] = test.apply(lambda x : engagement_comment[x['engager_id']], axis = 1)
engagement_comment_path = pickle_path + "creator-engagement-comment.pkl"
if os.path.exists(engagement_comment_path ) :
with open(engagement_comment_path , 'rb') as f :
creator_engagement_comment = pickle.load(f)
train['creator_feature_number_of_previous_comment_engagement'] = train.apply(lambda x : creator_engagement_comment[x['creator_id']], axis = 1)
valid['creator_feature_number_of_previous_comment_engagement'] = valid.apply(lambda x : creator_engagement_comment[x['creator_id']], axis = 1)
test['creator_feature_number_of_previous_comment_engagement'] = test.apply(lambda x : creator_engagement_comment[x['creator_id']], axis = 1)
del engagement_comment
del creator_engagement_comment
train['number_of_engagements_positive'] = train.apply(lambda x : x['engager_feature_number_of_previous_like_engagement'] + x['engager_feature_number_of_previous_retweet_engagement'] + x['engager_feature_number_of_previous_reply_engagement'] + x['engager_feature_number_of_previous_comment_engagement'], axis = 1)
valid['number_of_engagements_positive'] = valid.apply(lambda x : x['engager_feature_number_of_previous_like_engagement'] + x['engager_feature_number_of_previous_retweet_engagement'] + x['engager_feature_number_of_previous_reply_engagement'] + x['engager_feature_number_of_previous_comment_engagement'], axis = 1)
test['number_of_engagements_positive'] = test.apply(lambda x : x['engager_feature_number_of_previous_like_engagement'] + x['engager_feature_number_of_previous_retweet_engagement'] + x['engager_feature_number_of_previous_reply_engagement'] + x['engager_feature_number_of_previous_comment_engagement'], axis = 1)
train['creator_number_of_engagements_positive'] = train.apply(lambda x : x['creator_feature_number_of_previous_like_engagement'] + x['creator_feature_number_of_previous_retweet_engagement'] + x['creator_feature_number_of_previous_reply_engagement'] + x['creator_feature_number_of_previous_comment_engagement'], axis = 1)
valid['creator_number_of_engagements_positive'] = valid.apply(lambda x : x['creator_feature_number_of_previous_like_engagement'] + x['creator_feature_number_of_previous_retweet_engagement'] + x['creator_feature_number_of_previous_reply_engagement'] + x['creator_feature_number_of_previous_comment_engagement'], axis = 1)
test['creator_number_of_engagements_positive'] = test.apply(lambda x : x['creator_feature_number_of_previous_like_engagement'] + x['creator_feature_number_of_previous_retweet_engagement'] + x['creator_feature_number_of_previous_reply_engagement'] + x['creator_feature_number_of_previous_comment_engagement'], axis = 1)
# train = train.drop('engager_feature_number_of_previous_reply_engagement', axis = 1)
train = train.drop('engager_feature_number_of_previous_like_engagement', axis = 1)
train = train.drop('engager_feature_number_of_previous_retweet_engagement', axis = 1)
train = train.drop('engager_feature_number_of_previous_comment_engagement', axis = 1)
# train = train.drop('engager_feature_number_of_previous_reply_engagement', axis = 1)
train = train.drop('creator_feature_number_of_previous_like_engagement', axis = 1)
train = train.drop('creator_feature_number_of_previous_retweet_engagement', axis = 1)
train = train.drop('creator_feature_number_of_previous_comment_engagement', axis = 1)
# valid = valid.drop('engager_feature_number_of_previous_reply_engagement', axis = 1)
valid = valid.drop('engager_feature_number_of_previous_like_engagement', axis = 1)
valid = valid.drop('engager_feature_number_of_previous_retweet_engagement', axis = 1)
valid = valid.drop('engager_feature_number_of_previous_comment_engagement', axis = 1)
# valid = valid.drop('creator_feature_number_of_previous_reply_engagement', axis = 1)
valid = valid.drop('creator_feature_number_of_previous_like_engagement', axis = 1)
valid = valid.drop('creator_feature_number_of_previous_retweet_engagement', axis = 1)
valid = valid.drop('creator_feature_number_of_previous_comment_engagement', axis = 1)
# test = test.drop('engager_feature_number_of_previous_reply_engagement', axis = 1)
test = test.drop('engager_feature_number_of_previous_like_engagement', axis = 1)
test = test.drop('engager_feature_number_of_previous_retweet_engagement', axis = 1)
test = test.drop('engager_feature_number_of_previous_comment_engagement', axis = 1)
# test = test.drop('creator_feature_number_of_previous_reply_engagement', axis = 1)
test = test.drop('creator_feature_number_of_previous_like_engagement', axis = 1)
test = test.drop('creator_feature_number_of_previous_retweet_engagement', axis = 1)
test = test.drop('creator_feature_number_of_previous_comment_engagement', axis = 1)
# train['number_of_engagements_ratio_like'] = train.apply(lambda x : x['engager_feature_number_of_previous_like_engagement'] / x['number_of_engagements_positive'] if x['number_of_engagements_positive'] != 0 else 0, axis = 1)
# valid['number_of_engagements_ratio_like'] = valid.apply(lambda x : x['engager_feature_number_of_previous_like_engagement'] / x['number_of_engagements_positive'] if x['number_of_engagements_positive'] != 0 else 0, axis = 1)
# test['number_of_engagements_ratio_like'] = test.apply(lambda x : x['engager_feature_number_of_previous_like_engagement'] / x['number_of_engagements_positive'] if x['number_of_engagements_positive'] != 0 else 0, axis = 1)
train[f'engager_number_of_engagements_ratio_{TARGET}'] = train.apply(lambda x : x[f'engager_feature_number_of_previous_{TARGET}_engagement'] / x['number_of_engagements_positive'] if x['number_of_engagements_positive'] != 0 else 0, axis = 1)
valid[f'engager_number_of_engagements_ratio_{TARGET}'] = valid.apply(lambda x : x[f'engager_feature_number_of_previous_{TARGET}_engagement'] / x['number_of_engagements_positive'] if x['number_of_engagements_positive'] != 0 else 0, axis = 1)
test[f'engager_number_of_engagements_ratio_{TARGET}'] = test.apply(lambda x : x[f'engager_feature_number_of_previous_{TARGET}_engagement'] / x['number_of_engagements_positive'] if x['number_of_engagements_positive'] != 0 else 0, axis = 1)
train[f'creator_number_of_engagements_ratio_{TARGET}'] = train.apply(lambda x : x[f'creator_feature_number_of_previous_{TARGET}_engagement'] / x['creator_number_of_engagements_positive'] if x['creator_number_of_engagements_positive'] != 0 else 0, axis = 1)
valid[f'creator_number_of_engagements_ratio_{TARGET}'] = valid.apply(lambda x : x[f'creator_feature_number_of_previous_{TARGET}_engagement'] / x['creator_number_of_engagements_positive'] if x['creator_number_of_engagements_positive'] != 0 else 0, axis = 1)
test[f'creator_number_of_engagements_ratio_{TARGET}'] = test.apply(lambda x : x[f'creator_feature_number_of_previous_{TARGET}_engagement'] / x['creator_number_of_engagements_positive'] if x['creator_number_of_engagements_positive'] != 0 else 0, axis = 1)
###Output
_____no_output_____
###Markdown
Sampling
###Code
# df_positive = train[train[TARGET]==1]
# df_negative = train[train[TARGET]==0]
# print(len(df_positive))
# print(len(df_negative))
# df_negative = df_negative.sample(n = len(df_positive), random_state=777)
# train = pd.concat([df_positive, df_negative])
# train = train.sample(frac = 1)
# del df_positive
# del df_negative
###Output
_____no_output_____
###Markdown
Split
###Code
label_names = ['reply', 'retweet', 'comment', 'like']
DONT_USE = ['tweet_timestamp','creator_account_creation','engager_account_creation','engage_time',
'creator_account_creation', 'engager_account_creation',
'fold','tweet_id',
'tr','dt_day','','',
'engager_id','creator_id','engager_is_verified',
'elapsed_time',
'links','domains','hashtags0','hashtags1',
'hashtags','tweet_hash','dt_second','id',
'tw_hash0',
'tw_hash1',
'tw_rt_uhash',
'same_language', 'nan_language','language',
'tw_hash', 'tw_freq_hash','tw_first_word', 'tw_second_word', 'tw_last_word', 'tw_llast_word',
'ypred','creator_count_combined','creator_user_fer_count_delta_time','creator_user_fing_count_delta_time','creator_user_fering_count_delta_time','creator_user_fing_count_mode','creator_user_fer_count_mode','creator_user_fering_count_mode'
]
DONT_USE += label_names
DONT_USE += conf.labels
RMV = [c for c in DONT_USE if c in train.columns]
y_train = train[TARGET]
X_train = train.drop(RMV, axis=1)
del train
y_valid = valid[TARGET]
X_valid = valid.drop(RMV, axis=1)
del valid
y_test = test[TARGET]
X_test = test.drop(RMV, axis=1)
del test
###Output
_____no_output_____
###Markdown
Scaling
###Code
X_train = X_train.reset_index(drop=True)
X_test = X_test.reset_index(drop=True)
X_valid = X_valid.reset_index(drop=True)
scaling_columns = ['creator_following_count', 'creator_follower_count', 'engager_follower_count', 'engager_following_count', f'engager_feature_number_of_previous_{TARGET}_engagement',f'creator_feature_number_of_previous_{TARGET}_engagement', 'number_of_engagements_positive', 'creator_number_of_engagements_positive', 'dt_dow', 'dt_hour', 'len_domains']
standard_scaler = preprocessing.StandardScaler()
standard_scaler.fit(X_train[scaling_columns])
ss = standard_scaler.transform(X_train[scaling_columns])
X_train[scaling_columns] = pd.DataFrame(ss, columns = scaling_columns)
ss = standard_scaler.transform(X_valid[scaling_columns])
X_valid[scaling_columns] = pd.DataFrame(ss, columns = scaling_columns)
ss = standard_scaler.transform(X_test[scaling_columns])
X_test[scaling_columns] = pd.DataFrame(ss, columns = scaling_columns)
X_train = X_train.fillna(X_train.mean())
X_valid = X_valid.fillna(X_valid.mean())
X_test = X_test.fillna(X_test.mean())
X_train
###Output
_____no_output_____
###Markdown
Modeling
###Code
model = Sequential([
Dense(16, activation = 'relu', input_dim = X_train.shape[1]),
Dense(8, activation = 'relu'),
Dense(4, activation = 'relu'),
Dense(1, activation = 'sigmoid')
])
model.compile(
optimizer = 'adam',
loss = 'binary_crossentropy', # softmax : sparse_categorical_crossentropy, sigmoid : binary_crossentropy
metrics=['binary_crossentropy']) # sigmoid :binary_crossentropy
result = model.fit(
x = X_train,
y = y_train,
validation_data=(X_valid, y_valid),
epochs=20,
batch_size=32
)
plt.plot(result.history['binary_crossentropy'])
plt.plot(result.history['val_binary_crossentropy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
model.evaluate(X_test, y_test)
model.save(f'./saved_model/ffnn_{TARGET}')
###Output
INFO:tensorflow:Assets written to: ./saved_model/ffnn_reply/assets
###Markdown
Predict
###Code
model = tf.keras.models.load_model(f'./saved_model/ffnn_{TARGET}')
pred = model.predict(X_test)
rce = compute_rce(pred, y_test)
rce
average_precision_score(y_test, pred)
X_test.columns
###Output
_____no_output_____ |
Assignment_2_and_3.ipynb | ###Markdown
Uncover the factors that lead to employee attrition and explore important questions such as: 1. Show a breakdown of distance from home by job role and attrition. 2. Compare average monthly income by education and attrition.
###Code
from google.colab import drive
drive.mount('/content/gdrive', force_remount = True)
import sys
sys.path.append('/content/gdrive/My Drive/DataScienceSchool/Assignments/ADS-Assignment-2-3')
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib
df = pd.read_csv('/content/gdrive/My Drive/DataScienceSchool/Assignments/ADS-Assignment-2-3/WA_Fn-UseC_-HR-Employee-Attrition.csv')
df
df.columns
df.dtypes
df.DistanceFromHome.nunique()
###Output
_____no_output_____
###Markdown
1. Show a breakdown of distance from home by job role and attrition.
###Code
df.groupby(['JobRole', 'Attrition'])['DistanceFromHome'].mean().unstack().head(30)
###Output
_____no_output_____
###Markdown
We can also visualize the above data
###Code
plt.figure(figsize = (22, 12))
sns.barplot(x = 'JobRole', y = 'DistanceFromHome', data = df, hue = 'Attrition')
###Output
_____no_output_____
###Markdown
2. Compare average monthly income by education and attrition.
###Code
df['Education'].nunique()
df['MonthlyIncome'].nunique()
df.groupby(['Education', 'Attrition'])['MonthlyIncome'].mean().unstack()
plt.figure(figsize=(22, 12))
sns.barplot(x='Education', y = 'MonthlyIncome', data = df, hue = 'Attrition')
###Output
_____no_output_____ |
original/v1/s003-looping/ostrava/Feedback k domácím projektům.ipynb | ###Markdown
Feedback on the homework projects Can this code be written more simply, while still doing exactly the same thing?
###Code
for radek in range(4):
radek += 1
for value in range(radek):
print('X', end=' ')
print('')
###Output
_____no_output_____
###Markdown
Yes, it can :-)
###Code
for radek in range(1, 5):
print('X ' * radek)
###Output
_____no_output_____
###Markdown
And what about this one?
###Code
promenna = "X"
for j in range(5):
for i in promenna:
print(i, i, i, i, i)
###Output
_____no_output_____
###Markdown
That one too
###Code
for j in range(5):
print('X ' * 5)
###Output
_____no_output_____
###Markdown
Smallest number A verbose solution from the homework projects
###Code
prve = input('Zadej cislo: ')
druhe = input('Zadej cislo: ')
tretie = input('Zadej cislo: ')
stvrte = input('Zadej cislo: ')
piate = input('Zadej cislo: ')
if prve<druhe and prve<tretie and prve<stvrte and prve<piate:
print(prve)
if druhe<prve and druhe<tretie and druhe<stvrte and druhe<piate:
print(druhe)
if tretie<prve and tretie<druhe and tretie<stvrte and tretie<piate:
print(tretie)
if stvrte<prve and stvrte<druhe and stvrte<tretie and stvrte<piate:
print(stvrte)
if piate<prve and piate<druhe and piate<tretie and piate<stvrte:
print(piate)
###Output
_____no_output_____
###Markdown
Better, but still not optimal
###Code
a = float(input('Prvni cislo: '))
b = float(input('Druhe cislo: '))
c = float(input('Treti cislo: '))
d = float(input('Ctrvte cislo: '))
e = float(input('Pate cislo: '))
m = a
for cislo in a, b, c, d, e:
if cislo < m:
m=cislo
print(m)
###Output
_____no_output_____
###Markdown
A shorter and less demanding solution
###Code
minimum = 0
for x in range(5):
cislo = int(input('Zadej cislo: '))
if minimum == 0 or cislo < minimum:
minimum = cislo
print('Nejmensi zadane cislo je', minimum)
###Output
_____no_output_____
###Markdown
N-gons in a row A verbose solution from the homework projects
###Code
from turtle import forward, shape, left, right, exitonclick, penup, pendown, back
# pentagon:
vnitrniuhel = 180*(1-(2/5))
vnejsiuhel= 180-vnitrniuhel
for x in range (5):
forward(200/5)
left(vnejsiuhel)
penup()
forward(100)
pendown()
# hexagon:
vnitrniuhel = 180*(1-(2/6))
vnejsiuhel= 180-vnitrniuhel
for x in range (6):
forward(200/6)
left(vnejsiuhel)
penup()
forward(100)
pendown()
# heptagon:
vnitrniuhel = 180*(1-(2/7))
vnejsiuhel= 180-vnitrniuhel
for x in range (7):
forward(200/7)
left(vnejsiuhel)
penup()
forward(100)
pendown()
# octagon:
vnitrniuhel = 180*(1-(2/8))
vnejsiuhel= 180-vnitrniuhel
for x in range (8):
forward(200/8)
left(vnejsiuhel)
exitonclick()
###Output
_____no_output_____
###Markdown
A shorter solution using a loop inside another loop
###Code
from turtle import forward, shape, left, right, exitonclick, penup, pendown, back
for n in range(5,9):
vnitrniuhel = 180*(1-(2/n))
vnejsiuhel= 180-vnitrniuhel
for x in range (n):
forward(200/n)
left(vnejsiuhel)
penup()
forward(100)
pendown()
exitonclick()
###Output
_____no_output_____
###Markdown
General comments and advice * Imports always go on the first lines of the program, and each import appears only once per program. * We try not to use star imports. * We do not import anything that is never used in the program. * The code does not have to be elegant, the main thing is that it works (at least to begin with). * Comments are better and easier to write above or below the code rather than next to it, especially when the commented part of the code spans several lines. * Praise for using functions. * When you hand in a file with functions, you also need to call them inside that file, otherwise nothing happens when the file is run. * Martin thanks everyone who sped up the turtle. Thank you for the PyBeer
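A small illustration of several of these points (explicit imports at the top, and the function being called at the end of the file); the function itself is only an example, not taken from any submitted project:

```python
# imports go at the top, once, and without a star
from turtle import forward, left, exitonclick

def square(side):
    """Draw a square with the given side length."""
    for _ in range(4):
        forward(side)
        left(90)

# the function also has to be called, otherwise running the file draws nothing
square(100)
exitonclick()
```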
###Code
# ##### ## #######
# ############################
# #############################
# #################################
# | |___________
# | ( ) ( ) ( ) |________ /
# | ) ( ) ( ) ( | / /
# | ( ) ( ) ( ) | / /
# | ) ( ) ( ) ( | / /
# | ( ) ( ) ( ) |____/ /
# | ) ( ) ( ) ( |_____/
# | (___) (___) (___) |
# | |
# |_____________________________|
###Output
_____no_output_____ |
004_wavelet_transform.ipynb | ###Markdown
Convolution Convolution is often employed to obtain the degree of similarity between the analyzed signal and prototype functions. Its major advantage is that known atoms can be used to investigate unknown phenomena. It is an important concept for the Fourier transform, the wavelet transform, etc. The convolution of two functions $f(t)$ and $g(t)$ is:\begin{equation}h(t) = f(t)*g(t) = \int_{-\infty}^{\infty}f(\tau)g(t-\tau)d\tau\end{equation}For discrete time series, the convolution is a sum instead of an integral.\begin{equation}h(n) = \sum_{m=-\infty}^{\infty}f(m)g(n-m)\end{equation}The most common fast convolution algorithms use fast Fourier transform (FFT) algorithms via the circular convolution theorem.
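A quick, self-contained check of the discrete definition above, and of the FFT shortcut mentioned in the last sentence; the two small arrays are arbitrary examples:

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])

# direct evaluation of h(n) = sum_m f(m) g(n-m)
h = np.zeros(len(f) + len(g) - 1)
for n in range(len(h)):
    for m in range(max(0, n - len(g) + 1), min(n, len(f) - 1) + 1):
        h[n] += f[m] * g[n - m]

print(h)                  # [0.  1.  2.5 4.  1.5]
print(np.convolve(f, g))  # same result

# fast convolution via the circular convolution theorem (zero-padded FFTs)
h_fft = np.real(np.fft.ifft(np.fft.fft(f, len(h)) * np.fft.fft(g, len(h))))
print(np.allclose(h, h_fft))  # True
```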
###Code
import numpy as np
import matplotlib.pyplot as plt

def conv(f, g, same=True):
"""
discrete convolution:
i: [0, len(f)+len(g)-1)
j: [max(0, i-len(g)+1), min(i, len(f)-1)]
"""
len_f = len(f)
len_g = len(g)
len_h = len_f + len_g - 1
    h = np.zeros(len_h, dtype=complex)  # complex output: the wavelet atoms used later are complex-valued
for i in range(len_h):
for j in range(max(0, i-len_g+1), min(i, len_f-1)+1):
h[i] += f[j]*g[i-j]
if same:
l = (len(g)-1)//2
h = h[l:len(h)-(len(g)-l-1)]
return h
def gaussian(t, sigma, mu):
return np.e ** (-0.5*((t-mu)/sigma)**2)
def Plot(x,y,z):
# plot the morlet atom
plt.figure(figsize=(10,5))
ax = plt.gca()
ax.grid(color='#b7b7b7', linestyle='-', linewidth=0.5, alpha=0.5)
plt.plot(t,np.real(x), color='#333333', linestyle=':', alpha=0.5)
plt.plot(t,np.real(y), color='#333333', linestyle=':', alpha=0.5)
plt.plot(t,np.real(z), color='#121212', linestyle='-')
plt.plot(t,np.imag(z), color='#121212', linestyle='-.')
# sampling rate
Fs = 100
# sampling interval
Ts = 1.0/Fs
# time vector
t = np.arange(0, 10, Ts)
# frequency of the atom
w = 2*np.pi*1
x=np.e**(-1j*w*t)
# construct the morlet atom
y=gaussian(t, 1, 5)
morlet=x*y
Plot(x,y,morlet)
###Output
_____no_output_____
###Markdown
Morlet wavelet (standard version):\begin{equation}\psi(t) = \pi^{-1/4}e^{jwt}e^{-t^2/2}\end{equation}Daughter wavelet atoms:\begin{equation}\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\,\psi\left(\frac{t - b}{a}\right)\end{equation}
###Code
def morlet(N, s=1.0, w=6.0):
"""
Complex Morlet wavelet.
:param N: (int) Length of the wavelet
:param s: (float) Scaling factor. Default is 1.0
    :param w: (float) Omega0. Default is 6.0
:return: morlet: (ndarray) Morlet wavelet
"""
t = np.linspace(-1/s*2*np.pi, 1/s*2*np.pi, N)
return np.pi**(-0.25)*np.exp(1j*w*t)*np.exp(-0.5*(t**2))
def fourier(x):
"""
Fourier transform.
:param x: (ndarray) input signal
:return: X: (ndarray) one side frequency of input signal
"""
    n = len(x)  # length of the signal
X = np.fft.fft(x) # fast fourier transform
X /= n # normalization
X = X[range(n//2)] # one side frequency range
return abs(X)
def Plot(morlet_atoms, scales, morlet_freqs, freq):
alphas = [0.4, 0.7, 1]
plt.figure(figsize=(10,5))
plt.subplot(2,1,1)
ax = plt.gca()
ax.grid(color='#b7b7b7', linestyle='-', linewidth=0.5, alpha=0.5)
for i in range(len(morlet_atoms)):
plt.plot(np.real(morlet_atoms[i]), color="#333333", alpha=alphas[i], label=str(scales[i]))
plt.legend(ncol=4, loc='upper right', fontsize='small')
plt.subplot(2,1,2)
ax = plt.gca()
ax.grid(color='#b7b7b7', linestyle='-', linewidth=0.5, alpha=0.5)
for i in range(len(morlet_atoms)):
plt.plot(freq, morlet_freqs[i], color="#333333", alpha=alphas[i])
fs = 1
dt=1/fs
n = 256
T = n * dt
freq = np.arange(n)/T # two side frequencies
freq = freq[range(n//2)] # one side frequencies
scales = [0.5, 1, 2]
morlet_atoms = [1/np.sqrt(s) * morlet(n, s) for s in scales]
morlet_freqs = [fourier(x) for x in morlet_atoms]
Plot(morlet_atoms, scales, morlet_freqs, freq)
###Output
_____no_output_____
###Markdown
Continuous wavelet transform:\begin{equation}W_f(a,b) = \frac{1}{\sqrt{a}}\int_{-\infty}^{+\infty}f(t)\,\psi^*\left(\frac{t - b}{a}\right)dt\end{equation}While it is possible to calculate the wavelet transform using convolution in the time domain, it is considerably faster to do the calculations in Fourier space.
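As a sketch of the Fourier-space route (assuming SciPy is available), the per-scale loop in the next cell could be rewritten with `scipy.signal.fftconvolve`, which returns the same coefficients as the direct time-domain convolution but scales much better for long signals; `data`, `scales` and `morlet` refer to the objects defined in the surrounding cells:

```python
import numpy as np
from scipy.signal import fftconvolve

# same computation as the next cell, but each scale is convolved via FFTs
spectrum_fft = np.zeros([len(scales), len(data)], dtype=complex)
for ind, scale in enumerate(scales):
    atom = morlet(min(10 * scale, len(data)), scale)
    spectrum_fft[ind, :] = fftconvolve(data, atom, mode='same')
```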
###Code
t = np.linspace(-1, 1, 256, endpoint=False)
data = 2*np.sin(2 * np.pi * 1 * t) + np.sin(2 * np.pi * 5 * t)*(t>0)
scales = np.arange(1, 21)
wavelet_spectrum = np.zeros([len(scales), len(data)], dtype=complex)
for ind, scale in enumerate(scales):
wavelets = morlet(min(10 * scale, len(data)), scale)
    wavelet_spectrum[ind, :] = conv(data, wavelets)
wavelet_power_spectrum = abs(wavelet_spectrum)**2
fig = plt.figure(figsize=(10, 5))
fig.subplots_adjust(left=0.09, bottom=0.09, right=0.95, top=0.95,
hspace=0.05, wspace=0.05)
plt.subplot(2,1,1)
ax = plt.gca()
ax.grid(color='#b7b7b7', linestyle='-', linewidth=0.5, alpha=0.5)
plt.setp(ax.get_xticklabels(), visible=False)
plt.plot(data, '#333333')
plt.subplot(2,1,2)
plt.imshow(wavelet_spectrum.real, cmap='PRGn', aspect='auto',
vmax=abs(wavelet_spectrum.real).max(), vmin=-abs(wavelet_spectrum.real).max())
plt.show()
###Output
_____no_output_____ |
Deep_Learning/TensorFlow-aymericdamien/notebooks/3_NeuralNetworks/convolutional_network.ipynb | ###Markdown
Convolutional Neural Network ExampleBuild a convolutional neural network with TensorFlow.This example is using TensorFlow layers API, see 'convolutional_network_raw' examplefor a raw TensorFlow implementation with variables.- Author: Aymeric Damien- Project: https://github.com/aymericdamien/TensorFlow-Examples/These lessons are adapted from [aymericdamien TensorFlow tutorials](https://github.com/aymericdamien/TensorFlow-Examples) / [GitHub](https://github.com/aymericdamien/TensorFlow-Examples) which are published under the [MIT License](https://github.com/Hvass-Labs/TensorFlow-Tutorials/blob/master/LICENSE) which allows very broad use for both academic and commercial purposes. CNN OverviewConvolutional Networks [https://youtu.be/jajksuQW4mc](https://youtu.be/jajksuQW4mc)Introduction to Deep Learning: What Are Convolutional Neural Networks? [https://youtu.be/ixF5WNpTzCA](https://youtu.be/ixF5WNpTzCA)MIT 6.S191 Lecture 3: Convolutional Neural Networks [https://youtu.be/v5JvvbP0d44](https://youtu.be/v5JvvbP0d44) MNIST Dataset OverviewThis example is using MNIST handwritten digits. The dataset contains 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image (28x28 pixels) with values from 0 to 1. For simplicity, each image has been flattened and converted to a 1-D numpy array of 784 features (28*28).More info: http://yann.lecun.com/exdb/mnist/
###Code
from __future__ import division, print_function, absolute_import
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=False)
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
# Training Parameters
learning_rate = 0.001
num_steps = 2000
batch_size = 128
# Network Parameters
num_input = 784 # MNIST data input (img shape: 28*28)
num_classes = 10 # MNIST total classes (0-9 digits)
dropout = 0.75 # Dropout, probability to keep units
# Create the neural network
def conv_net(x_dict, n_classes, dropout, reuse, is_training):
# Define a scope for reusing the variables
with tf.variable_scope('ConvNet', reuse=reuse):
# TF Estimator input is a dict, in case of multiple inputs
x = x_dict['images']
# MNIST data input is a 1-D vector of 784 features (28*28 pixels)
# Reshape to match picture format [Height x Width x Channel]
# Tensor input become 4-D: [Batch Size, Height, Width, Channel]
x = tf.reshape(x, shape=[-1, 28, 28, 1])
# Convolution Layer with 32 filters and a kernel size of 5
conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)
# Max Pooling (down-sampling) with strides of 2 and kernel size of 2
conv1 = tf.layers.max_pooling2d(conv1, 2, 2)
# Convolution Layer with 64 filters and a kernel size of 3
conv2 = tf.layers.conv2d(conv1, 64, 3, activation=tf.nn.relu)
# Max Pooling (down-sampling) with strides of 2 and kernel size of 2
conv2 = tf.layers.max_pooling2d(conv2, 2, 2)
# Flatten the data to a 1-D vector for the fully connected layer
fc1 = tf.contrib.layers.flatten(conv2)
# Fully connected layer (in tf contrib folder for now)
fc1 = tf.layers.dense(fc1, 1024)
# Apply Dropout (if is_training is False, dropout is not applied)
fc1 = tf.layers.dropout(fc1, rate=dropout, training=is_training)
# Output layer, class prediction
out = tf.layers.dense(fc1, n_classes)
return out
# Define the model function (following TF Estimator Template)
def model_fn(features, labels, mode):
# Build the neural network
# Because Dropout have different behavior at training and prediction time, we
# need to create 2 distinct computation graphs that still share the same weights.
logits_train = conv_net(features, num_classes, dropout, reuse=False, is_training=True)
logits_test = conv_net(features, num_classes, dropout, reuse=True, is_training=False)
# Predictions
pred_classes = tf.argmax(logits_test, axis=1)
pred_probas = tf.nn.softmax(logits_test)
# If prediction mode, early return
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(mode, predictions=pred_classes)
# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits_train, labels=tf.cast(labels, dtype=tf.int32)))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op, global_step=tf.train.get_global_step())
# Evaluate the accuracy of the model
acc_op = tf.metrics.accuracy(labels=labels, predictions=pred_classes)
# TF Estimators requires to return a EstimatorSpec, that specify
# the different ops for training, evaluating, ...
estim_specs = tf.estimator.EstimatorSpec(
mode=mode,
predictions=pred_classes,
loss=loss_op,
train_op=train_op,
eval_metric_ops={'accuracy': acc_op})
return estim_specs
# Build the Estimator
model = tf.estimator.Estimator(model_fn)
# Define the input function for training
input_fn = tf.estimator.inputs.numpy_input_fn(
x={'images': mnist.train.images}, y=mnist.train.labels,
batch_size=batch_size, num_epochs=None, shuffle=True)
# Train the Model
model.train(input_fn, steps=num_steps)
# Evaluate the Model
# Define the input function for evaluating
input_fn = tf.estimator.inputs.numpy_input_fn(
x={'images': mnist.test.images}, y=mnist.test.labels,
batch_size=batch_size, shuffle=False)
# Use the Estimator 'evaluate' method
model.evaluate(input_fn)
# Predict single images
n_images = 4
# Get images from test set
test_images = mnist.test.images[:n_images]
# Prepare the input data
input_fn = tf.estimator.inputs.numpy_input_fn(
x={'images': test_images}, shuffle=False)
# Use the model to predict the images class
preds = list(model.predict(input_fn))
# Display
for i in range(n_images):
plt.imshow(np.reshape(test_images[i], [28, 28]), cmap='gray')
plt.show()
print("Model prediction:", preds[i])
###Output
INFO:tensorflow:Restoring parameters from /tmp/tmpdhd6F4/model.ckpt-2000
|
examples/ch09/snippets_ipynb/09_04selfcheck.ipynb | ###Markdown
 9.4 Self Check **3. _(IPython Session)_** In the `accounts.txt` file, update the last name `'Doe'` to `'Smith'`.**Answer:**
###Code
accounts = open('accounts.txt', 'r')
temp_file = open('temp_file.txt', 'w')
with accounts, temp_file:
for record in accounts:
account, name, balance = record.split()
if name != 'Doe':
temp_file.write(record)
else:
new_record = ' '.join([account, 'Smith', balance])
temp_file.write(new_record + '\n')
import os
os.remove('accounts.txt')
os.rename('temp_file.txt', 'accounts.txt')
# macOS/Linux Users: View file contents
!cat accounts.txt
# Windows Users: View file contents
!more accounts.txt
##########################################################################
# (C) Copyright 2019 by Deitel & Associates, Inc. and #
# Pearson Education, Inc. All Rights Reserved. #
# #
# DISCLAIMER: The authors and publisher of this book have used their #
# best efforts in preparing the book. These efforts include the #
# development, research, and testing of the theories and programs #
# to determine their effectiveness. The authors and publisher make #
# no warranty of any kind, expressed or implied, with regard to these #
# programs or to the documentation contained in these books. The authors #
# and publisher shall not be liable in any event for incidental or #
# consequential damages in connection with, or arising out of, the #
# furnishing, performance, or use of these programs. #
##########################################################################
###Output
_____no_output_____ |
98-dados/programacao_distribuida_com_DASK/04_dataframe.ipynb | ###Markdown
<img src="http://dask.readthedocs.io/en/latest/_images/dask_horizontal.svg" align="right" width="30%" alt="Dask logo"> Dask DataFrames We finished Chapter 02 by building a parallel dataframe computation over a directory of CSV files using `dask.delayed`. In this section we use `dask.dataframe` to automatically build similar computations, for the common case of tabular computations. Dask dataframes look and feel like Pandas dataframes but they run on the same infrastructure that powers `dask.delayed`. In this notebook we use the same airline data as before, but now rather than write for-loops we let `dask.dataframe` construct our computations for us. The `dask.dataframe.read_csv` function can take a globstring like `"data/nycflights/*.csv"` and build parallel computations on all of our data at once. When to use `dask.dataframe` Pandas is great for tabular datasets that fit in memory. Dask becomes useful when the dataset you want to analyze is larger than your machine's RAM. The demo dataset we're working with is only about 200MB, so that you can download it in a reasonable time, but `dask.dataframe` will scale to datasets much larger than memory. The `dask.dataframe` module implements a blocked parallel `DataFrame` object that mimics a large subset of the Pandas `DataFrame`. One Dask `DataFrame` is comprised of many in-memory pandas `DataFrames` separated along the index. One operation on a Dask `DataFrame` triggers many pandas operations on the constituent pandas `DataFrame`s in a way that is mindful of potential parallelism and memory constraints. **Related Documentation** * [Dask DataFrame documentation](http://dask.pydata.org/en/latest/dataframe.html) * [Pandas documentation](http://pandas.pydata.org/) **Main Take-aways** 1. Dask.dataframe should be familiar to Pandas users 2. The partitioning of dataframes is important for efficient queries Setup We create artificial data.
###Code
from prep import accounts_csvs
accounts_csvs(3, 1000000, 500)
import os
import dask
filename = os.path.join('data', 'accounts.*.csv')
###Output
_____no_output_____
###Markdown
This works just like `pandas.read_csv`, except on multiple csv files at once.
###Code
filename
import dask.dataframe as dd
df = dd.read_csv(filename)
# load and count number of rows
df.head()
len(df)
###Output
_____no_output_____
###Markdown
What happened here?
- Dask investigated the input path and found that there are three matching files
- a set of jobs was intelligently created for each chunk - one per original CSV file in this case
- each file was loaded into a pandas dataframe, had `len()` applied to it
- the subtotals were combined to give you the final grand total.

Real Data Let's try this with an extract of flights in the USA across several years. This data is specific to flights out of the three airports in the New York City area.
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]})
###Output
_____no_output_____
###Markdown
Notice that the representation of the dataframe object contains no data - Dask has just done enough to read the start of the first file, and infer the column names and types.
###Code
df
###Output
_____no_output_____
###Markdown
We can view the start and end of the data
###Code
df.head()
df.tail() # this fails
###Output
_____no_output_____
###Markdown
What just happened? Unlike `pandas.read_csv` which reads in the entire file before inferring datatypes, `dask.dataframe.read_csv` only reads in a sample from the beginning of the file (or first file if using a glob). These inferred datatypes are then enforced when reading all partitions. In this case, the datatypes inferred in the sample are incorrect. The first `n` rows have no value for `CRSElapsedTime` (which pandas infers as a `float`), and later on turn out to be strings (`object` dtype). Note that Dask gives an informative error message about the mismatch. When this happens you have a few options:
- Specify dtypes directly using the `dtype` keyword. This is the recommended solution, as it's the least error prone (better to be explicit than implicit) and also the most performant.
- Increase the size of the `sample` keyword (in bytes).
- Use `assume_missing` to make `dask` assume that columns inferred to be `int` (which don't allow missing values) are actually floats (which do allow missing values). In our particular case this doesn't apply.

In our case we'll use the first option and directly specify the `dtypes` of the offending columns (a sketch of the other two options follows below).
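For completeness, a sketch of what the other two options would look like; `sample` and `assume_missing` are both `dd.read_csv` keyword arguments, and neither is needed once explicit dtypes are given as in the next cell:

```python
# option 2: read a larger sample (in bytes) before inferring dtypes
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
                 parse_dates={'Date': [0, 1, 2]},
                 sample=10000000)

# option 3: let integer-looking columns be floats so they can hold missing values
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
                 parse_dates={'Date': [0, 1, 2]},
                 assume_missing=True)
```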
###Code
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': str,
'CRSElapsedTime': float,
'Cancelled': bool})
df.tail() # now works
###Output
_____no_output_____
###Markdown
Computations with `dask.dataframe` We compute the maximum of the `DepDelay` column. With just pandas, we would loop over each file to find the individual maximums, then find the final maximum over all the individual maximums:
```python
maxes = []
for fn in filenames:
    df = pd.read_csv(fn)
    maxes.append(df.DepDelay.max())

final_max = max(maxes)
```
We could wrap that `pd.read_csv` with `dask.delayed` so that it runs in parallel. Regardless, we're still having to think about loops, intermediate results (one per file) and the final reduction (`max` of the intermediate maxes). This is just noise around the real task, which pandas solves with
```python
df = pd.read_csv(filename, dtype=dtype)
df.DepDelay.max()
```
`dask.dataframe` lets us write pandas-like code that operates on larger-than-memory datasets in parallel.
###Code
%time df.DepDelay.max().compute()
###Output
_____no_output_____
###Markdown
This writes the delayed computation for us and then runs it. Some things to note:1. As with `dask.delayed`, we need to call `.compute()` when we're done. Up until this point everything is lazy.2. Dask will delete intermediate results (like the full pandas dataframe for each file) as soon as possible. - This lets us handle datasets that are larger than memory - This means that repeated computations will have to load all of the data in each time (run the code above again, is it faster or slower than you would expect?) As with `Delayed` objects, you can view the underlying task graph using the `.visualize` method:
###Code
# notice the parallelism
df.DepDelay.max().visualize()
###Output
_____no_output_____
###Markdown
Exercises In this section we do a few `dask.dataframe` computations. If you are comfortable with Pandas then these should be familiar. You will have to think about when to call `compute`. 1.) How many rows are in our dataset? If you aren't familiar with pandas, how would you check how many records are in a list of tuples?
###Code
# Your code here
%load solutions/03-dask-dataframe-rows.py
###Output
_____no_output_____
###Markdown
2.) In total, how many non-canceled flights were taken? With pandas, you would use [boolean indexing](https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing).
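One possible sketch, following the boolean-indexing hint (the `Cancelled` column was given an explicit `bool` dtype when the data was read in):

```python
# keep only rows where Cancelled is False, then count them
len(df[~df.Cancelled])
```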
###Code
# Your code here
%load solutions/03-dask-dataframe-non-cancelled.py
###Output
_____no_output_____
###Markdown
3.) In total, how many non-cancelled flights were taken from each airport? *Hint*: use [`df.groupby`](https://pandas.pydata.org/pandas-docs/stable/groupby.html).
###Code
# Your code here
%load solutions/03-dask-dataframe-non-cancelled-per-airport.py
###Output
_____no_output_____
###Markdown
4.) What was the average departure delay from each airport? Note, this is the same computation you did in the previous notebook (is this approach faster or slower?)
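One possible sketch; it assumes the origin airport is stored in an `Origin` column (check `df.columns` if unsure):

```python
# mean departure delay per origin airport
df.groupby('Origin').DepDelay.mean().compute()
```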
###Code
# Your code here
df.columns
%load solutions/03-dask-dataframe-delay-per-airport.py
###Output
_____no_output_____
###Markdown
5.) What day of the week has the worst average departure delay?
###Code
# Your code here
%load solutions/03-dask-dataframe-delay-per-day.py
###Output
_____no_output_____
###Markdown
Sharing Intermediate Results: When computing all of the above, we sometimes did the same operation more than once. For most operations, `dask.dataframe` hashes the arguments, allowing duplicate computations to be shared and only computed once. For example, let's compute the mean and standard deviation for departure delay of all non-canceled flights. Since dask operations are lazy, those values aren't the final results yet; they're just the recipe required to get the result. If we compute them with two calls to `compute`, there is no sharing of intermediate computations.
###Code
non_cancelled = df[~df.Cancelled]
mean_delay = non_cancelled.DepDelay.mean()
std_delay = non_cancelled.DepDelay.std()
%%time
mean_delay_res = mean_delay.compute()
std_delay_res = std_delay.compute()
###Output
_____no_output_____
###Markdown
But let's try passing both to a single `compute` call.
###Code
%%time
mean_delay_res, std_delay_res = dask.compute(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
Using `dask.compute` takes roughly 1/2 the time. This is because the task graphs for both results are merged when calling `dask.compute`, allowing shared operations to only be done once instead of twice. In particular, using `dask.compute` only does the following once: the calls to `read_csv`, the filter (`df[~df.Cancelled]`), and some of the necessary reductions (`sum`, `count`). To see what the merged task graphs between multiple results look like (and what's shared), you can use the `dask.visualize` function (we might want to use `filename='graph.pdf'` to zoom in on the graph better):
###Code
dask.visualize(mean_delay, std_delay)
###Output
_____no_output_____
###Markdown
How does this compare to Pandas? Pandas is more mature and fully featured than `dask.dataframe`. If your data fits in memory then you should use Pandas. The `dask.dataframe` module gives you a limited `pandas` experience when you operate on datasets that don't fit comfortably in memory. During this tutorial we provide a small dataset consisting of a few CSV files. This dataset is 45MB on disk and expands to about 400MB in memory (the difference is caused by using `object` dtype for strings). This dataset is small enough that you would normally use Pandas. We've chosen this size so that exercises finish quickly. Dask.dataframe only really becomes meaningful for problems significantly larger than this, when Pandas breaks with the dreaded MemoryError: ... Furthermore, the distributed scheduler allows the same dataframe expressions to be executed across a cluster. To enable massive "big data" processing, one could execute data ingestion functions such as `read_csv`, where the data is held on storage accessible to every worker node (e.g., Amazon's S3), and because most operations begin by selecting only some columns, transforming and filtering the data, only relatively small amounts of data need to be communicated between the machines. Dask.dataframe operations use `pandas` operations internally. Generally they run at about the same speed except in the following two cases: 1. Dask introduces a bit of overhead, around 1ms per task. This is usually negligible. 2. When Pandas releases the GIL (coming to `groupby` in the next version) `dask.dataframe` can call several pandas operations in parallel within a process, increasing speed somewhat proportional to the number of cores. For operations which don't release the GIL, multiple processes would be needed to get the same speedup. Dask DataFrame Data Model: For the most part, a Dask DataFrame feels like a pandas DataFrame. So far, the biggest difference we've seen is that Dask operations are lazy; they build up a task graph instead of executing immediately (more details coming in [Schedulers](04-schedulers.ipynb)). This lets Dask do operations in parallel and out of core. In [Dask Arrays](02-dask-arrays.ipynb), we saw that a `dask.array` was composed of many NumPy arrays, chunked along one or more dimensions. It's similar for `dask.dataframe`: a Dask DataFrame is composed of many pandas DataFrames. For `dask.dataframe` the chunking happens only along the index. We call each chunk a *partition*, and the upper / lower bounds are *divisions*. Dask *can* store information about the divisions. We'll cover this in more detail in [Distributed DataFrames](05-distributed-dataframes-and-efficiency.ipynb). For now, partitions come up when you write custom functions to apply to Dask DataFrames. Converting `CRSDepTime` to a timestamp: This dataset stores timestamps as `HHMM`, which are read in as integers in `read_csv`:
###Code
crs_dep_time = df.CRSDepTime.head(10)
crs_dep_time
###Output
_____no_output_____
###Markdown
To convert these to timestamps of scheduled departure time, we need to convert these integers into `pd.Timedelta` objects, and then combine them with the `Date` column. In pandas we'd do this using the `pd.to_timedelta` function and a bit of arithmetic:
###Code
import pandas as pd
# Get the first 10 dates to complement our `crs_dep_time`
date = df.Date.head(10)
# Get hours as an integer, convert to a timedelta
hours = crs_dep_time // 100
hours_timedelta = pd.to_timedelta(hours, unit='h')
# Get minutes as an integer, convert to a timedelta
minutes = crs_dep_time % 100
minutes_timedelta = pd.to_timedelta(minutes, unit='m')
# Apply the timedeltas to offset the dates by the departure time
departure_timestamp = date + hours_timedelta + minutes_timedelta
departure_timestamp
###Output
_____no_output_____
###Markdown
Custom code and Dask DataFrame: We could swap out `pd.to_timedelta` for `dd.to_timedelta` and do the same operations on the entire dask DataFrame. But let's say that Dask hadn't implemented a `dd.to_timedelta` that works on Dask DataFrames. What would you do then? `dask.dataframe` provides a few methods to make applying custom functions to Dask DataFrames easier: [`map_partitions`](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.map_partitions), [`map_overlap`](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.map_overlap), and [`reduction`](http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.DataFrame.reduction). Here we'll just be discussing `map_partitions`, which we can use to implement `to_timedelta` on our own:
###Code
# Look at the docs for `map_partitions`
help(df.CRSDepTime.map_partitions)
###Output
_____no_output_____
###Markdown
The basic idea is to apply a function that operates on a DataFrame to each partition. In this case, we'll apply `pd.to_timedelta`.
###Code
hours = df.CRSDepTime // 100
# hours_timedelta = pd.to_timedelta(hours, unit='h')
hours_timedelta = hours.map_partitions(pd.to_timedelta, unit='h')
minutes = df.CRSDepTime % 100
# minutes_timedelta = pd.to_timedelta(minutes, unit='m')
minutes_timedelta = minutes.map_partitions(pd.to_timedelta, unit='m')
departure_timestamp = df.Date + hours_timedelta + minutes_timedelta
departure_timestamp
departure_timestamp.head()
###Output
_____no_output_____
###Markdown
Exercise: Rewrite the above to use a single call to `map_partitions`. This will be slightly more efficient than two separate calls, as it reduces the number of tasks in the graph.
###Code
def compute_departure_timestamp(df):
# TODO
departure_timestamp = df.map_partitions(compute_departure_timestamp)
departure_timestamp.head()
%load solutions/03-dask-dataframe-map-partitions.py
###Output
_____no_output_____ |
jupyterbook/content/code_gallery/data_access_notebooks/2016-11-15-glider_data_example.ipynb | ###Markdown
Plotting Glider data with Python tools. Created: 2016-11-15. In this notebook we demonstrate how to obtain and plot glider data using iris and cartopy. We will explore data from the Rutgers University RU29 [Challenger](http://challenger.marine.rutgers.edu) glider that was launched from Ubatuba, Brazil on June 23, 2015 to travel across the Atlantic Ocean. After 282 days at sea, the Challenger was picked up off the coast of South Africa on March 31, 2016. For more information on this groundbreaking excursion see: [https://marine.rutgers.edu/main/announcements/the-challenger-glider-mission-south-atlantic-mission-complete](https://marine.rutgers.edu/main/announcements/the-challenger-glider-mission-south-atlantic-mission-complete). Data collected from this glider mission are available on the IOOS Glider DAC THREDDS via OPeNDAP.
###Code
url = (
"https://data.ioos.us/thredds/dodsC/deployments/rutgers/"
"ru29-20150623T1046/ru29-20150623T1046.nc3.nc"
)
import iris
glider = iris.load_raw(url)
print(glider)
###Output
0: longitude / (degrees) (-- : 1; -- : 542; -- : 483)
1: sea_water_electrical_conductivity / (S m-1) (-- : 1; -- : 542; -- : 483)
2: longitude status_flag / (no_unit) (-- : 1; -- : 542; -- : 483)
3: eastward_sea_water_velocity / (m s-1) (-- : 1; -- : 542)
4: time / (seconds since 1970-01-01T00:00:00Z) (-- : 1; -- : 542; -- : 483)
5: sea_water_pressure / (dbar) (-- : 1; -- : 542; -- : 483)
6: latitude / (degrees) (-- : 1; -- : 542; -- : 483)
7: longitude status_flag / (no_unit) (-- : 1; -- : 542; -- : 483)
8: sea_water_salinity / (1e-3) (-- : 1; -- : 542; -- : 483)
9: latitude status_flag / (no_unit) (-- : 1; -- : 542; -- : 483)
10: Platform Metadata / (1) (-- : 1; -- : 542; -- : 483)
11: time status_flag / (no_unit) (-- : 1; -- : 542; -- : 483)
12: time status_flag / (no_unit) (-- : 1; -- : 542; -- : 483)
13: precise_lon Variable Quality Flag / (no_unit) (-- : 1; -- : 542; -- : 483)
14: northward_sea_water_velocity status_flag / (no_unit) (-- : 1; -- : 542; -- : 483)
15: precise_lat Variable Quality Flag / (no_unit) (-- : 1; -- : 542; -- : 483)
16: sea_water_density / (kg m-3) (-- : 1; -- : 542; -- : 483)
17: latitude status_flag / (no_unit) (-- : 1; -- : 542; -- : 483)
18: northward_sea_water_velocity / (m s-1) (-- : 1; -- : 542)
19: Trajectory Name / (unknown) (-- : 1; -- : 64)
20: CTD Metadata / (1) (-- : 1; -- : 542; -- : 483)
21: WMO ID / (unknown) (-- : 1; -- : 64)
22: Profile ID / (unknown) (-- : 1; -- : 542)
23: sea_water_temperature / (Celsius) (-- : 1; -- : 542; -- : 483)
24: eastward_sea_water_velocity status_flag / (no_unit) (-- : 1; -- : 542; -- : 483)
###Markdown
`Iris` requires the data to adhere strictly to the `CF-1.6` data model. That is why we see all those warnings about `Missing CF-netCDF ancillary data variable`. Note that if the data is not CF at all, `iris` will refuse to load it! On the other hand, the advantage of following the `CF-1.6` conventions is that the `iris` cube has the proper metadata attached to it. We do not need to extract the coordinates or any other information separately. All we need to do is request the phenomena we want, in this case `sea_water_density`, `sea_water_temperature` and `sea_water_salinity`.
###Code
temp = glider.extract_cube("sea_water_temperature")
salt = glider.extract_cube("sea_water_salinity")
dens = glider.extract_cube("sea_water_density")
print(temp)
###Output
sea_water_temperature / (Celsius) (-- : 1; -- : 542; -- : 483)
Auxiliary coordinates:
latitude x x -
longitude x x -
time x x -
depth x x x
Ancillary variables:
sea_water_temperature status_flag x x x
Attributes:
Conventions Unidata Dataset Discovery v1.0, COARDS, CF-1.6
DODS.dimName wmo_id_strlen
DODS.strlen 7
Easternmost_Easting 13.591759500847711
Metadata_Conventions Unidata Dataset Discovery v1.0, COARDS, CF-1.6
Northernmost_Northing -25.492669785275247
Southernmost_Northing -37.340890399992446
Westernmost_Easting -44.92195338434748
_ChunkSizes 1
acknowledgment This deployment supported by funding from the G. Unger Vetelsen Foundation...
actual_range array([ 3.744 , 24.5387], dtype=float32)
cdm_data_type TrajectoryProfile
cdm_profile_variables time_uv,lat_uv,lon_uv,u,v,profile_id,time,latitude,longitude
cdm_trajectory_variables trajectory,wmo_id
colorBarMaximum 32.0
colorBarMinimum 0.0
comment Glider operatored by the Rutgers University Coastal Ocean Observation Lab,...
contributor_name Scott Glenn, Oscar Schofield, Josh Kohut, Antonio Ramos, Sebastian Swart,...
contributor_role Principal Investigator, Principal Investigator, Principal Investigator,...
creator_email [email protected]
creator_name John Kerfoot
creator_url http://rucool.marine.rutgers.edu
date_created 2016-03-31T06:16:37Z
date_issued 2016-03-31T06:16:37Z
featureType TrajectoryProfile
format_version IOOS_Glider_NetCDF_v2.0.nc
geospatial_lat_max -25.492669785275247
geospatial_lat_min -37.340890399992446
geospatial_lat_units degrees_north
geospatial_lon_max 13.591759500847711
geospatial_lon_min -44.92195338434748
geospatial_lon_units degrees_east
geospatial_vertical_max 983.17
geospatial_vertical_min 0.61
geospatial_vertical_positive down
geospatial_vertical_units m
gts_ingest true
history '2016-03-31T06:16:37Z /home/kerfoot/slocum/matlab/spt/export/nc/IOOS/DAC/writeIoosGliderFlatNc.m\n2021-10-15T13:26:31Z...
id ru29-20160331T0855
infoUrl https://gliders.ioos.us/erddap/
institution Rutgers University
instrument instrument_ctd
ioos_category Temperature
ioos_dac_checksum fe452cc3a1bd121d6ba03cd41c4c004c
ioos_dac_completed True
keywords AUVS > Autonomous Underwater Vehicles, Earth Science > Oceans > Ocean Pressure...
keywords_vocabulary GCMD Science Keywords
license This data may be redistributed and used without restriction. Data provided...
naming_authority edu.rutgers.marine
observation_type measured
platform platform
platform_type Slocum Glider
processing_level Timestamp and gps positions checked for validity.
project Challenger
publisher_email [email protected]
publisher_name John Kerfoot
publisher_url http://rucool.marine.rutgers.edu
sea_name South Atlantic Ocean
source Observational data from a profiling glider
sourceUrl (local files)
standard_name_vocabulary CF-v25
subsetVariables trajectory,wmo_id,time_uv,lat_uv,lon_uv,u,v,profile_id,time,latitude,l...
summary "Third leg of the ru29 Challenger mission from Brazil to\n South...
time_coverage_end 2016-03-31T09:25:31Z
time_coverage_start 2015-06-23T10:57:59Z
title ru29-20150623T1046
valid_max 40.0
valid_min -5.0
###Markdown
Glider data is not trivial to visualize. The very first thing to do is to plot the glider track to check its path.
###Code
import numpy.ma as ma
T = temp.data.squeeze()
S = salt.data.squeeze()
D = dens.data.squeeze()
x = temp.coord(axis="X").points.squeeze()
y = temp.coord(axis="Y").points.squeeze()
z = temp.coord(axis="Z")
t = temp.coord(axis="T")
vmin, vmax = z.attributes["actual_range"]
z = ma.masked_outside(z.points.squeeze(), vmin, vmax)
t = t.units.num2date(t.points.squeeze())
location = y.mean(), x.mean() # Track center.
locations = list(zip(y, x)) # Track points.
import folium
tiles = (
"http://services.arcgisonline.com/arcgis/rest/services/"
"World_Topo_Map/MapServer/MapServer/tile/{z}/{y}/{x}"
)
m = folium.Map(location, tiles=tiles, attr="ESRI", zoom_start=4)
folium.CircleMarker(locations[0], fill_color="green", radius=10).add_to(m)
folium.CircleMarker(locations[-1], fill_color="red", radius=10).add_to(m)
line = folium.PolyLine(
locations=locations,
color="orange",
weight=8,
opacity=0.6,
popup="Slocum Glider ru29 Deployed on 2015-06-23",
).add_to(m)
m
###Output
_____no_output_____
###Markdown
One might be interested in the individual profiles of each dive. Let's extract the deepest dive and plot it.
###Code
import numpy as np
# Find the deepest profile.
idx = np.nonzero(~T[:, -1].mask)[0][0]
%matplotlib inline
import matplotlib.pyplot as plt
ncols = 3
fig, (ax0, ax1, ax2) = plt.subplots(
sharey=True, sharex=False, ncols=ncols, figsize=(3.25 * ncols, 5)
)
kw = dict(linewidth=2, color="cornflowerblue", marker=".")
ax0.plot(T[idx], z[idx], **kw)
ax1.plot(S[idx], z[idx], **kw)
ax2.plot(D[idx] - 1000, z[idx], **kw)
def spines(ax):
ax.spines["right"].set_color("none")
ax.spines["bottom"].set_color("none")
ax.xaxis.set_ticks_position("top")
ax.yaxis.set_ticks_position("left")
[spines(ax) for ax in (ax0, ax1, ax2)]
ax0.set_ylabel("Depth (m)")
ax0.set_xlabel("Temperature ({})".format(temp.units))
ax0.xaxis.set_label_position("top")
ax1.set_xlabel("Salinity ({})".format(salt.units))
ax1.xaxis.set_label_position("top")
ax2.set_xlabel("Density ({})".format(dens.units))
ax2.xaxis.set_label_position("top")
ax0.invert_yaxis()
###Output
_____no_output_____
###Markdown
We can also visualize the whole track as a cross-section.
###Code
import numpy as np
import seawater as sw
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
def distance(x, y, units="km"):
dist, pha = sw.dist(x, y, units=units)
return np.r_[0, np.cumsum(dist)]
def plot_glider(
x, y, z, t, data, cmap=plt.cm.viridis, figsize=(9, 3.75), track_inset=False
):
fig, ax = plt.subplots(figsize=figsize)
dist = distance(x, y, units="km")
z = np.abs(z)
dist, z = np.broadcast_arrays(dist[..., np.newaxis], z)
cs = ax.pcolor(dist, z, data, cmap=cmap, snap=True)
kw = dict(orientation="vertical", extend="both", shrink=0.65)
cbar = fig.colorbar(cs, **kw)
if track_inset:
axin = inset_axes(ax, width="25%", height="30%", loc=4)
axin.plot(x, y, "k.")
start, end = (x[0], y[0]), (x[-1], y[-1])
kw = dict(marker="o", linestyle="none")
axin.plot(*start, color="g", **kw)
axin.plot(*end, color="r", **kw)
axin.axis("off")
ax.invert_yaxis()
ax.set_xlabel("Distance (km)")
ax.set_ylabel("Depth (m)")
return fig, ax, cbar
from palettable import cmocean
haline = cmocean.sequential.Haline_20.mpl_colormap
thermal = cmocean.sequential.Thermal_20.mpl_colormap
dense = cmocean.sequential.Dense_20.mpl_colormap
fig, ax, cbar = plot_glider(x, y, z, t, S, cmap=haline, track_inset=False)
cbar.ax.set_xlabel("(g kg$^{-1}$)")
cbar.ax.xaxis.set_label_position("top")
ax.set_title("Salinity")
fig, ax, cbar = plot_glider(x, y, z, t, T, cmap=thermal, track_inset=False)
cbar.ax.set_xlabel(r"($^\circ$C)")
cbar.ax.xaxis.set_label_position("top")
ax.set_title("Temperature")
fig, ax, cbar = plot_glider(x, y, z, t, D - 1000, cmap=dense, track_inset=False)
cbar.ax.set_xlabel(r"(kg m$^{-3}$C)")
cbar.ax.xaxis.set_label_position("top")
ax.set_title("Density")
print("Data collected from {} to {}".format(t[0], t[-1]))
###Output
/tmp/ipykernel_19098/3856466220.py:19: MatplotlibDeprecationWarning: shading='flat' when X and Y have the same dimensions as C is deprecated since 3.3. Either specify the corners of the quadrilaterals with X and Y, or pass shading='auto', 'nearest' or 'gouraud', or set rcParams['pcolor.shading']. This will become an error two minor releases later.
cs = ax.pcolor(dist, z, data, cmap=cmap, snap=True)
|
150801_test_interrupt_speed/150801_serial_to_pyboard.ipynb | ###Markdown
Table of Contents
###Code
%%javascript
IPython.load_extensions('calico-document-tools');
from __future__ import division
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import serial
import time
!ls /dev/tty.usbmodem*
!ls /dev/cu.usbmodem*
pyboard = serial.Serial('/dev/tty.usbmodem1452', 115200)
pyboard.write('start\n')
for i in range(10):
print(pyboard.readline())
pyboard.flushInput()
pyboard.write('stop\n')
print(pyboard.readline())
pyboard.close()
pyboard = serial.Serial('/dev/tty.usbmodem1452', 115200)
pyboard.write('start')
for i in range(10):
print(pyboard.readline())
pyboard.flushInput()
pyboard.write('stop')
print(pyboard.readline())
pyboard.close()
pyboard = serial.Serial('/dev/tty.usbmodem1452', 115200, timeout=2)
pyboard.write('start')
for i in range(10):
print(pyboard.readline())
pyboard.flushInput()
pyboard.write('stop')
print(pyboard.readline())
pyboard.close()
pyboard = serial.Serial('/dev/tty.usbmodem1452', 115200, timeout=2)
pyboard.write('start')
for i in range(10):
print(pyboard.readline().strip())
pyboard.flushInput()
pyboard.write('stop')
print(pyboard.readline())
pyboard.close()
pyboard = serial.Serial('/dev/tty.usbmodem1452', 115200, timeout=2)
pyboard.write('start')
for i in range(10):
print(pyboard.readline().strip().split(','))
pyboard.flushInput()
pyboard.write('stop')
print(pyboard.readline())
pyboard.close()
pyboard = serial.Serial('/dev/tty.usbmodem1452', 115200)
pyboard.baudrate
pyboard.bytesize
pyboard.getBaudrate
pyboard.getBaudrate()
pyboard.getPort()
pyboard.getSupportedBaudrates()
pyboard.isOpen()
pyboard.isatty()
temp = []
for i in range(20):
if i == 0:
pyboard.flushInput()
temp.append( pyboard.readline().strip().split(',') )
print(temp)
class serial_speed_test(object):
def __init__(self, freq_Hz):
self.tick = 0
self.tick_ready = False
self.freq_Hz = freq_Hz
#tim1 = pyb.Timer(1)
#tim1.init(freq=freq_Hz)
#tim1.callback(self.serial_speed_test_cb)
def serial_speed_test_cb(self):
self.tick_ready = True
#print(micros_timer.counter(), ',', 40*self.tick)
self.tick = (self.tick + 1) % 100
sst = serial_speed_test(2)
print(sst.tick_ready)
s = "%d,%d\n" % (sst.tick, sst.freq_Hz)
print(s)
s_temp = 'start\n'
s_temp.startswith('start')
###Output
_____no_output_____ |
2_dropout_batchnorm_cnn0.ipynb | ###Markdown
###Code
%matplotlib inline
!ls -l
!cp ./drive/MyDrive/training_data.zip .
!unzip training_data.zip
import glob
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img
IMG_DIM = (50, 50)
train_files = glob.glob('training_data/*')
train_imgs = [img_to_array(load_img(img, target_size=IMG_DIM)) for img in train_files]
train_imgs = np.array(train_imgs)
train_labels = [fn.split('/')[1].split('.')[0].strip() for fn in train_files]
validation_files = glob.glob('validation_data/*')
validation_imgs = [img_to_array(load_img(img, target_size=IMG_DIM)) for img in validation_files]
validation_imgs = np.array(validation_imgs)
validation_labels = [fn.split('/')[1].split('.')[0].strip() for fn in validation_files]
print('Train dataset shape:', train_imgs.shape,
'\tValidation dataset shape:', validation_imgs.shape)
train_imgs_scaled = train_imgs.astype('float32')
validation_imgs_scaled = validation_imgs.astype('float32')
train_imgs_scaled /= 255
validation_imgs_scaled /= 255
print(train_imgs[90].shape)
array_to_img(train_imgs[90])
batch_size = 50
num_classes = 2
epochs = 50
input_shape = (50, 50, 3)
# encode text category labels
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(train_labels)
train_labels_enc = le.transform(train_labels)
validation_labels_enc = le.transform(validation_labels)
print(train_labels[1495:1505], train_labels_enc[1495:1505])
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, BatchNormalization
from tensorflow.keras.models import Sequential
from tensorflow.keras import optimizers
###Output
_____no_output_____
###Markdown
Model Case I
###Code
model = Sequential()
model.add(Conv2D(16, kernel_size=(3, 3), activation='relu', padding="same",
input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', padding="same"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(128, kernel_size=(3, 3), activation='relu', padding="same"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer= 'adam', # optimizers.RMSprop(lr=0.0001)
metrics=['accuracy'])
model.summary()
history = model.fit(x=train_imgs_scaled, y=train_labels_enc,
validation_data=(validation_imgs_scaled, validation_labels_enc),
batch_size=batch_size,
epochs=epochs,
verbose=1)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
t = f.suptitle('Basic CNN Performance', fontsize=12)
f.subplots_adjust(top=0.85, wspace=0.3)
epoch_list = list(range(1,51))
ax1.plot(epoch_list, history.history['accuracy'], label='Train Accuracy')
ax1.plot(epoch_list, history.history['val_accuracy'], label='Validation Accuracy')
ax1.set_xticks(np.arange(0, 51, 5))
ax1.set_ylabel('Accuracy Value')
ax1.set_xlabel('Epoch')
ax1.set_title('Accuracy')
l1 = ax1.legend(loc="best")
ax2.plot(epoch_list, history.history['loss'], label='Train Loss')
ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss')
ax2.set_xticks(np.arange(0, 51, 5))
ax2.set_ylabel('Loss Value')
ax2.set_xlabel('Epoch')
ax2.set_title('Loss')
l2 = ax2.legend(loc="best")
model.save('2-dropout-batchnorm-cnn.h5')
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
|
TimeSeries_Click_Prediction.ipynb | ###Markdown
**Inference:** There are outliers in the distribution of target values **Fitting kernel density estimation plots to understand the distribution of target variable 'bookings'**
###Code
kernel_options = ["biw", "cos", "epa", "gau", "tri", "triw"]
plt.figure(figsize=(18,5))
for kern in kernel_options:
sns.kdeplot(train_i.bookings,kernel=kern,label=kern)
###Output
/home/p_abhijeet666/anaconda3/envs/fastai/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
**Inference:** We could choose the limit for outliers to be approximately 50. These could be mass bookings made during New Year's Eve or during long holiday periods, which are not representative of the whole year.
###Code
train_i[train_i.bookings>-1].shape,train_i[train_i.bookings>50].shape
###Output
_____no_output_____
###Markdown
**5515 values out of 595848 are dropped; that is, 0.9% of the values are considered to be outliers.** **Dropping the rows from the training dataset whose target variable 'bookings' value is more than 50.**
###Code
train_i.drop(index=train_i[train_i.bookings>50].index,inplace=True)
df_1=pd.DataFrame()
df_2=pd.DataFrame()
df_1['a']=train_i.week_of_year[train_i['yyear']==2017]
df_1['b']= train_i.bookings[train_i['yyear']==2017]
df_2['a']=train_i.week_of_year[train_i['yyear']==2018]
df_2['b']= train_i.bookings[train_i['yyear']==2018]
plt.figure(figsize=(15,8))
ax=sns.lineplot(x="a", y="b",data=df_1)
sns.lineplot(x="a", y="b",data=df_2,ax=ax)
###Output
/home/p_abhijeet666/anaconda3/envs/fastai/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
**Inferences:** - It is wise to choose the training set as the data from year 2017 and the cross-validation set from year 2018 instead of a regular 80-20 split of the dataset. - It is evident from the graph above that the trend of bookings follows approximately the same pattern in years 2017 and 2018.
###Code
scaler = MinMaxScaler()
X_train=scaler.fit_transform( train_i[train_i['yyear']==2017].drop(columns=['bookings','id']))
Y_train = train_i.bookings[train_i['yyear']==2017]
X_cv = scaler.fit_transform(train_i[train_i['yyear']==2018].drop(columns=['bookings','id']))
Y_cv = train_i.bookings[train_i['yyear']==2018]
X_test = scaler.fit_transform(test_i.drop(columns=['bookings','id']))
param_test ={'num_leaves': sp_randint(6, 50),
'min_child_samples': sp_randint(100, 500),
'min_child_weight': [1e-5, 1e-3, 1e-2, 1e-1, 1, 1e1, 1e2, 1e3, 1e4],
'subsample': sp_uniform(loc=0.2, scale=0.8),
'colsample_bytree': sp_uniform(loc=0.4, scale=0.6),
'reg_alpha': [0, 1e-1, 1, 2, 5, 7, 10, 50, 100],
'reg_lambda': [0, 1e-1, 1, 5, 10, 20, 50, 100]}
fit_params={"early_stopping_rounds":30,
"eval_metric" : 'rmse',
"eval_set" : [(X_cv,Y_cv)],
'eval_names': ['valid'],
#'callbacks': [lgb.reset_parameter(learning_rate=learning_rate_010_decay_power_099)],
'verbose': 100,
'categorical_feature': 'auto',#'feature_name':X_train.columns.tolist()
}
n_HP_points_to_test = 100
model = lgb.LGBMRegressor(max_depth=5, random_state=314, silent=True, metric='rmse', n_jobs=4, n_estimators=5000)
gs = RandomizedSearchCV(
estimator=model, param_distributions=param_test,
n_iter=n_HP_points_to_test,
scoring='r2',
cv=3,
refit=True,
random_state=314,
verbose=True)
###Output
_____no_output_____
###Markdown
- The above randomized grid search was run in another copy of the notebook to save time. - The best score is achieved with the following parameters: {'colsample_bytree': 0.5748561875650441, 'min_child_samples': 148, 'min_child_weight': 100.0, 'num_leaves': 33, 'reg_alpha': 50, 'reg_lambda': 5, 'subsample': 0.5431696497636938}
###Code
model = lgbm.LGBMRegressor(
objective='regression',
max_depth=-1,
learning_rate=0.007,
n_estimators=30000,
min_child_samples=148,
subsample=0.5431696497636938,
colsample_bytree=0.5748561875650441,
reg_alpha=50,
reg_lambda=5,
random_state=np.random.randint(10e6),min_child_weight=100,num_leaves=33)
model.fit(X_train,Y_train,eval_set=(X_cv,Y_cv), eval_names=('fit', 'val'),
eval_metric= 'rmse',
early_stopping_rounds=200,
#feature_name= X_train.columns.tolist(),
verbose=False)
def rmse(x,y):
return np.sqrt(((x-y)**2).mean())
y_pred_on_cv = model.predict(X_cv, num_iteration= model.best_iteration_)
print(rmse(Y_cv,y_pred_on_cv))
feature_importances = pd.DataFrame()
feature_importances['features'] = train_i.drop(columns=['bookings','id']).columns
feature_importances['importance']=model.feature_importances_
feature_importances.sort_values(by='importance',ascending=True)
###Output
_____no_output_____
###Markdown
- It is evident from the table above that the number of clicks has the highest importance.
###Code
y_pred_on_test = model.predict(X_test, num_iteration= model.best_iteration_)
submission=pd.DataFrame()
submission['id']=test_i.id
submission['pred_bookings']=y_pred_on_test
submission.head()
submission.to_csv('submission.csv',index=False)
###Output
_____no_output_____ |
goget_knn.ipynb | ###Markdown
GoGet Recommendation System
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.metrics import roc_curve, roc_auc_score
import seaborn as sns
###Output
_____no_output_____
###Markdown
Reading the Data
###Code
data = pd.read_csv('/home/dhanush/Downloads/Test_dataset_3.csv')
###Output
_____no_output_____
###Markdown
Printing the first 5 entries
###Code
data.head()
###Output
_____no_output_____
###Markdown
Printing the information of the Dataset
###Code
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 100 entries, 0 to 99
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Shop Name 100 non-null object
1 Avalability 100 non-null int64
2 Distance 100 non-null float64
3 Rating 100 non-null float64
4 Total bill 100 non-null int64
dtypes: float64(2), int64(2), object(1)
memory usage: 4.0+ KB
###Markdown
To understand the range of rating values, a bar graph is plotted.
###Code
rating_counts = data.Rating.value_counts()
plt.figure(figsize=(20,5))
sns.barplot(x=rating_counts.index, y=rating_counts.values, palette="Oranges")
plt.xlabel("Rating value")
plt.ylabel("Counts")
plt.title("Most Common Rating Counts");
###Output
_____no_output_____
###Markdown
To understand the availability, another bar graph is plotted.
###Code
Availability_counts = data.Avalability.value_counts()
plt.figure(figsize=(20,5))
sns.barplot(x=Availability_counts.index, y=Availability_counts.values, palette="Greens")
plt.xlabel("Avalability")
plt.ylabel("Counts")
plt.title("Availability Counts");
#Converting the dataframe into a list.
df = data.values.tolist()
#Finding the median of the Total Bill across the shops.
total_bill = []
for i in range(len(df)):
if(df[i][1]==1):
total_bill.append(df[i][4])
median = sorted(total_bill)[len(total_bill)//2]
print("Median is "+str(median))
#Calculating Final score using the GoGet formula.
for i in range(len(df)):
    if df[i][1] == 1:                  # item is available at this shop
        d = 5 // df[i][2]              # distance bonus (larger for nearby shops)
        # GoGet final score: weighted rating + difference from the median bill + distance bonus
        fs = df[i][3] * 20 + (median - df[i][4]) + d
        df[i].append(fs)
    else:
        df[i].append(0)                # shops without availability score zero
dataset = pd.DataFrame(df, columns = ['Shop_name', 'Availability','Distance','Rating','Total_bill','Final_score'])
###Output
_____no_output_____
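###Markdown
In other words, for shops where the item is available the loop above computes Final_score = 20 * Rating + (median_bill - Total_bill) + floor(5 / Distance), and 0 otherwise. The same rule can be written as a small helper (the function name is ours, not from the notebook; it assumes `median` from the cell above):
###Code
def goget_score(available, distance, rating, total_bill):
    # Restates the loop above as a function; the weighting is the notebook author's choice.
    if available != 1:
        return 0
    return rating * 20 + (median - total_bill) + 5 // distance
###Output
_____no_output_____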
###Markdown
Printing the new dataset along with the Final score
###Code
dataset.head()
dataset.info()
df = dataset.values.tolist()
###Output
_____no_output_____
###Markdown
Assigning classes to each shop based on the Final score.
###Code
for i in range(len(df)):
if(df[i][5]>=100):
df[i].append('1')
elif(80<=df[i][5]<100):
df[i].append('2')
elif(60<=df[i][5]<80):
df[i].append('3')
elif(40<=df[i][5]<60):
df[i].append('4')
elif(0<=df[i][5]<40):
df[i].append('5')
dataset = pd.DataFrame(df, columns = ['Shop_name', 'Availability','Distance','Rating','Total_bill','Final_score','Class'])
dataset.head()
X=dataset.iloc[:,5:-1].values
y=dataset.iloc[:, 6].values
###Output
_____no_output_____
###Markdown
Performing train test split on the dataset.
###Code
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.4, random_state=60)
###Output
_____no_output_____
###Markdown
Preprocessing the dataset using StandardScaler.
###Code
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Applying KNN algorithm
###Code
classifier = KNeighborsClassifier(n_neighbors=5)
classifier.fit(X_train,y_train)
#Calculating y_pred
y_pred = classifier.predict(X_test)
#Finding the Accuracy, precision, recall, f1-score and support
print(classification_report(y_test, y_pred))
#Printing multiclass confusion matrix
print(confusion_matrix(y_test, y_pred))
rec = []
final = dataset.values.tolist()
final.sort(key = lambda x: x[5], reverse=True)
###Output
_____no_output_____
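###Markdown
For a shop that is not in the table yet, the fitted scaler and classifier can be reused to assign a class from its final score. This is an illustrative sketch (the score value below is hypothetical, not from the dataset):
###Code
new_score = [[85.0]]                                     # hypothetical final score for a new shop
print(classifier.predict(scaler.transform(new_score)))   # with the thresholds above, 85 should fall in class '2'
###Output
_____no_output_____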
###Markdown
Top 10 recommendations using GoGet Recommendation System
###Code
for i in range(0,10):
print(final[i])
###Output
['Pacchu Kiraani Stores', 1, 4.9, 4.7, 180, 110.0, '1']
['Sri laxmi stores', 1, 0.6, 4.7, 190, 107.0, '1']
["Babanna's Shop", 1, 0.8, 4.7, 190, 105.0, '1']
['Karnataka General\xa0STORE', 1, 0.7, 4.8, 195, 103.0, '1']
['shobha store', 1, 1.4, 4.6, 190, 100.0, '1']
['Shhravan Grocery\xa0Store', 1, 4.5, 4.6, 188, 100.0, '1']
['SPB store', 1, 2.9, 4.8, 194, 98.0, '2']
['Shri Gurusai Stores', 1, 1.3, 4.9, 198, 98.0, '2']
['S.N.M Stores', 1, 4.1, 4.8, 195, 97.0, '2']
['Ragavendra provision store', 1, 3.3, 4.4, 188, 96.0, '2']
|
12. FIR Filter -Windowed-Sinc Filters/.ipynb_checkpoints/Windowed-Sinc Filters_old-checkpoint.ipynb | ###Markdown
Windowed-Sinc Filters: Windowed-sinc filters are used to separate one band of frequencies from another. They are very stable, produce few surprises, and can be pushed to incredible performance levels. These exceptional frequency domain characteristics are obtained at the expense of poor performance in the time domain, including excessive ripple and overshoot in the step response. When carried out by standard convolution, windowed-sinc filters are easy to program, but slow to execute. $$ h[i]=\frac{\sin{(2\pi f_{c}i)}}{i\pi}$$ Shifted version: $$ h[i]=\frac{\sin{(2\pi f_{c}(i-M/2))}}{(i-M/2)\pi}$$
###Code
import numpy as np
import matplotlib.pyplot as plt
def sinc_function(i,fc):
"""
Function that calculates a sinc time response.
Parameters:
i (numpy array): Array of numbers representing the samples being used to construct the sinc response.
fc (float): Cut-off frequency for the low-pass filter. Between 0 and 0.5.
Returns:
numpy array: Returns sinc time domain response.
"""
h = np.zeros(len(i))
h[1:] = (np.sin(2*np.pi*i[1:]*fc))/(i[1:]*np.pi)
h[0] = 2*fc
return h
def shifted_sinc_function(i,fc, M):
"""
Function that calculates a sinc shifted time response.
Parameters:
i (numpy array): Array of numbers representing the samples being used to construct the sinc response.
fc (float): Cut-off frequency for the low-pass filter. Between 0 and 0.5.
M (int): Length of the filter kernel. Usually M = 4/BW, where BW is the filter bandwidth of the transition band.
Returns:
numpy array: Returns sinc shifted time domain response.
"""
limit = np.where(i == M/2)[0][0]
h = np.zeros(len(i))
h[:limit] = (np.sin(2*np.pi*(i[:limit]-M/2)*fc))/((i[:limit]-M/2)*np.pi)
h[limit+1:] = (np.sin(2*np.pi*(i[limit+1:]-M/2)*fc))/((i[limit+1:]-M/2)*np.pi)
h[limit] = 2*fc
return h
###Output
_____no_output_____
###Markdown
In order to develop the filter, two parameters must be selected: 1. The cut-off frequency, $0\leq f_c \leq 0.5$; 2. The length of the filter kernel, $M=\frac{4}{BW}$, where $BW$ is the transition bandwidth (say, 99% to 1% of the curve).
###Code
fc = 0.20
BW = 0.04
M = int(4/BW)
i = np.arange(0,M,1)
print("Filter lenght is {}".format(M))
###Output
Filter length is 100
###Markdown
The **cutoff frequency** of the windowed-sinc filter is measured at the **one-half amplitude point**. Why use 0.5 instead of the standard 0.707 (-3dB) used in analog electronics and other digital filters? This is because the windowed-sinc's frequency response is symmetrical between the passband and the stopband. For instance, the Hamming window results in a passband ripple of 0.2%, and an identical stopband attenuation (i.e., ripple in the stopband) of 0.2%. Other filters do not show this symmetry, and therefore have no advantage in using the one-half amplitude point to mark the cutoff frequency. This symmetry makes the windowed-sinc ideal for spectral inversion.
###Code
sinc = sinc_function(i,fc)
shifted_sinc = shifted_sinc_function(i,fc, M)
normalized_sinc = sinc/np.sum(sinc)
normalized_shifted_sinc = shifted_sinc/np.sum(shifted_sinc)
fft_sinc = np.fft.fft(sinc)
fft_shifted_sinc = np.fft.fft(shifted_sinc)
normalized_fft_sinc = np.absolute(fft_sinc)/np.sum(np.absolute(fft_sinc))
normalized_fft_shifted_sinc = np.absolute(fft_shifted_sinc)/np.sum(np.absolute(fft_shifted_sinc))
plt.rcParams["figure.figsize"] = (15,10)
plt.subplot(2,2,1)
plt.stem(i, normalized_sinc, markerfmt='.', use_line_collection=True)
plt.title('Sinc Function')
plt.subplot(2,2,2)
plt.stem(i, normalized_shifted_sinc, linefmt='orange', markerfmt='r.', use_line_collection=True)
plt.title('Shifted {}-Sinc Function'.format(M))
plt.subplot(2,2,3)
plt.stem(i/np.max(i), normalized_fft_sinc, markerfmt='.', use_line_collection=True)
plt.title('Sinc Frequency Response')
plt.subplot(2,2,4)
plt.stem(i/np.max(i), normalized_fft_shifted_sinc, linefmt='orange', markerfmt='r.', use_line_collection=True)
plt.title('Shifted {}-Sinc Frequency Response'.format(M));
###Output
_____no_output_____
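###Markdown
As a quick aside, the spectral inversion mentioned above amounts to negating the unity-gain low-pass kernel and adding one at its centre of symmetry. A minimal sketch reusing the arrays from the cells above (not part of the original notebook):
###Code
hp = -normalized_shifted_sinc          # negate the unity-gain low-pass kernel (new array, original untouched)
hp[int(M/2)] += 1.0                    # add a unit impulse at the centre of symmetry
fft_hp = np.absolute(np.fft.fft(hp))
plt.stem(i/np.max(i), fft_hp, markerfmt='.', use_line_collection=True)
plt.title('High-pass kernel obtained by spectral inversion');
###Output
_____no_output_____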
###Markdown
Hamming and Blackman Windows: A window function is a mathematical function that is zero-valued outside of some chosen interval, normally symmetric around the middle of the interval, usually near a maximum in the middle, and usually tapering away from the middle. Mathematically, when another function or waveform/data-sequence is "multiplied" by a window function, the product is also zero-valued outside the interval: all that is left is the part where they overlap, the "view through the window".
###Code
def hamming_window(i, M):
"""
Function that calculates a Hamming window of a given M-kernel.
Parameters:
i (numpy array): Array of numbers representing the samples being used to construct the Hamming window.
M (int): Length of the filter kernel. Usually M = 4/BW, where BW is the filter bandwidth of the transition band.
Returns:
numpy array: Returns Hamming window of a given M-kernel.
"""
return 0.54 - 0.46*np.cos(2*np.pi*i/M)
def blackman_window(i, M):
"""
Function that calculates a Blackman window of a given M-kernel.
Parameters:
i (numpy array): Array of numbers representing the samples being used to construct the Blackman window.
M (int): Length of the filter kernel. Usually M = 4/BW, where BW is the filter bandwidth of the transition band.
Returns:
numpy array: Returns Blackman window of a given M-kernel.
"""
return 0.42 - 0.5*np.cos(2*np.pi*i/M) + 0.08*np.cos(4*np.pi*i/M)
hamming = hamming_window(i, M)
blackman = blackman_window(i, M)
fft_hamming = np.fft.fft(hamming)
fft_blackman= np.fft.fft(blackman)
normalized_fft_hamming = np.absolute(fft_hamming)/np.sum(np.absolute(fft_hamming))
normalized_fft_blackman = np.absolute(fft_blackman)/np.sum(np.absolute(fft_blackman))
plt.rcParams["figure.figsize"] = (15,10)
plt.subplot(2,2,1)
plt.stem(i, hamming, markerfmt='.', use_line_collection=True)
plt.title('Hamming Window')
plt.subplot(2,2,2)
plt.stem(i, blackman, linefmt='orange', markerfmt='r.', use_line_collection=True)
plt.title('Blackman Window')
plt.subplot(2,2,3)
plt.stem(i/np.max(i), normalized_fft_hamming, markerfmt='.', use_line_collection=True)
plt.title('Hamming Window Frequency Response')
plt.subplot(2,2,4)
plt.stem(i/np.max(i), normalized_fft_blackman, linefmt='orange', markerfmt='r.', use_line_collection=True)
plt.title('Blackman Window Frequency Response');
hamming_shited_sinc = normalized_shifted_sinc*hamming
blackman_shited_sinc = normalized_shifted_sinc*blackman
fft_hamming_shited_sinc= np.fft.fft(hamming_shited_sinc)
fft_blackman_shited_sinc= np.fft.fft(blackman_shited_sinc)
normalized_fft_hamming_shited_sinc = np.absolute(fft_hamming_shited_sinc)/np.sum(np.absolute(fft_hamming_shited_sinc))
normalized_fft_blackman_shited_sinc = np.absolute(fft_blackman_shited_sinc)/np.sum(np.absolute(fft_blackman_shited_sinc))
plt.rcParams["figure.figsize"] = (15,10)
plt.subplot(2,2,1)
plt.stem(i, hamming_shited_sinc, markerfmt='.', use_line_collection=True)
plt.title('Shifted Sinc - Hamming Window')
plt.subplot(2,2,2)
plt.stem(i, blackman_shited_sinc, linefmt='orange', markerfmt='r.', use_line_collection=True)
plt.title('Shifted Sinc - Blackman Window')
plt.subplot(2,2,3)
plt.stem(i/np.max(i), normalized_fft_hamming_shited_sinc, markerfmt='.', use_line_collection=True)
plt.title('Shifted Sinc - Hamming Window Frequency Response')
plt.subplot(2,2,4)
plt.stem(i/np.max(i), normalized_fft_blackman_shited_sinc, linefmt='orange', markerfmt='r.', use_line_collection=True)
plt.title('Shifted Sinc - Blackman Window Frequency Response');
###Output
_____no_output_____
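###Markdown
A rough numerical check of the stopband performance of the two windowed kernels built above (an illustrative sketch, not part of the original notebook; variable names follow the cells above): zero-pad the kernels, normalise to unity DC gain, and look at the worst-case magnitude past the transition band.
###Code
n_fft = 2048
freqs = np.fft.rfftfreq(n_fft)                     # 0 ... 0.5 in fractions of the sampling rate
for name, kernel in [('Hamming', hamming_shited_sinc), ('Blackman', blackman_shited_sinc)]:
    H = np.absolute(np.fft.rfft(kernel, n_fft))    # zero-padded frequency response
    H = H / H[0]                                   # unity gain at DC
    stop = H[freqs > fc + 2.0/M]                   # frequencies past the transition band
    print('{:>8s}: worst stopband level ~ {:.0f} dB'.format(name, 20*np.log10(stop.max())))
###Output
_____no_output_____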
###Markdown
Comparison between Hamming and Blackman windows: The Hamming window has a **faster roll-off** than the Blackman, however the Blackman has a **better stopband attenuation**. To be exact, the stopband attenuation for the Blackman is greater than the Hamming. Although it cannot be seen in these graphs, the Blackman has a very small passband ripple compared to the Hamming. In general, the **Blackman should be your first choice**; a slow roll-off is easier to handle than poor stopband attenuation. Example of filter design for an ECG signal: An electroencephalogram, or EEG, is a measurement of the electrical activity of the brain. It can be detected as millivolt level signals appearing on electrodes attached to the surface of the head. Each nerve cell in the brain generates small electrical pulses. The EEG is the combined result of an enormous number of these electrical pulses being generated in a (hopefully) coordinated manner. Although the relationship between thought and this electrical coordination is very poorly understood, different frequencies in the EEG can be identified with specific mental states. If you close your eyes and relax, the predominant EEG pattern will be a slow oscillation between about 7 and 12 hertz. This waveform is called the alpha rhythm, and is associated with contentment and a decreased level of attention. Opening your eyes and looking around causes the EEG to change to the beta rhythm, occurring between about 17 and 20 hertz. Other frequencies and waveforms are seen in children, different depths of sleep, and various brain disorders such as epilepsy. In this example, we will assume that the EEG signal has been amplified by analog electronics, and then digitized at a sampling rate of 100 samples per second. We have 640 samples of data. Our goal is to separate the alpha from the beta rhythms. To do this, we will design a digital low-pass filter with a cutoff frequency of 14 hertz, or 0.14 of the sampling rate. The transition bandwidth will be set at 4 hertz, or 0.04 of the sampling rate.
###Code
fc = 0.14
BW = 0.08
M = int(4/BW)
i = np.arange(0,M,1)
print("Filter lenght is {}".format(M))
shifted_sinc = shifted_sinc_function(i,fc, M)
normalized_shifted_sinc = shifted_sinc/np.sum(shifted_sinc)
hamming = hamming_window(i, M)
hamming_shited_sinc = normalized_shifted_sinc*hamming
ecg = np.loadtxt(fname = "ecg.dat").flatten()
filtered_ecg = np.convolve(ecg,hamming_shited_sinc)
fft_hamming_shited_sinc= np.fft.fft(hamming_shited_sinc)
normalized_fft_hamming_shited_sinc = np.absolute(fft_hamming_shited_sinc)/np.sum(np.absolute(fft_hamming_shited_sinc))
fft_ecg = np.fft.fft(ecg)
normalized_fft_ecg = np.absolute(fft_ecg)/np.sum(np.absolute(fft_ecg))
fft_filtered_ecg = np.fft.fft(filtered_ecg)
normalized_fft_filtered_ecg = np.absolute(fft_filtered_ecg)/np.sum(np.absolute(fft_filtered_ecg))
plt.rcParams["figure.figsize"] = (15,10)
plt.subplot(2,2,1)
plt.plot(ecg)
plt.title('ECG Signal')
plt.subplot(2,2,2)
plt.plot(filtered_ecg, color='orange')
plt.title('Filtered ECG Signal')
plt.subplot(2,2,3)
plt.stem(np.arange(len(normalized_fft_ecg))/len(normalized_fft_ecg),
normalized_fft_ecg, markerfmt='.', use_line_collection=True)
plt.title('Frequency Response ECG Signal')
plt.subplot(2,2,4)
plt.stem(np.arange(len(normalized_fft_filtered_ecg))/len(normalized_fft_filtered_ecg),
normalized_fft_filtered_ecg, linefmt='orange', markerfmt=' ', use_line_collection=True)
plt.title('Frequency Response Filtered ECG Signal');
###Output
_____no_output_____
###Markdown
We will pickle our filter design for later use in the next Jupyter Notebook...
###Code
import pickle
data = {'ecg':ecg, 'low_pass':hamming_shited_sinc, 'fft_low_pass':normalized_fft_hamming_shited_sinc}
file = open('save_data.pickle', 'wb')
pickle.dump(data, file)
file.close()
###Output
_____no_output_____ |
05 Linear Regression.ipynb | ###Markdown
1. Create train and test sets
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=123, train_size=0.8)
X_train.shape
X_test.shape
X_train[0,:]
###Output
_____no_output_____
###Markdown
Sanity check: Could a linear model make sense?
###Code
plt.scatter(X_train[:,5], y_train)
plt.xlabel('Values of variable')
plt.ylabel('Prices per square meter')
###Output
_____no_output_____
###Markdown
2. Train a model
###Code
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
###Output
_____no_output_____
###Markdown
We need to pass some data to `lr` to calculate the relationship.
###Code
lr.fit(X_train, y_train) # Find the coefficients in linear regression
###Output
_____no_output_____
###Markdown
You can look at how the predictions will be calculated *in this case*.
###Code
lr.coef_ # Coefficients/weights from the fitted linear regression
X_train[0,:]
lr.intercept_
lr.score(X_train, y_train) # R^2 score of the model on the training set (between 0 and 1)
###Output
_____no_output_____
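###Markdown
To make the link explicit, the prediction for one observation is the dot product of its features with the coefficients, plus the intercept. A quick sketch reusing the objects above:
###Code
import numpy as np

manual = np.dot(X_train[0, :], lr.coef_) + lr.intercept_
print(manual, lr.predict(X_train[:1])[0])   # the two numbers should match
###Output
_____no_output_____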
###Markdown
Look at what the model does in the testing set (validation).
###Code
y_pred = lr.predict(X_test)
y_pred[:5]
y_test[:5]
###Output
_____no_output_____
###Markdown
**Parity plot:** Compare predicted values against observed values.
###Code
minval = min(min(y_pred), min(y_test))
maxval = max(max(y_pred), max(y_test))
import numpy as np
mesh = np.linspace(minval, maxval, 100) # 100 equally spaced points in (minval, maxval)
# Parity plot
plt.scatter(y_pred, y_test)
plt.xlim(minval, maxval)
plt.ylim(minval, maxval)
plt.xlabel('Predicted values')
plt.ylabel('True values')
plt.plot(mesh, mesh, 'r') #draw a red line at 45 degrees
###Output
_____no_output_____ |