examples/fontsizes.ipynb
###Markdown Fontsizes are dictionaries that can be passed to `matplotlib.pyplot.rcParams.update()`. ###Code fontsizes.icml2022() ###Output _____no_output_____ ###Markdown Compare the default font-sizes to, e.g., the ICML style. (To make differences more obvious, we increase the `dpi` value.) ###Code fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel") ax.set_ylabel("ylabel") plt.show() plt.rcParams.update(fontsizes.icml2022()) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel") ax.set_ylabel("ylabel") plt.show() plt.rcParams.update(fontsizes.neurips2021()) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel") ax.set_ylabel("ylabel") plt.show() plt.rcParams.update(fontsizes.aistats2022()) fig, ax = plt.subplots() ax.plot([1.0, 2.0], [3.0, 4.0]) ax.set_title("Title") ax.set_xlabel("xlabel") ax.set_ylabel("ylabel") plt.show() ###Output _____no_output_____
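The font-size bundles used above behave like ordinary dictionaries of rcParams. A minimal sketch of an alternative way to apply them, assuming the `fontsizes` module here is the one shipped with the `tueplots` package (the import is not shown in the cells above): `matplotlib.pyplot.rc_context` keeps the change local to one figure instead of mutating the global `rcParams`.

```python
# Minimal sketch (assumes `fontsizes` comes from the tueplots package, which the
# cells above do not confirm): apply a font-size bundle only temporarily.
import matplotlib.pyplot as plt
from tueplots import fontsizes  # assumption: provides icml2022/neurips2021/aistats2022

with plt.rc_context(fontsizes.icml2022()):
    fig, ax = plt.subplots()
    ax.plot([1.0, 2.0], [3.0, 4.0])
    ax.set_title("Title")
    ax.set_xlabel("xlabel")
    ax.set_ylabel("ylabel")
    plt.show()
```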
GA_kernel.ipynb
###Markdown Genetic AlgorithmLooking at [Giba's property](https://www.kaggle.com/titericz/the-property-by-giba) made me wonder how to come up with this ordering of the rows and columns, and I thought that might be a problem suitable for genetic algorithms - whether that is actually the case, or if there is a much faster closed-form solution to this problem (?), I do not know. I've opted for implementing the algorithm from scratch rather than using a library, since this was very much done for my own education. I'm sure everything can be done better, faster, more pythonic etc. Starting out on this notebook earlier today I knew nothing about genetic algorithms, except the overall concepts [from this tutorial](https://blog.sicara.com/getting-started-genetic-algorithms-python-tutorial-81ffa1dd72f9) - now I reckon I might go buy a book to actually learn about it more thoroughly. Any recommendations would be awesome :) .. any comments/improvements for the code below would also be very much appreciated. ###Code import gc import numpy as np import pandas as pd from tqdm import tqdm_notebook from IPython.display import clear_output, display from sklearn.externals.joblib import Parallel, delayed ###Output _____no_output_____ ###Markdown Giba's PropertyFor the purpose of this notebook I'll only look at the training df, and only at the small subset presented by Giba. I imagine the algorithm should scale pretty well to the entire dataset though, albeit with minor modifications when including test data. Let's first get the subset presented by Giba ###Code #Remove constant features def remove_constant_cols(df, tolerance=10): for c in df.columns: if len(df[c].unique()) <= tolerance: df.drop(c, axis=1, inplace=True) # Get the data train_df = pd.read_csv('train.csv').set_index('ID') # Get columns and rows in question giba_cols = [ "f190486d6","58e2e02e6","eeb9cd3aa","9fd594eec","6eef030c1","15ace8c9f", "fb0f5dbfe","58e056e12","20aa07010","024c577b9","d6bb78916", "b43a7cfd5","58232a6fb" ] giba_rows = [ '7862786dc', 'c95732596', '16a02e67a', 'ad960f947', '8adafbb52', 'fd0c7cfc2', 'a36b78ff7', 'e42aae1b8', '0b132f2c6', '448efbb28', 'ca98b17ca', '2e57ec99f', 'fef33cb02' ] giba_df = train_df.loc[giba_rows, :] remove_constant_cols(giba_df) giba_df ###Output _____no_output_____ ###Markdown Ordering rows & columns with Genetic AlgorithmIt's pretty easy to see the structure in the above - timeseries in columns and rows, and column `f190486d6` is two steps ahead of the target. The following is my quick-n-dirty class with fitness function, breeding functions, mutation functions, etc. One thing to note in the `fitness()` function is that I insert the `target` and `target+1` into the dataframe before score evaluation - I do this simply to direct it towards the structure above, but I reckon it isn't strictly neccesary. This should work for the entire training set as well, but for the test set one would have to modify it, especially if test&train rows are intermingled. For now I just look at Giba's subset. 
###Code class GeneticOptimizer(): def __init__(self, n_population=100, n_cols=40, n_rows=40, n_breeders=10, n_lucky=2, renew_ratio=0.3, n_generations=10, max_row_mutations=10, max_col_mutations=10, max_combined_rows=10, max_combined_cols=10, optimize_rows=False): # Set variables self.n_population = n_population self.n_cols = n_cols self.n_rows = n_rows self.n_generations = n_generations self.n_breeders = n_breeders self.n_lucky = n_lucky self.renew_ratio = renew_ratio self.max_row_mutations = max_row_mutations self.max_col_mutations = max_col_mutations self.max_combined_rows = max_combined_rows self.max_combined_cols = max_combined_cols self.optimize_rows = optimize_rows self.history = [] self.fittest = [] @staticmethod def fitness(X, weights, individual): """ Lower score means better alignment, see sample df at: https://www.kaggle.com/titericz/the-property-by-giba """ # Get a copy of our dataframe X = X.loc[individual['rows'], ['target','target+1'] + individual['cols'].tolist()] # Shift matrix to get fitness shiftLeftUp = X.iloc[1:, 1:].values deleteRightDown = X.iloc[:-1, :-1].values # Calculate & return score diff = (shiftLeftUp - deleteRightDown).astype(bool) both_zero = np.logical_or(~shiftLeftUp.astype(bool), ~deleteRightDown.astype(bool)) # Penalize score by number of zeroes in columns score = np.sum(np.logical_or(both_zero, diff) * weights) return score @staticmethod def hash_individual(individual): return hash(frozenset(individual)) @staticmethod def swap_random(seq, n): """Swaps a n-length subsequence around in seq""" l = len(seq) idx = range(l) i1, i2 = np.random.choice(idx, 2, replace=False) i1 = l-n if n + i1 >= l else i1 i2 = l-n if n + i2 >= l else i2 for m in range(n): seq[i1+m], seq[i2+m] = seq[i2+m], seq[i1+m] @staticmethod def get_parallel(verbose=0, n_jobs=-1, pre_dispatch='2*n_jobs'): return Parallel( n_jobs=n_jobs, pre_dispatch=pre_dispatch, verbose=verbose ) def create_random_population(self, n_pop, columns, index): population = [] for _ in range(n_pop): np.random.shuffle(columns) if self.optimize_rows: np.random.shuffle(index) population.append({'cols': np.copy(columns)[:self.n_cols], 'rows': np.copy(index)[:self.n_rows]}) return np.array(population) def compute_population_performance(self, population, X, weights, **kwargs): parallel = self.get_parallel(**kwargs) performance = parallel( delayed(self.fitness)(X, weights, individual) for individual in population ) return np.array(performance) def select_from_population(self, population, performance, best_sample=3, lucky_few=1): # Sort the population to have best first sorted_population = population[np.argsort(performance)] # Save the fittest individual of the generation self.fittest.append(sorted_population[0]) # Create next generation with best and random nextGeneration = [] for i in range(best_sample): nextGeneration.append(sorted_population[i]) for i in range(lucky_few): nextGeneration.append(np.random.choice(sorted_population)) # Shuffle new generation and return np.random.shuffle(nextGeneration) return nextGeneration def create_child(self, breeders): # Mom, dad and child mom = breeders[np.random.randint(0, len(breeders))] dad = breeders[np.random.randint(0, len(breeders))] child_columns, child_index = [0]*self.n_cols, [0]*self.n_rows # Convenience function def set_trait(array, index, mom_trait, dad_trait): if np.random.rand() > 0.5: if mom_trait not in array: array[index] = mom_trait else: if dad_trait not in array: array[index] = dad_trait # Get characteristics from parent 1 for i in range(self.n_cols): 
set_trait(child_columns, i, mom['cols'][i], dad['cols'][i]) if self.optimize_rows: for i in range(self.n_rows): set_trait(child_index, i, mom['rows'][i], dad['rows'][i]) # Fill in missing values (in a sense also a mutation factor) missing_cols = [c for c in mom['cols'] if c not in child_columns] for i in range(self.n_cols): if child_columns[i] == 0: child_columns[i] = missing_cols.pop() if self.optimize_rows: missing_rows = [c for c in mom['rows'] if c not in child_index] for i in range(self.n_rows): if child_index[i] == 0: child_index[i] = missing_rows.pop() else: child_index = mom['rows'] return {'cols': np.array(child_columns), 'rows': np.array(child_index)} def create_children(self, breeders, n_children, **kwargs): parallel = self.get_parallel(**kwargs) nextPopulation = parallel( delayed(self.create_child)(breeders) for _ in range(n_children)) return np.array(nextPopulation) def mutate_individual(self, individual): if self.optimize_rows and self.max_row_mutations > 0: for _ in np.arange(0, np.random.randint(0, self.max_row_mutations)): n = np.random.randint(1, self.max_combined_rows) self.swap_random(individual['rows'], n) if self.max_col_mutations > 0: for _ in np.arange(0, np.random.randint(0, self.max_col_mutations)): n = np.random.randint(1, self.max_combined_cols) self.swap_random(individual['cols'], n) return individual def mutate_population(self, population, **kwargs): parallel = self.get_parallel(**kwargs) nextPopulation = parallel( delayed(self.mutate_individual)(individual) for individual in population ) return np.array(nextPopulation) def get_fittest_target_error(self, X, validation_index): """Assume first column in individual is 2 steps behind target""" individual = self.fittest[-1] target_idx = [i for i in individual['rows'] if i in validation_index] target = np.log1p(X.loc[target_idx, 'target']) target2p_col = individual['cols'][0] target2p = np.log1p(X.loc[target_idx, target2p_col].shift(-2)) return np.sqrt((target-target2p).dropna()**2).sum() def fit(self, X, y, weights=None, validation_index=None, **kwargs): # Do not modify original X = X.copy() # Create initial population population = self.create_random_population(self.n_population, X.columns.tolist(), X.index.tolist()) # Add target and target+1 to X, so as to direct the order of result X.insert(0, 'target+1', y.shift(1)) X.insert(0, 'target', y) X.fillna(0, inplace=True) # If no weights specified, all columns equally important if weights is None: weights = np.ones(self.n_cols+1) # Run the algorithm for n_generations for epoch in range(self.n_generations): # Get performance for each individual in population performance = self.compute_population_performance(population, X, weights, **kwargs) # Get breeders breeders = self.select_from_population(population, performance) # If we have a validation index, then get the train error for the best performer if validation_index is not None: train_error = self.get_fittest_target_error(X, validation_index) else: train_error = 'NaN' # Update population new_pop = self.create_random_population(int(self.n_population * self.renew_ratio), X.columns.tolist(), X.index.tolist()) offspring = self.create_children(breeders, self.n_population-len(new_pop), **kwargs) population = np.concatenate([new_pop, offspring], axis=-1) # Mutate population before next generation population = self.mutate_population(population, **kwargs) # Save to history & display clear_output() self.history.append({ "pop_loss": np.mean(performance), "std_pop_loss": np.std(performance), "top_performer_loss": 
np.min(performance), 'generation': epoch+1, 'Train RMSLE': train_error }) display(pd.DataFrame(self.history).set_index('generation')) # Just in case gc.collect() ###Output _____no_output_____ ###Markdown This class basically creates an initially fully random population of column/row orders, and based on this breeds new combinations which minimize the fitness function - the lower the fitness function score, the close we are to a matrix that has the structure observed in Giba's subset. Let's try to run it for a few generations. ###Code x = giba_df.drop(['target'], axis=1).iloc[:, :40] x[x == 0].count().sum() # Number of cols to find n_cols = 10 # Weigh different columns differently in scoring (most important are those close to target) weights = np.exp(-np.linspace(0, np.sqrt(n_cols+1), n_cols+1)) # Instantiate class and run on training data gp_opt = GeneticOptimizer( n_population=10000, n_cols=n_cols, n_rows=13, n_breeders=100, n_lucky=10, renew_ratio=0.3, n_generations=15, max_row_mutations=5, max_col_mutations=5, max_combined_rows=5, max_combined_cols=5, optimize_rows=False ) # Fit to data gp_opt.fit( giba_df.drop(['target'], axis=1), giba_df['target'], n_jobs=8, verbose=1, weights=weights, validation_index=giba_df.index.values ) ###Output _____no_output_____ ###Markdown Locally I've managed to get a top performer that matched Giba's solution perfectly (more generations, and slightly different population settings). I imagine this approach will scale well to the entire training (and test, with modifications), where the best solution may be less neat. ###Code best = gp_opt.fittest[-1] giba_df.loc[best['rows'], ['target'] + best['cols'].tolist()] giba_cols = [ "f190486d6","58e2e02e6","eeb9cd3aa","9fd594eec","6eef030c1","15ace8c9f", "fb0f5dbfe","58e056e12","20aa07010","024c577b9","d6bb78916", "b43a7cfd5","58232a6fb" ] giba_rows = [ '7862786dc', 'c95732596', '16a02e67a', 'ad960f947', '8adafbb52', 'fd0c7cfc2', 'a36b78ff7', 'e42aae1b8', '0b132f2c6', '448efbb28', 'ca98b17ca', '2e57ec99f', 'fef33cb02' ] ###Output _____no_output_____
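Stripped of the row/column bookkeeping, the loop inside `GeneticOptimizer.fit()` follows the standard select, breed, mutate pattern. A generic sketch of that pattern for reference (illustrative only; the `fitness`, `random_individual`, `crossover` and `mutate` callables are placeholders, not the notebook's implementations):

```python
import numpy as np

def evolve(fitness, random_individual, crossover, mutate,
           pop_size=100, n_breeders=10, n_generations=50):
    # start from a fully random population
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(n_generations):
        # lower score = better fit, as in the fitness() method above
        scores = [fitness(ind) for ind in population]
        order = np.argsort(scores)
        breeders = [population[i] for i in order[:n_breeders]]
        # breed a full new population from the best individuals, then mutate it
        children = []
        while len(children) < pop_size:
            mom, dad = np.random.randint(0, n_breeders, size=2)
            children.append(mutate(crossover(breeders[mom], breeders[dad])))
        population = children
    best = int(np.argmin([fitness(ind) for ind in population]))
    return population[best]
```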
notebooks/RESULT-bifurcation-diagrams.ipynb
###Markdown Bifurcation diagrams for single-node and whole-brain network This notebook draws the bifurcation diagrams in Fig. 2. ###Code # change into the root directory of the project import os if os.getcwd().split("/")[-1] == "examples": os.chdir('..') import logging logger = logging.getLogger() import warnings warnings.filterwarnings("ignore") logger.setLevel(logging.INFO) #logger.setLevel(logging.DEBUG) #logging.disable(logging.WARNING) #logging.disable(logging.WARN) %load_ext autoreload %autoreload 2 %aimport import numpy as np import matplotlib.pyplot as plt from neurolib.models.aln import ALNModel from neurolib.utils.parameterSpace import ParameterSpace from neurolib.optimize.exploration import BoxSearch import neurolib.utils.functions as func import neurolib.optimize.exploration.explorationUtils as eu import neurolib.utils.devutils as du from neurolib.utils.loadData import Dataset plt.rcParams['text.usetex'] = False plt.rcParams['svg.fonttype'] = 'none' plt.style.reload_library() plt.style.use("seaborn-white") plt.rcParams['image.cmap'] = 'plasma' ###Output _____no_output_____ ###Markdown Single area ###Code model = ALNModel() model.params['dt'] = 0.1 model.params['duration'] = 20 * 1000 #ms # add custom parameter for downsampling results model.params['save_dt'] = 10.0 # 10 ms sampling steps for saving data, should be multiple of dt model.params["tauA"] = 600.0 model.params["sigma_ou"] = 0.0 model.params["b"] = 20.0 def evaluateSimulation(traj): # get the model from the trajectory using `search.getModelFromTraj(traj)` model = search.getModelFromTraj(traj) # initiate the model with random initial contitions model.randomICs() defaultDuration = model.params['duration'] # -------- stage wise simulation -------- # Stage 3: full and final simulation # --------------------------------------- model.params['duration'] = defaultDuration rect_stimulus = func.construct_stimulus(stim="rect", duration=model.params.duration, dt=model.params.dt) model.params['ext_exc_current'] = rect_stimulus * 5.0 model.run() # up down difference state_length = 2000 last_state = (model.t > defaultDuration - state_length) down_window = (defaultDuration/2-state_length<model.t) & (model.t<defaultDuration/2) # time period in ms where we expect the down-state up_window = (defaultDuration-state_length<model.t) & (model.t<defaultDuration) # and up state up_state_rate = np.mean(model.output[:, up_window], axis=1) down_state_rate = np.mean(model.output[:, down_window], axis=1) up_down_difference = np.max(up_state_rate - down_state_rate) # check rates! 
max_amp_output = np.max( np.max(model.output[:, up_window], axis=1) - np.min(model.output[:, up_window], axis=1) ) max_output = np.max(model.output[:, up_window]) model_frs, model_pwrs = func.getMeanPowerSpectrum(model.output, dt=model.params.dt, maxfr=40, spectrum_windowsize=10) max_power = np.max(model_pwrs) model_frs, model_pwrs = func.getMeanPowerSpectrum(model.output[:, up_window], dt=model.params.dt, maxfr=40, spectrum_windowsize=5) domfr = model_frs[np.argmax(model_pwrs)] result = { "max_output": max_output, "max_amp_output" : max_amp_output, #"max_power" : max_power, #"model_pwrs" : model_pwrs, #"output": model.output[:, ::int(model.params['save_dt']/model.params['dt'])], "domfr" : domfr, "up_down_difference" : up_down_difference } search.saveOutputsToPypet(result, traj) return parameters = ParameterSpace({"mue_ext_mean": np.linspace(0.0, 4, 161), "mui_ext_mean": np.linspace(0.0, 4, 161), "b": [0.0, 20.0] }) search = BoxSearch(evalFunction = evaluateSimulation, model=model, parameterSpace=parameters, filename='exploration-8.0-single-node.hdf') search.run() search.loadResults(filename="/mnt/raid/data/cakan/hdf/exploration-8.0-single-node.hdf", all=False) search.dfResults plot_key_label = "Max. $r_E$ [Hz]" eu.plotExplorationResults(search.dfResults, par1=['mue_ext_mean', 'Input to E [nA]'], par2=['mui_ext_mean', 'Input to I [nA]'], by=['b'], plot_key='max_output', plot_clim=[0.0, 80.0], nan_to_zero=False, plot_key_label=plot_key_label, one_figure=False, multiply_axis=0.2, contour=["max_amp_output", "up_down_difference"], contour_color=[['white'], ['springgreen']], contour_levels=[[10], [10]], contour_alpha=[1.0, 1.0], contour_kwargs={0 : {"linewidths" : (5,)}, 1 : {"linestyles" : "--", "linewidths" : (5,)}}, #alpha_mask="relative_amplitude_BOLD", mask_threshold=0.1, mask_alpha=0.2, savename="single_node.svg") ###Output MainProcess root INFO Saving to ./data/figures/b=0.0_single_node.svg ###Markdown Brain network ###Code ds = Dataset("gw") model = ALNModel(Cmat=ds.Cmat, Dmat=ds.Dmat) model.params['dt'] = 0.1 model.params['duration'] = 20 * 1000 #ms # add custom parameter for downsampling results model.params['save_dt'] = 10.0 # 10 ms sampling steps for saving data, should be multiple of dt model.params["tauA"] = 600.0 model.params["sigma_ou"] = 0.0 model.params["b"] = 20.0 model.params["Ke_gl"] = 300.0 model.params["signalV"] = 80.0 parameters = ParameterSpace({"mue_ext_mean": np.linspace(0.0, 4, 101), "mui_ext_mean": np.linspace(0.0, 4, 101), "b": [0.0, 20.0] }) search_brain = BoxSearch(evalFunction = evaluateSimulation, model=model, parameterSpace=parameters, filename='exploration-8.0-brain.hdf') search_brain.run() search_brain.loadResults(filename="/mnt/raid/data/cakan/hdf/exploration-8.0-brain.hdf", all=False) search_brain.dfResults plot_key_label = "Max. 
$r_E$ [Hz]" eu.plotExplorationResults(search_brain.dfResults, par1=['mue_ext_mean', 'Input to E [nA]'], par2=['mui_ext_mean', 'Input to I [nA]'], by=['b'], plot_key='max_output', plot_clim=[0.0, 80.0], nan_to_zero=False, plot_key_label=plot_key_label, one_figure=False, multiply_axis=0.2, contour=["max_amp_output", "up_down_difference"], contour_color=[['white'], ['springgreen']], contour_levels=[[10], [10]], contour_alpha=[1.0, 1.0], contour_kwargs={0 : {"linewidths" : (5,)}, 1 : {"linestyles" : "--", "linewidths" : (5,)}}, #alpha_mask="relative_amplitude_BOLD", mask_threshold=0.1, mask_alpha=0.2, savename="gw_brain.pdf") ###Output MainProcess root INFO Saving to ./data/figures/b=0.0_gw_brain.svg ###Markdown Timeseries of some locations (network bistability) ###Code # we place the system in the bistable region model.params['b'] = 0 model.params['mue_ext_mean'] = 2.0 model.params['mui_ext_mean'] = 3.5 # construct a stimulus rect_stimulus = func.construct_stimulus(stim="rect", duration=model.params.duration, dt=model.params.dt) model.params['ext_exc_current'] = rect_stimulus * 5.0 model.run() import neurolib.utils.brainplot as bp bp.plot_ts(model, stimulus=True, plot_nodes="all", stimulus_scale = 50, lw=2, xlim=(200, 200000), stimulus_color='k') plt.text(3, 10, 'down-state', fontsize=14) plt.text(14, 35, 'up / down', fontsize=14) #plt.savefig("data/figures/partial_bistability_up_down.pdf") #plt.savefig("data/figures/partial_bistability_up_down.svg") ###Output _____no_output_____ ###Markdown Bistability with adaptation ###Code # we place the system in the bistable region model.params['b'] = 20 model.params['mue_ext_mean'] = 2.8 model.params['mui_ext_mean'] = 3.5 # construct a stimulus rect_stimulus = func.construct_stimulus(stim="rect", duration=model.params.duration, dt=model.params.dt) model.params['ext_exc_current'] = rect_stimulus * 5.0 model.run() bp.plot_ts(model, stimulus=True, plot_nodes="all", stimulus_scale = 50, lw=1.5, xlim=(200, 200000), stimulus_color='k', legend=False) plt.text(3, 7, 'down-state', fontsize=14) plt.text(14, 60, 'up / LC$_{AE}$', fontsize=14) #plt.savefig("data/figures/partial_bistability_up_LCAE.pdf") #plt.savefig("data/figures/partial_bistability_up_LCAE.svg") ###Output _____no_output_____ ###Markdown Sanity check: Simulate regions with numerical errors (lower right edge of bifurcation diagram) ###Code ds = Dataset("gw") model = ALNModel(Cmat=ds.Cmat, Dmat=ds.Dmat) model.params['dt'] = 0.05 # low dt gets rid of artefact model.params['duration'] = 20 * 1000 #ms # add custom parameter for downsampling results model.params['save_dt'] = 10.0 # 10 ms sampling steps for saving data, should be multiple of dt model.params["tauA"] = 600.0 model.params["sigma_ou"] = 0.0 model.params["b"] = 20.0 # we place the system in the bistable region model.params['b'] = 0 model.params['mue_ext_mean'] = 3.5 model.params['mui_ext_mean'] = 0.0 # construct a stimulus rect_stimulus = func.construct_stimulus(stim="rect", duration=model.params.duration, dt=model.params.dt) model.params['ext_exc_current'] = rect_stimulus * 5.0 model.run() plt.figure(figsize=(5, 3), dpi=150) plt.plot(model.t[::100], model.output[:, ::100].T, lw = 1) plt.plot(model.t[::100], rect_stimulus[::100] * 100, lw = 2, c='r', label="stimulus") plt.text(3000, 7, 'down-state', fontsize=16) plt.text(15000, 35, 'up-state', fontsize=16) plt.legend(fontsize=14) plt.xlim(100, model.t[-1]) plt.xlabel("Time [ms]") plt.ylabel("Activity [Hz]") ###Output _____no_output_____
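The `domfr` value computed in `evaluateSimulation` is essentially the peak of a power-spectrum estimate of the excitatory rate. A rough stand-alone sketch of that idea for a single rate trace (this is not neurolib's `getMeanPowerSpectrum` implementation; the window length is an arbitrary choice):

```python
import numpy as np
from scipy.signal import welch

def dominant_frequency(rate, dt_ms, max_fr=40.0):
    """Return the frequency (Hz) with maximal power below max_fr for a 1-D rate trace."""
    fs = 1000.0 / dt_ms                      # dt is given in ms
    nperseg = min(len(rate), int(10 * fs))   # ~10 s windows, capped at the trace length
    freqs, power = welch(rate, fs=fs, nperseg=nperseg)
    mask = freqs <= max_fr
    return freqs[mask][np.argmax(power[mask])]
```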
Home assignment 3 Function and Class.ipynb
###Markdown Functions and ClassesFunctions and classes are building blocks of object-oriented programming. A function, once created, can be called multiple times, while a class is useful for encapsulating both data and functions. ###Code import numpy as np import random as random ###Output _____no_output_____ ###Markdown Class CircleDefine a function which takes the radius as input and returns the area of a circle. ###Code def area(r): A = np.pi*r**2 return A ###Output _____no_output_____ ###Markdown * Calculate the area of a sample circle of radius 10. ###Code area(10) ###Output _____no_output_____ ###Markdown * Define a function which takes the radius as input and returns the circumference of a circle. ###Code def circumference(r):# define circumference with variable and formula C = 2*np.pi*r return C #a return statement is required to get the output below. ###Output _____no_output_____ ###Markdown * Calculate the circumference of a sample circle of radius 10. ###Code circumference(10) ###Output _____no_output_____ ###Markdown * Let's build a class implementing the constants and functions above ###Code class Circle(): def __init__(self, r): self.r=r def area(self): A=np.pi*self.r**2 return A def circumference(self): C=2*np.pi*self.r return C ###Output _____no_output_____ ###Markdown * Test using examples. A Circle object can be created by calling Circle(5), and area() can be applied immediately or later. ###Code Circle(5).area() CC = Circle(5) CC.area(),CC.circumference() ###Output _____no_output_____ ###Markdown * Similar to the functions, data attributes can be accessed from a class object. ###Code CC.r ###Output _____no_output_____ ###Markdown * To use the class and its functions multiple times. ###Code for r in [2,3,6,24,25,46,567]: CC = Circle(r) print("radius: " , r,\ "area : " , CC.area(),\ "circumf : " , CC.circumference()) ###Output _____no_output_____ ###Markdown Class Gravity* To create a gravity function ###Code def gravity(m1,m2,d): F=(m1*m2)/d**2 return F gravity(5,4,10) ###Output _____no_output_____ ###Markdown * Let's create a class Newton for the gravity calculation ###Code class Newton(): def __init__(self,value_of_G,value_of_g, supplied_info): self.G = value_of_G self.info = supplied_info self.g = value_of_g def gravity(self,m1,m2,d): F = self.G*(m1*m2)/d**2 print(self.info) return F def gravity_pot(self,m1): F = m1*self.g return F #Be careful: space after def, and F is capitalised ###Output _____no_output_____ ###Markdown * To create an object by calling the class with defined inputs.
###Code N1 = Newton(value_of_G =6.7, value_of_g= 9.8,\ supplied_info = "great job")#Be careful: no space is allowed after "\" ###Output _____no_output_____ ###Markdown * To inspect the constants and the output of the functions ###Code N1.G,N1.g,N1.gravity(2,3,13),N1.gravity_pot(12) N1.gravity(m1=11,m2=12,d=3) ###Output great job ###Markdown Class Dice* Let's create a class called Dice for fun ###Code class Dice(object):#__init__ runs every time an object is created def __init__(self,A_value,B_value,C_value): self.pi = 3.14 self.A = A_value self.B = B_value self.C = C_value def find_sum(self,n1,n2): S = n1+n2 return S def find_product(self,n1,n2): P = n1*n2 return P def poly(self,x): pl = self.A*self.find_product(x,x) + self.B*x + self.C return pl def roll_dice(self): side = random.choice([1,2,3,4,5,6]) return side def roll_two_dices(self): d1 = self.roll_dice() d2 = self.roll_dice() p = self.find_product(d1,d2) s = self.find_sum(d1,d2) return d1,d2,p,s ###Output _____no_output_____ ###Markdown * To create an object from the class with predefined inputs ###Code A = 2.3; B=4.5; C =8.9 D = Dice(A,B,C) ###Output _____no_output_____ ###Markdown * Can I ask this object for the values of A, B and C? ###Code D.A, D.B, D.C ###Output _____no_output_____ ###Markdown * To roll a die to get a random side ###Code D.roll_dice() # the answer was 4 on the NPS hub d=D.roll_dice() D.poly(d) ###Output _____no_output_____
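A small illustrative extension of the Dice class (not part of the original assignment): fixing the random seed makes the rolls reproducible, and averaging the sum of two dice over many rolls should come out close to 7.

```python
import random

random.seed(42)                      # fix the seed so the rolls are reproducible
D = Dice(2.3, 4.5, 8.9)              # uses the Dice class defined above
sums = [D.roll_two_dices()[3] for _ in range(10_000)]   # element 3 is the sum of the two dice
print(sum(sums) / len(sums))         # should be close to 7
```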
Learn-python/Part 2 - Plots, Graphs, Dictionaries, Flow and Loops/02-Dictionaries.ipynb
###Markdown ![tstaunton.jpg](attachment:tstaunton.jpg)*** Dictionaries***In this chapter we are going to look at a new Python data type called **dictionaries**. Previously, we looked at lists, dictionaries are lists on steroids. If we revisit our list of employees and departments we see: ###Code # A list of departments team_areas = ['develop', 'design', 'finance'] # A list of team members in each department team_members = ['tony', 'mark', 'fiona'] ###Output _____no_output_____ ###Markdown Other than explicitly looking at these lists and seeing that Mark works in the design department how do can I run a query to know that information? What would happen if I had a list of employees in the thousands, how would I know what department an employee works in? Let's see how we can do this using lists. ###Code # Use index() to get the index of design index_design = team_areas.index('design') # Use index_design to obtain the team member of design from the team_members list. print(team_members[index_design]) ###Output _____no_output_____ ###Markdown Here we've built two lists and then used Python's index function to retrieve information. It works but not very well. You can see how this would quickly become unmanageable with longer lists. Using Python Dictionaries we can link each department to its employee. In the code example below I have used the information from **team_areas** and **team_members** to create a dictionary. As you can see dictionaries are created with curly brackets, inside the curly brackets we use key, value pairs to link information together. In this example, team areas are the keys and team_members are the values. The first key is development and its corresponding value is tony. Each key is separated from its value with a colon and each key, value pair is separated from one another with a comma. When we have finished creating a dictionary we can assign it to a variable name. In this case **project_web**. ###Code # Create a new dictionary called project_web project_web = {"developer":"Tony", "design":"Mark", "finance":"Fiona"} ###Output _____no_output_____ ###Markdown Now we can call the value for any department with the following code: ###Code # Output the value for developer project_web['developer'] # Output the value for design project_web['design'] ###Output _____no_output_____ ###Markdown If we want to recall the entire dictionary we can simply type: ###Code # Output the entire dictionary print(project_web) # Output the dictionary keys print(project_web.keys()) # Output value for the key finance print(project_web['finance']) ###Output _____no_output_____ ###Markdown When creating a new dictionary the keys have to be unique. Keys are immutable objects which means that contents cannot change after they have been created.Let's see how we can add another key, value pair to our web_project dictionary. ###Code # Add marketing:jane to web_project project_web["marketing"] = 'jane' # Output project_web # Check if marketing in project_web "marketing" in project_web ###Output _____no_output_____ ###Markdown With the same syntax as before you can also change values. The code example below updates the value in developers from tony to amy. ###Code # Update developer value to amy project_web["developer"] = "amy" # Output project_web project_web ###Output _____no_output_____ ###Markdown Because each key in a dictionary is unique Python knows that you are not trying to create a new pair but update an exiting one.How do we remove a key, value pair from a dictionary? With the **del()** function. 
###Code # Remove developer from dictionary del(project_web["developer"]) # Output dictionary project_web ###Output _____no_output_____ ###Markdown As you can see from the previous examples lists and dictionaries are similar. They both use [...] to select, update and remove values. A list is indexed by a range of numbers, a dictionary is indexed by uniques keys. Use the list when you want a collection of values, order matters and you want to create and select entire subsets. Use dictionaries when you need a lookup table with unique keys and speed matters.In a previous lesson we seen that lists could contain anything, even other lists. It's the same for dictionaries. Dictionaries contain key:value pairs where the values can be other dictionaries. Lets look at an example. ###Code # Create a dicionary with dictionaries as values web_project02 = {"developers": {"lead developer":"tony", "junior developer":"carl"}, "design":{"head of design":"mark", "junior designer":"james"}, "finance":{"head of finance":"fiona", "accountant":"paul"}} # To access a value use two keys print(web_project02["design"]["head of design"]) # Create a sub-dictionary temp_team = {"head of marketing":"jane", "researcher":"harry"} # Add temp_team to web_project02 under the key 'marketing' web_project02['marketing'] = temp_team # Outout dictionary web_project02 ###Output _____no_output_____
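Two more dictionary tools that fit naturally after this lesson (a short extra sketch, not part of the original notebook): looping over key/value pairs with `.items()`, and using `.get()` to look up a key that may be missing without raising a `KeyError`.

```python
project_web = {"design": "Mark", "finance": "Fiona", "marketing": "jane"}

# .items() yields (key, value) pairs, so both can be used inside the loop
for department, person in project_web.items():
    print(department, "->", person)

# .get() returns a default instead of raising KeyError for a missing key
print(project_web.get("developer", "no one assigned yet"))
```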
notebooks/.ipynb_checkpoints/fgsm-dead-code-checkpoint.ipynb
###Markdown Load Data ###Code word2vec = Word2Vec.load('/home/david/projects/university/astnn/data/train/embedding/node_w2v_128').wv vocab = word2vec.vocab ast_data = pd.read_pickle(root+'test/test_.pkl') block_data = pd.read_pickle(root+'test/blocks.pkl') ###Output _____no_output_____ ###Markdown Allowed var names ###Code leaf_embed = nn.Sequential( model._modules['encoder']._modules['embedding'], model._modules['encoder']._modules['W_c'] ) # words we wont allow as variable names reserved_words = [ 'auto', 'break', 'case', 'char', 'const', 'continue', 'default', 'do', 'int', 'long', 'register', 'return', 'short', 'sizeof', 'static', 'struct', 'switch', 'typedef', 'union', 'unsigned', 'void', 'volatile', 'while', 'double', 'else', 'enum', 'extern', 'float', 'for', 'goto', 'if', 'printf', 'scanf', 'cos', 'malloc' ] def allowed_variable(var): pattern = re.compile("([a-z]|[A-Z]|_)+([a-z]|[A-Z]|[0-9]|_)*$") if (var not in reserved_words) and pattern.match(var): return True else: return False allowed_variable('scanf') embedding_map = {} for index in range(len(vocab)): if allowed_variable(word2vec.index2word[index]): embedding_map[index] = leaf_embed(torch.tensor(index)).detach().numpy() ###Output _____no_output_____ ###Markdown Var replace functions ###Code def replace_index(node, old_i, new_i): i = node[0] if i == old_i: result = [new_i] else: result = [i] children = node[1:] for child in children: result.append(replace_index(child, old_i, new_i)) return result def replace_var(x, old_i, new_i): mod_blocks = [] for block in x: mod_blocks.append(replace_index(block, old_i, new_i)) return mod_blocks ###Output _____no_output_____ ###Markdown Closest Var functions ###Code def l2_norm(a, b): return np.linalg.norm(a-b) def cos_sim(a, b): return np.inner(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)) def closest_index(embedding, embedding_map, metric): embedding = embedding.detach().numpy() closest_i = list(embedding_map.keys())[0] closest_dist = metric(embedding_map[closest_i], embedding) for i, e in embedding_map.items(): d = metric(embedding_map[i], embedding) if d < closest_dist: closest_dist = d closest_i = i return closest_i def normalize(v): norm = np.linalg.norm(v) if norm == 0: return v return v / norm ###Output _____no_output_____ ###Markdown Grad locating functions ###Code def get_embedding(indices, node_list): ''' get the embeddings at the index positions in postorder traversal. 
''' res = [] c = 0 for i in range(node_list.size(0)): if not np.all(node_list[i].detach().numpy() == 0): if c in indices: res.append(node_list[i]) c += 1 return res def post_order_loc(node, var, res, counter): ''' ''' index = node[0] children = node[1:] for child in children: res, counter = post_order_loc(child, var, res, counter) if var == index and (not children): res.append(counter) # print(counter, word2vec.index2word[index]) counter += 1 return res, counter def get_grad(x, var_index, node_list): grads = [] for i, block in enumerate(x): indices, _ = post_order_loc(block, var_index, [], 0) grads += get_embedding(indices, node_list.grad[:, i, :]) try: node_embedding = get_embedding(indices, node_list[:, i, :])[0] except: pass if len(grads) < 1: return None, None grad = torch.stack(grads).sum(dim=0) return grad, node_embedding ###Output _____no_output_____ ###Markdown Var name finder ###Code class declarationFinder(c_ast.NodeVisitor): def __init__(self): self.names = set() def visit_Decl(self, node): if type(node.type) in [TypeDecl, ArrayDecl] : self.names.add(node.name) def get_var_names(ast): declaration_finder = declarationFinder() declaration_finder.visit(ast) return declaration_finder.names # get_var_names(x) ###Output _____no_output_____ ###Markdown FGSMwith vars ordered and early exit ###Code # def gradient_method(x, n_list, var, epsilon, metric): # orig_index = vocab[var].index if var in vocab else MAX_TOKEN # grad, node_embedding = get_grad(x, orig_index, n_list) # if grad is None: # # print("no leaf occurences") # return None # v = node_embedding.detach().numpy() # g = torch.sign(grad).detach().numpy() # v = v + epsilon * g # # get the closest emebedding from our map # i = closest_index(v, sampled_embedding_map, metric) # # print("orig name:", word2vec.index2word[orig_index], "; new name:", word2vec.index2word[i]) # if i != orig_index: # return replace_var(x, orig_index, i) # else: # return x MAX_TOKEN = word2vec.vectors.shape[0] import time import datetime def evaluate(epsilon, limit = None, sort_vars = True): ast_count = 0 var_count = 0 ast_total = 0 var_total = 0 start = time.time() for code_id in block_data['id'].tolist(): # print(code_id) x, ast = block_data['code'][code_id], ast_data['code'][code_id] _, orig_pred = torch.max(model([x]).data, 1) orig_pred = orig_pred.item() # get the grad loss_function = torch.nn.CrossEntropyLoss() labels = torch.LongTensor([orig_pred]) output = model([x]) loss = loss_function(output, Variable(labels)) loss.backward() n_list = model._modules['encoder'].node_list var_names = get_var_names(ast) success = False var_weighted = [] for var in list(var_names): orig_index = vocab[var].index if var in vocab else MAX_TOKEN grad, node_embedding = get_grad(x, orig_index, n_list) if grad is not None: h = abs((grad @ torch.sign(grad)).item()) var_weighted.append( (h, grad, node_embedding) ) if sort_vars: var_weighted = sorted(var_weighted, key=lambda x: x[0], reverse = True) for h, grad, node_embedding in var_weighted: v = node_embedding g = torch.sign(grad) v = v + epsilon * g # get the closest emebedding from our map i = closest_index(v, sampled_embedding_map, l2_norm) if i != orig_index: new_x_l2 = replace_var(x, orig_index, i) else: new_x_l2 = x if new_x_l2: o = model([new_x_l2]) _, predicted_l2 = torch.max(o.data, 1) # print(orig_pred, predicted_l2.item()) var_total += 1 if orig_pred != predicted_l2.item(): var_count += 1 success = True break if success: ast_count += 1 ast_total += 1 if ast_total % 500 == 499: eval_time = time.time() - start 
eval_time = datetime.timedelta(seconds=eval_time) print(ast_total, ";", eval_time, ";", ast_count / ast_total, ";", var_count / var_total) if limit and limit < ast_total: break return (ast_count / ast_total, var_count / var_total) # sample_rate = 0.2 # sample_count = int(len(embedding_map) * sample_rate) # sampled_embedding_map = {key: embedding_map[key] for key in random.sample(embedding_map.keys(), sample_count)} sampled_embedding_map = embedding_map evaluate(10) ###Output 499 ; 0:06:11.082216 ; 0.4709418837675351 ; 0.10638297872340426 999 ; 0:12:42.325317 ; 0.44544544544544545 ; 0.09812568908489526 1499 ; 0:19:17.990759 ; 0.43695797198132086 ; 0.09406864857101824 1999 ; 0:25:45.001663 ; 0.43921960980490243 ; 0.09426669529740177 2499 ; 0:30:51.134010 ; 0.43137254901960786 ; 0.0915188046523474 2999 ; 0:35:45.699827 ; 0.43347782594198064 ; 0.09219204311750939 3499 ; 0:40:01.739350 ; 0.43412403543869676 ; 0.09294499173958269 3999 ; 0:44:42.915889 ; 0.43810952738184544 ; 0.09404186795491143 4499 ; 0:49:08.399550 ; 0.43943098466325853 ; 0.0945752009184845 4999 ; 0:54:01.274330 ; 0.4370874174834967 ; 0.09332820775670596 5499 ; 0:58:34.313150 ; 0.43498817966903075 ; 0.09266648587920816 5999 ; 1:03:06.773592 ; 0.43723953992332054 ; 0.09341833463921932 6499 ; 1:07:48.804006 ; 0.4397599630712417 ; 0.0940564733758968 6999 ; 1:12:44.722604 ; 0.43934847835405055 ; 0.0937871717448989 7499 ; 1:17:23.098371 ; 0.44192559007867716 ; 0.09463974640888712 7999 ; 1:21:41.197561 ; 0.44318039754969374 ; 0.09505805379025555 8499 ; 1:26:04.079845 ; 0.44122837980938934 ; 0.09443702938880355 8999 ; 1:30:05.035542 ; 0.4398266474052673 ; 0.09417531169696393 9499 ; 1:34:28.999388 ; 0.4405726918623013 ; 0.09431836109170404 9999 ; 1:37:44.151763 ; 0.44124412441244126 ; 0.09473706813252883
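The `closest_index` helper above loops over the embedding map in Python, which dominates the cost of `evaluate`. An optional vectorized variant for the L2 metric (a sketch under the assumption that the query embedding is already a NumPy array, e.g. `node_embedding.detach().numpy()`):

```python
import numpy as np

def closest_index_vectorized(embedding, embedding_map):
    """Nearest key in embedding_map under the L2 metric, without a Python-level loop."""
    keys = np.array(list(embedding_map.keys()))
    mat = np.stack([embedding_map[k] for k in keys])   # shape: (n_candidates, embed_dim)
    dists = np.linalg.norm(mat - embedding, axis=1)    # L2 distance to every candidate
    return int(keys[np.argmin(dists)])
```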
Anscombe.ipynb
###Markdown The Anscombe Assignment. History. Created in 1973 by the statistician Francis Anscombe to demonstrate the need to graph data before it is analysed and the effect of outliers on statistical properties (3). Anscombe's Quartet is a set of four simple, two-dimensional data sets, each containing eleven rows and all with similar statistical properties (1); however, when plotted and viewed in pictorial form, they all appear different. It is not clear where or how Anscombe came to develop the dataset, but he described his work as a counter to the statistician's impression, stating that "numerical calculations are exact, but graphs are rough" (3). Since 1973 there have been several attempts to generate similar data sets with identical statistics that produce dissimilar graphs (1). The Dataset. The raw data set is built into seaborn, so there is no need for a .csv file on the computer. It can be run as follows: ###Code import seaborn as sns df = sns.load_dataset("anscombe") #this produces the numerical values in table form. df #Python source code: download source: anscombes_quartet.py ###Output _____no_output_____ ###Markdown Once the numerical data is displayed, the next step is to view it in pictorial form, as a graph. ###Code import matplotlib.pyplot as plt #matplotlib allows the data to be plotted on a graph; it needs to be imported. import seaborn as sns sns.set(style="ticks") plt.style.use(u'ggplot') df = sns.load_dataset("anscombe") # Load the example dataset for Anscombe's quartet sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df, col_wrap=2, ci=None, palette="muted", height=4, scatter_kws={"s": 50, "alpha": 1}) # Show the results of a linear regression within each dataset plt.show() #credit https://seaborn.pydata.org/examples/anscombes_quartet.html ###Output _____no_output_____ ###Markdown The Images Dataset I: Consists of a set of points that appear to follow a rough linear relationship with some variance, consistent with the assumption of normality. (1) Dataset II: This dataset is not distributed normally; while a relationship between the two variables is obvious, it is not linear, and the Pearson correlation coefficient (https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) is not relevant. It presents a curve and does not follow a linear relationship. Dataset III: The distribution of this data is linear, but the calculated regression is offset by the one outlier (https://en.wikipedia.org/wiki/Outlier), which exerts enough influence to lower the correlation coefficient from 1 to 0.816. This shows a tight linear relationship between x and y, except for one large outlier. (1) Dataset IV: The fourth graph shows an example where one outlier is enough to produce a high correlation coefficient, even though the other data points do not indicate any relationship between the variables. (1) This dataset looks like x remains constant, except for one outlier as well. Viewing the data in numerical or chart form would not have told us any of these differences. Instead, it is important to visualize the data on a graph to get a clear picture of what is going on. Mean, Max and count output. All the summary statistics you'd think to compute are close to identical: The mean value for x is 9 for each dataset. The mean value for y is 7.50 for each dataset. The variance for x is 11. The variance for y is 4.12. The correlation between x and y is 0.816 for each dataset. A linear regression (line of best fit) for each dataset follows the equation y = 0.5x + 3. 
(4)To produce the variables within the dataset, the following was used to produce the numbers in table form: ###Code df = sns.load_dataset("anscombe").describe() #full result (#4) df df = sns.load_dataset("anscombe").mean() #just the mean numbers. df df = sns.load_dataset("anscombe").max() # just the max numbers. df ###Output _____no_output_____
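An alternative way to see the quartet's near-identical statistics side by side (not part of the assignment as submitted): group the tidy seaborn frame by the `dataset` column instead of describing the whole frame at once.

```python
import seaborn as sns

df = sns.load_dataset("anscombe")
# mean and variance of x and y per quartet member
print(df.groupby("dataset").agg(["mean", "var"]))
# Pearson correlation between x and y per quartet member
print(df.groupby("dataset").apply(lambda g: g["x"].corr(g["y"])))
```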
NHL_data_shape_notebooks/Approach_2_cumul_1seas_Pischedda_data_process/Merging Pisch and dummies and Code_for_deriving_stats/v2_stats_tools_more_seasons_Pish.ipynb
###Markdown I am splitting v3_Clean_model_add_Pis_feat.ipynb into 2 notebooks -another one on modelling v1_Model2_Pisch_Eval_Tuning.ipynb-this one on creating stats data set (just Pisch for now) ###Code import numpy as np import pandas as pd ###Output _____no_output_____ ###Markdown some extra feature stuff Some simple functions ###Code ##have not written yet def top3_max_val_params(model, X, dates, drop_first=False): pass def perc_null(X): total = X.isnull().sum().sort_values(ascending=False) data_types = X.dtypes percent = (X.isnull().sum()/X.isnull().count()).sort_values(ascending=False) missing_data = pd.concat([total, data_types, percent], axis=1, keys=['Total','Type' ,'Percent']) return missing_data #this is for regressors predicting wins - losses, can use this to turn output into win prediction def fav_win(x): if x <=0: return 1 if x>0: return 0 def make_win(x): if x <= 0: return 0 if x >0: return 1 v_make_win = np.vectorize(make_win) #useage: v_make_win(y_pred) def one_plus(x): return 1+x v_one_plus = np.vectorize(one_plus) def minus_one(x): return x-1 v_minus_one = np.vectorize(minus_one) def one_minus(x): return 1-x v_one_minus = np.vectorize(one_minus) ###Output _____no_output_____ ###Markdown Some more complex functions useful for generating new -basic stats (eg sh%)-cumulative stats for all prior games for that team up to the present date(not including present date) ###Code ##this function creates the dummy variables for you for evey team ... ##HA_diff does dummies_home - dummies_away ##HA_concat does dummies_home concat dummies_away (to the right) ##! Concat veriosn shouls also do dummy for H/A since it is no linger encoded! ## hmm ... maybe not.. HT_dummies, AT_dummies, HT_stats, AT_stats; Hg-Ag, HTWin def make_HA_diff(X, season, list_var_names = None ): X = X.loc[X['season'] == season, :].copy() X_H = X.loc[X['HoA'] == 'home',:].copy() X_A = X.loc[X['HoA'] == 'away',:].copy() X_H['goal_difference'] = X_H['goalsFor'] - X_H['goalsAgainst'] ##note every thing is based in home data #reset index to prep for df1.sub(df2) X_H.reset_index(drop = True, inplace = True) X_A.reset_index(drop = True, inplace = True) df_visitor = pd.get_dummies(X_H['nhl_name'], dtype=np.int64) df_home = pd.get_dummies(X_A['nhl_name'], dtype=np.int64) df_model = df_home.sub(df_visitor) for feat in ['won', 'goal_difference', 'Open']: ##will go in reverse order df_model.insert(loc=0, column= feat, value= X_H[feat].copy()) #carefule with home and away teams df_model.insert(loc=0, column= 'away_team', value= X_A['nhl_name'].copy()) df_model.insert(loc=0, column= 'home_team', value= X_H['nhl_name'].copy()) df_model.insert(loc=0, column= 'full_date', value= X_H['full_date'].copy()) df_model.insert(loc=0, column= 'game_id', value= X_H['game_id'].copy()) #df_model['home_team'] = X_H['nhl_name'].copy() #df_model['away_team'] = X_A['nhl_name'].copy() #y = X_H.loc[:,['date', 'full_date','game_id', 'Open','goal_difference', 'won']].copy() ##these are from home team perspective; 'Open' is for betting return df_model ##try later maye def make_HA_concat(X, season, list_var_names = None ): X = X.loc[X['season'] == season, :].copy() X_H = X.loc[X['HoA'] == 'home',:].copy() X_A = X.loc[X['HoA'] == 'away',:].copy() X_H['goal_difference'] = X_H['goalsFor'] - X_H['goalsAgainst'] ##note every thing is based in home data X_H.reset_index(drop = True, inplace = True) X_A.reset_index(drop = True, inplace = True) df_visitor = pd.get_dummies(X_H['nhl_name'], dtype=np.int64) df_home = pd.get_dummies(X_A['nhl_name'], dtype=np.int64) 
#df_HA = pd.get_dummies(X['HoA']), dtype=np.int64) df_model = df_home.sub(df_visitor) df_model['date'] = X_H['date'] df_model['full_date'] = X_H['full_date'] df_model['game_id'] = X_H['game_id'] df_model['home_id'] = X_H['team_id'] df_model['away_id'] = X_A['team_id'] y = X_H.loc[:,['date', 'full_date','game_id', 'Open','goal_difference', 'won']].copy() ##these are from home team perspective; 'Open' is for betting return (df_model, y) ['date', 'full_date','game_id', 'Open','goal_difference', 'won' ][::-1] def make_diff(statFor, statAway): return statFor - statAway v_make_diff = np.vectorize(make_diff) def make_per(statFor, statAway): #example FOWFor/(FOWFor + FOWAgainst) or ShAt try: return statFor/(statFor+statAway) except: return 0 v_make_per = np.vectorize(make_per) def make_ratio(stat1, stat2): #example goalsFor/shotsFor = sh% try: return stat1/stat2 except: return 0 v_make_ratio = np.vectorize(make_ratio) ##have to adjust later depending what is convenient to use for stat name ... #not using k_days_back .. will use a unioversal fn to just avg past games... #makes more sense ,,, if they get a 1000 shots against one game, don't want that to leak into #other games .. def get_per(X, statFor): #stat_name = goalsFor or faceoffsWonFor example to keep in mind ... mp style ##we do this so we can loop thru existing names in our feature list stat_name = statFor[:-3] #remove last 3 #statFor = stat_name+'For' statAgainst = stat_name+'Against' X[statFor+'%' ] = v_make_per(X.loc[:,statFor],X.loc[:,statAgainst]) #return v_make_per(X.loc[:,statFor],X.loc[:,statAgainst]) def get_ratio(X, stat1, stat2, new_stat_name): #stat1/stat2 #stat_name = goalsFor or faceoffsWonFor example to keep in mind ... mp style ##we do this so we can loop thru existing names in our feature list X[new_stat_name] = v_make_ratio(X.loc[:,stat1],X.loc[:,stat2]) #return v_make_ratio(X.loc[:,stat1],X.loc[:,stat2]) def get_diff(X, statFor): #stat_name = goalsFor or faceoffsWonFor example to keep in mind ... mp style ##we do this so we can loop thru existing names in our feature list stat_name = statFor[:-3] #remove last 3 #statFor = stat_name+'For' statAgainst = stat_name+'Against' X[stat_name+'Diff']= v_make_diff(X.loc[:,statFor],X.loc[:, statAgainst]) #return v_make_diff(X.loc[:,statFor],X.loc[:, statAgainst]) sorted({8,5}) [2,3,434,45,5,566,6,][-10:] ##!! 
you need to loop over nhl_name at the beginning so your dates are associated to that fixed team #stat_name goalsFor for example def get_k_game_sum(X, stat_name, k_days_back= 10**6 ): #k_days_back #season is random-default #make string version of k_days_back for later; 10**6 means go back forever to beginning of season if k_days_back < 10**6: str_k = '_'+str(k_days_back)+'_day' else: str_k = "_cumul" #set up column eg goalFor_10_days or X[stat_name+str_k+'_sum'] = np.NaN #doing set removes duplicates; sorted makes increasing ordered list all_teams = list(set(X['nhl_name'])) for nhl_name0 in all_teams: #I'll label in nahl_name0 to emphasize it is fixed constant ## this is all the dates *that have this tean nhl_name playing* team0_dates = sorted(set(X.loc[X['nhl_name'] == nhl_name0, 'full_date'])) #note: first date0 of teh season for the team is special because date < date0 will be empty for date1 in team0_dates[1:]: #all but first date so don't get empty object with date< date1 team0_dates_bef_date1 = [date for date in team0_dates if date < date1] #for the fixed team team0_k_dates_bef_date1 = team0_dates_bef_date1[-k_days_back :] # this further restricts to the last k days of list; # or returns all for large k_days_back, [1,2][-10,:] = [1,2] #will be nonempty if k_days_back >0 #we have to restrict to k *team = nhl_name0* games so that's why we do it here #restrict the df to just team0 and dates < date1 (this is where we get empty if date1 =date0 ) X_team0_k_bef_date1 = X.loc[(X['nhl_name'] == nhl_name0) & X['full_date'].isin(team0_k_dates_bef_date1), :].copy() #main step; first calculate the sum for nhl_name0 and dates < date1 k_sum = np.sum(X_team0_k_bef_date1[stat_name]) #here we assign the value to a unique row of the original X which was passed #the new columns is eg goalsFor_10_sum or goals_for_cumul_sum X.loc[(X['full_date'] == date1)& (X['nhl_name'] == nhl_name0) , stat_name+str_k+'_sum'] =k_sum #you can either operate directly on X which was passed, adding a new column, or you could return a df ... #so far seems convenient to do the former #return X #huh? this below is wrong ... date1 should not be touched #k_sum = X.loc[(X['full_date'] == date1)& (X['nhl_name'] == nhl_name) , stat_name] ##should be single number anyway #X.loc[(X['full_date'] == date1)& (X['nhl_name'] == nhl_name) , stat_name+str_k+'_sum'] =k_sum ##!! 
you need to loop over nhl_name at the beginning so your dates are associated to that fixed team #stat_name goalsFor for example def get_k_game_avg(X, stat_name, k_days_back= 10**6 ): #k_days_back #season is random-default #make string version of k_days_back for later; 10**6 means go back forever to beginning of season if k_days_back < 10**6: str_k = '_'+str(k_days_back)+'_day' else: str_k = "_cumul" #set up column eg goalFor_10_days or X[stat_name+str_k+'_avg'] = np.NaN #doing set removes duplicates; sorted makes increasing ordered list all_teams = list(set(X['nhl_name'])) for nhl_name0 in all_teams: #I'll label in nahl_name0 to emphasize it is fixed constant ## this is all the dates *that have this tean nhl_name playing* team0_dates = sorted(set(X.loc[X['nhl_name'] == nhl_name0, 'full_date'])) #note: first date0 of teh season for the team is special because date < date0 will be empty for date1 in team0_dates[1:]: #all but first date so don't get empty object with date< date1 team0_dates_bef_date1 = [date for date in team0_dates if date < date1] #for the fixed team team0_k_dates_bef_date1 = team0_dates_bef_date1[-k_days_back :] # this further restricts to the last k days of list; # or returns all for large k_days_back, [1,2][-10,:] = [1,2] #will be nonempty if k_days_back >0 #we have to restrict to k *team = nhl_name0* games so that's why we do it here #restrict the df to just team0 and dates < date1 (this is where we get empty if date1 =date0 ) X_team0_k_bef_date1 = X.loc[(X['nhl_name'] == nhl_name0) & X['full_date'].isin(team0_k_dates_bef_date1), :].copy() #main step; first calculate the sum for nhl_name0 and dates < date1 k_avg = np.mean(X_team0_k_bef_date1[stat_name]) #here we assign the value to a unique row of the original X which was passed #the new columns is eg goalsFor_10_avg or goals_for_cumul_avg X.loc[(X['full_date'] == date1)& (X['nhl_name'] == nhl_name0) , stat_name+str_k+'_avg'] =k_avg #you can either operate directly on X which was passed, adding a new column, or you could return a df ... #so far seems convenient to do the former #return X #huh? this below is wrong ... date1 should not be touched #k_sum = X.loc[(X['full_date'] == date1)& (X['nhl_name'] == nhl_name) , stat_name] ##should be single number anyway #X.loc[(X['full_date'] == date1)& (X['nhl_name'] == nhl_name) , stat_name+str_k+'_sum'] =k_sum ###Output _____no_output_____ ###Markdown Now let's run thru generating the Pisch data set again ... (some corrections) ###Code data = pd.read_csv("/Users/joejohns/data_bootcamp/GitHub/final_project_nhl_prediction/Data/Shaped_Data/data_bet_stats_mp.csv") data.drop(columns=[ 'Unnamed: 0'], inplace=True) data['won'] = data['won'].apply(int) data_playoffs = data.loc[data['playoffGame'] == 1, :].copy() #set aside playoff games ... probably won't use them. 
data= data.loc[data['playoffGame'] == 0, :].copy() #fix the Nans in FOW%: data['faceOffTotalBothTeams'] = data['faceOffsWonFor'] + data['faceOffsWonAgainst'] data['faceOffWinPercentage'] = v_make_ratio(data['faceOffsWonFor'],data['faceOffTotalBothTeams']) #sorted(data.columns) #bad_game_ids with 0 0 score (in df_game_team_stats) ,df_game is probably ok bad_game_ids = [2008020057, 2008020071, 2008020306, 2008020623, 2008021108, 2008021196, 2009020072, 2009020253, 2009020682, 2009020831, 2009021118, 2009021209, 2010020382, 2010020761, 2010020878, 2010021111, 2011020749, 2011020787, 2011021016, 2011021052, 2011021108, 2012020159, 2012020412, 2012020487, 2013020126, 2013021136, 2013021223, 2014020055, 2014020158, 2014020313, 2014020456, 2014021008, 2014021210, 2016020785, 2017020561, 2017020965, 2018020783, 2019020127, 2019021041] ##impute the missing 00 games data.loc[data['game_id'].isin(bad_game_ids)&(data['won']==1),'goalsFor'] = 1.0 data.loc[data['game_id'].isin(bad_game_ids)&(data['won']==0), 'goalsAgainst'] = 1.0 #verify #data.loc[data['game_id'].isin(bad_game_ids), ['won', 'goalsFor', 'goalsAgainst']] ###debugging ... why am I gettin 47% nan in dummies and date, id float? perc_null(data.loc[data['season'] ==season, :]) season = 20152016 X_seas = data.loc[data['season'] ==season, :].copy() #Here Pis is for Pischada ... I am following a 2013 paper which he based on the Weissenbock paper (he is at U Ottawa) #Piscada paper file:///Users/joejohns/Downloads/PredictingNHLmatchoutcomeswithMLmodels%20(1).pdf feat_Pis = ['goalsAgainst', 'goalsFor', 'goalDiff', 'goal_perc', 'PP%', 'PK%', 'sh%', 'sv%', 'win_streak_grouped_10', 'conference_standing_grouped_10','Fclose%', 'PDO'] ##these ae teh features from X_seas I will need to build the Pisch features feat_for_Pis_small = [ 'goalsAgainst','goalsFor', 'powerPlayGoals','powerPlayOpportunities', 'shotsOnGoalAgainst', 'shotsOnGoalFor','savedShotsOnGoalAgainst','savedShotsOnGoalFor', 'fenwickPercentage',] #we start with pp% and pk% below get_ratio(X_seas, 'powerPlayGoals','powerPlayOpportunities', 'pp%') #to do pk% (no SHgoalsAgainst) so we use pk% of HT = 1- pp% of AT and vice versa #set up the pk% column initialized with 0 X_seas['pk%'] = 0 ##to make this 1-pk% work we make sure the indices are set up consistently (not sure if needed) X_seas.sort_values(by = ['full_date', 'game_id', 'HoA'], inplace = True) X_seas.reset_index(drop = True, inplace = True) ##note: The following 2 lines did not work when I had X_seas = (...) with no .copy() above! X_seas.loc[X_seas['HoA'] == 'home', 'pk%'] = v_one_minus(X_seas.loc[X_seas['HoA'] == 'away' ,['pp%']].copy()) X_seas.loc[X_seas['HoA'] == 'away', 'pk%'] = v_one_minus(X_seas.loc[X_seas['HoA'] == 'home' ,['pp%']].copy()) #this creates gfdiff, gf%, sh%, sv% get_diff(X_seas, 'goalsFor') get_per(X_seas, 'goalsFor') get_ratio(X_seas, 'goalsFor', 'shotsOnGoalFor', 'sh%') get_ratio(X_seas, 'savedShotsOnGoalAgainst', 'shotsOnGoalAgainst', 'sv%') #X_seas['sv%'] = one_minus(X_seas['sv%']) don't need this now ... used savedSHA #pdo is simple sum X_seas['PDO'] = X_seas['sh%'] + X_seas['sv%'] ##we create a column of 1s to calculated games so far X_seas['ones'] = 1 get_k_game_sum(X_seas, 'ones') get_k_game_sum(X_seas, 'won') X_seas.rename(columns = {'ones_cumul_sum': 'team_games_so_far'}, inplace = True) #counts num of games so far for that team ##total wins and win% (I am ignoring distinction OT/SO/Reg) get_ratio(X_seas, 'won_cumul_sum', 'team_games_so_far', 'win%_cumul') ##total wins and win% in the last 10 games ... 
later can try different versions get_k_game_sum(X_seas, 'won', k_days_back=10) get_k_game_sum(X_seas, 'ones', k_days_back=10) get_ratio(X_seas, 'won_10_day_sum','ones_10_day_sum','win%_last_10_games') feat_Pis_to_sum = ['goalsAgainst', 'goalsFor', 'goalsDiff'] feat_Pis_to_avg = ['goalsFor%', 'pp%', 'pk%', 'sh%', 'sv%', 'PDO','fenwickPercentage', 'corsiPercentage', 'xGoalsPercentage', ] ##added corsi and xgoals ##NOTE! This is a bit inaccurate to average these ... one should find cumul_sums then do the % calculations above for accurate % ##let's see if they are that different ... #'last_10_games_win%', 'win%', omtted because already a 10 gm avg, for feat_to_sum in feat_Pis_to_sum: get_k_game_sum(X_seas, feat_to_sum) for feat_to_avg in feat_Pis_to_avg: get_k_game_avg(X_seas, feat_to_avg) ##get dummies feat_Pis = ['goalsAgainst', 'goalsFor', 'goal_diff', 'goal_perc', 'PP%', 'PK%', 'sh%', 'sv%', 'win_streak_grouped_10', 'conference_standing_grouped_10','Fclose%', 'PDO'] #did win% and win%_last_10 instead of streak and standing ... did regular fenwick, not fenclose #might be good to group these into 5-10 groups ... data_Pis_pre_xg_corsi = X_seas.loc[:, ['HoA', 'goalsAgainst_cumul_sum', 'goalsFor_cumul_sum', 'goalsDiff_cumul_sum', 'goalsFor%_cumul_avg', 'pp%_cumul_avg', 'pk%_cumul_avg', 'sh%_cumul_avg', 'sv%_cumul_avg', 'PDO_cumul_avg', 'fenwickPercentage_cumul_avg', 'corsiPercentage_cumul_avg', 'xGoalsPercentage_cumul_avg', 'win%_last_10_games', 'win%_cumul', ]].copy() df_dic ={} #set season = 20122013 #select this shortened season because same as Pisch for season in [20152016, 20162017, 20172018, 20182019]: X_seas = data.loc[data['season'] ==season, :].copy() #Here Pis is for Pischada ... I am following a 2013 paper which he based on the Weissenbock paper (he is at U Ottawa) #Piscada paper file:///Users/joejohns/Downloads/PredictingNHLmatchoutcomeswithMLmodels%20(1).pdf feat_Pis = ['goalsAgainst', 'goalsFor', 'goalDiff', 'goal_perc', 'PP%', 'PK%', 'sh%', 'sv%', 'win_streak_grouped_10', 'conference_standing_grouped_10','Fclose%', 'PDO'] ##these ae teh features from X_seas I will need to build the Pisch features feat_for_Pis_small = [ 'goalsAgainst','goalsFor', 'powerPlayGoals','powerPlayOpportunities', 'shotsOnGoalAgainst', 'shotsOnGoalFor','savedShotsOnGoalAgainst','savedShotsOnGoalFor', 'fenwickPercentage',] #we start with pp% and pk% below get_ratio(X_seas, 'powerPlayGoals','powerPlayOpportunities', 'pp%') #to do pk% (no SHgoalsAgainst) so we use pk% of HT = 1- pp% of AT and vice versa #set up the pk% column initialized with 0 X_seas['pk%'] = 0 ##to make this 1-pk% work we make sure the indices are set up consistently (not sure if needed) X_seas.sort_values(by = ['full_date', 'game_id', 'HoA'], inplace = True) X_seas.reset_index(drop = True, inplace = True) ##note: The following 2 lines did not work when I had X_seas = (...) with no .copy() above! X_seas.loc[X_seas['HoA'] == 'home', 'pk%'] = v_one_minus(X_seas.loc[X_seas['HoA'] == 'away' ,['pp%']].copy()) X_seas.loc[X_seas['HoA'] == 'away', 'pk%'] = v_one_minus(X_seas.loc[X_seas['HoA'] == 'home' ,['pp%']].copy()) #this creates gfdiff, gf%, sh%, sv% get_diff(X_seas, 'goalsFor') get_per(X_seas, 'goalsFor') get_ratio(X_seas, 'goalsFor', 'shotsOnGoalFor', 'sh%') get_ratio(X_seas, 'savedShotsOnGoalAgainst', 'shotsOnGoalAgainst', 'sv%') #X_seas['sv%'] = one_minus(X_seas['sv%']) don't need this now ... 
used savedSHA #pdo is simple sum X_seas['PDO'] = X_seas['sh%'] + X_seas['sv%'] ##we create a column of 1s to calculated games so far X_seas['ones'] = 1 get_k_game_sum(X_seas, 'ones') get_k_game_sum(X_seas, 'won') X_seas.rename(columns = {'ones_cumul_sum': 'team_games_so_far'}, inplace = True) #counts num of games so far for that team ##total wins and win% (I am ignoring distinction OT/SO/Reg) get_ratio(X_seas, 'won_cumul_sum', 'team_games_so_far', 'win%_cumul') ##total wins and win% in the last 10 games ... later can try different versions get_k_game_sum(X_seas, 'won', k_days_back=10) get_k_game_sum(X_seas, 'ones', k_days_back=10) get_ratio(X_seas, 'won_10_day_sum','ones_10_day_sum','win%_last_10_games') feat_Pis_to_sum = ['goalsAgainst', 'goalsFor', 'goalsDiff'] feat_Pis_to_avg = ['goalsFor%', 'pp%', 'pk%', 'sh%', 'sv%', 'PDO','fenwickPercentage', 'corsiPercentage', 'xGoalsPercentage', ] ##added corsi and xgoals ##NOTE! This is a bit inaccurate to average these ... one should find cumul_sums then do the % calculations above for accurate % ##let's see if they are that different ... #'last_10_games_win%', 'win%', omtted because already a 10 gm avg, for feat_to_sum in feat_Pis_to_sum: get_k_game_sum(X_seas, feat_to_sum) for feat_to_avg in feat_Pis_to_avg: get_k_game_avg(X_seas, feat_to_avg) ##get dummies feat_Pis = ['goalsAgainst', 'goalsFor', 'goal_diff', 'goal_perc', 'PP%', 'PK%', 'sh%', 'sv%', 'win_streak_grouped_10', 'conference_standing_grouped_10','Fclose%', 'PDO'] #did win% and win%_last_10 instead of streak and standing ... did regular fenwick, not fenclose #might be good to group these into 5-10 groups ... data_Pis_pre_xg_corsi = X_seas.loc[:, ['HoA', 'goalsAgainst_cumul_sum', 'goalsFor_cumul_sum', 'goalsDiff_cumul_sum', 'goalsFor%_cumul_avg', 'pp%_cumul_avg', 'pk%_cumul_avg', 'sh%_cumul_avg', 'sv%_cumul_avg', 'PDO_cumul_avg', 'fenwickPercentage_cumul_avg', 'corsiPercentage_cumul_avg', 'xGoalsPercentage_cumul_avg', 'win%_last_10_games', 'win%_cumul', ]].copy() data_Pis_pre = X_seas.loc[:, ['HoA', 'goalsAgainst_cumul_sum', 'goalsFor_cumul_sum', 'goalsDiff_cumul_sum', 'goalsFor%_cumul_avg', 'pp%_cumul_avg', 'pk%_cumul_avg', 'sh%_cumul_avg', 'sv%_cumul_avg', 'PDO_cumul_avg', 'fenwickPercentage_cumul_avg', 'win%_last_10_games', 'win%_cumul', ]].copy() #df_dic["data_Pis_pre_xg_corsi_"+str(season)] = data_Pis_pre_xg_corsi #df_dic["data_Pis_pre_"+str(season)] = data_Pis_pre #targets # 'won', 'goal_difference' ,'goalsAgainst','goalsFor', 'Open' #id stuff ##'full_date','season', 'game_id', 'nhl_name','HoA','opposingTeam', data_Pis_H = data_Pis_pre.loc[data_Pis_pre['HoA'] =='home',:].copy() data_Pis_A = data_Pis_pre.loc[data_Pis_pre['HoA'] =='away',:].copy() #reset index for df1.sub(df2) data_Pis_A.reset_index(drop = True, inplace = True) data_Pis_H.reset_index(drop = True, inplace = True) ##set the numerical data to home stats - away stats data_Pis = data_Pis_H.iloc[:, 1:].copy().sub(data_Pis_A.iloc[:, 1:].copy()).copy() #remove the 'HoA' column with 1: dummies_pm1_Pis = make_HA_diff(data, season = season).copy() #single df, has id stuff and target stuff! 
dummies_pm1_Pis.reset_index(drop = True, inplace =True) #combine dummies and data data_dummies_Pis = pd.concat([dummies_pm1_Pis, data_Pis], axis =1) df_dic["data_Pis_pre_xg_corsi_"+str(season)] = data_dummies_Pis filename_seas = 'data_dummies_Pis_xg_Corsi_v3_'+str(season)+'.csv' data_dummies_Pis.to_csv(filename_seas) ###Output _____no_output_____ ###Markdown moved into for loop over seasons ...feat_Pis_to_sum = ['goalsAgainst', 'goalsFor', 'goalsDiff']feat_Pis_to_avg = ['goalsFor%', 'pp%', 'pk%', 'sh%', 'sv%', 'fenwickPercentage', 'PDO', ] NOTE! This is a bit inaccurate to average these ... one should find cumul_sums then do the % calculations above for accurate %let's see if they are that different ... 'last_10_games_win%', 'win%', omtted because already a 10 gm avg, for feat_to_sum in feat_Pis_to_sum: get_k_game_sum(X_12, feat_to_sum) for feat_to_avg in feat_Pis_to_avg: get_k_game_avg(X_12, feat_to_avg) ###Code # check stuff 'SJS', 'ANA' ... looks good ##chnage X_12 to X_20162017 = pd.read_csv(...) #X_12.loc[X_12['nhl_name'] == 'SJS', ['won','nhl_name', 'team_games_so_far','full_date', 'sv%', 'sv%_cumul_avg', 'win%', 'won_10_day_sum','ones_10_day_sum','last_10_games_win%']] ##note: restrict the dates later if you want to muchacho ##and remove the target etc at modelling time #data_dummies_Pis.iloc[20:50,:] ##fiddle with this later feat_mine = [ 'full_date', 'season', 'game_id', 'HoA', 'nhl_name', 'opposingTeam', 'settled_in', 'playoffGame', 'situation', 'won', 'goalsAgainst', 'goalsFor', 'penalityMinutesAgainst', 'penalityMinutesFor', 'penaltiesAgainst', 'penaltiesFor', 'powerPlayGoals', 'powerPlayOpportunities', 'faceOffsWonAgainst', 'faceOffsWonFor', 'faceOffWinPercentage', #filled in 'giveawaysAgainst', 'giveawaysFor', 'dZoneGiveawaysAgainst', 'dZoneGiveawaysFor', 'shotAttemptsAgainst', 'shotAttemptsFor', 'unblockedShotAttemptsAgainst', 'unblockedShotAttemptsFor', 'shotsOnGoalAgainst', 'shotsOnGoalFor', 'savedShotsOnGoalAgainst', 'savedShotsOnGoalFor', 'savedUnblockedShotAttemptsAgainst', 'savedUnblockedShotAttemptsFor', 'xFreezeAgainst', 'xFreezeFor', 'xGoalsAgainst', 'xGoalsFor', 'scoreVenueAdjustedxGoalsAgainst', 'scoreVenueAdjustedxGoalsFor', 'xGoalsPercentage', 'corsiPercentage', 'fenwickPercentage', 'flurryScoreVenueAdjustedxGoalsAgainst', 'flurryScoreVenueAdjustedxGoalsFor', 'highDangerxGoalsAgainst', 'highDangerxGoalsFor', 'highDangerShotsAgainst', 'highDangerShotsFor',] sorted(data.columns) data['faceOffWinPercentage'].isnull().value_counts() feat_Pis = ['goalsAgainst', 'goalsFor', 'goal_diff', 'goal_perc', 'PP%', 'PK%', 'sh%', 'sv%', 'win_streak_grouped_10', 'conference_standing_grouped_10','Fclose%', 'PDO'] feat_Pis_plus = ['goalsAgainst', 'goalsFor', 'goal_diff', 'goal_perc', 'PP%', 'PK%', 'sh%', 'sv%', 'win_streak', 'pts%', 'win%', 'Fclose%', 'PDO'] #features I need to do this ... feat_for_Pis = ['full_date','season', 'game_id', 'nhl_name','HoA','opposingTeam', 'goalsAgainst','goalsFor', 'powerPlayGoals','powerPlayOpportunities', 'shotsOnGoalAgainst','shotsOnGoalFor','savedShotsOnGoalAgainst','savedShotsOnGoalFor', 'fenwickPercentage', 'won', 'settled_in',] #'win_streak_grouped_10', 'conference_standing_grouped_10', #'PDO' #targets #could later try to train classifier to predict this ... 
tough tho --> reg or no extra = ['corsiPercentage', 'penaltiesAgainst', 'penaltiesFor', 'shotAttemptsAgainst', 'shotAttemptsFor', 'unblockedShotAttemptsAgainst', 'unblockedShotAttemptsFor', 'savedUnblockedShotAttemptsAgainst', 'savedUnblockedShotAttemptsFor', 'xGoalsPercentage','scoreVenueAdjustedxGoalsAgainst', 'scoreVenueAdjustedxGoalsFor','blockedShotAttemptsAgainst', 'blockedShotAttemptsFor', 'flurryAdjustedxGoalsAgainst', 'flurryAdjustedxGoalsFor', 'flurryScoreVenueAdjustedxGoalsAgainst', 'flurryScoreVenueAdjustedxGoalsFor', 'missedShotsAgainst', 'missedShotsFor',] feat_Pis = ['goalsAgainst', 'goalsFor', 'goal_diff', 'goal_perc', 'PP%', 'PK%', 'sh%', 'sv%', 'win_streak_grouped_10', 'conference_standing_grouped_10','Fclose%', 'PDO'] feat_for_Pis = ['full_date','season', 'game_id', 'nhl_name','HoA','opposingTeam', 'goalsAgainst','goalsFor', 'powerPlayGoals','powerPlayOpportunities', 'shotsOnGoalAgainst','shotsOnGoalFor','savedShotsOnGoalAgainst','savedShotsOnGoalFor', 'fenwickPercentage', 'won', 'settled_in',] ###set Leung aside for now ... feat_Leung = ['ID', 'Date', 'HomeTeam', 'AwayTeam', 'GDiff','GF%', 'CF%','CSh%', 'CSv%', 'FF%','FSh%','FSv%','PDO','PENDiff','ShF%','SDiff','Sh%', 'Sv%','FOW%','W%','FavoritesW%', 'Result'] feat_for_Leung = ['full_date','season', 'game_id', 'nhl_name','HoA','opposingTeam', 'goalsAgainst','goalsFor', 'powerPlayGoals','powerPlayOpportunities', 'shotsOnGoalAgainst','shotsOnGoalFor','savedShotsOnGoalAgainst','savedShotsOnGoalFor', 'fenwickPercentage', 'won', 'settled_in',] extra = ['corsiPercentage', 'penaltiesAgainst', 'penaltiesFor', 'shotAttemptsAgainst', 'shotAttemptsFor', 'unblockedShotAttemptsAgainst', 'unblockedShotAttemptsFor', 'savedUnblockedShotAttemptsAgainst', 'savedUnblockedShotAttemptsFor', 'xGoalsPercentage','scoreVenueAdjustedxGoalsAgainst', 'scoreVenueAdjustedxGoalsFor','blockedShotAttemptsAgainst', 'blockedShotAttemptsFor', 'flurryAdjustedxGoalsAgainst', 'flurryAdjustedxGoalsFor', 'flurryScoreVenueAdjustedxGoalsAgainst', 'flurryScoreVenueAdjustedxGoalsFor', 'missedShotsAgainst', 'missedShotsFor',] feat_plus = ['full_date','season', 'game_id', 'nhl_name','HoA','opposingTeam', 'goalsAgainst','goalsFor', 'goal_diff', 'goal_perc', 'PP%', 'PK%', 'sh%', 'sv%', ] ###Output _____no_output_____ ###Markdown checking the calculations done to make the data set Overall, the sums, differences, ratios, cumulative sums and cumulative avgs are working well! what I checked: (beginning of season for SJS and a little bit TBL, I eye-balled and also checked a few with a calculator; end of season just eye-balled a few)The following problems were found: [now fixed as of 7pm Aug 6]1. df_game_team_stats has around 40 games with 0 0 score (df_game is ok)note: they are all OT 1-0 games (from df_game) so these can be imputed using the "won" column (the game_ids are below)2. get_k_avg and get_k_sum need to loop over the team so that the dates are restricted to the fixed team (rather than all the last 10 dates regardless of whether the team played or not). This error messed up: wins in last 10 and stuff (counting ones also)3. the first game of teams starting after first game of season have a bad first row ... it should be NaN but instead it is same as game 2 ... after that no errors game 2 on ... so not big deal. Probably date = date0etc. is messed up in get_k_sum and get_k_avg ... probably looping over teams first will fix it like in 2. ###Code ##check stats calculations first few games ... ##part 1 ... goals for and against stats ... ##they are all OT 1-0 wins ... 
so you can impute this using the "won" column X_12.loc[X_12['nhl_name'].isin(['SJS']), ['game_id','full_date', 'nhl_name', 'HoA', 'won', 'settled_in','games_so_far', 'goalsAgainst', 'goalsFor', 'goalsFor%', 'goalsDiff','goalsDiff_cumul_sum','goalsAgainst_cumul_sum', 'goalsFor_cumul_sum', 'goalsFor%_cumul_avg', ]].iloc[:15, :] 'shotsOnGoalAgainst', 'shotsOnGoalFor', 'savedShotsOnGoalAgainst', 'savedShotsOnGoalFor', 'fenwickPercentage', 'sh%_cumul_avg', 'sv%_cumul_avg', 'PDO', 'PDO_cumul_avg' 'fenwickPercentage_cumul_avg', 'sh%', 'sv%', 'powerPlayGoals', 'powerPlayOpportunities', 'pp%', 'pk%', 'pp%_cumul_avg', 'pk%_cumul_avg', ##check stats calculations first few games ... ##part 4 ...shots, fenwick X_12.loc[X_12['nhl_name'].isin(['SJS']), ['game_id','full_date', 'nhl_name', 'HoA', 'won','goalsAgainst', 'goalsFor', 'powerPlayGoals', 'powerPlayOpportunities', 'pp%', 'pk%', 'pp%_cumul_avg', 'pk%_cumul_avg', ]].iloc[:20, :] ##check stats calculations first few games ... ##part 3 ... pp, X_12.loc[X_12['nhl_name'].isin(['SJS']), ['game_id','full_date', 'nhl_name', 'HoA', 'won','goalsAgainst', 'goalsFor', 'shotsOnGoalAgainst', 'shotsOnGoalFor', 'savedShotsOnGoalAgainst', 'savedShotsOnGoalFor', 'sh%', 'sv%','PDO', 'PDO_cumul_avg','sh%_cumul_avg', 'sv%_cumul_avg', 'fenwickPercentage', 'fenwickPercentage_cumul_avg', ]].iloc[-8:, :] ##check stats calculations first few games ... ##part 2 ... wins, win% ... X_12.loc[X_12['nhl_name'].isin(['SJS']), ['game_id','full_date', 'nhl_name', 'HoA', 'won','goalsAgainst', 'goalsFor', 'won_cumul_sum', 'games_so_far', 'win%', 'ones','won_10_day_sum', 'ones_10_day_sum', 'last_10_games_win%', ]].iloc[:20, :] ###Output _____no_output_____
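###Markdown The `get_k_game_sum`/`get_k_game_avg` helpers above build the "previous k games for this team" features with nested loops over teams and dates. As a rough, untested sketch (assuming one row per team per game and the `nhl_name`/`full_date` columns used above), the same per-team rolling average could also be expressed with pandas `groupby`/`rolling`: ###Code
def rolling_prev_k_avg(df, stat_name, k=10):
    # sort so each team's games are in chronological order
    df = df.sort_values(['nhl_name', 'full_date']).copy()
    out_col = stat_name + '_' + str(k) + '_day_avg'
    # shift(1) excludes the current game; rolling(k) averages the previous (up to) k games;
    # swapping rolling(...) for expanding() would give the cumulative ("_cumul") version
    df[out_col] = (df.groupby('nhl_name')[stat_name]
                     .transform(lambda s: s.shift(1).rolling(window=k, min_periods=1).mean()))
    return df

# hypothetical usage: X_seas = rolling_prev_k_avg(X_seas, 'goalsFor', k=10)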
authentication.ipynb
###Markdown This notebook deals with authenticating download requests, allowing for programmatic bulk-downloading without having to resort to Selenium.See: https://github.com/n8henrie/pycookiecheatNote: This requires Python 3, whereas for all else I am using Python 2 ###Code from pycookiecheat import chrome_cookies import requests # You want to visit the actual URL before running this #url = 'http://www.nature.com.ezp-prod1.hul.harvard.edu/articles/nature10381.pdf' url = 'http://www.nature.com.ezp-prod1.hul.harvard.edu/articles/srep44529.pdf' # Uses Chrome's default cookies filepath by default cookies = chrome_cookies(url) ###Output _____no_output_____ ###Markdown Here we could potentially prolong the expiration of the cookie. But I'm not sure of the consequences of doing that. ###Code # Let's do it anyways # WARNING: This is specific to the NATURE source cookies['PS_TOKENEXPIRE'] = '20_Feb_2019_01:38:49_GMT' # For now, we save the credentials as JSON to be used later import json import os.path cred_str = json.dumps(cookies) # IMPORTANT: DO NOT CHECK CREDENTIALS INTO THE REPOSITORY # the credentials directory is ignored so it is safe. out_name = 'gabe_nature' fname = 'credentials/' + out_name + '.json' # Prevents overwriting #assert not os.path.isfile(fname) with open(fname, 'w') as f: f.write(cred_str) # We can now use the cookies, for example: # Read file import json with open(fname, 'r') as f: json_str=f.read().replace('\n', '') cookies = json.loads(json_str) # Note: I've made a util for doing the above #r = requests.get(url, cookies=cookies) #r from importlib import reload import util reload(util); download_file(url, cookies) ###Output _____no_output_____
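###Markdown The `download_file` util imported from `util` above is not shown in this notebook. Purely as a guess at what such a helper might look like (a streamed `requests` download that reuses the browser cookies; the output directory and filename logic are assumptions, not the actual implementation): ###Code
import os
import requests

def download_file(url, cookies, out_dir='downloads'):
    # hypothetical helper: stream the response to disk, reusing the Chrome cookies
    os.makedirs(out_dir, exist_ok=True)
    fname = os.path.join(out_dir, url.rstrip('/').split('/')[-1])
    with requests.get(url, cookies=cookies, stream=True) as r:
        r.raise_for_status()
        with open(fname, 'wb') as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
    return fname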
jupyter_russian/tutorials/pipeline_featureunion_datamove.ipynb
###Markdown Открытый курс по машинному обучениюАвтор материала: Трунов Артем Геннадьевич, @datamove. Pipeline, FeatureUnion – практика применения ВведениеВ этой статье будем разбираться с классами пакета sklearn, которые представляют значительное удобство и экономию времени в работе. Многие, наверное, любят, когда код иллюстрируется диаграммами классов или каким-нибудь метакодом, который позволяет убрать из поля зрения все детали реализации и оставить на виду только самое главное. Pipeline в sklearn - это и есть такой вот метакод, с помошью которого модель видна как на ладони.В качестве же примеров будем использовать не измусоленные со всех сторон встроенные в sklearn датасеты, а знакомые читателю по домашним работам и соревнованию 'Alice' данные. Надеюсь, что кого-то эта статья побудит исправить свой код и вдохновит на новые засылки на Kaggle! PipelineИтак, начнем с перевода и определения. Русские слова "труба" и,тем паче, "трубопровод", мы, пожалуй, использовать не будем, а вот вариант "конвейер данных" кажется мне наиболее подходящим и благозвучным.Документация Pipeline определяет этот класс как конвейер преобразования данных с финальным эстиматором (обучающей моделью), применяющийся для того, чтобы можно было легко менять параметры на каждом этапе конвейера и сравнивать результаты. С такой же легкостью можно заменять и сами этапы преобразований данных и финальную модель.Давайте сразу окунемся в пример. Рассмотрим датасет Самсунга из домашней работы №7. Мы применяли к данным алгоритм PCA для уменьшения размерности, а что бы он работал как надо, предварительно масштабировали данные. Для классификации использовали метод опорных векторов. Таким образом, наш конвейер будет состоять из двух шагов обработки (StandardScaler, PCA) и финальной модели (LinearSVC). ###Code #изменить соответственно PATH_TO_DATA="../../" #загрузка данных #На всякий случай - ссылка https://cloud.mail.ru/public/3EJK/cB2VXsyrP import numpy as np X_train = np.loadtxt(PATH_TO_DATA+"data/samsung_HAR/samsung_train.txt") y_train = np.loadtxt(PATH_TO_DATA+"data/samsung_HAR/samsung_train_labels.txt").astype(int) X_test = np.loadtxt(PATH_TO_DATA+"data/samsung_HAR/samsung_test.txt") y_test = np.loadtxt(PATH_TO_DATA+"data/samsung_HAR/samsung_test_labels.txt").astype(int) from sklearn.decomposition import PCA from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from sklearn.svm import LinearSVC pipeline = Pipeline([ ('scaler', StandardScaler()), ('pca', PCA(n_components=65)), ('svc', LinearSVC()) ]) ###Output _____no_output_____ ###Markdown Ну как, красиво? Давайте разберем. Конструктор Pipeline() принимает массив кортежей, в каждом из которых мнемоническое обозначение этапа преобразования и экземпляр класса преобразователя, инстанциированный "на лету". Первый этап предразования, 'scaler', принимает исходные данные, и выдает отмасштабированные на выход, который является входом второго этапа - 'pca'. Из 'pca' в 'svc' поступает урезанная матрица главных компонентов числом 65 штук. Обучение проводится именно на ней.Давайте запустим обучение и получим результат. ###Code pipeline.fit(X_train, y_train) #"Валидируемся" на том же тренировочном датасете pred = pipeline.predict(X_train) from sklearn.metrics import accuracy_score, roc_auc_score accuracy_score(pred, y_train) ###Output _____no_output_____ ###Markdown Заметим, что метод fit() вызывается на всех этапах конвейера, а метод predict() - только для финального эстиматора. 
Тоже самое, разумеется, происходит и с другим набором данных: ###Code #Валидируемся на тестовом датасете, для которого у нас есть разметка test_pred = pipeline.predict(X_test) accuracy_score(test_pred, y_test) ###Output _____no_output_____ ###Markdown Это значит, что нам не надо тащить за собой хвост из преобразований тестовой выборки, об этом позаботится наш конвейер!Так как конвейер обладает интерфейсом модели обучения (fit(), predict() etc), то мы можем использовать его напрямую с полюбившимися методами, такими как кросс-валидация: ###Code from sklearn.model_selection import cross_val_score cross_val_score(pipeline, X_train, y_train, cv=3) ###Output _____no_output_____ ###Markdown а также GridSearchCV: ###Code from sklearn.model_selection import GridSearchCV gcv_params = {'pca__n_components': [20,60,100], 'svc__C': [0.001, 0.01, 0.1, 1, 10] } gcv = GridSearchCV(pipeline, gcv_params, cv=3) gcv.fit(X_train,y_train) gcv.best_params_ ###Output _____no_output_____ ###Markdown "Так," - скажет внимательный читатель,- "а ведь нам надо было выбрать число компонент так, чтобы оставить 90% дисперсии исходных данных". Как же мы используем это условие в конвейере?". Вообще-то оно уже реализовано в классе PCA - достаточно передать параметер n_components=0.9 в конструктор класса. Но давайте сделаем сами. Для этого нам придется реализовать собственный класс эстиматора, для многих - первый в их жизни! Сейчас увидим, что на самом деле, это - легко!Мы унаследуем класс PCA и перегрузим методы fit(), transform() и fit_tranaform() так, чтобы возвращать матрицу с числом компонентов, объясняющих exp_var% дисперсии.Затем построим конвейер с новым классом. ###Code class PCAExplainedVariance(PCA): #констуктор принимает и сохраняет значение желаемой дисперсии def __init__(self, exp_var=1.0 ): super().__init__(copy=True) self.exp_var = exp_var #желаемая дисперсия исходных данных self.N_ = 0 #число компонент, тербуемых для достижения заданной дисперсии # Находим соответствующее число компонент def fit(self, X, y=None): super().fit(X, y) self.N_ = len(X) cum_var = 0 for i, component in enumerate(self.components_): cum_var += self.explained_variance_ratio_[i] if cum_var>=self.exp_var: self.N_ = i + 1 break # возвращаем усеченный по числу компонент датасет def transform(self, X, y=None): U = X[:,:self.N_] return U # fit + transform в одном флаконе def fit_transform(self, X, y=None): self.fit(X) U = X[:, :self.N_] return U #Снова собираем конвейер pipeline = Pipeline([ ('scaler', StandardScaler()), ('pca', PCAExplainedVariance(exp_var=0.9)), ('svc', LinearSVC()) ]) #На этот раз запустим с GridSearchCV gcv_params = {'svc__C': [0.001, 0.01, 0.1, 1, 10] } gcv = GridSearchCV(pipeline, gcv_params, cv=3) gcv.fit(X_train, y_train) gcv.best_params_ ###Output _____no_output_____ ###Markdown Объект конвейера предоставляет доступ и к экземплярам составляющих его классов. Например, чтобы посмотреть, какое число компонент оставил наш новый PCA-эстиматор: ###Code gcv.best_estimator_.named_steps['pca'].N_ ###Output _____no_output_____ ###Markdown Feature UnionДавайте идти дальше и расширять диапазон применяемых средств. Для этого возьмем в качестве примера более сложный случай.В соревновании Catch me if you can (aka "Alice") на Kaggle, мы отдельно обрабатываем посещаемые пользователями сайты с помощью техники Bag of Words, и отдельно конструируем новые признаки из чего только можно. Затем объединяем частотную матрицу с матрицей признаков и применяем логистическую регрессию.Попробуем запрограммировать этот сценарий в конвейер. 
###Code #Загрузка и предобработка данных - код от @yorko import pandas as pd train_df = pd.read_csv(PATH_TO_DATA+"../Alice-comp/train_sessions.csv", index_col="session_id") #test_df = pd.read_csv(PATH_TO_DATA+"../Alice-comp/test_sessions.csv", index_col="session_id") # приведем колонки time1, ..., time10 к временному формату times = ['time%s' % i for i in range(1, 11)] train_df[times] = train_df[times].apply(pd.to_datetime).fillna(method='ffill', axis=1) #test_df[times] = test_df[times].apply(pd.to_datetime).fillna(method='ffill', axis=1) # отсортируем данные по времени train_df = train_df.sort_values(by='time1') sites = ['site%s' % i for i in range(1,11)] train_df[sites] = train_df[sites].fillna(0).astype('int') #test_df[sites] = test_df[sites].fillna(0).astype('int') #целевая переменая y_train = train_df['target'] train_df.drop('target', axis=1, inplace=True) train_df.head() ###Output _____no_output_____ ###Markdown Итак, у нас есть такой вот датафрейм и мы хотим: а) составить Bag Of Words из сайтов - код взят из ноутбука @yorko б) нагенерить признаки, связанные со временем, любезно подсказанные @yorko: year_month, start_hour, morning (последний признак - бинарный) Реализуем a), б) по отдельности как классы-трансформеры, а потом объединим результаты. ###Code from scipy.sparse import csr_matrix # Этот класс-трансформер возвращает разреженную матрицу сайтов # from sklearn.base import BaseEstimator, TransformerMixin class ColsToCountMatrix(BaseEstimator, TransformerMixin): #констуктор принимает и сохраняет название колонок для сливания в текст def __init__(self, columns=[]): self.columns=columns # fit() ничего не делает def fit(self, X, y = None): return self #преобразуем посещения сайтов в частотную матрицу def transform(self, X): # последовательность с индексами sites_flatten = X[self.columns].values.flatten() # искомая матрица sites_sparse = csr_matrix(([1] * sites_flatten.shape[0], sites_flatten, range(0, sites_flatten.shape[0] + 10, 10)))[:, 1:] return sites_sparse #Unit test sparse_matrix = ColsToCountMatrix(columns=sites).transform(train_df.head(3)) print(sparse_matrix.shape) print(sparse_matrix) # Этот класс-трансформер возвращает матрицу с новыми признаками # from sklearn.base import BaseEstimator, TransformerMixin class TimeToFeatures(BaseEstimator, TransformerMixin): # берем и сохраняем колонки, которые используем для приготовления новых признаков def __init__(self, columns=[]): self.columns = columns # бездельник опять def fit(self,X,y=None): return self # работяга def transform(self, X): # это колонка 'time1' начального датафрейма time1=self.columns[0] # создаем пустой датафрейм для новых признаков new_features = pd.DataFrame(index=X.index) # делаем новые признаки new_features['year_month'] = X[time1].apply(lambda ts: ts.year*100 + ts.month) new_features['start_hour'] = X[time1].apply(lambda ts: ts.hour) new_features['morning'] = new_features['start_hour'].apply(lambda sh: 1 if 4<sh<12 else 0) return new_features[['year_month','start_hour','morning']] #Unit test TimeToFeatures(columns=times).transform(train_df.head()).values ###Output _____no_output_____ ###Markdown Давайте теперь применим FeatureUnion. Конструктор класса FeatureUnion, как и конструктор Pipeline, принимает список кортежей (название, класс-трансформер), а его метод transform() просто объединяет колонки, получившиеся после применения метода transform() для каждого из составных классов. 
###Code from sklearn.pipeline import FeatureUnion fu=FeatureUnion([ ('cols_to_text', ColsToCountMatrix(columns=sites)), ('time_to_features', TimeToFeatures(columns=times)), ]) #используем todense() для наглядности full_matrix = fu.transform(train_df.head(3)).todense() print(full_matrix.shape) print(full_matrix) ###Output _____no_output_____ ###Markdown В итоге - было 951 колонка частотной матрицы, 3 колонки новых признаков, стало 954 колонки.Это еще не все! Раскроем возможность использовать FeatureUnion и Pipeline вместе.Сразу поразим читателя, добавив этапы преобразования полученных данных, а так же модель обучения на объединенных данных. ###Code from sklearn.feature_extraction.text import TfidfTransformer from sklearn.linear_model import LogisticRegressionCV from sklearn.pipeline import FeatureUnion, Pipeline from sklearn.preprocessing import StandardScaler logit_params={'scoring':'roc_auc','class_weight':'balanced', 'Cs':range(1,5),'n_jobs':3, 'random_state':17} #используем немного другой формат вызова FeatureUnion, #хотя веса для нашей модели не пригодятся, читатель будет знать о таких возможностях pipeline = Pipeline([ ('union', FeatureUnion( transformer_list=[ ('text', Pipeline([ ('cols_to_text', ColsToCountMatrix(columns=sites)), ('tfidf',TfidfTransformer()), ])), ('new_features', Pipeline([ ('time_to_features', TimeToFeatures(columns=times)), ('scaler', StandardScaler()), ])), ], transformer_weights={'text':1.0, 'features':1.0} )), ('logit',LogisticRegressionCV(**logit_params)) ]) pipeline.fit(train_df, y_train) # это подобранный перебором коэффициент регуляризации pipeline.named_steps['logit'].C_ #таблица метрики ROC_AUC для С=[1,2,3,4] и трех выборок кросс-валидации (cv=3) pipeline.named_steps['logit'].scores_ ###Output _____no_output_____ ###Markdown Заметки Что ж, неплохо получилось! Наша модель описывается 16-ю строками, после того как мы реализовали преобразования данных в классах-трансформерах. Давайте теперь разберем некоторые вопросы применения конвейеров и объединителей признаков.1. Для того, чтобы сделать предсказания обученной модели для тестовой выборки, вызовите метод pipeline.predict_proba(df_proba)2. Мы не можем (по крайней мере, с легкостью) в нашем конвейере сделать чаcтотную матрицу на объединенной тренировочной и тестовой выборках, как @yorko делал это на мастер-классе. Автор решил эту проблему таким образом. Вместо класса ColsToCountMatrix, который работает с колонками sites, используем класс ColsToText, определенный ниже. Он собирает сайты из всех колонок в "текст", который принимает библиотечный CountVectorizer. В констукторе этого класса читатель найдет не только опцию vocabulary для передачи словаря объединенной тренировочной и тестовой выборки, но и некоторые опции, которые имеет смысл попробовать для улучшения результатов модели.3. Если читатель решит применить другую модель обучения, например SGDClassifier, в котором не реализована кросс-валидация, то можно "обернуть" его в GridSearchCV: ('gcv', GridSearchCV(SGDClassifier(**sgd_params), gcv_sgd_params, **gcv_params)) Надеюсь, что читатель сможет теперь сам улучшать свою модель для соревнования - работать над признаками и подбирать параметры. В заключениеЧто можно посоветовать читателю в плане дальнейшего изучения предмета?1. Изучить официальную документацию: PipeLine, FeatureUnion, и разобрать статьи-примеры. 2. Взять на вооружение библиотеку mlextend Себастьяна Рашки. Там можно найти много интересных классов, не реализованных в стандартной библиотеке sklearn.3. 
Посмотреть sklearn-pandas - облегчение работы именно с датафреймами. Например, можно некоторые колонки преобразовать масштабированием, другие - по принципу one-hot-encoding.4. Стремиться создавать такие конвейеры, которые позволяют быстро проверять модели и признаки.Успехов! ###Code #это подготовительный этап трансформации данных, #перед тем как применим CountVectorizer from sklearn.base import BaseEstimator, TransformerMixin class ColsToText(BaseEstimator, TransformerMixin): #конструктор принимает и сохраняет название колонок для сливания в текст def __init__(self,columns=[]): self.columns = columns # fit() отдыхает - делать нечего def fit(self, X, y= None): return self # сливаем содержимое колонок в одну строку, кроме нулей def transform(self, X): return X[self.columns]\ .apply(lambda x: " ".join([str(a) for a in x.values if not a==0]), axis=1)\ .values.reshape(len(X),1) #заметьте - возвращаем numpy.ndarray # Unit test ColsToText(columns=sites).transform(train_df.head()) ###Output _____no_output_____
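###Markdown Following up on note 1 above: once the test sessions have been loaded and preprocessed the same way as `train_df` (the `test_df` loading is commented out earlier in this notebook), competition predictions can be produced directly from the fitted pipeline. A minimal sketch, assuming such a `test_df` exists: ###Code
# assumes test_df was read with index_col="session_id" and preprocessed like train_df
pred_proba = pipeline.predict_proba(test_df)[:, 1]   # probability of the positive class ("Alice")
submission = pd.DataFrame({'target': pred_proba}, index=test_df.index)
submission.to_csv('submission.csv', index_label='session_id')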
deep-learning/multi-frameworks/notebooks/Keras_CNTK_RNN.ipynb
###Markdown High-level RNN Keras (CNTK) Example ###Code import os import sys import numpy as np os.environ['KERAS_BACKEND'] = "cntk" import keras as K import cntk from keras.models import Sequential from keras.layers import Dense, Embedding, GRU, CuDNNGRU from common.params_lstm import * from common.utils import * # Force one-gpu os.environ["CUDA_VISIBLE_DEVICES"] = "0" print("OS: ", sys.platform) print("Python: ", sys.version) print("Keras: ", K.__version__) print("Numpy: ", np.__version__) print("CNTK: ", cntk.__version__) print(K.backend.backend()) print(K.backend.image_data_format()) print("GPU: ", get_gpu_name()) print(get_cuda_version()) print("CuDNN Version ", get_cudnn_version()) def create_symbol(CUDNN=True, maxf=MAXFEATURES, edim=EMBEDSIZE, nhid=NUMHIDDEN, maxl=MAXLEN): model = Sequential() model.add(Embedding(maxf, edim, input_length=maxl)) # Only return last output if not CUDNN: model.add(GRU(nhid, return_sequences=False, return_state=False)) else: model.add(CuDNNGRU(nhid, return_sequences=False, return_state=False)) model.add(Dense(2, activation='softmax')) return model def init_model(m, lr=LR, b1=BETA_1, b2=BETA_2, eps=EPS): m.compile( loss = "categorical_crossentropy", optimizer = K.optimizers.Adam(lr, b1, b2, eps), metrics = ['accuracy']) return m %%time # Data into format for library x_train, x_test, y_train, y_test = imdb_for_library(seq_len=MAXLEN, max_features=MAXFEATURES, one_hot=True) print(x_train.shape, x_test.shape, y_train.shape, y_test.shape) print(x_train.dtype, x_test.dtype, y_train.dtype, y_test.dtype) %%time # Load symbol # CuDNN RNNs are only available with the TensorFlow backend. sym = create_symbol(CUDNN=False) %%time # Initialise model model = init_model(sym) model.summary() %%time # Main training loop: 53s model.fit(x_train, y_train, batch_size=BATCHSIZE, epochs=EPOCHS, verbose=1) %%time # Main evaluation loop: 7s y_guess = model.predict(x_test, batch_size=BATCHSIZE) y_guess = np.argmax(y_guess, axis=-1) y_truth = np.argmax(y_test, axis=-1) print("Accuracy: ", sum(y_guess == y_truth)/len(y_guess)) ###Output Accuracy: 0.86076
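###Markdown The `imdb_for_library` helper above comes from the repo's `common.utils` module and is not shown here. As a rough approximation only (the real helper and the values of `MAXLEN`/`MAXFEATURES` live in `common/`; the defaults below are assumptions), an equivalent preparation with stock Keras utilities might look like this: ###Code
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical

def imdb_like_prep(seq_len=150, max_features=30000):
    # pad/truncate reviews to a fixed length and one-hot encode the labels,
    # mirroring (approximately) what imdb_for_library is expected to return
    (x_tr, y_tr), (x_te, y_te) = imdb.load_data(num_words=max_features)
    x_tr = pad_sequences(x_tr, maxlen=seq_len)
    x_te = pad_sequences(x_te, maxlen=seq_len)
    return x_tr, x_te, to_categorical(y_tr), to_categorical(y_te)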
Reason_For_Absence_to_Work_Clustering_and_Analysis.ipynb
###Markdown ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from apyori import apriori from sklearn import preprocessing from sklearn.metrics import confusion_matrix from sklearn.cluster import KMeans import sklearn from sklearn import metrics from scipy.spatial.distance import cdist from sklearn.cluster import KMeans from sklearn_extra.cluster import KMedoids !pip install scikit-learn-extra !pip install apyori data = pd.read_excel('Absenteeism_at_work.xls') data data.describe() data.info() data.isnull() data.isnull().sum() data.dropna() data = data.drop_duplicates() data data = data.drop(columns = ['ID']) data list(data.columns) ###Output _____no_output_____ ###Markdown DB-SCAN ###Code from sklearn.cluster import DBSCAN db=DBSCAN(eps=3,min_samples=4,metric='euclidean') x=data.iloc[:,[18,19]].values x=data.values model=db.fit(x) label=model.labels_ label from sklearn import metrics #identifying the points which makes up our core points sample_cores=np.zeros_like(label,dtype=bool) sample_cores[db.core_sample_indices_]=True #Calculating the number of clusters n_clusters=len(set(label))- (1 if -1 in label else 0) print('No of clusters:',n_clusters) y_means = db.fit_predict(x) y_means data y_means = db.fit_predict(x) plt.figure(figsize=(7,5)) plt.scatter(x[y_means == 0, 0], x[y_means == 0, 1], s = 50, c = 'yellow') plt.scatter(x[y_means == 1, 0], x[y_means == 1, 1], s = 50, c = 'cyan') plt.scatter(x[y_means == 2, 0], x[y_means == 2, 1], s = 50, c = 'magenta') plt.scatter(x[y_means == 3, 0], x[y_means == 3, 1], s = 50, c = 'orange') plt.scatter(x[y_means == 4, 0], x[y_means == 4, 1], s = 50, c = 'blue') plt.scatter(x[y_means == 5, 0], x[y_means == 5, 1], s = 50, c = 'red') plt.scatter(x[y_means == 6, 0], x[y_means == 6, 1], s = 50, c = 'black') plt.scatter(x[y_means == 7, 0], x[y_means == 7, 1], s = 50, c = 'violet') plt.title('Clusters of data') plt.show() ###Output _____no_output_____ ###Markdown K-Medoid ###Code temp = pd.DataFrame(data,columns=['Body mass index','Absenteeism time in hours']) temp cobj = KMedoids(n_clusters=8).fit(data) labels = cobj.labels_ unique_labels = set(labels) colors = [ plt.cm.Spectral(each) for each in np.linspace(0, 1, len(unique_labels)) ] plt.figure(figsize=(20,20)) for k, col in zip(unique_labels, colors): class_member_mask = labels == k xy = data[class_member_mask] plt.plot( xy.iloc[:, 0], xy.iloc[:, 1], "o", markerfacecolor=tuple(col), markeredgecolor="g", markersize=6, ) plt.plot( cobj.cluster_centers_[:, 0], cobj.cluster_centers_[:, 1], "o", markerfacecolor="blue", markeredgecolor="b", markersize=10, ) plt.title("KMedoids clustering. 
The Medoids have been represented in blue.") !pip install -U scikit-learn scipy matplotlib ###Output _____no_output_____ ###Markdown Silhoutte Score ###Code sklearn.metrics.silhouette_score(data, labels,metric='euclidean', sample_size=None, random_state=None) ###Output _____no_output_____ ###Markdown Davis-Boudin Score ###Code sklearn.metrics.davies_bouldin_score(data, labels) from factor_analyzer import FactorAnalyzer !pip install factor_analyzer from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity chi_square_value,p_value=calculate_bartlett_sphericity(data) chi_square_value, p_value from factor_analyzer.factor_analyzer import calculate_kmo kmo_all,kmo_model=calculate_kmo(data) fa = FactorAnalyzer() fa.fit(data) eigen_values, vectors = fa.get_eigenvalues() eigen_values vectors fa.get_factor_variance() ###Output _____no_output_____ ###Markdown Analysis of effect of 5 Attributes with Reason for Absence ###Code sns.scatterplot(data['Hit target'], y=data['Reason for absence'], data=data); sns.scatterplot(data['Transportation expense'], y=data['Reason for absence'], data=data); sns.scatterplot(data['Service time'], y=data['Reason for absence'], data=data); sns.scatterplot(data['Body mass index'], y=data['Reason for absence'], data=data); sns.scatterplot(data['Distance from Residence to Work'], y=data['Reason for absence'], data=data); ###Output /usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. FutureWarning ###Markdown Correlation Of Distance from Residence to Work to Reason for absence ###Code pd.DataFrame(data['Distance from Residence to Work']).corrwith(data['Reason for absence']) ###Output _____no_output_____ ###Markdown Correlation Of Body mass index to Reason for absence ###Code pd.DataFrame(data['Body mass index']).corrwith(data['Reason for absence']) ###Output _____no_output_____ ###Markdown Correlation Of Service time to Reason for absence ###Code pd.DataFrame(data['Service time']).corrwith(data['Reason for absence']) ###Output _____no_output_____ ###Markdown Correlation Of Transportation expense to Reason for absence ###Code pd.DataFrame(data['Transportation expense']).corrwith(data['Reason for absence']) ###Output _____no_output_____ ###Markdown Correlation Of Hit target to Reason for absence ###Code pd.DataFrame(data['Hit target']).corrwith(data['Reason for absence']) x =data[data.columns] fa = FactorAnalyzer() fa.fit(x, 10) #Get Eigen values and plot them ev, v = fa.get_eigenvalues() ev plt.plot(range(1,x.shape[1]+1),ev) ###Output _____no_output_____ ###Markdown To figure out how many factors we would need, we can look at eigenvalues, which is a measure of how much of the variance of the variables does a factor explain. An eigenvalue of more than one means that the factor explains more variance than a unique variable. We will only use 4 factors here, given the big dropoff in eigenvalue after the 4th factor. ###Code fa = FactorAnalyzer(4, rotation='varimax') fa.fit(x) loads = fa.loadings_ print(loads) ###Output _____no_output_____
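###Markdown The silhouette and Davies-Bouldin scores above evaluate a single choice of `n_clusters=8` for K-Medoids. One way to sanity-check that choice (a small sketch along the same lines, not tuned) is to sweep over k and compare silhouette scores on the same feature matrix: ###Code
from sklearn_extra.cluster import KMedoids
from sklearn.metrics import silhouette_score

# compare silhouette scores for a range of cluster counts
for k in range(2, 11):
    labels_k = KMedoids(n_clusters=k, random_state=0).fit_predict(data)
    print(k, round(silhouette_score(data, labels_k, metric='euclidean'), 3))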
chap05/textbook-chap-5-3.ipynb
###Markdown 5. Deep Learning for Computer Vision Using a Pretrained Convnet ###Code import os from tensorflow import keras import pandas as pd import numpy as np import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown A highly effective approach to deep learning on small image datasets is to use a pretrained network. A pretrained network is a saved network that was previously trained on a large dataset, typically on a large-scale image classification task. If this original dataset is large enough and general enough, then the spatial hierarchy of features learned by the pretrained network can effectively act as a generic model of the visual world, and hence its features can prove useful for many different computer vision problems, even though these new problems may involve completely different classes than those of the original task. We will use the VGG16 architecture, which is a simple and widely used convnet architecture for ImageNet. Although it is an older model, far from the current state of the art and somewhat heavier than many other recent models, its architecture is similar to what we have learnt before and is easy to understand. To use a pretrained network there are two ways: feature extraction and fine-tuning. Feature Extraction Feature extraction consists of using the representations learned by a previous network to extract interesting features from new samples. These features are then run through a new classifier, which is trained from scratch. Convnets used for image classification have two parts: a series of pooling and convolution layers (the convolutional base), and a densely connected classifier. In the case of convnets, feature extraction consists of taking the convolutional base of a previously trained network, running the new data through it, and training a new classifier on top of the output. Generally, we reuse the convolutional base, as the representations learned by the convolutional base are likely to be more generic and therefore more reusable. The feature maps of a convnet are presence maps of generic concepts over a picture, which are likely to be useful regardless of the computer vision problem at hand. In contrast, the representations learned by the classifier will necessarily be specific to the set of classes on which the model was trained - they will only contain information about whether this or that class exists in the entire picture. For this example, we will use the convolutional base of the VGG16 network, trained on ImageNet, to extract interesting features from cat and dog images, and then train a dog-vs-cat classifier on top of these features.
###Code # Instantiate Model conv_base = keras.applications.VGG16( # specifies the weight checkpoint from which to initialise the model weights='imagenet', # refers to use/not use the densly connected classifier (which we are not) include_top=False, # shape of the image tensors to be fed into the network input_shape=(150,150,3) ) ###Output _____no_output_____ ###Markdown You can see that the architecture of the VGG16 convolutional base is similar to the convnets we have seen before: ###Code # For testing conv_base.summary() ###Output Model: "vgg16" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 150, 150, 3)] 0 _________________________________________________________________ block1_conv1 (Conv2D) (None, 150, 150, 64) 1792 _________________________________________________________________ block1_conv2 (Conv2D) (None, 150, 150, 64) 36928 _________________________________________________________________ block1_pool (MaxPooling2D) (None, 75, 75, 64) 0 _________________________________________________________________ block2_conv1 (Conv2D) (None, 75, 75, 128) 73856 _________________________________________________________________ block2_conv2 (Conv2D) (None, 75, 75, 128) 147584 _________________________________________________________________ block2_pool (MaxPooling2D) (None, 37, 37, 128) 0 _________________________________________________________________ block3_conv1 (Conv2D) (None, 37, 37, 256) 295168 _________________________________________________________________ block3_conv2 (Conv2D) (None, 37, 37, 256) 590080 _________________________________________________________________ block3_conv3 (Conv2D) (None, 37, 37, 256) 590080 _________________________________________________________________ block3_pool (MaxPooling2D) (None, 18, 18, 256) 0 _________________________________________________________________ block4_conv1 (Conv2D) (None, 18, 18, 512) 1180160 _________________________________________________________________ block4_conv2 (Conv2D) (None, 18, 18, 512) 2359808 _________________________________________________________________ block4_conv3 (Conv2D) (None, 18, 18, 512) 2359808 _________________________________________________________________ block4_pool (MaxPooling2D) (None, 9, 9, 512) 0 _________________________________________________________________ block5_conv1 (Conv2D) (None, 9, 9, 512) 2359808 _________________________________________________________________ block5_conv2 (Conv2D) (None, 9, 9, 512) 2359808 _________________________________________________________________ block5_conv3 (Conv2D) (None, 9, 9, 512) 2359808 _________________________________________________________________ block5_pool (MaxPooling2D) (None, 4, 4, 512) 0 ================================================================= Total params: 14,714,688 Trainable params: 14,714,688 Non-trainable params: 0 _________________________________________________________________ ###Markdown From here, there are two ways to proceed:1. Run the convolutional base over the dataset, recording the output to a Numpy array, then use this data to input to a standalone, densly connected classifier.2. 
Extend the `conv_base` by adding `Dense` layers on top, and running the whole thing end to end on the input data.Method 1 ###Code # Preprocessing HOME_DIR = os.path.dirname(os.path.abspath('__FILE__')) WORKSPACE_DIR = os.path.join(HOME_DIR, 'workspace') train_dir = os.path.join(WORKSPACE_DIR, 'train') validation_dir = os.path.join(WORKSPACE_DIR, 'validation') test_dir = os.path.join(WORKSPACE_DIR, 'test') ###Output Found 2000 images belonging to 2 classes. Found 1000 images belonging to 2 classes. ###Markdown Here we load the datasets onto Numpy arrays. ###Code def extract_features(directory, sample_count): BATCH_SIZE = 20 TARGET_SIZE = (150, 150) CLASS_MODE = 'binary' features = np.zeros(shape=(sample_count, 4,4,512)) labels = np.zeros(shape=(sample_count)) datagen = keras.preprocessing.image.ImageDataGenerator(rescale=1./255) generator = datagen.flow_from_directory(directory, target_size=TARGET_SIZE, batch_size=BATCH_SIZE, class_mode=CLASS_MODE) i = 0 for inputs_batch, labels_batch in generator: features_batch = conv_base.predict(inputs_batch) features[i*BATCH_SIZE:(i+1)*BATCH_SIZE] = features_batch labels[i*BATCH_SIZE:(i+1)*BATCH_SIZE] = labels_batch i+=1 if i * BATCH_SIZE >= sample_count: break return features, labels import warnings warnings.filterwarnings("ignore") train_features, train_labels = extract_features(train_dir, 2000) train_features = np.reshape(train_features, (2000, 4*4*512)) val_features, val_labels = extract_features(train_dir, 1000) val_features = np.reshape(val_features, (1000, 4*4*512)) ###Output Found 2000 images belonging to 2 classes. ###Markdown Now that the data images have been transformed, we can use it to train the densely connected layer model directly. ###Code model21a = keras.models.Sequential() model21a.add(keras.layers.Dense(256, activation='relu', input_dim=4*4*512)) model21a.add(keras.layers.Dropout(0.5)) model21a.add(keras.layers.Dense(1, activation='sigmoid')) print(model21a.summary()) model21a.compile(optimizer=keras.optimizers.RMSprop(learning_rate=2e-5), loss='binary_crossentropy', metrics=['accuracy']) history21a = model21a.fit(train_features, train_labels, epochs=50, batch_size=20, validation_data=(val_features, val_labels), verbose=0) import warnings warnings.filterwarnings("ignore") metrics_df = pd.DataFrame(history21a.history) metrics_df['epoch'] = metrics_df.index+1 display(metrics_df.tail()) fig = plt.figure(figsize=(17,10)) ax1, ax2 = fig.add_subplot(2,1,1), fig.add_subplot(2,1,2) metrics_df.plot(kind='scatter', x='epoch', y='loss', ax=ax1, label='train', color='blue') metrics_df.plot(kind='line', x='epoch', y='val_loss', ax=ax1, label='validation', color='red',) ax1.set_ylabel("Loss") ax1.grid('GAINSBORO') ax1.legend([]) ax1.set_ylim(0,1.1) ax1.set_xticks(range(0,51)) metrics_df.plot(kind='scatter', x='epoch', y='accuracy', ax=ax2, label='train', color='blue') metrics_df.plot(kind='line', x='epoch', y='val_accuracy', ax=ax2, label='validation', color='red',) ax2.set_ylabel("Accuracy") ax2.grid('GAINSBORO') ax2.set_xticks(range(0,51)) ax2.legend([]) ax2.set_ylim(0,1.1) plt.show() ###Output _____no_output_____
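###Markdown The cells above work through Method 1. For completeness, a minimal sketch of Method 2 — extending `conv_base` with a densely connected classifier and freezing the convolutional base so its ImageNet weights are not updated — could look as follows (the `model21b` name is ours, and the image generators that would feed `fit` are not shown here): ###Code
model21b = keras.models.Sequential()
model21b.add(conv_base)
model21b.add(keras.layers.Flatten())
model21b.add(keras.layers.Dense(256, activation='relu'))
model21b.add(keras.layers.Dense(1, activation='sigmoid'))

# freeze the convolutional base before compiling so only the new Dense layers are trained
conv_base.trainable = False

model21b.compile(optimizer=keras.optimizers.RMSprop(learning_rate=2e-5),
                 loss='binary_crossentropy',
                 metrics=['accuracy'])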
doc/src/statphys/.ipynb_checkpoints/statphys-checkpoint.ipynb
###Markdown Statistical physics **Morten Hjorth-Jensen Email [email protected]**, Department of Physics and Center of Mathematics for Applications, University of Oslo and National Superconducting Cyclotron Laboratory, Michigan State UniversityDate: **Fall 2015** EnsemblesIn statistical physics the concept of an ensemble is one of thecornerstones in the definition of thermodynamical quantities. Anensemble is a collection of microphysics systems from which we deriveexpectations values and thermodynamical properties related toexperiment. As an example, the specific heat (which is a measurablequantity in the laboratory) of a system of infinitely many particles,can be derived from the basic interactions between the microscopicconstituents. The latter can span from electrons to atoms andmolecules or a system of classical spins. All these microscopicconstituents interact via a well-defined interaction. We saytherefore that statistical physics bridges the gap between themicroscopic world and the macroscopic world. Thermodynamicalquantities such as the specific heat or net magnetization of a systemcan all be derived from a microscopic theory. Famous EnsemblesThe table lists the most used ensembles in statistical physicstogether with frequently arising extensive (depend on the size of thesystems such as the number of particles) and intensive variables(apply to all components of a system), in addition to associatedpotentials. Microcanonical Canonical Grand canonical Pressure canonical Exchange of heat no yes yes yes with the environment Exchange of particles no no yes no with the environemt Thermodynamical $V, \cal M, \cal D$ $V, \cal M, \cal D$ $V, \cal M, \cal D$ $P, \cal H, \cal E$ parameters $E$ $T$ $T$ $T$ $N$ $N$ $\mu$ $N$ Potential Entropy Helmholtz $PV$ Gibbs Energy Internal Internal Internal Enthalpy Canonical EnsembleOne of the most used ensembles is the canonical one, which is related to the microcanonical ensemblevia a Legendre transformation. The temperature is an intensive variable in this ensemble whereas the energyfollows as an expectation value. In order to calculate expectation values such as the mean energy $\langle E \rangle $at a given temperature, we need a probability distribution.It is given by the Boltzmann distribution $$P_i(\beta) = \frac{e^{-\beta E_i}}{Z}$$ with $\beta=1/k_BT$ being the inverse temperature, $k_B$ is the Boltzmann constant, $E_i$ is the energy of a microstate $i$ while $Z$ is the partition function for the canonical ensembledefined as The partition function is a normalization constantIn the canonical ensemble the partition function is $$Z=\sum_{i=1}^{M}e^{-\beta E_i},$$ where the sum extends over all microstates $M$. Helmoltz free energy, what does it mean?The potential of interest in this case is Helmholtz' free energy. Itrelates the expectation value of the energy at a given temperatur $T$to the entropy at the same temperature via $$F=-k_{B}TlnZ=\langle E \rangle-TS.$$ Helmholtz' free energy expresses thestruggle between two important principles in physics, namely thestrive towards an energy minimum and the drive towards higher entropyas the temperature increases. A higher entropy may be interpreted as alarger degree of disorder. When equilibrium is reached at a giventemperature, we have a balance between these two principles. Thenumerical expression is Helmholtz' free energy. 
Thermodynamical quantitiesIn the canonical ensemble the entropy is given by $$S =k_{B}lnZ+k_{B}T\left(\frac{\partial lnZ}{\partial T}\right)_{N, V},$$ and the pressure by $$p=k_{B}T\left(\frac{\partial lnZ}{\partial V}\right)_{N, T}.$$ Similarly we can compute the chemical potential as $$\mu =-k_{B}T\left(\frac{\partial lnZ}{\partial N}\right)_{V, T}.$$ Thermodynamical quantities, the energy in the canonical ensembleFor a system described by the canonical ensemble, the energy is anexpectation value since we allow energy to be exchanged with the surroundings(a heat bath with temperature $T$). This expectation value, the mean energy,can be calculated using $$\langle E\rangle =k_{B}T^{2}\left(\frac{\partial lnZ}{\partial T}\right)_{V, N}$$ or using the probability distribution$P_i$ as $$\langle E \rangle = \sum_{i=1}^M E_i P_i(\beta)= \frac{1}{Z}\sum_{i=1}^M E_ie^{-\beta E_i}.$$ Energy and specific heat in the canonical ensembleThe energy is proportional to the first derivative of the potential,Helmholtz' free energy. The corresponding variance is defined as $$\sigma_E^2=\langle E^2 \rangle-\langle E \rangle^2= \frac{1}{Z}\sum_{i=1}^M E_i^2e^{-\beta E_i}- \left(\frac{1}{Z}\sum_{i=1}^M E_ie^{-\beta E_i}\right)^2.$$ If we divide the latter quantity with$kT^2$ we obtain the specific heat at constant volume $$C_V= \frac{1}{k_BT^2}\left(\langle E^2 \rangle-\langle E \rangle^2\right),$$ which again can be related to the second derivative of Helmholtz' free energy. Magnetic moments and susceptibility in the canonical ensembleUsing the same prescription, we can also evaluate the mean magnetizationthrough $$\langle {\cal M} \rangle = \sum_i^M {\cal M}_i P_i(\beta)= \frac{1}{Z}\sum_i^M {\cal M}_ie^{-\beta E_i},$$ and the corresponding variance $$\sigma_{{\cal M}}^2=\langle {\cal M}^2 \rangle-\langle {\cal M} \rangle^2= \frac{1}{Z}\sum_{i=1}^M {\cal M}_i^2e^{-\beta E_i}- \left(\frac{1}{Z}\sum_{i=1}^M {\cal M}_ie^{-\beta E_i}\right)^2.$$ This quantity defines also the susceptibility $\chi$ $$\chi=\frac{1}{k_BT}\left(\langle {\cal M}^2 \rangle-\langle {\cal M} \rangle^2\right).$$ Our model, the Ising model in one and two dimensionsThe model we will employ in our studies of phase transitions at finite temperature for magnetic systems is the so-called Ising model. In its simplest formthe energy is expressed as $$E=-J\sum_{}^{N}s_ks_l-{\cal B}\sum_k^Ns_k,$$ with $s_k=\pm 1$, $N$ is the total number of spins, $J$ is a coupling constant expressing the strength of the interactionbetween neighboring spins and ${\cal B}$ is an external magnetic field interacting with the magneticmoment set up by the spins.The symbol $$ indicates that we sum over nearestneighbors only. Notice that for $J>0$ it is energetically favorable for neighboring spins to be aligned. This feature leads to, at low enough temperatures,a cooperative phenomenon called spontaneous magnetization. That is, through interactions between nearest neighbors, a given magneticmoment can influence the alignment of spins that are separated from the given spin by a macroscopic distance. These long range correlationsbetween spins are associated with a long-range order in whichthe lattice has a net magnetization in the absence of a magnetic field. 
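As a small numerical aside (not part of the original notes), the expressions above for the mean energy and the specific heat translate directly into a few lines of Python once the microstate energies are tabulated. Here units are chosen so that $k_B=1$, and the example energies are the four microstates of the two-spin chain with periodic boundary conditions derived below ($E=\mp 2J$ with $J=1$).

import numpy as np

def canonical_averages(energies, beta):
    # Boltzmann weights, shifted by the lowest energy for numerical stability
    energies = np.asarray(energies, dtype=float)
    p = np.exp(-beta*(energies - energies.min()))
    p /= p.sum()
    mean_e = np.sum(p*energies)
    mean_e2 = np.sum(p*energies**2)
    c_v = beta**2*(mean_e2 - mean_e**2)   # C_V = (<E^2> - <E>^2)/(k_B T^2) with k_B = 1
    return mean_e, c_v

# the four microstates of the two-spin chain with periodic boundary conditions (J = 1)
print(canonical_averages([-2.0, 2.0, 2.0, -2.0], beta=1.0))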
Our model, the Ising model in one and two dimensionsIn order to calculate expectation values such as the mean energy$\langle E \rangle $ ormagnetization $\langle {\cal M} \rangle $in statistical physicsat a given temperature, we need a probability distribution $$P_i(\beta) = \frac{e^{-\beta E_i}}{Z}$$ with $\beta=1/kT$ being the inverse temperature, $k$ the Boltzmann constant, $E_i$ is the energy of a state $i$ while $Z$ is the partition function for the canonical ensembledefined as $$Z=\sum_{i=1}^{M}e^{-\beta E_i},$$ where the sum extends over all microstates$M$. $P_i$ expresses the probability of finding the system in a given configuration $i$. Our model, the Ising model in one and two dimensionsThe energy for a specific configuration $i$is given by $$E_i =-J\sum_{}^{N}s_ks_l.$$ Our model, the Ising model in one and two dimensionsTo better understand what is meant with a configuration, consider first the case of the one-dimensional Ising modelwith ${\cal B}=0$. In general, a given configuration of $N$ spins in onedimension may look like $$\begin{array}{cccccccccc}\uparrow&\uparrow&\uparrow&\dots&\uparrow&\downarrow&\uparrow&\dots&\uparrow&\downarrow\\1&2&3&\dots& i-1&i&i+1&\dots&N-1&N\end{array}$$ In order to illustrate these features let us further specialize tojust two spins.With two spins, since each spin takes two values only,we have $2^2=4$ possible arrangements of the two spins. These four possibilities are $$1= \uparrow\uparrow\hspace{1cm} 2= \uparrow\downarrow\hspace{1cm} 3= \downarrow\uparrow\hspace{1cm} 4=\downarrow\downarrow$$ Our model, the Ising model in one and two dimensionsWhat is the energy of each of these configurations? For small systems, the way we treat the ends matters. Two cases areoften used.In the first case we employ what is called free ends. This means that there is no contribution from points to the right or left of the endpoints. For the one-dimensional case, the energy is then written as a sum over a single index $$E_i =-J\sum_{j=1}^{N-1}s_js_{j+1},$$ Our model, the Ising model in one and two dimensionsIf we label the first spin as $s_1$ and the second as $s_2$ we obtain the following expression for the energy $$E=-Js_1s_2.$$ The calculation of the energy for the one-dimensional latticewith free ends for one specific spin-configuration can easily be implemented in the following lines for ( j=1; j < N; j++) { energy += spin[j]*spin[j+1]; } where the vector $spin[]$ contains the spin value $s_k=\pm 1$. Our model, the Ising model in one and two dimensionsFor the specific state $E_1$, we have chosen all spins up. The energy ofthis configuration becomes then $$E_1=E_{\uparrow\uparrow}=-J.$$ The other configurations give $$E_2=E_{\uparrow\downarrow}=+J,$$ $$E_3=E_{\downarrow\uparrow}=+J,$$ and $$E_4=E_{\downarrow\downarrow}=-J.$$ Our model, the Ising model in one and two dimensionsWe can also choose so-called periodic boundary conditions. This meansthat the neighbour to the right of $s_N$ is assumed to take the valueof $s_1$. Similarly, the neighbour to the left of $s_1$ takes thevalue $s_N$. 
In this case the energy for the one-dimensional latticereads $$E_i =-J\sum_{j=1}^{N}s_js_{j+1},$$ and we obtain the following expression for thetwo-spin case $$E=-J(s_1s_2+s_2s_1).$$ Our model, the Ising model in one and two dimensionsIn this case the energy for $E_1$ is different, we obtain namely $$E_1=E_{\uparrow\uparrow}=-2J.$$ The other cases do also differ and we have $$E_2=E_{\uparrow\downarrow}=+2J,$$ $$E_3=E_{\downarrow\uparrow}=+2J,$$ and $$E_4=E_{\downarrow\downarrow}=-2J.$$ Our model, the Ising model in one and two dimensionsIf we choose to use periodic boundary conditions we can code the aboveexpression as jm=N; for ( j=1; j <=N ; j++) { energy += spin[j]*spin[jm]; jm = j ; } The magnetization is however the same, defined as $${\cal M}_i=\sum_{j=1}^N s_j,$$ where we sum over all spins for a given configuration $i$. Our model, the Ising model in one and two dimensionsThe table lists the energy and magnetization for both free endsand periodic boundary conditions. State Energy (FE) Energy (PBC) Magnetization $1= \uparrow\uparrow$ $-J$ $-2J$ 2 $2=\uparrow\downarrow$ $J$ $2J$ 0 $ 3=\downarrow\uparrow$ $J$ $2J$ 0 $ 4=\downarrow\downarrow$ $-J$ $-2J$ -2 Our model, the Ising model in one and two dimensionsWe can reorganize according to the number of spins pointing up, as shown in the table hereNumber spins up Degeneracy Energy (FE) Energy (PBC) Magnetization 2 1 $-J$ $-2J$ 2 1 2 $J$ $2J$ 0 0 1 $-J$ $-2J$ -2 Our model, the Ising model in one and two dimensionsIt is worth noting that for small dimensions of the lattice,the energy differs depending on whether we useperiodic boundary conditions or free ends. This means alsothat the partition functions will be different, as discussedbelow. In the thermodynamic limit we have $N\rightarrow \infty$,and the final results do not depend on the kind of boundary conditionswe choose. For a one-dimensional lattice with periodic boundary conditions, each spin sees two neighbors. For atwo-dimensional lattice each spin sees four neighboring spins. How many neighbors does a spin see in three dimensions? Our model, the Ising model in one and two dimensionsIn a similar way, we could enumerate the number of states fora two-dimensional system consisting of two spins, i.e., a $2\times 2$ Ising model on a square lattice with {\em periodicboundary conditions}. In this case we have a total of $2^4=16$ states. Some examples of configurations with their respective energies are listed here $$E=-8J\hspace{1cm}\begin{array}{cc}\uparrow & \uparrow \\ \uparrow & \uparrow\end{array}\hspace{0.5cm} E=0\hspace{1cm}\begin{array}{cc}\uparrow & \uparrow \\ \uparrow & \downarrow\end{array}\hspace{0.5cm} E=0\hspace{1cm}\begin{array}{cc}\downarrow & \downarrow \\ \uparrow & \downarrow\end{array}\hspace{0.5cm} E=-8J\hspace{1cm}\begin{array}{cc}\downarrow & \downarrow \\ \downarrow & \downarrow\end{array}$$ Our model, the Ising model in one and two dimensionsIn the table here we group these configurationsaccording to their total energy and magnetization.Number spins up Degeneracy Energy Magnetization 4 1 $-8J$ 4 3 4 $0$ 2 2 4 $0$ 0 2 2 $8J$ 0 1 4 $0$ -2 0 1 $-8J$ -4 Phase Transitions and Critical PhenomenaA phase transition is marked by abrupt macroscopic changes as externalparameters are changed, such as an increase of temperature. The pointwhere a phase transition takes place is called a critical point.We distinguish normally between two types of phase transitions;first-order transitions and second-order transitions. 
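As a brief aside, the degeneracy table for the $2\times 2$ lattice above is easily verified by brute force. The following Python snippet (added for illustration, with J=1 assumed) enumerates all $2^4=16$ states with periodic boundary conditions; summing the right and down neighbour of every site counts each bond twice, which reproduces the $-8J$ to $+8J$ energies quoted in the table.

from itertools import product

def energy_2x2(s, J=1.0):
    # s holds the four spins of a 2x2 lattice with periodic boundaries
    grid = [[s[0], s[1]], [s[2], s[3]]]
    E = 0.0
    for y in range(2):
        for x in range(2):
            E -= J * grid[y][x] * (grid[y][(x + 1) % 2] + grid[(y + 1) % 2][x])
    return E

degeneracy = {}
for s in product([-1, 1], repeat=4):
    key = (sum(si == 1 for si in s), energy_2x2(s), sum(s))
    degeneracy[key] = degeneracy.get(key, 0) + 1

for (n_up, E, M), deg in sorted(degeneracy.items(), reverse=True):
    print(f"{n_up} spins up: degeneracy {deg}, E = {E:+.0f}, M = {M:+d}")

Running it reproduces the six rows of the table above.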
An importantquantity in studies of phase transitions is the so-called correlationlength $\xi$ and various correlations functions like spin-spincorrelations. For the Ising model we shall show below that thecorrelation length is related to the spin-correlation function, whichagain defines the magnetic susceptibility. The spin-correlationfunction is nothing but the covariance and expresses the degree ofcorrelation between spins. Phase Transitions and Critical PhenomenaThe correlation length defines the length scale at which the overallproperties of a material start to differ from its bulk properties. Itis the distance over which the fluctuations of the microscopic degreesof freedom (for example the position of atoms) are significantlycorrelated with each other. Usually it is of the order of fewinteratomic spacings for a solid. The correlation length $\xi$depends however on external conditions such as pressure andtemperature. Phase Transitions and Critical PhenomenaFirst order/discontinuous phase transitions are characterized by two or morestates on either side of the critical point that can coexist at thecritical point. As we pass through the critical point we observe adiscontinuous behavior of thermodynamical functions. The correlationlength is normally finite at the critical point. Phenomena such ashysteris occur, viz. there is a continuation of state below thecritical point into one above the critical point. This continuation ismetastable so that the system may take a macroscopically long time toreadjust. A classical example is the melting of ice. It takes aspecific amount of time before all the ice has melted. The temperatureremains constant and water and ice can coexist for a macroscopictime. The energy shows a discontinuity at the critical point,reflecting the fact that a certain amount of heat is needed in orderto melt all the ice Phase Transitions and Critical PhenomenaSecond order or continuous transitions are different and in generalmuch difficult to understand and model. The correlation lengthdiverges at the critical point, fluctuations are correlated over alldistance scales, which forces the system to be in a unique criticalphase. The two phases on either side of the critical point becomeidentical. The disappearance of a spontaneous magnetization is aclassical example of a second-order phase transitions. Structuraltransitions in solids are other types of second-order phasetransitions. Strong correlations make a perturbative treatmentimpossible. From a theoretical point of view, the way out isrenormalization group theory. The table lists some typical systemwith their pertinent order parameters. Phase Transitions and Critical Phenomena System Transition Order Parameter Liquid-gas Condensation/evaporation Density difference $\Delta\rho=\rho_{liquid}-\rho_{gas}$ Binary liquid mixture/Unmixing Composition difference Quantum liquid Normal fluid/superfluid $$, $\psi$ = wavefunction Liquid-solid Melting/crystallisation Reciprocal lattice vector Magnetic solid Ferromagnetic Spontaneous magnetisation $M$ Antiferromagnetic Sublattice magnetisation $M$ Dielectric solid Ferroelectric Polarization $P$ Antiferroelectric Sublattice polarisation $P$ Phase Transitions and Critical PhenomenaUsing Ehrenfest's definition of the order of a phase transition we canrelate the behavior around the critical point to various derivativesof the thermodynamical potential. 
In the canonical ensemble we are using, the thermodynamical potential is Helmholtz' free energy $$F= \langle E\rangle -TS = -kTln Z$$ meaning $ lnZ = -F/kT = -F\beta$. The energy is given as the first derivative of $F$ $$\langle E \rangle=-\frac{\partial lnZ}{\partial \beta} =\frac{\partial (\beta F)}{\partial \beta},$$ and the specific heat is defined via the second derivative of $F$ $$C_V=-\frac{1}{kT^2}\frac{\partial^2 (\beta F)}{\partial\beta^2}.$$ Phase Transitions and Critical Phenomena We can relate observables to various derivatives of the partition function and the free energy. When a given derivative of the free energy or the partition function is discontinuous or diverges (logarithmic divergence for the heat capacity from the Ising model) we talk of a phase transition of the order of that derivative. A first-order phase transition is recognized in a discontinuity of the energy, or the first derivative of $F$. The Ising model exhibits a second-order phase transition since the heat capacity diverges. The susceptibility is given by the second derivative of $F$ with respect to the external magnetic field. Both these quantities diverge. The Ising Model and Phase Transitions The Ising model in two dimensions with ${\cal B} = 0$ undergoes a phase transition of second order. What it actually means is that below a given critical temperature $T_C$, the Ising model exhibits a spontaneous magnetization with $\langle {\cal M} \rangle\ne 0$. Above $T_C$ the average magnetization is zero. The mean magnetization approaches zero at $T_C$ with an infinite slope. Such a behavior is an example of what are called critical phenomena. A critical phenomenon is normally marked by one or more thermodynamical variables which vanish above a critical point. In our case this is the mean magnetization $\langle {\cal M} \rangle$. Such a parameter is normally called the order parameter. The Ising Model and Phase Transitions Critical phenomena have been extensively studied in physics. One major reason is that we still do not have a satisfactory understanding of the properties of a system close to a critical point. Even for the simplest three-dimensional systems we cannot predict exactly the values of various thermodynamical variables. Simplified theoretical approaches like the mean-field models discussed below can even predict the wrong physics. Mean-field theory results in a second-order phase transition for the one-dimensional Ising model, whereas we saw in the previous section that the exact solution of the one-dimensional Ising model shows no spontaneous magnetization at any finite temperature. The physical reason for this can be understood from the following simple consideration. Assume that the ground state for an $N$-spin system in one dimension is the configuration in which all spins point in the same direction. The Ising Model and Phase Transitions It is possible to show that the mean magnetization is given by (for temperature below $T_C$) $$\langle {\cal M}(T) \rangle \sim \left(T_C-T\right)^{\beta},$$ where $\beta=1/8$ is a so-called critical exponent. A similar relation applies to the heat capacity $$C_V(T) \sim \left|T_C-T\right|^{-\alpha},$$ and the susceptibility $$\chi(T) \sim \left|T_C-T\right|^{-\gamma},$$ with $\alpha = 0$ and $\gamma = 7/4$. The Ising Model and Phase Transitions Another important quantity is the correlation length, which is expected to be of the order of the lattice spacing when $T$ is well away from $T_C$. Because the spins become more and more correlated as $T$ approaches $T_C$, the correlation length increases as we get closer to the critical temperature.
The divergent behavior of the correlation length $\xi$ near $T_C$ is $$\begin{equation} \xi(T) \sim \left|T_C-T\right|^{-\nu}.\label{eq:xi} \tag{1}\end{equation}$$ The Ising Model and Phase Transitions A second-order phase transition is characterized by a correlation length which spans the whole system. Away from the critical point the correlation length is typically only of the order of some few interatomic distances. The fact that a system like the Ising model, whose energy is described by the interaction between neighboring spins only, can yield correlation lengths of macroscopic size at a critical point is still a feature which is not properly understood. Stated differently, how can the spins propagate their correlations so extensively when we approach the critical point, in particular since the interaction acts only between nearest spins? Below we will compute the correlation length via the spin-spin correlation function for the one-dimensional Ising model. The Ising Model and Phase Transitions In our actual calculations of the two-dimensional Ising model, we are however always limited to a finite lattice and $\xi$ will be proportional to the size of the lattice at the critical point. Through finite size scaling relations it is possible to relate the behavior at finite lattices with the results for an infinitely large lattice. The critical temperature scales then as $$\begin{equation} T_C(L)-T_C(L=\infty) \propto aL^{-1/\nu},\label{eq:tc} \tag{2}\end{equation}$$ with $a$ a constant and $\nu$ defined in Eq. [(1)](eq:xi). The Ising Model and Phase Transitions The correlation length for a finite lattice size can then be shown to be proportional to $$\xi(T) \propto L\sim \left|T_C-T\right|^{-\nu},$$ and if we set $T=T_C$ one can obtain the following relations for the magnetization, specific heat and susceptibility for $T \le T_C$ $$\langle {\cal M}(T) \rangle \sim \left(T_C-T\right)^{\beta} \propto L^{-\beta/\nu},$$ $$C_V(T) \sim \left|T_C-T\right|^{-\alpha} \propto L^{\alpha/\nu},$$ and $$\chi(T) \sim \left|T_C-T\right|^{-\gamma} \propto L^{\gamma/\nu}.$$ The Metropolis Algorithm and the Two-dimensional Ising Model In our case we have as the Monte Carlo sampling function the probability for finding the system in a state $s$ given by $$P_s=\frac{e^{-(\beta E_s)}}{Z},$$ with energy $E_s$, $\beta=1/kT$ and $Z$ a normalization constant which defines the partition function in the canonical ensemble. As discussed above, $$Z(\beta)=\sum_se^{-(\beta E_s)}$$ is difficult to compute since we need all states. The Metropolis Algorithm and the Two-dimensional Ising Model In a calculation of the Ising model in two dimensions, the number of configurations is given by $2^N$ with $N=L\times L$ the number of spins for a lattice of length $L$. Fortunately, the Metropolis algorithm considers only ratios between probabilities and we do not need to compute the partition function at all. The algorithm goes as follows * Establish an initial state with energy $E_b$ by positioning yourself at a random configuration in the lattice * Change the initial configuration by flipping e.g., one spin only. Compute the energy of this trial state $E_t$. * Calculate $\Delta E=E_t-E_b$. The number of possible values of $\Delta E$ is limited to five for the Ising model in two dimensions, see the discussion below. * If $\Delta E \le 0$ we accept the new configuration, meaning that the energy is lowered and we are hopefully moving towards the energy minimum at a given temperature. Go to step 7. * If $\Delta E > 0$, calculate $w=e^{-(\beta \Delta E)}$. * Compare $w$ with a random number $r$.
If $$r \le w,$$ then accept the new configuration, else we keep the old configuration. * The next step is to update various expectations values. * The steps (2)-(7) are then repeated in order to obtain a sufficently good representation of states. * Each time you sweep through the lattice, i.e., when you have summed over all spins, constitutes what is called a Monte Carlo cycle. You could think of one such cycle as a measurement. At the end, you should divide the various expectation values with the total number of cycles. You can choose whether you wish to divide by the number of spins or not. If you divide with the number of spins as well, your result for e.g., the energy is now the energy per spin. The Metropolis Algorithm and the Two-dimensional Ising ModelThe crucial step is the calculation of the energy difference and thechange in magnetization. This part needs to be coded in an asefficient as possible way since the change in energy is computed manytimes. In the calculation of the energy difference from one spinconfiguration to the other, we will limit the change to the flippingof one spin only. For the Ising model in two dimensions it means thatthere will only be a limited set of values for $\Delta E$. Actually,there are only five possible values. The Metropolis Algorithm and the Two-dimensional Ising ModelTo see this, select first arandom spin position $x,y$ and assume that this spin and its nearestneighbors are all pointing up. The energy for this configuration is$E=-4J$. Now we flip this spin as shown below. The energy of the newconfiguration is $E=4J$, yielding $\Delta E=8J$. $$E=-4J\hspace{1cm}\begin{array}{ccc} & \uparrow & \\ \uparrow & \uparrow & \uparrow\\ & \uparrow & \end{array}\hspace{1cm}\Longrightarrow\hspace{1cm} E=4J\hspace{1cm}\begin{array}{ccc} & \uparrow & \\ \uparrow & \downarrow & \uparrow\\ & \uparrow & \end{array}$$ The four other possibilities are as follows $$E=-2J\hspace{1cm}\begin{array}{ccc} & \uparrow & \\ \downarrow & \uparrow & \uparrow\\ & \uparrow & \end{array}\hspace{1cm}\Longrightarrow\hspace{1cm} E=2J\hspace{1cm}\begin{array}{ccc} & \uparrow & \\ \downarrow & \downarrow & \uparrow\\ & \uparrow & \end{array}$$ with $\Delta E=4J$, $$E=0\hspace{1cm}\begin{array}{ccc} & \uparrow & \\ \downarrow & \uparrow & \uparrow\\ & \downarrow & \end{array}\hspace{1cm}\Longrightarrow\hspace{1cm} E=0\hspace{1cm}\begin{array}{ccc} & \uparrow & \\ \downarrow & \downarrow & \uparrow\\ & \downarrow & \end{array}$$ with $\Delta E=0$, $$E=2J\hspace{1cm}\begin{array}{ccc} & \downarrow & \\ \downarrow & \uparrow & \uparrow\\ & \downarrow & \end{array}\hspace{1cm}\Longrightarrow\hspace{1cm} E=-2J\hspace{1cm}\begin{array}{ccc} & \downarrow & \\ \downarrow & \downarrow & \uparrow\\ & \downarrow & \end{array}$$ with $\Delta E=-4J$ and finally $$E=4J\hspace{1cm}\begin{array}{ccc} & \downarrow & \\ \downarrow & \uparrow & \downarrow\\ & \downarrow & \end{array}\hspace{1cm}\Longrightarrow\hspace{1cm} E=-4J\hspace{1cm}\begin{array}{ccc} & \downarrow & \\ \downarrow & \downarrow & \downarrow\\ & \downarrow & \end{array}$$ with $\Delta E=-8J$. The Metropolis Algorithm and the Two-dimensional Ising ModelThis means in turn that we could construct an array which contains all valuesof $e^{\beta \Delta E}$ before doing the Metropolis sampling. Else, wewould have to evaluate the exponential at each Monte Carlo sampling. For the two-dimensional Ising model there are only five possible values. 
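A minimal Python sketch of such a lookup table (added for illustration; J=1 and k_B=1 are assumed) is

import numpy as np

def boltzmann_table(T, J=1.0):
    # Precompute exp(-dE/T) for the five possible energy changes of a
    # single spin flip in the two-dimensional Ising model (k_B = 1).
    return {dE: np.exp(-dE / T) for dE in (-8 * J, -4 * J, 0.0, 4 * J, 8 * J)}

w = boltzmann_table(T=2.0)
print(w)

The C++ program shown below stores the same quantities in an array, indexed as w[dE+8], so that no exponential has to be evaluated inside the sampling loop.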
It is rather easyto convice oneself that for the one-dimensional Ising model we have only three possible values.The main part of the Ising model program is shown here /* Program to solve the two-dimensional Ising model The coupling constant J = 1 Boltzmann's constant = 1, temperature has thus dimension energy Metropolis sampling is used. Periodic boundary conditions. */ include include include include "lib.h" using namespace std; ofstream ofile; // inline function for periodic boundary conditions inline int periodic(int i, int limit, int add) { return (i+limit+add) % (limit); } // Function to read in data from screen void read_input(int&, int&, double&, double&, double&); // Function to initialise energy and magnetization void initialize(int, double, int **, double&, double&); // The metropolis algorithm void Metropolis(int, long&, int **, double&, double&, double *); // prints to file the results of the calculations void output(int, int, double, double *); // main program int main(int argc, char* argv[]) { char *outfilename; long idum; int **spin_matrix, n_spins, mcs; double w[17], average[5], initial_temp, final_temp, E, M, temp_step; // Read in output file, abort if there are too few command-line arguments if( argc <= 1 ){ cout << "Bad Usage: " << argv[0] << " read also output file on same line" << endl; exit(1); } else{ outfilename=argv[1]; } ofile.open(outfilename); // Read in initial values such as size of lattice, temp and cycles read_input(n_spins, mcs, initial_temp, final_temp, temp_step); spin_matrix = (int**) matrix(n_spins, n_spins, sizeof(int)); idum = -1; // random starting point for ( double temp = initial_temp; temp <= final_temp; temp+=temp_step){ // initialise energy and magnetization E = M = 0.; // setup array for possible energy changes for( int de =-8; de <= 8; de++) w[de+8] = 0; for( int de =-8; de <= 8; de+=4) w[de+8] = exp(-de/temp); // initialise array for expectation values for( int i = 0; i < 5; i++) average[i] = 0.; initialize(n_spins, double temp, spin_matrix, E, M); // start Monte Carlo computation for (int cycles = 1; cycles <= mcs; cycles++){ Metropolis(n_spins, idum, spin_matrix, E, M, w); // update expectation values average[0] += E; average[1] += E*E; average[2] += M; average[3] += M*M; average[4] += fabs(M); } // print results output(n_spins, mcs, temp, average); } free_matrix((void **) spin_matrix); // free memory ofile.close(); // close output file return 0; } The Metropolis Algorithm and the Two-dimensional Ising ModelThe array $w[17]$ contains values of $\Delta E$ spanning from $-8J$ to$8J$ and it is precalculated in the main part for every newtemperature. The program takes as input the initial temperature, finaltemperature, a temperature step, the number of spins in one direction(we force the lattice to be a square lattice, meaning that we have thesame number of spins in the $x$ and the $y$ directions) and the numberof Monte Carlo cycles. The Metropolis Algorithm and the Two-dimensional Ising ModelFor every Monte Carlo cycle we run through allspins in the lattice in the function metropolis and flip one spin atthe time and perform the Metropolis test. However, every time we flipa spin we need to compute the actual energy difference $\Delta E$ inorder to access the right element of the array which stores $e^{\beta\Delta E}$. This is easily done in the Ising model since we canexploit the fact that only one spin is flipped, meaning in turn thatall the remaining spins keep their values fixed. 
The energydifference between a state $E_1$ and a state $E_2$ with zero externalmagnetic field is $$\Delta E = E_2-E_1 =J\sum_{}^{N}s_k^1s_{l}^1-J\sum_{}^{N}s_k^2s_{l}^2,$$ which we can rewrite as $$\Delta E = -J \sum_{}^{N}s_k^2(s_l^2-s_{l}^1),$$ where the sum now runs only over the nearest neighbors $k$. The Metropolis Algorithm and the Two-dimensional Ising ModelSince the spin to be flipped takes only two values, $s_l^1=\pm 1$ and $s_l^2=\pm 1$, it means that if$s_l^1= 1$, then $s_l^2=-1$ and if $s_l^1= -1$, then $s_l^2=1$. The other spins keep their values, meaning that$s_k^1=s_k^2$.If $s_l^1= 1$ we must have $s_l^1-s_{l}^2=2$, and if $s_l^1= -1$ we must have $s_l^1-s_{l}^2=-2$. From these results we see that the energy differencecan be coded efficiently as $$\begin{equation} \Delta E = 2Js_l^1\sum_{}^{N}s_k,\label{eq:deltaeising} \tag{3}\end{equation}$$ where the sum runs only over the nearest neighbors $k$ of spin $l$.We can compute the change in magnetisation by flipping one spin as well.Since only spin $l$ is flipped, all the surrounding spins remain unchanged. The Metropolis Algorithm and the Two-dimensional Ising ModelThe difference in magnetisation is therefore only given by the difference $s_l^1-s_{l}^2=\pm 2$, or in a more compact way as $$\begin{equation}M_2 = M_1+2s_l^2,\label{eq:deltamising} \tag{4}\end{equation}$$ where $M_1$ and $M_2$ are the magnetizations before and after the spin flip, respectively. Eqs. [(3)](eq:deltaeising) and [(4)](eq:deltamising) are implemented in the function **metropolis** shown here void Metropolis(int n_spins, long& idum, int **spin_matrix, double& E, double&M, double *w) { // loop over all spins for(int y =0; y < n_spins; y++) { for (int x= 0; x < n_spins; x++){ // Find random position int ix = (int) (ran1(&idum)*(double)n_spins); int iy = (int) (ran1(&idum)*(double)n_spins); int deltaE = 2*spin_matrix[iy][ix]* (spin_matrix[iy][periodic(ix,n_spins,-1)]+ spin_matrix[periodic(iy,n_spins,-1)][ix] + spin_matrix[iy][periodic(ix,n_spins,1)] + spin_matrix[periodic(iy,n_spins,1)][ix]); // Here we perform the Metropolis test if ( ran1(&idum) <= w[deltaE+8] ) { spin_matrix[iy][ix] *= -1; // flip one spin and accept new spin config // update energy and magnetization M += (double) 2*spin_matrix[iy][ix]; E += (double) deltaE; } } } } // end of Metropolis sampling over spins The Metropolis Algorithm and the Two-dimensional Ising ModelNote that we loop over all spins but that we choose the lattice positions $x$ and $y$ randomly.If the move is accepted after performing the Metropolis test, we update the energy and the magnetisation.The new values are used to update the averages computed in the main function. The Metropolis Algorithm and the Two-dimensional Ising ModelWe need also to initialize various variables.This is done in the function here. 
// function to initialise energy, spin matrix and magnetization void initialize(int n_spins, double temp, int **spin_matrix, double& E, double& M) { // setup spin matrix and intial magnetization for(int y =0; y < n_spins; y++) { for (int x= 0; x < n_spins; x++){ if (temp < 1.5) spin_matrix[y][x] = 1; // spin orientation for the ground state M += (double) spin_matrix[y][x]; } } // setup initial energy for(int y =0; y < n_spins; y++) { for (int x= 0; x < n_spins; x++){ E -= (double) spin_matrix[y][x]* (spin_matrix[periodic(y,n_spins,-1)][x] + spin_matrix[y][periodic(x,n_spins,-1)]); } } }// end function initialise Two-dimensional Ising Model and analysis of spin valuesThe following python code displays the values of the spins as function of temperature. ###Code # coding=utf-8 #2-dimensional ising model with visualization import numpy, sys, math import pygame #Needed for visualize when using SDL screen = None; font = None; BLOCKSIZE = 10 def periodic (i, limit, add): """ Choose correct matrix index with periodic boundary conditions Input: - i: Base index - limit: Highest \"legal\" index - add: Number to add or subtract from i """ return (i+limit+add) % limit def visualize(spin_matrix, temp, E, M, method): """ Visualize the spin matrix Methods: method = -1:No visualization (testing) method = 0: Just print it to the terminal method = 1: Pretty-print to terminal method = 2: SDL/pygame single-pixel method = 3: SDL/pygame rectangle """ #Simple terminal dump if method == 0: print "temp:", temp, "E:", E, "M:", M print spin_matrix #Pretty-print to terminal elif method == 1: out = "" size = len(spin_matrix) for y in xrange(size): for x in xrange(size): if spin_matrix.item(x,y) == 1: out += "X" else: out += " " out += "\n" print "temp:", temp, "E:", E, "M:", M print out + "\n" #SDL single-pixel (useful for large arrays) elif method == 2: size = len(spin_matrix) screen.lock() for y in xrange(size): for x in xrange(size): if spin_matrix.item(x,y) == 1: screen.set_at((x,y),(0,0,255)) else: screen.set_at((x,y),(255,0,0)) screen.unlock() pygame.display.flip() #SDL block (usefull for smaller arrays) elif method == 3: size = len(spin_matrix) screen.lock() for y in xrange(size): for x in xrange(size): if spin_matrix.item(x,y) == 1: rect = pygame.Rect(x*BLOCKSIZE,y*BLOCKSIZE,BLOCKSIZE,BLOCKSIZE) pygame.draw.rect(screen,(0,0,255),rect) else: rect = pygame.Rect(x*BLOCKSIZE,y*BLOCKSIZE,BLOCKSIZE,BLOCKSIZE) pygame.draw.rect(screen,(255,0,0),rect) screen.unlock() pygame.display.flip() #SDL block w/ data-display elif method == 4: size = len(spin_matrix) screen.lock() for y in xrange(size): for x in xrange(size): if spin_matrix.item(x,y) == 1: rect = pygame.Rect(x*BLOCKSIZE,y*BLOCKSIZE,BLOCKSIZE,BLOCKSIZE) pygame.draw.rect(screen,(255,255,255),rect) else: rect = pygame.Rect(x*BLOCKSIZE,y*BLOCKSIZE,BLOCKSIZE,BLOCKSIZE) pygame.draw.rect(screen,(0,0,0),rect) s = font.render("<E> = %5.3E; <M> = %5.3E" % E,M,False,(255,0,0)) screen.blit(s,(0,0)) screen.unlock() pygame.display.flip() def monteCarlo(temp, size, trials, visual_method): """ Calculate the energy and magnetization (\"straight\" and squared) for a given temperature Input: - temp: Temperature to calculate for - size: dimension of square matrix - trials: Monte-carlo trials (how many times do we flip the matrix?) - visual_method: What method should we use to visualize? 
Output: - E_av: Energy of matrix averaged over trials, normalized to spins**2 - E_variance: Variance of energy, same normalization * temp**2 - M_av: Magnetic field of matrix, averaged over trials, normalized to spins**2 - M_variance: Variance of magnetic field, same normalization * temp - Mabs: Absolute value of magnetic field, averaged over trials """ #Setup spin matrix, initialize to ground state spin_matrix = numpy.zeros( (size,size), numpy.int8) + 1 #Create and initialize variables E = M = 0 E_av = E2_av = M_av = M2_av = Mabs_av = 0 #Setup array for possible energy changes w = numpy.zeros(17,numpy.float64) for de in xrange(-8,9,4): #include +8 w[de+8] = math.exp(-de/temp) #Calculate initial magnetization: M = spin_matrix.sum() #Calculate initial energy for j in xrange(size): for i in xrange(size): E -= spin_matrix.item(i,j)*\ (spin_matrix.item(periodic(i,size,-1),j) + spin_matrix.item(i,periodic(j,size,1))) #Start metropolis MonteCarlo computation for i in xrange(trials): #Metropolis #Loop over all spins, pick a random spin each time for s in xrange(size**2): x = int(numpy.random.random()*size) y = int(numpy.random.random()*size) deltaE = 2*spin_matrix.item(x,y)*\ (spin_matrix.item(periodic(x,size,-1), y) +\ spin_matrix.item(periodic(x,size,1), y) +\ spin_matrix.item(x, periodic(y,size,-1)) +\ spin_matrix.item(x, periodic(y,size,1))) if numpy.random.random() <= w[deltaE+8]: #Accept! spin_matrix[x,y] *= -1 M += 2*spin_matrix[x,y] E += deltaE #Update expectation values E_av += E E2_av += E**2 M_av += M M2_av += M**2 Mabs_av += int(math.fabs(M)) visualize(spin_matrix, temp,E/float(size**2),M/float(size**2), method); #Normalize average values E_av /= float(trials); E2_av /= float(trials); M_av /= float(trials); M2_av /= float(trials); Mabs_av /= float(trials); #Calculate variance and normalize to per-point and temp E_variance = (E2_av-E_av*E_av)/float(size*size*temp*temp); M_variance = (M2_av-M_av*M_av)/float(size*size*temp); #Normalize returned averages to per-point E_av /= float(size*size); M_av /= float(size*size); Mabs_av /= float(size*size); return (E_av, E_variance, M_av, M_variance, Mabs_av) # Main program size = 100 trials = 100000 temp = 2.5 method = 3 #Initialize pygame if method == 2 or method == 3 or method == 4: pygame.init() if method == 2: screen = pygame.display.set_mode((size,size)) elif method == 3: screen = pygame.display.set_mode((size*10,size*10)) elif method == 4: screen = pygame.display.set_mode((size*10,size*10)) font = pygame.font.Font(None,12) (E_av, E_variance, M_av, M_variance, Mabs_av) = monteCarlo(temp,size,trials, method) print "%15.8E %15.8E %15.8E %15.8E %15.8E %15.8E\n" % (temp, E_av, E_variance, M_av, M_variance, Mabs_av) pygame.quit(); ###Output _____no_output_____
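###Markdown The program above requires pygame and was written for Python 2 (print statements and xrange). For quick, graphics-free experiments, the cell below gives a compact re-implementation of the same Metropolis sweep in modern Python/NumPy; it is only an illustrative sketch, and the lattice size, temperature values, number of cycles, burn-in period and function names are arbitrary choices. ###Code # Compact, pygame-free sketch of the Metropolis sweep for the 2D Ising model
import numpy as np

def metropolis_sweep(spins, T, rng):
    L = spins.shape[0]
    for _ in range(L * L):
        x, y = rng.integers(0, L, size=2)
        dE = 2 * spins[x, y] * (spins[(x + 1) % L, y] + spins[(x - 1) % L, y] +
                                spins[x, (y + 1) % L] + spins[x, (y - 1) % L])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[x, y] *= -1

def observables(L=10, T=2.0, cycles=2000, burn_in=500, seed=0):
    rng = np.random.default_rng(seed)
    spins = np.ones((L, L), dtype=int)          # ordered start
    E_sum = M_sum = 0.0
    for cycle in range(cycles):
        metropolis_sweep(spins, T, rng)
        if cycle >= burn_in:
            # energy with periodic boundaries, each bond counted once
            E = -np.sum(spins * (np.roll(spins, 1, axis=0) + np.roll(spins, 1, axis=1)))
            E_sum += E
            M_sum += abs(spins.sum())
    n = cycles - burn_in
    return E_sum / n / L**2, M_sum / n / L**2

for T in (1.0, 2.27, 3.0):
    E, M = observables(T=T)
    print(f"T={T:4.2f}  <E>/spin={E:6.3f}  <|M|>/spin={M:5.3f}") ###Output _____no_output_____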
notebooks/Production Database Builder - Immutable Metadata.ipynb
###Markdown Loading Production Data from 1987 to 2008The production data from these years follows the same file format.We can therefore import using the same format and put the dataframes into a dictionary.In 1990 we manually fix well API No: 21451, DUCKETT "A" and set it's well number to 1 as unspecified.Same in 1991. ###Code dates_cols_oil = ["OIL."+str(i) for i in range(0, 12, 1)] dates_cols_gas = ["GAS."+str(i) for i in range(0, 12, 1)] dates_cols = dates_cols_oil + dates_cols_gas headers_old_2003 = ['API_COUNTY', 'API_NUMBER', 'SUFFIX', 'WELL_NAME','WELL_NO', ' OPER_NO', 'OPER_SUFFIX', 'OPERATOR', 'ME', 'SECTION', 'TWP','RAN', 'Q4', 'Q3', 'Q2', 'Q1', 'LATITUDE', 'LONGITUDE', 'OTC_COUNTY', 'OTC_LEASE_NO', 'OTC_SUB_NO', 'OTC_MERGE', 'POOL_NO', 'CODE','FORMATION', 'OFB', 'ALLOWABLE_CLASS', 'ALLOWABLE_TYPE', ' PURCH_NO', 'PURCHASER', 'PURCH_SUFFIX', 'OFB.1', 'YEAR', 'JAN', 'OIL.0', 'GAS.0', 'FEB', 'OIL.1', 'GAS.1', 'MAR', 'OIL.2', 'GAS.2', 'APR', 'OIL.3', 'GAS.3', 'MAY', 'OIL.4', 'GAS.4', 'JUN', 'OIL.5', 'GAS.5', 'JUL', 'OIL.6', 'GAS.6', 'AUG', 'OIL.7', 'GAS.7', 'SEP', 'OIL.8', 'GAS.8', 'OCT', 'OIL.9', 'GAS.9', 'NOV', 'OIL.10', 'GAS.10', 'DEC', 'OIL.11', 'GAS.11'] headers_new_2004 = ['API_COUNTY', 'API_NUMBER', 'S', 'WELL_NAME','WELL_NO', ' OPER_NO', 'OPERATOR', 'ME', 'SECTION', 'TWP','RAN', 'Q4', 'Q3', 'Q2', 'Q1', 'LATITUDE', 'LONGITUDE', 'OTC_COUNTY', 'OTC_LEASE_NO', 'OTC_SUB_NO', 'OTC_MERGE', 'POOL_NO', 'CODE','FORMATION','ALLOWABLE_CLASS', 'ALLOWABLE_TYPE', ' PURCH_NO', 'PURCHASER', 'OFB.1', 'YEAR', 'JAN', 'OIL.0', 'GAS.0', 'FEB', 'OIL.1', 'GAS.1', 'MAR', 'OIL.2', 'GAS.2', 'APR', 'OIL.3', 'GAS.3', 'MAY', 'OIL.4', 'GAS.4', 'JUN', 'OIL.5', 'GAS.5', 'JUL', 'OIL.6', 'GAS.6', 'AUG', 'OIL.7', 'GAS.7', 'SEP', 'OIL.8', 'GAS.8', 'OCT', 'OIL.9', 'GAS.9', 'NOV', 'OIL.10', 'GAS.10', 'DEC', 'OIL.11', 'GAS.11'] df_in = None production_data = {} for i in range(1987, 2016, 1): dates_oil = [ "OIL_"+str(datetime.date(i, j+1, 1)) for j in range(0, 12, 1)] dates_gas = [ "GAS_"+str(datetime.date(i, j+1, 1)) for j in range(0, 12, 1)] renamed_oil = {old: new for old, new in zip(dates_cols_oil, dates_oil)} renamed_gas = {old: new for old, new in zip(dates_cols_gas, dates_gas)} renamed_cols = {**renamed_oil, **renamed_gas} #print(renamed_cols) if i != 1994: #No Data from 1994 print(i) if i <= 2008: df = None if i < 2004: df = pd.read_csv("../raw/"+str(i)+"prodn.txt", delimiter="|", skiprows=[0, 2], names=headers_old_2003) else: df = pd.read_csv("../raw/"+str(i)+"prodn.txt", delimiter="|", skiprows=[0, 2], names=headers_new_2004) df_in = df.copy() print(df.columns) print(renamed_cols) df.rename(index=str, columns=renamed_cols, inplace=True) df = df.drop(['YEAR','JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL','AUG', 'SEP', 'OCT', 'NOV', 'DEC'], axis=1) production_data[i] = df else: df = pd.read_csv("../raw/"+str(i)+"prodn.txt", delimiter="|") df[["API_COUNTY", "API_NUMBER"]].apply(lambda x: pd.to_numeric(x, errors='coerce',downcast='integer')) df_in = df.copy() df.rename(renamed_cols) production_data[i] = df df_in.head() def filter_data(row): buffer = [] for val in row: val_parsed = None try: val_parsed = int(val) except ValueError: val_parsed = 0 buffer.append(val_parsed) return np.array(buffer, dtype=np.int32) meta_dataframe = None meta_prod_dfs = [] meta_data = {} columns = ['API_NUMBER','API_COUNTY','LATITUDE', 'LONGITUDE', 'FORMATION'] for year in range(1987, 2016): print(year) if year != 1994: filter_col = columns yearly_meta_data = production_data[year]#.dropna() for i in range(1, 
len(yearly_meta_data.index)): row = yearly_meta_data.iloc[[i]] api_num = row["API_NUMBER"].values.astype(np.int32)[0] mdata = row[filter_col].values[0] if api_num in meta_data.keys(): pass else: if not np.isnan(api_num): meta_data[api_num] = {} try: meta_data[api_num]["API_COUNTY"] = int(mdata[1]) meta_data[api_num]["LATITUDE"] = float(mdata[2]) meta_data[api_num]["LONGITUDE"] = float(mdata[3]) form_str = str(mdata[4]).strip(" ") meta_data[api_num]["FORMATION"] = form_str except ValueError: print("Found invalid value: ", api_num, year, mdata) del meta_data[-2147483648 ] meta_out = {} for key in meta_data.keys(): meta_out[str(key)] = meta_data[key] with open('../processed/immutable/immutable.json', 'w') as fp: json.dump(meta_out, fp, sort_keys=True) ###Output _____no_output_____
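###Markdown As a quick sanity check (an illustrative addition, assuming the notebook is run from the same working directory as above), the JSON file just written can be loaded back and inspected as a DataFrame. ###Code import json
import pandas as pd

with open('../processed/immutable/immutable.json') as fp:
    meta = json.load(fp)

meta_df = pd.DataFrame.from_dict(meta, orient='index')
meta_df.index.name = 'API_NUMBER'
print(meta_df.shape)
meta_df.head() ###Output _____no_output_____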
book/_build/jupyter_execute/notebooks/04-spatial-joins.ipynb
###Markdown Spatial joins Goals of this notebook:- Based on the `countries` and `cities` dataframes, determine for each city the country in which it is located.- To solve this problem, we will use the the concept of a 'spatial join' operation: combining information of geospatial datasets based on their spatial relationship. ###Code %matplotlib inline import pandas as pd import geopandas countries = geopandas.read_file("zip://./data/ne_110m_admin_0_countries.zip") cities = geopandas.read_file("zip://./data/ne_110m_populated_places.zip") rivers = geopandas.read_file("zip://./data/ne_50m_rivers_lake_centerlines.zip") ###Output _____no_output_____ ###Markdown Recap - joining dataframesPandas provides functionality to join or merge dataframes in different ways, see https://chrisalbon.com/python/data_wrangling/pandas_join_merge_dataframe/ for an overview and https://pandas.pydata.org/pandas-docs/stable/merging.html for the full documentation. To illustrate the concept of joining the information of two dataframes with pandas, let's take a small subset of our `cities` and `countries` datasets: ###Code cities2 = cities[cities['name'].isin(['Bern', 'Brussels', 'London', 'Paris'])].copy() cities2['iso_a3'] = ['CHE', 'BEL', 'GBR', 'FRA'] cities2 countries2 = countries[['iso_a3', 'name', 'continent']] countries2.head() ###Output _____no_output_____ ###Markdown We added a 'iso_a3' column to the `cities` dataset, indicating a code of the country of the city. This country code is also present in the `countries` dataset, which allows us to merge those two dataframes based on the common column.Joining the `cities` dataframe with `countries` will transfer extra information about the countries (the full name, the continent) to the `cities` dataframe, based on a common key: ###Code cities2.merge(countries2, on='iso_a3') ###Output _____no_output_____ ###Markdown **But**, for this illustrative example, we added the common column manually, it is not present in the original dataset. However, we can still know how to join those two datasets based on their spatial coordinates. Recap - spatial relationships between objectsIn the previous notebook [02-spatial-relationships.ipynb](./02-spatial-relationships-operations.ipynb), we have seen the notion of spatial relationships between geometry objects: within, contains, intersects, ...In this case, we know that each of the cities is located *within* one of the countries, or the other way around that each country can *contain* multiple cities.We can test such relationships using the methods we have seen in the previous notebook: ###Code france = countries.loc[countries['name'] == 'France', 'geometry'].squeeze() cities.within(france) ###Output _____no_output_____ ###Markdown The above gives us a boolean series, indicating for each point in our `cities` dataframe whether it is located within the area of France or not. Because this is a boolean series as result, we can use it to filter the original dataframe to only show those cities that are actually within France: ###Code cities[cities.within(france)] ###Output _____no_output_____ ###Markdown We could now repeat the above analysis for each of the countries, and add a column to the `cities` dataframe indicating this country. 
However, that would be tedious to do manually, and is also exactly what the spatial join operation provides us.*(note: the above result is incorrect, but this is just because of the coarse-ness of the countries dataset)* Spatial join operation **SPATIAL JOIN** = *transferring attributes from one layer to another based on their spatial relationship* Different parts of this operations:* The GeoDataFrame to which we want add information* The GeoDataFrame that contains the information we want to add* The spatial relationship we want to use to match both datasets ('intersects', 'contains', 'within')* The type of join: left or inner join![](img/illustration-spatial-join.svg) In this case, we want to join the `cities` dataframe with the information of the `countries` dataframe, based on the spatial relationship between both datasets.We use the [`geopandas.sjoin`](http://geopandas.readthedocs.io/en/latest/reference/geopandas.sjoin.html) function: ###Code joined = geopandas.sjoin(cities, countries, op='within', how='left') joined joined['continent'].value_counts() ###Output _____no_output_____ ###Markdown Lets's practice!We will again use the Paris datasets to do some exercises. Let's start importing them again: ###Code districts = geopandas.read_file("data/paris_districts.geojson").to_crs(epsg=2154) stations = geopandas.read_file("data/paris_bike_stations.geojson").to_crs(epsg=2154) ###Output _____no_output_____ ###Markdown **EXERCISE:*** Determine for each bike station in which district it is located (using a spatial join!). Call the result `joined`. ###Code # %load _solved/solutions/04-spatial-joins1.py # %load _solved/solutions/04-spatial-joins2.py ###Output _____no_output_____ ###Markdown **EXERCISE: Map of tree density by district (I)**Using a dataset of all trees in public spaces in Paris, the goal is to make a map of the tree density by district. For this, we first need to find out how many trees each district contains, which we will do in this exercise. In the following exercise, we will then use this result to calculate the density and create a map.To obtain the tree count by district, we first need to know in which district each tree is located, which we can do with a spatial join. Then, using the result of the spatial join, we will calculate the number of trees located in each district using the pandas 'group-by' functionality.- Import the trees dataset `"paris_trees.gpkg"` and call the result `trees`. Also read the districts dataset we have seen previously (`"paris_districts.geojson"`), and call this `districts`. Convert the districts dataset to the same CRS as the trees dataset.- Add a column with the `'district_name'` to the trees dataset using a spatial join. Call the result `joined`.Hints- Remember, we can perform a spatial join with the `geopandas.sjoin()` function.- `geopandas.sjoin()` takes as first argument the dataframe to which we want to add information, and as second argument the dataframe that contains this additional information.- The `op` argument is used to specify which spatial relationship between both dataframes we want to use for joining (options are `'intersects'`, `'contains'`, `'within'`). 
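One possible sketch of these steps is shown in the next cell; it is only an illustration, and it assumes the trees file sits in the same `data/` folder as the other datasets used in this notebook (the exercise only gives the file name). ###Code # Sketch of a possible solution (location of paris_trees.gpkg is assumed)
trees = geopandas.read_file("data/paris_trees.gpkg")
districts = geopandas.read_file("data/paris_districts.geojson").to_crs(trees.crs)
joined = geopandas.sjoin(trees, districts, op='within', how='inner')
joined.head() ###Output _____no_output_____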
###Code # %load _solved/solutions/04-spatial-joins3.py # %load _solved/solutions/04-spatial-joins4.py # %load _solved/solutions/04-spatial-joins5.py ###Output _____no_output_____ ###Markdown **EXERCISE: Map of tree density by district (II)**- Calculate the number of trees located in each district: group the `joined` DataFrame by the `'district_name'` column, and calculate the size of each group. We convert the resulting Series `trees_by_district` to a DataFrame for the next exercise.Hints- The general group-by syntax in pandas is: `df.groupby('key').aggregation_method()`, substituting 'key' and 'aggregation_method' with the appropriate column name and method. - To know the size of groups, we can use the `.size()` method. ###Code # %load _solved/solutions/04-spatial-joins6.py # %load _solved/solutions/04-spatial-joins7.py # %load _solved/solutions/04-spatial-joins8.py ###Output _____no_output_____ ###Markdown **EXERCISE: Map of tree density by district (III)**Now we have obtained the number of trees by district, we can make the map of the districts colored by the tree density.For this, we first need to merge the number of trees in each district we calculated in the previous step (`trees_by_district`) back to the districts dataset. We will use the [`pd.merge()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge.html) function to join two dataframes based on a common column.Since not all districts have the same size, it is a fairer comparison to visualize the tree density: the number of trees relative to the area.- Use the `pd.merge()` function to merge `districts` and `trees_by_district` dataframes on the `'district_name'` column. Call the result `districts_trees`.- Add a column `'n_trees_per_area'` to the `districts_trees` dataframe, based on the `'n_trees'` column divided by the area.- Make a plot of the `districts_trees` dataframe, using the `'n_trees_per_area'` column to determine the color of the polygons.Hints- The pandas `pd.merge()` function takes the two dataframes you want to merge as the first two arguments.- The column name on which you want to merge both datasets can be specified with the `on` keyword.- Accessing a column of a DataFrame can be done with `df['col']`, while adding a column to a DataFrame can be done with `df['new_col'] = values` where `values` can be the result of a computation.- Remember, the area of each geometry in a GeoSeries or GeoDataFrame can be retrieved using the `area` attribute. So considering a GeoDataFrame `gdf`, then `gdf.geometry.area` will return a Series with the area of each geometry.- We can use the `.plot()` method of a GeoDataFrame to make a visualization of the geometries. - For using one of the columns of the GeoDataFrame to determine the fill color, use the `column=` keyword. ###Code # %load _solved/solutions/04-spatial-joins9.py # %load _solved/solutions/04-spatial-joins10.py # %load _solved/solutions/04-spatial-joins11.py ###Output _____no_output_____ ###Markdown The overlay operationIn the spatial join operation above, we are not changing the geometries itself. We are not joining geometries, but joining attributes based on a spatial relationship between the geometries. This also means that the geometries need to at least overlap partially.If you want to create new geometries based on joining (combining) geometries of different dataframes into one new dataframe (eg by taking the intersection of the geometries), you want an **overlay** operation. 
###Code africa = countries[countries['continent'] == 'Africa'] africa.plot() cities['geometry'] = cities.buffer(2) geopandas.overlay(africa, cities, how='difference').plot() ###Output _____no_output_____ ###Markdown REMEMBER * **Spatial join**: transfer attributes from one dataframe to another based on the spatial relationship* **Spatial overlay**: construct new geometries based on spatial operation between both dataframes (and combining attributes of both dataframes) **EXERCISE: Exploring a Land Use dataset**For the following exercises, we first introduce a new dataset: a dataset about the land use of Paris (a simplified version based on the open European [Urban Atlas](https://land.copernicus.eu/local/urban-atlas)). The land use indicates for what kind of activity a certain area is used, such as residential area or for recreation. It is a polygon dataset, with a label representing the land use class for different areas in Paris.In this exercise, we will read the data, explore it visually, and calculate the total area of the different classes of land use in the area of Paris.* Read in the `'paris_land_use.shp'` file and assign the result to a variable `land_use`.* Make a plot of `land_use`, using the `'class'` column to color the polygons. We also add a legend. Note: it might take a few seconds for the plot to generate because there are a lot of polygons.* Add a new column `'area'` with the area of each polygon.* Calculate the total area in km² for each `'class'` using the `groupby()` method, and print the result.Hints* Reading a file can be done with the `geopandas.read_file()` function.* To use a column to color the geometries, use the `column` keyword to indicate the column name.* The area of each geometry can be accessed with the `area` attribute of the `geometry` of the GeoDataFrame.* The `groupby()` method takes the column name on which you want to group as the first argument. ###Code # %load _solved/solutions/04-spatial-joins12.py # %load _solved/solutions/04-spatial-joins13.py # %load _solved/solutions/04-spatial-joins14.py # %load _solved/solutions/04-spatial-joins15.py ###Output _____no_output_____ ###Markdown **EXERCISE: Intersection of two polygons**For this exercise, we are going to use 2 individual polygons: the district of Muette extracted from the `districts` dataset, and the green urban area of Boulogne, a large public park in the west of Paris, extracted from the `land_use` dataset. The two polygons have already been assigned to the `muette` and `park_boulogne` variables.We first visualize the two polygons. You will see that they overlap, but the park is not fully located in the district of Muette. Let's determine the overlapping part.* Plot the two polygons in a single map to examine visually the degree of overlap* Calculate the intersection of the `park_boulogne` and `muette` polygons.* Print the proportion of the area of the district that is occupied by the park.Hints* The intersection of to scalar polygons can be calculated with the `intersection()` method of one of the polygons, and passing the other polygon as the argument to that method. 
###Code land_use = geopandas.read_file("zip://./data/paris_land_use.zip") districts = geopandas.read_file("data/paris_districts.geojson").to_crs(land_use.crs) # extract polygons land_use['area'] = land_use.geometry.area park_boulogne = land_use[land_use['class'] == "Green urban areas"].sort_values('area').geometry.iloc[-1] muette = districts[districts.district_name == 'Muette'].geometry.squeeze() # Plot the two polygons geopandas.GeoSeries([park_boulogne, muette]).plot(alpha=0.5, color=['green', 'blue']) # %load _solved/solutions/04-spatial-joins16.py # %load _solved/solutions/04-spatial-joins17.py # %load _solved/solutions/04-spatial-joins18.py ###Output _____no_output_____ ###Markdown **EXERCISE: Intersecting a GeoDataFrame with a Polygon**Combining the land use dataset and the districts dataset, we can now investigate what the land use is in a certain district.For that, we first need to determine the intersection of the land use dataset with a given district. Let's take again the *Muette* district as example case.* Calculate the intersection of the `land_use` polygons with the single `muette` polygon. Call the result `land_use_muette`.* Make a quick plot of this intersection, and pass `edgecolor='black'` to more clearly see the boundaries of the different polygons.* Print the first five rows of `land_use_muette`.Hints* The intersection of each geometry of a GeoSeries with another single geometry can be performed with the `intersection()` method of a GeoSeries.* The `intersection()` method takes as argument the geometry for which to calculate the intersection. ###Code land_use = geopandas.read_file("zip://./data/paris_land_use.zip") districts = geopandas.read_file("data/paris_districts.geojson").to_crs(land_use.crs) muette = districts[districts.district_name == 'Muette'].geometry.squeeze() # %load _solved/solutions/04-spatial-joins19.py # %load _solved/solutions/04-spatial-joins20.py # Print the first five rows of the intersection land_use_muette.head() ###Output _____no_output_____ ###Markdown You can see in the plot that we now only have a subset of the full land use dataset. The `land_use_muette` still has the same number of rows as the original `land_use`, though. But many of the rows, as you could see by printing the first rows, consist now of empty polygons when it did not intersect with the Muette district. ###Code land_use_muette = land_use.copy() land_use_muette['geometry'] = land_use.geometry.intersection(muette) land_use_muette = land_use_muette[~land_use_muette.is_empty] land_use_muette.head() land_use_muette.dissolve(by='class') land_use_muette.dissolve(by='class').reset_index().plot(column='class') ###Output _____no_output_____ ###Markdown **EXERCISE: Overlaying spatial datasets**We will now combine both datasets in an overlay operation. Create a new `GeoDataFrame` consisting of the intersection of the land use polygons wich each of the districts, but make sure to bring the attribute data from both source layers.Once we created the overlay of the land use and districts datasets, we can more easily inspect the land use for the different districts. Let's get back to the example district of Muette, and inspect the land use of that district.* Create a new GeoDataFrame from the intersections of `land_use` and `districts`. 
Assign the result to a variable `combined`.* Print the first rows the resulting GeoDataFrame (`combined`).* Add a new column `'area'` with the area of each polygon to the `combined` GeoDataFrame.* Create a subset called `land_use_muette` where the `'district_name'` is equal to "Muette".* Make a plot of `land_use_muette`, using the `'class'` column to color the polygons.* Calculate the total area for each `'class'` of `land_use_muette` using the `groupby()` method, and print the result.Hints* The intersection of two GeoDataFrames can be calculated with the `geopandas.overlay()` function.* The `overlay()` functions takes first the two GeoDataFrames to combine, and a third `how` keyword indicating how to combine the two layers.* For making an overlay based on the intersection, you can pass `how='intersection'`.* The area of each geometry can be accessed with the `area` attribute of the `geometry` of the GeoDataFrame.* To use a column to color the geometries, pass its name to the `column` keyword.* The `groupby()` method takes the column name on which you want to group as the first argument.* The total area for each class can be calculated by taking the `sum()` of the area. ###Code land_use = geopandas.read_file("zip://./data/paris_land_use.zip") districts = geopandas.read_file("data/paris_districts.geojson").to_crs(land_use.crs) # %load _solved/solutions/04-spatial-joins21.py # %load _solved/solutions/04-spatial-joins22.py # %load _solved/solutions/04-spatial-joins23.py # %load _solved/solutions/04-spatial-joins24.py # %load _solved/solutions/04-spatial-joins25.py # %load _solved/solutions/04-spatial-joins26.py ###Output _____no_output_____
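###Markdown For reference, one way the overlay steps described above might look is sketched here (an illustrative addition; the division by 1e6 assumes the coordinate reference system is in metres, so that areas convert to km²). ###Code # Sketch: overlay land use with districts and summarise the Muette district
combined = geopandas.overlay(land_use, districts, how='intersection')
combined['area'] = combined.geometry.area
land_use_muette = combined[combined['district_name'] == "Muette"]
land_use_muette.plot(column='class')
print(land_use_muette.groupby('class')['area'].sum() / 1e6)  # km², assuming a metric CRS ###Output _____no_output_____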
Python_visualization_for_ML.ipynb
###Markdown Python資料視覺化呈現,實作機器學習方法http://www.cc.ntu.edu.tw/chinese/epaper/0041/20170620_4105.html ###Code import os os.environ['TF_CPP_MIN_LOG_LEVEL']='2' import tensorflow as tf import matplotlib import numpy as np import keras import sklearn node1 = tf.constant(5.0, tf.float32) node2 = tf.constant(3.5,tf.float32) sess = tf.Session() print(sess.run([node1, node2])) ###Output [5.0, 3.5] ###Markdown NOTE: Console 端如果出現以下錯誤訊息I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA這只是 warning 建議你可以用 source code 編譯安裝, 這樣速度會快許多, 可以加上以下 code 忽略import osos.environ['TF_CPP_MIN_LOG_LEVEL']='2' 三、Matplotlib 資料視覺化的呈現 ###Code import matplotlib import matplotlib.pyplot as plt import numpy as np from sklearn.datasets import make_blobs %matplotlib inline plt.rc('figure', figsize=(8.0, 8.0)) data, label = make_blobs(n_samples=200, random_state=0) #label = label.reshape(200, 1) #TonyH; error plt.scatter(data[:,0], data[:,1], s=20, c=label, cmap=plt.cm.Accent) ###Output _____no_output_____ ###Markdown 簡單說明本程式的重點:(1) 使用matplotlib.pyplot進行畫圖,使用plt.rc定義圖片大小。(2) 利用sklearn.datasets載入資料,本範例使用make_blobs,或用circle, API的使用可以參考,make_blobs和make_circle。(3) n_samples 取的點數。(4) reshape(200,1) 將200個點的陣列轉成向量,其中label的值為0或是1,在圖形上顯示兩種不同的顏色。(5) data 為200x2的矩陣,其中data[: ,0] 表示X座標值,data[: ,1]為Y座標值,Color使用label來區分,s為資料的圖形大小。Cmap則是color map的方式。 四、TensorFlow 介紹 先來看一個簡單的範例,假設我們要實作一個如下圖這樣的網路,其中x1、x2;x3是輸入,而變數是a和b,此變數需要學習,並且在之後變化,這些變數就是我們一般所說的權重。![image.png](attachment:image.png) ###Code import tensorflow as tf # (1) x1 = tf.constant(1.0, tf.float32) x2 = tf.constant(2.0, tf.float32) x3 = tf.constant(3.0, tf.float32) # (2) node1 = tf.add(x1, x2) # (3) a = tf.placeholder(tf.float32) b = tf.placeholder(tf.float32) node2 = a + b * x3 + node1 # (4) sess = tf.Session() init = tf.global_variables_initializer() sess.run(init) sess.run(node2, {a: 10.0, b:3.0}) ###Output _____no_output_____ ###Markdown 程式重點: (1) x1、x2、x3 是輸入,我們使用constant使其為定值。(2) node1 是 x1和x2的相加。(3) a,b是可以調變的變數,注意到這邊,一旦我們定義了變數,我們就必須先初始化變數。(4) 最後使用session.run執行一輪,即結束。 介紹另一個基本的範例 Linear Model 這個範例介紹了如何將一個資料向量X=[x1, x2, x3, x4],進行向量線性運算 Y = W*X + b,其中W就是權重。所以如圖十五,經過線性運算後,可以得到一組Y向量。如果我們希望能夠訓練W的值,我們就需要完成像圖十六這樣的架構。 ![image.png](attachment:image.png) ![image.png](attachment:image.png) 五、Keras 介紹 Keras底層也是利用tensorflow來實作,可以把Keras想成更抽像的API利用Keras的API來建構網路的(文件可參考https://keras.io/layers/core/) 若我們想建立一個輸入為10筆、輸出為2筆的網路架構,程式如下。其中Model裡的Dense是預設輸入為10,輸出為2的節點。 ###Code from keras.models import Sequential from keras.layers import Dense model = Sequential() model.add(Dense(2, input_shape=(10,))) model.compile(optimizer='sgd', loss='binary_crossentropy') model.summary() ###Output _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_4 (Dense) (None, 2) 22 ================================================================= Total params: 22 Trainable params: 22 Non-trainable params: 0 _________________________________________________________________ ###Markdown 如果想要在中間多加2層的網路,在不用動到其它程式碼下,只要在中間增加model.add即可,如下程式碼即建立兩層hidden layer,且各層為10的節點數量: ###Code from keras.models import Sequential from keras.layers import Dense model = Sequential() model.add(Dense(2, input_shape=(10,))) model.add(Dense(10)) model.add(Dense(10)) model.compile(optimizer='sgd', loss='binary_crossentropy') model.summary() ###Output _________________________________________________________________ Layer (type) 
Output Shape Param # ================================================================= dense_5 (Dense) (None, 2) 22 _________________________________________________________________ dense_6 (Dense) (None, 10) 30 _________________________________________________________________ dense_7 (Dense) (None, 10) 110 ================================================================= Total params: 162 Trainable params: 162 Non-trainable params: 0 _________________________________________________________________
Hanifa/Assignment/Chamillionaire/Chamillionaire_findings.ipynb
###Markdown **Automatic text summarization** is the task of producing a concise and fluent summary while preserving key information content and overall meaning Machines have become capable of understanding human language with the help of NLP or Natural Language Processing. Today, research is being done with the help of text analytics.One application of text analytics and NLP is Text Summarization. Text Summarization Python helps in summarizing and shortening the text in the user feedback. It can be done with the help of an algorithm that can help in reducing the text bodies while keeping their original meaning intact or by giving insights into their original text. **Two different approaches are used for Text Summarization**- Extractive Summarization- Abstractive Summarization **Extractive Summarization**In Extractive Summarization, we are identifying important phrases or sentences from the original text and extract only these phrases from the text. These extracted sentences would be the summary. **Abstractive Summarization**In the Abstractive Summarization approach, we work on generating new sentences from the original text. The abstractive method is in contrast to the approach that was described above. The sentences generated through this approach might not even be present in the original text. ###Code import pandas as pd df = pd.read_csv("Chamillionaire.csv") df.drop(['Unnamed: 0'],inplace=True,axis=1) df for i in range(0,10): print("Text " + str(i+1) +"\n\n" + df['Content'][i] + "\n\n\n") ###Output Text 1 [' Yes, it is rather surprising that a rap artist is very interested in entrepreneurship and technology. '] Text 2 ['When Grammy award-winning rapper Hakeem “Chamillionaire” Seriki began learning the ropes of venture capitalism in the tech space, he noticed something almost immediately— he wasn’t seeing many people who looked like him. ', 'When he began to fund ventures himself, the heads of startups that were brought to him weren’t diverse, either, and that led him to create pitch competitions specifically geared toward people of color and women. His latest contest features a $100,000 investment from him and fellow rapper E-40. ', ' “The reason why we decided to put the focus on minority and women-funded startups is because this demographic of companies and founders is just underrepresented, they’re under-invested in,” he told The Associated Press in a recent interview. “They’re just not as appreciated as we would like, so we’re trying to do more to create more awareness for these companies and also put our money where our mouth is and invest in one of them.”', '', '', 'Startup companies will submit their pitches on Convoz, a video-based social app started by Chamillionaire that focuses on face-to-face interaction. The applicants will be reviewed by him, E-40, Daymond John of “Shark Tank” and Republic, an SEC-registered investing platform.', 'Chamillionaire, who co-founded the underground Texas group the Color Changin’ Click with Paul Wall, is best known for his hit “Ridin’ Dirty,” but has also made a name for himself in business.', 'He believes diversity is scarce because limited partners and venture capitalists tend to work with people they are familiar with or those they “see themselves in,” Chamillionaire said. The trickle-down effect is that the tech space is almost completely dominated by white males. However, the rapper believes that bringing in people with different backgrounds and life experiences will benefit the entire sector. 
', '“A lot of people are raising money, but a lot of people aren’t minorities. A lot of them aren’t women,” Chamillionaire said. “We’re solving problems, often unique problems, because some of these companies I’m seeing are the regurgitation of problems that already got solved. And then you go into places where people like me grew up and there are people that are seeing the world from a very unique lens and those people aren’t getting the capital to go and create those things.”', 'He advises many celebrities and athletes of color in learning how to properly invest and believes they are now realizing the power their status carries, which results in them eventually founding their own companies, as opposed to collecting a paycheck from one.', '', '', '“I want to do more to create more awareness so that the people in our communities aren’t just thinking that you just got to be a basketball player or a rapper, because that’s what I thought,” Chamillionaire said. “Now I feel like there’s a lot of ‘cool’ happening with the tech. There are people that are becoming millionaires that are 20-something-years-old when Snapchat IPOs or when this company gets acquired. And I feel like we need to start training people in our community to start thinking like this.”', 'Although Chamillionaire says he’s encouraged by conferences such as AfroTech – the nation’s largest technology conference for African American techies and entrepreneurs—he says now is the time for inclusion. ', '“I think there’s a systemic problem that I’m not alone going to be able to fix, but I recognize it’s a real thing. So I’m gonna be very vocal about it,” says the venture capitalist. ', 'The pitch competition ends Friday.', '___', 'Follow Associated Press entertainment journalist Gary Gerard Hamilton at twitter.com/GaryGHamilton.'] Text 3 ['In the last several years, a growing number of celebrities have begun investing into tech startups. But few have humbled and immersed themselves into the industry like Chamillionaire, who rose to fame in the early 2000s as a rapper, but can now be spotted at investor parties and Y Combinator "demo days." Next act: A self-described entrepreneur at heart, Chamillionaire recently debuted his mobile video chat app Convoz while he continues to invest in startups. Quick facts: While still working as a musician, he began advising companies like SayNow, which let celebrities directly interact with fans and sold to Google in 2011. He leads a syndicate of investors made up of influencers, celebrities, and athletes. His exits so far include Maker Studios (to Disney for a reported $675 million) and Cruise (to General Motors for close to $1 billion). His startup Convoz now has a total of seven employees, and has raised an undisclosed amount of seed funding from Greycroft Ventures, Upfront Ventures, 500 Startups, Precursor VC, Okapi Ventures, XG Ventures, and a roster of angels including Justin Kan and Snoop Dogg. He\'s not made any investments in cryptocurrencies but says he believes in the potential of blockchain tech. Axios spoke with Chamillionaire a few weeks ago, here are the highlights:On his lifelong interest in tech: When I signed a deal with Universal, I was always thinking digital. I was an entrepreneur who was known as a rapper... I had, like, five phones. People thought that I was a nerd as a rapper. On his introduction to the tech industry as he began advising some entertainment-related startups: It was a whole other world that I didn\'t know existed... 
But as I started getting closer to a lot of these companies, I realized that a lot of companies were coming to the music industry and cannibalizing their business... I started going to tech conferences on my own, me and my partner. My first tech conference was something at Stanford, Quincy Jones\' son told me about it. After that I went to [TechCrunch] Disrupt, then CES [and so on]. To us, growing up, you got two options: you gonna be a musician or a basketball player... The diversity thing is very important to me. I don\'t think people understand the value of diversity. On ending up at Upfront Ventures in Los Angeles as an entrepreneur-in-residence:I was honestly planning on going to San Fran and getting into investing and building a company out there. Mark Suster asked me why. Well there\'s no tech in L.A., when you get off the plane there\'s paparazzi. I stayed there for almost a year. I wanted to come every day. Hearing [the Upfront partners] break down companies... When a founder would come in and pitch… When they leave we would hear all the VCs break it down... I don\'t know what price tag to put on that. On his biggest surprise so far: The surprising thing about it all, is that everyone was so open to giving feedback, criticism, contacts—not like in the music industry. Mark [Suster] showed me 10 companies before I decided to put money in Maker Studios.On celebrities investing in tech startups: Investing in tech, I think, is smart for any entrepreneur or business savvy person—you gotta diversify. At first it was just information, I just wanted to be along for the journey and learn. Then it turned into some wins... Because I was an angel, I was spending what i could afford to lose. At the end of the day I think you’re betting on people and at the end of the day I think I\'m pretty good at people...Eventually people will take a lot of these celebrities and influencers as just a tweet. We\'re more than that—we can connect you to people, we got feedback.The biggest lesson he\'s learned in tech so far: I would say is that I guess I knew this, but it\'s just being in the thick of this, nothing is gonna come overnight. You can get in here and have a false sense of how things work. Being a celebrity is tough—I don\'t even think of myself as one but I guess I am—everyone tells you it’s gonna be great. People tell me what I want to hear or what I need to hear. '] Text 4 ['The middle of the 00’s saw many rappers rise to fame then fall off the map. Sounds changed, ringtones became uncool, and new talent emerged. The music industry changes rapidly and many careers die faster than they started. For some people, the end of a music career might be the start of better endeavors.', 'Around 2004 the rap scene saw an increase in Texas-based rappers gain popularity. No one peaked higher than Chamillionaire, hitting the top of the Billboard Hot 100 for two weeks during the summer of 2006 with his song Ridin’. Falling from the top turned out to be a blessing in disguise because he’s become quite the successful investor.', 'Chamillionaire, whose real name is Hakeem Seriki, has taken some risks that paid off. He found ways to give back through his success, and has put his earnings to good use. Seriki’s music success has really pigeonholed him as a one hit wonder, but he’s much more than that. Now more than ever, we should be recognizing his accomplishments and they ways he is empowering future entrepreneurs.', 'Ridin’ was a catchy song, but also focused on the topic of racial profiling. 
This is still a conversation we’re addressing, and Chamillionaire was discussing it fourteen years ago. After this song, he struggled to find chart success again. His follow up album failed to reach the same level of success as The Sound of Revenge (the album containing Ridin’). The album after that never got released, and by 2009 the name Chamillionaire was old news.', 'As a solo artist, Chamillionaire would only ever release two proper albums. He would release a few more EPs and mixtapes, but as his success in music dwindled he would fade from the public eye. By the time he hit 30, he was facing retirement from the rap scene. He could continue with independent endeavors, but landing another major success seemed unlikely.', 'To some degree, this is not uncommon. Many rappers start their own record labels and hope to become the next media mogul like P. Diddy. Some even venture outside of rap with such examples being Beats by Dre or 50 Cent’s partnership with Vitamin Water. Others try to find success in other genres, like Akon who is credited with giving Lady Gaga her first major deal. While all of these ventures are impressive, they’re also enhanced by star power and celebrity endorsements.', 'Chamillionaire’s interest lied in the tech scene, and he first dipped his toes into this space in 2009. Venture capitalist Mark Suster recognized the rapper for his ability to engage an audience while at a tech convention. He wasn’t just generating excitement, he was discussing how he rose to the top of the iTunes charts thanks to his focus on digital media.', 'Audience engagement wasn’t the only reason Chamillionaire appeared on the tech scene; he was there to learn to invest. He made some notable investments early in his career, but not without shopping around. Early in his career, he invested in Maker’s Studio, which would later be sold to Disney for a reported $675 million. While it’s unknown how much Chamillionaire made in this investment, he invested early and made a significant profit upon his exit.', 'By 2015, Chamillionaire joined Upfront Ventures in Santa Monica as the Entrepreneur-In-Residence. In this role he invests early in rising tech companies, hoping to accelerate the growth and identify the next big thing. It’s hard to deny his eye for future successes. His portfolio includes early investments in: home security system Ring (acquired by Amazon), self-driving car technology Cruise (acquired by GM), and ride-sharing app Lyft which went public in 2018.', 'For a one hit wonder, it’s safe to say Chamillionaire is doing better than a lot of other rappers.', 'In 2018, Chamillionaire introduced the world to his own app named Convoz. The goal is to encourage collaborative video conversations around current topics. If Twitter and TikTok had a baby, it might look something like Convoz. One user posts a message, another user can reply, and anyone else will see the back-and-fourth like it’s a realtime conversation.', 'Future plans include helping new tech ideas blossom, particularly for women and people of color. In 2019, Chamillionaire held a content to invest $25,000 in a start up because so few start ups had a female founder and so few people of color had their start ups venture-backed. He later ran a second competitor, this time investing $100,000 into the winner. After applying through Convoz, Pierre Laguerre’s company Fleeting was selected as the winner. Laguerre is a Haitian-born college dropout who recognized the shortage of qualified truck drivers in the US. 
As our shipping needs grew, he wants to bridge the gap with by connecting truck drivers to on demand jobs.', 'Is Fleeting the next big start up? Possibly. It’s hard to deny Chamillionaire’s eye for success. While he still dabbles in rapping occasionally, his biggest success can be found in the investing world. He seems to have built a path for longterm growth rather than being dependent upon the trends associated with the music scene. At this point, he could probably retire if he wanted. I don’t see that happening, however. He wants to serve communities who are underrepresented by investors and build his reputation as a founder. He’s truly found success in a new industry, and his achievements are worth celebrating.', 'Thank you for reading! I love connecting with other content creators, particularly in product design, branding, and finance. Beyond Medium, I can also be found on my website and LinkedIn.'] Text 5 ['There are a few ways you can announce a launch, but when\xa0Grammy-winning rapper Chamillionaire shared that\xa0he’d founded his own tech startup via Mark Suster’s Snapchat account from Wash U’s campus, it was certainly noteworthy.', 'According to Business Insider, the startup will provide\xa0“downloadable software applications for streaming communications with entertainers, politicians, and celebrities,” something akin to “Twitter with more live streaming baked in.”', 'In town Thursday to visit\xa0Washington University’s School of Engineering and Applied Sciences,\xa0Suster, a successful entrepreneur himself and the managing partner of Upfront Ventures in Los Angeles, has had a working relationship with\xa0Chamillionaire for awhile now. Chamillionaire serves\xa0as an EIR at Upfront and the two co-invested in Maker Studio, which has since been sold to the Walt Disney Company.', 'After a morning\xa0at Wash\xa0U, Suster\xa0and Chamillionaire gave a talk at\xa0Venture Café St. Louis, where we were able to sit down with both to\xa0talk about tech, funding and social media.', 'Chamillionaire:\xa0It is actually something I’ve been thinking in my head for a long time. I was advising companies and just got frustrated with the process of talking to other people and watching them trying to accomplish a vision that ultimately wasn’t mine. Then it was\xa0like,\xa0“You know what? The only way this is going to be done right is if I do it.”', 'Right now we are in stealth mode, so we aren’t saying much about what we’re doing, but it is public information that I am building a company. A lot of\xa0people haven’t seen me put out music in a long time, so they’re wondering,\xa0“How come you’re not releasing music?” and it’s like, I have other aspirations and other problems I want to solve and I want to build a different type of company. So now I’m hiring developers and bringing in people to help build this tech company with me and it’s just a different experience, and I’m excited for the journey.', 'Mark Suster:\xa0The first thing is, Cham had the concept awhile ago. He spoke earlier at [Wash U] about solving problems that you authentically know and experience first hand, so that was the genesis. He saw\xa0a\xa0particular problem and articulated to me how to build it, with whom, what features it should have. ', 'So I said to Cham, come out and do it. If you come here as an EIR, we’ll help you\xa0recruit engineers, build a product team, have advisors; you can learn how\xa0economic models work; we can help you network. 
He\xa0took us up on that offer\xa0and\xa0week in, week out, he sat in the\xa0company pitches, responding, giving us feedback, sometimes helping us with due diligence and sometimes\xa0co-investing as an individual.', '[For his own startup]\xa0he actually has a working product. It’s not public, it’s not launched, it’s not available, but it is working—I have a copy on my phone, he has a copy on his phone.\xa0', 'The real issue is, when is it ready for prime time? Now the hard thing about being Chamillionaire is, anyone else can just\xa0create a product, put it out there, test it with a\xa0bunch of people and slowly fine-tune it. But when you’re well-known and you put out a product, he has a higher bar because if he just puts it out there then you’ll get all the negative reactions because people want to find everything wrong with startups and he’s refining it in private.', 'Chamillionaire: The music thing is what most of the people who know me know about and it’s what I’ve been asked about the most. And to be fair, I haven’t retired from rap, it’s just that in order to be successful in what I’m trying to do, it takes a certain level of commitment. You can’t play around with it, and it’s tough to toggle between putting out an album, going on tour and trying to please fans and then running back to a startup when ultimately, people don’t look at entertainers as really serious people. ', 'So many people [in entertainment] are coming to the\xa0tech world and they’re like, “Oh there’s this cool tech thing going on and people are making money,” and they treat it as a thing they will just\xa0moonlight with. I’m trying to prove that I’m really serious. That’s why I can look Mark or\xa0any other investor in the face and say,\xa0“Hey, I’m different than these guys, I’m all in.” So, I plan to put out music, but I just want to be able to do it on my terms, and that’s kind of what I’m doing now.', 'Chamillionaire: A lot of time when they’re coming to me, it’s because they are trying to find a way to get\xa0people using something. They have great technology and they’re trying to figure out how to get people to understand that this thing is out there. So they often ask me about ways that they can promote it or market it. When they ask me to invest, normally it’s not just about a check, it’s about, “How can we get Chamillionaire involved and use his strategy and the things he’s done in the past to help things grow with us?” Startups for the most part are just trying to get people to know they exist.', 'Mark Suster: So let me push beyond humble Chamillionaire: I introduced him years ago to our startups and they all approached him for the same reason: “Hey, maybe I can get him to promote us.” And that’s the first thought for anyone who has reach with an audience–it’s promotion. And then Cham would come back and say,\xa0“So I looked at how your App\xa0integrates with other people’s apps or how hard user registration is,” and he kept coming back with advice on products. ', 'And I think that’s the thing that people don’t realize about him.\xa0He has a background in visual design and formerly used to be an artist, so he really\xa0thinks about usability. He’s just driven by products and product usage. 
He knows how to engage audiences and that’s part of what he does.', 'Now the big challenge will be, can he recruit, motivate and retain a really talented\xa0team beneath him to develop a\xa0product that’s world class,\xa0and time will tell.', 'Mark Suster: If I could use one word to\xa0describe the students I interacted with it’d be serious. It wasn’t a\xa0frivolous day. People were engaged. ', 'A lot of people were already working on startups and thinking of building things. I know it’s a very quality engineering department and that’s something that matters to us. We really try to work with world-class engineers, and people were taking things pretty seriously.', '', 'Mark Suster:\xa0So I have this great audience, they are really engaged, there’s no bullsh*t, we’re doing this live. I think people like to feel like they’re connected.\xa0', 'With\xa0blogging, it takes me 45 minutes to write a post. I think it takes most people a little longer, but I don’t worry about spelling or editing, I don’t have advertisers, so I don’t have to be perfect. I just write for 45 minutes and then hit publish. But it does take time.', 'On Snapchat yesterday, I was about to come down to St. Louis from Chicago and I had five minutes. So I got a napkin and a pen out and I drew a chart about the innovator’s dilemma and how\xa0that drives a lot of my investment thesis, and it took five minutes. ', 'Now, everyone in the hotel lobby thought I was strange, but it was great and it got 7,000 views in less than 24 hours. I like the immediacy, I like the intimacy. And here’s the thing, my target customer for the most part is 22-34 and they’re all on Snapchat. And no VC knows how to use Snapchat. So I’m like, “Hey man, I got this swim lane to myself, why not?” ', 'Being early to a platform matters. I have 250,000 followers on Twitter, if I started today, I couldn’t get that, I don’t have any famous rap songs—yet. [Looks to Chamillilonaire and laughs.]\xa0', 'Chamillionaire:\xa0He’s the VC of the millennials.', 'Mark Suster: That’s a great question I’m asked all the time.\xa0Number one is, if you want to stay in St. Louis and build a company here, you have a great advantage, which is great engineering talent that will be cheaper because cost of living is cheaper and a much higher retention rate because if you’re building an interesting company. It’s not\xa0like there are 2,000 others at your footstep, like in the Bay Area. ', 'The problem is that investors have a harder time\xa0committing early-stage. The reason is not that I mind coming to St. Louis, but do I want to come eight times a year? Again, nothing against St. Louis, but I go 8-10 times to New York, I go 14 times a year to the Bay Area. So if I add another\xa0location it would just kill me unless it was a company at the next level.', 'I just invested in a company in Toronto, I won’t give a name yet, but we’ll be\xa0announcing in 30 days. And what I said to them was,\xa0“If you’ll do board meetings in New York and LA, I’ll come once a year to Toronto.” So it’s taking that issue off the table; a lot of people don’t know to do that.', 'Anyone a startup pitches in New York, San Francisco, LA, their first thought is,\xa0“Do I want to go to St. 
Louis eight times a year?” And they’re also thinking, “Can I really provide you enough advice and have enough interactions to make a difference?” So if you say,\xa0“I’m on the coasts all the time anyway, every time I come I’ll come see you and you’ll have plenty of access,” then now you’ve taken that issue off the table and I can focus on your business.', 'Chamillionaire:\xa0I like to call Mark my Mr. Miyagi. When I was trying to get into the tech industry, I would look at\xa0tech blogs and I would go to tech conferences and just try to find out what was happening with investing and startups, because remember, I’m coming from a whole different industry and trying to navigate the waters. I saw Mark as this guy that was giving away so much information and it was all very entrepreneurial, friendly stuff. That’s the real reason I felt he was really trustworthy, because he’s already shown that he wants people to have information. That’s how I got interested in investing.\nMark Suster:\xa0Cham is an authentic human being. He knows how to engage audiences, he is a very sincere and humble person. It’s been great. We involve him in investment decisions we’re trying to make, and he’s very curious and thoughtful, always wanting to know why, why does it work that way, but why? Which is my favorite response.', 'I think, coming from\xa0the music industry, initially he was very much like “What’s Mark’s motive? What does he want out of this?” Because when he came to be an EIR, he kept asking and I told him, “I don’t want anything, I just want to see you succeed!”', 'Chamillionaire: I’m guy coming from an industry where you have believed that a wolf is always in sheep’s clothing. So I’m just waiting for the wolf the whole time.', 'Mark Suster:\xa0I just want him\xa0to succeed. He’s\xa0been talking about doing a startup for the past couple of years; it’s time that he\xa0do it. JDFI. Now if he succeeds, we do own equity, and one of our largest shareholders is Wash U, so they own part of his company. ', 'My job is to drive returns, and I take my job very seriously and I wouldn’t just give money to anybody. But truly, authentically, my goal is to see Chamillionaire succeed as a tech entrepreneur and be an inspiration for a thousand people behind him who may choose to be tech entrepreneurs rather than wannabe rappers or sports stars. Not that there is anything wrong with that, but there are other options out there.\xa0'] Text 6 ["You probably know Chamillionaire from the song “Ridin,'”\xa0but did you know the Grammy Award winner is also a successful startup investor? He has had several favorable outcomes, including Cruise, which sold to General Motors and Maker Studios, which was bought by Disney.", 'Now he’s trying his hand at a startup of his own. If you’re a Chamillionaire superfan, you may already be familiar with Convoz. The team did a soft launch of the social media app last summer and now they’re ready to get the word out to the world.', 'Chamillionaire recently unveiled Convoz\xa0with the above presentation to an investor and entrepreneur crowd at the Upfront Summit in Los Angeles. They were\xa0wowed. \xa0(Aspirational entrepreneurs should watch the clip if you’re wondering what nailing a pitch looks like. I’ve never seen a slideshow presentation that flowed so well. 
Bonus: Snoop makes a cameo.)', 'So what is Convoz?', 'Chamillionaire tells me that the video-centric platform aims to be “the place where you go to talk to people.” He wants Convoz to be an app where people converse face-to-face with stars like Shaq or find new friends with common interests.', 'He was inspired to create an alternative to Twitter, which he feels is overwhelmed with trolls. “I just wasn’t happy with the communication channels that are currently existing on social media,” said Cham.', 'Convoz\xa0allows people to upload 15-second clips, often addressed to particular celebrities. They can then watch and choose which ones they want to respond to, sometimes broadcasting a message for all to see.', '', 'My initial reaction was that this seemed like a lot of effort for an in-demand individual, but Chamillionaire didn’t think that this would take much longer than scrolling through other social media.\xa0 He isn’t expecting everyone to get a response, but believes “there’s an opportunity to prioritize the people who really deserve it.”', 'He hopes that users will be less likely to bully or harass others when they show their face and aren’t hiding behind an anonymous digital persona. And unlike Twitter, where everyone can see people’s mentions, Convoz\xa0users are able to approve what’s being said about them publicly. It “gives the curator of the conversation some level of control.”', 'Building a social media platform isn’t easy. Other than the biggest networks like Facebook, Instagram, Snapchat and Twitter, most have flamed out.', 'But Chamillionaire isn’t deterred and has put a lot of thought into his approach. He was an entrepreneur-in-residence at Upfront Ventures where he regularly sat in on startup pitches and learned firsthand about what worked and what didn’t. He also did this so that potential partners would know that he’s committed and is not just another celebrity with a side project with his name attached. Convoz is a clear priority.', 'Above all, the Houston native said that he wants to send the message to others from a similar upbringing that they have more options for a successful life than being a rap star or a basketball player.\xa0“I want to change the narrative.”'] Text 7 ['Last night I co-hosted a dinner at Soho House in Los Angeles with some of the most senior people in the media industry with executives from Disney, Fox, Warner, media agencies and many promising tech…'] Text 8 ['On why you should be an entrepreneur,', '“A lot of people do what they have to do. You want to get yourself to a position where you can do what you want to do.” -Chamillionaire', '', 'Last night I co-hosted a dinner at Soho House in Los Angeles with some of the most senior people in the media industry with executives from Disney, Fox, Warner, media agencies and many promising tech & media startup CEO’s. The topic was “the future of television & the digital living room.”', 'With all of the knowledge in the room the person who stole the night wasn’t even on a panel. I had called on Chamillionaire from the audience and asked him to provide some views on how artists view social media, why they use it and where it’s heading. He was riveting.', 'He stood up, grabbed the mic and gave a heartfelt overview of his experiences in experimenting with new technologies to build relationships with his audience, get feedback on his product quality, and to market his music all the way to the top of iTunes. To stay the crowed was “wowed” was an understatement. 
He received that only round of applause of the evening.', 'While many were floored by his insights, I wasn’t in the slightest. I’ve known Chamillionaire for a couple of years and I’ve never been at a tech event where he HASN’T upstaged everybody with his marketing insights.', 'So it was my great pleasure to host Chamillionaire on This Week in VC this week talking marketing, entrepreneurship, old media and, of course, music. We also talked about getting more young African Americans interested in entrepreneurship & technology. I hope many of you can take the time to watch the interview–I promise he doesn’t disappoint. You can click the image above or this link.', 'Here are some take away’s:', '1. On failure, trial-and-error & confidence: He did a lot of experimenting early in his career. As a teenager he experimented with writing & producing his own rap music and received a lot of feedback from elders that he had a talent with words. ', 'He began producing and selling “mixtapes” of his music. He studied the errors that other people had made and tried to improve on them. He made many of his own mistakes. But he was street smart and hustled. He started selling the mixtapes out of his trunk and even gave away some of his music. He wanted to create awareness for himself to generate marketing buzz and demand and then get the retail stores to pay wholesales prices for his cds. ', '“All the failures that people get so scared of is what I did. It made me confident about what would work. Confidence doesn’t come from being a ‘know-it-all,’ it’s because I’ve done this 10 times already.”', 'What things did he experiment in the early days when there was no Facebook, Twitter or even MySpace to promote oneself? He used online services such as SHOUTcast, which was online radio that allowed him to play his own songs, interrupt a song, do a commercial break and connect with fans. [It sort of reminds me of the new generation of innovation that is happening around user-controlled terrestrial & Internet station Jelli.]', ' 2. Authenticity – I asked Chamillionaire why he thinks he connects so much with people at tech conferences. How does he always wow a usually skeptical crowd? He said that he finds that people here are often speaking in big words or jargon–and that doesn’t connect with a lot of people. Cham studied early in his career how to hold the microphone, how to project his voice, how to watch the audience and pay attention to what interested them. ', 'He said that he noticed a lot of tech entrepreneurs don’t speak into the mic, don’t project their voices with confidence and aren’t necessarily paying attention to the mood or energy of the audience. I had written a blog post on exactly this–how to not suck at group presentations–and what he said reminded me a lot of this post.', '3. Marketing Innovation – Too many entrepreneurs are great product or technology people and lack the knowledge, skills or even desire to figure out how to market their products or themselves cleverly. Some other entrepreneurs who went down the MBA, consulting or banking routes without working at a startup are certainly book smart but haven’t always refined the street-smart skills needed to be an effective entrepreneur. ', 'Chamillionaire has tried so many marketing angles that when new technologies emerge he has a strong sense on how to use them to best marketing himself and his business. In his early career he realized the importance of email lists. 
He would do anything he could to capture people’s email addresses because he knew that they served as a valuable tool for future marketing purposes. ', 'His email list became his power. He would occasionally give away free music in exchange for email addresses. He created his own domain and gave out email address with the [email protected] nomenclature. This was in the 90′s. It created viral buzz because other fans saw the email address and wanted to know how they got it. He was trailblazing. ', 'He would try initiatives like announcing that a new cd was going to drop at new year’s. He had a website and put up a timer / countdown for the new year’s release. People would then call stores and ask if they had his album. He would get a call from the stores asking about a new album coming out. He created demand. Sometimes he didn’t even have the product when he announced it but the hype would get him focused on what he had to produce. ', 'There are many analogies here for software development. I often tell teams that you need to create product deadlines that are semi-public (or maybe board commitments) that help you focus on shipping product. You may have to cut scope but nothing gets you more focused and the creative juices flowing than a deadline staring you in the face.', 'Businesses like TopSpin Media now professionalize campaigns for musicians to capture email addresses, build social-media audiences and sell products directly to consumers (and many other artist-to-fan direct initiatives). Cham learned this on his own because he had to–he didn’t have a label. So when Twitter, Facebook, YouTube, Ustream and other social websites became popular he has ideas for how to use them to authentically build a relationship with his audience.', '4. Customer Feedback – Chamillionaire regularly seeks public feedback from his fan base. In the early days that was from releasing mixtapes. More recently it has been by putting free early releases of songs for free on Twitter. He said that the labels have a standard marketing plan that they say has worked in the past for other musicians. Cham is very skeptical of the one-size-fits-all approach ', 'He said he learned what his fans wanted through the trial and error process. ', '“Not everything works for everybody. I tested so many things to see what works. Labels just had a marketing plan for everybody. but it didn’t work for everybody–it was just a plan …”', 'What is good? There are a million opinions about what is good. I just wanted to know what people wanted to hear from ME.”\n', '5. Raising Capital–The VC equivalent for musicians is getting signed by a major label. I have always told entrepreneurs that to get VC interest you need scarcity value (in addition to a great product). People want what they can’t have and VCs are no different. The most potent entrepreneur is the one that doesn’t NEED your money.', 'So cheeky Chamillionaire went to Universal wearing the tags from every other label he had visited. While this blunt approach wouldn’t work with VCs a more subtle version actually does. What Cham said to Universal in his initial meeting was that he wasn’t wearing all of the other label tags just to rub them in Universal’s face, he wanted to make a statement:', '“I just want you to know that I’m perfectly comfortable leaving here without a deal.”', '6. On JFDI (play on Just Do It) – Chamillionaire talked a lot about social media. We talked initially about ustream. 
The labels said he could do live streaming himself but they didn’t want him to stream any music or videos since ustream wasn’t paying them. Reminds me of how the networks today announced they were blocking their video content from being shown on Google TV. Universal tried to push him to another site that had cut a deal with the label. He was frustrated because he wanted to be where the fans were:', '“I was just trying to give the fans what they wanted and what they wanted was ustream.” ', 'He did it anyways and didn’t ask for permission. By putting up his music free on ustream he ended up driving his song to the number one spot on iTunes (which obviously generates money). ', '“It would be successful and after it was successful nobody would say anything.”', 'This was obviously music to my ears since my personal philosophy that I’ve written about is “it’s better to beg for forgiving than to ask for permission.”', '7. On What Next?', 'First, Chamillionaire is up front about the fact that he is trying to get out of the label contract he has with Universal and he’s holding back from producing music until he does. He said that most artists “chase checks” and he actually wants to do what’s right for his audience. He says that labels impede on your creativity, don’t allow experimentation and flexibility. He’s holding back for now, but he’s clearly studying what’s going on in technology', '“I look at Zynga and all the games they have and how addicting it is and I think “there’s got to be a way to connect. A way to do music this way.”', 'We also spoke a lot about “free” as a metaphor to build future value. He spoke about his Grammy-winning song Ridin’ (as in Ridin’ Dirty) and how the labels wanted to extend life of song by getting somebody famous to remix the song. Cham had other ideas. He got people to do bootlegged mixtapes in new york, france and new zealand. He wanted to be bootlegged even more. The song spread globally. ', 'He was fine with the bootleg–it helped build and audience and helped him globalize. It allowed him to do big shows down the line in places like Norway & Dubai. Anyone who knows the industry knows that artists make way more money by performing and selling merchandise than off of their albums (where the studio prevails). So it was almost like Chamillionaire already knew the Zynga model–give away the game and sell other things. He actually did it before Zynga was huge.', 'I told you this guy was smart.', '“I can do so much more than rap with the rest of my life. there’s so much more in this world. I know that young people who look up to me are watching a show like this and they’re paying attention. I want to start feeding this stuff out so that the younger generation will start getting it and paying attention to this stuff ” [technology, marketing, business].', '“I’m learning so much, I’m so advanced–ahead of so many other people, I don’t know a better way to serve my music [than by mastering technology]. I study it every day.”', '7. On African American Youth?', 'Chamillionaire would like to see more young, urban, african americans aspire to things other than basketball or rap. ', '“They’re trained to think that it’s “the only way out.”', 'It bothers him. He wants people to know that it’s cool to be knowledgeable about business and technology. ', '“Technology is power. It’s so hard to do it in an over-saturated rap market. 
I just want to do the right thing and tell young people straight what they need to do.” ', '“They say the ‘game is to be sold and not to be told.’ Well I just ‘tell it.’ If you’re a young & up and coming rapper and you don’t know what tunecore is–you should know it.” ', '“The future of the world is in the palm of the tech community.” ', 'Reprinted from Both Sides of the Table', 'Mark Suster is a 2x entrepreneur who has gone to the Dark Side of VC. He joined GRP Partners in 2007 as a General Partner after selling his company to Salesforce.com. He focuses on early-stage technology companies. Follow him at twitter.com/msuster.', 'I grew up in Northern California and was fortunate enough to have computers around my house and school from a young age. In fact, in high school in the mid-eighties I sold computer software and taught advanced computers', ' More', 'Innovation in your inbox Sign up for the daily newsletter '] Text 9 ['HOUSTON – Houston rapper and Grammy Award-winning hip-hop artist Chamillionaire paid a visit to Houston Independent School District students Monday to teach them about career opportunities in the tech world.', 'Chamillionaire, born\xa0Hakeem Sariki, exploded onto the Houston rap scene in the early 2000s, but has since spent much of his time as an entrepreneur and tech investor in Los Angeles.', '"I was a musician.\xa0I still am, but I realize the value in appreciating the\xa0tech side of things and that has become my main business,” he said.', 'Chamillionaire visited Worthing High School with fellow panelists Tuma Basa, head of hip-hop for streaming service,\xa0Spotify, Shawn Gee, artist manager and president of Live Nation Urban, and Brittany Lewis, video programming manager at Spotify to discuss entrepreneurship with HISD seniors.', 'He explained that he frequently sees young people spend time on social media applications like Snapchat and Instagram, and encourages them to think beyond “social media fame” and focus on career opportunities to build similar tools.', '"You can learn how to code today. You can build this same thing that you\'re looking at every day, that you\'re tweeting on, that you\'re snappping on, and I feel like that conversation needs to be had," Chamillionaire said.', 'The rapper has already raised more than $1\xa0million for his own video startup company, according to Business Insider. (Read more here)', '"Hopefully we can make (tech) cool so a lot of these kids can understand that they can be the next people to build the next social media products," Chamillionaire said.'] Text 10 ['In the mid \'00s, Chamillionaire exploded into the national consciousness with his No. 1 single "Ridin,\'" a muscular celebration of eluding the police that was simultaneously gleeful and menacing. The Houston MC had been rapping for years -- he sold mixtapes out of his trunk and released a collaborative album with Paul Wall in 2002. But he went from regional force to national star seemingly overnight, "0 to 100 real quick," as Drake might say.', 'Now the rapper appears to have executed a similar move in the world of tech entrepreneurship. The venture capitalist Mark Suster of Upfront Ventures announced earlier this week that Chamillionaire will be "moving to LA for a while and working in our offices and developing his ideas" as an "entrepreneur in residence." Suster wrote online that he first met the rapper "at a tech conference in LA. I saw him on stage at the event talking about how he used social media to engage audiences. 
This was 2009 and his understanding of audience engagement was far beyond anything I was hearing from\xa0most people at that time."'] ###Markdown EXTRACTIVE SUMMARIZATION Approach 1: Text Summarization using nltk ###Code import re import heapq import nltk import string for i in range(0,10): print("Text " + str(i+1) +"\n\n" + df['Content'][i] + "\n\n\n") article_text = re.sub(r'\[[0-9]*\]', ' ', str(df['Content'][i])) article_text = re.sub(r'\s+', ' ', article_text) formatted_article_text = re.sub('[^a-zA-Z]', ' ', article_text) formatted_article_text = re.sub(r'\s+', ' ', formatted_article_text) #Converting Text To Sentences sentence_list = nltk.sent_tokenize(article_text) stopwords = nltk.corpus.stopwords.words('english') #Find Weighted Frequency of Occurrence word_frequencies = {} for word in nltk.word_tokenize(formatted_article_text): if word not in stopwords: if word not in word_frequencies.keys(): word_frequencies[word] = 1 else: word_frequencies[word] += 1 maximum_frequncy = max(word_frequencies.values()) for word in word_frequencies.keys(): word_frequencies[word] = (word_frequencies[word]/maximum_frequncy) #Calculating Sentence Scores sentence_scores = {} for sent in sentence_list: for word in nltk.word_tokenize(sent.lower()): if word in word_frequencies.keys(): if len(sent.split(' ')) < 30: if sent not in sentence_scores.keys(): sentence_scores[sent] = word_frequencies[word] else: sentence_scores[sent] += word_frequencies[word] #Getting the Summary summary_sentences = heapq.nlargest(15, sentence_scores, key=sentence_scores.get) summary = ' '.join(summary_sentences) print("Summary " + str(i+1) +"\n\n" + summary + "\n\n\n") ###Output Text 1 [' Yes, it is rather surprising that a rap artist is very interested in entrepreneurship and technology. '] Summary 1 [' Yes, it is rather surprising that a rap artist is very interested in entrepreneurship and technology. '] Text 2 ['When Grammy award-winning rapper Hakeem “Chamillionaire” Seriki began learning the ropes of venture capitalism in the tech space, he noticed something almost immediately— he wasn’t seeing many people who looked like him. ', 'When he began to fund ventures himself, the heads of startups that were brought to him weren’t diverse, either, and that led him to create pitch competitions specifically geared toward people of color and women. His latest contest features a $100,000 investment from him and fellow rapper E-40. ', ' “The reason why we decided to put the focus on minority and women-funded startups is because this demographic of companies and founders is just underrepresented, they’re under-invested in,” he told The Associated Press in a recent interview. “They’re just not as appreciated as we would like, so we’re trying to do more to create more awareness for these companies and also put our money where our mouth is and invest in one of them.”', '', '', 'Startup companies will submit their pitches on Convoz, a video-based social app started by Chamillionaire that focuses on face-to-face interaction. The applicants will be reviewed by him, E-40, Daymond John of “Shark Tank” and Republic, an SEC-registered investing platform.', 'Chamillionaire, who co-founded the underground Texas group the Color Changin’ Click with Paul Wall, is best known for his hit “Ridin’ Dirty,” but has also made a name for himself in business.', 'He believes diversity is scarce because limited partners and venture capitalists tend to work with people they are familiar with or those they “see themselves in,” Chamillionaire said. 
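###Markdown A small setup note on the nltk cell above: `nltk.sent_tokenize`, `nltk.word_tokenize` and `nltk.corpus.stopwords` all depend on NLTK data packages that are not bundled with the library itself, and if the 'punkt' tokenizer models or the 'stopwords' corpus are missing the summarization loop raises a `LookupError`. A minimal sketch of the one-time download step, assuming the environment running this notebook does not already have these packages installed:
###Code
import nltk

# One-time resource downloads assumed by the summarization cell above:
# 'punkt' provides the sentence/word tokenizer models behind
# nltk.sent_tokenize and nltk.word_tokenize, while 'stopwords' provides
# the English stop-word list used to filter the word-frequency counts.
# If the packages are already present, nltk.download simply reports them
# as up to date and the cell above runs unchanged.
nltk.download('punkt')
nltk.download('stopwords')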
The trickle-down effect is that the tech space is almost completely dominated by white males. However, the rapper believes that bringing in people with different backgrounds and life experiences will benefit the entire sector. ', '“A lot of people are raising money, but a lot of people aren’t minorities. A lot of them aren’t women,” Chamillionaire said. “We’re solving problems, often unique problems, because some of these companies I’m seeing are the regurgitation of problems that already got solved. And then you go into places where people like me grew up and there are people that are seeing the world from a very unique lens and those people aren’t getting the capital to go and create those things.”', 'He advises many celebrities and athletes of color in learning how to properly invest and believes they are now realizing the power their status carries, which results in them eventually founding their own companies, as opposed to collecting a paycheck from one.', '', '', '“I want to do more to create more awareness so that the people in our communities aren’t just thinking that you just got to be a basketball player or a rapper, because that’s what I thought,” Chamillionaire said. “Now I feel like there’s a lot of ‘cool’ happening with the tech. There are people that are becoming millionaires that are 20-something-years-old when Snapchat IPOs or when this company gets acquired. And I feel like we need to start training people in our community to start thinking like this.”', 'Although Chamillionaire says he’s encouraged by conferences such as AfroTech – the nation’s largest technology conference for African American techies and entrepreneurs—he says now is the time for inclusion. ', '“I think there’s a systemic problem that I’m not alone going to be able to fix, but I recognize it’s a real thing. So I’m gonna be very vocal about it,” says the venture capitalist. ', 'The pitch competition ends Friday.', '___', 'Follow Associated Press entertainment journalist Gary Gerard Hamilton at twitter.com/GaryGHamilton.'] Summary 2 ', '“A lot of people are raising money, but a lot of people aren’t minorities. ', 'He believes diversity is scarce because limited partners and venture capitalists tend to work with people they are familiar with or those they “see themselves in,” Chamillionaire said. However, the rapper believes that bringing in people with different backgrounds and life experiences will benefit the entire sector. “We’re solving problems, often unique problems, because some of these companies I’m seeing are the regurgitation of problems that already got solved. “Now I feel like there’s a lot of ‘cool’ happening with the tech. There are people that are becoming millionaires that are 20-something-years-old when Snapchat IPOs or when this company gets acquired. The trickle-down effect is that the tech space is almost completely dominated by white males. ', '“I think there’s a systemic problem that I’m not alone going to be able to fix, but I recognize it’s a real thing. So I’m gonna be very vocal about it,” says the venture capitalist. A lot of them aren’t women,” Chamillionaire said. His latest contest features a $100,000 investment from him and fellow rapper E-40. The applicants will be reviewed by him, E-40, Daymond John of “Shark Tank” and Republic, an SEC-registered investing platform. ', 'The pitch competition ends Friday. 
', '___', 'Follow Associated Press entertainment journalist Gary Gerard Hamilton at twitter.com/GaryGHamilton.'] Text 3 ['In the last several years, a growing number of celebrities have begun investing into tech startups. But few have humbled and immersed themselves into the industry like Chamillionaire, who rose to fame in the early 2000s as a rapper, but can now be spotted at investor parties and Y Combinator "demo days." Next act: A self-described entrepreneur at heart, Chamillionaire recently debuted his mobile video chat app Convoz while he continues to invest in startups. Quick facts: While still working as a musician, he began advising companies like SayNow, which let celebrities directly interact with fans and sold to Google in 2011. He leads a syndicate of investors made up of influencers, celebrities, and athletes. His exits so far include Maker Studios (to Disney for a reported $675 million) and Cruise (to General Motors for close to $1 billion). His startup Convoz now has a total of seven employees, and has raised an undisclosed amount of seed funding from Greycroft Ventures, Upfront Ventures, 500 Startups, Precursor VC, Okapi Ventures, XG Ventures, and a roster of angels including Justin Kan and Snoop Dogg. He\'s not made any investments in cryptocurrencies but says he believes in the potential of blockchain tech. Axios spoke with Chamillionaire a few weeks ago, here are the highlights:On his lifelong interest in tech: When I signed a deal with Universal, I was always thinking digital. I was an entrepreneur who was known as a rapper... I had, like, five phones. People thought that I was a nerd as a rapper. On his introduction to the tech industry as he began advising some entertainment-related startups: It was a whole other world that I didn\'t know existed... But as I started getting closer to a lot of these companies, I realized that a lot of companies were coming to the music industry and cannibalizing their business... I started going to tech conferences on my own, me and my partner. My first tech conference was something at Stanford, Quincy Jones\' son told me about it. After that I went to [TechCrunch] Disrupt, then CES [and so on]. To us, growing up, you got two options: you gonna be a musician or a basketball player... The diversity thing is very important to me. I don\'t think people understand the value of diversity. On ending up at Upfront Ventures in Los Angeles as an entrepreneur-in-residence:I was honestly planning on going to San Fran and getting into investing and building a company out there. Mark Suster asked me why. Well there\'s no tech in L.A., when you get off the plane there\'s paparazzi. I stayed there for almost a year. I wanted to come every day. Hearing [the Upfront partners] break down companies... When a founder would come in and pitch… When they leave we would hear all the VCs break it down... I don\'t know what price tag to put on that. On his biggest surprise so far: The surprising thing about it all, is that everyone was so open to giving feedback, criticism, contacts—not like in the music industry. Mark [Suster] showed me 10 companies before I decided to put money in Maker Studios.On celebrities investing in tech startups: Investing in tech, I think, is smart for any entrepreneur or business savvy person—you gotta diversify. At first it was just information, I just wanted to be along for the journey and learn. Then it turned into some wins... Because I was an angel, I was spending what i could afford to lose. 
At the end of the day I think you’re betting on people and at the end of the day I think I\'m pretty good at people...Eventually people will take a lot of these celebrities and influencers as just a tweet. We\'re more than that—we can connect you to people, we got feedback.The biggest lesson he\'s learned in tech so far: I would say is that I guess I knew this, but it\'s just being in the thick of this, nothing is gonna come overnight. You can get in here and have a false sense of how things work. Being a celebrity is tough—I don\'t even think of myself as one but I guess I am—everyone tells you it’s gonna be great. People tell me what I want to hear or what I need to hear. '] Summary 3 But as I started getting closer to a lot of these companies, I realized that a lot of companies were coming to the music industry and cannibalizing their business... ['In the last several years, a growing number of celebrities have begun investing into tech startups. On his introduction to the tech industry as he began advising some entertainment-related startups: It was a whole other world that I didn\'t know existed... Quick facts: While still working as a musician, he began advising companies like SayNow, which let celebrities directly interact with fans and sold to Google in 2011. On his biggest surprise so far: The surprising thing about it all, is that everyone was so open to giving feedback, criticism, contacts—not like in the music industry. Axios spoke with Chamillionaire a few weeks ago, here are the highlights:On his lifelong interest in tech: When I signed a deal with Universal, I was always thinking digital. Being a celebrity is tough—I don\'t even think of myself as one but I guess I am—everyone tells you it’s gonna be great. He\'s not made any investments in cryptocurrencies but says he believes in the potential of blockchain tech. To us, growing up, you got two options: you gonna be a musician or a basketball player... Next act: A self-described entrepreneur at heart, Chamillionaire recently debuted his mobile video chat app Convoz while he continues to invest in startups. I started going to tech conferences on my own, me and my partner. When a founder would come in and pitch… When they leave we would hear all the VCs break it down... My first tech conference was something at Stanford, Quincy Jones\' son told me about it. People tell me what I want to hear or what I need to hear. '] I don\'t think people understand the value of diversity. Text 4 ['The middle of the 00’s saw many rappers rise to fame then fall off the map. Sounds changed, ringtones became uncool, and new talent emerged. The music industry changes rapidly and many careers die faster than they started. For some people, the end of a music career might be the start of better endeavors.', 'Around 2004 the rap scene saw an increase in Texas-based rappers gain popularity. No one peaked higher than Chamillionaire, hitting the top of the Billboard Hot 100 for two weeks during the summer of 2006 with his song Ridin’. Falling from the top turned out to be a blessing in disguise because he’s become quite the successful investor.', 'Chamillionaire, whose real name is Hakeem Seriki, has taken some risks that paid off. He found ways to give back through his success, and has put his earnings to good use. Seriki’s music success has really pigeonholed him as a one hit wonder, but he’s much more than that. 
Now more than ever, we should be recognizing his accomplishments and they ways he is empowering future entrepreneurs.', 'Ridin’ was a catchy song, but also focused on the topic of racial profiling. This is still a conversation we’re addressing, and Chamillionaire was discussing it fourteen years ago. After this song, he struggled to find chart success again. His follow up album failed to reach the same level of success as The Sound of Revenge (the album containing Ridin’). The album after that never got released, and by 2009 the name Chamillionaire was old news.', 'As a solo artist, Chamillionaire would only ever release two proper albums. He would release a few more EPs and mixtapes, but as his success in music dwindled he would fade from the public eye. By the time he hit 30, he was facing retirement from the rap scene. He could continue with independent endeavors, but landing another major success seemed unlikely.', 'To some degree, this is not uncommon. Many rappers start their own record labels and hope to become the next media mogul like P. Diddy. Some even venture outside of rap with such examples being Beats by Dre or 50 Cent’s partnership with Vitamin Water. Others try to find success in other genres, like Akon who is credited with giving Lady Gaga her first major deal. While all of these ventures are impressive, they’re also enhanced by star power and celebrity endorsements.', 'Chamillionaire’s interest lied in the tech scene, and he first dipped his toes into this space in 2009. Venture capitalist Mark Suster recognized the rapper for his ability to engage an audience while at a tech convention. He wasn’t just generating excitement, he was discussing how he rose to the top of the iTunes charts thanks to his focus on digital media.', 'Audience engagement wasn’t the only reason Chamillionaire appeared on the tech scene; he was there to learn to invest. He made some notable investments early in his career, but not without shopping around. Early in his career, he invested in Maker’s Studio, which would later be sold to Disney for a reported $675 million. While it’s unknown how much Chamillionaire made in this investment, he invested early and made a significant profit upon his exit.', 'By 2015, Chamillionaire joined Upfront Ventures in Santa Monica as the Entrepreneur-In-Residence. In this role he invests early in rising tech companies, hoping to accelerate the growth and identify the next big thing. It’s hard to deny his eye for future successes. His portfolio includes early investments in: home security system Ring (acquired by Amazon), self-driving car technology Cruise (acquired by GM), and ride-sharing app Lyft which went public in 2018.', 'For a one hit wonder, it’s safe to say Chamillionaire is doing better than a lot of other rappers.', 'In 2018, Chamillionaire introduced the world to his own app named Convoz. The goal is to encourage collaborative video conversations around current topics. If Twitter and TikTok had a baby, it might look something like Convoz. One user posts a message, another user can reply, and anyone else will see the back-and-fourth like it’s a realtime conversation.', 'Future plans include helping new tech ideas blossom, particularly for women and people of color. In 2019, Chamillionaire held a content to invest $25,000 in a start up because so few start ups had a female founder and so few people of color had their start ups venture-backed. He later ran a second competitor, this time investing $100,000 into the winner. 
After applying through Convoz, Pierre Laguerre’s company Fleeting was selected as the winner. Laguerre is a Haitian-born college dropout who recognized the shortage of qualified truck drivers in the US. As our shipping needs grew, he wants to bridge the gap with by connecting truck drivers to on demand jobs.', 'Is Fleeting the next big start up? Possibly. It’s hard to deny Chamillionaire’s eye for success. While he still dabbles in rapping occasionally, his biggest success can be found in the investing world. He seems to have built a path for longterm growth rather than being dependent upon the trends associated with the music scene. At this point, he could probably retire if he wanted. I don’t see that happening, however. He wants to serve communities who are underrepresented by investors and build his reputation as a founder. He’s truly found success in a new industry, and his achievements are worth celebrating.', 'Thank you for reading! I love connecting with other content creators, particularly in product design, branding, and finance. Beyond Medium, I can also be found on my website and LinkedIn.'] Summary 4 He would release a few more EPs and mixtapes, but as his success in music dwindled he would fade from the public eye. Seriki’s music success has really pigeonholed him as a one hit wonder, but he’s much more than that. Many rappers start their own record labels and hope to become the next media mogul like P. Diddy. Others try to find success in other genres, like Akon who is credited with giving Lady Gaga her first major deal. For some people, the end of a music career might be the start of better endeavors. While he still dabbles in rapping occasionally, his biggest success can be found in the investing world. In this role he invests early in rising tech companies, hoping to accelerate the growth and identify the next big thing. He’s truly found success in a new industry, and his achievements are worth celebrating. He found ways to give back through his success, and has put his earnings to good use. He could continue with independent endeavors, but landing another major success seemed unlikely. One user posts a message, another user can reply, and anyone else will see the back-and-fourth like it’s a realtime conversation. He seems to have built a path for longterm growth rather than being dependent upon the trends associated with the music scene. His portfolio includes early investments in: home security system Ring (acquired by Amazon), self-driving car technology Cruise (acquired by GM), and ride-sharing app Lyft which went public in 2018. His follow up album failed to reach the same level of success as The Sound of Revenge (the album containing Ridin’). ', 'Future plans include helping new tech ideas blossom, particularly for women and people of color. 
Text 5 ['There are a few ways you can announce a launch, but when\xa0Grammy-winning rapper Chamillionaire shared that\xa0he’d founded his own tech startup via Mark Suster’s Snapchat account from Wash U’s campus, it was certainly noteworthy.', 'According to Business Insider, the startup will provide\xa0“downloadable software applications for streaming communications with entertainers, politicians, and celebrities,” something akin to “Twitter with more live streaming baked in.”', 'In town Thursday to visit\xa0Washington University’s School of Engineering and Applied Sciences,\xa0Suster, a successful entrepreneur himself and the managing partner of Upfront Ventures in Los Angeles, has had a working relationship with\xa0Chamillionaire for awhile now. Chamillionaire serves\xa0as an EIR at Upfront and the two co-invested in Maker Studio, which has since been sold to the Walt Disney Company.', 'After a morning\xa0at Wash\xa0U, Suster\xa0and Chamillionaire gave a talk at\xa0Venture Café St. Louis, where we were able to sit down with both to\xa0talk about tech, funding and social media.', 'Chamillionaire:\xa0It is actually something I’ve been thinking in my head for a long time. I was advising companies and just got frustrated with the process of talking to other people and watching them trying to accomplish a vision that ultimately wasn’t mine. Then it was\xa0like,\xa0“You know what? The only way this is going to be done right is if I do it.”', 'Right now we are in stealth mode, so we aren’t saying much about what we’re doing, but it is public information that I am building a company. A lot of\xa0people haven’t seen me put out music in a long time, so they’re wondering,\xa0“How come you’re not releasing music?” and it’s like, I have other aspirations and other problems I want to solve and I want to build a different type of company. So now I’m hiring developers and bringing in people to help build this tech company with me and it’s just a different experience, and I’m excited for the journey.', 'Mark Suster:\xa0The first thing is, Cham had the concept awhile ago. He spoke earlier at [Wash U] about solving problems that you authentically know and experience first hand, so that was the genesis. He saw\xa0a\xa0particular problem and articulated to me how to build it, with whom, what features it should have. ', 'So I said to Cham, come out and do it. If you come here as an EIR, we’ll help you\xa0recruit engineers, build a product team, have advisors; you can learn how\xa0economic models work; we can help you network. He\xa0took us up on that offer\xa0and\xa0week in, week out, he sat in the\xa0company pitches, responding, giving us feedback, sometimes helping us with due diligence and sometimes\xa0co-investing as an individual.', '[For his own startup]\xa0he actually has a working product. It’s not public, it’s not launched, it’s not available, but it is working—I have a copy on my phone, he has a copy on his phone.\xa0', 'The real issue is, when is it ready for prime time? Now the hard thing about being Chamillionaire is, anyone else can just\xa0create a product, put it out there, test it with a\xa0bunch of people and slowly fine-tune it. 
But when you’re well-known and you put out a product, he has a higher bar because if he just puts it out there then you’ll get all the negative reactions because people want to find everything wrong with startups and he’s refining it in private.', 'Chamillionaire: The music thing is what most of the people who know me know about and it’s what I’ve been asked about the most. And to be fair, I haven’t retired from rap, it’s just that in order to be successful in what I’m trying to do, it takes a certain level of commitment. You can’t play around with it, and it’s tough to toggle between putting out an album, going on tour and trying to please fans and then running back to a startup when ultimately, people don’t look at entertainers as really serious people. ', 'So many people [in entertainment] are coming to the\xa0tech world and they’re like, “Oh there’s this cool tech thing going on and people are making money,” and they treat it as a thing they will just\xa0moonlight with. I’m trying to prove that I’m really serious. That’s why I can look Mark or\xa0any other investor in the face and say,\xa0“Hey, I’m different than these guys, I’m all in.” So, I plan to put out music, but I just want to be able to do it on my terms, and that’s kind of what I’m doing now.', 'Chamillionaire: A lot of time when they’re coming to me, it’s because they are trying to find a way to get\xa0people using something. They have great technology and they’re trying to figure out how to get people to understand that this thing is out there. So they often ask me about ways that they can promote it or market it. When they ask me to invest, normally it’s not just about a check, it’s about, “How can we get Chamillionaire involved and use his strategy and the things he’s done in the past to help things grow with us?” Startups for the most part are just trying to get people to know they exist.', 'Mark Suster: So let me push beyond humble Chamillionaire: I introduced him years ago to our startups and they all approached him for the same reason: “Hey, maybe I can get him to promote us.” And that’s the first thought for anyone who has reach with an audience–it’s promotion. And then Cham would come back and say,\xa0“So I looked at how your App\xa0integrates with other people’s apps or how hard user registration is,” and he kept coming back with advice on products. ', 'And I think that’s the thing that people don’t realize about him.\xa0He has a background in visual design and formerly used to be an artist, so he really\xa0thinks about usability. He’s just driven by products and product usage. He knows how to engage audiences and that’s part of what he does.', 'Now the big challenge will be, can he recruit, motivate and retain a really talented\xa0team beneath him to develop a\xa0product that’s world class,\xa0and time will tell.', 'Mark Suster: If I could use one word to\xa0describe the students I interacted with it’d be serious. It wasn’t a\xa0frivolous day. People were engaged. ', 'A lot of people were already working on startups and thinking of building things. I know it’s a very quality engineering department and that’s something that matters to us. We really try to work with world-class engineers, and people were taking things pretty seriously.', '', 'Mark Suster:\xa0So I have this great audience, they are really engaged, there’s no bullsh*t, we’re doing this live. I think people like to feel like they’re connected.\xa0', 'With\xa0blogging, it takes me 45 minutes to write a post. 
I think it takes most people a little longer, but I don’t worry about spelling or editing, I don’t have advertisers, so I don’t have to be perfect. I just write for 45 minutes and then hit publish. But it does take time.', 'On Snapchat yesterday, I was about to come down to St. Louis from Chicago and I had five minutes. So I got a napkin and a pen out and I drew a chart about the innovator’s dilemma and how\xa0that drives a lot of my investment thesis, and it took five minutes. ', 'Now, everyone in the hotel lobby thought I was strange, but it was great and it got 7,000 views in less than 24 hours. I like the immediacy, I like the intimacy. And here’s the thing, my target customer for the most part is 22-34 and they’re all on Snapchat. And no VC knows how to use Snapchat. So I’m like, “Hey man, I got this swim lane to myself, why not?” ', 'Being early to a platform matters. I have 250,000 followers on Twitter, if I started today, I couldn’t get that, I don’t have any famous rap songs—yet. [Looks to Chamillilonaire and laughs.]\xa0', 'Chamillionaire:\xa0He’s the VC of the millennials.', 'Mark Suster: That’s a great question I’m asked all the time.\xa0Number one is, if you want to stay in St. Louis and build a company here, you have a great advantage, which is great engineering talent that will be cheaper because cost of living is cheaper and a much higher retention rate because if you’re building an interesting company. It’s not\xa0like there are 2,000 others at your footstep, like in the Bay Area. ', 'The problem is that investors have a harder time\xa0committing early-stage. The reason is not that I mind coming to St. Louis, but do I want to come eight times a year? Again, nothing against St. Louis, but I go 8-10 times to New York, I go 14 times a year to the Bay Area. So if I add another\xa0location it would just kill me unless it was a company at the next level.', 'I just invested in a company in Toronto, I won’t give a name yet, but we’ll be\xa0announcing in 30 days. And what I said to them was,\xa0“If you’ll do board meetings in New York and LA, I’ll come once a year to Toronto.” So it’s taking that issue off the table; a lot of people don’t know to do that.', 'Anyone a startup pitches in New York, San Francisco, LA, their first thought is,\xa0“Do I want to go to St. Louis eight times a year?” And they’re also thinking, “Can I really provide you enough advice and have enough interactions to make a difference?” So if you say,\xa0“I’m on the coasts all the time anyway, every time I come I’ll come see you and you’ll have plenty of access,” then now you’ve taken that issue off the table and I can focus on your business.', 'Chamillionaire:\xa0I like to call Mark my Mr. Miyagi. When I was trying to get into the tech industry, I would look at\xa0tech blogs and I would go to tech conferences and just try to find out what was happening with investing and startups, because remember, I’m coming from a whole different industry and trying to navigate the waters. I saw Mark as this guy that was giving away so much information and it was all very entrepreneurial, friendly stuff. That’s the real reason I felt he was really trustworthy, because he’s already shown that he wants people to have information. That’s how I got interested in investing.\nMark Suster:\xa0Cham is an authentic human being. He knows how to engage audiences, he is a very sincere and humble person. It’s been great. 
We involve him in investment decisions we’re trying to make, and he’s very curious and thoughtful, always wanting to know why, why does it work that way, but why? Which is my favorite response.', 'I think, coming from\xa0the music industry, initially he was very much like “What’s Mark’s motive? What does he want out of this?” Because when he came to be an EIR, he kept asking and I told him, “I don’t want anything, I just want to see you succeed!”', 'Chamillionaire: I’m guy coming from an industry where you have believed that a wolf is always in sheep’s clothing. So I’m just waiting for the wolf the whole time.', 'Mark Suster:\xa0I just want him\xa0to succeed. He’s\xa0been talking about doing a startup for the past couple of years; it’s time that he\xa0do it. JDFI. Now if he succeeds, we do own equity, and one of our largest shareholders is Wash U, so they own part of his company. ', 'My job is to drive returns, and I take my job very seriously and I wouldn’t just give money to anybody. But truly, authentically, my goal is to see Chamillionaire succeed as a tech entrepreneur and be an inspiration for a thousand people behind him who may choose to be tech entrepreneurs rather than wannabe rappers or sports stars. Not that there is anything wrong with that, but there are other options out there.\xa0'] Summary 5 So now I’m hiring developers and bringing in people to help build this tech company with me and it’s just a different experience, and I’m excited for the journey. I think people like to feel like they’re connected.\xa0', 'With\xa0blogging, it takes me 45 minutes to write a post. They have great technology and they’re trying to figure out how to get people to understand that this thing is out there. ', 'Chamillionaire: The music thing is what most of the people who know me know about and it’s what I’ve been asked about the most. I was advising companies and just got frustrated with the process of talking to other people and watching them trying to accomplish a vision that ultimately wasn’t mine. Now the hard thing about being Chamillionaire is, anyone else can just\xa0create a product, put it out there, test it with a\xa0bunch of people and slowly fine-tune it. ', 'A lot of people were already working on startups and thinking of building things. We really try to work with world-class engineers, and people were taking things pretty seriously. ', 'Chamillionaire: A lot of time when they’re coming to me, it’s because they are trying to find a way to get\xa0people using something. That’s the real reason I felt he was really trustworthy, because he’s already shown that he wants people to have information. The reason is not that I mind coming to St. Louis, but do I want to come eight times a year? If you come here as an EIR, we’ll help you\xa0recruit engineers, build a product team, have advisors; you can learn how\xa0economic models work; we can help you network. ', 'I think, coming from\xa0the music industry, initially he was very much like “What’s Mark’s motive? I think it takes most people a little longer, but I don’t worry about spelling or editing, I don’t have advertisers, so I don’t have to be perfect. We involve him in investment decisions we’re trying to make, and he’s very curious and thoughtful, always wanting to know why, why does it work that way, but why? Text 6 ["You probably know Chamillionaire from the song “Ridin,'”\xa0but did you know the Grammy Award winner is also a successful startup investor? 
He has had several favorable outcomes, including Cruise, which sold to General Motors and Maker Studios, which was bought by Disney.", 'Now he’s trying his hand at a startup of his own. If you’re a Chamillionaire superfan, you may already be familiar with Convoz. The team did a soft launch of the social media app last summer and now they’re ready to get the word out to the world.', 'Chamillionaire recently unveiled Convoz\xa0with the above presentation to an investor and entrepreneur crowd at the Upfront Summit in Los Angeles. They were\xa0wowed. \xa0(Aspirational entrepreneurs should watch the clip if you’re wondering what nailing a pitch looks like. I’ve never seen a slideshow presentation that flowed so well. Bonus: Snoop makes a cameo.)', 'So what is Convoz?', 'Chamillionaire tells me that the video-centric platform aims to be “the place where you go to talk to people.” He wants Convoz to be an app where people converse face-to-face with stars like Shaq or find new friends with common interests.', 'He was inspired to create an alternative to Twitter, which he feels is overwhelmed with trolls. “I just wasn’t happy with the communication channels that are currently existing on social media,” said Cham.', 'Convoz\xa0allows people to upload 15-second clips, often addressed to particular celebrities. They can then watch and choose which ones they want to respond to, sometimes broadcasting a message for all to see.', '', 'My initial reaction was that this seemed like a lot of effort for an in-demand individual, but Chamillionaire didn’t think that this would take much longer than scrolling through other social media.\xa0 He isn’t expecting everyone to get a response, but believes “there’s an opportunity to prioritize the people who really deserve it.”', 'He hopes that users will be less likely to bully or harass others when they show their face and aren’t hiding behind an anonymous digital persona. And unlike Twitter, where everyone can see people’s mentions, Convoz\xa0users are able to approve what’s being said about them publicly. It “gives the curator of the conversation some level of control.”', 'Building a social media platform isn’t easy. Other than the biggest networks like Facebook, Instagram, Snapchat and Twitter, most have flamed out.', 'But Chamillionaire isn’t deterred and has put a lot of thought into his approach. He was an entrepreneur-in-residence at Upfront Ventures where he regularly sat in on startup pitches and learned firsthand about what worked and what didn’t. He also did this so that potential partners would know that he’s committed and is not just another celebrity with a side project with his name attached. Convoz is a clear priority.', 'Above all, the Houston native said that he wants to send the message to others from a similar upbringing that they have more options for a successful life than being a rap star or a basketball player.\xa0“I want to change the narrative.”'] Summary 6 The team did a soft launch of the social media app last summer and now they’re ready to get the word out to the world. ["You probably know Chamillionaire from the song “Ridin,'”\xa0but did you know the Grammy Award winner is also a successful startup investor? And unlike Twitter, where everyone can see people’s mentions, Convoz\xa0users are able to approve what’s being said about them publicly. “I just wasn’t happy with the communication channels that are currently existing on social media,” said Cham. 
He also did this so that potential partners would know that he’s committed and is not just another celebrity with a side project with his name attached. It “gives the curator of the conversation some level of control.”', 'Building a social media platform isn’t easy. They can then watch and choose which ones they want to respond to, sometimes broadcasting a message for all to see. \xa0(Aspirational entrepreneurs should watch the clip if you’re wondering what nailing a pitch looks like. ', 'Convoz\xa0allows people to upload 15-second clips, often addressed to particular celebrities. ', 'Chamillionaire recently unveiled Convoz\xa0with the above presentation to an investor and entrepreneur crowd at the Upfront Summit in Los Angeles. He was an entrepreneur-in-residence at Upfront Ventures where he regularly sat in on startup pitches and learned firsthand about what worked and what didn’t. I’ve never seen a slideshow presentation that flowed so well. Other than the biggest networks like Facebook, Instagram, Snapchat and Twitter, most have flamed out. He has had several favorable outcomes, including Cruise, which sold to General Motors and Maker Studios, which was bought by Disney. ', 'He was inspired to create an alternative to Twitter, which he feels is overwhelmed with trolls. Text 7 ['Last night I co-hosted a dinner at Soho House in Los Angeles with some of the most senior people in the media industry with executives from Disney, Fox, Warner, media agencies and many promising tech…'] Summary 7 Text 8 ['On why you should be an entrepreneur,', '“A lot of people do what they have to do. You want to get yourself to a position where you can do what you want to do.” -Chamillionaire', '', 'Last night I co-hosted a dinner at Soho House in Los Angeles with some of the most senior people in the media industry with executives from Disney, Fox, Warner, media agencies and many promising tech & media startup CEO’s. The topic was “the future of television & the digital living room.”', 'With all of the knowledge in the room the person who stole the night wasn’t even on a panel. I had called on Chamillionaire from the audience and asked him to provide some views on how artists view social media, why they use it and where it’s heading. He was riveting.', 'He stood up, grabbed the mic and gave a heartfelt overview of his experiences in experimenting with new technologies to build relationships with his audience, get feedback on his product quality, and to market his music all the way to the top of iTunes. To stay the crowed was “wowed” was an understatement. He received that only round of applause of the evening.', 'While many were floored by his insights, I wasn’t in the slightest. I’ve known Chamillionaire for a couple of years and I’ve never been at a tech event where he HASN’T upstaged everybody with his marketing insights.', 'So it was my great pleasure to host Chamillionaire on This Week in VC this week talking marketing, entrepreneurship, old media and, of course, music. We also talked about getting more young African Americans interested in entrepreneurship & technology. I hope many of you can take the time to watch the interview–I promise he doesn’t disappoint. You can click the image above or this link.', 'Here are some take away’s:', '1. On failure, trial-and-error & confidence: He did a lot of experimenting early in his career. As a teenager he experimented with writing & producing his own rap music and received a lot of feedback from elders that he had a talent with words. 
', 'He began producing and selling “mixtapes” of his music. He studied the errors that other people had made and tried to improve on them. He made many of his own mistakes. But he was street smart and hustled. He started selling the mixtapes out of his trunk and even gave away some of his music. He wanted to create awareness for himself to generate marketing buzz and demand and then get the retail stores to pay wholesales prices for his cds. ', '“All the failures that people get so scared of is what I did. It made me confident about what would work. Confidence doesn’t come from being a ‘know-it-all,’ it’s because I’ve done this 10 times already.”', 'What things did he experiment in the early days when there was no Facebook, Twitter or even MySpace to promote oneself? He used online services such as SHOUTcast, which was online radio that allowed him to play his own songs, interrupt a song, do a commercial break and connect with fans. [It sort of reminds me of the new generation of innovation that is happening around user-controlled terrestrial & Internet station Jelli.]', ' 2. Authenticity – I asked Chamillionaire why he thinks he connects so much with people at tech conferences. How does he always wow a usually skeptical crowd? He said that he finds that people here are often speaking in big words or jargon–and that doesn’t connect with a lot of people. Cham studied early in his career how to hold the microphone, how to project his voice, how to watch the audience and pay attention to what interested them. ', 'He said that he noticed a lot of tech entrepreneurs don’t speak into the mic, don’t project their voices with confidence and aren’t necessarily paying attention to the mood or energy of the audience. I had written a blog post on exactly this–how to not suck at group presentations–and what he said reminded me a lot of this post.', '3. Marketing Innovation – Too many entrepreneurs are great product or technology people and lack the knowledge, skills or even desire to figure out how to market their products or themselves cleverly. Some other entrepreneurs who went down the MBA, consulting or banking routes without working at a startup are certainly book smart but haven’t always refined the street-smart skills needed to be an effective entrepreneur. ', 'Chamillionaire has tried so many marketing angles that when new technologies emerge he has a strong sense on how to use them to best marketing himself and his business. In his early career he realized the importance of email lists. He would do anything he could to capture people’s email addresses because he knew that they served as a valuable tool for future marketing purposes. ', 'His email list became his power. He would occasionally give away free music in exchange for email addresses. He created his own domain and gave out email address with the [email protected] nomenclature. This was in the 90′s. It created viral buzz because other fans saw the email address and wanted to know how they got it. He was trailblazing. ', 'He would try initiatives like announcing that a new cd was going to drop at new year’s. He had a website and put up a timer / countdown for the new year’s release. People would then call stores and ask if they had his album. He would get a call from the stores asking about a new album coming out. He created demand. Sometimes he didn’t even have the product when he announced it but the hype would get him focused on what he had to produce. ', 'There are many analogies here for software development. 
I often tell teams that you need to create product deadlines that are semi-public (or maybe board commitments) that help you focus on shipping product. You may have to cut scope but nothing gets you more focused and the creative juices flowing than a deadline staring you in the face.', 'Businesses like TopSpin Media now professionalize campaigns for musicians to capture email addresses, build social-media audiences and sell products directly to consumers (and many other artist-to-fan direct initiatives). Cham learned this on his own because he had to–he didn’t have a label. So when Twitter, Facebook, YouTube, Ustream and other social websites became popular he has ideas for how to use them to authentically build a relationship with his audience.', '4. Customer Feedback – Chamillionaire regularly seeks public feedback from his fan base. In the early days that was from releasing mixtapes. More recently it has been by putting free early releases of songs for free on Twitter. He said that the labels have a standard marketing plan that they say has worked in the past for other musicians. Cham is very skeptical of the one-size-fits-all approach ', 'He said he learned what his fans wanted through the trial and error process. ', '“Not everything works for everybody. I tested so many things to see what works. Labels just had a marketing plan for everybody. but it didn’t work for everybody–it was just a plan …”', 'What is good? There are a million opinions about what is good. I just wanted to know what people wanted to hear from ME.”\n', '5. Raising Capital–The VC equivalent for musicians is getting signed by a major label. I have always told entrepreneurs that to get VC interest you need scarcity value (in addition to a great product). People want what they can’t have and VCs are no different. The most potent entrepreneur is the one that doesn’t NEED your money.', 'So cheeky Chamillionaire went to Universal wearing the tags from every other label he had visited. While this blunt approach wouldn’t work with VCs a more subtle version actually does. What Cham said to Universal in his initial meeting was that he wasn’t wearing all of the other label tags just to rub them in Universal’s face, he wanted to make a statement:', '“I just want you to know that I’m perfectly comfortable leaving here without a deal.”', '6. On JFDI (play on Just Do It) – Chamillionaire talked a lot about social media. We talked initially about ustream. The labels said he could do live streaming himself but they didn’t want him to stream any music or videos since ustream wasn’t paying them. Reminds me of how the networks today announced they were blocking their video content from being shown on Google TV. Universal tried to push him to another site that had cut a deal with the label. He was frustrated because he wanted to be where the fans were:', '“I was just trying to give the fans what they wanted and what they wanted was ustream.” ', 'He did it anyways and didn’t ask for permission. By putting up his music free on ustream he ended up driving his song to the number one spot on iTunes (which obviously generates money). ', '“It would be successful and after it was successful nobody would say anything.”', 'This was obviously music to my ears since my personal philosophy that I’ve written about is “it’s better to beg for forgiving than to ask for permission.”', '7. 
On What Next?', 'First, Chamillionaire is up front about the fact that he is trying to get out of the label contract he has with Universal and he’s holding back from producing music until he does. He said that most artists “chase checks” and he actually wants to do what’s right for his audience. He says that labels impede on your creativity, don’t allow experimentation and flexibility. He’s holding back for now, but he’s clearly studying what’s going on in technology', '“I look at Zynga and all the games they have and how addicting it is and I think “there’s got to be a way to connect. A way to do music this way.”', 'We also spoke a lot about “free” as a metaphor to build future value. He spoke about his Grammy-winning song Ridin’ (as in Ridin’ Dirty) and how the labels wanted to extend life of song by getting somebody famous to remix the song. Cham had other ideas. He got people to do bootlegged mixtapes in new york, france and new zealand. He wanted to be bootlegged even more. The song spread globally. ', 'He was fine with the bootleg–it helped build and audience and helped him globalize. It allowed him to do big shows down the line in places like Norway & Dubai. Anyone who knows the industry knows that artists make way more money by performing and selling merchandise than off of their albums (where the studio prevails). So it was almost like Chamillionaire already knew the Zynga model–give away the game and sell other things. He actually did it before Zynga was huge.', 'I told you this guy was smart.', '“I can do so much more than rap with the rest of my life. there’s so much more in this world. I know that young people who look up to me are watching a show like this and they’re paying attention. I want to start feeding this stuff out so that the younger generation will start getting it and paying attention to this stuff ” [technology, marketing, business].', '“I’m learning so much, I’m so advanced–ahead of so many other people, I don’t know a better way to serve my music [than by mastering technology]. I study it every day.”', '7. On African American Youth?', 'Chamillionaire would like to see more young, urban, african americans aspire to things other than basketball or rap. ', '“They’re trained to think that it’s “the only way out.”', 'It bothers him. He wants people to know that it’s cool to be knowledgeable about business and technology. ', '“Technology is power. It’s so hard to do it in an over-saturated rap market. I just want to do the right thing and tell young people straight what they need to do.” ', '“They say the ‘game is to be sold and not to be told.’ Well I just ‘tell it.’ If you’re a young & up and coming rapper and you don’t know what tunecore is–you should know it.” ', '“The future of the world is in the palm of the tech community.” ', 'Reprinted from Both Sides of the Table', 'Mark Suster is a 2x entrepreneur who has gone to the Dark Side of VC. He joined GRP Partners in 2007 as a General Partner after selling his company to Salesforce.com. He focuses on early-stage technology companies. Follow him at twitter.com/msuster.', 'I grew up in Northern California and was fortunate enough to have computers around my house and school from a young age. 
In fact, in high school in the mid-eighties I sold computer software and taught advanced computers', ' More', ' Innovation in your inbox Sign up for the daily newsletter ']
###Markdown
Approach 2 : Text Summarization using Sumy
###Code
!pip install sumy
import sumy
###Output
_____no_output_____
###Markdown
Algorithms for summarization using sumy:
- LexRank
- Luhn
- Latent Semantic Analysis (LSA)
- KL-Sum

**LexRank** A sentence that is similar to many other sentences of the text has a high probability of being important. LexRank treats such a sentence as being "recommended" by the sentences similar to it and ranks it accordingly: the higher the rank, the higher the priority of being included in the summary.
###Code
# Importing the parser and tokenizer
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer

# Import the LexRank summarizer
from sumy.summarizers.lex_rank import LexRankSummarizer

for i in range(0, 10):
    #print("Text " + str(i+1) +"\n\n" + df['Content'][i] + "\n\n\n")
    # Strip citation markers like [1] and collapse whitespace before parsing
    article_text = re.sub(r'\[[0-9]*\]', ' ', str(df['Content'][i]))
    article_text = re.sub(r'\s+', ' ', article_text)
    my_parser = PlaintextParser.from_string(article_text, Tokenizer('english'))

    # Creating a summary of 10 sentences per article
    lex_rank_summarizer = LexRankSummarizer()
    lexrank_summary = lex_rank_summarizer(my_parser.document, sentences_count=10)

    print("Summary : " + str(i+1))
    print("-------------------------------------------------------")
    # Printing the summary
    for sentence in lexrank_summary:
        print(sentence)
    print("\n\n")
###Output
Summary : 1 ------------------------------------------------------- [' Yes, it is rather surprising that a rap artist is very interested in entrepreneurship and technology. '] Summary : 2 ------------------------------------------------------- ['When Grammy award-winning rapper Hakeem “Chamillionaire” Seriki began learning the ropes of venture capitalism in the tech space, he noticed something almost immediately— he wasn’t seeing many people who looked like him. ', 'When he began to fund ventures himself, the heads of startups that were brought to him weren’t diverse, either, and that led him to create pitch competitions specifically geared toward people of color and women. ', ' “The reason why we decided to put the focus on minority and women-funded startups is because this demographic of companies and founders is just underrepresented, they’re under-invested in,” he told The Associated Press in a recent interview. “They’re just not as appreciated as we would like, so we’re trying to do more to create more awareness for these companies and also put our money where our mouth is and invest in one of them.”', '', '', 'Startup companies will submit their pitches on Convoz, a video-based social app started by Chamillionaire that focuses on face-to-face interaction. The applicants will be reviewed by him, E-40, Daymond John of “Shark Tank” and Republic, an SEC-registered investing platform. ', 'He believes diversity is scarce because limited partners and venture capitalists tend to work with people they are familiar with or those they “see themselves in,” Chamillionaire said. ', '“A lot of people are raising money, but a lot of people aren’t minorities.
And then you go into places where people like me grew up and there are people that are seeing the world from a very unique lens and those people aren’t getting the capital to go and create those things.”', 'He advises many celebrities and athletes of color in learning how to properly invest and believes they are now realizing the power their status carries, which results in them eventually founding their own companies, as opposed to collecting a paycheck from one. ', '', '', '“I want to do more to create more awareness so that the people in our communities aren’t just thinking that you just got to be a basketball player or a rapper, because that’s what I thought,” Chamillionaire said. ', '“I think there’s a systemic problem that I’m not alone going to be able to fix, but I recognize it’s a real thing. Summary : 3 ------------------------------------------------------- I was an entrepreneur who was known as a rapper... On his introduction to the tech industry as he began advising some entertainment-related startups: It was a whole other world that I didn\'t know existed... But as I started getting closer to a lot of these companies, I realized that a lot of companies were coming to the music industry and cannibalizing their business... I don\'t think people understand the value of diversity. On ending up at Upfront Ventures in Los Angeles as an entrepreneur-in-residence:I was honestly planning on going to San Fran and getting into investing and building a company out there. When a founder would come in and pitch… When they leave we would hear all the VCs break it down... On his biggest surprise so far: The surprising thing about it all, is that everyone was so open to giving feedback, criticism, contacts—not like in the music industry. Mark [Suster] showed me 10 companies before I decided to put money in Maker Studios.On celebrities investing in tech startups: Investing in tech, I think, is smart for any entrepreneur or business savvy person—you gotta diversify. At the end of the day I think you’re betting on people and at the end of the day I think I\'m pretty good at people...Eventually people will take a lot of these celebrities and influencers as just a tweet. We\'re more than that—we can connect you to people, we got feedback.The biggest lesson he\'s learned in tech so far: I would say is that I guess I knew this, but it\'s just being in the thick of this, nothing is gonna come overnight. Summary : 4 ------------------------------------------------------- For some people, the end of a music career might be the start of better endeavors. By the time he hit 30, he was facing retirement from the rap scene. ', 'Audience engagement wasn’t the only reason Chamillionaire appeared on the tech scene; he was there to learn to invest. Early in his career, he invested in Maker’s Studio, which would later be sold to Disney for a reported $675 million. ', 'For a one hit wonder, it’s safe to say Chamillionaire is doing better than a lot of other rappers. In 2019, Chamillionaire held a content to invest $25,000 in a start up because so few start ups had a female founder and so few people of color had their start ups venture-backed. ', 'Is Fleeting the next big start up? It’s hard to deny Chamillionaire’s eye for success. While he still dabbles in rapping occasionally, his biggest success can be found in the investing world. He’s truly found success in a new industry, and his achievements are worth celebrating. 
Summary : 5 ------------------------------------------------------- The only way this is going to be done right is if I do it.”', 'Right now we are in stealth mode, so we aren’t saying much about what we’re doing, but it is public information that I am building a company. A lot of\xa0people haven’t seen me put out music in a long time, so they’re wondering,\xa0“How come you’re not releasing music?” and it’s like, I have other aspirations and other problems I want to solve and I want to build a different type of company. ', 'Chamillionaire: The music thing is what most of the people who know me know about and it’s what I’ve been asked about the most. That’s why I can look Mark or\xa0any other investor in the face and say,\xa0“Hey, I’m different than these guys, I’m all in.” So, I plan to put out music, but I just want to be able to do it on my terms, and that’s kind of what I’m doing now. ', 'Chamillionaire: A lot of time when they’re coming to me, it’s because they are trying to find a way to get\xa0people using something. When they ask me to invest, normally it’s not just about a check, it’s about, “How can we get Chamillionaire involved and use his strategy and the things he’s done in the past to help things grow with us?” Startups for the most part are just trying to get people to know they exist. I think people like to feel like they’re connected.\xa0', 'With\xa0blogging, it takes me 45 minutes to write a post. And what I said to them was,\xa0“If you’ll do board meetings in New York and LA, I’ll come once a year to Toronto.” So it’s taking that issue off the table; a lot of people don’t know to do that. ', 'Anyone a startup pitches in New York, San Francisco, LA, their first thought is,\xa0“Do I want to go to St. Louis eight times a year?” And they’re also thinking, “Can I really provide you enough advice and have enough interactions to make a difference?” So if you say,\xa0“I’m on the coasts all the time anyway, every time I come I’ll come see you and you’ll have plenty of access,” then now you’ve taken that issue off the table and I can focus on your business. What does he want out of this?” Because when he came to be an EIR, he kept asking and I told him, “I don’t want anything, I just want to see you succeed!”', 'Chamillionaire: I’m guy coming from an industry where you have believed that a wolf is always in sheep’s clothing. Summary : 6 ------------------------------------------------------- If you’re a Chamillionaire superfan, you may already be familiar with Convoz. )', 'So what is Convoz? ', 'Chamillionaire tells me that the video-centric platform aims to be “the place where you go to talk to people.” He wants Convoz to be an app where people converse face-to-face with stars like Shaq or find new friends with common interests. ', 'He was inspired to create an alternative to Twitter, which he feels is overwhelmed with trolls. They can then watch and choose which ones they want to respond to, sometimes broadcasting a message for all to see. ', '', 'My initial reaction was that this seemed like a lot of effort for an in-demand individual, but Chamillionaire didn’t think that this would take much longer than scrolling through other social media.\xa0 He isn’t expecting everyone to get a response, but believes “there’s an opportunity to prioritize the people who really deserve it.”', 'He hopes that users will be less likely to bully or harass others when they show their face and aren’t hiding behind an anonymous digital persona. 
It “gives the curator of the conversation some level of control.”', 'Building a social media platform isn’t easy. He was an entrepreneur-in-residence at Upfront Ventures where he regularly sat in on startup pitches and learned firsthand about what worked and what didn’t. He also did this so that potential partners would know that he’s committed and is not just another celebrity with a side project with his name attached. ', 'Above all, the Houston native said that he wants to send the message to others from a similar upbringing that they have more options for a successful life than being a rap star or a basketball player.\xa0“I want to change the narrative.”'] Summary : 7 ------------------------------------------------------- ['Last night I co-hosted a dinner at Soho House in Los Angeles with some of the most senior people in the media industry with executives from Disney, Fox, Warner, media agencies and many promising tech…']
###Markdown
**LSA (Latent semantic analysis)** Latent Semantic Analysis is an unsupervised learning algorithm that can be used for extractive text summarization. It extracts semantically significant sentences by applying singular value decomposition (SVD) to the term-document frequency matrix.
###Code
# Import the summarizer
from sumy.summarizers.lsa import LsaSummarizer

# Parsing the text string using PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.parsers.plaintext import PlaintextParser

for i in range(0, 10):
    #print("Text " + str(i+1) +"\n\n" + df['Content'][i] + "\n\n\n")
    # Strip citation markers and collapse whitespace before parsing
    article_text = re.sub(r'\[[0-9]*\]', ' ', str(df['Content'][i]))
    article_text = re.sub(r'\s+', ' ', article_text)
    parser = PlaintextParser.from_string(article_text, Tokenizer('english'))

    # Creating the summarizer (8-sentence summary per article)
    lsa_summarizer = LsaSummarizer()
    lsa_summary = lsa_summarizer(parser.document, 8)

    print("Summary : " + str(i+1))
    print("-------------------------------------------------------")
    # Printing the summary
    for sentence in lsa_summary:
        print(sentence)
    print("\n\n")
###Output
Summary : 1 ------------------------------------------------------- [' Yes, it is rather surprising that a rap artist is very interested in entrepreneurship and technology. '] Summary : 2 ------------------------------------------------------- ['When Grammy award-winning rapper Hakeem “Chamillionaire” Seriki began learning the ropes of venture capitalism in the tech space, he noticed something almost immediately— he wasn’t seeing many people who looked like him. “They’re just not as appreciated as we would like, so we’re trying to do more to create more awareness for these companies and also put our money where our mouth is and invest in one of them.”', '', '', 'Startup companies will submit their pitches on Convoz, a video-based social app started by Chamillionaire that focuses on face-to-face interaction. The applicants will be reviewed by him, E-40, Daymond John of “Shark Tank” and Republic, an SEC-registered investing platform. ', 'Chamillionaire, who co-founded the underground Texas group the Color Changin’ Click with Paul Wall, is best known for his hit “Ridin’ Dirty,” but has also made a name for himself in business. ', 'He believes diversity is scarce because limited partners and venture capitalists tend to work with people they are familiar with or those they “see themselves in,” Chamillionaire said. “Now I feel like there’s a lot of ‘cool’ happening with the tech.
And I feel like we need to start training people in our community to start thinking like this.”', 'Although Chamillionaire says he’s encouraged by conferences such as AfroTech – the nation’s largest technology conference for African American techies and entrepreneurs—he says now is the time for inclusion. So I’m gonna be very vocal about it,” says the venture capitalist. Summary : 3 ------------------------------------------------------- ['In the last several years, a growing number of celebrities have begun investing into tech startups. But few have humbled and immersed themselves into the industry like Chamillionaire, who rose to fame in the early 2000s as a rapper, but can now be spotted at investor parties and Y Combinator "demo days." Next act: A self-described entrepreneur at heart, Chamillionaire recently debuted his mobile video chat app Convoz while he continues to invest in startups. Quick facts: While still working as a musician, he began advising companies like SayNow, which let celebrities directly interact with fans and sold to Google in 2011. On his introduction to the tech industry as he began advising some entertainment-related startups: It was a whole other world that I didn\'t know existed... But as I started getting closer to a lot of these companies, I realized that a lot of companies were coming to the music industry and cannibalizing their business... My first tech conference was something at Stanford, Quincy Jones\' son told me about it. You can get in here and have a false sense of how things work. Summary : 4 ------------------------------------------------------- Now more than ever, we should be recognizing his accomplishments and they ways he is empowering future entrepreneurs. This is still a conversation we’re addressing, and Chamillionaire was discussing it fourteen years ago. Many rappers start their own record labels and hope to become the next media mogul like P. Diddy. Others try to find success in other genres, like Akon who is credited with giving Lady Gaga her first major deal. While all of these ventures are impressive, they’re also enhanced by star power and celebrity endorsements. Venture capitalist Mark Suster recognized the rapper for his ability to engage an audience while at a tech convention. He wants to serve communities who are underrepresented by investors and build his reputation as a founder. He’s truly found success in a new industry, and his achievements are worth celebrating. Summary : 5 ------------------------------------------------------- ['There are a few ways you can announce a launch, but when\xa0Grammy-winning rapper Chamillionaire shared that\xa0he’d founded his own tech startup via Mark Suster’s Snapchat account from Wash U’s campus, it was certainly noteworthy. ', 'According to Business Insider, the startup will provide\xa0“downloadable software applications for streaming communications with entertainers, politicians, and celebrities,” something akin to “Twitter with more live streaming baked in.”', 'In town Thursday to visit\xa0Washington University’s School of Engineering and Applied Sciences,\xa0Suster, a successful entrepreneur himself and the managing partner of Upfront Ventures in Los Angeles, has had a working relationship with\xa0Chamillionaire for awhile now. ', 'Chamillionaire:\xa0It is actually something I’ve been thinking in my head for a long time. 
A lot of\xa0people haven’t seen me put out music in a long time, so they’re wondering,\xa0“How come you’re not releasing music?” and it’s like, I have other aspirations and other problems I want to solve and I want to build a different type of company. We really try to work with world-class engineers, and people were taking things pretty seriously. It’s not\xa0like there are 2,000 others at your footstep, like in the Bay Area. That’s how I got interested in investing.\nMark Suster:\xa0Cham is an authentic human being. But truly, authentically, my goal is to see Chamillionaire succeed as a tech entrepreneur and be an inspiration for a thousand people behind him who may choose to be tech entrepreneurs rather than wannabe rappers or sports stars. Summary : 6 ------------------------------------------------------- \xa0(Aspirational entrepreneurs should watch the clip if you’re wondering what nailing a pitch looks like. I’ve never seen a slideshow presentation that flowed so well. “I just wasn’t happy with the communication channels that are currently existing on social media,” said Cham. ', '', 'My initial reaction was that this seemed like a lot of effort for an in-demand individual, but Chamillionaire didn’t think that this would take much longer than scrolling through other social media.\xa0 He isn’t expecting everyone to get a response, but believes “there’s an opportunity to prioritize the people who really deserve it.”', 'He hopes that users will be less likely to bully or harass others when they show their face and aren’t hiding behind an anonymous digital persona. Other than the biggest networks like Facebook, Instagram, Snapchat and Twitter, most have flamed out. ', 'But Chamillionaire isn’t deterred and has put a lot of thought into his approach. He was an entrepreneur-in-residence at Upfront Ventures where he regularly sat in on startup pitches and learned firsthand about what worked and what didn’t. He also did this so that potential partners would know that he’s committed and is not just another celebrity with a side project with his name attached. Summary : 7 ------------------------------------------------------- ['Last night I co-hosted a dinner at Soho House in Los Angeles with some of the most senior people in the media industry with executives from Disney, Fox, Warner, media agencies and many promising tech…'] Summary : 8 ------------------------------------------------------- He studied the errors that other people had made and tried to improve on them. Confidence doesn’t come from being a ‘know-it-all,’ it’s because I’ve done this 10 times already.”', 'What things did he experiment in the early days when there was no Facebook, Twitter or even MySpace to promote oneself? ', 'So cheeky Chamillionaire went to Universal wearing the tags from every other label he had visited. While this blunt approach wouldn’t work with VCs a more subtle version actually does. Reminds me of how the networks today announced they were blocking their video content from being shown on Google TV. It allowed him to do big shows down the line in places like Norway & Dubai. ', '“I can do so much more than rap with the rest of my life. I know that young people who look up to me are watching a show like this and they’re paying attention. 
Summary : 9 ------------------------------------------------------- ['HOUSTON – Houston rapper and Grammy Award-winning hip-hop artist Chamillionaire paid a visit to Houston Independent School District students Monday to teach them about career opportunities in the tech world. ', 'Chamillionaire, born\xa0Hakeem Sariki, exploded onto the Houston rap scene in the early 2000s, but has since spent much of his time as an entrepreneur and tech investor in Los Angeles. ', '"I was a musician.\xa0I still am, but I realize the value in appreciating the\xa0tech side of things and that has become my main business,” he said. ', 'Chamillionaire visited Worthing High School with fellow panelists Tuma Basa, head of hip-hop for streaming service,\xa0Spotify, Shawn Gee, artist manager and president of Live Nation Urban, and Brittany Lewis, video programming manager at Spotify to discuss entrepreneurship with HISD seniors. ', 'He explained that he frequently sees young people spend time on social media applications like Snapchat and Instagram, and encourages them to think beyond “social media fame” and focus on career opportunities to build similar tools. ', '"You can learn how to code today. ', 'The rapper has already raised more than $1\xa0million for his own video startup company, according to Business Insider. (Read more here)', '"Hopefully we can make (tech) cool so a lot of these kids can understand that they can be the next people to build the next social media products," Chamillionaire said.'] Summary : 10 ------------------------------------------------------- 1 single "Ridin,\'" a muscular celebration of eluding the police that was simultaneously gleeful and menacing. The Houston MC had been rapping for years -- he sold mixtapes out of his trunk and released a collaborative album with Paul Wall in 2002. But he went from regional force to national star seemingly overnight, "0 to 100 real quick," as Drake might say. ', 'Now the rapper appears to have executed a similar move in the world of tech entrepreneurship. The venture capitalist Mark Suster of Upfront Ventures announced earlier this week that Chamillionaire will be "moving to LA for a while and working in our offices and developing his ideas" as an "entrepreneur in residence." Suster wrote online that he first met the rapper "at a tech conference in LA. I saw him on stage at the event talking about how he used social media to engage audiences. This was 2009 and his understanding of audience engagement was far beyond anything I was hearing from\xa0most people at that time."']
###Markdown
**Luhn** The Luhn summarization algorithm is based on TF-IDF (Term Frequency-Inverse Document Frequency). It treats both very infrequent words and highly frequent words (stopwords) as not significant. Sentences are scored on the remaining significant words, and the highest-ranking sentences make it into the summary.
###Code # Import the summarizer from sumy.summarizers.luhn import LuhnSummarizer # Creating the parser from sumy.nlp.tokenizers import Tokenizer from sumy.parsers.plaintext import PlaintextParser for i in range(0,10): #print("Text " + str(i+1) +"\n\n" + df['Content'][i] + "\n\n\n") article_text = re.sub(r'\[[0-9]*\]', ' ', str(df['Content'][i])) article_text = re.sub(r'\s+', ' ', article_text) parser=PlaintextParser.from_string(article_text,Tokenizer('english')) # Creating the summarizer luhn_summarizer=LuhnSummarizer() luhn_summary=luhn_summarizer(parser.document,sentences_count=10) print("Summary : " + str(i+1)) print("-------------------------------------------------------") # Printing the summary for sentence in luhn_summary: print(sentence) print("\n\n") ###Output Summary : 1 ------------------------------------------------------- [' Yes, it is rather surprising that a rap artist is very interested in entrepreneurship and technology. '] Summary : 2 ------------------------------------------------------- ['When Grammy award-winning rapper Hakeem “Chamillionaire” Seriki began learning the ropes of venture capitalism in the tech space, he noticed something almost immediately— he wasn’t seeing many people who looked like him. ', 'When he began to fund ventures himself, the heads of startups that were brought to him weren’t diverse, either, and that led him to create pitch competitions specifically geared toward people of color and women. ', ' “The reason why we decided to put the focus on minority and women-funded startups is because this demographic of companies and founders is just underrepresented, they’re under-invested in,” he told The Associated Press in a recent interview. “They’re just not as appreciated as we would like, so we’re trying to do more to create more awareness for these companies and also put our money where our mouth is and invest in one of them.”', '', '', 'Startup companies will submit their pitches on Convoz, a video-based social app started by Chamillionaire that focuses on face-to-face interaction. ', 'He believes diversity is scarce because limited partners and venture capitalists tend to work with people they are familiar with or those they “see themselves in,” Chamillionaire said. ', '“A lot of people are raising money, but a lot of people aren’t minorities. “We’re solving problems, often unique problems, because some of these companies I’m seeing are the regurgitation of problems that already got solved. And then you go into places where people like me grew up and there are people that are seeing the world from a very unique lens and those people aren’t getting the capital to go and create those things.”', 'He advises many celebrities and athletes of color in learning how to properly invest and believes they are now realizing the power their status carries, which results in them eventually founding their own companies, as opposed to collecting a paycheck from one. ', '', '', '“I want to do more to create more awareness so that the people in our communities aren’t just thinking that you just got to be a basketball player or a rapper, because that’s what I thought,” Chamillionaire said. And I feel like we need to start training people in our community to start thinking like this.”', 'Although Chamillionaire says he’s encouraged by conferences such as AfroTech – the nation’s largest technology conference for African American techies and entrepreneurs—he says now is the time for inclusion. 
Summary : 3 ------------------------------------------------------- But few have humbled and immersed themselves into the industry like Chamillionaire, who rose to fame in the early 2000s as a rapper, but can now be spotted at investor parties and Y Combinator "demo days." On his introduction to the tech industry as he began advising some entertainment-related startups: It was a whole other world that I didn\'t know existed... But as I started getting closer to a lot of these companies, I realized that a lot of companies were coming to the music industry and cannibalizing their business... On ending up at Upfront Ventures in Los Angeles as an entrepreneur-in-residence:I was honestly planning on going to San Fran and getting into investing and building a company out there. When a founder would come in and pitch… When they leave we would hear all the VCs break it down... On his biggest surprise so far: The surprising thing about it all, is that everyone was so open to giving feedback, criticism, contacts—not like in the music industry. Mark [Suster] showed me 10 companies before I decided to put money in Maker Studios.On celebrities investing in tech startups: Investing in tech, I think, is smart for any entrepreneur or business savvy person—you gotta diversify. At the end of the day I think you’re betting on people and at the end of the day I think I\'m pretty good at people...Eventually people will take a lot of these celebrities and influencers as just a tweet. We\'re more than that—we can connect you to people, we got feedback.The biggest lesson he\'s learned in tech so far: I would say is that I guess I knew this, but it\'s just being in the thick of this, nothing is gonna come overnight. Being a celebrity is tough—I don\'t even think of myself as one but I guess I am—everyone tells you it’s gonna be great. Summary : 4 ------------------------------------------------------- For some people, the end of a music career might be the start of better endeavors. Seriki’s music success has really pigeonholed him as a one hit wonder, but he’s much more than that. This is still a conversation we’re addressing, and Chamillionaire was discussing it fourteen years ago. He would release a few more EPs and mixtapes, but as his success in music dwindled he would fade from the public eye. He wasn’t just generating excitement, he was discussing how he rose to the top of the iTunes charts thanks to his focus on digital media. ', 'Audience engagement wasn’t the only reason Chamillionaire appeared on the tech scene; he was there to learn to invest. Early in his career, he invested in Maker’s Studio, which would later be sold to Disney for a reported $675 million. While it’s unknown how much Chamillionaire made in this investment, he invested early and made a significant profit upon his exit. ', 'For a one hit wonder, it’s safe to say Chamillionaire is doing better than a lot of other rappers. In 2019, Chamillionaire held a content to invest $25,000 in a start up because so few start ups had a female founder and so few people of color had their start ups venture-backed. Summary : 5 ------------------------------------------------------- A lot of\xa0people haven’t seen me put out music in a long time, so they’re wondering,\xa0“How come you’re not releasing music?” and it’s like, I have other aspirations and other problems I want to solve and I want to build a different type of company. 
But when you’re well-known and you put out a product, he has a higher bar because if he just puts it out there then you’ll get all the negative reactions because people want to find everything wrong with startups and he’s refining it in private. That’s why I can look Mark or\xa0any other investor in the face and say,\xa0“Hey, I’m different than these guys, I’m all in.” So, I plan to put out music, but I just want to be able to do it on my terms, and that’s kind of what I’m doing now. When they ask me to invest, normally it’s not just about a check, it’s about, “How can we get Chamillionaire involved and use his strategy and the things he’s done in the past to help things grow with us?” Startups for the most part are just trying to get people to know they exist. ', 'Mark Suster: So let me push beyond humble Chamillionaire: I introduced him years ago to our startups and they all approached him for the same reason: “Hey, maybe I can get him to promote us.” And that’s the first thought for anyone who has reach with an audience–it’s promotion. ', 'Mark Suster: That’s a great question I’m asked all the time.\xa0Number one is, if you want to stay in St. Louis and build a company here, you have a great advantage, which is great engineering talent that will be cheaper because cost of living is cheaper and a much higher retention rate because if you’re building an interesting company. And what I said to them was,\xa0“If you’ll do board meetings in New York and LA, I’ll come once a year to Toronto.” So it’s taking that issue off the table; a lot of people don’t know to do that. ', 'Anyone a startup pitches in New York, San Francisco, LA, their first thought is,\xa0“Do I want to go to St. Louis eight times a year?” And they’re also thinking, “Can I really provide you enough advice and have enough interactions to make a difference?” So if you say,\xa0“I’m on the coasts all the time anyway, every time I come I’ll come see you and you’ll have plenty of access,” then now you’ve taken that issue off the table and I can focus on your business. When I was trying to get into the tech industry, I would look at\xa0tech blogs and I would go to tech conferences and just try to find out what was happening with investing and startups, because remember, I’m coming from a whole different industry and trying to navigate the waters. What does he want out of this?” Because when he came to be an EIR, he kept asking and I told him, “I don’t want anything, I just want to see you succeed!”', 'Chamillionaire: I’m guy coming from an industry where you have believed that a wolf is always in sheep’s clothing. ###Markdown **KL-Sum** Another extractive method is the KL-Sum algorithm.It selects sentences based on similarity of word distribution as the original text. It aims to lower the KL-divergence criteria. 
It uses greedy optimization approach and keeps adding sentences till the KL-divergence decreases ###Code from sumy.summarizers.kl import KLSummarizer from sumy.nlp.tokenizers import Tokenizer from sumy.parsers.plaintext import PlaintextParser for i in range(0,10): #print("Text " + str(i+1) +"\n\n" + df['Content'][i] + "\n\n\n") article_text = re.sub(r'\[[0-9]*\]', ' ', str(df['Content'][i])) article_text = re.sub(r'\s+', ' ', article_text) parser=PlaintextParser.from_string(article_text,Tokenizer('english')) kl_summarizer=KLSummarizer() kl_summary=kl_summarizer(parser.document,sentences_count=10) print("Summary : " + str(i+1)) print("-------------------------------------------------------") # Printing the summary for sentence in kl_summary: print(sentence) print("\n\n") ###Output Summary : 1 ------------------------------------------------------- [' Yes, it is rather surprising that a rap artist is very interested in entrepreneurship and technology. '] Summary : 2 ------------------------------------------------------- ', 'Chamillionaire, who co-founded the underground Texas group the Color Changin’ Click with Paul Wall, is best known for his hit “Ridin’ Dirty,” but has also made a name for himself in business. The trickle-down effect is that the tech space is almost completely dominated by white males. ', '“A lot of people are raising money, but a lot of people aren’t minorities. A lot of them aren’t women,” Chamillionaire said. ', '', '', '“I want to do more to create more awareness so that the people in our communities aren’t just thinking that you just got to be a basketball player or a rapper, because that’s what I thought,” Chamillionaire said. “Now I feel like there’s a lot of ‘cool’ happening with the tech. There are people that are becoming millionaires that are 20-something-years-old when Snapchat IPOs or when this company gets acquired. ', '“I think there’s a systemic problem that I’m not alone going to be able to fix, but I recognize it’s a real thing. ', 'The pitch competition ends Friday. ', '___', 'Follow Associated Press entertainment journalist Gary Gerard Hamilton at twitter.com/GaryGHamilton.'] Summary : 3 ------------------------------------------------------- ['In the last several years, a growing number of celebrities have begun investing into tech startups. He leads a syndicate of investors made up of influencers, celebrities, and athletes. His startup Convoz now has a total of seven employees, and has raised an undisclosed amount of seed funding from Greycroft Ventures, Upfront Ventures, 500 Startups, Precursor VC, Okapi Ventures, XG Ventures, and a roster of angels including Justin Kan and Snoop Dogg. He\'s not made any investments in cryptocurrencies but says he believes in the potential of blockchain tech. My first tech conference was something at Stanford, Quincy Jones\' son told me about it. Mark Suster asked me why. Well there\'s no tech in L.A., when you get off the plane there\'s paparazzi. Hearing [the Upfront partners] break down companies... At the end of the day I think you’re betting on people and at the end of the day I think I\'m pretty good at people...Eventually people will take a lot of these celebrities and influencers as just a tweet. You can get in here and have a false sense of how things work. Summary : 4 ------------------------------------------------------- ['The middle of the 00’s saw many rappers rise to fame then fall off the map. For some people, the end of a music career might be the start of better endeavors. 
', 'Ridin’ was a catchy song, but also focused on the topic of racial profiling. ', 'For a one hit wonder, it’s safe to say Chamillionaire is doing better than a lot of other rappers. The goal is to encourage collaborative video conversations around current topics. After applying through Convoz, Pierre Laguerre’s company Fleeting was selected as the winner. ', 'Is Fleeting the next big start up? Possibly. It’s hard to deny Chamillionaire’s eye for success. ', 'Thank you for reading! Summary : 5 ------------------------------------------------------- Then it was\xa0like,\xa0“You know what? ', '[For his own startup]\xa0he actually has a working product. It’s not public, it’s not launched, it’s not available, but it is working—I have a copy on my phone, he has a copy on his phone.\xa0', 'The real issue is, when is it ready for prime time? ', 'Chamillionaire: A lot of time when they’re coming to me, it’s because they are trying to find a way to get\xa0people using something. But it does take time. ', 'Chamillionaire:\xa0I like to call Mark my Mr. Miyagi. It’s been great. Which is my favorite response. ', 'Mark Suster:\xa0I just want him\xa0to succeed. JDFI. Summary : 6 ------------------------------------------------------- ["You probably know Chamillionaire from the song “Ridin,'”\xa0but did you know the Grammy Award winner is also a successful startup investor? They were\xa0wowed. I’ve never seen a slideshow presentation that flowed so well. Bonus: Snoop makes a cameo. )', 'So what is Convoz? ', 'Chamillionaire tells me that the video-centric platform aims to be “the place where you go to talk to people.” He wants Convoz to be an app where people converse face-to-face with stars like Shaq or find new friends with common interests. ', 'He was inspired to create an alternative to Twitter, which he feels is overwhelmed with trolls. ', 'Convoz\xa0allows people to upload 15-second clips, often addressed to particular celebrities. Convoz is a clear priority. ', 'Above all, the Houston native said that he wants to send the message to others from a similar upbringing that they have more options for a successful life than being a rap star or a basketball player.\xa0“I want to change the narrative.”'] Summary : 7 ------------------------------------------------------- ['Last night I co-hosted a dinner at Soho House in Los Angeles with some of the most senior people in the media industry with executives from Disney, Fox, Warner, media agencies and many promising tech…'] Summary : 8 ------------------------------------------------------- ['On why you should be an entrepreneur,', '“A lot of people do what they have to do. ', 'He stood up, grabbed the mic and gave a heartfelt overview of his experiences in experimenting with new technologies to build relationships with his audience, get feedback on his product quality, and to market his music all the way to the top of iTunes. Cham studied early in his career how to hold the microphone, how to project his voice, how to watch the audience and pay attention to what interested them. There are a million opinions about what is good. People want what they can’t have and VCs are no different. Reminds me of how the networks today announced they were blocking their video content from being shown on Google TV. On What Next? A way to do music this way.”', 'We also spoke a lot about “free” as a metaphor to build future value. On African American Youth? 
I just want to do the right thing and tell young people straight what they need to do.” ', '“They say the ‘game is to be sold and not to be told.’ Well I just ‘tell it.’ If you’re a young & up and coming rapper and you don’t know what tunecore is–you should know it.” ', '“The future of the world is in the palm of the tech community.” ', 'Reprinted from Both Sides of the Table', 'Mark Suster is a 2x entrepreneur who has gone to the Dark Side of VC. Summary : 9 ------------------------------------------------------- ['HOUSTON – Houston rapper and Grammy Award-winning hip-hop artist Chamillionaire paid a visit to Houston Independent School District students Monday to teach them about career opportunities in the tech world. ', 'Chamillionaire, born\xa0Hakeem Sariki, exploded onto the Houston rap scene in the early 2000s, but has since spent much of his time as an entrepreneur and tech investor in Los Angeles. ', '"I was a musician.\xa0I still am, but I realize the value in appreciating the\xa0tech side of things and that has become my main business,” he said. ', 'Chamillionaire visited Worthing High School with fellow panelists Tuma Basa, head of hip-hop for streaming service,\xa0Spotify, Shawn Gee, artist manager and president of Live Nation Urban, and Brittany Lewis, video programming manager at Spotify to discuss entrepreneurship with HISD seniors. ', 'He explained that he frequently sees young people spend time on social media applications like Snapchat and Instagram, and encourages them to think beyond “social media fame” and focus on career opportunities to build similar tools. ', '"You can learn how to code today. You can build this same thing that you\'re looking at every day, that you\'re tweeting on, that you\'re snappping on, and I feel like that conversation needs to be had," Chamillionaire said. ', 'The rapper has already raised more than $1\xa0million for his own video startup company, according to Business Insider. (Read more here)', '"Hopefully we can make (tech) cool so a lot of these kids can understand that they can be the next people to build the next social media products," Chamillionaire said.'] Summary : 10 ------------------------------------------------------- ['In the mid \'00s, Chamillionaire exploded into the national consciousness with his No. 1 single "Ridin,\'" a muscular celebration of eluding the police that was simultaneously gleeful and menacing. The Houston MC had been rapping for years -- he sold mixtapes out of his trunk and released a collaborative album with Paul Wall in 2002. But he went from regional force to national star seemingly overnight, "0 to 100 real quick," as Drake might say. ', 'Now the rapper appears to have executed a similar move in the world of tech entrepreneurship. The venture capitalist Mark Suster of Upfront Ventures announced earlier this week that Chamillionaire will be "moving to LA for a while and working in our offices and developing his ideas" as an "entrepreneur in residence." Suster wrote online that he first met the rapper "at a tech conference in LA. I saw him on stage at the event talking about how he used social media to engage audiences. This was 2009 and his understanding of audience engagement was far beyond anything I was hearing from\xa0most people at that time."'] ###Markdown Approach 3 : Text Summarization using TextRank **TextRank** is an extractive summarization technique. It is based on the concept that words which occur more frequently are significant. 
Hence , the sentences containing highly frequent words are importantBased on this , the algorithm assigns scores to each sentence in the text . The top-ranked sentences make it to the summary. ###Code !pip install summa from summa import summarizer from summa import keywords for i in range(0,10): #print("Text " + str(i+1) +"\n\n" + df['Content'][i] + "\n\n\n") article_text = re.sub(r'\[[0-9]*\]', ' ', str(df['Content'][i])) article_text = re.sub(r'\s+', ' ', article_text) print("Summary : " + str(i+1) + "\n\n") print(summarizer.summarize(article_text)) print("\n\nKeywords in Summary : " + str(i+1) + "\n") print(keywords.keywords(article_text)) print("-----------------------------------------------------------------------------------------\n\n\n") ###Output Summary : 1 Keywords in Summary : 1 ----------------------------------------------------------------------------------------- Summary : 2 “They’re just not as appreciated as we would like, so we’re trying to do more to create more awareness for these companies and also put our money where our mouth is and invest in one of them.”', '', '', 'Startup companies will submit their pitches on Convoz, a video-based social app started by Chamillionaire that focuses on face-to-face interaction. The applicants will be reviewed by him, E-40, Daymond John of “Shark Tank” and Republic, an SEC-registered investing platform.', 'Chamillionaire, who co-founded the underground Texas group the Color Changin’ Click with Paul Wall, is best known for his hit “Ridin’ Dirty,” but has also made a name for himself in business.', 'He believes diversity is scarce because limited partners and venture capitalists tend to work with people they are familiar with or those they “see themselves in,” Chamillionaire said. And then you go into places where people like me grew up and there are people that are seeing the world from a very unique lens and those people aren’t getting the capital to go and create those things.”', 'He advises many celebrities and athletes of color in learning how to properly invest and believes they are now realizing the power their status carries, which results in them eventually founding their own companies, as opposed to collecting a paycheck from one.', '', '', '“I want to do more to create more awareness so that the people in our communities aren’t just thinking that you just got to be a basketball player or a rapper, because that’s what I thought,” Chamillionaire said. Keywords in Summary : 2 chamillionaire rapper like problems problem investment invest investing venture technology texas american ridin changin contest fund ventures people competition ends started start pitch competitions specifically solving unique solved associated gerard pitches getting gets ----------------------------------------------------------------------------------------- Summary : 3 ['In the last several years, a growing number of celebrities have begun investing into tech startups. But few have humbled and immersed themselves into the industry like Chamillionaire, who rose to fame in the early 2000s as a rapper, but can now be spotted at investor parties and Y Combinator "demo days." Next act: A self-described entrepreneur at heart, Chamillionaire recently debuted his mobile video chat app Convoz while he continues to invest in startups. On his introduction to the tech industry as he began advising some entertainment-related startups: It was a whole other world that I didn\'t know existed... 
On ending up at Upfront Ventures in Los Angeles as an entrepreneur-in-residence:I was honestly planning on going to San Fran and getting into investing and building a company out there. Mark [Suster] showed me 10 companies before I decided to put money in Maker Studios.On celebrities investing in tech startups: Investing in tech, I think, is smart for any entrepreneur or business savvy person—you gotta diversify. At the end of the day I think you’re betting on people and at the end of the day I think I\'m pretty good at people...Eventually people will take a lot of these celebrities and influencers as just a tweet. We\'re more than that—we can connect you to people, we got feedback.The biggest lesson he\'s learned in tech so far: I would say is that I guess I knew this, but it\'s just being in the thick of this, nothing is gonna come overnight. Keywords in Summary : 3 people startups startup tech ventures include including celebrities celebrity industry like feedback thing things suster chamillionaire coming come pretty savvy investing invest investments started getting combinator quincy son days day criticism biggest demo ----------------------------------------------------------------------------------------- Summary : 4 For some people, the end of a music career might be the start of better endeavors.', 'Around 2004 the rap scene saw an increase in Texas-based rappers gain popularity. Falling from the top turned out to be a blessing in disguise because he’s become quite the successful investor.', 'Chamillionaire, whose real name is Hakeem Seriki, has taken some risks that paid off. Seriki’s music success has really pigeonholed him as a one hit wonder, but he’s much more than that. He wasn’t just generating excitement, he was discussing how he rose to the top of the iTunes charts thanks to his focus on digital media.', 'Audience engagement wasn’t the only reason Chamillionaire appeared on the tech scene; he was there to learn to invest. While it’s unknown how much Chamillionaire made in this investment, he invested early and made a significant profit upon his exit.', 'By 2015, Chamillionaire joined Upfront Ventures in Santa Monica as the Entrepreneur-In-Residence. His portfolio includes early investments in: home security system Ring (acquired by Amazon), self-driving car technology Cruise (acquired by GM), and ride-sharing app Lyft which went public in 2018.', 'For a one hit wonder, it’s safe to say Chamillionaire is doing better than a lot of other rappers.', 'In 2018, Chamillionaire introduced the world to his own app named Convoz. In 2019, Chamillionaire held a content to invest $25,000 in a start up because so few start ups had a female founder and so few people of color had their start ups venture-backed. It’s hard to deny Chamillionaire’s eye for success. While he still dabbles in rapping occasionally, his biggest success can be found in the investing world. 
Keywords in Summary : 4 chamillionaire like tech successful success successes venture ventures future invest investments invested investment invests investing changed ridin truck rappers rapper includes include music industry changes scene new album albums app media needs public design years retirement retire endeavors lady companies company conversation conversations celebrity celebrating convoz started start chart charts use college seriki connecting self acquired engage engagement ----------------------------------------------------------------------------------------- Summary : 5 ['There are a few ways you can announce a launch, but when\xa0Grammy-winning rapper Chamillionaire shared that\xa0he’d founded his own tech startup via Mark Suster’s Snapchat account from Wash U’s campus, it was certainly noteworthy.', 'According to Business Insider, the startup will provide\xa0“downloadable software applications for streaming communications with entertainers, politicians, and celebrities,” something akin to “Twitter with more live streaming baked in.”', 'In town Thursday to visit\xa0Washington University’s School of Engineering and Applied Sciences,\xa0Suster, a successful entrepreneur himself and the managing partner of Upfront Ventures in Los Angeles, has had a working relationship with\xa0Chamillionaire for awhile now. Chamillionaire serves\xa0as an EIR at Upfront and the two co-invested in Maker Studio, which has since been sold to the Walt Disney Company.', 'After a morning\xa0at Wash\xa0U, Suster\xa0and Chamillionaire gave a talk at\xa0Venture Café St. Louis, where we were able to sit down with both to\xa0talk about tech, funding and social media.', 'Chamillionaire:\xa0It is actually something I’ve been thinking in my head for a long time. A lot of\xa0people haven’t seen me put out music in a long time, so they’re wondering,\xa0“How come you’re not releasing music?” and it’s like, I have other aspirations and other problems I want to solve and I want to build a different type of company. So now I’m hiring developers and bringing in people to help build this tech company with me and it’s just a different experience, and I’m excited for the journey.', 'Mark Suster:\xa0The first thing is, Cham had the concept awhile ago. But when you’re well-known and you put out a product, he has a higher bar because if he just puts it out there then you’ll get all the negative reactions because people want to find everything wrong with startups and he’s refining it in private.', 'Chamillionaire: The music thing is what most of the people who know me know about and it’s what I’ve been asked about the most. ', 'So many people [in entertainment] are coming to the\xa0tech world and they’re like, “Oh there’s this cool tech thing going on and people are making money,” and they treat it as a thing they will just\xa0moonlight with. That’s why I can look Mark or\xa0any other investor in the face and say,\xa0“Hey, I’m different than these guys, I’m all in.” So, I plan to put out music, but I just want to be able to do it on my terms, and that’s kind of what I’m doing now.', 'Chamillionaire: A lot of time when they’re coming to me, it’s because they are trying to find a way to get\xa0people using something. 
When they ask me to invest, normally it’s not just about a check, it’s about, “How can we get Chamillionaire involved and use his strategy and the things he’s done in the past to help things grow with us?” Startups for the most part are just trying to get people to know they exist.', 'Mark Suster: So let me push beyond humble Chamillionaire: I introduced him years ago to our startups and they all approached him for the same reason: “Hey, maybe I can get him to promote us.” And that’s the first thought for anyone who has reach with an audience–it’s promotion. ', 'A lot of people were already working on startups and thinking of building things. We really try to work with world-class engineers, and people were taking things pretty seriously.', '', 'Mark Suster:\xa0So I have this great audience, they are really engaged, there’s no bullsh*t, we’re doing this live. [Looks to Chamillilonaire and laughs.]\xa0', 'Chamillionaire:\xa0He’s the VC of the millennials.', 'Mark Suster: That’s a great question I’m asked all the time.\xa0Number one is, if you want to stay in St. Louis and build a company here, you have a great advantage, which is great engineering talent that will be cheaper because cost of living is cheaper and a much higher retention rate because if you’re building an interesting company. And what I said to them was,\xa0“If you’ll do board meetings in New York and LA, I’ll come once a year to Toronto.” So it’s taking that issue off the table; a lot of people don’t know to do that.', 'Anyone a startup pitches in New York, San Francisco, LA, their first thought is,\xa0“Do I want to go to St. Louis eight times a year?” And they’re also thinking, “Can I really provide you enough advice and have enough interactions to make a difference?” So if you say,\xa0“I’m on the coasts all the time anyway, every time I come I’ll come see you and you’ll have plenty of access,” then now you’ve taken that issue off the table and I can focus on your business.', 'Chamillionaire:\xa0I like to call Mark my Mr. Miyagi. What does he want out of this?” Because when he came to be an EIR, he kept asking and I told him, “I don’t want anything, I just want to see you succeed!”', 'Chamillionaire: I’m guy coming from an industry where you have believed that a wolf is always in sheep’s clothing. Keywords in Summary : 5 ###Markdown ABSTRACTIVE SUMMARIZATION Abstractive summarization is the new state of art method, which generates new sentences that could best represent the whole text. This is better than extractive methods where sentences are just selected from original text for the summary.A simple and effective way is through the Huggingface’s transformers library. 
###Code !pip install transformers ###Output Requirement already satisfied: transformers in /opt/anaconda3/lib/python3.8/site-packages (2.2.0) Requirement already satisfied: regex in /opt/anaconda3/lib/python3.8/site-packages (from transformers) (2017.4.5) Requirement already satisfied: sentencepiece in /opt/anaconda3/lib/python3.8/site-packages (from transformers) (0.1.96) Requirement already satisfied: numpy in /opt/anaconda3/lib/python3.8/site-packages (from transformers) (1.19.2) Requirement already satisfied: boto3 in /opt/anaconda3/lib/python3.8/site-packages (from transformers) (1.18.7) Requirement already satisfied: sacremoses in /opt/anaconda3/lib/python3.8/site-packages (from transformers) (0.0.45) Requirement already satisfied: tqdm in /opt/anaconda3/lib/python3.8/site-packages (from transformers) (4.61.2) Requirement already satisfied: requests in /opt/anaconda3/lib/python3.8/site-packages (from transformers) (2.25.1) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /opt/anaconda3/lib/python3.8/site-packages (from boto3->transformers) (0.10.0) Requirement already satisfied: botocore<1.22.0,>=1.21.7 in /opt/anaconda3/lib/python3.8/site-packages (from boto3->transformers) (1.21.7) Requirement already satisfied: s3transfer<0.6.0,>=0.5.0 in /opt/anaconda3/lib/python3.8/site-packages (from boto3->transformers) (0.5.0) Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /opt/anaconda3/lib/python3.8/site-packages (from botocore<1.22.0,>=1.21.7->boto3->transformers) (2.8.2) Requirement already satisfied: urllib3<1.27,>=1.25.4 in /opt/anaconda3/lib/python3.8/site-packages (from botocore<1.22.0,>=1.21.7->boto3->transformers) (1.26.6) Requirement already satisfied: six>=1.5 in /opt/anaconda3/lib/python3.8/site-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.22.0,>=1.21.7->boto3->transformers) (1.15.0) Requirement already satisfied: chardet<5,>=3.0.2 in /opt/anaconda3/lib/python3.8/site-packages (from requests->transformers) (4.0.0) Requirement already satisfied: idna<3,>=2.5 in /opt/anaconda3/lib/python3.8/site-packages (from requests->transformers) (2.10) Requirement already satisfied: certifi>=2017.4.17 in /opt/anaconda3/lib/python3.8/site-packages (from requests->transformers) (2021.5.30) Requirement already satisfied: click in /opt/anaconda3/lib/python3.8/site-packages (from sacremoses->transformers) (7.1.2) Requirement already satisfied: joblib in /opt/anaconda3/lib/python3.8/site-packages (from sacremoses->transformers) (1.0.1) ###Markdown - HuggingFace supports state of the art models to implement tasks such as summarization, classification, etc.. 
Some common models are GPT-2, GPT-3, BERT , OpenAI, GPT, T5 ###Code import tensorflow as tf from transformers import GPT2LMHeadModel, GPT2Tokenizer tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = GPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id) for i in range(0,10): #print("Text " + str(i+1) +"\n\n" + df['Content'][i] + "\n\n\n") article_text = str(df['Content'][i]) # Encoding text to get input ids & pass them to model.generate() inputs=tokenizer.batch_encode_plus([article_text],return_tensors='pt',max_length=512,truncation=True) summary_ids=model.generate(inputs['input_ids'],early_stopping=True) print("Summary : " + str(i+1)) print("-------------------------------------------------------") # Printing the summary GPT_summary=tokenizer.decode(summary_ids[0],skip_special_tokens=True) print(GPT_summary) print("\n\n") for i in range(0,10): #print("Text " + str(i+1) +"\n\n" + df['Content'][i] + "\n\n\n") article_text = str(df['Content'][i]) article_text = article_text[:20] # Encoding text to get input ids & pass them to model.generate() inputs=tokenizer.batch_encode_plus([article_text],return_tensors='pt',max_length=512,truncation=True) summary_ids=model.generate(inputs['input_ids'],early_stopping=True) print("Summary : " + str(i+1)) print("-------------------------------------------------------") # Printing the summary GPT_summary=tokenizer.decode(summary_ids[0],skip_special_tokens=True) print(GPT_summary) print("\n\n") ###Output Summary : 1 ------------------------------------------------------- [' Yes, it is rather difficult to get a job in the UK, but I am a very Summary : 2 ------------------------------------------------------- ['When Grammy award-winning singer-songwriter and producer, John Legend died, he was buried Summary : 3 ------------------------------------------------------- ['In the last severa, the man who was the first to die was the first to die Summary : 4 ------------------------------------------------------- ['The middle of the vernal equinox' is a sign of the end of the Summary : 5 ------------------------------------------------------- ['There are a few wares here, but I'm not sure if they're worth it." Summary : 6 ------------------------------------------------------- ["You probably know ____, but you're not going to get it from me."] Summary : 7 ------------------------------------------------------- ['Last night I co-hopped a party with my friends and I had a great time. Summary : 8 ------------------------------------------------------- ['On why you should ____] [On why you should ____] [On why you Summary : 9 ------------------------------------------------------- ['HOUSTON – Houston.") The Houston Rockets have been in the midst of a rebuilding Summary : 10 ------------------------------------------------------- ['In the mid \'00s, vern \'00s\ n [ME, fr.
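###Markdown The GPT-2 head used above is a plain language model, so `model.generate` continues the prompt (and the second loop passes in only the first 20 characters of each article), which is why the generated text reads like open-ended continuation rather than a condensed summary. A dedicated sequence-to-sequence summarization checkpoint is better suited to abstractive summaries. The sketch below is only one possible alternative: it assumes a recent transformers release that provides the `pipeline` API, and the checkpoint name is an assumption rather than a requirement. ###Code # Hedged sketch: abstractive summarization with a pretrained sequence-to-sequence
# model via the transformers pipeline API. Assumes a recent transformers version;
# the checkpoint name below is an assumption, not a requirement.
from transformers import pipeline

abstractive_summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

for i in range(0, 10):
    article_text = re.sub(r'\s+', ' ', str(df['Content'][i]))
    # Summarization checkpoints accept a limited input length, so long articles
    # are truncated by the tokenizer before being handed to the model.
    summary = abstractive_summarizer(article_text, max_length=120, min_length=40,
                                     do_sample=False, truncation=True)
    print("Summary : " + str(i + 1))
    print("-------------------------------------------------------")
    print(summary[0]['summary_text'])
    print("\n\n")
 ###Output _____no_output_____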
Scripts/EvaluationScript.ipynb
###Markdown Automated Globe Prediction in CT images of the Orbits Axial CT images of the orbits ###Code """ 
UNIVERSITY OF ARIZONA
Author: Lavanya Umapathy
Date: 
Description: Script to evaluate/test using a saved CNN model. The MRes-UNET2D model uses 
    Axial CT images of the orbits to predict globe masks and quantify globe volumes.
If you use this CNN model in your work, please cite the following: 
    Lavanya Umapathy, Blair Winegar, Lea MacKinnon, Michael Hill, Maria I Altbach, 
    Joseph M Miller and Ali Bilgin, "Fully Automated Segmentation of Globes for Volume 
    Quantification in CT Orbits Images Using Deep Learning", American Journal of 
    Neuroradiology, June 2020.
"""
from matplotlib import pyplot as plt
import time, sys
from skimage import measure
import Utilities as utils

# path to the saved pre-trained model
model_loadPath = '../PretrainedModel/MRes_UNET2D.h5'
# path to a dicom directory containing CT images. Replace this with actual data directory
dcm_loadPath = '../Data/SubjectFolderName/'

output_shape = (512,512)
WL = 50    # in hounsfield units, for window level
WW = 200   # in hounsfield units, for window width
dicom_srchstr = 'IM*'
gpu_number = '0'

# load the CT DICOM series into img (Height x Width x Number of Slices)
img = utils.loadDicomSeries_sorted(dcm_loadPath,dicom_srchstr)

# Get pixel size for volume calculations
pixdim = utils.getPixDims_Dicom(dcm_loadPath,dicom_srchstr)

# Pre-process CT images of the orbits
img = utils.preProcess_orbitalCT(img,output_shape,WL=WL,WW=WW)

# Load the pretrained MRes-UNET2D model
model = utils.loadSavedModel(model_loadPath,gpu_number)

# Predict masks for globes using MRes-UNET2D model
start_pred = time.time()
predictedGlobes = utils.predictGlobes(model, img)
end_pred = time.time()
print("Time Elapsed prediction in seconds: ",round(end_pred - start_pred,4))

# Display the predicted globe contours on top of the CT image
img_idx = 2   # select an image to display the contours on
contours_pred = measure.find_contours(predictedGlobes[:,:,img_idx], 0.25)

fig, ax = plt.subplots()
ax.imshow(img[:,:,img_idx], interpolation='nearest', cmap=plt.cm.gray)
for n, contour in enumerate(contours_pred):
    ax.plot(contour[:, 1], contour[:, 0], linewidth=1.5, color='blue')

# Print evaluation stats
utils.computeGlobeStats(predictedGlobes,pixdim)
 ###Output _____no_output_____
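###Markdown If a manually segmented reference mask is available for the same series, a simple overlap score can complement the volume statistics. The sketch below is illustrative only: `referenceGlobes` is a placeholder for a binary reference mask with the same shape as `predictedGlobes`, and how it is loaded depends on how the reference segmentation was stored. ###Code # Hedged sketch: Dice overlap between predicted and reference globe masks.
# `referenceGlobes` is a placeholder; load it however the reference was stored.
import numpy as np

def dice_coefficient(pred, ref, threshold=0.5):
    # Binarize both masks and compare their overlap
    pred_bin = (np.asarray(pred) > threshold).astype(np.uint8)
    ref_bin = (np.asarray(ref) > threshold).astype(np.uint8)
    intersection = np.sum(pred_bin * ref_bin)
    denominator = np.sum(pred_bin) + np.sum(ref_bin)
    return 2.0 * intersection / denominator if denominator > 0 else 1.0

# Example usage once a reference segmentation has been loaded:
# print("Dice:", round(dice_coefficient(predictedGlobes, referenceGlobes), 4))
 ###Output _____no_output_____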
Software/Python Code/Lab_DNN_Classifer_Model.ipynb
###Markdown Neural Network Model to train on Semantic distances between title and body/prose segments ###Code import pandas as pd import numpy as np df = pd.read_csv('Lab_dom_dep_train_data.csv') # Stats of the data frame df.info() df.head() df.describe() import seaborn as sns %matplotlib inline df.hist(column='Label') X = df[['Header_Distance','Body_Distance','Length']] y = df['Label'] # import python tensorflow import tensorflow as tf from tensorflow.contrib.learn import SKCompat # Specify that all features have real-value data feature_columns = [tf.contrib.layers.real_valued_column("", dimension=3)] classifer_qq = SKCompat(tf.contrib.learn.DNNClassifier(hidden_units=[24,48], feature_columns=feature_columns, n_classes=3,model_dir='./tf_model/' )) classifer_qq.fit(X,y,steps=2000,batch_size=256) ###Output WARNING:tensorflow:float64 is not supported by many models, consider casting to float32. INFO:tensorflow:Create CheckpointSaverHook. INFO:tensorflow:Saving checkpoints for 1 into ./tf_model/model.ckpt. INFO:tensorflow:loss = 19.0013, step = 1 INFO:tensorflow:global_step/sec: 206.918 INFO:tensorflow:loss = 0.762444, step = 101 (0.486 sec) INFO:tensorflow:global_step/sec: 214.022 INFO:tensorflow:loss = 1.55348, step = 201 (0.466 sec) INFO:tensorflow:global_step/sec: 213.564 INFO:tensorflow:loss = 0.710124, step = 301 (0.467 sec) INFO:tensorflow:global_step/sec: 218.237 INFO:tensorflow:loss = 0.66276, step = 401 (0.459 sec) INFO:tensorflow:global_step/sec: 215.875 INFO:tensorflow:loss = 0.619779, step = 501 (0.462 sec) INFO:tensorflow:global_step/sec: 217.286 INFO:tensorflow:loss = 0.721031, step = 601 (0.461 sec) INFO:tensorflow:global_step/sec: 209.525 INFO:tensorflow:loss = 0.55749, step = 701 (0.475 sec) INFO:tensorflow:global_step/sec: 212.653 INFO:tensorflow:loss = 0.656736, step = 801 (0.472 sec) INFO:tensorflow:global_step/sec: 215.875 INFO:tensorflow:loss = 0.598093, step = 901 (0.464 sec) INFO:tensorflow:global_step/sec: 208.214 INFO:tensorflow:loss = 0.578224, step = 1001 (0.478 sec) INFO:tensorflow:global_step/sec: 202.712 INFO:tensorflow:loss = 0.659505, step = 1101 (0.494 sec) INFO:tensorflow:global_step/sec: 204.794 INFO:tensorflow:loss = 0.560839, step = 1201 (0.486 sec) INFO:tensorflow:global_step/sec: 153.201 INFO:tensorflow:loss = 0.594871, step = 1301 (0.656 sec) INFO:tensorflow:global_step/sec: 200.673 INFO:tensorflow:loss = 0.607511, step = 1401 (0.497 sec) INFO:tensorflow:global_step/sec: 206.489 INFO:tensorflow:loss = 0.635268, step = 1501 (0.484 sec) INFO:tensorflow:global_step/sec: 211.75 INFO:tensorflow:loss = 0.627354, step = 1601 (0.472 sec) INFO:tensorflow:global_step/sec: 205.638 INFO:tensorflow:loss = 0.801062, step = 1701 (0.486 sec) INFO:tensorflow:global_step/sec: 153.674 INFO:tensorflow:loss = 0.599568, step = 1801 (0.651 sec) INFO:tensorflow:global_step/sec: 197.493 INFO:tensorflow:loss = 0.61895, step = 1901 (0.510 sec) INFO:tensorflow:Saving checkpoints for 2000 into ./tf_model/model.ckpt. INFO:tensorflow:Loss for final step: 0.552525.
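###Markdown The cell above trains on all of the labelled rows, so it says nothing about generalisation. A rough check is to hold out part of the data, train a fresh estimator on the remainder, and score the held-out split. This is only a sketch: the split ratio is arbitrary, and the exact structure returned by the SKCompat wrapper's `predict()` is an assumption that the code handles defensively. ###Code # Hedged sketch: hold-out evaluation of the DNN classifier.
# Assumes the TF 1.x contrib.learn SKCompat wrapper used above; whether
# predict() returns a dict keyed by 'classes' or a plain array is an assumption.
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                     random_state=42)

# Fresh estimator (no shared model_dir) so the hold-out rows stay unseen
holdout_clf = SKCompat(tf.contrib.learn.DNNClassifier(hidden_units=[24, 48],
                                                      feature_columns=feature_columns,
                                                      n_classes=3))
holdout_clf.fit(X_train, y_train, steps=2000, batch_size=256)

raw_pred = holdout_clf.predict(X_test)
y_pred = raw_pred['classes'] if isinstance(raw_pred, dict) else raw_pred
y_pred = np.ravel(y_pred)

print("Hold-out accuracy:", accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
 ###Output _____no_output_____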
digit image classifer.ipynb
###Markdown To demonstrate how easy it is to load the MNIST dataset, we will first write a little script to download and visualize the first 4 images in the training dataset. ###Code # Plot ad hoc mnist instances from keras.datasets import mnist import matplotlib.pyplot as plt # load (downloaded if needed) the MNIST dataset (X_train, y_train), (X_test, y_test) = mnist.load_data() # plot 4 images as gray scale plt.subplot(221) plt.imshow(X_train[0], cmap=plt.get_cmap('gray')) plt.subplot(222) plt.imshow(X_train[1], cmap=plt.get_cmap('gray')) plt.subplot(223) plt.imshow(X_train[2], cmap=plt.get_cmap('gray')) plt.subplot(224) plt.imshow(X_train[3], cmap=plt.get_cmap('gray')) # show the plot plt.show() import numpy from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense from keras.layers import Dropout from keras.utils import np_utils import matplotlib.pyplot as plot # load data (X_train, y_train), (X_test, y_test) = mnist.load_data() # flatten 28*28 images to a 784 vector for each image num_pixels = X_train.shape[1] * X_train.shape[2] X_train_cnv = X_train.reshape(X_train.shape[0], num_pixels).astype('float32') X_test_cnv = X_test.reshape(X_test.shape[0], num_pixels).astype('float32') print(num_pixels) # normalize inputs from 0-255 to 0-1 # what does this exactly do? What is the type of X_train and X_test X_train_cnv = X_train_cnv / 255 X_test_cnv = X_test_cnv / 255 # one hot encode outputs y_train = np_utils.to_categorical(y_train) y_test = np_utils.to_categorical(y_test) num_classes = y_test.shape[1] # define baseline model def baseline_model(): # create model model = Sequential() model.add(Dense(num_pixels, input_dim=num_pixels, kernel_initializer='normal', activation='relu')) model.add(Dense(num_classes, kernel_initializer='normal', activation='softmax')) # Compile model model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return model ###Output _____no_output_____ ###Markdown Note that now tensorboard is added hereAfter this run, you can run "tensorboard --logdir /Users/khan/Documents/deeplearning/log" to see that in action:http://0.0.0.0:6006We can also embed an image in the notebook (https://stackoverflow.com/questions/32370281/how-to-include-image-or-picture-in-jupyter-notebook) ###Code from keras.callbacks import TensorBoard tensorboard = TensorBoard(log_dir='/Users/khan/Documents/deeplearning/log', histogram_freq=0, write_graph=True, write_images=False) # build the model model = baseline_model() # Fit the model model.fit(X_train_cnv, y_train, validation_data=(X_test_cnv, y_test), epochs=10, batch_size=200, verbose=2, callbacks=[tensorboard]) # Final evaluation of the model scores = model.evaluate(X_test_cnv, y_test, verbose=0) print("Baseline Error: %.2f%%" % (100-scores[1]*100)) ###Output Train on 60000 samples, validate on 10000 samples Epoch 1/10 3s - loss: 0.2830 - acc: 0.9199 - val_loss: 0.1404 - val_acc: 0.9570 Epoch 2/10 3s - loss: 0.1104 - acc: 0.9674 - val_loss: 0.0952 - val_acc: 0.9713 Epoch 3/10 3s - loss: 0.0706 - acc: 0.9797 - val_loss: 0.0799 - val_acc: 0.9768 Epoch 4/10 3s - loss: 0.0500 - acc: 0.9855 - val_loss: 0.0698 - val_acc: 0.9777 Epoch 5/10 3s - loss: 0.0357 - acc: 0.9896 - val_loss: 0.0631 - val_acc: 0.9804 Epoch 6/10 3s - loss: 0.0260 - acc: 0.9932 - val_loss: 0.0666 - val_acc: 0.9777 Epoch 7/10 3s - loss: 0.0200 - acc: 0.9949 - val_loss: 0.0591 - val_acc: 0.9807 Epoch 8/10 3s - loss: 0.0152 - acc: 0.9965 - val_loss: 0.0617 - val_acc: 0.9811 Epoch 9/10 3s - loss: 0.0114 - acc: 
0.9975 - val_loss: 0.0611 - val_acc: 0.9812 Epoch 10/10 3s - loss: 0.0079 - acc: 0.9986 - val_loss: 0.0608 - val_acc: 0.9818 Baseline Error: 1.82% ###Markdown Now let's try to load a digit image and see how it works ###Code from PIL import Image, ImageFilter
import PIL.ImageOps
from numpy import array

class ScaleUtils():
    def __init__(self, p, h=28, w=28):
        self.im = Image.open(p)
        self.size = (h,w)
    def toGrey(self):
        self.im = self.im.convert("L")
        return self
    def resize(self):
        self.im = self.im.resize(self.size)
        return self
    def invert(self):
        self.im = PIL.ImageOps.invert(self.im)
        return self
    def getArray(self):
        return array(self.im)
    def getImage(self):
        return self.im
    def run(self):
        return self.toGrey().resize().getArray()

# this particular one requires invert (MNIST digits are white on a black background)
ScaleUtils("/Users/khan/Desktop/test_8_digit.png").toGrey().resize().invert().getImage().save("output.jpg", "JPEG")
 ###Output _____no_output_____ ###Markdown Predict using one from training set ###Code plt.imshow(X_train[1])
plt.show()

# This sample is already flattened and normalized to the 0-1 range
sample = X_train_cnv[1]
X = sample.reshape(1, 784)
pr = model.predict_classes(X)
print(pr)

from scipy.misc import imread
import matplotlib.pyplot as plt

im = imread("output.jpg")
plt.imshow(im)
plt.show()

# Scale the custom image the same way as the training data
# (flatten to 784 values and normalize from 0-255 to 0-1) before predicting
X = (im.astype('float32') / 255).reshape(1, 784)
pr = model.predict_classes(X)
print(pr)
 ###Output _____no_output_____

Python/4_model_selection.ipynb
###Markdown Assignment 1: Time Series Forecast With Python (Seasonal ARIMA)**Lecturer**: Vincent Claes**Authors:** Bryan Honof, Jeffrey Gorissen**Start Date:** 19/10/2018 **Objective:** Visualize and predict the future temperatures via ARIMA**Description:** In this notebook we train our model**This notebook is really only used to calculate the best parameters so most of the description is left out.** ###Code import warnings import itertools import numpy as np import pandas as pd import statsmodels.api as sm import matplotlib.pyplot as plt from sklearn.metrics import mean_absolute_error plt.style.use('fivethirtyeight') data_csv = pd.read_csv('./data/data.csv') data = pd.DataFrame() # Convert the creation_date column to datetime64 data['dateTime'] = pd.to_datetime(data_csv['dateTime']) # Convert the value column to float data['temperature'] = pd.to_numeric(data_csv['temperature']) # Set the dateTime column as index data = data.set_index(['dateTime']) # Sort the dataFrame just to be sure... data = data.sort_index() data = data.dropna() # Double check the results data.info() df = data data.tail(5) ###Output _____no_output_____ ###Markdown Search for best parameters```p``` is the auto-regressive part of the model. It allows us to incorporate the effect of past values into our model. Intuitively, this would be similar to stating that it is likely to be warm tomorrow if it has been warm the past 3 days.```d``` is the integrated part of the model. This includes terms in the model that incorporate the amount of differencing (i.e. the number of past time points to subtract from the current value) to apply to the time series. Intuitively, this would be similar to stating that it is likely to be same temperature tomorrow if the difference in temperature in the last three days has been very small.```q``` is the moving average part of the model. This allows us to set the error of our model as a linear combination of the error values observed at previous time points in the past.We will use a "grid search" to iteratively explore different combinations of parameters. For each combination of parameters, we fit a new seasonal ARIMA model with the ```SARIMAX()``` function from the statsmodels module and assess its overall quality. Once we have explored the entire landscape of parameters, our optimal set of parameters will be the one that yields the best performance for our criteria of interest. Let's begin by generating the various combination of parameters that we wish to assess: ###Code # Define the p, d and q parameters to take any value between 0 and 2 p = d = q = range(0, 2) # Generate all different combinations of p, q and q triplets pdq = list(itertools.product(p, d, q)) # Generate all different combinations of seasonal p, q and q triplets seasonal_pdq = [(x[0], x[1], x[2], 24) for x in list(itertools.product(p, d, q))] print('Examples of parameter combinations for Seasonal ARIMA...') print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[1])) print('SARIMAX: {} x {}'.format(pdq[1], seasonal_pdq[2])) print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[3])) print('SARIMAX: {} x {}'.format(pdq[2], seasonal_pdq[4])) ###Output Examples of parameter combinations for Seasonal ARIMA... SARIMAX: (0, 0, 1) x (0, 0, 1, 24) SARIMAX: (0, 0, 1) x (0, 1, 0, 24) SARIMAX: (0, 1, 0) x (0, 1, 1, 24) SARIMAX: (0, 1, 0) x (1, 0, 0, 24) ###Markdown Here we took a p, d, q value between 0 and 2. We could increase this number to get even more accurate predictions but for times sake we use 0 and 2. 
(We ran another test with 0 and 3 as range. The result of that test is what we used to do our prediction in the next notebook) ###Code warnings.filterwarnings("ignore") # specify to ignore warning messages AIC = [] _param = [] _seasonal_param = [] for param in pdq: for param_seasonal in seasonal_pdq: try: mod = sm.tsa.statespace.SARIMAX(df, order=param, seasonal_order=param_seasonal, enforce_stationarity=False, enforce_invertibility=False) results = mod.fit() pred = results.get_prediction(dynamic=False) AIC.append(round(results.aic, 2)) _param.append(param) _seasonal_param.append(param_seasonal) print('ARIMA{}x{}24 - AIC:{}'.format(param, param_seasonal, round(results.aic, 2))) except: continue min(AIC) pos = AIC.index(min(AIC)) print(_param[pos], _seasonal_param[pos], min(AIC)) order = _param[pos] seasonal_order = _seasonal_param[pos] ###Output (1, 1, 1) (0, 1, 1, 24) 775.46
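###Markdown With the best-scoring configuration stored in `order` and `seasonal_order`, a quick sanity check before moving on to the prediction notebook is to fit that single model and look at a short out-of-sample forecast. This is only a sketch; the 48-step horizon is an arbitrary choice. ###Code # Fit the configuration selected by the grid search and inspect the coefficients
best_model = sm.tsa.statespace.SARIMAX(df,
                                       order=order,
                                       seasonal_order=seasonal_order,
                                       enforce_stationarity=False,
                                       enforce_invertibility=False)
best_results = best_model.fit()
print(best_results.summary().tables[1])

# Short out-of-sample forecast with a confidence interval
forecast = best_results.get_forecast(steps=48)
forecast_ci = forecast.conf_int()

ax = df['temperature'].plot(label='observed', figsize=(12, 6))
forecast.predicted_mean.plot(ax=ax, label='forecast')
ax.fill_between(forecast_ci.index, forecast_ci.iloc[:, 0], forecast_ci.iloc[:, 1],
                color='k', alpha=0.2)
plt.legend()
plt.show()
 ###Output _____no_output_____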
Alanine Tutorial/TUTORIAL_PART_2_Alanine_Solution.ipynb
###Markdown Welcome to part 2 of the alanine tutorial. Here we take the computed dataset we used previously and solve for site specific delta values. We begin by importing. ###Code import sys; sys.path.insert(0, '..') import alanineTest import readInput as ri import fragmentAndSimulate as fas import solveSystem as ss import basicDeltaOperations as op import copy from tqdm import tqdm import solveSystem as ss import numpy as np import sympy as sy from datetime import date today = date.today() ###Output _____no_output_____ ###Markdown Here, we have imported the alanineTest.py file which we discussed before, allowing us to predict what the dataset should look like for a given standard. We standardize by defining and computing a forward model, without experimental fractionation and with all peaks observed, giving us theoretical values for each peak. To do so, we need to specify some hypothesized standard structure, which will often be wrong; it turns out that sample standard comparisons are robust to reasonable natural abundance errors in our hypothesized standard, so this does not need to be perfect. ###Code deltas = [-30,-30,0,0,0,0] fragSubset = ['full','44'] df, expandedFrags, fragSubgeometryKeys, fragmentationDictionary = alanineTest.initializeAlanine(deltas, fragSubset) forbiddenPeaks = {} predictedMeasurement, MNDictStd, FF = alanineTest.simulateMeasurement(df, fragmentationDictionary, expandedFrags, fragSubgeometryKeys, abundanceThreshold = 0, massThreshold = 1, unresolvedDict = {}, outputFull = False) ###Output Calculating Isotopologue Concentrations ###Markdown Next, we read in our sample and standard files. We have specific functions in the readInput.py for doing so; note that different functions are used for importing experimental data. The error option allows us to specify an error for each observed peak of sample and standard, e.g. a 1 per mil error on each observed peak. Likely experimental results will have different errors for different beams, but this is a basic way to understand how error will propagate. We can set error again for the molecular average measurement, from ri.SampleUValues ###Code standardJSON = ri.readJSON(str(today) + " TUTORIAL Standard Stochastic.json") processStandard = ri.readComputedData(standardJSON, error = 0, theory = predictedMeasurement) sampleJSON = ri.readJSON(str(today) + " TUTORIAL Sample Stochastic.json") processSample = ri.readComputedData(sampleJSON, error = 0) UValuesSmp = ri.readComputedUValues(sampleJSON, error = 0) processStandard ###Output _____no_output_____ ###Markdown Now, we set up the matrix inversion problem to find site-specific information. At this point, it would be useful to review the theory paper's description of this process. Computationally, we first determine which isotopologues were introduced via this experiment and track where they appear in the different fragments. We include a "precise identity" specifying in words which isotopologue these correspond to. ###Code MNKey = "M1" isotopologuesDict = fas.isotopologueDataFrame(MNDictStd, df) Isotopologues = isotopologuesDict[MNKey] Isotopologues ###Output _____no_output_____ ###Markdown Next, we define "O value correction factors". We don't discuss these in detail here; briefly, they correct for instances when we do not see all of the M+N relative abundance associated with a fragment, e.g. because we do not observe certain low abundance ion beams. See the Appendix for a detailed discussion. 
###Code OCorrection = ss.OValueCorrectTheoretical(predictedMeasurement, processSample, massThreshold = 1) OCorrection ###Output _____no_output_____ ###Markdown To solve and propagate error, we perform a monte carlo routine. We will go through the steps of this routine now (compare cf. solveSystem.M1MonteCarlo) First, we perturb our O factors (again, see Appendix). We then perturb our standard based on their observed errors. Following perturbation, we calculate correction factors by taking the ratio between the perturbed and predicted abundance. ###Code variableOCorrect = copy.deepcopy(OCorrection) variableOCorrect = ss.modifyOValueCorrection(OCorrection, variableOCorrect, MNKey, amount = 0) std = ss.perturbStandard(processStandard, theory = True) std ###Output _____no_output_____ ###Markdown Next, we perturb our sample. This process has a few subroutines; we first discuss those, then show how they are run in a single function. Our first subroutine perturbs based on experimental error, as is done with the standard. See solveSystem.perturbSample ###Code perturbSample = ss.perturbSampleError(processSample) perturbSample ###Output _____no_output_____ ###Markdown Next, we apply correction factors calculated from the standard to the sample. Renormalize should be set to be True, to remove the "W" factor discussed in the Appendix. ###Code correctedSample = ss.perturbSampleCorrectionFactors(perturbSample, std, renormalize = True) correctedSample ###Output _____no_output_____ ###Markdown Finally, we apply our U Value correction factors, scaling our observations based on the hypothesized abundances of unobserved peaks. The output of this function gives the data we will use for the matrix routine. ###Code OCorrectedSample = ss.perturbSampleOCorrection(correctedSample, variableOCorrect) OCorrectedSample ###Output _____no_output_____ ###Markdown Rather than run these functions individually each time, we have a parent function which deals with all of the sample perturbation. This function additionally processes the UValueCorrectedSample variables to give dataframes, and allows us to procedurally turn on or off different correction factors. Advanced users can go into more detail with these by reading the relevant sections of the paper and looking at the function description. The M1 entry includes a corrected, standardized relative abundance for each observed ion beam. ###Code smp = ss.perturbSample(processSample, std, variableOCorrect, experimentalOCorrectList = [])['M1'] smp ###Output _____no_output_____ ###Markdown Next, we use knowledge of which isotopologues fragment to yield which substitutions and the corrected, standardized relative abundances to set up the matrix problem we hope to invert. The "comp" variable gives the composition matrix, specifying how each isotopologue is sampled in each observation. The columns of this matrix correspond to the Isotopologues in the Isotopologues dataframe; the first column is 13C Ccarboxyl, the second is 13C Calphabeta, and so forth. The rows refer to different observations; the first gives closure, then the second gives the full.D observation, the third gives full.15N, the fourth full.17O, and the fifth full.13C; the pattern repeats for the 44 peak. The rows of the "meas" vector are given in the same way. ###Code comp, meas = ss.constructMatrix(Isotopologues, smp, MNKey, fragmentationDictionary) sy.Matrix(comp) sy.Matrix(meas) ###Output _____no_output_____ ###Markdown We can solve this matrix inversion multiple ways. 
First, we can use the np.linalg.lstsq routine, which is most useful for fully constrained systems. Alternatively, we can run a Gauss-Jordan elimination algorithm; this is useful for underconstrained systems, as it can help us determine which individual isotopologues are unsolved for, and which are well constrained. ###Code sol = np.linalg.lstsq(comp, meas, rcond = -1) sol[0] AugMatrix = np.column_stack((comp, meas)) solve = ss.GJElim(AugMatrix, augMatrix = True) sy.Matrix(solve[0]) ###Output _____no_output_____ ###Markdown This entire process is performed by the M1MonteCarlo function, which perturbs U Values, standard, and sample, then solves the system, for N steps. The options GJ and debugMatrix can both be set to True in order to see output from every step of the GJ Solution, which may help advanced users troubleshoot. They should look at the code to see exactly what this function is outputting in that case. ###Code M1Results = ss.M1MonteCarlo(processStandard, processSample, OCorrection, isotopologuesDict, fragmentationDictionary, N = 100, GJ = False, debugMatrix = False, perturbTheoryOAmt = 0) M1Results ###Output _____no_output_____ ###Markdown Next, we need to process these results and make sense of them. The key mathematical step here is going from M+N relative abundance space (where our solution is now) into U Value space, making use of the U^M+N variable. We encourage the user to review the relevant parts of the theory paper here. We will perform all of these steps for each solution to the Monte Carlo process; as an example, we take one solution and demonstrate the process. We first take all the isotopologues that were introduced and assign their percent abundances from the solution. ###Code out = isotopologuesDict['M1'][['Number','Stochastic','Composition','Stochastic U','Precise Identity']].copy() out[MNKey + ' M+N Relative Abundance'] = M1Results['NUMPY'][0] out ###Output _____no_output_____ ###Markdown Then, we calculate the U^M+1 value. The process for this is elaborated in the M+N theory paper; briefly, we take the observed U Value for some isotope of interest and divide by the sum of the percent abundances of all isotopologues with that isotopic composition. For example, we would take the 13C U value and divide by the M1 M+N Relative Abundance of 13C Ccarboxyl + 13C Calphabeta.We may do this for multiple substitutions; the ones we will do it for are determined by the UMNSub parameter, a list of U Values to use. If we calculate it multiple ways, we take the average and use this as our U^M+1. ###Code #Perturb U Values UPerturb = ss.PerturbUValue(UValuesSmp) #Calculate UM1 UM1 = ss.calcUMN(MNKey, out, UPerturb, UMNSub = ['13C']) out['UM1'] = UM1 out['Calc U Values'] = out[MNKey + ' M+N Relative Abundance'] * out['UM1'] out ###Output _____no_output_____ ###Markdown We can compute delta values directly using the Isotopologues dataframe. But it would be convenient to calculate them for our initial input dataframe, which lists the sites in a different order. So we next process this information to follow the same order as our input dataframe. Then we normalize fo the number of atoms, and compute both the absolute delta in VPDB etc. space and the relative sample standard delta. The "absolute delta" will generally be incorrect unless we have perfect knowledge of the standard; the relative delta will generally be accurate. Keep in mind that relative deltas are not directly additive--if the sample is -40, and the standard is -30, the relative delta will not be precisely -10! 
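Concretely, if the standard sits at $-30$ per mil and the sample at $-40$ per mil on the absolute scale, the sample/standard relative delta is

$$\delta_{smp/std} = \left(\frac{1 + \delta_{smp}/1000}{1 + \delta_{std}/1000} - 1\right)\times 1000 = \left(\frac{0.960}{0.970} - 1\right)\times 1000 \approx -10.3 \;\text{per mil},$$

not exactly $-10$ per mil.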
###Code #This section reassigns the solutions of the isotopologues dataframe to the right order for the #site-specific dataframe M1 = [0] * len(out.index) UM1 = [0] * len(out.index) U = [0] * len(out.index) for i, v in out.iterrows(): identity = v['Precise Identity'].split(' ')[1] index = list(df.index).index(identity) M1[index] = v[MNKey + ' M+N Relative Abundance'] UM1[index] = v['UM1'] U[index] = v['Calc U Values'] #calculate relevant information normM1 = U / df['Number'] #This gives deltas in absolute reference frame smpDeltasAbs = [op.ratioToDelta(x,y) for x, y in zip(df['IDS'], normM1)] appxStd = df['deltas'] #This gives deltas relative to standard relSmpStdDeltas = [op.compareRelDelta(atomID, delta1, delta2) for atomID, delta1, delta2 in zip(df['IDS'], appxStd, smpDeltasAbs)] relSmpStdDeltas ###Output _____no_output_____ ###Markdown As before, we define a single function that takes care of all of this, repeating the process for the N results from the Monte Carlo model. ###Code processedResults = ss.processM1MCResults(M1Results, UValuesSmp, isotopologuesDict, df, UMNSub = ['13C']) processedResults ###Output _____no_output_____ ###Markdown Finally, we update the original dataframe with these answers, calculating means and errors (standard deviations) for each value. ###Code ss.updateSiteSpecificDfM1MC(processedResults, df) ###Output _____no_output_____
GDP_Toni/Test_Zill.ipynb
###Markdown Plot Percent Change in Median Home Sales Price Year to Year: States vs. Total USThe following codes cleans the csv that contains median home sales price for every month starting March, 2008 for all states and the total United States, aggregates by year, and plots the percent change. ###Code us_wa_hp=us_states_hp.loc[(us_states_hp['RegionName']== 'Washington') | (us_states_hp['RegionName']== 'Colorado') | (us_states_hp['RegionName']== 'Oregon')| (us_states_hp['RegionName']== 'United States'),['RegionName','2008-03','2008-04','2008-05','2008-06','2008-07', '2008-08','2008-09','2008-10','2008-11','2008-12','2009-01','2009-02','2009-03','2009-04','2009-05','2009-06','2009-07','2009-08', '2009-09','2009-10','2009-11','2009-12','2010-01','2010-02','2010-03','2010-04','2010-05','2010-06','2010-07','2010-08','2010-09', '2010-10','2010-11','2010-12','2011-01','2011-02','2011-03','2011-04','2011-05','2011-06','2011-07','2011-08','2011-09','2011-10', '2011-11','2011-12','2012-01','2012-02','2012-03','2012-04','2012-05','2012-06','2012-07','2012-08','2012-09','2012-10','2012-11', '2012-12','2013-01','2013-02','2013-03','2013-04','2013-05','2013-06','2013-07','2013-08','2013-09','2013-10','2013-11','2013-12', '2014-01','2014-02','2014-03','2014-04','2014-05','2014-06','2014-07','2014-08','2014-09','2014-10','2014-11','2014-12','2015-01', '2015-02','2015-03','2015-04','2015-05','2015-06','2015-07','2015-08','2015-09','2015-10','2015-11','2015-12','2016-01','2016-02', '2016-03','2016-04','2016-05','2016-06','2016-07','2016-08','2016-09','2016-10','2016-11','2016-12','2017-01','2017-02','2017-03', '2017-04','2017-05','2017-06','2017-07','2017-08','2017-09','2017-10','2017-11','2017-12','2018-01','2018-02','2018-03','2018-04', '2018-05','2018-06','2018-07','2018-08','2018-09','2018-10','2018-11']] us_wa_hp.set_index('RegionName',inplace=True) us_wa_hp=us_wa_hp.transpose() us_wa_hp.reset_index(inplace=True) us_wa_hp[['Year','Month']]=us_wa_hp['index'].str.split('-',expand=True) # us_wa_hp us_wa_avg=pd.DataFrame(data=[us_wa_hp.groupby('Year')['Washington'].median(),us_wa_hp.groupby('Year')['United States'].median(),us_wa_hp.groupby('Year')['Colorado'].median(),us_wa_hp.groupby('Year')['Oregon'].median()]).transpose() # us_wa_avg us_wa_avg['WA_Percent_Change']=us_wa_avg['Washington'].pct_change()*100 us_wa_avg['CO_Percent_Change']=us_wa_avg['Colorado'].pct_change()*100 us_wa_avg['OR_Percent_Change']=us_wa_avg['Oregon'].pct_change()*100 us_wa_avg['US_Percent_Change']=us_wa_avg['United States'].pct_change()*100 us_wa_avg testfig2,testax2=plt.subplots() x_axis=[2009,2010,2011,2012,2013,2014,2015,2016,2017,2018] # testax2.plot(x_axis,z,marker='o',color='green',label='5 Change in Median Home Sales Price') testax2.plot(x_axis,us_wa_avg['WA_Percent_Change'].dropna(),label='Washington',marker='o',color='mediumpurple') testax2.plot(x_axis,us_wa_avg['OR_Percent_Change'].dropna(),label='Oregon',marker='o',color='darkorange') testax2.plot(x_axis,us_wa_avg['CO_Percent_Change'].dropna(),label='Colorado',marker='o',color='palevioletred') testax2.plot(x_axis,us_wa_avg['US_Percent_Change'].dropna(),label='Total US',marker='o',color='dimgrey') testfig2.suptitle("% Change in Median Home Sales Price", fontsize=16, fontweight="bold") plt.legend(loc='best') plt.xlabel("Years") plt.ylabel("Change from Previous Year (%)") plt.xticks(x_axis,rotation='vertical') testax2.set_facecolor('whitesmoke') plt.show() ###Output _____no_output_____ ###Markdown Build Heat Map of Sales in Washington with Retailer Markers ###Code # 
business.head() retailers=business.loc[(business['Type']=='MARIJUANA RETAILER/MEDICAL MARIJUANA ENDORSEMENT') | (business['Type']=='MARIJUANA RETAILER'),: ] retailers['UBI']=retailers.UBI.astype(str).apply(lambda x: x[:9]) retailers.dropna(subset=['UBI'],inplace=True) retailers['UBI']=retailers['UBI'].astype(int) pot_sales=pot_sales.loc[pot_sales['Total Sales'] != 0,:] pot_sales.dropna(subset=['UBI'],inplace=True) # pot_sales.head() ret_sales=retailers.merge(pot_sales,on='UBI',how='inner') dates=pd.to_datetime(ret_sales['Period Start'],format='%m/%d/%Y') ret_sales['Sales Month']=dates.apply(lambda x: x.strftime('%Y-%m')) ret_sales['Zip']=ret_sales['Zip'].astype(str).apply(lambda x: x[:5]).astype(int) sales_by_city=pd.DataFrame(ret_sales.groupby(['City','State'])['Total Sales'].sum()) sales_by_city.reset_index(inplace=True) url = "https://maps.googleapis.com/maps/api/geocode/json?address=" lat=[] lng=[] for i in range(len(sales_by_city)): query_url = url + sales_by_city.iloc[i]['City'] + ",+WA&key=" + gkey response = requests.get(query_url) json = response.json() lat.append(json['results'][0]['geometry']['location']['lat']) lng.append(json['results'][0]['geometry']['location']['lng']) sales_by_city['lat']=lat sales_by_city['lng']=lng sales_by_address=pd.DataFrame(ret_sales.groupby(['Address','City','State'])['Total Sales'].sum()) sales_by_address.reset_index(inplace=True) lat_mark=[] lng_mark=[] for i in range(len(sales_by_address)): query_url_mark= url + sales_by_address.iloc[i]['Address']+ ",+" + sales_by_address.iloc[i]['City'] + ",+WA&key=" + gkey json_mark = requests.get(query_url_mark).json() try: lat_mark.append(json_mark['results'][0]['geometry']['location']['lat']) lng_mark.append(json_mark['results'][0]['geometry']['location']['lng']) except: lat_mark.append('none') lng_mark.append('none') sales_by_address['lat']=lat_mark sales_by_address['lng']=lng_mark locations_marker=sales_by_address[sales_by_address['lat']!='none'] # Store latitude and longitude in locations locations = sales_by_city[["lat", "lng"]] weight=sales_by_city['Total Sales'].astype(float) heat_layer = gmaps.heatmap_layer( locations, weights=weight,dissipating=False,point_radius=0.8) locations_marker=sales_by_address[sales_by_address['lat']!='none'] locations_marker=locations_marker[['lat','lng']] marker_layer=gmaps.symbol_layer(locations_marker,fill_color='green',stroke_color='black',scale=2) fig = gmaps.figure() fig.add_layer(heat_layer) fig.add_layer(marker_layer) fig ###Output _____no_output_____ ###Markdown Make Timeseries of Average % Change in Median Home Sales Price in Zip Codes with Marijuana Retailers ###Code wa_all_hp=homeprice.loc[homeprice['StateName']=='Washington',['RegionName','2008-03','2008-04','2008-05','2008-06','2008-07', '2008-08','2008-09','2008-10','2008-11','2008-12','2009-01','2009-02','2009-03','2009-04','2009-05','2009-06','2009-07','2009-08', '2009-09','2009-10','2009-11','2009-12','2010-01','2010-02','2010-03','2010-04','2010-05','2010-06','2010-07','2010-08','2010-09', '2010-10','2010-11','2010-12','2011-01','2011-02','2011-03','2011-04','2011-05','2011-06','2011-07','2011-08','2011-09','2011-10', '2011-11','2011-12','2012-01','2012-02','2012-03','2012-04','2012-05','2012-06','2012-07','2012-08','2012-09','2012-10','2012-11', '2012-12','2013-01','2013-02','2013-03','2013-04','2013-05','2013-06','2013-07','2013-08','2013-09','2013-10','2013-11','2013-12', '2014-01','2014-02','2014-03','2014-04','2014-05','2014-06','2014-07','2014-08','2014-09','2014-10','2014-11','2014-12','2015-01', 
'2015-02','2015-03','2015-04','2015-05','2015-06','2015-07','2015-08','2015-09','2015-10','2015-11','2015-12','2016-01','2016-02', '2016-03','2016-04','2016-05','2016-06','2016-07','2016-08','2016-09','2016-10','2016-11','2016-12','2017-01','2017-02','2017-03', '2017-04','2017-05','2017-06','2017-07','2017-08','2017-09','2017-10','2017-11','2017-12','2018-01','2018-02','2018-03','2018-04', '2018-05','2018-06','2018-07','2018-08','2018-09','2018-10','2018-11']] wa_all_hp.rename(columns={'RegionName':'Zip_MedianHomeSale'},inplace=True) wa_all_hp.set_index('Zip_MedianHomeSale',inplace=True) wa_all_hp=wa_all_hp.transpose() wa_all_hp.head() mj_sales_zip=pd.crosstab(ret_sales['Zip'],ret_sales['Sales Month'],values=ret_sales['Total Sales'],aggfunc=np.sum).transpose() # mj_sales_zip=pd.crosstab(ret_sales['Zip'],ret_sales['Sales Month'],values=ret_sales['Total Sales'],aggfunc=np.sum) mj_sales_zip.head() all_data=mj_sales_zip.join(wa_all_hp,how='outer',lsuffix='_mjsales',rsuffix='_homeprice') all_data df=all_data.filter(like='_',axis=1) df zipcodes_with_mj=df.filter(like='homeprice',axis=1) zipcodes_with_mj list1=list(zipcodes_with_mj) zipcodes_with_mj.reset_index(inplace=True) zipcodes_with_mj[['Year','Month']]=zipcodes_with_mj['index'].str.split('-',expand=True) zipcodes_with_mj.groupby('Year')['{0}'.format('98103_homeprice')].mean() yearavg=pd.DataFrame(index=['2008','2009','2010','2011','2012','2013','2014','2015','2016','2017','2018']) for i in list1: yearavg=yearavg.join(zipcodes_with_mj.groupby('Year')['{0}'.format(i)].median(),how='outer') yearavg yearavgchange=pd.DataFrame(index=['2008','2009','2010','2011','2012','2013','2014','2015','2016','2017','2018']) for col in yearavg: yearavgchange=yearavgchange.join(yearavg['{0}'.format(col)].pct_change()*100,how='outer',rsuffix='_pctchange') yearavgchange.mean(axis=1) testfig3,testax3=plt.subplots() x_axis=[2009,2010,2011,2012,2013,2014,2015,2016,2017,2018] # testax2.plot(x_axis,z,marker='o',color='green',label='5 Change in Median Home Sales Price') testax3.plot(x_axis,yearavgchange.mean(axis=1).dropna(),label='Zip Codes with Marijuana Retailers',marker='o',color='mediumseagreen') testax3.plot(x_axis,us_wa_avg['WA_Percent_Change'].dropna(),label='Washington',marker='o',color='mediumpurple') testax3.plot(x_axis,us_wa_avg['US_Percent_Change'].dropna(),label='Total US',marker='o',color='dimgrey') testfig3.suptitle("% Change in Median Home Sales Price", fontsize=16, fontweight="bold") plt.legend(loc='best') plt.xlabel("Years") plt.ylabel("Change from Previous Year (%)") plt.xticks(x_axis,rotation='vertical') testax3.set_facecolor('whitesmoke') plt.show() ###Output _____no_output_____ ###Markdown Extra Code ###Code testdf=df[['98012_mjsales','98012_homeprice']] testdf['PercChangeHP']=testdf['98012_homeprice'].pct_change()*100 testdf.reset_index(inplace=True) testdf[['Year','Month']]=testdf['index'].str.split('-',expand=True) testdf avg_home_price=pd.DataFrame(testdf.groupby('Year')['98012_homeprice'].mean()) avg_home_price testfig,testax=plt.subplots() testax.plot(avg_home_price.index.values,avg_home_price['98012_homeprice'],marker='o') plt.show() x=pd.DataFrame(list(df)) x[0]=x[0].str.split('_',expand=True) y=x[0].unique() y fig,ax=plt.subplots() fig.suptitle("Marijuana vs Home Price") for i in y: df1=df[['{0}_mjsales'.format(i),'{0}_homeprice'.format(i)]] df1.dropna(inplace=True) ax.plot(df1["{0}_mjsales".format(i)],df1["{0}_homeprice".format(i)],marker='x',linewidth=0) plt.show() df1=df[['98004_mjsales','98004_homeprice']] df1.dropna(inplace=True) 
df1.plot(kind="scatter", x="98004_mjsales", y="98004_homeprice", grid=True, title="Marijuana vs Home Price") plt.show() df2=df[['98012_mjsales','98012_homeprice']] df2.dropna(inplace=True) df2.plot(kind="scatter", x="98012_mjsales", y="98012_homeprice", grid=True, title="Marijuana vs Home Price") plt.show() fig,ax=plt.subplots() fig.suptitle("Marijuana vs Home Price") ax.plot(df1["98004_mjsales"],df1["98004_homeprice"],marker='o',linewidth=0) ax.plot(df2["98012_mjsales"],df2["98012_homeprice"],marker='o',linewidth=0) plt.show() df3=df[['98027_mjsales','98027_homeprice']] df3.dropna(inplace=True) df3.plot(kind="scatter", x="98027_mjsales", y="98027_homeprice", grid=True, title="Marijuana vs Home Price") plt.show() ###Output C:\Users\mstos\AppData\Local\Continuum\anaconda3\lib\site-packages\ipykernel_launcher.py:2: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
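###Markdown The SettingWithCopyWarning printed above is raised because `dropna(inplace=True)` runs on a slice of `df`. A minimal sketch of one way to avoid it (reusing the column names from the cells above, nothing new introduced) is to take an explicit copy before mutating: ###Code
# Hedged sketch: copy the slice so pandas does not warn about
# assigning to a view of the original dataframe.
df1 = df[['98004_mjsales', '98004_homeprice']].copy()
df1.dropna(inplace=True)
###Output _____no_output_____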
Lesson13_GUIs/GUI.ipynb
###Markdown GUIPython has a few options for creating GUIs (Graphical User Interfaces). We will learn Tkinter because it is part of the standard library and easy to use. GUIs are a type of user interface that allows users to run your program using visual indicators like buttons, text boxes, and scroll bars. If you are creating a program that will be run by a casual user, a GUI is a good option. Tkinter provides a python interface to the Tk GUI toolkit. Tk is a cross platform library that allows creating native desktop GUI elements. Tkinter is a module, so to start, we will have to import it import tkinter as Tkinter ###Code import tkinter as Tkinter ###Output _____no_output_____ ###Markdown Creating the main windowYou will want to create the application's main window that the action will take place in. To do this, you create a Tk object root = Tkinter.Tk() ###Code root = Tkinter.Tk() ###Output _____no_output_____ ###Markdown That didn't do much, did it. To actually create the window you have to run the mainloop function on your main window root.mainloop() This creates the window and starts listening for actions that the user takes. This is an infinite loop, and is one of the cases where an infinite loop is actually useful (don't worry, clicking the close button on the window will break out of the loop). ###Code root.mainloop() ###Output _____no_output_____ ###Markdown Deleting the main windowTo quit a Tkinter application, you run the destroy method on the main window root.destroy() This won't do anything until you are broken out of the main loop, but it will clean up after that happens. ###Code root.destroy() ###Output _____no_output_____ ###Markdown TRY ITCreate a tkinter app main window and store it in the variable `top`, then run the `mainloop` function on it. WidgetsSo far our GUI has been pretty boring. To make it useful we have to add "widgets" to it. Widgets are things like buttons, textboxes, labels, etc. Anything you would expect to have in a desktop application.You need to create the widget after creating the main window, but before running `mainloop`You will also need to let Tkinter know how to lay out your widgets. Right now we will just use the `pack` method with no arguments. widget.pack() ButtonYou can create a button that when pressed will run a function. To do that you run the Button function. It takes the parent window as the first parameter, and then a list of keyword params as options for look and feel, text, and command - the function that will run when the button is pushed. It returns a reference to the widget. my_button = Tkinter.Button(window, text='my text', command=myfunction) ###Code # A basic button root = Tkinter.Tk() # Widgets go here w = Tkinter.Button(root, text='Hello') w.pack() # Run the gui root.mainloop() # a button with a callback def my_callback(): print("Here i am") root = Tkinter.Tk() # Widgets go here w = Tkinter.Button(root, text='Hello', command=my_callback) w.pack() # Run the gui root.mainloop() ###Output _____no_output_____ ###Markdown LabelA Label provides a space for the GUI programmer to place text on the screen l = Tkinter.Label(mainwindow, text='label text') ###Code # A basic label root = Tkinter.Tk() # Widgets go here w = Tkinter.Label(root, text='Hello') w.pack() # Run the gui root.mainloop() ###Output _____no_output_____ ###Markdown If you want to be able to update the text in a label, you will need to use a `StringVar` for the text, and instead of using the text keyword param, you will use the textvariable keyword parameter. 
var = Tkinter.StringVar() l = Tkinter.Label(root, textvariable=var) l.pack() var.set('New text') ###Code # A changeable label root = Tkinter.Tk() # Widgets go here v = Tkinter.StringVar() w = Tkinter.Label(root, textvariable=v) w.pack() v.set('Aaaah') # Run the gui root.mainloop() ###Output _____no_output_____ ###Markdown EntryAn entry accepts a single text line from a user. w = Tkinter.Entry(root)To get the data from an entry, you can use the get method w = Tkinter.Entry(root) text = w.get() ###Code # A basic entry root = Tkinter.Tk() e = Tkinter.Entry(root) e.pack() root.mainloop() # An entry where we get the text entered. def what_entered(): print(e.get()) root = Tkinter.Tk() e = Tkinter.Entry(root) e.pack() # Click the button to see what the text entered is b = Tkinter.Button(root, text='Say What?', command=what_entered) b.pack() root.mainloop() ###Output _____no_output_____ ###Markdown TRY ITCreate a GUI that has a Label that says "What is your name?" and an entry to enter it. If you are feeling confident, add a button that will print out the name to the screen when pressed. There are many other types of widgets including canvas for drawing images and shapes, menubutton for creating menus, radiobuttons and listboxes for making selections, and lots of others. Look at the documentation for more info: https://docs.python.org/2/library/tkinter.html Event driven programmingGUI programming uses a different model of programming than we are used to. In the past our program has run from top to bottom and the only way to change the outputs was to change the inputs (or use random...)In GUIs we use event driven programming where we create our application and then listen for events like a user clicking a button or entering some data. When an event is created, then the callback function associated with that event runs. Then the next event causes a callback to run. This all happens in an infinite loop (`mainloop`) which is broken by you either calling destroy or close in your code in response to an event or by clicking the close button for the window.In event driven programming, the user determines the workflow. It will take some time to get used to thinking in these terms. TRY ITGive yourself a few minutes to think about event driven programming and how it is different from the previous programs you have written. AttributesAll widgets can take some attribute parameters as keyword arguments. These can determine the widget's dimensions, colors, fonts, image, etc. Size* height - height of element* width - width of element ###Code # A basic button root = Tkinter.Tk() # Widgets go here w = Tkinter.Button(root, text='Hello', width=50, height=1) w.pack() # Run the gui root.mainloop() ###Output _____no_output_____ ###Markdown Color* fg - foreground color of widget* bg - background color of widgetColor can be in hex: "#ff0000" or in words "red", "green", "blue". Hex is preferred. 
PackPack can take some parameters* expand - True/False, widget should fill all space in widget's parent* side - which side to pack the widget against (TOP, BOTTOM, LEFT, RIGHT)* fill - should widget fill extra space ###Code # A basic button root = Tkinter.Tk() # Widgets go here w = Tkinter.Button(root, text='Hello') w.pack(side=Tkinter.LEFT) # Run the gui root.mainloop() ###Output _____no_output_____ ###Markdown GridYou can lay out your widgets in a table like fashion using grid.* column - column to put widget in* row - row to put widget in* columnspan - how many columns a widget occupies (default 1)* rowspan - how many rows a widget occupies (default 1) ###Code # A basic button root = Tkinter.Tk() # Widgets go here for i in range(3): for j in range(4): text = "{}-{}".format(i, j) w = Tkinter.Button(root, text=text) w.grid(row = i, column = j) # Run the gui root.mainloop() ###Output _____no_output_____
DevNotebooks/MakeMDF-Data.ipynb
###Markdown $$ y(t) = A\sin(2 \pi f t + \varphi) = A\sin(\omega t + \varphi) $$ ###Code companies = [ "HeavyEquipmentInc", ] products = [ "Bulldozer", "DumpTruck", "Excavator", ] channels = [ "engine_speed", "engine_speed_desired", "vehicle_speed", "transmission_gear", "coolant_temp", "longitude", "latitude" ] t=np.arange(0, 10, 1e-1, dtype=np.float32) A=1 f=1 sine_ = A*np.sin( 2 * np.pi * f * t ) sine = Signal( samples=sine_, timestamps=t, name="sine", unit='f8', ) signals = [ sine, ] mdf4 = MDF( version='4.10', ) mdf4.append( signals=signals, source_info='Created by '+asammdf.__version__, common_timebase=False, ) mdf4.save( dst="tmp", overwrite=True, compression=2, ) ###Output _____no_output_____
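###Markdown As a quick round-trip check, the saved file can be read back and compared against the original samples. This is only a sketch: it assumes the `save` call above produced `tmp.mf4` (asammdf appends the suffix for version 4 files) and uses `MDF.get` to pull the stored signal by name. ###Code
# Hedged sketch: re-open the file and verify the sine samples survived
mdf_check = MDF("tmp.mf4")
sine_back = mdf_check.get("sine")
np.allclose(sine_back.samples, sine_)
###Output _____no_output_____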
texts/2.2.3.ipynb
###Markdown 2.2.3 標準インターフェイスとしての列 概要 列の演算あるプログラムで処理の共通パターンを見出し、共通化します。共通モジュールとして以下を挙げます。- 列挙(enumerator) ※こちらはプログラム毎に異なります。- フィルタ(filter)- マップ(map)- 集積器 (accumulator)これらを使って、完結に処理を記述する方法を紹介します。 各モジュールのインターフェースを入力・出力にリストを使用することで、 各モジュールが接続しやすくなります。 (リストを標準インターフェースと言っています) マップのネストネストしたループをマップのネストで実現します。 練習問題- [練習問題2.33 accumulateを使ったmap/append/lengthの実装](../exercises/2.33.ipynb)- [練習問題2.34 ホーナー法](../exercises/2.34.ipynb)- [練習問題2.35 count-leaves](../exercises/2.35.ipynb)- [練習問題2.36 accumulate-n](../exercises/2.36.ipynb)- [練習問題2.37 行列演算](../exercises/2.37.ipynb)- [練習問題2.38 fold-rightとfold-left](../exercises/2.38.ipynb)- [練習問題2.39 fold-rightとfold-leftを使ったreverseの実装](../exercises/2.39.ipynb)- [練習問題2.40 unique-pairs/prime-sum-pairs](../exercises/2.40.ipynb)- [練習問題2.41 3変数のマップ](../exercises/2.41.ipynb)- [練習問題2.42 8クイーンパズル](../exercises/2.42.ipynb)- [練習問題2.43 間違った8クイーンパズルの回答](../exercises/2.43.ipynb) はじめにここでは2つのプログラムについて考えます。- sum-odd-squares手続き・・・⽊を引数に取り、奇数の葉の⼆乗の合計を出力する。- even-fibs手続き・・・フィボナッチ数が偶数となる値のリストを出力する。 ###Code ; ⽊を引数に取り、奇数の葉の⼆乗の合計を出力する。 (define (square x)(* x x)) (define (sum-odd-squares tree) (cond ((null? tree) 0) ((not (pair? tree)) ; 葉に到達したら (if (odd? tree) (square tree) ; 葉の値が奇数なら2乗を計算する 0) ; 葉の値が偶数なら無視する ) (else (+ (sum-odd-squares (car tree)) (sum-odd-squares (cdr tree))))) ) (define x (list 1 (list 2 (list 3 4) 5) (list 6 7))) (display x) (newline) (display (sum-odd-squares x)) (newline) ; フィボナッチ数が偶数となる値のリストを出力する。 (define (fib n) (cond ((= n 0) 0) ((= n 1) 1) (else (+ (fib (- n 1)) (fib (- n 2)))) ) ) (define (even-fibs n) (define (next k) (if (> k n) '() (let ((f (fib k))) (if (even? f) (cons f (next (+ k 1))) ; 偶数のフィボナッチ数は加算の対象とする -> consでつなぎ合わせてリストにする (next (+ k 1)))) ; 奇数のフィボナッチ数は無視する ) ) (next 0) ) (display (even-fibs 10)) (newline) (map fib (list 0 1 2 3 4 5 6 7 8 9 10)) ###Output _____no_output_____ ###Markdown 上記2つの処理は、一見全く異なるものに見えますが、処理を分割すると共通性が見えてきます。- sum-odd-squares手続き - 木の葉を列挙 - フィルタによって奇数を選ぶ(フィルタ) - 選ばれた数の⼆乗を求める(マップ) - +を使って、0から始めて結果を合計する(集積)- even-fibs手続き - 0からnまでの数値を列挙 - それぞれの整数に対するフィボナッチ数を求る(マップ) - フィルタによって偶数を選ぶ(フィルタ) - consを使って、空リストから始めて結果をリストにする(集積) 列の演算ここでは、以下を実装します。- フィルタ- 集積- 木の葉を列挙- 0からnまでの数値を列挙マップは定義済みのものを使用します。 列挙はプログラム毎に実装します。(ただし、出力はリスト) 図2.7:⼿続きsum-odd-squares(上)とeven-fibs(下)を信号の流れという図式によって表現すると、⼆つのプログラ ムの共通性が明らかになる。 ###Code ; フィルタ (define (filter predicate sequence) (cond ((null? sequence) '()) ( (predicate (car sequence)) (cons (car sequence) (filter predicate (cdr sequence))) ) (else (filter predicate (cdr sequence)))) ; 条件を満たさない要素は無視する ) ; 動作確認 (display (filter odd? (list 1 2 3 4 5 6 7 8 9 10))) (newline) (display (filter even? (list 1 2 3 4 5 6 7 8 9 10))) (newline) ; 集積 (define (accumulate op initial sequence) (if (null? sequence) initial (op (car sequence) (accumulate op initial (cdr sequence))))) ; cdrダウンで各要素についてopの処理を施す ; 動作確認 (display (accumulate + 0 (list 1 2 3 4 5))) (newline) (display (accumulate * 1 (list 1 2 3 4 5))) (newline) (display (accumulate cons '() (list 1 2 3 4 5))) (newline) ; 整数列の列挙 (define (enumerate-interval low high) (if (> low high) '() (cons low (enumerate-interval (+ low 1) high)))) (enumerate-interval 2 7) ; 木の葉の列挙 ; 葉の要素をリストに変換する。 ; 練習問題2.28のfringe手続きそのものであることに注意。 (define (enumerate-tree tree) (cond ((null? tree) '()) ((not (pair? tree)) (list tree)) (else (append (enumerate-tree (car tree)) (enumerate-tree (cdr tree))))) ) (enumerate-tree (list 1 (list 2 (list 3 4)) 5)) ; sum-odd-squares手続きを共通化したモジュールを使って実装した場合 (define (sum-odd-squares tree) (accumulate + 0 (map square (filter odd? 
(enumerate-tree tree)))) ) (sum-odd-squares (list 1 (list 2 (list 3 4) 5) (list 6 7))) ; even-fibs手続きを共通化したモジュールを使って実装した場合 (define (even-fibs n) (accumulate cons '() (filter even? (map fib (enumerate-interval 0 n))))) (even-fibs 10) ; 共通化したモジュールを使用することで、 ; フィボナッチ数の2乗の列挙するプログラムも定義済みの手続きの組み合わせで実装ができる。 (define (list-fib-squares n) (accumulate cons '() (map square (map fib (enumerate-interval 0 n))))) (display (map fib '(0 1 2 3 4 5 6 7 8 9 10))) (newline) (display (list-fib-squares 10)) (newline) ; 奇数の値の2乗の値の積も ; 定義済みの手続きの組み合わせで実装ができる。 (define (product-of-squares-of-odd-elements sequence) (accumulate * 1 (map square (filter odd? sequence)))) (product-of-squares-of-odd-elements (list 1 2 3 4 5)) ###Output _____no_output_____ ###Markdown ⼀般的なデータ処理アプリケーションを列の演算として定式化することもできます。 人事記録(レコード)の列があるとして、最も給料の⾼いプログラマの給料を見つけたいとします。- セレクタsalary:人事レコードに含まれる給料を返す- セレクタprogrammer?:人事レコードがプログラマのものであるかをチェックするというセレクタが用意されているとすると、 「最も給料の⾼いプログラマの給料を見つける」というプログラムは次のように書くことができるでしょう。 (define (salary-of-highest-paid-programmer records) accumulate max 0 (map salary (filter programmer? records)))) ※ここではあまり深く説明しません。入出力インターフェースとして、リストを使うことによって、 各処理モジュールを接続できるようになるので有用です。 そのため、リストは処理モジュールを接続する標準インターフェイスとして使うことができます。 (「接続」というのは、手続きの引数に手続きの呼び出しを書くこと。手続きの呼び出しがネストしている) また、構造をリストとして統一することで、 リストに対する演算と、 データ構造に依存している処理とを分けて設計することができます。 これによって、 プログラムの全体的な設計に手を加えずに、 列の表現⽅法をいろいろ試してみることができます。 リストを標準インターフェイスとして使うことは、 3.5節ストリームにの話題につながっていきます。 練習問題- [練習問題2.33 accumulateを使ったmap/append/lengthの実装](../exercises/2.33.ipynb)- [練習問題2.34 ホーナー法](../exercises/2.34.ipynb)- [練習問題2.35 count-leaves](../exercises/2.35.ipynb)- [練習問題2.36 accumulate-n](../exercises/2.36.ipynb)- [練習問題2.37 行列演算](../exercises/2.37.ipynb)- [練習問題2.38 fold-rightとfold-left](../exercises/2.38.ipynb)- [練習問題2.39 fold-rightとfold-leftを使ったreverseの実装](../exercises/2.39.ipynb) マップのネスト列というパラダイムは、ネストしたループによって表現される処理に適用することができます。  次の問題について考えてみます。 「正の整数$n$が与えられたとき、$1 \leq j < i \leq n$で、かつ$i + j$が素数となるような異なる正の整数$i$と$j$のすべての順序つきペアを⾒つけよ。」 例えば、$n$が$6$のとき、ペアは以下のようになります。 $$\begin{array}{l|ccccccc}i & 2 & 3 & 4 & 4 & 5 & 6 & 6 \\j & 1 & 2 & 1 & 3 & 2 & 1 & 5 \\ \hlinei+j & 3 & 5 & 5 & 7 & 7 & 7 & 11 \end{array}$$ この計算の実装を考えてみます。 - $n$以下の正の整数からなる、大きい順に並んだすべてのペアの列を生成する。 - フィルタによって合計が素数となるペアを選択する - フィルタを通過したそれぞれの$(i,j)$のペアに対して$(i,j,i+j)$ という三つ組を作る。ペアの列の生成は以下の方法で出来ます。- すべての整数$i \leq n$に対して整数$i$を列挙する。 -> (enumerate-interval 1 n)で列挙する。- このような$i$に対して$j < i$となる$j$を列挙し、この$i,j$からペア$(i,j)$を生成する。 -> (enumerate-interval 1 (- i 1))で列挙し、(list i j)を列挙する。- すべての$i$に対して、すべての列を(appendで集積して)組み合わせることで、求めるペアの列を生成する。これによって、それぞれの$i$に対するペアの列ができます。 ###Code ; ペア(i, j)の列の生成 (define n 6) (accumulate append '() (map (lambda (i) (map (lambda (j) (list i j)) (enumerate-interval 1 (- i 1))) ) (enumerate-interval 1 n) ) ) ; マップと集積をappendによって組み合わせる処理はよく使われるので、 ; 独立した⼿続きとして実装する。 (define (flatmap proc seq) (accumulate append '() (map proc seq))) ; フィルタの述語。 ; ペアの合計が素数かどうか。 (define (prime-sum? pair) (prime? (+ (car pair) (cadr pair)))) (define (smallest-divisor n) (find-divisor n 2) ) (define (find-divisor n test-divisor) (cond ((> (square test-divisor) n) n) ((divides? test-divisor n) test-divisor) (else (find-divisor n (+ test-divisor 1))) ) ) (define (divides? a b) (= (remainder b a) 0) ) (define (prime? n) (= n (smallest-divisor n)) ) ; フィルタを通ったペアの列に対して次の⼿続きでマップして、結果の列を⽣成する。 ; ペアの⼆つの要素とその合計からなる三つ組(i, j , i + j)を構築する。 (define (make-pair-sum pair) (list (car pair) (cadr pair) (+ (car pair) (cadr pair)))) ; 上記手続きを組み合わせて完成した手続き (define (prime-sum-pairs n) (map make-pair-sum (filter prime-sum? 
(flatmap (lambda (i) (map (lambda (j) (list i j)) (enumerate-interval 1 (- i 1))) ) (enumerate-interval 1 n) ) ) ) ) ; 動作確認 (prime-sum-pairs 6) ###Output _____no_output_____ ###Markdown マップのネストは、ある集合$S$に対する順列の列挙にも役に立ちます。 $S=\{1,2,3\}$ である場合、順列は以下のようになります。 $\{1,2,3\}, \{1,3,2\}, \{2,1,3\}, \{2,3,1\}, \{3,1,2\}, \{3,2,1\}$ 集合$S$の順列を生成する方法として、以下の方法が使えます。 $S$の各要素$x$に対して、$S - \{x\}$の順列を生成し、それぞれの先頭に$x$を追加する。 $S - \{x\}$の順列を生成して、生成した順列のそれぞれの先頭に$x$を追加することで、 $x$から始まる$S$の順列が得られます。 これを全ての$x$について行うので、$S$の順列がすべて得られます。 ###Code ; 順列 ; 練習問題2.32の回答に近いことに注意。 (define (permutations s) (if (null? s) (list '()) ;集合sは空だったら空集合を持つ列 (flatmap (lambda (x) (map (lambda (p) (cons x p)) (permutations (remove x s)))) s) ) ) ; 与えられたリストから指定された要素を除いたリストを返す。 (define (remove item sequence) (filter (lambda (x) (not (= x item))) sequence)) ; 動作確認 (remove 3 '(1 2 3 4 5 6 5 4 3 2 1)) ; 動作確認 (permutations '(1 2 3)) (define (permutations-debug s) (if (null? s) (list '()) ;集合sは空だったら空集合を持つ列 (flatmap (lambda (x) (map (lambda (p) (display x) (display ",") (display p) (display ",") (display (cons x p)) (newline) (cons x p) ) (permutations-debug (remove x s)))) s) ) ) ; 動作確認 (permutations-debug '(1 2 3)) ###Output 3,(),(3) 2,(3),(2 3) 2,(),(2) 3,(2),(3 2) 1,(2 3),(1 2 3) 1,(3 2),(1 3 2) 3,(),(3) 1,(3),(1 3) 1,(),(1) 3,(1),(3 1) 2,(1 3),(2 1 3) 2,(3 1),(2 3 1) 2,(),(2) 1,(2),(1 2) 1,(),(1) 2,(1),(2 1) 3,(1 2),(3 1 2) 3,(2 1),(3 2 1) ###Markdown s=(1 2 3)の順序列挙方法。 (1 2 3) ->x=1 s=(2 3) -> x=2 s=(3) ->(2 3) -> x=3 s=(2) ->(3 2) -> ((2 3) (3 2)) -> ((1 2 3) (1 3 2) ->x=2 s=(1 3) -> x=1 s=(3) ->(1 3) -> x=3 s=(1) ->(3 1) -> ((1 3) (3 1)) -> ((2 1 3) (2 3 1) ->x=3 s=(1 2) -> x=1 s=(2) ->(1 2) -> x=2 s=(1) ->(2 1) -> ((1 2) (2 1)) -> ((3 1 2) (3 2 1) ###Code ; (i, j)の列挙 (define (enum-nn n) (accumulate cons '() (map (lambda (i) (accumulate cons '() (map (lambda (j) (list i j)) (enumerate-interval 1 n)) ) ) (enumerate-interval 1 n) ) ) ) (enum-nn 4) ; ネストしたループの動作確認 (define (enum-nnn n) (define (iter k) (if (= k 0) (list ()) (filter (lambda (x) #t) (flatmap (lambda (rest) (map (lambda (new-row) (append rest (list new-row))) (enumerate-interval 1 n)) ) (iter (- k 1)) ) ) ) ) (iter n) ) (enum-nnn 4) ###Output _____no_output_____
notebooks/similarity_histograms.ipynb
###Markdown Histograms of C. elegans similarities ###Code # load precomputed UMAP instance with open(os.path.join(data_path_c_elegans, f"umapperns_after_seed_{seed}_eucl.pkl"), "rb") as file: umapperns = pickle.load(file) embd = umapperns.embedding_ # filter graph as done during the UMAP optimization c_elegans_fil_graph = filter_graph(umapperns.graph_, umapperns.n_epochs).tocoo() # compute the historgrams hist_high_c_elegans, \ hist_high_pos_c_elegans, \ hist_target_c_elegans, \ hist_target_pos_c_elegans, \ hist_low_c_elegans, \ hist_low_pos_c_elegans, \ bins_c_elegans = hists_from_graph_embd(graph=c_elegans_fil_graph, embedding=embd, a=a, b=b) # plot histogram of all edges plt.rcParams.update({'font.size': 22}) plt.figure(figsize=(8, 5)) plt.hist(bins_c_elegans[:-1], bins_c_elegans, weights=hist_high_c_elegans, alpha=alpha, label=r"$\mu_{ij}$") plt.hist(bins_c_elegans[:-1], bins_c_elegans, weights=hist_target_c_elegans, alpha=alpha, label=r"$\nu_{ij}^*$") plt.hist(bins_c_elegans[:-1], bins_c_elegans, weights=hist_low_c_elegans, alpha=alpha, label=r"$\nu_{ij}$") plt.legend(loc="upper center", ncol=3) plt.yscale("symlog", linthresh=1) plt.gca().spines['left'].set_position("zero") plt.gca().spines['bottom'].set_position("zero") plt.savefig(os.path.join(fig_path, f"c_elegans_hist_sims_all_log_seed_{seed}.png"), bbox_inches = 'tight', pad_inches = 0,dpi=300) # plot histogram of positive high-dimensional edges plt.rcParams.update({'font.size': 22}) plt.figure(figsize=(8, 5)) plt.hist(bins_c_elegans[:-1], bins_c_elegans, weights=hist_high_pos_c_elegans, alpha=alpha, label=r"$\mu_{ij}$") plt.hist(bins_c_elegans[:-1], bins_c_elegans, weights=hist_target_pos_c_elegans, alpha=alpha, label=r"$\nu_{ij}^*$") plt.hist(bins_c_elegans[:-1], bins_c_elegans, weights=hist_low_pos_c_elegans, alpha=alpha, label=r"$\nu_{ij}$") plt.legend(loc="upper center", ncol=3) #plt.yscale("symlog", linthresh=1) plt.gca().spines['left'].set_position("zero") plt.gca().spines['bottom'].set_position("zero") plt.savefig(os.path.join(fig_path, f"c_elegans_hist_sims_pos_seed_{seed}.png"), bbox_inches = 'tight', pad_inches = 0,dpi=300) # load UMAP instance with inverted high-dimensional similarities seed=0 with open(os.path.join(data_path_c_elegans, f"umapperns_inv_seed_{seed}.pkl"), "rb") as file: umapperns_inv = pickle.load(file) embd_inv = umapperns_inv.embedding_ c_elegans_inv_fil_graph = filter_graph(umapperns_inv.graph_, umapperns.n_epochs).tocoo() # compute all histograms hist_high_c_elegans_inv, \ hist_high_pos_c_elegans_inv, \ hist_target_c_elegans_inv, \ hist_target_pos_c_elegans_inv, \ hist_low_c_elegans_inv, \ hist_low_pos_c_elegans_inv, \ bins_c_elegans_inv= hists_from_graph_embd(graph=c_elegans_inv_fil_graph, embedding=embd_inv, a=a, b=b) # compare histograms of for positive high-dimensional and target similarities for normal and inverted high-dimensional similarities alpha=0.5 plt.rcParams.update({'font.size': 22}) plt.figure(figsize=(8, 5)) plt.hist(bins_c_elegans[:-1], bins_c_elegans, weights=hist_high_pos_c_elegans, alpha=alpha, label=r"$\mu_{ij}$") plt.hist(bins_c_elegans[:-1], bins_c_elegans, weights=hist_target_pos_c_elegans, alpha=alpha, label=r"$\nu_{ij}^*$") plt.hist(bins_c_elegans_inv[:-1], bins_c_elegans_inv, weights=hist_high_pos_c_elegans_inv, alpha=alpha, label=r"inverted $\mu_{ij}$") plt.hist(bins_c_elegans_inv[:-1], bins_c_elegans_inv, weights=hist_target_pos_c_elegans_inv, alpha=alpha, label=r"$\nu_{ij}^*$ for inverted $\mu_{ij}$") plt.legend(loc="upper center", ncol=2, handlelength=1.0) 
#plt.yscale("symlog", linthresh=1) plt.gca().spines['left'].set_position("zero") plt.gca().spines['bottom'].set_position("zero") #plt.savefig(os.path.join(fig_path, f"c_elegans_compare_no_inv_inv_{seed}.png"), # bbox_inches = 'tight', # pad_inches = 0,dpi=300) alpha=0.5 plt.rcParams.update({'font.size': 22}) plt.figure(figsize=(8, 5)) plt.hist(bins_c_elegans[:-1], bins_c_elegans, weights=hist_low_pos_c_elegans_inv, alpha=alpha, label=r"$\nu_{ij}$ for inverted $\mu_{ij}$") plt.hist(bins_c_elegans[:-1], bins_c_elegans, weights=hist_low_pos_c_elegans, alpha=alpha, label=r"$\nu_{ij}$") plt.legend(loc="upper center", ncol=2, handlelength=1.0) #plt.yscale("symlog", linthresh=1) plt.gca().spines['left'].set_position("zero") plt.gca().spines['bottom'].set_position("zero") #plt.savefig(os.path.join(fig_path, f"c_elegans_compare_no_inv_inv_low_dim_{seed}.png"), # bbox_inches = 'tight', # pad_inches = 0,dpi=300) # difference between low-dim sims for non-inverted and inverted is barely visible # maximal true repulsive weight c_elegans_push_weights_keops, _ = get_UMAP_push_weight_keops(high_sim=c_elegans_fil_graph, negative_sample_rate=umapperns.negative_sample_rate) print(c_elegans_push_weights_keops.max(1).max()) # average intended repulsive weight n_pairs = np.prod(c_elegans_fil_graph.shape) avg_high_sim_push_weight = 1/n_pairs * ((1-c_elegans_fil_graph.data).sum() # rep weights below one + n_pairs - c_elegans_fil_graph.nnz) # rep weights equal one print(avg_high_sim_push_weight) ###Output Compiling libKeOpstorch1d2e57d9b3 in /net/hcihome/storage/sdamrich/.cache/pykeops-1.4.2-cpython-38: formula: Max_Reduction((((IntCst(5) * Var(0,1,0)) * Var(1,1,1)) / Var(2,1,2)),0) aliases: Var(0,1,0); Var(1,1,1); Var(2,1,2); dtype : float32 ... Done. tensor(0.0043, device='cuda:0') 0.9998971757730714 ###Markdown Histograms of PBMC similarities ###Code # load precomputed UMAP instance with open(os.path.join(data_path_pbmc, f"umapperns_after_seed_{seed}.pkl"), "rb") as file: umapperns_pbmc = pickle.load(file) embd_pbmc = umapperns_pbmc.embedding_ pbmc_fil_graph = filter_graph(umapperns_pbmc.graph_, umapperns_pbmc.n_epochs).tocoo() hist_high_pbmc, \ hist_high_pos_pbmc, \ hist_target_pbmc, \ hist_target_pos_pbmc, \ hist_low_pbmc, \ hist_low_pos_pbmc, \ bins_pbmc = hists_from_graph_embd(graph=pbmc_fil_graph, embedding=embd_pbmc, a=a, b=b) # plot histogram of positive high-dimensional edges plt.rcParams.update({'font.size': 22}) plt.figure(figsize=(8, 5)) plt.hist(bins_pbmc[:-1], bins_pbmc, weights=hist_high_pos_pbmc, alpha=alpha, label=r"$\mu_{ij}$") plt.hist(bins_pbmc[:-1], bins_pbmc, weights=hist_target_pos_pbmc, alpha=alpha, label=r"$\nu_{ij}^*$") plt.hist(bins_pbmc[:-1], bins_pbmc, weights=hist_low_pos_pbmc, alpha=alpha, label=r"$\nu_{ij}$") plt.legend(loc="upper center", ncol=3) plt.gca().spines['left'].set_position("zero") plt.gca().spines['bottom'].set_position("zero") plt.savefig(os.path.join(fig_path, f"pbmc_hist_sims_pos_seed_{seed}.png"), bbox_inches = 'tight', pad_inches = 0, dpi=300) ###Output _____no_output_____ ###Markdown Histograms of Lunc cancer dataset similarities ###Code # load precomputed UMAP instance with open(os.path.join(data_path_lung_cancer, f"umapperns_after_seed_{seed}.pkl"), "rb") as file: umapperns_lung_cancer = pickle.load(file) embd_lung_cancer = umapperns_lung_cancer.embedding_ lung_cancer_fil_graph = filter_graph(umapperns_lung_cancer.graph_, umapperns_lung_cancer.n_epochs).tocoo() hist_high_lung_cancer, \ hist_high_pos_lung_cancer, \ hist_target_lung_cancer, \ 
hist_target_pos_lung_cancer, \ hist_low_lung_cancer, \ hist_low_pos_lung_cancer, \ bins_lung_cancer = hists_from_graph_embd(graph=lung_cancer_fil_graph, embedding=embd_lung_cancer, a=a, b=b) # plot histogram of positive high-dimensional edges plt.rcParams.update({'font.size': 22}) plt.figure(figsize=(8, 5)) plt.hist(bins_lung_cancer[:-1], bins_lung_cancer, weights=hist_high_pos_lung_cancer, alpha=alpha, label=r"$\mu_{ij}$") plt.hist(bins_lung_cancer[:-1], bins_lung_cancer, weights=hist_target_pos_lung_cancer, alpha=alpha, label=r"$\nu_{ij}^*$") plt.hist(bins_lung_cancer[:-1], bins_lung_cancer, weights=hist_low_pos_lung_cancer, alpha=alpha, label=r"$\nu_{ij}$") plt.legend(loc="upper center", ncol=3) plt.gca().spines['left'].set_position("zero") plt.gca().spines['bottom'].set_position("zero") plt.savefig(os.path.join(fig_path, f"lung_cancer_hist_sims_pos_seed_{seed}.png"), bbox_inches = 'tight', pad_inches = 0, dpi=300) ###Output _____no_output_____ ###Markdown Histogram of CIFAR similarities ###Code # load precomputed UMAP instance with open(os.path.join(data_path_cifar, f"umapperns_after_seed_{seed}.pkl"), "rb") as file: umapperns_cifar = pickle.load(file) embd_cifar = umapperns_cifar.embedding_ cifar_fil_graph = filter_graph(umapperns_cifar.graph_, 200).tocoo() hist_high_cifar, \ hist_high_pos_cifar, \ hist_target_cifar, \ hist_target_pos_cifar, \ hist_low_cifar, \ hist_low_pos_cifar, \ bins_cifar = hists_from_graph_embd(graph=cifar_fil_graph, embedding=embd_cifar, a=a, b=b) # plot histogram of positive high-dimensional edges plt.rcParams.update({'font.size': 22}) plt.figure(figsize=(8, 5)) plt.hist(bins_cifar[:-1], bins_cifar, weights=hist_high_pos_cifar, alpha=alpha, label=r"$\mu_{ij}$") plt.hist(bins_cifar[:-1], bins_cifar, weights=hist_target_pos_cifar, alpha=alpha, label=r"$\nu_{ij}^*$") plt.hist(bins_cifar[:-1], bins_cifar, weights=hist_low_pos_cifar, alpha=alpha, label=r"$\nu_{ij}$") plt.legend(loc="upper center", ncol=3) plt.gca().spines['left'].set_position("zero") plt.gca().spines['bottom'].set_position("zero") plt.savefig(os.path.join(fig_path, f"cifar_hist_sims_pos_seed_{seed}.png"), bbox_inches = 'tight', pad_inches = 0, dpi=300) ###Output _____no_output_____
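###Markdown The same three-histogram figure is rebuilt by hand for every dataset above. A small helper, sketched here using only objects already defined in this notebook (`alpha`, `fig_path`, `seed`, `plt`, `os`), would keep any further figures consistent: ###Code
# Sketch: reusable plot for the positive-edge similarity histograms
def plot_pos_hists(bins, hist_high, hist_target, hist_low, fname=None):
    plt.rcParams.update({'font.size': 22})
    plt.figure(figsize=(8, 5))
    plt.hist(bins[:-1], bins, weights=hist_high, alpha=alpha, label=r"$\mu_{ij}$")
    plt.hist(bins[:-1], bins, weights=hist_target, alpha=alpha, label=r"$\nu_{ij}^*$")
    plt.hist(bins[:-1], bins, weights=hist_low, alpha=alpha, label=r"$\nu_{ij}$")
    plt.legend(loc="upper center", ncol=3)
    plt.gca().spines['left'].set_position("zero")
    plt.gca().spines['bottom'].set_position("zero")
    if fname is not None:
        plt.savefig(os.path.join(fig_path, fname),
                    bbox_inches='tight', pad_inches=0, dpi=300)

# e.g. the CIFAR figure above could also be produced with:
# plot_pos_hists(bins_cifar, hist_high_pos_cifar, hist_target_pos_cifar,
#                hist_low_pos_cifar, f"cifar_hist_sims_pos_seed_{seed}.png")
###Output _____no_output_____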
notebook-samples/interactive-graph-sampler.ipynb
###Markdown https://github.com/jupyter-widgets/ipywidgets/blob/master/docs/source/examples/Exploring%20Graphs.ipynb Exploring Network Graphs ###Code %matplotlib notebook import networkx as nx import matplotlib.pyplot as plt from ipywidgets import interact def rand_lobster(n, m, k, p): return nx.random_lobster(n, p, p/m) def powerlaw_cluster(n, m, k, p): return nx.powerlaw_cluster_graph(n, m, p) def nws(n, m, k, p): return nx.newman_watts_strogatz_graph(n, k, p) def erdos_renyi(n, m, k, p): return nx.erdos_renyi_graph(n, p) def plot_rand_graph(n, m, k, p, generator): g = generator(n, m, k, p) nx.draw(g) plt.show() interact(plot_rand_graph, n=(2,30), m=(1,10), k=(1,10), p=(0.0, 1.0, 0.001), generator={ 'lobster': rand_lobster, 'power law': powerlaw_cluster, 'Newman-Watts-Strogatz': nws, u'Erdős-Rényi': erdos_renyi, }) ###Output _____no_output_____
notebooks/answers/EDA_SDSS_answers.ipynb
###Markdown ML@Cezeaux Machine Learning Tutorial Section 1.a - Introduction to Machine Learning by [Emille Ishida](https://www.emilleishida.com/) *Take home message 1: know thy data!***Goal:** 1. get acquainted with the data &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;2. formulate a learning framework &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3. weight our expectations**Data**: SDSS DR14 as available through [Kaggle](https://www.kaggle.com/lucidlenn/sloan-digital-sky-surveySkyserver_SQL2_27_2018%206_51_39%20PM.csv) &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;10000 objects (lines) &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;18 features (columns) &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Features we are interested in: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$objid$: object identifier &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$u$: u-band magnitude &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$g$: g-band magnitude &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$r$: r-band magnitude &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$i$: i-band magnitude &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$z$: z-band magnitude &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$class$: source classification - only galaxies and Stars ###Code # import some basic libaries import matplotlib.pylab as plt import numpy as np import pandas as pd import seaborn as sns ###Output _____no_output_____ ###Markdown Let's beging by loading and taking a look in the first entries in our data ###Code # Read data data = pd.read_csv('../../data/Skyserver_SQL2_27_2018 6_51_39 PM.csv.zip', compression='zip') ###Output _____no_output_____ ###Markdown Tip 1: Be suspicious, always!You should always begin by ensuring you know in what grounds you stand. Before any analysis starts, ask yourself questions like:------------------------------------------------------------------------------------------------------------------ What are the features in your data? ###Code data.keys() ###Output _____no_output_____ ###Markdown ----------------------------------------------------------------------------------------------------------------- Are you interested in all the features? If not, drop the irrelevant ones. ###Code # drop irrelevant columns data.drop(['ra','dec','run','rerun','camcol','field','specobjid','redshift', 'plate','mjd','fiberid'], axis=1, inplace=True) # check remaining features data.keys() ###Output _____no_output_____ ###Markdown ------------------------------------------------------------------------------------------------------- What about the number of objects in your data? ###Code data.shape ###Output _____no_output_____ ###Markdown Answer: *Documentation is correct* ----------------------------------------------------------------------------------- How many classes are in your data? Are you interested in all of them? 
If no, drop the ones you are not interested in ###Code data.groupby('class').nunique() mask_qso=data['class']=='QSO' # mask for QSO qso = (data['class'] == 'QSO') # remove QSOs from current data frame data = data[~mask_qso] #check remaining classes data.groupby('class').nunique() ###Output _____no_output_____ ###Markdown ----------------------------------------------------------------------------------------------------------------- Tip 2: do not ignore your domain knowledge!In astronomy, an observational science, the data only tells part of the story. We know, for example, that colors carry a lot of information. We also know that at we should keep at least one magnitude so we do not loose overall brightness information. So, let's try to use r-band magnitudes and colors and check correlations again. ###Code ug = data['u'] - data['g'] gr = data['g'] - data['r'] ri = data['r'] - data['i'] iz = data['i'] - data['z'] # add color to data frame data = data.assign(ug=ug) data = data.assign(gr=gr) data = data.assign(ri=ri) data = data.assign(iz=iz) # plot sns.pairplot(data, hue='class', vars=['r','ug', 'gr','ri','iz']) plt.show() ###Output _____no_output_____ ###Markdown ------------------------------------------------------------------------------------------------------------------- Tip 3: be concious on the data you are using Are you sure all the data you need is suitable for the task at hand? Do you see outliers which should be removed? If so, remove them from the data frame ###Code # r-magnitude mask for outliers rmag_remain = (data['r'] < 22.5) # g-r maks for outliers gr_remain = (data['gr'] > -2.5) # r-i mask for outliers ri_remain = np.logical_and(-2 < data['ri'], data['ri'] < 2) # i-z mask for outliers iz_remain = (data['iz'] < 1.5) flag = rmag_remain & gr_remain & ri_remain & iz_remain # remove outliers from data frame data = data[flag] # plot remaining points sns.pairplot(data, hue='class', vars=['r','ug', 'gr','ri','iz']) plt.show() ###Output _____no_output_____ ###Markdown Can you think of any other test which can give insight on the general aspects of the data? Answer: Save the clean data to diskOnce you are happy with your exploration of the most simple aspects of your data, save the data for future analysis... and do not forget to document your decisions! ###Code data.to_csv('../../data/SDSS_star_galaxy_clean.csv', index=False) ###Output _____no_output_____
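###Markdown One possible answer to the open question above (a sketch only; many other tests are just as valid): check whether `objid` really behaves as a unique identifier, and look at the class balance of the cleaned sample before modelling. ###Code
# Sketch: two cheap sanity checks on the cleaned table
print(data['objid'].nunique(), 'distinct objid values for', len(data), 'rows')
print(data['class'].value_counts(normalize=True))
###Output _____no_output_____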
src/91_Create_Submission_54_feat3_extra_trees_regressor.ipynb
###Markdown Introduction- ExtraTreesRegressor- fc と build distance datasetsのみ使う Import everything I need :) ###Code import warnings warnings.filterwarnings('ignore') import time import multiprocessing import glob import gc import matplotlib.pyplot as plt import seaborn as sns import numpy as np import pandas as pd from plotly.offline import init_notebook_mode, iplot import plotly.graph_objs as go from sklearn.preprocessing import LabelEncoder, StandardScaler from sklearn.model_selection import KFold from sklearn.metrics import mean_absolute_error from sklearn.ensemble import ExtraTreesRegressor, AdaBoostRegressor, RandomForestRegressor from fastprogress import progress_bar ###Output _____no_output_____ ###Markdown Preparation ###Code nb = 91 isSmallSet = False length = 2000 model_name = 'extra_trees_regressor' pd.set_option('display.max_columns', 200) # use atomic numbers to recode atomic names ATOMIC_NUMBERS = { 'H': 1, 'C': 6, 'N': 7, 'O': 8, 'F': 9 } file_path = '../input/champs-scalar-coupling/' glob.glob(file_path + '*') # train path = file_path + 'train.csv' if isSmallSet: train = pd.read_csv(path) [:length] else: train = pd.read_csv(path) # test path = file_path + 'test.csv' if isSmallSet: test = pd.read_csv(path)[:length] else: test = pd.read_csv(path) # structure path = file_path + 'structures.csv' structures = pd.read_csv(path) # fc_train path = file_path + 'nb47_fc_train.csv' if isSmallSet: fc_train = pd.read_csv(path)[:length] else: fc_train = pd.read_csv(path) # fc_test path = file_path + 'nb47_fc_test.csv' if isSmallSet: fc_test = pd.read_csv(path)[:length] else: fc_test = pd.read_csv(path) len(test), len(fc_test) len(train), len(fc_train) if isSmallSet: print('using SmallSet !!') print('-------------------') print(f'There are {train.shape[0]} rows in train data.') print(f'There are {test.shape[0]} rows in test data.') print(f"There are {train['molecule_name'].nunique()} distinct molecules in train data.") print(f"There are {test['molecule_name'].nunique()} distinct molecules in test data.") print(f"There are {train['atom_index_0'].nunique()} unique atoms.") print(f"There are {train['type'].nunique()} unique types.") ###Output There are 4658147 rows in train data. There are 2505542 rows in test data. There are 85003 distinct molecules in train data. There are 45772 distinct molecules in test data. There are 29 unique atoms. There are 8 unique types. 
###Markdown --- myFunc**metrics** ###Code def kaggle_metric(df, preds): df["prediction"] = preds maes = [] for t in df.type.unique(): y_true = df[df.type==t].scalar_coupling_constant.values y_pred = df[df.type==t].prediction.values mae = np.log(mean_absolute_error(y_true, y_pred)) maes.append(mae) return np.mean(maes) ###Output _____no_output_____ ###Markdown ---**momory** ###Code def reduce_mem_usage(df, verbose=True): numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64'] start_mem = df.memory_usage().sum() / 1024**2 for col in df.columns: col_type = df[col].dtypes if col_type in numerics: c_min = df[col].min() c_max = df[col].max() if str(col_type)[:3] == 'int': if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max: df[col] = df[col].astype(np.int8) elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max: df[col] = df[col].astype(np.int16) elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max: df[col] = df[col].astype(np.int32) elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max: df[col] = df[col].astype(np.int64) else: c_prec = df[col].apply(lambda x: np.finfo(x).precision).max() if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max and c_prec == np.finfo(np.float16).precision: df[col] = df[col].astype(np.float16) elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max and c_prec == np.finfo(np.float32).precision: df[col] = df[col].astype(np.float32) else: df[col] = df[col].astype(np.float64) end_mem = df.memory_usage().sum() / 1024**2 if verbose: print('Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100 * (start_mem - end_mem) / start_mem)) return df ###Output _____no_output_____ ###Markdown Feature Engineering Build Distance Dataset ###Code def build_type_dataframes(base, structures, coupling_type): base = base[base['type'] == coupling_type].drop('type', axis=1).copy() base = base.reset_index() base['id'] = base['id'].astype('int32') structures = structures[structures['molecule_name'].isin(base['molecule_name'])] return base, structures # a,b = build_type_dataframes(train, structures, '1JHN') def add_coordinates(base, structures, index): df = pd.merge(base, structures, how='inner', left_on=['molecule_name', f'atom_index_{index}'], right_on=['molecule_name', 'atom_index']).drop(['atom_index'], axis=1) df = df.rename(columns={ 'atom': f'atom_{index}', 'x': f'x_{index}', 'y': f'y_{index}', 'z': f'z_{index}' }) return df def add_atoms(base, atoms): df = pd.merge(base, atoms, how='inner', on=['molecule_name', 'atom_index_0', 'atom_index_1']) return df def merge_all_atoms(base, structures): df = pd.merge(base, structures, how='left', left_on=['molecule_name'], right_on=['molecule_name']) df = df[(df.atom_index_0 != df.atom_index) & (df.atom_index_1 != df.atom_index)] return df def add_center(df): df['x_c'] = ((df['x_1'] + df['x_0']) * np.float32(0.5)) df['y_c'] = ((df['y_1'] + df['y_0']) * np.float32(0.5)) df['z_c'] = ((df['z_1'] + df['z_0']) * np.float32(0.5)) def add_distance_to_center(df): df['d_c'] = (( (df['x_c'] - df['x'])**np.float32(2) + (df['y_c'] - df['y'])**np.float32(2) + (df['z_c'] - df['z'])**np.float32(2) )**np.float32(0.5)) def add_distance_between(df, suffix1, suffix2): df[f'd_{suffix1}_{suffix2}'] = (( (df[f'x_{suffix1}'] - df[f'x_{suffix2}'])**np.float32(2) + (df[f'y_{suffix1}'] - df[f'y_{suffix2}'])**np.float32(2) + (df[f'z_{suffix1}'] - df[f'z_{suffix2}'])**np.float32(2) )**np.float32(0.5)) def add_distances(df): n_atoms = 1 + 
max([int(c.split('_')[1]) for c in df.columns if c.startswith('x_')]) for i in range(1, n_atoms): for vi in range(min(4, i)): add_distance_between(df, i, vi) def add_n_atoms(base, structures): dfs = structures['molecule_name'].value_counts().rename('n_atoms').to_frame() return pd.merge(base, dfs, left_on='molecule_name', right_index=True) def build_couple_dataframe(some_csv, structures_csv, coupling_type, n_atoms=10): base, structures = build_type_dataframes(some_csv, structures_csv, coupling_type) base = add_coordinates(base, structures, 0) base = add_coordinates(base, structures, 1) base = base.drop(['atom_0', 'atom_1'], axis=1) atoms = base.drop('id', axis=1).copy() if 'scalar_coupling_constant' in some_csv: atoms = atoms.drop(['scalar_coupling_constant'], axis=1) add_center(atoms) atoms = atoms.drop(['x_0', 'y_0', 'z_0', 'x_1', 'y_1', 'z_1'], axis=1) atoms = merge_all_atoms(atoms, structures) add_distance_to_center(atoms) atoms = atoms.drop(['x_c', 'y_c', 'z_c', 'atom_index'], axis=1) atoms.sort_values(['molecule_name', 'atom_index_0', 'atom_index_1', 'd_c'], inplace=True) atom_groups = atoms.groupby(['molecule_name', 'atom_index_0', 'atom_index_1']) atoms['num'] = atom_groups.cumcount() + 2 atoms = atoms.drop(['d_c'], axis=1) atoms = atoms[atoms['num'] < n_atoms] atoms = atoms.set_index(['molecule_name', 'atom_index_0', 'atom_index_1', 'num']).unstack() atoms.columns = [f'{col[0]}_{col[1]}' for col in atoms.columns] atoms = atoms.reset_index() # # downcast back to int8 for col in atoms.columns: if col.startswith('atom_'): atoms[col] = atoms[col].fillna(0).astype('int8') # atoms['molecule_name'] = atoms['molecule_name'].astype('int32') full = add_atoms(base, atoms) add_distances(full) full.sort_values('id', inplace=True) return full def take_n_atoms(df, n_atoms, four_start=4): labels = ['id', 'molecule_name', 'atom_index_1', 'atom_index_0'] for i in range(2, n_atoms): label = f'atom_{i}' labels.append(label) for i in range(n_atoms): num = min(i, 4) if i < four_start else 4 for j in range(num): labels.append(f'd_{i}_{j}') if 'scalar_coupling_constant' in df: labels.append('scalar_coupling_constant') return df[labels] atoms = structures['atom'].values types_train = train['type'].values types_test = test['type'].values structures['atom'] = structures['atom'].replace(ATOMIC_NUMBERS).astype('int8') fulls_train = [] fulls_test = [] for type_ in progress_bar(train['type'].unique()): full_train = build_couple_dataframe(train, structures, type_, n_atoms=10) full_test = build_couple_dataframe(test, structures, type_, n_atoms=10) full_train = take_n_atoms(full_train, 10) full_test = take_n_atoms(full_test, 10) fulls_train.append(full_train) fulls_test.append(full_test) structures['atom'] = atoms train = pd.concat(fulls_train).sort_values(by=['id']) #, axis=0) test = pd.concat(fulls_test).sort_values(by=['id']) #, axis=0) train['type'] = types_train test['type'] = types_test train = train.fillna(0) test = test.fillna(0) ###Output _____no_output_____ ###Markdown 統計量 ###Code def create_features(df): # df['molecule_couples'] = df.groupby('molecule_name')['id'].transform('count') # df['molecule_dist_mean'] = df.groupby('molecule_name')['dist'].transform('mean') # df['molecule_dist_min'] = df.groupby('molecule_name')['dist'].transform('min') # df['molecule_dist_max'] = df.groupby('molecule_name')['dist'].transform('max') # df['atom_0_couples_count'] = df.groupby(['molecule_name', 'atom_index_0'])['id'].transform('count') # df['atom_1_couples_count'] = df.groupby(['molecule_name', 
'atom_index_1'])['id'].transform('count') # df[f'molecule_atom_index_0_x_1_std'] = df.groupby(['molecule_name', 'atom_index_0'])['x_1'].transform('std') # df[f'molecule_atom_index_0_y_1_mean'] = df.groupby(['molecule_name', 'atom_index_0'])['y_1'].transform('mean') # df[f'molecule_atom_index_0_y_1_mean_diff'] = df[f'molecule_atom_index_0_y_1_mean'] - df['y_1'] # df[f'molecule_atom_index_0_y_1_mean_div'] = df[f'molecule_atom_index_0_y_1_mean'] / df['y_1'] # df[f'molecule_atom_index_0_y_1_max'] = df.groupby(['molecule_name', 'atom_index_0'])['y_1'].transform('max') # df[f'molecule_atom_index_0_y_1_max_diff'] = df[f'molecule_atom_index_0_y_1_max'] - df['y_1'] # df[f'molecule_atom_index_0_y_1_std'] = df.groupby(['molecule_name', 'atom_index_0'])['y_1'].transform('std') # df[f'molecule_atom_index_0_z_1_std'] = df.groupby(['molecule_name', 'atom_index_0'])['z_1'].transform('std') # df[f'molecule_atom_index_0_dist_mean'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('mean') # df[f'molecule_atom_index_0_dist_mean_diff'] = df[f'molecule_atom_index_0_dist_mean'] - df['dist'] # df[f'molecule_atom_index_0_dist_mean_div'] = df[f'molecule_atom_index_0_dist_mean'] / df['dist'] # df[f'molecule_atom_index_0_dist_max'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('max') # df[f'molecule_atom_index_0_dist_max_diff'] = df[f'molecule_atom_index_0_dist_max'] - df['dist'] # df[f'molecule_atom_index_0_dist_max_div'] = df[f'molecule_atom_index_0_dist_max'] / df['dist'] # df[f'molecule_atom_index_0_dist_min'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('min') # df[f'molecule_atom_index_0_dist_min_diff'] = df[f'molecule_atom_index_0_dist_min'] - df['dist'] # df[f'molecule_atom_index_0_dist_min_div'] = df[f'molecule_atom_index_0_dist_min'] / df['dist'] # df[f'molecule_atom_index_0_dist_std'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('std') # df[f'molecule_atom_index_0_dist_std_diff'] = df[f'molecule_atom_index_0_dist_std'] - df['dist'] # df[f'molecule_atom_index_0_dist_std_div'] = df[f'molecule_atom_index_0_dist_std'] / df['dist'] # df[f'molecule_atom_index_1_dist_mean'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('mean') # df[f'molecule_atom_index_1_dist_mean_diff'] = df[f'molecule_atom_index_1_dist_mean'] - df['dist'] # df[f'molecule_atom_index_1_dist_mean_div'] = df[f'molecule_atom_index_1_dist_mean'] / df['dist'] # df[f'molecule_atom_index_1_dist_max'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('max') # df[f'molecule_atom_index_1_dist_max_diff'] = df[f'molecule_atom_index_1_dist_max'] - df['dist'] # df[f'molecule_atom_index_1_dist_max_div'] = df[f'molecule_atom_index_1_dist_max'] / df['dist'] # df[f'molecule_atom_index_1_dist_min'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('min') # df[f'molecule_atom_index_1_dist_min_diff'] = df[f'molecule_atom_index_1_dist_min'] - df['dist'] # df[f'molecule_atom_index_1_dist_min_div'] = df[f'molecule_atom_index_1_dist_min'] / df['dist'] # df[f'molecule_atom_index_1_dist_std'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('std') # df[f'molecule_atom_index_1_dist_std_diff'] = df[f'molecule_atom_index_1_dist_std'] - df['dist'] # df[f'molecule_atom_index_1_dist_std_div'] = df[f'molecule_atom_index_1_dist_std'] / df['dist'] # df[f'molecule_atom_1_dist_mean'] = df.groupby(['molecule_name', 'atom_1'])['dist'].transform('mean') # df[f'molecule_atom_1_dist_min'] = df.groupby(['molecule_name', 
'atom_1'])['dist'].transform('min') # df[f'molecule_atom_1_dist_min_diff'] = df[f'molecule_atom_1_dist_min'] - df['dist'] # df[f'molecule_atom_1_dist_min_div'] = df[f'molecule_atom_1_dist_min'] / df['dist'] # df[f'molecule_atom_1_dist_std'] = df.groupby(['molecule_name', 'atom_1'])['dist'].transform('std') # df[f'molecule_atom_1_dist_std_diff'] = df[f'molecule_atom_1_dist_std'] - df['dist'] # df[f'molecule_type_0_dist_std'] = df.groupby(['molecule_name', 'type_0'])['dist'].transform('std') # df[f'molecule_type_0_dist_std_diff'] = df[f'molecule_type_0_dist_std'] - df['dist'] # df[f'molecule_type_dist_mean'] = df.groupby(['molecule_name', 'type'])['dist'].transform('mean') # df[f'molecule_type_dist_mean_diff'] = df[f'molecule_type_dist_mean'] - df['dist'] # df[f'molecule_type_dist_mean_div'] = df[f'molecule_type_dist_mean'] / df['dist'] # df[f'molecule_type_dist_max'] = df.groupby(['molecule_name', 'type'])['dist'].transform('max') # df[f'molecule_type_dist_min'] = df.groupby(['molecule_name', 'type'])['dist'].transform('min') # df[f'molecule_type_dist_std'] = df.groupby(['molecule_name', 'type'])['dist'].transform('std') # df[f'molecule_type_dist_std_diff'] = df[f'molecule_type_dist_std'] - df['dist'] # fc df[f'molecule_type_fc_max'] = df.groupby(['molecule_name', 'type'])['fc'].transform('max') df[f'molecule_type_fc_min'] = df.groupby(['molecule_name', 'type'])['fc'].transform('min') df[f'molecule_type_fc_std'] = df.groupby(['molecule_name', 'type'])['fc'].transform('std') df[f'molecule_type_fc_std_diff'] = df[f'molecule_type_fc_std'] - df['fc'] df[f'molecule_atom_index_0_fc_mean'] = df.groupby(['molecule_name', 'atom_index_0'])['fc'].transform('mean') df[f'molecule_atom_index_0_fc_mean_diff'] = df[f'molecule_atom_index_0_fc_mean'] - df['fc'] df[f'molecule_atom_index_0_fc_mean_div'] = df[f'molecule_atom_index_0_fc_mean'] / df['fc'] df[f'molecule_atom_index_0_fc_max'] = df.groupby(['molecule_name', 'atom_index_0'])['fc'].transform('max') df[f'molecule_atom_index_0_fc_max_diff'] = df[f'molecule_atom_index_0_fc_max'] - df['fc'] df[f'molecule_atom_index_0_fc_max_div'] = df[f'molecule_atom_index_0_fc_max'] / df['fc'] df[f'molecule_atom_index_0_fc_min'] = df.groupby(['molecule_name', 'atom_index_0'])['fc'].transform('min') df[f'molecule_atom_index_0_fc_min_diff'] = df[f'molecule_atom_index_0_fc_min'] - df['fc'] df[f'molecule_atom_index_0_fc_min_div'] = df[f'molecule_atom_index_0_fc_min'] / df['fc'] df[f'molecule_atom_index_0_fc_std'] = df.groupby(['molecule_name', 'atom_index_0'])['fc'].transform('std') df[f'molecule_atom_index_0_fc_std_diff'] = df[f'molecule_atom_index_0_fc_std'] - df['fc'] df[f'molecule_atom_index_0_fc_std_div'] = df[f'molecule_atom_index_0_fc_std'] / df['fc'] df[f'molecule_atom_index_1_fc_mean'] = df.groupby(['molecule_name', 'atom_index_1'])['fc'].transform('mean') df[f'molecule_atom_index_1_fc_mean_diff'] = df[f'molecule_atom_index_1_fc_mean'] - df['fc'] df[f'molecule_atom_index_1_fc_mean_div'] = df[f'molecule_atom_index_1_fc_mean'] / df['fc'] df[f'molecule_atom_index_1_fc_max'] = df.groupby(['molecule_name', 'atom_index_1'])['fc'].transform('max') df[f'molecule_atom_index_1_fc_max_diff'] = df[f'molecule_atom_index_1_fc_max'] - df['fc'] df[f'molecule_atom_index_1_fc_max_div'] = df[f'molecule_atom_index_1_fc_max'] / df['fc'] df[f'molecule_atom_index_1_fc_min'] = df.groupby(['molecule_name', 'atom_index_1'])['fc'].transform('min') df[f'molecule_atom_index_1_fc_min_diff'] = df[f'molecule_atom_index_1_fc_min'] - df['fc'] df[f'molecule_atom_index_1_fc_min_div'] = 
df[f'molecule_atom_index_1_fc_min'] / df['fc'] df[f'molecule_atom_index_1_fc_std'] = df.groupby(['molecule_name', 'atom_index_1'])['fc'].transform('std') df[f'molecule_atom_index_1_fc_std_diff'] = df[f'molecule_atom_index_1_fc_std'] - df['fc'] df[f'molecule_atom_index_1_fc_std_div'] = df[f'molecule_atom_index_1_fc_std'] / df['fc'] return df %%time print('add fc') print(len(train), len(test)) train['fc'] = fc_train.values test['fc'] = fc_test.values # print('type0') # print(len(train), len(test)) # train = create_type0(train) # test = create_type0(test) # print('distances') # print(len(train), len(test)) # train = distances(train) # test = distances(test) print('create_featueres') print(len(train), len(test)) train = create_features(train) test = create_features(test) # print('create_closest') # print(len(train), len(test)) # train = create_closest(train) # test = create_closest(test) # train.drop_duplicates(inplace=True, subset=['id']) # なぜかtrainの行数が増えるバグが発生 # train = train.reset_index(drop=True) # print('add_cos_features') # print(len(train), len(test)) # train = add_cos_features(train) # test = add_cos_features(test) ###Output add fc 4658147 2505542 create_featueres 4658147 2505542 CPU times: user 18.9 s, sys: 22.9 s, total: 41.8 s Wall time: 41.8 s ###Markdown ---nanがある特徴量を削除 ###Code drop_feats = train.columns[train.isnull().sum(axis=0) != 0].values drop_feats train = train.drop(drop_feats, axis=1) test = test.drop(drop_feats, axis=1) assert sum(train.isnull().sum(axis=0))==0, f'train に nan があります。' assert sum(test.isnull().sum(axis=0))==0, f'test に nan があります。' ###Output _____no_output_____ ###Markdown エンコーディング ###Code cat_cols = ['atom_1'] num_cols = list(set(train.columns) - set(cat_cols) - set(['type', "scalar_coupling_constant", 'molecule_name', 'id', 'atom_0', 'atom_1','atom_2', 'atom_3', 'atom_4', 'atom_5', 'atom_6', 'atom_7', 'atom_8', 'atom_9'])) print(f'カテゴリカル: {cat_cols}') print(f'数値: {num_cols}') ###Output カテゴリカル: ['atom_1'] 数値: ['d_5_1', 'molecule_atom_index_1_fc_min_div', 'molecule_atom_index_0_fc_min_diff', 'molecule_atom_index_1_fc_mean_diff', 'd_4_3', 'molecule_atom_index_0_fc_mean', 'd_6_1', 'd_5_2', 'atom_index_0', 'd_3_2', 'd_3_1', 'd_2_0', 'd_1_0', 'molecule_type_fc_min', 'd_8_2', 'd_2_1', 'molecule_atom_index_1_fc_mean_div', 'molecule_type_fc_max', 'molecule_atom_index_1_fc_mean', 'd_5_0', 'd_9_0', 'd_8_3', 'molecule_atom_index_1_fc_min', 'd_4_1', 'molecule_atom_index_0_fc_max', 'd_6_0', 'd_9_2', 'd_6_2', 'fc', 'd_7_1', 'd_9_3', 'molecule_atom_index_0_fc_max_diff', 'd_7_2', 'molecule_atom_index_0_fc_min_div', 'd_4_2', 'molecule_atom_index_0_fc_max_div', 'd_5_3', 'molecule_atom_index_1_fc_max_div', 'd_4_0', 'molecule_atom_index_1_fc_max_diff', 'molecule_atom_index_1_fc_max', 'd_6_3', 'atom_index_1', 'd_8_1', 'molecule_atom_index_0_fc_mean_div', 'molecule_atom_index_0_fc_mean_diff', 'd_3_0', 'molecule_atom_index_1_fc_min_diff', 'd_8_0', 'molecule_atom_index_0_fc_min', 'd_9_1', 'd_7_3', 'd_7_0'] ###Markdown LabelEncode- `atom_1` = {H, C, N}- `type_0` = {1, 2, 3}- `type` = {2JHC, ...} ###Code for f in ['type_0', 'type']: if f in train.columns: lbl = LabelEncoder() lbl.fit(list(train[f].values) + list(test[f].values)) train[f] = lbl.transform(list(train[f].values)) test[f] = lbl.transform(list(test[f].values)) ###Output _____no_output_____ ###Markdown ---**show features** ###Code train.head(2) print(train.columns) ###Output Index(['id', 'molecule_name', 'atom_index_1', 'atom_index_0', 'atom_2', 'atom_3', 'atom_4', 'atom_5', 'atom_6', 'atom_7', 'atom_8', 'atom_9', 'd_1_0', 
'd_2_0', 'd_2_1', 'd_3_0', 'd_3_1', 'd_3_2', 'd_4_0', 'd_4_1', 'd_4_2', 'd_4_3', 'd_5_0', 'd_5_1', 'd_5_2', 'd_5_3', 'd_6_0', 'd_6_1', 'd_6_2', 'd_6_3', 'd_7_0', 'd_7_1', 'd_7_2', 'd_7_3', 'd_8_0', 'd_8_1', 'd_8_2', 'd_8_3', 'd_9_0', 'd_9_1', 'd_9_2', 'd_9_3', 'scalar_coupling_constant', 'type', 'fc', 'molecule_type_fc_max', 'molecule_type_fc_min', 'molecule_atom_index_0_fc_mean', 'molecule_atom_index_0_fc_mean_diff', 'molecule_atom_index_0_fc_mean_div', 'molecule_atom_index_0_fc_max', 'molecule_atom_index_0_fc_max_diff', 'molecule_atom_index_0_fc_max_div', 'molecule_atom_index_0_fc_min', 'molecule_atom_index_0_fc_min_diff', 'molecule_atom_index_0_fc_min_div', 'molecule_atom_index_1_fc_mean', 'molecule_atom_index_1_fc_mean_diff', 'molecule_atom_index_1_fc_mean_div', 'molecule_atom_index_1_fc_max', 'molecule_atom_index_1_fc_max_diff', 'molecule_atom_index_1_fc_max_div', 'molecule_atom_index_1_fc_min', 'molecule_atom_index_1_fc_min_diff', 'molecule_atom_index_1_fc_min_div'], dtype='object') ###Markdown create train, test data ###Code y = train['scalar_coupling_constant'] train = train.drop(['id', 'molecule_name', 'scalar_coupling_constant'], axis=1) test = test.drop(['id', 'molecule_name' ], axis=1) train = reduce_mem_usage(train) test = reduce_mem_usage(test) X = train.copy() X_test = test.copy() assert len(X.columns) == len(X_test.columns), f'X と X_test のサイズが違います X: {len(X.columns)}, X_test: {len(X_test.columns)}' del train, test, full_train, full_test gc.collect() ###Output _____no_output_____ ###Markdown Training model **params** ###Code # Configuration model_params = {'n_estimators': 400, 'max_depth': 70, 'n_jobs': 60} n_folds = 4 folds = KFold(n_splits=n_folds, shuffle=True) def train_model(X, X_test, y, folds, model_params): model = ExtraTreesRegressor(**model_params) # <================= scores = [] oof = np.zeros(len(X)) # <======== prediction = np.zeros(len(X)) # <======== result_dict = {} for fold_n, (train_idx, valid_idx) in enumerate(folds.split(X)): print(f'Fold {fold_n + 1} started at {time.ctime()}') model.fit(X.iloc[train_idx, :], y[train_idx]) y_valid_pred = model.predict(X.iloc[valid_idx, :]) prediction = model.predict(X_test) oof[valid_idx] = y_valid_pred score = mean_absolute_error(y[valid_idx], y_valid_pred) scores.append(score) print(f'fold {fold_n+1} mae: {score :.5f}') print('') print('CV mean score: {0:.4f}, std: {1:.4f}.'.format(np.mean(scores), np.std(scores))) cv_score = np.log(mean_absolute_error(y, oof)) print('CV kaggle score(group log mae): {0:.4f}'.format(cv_score)) print('') result_dict['oof'] = oof result_dict['prediction'] = prediction result_dict['scores'] = scores return result_dict %%time # type ごとの学習 X_short = pd.DataFrame({'ind': list(X.index), 'type': X['type'].values, 'oof': [0] * len(X), 'target': y.values}) X_short_test = pd.DataFrame({'ind': list(X_test.index), 'type': X_test['type'].values, 'prediction': [0] * len(X_test)}) for t in X['type'].unique(): print('*'*80) print(f'Training of type {t}') print('*'*80) X_t = X.loc[X['type'] == t] X_test_t = X_test.loc[X_test['type'] == t] y_t = X_short.loc[X_short['type'] == t, 'target'].values result_dict = train_model(X_t, X_test_t, y_t, folds, model_params) X_short.loc[X_short['type'] == t, 'oof'] = result_dict['oof'] X_short_test.loc[X_short_test['type'] == t, 'prediction'] = result_dict['prediction'] print('') print('===== finish =====') X['scalar_coupling_constant'] = y metric = kaggle_metric(X, X_short['oof'].values) X = X.drop(['scalar_coupling_constant', 'prediction'], axis=1) print('CV mean 
score(group log mae): {0:.4f}'.format(metric)) prediction = X_short_test['prediction'] ###Output ******************************************************************************** Training of type 0 ******************************************************************************** Fold 1 started at Tue Aug 27 14:47:00 2019 fold 1 mae: 0.69412 Fold 2 started at Tue Aug 27 14:52:04 2019 fold 2 mae: 0.69507 Fold 3 started at Tue Aug 27 14:57:08 2019 fold 3 mae: 0.69287 Fold 4 started at Tue Aug 27 15:02:13 2019 fold 4 mae: 0.69510 CV mean score: 0.6943, std: 0.0009. CV kaggle score(group log mae): -0.3649 ******************************************************************************** Training of type 3 ******************************************************************************** Fold 1 started at Tue Aug 27 15:07:21 2019 fold 1 mae: 0.16245 Fold 2 started at Tue Aug 27 15:09:35 2019 fold 2 mae: 0.16222 Fold 3 started at Tue Aug 27 15:11:49 2019 fold 3 mae: 0.16192 Fold 4 started at Tue Aug 27 15:14:04 2019 fold 4 mae: 0.16272 CV mean score: 0.1623, std: 0.0003. CV kaggle score(group log mae): -1.8181 ******************************************************************************** Training of type 1 ******************************************************************************** Fold 1 started at Tue Aug 27 15:16:20 2019 fold 1 mae: 0.38666 Fold 2 started at Tue Aug 27 15:16:30 2019 fold 2 mae: 0.38113 Fold 3 started at Tue Aug 27 15:16:40 2019 fold 3 mae: 0.39138 Fold 4 started at Tue Aug 27 15:16:50 2019 fold 4 mae: 0.38666 CV mean score: 0.3865, std: 0.0036. CV kaggle score(group log mae): -0.9507 ******************************************************************************** Training of type 4 ******************************************************************************** Fold 1 started at Tue Aug 27 15:16:59 2019 fold 1 mae: 0.14810 Fold 2 started at Tue Aug 27 15:17:32 2019 fold 2 mae: 0.14961 Fold 3 started at Tue Aug 27 15:18:06 2019 fold 3 mae: 0.14876 Fold 4 started at Tue Aug 27 15:18:40 2019 fold 4 mae: 0.15013 CV mean score: 0.1492, std: 0.0008. CV kaggle score(group log mae): -1.9028 ******************************************************************************** Training of type 2 ******************************************************************************** Fold 1 started at Tue Aug 27 15:19:14 2019 fold 1 mae: 0.25841 Fold 2 started at Tue Aug 27 15:29:04 2019 fold 2 mae: 0.25763 Fold 3 started at Tue Aug 27 15:38:53 2019 fold 3 mae: 0.25821 Fold 4 started at Tue Aug 27 15:48:38 2019 fold 4 mae: 0.25804 CV mean score: 0.2581, std: 0.0003. CV kaggle score(group log mae): -1.3545 ******************************************************************************** Training of type 6 ******************************************************************************** Fold 1 started at Tue Aug 27 15:58:33 2019 fold 1 mae: 0.14936 Fold 2 started at Tue Aug 27 16:02:26 2019 fold 2 mae: 0.14948 Fold 3 started at Tue Aug 27 16:06:21 2019 fold 3 mae: 0.14929 Fold 4 started at Tue Aug 27 16:10:10 2019 fold 4 mae: 0.14910 CV mean score: 0.1493, std: 0.0001. 
CV kaggle score(group log mae): -1.9018 ******************************************************************************** Training of type 5 ******************************************************************************** Fold 1 started at Tue Aug 27 16:14:06 2019 fold 1 mae: 0.26588 Fold 2 started at Tue Aug 27 16:27:30 2019 fold 2 mae: 0.26717 Fold 3 started at Tue Aug 27 16:40:56 2019 fold 3 mae: 0.26682 Fold 4 started at Tue Aug 27 16:54:19 2019 fold 4 mae: 0.26807 CV mean score: 0.2670, std: 0.0008. CV kaggle score(group log mae): -1.3206 ******************************************************************************** Training of type 7 ******************************************************************************** Fold 1 started at Tue Aug 27 17:07:42 2019 fold 1 mae: 0.11271 Fold 2 started at Tue Aug 27 17:08:30 2019 fold 2 mae: 0.11316 Fold 3 started at Tue Aug 27 17:09:21 2019 fold 3 mae: 0.11240 Fold 4 started at Tue Aug 27 17:10:11 2019 fold 4 mae: 0.11208 CV mean score: 0.1126, std: 0.0004. CV kaggle score(group log mae): -2.1840 ===== finish ===== CV mean score(group log mae): -1.4747 CPU times: user 1d 22h 51min 37s, sys: 14min 23s, total: 1d 23h 6min 1s Wall time: 2h 24min 15s ###Markdown Save **submission** ###Code # path_submittion = './output/' + 'nb{}_submission_lgb_{}.csv'.format(nb, metric) path_submittion = f'../output/nb{nb}_submission_{model_name}_{metric :.5f}.csv' print(f'save pash: {path_submittion}') submittion = pd.read_csv('../input/champs-scalar-coupling/sample_submission.csv') # submittion = pd.read_csv('./input/champs-scalar-coupling/sample_submission.csv')[::100] if isSmallSet: pass else: submittion['scalar_coupling_constant'] = prediction submittion.to_csv(path_submittion, index=False) ###Output _____no_output_____ ###Markdown ---**result** ###Code path_oof = f'../output/nb{nb}_oof_{model_name}_{metric :.5f}.csv' print(f'save pash: {path_oof}') if isSmallSet: pass else: oof = pd.DataFrame(X_short['oof']) oof.to_csv(path_oof, index=False) ###Output _____no_output_____ ###Markdown analysis ###Code plot_data = pd.DataFrame(y) plot_data.index.name = 'id' plot_data['yhat'] = X_short['oof'].values plot_data['type'] = lbl.inverse_transform(X['type']) def plot_oof_preds(ctype, llim, ulim): plt.figure(figsize=(6,6)) sns.scatterplot(x='scalar_coupling_constant',y='yhat', data=plot_data.loc[plot_data['type']==ctype, ['scalar_coupling_constant', 'yhat']]); plt.xlim((llim, ulim)) plt.ylim((llim, ulim)) plt.plot([llim, ulim], [llim, ulim]) plt.xlabel('scalar_coupling_constant') plt.ylabel('predicted') plt.title(f'{ctype}', fontsize=18) plt.show() plot_oof_preds('1JHC', 0, 250) plot_oof_preds('1JHN', 0, 100) plot_oof_preds('2JHC', -50, 50) plot_oof_preds('2JHH', -50, 50) plot_oof_preds('2JHN', -25, 25) plot_oof_preds('3JHC', -25, 60) plot_oof_preds('3JHH', -20, 20) plot_oof_preds('3JHN', -10, 15) ###Output _____no_output_____
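The per-type CV scores above are combined into the competition-style metric: the mean over coupling types of the log of the within-type MAE. The `kaggle_metric` helper called earlier is defined or imported before the cells shown here, so the standalone function below is only a sketch of its assumed behaviour, not the original implementation. ###Code
import numpy as np
import pandas as pd

def group_log_mae(y_true, y_pred, types, floor=1e-9):
    # Mean over coupling types of log(MAE within that type); the floor guards
    # against log(0) for a (hypothetical) perfectly predicted type.
    abs_err = pd.Series(np.abs(np.asarray(y_true) - np.asarray(y_pred)))
    maes = abs_err.groupby(np.asarray(types)).mean()
    return float(np.log(maes.clip(lower=floor)).mean())
###Output _____no_output_____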
code/Test_2.ipynb
###Markdown Displacements due to pressure variations in reservoir simulating a Disk-shaped reservoir under non uniform depletion This code aims at creating the synthetic test 2 simulating a Disk-shaped reservoir under non uniform depletionThe disk-shaped reservoir is composed by two vertically juxtaposed cylinders, each one with a uniform depletion.The deepest cylinder is uniformly depleted by $\Delta p = -20$ MPa with its top and bottom at, respectively, 800 and 850 m deep.The shallowest cylinder is uniformly depleted by $\Delta p = -40$ MPa with its top and bottom at, respectively, 750 and 800 m deep. ###Code import numpy as np import matplotlib.pyplot as plt import matplotlib.ticker import pickle import compaction as cp # Parameters describing the reservoir R = 500. #radius of the cylinder y0 = 0 # y coordinate of the center x0 = 0 # x coordinate of the center # shallowest cylinder top1 = 750. #reservoir top bottom1 = 800. #reservoir bottom h1 = bottom1 - top1 #reservoir thickness D1 = 0.5*(bottom1+top1) # z coordinate of the center # deepest cylinder top2 = 800. #reservoir top bottom2 = 850. #reservoir bottom h2 = bottom2 - top2 #reservoir thickness D2 = 0.5*(bottom2+top2) # z coordinate of the center # Define the model # shallowest cylinder shallowest_cylinder = cp.prism_layer_circular((y0,x0), R, (20,20), bottom1, top1) # deepest cylinder deepest_cylinder = cp.prism_layer_circular((y0,x0), R, (20,20), bottom2, top2) # model is the disk-shaped reservoir composed by the # two vertically juxtaposed cylinders defined above model = np.vstack([shallowest_cylinder, deepest_cylinder]) # Pressure variation (in MPa) #The shallowest cylinder is uniformly depleted by Δ𝑝 = −40 MPa DP1 = np.zeros(shallowest_cylinder.shape[0]) - 40 # The deepest cylinder is uniformly depleted by Δ𝑝 = −20 MPa DP2 = np.zeros(deepest_cylinder.shape[0]) - 20 # Disk-shaped reservoir under non uniform depletion DP = np.vstack([DP1, DP2]) ###Output _____no_output_____ ###Markdown Young’s modulus $E$ and Poisson's ratio $\nu$ ###Code # Young’s modulus (in MPa) young = 3300 # Poisson coefficient poisson = 0.25 ###Output _____no_output_____ ###Markdown The uniaxial compaction coefficient $C_m$ $C_m = \frac{1}{E} \: \frac{(1 + \nu) (1 - 2\nu)}{(1-\nu)}$ ###Code cm = cp.Cm(poisson, young) # uniaxial compaction coefficient in 1/MPa G = young/(2*(1+poisson)) # Shear Modulus in MPa print ('CM', cm, 'G', G) ###Output CM 0.0002525252525252525 G 1320.0 ###Markdown Coordinates on the plane x = 0 m ###Code # Define computation points on vertical plane at x = 0m shape = (120, 24) y = np.linspace(-1500, 1500, shape[0]) z = np.linspace(0, 1200, shape[1]) y, z = np.meshgrid(y, z) y = y.ravel() z = z.ravel() x = np.zeros_like(y) coordinates = np.vstack([y, x, z]) ###Output _____no_output_____ ###Markdown Compute the displacement components on plane x = 0 m¶ ###Code # Compute the x-component of displacement displacement_x = cp.displacement_x_component(coordinates, model, DP, poisson, young) # Compute the y-component of displacement displacement_y = cp.displacement_y_component(coordinates, model, DP, poisson, young) # Compute the z-component of displacement displacement_z = cp.displacement_z_component(coordinates, model, DP, poisson, young) # horizontal component of displacement equation (39) displacement_horizontal = np.sqrt(displacement_x**2 + displacement_y**2) ###Output _____no_output_____ ###Markdown Save the data ###Code fields_d_mult_layers= dict() fields_d_mult_layers['x'] = x fields_d_mult_layers['y'] = y fields_d_mult_layers['z'] = 
z # Displacement field fields_d_mult_layers['displacement_x'] = displacement_x fields_d_mult_layers['displacement_y'] = displacement_y fields_d_mult_layers['displacement_z'] = displacement_z fields_d_mult_layers['displacement_horizontal'] = displacement_horizontal #save the data file_name = 'synthetic_cylindrical_displacement_fields_mult_layers_x_zero.pickle' with open(file_name, 'wb') as f: pickle.dump(fields_d_mult_layers, f) ###Output _____no_output_____ ###Markdown PLOT DISPLACEMENT FIELD BY OUR METHODOLOGY: Plot the results of the displacement fields on plane x = 0 m ###Code y = np.linspace(-1500, 1500, shape[0]) z = np.linspace(0, 1200, shape[1]) # Plot the displacement fields fig, ax = plt.subplots(nrows=2, ncols=1, sharex=False, sharey=False, figsize=(7.33,6.33)) ax[0].set_aspect("equal") img = ax[0].contourf(y, z, displacement_horizontal.reshape(shape[::-1]), 60, cmap="jet") cb = plt.colorbar(img, ax=ax[0], aspect=15, pad=0.05, shrink=0.90) cb.set_label('m', rotation=90, fontsize=12) ax[0].set_title("(a) Horizontal displacement ") ax[0].set_xticklabels(ax[0].get_xticks()) ax[0].set_yticklabels(ax[0].get_yticks()) ax[0].invert_yaxis() ax[0].set_xlabel("Horizontal coordinate y (m)") ax[0].set_ylabel("Depth (m)") ax[1].set_aspect("equal") img = ax[1].contourf(y, z, displacement_z.reshape(shape[::-1]), 60, cmap="jet") cb = plt.colorbar(img, ax=ax[1], aspect=15, pad=0.05, shrink=0.90) cb.set_label('m', rotation=90, fontsize=12) ax[1].set_title("(b) Vertical displacement ") ax[1].set_xticklabels(ax[1].get_xticks()) ax[1].set_yticklabels(ax[1].get_yticks()) ax[1].invert_yaxis() ax[1].set_xlabel("Horizontal coordinate y (m)") ax[1].set_ylabel("Depth (m)") plt.tight_layout(True) plt.savefig('../manuscript/Fig/Figure_Displacement_non_uniform_depletion.png', dpi=600) ###Output _____no_output_____ ###Markdown DISPLACEMENT FIELD BY OUR METHODOLOGY: Reservoir under non uniform depletion: (a) Horizontal x-component displacement and (b) vertical displacement by our methodology that uses the closed expressions of the volume integrations given by Nagy et al. (2000) and Nagy et al. (2002). These displacements are calculated along the x-axis, at $y = 0 $ m and $z$ located at the depths of: seafloor ($z = 0$ m), reservoir top ($z = 750$ m), reservoir center ($z = 800$ m) and reservoir bottom ($z = 850$ m). ###Code # Define computation points z_top = np.zeros(100) + 750 z_center = np.zeros(100) + 800 z_bottom = np.zeros(100) + 850 z_seafloor = np.zeros(100) x = np.linspace(0, 600, 100) y = np.zeros_like(x) coordinates_top = np.vstack([y, x, z_top]) coordinates_center = np.vstack([y, x, z_center]) coordinates_bottom = np.vstack([y, x, z_bottom]) coordinates_seafloor = np.vstack([y, x, z_seafloor]) ###Output _____no_output_____ ###Markdown Compute the displacement components along the x-axis, at $y = 0 $ m and $z$ located at the depths of: seafloor ($z = 0$ m), reservoir top ($z = 750$ m), reservoir center ($z = 800$ m) and reservoir bottom ($z = 850$ m). 
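The next cell evaluates these profiles with one pair of calls per depth level. A more compact variant (an illustrative assumption, not code from the original notebook) would loop over a dictionary of named depths and reuse the same `cp.displacement_x_component` / `cp.displacement_z_component` routines: ###Code
# Sketch only: gather (x-, z-) displacement profiles for several depths at once.
def displacements_at_depths(x, y, depths, model, DP, poisson, young):
    profiles = {}
    for name, z_value in depths.items():
        coords = np.vstack([y, x, np.full_like(x, z_value)])
        profiles[name] = (
            cp.displacement_x_component(coords, model, DP, poisson, young),
            cp.displacement_z_component(coords, model, DP, poisson, young),
        )
    return profiles

# Example usage with the depths discussed above:
# profiles = displacements_at_depths(
#     x, y, {'seafloor': 0.0, 'top': 750.0, 'center': 800.0, 'bottom': 850.0},
#     model, DP, poisson, young)
###Output _____no_output_____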
###Code # Compute the x-component of displacement at the top displacement_x_top = cp.displacement_x_component( coordinates_top, model, DP, poisson, young ) # Compute the z-component of displacement at the top displacement_z_top = cp.displacement_z_component( coordinates_top, model, DP, poisson, young ) # Compute the x-component of displacement at the center displacement_x_center = cp.displacement_x_component( coordinates_center, model, DP, poisson, young ) # Compute the z-component of displacement at the center displacement_z_center = cp.displacement_z_component( coordinates_center, model, DP, poisson, young ) # Compute the x-component of displacement at the bottom displacement_x_bottom = cp.displacement_x_component( coordinates_bottom, model, DP, poisson, young ) # Compute the z-component of displacement at the bottom displacement_z_bottom = cp.displacement_z_component( coordinates_bottom, model, DP, poisson, young ) # Compute the x-component of displacement at the seafloor displacement_x_seafloor = cp.displacement_x_component( coordinates_seafloor, model, DP, poisson, young ) # Compute the z-component of displacement at the seafloor displacement_z_seafloor = cp.displacement_z_component( coordinates_seafloor, model, DP, poisson, young ) ###Output _____no_output_____ ###Markdown Plot the results at the top, center and bottom of the reservoir ###Code fig, ax = plt.subplots(nrows=2, ncols=1, sharex=False, sharey=False, figsize=(5.33, 6.33)) ax[0].plot(x, displacement_x_top, 'k.-', label='top') ax[0].plot(x, displacement_x_center, 'b.-', label='center') ax[0].plot(x, displacement_x_bottom, 'r.-', label='bottom') ax[0].plot(x, displacement_x_seafloor, 'g.-', label='Seafloor') ax[0].set_title('(a)', loc='left') ax[0].set_xlabel("Horizontal coordinate x (m)") ax[0].set_ylabel("x-component of displacement (m)") ax[0].grid() ax[0].legend(loc='best') ax[1].plot(x, displacement_z_top, 'k.-', label='top') ax[1].plot(x, displacement_z_center, 'b.-', label='center') ax[1].plot(x, displacement_z_bottom, 'r.-', label='bottom') ax[1].plot(x, displacement_z_seafloor, 'g.-', label='Seafloor') ax[1].invert_yaxis() ax[1].set_title('(b)', loc='left') ax[1].set_xlabel("Horizontal coordinate x (m)") ax[1].set_ylabel("Vertical displacement (m)") ax[1].grid() ax[1].legend(loc='upper right', bbox_to_anchor=(0.3, 0.95)) plt.tight_layout(True) plt.savefig('../manuscript/Fig/Figure_Displacement_z_levels_non_uniform_depletion.png', dpi=600) ###Output _____no_output_____ ###Markdown THE STRESS FIELD BY OUR METHODOLOGY on plane z = 0 m ###Code # Define computation points on the plane z = 0m shape = (60, 60) y = np.linspace(-1500, 1500, shape[0]) x = np.linspace(-1500, 1500, shape[1]) y, x = np.meshgrid(y, x) y = y.ravel() x = x.ravel() z = np.zeros_like(x) coordinates = np.vstack([y, x, z]) # Compute the x-component of stress stress_x = cp.stress_x_component(coordinates, model, DP, poisson, young) # Compute the y-component of stress stress_y = cp.stress_y_component(coordinates, model, DP, poisson, young) # Compute the z-component of stress stress_z = cp.stress_z_component(coordinates, model, DP, poisson, young) # horizontal component of stress stress_horizontal = np.sqrt(stress_x**2 + stress_y**2) ###Output _____no_output_____ ###Markdown Plot the stress components on plane z = 0 m Reservoir under uniform depletion: (a) 𝑥−, (b) 𝑦−, and (c) 𝑧−components of the stress at the free surface¶ ###Code ### Plot the results on plane z = 0 m y = np.linspace(-1500, 1500, shape[0]) x = np.linspace(-1500, 1500, shape[1]) # Plot the 
results on a map fig, ax = plt.subplots(nrows=1, ncols=3, sharex=False, sharey=True, figsize=(11.33, 5.33)) ax[0].set_aspect("equal") img = ax[0].contourf(y, x, stress_x.reshape(shape), 60, cmap="jet") cb = plt.colorbar(img, ax=ax[0], aspect=15, pad=0.05, shrink=0.5) cb.set_label('MPa', rotation=90, fontsize=10) ax[0].set_title("(a) x-component stress") ax[0].set_xticklabels(ax[0].get_xticks()) ax[0].set_yticklabels(ax[0].get_yticks()) ax[0].set_xlabel("Horizontal coordinate y (m)") ax[0].set_ylabel("Horizontal coordinate x (m)") ax[1].set_aspect("equal") img = ax[1].contourf(y, x, stress_y.reshape(shape), 60, cmap="jet") cb = plt.colorbar(img, ax=ax[1], aspect=15, pad=0.05, shrink=0.5) cb.set_label('MPa', rotation=90, fontsize=10) ax[1].set_title("(b) y-component stress") ax[1].set_xticklabels(ax[1].get_xticks()) ax[1].set_yticklabels(ax[1].get_yticks()) ax[1].set_xlabel("Horizontal coordinate y (m)") ax[2].set_aspect("equal") img = ax[2].contourf(y, x, stress_z.reshape(shape), 60, cmap="jet") cb = plt.colorbar(img, ax=ax[2], aspect=15, pad=0.05, shrink=0.5) cb.set_label('MPa', rotation=90, fontsize=10) ax[2].set_title("(c) z-component stress") ax[2].set_xticklabels(ax[2].get_xticks()) ax[2].set_yticklabels(ax[2].get_yticks()) ax[2].set_xlabel(" Horizontal coordinate y (m)") plt.tight_layout(True) plt.savefig('../manuscript/Fig/Figure_Null_stress_non_uniform_depletion.png', dpi=600) ###Output _____no_output_____
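###Markdown The figure name above ("Null stress") suggests the induced stress components on the free surface z = 0 m are expected to be negligible. A quick numerical check of the magnitudes (added here as a sketch, not part of the original notebook): ###Code
# Report the largest absolute value of each computed stress component on the z = 0 plane.
for name, comp in [('x', stress_x), ('y', stress_y), ('z', stress_z)]:
    print(f'max |stress_{name}| on the z = 0 plane: {np.max(np.abs(comp)):.3e} MPa')
###Output _____no_output_____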
TuEyeQ Validation.ipynb
###Markdown TüEyeQ dataset validationExtracting data for comparison with other datasets. ###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import os
from tqdm import tqdm
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import f1_score, roc_auc_score, plot_roc_curve, accuracy_score
from sklearn.dummy import DummyClassifier
from sklearn.preprocessing import Normalizer
from sklearn import preprocessing
import csv
###Output _____no_output_____
###Markdown Loading the participant featuresThe table below holds the info we have on each participant. Each participant has a unique subject ID. The info about the tasks was removed, as it was deemed unnecessary. Our target value should be age / gender. ###Code
participant_features = pd.read_csv('TuEyeQ/cft_full.csv', index_col=1)
del participant_features['task_id']
del participant_features['cft_task']
participant_features = participant_features.drop_duplicates()
participant_features.head()
###Output _____no_output_____
###Markdown Reading eye tracking features Here is what the eye tracking features look like for one participant. Some of the participants and some readings have been removed due to too much noise. ###Code
eye_tracking_features_path = 'TuEyeQ/EyeMovementData/split'
arbitrary_eye_tracking_features = pd.read_csv(eye_tracking_features_path+'/ABT22/task_01.csv', index_col=0)
arbitrary_eye_tracking_features['gender'] = participant_features.loc['ABT22']['gender']
arbitrary_eye_tracking_features.head(6)
###Output _____no_output_____
###Markdown Appending target values to feature vectorsWe want a feature vector to consist of eye tracking features and then a target value in the end, based on the subject ID. ###Code
participants = list(set(participant_features.index))

def load_participant(participant_id):
    tasks = []
    if not os.path.isdir(eye_tracking_features_path+'/'+participant_id):
        #print(participant_id+' has no readings.')
        return
    for task in os.listdir(eye_tracking_features_path+'/'+participant_id):
        df = pd.read_csv(eye_tracking_features_path+'/'+participant_id+'/'+task, index_col=0)
        gender = participant_features.loc[participant_id]['gender']
        age = participant_features.loc[participant_id]['age']
        df['gender'] = gender
        df['age'] = age
        tasks.append(df)
    return tasks

def drop_nulls(lst):
    return list(filter(None, lst))

abt22 = load_participant('ABT22')
###Output _____no_output_____
###Markdown Making a heat mapIt might be a good idea to visualise the data before attacking it. Perhaps we can even see a difference. Here I make a heatmap of the locations the male and female participants look at, together with their scanpaths.
###Code all_males = participant_features[participant_features['gender']==1].index all_females = participant_features[participant_features['gender']==2].index all_male_readings = drop_nulls([load_participant(subject) for subject in tqdm(all_males)]) all_female_readings = drop_nulls([load_participant(subject) for subject in tqdm(all_females)]) def make_heatmap_scanpath(readings, verbose=True): heatmaps, all_x, all_y = [], [], [] for subject_ind, subject in tqdm(enumerate(readings)): for reading_ind, i in enumerate(subject): fixations = i[i['eventType']=='fixation'] mean_x = fixations['meanX'] mean_y = fixations['meanY'] if mean_x.shape[0] < 2 and mean_y.shape[0] < 2: if verbose: print(f"Reading {reading_ind} on subject {subject_ind} too small to work with.") break heatmap, x_edges, y_edges = np.histogram2d(mean_x, mean_y, bins=(20,40)) heatmaps.append(heatmap) all_x.append(mean_x) all_y.append(mean_y) combined_heatmap = sum(heatmaps) if verbose: print("Drawing...") plt.figure(figsize=(10,8)) plt.subplot(2,1,1) plt.imshow(combined_heatmap); ax = plt.gca() # Inverter x-akse. Den passede ikke af en eller anden årsag. ax.invert_xaxis() plt.subplot(2,1,2) for x, y in zip(all_x, all_y): plt.plot(x, y, linewidth=0.1/len(readings), c='blue') make_heatmap_scanpath(all_male_readings, verbose=False) make_heatmap_scanpath(all_female_readings, verbose=False) ###Output 130it [00:05, 25.38it/s] ###Markdown Random Forest Classification ###Code rfc = RandomForestClassifier( n_estimators=1000, criterion='entropy', min_samples_split=5, min_samples_leaf=1, random_state=42, max_features='sqrt' ) dummy = DummyClassifier() #The features available in all entries - Except start time. feats = ['duration', 'meanPupilDiameter', 'eventIdxLeft', 'eventIdxRight', 'meanX', 'meanY', 'startSaccadeX', 'startSaccadeY', 'endSaccadeX', 'endSaccadeY', 'microsaccadeCount', 'microsaccadeAmplitude', 'microsaccadePeakVelocity'] X_prepared = [i for p in all_male_readings+all_female_readings[:52] for i in p] X_copyForLasse = X_prepared X_prepared = [d.mean() for d in X_prepared] #Take the mean of all values - To make single feature vector X_prepared = [x for x in X_prepared if not np.isnan(x.loc['gender'])] #Remove nan-values. X_prepared = [x.fillna(0) for x in X_prepared] np.random.shuffle(X_prepared) X = [x.loc[feats] for x in X_prepared] #Take all features except last two y = [x.loc['gender'] for x in X_prepared] # Take gender (last feature) X_prepared[0] #Fit that model acc_score = [] f1 = [] auc = [] X = np.array(X) y = np.array(y) k = 5 kf = KFold(n_splits=k, shuffle=False) for train_index, test_index in tqdm(kf.split(X)): X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] rfc.fit(X_train,y_train) pred_values = rfc.predict(X_test) acc_score.append(accuracy_score(y_test, pred_values)) f1.append(f1_score(y_test, pred_values)) auc.append(roc_auc_score(y_test, pred_values)) dummy.fit(X_train, y_train) ax = plt.gca() rfc_disp = plot_roc_curve(rfc, X_test, y_test, ax=ax, alpha=0.8) rfc_disp = plot_roc_curve(dummy, X_test, y_test, ax=ax, alpha=0.8) rfc_avg_acc_score = sum(acc_score)/k rfc_avg_f1_score = sum(f1)/k rfc_avg_auc_score = sum(auc)/k print('Average Accuracy:', rfc_avg_acc_score) print('Average F1:', rfc_avg_f1_score) print('Average AUC:', rfc_avg_auc_score) #tæl male/female. 
y = np.array(y, dtype=int)
np.bincount(y)

preds = dummy.predict(X_test)
print(accuracy_score(y_test, preds))
print(f1_score(y_test, preds))
print(roc_auc_score(y_test, preds))

preds, y_test

len(all_female_readings[:52])

# Normalized pupil means
pups = np.array([x['meanPupilDiameter'] for x in X_prepared])
pups = (pups - np.min(pups))/np.ptp(pups)
pups.min()

x = np.array([x['meanY'] for x in X_prepared])
min(x), max(x)

X[0]
###Output _____no_output_____
###Markdown Preparation and export of CSV ###Code
# For looking at more "raw" data - currently not used
del_list = []
for count, item in enumerate(X_copyForLasse):
    if len(item) == 0:
        del_list.append(count)
for index in sorted(del_list, reverse=True):
    del X_copyForLasse[index]

new_feats = []
for item in X:
    new_feats.append([
        #item[0],
        item[1],
        #item[4],
        #item[5],
        item[10],
        item[12]
    ])

new_y = y.tolist()
for count, item in enumerate(new_y):
    if item == 1:
        new_y[count] = 0
    if item == 2:
        new_y[count] = 1

with open('TuEyeQ_X.csv', 'w') as f:
    write = csv.writer(f)
    write.writerows(new_feats)

with open('TuEyeQ_y.csv', 'w') as f:
    writer = csv.writer(f)
    for val in new_y:
        writer.writerow([val])
###Output _____no_output_____
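###Markdown A quick consistency check after the export (added as a sketch; not part of the original notebook): reload the two CSV files written above and confirm that features and labels still line up row for row. ###Code
X_check = pd.read_csv('TuEyeQ_X.csv', header=None)
y_check = pd.read_csv('TuEyeQ_y.csv', header=None)
assert len(X_check) == len(y_check), 'feature/label row counts differ'
print(X_check.shape, y_check.shape)
###Output _____no_output_____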
Smart_Traffic_Control.ipynb
###Markdown Smart Traffic Control The following steps involve in the process 1. Background subtraction.2. OpenCV filters.3. Object detection by contours.4. Building of pipeline for data manipulation. ###Code # install library and download video for processing !pip install sk-video>=1.1.8 #Download video from Youtube "https://www.youtube.com/watch?v=UM0hX7nomi8", choose any online downloader and paste URL below import requests file_url = "https://r4---sn-a5msen76.googlevideo.com/videoplayback?expire=1582386637&ei=bPlQXuLoNseOkwaspbfICw&ip=104.161.21.11&id=o-AANNuvLkEJ6fsx-ixk6YahIZqXaRPxQ-0Ry0zb9JSBiz&itag=22&source=youtube&requiressl=yes&mm=31%2C29&mn=sn-a5msen76%2Csn-a5meknlz&ms=au%2Crdu&mv=m&mvi=3&pl=18&initcwndbps=6616250&vprv=1&mime=video%2Fmp4&ratebypass=yes&dur=384.452&lmt=1577202292201447&mt=1582364989&fvip=4&fexp=23842630&c=WEB&txp=1306222&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cvprv%2Cmime%2Cratebypass%2Cdur%2Clmt&sig=ALgxI2wwRQIhAMgstSr1PnwETkvm308YSJs0cPjJMm4MOFXGnpH1zghjAiA256NGEtF-e15fCxo1Ki_D8ZBTOQVviN9tavNkILU_gQ%3D%3D&lsparams=mm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps&lsig=AHylml4wRQIgSTlWsxUiv1Ho9Rq9pNFpDKE3BIyVrZ1qb7me9wKtvpECIQCurkryCjUwyHbkKKG0cUU_m6k-rZhHG2BEkHGa1r018A%3D%3D&title=A3+A-road+Traffic+UK+HD+-+rush+hour+-+British+Highway+traffic+May+2017" r = requests.get(file_url, stream = True) with open("road.mp4", "wb") as file: for block in r.iter_content(chunk_size = 1024): if block: file.write(block) # import needed modules import os import csv import numpy as np import logging import logging.handlers import math import sys import random import numpy as np import skvideo.io import cv2 import matplotlib.pyplot as plt from IPython.display import HTML from base64 import b64encode cv2.ocl.setUseOpenCL(False) random.seed(123) # setup logging def init_logging(level=logging.INFO): main_logger = logging.getLogger() for hnd in main_logger.handlers: main_logger.removeHandler(hnd) formatter = logging.Formatter( fmt='%(asctime)s.%(msecs)03d %(levelname)-8s [%(name)s] %(message)s', datefmt='%Y-%m-%d %H:%M:%S') handler_stream = logging.StreamHandler(sys.stdout) handler_stream.setFormatter(formatter) main_logger.addHandler(handler_stream) main_logger.setLevel(level) return main_logger ###Output _____no_output_____ ###Markdown Background subtraction algorithmsThere are many different algorithms for background subtraction, but the main idea of them is very simple.Let’s assume that you have a video of your room, and on some of the frames of this video there is no humans & pets, so basically it’s static, let’s call it background_layer. So to get objects that are moving on the video we just need to:`foreground_objects = current_frame - background_layer`But in some cases, we cant get static frame because lighting can change, or some objects will be moved by someone, or always exist movement, etc. In such cases we are saving some number of frames and trying to figure out which of the pixels are the same for most of them, then this pixels becoming part of background_layer. 
Difference generally in how we get this background_layer and additional filtering that we use to make selection more accurate.In this lesson, we will use MOG algorithm for background subtraction and after processing, it looks like this: ###Code def train_bg_subtractor(inst, cap, num=500): ''' BG substractor need process some amount of frames to start giving result ''' print ('Training BG Subtractor...') i = 0 for frame in cap: inst.apply(frame, None, 0.001) i += 1 if i >= num: return cap VIDEO_SOURCE = "road.mp4" bg_subtractor = cv2.createBackgroundSubtractorMOG2( history=500, detectShadows=True) # Set up image source cap = skvideo.io.vreader(VIDEO_SOURCE) # skipping 500 frames to train bg subtractor train_bg_subtractor(bg_subtractor, cap, num=500) frame = next(cap) fg_mask = bg_subtractor.apply(frame, None, 0.001) plt.figure(figsize=(12,12)) plt.imshow(fg_mask) plt.show() ###Output Training BG Subtractor... ###Markdown So now we will use them to remove some noise on foreground mask.First, we will use Closing to remove gaps in areas, then Opening to remove 1–2 px points, and after that dilation to make object bolder. ###Code def filter_mask(img): ''' This filters are hand-picked just based on visual tests ''' kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2, 2)) # Fill any small holes closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel) # Remove noise opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel) # Dilate to merge adjacent blobs dilation = cv2.dilate(opening, kernel, iterations=2) return dilation bg_subtractor = cv2.createBackgroundSubtractorMOG2( history=500, detectShadows=True) # Set up image source cap = skvideo.io.vreader(VIDEO_SOURCE) # skipping 500 frames to train bg subtractor train_bg_subtractor(bg_subtractor, cap, num=500) frame = next(cap) fg_mask = bg_subtractor.apply(frame, None, 0.001) fg_mask[fg_mask < 240] = 0 fg_mask = filter_mask(fg_mask) plt.figure(figsize=(12,12)) plt.imshow(fg_mask) plt.show() ###Output Training BG Subtractor... ###Markdown Object detection by contoursFor this purpose we will use the standard cv2.findContours method with params:```cv2.CV_RETR_EXTERNAL — get only outer contours.cv2.CV_CHAIN_APPROX_TC89_L1 - use Teh-Chin chain approximation algorithm (faster)``` ###Code def get_centroid(x, y, w, h): x1 = int(w / 2) y1 = int(h / 2) cx = x + x1 cy = y + y1 return (cx, cy) class ContourDetection: ''' Detecting moving objects. Purpose of this processor is to subtrac background, get moving objects and detect them with a cv2.findContours method, and then filter off-by width and height. bg_subtractor - background subtractor isinstance. min_contour_width - min bounding rectangle width. min_contour_height - min bounding rectangle height. save_image - if True will save detected objects mask to file. image_dir - where to save images(must exist). 
''' def __init__(self, bg_subtractor, min_contour_width=35, min_contour_height=35, save_image=False, image_dir='images'): super(ContourDetection, self).__init__() self.bg_subtractor = bg_subtractor self.min_contour_width = min_contour_width self.min_contour_height = min_contour_height self.save_image = save_image self.image_dir = image_dir def filter_mask(self, img, a=None): ''' This filters are hand-picked just based on visual tests ''' kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2, 2)) # Fill any small holes closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel) # Remove noise opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel) # Dilate to merge adjacent blobs dilation = cv2.dilate(opening, kernel, iterations=2) return dilation def detect_vehicles(self, fg_mask): matches = [] # finding external contours contours, hierarchy = cv2.findContours( fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_L1) for (i, contour) in enumerate(contours): (x, y, w, h) = cv2.boundingRect(contour) # On the exit, we add some filtering by height, width and add centroid. contour_valid = (w >= self.min_contour_width) and ( h >= self.min_contour_height) if not contour_valid: continue centroid = get_centroid(x, y, w, h) matches.append(((x, y, w, h), centroid)) return matches def __call__(self, frame): frame = frame.copy() fg_mask = self.bg_subtractor.apply(frame, None, 0.001) # just thresholding values fg_mask[fg_mask < 240] = 0 fg_mask = self.filter_mask(fg_mask, 0) return self.detect_vehicles(fg_mask) cd = ContourDetection(bg_subtractor) bg_subtractor = cv2.createBackgroundSubtractorMOG2( history=500, detectShadows=True) # Set up image source cap = skvideo.io.vreader(VIDEO_SOURCE) # skipping 500 frames to train bg subtractor train_bg_subtractor(bg_subtractor, cap, num=500) frame = next(cap) objects = cd(frame) print('Getting list of [((x,y,w,h), (xc,yc)), ...]') print(objects) ###Output Training BG Subtractor... Getting list of [((x,y,w,h), (xc,yc)), ...] [((400, 678, 146, 42), (473, 699)), ((0, 346, 150, 191), (75, 441)), ((248, 287, 174, 160), (335, 367)), ((1021, 254, 72, 62), (1057, 285)), ((762, 143, 58, 50), (791, 168)), ((578, 143, 120, 87), (638, 186)), ((829, 89, 53, 47), (855, 112)), ((1165, 44, 70, 83), (1200, 85)), ((890, 34, 50, 43), (915, 55)), ((763, 0, 127, 102), (826, 51))] ###Markdown Building processing pipelineYou must understand that in ML and CV there is no one magic algorithm that making altogether, even if we imagine that such algorithm exists, we still wouldn’t use it because it would be not effective at scale. For example a few years ago Netflix created competition with the prize 3 million dollars for the best movie recommendation algorithm. And one of the team created such, problem was that it just couldn’t work at scale and thus was useless for the company. But still, Netflix paid 1 million to them :)So now we will build simple processing pipeline, it not for scale just for convenient but the idea the same. ###Code class PipelineRunner(object): ''' Very simple pipline. Just run passed processors in order with passing context from one to another. You can also set log level for processors. 
''' def __init__(self, pipeline=None, log_level=logging.INFO): self.pipeline = pipeline or [] self.context = {} self.log = logging.getLogger(self.__class__.__name__) self.log.setLevel(log_level) self.log_level = log_level self.set_log_level() def set_context(self, data): self.context = data def add(self, processor): if not isinstance(processor, PipelineProcessor): raise Exception( 'Processor should be an isinstance of PipelineProcessor.') processor.log.setLevel(self.log_level) self.pipeline.append(processor) def remove(self, name): for i, p in enumerate(self.pipeline): if p.__class__.__name__ == name: del self.pipeline[i] return True return False def set_log_level(self): for p in self.pipeline: p.log.setLevel(self.log_level) def run(self): for p in self.pipeline: self.context = p(self.context) self.log.debug("Frame #%d processed.", self.context['frame_number']) return self.context class PipelineProcessor(object): ''' Base class for processors. ''' def __init__(self): self.log = logging.getLogger(self.__class__.__name__) ###Output _____no_output_____ ###Markdown As input constructor will take a list of processors that will be run in order. Each processor making part of the job. We already have Countour Detection class, just need slightly udate it to use context ###Code def save_frame(frame, file_name, flip=True): # flip BGR to RGB if flip: cv2.imwrite(file_name, np.flip(frame, 2)) else: cv2.imwrite(file_name, frame) class ContourDetection(PipelineProcessor): ''' Detecting moving objects. Purpose of this processor is to subtrac background, get moving objects and detect them with a cv2.findContours method, and then filter off-by width and height. bg_subtractor - background subtractor isinstance. min_contour_width - min bounding rectangle width. min_contour_height - min bounding rectangle height. save_image - if True will save detected objects mask to file. image_dir - where to save images(must exist). 
''' def __init__(self, bg_subtractor, min_contour_width=35, min_contour_height=35, save_image=False, image_dir='images'): super(ContourDetection, self).__init__() self.bg_subtractor = bg_subtractor self.min_contour_width = min_contour_width self.min_contour_height = min_contour_height self.save_image = save_image self.image_dir = image_dir def filter_mask(self, img, a=None): ''' This filters are hand-picked just based on visual tests ''' kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2, 2)) # Fill any small holes closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel) # Remove noise opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel) # Dilate to merge adjacent blobs dilation = cv2.dilate(opening, kernel, iterations=2) return dilation def detect_vehicles(self, fg_mask, context): matches = [] # finding external contours contours, hierarchy = cv2.findContours( fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_L1) for (i, contour) in enumerate(contours): (x, y, w, h) = cv2.boundingRect(contour) contour_valid = (w >= self.min_contour_width) and ( h >= self.min_contour_height) if not contour_valid: continue centroid = get_centroid(x, y, w, h) matches.append(((x, y, w, h), centroid)) return matches def __call__(self, context): frame = context['frame'].copy() frame_number = context['frame_number'] fg_mask = self.bg_subtractor.apply(frame, None, 0.001) # just thresholding values fg_mask[fg_mask < 240] = 0 fg_mask = self.filter_mask(fg_mask, frame_number) if self.save_image: save_frame(fg_mask, self.image_dir + "/mask_%04d.png" % frame_number, flip=False) context['objects'] = self.detect_vehicles(fg_mask, context) context['fg_mask'] = fg_mask return context ###Output _____no_output_____ ###Markdown Now let’s create a processor that will link detected objects on different frames and will create paths, and also will count vehicles that got to the exit zone. ###Code def distance(x, y, type='euclidian', x_weight=1.0, y_weight=1.0): if type == 'euclidian': return math.sqrt(float((x[0] - y[0])**2) / x_weight + float((x[1] - y[1])**2) / y_weight) class VehicleCounter(PipelineProcessor): ''' Counting vehicles that entered in exit zone. Purpose of this class based on detected object and local cache create objects pathes and count that entered in exit zone defined by exit masks. exit_masks - list of the exit masks. path_size - max number of points in a path. max_dst - max distance between two points. 
''' def __init__(self, exit_masks=[], path_size=10, max_dst=30, x_weight=1.0, y_weight=1.0): super(VehicleCounter, self).__init__() self.exit_masks = exit_masks self.vehicle_count = 0 self.path_size = path_size self.pathes = [] self.max_dst = max_dst self.x_weight = x_weight self.y_weight = y_weight def check_exit(self, point): for exit_mask in self.exit_masks: try: if exit_mask[point[1]][point[0]] == 255: return True except: return True return False def __call__(self, context): objects = context['objects'] context['exit_masks'] = self.exit_masks context['pathes'] = self.pathes context['vehicle_count'] = self.vehicle_count if not objects: return context points = np.array(objects)[:, 0:2] points = points.tolist() # add new points if pathes is empty if not self.pathes: for match in points: self.pathes.append([match]) else: # link new points with old pathes based on minimum distance between # points new_pathes = [] for path in self.pathes: _min = 999999 _match = None for p in points: if len(path) == 1: # distance from last point to current d = distance(p[0], path[-1][0]) else: # based on 2 prev points predict next point and calculate # distance from predicted next point to current xn = 2 * path[-1][0][0] - path[-2][0][0] yn = 2 * path[-1][0][1] - path[-2][0][1] d = distance( p[0], (xn, yn), x_weight=self.x_weight, y_weight=self.y_weight ) if d < _min: _min = d _match = p if _match and _min <= self.max_dst: points.remove(_match) path.append(_match) new_pathes.append(path) # do not drop path if current frame has no matches if _match is None: new_pathes.append(path) self.pathes = new_pathes # add new pathes if len(points): for p in points: # do not add points that already should be counted if self.check_exit(p[1]): continue self.pathes.append([p]) # save only last N points in path for i, _ in enumerate(self.pathes): self.pathes[i] = self.pathes[i][self.path_size * -1:] # count vehicles and drop counted pathes: new_pathes = [] for i, path in enumerate(self.pathes): d = path[-2:] if ( # need at list two points to count len(d) >= 2 and # prev point not in exit zone not self.check_exit(d[0][1]) and # current point in exit zone self.check_exit(d[1][1]) and # path len is bigger then min self.path_size <= len(path) ): self.vehicle_count += 1 else: # prevent linking with path that already in exit zone add = True for p in path: if self.check_exit(p[1]): add = False break if add: new_pathes.append(path) self.pathes = new_pathes context['pathes'] = self.pathes context['objects'] = objects context['vehicle_count'] = self.vehicle_count self.log.debug('#VEHICLES FOUND: %s' % self.vehicle_count) return context ###Output _____no_output_____ ###Markdown We will count only paths that have length more than 3 points(to remove some noise) and the 4th in the green zone. We use masks cause it’s many operation effective and simpler than using vector algorithms.Just use “binary and” operation to check that point in the area, and that’s all. And here is how we set it: ###Code EXIT_PTS = np.array([ [[732, 720], [732, 590], [1280, 500], [1280, 720]], [[0, 400], [645, 400], [645, 0], [0, 0]] ]) SHAPE = (720,1280) base = np.zeros(SHAPE + (3,), dtype='uint8') exit_mask = cv2.fillPoly(base, EXIT_PTS, (255, 255, 255))[:, :, 0] plt.imshow(base) plt.show() ###Output _____no_output_____ ###Markdown Now let’s link points in paths at [line 55](scrollTo=lh3dhyf9iaKF&line=55&uniqifier=1)On first frame. 
we just add all points as new paths.Next if len(path) == 1, for each path in the cache we are trying to find the point(centroid) from newly detected objects which will have the smallest Euclidean distance to the last point of the path.If len(path) > 1, then with the last two points in the path we are predicting new point on the same line, and finding min distance between it and the current point.The point with minimal distance added to the end of the current path and removed from the list.If some points left after this we add them as new paths.And also we limit the number of points in the path at [line 101](scrollTo=lh3dhyf9iaKF&line=101&uniqifier=1/)Now we will try to count vehicles that entering in the exit zone. To do this we just take 2 last points in the path and checking that last of them in exit zone, and previous not, and also checking that len(path) should be bigger than limit.The part after else is preventing of back-linking new points to the points in exit zone.And the last two processor is CSV writer to create report CSV file, and visualization for debugging and nice pictures/videos. ###Code class CsvWriter(PipelineProcessor): def __init__(self, path, name, start_time=0, fps=15): super(CsvWriter, self).__init__() self.fp = open(os.path.join(path, name), 'w') self.writer = csv.DictWriter(self.fp, fieldnames=['time', 'vehicles']) self.writer.writeheader() self.start_time = start_time self.fps = fps self.path = path self.name = name self.prev = None def __call__(self, context): frame_number = context['frame_number'] count = _count = context['vehicle_count'] if self.prev: _count = count - self.prev time = ((self.start_time + int(frame_number / self.fps)) * 100 + int(100.0 / self.fps) * (frame_number % self.fps)) self.writer.writerow({'time': time, 'vehicles': _count}) self.prev = count return context BOUNDING_BOX_COLOUR = (255, 192, 0) CENTROID_COLOUR = (255, 192, 0) CAR_COLOURS = [(255, 192, 0)] EXIT_COLOR = (66, 183, 42) class Visualizer(PipelineProcessor): def __init__(self, save_image=True, image_dir='images'): super(Visualizer, self).__init__() self.save_image = save_image self.image_dir = image_dir def check_exit(self, point, exit_masks=[]): for exit_mask in exit_masks: if exit_mask[point[1]][point[0]] == 255: return True return False def draw_pathes(self, img, pathes): if not img.any(): return for i, path in enumerate(pathes): path = np.array(path)[:, 1].tolist() for point in path: cv2.circle(img, point, 2, CAR_COLOURS[0], -1) cv2.polylines(img, [np.int32(path)], False, CAR_COLOURS[0], 1) return img def draw_boxes(self, img, pathes, exit_masks=[]): for (i, match) in enumerate(pathes): contour, centroid = match[-1][:2] if self.check_exit(centroid, exit_masks): continue x, y, w, h = contour cv2.rectangle(img, (x, y), (x + w - 1, y + h - 1), BOUNDING_BOX_COLOUR, 1) cv2.circle(img, centroid, 2, CENTROID_COLOUR, -1) return img def draw_ui(self, img, vehicle_count, exit_masks=[]): # this just add green mask with opacity to the image for exit_mask in exit_masks: _img = np.zeros(img.shape, img.dtype) _img[:, :] = EXIT_COLOR mask = cv2.bitwise_and(_img, _img, mask=exit_mask) cv2.addWeighted(mask, 1, img, 1, 0, img) # drawing top block with counts cv2.rectangle(img, (0, 0), (img.shape[1], 50), (0, 0, 0), cv2.FILLED) cv2.putText(img, ("Vehicles passed: {total} ".format(total=vehicle_count)), (30, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 1) return img def __call__(self, context): frame = context['frame'].copy() frame = np.ascontiguousarray(np.flip(frame, 2)) frame_number = 
context['frame_number'] pathes = context['pathes'] exit_masks = context['exit_masks'] vehicle_count = context['vehicle_count'] frame = self.draw_ui(frame, vehicle_count, exit_masks) frame = self.draw_pathes(frame, pathes) frame = self.draw_boxes(frame, pathes, exit_masks) if self.save_image: save_frame(frame, self.image_dir + "/processed_%04d.png" % frame_number) context['frame'] = frame return context ###Output _____no_output_____ ###Markdown CSV writer is saving data by time, cause we need it for further analytics. So i use this formula to add additional frame timing to the unixtimestamp:```time = ((self.start_time + int(frame_number / self.fps)) * 100 + int(100.0 / self.fps) * (frame_number % self.fps))``````so with start time=1 000 000 000 and fps=10 i will get results like thisframe 1 = 1 000 000 000 010frame 1 = 1 000 000 000 020…```Then after you get full csv report you can aggregate this data as you want. ConclusionSo as you see it was not so hard as many people think.But if you run the script you will see that this solution is not ideal, and having a problem with foreground objects overlapping, also it doesn’t have vehicles classification by types(that you will definitely need for real analytics). But still, with good camera position(above the road), it gives pretty good accuracy. And that tells us that even small & simple algorithms used in a right way can give good results. ###Code # build runner def main(): log = logging.getLogger("main") # creating exit mask from points, where we will be counting our vehicles base = np.zeros(SHAPE + (3,), dtype='uint8') exit_mask = cv2.fillPoly(base, EXIT_PTS, (255, 255, 255))[:, :, 0] # there is also bgslibrary, that seems to give better BG substruction, but # not tested it yet bg_subtractor = cv2.createBackgroundSubtractorMOG2( history=500, detectShadows=True) # processing pipline for programming conviniance pipeline = PipelineRunner(pipeline=[ ContourDetection(bg_subtractor=bg_subtractor, save_image=True, image_dir=IMAGE_DIR), # we use y_weight == 2.0 because traffic are moving vertically on video # use x_weight == 2.0 for horizontal. VehicleCounter(exit_masks=[exit_mask], y_weight=2.0), Visualizer(image_dir=IMAGE_DIR,save_image=False), CsvWriter(path='./', name='report.csv') ], log_level=logging.INFO) # Set up image source cap = skvideo.io.vreader(VIDEO_SOURCE) # skipping 500 frames to train bg subtractor train_bg_subtractor(bg_subtractor, cap, num=500) fourcc = cv2.VideoWriter_fourcc(*"MP4V") writer = cv2.VideoWriter(VIDEO_OUT, fourcc, 25, (SHAPE[1], SHAPE[0]), True) frame_number = -1 for frame in cap: if not frame.any(): log.error("Frame capture failed, stopping...") break frame_number += 1 log.info("Frame #%s" % frame_number) pipeline.set_context({ 'frame': frame, 'frame_number': frame_number, }) ctx = pipeline.run() writer.write(ctx['frame']) if frame_number > PARSE_FRAMES: break writer.release() # Parameters # ============================================================================ IMAGE_DIR = "./out" VIDEO_SOURCE = "road.mp4" VIDEO_OUT = "road_parsed.mp4" PARSE_FRAMES = 15*25 SHAPE = (720, 1280) # HxW EXIT_PTS = np.array([ [[732, 720], [732, 590], [1280, 500], [1280, 720]], [[0, 400], [645, 400], [645, 0], [0, 0]] ]) # ============================================================================ log = init_logging() main() from google.colab import files files.download('road_parsed.mp4') ###Output _____no_output_____
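###Markdown The `report.csv` produced by `CsvWriter` holds one row per frame, with `time` encoded as described above (seconds multiplied by 100 plus a sub-second frame offset). A possible follow-up aggregation (a sketch, not part of the original pipeline) is to sum the per-frame counts into vehicles per second: ###Code
import pandas as pd

report = pd.read_csv('report.csv')
report['second'] = report['time'] // 100   # drop the sub-second frame offset
vehicles_per_second = report.groupby('second')['vehicles'].sum()
print(vehicles_per_second.head())
###Output _____no_output_____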
notebooks/biology/init_nbs/08e_init_random_db5.ipynb
###Markdown init fit ###Code p.wave = 'db5' p.J = 4 p.mode = 'zero' p.init_factor = 1 p.noise_factor = 0 p.const_factor = 0 p.num_epochs = 100 p.attr_methods = 'Saliency' lamWaveloss = 1 p.lamlSum = lamWaveloss p.lamhSum = lamWaveloss p.lamL2sum = lamWaveloss p.lamCMF = lamWaveloss p.lamConv = lamWaveloss p.lamL1wave = 0.01 p.lamL1attr = 0.0 p.target = 0 # load data and model train_loader, test_loader = get_dataloader(p.data_path, batch_size=p.batch_size, is_continuous=p.is_continuous) model = load_pretrained_model(p.model_path, device=device) # prepare model random.seed(p.seed) np.random.seed(p.seed) torch.manual_seed(p.seed) wt = awd.DWT1d(wave=p.wave, mode=p.mode, J=p.J, init_factor=p.init_factor, noise_factor=p.noise_factor, const_factor=p.const_factor).to(device) wt.train() # train params = list(wt.parameters()) optimizer = torch.optim.Adam(params, lr=p.lr) loss_f = awd.get_loss_f(lamlSum=p.lamlSum, lamhSum=p.lamhSum, lamL2norm=p.lamL2norm, lamCMF=p.lamCMF, lamConv=p.lamConv, lamL1wave=p.lamL1wave, lamL1attr=p.lamL1attr) trainer = awd.Trainer(model, wt, optimizer, loss_f, target=p.target, use_residuals=True, attr_methods=p.attr_methods, device=device, n_print=5) # run trainer(train_loader, epochs=p.num_epochs) plt.plot(np.log(trainer.train_losses)) plt.xlabel("epochs") plt.ylabel("log train loss") plt.title('Log-train loss vs epochs') plt.show() print('calculating losses and metric...') model.train() # cudnn RNN backward can only be called in training mode validator = awd.Validator(model, test_loader) rec_loss, lsum_loss, hsum_loss, L2norm_loss, CMF_loss, conv_loss, L1wave_loss, L1saliency_loss, L1inputxgrad_loss = validator( wt, target=p.target) print("Recon={:.5f}\n lsum={:.5f}\n hsum={:.5f}\n L2norm={:.5f}\n CMF={:.5f}\n conv={:.5f}\n L1wave={:.5f}\n Saliency={:.5f}\n Inputxgrad={:.5f}\n".format(rec_loss, lsum_loss, hsum_loss, L2norm_loss, CMF_loss, conv_loss, L1wave_loss, L1saliency_loss, L1inputxgrad_loss)) filt = get_1dfilts(wt) phi, psi, x = get_wavefun(wt) plot_1dfilts(filt, is_title=True, figsize=(2,2)) plot_wavefun((phi, psi, x), is_title=True, figsize=(2,1)) ###Output _____no_output_____ ###Markdown later fit ###Code p.lamL1wave = 0.0001 p.lamL1attr = 0.5 p.num_epochs = 100 # train params = list(wt.parameters()) optimizer = torch.optim.Adam(params, lr=p.lr) loss_f = awd.get_loss_f(lamlSum=p.lamlSum, lamhSum=p.lamhSum, lamL2norm=p.lamL2norm, lamCMF=p.lamCMF, lamConv=p.lamConv, lamL1wave=p.lamL1wave, lamL1attr=p.lamL1attr) trainer = awd.Trainer(model, wt, optimizer, loss_f, target=p.target, use_residuals=True, attr_methods=p.attr_methods, device=device, n_print=5) # run trainer(train_loader, epochs=p.num_epochs) plt.plot(np.log(trainer.train_losses)) plt.xlabel("epochs") plt.ylabel("log train loss") plt.title('Log-train loss vs epochs') plt.show() print('calculating losses and metric...') model.train() # cudnn RNN backward can only be called in training mode validator = awd.Validator(model, test_loader) rec_loss, lsum_loss, hsum_loss, L2norm_loss, CMF_loss, conv_loss, L1wave_loss, L1saliency_loss, L1inputxgrad_loss = validator( wt, target=p.target) print("Recon={:.5f}\n lsum={:.5f}\n hsum={:.5f}\n L2norm={:.5f}\n CMF={:.5f}\n conv={:.5f}\n L1wave={:.5f}\n Saliency={:.5f}\n Inputxgrad={:.5f}\n".format(rec_loss, lsum_loss, hsum_loss, L2norm_loss, CMF_loss, conv_loss, L1wave_loss, L1saliency_loss, L1inputxgrad_loss)) filt = get_1dfilts(wt) phi, psi, x = get_wavefun(wt) plot_1dfilts(filt, is_title=True, figsize=(2,2)) plot_wavefun((phi, psi, x), is_title=True, figsize=(2,1)) 
###Output _____no_output_____
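###Markdown
One way to sanity-check the later fit is to measure how far the attribution-regularized wavelet has drifted from its db5 initialization. Below is a minimal sketch that compares the learned wavelet function against the reference db5 wavelet from PyWavelets; it assumes that `psi` and `x` returned by `get_wavefun(wt)` above are (or can be converted to) 1-D NumPy arrays and that `pywt` is installed, and the interpolation grid size is an illustrative choice:
```python
import numpy as np
import pywt

# reference db5 scaling and wavelet functions on their own grid
ref_phi, ref_psi, ref_x = pywt.Wavelet('db5').wavefun(level=8)

# put learned and reference wavelet functions on a common grid
learned_x = np.asarray(x)
learned_psi = np.asarray(psi)
grid = np.linspace(max(learned_x.min(), ref_x.min()),
                   min(learned_x.max(), ref_x.max()), 512)
learned = np.interp(grid, learned_x, learned_psi)
reference = np.interp(grid, ref_x, ref_psi)

# normalized correlation: magnitude close to 1 means the learned filter stayed near db5
corr = np.dot(learned, reference) / (np.linalg.norm(learned) * np.linalg.norm(reference) + 1e-12)
print(f"correlation with db5 wavelet: {corr:.3f}")
```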
Project_2_VQE_Molecules/H4_complete.ipynb
###Markdown H4 Molecule: Constructing Potential Energy Surfaces Using VQE Step 1: Classical calculations ###Code import numpy as np import matplotlib.pyplot as plt from utility import * import tequila as tq threshold = 1e-6 #Cutoff for UCC MP2 amplitudes and QCC ranking gradients basis = 'sto-3g' ###Output _____no_output_____ ###Markdown Classical Electronic Structure Methods ###Code bond_lengths = np.linspace(0.8,2.7,15) #Run FCI print("Full Configuration Interaction (FCI):") FCI_PES = obtain_PES('h4', bond_lengths, basis, method='fci') #Run HF print("Hartree-Fock (HF):") HF_PES = obtain_PES('h4', bond_lengths, basis, method='hf') #Run CCSD print("Couple Cluster Singles and Doubles (CCSD):") CCSD_PES = obtain_PES('h4', bond_lengths, basis, method='ccsd') #Plot H4 PESs plt.title('H4 symmetric dissociation, STO-3G') plt.xlabel('R, Angstrom') plt.ylabel('E, Hartree') plt.plot(bond_lengths, FCI_PES, label='FCI') plt.scatter(bond_lengths, HF_PES, label='HF', color='orange') plt.scatter(bond_lengths, CCSD_PES, label='CCSD', color='purple') plt.legend() ###Output _____no_output_____ ###Markdown Step 2: Generating Qubit Hamiltonians ###Code qubit_transf = 'jw' # Jordan-Wigner transformations h4 = get_qubit_hamiltonian(mol='h4', geometry=1.5, basis='sto3g', qubit_transf=qubit_transf) print(h4) h4_tapered = taper_hamiltonian(h4, n_spin_orbitals=8, n_electrons=4, qubit_transf=qubit_transf) print("Effective Hamiltonian:", h4_tapered) ###Output _____no_output_____ ###Markdown Step 3: Unitary Ansatz ###Code trotter_steps = 1 xyz_data = get_molecular_data('h4', geometry=1.5, xyz_format=True) basis='sto-3g' h4_tq = tq.quantumchemistry.Molecule(geometry=xyz_data, basis_set=basis) print('Number of spin-orbitals (qubits): {} \n'.format(2*h4_tq.n_orbitals)) E_FCI = h4_tq.compute_energy(method='fci') print('FCI energy: {}'.format(E_FCI)) H = h4_tq.make_hamiltonian() print("\nHamiltonian has {} terms\n".format(len(H))) U_UCCSD = h4_tq.make_uccsd_ansatz(initial_amplitudes='MP2',threshold=threshold, trotter_steps=trotter_steps) E = tq.ExpectationValue(H=H, U=U_UCCSD) print('\nNumber of UCCSD amplitudes: {} \n'.format(len(E.extract_variables()))) print('\nStarting optimization:\n') result = tq.minimize(objective=E, method="BFGS", initial_values={k:0.0 for k in E.extract_variables()}, tol=1e-6) print('\nObtained UCCSD energy: {}'.format(result.energy)) ###Output Hamiltonian has 93 terms Number of UCCSD amplitudes: 8 Starting optimization: Optimizer: <class 'tequila.optimizers.optimizer_scipy.OptimizerSciPy'> backend : qulacs samples : None save_history : True noise : None Method : BFGS Objective : 1 expectationvalues gradient : 512 expectationvalues active variables : 8 E=-1.71399864 angles= {(3, 1, 2, 0): 0.0, (3, 0, 3, 0): 0.0, (2, 0, 3, 1): 0.0, (2, 1, 2, 1): 0.0, (3, 1, 3, 1): 0.0, (3, 0, 2, 1): 0.0, (2, 0, 2, 0): 0.0, (2, 1, 3, 0): 0.0} samples= None E=-1.11359858 angles= {(3, 1, 2, 0): -0.46722960472106934, (3, 0, 3, 0): -0.18474936485290527, (2, 0, 3, 1): -0.46722960472106934, (2, 1, 2, 1): -0.19438719749450684, (3, 1, 3, 1): -0.30623817443847656, (3, 0, 2, 1): -0.004305601119995117, (2, 0, 2, 0): -0.27246713638305664, (2, 1, 3, 0): -0.004305601119995117} samples= None E=-1.77345576 angles= {(3, 1, 2, 0): -0.07621729618876547, (3, 0, 3, 0): -0.030137424768035468, (2, 0, 3, 1): -0.07621729618876547, (2, 1, 2, 1): -0.031709605849114926, (3, 1, 3, 1): -0.04995540823963474, (3, 0, 2, 1): -0.0007023554854347396, (2, 0, 2, 0): -0.04444647390828256, (2, 1, 3, 0): -0.0007023554854347396} samples= None E=-1.86847365 
angles= {(3, 1, 2, 0): -0.1315614164629193, (3, 0, 3, 0): -0.09395831984008161, (2, 0, 3, 1): -0.13143066883385895, (2, 1, 2, 1): -0.11225449998352802, (3, 1, 3, 1): -0.12684898733126088, (3, 0, 2, 1): -0.08839361782823328, (2, 0, 2, 0): -0.11092845553425043, (2, 1, 3, 0): -0.08770070315039019} samples= None E=-1.92490788 angles= {(3, 1, 2, 0): -0.23967740746534782, (3, 0, 3, 0): -0.03879723051960417, (2, 0, 3, 1): -0.23220395086328227, (2, 1, 2, 1): -0.41920652459809404, (3, 1, 3, 1): -0.24479237426045516, (3, 0, 2, 1): -0.23762765274751507, (2, 0, 2, 0): -0.2359646592478713, (2, 1, 3, 0): -0.2300305183599774} samples= None E=-1.93761168 angles= {(3, 1, 2, 0): -0.23631614790493116, (3, 0, 3, 0): -0.11255542662409998, (2, 0, 3, 1): -0.23222722992865386, (2, 1, 2, 1): -0.45759357891653557, (3, 1, 3, 1): -0.1895594036019585, (3, 0, 2, 1): -0.20957037265993628, (2, 0, 2, 0): -0.1868159532268097, (2, 1, 3, 0): -0.2049966323519755} samples= None E=-1.94772411 angles= {(3, 1, 2, 0): -0.23919228734243203, (3, 0, 3, 0): -0.15467097241645134, (2, 0, 3, 1): -0.23819142359988207, (2, 1, 2, 1): -0.5700298571495812, (3, 1, 3, 1): -0.08782842561215495, (3, 0, 2, 1): -0.17609392062718043, (2, 0, 2, 0): -0.08275346630883829, (2, 1, 3, 0): -0.17418041639109813} samples= None E=-1.95065212 angles= {(3, 1, 2, 0): -0.23282225537504558, (3, 0, 3, 0): -0.14382806431770626, (2, 0, 3, 1): -0.23134780593488768, (2, 1, 2, 1): -0.6524499433054977, (3, 1, 3, 1): -0.059665021241479446, (3, 0, 2, 1): -0.16342205335101112, (2, 0, 2, 0): -0.056755776042658086, (2, 1, 3, 0): -0.16040972482824123} samples= None E=-1.95110831 angles= {(3, 1, 2, 0): -0.2372450766219998, (3, 0, 3, 0): -0.11958886814489893, (2, 0, 3, 1): -0.23388501903014275, (2, 1, 2, 1): -0.6649357897861723, (3, 1, 3, 1): -0.0453746279336269, (3, 0, 2, 1): -0.17063617413152388, (2, 0, 2, 0): -0.04610564283037418, (2, 1, 3, 0): -0.16500311155759942} samples= None E=-1.95118571 angles= {(3, 1, 2, 0): -0.23800119730318073, (3, 0, 3, 0): -0.12705217706427893, (2, 0, 3, 1): -0.23381212829644057, (2, 1, 2, 1): -0.6714643885868825, (3, 1, 3, 1): -0.04729985607721233, (3, 0, 2, 1): -0.1709348489927596, (2, 0, 2, 0): -0.0468580425044987, (2, 1, 3, 0): -0.16430570703499436} samples= None E=-1.95118859 angles= {(3, 1, 2, 0): -0.23778842159638441, (3, 0, 3, 0): -0.12685492617182847, (2, 0, 3, 1): -0.2327108808995517, (2, 1, 2, 1): -0.668900708432699, (3, 1, 3, 1): -0.048690090778286337, (3, 0, 2, 1): -0.17123092619972016, (2, 0, 2, 0): -0.04784980561573943, (2, 1, 3, 0): -0.1634932239455811} samples= None E=-1.95119024 angles= {(3, 1, 2, 0): -0.238376449187641, (3, 0, 3, 0): -0.12685629341908122, (2, 0, 3, 1): -0.2318848415600486, (2, 1, 2, 1): -0.6683443726325188, (3, 1, 3, 1): -0.04879299607915962, (3, 0, 2, 1): -0.17211271713636297, (2, 0, 2, 0): -0.04788488333795322, (2, 1, 3, 0): -0.16259866869375209} samples= None E=-1.95119341 angles= {(3, 1, 2, 0): -0.23968886390891161, (3, 0, 3, 0): -0.12687878126644583, (2, 0, 3, 1): -0.23029567623053784, (2, 1, 2, 1): -0.6678282794445145, (3, 1, 3, 1): -0.04889311717502373, (3, 0, 2, 1): -0.17389199633926272, (2, 0, 2, 0): -0.047903393883564585, (2, 1, 3, 0): -0.16073632242377647} samples= None E=-1.95120317 angles= {(3, 1, 2, 0): -0.2449385227939941, (3, 0, 3, 0): -0.12696873265590433, (2, 0, 3, 1): -0.22393901491249488, (2, 1, 2, 1): -0.6657639066924974, (3, 1, 3, 1): -0.04929360155848017, (3, 0, 2, 1): -0.18100911315086177, (2, 0, 2, 0): -0.04797743606601006, (2, 1, 3, 0): -0.15328693734387408} samples= None E=-1.95122477 
angles= {(3, 1, 2, 0): -0.2586335178763863, (3, 0, 3, 0): -0.1271564795721844, (2, 0, 3, 1): -0.20879918099850797, (2, 1, 2, 1): -0.6631470295413394, (3, 1, 3, 1): -0.04972731826279019, (3, 0, 2, 1): -0.1985226173230388, (2, 0, 2, 0): -0.04797425142263507, (2, 1, 3, 0): -0.13467618962413896} samples= None E=-1.95126093 angles= {(3, 1, 2, 0): -0.2835405638554542, (3, 0, 3, 0): -0.12738109553097185, (2, 0, 3, 1): -0.18262902482286877, (2, 1, 2, 1): -0.6606673130835974, (3, 1, 3, 1): -0.05004022113249763, (3, 0, 2, 1): -0.22942487040463178, (2, 0, 2, 0): -0.047860529741525625, (2, 1, 3, 0): -0.10168447715254461} samples= None E=-1.95131263 angles= {(3, 1, 2, 0): -0.32685640984966413, (3, 0, 3, 0): -0.12762300826127013, (2, 0, 3, 1): -0.13919569236776924, (2, 1, 2, 1): -0.6591666071260438, (3, 1, 3, 1): -0.04994031093582107, (3, 0, 2, 1): -0.2819861987364699, (2, 0, 2, 0): -0.047485265374333535, (2, 1, 3, 0): -0.04582458553401925} samples= None E=-1.95136032 angles= {(3, 1, 2, 0): -0.3623677011575087, (3, 0, 3, 0): -0.12759089024328554, (2, 0, 3, 1): -0.10703561359654606, (2, 1, 2, 1): -0.661903922591017, (3, 1, 3, 1): -0.048956419626223054, (3, 0, 2, 1): -0.3235045243048328, (2, 0, 2, 0): -0.046920364333569685, (2, 1, 3, 0): -0.0027994431742362005} samples= None E=-1.95139839 angles= {(3, 1, 2, 0): -0.37497710595737943, (3, 0, 3, 0): -0.1272412234172018, (2, 0, 3, 1): -0.09977446634133624, (2, 1, 2, 1): -0.6682844016163941, (3, 1, 3, 1): -0.04772957750835103, (3, 0, 2, 1): -0.33603697268989413, (2, 0, 2, 0): -0.046659187794520475, (2, 1, 3, 0): 0.009146338065928676} samples= None E=-1.95140358 angles= {(3, 1, 2, 0): -0.3694951156088176, (3, 0, 3, 0): -0.12708580147634146, (2, 0, 3, 1): -0.10612573424071789, (2, 1, 2, 1): -0.6705635273295066, (3, 1, 3, 1): -0.04763787998186347, (3, 0, 2, 1): -0.3280723154016191, (2, 0, 2, 0): -0.046899721234160495, (2, 1, 3, 0): 0.0013761719338480152} samples= None E=-1.95140382 angles= {(3, 1, 2, 0): -0.36937967044881853, (3, 0, 3, 0): -0.1270781717013275, (2, 0, 3, 1): -0.10606679012482531, (2, 1, 2, 1): -0.6706698115670132, (3, 1, 3, 1): -0.04758404604205622, (3, 0, 2, 1): -0.32736704486261353, (2, 0, 2, 0): -0.04688770364091082, (2, 1, 3, 0): 0.0010366831958293424} samples= None E=-1.95140420 angles= {(3, 1, 2, 0): -0.3693518971300903, (3, 0, 3, 0): -0.12706099393175657, (2, 0, 3, 1): -0.10591139659197554, (2, 1, 2, 1): -0.6707536022054474, (3, 1, 3, 1): -0.04749472502535161, (3, 0, 2, 1): -0.325967707079998, (2, 0, 2, 0): -0.04685236767036665, (2, 1, 3, 0): -2.0943293084611098e-05} samples= None E=-1.95140487 angles= {(3, 1, 2, 0): -0.3700578017747706, (3, 0, 3, 0): -0.1270454866380396, (2, 0, 3, 1): -0.1050353057321078, (2, 1, 2, 1): -0.6708376962866455, (3, 1, 3, 1): -0.04740464702452958, (3, 0, 2, 1): -0.3241484689578883, (2, 0, 2, 0): -0.046816069052693025, (2, 1, 3, 0): -0.0014802644561941675} samples= None E=-1.95140612 angles= {(3, 1, 2, 0): -0.37196549069570234, (3, 0, 3, 0): -0.12702844581841183, (2, 0, 3, 1): -0.10296543667061137, (2, 1, 2, 1): -0.670940849509649, (3, 1, 3, 1): -0.047296167386674026, (3, 0, 2, 1): -0.3214315020412858, (2, 0, 2, 0): -0.046770596320974196, (2, 1, 3, 0): -0.003763451163281366} samples= None E=-1.95140846 angles= {(3, 1, 2, 0): -0.37608381046495276, (3, 0, 3, 0): -0.12700704722824668, (2, 0, 3, 1): -0.09868999903927805, (2, 1, 2, 1): -0.6710818828239413, (3, 1, 3, 1): -0.04715081791971448, (3, 0, 2, 1): -0.31688837258203206, (2, 0, 2, 0): -0.046707567141602326, (2, 1, 3, 0): -0.0077242826828331996} samples= None 
###Markdown Step 4: Measurement ###Code comm_groups = get_commuting_group(h4) print('Number of mutually commuting fragments: {}'.format(len(comm_groups))) print('The first commuting group') print(comm_groups[1]) uqwc = get_qwc_unitary(comm_groups[1]) print('This is unitary, U * U^+ = I ') print(uqwc * uqwc) qwc = remove_complex(uqwc * comm_groups[1] * uqwc) print(qwc) uz = get_zform_unitary(qwc) print("Checking whether U * U^+ is identity: {}".format(uz * uz)) allz = remove_complex(uz * qwc * uz) print("\nThe all-z form of qwc fragment:\n{}".format(allz)) ###Output Checking whether U * U^+ is identity: 0.9999999999999996 [] The all-z form of qwc fragment: -0.8823205468513678 [] + -0.035257520327649824 [Z0 Z1 Z2 Z3 Z7] + 0.035257520327649824 [Z0 Z1 Z2 Z5 Z7] + -0.03405838502249276 [Z0 Z3] + -0.03405838502251352 [Z0 Z5] + -0.03405838502249276 [Z1 Z2 Z3] + -0.035257520327649824 [Z1 Z2 Z3 Z4 Z7] + -0.035257520327649824 [Z1 Z2 Z3 Z6] + 0.035257520327649824 [Z1 Z2 Z5 Z6] + -0.03405838502251352 [Z1 Z4 Z5] + 0.035257520327649824 [Z1 Z5 Z7] + -0.035257520327649824 [Z2 Z3 Z4 Z6] + 0.03827976789253173 [Z2 Z3 Z6 Z7] + 0.03827976789253173 [Z3 Z6 Z7] + 0.03827976789251079 [Z4 Z5 Z6 Z7] + 0.035257520327649824 [Z5 Z6] + 0.03827976789251079 [Z5 Z6 Z7] ###Markdown Step 5: Circuits ###Code hf_reference = hf_occ(2*h4_tq.n_orbitals, h4_tq.n_electrons) #Define number of entanglers to enter ansatz n_ents = 1 #Rank entanglers using energy gradient criterion ranked_entangler_groupings = generate_QCC_gradient_groupings(H.to_openfermion(), 2*h4_tq.n_orbitals, hf_reference, cutoff=threshold) print('Grouping gradient magnitudes (Grouping : Gradient magnitude):') for i in range(len(ranked_entangler_groupings)): print('{} : {}'.format(i+1,ranked_entangler_groupings[i][1])) entanglers = get_QCC_entanglers(ranked_entangler_groupings, n_ents, 2*h4_tq.n_orbitals) print('\nSelected entanglers:') for ent in entanglers: print(ent) #Mean-field part of U (Omega): U_MF = construct_QMF_ansatz(n_qubits = 2*h4_tq.n_orbitals) #Entangling part of U: U_ENT = construct_QCC_ansatz(entanglers) U_QCC = U_MF + U_ENT E = tq.ExpectationValue(H=H, U=U_QCC) initial_vals = init_qcc_params(hf_reference, E.extract_variables()) #Minimize wrt the entangler amplitude and MF angles: result = tq.minimize(objective=E, method="BFGS", initial_values=initial_vals, tol=1.e-6) print('\nObtained QCC energy ({} entanglers): {}'.format(len(entanglers), result.energy)) H = tq.QubitHamiltonian.from_openfermion(get_qubit_hamiltonian('h4', 2, 'sto-3g', qubit_transf='jw')) print("entanglers", entanglers) print(construct_QCC_ansatz(entanglers)) a = tq.Variable("tau_0") print("a:", a) U = construct_QMF_ansatz(8) #hardcoding the entanglers U += tq.gates.ExpPauli(paulistring=tq.PauliString.from_string("X(2)Y(3)X(6)X(7)"), angle=a) print(U) E = tq.ExpectationValue(H=H, U=U) vars = {'beta_0': 3.141592653589793, 'gamma_0': 0.0, 'beta_1': 3.141592653589793, 'gamma_1': 0.0, 'beta_2': 3.141592542000603, 'gamma_2': 0.0, 'beta_3': 3.141592542000603, 'gamma_3': 0.0, 'beta_4': 0.0, 'gamma_4': 0.0, 'beta_5': 0.0, 'gamma_5': 0.0, 'beta_6': 0.0, 'gamma_6': 0.0, 'beta_7': 0.0, 'gamma_7': 0.0, 'tau_0': 0.8117964996241631} # values obtained from step 3 print(tq.simulate(E, variables=vars)) from qiskit import IBMQ IBMQ.save_account('6f1ae0f74f3b670c62a6a7427dc22eb12f9d6eaa47e5d264218990c42e2593d029d53a92ee9260ce05f0daf11d87a8b1a1114637c90ae648bfabeddca94ae087', overwrite=True) 
IBMQ.enable_account('6f1ae0f74f3b670c62a6a7427dc22eb12f9d6eaa47e5d264218990c42e2593d029d53a92ee9260ce05f0daf11d87a8b1a1114637c90ae648bfabeddca94ae087', overwrite=True) # list of devices available can be found in ibmq account page provider = IBMQ.get_provider(hub='ibm-q', group='open', project='main') device = provider.get_backend('ibmq_qasm_simulator') tq.simulate(E, variables=vars, samples=100, backend="qiskit", device=device)#,qiskit_provider = provider) #draw circ = tq.circuit.compiler.compile_exponential_pauli_gate(U) tq.draw(circ, backend="qiskit") ###Output ┌────────────────────┐┌─────────────────────┐ » q_0: ─┤ RX(f((beta_0,))_0) ├┤ RZ(f((gamma_0,))_1) ├────────────────────────────» ├────────────────────┤├─────────────────────┤ » q_1: ─┤ RX(f((beta_1,))_2) ├┤ RZ(f((gamma_1,))_3) ├────────────────────────────» ├────────────────────┤├─────────────────────┤ ┌───┐ » q_2: ─┤ RX(f((beta_2,))_4) ├┤ RZ(f((gamma_2,))_5) ├────┤ H ├──────■────────────» ├────────────────────┤├─────────────────────┤ ┌──┴───┴───┐┌─┴─┐ » q_3: ─┤ RX(f((beta_3,))_6) ├┤ RZ(f((gamma_3,))_7) ├─┤ RX(pi/2) ├┤ X ├──■───────» ├────────────────────┤├─────────────────────┤ └──────────┘└───┘ │ » q_4: ─┤ RX(f((beta_4,))_8) ├┤ RZ(f((gamma_4,))_9) ├────────────────────┼───────» ┌┴────────────────────┤├─────────────────────┴┐ │ » q_5: ┤ RX(f((beta_5,))_10) ├┤ RZ(f((gamma_5,))_11) ├───────────────────┼───────» ├─────────────────────┤├──────────────────────┤ ┌───┐ ┌─┴─┐ » q_6: ┤ RX(f((beta_6,))_12) ├┤ RZ(f((gamma_6,))_13) ├───┤ H ├─────────┤ X ├──■──» ├─────────────────────┤├──────────────────────┤ ├───┤ └───┘┌─┴─┐» q_7: ┤ RX(f((beta_7,))_14) ├┤ RZ(f((gamma_7,))_15) ├───┤ H ├──────────────┤ X ├» └─────────────────────┘└──────────────────────┘ └───┘ └───┘» c_0: ══════════════════════════════════════════════════════════════════════════» » c_1: ══════════════════════════════════════════════════════════════════════════» » c_2: ══════════════════════════════════════════════════════════════════════════» » c_3: ══════════════════════════════════════════════════════════════════════════» » c_4: ══════════════════════════════════════════════════════════════════════════» » c_5: ══════════════════════════════════════════════════════════════════════════» » c_6: ══════════════════════════════════════════════════════════════════════════» » c_7: ══════════════════════════════════════════════════════════════════════════» » « «q_0: ────────────────────────────────────────────────── « «q_1: ────────────────────────────────────────────────── « ┌───┐ «q_2: ──────────────────────────────────■──────┤ H ├──── « ┌─┴─┐┌───┴───┴───┐ «q_3: ─────────────────────────────■──┤ X ├┤ RX(-pi/2) ├ « │ └───┘└───────────┘ «q_4: ─────────────────────────────┼──────────────────── « │ «q_5: ─────────────────────────────┼──────────────────── « ┌─┴─┐┌───┐ «q_6: ────────────────────────■──┤ X ├┤ H ├───────────── « ┌────────────────────┐┌─┴─┐├───┤└───┘ «q_7: ┤ RZ(f((tau_0,))_16) ├┤ X ├┤ H ├────────────────── « └────────────────────┘└───┘└───┘ «c_0: ══════════════════════════════════════════════════ « «c_1: ══════════════════════════════════════════════════ « «c_2: ══════════════════════════════════════════════════ « «c_3: ══════════════════════════════════════════════════ « «c_4: ══════════════════════════════════════════════════ « «c_5: ══════════════════════════════════════════════════ « «c_6: ══════════════════════════════════════════════════ « «c_7: ══════════════════════════════════════════════════ «
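###Markdown
Since the QCC energy above depends on a single entangler amplitude `tau_0` (with the mean-field angles held fixed at their optimized values), it can be instructive to scan that amplitude and visually confirm the minimum. A minimal sketch, assuming the expectation value `E` and the `vars` dictionary defined above are still in scope; the scan range is an illustrative choice:
```python
import numpy as np
import matplotlib.pyplot as plt

taus = np.linspace(-1.5, 1.5, 31)
energies = []
for tau in taus:
    scan_vars = dict(vars)           # keep the optimized mean-field angles
    scan_vars['tau_0'] = float(tau)  # vary only the entangler amplitude
    energies.append(tq.simulate(E, variables=scan_vars))

plt.plot(taus, energies)
plt.axvline(vars['tau_0'], linestyle='--', color='grey', label='optimized tau_0')
plt.xlabel('tau_0')
plt.ylabel('E, Hartree')
plt.title('QCC energy vs entangler amplitude')
plt.legend()
plt.show()
```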
Part_3_Predictive_modeling.ipynb
###Markdown Ultimate Inc Part 3 of Data Science ChallengeUltimate is interested in predicting rider retention. To help explore this question, we haveprovided a sample dataset of a cohort of users who signed up for an Ultimate account inJanuary 2014. The data was pulled several months later; we consider a user retained if theywere “active” (i.e. took a trip) in the preceding 30 days.We would like you to use this data set to help understand what factors are the best predictorsfor retention, and offer suggestions to operationalize those insights to help Ultimate.The data is in the attached file ultimate_data_challenge.json. See below for a detaileddescription of the dataset. Please include any code you wrote for the analysis and delete thedataset when you have finished with the challenge.1. Perform any cleaning, exploratory analysis, and/or visualizations to use the provideddata for this analysis (a few sentences/plots describing your approach will suffice). Whatfraction of the observed users were retained?2. Build a predictive model to help Ultimate determine whether or not a user will be active intheir 6th month on the system. Discuss why you chose your approach, what alternativesyou considered, and any concerns you have. How valid is your model? Include any keyindicators of model performance.3. Briefly discuss how Ultimate might leverage the insights gained from the model toimprove its long term rider retention (again, a few sentences will suffice). Data description- city: city this user signed up in- phone: primary device for this user- signup_date: date of account registration; in the form ‘YYYYMMDD’- last_trip_date: the last time this user completed a trip; in the form ‘YYYYMMDD’- avg_dist: the average distance in miles per trip taken in the first 30 days after signup- avg_rating_by_driver: the rider’s average rating over all of their trips- avg_rating_of_driver: the rider’s average rating of their drivers over all of their trips- surge_pct: the percent of trips taken with surge multiplier > 1- avg_surge: The average surge multiplier over all of this user’s trips- trips_in_first_30_days: the number of trips this user took in the first 30 days after signing up- ultimate_black_user: TRUE if the user took an Ultimate Black in their first 30 days; FALSEotherwise- weekday_pct: the percent of the user’s trips occurring during a weekday ###Code import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split, GridSearchCV, RandomizedSearchCV from sklearn import preprocessing from sklearn.metrics import f1_score, roc_auc_score, roc_curve, PrecisionRecallDisplay, precision_recall_curve from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score ###Output _____no_output_____ ###Markdown Data Wrangling and Exploratory Data Analysis ###Code # Load data df = pd.read_json('ultimate_data_challenge.json') df.head(3) # Inspect data df.info() # Check for missing data df.isnull().sum().sort_values(ascending=False) # avg_rating_of_driver df.avg_rating_of_driver.value_counts(dropna=False) # phone df.phone.value_counts(dropna=False) # avg_rating_by_driver df.avg_rating_by_driver.value_counts(dropna=False) ###Output _____no_output_____ ###Markdown Plan to fill both of the missing ratings by using the overall averages for each column. 
However, will do this later after the test train split to avoid data leakage.Assuming the missing phone values means it's some other type of carrier. ###Code df.phone = df.phone.fillna('other') df.phone.value_counts(dropna=False) # Fix date time data types. df.signup_date = pd.to_datetime(df.signup_date) df.last_trip_date = pd.to_datetime(df.last_trip_date) df.info() # Consider a user retained if they were “active” (i.e. took a trip) in the preceding 30 days. max_last_trip_date = df.last_trip_date.max() active_cutoff_date = max_last_trip_date - pd.Timedelta(days=30) print(f'The dates to determine if a user is active are from {active_cutoff_date.strftime("%b %d, %Y")} to {max_last_trip_date.strftime("%b %d, %Y")}.') # Create target feature is_user_retained. df["is_user_retained"] = df.last_trip_date > active_cutoff_date df.is_user_retained.value_counts() # Question: What fraction of the observed users were retained? # Answer: 36.62% df.is_user_retained.value_counts(normalize=True) # Looking for correlation plt.figure(figsize=(16, 9)) sns.heatmap(df.corr(), annot=True) plt.show() # Summary statistics df.describe() # There are some outliers here. Using box plots to explore further. numeric_features = ['trips_in_first_30_days', 'avg_rating_of_driver', 'avg_surge', 'surge_pct', 'weekday_pct', 'avg_dist', 'avg_rating_by_driver'] for col in numeric_features: sns.boxplot(y=df[col], x=df['is_user_retained']) plt.show() # There are so many outliers in the data will try plotting some other ways to explore further. def plot_histogram(): """ Code copied from: pandas histogram: plot histogram for each column as subplot of a big figure at: https://stackoverflow.com/questions/39646070/pandas-histogram-plot-histogram-for-each-column-as-subplot-of-a-big-figure) """ fig, axes = plt.subplots(len(numeric_features)//3, 3, figsize=(16, 9)) i = 0 for triaxis in axes: for axis in triaxis: df.hist(column = numeric_features[i], ax=axis) i = i+1 plot_histogram() # city df.city.value_counts() # Plot analysis for categories for 'trips_in_first_30_days'. # Referenced http://seaborn.pydata.org/tutorial/categorical.html?highlight=bar%20plot category_features = ['city', 'phone'] for cf in category_features: sns.catplot(y=cf, x='trips_in_first_30_days', hue='is_user_retained', kind='violin', data=df) plt.show() # Date time need to be numeric for model later on. df.signup_date = pd.to_numeric(df.signup_date) df.last_trip_date = pd.to_numeric(df.last_trip_date) # One hot encode city categories df_cleaned = pd.get_dummies(df, prefix=['city', 'phone_OS'], columns=['city', 'phone']) df_cleaned.head(3) # All features are now numeric for model analysis. 
df_cleaned.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 50000 entries, 0 to 49999 Data columns (total 17 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 trips_in_first_30_days 50000 non-null int64 1 signup_date 50000 non-null int64 2 avg_rating_of_driver 41878 non-null float64 3 avg_surge 50000 non-null float64 4 last_trip_date 50000 non-null int64 5 surge_pct 50000 non-null float64 6 ultimate_black_user 50000 non-null bool 7 weekday_pct 50000 non-null float64 8 avg_dist 50000 non-null float64 9 avg_rating_by_driver 49799 non-null float64 10 is_user_retained 50000 non-null bool 11 city_Astapor 50000 non-null uint8 12 city_King's Landing 50000 non-null uint8 13 city_Winterfell 50000 non-null uint8 14 phone_OS_Android 50000 non-null uint8 15 phone_OS_iPhone 50000 non-null uint8 16 phone_OS_other 50000 non-null uint8 dtypes: bool(2), float64(6), int64(3), uint8(6) memory usage: 3.8 MB ###Markdown Pre-process, Train, and Evaluate Models ###Code # Train/Test split to help avoid overfitting model to specific data. X = df_cleaned.drop(['is_user_retained', 'last_trip_date'], axis=1) y = df_cleaned['is_user_retained'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2, random_state=123) print(f"80% of data in X train {X_train.shape} and y train {y_train.shape}.") print(f"Remaining in X test {X_test.shape} and y test {y_test.shape}.") # Now it is time to calculate the missing values for the rating columns that have missing values. training_overall_avg_rating_of_driver = X_train['avg_rating_of_driver'].mean() training_overall_avg_rating_by_driver = X_train['avg_rating_by_driver'].mean() # Following recommendations at https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy. X_train_cleaned = X_train.copy() mask = X_train_cleaned.avg_rating_of_driver.isnull() X_train_cleaned.loc[mask, 'avg_rating_of_driver'] = training_overall_avg_rating_of_driver mask = X_train_cleaned.avg_rating_by_driver.isnull() X_train_cleaned.loc[mask, 'avg_rating_by_driver'] = training_overall_avg_rating_by_driver # If there were even more features that needed this type of imputation would use the recommendations at https://scikit-learn.org/stable/auto_examples/impute/plot_missing_values.html#sphx-glr-auto-examples-impute-plot-missing-values-py. X_test_cleaned = X_test.copy() mask = X_test_cleaned.avg_rating_of_driver.isnull() # Note that the training mean is still used here to avoid possible data leakage. X_test_cleaned.loc[mask, 'avg_rating_of_driver'] = training_overall_avg_rating_of_driver mask = X_test_cleaned.avg_rating_by_driver.isnull() X_test_cleaned.loc[mask, 'avg_rating_by_driver'] = training_overall_avg_rating_by_driver print(f'Overall mean of avg_rating_of_driver in training set is {training_overall_avg_rating_of_driver}.') print(f'Overall mean of avg_rating_by_driver in training set is {training_overall_avg_rating_by_driver}.') # Scale training and test data scaler = preprocessing.StandardScaler().fit(X_train_cleaned) X_train_cleaned_scaled = scaler.transform(X_train_cleaned) X_test_cleaned_scaled = scaler.transform(X_test_cleaned) ###Output _____no_output_____ ###Markdown Choosing to use Random forest for this supervised learning classification problem because:- it performs well in a multitude of data situations.- it uses an ensemble of decision trees.- it is an efficient way to investigate the importance of a set of features with a large data set.- dimensionality reduction helps find relevant details. 
###Code base_rf = RandomForestClassifier(n_estimators=10, random_state=123) base_rf.fit(X_train_cleaned_scaled, y_train) y_pred = base_rf.predict(X_test_cleaned_scaled) ac = accuracy_score(y_test, y_pred) f1 = f1_score(y_test, y_pred, average='weighted') print("Baseline Model - Random Forest Classifier") print(f'Accuracy: {ac:.4f}.') print(f'Weighted F1-score: {f1:.4f}.') cm = confusion_matrix(y_test, y_pred) sns.heatmap(cm, annot=True, fmt='d') plt.show() # Additional feature engineering to identify most important features feat_importances = pd.Series(base_rf.feature_importances_, index=X.columns).sort_values(ascending=False) print(feat_importances) # Visual comparison fig, ax = plt.subplots(figsize=(8, 4.5)) feat_importances.nlargest(5).plot(kind='barh') # Add x, y gridlines ax.grid(b=True, color='grey', linestyle='-.', linewidth=0.5, alpha=0.2) plt.title("The 5 Most Important Features") plt.show() ###Output _____no_output_____ ###Markdown Referenced from https://towardsdatascience.com/improving-random-forest-in-python-part-1-893916666cd. Tips to Improve a Machine Learning ModelThere are three general approaches for improving an existing machine learning model:1. Use more (high-quality) data and feature engineering2. Tune the hyperparameters of the algorithm3. Try different algorithms ###Code # Determine which features are most important # List of features for later use feature_list = list(X.columns) # Get numerical feature importances importances = list(base_rf.feature_importances_) # List of tuples with variable and importance feature_importances = [(feature, round(importance, 2)) for feature, importance in zip(feature_list, importances)] # Sort the feature importances by most important first feature_importances = sorted(feature_importances, key = lambda x: x[1], reverse = True) # Print out the feature and importances for pair in feature_importances: print('Variable: {:} Importance: {}'.format(*pair)) # list of x locations for plotting x_values = list(range(len(importances))) # Make a bar chart plt.bar(x_values, importances, orientation = 'vertical', color = 'r', edgecolor = 'k', linewidth = 1.2) # Tick labels for x axis plt.xticks(x_values, feature_list, rotation='vertical') # Axis labels and title plt.ylabel('Importance'); plt.xlabel('Variable'); plt.title('Variable Importances') plt.show() # Arbitrary most import feature cutoff feature_importance_percentage = 0.95 # List of features sorted from most to least important sorted_importances = [importance[1] for importance in feature_importances] sorted_features = [importance[0] for importance in feature_importances] # Cumulative importances cumulative_importances = np.cumsum(sorted_importances) # Make a line graph plt.plot(x_values, cumulative_importances, 'g-') # Draw line at % of importance retained plt.hlines(y = feature_importance_percentage, xmin=0, xmax=len(sorted_importances), color = 'r', linestyles = 'dashed') # Format x ticks and labels plt.xticks(x_values, sorted_features, rotation = 'vertical') # Axis labels and title plt.xlabel('Variable'); plt.ylabel('Cumulative Importance'); plt.title('Cumulative Importances') plt.show() # Find number of features for cumulative importance. print(f'Number of features for {feature_importance_percentage * 100:.0f}% importance:', np.where(cumulative_importances > feature_importance_percentage)[0][0] + 1) # Decrease the number of features from 15 to 11. 
# Extract the names of the most important features important_feature_names = [feature[0] for feature in feature_importances[0:11]] # Find the columns of the most important features important_indices = [feature_list.index(feature) for feature in important_feature_names] # Create training and testing sets with only the important features important_train_features = X_train_cleaned_scaled[:, important_indices] important_test_features = X_test_cleaned_scaled[:, important_indices] # Sanity check on operations print('Important train features shape:', important_train_features.shape) print('Important test features shape:', important_test_features.shape) # Retrain using only the most important features. important_features_rf = RandomForestClassifier(n_estimators=10, random_state=123) important_features_rf.fit(important_train_features, y_train) y_pred = important_features_rf.predict(important_test_features) ac = accuracy_score(y_test, y_pred) f1 = f1_score(y_test, y_pred, average='weighted') print("Most Important Features - Random Forest Classifier") print(f'Accuracy: {ac:.4f}.') print(f'Weighted F1-score: {f1:.4f}.') # Reducing the number of features slightly reduced the accuracy of the models; however, it also made the performance slightly faster. # Since this is a smaller dataset it probably doesn't matter much either way; but in production this could be a very important additional step. cm = confusion_matrix(y_test, y_pred) sns.heatmap(cm, annot=True, fmt='d') plt.show() # Now, using RandomizedSearchCV to help find the best hyperparameters for the model to improve the evaluation metrics. # Used recommendations from article at: https://towardsdatascience.com/hyperparameter-tuning-the-random-forest-in-python-using-scikit-learn-28d2aa77dd74 # Number of trees in random forest n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)] # Number of features to consider at every split max_features = ['auto', 'sqrt'] # Maximum number of levels in tree max_depth = [int(x) for x in np.linspace(10, 110, num = 11)] max_depth.append(None) # Minimum number of samples required to split a node min_samples_split = [2, 5, 10] # Minimum number of samples required at each leaf node min_samples_leaf = [1, 2, 4] # Method of selecting samples for training each tree bootstrap = [True, False] random_grid = {'n_estimators': n_estimators, 'max_features': max_features, 'max_depth': max_depth, 'min_samples_split': min_samples_split, 'min_samples_leaf': min_samples_leaf, 'bootstrap': bootstrap} rf = RandomForestClassifier(random_state=123) clf = RandomizedSearchCV(estimator = rf, param_distributions = random_grid, n_iter = 100, cv = 3, verbose = 2, random_state = 123, n_jobs = -1) clf.fit(X_train_cleaned_scaled, y_train) print("RandomizedSearchCV - Random Forest Classifier") print(f"Best Accuracy Score: {str(clf.best_score_)}.") print(f"Best Parameters: {str(clf.best_params_)}.") # Now using GridSearchCV to concentrate on combinations related to the best parameters above. 
param_grid = { 'n_estimators': [1600, 2100, 3000], 'min_samples_split': [2], 'min_samples_leaf': [1], 'max_features': [3], 'max_depth': [10], 'bootstrap': [False] } rf_cv = RandomForestClassifier(random_state=123) grid_search = GridSearchCV(estimator = rf_cv, param_grid = param_grid, cv = 3, n_jobs = -1, verbose = 2) grid_search.fit(X_train_cleaned_scaled, y_train) print("GridSearchCV - Random Forest Classifier") print(f"Best Accuracy Score: {str(grid_search.best_score_)}.") print(f"Best Parameters: {str(grid_search.best_params_)}.") # Stopping the evaluations here because GridSearchCV returned the same best results as the RandomizedSearchCV. y_pred = grid_search.predict(X_test_cleaned_scaled) ac = accuracy_score(y_test, y_pred) f1 = f1_score(y_test, y_pred, average='weighted') print("Best Random Forest Classifier") print(f'Accuracy: {ac:.4f}.') print(f'Weighted F1-score: {f1:.4f}.') cm_gs = confusion_matrix(y_test, y_pred) sns.heatmap(cm_gs, annot=True, fmt='d') plt.show() # The Receiver Operator Characteristic (ROC) curve is an evaluation metric for binary classification problems. y_pred_prob = grid_search.predict_proba(X_test_cleaned_scaled)[:, 1] print(f"The Area Under the Curve is {round(roc_auc_score(y_test, y_pred_prob), 4)}.") fpr, tpr, thresholds = roc_curve(y_test, y_pred_prob) plt.plot([0, 1], [0, 1]) plt.plot(fpr, tpr) plt.xlabel("False Positive Rate (1 - Specificity)") plt.ylabel("True Positive Rate (Sensitivity)") plt.title("ROC") plt.show() precision, recall, _ = precision_recall_curve(y_test, y_pred) display = PrecisionRecallDisplay(precision=precision, recall=recall) display.plot() plt.title("Precision-Recall Display for Best Random Forest") plt.show() ###Output _____no_output_____
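###Markdown
To connect the tuned model back to the retention question, one option is to score every user with a predicted retention probability and flag the users least likely to be retained for targeted interventions (e.g. promotions or re-engagement campaigns). A minimal sketch, assuming the fitted `grid_search`, the `scaler`, and the cleaned feature frames from above; the 20% cutoff is an illustrative choice:
```python
# score all users, scaled the same way as during training
all_features = pd.concat([X_train_cleaned, X_test_cleaned])
all_scaled = scaler.transform(all_features)
retention_prob = grid_search.predict_proba(all_scaled)[:, 1]

scored = pd.DataFrame({'retention_probability': retention_prob},
                      index=all_features.index)

# flag the 20% of users least likely to be retained for targeted offers
cutoff = scored['retention_probability'].quantile(0.20)
at_risk = scored[scored['retention_probability'] <= cutoff].sort_values('retention_probability')
print(f"{len(at_risk)} users flagged as at-risk (cutoff = {cutoff:.2f})")
at_risk.head()
```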
notebooks/demo_AutoDock_Vina.ipynb
###Markdown How to run this notebook? Install the DockStream environment: conda env create -f environment.yml in the DockStream directory Activate the environment: conda activate DockStreamCommunity Execute jupyter: jupyter notebook Copy the link to a browser Update variables dockstream_path and dockstream_env (the path to the environment DockStream) in the first code block below Caution: Make sure, you have the AutoDock Vina binary available somewhere. `AutoDock Vina` backend demoThis notebook will demonstrate how to **(a)** set up a `AutoDock Vina` backend run with `DockStream`, including the most important settings and **(b)** how to set up a `REINVENT` run with `AutoDock` docking enabled as one of the scoring function components.**Steps:*** a: Set up `DockStream` run 1. Prepare the receptor 2. Prepare the input: SMILES and configuration file (JSON format) 3. Execute the docking and parse the results* b: Set up `REINVENT` run with a `DockStream` component 1. Prepare the receptor (see *a.1*) 2. Prepare the input (see *a.2*) 3. Prepare the `REINVENT` configuration (JSON format) 4. Execute `REINVENT`The following imports / loadings are only necessary when executing this notebook. If you want to use `DockStream` directly from the command-line, it is enough to execute the following with the appropriate configurations:```conda activate DockStreampython /path/to/DockStream/target_preparator.py -conf target_prep.jsonpython /path/to/DockStream/docker.py -conf docking.json``` ###Code import os import json import tempfile # update these paths to reflect your system's configuration dockstream_path = os.path.expanduser("~/Desktop/ProjectData/DockStream") dockstream_env = os.path.expanduser("~/miniconda3/envs/DockStream") vina_binary_location = os.path.expanduser("~/Desktop/ProjectData/foreign/AutoDockVina/autodock_vina_1_1_2_linux_x86/bin") # no changes are necessary beyond this point # --------- # get the notebook's root path try: ipynb_path except NameError: ipynb_path = os.getcwd() # generate the paths to the entry points target_preparator = dockstream_path + "/target_preparator.py" docker = dockstream_path + "/docker.py" # generate a folder to store the results output_dir = os.path.expanduser("~/Desktop/AutoDock_Vina_demo") try: os.mkdir(output_dir) except FileExistsError: pass # generate the paths to the files shipped with this implementation apo_1UYD_path = ipynb_path + "/../data/1UYD/1UYD_apo.pdb" reference_ligand_path = ipynb_path + "/../data/1UYD/PU8.pdb" smiles_path = ipynb_path + "/../data/1UYD/ligands_smiles.txt" # generate output paths for the configuration file, the "fixed" PDB file and the "Gold" receptor target_prep_path = output_dir + "/ADV_target_prep.json" fixed_pdb_path = output_dir + "/ADV_fixed_target.pdb" adv_receptor_path = output_dir + "/ADV_receptor.pdbqt" log_file_target_prep = output_dir + "/ADV_target_prep.log" log_file_docking = output_dir + "/ADV_docking.log" # generate output paths for the configuration file, embedded ligands, the docked ligands and the scores docking_path = output_dir + "/ADV_docking.json" ligands_conformers_path = output_dir + "/ADV_embedded_ligands.sdf" ligands_docked_path = output_dir + "/ADV_ligands_docked.sdf" ligands_scores_path = output_dir + "/ADV_scores.csv" ###Output _____no_output_____ ###Markdown Target preparation`AutoDock Vina` uses the `PDBQT` format for both the receptors and the (individual ligands). First, we will generate the receptor into which we want to dock the molecules. 
This is a semi-automated process and while `DockStream` has an entry point to help you setting this up, it might be wise to think about the details of this process beforehand, including:* Is my target structure complete (e.g. has it missing loops in the area of interest)?* Do I have a reference ligand in a complex (holo) structure or do I need to define the binding cleft (cavity) in a different manner?* Do I want to keep the crystal water molecules, potential co-factors and such or not?This step has to be done once per project and target. Typically, we start from a PDB file with a holo-structure, that is, a protein with its ligand. Using a holo-structure as input is convenient for two reasons:1. The cavity can be specified as being a certain area around the ligand in the protein (assuming the binding mode does not change too much).2. One can align other ligands (often a series with considerable similarity is used in docking studies) to the "reference ligand", potentially improving the performance.![](img/target_preparation_template_method.png)For this notebook, it is assumed that you are able to1. download `1UYD` and2. split it into `1UYD_apo.pdb` and `reference_ligand.pdb` (name is `PU8` in the file), respectively.We will now set up the JSON instruction file for the target preparator that will help us build a receptor suitable for `AutoDock Vina` docking later. We will also include a small section (internally using [PDBFixer](https://github.com/openmm/pdbfixer)) that will take care of minor problems of the input structure, such as missing hetero atoms - but of course you can address these things with a program of your choice as well. We will write the JSON to the output folder in order to load it with the `target_preparator.py` entry point of `DockStream`.Note, that we can use the (optional) `extract_box` block in the configuration to specify the cavity's box (the area where the algorithm will strive to optimize the poses). For this we simply specify a reference ligand and the algorithm will extract the center-of-geometry and the minimum and maximum values for all three axes. This information is printed to the log file and can be used to specify the cavity in the docking step. ###Code # specify the target preparation JSON file as a dictionary and write it out tp_dict = { "target_preparation": { "header": { # general settings "logging": { # logging settings (e.g. which file to write to) "logfile": log_file_target_prep } }, "input_path": apo_1UYD_path, # this should be an absolute path "fixer": { # based on "PDBFixer"; tries to fix common problems with PDB files "enabled": True, "standardize": True, # enables standardization of residues "remove_heterogens": True, # remove hetero-entries "fix_missing_heavy_atoms": True, # if possible, fix missing heavy atoms "fix_missing_hydrogens": True, # add hydrogens, which are usually not present in PDB files "fix_missing_loops": False, # add missing loops; CAUTION: the result is usually not sufficient "add_water_box": False, # if you want to put the receptor into a box of water molecules "fixed_pdb_path": fixed_pdb_path # if specified and not "None", the fixed PDB file will be stored here }, "runs": [ # "runs" holds a list of backend runs; at least one is required { "backend": "AutoDockVina", # one of the backends supported ("AutoDockVina", "OpenEye", ...) 
"output": { "receptor_path": adv_receptor_path # the generated receptor file will be saved to this location }, "parameters": { "pH": 7.4, # sets the protonation states (NOT used in Vina) "extract_box": { # in order to extract the coordinates of the pocket (see text) "reference_ligand_path": reference_ligand_path, # path to the reference ligand "reference_ligand_format": "PDB" # format of the reference ligand } }}]}} with open(target_prep_path, 'w') as f: json.dump(tp_dict, f, indent=" ") # execute this in a command-line environment after replacing the parameters !{dockstream_env}/bin/python {target_preparator} -conf {target_prep_path} !head -n 25 {adv_receptor_path} ###Output REMARK Name = /tmp/tmponfdi39q.pdb REMARK x y z vdW Elec q Type REMARK _______ _______ _______ _____ _____ ______ ____ ATOM 1 N GLU A 1 6.484 28.442 39.441 0.00 0.00 +0.386 N ATOM 2 CA GLU A 1 7.718 28.546 38.611 0.00 0.00 -0.005 C ATOM 3 C GLU A 1 7.625 27.706 37.277 0.00 0.00 +0.199 C ATOM 4 O GLU A 1 7.333 26.478 37.304 0.00 0.00 -0.278 OA ATOM 5 CB GLU A 1 8.951 28.140 39.474 0.00 0.00 -0.048 C ATOM 6 CG GLU A 1 9.355 26.647 39.367 0.00 0.00 +0.048 C ATOM 7 CD GLU A 1 10.138 26.088 40.562 0.00 0.00 +0.356 C ATOM 8 OE1 GLU A 1 11.022 26.816 41.117 0.00 0.00 -0.246 OA ATOM 9 OE2 GLU A 1 9.875 24.900 40.943 0.00 0.00 -0.246 OA ATOM 10 N VAL A 2 7.856 28.355 36.137 0.00 0.00 -0.305 N ATOM 11 CA VAL A 2 8.110 27.634 34.889 0.00 0.00 +0.102 C ATOM 12 C VAL A 2 9.523 27.050 34.954 0.00 0.00 +0.234 C ATOM 13 O VAL A 2 10.499 27.794 35.209 0.00 0.00 -0.274 OA ATOM 14 CB VAL A 2 7.967 28.556 33.636 0.00 0.00 -0.020 C ATOM 15 CG1 VAL A 2 8.234 27.763 32.310 0.00 0.00 -0.061 C ATOM 16 CG2 VAL A 2 6.598 29.245 33.609 0.00 0.00 -0.061 C ATOM 17 N GLU A 3 9.626 25.731 34.766 0.00 0.00 -0.302 N ATOM 18 CA GLU A 3 10.912 25.034 34.705 0.00 0.00 +0.100 C ATOM 19 C GLU A 3 11.363 24.695 33.258 0.00 0.00 +0.234 C ATOM 20 O GLU A 3 10.557 24.225 32.446 0.00 0.00 -0.274 OA ATOM 21 CB GLU A 3 10.872 23.762 35.555 0.00 0.00 -0.016 C ATOM 22 CG GLU A 3 10.774 24.017 37.048 0.00 0.00 +0.051 C ###Markdown This is it, now we have **(a)** fixed some minor issues with the input structure and **(b)** generated a reference ligand-based receptor and stored it in a binary file. For inspection later, we will write out the "fixed" PDB structure (parameter `fixed_pdb_path` in the `fixer` block above). DockingIn this section we consider a case where we have just prepared the receptor and want to dock a bunch of ligands (molecules, compounds) into the binding cleft. Often, we only have the structure of the molecules in the form of `SMILES`, rather than a 3D structure so the first step will be to generate these conformers before proceeding. In `DockStream` you can embed your ligands with a variety of programs including `Corina`, `RDKit`, `OMEGA` and `LigPrep` and use them freely with any backend. 
Here, we will use `Corina` for the conformer embedding.But first, we will have a look at the ligands: ###Code # load the smiles (just for illustrative purposes) # here, 15 moleucles will be used with open(smiles_path, 'r') as f: smiles = [smile.strip() for smile in f.readlines()] print(smiles) ###Output ['C#CCCCn1c(Cc2cc(OC)c(OC)c(OC)c2Cl)nc2c(N)ncnc21', 'CCCCn1c(Cc2cc(OC)c(OC)c(OC)c2)nc2c(N)ncnc21', 'CCCCn1c(Cc2cc(OC)ccc2OC)nc2c(N)ncnc21', 'CCCCn1c(Cc2cccc(OC)c2)nc2c(N)ncnc21', 'C#CCCCn1c(Cc2cc(OC)c(OC)c(OC)c2Cl)nc2c(N)nc(F)nc21', 'CCCCn1c(Cc2ccc(OC)cc2)nc2c(N)ncnc21', 'CCCCn1c(Cc2ccc3c(c2)OCO3)nc2c(N)ncnc21', 'CCCCn1c(Cc2cc(OC)ccc2OC)nc2c(N)nc(F)nc21', 'CCCCn1c(Cc2ccc3c(c2)OCO3)nc2c(N)nc(F)nc21', 'C#CCCCn1c(Cc2cc(OC)ccc2OC)nc2c(N)nc(F)nc21', 'CC(C)NCCCn1c(Cc2cc3c(cc2I)OCO3)nc2c(N)nc(F)nc21', 'CC(C)NCCCn1c(Sc2cc3c(cc2Br)OCO3)nc2c(N)ncnc21', 'CC(C)NCCCn1c(Sc2cc3c(cc2I)OCO3)nc2c(N)ncnc21', 'COc1ccc(OC)c(Cc2nc3nc(F)nc(N)c3[nH]2)c1', 'Nc1nccn2c(NCc3ccccc3)c(Cc3cc4c(cc3Br)OCO4)nc12'] ###Markdown While the embedding and docking tasks in `DockStream` are both specified in the same configuration file, they are handled independently. This means it is perfectly fine to either load conformers (from an `SDF` file) directly or to use a call of `docker.py` merely to generate conformers without doing the docking afterwards.`DockStream` uses the notion of (embedding) "pool"s, of which multiple can be specified and accessed via identifiers. Note, that while the way conformers are generated is highly backend specific, `DockStream` allows you to use the results interchangably. This allows to (a) re-use embedded molecules for multiple docking runs (e.g. different scoring functions), without the necessity to embed them more than once and (b) to combine embeddings and docking backends freely.One important feature is that you can also specify an `align` block for the pools, which will try to align the conformers produced to the reference ligand's coordinates. Alignment is especially useful if your molecules have a large common sub-structure, as it will potentially enhance the results. **Warning:** At the moment, this feature is a bit unstable at times (potentially crashes, if no overlap of a ligand with the reference ligand can be found).As mentioned at the target preparation stage, we need to specify the cavity (binding cleft) or search space for `AutoDock Vina`. As we have extracted the "box" (see print-out of the logging file below) using a reference ligand, this helps us deciding on the dimensions of the search space: ###Code !cat {log_file_target_prep} ###Output _____no_output_____ ###Markdown The three `mean` values will serve as the center of the search space and from the minimum and maximum values in all three dimensions, we decide to use 15 (for `x`) and 10 (for `y` and `z`, respectively). As larger ligands could be used, we will give the algorithm some leeway in each dimension. ###Code # specify the embedding and docking JSON file as a dictionary and write it out ed_dict = { "docking": { "header": { # general settings "logging": { # logging settings (e.g. 
which file to write to) "logfile": log_file_docking } }, "ligand_preparation": { # the ligand preparation part, defines how to build the pool "embedding_pools": [ { "pool_id": "Corina_pool", # here, we only have one pool "type": "Corina", "parameters": { "prefix_execution": "module load corina" # only required, if a module needs to be loaded to execute "Corina" }, "input": { "standardize_smiles": False, "type": "smi", "input_path": smiles_path }, "output": { # the conformers can be written to a file, but "output" is # not required as the ligands are forwarded internally "conformer_path": ligands_conformers_path, "format": "sdf" } } ] }, "docking_runs": [ { "backend": "AutoDockVina", "run_id": "AutoDockVina", "input_pools": ["Corina_pool"], "parameters": { "binary_location": vina_binary_location, # absolute path to the folder, where the "vina" binary # can be found "parallelization": { "number_cores": 4 }, "seed": 42, # use this "seed" to generate reproducible results; if # varied, slightly different results will be produced "receptor_pdbqt_path": [adv_receptor_path], # paths to the receptor files "number_poses": 2, # number of poses to be generated "search_space": { # search space (cavity definition); see text "--center_x": 3.3, "--center_y": 11.5, "--center_z": 24.8, "--size_x": 15, "--size_y": 10, "--size_z": 10 } }, "output": { "poses": { "poses_path": ligands_docked_path }, "scores": { "scores_path": ligands_scores_path } }}]}} with open(docking_path, 'w') as f: json.dump(ed_dict, f, indent=2) # print out path to generated JSON print(docking_path) # execute this in a command-line environment after replacing the parameters !{dockstream_env}/bin/python {docker} -conf {docking_path} -print_scores ###Output -9.2 -9.3 -9.3 -9.5 -9.6 -9.2 -10.1 -9.4 -10.3 -9.5 -9.3 -9.2 -9.2 -9.8 -11.0 ###Markdown Note, that the scores are usually only outputted to a `CSV` file specified by the `scores` block, but that since we have used parameter `-print_scores` they will also be printed to `stdout` (line-by-line).These scores are associated with docking poses (see picture below for a couple of ligands overlaid in the binding pocket).![](img/docked_ligands_overlay_holo.png) Using `DockStream` as a scoring component in `REINVENT`The *de novo* design platform `REINVENT` holds a recently added `DockStream` scoring function component (also check out our collection of notebooks in the [ReinventCommunity](https://github.com/MolecularAI/ReinventCommunity) repository). This means, provided that all necessary input files and configurations are available, you may run `REINVENT` and incorporate docking scores into the score of the compounds generated. Together with `FastROCS`, this represents the first step to integrate physico-chemical 3D information.While the docking scores are a very crude proxy for the actual binding affinity (at best), it does prove useful as a *geometric filter* (removing ligands that obviously do not fit the binding cavity). Furthermore, a severe limitation of knowledge-based predictions e.g. in activity models is the domain applicability. Docking, as a chemical space agnostic component, can enhance the ability of the agent for scaffold-hopping, i.e. to explore novel sub-areas in the chemical space. The `REINVENT` configuration JSONWhile every docking backend has its own configuration (see section above), calling `DockStream`'s `docker.py` entry point ensures that they all follow the same external API. 
Thus the component that needs to be added to `REINVENT`'s JSON configuration (to the `scoring_function`->`parameters` list) looks as follows for `AutoDock Vina`:```{ "component_type": "dockstream", "name": "dockstream", "weight": 1, "specific_parameters": { "transformation": { "transformation_type": "reverse_sigmoid", "low": -12, "high": -8, "k": 0.25 }, "configuration_path": "/docking.json", "docker_script_path": "/docker.py", "environment_path": "/envs/DockStream/bin/python" }}```You will need to update `configuration_path`, `docker_script_path` and the link to the environment, `environment_path` to match your system's configuration. It might be, that the latter two are already set to meaningful defaults, but your `DockStream` configuration JSON file will be specific for each run. How to find an appropriate transformation?We use a *reverse sigmoid* score transformation to bring the numeric, continuous value that was outputted by `DockStream` and fed back to `REINVENT` into a 0 to 1 regime. The parameters `low`, `high` and `k` are critical: their exact value naturally depends on the backend used, but also on the scoring function (make sure, "more negative is better" - otherwise you are looking for a *sigmoid* transformation) and potentially also the project used. The values reported here can be used as rule-of-thumb for an `AutoDock Vina` run. Below is a code snippet, that helps to find the appropriate parameters (excerpt of the `ReinventCommunity` notebook `Score_Transformations`). ###Code # load the dependencies and classes used %run code/score_transformation.py # set plotting parameters small = 12 med = 16 large = 22 params = {"axes.titlesize": large, "legend.fontsize": med, "figure.figsize": (16, 10), "axes.labelsize": med, "axes.titlesize": med, "xtick.labelsize": med, "ytick.labelsize": med, "figure.titlesize": large} plt.rcParams.update(params) plt.style.use("seaborn-whitegrid") sns.set_style("white") %matplotlib inline # set up Enums and factory tt_enum = TransformationTypeEnum() csp_enum = ComponentSpecificParametersEnum() factory = TransformationFactory() # sigmoid transformation # --------- values_list = np.arange(-14, -7, 0.25).tolist() specific_parameters = {csp_enum.TRANSFORMATION: True, csp_enum.LOW: -12, csp_enum.HIGH: -8, csp_enum.K: 0.25, csp_enum.TRANSFORMATION_TYPE: tt_enum.REVERSE_SIGMOID} transform_function = factory.get_transformation_function(specific_parameters) transformed_scores = transform_function(predictions=values_list, parameters=specific_parameters) # render the curve render_curve(title=" Reverse Sigmoid Transformation", x=values_list, y=transformed_scores) ###Output _____no_output_____
mixture-model/mcmc.ipynb
###Markdown Set hyper parameters $\sigma^2_q, K, d$. ###Code # hyper-parameters sigma_q = 0.5 K = 3 d = 2 # read data data_file = "./X.txt" xs = [] with open(data_file, "r") as f: for line in f: x_i = line.split() x_i = [float(x) for x in x_i] xs.append(x_i) xs = torch.tensor(xs, dtype=torch.float) pi_prior_dist = Dirichlet(torch.tensor([1.0 for _ in range(K)])) u_prior_dist = MultivariateNormal(torch.zeros(d), 5.0 * torch.eye(d)) lambda_prior_dist = LogNormal(0.1, 0.1) v_prior_dist = MultivariateNormal(torch.zeros(d), 0.25 * torch.eye(d)) ###Output _____no_output_____ ###Markdown Random initialization of $\theta$ by sampling it from prior distribution $p(\theta)$ ###Code pi = torch.tensor([1/K for _ in range(K)]) us = [u_prior_dist.sample() for _ in range(K)] lambdas = [lambda_prior_dist.sample() for _ in range(K)] vs = [v_prior_dist.sample() for _ in range(K)] ###Output _____no_output_____ ###Markdown Random cluster assignment $z_i$ for $i = 1, \ldots, n$ ###Code m = Categorical(pi) zs = [m.sample() for _ in range(xs.size(0))] zs = torch.tensor(zs, dtype=torch.long) ###Output _____no_output_____ ###Markdown Gibbs-Sampling $z_i \sim \text{Cat}(\alpha_1, \ldots, \alpha_K)$ where $\alpha_k = \frac{\pi_k \mathcal{N}(x_i|\mu_k \lambda_k I_d + v_k v_k^T)}{\sum_{j=1}^K \pi_j \mathcal{N}(x_i|\mu_j, \lambda_j I_d + v_j v_j^T)}$ Gibbs-Sampling $\pi \sim \text{Dir}(\pi; n_1 +1, \ldots, n_K +1)$ MCMC sampling (Metroplois-Hastings algorithm) $\phi_k :=(\mu_k', \lambda_k', v_k')$ with proposal distribution $\phi_k' \sim \mathcal{N}(\mu_k';\mu_k, \sigma^2_q I_d) \log \mathcal{N}(\lambda_k';\log \lambda_k, \sigma^2 I_d)\mathcal{N}(v_k';v_k, \sigma^2_q I_d)$ Acceptance probability is $A(\phi_k' | \phi_k) =\min \Bigg\{1, \frac{p(X,Z,\theta \setminus \phi_k|\phi_k')p(\phi_k') \:q(u_k, \lambda_k, v_k|u_k',\lambda_k', v_k')}{p(X,Z,\theta \setminus \phi_k)p(\phi_k)\:q(u_k', \lambda_k', v_k'| u_k, \lambda_k, v_k)} \Bigg\}$ ###Code num_iters = 300 joint_lls = [] t = trange(num_iters) for epoch in t: for i in range(xs.size(0)): x_i = xs[i] numerators = [] denom = 0.0 for k in range(K): cov = lambdas[k] * torch.eye(d) + torch.ger(vs[k], vs[k]) m = MultivariateNormal(us[k], cov) num = pi[k] * m.log_prob(x_i).exp() numerators.append(num) denom += num alphas = torch.tensor(numerators, dtype=torch.float) / denom # sample z ~ p(z_i|Z\{z_i},X,\theta) z_cat = Categorical(alphas) # update Z zs[i] = z_cat.sample() # sample \pi_k ~ p(\pi|X,Z,\theta \{z}) concentrations = torch.ones(K).float() counter = Counter(zs.tolist()) ns = torch.zeros_like(concentrations) for k, n_k in counter.items(): ns[k] = n_k concentrations = concentrations + ns pi_dir = Dirichlet(concentrations) # update new \pi pi = pi_dir.sample() identity = torch.eye(d) for k in range(K): q_u_prime_dist = MultivariateNormal(us[k], sigma_q**2 * identity) q_lambda_prime_dist = LogNormal(torch.log(lambdas[k]), sigma_q**2) q_v_prime_dist = MultivariateNormal(vs[k], sigma_q**2 * identity) u_prime = q_u_prime_dist.sample() lambda_prime = q_lambda_prime_dist.sample() v_prime = q_v_prime_dist.sample() q_u_dist = MultivariateNormal(u_prime, sigma_q**2 * identity) q_lambda_dist = LogNormal(torch.log(lambda_prime), sigma_q**2) q_v_dist = MultivariateNormal(v_prime, sigma_q**2 * identity) q_num = torch.exp(q_u_dist.log_prob(us[k]) + q_lambda_dist.log_prob(lambdas[k]) + q_v_dist.log_prob(vs[k])) q_denom = torch.exp(q_u_prime_dist.log_prob(u_prime) + q_lambda_prime_dist.log_prob(lambda_prime) + q_v_prime_dist.log_prob(v_prime)) q_ratio = q_num / q_denom p_theta = 1.0 
p_theta_prime = 1.0 x_z_ll = 1.0 x_z_prime_ll = 1.0 # select x of which z = k boolean_mask = (zs == k) xs_k = xs[boolean_mask] cov = lambdas[k] * torch.eye(d) + torch.ger(vs[k], vs[k]) x_normal = MultivariateNormal(us[k], cov) cov_prime = lambda_prime * torch.eye(d) + torch.ger(v_prime, v_prime) x_prime_normal = MultivariateNormal(u_prime, cov_prime) ll_ratio = torch.exp(x_prime_normal.log_prob(xs_k).sum() - x_normal.log_prob(xs_k).sum()) prior_ratio = torch.exp(u_prior_dist.log_prob(u_prime) + lambda_prior_dist.log_prob(lambda_prime) + v_prior_dist.log_prob(v_prime) - u_prior_dist.log_prob(us[k]) - lambda_prior_dist.log_prob(lambdas[k]) - v_prior_dist.log_prob(vs[k])) p_ratio = ll_ratio * prior_ratio A = p_ratio * q_ratio accept_prob = min([1.0, A.item()]) # print("acceptance probability: {:.4f}, A: {:.4f}".format( # accept_prob, A)) if random.random() < accept_prob: us[k] = u_prime lambdas[k] = lambda_prime vs[k] = v_prime joint_ll = pi_prior_dist.log_prob(pi) for k in range(K): joint_ll += (u_prior_dist.log_prob(us[k]) + lambda_prior_dist.log_prob(lambdas[k]) + v_prior_dist.log_prob(vs[k])) for i in range(xs.size(0)): x_i = xs[i] z_i = zs[i] u_i = us[z_i] lambda_i = lambdas[z_i] v_i = vs[z_i] pi_i = pi[z_i] cov = lambda_i * torch.eye(d) + torch.ger(v_i, v_i) x_normal = MultivariateNormal(u_i, cov) joint_ll += x_normal.log_prob(x_i) + torch.log(pi_i) desc = "Epoch: {}, joint log likelihood: {:.4f}".format(epoch, joint_ll) t.set_description(desc) joint_lls.append(joint_ll) x_axis = [i+1 for i in range(num_iters)] plt.plot(x_axis, joint_lls) plt.xlabel("num iterations") plt.ylabel("joint log likelihood") plt.show() plt.scatter(xs[:, 0].numpy(), xs[:, 1].numpy(), c=zs.numpy()) plt.show() ###Output _____no_output_____
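One practical caveat about the Metropolis-Hastings step above: the acceptance ratio is assembled from exponentiated log-probabilities (`.log_prob(...).exp()` and `torch.exp(...)`), which can overflow or underflow once the per-cluster likelihood sums become large. A common remedy, sketched below with generic arguments rather than the exact tensors used above, is to stay in log space and compare against `log(u)`.

```
import math
import random

def mh_accept(log_p_new, log_p_old, log_q_new_given_old, log_q_old_given_new):
    # log A = [log p(x') - log p(x)] + [log q(x | x') - log q(x' | x)]
    log_alpha = (log_p_new - log_p_old) + (log_q_old_given_new - log_q_new_given_old)
    return math.log(random.random()) < min(0.0, log_alpha)
```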
saving spectograms for genres.ipynb
###Markdown  Creating Spectogram image for each audio file and adding to folder ###Code cmap = plt.get_cmap('inferno') data_dir = 'data/music/' for g in genrelist: df = genres.loc[genres['genre'] ==g] for i in df['newid']: full_path = f'{data_dir}{i}' y, sr = librosa.load(full_path, mono=True, duration=5) plt.specgram(y, NFFT=2048, Fs=2, Fc=0, noverlap=128, cmap=cmap, sides='default', mode='default', scale='dB'); plt.axis('off'); plt.savefig(f'data/genres/{g}/{i.split(".mp3")[0]}.png') plt.clf() import os, shutil data_Rock_dir = 'data/genres/Rock' new_dir = 'data/genres/Rock' imgs_Rock = [file for file in os.listdir(data_Rock_dir) if file.endswith('.png')] imgs_Rock[0:10] print('There are', len(imgs_Rock), 'Rock images') train_folder = os.path.join(new_dir, 'train') test_folder = os.path.join(new_dir, 'test') val_folder = os.path.join(new_dir, 'val') os.mkdir(train_folder) os.mkdir(test_folder) os.mkdir(val_folder) train_folder = os.path.join(new_dir, 'train') train = os.path.join(train_folder, 'Rock') test_folder = os.path.join(new_dir, 'test') test_Rock = os.path.join(test_folder, 'Rock') val_folder = os.path.join(new_dir, 'validation') val_Rock = os.path.join(val_folder, 'Rock') imgs = imgs_Rock[:800] for img in imgs: origin = os.path.join(data_Rock_dir, img) destination = os.path.join('data/genres/Rock/train', img) shutil.copyfile(origin, destination) imgs = imgs_Rock[800:900] for img in imgs: origin = os.path.join(data_Rock_dir, img) destination = os.path.join('data/genres/Rock/test', img) shutil.copyfile(origin, destination) imgs = imgs_Rock[900:] for img in imgs: origin = os.path.join(data_Rock_dir, img) destination = os.path.join('data/genres/Rock/val', img) shutil.copyfile(origin, destination) genrelist.remove('Rock') genrelist ###Output _____no_output_____ ###Markdown  Splitting genres into train test split ###Code for genre in genrelist: direc = f'data/genres/{genre}' images = [file for file in os.listdir(direc) if file.endswith('.png')] imgs = images[:800] for img in imgs: origin = os.path.join(direc, img) destination = os.path.join(f'data/genres/{genre}/train', img) shutil.copyfile(origin, destination) imgs = images[800:900] for img in imgs: origin = os.path.join(direc, img) destination = os.path.join(f'data/genres/{genre}/test', img) shutil.copyfile(origin, destination) imgs = images[900:] for img in imgs: origin = os.path.join(direc, img) destination = os.path.join(f'data/genres/{genre}/val', img) shutil.copyfile(origin, destination) ###Output _____no_output_____
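The per-genre copy loops above repeat the same logic for `train`, `test` and `val`; a compact helper that performs the same 800/100/remainder split is sketched below. It assumes the folder layout used above (`data/genres/<genre>/...`) and lets `os.makedirs` create the split folders if they do not exist yet.

```
import os
import shutil

def split_genre(genre, data_root='data/genres', cut_train=800, cut_test=900):
    src = os.path.join(data_root, genre)
    images = sorted(f for f in os.listdir(src) if f.endswith('.png'))
    splits = {'train': images[:cut_train],
              'test': images[cut_train:cut_test],
              'val': images[cut_test:]}
    for split_name, files in splits.items():
        dst = os.path.join(src, split_name)
        os.makedirs(dst, exist_ok=True)
        for img in files:
            shutil.copyfile(os.path.join(src, img), os.path.join(dst, img))

# usage: split_genre('Rock')
```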
evaluacion_de_modelos.ipynb
###Markdown **Evaluating a Machine Learning Model**- It is very important to choose evaluation methods that align with the goals of our application.- Compute the selected evaluation metric for multiple, different models.- Select the model with the best value of the evaluation metric.In short, we should select the model or parameter configuration that optimises the evaluation metrics we consider important for our case/application.**Why is accuracy not enough?**- Suppose we have 2 classes: - Relevant (R): positive class - Not Relevant (N): negative class- Out of 1000 items selected at random, on average: - 1 item is Relevant and has label R - The rest (999 items) are not relevant and have label N- Remember that:\begin{equation*}\frac{correct\ predictions}{total\ instances} = accuracy\end{equation*} **Evaluating classification models** **Confusion matrix**- [reference](https://www.youtube.com/watch?v=r5WIImKV1XA)![img](https://drive.google.com/uc?id=1SRlsCNOHTeJ2SVTPbLvdMVn5RTa8ZspL)
###Code import numpy as np import matplotlib.pyplot as plt from sklearn import svm, datasets from sklearn.model_selection import train_test_split from sklearn.metrics import plot_confusion_matrix # load data iris = datasets.load_iris() X = iris.data y = iris.target class_names = iris.target_names # split the dataset into training and test subsets X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) # create a support vector classifier model classifier = svm.SVC(kernel='linear', C=0.01).fit(X_train, y_train) np.set_printoptions(precision=2) # build one plot with unnormalised data and one with normalised data titles_options = [("Confusion matrix without normalization", None), ("Normalized confusion matrix", 'true')] for title, normalize in titles_options: disp = plot_confusion_matrix(classifier, X_test, y_test, display_labels=class_names, cmap=plt.cm.Blues, normalize=normalize) disp.ax_.set_title(title) print(title) print(disp.confusion_matrix) plt.show()
###Output Confusion matrix without normalization [[13 0 0] [ 0 10 6] [ 0 0 9]] Normalized confusion matrix [[1. 0. 0. ] [0. 0.62 0.38] [0. 0. 1. ]]
###Markdown **ROC curve and area under the curve**- [reference 1](https://www.youtube.com/watch?v=AcbbkCL0dlo)- [reference 2](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc?hl=es-419)
###Code import numpy as np import matplotlib.pyplot as plt from itertools import cycle from sklearn import svm, datasets from sklearn.metrics import roc_curve, auc from sklearn.model_selection import train_test_split from sklearn.preprocessing import label_binarize from sklearn.multiclass import OneVsRestClassifier from scipy import interp from sklearn.metrics import roc_auc_score # load data iris = datasets.load_iris() X = iris.data y = iris.target # binarize the classes so that the output is one-vs-rest y = label_binarize(y, classes=[0, 1, 2]) n_classes = y.shape[1] # add "noise" features to make the classification harder random_state = np.random.RandomState(0) n_samples, n_features = X.shape X = np.c_[X, random_state.randn(n_samples, 200 * n_features)] # split the dataset X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5, random_state=0) # create the classifier model classifier = OneVsRestClassifier(svm.SVC(kernel='linear', probability=True, random_state=random_state)) # train the model y_score = classifier.fit(X_train, y_train).decision_function(X_test) # compute the ROC curve and its area for each class fpr = dict() tpr = dict() roc_auc = dict() for i in range(n_classes): fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i]) roc_auc[i] = auc(fpr[i], tpr[i]) # compute the "micro-average" of the curve and its area fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel()) roc_auc["micro"] = auc(fpr["micro"], tpr["micro"]) # configure the plot plt.figure() lw = 2 plt.plot(fpr[2], tpr[2], color='darkorange', lw=lw, label='ROC curve (area = %0.2f)' % roc_auc[2]) plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc="lower right") plt.show()
###Output _____no_output_____
###Markdown **Evaluating regression models** - [reference](https://www.youtube.com/watch?v=F7xj8H_p288)
###Code import matplotlib.pyplot as plt import numpy as np from sklearn import datasets, linear_model from sklearn.metrics import mean_squared_error, r2_score # load data diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True) # use only 1 feature diabetes_X = diabetes_X[:, np.newaxis, 2] # split the features into training and test subsets diabetes_X_train = diabetes_X[:-20] diabetes_X_test = diabetes_X[-20:] # split the target variable into training and test subsets diabetes_y_train = diabetes_y[:-20] diabetes_y_test = diabetes_y[-20:] # create a linear regression model regr = linear_model.LinearRegression() # train the model with the training data regr.fit(diabetes_X_train, diabetes_y_train) # make predictions on the test data diabetes_y_pred = regr.predict(diabetes_X_test) # compute the mean squared error print('* Mean squared error: %.2f' % mean_squared_error(diabetes_y_test, diabetes_y_pred)) # compute the coefficient of determination (r2) print('* Coefficient of determination: %.2f' % r2_score(diabetes_y_test, diabetes_y_pred)) # plot results plt.scatter(diabetes_X_test, diabetes_y_test, color='black') plt.plot(diabetes_X_test, diabetes_y_pred, color='blue', linewidth=3) plt.xticks(()) plt.yticks(()) plt.show()
###Output _____no_output_____
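Since the opening section argues that accuracy alone can be misleading, it is worth noting that per-class precision, recall and F1 come almost for free from scikit-learn. The snippet below repeats the same iris split and classifier used in the confusion-matrix cell so that it is self-contained:

```
from sklearn import svm, datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=0)
classifier = svm.SVC(kernel='linear', C=0.01).fit(X_train, y_train)
print(classification_report(y_test, classifier.predict(X_test), target_names=iris.target_names))
```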
_notebooks/2021-10-15-plotting-segmentation-from-textgrid-with-librosa.ipynb
###Markdown Plotting segmentation from TextGrid with librosa> "Showing segmentation on a spectrogram"- toc: false- branch: master- comments: true- categories: [textgrid, spectrogram, librosa] ###Code %%capture !pip install seaborn ###Output _____no_output_____ ###Markdown Based on [Onset-based Segmentation with Backtracking](https://musicinformationretrieval.com/onset_segmentation.html) ###Code %matplotlib inline import seaborn import numpy as np, scipy, matplotlib.pyplot as plt, IPython.display as ipd import librosa, librosa.display plt.rcParams['figure.figsize'] = (13, 5) ###Output _____no_output_____ ###Markdown Change to match files: ###Code _TEXTGRID = "" _AUDIO = "" from praatio import textgrid tg = textgrid.openTextgrid(_TEXTGRID, False) x, sr = librosa.load(_AUDIO) ends = [tg.tierDict['phones'].entryList[0].start] + [end.end for end in tg.tierDict['phones'].entryList] phones = [end.label for end in tg.tierDict['phones'].entryList] S = librosa.stft(x, n_fft=2048, hop_length=512) logS = librosa.amplitude_to_db(np.abs(S), ref=np.max) librosa.display.specshow(logS, sr=sr, x_axis='time', y_axis='log') for xc in ends: plt.axvline(x=xc, color='w') ###Output _____no_output_____
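The `phones` list extracted from the TextGrid above is never drawn on the figure. Assuming the variables from the cells above (`ends`, `phones`, `logS`, `sr`) are still in scope, a small extension places each label at the midpoint of its interval (the y-position of 100 Hz is an arbitrary choice that sits inside the log-frequency axis):

```
import librosa.display
import matplotlib.pyplot as plt

librosa.display.specshow(logS, sr=sr, x_axis='time', y_axis='log')
for xc in ends:
    plt.axvline(x=xc, color='w')
for start, stop, label in zip(ends[:-1], ends[1:], phones):
    plt.text((start + stop) / 2, 100, label, color='w', ha='center', va='center')
```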
Getting Started with TensorFlow 2/Week 3/Coding Tutorial.ipynb
###Markdown Validation, regularisation and callbacks Coding tutorials [1. Validation sets](coding_tutorial_1) [2. Model regularisation](coding_tutorial_2) [3. Introduction to callbacks](coding_tutorial_3) [4. Early stopping / patience](coding_tutorial_4) *** Validation sets Load the data ###Code # Load the diabetes dataset from sklearn.datasets import load_diabetes diabetes_dataset = load_diabetes() print(diabetes_dataset['DESCR']) # Save the input and target variables data = diabetes_dataset['data'] targets = diabetes_dataset['target'] # Normalise the target data (this will make clearer training curves) targets = (targets - targets.mean(axis=0)) / targets.std() targets # Split the data into train and test sets from sklearn.model_selection import train_test_split train_data, test_data, train_targets, test_targets = train_test_split(data, targets, test_size=0.1) print(train_data.shape) print(test_data.shape) print(train_targets.shape) print(test_targets.shape) ###Output (397, 10) (45, 10) (397,) (45,) ###Markdown Train a feedforward neural network model ###Code # Build the model from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense def get_model(): model = Sequential([ Dense(128, activation='relu', input_shape=(train_data.shape[1],)), Dense(128, activation='relu'), Dense(128, activation='relu'), Dense(128, activation='relu'), Dense(128, activation='relu'), Dense(128, activation='relu'), Dense(1) ]) return model model = get_model() # Print the model summary model.summary() # Compile the model model.compile(optimizer='adam', loss='mse', metrics=['mae']) # Train the model, with some of the data reserved for validation history = model.fit(train_data, train_targets, epochs=100, validation_split=0.15, batch_size=64, verbose=False) # Evaluate the model on the test set model.evaluate(test_data, test_targets, verbose=2) ###Output 45/1 - 0s - loss: 0.5640 - mae: 0.5068 ###Markdown Plot the learning curves ###Code import matplotlib.pyplot as plt %matplotlib inline # Plot the training and validation loss plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Loss vs. 
epochs') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Training', 'Validation'], loc='upper right') plt.show() ###Output _____no_output_____ ###Markdown *** Model regularisation Adding regularisation with weight decay and dropout ###Code from tensorflow.keras.layers import Dropout from tensorflow.keras import regularizers def get_regularised_model(wd, rate): model = Sequential([ Dense(128, kernel_regularizer=regularizers.l2(wd), activation="relu", input_shape=(train_data.shape[1],)), Dropout(rate), Dense(128, kernel_regularizer=regularizers.l2(wd), activation="relu"), Dropout(rate), Dense(128, kernel_regularizer=regularizers.l2(wd), activation="relu"), Dropout(rate), Dense(128, kernel_regularizer=regularizers.l2(wd), activation="relu"), Dropout(rate), Dense(128, kernel_regularizer=regularizers.l2(wd), activation="relu"), Dropout(rate), Dense(128, kernel_regularizer=regularizers.l2(wd), activation="relu"), Dropout(rate), Dense(1) ]) return model # Re-build the model with weight decay and dropout layers model = get_regularised_model(1e-5, 0.3) # Compile the model model.compile(optimizer='adam', loss='mse', metrics=['mae']) # Train the model, with some of the data reserved for validation history = model.fit(train_data, train_targets, epochs=100, validation_split=0.15, batch_size=64, verbose=False) # Evaluate the model on the test set model.evaluate(test_data, test_targets, verbose=2) ###Output 45/1 - 0s - loss: 0.5285 - mae: 0.5614 ###Markdown Plot the learning curves ###Code # Plot the training and validation loss import matplotlib.pyplot as plt plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Loss vs. epochs') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Training', 'Validation'], loc='upper right') plt.show() ###Output _____no_output_____ ###Markdown *** Introduction to callbacks Example training callback ###Code # Write a custom callback from tensorflow.keras.callbacks import Callback class PredictionCallback(Callback): def on_predict_begin(self, logs=None): print("Starting predicting....") def on_predict_batch_begin(self, batch, logs=None): print(f"Predicting: Starting batch {batch}") def on_predict_batch_end(self, batch, logs=None): print(f"Predicting: Finished batch {batch}") def on_predict_end(self, logs=None): print("Finished predicting!") # Re-build the model model = get_regularised_model(1e-5, 0.3) # Compile the model model.compile(optimizer='adam', loss='mse') ###Output _____no_output_____ ###Markdown Train the model with the callback ###Code # Train the model, with some of the data reserved for validation model.fit(train_data, train_targets, epochs=3, batch_size=128, verbose=False, callbacks=[TrainingCallback()]) # Evaluate the model model.evaluate(test_data, test_targets, verbose=False, callbacks=[TestingCallback()]) # Make predictions with the model model.predict(test_data, verbose=False, callbacks=[PredictionCallback()]) ###Output Starting predicting.... Predicting: Starting batch 0 Predicting: Finished batch 0 Predicting: Starting batch 1 Predicting: Finished batch 1 Finished predicting! 
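The `fit` and `evaluate` calls above reference `TrainingCallback()` and `TestingCallback()`, whose definitions are not shown in this excerpt. Minimal sketches that mirror the structure of `PredictionCallback` (the method names are the standard Keras `Callback` hooks) would be:

```
from tensorflow.keras.callbacks import Callback

class TrainingCallback(Callback):
    def on_train_begin(self, logs=None):
        print("Starting training....")
    def on_epoch_begin(self, epoch, logs=None):
        print(f"Training: Starting epoch {epoch}")
    def on_epoch_end(self, epoch, logs=None):
        print(f"Training: Finished epoch {epoch}")
    def on_train_end(self, logs=None):
        print("Finished training!")

class TestingCallback(Callback):
    def on_test_begin(self, logs=None):
        print("Starting testing....")
    def on_test_batch_begin(self, batch, logs=None):
        print(f"Testing: Starting batch {batch}")
    def on_test_batch_end(self, batch, logs=None):
        print(f"Testing: Finished batch {batch}")
    def on_test_end(self, logs=None):
        print("Finished testing!")
```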
###Markdown *** Early stopping / patience Re-train the models with early stopping ###Code # Re-train the unregularised model unregularized_model = get_model() unregularized_model.compile(optimizer='adam', loss='mse') unreg_history = unregularized_model.fit(train_data, train_targets, epochs=100, validation_split=0.15, batch_size=64, verbose=False, callbacks=[tf.keras.callbacks.EarlyStopping(patience=2)]) # Evaluate the model on the test set unregularized_model.evaluate(test_data, test_targets, verbose=2) # Re-train the regularised model regularized_model = get_regularised_model(1e-8, 0.2) regularized_model.compile(optimzer='adam', loss='mse') reg_history = regularized_model.fit(train_data, train_targets, epochs=100, validation_split=0.15, batch_size=64, verbose=False, callbacks=[tf.keras.callbacks.EarlyStopping(patience=2)]) # Evaluate the model on the test set regularized_model.evaluate(test_data, test_targets, verbose=2) ###Output 45/1 - 0s - loss: 0.4802 ###Markdown Plot the learning curves ###Code # Plot the training and validation loss import matplotlib.pyplot as plt fig = plt.figure(figsize=(12, 5)) fig.add_subplot(121) plt.plot(unreg_history.history['loss']) plt.plot(unreg_history.history['val_loss']) plt.title('Unregularised model: loss vs. epochs') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Training', 'Validation'], loc='upper right') fig.add_subplot(122) plt.plot(reg_history.history['loss']) plt.plot(reg_history.history['val_loss']) plt.title('Regularised model: loss vs. epochs') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Training', 'Validation'], loc='upper right') plt.show() ###Output _____no_output_____
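The early-stopping cells use `tf.keras.callbacks.EarlyStopping` although `import tensorflow as tf` does not appear in this excerpt; an equivalent, import-explicit form of the callback is shown below. `monitor='val_loss'` is the default, and `restore_best_weights` is optional but often useful.

```
from tensorflow.keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(monitor='val_loss', patience=2, restore_best_weights=False)
# used as: model.fit(..., validation_split=0.15, callbacks=[early_stopping])
```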
notebooks/plot_table.ipynb
###Markdown Begin plots ###Code import pandas as pd df = pd.read_csv("results.csv") df = df.sort_values("loss", ascending=False) from collections import defaultdict p_values = {"ImbalancedCIFAR10DataModule": defaultdict(float), "CelebADataModule": defaultdict(float), } from hypothetical.hypothesis import tTest dataset_name = "ImbalancedCIFAR10DataModule" df_ = df[df["dataset"] == dataset_name] for query in df_["query"].unique(): ce_accs = df_.loc[(df_["loss"] == CELOSS) & (df_["query"] == query), ["seed", "acc"]] poly_accs = df_.loc[(df_["loss"] == POLYLOSS) & (df_["query"] == query), ["seed", "acc"]] ce_accs = ce_accs.set_index("seed") poly_accs = poly_accs.set_index("seed") # if query == "IW": # ce_accs = ce_accs.drop(index=4) # seed 4 failed for poly loss, so we also drop cross entropy results acc_pairs = pd.concat([ce_accs.rename(columns={"acc": "ce_acc"}), poly_accs.rename(columns={"acc": "poly_acc"})], axis=1) ttest = tTest(y1=acc_pairs["poly_acc"], y2=acc_pairs["ce_acc"], var_equal=False, paired=True, alternative="greater", alpha=0.05 ) p_value = ttest.test_summary['p-value'] print(f"{query}: {p_value:.5f}") p_values[dataset_name][query] = p_value from hypothetical.hypothesis import tTest dataset_name = "CelebADataModule" df_ = df[df["dataset"] == dataset_name] for query in df_["query"].unique(): ce_accs = df_.loc[(df_["loss"] == CELOSS) & (df_["query"] == query), ["seed", "acc"]] poly_accs = df_.loc[(df_["loss"] == POLYLOSS) & (df_["query"] == query), ["seed", "acc"]] ce_accs = ce_accs.set_index("seed") poly_accs = poly_accs.set_index("seed") acc_pairs = pd.concat([ce_accs.rename(columns={"acc": "ce_acc"}), poly_accs.rename(columns={"acc": "poly_acc"})], axis=1) ttest = tTest(y1=acc_pairs["poly_acc"], y2=acc_pairs["ce_acc"], var_equal=False, paired=True, alternative="greater", alpha=0.05 ) p_value = ttest.test_summary['p-value'] print(f"{query}: {p_value:.6f}") p_values[dataset_name][query] = p_value import numpy as np df_ = df.copy() df_ = df_.drop(columns=["name"]) df_.loc[:, "ES"] = df_.loc[:, "query"].str.contains("ES") df_.loc[:, "interp"] = ~df_.loc[:, "ES"] df_ = df_.melt(id_vars=["dataset", "loss", "acc", "seed", "query"], var_name="group", value_name="tmp") df_ = df_.drop(columns="tmp") df_ = df_.pivot_table(index=["dataset", "group", "query", "seed"], columns="loss", values="acc") def hypothesis_test(group): ttest = tTest(y1=group[POLYLOSS], y2=group[CELOSS], var_equal=False, paired=True, alternative="greater", alpha=0.05 ) p_value = ttest.test_summary['p-value'] return p_value def stderr(x): return x.std() / len(x) ** 0.5 means = df_.groupby(level=[0,1,2]).agg(np.mean).round(3) stderrs = df_.groupby(level=[0,1,2]).agg(stderr).round(3) p_values_df = df_.groupby(level=[0,1,2]).apply(hypothesis_test).round(3) p_values_df = p_values_df.to_frame(name="p-value") table = means.merge(stderrs, left_index=True, right_index=True, suffixes=[" mean accuracy", " standard error"]) table = table.merge(p_values_df, left_index=True, right_index=True, suffixes=["", ""]) table = table.reindex(columns=["Cross Entropy mean accuracy", "Cross Entropy standard error", "Poly-tailed Loss mean accuracy", "Poly-tailed Loss standard error", "p-value"]) new_columns = pd.MultiIndex.from_tuples([("Cross Entropy", "mean"), ("Cross Entropy", "standard error"), ("Poly-tailed Loss", "mean"), ("Poly-tailed Loss", "standard error"), ("", "p-value")]) table.columns = new_columns table print(table.to_latex()) df_.index y1 = df_.loc[("ImbalancedCIFAR10DataModule", "interp", "No IW"), "Cross Entropy"] y2 = 
df_.loc[("ImbalancedCIFAR10DataModule", "interp", "IW-Exp"), "Poly-tailed Loss"] y1.mean() y2.mean() cifar_p_value = hypothesis_test(pd.concat([y1, y2], axis=1)) def get_corners(rectangle): b = rectangle w,h = b.get_width(), b.get_height() # lower left vertex x0, y0 = b.xy # lower right vertex x1, y1 = x0 + w, y0 # top left vertex x2, y2 = x0, y0 + h # top right vertex x3, y3 = x0 + w, y0 + h return (x0,y0), (x1,y1), (x2,y2), (x3,y3) def outline_bracket(left_bar, right_bar, spacing, height): l0, l1, l2, l3 = get_corners(left_bar) r0, r1, r2, r3 = get_corners(right_bar) # lower left b0 = ((l0[0] + l1[0]) / 2, max(l2[1], r2[1]) + spacing) # upper left b1 = (b0[0], max(l2[1] + spacing, r2[1] + spacing) + height) # upper right b2 = ((r0[0] + r1[0]) / 2, b1[1]) # lower right b3 = (b2[0], b0[1]) return b0, b1, b2, b3 ###Output _____no_output_____ ###Markdown Interpolation results ###Code df_interp = df[df["query"].isin(["No IW", "IW", "IW-Exp"])] df_interp import seaborn as sns import matplotlib.pyplot as plt from matplotlib import style plt.rc('text', usetex=True) plt.rc('font', family='times') palette = ['#E24A33', '#348ABD', '#988ED5', '#777777', '#FBC15E', '#8EBA42', '#FFB5B8'] sns.set_palette(palette) fig, axes = plt.subplots(figsize=(20, 5), ncols=2) ax = axes[0] dataset_name = "ImbalancedCIFAR10DataModule" bar = sns.barplot(data=df_interp[df_interp["dataset"] == dataset_name], x="query", y="acc", hue="loss", ax=ax, alpha=0.9, saturation=0.75, order=["No IW", "IW", "IW-Exp"], ci=68, errcolor=(0, 0, 0, 0.9)) sns.stripplot(data=df_interp[df_interp["dataset"] == dataset_name], x="query", y="acc", hue="loss", ax=ax, alpha=0.7, order=["No IW", "IW", "IW-Exp"], dodge=True, edgecolor="black", linewidth=1.7) ax.set(ylim=[0.4, 0.7]) ax.legend().remove() ax.set(title=r"Imbalanced Binary CIFAR10") ax.set(ylabel=r"Test Accuracy") ax.set(xlabel=None) ax.set(xlabel=None) hatches = ["+","+","+", "x", "x", "x"] for i, b in enumerate(bar.patches): b.set_hatch(hatches[i]) b.set_edgecolor((1, 1, 1, 1.)) queries = [label.get_text() for label in ax.get_xticklabels()] for i in range(len(queries)): query = queries[i] p_value = p_values[dataset_name][query] if p_value < 0.05: star = r"$**$" if p_value < 0.005 else r"$*$" left_bar = bar.patches[i] right_bar = bar.patches[i + len(queries)] bracket = outline_bracket(left_bar, right_bar, spacing=0.02, height=0.005) b_xs, b_ys = list(zip(*bracket)) ax.plot(b_xs, b_ys, c="k") ax.text((b_xs[1] + b_xs[2]) / 2, b_ys[1] + 0.005, star, ha="center", va="bottom", color="k", fontsize=30) if cifar_p_value < 0.05: star = r"$**$" if cifar_p_value < 0.005 else r"$*$" left_bar = bar.patches[3] right_bar = bar.patches[2] bracket = outline_bracket(left_bar, right_bar, spacing=0.035, height=0.005) b_xs, b_ys = list(zip(*bracket)) ax.plot(b_xs, b_ys, c="k") ax.text((b_xs[1] + b_xs[2]) / 2, b_ys[1] + 0.005, star, ha="center", va="bottom", color="k", fontsize=30) ax.set(xticklabels=["No IW", "IW ($\overline w$)", r"IW ($\overline w^{3/2})$"]) # Put at the end because p_values dict is named using old keys ax.set_axisbelow(True) dataset_name = "CelebADataModule" ax = axes[1] bar = sns.barplot(data=df_interp[df_interp["dataset"] == dataset_name], x="query", y="acc", hue="loss", ax=ax, alpha=0.9, saturation=0.75, order=["No IW", "IW", "IW-Exp"], ci=68, errcolor=(0, 0, 0, 0.9)) sns.stripplot(data=df_interp[df_interp["dataset"] == dataset_name], x="query", y="acc", hue="loss", ax=ax, alpha=0.7, order=["No IW", "IW", "IW-Exp"], dodge=True, edgecolor="black", linewidth=1) ax.set(ylim=[0.7, 0.9]) 
ax.set(title=r"Subsampled CelebA") ax.set(ylabel=r"Test Accuracy") ax.set(xlabel=None) hatches = ["+","+","+", "x", "x", "x"] for i, b in enumerate(bar.patches): b.set_hatch(hatches[i]) b.set_edgecolor((1, 1, 1, 1.)) queries = [label.get_text() for label in ax.get_xticklabels()] for i in range(len(queries)): query = queries[i] p_value = p_values[dataset_name][query] if p_value < 0.05: star = r"$**$" if p_value < 0.005 else r"$*$" left_bar = bar.patches[i] right_bar = bar.patches[i + len(queries)] bracket = outline_bracket(left_bar, right_bar, spacing=0.014, height=0.005) b_xs, b_ys = list(zip(*bracket)) ax.plot(b_xs, b_ys, c="k") ax.text((b_xs[1] + b_xs[2]) / 2, b_ys[1] + 0.005, star, ha="center", va="bottom", color="k", fontsize=30) ax.set(xticklabels=["No IW", r"IW ($\overline w$)", r"IW ($\overline w^{2})$"]) ax.set_axisbelow(True) handles, labels = ax.get_legend_handles_labels() ax.legend(handles[-2:], labels[-2:], fontsize=20, loc="upper left") fig.suptitle("Trained to Interpolation", fontsize=30, y=1.02) fig.savefig("interpolated.pdf", bbox_inches="tight") ###Output _____no_output_____ ###Markdown Early stopped results ###Code df_es = df[df["query"].isin(["IW", "IW-ES", "IW-Exp-ES"])] import seaborn as sns import matplotlib.pyplot as plt from matplotlib import style plt.rc('text', usetex=True) plt.rc('font', family='times') palette = ['#E24A33', '#348ABD', '#988ED5', '#777777', '#FBC15E', '#8EBA42', '#FFB5B8'] sns.set_palette(palette) fig, axes = plt.subplots(figsize=(20, 5), ncols=2) ax = axes[0] dataset_name = "ImbalancedCIFAR10DataModule" bar = sns.barplot(data=df_es[df_es["dataset"] == dataset_name], x="query", y="acc", hue="loss", ax=ax, alpha=0.9, saturation=0.75, order=["IW", "IW-ES", "IW-Exp-ES"], ci=68, errcolor=(0, 0, 0, 0.9)) sns.stripplot(data=df_es[df_es["dataset"] == dataset_name], x="query", y="acc", hue="loss", ax=ax, alpha=0.7, order=["IW", "IW-ES", "IW-Exp-ES"], dodge=True, edgecolor="black", linewidth=1) ax.set(ylim=[0.4, 0.7]) ax.legend().remove() ax.set(title=r"Imbalanced Binary CIFAR10") ax.set(ylabel=r"Test Accuracy") ax.set(xlabel=None) hatches = ["+","+","+", "x", "x", "x"] for i, b in enumerate(bar.patches): b.set_hatch(hatches[i]) b.set_edgecolor((1, 1, 1, 1.)) queries = [label.get_text() for label in ax.get_xticklabels()] for i in range(len(queries)): query = queries[i] p_value = p_values[dataset_name][query] if p_value < 0.05: star = r"$**$" if p_value < 0.005 else r"$*$" left_bar = bar.patches[i] right_bar = bar.patches[i + len(queries)] bracket = outline_bracket(left_bar, right_bar, spacing=0.02, height=0.005) b_xs, b_ys = list(zip(*bracket)) ax.plot(b_xs, b_ys, c="k") ax.text((b_xs[1] + b_xs[2]) / 2, b_ys[1] + 0.005, star, ha="center", va="bottom", color="k", fontsize=30) ax.set(xticklabels=[r"IW ($\overline w$)", r"IW-ES ($\overline w$)", r"IW-ES ($\overline w^{3/2}$)"]) ax.set_axisbelow(True) dataset_name = "CelebADataModule" ax = axes[1] bar = sns.barplot(data=df_es[df_es["dataset"] == dataset_name], x="query", y="acc", hue="loss", ax=ax, alpha=0.9, saturation=0.75, order=["IW", "IW-ES", "IW-Exp-ES"], ci=68, errcolor=(0, 0, 0, 0.9)) sns.stripplot(data=df_es[df_es["dataset"] == dataset_name], x="query", y="acc", hue="loss", ax=ax, alpha=0.7, order=["IW", "IW-ES", "IW-Exp-ES"], dodge=True, edgecolor="black", linewidth=1) ax.set(ylim=[0.7, 0.9]) ax.set(title=r"Subsampled CelebA") ax.set(ylabel=r"Test Accuracy") ax.set(xlabel=None) hatches = ["+","+","+", "x", "x", "x"] for i, b in enumerate(bar.patches): b.set_hatch(hatches[i]) b.set_edgecolor((1, 1, 
1, 1.)) queries = [label.get_text() for label in ax.get_xticklabels()] for i in range(len(queries)): query = queries[i] p_value = p_values[dataset_name][query] if p_value < 0.05: star = r"$**$" if p_value < 0.005 else r"$*$" left_bar = bar.patches[i] right_bar = bar.patches[i + len(queries)] bracket = outline_bracket(left_bar, right_bar, spacing=0.014, height=0.005) b_xs, b_ys = list(zip(*bracket)) ax.plot(b_xs, b_ys, c="k") ax.text((b_xs[1] + b_xs[2]) / 2, b_ys[1] + 0.005, star, ha="center", va="bottom", color="k", fontsize=30) ax.set(xticklabels=[r"IW ($\overline w$)", r"IW-ES ($\overline w$)", r"IW-ES ($\overline w^{2}$)"]) ax.set_axisbelow(True) handles, labels = ax.get_legend_handles_labels() ax.legend(handles[-2:], labels[-2:], fontsize=20, loc="upper left") fig.suptitle("Early-stopped", fontsize=30, y=1.02) fig.savefig("early-stopped.pdf", bbox_inches="tight") ###Output _____no_output_____
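The one-sided, paired comparisons above rely on `tTest` from the `hypothetical` package. For readers without that dependency, an equivalent check can be done with SciPy; the accuracy arrays below are placeholders, not values from the experiments above.

```
import numpy as np
from scipy import stats

ce_acc = np.array([0.61, 0.63, 0.60, 0.62, 0.64])    # hypothetical per-seed accuracies
poly_acc = np.array([0.64, 0.66, 0.63, 0.64, 0.66])  # hypothetical per-seed accuracies

# paired t-test with the one-sided alternative "poly_acc > ce_acc"
t_stat, p_two_sided = stats.ttest_rel(poly_acc, ce_acc)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
# newer SciPy versions also accept stats.ttest_rel(poly_acc, ce_acc, alternative='greater')
print(t_stat, p_one_sided)
```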
notebook/local_downsample.ipynb
###Markdown Test the downsampling. There seems to be something weird happening with it ###Code import numpy as np # --- centralms --- from centralms import util as UT from centralms import catalog as Cat from centralms import observables as Obvs import corner as DFM import matplotlib as mpl import matplotlib.pyplot as pl mpl.rcParams['text.usetex'] = True mpl.rcParams['font.family'] = 'serif' mpl.rcParams['axes.linewidth'] = 1.5 mpl.rcParams['axes.xmargin'] = 1 mpl.rcParams['xtick.labelsize'] = 'x-large' mpl.rcParams['xtick.major.size'] = 5 mpl.rcParams['xtick.major.width'] = 1.5 mpl.rcParams['ytick.labelsize'] = 'x-large' mpl.rcParams['ytick.major.size'] = 5 mpl.rcParams['ytick.major.width'] = 1.5 mpl.rcParams['legend.frameon'] = False %matplotlib inline subhalo = Cat.CentralSubhalos(nsnap0=15) shcat = subhalo.Read() shcat_down = subhalo.Read(downsampled='20') fig = plt.figure(figsize=(8,4)) sub = fig.add_subplot(111) DFM.hist2d(shcat['halo.m'], shcat['m.sham'], levels=[0.68, 0.95], range=[[10., 15.],[6., 12.2]], color='k', plot_datapoints=False, fill_contours=False, plot_density=False, ax=sub) DFM.hist2d(shcat_down['halo.m'], shcat_down['m.sham'], weights=shcat_down['weights'], levels=[0.68, 0.95], range=[[10., 15.],[6., 12.2]], color='C1', plot_datapoints=False, fill_contours=False, plot_density=False, ax=sub) fig = plt.figure(figsize=(8,4)) sub = fig.add_subplot(111) msmf = Obvs.getMF(shcat['m.sham'], weights=shcat['weights']) sub.plot(msmf[0], msmf[1], c='k', ls='--') msmf = Obvs.getMF(shcat_down['m.sham'], weights=shcat_down['weights']) sub.plot(msmf[0], msmf[1], c='C1') sub.set_xlim([9., 12.]) sub.set_yscale("log") sub.set_ylim([1e-6, 10**-1]) ###Output _____no_output_____
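Since the point of the notebook is to check whether the weighted, downsampled catalogue reproduces the full one, plotting the ratio of the two stellar mass functions makes any systematic offset easier to spot than the overlaid curves above. This sketch assumes `Obvs.getMF` returns the same mass bins for both catalogues and that `shcat`/`shcat_down` from the first cell are still loaded:

```
import matplotlib.pyplot as plt

mf_full = Obvs.getMF(shcat['m.sham'], weights=shcat['weights'])
mf_down = Obvs.getMF(shcat_down['m.sham'], weights=shcat_down['weights'])

fig = plt.figure(figsize=(8, 4))
sub = fig.add_subplot(111)
sub.plot(mf_full[0], mf_down[1] / mf_full[1], c='C1')
sub.axhline(1., color='k', ls='--')
sub.set_xlim([9., 12.])
sub.set_ylabel('downsampled SMF / full SMF')
```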
version_1.6.1/notebooks/examples/2. NDVI.ipynb
###Markdown Normalized Difference Vegentation Index ###Code # Display the matplotlib plots in the notebook %matplotlib inline import xarray as xr import numpy as np import datacube # Supress the warning in a notebook so that they # are not displayed when running cells. import warnings warnings.filterwarnings('ignore') warnings.filterwarnings(action='ignore') import compounds import utils ###Output _____no_output_____ ###Markdown Datacube Query**Ingested Area:** - Latitude: 4,5 - Longitude: -70, -69 - Time: 2018/08/03 - 2018/12/25 ###Code dc = datacube.Datacube(app="Query") xarr = dc.load( product="LS8_OLI_LASRC", latitude=(4.1,4.2), longitude=(-70.0, -69.8), # Time format YYYY-MM-DD time=("2018-01-01","2018-12-31"), measurements=['red','nir','pixel_qa'] ) xarr xarr.red[0].plot() #xarr.red[1].plot() #xarr.red[2].plot() ###Output _____no_output_____ ###Markdown Median CompositeThe median composite acomplish tree processing steps:1. Clouds masking.2. Normalization (optional).3. Arithmetic median.**Parameters:*** dataset: Raster image dataset* product: To apply cloud mask accoring with the product.* bands: Bands for which the median will be computed.* min_valid: Number of pixels valid o perform the composite, if this condition is not met the pixel will be marked as nodata. ###Code dataset = compounds.median_compound(xarr,product="LS8_OLI_LASRC",bands=['red','nir'],min_valid=1) dataset.red.plot() ###Output _____no_output_____ ###Markdown NDVI ###Code xarr0 = dataset.copy(deep=True) # Getting red and nir bands period_red = xarr0["red"].values period_nir = xarr0["nir"].values # If any, read or nir is nan, the pixel will be marked as nodata mask_nan=np.logical_or(np.isnan(period_red), np.isnan(period_nir)) # NDVI computation period_nvdi = (period_nir-period_red) / (period_nir+period_red) # Removing pixels marked as nondata period_nvdi[mask_nan]=np.nan # clip period_nvdi[period_nvdi>1]=np.nan period_nvdi[period_nvdi<-1]=np.nan # Convert np.array to xarray dataset output = utils.get_data_set(period_nvdi,var_name='ndvi',xarr0=xarr0) output.ndvi.plot() ###Output _____no_output_____
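One small robustness note on the NDVI cell: wherever `nir + red` is exactly zero the division produces a warning (and `inf`/`nan` values). A guarded variant of the same computation, assuming `period_red` and `period_nir` from the cell above, is:

```
import numpy as np

denominator = period_nir + period_red
period_nvdi = np.full(denominator.shape, np.nan, dtype=np.float64)
np.divide(period_nir - period_red, denominator, out=period_nvdi, where=(denominator != 0))
```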
LAB/IRIS.ipynb
###Markdown IRIS ###Code import numpy as np import pandas as plt from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.neighbors import KNeighborsClassifier iris = load_iris() ###Output _____no_output_____ ###Markdown Train Test Split ###Code iris_tr, iris_te, y_tr, y_te = train_test_split( iris['data'], iris['target'], train_size = 0.7, test_size = 0.3, random_state = 0) print(f"train size : {iris_tr.shape}") print(f"test size : {iris_te.shape}") ###Output train size : (105, 4) test size : (45, 4) ###Markdown KNN ###Code knn = KNeighborsClassifier(n_neighbors=1) knn.fit(iris_tr, y_tr) pred_knn = knn.predict(iris_te) print(f"KNN_accuracy : {knn.score(iris_te, y_te):.4f}") ###Output KNN_accuracy : 0.9778 ###Markdown Multinomial classification ###Code from sklearn.linear_model import LogisticRegression lr = LogisticRegression().fit(iris_tr, y_tr) pred_lr = lr.predict(iris_te) print(f"Logistic_accuracy : {lr.score(iris_te, y_te):.4f}") ###Output _____no_output_____ ###Markdown DecisionTreeClassifier ###Code from sklearn.tree import DecisionTreeClassifier tree = DecisionTreeClassifier().fit(iris_tr, y_tr) pred_tree = tree.predict(iris_te) print(f"Tree_accuracy : {tree.score(iris_te, y_te):.4f}") ###Output Tree_accuracy : 0.9778 ###Markdown GradientBoostingClassifier ###Code from sklearn.ensemble import GradientBoostingClassifier gbc = GradientBoostingClassifier().fit(iris_tr, y_tr) pred_gb = gbc.predict(iris_te) print(f"GradientBoostingClassifier_accuracy : {gbc.score(iris_te, y_te):.4f}") print(f"KNN_accuracy : {accuracy_score(y_te, pred_knn)}") print(f"Logistic_accuracy : {accuracy_score(y_te, pred_lr)}") print(f"Tree_accuracy : {accuracy_score(y_te, pred_tree)}") print(f"GradientBoostingClassifier_accuracy : {accuracy_score(y_te, pred_gb)}") ###Output KNN_accuracy : 0.9777777777777777 Logistic_accuracy : 0.8888888888888888 Tree_accuracy : 0.9777777777777777 GradientBoostingClassifier_accuracy : 0.9777777777777777 ###Markdown Confusion Matrix ###Code import pandas as pd import matplotlib.pyplot as plt from pandas.tools.plotting import scatter_matrix from sklearn import model_selection from sklearn.metrics import classification_report from sklearn.metrics import confusion_matrix from sklearn.metrics import accuracy_score from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.naive_bayes import GaussianNB from sklearn.svm import SVC classification_report(y_te, pred_knn) confusion_matrix(y_te, pred_knn) confusion_matrix(y_te, pred_lr) confusion_matrix(y_te, pred_tree) confusion_matrix(y_te, pred_gb) ###Output _____no_output_____ ###Markdown Conclusion ###Code print(f"KNN_accuracy : {knn.score(iris_te, y_te):.4f}") print(f"Logistic_accuracy : {lr.score(iris_te, y_te):.4f}") print(f"Tree_accuracy : {tree.score(iris_te, y_te):.4f}") print(f"GradientBoostingClassifier_accuracy : {gbc.score(iris_te, y_te):.4f}") ###Output KNN_accuracy : 0.9778 Logistic_accuracy : 0.8889 Tree_accuracy : 0.9778 GradientBoostingClassifier_accuracy : 0.9778
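Two of the import lines in this notebook are fragile on current library versions: `import pandas as plt` in the first cell is presumably meant to be `import pandas as pd`, and `pandas.tools.plotting` was removed from later pandas releases in favour of `pandas.plotting`. A consolidated import cell that avoids both issues (and imports `accuracy_score` before its first use) would be:

```
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
```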
docs/Examples/example_submission_systems.ipynb
###Markdown Submission Systems Submission system play an important role, if you want to develop your pygromos code. Many times, they are hidden in the Simulation_runner blocks. But maybe you want to develop something, where you need direct access on the submission system? This notebook will give you some examples, how you can use the submission systems.Note that all submission systems are write in the same ways, such you can exchange them quickly. ###Code from pygromos.hpc_queuing.submission_systems import local # this executes your code in your local session. from pygromos.hpc_queuing.submission_systems import lsf # this module can be used to submit to the lsf-queue (e.g. on euler) from pygromos.hpc_queuing.submission_systems import dummy # this is a dummy system, that only prints the commands ###Output _____no_output_____ ###Markdown Local SubmissionThis system executes the commands directly in your current session. This allows you to locally test or execute your code. Maybe if your process needs much more time, you want later to switch to a submission system for job-queueing. ###Code sub_local = local.LOCAL() sub_local.verbose = True bash_command = "sleep 2; echo \"WUHA\"; sleep 2" job_id = sub_local.submit_to_queue(bash_command) job_id #This is a dummy function, to not break the code! sub_local.get_jobs_from_queue("FUN") ###Output Searching ID: FUN ###Markdown LSF SubmissionThe Lsf submission system allows to submit jobs to the IBM LSF-Queueing system.**Careful! This part requires a running LSF-Queueing System on your System**You can submit and kill jobs and arrays to the queue, as well as getting information from the queuing list. ###Code #Construct system: sub_lsf = lsf.LSF(nmpi=1, job_duration = "24:00", max_storage=100) sub_lsf.verbose = True sub_lsf._refresh_job_queue_list_all_s = 0 #you must wait at least 1s to update job_queue list ###Output _____no_output_____ ###Markdown Queue Checking: ###Code sub_lsf.get_queued_jobs() sub_lsf.job_queue_list ###Output Skipping refresh of job list, as the last update is 0:00:00.005036s ago ###Markdown Submission:here you can submit jobs to the queue as bash commands ###Code bash_command = "sleep 5; echo \"WUHA\"; sleep 2" job_name = "Test1" job_id = sub_lsf.submit_to_queue(command=bash_command, jobName=job_name) #search for the just submitted job in the queue sub_lsf.search_queue_for_jobid(job_id) sub_lsf.search_queue_for_jobname("Test1") ###Output _____no_output_____ ###Markdown Submitting multiple jobs ###Code bash_command = "sleep 2; echo \"WUHA\"; sleep 2" job_ids = [] for test in range(3): job_name = "Test"+str(test) job_id = sub_lsf.submit_to_queue(command=bash_command, jobName=job_name) job_ids.append(job_id) sub_lsf.search_queue_for_jobname("Te", regex=True) ###Output _____no_output_____ ###Markdown Killing a jobsRemove a job the job queue ###Code sub_lsf.kill_jobs(job_ids=[job_id]) sub_lsf.search_queue_for_jobname("Te", regex=True) ###Output _____no_output_____ ###Markdown Submission Systems Submission system play an important role, if you want to develop your pygromos code. Many times, they are hidden in the Simulation_runner blocks. But maybe you want to develop something, where you need direct access on the submission system? This notebook will give you some examples, how you can use the submission systems.Note that all submission systems are write in the same ways, such you can exchange them quickly. ###Code from pygromos.simulations.hpc_queuing.submission_systems import local # this executes your code in your local session. 
from pygromos.simulations.hpc_queuing.submission_systems import lsf # this module can be used to submit to the lsf-queue (e.g. on euler) from pygromos.simulations.hpc_queuing.submission_systems import dummy # this is a dummy system, that only prints the commands ###Output _____no_output_____ ###Markdown Local SubmissionThis system executes the commands directly in your current session. This allows you to locally test or execute your code. Maybe if your process needs much more time, you want later to switch to a submission system for job-queueing. ###Code sub_local = local.LOCAL() sub_local.verbose = True bash_command = "sleep 2; echo \"WUHA\"; sleep 2" job_id = sub_local.submit_to_queue(bash_command) job_id #This is a dummy function, to not break the code! sub_local.get_jobs_from_queue("FUN") ###Output Searching ID: FUN ###Markdown LSF SubmissionThe Lsf submission system allows to submit jobs to the IBM LSF-Queueing system.**Careful! This part requires a running LSF-Queueing System on your System**You can submit and kill jobs and arrays to the queue, as well as getting information from the queuing list. ###Code #Construct system: sub_lsf = lsf.LSF(nmpi=1, job_duration = "24:00", max_storage=100) sub_lsf.verbose = True sub_lsf._refresh_job_queue_list_all_s = 0 #you must wait at least 1s to update job_queue list ###Output _____no_output_____ ###Markdown Queue Checking: ###Code sub_lsf.get_queued_jobs() sub_lsf.job_queue_list ###Output Skipping refresh of job list, as the last update is 0:00:00.005036s ago ###Markdown Submission:here you can submit jobs to the queue as bash commands ###Code bash_command = "sleep 5; echo \"WUHA\"; sleep 2" job_name = "Test1" job_id = sub_lsf.submit_to_queue(command=bash_command, jobName=job_name) #search for the just submitted job in the queue sub_lsf.search_queue_for_jobid(job_id) sub_lsf.search_queue_for_jobname("Test1") ###Output _____no_output_____ ###Markdown Submitting multiple jobs ###Code bash_command = "sleep 2; echo \"WUHA\"; sleep 2" job_ids = [] for test in range(3): job_name = "Test"+str(test) job_id = sub_lsf.submit_to_queue(command=bash_command, jobName=job_name) job_ids.append(job_id) sub_lsf.search_queue_for_jobname("Te", regex=True) ###Output _____no_output_____ ###Markdown Killing a jobsRemove a job the job queue ###Code sub_lsf.kill_jobs(job_ids=[job_id]) sub_lsf.search_queue_for_jobname("Te", regex=True) ###Output _____no_output_____
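To illustrate the claim that the submission systems can be exchanged quickly, the sketch below routes the same command through whichever backend is selected by a hypothetical toggle; on a machine without an LSF queue only the `LOCAL` branch will run. The calls mirror the usage shown in the cells above.

```
use_cluster = False  # hypothetical toggle

if use_cluster:
    sub = lsf.LSF(nmpi=1, job_duration="24:00", max_storage=100)
    job_id = sub.submit_to_queue(command="sleep 1; echo 'cluster backend'", jobName="swap_demo")
else:
    sub = local.LOCAL()
    job_id = sub.submit_to_queue("sleep 1; echo 'local backend'")

print(job_id)
```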
Classifiers/Support Vector Machine/Support_Vector_Machine.ipynb
###Markdown Support Vector Machine Importing Libraires ###Code import numpy as np import matplotlib.pyplot as plt import pandas as pd ###Output _____no_output_____ ###Markdown Importing Dataset ###Code from google.colab import files files.upload() ###Output _____no_output_____ ###Markdown Splitting Dataset into X & Y ###Code dataset = pd.read_csv('Social_Network_Ads.csv') X = dataset.iloc[:,:-1].values Y = dataset.iloc[:,-1].values ###Output _____no_output_____ ###Markdown Splitting Dataset into Training & Test Set ###Code from sklearn.model_selection import train_test_split X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.25, random_state = 0) ###Output _____no_output_____ ###Markdown Feature Scaling ###Code from sklearn.preprocessing import StandardScaler feat_scale = StandardScaler() X_train = feat_scale.fit_transform(X_train) X_test = feat_scale.transform(X_test) ###Output _____no_output_____ ###Markdown Training the SVM model on Training Set ###Code from sklearn.svm import SVC classifier = SVC(kernel= 'linear', random_state= 0) classifier.fit(X_train, Y_train) ###Output _____no_output_____ ###Markdown Predicting the Test Set Result ###Code y_pred = classifier.predict(X_test) print(np.concatenate((y_pred.reshape(len(y_pred), 1), Y_test.reshape(len(Y_test), 1)), 1)) ###Output [[0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [1 1] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [1 1] [0 0] [0 0] [1 1] [0 0] [1 1] [0 0] [1 1] [0 0] [0 0] [0 0] [0 0] [0 0] [0 1] [1 1] [0 0] [0 0] [0 0] [0 0] [0 0] [0 0] [1 1] [0 0] [0 0] [0 0] [0 0] [1 1] [0 0] [0 0] [1 1] [0 0] [1 1] [1 1] [0 0] [0 0] [0 0] [1 1] [0 1] [0 0] [0 0] [0 1] [0 0] [0 0] [1 1] [0 0] [0 1] [0 0] [1 1] [0 0] [0 0] [0 0] [0 0] [1 1] [0 0] [0 0] [0 1] [0 0] [0 0] [1 0] [0 0] [1 1] [1 1] [1 1] [1 0] [0 0] [0 0] [1 1] [1 1] [0 0] [1 1] [0 1] [0 0] [0 0] [1 1] [0 0] [0 0] [0 0] [0 1] [0 0] [0 1] [1 1] [1 1]] ###Markdown Making the Confusion Matrix ###Code from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, accuracy_score confusionMatrix = confusion_matrix(Y_test,y_pred) dis = ConfusionMatrixDisplay(confusionMatrix, display_labels=classifier.classes_) print(confusionMatrix) print(accuracy_score(Y_test, y_pred)) dis.plot() plt.show() ###Output [[66 2] [ 8 24]] 0.9 ###Markdown Visulization of Training Set Result ###Code from matplotlib.colors import ListedColormap X_set, y_set = feat_scale.inverse_transform(X_train), Y_train X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25), np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25)) plt.contourf(X1, X2, classifier.predict(feat_scale.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape), alpha = 0.75, cmap = ListedColormap(('red', 'green'))) plt.xlim(X1.min(), X1.max()) plt.ylim(X2.min(), X2.max()) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Support Vector Machine (Training set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() ###Output *c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points. 
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points. ###Markdown Visulization of Test Set Result ###Code from matplotlib.colors import ListedColormap X_set, y_set = feat_scale.inverse_transform(X_test), Y_test X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25), np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25)) plt.contourf(X1, X2, classifier.predict(feat_scale.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape), alpha = 0.75, cmap = ListedColormap(('red', 'green'))) plt.xlim(X1.min(), X1.max()) plt.ylim(X2.min(), X2.max()) for i, j in enumerate(np.unique(y_set)): plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j) plt.title('Support Vector Machine (Test set)') plt.xlabel('Age') plt.ylabel('Estimated Salary') plt.legend() plt.show() ###Output *c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points. *c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
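The repeated matplotlib warnings in the two outputs above come from passing a single RGBA tuple to the `c` argument of `plt.scatter`. Following the warning's own suggestion, using the `color` keyword for the per-class colour silences it; a sketch of the changed call, assuming the variables from the visualisation cells, is:

```
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                color=ListedColormap(('red', 'green'))(i), label=j)
```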
sympsi/examples/BogoliubovFormalism.ipynb
###Markdown Load packages ###Code import numpy as np from sympy import * from sympsi import * from sympy.physics.secondquant import F from sympy.physics.quantum.dagger import Dagger from sympy.physics.quantum import TensorProduct from sympy.physics.quantum import Operator from sympy.matrices import Matrix, banded # This is required to get around a bug in sympy Matrix.adjoint = lambda self: self.T.applyfunc(Dagger) # We start with creating the full pairing Hamiltonian for N sites N = 4 a = symbols(f"a_:{N}") a = [Operator(ai) for ai in a] # We construct the vector containing the annihilation and creation operators for fermions on each site # This follows the explanation at https://topocondmat.org/w1_topointro/1D.html c = Matrix(a + [Dagger(ai) for ai in a]) c # Check whether taking the hermitin conjugate works properly Dagger(c) # We create the Bogoliubov Hamiltonian matrix epsilon = symbols(f"epsilon_:{N}") delta = symbols('Delta') E = banded({0: epsilon}) C = banded({-1: (N-1)*[-delta], 1: (N-1)*[delta]}) G = Matrix([[E,C], [-C,-E]]) G # Next we show that the Bogoliubov matrix is indeed like the original matrix H = Dagger(c)*G*c h1 = H[0] h1 # We will now continue showing that the this big matrix can actually be written as the sum of a smaller matrix of only two sites # To do this we actually first create the system for two sites with an undefined Bogoliubov matrix Nr = 2 ar = symbols(f"a_:{Nr}") ar = [Operator(ai) for ai in ar] cr = Matrix(ar + [Dagger(ai) for ai in ar]) xr = np.array(symbols(f"x_:{2*Nr}:{2*Nr}")).reshape((2*Nr,2*Nr)) Gr = Matrix(xr) # Execute the formulism to get the Hamiltonian in reduced space Hr = Dagger(cr)*Gr*cr h2 = Hr[0] h2 # We will now match the coefficients # First step: create a library of coefficients of the full hamiltonian lib = {e.args[-1]: e/e.args[-1] for e in h1.args} lib # Now, for each term in the reduced Hamiltonian, we will match the element of the terms in the reduced and full Hamiltonian Gr1 = Gr.copy() for i in range(2*Nr): # get the operator of the term term, ci = h2.args[i].args # create a lookup for operators and elements for this particular term store = {e.args[-1]: e/e.args[-1] for e in ncollect(lib[ci], cr).args} # for each operator in the term, match with the full Hamiltonian and substitute for term in ncollect(term, cr).args: xii, ki = term.args ki = store.get(ki, 0) Gr1 = Gr1.subs(xii, ki) Gr1 ###Output _____no_output_____ ###Markdown OK, great! We have a reduced matrix for the first two sites. Next we need to check whether the matrix of the next two sites is structually identical. 
###Code # We will now continue showing that the this big matrix can actually be written as the sum of a smaller matrix of only two sites # To do this we actually first create the system for two sites with an undefined Bogoliubov matrix ar = symbols(f"a_:{N}") ar = ar[2:] ar = [Operator(ai) for ai in ar] cr = Matrix(ar + [Dagger(ai) for ai in ar]) xr = np.array(symbols(f"x_:{2*Nr}:{2*Nr}")).reshape((2*Nr,2*Nr)) Gr = Matrix(xr) # Execute the formulism to get the Hamiltonian in reduced space Hr = Dagger(cr)*Gr*cr h2 = Hr[0] # Now, for each term in the reduced Hamiltonian, we will match the element of the terms in the reduced and full Hamiltonian for i in range(2*Nr): # get the operator of the term term, ci = h2.args[i].args # create a lookup for operators and elements for this particular term store = {e.args[-1]: e/e.args[-1] for e in ncollect(lib[ci], cr).args} # for each operator in the term, match with the full Hamiltonian and substitute for term in ncollect(term, cr).args: xii, ki = term.args ki = store.get(ki, 0) Gr = Gr.subs(xii, ki) Gr ###Output _____no_output_____ ###Markdown It is!We can now proceed and check whether the sum of these two is actually the same as the full Hamiltonian ###Code # We will now continue showing that the this big matrix can actually be written as the sum of a smaller matrix of only two sites # To do this we actually first create the system for two sites with an undefined Bogoliubov matrix result = Add() for i in range(0, N-1, 1): ar = a[i:i+2] #print(ar) cr = Matrix(ar + [Dagger(ai) for ai in ar]) xr = np.array(symbols(f"x_:{4}:{4}")).reshape((4,4)) Gr = Matrix(xr) # Execute the formulism to get the Hamiltonian in reduced space Hr = Dagger(cr)*Gr*cr h2 = Hr[0] # Now, for each term in the reduced Hamiltonian, we will match the element of the terms in the reduced and full Hamiltonian for i in range(2*Nr): # get the operator of the term term, ci = h2.args[i].args # create a lookup for operators and elements for this particular term store = {e.args[-1]: e/e.args[-1] for e in ncollect(lib[ci], cr).args} # for each operator in the term, match with the full Hamiltonian and substitute for term in ncollect(term, cr).args: xii, ki = term.args ki = store.get(ki, 0) Gr = Gr.subs(xii, ki) result += (Dagger(cr)*Gr*cr)[0] result H = Dagger(c)*G*c h1 = H[0] h1 h1.expand() result.expand() ###Output _____no_output_____ ###Markdown The two are nearly identical. The center energy terms are bing counted twice... ###Code # When we remove the double counted energies teh two results are equal... out = result.copy() for i in [1, 2]: out += epsilon[i]*a[i]*Dagger(a[i]) - epsilon[i]*Dagger(a[i])*a[i] h1.expand() == out.expand() ###Output _____no_output_____ ###Markdown Programmetic construction of the BdG matrixConsidering the construction of the Bogoliubov de Gennes Matrix is procedural we should be able to construct it using some programmatic procedures. The Hamiltonian that we will take here is$\Delta \sum_{k=0}^{N} \big(c_{k+1}^{\dagger}c_{k}^{\dagger} + \mathrm{h.c.}\big)- \mu \sum_{k=0}^{N} \big( c_{k}^{\dagger}c_{k} + \mathrm{h.c.} \big) - \tau \sum_{k=0}^{N} \big(c_{k+1}^{\dagger}c_{k} + \mathrm{h.c.} \big)$ There is a conveniant way to construct these summations in Sympy using the Sum functions. This shows a nice mathematical representation of the sum until you actually execute the summation. Unfortunately the Indexed object are always commutation despite setting those to non-commutative. These makes it useles for operators. 
###Code # Checking the commutative behavior of Indexed objects Indexed.is_commutative = False c = IndexedBase('c', commutative=False) # This is not what we expect -> these commute c[0]*c[1] + (c[1]*c[0]) # This is what we are supposed to get c = symbols('c:2', commutative=False) c[0]*c[1] + c[1]*c[0] ###Output _____no_output_____ ###Markdown Because of this we need to create a Sum function ourselves. We will use it to construct the Hamiltonian. ###Code # Simple summation function def Sum(fnc, n, N): result = Add() for i in range(n+N): result += fnc(i) return result # Create the non-commuting operators n = 2 a = symbols(f"a_:{n}") a = [Operator(ai) for ai in a] # Construct the Hamiltonian H0 = -Sum(lambda i: Dagger(a[i])*a[i] , 0, n) + \ -tau* Sum(lambda i: Dagger(a[i+1])*a[i] + Dagger(a[i])*a[i+1] , 0, n-1) + \ delta*Sum(lambda i: Dagger(a[i]*a[i+1]) + a[i+1]*a[i] , 0, n-1) # NOTE: we already limit the index of summations that include i+1 terms to n-1 # Otherwise we would need to include an extra operator and later remove the terms containing that extra operator with # H = drop_terms_containing(H0.expand(), [a[-1], Dagger(a[-1])]) # a = a[:-1] H0 ###Output _____no_output_____ ###Markdown So far we have just created the original Hamiltonian. To construct a BdG Hamiltonian we need to use some anticommutation identities and rewrite the Hamiltonian. The $\mu$-term: $a_{k}^{\dagger}a_{k} + a_{k}a_{k}^{\dagger} = 1 \to a_{k}^{\dagger}a_{k} = \frac{1}{2}a_{k}^{\dagger}a_{k} + \frac{1}{2}(1-a_{k}a_{k}^{\dagger})$ The $\Delta$-term: $a_{k+1}^{\dagger}a_{k}^{\dagger} + a_{k}^{\dagger}a_{k+1}^{\dagger} = 0 \to a_{k+1}^{\dagger}a_{k}^{\dagger} = \frac{1}{2}a_{k+1}^{\dagger}a_{k}^{\dagger} - \frac{1}{2}a_{k}^{\dagger}a_{k+1}^{\dagger}$ $a_{k+1} a_{k} + a_{k} a_{k+1} = 0 \to a_{k+1}a_{k} = \frac{1}{2}a_{k+1}a_{k}- \frac{1}{2}a_{k}a_{k+1}$ The $\tau$-term: $a_{k+1}^{\dagger}a_{k} + a_{k}a_{k+1}^{\dagger} = 0 \to a_{k+1}^{\dagger}a_{k} = \frac{1}{2}a_{k+1}^{\dagger}a_{k} - \frac{1}{2}a_{k}a_{k+1}^{\dagger}$ $a_{k}^{\dagger}a_{k+1} + a_{k+1}a_{k}^{\dagger} = 0 \to a_{k}^{\dagger}a_{k+1} = \frac{1}{2}a_{k}^{\dagger}a_{k+1} - \frac{1}{2}a_{k+1}a_{k}^{\dagger}$ After making these replacements we will be able to construct the BdG Hamiltonian ###Code # To replace them we need a trick, because Sympy's subs method does the replacements sequentially, which means # parts that are added during early replacements would be replaced again by later replacements. # The trick is to do the replacement in two stages, using dummies in between. submap = [[Dagger(a[i])*a[i], 1/2*(Dagger(a[i])*a[i] + 1 - a[i]*Dagger(a[i]))] for i in range(2)] + \ [[Dagger(a[i]*a[j]), 1/2*(Dagger(a[j]*a[i]) - Dagger(a[i]*a[j]))] for i in range(2) for j in range(2)] + \ [[a[i]*a[j], 1/2*(a[i]*a[j] - a[j]*a[i])] for i in range(2) for j in range(2)] + \ [[Dagger(a[i+1])*a[i], 1/2*(Dagger(a[i+1])*a[i] - a[i]*Dagger(a[i+1]))] for i in range(2-1)] + \ [[Dagger(a[i])*a[i+1], 1/2*(Dagger(a[i])*a[i+1] - a[i+1]*Dagger(a[i]))] for i in range(2-1)] H = H0.expand() for item in submap: key = Dummy() old, new = item H = H.subs(old, key) item[0] = key H = H.subs(submap) H.expand() # We construct the operator vectors! c = Matrix(a + [Dagger(ai) for ai in a]) Dagger(c) # Before we can create the BdG matrix we first collect the Hamiltonian in terms of its leading operators expr = ncollect(H.expand()) expr # This routine actually creates the BdG matrix Hamiltonian. # First we express the Hamiltonian in terms of the leading (!!) operators. expr = ncollect(H.expand()) # Next we collect the terms belonging to each of the elements in the operator vector Dagger(c). 
# Those select the rows in the matrix G = Matrix.zeros(len(c)) expr = ncollect(H.expand()) for i, l_ops in enumerate(Dagger(c)): h = get_coefficient(expr, l_ops) # For each term, we again match the operators but now to the operator vector c (!!) # This selects the columns. for j, r_ops in enumerate(c): coeff = get_coefficient(h, r_ops) G[i,j] = coeff # This gives us the BdG Hamiltonian in matrix form G ###Output _____no_output_____ ###Markdown The original Hamiltonian in expanded sum-form can be obtained again (up to constant terms) using $\mathrm{C^{\dagger} G C}$, where $\mathrm{C^{\dagger}} = [a_0^{\dagger}, a_1^{\dagger}, ... a_N^{\dagger}, a_0, a_1, ..., a_N]$. ###Code # Note the constant term '1' is missing ncollect((Dagger(c)*G*c)[0].expand()) ###Output _____no_output_____ ###Markdown Alternative expression using Pauli matrices Finally, following https://topocondmat.org/w1_topointro/1D.html it should also be possible to express the BdG Hamiltonian using Pauli matrices. Of course, this does not help you actually construct the Hamiltonian; it is just a nice way to write it. ###Code # We define the Pauli matrices sigma0 = Matrix([[1,0],[0,1]]) sigmax = Matrix([[0,1],[1,0]]) sigmay = Matrix([[0,-1j],[1j,0]]) sigmaz = Matrix([[1,0],[0,-1]]) ###Output _____no_output_____ ###Markdown The approach makes use of a 'selecting' vector $|n\rangle$ of size n consisting of a single non-zero element, which is 1, i.e. $|n\rangle = [..., 1,...]^{T}$, e.g. $|2\rangle = [0, 0, 1, ....]$. Note, we index from 0 here. ###Code # Convenience function to create those special 'selecting vectors' def nvec(N, k = 0): v = Matrix(N*[0]) v[k] = 1 return v nvec(4, 1) ###Output _____no_output_____ ###Markdown At the website they give the following identity: $\mathrm{C^{\dagger}} \sigma_z |n\rangle \langle n| \mathrm{C} = 2 a_n^{\dagger} a_n - 1$ This seems weird because $\sigma_z$ is of size [2,2] but $|n\rangle \langle n|$ will be of size [n,n]. Moreover, $\mathrm{C}$ is of size [2n,1]. To resolve this we need to take into account the tensor product of $\sigma_z$ and $|n\rangle \langle n|$, which will be of size [2n, 2n]: $\mathrm{C^{\dagger}} \Big( \sigma_z \otimes |n\rangle \langle n| \Big) \mathrm{C} = 2 a_n^{\dagger} a_n - 1$ ###Code # We check the identity # We just reuse the c-vector from before n0 = nvec(int(len(c)/2), 0) eq = Dagger(c) * TensorProduct(mu*sigmaz, n0*Dagger(n0)) * c eq[0] ###Output _____no_output_____ ###Markdown Taking into account the anticommutation relation $a_0^{\dagger}a_0 + a_0 a_0^{\dagger} = 1$, this identity is indeed correct. Next we can check the formation of the BdG Hamiltonian, i.e. ###Code n1 = nvec(2, 1) TensorProduct(tau*sigmaz + 1j*delta*sigmay, n0*Dagger(n1)) Dagger(TensorProduct(tau*sigmaz + 1j*delta*sigmay, n0*Dagger(n1))) # Let's construct the whole thing result = Matrix.zeros(2*n, 2*n) #Add() for ni in range(n): n0 = nvec(n, ni) result += -0.5*TensorProduct(mu*sigmaz, n0*Dagger(n0)) if ni < n-1: n1 = nvec(n, ni+1) result += -0.5*TensorProduct(tau*sigmaz - 1j*delta*sigmay, n0*Dagger(n1)) result += -0.5*TensorProduct(Dagger(tau*sigmaz - 1j*delta*sigmay), n1*Dagger(n0)) result # The Hamiltonian in sum-form can again be obtained by multiplying with the operator vector. ncollect((Dagger(c)*result*c)[0].expand()) ###Output _____no_output_____
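###Markdown As a final, hedged sanity check (not part of the original derivation), we can substitute sample numerical values for the symbols `mu`, `tau` and `delta` used above into the Pauli-matrix construction `result` and diagonalize it numerically. For a BdG Hamiltonian the spectrum should come in $\pm E$ pairs, i.e. be symmetric around zero; the chosen values are illustrative only.
###Code
# Hedged numerical check: substitute sample parameter values and diagonalize with numpy.
# Particle-hole symmetry of the BdG form should give eigenvalues in +/-E pairs.
numeric_bdg = np.array(result.subs({mu: 0.5, tau: 1.0, delta: 1.0}).evalf().tolist(), dtype=complex)
print(np.sort(np.linalg.eigvals(numeric_bdg).real))
###Output
_____no_output_____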
examples/Example 4 - Optimize a noisy multivariate system.ipynb
###Markdown Example 4: Optimize a noisy multivariate systemBayesian optimization also works for stochastic response functions. Here we illustrate this for a noisy multivariate system by determining the maximal value. Framework approachWe will solve the problem in two different ways:1. using the closed-loop approach of the `.auto`-method2. using the iterative optimization approach of the framework which requires using the methods `.ask` and `.tell`. This approach allows for iterative optimization and optimization of any callable function, known or otherwise.The optimization process can be stopped after any number of iterations. Technical noteInstallation of `torch` and `torchvision` (required dependencies) cannot be bundled as part of the `creative_project` installable. This is unfortunate, but a known issue it seems. Therefore these must be installed first, before installing `creative_project`. Get started: Import libraries ###Code # Preamble # Install torch and torchvision. Use this link to identify the right versions to install on your system # depending on configuration: https://pytorch.org/get-started/locally/ # #pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html # # Install creative_project from github (will require authentication with password) #pip install --user https://github.com/svedel/kre8_core/ ! pip install --user greattunes import torch import matplotlib.pyplot as plt import numpy as np from greattunes import TuneSession import pandas as pd %matplotlib inline ###Output _____no_output_____ ###Markdown Set up the problem to be solvedHere create a noisy multivariate function defined below for the input vector $\mathbf{x} = (x_0, x_1)$$$f(\mathbf{x}) = - \frac{ (6 x_0 -2)^2 (6 x_1 -2)^2 \, \sin{(12 x_0- 4)} \sin{(12 x_1 - 4)} }{250} + \frac{1}{2 \sigma^2 \pi} \mathrm{e}^{- \left(\frac{x_0 - 0.5}{\sigma} \right)^2 - \left( \frac{x_1 - 0.5}{\sigma} \right)^2 } + \xi \quad , \quad x_i \in [0; 1], \ i=0,1$$where $\xi$ is random number drawn from a uniform distribution (range $[0; 1]$) and $\sigma = 0.1$. This function has its average global maximum at $\mathbf{x}^* = (0.5,0.5)$. 
###Code # define the function def f2_dup(x): covar0, covar1 = np.meshgrid(x["covar0"].values, x["covar1"].values) sigma = 0.1 func_raw = (-(6 * covar0 - 2) ** 2 * np.sin(12 * covar0 - 4))*(-(6 * covar1 - 2) ** 2 * np.sin(12 * covar1 - 4))/250 + 1/np.sqrt(2*sigma**2*np.pi) * np.exp(-((covar0-0.5)/sigma)**2 - ((covar1-0.5)/sigma)**2 ) noise = torch.rand(covar0.shape).numpy() return np.add(func_raw, noise) ###Output _____no_output_____ ###Markdown Plot the response function ###Code # create an easy way to generate a surface plot of the function def f2_dup_plot(x_vec): x0, x1 = np.meshgrid(x_vec, x_vec) output = f2_dup(pd.DataFrame({"covar0": x_vec, "covar1": x_vec})) return x0, x1, output # generate the data for the figure x = np.linspace(0,1) x0, x1, output = f2_dup_plot(x) fig = plt.figure(figsize=(8, 8)) ax = fig.gca(projection='3d') surf = ax.plot_surface(x0, x1, output) ax.set_xlabel("Covariate covar_0") ax.set_ylabel("Covariate covar_1") ax.set_zlabel("Response f_2") plt.show() ###Output _____no_output_____ ###Markdown Define the range for the covariates ###Code # define the range of interest x0_init = 0.2 x1_init = 0.8 covars2d = [(x0_init, 0, 1), (x1_init, 0, 1)] ###Output _____no_output_____ ###Markdown Solution 1: Closed-loop solution approach using `.auto` methodInstantiate the `TuneSession` class and solve the problem ###Code # initialize class instance cc = TuneSession(covars=covars2d) # number of iterations max_iter = 20 # run the auto-method cc.auto(response_samp_func=f2_dup, max_iter=max_iter) ###Output _____no_output_____ ###Markdown **PLOT THE PATH TO OPTIMALITY**The best solution ###Code # run current_best method cc.current_best() ###Output _____no_output_____ ###Markdown Solution 2: Iterative solution using `.ask` and `.tell` methodsInstantiate the `TuneSession` class and solve the problem. In this case, we need to write our own loop to iterate. Notice that the `covars` and `response` variables are converted to `torch` tensors of size $1 \times \mathrm{\covariates}$ to store them in the instantiated class, where they are used for retraining the model at each iteration. ###Code from greattunes.data_format_mappings import tensor2pretty_covariate # initialize the class instance cc2 = TuneSession(covars=covars2d) # run the solution for i in range(max_iter): # generate candidate cc2.ask() # sample response # tensor2pretty_covariate maps between the backend dataformat used by 'proposed_X' to the pandas-based format consumed by the response function # the attribute 'covar_details' keeps information that maps backend and pandas ("pretty") dataformats covars = tensor2pretty_covariate(train_X_sample=cc2.proposed_X[-1].reshape(1,2), covar_details=cc2.covar_details) response = pd.DataFrame({"Response": [f2_dup(covars)]}) # report response cc2.tell(covar_obs=covars, response_obs=response) ###Output _____no_output_____ ###Markdown Best guess after solving ###Code # run current_best method cc2.current_best() ###Output _____no_output_____
modules/python-loops/3-exercise-introduction-to-while-loops.ipynb
###Markdown While loopsWhile loops in Python allow you to run code an unknown number of times. They examine a Boolean condition, and if it's true the code inside the loop will run. This is very useful for situations like prompting a user for values.You are creating an application to prompt a user for a list of planets. In a later exercise you will add the code to display the list. For now you will just create the code for prompting the user.Start by adding two variables - one for the input from the user named `new_planet`, and another for the list of planets named `planets`. ###Code new_planet = '' planets = [] ###Output _____no_output_____ ###Markdown Create the while loopWith the variables created, you will create the `while` loop. The `while` loop will run while `new_planet` is **not** set to **done**.Inside the loop, you will check if `new_planet` contains a value. This is a quick way to see if the user has entered a value. If they have, you will `append` it to `planets`.Finally, you will use `input` to prompt the user for a new planet or to use **done** if they are done. You will store the value from `input` in `new_planet`. ###Code while new_planet.lower() != 'done': if new_planet: planets.append(new_planet) new_planet = input('Enter a new planet ') ###Output _____no_output_____
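###Markdown One possible way to display the collected planets (a later exercise covers this in more detail) is a simple `for` loop over the list:
###Code
# Display each planet that was entered
for planet in planets:
    print(planet)
###Output
_____no_output_____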
lijin-THU:notes-python/04-scipy/04.09-linear-algbra.ipynb
###Markdown Linear algebra In `numpy` and `scipy`, the module responsible for the linear algebra computations is called `linalg`. ###Code import numpy as np import numpy.linalg import scipy as sp import scipy.linalg import matplotlib.pyplot as plt from scipy import linalg %matplotlib inline ###Output _____no_output_____ ###Markdown numpy.linalg VS scipy.linalg On the one hand, `scipy.linalg` contains all the functions in `numpy.linalg` and also many functions that `numpy.linalg` does not have. On the other hand, `scipy.linalg` guarantees that these functions are accelerated with BLAS/LAPACK, while in `numpy.linalg` this acceleration is optional. Therefore, we generally use `scipy.linalg` rather than `numpy.linalg`. We can take a quick look at the difference between the two modules: ###Code print "number of items in numpy.linalg:", len(dir(numpy.linalg)) print "number of items in scipy.linalg:", len(dir(scipy.linalg)) ###Output number of items in numpy.linalg: 36 number of items in scipy.linalg: 115 ###Markdown numpy.matrix VS 2D numpy.ndarray The basic object of linear algebra is the matrix, and there are two main ways to represent a matrix: `numpy.matrix` and 2D `numpy.ndarray`. numpy.matrix `numpy.matrix` is a matrix class that provides some convenient matrix operations: - it supports a MATLAB-like syntax for creating matrices - matrix multiplication uses the `*` operator by default - `.I` is the inverse, `.T` is the transpose. A matrix can be created with `mat` or `matrix`: ###Code A = np.mat("[1, 2; 3, 4]") print repr(A) A = np.matrix("[1, 2; 3, 4]") print repr(A) ###Output matrix([[1, 2], [3, 4]]) matrix([[1, 2], [3, 4]]) ###Markdown Transpose and inverse: ###Code print repr(A.I) print repr(A.T) ###Output matrix([[-2. , 1. ], [ 1.5, -0.5]]) matrix([[1, 3], [2, 4]]) ###Markdown Matrix multiplication: ###Code b = np.mat('[5; 6]') print repr(A * b) ###Output matrix([[17], [39]]) ###Markdown 2D numpy.ndarray Although `numpy.matrix` has the advantages above, it is generally not recommended; 2D `numpy.ndarray` objects are used instead, which avoids unnecessary confusion. We can reproduce the operations above using `array`: ###Code A = np.array([[1,2], [3,4]]) print repr(A) ###Output array([[1, 2], [3, 4]]) ###Markdown Inverse and transpose: ###Code print repr(linalg.inv(A)) print repr(A.T) ###Output array([[-2. , 1. ], [ 1.5, -0.5]]) array([[1, 3], [2, 4]]) ###Markdown Matrix multiplication: ###Code b = np.array([5, 6]) print repr(A.dot(b)) ###Output array([17, 39]) ###Markdown Element-wise multiplication: ###Code print repr(A * b) ###Output array([[ 5, 12], [15, 24]]) ###Markdown The operations in `scipy.linalg` work on both types of objects, with no difference. Basic operations Inverse The inverse $\mathbf{B}$ of a matrix $\mathbf{A}$ satisfies $\mathbf{BA}=\mathbf{AB}=I$, written $\mathbf{B} = \mathbf{A}^{-1}$. In fact, we have already seen the inverse operation; `linalg.inv` computes the inverse of an invertible matrix: ###Code A = np.array([[1,2],[3,4]]) print linalg.inv(A) print A.dot(scipy.linalg.inv(A)) ###Output [[-2. 1. ] [ 1.5 -0.5]] [[ 1.00000000e+00 0.00000000e+00] [ 8.88178420e-16 1.00000000e+00]] ###Markdown Solving linear systems For example, the system of equations $$\begin{eqnarray*} x + 3y + 5z & = & 10 \\2x + 5y + z & = & 8 \\2x + 3y + 8z & = & 3\end{eqnarray*}$$ has the solution $$\begin{split}\left[\begin{array}{c} x\\ y\\ z\end{array}\right]=\left[\begin{array}{ccc} 1 & 3 & 5\\ 2 & 5 & 1\\ 2 & 3 & 8\end{array}\right]^{-1}\left[\begin{array}{c} 10\\ 8\\ 3\end{array}\right]=\frac{1}{25}\left[\begin{array}{c} -232\\ 129\\ 19\end{array}\right]=\left[\begin{array}{c} -9.28\\ 5.16\\ 0.76\end{array}\right].\end{split}$$ We can solve the system with `linalg.solve`, or first compute the inverse and then multiply; of the two, `solve` is faster. ###Code import time A = np.array([[1, 3, 5], [2, 5, 1], [2, 3, 8]]) b = np.array([10, 8, 3]) tic = time.time() for i in xrange(1000): x = linalg.inv(A).dot(b) print x print A.dot(x)-b print "inv and dot: {} s".format(time.time() - tic) tic = time.time() for i in xrange(1000): x = linalg.solve(A, b) print x print A.dot(x)-b print "solve: {} s".format(time.time() - tic) ###Output [-9.28 5.16 0.76] [ 0.00000000e+00 -1.77635684e-15 -8.88178420e-16] inv and dot: 0.0353579521179 s [-9.28 5.16 0.76] [ 0.00000000e+00 -1.77635684e-15 -1.77635684e-15] solve: 0.0284671783447 s ###Markdown Computing the determinant The determinant of a square matrix is $$\left|\mathbf{A}\right|=\sum_{j}\left(-1\right)^{i+j}a_{ij}M_{ij}.$$ where $a_{ij}$ is the element of $\mathbf{A}$ in row $i$, column $j$, and $M_{ij}$ is the determinant of the matrix obtained by removing row $i$ and column $j$ from $\mathbf{A}$. For example, the matrix $$\begin{split}\mathbf{A=}\left[\begin{array}{ccc} 1 & 3 & 5\\ 2 & 5 & 1\\ 2 & 3 & 8\end{array}\right]\end{split}$$ has determinant $$\begin{eqnarray*} \left|\mathbf{A}\right| & = & 1\left|\begin{array}{cc} 5 & 1\\ 3 & 8\end{array}\right|-3\left|\begin{array}{cc} 2 & 1\\ 2 & 8\end{array}\right|+5\left|\begin{array}{cc} 2 & 5\\ 2 & 3\end{array}\right|\\ & = & 1\left(5\cdot8-3\cdot1\right)-3\left(2\cdot8-2\cdot1\right)+5\left(2\cdot3-2\cdot5\right)=-25.\end{eqnarray*}$$ The determinant can be computed with `linalg.det`: ###Code A = np.array([[1, 3, 5], [2, 5, 1], [2, 3, 8]]) print linalg.det(A) ###Output -25.0 ###Markdown Computing the norm of a matrix or vector The norm of a matrix is defined as: $$\begin{split}\left\Vert \mathbf{A}\right\Vert =\left\{ \begin{array}{cc} \max_{i}\sum_{j}\left|a_{ij}\right| & \textrm{ord}=\textrm{inf}\\ \min_{i}\sum_{j}\left|a_{ij}\right| & \textrm{ord}=-\textrm{inf}\\ \max_{j}\sum_{i}\left|a_{ij}\right| & \textrm{ord}=1\\ \min_{j}\sum_{i}\left|a_{ij}\right| & \textrm{ord}=-1\\ \max\sigma_{i} & \textrm{ord}=2\\ \min\sigma_{i} & \textrm{ord}=-2\\ \sqrt{\textrm{trace}\left(\mathbf{A}^{H}\mathbf{A}\right)} & \textrm{ord}=\textrm{'fro'}\end{array}\right.\end{split}$$ where $\sigma_i$ are the singular values of the matrix. The norm of a vector is defined as: $$\begin{split}\left\Vert \mathbf{x}\right\Vert =\left\{ \begin{array}{cc} \max\left|x_{i}\right| & \textrm{ord}=\textrm{inf}\\ \min\left|x_{i}\right| & \textrm{ord}=-\textrm{inf}\\ \left(\sum_{i}\left|x_{i}\right|^{\textrm{ord}}\right)^{1/\textrm{ord}} & \left|\textrm{ord}\right|<\infty.\end{array}\right.\end{split}$$ `linalg.norm` computes the norm of a vector or matrix: ###Code A = np.array([[1, 2], [3, 4]]) print linalg.norm(A) print linalg.norm(A,'fro') # frobenius norm (the default) print linalg.norm(A,1) # L1 norm, maximum column sum print linalg.norm(A,-1) # L -1 norm, minimum column sum print linalg.norm(A,np.inf) # L inf norm, maximum row sum ###Output 5.47722557505 5.47722557505 6 4 7 ###Markdown Least squares solution and the pseudo-inverse Problem description The least squares problem is defined as follows: assume the relation between $y_i$ and $\mathbf{x_i}$ can be expressed by a model with a set of coefficients $c_j$ and corresponding model functions $f_j(\mathbf{x_i})$: $$y_{i}=\sum_{j}c_{j}f_{j}\left(\mathbf{x}_{i}\right)+\epsilon_{i}$$ where $\epsilon_i$ represents the uncertainty of the data. Least squares optimizes the following problem with respect to the $c_j$: $$J\left(\mathbf{c}\right)=\sum_{i}\left|y_{i}-\sum_{j}c_{j}f_{j}\left(x_{i}\right)\right|^{2}$$ Its theoretical solution satisfies: $$\frac{\partial J}{\partial c_{n}^{*}}=0=\sum_{i}\left(y_{i}-\sum_{j}c_{j}f_{j}\left(x_{i}\right)\right)\left(-f_{n}^{*}\left(x_{i}\right)\right)$$ which can be rewritten as: $$\begin{eqnarray*} \sum_{j}c_{j}\sum_{i}f_{j}\left(x_{i}\right)f_{n}^{*}\left(x_{i}\right) & = & \sum_{i}y_{i}f_{n}^{*}\left(x_{i}\right)\\ \mathbf{A}^{H}\mathbf{Ac} & = & \mathbf{A}^{H}\mathbf{y}\end{eqnarray*}$$ where: $$\left\{ \mathbf{A}\right\} _{ij}=f_{j}\left(x_{i}\right).$$ When $\mathbf{A^HA}$ is invertible, we have: $$\mathbf{c}=\left(\mathbf{A}^{H}\mathbf{A}\right)^{-1}\mathbf{A}^{H}\mathbf{y}=\mathbf{A}^{\dagger}\mathbf{y}$$ The matrix $\mathbf{A}^{\dagger}$ is called the pseudo-inverse of $\mathbf{A}$. Solving the problem Note that our model can be written as: $$\mathbf{y}=\mathbf{Ac}+\boldsymbol{\epsilon}.$$ Given $\mathbf{y}$ and $\mathbf{A}$, we can use `linalg.lstsq` to solve for $\mathbf c$. Given $\mathbf{A}$, we can use `linalg.pinv` or `linalg.pinv2` to compute $\mathbf{A}^{\dagger}$. Example Assume our data satisfy: $$\begin{align}y_{i} & =c_{1}e^{-x_{i}}+c_{2}x_{i} \\z_{i} & = y_i + \epsilon_i\end{align}$$ where $x_i = \frac{i}{10},\ i = 1,\dots,10$ and $c_1 = 5, c_2 = 2$. Generate the data: ###Code c1, c2 = 5.0, 2.0 i = np.r_[1:11] xi = 0.1*i yi = c1*np.exp(-xi) + c2*xi zi = yi + 0.05 * np.max(yi) * np.random.randn(len(yi)) ###Output _____no_output_____ ###Markdown Construct the matrix $\mathbf A$: ###Code A = np.c_[np.exp(-xi)[:, np.newaxis], xi[:, np.newaxis]] print A ###Output [[ 0.90483742 0.1 ] [ 0.81873075 0.2 ] [ 0.74081822 0.3 ] [ 0.67032005 0.4 ] [ 0.60653066 0.5 ] [ 0.54881164 0.6 ] [ 0.4965853 0.7 ] [ 0.44932896 0.8 ] [ 0.40656966 0.9 ] [ 0.36787944 1. ]] ###Markdown Solve the least squares problem: ###Code c, resid, rank, sigma = linalg.lstsq(A, zi) print c ###Output [ 4.87016856 2.19081311] ###Markdown Here `c` has the same shape as `zi` and is the least squares solution, `resid` is the 2-norm of each column of the residual `zi - A c`, `rank` is the rank of the matrix `A`, and `sigma` contains the singular values of `A`. Check the fit: ###Code xi2 = np.r_[0.1:1.0:100j] yi2 = c[0]*np.exp(-xi2) + c[1]*xi2 plt.plot(xi,zi,'x',xi2,yi2) plt.axis([0,1.1,3.0,5.5]) plt.xlabel('$x_i$') plt.title('Data fitting with linalg.lstsq') plt.show() ###Output _____no_output_____ ###Markdown Generalized inverse `linalg.pinv` or `linalg.pinv2` can be used to compute the generalized inverse; the difference is that the former uses a least squares algorithm while the latter uses a singular value algorithm. Matrix decompositions Eigenvalues and eigenvectors Problem description For a given $N \times N$ matrix $\mathbf A$, the eigenvalue and eigenvector problem amounts to finding scalars $\lambda$ and corresponding vectors $\mathbf v$ such that: $$\mathbf{Av} = \lambda \mathbf{v}$$ The $N$ eigenvalues of the matrix (possibly repeated) can be obtained by computing the roots of the characteristic equation: $$\left|\mathbf{A} - \lambda \mathbf{I}\right| = 0$$ and the eigenvalues are then used to find the (normalized) eigenvectors. Solving the problem - `linalg.eig(A)` - returns the eigenvalues and eigenvectors of a matrix - `linalg.eigvals(A)` - returns the eigenvalues of a matrix - `linalg.eig(A, B)` - solves the problem $\mathbf{Av} = \lambda\mathbf{Bv}$ Example The matrix is $$\begin{split}\mathbf{A}=\left[\begin{array}{ccc} 1 & 5 & 2\\ 2 & 4 & 1\\ 3 & 6 & 2\end{array}\right].\end{split}$$ The characteristic polynomial is: $$\begin{eqnarray*} \left|\mathbf{A}-\lambda\mathbf{I}\right| & = & \left(1-\lambda\right)\left[\left(4-\lambda\right)\left(2-\lambda\right)-6\right]-\\ & & 5\left[2\left(2-\lambda\right)-3\right]+2\left[12-3\left(4-\lambda\right)\right]\\ & = & -\lambda^{3}+7\lambda^{2}+8\lambda-3.\end{eqnarray*}$$ The eigenvalues are: $$\begin{eqnarray*} \lambda_{1} & = & 7.9579\\ \lambda_{2} & = & -1.2577\\ \lambda_{3} & = & 0.2997.\end{eqnarray*}$$ ###Code A = np.array([[1, 5, 2], [2, 4, 1], [3, 6, 2]]) la, v = linalg.eig(A) print la # check the normalization print np.sum(abs(v**2),axis=0) # the first eigenvalue l1 = la[0] # the corresponding eigenvector v1 = v[:, 0].T # verify the eigenvalue/eigenvector pair print linalg.norm(A.dot(v1)-l1*v1) ###Output [ 7.95791620+0.j -1.25766471+0.j 0.29974850+0.j] [ 1. 1. 1.] 3.23301824835e-15 ###Markdown Singular value decomposition Problem description The singular value decomposition of an $M \times N$ matrix $\mathbf A$ is: $$\mathbf{A=U}\boldsymbol{\Sigma}\mathbf{V}^{H}$$ where only the diagonal elements of $\boldsymbol{\Sigma}, (M \times N)$ are non-zero, and $\mathbf U, (M \times M)$ and $\mathbf V, (N \times N)$ are orthogonal matrices. For the details see Wikipedia: https://en.wikipedia.org/wiki/Singular_value_decomposition Solving the problem - `U,s,Vh = linalg.svd(A)` - returns the $U$ matrix, the singular values $s$ and the $V^H$ matrix - `Sig = linalg.diagsvd(s,M,N)` - recovers the $\boldsymbol{\Sigma}$ matrix from the singular values Example Singular value decomposition: ###Code A = np.array([[1,2,3],[4,5,6]]) U, s, Vh = linalg.svd(A) ###Output _____no_output_____ ###Markdown The $\boldsymbol{\Sigma}$ matrix: ###Code M, N = A.shape Sig = linalg.diagsvd(s,M,N) print Sig ###Output [[ 9.508032 0. 0. ] [ 0. 0.77286964 0. ]] ###Markdown Check the result: ###Code print A print U.dot(Sig.dot(Vh)) ###Output [[1 2 3] [4 5 6]] [[ 1. 2. 3.] [ 4. 5. 6.]] ###Markdown LU decomposition The `LU` decomposition of an $M \times N$ matrix $\mathbf A$ is: $$\mathbf{A}=\mathbf{P}\,\mathbf{L}\,\mathbf{U}$$ where $\mathbf P$ is a permutation of the $M \times M$ identity matrix, $\mathbf L$ is lower triangular and $\mathbf U$ is upper triangular. The LU decomposition can be computed with `linalg.lu`; for the details see Wikipedia: https://en.wikipedia.org/wiki/LU_decomposition ###Code A = np.array([[1,2,3],[4,5,6]]) P, L, U = linalg.lu(A) print P print L print U print P.dot(L).dot(U) ###Output [[ 0. 1.] [ 1. 0.]] [[ 1. 0. ] [ 0.25 1. ]] [[ 4. 5. 6. ] [ 0. 0.75 1.5 ]] [[ 1. 2. 3.] [ 4. 5. 6.]] ###Markdown Cholesky decomposition The `Cholesky` decomposition is a special `LU` decomposition, which requires $\mathbf A$ to be a Hermitian positive definite matrix ($\mathbf A = \mathbf{A^H}$). In that case: $$\begin{eqnarray*} \mathbf{A} & = & \mathbf{U}^{H}\mathbf{U}\\ \mathbf{A} & = & \mathbf{L}\mathbf{L}^{H}\end{eqnarray*}$$ i.e. $$\mathbf{L}=\mathbf{U}^{H}.$$ It can be computed with `linalg.cholesky`. QR decomposition The `QR` decomposition of an $M×N$ matrix $\mathbf A$ is: $$\mathbf{A=QR}$$ where $\mathbf R$ is an upper triangular matrix and $\mathbf Q$ is an orthogonal matrix. Wikipedia link: https://en.wikipedia.org/wiki/QR_decomposition It can be computed with `linalg.qr`. Schur decomposition For an $N\times N$ square matrix $\mathbf A$, the `Schur` decomposition finds matrices satisfying: $$\mathbf{A=ZTZ^H}$$ where $\mathbf Z$ is an orthogonal matrix and $\mathbf T$ is an upper triangular matrix. Wikipedia link: https://en.wikipedia.org/wiki/Schur_decomposition ###Code A = np.mat('[1 3 2; 1 4 5; 2 3 6]') print A T, Z = linalg.schur(A) print T, Z print Z.dot(T).dot(Z.T) ###Output [[1 3 2] [1 4 5] [2 3 6]] [[ 9.90012467 1.78947961 -0.65498528] [ 0. 0.54993766 -1.57754789] [ 0. 0.51260928 0.54993766]] [[ 0.36702395 -0.85002495 -0.37782404] [ 0.63681656 -0.06646488 0.76814522] [ 0.67805463 0.52253231 -0.51691576]] [[ 1. 3. 2.] [ 1. 4. 5.] [ 2. 3. 6.]] ###Markdown Matrix functions Consider the Taylor expansion of a function $f(x)$: $$f\left(x\right)=\sum_{k=0}^{\infty}\frac{f^{\left(k\right)}\left(0\right)}{k!}x^{k}$$ For a square matrix, the matrix function can be defined analogously: $$f\left(\mathbf{A}\right)=\sum_{k=0}^{\infty}\frac{f^{\left(k\right)}\left(0\right)}{k!}\mathbf{A}^{k}$$ This definition is also the basis for computing matrix functions. Exponential and logarithm functions Exponential The exponential can be defined as: $$e^{\mathbf{A}}=\sum_{k=0}^{\infty}\frac{1}{k!}\mathbf{A}^{k}$$ `linalg.expm3` computes the result using this Taylor expansion: ###Code A = np.array([[1, 2], [3, 4]]) print linalg.expm3(A) ###Output [[ 51.96890355 74.73648784] [ 112.10473176 164.07363531]] ###Markdown Another method first computes the eigenvalue decomposition of A: $$\mathbf{A}=\mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^{-1}$$ and then (using the properties of orthogonal and diagonal matrices): $$e^{\mathbf{A}}=\mathbf{V}e^{\boldsymbol{\Lambda}}\mathbf{V}^{-1}$$ `linalg.expm2` uses this method: ###Code print linalg.expm2(A) ###Output [[ 51.9689562 74.73656457] [ 112.10484685 164.07380305]] ###Markdown The best method is based on the [`Padé` approximation](https://en.wikipedia.org/wiki/Pad%C3%A9_approximant); the `Padé` approximation is often more accurate than a truncated Taylor series, and it often still works when the Taylor series does not converge, so it is widely used in computational mathematics. `linalg.expm` uses this method: ###Code print linalg.expm(A) ###Output [[ 51.9689562 74.73656457] [ 112.10484685 164.07380305]] ###Markdown Logarithm The inverse of the exponential, implemented by `linalg.logm`: ###Code print A print linalg.logm(linalg.expm(A)) ###Output [[1 2] [3 4]] [[ 1. 2.] [ 3. 4.]]
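###Markdown The Cholesky and QR decompositions mentioned above can be computed the same way; the small example below is a sketch added for illustration (it uses a symmetric positive definite matrix for Cholesky):
###Code
# Cholesky: A must be Hermitian positive definite
A = np.array([[4.0, 2.0], [2.0, 3.0]])
L = linalg.cholesky(A, lower=True)
print(L.dot(L.T))  # should reproduce A
# QR: works for any rectangular matrix
B = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
Q, R = linalg.qr(B)
print(Q.dot(R))  # should reproduce B
###Output
_____no_output_____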
deep-learning/intro-to-pytorch/Part 8 - Transfer Learning.ipynb
###Markdown Transfer LearningIn this notebook, you'll learn how to use pre-trained networks to solved challenging problems in computer vision. Specifically, you'll use networks trained on [ImageNet](http://www.image-net.org/) [available from torchvision](http://pytorch.org/docs/0.3.0/torchvision/models.html). ImageNet is a massive dataset with over 1 million labeled images in 1000 categories. It's used to train deep neural networks using an architecture called convolutional layers. I'm not going to get into the details of convolutional networks here, but if you want to learn more about them, please [watch this](https://www.youtube.com/watch?v=2-Ol7ZB0MmU).Once trained, these models work astonishingly well as feature detectors for images they weren't trained on. Using a pre-trained network on images not in the training set is called transfer learning. Here we'll use transfer learning to train a network that can classify our cat and dog photos with near perfect accuracy.With `torchvision.models` you can download these pre-trained networks and use them in your applications. We'll include `models` in our imports now. ###Code %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import torch from torch import nn from torch import optim import torch.nn.functional as F from torchvision import datasets, transforms, models ###Output _____no_output_____ ###Markdown Most of the pretrained models require the input to be 224x224 images. Also, we'll need to match the normalization used when the models were trained. Each color channel was normalized separately, the means are `[0.485, 0.456, 0.406]` and the standard deviations are `[0.229, 0.224, 0.225]`. ###Code data_dir = 'Cat_Dog_data' # TODO: Define transforms for the training data and testing data train_transforms = transforms.Compose([transforms.RandomRotation(30), transforms.RandomResizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) test_transforms = transforms.Compose([transforms.Resize(255), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]) # Pass transforms in here, then run the next cell to see how the transforms look train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms) test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms) trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True) testloader = torch.utils.data.DataLoader(test_data, batch_size=64) ###Output _____no_output_____ ###Markdown We can load in a model such as [DenseNet](http://pytorch.org/docs/0.3.0/torchvision/models.htmlid5). Let's print out the model architecture so we can see what's going on. ###Code model = models.densenet121(pretrained=True) model ###Output _____no_output_____ ###Markdown This model is built out of two main parts, the features and the classifier. The features part is a stack of convolutional layers and overall works as a feature detector that can be fed into a classifier. The classifier part is a single fully-connected layer `(classifier): Linear(in_features=1024, out_features=1000)`. This layer was trained on the ImageNet dataset, so it won't work for our specific problem. That means we need to replace the classifier, but the features will work perfectly on their own. 
In general, I think about pre-trained networks as amazingly good feature detectors that can be used as the input for simple feed-forward classifiers. ###Code # Freeze parameters so we don't backprop through them for param in model.parameters(): param.requires_grad = False from collections import OrderedDict classifier = nn.Sequential(OrderedDict([ ('fc1', nn.Linear(1024, 500)), ('relu', nn.ReLU()), ('fc2', nn.Linear(500, 2)), ('output', nn.LogSoftmax(dim=1)) ])) model.classifier = classifier ###Output _____no_output_____ ###Markdown With our model built, we need to train the classifier. However, now we're using a **really deep** neural network. If you try to train this on a CPU like normal, it will take a long, long time. Instead, we're going to use the GPU to do the calculations. The linear algebra computations are done in parallel on the GPU leading to 100x increased training speeds. It's also possible to train on multiple GPUs, further decreasing training time.PyTorch, along with pretty much every other deep learning framework, uses [CUDA](https://developer.nvidia.com/cuda-zone) to efficiently compute the forward and backwards passes on the GPU. In PyTorch, you move your model parameters and other tensors to the GPU memory using `model.to('cuda')`. You can move them back from the GPU with `model.to('cpu')` which you'll commonly do when you need to operate on the network output outside of PyTorch. As a demonstration of the increased speed, I'll compare how long it takes to perform a forward and backward pass with and without a GPU. ###Code import time for device in ['cpu', 'cuda']: criterion = nn.NLLLoss() # Only train the classifier parameters, feature parameters are frozen optimizer = optim.Adam(model.classifier.parameters(), lr=0.001) model.to(device) for ii, (inputs, labels) in enumerate(trainloader): # Move input and label tensors to the GPU inputs, labels = inputs.to(device), labels.to(device) start = time.time() outputs = model.forward(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() if ii==3: break print(f"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds") ###Output Device = cpu; Time per batch: 2.472 seconds Device = cuda; Time per batch: 0.025 seconds ###Markdown You can write device agnostic code which will automatically use CUDA if it's enabled like so:```python at beginning of the scriptdevice = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")... then whenever you get a new Tensor or Module this won't copy if they are already on the desired deviceinput = data.to(device)model = MyModule(...).to(device)```From here, I'll let you finish training the model. The process is the same as before except now your model is much more powerful. You should get better than 95% accuracy easily.>**Exercise:** Train a pretrained models to classify the cat and dog images. Continue with the DenseNet model, or try ResNet, it's also a good model to try out first. Make sure you are only training the classifier and the parameters for the features part are frozen. 
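###Markdown Before training the full model below, it can also help to confirm that the freeze step worked, i.e. that only the new classifier parameters will receive gradient updates. This quick check is an addition to the original notebook:
###Code
# Count trainable vs. total parameters; only the classifier should be trainable
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable:,} of {total:,}")
###Output
_____no_output_____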
###Code # Use GPU if it's available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = models.densenet121(pretrained=True) # Freeze parameters so we don't backprop through them for param in model.parameters(): param.requires_grad = False model.classifier = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Dropout(0.2), nn.Linear(256, 2), nn.LogSoftmax(dim=1)) criterion = nn.NLLLoss() # Only train the classifier parameters, feature parameters are frozen optimizer = optim.Adam(model.classifier.parameters(), lr=0.003) model.to(device); epochs = 1 steps = 0 running_loss = 0 print_every = 5 for epoch in range(epochs): for inputs, labels in trainloader: steps += 1 # Move input and label tensors to the default device inputs, labels = inputs.to(device), labels.to(device) optimizer.zero_grad() logps = model.forward(inputs) loss = criterion(logps, labels) loss.backward() optimizer.step() running_loss += loss.item() if steps % print_every == 0: test_loss = 0 accuracy = 0 model.eval() with torch.no_grad(): for inputs, labels in testloader: inputs, labels = inputs.to(device), labels.to(device) logps = model.forward(inputs) batch_loss = criterion(logps, labels) test_loss += batch_loss.item() # Calculate accuracy ps = torch.exp(logps) top_p, top_class = ps.topk(1, dim=1) equals = top_class == labels.view(*top_class.shape) accuracy += torch.mean(equals.type(torch.FloatTensor)).item() print(f"Epoch {epoch+1}/{epochs}.. " f"Train loss: {running_loss/print_every:.3f}.. " f"Test loss: {test_loss/len(testloader):.3f}.. " f"Test accuracy: {accuracy/len(testloader):.3f}") running_loss = 0 model.train() ###Output Epoch 1/1.. Train loss: 0.993.. Test loss: 0.893.. Test accuracy: 0.500 Epoch 1/1.. Train loss: 0.653.. Test loss: 0.250.. Test accuracy: 0.928 Epoch 1/1.. Train loss: 0.365.. Test loss: 0.166.. Test accuracy: 0.970 Epoch 1/1.. Train loss: 0.273.. Test loss: 0.111.. Test accuracy: 0.973 Epoch 1/1.. Train loss: 0.232.. Test loss: 0.088.. Test accuracy: 0.973 Epoch 1/1.. Train loss: 0.165.. Test loss: 0.085.. Test accuracy: 0.968 Epoch 1/1.. Train loss: 0.184.. Test loss: 0.078.. Test accuracy: 0.971 Epoch 1/1.. Train loss: 0.181.. Test loss: 0.059.. Test accuracy: 0.979 Epoch 1/1.. Train loss: 0.169.. Test loss: 0.115.. Test accuracy: 0.956 Epoch 1/1.. Train loss: 0.207.. Test loss: 0.055.. Test accuracy: 0.979 Epoch 1/1.. Train loss: 0.203.. Test loss: 0.052.. Test accuracy: 0.980 Epoch 1/1.. Train loss: 0.199.. Test loss: 0.063.. Test accuracy: 0.973 Epoch 1/1.. Train loss: 0.193.. Test loss: 0.132.. Test accuracy: 0.947 Epoch 1/1.. Train loss: 0.232.. Test loss: 0.082.. Test accuracy: 0.969 Epoch 1/1.. Train loss: 0.192.. Test loss: 0.056.. Test accuracy: 0.979 Epoch 1/1.. Train loss: 0.158.. Test loss: 0.054.. Test accuracy: 0.981 Epoch 1/1.. Train loss: 0.157.. Test loss: 0.055.. Test accuracy: 0.979 Epoch 1/1.. Train loss: 0.187.. Test loss: 0.073.. Test accuracy: 0.970 Epoch 1/1.. Train loss: 0.171.. Test loss: 0.054.. Test accuracy: 0.982 Epoch 1/1.. Train loss: 0.195.. Test loss: 0.067.. Test accuracy: 0.973 Epoch 1/1.. Train loss: 0.174.. Test loss: 0.059.. Test accuracy: 0.982 Epoch 1/1.. Train loss: 0.185.. Test loss: 0.050.. Test accuracy: 0.984 Epoch 1/1.. Train loss: 0.224.. Test loss: 0.067.. Test accuracy: 0.975 Epoch 1/1.. Train loss: 0.180.. Test loss: 0.054.. Test accuracy: 0.982 Epoch 1/1.. Train loss: 0.154.. Test loss: 0.056.. Test accuracy: 0.979 Epoch 1/1.. Train loss: 0.129.. Test loss: 0.053.. Test accuracy: 0.979 Epoch 1/1.. Train loss: 0.186.. 
Test loss: 0.052.. Test accuracy: 0.982 Epoch 1/1.. Train loss: 0.169.. Test loss: 0.057.. Test accuracy: 0.979 Epoch 1/1.. Train loss: 0.137.. Test loss: 0.044.. Test accuracy: 0.983 Epoch 1/1.. Train loss: 0.191.. Test loss: 0.056.. Test accuracy: 0.980 Epoch 1/1.. Train loss: 0.167.. Test loss: 0.058.. Test accuracy: 0.979 Epoch 1/1.. Train loss: 0.173.. Test loss: 0.047.. Test accuracy: 0.983 Epoch 1/1.. Train loss: 0.158.. Test loss: 0.099.. Test accuracy: 0.961 Epoch 1/1.. Train loss: 0.172.. Test loss: 0.050.. Test accuracy: 0.984 Epoch 1/1.. Train loss: 0.182.. Test loss: 0.078.. Test accuracy: 0.970 Epoch 1/1.. Train loss: 0.202.. Test loss: 0.044.. Test accuracy: 0.984 Epoch 1/1.. Train loss: 0.189.. Test loss: 0.074.. Test accuracy: 0.975 Epoch 1/1.. Train loss: 0.180.. Test loss: 0.081.. Test accuracy: 0.968 Epoch 1/1.. Train loss: 0.230.. Test loss: 0.047.. Test accuracy: 0.985 Epoch 1/1.. Train loss: 0.148.. Test loss: 0.049.. Test accuracy: 0.984 Epoch 1/1.. Train loss: 0.149.. Test loss: 0.064.. Test accuracy: 0.977 Epoch 1/1.. Train loss: 0.204.. Test loss: 0.045.. Test accuracy: 0.983 Epoch 1/1.. Train loss: 0.192.. Test loss: 0.047.. Test accuracy: 0.981 Epoch 1/1.. Train loss: 0.248.. Test loss: 0.053.. Test accuracy: 0.979 Epoch 1/1.. Train loss: 0.154.. Test loss: 0.049.. Test accuracy: 0.983 Epoch 1/1.. Train loss: 0.144.. Test loss: 0.052.. Test accuracy: 0.982 Epoch 1/1.. Train loss: 0.183.. Test loss: 0.053.. Test accuracy: 0.979 Epoch 1/1.. Train loss: 0.162.. Test loss: 0.049.. Test accuracy: 0.982 Epoch 1/1.. Train loss: 0.149.. Test loss: 0.065.. Test accuracy: 0.976 Epoch 1/1.. Train loss: 0.145.. Test loss: 0.046.. Test accuracy: 0.982 Epoch 1/1.. Train loss: 0.185.. Test loss: 0.053.. Test accuracy: 0.979 Epoch 1/1.. Train loss: 0.188.. Test loss: 0.054.. Test accuracy: 0.980 Epoch 1/1.. Train loss: 0.172.. Test loss: 0.043.. Test accuracy: 0.983 Epoch 1/1.. Train loss: 0.144.. Test loss: 0.042.. Test accuracy: 0.982 Epoch 1/1.. Train loss: 0.164.. Test loss: 0.045.. Test accuracy: 0.986 Epoch 1/1.. Train loss: 0.156.. Test loss: 0.057.. Test accuracy: 0.979 Epoch 1/1.. Train loss: 0.159.. Test loss: 0.052.. Test accuracy: 0.979 Epoch 1/1.. Train loss: 0.114.. Test loss: 0.047.. Test accuracy: 0.984 Epoch 1/1.. Train loss: 0.164.. Test loss: 0.047.. Test accuracy: 0.980 Epoch 1/1.. Train loss: 0.096.. Test loss: 0.069.. Test accuracy: 0.974
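###Markdown With accuracy in the high nineties, a natural next step is to save the fine-tuned classifier so it can be reused without retraining. The cell below is a sketch of the standard PyTorch pattern for this (the file name is arbitrary):
###Code
# Save only the classifier weights (the frozen DenseNet features come from torchvision)
checkpoint = {'classifier_state_dict': model.classifier.state_dict()}
torch.save(checkpoint, 'cat_dog_classifier.pth')
# To restore later:
# model.classifier.load_state_dict(torch.load('cat_dog_classifier.pth')['classifier_state_dict'])
###Output
_____no_output_____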
notebook/Unit4-1-PyThermo-RankineCycle-OOP.ipynb
###Markdown Rankine Cycle Analysis: the Rankine Cycle -OOP* 1 expression only* 2 the basic abstraction :List dict,function * [simple data type,expression only & List dict,function](./Unit2-2-PyThermo-RankineCycle.ipynb)* 3 Object-oriented programming 1. The Rankine Cycle Chapter 8 : Vapor Power Systems: Example 8.1:Analyzing an Ideal Rankine Cycle Page 438 Example 8.2: Analyzing a Rankine Cycle with Irreversibilities Example 8.1:Analyzing an Ideal Rankine Cycle Page 438Steam is the working fluid in an ideal Rankine cycle. Saturated vapor enters the turbine at 8.0 MPa and saturated liquid exits the condenser at a pressure of 0.008 MPa. The net power output of the cycle is 100 MW.![rankine81](./img/rankine81.jpg)* **Process 1–2:** **Isentropic expansion** of the working fluid through the turbine from saturated vapor at state 1 to the condenser pressure.* **Process 2–3:** Heat transfer from the working fluid as it flows at **constant pressure**through the condenser with saturated liquid at state 3.* **Process 3–4:** **Isentropic compression** in the pump to state 4 in the compressed liquid region.* **Process 4–1:** Heat transfer to the working fluid as it flows at **constant pressure** through the boiler to complete the cycle.Determine for the cycle(a) the thermal efficiency,(b) the back work ratio, (c) the mass flow rate of the steam,in kg/h,(d) the rate of heat transfer, Qin, into the working fluid as it passes through the boiler, in MW,(e) the rate of heat transfer, Qout, from the condensing steam as it passes through the condenser, in MW,(f) the mass flow rate of the condenser cooling water, in kg/h, if cooling water enters the condenser at 15C and exits at 35C.**Engineering Model:*** 1 Each **component** of the cycle is analyzed as a **control volume** at steady state. The control volumes are shown on the accompanying sketch by **dashed** lines.* 2 All processes of the working fluid are internally reversible.* 3 The turbine and pump operate adiabatically.* 4 Kinetic and potential energy effects are negligible.* 5 Saturated vapor enters the turbine. Condensate exits the condenser as saturated liquid. Example 8.2 :Analyzing a Rankine Cycle with IrreversibilitiesReconsider the vapor power cycle of Example 8.1, but include in the analysis that the turbine and the pump each have an isentropic efficiency of 85%. Determine for the modified cycle * (a) the thermal efficiency, * (b) the mass flow rate of steam, in kg/h, for a net power output of 100MW, * (c) the rate of heat transfer $\dot{Q}_{in}$ in into the working fluid as it passes through the boiler, in MW, * (d) the rate of heat transfer $\dot{Q}_{out}$ out from the condensing steam as it passes through the condenser, in MW, * (e) the mass flow rate of the condenser cooling water, in kg/h, if cooling water enters the condenser at 15°C and exits as 35°C.**SOLUTION****Known:** A vapor power cycle operates with steam as the working fluid. The turbine and pump both have efficiencies of 85%.**Find:** Determine the thermal efficiency, the mass flow rate, in kg/h, the rate of heat transfer to the working fluid as it passes through the boiler, in MW, the heat transfer rate from the condensing steam as it passes through thecondenser, in MW, and the mass flow rate of the condenser cooling water, in kg/h.**Engineering Model:**1. Each component of the cycle is analyzed as a control volume at steady state.2. The working fluid passes through the boiler and condenser at constant pressure. Saturated vapor enters the turbine. 
The condensate is saturated at the condenser exit.3. The turbine and pump each operate adiabatically with an efficiency of 85%.4. Kinetic and potential energy effects are negligible![rankine82](./img/rankine82.jpg) 2 Thermal EfficiencyThe net power developed by the cycle is$\dot{W}_{cycle}=\dot{W}_t-\dot{W}_p$Mass and energy rate balances for control volumes around the turbine and pump give,respectively$\frac{\dot{W}_t}{\dot{m}}=h_1-h_2$ $\frac{\dot{W}_p}{\dot{m}}=h_4-h_3$where $\dot{m}$ is the mass flow rate of the steam. The rate of heat transfer to the working fluid as it passes through the boiler is determined using mass and energy rate balances as$\frac{\dot{Q}_{in}}{\dot{m}}=h_1-h_4$The thermal efficiency is then$\eta=\frac{\dot{W}_t-\dot{W}_p}{\dot{Q}_{in}}=\frac{(h_1-h_2)-(h_4-h_3)}{h_1-h_4}$ 3 The Object-oriented Programming of Rankine Cycle Modeling and Simulation of the Rankine Cycle with [Computational Thinking](https://en.wikipedia.org/wiki/Computational_thinking) to the `generic` solutions 3.1 The RankineCycle Apply **abstraction** and **decomposition** to code rankine cycle 8.1&8.2 simulator``` ----Node 0---Turbine---Node 1---- | | Boiler Condenser | | ----Node 3---Pump------Node 2---- ```**Decomposition** : Decompose The ideal rankine cycle into parts : `nodes and devices` **Abstraction** : Define the classes of nodes and devices : `data and methods`* **1** Node * **2** Boiler,Turbine,Condenser,PumpThen, creating **algorithms** to obtain the generic solution results* `class RankineCycle` 3.2 Node Class* **Properties:** name,nid, p,t,h,s,v,x* **Methods:** (p,t),(p,h),(p,s),(h,s),(p,x),(t,x) `__str__` ###Code import seuif97 as if97 class Node: title = ('{:^6} \t {:^20} \t {:^5}\t {:^7}\t {:^7}\t {:^5} \t {:^7}\t {:^7}'.format ("NodeID", "Name", "P", "T", "H", "S", "V", "X")) def __init__(self, name, nid): self.name = name self.nid = nid self.p = None self.t = None self.h = None self.s = None self.v = None self.x = None def pt(self): self.h = if97.pt2h(self.p, self.t) self.s = if97.pt2s(self.p, self.t) self.v = if97.pt2v(self.p, self.t) self.x = None def ph(self): self.t = if97.ph2t(self.p, self.h) self.s = if97.ph2s(self.p, self.h) self.v = if97.ph2v(self.p, self.h) self.x = if97.ph2x(self.p, self.h) def ps(self): self.t = if97.ps2t(self.p, self.s) self.h = if97.ps2h(self.p, self.s) self.v = if97.ps2v(self.p, self.s) self.x = if97.ps2x(self.p, self.s) def hs(self): self.t = if97.hs2t(self.h, self.s) self.p = if97.hs2p(self.h, self.s) self.v = if97.hs2v(self.h, self.s) self.x = if97.hs2x(self.h, self.s) def px(self): self.t = if97.px2t(self.p, self.x) self.h = if97.px2h(self.p, self.x) self.s = if97.px2s(self.p, self.x) self.v = if97.px2v(self.p, self.x) def tx(self): self.p = if97.tx2p(self.t, self.x) self.h = if97.tx2h(self.t, self.x) self.s = if97.tx2s(self.t, self.x) self.v = if97.tx2v(self.t, self.x) def __str__(self): result = ('{:^6d} \t {:^20} \t {:>5.2f}\t {:>7.3f}\t {:>7.2f}\t {:>5.2f} \t {:>7.3f}\t {:>5.3}'.format (self.nid, self.name, self.p, self.t, self.h, self.s, self.v, self.x)) return result ###Output _____no_output_____ ###Markdown 3.3 Device Classes Boiler Class:``` ↑ exitNode main steam ┌───┼───┐Qindot │ │ │ │ │ │ │ │ │ heatAdded └───┼───┘ ↑ inletNode main feedwater ``` * **Properties:** * inletNode,exitNode; * heatAdded,Qindot* **Thermodynamic process** * Simulates the Boiler and tries to get the exit temperature down to the desiredOutletTemp. This is done by continuously adding h while keeping the P constant. 
###Code class Boiler: """ The boiler class ↑ exitNode main steam ┌───┼───┐ │ │ │Qindot │ │ │ │ │ │ heatAdded └───┼───┘ ↑ inletNode main feedwater """ energy = "heatAdded" def __init__(self, inletNode, exitNode): """ Initializes the boiler with nodes """ self.inletNode = inletNode self.exitNode = exitNode def simulate(self, nodes): """ Simulates the Boiler and tries to get the exit temperature down to the desiredOutletTemp. This is done by continuously adding h while keeping the P constant. """ self.heatAdded = nodes[self.exitNode].h - nodes[self.inletNode].h ###Output _____no_output_____ ###Markdown Turbine Classturbine in the Rankine cycle``` inletNode inlet steam ┌────────┐ ↓ ╱ │ workExtracted ┤ │ ╲ │ └────────┤ ↓ exitNode exhausted steam ```* **Properties:** * inletNode,exitNode; * workExtracted* **Thermodynamic process** * doing work while expanding. ###Code from seuif97 import ps2h class Turbine: """ Turbine class Represents a turbine in the Rankine cycle inletNode inlet steam ┌────────┐ ↓ ╱ │ workExtracted ┤ │ ╲ │ └────────┤ ↓ exitNode exhausted steam """ energy = 'workExtracted' def __init__(self, inletNode, exitNode, eta=1.0): """ Initializes the turbine with nodes """ self.inletNode = inletNode self.exitNode = exitNode self.eta = eta def simulate(self, nodes): """ Simulates the turbine """ nodes[self.exitNode].s = nodes[self.inletNode].s hout_s = ps2h(nodes[self.exitNode].p, nodes[self.exitNode].s) nodes[self.exitNode].h = nodes[self.inletNode].h - \ self.eta*(nodes[self.inletNode].h-hout_s) nodes[self.exitNode].ph() self.workExtracted = nodes[self.inletNode].h - nodes[self.exitNode].h ###Output _____no_output_____ ###Markdown Pump Classthe pump in the Rankine cycle``` ┌───────┐ │ │ exitNode ← ┼───────┼← inletNode workRequired │ │ └───────┘ ```* **Properties:** * inletNode,exitNode; * workRequired* **Thermodynamic process** * workRequired ###Code from seuif97 import ps2h class Pump: """ Pump class Represents a pump in the Rankine cycle ┌───────┐ │ │ exitNode ← ┼───────┼← inletNode │ │ └───────┘ """ energy = "workRequired" def __init__(self,inletNode, exitNode,eta=1.0): """ Initializes the pump with nodes """ self.inletNode = inletNode self.exitNode = exitNode self.eta=eta def simulate(self,nodes): """ Simulates the pump """ sout_s = nodes[self.inletNode].s hout_s = ps2h(nodes[self.exitNode].p, sout_s) nodes[self.exitNode].h = nodes[self.inletNode].h+(hout_s -nodes[self.inletNode].h)/self.eta nodes[self.exitNode].ph() self.workRequired = nodes[self.exitNode].h - nodes[self.inletNode].h ###Output _____no_output_____ ###Markdown Condenser ClassThe Condenser ``` ↓ inletNode exhausted steam ┌───┴───┐ │ │ exitNodeW ←┼───────┼← inletNodeW │ │ └───┬───┘ ↓ exitNode condensate water ```* **Properties:** * inletNode,exitNode;inletNodeW,exitNodeW * heatExtracted,Qoutdot,mcwdot* **Thermodynamic process** * heatExtracted(Qoutdot,mcwdot) ###Code class Condenser: """ The Condenser class ↓ inletNode exhausted steam ┌───┴───┐ │ │ exitNodeW ←┼───────┼← inletNodeW │ │ └───┬───┘ ↓ exitNode condensate water """ energy = "heatOuted" def __init__(self, inletNode, exitNode): """ Initializes the condenser with nodes """ self.inletNode = inletNode self.exitNode = exitNode def simulate(self, nodes): """ Simulates the Condenser """ self.heatExtracted = nodes[self.inletNode].h - nodes[self.exitNode].h ###Output _____no_output_____ ###Markdown 3.4 Analysis the Rankine Cycle ``` ----Node 0---Turbine---Node 1---- | | Boiler Condenser | | ----Node 3---Pump------Node 2---- ``` * 1 init nodes* 2 connect device* 3 
simulate devices* 4 cycle ###Code %%file ./rankine/rankine81-nds.csv NAME,NID,p,t,x MainSteam,0,8,,1 OutletHP,1,0.008,, CondenserWater,2,0.008,,0 MainFeedWater,3,8,, %%file ./rankine/rankine81-des.csv NAME,TYPE,eta,minID,moutID Turbine,TURBINE-EX0,1.0,0,1 Condenser,CONDENSER,,1,2 Feedwater Pump,PUMP,1.0,2,3 Boiler,BOILER,,3,0 %%file ./rankine/rankine82-nds.csv NAME,NID,p,t,x MainSteam,0,8,,1 OutletHP,1,0.008,, CondenserWater,2,0.008,,0 MainFeedWater,3,8,, %%file ./rankine/rankine82-des.csv NAME,TYPE,eta,minID,moutID Turbine,TURBINE-EX0,0.85,0,1 Condenser,CONDENSER,,1,2 Feedwater Pump,PUMP,0.85,2,3 Boiler,BOILER,,3,0 import csv import numpy as np def read_nodesfile(filename): """ csvfile:nodes:unorder in the file""" # get count of Nodes,init nodes[] with size in count countNodes = len(open(filename, 'r').readlines()) - 1 nodes = [None for i in range(countNodes)] # put each node in nodes csvfile = open(filename, 'r') reader = csv.DictReader(csvfile) for line in reader: i = int(line['NID']) nodes[i] = Node(line['NAME'], i) try: nodes[i].p = float(line['p']) except: nodes[i].p = None try: nodes[i].t = float(line['t']) except: nodes[i].t = None try: nodes[i].x = float(line['x']) except: nodes[i].x = None if line['p'] != '' and line['t'] != '': nodes[i].pt() elif line['p'] != '' and line['x'] != '': nodes[i].px() elif line['t'] != '' and line['x'] != '': nodes[i].tx() csvfile.close() return nodes compdict = { "BOILER": Boiler, "TURBINE-EX0": Turbine, "PUMP": Pump, "CONDENSER": Condenser } def read_devicefile(filename): csvfile = open(filename, 'r') reader = csv.DictReader(csvfile) Comps = {} for curdev in reader: minID = int(curdev['minID']) moutID = int(curdev['moutID']) try: eta = float(curdev['eta']) Comps[curdev['NAME']] = compdict[curdev['TYPE']]( minID, moutID, eta) except: Comps[curdev['NAME']] = compdict[curdev['TYPE']](minID, moutID) csvfile.close() return Comps import sys class RankineCycle: def __init__(self, name): """ self.nodes : list of all nodes self.Comps : dict of all components """ self.name = name self.nodes = [] self.Comps = {} self.totalworkExtracted = 0 self.totalworkRequired = 0 self.totalheatAdded = 0 self.efficiency = 100.0 def addNodes(self, filename): self.nodes = read_nodesfile(filename) def addComponent(self, filename): self.Comps = read_devicefile(filename) def cycleSimulator(self): for key in self.Comps: self.Comps[key].simulate(self.nodes) if self.Comps[key].energy == "workExtracted": self.totalworkExtracted += self.Comps[key].workExtracted elif self.Comps[key].energy == "workRequired": self.totalworkRequired += self.Comps[key].workRequired elif self.Comps[key].energy == "heatAdded": self.totalheatAdded += self.Comps[key].heatAdded self.efficiency = 100.0 * \ (self.totalworkExtracted - self.totalworkRequired) / self.totalheatAdded def OutFiles(self, outfilename=None): savedStdout = sys.stdout if (outfilename != None): datafile = open(outfilename, 'w', encoding='utf-8') sys.stdout = datafile print("\n \t%s" % self.name) print("{:>20} {:>.2f} {:1}".format( 'Thermal efficiency:', self.efficiency, '%')) print(Node.title) for node in self.nodes: print(node) if (outfilename != None): datafile.close() sys.stdout = savedStdout class SimRankineCycle(object): def __init__(self, nodes_filesname, dev_filesname): self.nodes_filesname = nodes_filesname self.dev_filesname = dev_filesname self.cyclename = nodes_filesname[0:nodes_filesname.find('-')] def CycleSimulator(self): self.cycle = RankineCycle(self.cyclename) self.cycle.addNodes(self.nodes_filesname) 
self.cycle.addComponent(self.dev_filesname) self.cycle.cycleSimulator() def SimulatorOutput(self): # output self.cycle.OutFiles() self.cycle.OutFiles(self.cyclename + '-sp.txt') import glob nds_filesname = glob.glob(r'./rankine/rankine8[0-9]-nds.csv') dev_filesname = glob.glob(r'./rankine/rankine??-des.csv') cycle = [] for i in range(len(nds_filesname)): cycle.append(SimRankineCycle(nds_filesname[i], dev_filesname[i])) cycle[i].CycleSimulator() # Specified Net Output Power for i in range(len(nds_filesname)): cycle[i].SimulatorOutput() ###Output _____no_output_____ ###Markdown 3.5 glob — Unix style pathname pattern expansionhttps://docs.python.org/3/library/glob.htmlThe `glob` module finds all the pathnames matching a specified pattern according to the rules used by the Unix shell, although results are returned in `arbitrary order.` `No tilde(~)` expansion is done,but `*`, `?`, and character `ranges` expressed with `[]` will be correctly matched ###Code import glob nds_filesname = glob.glob(r'./rankine/rankine8[0-9]-nds.csv') dev_filesname = glob.glob(r'./rankine/rankine??-des.csv') for i in range(len(nds_filesname)): print(nds_filesname) print(dev_filesname) ###Output _____no_output_____
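###Markdown A small illustrative example of the pattern rules described above (a sketch; the exact matches depend on which files exist in `./rankine/`):
###Code
# '?' matches a single character, '[]' matches a character range
print(glob.glob(r'./rankine/rankine8?-nds.csv'))    # e.g. rankine81-nds.csv, rankine82-nds.csv
print(glob.glob(r'./rankine/rankine8[12]-des.csv')) # only the rankine81/rankine82 device files
###Output
_____no_output_____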
Examples/cdata/cdata_general_example.ipynb
###Markdown Compose steps. ###Code step1 = RecordMap( blocks_in=data_algebra.cdata.RecordSpecification( control_table=incoming_shape, record_keys=record_keys ), ) step2 = RecordMap( blocks_out=data_algebra.cdata.RecordSpecification( control_table=outgoing_shape, record_keys=record_keys ), ) step2.transform(step1.transform(data)) both = step2.compose(step1) both both.transform(data) ###Output _____no_output_____
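###Markdown As a quick, hedged consistency check, the composed map should give the same frame as applying the two steps one after the other (assuming both transforms return pandas DataFrames with matching row order):
###Code
# Compare the chained transforms with the composed transform
res_chained = step2.transform(step1.transform(data))
res_composed = both.transform(data)
print(res_chained.equals(res_composed))
###Output
_____no_output_____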
RobustRegression/OutlierDetectionAndRemovalMinimumCovarianceDeterminant.ipynb
###Markdown Outlier Detection and Removalhttps://machinelearningmastery.com/model-based-outlier-detection-and-removal-in-python/ Dataset[House Price Dataset(housing.csv)](https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv)[House Price Dataset Description (housing.names)](https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.names) Load and summarize the dataset ###Code from pandas import read_csv from sklearn.model_selection import train_test_split # Load the dataset url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv' df = read_csv(url, header=None) # Retrieve the array data = df.values # Split into input and output elements X, y = data[:, :-1], data[:, -1] # Summarize the shape of the dataset X.shape, y.shape # Split into train and test sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1) # Summarize the shape of the train and test sets X_train.shape, X_test.shape, y_train.shape, y_test.shape ###Output _____no_output_____ ###Markdown Minimum Covariance Determinant PerformanceIf the input variables have a Gaussian distribution, then simple statistical methods can be used to detect outliers. ###Code from sklearn.linear_model import LinearRegression from sklearn.covariance import EllipticEnvelope from sklearn.metrics import mean_absolute_error ###Output _____no_output_____ ###Markdown It provides the “contamination” argument that defines the expected ratio of outliers to be observed in practice. In this case, we will set it to a value of 0.01, found with a little trial and error. ###Code # Identify outliers in the training dataset ee = EllipticEnvelope(contamination=0.01) yhat = ee.fit_predict(X_train) # Select all rows that are not outliers mask = yhat != -1 X_train, y_train = X_train[mask, :], y_train[mask] # Summarize the shape of the updated training dataset X_train.shape, y_train.shape # Fit the model model = LinearRegression() model.fit(X_train, y_train) # Evaluate the model yhat = model.predict(X_test) # Evaluate predictions mae = mean_absolute_error(y_test, yhat) print(f'MAE {mae}') ###Output MAE 3.3875684210278276
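###Markdown For context, it can be useful to compare against a baseline fit on the full training split, i.e. without removing the rows flagged by the elliptic envelope. This comparison is an addition to the original walk-through, and the exact numbers will vary between runs:
###Code
# Baseline: refit on the original, unfiltered training split
X_train_all, X_test_all, y_train_all, y_test_all = train_test_split(X, y, test_size=0.33, random_state=1)
baseline = LinearRegression()
baseline.fit(X_train_all, y_train_all)
yhat_base = baseline.predict(X_test_all)
print(f'Baseline MAE {mean_absolute_error(y_test_all, yhat_base)}')
###Output
_____no_output_____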
pln/.ipynb_checkpoints/binarizacao_e_tf_idf-checkpoint.ipynb
###Markdown Representação numérica de palavras e textos Neste notebook iremos apresentação formas de representar valores textuais por meio de representação numérica. Iremos usar pandas, caso queira entender um pouco sobre pandas, [veja este notebook](pandas.ipynb).Em aprendizado de máquina, muitas vezes, precisamos da representação numérica de um determinado valor. Por exemplo: ###Code import pandas as pd df_jogos = pd.DataFrame([ ["boa","nublado","não"], ["boa","chuvoso","não"], ["média","nublado","sim"], ["fraca","chuvoso","não"]], columns=["disposição","tempo","jogar volei?"]) df_jogos ###Output _____no_output_____ ###Markdown Caso quisermos maperar cada coluna (agora chamada de atributo) para um valor, forma mais simples de se fazer a transformação é simplesmente mapear esse atributo para um valor numérico. Veja o exemplo abaixo: Nesse exemplo, temos dois atributos disposição do jogador e tempo e queremos prever se o jogar irá jogar volei ou não. Tanto os atributos quanto a classe podem ser mapeados como número. Além disso, o atributo `disposicao` é um atributo que representa uma escala - o que deixa essa forma de tranformação bem adequada para esse atributo. ###Code from typing import Dict def mapeia_atributo_para_int(df_data:pd.DataFrame, coluna:str, dic_nom_to_int: Dict[int,str]): for i,valor in enumerate(df_data[coluna]): valor_int = dic_nom_to_int[valor] df_data[coluna].iat[i] = valor_int df_jogos = pd.DataFrame([ ["boa","nublado","sim"], ["boa","chuvoso","não"], ["média","ensolarado","sim"], ["fraca","chuvoso","não"]], columns=["disposição","tempo","jogar volei?"]) dic_disposicao = {"boa":3,"média":2,"fraca":1} mapeia_atributo_para_int(df_jogos, "disposição", dic_disposicao) dic_tempo = {"ensolarado":3,"nublado":2,"chuvoso":1} mapeia_atributo_para_int(df_jogos, "tempo", dic_tempo) dic_volei = {"sim":1, "não":0} mapeia_atributo_para_int(df_jogos, "jogar volei?", dic_volei) df_jogos ###Output _____no_output_____ ###Markdown Binarização dos atributos categóricos Podemos fazer a binarização dos atributos categóricos em que, cada valor de atributo transforma-se em uma coluna que recebe `0` caso esse atributo não exista e `1`, caso contrário. Em nosso exemplo: ###Code from preprocessamento_atributos import BagOfItems df_jogos = pd.DataFrame([ [4, "boa","nublado","sim"], [3,"boa","chuvoso","não"], [2,"média","ensolarado","sim"], [1,"fraca","chuvoso","não"]], columns=["id","disposição","tempo","jogar volei?"]) dic_disposicao = {"boa":3,"média":2,"fraca":1} bag_of_tempo = BagOfItems(0) #veja a implementação do método em preprocesamento_atributos.py df_jogos_bot = bag_of_tempo.cria_bag_of_items(df_jogos,["tempo"]) df_jogos_bot ###Output 0/4 ###Markdown Como existem vários valores no teste que você desconhece, se fizermos dessa forma, atributos que estão no teste poderiam estar completamente zerados no treino, sendo desnecessário, por exemplo: ###Code df_jogos_treino = df_jogos[:2] df_jogos_treino df_jogos_teste = df_jogos[2:] df_jogos_teste ###Output _____no_output_____ ###Markdown Exemplo Real Considere este exemplo real de filmes e seus atores ([obtidos no kaggle](https://www.kaggle.com/rounakbanik/the-movies-dataset)): ###Code import pandas as pd df_amostra = pd.read_csv("movies_amostra.csv") df_amostra ###Output _____no_output_____ ###Markdown Nesse exemplo, as colunas que representam os atores principais podem ser binarizadas. Em nosso caso, podemos colocar os atores todos em um "Bag of Items". Os atores são representados por as colunas `ator_1`, `ator_2`,..., `ator_5`. 
Abaixo, veja um sugestão de como fazer em dataset: ###Code import pandas as pd from preprocessamento_atributos import BagOfItems obj_bag_of_actors = BagOfItems(min_occur=3) #boa=bag of actors ;) df_amostra_boa = obj_bag_of_actors.cria_bag_of_items(df_amostra,["ator_1","ator_2","ator_3","ator_4","ator_5"]) df_amostra_boa ###Output _____no_output_____ ###Markdown Veja que temos bastante atributos um para cada ator. Mesmo sendo melhor possuirmos poucos atributos e mais informativos, um método de aprendizado de máquina pode ser capaz de usar essa quantidade de forma eficaz. Particularmente, o [SVM linear](https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html) e o [RandomForest](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) são métodos que conseguem ir bem nesse tipo de dado. Essa é a forma mais prática de fazer, porém, em aprendizado de máquina, geralmente dividimos nossos dados em, pelo menos, treino e teste em que treino é o dado que você terá todo o acesso e, o teste, deve reproduzir uma amostra do mundo real. Vamos supor que no treino há atores raros que não ocorrem no teste, nesse caso tais atributos seriam inúteis para o teste. Isso pode fazer com que o resultado reproduza menos o mundo real - neste caso, é muito possível que a diferença seja quase insignificante. Mas, caso queiramos fazer da forma "mais correta", temos que considerar apenas o treino para isso: ###Code #supondo que 80% da amostra é treino df_treino_amostra = df_amostra.sample(frac=0.8, random_state = 2) df_teste_amostra = df_amostra.drop(df_teste_amostra.index) #min_occur=3 definie o minimo de ocorrencias desse ator para ser considerado #pois, um ator que apareceu em poucos filmes, pode ser menos relevante para a predição do genero obj_bag_of_actors = BagOfItems(min_occur=3) df_treino_amostra_boa = obj_bag_of_actors.cria_bag_of_items(df_treino_amostra,["ator_1","ator_2","ator_3","ator_4","ator_5"]) df_teste_amostra_boa = obj_bag_of_actors.aplica_bag_of_items(df_teste_amostra,["ator_1","ator_2","ator_3","ator_4","ator_5"]) ###Output _____no_output_____ ###Markdown Representação Bag of Words Muitas vezes, temos textos que podem ser relevantes para uma determinada tarefa de aprendizado d máquina. Por isso, temos que representar tais elementos para nosso método de aprendizado de máquina. A forma mais usual para isso, é a `Bag of Words` em que cada palavra é um atributo e, o valor dela, é a frequencia dele no texto (ou algum outro valor que indique a importancia dessa palavra no texto).Por exemplo, caso temos as frases `A casa é grande`, `A casa é verde verde` em que cada frase é uma instancia diferente. A representação seria da seguinte forma: ###Code dic_bow = {"a":[1,1], "casa":[1,1], "é":[1,1], "verde":[0,2] } df_bow = pd.DataFrame.from_dict(dic_bow) df_bow ###Output _____no_output_____ ###Markdown Da forma que fizemos acima, usamos a frequencia de um termo para definir sua importancia no texto, porém, existem termos que possuem uma frequencia muito alta e importancia baixa: são os casos dos artigos e preposições por exemplo, pois, eles não discriminam o texto. Uma forma de mensurar o porder discriminativo das palavras é usando a métrica `TF-IDF`. Para calcularmos essa métrica, primeiramente calculamos a frequencia de um termo no documento (TF) e, logo após multiplamos pelo IDF. 
A fórmula para calcular o TF-IDF do termo $i$ no documento (ou instancia) $j$ é a seguinte:\begin{equation} TFIDF_{ij} = TF_{ij} \times IDF_i\end{equation}\begin{equation} TF_{ij} = log(f_{ij})\end{equation}em que $f_{ij}$ é a frequencia de um termo $i$ no documento $j$. Usa-se o `log` para suavizar valores muito altos e o $IDF$ (do inglês, _Inverse Document Frequency_) do termo $i$ é calculado da seguinte forma:\begin{equation} IDF_i = log(\frac{N}{n_i})\end{equation}em que $N$ é o número de documentos da coleção e $n_i$ é o número de documentos em que esse termo $i$ ocorre. Espera-se que, quanto mais discriminativo o termo, em menos documentos esse termo irá ocorrer e, consequentemente, o $IDF$ deste termo será mais alto. Por exemplo, considere as palavras `de`, `bebida` e `cerveja`. `cerveja` é uma palavra mais discriminativa do que `bebida`; e `bebibda` é mais discriminativo do que a preposição `de`. Muito provavelmente teremos mais frequentemente termos menos discriminativos. Por exemplo, se tivermos uma coleção de 1000 documentos, `de` poderia ocorrer em 900 documentos, `bebida` em 500 e `cerveja` em 100 documentos. Se fizermos o calculo, veremos que quanto mais discriminativo um termo, mais alto é seu IDF: ###Code import math N = 1000 n_de = 900 n_bebida = 500 n_cerveja = 100 IDF_de = math.log(N/n_de) IDF_bebida = math.log(N/n_bebida) IDF_cerveja = math.log(N/n_cerveja) print(f"IDF_de: {IDF_de}\tIDF_bebida:{IDF_bebida}\tIDF_cerveja:{IDF_cerveja}") ###Output _____no_output_____ ###Markdown A biblioteca `scikitlearn`também já possui uma classe [TFIDFVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html) que transforma um texto em um vetor de atributos usando o TF-IDF para o valor referente a relevancia deste termo. Veja um exemplo na coluna `resumo` do nosso dataset de filme: ###Code import pandas as pd from preprocessamento_atributos import BagOfWords df_amostra = pd.read_csv("datasets/movies_amostra.csv") bow_amostra = BagOfWords() df_bow_amostra = bow_amostra.cria_bow(df_amostra,"resumo") df_bow_amostra ###Output _____no_output_____ ###Markdown Como são muitos atributos, pode parecer que não ficou corretamente gerado. Mas, filtrando as palavras de um determinado resumo você verificará que está ok: ###Code df_bow_amostra[["in","lake", "high"]] ###Output _____no_output_____
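###Markdown As a quick cross-check of the formulas above, the cell below computes TF-IDF by hand for the two toy sentences ("A casa é grande" / "A casa é verde verde"). It is a minimal sketch: the term frequency uses log(1 + f) so that absent terms map to 0 instead of log(0), and scikit-learn's TfidfVectorizer applies its own smoothing and normalization, so its numbers will differ slightly from these. ###Code
import math
import pandas as pd

# Toy corpus from the explanation above.
corpus = ["A casa é grande", "A casa é verde verde"]
docs = [sentence.lower().split() for sentence in corpus]
N = len(docs)                                     # number of documents in the collection

vocab = sorted(set(word for doc in docs for word in doc))

tfidf = {}
for word in vocab:
    n_i = sum(1 for doc in docs if word in doc)   # documents that contain term i
    idf = math.log(N / n_i)                       # IDF_i = log(N / n_i)
    values = []
    for doc in docs:
        f_ij = doc.count(word)                    # frequency of term i in document j
        tf = math.log(1 + f_ij)                   # log(1 + f) so that f = 0 gives 0 instead of log(0)
        values.append(tf * idf)
    tfidf[word] = values

pd.DataFrame(tfidf, index=["frase 1", "frase 2"])
###Output _____no_output_____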
unused/spam-detector.ipynb
###Markdown spam-detector%2020-04-19___ ###Code import pandas as pd import numpy as np import urllib import requests import os import matplotlib.pyplot as plt DATABASE_URL = 'https://archive.ics.uci.edu/ml/machine-learning-databases/spambase/' LOCAL_DATABASE_PATH = 'spambase' # Try opening local copies first before fetching from online database try: spam_df = pd.read_csv(os.path.join('spambase-data', 'spambase.data'), header=None, index_col=False) print('Reading .data file from local copy of database...') except OSError: spam_df = pd.read_csv(urllib.parse.urljoin(database_url, 'spambase.data'), header=None, index_col=False) print('Reading .data file from online of database...') try: with open(os.path.join('spambase-data', 'spambase.names')) as f: names_file_text = f.read() print('Reading .names file from local copy of database...') except OSError: names_file_text = requests.get(urllib.parse.urljoin(database_url, 'spambase.names')).text print('Reading .names file from online database...') ###Output Reading .data file from local copy of database... Reading .names file from local copy of database... ###Markdown Attributes are specified in the .names format: http://www.cs.washington.edu/dm/vfml/appendixes/c45.htm ###Code #print(names_file_text) def get_attribute_names(names_file_text): # Anything between a '|' and the end of the line is ignored strip_comments = lambda line : line.split('|',1)[0] attr_names = [] read_classes = False for line in names_file_text.splitlines(): if len(line.strip()) == 0 or line[0] == '|': continue elif not read_classes: classes = strip_comments(line).split(',') read_classes = True else: attr_name, attr_type = strip_comments(line).split(':') attr_names.append(attr_name) return attr_names # Add classlabel name to last column spam_df.columns = get_attribute_names(names_file_text) + ['spam'] # Number of Instances: 4601 (1813 Spam = 39.4%) # Check for null entries: none found spam_df.isnull().sum().sum() spam_df def normalise_capital_run_length_data(df): crl = df.filter(regex=('capital_run_length*')) # Min-Max normalisation normalise = lambda col : (col-col.min())/(col.max()-col.min()) crl = (crl-crl.min())/(crl.max()-crl.min()) for col_name in crl.columns: df[col_name] = normalise(df[col_name]) return df features = spam_df.drop('spam', axis=1) #features2 = normalise_capital_run_length_data(features) from sklearn.model_selection import train_test_split RANDOM_SEED = 9 x_train, x_test, y_train, y_test = train_test_split(features, spam_df['spam'], test_size=0.3, random_state=RANDOM_SEED) from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler() scaler.fit(x_train) columns = x_train.columns train_index = x_train.index test_index = x_test.index x_train = scaler.transform(x_train) x_test = scaler.transform(x_test) x_train = pd.DataFrame(data=x_train, index=train_index, columns=columns) x_test = pd.DataFrame(data=x_test, index=test_index, columns=columns) x_train from sklearn.feature_selection import mutual_info_classif from sklearn.feature_selection import chi2 from sklearn.feature_selection import SelectKBest selector = SelectKBest(mutual_info_classif, k=20) selector.fit(x_train, y_train) x_train = selector.transform(x_train) x_test = selector.transform(x_test) import sklearn.metrics as skm from sklearn.metrics import roc_curve def display_metrics(model, x_test, y_test): score = model.score(x_test, y_test) probabilities = model.predict_proba(x_test) y_pred = model.predict(x_test) roc_auc = skm.roc_auc_score(y_test, probabilities[:, 1]) fpr, tpr, _ = 
roc_curve(y_test, probabilities[:, 1]) plt.plot(fpr, tpr) plt.plot([0, 1], [0, 1], color='grey', lw=1, linestyle='--') plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') tp, fp, fn, tn = skm.confusion_matrix(y_test, y_pred).ravel() precision = skm.precision_score(y_test, y_pred) recall = skm.recall_score(y_test, y_pred) metrics_report = ( f'Accuracy : {score}\n' f'ROC AUC : {roc_auc}\n' f'TP, FP, FN, TN : {[tp, fp, fn, tn]}\n' f'Precision : {precision}\n' f'Recall : {recall}\n' ) print(metrics_report) from sklearn.ensemble import RandomForestClassifier model1 = RandomForestClassifier(random_state=RANDOM_SEED) model1 = model1.fit(x_train, y_train) display_metrics(model1, x_test, y_test) from sklearn.neighbors import KNeighborsClassifier model3 = KNeighborsClassifier(n_neighbors=4) model3.fit(x_train, y_train) display_metrics(model3, x_test, y_test) from sklearn.svm import SVC svc = SVC(kernel='linear',probability=True) svc.fit(x_train, y_train) display_metrics(svc, x_test, y_test) df.to_csv('a.csv') from sklearn.decomposition import PCA pca = PCA(n_components=2) pca.fit(x_train) x_train = pca.transform(x_train) x_test = pca.transform(x_test) import matplotlib.pyplot as plt plt.scatter(x_train[:,0],x_train[:,1], c=y_train) #y_train.shape ###Output _____no_output_____
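###Markdown One caveat about display_metrics() above: for binary labels scikit-learn lays the confusion matrix out as [[tn, fp], [fn, tp]], so .ravel() yields the counts in the order (tn, fp, fn, tp) rather than (tp, fp, fn, tn) as the tuple there is named. This only affects how the four counts are labelled in the printed report, not the accuracy, precision, recall or ROC AUC values. The tiny check below illustrates the layout. ###Code
import numpy as np
from sklearn.metrics import confusion_matrix

# For binary labels, confusion_matrix(y_true, y_pred).ravel() returns (tn, fp, fn, tp).
y_true = np.array([0, 0, 1, 1, 1])
y_pred = np.array([0, 1, 1, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TN={tn}  FP={fp}  FN={fn}  TP={tp}")   # expected: TN=1  FP=1  FN=1  TP=2
###Output _____no_output_____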
evaluations/simulation/5-query.simulation-cov-40.ipynb
###Markdown 1. Parameters ###Code simulation_dir = 'simulations/unset' metadata_file = 'input/metadata.tsv.gz' # Parameters read_coverage = 40 mincov = 10 simulation_dir = "simulations/cov-40" iterations = 3 sub_alpha = 0.2 from pathlib import Path import imp fp, pathname, description = imp.find_module('gdi_benchmark', ['../../lib']) gdi_benchmark = imp.load_module('gdi_benchmark', fp, pathname, description) simulation_dir_path = Path(simulation_dir) case_name = str(simulation_dir_path.name) index_reads_path = simulation_dir_path / 'index-reads' index_assemblies_path = simulation_dir_path / 'index-assemblies' output_api_reads_path = simulation_dir_path / 'query-reads-api.tsv' output_api_assemblies_path = simulation_dir_path / 'query-assemblies-api.tsv' output_cli_reads_path = simulation_dir_path / 'query-reads-cli.tsv' output_cli_assemblies_path = simulation_dir_path / 'query-assemblies-cli.tsv' ###Output _____no_output_____ ###Markdown 2. Benchmark command-line ###Code import pandas as pd import genomics_data_index.api as gdi def benchmark_cli_index(name: str, index_path: Path) -> pd.DataFrame: db = gdi.GenomicsDataIndex.connect(index_path) mutations_df = db.mutations_summary(reference_name='reference').sort_values('Count', ascending=False) top_mutation = mutations_df.iloc[0].name if 'chrom' not in top_mutation: raise Exception(f'Does not exist a single mutation for index {index_path}') else: print(f'top_mutation={top_mutation}') benchmark_commands = { 'query hasa': f'gdi --project-dir {index_path} --ncores 1 query "hasa:{top_mutation}"', 'query isa': f'gdi --project-dir {index_path} --ncores 1 query "isa:SH13-007"', 'query --summary': f'gdi --project-dir {index_path} --ncores 1 query --summary', 'query --features-summary': f'gdi --project-dir {index_path} --ncores 1 query --features-summary mutations', 'query isin': f'gdi --project-dir {index_path} --ncores 1 query --reference-name reference "isin_100_substitutions:SH13-007"', 'list samples': f'gdi --project-dir {index_path} --ncores 1 list samples', } number_samples = db.count_samples() number_features_no_unknown = db.count_mutations(reference_genome='reference', include_unknown=False) number_features_all = db.count_mutations(reference_genome='reference', include_unknown=True) iterations = 10 benchmarker = gdi_benchmark.QueryBenchmarkHandler() return benchmarker.benchmark_cli(name=name, kind_commands=benchmark_commands, number_samples=number_samples, number_features_no_unknown=number_features_no_unknown, number_features_all=number_features_all, iterations=iterations) ###Output _____no_output_____ ###Markdown 2.1. Benchmark reads ###Code reads_cli_df = benchmark_cli_index(name=f'{case_name} (reads)', index_path=index_reads_path) reads_cli_df.head(3) reads_cli_df.to_csv(output_cli_reads_path, sep='\t', index=False) ###Output _____no_output_____ ###Markdown 2.1. Benchmark assemblies ###Code assemblies_cli_df = benchmark_cli_index(name=f'{case_name} (reads)', index_path=index_assemblies_path) assemblies_cli_df.head(3) assemblies_cli_df.to_csv(output_cli_assemblies_path, sep='\t', index=False) ###Output _____no_output_____ ###Markdown 3. Test query API 3.1. Load (example) metadataThe simulated data is based off of real sample names and a real tree. So I can load up real metadata and attach it to a query (though the mutations and reference genome are all simulated). 
###Code import pandas as pd metadata_df = pd.read_csv(metadata_file, sep='\t').rename({'Sample Name': 'Sample Name Orig'}, axis='columns') metadata_df.head(2) ###Output _____no_output_____ ###Markdown 3.2. Define benchmark cases ###Code from typing import List import genomics_data_index.api as gdi def benchmark_api_index(name: str, index_path: Path) -> pd.DataFrame: db = gdi.GenomicsDataIndex.connect(index_path) q_no_join = db.samples_query(reference_name='reference', universe='mutations') q_join = db.samples_query(reference_name='reference', universe='mutations').join(metadata_df, sample_names_column='Sample Name Orig') mutations_df = db.mutations_summary(reference_name='reference').sort_values('Count', ascending=False) top_mutations = mutations_df.iloc[[0,1]].index.tolist() if len(top_mutations) != 2: raise Exception(f'Does not exist two mutations for index {index_path}') else: mutation1 = top_mutations[0] mutation2 = top_mutations[1] print(f'mutation1={mutation1}, mutation2={mutation2}') q = q_join.hasa(mutation1) r = q_join.hasa(mutation2) number_samples = db.count_samples() number_features_no_unknown = db.count_mutations(reference_genome='reference', include_unknown=False) number_features_all = db.count_mutations(reference_genome='reference', include_unknown=True) repeat = 10 benchmark_cases = { 'db.samples_query': lambda: db.samples_query(reference_name='reference', universe='mutations'), 'q.join': lambda: q_no_join.join(metadata_df, sample_names_column='Sample Name Orig'), 'q.features_summary': lambda: q_join.features_summary(), 'q.features_comparison': lambda: q_join.features_comparison(sample_categories='outbreak_number', categories_kind='dataframe', kind='mutations', unit='proportion'), 'q.hasa': lambda: q_join.hasa(mutation1), 'q.isa': lambda: q_join.isa("SH13-007"), 'q AND r': lambda: q & r, 'q.toframe': lambda: q_join.toframe(), 'q.summary': lambda: q_join.summary(), 'q.isin (distance)': lambda: q_join.isin("SH13-007", kind='distance', distance=100, units='substitutions'), 'q.isin (mrca)': lambda: q_join.isin(["SH13-007", "SH12-001"], kind='mrca'), } benchmarker = gdi_benchmark.QueryBenchmarkHandler() return benchmarker.benchmark_api(name=name, kind_functions=benchmark_cases, number_samples=number_samples, number_features_no_unknown=number_features_no_unknown, number_features_all=number_features_all, repeat=repeat) ###Output _____no_output_____ ###Markdown 3.3. Benchmark reads index ###Code reads_df = benchmark_api_index(name=f'{case_name} (reads)', index_path=index_reads_path) reads_df.head(5) reads_df.to_csv(output_api_reads_path, sep='\t', index=False) ###Output _____no_output_____ ###Markdown 3.4. Benchmark assemblies index ###Code assemblies_df = benchmark_api_index(name=f'{case_name} (assemblies)', index_path=index_assemblies_path) assemblies_df.head(5) assemblies_df.to_csv(output_api_assemblies_path, sep='\t', index=False) ###Output _____no_output_____
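###Markdown As a convenience for comparing the runs, the cell below reloads the four TSV files written above into a single long-format table, tagging each row with its source. Only the output paths defined earlier in this notebook are assumed; whatever columns the benchmark handler writes are kept unchanged. ###Code
import pandas as pd

benchmark_files = {
    'cli reads': output_cli_reads_path,
    'cli assemblies': output_cli_assemblies_path,
    'api reads': output_api_reads_path,
    'api assemblies': output_api_assemblies_path,
}

# Load each benchmark table and record where it came from.
frames = []
for source, path in benchmark_files.items():
    df = pd.read_csv(path, sep='\t')
    df['source'] = source
    frames.append(df)

all_benchmarks_df = pd.concat(frames, ignore_index=True)
all_benchmarks_df.groupby('source').size()
###Output _____no_output_____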
Arvato Project Workbook pt 3.ipynb
###Markdown Libraries ###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import pickle # Custom cleaning functions from utils import cleaning_functions ###Output _____no_output_____ ###Markdown Part 2: Supervised Learning ModelNow that you've found which parts of the population are more likely to be customers of the mail-order company, it's time to build a prediction model. Each of the rows in the "MAILOUT" data files represents an individual that was targeted for a mailout campaign. Ideally, we should be able to use the demographic information from each individual to decide whether or not it will be worth it to include that person in the campaign.The "MAILOUT" data has been split into two approximately equal parts, each with almost 43 000 data rows. In this part, you can verify your model with the "TRAIN" partition, which includes a column, "RESPONSE", that states whether or not a person became a customer of the company following the campaign. In the next part, you'll need to create predictions on the "TEST" partition, where the "RESPONSE" column has been withheld. Reading and cleaning training set ###Code mailout_train = pd.read_csv('data/Udacity_MAILOUT_052018_TRAIN.csv') ###Output C:\Users\jobqu\anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3155: DtypeWarning: Columns (19,20) have mixed types.Specify dtype option on import or set low_memory=False. has_raised = await self.run_ast_nodes(code_ast.body, cell_name, ###Markdown Applying the cleaning steps from workbook 1 (now in cleaning_functions.py) ###Code training_clean = cleaning_functions.clean_data(mailout_train) ###Output Initial amount of missing values: 2217201 Reading the description of attributes table.... Missing values after including missing codes 2354411 Additional missing values: 137210 Starting the cleaning of attributes and feature engineering... ###Markdown We don't need `LNR` for training, just for the training set. ###Code training_clean.drop(['LNR'], axis = 1, inplace = True) training_clean.head() ###Output _____no_output_____ ###Markdown Scaling the training set. ###Code from sklearn.preprocessing import StandardScaler scaler = StandardScaler() train_scaled = scaler.fit_transform(training_clean.drop(['RESPONSE'], axis = 1)) train_scaled = pd.DataFrame(train_scaled, columns = list(training_clean.columns)[:-1]) train_scaled.head() ###Output _____no_output_____ ###Markdown Training learning modelsWe will try the following models:- SGDClassifier- RandomForestClassifier- XGBoostWe will perform hyperparameter tuning to improve the AUCROCC metric. Setting $X$ and $y$Since we will cross-validate our training batches we will not split our data in training and test set. ###Code #from sklearn.model_selection import train_test_split X = train_scaled.values y = training_clean['RESPONSE'].values #X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42, stratify = y) ###Output _____no_output_____ ###Markdown 1) SGDClassifierThis is a linear model from which we did not expect to have great performance. However, since it is very quick to train we can check that all the workflow is integrated correctly. 
###Code %%time from sklearn.metrics import confusion_matrix, roc_auc_score from sklearn.model_selection import GridSearchCV from sklearn.linear_model import SGDClassifier # Define classifier sgd_clf = SGDClassifier(class_weight = 'balanced', early_stopping = True, loss = 'modified_huber') # Setting hyperparameter grid sgd_hyperparams = {'penalty': ['l2', 'l1', 'elasticnet'], 'alpha': [5e-5, 0.0001, 0.0002]} # Define grid search sgd_cv = GridSearchCV(sgd_clf, sgd_hyperparams, scoring = 'roc_auc', n_jobs = -1, verbose = 1, cv = 3) # Fit training sgd_cv.fit(X, y) print('\nBest parameters:', sgd_cv.best_params_) print('Best score:', sgd_cv.best_score_) ### Selecting best estimator sgd_best = sgd_cv.best_estimator_ #y_pred_prob = sgd_best.predict_proba(X)[:,1] #print('\nrouc_auc_score: ', roc_auc_score(y, y_pred_prob)) # Confusion matrix confusion_matrix(y, sgd_best.predict(X)) ###Output Fitting 3 folds for each of 9 candidates, totalling 27 fits Best parameters: {'alpha': 0.0001, 'penalty': 'l1'} Best score: 0.630051361200492 Wall time: 10.8 s ###Markdown 2) RandomForest Classifier ###Code from sklearn.ensemble import RandomForestClassifier rfc_clf = RandomForestClassifier() # class_weight = 'balanced') rf_hyperparameters = {'n_estimators': [150, 175, 185, 200], #, 250, 300], 'max_depth': [ 3, 4, 5, 6, 7], 'max_features': ['auto', 0.25, 0.33, 0.4], 'min_samples_leaf': [3, 5, 10, 12, 15]} rfc_cv = GridSearchCV(rfc_clf, rf_hyperparameters, scoring = 'roc_auc', n_jobs = 2, verbose = 3, cv = 3) preds = rfc_cv.fit(X, y) print('\nBest parameters:', rfc_cv.best_params_) print('Best score:', rfc_cv.best_score_) rfc_final = rfc_cv.best_estimator_ y_pred= rfc_final.predict_proba(X)[:,1] print('roc_auc_score using metric:', roc_auc_score(y, y_pred)) with open('final_rfc.pkl', 'wb') as f: pickle.dump(rfc_final, f) ###Output _____no_output_____ ###Markdown 3) XGBoost ###Code import xgboost as xgb #!pip install xgboost from sklearn.model_selection import RandomizedSearchCV import scipy.stats as stats from sklearn.utils.fixes import loguniform xgb_clf = xgb.XGBClassifier(n_jobs = -1, objective = 'binary:logistic', eval_metric = 'auc') #scale_pos_weight = 99) xgb_distributions = {'n_estimators':[4, 5, 6, 7, 10, 50, 100, 150, 200], # [5, 10, 50, 100], 'max_depth': [2, 3, 4, 5, 6, 7, 8, 9, 10], 'gamma': loguniform(1e-4, 1e0), 'learning_rate': loguniform(1e-4, 1e0), 'colsample_bytree': stats.uniform(0.1, 1.0),} xgb_cv = RandomizedSearchCV(xgb_clf, param_distributions = xgb_distributions, n_iter = 400, scoring = 'roc_auc', n_jobs = 2, verbose = 3, cv = 3) xgb_cv.fit(X, y) print('\nBest parameters:', xgb_cv.best_params_) print('Best score:', xgb_cv.best_score_) xgb_final = xgb_cv.best_estimator_ y_pred= xgb_final.predict_proba(X)[:,1] print('roc_auc_score using metric:', roc_auc_score(y, y_pred)) with open('final_xgb.pkl', 'wb') as f: pickle.dump(xgb_final, f) ###Output _____no_output_____ ###Markdown 4) AdaBoost ###Code from sklearn.ensemble import AdaBoostClassifier adab_clf = AdaBoostClassifier() adab_hyperparams = {'n_estimators': [5, 10, 50, 100, 200], 'learning_rate': [0.01, 0.05, 0.1, 0.2, 1]} adab_cv = GridSearchCV(adab_clf, adab_hyperparams, scoring = 'roc_auc', n_jobs = 2, verbose = 3, cv = 3) adab_cv.fit(X, y) print('\nBest parameters:', adab_cv.best_params_) print('Best score:', adab_cv.best_score_) adab_final = adab_cv.best_estimator_ y_pred = adab_final.predict_proba(X)[:,1] print('roc_auc_score using metric:', roc_auc_score(y, y_pred)) with open('final_adab.pkl', 'wb') as f: pickle.dump(adab_final, 
f) ###Output _____no_output_____ ###Markdown Feature selection with Boruta ###Code from boruta import BorutaPy # !pip install boruta clf = RandomForestClassifier(n_jobs=-1, max_depth = 6, class_weight = 'balanced') trans = BorutaPy(clf, n_estimators = 'auto', random_state = 42, verbose=1, max_iter = 200) X_filtered = trans.fit_transform(X, y) ###Output Iteration: 1 / 200 Iteration: 2 / 200 Iteration: 3 / 200 Iteration: 4 / 200 Iteration: 5 / 200 Iteration: 6 / 200 Iteration: 7 / 200 Iteration: 8 / 200 Iteration: 9 / 200 Iteration: 10 / 200 Iteration: 11 / 200 Iteration: 12 / 200 Iteration: 13 / 200 Iteration: 14 / 200 Iteration: 15 / 200 Iteration: 16 / 200 Iteration: 17 / 200 Iteration: 18 / 200 Iteration: 19 / 200 Iteration: 20 / 200 Iteration: 21 / 200 Iteration: 22 / 200 Iteration: 23 / 200 Iteration: 24 / 200 Iteration: 25 / 200 Iteration: 26 / 200 Iteration: 27 / 200 Iteration: 28 / 200 Iteration: 29 / 200 Iteration: 30 / 200 Iteration: 31 / 200 Iteration: 32 / 200 Iteration: 33 / 200 Iteration: 34 / 200 BorutaPy finished running. Iteration: 35 / 200 Confirmed: 6 Tentative: 0 Rejected: 206 ###Markdown Relevant features: ###Code rel = pd.DataFrame(X).loc[:, trans.support_] list(training_clean.iloc[:, rel.columns].columns) ###Output _____no_output_____ ###Markdown Using X_filteredWe retrained the last three algorithms with the newly filtered features RandomForest ###Code rfc_bor_clf = RandomForestClassifier() # class_weight = 'balanced') rf_hyperparameters = {'n_estimators': [150, 175, 185, 200], #, 250, 300], 'max_depth': [ 3, 4, 5, 6, 7], 'max_features': ['auto', 0.25, 0.33, 0.4], 'min_samples_leaf': [3, 5, 10, 12, 15]} rfc_bor_cv = GridSearchCV(rfc_bor_clf, rf_hyperparameters, scoring = 'roc_auc', n_jobs = 2, verbose = 3, cv = 3) preds = rfc_bor_cv.fit(X_filtered, y) print('\nBest parameters:', rfc_bor_cv.best_params_) print('Best score:', rfc_bor_cv.best_score_) rfc_bor_final = rfc_bor_cv.best_estimator_ y_pred= rfc_bor_final.predict_proba(X_filtered)[:,1] print('roc_auc_score using metric:', roc_auc_score(y, y_pred)) with open('final_bor_rfc.pkl', 'wb') as f: pickle.dump(rfc_bor_final, f) ###Output _____no_output_____ ###Markdown XGB ###Code xgb_bor_clf = xgb.XGBClassifier(n_jobs = -1, objective = 'binary:logistic', eval_metric = 'auc') #scale_pos_weight = 99) xgb_distributions = {'n_estimators':[4, 5, 6, 7, 10, 50, 100, 150, 200], # [5, 10, 50, 100], 'max_depth': [2, 3, 4, 5, 6, 7, 8, 9, 10], 'gamma': loguniform(1e-4, 1e0), 'learning_rate': loguniform(1e-4, 1e0), 'colsample_bytree': stats.uniform(0.1, 1.0),} xgb_bor_cv = RandomizedSearchCV(xgb_bor_clf, param_distributions = xgb_distributions, n_iter = 400, scoring = 'roc_auc', n_jobs = -1, verbose = 3, cv = 3) xgb_bor_cv.fit(X_filtered, y) print('\nBest parameters:', xgb_bor_cv.best_params_) print('Best score:', xgb_bor_cv.best_score_) xgb_bor_final = xgb_bor_cv.best_estimator_ y_pred= xgb_bor_final.predict_proba(X_filtered)[:,1] print('roc_auc_score using metric:', roc_auc_score(y, y_pred)) with open('final_bor_xgb.pkl', 'wb') as f: pickle.dump(xgb_bor_final, f) ###Output _____no_output_____ ###Markdown AdaBoost ###Code adab_bor_clf = AdaBoostClassifier() adab_hyperparams = {'n_estimators': [5, 10, 50, 100, 200], 'learning_rate': [0.01, 0.05, 0.1, 0.2, 1]} adab_bor_cv = GridSearchCV(adab_bor_clf, adab_hyperparams, scoring = 'roc_auc', n_jobs = 2, verbose = 3, cv = 3) adab_bor_cv.fit(X_filtered, y) print('\nBest parameters:', adab_bor_cv.best_params_) print('Best score:', adab_bor_cv.best_score_) adab_bor_final = 
adab_bor_cv.best_estimator_ y_pred = adab_bor_final.predict_proba(X_filtered)[:,1] print('roc_auc_score using metric:', roc_auc_score(y, y_pred)) with open('final_bor_adab.pkl', 'wb') as f: pickle.dump(adab_bor_final, f) ###Output _____no_output_____ ###Markdown Part 3: Kaggle CompetitionNow that you've created a model to predict which individuals are most likely to respond to a mailout campaign, it's time to test that model in competition through Kaggle. If you click on the link [here](http://www.kaggle.com/t/21e6d45d4c574c7fa2d868f0e8c83140), you'll be taken to the competition page where, if you have a Kaggle account, you can enter. If you're one of the top performers, you may have the chance to be contacted by a hiring manager from Arvato or Bertelsmann for an interview!Your entry to the competition should be a CSV file with two columns. The first column should be a copy of "LNR", which acts as an ID number for each individual in the "TEST" partition. The second column, "RESPONSE", should be some measure of how likely each individual became a customer – this might not be a straightforward probability. As you should have found in Part 2, there is a large output class imbalance, where most individuals did not respond to the mailout. Thus, predicting individual classes and using accuracy does not seem to be an appropriate performance evaluation method. Instead, the competition will be using AUC to evaluate performance. The exact values of the "RESPONSE" column do not matter as much: only that the higher values try to capture as many of the actual customers as possible, early in the ROC curve sweep. ###Code mailout_test = pd.read_csv('data/Udacity_MAILOUT_052018_TEST.csv') test_clean = cleaning_functions.clean_data(mailout_test) test_clean.set_index(['LNR'], inplace = True) # Scaler test_scaled = scaler.transform(test_clean) ###Output C:\Users\jobqu\anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3155: DtypeWarning: Columns (19,20) have mixed types.Specify dtype option on import or set low_memory=False. 
has_raised = await self.run_ast_nodes(code_ast.body, cell_name, ###Markdown RandomForest submission ###Code preds_rfc = rfc_final.predict_proba(test_scaled)[:,1] preds_rfc_df = pd.DataFrame(preds_rfc, index = test_clean.index) preds_rfc_df.reset_index(inplace = True) preds_rfc_df.columns = ['LNR', 'RESPONSE'] print(preds_rfc_df.shape) preds_rfc_df.to_csv('predictions/pred_rfc.csv', index = False) preds_rfc_df.head() ###Output (42833, 2) ###Markdown ![img](img/Submit_2.png) XGB submission ###Code preds_xgb = xgb_final.predict_proba(test_scaled)[:,1] preds_xgb_df = pd.DataFrame(preds_xgb, index = test_clean.index) preds_xgb_df.reset_index(inplace = True) preds_xgb_df.columns = ['LNR', 'RESPONSE'] print(preds_xgb_df.shape) preds_xgb_df.to_csv('predictions/pred_xgb.csv', index = False) preds_xgb_df.head() ###Output (42833, 2) ###Markdown ![img](img/Submit_1.png) AdaBoost submission ###Code preds_adab = adab_final.predict_proba(test_scaled)[:,1] preds_adab_df = pd.DataFrame(preds_adab, index = test_clean.index) preds_adab_df.reset_index(inplace = True) preds_adab_df.columns = ['LNR', 'RESPONSE'] print(preds_adab_df.shape) preds_adab_df.to_csv('predictions/pred_adab.csv', index = False) preds_adab_df.head() ###Output (42833, 2) ###Markdown ![img](img/submit_3.png) Models trained with features selected ###Code test_reduced = trans.transform(test_scaled) ###Output _____no_output_____ ###Markdown Boruta + RandomForest [Winner] ###Code rfc_bor_final preds_bor_rfc = rfc_bor_final.predict_proba(test_reduced)[:,1] preds_bor_rfc_df = pd.DataFrame(preds_bor_rfc, index = test_clean.index) preds_bor_rfc_df.reset_index(inplace = True) preds_bor_rfc_df.columns = ['LNR', 'RESPONSE'] print(preds_bor_rfc_df.shape) preds_bor_rfc_df.to_csv('predictions/pred_bor_rfc.csv', index = False) preds_bor_rfc_df.head() ###Output (42833, 2) ###Markdown ![img](img/submit_4.png) Boruta + XGB ###Code preds_xgb_bor = xgb_bor_final.predict_proba(test_reduced)[:,1] preds_xgb_bor_df = pd.DataFrame(preds_xgb_bor, index = test_clean.index) preds_xgb_bor_df.reset_index(inplace = True) preds_xgb_bor_df.columns = ['LNR', 'RESPONSE'] print(preds_xgb_bor_df.shape) preds_xgb_bor_df.to_csv('predictions/pred_bor_xgb.csv', index = False) preds_xgb_bor_df.head() ###Output (42833, 2) ###Markdown ![img](img/submit_5.png) Boruta + AdaBoost ###Code preds_adab_bor = adab_bor_final.predict_proba(test_reduced)[:,1] preds_adab_bor_df = pd.DataFrame(preds_adab_bor, index = test_clean.index) preds_adab_bor_df.reset_index(inplace = True) preds_adab_bor_df.columns = ['LNR', 'RESPONSE'] print(preds_adab_bor_df.shape) preds_adab_bor_df.to_csv('predictions/pred_bor_adab.csv', index = False) preds_adab_bor_df.head() ###Output (42833, 2)
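###Markdown Since the competition is scored on AUC, a simple blend of the submissions above is worth trying: averaging the predicted probabilities per LNR across the six prediction files. This is only a sketch; whether the blend actually beats the best single model has to be checked on the leaderboard. ###Code
import pandas as pd

submission_files = [
    'predictions/pred_rfc.csv',
    'predictions/pred_xgb.csv',
    'predictions/pred_adab.csv',
    'predictions/pred_bor_rfc.csv',
    'predictions/pred_bor_xgb.csv',
    'predictions/pred_bor_adab.csv',
]

# Align the six submissions on LNR and average the predicted probabilities.
preds = [pd.read_csv(f).set_index('LNR')['RESPONSE'] for f in submission_files]
blend = pd.concat(preds, axis=1).mean(axis=1)

blend_df = blend.rename('RESPONSE').reset_index()
print(blend_df.shape)
blend_df.to_csv('predictions/pred_blend.csv', index=False)
blend_df.head()
###Output _____no_output_____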
tutorials/label-maker-dask.ipynb
###Markdown Label Maker with Dask and Planetary ComputerThis notebook shows how to run [label-maker](https://github.com/developmentseed/label-maker-dask) with [dask](https://dask.org/) using [Planetary Computer](https://planetarycomputer.microsoft.com/). Label Maker is a library for creating machine-learning ready data by pairing satellite images with [OpenStreetMap](https://www.openstreetmap.org/) (OSM) vector data. It fetches data from both sources and then divides them into smaller image chips based on [slippy map conventions](https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames). Environment SetupWe'll add our dependencies and use dask locally for ease of setup. For running a remote cluster, see the setup in [the dask example](../quickstarts/scale-with-dask.ipynb) ###Code !pip install -q label-maker-dask import planetary_computer as pc import pystac from label_maker_dask import LabelMakerJob from dask.distributed import Client client = Client() ###Output _____no_output_____ ###Markdown Finding Source ImageryYou can use any tiled imagery (WMS/TMS) endpoint or Cloud-Optimized GeoTIFF file as the imagery input to `label-maker-dask`. In this case, we follow the [Sentinel 2 L2A Example](..datasets/sentinel-2-l2a/sentinel-2-l2a-example.ipynb) to get an asset URL and sign it with our Planetary Computer SAS token. ###Code item = pystac.read_file( "https://planetarycomputer.microsoft.com/api/stac/v1/collections/sentinel-2-l2a/items/S2A_MSIL2A_20190724T112121_R037_T29SMC_20201005T185645" # noqa: E501 ) asset_href = item.assets["visual"].href signed_href = pc.sign(asset_href) ###Output _____no_output_____ ###Markdown Label-Maker-DaskNow that we have everything setup, we can supply the parameters to define our `label-maker` job:- `zoom`: *int*. The [zoom level](https://wiki.openstreetmap.org/wiki/Zoom_levels) used to create images. This functions as a rough proxy for resolution. Value should be given as an int on the interval `[0, 19]`- `bounds`: *List[float]*. The bounding box to create images from. This should be given in the form: `[xmin, ymin, xmax, ymax]` as longitude and latitude values between `[-180, 180]` and `[-90, 90]`, respectively. Values should use the WGS84 datum, with longitude and latitude units in decimal degrees.- `classes`: *List*. The training classes. Each class is defined as dict object with two required keys: - `name`: *str*. The class name. - `filter`: *List[str]*. A [Mapbox GL Filter](https://www.mapbox.com/mapbox-gl-js/style-specother-filter) to define any vector features matching this class. Filters are applied with the standalone [featureFilter](https://github.com/mapbox/mapbox-gl-js/tree/main/src/style-spec/feature_filterapi) from Mapbox GL JS.- `imagery`: *str*. Details at https://developmentseed.org/label-maker/parameters.htmlparameters- `ml_type`: *str*. One of 'classification', 'object-detection', or 'segmentation'. More details at https://developmentseed.org/label-maker/parameters.htmlparameters- `label_source`: *str*. A template string for a tile server providing OpenStreetMap QA tiles. Planetary Computer hosts a tile server supporting this format at https://qa-tiles-server-dev.ds.io/services/z17/tiles/{z}/{x}/{y}.pbfOnce the job is defined, we can use the `build_job` and `execute_job` methods to fetch our labels and imagery. 
###Code lmj = LabelMakerJob( zoom=15, bounds=[-9.232635498046, 38.70265930723, -9.0966796875, 38.78138720209], classes=[ {"name": "Roads", "filter": ["has", "highway"]}, {"name": "Buildings", "filter": ["has", "building"]}, ], imagery=signed_href, ml_type="segmentation", label_source="https://qa-tiles-server-dev.ds.io/services/z17/tiles/{z}/{x}/{y}.pbf", ) lmj.build_job() # a quick check on the number of image chips/tiles lmj.n_tiles() lmj.execute_job() lmj.results[2] ###Output _____no_output_____
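###Markdown The zoom and bounds passed to LabelMakerJob determine how many slippy-map tiles (and therefore image chips) the job touches, so a rough estimate can be computed up front with the standard OSM tile-numbering formula. The cell below is purely illustrative and uses only that formula; lmj.n_tiles() above remains the authoritative count. ###Code
import math

def deg2num(lon_deg, lat_deg, zoom):
    """Convert a lon/lat pair to slippy-map tile indices at the given zoom."""
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    ytile = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return xtile, ytile

zoom = 15
xmin, ymin, xmax, ymax = [-9.232635498046, 38.70265930723, -9.0966796875, 38.78138720209]

x0, y0 = deg2num(xmin, ymax, zoom)   # north-west corner (tile y grows southwards)
x1, y1 = deg2num(xmax, ymin, zoom)   # south-east corner

n_tiles_estimate = (x1 - x0 + 1) * (y1 - y0 + 1)
print(f"~{n_tiles_estimate} tiles at zoom {zoom}")
###Output _____no_output_____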
04_random_variable_with_scipy/05_Gaussian_normal_distribution.ipynb
###Markdown 4. Scipy로 공부하는 확률 변수 02장. 정규분포와 통계량 분포 1. 가우시안 정규분포 ###Code import scipy as sp import numpy as np import pandas as pd import matplotlib.pylab as plt import seaborn as sns import matplotlib as mpl from scipy import stats mpl.rcParams["font.family"] mpl.matplotlib_fname() import matplotlib.font_manager as fm font_location = "/Library/Fonts/AppleGothic.ttf" font_name = fm.FontProperties(fname=font_location).get_name() print(font_name) mpl.rc('font', family=font_name) ###Output AppleGothic ###Markdown - - - 1. 가우시안 정규분포 (Gaussian normal distribution)- 자연현상에서 나타나는 숫자를 확률 모형으로 모형화 시 가장 많이 사용되는 확률 모형- 평균과 분산 두개의 모수만으로 정의된다. ###Code %%latex A random variable X is said to be normally distributed with mean $\mu$ and variance $\sigma^{2}$ if its probability density function (pdf) is $N(x;\mu,\sigma^{2})=\frac{1}{\sqrt[]{2\pi\sigma^{2}}}exp(-\frac{(x-\mu)^{2}}{2\sigma^{2}})$ %%latex $\frac{1}{\sqrt[]{2\pi\sigma^{2}}}$는 normalize 돕는다. %%latex $exp(-\frac{(x-\mu)^{2}}{2\sigma^{2}})$값은 정규분포의 pdf 함수 결과값이 0과 1사이 값이 출력 되도록 돕는다. %%latex $x=\mu$일 때 확률 밀도가 최대가 된다. $x=\infty$나 $x=-\infty$로 다가갈 수록 확률 밀도가 작아진다. ###Output _____no_output_____ ###Markdown - - - 2. Scipy를 사용한 정규 분포의 시뮬레이션- `sp.stats.norm`, `loc` 평균 설정 `scale`로 표준편차 설정 ###Code # 가우시안 정규분포를 따르는 객체 생성하기 mu = 0 std = 1 rv = sp.stats.norm(mu, std) # 평균 0, 표준편차 1을 따르는 가우시안 정규분포를 이용해서 x값을 설정해 줄 때 pdf 확률 밀도 함수를 계산 할 수 있다. xx = np.linspace(-5,5,100) plt.plot(xx, rv.pdf(xx)) plt.ylabel("P(x)") plt.title("정규분포의 확률 밀도 함수(pdf)") plt.show() # 시뮬레이션을 통해서 직접 샘플링 해보자 np.random.seed(0) x = rv.rvs(100) x sns.distplot(x, kde=True, fit=sp.stats.norm) plt.show() ###Output /usr/local/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6462: UserWarning: The 'normed' kwarg is deprecated, and has been replaced by the 'density' kwarg. warnings.warn("The 'normed' kwarg is deprecated, and has been " ###Markdown - - - 3. QQ플롯 (Quantile-Quantile plot)- Quantiles are points in your data below which a certain proportion of your data fall. These are often referred to as “percentiles”.- 정규분포 여러 연속 확률 분포 중에서도 가장 유용하고 널리 쓰인다.- 어떤 확률 변수의 분포가 정규 분포인지 아닌지를 확인하는 것은 정규분포 검정(nomality test)은 가장 중요한 통계적 분석 중 하나- 정밀한 정규 분포 검정 사용하기 앞서서 시각적으로 간단하게 정규분포를 확인할 수 있는 QQ플롯을 사용가능하다.- - -- 샘플 데이터의 분포와 정규 분포의 형태를 비교할 수 있음- 동일 분위수에 해당하는 정상 분포의 값과 주어진 분포의 값을 한 쌍으로 만들어 스캐터플롯(scatter plot)으로 그린 것- 그리는 방법(샘플의 분위함수, 분위함수가 정규분포의 누적 확률 함수 값이 되는 표준정규분포의 값) - 대상 샘플 데이터를 크기에 따라 줄을 세운다. - 각 샘플 데이터의 분위함수 값, percent % 구한다. (통계 하위부터 따져서 작은 값이 10% 큰값이 90% 이렇게 분위함수 구한다.) - 각 샘플 데이터의 분위함수(%) 값이 정규 분포의 누적 확률 함수 값이 되는 표준 정규 분포의 값, 즉 분위수(quantile)를 구한다. - 샘플 데이터, 정규분포값을 하나의 쌍으로 생각해서 2차원 공간에 점으로 그린다. - 모든 샘플을 이전 4단계를 반복하며 QQ플롯을 그린다.- - -`sp.stats.probplot(샘플값, plot= )`- 원래 플롯 그리는 애 아니라서 인수에 저렇게 plt줘야함- plot 값에 matplotlib.pylab모듈 객체 / Axes클래스 객체 넣어줘야 차트 그린다.[Understanding Q-Q Plots](https://data.library.virginia.edu/understanding-q-q-plots/) ###Code np.random.seed(0) x = np.random.randn(100) # 가우시안 분포로 랜덤 값 뽑아서 그린다. plt.figure(figsize=(7,7)) sp.stats.probplot(x, plot=plt) plt.axis("equal") plt.show() ###Output _____no_output_____ ###Markdown - - -정규 분포를 따르지 않는 데이터 샘플을 QQ플롯으로 그리게 되면 직선이 아닌 휘어진 형태로 나타난다.아래로 휘어지면 short tail, 위로 휘어지면 long tail이라고 부른다. ###Code np.random.seed(1) x = np.random.rand(100) # 균등분포로 x 데이터 만든 후, QQ플롯 그린다. plt.figure(figsize=(7,7)) sp.stats.probplot(x, plot=plt) plt.ylim(-0.5, 1.5) plt.show() ###Output _____no_output_____ ###Markdown - - - 4. 
중심 극한 정리(Central Limit Theorem)- 여러 확률 변수의 합이 정규 분포와 비슷한 분포를 이루는 현상- 실생활의 여러 현상들이 정규분포로 모형화 가능한 이유가 바로 중심 극한 정리 때문 ###Code %%latex $\overline{x}_{n}=\frac{1}{n}(x_{1}+...+x_{n})$ %%latex $z=\frac{\overline{x}_{n}-\mu}{\frac{\sigma}{\sqrt{n}}}$ %%latex $z=\frac{\overline{x}_{n}-\mu}{\frac{\sigma}{\sqrt{n}}} \rightarrow N(x;0,1)$ # 시뮬레이션 np.random.seed(0) xx = np.linspace(-2, 2, 100) plt.figure(figsize=(6, 9)) for i, N in enumerate([1, 2, 20]): X = np.random.rand(5000, N) Xbar = (X.mean(axis=1) - 0.5) * np.sqrt(12 * N) ax = plt.subplot(3, 2, 2 * i + 1) sns.distplot(Xbar, bins=10, kde=False, norm_hist=True) plt.xlim(-5, 5) plt.yticks([]) ax.set_title("N = {0}".format(N)) plt.subplot(3, 2, 2 * i + 2) sp.stats.probplot(Xbar, plot=plt) plt.tight_layout() plt.show() ###Output /usr/local/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6462: UserWarning: The 'normed' kwarg is deprecated, and has been replaced by the 'density' kwarg. warnings.warn("The 'normed' kwarg is deprecated, and has been "
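###Markdown The QQ plots above are a visual check; scipy also provides formal normality tests. Below is a minimal sketch with the Shapiro-Wilk test on the same kinds of samples (normal vs. uniform): under its null hypothesis the sample is drawn from a normal distribution, so a small p-value is evidence against normality. ###Code
import numpy as np
from scipy import stats

np.random.seed(0)
x_norm = np.random.randn(100)   # standard normal sample, should look normal
x_unif = np.random.rand(100)    # uniform sample, should be rejected

for name, sample in [("normal", x_norm), ("uniform", x_unif)]:
    w, p = stats.shapiro(sample)   # Shapiro-Wilk statistic and p-value
    print(f"{name:8s}  W = {w:.4f}  p-value = {p:.4g}")
###Output _____no_output_____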
01 - 1 var/oneVar_script.ipynb
###Markdown One variable plotting using python's [matplotlib](https://matplotlib.org/) referenced from Josh Peters implementation using R [1] Import all necessary files* use ipython magic `%matplotlib inline` to show figures beneath each cell* `SHOW = True` will show details on the dataframes and figures below. Turn to False to hide these outputs* `%qtconsole` is a nice GUI which lets you dabel in python code without a jupyter notebook ###Code import pandas as pd import numpy as np import matplotlib as mpl from matplotlib import rcParams from matplotlib import pylab as plt from matplotlib import colors %matplotlib inline import os SHOW = True # open qtconsole for debugging OPEN_QT = True if OPEN_QT: %qtconsole OPEN_QT = False ###Output _____no_output_____ ###Markdown [2] Read and peruse the data as a dataframe ###Code # read in data filename = "nature25479_f2_formatted.csv" data_df = pd.read_csv(filename, sep='\t') # get column names, because there are weird escape characterts in the csv Species, Individual, Peak_Power = data_df.columns # show data if SHOW: print("\n---dataframe column names---") print(Species + ", " + Individual + ", " + Peak_Power) print("\n---head of dataframe---") print(data_df.head()) print("\n---info of dataframe---") print(data_df.info()) # copy df data_original = data_df.copy() # drop the individual's column, not used data_df = data_original.drop(Individual, axis=1) # group by species on peak power column data_group = data_df.groupby(by=Species) # apply method on group, such as average or std data_count = data_group.count() data_mean = data_group.mean() data_std = data_group.std() data_sem = data_std/data_count # show data if SHOW: print("\n---number of data points per species---") print(data_count) print("\n---mean per species---") print(data_mean) print("\n---standard deviation per species---") print(data_std) print("\n---standard error from the mean per species---") print(data_sem) ###Output ---number of data points per species--- Peak_Power Species Cheetah 37 Impala 30 Lion 50 Zebra 57 ---mean per species--- Peak_Power Species Cheetah 107.779942 Impala 89.901928 Lion 102.444480 Zebra 79.675700 ---standard deviation per species--- Peak_Power Species Cheetah 33.780198 Impala 24.892209 Lion 39.831664 Zebra 34.282241 ---standard error from the mean per species--- Peak_Power Species Cheetah 0.912978 Impala 0.829740 Lion 0.796633 Zebra 0.601443 ###Markdown [3] Set up the figure parameters using [default variables](https://matplotlib.org/users/dflt_style_changes.html)defaults are obtained using matplotlib's rcParams dictionary ###Code # figure size rcParams['figure.figsize'] = (2,4) # figsize height x width # text sizes (follow publication guidelines) SMALL_SIZE = 6 MEDIUM_SIZE = 8 BIGGER_SIZE = 10 rcParams['font.size'] = SMALL_SIZE # controls default text sizes rcParams['axes.titlesize'] = SMALL_SIZE # fontsize of the axes title rcParams['axes.labelsize'] = MEDIUM_SIZE # fontsize of the x and y labels rcParams['xtick.labelsize'] = SMALL_SIZE # fontsize of the tick labels rcParams['ytick.labelsize'] = SMALL_SIZE # fontsize of the tick labels rcParams['legend.fontsize'] = SMALL_SIZE # legend fontsize rcParams['figure.titlesize'] = BIGGER_SIZE # fontsize of the figure title # convert data to array (rather than dataframe) and input to violin plot # this may not be the most optimal way, please add in your suggestions unique_animals = data_df[Species].unique() data_list = [] for animal in unique_animals: # grab data associated per species (aka animal), and only the peak 
power column (aka data) animal_data = data_df[data_df[Species] == animal][Peak_Power] animal_data = animal_data.values data_list.append(animal_data) ###Output _____no_output_____ ###Markdown [4] Iteratively improve on a violin plot ###Code # plot basic skeleton fig, ax = plt.subplots(1,1); ax.violinplot(data_list) # make them horizontal # note, that we want to rotate them from left to right data_list = data_list[::-1] ax.clear() ax.violinplot(data_list, vert=False) fig # remove and add relevant peices ax.clear() parts = ax.violinplot(data_list, showmeans=True, showmedians=False, showextrema=False, vert=False) if SHOW: print("\n---parameters of the violin plot that you can change---") print(parts.keys()) fig # change colors of violins # colors cmap = np.array(["#B9DC3D", "#78CE5C", "#3B568A", "#45397F"]) cmap_rep = np.array(["#4473B0", "#C15436", "#6E348C", "#E4AC43"]) # opacity alpha = 1 for color, patch in zip(cmap, parts['bodies']): # annoying aspect of color changing, changing alpha changes both face and edge color # convert color to rgb and manually insert alpha channel # note that the data type we're dealing with is tuple (immutable) rgb = colors.hex2color(color) rgba = [rgb[0], rgb[1], rgb[2], alpha] # convert edge and face color individually patch.set_facecolor(rgba) patch.set_edgecolor('k') # patch.set_alpha(0.25) # change line color line_collection = parts['cmeans'] line_collection.set_color(cmap) fig # place points # matplotlib's dogma is to not tinker with the data, that being said, there is no jitter command. # so we will make one of our own def jitter(N, index, amplitude=0.25, method='gaussian'): """ @N : number of data points to create new indexes for @index : numerical value. Index to plot, or equivalently the mean of the gaussian distribution @amplitude : noise power @method : gaussian or random returns: 1D array of list with gaussian noise """ new_index = index * np.ones(N) if method == 'gaussian': return new_index + (np.random.normal(size=N) * amplitude) elif method == "random": return new_index + (np.random.uniform(-1, 1, N) * amplitude) else: raise Exception("invalid method. Please choose between gaussian or random") # add in the data using scatter plot s=12 markers = ['o' ,'s', 'D', '^'] # iterate through each data and assign the appropriate attributes for index, data in enumerate(data_list): # add jitter to index N = len(data) amplitude=0.05 new_index = jitter(N, index+1, amplitude=amplitude) # index starts at 1 for violin plot # plot scatter # don't forget we switched the x and y cood with vert=False ax.scatter(data, new_index, c=cmap[index], edgecolor=cmap[index], marker=markers[index], linewidths=0.5, s=s, alpha=0.5, zorder=0) fig # annotations # you'll realize that many things are hardcoded are manually inputed. 
All options are user-specific # add y ticks ax.set_yticks([1, 2, 3, 4]) ax.set_yticklabels(unique_animals[::-1]) # add x label # adding some latex magic with r'$ $' ax.set_xlabel(r'Peak Power (W kg$^{\mathregular{-1}}$)', weight='bold') # add y label ax.set_ylabel('Species', weight='bold', rotation=0, labelpad=30) # remove spines ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) # remove ticks # tick_params is a powerful wrapper for controlling many aspects of ticks and tick labels ax.tick_params(top='off', right='off') # adjusting x limits ax.set_xlim([0, 225]) ax.locator_params(axis='x', nbins=5) # just because I don't like zeros, substitute the first 0'th index with empty string xticks = ax.get_xticklabels() xticks = [""] + xticks[1:] ax.set_xticklabels(xticks) fig # save figure SAVE = True if SAVE: figure_name = "figure" fig.savefig("{}.eps".format(figure_name), transparent=True, format='eps') fig.savefig("{}.png".format(figure_name), transparent=True, format='png') fig.savefig("{}.pdf".format(figure_name), transparent=True, format='pdf') fig.savefig("{}.svg".format(figure_name), transparent=True, format='svg') ###Output _____no_output_____
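###Markdown A quick look at the two noise methods supported by the jitter helper defined above: 'gaussian' draws unbounded normal noise around the category index, while 'random' keeps every point inside a band of plus/minus amplitude around it. Printing a few values makes the difference easy to see. ###Code
import numpy as np

np.random.seed(0)
print(jitter(5, index=1, amplitude=0.05, method='gaussian'))   # normal noise around index 1
print(jitter(5, index=1, amplitude=0.05, method='random'))     # bounded uniform noise around index 1
###Output _____no_output_____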
Model_03-Adam-k=3.ipynb
###Markdown --- ###Code import matplotlib.image as mpimg min1 = np.min(X_holdout) max1 = np.max(X_holdout) diff1 = max1 - min1 my_sum = lambda x: (x - min1)/diff1 X_holdout2 = my_sum(X_holdout) Y_holdout plt.imshow(X_holdout2[0], cmap='gray', interpolation='nearest'); tl = "Actual label : " +str(Y_holdout[9])+ ","+" iceberg_probability : "+str(pred_valid[0]) plt.title(tl) plt.imshow(X_holdout2[2], cmap='gray', interpolation='nearest'); tl = "Actual label : " +str(Y_holdout[14])+ ","+" iceberg_probability : "+str(pred_valid[2]) plt.title(tl) plt.imshow(X_holdout2[7], cmap='gray', interpolation='nearest'); tl = "Actual label : " +str(Y_holdout[28])+ ","+" iceberg_probability : "+str(pred_valid[7]) plt.title(tl) plt.imshow(X_holdout2[15], cmap='gray', interpolation='nearest'); tl = "Actual label : " +str(Y_holdout[73])+ ","+" iceberg_probability : "+str(pred_valid[15]) plt.title(tl) ###Output _____no_output_____
bantSprintEnd.ipynb
###Markdown Sprint report test project with pythonThis project is to see if Hasan can make sprint ends more useful by throwing away excel and using python instead.The sprint analyzed is Daikon(20) ###Code # import dependencies import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline # Load dataset df_bant_sprint20 = pd.read_csv('bant_sprint_20.csv') df_bant_sprint20.head() ###Output _____no_output_____ ###Markdown PlanThe plan now is to provide insightful information about the sprint to the audience. 1. Separate the projects into bant Android and bant iOS2. Identify what tickets were completed and what were not3. Create two graphs that show complete vs incomplete tickets4. Compare to previous velocity Separate Platforms ###Code # Seperate the projects df_bant_sprint20IOS = df_bant_sprint20[df_bant_sprint20['Project'] == 'bant - iOS'] df_bant_sprint20ANDROID = df_bant_sprint20[df_bant_sprint20['Project'] == 'bant - Android'] ###Output _____no_output_____ ###Markdown iOS ###Code # Calculate Total Story Points totalIOSPoints = df_bant_sprint20IOS['Story Points'].sum() totalIOSPoints # Calculate Velocity finishedIOS = df_bant_sprint20IOS[(df_bant_sprint20IOS['Status'] == 'Ready to Merge') | (df_bant_sprint20IOS['Status'] == 'Closed')] finishedIOSPoints = finishedIOS['Story Points'].sum() finishedIOSPoints # Compare Finished and Unfinished Points headings = ['Total iOS Story Points', 'Completed iOS points', 'Completed %'] table = [[totalIOSPoints, finishedIOSPoints, (finishedIOSPoints / totalIOSPoints) * 100]] iOSPerformance = pd.DataFrame(columns=headings, data=table) iOSPerformance # Graph performance plt.bar(x=['Completed iOS Points','Total iOS Points'], height=[finishedIOSPoints, totalIOSPoints], align='center', color=['green','blue']) plt.ylabel('Story points (Days)') plt.title('iOS Performance') plt.show() print('Our iOS velocity this sprint is' , finishedIOSPoints , 'points') print('We have completed' , iOSPerformance['Completed %'].iloc[0].round() , '% of our tickets') ###Output Our iOS velocity this sprint is 30.5 points We have completed 73.0 % of our tickets ###Markdown Android ###Code # Calculate Total Story Points totalAndroidPoints = df_bant_sprint20ANDROID['Story Points'].sum() totalAndroidPoints # Calculate Velocity finishedAndroid = df_bant_sprint20ANDROID[(df_bant_sprint20ANDROID['Status'] == 'Ready to Merge') | (df_bant_sprint20ANDROID['Status'] == 'Closed')] finishedAndroidPoints = finishedAndroid['Story Points'].sum() finishedAndroidPoints # Compare Finished and Unfinished Points headings = ['Total Android Story Points', 'Completed Android points', 'Completed %'] table = [[totalAndroidPoints, finishedAndroidPoints, (finishedAndroidPoints / totalAndroidPoints) * 100]] androidPerformance = pd.DataFrame(columns=headings, data=table) androidPerformance # Graph performance plt.bar(x=['Completed Android Points','Total Android Points'], height=[finishedAndroidPoints, totalAndroidPoints], align='center', color=['green','blue']) plt.ylabel('Story points (Days)') plt.title('Android Performance') plt.show() print('Our Android velocity this sprint is' , finishedAndroidPoints , 'points') print('We have completed' , androidPerformance['Completed %'].iloc[0].round() , '% of our tickets') ###Output Our Android velocity this sprint is 21.0 points We have completed 84.0 % of our tickets
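###Markdown Step 4 of the plan (compare to previous velocity) is not covered above, so here is a minimal sketch. The sprint names are placeholders and the historical values are left as NaN; fill them in from earlier sprint-end reports before reading anything into the chart. ###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Placeholder history: replace the NaNs (and the sprint names, if different)
# with the real velocities from previous sprint-end reports.
velocity_history = pd.DataFrame({
    'Sprint': ['Sprint 17', 'Sprint 18', 'Sprint 19', 'Sprint 20 (Daikon)'],
    'iOS velocity': [np.nan, np.nan, np.nan, finishedIOSPoints],
    'Android velocity': [np.nan, np.nan, np.nan, finishedAndroidPoints],
})

velocity_history.plot(x='Sprint', kind='bar', figsize=(8, 4))
plt.ylabel('Story points (Days)')
plt.title('Velocity per sprint')
plt.show()
###Output _____no_output_____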
kakenhi_fine_tuning.ipynb
###Markdown 科研費概要の分類日本語のデータセットでBERTのモデルをファインチューニングし、研究分野の分類を行います。 2019~2020年開始の基盤Cの課題の「研究開始時の研究の概要」をモデルのファインチューニングに用いている。 オリジナルの課題数: 25720 概要が空白の課題数: 116 空白を除いた課題数: 25604 日本語+英語: 25604 英語    : 370 日本語   : 25234 小区分がブランク: 27 小区分の設定あり: 25207 統合前のデータ数: 25207 統合したデータ数: 26729 トレーニングデータ数: 20046 テストデータ数   : 6683 ライブラリのインストールライブラリTransformers、およびnlpをインストールします。 ###Code !pip install transformers !pip install nlp !pip install datasets !pip install fugashi !pip install ipadic ###Output _____no_output_____ ###Markdown Google ドライブとの連携 以下のコードを実行し、認証コードを使用してGoogle ドライブをマウントします。 ###Code from google.colab import drive drive.mount("/content/drive/") ###Output _____no_output_____ ###Markdown ファインチューニング用のデータを読み込む&書き出す 科研費データの読み込み、整理科研費データベースからダウンロードしたcsvファイルを直接読む 必要なデータを取り出す ###Code import pandas as pd # 科研費データベースからダウンロードした未加工のcsvファイルを指定 # open_original_csv = "KibanC_2021_Original.csv" open_original_csv = "KibanC_2019-2020.csv" # 直近の1年(2021年)を除いた2年間 # open_original_csv = "KibanC_2019-2020.csv" # 2018は「研究開始時の研究の概要」が無い data_path = "/content/drive/My Drive/bert_nlp/section_5/" # csvファイルを開く raw_data = pd.read_csv(data_path + open_original_csv) # dtype="object"必要? # 読み込んだデータをチェック # raw_data.info() # 今後必要な行だけを取り出し、リネーム kadai = raw_data[["研究課題/領域番号", "審査区分", "研究開始時の研究の概要"]] kadai.columns = ["ID", "ShoKubun", "Abst"] # 課題番号の重複を確認。課題番号でソートする。 kadai["ID"].duplicated().any() # kadai = kadai.set_index("ID") # IDをインデックスに設定するコード kadai = kadai.sort_values("ID") # Abstが空欄の課題を削除 print("オリジナルの課題数: %5d" % len(kadai)) print("概要が空白の課題数: %5d" % len(kadai[kadai["Abst"].isna()])) kadai = kadai.dropna(subset=["Abst"]) print("空白を除いた課題数: %5d" % len(kadai)) # Abst中の改行コードを削除 kadai = kadai.replace('\r', '', regex=True) kadai = kadai.replace('\n', '', regex=True) # Abstが英語のみの課題を削除 num_jpen = len(kadai) kadai = kadai[kadai["Abst"].str.contains(r'[ぁ-んァ-ン]')] num_jp = len(kadai) print("日本語+英語: %5d" % num_jpen) print("英語    : %5d" % (num_jpen - num_jp)) print("日本語   : %5d" % num_jp) # kadai.to_csv(data_path + "test1.csv", encoding = "cp932") # 小区分が設定されていない課題を削除(旧分類、特設分野) aaa = len(kadai) kadai = kadai.dropna(subset=["ShoKubun"]) print("小区分がブランク: %5d" % (aaa - len(kadai))) print("小区分の設定あり: %5d" % len(kadai)) # 小区分の文字列の数字部分だけを取り出す kadai["ShoKubun"] = kadai["ShoKubun"].str[3:8] kadai = kadai.astype({"ShoKubun": int}) ###Output _____no_output_____ ###Markdown 整理した科研費データの保存審査区分データを読み込み、小区分番号を参照して結合 トレーニングデータと、テストデータに分けて保存する。 ###Code import pandas as pd from sklearn.model_selection import train_test_split # 科研費の審査区分表データのcsvファイル open_kubun_csv = "KubunTable.csv" data_path = "/content/drive/My Drive/bert_nlp/section_5/" # 審査区分テーブルのロード kubun_table = pd.read_csv(data_path + open_kubun_csv, encoding="cp932") kubun_table = kubun_table[["tabDai", "tabSho"]] # 審査区分表の重複を削除(一つの小区分が2つまたは3つの『中区分』に所属することに由来する) print("重複削除前の項目数: %3d" % len(kubun_table)) kubun_table = kubun_table.drop_duplicates() print("重複削除後の項目数: %3d" % len(kubun_table)) # 大区分への変換 # mergeを用いて、審査区分表のデータと突合 print("統合前のデータ数: %5d" % len(kadai)) kadaiDai = pd.merge(kadai, kubun_table, left_on='ShoKubun', right_on='tabSho') kadaiDai = kadaiDai[["Abst", "tabDai", "ID", "ShoKubun"]] print("統合したデータ数: %5d" % len(kadaiDai)) # 訓練用とテスト用に分割 層化 kadai_train, kadai_test = train_test_split(kadaiDai, shuffle=True, stratify = kadaiDai["tabDai"].tolist()) print("トレーニングデータ数: %5d" % len(kadai_train)) print("テストデータ数   : %5d" % len(kadai_test)) # 大区分&課題番号を基準にソート # 計算的には不要だが、人間用にソートしておく kadai_train = kadai_train.sort_values(["tabDai", "ID"]) kadai_test = kadai_test.sort_values (["tabDai", "ID"]) # ソート用に残していた課題番号(ID)行を削除 
kadai_train = kadai_train.drop(['ID', 'ShoKubun'], axis=1) kadai_test = kadai_test.drop (['ID', 'ShoKubun'], axis=1) # csvとして書き出し kadai_train.to_csv(data_path+"kadai_train.csv", header=False, index=False) kadai_test.to_csv (data_path+"kadai_test.csv", header=False, index=False) ###Output _____no_output_____ ###Markdown ファインチューニングの実施 モデルとTokenizerの読み込み日本語の事前学習済みモデルと、これと紐づいたTokenizerを読み込みます。 ###Code from transformers import BertForSequenceClassification, BertJapaneseTokenizer # sc_model = BertForSequenceClassification.from_pretrained("cl-tohoku/bert-base-japanese-whole-word-masking", num_labels=9) sc_model = BertForSequenceClassification.from_pretrained("cl-tohoku/bert-base-japanese-whole-word-masking", num_labels=11) # 大区分は11 sc_model.cuda() tokenizer = BertJapaneseTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-whole-word-masking") ###Output _____no_output_____ ###Markdown データセットの読み込み保存された科研費のデータを読み込みます。 ###Code from datasets import load_dataset def tokenize(batch): # return tokenizer(batch["text"], padding=True, truncation=True, max_length=128) return tokenizer(batch["text"], padding=True, truncation=True, max_length=512) data_path = "/content/drive/My Drive/bert_nlp/section_5/" train_data = load_dataset("csv", data_files=data_path+"kadai_train.csv", column_names=["text", "label"], split="train") #print(type(train_data)) #print(train_data) #print(train_data[[0,0]]) #zzz train_data = train_data.map(tokenize, batched=True, batch_size=len(train_data)) train_data.set_format("torch", columns=["input_ids", "label"]) test_data = load_dataset("csv", data_files=data_path+"kadai_test.csv", column_names=["text", "label"], split="train") test_data = test_data.map(tokenize, batched=True, batch_size=len(test_data)) test_data.set_format("torch", columns=["input_ids", "label"]) ###Output _____no_output_____ ###Markdown 評価用の関数`sklearn.metrics`を使用し、モデルを評価するための関数を定義します。 ###Code from sklearn.metrics import accuracy_score def compute_metrics(result): labels = result.label_ids preds = result.predictions.argmax(-1) acc = accuracy_score(labels, preds) return { "accuracy": acc, } ###Output _____no_output_____ ###Markdown Trainerの設定Trainerクラス、およびTrainingArgumentsクラスを使用して、訓練を行うTrainerの設定を行います。 https://huggingface.co/transformers/main_classes/trainer.html https://huggingface.co/transformers/main_classes/trainer.htmltrainingarguments ###Code from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir = "./results", num_train_epochs = 2, per_device_train_batch_size = 8, per_device_eval_batch_size = 32, warmup_steps = 500, # 学習係数が0からこのステップ数で上昇 weight_decay = 0.01, # 重みの減衰率 # evaluate_during_training = True, # ここの記述はバージョンによっては必要ありません logging_dir = "./logs", ) trainer = Trainer( model = sc_model, args = training_args, compute_metrics = compute_metrics, train_dataset = train_data, eval_dataset = test_data, ) ###Output _____no_output_____ ###Markdown モデルの訓練設定に基づきファインチューニングを行います。 40分程度かかる。 ###Code trainer.train() ###Output _____no_output_____ ###Markdown モデルの評価Trainerの`evaluate()`メソッドによりモデルを評価します。 2分程度かかる ###Code trainer.evaluate() ###Output _____no_output_____ ###Markdown TensorBoardによる結果の表示TensorBoardを使って、logsフォルダに格納された学習過程を表示します。 ###Code %load_ext tensorboard %tensorboard --logdir logs ###Output _____no_output_____ ###Markdown モデルの保存訓練済みのモデルを保存します。 ###Code data_path = "/content/drive/My Drive/bert_nlp/section_5/" sc_model.save_pretrained(data_path) tokenizer.save_pretrained(data_path) ###Output _____no_output_____ ###Markdown 以下は削除する予定別のファイルに分ける モデルの読み込み保存済みのモデルを読み込みます。 
###Code from transformers import BertForSequenceClassification, BertJapaneseTokenizer data_path = "/content/drive/My Drive/bert_nlp/section_5/" loaded_model = BertForSequenceClassification.from_pretrained(data_path) #loaded_model.cuda() # GPU未対応=========================================== loaded_tokenizer = BertJapaneseTokenizer.from_pretrained(data_path) ###Output _____no_output_____ ###Markdown 研究分野の分類読み込んだモデルを使って研究分野を分類します。 ###Code import glob # ファイルの取得に使用 import os import torch import matplotlib.pyplot as plt import numpy as np Daikubun = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K"] # #@title String fields # sample_text = '' #@param {type:"string"} # sample_text = "磁性ナノ粒子を用いた新しい診断治療技術に関する研究課題である。腫瘍等に集積させた磁性ナノ粒子に体外から比較的低い周波数の交流磁界を印加し、そのときに生じる磁気信号を検出することにより体内の画像診断が可能となる。また、より高い周波数の交流磁界を印加すると磁性ナノ粒子が発熱する。この発熱は癌の温熱治療(ハイパーサーミア)に利用することができる。交流磁界に対する磁性ナノ粒子の磁化応答(ダイナミクス)を解明し、これら診断治療の実用を目指す。" # 1 # sample_text = "国際的な歴史的文字の共通検索実現およびオープンデータ化を目指して、奈文研・東京大学史料編纂所・国文学研究資料館・国立国語研究所・台湾中央研究院歴史語言研究所を中心に、京都大学人文学研究所や中国社会科学院歴史研究所等の研究者の参加も得ながら、国内・国外で各1回、研究会を開催した。そして、歴史的文字画像データ共通検索の基本コンセプトを、「開かれた」「対等」「継続的」として、このコンセプト基づいた「共通検索のためのるフレームワークの構築」作業に着手し、具体的な取り決めを決定した。この内容については、朝日新聞等でも報道された。詳細は『奈良文化財研究所紀要2019』に掲載予定である。また、研究の促進と参加誘発を目指して、奈良文化財研究所が公開している木簡関連の総合データベース「木簡庫」に検索した木簡情報(釈文・メタデータ等)をCSVファイルでダウンロードすることができる機能を追加し、オープンデータとした。これにより、利用者の目的に応じた木簡データのダウンロードおよびそれを活用しての研究や、さらにはそれぞれの関心に基づくデータベースの作成・公開も可能になった。この内容についても、朝日新聞・NHK等で報道された。文字に関する知識の集積作業として、あらたに気づきメモ約13000文字分(のべ)、観察記録シート約15000文字分(のべ)の情報を収集した。従来からの蓄積と合わせると、観察記録シートによる文字情報は木簡庫で公開している文字画像の約25%(テキストで公表している釈文文字数の約10%)に到達した。文字画像の切り出し・公開は1067字分行った。また、観察記録シートの手法を試験的に用い、中国晋代簡牘・韓国新羅木簡・日本平城宮木簡の比較を行い、それぞれの親和性を検討した研究を行い、成果を得た。この他、木簡の調査者との情報共有のためのワークショップを開催した。" # 0 # sample_text = "本研究は、最新科学と考古学の有機的なコラボレーションによって、発掘によらない王陵級巨大古墳の調査研究の方法を確立することにある。巨大古墳は、発掘調査の禁止あるいは制限がかかっている。そのため、重要な歴史資料でありながら、内容が不明なことが多い。そうした現状を最新科学を用いて打破する。具体的にはミュオンという素粒子を用いてレントゲン写真のように古墳内部を透視する。さらに出土埴輪の形態学的分析+化学分析・墳丘のレーザー測量を行う。主たるフィールドを吉備に置き、その調査成果を畿内の王陵と比較し、王陵級巨大古墳の構造分析を行う。" # 0 # sample_text = "本研究は、低エネルギー動作を特徴とする断熱的量子磁束回路(AQFP)を用いた双方向演算が可能な可逆回路の学理を明らかにし、論理回路の熱力学的極限を超える究極の低消費エネルギー集積回路を実現する。これにより回路の消費エネルギーを半導体回路に対して6桁以上低減し、冷却電力を考慮しても十分な優位性を生み出す。本研究は可逆AQFP を中核技術とし、回路設計技術、新規可逆回路、プロセッサアーキテクチャ、磁性体を用いた位相シフトAQFP、3 次元集積回路技術を研究し、超省エネ集積回路の基盤技術を確立する。最終目標として100nW 以下の動作が可能な4b可逆AQFPプロセッサの実現を目指す。" # 1 # sample_text = "ストリゴラクトンは根から分泌されて土壌中でAM菌との共生を促進する根圏シグナル物質である。AM菌共生は植物の陸上進出を可能にし、さらに陸上でのその後の繁栄を支えてきた。種子植物はSL受容体をもっており、SLは個体内で成長を調節する植物ホルモンとしても働き、養分吸収と成長のバランスを制御して植物の成長を最適化する。本研究では、植物がAM菌との共生関係を構築し、それに合わせて成長を調節する仕組みを進化させた道筋を分子レベルで理解することをめざす。本研究により、地球が緑の惑星となりえた理由の一端を明らかにすることができる。" # 2 # sample_text = "発芽は、植物の一生において最も重要なイベントの1つである。様々な環境で初期生育を達成するために、植物は最適なタイミングで発芽する機構を進化させてきた。適切な発芽管理は農業的にも重要であり、特に種子を収穫する穀物植物においては直接収量に関わる。本研究では最近確立されたGWASシステムを用いて、世界的に栽培されている穀物植物であるイネにおいて温度依存的な発芽調節機構を明らかにすることを目的に研究を行う。" # sample_text = "食品中のトランス脂肪酸は、過剰摂取により心疾患のリスクを高めることが疫学調査によって報告され、世界中で注目を集めている。しかしながら、食事によって摂取されたトランス脂肪酸の体内動態は明らかにされておらず、心疾患とトランス脂肪酸の直接的な関連性は不明である。そこで本研究では、心疾患とトランス脂肪酸の因果関係の解明に資する基礎的なデータを蓄積することを目的とし、食品中に多く含まれる炭素数18、二重結合数1の13種類のトランス脂肪酸(trans-18:1)異性体をマウスに投与し、各臓器・組織中のtrans-18:1異性体を各種質量分析計で測定することで、トランス脂肪酸異性体の体内動態を明らかにする。" # sample_text = "本研究では、遅延時間が小さく、100%の回線利用率を達成する理想的なインターネット輻輳(ふくそう)制御手法を実現することを目的とする。具体的には、バッファブロートと呼ばれる、遅延時間が数秒クラスに増大する輻輳問題に着目する。従来のインターネット輻輳制御手法では用いられていないネットワーク計測手法を活用し、数学的理論に基づいて輻輳状態を正確に推定する。その結果を、新たな輻輳制御手法へ応用する。提案手法の有効性を数学的に保証すると共に、コンピュータシミュレーション及び実ネットワーク環境下での実験により実用性の検証を行う。" sample_text = 
"現代の倫理学では、カントの倫理学は「義務論」に分類され、アリストテレス流の「徳倫理学」やベンサムに代表される「功利主義」と対比されるのが一般的である。しかしカント自身は晩年の『道徳の形而上学』において、義務論の枠内で徳論を展開している。本研究は、こうしたカントの徳倫理学の独自性を思想史的に遡って解明するとともに、それが倫理学の理論としてもつ強みについても明らかにする。" # Abst中の改行コードを削除 sample_text = sample_text.replace('\r', '') sample_text = sample_text.replace('\n', '') print(sample_text) max_length = 512 words = loaded_tokenizer.tokenize(sample_text) word_ids = loaded_tokenizer.convert_tokens_to_ids(words) # 単語をインデックスに変換 print(len(word_ids)) word_tensor = torch.tensor([word_ids[:max_length]]) # テンソルに変換 # x = word_tensor.cuda() # GPU対応 x = word_tensor # GPU未対応============================================== y = loaded_model(x) # 予測 y = y[0] pred = y.argmax(-1) # 最大値のインデックス out_put = Daikubun[pred] print("大区分"+out_put) m = torch.nn.Softmax(dim=1) # Softmax関数で確率に変換 y = m(y) yy = y.tolist()[0] yy = list(map(lambda x: int(x*100), yy)) all_result = dict(zip(Daikubun, yy)) print(all_result) # 結果のグラフを表示 plt.bar(Daikubun, yy) ###Output _____no_output_____ ###Markdown 分類精度検証用の科研費データ読み込み ###Code import pandas as pd # 科研費データベースからダウンロードした未加工のcsvファイルを指定 # open_original_csv = "KibanC_2021_Original.csv" open_original_csv = "KibanC_2021_Original.csv" # 直近の1年(2021年) # open_original_csv = "KibanC_2019-2020.csv" # 2018は「研究開始時の研究の概要」が無い data_path = "/content/drive/My Drive/bert_nlp/section_5/" # csvファイルを開く raw_data2 = pd.read_csv(data_path + open_original_csv) # dtype="object"必要? # 読み込んだデータをチェック # raw_data.info() # 今後必要な行だけを取り出し、リネーム kadai2 = raw_data2[["研究課題/領域番号", "審査区分", "研究開始時の研究の概要"]] kadai2.columns = ["ID", "ShoKubun", "Abst"] # 課題番号の重複を確認。課題番号でソートする。 kadai2["ID"].duplicated().any() # kadai2 = kadai2.set_index("ID") # IDをインデックスに設定するコード kadai2 = kadai2.sort_values("ID") # Abstが空欄の課題を削除 print("オリジナルの課題数: %5d" % len(kadai2)) print("概要が空白の課題数: %5d" % len(kadai2[kadai2["Abst"].isna()])) kadai2 = kadai2.dropna(subset=["Abst"]) print("空白を除いた課題数: %5d" % len(kadai2)) # Abst中の改行コードを削除 kadai2 = kadai2.replace('\r', '', regex=True) kadai2 = kadai2.replace('\n', '', regex=True) # Abstが英語のみの課題を削除 num_jpen = len(kadai2) kadai2 = kadai2[kadai2["Abst"].str.contains(r'[ぁ-んァ-ン]')] num_jp = len(kadai2) print("日本語+英語: %5d" % num_jpen) print("英語    : %5d" % (num_jpen - num_jp)) print("日本語   : %5d" % num_jp) # kadai.to_csv(data_path + "test1.csv", encoding = "cp932") # 小区分が設定されていない課題を削除(旧分類、特設分野) aaa = len(kadai2) kadai2 = kadai2.dropna(subset=["ShoKubun"]) print("小区分がブランク: %5d" % (aaa - len(kadai2))) print("小区分の設定あり: %5d" % len(kadai2)) # 小区分の文字列の数字部分だけを取り出す kadai2["ShoKubun"] = kadai2["ShoKubun"].str[3:8] kadai2 = kadai2.astype({"ShoKubun": int}) # 小区分を大区分に変換変換 =========================== #import pandas as pd #from sklearn.model_selection import train_test_split # 科研費の審査区分表データのcsvファイル open_kubun_csv = "KubunTable.csv" data_path = "/content/drive/My Drive/bert_nlp/section_5/" # 審査区分テーブルのロード kubun_table = pd.read_csv(data_path + open_kubun_csv, encoding="cp932") kubun_table = kubun_table[["tabDai", "tabSho"]] # 審査区分表の重複を削除(一つの小区分が2つまたは3つの『中区分』に所属することに由来する) print("重複削除前の項目数: %3d" % len(kubun_table)) kubun_table = kubun_table.drop_duplicates() print("重複削除後の項目数: %3d" % len(kubun_table)) # 大区分への変換 # mergeを用いて、審査区分表のデータと突合 print("統合前のデータ数: %5d" % len(kadai2)) kadaiDai2 = pd.merge(kadai2, kubun_table, left_on='ShoKubun', right_on='tabSho') kadaiDai2 = kadaiDai2[["Abst", "tabDai", "ID", "ShoKubun"]] print("統合したデータ数: %5d" % len(kadaiDai2)) kadaiDai2.info() kadaiDai2["Abst"][1] ###Output _____no_output_____ ###Markdown 分類精度を複数ファイルで確認3時間程度かかるかも ###Code import glob # ファイルの取得に使用 import os import torch import numpy as 
np import pandas as pd from tqdm import tqdm results_binary = 'results' data_path = "/content/drive/My Drive/bert_nlp/section_5/" Daikubun = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K"] max_length = 512 num_data = len(kadaiDai2) print("分類する課題数: %d" % num_data) # num_data = 100 num_category = len(Daikubun) # results = torch.zeros(num_data, num_category) # テンソルで変数を用意 results = np.zeros((num_data, num_category)) # メモリーが足りないと言われるのでテンソルではなくnumpy arrayにしてみた for m in tqdm(range(num_data)): words = loaded_tokenizer.tokenize(kadaiDai2["Abst"][m]) word_ids = loaded_tokenizer.convert_tokens_to_ids(words) # 単語をインデックスに変換 word_tensor = torch.tensor([word_ids[:max_length]]) # テンソルに変換 y = loaded_model(word_tensor) # GPU未対応時の予測 # y = loaded_model(word_tensor.cuda()) # GPU対応時の予測 # results[m,:] = y[0] results[m,:] = y[0].detach().numpy() # テンソルをnumpy arrayに変換 # 変数をとりあえずバイナリで保存 np.save(data_path+results_binary, results) # 計算結果をとりあえずバイナリで保存 print(results.shape) results ###Output _____no_output_____ ###Markdown 分類精度を複数ファイルで確認confusion matrix (混同行列)を作成 「マルチラベリング」で対応する方法もありそう ###Code import numpy as np import torch import matplotlib.pyplot as plt from sklearn.metrics import confusion_matrix show_num = 0 results_binary = "results" data_path = "/content/drive/My Drive/bert_nlp/section_5/" Daikubun = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K"] num_category = len(Daikubun) results_2 = np.load(data_path+results_binary+".npy") #print(type(aaa)) #print(aaa.shape) results_2 = torch.tensor(results_2) # Softmax関数を使うためにテンソルに変換 m = torch.nn.Softmax(dim=1) # Softmax関数で確率に変換 results_2 = m(results_2) results_2 = results_2.numpy() # numpy arrayに戻す kadaiDai2['estimated'] = np.argmax(results_2, axis=1) kadaiDai2 = kadaiDai2.drop_duplicates(subset='ID', keep=False) duplicated_data = kadaiDai2[kadaiDai2.duplicated(subset='ID', keep=False)] unique_id = duplicated_data['ID'].drop_duplicates() # #for kk in unique_id: # dup_set = duplicated_data[duplicated_data['ID'] == kk] # cat_est = dup_set['estimated'].tolist()[0] # cat_real = dup_set['tabDai'].tolist() # # if cat_est in cat_real: # aaa = cat_real.index(cat_est) # print(dup_set[:aaa]) # else: # aaa = cat_est # #print(aaa) cm = confusion_matrix(kadaiDai2['tabDai'], kadaiDai2['estimated']) # 混同行列の取得。(true, predicted)の順番 cm = pd.DataFrame(cm,columns=["pred_" + str(l) for l in Daikubun], index=["act_" + str(l) for l in Daikubun]) # print(cm) #dup_idx_all = kadaiDai2.drop_duplicated(subset='ID',keep=False) #duplicated_data_all = kadaiDai2[dup_idx_all] #duplicated_data_all = duplicated_data_all[['ID', 'tabDai', 'estimated']] #print(duplicated_data_all.shape) #duplicated_data_all = duplicated_data_all.groupby('ID').count() #dup_idx_first = kadaiDai2[kadaiDai2.duplicated(subset='ID', keep='first')] #dup_idx_first = dup_idx_first.index #for kk in dup_idx_first: # print(kk) #duplicated_data.to_csv(data_path+"duplicated_mat.csv") #kubun_estimated = np.argmax(results_2, axis=1) #kubun_real = kadaiDai2["tabDai"] # #kubun_results = pd.DataFrame( # data={ # 'estimated':kubun_estimated, # 'real':kubun_real # } #) # 結果のグラフを表示 plt.bar(Daikubun, results_2[show_num,:]) print(kadaiDai2["Abst"][show_num]) print(kadaiDai2["ShoKubun"][show_num]) print(kadaiDai2['ID'][show_num]) print("実際 大区分" + Daikubun[kadaiDai2["tabDai"][show_num]]) print("推定 大区分" + Daikubun[kadaiDai2["estimated"][show_num]]) print(int(results_2[show_num, kadaiDai2["estimated"][show_num]]*100)) #kadaiDai2 = kadaiDai2[["Abst", "tabDai", "ID", "ShoKubun"]] #out_mat = kubun_results.value_counts(sort=False) 
#kubun_results.aggregate #out_mat.to_csv(data_path+"result_mat.csv") #dup_idx_first cm ###Output _____no_output_____
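###Markdown
Beyond the confusion matrix, an overall accuracy figure and per-class precision/recall make the comparison with the true broad categories easier to read. This is a minimal sketch, assuming the `kadaiDai2` dataframe built above (true category in `tabDai`, predicted category in `estimated`) and the `Daikubun` label list.
###Code
from sklearn.metrics import accuracy_score, classification_report

acc = accuracy_score(kadaiDai2["tabDai"], kadaiDai2["estimated"])
print("Overall accuracy: {:.3f}".format(acc))

# per-class precision / recall / F1, labelled with the broad-category letters A-K
print(classification_report(
    kadaiDai2["tabDai"],
    kadaiDai2["estimated"],
    labels=list(range(len(Daikubun))),
    target_names=Daikubun,
    zero_division=0,
))
###Output
_____no_output_____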
Black Scholes Options Pricing.ipynb
###Markdown ![image.png](attachment:image.png) The Black Scholes model is considered to be one of the best ways of determining fair prices of options. It requires five variables: the strike price of an option, the current stock price, the time to expiration, the risk-free rate, and the volatility. ![image-3.png](attachment:image-3.png) In our model C = call option price N = CDF of the normal distribution St = spot price of an asset K = strike price r = risk-free interest rate t = time to maturity sigma(σ) = volatility of the asset ###Code def d1(St, K, t, r, sigma): return(log(St/K) + (r + sigma**2/2.)*t)/(sigma*sqrt(t)) def d2(St, K, t, r, sigma): return d1(St, K, t, r, sigma) - sigma*sqrt(t) def black_scholes_call(St, K, t, r, sigma): call_premium = St*norm.cdf(d1(St, K, t, r, sigma)) - K*exp(-r*t)*norm.cdf(d2(St, K, t, r, sigma)) return call_premium def black_scholes_put(St, K, t, r, sigma): put_premium = K*exp(-r*t)- St + black_scholes_call(St, K, t, r, sigma) return put_premium stock = 'SPY' expiry = '12-16-2022' strike_price = 470 today = datetime.now() yesterday = today.replace(day=today.day-1) one_year_ago = today.replace(year=today.year-1) df = web.DataReader(stock, 'yahoo', one_year_ago, today) df = df.sort_values(by="Date") df = df.dropna() df = df.assign(close_day_before=df.Close.shift(1)) df['returns'] = ((df.Close - df.close_day_before)/df.close_day_before) sigma = np.sqrt(252) * df['returns'].std() risk_free_rate = (web.DataReader("^TNX", 'yahoo',yesterday, today)['Close'].iloc[-1]) / 100 spot_price = df['Close'].iloc[-1] time = (datetime.strptime(expiry, "%m-%d-%Y") - datetime.utcnow()).days / 365 print('The Call Option Premium is: ', black_scholes_call(spot_price, strike_price, time, risk_free_rate, sigma)) print('The Put Option Premium is: ', black_scholes_put(spot_price, strike_price, time, risk_free_rate, sigma)) ###Output The Call Option Premium is: 26.40751702470334 The Put Option Premium is: 20.87534073010164 ###Markdown Implied Volatility It is defined as the expected future volatility of the stock over the life of the option. It is directly influenced by the supply and demand of the underlying option and the market’s expectation of the stock price’s direction. It could be calculated by solving the Black Scholes equation backwards for the volatility starting with the option trading price. 
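###Markdown
One way to "solve backwards" for the volatility is to hand the problem to a generic root-finder rather than scanning candidate values. The sketch below is an alternative illustration (the helper name `implied_vol_call` is made up); it reuses the `black_scholes_call` function defined above and assumes the bracket [1e-4, 5.0] straddles the solution.
###Code
from scipy.optimize import brentq

def implied_vol_call(market_price, St, K, t, r, low=1e-4, high=5.0):
    # find sigma such that the Black-Scholes call price equals the observed market price
    objective = lambda sig: black_scholes_call(St, K, t, r, sig) - market_price
    return brentq(objective, low, high)
###Output
_____no_output_____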
![image.png](attachment:image.png) ###Code def call_implied_volatility(Price, St, K, t, r): sigma = 0.001 while sigma < 1: Price_implied = St * \ norm.cdf(d1(St, K, t, r, sigma))-K*exp(-r*t) * \ norm.cdf(d2(St, K, t, r, sigma)) if Price-(Price_implied) < 0.001: return sigma sigma += 0.001 return "Not Found" def put_implied_volatility(Price, St, K, t, r): sigma = 0.001 while sigma < 1: Price_implied = K*exp(-r*t) - St + black_scholes_call(St, K, t, r, sigma) if Price-(Price_implied) < 0.001: return sigma sigma += 0.001 return "Not Found" print("Implied Volatility for call option: " + str(100 * round(call_implied_volatility(black_scholes_call(spot_price, strike_price, time, risk_free_rate, sigma,), spot_price, strike_price, time, risk_free_rate,),2)) + " %") print("Implied Volatility for put option: " + str(100 * round(put_implied_volatility(black_scholes_call(spot_price, strike_price, time, risk_free_rate, sigma,), spot_price, strike_price, time, risk_free_rate,),2)) + " %") ###Output Implied Volatility for call option: 13.0 % Implied Volatility for put option: 16.0 % ###Markdown Option Greeks Delta: the sensitivity of an option’s price changes relative to the changes in the underlying asset’s price. Gamma: the delta’s change relative to the changes in the price of the underlying asset. Vega: the sensitivity of an option price relative to the volatility of the underlying asset. Theta: the sensitivity of the option price relative to the option’s time to maturity. Rho: the sensitivity of the option price relative to interest rates. ![image.png](attachment:image.png) ###Code def gamma(St, K, t, r, sigma): return norm.pdf(d1(St, K, t, r, sigma))/(St*sigma*sqrt(t)) def vega(St, K, t, r, sigma): return 0.01*(St*norm.pdf(d1(St, K, t, r, sigma))*sqrt(t)) def call_delta(St, K, t, r, sigma): return norm.cdf(d1(St, K, t, r, sigma)) def call_theta(St, K, t, r, sigma): return 0.01*(-(St*norm.pdf(d1(St, K, t, r, sigma))*sigma)/(2*sqrt(t)) - r*K*exp(-r*t)*norm.cdf(d2(St, K, t, r, sigma))) def call_rho(St, K, t, r, sigma): return 0.01*(K*t*exp(-r*t)*norm.cdf(d2(St, K, t, r, sigma))) def put_delta(St, K, t, r, sigma): return -norm.cdf(-d1(St, K, t, r, sigma)) def put_theta(St, K, t, r, sigma): return 0.01*(-(St*norm.pdf(d1(St, K, t, r, sigma))*sigma)/(2*sqrt(t)) + r*K*exp(-r*t)*norm.cdf(-d2(St, K, t, r, sigma))) def put_rho(St, K, t, r, sigma): return 0.01*(-K*t*exp(-r*t)*norm.cdf(-d2(St, K, t, r, sigma))) ###Output _____no_output_____
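###Markdown
As a quick sanity check, the Greeks defined above can be evaluated for the same SPY option priced earlier. This sketch simply reuses the `spot_price`, `strike_price`, `time`, `risk_free_rate` and `sigma` values computed above.
###Code
inputs = (spot_price, strike_price, time, risk_free_rate, sigma)

print("Call delta: {:.4f}".format(call_delta(*inputs)))
print("Put delta:  {:.4f}".format(put_delta(*inputs)))
print("Gamma:      {:.6f}".format(gamma(*inputs)))
print("Vega:       {:.4f}".format(vega(*inputs)))
print("Call theta: {:.4f}".format(call_theta(*inputs)))
print("Put theta:  {:.4f}".format(put_theta(*inputs)))
print("Call rho:   {:.4f}".format(call_rho(*inputs)))
print("Put rho:    {:.4f}".format(put_rho(*inputs)))
###Output
_____no_output_____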
trials/Session3_Pytorch101_ver4_2Methods_Conv_plus_FC_run.ipynb
###Markdown 1. Problem StatementWrite a neural network that can:1. take 2 inputs: - an image from the MNIST dataset (say 5), and - a random number between 0 and 9, (say 7)2. and gives two outputs: - the "number" that was represented by the MNIST image (predict 5), and - the "sum" of this number with the random number and the input image to the network (predict 5 + 7 = 12)3. you can mix fully connected layers and convolution layers4. you can use one-hot encoding to represent the random number input as well as the "summed" output. a. Random number (7) can be represented as 0 0 0 0 0 0 0 1 0 0 b. Sum (13) can be represented as: 1. 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 c. 0b1101 (remember that 4 digits in binary can at max represent 15, so we may need to go for 5 digits. i.e. 10010 2. Importing required libraries & Checking GPU ###Code import argparse import torch import torch.nn as nn import torch.optim as optim import torchvision import torch.nn.functional as F from torchvision import datasets, transforms from torch.autograd import Variable import matplotlib.pyplot as plt from torch.optim.lr_scheduler import StepLR print(torch.cuda.is_available()) # Checks if GPU is available print(torch.cuda.get_device_name(0)) # Name of GPU print(torch.cuda.device_count()) ###Output True Tesla K80 1 ###Markdown 3. Importing MNIST dataset from pytorch ###Code transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ]) mnist_train = datasets.MNIST('../data',train=True,download=True) # Train dataset mnist_test = datasets.MNIST('./data',train=False,download=True) # Test dataset ###Output Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ../data/MNIST/raw/train-images-idx3-ubyte.gz ###Markdown 3. Plotting few samples of the downloaded data ###Code figure = plt.figure(figsize=(8, 8)) cols, rows = 10, 10 for i in range(1, cols * rows + 1): sample_idx = torch.randint(len(mnist_train), size=(1,)).item() img, label = mnist_train[sample_idx] figure.add_subplot(rows, cols, i) plt.title(label) plt.axis("off") plt.imshow(img, cmap="gray") figure.tight_layout() plt.show() #dir(mnist_train) ###Output _____no_output_____ ###Markdown 4. Checking the size of the image, label and image information ###Code print(f'Number of examples in training dataset :{len(mnist_train)}') print(f'Shape of the training dataset - images : {mnist_train.data.shape}') print(f'Labels in the training dataset : {mnist_train.targets}') ###Output Number of examples in training dataset :60000 Shape of the training dataset - images : torch.Size([60000, 28, 28]) Labels in the training dataset : tensor([5, 0, 4, ..., 5, 6, 8]) ###Markdown User Input: User can choose any one of the methods defined below 1. method = 'conv_plus_fc' 2. method = 'fc' ###Code method = 'conv_plus_fc' #'fc' # ###Output _____no_output_____ ###Markdown 5. 
Defining Custom Dataset Class ###Code from torch.utils.data import Dataset from random import randrange # Dataset is there to be able to interact with DataLoader class MyDataset(Dataset): def __init__(self, inpDataset, transform, method = method): self.inpDataset = inpDataset self.transform = transform self.method = method def __getitem__(self, index): randomNumber = randrange(10) sample_image, label = self.inpDataset[index] if self.transform: sample_image = self.transform(sample_image) if self.method == 'conv_plus_fc': sample = (sample_image,F.one_hot(torch.tensor(randomNumber),num_classes=10), label,label+randomNumber) return sample elif self.method == 'fc': final_sample = torch.cat((sample_image.reshape(-1),F.one_hot(torch.tensor(randomNumber),num_classes=10))) sample = (final_sample,F.one_hot(torch.tensor(randomNumber),num_classes=10),label,label+randomNumber) return sample def __len__(self): return len(self.inpDataset) myData_train = MyDataset(mnist_train,transform,method = method) myData_test = MyDataset(mnist_test,transform,method = method) image,randomNumber, label1, label2 = next(iter(myData_train)) image.shape,randomNumber, label1, label2 ###Output _____no_output_____ ###Markdown 6. Creating DataLoader ###Code use_cuda = torch.cuda.is_available() device = torch.device("cuda") if use_cuda else torch.device("cpu") train_kwargs = {'batch_size': 1000} test_kwargs = {'batch_size': 1000} if use_cuda: cuda_kwargs = {'num_workers': 1, 'pin_memory': True, 'shuffle': True} train_kwargs.update(cuda_kwargs) test_kwargs.update(cuda_kwargs) train_loader = torch.utils.data.DataLoader(myData_train,**train_kwargs) test_loader = torch.utils.data.DataLoader(myData_test, **test_kwargs) # train_loader = torch.utils.data.DataLoader(mnist_train,**train_kwargs) # test_loader = torch.utils.data.DataLoader(mnist_test, **test_kwargs) for batch_idx, (data,randomNumber,target,target1) in enumerate(train_loader): print(f'data shape : {data.shape}') print(f'random number : {randomNumber.shape}') print(f'target : {target.shape}') print(f'target1 : {target1.shape}') break ###Output data shape : torch.Size([1000, 1, 28, 28]) random number : torch.Size([1000, 10]) target : torch.Size([1000]) target1 : torch.Size([1000]) ###Markdown 7. 
Defining Network : Method1 - Conv + FC Netwotk ###Code class Net1(nn.Module): def __init__(self): super(Net1, self).__init__() self.conv1 = nn.Conv2d(1, 32, 3, 1) self.conv2 = nn.Conv2d(32, 64, 3, 1) self.dropout1 = nn.Dropout(0.25) self.dropout2 = nn.Dropout(0.5) self.fc1 = nn.Linear(9226, 128) self.fc2 = nn.Linear(128, 10) self.fc3 = nn.Linear(128, 19) def forward(self, x,y): x = self.conv1(x) x = F.relu(x) x = self.conv2(x) x = F.relu(x) x = F.max_pool2d(x, 2) x = self.dropout1(x) x = torch.flatten(x, 1) x = torch.cat((x, y), 1) x = self.fc1(x) x = F.relu(x) x = self.dropout2(x) x1 = self.fc2(x) x2 = self.fc3(x) output1 = F.log_softmax(x1, dim=1) output2 = F.log_softmax(x2, dim=1) return output1, output2 model1 = Net1().to(device) model1 ###Output _____no_output_____ ###Markdown Defining Network: Method2 - Completely Fully Connected Network ###Code class Net2(nn.Module): def __init__(self): super(Net2,self).__init__() self.fc1 = nn.Linear(794, 512) self.fc2 = nn.Linear(512, 256) self.fc3 = nn.Linear(256,64) self.dropout1 = nn.Dropout(0.25) self.out1 = nn.Linear(64,10) self.out2 = nn.Linear(64,19) def forward(self, x, y): x = self.fc1(x) x = F.relu(x) x = self.fc2(x) x = F.relu(x) x = self.fc3(x) x = F.relu(x) x = self.dropout1(x) x1 = self.out1(x) x2 = self.out2(x) output1 = F.log_softmax(x1, dim=1) output2 = F.log_softmax(x2, dim=1) return output1,output2 model2 = Net2().to(device) model2 ###Output _____no_output_____ ###Markdown 8. Training Network ###Code ###################### Chosing the model based on the method ############################## model = model1 if method == 'conv_plus_fc' else model2 ########################################################################################### def train(model, device, train_loader, optimizer, epoch): #print(f'which model is used : {model}') model.train() for batch_idx, (data,randomNumber,target,target1) in enumerate(train_loader): data,randomNumber,target,target1 = data.to(device), randomNumber.to(device), target.to(device), target1.to(device) optimizer.zero_grad() #print(f'which model is used : {model}') output, output1 = model(data,randomNumber) loss = F.nll_loss(output, target) + F.nll_loss(output1, target1) # * 2 loss.backward() optimizer.step() log_interval = 10 if batch_idx % log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.item())) def test(model, device, test_loader): #print(f'which model is used : {model}') model.eval() test_loss = 0 correct = 0 correct1 = 0 with torch.no_grad(): for data,randomNumber,target,target1 in test_loader: data,randomNumber,target,target1 = data.to(device), randomNumber.to(device), target.to(device), target1.to(device) output, output1 = model(data,randomNumber) test_loss += F.nll_loss(output, target, reduction='sum').item() + F.nll_loss(output1, target1, reduction='sum').item() # * 2 # sum up batch loss pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability pred1 = output1.argmax(dim=1, keepdim=True) # get the index of the max log-probability correct += pred.eq(target.view_as(pred)).sum().item() correct1 += pred1.eq(target1.view_as(pred1)).sum().item() test_loss /= len(test_loader.dataset) print('\nTest set: Average loss: {:.4f}, Image Accuracy: {}/{} ({:.0f}%), Sum Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset), correct1, len(test_loader.dataset), 100. 
* correct1 / len(test_loader.dataset))) optimizer = optim.Adam(model.parameters(), lr= 0.001) #optim.SGD(model.parameters(), lr=0.1, momentum=0.9) epochs = 20 # scheduler = StepLR(optimizer, step_size=1, gamma=0.7) for epoch in range(1, epochs + 1): train(model, device, train_loader, optimizer, epoch) test(model, device, test_loader) # scheduler.step() ###Output _____no_output_____
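###Markdown
After training it is worth spot-checking a few predictions directly. The sketch below pulls one batch from `test_loader`, runs it through the trained model, and prints the predicted digit and predicted sum next to the labels for the first few samples; it reuses the `model`, `device` and `test_loader` objects defined above.
###Code
model.eval()
with torch.no_grad():
    data, rand_onehot, target_digit, target_sum = next(iter(test_loader))
    out_digit, out_sum = model(data.to(device), rand_onehot.to(device))
    pred_digit = out_digit.argmax(dim=1).cpu()
    pred_sum = out_sum.argmax(dim=1).cpu()

for i in range(5):
    print(f"digit: true={target_digit[i].item()} pred={pred_digit[i].item()} | "
          f"sum: true={target_sum[i].item()} pred={pred_sum[i].item()}")
###Output
_____no_output_____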
imooc/PCA/.ipynb_checkpoints/MNIST-checkpoint.ipynb
###Markdown Using PCA for dimensionality reduction ###Code from sklearn.decomposition import PCA from sklearn.neighbors import KNeighborsClassifier pca = PCA(0.9) pca.fit(X_train) X_train_reduction = pca.transform(X_train) X_test_reduction = pca.transform(X_test) knn_clf2 = KNeighborsClassifier() knn_clf2.fit(X_train_reduction, y_train) knn_clf2.score(X_test_reduction, y_test) ###Output _____no_output_____
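###Markdown
PCA(0.9) keeps just enough principal components to explain 90% of the variance, so it is worth checking how many components that turned out to be. A minimal sketch, assuming the fitted `pca` object from the cell above.
###Code
print("Components kept:", pca.n_components_)
print("Explained variance captured: {:.3f}".format(pca.explained_variance_ratio_.sum()))
###Output
_____no_output_____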
book/content/using-scipy.ipynb
###Markdown Using NumPy and SciPy modulesIn addition to using Cantera and Pint to help solve thermodynamics problems, we will need to use some additional packages in the scientific Python ecosystem to make plots, solve systems of equations, integrate ordinary differential equations, and more. ```{margin}The [*SciPy Lecture Notes*](https://scipy-lectures.org) are excellent, detailed resources on all these topics, and Python programming in general {cite}`scipylecture`.``` The examples contained in this electronic book will integrate these techniques as needed, but this notebook contains some specific examples. Index1. [Plotting](plotting)2. [Solving systems of equations](solving-systems-of-equations)3. [Integrating ODE systems](integrating-ode-systems)4. [Optimization](optimization)5. [Differentiation](differentiation) PlottingWe can use [Matplotlib](https://matplotlib.org) to produce nice plots of our results. If you used Anaconda to set up your computing environment, you likely already have Matplotlib installed; if not, see their [installation instructions](https://matplotlib.org/users/installing.html).Matplotlib provides an interface that is very similar to what you might already know from Matlab: `pyplot`. You can import this in Python files or a Jupyter notebook with the standard abbreviation `plt`: ###Code # this line makes figures interactive in Jupyter notebooks %matplotlib inline from matplotlib import pyplot as plt # these lines are only for helping improve the display import matplotlib_inline.backend_inline matplotlib_inline.backend_inline.set_matplotlib_formats('pdf', 'png') plt.rcParams['figure.dpi']= 150 plt.rcParams['savefig.dpi'] = 150 ###Output _____no_output_____ ###Markdown For example, let's generate some values of an independent variable $x$ linearly spaced between 0 and 10 (using the NumPy function [`linspace()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html)), and then plot the function $y(x) = \sin(x)$. We can also add labels to the axes, a legend, and a helpful grid. ###Code import numpy as np x = np.linspace(0, 10, num=50, endpoint=True) y = np.sin(x) plt.plot(x, y, label='y(x)') plt.xlabel('x axis') plt.ylabel('y axis') plt.legend() plt.grid(True) plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown We can also plot multiple data series in a single figure: ###Code plt.plot(x, y, label='sin(x)') plt.plot(x, np.cos(x), label='cos(x)') plt.xlabel('x axis') plt.ylabel('y axis') plt.grid(True) plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Or, we can use subplots to plot multiple axes in the same overall figure: ###Code # 2 rows, 1 column fig, axes = plt.subplots(2, 1) axes[0].plot(x, y, label='sin(x)') axes[0].set_ylabel('sin(x)') axes[0].grid(True) axes[1].plot(x, np.cos(x), label='cos(x)') axes[1].set_xlabel('x axis') axes[1].set_ylabel('cos(x)') axes[1].grid(True) plt.show() ###Output _____no_output_____ ###Markdown Solving systems of equationsFrequently we will encounter a system of one or more equations that involves an equal number of unknowns. 
If this is a linear system of equations, we can use linear algebra and the NumPy [`linalg.solve()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.solve.html) function, but more often in thermodynamics we encounter complex and/or nonlinear systems.In cases like this, we will need to set up our problems to find the roots, or zeroes, of the function(s); in other words, given a function $ f(x) $, finding the root means to find the value of $x$ such that $ f(x) = 0 $. If we are dealing with a system of equations and the same number of unknowns, then these would be vectors: $\mathbf{f}(\mathbf{x}) = 0$.(You might be wondering what to do about equations that don't equal zero... for example, if you have something like $f(x) = g(x)$. In this case, you just need to manipulate the equation to be in the form $f(x) - g(x) = 0$.)The [SciPy optimization module](https://docs.scipy.org/doc/scipy/reference/optimize.html) provides functions to find roots of equations; for scalar equations, we can use [`root_scalar()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.root_scalar.htmlscipy.optimize.root_scalar), and for vector equations, we can use [`root()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.root.html). Scalar equations Let's first look at an example of a scalar function: one equation, one unknown.Find the root of this equation:\begin{equation}\cos(x) = x^3\end{equation}We need to create a Python function that returns $f(x) = 0$, so that the function returns zero when the input value of $x$ is the (correct) root. Then, we can use the `root_scalar` function with some initial guesses. ###Code import numpy as np from scipy import optimize def func(x): return np.cos(x) - x**3 sol = optimize.root_scalar(func, x0=1.0, x1=2.0) print(f'Root: x ={sol.root: .3f}') print(f'Function evaluated at root: {func(sol.root)}') ###Output Root: x = 0.865 Function evaluated at root: -2.220446049250313e-16 ###Markdown Systems of equations / vector functionsWe can also solve systems of equations in a similar fashion using the SciPy [`root()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.root.html) functino, where we find the roots $\mathbf{x}$ that satisfy $\mathbf{f} (\mathbf{x}) = 0$.For example, let's try to find the values of $x$ and $y$ that satisfy these equations:$$x \ln (x) = y^3 \\\sqrt{x} = \frac{1}{y}$$We have two equations and two unknowns, so we should be able to find the roots.To solve, we create a function that evaluates these equations when made equal to zero, or$$x \ln (x) - y^3 = 0 \\\sqrt{x} - \frac{1}{y} = 0$$then we call `root` specifying this function and two initial guesses for $x$ and $y$: ###Code import numpy as np from scipy import optimize def system(vars): x = vars[0] y = vars[1] return [ x*np.log(x) - y**3, np.sqrt(x) - (1/y) ] sol = optimize.root(system, [1.0, 1.0]) x = sol.x[0] y = sol.x[1] print(f'Roots: x = {x: .3f}, y = {y: .3f}') ###Output Roots: x = 1.467, y = 0.826 ###Markdown Integrating ODE systemsIn some cases, we encounter problems that require integrating one or more ordinary different equations in time. Depending on the form of the problem, we may need to integrate a function between two points (definite integral), or we may have a system of ordinary differential equations. 
Numerical integral of samplesIn some cases we have a set of $(x,y)$ data that we want to integrate numerically.We can do this using the NumPy [`trapz()` function](https://docs.scipy.org/doc/numpy/reference/generated/numpy.trapz.html), which implements the composite trapezoidal rule.For example, let's consider a situation where a reciprocating compressor is being used to compress ammonia vapor during a refrigeration cycle. We have some experimental measurements of the pressure-volume data during the compression stroke (see table), and we want to determine the work done on the ammonia by the piston.| Pressure (psi) | Volume (in^3) ||----------------|---------------|| 65.1 | 80.0 || 80.5 | 67.2 || 93.2 | 60.1 || 110 | 52.5 || 134 | 44.8 || 161 | 37.6 || 190 | 32.5 |To find the work done by the piston to the ammonia, we can integrate pressure with respect to volume:\begin{equation}W_{\text{in}} = -\int_{V_1}^{V_2} P \, dV\end{equation} ###Code import numpy as np from pint import UnitRegistry ureg = UnitRegistry() Q_ = ureg.Quantity pressure = Q_([65.1, 80.5, 93.2, 110, 134, 161, 190], 'psi') volume = Q_([80.0, 67.2, 60.1, 52.5, 44.8, 37.6, 32.5], 'in^3') # convert to SI units pressure.ito('Pa') volume.ito('m^3') work = -Q_( np.trapz(pressure.magnitude, volume.magnitude), pressure.units * volume.units ) print(f'Work done on fluid: {work.to("J"): .2f}') ###Output Work done on fluid: 589.45 joule ###Markdown Numerical integral of expressionWe can also numerically integrate expressions/functions using the SciPy [`quad()` function](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.html), also in the `integrate` module.Let's build on the previous example: first, let's fit the compressor data to the polytropic form $P V^n = c$, where $c$ and $n$ are constants, then we can integrate the resulting function to calculate work.To fit the data, we need to rearrange the equation to the form $y = f(x)$:\begin{equation}P = c V^{-n}\end{equation}We'll use the [`scipy.optimize.curve_fit()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html) function for that. ###Code import numpy as np from scipy.optimize import curve_fit from scipy.integrate import quad from pint import UnitRegistry ureg = UnitRegistry() Q_ = ureg.Quantity pressure = Q_([65.1, 80.5, 93.2, 110, 134, 161, 190], 'psi') volume = Q_([80.0, 67.2, 60.1, 52.5, 44.8, 37.6, 32.5], 'in^3') # convert to SI units pressure.ito('Pa') volume.ito('m^3') def fit(V, n, c): '''Evaluate P = c * V**(-n). n and c will be found by the curve_fit functino. 
''' return c * np.power(V, -n) # this function will automatically fit the unknown constants params, cov = curve_fit(fit, volume.magnitude, pressure.magnitude) print(f'Parameters: n={params[0]: 5.3f}, c={params[1]: 5.3f}') plt.plot(volume.magnitude, pressure.magnitude, 'o', label='Data') x = np.linspace(volume[0].magnitude, volume[-1].magnitude, 100) # plot data and fit plt.plot(x, fit(x, *params), 'r-', label=f'fit: n={params[0]: 5.3f}, c={params[1]: 5.3f}' ) plt.xlabel('Volume (m^3)') plt.ylabel('Pressure (Pa)') plt.legend() plt.tight_layout() plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0)) plt.show() ###Output _____no_output_____ ###Markdown That looks like a great fit to the data, so now let's evaluate the work by integrating: ###Code y, err = quad( fit, volume[0].magnitude, volume[-1].magnitude, args=(params[0], params[1]) ) work = -Q_(y, pressure.units * volume.units) print(f'Work done on fluid: {work.to("J"): .2f}') ###Output Work done on fluid: 586.26 joule ###Markdown Initial value problemsIn other words, we may have a system like\begin{equation}\frac{d \mathbf{y}}{dt} = \mathbf{f} (t, \mathbf{y})\end{equation}where $\mathbf{y}$ is the vector of state variables, for which we should have the initial values ($\mathbf{y}(t=0)$). In this case, we can use the SciPy function [`solve_ivp()` function](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html), part of the `integrate` module.For example, let's consider a problem where we want to find the volume of water, volume of air, and pressure in a tank as a function of time, as the tank releases water to an environment. The volumetric flow rate through the tank's valve is based on its instantaneous pressure:$$\dot{V}_f = \frac{P - P_{\text{atm}}}{R_v} \;,$$where $P_{\text{atm}}$ is atmospheric pressure and $R_v = 10$ psi/gpm is the valve resistance parameter. The tank has volume 50 gal, and is initially filled with water (volume fraction $f$ = 0.8); the air in the tank is initially at 100 psi. 
We will treat the air as an ideal gas, and the water as an incompressible substance.We have the initial air pressure, and we can calculate the initial volumes of water and air:$$V_{f,0} = f \, V \\V_{g,0} = (1 - f) V \\$$We can perform a mass balance on the water in the tank to obtain a rate equation for the volume of water:$$0 = \dot{m}_f + \frac{d m_f}{dt} = \dot{V}_f \rho_f + \rho_f \frac{d V_f}{dt} \\0 = \dot{V}_f + \frac{dV_f}{dt} \\\therefore \frac{dV_f}{dt} = -\frac{(P - P_{\text{atm}})}{R_v}$$Since the overall volume of the tank is constant, we can obtain the rate of change of the volume of air:$$V = V_g + V_f \\\frac{dV_g}{dt} + \frac{dV_f}{dt} = 0 \\\therefore \frac{dV_g}{dt} = -\frac{dV_f}{dt}$$Finally, a mass balance on the air in the tank allows us to find the rate of change of pressure:$$0 = \frac{dm_a}{dt} \\0 = \frac{d}{dt} \left( \frac{P V_g}{R T} \right) \\0 = V_g \frac{dP}{dt} + P \frac{dV_g}{dt} \\\therefore \frac{dP}{dt} = -\frac{P}{V_g} \frac{dV_g}{dt}$$Now, we can integrate this system of ODEs: ###Code from pint import UnitRegistry ureg = UnitRegistry() Q_ = ureg.Quantity # initial conditions and constants valve_resistance = Q_(10, 'psi/(gal/min)') pressure_atmosphere = Q_(1, 'atm') volume_tank = Q_(50, 'gal') water_fraction = 0.8 pressure_initial = Q_(100, 'psi') volume_water_initial = water_fraction * volume_tank volume_air_initial = (1.0 - water_fraction) * volume_tank def tank_equations(t, y, valve_resistance, pressure_atmosphere): '''Rates of change for water volume, air volume, and air pressure in tank. Input values in SI units. ''' volume_water = Q_(y[0], 'm^3') volume_air = Q_(y[1], 'm^3') pressure_air = Q_(y[2], 'Pa') dVf_dt = -(pressure_air - pressure_atmosphere) / valve_resistance dVg_dt = -dVf_dt dP_dt = -(pressure_air / volume_air) * dVg_dt return [ dVf_dt.to('m^3/s').magnitude, dVg_dt.to('m^3/s').magnitude, dP_dt.to('Pa/s').magnitude ] ###Output _____no_output_____ ###Markdown Now that we have the initial conditions and rate function set up, we can integrate in time. Let's do this for 500 seconds, and then plot the water volume and tank pressure as functions of time: ###Code from scipy.integrate import solve_ivp # now integrate for 500 seconds, specifying the function, time interval, #initial conditions, and additional arguments to the function sol = solve_ivp( tank_equations, [0, 500.0], [volume_water_initial.to('m^3').magnitude, volume_air_initial.to('m^3').magnitude, pressure_initial.to('Pa').magnitude ], args=(valve_resistance, pressure_atmosphere,), method='BDF' ) time = sol.t volume_water = Q_(sol.y[0], 'm^3') volume_air = Q_(sol.y[1], 'm^3') pressure = Q_(sol.y[2], 'Pa') plt.plot(time, pressure.to('psi').magnitude, label='Tank pressure (psi)') plt.plot(time, volume_water.to('gal').magnitude, label='Water volume (gal)') plt.grid(True) plt.xlabel('Time (s)') plt.ylabel('Water volume (gal) and tank pressure (psi)') plt.ylim(ymin=0.0) # ensure lower y-axis bound is zero plt.legend() plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown OptimizationFor some problems, we may want to find the input parameter(s) that minimize or maximize some function. 
In these cases, we can use the SciPy [`minimize_scalar()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize_scalar.html) function for a scalar function with one input variable, or the [`minimize()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html) function for a scalar function of one or more input variables.(Note: this is a fairly complicated topic, and we will only consider relatively simple optimization problems.)For example, consider a spring with a horizontal force applied; we can calculate the potential energy of the spring:\begin{equation}PE(x) = 0.5 k x^2 - F x \;.\end{equation}At equilibrium, the potential energy will be at a minimum, so given a particular force we can find the displacement by finding the value that minimizes the potential energy. ###Code from scipy.optimize import minimize_scalar spring_constant = 2.0 # N/cm force = 5.0 # N def spring(x, k, F): '''Calculates potential energy of spring pulled by force. ''' return 0.5 * k * x**2 - F * x sol = minimize_scalar(spring, args=(spring_constant, force)) print(f'Equilibrium displacement: {sol.x: .2f} cm') ###Output Equilibrium displacement: 2.50 cm ###Markdown A more complicated problem is a system with two springs, connected at one end with a force in some general direction (with $x$ and $y$ components); the system has both horizontal and vertical components. The springs have spring constants $k_a$ and $k_b$, and unloaded lengths $L_a$ and $L_b$.The potential energy for this system is\begin{equation}PE(x,y) = 0.5 k_a \left( \sqrt{x^2 + (L_a - y)^2} - L_a \right)^2 + 0.5 k_b \left( \sqrt{x^2 + (L_b + y)^2} - L_b \right)^2 - F_x x - F_y y\end{equation}where $x$ and $y$ are the horizontal and vertical deformations from the unloaded state.We can minimize this function of two variables to find the displacement based on the force: ###Code from scipy.optimize import minimize spring_constant_a = 9.0 # N/cm spring_constant_b = 2.0 # N/cm length_a = 10.0 # cm length_b = 10.0 # cm force_x = 2.0 # N force_y = 4.0 # N def spring_system(xvec, ka, kb, La, Lb, Fx, Fy): '''Calculates potential energy of springs pulled by force. ''' x = xvec[0] y = xvec[1] return ( 0.5*ka * (np.sqrt(x**2 + (La - y)**2) - La)**2 + 0.5*kb * (np.sqrt(x**2 + (Lb + y)**2) - Lb)**2 - (Fx * x) - (Fy * y) ) guesses = [1.0, 1.0] sol = minimize( spring_system, guesses, args=(spring_constant_a, spring_constant_b, length_a, length_b, force_x, force_y ) ) x = sol.x[0] y = sol.x[1] print(f'Equilibrium displacement: x={x: .2f} cm, y={y: .2f} cm') ###Output Equilibrium displacement: x= 4.95 cm, y= 1.28 cm ###Markdown DifferentiationMany thermodynamic properties are derivatives of other properties, so you may find the need to take derivatives of either data or of an expression. We can use the NumPy function [`gradient()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.gradient.html) to take numerical derivatives of data using finite differences, and we can use [SymPy](https://www.sympy.org/en/index.html) to find analytical derivatives of expressions. 
Numerical derivativesLet's say we have data for some function $f(x) = x^2$ at some locations $x$: ###Code x = np.linspace(0, 1, 10) f = x**2 plt.plot(x, f) plt.xlabel('x') plt.ylabel('f(x)') plt.grid(True) plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown We do not need to know the functional form of $f(x)$ to take the numerical derivatives—this example just uses a simple function for ease of checking the results.The `gradient()` function uses second-order central differences to evaluate the derivative of input data, using forward and backward differences at the boundaries. Let's use that to get the derivative, and compare against the exact derivative ($\frac{df}{dx} = 2x$): ###Code dfdx = np.gradient(f, x) dfdx_exact = 2*x plt.plot(x, dfdx_exact, label='Exact') plt.plot(x, dfdx, 'o', label='Numerical') plt.xlabel('x') plt.ylabel('df/dx') plt.grid(True) plt.legend() plt.show() ###Output _____no_output_____ ###Markdown The numerical derivative we obtain is very accurate, thanks to the linear nature of the derivative. However, at the boundaries the approximate derivative is a bit off, due to the first-order finite differences used there. Analytical derivativeIn some cases we may be given (or obtain) analytical expressions that we need to differentiate. Given a function, we have two options for calculating the derivative:1. Use the function to calculate values over the desired range, then numerically differentiate.2. Obtain the exact derivative by differentiating analytically.We'll now focus on the latter case; we can use SymPy to construct a function symbolically, and then find the exact derivative. ###Code import sympy sympy.init_printing(use_latex='mathjax') x = sympy.symbols('x', real=True) f = x**2 # take derivative of f with respect to x dfdx = sympy.diff(f, x) # this complicated expression is only necessary for printing. # It creates an equation object, that sets the unevaluated # derivative of f equal to the calculated derivative. display(sympy.Eq(sympy.Derivative(f), dfdx)) ###Output _____no_output_____ ###Markdown Once we have evaluated the analytical derivative, we can even turn it into a Python function for evaluation! Passing the `'numpy'` argument creates a NumPy array-compatible function. ###Code calc_derivative = sympy.lambdify(x, dfdx, 'numpy') x_vals = np.linspace(0, 1, 10) plt.plot(x_vals, 2*x_vals, label='Exact derivative') plt.plot(x_vals, calc_derivative(x_vals), 'o', label='SymPy derivative') plt.xlabel('x') plt.ylabel('df/dx') plt.grid(True) plt.legend() plt.show() ###Output _____no_output_____ ###Markdown As expected, the exact derivative we calculated using SymPy matches the known derivative perfectly—this was obtained symbolically, so there are no approximations/errors involved.When taking the derivative of more complicated expressions that involve other functions (e.g., $\sin$, $\log$), you will need to use the SymPy-provided versions: `sympy.sin`, `sympy.log`, etc. We also need to explicitly define symbolic variables, either using `x = sympy.Symbol('x')` to create one variable at a time or `x, y = sympy.symbols('x y', real=True)` to create multiple variables at once. (The string you specify is the displayed representation of the variable, and can include complex formatting such as subscripts/superscripts and Greek letters.)For example, let's take the derivative of\begin{equation}f(x) = \log(x) + x^3\end{equation} ###Code x = sympy.Symbol('x') f = sympy.log(x) + x**3 dfdx = sympy.diff(f, x) display(dfdx) ###Output _____no_output_____
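###Markdown
The symbolic result can be evaluated at specific points with `subs()`, or converted to a numerical function with `lambdify()` as before. A short sketch using the `dfdx` expression from the cell above.
###Code
# exact derivative of log(x) + x**3 is 1/x + 3*x**2; evaluate it at x = 2
value = dfdx.subs(x, 2)
print(value, float(value))  # 25/2 -> 12.5
###Output
_____no_output_____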
predict-daily-temperature/Daily Temperature 2 - Featuretools Solution.ipynb
###Markdown Featuretools Solution In this notebook, we'll use Featuretools to engineer our features for predicting future daily average temperatures using historical temperature data. To see a simplified baseline run, check out the [Baseline Solution](Daily%20Temperature%201%20-%20Baseline%20Solution.ipynb).Time series forecasting is different from other machine learning problems in that there is an inherent temporal ordering to the data, which means that special considerations will need to be taken into account during preprocessing, feature engineering, and model building. Featuretools provides an array of time series primitives that will handle the constraints necessary for time series featue engineering, allowing for the same ease of automation that is standard for other machine learning problem types in Featuretools. Configure Problem ###Code import warnings warnings.filterwarnings('ignore') import utils import featuretools as ft import sklearn filepath = "dataset/DailyDelhiClimate.csv" time_index = "date" target_col = 'meantemp' df = utils.read_data(filepath, time_index, target_col) df.head(10) ###Output _____no_output_____ ###Markdown In this demo and in many time series problems, we're trying to predict a sequential series of values that are highly dependent on one another. We will exploit the fact that more recent observations are more predictive than more distant ones.Therefore, we will use two concepts, `gap` and `window_length` that define a window over which we can engineer features.The first day we have access to will be after a `gap` of `9` days. The window will continue for `5` days. ###Code gap = 9 window_length = 5 ###Output _____no_output_____ ###Markdown Data SplittingAdditionally, we'll want to have our data split up into training and testing data. Since the data has a strict temporal ordering, this will split the data at a defined point in time instead of randomly sampling from the data. ###Code training_data, test_data = utils.get_train_test(df) test_data.head() ###Output _____no_output_____ ###Markdown Feature Engineering with FeaturetoolsNow, we can use Featuretools like we would for any other machine learning problem. We'll set up an [EntitySet](https://featuretools.alteryx.com/en/stable/getting_started/using_entitysets.html), define our primitives, and run [DFS](https://featuretools.alteryx.com/en/stable/getting_started/afe.html). ###Code # Adds an index column to the data, so the "temperatures" # dataframe will have 3 columns training_es = utils.set_up_entityset(training_data, id_='training_es', time_index=time_index) test_es = utils.set_up_entityset(test_data, id_='test_es', time_index=time_index) training_es # Delaying primitives delaying_primitives = [ft.primitives.NumericLag(periods=t + gap) for t in range(9)] # Datetime primitives datetime_primitives = ["Month", "Year"] # Rolling primitive # Min periods is a pandas parameter, and it just stops us from including partial calculations before the windows have # all the possible observations, so it's the window_length min_periods = window_length rolling_mean_primitive = ft.primitives.RollingMean(window_length, gap=gap, min_periods=min_periods) ###Output _____no_output_____ ###Markdown Now we'll make our DFS run and use the feature definitions in calculating the test feature matrix. 
###Code # DFS Run - calculates training feature matrix and the feature definitions train_fm, feature_defs = ft.dfs(entityset=training_es, target_dataframe_name='temperatures', max_depth=1, trans_primitives = datetime_primitives + delaying_primitives + [rolling_mean_primitive] ) # Reuse the feature definitions for the test data test_fm = ft.calculate_feature_matrix(feature_defs, test_es) ###Output _____no_output_____ ###Markdown Format data for modeling Again, we'll need to remove any null values in the data. In this case, all of our lagging primitives and our rolling mean primitive will have introduced NaNs that we need to remove. ###Code # Separate in to X and y objects for modeling X_train = train_fm.dropna() y_train = X_train.pop(target_col) # Do the same for the test data X_test = test_fm.dropna() y_test = X_test.pop(target_col) X_train.head(13) ###Output _____no_output_____ ###Markdown Model BuildingThe modeling step will be the exact same as the baseline run, but now we have twelve different features instead of just one! ###Code reg = sklearn.ensemble.RandomForestRegressor(n_estimators=100) reg.fit(X_train, y_train) preds = reg.predict(X_test) featuretools_score = sklearn.metrics.median_absolute_error(preds, y_test) print('Median Abs Error: {:.2f}'.format(featuretools_score)) ###Output Median Abs Error: 1.67 ###Markdown We can see that the median absolute error has decreased relative to the baseline notebook, which means that this model is more accurate than the basline.We can also take a look at the feature importances to see which contribute the most to the model. ###Code high_imp_feats = utils.feature_importances(X_train, reg, feats=100) ###Output 1: ROLLING_MEAN(date, meantemp, window_length=5, gap=9, min_periods=5) [0.515] 2: NUMERIC_LAG(date, meantemp, periods=9) [0.275] 3: MONTH(date) [0.069] 4: NUMERIC_LAG(date, meantemp, periods=10) [0.034] 5: NUMERIC_LAG(date, meantemp, periods=11) [0.031] 6: NUMERIC_LAG(date, meantemp, periods=13) [0.016] 7: NUMERIC_LAG(date, meantemp, periods=17) [0.015] 8: NUMERIC_LAG(date, meantemp, periods=15) [0.011] 9: NUMERIC_LAG(date, meantemp, periods=16) [0.010] 10: NUMERIC_LAG(date, meantemp, periods=14) [0.010] 11: NUMERIC_LAG(date, meantemp, periods=12) [0.009] 12: YEAR(date) [0.005] ----- ###Markdown These feature importances are extremly telling. The **most recent observation** we can access is one of the most important features, but we are able to improve upon the baseline model using the other features built from time series primitives. The rolling mean has the biggest impact on predictions, and if we graph our predictions over the rolling mean, we can see how similar they are. Let's take a look at a graph of the rolling mean and our predictions over the actual target values. ###Code utils.graph_preds_mean_and_y(preds, X_test['ROLLING_MEAN(date, meantemp, window_length=5, gap=9, min_periods=5)'], y_test) ###Output _____no_output_____
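###Markdown
If these features are going to be reused, for example to score new data without re-running DFS, the feature definitions can be serialized and loaded back later. A short sketch, assuming write access to the working directory.
###Code
# persist the feature definitions produced by DFS
ft.save_features(feature_defs, "feature_definitions.json")

# later: rebuild the exact same features on another entityset
saved_features = ft.load_features("feature_definitions.json")
new_fm = ft.calculate_feature_matrix(saved_features, test_es)
###Output
_____no_output_____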
scripts/German Credit Pre-processing.ipynb
###Markdown Reading the dataset ###Code import pandas as pd import io import requests url="https://raw.githubusercontent.com/fbarth/ds-saint-paul/master/data/german_credit_data.csv" s=requests.get(url).content df = pd.read_csv(io.StringIO(s.decode('utf-8')), sep=",") df.head() df =df.drop(columns=['Unnamed: 0']) df.head() df.shape ###Output _____no_output_____ ###Markdown Pre-processing and descriptive analysis ###Code df.loc[df['Job'] == 1, 'Job'] = 'j1' df.loc[df['Job'] == 2, 'Job'] = 'j2' df.loc[df['Job'] == 3, 'Job'] = 'j3' df.loc[df['Job'] == 0, 'Job'] = 'j0' %matplotlib inline fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15, 10)) df['Risk'].value_counts().plot(kind='bar', ax=ax1, title='Risk') df['Age'].value_counts().plot(kind='hist', ax=ax2, title='Age') df['Sex'].value_counts().plot(kind='bar', ax=ax3, title='Sex') df['Job'].value_counts().plot(kind='bar', ax=ax4, title='Job') ax = sns.lmplot( x="Credit amount", y="Duration", data=df, fit_reg=False, hue='Risk', legend=True) fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15, 10)) ax = sns.countplot(x="Risk", hue="Sex", data=df, ax=ax1) ax = sns.countplot(x="Risk", hue="Job", data=df, ax=ax2) ax = sns.countplot(x="Risk", hue="Housing", data=df, ax=ax3) ax = sns.countplot(x="Risk", hue="Saving accounts", data=df, ax=ax4) ###Output _____no_output_____ ###Markdown Splitting training and validation datasets ###Code from sklearn.model_selection import train_test_split train, test = train_test_split(df, test_size=0.1, random_state=4) df['Risk'].value_counts() train['Risk'].value_counts() test['Risk'].value_counts() project.save_data("german_credit_train.csv", train.to_csv(header=True, index=False), overwrite=True) project.save_data("german_credit_test.csv", test.to_csv(header=True, index=False), overwrite=True) ###Output _____no_output_____
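###Markdown
The three `value_counts()` calls above compare absolute class counts; looking at proportions makes it easier to confirm that the 90/10 split did not noticeably skew the Risk balance. A minimal sketch reusing the `df`, `train` and `test` dataframes.
###Code
for name, frame in [("full", df), ("train", train), ("test", test)]:
    print(name, frame["Risk"].value_counts(normalize=True).round(3).to_dict())
###Output
_____no_output_____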
notebooks/0.1-pipeline-features.ipynb
###Markdown Features ###Code GRUBER_URLINTEXT_PAT = re.compile(r"""(?i)\b((?:https?://|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/) (?:[^\s()<>]|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+ (?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:\'".,<> ?\xab\xbb\u201c\u201d\u2018\u2019]))""", re.X) WEB_URL_REGEX = r"""(?i)\b((?:https?:(?:/{1,3}|[a-z0-9%])|[a-z0-9.\-]+[.] (?:com|net|org|edu|gov|mil|aero|asia|biz|cat|coop|info|int|jobs|mobi|museum|name|post|pro |tel|travel|xxx|ac|ad|ae|af|ag|ai|al|am|an|ao|aq|ar|as|at|au|aw|ax|az|ba|bb|bd|be|bf|bg|bh |bi|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|cf|cg|ch|ci|ck|cl|cm|cn|co|cr|cs|cu|cv|cx|cy |cz|dd|de|dj|dk|dm|do|dz|ec|ee|eg|eh|er|es|et|eu|fi|fj|fk|fm|fo|fr|ga|gb|gd|ge|gf|gg|gh|gi |gl|gm|gn|gp|gq|gr|gs|gt|gu|gw|gy|hk|hm|hn|hr|ht|hu|id|ie|il|im|in|io|iq|ir|is|it|je|jm|jo |jp|ke|kg|kh|ki|km|kn|kp|kr|kw|ky|kz|la|lb|lc|li|lk|lr|ls|lt|lu|lv|ly|ma|mc|md|me|mg|mh|mk |ml|mm|mn|mo|mp|mq|mr|ms|mt|mu|mv|mw|mx|my|mz|na|nc|ne|nf|ng|ni|nl|no|np|nr|nu|nz|om|pa|pe |pf|pg|ph|pk|pl|pm|pn|pr|ps|pt|pw|py|qa|re|ro|rs|ru|rw|sa|sb|sc|sd|se|sg|sh|si|sj|Ja|sk|sl |sm|sn|so|sr|ss|st|su|sv|sx|sy|sz|tc|td|tf|tg|th|tj|tk|tl|tm|tn|to|tp|tr|tt|tv|tw|tz|ua|ug |uk|us|uy|uz|va|vc|ve|vg|vi|vn|vu|wf|ws|ye|yt|yu|za|zm|zw)/)(?:[^\s()<>{}\[\]]+|\([^\s()]*? \([^\s()]+\)[^\s()]*?\)|\([^\s]+?\))+(?:\([^\s()]*?\([^\s()]+\)[^\s()]*?\)|\([^\s]+?\)| [^\s`!()\[\]{};:'".,<>?«»“”‘’])|(?:(?<!@)[a-z0-9]+(?:[.\-][a-z0-9]+)*[.] (?:com|net|org|edu|gov|mil|aero|asia|biz|cat|coop|info|int|jobs|mobi|museum|name|post |pro|tel|travel|xxx|ac|ad|ae|af|ag|ai|al|am|an|ao|aq|ar|as|at|au|aw|ax|az|ba|bb|bd|be| bf|bg|bh|bi|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|cf|cg|ch|ci|ck|cl|cm|cn|co |cr|cs|cu|cv|cx|cy|cz|dd|de|dj|dk|dm|do|dz|ec|ee|eg|eh|er|es|et|eu|fi|fj|fk|fm|fo|fr|ga |gb|gd|ge|gf|gg|gh|gi|gl|gm|gn|gp|gq|gr|gs|gt|gu|gw|gy|hk|hm|hn|hr|ht|hu|id|ie|il|im|in |io|iq|ir|is|it|je|jm|jo|jp|ke|kg|kh|ki|km|kn|kp|kr|kw|ky|kz|la|lb|lc|li|lk|lr|ls|lt|lu |lv|ly|ma|mc|md|me|mg|mh|mk|ml|mm|mn|mo|mp|mq|mr|ms|mt|mu|mv|mw|mx|my|mz|na|nc|ne|nf|ng |ni|nl|no|np|nr|nu|nz|om|pa|pe|pf|pg|ph|pk|pl|pm|pn|pr|ps|pt|pw|py|qa|re|ro|rs|ru|rw|sa |sb|sc|sd|se|sg|sh|si|sj|Ja|sk|sl|sm|sn|so|sr|ss|st|su|sv|sx|sy|sz|tc|td|tf|tg|th|tj|tk |tl|tm|tn|to|tp|tr|tt|tv|tw|tz|ua|ug|uk|us|uy|uz|va|vc|ve|vg|vi|vn|vu|wf|ws|ye|yt|yu|za |zm|zw)\b/?(?!@)))""" CURRENCY_PATT = u"[$¢£¤¥֏؋৲৳৻૱௹฿៛\u20a0-\u20bd\ua838\ufdfc\ufe69\uff04\uffe0\uffe1\uffe5\uffe6]" tf_params = {'lowercase': True, 'analyzer': 'char_wb', 'stop_words': None, 'ngram_range': (4, 4), 'min_df': 0.0, 'max_df': 1.0, 'preprocessor': None, 'max_features': 4000, 'norm': '', 'use_idf': 1} patterns = [(r"[\(\d][\d\s\(\)-]{8,15}\d", {"name": "phone", "is_len": 0}), (r"%|taxi|скид(?:к|очн)|ц[іе]н|знижк|такс[иі]|промо|акц[іи]|bonus|бонус", {"name": "custom", "is_len": 0, "flags": re.I | re.U}), # (r"[+-<>/^]", {"name": "math_ops", "is_len": 0}), (r"[.]", {"name": "dot", "is_len": 0}), # (WEB_URL_REGEX, {"name": "url", "is_len": 0, "flags": re.X}), (CURRENCY_PATT, {"name": "currency", "is_len": 0, "flags": re.U}), # (r"[*]", {"name": "special_symbols", "is_len": 0}) (r":\)|:\(|-_-|:p|:v|:\*|:o|B-\)|:’\(", {"name": "emoji", "is_len": 0, "flags": re.U}), (r"[0-9]{2,4}[.-/][0-9]{2,4}[.-/][0-9]{2,4}", {"name": "date", "is_len": 0}) ] def get_tokens_pipe(features=TOKEN_FEATURES): token_features = TokenFeatures(features=features) tok_pipe = [ ("selector", Select(["tokens"], to_np=0)), ('tok', token_features)] return Pipeline(tok_pipe) def get_vec_pipe(add_len=True, tfidf_params={}): vectorizer = TfIdfLen(add_len, 
**tfidf_params) vec_pipe = [ ('vec', vectorizer)] return Pipeline(vec_pipe) def get_pattern_pipe(patterns): pipes = [] for i, (patt, params) in enumerate(patterns): kwargs = params.copy() name = kwargs.pop("name") + "_" + str(i) transformer = MatchPattern(pattern=patt, **kwargs) pipes.append((name, transformer)) return pipes def get_len_pipe(use_tfidf=True, vec_pipe=None): len_pipe = [("length", Length(use_tfidf))] if use_tfidf: len_pipe.insert(0, ("vec", vec_pipe)) return Pipeline(len_pipe) def build_transform_pipe(tf_params=tf_params, add_len=True, vec_mode="add", patterns=patterns, features=TOKEN_FEATURES): vec_pipe = get_vec_pipe(add_len, tf_params) if vec_mode == "only": return vec_pipe patt_pipe = get_pattern_pipe(patterns) tok_pipe = get_tokens_pipe(features) chain = [ ('selector', Select(["text"], to_np=0)), ('converter', Converter()), ('union', FeatureUnion([ ('vec', vec_pipe), *patt_pipe ])) ] final_chain = FeatureUnion([("chain", Pipeline(chain)), ("tok", tok_pipe)], n_jobs=-1) return [("final_chain", final_chain)] def build_classifier(name, seed=25): if name == "logit": model = LogisticRegression(C=1, class_weight="balanced", random_state=seed, penalty="l2") model.grid_s = {f'{name}__C' : (0.1, 0.2, 0.3, 0.4, 0.5, 1, 5, 10)} model.grid_b = {f'{name}__C' : [(1)]} elif name == "nb": model = MultinomialNB(alpha=0.1) #class_prior=[0.5, 0.5]) model.grid_s = {f'{name}__alpha' : (0.1, 0.5, 1, 5, 10)} model.grid_b = {f'{name}__alpha' : [(1)]} model.name = name return model def get_estimator_pipe(name, model, tf_params, vec_mode="add", patterns=patterns, features=TOKEN_FEATURES): chain = build_transform_pipe(tf_params, vec_mode=vec_mode, patterns=patterns, features=features) chain.append((name, model)) pipe = Pipeline(chain) pipe.name = name return pipe vec_pipe = get_vec_pipe(True, tf_params) patt_pipe = get_pattern_pipe(patterns) chain = [ ('selector', Select(["text"], to_np=0)), ('converter', Converter()), ('union', FeatureUnion([ ('vec', vec_pipe), *patt_pipe ])) ] pipe = Pipeline(chain) pipe.fit_transform(X_test) def is_lower(tokens): return any(token.islower() for token in tokens) def is_upper(tokens): return any(token.isupper() for token in tokens) class TokenFeatures(Transformer): def __init__(self, features=None): self.features = features def get_params(self, deep=True): return dict() def _get_features(self, tokens): output = [] for f in self.features: output.append(eval(f)(tokens)) return np.array(output) def transform(self, X, **kwargs): rez = [] for record in X: temp = self._get_features(record) rez.append(temp) return np.array(rez) trf = build_transform_pipe() clf = build_classifier("logit") pipe = get_estimator_pipe(clf.name, clf, tf_params) pipe.fit(X_train, y_train) sms = "поїдем до них на таксі?" 
sms_df = pd.DataFrame({"text": [sms]}) sms_df["tokens"] = sms_df["text"].map(word_tokenize) sms_df ham, spam = pipe.predict_proba(sms_df)[0] print(f"Probability ham: {ham*100:0.3f}%\nProbability spam: {spam*100:.3f}%") p = r"[0-9]{2,4}[.-/][0-9]{2,4}[.-/][0-9]{2,4}" repr(p) re.findall(p, "21.04.2016", re.U) ###Output _____no_output_____ ###Markdown Grid Search CV ###Code best_estimators, best_scores = grid_search(patterns=patterns, estimator_names=["logit"]) best_estimators[0].predict_proba(sms_df) sms_df best_scores best_scores scores,results, conf_matrix, fnp = analyze_model(model=best_estimators[0], log_fold=False) fn, fp = fnp["fn"], fnp["fp"] for el in X.iloc[fn]["text"]: print(el+"\n") (data .assign(l=lambda x: x["text"].str.findall(r"%|taxi|скид(?:к|очн)|ц[іе]н|знижк|такс[иі]|промо|акц[іи]|bonus|бонус", flags=re.I|re.U).map(len)) ).groupby("l")["label"].agg(["mean", "count"]) ###Output _____no_output_____
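###Markdown The custom transformers used above (Select, Converter, TfIdfLen, MatchPattern, Length) come from the project's own modules and are not listed in this notebook. As a rough sketch only, a pattern-count transformer that would slot into the FeatureUnion might look like the class below; the real MatchPattern may differ, and the class name, constructor signature and the meaning of the is_len flag are assumptions here. ###Code
import re
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class SimpleMatchPattern(BaseEstimator, TransformerMixin):
    """Count regex matches (or their total length) in each text; illustrative only."""
    def __init__(self, pattern, is_len=0, flags=0):
        self.pattern = pattern
        self.is_len = is_len
        self.flags = flags

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        regex = re.compile(self.pattern, self.flags)
        features = []
        for text in X:
            matches = regex.findall(text)
            value = sum(len(m) for m in matches) if self.is_len else len(matches)
            features.append([value])
        return np.array(features, dtype=float)

# quick check on a toy spam-like message
SimpleMatchPattern(r"%|скидк|такс[иі]", flags=re.I | re.U).fit_transform(["Скидка 50% на таксі!"])
###Output _____no_output_____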
posts/2015-04-30_trame-sensorielle.ipynb
###Markdown On va maintenant utiliser une image naturelle comme entrée. At each time, the pipeline is the following:* take an image, * turn into blocks corresponding to the edges' centers,* into each block determine the most likely orientationLet's first create a dummy movie: ###Code import os import matplotlib matplotlib.use("Agg") # agg-backend, so we can create figures without x-server (no PDF, just PNG etc.) from elasticite import EdgeGrid e = EdgeGrid() fps = 24. loop = 1 autoplay = 0 duration = 4. figpath = '../files/elasticite/' import matplotlib as mpl import matplotlib.pyplot as plt from moviepy.video.io.bindings import mplfig_to_npimage import moviepy.editor as mpy fig_mpl, ax = plt.subplots(1, figsize=(1, 1), facecolor='white') def draw_elementary_pattern(ax, center): #ax.add_artist(mpl.patches.Wedge(center-1., 1., 0, 180, width=.1)) #center, r, theta1, theta2, width=None #ax.add_artist(mpl.patches.Wedge(-center+1., 1., 0, 180, width=.1)) # class matplotlib.patches.RegularPolygon(xy, numVertices, radius=5, orientation=0, **kwargs)¶ ax.add_artist(mpl.patches.RegularPolygon((.5,.5), 5, center, facecolor='r')) ax.add_artist(mpl.patches.RegularPolygon((.5,.5), 5, center/2, facecolor='w')) ax.add_artist(mpl.patches.RegularPolygon((.5,.5), 5, center/4, facecolor='g')) def make_frame_mpl(t): ax = fig_mpl.add_axes([0., 0., 1., 1.], axisbg='w') ax.cla() plt.setp(ax, xticks=[]) plt.setp(ax, yticks=[]) #ax.axis(c='b', lw=0, frame_on=False) ax.grid(b=False, which="both") draw_elementary_pattern(ax, t/duration) return mplfig_to_npimage(fig_mpl) # RGB image of the figure animation = mpy.VideoClip(make_frame_mpl, duration=duration) name, vext = 'elasticite_test', '.mp4' if not os.path.isfile(os.path.join(figpath, name + '.mp4')): animation.write_videofile(figpath + name + vext, fps=fps) animation.write_videofile(figpath + name + e.vext, fps=fps) e.ipython_display(name) ###Output [MoviePy] >>>> Building video ../files/elasticite/elasticite_test.mp4 [MoviePy] Writing video ../files/elasticite/elasticite_test.mp4 ###Markdown Now read this clip using ``imageio``: ###Code import imageio reader = imageio.get_reader(figpath + name + vext) for i, im in enumerate(reader): print('Mean of frame %i is %1.1f' % (i, im.mean())) if i > 15: break ###Output _____no_output_____ ###Markdown Let's consider one frame, and a ROI defined by its center and width: ###Code def mat2ipn(mat): # create a temporary file import tempfile filename = tempfile.mktemp(suffix='.png') # Use write_png to export your wonderful plot as png ! 
import vispy.io as io imageio.imwrite(filename, mat) from IPython.core.display import display, Image return display(Image(filename)) mat2ipn(im) #from holoviews import Image #from holoviews import HoloMap, Dimension #%load_ext holoviews.ipython ###Output _____no_output_____ ###Markdown trying to guess orientations using LogGabors ###Code from NeuroTools.parameters import ParameterSet from SLIP import Image slip = Image(ParameterSet({'N_X':im.shape[1], 'N_Y':im.shape[0]})) import numpy as np im_ = im.sum(axis=-1) print(im_.shape) X_=.3 Y_=.5 w=.2 im_ = im_ * np.exp(-.5*((slip.x-X_)**2+(slip.y-Y_)**2)/w**2) mat2ipn(im_) ###Output _____no_output_____ ###Markdown Now, we will test the energy of different orientations : ###Code from LogGabor import LogGabor lg = LogGabor(slip) N_theta = 24 thetas, E = np.linspace(0, np.pi, N_theta), np.zeros((N_theta,)) for i_theta, theta in enumerate(thetas): params= {'sf_0':.3, 'B_sf': .3, 'theta':theta, 'B_theta': .1} FT_lg = lg.loggabor(0, 0, **params) E[i_theta] = np.sum(np.absolute(slip.FTfilter(im_.T, FT_lg, full=True))**2) print(theta*180/np.pi, E[i_theta]) ###Output _____no_output_____ ###Markdown Now select the most likely: ###Code print(np.argmax(E), thetas[np.argmax(E)]*180/np.pi) ###Output _____no_output_____ ###Markdown wrapping things in one function: ###Code from NeuroTools.parameters import ParameterSet from SLIP import Image from LogGabor import LogGabor import numpy as np slip = Image(ParameterSet({'N_X':im.shape[1], 'N_Y':im.shape[0]})) lg = LogGabor(slip) N_theta = 24 def theta_max(im, X_=.0, Y_=.0, w=.3): im_ = im.sum(axis=-1) im_ = im_ * np.exp(-.5*((slip.x-X_)**2+(slip.y-Y_)**2)/w**2) thetas, E = np.linspace(0, np.pi, N_theta), np.zeros((N_theta,)) for i_theta, theta in enumerate(thetas): params= {'sf_0':.3, 'B_sf': .3, 'theta':theta, 'B_theta': .1} FT_lg = lg.loggabor(0, 0, **params) E[i_theta] = np.sum(np.absolute(slip.FTfilter(im_.T, FT_lg, full=True))**2) return np.pi/2 - thetas[np.argmax(E)] e.reader = imageio.get_reader(figpath + 'elasticite_test.mp4', loop=True) for i, im in enumerate(reader): print(i, theta_max(im, X_=.3, Y_=.3, w=.3)*180./np.pi) if i > 5: break ###Output _____no_output_____ ###Markdown We retrieve the centers and span of all edges from the ``EdgeGrid`` class: ###Code name = 'trame_loggabor' import numpy as np from EdgeGrid import EdgeGrid e = EdgeGrid() import imageio e.reader = imageio.get_reader(figpath + 'elasticite_test.mp4', loop=True) def make_lames(e): im = e.reader.get_next_data() for i in range(e.N_lame): e.lames[2, i] = e.theta_max(im, X_=e.lames[0, i], Y_=e.lames[1, i], w=.05) return e.lames[2, :] e.make_anim(name, make_lames, duration=duration) e.ipython_display(name) ###Output _____no_output_____ ###Markdown trying to guess orientations using Sobel filters dans ce cas, on voit que les filtres orientés sont corrects, mais c'est un peu overkill (et lent) donc on peut préférer utiliser des filtres orientés plus simples, les filtres de Sobel, soit pour les horizontales la matrice: [1 2 1] [0 0 0] [-1 -2 -1] et son transposé (pour les verticales). ###Code name = 'trame_sobel_orientations' if not os.path.isfile(os.path.join(figpath, name + '.mp4')): from EdgeGrid import EdgeGrid e = EdgeGrid() import imageio e.reader = imageio.get_reader(figpath + 'elasticite_test.mp4', loop=True) import matplotlib.pyplot as plt import numpy as np from moviepy.video.io.bindings import mplfig_to_npimage import moviepy.editor as mpy # DRAW A FIGURE WITH MATPLOTLIB fps = 24. duration = 4. 
fig_mpl, ax = plt.subplots(1, 2, figsize=(10,5), facecolor='white') def make_frame_mpl(t): import numpy as np sobel = np.array([[1, 2, 1,], [0, 0, 0,], [-1, -2, -1,]]) im = e.reader.get_next_data() im_ = im.sum(axis=-1) from scipy.signal import convolve2d #im_ = im_ * np.exp(-.5*((slip.x-X_)**2+(slip.y-Y_)**2)/w**2) ax[0].imshow(convolve2d(im_, sobel, 'same')) ax[1].imshow(convolve2d(im_, sobel.T, 'same')) return mplfig_to_npimage(fig_mpl) # RGB image of the figure animation = mpy.VideoClip(make_frame_mpl, duration=duration) animation.write_videofile(os.path.join(figpath, name + '.mp4'), fps=fps) e.ipython_display(name) ###Output _____no_output_____ ###Markdown The angle is derived as the arctan of the 2 components ###Code name = 'trame_sobel_orientation' import os if True or not os.path.isfile(os.path.join(figpath, name + '.mp4')): from EdgeGrid import EdgeGrid e = EdgeGrid() import imageio e.reader = imageio.get_reader(figpath + 'elasticite_test.mp4', loop=True) import matplotlib.pyplot as plt import numpy as np from moviepy.video.io.bindings import mplfig_to_npimage import moviepy.editor as mpy # DRAW A FIGURE WITH MATPLOTLIB fps = 24. duration = 4. def make_frame_mpl(t): fig_mpl, ax = plt.subplots(figsize=(5,5), facecolor='white') import numpy as np sobel = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]]) im = e.reader.get_next_data() im_ = im.sum(axis=-1) N_X, N_Y = im_.shape x, y = np.mgrid[0:1:1j*N_X, 0:1:1j*N_Y] # mask = np.exp(-.5*((x-.5)**2+(y-.5)**2)/.1**2) blur = np.array([[1, 2, 1], [1, 8, 2], [1, 2, 1]]) from scipy.signal import convolve2d im_X = convolve2d(im_, sobel, 'same') im_Y = convolve2d(im_, sobel.T, 'same') for i in range(10): im_X = convolve2d(im_X, blur, 'same') im_Y = convolve2d(im_Y, blur, 'same') mappable = ax.imshow(np.arctan2(im_Y, im_X)*180/np.pi, origin='lower') fig_mpl.colorbar(mappable) return mplfig_to_npimage(fig_mpl) # RGB image of the figure animation = mpy.VideoClip(make_frame_mpl, duration=duration) animation.write_videofile(os.path.join(figpath, name + '.mp4'), fps=fps) e.ipython_display(name) ###Output _____no_output_____ ###Markdown This function is included in the ``EdgeGrid`` class: ###Code from EdgeGrid import EdgeGrid e = EdgeGrid() import numpy as np np.set_printoptions(precision=2, suppress=True) import imageio e.reader = imageio.get_reader(figpath + 'elasticite_test.mp4', loop=True) for i, im in enumerate(e.reader): print(i, e.theta_sobel(im, N_blur=10)*180/np.pi) if i>5: break name = 'trame_sobel' from EdgeGrid import EdgeGrid e = EdgeGrid() import imageio e.reader = imageio.get_reader(figpath + 'elasticite_test.mp4', loop=True) def make_lames(e): e.im = e.reader.get_next_data() return e.theta_sobel(e.im, N_blur=10) duration = 4. e.make_anim(name, make_lames, duration=duration) e.ipython_display(name) import imagen as ig line=ig.Line(xdensity=5, ydensity=5, smoothing=0) import numpy as np np.set_printoptions(1) import holoviews %reload_ext holoviews.ipython import numbergen as ng from holoviews import NdLayout import param param.Dynamic.time_dependent=True stim = ig.SineGrating(orientation=np.pi*ng.UniformRandom()) NdLayout(stim.anim(3)) name = 'trame_sobel_grating' from EdgeGrid import EdgeGrid e = EdgeGrid() stim = ig.SineGrating(xdensity=64, ydensity=64) def make_lames(e): stim.orientation=np.pi*e.t/4. e.im = stim() return e.theta_sobel(e.im, N_blur=5) duration = 4. 
e.make_anim(name, make_lames, duration=duration) e.ipython_display(name) %%opts Image.Pattern (cmap='Blues_r') l1 = ig.Line(orientation=-np.pi/4) l2 = ig.Line(orientation=+np.pi/4) cross = l1 | l2 cross.orientation=ng.ScaledTime()*(np.pi/-20) l1.anim(20) + l2.anim(20) + cross.anim(20) name = 'trame_sobel_cross' from EdgeGrid import EdgeGrid e = EdgeGrid() l1 = ig.Line(orientation=-np.pi/4) l2 = ig.Line(orientation=+np.pi/4) cross = l1 | l2 def make_lames(e): cross.orientation = np.pi*e.t/4. e.im = cross() return e.theta_sobel(e.im, N_blur=1) duration = 4. e.make_anim(name, make_lames, duration=duration) e.ipython_display(name) line.set_param(xdensity=72,ydensity=72,orientation=np.pi/4, thickness=0.02, smoothing=0.02) line.x = .25 noise = ig.Composite(xdensity=72, ydensity=72, operator=np.add, generators=[ig.Gaussian(size=0.1, x=ng.UniformRandom(seed=i+1)-0.5, y=ng.UniformRandom(seed=i+2)-0.5, orientation=np.pi*ng.UniformRandom(seed=i+3)) for i in range(10)]) stim = line + 0.3*noise NdLayout(stim.anim(4)).cols(5) name = 'trame_sobel_line_tmp_4' from EdgeGrid import EdgeGrid e = EdgeGrid() def make_lames(e): line.x = -.5 + e.t / 4. stim = line + noise e.im = stim() return e.theta_sobel(e.im, N_blur=1) duration = 4. e.make_anim(name, make_lames, duration=duration) e.ipython_display(name) ###Output _____no_output_____
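###Markdown The orientation detection used by make_lames lives in the EdgeGrid class (e.theta_sobel), whose source is not shown in this notebook. A self-contained sketch of the underlying idea, smoothed Sobel gradients followed by arctan2, is given below for reference; the exact blur kernel, number of smoothing passes and angle convention of the real implementation are assumptions and may differ. ###Code
import numpy as np
from scipy.signal import convolve2d

def orientation_map(image, N_blur=5):
    """Per-pixel orientation (in degrees) estimated from smoothed Sobel gradients."""
    im = image.sum(axis=-1) if image.ndim == 3 else image.astype(float)
    sobel = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)
    blur = np.array([[1, 2, 1], [2, 8, 2], [1, 2, 1]], dtype=float)
    blur = blur / blur.sum()
    im_X = convolve2d(im, sobel, 'same')
    im_Y = convolve2d(im, sobel.T, 'same')
    for _ in range(N_blur):
        im_X = convolve2d(im_X, blur, 'same')
        im_Y = convolve2d(im_Y, blur, 'same')
    return np.arctan2(im_Y, im_X) * 180 / np.pi

# toy check on a diagonal grating
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
print(orientation_map(np.sin(2 * np.pi * 8 * (x + y))).shape)
###Output _____no_output_____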
data-analysis/finding-area-under-curve.ipynb
###Markdown The energy is the area under the power curve. With discrete meter readings it is approximated by summing rectangles of height $P$ and width $\Delta t$: $$ E = \sum P \Delta t $$ ###Code import numpy as np delta_t = data['time'].diff(1) / np.timedelta64(1, 'h') delta_t little_boxes = data['power (kW)'] * delta_t little_boxes.sum() little_boxes.cumsum().plot() ###Output _____no_output_____
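###Markdown The data frame is assumed to have been loaded earlier with a datetime 'time' column and a 'power (kW)' column. A tiny self-contained example of the same rectangle-rule calculation (made-up readings, for illustration only): ###Code
import numpy as np
import pandas as pd

# hypothetical 15-minute meter readings
example = pd.DataFrame({
    'time': pd.date_range('2021-01-01 00:00', periods=5, freq='15min'),
    'power (kW)': [0.0, 2.0, 4.0, 4.0, 2.0],
})
dt_hours = example['time'].diff(1) / np.timedelta64(1, 'h')   # width of each rectangle, in hours
energy_kwh = (example['power (kW)'] * dt_hours).sum()         # E = sum(P * delta_t)
print(energy_kwh)  # 3.0 kWh
###Output _____no_output_____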
experiments/exp030.ipynb
###Markdown Settings ###Code EXP_NO = 30 SEED = 1 N_SPLITS = 5 TARGET = 'target' GROUP = 'art_series_id' REGRESSION = False assert((TARGET, REGRESSION) in (('target', True), ('target', False), ('sorting_date', True))) CV_THRESHOLD = None PAST_EXPERIMENTS = tuple(exp_no for exp_no in range(4, 28 + 1) # 7 は予測結果がなんかおかしい、16, 25, 28 は時間の都合でできなかった if exp_no not in (7, 16, 25, 28)) PAST_EXPERIMENTS ###Output _____no_output_____ ###Markdown Library ###Code from collections import defaultdict from functools import partial import gc import glob import json from logging import getLogger, StreamHandler, FileHandler, DEBUG, Formatter import pickle import os import sys import time import lightgbm as lgbm import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns from sklearn.linear_model import RidgeCV, RidgeClassifierCV from sklearn.metrics import confusion_matrix, mean_squared_error from sklearnex import patch_sklearn SCRIPTS_DIR = os.path.join('..', 'scripts') assert(os.path.isdir(SCRIPTS_DIR)) if SCRIPTS_DIR not in sys.path: sys.path.append(SCRIPTS_DIR) from cross_validation import load_cv_object_ids from features import extract_representative_color_features, extract_representative_colors from dataset import load_csvfiles, load_photofile from folder import experiment_dir_of from target import soring_date2target pd.options.display.float_format = '{:.5f}'.format patch_sklearn() ###Output Intel(R) Extension for Scikit-learn* enabled (https://github.com/intel/scikit-learn-intelex) ###Markdown Prepare directory ###Code output_dir = experiment_dir_of(EXP_NO) output_dir ###Output _____no_output_____ ###Markdown Prepare logger ###Code logger = getLogger(__name__) '''Refference https://docs.python.org/ja/3/howto/logging-cookbook.html ''' logger.setLevel(DEBUG) # create file handler which logs even debug messages fh = FileHandler(os.path.join(output_dir, 'log.log')) fh.setLevel(DEBUG) # create console handler with a higher log level ch = StreamHandler() ch.setLevel(DEBUG) # create formatter and add it to the handlers formatter = Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') fh.setFormatter(formatter) ch.setFormatter(formatter) # add the handlers to the logger logger.addHandler(fh) logger.addHandler(ch) len(logger.handlers) logger.info('Experiment no: {}'.format(EXP_NO)) logger.info('CV: StratifiedGroupKFold') logger.info('SEED: {}'.format(SEED)) logger.info('REGRESSION: {}'.format(REGRESSION)) ###Output 2021-07-22 15:31:50,922 - __main__ - INFO - Experiment no: 30 2021-07-22 15:31:50,924 - __main__ - INFO - CV: StratifiedGroupKFold 2021-07-22 15:31:50,925 - __main__ - INFO - SEED: 1 2021-07-22 15:31:50,927 - __main__ - INFO - REGRESSION: False ###Markdown Load csv files ###Code SINCE = time.time() logger.debug('Start loading csv files ({:.3f} seconds passed)'.format(time.time() - SINCE)) train, test, materials, techniques, sample_submission = load_csvfiles() logger.debug('Complete loading csv files ({:.3f} seconds passed)'.format(time.time() - SINCE)) train test ###Output _____no_output_____ ###Markdown Feature engineering Extract past experiments' prediction values for ensemble. 
###Code @np.vectorize def predict(proba_0: float, proba_1: float, proba_2: float, proba_3: float) -> int: return np.argmax((proba_0, proba_1, proba_2, proba_3)) from typing import Tuple, Optional def get_cv_prediction(experiments_no: Tuple[int], cv_threshold: Optional[float] = None, n_splits: int = 5, log_func: Optional[callable] = print) -> Tuple[pd.DataFrame, pd.DataFrame, pd.DataFrame]: pred_train, pred_valid, pred_test = pd.DataFrame(), pd.DataFrame(), pd.DataFrame() # Process experiment result one by one for exp_no in experiments_no: # Get directory where results are stored output_dir = experiment_dir_of(exp_no, auto_make=False) if not os.path.isdir(output_dir): raise ValueError(exp_no, output_dir) # Check whether local cv is better than threshold. # If not, that experiment result will not be ensembled. # Skip this check if threshold is not given. if cv_threshold is not None: with open(os.path.join(output_dir, 'metrics.json'), 'r') as f: metrics = json.load(f) local_cv = metrics['valid_losses_avg'] if local_cv > cv_threshold: log_func('Exclude experiment {} from ensemble, local_cv={:.5f}, threshold={:.5f}'. \ format(exp_no, local_cv, cv_threshold)) continue # Load and cv result files, (fold1, fold2, ..., fold<n_splits>) pred_train_, pred_valid_, pred_test_ = pd.DataFrame(), pd.DataFrame(), pd.DataFrame() for i in range(n_splits): num_fold = i + 1 for fold in ('training', 'validation', 'test'): # Load prediction and merge into 1 dataframe pred_df = pd.read_csv(os.path.join(output_dir, f'cv_fold{num_fold}_{fold}.csv')) if 'pred' not in pred_df: # the task was classification pred_df['pred'] = predict(pred_df['0'], pred_df['1'], pred_df['2'], pred_df['3']) pred_df['num_fold'] = num_fold pred_df = pred_df[['object_id', 'num_fold', 'pred']] if fold == 'training': pred_train_ = pd.concat([pred_train_, pred_df]) elif fold == 'validation': pred_valid_ = pd.concat([pred_valid_, pred_df]) elif fold == 'test': pred_test_ = pd.concat([pred_test_, pred_df]) pred_train_.rename(columns={'pred': f'exp{str(exp_no).zfill(3)}'}, inplace=True) pred_valid_.rename(columns={'pred': f'exp{str(exp_no).zfill(3)}'}, inplace=True) pred_test_.rename(columns={'pred': f'exp{str(exp_no).zfill(3)}'}, inplace=True) # Merge into 1 dataframe ## Training set if pred_train.shape[1] < 1: pred_train = pred_train_.copy() else: assert(pred_train.shape[0] == pred_train_.shape[0]) pred_train = pd.merge(pred_train, pred_train_) assert(pred_train.shape[0] == pred_train_.shape[0]) ## Validation set if pred_valid.shape[1] < 1: pred_valid = pred_valid_.copy() else: assert(pred_valid.shape[0] == pred_valid_.shape[0]) pred_valid = pd.merge(pred_valid, pred_valid_) assert(pred_valid.shape[0] == pred_valid_.shape[0]) ## Test set if pred_test.shape[1] < 1: pred_test = pred_test_.copy() else: assert(pred_test.shape[0] == pred_test_.shape[0]) pred_test = pd.merge(pred_test, pred_test_) assert(pred_test.shape[0] == pred_test_.shape[0]) log_func('Experiment {}: join into ensemble'.format(exp_no)) return (pred_train.set_index('object_id'), pred_valid.set_index('object_id'), pred_test.set_index('object_id')) # Get predictions of weak learner pred_train_weak_learner, pred_valid_weak_learner, pred_test_weak_learner = get_cv_prediction(PAST_EXPERIMENTS, CV_THRESHOLD) ###Output Experiment 4: join into ensemble Experiment 5: join into ensemble Experiment 6: join into ensemble Experiment 8: join into ensemble Experiment 9: join into ensemble Experiment 10: join into ensemble Experiment 11: join into ensemble Experiment 12: join into ensemble 
Experiment 13: join into ensemble Experiment 14: join into ensemble Experiment 15: join into ensemble Experiment 17: join into ensemble Experiment 18: join into ensemble Experiment 19: join into ensemble Experiment 20: join into ensemble Experiment 21: join into ensemble Experiment 22: join into ensemble Experiment 23: join into ensemble Experiment 24: join into ensemble Experiment 26: join into ensemble Experiment 27: join into ensemble ###Markdown Cross validation ###Code train.set_index('object_id', inplace=True) fold_object_ids = load_cv_object_ids() for i, (train_object_ids, valid_object_ids) in enumerate(zip(fold_object_ids[0], fold_object_ids[1])): assert(set(train_object_ids) & set(valid_object_ids) == set()) num_fold = i + 1 logger.debug('Start fold {} ({:.3f} seconds passed)'.format(num_fold, time.time() - SINCE)) # Separate dataset into training/validation fold X_train = pred_train_weak_learner.loc[train_object_ids].query(f'num_fold == {num_fold}').drop(columns=['num_fold']) y_train = train.loc[train_object_ids, TARGET].values X_valid = pred_valid_weak_learner.loc[valid_object_ids].query(f'num_fold == {num_fold}').drop(columns=['num_fold']) y_valid = train.loc[valid_object_ids, TARGET].values X_test = pred_test_weak_learner.query(f'num_fold == {num_fold}').drop(columns=['num_fold']) # Training logger.debug('Start training model ({:.3f} seconds passed)'.format(time.time() - SINCE)) ## train estimator estimator = RidgeCV(alphas=(0.01, 0.1, 1.0, 10.0, 50.)) if REGRESSION \ else RidgeClassifierCV(alphas=(0.01, 0.1, 1.0, 10.0, 50.), class_weight='balanced') estimator.fit(X_train, y_train) ## Save coefficients coef_df = pd.DataFrame(data=estimator.coef_) if REGRESSION: coef_df.index = X_train.columns coef_df.columns = ['coef'] else: coef_df.columns = X_train.columns coef_df.to_csv(os.path.join(output_dir, f'cv_fold{num_fold}_coefficients.csv')) logger.debug('Complete training ({:.3f} seconds passed)'.format(time.time() - SINCE)) # Save model and prediction ## Prediction if REGRESSION: pred_train = pd.DataFrame(data=estimator.predict(X_train), columns=['pred']) pred_valid = pd.DataFrame(data=estimator.predict(X_valid), columns=['pred']) pred_test = pd.DataFrame(data=estimator.predict(X_test), columns=['pred']) else: try: pred_train = pd.DataFrame(data=estimator.predict_proba(X_train), columns=estimator.classes_) pred_valid = pd.DataFrame(data=estimator.predict_proba(X_valid), columns=estimator.classes_) pred_test = pd.DataFrame(data=estimator.predict_proba(X_test), columns=estimator.classes_) except AttributeError: pred_train = pd.DataFrame(data=estimator.decision_function(X_train), columns=estimator.classes_) pred_valid = pd.DataFrame(data=estimator.decision_function(X_valid), columns=estimator.classes_) pred_test = pd.DataFrame(data=estimator.decision_function(X_test), columns=estimator.classes_) ## Training set pred_train['object_id'] = train_object_ids filepath_fold_train = os.path.join(output_dir, f'cv_fold{num_fold}_training.csv') pred_train.to_csv(filepath_fold_train, index=False) logger.debug('Save training fold to {} ({:.3f} seconds passed)' \ .format(filepath_fold_train, time.time() - SINCE)) ## Validation set pred_valid['object_id'] = valid_object_ids filepath_fold_valid = os.path.join(output_dir, f'cv_fold{num_fold}_validation.csv') pred_valid.to_csv(filepath_fold_valid, index=False) logger.debug('Save validation fold to {} ({:.3f} seconds passed)' \ .format(filepath_fold_valid, time.time() - SINCE)) ## Test set pred_test['object_id'] = X_test.index.values 
filepath_fold_test = os.path.join(output_dir, f'cv_fold{num_fold}_test.csv') pred_test.to_csv(filepath_fold_test, index=False) logger.debug('Save test result {} ({:.3f} seconds passed)' \ .format(filepath_fold_test, time.time() - SINCE)) ## Model filepath_fold_model = os.path.join(output_dir, f'cv_fold{num_fold}_model.pkl') with open(filepath_fold_model, 'wb') as f: pickle.dump(estimator, f) logger.debug('Save model {} ({:.3f} seconds passed)'.format(filepath_fold_model, time.time() - SINCE)) # Save memory del (estimator, X_train, X_valid, y_train, y_valid, pred_train, pred_valid, pred_test) gc.collect() logger.debug('Complete fold {} ({:.3f} seconds passed)'.format(num_fold, time.time() - SINCE)) ###Output 2021-07-22 15:31:53,989 - __main__ - DEBUG - Start fold 1 (3.042 seconds passed) 2021-07-22 15:31:54,027 - __main__ - DEBUG - Start training model (3.080 seconds passed) 2021-07-22 15:31:54,235 - __main__ - DEBUG - Complete training (3.288 seconds passed) 2021-07-22 15:31:54,264 - __main__ - DEBUG - Save training fold to ..\scripts\..\experiments\exp030\cv_fold1_training.csv (3.317 seconds passed) 2021-07-22 15:31:54,275 - __main__ - DEBUG - Save validation fold to ..\scripts\..\experiments\exp030\cv_fold1_validation.csv (3.328 seconds passed) 2021-07-22 15:31:54,320 - __main__ - DEBUG - Save test result ..\scripts\..\experiments\exp030\cv_fold1_test.csv (3.373 seconds passed) 2021-07-22 15:31:54,322 - __main__ - DEBUG - Save model ..\scripts\..\experiments\exp030\cv_fold1_model.pkl (3.375 seconds passed) 2021-07-22 15:31:54,379 - __main__ - DEBUG - Complete fold 1 (3.432 seconds passed) 2021-07-22 15:31:54,380 - __main__ - DEBUG - Start fold 2 (3.433 seconds passed) 2021-07-22 15:31:54,405 - __main__ - DEBUG - Start training model (3.458 seconds passed) 2021-07-22 15:31:54,415 - __main__ - DEBUG - Complete training (3.468 seconds passed) 2021-07-22 15:31:54,450 - __main__ - DEBUG - Save training fold to ..\scripts\..\experiments\exp030\cv_fold2_training.csv (3.503 seconds passed) 2021-07-22 15:31:54,458 - __main__ - DEBUG - Save validation fold to ..\scripts\..\experiments\exp030\cv_fold2_validation.csv (3.511 seconds passed) 2021-07-22 15:31:54,516 - __main__ - DEBUG - Save test result ..\scripts\..\experiments\exp030\cv_fold2_test.csv (3.569 seconds passed) 2021-07-22 15:31:54,518 - __main__ - DEBUG - Save model ..\scripts\..\experiments\exp030\cv_fold2_model.pkl (3.571 seconds passed) 2021-07-22 15:31:54,576 - __main__ - DEBUG - Complete fold 2 (3.629 seconds passed) 2021-07-22 15:31:54,577 - __main__ - DEBUG - Start fold 3 (3.630 seconds passed) 2021-07-22 15:31:54,598 - __main__ - DEBUG - Start training model (3.651 seconds passed) 2021-07-22 15:31:54,609 - __main__ - DEBUG - Complete training (3.662 seconds passed) 2021-07-22 15:31:54,638 - __main__ - DEBUG - Save training fold to ..\scripts\..\experiments\exp030\cv_fold3_training.csv (3.691 seconds passed) 2021-07-22 15:31:54,648 - __main__ - DEBUG - Save validation fold to ..\scripts\..\experiments\exp030\cv_fold3_validation.csv (3.701 seconds passed) 2021-07-22 15:31:54,695 - __main__ - DEBUG - Save test result ..\scripts\..\experiments\exp030\cv_fold3_test.csv (3.747 seconds passed) 2021-07-22 15:31:54,697 - __main__ - DEBUG - Save model ..\scripts\..\experiments\exp030\cv_fold3_model.pkl (3.750 seconds passed) 2021-07-22 15:31:54,758 - __main__ - DEBUG - Complete fold 3 (3.811 seconds passed) 2021-07-22 15:31:54,759 - __main__ - DEBUG - Start fold 4 (3.812 seconds passed) 2021-07-22 15:31:54,783 - __main__ - DEBUG - Start 
training model (3.836 seconds passed) 2021-07-22 15:31:54,796 - __main__ - DEBUG - Complete training (3.849 seconds passed) 2021-07-22 15:31:54,837 - __main__ - DEBUG - Save training fold to ..\scripts\..\experiments\exp030\cv_fold4_training.csv (3.891 seconds passed) 2021-07-22 15:31:54,848 - __main__ - DEBUG - Save validation fold to ..\scripts\..\experiments\exp030\cv_fold4_validation.csv (3.902 seconds passed) 2021-07-22 15:31:54,899 - __main__ - DEBUG - Save test result ..\scripts\..\experiments\exp030\cv_fold4_test.csv (3.952 seconds passed) 2021-07-22 15:31:54,900 - __main__ - DEBUG - Save model ..\scripts\..\experiments\exp030\cv_fold4_model.pkl (3.954 seconds passed) 2021-07-22 15:31:54,960 - __main__ - DEBUG - Complete fold 4 (4.013 seconds passed) 2021-07-22 15:31:54,961 - __main__ - DEBUG - Start fold 5 (4.015 seconds passed) 2021-07-22 15:31:54,984 - __main__ - DEBUG - Start training model (4.038 seconds passed) 2021-07-22 15:31:54,997 - __main__ - DEBUG - Complete training (4.050 seconds passed) 2021-07-22 15:31:55,026 - __main__ - DEBUG - Save training fold to ..\scripts\..\experiments\exp030\cv_fold5_training.csv (4.079 seconds passed) 2021-07-22 15:31:55,034 - __main__ - DEBUG - Save validation fold to ..\scripts\..\experiments\exp030\cv_fold5_validation.csv (4.088 seconds passed) 2021-07-22 15:31:55,077 - __main__ - DEBUG - Save test result ..\scripts\..\experiments\exp030\cv_fold5_test.csv (4.130 seconds passed) 2021-07-22 15:31:55,079 - __main__ - DEBUG - Save model ..\scripts\..\experiments\exp030\cv_fold5_model.pkl (4.133 seconds passed) 2021-07-22 15:31:55,140 - __main__ - DEBUG - Complete fold 5 (4.193 seconds passed) ###Markdown Evaluation ###Code rmse = partial(mean_squared_error, squared=False) metrics = defaultdict(list) ###Output _____no_output_____ ###Markdown Training set ###Code pred_train_dfs = [] for i in range(N_SPLITS): num_fold = i + 1 logger.debug('Evaluate cv result (training set) Fold {}'.format(num_fold)) # Read cv result filepath_fold_train = os.path.join(output_dir, f'cv_fold{num_fold}_training.csv') pred_train_df = pd.read_csv(filepath_fold_train) pred_train_df['actual'] = train.loc[pred_train_df['object_id'], TARGET].values if REGRESSION: if TARGET == 'target': pred_train_df['pred'].clip(lower=0, upper=3, inplace=True) else: pred_train_df['pred'] = np.vectorize(soring_date2target)(pred_train_df['pred']) pred_train_df['actual'] = np.vectorize(soring_date2target)(pred_train_df['actual']) else: pred_train_df['pred'] = predict(pred_train_df['0'], pred_train_df['1'], pred_train_df['2'], pred_train_df['3']) if not (REGRESSION and TARGET == 'target'): print(confusion_matrix(pred_train_df['actual'], pred_train_df['pred'], labels=np.sort(train['target'].unique()))) loss = rmse(pred_train_df['actual'], pred_train_df['pred']) logger.debug('Loss: {}'.format(loss)) metrics['train_losses'].append(loss) pred_train_dfs.append(pred_train_df) metrics['train_losses_avg'] = np.mean(metrics['train_losses']) metrics['train_losses_std'] = np.std(metrics['train_losses']) pred_train = pd.concat(pred_train_dfs).groupby('object_id').sum() pred_train = pred_train / N_SPLITS if not REGRESSION: pred_train['pred'] = predict(pred_train['0'], pred_train['1'], pred_train['2'], pred_train['3']) pred_train['actual'] = train.loc[pred_train.index, TARGET].values if REGRESSION and TARGET == 'sorting_date': pred_train['actual'] = np.vectorize(soring_date2target)(pred_train['actual']) pred_train if not (REGRESSION and TARGET == 'target'): print(confusion_matrix(pred_train['actual'], 
pred_train['pred'], labels=np.sort(train['target'].unique()))) loss = rmse(pred_train['actual'], pred_train['pred']) metrics['train_loss'] = loss logger.info('Training loss: {}'.format(loss)) pred_train.to_csv(os.path.join(output_dir, 'prediction_train.csv')) logger.debug('Write cv result to {}'.format(os.path.join(output_dir, 'prediction_train.csv'))) ###Output 2021-07-22 15:31:55,470 - __main__ - DEBUG - Write cv result to ..\scripts\..\experiments\exp030\prediction_train.csv ###Markdown Validation set ###Code pred_valid_dfs = [] for i in range(N_SPLITS): num_fold = i + 1 logger.debug('Evaluate cv result (validation set) Fold {}'.format(num_fold)) # Read cv result filepath_fold_valid = os.path.join(output_dir, f'cv_fold{num_fold}_validation.csv') pred_valid_df = pd.read_csv(filepath_fold_valid) pred_valid_df['actual'] = train.loc[pred_valid_df['object_id'], TARGET].values if REGRESSION: if TARGET == 'target': pred_valid_df['pred'].clip(lower=0, upper=3, inplace=True) else: pred_valid_df['pred'] = np.vectorize(soring_date2target)(pred_valid_df['pred']) pred_valid_df['actual'] = np.vectorize(soring_date2target)(pred_valid_df['actual']) else: pred_valid_df['pred'] = predict(pred_valid_df['0'], pred_valid_df['1'], pred_valid_df['2'], pred_valid_df['3']) if not (REGRESSION and TARGET == 'target'): print(confusion_matrix(pred_valid_df['actual'], pred_valid_df['pred'], labels=np.sort(train['target'].unique()))) loss = rmse(pred_valid_df['actual'], pred_valid_df['pred']) logger.debug('Loss: {}'.format(loss)) metrics['valid_losses'].append(loss) pred_valid_dfs.append(pred_valid_df) metrics['valid_losses_avg'] = np.mean(metrics['valid_losses']) metrics['valid_losses_std'] = np.std(metrics['valid_losses']) pred_valid = pd.concat(pred_valid_dfs).groupby('object_id').sum() pred_valid = pred_valid / N_SPLITS if not REGRESSION: pred_valid['pred'] = predict(pred_valid['0'], pred_valid['1'], pred_valid['2'], pred_valid['3']) pred_valid['actual'] = train.loc[pred_valid.index, TARGET].values if REGRESSION and TARGET == 'sorting_date': pred_valid['actual'] = np.vectorize(soring_date2target)(pred_valid['actual']) pred_valid if not REGRESSION: print(confusion_matrix(pred_valid['actual'], pred_valid['pred'], labels=np.sort(train['target'].unique()))) loss = rmse(pred_valid['actual'], pred_valid['pred']) metrics['valid_loss'] = loss logger.info('Validatino loss: {}'.format(loss)) pred_valid.to_csv(os.path.join(output_dir, 'prediction_valid.csv')) logger.debug('Write cv result to {}'.format(os.path.join(output_dir, 'prediction_valid.csv'))) with open(os.path.join(output_dir, 'metrics.json'), 'w') as f: json.dump(dict(metrics), f) logger.debug('Write metrics to {}'.format(os.path.join(output_dir, 'metrics.json'))) ###Output 2021-07-22 15:31:55,714 - __main__ - DEBUG - Write metrics to ..\scripts\..\experiments\exp030\metrics.json ###Markdown Prediction ###Code pred_test_dfs = [] for i in range(N_SPLITS): num_fold = i + 1 # Read cv result filepath_fold_test = os.path.join(output_dir, f'cv_fold{num_fold}_test.csv') pred_test_df = pd.read_csv(filepath_fold_test) pred_test_dfs.append(pred_test_df) pred_test = pd.concat(pred_test_dfs).groupby('object_id').sum() pred_test = pred_test / N_SPLITS if REGRESSION: if TARGET == 'target': pred_test['pred'].clip(lower=0, upper=3, inplace=True) else: pred_test['pred'] = np.vectorize(soring_date2target)(pred_test['pred']) else: pred_test['pred'] = predict(pred_test['0'], pred_test['1'], pred_test['2'], pred_test['3']) pred_test test['target'] = pred_test.loc[test['object_id'], 
'pred'].values test = test[['target']] test sample_submission test.to_csv(os.path.join(output_dir, f'{str(EXP_NO).zfill(3)}_submission.csv'), index=False) logger.debug('Write submission to {}'.format(os.path.join(output_dir, f'{str(EXP_NO).zfill(3)}_submission.csv'))) fig = plt.figure() if not (REGRESSION and TARGET == 'target'): sns.countplot(data=test, x='target') else: sns.histplot(data=test, x='target') sns.despine() fig.savefig(os.path.join(output_dir, 'prediction.png')) logger.debug('Write figure to {}'.format(os.path.join(output_dir, 'prediction.png'))) logger.debug('Complete ({:.3f} seconds passed)'.format(time.time() - SINCE)) ###Output 2021-07-22 15:31:56,044 - __main__ - DEBUG - Complete (5.097 seconds passed)
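###Markdown For reference, the meta-model above is simply a ridge classifier fitted on the out-of-fold predictions of the earlier experiments. A rough, self-contained sketch of that stacking idea on synthetic data (the target, noise levels and column names below are made up and have nothing to do with the competition data): ###Code
import numpy as np
import pandas as pd
from sklearn.linear_model import RidgeClassifierCV

rng = np.random.default_rng(1)
y_toy = rng.integers(0, 4, size=200)                     # 4-class target, as in this competition
weak_preds = pd.DataFrame({                              # noisy predictions from two "weak" models
    'exp_a': np.clip(y_toy + rng.normal(0, 1.0, 200), 0, 3),
    'exp_b': np.clip(y_toy + rng.normal(0, 1.5, 200), 0, 3),
})
meta = RidgeClassifierCV(alphas=(0.01, 0.1, 1.0, 10.0), class_weight='balanced')
meta.fit(weak_preds, y_toy)
print(meta.score(weak_preds, y_toy))                     # in-sample accuracy of the meta-model
###Output _____no_output_____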
docs/tutorials/hierarchy/ward.ipynb
###Markdown Ward This notebook illustrates the hierarchical clustering of graphs by the [Ward method](https://scikit-network.readthedocs.io/en/latest/reference/hierarchy.html), after embedding in a space of low dimension. ###Code from IPython.display import SVG import numpy as np from sknetwork.data import karate_club, painters, movie_actor from sknetwork.embedding import Spectral from sknetwork.hierarchy import Ward, BiWard, cut_straight, dasgupta_score, tree_sampling_divergence from sknetwork.visualization import svg_graph, svg_digraph, svg_bigraph, svg_dendrogram ###Output _____no_output_____ ###Markdown Graphs ###Code graph = karate_club(metadata=True) adjacency = graph.adjacency position = graph.position ###Output _____no_output_____ ###Markdown **Hierarchy** ###Code ward = Ward() dendrogram = ward.fit_transform(adjacency) image = svg_dendrogram(dendrogram) SVG(image) ###Output _____no_output_____ ###Markdown **Cuts of the dendrogram** ###Code labels = cut_straight(dendrogram) print(labels) n_clusters = 4 labels, dendrogram_aggregate = cut_straight(dendrogram, n_clusters, return_dendrogram=True) print(labels) _, counts = np.unique(labels, return_counts=True) image = svg_dendrogram(dendrogram_aggregate, names=counts, rotate_names=False) SVG(image) image = svg_graph(adjacency, position, labels=labels) SVG(image) ###Output _____no_output_____ ###Markdown **Metrics** ###Code dasgupta_score(adjacency, dendrogram) tree_sampling_divergence(adjacency, dendrogram) ###Output _____no_output_____ ###Markdown **Other embedding** ###Code ward = Ward(embedding_method=Spectral(4)) ###Output _____no_output_____ ###Markdown Digraphs ###Code graph = painters(metadata=True) adjacency = graph.adjacency position = graph.position names = graph.names ###Output _____no_output_____ ###Markdown **Hierarchy** ###Code biward = BiWard() dendrogram = biward.fit_transform(adjacency) image = svg_dendrogram(dendrogram, names, n_clusters=3, rotate=True) SVG(image) ###Output _____no_output_____ ###Markdown **Cuts of the dendrogram** ###Code # cut with 3 clusters labels = cut_straight(dendrogram, n_clusters = 3) print(labels) image = svg_digraph(adjacency, position, names=names, labels=labels) SVG(image) ###Output _____no_output_____ ###Markdown **Metrics** ###Code dasgupta_score(adjacency, dendrogram) tree_sampling_divergence(adjacency, dendrogram) ###Output _____no_output_____ ###Markdown Bigraphs ###Code graph = movie_actor(metadata=True) biadjacency = graph.biadjacency names_row = graph.names_row names_col = graph.names_col ###Output _____no_output_____ ###Markdown **Hierarchy** ###Code biward = BiWard(cluster_col = True, cluster_both = True) biward.fit(biadjacency) dendrogram_row = biward.dendrogram_row_ dendrogram_col = biward.dendrogram_col_ dendrogram_full = biward.dendrogram_full_ image = svg_dendrogram(dendrogram_row, names_row, n_clusters=4, rotate=True) SVG(image) image = svg_dendrogram(dendrogram_col, names_col, n_clusters=4, rotate=True) SVG(image) ###Output _____no_output_____ ###Markdown **Cuts of the dendrogram** ###Code labels = cut_straight(dendrogram_full, n_clusters = 4) n_row = biadjacency.shape[0] labels_row = labels[:n_row] labels_col = labels[n_row:] image = svg_bigraph(biadjacency, names_row, names_col, labels_row, labels_col) SVG(image) ###Output _____no_output_____ ###Markdown Ward This notebook illustrates the hierarchical clustering of graphs by the [Ward method](https://scikit-network.readthedocs.io/en/latest/reference/hierarchy.html), after embedding in a space of low dimension. 
###Code from IPython.display import SVG import numpy as np from sknetwork.data import karate_club, painters, movie_actor from sknetwork.embedding import Spectral from sknetwork.hierarchy import Ward, cut_straight, dasgupta_score, tree_sampling_divergence from sknetwork.visualization import svg_graph, svg_digraph, svg_bigraph, svg_dendrogram ###Output _____no_output_____ ###Markdown Graphs ###Code graph = karate_club(metadata=True) adjacency = graph.adjacency position = graph.position # hierarchical clustering ward = Ward() dendrogram = ward.fit_transform(adjacency) image = svg_dendrogram(dendrogram) SVG(image) # cuts labels = cut_straight(dendrogram) print(labels) n_clusters = 4 labels, dendrogram_aggregate = cut_straight(dendrogram, n_clusters, return_dendrogram=True) print(labels) _, counts = np.unique(labels, return_counts=True) # aggregate dendrogram image = svg_dendrogram(dendrogram_aggregate, names=counts, rotate_names=False) SVG(image) # clustering image = svg_graph(adjacency, position, labels=labels) SVG(image) # metrics dasgupta_score(adjacency, dendrogram) # other embedding ward = Ward(embedding_method=Spectral(4)) ###Output _____no_output_____ ###Markdown Directed graphs ###Code graph = painters(metadata=True) adjacency = graph.adjacency position = graph.position names = graph.names # hierarchical clustering ward = Ward() dendrogram = ward.fit_transform(adjacency) image = svg_dendrogram(dendrogram, names, n_clusters=3, rotate=True) SVG(image) # cut with 3 clusters labels = cut_straight(dendrogram, n_clusters = 3) print(labels) image = svg_digraph(adjacency, position, names=names, labels=labels) SVG(image) # metrics dasgupta_score(adjacency, dendrogram) ###Output _____no_output_____ ###Markdown Bipartite graphs ###Code graph = movie_actor(metadata=True) biadjacency = graph.biadjacency names_row = graph.names_row names_col = graph.names_col # hierarchical clustering ward = Ward(co_cluster = True) ward.fit(biadjacency) dendrogram_row = ward.dendrogram_row_ dendrogram_col = ward.dendrogram_col_ dendrogram_full = ward.dendrogram_full_ image = svg_dendrogram(dendrogram_row, names_row, n_clusters=4, rotate=True) SVG(image) image = svg_dendrogram(dendrogram_col, names_col, n_clusters=4, rotate=True) SVG(image) # cuts labels = cut_straight(dendrogram_full, n_clusters = 4) n_row = biadjacency.shape[0] labels_row = labels[:n_row] labels_col = labels[n_row:] image = svg_bigraph(biadjacency, names_row, names_col, labels_row, labels_col) SVG(image) ###Output _____no_output_____
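###Markdown As a possible extension that is not part of the original tutorial, the same quality metric can be used to compare hierarchies produced by different algorithms, assuming the Paris algorithm is also available in this version of scikit-network: ###Code
from sknetwork.data import karate_club
from sknetwork.hierarchy import Ward, Paris, dasgupta_score

adjacency = karate_club()
for algo in (Ward(), Paris()):
    dendrogram = algo.fit_transform(adjacency)
    print(algo.__class__.__name__, dasgupta_score(adjacency, dendrogram))
###Output _____no_output_____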
tutorials/oct_cb_tutorial_04_angiography.ipynb
###Markdown Tutorial 4: Angiography Reconstruction ###Code #Import required system libraries for file management import sys,importlib,os # Provide path to oct-cbort library module_path=os.path.abspath('/Users/damondepaoli/Documents/GitHub/oct-cbort') if module_path not in sys.path: sys.path.append(module_path) # Import oct-cbort library from oct import * # Choose a directory with all meta/ofd/ofb data within it d = os.path.join(module_path,'examples/data/1_VL_Benchtop1_rat_nerve_biseg_n2_m5_struct_angio_ps') data = Load(directory = d) ###Output _____no_output_____ ###Markdown First we need to compute the tomogram, like we did in the previous tutorial ###Code tom = Tomogram() outtom = tom.reconstruct(data=data) for key,val in outtom.items(): data.processedData[key] = outtom[key] ###Output _____no_output_____ ###Markdown There are several different ways to create the angiography contrast using the reconstruct library (oct.reconstruct.angiography). Technique 1: Direct variable assignment without class instanceThis technique is most like MATLAB, but does not take avantage of all the initialization that can be taken advantage of thanks to object oriented programming. ###Code out = Angiography().reconstruct(tomch1=outtom['tomch1'], tomch2=outtom['tomch2'], settings=data.angioSettings) for key,val in out.items(): data.processedData[key] = out[key] ###Output _____no_output_____ ###Markdown Technique 2: Create Structure instance -> direct variable assigmentUsing this technique, we can keep the class in memory and continue to process other frames without needing to reinitialize and variables or memory space, important for GPU processing ###Code angiography = Angiography(mode='cdv') out = angiography.reconstruct(tomch1=outtom['tomch1'], tomch2=outtom['tomch2'], settings=data.angioSettings) for key,val in out.items(): data.processedData[key] = out[key] ###Output _____no_output_____ ###Markdown Technique 3 : Create Structure instance -> data object variable assigmentThis method automatically grabs all the required tomogramgs and settings from the `data` object and reconstructs the contrast using it. ###Code angiography = Angiography(mode='cdv') out = angiography.reconstruct(data=data) for keycf,val in out.items(): data.processedData[key] = out[key] ###Output _____no_output_____ ###Markdown What is in "out"?The processed angio and weight images reside in an output dictionary at `out['angio']` and `out['weight']` . Why use dictionaries? Because they're scalable and more outputs can be added later. 
###Code for key,val in out.items(): if not (out[key] is None): print('Dictionary key ', key, ' : ', out[key].shape, out[key].dtype) else: print('Dictionary key ', key, ' : ', 'None' , 'None') ###Output Dictionary key angio : (750, 592) uint8 Dictionary key weight : (750, 592) uint8 ###Markdown Let's look at the frames ###Code fig = plt.figure(figsize=(10,10)) ax = fig.add_subplot(121) ax.imshow(data.processedData['angio'],cmap='gray', aspect='auto') ax = fig.add_subplot(122) ax.imshow(data.processedData['weight'],cmap='gray', aspect='auto') ###Output _____no_output_____ ###Markdown As Before, we can also view what `Angiography` requires ###Code angiography.requires() ###Output Required: tomch1=tomch1 tomch2=tomch2 data.angioSettings['xFilter'] (for big-seg scans) data.angioSettings['zFilter'] (for big-seg scans) data.angioSettings['imgWidthAng'] (for big-seg scans) data.angioSettings['imgDepthAng'] (for big-seg scans) data.angioSettings['AlinesToProcAngioLinesA'] (for big-seg scans) data.angioSettings['AlinesToProcAngioLinesB'] (for big-seg scans) Optional: data.angioSettings['invertGray'] data.angioSettings['contrastLowHigh'] ( [min, max] ) For reference, the whole settings dict and its defaults are: self.settings[' contrastLowHigh '] : [-40.0, 130.0] self.settings[' invertGray '] : False self.settings[' imgWidthAng '] : 592 self.settings[' imgDepthAng '] : 3 self.settings[' xFilter '] : 11 self.settings[' zFilter '] : 1 self.settings[' nAlinesToProcAngio '] : [] self.settings[' AlinesToProcAngioLinesA '] : [ 0 1 2 ... 2955 2956 2957] self.settings[' AlinesToProcAngioLinesB '] : [ 2 3 4 ... 2957 2958 2959] self.settings[' imgDepth '] : 5
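###Markdown The returned frames are plain 8-bit NumPy arrays, so they can be written straight to disk with matplotlib if needed. This is only a convenience step and not part of the oct-cbort API; the file names below are arbitrary. ###Code
import matplotlib.pyplot as plt

# write the angiography and weight frames out as grayscale PNGs
plt.imsave('angio_frame.png', data.processedData['angio'], cmap='gray')
plt.imsave('weight_frame.png', data.processedData['weight'], cmap='gray')
###Output _____no_output_____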
w3/w3-day_2/Matplotlib_3d_walkthrough.ipynb
###Markdown Wireframe ###Code # create the figure and axes fig,ax = plt.subplots(figsize=(12.8,9.6)) # set a 3D projection ax = plt.axes(projection='3d') # a wireframe ax.plot_wireframe(X, Y, Z, color='r') plt.show() ###Output _____no_output_____ ###Markdown Surface ###Code # create the figure and axes fig,ax = plt.subplots(figsize=(12.8,9.6)) # set a 3D projection ax = plt.axes(projection='3d') # the surface ax.plot_surface(X, Y, Z, cmap='jet') plt.show() ###Output _____no_output_____ ###Markdown Contour ###Code # create the figure and axes fig,ax = plt.subplots(figsize=(12.8,9.6)) # set a 3D projection ax = plt.axes(projection='3d') # the surface ax.contour(X, Y, Z, 40, cmap='jet') plt.show() ###Output _____no_output_____
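###Markdown For completeness, the arrays X, Y and Z (and the pyplot import) are assumed to have been defined before this walkthrough starts. A typical construction that works with the three plots above is: ###Code
import numpy as np
import matplotlib.pyplot as plt

# a grid of (x, y) points and a radially symmetric "ripple" surface
x = np.linspace(-5, 5, 100)
y = np.linspace(-5, 5, 100)
X, Y = np.meshgrid(x, y)
Z = np.sin(np.sqrt(X**2 + Y**2))
###Output _____no_output_____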
RESTEndpoint/Db2 RESTful Example.ipynb
###Markdown Db2 11.5.4 RESTful ProgrammingThe following notebook is a brief example of how to use the Db2 11.5.4 RESTful Endpoint service to extend the capabilies of Db2.Programmers can create Representational State Transfer (REST) endpoints that can be used to interact with Db2.Each endpoint is associated with a single SQL statement. Authenticated users of web, mobile, or cloud applications can use these REST endpoints from any REST HTTP client without having to install any Db2 drivers.The Db2 REST server accepts an HTTP request, processes the request body, and returns results in JavaScript Object Notation (JSON).The Db2 REST server is pre-installed and running on Docker on host3 (10.0.0.4) in the Demonstration cluster. As a programmer you can communicate with the service on port 50050. Your welcome note includes the external port you can use to interact with the Db2 RESTful Endpoint service directly.You can find more information about this service at: https://www.ibm.com/support/producthub/db2/docs/content/SSEPGG_11.5.0/com.ibm.db2.luw.admin.rest.doc/doc/c_rest.html. Finding the Db2 RESTful Endpoint Service API DocumentationIf you are running this notebook from a browser running inside the Cloud Pak for Data cluster, click: http://10.0.0.4:50050/docs If you are running this from a browser from your own desktop, check your welcome note for the address of the Db2 RESTful Service at port 50050. Getting StartedBefore you can start submitting SQL or creating your own services you need to complete a few setup steps. Import the required programming librariesThe requests library is the minimum required by Python to construct RESTful service calls. The Pandas library is used to format and manipulate JSON result sets as tables. ###Code import requests import pandas as pd ###Output _____no_output_____ ###Markdown Create the Header File required for getting an authetication tokenWe have to provide the location of the RESTful service for our calls.The RESTful call to the Db2 RESTful Endpoint service is contructed and transmitted as JSON. The first part of the JSON structure is the headers that define the content tyoe of the request. ###Code headers = { "content-type": "application/json" } ###Output _____no_output_____ ###Markdown Define the RESTful HostThe next part defines where the request is sent to. It provides the location of the RESTful service for our calls. ###Code Db2RESTful = "http://localhost:50050" ###Output _____no_output_____ ###Markdown API Authentication ServiceEach service has its own path in the RESTful call. For authentication we need to point to the `v1/auth` service. ###Code API_Auth = "/v1/auth" ###Output _____no_output_____ ###Markdown Database Connection InformationTo authenticate to the RESTful service you must provide the connection information for the database along with the userid and password that you are using to authenticate with. You can also provide an expiry time so that the access token that gets returned will be invalidated after that time period. ###Code body = { "dbParms": { "dbHost": "10.0.0.1", "dbName": "ONTIME", "dbPort": 50001, "isSSLConnection": False, "username": "db2inst1", "password": "db2inst1" }, "expiryTime": "8760h" } ###Output _____no_output_____ ###Markdown Retrieving an Access TokenWhen communicating with the RESTful service, you must provide the name of the service that you want to interact with. In this case the authentication service is */v1/auth*. 
###Code try: response = requests.post("{}{}".format(Db2RESTful,API_Auth), headers=headers, json=body) print (response) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) ###Output _____no_output_____ ###Markdown A response code of 200 means that the authentication worked properly, otherwise the error that was generated is printed. The response includes a connection token that is reused throughout the rest of this lab. It ensures secure a connection without requiring that you reenter a userid and password with each request. ###Code if (response.status_code == 200): token = response.json()["token"] print("Token: {}".format(token)) else: print(response.json()["errors"]) ###Output _____no_output_____ ###Markdown Creating a standard reusable JSON headerThe standard header for all subsequent calls will use this format. It includes the access token. ###Code headers = { "authorization": f"{token}", "content-type": "application/json" } ###Output _____no_output_____ ###Markdown Executing an SQL StatementBefore you try creating your own customer service endpoint, you can try using some of the built in services. These let you submit SQL statements in a variety of ways. Executing SQL requires a different service endpoint. In this case we will use "/services/execsql" ###Code API_execsql = "/v1/services/execsql" ###Output _____no_output_____ ###Markdown In this example the code requests that the RESTful function waits until the command is complete. ###Code sql = \ """ SELECT AC."TAIL_NUMBER", AC."MANUFACTURER", AC."MODEL", OT."FLIGHTDATE", OT."UNIQUECARRIER", OT."AIRLINEID", OT."CARRIER", OT."TAILNUM", OT."FLIGHTNUM", OT."ORIGINAIRPORTID", OT."ORIGINAIRPORTSEQID", OT."ORIGINCITYNAME", OT."ORIGINSTATE", OT."DESTAIRPORTID", OT."DESTCITYNAME", OT."DESTSTATE", OT."DEPTIME", OT."DEPDELAY", OT."TAXIOUT", OT."WHEELSOFF", OT."WHEELSON", OT."TAXIIN", OT."ARRTIME", OT."ARRDELAY", OT."ARRDELAYMINUTES", OT."CANCELLED", OT."AIRTIME", OT."DISTANCE" FROM "ONTIME"."ONTIME" OT, "ONTIME"."AIRCRAFT" AC WHERE AC."TAIL_NUMBER" = OT.TAILNUM AND ORIGINSTATE = 'NJ' AND DESTSTATE = 'CA' AND AC.MANUFACTURER = 'Boeing' AND AC.MODEL LIKE 'B737%' AND OT.TAXIOUT > 30 AND OT.DISTANCE > 2000 AND OT.DEPDELAY > 300 ORDER BY OT.ARRDELAY; """ body = { "isQuery": True, "sqlStatement": sql, "sync": True } print(body) def runStatement(sql, isQuery) : body = { "isQuery": isQuery, "sqlStatement": sql, "sync": True } try: response = requests.post("{}{}".format(Db2RESTful,API_execsql), headers=headers, json=body) return response except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) response = runStatement(sql, True) ###Output _____no_output_____ ###Markdown If the successful call returns a **200** response code. ###Code print(response) ###Output _____no_output_____ ###Markdown Now that you know the call is a success, you can retrieve the json in the result set. ###Code print(response.json()["resultSet"]) ###Output _____no_output_____ ###Markdown To format the results, use a Pandas Dataframe class to convert the json result set into a table. Dataframes can be used to further manipulate results in Python. ###Code display(pd.DataFrame(response.json()['resultSet'])) ###Output _____no_output_____ ###Markdown Use Parameters in a SQL StatementSimple parameter passing is also available through the execsql service. In this case we are passing the employee number into the query to retrieve the full employee record. 
Try substituting different employee numbers and run the REST call again. For example, you can change "000010" to "000020", or "000030". ###Code sqlparm = \ """ SELECT AC."TAIL_NUMBER", AC."MANUFACTURER", AC."MODEL", OT."FLIGHTDATE", OT."UNIQUECARRIER", OT."AIRLINEID", OT."CARRIER", OT."TAILNUM", OT."FLIGHTNUM", OT."ORIGINAIRPORTID", OT."ORIGINAIRPORTSEQID", OT."ORIGINCITYNAME", OT."ORIGINSTATE", OT."DESTAIRPORTID", OT."DESTCITYNAME", OT."DESTSTATE", OT."DEPTIME", OT."DEPDELAY", OT."TAXIOUT", OT."WHEELSOFF", OT."WHEELSON", OT."TAXIIN", OT."ARRTIME", OT."ARRDELAY", OT."ARRDELAYMINUTES", OT."CANCELLED", OT."AIRTIME", OT."DISTANCE" FROM "ONTIME"."ONTIME" OT, "ONTIME"."AIRCRAFT" AC WHERE AC."TAIL_NUMBER" = OT.TAILNUM AND ORIGINSTATE = 'NJ' AND DESTSTATE = 'CA' AND AC.MANUFACTURER = 'Boeing' AND AC.MODEL LIKE 'B737%' AND OT.TAXIOUT > 30 AND OT.DISTANCE > 2000 AND OT.DEPDELAY > ? ORDER BY OT.ARRDELAY; """ body = { "isQuery": True, "parameters" : { "1" : 300 }, "sqlStatement": sqlparm, "sync": True } try: response = requests.post("{}{}".format(Db2RESTful,API_execsql), headers=headers, json=body) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) print(response) response.json()["resultSet"] display(pd.DataFrame(response.json()['resultSet'])) ###Output _____no_output_____ ###Markdown Generate a Call and don't wait for the resultsIf you know that your statement will take a long time to return a result, you can check back later. Turn **sync** off to avoid waiting. ###Code sql = \ """ SELECT AC."TAIL_NUMBER", AC."MANUFACTURER", AC."MODEL", OT."FLIGHTDATE", OT."UNIQUECARRIER", OT."AIRLINEID", OT."CARRIER", OT."TAILNUM", OT."FLIGHTNUM", OT."ORIGINAIRPORTID", OT."ORIGINAIRPORTSEQID", OT."ORIGINCITYNAME", OT."ORIGINSTATE", OT."DESTAIRPORTID", OT."DESTCITYNAME", OT."DESTSTATE", OT."DEPTIME", OT."DEPDELAY", OT."TAXIOUT", OT."WHEELSOFF", OT."WHEELSON", OT."TAXIIN", OT."ARRTIME", OT."ARRDELAY", OT."ARRDELAYMINUTES", OT."CANCELLED", OT."AIRTIME", OT."DISTANCE" FROM "ONTIME"."ONTIME" OT, "ONTIME"."AIRCRAFT" AC WHERE AC."TAIL_NUMBER" = OT.TAILNUM AND ORIGINSTATE = 'NJ' AND DESTSTATE = 'CA' AND AC.MANUFACTURER = 'Boeing' AND AC.MODEL LIKE 'B737%' AND OT.TAXIOUT > 30 AND OT.DISTANCE > 2000 AND OT.DEPDELAY > 300 ORDER BY OT.ARRDELAY; """ body = { "isQuery": True, "sqlStatement": sql, "sync": False } try: response = requests.post("{}{}".format(Db2RESTful,API_execsql), headers=headers, json=body) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) print(response) ###Output _____no_output_____ ###Markdown Retrieve the job id, so that you can retrieve the results later. ###Code job_id = response.json()["id"] print(job_id) ###Output _____no_output_____ ###Markdown Retrieve Result set using Job IDThe service API needs to be appended with the Job ID. ###Code API_get = "/v1/services/" ###Output _____no_output_____ ###Markdown We can limit the number of rows that we return at a time. Setting the limit to zero means all of the rows are to be returned. ###Code body = { "limit": 0 } ###Output _____no_output_____ ###Markdown Get the results. ###Code try: response = requests.get("{}{}{}".format(Db2RESTful,API_get,job_id), headers=headers, json=body) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) print(response) ###Output _____no_output_____ ###Markdown Retrieve the results. 
###Code display(pd.DataFrame(response.json()['resultSet'])) ###Output _____no_output_____ ###Markdown Now that you have some experience with the built in SQL service, you can try creating your own endpoint service. Using RESTful Endpoint ServicesThe most common way of interacting with the service is to fully encapsulate an SQL statement, including any parameters, in a unique RESTful service. This creates a secure separation between the database service and the RESTful programming service. It also allows you to create versions of the same service to make maintenance and evolution of programming models simple and predictable. Setup the Meta Data Tables and Stored Procedures to manage Endpoint ServicesBefore you can start defining and running your own RESTful Endpoint services you need call the service to create the table and stored procedures in the database you are using. ###Code API_makerest = "/v1/metadata/setup" ###Output _____no_output_____ ###Markdown You can specify the schema that the new table and stored procedures will be created in. In this example we will use **DB2REST** ###Code body = { "schema": "DB2REST" } try: response = requests.post("{}{}".format(Db2RESTful,API_makerest), headers=headers, json=body) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) ###Output _____no_output_____ ###Markdown If the process is successful the service returns a 201 status code. ###Code if (response.status_code == 201): print(response.reason) else: print(response.json()) ###Output _____no_output_____ ###Markdown Create a RESTful ServiceNow that the RESTful Service metadata is created in your database, you can create your first service. In this example you will pass an employee numb er, a 6 character string, to the service. It will return the department number of the employee. ###Code API_makerest = "/v1/services" ###Output _____no_output_____ ###Markdown The first step is to define the SQL that we want in the RESTful call. Parameters are identified using an ampersand "@". Notice that our SQL is nicely formatted to make this notebook easier to ready. However when creating a service it is good practice to remove the line break characters from your SQL statement. ###Code sql = \ """ SELECT COUNT(AC."TAIL_NUMBER") FROM "ONTIME"."ONTIME" OT, "ONTIME"."AIRCRAFT" AC WHERE AC."TAIL_NUMBER" = OT.TAILNUM AND ORIGINSTATE = @STATE AND DESTSTATE = 'CA' AND AC.MANUFACTURER = 'Boeing' AND AC.MODEL LIKE 'B737%' AND OT.TAXIOUT > 30 AND OT.DISTANCE > 2000 AND OT.DEPDELAY > @DELAY FETCH FIRST 5 ROWS ONLY """ sql = sql.replace("\n","") ###Output _____no_output_____ ###Markdown The next step is defining the jason body to send along with the REST call. ###Code body = {"isQuery": True, "parameters": [ { "datatype": "CHAR(2)", "name": "@STATE" }, { "datatype": "INT", "name": "@DELAY" } ], "schema": "DEMO", "serviceDescription": "Delay", "serviceName": "delay", "sqlStatement": sql, "version": "1.0" } ###Output _____no_output_____ ###Markdown Now submit the full RESTful call to create the new service. ###Code try: response = requests.post("{}{}".format(Db2RESTful,API_makerest), headers=headers, json=body) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) print(response) ###Output _____no_output_____ ###Markdown Call the new RESTful ServiceNow you can call the RESTful service. In this case we will pass the stock symbol CAT. But like in the previous example you can try rerunning the service call with different stock symbols. 
###Code API_runrest = "/v1/services/delay/1.0" body = { "parameters": { "@STATE": "NY","@DELAY":"300" }, "sync": True } try: response = requests.post("{}{}".format(Db2RESTful,API_runrest), headers=headers, json=body) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) print("{}{}".format(Db2RESTful,API_runrest)) print(response) print(response.json()) ###Output _____no_output_____ ###Markdown You can retrieve the result set, convert it into a Dataframe and display the table. ###Code display(pd.DataFrame(response.json()['resultSet'])) ###Output _____no_output_____ ###Markdown Loop through the new callNow you can call the RESTful service with different values. ###Code API_runrest = "/v1/services/delay/1.0" repeat = 2 for x in range(0, repeat): for state in ("OH", "NJ", "NY", "FL", "MI"): body = { "parameters": { "@STATE": state,"@DELAY": "240" }, "sync": True } try: response = requests.post("{}{}".format(Db2RESTful,API_runrest), headers=headers, json=body) print(state + ": " + str(response.json()['resultSet'])) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) ###Output _____no_output_____ ###Markdown Managing Your Services There are several service calls you can use to help manage the Db2 RESTful Endpoint service. List Available ServicesYou can also list all the user defined services you have access to ###Code API_listrest = "/v1/services" try: response = requests.get("{}{}".format(Db2RESTful,API_listrest), headers=headers) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) print(response.json()) display(pd.DataFrame(response.json()['Db2Services'])) ###Output _____no_output_____ ###Markdown Get Service DetailsYou can also get the details of a service ###Code API_getDetails = "/v1/services/delay/3.0" try: response = requests.get("{}{}".format(Db2RESTful,API_getDetails), headers=headers) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) json = response.json() print(json) ###Output _____no_output_____ ###Markdown You can format the result to make it easier to ready. For example, here are the input and outputs. ###Code display(pd.DataFrame(json['inputParameters'])) display(pd.DataFrame(json['resultSetFields'])) ###Output _____no_output_____ ###Markdown Delete a ServiceA single call is also available to delete a service ###Code API_deleteService = "/v1/services" Service = "/delay" Version = "/1.0" try: response = requests.delete("{}{}{}{}".format(Db2RESTful,API_deleteService,Service,Version), headers=headers) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) print (response) ###Output _____no_output_____ ###Markdown Get Service LogsYou can also easily download the Db2 RESTful Endpoint service logs. ###Code API_listrest = "/v1/logs" try: response = requests.get("{}{}".format(Db2RESTful,API_listrest), headers=headers) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) if (response.status_code == 200): myFile = response.content open('/tmp/logs.zip', 'wb').write(myFile) print("Downloaded",len(myFile),"bytes.") else: print(response.json()) ###Output _____no_output_____ ###Markdown To see the content of the logs, open the Files browser on machine host3 (10.0.0.4). Navigate to the **/tmp** directory and unzip the logs file. Using the Db2 REST ClassFor your convenience, everything in the lessons above has been included into a Db2REST Python Class. 
You can add or use this code as part of your own Jupyter notebooks to make working with the Db2 RESTful Endpoint service quick and easy. There are also lots of examples in the following lesson on how to use the class. ###Code # Run the Db2REST Class library # Used to construct and reuse an Autentication Key # Used to construct RESTAPI URLs and JSON payloads import json import requests import pandas as pd class Db2REST(): def __init__(self, RESTServiceURL): self.headers = {"content-type": "application/json"} self.RESTServiceURL = RESTServiceURL self.version = "/v1" self.API_auth = self.version + "/auth" self.API_makerest = self.version + "/metadata/setup" self.API_services = self.version + "/services/" self.API_version = self.version + "/version/" self.API_execsql = self.API_services + "execsql" self.API_monitor = self.API_services + "monitor" def connectDatabase(self, dbHost, dbName, dbPort, isSSLConnection, dbUsername, dbPassword, expiryTime="300m"): self.dbHost = dbHost self.dbName = dbName self.dbPort = dbPort self.isSSLConnection = isSSLConnection self.dbusername = dbUsername self.dbpassword = dbPassword self.connectionBody = { "dbParms": { "dbHost": dbHost, "dbName": dbName, "dbPort": dbPort, "isSSLConnection": isSSLConnection, "username": dbUsername, "password": dbPassword }, "expiryTime": expiryTime } try: response = requests.post("{}{}".format(self.RESTServiceURL,self.API_auth), headers=self.headers, json=self.connectionBody) print (response) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) if (response.status_code == 200): self.token = response.json()["token"] print("Successfully connected and retrieved access token") else: print(response.json()["errors"]) self.headers = { "authorization": f"{self.token}", "content-type": "application/json" } def getConnection(self): return self.connectionBody def getService(self): return self.RESTServiceURL def getToken(self): return("Token: {}".format(self.token)) def getVersion(self): try: response = requests.get("{}{}".format(self.RESTServiceURL,self.API_version), headers=self.headers) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) if (response.status_code == 200): return response.json()['version'] else: print(response.json()['errors'][0]['more_info']) def runStatement(self, sql, isQuery=True, sync=True, parameters={}): body = { "isQuery": isQuery, "sqlStatement": sql, "sync": sync, "parameters": parameters } try: response = requests.post("{}{}".format(self.RESTServiceURL,self.API_execsql), headers=self.headers, json=body) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) if (response.status_code == 200): return pd.DataFrame(response.json()['resultSet']) elif (response.status_code == 202): return response.json()["id"] else: print(response.json()['errors'][0]['more_info']) def getResult(self, job_id, limit=0): body = {"limit": limit} try: response = requests.get("{}{}{}".format(self.RESTServiceURL,self.API_services,job_id), headers=self.headers, json=body) except Exception as e: print("Unable to call RESTful service. 
Error={}".format(repr(e))) if (response.status_code == 200): json = response.json() if (json['jobStatus'] == 2): return json['jobStatusDescription'] elif (json['jobStatus'] == 3): return pd.DataFrame(json['resultSet']) elif (json['jobStatus'] == 4): return pd.DataFrame(json['resultSet']) else: return json elif (response.status_code == 404): print(response.json()['errors']) elif (response.status_code == 500): print(response.json()['errors'][0]['more_info']) else: print(response.json()) def createServiceMetadata(self, serviceSchema="Db2REST"): self.serviceSchema = serviceSchema body = {"schema": self.serviceSchema} try: response = requests.post("{}{}".format(self.RESTServiceURL,self.API_makerest), headers=self.headers, json=body) if (response.status_code == 201): print(response.reason) else: print(response.json()) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) def listServices(self): try: response = requests.get("{}{}".format(self.RESTServiceURL,self.API_services), headers=self.headers) return pd.DataFrame(response.json()['Db2Services']) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) def getServiceDetails(self, serviceName, version): try: response = requests.get("{}{}{}{}".format(self.RESTServiceURL,self.API_services,"/" + serviceName,"/" + version), headers=self.headers) print(response.status_code) if (response.status_code == 200): description = response.json() print("Input parameters:") print(description["inputParameters"]) print("Result format:") print(description["resultSetFields"]) else: print(response.json()) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) def createService(self, schema, serviceDescription, serviceName, sql, version, parameters=False, isQuery=True): if (parameters==False): body = {"isQuery": isQuery, "schema": schema, "serviceDescription": serviceDescription, "serviceName": serviceName, "sqlStatement": sql.replace("\n",""), "version": version } else: body = {"isQuery": isQuery, "schema": schema, "serviceDescription": serviceDescription, "serviceName": serviceName, "sqlStatement": sql.replace("\n",""), "version": version, "parameters": parameters } try: response = requests.post("{}{}".format(self.RESTServiceURL,self.API_services), headers=self.headers, json=body) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) if (response.status_code == 201): print("Service: " + serviceName + " Version: " + version + " created") else: print(response.json()) def deleteService(self, serviceName, version): try: response = requests.delete("{}{}{}{}".format(self.RESTServiceURL,self.API_services,"/" + serviceName,"/" + version), headers=self.headers) except Exception as e: print("Unable to call RESTful service. 
Error={}".format(repr(e))) if (response.status_code == 204): print("Service: " + serviceName + " Version: " + version + " deleted") else: print(response.json()) def callService(self, serviceName, version, parameters, sync=True): body = { "parameters": parameters, "sync": sync } try: response = requests.post("{}{}{}{}".format(self.RESTServiceURL,self.API_services,"/" + serviceName,"/" + version), headers=self.headers, json=body) if (response.status_code == 200): return pd.DataFrame(response.json()['resultSet']) elif (response.status_code == 202): return response.json()["id"] else: print(response.json()['errors'][0]['more_info']) except Exception as e: if (repr(e) == "KeyError('more_info',)"): print("Service not found") else: print("Unable to call RESTful service. Error={}".format(repr(e))) def monitorJobs(self): try: response = requests.get("{}{}".format(self.RESTServiceURL,self.API_monitor), headers=self.headers) if (response.status_code == 200): return pd.DataFrame(response.json()['MonitorServices']) else: print(response.json()) except Exception as e: print("Unable to call RESTful service. Error={}".format(repr(e))) ###Output _____no_output_____ ###Markdown Setting up a Db2 RESTful Endpoint Service Class instanceTo use the class first create an instance of the class. The cell below creates an object called **Db2RESTService** from the **Db2REST** class. The first call to the object is **getVersion** to confirm the version of the RESTful Endpoint Service you are connected to. ###Code Db2RESTService = Db2REST("http://localhost:50050") print("Db2 RESTful Endpoint Service Version: " + Db2RESTService.getVersion()) ###Output _____no_output_____ ###Markdown Connecting to the service to the databaseUnless your service is already bound to a single database, the call below connects it to a single Db2 database. You can run this command again to connect to a different database from the same RESTful Endpoint service. ###Code Db2RESTService.connectDatabase("10.0.0.1", "ONTIME", 50001, False, "db2inst1", "db2inst1") ###Output _____no_output_____ ###Markdown Confirming the service settingsOnce the connection to the RESTful Endpoint Service and Db2 is established you can always check your settings using the following calls. 
###Code print(Db2RESTService.getService()) print(Db2RESTService.getConnection()) print(Db2RESTService.getToken()) ###Output _____no_output_____ ###Markdown Running SQL Through the ServiceYou can run an SQL Statement through the RESTful service as a simple text string.Let's start by defining the SQL to run: ###Code sql = \ """ SELECT AC."TAIL_NUMBER", AC."MANUFACTURER", AC."MODEL", OT."FLIGHTDATE", OT."UNIQUECARRIER", OT."AIRLINEID", OT."CARRIER", OT."TAILNUM", OT."FLIGHTNUM", OT."ORIGINAIRPORTID", OT."ORIGINAIRPORTSEQID", OT."ORIGINCITYNAME", OT."ORIGINSTATE", OT."DESTAIRPORTID", OT."DESTCITYNAME", OT."DESTSTATE", OT."DEPTIME", OT."DEPDELAY", OT."TAXIOUT", OT."WHEELSOFF", OT."WHEELSON", OT."TAXIIN", OT."ARRTIME", OT."ARRDELAY", OT."ARRDELAYMINUTES", OT."CANCELLED", OT."AIRTIME", OT."DISTANCE" FROM "ONTIME"."ONTIME" OT, "ONTIME"."AIRCRAFT" AC WHERE AC."TAIL_NUMBER" = OT.TAILNUM AND ORIGINSTATE = 'NJ' AND DESTSTATE = 'CA' AND AC.MANUFACTURER = 'Boeing' AND AC.MODEL LIKE 'B737%' AND OT.TAXIOUT > 30 AND OT.DISTANCE > 2000 AND OT.DEPDELAY > 300 ORDER BY OT.DEPDELAY DESC FETCH FIRST 5 ROWS ONLY; """ ###Output _____no_output_____ ###Markdown Now a single call to the **runStatement** routine runs the SQL synchronously and returns the result as a DataFrame ###Code result = (Db2RESTService.runStatement(sql)) display(result) ###Output _____no_output_____ ###Markdown You can also run the statement asynchronously so you don't have to wait for the result. In this case the result is the statement identifier that you can use to check the statement status. ###Code statementID = (Db2RESTService.runStatement(sql, sync=False)) display(statementID) ###Output _____no_output_____ ###Markdown If you have several statements running at the same time you can check to see their status with the **monitorStatus** routine and see where they are in the service queue. ###Code services = Db2RESTService.monitorJobs() display(services) ###Output _____no_output_____ ###Markdown You can try to get the results of the statment by passing the statement identifier into the getResults routine. If the statement has finished running it will return a result set as a DataFrame. It is still running, a message is returned. ###Code result = (Db2RESTService.getResult(statementID)) display(result) ###Output _____no_output_____ ###Markdown Passing Parameters when running SQL StatementsYou can also define a single SQL statement with ? parameters and call that statement with different values using the same **runStatement** routine. ###Code sqlparm = \ """ SELECT AC."TAIL_NUMBER", AC."MANUFACTURER", AC."MODEL", OT."FLIGHTDATE", OT."UNIQUECARRIER", OT."AIRLINEID", OT."CARRIER", OT."TAILNUM", OT."FLIGHTNUM", OT."ORIGINAIRPORTID", OT."ORIGINAIRPORTSEQID", OT."ORIGINCITYNAME", OT."ORIGINSTATE", OT."DESTAIRPORTID", OT."DESTCITYNAME", OT."DESTSTATE", OT."DEPTIME", OT."DEPDELAY", OT."TAXIOUT", OT."WHEELSOFF", OT."WHEELSON", OT."TAXIIN", OT."ARRTIME", OT."ARRDELAY", OT."ARRDELAYMINUTES", OT."CANCELLED", OT."AIRTIME", OT."DISTANCE" FROM "ONTIME"."ONTIME" OT, "ONTIME"."AIRCRAFT" AC WHERE AC."TAIL_NUMBER" = OT.TAILNUM AND ORIGINSTATE = ? AND DESTSTATE = ? AND AC.MANUFACTURER = 'Boeing' AND AC.MODEL LIKE 'B737%' AND OT.TAXIOUT > 30 AND OT.DISTANCE > 2000 AND OT.DEPDELAY > ? 
ORDER BY OT.DEPDELAY DESC FETCH FIRST 10 ROWS ONLY; """ result = Db2RESTService.runStatement(sqlparm,parameters={"1": 'NY', "2": 'CA', "3" : 300}) display(result) result = Db2RESTService.runStatement(sqlparm,parameters={"1": 'NJ', "2": 'CA', "3" : 200}) display(result) ###Output _____no_output_____ ###Markdown Limiting ResultsYou also have full control of how many rows in an answer set to return. Run the following statement using **sync=False** ###Code statementID = Db2RESTService.runStatement(sqlparm, sync=False, parameters={"1": 'NJ', "2": 'CA', "3" : 200}) display(statementID) result = (Db2RESTService.getResult(statementID)) display(result) ###Output _____no_output_____ ###Markdown This time the **getResult** routine include a parameter to limit the result set to 5 rows. ###Code result = (Db2RESTService.getResult(statementID, limit=5)) display(result) ###Output _____no_output_____ ###Markdown The next cell retrieves the remaining rows. ###Code result = (Db2RESTService.getResult(statementID)) display(result) ###Output _____no_output_____ ###Markdown After all the rows have been returned the job history is removed. If you try to retrieve the results for this statement now the service won't find it. ###Code result = (Db2RESTService.getResult(statementID)) display(result) ###Output _____no_output_____ ###Markdown Creating and Running Endpoint ServicesIf the MetaData tables have not already been created in your database you can use the following call to create the MetaData in the schema of your choice. In this case **DB2REST**. ###Code Db2RESTService.createServiceMetadata("DB2REST") ###Output _____no_output_____ ###Markdown Let's start by defining the SQL statement. It can include parameters that have to be idenfied with an amersand "@". ###Code sql = \ """ SELECT COUNT(AC."TAIL_NUMBER") FROM "ONTIME"."ONTIME" OT, "ONTIME"."AIRCRAFT" AC WHERE AC."TAIL_NUMBER" = OT.TAILNUM AND ORIGINSTATE = @STATE AND DESTSTATE = 'CA' AND AC.MANUFACTURER = 'Boeing' AND AC.MODEL LIKE 'B737%' AND OT.TAXIOUT > 30 AND OT.DISTANCE > 2000 AND OT.DEPDELAY > @DELAY FETCH FIRST 5 ROWS ONLY """ ###Output _____no_output_____ ###Markdown Now we can create the service, including the two parameters, using the **createService** routine. ###Code parameters = [{"datatype": "CHAR(2)","name": "@STATE"},{"datatype": "INT","name": "@DELAY"}] schema = 'DEMO' serviceDescription = 'Delay' serviceName = 'delay' version = '2.0' Db2RESTService.createService(schema, serviceDescription, serviceName, sql, version, parameters) ###Output _____no_output_____ ###Markdown A call to the **listServices** routine confirms that you have created the new service. ###Code services = Db2RESTService.listServices() display(services) ###Output _____no_output_____ ###Markdown You can also see the details for any service using the **getServiceDetails** routine. ###Code details = Db2RESTService.getServiceDetails("delay","2.0") display(details) ###Output _____no_output_____ ###Markdown You can all the new service using the **callService** routine. The parameters are passed into call using an array of values. By default the call is synchronous so you have to wait for the results. ###Code serviceName = 'delay' version = '2.0' parameters = {"@STATE": "NJ","@DELAY":"200"} result = Db2RESTService.callService(serviceName, version, parameters) display(result) ###Output _____no_output_____ ###Markdown You can also call the service asychronously, just like we did with SQL statements earlier. Notice the additional parameter **sync=False**. 
Since the cell below immediately checks the status of the job you can see it has been queued. ###Code serviceName = 'delay' version = '2.0' parameters = {"@STATE": "NJ","@DELAY":"200"} statementID = Db2RESTService.callService(serviceName, version, parameters, sync=False) display(statementID) display(Db2RESTService.monitorJobs()) ###Output _____no_output_____ ###Markdown Run **monitorJobs** again to confirm that the endpoint service has completed the request. ###Code services = Db2RESTService.monitorJobs() display(services) ###Output _____no_output_____ ###Markdown And retrieve the result set. ###Code result = (Db2RESTService.getResult(statementID)) display(result) ###Output _____no_output_____ ###Markdown You can also delete an existing endpoint service with a call to the **deleteService** routine. ###Code serviceName = 'delay' version = '2.0' Db2RESTService.deleteService(serviceName, version) ###Output _____no_output_____ ###Markdown Using a service to query the CatalogYou can also think about creating services to explore the database catalog. For example, here is a service that accepts a schema as an input parameter and returns a list of tables in the schema. ###Code sql = \ """ SELECT TABSCHEMA, TABNAME, ALTER_TIME FROM SYSCAT.TABLES WHERE TABSCHEMA = @SCHEMA """ parameters = [{"datatype": "VARCHAR(64)","name": "@SCHEMA"}] schema = 'DEMO' serviceDescription = 'Tables' serviceName = 'tables' version = '1.0' Db2RESTService.createService(schema, serviceDescription, serviceName, sql, version, parameters) serviceName = 'tables' version = '1.0' result = Db2RESTService.callService(serviceName, version, parameters = {"@SCHEMA": "SYSCAT"}, sync=True) display(result) ###Output _____no_output_____ ###Markdown Incorporating the Db2 RESTFul Endpoint Class into your Python sciptsThe Db2 RESTful Endpoint Class is available on GIT at https://github.com/Db2-DTE-POC/modernization/tree/main/RESTEndpoint. You can download a copy into your own Python library and add **%run db2restendpoint.ipynb** to your own Python notebook. You can also include the following two lines which will automatically download a copy of the library from GIT and run the Class code. ###Code !wget -O db2endpoint.ipynb https://raw.githubusercontent.com/Db2-DTE-POC/modernization/master/RESTEndpoint/db2restendpoint.ipynb %run db2endpoint.ipynb ###Output _____no_output_____
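###Markdown As a closing illustration (my addition, not part of the original lab), here is a minimal standalone sketch of the call pattern used throughout this notebook. It assumes the Db2 RESTful Endpoint service is still reachable at http://localhost:50050, that `token` holds the value returned by the earlier /v1/auth call, and that a user-defined service such as DEMO/tables/1.0 exists; adjust those names for your environment.
###Code
import requests
import pandas as pd

def call_endpoint(base_url, token, service, version, parameters, timeout=30):
    """Call a user-defined Db2 REST endpoint service and return its result set as a DataFrame."""
    headers = {"authorization": token, "content-type": "application/json"}
    body = {"parameters": parameters, "sync": True}
    response = requests.post("{}/v1/services/{}/{}".format(base_url, service, version),
                             headers=headers, json=body, timeout=timeout)
    response.raise_for_status()  # surface HTTP errors instead of silently returning nothing
    return pd.DataFrame(response.json()["resultSet"])

# Example usage (hypothetical; requires the 'tables' service created above and a valid token):
# df = call_endpoint("http://localhost:50050", token, "tables", "1.0", {"@SCHEMA": "SYSCAT"})
# display(df)
###Output _____no_output_____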
Normalize_count_tables and KEGG heatmaps.ipynb
###Markdown Make a heat map using the normalized table ###Code ##Add a color row to the metadata table. my_palette = dict(zip(MetaData["Type"].unique(), ["olive","yellow","saddlebrown","orange"])) MetaData["colors"] = MetaData["Type"].map(my_palette) MetaData.head() #Create a dictionary mappinng the column names inthe heatmap to the colors hm_pallate = dict(zip(list(MetaData.Sample), list(MetaData.colors))) #Map the colors to the COLUMNS on the count matrix for heat map col_colors = LFC3_Counts_RowNorm.columns.map(hm_pallate) col_colors #Heatmap with Rows normalized by AVERAGE. Add the color bar legend in Adobe Illustrator sns.set(font_scale=1.0) colbar_kws = {'label':'GE relative to avg. across all samples'} m = sns.clustermap(LFC3_Counts_RowNorm, cmap="BuPu", col_colors=col_colors, linewidths=.5) #Add a legend for Column color bar: for label in MetaData["Type"].unique(): m.ax_row_dendrogram.bar(0, 0, color=my_palette[label], label=label, linewidth=0) m.ax_row_dendrogram.legend(loc="upper left", bbox_to_anchor=(5.0, 1.35), fontsize="large", title="Mesocosm host type") #adjust the size and rotation of axes labels plt.setp(m.ax_heatmap.get_yticklabels(), fontsize=12) # For y axis plt.setp(m.ax_heatmap.get_xticklabels(), rotation=90, fontsize=18) # For x axis plt.show() #bbos_inches makes sure the whole image is included in the pdf #plt.savefig("DA_KEGGS_noD7_heatmap.pdf", format='pdf', bbox_inches='tight') # Normalize ROWS wihth standard scale: Either 0 (rows) or 1 (columns). # Whether or not to standardize that dimension, # meaning for each row or column, subtract the minimum and divide each by its maximum. sns.set_palette(sns.color_palette("coolwarm")) sns.clustermap(LFC3_Counts_Norm, standard_scale=0,cmap="BuPu") ###Output _____no_output_____ ###Markdown Do the same thing for the D7 vs. Animal comparison ###Code LFC3_Counts2 = pd.read_table("summarized_DA_KEGGS.txt", index_col=0) LFC3_Counts2 #Metadata table ROWS must be in the same order as the Count matrix COLUMS! GE2 = MetaData.GE.values MetaData #Divide gene counts by GE by COLUMN LFC3_Counts_Norm2 = LFC3_Counts2.iloc[:,:].div(GE2[:], axis=1) LFC3_Counts_Norm2.head() #Get the AVERAGE of each row row_avg2 = LFC3_Counts_Norm2.iloc[:,:].mean(axis=1) #Divide by the row AVG to normalize LFC3_Counts_RowNorm2 = LFC3_Counts_Norm2.iloc[:,:].div(row_avg2,axis=0) LFC3_Counts_RowNorm2.head() LFC3_Counts_Norm2.to_csv("summarized_DA_KEGGs.GEnorm.txt", sep='\t') LFC3_Counts_RowNorm2.to_csv("summarized_DA_KEGGs.GE_RowNorm.txt", sep='\t') #Map the colors to the COLUMNS on the count matrix for heat map col_colors = LFC3_Counts_RowNorm2.columns.map(hm_pallate) col_colors plt.figure(figsize = (260,50)) m = sns.clustermap(LFC3_Counts_RowNorm2, cmap="BuPu", col_colors=col_colors, yticklabels=1, linewidths=.5) #Add a legend for Column color bar: for label in MetaData["Type"].unique(): m.ax_row_dendrogram.bar(0, 0, color=my_palette[label], label=label, linewidth=0) m.ax_row_dendrogram.legend(loc="upper left", bbox_to_anchor=(5.0, 1.35), fontsize="large", title="Mesocosm host type") #adjust the size and rotation of axes labels plt.setp(m.ax_heatmap.get_yticklabels(), fontsize=11) # For y axis plt.setp(m.ax_heatmap.get_xticklabels(), rotation=90, fontsize=17) # For x axis #plt.show() plt.savefig("DA_KEGGS_heatmap.pdf", format='pdf', bbox_inches='tight') ###Output _____no_output_____
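###Markdown A small self-contained sketch of the two-step normalization used in this notebook (divide each sample's column by its GE value, then divide each row by its row average), written as a reusable helper. The toy counts and GE values below are made up purely for illustration; the real inputs are the LFC3 count tables and the GE column of the metadata.
###Code
import pandas as pd

def normalize_counts(counts, ge_per_sample):
    """Column-wise GE normalization followed by row-average normalization."""
    ge_norm = counts.div(ge_per_sample, axis=1)        # per-sample genome-equivalent scaling
    return ge_norm.div(ge_norm.mean(axis=1), axis=0)   # expression relative to the row average

# Toy example with three samples and two hypothetical KEGG rows:
toy_counts = pd.DataFrame({"S1": [10, 4], "S2": [20, 2], "S3": [30, 6]}, index=["K00001", "K00002"])
toy_ge = pd.Series({"S1": 1.0, "S2": 2.0, "S3": 3.0})
print(normalize_counts(toy_counts, toy_ge))
###Output _____no_output_____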
Term1/Tasks2/.ipynb_checkpoints/12-checkpoint.ipynb
###Markdown Let us check the integrals of motion: ![1-4.png](attachment:1-4.png)
###Code
# First and second integrals (projections of the angular momentum onto e3 and onto OZ)
fig, axes = plt.subplots(2)
fig.set_figheight(10)
fig.set_figwidth(12)
fig.subplots_adjust(hspace = 0.5)
axes[0].plot(time_points, Ki[:, 2] - Ki[0, 2])
axes[0].set_title("Deviation of the angular momentum along OZ")
axes[0].set_xlabel("Time in seconds")
axes[0].set_ylabel("Ki_Z")
axes[0].grid(True)
axes[1].plot(time_points, Ke[:, 2] - Ke[0, 2])
axes[1].set_title("Deviation of the angular momentum along e3")
axes[1].set_xlabel("Time in seconds")
axes[1].set_ylabel("Ki_e3")
axes[1].grid(True)
print("Maximum change of the angular momentum along OZ (inertial frame):", np.max(np.abs(Ki[:, 2] - Ki[0, 2])))
print("Maximum change of the angular momentum along e3 (body axes):", np.max(np.abs(Ke[:, 2] - Ke[0, 2])))
# Third integral
wwe = np.array([qt.as_float_array(i)[1:] for i in we])
weq2 = wwe[:, 0]**2 + wwe[:, 1]**2
T2 = J[0, 0] * weq2
P2 = 2 * params.mass * params.distance_to_cm * e3[:, 2] * params.g
Integral3 = T2 + P2
fig, axes = plt.subplots(1)
fig.set_figheight(10)
fig.set_figwidth(12)
fig.subplots_adjust(hspace = 0.5)
axes.plot(time_points, Integral3)
axes.set_title("Integral3 - the third integral in the Lagrange case")
axes.set_xlabel("Time in seconds")
axes.set_ylabel("Integral3")
axes.grid(True)
print("Maximum change of Integral3:", np.max(np.abs(Integral3 - Integral3[0])))
###Output Maximum change of Integral3: 5.417888360170764e-14
###Markdown Conclusion: the integrals of motion are well conserved
###Code
# Pick the parameters
t0 = 0.
t1 = 200.
step = 0.001
we0 = np.array([0, 0, 20])
A0 = np.quaternion(np.cos(np.pi/4), np.sin(np.pi/4), 0, 0)
x0 = np.hstack((we0, qt.as_float_array(A0)))
result = RK4(lambda t, x: f1(t, x, params), (t0, t1), x0, step, normalization)
time_points = result[:, 0]
we = np.array([np.quaternion(*i) for i in result[:, 1:4]])
A = qt.as_quat_array(result[:, 4:])
wi = np.array([(j * i * j.inverse()) for i, j in zip(we, A)])
e3 = np.quaternion(0, 0, 1)
e3 = np.array([qt.as_float_array((j * e3 * j.inverse()))[1:] for j in A])
# Let us see how e3 moves
fig = plt.figure()
fig.set_figheight(7)
fig.set_figwidth(7)
ax = fig.add_subplot(111, projection='3d')
ax.set_xlim((-1, 1))
ax.set_ylim((-1, 1))
ax.set_zlim((-1, 1))
ax.plot(*e3.T)
pass
###Output _____no_output_____
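###Markdown For completeness, here is a minimal sketch of a fixed-step RK4 integrator with a per-step post-processing hook, in the spirit of the RK4(...) call above. The actual RK4 and normalization used in this notebook are defined elsewhere and may differ; this is only an illustration of the idea (time in column 0, state in the remaining columns of the returned array).
###Code
import numpy as np

def rk4(f, t_span, x0, step, post=lambda x: x):
    """Classic 4th-order Runge-Kutta with a fixed step and optional per-step post-processing."""
    t0, t1 = t_span
    ts = np.arange(t0, t1 + step, step)
    x = np.asarray(x0, dtype=float)
    out = np.empty((len(ts), x.size + 1))
    for i, t in enumerate(ts):
        out[i, 0], out[i, 1:] = t, x
        k1 = f(t, x)
        k2 = f(t + step/2, x + step*k1/2)
        k3 = f(t + step/2, x + step*k2/2)
        k4 = f(t + step, x + step*k3)
        x = post(x + step*(k1 + 2*k2 + 2*k3 + k4)/6)
    return out

# Sanity check on dx/dt = -x: the numerical value at t = 1 should be close to exp(-1).
res = rk4(lambda t, x: -x, (0.0, 1.0), np.array([1.0]), 0.001)
print(res[-1, 1], np.exp(-1.0))
###Output _____no_output_____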
old/precip_demo.ipynb
###Markdown Grabbing the precip files from GPCP NOAA servers ###Code import xarray as xr from PrecipData import getPrecip import numpy as np import matplotlib.pyplot as plt import seaborn as sns sns.set_style("darkgrid", {"axes.facecolor": ".9"}) plt.rcParams.update({'font.size': 22}) plt.rcParams.update({'font.family' : 'sans-serif'}) plt.rcParams.update({'font.sans-serif' : 'DejaVu Sans'}) # # This takes a very long time! # precip = getPrecip(1997,2019) # precip.to_netcdf("GPCP_1997-2019_global.nc") # Read in pre-downloaded data precip = xr.open_dataset("GPCP_1997-2019_global.nc") # We have every daily JJA precip measurement from 1997 to 2019 precip ###Output _____no_output_____ ###Markdown Mumbai ###Code # Mumbai, India (lon,lat) = (72.825833, 18.975), taken from Wikipedia/GeoHack mumbai = precip.sel(longitude=72.825833,latitude=18.975, method='nearest') # There seems to be one spurious very large data point... let's filter it out print(mumbai['precip'].max()) mumbai = mumbai.where(mumbai['precip'] < mumbai['precip'].max()) ###Output <xarray.DataArray 'precip' ()> array(9.96920997e+36) Coordinates: latitude float32 19.0 longitude float32 73.0 ###Markdown Plot CDF of Mumbai precip ###Code fig, ax = plt.subplots(1,1, figsize = (10,10)) kwargs = {'cumulative': True} sns.distplot(mumbai['precip'], hist_kws=kwargs, kde_kws=kwargs, ax = ax) ax.set_xlim(left = 0) ax.set_xlabel('Precipitation [mm]') ax.set_ylabel('Probability') ax.set_title('Mumbai, India (1998 - 2019)') plt.show() ###Output _____no_output_____ ###Markdown What is the 95th percentile value? ###Code prcp95 = np.percentile(mumbai.dropna(dim = 'time')["precip"], 95) prcp95 ###Output _____no_output_____ ###Markdown Add 95th percentile line to plot ###Code fig, ax = plt.subplots(1,1, figsize = (10,10)) kwargs = {'cumulative': True} sns.distplot(mumbai['precip'], hist_kws=kwargs, kde_kws=kwargs, ax = ax) ax.plot([prcp95, prcp95], [0., 0.95], ls = 'dashed', color = 'C3', lw=3) ax.plot([0., prcp95], [0.95, 0.95], ls = 'dashed', color = 'C3', lw=3) ax.text(-15, 0.935, '0.95', color = 'C3') ax.text(prcp95 - 13, -0.105, str(round(prcp95,2)), color = 'C3', rotation = 45) ax.set_xlim(left = 0) ax.set_xticks(range(0,151,30)) ax.set_xlabel('Precipitation [mm]') ax.set_ylabel('Probability') ax.set_title('Mumbai, India (1998 - 2019)') plt.tight_layout() plt.savefig('mumbai_precip_cdf.pdf') ###Output _____no_output_____ ###Markdown What dates have extreme precip (above 95th percentile)? ###Code mumbai.where(mumbai.precip > np.percentile(mumbai.dropna(dim = 'time')["precip"], 95)).dropna(dim = 'time').time ###Output _____no_output_____
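###Markdown As a small aside (not part of the original analysis), the empirical CDF that seaborn draws above can also be computed directly, which is handy when you want the exceedance probability of an arbitrary threshold without re-plotting. The toy rainfall values below are illustrative only; in this notebook you would pass the cleaned precip values used above instead.
###Code
import numpy as np

def empirical_cdf(values):
    """Return (sorted values, cumulative probabilities) for an empirical CDF."""
    x = np.sort(np.asarray(values, dtype=float))
    p = np.arange(1, x.size + 1) / x.size
    return x, p

rain = np.array([0.0, 1.2, 3.5, 10.0, 55.0, 120.0])   # made-up daily totals in mm
x, p = empirical_cdf(rain)
print(np.interp(55.0, x, p))        # fraction of days at or below 55 mm
print(np.percentile(rain, 95))      # compare with the 95th percentile used above
###Output _____no_output_____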
metadata/ncses/SED_SDR.ipynb
###Markdown Connect to Dimensions ###Code api_client = RichContextAPI.connect_dimensions_api() ###Output _____no_output_____ ###Markdown SED ###Code sed_datasets = [{'id':d['id'],'search_terms':[d['title']] + d['alt_title']} for d in datasets if d['id'] == 'dataset-370'] sed_datasets sed_ds_id = sed_datasets[0]['id'] sed_search_terms = sed_datasets[0]['search_terms'] sed_md_list = gen_dimensions_linkages(ds_id = sed_ds_id, search_terms = sed_search_terms,api_client = api_client) export_dimensions_csv(md_list = sed_md_list,file_name = 'sed_dimensions.csv') ###Output _____no_output_____ ###Markdown SDR ###Code sdr_datasets = [{'id':d['id'],'search_terms':[d['title']] + d['alt_title']} for d in datasets if d['id'] == 'dataset-371'] sdr_datasets sdr_ds_id = sdr_datasets[0]['id'] sdr_search_terms = sdr_datasets[0]['search_terms'] sdr_search_terms sdr_ds_id sdr_md_list = gen_dimensions_linkages(ds_id = sdr_ds_id, search_terms = sdr_search_terms,api_client = api_client) export_dimensions_csv(md_list = sdr_md_list,file_name = 'sdr_dimensions.csv') ###Output _____no_output_____ ###Markdown UMETRICS ###Code um_search_terms = ['UMETRICS','Universities: Measuring the Impacts of Research on Innovation, Competitiveness, and Science'] i = um_search_terms[1] dimension_return = run_exact_string_search(string = i,api_client = api_client) dimension_return um_md_list = gen_dimensions_linkages(ds_id = 'tbd', search_terms = um_search_terms,api_client = api_client) dimension_return ###Output _____no_output_____ ###Markdown Scratch ###Code file_name = 'SED_dimensions_linkages.json' with open(file_name, 'w') as outfile: json.dump(md_list, outfile, indent = 2) keys = ['title','doi','journal','search_string','datasets'] md_list_csv = [{k:m[k] for k in keys} for m in md_list] ###Output _____no_output_____
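###Markdown One possible follow-up (an assumption on my part, not something this notebook does) is to combine the per-dataset linkage lists, e.g. sed_md_list and sdr_md_list built above, into a single de-duplicated table keyed on DOI. The sketch assumes each record carries the keys used in the Scratch section ('title', 'doi', 'journal', 'search_string', 'datasets'); adjust if your metadata differs.
###Code
import pandas as pd

def combine_linkages(*md_lists, keys=('title', 'doi', 'journal', 'search_string', 'datasets')):
    """Flatten several linkage lists into one DataFrame, dropping duplicate DOIs."""
    rows = [{k: m.get(k) for k in keys} for md in md_lists for m in md]
    return pd.DataFrame(rows).drop_duplicates(subset='doi')

# Example with two made-up records that share a DOI:
a = [{'title': 'Paper A', 'doi': '10.1/abc', 'journal': 'J1', 'search_string': 'SED', 'datasets': 'dataset-370'}]
b = [{'title': 'Paper A', 'doi': '10.1/abc', 'journal': 'J1', 'search_string': 'SDR', 'datasets': 'dataset-371'}]
print(combine_linkages(a, b))
###Output _____no_output_____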
Practicas/practica1/simbolico.ipynb
###Markdown Practical 1 - Introduction to Jupyter Lab and the ```robots``` library Introduction to symbolic-numeric computation and the ```sympy``` library This document describes the process of obtaining the dynamics of a robot manipulator (a simple pendulum) by means of the Euler-Lagrange equation; let us start by importing the necessary libraries:
###Code
from sympy import var, sin, cos, pi, Matrix, Function, Rational
from sympy.physics.mechanics import mechanics_printing
mechanics_printing()
###Output _____no_output_____
###Markdown Once we have imported the necessary functions, we can start by defining the variables to be used in the calculation:
###Code
var("l1")
###Output _____no_output_____
###Markdown Once the variables are defined, they can be referred to by the same name:
###Code
l1
###Output _____no_output_____
###Markdown We define all the necessary variables at once:
###Code
var("m1 J1 t g")
###Output _____no_output_____
###Markdown And we define the variables that depend on another variable; specifically, in this calculation everything above is constant and only $q_1$ is a time-dependent variable:
###Code
q1 = Function("q1")(t)
###Output _____no_output_____
###Markdown With the variables defined, we can start defining the position of the center of mass of the first (and only) link:
###Code
x1 = l1*cos(q1)
y1 = l1*sin(q1)
x1
y1
###Output _____no_output_____
###Markdown So, if we need to compute the derivative of $x_1$ with respect to time, we have to do:
###Code
x1.diff(t)
###Output _____no_output_____
###Markdown We compute the square of the translational velocity of the first center of mass:
###Code
v1c = x1.diff(t)**2 + y1.diff(t)**2
v1c
###Output _____no_output_____
###Markdown But as can be seen, the computed expression is not necessarily fully simplified; so we can explicitly ask the symbolic algebra engine to try to simplify it further:
###Code
v1c.simplify()
###Output _____no_output_____
###Markdown Saving this simplified expression:
###Code
v1c = v1c.simplify()
###Output _____no_output_____
###Markdown And computing the height and rotational velocity of the link:
###Code
h1 = y1
ω1 = q1.diff(t)
###Output _____no_output_____
###Markdown Computing the kinetic and potential energy:
###Code
K = Rational(1,2)*m1*v1c + Rational(1,2)*J1*ω1**2
U = m1*g*h1
###Output _____no_output_____
###Markdown With these energies we can compute the Lagrangian:
###Code
L = K - U
L
###Output _____no_output_____
###Markdown And once the Lagrangian is obtained, we can start taking the derivatives $\frac{\partial L}{\partial \dot{q}_1}$, $\frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}_1} \right)$ and $\frac{\partial L}{\partial q_1}$
###Code
L.diff(q1.diff(t))
L.diff(q1.diff(t)).diff(t)
L.diff(q1)
###Output _____no_output_____
###Markdown Or, grouping it into the Euler-Lagrange equation:$$\tau_1 = \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}_1} \right) - \frac{\partial L}{\partial q_1}$$
###Code
L.diff(q1.diff(t)).diff(t) - L.diff(q1)
###Output _____no_output_____
###Markdown In this case, we can use the collect method to factor with respect to certain terms, in this case $\ddot{q}_1$:
###Code
τ1 = (L.diff(q1.diff(t)).diff(t) - L.diff(q1)).collect(q1.diff(t).diff(t))
τ1
###Output _____no_output_____
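###Markdown A natural next step (my addition, not part of the original practical) is to solve the Euler-Lagrange equation for the acceleration and turn it into a numerical function with sympy's lambdify, reusing the symbols defined above. The names tau_ext and theta introduced here are purely illustrative.
###Code
from sympy import symbols, solve, lambdify

tau_ext = symbols("tau_ext")                      # generalized torque applied at the joint
qdd = q1.diff(t, 2)
qdd_expr = solve(τ1 - tau_ext, qdd)[0]            # acceleration as a function of configuration and torque
theta = symbols("theta")
qdd_num = lambdify((theta, tau_ext, m1, J1, l1, g), qdd_expr.subs(q1, theta))
print(qdd_num(0.1, 0.0, 1.0, 0.05, 0.3, 9.81))    # numeric spot check with made-up parameters
###Output _____no_output_____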
Greedy/previous/1217. Play with Chips.ipynb
###Markdown Problem description: We are given a list of numbers (chip positions). 1. Moving a chip by two units costs 0. 2. Moving a chip by one unit costs 1. Find the minimum total cost of moving all the chips to the same array index. Example 1: Input: chips = [1,2,3] Output: 1 — moving the chip at position 1 to position 3 costs 0, and moving the chip at position 2 to position 3 costs 1, so the minimum cost is 1. Example 2: Input: chips = [2,2,2,3,3] Output: 2 — the three chips at position 2 stay where they are, and the two chips at position 3 each move one position to 2, so the total cost is 1+1=2.
###Code
class Solution:
    def minCostToMoveChips(self, chips) -> int:
        """The numbers in chips are the given position indices.
        1. Moving odd to odd costs 0, even to even costs 0.
        2. Moving odd to even costs 1, even to odd costs 1.
        3. step_cost = abs(start - end) % 2 * 1  # cost of moving one chip from its start position to the target
        """
        cost = []
        position = set(chips)  # all candidate target positions
        for p in position:
            step_cost = 0
            for chip in chips:
                step_cost += (p - chip) % 2  # cost of moving this chip to the target position
            cost.append(step_cost)  # total cost of moving every chip to this position
        return min(cost)

chips_ = [2,2,2,3,3]
solution = Solution()
solution.minCostToMoveChips(chips_)
###Output _____no_output_____
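###Markdown An alternative sketch (not part of the solution above): because moving by two units is free, only the parity of each chip's position matters, so the answer is simply the smaller of the two parity counts. This runs in O(n) instead of scanning every candidate target position.
###Code
def min_cost_to_move_chips(chips):
    """Greedy O(n) solution: count chips on odd positions; the rest are on even positions."""
    odd = sum(c % 2 for c in chips)
    even = len(chips) - odd
    return min(odd, even)

print(min_cost_to_move_chips([1, 2, 3]))        # 1
print(min_cost_to_move_chips([2, 2, 2, 3, 3]))  # 2
###Output _____no_output_____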
state_and_motion/3. Turning Right.ipynb
###Markdown Turning RightThis notebook provides some initial variables and creates one car object!This time around, you are expected to **modify the car.py file** and test out some new functionality!Your tasks for this notebook are:1. Add a `turn_right()` function to `car.py` - There are a few ways to do this. I'd suggest looking at the code in `turn_left()` or even *using* this function.2. Don't forget to update the **state** as necessary, after a turn!3. Test out your `turn_right()` function in this notebook by visualizing the car as it moves, and printing out the state of the car to see if it matches what you expect! ###Code import numpy as np import car %matplotlib inline # Auto-reload function so that this notebook keeps up with # changes in the class file %load_ext autoreload %autoreload 2 ###Output The autoreload extension is already loaded. To reload it, use: %reload_ext autoreload ###Markdown Create a new car object ###Code # Create a 2D world of 0's height = 4 width = 6 world = np.zeros((height, width)) # Define the initial car state initial_position = [0, 0] # [y, x] (top-left corner) velocity = [0, 1] # [vy, vx] (moving to the right) # Create a car with initial params carla = car.Car(initial_position, velocity, world) ###Output _____no_output_____ ###Markdown Directory of Python filesRemember, to go back to see and change all your files, click on the orange Jupyter icon at the top left of this notebook! There you'll see this notebook and the class file `car.py`, which you'll need to open and change.I recommend opening this notebook and the car.py file in new tabs so that you can easily look at both! ###Code ## TODO: Move carla around, using your new turn_right() function ## Display the result and the state as it changes for i in range(4): for j in range(3): carla.move() carla.turn_right() carla.display_world() ###Output _____no_output_____
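###Markdown One possible sketch for the TODO above, following the notebook's own hint that turn_right() can reuse turn_left(): three left turns are equivalent to one right turn, so no new geometry is needed and the state update stays inside turn_left(). A direct implementation would instead rotate the velocity vector clockwise, but the composition below avoids having to re-derive the sign conventions of the world's coordinate system. This would be added as a method of the Car class in car.py (shown standalone here, purely as a suggestion).
###Code
def turn_right(self):
    """Rotate the car 90 degrees clockwise by composing three 90-degree left turns."""
    for _ in range(3):
        self.turn_left()
###Output _____no_output_____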
_build/html/_sources/ML_LCD_readings/LCD_readings.ipynb
###Markdown Use deep learning to recognise LCD readings Train the text recognition model using deep-text-recognition ([github link](https://github.com/clovaai/deep-text-recognition-benchmark)) Different settings and models were used to achieve best acuracy. The arguments are listed as follow:---**Basic settings:**|Command|help|Input||:---:|:---:|:---:||--exp_name|Where to store logs and models|Directory to store trained model||--train_data|required=True, path to training dataset|Directory of training dataset||--valid_data|required=True, path to validation dataset|Directory of training dataset||--manualSeed|type=int, default=1111|for random seed setting||--workers|type=int, number of data loading workers, default=4|int||--batch_size|type=int, default=192|input batch size||--num_iter|type=int, default=300000|number of iterations to train for||--valInterval|type=int, default=2000, Interval between each validation|int||--saved_model|default='', path of model to continue training|Directory||--FT|action='store_true', whether to do fine-tuning|No input, activates by include this argument||--adam|action='store_true', Whether to use adam (default is Adadelta)|No input||--lr|type=float, default=1, learning rate, default=1.0 for Adadelta|float||--beta1|type=float, default=0.9, beta1 for adam. default=0.9|float||--rho|type=float, default=0.95, decay rate rho for Adadelta. default=0.95|float||--eps|type=float, default=1e-8, eps for Adadelta. default=1e-8|float||--grad_clip| type=float, default=5, gradient clipping value. default=5|float||--baiduCTC| action='store_true', for data_filtering_off mode|No input|---**Data processing:**|Command|help|Input||:---:|:---:|:---:||--select_data| type=str, default='MJ-ST', select training data (default is MJ-ST, which means MJ and ST used as training data|For use sample data||--batch_ratio| type=str, default='0.5-0.5', assign ratio for each selected data in the batch|Use with MJ-ST||--total_data_usage_ratio| type=str, default='1.0', total data usage ratio, this ratio is multiplied to total number of data.|For use part of data||--batch_max_length| type=int, default=25, maximum-label-length| ||--imgH| type=int, default=32, the height of the input image|image size||--imgW| type=int, default=100, the width of the input image|image size||--rgb| action='store_true', use rgb input'|No input||--character| type=str, default='0123456789abcdefghijklmnopqrstuvwxyz', character label|To add or fileter symbols, characters||--sensitive| action='store_true', for sensitive character mode|Use this to recognise Upper case||--PAD| action='store_true', whether to keep ratio then pad for image resize| ||--data_filtering_off| action='store_true', for data_filtering_off mode|No input|---**Model Architecture:**|Command|help|Input||:---:|:---:|:---:||--Transformation| type=str, required=True, Transformation stage. |None or TPS||--FeatureExtraction| type=str, required=True, FeatureExtraction stage. |VGG, RCNN or ResNet||--SequenceModeling| type=str, required=True, SequenceModeling stage. |None or BiLSTM||--Prediction| type=str, required=True, Prediction stage. 
|CTC or Attn||--num_fiducial| type=int, default=20, number of fiducial points of TPS-STN|int||--input_channel| type=int, default=1, the number of input channel of Feature extractor|int||--output_channel| type=int, default=512, the number of output channel of Feature extractor|int||--hidden_size| type=int, default=256, the size of the LSTM hidden state|int| Train the modelsThe variables used will be:|Model|Experiment Name|Command used||:---:|:---:|:---:||VGG | vgg-notran-nolstm-ctc | CUDA_VISIBLE_DEVICES=0 python3 train.py --exp_name vgg-notran-nolstm-ctc \ --train_data result/train --valid_data result/test --batch_size 200 \ --Transformation None --FeatureExtraction VGG --SequenceModeling None --Prediction CTC \ --num_iter 10000 --valInterval 1000 ||VGG | vgg-tps-nolstm-ctc| CUDA_VISIBLE_DEVICES=0 python3 train.py --exp_name vgg-tps-nolstm-ctc \ --train_data result/train --valid_data result/test --batch_size 200 \ --Transformation TPS --FeatureExtraction VGG --SequenceModeling None --Prediction CTC \ --num_iter 10000 --valInterval 1000 ||VGG |vgg-notran-nolstm-attn|CUDA_VISIBLE_DEVICES=0 python3 train.py --exp_name vgg-notran-nolstm-attn \ --train_data result/train --valid_data result/test --batch_size 200 \ --Transformation None --FeatureExtraction VGG --SequenceModeling None --Prediction Attn \ --num_iter 10000 --valInterval 1000||RCNN | rcnn-notran-nolstm-ctc | CUDA_VISIBLE_DEVICES=0 python3 train.py --exp_name rcnn-notran-nolstm-ctc \ --train_data result/train --valid_data result/test --batch_size 200 \ --Transformation None --FeatureExtraction RCNN --SequenceModeling None --Prediction CTC \ --num_iter 10000 --valInterval 1000 ||RCNN | rcnn-notran-nolstm-atnn | CUDA_VISIBLE_DEVICES=0 python3 train.py --exp_name rcnn-notran-nolstm-atnn \ --train_data result/train --valid_data result/test --batch_size 200 \ --Transformation None --FeatureExtraction RCNN --SequenceModeling None --Prediction Attn \ --num_iter 10000 --valInterval 1000 ||ResNet | resnet-notran-nolstm-ctc | CUDA_VISIBLE_DEVICES=0 python3 train.py --exp_name resnet-notran-nolstm-ctc \ --train_data result/train --valid_data result/test --batch_size 200 \ --Transformation None --FeatureExtraction ResNet --SequenceModeling None --Prediction CTC \ --num_iter 10000 --valInterval 1000 ||ResNet | resnet-notran-nolstm-atnn | CUDA_VISIBLE_DEVICES=0 python3 train.py --exp_name resnet-notran-nolstm-atnn \ --train_data result/train --valid_data result/test --batch_size 200 \ --Transformation None --FeatureExtraction ResNet --SequenceModeling None --Prediction Attn \ --num_iter 10000 --valInterval 1000 | Experiment checklist ###Code from IPython.display import display from ipywidgets import Checkbox box1 = Checkbox(False, description='vgg-notran-nolstm-ctc') box2 = Checkbox(False, description='vgg-notran-nolstm-attn') box3 = Checkbox(False, description='rcnn-notran-nolstm-ctc') box4 = Checkbox(False, description='rcnn-notran-nolstm-atnn') box5 = Checkbox(False, description='resnet-notran-nolstm-ctc') box6 = Checkbox(False, description='resnet-notran-nolstm-atnn') display(box1,box2,box3,box4,box5,box6) def changed(b): print(b) box1.observe(changed) box2.observe(changed) box3.observe(changed) box4.observe(changed) box5.observe(changed) box6.observe(changed) ###Output _____no_output_____ ###Markdown Experiment summaryBy using ResNet (no Transformation, no BiLTSM) with ctc prediction, an prediction accuracy of over 98 % was achieved.|Model|Exp Name|Accuracy||:---:|:---:|:---:||VGG | vgg-notran-nolstm-ctc |90.837||VGG | 
vgg-tps-nolstm-ctc|64.542||VGG |vgg-notran-nolstm-attn|86.853||RCNN | rcnn-notran-nolstm-ctc |80.080||RCNN | rcnn-notran-nolstm-atnn | - ||ResNet | resnet-notran-nolstm-ctc |98.805||ResNet | resnet-notran-nolstm-atnn |94.422| Command to train ResNet with a batch size of 50:```!CUDA_VISIBLE_DEVICES=0 python3 train.py --exp_name resnet-notran-nolstm-ctc-bs50 \--train_data result/train --valid_data result/test --batch_size 50 \--Transformation None --FeatureExtraction ResNet --SequenceModeling None --Prediction CTC \--num_iter 10000 --valInterval 1000 \--saved_model saved_models/resnet-notran-nolstm-ctc/best_accuracy.pth``` Predict readings from trained model ###Code %cd /mnt/c/Users/stcik/scire/papers/muon/deep-text-recognition-benchmark # Predict 90C data output = !python3 predict.py \ --Transformation None --FeatureExtraction ResNet --SequenceModeling None --Prediction CTC \ --image_folder 90C/ --batch_size 400 \ --saved_model resnet-notran-nolstm-ctc-50bs.pth output from IPython.core.display import display, HTML from PIL import Image import base64 import io import pandas as pd import numpy as np import matplotlib.pyplot as plt from cycler import cycler plt.rcParams.update({ "text.usetex": True, "font.family": "DejaVu Sans", "font.serif": ["Computer Modern Roman"], "font.size": 10, "xtick.labelsize": 10, "ytick.labelsize": 10, "figure.subplot.left": 0.21, "figure.subplot.right": 0.96, "figure.subplot.bottom": 0.18, "figure.subplot.top": 0.93, "legend.frameon": False, }) params= {'text.latex.preamble' : [r'\usepackage{amsmath, amssymb, unicode-math}', r'\usepackage[dvips]{graphicx}', r'\usepackage{xfrac}', r'\usepackage{amsbsy}']} data = pd.DataFrame() for ind, row in enumerate(output[ output.index('image_path \tpredicted_labels \tconfidence score')+2: ]): row = row.split('\t') filename = row[0].strip() label = row[1].strip() conf = row[2].strip() img = Image.open(filename) img_buffer = io.BytesIO() img.save(img_buffer, format="PNG") imgStr = base64.b64encode(img_buffer.getvalue()).decode("utf-8") data.loc[ind, 'Image'] = '<img src="data:image/png;base64,{0:s}">'.format(imgStr) data.loc[ind, 'File name'] = filename data.loc[ind, 'Reading'] = label data.loc[ind, 'Confidence'] = conf html_all = data.to_html(escape=False) display(HTML(html_all)) ###Output _____no_output_____ ###Markdown Visualise the predicted data, correct wrong readings and calculate the average and error off the readings. Correct the readings ###Code # Convert data from string to float data['Reading']=data['Reading'].astype(float) # selecting rows based on condition rslt_df = data[(data['Reading'] < 85) | (data['Reading'] > 95)] html_failed = rslt_df.to_html(escape=False) display(HTML(html_failed)) data['Reading'].to_excel("90C_readings.xlsx") ###Output _____no_output_____ ###Markdown There are no wrong predictions, we can directly plot the data. 
###Code import numpy as np import matplotlib.pyplot as plt def adjacent_values(vals, q1, q3): upper_adjacent_value = q3 + (q3 - q1) * 1.5 upper_adjacent_value = np.clip(upper_adjacent_value, q3, vals[-1]) lower_adjacent_value = q1 - (q3 - q1) * 1.5 lower_adjacent_value = np.clip(lower_adjacent_value, vals[0], q1) return lower_adjacent_value, upper_adjacent_value fig, ax = plt.subplots(1,2,figsize=(6.4,3),tight_layout=True) time = range(1,196) num_bins = 20 # the histogram of the data ax[0].plot(time,data['Reading']) ax[0].set_xlabel('$t$ (s)') ax[0].set_ylabel('Readings ($\degree$C)') violin_data = [sorted(data['Reading'])] parts = ax[1].violinplot(violin_data, positions=[0], showmeans=False, showmedians=False, showextrema=False) for pc in parts['bodies']: pc.set_facecolor('#D43F3A') pc.set_edgecolor('black') pc.set_alpha(1) quartile1, medians, quartile3 = np.percentile(violin_data, [25, 50, 75], axis=1) whiskers = np.array([ adjacent_values(sorted_array, q1, q3) for sorted_array, q1, q3 in zip(violin_data, quartile1, quartile3)]) whiskers_min, whiskers_max = whiskers[:, 0], whiskers[:, 1] mean=np.mean(violin_data) inds = np.arange(0, len(medians)) ax[1].scatter(inds, medians, marker='o', edgecolors='tab:blue', c='white', s=55, zorder=3, label = f'median: %.1f'% mean) ax[1].scatter(inds, mean, marker='s', edgecolors='tab:blue', c='white', s=45, zorder=4, label = f'mean: %.1f'% medians) ax[1].vlines(inds, quartile1, quartile3, color='k', linestyle='-', lw=5) ax[1].vlines(inds, whiskers_min, whiskers_max, color='k', linestyle='-', lw=1) ax[1].set_xlabel('Probability density') ax[1].legend(frameon=False, loc=0) plt.savefig('90C_prediction.eps') plt.show() ###Output _____no_output_____
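###Markdown A small reusable sketch of the sanity-check step performed above (discarding readings outside an expected band, then summarising the rest), so the same check can be applied to other set-point folders, e.g. other temperatures. The band limits and toy readings below are illustrative only.
###Code
import pandas as pd

def summarize_readings(readings, low, high):
    """Split readings into plausible/implausible and summarise the plausible ones."""
    s = pd.Series(readings, dtype=float)
    good = s[(s >= low) & (s <= high)]
    bad = s[(s < low) | (s > high)]
    stats = {"n": int(good.size), "mean": good.mean(), "median": good.median(), "std": good.std()}
    return good, bad, stats

good, bad, stats = summarize_readings([89.6, 90.1, 90.3, 120.0], low=85, high=95)
print(stats, "rejected:", list(bad))
###Output _____no_output_____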
discrepancy_effort.ipynb
###Markdown bed & covid ###Code # descrepancy_bed_covid df_discrepancy_bed_covid = pd.DataFrame.from_dict(discrepancy_bed_covid, orient="index", columns=["bed_covid_discrepancy","bed_covid_details"]) df_discrepancy_bed_covid = df_discrepancy_bed_covid.reset_index().rename(columns={"index":"STATE"}) df_discrepancy_bed_covid["bed_covid_discrepancy"] = df_discrepancy_bed_covid["bed_covid_discrepancy"].fillna(0) # df_discrepancy_bed_covid # effort_bed_covid df_effort_bed_covid = pd.DataFrame.from_dict(effort_bed_covid, orient="index", columns=["bed_covid_effort"]) df_effort_bed_covid = df_effort_bed_covid.reset_index().rename(columns={"index":"STATE"}) df_effort_bed_covid["bed_covid_effort"] = df_effort_bed_covid["bed_covid_effort"].fillna(0) # df_effort_bed_covid plt.rcParams['figure.figsize'] = (80, 4.0) plt.figure() plt.bar(df_discrepancy_bed_covid["STATE"],df_discrepancy_bed_covid["bed_covid_discrepancy"]) plt.xlabel("State") plt.ylabel("Discrepancy") plt.show() plt.figure() plt.bar(df_effort_bed_covid["STATE"],df_effort_bed_covid["bed_covid_effort"]) plt.xlabel("State") plt.ylabel("Effort") plt.show() # count for discrepancy (<0, =0, >0) df_discrepancy_bed_covid['level'] = df_discrepancy_bed_covid.apply(lambda x: np.sign(x.bed_covid_discrepancy), axis = 1) # df_discrepancy_bed_covid count_dis_bed_covid = df_discrepancy_bed_covid.groupby("level")["STATE"].size() count_dis_bed_covid ###Output _____no_output_____ ###Markdown population & covid ###Code # discrepancy_pop_covid df_discrepancy_pop_covid = pd.DataFrame.from_dict(discrepancy_pop_covid, orient="index", columns=["pop_covid_discrepancy","pop_covid_details"]) df_discrepancy_pop_covid = df_discrepancy_pop_covid.reset_index().rename(columns={"index":"STATE"}) df_discrepancy_pop_covid["pop_covid_discrepancy"] = df_discrepancy_pop_covid["pop_covid_discrepancy"].fillna(0) # df_discrepancy_pop_covid # effort_pop_covid df_effort_pop_covid = pd.DataFrame.from_dict(effort_pop_covid, orient="index", columns=["pop_covid_effort"]) df_effort_pop_covid = df_effort_pop_covid.reset_index().rename(columns={"index":"STATE"}) df_effort_pop_covid["pop_covid_effort"] = df_effort_pop_covid["pop_covid_effort"].fillna(0) # df_effort_pop_covid plt.rcParams['figure.figsize'] = (80, 4.0) plt.figure() plt.bar(df_discrepancy_pop_covid["STATE"],df_discrepancy_pop_covid["pop_covid_discrepancy"]) plt.xlabel("State") plt.ylabel("Discrepancy") plt.show() plt.figure() plt.bar(df_effort_pop_covid["STATE"],df_effort_pop_covid["pop_covid_effort"]) plt.xlabel("State") plt.ylabel("Effort") plt.show() # count for discrepancy (<0, =0, >0) df_discrepancy_pop_covid['level'] = df_discrepancy_pop_covid.apply(lambda x: np.sign(x.pop_covid_discrepancy), axis = 1) # df_discrepancy_bed_covid count_dis_pop_covid = df_discrepancy_pop_covid.groupby("level")["STATE"].size() count_dis_pop_covid ###Output _____no_output_____ ###Markdown bed & population ###Code # discrepancy_bed_pop df_discrepancy_bed_pop = pd.DataFrame.from_dict(discrepancy_bed_pop, orient="index", columns=["bed_pop_discrepancy","bed_pop_details"]) df_discrepancy_bed_pop = df_discrepancy_bed_pop.reset_index().rename(columns={"index":"STATE"}) df_discrepancy_bed_pop["bed_pop_discrepancy"] = df_discrepancy_bed_pop["bed_pop_discrepancy"].fillna(0) # df_discrepancy_bed_pop # effort_bed_pop df_effort_bed_pop = pd.DataFrame.from_dict(effort_bed_pop, orient="index", columns=["bed_pop_effort"]) df_effort_bed_pop = df_effort_bed_pop.reset_index().rename(columns={"index":"STATE"}) df_effort_bed_pop["bed_pop_effort"] = 
df_effort_bed_pop["bed_pop_effort"].fillna(0) # df_effort_bed_pop plt.rcParams['figure.figsize'] = (80, 4.0) plt.figure() plt.bar(df_discrepancy_bed_pop["STATE"],df_discrepancy_bed_pop["bed_pop_discrepancy"]) plt.xlabel("State") plt.ylabel("Discrepancy") plt.show() plt.figure() plt.bar(df_effort_bed_pop["STATE"],df_effort_bed_pop["bed_pop_effort"]) plt.xlabel("State") plt.ylabel("Effort") plt.show() # count for discrepancy (<0, =0, >0) df_discrepancy_bed_pop['level'] = df_discrepancy_bed_pop.apply(lambda x: np.sign(x.bed_pop_discrepancy), axis = 1) # df_discrepancy_bed_covid count_dis_bed_pop = df_discrepancy_bed_pop.groupby("level")["STATE"].size() count_dis_bed_pop ###Output _____no_output_____ ###Markdown merge all results ###Code df_final = pd.DataFrame(columns=["STATE"]) dfs=[df_discrepancy_bed_covid, df_discrepancy_pop_covid, df_discrepancy_bed_pop, df_effort_bed_covid, df_effort_pop_covid, df_effort_bed_pop] for df in dfs: df_final = df_final.merge(df, on=['STATE'], how='outer') df_final = df_final.drop(["bed_covid_details","pop_covid_details","bed_pop_details","level_x","level_y","level"],axis=1) # drop DC df_final = df_final.drop([7]) df_final = df_final.reset_index(drop=True) df_final # max->min import copy ranks = copy.copy(df_final) ranks[list(df_final.columns[1:])] = df_final[list(df_final.columns[1:])].rank(method="min", ascending=False) ranks outputpath = "./data/discr_eff/" if not os.path.exists(outputpath): os.makedirs(outputpath) df_final.to_csv(os.path.join(outputpath, "discr_eff_val.csv")) ranks.to_csv(os.path.join(outputpath, "discr_eff_rank.csv")) ###Output _____no_output_____ ###Markdown evaluate ###Code import os evaluatepath = "./data/discr_eff/" avaliable_bed = pd.read_csv(os.path.join(evaluatepath, "Summary_stats_all_locs.csv"),header=0) avaliable_bed = avaliable_bed[["location_name", "available_all_nbr"]] # avaliable_bed healthrank_path = "./data/discr_eff/" state_rank = pd.read_excel(os.path.join(healthrank_path, "stateRank.xlsx"),header=None) # drop Alaska and Hawaii state_rank = state_rank.drop([0,10]) state_rank = state_rank.reset_index(drop=True) state_rank["OverallRank"] = state_rank[5].rank(method="min", ascending=False) # state_rank new_rank = pd.merge(ranks, avaliable_bed, left_on =["STATE"], right_on=["location_name"], how="left") new_rank["bedRank"] = new_rank["available_all_nbr"].rank(method="min", ascending=False) dis1 = list(ranks.bed_covid_discrepancy) # r_dis_pc = list(ranks.pop_covid_discrepancy) # r_dis_bp = list(ranks.bed_pop_discrepancy) eff1 = list(ranks.bed_covid_effort) # r_eff_pc = list(ranks.pop_covid_effort) # r_eff_bp = list(ranks.bed_pop_effort) # dis_ranks = [r_dis_bc, r_dis_pc, r_dis_bp] # eff_ranks = [r_eff_bc, r_eff_pc, r_eff_bp] healthrank = list(state_rank.OverallRank) bedrank = list(new_rank.bedRank) from scipy import stats print("Bed Rank:") print("discrepancy:") print(stats.spearmanr(dis1, bedrank),stats.kendalltau(dis1, bedrank)) print("\n effort:") print(stats.spearmanr(eff1, bedrank),stats.kendalltau(eff1, bedrank)) print("\nHealth Rank:") print("discrepancy:") print(stats.spearmanr(dis1, healthrank),stats.kendalltau(dis1, healthrank)) print("\n effort:") print(stats.spearmanr(eff1, healthrank),stats.kendalltau(eff1, healthrank)) ###Output Bed Rank: discrepancy: SpearmanrResult(correlation=0.30525401650021705, pvalue=0.03487913654065799) KendalltauResult(correlation=0.20035460992907803, pvalue=0.04456926421584259) effort: SpearmanrResult(correlation=-0.13341293964394269, pvalue=0.36599662767031493) 
KendalltauResult(correlation=-0.09929078014184398, pvalue=0.31951233651960764) Health Rank: discrepancy: SpearmanrResult(correlation=0.05134607034303083, pvalue=0.7289011140152404) KendalltauResult(correlation=0.04609929078014185, pvalue=0.6439536077376282) effort: SpearmanrResult(correlation=-0.017042987407729047, pvalue=0.9084664677267075) KendalltauResult(correlation=-0.019503546099290784, pvalue=0.8449726626091453)
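###Markdown The eight correlation statistics printed above (discrepancy and effort, each against the bed rank and the health rank) are easier to compare side by side. Below is a minimal sketch, assuming the lists `dis1`, `eff1`, `bedrank` and `healthrank` built earlier are still in scope, that collects them into one small summary table. ###Code
# Gather the Spearman / Kendall statistics computed above into a single DataFrame
# so the four ranking comparisons can be read at a glance.
rows = []
for ref_name, ref in [("bedRank", bedrank), ("healthRank", healthrank)]:
    for series_name, series in [("discrepancy", dis1), ("effort", eff1)]:
        rho, rho_p = stats.spearmanr(series, ref)
        tau, tau_p = stats.kendalltau(series, ref)
        rows.append({"reference": ref_name, "ranking": series_name,
                     "spearman_rho": rho, "spearman_p": rho_p,
                     "kendall_tau": tau, "kendall_p": tau_p})

summary = pd.DataFrame(rows)
summary
###Output _____no_output_____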
misc/2017-03-02-day06.ipynb
###Markdown ---layout: postauthor: csiudate: 2017-03-02title: "Day06: Jupyter Notebook, meet Jekyll blog post"categories: updatetags: - 100daysofcode - setupexcerpt: Integrating code --- DAY 06 - Mar 2, 2017 Data Science meetupToday I went to the [Data Science meetup for "Using NLP & Machine Learning to understand and predict performance"](https://www.meetup.com/DataScience/events/237733099/). Fascinating stuff. Somewhat similar to my thesis work, and the talk inspired a few ideas for future projects. ###Code speaker = 'Thomas Levi' topics_mentioned_at_meetup = [ "latent dirichlet allocation", "collapsed gibbs sampling", "bayesian inference", "topic modelling", "porter stemmer", "flesch reading ease", "word2vec" ] ###Output _____no_output_____ ###Markdown *Anyways, I just got home and now (as I'm typing this) have 35 minutes to do something and post it for Day06.* Jupyter Notebook, meet Jekyll blog postGoing back to a comment I recently received about including and embedding code in my Jekyll blog posts, I thought I would tackle this problem now. The issue is that I use Jupyter Notebooks to explore and analyze data, but I haven't really looked at their integration with Jekyll blog posts. ###Code for t in topics_mentioned_at_meetup: print("- '{}' was mentioned".format(t)) ###Output - 'latent dirichlet allocation' was mentioned - 'collapsed gibbs sampling' was mentioned - 'bayesian inference' was mentioned - 'topic modelling' was mentioned - 'porter stemmer' was mentioned - 'flesch reading ease' was mentioned - 'word2vec' was mentioned
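###Markdown One low-effort way to get a notebook into a Jekyll post (a sketch of the general idea, not necessarily what this post ended up using) is to let `nbconvert` export it to Markdown and drop the result into the blog's posts folder. The `_posts` output directory below is an assumption based on the usual Jekyll layout. ###Code
import subprocess

# Export this notebook to Markdown so Jekyll can render it as a post.
# "_posts" is an assumed output directory; adjust to the actual blog layout.
subprocess.run(
    ["jupyter", "nbconvert", "--to", "markdown",
     "2017-03-02-day06.ipynb", "--output-dir", "_posts"],
    check=True,
)
###Output _____no_output_____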
week3/avneesh/Q3 - 2/Attempt1_filesubmission_Avneesh Mishra - Smooth Planning Spirals.ipynb
###Markdown Remember that in week 1 we had generated open-loop commands for a set of manoeuvres such as$[("straight", 5), ("right", 90), ("straight", 6), ("left", 90)]$Let us do repeat, but with a change. Instead of left/ right, simply use turn and a signed angle.$[("straight", 5), ("turn", -90), ("straight", 6), ("turn", 90)]$You can use cubic_spiral() from previous notebook ###Code v = 1 dt = 0.1 num_st_pts = int(v/dt) num_pts = 50 def cubic_spiral(theta_i, theta_f, n=10): x = np.linspace(0, 1, num=n) #-2*x**3 + 3*x**2 return (theta_f-theta_i)*(-2*x**3 + 3*x**2) + theta_i def straight(dist, curr_pose, n=num_st_pts): # the straight-line may be along x or y axis x0, y0, t0 = curr_pose xf, yf = x0 + dist*np.cos(t0), y0 + dist*np.sin(t0) x = (xf - x0) * np.linspace(0, 1, n) + x0 y = (yf - y0) * np.linspace(0, 1, n) + y0 return x, y, t0*np.ones_like(x) def turn(change, curr_pose, n=num_pts): # adjust scaling constant for desired turn radius x0, y0, t0 = curr_pose theta = cubic_spiral(t0, t0 + np.deg2rad(change), n) x= x0 + np.cumsum(v*np.cos(theta)*dt) y= y0 + np.cumsum(v*np.sin(theta)*dt) return x, y, theta def generate_trajectory(route, init_pose = (0, 0,np.pi/2)): curr_pose = init_pose func = {'straight': straight, 'turn': turn} x, y, t = np.array([]), np.array([]),np.array([]) for manoeuvre, command in route: px, py, pt = func[manoeuvre](command, curr_pose) curr_pose = px[-1],py[-1],pt[-1] # New current pose x = np.concatenate([x, px]) y = np.concatenate([y, py]) t = np.concatenate([t, pt]) return x, y, t ###Output _____no_output_____ ###Markdown Plot the trajectoryplot the trajectory and the change in orientation in separate plots ###Code route = [ ("straight", 5), ("turn", -90), ("straight", 6), ("turn", 90) ] x, y, th = generate_trajectory(route) plt.figure(figsize=(12, 5), dpi=80) plt.subplot(1,2,1) plt.axis('equal') plt.title("XY plot") plt.plot(x, y) plt.grid() plt.subplot(1,2,2) plt.title("Theta") plt.plot(th) plt.grid() ###Output _____no_output_____ ###Markdown Convert A* or Djikstra gives a sequence of $\{(x_i, y_i)\}$. We need to convert it to a sequence of {"straight", "turn"} if we are use generate_trajectory()Let us look at a simple method. Assume that the successive line segments are orthogonal (reasonable in the grid world). If we find the corner point, we can demarcate. For 3 consecutive points $(x_1,y_1), (x_2, y_2), (x_3, y_3)$ if $(x_1 - x_2)(y_3-y2) - (x_3-x_2)(y_2-y_1) \neq 0$, then $(x_2, y_2)$ is a corner point. This is much because the $\frac{\Delta Y}{\Delta X}$ value has canged (slope).Think about what is happening if1. $(x_1 - x_2)(y_3-y2) - (x_3-x_2)(y_2-y_1) > 0$2. 
$(x_1 - x_2)(y_3-y2) - (x_3-x_2)(y_2-y_1) < 0$ ###Code # here is a code to generate 2 orthogonal # line segments of lengths 6 s1, s2 = 6, 6 y1 = list(range(s1)) x1 = [0]*s1 x2 = list(range(s2)) y2 = [y1[-1]]*s2 x, y = x1[:-1]+x2, y1[:-1]+y2 plt.figure() plt.title("Path") plt.plot(x, y) plt.grid() #find the corner point and plot it interest_points = [(x[0], y[0])] # Interest points (corners + start and stop) for i in range(1, len(x)-1, 1): x1, y1 = x[i-1], y[i-1] x2, y2 = x[i], y[i] x3, y3 = x[i+1], y[i+1] ang12 = np.arctan2(y2-y1, x2-x1) ang23 = np.arctan2(y3-y2, x3-x2) ang_rel = ang23 - ang12 if ang_rel != 0: interest_points.append((x2, y2)) print(f"Angle {np.rad2deg(ang_rel):.2f} at {(x2, y2)}") interest_points.append((x[-1], y[-1])) # Fix a turn radius r # Shorten the straight segments by r # convert this into {("straight", s1), ("turn", +/- 90), ("straight", s2)} turn_radius = 1 path = [] def dist(p1, p2): x1, y1 = p1 x2, y2 = p2 return ((x2-x1)**2+(y2-y1)**2)**(0.5) # Start the main thing turn_radius = 3 * turn_radius path.append(("straight", dist(interest_points[0], interest_points[1]) - (0 if len(interest_points) == 2 else turn_radius))) for i in range(1, len(interest_points)-1): x1, y1 = interest_points[i-1] x2, y2 = interest_points[i] x3, y3 = interest_points[i+1] ang = np.arctan2((y3-y2), (x3-x2)) - np.arctan2((y2-y1), (x2-x1)) path.append(("turn", np.rad2deg(ang))) # Add the turn path.append(("straight", dist((x2, y2), (x3, y3)) - turn_radius - (0 if i+1 == len(interest_points)-1 else turn_radius))) print(f"Path is {path}") # use generate_trajectory() and plot the smooth path x, y, th = generate_trajectory(path) plt.figure(figsize=(12, 5), dpi=80) plt.subplot(1,2,1) plt.axis('equal') plt.title("XY plot") plt.plot(x, y) plt.grid() plt.subplot(1,2,2) plt.title("Theta") plt.plot(th) plt.grid() ###Output Angle -90.00 at (0, 5) Path is [('straight', 2.0), ('turn', -90.0), ('straight', 2.0)] ###Markdown Saving the path as a `.npy` file ###Code save_path = np.hstack((x.reshape(-1, 1), y.reshape(-1, 1), th.reshape(-1, 1))) np.save("./data/srs_path.npy", save_path) # Save the path ###Output _____no_output_____ ###Markdown More complex exampleBorrow the Grid world code from week 2 notebook. Get the A* path and smoothen it using the routine from above ###Code !tree ###Output . └── data ├── astar_grid.npy └── srs_path.npy 1 directory, 2 files ###Markdown Import important libraries ###Code import networkx as nx ###Output _____no_output_____ ###Markdown Load the grid ###Code # Load grid grid = np.load("./data/astar_grid.npy") print(f"Loaded grid of shape {grid.shape}") # you can define your own start/ end start = (0, 0) goal = (0, 19) # visualize the start/ end and the robot's environment fig, ax = plt.subplots(figsize=(12,12)) ax.imshow(grid, cmap=plt.cm.Dark2) ax.scatter(start[1],start[0], marker = "+", color = "yellow", s = 200) ax.scatter(goal[1],goal[0], marker = "+", color = "red", s = 200) plt.show() ###Output Loaded grid of shape (20, 20) ###Markdown Remove nodes that are occupied ###Code #initialize graph grid_size = grid.shape G = nx.grid_2d_graph(*grid_size) # G.nodes -> (0,0), (0,1), ... 
(19, 18), (19, 19) num_nodes = 0 # counter to keep track of deleted nodes #nested loop to remove nodes that are not connected #free cell => grid[i, j] = 0 #occupied cell => grid[i, j] = 1 for i in range(grid_size[0]): for j in range(grid_size[1]): if grid[i, j] == 1: # If occupied G.remove_node((i, j)) num_nodes += 1 print(f"Removed {num_nodes} nodes") print(f"Number of occupied cells in grid {np.sum(grid)}") pos = {(x,y):(y,-x) for x,y in G.nodes()} # Converting axis nx.draw(G, pos=pos, node_color='green', node_size=100) ###Output Removed 77 nodes Number of occupied cells in grid 77 ###Markdown Create an A* path ###Code def euclidean(node1, node2): x1, y1 = node1 x2, y2 = node2 return ((x1-x2)**2 + (y1-y2)**2)**0.5 nx.set_edge_attributes(G, {e: 1 for e in G.edges()}, "cost") # All edges have cost = 1 weight = 1.0 # Weight for heuristic astar_path = nx.astar_path(G, start, goal, heuristic=lambda n1, n2: weight * euclidean(n1, n2), weight="cost") print(astar_path) # Visualize the path fig, ax = plt.subplots(figsize=(12,12)) ax.imshow(grid, cmap=plt.cm.Dark2) ax.scatter(start[1],start[0], marker = "+", color = "yellow", s = 200) ax.scatter(goal[1],goal[0], marker = "+", color = "red", s = 200) for s in astar_path[1:]: ax.plot(s[1], s[0],'r+') ###Output [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (3, 2), (4, 2), (5, 2), (6, 2), (6, 3), (6, 4), (6, 5), (7, 5), (8, 5), (9, 5), (10, 5), (11, 5), (12, 5), (13, 5), (14, 5), (14, 4), (14, 3), (14, 2), (14, 1), (15, 1), (16, 1), (17, 1), (18, 1), (18, 2), (18, 3), (17, 3), (17, 4), (17, 5), (17, 6), (17, 7), (17, 8), (17, 9), (17, 10), (17, 11), (17, 12), (17, 13), (17, 14), (16, 14), (15, 14), (14, 14), (13, 14), (13, 13), (13, 12), (13, 11), (13, 10), (12, 10), (11, 10), (10, 10), (10, 11), (10, 12), (10, 13), (10, 14), (9, 14), (9, 15), (9, 16), (8, 16), (7, 16), (7, 17), (7, 18), (6, 18), (5, 18), (4, 18), (3, 18), (2, 18), (1, 18), (0, 18), (0, 19)] ###Markdown Get the path ###Code a = np.array(astar_path) y, x = -a[:, 0], a[:, 1] #find the corner point and plot it interest_points = [(x[0], y[0])] # Interest points (corners + start and stop) for i in range(1, len(x)-1, 1): x1, y1 = x[i-1], y[i-1] x2, y2 = x[i], y[i] x3, y3 = x[i+1], y[i+1] ang12 = np.arctan2(y2-y1, x2-x1) ang23 = np.arctan2(y3-y2, x3-x2) ang_rel = ang23 - ang12 if ang_rel != 0: if ang_rel == 3*np.pi/2: ang_rel = -np.pi/2 if ang_rel == -3*np.pi/2: ang_rel = np.pi/2 interest_points.append((x2, y2)) # print(f"Angle {np.rad2deg(ang_rel):.2f} at {(x2, y2)}") interest_points.append((x[-1], y[-1])) # Fix a turn radius r # Shorten the straight segments by r # convert this into {("straight", s1), ("turn", +/- 90), ("straight", s2)} turn_radius = 1/3 path = [] def dist(p1, p2): x1, y1 = p1 x2, y2 = p2 return ((x2-x1)**2+(y2-y1)**2)**(0.5) # Start the main thing path.append(("straight", dist(interest_points[0], interest_points[1]) - (0 if len(interest_points) == 2 else turn_radius))) for i in range(1, len(interest_points)-1): x1, y1 = interest_points[i-1] x2, y2 = interest_points[i] x3, y3 = interest_points[i+1] ang = np.arctan2((y3-y2), (x3-x2)) - np.arctan2((y2-y1), (x2-x1)) if ang == 3 * np.pi/2: ang = -np.pi/2 elif ang == -3*np.pi/2: ang = np.pi/2 path.append(("turn", np.rad2deg(ang))) # Add the turn path.append(("straight", dist((x2, y2), (x3, y3)) - turn_radius - (0 if i+1 == len(interest_points)-1 else turn_radius))) xi, yi = x, y # Backup v = 1 dt = 0.01 num_st_pts = int(v/dt) num_pts = 50 # Generate the path x, y, th = generate_trajectory(path, (0, 0, 0)) # Visualize the path first 
plt.figure(figsize=(12, 12), dpi=80) plt.title("Path") plt.plot(xi, yi, label="actual") plt.plot(x, y, label="smooth") plt.legend() plt.grid() ###Output _____no_output_____ ###Markdown This approach of path planning with 90 deg turns juxtaposed between straight segments works well in structured environments.In the general case, where $A^*$/ $RRT^*$ path is a sequence of piecewise linear segments, we will perform a path optimization routine directly. There are 3 more advanced manouevres that you may need1. Lane-change: Robot has to move laterally but without change to the orientation2. Inplace: Robot has to turn around itself 3. Reverse: Straights or turns in reverseLane-change has to be applied as a combination of 2 cubic spirals (90 to 0 and 0 to 90). Inplace and Reverse are situational constructs ###Code ###Output _____no_output_____
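###Markdown As a rough illustration of the lane-change idea above (a sketch only): chaining two cubic spirals, 90 to 0 and then 0 to 90, reusing the `cubic_spiral`, `v` and `dt` already defined. The spiral length `n` is an arbitrary choice and controls how long (and how wide) the lane change is. ###Code
# Lane-change sketch: two cubic spirals (90 deg -> 0 deg, then 0 deg -> 90 deg).
# The net heading is unchanged while the robot is displaced laterally.
n = 50
theta_lc = np.concatenate([
    cubic_spiral(np.pi / 2, 0.0, n),   # peel away from the current heading
    cubic_spiral(0.0, np.pi / 2, n),   # blend back to the original heading
])
x_lc = np.cumsum(v * np.cos(theta_lc) * dt)
y_lc = np.cumsum(v * np.sin(theta_lc) * dt)

plt.figure()
plt.axis('equal')
plt.title("Lane-change from two cubic spirals (sketch)")
plt.plot(x_lc, y_lc)
plt.grid()
plt.show()
###Output _____no_output_____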
notebooks/unsupervised_ML/Apriori_Algorithm/Apriori_Algorithm.ipynb
###Markdown Apriori Algorithm: Super Market Example ###Code import numpy as np import matplotlib.pyplot as plt import pandas as pd from apyori import apriori store_data = pd.read_csv('../../../datasets/unsupervised_ML/Apriori_Algorithm/store_data.csv', header=None) store_data.head(10) store_data.columns ## preprocess records = [] for i in range(0, 7501): records.append([str(store_data.values[i,j]) for j in range(0, 20)]) from mlxtend.preprocessing import TransactionEncoder te = TransactionEncoder() te_ary = te.fit(records).transform(records) store_data = pd.DataFrame(te_ary, columns=te.columns_) store_data ## Applying apriori #association_rules = apriori(records, min_support=0.0045, min_confidence=0.2, min_lift=3, min_length=2) association_rules = apriori(records, min_support=0.0045, min_lift=3) association_results = list(association_rules) ## Viewing Resulsts print(len(association_results)) print(association_results[0]) for item in association_results: # first index of the inner list # Contains base item and add item pair = item[0] items = [x for x in pair] print("Rule: " + items[0] + " -> " + items[1]) #second index of the inner list print("Support: " + str(item[1])) #third index of the list located at 0th #of the third index of the inner list print("Confidence: " + str(item[2][0][2])) print("Lift: " + str(item[2][0][3])) print("=====================================") ###Output Rule: chicken -> light cream Support: 0.004532728969470737 Confidence: 0.29059829059829057 Lift: 4.84395061728395 ===================================== Rule: mushroom cream sauce -> escalope Support: 0.005732568990801226 Confidence: 0.3006993006993007 Lift: 3.790832696715049 ===================================== Rule: pasta -> escalope Support: 0.005865884548726837 Confidence: 0.3728813559322034 Lift: 4.700811850163794 ===================================== Rule: herb & pepper -> ground beef Support: 0.015997866951073192 Confidence: 0.3234501347708895 Lift: 3.2919938411349285 ===================================== Rule: tomato sauce -> ground beef Support: 0.005332622317024397 Confidence: 0.3773584905660377 Lift: 3.840659481324083 ===================================== Rule: whole wheat pasta -> olive oil Support: 0.007998933475536596 Confidence: 0.2714932126696833 Lift: 4.122410097642296 ===================================== Rule: shrimp -> pasta Support: 0.005065991201173177 Confidence: 0.3220338983050847 Lift: 4.506672147735896 ===================================== Rule: chicken -> nan Support: 0.004532728969470737 Confidence: 0.29059829059829057 Lift: 4.84395061728395 ===================================== Rule: chocolate -> shrimp Support: 0.005332622317024397 Confidence: 0.23255813953488375 Lift: 3.2545123221103784 ===================================== Rule: spaghetti -> ground beef Support: 0.004799360085321957 Confidence: 0.5714285714285714 Lift: 3.2819951870487856 ===================================== Rule: mushroom cream sauce -> nan Support: 0.005732568990801226 Confidence: 0.3006993006993007 Lift: 3.790832696715049 ===================================== Rule: pasta -> nan Support: 0.005865884548726837 Confidence: 0.3728813559322034 Lift: 4.700811850163794 ===================================== Rule: ground beef -> spaghetti Support: 0.008665511265164644 Confidence: 0.31100478468899523 Lift: 3.165328208890303 ===================================== Rule: milk -> frozen vegetables Support: 0.004799360085321957 Confidence: 0.20338983050847456 Lift: 3.088314005352364 ===================================== Rule: 
mineral water -> shrimp Support: 0.007199040127982935 Confidence: 0.30508474576271183 Lift: 3.200616332819722 ===================================== Rule: spaghetti -> frozen vegetables Support: 0.005732568990801226 Confidence: 0.20574162679425836 Lift: 3.1240241752707125 ===================================== Rule: shrimp -> spaghetti Support: 0.005999200106652446 Confidence: 0.21531100478468898 Lift: 3.0131489680782684 ===================================== Rule: tomatoes -> spaghetti Support: 0.006665777896280496 Confidence: 0.23923444976076558 Lift: 3.4980460188216425 ===================================== Rule: grated cheese -> spaghetti Support: 0.005332622317024397 Confidence: 0.3225806451612903 Lift: 3.283144395325426 ===================================== Rule: mineral water -> herb & pepper Support: 0.006665777896280496 Confidence: 0.39062500000000006 Lift: 3.975682666214383 ===================================== Rule: herb & pepper -> nan Support: 0.015997866951073192 Confidence: 0.3234501347708895 Lift: 3.2919938411349285 ===================================== Rule: herb & pepper -> spaghetti Support: 0.006399146780429276 Confidence: 0.3934426229508197 Lift: 4.004359721511667 ===================================== Rule: milk -> ground beef Support: 0.004932675643247567 Confidence: 0.22424242424242427 Lift: 3.40494417862839 ===================================== Rule: tomato sauce -> nan Support: 0.005332622317024397 Confidence: 0.3773584905660377 Lift: 3.840659481324083 ===================================== Rule: shrimp -> spaghetti Support: 0.005999200106652446 Confidence: 0.5232558139534884 Lift: 3.005315360233627 ===================================== Rule: spaghetti -> milk Support: 0.007199040127982935 Confidence: 0.20300751879699247 Lift: 3.0825089038385434 ===================================== Rule: soup -> mineral water Support: 0.005199306759098787 Confidence: 0.22543352601156072 Lift: 3.4230301186492245 ===================================== Rule: nan -> whole wheat pasta Support: 0.007998933475536596 Confidence: 0.2714932126696833 Lift: 4.13077198425009 ===================================== Rule: shrimp -> pasta Support: 0.005065991201173177 Confidence: 0.3220338983050847 Lift: 4.515095833993347 ===================================== Rule: pancakes -> spaghetti Support: 0.005065991201173177 Confidence: 0.20105820105820105 Lift: 3.0529100529100526 ===================================== Rule: chocolate -> shrimp Support: 0.005332622317024397 Confidence: 0.23255813953488375 Lift: 3.260595522712454 ===================================== Rule: spaghetti -> nan Support: 0.004799360085321957 Confidence: 0.5714285714285714 Lift: 3.2819951870487856 ===================================== Rule: spaghetti -> ground beef Support: 0.008665511265164644 Confidence: 0.31100478468899523 Lift: 3.165328208890303 ===================================== Rule: mineral water -> milk Support: 0.004532728969470737 Confidence: 0.28813559322033894 Lift: 3.0228043143297376 ===================================== Rule: nan -> milk Support: 0.004799360085321957 Confidence: 0.20338983050847456 Lift: 3.094578333963626 ===================================== Rule: mineral water -> shrimp Support: 0.007199040127982935 Confidence: 0.30508474576271183 Lift: 3.200616332819722 ===================================== Rule: spaghetti -> nan Support: 0.005732568990801226 Confidence: 0.20574162679425836 Lift: 3.1303609383037156 ===================================== Rule: spaghetti -> shrimp Support: 0.005999200106652446 Confidence: 
0.21531100478468898 Lift: 3.0187810222242093 ===================================== Rule: spaghetti -> tomatoes Support: 0.006665777896280496 Confidence: 0.23923444976076558 Lift: 3.4980460188216425 ===================================== Rule: grated cheese -> nan Support: 0.005332622317024397 Confidence: 0.3225806451612903 Lift: 3.283144395325426 ===================================== Rule: mineral water -> herb & pepper Support: 0.006665777896280496 Confidence: 0.39062500000000006 Lift: 3.975682666214383 ===================================== Rule: spaghetti -> herb & pepper Support: 0.006399146780429276 Confidence: 0.3934426229508197 Lift: 4.004359721511667 ===================================== Rule: nan -> milk Support: 0.004932675643247567 Confidence: 0.22424242424242427 Lift: 3.4118507591124225 ===================================== Rule: spaghetti -> shrimp Support: 0.005999200106652446 Confidence: 0.5232558139534884 Lift: 3.005315360233627 ===================================== Rule: spaghetti -> nan Support: 0.007199040127982935 Confidence: 0.20300751879699247 Lift: 3.088761457396025 ===================================== Rule: soup -> nan Support: 0.005199306759098787 Confidence: 0.22543352601156072 Lift: 3.429973384609973 ===================================== Rule: spaghetti -> pancakes Support: 0.005065991201173177 Confidence: 0.20105820105820105 Lift: 3.0591025682303568 ===================================== Rule: nan -> milk Support: 0.004532728969470737 Confidence: 0.28813559322033894 Lift: 3.0228043143297376 =====================================
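###Markdown The one-hot `store_data` frame built with `TransactionEncoder` above is not actually used afterwards; as a cross-check (a sketch, assuming `mlxtend.frequent_patterns` is installed), it can be fed to mlxtend's own apriori / association_rules with the same thresholds. The import is aliased so it does not clash with apyori's `apriori` imported earlier. ###Code
from mlxtend.frequent_patterns import apriori as mlx_apriori
from mlxtend.frequent_patterns import association_rules

# Frequent itemsets and rules from the one-hot encoded basket matrix,
# using the same thresholds as above (min_support=0.0045, lift >= 3).
frequent_itemsets = mlx_apriori(store_data, min_support=0.0045, use_colnames=True)
rules = association_rules(frequent_itemsets, metric="lift", min_threshold=3)

# Keep the most interesting columns and show the strongest rules by lift.
rules = rules[["antecedents", "consequents", "support", "confidence", "lift"]]
rules.sort_values("lift", ascending=False).head(10)
###Output _____no_output_____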
JupyterNotebooks from Medium/CF Recommendation System-Examples.ipynb
###Markdown **Examples of Collaborative Filtering based Recommendation Systems** ###Code #make necesarry imports import numpy as np import pandas as pd import matplotlib.pyplot as plt import sklearn.metrics as metrics import numpy as np from sklearn.neighbors import NearestNeighbors from scipy.spatial.distance import correlation, cosine import ipywidgets as widgets from IPython.display import display, clear_output from sklearn.metrics import pairwise_distances from sklearn.metrics import mean_squared_error from math import sqrt import sys, os from contextlib import contextmanager #M is user-item ratings matrix where ratings are integers from 1-10 M = np.asarray([[3,7,4,9,9,7], [7,0,5,3,8,8], [7,5,5,0,8,4], [5,6,8,5,9,8], [5,8,8,8,10,9], [7,7,0,4,7,8]]) M=pd.DataFrame(M) #declaring k,metric as global which can be changed by the user later global k,metric k=4 metric='cosine' #can be changed to 'correlation' for Pearson correlation similaries M ###Output _____no_output_____ ###Markdown **User-based Recommendation Systems** ###Code #get cosine similarities for ratings matrix M; pairwise_distances returns the distances between ratings and hence #similarities are obtained by subtracting distances from 1 cosine_sim = 1-pairwise_distances(M, metric="cosine") #Cosine similarity matrix pd.DataFrame(cosine_sim) #get pearson similarities for ratings matrix M pearson_sim = 1-pairwise_distances(M, metric="correlation") #Pearson correlation similarity matrix pd.DataFrame(pearson_sim) #This function finds k similar users given the user_id and ratings matrix M #Note that the similarities are same as obtained via using pairwise_distances def findksimilarusers(user_id, ratings, metric = metric, k=k): similarities=[] indices=[] model_knn = NearestNeighbors(metric = metric, algorithm = 'brute') model_knn.fit(ratings) distances, indices = model_knn.kneighbors(ratings.iloc[user_id-1, :].values.reshape(1, -1), n_neighbors = k+1) similarities = 1-distances.flatten() print ('{0} most similar users for User {1}:\n'.format(k,user_id)) for i in range(0, len(indices.flatten())): if indices.flatten()[i]+1 == user_id: continue; else: print ('{0}: User {1}, with similarity of {2}'.format(i, indices.flatten()[i]+1, similarities.flatten()[i])) return similarities,indices similarities,indices = findksimilarusers(1,M, metric='cosine') similarities,indices = findksimilarusers(1,M, metric='correlation') #This function predicts rating for specified user-item combination based on user-based approach def predict_userbased(user_id, item_id, ratings, metric = metric, k=k): prediction=0 similarities, indices=findksimilarusers(user_id, ratings,metric, k) #similar users based on cosine similarity mean_rating = ratings.loc[user_id-1,:].mean() #to adjust for zero based indexing sum_wt = np.sum(similarities)-1 product=1 wtd_sum = 0 for i in range(0, len(indices.flatten())): if indices.flatten()[i]+1 == user_id: continue; else: ratings_diff = ratings.iloc[indices.flatten()[i],item_id-1]-np.mean(ratings.iloc[indices.flatten()[i],:]) product = ratings_diff * (similarities[i]) wtd_sum = wtd_sum + product prediction = int(round(mean_rating + (wtd_sum/sum_wt))) print ('\nPredicted rating for user {0} -> item {1}: {2}'.format(user_id,item_id,prediction)) return prediction predict_userbased(3,4,M); ###Output 4 most similar users for User 3: 1: User 4, with similarity of 0.9095126893401909 2: User 2, with similarity of 0.8747444148494656 3: User 5, with similarity of 0.8654538781497916 4: User 6, with similarity of 0.853274963343837 Predicted 
rating for user 3 -> item 4: 3 ###Markdown **Item-based Recommendation Systems** ###Code #This function finds k similar items given the item_id and ratings matrix M def findksimilaritems(item_id, ratings, metric=metric, k=k): similarities=[] indices=[] ratings=ratings.T model_knn = NearestNeighbors(metric = metric, algorithm = 'brute') model_knn.fit(ratings) distances, indices = model_knn.kneighbors(ratings.iloc[item_id-1, :].values.reshape(1, -1), n_neighbors = k+1) similarities = 1-distances.flatten() print ('{0} most similar items for item {1}:\n'.format(k,item_id)) for i in range(0, len(indices.flatten())): if indices.flatten()[i]+1 == item_id: continue; else: print ('{0}: Item {1} :, with similarity of {2}'.format(i,indices.flatten()[i]+1, similarities.flatten()[i])) return similarities,indices similarities,indices=findksimilaritems(3,M) #This function predicts the rating for specified user-item combination based on item-based approach def predict_itembased(user_id, item_id, ratings, metric = metric, k=k): prediction= wtd_sum =0 similarities, indices=findksimilaritems(item_id, ratings) #similar users based on correlation coefficients sum_wt = np.sum(similarities)-1 product=1 for i in range(0, len(indices.flatten())): if indices.flatten()[i]+1 == item_id: continue; else: product = ratings.iloc[user_id-1,indices.flatten()[i]] * (similarities[i]) wtd_sum = wtd_sum + product prediction = int(round(wtd_sum/sum_wt)) print ('\nPredicted rating for user {0} -> item {1}: {2}'.format(user_id,item_id,prediction)) return prediction prediction = predict_itembased(1,3,M) #This function is used to compute adjusted cosine similarity matrix for items def computeAdjCosSim(M): sim_matrix = np.zeros((M.shape[1], M.shape[1])) M_u = M.mean(axis=1) #means for i in range(M.shape[1]): for j in range(M.shape[1]): if i == j: sim_matrix[i][j] = 1 else: if i<j: sum_num = sum_den1 = sum_den2 = 0 for k,row in M.loc[:,[i,j]].iterrows(): if ((M.loc[k,i] != 0) & (M.loc[k,j] != 0)): num = (M[i][k]-M_u[k])*(M[j][k]-M_u[k]) den1= (M[i][k]-M_u[k])**2 den2= (M[j][k]-M_u[k])**2 sum_num = sum_num + num sum_den1 = sum_den1 + den1 sum_den2 = sum_den2 + den2 else: continue den=(sum_den1**0.5)*(sum_den2**0.5) if den!=0: sim_matrix[i][j] = sum_num/den else: sim_matrix[i][j] = 0 else: sim_matrix[i][j] = sim_matrix[j][i] return pd.DataFrame(sim_matrix) adjcos_sim = computeAdjCosSim(M) adjcos_sim #This function finds k similar items given the item_id and ratings matrix M def findksimilaritems_adjcos(item_id, ratings, k=k): sim_matrix = computeAdjCosSim(ratings) similarities = sim_matrix[item_id-1].sort_values(ascending=False)[:k+1].values indices = sim_matrix[item_id-1].sort_values(ascending=False)[:k+1].index print ('{0} most similar items for item {1}:\n'.format(k,item_id)) for i in range(0, len(indices)): if indices[i]+1 == item_id: continue; else: print ('{0}: Item {1} :, with similarity of {2}'.format(i,indices[i]+1, similarities[i])) return similarities ,indices similarities, indices = findksimilaritems_adjcos(3,M) #This function predicts the rating for specified user-item combination for adjusted cosine item-based approach #As the adjusted cosine similarities range from -1,+1, sometimes the predicted rating can be negative or greater than max value #Hack to deal with this: Rating is set to min if prediction is negative, Rating is set to max if prediction is above max def predict_itembased_adjcos(user_id, item_id, ratings): prediction=0 similarities, indices=findksimilaritems_adjcos(item_id, ratings) #similar users based on 
correlation coefficients sum_wt = np.sum(similarities)-1 product=1 wtd_sum = 0 for i in range(0, len(indices)): if indices[i]+1 == item_id: continue; else: product = ratings.iloc[user_id-1,indices[i]] * (similarities[i]) wtd_sum = wtd_sum + product prediction = int(round(wtd_sum/sum_wt)) if prediction < 0: prediction = 1 elif prediction >10: prediction = 10 print ('\nPredicted rating for user {0} -> item {1}: {2}'.format(user_id,item_id,prediction)) return prediction prediction=predict_itembased_adjcos(3,4,M) adjcos_sim #This function utilizes above function to recommend items for selected approach. Recommendations are made if the predicted #rating for an item is greater than or equal to 6, and the items has not been rated already def recommendItem(user_id, item_id, ratings): if user_id<1 or user_id>6 or type(user_id) is not int: print ('Userid does not exist. Enter numbers from 1-6') else: ids = ['User-based CF (cosine)','User-based CF (correlation)','Item-based CF (cosine)', 'Item-based CF (adjusted cosine)'] approach = widgets.Dropdown(options=ids, value=ids[0], description='Select Approach', width='500px') def on_change(change): prediction = 0 clear_output(wait=True) if change['type'] == 'change' and change['name'] == 'value': if (approach.value == 'User-based CF (cosine)'): metric = 'cosine' prediction = predict_userbased(user_id, item_id, ratings, metric) elif (approach.value == 'User-based CF (correlation)') : metric = 'correlation' prediction = predict_userbased(user_id, item_id, ratings, metric) elif (approach.value == 'Item-based CF (cosine)'): prediction = predict_itembased(user_id, item_id, ratings) else: prediction = predict_itembased_adjcos(user_id,item_id,ratings) if ratings[item_id-1][user_id-1] != 0: print ('Item already rated') else: if prediction>=6: print ('\nItem recommended') else: print ('Item not recommended') approach.observe(on_change) display(approach) #check for incorrect entries recommendItem(-1,3,M) recommendItem(3,4,M) recommendItem(3,4,M) recommendItem(3,4,M) recommendItem(3,4,M) #if the item is already rated, it is not recommended recommendItem(2,1,M) #This is a quick way to temporarily suppress stdout in particular code section @contextmanager def suppress_stdout(): with open(os.devnull, "w") as devnull: old_stdout = sys.stdout sys.stdout = devnull try: yield finally: sys.stdout = old_stdout #This is final function to evaluate the performance of selected recommendation approach and the metric used here is RMSE #suppress_stdout function is used to suppress the print outputs of all the functions inside this function. 
It will only print #RMSE values def evaluateRS(ratings): ids = ['User-based CF (cosine)','User-based CF (correlation)','Item-based CF (cosine)','Item-based CF (adjusted cosine)'] approach = widgets.Dropdown(options=ids, value=ids[0],description='Select Approach', width='500px') n_users = ratings.shape[0] n_items = ratings.shape[1] prediction = np.zeros((n_users, n_items)) prediction= pd.DataFrame(prediction) def on_change(change): clear_output(wait=True) with suppress_stdout(): if change['type'] == 'change' and change['name'] == 'value': if (approach.value == 'User-based CF (cosine)'): metric = 'cosine' for i in range(n_users): for j in range(n_items): prediction[i][j] = predict_userbased(i+1, j+1, ratings, metric) elif (approach.value == 'User-based CF (correlation)') : metric = 'correlation' for i in range(n_users): for j in range(n_items): prediction[i][j] = predict_userbased(i+1, j+1, ratings, metric) elif (approach.value == 'Item-based CF (cosine)'): for i in range(n_users): for j in range(n_items): prediction[i][j] = predict_userbased(i+1, j+1, ratings) else: for i in range(n_users): for j in range(n_items): prediction[i][j] = predict_userbased(i+1, j+1, ratings) MSE = mean_squared_error(prediction, ratings) RMSE = round(sqrt(MSE),3) print ("RMSE using {0} approach is: {1}".format(approach.value,RMSE)) approach.observe(on_change) display(approach) evaluateRS(M) evaluateRS(M) ###Output RMSE using Item-based CF (cosine) approach is: 2.804
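###Markdown One caveat about the RMSE computed above (a side note, not part of the original evaluation): it averages over every cell of M, including the unrated (zero) entries, so the error on the actually observed ratings can be a fairer summary. A minimal sketch, reusing `M`, `predict_userbased` and the `suppress_stdout` helper defined above: ###Code
# RMSE restricted to the observed (non-zero) ratings, using the
# user-based cosine predictor defined earlier.
n_users, n_items = M.shape
pred = np.zeros((n_users, n_items))
for i in range(n_users):
    for j in range(n_items):
        with suppress_stdout():                 # silence the per-call printouts
            pred[i][j] = predict_userbased(i + 1, j + 1, M, metric='cosine')

mask = M.values != 0                            # observed entries only
rmse_observed = sqrt(mean_squared_error(M.values[mask], pred[mask]))
print("RMSE on observed ratings only: {:.3f}".format(rmse_observed))
###Output _____no_output_____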
Data Analysis with python/DA0101EN-Review-Introduction.ipynb
###Markdown Data Analysis with Python IntroductionWelcome!In this section, you will learn how to approach data acquisition in various ways, and obtain necessary insights from a dataset. By the end of this lab, you will successfully load the data into Jupyter Notebook, and gain some fundamental insights via Pandas Library. Table of Contents Data Acquisition Basic Insight of DatasetEstimated Time Needed: 10 min Data AcquisitionThere are various formats for a dataset, .csv, .json, .xlsx etc. The dataset can be stored in different places, on your local machine or sometimes online.In this section, you will learn how to load a dataset into our Jupyter Notebook.In our case, the Automobile Dataset is an online source, and it is in CSV (comma separated value) format. Let's use this dataset as an example to practice data reading. data source: https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data data type: csvThe Pandas Library is a useful tool that enables us to read various datasets into a data frame; our Jupyter notebook platforms have a built-in Pandas Library so that all we need to do is import Pandas without installing. ###Code # import pandas library import pandas as pd ###Output _____no_output_____ ###Markdown Read DataWe use pandas.read_csv() function to read the csv file. In the bracket, we put the file path along with a quotation mark, so that pandas will read the file into a data frame from that address. The file path can be either an URL or your local file address.Because the data does not include headers, we can add an argument headers = None inside the read_csv() method, so that pandas will not automatically set the first row as a header.You can also assign the dataset to any variable you create. This dataset was hosted on IBM Cloud object click HERE for free storage. ###Code # Import pandas library import pandas as pd # Read the online file by the URL provides above, and assign it to variable "df" other_path = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/auto.csv" df = pd.read_csv(other_path, header=None) ###Output _____no_output_____ ###Markdown After reading the dataset, we can use the dataframe.head(n) method to check the top n rows of the dataframe; where n is an integer. Contrary to dataframe.head(n), dataframe.tail(n) will show you the bottom n rows of the dataframe. ###Code # show the first 5 rows using dataframe.head() method print("The first 5 rows of the dataframe") df.head(5) ###Output _____no_output_____ ###Markdown Question 1: check the bottom 10 rows of data frame "df". ###Code # Write your code below and press Shift+Enter to execute ###Output _____no_output_____ ###Markdown Question 1 Answer: Run the code below for the solution! Double-click here for the solution.<!-- The answer is below:print("The last 10 rows of the dataframe\n")df.tail(10)--> Add HeadersTake a look at our dataset; pandas automatically set the header by an integer from 0.To better describe our data we can introduce a header, this information is available at: https://archive.ics.uci.edu/ml/datasets/AutomobileThus, we have to add headers manually.Firstly, we create a list "headers" that include all column names in order.Then, we use dataframe.columns = headers to replace the headers by the list we created. 
###Code # create headers list headers = ["symboling","normalized-losses","make","fuel-type","aspiration", "num-of-doors","body-style", "drive-wheels","engine-location","wheel-base", "length","width","height","curb-weight","engine-type", "num-of-cylinders", "engine-size","fuel-system","bore","stroke","compression-ratio","horsepower", "peak-rpm","city-mpg","highway-mpg","price"] print("headers\n", headers) ###Output _____no_output_____ ###Markdown We replace headers and recheck our data frame ###Code df.columns = headers df.head(10) ###Output _____no_output_____ ###Markdown we can drop missing values along the column "price" as follows ###Code df.dropna(subset=["price"], axis=0) ###Output _____no_output_____ ###Markdown Now, we have successfully read the raw dataset and add the correct headers into the data frame. Question 2: Find the name of the columns of the dataframe ###Code # Write your code below and press Shift+Enter to execute ###Output _____no_output_____ ###Markdown Double-click here for the solution.<!-- The answer is below:print(df.columns)--> Save DatasetCorrespondingly, Pandas enables us to save the dataset to csv by using the dataframe.to_csv() method, you can add the file path and name along with quotation marks in the brackets. For example, if you would save the dataframe df as automobile.csv to your local machine, you may use the syntax below: ###Code df.to_csv("automobile.csv", index=False) ###Output _____no_output_____ ###Markdown We can also read and save other file formats, we can use similar functions to **`pd.read_csv()`** and **`df.to_csv()`** for other data formats, the functions are listed in the following table: Read/Save Other Data Formats| Data Formate | Read | Save || ------------- |:--------------:| ----------------:|| csv | `pd.read_csv()` |`df.to_csv()` || json | `pd.read_json()` |`df.to_json()` || excel | `pd.read_excel()`|`df.to_excel()` || hdf | `pd.read_hdf()` |`df.to_hdf()` || sql | `pd.read_sql()` |`df.to_sql()` || ... | ... | ... | Basic Insight of DatasetAfter reading data into Pandas dataframe, it is time for us to explore the dataset.There are several ways to obtain essential insights of the data to help us better understand our dataset. Data TypesData has a variety of types.The main types stored in Pandas dataframes are object, float, int, bool and datetime64. In order to better learn about each attribute, it is always good for us to know the data type of each column. In Pandas: ###Code df.dtypes ###Output _____no_output_____ ###Markdown returns a Series with the data type of each column. ###Code # check the data type of data frame "df" by .dtypes print(df.dtypes) ###Output _____no_output_____ ###Markdown As a result, as shown above, it is clear to see that the data type of "symboling" and "curb-weight" are int64, "normalized-losses" is object, and "wheel-base" is float64, etc.These data types can be changed; we will learn how to accomplish this in a later module. DescribeIf we would like to get a statistical summary of each column, such as count, column mean value, column standard deviation, etc. We use the describe method: ###Code dataframe.describe() ###Output _____no_output_____ ###Markdown This method will provide various summary statistics, excluding NaN (Not a Number) values. 
###Code df.describe() ###Output _____no_output_____ ###Markdown This shows the statistical summary of all numeric-typed (int, float) columns.For example, the attribute "symboling" has 205 counts, the mean value of this column is 0.83, the standard deviation is 1.25, the minimum value is -2, 25th percentile is 0, 50th percentile is 1, 75th percentile is 2, and the maximum value is 3.However, what if we would also like to check all the columns, including those that are of type object?You can add an argument include = "all" inside the bracket. Let's try it again. ###Code # describe all the columns in "df" df.describe(include = "all") ###Output _____no_output_____ ###Markdown Now, it provides the statistical summary of all the columns, including object-typed attributes.We can now see how many unique values there are, which value is the most frequent (top), and the frequency of the top value in the object-typed columns.Some values in the table above show as "NaN"; this is because those statistics are not applicable to a particular column type. Question 3: You can select the columns of a data frame by indicating the name of each column, for example, you can select the three columns as follows: dataframe[['column 1', 'column 2', 'column 3']]Where "column" is the name of the column, you can apply the method ".describe()" to get the statistics of those columns as follows: dataframe[['column 1', 'column 2', 'column 3']].describe()Apply the method ".describe()" to the columns 'length' and 'compression-ratio'. ###Code # Write your code below and press Shift+Enter to execute ###Output _____no_output_____ ###Markdown Double-click here for the solution.<!-- The answer is below:df[['length', 'compression-ratio']].describe()--> InfoAnother method you can use to check your dataset is: ###Code dataframe.info() ###Output _____no_output_____ ###Markdown It provides a concise summary of your DataFrame. ###Code # look at the info of "df" df.info() ###Output _____no_output_____
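###Markdown As a quick illustration of the read/save table shown earlier (a sketch; the file name below is arbitrary), the same dataframe can be round-tripped through another format, here JSON: ###Code
# Save the dataframe to JSON and read it back, as one example of the
# read/save function pairs listed in the table above.
df.to_json("automobile.json")
df_from_json = pd.read_json("automobile.json")
df_from_json.head()
###Output _____no_output_____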
notebooks/lectures_ready/pandas_intro.ipynb
###Markdown Brief Introduction to Pandas Limitations of using numpy for tabular dataWe have seen how to use numpy to import tabular data stored in a CSV file. ###Code import numpy as np data = np.loadtxt('data.csv', delimiter=',', skiprows=2) data ###Output _____no_output_____ ###Markdown However, there are two limitations in using numpy for tabular data:- numpy arrays just store the data, not the metadata (column names, row index)- a numpy array has a single data type (e.g., integer, float), while tables may have columns of data with different types Here comes Pandas- Pandas (http://pandas.pydata.org/) is a widely-used Python library to handle tabular data - read from / write to different formats (CSV...) - analytics, statistics, transformations, plotting (on top of matplotlib).- Borrows many features from R’s dataframes. - A 2-dimensional table whose columns have names and potentially have different data types. We first import the library ###Code import pandas as pd ###Output _____no_output_____ ###Markdown A real exampleFirst, look at the real dataset of land-surface temperature (region averages) that we will use in the project:http://berkeleyearth.org/data/http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Text/This is a good example of a real dataset used in science: text format, good documentation, human readable but a bit harder to deal with programmatically (e.g., column names as comments instead of strict CSV). Start by importing the packages that we will need for the project. ###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown Load the data (a single region/file; we won't use all the columns available). Note that we can provide a URL to `pandas.read_csv` ! ###Code df = pd.read_csv( "http://berkeleyearth.lbl.gov/auto/Regional/TAVG/Text/united-states-TAVG-Trend.txt", delim_whitespace=True, comment='%', header=None, parse_dates=[[0,1]], index_col=(0), usecols=(0, 1, 2, 3, 8, 9), names=("year", "month", "anomaly", "uncertainty", "10-year-anomaly", "10-year-uncertainty") ) df.index.name = "date" ###Output _____no_output_____
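###Markdown A quick look at what was just loaded helps confirm that the parsing worked (a sketch; the column names follow the `names=` argument above, and only the monthly anomaly is plotted here): ###Code
# Sanity check of the parsed data: shape, first rows and a quick plot of the
# monthly temperature anomaly (the index is the parsed date column).
print(df.shape)
print(df.head())

df["anomaly"].plot()
plt.ylabel("anomaly")
plt.show()
###Output _____no_output_____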
TA/Session6.ipynb
###Markdown Classification with Naive Bayes and SVM Naive Bayes Preprocessing ###Code import pandas as pd import nltk import os import re import string import numpy as np from sklearn import datasets, svm from sklearn.feature_extraction.text import CountVectorizer from sklearn.naive_bayes import MultinomialNB from sklearn.metrics import confusion_matrix import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline df_train = pd.DataFrame(columns=['words', 'sentiment']) df_test = pd.DataFrame(columns=['words', 'sentiment']) sw_dir = './Data/6/sw.txt' stop_words = [] with open(sw_dir) as f: text = f.readlines() for word in text: stop_words.append(re.findall('\S+', word)[0]) # adding br and empty string to stop words stop_words.append('br') stop_words.append('') # create dataset from text files train_pos_dir = './Data/6/aclImdb/train/pos' for filename in os.listdir(train_pos_dir): with open(os.path.join(train_pos_dir, filename)) as f: text = f.readlines()[0] df_train = df_train.append({'words': text, 'sentiment': 1}, ignore_index=True) train_neg_dir = './Data/6/aclImdb/train/neg' for filename in os.listdir(train_neg_dir): with open(os.path.join(train_neg_dir, filename)) as f: text = f.readlines()[0] df_train = df_train.append({'words': text, 'sentiment': 0}, ignore_index=True) test_pos_dir = './Data/6/aclImdb/test/pos' for filename in os.listdir(test_pos_dir): with open(os.path.join(test_pos_dir, filename)) as f: text = f.readlines()[0] df_test = df_test.append({'words': text, 'sentiment': 1}, ignore_index=True) test_neg_dir = './Data/6/aclImdb/test/neg' for filename in os.listdir(test_neg_dir): with open(os.path.join(test_neg_dir, filename)) as f: text = f.readlines()[0] df_test = df_test.append({'words': text, 'sentiment': 0}, ignore_index=True) def remove_punct(text): def change(ch): if ch in string.punctuation or ch.isdigit(): return " " else: return ch no_punct = "".join([change(ch) for ch in text]) return no_punct # df_train['words'] = df_train['words'].apply(lambda x: remove_punct(x)) def tokenize(text): tokens = re.split('\W+', text) return tokens # df_train['words'] = df_train['words'].apply(lambda x: tokenize(x.lower())) def remove_sw(tokens): text = [w for w in tokens if w not in stop_words] return text # df_train['words'] = df_train['words'].apply(lambda x: remove_sw(x)) def remove_short(tokens): text = [w for w in tokens if len(w)>2] return text # df_train['words'] = df_train['words'].apply(lambda x: remove_short(x)) ###Output _____no_output_____ ###Markdown There are several ways to get root of tokens like stemming and lemmatizing. stemming is faster and lemmatizing is more precise. 
###Code wn = nltk.WordNetLemmatizer() def lemmatizing(tokens): text = [wn.lemmatize(w) for w in tokens] return text ps = nltk.stem.PorterStemmer() def stemming(tokens): text = [ps.stem(w) for w in tokens] return text # df_train['words'] = df_train['words'].apply(lambda x: lemmatizing(x)) # df_train['words'] = df_train['words'].apply(lambda x: stemming(x)) def clean_text(text): text = remove_punct(text) text = tokenize(text) text = remove_sw(text) text = remove_short(text) # text = lemmatizing(text) text = stemming(text) return text count_vect = CountVectorizer(analyzer=clean_text, lowercase=True, binary=True) X_train = count_vect.fit_transform(df_train['words']) y_train = df_train['sentiment'].to_numpy(dtype='int') X_test = count_vect.transform(df_test['words']) y_test = df_train['sentiment'].to_numpy(dtype='int') # print(count_vect.get_feature_names()) ###Output _____no_output_____ ###Markdown Classification ###Code clf = MultinomialNB(alpha=100) clf = clf.fit(X_train, y_train) # clf.score(X_test, y_test) y_pred = clf.predict(X_test) cm = confusion_matrix(y_test, y_pred) acc_tr = clf.score(X_train, y_train) acc_te = clf.score(X_test, y_test) print("train accuracy: {}%".format(acc_tr*100)) print("test accuracy: {}%".format(acc_te*100)) sns.heatmap(cm, annot=True, fmt='d') ###Output _____no_output_____ ###Markdown Laplace smoothing ###Code alphas = [10**x for x in range(-4, 4)] accs_tr = [] accs_te = [] for alpha in alphas: cls = MultinomialNB(alpha=alpha) cls = cls.fit(X_train, y_train) accs_tr.append(cls.score(X_train, y_train)) accs_te.append(cls.score(X_test, y_test)) plt.plot(alphas, accs_tr) plt.plot(alphas, accs_te) plt.xlabel('alpha') plt.ylabel('accuracy') plt.legend(['train accuracy', 'test accuracy']) plt.xscale('log') ###Output _____no_output_____ ###Markdown SVM Classifier ###Code def make_meshgrid(x, y, h=.02): x_min, x_max = x.min() - 1, x.max() + 1 y_min, y_max = y.min() - 1, y.max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) return xx, yy def plot_contours(ax, clf, xx, yy, **params): Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) out = ax.contourf(xx, yy, Z, **params) return out iris = datasets.load_iris() X = iris.data[:, :2] y = iris.target models = (svm.SVC(kernel='linear', C=1.0), svm.LinearSVC(C=1.0, max_iter=10000), svm.SVC(kernel='rbf', gamma=0.7, C=1.0), svm.SVC(kernel='poly', degree=3, gamma='auto', C=1.0)) models = (clf.fit(X, y) for clf in models) titles = ('SVC with linear kernel', 'LinearSVC (linear kernel)', 'SVC with RBF kernel', 'SVC with polynomial (degree 3) kernel') fig, sub = plt.subplots(2, 2, figsize=(12, 8)) plt.subplots_adjust(wspace=0.4, hspace=0.4) X0, X1 = X[:, 0], X[:, 1] xx, yy = make_meshgrid(X0, X1) for clf, title, ax in zip(models, titles, sub.flatten()): plot_contours(ax, clf, xx, yy, cmap=plt.cm.coolwarm, alpha=0.8) ax.scatter(X0, X1, c=y, cmap=plt.cm.coolwarm, s=20, edgecolors='k') ax.set_xlim(xx.min(), xx.max()) ax.set_ylim(yy.min(), yy.max()) ax.set_xlabel('Sepal length') ax.set_ylabel('Sepal width') ax.set_xticks(()) ax.set_yticks(()) ax.set_title(title) plt.show() ###Output _____no_output_____
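###Markdown The decision boundaries above are only qualitative; as a small follow-up (a sketch, assuming the same two-feature iris `X`, `y` and kernel settings defined above), the four configurations can also be scored on a held-out split: ###Code
from sklearn.model_selection import train_test_split

# Score the same four kernel configurations on a held-out split of the
# two-feature iris data, to complement the decision-boundary plots.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

kernel_models = {
    'SVC with linear kernel': svm.SVC(kernel='linear', C=1.0),
    'LinearSVC (linear kernel)': svm.LinearSVC(C=1.0, max_iter=10000),
    'SVC with RBF kernel': svm.SVC(kernel='rbf', gamma=0.7, C=1.0),
    'SVC with polynomial (degree 3) kernel': svm.SVC(kernel='poly', degree=3, gamma='auto', C=1.0),
}
for name, model in kernel_models.items():
    model.fit(X_tr, y_tr)
    print("{}: test accuracy = {:.3f}".format(name, model.score(X_te, y_te)))
###Output _____no_output_____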
.ipynb_checkpoints/SubmissionNotebook-checkpoint.ipynb
###Markdown This notebook is for illustration purpose. Please visit https://github.com/hym97/CAM_final_project if you want to play with the model yourself Classification IntroductionThis problem is a fraud detection problem. Where the False label occupies nearly 99% of the dataset, we can simply achieve 99% accuracy by making negative predictions for all the data. But it will not help us to detect fraud. Therefore, we must do something to the dataset.To address the problem, we can use techniques like **Undersampling**, **Oversampling**, or **Ensemble Learning**.We used balanced random forest in this implementation. ###Code df = pd.read_csv('./data/TrainingData.csv') X, Y = utils.pipeline(df) Y_regression = Y.values[:,0] Y = Y.values[:,1] data = np.c_[X,Y] np.random.shuffle(data) train, validate, test = np.split(data,[int(.6 * data.shape[0]), int(.8 * data.shape[0])]) train_X, train_Y = train[:,:-1], train[:,-1] validate_X, validate_Y = validate[:,:-1], validate[:,-1] test_X, test_Y = test[:,:-1], test[:,-1] brf = BalancedRandomForestClassifier(n_estimators=150, random_state=37) brf.fit(train_X,train_Y) predict_Y = brf.predict(validate_X) fig, axs = plt.subplots(ncols=2, figsize=(10, 5)) plot_confusion_matrix(brf, validate_X, validate_Y, ax=axs[0], colorbar=False) axs[0].set_title("Balanced random forest (val)") plot_confusion_matrix(brf, test_X, test_Y, ax=axs[1], colorbar=False) axs[1].set_title("Balanced random forest (test)") plt.show() ###Output C:\Users\hymsh\anaconda3\envs\CAMenv\lib\site-packages\sklearn\utils\deprecation.py:87: FutureWarning: Function plot_confusion_matrix is deprecated; Function `plot_confusion_matrix` is deprecated in 1.0 and will be removed in 1.2. Use one of the class methods: ConfusionMatrixDisplay.from_predictions or ConfusionMatrixDisplay.from_estimator. warnings.warn(msg, category=FutureWarning) C:\Users\hymsh\anaconda3\envs\CAMenv\lib\site-packages\sklearn\utils\deprecation.py:87: FutureWarning: Function plot_confusion_matrix is deprecated; Function `plot_confusion_matrix` is deprecated in 1.0 and will be removed in 1.2. Use one of the class methods: ConfusionMatrixDisplay.from_predictions or ConfusionMatrixDisplay.from_estimator. warnings.warn(msg, category=FutureWarning) ###Markdown RemarksFrom the figures above, we can see many False Positive cases (24382, 24543), and the accuracy drops to 79%. However, in return, we can successfully detect fraud which is far more important than accuracy in reality. ###Code def calcualte_metrics(Y_predict, Y_labeled): metrics = confusion_matrix(Y_predict, Y_labeled) TPR = metrics[1,1] / (metrics[0,1] + metrics[1,1]) FPR = metrics[1,0] / (metrics[0,0] + metrics[1,0]) return TPR, FPR ###Output _____no_output_____ ###Markdown Calculate the metric ###Code predict_val, predict_test = brf.predict(validate_X), brf.predict(test_X) TPR_val, FPR_val = calcualte_metrics(predict_val, validate_Y) TPR_test, FPR_test = calcualte_metrics(predict_test, test_Y) print("Performance on val set: TPR:{:.2f} FPR:{:.2f}".format(TPR_val,FPR_val)) print("Performance on test set: TPR:{:.2f} FPR:{:.2f}".format(TPR_test,FPR_test)) ###Output Performance on val set: TPR:0.82 FPR:0.21 Performance on test set: TPR:0.84 FPR:0.21 ###Markdown Regression IntroductionThis problem is simply a regression problem. But the labeled data are highly skewed. We'd better use the log transformation to make NMONTHS columns more normally distributed to get better performance.There are many regression methods. 
However, considering I do not need much interoperability, I choose FFNN to make the prediction. ArchitectureFFNN_classifer( (layer1): Linear(in_features=49, out_features=64, bias=True) (layer2): Linear(in_features=64, out_features=128, bias=True) (layer3): Linear(in_features=128, out_features=10, bias=True) (layer4): Linear(in_features=10, out_features=1, bias=True) (dropout): Dropout(p=0.2, inplace=False)) ###Code import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import math X, Y = utils.pipeline(df) Y_regression = Y.values[:,0] data = np.c_[X,Y_regression] np.random.shuffle(data) train, validate, test = np.split(data, [int(.6 * data.shape[0]), int(.8 * data.shape[0])]) train_X, train_Y = train[:,:-1], np.log(train[:,-1]) validate_X, validate_Y = validate[:,:-1], np.log(validate[:,-1]) class FFNN_classifer(nn.Module): def __init__(self, input_size): super(FFNN_classifer, self).__init__() self.layer1 = nn.Linear(input_size, 64) self.layer2 = nn.Linear(64, 128) self.layer3 = nn.Linear(128, 10) self.layer4 = nn.Linear(10, 1) self.dropout = nn.Dropout(.2) def forward(self, input_data): input_data =input_data.float() output = self.layer1(input_data) output = F.relu(output) output = self.layer2(output) output = F.relu(output) output = self.layer3(output) output = F.relu(output) output = self.dropout(output) output = self.layer4(output) return output def train_model(input_data, input_labels, optimizer, model,loss_func): optimizer.zero_grad() output = model(input_data) loss = loss_func(output.squeeze(1), input_labels.float()) loss.backward() optimizer.step() return loss.item() def mini_batch(batch_size, input_data, label): length = len(input_data) batch_num = math.ceil(length / batch_size) for i in range(batch_num): input_batch, input_label = input_data[batch_size*i:batch_size * (i + 1), :], \ label[batch_size*i:batch_size * (i + 1)] yield input_batch, input_label def eval_model(input_data, input_labels, model,loss_func): model.eval() input_data, input_labels = torch.tensor(input_data), torch.tensor(input_labels) output = model(input_data) loss = loss_func(output.squeeze(1), input_labels.float()) model.train() return loss.item() epoch, N_epoch = 0, 50 batch_size = 128 model = FFNN_classifer(49) optimizer = optim.Adam(model.parameters()) loss_func = nn.L1Loss() while epoch < N_epoch: loss = 0 for input_batch, input_label in mini_batch(batch_size, train_X, train_Y): input_batch, input_label = torch.tensor(input_batch), torch.tensor(input_label) loss = train_model(input_batch, input_label, optimizer, model, loss_func) if epoch % 10 == 0: print("epoch:{} Loss on training:{:.2f}".format(epoch, loss)) loss_val = eval_model(validate_X, validate_Y,model,loss_func) print("\tLoss on dev:{:.2f}".format(loss_val)) epoch += 1 ###Output epoch:0 Loss on training:0.48 Loss on dev:0.49 epoch:10 Loss on training:0.43 Loss on dev:0.43 epoch:20 Loss on training:0.45 Loss on dev:0.42 epoch:30 Loss on training:0.40 Loss on dev:0.42 epoch:40 Loss on training:0.40 Loss on dev:0.42 ###Markdown RemarksWe can see the loss on training set is still less than the loss on dev set even if a dropout layer is included. That may indicate we include too many parameters in the model. 
Calculate the metric ###Code test_X, test_Y = test[:,:-1], test[:,-1] test_log_error = eval_model(test_X, np.log(test_Y),model, loss_func) model.eval() test_X = torch.tensor(test_X) test_normal_error = np.abs(np.exp(model(test_X).detach().numpy()).squeeze(1) - test_Y).sum() / test_Y.shape[0] print('On Log Scale: MAD: {:.2f}'.format(test_log_error)) print('On Normal Scale: MAD: {:.2f}'.format(test_normal_error)) ###Output On Log Scale: MAD: 0.42 On Normal Scale: MAD: 12.33 ###Markdown Make Predictions ###Code test_data = pd.read_csv('./data/TestDataYremoved.csv') LID = test_data.LID.values pp_df = utils.pipeline_test(test_data) model.eval() FORCLOSED = brf.predict(pp_df) NMONTHS = np.exp(model(torch.tensor(pp_df)).detach().numpy()) prediction = np.c_[LID, FORCLOSED, NMONTHS] df = pd.DataFrame(prediction, columns = ['LID', 'FORCLOSED', 'NMONTHS']) df.FORCLOSED = df.FORCLOSED.map({0:False,1:True}) df.LID = df.LID.astype('int64') df.head() df.to_csv('submission.csv', index = False) ###Output _____no_output_____
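###Markdown A last sanity check on the file that was just written (a sketch): read the submission back and confirm that the shape, dtypes and the balance of the predicted FORCLOSED labels look reasonable. ###Code
# Read the submission back and sanity-check it.
check = pd.read_csv('submission.csv')
print(check.shape)
print(check.dtypes)
print(check['FORCLOSED'].value_counts(normalize=True))
check.head()
###Output _____no_output_____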
cmemsapi_functions.ipynb
###Markdown ###Code #! /usr/bin/env python3 # -*- coding: utf-8 -*- """Main module.""" import calendar import datetime as dt import getpass as password import hashlib import logging import math import os import re import shutil import subprocess import sys import time from functools import reduce from importlib import reload from pathlib import Path import requests as rq import fire import lxml.html import pandas as pd import xarray as xr from funcy import omit DEFAULT_CURRENT_PATH = os.getcwd() BOLD = '\033[1m' END = '\033[0m' LOGFILE = Path( DEFAULT_CURRENT_PATH, 'log', ''.join(["CMEMS_API_", dt.datetime.now().strftime('%Y%m%d_%H%M'), ".log"])) try: if not LOGFILE.parent.exists(): LOGFILE.parent.mkdir(parents=True) if os.path.exists(LOGFILE): os.remove(LOGFILE) print(f'[INFO] Logging to: {str(LOGFILE)}') reload(logging) logging.basicConfig(filename=LOGFILE, level=logging.DEBUG, format='[%(asctime)s] - [%(levelname)s] - %(message)s', datefmt='%Y-%m-%d %H:%M:%S') except IOError: print("[ERROR] Failed to set logger.") def set_target_directory(local_storage_directory=None): """ Returns working directory where data is saved. Default value (None) creates a directory (``copernicus-tmp-data``) in the current working directory. Parameters ---------- local_storage_directory : path or str, optional A path object or string. The default is None. Returns ------- target_directory : path A path to the directory where data is saved. """ if local_storage_directory: target_directory = Path(local_storage_directory) else: target_directory = Path(DEFAULT_CURRENT_PATH, 'copernicus-tmp-data') if not target_directory.exists(): target_directory.mkdir(parents=True) print(f'[INFO] Directory successfully created : {target_directory}.') return target_directory def multireplace(tobereplaced, substitute): """ Returns replaced string given string and substitute map. Parameters ---------- tobereplaced : str String to execute replacements on. substitute : dict Substitute dictionary {value to find: value to replace}. Returns ------- str Replaced string. """ substrings = sorted(substitute, key=len, reverse=True) regex = re.compile('|'.join(map(re.escape, substrings))) return regex.sub(lambda match: substitute[match.group(0)], tobereplaced) def query(question, default="yes"): """ Returns answer from a yes/no question, read from user\'s input. Parameters ---------- question : str String written as a question, displayed to user. default : str, optional String value to be presented to user to help . The default is "yes". Raises ------ ValueError Raise error to continue asking question until user inputs one of the valid choice. Returns ------- bool Returns ``True`` if user validates question, ``False`` otherwise. """ valid = {"yes": True, "y": True, "ye": True, "no": False, "n": False} if default is None: prompt = " [y/n] " elif default == "yes": prompt = " [Y/n] " elif default == "no": prompt = " [y/N] " else: raise ValueError(f"[ERROR] Invalid default answer: '{default}'") while True: sys.stdout.write(question + prompt) choice = input().lower() if default is not None and choice == '': return valid[default] elif choice in valid: return valid[choice] else: sys.stdout.write("[ACTION] Please respond with 'yes' or 'no' " "(or 'y' or 'n').\n") def get_config_constraints(): """ Returns constraints configuration as ``dict`` from which data requests will be stacked. Returns ------- split_dict : TYPE DESCRIPTION. 
""" c_dict = { 'year': { 'depth': 6000, 'geo': 200 }, 'month': { 'depth': 6000, 'geo': 360 }, 'day': { 'depth': 6000, 'geo': 360 } } split_dict = { 'hourly_r': { 'pattern': [ '-hi', 'hourly', 'hts', 'fc-h', '1-027', '1-032', 'rean-h', '1hr', '3dinst', '_hm', 'BLENDED', '15min', 'MetO-NWS-WAV-RAN', 'skin', 'surface' ], 'year_s': c_dict['year'], 'month_s': c_dict['month'], 'day_s': c_dict['day'] }, 'day_r': { 'pattern': ['daily', 'weekly', 'an-fc-d', 'rean-d', 'day-', '-dm-'], 'year_s': c_dict['year'], 'month_s': c_dict['month'], 'day_s': c_dict['day'] }, 'month_r': { 'pattern': [ 'month', 'an-fc-m', 'rean-m', '-mm-', '-MON-', 'ran-arc-myoceanv2-be', 'CORIOLIS', 'bgc3d' ], 'year_s': c_dict['year'], 'month_s': c_dict['month'] } } return split_dict def get_credentials(file_rc=None, sep='='): """ Returns Copernicus Marine Credentials. Credentials can be specified in a file or if ommitted, manually by user's input. Parameters ---------- file_rc : str or path, optional Location of the file storing credentials. The default is None. sep : str, optional Character used to separate credential and its value. The default is `=`. Raises ------ SystemExit Raise an error to exit program at fatal error (wrong credentials etc). Returns ------- copernicus_username : str Copernicus Marine username. copernicus_password : str Copernicus Marine password. """ lines = [] if not file_rc: file_rc = Path.cwd() / 'copernicus_credentials.txt' try: with open(file_rc, 'r') as cred: for line in cred: lines.append(line) except FileNotFoundError: print(f'[INFO] Credentials must be entered hereafter, obtained from: ' f'https://resources.marine.copernicus.eu/?option=com_sla') print( f'[INFO] If you have forgotten either your USERNAME ' f'(which {BOLD}is NOT your email address{END}) or your PASSWORD, ' f'please visit: https://marine.copernicus.eu/faq/forgotten-password/?idpage=169' ) time.sleep(2) usr = password.getpass( prompt=f"[ACTION] Please input your Copernicus {BOLD}USERNAME{END}" " (and hit `Enter` key):") time.sleep(2) pwd = password.getpass( prompt=f"[ACTION] Please input your Copernicus {BOLD}PASSWORD{END}" " (and hit `Enter` key):") lines.append(f'username{sep}{usr}') lines.append(f'password{sep}{pwd}') create_cred_file = query( f'[ACTION] For future usage, do you want to save credentials in a' ' configuration file?', 'yes') if create_cred_file: with open(file_rc, 'w') as cred: for line in lines: cred.write(''.join([line, '\n'])) if not all([sep in item for item in lines]): print('[ERROR] Sperator is not found. Must be specifed or corrected.\n' f'[WARNING] Please double check content of {file_rc}. ' f'It should match (please mind the `{sep}`):' f'\nusername{sep}<USERNAME>\npassword{sep}<PASSWORD>') raise SystemExit copernicus_username = ''.join(lines[0].strip().split(sep)[1:]) copernicus_password = ''.join(lines[1].strip().split(sep)[1:]) if not check_credentials(copernicus_username, copernicus_password): if file_rc.exists(): msg = f' from content of {file_rc}' else: msg = '' print( '[ERROR] Provided username and/or password could not be validated.\n' f'[WARNING] Please double check it{msg}. More help at: ' 'https://marine.copernicus.eu/faq/forgotten-password/?idpage=169') raise SystemExit print('[INFO] Credentials have been succcessfully loaded and verified.') return copernicus_username, copernicus_password def check_credentials(user, pwd): """ Check provided Copernicus Marine Credentials are correct. 
Parameters ---------- username : str Copernicus Marine Username, provided for free from https://marine.copernicus.eu . password : str Copernicus Marine Password, provided for free from https://marine.copernicus.eu . Returns ------- bool Returns ``True`` if credentials are correst, ``False`` otherwise. """ cmems_cas_url = 'https://cmems-cas.cls.fr/cas/login' conn_session = rq.session() login_session = conn_session.get(cmems_cas_url) login_from_html = lxml.html.fromstring(login_session.text) hidden_elements_from_html = login_from_html.xpath( '//form//input[@type="hidden"]') playload = { he.attrib['name']: he.attrib['value'] for he in hidden_elements_from_html } playload['username'] = user playload['password'] = pwd conn_session.post(cmems_cas_url, data=playload) if 'CASTGC' not in conn_session.cookies: return False return True def get_viewscript(): """ Ask the user to input the ``VIEW_SCRIPT`` command. Returns ------- view_myscript : str String representing the ``TEMPLATE COMMAND`` generated by the webportal. Example is available at https://tiny.cc/get-viewscript-from-web """ uni_test = [ 'python -m motuclient --motu http', ' '.join([ '--out-dir <OUTPUT_DIRECTORY> --out-name <OUTPUT_FILENAME>', '--user <USERNAME> --pwd <PASSWORD>' ]) ] while True: view_myscript = input( f"[ACTION] Please paste the template command displayed on the webportal:\n" ) if not all([item in view_myscript for item in uni_test]): print( '[ERROR] Cannot parse VIEWSCRIPT. ' 'Please paste the ``TEMPLATE COMMAND`` as shown in this article: ' 'https://marine.copernicus.eu/faq/' 'how-to-write-and-run-the-script-to-download-' 'cmems-products-through-subset-or-direct-download-mechanisms/?idpage=169' ) else: return view_myscript def viewscript_string_to_dict(viewmy_script): """ Convert the ``VIEW SCRIPT`` string displayed by the webportal to a ``dict``. Parameters ---------- viewmy_script : TYPE DESCRIPTION. Returns ------- vs_dict : TYPE DESCRIPTION. 
""" vs_dict = dict( [e.strip().partition(" ")[::2] for e in viewmy_script.split('--')]) vs_dict['variable'] = [value for (var, value) in [e.strip().partition(" ")[::2] for e in viewmy_script.split('--')] if var == 'variable'] # pylint: disable=line-too-long vs_dict['abs_geo'] = [ abs(float(vs_dict['longitude-min']) - float(vs_dict['longitude-max'])), abs(float(vs_dict['latitude-min']) - float(vs_dict['latitude-max'])) ] try: vs_dict['abs_depth'] = abs( float(vs_dict['depth-min']) - float(vs_dict['depth-max'])) except KeyError: print(f"[INFO] The {vs_dict['product-id']} is 3D and not 4D:" " it does not contain depth dimension.") if len(vs_dict['date-min']) == 12: dtformat = '%Y-%m-%d' elif len(vs_dict['date-min']) > 12: dtformat = '%Y-%m-%d %H:%M:%S' vs_dict['dt-date-min'] = dt.datetime.strptime(vs_dict['date-min'][1:-1], dtformat) vs_dict['dt-date-max'] = dt.datetime.strptime(vs_dict['date-max'][1:-1], dtformat) if vs_dict['dt-date-max'].day == 1: vs_dict['dt-date-max'] = vs_dict['dt-date-max'] + dt.timedelta(days=1) vs_dict['delta-days'] = vs_dict['dt-date-max'] - vs_dict['dt-date-min'] vs_dict['prefix'] = '_'.join( list((vs_dict['service-id'].split('-')[0]).split('_')[i] for i in [0, -2, -1])) vs_dict['suffix'] = '.nc' if vs_dict['abs_geo'][0] == 0 and vs_dict['abs_geo'][1] == 0: vs_dict['gridpoint'] = 'gridpoint' if '-' in vs_dict['longitude-min']: vs_dict['gridpoint'] = '_'.join([ vs_dict['gridpoint'], vs_dict['longitude-min'].replace(".", "dot").replace("-", "W") ]) else: vs_dict['gridpoint'] = '_'.join([ vs_dict['gridpoint'], ''.join(['E', vs_dict['longitude-min'].replace('.', 'dot')]) ]) if '-' in vs_dict['latitude-min']: vs_dict['gridpoint'] = '_'.join([ vs_dict['gridpoint'], vs_dict['latitude-min'].replace(".", "dot").replace("-", "S") ]) else: vs_dict['gridpoint'] = '_'.join([ vs_dict['gridpoint'], ''.join(['N', vs_dict['latitude-min'].replace('.', 'dot')]) ]) if len(vs_dict['variable']) > 6: vs_dict['out_var_name'] = 'several_vars' else: vs_dict['out_var_name'] = '_'.join(vs_dict['variable']) return vs_dict def get_dates_stack(vs_dict, check_stack, size=None, renew=None): """ Update a ``dict`` containing ``VIEW SCRIPT`` values with dates for sub-requests. Parameters ---------- vs_dict : TYPE DESCRIPTION. check_stack : TYPE DESCRIPTION. size : TYPE, optional DESCRIPTION. The default is None. renew : TYPE, optional DESCRIPTION. The default is None. Returns ------- vs_dict : TYPE DESCRIPTION. 
""" if not size: cmd = 'cmd' else: cmd = 'size' if not renew: date_in = vs_dict['dt-date-min'] else: date_in = renew if check_stack == 'day': vs_dict[f'{cmd}-date-min'] = dt.datetime(date_in.year, date_in.month, date_in.day, 0) vs_dict[f'{cmd}-date-max'] = dt.datetime(date_in.year, date_in.month, date_in.day, 23, 30) vs_dict['format'] = "%Y%m%d" elif check_stack == 'month': vs_dict[f'{cmd}-date-min'] = dt.datetime(date_in.year, date_in.month, 1, 0) vs_dict[f'{cmd}-date-max'] = dt.datetime( date_in.year, date_in.month, calendar.monthrange(date_in.year, date_in.month)[1], 23, 30) vs_dict['format'] = "%Y%m" elif check_stack == 'year': if date_in.year == vs_dict['dt-date-max'].year: vs_dict[f'{cmd}-date-max'] = dt.datetime( date_in.year, vs_dict['dt-date-max'].month, calendar.monthrange(date_in.year, vs_dict['dt-date-max'].month)[1], 23, 30) else: vs_dict[f'{cmd}-date-max'] = dt.datetime(date_in.year, 12, 31, 23, 30) vs_dict[f'{cmd}-date-min'] = dt.datetime(date_in.year, date_in.month, date_in.day, 0) vs_dict['format'] = "%Y" else: print(f'No matching stack queries found for: {check_stack}') return vs_dict def viewscript_dict_to_string(size=None, strict=None, cmd=None, **kwargs): """ Convert the ``dict`` containing keys and values of the ``VIEW SCRIPT``, into a string as displayed by the webportal. Parameters ---------- size : TYPE, optional DESCRIPTION. The default is None. strict : TYPE, optional DESCRIPTION. The default is None. cmd : TYPE, optional DESCRIPTION. The default is None. **kwargs : TYPE DESCRIPTION. Returns ------- command : TYPE DESCRIPTION. """ if size: feature = 'size' elif strict: feature = 'dt' elif cmd: feature = 'cmd' vs_string = [] if 'python' in kwargs: vs_string.append(f"python {kwargs['python']}") if 'motu' in kwargs: vs_string.append(f"--motu {kwargs['motu']}") if 'service-id' in kwargs: vs_string.append(f"--service-id {kwargs['service-id']}") if 'product-id' in kwargs: vs_string.append(f"--product-id {kwargs['product-id']}") if 'longitude-min' in kwargs: vs_string.append(f"--longitude-min {kwargs['longitude-min']}") if 'longitude-max' in kwargs: vs_string.append(f"--longitude-max {kwargs['longitude-max']}") if 'latitude-min' in kwargs: vs_string.append(f"--latitude-min {kwargs['latitude-min']}") if 'latitude-max' in kwargs: vs_string.append(f"--latitude-max {kwargs['latitude-max']}") if f'{feature}-date-min' in kwargs: vs_string.append(f"--date-min \"{kwargs[f'{feature}-date-min']}\"") if f'{feature}-date-max' in kwargs: vs_string.append(f"--date-max \"{kwargs[f'{feature}-date-max']}\"") if 'depth-min' in kwargs: vs_string.append(f"--depth-min {kwargs['depth-min']}") if 'depth-max' in kwargs: vs_string.append(f"--depth-max {kwargs['depth-max']}") if 'variable' in kwargs: #if type(kwargs['variable']) == list: if isinstance(kwargs['variable'], list): for var in kwargs['variable']: vs_string.append(f"--variable {var}") # re-written due to pylint #3397 #[vs_string.append(f"--variable {var}") for var in kwargs['variable']] else: vs_string.append(f"--variable {kwargs['variable']}") if 'outname' in kwargs: vs_string.append(f"--out-name {kwargs['outname']}") if 'target_directory' in kwargs: vs_string.append(f"--out-dir {kwargs['target_directory']}") command = ' '.join(vs_string) return command def get_data(command=None, user=None, pwd=None, size=None): """ Returns status of binary netCDF file or, if ``size`` is specified, potential result file size, whose units is `kBytes`. Parameters ---------- command : TYPE, optional DESCRIPTION. The default is None. 
user : TYPE, optional DESCRIPTION. The default is None. pwd : TYPE, optional DESCRIPTION. The default is None. size : TYPE, optional DESCRIPTION. The default is None. Returns ------- returncode : TYPE DESCRIPTION. message : TYPE DESCRIPTION. """ if not user and not pwd: user, pwd = get_credentials() if not command: view_myscript = get_viewscript() command = view_myscript.replace( '--out-dir <OUTPUT_DIRECTORY> --out-name <OUTPUT_FILENAME> ' '--user <USERNAME> --pwd <PASSWORD>', '') msg = '' if size: msg = '--size -o console' get_command = ' '.join([command, msg, '-q -u ', user, ' -p ', pwd]) cmd_rep = get_command.replace(get_command.split(' ')[-1], '****') logging.info("SUBMIT REQUEST: %s", cmd_rep) process = subprocess.Popen(get_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) message, _ = process.communicate() returncode = process.returncode return returncode, message def check_data(returncode, message, command=None, user=None, stack=None, size=None): """ Returns ``True`` if status of the submitted request is successful, ``False`` otherwise. Parameters ---------- returncode : TYPE DESCRIPTION. message : TYPE DESCRIPTION. command : TYPE, optional DESCRIPTION. The default is None. user : TYPE, optional DESCRIPTION. The default is None. stack : TYPE, optional DESCRIPTION. The default is None. size : TYPE, optional DESCRIPTION. The default is None. Raises ------ SystemExit Raise an error to exit program at fatal error due to server maintenance. Returns ------- valid_check : bool DESCRIPTION. """ valid_check = False if returncode == 0: if b'[ERROR]' in message: logging.error("FAILED REQUEST - raised error:\n %s", message) else: if size: if stack: if b'code="005-0"' in message: valid_check = True elif b'code="005-0"' not in message and b'code="005-7"' in message: # Handling exceptions due to changes in MOTU API from v3.10 to v3.12 try: req_size = convert_size_hr( (float(str(message).split('=')[-1].split('"')[1])) * 1000) except ValueError: req_size = convert_size_hr( (float(str(message).split('=')[4].split('"')[1])) * 1000) treshold_size = convert_size_hr(1.0E8 * 1000) if req_size > treshold_size: token = hashlib.md5( (':'.join([command.rstrip(), user])).encode('utf-8')).hexdigest() token_url = 'https://github.com/copernicusmarine/cmemsapi/blob/master/_transactions' # pylint: disable=line-too-long resp = rq.get(f'{token_url}/{token}') if resp.status_code == 200: valid_check = True else: msg = ( '[ERROR] Your datarequest exceeds max limit set to 100 GiB.\n' '[ACTION] Please contact Support Team at:\n' ' https://marine.copernicus.eu/services-portfolio/contact-us/ \n' # pylint: disable=line-too-long f'[ACTION] And submit a query attaching your logile located here:\n' f' {LOGFILE}.\n' '[INFO] Once it is done and by the next 48 hours, ' 'the Support Team will authorize your request ' 'and send an email to the inbox linked to ' f'the Copernicus Marine Account (username = {user}) ' 'for confirmation and instructions.' ) print(msg) logging.error(msg) else: valid_check = True elif b'code="005-0"' in message: valid_check = True else: logging.info('Request status is successful') print( '[INFO] Server is releasing the token to successfully grant next request. ' 'It will resume AUTOMATICALLY.\n') time.sleep(5) valid_check = True else: logging.error("FAILED REQUEST - raised error:\n %s", message) print('[WARNING] Failed data request has been logged.\n') if b'HTTP Error 503' in message: print( 'HTTP Error 503 - Service is temporary down. Break for 5 minutes.' 
) time.sleep(300) if b'HTTP Error 4' in message: logging.error('Permanent error. Exiting program.') raise SystemExit return valid_check def process_viewscript(target_directory, view_myscript=None, user=None, pwd=None, forcestack=None): """ Generates as many data requests as required to match initial ``VIEW_SCRIPT``. Parameters ---------- target_directory : str or path DESCRIPTION. view_myscript : str, optional DESCRIPTION. The default is None. user : str, optional DESCRIPTION. The default is None. pwd : str, optional DESCRIPTION. The default is None. forcestack : bool, optional DESCRIPTION. The default is None. Raises ------ ValueError DESCRIPTION. Returns ------- TYPE On success, returns path of the output file matching the ``VIEW_SCRIPT`` data request, ``False`` otherwise. """ split_dict = get_config_constraints() outname = False if not user and not pwd: user, pwd = get_credentials() if not view_myscript: view_myscript = get_viewscript() else: uni_test = [ 'python -m motuclient --motu http', ' '.join([ '--out-dir <OUTPUT_DIRECTORY> --out-name <OUTPUT_FILENAME>', '--user <USERNAME> --pwd <PASSWORD>' ]) ] if not all([item in view_myscript for item in uni_test]): msg = ( '[DEBUG] Cannot parse VIEWSCRIPT. ' 'Please paste the ``TEMPLATE COMMAND`` as shown in this article: ' 'https://marine.copernicus.eu/faq/' 'how-to-write-and-run-the-script-to-download-' 'cmems-products-through-subset-or-direct-download-mechanisms/?idpage=169' ) raise ValueError(msg) view_script_command = view_myscript.replace( '--out-dir <OUTPUT_DIRECTORY> --out-name <OUTPUT_FILENAME> ' '--user <USERNAME> --pwd <PASSWORD>', '') init_returncode, init_message = get_data(view_script_command, user, pwd, size=True) if not check_data( init_returncode, init_message, view_script_command, user, size=True): return outname vs_dict = viewscript_string_to_dict(view_script_command) vs_dict['target_directory'] = str(target_directory) if not forcestack: for key_r, val_r in split_dict.items(): if any(x in vs_dict['product-id'] for x in val_r.get('pattern', 'Not Found')): for key_s, val_s in omit(split_dict[key_r].items(), 'pattern'): try: check = all([ val_s.get('depth') >= vs_dict['abs_depth'], *([g <= val_s.get('geo') for g in vs_dict['abs_geo']]) ]) except KeyError: check = all([ *([g <= val_s.get('geo') for g in vs_dict['abs_geo']]) ]) if check: check_stack = key_s[:-2] if vs_dict['delta-days'].days < 28: check_stack = 'day' vs_dict = get_dates_stack(vs_dict, check_stack, size=True) command_size = viewscript_dict_to_string(size=True, **vs_dict) returncode, message = get_data(command_size, user, pwd, size=True) if check_data(returncode, message, stack=check_stack, size=True): stack = check_stack break else: stack = forcestack try: msg = (f'[INFO] Data requests will be submitted by ' f'{stack} stacks.') except NameError: stack = 'day' msg = ('[WARNING] No matching temporal resolution. ' f'To be coded using CSW. Stack is set to {stack}.') print(msg) print('\n+------------------------------------+\n| ! 
- CONNECTION TO CMEMS' 'HUB - OPEN |\n+------------------------------------+\n\n') for retry in range(1, 4): retry_flag = False date_start = vs_dict['dt-date-min'] date_end = vs_dict['dt-date-max'] vs_dict = get_dates_stack(vs_dict, stack) while date_start <= date_end: date_end_format = vs_dict['cmd-date-max'].strftime( vs_dict['format']) try: vs_dict['outname'] = '-'.join([ 'CMEMS', vs_dict['prefix'], vs_dict['gridpoint'], vs_dict['out_var_name'], date_end_format + vs_dict['suffix'] ]) except KeyError: vs_dict['outname'] = '-'.join([ 'CMEMS', vs_dict['prefix'], vs_dict['out_var_name'], date_end_format + vs_dict['suffix'] ]) command = viewscript_dict_to_string(cmd=True, **vs_dict) outname = vs_dict['outname'] print( '\n----------------------------------\n' '- ! - Processing dataset request : ' f"{outname}\n----------------------------------\n") if not Path(target_directory / outname).exists(): print('## MOTU API COMMAND ##') print(command.replace(user, '*****').replace(pwd, '*****')) print( '\n[INFO] New data request has been submitted to Copernicus' 'Marine Servers. ' 'If successful, it will extract the data and create your' ' dataset on the fly. Please wait. \n') returncode, message = get_data(command, user, pwd) if check_data(returncode, message): print('[INFO] The dataset for {} has been stored in {}.'. format(outname, target_directory)) else: retry_flag = True else: print(f"[INFO] The dataset for {outname} " f"has already been downloaded in {target_directory}\n") date_start = vs_dict['cmd-date-max'] + dt.timedelta(days=1) vs_dict = get_dates_stack(vs_dict, stack, renew=date_start) if not retry_flag: break print("+-------------------------------------+\n| ! - CONNECTION TO CMEMS " "HUB - CLOSE |\n+-------------------------------------+\n") with open(LOGFILE) as logfile: if retry == 3 and 'ERROR' in logfile.read(): print("## YOUR ATTENTION IS REQUIRED ##") print(f'Some download requests failed, though {retry} retries. ' f'Please see recommendation in {LOGFILE})') print('TIPS: you can also apply hereafter recommendations.' '\n1. Do not move netCDF files' '\n2. Double check if a change must be done in the ' 'viewscript, FTR it is currently set to:\n') print(view_myscript) print( '\n3. Check there is not an ongoing maintenance by looking ' 'at the User Notification Service and Systems & Products Status:\n', 'https://marine.copernicus.eu/services-portfolio/news-flash/' '\n4. Then, if relevant, do relaunch manually this python ' 'script to automatically download only failed data request(s)' '\n5. Finally, feel free to contact our Support Team either:' '\n - By mail: [email protected] or \n - ' 'By using the webform: ' 'https://marine.copernicus.eu/services-portfolio/contact-us/' ' or \n - By leaving a post on the forum:' ' https://forum.marine.copernicus.eu\n\n') outname = False return outname def convert_size_hr(size_in_bytes): """ Get size from bytes and displays to user in human readable. Parameters ---------- size_in_bytes : TYPE DESCRIPTION. Returns ------- TYPE DESCRIPTION. """ if size_in_bytes == 0: return '0 Byte' size_standard = ('B', 'KiB', 'MiB', 'GiB', 'TiB') integer = int(math.floor(math.log(size_in_bytes, 1_024))) powmath = math.pow(1_024, integer) precision = 2 size = round(size_in_bytes / powmath, precision) return size, size_standard[integer] def get_disk_stat(drive=None): """ Get disk size statistics. Parameters ---------- drive : TYPE, optional DESCRIPTION. The default is None. Returns ------- disk_stat : TYPE DESCRIPTION. 
""" if not drive: drive = '/' disk_stat = list(shutil.disk_usage(drive)) return disk_stat def get_file_size(files): """ Get size of file(s) in bytes. Parameters ---------- files : TYPE DESCRIPTION. Returns ------- mds_size : TYPE DESCRIPTION. """ mds_size = 0 for file in files: with xr.open_dataset(file, decode_cf=False) as sds: mds_size = mds_size + sds.nbytes return mds_size def check_file_size(mds_size, default_nc_size=None): """ Check size of file(s). Parameters ---------- mds_size : TYPE DESCRIPTION. default_nc_size : TYPE, optional DESCRIPTION. The default is None. Returns ------- check_fs : TYPE DESCRIPTION. """ if not default_nc_size: default_nc_size = 16_000_000_000 check_fs = False size, unit = display_disk_stat(mds_size) if mds_size == 0: print(f'[ERROR-NETCDF] There is an error to assess the size of netCDF ' 'file(s). Please check if data are not corrupted.') elif size == 0: print(f'[ERROR] Program exit.') elif mds_size > default_nc_size: print(f'[INFO-NETCDF] The size of the netCDF file would be higher than' ' 16 GiB.') force = query( f'[ACTION-NETCDF] Do you still want to create the netCDF file of ' f'{BOLD}size {size} {unit}{END}?', 'no') if not force: print('[ERROR-NETCDF] Writing to disk action has been aborted by ' 'user due to file size issue.') print('[INFO-NETCDF] The script will try to write several netCDF ' 'files with lower file size.') else: check_fs = True else: check_fs = True return check_fs def display_disk_stat(mds_size): """ Display hard drive statistics to user. Parameters ---------- mds_size : TYPE DESCRIPTION. Returns ------- mds_size_hr : TYPE DESCRIPTION. """ disk_stat = get_disk_stat() free_after = disk_stat[2] - mds_size disk_stat.append(free_after) disk_stat.append(mds_size) try: total_hr, used_hr, free_hr, free_after_hr, mds_size_hr = [ convert_size_hr(item) for item in disk_stat ] except ValueError as error: msg = f"[WARNING] Operation shall be aborted to avoid NO SPACE LEFT ON\ DEVICE error: {error}" mds_size_hr = (0, 'B') else: space = '-' * 37 msg = ''.join( (f"[INFO] {space}\n", f"[INFO] Total Disk Space (before operation) :" f" {total_hr[1]} {total_hr[0]} \n", f"[INFO] Used Disk Space (before operation) :" f" {used_hr[1]} {used_hr[0]} \n", f"[INFO] Free Disk Space (before operation) :" f" {free_hr[1]} {free_hr[0]} \n", f"[INFO] Operation to save dataset to Disk :" f" {mds_size_hr[1]} {mds_size_hr[0]} \n", f"[INFO] Free Disk Space (after operation) :" f" {free_after_hr[1]} {free_after_hr[0]} \n", f"[INFO] {space}")) print(''.join(("[INFO] CHECK DISK STATISTICS\n", msg))) return mds_size_hr def get_file_pattern(outname, sep='-', rem=-1, advanced=True): """ Retrieve a ``file_pattern`` from a filename and advanced regex. Parameters ---------- outname : str Filename from which a pattern must be extracted. sep : str, optional Separator. The default is '-'. rem : TYPE, optional Removal parts. The default is -1. advanced : TYPE, optional Advanced regex. The default is True. Returns ------- file_pattern : str The ``file_pattern`` extracted from ``filename``. """ if 'pathlib' in str(type(outname)): outname = outname.name if advanced: file_pattern = outname.replace(outname.split(sep)[rem], '')[:-1] else: # To be coded pass return file_pattern def get_years(ncfiles, sep='-'): """ Retrieve a list of years from a list of netCDF filenames. Parameters ---------- ncfiles : list List of filenames from which years will be extracted. sep : TYPE, optional Separator. The default is '-'. Returns ------- years : set List of years. 
""" years = set([str(f).split(sep)[-1][:4] for f in ncfiles]) return years def get_ncfiles(target_directory, file_pattern=None, year=None): """ Retrieve list of files, based on parameters. Parameters ---------- target_directory : str DESCRIPTION. file_pattern : TYPE, optional DESCRIPTION. The default is None. year : TYPE, optional DESCRIPTION. The default is None. Returns ------- ncfiles : list List of strings containing absolute path to files. """ if 'str' in str(type(target_directory)): target_directory = Path(target_directory) if file_pattern and year: ncfiles = list(target_directory.glob(f'{file_pattern}*{year}*.nc')) elif file_pattern and not year: ncfiles = list(target_directory.glob(f'*{file_pattern}*.nc')) elif year and not file_pattern: ncfiles = list(target_directory.glob(f'*{year}*.nc')) else: ncfiles = list(target_directory.glob('*.nc')) return ncfiles def set_outputfile(file_pattern, target_directory, target_out_directory=None, start_year=None, end_year=None): """ Set output filename based on variables. Parameters ---------- file_pattern : TYPE DESCRIPTION. target_directory : TYPE DESCRIPTION. target_out_directory : TYPE, optional DESCRIPTION. The default is None. start_year : TYPE, optional DESCRIPTION. The default is None. end_year : TYPE, optional DESCRIPTION. The default is None. Returns ------- outputfile : TYPE DESCRIPTION. """ if not target_out_directory: target_out_directory = Path(target_directory.parent, 'copernicus-processed-data') elif 'str' in str(type(target_out_directory)): target_out_directory = Path(target_out_directory) if not target_out_directory.exists(): target_out_directory.mkdir(parents=True) if start_year == end_year or not end_year: outputfile = target_out_directory / f'{file_pattern}-{start_year}.nc' else: outputfile = target_out_directory / \ f'{file_pattern}-{start_year}_{end_year}.nc' return outputfile def over_write(outputfile): """ Ask user if overwrite action should be performed. Parameters ---------- outputfile : TYPE DESCRIPTION. Returns ------- ow : TYPE DESCRIPTION. """ ok_overwrite = True if outputfile.exists(): ok_overwrite = query( f'[ACTION] The file {outputfile} already exists. Do you want ' f'{BOLD}to overwrite{END} it?', 'no') return ok_overwrite def del_ncfiles(ncfiles): """ Delete files. Parameters ---------- ncfiles : TYPE DESCRIPTION. Returns ------- bool DESCRIPTION. """ for fnc in ncfiles: try: fnc.unlink() except OSError as error: print(f'[ERROR]: {fnc} : {error.strerror}') print( '[INFO-NETCDF] All inputs netCDF files have been successfully deleted.' ) return True def to_nc4(mds, outputfile): """ Convert file(s) to one single netCDF-4 file, based on computer limits. Parameters ---------- mds : TYPE DESCRIPTION. outputfile : TYPE DESCRIPTION. Returns ------- nc4 : TYPE DESCRIPTION. 
""" if 'xarray.core.dataset.Dataset' not in str(type(mds)): mds = xr.open_mfdataset(mds, combine='by_coords') if 'str' in str(type(outputfile)): outputfile = Path(outputfile) prepare_encoding = {} for variable in mds.data_vars: prepare_encoding[variable] = mds[variable].encoding prepare_encoding[variable]['zlib'] = True prepare_encoding[variable]['complevel'] = 1 encoding = {} for key_encod, var_encod in prepare_encoding.items(): encoding.update({ key_encod: { key: value for key, value in var_encod.items() if key != 'coordinates' } }) try: mds.to_netcdf(path=outputfile, mode='w', engine='netcdf4', encoding=encoding) except ValueError as error: print( f'[INFO-NETCDF] Convertion initialized but ended in error due to : {error}' ) nc4 = False else: real_file_size = convert_size_hr(outputfile.stat().st_size) space = '-' * 20 msg = ''.join((f"[INFO] {space}\n", f"[INFO-NETCDF] Output file :" f" {str(outputfile)}\n", f"[INFO-NETCDF] File format : netCDF-4\n", f"[INFO-NETCDF] File size : {real_file_size[0]}" f" {real_file_size[1]}\n", f"[INFO] {space}")) print(''.join(("[INFO] CONVERTING TO NETCDF4\n", msg))) nc4 = True return nc4 def to_csv(mds, outputfile): """ Convert file(s) to one single csv file, based on computer limits. Parameters ---------- mds : TYPE DESCRIPTION. outputfile : TYPE DESCRIPTION. Returns ------- csv : TYPE DESCRIPTION. """ if 'xarray.core.dataset.Dataset' not in str(type(mds)): mds = xr.open_mfdataset(mds, combine='by_coords') if 'str' in str(type(outputfile)): outputfile = Path(outputfile) msg2 = 'please contact support at: https://marine.copernicus.eu/services-portfolio/contact-us/' csv = False force = False ms_excel_row_limit = 1_048_576 nb_grid_pts = reduce((lambda x, y: x * y), list([len(mds[c]) for c in mds.coords])) if nb_grid_pts > ms_excel_row_limit: print(f'[INFO-CSV] The total number of rows exceeds MS Excel limit.' f' It is {BOLD}NOT recommended{END} to continue.') force = query( f'[ACTION-CSV] Do you still want to create this CSV file with' f' {BOLD}{nb_grid_pts} rows{END} (though most computers will run out of memory)?', 'no') if nb_grid_pts < ms_excel_row_limit or force: try: dataframe = mds.to_dataframe().reset_index().dropna() outputfile = outputfile.with_suffix('.csv') dataframe.to_csv(outputfile, index=False) except IOError: print(f'[INFO-CSV] Convertion initialized but ended in error.') else: space = '-' * 18 msg = ''.join( (f"[INFO] {space}\n", f"[INFO-CSV] Output file :" f" {str(outputfile)}\n", f"[INFO-CSV] File format : Comma-Separated Values\n", f"[INFO-CSV] Preview Stat:\n {dataframe.describe()}\n", f"[INFO] {space}")) print(''.join(("[INFO] CONVERTING TO CSV\n", msg))) csv = True else: print('[WARNING-CSV] Writing to disk action has been aborted by user ' f'due to very high number of rows ({nb_grid_pts}) exceeding most ' 'computers and softwares limits (such as MS Excel).') print(' '.join( ('[INFO-CSV] A new function is under beta-version to handle ' 'this use case automatically.\n' '[ACTION-CSV] Usage:\n' 'cmemstb to_mfcsv PATH_TO_NETCDF_DIRECTORY PATH_TO_OUTPUT_DIRECTORY\n' '[INFO-CSV] To upvote this feature,', msg2))) try: mds.close() del mds except NameError: print(''.join(('[DEBUG] ', msg2))) return csv def to_mfcsv(input_directory, output_directory, max_depth_level=None): """ Convert netcdf file(s) to multiple csv files, based on MS Excel Limits. Parameters ---------- input_directory : TYPE DESCRIPTION. output_directory : TYPE DESCRIPTION. max_depth_level : TYPE, optional DESCRIPTION. The default is None. 
Returns ------- mfcsv : TYPE DESCRIPTION. """ mfcsv = False if isinstance(input_directory, xr.Dataset): mds = input_directory else: try: # Either a string glob in the form "path/to/my/files/*.nc" # or an explicit list of files to open. mds = xr.open_mfdataset(input_directory, combine='by_coords') except Exception: input_directory = Path(input_directory) mds = xr.open_mfdataset( [str(item) for item in list(input_directory.glob('*.nc'))], combine='by_coords') if isinstance(output_directory, str): output_directory = Path(output_directory) try: if not output_directory.exists(): output_directory.mkdir(parents=True) print(f'[INFO] Directory successfully created : {output_directory}.') except Exception as exception: print(f"[ERROR] Failed to create directory due to {str(exception)}.") ms_excel_row_limit = 1_048_576 space = '-' * 17 nb_grid_pts = reduce((lambda x, y: x * y), list([len(mds[c]) for c in mds.coords])) if nb_grid_pts > ms_excel_row_limit: print(f"[INFO] The total number of rows for a single CSV file exceeds MS Excel limit.") variable_name = list(mds.data_vars.keys())[0] try: depth = len(mds.depth) if max_depth_level is None: depth = len(mds.depth) elif max_depth_level < 0: print(f"[ERROR] Maximum depth level must be a positive index" f" from 0 to {len(mds.depth)}") return mfcsv elif max_depth_level >= 0: depth = max_depth_level print(f"[INFO] As a consequence, the total number of CSV files " f"to be generated is: {len(mds.time) * (depth + 1)}") for t in range(len(mds.time)): for d in range(len(mds.depth)): if d > depth: break DF = mds.isel(depth=d, time=t).to_dataframe() if not DF[variable_name].dropna().empty: t_format = pd.to_datetime(str(DF['time'].values[0])).strftime("%Y%m%d") v_format = '_'.join([DF[column].name for column in DF if column not in ['lon', 'lat', 'longitude', 'latitude', 'depth', 'time']]) try: gb_format = '_'.join([str(len(mds[lonlat])) for lonlat in mds.coords if lonlat not in ['depth', 'time']]) except Exception as exception: print(f"[ERROR] Failed to set boundingbox: {str(exception)}") output_filename = f'CMEMS-time_{t_format}-depth_{d}-{v_format}.csv' else: output_filename = f'CMEMS-gridbox_{gb_format}-time_{t_format}-depth_{d}-{v_format}.csv' finally: output_fpath = output_directory / output_filename if not output_fpath.exists(): try: DF.dropna().to_csv(output_fpath) except Exception as exception: print(f"[ERROR] Failed to write to disk: {repr(exception)}.") else: msg = ''.join( (f"[INFO] {space}\n", f"[INFO-CSV] Output file :" f" {output_fpath}\n", f"[INFO-CSV] File format : Comma-Separated Values\n", f"[INFO-CSV] Preview Stat:\n {DF.dropna().describe()}\n", f"[INFO] {space}")) print(''.join(("[INFO] CONVERTING TO CSV\n", msg))) else: print(f"[INFO] The CSV file {output_filename} already exists" f" in {output_directory.absolute()}.") except AttributeError: print("[INFO] As a consequence, the total number of CSV files " f"to be generated is: {len(mds.time)}") for t in range(len(mds.time)): DF = mds.isel(time=t).to_dataframe() if not DF[variable_name].dropna().empty: t_format = pd.to_datetime(str(DF['time'].values[0])).strftime("%Y%m%d") v_format = '_'.join([DF[column].name for column in DF if column not in ['lon', 'lat', 'longitude', 'latitude', 'time']]) try: gb_format = '_'.join([str(len(mds[lonlat])) for lonlat in mds.coords if lonlat not in ['depth', 'time']]) except Exception as exception: print(f"[ERROR] Failed to set boundingbox: {str(exception)}") output_filename = f'CMEMS-time_{t_format}-{v_format}.csv' else: output_filename = 
f'CMEMS-gridbox_{gb_format}-time_{t_format}-{v_format}.csv' finally: output_fpath = output_directory / output_filename if not output_fpath.exists(): try: DF.dropna().to_csv(output_fpath) except Exception as exception: print(f"[ERROR] Failed to write to disk: {repr(exception)}.") else: msg = ''.join( (f"[INFO] {space}\n", f"[INFO-CSV] Output file :" f" {output_fpath}\n", f"[INFO-CSV] File format : Comma-Separated Values\n", f"[INFO-CSV] Preview Stat:\n {DF.dropna().describe()}\n", f"[INFO] {space}")) print(''.join(("[INFO] CONVERTING TO CSV\n", msg))) else: print(f"[INFO] The CSV file {output_filename} already exists" f" in {output_directory.absolute()}.") mfcsv = True return mfcsv def to_nc4_csv(ncfiles, outputfile, skip_csv=False, default_nc_size=None): """ Convert file(s) to both netCDF-4 and csv files, based on computer limits. Parameters ---------- ncfiles : TYPE DESCRIPTION. outputfile : TYPE DESCRIPTION. skip_csv : TYPE, optional DESCRIPTION. The default is False. default_nc_size : TYPE, optional DESCRIPTION. The default is None. Returns ------- nc4 : bool DESCRIPTION. csv : bool DESCRIPTION. check_ow : bool DESCRIPTION. """ nc4 = False csv = False if not default_nc_size: default_nc_size = 16_000_000_000 mds_size = get_file_size(ncfiles) check_fs = check_file_size(mds_size, default_nc_size) check_ow = over_write(outputfile) check_ow_csv = over_write(outputfile.with_suffix('.csv')) if check_ow and check_fs: with xr.open_mfdataset(ncfiles, combine='by_coords') as mds: nc4 = to_nc4(mds, outputfile) elif not check_ow: print('[WARNING-NETCDF] Writing to disk action has been aborted by ' 'user due to already existing file.') elif not check_fs: skip_csv = True if check_ow_csv and not skip_csv: with xr.open_mfdataset(ncfiles, combine='by_coords') as mds: csv = to_csv(mds, outputfile) return nc4, csv, check_ow def post_processing(outname, target_directory, target_out_directory=None, delete_files=True): """ Post-process the data already located on disk. Concatenate a complete timerange in a single netCDF-4 file, or if not possible, stack periods on minimum netCDF-4 files (either by year or by month). There is a possibility to delete old files to save space, thanks to convertion from nc3 to nc4 and to convert to `CSV`, if technically feasible. Parameters ---------- outname : TYPE DESCRIPTION. target_directory : TYPE DESCRIPTION. target_out_directory : TYPE, optional DESCRIPTION. The default is None. delete_files : TYPE, optional DESCRIPTION. The default is True. Raises ------ SystemExit DESCRIPTION. Returns ------- processing : bool DESCRIPTION. See Also -------- get_file_pattern : called from this method get_ncfiles : called from this method get_years : called from this method set_outputfile : called from this method to_nc4_csv : called from this method del_ncfiles : called from this method """ processing = False try: file_pattern = get_file_pattern(outname) except AttributeError: print(f'[ERROR] Program exits due to fatal error. 
There is no need ' 'to re-run this script if no action has been taken from user side.') raise SystemExit sel_files = get_ncfiles(target_directory, file_pattern) years = get_years(sel_files) try: single_outputfile = set_outputfile(file_pattern, target_directory, target_out_directory, start_year=min(years), end_year=max(years)) except ValueError as error: print( f'[ERROR] Processing failed due to no file matching pattern : {error}' ) else: nc4, csv, ow_choice = to_nc4_csv(sel_files, single_outputfile) if not nc4 and not csv and ow_choice: for year in years: print(year) ncfiles = get_ncfiles(target_directory, file_pattern, year) outfilemerged = set_outputfile(file_pattern, target_directory, target_out_directory, start_year=year) nc4, csv, ow_choice = to_nc4_csv(ncfiles, outfilemerged) if all([delete_files, nc4]): del_ncfiles(sel_files) processing = True return processing def get(local_storage_directory=None, target_out_directory=None, view_myscript=None, user=None, pwd=None, forcestack=False, delete_files=True): """Download and post-process files to both compressed and tabular formats, if applicable. Download as many subsets of dataset required to fulfill an initial data request based on a template command, called ``VIEW SCRIPT`` generated by Copernicus Marine website (https://marine.copernicus.eu). Then, all files are post-processed locally. e.g to concatenate in a single file, to save space (thanks to nc3 -> nc4), to convert to ``CSV`` (if technically possible), and to delete old files. End-user is guided throughout the process if no parameter is declared. To get started, this function is the main entry point. Parameters ---------- local_storage_directory : TYPE, optional DESCRIPTION. The default is None. target_out_directory : TYPE, optional DESCRIPTION. The default is None. view_myscript : TYPE, optional DESCRIPTION. The default is None. user : TYPE, optional DESCRIPTION. The default is None. pwd : TYPE, optional DESCRIPTION. The default is None. forcestack : TYPE, optional DESCRIPTION. The default is False. delete_files : TYPE, optional DESCRIPTION. The default is True. Returns ------- True. See Also -------- process_viewscript : Method to parse `VIEW SCRIPT` post_processing : Method to convert downloaded data to other format Examples -------- Ex 1. Let the user be guided by the script with interactive questions: >>> cmemstb get Ex 2. Get data matching a ``VIEW SCRIPT`` template command passed as `parameter`: >>> cmemstb get --view_myscript="python -m motuclient --motu https://nrt.cmems-du.eu/motu-web/Motu --service-id GLOBAL_ANALYSIS_FORECAST_PHY_001_024-TDS --product-id global-analysis-forecast-phy-001-024 --longitude-min -20 --longitude-max 45 --latitude-min 25 --latitude-max 72 --date-min \\"2019-08-18 12:00:00\\" --date-max \\"2020-08-31 12:00:00\\" --depth-min 0.493 --depth-max 0.4942 --variable thetao --out-dir <OUTPUT_DIRECTORY> --out-name <OUTPUT_FILENAME> --user <USERNAME> --pwd <PASSWORD>" Notes ----- For Windows Operating System Users and when using the ``--view_myscript`` as parameter, you might want to double check that ``double quote`` around dates are well escaped (see above example). 
""" target_directory = set_target_directory(local_storage_directory) outname = process_viewscript(target_directory=target_directory, view_myscript=view_myscript, user=user, pwd=pwd, forcestack=forcestack) post_processing(outname=outname, target_directory=target_directory, target_out_directory=target_out_directory, delete_files=delete_files) return True def cli(): """ Method to enable Command Line Interface and to expose only useful method for beginners. Returns ------- None. """ fire.Fire({ 'display_disk_stat': display_disk_stat, 'get': get, 'get_credentials': get_credentials, 'get_data': get_data, 'get_file_pattern': get_file_pattern, 'get_ncfiles': get_ncfiles, 'post_processing': post_processing, 'process_viewscript': process_viewscript, 'set_target_directory': set_target_directory, 'to_nc4_csv': to_nc4_csv, 'to_nc4': to_nc4, 'to_csv': to_csv, 'to_mfcsv': to_mfcsv }) ###Output _____no_output_____
notebooks/frontend_ques_dataset_preprocessing.ipynb
###Markdown Categories of question dataset ###Code # TODO: think of a smarter way of incorporating the left-out tags after this categorization
category_dict = {
    'f1': ['html', 'html5'],
    'f2': ['css', 'css3'],
    'f3': ['javascript', 'jquery'],
    'f4': ['flex', 'saas', 'less', 'stylus', 'bootstrap', 'media-queries',
           'twitter-bootstrap', 'twitter-bootstrap-2', 'twitter-bootstrap-3', 'twitter-bootstrap-4'],
    'f5': ['gulp', 'gruntjs', 'grunt', 'webpack', 'browserify', 'npm', 'bower'],
    'f6': ['angularjs', 'angular2', 'angular-directive', 'angular-scope', 'angular-ui-router',
           'angular-ng-repeat', 'angularjs-2.0', 'angularjs-directive', 'angularjs-filter',
           'angularjs-module', 'angularjs-ng-repeat', 'angularjs-routing', 'angularjs-scope',
           'emberjs', 'ember.js', 'backbonejs', 'backbone.js', 'reactjs', 'react-native',
           'react-router', 'react-redux', 'redux', 'react-native-android', 'react-native-ios',
           'react-native-listview', 'react-native-maps', 'react-native-router-flux',
           'react-router-component', 'react-router-redux', 'react-router-v4', 'reactjs-flux',
           'reactjs-native', 'reactjs.net', 'redux-form', 'redux-framework', 'redux-observable',
           'redux-saga', 'redux-thunk'],
    'f7': ['mocha', 'jasmine', 'karma-runner', 'karma-jasmine']
}
 ###Output _____no_output_____
 ###Markdown Labelling questions which contain only one tag ###Code # Categorise questions that carry a single tag
def labeler2(tags):
    blacklisted_tags = ['django-endless-pagination', 'jflex', 'lockless', 'paperless',
                        'serverless-framework', 'shapeless', 'stackless', 'headless-browser']
    content = tags[0]
    output = ''
    for key, value in category_dict.iteritems():
        if content in blacklisted_tags:
            continue
        if content in value:
            output = key
        else:
            # Fall back to a substring match, e.g. 'html5-video' still maps to 'f1'
            for item in value:
                if item in content:
                    output = key
    return output

ques_with_one_tag['Tags'][380] = ['javascript']
ques_with_one_tag['Category'] = ques_with_one_tag['Tags'].apply(lambda x: labeler2(x))

## Questions which were not labelled correctly in the first attempt
df_unlabeled_ques = ques_with_one_tag[ques_with_one_tag['Category'].apply(lambda x: x == '')]
c = df_unlabeled_ques['Tags']
left_out_tags = np.unique(np.hstack(np.array(c)))
# ======preserving==========
(pd.DataFrame(left_out_tags)).to_csv('../data/interim/left_out_tags.csv', index=False)

# Working on questions labelled correctly (15k)
col_req = ['Post Link', 'Category', 'Tags', 'Title', 'Body', 'Score', 'ViewCount']
df_labeled_ques1 = ques_with_one_tag[ques_with_one_tag['Category'].apply(lambda x: x != '')]
print df_labeled_ques1.shape
# ======preserving==========
df_labeled_ques1[col_req].to_csv('../data/processed/ques_with_one_tag_labelled.csv', index=False)

def find_occurance(df):
    for i in range(1, 8):
        category = 'f' + str(i)
        row, col = df[df['Category'].apply(lambda x: x == category)].shape
        print "Total rows of Category 'f{i}': {row}".format(i=i, row=row)

find_occurance(df_labeled_ques1)
 ###Output Total rows of Category 'f1': 551
Total rows of Category 'f2': 1396
Total rows of Category 'f3': 5719
Total rows of Category 'f4': 944
Total rows of Category 'f5': 1867
Total rows of Category 'f6': 12002
Total rows of Category 'f7': 130
 ###Markdown Separating out questions which are not labelled ###Code ques_dataset.shape
ques_dataset_multi_tags_unlabelled = ques_dataset.loc[~ques_dataset.index.isin(df_labeled_ques1.index)]
col_req2 = ['Post Link', 'Tags', 'Title', 'Body', 'Score', 'ViewCount']
# ======preserving==========
ques_dataset_multi_tags_unlabelled[col_req2].to_csv('../data/processed/ques_multi_tags_unlabelled.csv', index=False)
ques_dataset_multi_tags_unlabelled.shape
###Output _____no_output_____
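 ###Markdown A possible next step (a hedged sketch, not part of the original pipeline) is to reuse `labeler2` on the multi-tag questions saved above: label every tag individually and keep the question only when all of its tags agree on a single category, leaving anything ambiguous unlabelled for manual review. This assumes the `Tags` column of `ques_dataset_multi_tags_unlabelled` holds lists of tag strings, just like the single-tag subset. ###Code def label_multi(tags):
    # Run the single-tag labeler on each tag and collect the non-empty categories
    cats = set(labeler2([t]) for t in tags) - {''}
    # Keep the label only when all tags agree on exactly one category
    return cats.pop() if len(cats) == 1 else ''

multi_labelled = ques_dataset_multi_tags_unlabelled.copy()
multi_labelled['Category'] = multi_labelled['Tags'].apply(label_multi)
print multi_labelled['Category'].value_counts()
 ###Output _____no_output_____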