Columns: path (string, length 7–265) · concatenated_notebook (string, length 46–17M)
exercises/paradigms.ipynb
###Markdown Exercises: Programming paradigms Functional programming Recursion **Exercise 1**: The Fibonacci numbers are defined as $f_1=1$, $f_2=1$, $f_{n+2} = f_n+f_{n+1}$. ###Code # Your solution here def fib(n): pass # Test your solution here assert fib(1) == 1 assert fib(2) == 1 assert fib(3) == 2 assert fib(4) == 3 assert fib(5) == 5 ###Output _____no_output_____ ###Markdown Click me for the solution ```pythondef fib(n): if n in [1, 2]: return 1 else: return fib(n-1) + fib(n-2)``` Higher order functions **Exercise 2a (easy)**: Given an arbitrary function ``fct`` that operators on "single" values (e.g. floats ``fct(1)=3``, etc.)We want to write a higher order function ``vectorize`` that returns a function ``fct_vectorized`` that can be both applied to single values (``fct_vectorized(1)=3``) and a list like so: ``fct_vectorized([1, 2, ...]) = [fct(1), fct(2), ...]``. ###Code # Your solution here def vectorize(fct): pass # Test your solution here: def _square(x): return x*x square = vectorize(_square) assert square(1) == 1 assert square(3) == 9 assert square([1, 2, 3, 4]) == [1, 4, 9, 16] assert square([]) == [] ###Output _____no_output_____ ###Markdown Click me for a hint ```python def vectorize(fct): def vectorized(lst): if isinstance(lst, list): if we were given a list YOUR CODE HERE else: if we were given a single value YOUR CODE HERE return vectorized``` Click me for the solution ```python from typing import Iterabledef vectorize(fct): def vectorized(lst): if isinstance(lst, Iterable): return [fct(item) for item in lst] else: return fct(lst) return vectorized``` **Exercise 2b (harder)**: We want to be able to apply an arbitrary function fct that is defined for "single" values to a list (of lists) like so: ``fct([[1, 2], 3, [[4]], ...]) = [[fct(1), fct(2)], fct(3), [[fct(4)]], ...])``. For this we want to create a high-level function ``tree_vectorize`` ###Code # Your solution here def tree_vectorize(fct): pass # Test your solution here: def _square(x): return x*x square = tree_vectorize(_square) assert square(4) == 16 assert square([4]) == [16] assert square([1, 3]) == [1, 9] assert square([1, 2, [3, [4]]]) == [1, 4, [9, [16]]] ###Output _____no_output_____ ###Markdown Click me for a hint The inner function needs to recursively call itself if the input is a list You can test whether the input is a list using isinstance(maybe_list, list) Click me for the solution ```pythondef tree_vectorize(fct): def _vectorized(nested_list): if not isinstance(nested_list, list): return fct(nested_list) else: return [_vectorized(lst) for lst in nested_list] return _vectorized``` **Exercise 3**: We want to log our function calls. Write a function ``log_call`` that takes a function and prints the function call (see the ``Test your solution`` example). 
###Code # Your solution here def log_call(fct): pass # Test your solution here logged_fib = log_call(fib) # The following should always print the call itself logged_fib(1) logged_fib(7) ###Output _____no_output_____ ###Markdown Click me for the solution A very simple solution could look like this: ```pythondef log_call(fct): def _logged_fct(argument): return_value = fct(argument) fct.__name__ is the name of the function print(f"{fct.__name__}({argument}) = {return_value}") return return_value return _logged_fct```**Generalizing** to arbitrarily many arguments and keyword arguments: ```pythondef log_call(fct): def _logged_fct(*args, **kwargs): return_value = fct(*args, **kwargs) fct.__name__ is the name of the function print(f"{fct.__name__}({args}, {kwargs}) = {return_value}") return return_value return _logged_fct``` **Advanced**: The above solution works perfectly, but for some minor details, it is recommended to do this:```pythonfrom functools import wraps def log_call(fct): @wraps(fct) def _logged_fct(*args, **kwargs): return_value = fct(*args, **kwargs) fct.__name__ is the name of the function print(f"{fct.__name__}({args}, {kwargs}) = {return_value}") return return_value return _logged_fct``` Using a bit of python dark magic, we can actually print what's happening in the recursion: ###Code fib = log_call(fib) fib(7) ###Output _____no_output_____ ###Markdown To display a **fancier version** of this call graph of your recursion, head to https://anandology.com/python-practice-book/functional-programming.html and have a look at their ``trace`` function. Memoization As you can see in the above call graph, the same function values are calculated multiple times. This can be avoided by using the Memoization technique aka caching the output ###Code from functools import lru_cache fib = lru_cache(100)(fib) fib(20) ###Output _____no_output_____ ###Markdown Object oriented programming There will be more exercises about OOP in the exercises for the next lecture "Design Patterns".This time we will only cover the absolute basics. Abstract methods ###Code from abc import ABC, abstractmethod class Shape(ABC): @abstractmethod def calculate_area(self): pass ###Output _____no_output_____
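The OOP section above ends at the abstract `Shape` base class without a concrete subclass. The sketch below is my addition (the `Circle` class and its `radius` attribute are illustrative, not from the original notebook); it shows how the abstract method is meant to be implemented and why the base class itself cannot be instantiated.

```python
import math
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def calculate_area(self):
        pass

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def calculate_area(self):
        # Concrete implementation of the abstract method
        return math.pi * self.radius ** 2

print(Circle(2.0).calculate_area())  # ~12.566

try:
    Shape()  # classes with unimplemented abstract methods cannot be instantiated
except TypeError as err:
    print(err)
```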
TA/Session5.ipynb
###Markdown Decision Trees 1) Preparing Data: ###Code import pandas as pd import numpy as np from sklearn.preprocessing import OrdinalEncoder df2 = pd.read_csv("./Data/5/Dataset2.csv") print(df2.columns) df2.head() ###Output _____no_output_____ ###Markdown check for null values ###Code print(df2.info()) ###Output _____no_output_____ ###Markdown drop unwanted attributes ###Code for column in df2.columns: print(column, ": ", set(df2[column].values)) df2 = df2.drop(['veil-type'], axis=1) ###Output _____no_output_____ ###Markdown encode categorical attributes ###Code def replace_poison(x): poison = x['poisonous'] if poison == 'p': return 1 else : return 0 df2['poisonous'] = df2.apply(replace_poison, axis=1) X = df2.drop(['poisonous'], axis=1) y = df2['poisonous'] ordinal_enc = OrdinalEncoder() ordinal_vals = ordinal_enc.fit_transform(X) ordinal_vals = ordinal_vals.astype('int8') X = pd.DataFrame(ordinal_vals, columns=X.columns) ###Output _____no_output_____ ###Markdown 2) Classifying the Data ###Code from sklearn.tree import DecisionTreeClassifier from sklearn.model_selection import train_test_split, GridSearchCV from sklearn.metrics import accuracy_score from sklearn.tree import export_graphviz from sklearn.datasets import load_wine from IPython.display import SVG # from graphviz import Source from IPython.display import display ###Output _____no_output_____ ###Markdown split into test and train sets ###Code X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) ###Output _____no_output_____ ###Markdown Decision Tree with gini ###Code dtree_gini = DecisionTreeClassifier(criterion='gini') cls = dtree_gini.fit(X_train,y_train) y_pred = cls.predict(X_test) print("Accuracy:", accuracy_score(y_test, y_pred)) export_graphviz(dtree_gini, out_file="./Data/5/dtree_gini.dot", feature_names=X_train.columns, filled = True) ###Output _____no_output_____ ###Markdown Decision Tree with entropy ###Code dtree_entropy = DecisionTreeClassifier(criterion='entropy') cls = dtree_entropy.fit(X_train,y_train) y_pred = cls.predict(X_test) print("Accuracy:", accuracy_score(y_test, y_pred)) export_graphviz(dtree_entropy, out_file="./Data/5/dtree_entropy.dot", feature_names=X_train.columns, filled = True) ###Output _____no_output_____ ###Markdown visualize ###Code # In order to see each tree in jupyter notebook # uncomment following lines and execute them in # separate cells # graph = Source(export_graphviz(dtree_gini, out_file=None, # feature_names=X_train.columns, # filled = True)) # display(SVG(graph.pipe(format='svg'))) # graph = Source(export_graphviz(dtree_entropy, out_file=None, # feature_names=X_train.columns, # filled = True)) # display(SVG(graph.pipe(format='svg'))) # random forests have many estimators so we should travers them # or just visualize one of them # graph = Source(export_graphviz(rf_gini.estimators_[0], out_file=None, # feature_names=X_train.columns, # filled = True)) # display(SVG(graph.pipe(format='svg'))) # graph = Source(export_graphviz(rf_entropy.estimators_[0], out_file=None, # feature_names=X_train.columns, # filled = True)) # display(SVG(graph.pipe(format='svg'))) # ------------ you can also try this: ------------ # # from sklearn.externals.six import StringIO # from IPython.display import Image # from sklearn.tree import export_graphviz # import pydotplus # dot_data = StringIO() # export_graphviz(clf, out_file=dot_data, # filled=True, rounded=True, # special_characters=True, feature_names = feature_cols,class_names=['0','1']) # graph = 
pydotplus.graph_from_dot_data(dot_data.getvalue()) # graph.write_png('diabetes.png') # ------------------------------------------------ # ###Output _____no_output_____ ###Markdown Grid Search ###Code param = {'min_samples_split': [2, 4, 6, 8], 'max_depth': [5, 10, 15, 25, None]} ###Output _____no_output_____ ###Markdown grid search on decision tree with gini ###Code gs1 = GridSearchCV(dtree_gini, param, cv=5, n_jobs=-1, return_train_score=True) gs_fit1 = gs1.fit(X, y) pd.DataFrame(gs_fit1.cv_results_).sort_values('mean_test_score', ascending=False)[0:5] ###Output _____no_output_____ ###Markdown grid search on decision tree with entropy ###Code gs2 = GridSearchCV(dtree_entropy, param, cv=5, n_jobs=-1, return_train_score=True) gs_fit2 = gs2.fit(X, y) pd.DataFrame(gs_fit2.cv_results_).sort_values('mean_test_score', ascending=False)[0:5] ###Output _____no_output_____ ###Markdown Test on Unknown Data ###Code df2_val = pd.read_csv("./Data/5/Dataset2_Unknown.csv") res2 = pd.DataFrame() df2_val = df2_val.drop(['veil-type'], axis=1) ordinal_vals = ordinal_enc.transform(df2_val) ordinal_vals = ordinal_vals.astype('int8') df2_val = pd.DataFrame(ordinal_vals, columns=df2_val.columns) df2_val.head() ###Output _____no_output_____ ###Markdown predict with decision tree using gini ###Code dtree_gini = DecisionTreeClassifier(criterion='gini', max_depth=25, min_samples_split=2) cls = dtree_gini.fit(X_train,y_train) y_pred = cls.predict(df2_val) res2["dtree_gini"] = y_pred ###Output _____no_output_____ ###Markdown predict with decision tree using entropy ###Code dtree_entropy = DecisionTreeClassifier(criterion='entropy', max_depth=25, min_samples_split=4) cls = dtree_entropy.fit(X_train,y_train) y_pred = cls.predict(df2_val) res2["dtree_entropy"] = y_pred res2.to_csv("./Data/5/prediction.csv") ###Output _____no_output_____
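The commented-out visualization cell above refers to `rf_gini.estimators_[0]` and `rf_entropy.estimators_[0]`, but no random forest is ever constructed in this notebook. A possible construction consistent with those names is sketched below — this is an assumption on my part, the hyperparameters are illustrative, and it reuses `X_train`, `X_test`, `y_train`, `y_test` from the split above.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Hypothetical forests matching the rf_gini / rf_entropy names used in the
# commented-out export_graphviz calls above.
rf_gini = RandomForestClassifier(criterion='gini', n_estimators=100)
rf_gini.fit(X_train, y_train)
print("Random forest (gini) accuracy:", accuracy_score(y_test, rf_gini.predict(X_test)))

rf_entropy = RandomForestClassifier(criterion='entropy', n_estimators=100)
rf_entropy.fit(X_train, y_train)
print("Random forest (entropy) accuracy:", accuracy_score(y_test, rf_entropy.predict(X_test)))

# Individual trees are exposed via .estimators_, which is what the
# visualization snippets expect.
first_tree = rf_gini.estimators_[0]
```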
JigsawLessons/.ipynb_checkpoints/swarm_plots-checkpoint.ipynb
###Markdown Swarm Plots Lesson *Manasvi Malepati, Amani Arman Kiruga, Caleb Davis* Our Resources:[Seaborn Blog Post](https://datasoups.blogspot.com/2018/10/seaborn-stripswarm-violin-plots.html])[Alex Gude Blog Post](https://alexgude.com/blog/distribution-plots/:~:text=Swarm%20Plots%2C%20also%20called%20beeswarm,instead%20of%20adding%20random%20jitter.)[Broadway Dataset](https://corgis-edu.github.io/corgis/csv/broadway/) What are Strip and Swarm plots and what are they used for?Strip and swarm (or beeswarm) plots are two types of visualization plots that focus on the distribution of a dataset. Both plots depict the points in the data set between a categorical and numeric variables. The x-axis contains the different categories within the categorical variable and the y-axis presents a scale for the numeric variable. The overall image can seem like a combination between a histogram and a scatter plot, with hundreds of points displaying an image of the dataset's distribution. How do we use Strip and Swarm plots?First, we need to import the necessary libraries (Pandas, Seaborn, MatPlotLib and NumPy): ###Code import pandas as pd import matplotlib.pyplot as plt import numpy as np import seaborn as sns ###Output _____no_output_____ ###Markdown Then, load the data. We will be using the [Broadway](https://corgis-edu.github.io/corgis/csv/broadway/) dataset from the CORGIS collection. The comment below contains the different columns within the dataset and their descriptions. We renamed the column names underneath and shortened the dataset to only contain the first 1000 rows in order to speed up plot generation. ###Code ''' Date.Day Integer The day of the month that this performance's week ended on. 26 Date.Full String The full date representation that this performance's week ended on in "Month/Day/Year" format. "8/26/1990" Date.Month Integer The numeric month that this performance's week ended in (1 = January, 2 = February, etc.). 8 Date.Year Integer The year that this week of performances occurred in. 1990 Show.Name String The name of the production. "Tru" Show.Theatre String The name of the theatre. "Booth" Show.Type String Whether it is a "Musical", "Play", or "Special". "Play" Statistics.Attendance Integer The total number of people who attended performances over the week. 5500 Statistics.Capacity Integer The percentage of the theatre that was filled during that week. 88 Statistics.Gross Integer The "Gross Gross" of this performance, or how much it made in total across the entire week. Measured in dollars. 134456 Statistics.Gross Potential Integer The Gross Potential is the maximum amount an engagement can possibly earn based on calculations involving ticket prices, seating capacity, and the number of performances. This number is expressed here as a percentage of what could have been achieved (Gross Gross / Gross Potential). In case the GP could not be calculated, it was replaced with 0%. 0 Statistics.Performances ''' broadway_url = "https://raw.githubusercontent.com/corgis-edu/corgis/master/source/broadway/broadway-corgis.csv" #Renamed columns and reading only first 1000 rows df = pd.read_csv(broadway_url, names=['Day', 'Date_Full', 'Month', 'Year', 'Name', 'Theatre', 'Type', 'Attendance', 'Capacity', 'Gross', 'Gross Potential', 'Performance'], nrows=1000) #dimensions df.shape df.head(10) ###Output _____no_output_____ ###Markdown Now to use the stripplot and swarmplot functions from seaborn. 
We based our example off of the [Seaborn blog post](https://datasoups.blogspot.com/2018/10/seaborn-stripswarm-violin-plots.html]) which is also linked above in resources. We will first work with the stripplot function and display a strip plot of the gross from each type of show. This means that we will be working with the categorical variable "Type" and the numeric/continous variable "Gross." The parameters that we have to pass in to the seaborn.stripplot() function are our x axis, our y axis and our data. ###Code sns.stripplot(x='Type', y = 'Gross', data = df) plt.show() ###Output _____no_output_____ ###Markdown AnalysisAs you can see, the distributions of the gross for each type of show is shown above with the type of show as the x-axis and the gross as the y-axis.By comparing the distributions for each type, we can see how the distribution of the gross changes based on which type of show we are looking at. For the plays in the dataset, a large portion had made around 50,000 to 4,000,000 dollars per week. Meanwhile the distribution for a musical is far more spread out, with the majority of shows making 3,000,000 to 8,500,000 dollars per week.The special category has only a handful of points compared to the other categories and the distribution is also a lot less unified since there is no clustering of points.Swarm plots are very similar in analysis to strip plots since they depict the same types of variables with points. Exercise 1:Draw a comparison of the attendance among the different types of Broadway shows i.e. Play, Musical, Special using a strip plot. And this time, change the radius of the dots to be 3 points wide. Be sure to look at the resources above for hints or syntax help if you need them. ###Code #Type code here: sns.stripplot(x='Type', y = 'Attendance', data = df, size = 3) plt.show() ###Output _____no_output_____ ###Markdown Now that we have learned about the strip plot, we can move on to the next distribution plot: the swarm plot.The parameters that we need are the exact same as for the stripplot() function. * X axis * Y axis* DataAre the parameters needed for seaborn.swarmplot(). Now try it yourself with the "Gross" and "Type" column. Exercise 2: ###Code #Type code here: sns.swarmplot(x='Type', y = 'Gross', data = df, size = 3) plt.show() ###Output _____no_output_____ ###Markdown Exercise 3:Now use a swarm plot for the same data in Exercise 1. What is the visual difference in the data? When would you use this instead of a strip plot? ###Code #Type code here: sns.swarmplot(x='Type', y = 'Attendance', data = df, size = 3) plt.show() ###Output c:\users\13023\cisc367-projects\venv\lib\site-packages\seaborn\categorical.py:1296: UserWarning: 23.0% of the points cannot be placed; you may want to decrease the size of the markers or use stripplot. warnings.warn(msg, UserWarning) ###Markdown Type answer to last two questions here: The visual differnce is that a swarm plot is wider than a strip plot. You would use a swarm plot in a case where you need to see all the data points since swarm plots avoid overlapping of data. Exercise 4:What is a limitation of using a swarm plot? (Pay attention to the messages given when running your code for the swarm plot exercise) Type your answer here: The limitation of using a swarm plot is that not all of the data might be able to be shown since there is no overlap in data. Other Ways of Plotting with Categorical Data Violin PlotsViolin Plots are very closely related to Swarm, Strip and Box plots. 
They are most useful for visualizing the distribution or kernel density of data. They are also named that way due to their attractiveness and resemblance to a violin. But keep in mind that the distribution estimation procedure is influenced by the sample size, and violins for relatively small samples might look misleadingly smooth. For example if we wanted to visualize how "Gross" is distributed across the diferent types of broadway events we would do this: ###Code sns.violinplot(x = 'Type', y = 'Gross', data = df ) plt.show() ###Output _____no_output_____ ###Markdown AnalysisSource: [How to interpret Violin Plots](https://towardsdatascience.com/violin-plots-explained-fb1d115e023d)One can see that the gross of the Musicals has a median at about 0.7E6 which is much higher than that of the Play or Special events. We can also notice the distribution of the Play data has multiple peaks and that the Special data has one peak at the median (almost normally distributed) sitting within the interquartile range. Exercise 5:Now it's your turn to try to plot a violin plot. Try pick any categorical variable and numerical variable from the dataframe and visualize it. ###Code #Type code here: sns.violinplot(x = 'Type', y = 'Attendance', data = df ) plt.show() ###Output _____no_output_____ ###Markdown Box PlotsAnother method of visualizing categorical data is using box plots. They offer an alternative to violin or strip/swarm plots with the advantage that you are able to quantify the quartiles of the data distribution (although this can also be done with a violin plot). The function seaborn.boxplot() is used for this. Notice that it takes the same basic parameters as swarmplot() and stripplot() ###Code sns.boxplot(x = "Type", y = "Gross", data = df) plt.show() ###Output _____no_output_____ ###Markdown AnalysisEach plot is formed with 5 main points: the 1st and 3rd quartile points (the outer lines of the box), the median (the middle line), and the minimum and maximum (the outer lines on the plot). The area taken up by the actual box in each plot represents the middle 50% of each type of categorical data, and the middle line is the median of that data. The Musical Type category is the only plot that contains an outlier, as shown by the point below the minimum on the plot. We can see that 50-75% of the Musicals in the data grossed more than the maximum grossing Play, and the minimum grossing Musical grossed more than the maximum grossing Special. This is strong evidence that plays grossed significantly higher than Plays and Specials. Exercise 6Try pick any categorical variable and numerical variable from the dataframe and visualize it. ###Code #Type code here: sns.boxplot(x = "Type", y = "Attendance", data = df) plt.show() ###Output _____no_output_____
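Since the lesson introduces strip, swarm, violin and box plots separately, a natural follow-up is to overlay the raw points on a distribution plot. The sketch below is my addition (it assumes the Broadway `df` loaded earlier in the lesson) and shows one common way to combine the two views.

```python
import seaborn as sns
import matplotlib.pyplot as plt

# Violin plot for the distribution shape, with a strip plot overlaid so the
# individual weekly grosses remain visible.
ax = sns.violinplot(x='Type', y='Gross', data=df, inner=None, color='lightgray')
sns.stripplot(x='Type', y='Gross', data=df, size=2, ax=ax)
plt.title('Gross by show type: violin plot with raw points overlaid')
plt.show()
```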
aulas/aula1/SondaMarciana2D.ipynb
###Markdown A gente pode incrementar este PEAS colocando-o como 2D. Pra isso basta fazer o ambiente como uma classe descendente deoutra classe ambiente : a classe graphic ###Code class Marte2D(GraphicEnvironment): def percept(self, agent): '''retorna uma lista de coisas que estao nas proximidades da Sonda''' things = self.list_things_at(agent.location) return things def execute_action(self, agent, action): '''Altera o estado do Ambiente baseado no que o agente faz.''' if action == "ir em frente": print('{} resolveu {} na posicao: {}'.format(str(agent)[1:-1], action, agent.location)) agent.vaiemfrente() elif action == "absorver": items = self.list_things_at(agent.location, tclass=Baterias) if len(items) != 0: if agent.absorveu(items[0]): #Sonda absorveu a bateria print('{} absorveu {} na posicao: {}' .format(str(agent)[1:-1], str(items[0])[1:-1], agent.location)) self.delete_thing(items[0]) #Remove a bateria absorvida . elif action == "registrar": items = self.list_things_at(agent.location, tclass=Marciano) if len(items) != 0: if agent.registrou(items[0]): #Sonda registrou marciano numa posicao print('{} registrou {} na posicao : {}' .format(str(agent)[1:-1], str(items[0])[1:-1], agent.location)) self.delete_thing(items[0]) #Sonda agora pode ignorar este marciano registrado. def is_done(self): '''Geralmente se para quando o agente morre ( descarrega ), Mas vamos parar quando não houver mais bateria pra pegar nem marcianos a registrar''' no_edibles = not any(isinstance(thing, Baterias) or isinstance(thing, Marciano) for thing in self.things) dead_agents = not any(agent.is_alive() for agent in self.agents) return dead_agents or no_edibles ###Output _____no_output_____ ###Markdown Mas ao fazermos isso, mudamos o ambiente. Portanto temos que ajustar o Agente para este novo tipo de ambiente. Agora ele deve se deslocar num plano 2D ###Code class SondaMarciana(Agent): location = [0,1] # agora um valor 2D direction = Direction("down") # pra onde a Sonda esta olhando def vaiemfrente(self): self.location[1] += 1 def absorveu(self, thing): '''retorna True se conseguiu absorver''' if isinstance(thing, Baterias): return True return False def registrou(self, thing): ''' retorna True se conseguiu registrar''' if isinstance(thing, Marciano): return True return False ###Output _____no_output_____ ###Markdown Vamos dar uma rodada ... ###Code planicieMarciana = Marte2D(5,20, color={'SondaMarciana': (200,0,0), 'Marciano': (0, 200, 200), 'Baterias': (230, 115, 40)}) # park width is set to 5, and height to 20 Spirit = SondaMarciana(program) bateria = Baterias() marciano = Marciano() planicieMarciana.add_thing(Spirit, [0,1]) planicieMarciana.add_thing(bateria, [0,5]) planicieMarciana.add_thing(marciano, [0,7]) outroMarciano = Marciano() planicieMarciana.add_thing(outroMarciano, [0,15]) print("Spirit começou em (1,1) voltado para baixo, vamos ver se ele acha a bateria!") planicieMarciana.run(20) ###Output _____no_output_____ ###Markdown Mas nossa sonda, apesar de agora ver um mundo 2D, continua andando só numa direção. Vamos acertar isso : A sonda vai agora aleatoriamente seguir em frente ou se virar exceto quando encontrar os penhascos que limitam a area explorada ( é eu pus uns penhascos em Marte ). 
Quando detectar um penhasco ela se vira abitrariamente Percept: Percebe Bateria Percebe Marciano Percebe Nada Ação Absorve Registra Registra que está na beira do penhasco Tem penhasco Não tem penhasco Ação: Vira Esquerda / Vira Direita ( 50% - 50% chance ) Vira Esquerda / Vira Direita / Segue em Frente ( 25% - 25% - 50% chance ) ###Code from random import choice class SondaMarciana(Agent): location = [0,1] direction = Direction("down") def vaiemfrente(self, success=True): '''vai em frente se tem uma destinacao valida ''' if not success: return if self.direction.direction == Direction.R: self.location[0] += 1 elif self.direction.direction == Direction.L: self.location[0] -= 1 elif self.direction.direction == Direction.D: self.location[1] += 1 elif self.direction.direction == Direction.U: self.location[1] -= 1 def virou(self, d): self.direction = self.direction + d def absorveu(self, thing): if isinstance(thing, Baterias): return True return False def registrou(self, thing): if isinstance(thing, Marciano): return True return False def program(percepts): '''retorna uma ação baseada numa percepcao''' for p in percepts: if isinstance(p, Baterias): return 'absorver' elif isinstance(p, Marciano): return 'registrar' if isinstance(p,Bump): # checa se é um penhasco turn = False choice = random.choice((1,2)); else: choice = random.choice((1,2,3,4)) # 1-direita, 2-esquerda, ou segue em frente if choice == 1: return 'viradireita' elif choice == 2: return 'viraesquerda' else: return 'vaiemfrente' ###Output _____no_output_____ ###Markdown Bom fizemos o agente com novas habilidades. Agora temos que acertar o ambiente 2D para que ele se altere como resultado das ações do agente ###Code class Marte2D(GraphicEnvironment): def percept(self, agent): things = self.list_things_at(agent.location) loc = copy.deepcopy(agent.location) #verifica se a Sonda está indo pro penhasco if agent.direction.direction == Direction.R: loc[0] += 1 elif agent.direction.direction == Direction.L: loc[0] -= 1 elif agent.direction.direction == Direction.D: loc[1] += 1 elif agent.direction.direction == Direction.U: loc[1] -= 1 if not self.is_inbounds(loc): things.append(Bump()) return things def execute_action(self, agent, action): if action == 'viradireita': print('{} resolveu {} na posicao: {}'.format(str(agent)[1:-1], action, agent.location)) agent.virou(Direction.R) elif action == 'viraesquerda': print('{} resolveu {} na posicao: {}'.format(str(agent)[1:-1], action, agent.location)) agent.virou(Direction.L) elif action == 'vaiemfrente': print('{} resolveu andar {} na posicao: {}'.format(str(agent)[1:-1], agent.direction.direction, agent.location)) agent.vaiemfrente() elif action == "absorver": items = self.list_things_at(agent.location, tclass=Baterias) if len(items) != 0: if agent.absorveu(items[0]): print('{} absorveu {} na posicao: {}' .format(str(agent)[1:-1], str(items[0])[1:-1], agent.location)) self.delete_thing(items[0]) elif action == "registrar": items = self.list_things_at(agent.location, tclass=Marciano) if len(items) != 0: if agent.registrou(items[0]): print('{} registrou {} na posicao: {}' .format(str(agent)[1:-1], str(items[0])[1:-1], agent.location)) self.delete_thing(items[0]) def is_done(self): no_edibles = not any(isinstance(thing, Baterias) or isinstance(thing, Marciano) for thing in self.things) dead_agents = not any(agent.is_alive() for agent in self.agents) return dead_agents or no_edibles planicieM = Marte2D(10,10, color={'SondaMarciana': (200,0,0), 'Marciano': (0, 200, 200), 'Baterias': (230, 115, 40)}) 
Spirit = SondaMarciana(program) bateria1 = Baterias() marciano1 = Marciano() planicieM.add_thing(Spirit, [0,0]) planicieM.add_thing(bateria1, [1,2]) planicieM.add_thing(marciano1, [0,1]) bateria2 = Baterias() marciano2 = Marciano() planicieM.add_thing(bateria2, [4,3]) planicieM.add_thing(marciano2, [2,4]) print("Spirit inicia em [0,0], vira para o Sul.") planicieM.run(120) ###Output _____no_output_____
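To get a feel for how often the random policy clears the world within a step budget, the set-up above can be wrapped in a small experiment. This is my own sketch — it reuses `Marte2D`, `SondaMarciana`, `program`, `Baterias` and `Marciano` exactly as defined above, and the trial parameters are arbitrary.

```python
import random

def run_trial(steps=200, width=10, height=10):
    # Build a fresh world with one probe, one battery and one Martian at random spots.
    world = Marte2D(width, height,
                    color={'SondaMarciana': (200, 0, 0),
                           'Marciano': (0, 200, 200),
                           'Baterias': (230, 115, 40)})
    world.add_thing(SondaMarciana(program), [0, 0])
    world.add_thing(Baterias(), [random.randrange(width), random.randrange(height)])
    world.add_thing(Marciano(), [random.randrange(width), random.randrange(height)])
    world.run(steps)
    return world.is_done()  # True when every battery was absorbed and every Martian registered

finished = sum(run_trial() for _ in range(5))
print(f"{finished}/5 trials finished within the step budget")
```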
notebooks/5_2_Feature_Column.ipynb
###Markdown **5-2 feature_column** Feature column is usually applied in the feature engineering for the structured data, while rarely used for the image or text date. **1. Introduction about how to use feature column** Feature column is used to converting category features into one-hot encoding, or creating bucketing feature from continuous feature, or generating cross features from multiple features, etc. Before creating feature column, please call the functions in the module tf.feature_column. The nine most frequently used functions in this module are shown in the figure below. All these functions will return a Categorical-Column or a Dense-Column object, but will not return bucketized_column, since the last class is inhereted from the first two classes. Be careful: all the Categorical-Column class have to be converted into Dense-Column class through indicator_column before input to the model. - `numeric_column`, the most frequently used function. - `bucketized_column`, generated from numerical column, listing multiple features from a numerical clumn; it is one-hot encoded. - `categorical_column_with_identity`, one-hot encoded, identical to the case that each bucket is one interger. - `categorical_column_with_vocabulary_list`, one-hot encoded; the dictionary is specified by the list. - `categorical_column_with_vocabulary_file`, one-hot encoded; the dictionary is specified by the file. - `categorical_column_with_hash_bucket`, used in the case with a large interger or a large dictionary. - `indicator_column`, generated by Categorical-Column; one-hot encoded. - `embedding_column`, generated by Categorical Column; the embedded vector distributed parameter needs learning/training. The recommended dimension of the embedded vector is the fourth root to the number of categories. - `crossed_column`, consists of arbitrary category column except for categorical_column_with_hash_bucket **2. Demonstration of feature column** Here is a complete example that solves Titanic survival problmen using feature column. ###Code import datetime import numpy as np import pandas as pd from matplotlib import pyplot as plt import tensorflow as tf from tensorflow.keras import layers,models # Printing log def printlog(info): nowtime = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S') print("\n"+"=========="*8 + "%s"%nowtime) print(info+'...\n\n') ###Output _____no_output_____ ###Markdown **1. Constructing data pipeline** ###Code dftrain_raw = pd.read_csv("../data/titanic/train.csv") dftest_raw = pd.read_csv("../data/titanic/test.csv") dfraw = pd.concat([dftrain_raw, dftest_raw]) def prepare_dfdata(dfraw): dfdata = dfraw.copy() dfdata.columns = [x.lower() for x in dfdata.columns] dfdata = dfdata.rename(columns={"survived": "label"}) dfdata.drop(["passengerid", "name"], axis=1, inplace=True) for col, dtype in dict(dfdata.dtypes).items(): #see if there are missing values. 
if dfdata[col].hasnans: dfdata[col + "_nan"] = pd.isna(dfdata[col]).astype('int32') if dtype not in [np.object, np.str, np.unicode]: dfdata[col].fillna(dfdata[col].mean(), inplace=True) else: dfdata[col].fillna('', inplace=True) return (dfdata) dfdata = prepare_dfdata(dfraw) dftrain = dfdata.iloc[0:len(dftrain_raw),:] dftest = dfdata.iloc[len(dftrain_raw):,:] # importing data from dataframe # Importing data from dataframe def df_to_dataset(df, shuffle=True, batch_size=32): dfdata = df.copy() if 'label' not in dfdata.columns: ds = tf.data.Dataset.from_tensor_slices(dfdata.to_dict(orient = 'list')) else: labels = dfdata.pop('label') ds = tf.data.Dataset.from_tensor_slices((dfdata.to_dict(orient = 'list'), labels)) if shuffle: ds = ds.shuffle(buffer_size=len(dfdata)) ds = ds.batch(batch_size) return ds ds_train = df_to_dataset(dftrain) ds_test = df_to_dataset(dftest) #================================================================================ # 2. Defining the feature column #================================================================================ printlog("step2: make feature columns...") feature_columns = [] # Numerical column for col in ['age','fare','parch','sibsp'] + [ c for c in dfdata.columns if c.endswith('_nan')]: feature_columns.append(tf.feature_column.numeric_column(col)) # Bucketized column age = tf.feature_column.numeric_column('age') age_buckets = tf.feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65]) feature_columns.append(age_buckets) # Category column # NOTE: all the Categorical-Column class have to be converted into Dense-Column class through `indicator_column` before input to the model. sex = tf.feature_column.indicator_column( tf.feature_column.categorical_column_with_vocabulary_list( key='sex',vocabulary_list=["male", "female"])) feature_columns.append(sex) pclass = tf.feature_column.indicator_column( tf.feature_column.categorical_column_with_vocabulary_list( key='pclass',vocabulary_list=[1,2,3])) feature_columns.append(pclass) ticket = tf.feature_column.indicator_column( tf.feature_column.categorical_column_with_hash_bucket('ticket',3)) feature_columns.append(ticket) embarked = tf.feature_column.indicator_column( tf.feature_column.categorical_column_with_vocabulary_list( key='embarked',vocabulary_list=['S','C','B'])) feature_columns.append(embarked) # Embedding column cabin = tf.feature_column.embedding_column( tf.feature_column.categorical_column_with_hash_bucket('cabin',32),2) feature_columns.append(cabin) # Crossed column pclass_cate = tf.feature_column.categorical_column_with_vocabulary_list( key='pclass',vocabulary_list=[1,2,3]) crossed_feature = tf.feature_column.indicator_column( tf.feature_column.crossed_column([age_buckets, pclass_cate],hash_bucket_size=15)) feature_columns.append(crossed_feature) #================================================================================ # 3. Defining the model #================================================================================ printlog("step3: define model...") tf.keras.backend.clear_session() model = tf.keras.Sequential([ layers.DenseFeatures(feature_columns), # Placing the feature into tf.keras.layers.DenseFeatures layers.Dense(64, activation='relu'), layers.Dense(64, activation='relu'), layers.Dense(1, activation='sigmoid') ]) #================================================================================ # 4. 
Training the model #================================================================================ printlog("step4: train model...") model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) history = model.fit(ds_train, validation_data=ds_test, epochs=10) #================================================================================ # 5. Evaluating the model #================================================================================ printlog("step5: eval model...") model.summary() %matplotlib inline %config InlineBackend.figure_format = 'svg' import matplotlib.pyplot as plt def plot_metric(history, metric): train_metrics = history.history[metric] val_metrics = history.history['val_'+metric] epochs = range(1, len(train_metrics) + 1) plt.plot(epochs, train_metrics, 'bo--') plt.plot(epochs, val_metrics, 'ro-') plt.title('Training and validation '+ metric) plt.xlabel("Epochs") plt.ylabel(metric) plt.legend(["train_"+metric, 'val_'+metric]) plt.show() plot_metric(history,"accuracy") ###Output _____no_output_____
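To see what an individual feature column actually feeds into the network, it helps to transform one example batch by hand. The inspection cell below is my addition (it reuses `ds_train`, `age_buckets` and `sex` defined above), following the usual `tf.keras.layers.DenseFeatures` demonstration pattern.

```python
# Grab one (features, labels) batch and keep only the feature dictionary.
example_batch = next(iter(ds_train))[0]

def demo(feature_column):
    feature_layer = tf.keras.layers.DenseFeatures(feature_column)
    print(feature_layer(example_batch).numpy()[:5])

demo(tf.feature_column.numeric_column('age'))  # raw numeric values
demo(age_buckets)                              # one-hot bucketized ages
demo(sex)                                      # one-hot encoded sex (already an indicator_column)
```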
sz/swarm-intelligence/ea.ipynb
###Markdown **Task 1:** Implement the generational evolutionary algorithm with tournament selection. Let it be parametrized with four parameters: the size of the population $N$, the size of the tournament $t$, the probability of mutation $p_m$ and the probability of crossover $p_c$. The algorithm should terminate automatically after 50 generations with no improvement. ###Code def chance(probability): return random.random() < probability class EvolutionaryAlgorithm: def __init__(self, problem, population_size=100, crossover_prob=0.75, mutation_prob=0.15, tournament_size=3): self.problem = problem self.population_size = population_size self.crossover_prob = crossover_prob self.mutation_prob = mutation_prob self.tournament_size = tournament_size def initial_population(self): return [self.problem.random_solution() for _ in range(self.population_size)] def tournament(self, number=1): return [ self.best_solution( random.sample(self.population, self.tournament_size) ) for _ in range(number) ] def next_solutions(self): if chance(self.crossover_prob): return self.select_with_crossover() else: return self.select_without_crossover() def select_with_crossover(self): parent1, parent2 = self.tournament(2) children = self.problem.crossover(parent1, parent2) return self.possibly_mutate_each(children) def select_without_crossover(self): return self.possibly_mutate_each(self.tournament()) def possibly_mutate_each(self, solutions): return [ self.problem.mutate(solution) if chance(self.mutation_prob) else solution for solution in solutions ] def evolve(self): next_population = [] while len(next_population) < self.population_size: next_population.extend(self.next_solutions()) self.population = next_population[0:self.population_size] def best_solution(self, solutions): return min( solutions, key=self.problem.evaluate ) def optimize(self): self.population = self.initial_population() best_fitness = float('inf') generations_without_improvement = 0 while generations_without_improvement < 50: self.evolve() best_solution = self.best_solution(self.population) fitness = self.problem.evaluate(best_solution) if fitness < best_fitness: best_fitness = fitness generations_without_improvement = 0 else: generations_without_improvement += 1 return best_solution ###Output _____no_output_____ ###Markdown **Task 2:*** What is the role of a mutation operator in evolutionary algorithms?* What are the properties of a good mutation operator?* What is the role of a crossover operator in evolutionary algorithms?* What are the properties of a good crossover operator?* What is the role of a cloning operator in evolutionary algorithms?* What are the properties of a good genetic representation?* How can you tell that the population has converged?* How do the parameters of the evolutionary algorithm affect the speed of its convergence?* Does the speed of the algorithm's convergence correlate with the quality of the solutions? What is the reason?* Can a population escape from a local optimum once it has converged?* What are the strengths of the evolutionary algorithms?* What are the weaknesses of the evolutionary algorithms?* What changes would you introduce to the evolutionary algorithms?* Which problems are evolutionary algorithms best suited for? ###Code tsp = TSP(20) ea = EvolutionaryAlgorithm(tsp) solution = ea.optimize() print(tsp.evaluate(solution)) tsp.display_solution_param(solution) ###Output 4.258790004734686 ###Markdown **Task 3:** Implement the QAP (https://en.wikipedia.org/wiki/Quadratic_assignment_problem). Use EA to solve it. 
###Code import numpy as np class QAP(): def __init__(self, distances, flows, seed=1): random.seed(seed) drows, dcols = distances.shape frows, fcols = flows.shape if drows != dcols or frows != fcols or drows != frows: raise ValueError("distances and flows must be square matrices of the same size") self.size = drows self.distances = distances self.flows = flows def random_solution(self): # facility -> location return np.random.permutation(self.size) def evaluate(self, solution): return sum( self.flows[f1, f2] * self.distances[solution[f1], solution[f2]] for f1 in range(self.size) for f2 in range(self.size) ) def mutate(self, solution): mutated = np.copy(solution) i = random.randrange(self.size) j = random.randrange(self.size) mutated[i], mutated[j] = mutated[j], mutated[i] return mutated def crossover(self, a, b): cut = random.randrange(self.size) left_a = a[0:cut] left_b = b[0:cut] child1 = np.array(list(left_a) + [x for x in b if x not in left_a]) child2 = np.array(list(left_b) + [x for x in a if x not in left_b]) return child1, child2 distances = np.array([ [0, 40, 64, 36, 22, 60], [40, 0, 41, 22, 36, 72], [64, 41, 0, 28, 44, 53], [36, 22, 28, 0, 20, 50], [22, 36, 44, 20, 0, 41], [60, 72, 53, 50, 41, 0] ]) flows = np.array([ [0, 1, 1, 2, 0, 0], [1, 0, 0, 0, 0, 2], [1, 0, 0, 0, 0, 1], [2, 0, 0, 0, 3, 0], [0, 0, 0, 3, 0, 0], [0, 2, 1, 0, 0, 0] ]) # source: https://neos-guide.org/content/qap6 qap = QAP(distances, flows) ea = EvolutionaryAlgorithm(qap) solution = ea.optimize() print(solution) print(qap.evaluate(solution)) ###Output [3 1 5 4 0 2] 626 ###Markdown **Task 4:** Choose one of the problems (TSP or QAP). Assume a constant number of fitness evaluations per each evolutionary run. For a sufficiently big (nontrivial) problem:a) Assume $N = 200$, $t = 5$. Prepare a heatmap illustrating the influence of values of $p_m$ and $p_c$ on the quality of the solution.b) Assume $p_m = 0.5$, $p_c = 0.5$. Prepare a heatmap illustrating the influence of values of $N$ and $t$ on the quality of the solution.Discuss the results. Can we expect to see similar results under different search termination conditions (e.g. a number of iterations with no improvement)? Can we expect to see similar results for different optimization problems? 
###Code import itertools def param_grid(variable_params): keys = variable_params.keys() values_with_index = (enumerate(values) for values in variable_params.values()) for params in itertools.product(*values_with_index): index, values = zip(*params) yield index, dict(zip(keys, values)) def evaluate(problem, **options): ea = EvolutionaryAlgorithm(problem, **options) solution = ea.optimize() return problem.evaluate(solution) def experiment(problem, constants, variables, repetitions=10): keys = variables.keys() labels = tuple([f"{key}={value}" for value in variables[key]] for key in keys) dims = tuple(map(len, variables.values())) results = np.zeros(dims) for index, variable_values in param_grid(variables): results[index] = sum( evaluate(problem, **constants, **variable_values) for _ in range(repetitions) ) / repetitions return results, labels def draw_heatmap(results, labels): fig, ax = plt.subplots(figsize=(10, 10)) y_labels, x_labels = labels ax.set_xticks(list(range(len(x_labels)))) ax.set_yticks(list(range(len(y_labels)))) ax.set_xticklabels(x_labels) ax.set_yticklabels(y_labels) im = ax.imshow(results) fig.colorbar(im, ticks=np.linspace(0, 1, 6)) plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor") for (row, column), value in np.ndenumerate(results): ax.text(column, row, f"{value:.04}", ha="center", va="center", color="w") plt.show() heatmap1, labels1 = experiment( tsp, { 'population_size': 200, 'tournament_size': 5 }, { 'crossover_prob': np.linspace(0.9, 0.1, 9), 'mutation_prob': np.linspace(0.05, 0.5, 10), } ) draw_heatmap(heatmap1, labels1) heatmap2, labels2 = experiment( tsp, { 'crossover_prob': 0.5, 'mutation_prob': 0.5 }, { 'population_size': [1000, 500, 200, 100, 50, 20], 'tournament_size': [5, 8, 11, 14, 17, 20] } ) draw_heatmap(heatmap2, labels2) ###Output _____no_output_____
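The heatmaps above were generated for the TSP instance only. Because `experiment` and `draw_heatmap` are problem-agnostic, the same study can be pointed at the QAP instance defined earlier. The sketch below is my addition; it uses a deliberately smaller grid and fewer repetitions so it stays cheap, and the parameter values are illustrative.

```python
qap_heatmap, qap_labels = experiment(
    qap,
    {
        'population_size': 100,
        'tournament_size': 3
    },
    {
        'crossover_prob': np.linspace(0.9, 0.1, 5),
        'mutation_prob': np.linspace(0.1, 0.5, 5),
    },
    repetitions=3
)
draw_heatmap(qap_heatmap, qap_labels)
```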
6.LSTM.ipynb
###Markdown LSTM - To overcome RNN's problem, which is 'Long term Dependency > Long term dependency - 현재 state와 과거 input과의 term이 길 때, input을 반영하지 못하는 현상 (이전 state에서 현재로 오면서 곱하는게 큰 원인) LSTM Cell's structure - Cell state와 hidden layer가 별도로 존재 - Cell state는 정보의 기억을 주로 담당 - Forget gate는 Cell 상에서 크게 중요하지 않은 정보를 삭제 - Hidden layer(tanh) 는 input gate(sigmoid)를 통해 중요한 정보만을 Cell에 update - Cell state는 forget gate, input gate를 거친 후 다음 State로 넘어간다. - hidden layer는 다음 Cell에서 다음 step의 input 과 Concatnate(it makes memorize longer)하여 Cell에 입력 GRU cell's structure - input gate를 1-forget gate로 처리하면서 weight 개수를 줄여 연산량을 줄임 - 실제로 input gate와 forget gate는 반대로 가는 경향이 있음 - LSTM과 비슷한 성능을 내며, 연산량은 감소 $*$ **Dynamic RNN** - static RNN보다 더 많이 사용한다. - Padding에 대하여 loss를 구하지 않게 하여, 가변 길이 데이터를 더 원할하게 학습 할 수있다. > loss를 구할 때, Gate 처럼 먼저 length를 같이 받아서 넣어주면, padding 부분의 loss는 구하지 않는다. Bidirectional RNN - RNN Cell을 두개씩 가지고 서로 반대 방향의 time step별 state를 가지는 RNN구조 - 양방향의 loss를 연결 시킨 것을 낮추려고 노력하면서 더 좋은 성능을 내는 경우가 많다. Learning by implementation ###Code import tensorflow as tf import numpy as np import matplotlib.pyplot as plt %matplotlib inline tf.reset_default_graph() # preparing Data t = np.array([float(i)*0.01 for i in range(10000+1)]) sin = np.sin(t[:-1]) sin_next = np.sin(t[1:]) time_step = 100 reshaped_sin = np.reshape(sin, [-1, time_step, 1]) reshaped_sin_next = np.reshape(sin_next, [-1, 1]) signal = tf.placeholder(tf.float32, [None, time_step, 1]) signal_next = tf.placeholder(tf.float32, [None, 1]) inputs = tf.unstack(signal, axis=1) state_size = 24 rnn_cell = tf.nn.rnn_cell.LSTMCell(state_size) #rnn_cell = tf.nn.rnn_cell.GRUCell(state_size) states, state = tf.nn.static_rnn(rnn_cell, inputs, dtype=tf.float32) reshaped_states = tf.reshape(tf.stack(states, axis=1), [-1, state_size]) out = tf.layers.dense(reshaped_states, 1, use_bias=False) loss = tf.losses.mean_squared_error(signal_next, out) train_op = tf.train.AdamOptimizer(1e-2).minimize(loss) accuracy = tf.contrib.metrics.streaming_pearson_correlation(out, signal_next) with tf.Session() as sess : sess.run(tf.global_variables_initializer()) sess.run(tf.local_variables_initializer()) for epoch in range(100): _, _loss, _acc = sess.run([train_op, loss, accuracy], feed_dict={ signal : reshaped_sin, signal_next : reshaped_sin_next}) if epoch % 10 ==0: print(f"epoch : {epoch}, loss : {_loss:.4f}, accuracy : {_acc}") _pred = sess.run(out, feed_dict={signal : reshaped_sin}) plt.figure(1) plt.title("Truth") plt.plot(sin_next) plt.figure(2) plt.title("predicted") plt.plot(_pred) # Bi_Directional LSTM (dynamic) # dynamic 할때는 unstack, stack이 필요없다. 
tf.reset_default_graph() # preparing Data t = np.array([float(i)*0.01 for i in range(10000+1)]) sin = np.sin(t[:-1]) sin_next = np.sin(t[1:]) signal = tf.placeholder(tf.float32, [None, 1]) signal_next = tf.placeholder(tf.float32, [None, 1]) state_size = 24 frnn_cell = tf.nn.rnn_cell.BasicLSTMCell(state_size) brnn_cell = tf.nn.rnn_cell.GRUCell(state_size) (fstates, bstates), (fstate, bstate) = tf.nn.bidirectional_dynamic_rnn(frnn_cell, brnn_cell, signal, dtype=tf.float32) freshaped_states = tf.reshape(fstates, [-1, state_size]) breshaped_states = tf.reshape(bstates, [-1, state_size]) foutput = tf.layers.dense(freshaped_states, 1, use_bias=False) boutput = tf.layers.dense(breshaped_states, 1, use_bias=False) # loss 를 각각 평균내서 타임스텝마다 고쳐주라 floss = tf.losses.mean_squared_error(signal_next, foutput) bloss = tf.losses.mean_squared_error(signal, boutput) loss = (floss + bloss) / 2 train_op = tf.train.AdamOptimizer(1e-2).minimize(loss) accuracy = tf.contrib.metrics.streaming_pearson_correlation(foutput, signal_next) ###Output _____no_output_____
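The markdown above claims that a GRU cell needs fewer weights than an LSTM cell of the same state size. A quick parameter count makes that concrete; this cell is my addition, built in a throwaway graph so it does not touch the bidirectional model above, and it assumes the TF 1.x API used throughout this notebook.

```python
import numpy as np
import tensorflow as tf

state_size = 24
with tf.Graph().as_default():
    dummy = tf.placeholder(tf.float32, [None, 1])
    with tf.variable_scope("lstm"):
        lstm = tf.nn.rnn_cell.LSTMCell(state_size)
        lstm(dummy, lstm.zero_state(tf.shape(dummy)[0], tf.float32))
    with tf.variable_scope("gru"):
        gru = tf.nn.rnn_cell.GRUCell(state_size)
        gru(dummy, gru.zero_state(tf.shape(dummy)[0], tf.float32))

    def n_params(scope):
        return sum(int(np.prod(v.get_shape().as_list()))
                   for v in tf.trainable_variables(scope))

    print("LSTM parameters:", n_params("lstm"))  # 4 gate blocks: 4 * (1 + 24 + 1) * 24
    print("GRU parameters :", n_params("gru"))   # 3 gate blocks: 3 * (1 + 24 + 1) * 24
```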
Notebooks/Data_Cleanup.ipynb
###Markdown Data Cleaning Notebook This data set came from a Mexican Government [website](https://www.gob.mx/salud/documentos/datos-abiertos-152127). I found a link to this dataset in a [kaggle](https://www.kaggle.com/tanmoyx/covid19-patient-precondition-dataset) project that somebody has done. I thought that finding COVID data would be easier than this, but it turns out that HIPPA laws do not allow data on the individual patient level to be made available to the public even if their names are left out. Therefore, only summary level data sets (like by county, or by age group) are able to be viewed by the public. This data set from the Mexican Government is at the individual patient level and doesn't provide any patient identification information. ###Code import pandas as pd import json import numpy as np import matplotlib.pyplot as plt #cd .. #import the data set covid_data = pd.read_csv('Data/201001COVID19MEXICO.csv') ###Output _____no_output_____ ###Markdown The original data set has 1,048,575 observations and 35 features. Also the column names are all in Spanish. Spanish to English Translations The kaggle project author has done translations of the feature descriptions so I will first create a dictionary containing all column names with the key being the Spanish, and the value being the English translation. Then I am going to change the column names to the English translations. ###Code translation_dict = { 'FECHA_ACTUALIZACION': 'Update Date', 'ID_REGISTRO': 'Record ID', 'ORIGEN':'Origin', 'SECTOR': 'Sector', 'ENTIDAD_UM':'Entity Location', 'SEXO':'Sex', 'ENTIDAD_NAC':'Entity of Birth', 'ENTIDAD_RES':'Entity of Residence', 'MUNICIPIO_RES':'Residence Municipality', 'TIPO_PACIENTE':'Type of Care', 'FECHA_INGRESO':'Admission Date', 'FECHA_SINTOMAS':'Sympton Onset', 'FECHA_DEF':'Date of Death', 'INTUBADO':'Intubation Required', 'NEUMONIA':'Pneumonia', 'EDAD':'Age', 'NACIONALIDAD':'Nationality', 'EMBARAZO':'Pregnant', 'HABLA_LENGUA_INDIG':'Speak Indigenous', 'DIABETES':'Diabetes', 'EPOC':'COPD Diagnosis', 'ASMA':'Asthma', 'INMUSUPR':'Immunosuppression', 'HIPERTENSION':'Hypertension', 'OTRA_COM':'Other Diseases', 'CARDIOVASCULAR':'Cardiovascular Disease', 'OBESIDAD':'Obesity', 'RENAL_CRONICA':'Kidney Failure', 'TABAQUISMO':'Smoker', 'OTRO_CASO':'Contact', 'RESULTADO':'Test Result', 'MIGRANTE':'Migrant', 'PAIS_NACIONALIDAD':'Prior Nationality', 'PAIS_ORIGEN':'Prior Origin', 'UCI':'Intensive Care Needed' } #now I'm going to change the names to be in English using the dictionary above english_names = [translation_dict[x] for x in covid_data.columns] covid_data.columns = english_names ###Output _____no_output_____ ###Markdown Cleaning the Data First, I am only interested in observations that actually tested positive for Covid19, so I will drop all rows where 'Test Result' is 2 (Negative) or 3 (Pending). ###Code indeces_to_delete = [] for i in range(len(covid_data['Test Result'])): if covid_data['Test Result'][i] == 3: indeces_to_delete.append(i) elif covid_data['Test Result'][i] == 2: indeces_to_delete.append(i) #now to drop the rows that didn't test positive for Covid, and then reset the index covid_data = covid_data.drop(indeces_to_delete) covid_data = covid_data.reset_index(drop=True) ###Output _____no_output_____ ###Markdown I found that the mapping provided with the data set does not completely identify all combinations of municpalities within entity. Because of this, as well as there being over 2,000 municipalities, I have decided to get rid of this feature. 
I am confident that whatever information is lost from this deletion will be replaced from the 'Entity of Residence' feature which shows which state the patient lives in. ###Code del covid_data['Residence Municipality'] ###Output _____no_output_____ ###Markdown At this point, there are ALOT of columns. I've gone through these columns and evaluated the quality of the data in them. By this, I mean I looked at how complete each column is in terms of how many N/A values are present.a. "Prior Origin" and "Migrant" features have over 700,000 N/A values each. After step 1 above, there are only 703,973 observations. Since 700,000 is over 90% of the observations, these two columns will be delete. b. "Contact" feature has over 1/7 of it's observations being N/A. This feature seems more helpful for a tracing project, and whether or not the person had come in contact with a COVID patient doesn't change the fact they have COVID. This feature will be deleted too. c. "Speak Indegenous" feature has over 22,000 N/A values. Also, this feature seems pretty useless because a virus doesn't care what language you speak. This feature will be deleted too. ###Code del covid_data['Migrant'] del covid_data['Prior Origin'] del covid_data['Contact'] del covid_data['Speak Indigenous'] ###Output _____no_output_____ ###Markdown Then there are some more complicated features: "Entity of Birth", "Diabetes", "COPD Diagnosis", "Asthma", "Immunosuppression", "Hypertension", "Other Diseases", "Cardiovascular Disease", "Obesity", "Kidney Failure", and "Smoker" features all have roughly 1,900-2,100 N/A values each. After finding out what the overlap is, there is a total of 7,185 rows where atleast one of the features are N/A. This is less than 1% of the total number of observations. Therefore, I think it should be ok to simply delete the observations where one or more of these features has N/A. Also, when considerig this, I saw that 6,008 of the observations impacted by this deletion are patient that didn't die. This shows that 83.6% of the patients deleted here didn't die. When looking at the data as a whole, 89.4% of the observations didn't die. Since 83.6% is relatively close to 89.4%, I don't think that deleting these 7,185 rows is deleting observations that offer something special to the data. Therefore, I will delete these occurences. ###Code vars_of_interest = ["Entity of Birth", "Diabetes", "COPD Diagnosis", "Asthma", "Immunosuppression", "Hypertension", "Other Diseases", "Cardiovascular Disease", "Obesity", "Kidney Failure", "Smoker"] row_deletions = [] exclusion_values = [97,98,99] for i in range(len(covid_data['Record ID'])): delete_row = False for var in vars_of_interest: if covid_data[var][i] in exclusion_values: delete_row = True if delete_row: row_deletions.append(i) covid_data = covid_data.drop(row_deletions) covid_data = covid_data.reset_index(drop=True) ###Output _____no_output_____ ###Markdown After performing the above actions, there are 7 N/A values in the "Pneumonia" feature. 7 observations is incredibly small for this data set, so I will delete these observations. ###Code pneumonia_delete_rows = list(covid_data[covid_data['Pneumonia']==99].index) covid_data = covid_data.drop(pneumonia_delete_rows) covid_data = covid_data.reset_index(drop=True) ###Output _____no_output_____ ###Markdown "Pregnant" feature has 368,300 N/A values. Many of these are from patients being Male, which makes sense because men can't get pregnant. 
For these 365,860 observations that are Male, I will change the Pregnant feature to equal 2. For the remaining 2,440 N/A values, I will assign a 2 (for not pregnant) since out of the 335,673 women (not including the N/As), only 5,201 are pregnant. This means that for each N/A value, there is only a 1.55% chance that observation is pregnant. My approach is a "Majority takes all" approach of solving this N/A problem. ###Code new_pregnant_column = [] for i in range(len(covid_data['Record ID'])): if covid_data['Pregnant'][i] in [97,98]: new_pregnant_column.append(2) else: new_pregnant_column.append(covid_data['Pregnant'][i]) covid_data['Pregnant'] = new_pregnant_column ###Output _____no_output_____ ###Markdown Since whether or not someone needed intubation or intensive care seems more like potential target variables than risk factors, and the intended purpose is for assessing initial risk, I will delete the 'Intensive Care Needed' and 'Intubation Required' features. ###Code del covid_data['Intensive Care Needed'] del covid_data['Intubation Required'] ###Output _____no_output_____ ###Markdown Whether or not a person died can be gathered from the 'Date of Death' variable. If the value is a date, then unfortunately that person died. However, if the value is '9999-99-99', then the person did not die. With this information, I'm going to create a binary variable called 'Died' which shows whether a patient died (1) or not (0). While I am looping through the data to create this new variable, I will overwrite the 'Date of Death' variable so that '9999-99-99' is recorded as np.nan. ###Code died = [] death_date = [] for i in range(len(covid_data['Date of Death'])): if str(covid_data['Date of Death'][i]) == '9999-99-99': died.append(0) death_date.append(np.nan) else: died.append(1) death_date.append(covid_data['Date of Death'][i]) covid_data['Died'] = died covid_data['Date of Death'] = death_date del died del death_date ###Output _____no_output_____ ###Markdown Now that missing values are taken care of, I would like to look into a location based feature and do groupings. The one that pops out to me the most is "Entity of Residence" because this is the most current location for that person. Birth entity doesn't seem to be recent enough. I will also delete the other location based features besides entity of residence. ###Code del covid_data['Entity Location'] del covid_data['Entity of Birth'] #I will find the unique values for an entity and then visualize how the death rate #vary across the entities in order to define the super groups. 
#First I want to create a dictionary of the death rate unique_entities = np.unique(covid_data['Entity of Residence']) death_entity_dict = {key: None for key in unique_entities} for entity in unique_entities: entity_subset = covid_data[covid_data['Entity of Residence']==entity] death_entity_dict[entity] = sum(entity_subset['Died'])/max(1,len(entity_subset['Died'])) #I want to sort these dictionaries in terms of their rates to make it easier to create groupings from the bar chart sorted_entity_death_dict = {k: v for k, v in sorted(death_entity_dict.items(), key=lambda item: item[1])} fig = plt.figure() width = 0.8 ax = fig.add_axes([0,0,1,1]) ax.bar(np.arange(0,len(list(sorted_entity_death_dict.keys()))),list(sorted_entity_death_dict.values()),width=width,color='green') ax.legend(('Death Rates')) plt.xticks(ticks=np.arange(0,len(list(sorted_entity_death_dict.keys()))),labels=list(sorted_entity_death_dict.keys())) plt.title('Death Rates by Entity of Residence') plt.xlabel('Entity') plt.ylabel('Death Rate') #plt.savefig('Residence_Entity_Death_Rates.png',bbox_inches='tight') plt.show() #This entity_groupings dictionary will contain the mapping from entity to supergroup for entity of residence #I picked 0.09 as the first cutoff because that roughly looks like the cutoff for the lowest third of the data #I picked 0.13 as the second cutoff because that seems like the rough cutoff for the highest third of the data entity_groupings = {} for key in list(sorted_entity_death_dict.keys()): if sorted_entity_death_dict[key] < 0.09: entity_groupings[int(key)] = 1 elif 0.09 <= sorted_entity_death_dict[key] <=0.13: entity_groupings[int(key)] = 2 else: entity_groupings[int(key)] = 3 #now to map these definitions to the dataframe new_column = [] for i in range(len(covid_data['Entity of Residence'])): current_entity = covid_data['Entity of Residence'][i] if current_entity not in exclusion_values: new_column.append(entity_groupings[current_entity]) else: #I need to make it a 99 because otherwise np.nan will pop up lots of times in the np.unique when doing encoding. #Having a np.nan as 99 will be easier later than having it as either 97,98,or 99. new_column.append(99) covid_data["Entity of Residence Grouped"] = new_column #now that the grouping is done, I can get rid of the old pregrouped feature. I can also get rid of the variables set up during #the time to free up memory space del covid_data['Entity of Residence'] del death_entity_dict del entity_subset del sorted_entity_death_dict del new_column #there are a few left over features not representing risk factors that won't be necessary for analysis later on in the #project. Therefore, I will delete them before encoding variables_to_delete = ['Update Date','Record ID','Origin','Sector','Type of Care','Admission Date','Sympton Onset', 'Date of Death','Nationality','Test Result','Prior Nationality'] for var in variables_to_delete: del covid_data[var] #Lastly, for all the features that are types of conditions, the value ranges of 1-2 need to change to 0-1. 
This can be done #by simply changing all occurences of 2 to 0 for these variables yes_no_vars = ['Pneumonia','Pregnant','Diabetes','COPD Diagnosis','Asthma','Immunosuppression','Hypertension','Other Diseases', 'Cardiovascular Disease','Obesity','Kidney Failure','Smoker','Died'] covid_data[yes_no_vars] = covid_data[yes_no_vars].replace(2,0) ###Output _____no_output_____ ###Markdown Writing the Final Cleaned Data to a CSV File ###Code covid_data.to_csv('Data/CleanedCovidData10-16.csv',index=False) ###Output _____no_output_____ ###Markdown Value Mapping Most data fields in this data set have numeric representations for non-numeric values. Below are dictionaries I have created using the 'Catalogos_0412.csv' file provided with the Covid data. ###Code #dictionary of yes/no features yes_no_dict = dict(zip([0,1],['No','Yes'])) #dictionary of entity names associated with each numeric value in the data entity_numbers = range(1,37,1) entity_names = ['AGUASCALIENTES','BAJA CALIFORNIA','BAJA CALIFORNIA SUR','CAMPECHE','COAHUILA DE ZARAGOZA','COLIMA','CHIAPAS', 'CHIHUAHUA','CIUDAD DE MÉXICO','DURANGO','GUANAJUATO','GUERRERO','HIDALGO','JALISCO','MÉXICO','MICHOACÁN DE OCAMPO', 'MORELOS','NAYARIT','NUEVO LEÓN','OAXACA','PUEBLA','QUERÉTARO','QUINTANA ROO','SAN LUIS POTOSÍ','SINALOA','SONORA', 'TABASCO','TAMAULIPAS','TLAXCALA','VERACRUZ DE IGNACIO DE LA LLAVE','YUCATÁN','ZACATECAS','ESTADOS UNIDOS MEXICANOS'] entity_dict = dict(zip(entity_numbers,entity_names)) value_mapping_dict={ 'Sex':dict(zip([1,2],['Female','Male'])), 'Entity of Residence':entity_dict, 'Entity of Residence Grouped':entity_groupings, 'Pneumonia':yes_no_dict, 'Age':{}, 'Pregnant':yes_no_dict, 'Diabetes':yes_no_dict, 'COPD Diagnosis':yes_no_dict, 'Asthma':yes_no_dict, 'Immunosuppression':yes_no_dict, 'Hypertension':yes_no_dict, 'Other Diseases':yes_no_dict, 'Cardiovascular Disease':yes_no_dict, 'Obesity':yes_no_dict, 'Kidney Failure':yes_no_dict, 'Smoker':yes_no_dict, 'Died':yes_no_dict } #now that the mappings are all created, I would like to write them to a json file with open('Data/post_cleaning_data_mapping.json', 'w') as json_file: json.dump(value_mapping_dict, json_file) ###Output _____no_output_____
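###Markdown A quick sanity-check sketch (my addition, not part of the original cleaning workflow): reload the cleaned CSV and the JSON mapping written above and decode a few coded columns back to readable labels. Note that ``json.dump`` stores the integer keys as strings, so they need to be cast back before mapping. ###Code
import json
import pandas as pd

# Assumes the two files written above exist at these paths
cleaned = pd.read_csv('Data/CleanedCovidData10-16.csv')
with open('Data/post_cleaning_data_mapping.json') as json_file:
    mapping = json.load(json_file)

decoded = cleaned.copy()
for col in ['Sex', 'Pneumonia', 'Diabetes', 'Died']:
    # JSON keys come back as strings, so cast them to int before mapping
    int_map = {int(k): v for k, v in mapping[col].items()}
    # Keep any leftover unmapped codes visible instead of turning them into NaN
    decoded[col] = decoded[col].map(int_map).fillna(decoded[col])

decoded[['Sex', 'Pneumonia', 'Diabetes', 'Died']].head()
###Output _____no_output_____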
notebooks/TESS_V1298Tau.ipynb
###Markdown **TESS TICA light curve v2** ###Code plt.plot(x,y) tess_texp = np.median(np.diff(x)) #plt.plot(temp1[:,0],temp1[:,1]) tab = Table.read('/Users/arcticfox/Documents/v1298tau/tess/ttvs.csv',format='csv') #Stellar parameters M_star = 1.10, 0.05 R_star = 1.305, 0.07 include_planet_e = True if include_planet_e == True: #Livingston ephemeris t0s = np.array([2231.281202 - 0.75*4.66/24, 2239.400529 + 0.5*5.59/24, 2234.046461 - 0.5*6.42/24, 2263.6229, 4644.08]) tess_t0s = np.array([4689.399860318306, 4682.6055129254755, 4648.09023, 4648.79668]) periods = np.array([8.249147, 12.401369, 24.141445, 36.695032307689445]) rors = np.array([0.0381, 0.0436, 0.0636, 0.0664]) depths = np.array(1e3*(rors**2)) t14s = np.array([4.66, 5.59, 6.42, 7.45])/24.0 elif include_planet_e == False: t0s = np.array([2231.281202, 2239.400529, 2234.046461])# - x_ref periods = np.array([8.249147, 12.401369, 24.141445]) rors = np.array([0.0381, 0.0436, 0.0700]) depths = np.array(1e3*(rors**2)) # Number of planets to be included in fit n_pl = len(t0s) # Compute the expected transit times for a linear ephemeris expected_transit_times = xo.orbits.ttv.compute_expected_transit_times( x.min(), x.max()+100, periods, tess_t0s, ) ###Output _____no_output_____ ###Markdown Before we start fitting the light curve let's see if we can identify the transits by eye ###Code x.min(),x.max() Time(expected_transit_times[2],format='bkjd').datetime#.to('datetime') nrows = 18 xmin = int(x.min()) + 3*np.arange(nrows) fig,axes = plt.subplots(nrows=nrows, ncols=1, figsize=(10,4*nrows)) for n in range(nrows): ax = axes[n] for i,let in enumerate("cdbe"): ttimes = expected_transit_times[i] for j,_tt in enumerate(ttimes): if (_tt>x.min()) & (_tt<x.max()): ax.axvline(_tt, color=tangerine[i], label=let) ax.axvspan(_tt-0.5*t14s[i], _tt+0.5*t14s[i], alpha=0.2, color=tangerine[i]) ax.axvline(_tt, color=tangerine[i], ls='--') # planet c # ax.errorbar(lk.time.value, binned.flux.value, # yerr=binned.flux_err.value, marker='.', linestyle='') ax.plot(x, y, 'k.') handles, labels = ax.get_legend_handles_labels() by_label = dict(zip(labels, handles)) ax.legend(by_label.values(), by_label.keys()) ax.set_xlim(xmin[n], xmin[n]+3) ax.set_ylim(-22,22) ax.set_xlabel("BKJD") ax.set_ylabel("Relative flux [ppt]") plt.tight_layout() plt.show() ###Output _____no_output_____ ###Markdown It looks like: 1. Planet d arrives late at BKJD = 4645.4 2. Planet b arrives early at BKJD = 4648.1 3. Planet c arrives late at BKJD = 4648.53? There is a dip right before the ingress of planet e, with seemingly the right duration. It's hard to tell for sure because the noise changes significantly around the transit. 4. Planet e transits around BKJD = 4648.8 The data beyond BKJD = 4651.5 are corrupted so let's remove it.
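###Markdown Below is a minimal sketch (my addition, not a cell from the original notebook) of how that cut could be applied, dropping non-finite cadences and everything beyond BKJD = 4651.5 before fitting. It assumes the `x` and `y` arrays loaded above. ###Code
import numpy as np

# Keep only finite cadences earlier than the corrupted region
good = np.isfinite(x) & np.isfinite(y) & (x < 4651.5)
x_clip = np.ascontiguousarray(x[good], dtype=np.float64)
y_clip = np.ascontiguousarray(y[good], dtype=np.float64)
print(len(x), 'cadences ->', len(x_clip), 'after clipping')
###Output _____no_output_____ ###Markdown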
**The SimpleTransitOrbit model** (fitting in terms of duration) ###Code ett = np.array([expected_transit_times[0][0], expected_transit_times[1][0], expected_transit_times[2][0], expected_transit_times[3][0], ]) ett periods = np.array([8.249147, 12.401369, 24.141445, 48.0]) t0s = np.round(ett,2)+0.0# #np.array([4648.14, 4645.4, 4648.1, 4648.8]) rors = np.array([0.0381, 0.0436, 0.0700, 0.0611]) depths = np.array(1e3*(rors**2)) durations = np.array([4.66, 5.59, 6.42, 7.45])/24.0 n_pl = len(periods) R_star = 1.305, 0.07 #x = tica2_bkjd #y = tica2_f1 m = (np.isfinite(x)) & (np.isfinite(y))# & (x<4651.5) x = np.ascontiguousarray(x[m], dtype=np.float64) y = np.ascontiguousarray(y[m], dtype=np.float64) yerr = np.ascontiguousarray(yerr[m], dtype=np.float64) # These arrays are used as the times/phases where the models are # evaluated at higher resolution for plotting purposes phase_lc = np.linspace(-0.3, 0.3, 100) plt.errorbar(x,y,yerr=yerr, marker='.', linestyle='') # These arrays are used as the times/phases where the models are # evaluated at higher resolution for plotting purposes phase_lc = np.linspace(-0.3, 0.3, 100) # Required changes: # We can have different depths for K2 and TESS def build_model(mask=None, start=None, ttvs=False, eccentric=False): if mask is None: mask = np.ones(len(x), dtype=bool) with pm.Model() as model: # Parameters for the stellar properties BoundedNormal = pm.Bound(pm.Normal, lower=0, upper=3) m_star = BoundedNormal("m_star", mu=M_star[0], sd=M_star[1]) r_star = BoundedNormal("r_star", mu=R_star[0], sd=R_star[1]) u_star = xo.QuadLimbDark("u_star") star = xo.LimbDarkLightCurve(u_star) # Fit in terms of transit depth (assuming b<1) b = pm.Uniform("b", lower=0, upper=1, shape=n_pl) #log_depth_tess = pm.Normal("log_depth_tess", mu=np.log(depths), sigma=2.0, shape=n_pl) log_depth_tess = pm.Normal("log_depth_tess", mu=np.log(depths), sigma=0.1, shape=n_pl) ror_tess = pm.Deterministic("ror_tess", star.get_ror_from_approx_transit_depth( 1e-3 * tt.exp(log_depth_tess), b ), ) r_pl_tess = pm.Deterministic("r_pl_tess", ror_tess * r_star) r_pl_rade = pm.Deterministic("r_pl_rade", ror_tess * r_star * c.R_sun/c.R_earth) ecc = np.zeros(n_pl) omega = np.pi/2*np.ones(n_pl) # Orbital parameters for the planets t0 = pm.Normal("t0", mu=np.array(t0s), sd=1, shape=n_pl) log_period = pm.Normal("log_period", mu=np.log(periods), sd=1, shape=n_pl) period = pm.Deterministic("period", tt.exp(log_period)) # Orbit models orbit = xo.orbits.KeplerianOrbit( r_star=r_star, m_star=m_star, period=period, t0=t0, b=b, ecc=ecc, omega=omega, ) ######################################################################################## ######################################################################################## # Compute the model light curve mean_tess = pm.Normal("mean_tess", mu=0.0, sd=10.0) # Quadratic trend for varying background flux trend = pm.Normal( "trend", mu=0, sd=10.0 ** -np.arange(3)[::-1], shape=3 ) # Define the background model A = np.vander(x, 3) bkg = pm.Deterministic("bkg", tt.dot(A, trend)) light_curves_tess = ( star.get_light_curve( orbit=orbit, r=r_pl_tess, t=x[mask], texp=tess_texp) * 1e3 ) light_curve_tess = pm.math.sum(light_curves_tess, axis=-1) + mean_tess resid_tess = y[mask] - light_curve_tess - bkg[mask] # Transit jitter & GP parameters log_sigma_lc_tess = pm.Normal("log_sigma_lc_tess", mu=np.log(0.01*np.std(yerr[mask])), sd=10) log_sigma_jit_tess = pm.Normal("log_sigma_jit_tess", mu=np.log(0.02*np.std(yerr[mask])), sd=10) yerr_tess = pm.Deterministic("yerr_tess", 
tt.exp(log_sigma_lc_tess) + tt.exp(2*log_sigma_jit_tess)*(light_curve_tess**2)) #yerr_tess = pm.Deterministic("yerr_tess", tt.exp(log_sigma_lc_tess)) #The parameters of the RotationTerm kernel sigma_rot_tess = pm.InverseGamma( "sigma_rot_tess", **pmx.estimate_inverse_gamma_parameters(1.0, 5.0) ) log_period_rot_tess = pm.Normal("log_period_rot_tess", mu=np.log(2.87), sigma=2.0) period_rot_tess = pm.Deterministic("period_rot_tess", tt.exp(log_period_rot_tess)) log_Q0_rot_tess = pm.HalfNormal("log_Q0_rot_tess", sigma=2.0) log_dQ_rot_tess = pm.Normal("log_dQ_rot_tess", mu=0.0, sigma=2.0) f_rot_tess = pm.Uniform("f_rot_tess", lower=0.1, upper=1.0) kernel_tess = terms.RotationTerm( sigma=sigma_rot_tess, period=period_rot_tess, Q0=tt.exp(log_Q0_rot_tess), dQ=tt.exp(log_dQ_rot_tess), f=f_rot_tess, ) gp_tess = GaussianProcess(kernel_tess, t=x[mask], yerr=yerr_tess) gp_tess.marginal("transit_obs_tess", observed=resid_tess) #Compute and save the phased light curve models pm.Deterministic( "lc_pred", 1e3 * tt.stack( [ star.get_light_curve( orbit=orbit, r=r_pl_tess, t=t0[n] + phase_lc, texp=tess_texp )[..., n] for n in range(n_pl) ], axis=-1, ), ) # Fit for the maximum a posteriori parameters, I've found that I can get # a better solution by trying different combinations of parameters in turn if start is None: start = model.test_point map_soln = pmx.optimize(start=start, vars=trend) map_soln = pmx.optimize(start=map_soln, vars=[log_period, t0]) map_soln = pmx.optimize(start=map_soln, vars=[b, log_depth_tess]) map_soln = pmx.optimize(start=map_soln, vars=[sigma_rot_tess, log_period_rot_tess, log_Q0_rot_tess, log_dQ_rot_tess, f_rot_tess, mean_tess, ] ) map_soln = pmx.optimize(start=map_soln) extras = dict( zip( ["light_curves_tess", "gp_pred_tess"], pmx.eval_in_model([light_curves_tess, gp_tess.predict(resid_tess)], map_soln), ) ) return model, map_soln, extras, orbit model0, map_soln0, extras0, orbit0 = build_model(ttvs=True) map_soln0['yerr_tess'] np.nanstd(x) yerr_tess = np.ascontiguousarray(map_soln0['yerr_tess'] + 0.0, dtype=np.float64) np.random.seed(123) yerr_tess = np.ascontiguousarray(np.random.normal(np.nanmedian(lk_43.flux_err.value), np.nanstd(lk_43.flux_err.value), len(x)), dtype=np.float64) plt.errorbar(x,y,yerr=yerr_tess) def depth_duration_model(ttvs=False): with pm.Model() as model: # Physical parameters that will be sampled BoundedNormal = pm.Bound(pm.Normal, lower=0, upper=3) r_star = BoundedNormal("r_star", mu=R_star[0], sd=R_star[1]) m_star = BoundedNormal("m_star", mu=M_star[0], sd=M_star[1]) u_star = xo.QuadLimbDark("u_star") star = xo.LimbDarkLightCurve(u_star) b = pm.Uniform("b", lower=0, upper=1, shape=n_pl) #t0 = pm.Normal("t0", mu=t0s, sigma=0.1, shape=n_pl) #log_period = pm.Normal("log_period", mu=np.log(periods), # sigma=0.1, shape=n_pl) log_depth = pm.Normal("log_depth", mu=np.log(depths), sigma=0.1, shape=n_pl) log_duration = pm.Normal("log_duration", mu=np.log(durations), sigma=0.1, shape=n_pl) # Track parameters of interest as deterministics duration = pm.Deterministic("duration", tt.exp(log_duration)) ror = pm.Deterministic("ror", star.get_ror_from_approx_transit_depth( 1e-3 * tt.exp(log_depth), b ), ) r_pl_tess = pm.Deterministic("r_pl_tess", ror * r_star) r_pl_rade = pm.Deterministic("r_pl_rade", ror * r_star * c.R_sun/c.R_earth) ecc = np.zeros(n_pl) omega = np.pi/2*np.ones(n_pl) if ttvs==True: # Now we have a parameter for each transit time of each planet: transit_times = [] for i in range(n_pl): transit_times.append( pm.Normal( "tts_{0}".format(i), 
mu=expected_transit_times[i], sd=0.1, #Change this back to 0.1 to work shape=len(expected_transit_times[i]), ) ) # Set up an orbit for the planets orbit = xo.orbits.TTVOrbit( r_star=r_star, m_star=m_star, b=b, ecc=ecc, omega=omega, transit_times=transit_times) # It will be useful later to track some parameters of the orbit t0 = pm.Deterministic("t0", orbit.t0) period = pm.Deterministic("period", orbit.period) log_period = pm.Normal("log_period", mu=np.log(periods), sigma=0.1, shape=n_pl) for i in range(n_pl): pm.Deterministic("ttvs_{0}".format(i), orbit.ttvs[i]) #period = pm.Deterministic("period", tt.exp(log_period)) elif ttvs==False: # Orbital parameters for the planets t0 = pm.Normal("t0", mu=np.array(t0s), sd=1, shape=n_pl) log_period = pm.Normal("log_period", mu=np.log(periods), sd=1, shape=n_pl) period = pm.Deterministic("period", tt.exp(log_period)) # Orbit models orbit = xo.orbits.KeplerianOrbit( r_star=r_star, m_star=m_star, period=period, t0=t0, b=b, ecc=ecc, omega=omega, ) # Quadratic trend for varying background flux trend = pm.Normal( "trend", mu=0, sd=10.0 ** -np.arange(3)[::-1], shape=3 ) # Define the background model A = np.vander(x, 3) bkg = pm.Deterministic("bkg", tt.dot(A, trend)) #Compute the light curve model mean_tess = pm.Normal("mean_tess", mu=0.0, sd=10.0) light_curves_tess = ( star.get_light_curve( orbit=orbit, r=r_pl_tess, t=x, texp=tess_texp) * 1e3 ) light_curve_tess = pm.math.sum(light_curves_tess, axis=-1) + mean_tess resid_tess = y - light_curve_tess - bkg # Transit jitter & GP parameters log_sigma_lc_tess = pm.Normal("log_sigma_lc_tess", mu=np.log(0.01*np.std(y)), sd=5) log_sigma_jit_tess = pm.Normal("log_sigma_jit_tess", mu=np.log(0.02*np.std(y)), sd=5) #yerr_tess = pm.Deterministic("yerr_tess", tt.exp(log_sigma_lc_tess) + tt.exp(2*log_sigma_jit_tess)*(light_curve_tess**2)) #yerr_tess = pm.Deterministic("yerr_tess", tt.exp(log_sigma_lc_tess)) #The parameters of the RotationTerm kernel sigma_rot_tess = pm.InverseGamma( "sigma_rot_tess", **pmx.estimate_inverse_gamma_parameters(1.0, 5.0) ) log_period_rot_tess = pm.Normal("log_period_rot_tess", mu=np.log(2.87), sigma=2.0) period_rot_tess = pm.Deterministic("period_rot_tess", tt.exp(log_period_rot_tess)) log_Q0_rot_tess = pm.HalfNormal("log_Q0_rot_tess", sigma=2.0) log_dQ_rot_tess = pm.Normal("log_dQ_rot_tess", mu=0.0, sigma=2.0) f_rot_tess = pm.Uniform("f_rot_tess", lower=0.1, upper=1.0) kernel_tess = terms.RotationTerm( sigma=sigma_rot_tess, period=period_rot_tess, Q0=tt.exp(log_Q0_rot_tess), dQ=tt.exp(log_dQ_rot_tess), f=f_rot_tess, ) gp = GaussianProcess(kernel_tess, t=x, yerr=yerr_tess) gp.marginal("transit_obs", observed=resid_tess) # Compute and save the phased light curve models if ttvs == False: pm.Deterministic( "lc_pred_tess", 1e3 * tt.stack( [ star.get_light_curve( orbit=orbit, r=r_pl_tess, t=t0[n] + phase_lc, texp=tess_texp )[..., n] for n in range(n_pl) ], axis=-1, ), ) # Perform optimization start = model.test_point map_soln = pmx.optimize(start=start, vars=trend) if ttvs==True: map_soln = pmx.optimize(start=map_soln, vars=transit_times) elif ttvs==False: map_soln = pmx.optimize(start=map_soln, vars=[log_period, t0]) map_soln = pmx.optimize(start=map_soln, vars=[b, log_depth]) if ttvs == True: map_soln = pmx.optimize(start=map_soln, vars=transit_times) map_soln = pmx.optimize(start=map_soln, vars=[sigma_rot_tess, log_period_rot_tess, log_Q0_rot_tess, log_dQ_rot_tess, f_rot_tess, mean_tess, ] ) map_soln = pmx.optimize(start=map_soln) # Package the MAP light curve and GP prediction extras = dict( 
zip( ["light_curves_tess", "gp_pred_tess"], pmx.eval_in_model([light_curves_tess, gp.predict(resid_tess)], map_soln), ) ) return model, map_soln, extras, orbit model1, map_soln1, extras1, orbit1 = depth_duration_model(ttvs=False) def plot_light_curve(soln, extras, xrange=[4641,4690], mask=None): if mask is None: mask = np.ones(len(x), dtype=bool) fig, axes = plt.subplots(nrows=3, ncols=1, figsize=(10, 10)) ax = axes[0] ax.errorbar(x[mask], y[mask], yerr=yerr_tess)#soln["yerr_tess"]) gp_mod = extras["gp_pred_tess"] + soln["mean_tess"] + soln["bkg"] ax.plot(x[mask], gp_mod, color="k", label="GP + background model", zorder=4) ax.legend(fontsize=10, ncol=2) ax.set_ylabel("Relative flux [ppt]") ax.set_title('TESS') ax = axes[1] ax.errorbar(x[mask], y[mask] - gp_mod, yerr=yerr_tess,color='k')#soln["yerr_tess"]) mod_sum = np.sum(extras["light_curves_tess"], axis=-1) ax.plot(x[mask], mod_sum, label="sum", color="w") for i, l in enumerate("cdbe"): mod = extras["light_curves_tess"][:, i] ax.plot(x[mask], mod, label="planet {0}".format(l), zorder=3, color=tangerine[i]) ax.legend(fontsize=10, loc=3, ncol=3) ax.set_ylabel("De-trended flux [ppt]") ax = axes[2] mod = gp_mod + np.sum(extras["light_curves_tess"], axis=-1) ax.errorbar(x[mask], y[mask] - mod, yerr=yerr_tess)#soln["yerr_tess"]) ax.axhline(0, color="#aaaaaa", lw=1) ax.set_ylabel("Residuals [ppt]") ax.set_xlabel("BKJD [days]") for i in range(3): axes[i].set_xlim(xrange[0],xrange[1]) return fig, gp_mod _ = plot_light_curve(map_soln1, extras1) with model1: trace_ex = pmx.sample(tune=500, draws=5000, start=map_soln1, chains=3, return_inferencedata=True, random_seed=[39248934, 48374109, 84738013]) trace_ex.to_dataframe().to_csv('summary_2min.csv') flat_samps = trace_ex.posterior key = 't0' rnd = 5 for j in range(0,4): med = np.nanmedian(flat_samps[key][:,:,j].data) upp = np.nanpercentile(flat_samps[key][:,:,j].data,84) low = np.nanpercentile(flat_samps[key][:,:,j].data, 16) u = np.round(upp-med,rnd) l = np.round(med-low,rnd) m = np.round(med, rnd) print('$' + str(m)+'_{-'+str(l)+'}^{+'+str(u)+'}$') k2_t0=[ 2231.2797, 2239.3913, 2234.0488] k2_per=[ 8.24958, 12.4032, 24.1396] for i in range(len(k2_t0)): predicted = k2_t0[i]+k2_per[i]*np.arange(2,350) new = np.nanmedian(flat_samps[key][:,:,i].data) diff = np.abs(predicted - new) print(predicted[np.argmin(diff)], new) print(((predicted[np.argmin(diff)] - new) * units.day).to(units.hour)) from astropy import units key = 'r_pl_rade' rnd = 2 for j in range(0,4): med = np.nanmedian(flat_samps[key][:,:,j].data)*units.Rearth upp = np.nanpercentile(flat_samps[key][:,:,j].data,84)*units.Rearth low = np.nanpercentile(flat_samps[key][:,:,j].data, 16)*units.Rearth u = np.round(upp.to(units.Rjup).value-med.to(units.Rjup).value,rnd) l = np.round(med.to(units.Rjup).value-low.to(units.Rjup).value,rnd) m = np.round(med.to(units.Rjup).value, rnd) print('$' + str(m)+'_{-'+str(l)+'}^{+'+str(u)+'}$') gpdict = {} gpdict['time'] = x gpdict['flux'] = y gpdict['flux_err'] = yerr_tess gpdict['gp_mod'] = extras1["gp_pred_tess"] + map_soln1["mean_tess"] + map_soln1["bkg"] letter =['c','d','b','e'] for i in range(4): print(len(extras0['light_curves_tess'][i])) gpdict['planet_{}'.format(letter[i])] = extras1['light_curves_tess'][i] np.save('/Users/arcticfox/Documents/v1298tau/tess/model_2min.npy', model1) np.save('/Users/arcticfox/Documents/v1298tau/tess/map_soln_2min.npy', map_soln1) np.save('/Users/arcticfox/Documents/v1298tau/tess/extras_2min.npy', extras1) np.save('/Users/arcticfox/Documents/v1298tau/tess/gp_2min.npy', gpdict) 
((1.16*units.Mjup) / (4.0/3.0 * np.pi * (0.89*units.Rjup)**3)).to(units.g/units.cm**3) ((0.64*units.Mjup) / (4.0/3.0 * np.pi * (0.85*units.Rjup)**3)).to(units.g/units.cm**3) map_soln1['ror'] ###Output _____no_output_____ ###Markdown Accounting for TTVs ###Code model2, map_soln2, extras2, orbit2 = depth_duration_model(ttvs=True) with model2: trace_ex_ttvs = pmx.sample(tune=500, draws=5000, start=map_soln2, chains=3, return_inferencedata=True, random_seed=[39248934, 48374109, 84738013]) trace_ex_ttvs.to_dataframe().to_csv('summary_2min_ttvs.csv') flat_samps_ttvs = trace_ex_ttvs.posterior from astropy.table import Table, Column plt.rcParams['font.size']=18 key = 'ttvs_0' rnd = 6 tab = Table(names=['planet','expected_transit_time', 'tts', 'ttvs_med', 'ttvs_l16', 'ttvs_u84'], dtype=[str,float,float,float,float,float]) for j in range(flat_samps_ttvs[key].shape[-1]): med = np.nanmedian(flat_samps_ttvs[key][:,:,j].data)*units.day upp = np.nanpercentile(flat_samps_ttvs[key][:,:,j].data,84)*units.day low = np.nanpercentile(flat_samps_ttvs[key][:,:,j].data, 16)*units.day u = np.round((upp-med).to(units.min).value,rnd) l = np.round((med-low).to(units.min).value,rnd) m = np.round(med.to(units.min).value, rnd) print('$' + str(m)+'_{-'+str(l)+'}^{+'+str(u)+'}$') tab.add_row(['c', map_soln2['t0'][0]+map_soln2['period'][0]*j, map_soln2['tts_0'][j], m, l, u]) plt.errorbar(map_soln2['t0'][0]+map_soln2['period'][0]*j, m, yerr=np.nanmedian([u,l]), marker='o', color='k',ms=8) key = 'ttvs_1' rnd = 6 for j in range(flat_samps_ttvs[key].shape[-1]): med = np.nanmedian(flat_samps_ttvs[key][:,:,j].data)*units.day upp = np.nanpercentile(flat_samps_ttvs[key][:,:,j].data,84)*units.day low = np.nanpercentile(flat_samps_ttvs[key][:,:,j].data, 16)*units.day u = np.round((upp-med).to(units.min).value,rnd) l = np.round((med-low).to(units.min).value,rnd) m = np.round(med.to(units.min).value, rnd) print('$' + str(m)+'_{-'+str(l)+'}^{+'+str(u)+'}$') tab.add_row(['d', map_soln2['t0'][1]+map_soln2['period'][1]*j, map_soln2['tts_1'][j], m, l, u]) plt.errorbar(map_soln2['t0'][1]+map_soln2['period'][1]*j, m, yerr=np.nanmedian([u,l]), marker='o', color='darkorange',ms=8) key = 'ttvs_2' rnd = 6 for j in range(flat_samps_ttvs[key].shape[-1]): med = np.nanmedian(flat_samps_ttvs[key][:,:,j].data)*units.day upp = np.nanpercentile(flat_samps_ttvs[key][:,:,j].data,84)*units.day low = np.nanpercentile(flat_samps_ttvs[key][:,:,j].data, 16)*units.day u = np.round((upp-med).to(units.min).value,rnd) l = np.round((med-low).to(units.min).value,rnd) m = np.round(med.to(units.min).value, rnd) print('$' + str(m)+'_{-'+str(l)+'}^{+'+str(u)+'}$') tab.add_row(['b', map_soln2['t0'][1]+map_soln2['period'][1]*j, map_soln2['tts_1'][j], m, l, u]) plt.errorbar(map_soln2['t0'][1]+map_soln2['period'][1]*j, m, yerr=np.nanmedian([u,l]), marker='o', color='green',ms=8) plt.xlabel('Time [BKJD - 2454833]') plt.plot(100,100,'ko',label='V1298 Tau c') plt.plot(100,100,'o',color='darkorange',label='V1298 Tau d') plt.legend(fontsize=12) plt.ylabel('TTVs [minutes]') plt.xlim(4643,4693) #plt.ylim(-20,20) #plt.savefig('ttvs.png',dpi=250,rasterize=True,bbox_inches='tight') np.nanmedian(tab[tab['planet']=='d']['ttvs_med']), np.nanstd(tab[tab['planet']=='d']['ttvs_med']) tab tab.write('ttvs.csv',format='csv') _, gp_mod = plot_light_curve(map_soln2, extras2) lc_tab = Table() lc_tab.add_column(Column(x,'time')) lc_tab.add_column(Column(y,'flux')) lc_tab.add_column(Column(yerr_tess,'flux_err')) lc_tab.add_column(Column(gp_mod, 'gp_pred_tess')) lc_tab.write('lc.csv',format='csv') 
plt.figure(figsize=(14,4)) plt.plot(lc_tab['time'], lc_tab['flux']) plt.plot(lc_tab['time'], lc_tab['gp_pred_tess']) ###Output _____no_output_____
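###Markdown A small follow-up sketch (my addition, not part of the original notebook): reload the light-curve table saved above and plot the GP-detrended flux, assuming 'lc.csv' was written by the preceding cells. ###Code
import pandas as pd
import matplotlib.pyplot as plt

lc = pd.read_csv('lc.csv')
# Subtract the saved GP + background prediction from the raw flux
detrended = lc['flux'] - lc['gp_pred_tess']

plt.figure(figsize=(14, 4))
plt.errorbar(lc['time'], detrended, yerr=lc['flux_err'], fmt='.', ms=2, alpha=0.5)
plt.axhline(0, color='k', lw=1)
plt.xlabel('Time [BKJD - 2454833]')
plt.ylabel('De-trended flux [ppt]')
plt.show()
###Output _____no_output_____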
jupyter/intro_pyspark.ipynb
###Markdown Intro to PySparkAn introduction to using pyspark to load data from `.csv` and psql database, as well as using SparkDataFrames and SQL to perform a query. Load necessary modules ###Code from python.cca_schema import schema from pyspark.sql.session import SparkSession from matplotlib import pyplot as plt ###Output _____no_output_____ ###Markdown Create `spark` session object ###Code # create spark session ---- spark = ( SparkSession.builder # Sets the Spark master URL to connect to, such as "local" to run locally .master("local[1]") # Sets a name for the application, which will be shown in the Spark web UI .appName("Python Spark SQL example") # Gets an existing :class:SparkSession or, if there is no existing one, # creates a new one based on the options set in this builder .getOrCreate() ) ###Output _____no_output_____ ###Markdown Load necessary data ###Code # load IL job data at the census block level # note: IL job counts are from 2017 jobs_df = ( spark.read .format("jdbc") .option("url", "jdbc:postgresql:///chicago") .option("dbtable", "il_wac_s000_jt00_2017") .load() ) # load 2010 IL census block to 2010 IL census tract crosswalk data xwalk_df = ( spark.read .format("jdbc") .option("url", "jdbc:postgresql:///chicago") .option("dbtable", "il_xwalk") .load() ) # load 2010 Chicago census tract data ct_df = ( spark.read .format("jdbc") .option("url", "jdbc:postgresql:///chicago") .option("dbtable", "census_tracts_2010") .load() ) # load current Chicago community area data cca_df = spark.read.csv(path="raw_data/community_areas.csv", schema=schema, sep=",", header=False) # header is supplied in schema ###Output _____no_output_____ ###Markdown Glimpse inside SparkDataFrame ###Code cca_df.show(n=5, truncate=True, vertical=True) ###Output -RECORD 0-------------------------- the_geom | null perimeter | null area | null comarea_ | null comarea_id | null area_numbe | null community | null area_num_1 | null shape_area | null shape_len | null -RECORD 1-------------------------- the_geom | MULTIPOLYGON (((-... perimeter | 0 area | 0 comarea_ | 0 comarea_id | 0 area_numbe | 35 community | DOUGLAS area_num_1 | 35 shape_area | 46004621.1581 shape_len | 31027.0545098 -RECORD 2-------------------------- the_geom | MULTIPOLYGON (((-... perimeter | 0 area | 0 comarea_ | 0 comarea_id | 0 area_numbe | 36 community | OAKLAND area_num_1 | 36 shape_area | 16913961.0408 shape_len | 19565.5061533 -RECORD 3-------------------------- the_geom | MULTIPOLYGON (((-... perimeter | 0 area | 0 comarea_ | 0 comarea_id | 0 area_numbe | 37 community | FULLER PARK area_num_1 | 37 shape_area | 19916704.8692 shape_len | 25339.0897503 -RECORD 4-------------------------- the_geom | MULTIPOLYGON (((-... 
perimeter | 0 area | 0 comarea_ | 0 comarea_id | 0 area_numbe | 38 community | GRAND BOULEVARD area_num_1 | 38 shape_area | 48492503.1554 shape_len | 28196.8371573 only showing top 5 rows ###Markdown Register SparkDataFrames as SQL views ###Code jobs_df.createOrReplaceTempView("jobs") xwalk_df.createOrReplaceTempView("xwalk") ct_df.createOrReplaceTempView("ct") cca_df.createOrReplaceTempView("cca") ###Output _____no_output_____ ###Markdown Count the number of jobs in each community area Layers of geography:* many census blocks -> one census tract* many census tracts -> one community area* many community areas -> one City of Chicago_note: 46826 blocks -> 801 tracts -> 77 community areas -> 1 city_ Logic:To do this, we need to mark the community area each Chicago census block resides in and then identify the number of jobs in each census block. ###Code query = """ SELECT cca.community, SUM(jobs.c000) AS num_jobs FROM xwalk JOIN ct ON xwalk.trct = ct.geoid10 JOIN cca ON ct.commarea = cca.area_numbe JOIN jobs ON xwalk.tabblk2010 = jobs.w_geocode GROUP BY community ORDER BY num_jobs DESC """ # execute query jobs_cca = spark.sql(query) # display results jobs_cca.show(n=5, truncate=True, vertical=True) ###Output -RECORD 0-------------------- community | LOOP num_jobs | 437666 -RECORD 1-------------------- community | NEAR NORTH SIDE num_jobs | 192789 -RECORD 2-------------------- community | NEAR WEST SIDE num_jobs | 132880 -RECORD 3-------------------- community | OHARE num_jobs | 58669 -RECORD 4-------------------- community | WEST TOWN num_jobs | 46945 only showing top 5 rows ###Markdown Convert to `pandas` DataFrame ###Code # convert SparkDataFrame to Pandas DataFrame ---- jobs_cca_df = jobs_cca.toPandas() jobs_cca_df.head() ###Output _____no_output_____ ###Markdown Visualize Results ###Code # visualize top ten CCAs by number of jobs ---- plt.barh(y=jobs_cca_df["community"][0:9], width=jobs_cca_df["num_jobs"][0:9]) plt.title("Top 10 Community Areas by Number of Jobs, 2017") plt.xlabel("Total number of Jobs") plt.ylabel("Chicago community areas") plt.tight_layout() # export plot as PNG ---- plt.savefig("visuals/top_ccas_by_jobs.png", dpi=200, bbox_inches="tight") ###Output _____no_output_____ ###Markdown Stop `spark` session ###Code spark.stop() ###Output _____no_output_____
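###Markdown The same aggregation can also be expressed with the DataFrame API instead of SQL. The sketch below is my addition (not part of the original walkthrough) and assumes the four DataFrames loaded above are still available, i.e. it is run before `spark.stop()` is called. ###Code
from pyspark.sql import functions as F

# Join blocks -> tracts -> community areas -> job counts, then aggregate
jobs_cca_api = (
    xwalk_df
    .join(ct_df, xwalk_df.trct == ct_df.geoid10)
    .join(cca_df, ct_df.commarea == cca_df.area_numbe)
    .join(jobs_df, xwalk_df.tabblk2010 == jobs_df.w_geocode)
    .groupBy("community")
    .agg(F.sum("c000").alias("num_jobs"))
    .orderBy(F.col("num_jobs").desc())
)
jobs_cca_api.show(n=5, truncate=True, vertical=True)
###Output _____no_output_____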
Handwritten_Digit_Classification_using_CNN_.ipynb
###Markdown Data normalization ###Code X_train_n = X_train_full / 255. X_test_n = X_test / 255. X_valid, X_train = X_train_n[:5000], X_train_n[5000:] Y_valid, Y_train = Y_train_full[:5000], Y_train_full[5000:] X_test = X_test_n np.random.seed(42) tf.random.set_seed(42) ###Output _____no_output_____ ###Markdown create model cnn ###Code model = keras.models.Sequential() model.add(keras.layers.Conv2D(filters= 32, kernel_size= (3,3), strides=1, padding='valid', activation='relu', input_shape=(28, 28, 1))) model.add(keras.layers.MaxPool2D((2, 2))) model.add(keras.layers.Flatten()) model.add(keras.layers.Dense(200, activation="relu")) model.add(keras.layers.Dense(100, activation="softmax")) model.summary() model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"]) model_history = model.fit(X_train, Y_train, epochs=60, batch_size= 64, validation_data=(X_valid, Y_valid)) model_history.history pd.DataFrame(model_history.history).plot(figsize=(8,5)) plt.grid(True) plt.gca().set_ylim(0,1) plt.show() ev = model.evaluate(X_test_n, Y_test) ev X_new =X_test[:3] predict_x=model.predict(X_new) classes_x=np.argmax(predict_x,axis=1) Y_test[:3] print(plt.imshow(X_test[7].reshape((28, 28)))) ###Output _____no_output_____
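###Markdown A short follow-up sketch (my addition, not part of the original notebook): summarise the test-set performance with a confusion matrix and per-class metrics, using the `model`, `X_test_n` and `Y_test` objects defined above. ###Code
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

# Predicted class = index of the largest softmax output per test image
test_probs = model.predict(X_test_n)
test_preds = np.argmax(test_probs, axis=1)

print(confusion_matrix(Y_test, test_preds))
print(classification_report(Y_test, test_preds))
###Output _____no_output_____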
helpers/analysis_adversarial_2.ipynb
###Markdown edge detection ###Code pip install scikit-image from skimage import io, color, feature def detect_edge(data): edge_maps = np.zeros_like(data) for idx,img in enumerate(data): # import pdb; pdb.set_trace() edge_maps[idx,0] = feature.canny(np.array(img[0], dtype=np.float64))#, sigma = 1, low_threshold=1.5) #, high_threshold=.1) return edge_maps import cv2 def auto_canny(image, sigma=100): v = np.median(image) lower = int(max(0, (1.0 - sigma) * v)) upper = int(min(255, (1.0 + sigma) * v)) edged = cv2.Canny(image, lower, upper) return edged # Converting the image to grayscale. import cv2 # gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def detect_edge_new(img): fgbg = cv2.createBackgroundSubtractorMOG2( history=10, varThreshold=2, detectShadows=False) gray = np.array(img.mean(axis=2)*255).astype('uint8') # Extract the foreground edges_foreground = cv2.bilateralFilter(gray, 9, 75, 75) foreground = fgbg.apply(edges_foreground) # Smooth out to get the moving area kernel = np.ones((50,50),np.uint8) foreground = cv2.morphologyEx(foreground, cv2.MORPH_CLOSE, kernel) # Applying static edge extraction edges_foreground = cv2.bilateralFilter(gray, 9, 75, 75) edges_filtered = cv2.Canny(edges_foreground, 30, 100) # Crop off the edges out of the moving area cropped = (foreground // 255) * edges_filtered return cropped#edges_filtered img_2 = np.array(img).astype('uint8')/255. edge_map = detect_edge_new(img_2) # auto_canny(img_2) plt.title(f"img") plt.imshow(edge_map) plt.xticks([]) plt.yticks([]) plt.show() ###Output _____no_output_____ ###Markdown Perform the attack now and repeat ###Code fgsm_attack = FGSM(model, eps=2/256) img_name = '6' from PIL import Image img = Image.open(f'./imgs/{img_name}.jpg').convert('RGB') # edge_map = detect_edge(img[None]/255.) pred = predict_image(f'./imgs/{img_name}.jpg') plt.title(labels[pred[0].item()]) plt.imshow(img) plt.xticks([]) plt.yticks([]) plt.show() transform = transforms.Compose([ # transforms.CenterCrop(224), transforms.ToTensor(), # transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) # transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)) ]) img_adv = fgsm_attack(transform(img).unsqueeze(0),torch.tensor([pred[0].item()])) # img_adv = img_adv - torch.tensor((0.485, 0.456, 0.406)) output = model(img_adv) # index = output.data.numpy().argmax() # top 1 _, indices = torch.sort(output.data, descending=True) pred = indices[0][:5] plt.title(labels[pred[0].item()]) plt.imshow(img_adv[0].permute(1,2,0)) plt.xticks([]) plt.yticks([]) plt.show() # plt.title('diference') # plt.imshow(img_adv[0].permute(1,2,0)*255 - np.array(img)) # plt.xticks([]) # plt.yticks([]) # plt.show() plt.title('diference') # diff = torch.norm(img_adv[0].permute(1,2,0)*255 - np.array(img), float('inf')) # diff = torch.abs(img_adv[0].permute(1,2,0)*255 - np.array(img)) diff = torch.abs(img_adv[0].permute(1,2,0)*255 - np.array(img)) diff = (diff - diff.min()) / (diff.max() - diff.min()) # diff = img_adv[0].permute(1,2,0)*255 - np.array(img) plt.imshow(diff) plt.xticks([]) plt.yticks([]) plt.show() # img = np.array(img*255) # img_2 = np.array(img).astype('uint8') # edge_map = auto_canny(img_2) img_2 = np.array(img).astype('uint8')/255. edge_map = detect_edge_new(img_2) plt.title(f"img") plt.imshow(edge_map) plt.xticks([]) plt.yticks([]) plt.show() # img_avd_2 = np.array(img_adv*255.).astype('uint8') # edge_map_adv = auto_canny(img_avd_2[0].transpose(1,2,0)) img_avd_2 = np.array(img_adv*255.).astype('uint8')/255. 
edge_map_adv = detect_edge_new(img_avd_2[0].transpose(1,2,0)) plt.title(f"fgsm eps={8/256}") plt.imshow(edge_map_adv) plt.xticks([]) plt.yticks([]) plt.show() plt.title(f"edge map diff") plt.imshow(edge_map-edge_map_adv) plt.xticks([]) plt.yticks([]) plt.show() plt.title('diference') # diff = torch.norm(img_adv[0].permute(1,2,0)*255 - np.array(img), float('inf')) # diff = torch.abs(img_adv[0].permute(1,2,0)*255 - np.array(img)) diff = torch.abs(img_adv[0].permute(1,2,0)*255 - np.array(img)/1.) diff = (diff - diff.min()) / (diff.max() - diff.min()) # diff = img_adv[0].permute(1,2,0)*255 - np.array(img) plt.imshow(diff) plt.xticks([]) plt.yticks([]) plt.show() # torch.abs(diff.max()) diff.max() edge_map.max() # (img_adv[0].permute(1,2,0)*255).max() # np.array(img).max() # np.array(img).min() # (img_adv[0].permute(1,2,0)*255).min() # diff.shape plt.title(f"fgsm eps={8/256}") plt.imshow(edge_map-edge_map_adv) plt.xticks([]) plt.yticks([]) plt.show() np.array(img).max() img_avd_2.shape img = Image.open("./imgs/1.jpg").convert('RGB') np.array(img_adv).astype('uint8').dtype img_avd_2.dtype inp = cv2.dnn.blobFromImage(np.array(img)) plt.imshow(inp[0].transpose(1,2,0)) net.setInput(inp) out = net.forward() out = out[0, 0] out = cv.resize(out, (frame.shape[1], frame.shape[0])) out = 255 * out out = out.astype(np.uint8) out=cv.cvtColor(out,cv.COLOR_GRAY2BGR) con=np.concatenate((frame,out),axis=1) cv.imshow(kWinName,con) !sh download_pretrained.sh from torch import nn x = torch.randn(1, 1, requires_grad=True) lin = nn.Linear(1, 1) # your model or manual operations out = lin(x) print(out.grad_fn) out.backward() x ###Output _____no_output_____
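###Markdown One way to condense the edge-map comparison above into a single number is the fraction of pixels whose edge response changes after the attack. This is a hypothetical sketch (my addition), using the `edge_map` and `edge_map_adv` arrays computed above. ###Code
import numpy as np

def edge_change_ratio(edges_clean, edges_adv):
    """Fraction of pixels where the two binarised edge maps disagree."""
    clean = np.asarray(edges_clean) > 0
    adv = np.asarray(edges_adv) > 0
    return float(np.mean(clean != adv))

print('edge change ratio:', edge_change_ratio(edge_map, edge_map_adv))
###Output _____no_output_____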
Decision_Tree_Regressor.ipynb
###Markdown ###Code import numpy as np import pandas as pd %matplotlib inline import matplotlib.pyplot as plt from sklearn.model_selection import GridSearchCV, cross_val_score, cross_val_predict, train_test_split import seaborn as sns from sklearn import datasets from sklearn.metrics import mean_squared_error from sklearn.tree import DecisionTreeRegressor # ############################################################################# # Load data boston = datasets.load_boston() print(boston.data.shape, boston.target.shape) print(boston.feature_names) data = pd.DataFrame(boston.data,columns=boston.feature_names) data = pd.concat([data,pd.Series(boston.target,name='MEDV')],axis=1) data.head() X = data.iloc[:,:-1] y = data.iloc[:,-1] x_training_set, x_test_set, y_training_set, y_test_set = train_test_split(X,y,test_size=0.10,random_state=42, shuffle=True) # Fit regression model # Estimate the score on the entire dataset, with no missing values model = DecisionTreeRegressor(max_depth=4, min_samples_split=5, max_leaf_nodes=10) model.fit(x_training_set, y_training_set) from sklearn.metrics import mean_squared_error, r2_score model_score = model.score(x_training_set,y_training_set) # Have a look at R sq to give an idea of the fit , # Explained variance score: 1 is perfect prediction print("coefficient of determination R^2 of the prediction.: ",model_score) y_predicted = model.predict(x_test_set) # The mean squared error print("Mean squared error: %.2f"% mean_squared_error(y_test_set, y_predicted)) # Explained variance score: 1 is perfect prediction print('Test Variance score: %.2f' % r2_score(y_test_set, y_predicted)) # So let's run the model against the test data from sklearn.model_selection import cross_val_predict fig, ax = plt.subplots() ax.scatter(y_test_set, y_predicted, edgecolors=(0, 0, 0)) ax.plot([y_test_set.min(), y_test_set.max()], [y_test_set.min(), y_test_set.max()], 'k--', lw=4) ax.set_xlabel('Actual') ax.set_ylabel('Predicted') ax.set_title("Ground Truth vs Predicted") plt.show() ### Hyperparameter tuning with GridSearchCV¶ param_grid = {"criterion": ["mse", "mae"], "min_samples_split": [10, 20, 40], "max_depth": [2, 6, 8], "min_samples_leaf": [20, 40, 100], "max_leaf_nodes": [5, 20, 100], } ## Comment in order to publish in kaggle. 
grid_cv_dtm = GridSearchCV(model, param_grid, cv=5) grid_cv_dtm.fit(x_training_set,y_training_set) print("R-Squared::{}".format(grid_cv_dtm.best_score_)) print("Best Hyperparameters::\n{}".format(grid_cv_dtm.best_params_)) df = pd.DataFrame(data=grid_cv_dtm.cv_results_) df.head() fig,ax = plt.subplots() sns.pointplot(data=df[['mean_test_score', 'param_max_leaf_nodes', 'param_max_depth']], y='mean_test_score',x='param_max_depth', hue='param_max_leaf_nodes',ax=ax) ax.set(title="Effect of Depth and Leaf Nodes on Model Performance") # Evaluating training model predicted = grid_cv_dtm.best_estimator_.predict(X) residuals = y.flatten()-predicted fig, ax = plt.subplots() ax.scatter(y.flatten(), residuals) ax.axhline(lw=2,color='black') ax.set_xlabel('Observed') ax.set_ylabel('Residual') plt.show() model.get_depth() import math dataset=pd.read_csv("weather_dataset.csv") dataset.head() ### chandra's code for decision tree regression for calculating total standard devation all categorical variables def calc_Total_SD(dataframe,targetcol): avg=dataframe[targetcol].mean() count=len(dataframe[targetcol]) dataframe["squared"]= avg - dataframe[targetcol] dataframe["squared"]= (dataframe["squared"]**2/count) total_sd=dataframe["squared"].sum() total_sd=round(math.sqrt(total_sd),4) return total_sd totla_sd=calc_Total_SD(dataset,"Decision") totla_sd ### chandra's code for decision tree regression for calculating total standard devation on one feature features_sd=[] def cal_sd_single_feature(dataframe,colnames,targetcol): for cols in colnames: for col in dataframe[cols].unique(): total=dataframe[targetcol].count() avg=dataframe[targetcol][dataframe[cols]==col].mean() count=len(dataframe[targetcol][dataframe[cols]==col]) dataframe["squared"]= avg - dataframe[targetcol][dataframe[cols]==col] dataframe["squared"]= (dataframe["squared"]**2/count) feature_dev=dataframe["squared"].sum() feature_dev=round(math.sqrt(feature_dev),4) sd=(count/total*feature_dev) print(cols,"-",col,"-",round(avg,4),"-",count,"-",total,"-",feature_dev,"-",round(sd,4)) features_sd.append(sd) cal_sd_single_feature(dataset,["Outlook","Temp.","Humidity","Wind"],"Decision") print(features_sd) ## chandra's code for decision tree regression for calculating total standard devation on one feature features_sd=[] def cal_sd_single_feature(dataframe,colnames,targetcol): for cols in colnames: local_sd=0 for col in dataframe[cols].unique(): total=dataframe[targetcol].count() avg=dataframe[targetcol][dataframe[cols]==col].mean() count=len(dataframe[targetcol][dataframe[cols]==col]) dataframe["squared"]= avg - dataframe[targetcol][dataframe[cols]==col] dataframe["squared"]= (dataframe["squared"]**2/count) feature_dev=dataframe["squared"].sum() feature_dev=round(math.sqrt(feature_dev),2) sd=(count/total*feature_dev) local_sd=round(local_sd+sd,2) print("local sd is",local_sd) print(cols,"-",col,"-",round(avg,2),"-",count,"-",total,"-",feature_dev,"-",round(sd,2),totla_sd) min_variance=round(totla_sd-local_sd,2) features_sd.append(min_variance) cal_sd_single_feature(dataset,["Outlook","Temp.","Humidity","Wind"],"Decision") print(features_sd) csvr cxzdataset["Temp."].unique() dataset.Decision[dataset.Humidity=="High"].mean() dataset["Decision"] dataset.Decision ###Output _____no_output_____
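###Markdown A compact cross-check sketch (my addition, not from the original notebook): the same standard-deviation-reduction calculation expressed with a pandas groupby. It re-reads the CSV into a fresh frame and uses the population standard deviation (ddof=0) to match the manual loops above, so it may differ slightly from the rounded values printed there. ###Code
import pandas as pd

def sd_reduction(df, feature, target):
    # Total (population) standard deviation of the target
    total_sd = df[target].std(ddof=0)
    # Weighted standard deviation after splitting on the feature
    weighted_sd = sum(
        (len(group) / len(df)) * group[target].std(ddof=0)
        for _, group in df.groupby(feature)
    )
    return total_sd - weighted_sd

weather = pd.read_csv("weather_dataset.csv")
for col in ["Outlook", "Temp.", "Humidity", "Wind"]:
    print(col, round(sd_reduction(weather, col, "Decision"), 4))
###Output _____no_output_____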
Chapman/Ch4-Problem_4-02.ipynb
###Markdown Excercises Electric Machinery Fundamentals Chapter 4 Problem 4-2 ###Code %pylab notebook %precision 0 ###Output Populating the interactive namespace from numpy and matplotlib ###Markdown Description Given a 13.8-kV, 50-MVA, 0.9-power-factor-lagging, 60-Hz, four-pole, Y-connected synchronous machine with:* a synchronous reactance of $2.5\,\Omega$ * an armature resistance of $0.2\,\Omega$.* at 60 Hz, its friction and windage losses are 1 MW* its core losses are 1.5 MW. * The field circuit has a dc voltage of 120 V,* the maximum $I_F$ is 10 A. The current of the field circuit is adjustable over the range from 0 to 10 A. The OCC of this generator is shown in Figure P4-1 below ###Code Vl = 13.8e3 # [V] PF = 0.9 Xs = 2.5 # [Ohm] Ra = 0.2 # [Ohm] P = 50e6 # [W] Pf_w = 1.0e6 # [W] Pcore = 1.5e6 # [W] Pstray = 0 # [W] n_m = 1800 # [r/min] ###Output _____no_output_____ ###Markdown (a) * How much field current is required to make the terminal voltage $V_T$ (or line voltage $V_L$ ) equal to 13.8 kV when the generator is running at no load? (b) * What is the internal generated voltage $E_A$ of this machine at rated conditions? (c) * What is the phase voltage $V_\phi$ of this generator at rated conditions? (d) * How much field current is required to make the terminal voltage $V_T$ equal to 13.8 kV when the generator is running at rated conditions? (e)Suppose that this generator is running at rated conditions, and then the load is removed without changing the field current. * What would the terminal voltage of the generator be? (f) * How much steady-state power and torque must the generator’s prime mover be capable of supplying to handle the rated conditions? (g) * Construct a capability curve for this generator. SOLUTION (a)If the no-load terminal voltage is 13.8 kV, the required field current can be read directly from the open-circuit characteristic. It is $\underline{\underline{I_F = 3.50\,A}}$. (b)This generator is Y-connected, so $I_L = I_A$ . At rated conditions, the line and phase current in this generator is:$$I_A = I_L = \frac{P}{\sqrt{3}V_L}$$ ###Code ia = P / (sqrt(3) * Vl) Ia_angle = -arccos(PF) Ia = ia * (cos(Ia_angle) + sin(Ia_angle)*1j) print('Ia = {:.0f} A ∠{:.1f}°'.format(abs(Ia), Ia_angle/pi *180)) ###Output Ia = 2092 A ∠-25.8° ###Markdown The phase voltage of this machine is:$$V_\phi = V_T / \sqrt{3}$$ ###Code V_phase = Vl / sqrt(3) print('V_phase = {:.0f} V'.format(V_phase)) ###Output V_phase = 7967 V ###Markdown The internal generated voltage of the machine is:$$\vec{E}_A = \vec{V}_\phi + R_A\vec{I}_A + jX_S\vec{I}_A$$ ###Code Ea = V_phase + Ra*Ia + Xs*1j*Ia Ea_angle = arctan(Ea.imag/Ea.real) print(''' Ea = {:.0f} V ∠{:.1f}° =================='''.format(abs(Ea), Ea_angle/pi*180)) ###Output Ea = 11547 V ∠23.1° ================== ###Markdown (c)The phase voltage of the machine at rated conditions is: ###Code print(''' V_phase = {:.0f} V ================'''.format(V_phase)) ###Output V_phase = 7967 V ================ ###Markdown (d)The equivalent open-circuit terminal voltage corresponding to an $E_A$ of the value calculated in **(b)** is: ###Code Vt_oc = sqrt(3) * abs(Ea) print('Vt_oc = {:.0f} kV'.format(Vt_oc/1000)) ###Output Vt_oc = 20 kV ###Markdown From the OCC, the required field current is $\underline{\underline{I_F = 10\,A}}$. (e)If the load is removed without changing the field current then $V_\phi = E_A$: ###Code abs(Ea) ###Output _____no_output_____ ###Markdown The corresponding terminal voltage would be $\underline{\underline{V_T = 20\,kV}}$. 
(f)The input power to this generator is equal to the output power plus losses. The rated output power is: ###Code Pout = P*PF print('Pout = {:.0f} MW'.format(Pout/1e6)) ###Output Pout = 45 MW ###Markdown $$P_{CU} = 3I^2_AR_A$$ ###Code Pcu = 3 * abs(Ia)**2 * Ra print('Pcu = {:.1f} MW'.format(Pcu/1e6)) Pin = Pout +Pcu + Pf_w + Pcore + Pstray print('Pin = {:.1f} MW'.format(Pin/1e6)) ###Output Pin = 50.1 MW ###Markdown Therefore the prime mover must be capable of supplying $P_{in}$. Since the generator is a four-pole 60 Hz machine, it must be turning at 1800 r/min. The required torque is:$$\tau_{app} = \frac{P_{in}}{\omega_m}$$ ###Code w_m = n_m * (2*pi/60.0) tau_app = Pin / w_m print(''' tau_app = {:.0f} Nm ==================='''.format(tau_app)) ###Output tau_app = 265924 Nm =================== ###Markdown (g)The rotor current limit of the capability curve would be drawn from an origin of:$$Q = -\frac{3V^2_\phi}{X_S}$$ ###Code Q = - (3 * V_phase**2) / Xs print('Q = {:.2f} Mvar'.format(Q/1e6)) ###Output Q = -76.18 Mvar ###Markdown The radius of the rotor current limit is:$$D_E = \frac{3V_\phi E_A}{X_S}$$ ###Code De = (3 * V_phase * abs(Ea)) / Xs print('De = {:.0f} Mvar'.format(De/1e6)) ###Output De = 110 Mvar ###Markdown The stator current limit is a circle at the origin of radius:$$S = 3V_\phi I_A$$ ###Code S = 3 * V_phase * abs(Ia) print('S = {:.0f} Mvar'.format(S/1e6)) ###Output S = 50 Mvar ###Markdown Get points for stator current limit: ###Code theta = arange(-95,95) # angle in degrees rad = theta * pi/180 # angle in radians s_curve = S * ( cos(rad) + sin(rad)*1j) ###Output _____no_output_____ ###Markdown Get points for rotor current limit: ###Code orig = Q*1j theta = arange(65,115) # angle in degrees rad = theta * pi / 180 # angle in radians r_curve = orig + De * ( cos(rad) + sin(rad)*1j ) ###Output _____no_output_____ ###Markdown Plot the capability diagram: ###Code fig= figure() ax=fig.add_subplot(1, 1, 1) ax.plot(real(s_curve/1e6),imag(s_curve/1e6),'b') ax.plot(real(r_curve/1e6),imag(r_curve/1e6),'r--') ax.set_title('Synchronous Generator Capability Diagram') ax.set_xlabel('Power (MW)') ax.set_ylabel('Reactive Power (Mvar)') ax.set_aspect('equal', 'datalim') ax.legend(('stator current limit', 'rotor current limit'), loc=3); ax.grid() ###Output _____no_output_____
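###Markdown As an extra check (my addition, not part of the original solution), the rated operating point can be placed on the diagram to confirm it lies inside both limit circles computed above. ###Code
import numpy as np

P_rated = P * PF                      # rated real power, W
Q_rated = P * np.sin(np.arccos(PF))   # rated reactive power, var

# Distance tests against the two limit circles (small tolerance for round-off)
inside_stator = np.hypot(P_rated, Q_rated) <= S + 1.0
inside_rotor = np.hypot(P_rated, Q_rated - Q) <= De + 1.0

print('Rated point: P = {:.1f} MW, Q = {:.1f} Mvar'.format(P_rated/1e6, Q_rated/1e6))
print('Inside stator-current limit:', inside_stator)
print('Inside rotor-current limit:', inside_rotor)
###Output _____no_output_____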
2018/01/solution.ipynb
###Markdown Advent of Code 2018 - Day 1 Part 1 ###Code freq = 0 with open('input.txt', 'r') as f: for change in [int(i) for i in f]: freq = freq + change print freq ###Output 425 ###Markdown Part 2 ###Code with open('input.txt', 'r') as f: freq_list = [int(i) for i in f] freq = 0 freqs = set() while True: for change in freq_list: freq = freq + change if freq in freqs: freq_reached_twice = freq break else: freqs.add(freq) else: continue break print freq_reached_twice ###Output 57538
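###Markdown The cells above are Python 2 (`print` statements). Below is an alternative sketch of Part 2 in Python 3 (my addition, not the original solution), using `itertools` to cycle through the frequency changes lazily. ###Code
from itertools import accumulate, cycle

with open('input.txt') as f:
    changes = [int(line) for line in f]

# Walk the running sum over the repeated change list until a value repeats
seen = {0}
for freq in accumulate(cycle(changes)):
    if freq in seen:
        print(freq)
        break
    seen.add(freq)
###Output _____no_output_____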
expressyeaself/interaction/1_how_to_process_raw_data.ipynb
###Markdown Processing Raw Data with _ExpressYeaself_ Introduction * This **interactive** notebook that will **automate** the **processing** of raw data. All you need to do is **set the parameters** that control the way in which the data is processed. * If you haven't already done so, please **download the raw data** by following the installation instructions found in our [README](https://github.com/yeastpro/ExpressYeaself/blob/master/README.md). * Run (using ``shift`` + ``enter``) **every cell** in this notebook from top to bottom . You'll need to **input some arguments** for some functions where instructed to before running the cell. This will involve assigning values to variables but typing some input after ``=`` signs. * If an error is thrown, check your input and try to run the cell again. Make sure you've assigned the variables by typing exactly what's in the ``codeblock``. For example, for parameter 2 ``'pTpA'`` is a correct assignment but ``pTpA`` **is not**. Importing some packages ###Code import context import os process = context.process_data ###Output _____no_output_____ ###Markdown Defining the paths to the raw data ###Code ROOT_DIR = os.getcwd()[:os.getcwd().rfind('Express')] + 'ExpressYeaself/' raw_pTpA = ROOT_DIR + 'example/pTpA_data/raw_data_pTpA.txt.gz' raw_data_Abf1TATA = ROOT_DIR + 'example/Abf1TATA_data/raw_data_Abf1TATA.txt.gz' ###Output _____no_output_____ ###Markdown Choosing the processing parameters 1. Decide what raw data you want to use. Type ``raw_pTpA`` or ``raw_Abf1TATA`` after the ``=`` sign. ###Code raw_data = ###Output _____no_output_____ ###Markdown 2. Choose the scaffold type. If you chose ``raw_pTpA`` the scaffold type is ``'pTpA'`` and if you chose ``raw_Abf1TATA`` the scaffold type is ``'Abf1TATA'``. ###Code scaffold_type = ###Output _____no_output_____ ###Markdown 3. If you specify a value for this parameter, the sequences in your raw data file are sorted by expression level. The top and bottom percentiles of the data are then extracted and proceed with the data processing, whereas the middle portion of the data is discarded. * For example, if you specify ``percentile = 0.25`` the quarter of sequences with the highest expression levels and the quarter of sequences with the lowest expression levels are extracted. The middle 50 % of data is discarded. * **Why use this?** This parameter is useful for creating extremes of data based on expression level, which can be used to train a classification model. This can predict the probability that a sequence will express _high_ or express _low_. * **Why not?** Is best used to train binary classification models. For a more quantitative prediction of expression level across a whole range, set this parameter to ``None``. ###Code percentile = ###Output _____no_output_____ ###Markdown 4. If (and only if) you have set a value of ``percentile`` that **is not ``None``**, choose whether or not to binarize the expression levels. This will set the expression levels of all sequences in the top pecentile to ``1`` and all the expression levels in the bottom percentile to ``0``. * Highly recommended that you set ``binarize_expression_levels = True`` if you have specified a value for ``percentile``. * Otherwise, set ``binarize_expression_levels = False``. ###Code binarize_expression_levels = ###Output _____no_output_____ ###Markdown 5. The raw data you have downloaded contains sequences of **variable** length, ranging from 97 to 127 nucleotides. 
To train a neural network model, the inputs must be encoded and all the encoded sequences must be the same length. Sequences are **automatically padded** so they are the same length. **However**, if you choose to set ``pull_only_homogeneous = True`` all the sequences that have the modal (most common) length will be pulled out. Every sequence will by definition have the same length - be _homogeneous_ - so will not need padding. For pTpA data, for instance, this is 110 nucleotides, and for Abf1TATA data this is 115 nucleotides. * If you choose ``pull_only_homogeneous = False``, sequences that are shorter than the longest sequence in the file will be 'padded' to the max length. When you encode your data, the padding will be encoded as empty vectors. ###Code pull_only_homogeneous = ###Output _____no_output_____ ###Markdown 6. The sequences in the raw data file contain '**flanking regions**'. This are short nucleotide sequences on each end of the oligonucleotide sequences found in the file that aid in the synthesis of polynucleotide sequences where nucleotides are inserted into '**scaffold sequences**'. Every sequence in each raw data file has the same flanking regions, though the flanking regions in the pTpA sequence data are **different** than those in the Abf1TATA sequence data. Here you can choose whether or not to remove these flanking regions. * Recommended: ``deflank_sequences = True`` as we can remove as many constants as possible before training a model that needs to pick up on subtleties. ###Code deflank_sequences = ###Output _____no_output_____ ###Markdown 7. Here you can choose whether or not to insert the sequences found in the raw data file into the middle of their corresponding scaffold sequences. Set ``insert_into_scaffold = True`` or ``insert_into_scaffold = False``. ###Code insert_into_scaffold = ###Output _____no_output_____ ###Markdown 8. Here you can choose whether or not to add extra padding to the sequences. This may be useful if you want to increase the sequence length to a particular length, but otherwise is not recommended; the automatic padding mechanism (or selecting ``pull_only_homogeneous = True`` in step 5) is usually sufficient. Set ``extra_padding = 0`` if you don't want to add extra padding, or else put in another positive integer if you do. Setting ``extra_padding = 3`` will pad sequences by an extra 3 empty nucleotides (that will be encoded as empty vectors later). ###Code extra_padding = ###Output _____no_output_____ ###Markdown 9. If you have selected to use some extra padding, or have set ``pull_only_homogeneous = False`` (which automatically pads sequences), here you can choose whether that padding is added to the front (LHS) or back (RHS) of the sequences. Set ``pad_front = True`` if you want to back the front, or ``pad_front = False`` if you want to pad the back. If you have set ``pull_only_homogeneous = True`` and ``extra_padding = 0`` it doens't matter what you set this paramter to as no padding will be applied (you must still set it to something though). ###Code pad_front = ###Output _____no_output_____ ###Markdown 10. If you would like a log report of the data processing to be written to file set ``log_process_report = True``. This will write **timings** of each step and **data lost/discarded** at each stage to file, which can be found at file path ``ExpressYeaself/example/processed_data/``. ###Code log_process_report = ###Output _____no_output_____ ###Markdown 11. 
If you would like the intermediate files created at each step of the data processing to be deleted after the process is complete, set ``remove_files = True``. It is **strongly recommended** to do so. This is because we are dealing with very large files, so having files for each step will use up a lot of memory. Otherwise, set ``remove_files = False``. ###Code remove_files = ###Output _____no_output_____ ###Markdown 12. Finally, if you would like to create a smaller sample data file based on your processed data, then you can specify a sample size here. This will pull this many sequence and expression level data lines from your processed data file pseudo-randomly. * This is useful for playing about with model architectures as a smaller data set (recommended size: ``sample_size = 10_000``) will run significantly faster. * If you don't want to create a sample data file, set ``sample_size = None`` ###Code sample_size = ###Output _____no_output_____ ###Markdown Calling the function Now you have specified all the parameters you are ready to call the function that processes the raw data. Just run the following cell and wait ~ 5-10 minutes (depending on computer performance and parameters set). ###Code processed_data = process.process_raw_data(input_seqs=raw_data, scaffold_type=scaffold_type, percentile=percentile, binarize_els=binarize_expression_levels, homogeneous=pull_only_homogeneous, deflank=deflank_sequences, insert_into_scaffold=insert_into_scaffold, extra_padding=extra_padding, pad_front=pad_front, report_loss=log_process_report, report_times=log_process_report, remove_files=remove_files, create_sample_of_size=sample_size) ###Output _____no_output_____
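###Markdown Once the call above finishes, you may want to peek at the output. The sketch below is my addition and assumes the returned `processed_data` value is the path of the processed (possibly gzipped) text file; if your version of the package returns something else, adjust accordingly. ###Code
import gzip

path = str(processed_data)
opener = gzip.open if path.endswith('.gz') else open
with opener(path, 'rt') as f:
    # Print the first few lines, truncated to 80 characters each
    for i, line in enumerate(f):
        print(line.rstrip()[:80])
        if i == 4:
            break
###Output _____no_output_____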
src/whylogs/cli/notebooks/Logging.ipynb
###Markdown In this notebook, we will explore how to generate logs using the WhyLogs Python library. The resulting profile can also be produced from the command line interface. The workflow to work with these files, along with deeper analysis and visualization examples, can be found in the `Analysis.ipynb` that is generated with `whylabs init`. Generating logs from WhyLogs Python libraryTo generate logs using Python, we will import the WhyLogs library, initialize a logging session with WhyLogs, read in our raw data from file, and pass this data to our session.First, import the relevant session and logger functions. ###Code from whylabs.logs import get_or_create_session session = get_or_create_session() ###Output _____no_output_____ ###Markdown We will now download an example dataset from Lending Club, an online financial lending platform. The dataset is located in the package's `notebooks/` folder for now.Feel free to use the below cell to orient yourself and guide `data_file` to the correct filepath. ###Code print("Current working directory:", os.getcwd()) data_file = "lending_club_1000.csv" data = pd.read_csv(os.path.join(data_file)) data ###Output _____no_output_____ ###Markdown We should see a Pandas dataframe containing the 1000 rows of our Lending Club data sample.Now that we have the raw data, we can pass it into the WhyLogs logger. It is often useful to pass a string label such as "demo.data" along with the dataset for future reference.The `log_dataframe` function will profile the given dataset using the WhyLogs library. When we capture the logger response, we can interact with the generated profiles. ###Code response = session.log_dataframe(data, 'test.data') profile = response['profile'] ###Output _____no_output_____ ###Markdown The flat summary, histograms, and frequency information can be found inside this summary object. For more information about the contents of these objects, consult the `Analysis.ipynb` notebook. ###Code summary = profile.flat_summary() flat_summary = summary['summary'] flat_summary print(flat_summary["column"].unique()) histograms = summary['hist'] histograms["delinq_amnt"] frequencies = summary['frequent_strings'] frequencies.update(summary['frequent_numbers']) frequencies['num_sats'] ###Output _____no_output_____ ###Markdown Additional options for our WhyLogs sessionWe chose the most simple configuration above, but there are a number of convenient options that can be set.**Cloud storage:** You may set the an AWS S3 bucket to have these logs automatically pushed to the cloud. You must have valid AWS configuration settings to be able to do so.**Binary file:** By default, we produce a binary file that contains raw objects used to summarize the data passed in. Navigating this file is beyond the scope of this notebook, however. This is listed under the *output_protobuf* option.**Flat and JSON summaries:** By default, we produce a flat summary in the CSV format along with histogram and frequency summaries in the JSON format.You can see these configuration options and others paired with the session in the `session.config` object. ###Code session.config ###Output _____no_output_____ ###Markdown Display and resetting the sessionThere is also a convenience function to send the internal Python logs to stdout. ###Code from whylabs.logs import display_logging display_logging('debug') ###Output _____no_output_____ ###Markdown When you are done with your session, run the `reset_session` function. 
###Code from whylabs.logs.app.session import reset_session reset_session() ###Output _____no_output_____
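Putting the pieces together, a minimal end-to-end sketch of the logging workflow shown above. It only uses calls demonstrated in this notebook; the CSV path is an assumption and should point at your own data:

```python
import pandas as pd
from whylabs.logs import get_or_create_session

session = get_or_create_session()
data = pd.read_csv("lending_club_1000.csv")            # path assumed; use your own dataset
response = session.log_dataframe(data, "demo.data")    # profile the dataframe with WhyLogs
flat_summary = response["profile"].flat_summary()["summary"]
print(flat_summary["column"].unique())                 # columns covered by the profile
```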
_TODO/.ipynb_checkpoints/2021-01-19- Enjoying The Landscape-checkpoint.ipynb
###Markdown Developing Analytic IntuitionAsking myself, 'How do I encode complex realities from simple ideas'?Past what threshold of simple ideas, crossed together, can we say a complex whole has been achieved? A-> B --> C |---> A. Is this most efficient forward transfer of information. A complex whole has perpetuity in producing the same simple sums that define it, at less efficiency, until its eventual decay unless an 'observer' agent is in control to delay the inevitable...Chaotic when unforeseen, beatiful when tamed and hard to understand when foreign. Is there a way to disentangle it, is there order in molding it. I guess we will learn more when we cover reinforcement learning. Creating a narrativeHaving a narrative is important. Narratives draw you in, taking you on journey whose end you dont know yet. Time flies, as you are transported into 'focus' and everything else is a nuisance you wish off. You know have a strong narrative when it shows in your thinking and actions. How then, do you make a good narative that effortlessly transports you, at will, to some other place? Keep a canvas, so you pick up from where you left, so you appreciate your growth and so you keep a log of your recipes, good and bad, just like every great savant has unfinished projects. Get paid for your work. Nothing will make you work harder than getting paid to complete an assigment. Alternatively, measure your works productivity with an audience. Always make sure you are your biggest audience, take on a different form and look at your work... over and over. > Note to self: Start with the easy games, the view of the landscape and simple plot lines. Datasets shipped with sklearn with fewer steps to modeling and intentionally nuanced for illustration of concepts. Later, gradually introduce more granular templates.Create familiarity, add more steps and weigh the tradeoffs. Speaking of narratives. Lets look at the problems we try to solve like an adventure game. We learn basic controls, get started, fail alot, keep failing untill we have considerable intuition of the gameplay. We detail and catalog steps to elegance. In no time we will have an arsenal to deploy at more challenging games. This is the way. 'Just start', better an unfinished project than none at all. The feeling that, "you need to read plenty of books or understand all the jargon from a research paper", are obvious setbacks. Ultimately, you do need to read alot of books and understand research papers. But thats at a later level. Start at level 1, its still fun. This is the way. > Important: As turns out, watching others do the hardwork, is not a transferrable skill. Who knew? The most basic template Supervised and Unsupervised Learning Why machine learning? Machine learning helps solve a unique class of problems. Take for example, facial recognition, or language translation. These problems come ever close to how humans perceive the world. Such that machines are now an embeded part of human interaction, without which, we feel less of ourselves...This is only the beginning in 5 - 10 years, I woudn't want to be mere observer but a savant in the field. Machine learning has 3 main flavours, Supervised, Unsupervised and Reinforcement Learning. The main data structures are tabular data, image data, language and timeseries. In supervised learning, we know what the ground truth is. We have recorded enough outcomes given certain events and interactions. 
The outcomes are the labels or dependent variables (y), that are the end product of feature interactions of independent variables(x). A model is a recipe of features that can map feature interaction to a label with certain degree of acceptance. The degree of acceptance is accuracy in a reproducable and generalised way. Simply put, a model is a function that maps x(s) to y. In unsupervised learning, we dont have labels, we learn labels or categories from features. A simple supervised model. Supervised learning is either a classification problem or regression problem.Classification models predict the label class i.e boy, girl, plant species etc.Regression models predict labels as continous variables. i.e house price, fuel consumption etc. Step 0 Problem statement and libraries needed We are given an array of petal and sepal measurements for 3 iris species. Our role is predict the species correclty if given new taxonomic measurements of petals and sepals of the same species of flowers. This is clearly a classification problem. ###Code # lets start with a simple flower classification model with the iris data set # the data is already in sklearn. from sklearn import datasets import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns plt.style.use('ggplot') ###Output _____no_output_____ ###Markdown Step 1: Loading and preliminary data inspection ###Code #load dataset iris = datasets.load_iris() print(f' type iris: {type(iris)}') print(f'iris keys: {iris.keys()}') print(f'type iris.data: {type(iris.data)}') print(f'type iris.target: {type(iris.target)}') ###Output type iris: <class 'sklearn.utils.Bunch'> iris keys: dict_keys(['data', 'target', 'target_names', 'DESCR', 'feature_names', 'filename']) type iris.data: <class 'numpy.ndarray'> type iris.target: <class 'numpy.ndarray'> ###Markdown > Note: this is a dictionary of numpy.array values. We will have to create out pandas dataframe using the keys. ###Code #create a dataframe of features df= pd.DataFrame(iris.data, columns= iris.feature_names) df.info() df.describe() df.head() ###Output _____no_output_____ ###Markdown Step 2 Visualize the data We always want to visualize data. The reason is 2 fold. It will help you draw up conclusions fast and it will most likely be the method in which you communicate your findings. ###Code sns.heatmap(df.corr()) sns.pairplot(df) ###Output _____no_output_____ ###Markdown TODO: Add notes We aready know that our label had class of 3 flower Step 3 Choose the best model to fit the data with ###Code # apply knn from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier(n_neighbors = 6) x , y = iris.data, iris.target knn.fit(x, y) ###Output _____no_output_____ ###Markdown > Note: note we did not fit on a dataframe. We fit our model on nympy.array. ###Code x.shape, y.shape, type(x), type(y) ###Output _____no_output_____ ###Markdown Step 4 Making predictions on test data ###Code new_data = np.array([[5.6, 2.8, 3.9, 1.1], [4.0, 2.1, 1.0, 0.2], [4.3, 3.6, 1.0, 0.3], [5.7, 2.6, 3.8, 1.3]]) prediction = knn.predict(new_data) print( prediction) ###Output [1 0 0 1] ###Markdown Step 5 Perfomance Metrics What model would be compelete if did not try to measure how well it perfomed. Say, a scientist handed us new data, kneatly formated to suit our training data (how thoughtful), how convincing is pur classification model to accurately label the dataset? Model accuracy. It means that your model is verifiable, generalizable and reproducable. 
Our model, has no more data to test on, we used all our data to train. This is obviously a problem. We cant pose the same questions to our intelligent model, that we used to train it on. A 100% accuracy wouldnt be imperessive in this scenario.So what should we have different?Splitting our data into train, test and validation set. For now, we stick to train and test. ###Code from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score xtrain, xtest, ytrain, ytest = train_test_split(x, y , test_size = 0.2,\ random_state = 1, stratify = y) knn= KNeighborsClassifier(n_neighbors=8) knn.fit(xtrain, ytrain) ypred = knn.predict(xtest) print(ypred) print(f'score: {knn.score(xtest, ytest)}') print(f'accuracy: {accuracy_score(ypred, ytest)}') ###Output score: 0.9666666666666667 accuracy: 0.9666666666666667 ###Markdown > Note: knn.score calls accuracy_score under the hood, that is why they give the same result. We are at 97% model accuracy in telling apart iris flower species. It be nice if we say the predicions as flower species names as opposed to numbers? Lets decode the predictions ###Code from sklearn.preprocessing import LabelEncoder le = LabelEncoder().fit(iris.target_names) le.inverse_transform(ypred) ###Output _____no_output_____
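A natural extension of the accuracy discussion above (a sketch only, not part of the original walkthrough): since `n_neighbors` was picked somewhat arbitrarily, you can reuse the same train/test split to compare a few values before settling on one.

```python
from sklearn.neighbors import KNeighborsClassifier

# xtrain, xtest, ytrain, ytest come from the train_test_split cell above
for k in (3, 5, 8, 11):
    clf = KNeighborsClassifier(n_neighbors=k).fit(xtrain, ytrain)
    print(f"k={k}: test accuracy = {clf.score(xtest, ytest):.3f}")
```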
Dante's_text_generation.ipynb
###Markdown Load data and imports In this section, we just clone the repository and load the dataset in a DataFrame splitting the dataset by "Canti" ###Code !git clone https://github.com/FrancescoFarinola/divine-comedy-text-generation import os import pandas as pd import numpy as np import matplotlib.pyplot as plt import math import sys import time import re import random %matplotlib inline sys.path.append('/content/divine-comedy-text-generation') path = 'divine-comedy-text-generation/' filenames = ['inferno.txt', 'purgatorio.txt', 'paradiso.txt'] columns = ['Cantica', 'Canto', 'Text'] data = [] for file in filenames: filepath = path + file with open(filepath, 'r', encoding='utf8') as f: lines = f.readlines() start = True for line in lines: if 'Canto' in line: if start: row = np.empty(3, dtype=object) start = False else: row[2] = canto data.append(row) row = np.empty(3, dtype=object) head = line.split() row[0] = head[0] row[1] = head[3] canto = "" elif line != "\n": canto = canto + line row[2] = canto data.append(row) f.close() df = pd.DataFrame(data, columns=columns) df df.Text[0] ###Output _____no_output_____ ###Markdown Preprocessing ###Code import re from functools import reduce import string PUNCTUATION_RE = re.compile("[-—!?:;,.«»“”]") def clean_start(text): return text.replace(" ", "", 1) def remove_double_whitespaces(text): return text.replace(" ", "") def remove_punctuation(text): return PUNCTUATION_RE.sub("", text) def clean_newline(text): return text.replace("\n", " \n ") def change_apostrophe(text): return text.replace("’", "'") def replace_uncommon_symbols(text): """ Replace uncomoon symbols with particular accents """ text = text.replace("ä", "a") text = text.replace("é", "è") text = text.replace("ë", "è") text = text.replace("Ë", "E") text = text.replace("ï", "i") text = text.replace("Ï", "I") text = text.replace("ó", "ò") text = text.replace("ö", "o") text = text.replace("ü", "u") text = text.replace("ï", "i") return text def lower(text): return text.lower() def adjust_newline(text): return text.replace("\n ", "\n") def preprocessing(text): return reduce(lambda text, f: f(text), PREPROCESSING_PIPELINE, text) PREPROCESSING_PIPELINE = [remove_double_whitespaces, remove_punctuation, remove_double_whitespaces, change_apostrophe, replace_uncommon_symbols, clean_newline] df['Text'] = df['Text'].apply(lambda x: preprocessing(x)) df['Text'][0] ###Output _____no_output_____ ###Markdown Markov Chain text generation ###Code !pip install markovify import markovify corpus = [text for text in df.Text] corpus = reduce(lambda x,y: x+y, corpus) model = markovify.NewlineText(corpus) for i in range(3): print() for i in range(0, 3): print(model.make_short_sentence(50)) import numpy as np from matplotlib import pyplot as plt def plotWordFrequency(df): words = [line.split() for text in df.Text for line in text.split("\n") ] words = reduce(lambda x,y: x+y, words) data = sorted([(w, words.count(w)) for w in set(words)], key = lambda x:x[1], reverse=True)[:40] most_words = [x[0] for x in data] times_used = [int(x[1]) for x in data] plt.figure(figsize=(20,10)) plt.bar(x=sorted(most_words), height=times_used, color = 'grey', edgecolor = 'black', width=.5) plt.xticks(rotation=45, fontsize=18) plt.yticks(rotation=0, fontsize=18) plt.xlabel('Most Common Words:', fontsize=18) plt.ylabel('Number of Occurences:', fontsize=18) plt.show() plotWordFrequency(df) ###Output _____no_output_____ ###Markdown Character-level text generation We lower the text as character-level text generation works better w/o 
capital letters. Without lowering, the model would have double the classes as output and this may result in more memory usage and maybe also misleading predictions.We make a copy of the dataset before preprocessing, since we are going to use the same dataset later for text generation using seq2seq. Preprocessing ###Code PREPROCESSING_PIPELINE = [lower] df1 = df.copy() df1['Text'] = df1['Text'].apply(lambda x: preprocessing(x)) df1['Text'][0] ###Output _____no_output_____ ###Markdown First, we flatten the text corpus ###Code corpus = [text for text in df1.Text] corpus = reduce(lambda x,y: x+y, corpus) ###Output _____no_output_____ ###Markdown Then, we create the character listing along with the relative dictioneries that map each character to an integer and viceversa. ###Code def create_idx(df): unique_chars = set() for text in df.Text: unique_chars = list(set(unique_chars) | set(text)) unique_chars.sort() char2idx = {char[1]: char[0] for char in enumerate(unique_chars)} idx2char = {v: k for k, v in char2idx.items()} return unique_chars, char2idx, idx2char char_listing, char2idx, idx2char = create_idx(df1) print(char2idx) ###Output {'\n': 0, ' ': 1, '"': 2, "'": 3, '(': 4, ')': 5, 'a': 6, 'b': 7, 'c': 8, 'd': 9, 'e': 10, 'f': 11, 'g': 12, 'h': 13, 'i': 14, 'j': 15, 'l': 16, 'm': 17, 'n': 18, 'o': 19, 'p': 20, 'q': 21, 'r': 22, 's': 23, 't': 24, 'u': 25, 'v': 26, 'x': 27, 'y': 28, 'z': 29, 'à': 30, 'è': 31, 'ì': 32, 'ò': 33, 'ù': 34, '‘': 35} ###Markdown We transform the corpus text into an encoded corpus where each integer correspond to a character using the previously defined dictionary. ###Code def numerical_encoding(df, char2idx): """ Text to list of chars, to np.array of numerical idx """ chars_list = [char for text in df.Text for char in text] chars_list = [char2idx[char] for char in chars_list] chars_list = np.array(chars_list) return chars_list def decode_sequence(seq): decoded = [idx2char[i] for i in seq] return ''.join(decoded) encoded_corpus = numerical_encoding(df1, char2idx) ###Output _____no_output_____ ###Markdown Input preparation In the following cell:* `from_tensor_slices`: creates a Dataset whose elements are slices of the given tensors. The given tensors are sliced along their first dimension. This operation preserves the structure of the input tensors, removing the first dimension of each tensor and using it as the dataset dimension. Then apply `batch` since we have a sequence of integers representing characters which is much longer so it is much helpful if we divide this into many sequences of `SEQUENCE_LENGTH+1`. We use `drop_remainder=True` whether the last batch should be dropped in the case it has fewer than batch_size elements;* Creating batches pipeline: 1. `map` for each sequence created before with `from_tensor_slices` applies the function `split_input_target` which returns the input and output sequences for the model. i.e. We have a sequence `Nel mezzo del`: respectively the input text will be `Nel mezzo de` and the output will be `el mezzo del` 2. `shuffle` namely shuffles the sequences 3. `batch` namely creates batches of length `BATCH_SIZE` with sequences shuffled. 
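Before running the real pipeline below, a tiny sanity check of the same pattern on a toy sequence can make the input/target shift concrete (illustrative only):

```python
import tensorflow as tf

toy = tf.data.Dataset.from_tensor_slices(list(range(10))).batch(5, drop_remainder=True)
toy = toy.map(lambda chunk: (chunk[:-1], chunk[1:]))   # same split as split_input_target
for x, y in toy:
    print(x.numpy(), "->", y.numpy())
# prints: [0 1 2 3] -> [1 2 3 4] and [5 6 7 8] -> [6 7 8 9]
```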
###Code import tensorflow as tf SEQ_LENGTH = 100 BATCH_SIZE = 128 BUFFER_SIZE = 10000 example_per_epoch = len(corpus)//SEQ_LENGTH def split_input_target(chunk): input_text = chunk[:-1] target_text = chunk[1:] return input_text, target_text sequences = tf.data.Dataset.from_tensor_slices(encoded_corpus).batch(batch_size=SEQ_LENGTH+1, drop_remainder=True) dataset = sequences.map(split_input_target) for input_ex, target_ex in dataset.take(2): print('Input data: ', repr(decode_sequence(input_ex.numpy()))) print('Output data: ', repr(decode_sequence(target_ex.numpy()))) print("\n") dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True) ###Output Input data: 'nel mezzo del cammin di nostra vita \n mi ritrovai per una selva oscura \n chè la diritta via era smar' Output data: 'el mezzo del cammin di nostra vita \n mi ritrovai per una selva oscura \n chè la diritta via era smarr' Input data: 'ita \n ahi quanto a dir qual era è cosa dura \n esta selva selvaggia e aspra e forte \n che nel pensier' Output data: 'ta \n ahi quanto a dir qual era è cosa dura \n esta selva selvaggia e aspra e forte \n che nel pensier ' ###Markdown Building the model ###Code VOCAB_SIZE = len(char_listing)# The embedding dimension EMBEDDING_DIM = 300 UNITS = 500 def build_model(vocab_size, embedding_dim, units, batch_size): model = tf.keras.Sequential([ tf.keras.layers.Embedding(input_dim=vocab_size, output_dim = embedding_dim, batch_input_shape = [batch_size, None]), tf.keras.layers.GRU(units * 2, return_sequences= True, stateful=True, recurrent_initializer='glorot_uniform'), tf.keras.layers.GRU(units, return_sequences= True, stateful=True, recurrent_initializer='glorot_uniform'), tf.keras.layers.Dense(vocab_size) ]) return model model = build_model(vocab_size = VOCAB_SIZE, embedding_dim = EMBEDDING_DIM, units = UNITS, batch_size = BATCH_SIZE) model.summary() from keras.utils.vis_utils import plot_model plot_model(model, show_shapes=True, show_layer_names=True) def loss(labels, logits): return tf.keras.losses.sparse_categorical_crossentropy(labels, logits, from_logits=True) model.compile(optimizer='Adam', loss=loss) # Directory where the checkpoints will be saved checkpoint_dir = './training_checkpoints'# Name of the checkpoint files checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}") checkpoint_callback=tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix, save_weights_only=True) early_stop=tf.keras.callbacks.EarlyStopping(monitor='loss', patience=5, restore_best_weights=True) EPOCHS = 50 history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback, early_stop]) ###Output Epoch 1/50 41/41 [==============================] - 20s 205ms/step - loss: 2.9111 Epoch 2/50 41/41 [==============================] - 9s 204ms/step - loss: 2.2240 Epoch 3/50 41/41 [==============================] - 9s 206ms/step - loss: 1.9857 Epoch 4/50 41/41 [==============================] - 9s 206ms/step - loss: 1.8439 Epoch 5/50 41/41 [==============================] - 9s 208ms/step - loss: 1.7416 Epoch 6/50 41/41 [==============================] - 9s 208ms/step - loss: 1.6552 Epoch 7/50 41/41 [==============================] - 9s 207ms/step - loss: 1.5823 Epoch 8/50 41/41 [==============================] - 9s 208ms/step - loss: 1.5230 Epoch 9/50 41/41 [==============================] - 9s 208ms/step - loss: 1.4731 Epoch 10/50 41/41 [==============================] - 9s 208ms/step - loss: 1.4289 Epoch 11/50 41/41 [==============================] - 9s 208ms/step - loss: 1.3895 Epoch 12/50 
41/41 [==============================] - 9s 209ms/step - loss: 1.3556 Epoch 13/50 41/41 [==============================] - 9s 211ms/step - loss: 1.3238 Epoch 14/50 41/41 [==============================] - 9s 212ms/step - loss: 1.2933 Epoch 15/50 41/41 [==============================] - 9s 212ms/step - loss: 1.2625 Epoch 16/50 41/41 [==============================] - 9s 213ms/step - loss: 1.2309 Epoch 17/50 41/41 [==============================] - 9s 213ms/step - loss: 1.2008 Epoch 18/50 41/41 [==============================] - 9s 213ms/step - loss: 1.1667 Epoch 19/50 41/41 [==============================] - 9s 213ms/step - loss: 1.1327 Epoch 20/50 41/41 [==============================] - 9s 213ms/step - loss: 1.0970 Epoch 21/50 41/41 [==============================] - 9s 214ms/step - loss: 1.0588 Epoch 22/50 41/41 [==============================] - 10s 241ms/step - loss: 1.0163 Epoch 23/50 41/41 [==============================] - 9s 216ms/step - loss: 0.9735 Epoch 24/50 41/41 [==============================] - 10s 219ms/step - loss: 0.9298 Epoch 25/50 41/41 [==============================] - 10s 220ms/step - loss: 0.8839 Epoch 26/50 41/41 [==============================] - 10s 220ms/step - loss: 0.8335 Epoch 27/50 41/41 [==============================] - 10s 219ms/step - loss: 0.7870 Epoch 28/50 41/41 [==============================] - 10s 221ms/step - loss: 0.7395 Epoch 29/50 41/41 [==============================] - 10s 223ms/step - loss: 0.6918 Epoch 30/50 41/41 [==============================] - 10s 222ms/step - loss: 0.6456 Epoch 31/50 41/41 [==============================] - 10s 224ms/step - loss: 0.6037 Epoch 32/50 41/41 [==============================] - 10s 224ms/step - loss: 0.5648 Epoch 33/50 41/41 [==============================] - 10s 224ms/step - loss: 0.5246 Epoch 34/50 41/41 [==============================] - 10s 225ms/step - loss: 0.4923 Epoch 35/50 41/41 [==============================] - 10s 226ms/step - loss: 0.4629 Epoch 36/50 41/41 [==============================] - 10s 225ms/step - loss: 0.4362 Epoch 37/50 41/41 [==============================] - 10s 227ms/step - loss: 0.4117 Epoch 38/50 41/41 [==============================] - 11s 232ms/step - loss: 0.3930 Epoch 39/50 41/41 [==============================] - 11s 231ms/step - loss: 0.3750 Epoch 40/50 41/41 [==============================] - 10s 230ms/step - loss: 0.3618 Epoch 41/50 41/41 [==============================] - 10s 231ms/step - loss: 0.3480 Epoch 42/50 41/41 [==============================] - 11s 261ms/step - loss: 0.3374 Epoch 43/50 41/41 [==============================] - 11s 258ms/step - loss: 0.3288 Epoch 44/50 41/41 [==============================] - 10s 227ms/step - loss: 0.3195 Epoch 45/50 41/41 [==============================] - 10s 229ms/step - loss: 0.3129 Epoch 46/50 41/41 [==============================] - 12s 263ms/step - loss: 0.3041 Epoch 47/50 41/41 [==============================] - 10s 227ms/step - loss: 0.2976 Epoch 48/50 41/41 [==============================] - 10s 230ms/step - loss: 0.2944 Epoch 49/50 41/41 [==============================] - 10s 231ms/step - loss: 0.2883 Epoch 50/50 41/41 [==============================] - 10s 232ms/step - loss: 0.2851 ###Markdown Temperature Char-level Sampling We first create a new inference model with `batch_size = 1`, load the latest checkpoint weights and define the input shape.The function `char_level_temperature_sampling` takes in input * the model (`model`), * the start string to define the context (`start_string`), * the number of characters to 
generate (`chars_to_generate`) and * the `temperature` which defines how distant the predictions are far from the dataset (the smaller, the more similar to the input dataset)After encoding the input_string which defines the context, we reset the states of the model to make independent call from the fit method, and then start to generate the next characters with iterative calls to the model.Only in the first call we pass the entire input string since the model is stateful and we do not need to pass the whole sequence on recurrent calls. So as, when recurrently calling the model, we simply pass the last character ID. ###Code tf.train.latest_checkpoint(checkpoint_dir) model = build_model(VOCAB_SIZE, EMBEDDING_DIM, UNITS, batch_size=1) model.load_weights(tf.train.latest_checkpoint(checkpoint_dir)) model.build(tf.TensorShape([1, None])) def char_level_temperature_sampling(model, start_string, chars_to_generate, temperature): input_eval = [char2idx[s] for s in start_string] #Convert input_string to correspondent encoding input_eval = tf.expand_dims(input_eval, axis=0) #reshape according to model input shape (1, None) text_generated = [] #string for storing results model.reset_states() #reset states of the model to make calls to the model independent from previous calls (fit, predict, evaluate) for i in range(chars_to_generate): predictions = model(input_eval) #predict the next character predictions = tf.squeeze(predictions, 0) #squeeze results predictions = predictions / temperature #apply temperature #Draws num_samples from a categorical distribution of logits #This is applied to the last timestep (-1) to get the next char to generate predicted_id = tf.random.categorical(predictions, num_samples=1)[-1,0].numpy() # Pass the predicted character as the next input to the # model along with the previous hidden state input_eval = tf.expand_dims([predicted_id], 0) text_generated.append(idx2char[predicted_id]) return ''.join(text_generated) start_string = u"nel mezzo del cammin di nostra vita" char_level_generation = char_level_temperature_sampling(model, start_string=start_string, chars_to_generate=400, temperature=0.7) print(start_string, char_level_generation) ###Output nel mezzo del cammin di nostra vita mi piacea per mille giaci o diva quanto pon mente a la spiga ch'ogn' erba si conosce per lo seme d'alta terra già col piè morrocco io era già da quell' ombre partito e seguitava l'orme d'i sante qual sovra 'l ventre e qual sovra le spalle omai sarebbe li suoi regi ancora nati per me de l'etterno consiglio cade vertù ne l'acqua e ne la pianta rimasa di sè presso i novi voglia i ###Markdown Seq2Seq text generation In this section we will experiment other text generation techniques using a Seq2Seq model. 
We will use:* SpaCy for Italian word embeddings* SymSpellPy italian vocabulary to handle out-of-vocabulary words ###Code %%capture !pip install spacy --upgrade !python -m spacy download it_core_news_lg import spacy nlp = spacy.load("it_core_news_lg") !pip install symspellpy from symspellpy import SymSpell, Verbosity sym_spell = SymSpell(max_dictionary_edit_distance=2, prefix_length=7) dictionary_path=path+"it-100k.txt" sym_spell.load_dictionary(dictionary_path, term_index=0, count_index=1) ###Output Requirement already satisfied: symspellpy in c:\users\francesco.farinola\appdata\local\programs\python\python38\lib\site-packages (6.7.5) Requirement already satisfied: editdistpy>=0.1.3 in c:\users\francesco.farinola\appdata\local\programs\python\python38\lib\site-packages (from symspellpy) (0.1.3) ###Markdown Adjust the newline token to get a better splitting on sequences ###Code PREPROCESSING_PIPELINE = [adjust_newline] df['Text'] = df['Text'].apply(lambda x: preprocessing(x)) df['Text'][0] corpus = [text for text in df.Text] corpus = reduce(lambda x,y: x+y, corpus) tokens = nlp(corpus) sequences = [nlp(line) for text in df.Text for line in text.split(" \n")] ###Output _____no_output_____ ###Markdown Word embeddings The `spell_correction` function looks up for the most similar word using the symspellpy italian vocabulary with `max_edit_distance = 1`.This function will be called only if the strategy of the `compute_embeddings` function will be `similarity`. By doing so, when assigning word embeddings of OOV words, this function will try to spell correctly a word in order to assign the word embedding of the word spelled correctly. ###Code def spell_correction(text): results = [t if (t.isnumeric() or len(t)<=3) else sym_spell.lookup(t, Verbosity.TOP, max_edit_distance=1, include_unknown=True)[0].term for t in text.split()] return ' '.join(results) ###Output _____no_output_____ ###Markdown The `compute_embeddings` function assigns SpaCy word embeddings to in-vocabulary words and handle OOV terms in different ways:* `strategy='similarity'` and `random_oov=False`: will spell correct the oov term and try to assign the embedding of the similar word* `strategy='similarity'` and `random_oov=False`: will assign to the oov term a random vector* `strategy='random'`: will assign to all the word embeddings a random vector, including in-vocabulary terms ###Code EMBEDDING_DIM = 300 def compute_embeddings(tokens, nlp, EMBEDDING_DIM, strategy="similarity", random_oov=False): print(f"There are {len(list(set([word.text for word in tokens])))} unique tokens") embeddings = {} #Initialize embedding dict oov_words = {} #Initialize oov dict cc = 0 #counter for oov terms #Initializing special words embeddings embeddings["<PAD>"] = np.zeros((EMBEDDING_DIM,)) embeddings["<SOS>"] = np.random.uniform(-1, 1, (EMBEDDING_DIM,)) embeddings["<EOS>"] = np.random.uniform(-1, 1, (EMBEDDING_DIM,)) embeddings["\n "] = np.random.uniform(-1, 1, (EMBEDDING_DIM,)) for word in tokens: #for each word in the dataset if word.text not in embeddings: #if a unique word has not been processed yet if strategy == "similarity": if word.has_vector: #if the word has a corresponding word vector in SpaCy embeddings[word.text] = word.vector #Add the word vector to the vocabulary elif random_oov: #If random strategy for oov terms assign a random vector cc = cc+1 embeddings[word.text] = np.random.uniform(-1, 1, (EMBEDDING_DIM,)) else: cc = cc+1 similar_word = spell_correction(word.text) #Find a similar word similar_token = nlp(similar_word) #Get the 
corresponding token in SpaCy oov_words[word.text] = similar_word #Fill the oov dict with [misspelled, correclty spelled] word #If the new word has a vector we assign the vector to the original word, otherwise a random vector if similar_token.has_vector and similar_word != word.text: embeddings[word.text] = similar_token.vector else: embeddings[word.text] = np.random.uniform(-1, 1, (EMBEDDING_DIM,)) #If random strategy assign to all words a random vector if strategy == "random": embeddings[word.text] = np.random.uniform(-1, 1, (EMBEDDING_DIM,)) print(f"There are {cc} words for which we created an embedding") #Compute idx2word and word2idx dictionaries idx2word = dict([(idx,v) for idx,v in enumerate(list(embeddings.keys()))]) word2idx = {v: k for k, v in idx2word.items()} #Initialize the embedding matrix as a numpy array and fill it with all the vectors throught the embeddings dict embedding_matrix = np.zeros((len(idx2word), EMBEDDING_DIM)) for word, i in word2idx.items(): embedding_vector = embeddings[word] #Words not found in embedding index will be all-zeros. if embedding_vector is not None: embedding_matrix[i] = embedding_vector return embedding_matrix, oov_words, idx2word, word2idx embedding_matrix, oov_dict, idx2word, word2idx = compute_embeddings(tokens, nlp, EMBEDDING_DIM) ###Output There are 13528 unique tokens There are 4058 words for which we created an embedding ###Markdown Here, we check how some OOV are handled, with which term and vector they are substituted. ###Code print("Some out-of-vocabulary words with their respective word substituted:") for k, v in list(oov_dict.items())[2:10]: oovtest = nlp(v) print(f"OOV word: {k}, \t vector used: {v} \t {oovtest.vector[:5]}") print("\n") zero_vectors = np.where(np.count_nonzero(embedding_matrix, axis=1)==0)[0] print(f"There are {zero_vectors.shape[0]} words with zero vectors. Here we show some:") print(np.array([idx2word[i] for i in zero_vectors])[:50]) ###Output Some out-of-vocabulary words with their respective word substituted: OOV word: ridir, vector used: ridir [0. 0. 0. 0. 0.] OOV word: intrai, vector used: entrai [ 0.83551 0.2146 -0.78065 -1.2835 -1.451 ] OOV word: èi, vector used: èi [0. 0. 0. 0. 0.] OOV word: macolato, vector used: maculato [-0.38396 0.85145 -0.33377 -1.6229 -1.5451 ] OOV word: mpediva, vector used: impediva [ 0.76814 1.9516 -1.61 -0.35673 -0.10835] OOV word: gaetta, vector used: gaeta [ 0.88015 -1.9323 1.1862 0.2662 -0.023887] OOV word: tremesse, vector used: premesse [ 1.6344 0.78347 -0.039451 1.1204 1.7222 ] OOV word: sembiava, vector used: sembrava [ 1.3384 1.3271 -0.76626 -1.1606 -0.10625] There are 97 words with zero vectors. Here we show some: ['<PAD>' '\n' 'risonavan' 'gittansi' 'ubidente' 'salutevol' 'farmisi' 'mugghia' 'disiato' 'caggiono' 'rabbuffa' 'avaccio' 'ruffian' 'adeschi' 'menommi' 'tacerci' 'dienno' 'distorse' 'appressavan' 'temesti' 'parlasia' 'navicar' 'disfaccia' 'Danar' 'Isopo' 'venieno' 'aggroppate' 'biece' 'Ogne' 'nvidio' 'scorgessi' 'guerir' 'triunfar' 'ossame' 'discarno' 'pontan' 'zebe' 'Beccheria' 'impetrai' 'sovvegna' 'ristrinsi' 'affisar' 'Oriaco' 'sodisfar' 'inforcar' 'ncarco' 'sodisfaccia' 'superbite' 'secondamente' 'iracundia'] ###Markdown Preprocessing We inspect the sequences length to define a proper maximum sequence length for the model.We will set two maximum sequence lengths: one for the encoder and one for the decoder. The decoder will have 1 token more since its input will include the `` token and the output the `` token. 
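Concretely, the decoder input is the sentence prefixed with the `<SOS>` (start-of-sequence) token, while the decoder output is the sentence followed by the `<EOS>` (end-of-sequence) token; a toy illustration in plain Python, using the token names from the code below:

```python
seq = ["nel", "mezzo", "del", "cammin"]

decoder_input  = ["<SOS>"] + seq        # what the decoder sees at each step
decoder_output = seq + ["<EOS>"]        # what the decoder is trained to predict
# both are one token longer than the encoder input,
# hence MAX_SEQ_LEN_DECODER = MAX_SEQ_LEN_ENCODER + 1
```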
###Code def check_max_sequence_length(sequences): from collections import Counter len_seq = [len(s) for s in sequences] len_frequencies = Counter(list(len_seq)) print(f"List of tuples of (line length, # of lines):\n{sorted(len_frequencies.items())}") #Print unnecessary lines which are too short or too long that may cause misleading predictions #Also choosing a bigger MAX_SEQUENCE_LENGTH may cause more computational time. print(f"There are {len_frequencies[1]} lines with length 0 (will be excluded from dataset)") for i, value in enumerate(len_seq): if value==3 or value>=14: print(f"Length: {value}\t Line: {sequences[i]}") check_max_sequence_length(sequences) ###Output List of tuples of (line length, # of lines): [(0, 100), (3, 12), (4, 182), (5, 1046), (6, 2802), (7, 3991), (8, 3351), (9, 1784), (10, 722), (11, 254), (12, 63), (13, 20), (14, 4), (15, 2)] There are 0 lines with length 0 (will be excluded from dataset) Length: 15 Line: l'umana spezie e 'l loco e 'l tempo e 'l seme Length: 14 Line: e sta 'n su quel più che 'n su l'altro eretto Length: 14 Line: così com' ella sie' tra 'l piano e 'l monte Length: 14 Line: tra 'l quinto dì e 'l sesto ond' io mi diedi Length: 3 Line: maravigliando diventaro smorte Length: 15 Line: tra 'l Po e 'l monte e la marina e 'l Reno Length: 3 Line: lungamente mostrando paganesmo Length: 3 Line: apparecchiava grazioso loco Length: 3 Line: superillustrans claritate tua Length: 3 Line: silogizzò invidiosi veri Length: 3 Line: cotanto gloriosamente accolto Length: 3 Line: etternalmente rimanendosi una Length: 3 Line: DILIGITE IUSTITIAM primai Length: 3 Line: surgono innumerabili faville Length: 3 Line: così benedicendomi cantando Length: 14 Line: di' quel ch'ell' è di' come se ne 'nfiora Length: 3 Line: perpetualemente Osanna sberna Length: 3 Line: stupefaciensi quando Laterano ###Markdown We choose a `MAX_SEQ_LEN` of 11 as a tradeoff in order to discard few and unnecessary long sequences that would increase the model complexity and training time.Then, we clean the dataset from too long and too short sequences. ###Code MAX_SEQ_LEN_ENCODER = 11 MAX_SEQ_LEN_DECODER = MAX_SEQ_LEN_ENCODER + 1 def clean_dataset(sequences, MAX_SEQ_LEN_ENCODER): len_seq = [len(s) for s in sequences] indices = [] #Loop to get hte indexes of too long and short sequences for i, value in enumerate(len_seq): #Select sequences of lengtg of 3 or lesser and 11 or lesser if value < 4 or value>MAX_SEQ_LEN_ENCODER: indices.append(i) #Loop to delete sequences of found indexes for i in sorted(indices, reverse=True): del sequences[i] return sequences sequences = clean_dataset(sequences, MAX_SEQ_LEN_ENCODER) ###Output _____no_output_____ ###Markdown Encoding model inputs The `encode_dataset` function transforms the sequences of the dataset into encoded ones using the word2idx dictionary.In an Encoder/Decoder model designed for text generation we have two parts which receive and produce different inputs and outputs respectively.Assuming the special tokens ` = 0`, ` = 1`, ` = 2`The Encoder has the task to produce a hidden representation of the sentence: so, it will receive as input the encoded sentence padded without any additional token (i.e. 
sentence to be fed = "Nel mezzo del cammin di nostra vita", input to be fed to the encoder = [3 4 5 6 7 8 9 0 0 0 0]) and produce through an RNN the states that will gives a better context representation and will be passed to the decoder.The Decoder instead will be fed with inputs and outputs:* the input is in the form: `` + encoded sentence = [1 3 4 5 6 7 8 9 0 0 0 0]* the output is in the form: encoded sentence + `` = [3 4 5 6 7 8 9 2 0 0 0 0]`preprocess_encoder_input` encodes the input of the encoder as previously said, so we simply pad the sequence.`preprocess_decoder_input` produces the decoder input and output are previously said. ###Code from tensorflow.keras.preprocessing.sequence import pad_sequences def encode_dataset(sequences, word2idx): encoded = [] for seq in sequences: encoded_seq = [] for word in seq: encoded_seq.append(word2idx[word.text]) encoded.append(encoded_seq) return encoded def preprocess_encoder_input(input, max_len, word2idx): encoder_input_data = pad_sequences(input, maxlen=max_len, padding="post", truncating="pre") return encoder_input_data def preprocess_decoder_input(input, max_len, word2idx): sos_token = [word2idx["<SOS>"]] eos_token = [word2idx["<EOS>"]] decoder_input = [sos_token + line for line in input] decoder_output = [line + eos_token for line in input] decoder_input_data = pad_sequences(decoder_input, maxlen=max_len, padding="post", truncating="pre") decoder_output_data = pad_sequences(decoder_output, maxlen=max_len, padding="post", truncating="pre") return decoder_input_data, decoder_output_data encoded_sequences = encode_dataset(sequences, word2idx) encoder_input = preprocess_encoder_input(encoded_sequences, MAX_SEQ_LEN_ENCODER, word2idx) decoder_input, decoder_output = preprocess_decoder_input(encoded_sequences, MAX_SEQ_LEN_DECODER, word2idx) ###Output _____no_output_____ ###Markdown Simply function to check if the inputs have been encoded correctly ###Code def decode_sequence(sequence, idx2word): s = [] for i in sequence: s.append(idx2word[i]) return " ".join(s) print(f"Encoder input: {decode_sequence(encoder_input[0], idx2word)}") print(f"Decoder input: {decode_sequence(decoder_input[0], idx2word)}") print(f"Decoder output: {decode_sequence(decoder_output[0], idx2word)}") #Clear cache with garbage collector import gc gc.collect() ###Output _____no_output_____ ###Markdown Building the model In this section we build our model. We define the classes for Encoder and Decoder:* Encoder: after passing the input to the Embedding layer, we go through two stacked GRUs and pass the final state to the Decoder* Decoder: is made of two stacked GRUs whose states are initialized with the output states of the Encoder final layer. The final layer of the decoder is a Dense Layer with softmax function to get the categorical probabilities for each token to be the next one.**NB:** We cannot use Bidirectional GRU in the Decoder part since we should not know anything about the next tokens and we are predicting the probabilities of the next tokens we can work only in one direction.Remember to always return the sequences when stacking GRU layers and to pass the state to remember the context, so to not make independent calls on layers. 
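To keep track of the tensor shapes while reading the model code below, a rough sketch (batch size `B`; the exact sizes are set by the constants defined in the next cell):

```python
# encoder input : (B, MAX_SEQ_LEN_ENCODER)               padded token ids
# encoder output: (B, UNITS)                             final GRU state, handed to the decoder
# decoder input : (B, MAX_SEQ_LEN_DECODER)               <SOS> + sentence, padded
# decoder output: (B, MAX_SEQ_LEN_DECODER, VOCAB_SIZE)   softmax over the vocabulary at each step
```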
###Code import tensorflow as tf UNITS = 500 BATCH_SIZE = 32 VOCAB_SIZE = embedding_matrix.shape[0] class Encoder(tf.keras.Model): def __init__(self): super().__init__() self.embed = tf.keras.layers.Embedding(input_dim = VOCAB_SIZE, output_dim = EMBEDDING_DIM, input_length=MAX_SEQ_LEN_ENCODER, trainable=True, #we set trainable=True to train word embeddings during training mask_zero=True, #to ignore padding tokens embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix)) self.gru1 = tf.keras.layers.GRU(UNITS, return_sequences=True, return_state=True, dropout=0.2, recurrent_initializer='glorot_uniform') self.gru2 = tf.keras.layers.GRU(UNITS, return_sequences=True, return_state=True, dropout=0.2, recurrent_initializer='glorot_uniform') def call(self, x): embed = self.embed(x) out, h = self.gru1(embed) out, h = self.gru2(out, initial_state=h) return h class Decoder(tf.keras.Model): def __init__(self): super().__init__() self.embed = tf.keras.layers.Embedding(input_dim = VOCAB_SIZE, output_dim = EMBEDDING_DIM, input_length=MAX_SEQ_LEN_DECODER, trainable=True, #we set trainable=True to train word embeddings during training mask_zero=True, #to ignore padding tokens embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix)) """ Tried implementing AdditiveAttention but no success self.attention = tf.keras.layers.AdditiveAttention() self.Wc = tf.keras.layers.Dense(UNITS, activation="tanh", use_bias=False) """ self.gru1 = tf.keras.layers.GRU(UNITS, return_sequences=True, return_state=True, dropout=0.2, recurrent_initializer='glorot_uniform') self.gru2 = tf.keras.layers.GRU(UNITS, return_sequences=True, return_state=True, dropout=0.2, recurrent_initializer='glorot_uniform') #Prediction layer with softmax function to get probabilities for each token in the vocabulary self.fc = tf.keras.layers.Dense(VOCAB_SIZE, activation='softmax') def call(self, x, init_state): embed = self.embed(x) out, h = self.gru1(embed, initial_state = init_state) out, h = self.gru2(out, initial_state = h) """ context_vector, attention_weights = self.attention([out, init_state], return_attention_scores=True) # Step 4. Eqn. (3): Join the context_vector and rnn_output # [ct; ht] shape: (batch t, value_units + query_units) context_and_rnn_output = tf.keras.layers.Concatenate(axis=-1)([context_vector, out]) # Step 4. Eqn. (3): `at = tanh(Wc@[ct; ht])` attention_vector = self.Wc(context_and_rnn_output) out = self.fc(attention_vector) """ out = self.fc(out) return out, h #Initialize the Encoder and Decoder encoder_model = Encoder() decoder_model = Decoder() #Define Input shapes encoder_inputs = tf.keras.layers.Input(shape=(MAX_SEQ_LEN_ENCODER,)) decoder_inputs = tf.keras.layers.Input(shape=(MAX_SEQ_LEN_DECODER,)) #Assign variables to models enc_state = encoder_model(encoder_inputs) decoder_outputs, _ = decoder_model(decoder_inputs, enc_state) #Define the Model seq2seq = tf.keras.Model([encoder_inputs, decoder_inputs], decoder_outputs) #Plot model from keras.utils.vis_utils import plot_model plot_model(seq2seq, show_shapes=True, show_layer_names=True) BATCH_SIZE = 32 EPOCHS = 15 loss = tf.losses.SparseCategoricalCrossentropy() seq2seq.compile(optimizer="nadam", loss=loss, metrics=['accuracy']) seq2seq.fit([encoder_input, decoder_input], decoder_output, batch_size=BATCH_SIZE, epochs=EPOCHS) seq2seq.save_weights("seq2seq.h5") ###Output _____no_output_____ ###Markdown Different Sampling techniques with evaluation (BLEU score) We will experiment different sampling methods:1. Greedy search2. Top-k sampling3. 
Temperature sampling4. Beam search Greedy search ###Code from tensorflow.keras.preprocessing.sequence import pad_sequences def encode_sequence(seq): encoded_seq = [] for j in seq.split(" "): if j != "": encoded_seq.append(word2idx[j]) encoded_seq = [encoded_seq] padded_seq = pad_sequences(encoded_seq, maxlen=MAX_SEQ_LEN_ENCODER, padding="post", truncating="pre") return padded_seq def greedy_search(predictions): return np.argmax(predictions) def greedy_sampling(input_seq): state = encoder_model(input_seq) target_seq = np.array([[word2idx['<SOS>']]]) decoded_sentence = '' while True: output_tokens, state = decoder_model(target_seq, state) preds = output_tokens[0, -1, :] sampled_token_index = greedy_search(preds) sampled_char = idx2word[sampled_token_index] decoded_sentence += sampled_char decoded_sentence += " " if (sampled_char == '<EOS>' or len(decoded_sentence.split()) > 7): decoded_sentence = re.sub("<EOS>", "",decoded_sentence) return decoded_sentence target_seq = np.array([[sampled_token_index]]) return decoded_sentence input_seq = "Nel mezzo del cammin di nostra vita" seq2seq_greedy = [] for i in range (0, 5): padded_seq = encode_sequence(input_seq) decoded_sentence = greedy_sampling(padded_seq) print(decoded_sentence) seq2seq_greedy.append(decoded_sentence) input_seq = decoded_sentence ###Output dinanzi amendue cortese cortese quanto caggia dinanzi amendue cortese meco riguardando dinanzi amendue cortese meco riguardando dinanzi amendue cortese meco riguardando dinanzi amendue cortese meco riguardando ###Markdown Temperature sampling ###Code def temperature(predictions, temp): conditional_probability = np.asarray(predictions).astype("float64") conditional_probability = np.log(conditional_probability) / temp exp_preds = np.exp(conditional_probability) conditional_probability = exp_preds / np.sum(exp_preds) probs = np.random.multinomial(1, conditional_probability, 1) return np.argmax(probs) def temperature_sampling(input_seq, temp): state = encoder_model(input_seq) target_seq = np.array([[word2idx['<SOS>']]]) decoded_sentence = '' while True: output_tokens, state = decoder_model(target_seq, state) preds = output_tokens[0, -1, :] sampled_token_index = temperature(preds, temp=temp) sampled_char = idx2word[sampled_token_index] decoded_sentence += sampled_char decoded_sentence += " " if (sampled_char == '<EOS>' or len(decoded_sentence.split()) > 7): decoded_sentence = re.sub("<EOS>", "",decoded_sentence) return decoded_sentence target_seq = np.array([[sampled_token_index]]) return decoded_sentence temperatures = 1 for t in temperatures: print(f"Sampling with temperature {t}:") generated_text = [] input_seq = "Nel mezzo del cammin di nostra vita" for i in range (0, 5): padded_seq = encode_sequence(input_seq) decoded_sentence = temperature_sampling(padded_seq, t) print(decoded_sentence) generated_text.append(decoded_sentence) input_seq = decoded_sentence ###Output _____no_output_____ ###Markdown Top-k sampling ###Code def softmax(z): return np.exp(z)/sum(np.exp(z)) def top_k(predictions, k): top_k_probabilities, top_k_indices= tf.math.top_k(predictions, k=k, sorted=True) top_k_indices = np.asarray(top_k_indices).astype("int32") top_k_redistributed_probability = softmax(np.log(top_k_probabilities)) top_k_redistributed_probability = np.asarray(top_k_redistributed_probability).astype("float32") sampled_token = np.random.choice(top_k_indices, p=top_k_redistributed_probability) return sampled_token def top_k_sampling(input_seq, k): state = encoder_model(input_seq) target_seq = 
np.array([[word2idx['<SOS>']]]) decoded_sentence = '' while True: output_tokens, state = decoder_model(target_seq, state) preds = output_tokens[0, -1, :] sampled_token_index = top_k(preds, k=k) sampled_char = idx2word[sampled_token_index] decoded_sentence += sampled_char decoded_sentence += " " if (sampled_char == '<EOS>' or len(decoded_sentence.split()) > 7): decoded_sentence = re.sub("<EOS>", "",decoded_sentence) return decoded_sentence target_seq = np.array([[sampled_token_index]]) return decoded_sentence input_seq = "Nel mezzo del cammin di nostra vita" seq2seq_topk = [] for i in range (0, 5): padded_seq = encode_sequence(input_seq) decoded_sentence = top_k_sampling(padded_seq, 10) print(decoded_sentence) seq2seq_topk.append(decoded_sentence) input_seq = decoded_sentence ###Output giuso Miserere rinova Quando tu Però deduca li dinanzi Miserere Magno anzi rimirando rimirando Miserere e Bèatrice virtute piangendo Miserere Miserere quando Bèatrice Bèatrice quando disii gridando Allor tacendo tacendo tacendo indugio Non facci ###Markdown Beam search ###Code def get_candidates(target_seq, state, k): output_tokens, state = decoder_model(target_seq, state) preds = output_tokens[0, -1, :] top_k_probabilities, top_k_indices= tf.math.top_k(preds, k=k, sorted=True) top_k_indices = np.asarray(top_k_indices).astype("int32") top_k_probabilities = np.asarray(top_k_probabilities).astype("float32") return top_k_indices, top_k_probabilities def beam_search_inference(input_seq, k=3, max_words=5): state = encoder_model(input_seq) scores = [[("<SOS>", 1.0)]] target_seq = np.array([[word2idx['<SOS>']]]) for c in range (0, max_words): for i in range (len(scores[c])): k_scores = [] for seq, score in scores[c]: target_seq = np.array([[word2idx[seq.split()[-1]]]]) top_k_indices, top_k_probabilities = get_candidates(target_seq, state, k) for j in range (0, k): inner_sentence = seq + " " + idx2word[top_k_indices[j]] inner_score = score - np.log(top_k_probabilities[j]) inner_tup = (inner_sentence, inner_score) k_scores.append(inner_tup) scores.append(k_scores) final_candidates = np.array([s for c,s in scores[-1]]) max_prob_idx = np.argmax(final_candidates) final_seq = scores[-1][max_prob_idx][0] final_seq = re.sub("<SOS> ", "",final_seq) return final_seq input_seq = "Nel mezzo del cammin di nostra vita" seq2seq_beam = [] for i in range (0, 5): padded_seq = encode_sequence(input_seq) decoded_sentence = beam_search_inference(padded_seq, k=2, max_words=6) print(decoded_sentence) seq2seq_beam.append(decoded_sentence) input_seq = decoded_sentence ###Output pianto appresso amendue disio giuso dinanzi maraviglia animal maraviglia animal maraviglia animal popol popol maraviglia animal maraviglia animal maraviglia animal maraviglia animal maraviglia animal popol popol maraviglia animal maraviglia animal ###Markdown Evaluation ###Code from nltk.translate.bleu_score import sentence_bleu, corpus_bleu, SmoothingFunction def bleu(data, generated, weights=(0.25,0.25,0.25,0.25)): cc = SmoothingFunction() references = [seq.text.split() for seq in data] hypothesis = [seq.split() for seq in generated] scores = [] for i in range(len(hypothesis)): bleu_score = sentence_bleu(references, hypothesis[i], weights=weights, smoothing_function=cc.method4) scores.append(bleu_score) return sum(scores)/len(scores) import nltk from nltk.translate import meteor from nltk import word_tokenize nltk.download('wordnet') nltk.download('punkt') def text_meteor_score(sequences, generated): seq_list = [seq.text for seq in sequences] text_score = [] best = 
[] for i in generated: sequence_score = [] for j in seq_list: sequence_score.append(round(meteor(references=[word_tokenize(j)], hypothesis=word_tokenize(i)), 4)) max_value = max(sequence_score) max_index = sequence_score.index(max_value) best.append(seq_list[max_index]) text_score.append(max(sequence_score)) return sum(text_score)/len(text_score) ###Output [nltk_data] Downloading package wordnet to [nltk_data] C:\Users\francesco.farinola\AppData\Roaming\nltk_data. [nltk_data] .. [nltk_data] Package wordnet is already up-to-date! [nltk_data] Downloading package punkt to [nltk_data] C:\Users\francesco.farinola\AppData\Roaming\nltk_data. [nltk_data] .. [nltk_data] Package punkt is already up-to-date! ###Markdown Char-level temperature sampling ###Code char_generation = char_level_generation.split(" \n ")[1:-1] for i in char_generation: print(i) print(f"BLEU score: {bleu(sequences, char_generation)}") print(f"METEOR score: {text_meteor_score(sequences, char_generation)}") ###Output mi piacea per mille giaci o diva quanto pon mente a la spiga ch'ogn' erba si conosce per lo seme d'alta terra già col piè morrocco io era già da quell' ombre partito e seguitava l'orme d'i sante qual sovra 'l ventre e qual sovra le spalle omai sarebbe li suoi regi ancora nati per me de l'etterno consiglio cade vertù ne l'acqua e ne la pianta BLEU score: 0.6057628248828149 METEOR score: 0.7250777777777777 ###Markdown Seq2Seq Greedy search ###Code for i in seq2seq_greedy: print(i) print(f"BLEU score: {bleu(sequences, seq2seq_greedy)}") print(f"METEOR score: {text_meteor_score(sequences, seq2seq_greedy)}") ###Output dinanzi amendue cortese cortese quanto caggia dinanzi amendue cortese meco riguardando dinanzi amendue cortese meco riguardando dinanzi amendue cortese meco riguardando dinanzi amendue cortese meco riguardando BLEU score: 0.06585282074145157 METEOR score: 0.14666 ###Markdown Seq2Seq Temperature sampling ###Code seq2seq_temperature = [] input_seq = "Nel mezzo del cammin di nostra vita" for i in range (0, 5): padded_seq = encode_sequence(input_seq) decoded_sentence = temperature_sampling(padded_seq, 1) print(decoded_sentence) seq2seq_temperature.append(decoded_sentence) input_seq = decoded_sentence print(f"BLEU score: {bleu(sequences, seq2seq_temperature)}") print(f"METEOR score: {text_meteor_score(sequences, seq2seq_temperature)}") ###Output dinanzi amendue primavera dove dovessi avvisar mia conoscenza seco tornerai maraviglia anzi senta ma cara fidanza seco simiglianza mo centauro dimostrato questo de ferza maraviglia operare differente ladroneccio inveggiar che risponde maraviglia seguitando Da tema ria cetra avria così BLEU score: 0.08194664808265416 METEOR score: 0.21222000000000002 ###Markdown Seq2Seq Top-k sampling ###Code for i in seq2seq_topk: print(i) print(f"BLEU score: {bleu(sequences, seq2seq_topk)}") print(f"METEOR score: {text_meteor_score(sequences, seq2seq_topk)}") ###Output giuso Miserere rinova Quando tu Però deduca li dinanzi Miserere Magno anzi rimirando rimirando Miserere e Bèatrice virtute piangendo Miserere Miserere quando Bèatrice Bèatrice quando disii gridando Allor tacendo tacendo tacendo indugio Non facci BLEU score: 0.0895478915809001 METEOR score: 0.25796 ###Markdown Seq2Seq Beam search ###Code for i in seq2seq_beam: print(i) print(f"BLEU score: {bleu(sequences, seq2seq_beam)}") print(f"METEOR score: {text_meteor_score(sequences, seq2seq_beam)}") ###Output pianto appresso amendue disio giuso dinanzi maraviglia animal maraviglia animal maraviglia animal popol popol maraviglia 
animal maraviglia animal maraviglia animal maraviglia animal maraviglia animal popol popol maraviglia animal maraviglia animal BLEU score: 0.049440190457038964 METEOR score: 0.12418 ###Markdown Extra - Term frequency? ###Code """from collections import Counter def frequent_words(df): corpus = [text for text in df.Text] corpus = reduce(lambda x,y: x+y, corpus) tokens = nlp(corpus) df_list = [word.text for word in tokens] word_counts = Counter(df_list) del word_counts["\n"] most_common = word_counts.most_common(50) words = list(zip(*most_common))[0] counts = list(zip(*most_common))[1] plt.figure(figsize=(20, 10)) plt.bar(words, counts) plt.show() return word_counts word_counts = frequent_words(df)""" """def compute_tf(word_counts): n_tokens = np.sum(list(word_counts.values())) for k in word_counts: word_counts[k] = 1 + np.log10(word_counts[k]) return word_counts term_freq = compute_tf(word_counts)""" def get_tf_matrix(input): tf_matrix = np.zeros(shape=(input.shape[0], input.shape[1])) for i, sequence in enumerate(input): for j, word in enumerate(sequence): tf_matrix[i, j] = term_freq[idx2word[word]] return tf_matrix tf_matrix = get_tf_matrix(decoder_input) ###Output _____no_output_____
Research/aml-iot/onnx-deploy-yolov3.ipynb
###Markdown ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.png) YOLO Real-time Object Detection using ONNX on AzureMLThis example shows how to use the YOLO v3 model as a web service using Azure Machine Learning services and the ONNX Runtime. What is ONNXONNX is an open format for representing machine learning and deep learning models. ONNX enables open and interoperable AI by enabling data scientists and developers to use the tools of their choice without worrying about lock-in and flexibility to deploy to a variety of platforms. ONNX is developed and supported by a community of partners including Microsoft, Facebook, and Amazon. For more information, explore the [ONNX website](http://onnx.ai). YOLO DetailsYou Only Look Once (YOLO) is a state-of-the-art, real-time object detection system. For more information about YOLO, please visit the [YOLO website](https://pjreddie.com/darknet/yolo/). PrerequisitesTo make the best use of your time, make sure you have done the following:* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning* Follow the instructions in the readme file before going through the steps in this notebook ###Code # Check core SDK version number import azureml.core print("SDK version:", azureml.core.VERSION) ###Output _____no_output_____ ###Markdown Download YOLO v3 ONNX model First we download the model. This may take a few minutes. The model will be downloaded to the same folder as this notebook. ###Code import urllib.request onnx_model_url = "https://onnxzoo.blob.core.windows.net/models/opset_10/yolov3/yolov3.onnx" urllib.request.urlretrieve(onnx_model_url, filename="yolov3.onnx") ###Output _____no_output_____ ###Markdown Load Azure ML workspaceWe begin by instantiating a workspace object from the existing workspace created in the configuration notebook. ###Code from azureml.core import Workspace ws = Workspace.from_config() print(ws.name, ws.location, ws.resource_group, sep = '\n') ###Output _____no_output_____ ###Markdown Registering your model with Azure MLNow we upload the model and register it in the workspace. ###Code from azureml.core.model import Model model = Model.register(model_path = "yolov3.onnx", model_name = "yolov3", tags = {"onnx": "yolov3"}, description = "YOLOv3 from ONNX Model Zoo", workspace = ws) ###Output _____no_output_____ ###Markdown Displaying your registered modelsYou can optionally list out all the models that you have registered in this workspace. ###Code models = ws.models for name, m in models.items(): print("Name:", name,"\tVersion:", m.version, "\tDescription:", m.description, m.tags) ###Output _____no_output_____ ###Markdown Write scoring fileWe are now going to deploy our ONNX model on Azure ML using the ONNX Runtime. We begin by writing a score.py file that will be invoked by the web service call. The `init()` function is called once when the container is started so we load the model using the ONNX Runtime into a global session object. The `run()` function is called when the webservice is invoked for inferencing. After running the code below you should see a score.py file in the same folder as this notebook. 
###Code %%writefile score.py import json import time import sys import os from azureml.core.model import Model import numpy as np # we're going to use numpy to process input and output data import onnxruntime # to inference ONNX models, we use the ONNX Runtime import base64 from PIL import Image import io def init(): global session model = Model.get_model_path(model_name = 'yolov3') session = onnxruntime.InferenceSession(model) def letterbox_image(image, size): '''resize image with unchanged aspect ratio using padding''' iw, ih = image.size w, h = size scale = min(w/iw, h/ih) nw = int(iw*scale) nh = int(ih*scale) image = image.resize((nw,nh), Image.BICUBIC) new_image = Image.new('RGB', size, (128,128,128)) new_image.paste(image, ((w-nw)//2, (h-nh)//2)) return new_image def preprocess(input_data_json): # convert the JSON data into the tensor input imgb64 = json.loads(input_data_json)['data'] # Base64 decoding image_64_decode = base64.b64decode(imgb64) # Open the image img = Image.open(io.BytesIO(image_64_decode)) model_image_size = (416, 416) # Get the resized image boxed_image = letterbox_image(img, tuple(reversed(model_image_size))) # Convert image to numpy array image_data = np.array(boxed_image, dtype='float32') # Normalize image image_data /= 255. # Array has shape height x width x channel. We need to transpose it to channel x width x height image_data = np.transpose(image_data, [2, 0, 1]) # Add another dimension to make it an array of images image_data = np.expand_dims(image_data, 0) image_size = np.array([img.size[1], img.size[0]], dtype=np.float32).reshape(1, 2) return image_data, image_size def postprocess(result): #r = np.array(result) boxes = result[0] scores = result[1] indices = result[2] out_boxes, out_scores, out_classes = [], [], [] for idx_ in indices: out_classes.append(idx_[1].tolist()) out_scores.append(scores[tuple(idx_)].tolist()) idx_1 = (idx_[0], idx_[2]) out_boxes.append(boxes[idx_1].tolist()) er = {'boxes':out_boxes, 'scores':out_scores, 'classes':out_classes} return json.dumps(er) def run(input_data_json): try: start = time.time() # start timer image_data, image_size = preprocess(input_data_json) input_feeds = {} input_feeds[session.get_inputs()[0].name] = image_data input_feeds[session.get_inputs()[1].name] = image_size #input_name = session.get_inputs()[0].name # get the id of the first input of the model result = session.run([], input_feeds) end = time.time() # stop timer return {"result": postprocess(result), "time": end - start} except Exception as e: result = str(e) return {"error": result} ###Output _____no_output_____ ###Markdown Create dependencies fileCreate a YAML file that specifies which dependencies we would like to see in our container. After running the code below you should see myenv.yml in the same folder as this notebook. ###Code from azureml.core.conda_dependencies import CondaDependencies myenv = CondaDependencies.create(pip_packages=["numpy","pillow", "onnxruntime","azureml-defaults", "azureml-core"]) with open("myenv.yml","w") as f: f.write(myenv.serialize_to_string()) ###Output _____no_output_____ ###Markdown Create container image in Azure MLUse Azure ML to create the container image. This step will likely take a few minutes. ###Code from azureml.core.model import InferenceConfig, Model # Create inference configuration. This creates a docker image that contains the model. inference_config = InferenceConfig(runtime="python", entry_script="score.py", conda_file="myenv.yml") # Builds an image in ACR. 
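# Descriptive note: Model.package below bundles the registered model, the score.py
# entry script and the conda environment defined above into a Docker image and pushes
# it to the workspace's Azure Container Registry (ACR); wait_for_creation blocks
# until that image build finishes.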
# TODO: Move to 1.12.0 SDK version, and specify image name, and tag.
package = Model.package(ws, [model], inference_config)
package.wait_for_creation(show_output=True)
print("ACR:", package.get_container_registry())
print("Image:", package.location)
###Output
 _____no_output_____
###Markdown Setup Azure IoT Edge deviceFollow the [documentation](https://docs.microsoft.com/en-us/azure/iot-edge/quickstart-linux) to set up a Linux VM as an Azure IoT Edge device Deploy container to Azure IoT Edge device ###Code
from azureml.core.image import ContainerImage

acr_name = package.location.split("/")[0]
reg_name = acr_name.split(".")[0]
subscription_id = ws.subscription_id

print('{}'.format(acr_name))
print('{}'.format(subscription_id))

# TODO: Derive image_location through code.
image_location = "<Fill image URL from ACR>"

print('{}'.format(image_location))

# Fetch username, password of ACR.
from azure.mgmt.containerregistry import ContainerRegistryManagementClient
from azure.mgmt import containerregistry
client = ContainerRegistryManagementClient(ws._auth, subscription_id)
result = client.registries.list_credentials(ws.resource_group, reg_name, custom_headers=None, raw=False)
username = result.username
password = result.passwords[0].value
print(username)
print(password)
###Output
 _____no_output_____
###Markdown Create a deployment.json file using the template json. Then push the deployment json file to the IoT Hub, which will then send it to the IoT Edge device. The IoT Edge agent will then pull the Docker images and run them. ###Code
module_name = "yolov3"
with open('iotedge-yolov3-template.json') as template_file:
    contents = template_file.read()

contents = contents.replace('__MODULE_NAME', module_name)
contents = contents.replace('__REGISTRY_NAME', reg_name)
contents = contents.replace('__REGISTRY_USER_NAME', username)
contents = contents.replace('__REGISTRY_PASSWORD', password)
contents = contents.replace('__REGISTRY_IMAGE_LOCATION', image_location)

with open('./deployment.json', 'wt', encoding='utf-8') as output_file:
    output_file.write(contents)
###Output
 _____no_output_____
###Markdown Enter your IoT device id and the IoT Hub name in the command below ###Code
# Push the deployment JSON to the IoT Hub
!az iot edge set-modules --device-id <IoTdeviceid> --hub-name <IoTHubName> --content deployment.json
###Output
 _____no_output_____
###Markdown TestingBefore testing, open up inbound port 5001 on your Edge device. You can use [Azure Portal](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/nsg-quickstart-portal) for this purpose. Update the scoring URI with the edge device public IP address ###Code
import json
import requests

scoring_uri = 'http://<EdgeDeviceIPAddress>:5001/score'

# You cannot send a byte array in JSON and hence need to decode it to UTF-8
input_data = json.dumps({'data': image_64_encode.decode("utf-8")})

try:
    # Set the content type
    headers = {'Content-Type': 'application/json'}
    # Make the request and display the response
    resp = requests.post(scoring_uri, input_data, headers=headers)
    plotImageWithBBoxesAndLabels(resp.text, downloaded_imagefile)
except KeyError as e:
    print(str(e))
###Output
 _____no_output_____
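###Markdown The test cell above assumes that `image_64_encode` (and the `plotImageWithBBoxesAndLabels` helper) were defined earlier in the notebook. A minimal sketch of how `image_64_encode` might be produced from a local image file (the file name here is only an assumption):
```python
import base64

# Assumed file name, for illustration only; replace with your own test image.
with open("test_image.jpg", "rb") as image_file:
    image_64_encode = base64.b64encode(image_file.read())
```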
Assignment/Assignment1/exploring_word_vectors_김미성.ipynb
###Markdown CS224N Assignment 1: Exploring Word Vectors (25 Points)Welcome to CS224n! Before you start, make sure you read the README.txt in the same directory as this notebook. ###Code # All Import Statements Defined Here # Note: Do not add to this list. # All the dependencies you need, can be installed by running . # ---------------- import sys assert sys.version_info[0]==3 assert sys.version_info[1] >= 5 from gensim.models import KeyedVectors from gensim.test.utils import datapath import pprint import matplotlib.pyplot as plt plt.rcParams['figure.figsize'] = [10, 5] import nltk nltk.download('reuters') from nltk.corpus import reuters import numpy as np import random import scipy as sp from sklearn.decomposition import TruncatedSVD from sklearn.decomposition import PCA START_TOKEN = '<START>' END_TOKEN = '<END>' np.random.seed(0) random.seed(0) # ---------------- ###Output [nltk_data] Downloading package reuters to [nltk_data] C:\Users\MiSung\AppData\Roaming\nltk_data... ###Markdown Please Write Your SUNet ID Here: Word VectorsWord Vectors are often used as a fundamental component for downstream NLP tasks, e.g. question answering, text generation, translation, etc., so it is important to build some intuitions as to their strengths and weaknesses. Here, you will explore two types of word vectors: those derived from *co-occurrence matrices*, and those derived via *word2vec*. **Assignment Notes:** Please make sure to save the notebook as you go along. Submission Instructions are located at the bottom of the notebook.**Note on Terminology:** The terms "word vectors" and "word embeddings" are often used interchangeably. The term "embedding" refers to the fact that we are encoding aspects of a word's meaning in a lower dimensional space. As [Wikipedia](https://en.wikipedia.org/wiki/Word_embedding) states, "*conceptually it involves a mathematical embedding from a space with one dimension per word to a continuous vector space with a much lower dimension*". Part 1: Count-Based Word Vectors (10 points)Most word vector models start from the following idea:*You shall know a word by the company it keeps ([Firth, J. R. 1957:11](https://en.wikipedia.org/wiki/John_Rupert_Firth))*Many word vector implementations are driven by the idea that similar words, i.e., (near) synonyms, will be used in similar contexts. As a result, similar words will often be spoken or written along with a shared subset of words, i.e., contexts. By examining these contexts, we can try to develop embeddings for our words. With this intuition in mind, many "old school" approaches to constructing word vectors relied on word counts. Here we elaborate upon one of those strategies, *co-occurrence matrices* (for more information, see [here](http://web.stanford.edu/class/cs124/lec/vectorsemantics.video.pdf) or [here](https://medium.com/data-science-group-iitr/word-embedding-2d05d270b285)). Co-OccurrenceA co-occurrence matrix counts how often things co-occur in some environment. Given some word $w_i$ occurring in the document, we consider the *context window* surrounding $w_i$. Supposing our fixed window size is $n$, then this is the $n$ preceding and $n$ subsequent words in that document, i.e. words $w_{i-n} \dots w_{i-1}$ and $w_{i+1} \dots w_{i+n}$. 
We build a *co-occurrence matrix* $M$, which is a symmetric word-by-word matrix in which $M_{ij}$ is the number of times $w_j$ appears inside $w_i$'s window.**Example: Co-Occurrence with Fixed Window of n=1**:
Document 1: "all that glitters is not gold"
Document 2: "all is well that ends well"

| *        | START | all | that | glitters | is | not | gold | well | ends | END |
|----------|-------|-----|------|----------|----|-----|------|------|------|-----|
| START    | 0     | 2   | 0    | 0        | 0  | 0   | 0    | 0    | 0    | 0   |
| all      | 2     | 0   | 1    | 0        | 1  | 0   | 0    | 0    | 0    | 0   |
| that     | 0     | 1   | 0    | 1        | 0  | 0   | 0    | 1    | 1    | 0   |
| glitters | 0     | 0   | 1    | 0        | 1  | 0   | 0    | 0    | 0    | 0   |
| is       | 0     | 1   | 0    | 1        | 0  | 1   | 0    | 1    | 0    | 0   |
| not      | 0     | 0   | 0    | 0        | 1  | 0   | 1    | 0    | 0    | 0   |
| gold     | 0     | 0   | 0    | 0        | 0  | 1   | 0    | 0    | 0    | 1   |
| well     | 0     | 0   | 1    | 0        | 1  | 0   | 0    | 0    | 1    | 1   |
| ends     | 0     | 0   | 1    | 0        | 0  | 0   | 0    | 1    | 0    | 0   |
| END      | 0     | 0   | 0    | 0        | 0  | 0   | 1    | 1    | 0    | 0   |

**Note:** In NLP, we often add START and END tokens to represent the beginning and end of sentences, paragraphs or documents. In this case we imagine START and END tokens encapsulating each document, e.g., "START All that glitters is not gold END", and include these tokens in our co-occurrence counts.The rows (or columns) of this matrix provide one type of word vectors (those based on word-word co-occurrence), but the vectors will be large in general (linear in the number of distinct words in a corpus). Thus, our next step is to run *dimensionality reduction*. In particular, we will run *SVD (Singular Value Decomposition)*, which is a kind of generalized *PCA (Principal Components Analysis)* to select the top $k$ principal components. Here's a visualization of dimensionality reduction with SVD. In this picture our co-occurrence matrix is $A$ with $n$ rows corresponding to $n$ words. We obtain a full matrix decomposition, with the singular values ordered in the diagonal $S$ matrix, and our new, shorter length-$k$ word vectors in $U_k$.![Picture of an SVD](imgs/svd.png "SVD")This reduced-dimensionality co-occurrence representation preserves semantic relationships between words, e.g. *doctor* and *hospital* will be closer than *doctor* and *dog*. **Notes:** If you can barely remember what an eigenvalue is, here's [a slow, friendly introduction to SVD](https://davetang.org/file/Singular_Value_Decomposition_Tutorial.pdf). If you want to learn more thoroughly about PCA or SVD, feel free to check out lectures [7](https://web.stanford.edu/class/cs168/l/l7.pdf), [8](http://theory.stanford.edu/~tim/s15/l/l8.pdf), and [9](https://web.stanford.edu/class/cs168/l/l9.pdf) of CS168. These course notes provide a great high-level treatment of these general purpose algorithms. Though, for the purpose of this class, you only need to know how to extract the k-dimensional embeddings by utilizing pre-programmed implementations of these algorithms from the numpy, scipy, or sklearn python packages. In practice, it is challenging to apply full SVD to large corpora because of the memory needed to perform PCA or SVD. However, if you only want the top $k$ vector components for relatively small $k$ — known as *[Truncated SVD](https://en.wikipedia.org/wiki/Singular_value_decomposition#Truncated_SVD)* — then there are reasonably scalable techniques to compute those iteratively. Plotting Co-Occurrence Word EmbeddingsHere, we will be using the Reuters (business and financial news) corpus. If you haven't run the import cell at the top of this page, please run it now (click it and press SHIFT-RETURN). 
The corpus consists of 10,788 news documents totaling 1.3 million words. These documents span 90 categories and are split into train and test. For more details, please see https://www.nltk.org/book/ch02.html. We provide a `read_corpus` function below that pulls out only articles from the "crude" (i.e. news articles about oil, gas, etc.) category. The function also adds START and END tokens to each of the documents, and lowercases words. You do **not** have perform any other kind of pre-processing. ###Code def read_corpus(category="crude"): """ Read files from the specified Reuter's category. Params: category (string): category name Return: list of lists, with words from each of the processed files """ files = reuters.fileids(category) return [[START_TOKEN] + [w.lower() for w in list(reuters.words(f))] + [END_TOKEN] for f in files] ###Output _____no_output_____ ###Markdown Let's have a look what these documents are like…. ###Code reuters_corpus = read_corpus() pprint.pprint(reuters_corpus[:3], compact=True, width=100) ###Output [['<START>', 'japan', 'to', 'revise', 'long', '-', 'term', 'energy', 'demand', 'downwards', 'the', 'ministry', 'of', 'international', 'trade', 'and', 'industry', '(', 'miti', ')', 'will', 'revise', 'its', 'long', '-', 'term', 'energy', 'supply', '/', 'demand', 'outlook', 'by', 'august', 'to', 'meet', 'a', 'forecast', 'downtrend', 'in', 'japanese', 'energy', 'demand', ',', 'ministry', 'officials', 'said', '.', 'miti', 'is', 'expected', 'to', 'lower', 'the', 'projection', 'for', 'primary', 'energy', 'supplies', 'in', 'the', 'year', '2000', 'to', '550', 'mln', 'kilolitres', '(', 'kl', ')', 'from', '600', 'mln', ',', 'they', 'said', '.', 'the', 'decision', 'follows', 'the', 'emergence', 'of', 'structural', 'changes', 'in', 'japanese', 'industry', 'following', 'the', 'rise', 'in', 'the', 'value', 'of', 'the', 'yen', 'and', 'a', 'decline', 'in', 'domestic', 'electric', 'power', 'demand', '.', 'miti', 'is', 'planning', 'to', 'work', 'out', 'a', 'revised', 'energy', 'supply', '/', 'demand', 'outlook', 'through', 'deliberations', 'of', 'committee', 'meetings', 'of', 'the', 'agency', 'of', 'natural', 'resources', 'and', 'energy', ',', 'the', 'officials', 'said', '.', 'they', 'said', 'miti', 'will', 'also', 'review', 'the', 'breakdown', 'of', 'energy', 'supply', 'sources', ',', 'including', 'oil', ',', 'nuclear', ',', 'coal', 'and', 'natural', 'gas', '.', 'nuclear', 'energy', 'provided', 'the', 'bulk', 'of', 'japan', "'", 's', 'electric', 'power', 'in', 'the', 'fiscal', 'year', 'ended', 'march', '31', ',', 'supplying', 'an', 'estimated', '27', 'pct', 'on', 'a', 'kilowatt', '/', 'hour', 'basis', ',', 'followed', 'by', 'oil', '(', '23', 'pct', ')', 'and', 'liquefied', 'natural', 'gas', '(', '21', 'pct', '),', 'they', 'noted', '.', '<END>'], ['<START>', 'energy', '/', 'u', '.', 's', '.', 'petrochemical', 'industry', 'cheap', 'oil', 'feedstocks', ',', 'the', 'weakened', 'u', '.', 's', '.', 'dollar', 'and', 'a', 'plant', 'utilization', 'rate', 'approaching', '90', 'pct', 'will', 'propel', 'the', 'streamlined', 'u', '.', 's', '.', 'petrochemical', 'industry', 'to', 'record', 'profits', 'this', 'year', ',', 'with', 'growth', 'expected', 'through', 'at', 'least', '1990', ',', 'major', 'company', 'executives', 'predicted', '.', 'this', 'bullish', 'outlook', 'for', 'chemical', 'manufacturing', 'and', 'an', 'industrywide', 'move', 'to', 'shed', 'unrelated', 'businesses', 'has', 'prompted', 'gaf', 'corp', '&', 'lt', ';', 'gaf', '>,', 'privately', '-', 'held', 'cain', 'chemical', 'inc', 
',', 'and', 'other', 'firms', 'to', 'aggressively', 'seek', 'acquisitions', 'of', 'petrochemical', 'plants', '.', 'oil', 'companies', 'such', 'as', 'ashland', 'oil', 'inc', '&', 'lt', ';', 'ash', '>,', 'the', 'kentucky', '-', 'based', 'oil', 'refiner', 'and', 'marketer', ',', 'are', 'also', 'shopping', 'for', 'money', '-', 'making', 'petrochemical', 'businesses', 'to', 'buy', '.', '"', 'i', 'see', 'us', 'poised', 'at', 'the', 'threshold', 'of', 'a', 'golden', 'period', ',"', 'said', 'paul', 'oreffice', ',', 'chairman', 'of', 'giant', 'dow', 'chemical', 'co', '&', 'lt', ';', 'dow', '>,', 'adding', ',', '"', 'there', "'", 's', 'no', 'major', 'plant', 'capacity', 'being', 'added', 'around', 'the', 'world', 'now', '.', 'the', 'whole', 'game', 'is', 'bringing', 'out', 'new', 'products', 'and', 'improving', 'the', 'old', 'ones', '."', 'analysts', 'say', 'the', 'chemical', 'industry', "'", 's', 'biggest', 'customers', ',', 'automobile', 'manufacturers', 'and', 'home', 'builders', 'that', 'use', 'a', 'lot', 'of', 'paints', 'and', 'plastics', ',', 'are', 'expected', 'to', 'buy', 'quantities', 'this', 'year', '.', 'u', '.', 's', '.', 'petrochemical', 'plants', 'are', 'currently', 'operating', 'at', 'about', '90', 'pct', 'capacity', ',', 'reflecting', 'tighter', 'supply', 'that', 'could', 'hike', 'product', 'prices', 'by', '30', 'to', '40', 'pct', 'this', 'year', ',', 'said', 'john', 'dosher', ',', 'managing', 'director', 'of', 'pace', 'consultants', 'inc', 'of', 'houston', '.', 'demand', 'for', 'some', 'products', 'such', 'as', 'styrene', 'could', 'push', 'profit', 'margins', 'up', 'by', 'as', 'much', 'as', '300', 'pct', ',', 'he', 'said', '.', 'oreffice', ',', 'speaking', 'at', 'a', 'meeting', 'of', 'chemical', 'engineers', 'in', 'houston', ',', 'said', 'dow', 'would', 'easily', 'top', 'the', '741', 'mln', 'dlrs', 'it', 'earned', 'last', 'year', 'and', 'predicted', 'it', 'would', 'have', 'the', 'best', 'year', 'in', 'its', 'history', '.', 'in', '1985', ',', 'when', 'oil', 'prices', 'were', 'still', 'above', '25', 'dlrs', 'a', 'barrel', 'and', 'chemical', 'exports', 'were', 'adversely', 'affected', 'by', 'the', 'strong', 'u', '.', 's', '.', 'dollar', ',', 'dow', 'had', 'profits', 'of', '58', 'mln', 'dlrs', '.', '"', 'i', 'believe', 'the', 'entire', 'chemical', 'industry', 'is', 'headed', 'for', 'a', 'record', 'year', 'or', 'close', 'to', 'it', ',"', 'oreffice', 'said', '.', 'gaf', 'chairman', 'samuel', 'heyman', 'estimated', 'that', 'the', 'u', '.', 's', '.', 'chemical', 'industry', 'would', 'report', 'a', '20', 'pct', 'gain', 'in', 'profits', 'during', '1987', '.', 'last', 'year', ',', 'the', 'domestic', 'industry', 'earned', 'a', 'total', 'of', '13', 'billion', 'dlrs', ',', 'a', '54', 'pct', 'leap', 'from', '1985', '.', 'the', 'turn', 'in', 'the', 'fortunes', 'of', 'the', 'once', '-', 'sickly', 'chemical', 'industry', 'has', 'been', 'brought', 'about', 'by', 'a', 'combination', 'of', 'luck', 'and', 'planning', ',', 'said', 'pace', "'", 's', 'john', 'dosher', '.', 'dosher', 'said', 'last', 'year', "'", 's', 'fall', 'in', 'oil', 'prices', 'made', 'feedstocks', 'dramatically', 'cheaper', 'and', 'at', 'the', 'same', 'time', 'the', 'american', 'dollar', 'was', 'weakening', 'against', 'foreign', 'currencies', '.', 'that', 'helped', 'boost', 'u', '.', 's', '.', 'chemical', 'exports', '.', 'also', 'helping', 'to', 'bring', 'supply', 'and', 'demand', 'into', 'balance', 'has', 'been', 'the', 'gradual', 'market', 'absorption', 'of', 'the', 'extra', 'chemical', 'manufacturing', 'capacity', 'created', 'by', 
'middle', 'eastern', 'oil', 'producers', 'in', 'the', 'early', '1980s', '.', 'finally', ',', 'virtually', 'all', 'major', 'u', '.', 's', '.', 'chemical', 'manufacturers', 'have', 'embarked', 'on', 'an', 'extensive', 'corporate', 'restructuring', 'program', 'to', 'mothball', 'inefficient', 'plants', ',', 'trim', 'the', 'payroll', 'and', 'eliminate', 'unrelated', 'businesses', '.', 'the', 'restructuring', 'touched', 'off', 'a', 'flurry', 'of', 'friendly', 'and', 'hostile', 'takeover', 'attempts', '.', 'gaf', ',', 'which', 'made', 'an', 'unsuccessful', 'attempt', 'in', '1985', 'to', 'acquire', 'union', 'carbide', 'corp', '&', 'lt', ';', 'uk', '>,', 'recently', 'offered', 'three', 'billion', 'dlrs', 'for', 'borg', 'warner', 'corp', '&', 'lt', ';', 'bor', '>,', 'a', 'chicago', 'manufacturer', 'of', 'plastics', 'and', 'chemicals', '.', 'another', 'industry', 'powerhouse', ',', 'w', '.', 'r', '.', 'grace', '&', 'lt', ';', 'gra', '>', 'has', 'divested', 'its', 'retailing', ',', 'restaurant', 'and', 'fertilizer', 'businesses', 'to', 'raise', 'cash', 'for', 'chemical', 'acquisitions', '.', 'but', 'some', 'experts', 'worry', 'that', 'the', 'chemical', 'industry', 'may', 'be', 'headed', 'for', 'trouble', 'if', 'companies', 'continue', 'turning', 'their', 'back', 'on', 'the', 'manufacturing', 'of', 'staple', 'petrochemical', 'commodities', ',', 'such', 'as', 'ethylene', ',', 'in', 'favor', 'of', 'more', 'profitable', 'specialty', 'chemicals', 'that', 'are', 'custom', '-', 'designed', 'for', 'a', 'small', 'group', 'of', 'buyers', '.', '"', 'companies', 'like', 'dupont', '&', 'lt', ';', 'dd', '>', 'and', 'monsanto', 'co', '&', 'lt', ';', 'mtc', '>', 'spent', 'the', 'past', 'two', 'or', 'three', 'years', 'trying', 'to', 'get', 'out', 'of', 'the', 'commodity', 'chemical', 'business', 'in', 'reaction', 'to', 'how', 'badly', 'the', 'market', 'had', 'deteriorated', ',"', 'dosher', 'said', '.', '"', 'but', 'i', 'think', 'they', 'will', 'eventually', 'kill', 'the', 'margins', 'on', 'the', 'profitable', 'chemicals', 'in', 'the', 'niche', 'market', '."', 'some', 'top', 'chemical', 'executives', 'share', 'the', 'concern', '.', '"', 'the', 'challenge', 'for', 'our', 'industry', 'is', 'to', 'keep', 'from', 'getting', 'carried', 'away', 'and', 'repeating', 'past', 'mistakes', ',"', 'gaf', "'", 's', 'heyman', 'cautioned', '.', '"', 'the', 'shift', 'from', 'commodity', 'chemicals', 'may', 'be', 'ill', '-', 'advised', '.', 'specialty', 'businesses', 'do', 'not', 'stay', 'special', 'long', '."', 'houston', '-', 'based', 'cain', 'chemical', ',', 'created', 'this', 'month', 'by', 'the', 'sterling', 'investment', 'banking', 'group', ',', 'believes', 'it', 'can', 'generate', '700', 'mln', 'dlrs', 'in', 'annual', 'sales', 'by', 'bucking', 'the', 'industry', 'trend', '.', 'chairman', 'gordon', 'cain', ',', 'who', 'previously', 'led', 'a', 'leveraged', 'buyout', 'of', 'dupont', "'", 's', 'conoco', 'inc', "'", 's', 'chemical', 'business', ',', 'has', 'spent', '1', '.', '1', 'billion', 'dlrs', 'since', 'january', 'to', 'buy', 'seven', 'petrochemical', 'plants', 'along', 'the', 'texas', 'gulf', 'coast', '.', 'the', 'plants', 'produce', 'only', 'basic', 'commodity', 'petrochemicals', 'that', 'are', 'the', 'building', 'blocks', 'of', 'specialty', 'products', '.', '"', 'this', 'kind', 'of', 'commodity', 'chemical', 'business', 'will', 'never', 'be', 'a', 'glamorous', ',', 'high', '-', 'margin', 'business', ',"', 'cain', 'said', ',', 'adding', 'that', 'demand', 'is', 'expected', 'to', 'grow', 'by', 'about', 'three', 'pct', 'annually', 
'.', 'garo', 'armen', ',', 'an', 'analyst', 'with', 'dean', 'witter', 'reynolds', ',', 'said', 'chemical', 'makers', 'have', 'also', 'benefitted', 'by', 'increasing', 'demand', 'for', 'plastics', 'as', 'prices', 'become', 'more', 'competitive', 'with', 'aluminum', ',', 'wood', 'and', 'steel', 'products', '.', 'armen', 'estimated', 'the', 'upturn', 'in', 'the', 'chemical', 'business', 'could', 'last', 'as', 'long', 'as', 'four', 'or', 'five', 'years', ',', 'provided', 'the', 'u', '.', 's', '.', 'economy', 'continues', 'its', 'modest', 'rate', 'of', 'growth', '.', '<END>'], ['<START>', 'turkey', 'calls', 'for', 'dialogue', 'to', 'solve', 'dispute', 'turkey', 'said', 'today', 'its', 'disputes', 'with', 'greece', ',', 'including', 'rights', 'on', 'the', 'continental', 'shelf', 'in', 'the', 'aegean', 'sea', ',', 'should', 'be', 'solved', 'through', 'negotiations', '.', 'a', 'foreign', 'ministry', 'statement', 'said', 'the', 'latest', 'crisis', 'between', 'the', 'two', 'nato', 'members', 'stemmed', 'from', 'the', 'continental', 'shelf', 'dispute', 'and', 'an', 'agreement', 'on', 'this', 'issue', 'would', 'effect', 'the', 'security', ',', 'economy', 'and', 'other', 'rights', 'of', 'both', 'countries', '.', '"', 'as', 'the', 'issue', 'is', 'basicly', 'political', ',', 'a', 'solution', 'can', 'only', 'be', 'found', 'by', 'bilateral', 'negotiations', ',"', 'the', 'statement', 'said', '.', 'greece', 'has', 'repeatedly', 'said', 'the', 'issue', 'was', 'legal', 'and', 'could', 'be', 'solved', 'at', 'the', 'international', 'court', 'of', 'justice', '.', 'the', 'two', 'countries', 'approached', 'armed', 'confrontation', 'last', 'month', 'after', 'greece', 'announced', 'it', 'planned', 'oil', 'exploration', 'work', 'in', 'the', 'aegean', 'and', 'turkey', 'said', 'it', 'would', 'also', 'search', 'for', 'oil', '.', 'a', 'face', '-', 'off', 'was', 'averted', 'when', 'turkey', 'confined', 'its', 'research', 'to', 'territorrial', 'waters', '.', '"', 'the', 'latest', 'crises', 'created', 'an', 'historic', 'opportunity', 'to', 'solve', 'the', 'disputes', 'between', 'the', 'two', 'countries', ',"', 'the', 'foreign', 'ministry', 'statement', 'said', '.', 'turkey', "'", 's', 'ambassador', 'in', 'athens', ',', 'nazmi', 'akiman', ',', 'was', 'due', 'to', 'meet', 'prime', 'minister', 'andreas', 'papandreou', 'today', 'for', 'the', 'greek', 'reply', 'to', 'a', 'message', 'sent', 'last', 'week', 'by', 'turkish', 'prime', 'minister', 'turgut', 'ozal', '.', 'the', 'contents', 'of', 'the', 'message', 'were', 'not', 'disclosed', '.', '<END>']] ###Markdown Question 1.1: Implement `distinct_words` [code] (2 points)Write a method to work out the distinct words (word types) that occur in the corpus. You can do this with `for` loops, but it's more efficient to do it with Python list comprehensions. In particular, [this](https://coderwall.com/p/rcmaea/flatten-a-list-of-lists-in-one-line-in-python) may be useful to flatten a list of lists. If you're not familiar with Python list comprehensions in general, here's [more information](https://python-3-patterns-idioms-test.readthedocs.io/en/latest/Comprehensions.html).You may find it useful to use [Python sets](https://www.w3schools.com/python/python_sets.asp) to remove duplicate words. ###Code def distinct_words(corpus): # 중복 제거 """ Determine a list of distinct words for the corpus. 
Params: corpus (list of list of strings): corpus of documents Return: corpus_words (list of strings): list of distinct words across the corpus, sorted (using python 'sorted' function) num_corpus_words (integer): number of distinct words across the corpus """ corpus_words = [] num_corpus_words = -1 # ------------------ # Write your implementation here. corpus_words = sorted(list(set([word for words_list in corpus for word in words_list]))) num_corpus_words = len(corpus_words) #print(corpus_words) # ------------------ return corpus_words, num_corpus_words # --------------------- # Run this sanity check # Note that this not an exhaustive check for correctness. # --------------------- # Define toy corpus test_corpus = ["START All that glitters isn't gold END".split(" "), "START All's well that ends well END".split(" ")] test_corpus_words, num_corpus_words = distinct_words(test_corpus) # Correct answers ans_test_corpus_words = sorted(list(set(["START", "All", "ends", "that", "gold", "All's", "glitters", "isn't", "well", "END"]))) ans_num_corpus_words = len(ans_test_corpus_words) # Test correct number of words assert(num_corpus_words == ans_num_corpus_words), "Incorrect number of distinct words. Correct: {}. Yours: {}".format(ans_num_corpus_words, num_corpus_words) # Test correct words assert (test_corpus_words == ans_test_corpus_words), "Incorrect corpus_words.\nCorrect: {}\nYours: {}".format(str(ans_test_corpus_words), str(test_corpus_words)) # Print Success print ("-" * 80) print("Passed All Tests!") print ("-" * 80) ###Output -------------------------------------------------------------------------------- Passed All Tests! -------------------------------------------------------------------------------- ###Markdown Question 1.2: Implement `compute_co_occurrence_matrix` [code] (3 points)Write a method that constructs a co-occurrence matrix for a certain window-size $n$ (with a default of 4), considering words $n$ before and $n$ after the word in the center of the window. Here, we start to use `numpy (np)` to represent vectors, matrices, and tensors. If you're not familiar with NumPy, there's a NumPy tutorial in the second half of this cs231n [Python NumPy tutorial](http://cs231n.github.io/python-numpy-tutorial/). ###Code def compute_co_occurrence_matrix(corpus, window_size=4): # 윈도우 사이즈를 고려해서 co-occurrnect-matrix 만들기 """ Compute co-occurrence matrix for the given corpus and window_size (default of 4). Note: Each word in a document should be at the center of a window. Words near edges will have a smaller number of co-occurring words. For example, if we take the document "START All that glitters is not gold END" with window size of 4, "All" will co-occur with "START", "that", "glitters", "is", and "not". Params: corpus (list of list of strings): corpus of documents window_size (int): size of context window Return: M (numpy matrix of shape (number of corpus words, number of corpus words)): Co-occurence matrix of word counts. The ordering of the words in the rows/columns should be the same as the ordering of the words given by the distinct_words function. word2Ind (dict): dictionary that maps word to index (i.e. row/column number) for matrix M. """ words, num_words = distinct_words(corpus) M = None word2Ind = {} # ------------------ # Write your implementation here. 
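    # Plan for the implementation that follows: map every distinct word to a
    # row/column index, then slide a window of `window_size` words over each
    # document and increment the count for every (context word, center word) pair.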
M = np.zeros((num_words, num_words)) # (number of corpus words, number of corpus words) word2Ind = dict(zip(words, range(num_words))) # { 'All' : 0 , "All's" : 1, 'END':2 } 단어랑 숫자를 mapping # co-occurence matrix for sentence in corpus : for i in range(len(sentence)): # i 에는 중심 단어가 될 애들이 들어온다. left = max(i-window_size,0) right = min(i+window_size+1,len(sentence)) # +1을 해줘야 마지막 값도 포함되서 나온다. contents_word = sentence[left:i] + sentence[i+1:right] # center word 제외한 contents word center_word = sentence[i] center_idx = word2Ind[center_word] for content in contents_word : content_idx = word2Ind[content] M[content_idx,center_idx] += 1 # ------------------ return M, word2Ind # --------------------- # Run this sanity check # Note that this is not an exhaustive check for correctness. # --------------------- # Define toy corpus and get student's co-occurrence matrix test_corpus = ["START All that glitters isn't gold END".split(" "), "START All's well that ends well END".split(" ")] M_test, word2Ind_test = compute_co_occurrence_matrix(test_corpus, window_size=1) # Correct M and word2Ind M_test_ans = np.array( [[0., 0., 0., 1., 0., 0., 0., 0., 1., 0.,], [0., 0., 0., 1., 0., 0., 0., 0., 0., 1.,], [0., 0., 0., 0., 0., 0., 1., 0., 0., 1.,], [1., 1., 0., 0., 0., 0., 0., 0., 0., 0.,], [0., 0., 0., 0., 0., 0., 0., 0., 1., 1.,], [0., 0., 0., 0., 0., 0., 0., 1., 1., 0.,], [0., 0., 1., 0., 0., 0., 0., 1., 0., 0.,], [0., 0., 0., 0., 0., 1., 1., 0., 0., 0.,], [1., 0., 0., 0., 1., 1., 0., 0., 0., 1.,], [0., 1., 1., 0., 1., 0., 0., 0., 1., 0.,]] ) word2Ind_ans = {'All': 0, "All's": 1, 'END': 2, 'START': 3, 'ends': 4, 'glitters': 5, 'gold': 6, "isn't": 7, 'that': 8, 'well': 9} # Test correct word2Ind assert (word2Ind_ans == word2Ind_test), "Your word2Ind is incorrect:\nCorrect: {}\nYours: {}".format(word2Ind_ans, word2Ind_test) # Test correct M shape assert (M_test.shape == M_test_ans.shape), "M matrix has incorrect shape.\nCorrect: {}\nYours: {}".format(M_test.shape, M_test_ans.shape) # Test correct M values for w1 in word2Ind_ans.keys(): idx1 = word2Ind_ans[w1] for w2 in word2Ind_ans.keys(): idx2 = word2Ind_ans[w2] student = M_test[idx1, idx2] correct = M_test_ans[idx1, idx2] if student != correct: print("Correct M:") print(M_test_ans) print("Your M: ") print(M_test) raise AssertionError("Incorrect count at index ({}, {})=({}, {}) in matrix M. Yours has {} but should have {}.".format(idx1, idx2, w1, w2, student, correct)) # Print Success print ("-" * 80) print("Passed All Tests!") print ("-" * 80) ###Output -------------------------------------------------------------------------------- Passed All Tests! -------------------------------------------------------------------------------- ###Markdown Question 1.3: Implement `reduce_to_k_dim` [code] (1 point)Construct a method that performs dimensionality reduction on the matrix to produce k-dimensional embeddings. Use SVD to take the top k components and produce a new matrix of k-dimensional embeddings. **Note:** All of numpy, scipy, and scikit-learn (`sklearn`) provide *some* implementation of SVD, but only scipy and sklearn provide an implementation of Truncated SVD, and only sklearn provides an efficient randomized algorithm for calculating large-scale Truncated SVD. So please use [sklearn.decomposition.TruncatedSVD](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html). ###Code def reduce_to_k_dim(M, k=2): # 차원축소를 진행해라! 
""" Reduce a co-occurence count matrix of dimensionality (num_corpus_words, num_corpus_words) to a matrix of dimensionality (num_corpus_words, k) using the following SVD function from Scikit-Learn: - http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html Params: M (numpy matrix of shape (number of corpus words, number of corpus words)): co-occurence matrix of word counts k (int): embedding size of each word after dimension reduction Return: M_reduced (numpy matrix of shape (number of corpus words, k)): matrix of k-dimensioal word embeddings. In terms of the SVD from math class, this actually returns U * S """ n_iters = 10 # Use this parameter in your call to `TruncatedSVD` M_reduced = None print("Running Truncated SVD over %i words..." % (M.shape[0])) # ------------------ # Write your implementation here. svd = TruncatedSVD(n_components=k, n_iter=n_iters, random_state=42) M_reduced = svd.fit_transform(M) print(M_reduced) # ------------------ print("Done.") return M_reduced # --------------------- # Run this sanity check # Note that this not an exhaustive check for correctness # In fact we only check that your M_reduced has the right dimensions. # --------------------- # Define toy corpus and run student code test_corpus = ["START All that glitters isn't gold END".split(" "), "START All's well that ends well END".split(" ")] M_test, word2Ind_test = compute_co_occurrence_matrix(test_corpus, window_size=1) M_test_reduced = reduce_to_k_dim(M_test, k=2) # Test proper dimensions assert (M_test_reduced.shape[0] == 10), "M_reduced has {} rows; should have {}".format(M_test_reduced.shape[0], 10) assert (M_test_reduced.shape[1] == 2), "M_reduced has {} columns; should have {}".format(M_test_reduced.shape[1], 2) # Print Success print ("-" * 80) print("Passed All Tests!") print ("-" * 80) ###Output Running Truncated SVD over 10 words... [[ 7.05647176e-01 4.84057274e-01] [ 7.05647176e-01 -4.84057274e-01] [ 6.54802087e-01 -7.83221122e-01] [ 5.20200324e-01 2.32592938e-14] [ 1.02780472e+00 -1.99445434e-14] [ 6.54802087e-01 7.83221122e-01] [ 3.82258491e-01 6.56224003e-01] [ 3.82258491e-01 -6.56224003e-01] [ 1.39420808e+00 -1.06179274e+00] [ 1.39420808e+00 1.06179274e+00]] Done. -------------------------------------------------------------------------------- Passed All Tests! -------------------------------------------------------------------------------- ###Markdown Question 1.4: Implement `plot_embeddings` [code] (1 point)Here you will write a function to plot a set of 2D vectors in 2D space. For graphs, we will use Matplotlib (`plt`).For this example, you may find it useful to adapt [this code](https://www.pythonmembers.club/2018/05/08/matplotlib-scatter-plot-annotate-set-text-at-label-each-point/). In the future, a good way to make a plot is to look at [the Matplotlib gallery](https://matplotlib.org/gallery/index.html), find a plot that looks somewhat like what you want, and adapt the code they give. ###Code def plot_embeddings(M_reduced, word2Ind, words): """ Plot in a scatterplot the embeddings of the words specified in the list "words". NOTE: do not plot all the words listed in M_reduced / word2Ind. Include a label next to each point. Params: M_reduced (numpy matrix of shape (number of unique words in the corpus , k)): matrix of k-dimensioal word embeddings word2Ind (dict): dictionary that maps word to indices for matrix M words (list of strings): words whose embeddings we want to visualize """ # ------------------ # Write your implementation here. 
    for word in words:
        idx = word2Ind[word]
        embedding = M_reduced[idx]
        x = embedding[0]
        y = embedding[1]
        plt.scatter(x, y, marker='x', color='red')
        plt.text(x, y, word, fontsize=9)

    # ------------------

# ---------------------
# Run this sanity check
# Note that this is not an exhaustive check for correctness.
# The plot produced should look like the "test solution plot" depicted below.
# ---------------------

print ("-" * 80)
print ("Outputted Plot:")

M_reduced_plot_test = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1], [0, 0]])
word2Ind_plot_test = {'test1': 0, 'test2': 1, 'test3': 2, 'test4': 3, 'test5': 4}
words = ['test1', 'test2', 'test3', 'test4', 'test5']
plot_embeddings(M_reduced_plot_test, word2Ind_plot_test, words)

print ("-" * 80)
###Output
--------------------------------------------------------------------------------
Outputted Plot:
--------------------------------------------------------------------------------
###Markdown **Test Plot Solution** Question 1.5: Co-Occurrence Plot Analysis [written] (3 points)Now we will put together all the parts you have written! We will compute the co-occurrence matrix with fixed window of 4, over the Reuters "crude" corpus. Then we will use TruncatedSVD to compute 2-dimensional embeddings of each word. TruncatedSVD returns U\*S, so we normalize the returned vectors, so that all the vectors will appear around the unit circle (therefore closeness is directional closeness). **Note**: The line of code below that does the normalizing uses the NumPy concept of *broadcasting*. If you don't know about broadcasting, check out [Computation on Arrays: Broadcasting by Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/02.05-computation-on-arrays-broadcasting.html).Run the below cell to produce the plot. It'll probably take a few seconds to run. What clusters together in 2-dimensional embedding space? What doesn't cluster together that you might think should have? **Note:** "bpd" stands for "barrels per day" and is a commonly used abbreviation in crude oil topic articles. ###Code
# -----------------------------
# Run This Cell to Produce Your Plot
# ------------------------------

reuters_corpus = read_corpus()
M_co_occurrence, word2Ind_co_occurrence = compute_co_occurrence_matrix(reuters_corpus)
M_reduced_co_occurrence = reduce_to_k_dim(M_co_occurrence, k=2)

# Rescale (normalize) the rows to make them each of unit-length
M_lengths = np.linalg.norm(M_reduced_co_occurrence, axis=1)
M_normalized = M_reduced_co_occurrence / M_lengths[:, np.newaxis] # broadcasting

words = ['barrels', 'bpd', 'ecuador', 'energy', 'industry', 'kuwait', 'oil', 'output', 'petroleum', 'venezuela']
plot_embeddings(M_normalized, word2Ind_co_occurrence, words)
###Output
Running Truncated SVD over 8185 words...
[[ 7.32630060e+02 -1.16894192e+02]
 [ 1.26000427e+00 -1.61923588e-01]
 [ 2.80304332e-01  6.47334603e-02]
 ...
 [ 1.04145879e+00 -3.06320300e-01]
 [ 6.19972477e-01 -1.25537234e-01]
 [ 2.42230659e+00  2.28089719e-01]]
Done.
###Markdown Clustering observations: 1. kuwait, venezuela and ecuador (countries) cluster together. 2. bpd (barrels per day) and barrels are similar in meaning, but they are not clustered closely together. Part 2: Prediction-Based Word Vectors (15 points)As discussed in class, more recently prediction-based word vectors have come into fashion, e.g. word2vec. Here, we shall explore the embeddings produced by word2vec. Please revisit the class notes and lecture slides for more details on the word2vec algorithm.
If you're feeling adventurous, challenge yourself and try reading the [original paper](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf).Then run the following cells to load the word2vec vectors into memory. **Note**: This might take several minutes. ###Code def load_word2vec(): """ Load Word2Vec Vectors Return: wv_from_bin: All 3 million embeddings, each lengh 300 """ import gensim.downloader as api wv_from_bin = api.load("word2vec-google-news-300") vocab = list(wv_from_bin.vocab.keys()) print("Loaded vocab size %i" % len(vocab)) return wv_from_bin # ----------------------------------- # Run Cell to Load Word Vectors # Note: This may take several minutes # ----------------------------------- wv_from_bin = load_word2vec() ###Output [=================================================-] 100.0% 1662.2/1662.8MB downloaded Loaded vocab size 3000000 ###Markdown **Note: If you are receiving out of memory issues on your local machine, try closing other applications to free more memory on your device. You may want to try restarting your machine so that you can free up extra memory. Then immediately run the jupyter notebook and see if you can load the word vectors properly. If you still have problems with loading the embeddings onto your local machine after this, please follow the Piazza instructions, as how to run remotely on Stanford Farmshare machines.** Reducing dimensionality of Word2Vec Word EmbeddingsLet's directly compare the word2vec embeddings to those of the co-occurrence matrix. Run the following cells to:1. Put the 3 million word2vec vectors into a matrix M2. Run reduce_to_k_dim (your Truncated SVD function) to reduce the vectors from 300-dimensional to 2-dimensional. ###Code def get_matrix_of_vectors(wv_from_bin, required_words=['barrels', 'bpd', 'ecuador', 'energy', 'industry', 'kuwait', 'oil', 'output', 'petroleum', 'venezuela']): """ Put the word2vec vectors into a matrix M. Param: wv_from_bin: KeyedVectors object; the 3 million word2vec vectors loaded from file Return: M: numpy matrix shape (num words, 300) containing the vectors word2Ind: dictionary mapping each word to its row number in M """ import random words = list(wv_from_bin.vocab.keys()) print("Shuffling words ...") random.shuffle(words) words = words[:10000] print("Putting %i words into word2Ind and matrix M..." % len(words)) word2Ind = {} M = [] curInd = 0 for w in words: try: M.append(wv_from_bin.word_vec(w)) word2Ind[w] = curInd curInd += 1 except KeyError: continue for w in required_words: try: M.append(wv_from_bin.word_vec(w)) word2Ind[w] = curInd curInd += 1 except KeyError: continue M = np.stack(M) print("Done.") return M, word2Ind # ----------------------------------------------------------------- # Run Cell to Reduce 300-Dimensinal Word Embeddings to k Dimensions # Note: This may take several minutes # ----------------------------------------------------------------- M, word2Ind = get_matrix_of_vectors(wv_from_bin) M_reduced = reduce_to_k_dim(M, k=2) ###Output Shuffling words ... Putting 10000 words into word2Ind and matrix M... Done. Running Truncated SVD over 10010 words... [[ 0.71409386 0.19815353] [ 1.6666449 0.11395256] [ 1.0203545 0.46522883] ... [ 0.66303724 0.24188055] [ 0.71468663 -0.14353275] [ 0.49946383 0.48399702]] Done. 
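###Markdown Note that the plot in the next cell uses the raw `M_reduced` rows. If you also want directional closeness here (as in Part 1.5), an optional sketch reusing the same row normalization is:
```python
# Optional: rescale each row of M_reduced to unit length before plotting,
# mirroring the broadcasting-based normalization used for the co-occurrence plot.
M_lengths = np.linalg.norm(M_reduced, axis=1)
M_reduced_normalized = M_reduced / M_lengths[:, np.newaxis]
```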
###Markdown Question 2.1: Word2Vec Plot Analysis [written] (4 points)Run the cell below to plot the 2D word2vec embeddings for `['barrels', 'bpd', 'ecuador', 'energy', 'industry', 'kuwait', 'oil', 'output', 'petroleum', 'venezuela']`.What clusters together in 2-dimensional embedding space? What doesn't cluster together that you might think should have? How is the plot different from the one generated earlier from the co-occurrence matrix? ###Code
words = ['barrels', 'bpd', 'ecuador', 'energy', 'industry', 'kuwait', 'oil', 'output', 'petroleum', 'venezuela']
plot_embeddings(M_reduced, word2Ind, words)
###Output
 _____no_output_____
###Markdown The words are less tightly clustered than in the plot produced from the co-occurrence matrix. In particular, the country names do not cluster together here. Cosine SimilarityNow that we have word vectors, we need a way to quantify the similarity between individual words, according to these vectors. One such metric is cosine-similarity. We will be using this to find words that are "close" and "far" from one another.We can think of n-dimensional vectors as points in n-dimensional space. If we take this perspective, L1 and L2 Distances help quantify the amount of space "we must travel" to get between these two points. Another approach is to examine the angle between two vectors. From trigonometry we know that this angle can be computed from the dot product of the two vectors and their lengths. Instead of computing the actual angle, we can leave the similarity in terms of $similarity = cos(\Theta)$. Formally the [Cosine Similarity](https://en.wikipedia.org/wiki/Cosine_similarity) $s$ between two vectors $p$ and $q$ is defined as:$$s = \frac{p \cdot q}{||p|| ||q||}, \textrm{ where } s \in [-1, 1] $$ A high similarity between two vectors means a large dot product, which in turn means a small angle between the two vectors. Question 2.2: Polysemous Words (2 points) [code + written] Find a [polysemous](https://en.wikipedia.org/wiki/Polysemy) word (for example, "leaves" or "scoop") such that the top-10 most similar words (according to cosine similarity) contains related words from *both* meanings. For example, "leaves" has both "vanishes" and "stalks" in the top 10, and "scoop" has both "handed_waffle_cone" and "lowdown". You will probably need to try several polysemous words before you find one. Please state the polysemous word you discover and the multiple meanings that occur in the top 10. Why do you think many of the polysemous words you tried didn't work?**Note**: You should use the `wv_from_bin.most_similar(word)` function to get the top 10 similar words. This function ranks all other words in the vocabulary with respect to their cosine similarity to the given word. For further assistance please check the __[GenSim documentation](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.FastTextKeyedVectors.most_similar)__. ###Code
# ------------------
# Write your polysemous word exploration code here.

wv_from_bin.most_similar("leaves")

# ------------------
# ------------------
# Write your polysemous word exploration code here.

wv_from_bin.most_similar("run")

# ------------------
###Output
 _____no_output_____
###Markdown *run* can mean: to run (sprint), a delivery run, to operate a service, or to hold an event. Why weren't the other senses of the polysemous words we tried included in the top 10? Probably because those senses were used relatively rarely in the dataset. Question 2.3: Synonyms & Antonyms (2 points) [code + written] When considering Cosine Similarity, it's often more convenient to think of Cosine Distance, which is simply 1 - Cosine Similarity.Find three words (w1,w2,w3) where w1 and w2 are synonyms and w1 and w3 are antonyms, but Cosine Distance(w1,w3) < Cosine Distance(w1,w2). For example, w1="happy" is closer to w3="sad" than to w2="cheerful". 
Once you have found your example, please give a possible explanation for why this counter-intuitive result may have happened.You should use the `wv_from_bin.distance(w1, w2)` function here in order to compute the cosine distance between two words. Please see the __[GenSim documentation](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.FastTextKeyedVectors.distance)__ for further assistance. ###Code
# ------------------
# Write your synonym & antonym exploration code here.

w1 = "happy"
w2 = "cheerful"
w3 = "sad"
w1_w2_dist = wv_from_bin.distance(w1, w2)
w1_w3_dist = wv_from_bin.distance(w1, w3)

print("Synonyms {}, {} have cosine distance: {}".format(w1, w2, w1_w2_dist))
print("Antonyms {}, {} have cosine distance: {}".format(w1, w3, w1_w3_dist))

# ------------------
###Output
Synonyms happy, cheerful have cosine distance: 0.6162261962890625
Antonyms happy, sad have cosine distance: 0.46453857421875
###Markdown Cosine distance = 1 - cosine similarity, so a large cosine similarity means a small cosine distance. Here the synonyms have the larger cosine distance, i.e. the smaller cosine similarity: "happy" is closer to "sad" than to "cheerful". Why does this counter-intuitive result occur? Probably because the two words are used in very similar contexts (the same surrounding context, with only the word itself swapped). Solving Analogies with Word VectorsWord2Vec vectors have been shown to *sometimes* exhibit the ability to solve analogies. As an example, for the analogy "man : king :: woman : x", what is x?In the cell below, we show you how to use word vectors to find x. The `most_similar` function finds words that are most similar to the words in the `positive` list and most dissimilar from the words in the `negative` list. The answer to the analogy will be the word ranked most similar (largest numerical value).**Note:** Further Documentation on the `most_similar` function can be found within the __[GenSim documentation](https://radimrehurek.com/gensim/models/keyedvectors.html#gensim.models.keyedvectors.FastTextKeyedVectors.most_similar)__. ###Code
# Run this cell to answer the analogy -- man : king :: woman : x
pprint.pprint(wv_from_bin.most_similar(positive=['woman', 'king'], negative=['man']))
###Output
[('queen', 0.7118192911148071),
 ('monarch', 0.6189674139022827),
 ('princess', 0.5902431607246399),
 ('crown_prince', 0.5499460697174072),
 ('prince', 0.5377321243286133),
 ('kings', 0.5236844420433044),
 ('Queen_Consort', 0.5235945582389832),
 ('queens', 0.5181134343147278),
 ('sultan', 0.5098593235015869),
 ('monarchy', 0.5087411999702454)]
###Markdown Question 2.4: Finding Analogies [code + written] (2 Points)Find an example of analogy that holds according to these vectors (i.e. the intended word is ranked top). In your solution please state the full analogy in the form x:y :: a:b. If you believe the analogy is complicated, explain why the analogy holds in one or two sentences.**Note**: You may have to try many analogies to find one that works! ###Code
# ------------------
# Write your analogy exploration code here.

pprint.pprint(wv_from_bin.most_similar(positive=['student','teach'], negative=['teacher']))

# ------------------
###Output
[('educate', 0.535490870475769),
 ('learn', 0.5003311634063721),
 ('teaches', 0.4975493252277374),
 ('undergraduates', 0.49214357137680054),
 ('students', 0.4806554913520813),
 ('Fraternities_sororities', 0.47315847873687744),
 ('undergraduate', 0.472159206867218),
 ('taught', 0.469461053609848),
 ('undergrads', 0.466816782951355),
 ('NUS_NTU', 0.45938876271247864)]
###Markdown teacher : teach :: student : learn Question 2.5: Incorrect Analogy [code + written] (1 point)Find an example of analogy that does *not* hold according to these vectors. 
In your solution, state the intended analogy in the form x:y :: a:b, and state the (incorrect) value of b according to the word vectors. ###Code
# ------------------
# Write your incorrect analogy exploration code here.

pprint.pprint(wv_from_bin.most_similar(positive=['stomach','headache'], negative=['head']))

# ------------------
###Output
[('headaches', 0.598472535610199),
 ('stomach_ache', 0.5352030992507935),
 ('indigestion', 0.5213103890419006),
 ('heartburn', 0.5203337669372559),
 ('stomachache', 0.5200527310371399),
 ('severe_headaches_nausea', 0.4946659207344055),
 ('stomach_cramps', 0.49447107315063477),
 ('intestinal_cramps', 0.4901297688484192),
 ('sinus_congestion', 0.4889112710952759),
 ('backache', 0.4878302812576294)]
###Markdown Intended analogy: head : headache :: stomach : stomachache. According to the word vectors, however, the top-ranked value of b is "headaches", which does not fit the analogy. Question 2.6: Guided Analysis of Bias in Word Vectors [written] (1 point)It's important to be cognizant of the biases (gender, race, sexual orientation etc.) implicit to our word embeddings.Run the cell below, to examine (a) which terms are most similar to "woman" and "boss" and most dissimilar to "man", and (b) which terms are most similar to "man" and "boss" and most dissimilar to "woman". What do you find in the top 10? ###Code
# Run this cell
# Here `positive` indicates the list of words to be similar to and `negative` indicates the list of words to be
# most dissimilar from.
pprint.pprint(wv_from_bin.most_similar(positive=['woman', 'boss'], negative=['man']))
print()
pprint.pprint(wv_from_bin.most_similar(positive=['man', 'boss'], negative=['woman']))
###Output
[('bosses', 0.5522644519805908),
 ('manageress', 0.49151360988616943),
 ('exec', 0.45940813422203064),
 ('Manageress', 0.45598435401916504),
 ('receptionist', 0.4474116563796997),
 ('Jane_Danson', 0.44480544328689575),
 ('Fiz_Jennie_McAlpine', 0.44275766611099243),
 ('Coronation_Street_actress', 0.44275566935539246),
 ('supremo', 0.4409853219985962),
 ('coworker', 0.43986251950263977)]

[('supremo', 0.6097398400306702),
 ('MOTHERWELL_boss', 0.5489562153816223),
 ('CARETAKER_boss', 0.5375303626060486),
 ('Bully_Wee_boss', 0.5333974361419678),
 ('YEOVIL_Town_boss', 0.5321705341339111),
 ('head_honcho', 0.5281980037689209),
 ('manager_Stan_Ternent', 0.525971531867981),
 ('Viv_Busby', 0.5256162881851196),
 ('striker_Gabby_Agbonlahor', 0.5250812768936157),
 ('BARNSLEY_boss', 0.5238943099975586)]
###Markdown There is a gender bias in these results: the terms closest to "woman" + "boss" include roles like "manageress" and "receptionist", while those closest to "man" + "boss" are mostly (football) manager terms. Question 2.7: Independent Analysis of Bias in Word Vectors [code + written] (2 points)Use the `most_similar` function to find another case where some bias is exhibited by the vectors. Please briefly explain the example of bias that you discover. ###Code
# ------------------
# Write your bias exploration code here.
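# Approach used below: swap 'man' and 'woman' in an analogy with 'doctor' and
# compare the two ranked lists to surface occupation-related gender bias.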
pprint.pprint(wv_from_bin.most_similar(positive=['man', 'doctor'], negative=['woman'])) print() pprint.pprint(wv_from_bin.most_similar(positive=['woman','doctor'], negative=['man'])) # ------------------ ###Output [('physician', 0.6463665962219238), ('doctors', 0.5858404040336609), ('surgeon', 0.5723941326141357), ('dentist', 0.552364706993103), ('cardiologist', 0.5413815975189209), ('neurologist', 0.5271126627922058), ('neurosurgeon', 0.5249835848808289), ('urologist', 0.5247740149497986), ('Doctor', 0.5240625143051147), ('internist', 0.5183224081993103)] [('gynecologist', 0.7093892097473145), ('nurse', 0.647728681564331), ('doctors', 0.6471461057662964), ('physician', 0.64389967918396), ('pediatrician', 0.6249487996101379), ('nurse_practitioner', 0.6218312978744507), ('obstetrician', 0.6072014570236206), ('ob_gyn', 0.5986712574958801), ('midwife', 0.5927063226699829), ('dermatologist', 0.5739566683769226)]
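###Markdown One complementary way to quantify the bias exhibited above (a sketch; `similarity` is the standard gensim KeyedVectors cosine-similarity method):
```python
for occupation in ['doctor', 'nurse']:
    print(occupation,
          'man:', wv_from_bin.similarity('man', occupation),
          'woman:', wv_from_bin.similarity('woman', occupation))
```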
hackathon-2018-pessoal.ipynb
###Markdown reading the csv ###Code
df = pd.read_csv('data/forca-de-trabalho---17-04-18-csv.csv',encoding='latin1',delimiter = ';')
df.head(6)
###Output
 _____no_output_____
###Markdown let's look at a specific employee ###Code
df[df['NOME DO SERVIDOR'].str.contains("EDUARDO &")]
df.shape
df.columns
###Output
 _____no_output_____
###Markdown Total number of employees ###Code
df.shape[0]
###Output
 _____no_output_____
###Markdown Count by career ###Code
grouped = df.groupby('CARREIRA')
grouped['MATRICULA'].count().sort_values(ascending=False)
###Output
 _____no_output_____
###Markdown the 8 thousand nursing assistants from the staff-distribution dataset are not in this one ¯\\_(ツ)_/¯[staff distribution notebook](https://github.com/chris-redfield/hacksaude-2018/blob/master/hackathon-2018-distribuicao-profissionais.ipynb)Maybe they are these "Técnicos em Saúde" (health technicians)? ###Code
grouped['MATRICULA'].count().sort_values(ascending=False).plot.bar()
###Output
 _____no_output_____
###Markdown Count by status ###Code
grouped = df.groupby('STATUS')
grouped['MATRICULA'].count().sort_values(ascending=False)
grouped['MATRICULA'].count().sort_values(ascending=False).plot.bar()
###Output
 _____no_output_____
###Markdown Percentage of employees on leave ###Code
df[df['STATUS']=='3 - AFASTADO'].count()['MATRICULA'] * 100 / df.shape[0]
###Output
 _____no_output_____
###Markdown only a few, so this is not the department's problem ;p Count by assignment unit ###Code
grouped = df.groupby('UA/LOTACAO')
grouped['MATRICULA'].count().sort_values(ascending=False)
grouped['MATRICULA'].count().sort_values(ascending=False)[:10].plot.bar()
###Output
 _____no_output_____
###Markdown Count by regional unit ###Code
grouped = df.groupby('REGIONAL')
grouped['MATRICULA'].count().sort_values(ascending=False)[:15]
grouped['MATRICULA'].count().sort_values(ascending=False)[:15].plot.bar()
###Output
 _____no_output_____
###Markdown Can we assume these are hospitals? Count by assignment description ###Code
grouped_ubs = df.groupby('DESCRICAO LOTACAO')['DESCRICAO LOTACAO'].count()
grouped_ubs.sort_values(ascending=False)[:15]
grouped_ubs.sort_values(ascending=False)[:15].plot.bar()
###Output
 _____no_output_____
###Markdown the numbers do not match the staff-distribution dataset 100%, but they are close ^^¯\\_(ツ)_/¯[staff distribution notebook](https://github.com/chris-redfield/hacksaude-2018/blob/master/hackathon-2018-distribuicao-profissionais.ipynb#Quantidade-de-servidores-por-unidade) Distribution by gender ###Code
grouped = df.groupby('DESC SEXO')
grouped['MATRICULA'].count().sort_values(ascending=False)
grouped['MATRICULA'].count().sort_values(ascending=False).plot.pie(autopct='%1.1f%%')
###Output
 _____no_output_____
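###Markdown A possible next step (a sketch, using only columns already present in `df`): cross-tabulate career against status to see which careers concentrate the '3 - AFASTADO' (on leave) employees.
```python
# Cross-tab of career vs. status; normalize='index' gives per-career proportions.
pd.crosstab(df['CARREIRA'], df['STATUS'], normalize='index')
```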
site/ja/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb
###Markdown High-performance Simulation with Kubernetes This tutorial will describe how to set up high-performance simulation using a TFF runtime running on Kubernetes. The model is the same as in the previous tutorial, **High-performance simulations with TFF**. The only difference is that here we use a worker pool instead of a local executor. This tutorial refers to Google Cloud's [GKE](https://cloud.google.com/kubernetes-engine/) to create the Kubernetes cluster, but all the steps after the cluster is created can be used with any Kubernetes installation. Launch the TFF workers on GKE > **Note:** This tutorial assumes you have an existing GCP project. Create a Kubernetes cluster The following step only needs to be done once; the cluster can be reused for future workloads. Follow the GKE instructions to [create a container cluster](https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app#step_4_create_a_container_cluster). The rest of this tutorial assumes the cluster is named `tff-cluster`, but the actual name is not important. Stop following the instructions when you get to "*Step 5: Deploy your application*". Deploy the TFF worker application The commands that interact with GCP can be run [locally](https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app#option_b_use_command-line_tools_locally) or in the [Google Cloud Shell](https://cloud.google.com/shell/). We recommend the Google Cloud Shell since it does not require additional setup. 1. Run the following command to launch the Kubernetes application.```$ kubectl create deployment tff-workers --image=gcr.io/tensorflow-federated/remote-executor-service:{{version}}```1. Add a load balancer for the application.```$ kubectl expose deployment tff-workers --type=LoadBalancer --port 80 --target-port 8000```> **Note:** This exposes your deployment to the internet and is for demo purposes only. For production use, a firewall and authentication are strongly recommended. Look up the IP address of the load balancer in the Google Cloud Console. You will need it later to connect the training loop to the worker app. (Alternatively) Launch a Docker container locally```$ docker run --rm -p 8000:8000 gcr.io/tensorflow-federated/remote_executor_service:{{version}}``` Set up the TFF environment ###Code #@test {"skip": true} !pip install --upgrade tensorflow_federated ###Output - ###Markdown Define the model to train ###Code import collections import time import tensorflow as tf import tensorflow_federated as tff source, _ = tff.simulation.datasets.emnist.load_data() def map_fn(example): return collections.OrderedDict( x=tf.reshape(example['pixels'], [-1, 784]), y=example['label']) def client_data(n): ds = source.create_tf_dataset_for_client(source.client_ids[n]) return ds.repeat(10).batch(20).map(map_fn) train_data = [client_data(n) for n in range(10)] input_spec = train_data[0].element_spec def model_fn(): model = tf.keras.models.Sequential([ tf.keras.layers.Input(shape=(784,)), tf.keras.layers.Dense(units=10, kernel_initializer='zeros'), tf.keras.layers.Softmax(), ]) return tff.learning.from_keras_model( model, input_spec=input_spec, loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]) trainer = tff.learning.build_federated_averaging_process( model_fn, client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.02)) def evaluate(num_rounds=10): state = trainer.initialize() for round in range(num_rounds): t1 = time.time() state, metrics = trainer.next(state, train_data) t2 = time.time() print('Round {}: loss {}, round time {}'.format(round, metrics.loss, t2 - t1)) ###Output _____no_output_____ ###Markdown Set up the remote executors By default, TFF executes all computations locally. In this step we tell TFF to connect to the Kubernetes service we set up above. Be sure to copy the IP address of your service here. ###Code import grpc ip_address = '0.0.0.0' #@param {type:"string"} port = 80 #@param {type:"integer"} client_ex = [] for i in range(10): channel =
grpc.insecure_channel('{}:{}'.format(ip_address, port)) client_ex.append(tff.framework.RemoteExecutor(channel, rpc_mode='STREAMING')) factory = tff.framework.worker_pool_executor_factory(client_ex) context = tff.framework.ExecutionContext(factory) tff.framework.set_default_context(context) ###Output _____no_output_____ ###Markdown Run the training ###Code evaluate() ###Output Round 0: loss 4.370407581329346, round time 4.201097726821899 Round 1: loss 4.1407670974731445, round time 3.3283166885375977 Round 2: loss 3.865147590637207, round time 3.098310947418213 Round 3: loss 3.534019708633423, round time 3.1565616130828857 Round 4: loss 3.272688388824463, round time 3.175067663192749 Round 5: loss 2.935391664505005, round time 3.008434534072876 Round 6: loss 2.7399251461029053, round time 3.31435227394104 Round 7: loss 2.5054931640625, round time 3.4411356449127197 Round 8: loss 2.290508985519409, round time 3.158798933029175 Round 9: loss 2.1194536685943604, round time 3.1348156929016113
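###Markdown If the training loop hangs on the first round, the usual culprit is that the worker channels never became ready. Below is a minimal connectivity-check sketch that could be run before building the executor factory; it relies only on plain gRPC, and the `ip_address`/`port` values are the same placeholders used above.

```python
import grpc

def check_workers(ip_address, port, n_workers=10, timeout_s=10):
    """Block until each worker channel is ready, or report which ones time out."""
    for i in range(n_workers):
        channel = grpc.insecure_channel('{}:{}'.format(ip_address, port))
        try:
            grpc.channel_ready_future(channel).result(timeout=timeout_s)
            print('worker {}: channel ready'.format(i))
        except grpc.FutureTimeoutError:
            print('worker {}: not reachable within {}s'.format(i, timeout_s))

# Example usage (same placeholder address as above):
# check_workers('0.0.0.0', 80)
```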
Exemptive_Main_Project_Spring2020.ipynb
###Markdown Andreas Pappas ID: 1115201500201 Exemptive Data Mining Project for spring-2020 First and foremost we'll load the libraries needed: ###Code ######################################## ## import packages ######################################## import os import re import csv import codecs import numpy as np import pandas as pd import operator import nltk import pickle from nltk.stem.wordnet import WordNetLemmatizer from nltk import pos_tag, word_tokenize from nltk.corpus import stopwords, wordnet from sklearn.model_selection import ShuffleSplit from sklearn.model_selection import GridSearchCV from sklearn.preprocessing import MultiLabelBinarizer from sklearn.linear_model import LogisticRegression from sklearn.model_selection import RandomizedSearchCV from nltk.corpus import stopwords from nltk.stem import SnowballStemmer from string import punctuation from collections import defaultdict from sklearn.model_selection import train_test_split from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer #from keras.preprocessing.text import Tokenizer #from keras.preprocessing.sequence import pad_sequences import sys import unidecode import sklearn from sklearn.metrics import classification_report, confusion_matrix, accuracy_score from sklearn.model_selection import train_test_split as cv from sklearn import svm # Support Vector Machine from sklearn.ensemble import RandomForestClassifier # Random Forest from sklearn.naive_bayes import MultinomialNB # Naive Bayes ###Output _____no_output_____ ###Markdown Now we'll load the dataset we'll work with: ###Code path = 'data/' train_data_file = path + 'train.csv' test_data_file = path + 'impermium_verification_set.csv' eval_data_file = path + 'impermium_verification_labels.csv' # Now that we got the paths, we'll load the data into pandas dataframe: train_data = pd.read_csv(train_data_file) test_data = pd.read_csv(test_data_file) eval_data = pd.read_csv(eval_data_file) train_data.head() test_data.head() eval_data.head() ###Output _____no_output_____ ###Markdown Preprocessing & Cleaning of data *Now that we loaded our data we're going to clean them, so they're in a more readable form for our algorithms, and so we get a better precision and understanding of the data* ###Code ######################################## # Load the cleaned words ######################################## cl_path = 'cleanwords.txt' clean_word_dict = {} with open(cl_path, 'r', encoding='utf-8') as cl: for line in cl: line = line.strip('\n') typo, correct = line.split(',') clean_word_dict[typo] = correct ######################################## ## process texts in datasets ######################################## print('Processing text dataset') # Regex to remove all Non-Alpha Numeric and space special_character_removal=re.compile(r'[^?!.,:a-z\d ]',re.IGNORECASE) # regex to replace all numerics replace_numbers=re.compile(r'\d+',re.IGNORECASE) word_count_dict = defaultdict(int) def clean_text(text, remove_stopwords=True, stem_words=True, count_null_words=True, clean_wiki_tokens=True): # Clean the text, with the option to remove stopwords and to stem words. # dirty words #non-ASCII characters to their closest ASCII equivalent automatically. 
#text = unidecode.unidecode(text) text = text.lower() #lower all text text = re.sub(r"https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{2,256}\.[a-z]{2,6}\b([-a-zA-Z0-9@:%_\+.~#?&//=]*)", "", text) text = re.sub(r"(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}", "", text) text = re.sub(r"\\xa|\\xc|\\|\\xe|\\u", " ", text) # remove @mentions text = re.sub('@[A-Za-z0-9]+', '', text) if clean_wiki_tokens: # Drop the image text = re.sub(r"image:[a-zA-Z0-9]*\.jpg", " ", text) text = re.sub(r"image:[a-zA-Z0-9]*\.png", " ", text) text = re.sub(r"image:[a-zA-Z0-9]*\.gif", " ", text) text = re.sub(r"image:[a-zA-Z0-9]*\.bmp", " ", text) # Drop css text = re.sub(r"#([A-Fa-f0-9]{6}|[A-Fa-f0-9]{3})", " ",text) text = re.sub(r"\{\|[^\}]*\|\}", " ", text) # Clean templates text = re.sub(r"\[?\[user:.*\]", " ", text) text = re.sub(r"\[?\[user:.*\|", " ", text) text = re.sub(r"\[?\[wikipedia:.*\]", " ", text) text = re.sub(r"\[?\[wikipedia:.*\|", " ", text) text = re.sub(r"\[?\[special:.*\]", " ", text) text = re.sub(r"\[?\[special:.*\|", " ", text) text = re.sub(r"\[?\[category:.*\]", " ", text) text = re.sub(r"\[?\[category:.*\|", " ", text) for typo, correct in clean_word_dict.items(): text = re.sub(typo, " " + correct + " ", text) text = re.sub(r"what's", "what is ", text) text = re.sub(r"\'s", " ", text) text = re.sub(r"\'ve", " have ", text) text = re.sub(r"can't", "cannot ", text) text = re.sub(r"n't", " not ", text) text = re.sub(r"i'm", "i am ", text) text = re.sub(r"\'re", " are ", text) text = re.sub(r"\'d", " would ", text) text = re.sub(r"\'ll", " will ", text) text = re.sub(r",", " ", text) text = re.sub(r"\.", " ", text) text = re.sub(r"!", " ! ", text) text = re.sub(r"\/", " ", text) text = re.sub(r"\?", " ? ", text) text = re.sub(r"\!", " ! ", text) text = re.sub(r"\"", " ", text) text = re.sub(r"\^", " ^ ", text) text = re.sub(r"\+", " + ", text) text = re.sub(r"\-", " - ", text) text = re.sub(r"\=", " = ", text) text = re.sub(r"'", " ", text) text = re.sub(r"(\d+)(k)", r"\g<1>000", text) text = re.sub(r":", " : ", text) text = re.sub(r" e g ", " eg ", text) text = re.sub(r" b g ", " bg ", text) text = re.sub(r" u s ", " american ", text) text = re.sub(r"\0s", "0", text) text = re.sub(r" 9 11 ", "911", text) text = re.sub(r"e - mail", "email", text) text = re.sub(r"j k", "jk", text) text = re.sub(r"\s{2,}", " ", text) text = replace_numbers.sub(' ', text) #text = special_character_removal.sub('',text) if count_null_words: text = text.split() for t in text: word_count_dict[t] += 1 text = " ".join(text) # Optionally, shorten words to their stems if stem_words: text = text.split() stemmer = SnowballStemmer('english') stemmed_words = [stemmer.stem(word) for word in text] text = " ".join(stemmed_words) return (text) list_sentences_train = train_data["Comment"].fillna("noComment").values list_sentences_test = test_data["Comment"].fillna("noComment").values list_sentences_eval = eval_data["Comment"].fillna("noComment").values train_comments = [clean_text(text) for text in list_sentences_train] test_comments = [clean_text(text) for text in list_sentences_test] eval_comments = [clean_text(text) for text in list_sentences_eval] print("Cleaned.") ###Output Processing text dataset Cleaned. 
###Markdown **Now that we cleaned our data, we'll create new files to seperate uncleaned data from cleaned data** ###Code train_data['Comment'] = train_comments test_data['Comment'] = test_comments eval_data['Comment'] = eval_comments #save the cleaned data train_data.to_csv('data/cleaned_train.csv', index = False) test_data.to_csv('data/cleaned_test.csv', index = False) eval_data.to_csv('data/cleaned_eval.csv', index = False) #load the cleaned data into pandas dataframe: cl_train_df = pd.read_csv('data/cleaned_train.csv') cl_test_df = pd.read_csv('data/cleaned_test.csv') cl_eval_df = pd.read_csv('data/cleaned_eval.csv') cl_train_df.head() cl_test_df.head() cl_eval_df.head() ###Output _____no_output_____ ###Markdown **So as we can see the Comments are perfectly cleaned now, compared to the firstly given csv's** ###Code X_train, X_test, y_train, y_test = train_test_split(cl_train_df['Comment'], cl_train_df['Insult'], test_size=0.20, random_state=8) ###Output _____no_output_____ ###Markdown Classification with the classic NaiveBayes Algorithm Comments to word vectors using CountVectorizer ###Code CountVectorizer().get_params() # Parameter election ngram_range = (1,1) #we will use bigrams later on the improvement part min_df = 10 max_df = 1. max_features = 222 bow = CountVectorizer(encoding='utf-8', ngram_range=ngram_range, stop_words=None, max_df=max_df, min_df=min_df, max_features=max_features) bow_train = bow.fit_transform(X_train.astype('U')).toarray() print(bow_train.shape) bow_test = bow.fit_transform(X_test.astype('U')).toarray() print(bow_test.shape) ###Output (3157, 222) (790, 222) ###Markdown Basic NaiveBayes So at this point we will not tune any hyperparameter and we will pass the word vectors produced by the CountVectorizer as asked in the project definition. **Later on we will improve the NaiveBayes and compare the results for each improvement** ###Code mnbc = MultinomialNB(alpha=0) #alpha = 0 means we will not use laplace smoothing in this step mnbc mnbc.fit(bow_train, Y_train) mnbc_pred = mnbc.predict(bow_test) # Training accuracy print("The training accuracy is: ") print(accuracy_score(Y_train, mnbc.predict(bow_train))) # Test accuracy print("The test accuracy is: ") print(accuracy_score(Y_test, mnbc_pred)) # Classification report print("Classification report") print(classification_report(Y_test,mnbc_pred)) d = { 'Model': 'Basic Naïve Bayes', 'Training Set Accuracy': accuracy_score(Y_train, mnbc.predict(bow_train)), 'Test Set Accuracy': accuracy_score(Y_test, mnbc_pred) } df_models_mnbc = pd.DataFrame(d, index=[0]) df_models_mnbc with open('df_models_mnbc.pickle', 'wb') as output: pickle.dump(df_models_mnbc, output) ###Output _____no_output_____ ###Markdown **Now that we got the accuracy and the F1-Score of the basic NaiveBayes, we will improve it by doing lemmatization, removing stop words, using bigrams, and using laplace Smoothing to check if we're going to get any better results!** 1. 
Lemmatization ###Code # Saving the lemmatizer into an object wordnet_lemmatizer = WordNetLemmatizer() # IN order to lemmatize, we have to iterate through every word: nrows = len(train_data) lemmatized_text_list = [] for row in range(0, nrows): # Create an empty list containing lemmatized words lemmatized_list = [] # Save the text and its words into an object text = train_data.loc[row]['Comment'] text_words = text.split(" ") # Iterate through every word to lemmatize for word in text_words: lemmatized_list.append(wordnet_lemmatizer.lemmatize(word, pos="v")) # Join the list lemmatized_text = " ".join(lemmatized_list) # Append to the list containing the texts lemmatized_text_list.append(lemmatized_text) ###Output _____no_output_____ ###Markdown 2. Stop Words ###Code # Downloading the stop words list nltk.download('stopwords') # Loading the stop words in english stop_words = list(stopwords.words('english')) stop_words[0:10] #To remove the stop words, we'll handle a regular expression only detecting whole words, as seen in the following example: example = "me eating a meal" word = "me" # The regular expression is: regex = r"\b" + word + r"\b" # we need to build it like that to work properly re.sub(regex, "StopWord", example) # We can now loop through all the stop words: for stop_word in stop_words: regex_stopword = r"\b" + stop_word + r"\b" train_data['Comment'].str.replace(regex_stopword, '') train_data.head() ###Output _____no_output_____ ###Markdown **Now that we have done lemmatization and have removed the stopwords, we'll tune in NaiveBayes again with the newly cleaned data, using bigrams this time and applying Laplace Smoothing by setting alpha = 0.01** ###Code x_train, x_test, labels_train, labels_test = train_test_split(train_data['Comment'], train_data['Insult'], test_size=0.20, random_state=8) train_data.head() # Parameter election ngram_range2 = (1,2) #using bigrams this time! min_df2 = 10 max_df2 = 1. 
max_features2 = 222 bow2 = CountVectorizer(encoding='utf-8', ngram_range=ngram_range2, stop_words=stop_words, max_df=max_df2, min_df=min_df2, max_features=max_features2) bow_train2 = bow2.fit_transform(x_train.astype('U')).toarray() print(bow_train2.shape) bow_test2 = bow2.fit_transform(x_test.astype('U')).toarray() print(bow_test2.shape) ###Output (3157, 222) (790, 222) ###Markdown Improved NaiveBayes ###Code mnbc2 = MultinomialNB(alpha=0.01) # using Laplace smoothing with alpha = 0.01 mnbc2 mnbc2.fit(bow_train2, labels_train) mnbc_pred2 = mnbc2.predict(bow_test2) # Training accuracy print("The training accuracy is: ") print(accuracy_score(labels_train, mnbc2.predict(bow_train2))) # Test accuracy print("The test accuracy is: ") print(accuracy_score(labels_test, mnbc_pred2)) # Classification report print("Classification report") print(classification_report(labels_test,mnbc_pred2)) d = { 'Model': 'Improved Naïve Bayes', 'Training Set Accuracy': accuracy_score(labels_train, mnbc2.predict(bow_train2)), 'Test Set Accuracy': accuracy_score(labels_test, mnbc_pred2) } df_models_mnbc2 = pd.DataFrame(d, index=[0]) df_models_mnbc2 with open('df_models_mnbc2.pickle', 'wb') as output: pickle.dump(df_models_mnbc2, output) path_pickles = "/home/andrewpap22/Desktop/dataMining_MainProject/" list_pickles = [ "df_models_mnbc.pickle", "df_models_mnbc2.pickle" ] df_summary = pd.DataFrame() for pickle_ in list_pickles: path = path_pickles + pickle_ with open(path, 'rb') as data: df = pickle.load(data) df_summary = df_summary.append(df) df_summary = df_summary.reset_index().drop('index', axis=1) df_summary ###Output _____no_output_____ ###Markdown **So as we can see, after performing lemmatization, removing stopwords, using bigrams and applying Laplace smoothing, the improved Naive Bayes shows a small but real improvement!** Part - Of - Speech ###Code def get_wordnet_pos(word): """ Map POS tag to first character lemmatize() accepts. """ tag = pos_tag([word])[0][1][0].upper() tag_dict = {"J": wordnet.ADJ, "N": wordnet.NOUN, "V": wordnet.VERB, "R": wordnet.ADV} return tag_dict.get(tag, wordnet.NOUN) ###Output _____no_output_____ ###Markdown Now we will provide the correct 'part-of-speech' tag as the second argument to lemmatize(). That way our Comment column of the dataframe will have the POS tags (features) of the whole text. ###Code def lemmatize(tokens): """ Lemmatize all words in given list of tokens. """ lemmatizer = WordNetLemmatizer() lems = [lemmatizer.lemmatize(token, get_wordnet_pos(token)) for token in tokens] return lems x_train = x_train.apply(lambda x: lemmatize(x)) x_train.head() def dummy(doc): """ Dummy tokenizer to use when data are already tokenized. """ return doc def tf_idf(series): """ Tf-Idf vectorization of Comments. Return a series of the vectors.
""" comment_list = series.tolist() tfidf_vectorizer = TfidfVectorizer( tokenizer=dummy, preprocessor=dummy, max_features=222) matr = tfidf_vectorizer.fit_transform(comment_list) ser = pd.Series(matr.toarray().tolist()) # return series of vectors for Comments return ser tfidf_train_pos = tf_idf(x_train) tfidf_test_pos = tf_idf(x_test) tfidf_train_pos.head() print(tfidf_train_pos.shape) print(tfidf_test_pos.shape) tfidf_test_pos.head() ###Output _____no_output_____ ###Markdown TF - IDF **This one will not be in use, as we have made above tf-idf representation given the pos-tags of the text data, so we'll use them combined on our models below as we won't get any better improvement testing tf-idf features alone!** But since the project definition needs the code implementation of both pos and tf-df seperataly, i'm providing my code of the 2nd project on tf-idf ###Code # Parameter election ngram_range3 = (1,2) min_df3 = 10 max_df3 = 1. max_features3 = 222 tfidf = TfidfVectorizer(encoding='utf-8', ngram_range=ngram_range3, stop_words=stop_words, lowercase=True, max_df=max_df3, min_df=min_df3, max_features=max_features3, norm='l2', sublinear_tf=True) tf_idf_train = tfidf.fit_transform(x_train.astype('U')).toarray() print(tf_idf_train.shape) tf_idf_test = tfidf.transform(x_test.astype('U')).toarray() print(tf_idf_test.shape) ###Output (3157, 222) (790, 222) ###Markdown **Now that we got our POS tags and our TF-IDF representation, we'll try them on SVM & Random Decision Forest** 1. SVM ###Code # Made them as lists to prevent the error: setting an array element with a sequence tfidf_trainpos = list(tfidf_train_pos) tfidf_testpos = list(tfidf_test_pos) Labels_train = list(labels_train) Labels_test = list(labels_test) C = [.0001, .001, .01, .1] degree = [3, 4, 5] gamma = [1, 10, 100] probability = [True] param_grid = [ {'C': C, 'kernel':['linear'], 'probability':probability}, {'C': C, 'kernel':['poly'], 'degree':degree, 'probability':probability}, {'C': C, 'kernel':['rbf'], 'gamma':gamma, 'probability':probability} ] # Create a base model svc = svm.SVC(random_state=8) # Manually create the splits in CV in order to be able to fix a random_state (GridSearchCV doesn't have that argument) cv_sets = ShuffleSplit(n_splits = 3, test_size = .22, random_state = 8) # Instantiate the grid search model grid_search = GridSearchCV(estimator=svc, param_grid=param_grid, scoring='accuracy', cv=cv_sets, verbose=1) # Fit the grid search to the data grid_search.fit(tfidf_trainpos, Labels_train) # best hyperparameters: print("The best hyperparameters from Grid Search are:") print(grid_search.best_params_) print("") print("The mean accuracy of a model with these hyperparameters is:") print(grid_search.best_score_) #saving the model as best_svc: best_svc = grid_search.best_estimator_ best_svc best_svc.fit(tfidf_trainpos, Labels_train) svc_pred = best_svc.predict(tfidf_testpos) # Training accuracy print("The training accuracy is: ") print(accuracy_score(Labels_train, best_svc.predict(tfidf_trainpos))) # Test accuracy print("The test accuracy is: ") print(accuracy_score(Labels_test, svc_pred)) # Classification report print("Classification report") print(classification_report(Labels_test,svc_pred)) # We'll create a dataset with a model summary to compare models: d = { 'Model': 'SVM', 'Training Set Accuracy': accuracy_score(Labels_train, best_svc.predict(tfidf_trainpos)), 'Test Set Accuracy': accuracy_score(Labels_test, svc_pred) } df_models_svc = pd.DataFrame(d, index=[0]) df_models_svc with open('df_models_svc.pickle', 'wb') 
as output: pickle.dump(df_models_svc, output) ###Output _____no_output_____ ###Markdown 2. Random Forest ###Code # Hyperparameters for Random Forest: rf_0 = RandomForestClassifier(random_state = 8) print('Parameters currently in use:\n') rf_0.get_params() # n_estimators n_estimators = [int(x) for x in np.linspace(start = 200, stop = 1000, num = 5)] # max_features max_features = ['auto', 'sqrt'] # max_depth max_depth = [int(x) for x in np.linspace(20, 100, num = 5)] max_depth.append(None) # min_samples_split min_samples_split = [2, 5, 10] # min_samples_leaf min_samples_leaf = [1, 2, 4] # bootstrap bootstrap = [True, False] # Create the random grid random_grid = {'n_estimators': n_estimators, 'max_features': max_features, 'max_depth': max_depth, 'min_samples_split': min_samples_split, 'min_samples_leaf': min_samples_leaf, 'bootstrap': bootstrap} random_grid # Grid Search: # Create the parameter grid based on the results of random search bootstrap = [False] max_depth = [30, 40, 50] max_features = ['sqrt'] min_samples_leaf = [1, 2, 4] min_samples_split = [5, 10, 15] n_estimators = [800] param_grid = { 'bootstrap': bootstrap, 'max_depth': max_depth, 'max_features': max_features, 'min_samples_leaf': min_samples_leaf, 'min_samples_split': min_samples_split, 'n_estimators': n_estimators } # Create a base model rfc = RandomForestClassifier(random_state=8) # Manually create the splits in CV in order to be able to fix a random_state (GridSearchCV doesn't have that argument) cv_sets = ShuffleSplit(n_splits = 3, test_size = .22, random_state = 8) # Instantiate the grid search model grid_search = GridSearchCV(estimator=rfc, param_grid=param_grid, scoring='accuracy', cv=cv_sets, verbose=1) # Fit the grid search to the data grid_search.fit(tfidf_trainpos, Labels_train) print("The best hyperparameters from Grid Search are:") print(grid_search.best_params_) print("") print("The mean accuracy of a model with these hyperparameters is:") print(grid_search.best_score_) best_rfc = grid_search.best_estimator_ best_rfc best_rfc.fit(tfidf_trainpos, Labels_train) rfc_pred = best_rfc.predict(tfidf_testpos) # Training accuracy print("The training accuracy is: ") print(accuracy_score(Labels_train, best_rfc.predict(tfidf_trainpos))) # Test accuracy print("The test accuracy is: ") print(accuracy_score(Labels_test, rfc_pred)) # Classification report print("Classification report") print(classification_report(Labels_test,rfc_pred)) d = { 'Model': 'Random Forest', 'Training Set Accuracy': accuracy_score(Labels_train, best_rfc.predict(tfidf_trainpos)), 'Test Set Accuracy': accuracy_score(Labels_test, rfc_pred) } df_models_rfc = pd.DataFrame(d, index=[0]) df_models_rfc with open('df_models_rfc.pickle', 'wb') as output: pickle.dump(df_models_rfc, output) path_pickles = "/home/andrewpap22/Desktop/dataMining_MainProject/" list_pickles = [ "df_models_mnbc.pickle", "df_models_mnbc2.pickle", "df_models_svc.pickle", "df_models_rfc.pickle" ] df_summary2 = pd.DataFrame() for pickle_ in list_pickles: path = path_pickles + pickle_ with open(path, 'rb') as data: df = pickle.load(data) df_summary2 = df_summary2.append(df) df_summary2 = df_summary2.reset_index().drop('index', axis=1) df_summary2 ###Output _____no_output_____ ###Markdown Sorting by: **Test Set Accuracy:** ###Code df_summary2.sort_values('Test Set Accuracy', ascending=False) ###Output _____no_output_____ ###Markdown Summary: **F1-Scores:**1. SVM: **0.84**2. Random Forest: **0.79**3. Improved Naive Bayes: **0.78**4. 
Basic Naive Bayes: **0.73** So as we can see: **SVM** is the best model so far with the best performance and scores!!! The reason is, we gave it the best possible features and best possible cleaned data (best possible by what i could personally manage... not to mean the best possible created!) It contained the full cleaned data with the extra cleaning we did for the Improved Naive bayes + the combination of tfidf and pos tag features. So, that's a good reason why SVM has the best test set accuracy and f1-score! **-----------------------------------------------------------------------------------------------------------** *__Now we'll try anything we can in order to get the best possible Test Set Accuracy and F1-Score__* i.e. We have to exceed the performance of SVM! i) Multinomial Logistic Regression ###Code #randomized search cross validation: # C C = [float(x) for x in np.linspace(start = 0.1, stop = 1, num = 10)] # multi_class multi_class = ['multinomial'] # solver solver = ['newton-cg', 'sag', 'saga', 'lbfgs'] # class_weight class_weight = ['balanced', None] # penalty penalty = ['l2'] # Create the random grid random_grid = {'C': C, 'multi_class': multi_class, 'solver': solver, 'class_weight': class_weight, 'penalty': penalty} random_grid #The search: # First create the base model to tune lrc = LogisticRegression(random_state=8) # Definition of the random search random_search = RandomizedSearchCV(estimator=lrc, param_distributions=random_grid, n_iter=50, scoring='accuracy', cv=3, verbose=1, random_state=8) # Fit the random search model random_search.fit(tf_idf_train, labels_train) print("The best hyperparameters from Random Search are:") print(random_search.best_params_) print("") print("The mean accuracy of a model with these hyperparameters is:") print(random_search.best_score_) best_lrc = random_search.best_estimator_ best_lrc best_lrc.fit(tf_idf_train, labels_train) lrc_pred = best_lrc.predict(tf_idf_test) # Training accuracy print("The training accuracy is: ") print(accuracy_score(labels_train, best_lrc.predict(tf_idf_train))) # Test accuracy print("The test accuracy is: ") print(accuracy_score(labels_test, lrc_pred)) # Classification report print("Classification report") print(classification_report(labels_test,lrc_pred)) d = { 'Model': 'Logistic Regression', 'Training Set Accuracy': accuracy_score(labels_train, best_lrc.predict(tf_idf_train)), 'Test Set Accuracy': accuracy_score(labels_test, lrc_pred) } df_models_lrc = pd.DataFrame(d, index=[0]) df_models_lrc with open('df_models_lrc.pickle', 'wb') as output: pickle.dump(df_models_lrc, output) ###Output _____no_output_____ ###Markdown Final Results & Conclusion!!! ###Code path_pickles = "/home/andrewpap22/Desktop/dataMining_MainProject/" list_pickles = [ "df_models_mnbc.pickle", "df_models_mnbc2.pickle", "df_models_svc.pickle", "df_models_rfc.pickle", "df_models_lrc.pickle" ] df_summary_final = pd.DataFrame() for pickle_ in list_pickles: path = path_pickles + pickle_ with open(path, 'rb') as data: df = pickle.load(data) df_summary_final = df_summary_final.append(df) df_summary_final = df_summary_final.reset_index().drop('index', axis=1) df_summary_final ###Output _____no_output_____ ###Markdown Sorting by: **Test Set Accuracy:** ###Code df_summary_final.sort_values('Test Set Accuracy', ascending=False) ###Output _____no_output_____
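###Markdown A quick visual comparison of the summary table can make the ranking easier to read than the sorted dataframe alone. A minimal sketch, assuming `df_summary_final` was built as above and that matplotlib is available in the environment.

```python
import matplotlib.pyplot as plt

def plot_model_comparison(df_summary):
    """Bar chart of training vs. test accuracy for each model in the summary dataframe."""
    ax = df_summary.plot.bar(
        x='Model',
        y=['Training Set Accuracy', 'Test Set Accuracy'],
        rot=45,
        figsize=(8, 4),
    )
    ax.set_ylabel('Accuracy')
    plt.tight_layout()
    plt.show()

# Example usage:
# plot_model_comparison(df_summary_final)
```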
util_nbs/99a_local_repo_path.ipynb
###Markdown Accessing the repo root path> Admittedly, this is a hacked solution. Store a str object in the exported py file that holds the absolute path to wherever this repo is located locally. This way absolute paths can be used throughout the rest of this module to place files where they should go. ###Code #export import os try: # path of the current py file that the nb exports into (not the notebook) file_path = os.path.dirname(os.path.realpath(__file__)) # abs path to collection repo in your computer local_repo_path = file_path[:-2] # this could be cleaner, see below except NameError: # __file__ is not defined when running inside a notebook print('in a notebook environment') ###Output in a notebook environment
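###Markdown As the "this could be cleaner, see below" comment suggests, the same idea can be written with `pathlib`, which avoids the string slicing. A minimal sketch of one cleaner alternative; it assumes the exported module sits one directory level below the repo root, which is roughly what the `[:-2]` slice above relies on as well.

```python
#export
from pathlib import Path

try:
    # parent of the directory that contains the exported py file = repo root
    # (assumes the exported module lives one level below the root)
    local_repo_path = str(Path(__file__).resolve().parent.parent) + '/'
except NameError:
    # __file__ is not defined when running inside a notebook
    print('in a notebook environment')
```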
6th Sem/Data Science & Big Data/PR 5/.ipynb_checkpoints/Practical No 05-checkpoint.ipynb
###Markdown Data Analytics II- Implement logistic regression using Python/R to perform classification on Social_Network_Ads.csv dataset.- Compute Confusion matrix to find TP, FP, TN, FN, Accuracy, Error rate, Precision,Recall on the given dataset. ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split import warnings %matplotlib inline warnings.filterwarnings('ignore') df = pd.read_csv('Social_Network_Ads.csv') df.head() df.describe() # input x = df.iloc[:, [2, 3]].values # output y = df.iloc[:, 4].values X_train, X_test, y_train, y_test = train_test_split(x, y, test_size = 0.25, random_state = 0) from sklearn.preprocessing import StandardScaler sc_x = StandardScaler() X_train = sc_x.fit_transform(X_train) X_test = sc_x.transform(X_test) print (X_train[0:10, :]) from sklearn.linear_model import LogisticRegression classifier = LogisticRegression(random_state = 0) classifier.fit(X_train, y_train) y_pred = classifier.predict(X_test) from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test, y_pred) print ("Confusion Matrix : \n", cm) from sklearn.metrics import accuracy_score print ("Accuracy : ", accuracy_score(y_test, y_pred)) ###Output Accuracy : 0.32 ###Markdown Compute Confusion matrix to find TP, FP, TN, FN, Accuracy, Error rate, Precision, Recall on the given dataset. ###Code # classification report for precision, recall f1-score and accuracy from sklearn.metrics import classification_report matrix = classification_report(y_test, y_pred,labels=[1,0]) print('Classification report : \n',matrix) ###Output Classification report : precision recall f1-score support 1 0.32 1.00 0.48 32 0 0.00 0.00 0.00 68 accuracy 0.32 100 macro avg 0.16 0.50 0.24 100 weighted avg 0.10 0.32 0.16 100
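###Markdown The practical also asks for TP, FP, TN, FN and the error rate explicitly; these can be pulled straight out of the confusion matrix computed above. A minimal sketch, assuming the binary `y_test` and `y_pred` from the cells above (for a 2x2 sklearn confusion matrix, `ravel()` returns TN, FP, FN, TP in that order).

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score

def confusion_summary(y_true, y_predicted):
    """Report TP, FP, TN, FN, accuracy, error rate, precision and recall for a binary classifier."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_predicted).ravel()
    total = tn + fp + fn + tp
    accuracy = (tp + tn) / total
    print("TP =", tp, " FP =", fp, " TN =", tn, " FN =", fn)
    print("Accuracy   :", accuracy)
    print("Error rate :", 1 - accuracy)
    print("Precision  :", precision_score(y_true, y_predicted))
    print("Recall     :", recall_score(y_true, y_predicted))

# Example usage with the arrays from the cells above:
# confusion_summary(y_test, y_pred)
```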
HeErodeFxn.ipynb
###Markdown Erode the surface during He-3 production/diffusion Syntax`conc3He = HeErodeFxn(conc0, erode_z, CosmoPars, HeliumPars)` Input `conc0` : concentration at the beginning of the "year" (at g-1) `erode_z` : depth to be eroded (cm) `CosmoPars` : dictionary of parameters relevant for CRN calculations `HeliumPars` : dictionary of parameters relevant for He-3 calculations Variables Used Helium Pars`Hez` : vector of He node depths, size length(max_depths[0]) (cm) `EDTz` : vector of EDTs at the Hez node depths (K) `nx` : number of nodes in the quartz grains Cosmo Pars `mu` : mu production term `SLHL_He3` : sea level high latitude He-3 spallation production rate (at g-1 yr-1) Output Calc Pars`conc3He` : matrix of size (len(Hez), nx) with the concentration of 3He at each depth and in each quartz grain Notes**Date of Creation:** 7 July 2021 **Author:** Donovan Dennis **Update:** ###Code import numpy as np from scipy.interpolate import interp2d # ProdDiffHe3Fxn must be available in the session (it is defined in its companion notebook) def HeErodeFxn(conc0, erode_z, CosmoPars, HeliumPars): Hez = HeliumPars['Hez'] EDTz = HeliumPars['EDTz'] nx = HeliumPars['nx'] mu = CosmoPars['mu'] SLHL_He3 = CosmoPars['SLHL_He3'] old_depths = Hez[Hez > erode_z] # depths that survive the erosion step (currently unused) He3P0 = SLHL_He3 * np.exp(-mu * Hez) conc0 = [ProdDiffHe3Fxn(conc0[i], He3P0[i], EDTz[i], HeliumPars) for i in range(len(Hez))] interpx = range(0,nx) first_fxn = interp2d(x = interpx, y = Hez, z = np.stack(conc0), kind = 'linear') first_data = first_fxn(x = interpx, y = Hez) second_fxn = interp2d(x = interpx, y = (Hez - erode_z), z = np.stack(first_data), kind = 'linear') conc3He = second_fxn(x = interpx, y = Hez) return conc3He ###Output _____no_output_____
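###Markdown A small driver showing how the function expects its inputs to be packaged may be useful. Every number below is a placeholder chosen purely for illustration, the assumed shape of `conc0` follows the (len(Hez), nx) convention in the Output description above, and `ProdDiffHe3Fxn` must already be defined (it lives in a companion notebook), so treat this strictly as a sketch rather than a validated run.

```python
import numpy as np

# Placeholder parameter dictionaries -- all values are made up for illustration only
HeliumPars = {
    'Hez': np.linspace(0.0, 200.0, 50),   # He node depths (cm)
    'EDTz': np.full(50, 283.0),           # EDTs at the Hez node depths (K)
    'nx': 100,                            # number of nodes in the quartz grains
}
CosmoPars = {
    'mu': 0.012,                          # mu production term
    'SLHL_He3': 120.0,                    # SLHL He-3 spallation production rate (at g-1 yr-1)
}

# initial concentration: one row of nx grain nodes per depth node (here: no He-3 yet)
conc0 = np.zeros((len(HeliumPars['Hez']), HeliumPars['nx']))
erode_z = 0.05                            # cm of surface removed in this "year"

# conc3He = HeErodeFxn(conc0, erode_z, CosmoPars, HeliumPars)  # requires ProdDiffHe3Fxn
```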
Lab 2 notebook.ipynb
###Markdown Laboratory Experiment 2: Measuring Markers of Polluted Air CHM410 / CHM1410Welcome to the data analysis workshop for this experiment. We will be using this Jupyter notebook to gently dip our toes into the pool that is scientific computing. Here is an outline of what we'll be doing: 1. Introduction to programming with Python and using this notebook 1. [Basics](basics) 2. [Comments](comments) 3. [Variables](variables) 4. [Types](types) 5. [Lists](lists) 6. [Packages](packages) 7. [Functions and Methods](methods) 8. [Numpy](nparrays)2. [Loading your data](loading)3. [Making basic plots](plotting)4. [Maths and stats](stats)5. [Creating publication quality figures](figures)6. [Plotting on maps](maps)6. [Saving/exporting your data](saving)7. [Space for you to create your plots](free_space)The table of contents also provides links for ease of navigation. If you have prior experience with coding, Python, and/or Jupyter notebooks, feel free to take what you want from the following introduction and move on to the second section. ---- Just the basicsBelow this is a cell of code. Try running it by clicking inside the cell and then pressing __shift__ + __enter__ or clicking the __Run__ button on the toolbar at the top of the page. There is also a run button on the left of the cell that appears when your mouse hovers over the cell. ###Code print("Hello world") ###Output _____no_output_____ ###Markdown You should see the printed phrase appear below the cell. Ultimately, this is what we use computers to do — provide us with output based on the input we give it. In order to get the expected output for our input, we are using a programming language called Python, and it is in fact a language! It has rules of syntax, grammar, and usage just like any human language. We're going to go over some basics of this language, just enough to get you used to looking at the code and making some changes. You'll learn enough to be able to make some great figures by the end! ---Your computer will try to interpret everything you tell it. So in programming you need to use language with precision, or else your code will give you an error. Try running the cells below to look at some errors. ###Code print(Hello world) print "Hello world" print("Hello world) pirnt("Hello world") ###Output _____no_output_____ ###Markdown As you can see, there's not much forgiveness in doing something as simple as printing out a statement. We're going to go through a few Python concepts so you can figure out how to make your code run without errors. It may be helpful going forward to click on "View" on the toolbar and select the option to "toggle line numbers". This will display line numbers on the left side of the code, which can be very useful. CommentsAnyone who writes or uses code needs to know about comments. Comments are bits of text that programmers write into their code that your computer will ignore. Comments are incredibly important for understanding your own code and interpretting other people's code, and it is also useful for making your computer ignore some lines of code you don't want to use right now. Comments are denoted with the __\__ symbol. Anything that appears in the line after the \ symbol will be ignored by your computer. Look at the examples below to see how it works. ###Code # This is a comment print("This is not a comment") # but this is a comment! 
###Output _____no_output_____ ###Markdown As mentioned above, comments are important for communication, but are also just useful to "turn off" certain lines of code. We sometimes call this "commenting out" these lines of code. Try commenting out some lines in the code below by adding in the \ symbol. ###Code var = 5 # this is a variable, which you'll see in the next example # if you comment out the line above, the rest of the code will give you an error. var = var + 8 print(var) var += 1 var -= 3 print(var) var = var**2 print(var) ###Output _____no_output_____ ###Markdown The rest of this notebook contains really helpful information in the comments. Feel free to add your own comments in wherever you'd like! Declaring variablesA variable is a place that Python stores information in. Variables are useful not only to store information, but also to make shortcuts for yourself. Continuing with the example we've been using... ###Code statement = "Hello world" print(statement) ###Output _____no_output_____ ###Markdown We have stored some information in the variable named "statement" and then used that to tell your computer to print out the output. We declare the variable following the rule: _variable name_ __=__ _information_ The name always goes to the left of the equal sign. Once we've declared the variable, we can just use the variable name to refer to the information stored in it. We can store new information in the same variable, but this will destroy the old information. ###Code var = 5 print(var) var = 6 print(var) ###Output _____no_output_____ ###Markdown We can also modify the information in the variable as we please. Python uses arithmetic operators similarly to how you might use them in an Excel formula. Check out the examples below. Feel free to test them out for yourself! ###Code var0 = 5 var1 = 13 var0 = var0 + 2 var0 = var0 + var1 var0 = var0 - 5 print("var0 = ", var0) # print commands can use commas to combine different objects or variables var1 = var1 / 2 var1 = var1 * 3 var1 = var1**2 # this is an exponent print("var1 = ", var1) #There are also some convenient ways to do arithmetic and overwrite a variable at the same time var0 = 5 var0 += 6 # this will add 6 to the value of var0 and store it in var0 print(var0) ###Output _____no_output_____ ###Markdown Types of informationThe concept of _type_ is important to understanding a lot of issues you may face when programming. A _type_ is a specific kind of information. Python will interpret types in different ways. We can think of this in terms of different kinds of data. Some data are integers, some data are text, some data are decimals, et cetera. Try running the code below to see these _types_. ###Code # using the type() command will have the computer tell you what type of information is inside the parentheses print(type(1)) # this 1 is an integer print(type(1.0)) # this is called a float, which is basically a decimal number print(type("1.0")) # this is called a string, which is interpretted as text #strings are created by placing quote marks around the information ###Output _____no_output_____ ###Markdown ListsNext we'll look at some other very useful types that are a little more complicated. A _list_ is an array of data, in a certain order. The data in a list can be many different types. ###Code numbers = [1, 2, 3] # a list is made by placing information inside square brackets. 
The data are separated with commas print(numbers) print(type(numbers)) elements = ["H", "He", "Li"] # the data in this array are all strings (text) print(elements) mixture = [1, "He", 3.14] # the data in this array are of various types ###Output _____no_output_____ ###Markdown One of the important properties of lists is called _indexing_. Each entry in the list has a position. By convention, computer scientists count from zero. Your knowledge of this fact can be used to impress your computer scientist friends. By telling the computer to use a certain _index_, you can access specific entries in your list. The syntax for _indexing_ is done with square brackets:_list name_\[__index__\] Check out the example below. Try changing the index to access different entries in your list. Try changing the index to a number larger than the number of entries in the list, remembering we start at zero!Try changing the index to -1 and to other negative numbers. ###Code elements = ["H", "He", "Li", "Be", "B"] entry = elements[3] # the index appears inside the square brackets print(entry) ###Output _____no_output_____ ###Markdown We can index across a list to get many parts of the list at once. This action is sometimes called a "slice". The syntax for a slice is: _list name_\[__start:end__\]The value in the list at the __end__ position will not be included in the slice. Check out the example below to see how slicing works. Try changing the numbers in the slice. ###Code numbers = [1, 2, 3, 4, 5] entry = numbers[1:4] # this slice will go from the position 1 through 4 print(entry) ###Output _____no_output_____ ###Markdown The : is a useful symbol for indicating ranges of values. You don't have to specify an end point or a beginning point, either. ###Code print(numbers[:2]) # this slice will include the values at positions 0 and 1, but nothing at 2 or after print(numbers[3:]) print(numbers[:]) # a : with no other numbers will reproduce the whole list ###Output _____no_output_____ ###Markdown Understanding lists and indexes opens up a lot of possibilities. The example in the cell below shows how you can use the integers in one list ("numbers") as indices for another list ("elements"). ###Code numbers = [1, 2, 3, 4, 5] elements = ["H", "He", "Li", "Be", "B"] entry = elements[numbers[0]] print(entry) ###Output _____no_output_____ ###Markdown PackagesNow that we have an understanding of some of the basics, we can find out how to make the most of our coding experience. We could spend hours and hours manipulating the basic structures to give us our desired output, or we can rely on other smart people who have already done a lot of the work for us! Packages are bodies of code that we can use to make our work faster and more convenient. There are many, many packages in Python for all kinds of applications. We'll be looking at and using some common science packages. First let's learn how to use them! Ordinarily, you would first need to check to make sure a package is installed on your computer, but the University's Jupyter notebook service comes with many already installed. Thanks, IT department! We need to "import" the package before we can use any of the contents. ###Code import math # here we imported the "math" package import numpy as np # here we imported a package called "numpy" and gave it a shorthand name for convenience from scipy import constants # here we imported part of the "scipy" package called "constants" # Package contents can be accessed using a . 
and the name of the particular function you'd like to use print(math.log(4)) # Ok try it out! In the space below, type in "constants." # and then press the tab key. This will show you all the contents of the package available! # Choose some different ones by pressing enter or clicking on them and see if you can get them to work # Make sure you add a print() statement around it to see the output # just so you know, unfortunately the tab shortcut only works for some packages # Packages open up the possibilities of powerful computations sample_array = np.arange(0,100) # Here we have used the numpy package to produce a list of numbers from 0 to 99 print(sample_array) print(np.sum(sample_array)) # numpy has many mathematical functions that make it easy to do operations on lists of numbers ###Output _____no_output_____ ###Markdown Methods and functions and other package contentsIn the above example of the math package, we used a . to access the contents of the package. Some of these contents we used as __functions__ or __methods__, meaning we used a set of parentheses after it to "call" the function or method. The items inside the parentheses are called __arguments__, and they are required to be used in a specified order. This order has been indicated in the notebook for you where appropriate. Arguments can be __positional__, meaning they occur in a certain sequence inside the parentheses. Arguments can also be __keywords__, meaning they use a specific name and an equal sign to designate themselves. Many of the package features we'll use today will be like these functions and methods. The packages also contain other objects. Look at the example below of a useful object in the numpy package: ###Code print(np.pi) ###Output _____no_output_____ ###Markdown This object is not a function or method, so we don't use parentheses. If you use a function without parentheses or you try to use parentheses on some other object, you'll encounter an error. Keep this in mind as you start writing your own code. Arrays using numpyWe've already seen the numpy package a little, but you'll need to get a little more familiar with it before we move on. Numpy arrays are great, and they're a lot like the basic python lists we saw above. The difference is that numpy arrays allow us to use "array operations", where the basic list type did not. Array operations means that we can perform mathematical operations and other such transformations to the entirety of a data set in one line of code. If we used a list, we'd have to apply the operation to each item in the list explicitly. If you want to learn more about that type of programming, you should read about python ["for loops"](https://www.dataquest.io/blog/python-for-loop-tutorial/). For loops will not be necessary for work in this notebook. ###Code # in the lines below, we create an array and then add 1 to all the elements in the array a = np.array([1,2,3,4,5,6]) print(a) a = a + 1 print("a + 1 =",a) # if we try the same thing with a list, it will cause an error a = list([1,2,3,4,5,6]) print(a) a = a + 1 ###Output _____no_output_____ ###Markdown Arrays can be one dimensional (like one column of data), or they can have mutliple dimensions. Two dimensional arrays can be very useful in doing array operations.One feature of numpy is being able to "reshape" an array. In this notebook, you may want to turn your 1-d array of data into a 2-d array so that you can do some operations on it. The next cell shows an example of reshaping an array and then doing an operation on the array. 
###Code a = np.array([1,2,3,4,5,6]) print(a) print("size =", a.shape) # the .shape attribute will tell you the length of your array in each dimension a = np.reshape(a,(3,2)) print(a) print("reshaped size=", a.shape) print("mean = ", a.mean(axis=1)) # this line demonstrates an operation on the reshaped array ###Output _____no_output_____ ###Markdown The array above was 6 elements long, and then we changed it to a 3 by 2 array. Take note of how the output gets shown with brackets. The mean on the last line was taken on the __axis__ we indicated. This tells the method if it should mean the rows or the columns of the 2-d array. In this case, axis 1 indicates the rows were meaned. You can already predict what will happen if we had used axis=0: ###Code print("mean = ", a.mean(axis=0)) ###Output _____no_output_____ ###Markdown There are many ways to write your code to do the same thing. Understanding array dimensions and axes can be pretty abstract, but you can always achieve the result you want. Have a look at the example below, where we reshape the array into a 2 by 3 array: ###Code a = np.array([1,2,3,4,5,6]) a = np.reshape(a,(2,3)) print(a) print("reshaped size=", a.shape) print("mean = ", a.mean(axis=0)) # what output will this produce? ###Output _____no_output_____ ###Markdown Maybe that was the result we wanted, but maybe we were trying to get the mean of every 3 values. This can either be changed in the reshaping, or we can do a transpose of the array: ###Code print(a) b = np.transpose(a) # the transpose function is applied to a print("transposed array= ", b) print("mean = ", b.mean(axis=0)) ###Output _____no_output_____ ###Markdown That's just one small example of something you might encounter in doing your analysis, or in other scientific programming spaces. One more aspect of numpy arrays you may want to know is how to use the index. In a 1-d array it is the same as the list type from earlier. With a 2-d array, you can use a comma to indicate how you'd like to treat the columns and rows. ###Code print(b) print(b[:,0]) # This will print all the values in the first column print(b[0,:]) # This will print all the values in the first row print(b[1,1]) # second column, second row value ###Output _____no_output_____ ###Markdown That's the end of this introduction to Python. In the next section you'll get to actually work with your data and make some visualizations, using what you learned so far. Much of the code you need has been written for you; you'll need to make some adjustments to names, indexes, etc. and commenting/uncommenting lines of code as you need them. Loading your dataBefore we go further, you'll need to upload your data onto the Jupyter hub. Go back to your browser tab where the folder __Lab2.git__ is open. In the upper right corner there is an __Upload__ button. Click and use your system dialog to select the files you'd like to use. These should be the raw data provided in the .csv file format. Your data files should then appear in the list of files. You should also see a file called "sample.csv" in the same folder The package we'll use for loading your data is called _pandas_. [Consult wikipedia](https://en.wikipedia.org/wiki/Pandas_(software)) if you're wondering why it's called that. The files are in .csv format, which you can open in most analysis software, including Microsoft Excel. The next cell shows you an example of loading in a csv file. 
###Code # import pandas package import pandas as pd filename = "sample.csv" data = pd.read_csv(filename) # this function will open the csv file and load it into your workspace print(data.columns) # this prints the names of the columns in the sample data set. print(data.head(5)) # this prints the first five rows of your data set. ###Output _____no_output_____ ###Markdown In practice, the files from the sensors used in the experiment have a slightly more complicated format than the sample above, but it is important to see how easy it is to read in a csv file. So, your TA has prepared functions to load data from the Aeroqual sensors and the Airbeam2 sensor. These functions are stored in the Lab2_Functions python file. In the cell below, please change the variable called "filename" to the exact name of your uploaded Airbeam data file. Other than that, the cell is already set up to load your data. The function used to load your data from the airbeam is called OpenAirBeam2() You should try printing the first few lines of some of the data to make sure it worked. ###Code import Lab2_Functions as lab2 # This import statement will allow you to use the functions your TA has written filename = "your file name.csv" # The next line illustrates the use of the lab2 library pm_datetimes, pm_rel_time, pm_temp, pm1, pm10, pm2, pm_rh, pm_lats, pm_longs = lab2.OpenAirBeam2(filename) #all of the objects on the left of the = are arrays containing your data # pm_datetimes contains formatted date information of the absolute time # pm_rel_time contains float values starting with 0 seconds counting up the relative time ###Output _____no_output_____ ###Markdown A note for these functions: they always supply you with the same data in order like that, but you can change the variable names to whatever you want. E.g. you can change "pm_rel_time" to "time", but you can't change that it comes second in the order.Let's load the rest of your data into arrays. The next two cells are set up similarly to the one above, showing the function for loading your Aeroqual monitor data. You'll need to change the file name. ###Code # Use this cell for loading CO2 monitor data filename = "your file name.csv" CO2_datetimes, CO2_rel_time, CO2_vmr = lab2.OpenAeroqual(filename) # absolute time and relative time are again included, and the concentration is in the CO2_vmr object # vmr stands for volume mixing ratio # Use this cell for loading O3 montior data filename = "your file name.csv" O3_datetimes, O3_rel_time, O3_vmr = lab2.OpenAeroqual(filename) ###Output _____no_output_____ ###Markdown If you have more data you'd like to load, you can add more OpenAeroqual or OpenAirbeam2 lines above, or you can use the empty cell below. You don't need to use this cell if you don't have more data. You'll need to make sure you use the right filename, and you should change the variable name so you can differentiate between data sets. __PRO TIP:__ If you want to work quickly without having to remember specific variable names, just type the first part of the name in, and press the tab key. A list of variables appear and you can select the one you want with arrow keys and then pressing enter. If there's only one variable it could possibly be, the variable name will autocomplete. ###Code # empty cell for loading more data ###Output _____no_output_____ ###Markdown Plotting your dataIf you've successfully loaded your data, next comes the fun part. We'll quickly try plotting some of these data. Exciting! 
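###Markdown Before jumping into plots, it can save time to run a quick sanity check on what was just loaded: array lengths, time coverage, and obviously bad values. A minimal sketch, assuming the variable names from the loading cells above (e.g. `pm_rel_time`, `pm1`, `CO2_rel_time`, `CO2_vmr`).

```python
import numpy as np

def sanity_check(name, rel_time, values):
    """Print a one-line summary of a loaded sensor series."""
    rel_time = np.asarray(rel_time, dtype=float)
    values = np.asarray(values, dtype=float)
    print(name,
          "| points:", values.size,
          "| duration (s):", rel_time.max() - rel_time.min(),
          "| min:", np.nanmin(values),
          "| max:", np.nanmax(values),
          "| NaNs:", int(np.isnan(values).sum()))

# Example usage with the arrays loaded above:
# sanity_check("PM1", pm_rel_time, pm1)
# sanity_check("CO2", CO2_rel_time, CO2_vmr)
```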
The next cell of code is set up to plot your data in an interactive window. It should pop up on your screen, or you may have to click on the new window to see it. There are tools in the interactive window for you to zoom in and move about, as well as to adjust some of the appearance. If you find views of your data you think are interesting, you can save the current view with the save button. ###Code # The matplotlib package is a ubiquitous Python plotting package. import matplotlib.pyplot as plt # It also has helpful code for dealing with different kinds of data, such as dates import matplotlib.dates as mdates # These next lines determine how the plot will appear on your computer. # Comment and uncomment the lines starting with "%" to try them out # The next line is a bit of magic that makes the plot appear in a new interactive window %matplotlib notebook # This next line will make the plot appear in the page, but you won't be able to interact with it. #%matplotlib inline # These next lines are the actual plotting code. fig,ax = plt.subplots() # This creates a blank canvas to plot on plt.plot(pm_rel_time, pm1, '.-') # This "plot" function is what puts your data on the figure plt.plot(pm_rel_time, pm2, '.-') plt.plot(pm_rel_time, pm10, '.-') plt.show() # This line displays the plot ###Output _____no_output_____ ###Markdown The plt.plot function used above is very powerful and you'll be using it a bit going forward. The usage is: plt.plot(__x axis data__, __y axis data__, _symbol code_) the symbol code can change how the data appear on your figure. Try changing it to the following and replotting. 'x' '--' 'x--g' Using the plt.plot function on several lines allows you to add multiple data to the plot, and it automatically colours the points. You can also change the colour with the symbol code, and there are more details about colour in the next section. When looking at your data, you might want to know exactly which data point you're looking at. In this case you can add labels to your plot to help you identify the x value, y value, and x index of individual points. The Lab2 Functions package has a function that will add point labels for you. See the example below: ###Code # interactive plot style %matplotlib notebook x = np.linspace(0, 2.2, 100) # this creates an array of 100 x values from 0 to 2.2 y = x**2 +3*x + 0.3 # creating a y series fig,ax = plt.subplots() # create a blank canvas plt.plot(x, y, '.') # plot x vs y in points lab2.PointLabels(x, y, 5, plot_index=False) # the function requires arguments in order: x values, y values, number of points between labels (every nth label) # the plot_index function can be set to True or to False # this changes the label from showing the x value to the x index instead plt.show() ###Output _____no_output_____ ###Markdown The next cell includes lines allowing you to adjust the figure size and appearance and assign axis labels. 
###Code fig,ax = plt.subplots(figsize=(4,4)) # creating a blank canvas with a size 4 inch by 4 inch fig.set_size_inches(6.5,3) # this will change the already created canvas size, as an alternative to the above plt.rcParams.update({"font.size":12}) # change the default font size for the plot fig.set_dpi(300) # change the resolution of the figure in dots per inch plt.ylabel("velocity, m s-1") # add a label to the y axis plt.xlabel("time, s") # add a label to the y axis plt.tight_layout() # a function that automatically reduces empty space around the canvas ###Output _____no_output_____ ###Markdown Histograms can be very useful for understanding distributions of data. They are easy to create, and you can adjust the number of histogram bins easily ###Code plt.hist(y, bins=20) ###Output _____no_output_____ ###Markdown Do you want to use the absolute date time data on the x axis? This isn't always necessary since relative time is often easier to deal with, but if you need to indicate a specific time of day or for any other reason, you can follow the example: ###Code import matplotlib.dates as mdates # this package is a date plotting helper fig, ax = plt.subplots() # blank canvas plt.plot(CO2_datetimes, CO2_vmr, '.') #plot using absolute time # The next three lines should all be used when plotting with absolute time time_format = mdates.DateFormatter('%H:%M:%S') # This sets the time to show hours:minutes:seconds ax.xaxis.set_major_formatter(time_format) # This applies the format to the x axis fig.autofmt_xdate() # This automatically rotates the labels to fit plt.show() ###Output _____no_output_____ ###Markdown Take some time to explore your data. You can create more plots in the empty cell below, or just edit the one above. Use point labels to record important ranges. Remember to save any figures you think are interesting. If you use the interactive plot (%matplotlib notebook), you can save with the button. If you use the inline plot (%matplotlib inline) you can right click and save the image. You can also use the savefig function to do this, as shown below: ###Code plt.savefig("test_figure.png", format = 'png') # you can save an image of your plot # specifying the filename in the first argument plt.savefig("test_figure.svg", format='svg', dpi=300) # you can also save as a 'pdf', 'svg', 'eps', or 'jpeg' file # you can specify the resolution to save at if you didn't change it before ###Output _____no_output_____ ###Markdown ___ Take a break: what is your data telling you?Now that you've had a detailed look at your data set, it may be useful to reflect on what is notable or surprising about your data and how that fits with your hypothesis. You are encouraged to take a moment to put pen to paper and write down your thoughts. Remember that in the end you are using your data to tell a story about your hypothesis, but your data can often have their own story to tell.___ Calculations and StatisticsThere are basically endless amounts of maths and statistics you can do with python, but let's cover the basics. These are the most important for summarizing your data and performing your analysis. 
This is not a statistics course, so you should only use statistical methods that you know inside and out since they almost always come with many assumptions.We already got to look at a little bit of taking means when we went over [reshaping numpy arrays](nparrays), but here's more you can easily do to enrich your analysis: ###Code from scipy import stats # this package contains additional stats functions # remember you can type "stats." and then press tab to check them all out data = np.array([1,2,3,4,1,6,2,3,2,1,4]) #there are two ways to take a mean np.mean(data) # one is a function in numpy data.mean() # one is a method of an array #similarly, there are two ways to take the standard deviation np.std(data) data.std() np.median(data) # median is a very useful statistic stats.mode(data) # mode is an underrated statistic stats.iqr(data, rng=(5,95)) # inter-quantile range is a robust dispersion descriptor #the rng argument should be set to a sequence of quantiles, this case shows the range from 5% to 95% of the data # performing a linear regression is quick and easy, and provides you with several useful statistics! slope, intercept, r_value, p_value, std_err = stats.linregress(x, y) # see if you can make a plot using the slope and intercept! # you can use the r_value to look for correlations between your different measured variables # remember if you want to see the output to use the print() function ###Output _____no_output_____ ###Markdown There's so much you can do with stats if you're interested. Some additional references for functions: [descriptive stats](https://docs.scipy.org/doc/scipy/reference/stats.htmlsummary-statistics), [more tests](https://docs.scipy.org/doc/scipy/reference/stats.htmlstatistical-tests), and [correlations](https://docs.scipy.org/doc/scipy/reference/stats.htmlcorrelation-functions). Using means of your data can help to reduce the number of points in your figure, effectively summarizing important information. You'd need to figure out how many points to mean down to one point. Review the reshaping arrays section if you need help. Making more effective plotsWe already got a brief look at plotting in python, but now we're going to dive deeper. What and how you plot your data depends on your hypothesis and research questions. We'll go over various options and scenarios for plotting your data, and then you'll be able to create your own script for plotting your data the way you'd like it. Adjusting the range of the x and y axes is important. If you're comparing different plots, it can be vital to a reader's interpretation that you use the same y scale on each plot. ###Code plt.plot() plt.xlim(0, 40) # set the range of x value shown, in x units plt.ylim(100, 1000) # set the range of y values shown, in y units ###Output _____no_output_____ ###Markdown ColourColour is another very important aspect of plotting. As mentioned before, the plot function will automatically cycle through colours as you plot separate series of data. The colours used are from a colour-blind friendly set. You can also manually choose from these colours using the __color__ argument and values of "C0", "C1", "C2", and so on.Some basic colours can also be specified by name or abbreviation. "w" stands for white, which can also be called "white". "k" stands for black ([here's why](https://en.wikipedia.org/wiki/CMYK_color_model)). Etc.If you are interested in more ways to specify colour, check out [this page](https://matplotlib.org/3.1.0/tutorials/colors/colors.html) that lists them. 
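###Markdown As a quick illustration of these colour options, the sketch below draws three made-up series, one with a colour-cycle code, one with a named colour and one with a single-letter abbreviation (the numbers are invented purely for the demonstration): ###Code
import matplotlib.pyplot as plt

demo = [1, 3, 2, 5, 4]                                 # made-up values, only to show the colour options
plt.plot(demo, '.-', color='C2')                       # 'C2': the third colour of the default cycle
plt.plot([v + 2 for v in demo], '.-', color='green')   # a named colour
plt.plot([v + 4 for v in demo], '.-', color='k')       # 'k' is the abbreviation for black
plt.show()
###Output _____no_output_____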
###Code # try changing the colour argument plt.plot(O3_rel_time, O3_vmr, color="C4") ###Output _____no_output_____ ###Markdown LegendsAdding a legend to your plot when you're showing multiple series of data is easy. Add a __label__ argument to your plot function and then call plt.legend(). It will be automatically placed in the spot with least overlap with the data window. The label value must be a text string. ###Code plt.plot(O3_rel_time, O3_vmr, '.-', color="C0", label="O3") plt.plot(CO2_rel_time, CO2_vmr, '.-', color="C1", label="CO2") plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Vertical and horizontal linesAdding in vertical lines can be helpful for indicating separations or events in your timeseries. Horizontal lines can be useful for indicating limits, minima, or threshold values ###Code x_location = 1 y_min = 0 y_max = 1 plt.vlines(x_location, y_min, y_max) # this demonstrates the arguments used in the vlines function plt.vlines(2, 0, 1, linestyles='dashed') # you can also just add the numbers in directly # the linestyles argument can change it to "dashed" or "dotted", but the default is "solid" y_location = 1 x_min = 0 x_max = 1 plt.hlines(y_location, x_min, x_max) plt.hlines(3, 0, 1, linestyles='dotted') ###Output _____no_output_____ ###Markdown Shaded regionsYou can also create shaded rectangular areas on your graphs to indicate events occuring over a certain duration. ###Code x_min = 0 x_max = 2.5 plt.axvspan(x_min, x_max, alpha=0.1) ###Output _____no_output_____ ###Markdown Text annotationsPutting text on your plot can really help explain what it is showing. Text can be a brief description of the data. You could also use text as a label to refer to in your figure caption. Text annotations can be used along with other elements like vertical lines and shaded boxes to provide a lot of information! The following example shows how to add text anywere: ###Code text = "Look right here" # this is the text you want to appear x_position = 2 # the x and y coordinates need to be chosen y_position = 10 plt.plot([1,2,3],[0,50,30]) # this is just some meaningless data to plot # the annotate function takes your text and puts it where you specified # the x and y coordinates need to be in parentheses like so # you can change the font and size easily plt.annotate(text, (x_position, y_position), color='black', fontsize=16) plt.show() ###Output _____no_output_____ ###Markdown The annotate function will also automatically produce arrows when used slightly differently. See the example: ###Code text = "Look over here" # this is the text you want to appear x_position = 2 # the x and y coordinates need to be chosen y_position = 10 plt.plot([1,2,3],[0,50,30]) # this is just some meaningless data to plot arrow_x_position = 2 # x position for the arrow head arrow_y_position = 50 # y position for the arrow head arrow_style = {"arrowstyle":'->', "color":"C3"} # this special variable contains information for the arrow # you can change the color by changing C3 to something else # now the annotate function takes the xytext argument to put the text in a different location plt.annotate(text, (arrow_x_position, arrow_y_position), xytext = (x_position, y_position), color='C3', fontsize=14, arrowprops=arrow_style) plt.show() ###Output _____no_output_____ ###Markdown GridlinesIf you want to indicate a regular time interval, you can consider using vertical gridlines; if you want to assist the viewer in distinguishing between y-values, you can consider using horizontal gridlines. 
If overused, gridlines can make your plot look cluttered, and sometimes the absence of gridlines can convey a professional quality. ###Code plt.plot([1,2,3,4], '.') plt.grid(axis="both") # creates gridlines on both x and y axes plt.grid(axis="x") # creates gridlines on x axis plt.grid(axis="y") # creates gridlines on y axis ###Output _____no_output_____ ###Markdown Filling between valuesFilling in a shaded region between values can be useful to indicate a range of values or some statistical information like a 95% confidence interval. Here is a small example: ###Code x = np.arange(0,10) # create a rang of x values y1 = x**3 y2 = x**2 - 10 plt.plot(x, y1) plt.plot(x, y2) plt.fill_between(x, y1, y2, color = 'C5', alpha=0.3) # the alpha argument controls the transparency of the shaded region. Try changing it ###Output _____no_output_____ ###Markdown Secondary y axisSometimes you need to show data in two different ranges or different units, in which case you can use a secondary axis. Adding a secondary axis can be difficult to interpret though, so you should do this carefully. You'll need to manually specify the colour when you use the secondary axis. ###Code plt.plot([1,2,3,2],'.',color='C0') plt.ylabel("Data 1") plt.twinx() # this function switches the axis #everything you do will change the right side axis now, including how you add labels plt.plot([100,200,350,200],'.', color='C1') plt.ylabel("Data 2") plt.show() ###Output _____no_output_____ ###Markdown If you add a legend to a plot with a secondary axis, making it is a certainly more complicated than before. See below for an example ###Code lines1 = plt.plot([1,2,3,2],'.',color='C0', label="series 1") # we'll need the variable stored in lines1 for the legend plt.ylabel("Data 1") plt.twinx() # this function switches the axis #everything you do will change the right side axis now, including how you add labels lines2 = plt.plot([100,200,350,200],'.', color='C1', label="series 2") # we need lines2, just like lines1 plt.ylabel("Data 2") lines = lines1 + lines2 # combine the list of lines from the plot function labels = [l.get_label() for l in lines] # generate labels plt.legend(lines, labels) # create the legend with the lines and labels generated plt.show() ###Output _____no_output_____ ###Markdown Using variable colour and sizeSometimes you can show an additional dependent variable in your plot using the colour or size of the markers. Much like the secondary axis, this can be sometimes difficult to interpret, so make sure you're not overloading the plot with information.The __plt.scatter()__ function allows you to change the colour and size of the markers according to a variable. If you use colour, you'll need to include a colourbar to indicate what the colour map means. (The __colour map__ is the scale of colours used). Python automatically uses a "perceptually uniform" colour map, meaning the human eye won't interpret it incorrectly, since the change in the hue and intensity of the colour is uniform. You can read more about colour scales [here](https://colorcet.holoviz.org/). The names of these peceptually uniform colour scales are listed here:1. 'viridis'2. 'plasma'3. 'inferno'4. 'magma'5. 
'cividis' ###Code x = [1,2,3,4,5] y = [4,5,3,1,9] #the colours should be the other set of y-values you want to show colours = [1,2,6,5,3] # the scatter function takes the x and y data along with the colours and the colourmap name plt.scatter(x, y, c=colours, cmap='viridis') # this will display the colour bar, with label next to it plt.colorbar(label="Scale (units)") ###Output _____no_output_____ ###Markdown Changing the size of the points in your scatter plot can also be a way of showing another value. This style can be difficult to interpret as well, especially quantitatively. You can set the area of the marker to be proportional to a value. If you set the size parameter to the square of the value, this effectively scales the radius of the points instead of the area. You can also set the size to an exponential for an even more dramatic visualization. ###Code x = [1,2,3,4,5] y = [6,5,3,2,8] sizes = np.array([4, 6, 8, 2, 12]) plt.scatter(x, y, s=sizes) plt.scatter(x, y, s=sizes**2) ###Output _____no_output_____ ###Markdown If you would like to use sizes more quantitatively, you can create a legend with points of different sizes corresponding to the values you'd like to represent. ###Code # first we'll create the point markers that will go inside the legend # this example has three points point0 = plt.scatter(0,0, s=2**2, color='C0') point1 = plt.scatter(0,0, s=5**2, color='C0') point2 = plt.scatter(0,0, s=10**2, color='C0') plt.clf() # this command will clear the figure that was automatically created by the above lines # the remaining lines will be where you'll create the actual plot fig, ax = plt.subplots() # creates a new figure plt.scatter(x, y, s=sizes**2) # the same kind of scatter plot as shown in the previous cell points = [point0, point1, point2] # makes a list of the point markers labels = ['2 ppm', '5 ppm', '10 ppm'] # makes a list of labels for the legend plt.legend(points,labels) # displays the legend with the points and labels ###Output _____no_output_____ ###Markdown Log axesDoes your data span a wide range and is difficult to show on a linear axis? You can consider using log axes. But you must use this power responsibly, since log axes can be difficult to interpret. ###Code # the semilogy function works just like the plot function, but automatically creates a log scale on the y axis x=[1,2,3,4] y=[4,40,400,500000] plt.semilogy(x, y, '.') ###Output _____no_output_____ ###Markdown Box plotsBox plots are often used to summarize the statistical view of the data. They are quite easy to make: ###Code box_data = [CO2_vmr, O3_vmr] # make a list of the data series you'd like in the plot plt.boxplot(box_data, showmeans=True) # the show means argument can be True or False ###Output _____no_output_____ ###Markdown Bar graphA bar graph can be useful for visualizing summarized data, for example, comparing means of different data sets. The y axis of a bar plot should usually have 0 at the bottom. The __plt.bar__ function makes bar plots easily: ###Code import matplotlib.pyplot as plt # you'll need to make an x axis with numbers, equal to the number of bars in your plot x_positions = [1,2,3,4] # if you want the bars to have text labels, you can do that as well x_labels = ["Grp A", "Grp B", "Grp C", "Control"] # example of y data y_data = [45, 60, 80, 12] plt.bar(x_positions, y_data, width= 0.8, color='C6', tick_label=x_labels) ###Output _____no_output_____ ###Markdown Error BarsThe __plt.errorbar()__ function will allow you to add error bars to your data set. 
It will also just plot your data for you. If you want to just make error bars without points, change the _markersize_ argument to zero.
###Code x_data = [1,2,3,4,5] y_data = [5,7,9,2,1] # these next lines will determine the magnitude of your error bars x_errors = 0 y_errors = [1,4,2,1,0.2] # the errorbar function has a lot of arguments to play around with. # try changing ecolor and elinewidth and capsize # the xerr and yerr arguments are set to accept the error variables from above plt.errorbar(x_data, y_data, xerr = x_errors, yerr = y_errors, fmt='.', markersize = 8, ecolor='black', elinewidth = 1, capsize=2)
###Output _____no_output_____
###Markdown Map PlottingIf your data has a geospatial component to it, you might be interested in looking at your data on a map. __If your data does not depend on location, you get to skip this part.__ Your TA has set up the following cells for you to use and make your maps. This requires special topographical files called shapefiles. Your TA can provide these for you, and you can upload them to your lab2 folder for use in the cell below.
###Code import geopandas as gpd # this is the mapping package we'll use # the next two lines will load the shapefiles you uploaded toronto_map = gpd.read_file("./toronto-centreline-wgs84-latitude-longitude/CENTRELINE_WGS84.shp") peel_map = gpd.read_file("./Street_Centre_Line-shp-Peel/StreetCentreLine.shp") # the coordinate reference systems of these two maps are different, # so these next two lines make them the same crs = toronto_map.crs peel_map = peel_map.to_crs(crs) print("Maps loaded.")
###Output _____no_output_____
###Markdown The following cell has some code your TA has prepared to make your mapping experience a little smoother. Mostly you need to change the plt.scatter() arguments to whatever data you want to plot.
###Code # the map plots only work in inline plotting mode %matplotlib inline fig, ax = plt.subplots() # creates a new figure fig.set_dpi(300) # sets a high resolution # the next two lines plot the map data we loaded in the previous cell # you can try changing the color parameter and the linewidth parameter. # the zorder parameter forces these elements to be beneath anything else you plot above it toronto_map.plot(ax=ax, color='k', facecolor='w', linewidth=0.1, zorder=0) peel_map.plot(ax=ax, color='k', facecolor='w', linewidth=0.1, zorder=0) # make a scatter plot of your data below! # you need to change the arguments to meet your plotting needs # remember s will set the size and c will change the colours # cmap can set the colour map plt.scatter(pm_longs, pm_lats, s=5, c=pm2, marker='.') # add a colour bar # the shrink parameter can change the size relative to the figure plt.colorbar(shrink = 0.8, label="PM (units)") # the next few lines set the x and y limits # this means it matches the latitude and longitude window of your data xmin = np.min(pm_longs) - 0.001 # find the smallest longitude xmax = np.max(pm_longs) + 0.001 # find the largest longitude ymin = np.min(pm_lats) - 0.001 # find the smallest latitude ymax = np.max(pm_lats) + 0.001 # find the largest latitude plt.xlim(xmin, xmax) plt.ylim(ymin, ymax) # the next few lines make sure the longitude and latitude scales don't # get distorted based on the size of your figure.
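# in other words: dividing the longitude span by the chosen width gives degrees per inch,
# and reusing that degrees-per-inch ratio for the latitude span picks a figure height that
# keeps one degree the same length on both axes (note this matches degrees, not kilometres,
# since east-west degrees are physically shorter than north-south degrees away from the equator)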
x_dimension = 6.5 # this x dimension is in inches, you can set this however you like x_aspect_ratio = np.abs(xmin - xmax) / x_dimension # find the ratio of longitude to inches y_dimension = np.abs(ymin - ymax) / x_aspect_ratio # use the ratio to get the y dimension inches fig.set_size_inches(x_dimension, y_dimension) # set the figure size plt.show() # show your map! ###Output _____no_output_____ ###Markdown Example of advanced plottingBelow is am example of plotting more complicated things. You are not expected to recreate this. Just inspriration! ###Code import numpy as np fig,ax = plt.subplots(figsize=(4,4)) plt.rcParams.update({"font.size":12}) fig.set_size_inches(6.5,3) fig.set_dpi(300) string = 'C2' boxprops = {'color':string,'alpha':0.3,'facecolor':string} whiskerprops = {'color':string,'alpha':0.7} capprops = {'color':string,'alpha':0.7} medianprops = {'color':string,'alpha':1.0} meanprops = {'marker':'.','color':string,'alpha':1.0,'markersize':2} flierprops = {'markeredgecolor':string,'alpha':0.5,'markersize':1} #length = 300 #array = np.random.randn(length) end = (pm10.shape[0] % 100) * -1 array = np.asarray(pm10[0:end]) length = pm10[0:end].shape[0] #time = np.linspace(0,120,length) n_bins = int(length/100) print(n_bins) print(array.shape) binned_time = np.asarray(pm_rel_time[0:-14:100]) binned_data = np.transpose(np.reshape(array, (n_bins,int(length/n_bins)))) #binned_time = np.reshape(time, (n_bins,int(length/n_bins))) #binned_time = np.mean(binned_time, axis=1) print(binned_data.shape) print(binned_time.shape) plt.boxplot(binned_data, showmeans=True, patch_artist=True,\ boxprops = boxprops, whiskerprops = whiskerprops, capprops = capprops,\ medianprops = medianprops, meanprops = meanprops, flierprops = flierprops) from matplotlib.ticker import FormatStrFormatter plt.gca().xaxis.set_major_formatter(FormatStrFormatter('%1.f')) plt.show() ###Output _____no_output_____ ###Markdown Saving your dataThe last important step is to export your data from this notebook to a csv file. You can save individual arrays like so: ###Code save_filename = "sample0.csv" # change this to a descriptive file name np.savetxt(save_filename, CO2_vmr, delimiter=',') #second argument should be the array you want to save ###Output _____no_output_____ ###Markdown Your TA has also included a function to save all your airbeam data and aeroqual data together in Lab2 Functions. If you've imported multiple data sets, you'll need to use multiple function calls and use the appropriate variable names. ###Code # change these to descriptive file names filename_pm = "sample0.csv" filename_CO2 = "sample1.csv" filename_O3 = "sample2.csv" #The following function will save your Airbeam2 data lab2.SaveAirbeam2(filename_pm, pm_datetimes, pm_rel_time, pm1, pm2, pm10, pm_temp, pm_rh) #The following function will save your Aeroqual monitor data lab2.SaveAeroqual(filename_CO2, CO2_datetimes, CO2_rel_time, CO2_vmr) lab2.SaveAeroqual(filename_O3, O3_datetimes, O3_rel_time, O3_vmr) ###Output _____no_output_____
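###Markdown If you prefer to keep one sensor's measurements together in a single table, a lightweight alternative (sketched here using the CO2 arrays loaded earlier; the lab2 save functions above remain the intended route) is to bundle them into a pandas DataFrame and write it out with to_csv: ###Code
import pandas as pd

# bundle the CO2 arrays into one table and save it
# (assumes the CO2_* variables created by the loading cells earlier in this notebook)
co2_df = pd.DataFrame({
    "datetime": CO2_datetimes,
    "rel_time_s": CO2_rel_time,
    "co2_vmr": CO2_vmr,
})
co2_df.to_csv("CO2_data_export.csv", index=False)   # index=False keeps the extra index column out of the file
print(co2_df.head())
###Output _____no_output_____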
EDA and Feat Craft.ipynb
###Markdown 1 None订单 ###Code order_is_None = order_products_train.groupby(['order_id'])['reordered'].sum().reset_index() len(order_is_None[order_is_None.reordered == 0]) / len(order_is_None[order_is_None.reordered > 0]) a = pd.merge(order_is_None, orders, how = 'left', on = ['order_id']) ###Output _____no_output_____ ###Markdown prior、train订单 ###Code order_products_all = pd.concat([order_products_prior, order_products_train], axis = 0) ###Output _____no_output_____ ###Markdown 2 How many products do users buy each time- 每张订单的商品数目 ###Code grouped = order_products_prior.groupby("order_id")["add_to_cart_order"].aggregate("max").reset_index() grouped.add_to_cart_order.describe() ###Output _____no_output_____ ###Markdown 3 Do users purchase different numbers of products each time?- 用户每次购买的商品数目一样麽 ###Code grouped = pd.merge(grouped, orders, on = ['order_id'], how = 'left')[['user_id', 'add_to_cart_order', 'order_number', 'order_dow', 'order_hour_of_day', 'days_since_prior_order']] grouped = grouped.sort_values(['user_id', 'order_number']) grouped.columns = ['user_id', 'num_products', 'order_number', 'order_dow', 'order_hour_of_day', 'days_since_prior_order'] user_num_product = grouped.groupby(['user_id'])['num_products'].agg({'mean':'mean', 'std':'std'}) with open(DATA_DIR + 'user_num_product_stat.pkl', 'wb') as f: pickle.dump(user_num_product, f, pic) with open(constants.FEAT_DATA_DIR + 'user_num_product_stat.pkl', 'rb') as f: user_num_product = pickle.load(f) user_num_product['std'].describe() ###Output _____no_output_____ ###Markdown 4 Reorder Rate - 每张订单中重复购买商品比例 ###Code grouped = order_products_all.groupby("product_id")["reordered"].aggregate({'reorder_sum': sum,'reorder_total': 'count'}).reset_index() grouped['reorder_probability'] = grouped['reorder_sum'] / grouped['reorder_total'] grouped = pd.merge(grouped, products[['product_id', 'product_name']], how='left', on=['product_id']) grouped = grouped[grouped.reorder_total > 75].sort_values(['reorder_probability'], ascending=False)[:10] prior_reorder_rate = order_products_prior.groupby(['order_id'])['reordered'] \ .aggregate({'reorder_pnum':'sum', 'pnum':'count'}) prior_reorder_rate['reorder_rate'] = prior_reorder_rate['reorder_pnum'] / prior_reorder_rate['pnum'] prior_reorder_rate.reset_index(inplace=True) prior_orders = orders[orders.eval_set == 'prior'] prior_orders = pd.merge(prior_orders, prior_reorder_rate, on = ['order_id'], how = 'left') prior_orders.head(5) user_reorder_est = prior_orders.groupby(['user_id'])['reorder_pnum']\ .aggregate({'reorder_pnum_mean':'mean', 'reorder_pnum_std':'std'}).reset_index() user_reorder_est = user_reorder_est[['user_id', 'reorder_pnum_mean', 'reorder_pnum_std']] with open(constants.FEAT_DATA_DIR + 'user_reorder_est.pkl', 'wb') as f: pickle.dump(user_reorder_est, f, pickle.HIGHEST_PROTOCOL) with open(constants.FEAT_DATA_DIR + 'user_reorder_est.pkl', 'rb') as f: user_reorder_est = pickle.load(f) user_reorder_est.reorder_pnum_std.describe() ###Output _____no_output_____ ###Markdown 5 Products User Bought Previously ###Code users_products = pd.merge(prior_orders, order_products_prior, on = ['order_id'], how = 'left') users_products = users_products.groupby(['user_id'])['product_id'].apply(list).reset_index() with open(DATA_DIR + 'user_product.pkl', 'wb') as f: pickle.dump(users_products, f, pickle.HIGHEST_PROTOCOL) with open(constants.FEAT_DATA_DIR + 'user_product.pkl', 'rb') as f: users_products = pickle.load(f) l = users_products.product_id.apply(len) l.describe() ###Output _____no_output_____ 
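###Markdown A related summary that is often useful later on is how many times each user bought each product. A minimal sketch (reusing the prior_orders and order_products_prior frames from the cells above) could look like this: ###Code
# per (user, product) purchase counts, a natural companion to the per-user product lists above
up_counts = (
    pd.merge(prior_orders[['order_id', 'user_id']],
             order_products_prior[['order_id', 'product_id']],
             on=['order_id'], how='left')
      .groupby(['user_id', 'product_id'])
      .size()
      .reset_index(name='up_purchase_count')
)
up_counts.up_purchase_count.describe()
###Output _____no_output_____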
###Markdown 6 Candidate Products- last purchase- reorder items- all items that has high reorder rate- items that are added to cart first ###Code grouped = order_products_all.groupby("product_id")["reordered"].aggregate({'reorder_sum': sum,'reorder_total': 'count'}).reset_index() grouped['reorder_probability'] = grouped['reorder_sum'] / grouped['reorder_total'] ###Output _____no_output_____ ###Markdown 7 Time of orders ###Code grouped = orders.order_hour_of_day.value_counts() sns.set_style('darkgrid') sns.barplot(grouped.index, grouped.values) plt.show() ###Output /usr/local/lib/python3.5/dist-packages/matplotlib/font_manager.py:1297: UserWarning: findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans (prop.get_family(), self.defaultFamily[fontext])) ###Markdown 8 Topic Distance- user VS product prior中的所有(u,p)对- latest order VS product 通过LDA-transform来构造 ###Code # term-frequency matrix construct orders = pd.read_csv(DATA_DIR + 'orders.csv') users_orders = pd.merge(order_products_prior, orders[['user_id', 'order_id']], on = ['order_id'], how = 'left') users_products_matrix = users_orders.groupby(['user_id'])['product_id'].apply(series_to_str) tf = CountVectorizer(analyzer = 'word', lowercase = False, max_df=0.95, min_df=2,) tf_matrix = tf.fit_transform(users_products_matrix.values) tf_feature_names = tf.get_feature_names() with open(DATA_DIR + 'tf.model', 'wb') as f: pickle.dump(tf, f, pickle.HIGHEST_PROTOCOL) #订单的Topic, tf为CountVector,将文档转化为term-frequency矩阵 op = order_products_prior.groupby(['order_id'])['product_id'].apply(series_to_str) topic_order = pd.DataFrame(lda.transform(tf.transform(op.values)), columns= ["topic_%d"%x for x in range(10)]) topic_order['order_id'] = op.index.values with open(DATA_DIR + 'order_topic_norm.pkl', 'wb') as f: pickle.dump(topic_order_norm, f, pickle.HIGHEST_PROTOCOL) up_distance = pd.merge(users_orders[['user_id', 'product_id']].drop_duplicates(), user_topic, on = ['user_id'], how = 'left') up_distance.columns = ['user_id', 'product_id'] + ["u_topic_%d"%x for x in range(10)] up_distance = pd.merge(up_distance, topic_product, on = ['product_id'], how = 'left') up_distance.columns = ['user_id', 'product_id'] + ["u_topic_%d"%x for x in range(10)] + ["p_topic_%d"%x for x in range(10)] def cal_up_distance(subf): u_topic = subf[["u_topic_%d"%x for x in range(10)]] p_topic = subf[["p_topic_%d"%x for x in range(10)]] upd = euclidean(u_topic, p_topic) return upd # 3 hours up_distance['up_dis'] = up_distance.apply(cal_up_distance, axis = 1) up_distance = up_distance[['user_id', 'product_id', 'up_dis']] with open(DATA_DIR + 'upd_feat.pkl', 'wb') as f: pickle.dump(up_distance, f, pickle.HIGHEST_PROTOCOL) ###Output _____no_output_____ ###Markdown 9 Order Topic Construct- countvector, lda transform- 由商品的Topic构造订单的Topic表达- 商品加入购物车的次序??? 先忽视次序- 每个用户学习:加购物车次序 VS 重购? VS下张订单的Topic?? 
###Code order_topic = pd.merge(order_products_prior[['order_id', 'product_id']], topic_product, on = ['product_id'], how = 'inner')#throw stop words order_topic = order_topic.groupby(['order_id'])[["topic_%d"%x for x in range(10)]].sum().reset_index() unorm = order_topic[["topic_%d"%x for x in range(10)]].values order_topic[["topic_%d"%x for x in range(10)]] = unorm / unorm.sum(axis = 1)[:,np.newaxis] len(order_products_prior.product_id.unique()) len(topic_product.product_id.unique()) ###Output _____no_output_____ ###Markdown 10 XGBoost Feature Preparation- 正负样本10:1 ###Code import constants, utils, transactions, feats from imp import reload tle = transactions.TransLogExtractor(constants.RAW_DATA_DIR, constants.FEAT_DATA_DIR) train_none = feats.make_train_or_test_none(tle, 'train') test_none = feats.make_train_or_test_none(tle, 'test') train = feats.make_train_or_test(tle, 'train') utils.check_inf_nan(train[up_cols]) utils.check_inf_nan(train[ua_cols]) utils.check_inf_nan(train[ud_cols]) utils.check_inf_nan(train[p_cols]) utils.check_inf_nan(train[a_cols]) utils.check_inf_nan(train[d_cols]) utils.check_inf_nan(train[ctx_cols]) utils.check_inf_nan(train[topic_cols]) ###Output _____no_output_____ ###Markdown 11 LSTM Feature Preparation - (u,p,t)- 间隔、加购物车次序作为Symbol - 次序 - 1 - 2 - 3 - 4-6 - 7-11 - 12 —— - 间隔 - 1 - 7 - 8 - 16 - 17 - 33 - 34 - 100 NAN - 实现 - Encoder两个列, 总共30种符号 - Cartesian查表 - 直接数值 ###Code users_orders = tle.get_users_orders('prior') product_feat = tle.craft_feat_item('products') user_feat = tle.craft_feat_user() users_orders = pd.merge(users_orders, product_feat[['product_id', 'p_reorder_probability']], on=['product_id'], how='left') users_orders = pd.merge(users_orders, user_feat[['user_id', 'u_total_reorders']], on=['user_id'], how='left') def encode_numeric(row, bins): ''' convert numeric variable into binned category bins = [b1, b2, b3, b4] ''' index = ~(row < bins) return [bins[index][-1]] add2cart_bins = np.array([1, 2, 3, 4, 7, 12], dtype=float) # 6 interval_bins = np.array([-1, 4, 8, 17, 34], dtype=float)# 5 p_reorder_bins = np.array([0.0, 0.20, 0.38, 0.53], dtype=float)# 4 u_reorder_bins = np.array([0, 10, 33, 101], dtype=float)# 4 %%time users_orders = users_orders.sort_values(['user_id', 'product_id', 'order_number'], ascending = False) users_orders['up_interval'] = users_orders.groupby(['user_id', 'product_id'])['days_up_to_last'].diff() users_orders.up_interval.fillna(-1, inplace=True) users_orders['up_interval_sym'] = users_orders.up_interval.apply(lambda x: encode_numeric(x, interval_bins)) users_orders['up_add2cart_order_sym'] = users_orders.add_to_cart_order.apply(lambda x: encode_numeric(x, add2cart_bins)) users_orders['p_reorder_prob_sym'] = users_orders.p_reorder_probability.apply(lambda x: encode_numeric(x, p_reorder_bins)) users_orders['u_reorder_sym'] = users_orders.u_total_reorders.apply(lambda x:encode_numeric(x, u_reorder_bins)) feat_card = [add2cart_bins, interval_bins, p_reorder_bins, u_reorder_bins] feat_cartesian = cartesian(feat_card) users_orders['up_card'] = users_orders.up_add2cart_order_sym + users_orders.up_interval_sym + users_orders.p_reorder_prob_sym + users_orders.u_reorder_sym def encode_cartesian(row, feat_cartesian): ''' lookup table turn a group of categorical variable into a symbol ''' sym = np.where(np.all(row == feat_cartesian,axis=1))[0][0] + 1 return sym %%time users_orders['up_airr_sym'] = users_orders.up_card.apply(lambda x: encode_cartesian(x, feat_cartesian)) up_airr_sym = users_orders[['user_id', 'product_id', 'order_number', 
'up_airr_sym']] up_airr_sym.sort_values(['user_id', 'product_id', 'order_number'], inplace=True) up_airr_sym_list = up_airr_sym.groupby(['user_id', 'product_id'])['up_airr_sym'].apply(list).reset_index() with open(constants.FEAT_DATA_DIR + 'up_airr_sym.pkl', 'wb') as f: pickle.dump(up_airr_sym_list, f, pickle.HIGHEST_PROTOCOL) ###Output _____no_output_____ ###Markdown (u,p)对时间间隔预测Time Series Forcasting 问题- 方案1:用之前的Timestep对当前值进行回归预测- 方案2:LSTM 仅仅包含购买间隔信息 - 样本(sample):(u,p,oid) - 特征(feature):两次购买之间的间隔 - 预处理 - 只出现一次的(u,p)无法计算间隔,NAN 丢弃 - p_purchase_interval:距离下次购买的时间 - 间隔为0的删除,同一天内购买两次视为一次 - 为了training,间隔序列的长度 >=2 即(u,p)在prior里至少出现3次 ###Code users_orders = tle.get_users_orders(prior_or_train='prior') a = users_orders[['user_id', 'order_number', 'product_id', 'days_up_to_last', 'p_purchase_interval']].sort_values(['user_id', 'order_number', 'p_purchase_interval']) del users_orders a.sort_values(['user_id', 'product_id', 'order_number'], ascending=False, inplace=True) %%time a['up_interval'] = a.head(1000).groupby(['user_id', 'product_id'])['days_up_to_last'].diff() a.sort_values(['user_id', 'product_id']) print("number of (u,p,t) tuples: %d"%len(users_orders)) del users_orders # free memory usage users_orders_intervals = users_orders.dropna() #throw away product_id bought only once users_orders_intervals = users_orders_intervals[users_orders_intervals.p_purchase_interval > 0] # throw away record buy in the same day users_orders_intervals = users_orders_intervals.sort_values(['user_id', 'product_id', 'order_number']) %%time up_interval_list = users_orders_intervals.groupby(['user_id', 'product_id'])['p_purchase_interval'].apply(list).reset_index() len(up_interval_list) del users_orders_intervals # free memory usage up_interval_list['len'] = up_interval_list.p_purchase_interval.apply(lambda x: len(x)) up_interval_list = up_interval_list[up_interval_list.len >= 2] # for train/test split with open(constants.FEAT_DATA_DIR + 'up_interval_feat.pkl', 'wb') as f: pickle.dump(up_interval_list, f, pickle.HIGHEST_PROTOCOL) len(up_interval_list) up_interval_list.len.describe() ###Output _____no_output_____
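###Markdown To feed these variable-length interval lists to an LSTM they eventually need a fixed shape. One simple option, sketched here with plain numpy (the maxlen of 10 and the pad value of -1 are arbitrary choices, not values used elsewhere in this project), is to left-pad every list to a common length: ###Code
import numpy as np

def pad_intervals_left(seqs, maxlen=None, pad_value=-1.0):
    """Left-pad variable-length lists into a (n_sequences, maxlen) float array."""
    if maxlen is None:
        maxlen = max(len(s) for s in seqs)
    out = np.full((len(seqs), maxlen), pad_value, dtype=float)
    for i, s in enumerate(seqs):
        s = s[-maxlen:]                 # keep only the most recent intervals if the list is too long
        out[i, maxlen - len(s):] = s    # right-align so the latest interval is always the last column
    return out

# example with the interval lists built above
X = pad_intervals_left(up_interval_list.p_purchase_interval.tolist(), maxlen=10)
print(X.shape)
###Output _____no_output_____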
_notebooks/2022-02-06-r.ipynb
###Markdown data.frame, List, stringr - data.frame 객체 자료 처리 함수 ###Code df = data.frame(x=1:5, y=seq(1,9,2),z=c('abfa','aavd','avs','S','S')) df str(df) # 데이터 프레임의 구조를 보여준다. ###Output 'data.frame': 5 obs. of 3 variables: $ x: int 1 2 3 4 5 $ y: num 1 3 5 7 9 $ z: chr "abfa" "aavd" "avs" "S" ... ###Markdown - 5 obs. of 3 variables : 5개의 관측치와 3개의 변수로 구성됨 ###Code ncol(df) nrow(df) # 칼럼명 반환 names(df) df[c(2,3),1] # R은 파이썬과 달리 index가 1부터 시작함! df[1] # 첫 번째 열 summary(df) # 요약 통계량을 볼 수 있다. # 숫자로 구성된 칼럼에 대해서만 수행된다. df[,c(1,2)] df[c(1,2)] apply(df[,c(1,2)],2,sum) apply(df[c(1,2)],2,sum) ###Output _____no_output_____ ###Markdown - 데이터프레임의 부분 객체 만들기 - 데이터 프레임 객체의 데이터를 대상으로 조건에 만족하는 행을 추출하여 독립된 객체를 생성할 수 있다. ###Code x1 <- subset(df,x>=3) x1 ###Output _____no_output_____ ###Markdown - 행 기준이다. ###Code x2 <- subset(df, x>=2 & y<=6) x2 ###Output _____no_output_____ ###Markdown - 이렇게 두 개의 조건으로 부분 객체를 만들 수도 있다. --- ###Code sid = c('a','b','c','d') score = c(12,123,13,5) subject = c('컴퓨터1','컴퓨터2','컴퓨터3','컴퓨터4') student = data.frame(sid ,score, subject) student mode(student) class(student) str(sid);str(score);str(subject);str(student) # 벡터 자료구조와 데이터 프레임 자료구조 ###Output chr [1:4] "a" "b" "c" "d" num [1:4] 12 123 13 5 chr [1:4] "컴퓨터1" "컴퓨터2" "컴퓨터3" "컴퓨터4" 'data.frame': 4 obs. of 3 variables: $ sid : chr "a" "b" "c" "d" $ score : num 12 123 13 5 $ subject: chr "컴퓨터1" "컴퓨터2" "컴퓨터3" "컴퓨터4" ###Markdown --- ###Code h <- data.frame(id = c(1,2), height = c(123,1234)) w <- data.frame(id2 = c(1,2), weight = c(123,43)) h;w merge(h,w,by.x='id',by.y='id2') ###Output _____no_output_____ ###Markdown - by.x='id',by.y='id2' -> 병합 시에 기준이 되는 칼럼명이 상이할 때 사용함 --- ###Code install.packages('UsingR') # 패키지 설치 library(UsingR) # 패키지 로드 data(galton) # galton 데이터 셋 가져오기 str(galton) dim(galton) head(galton, 5) ###Output _____no_output_____ ###Markdown --- - List 자료구조 - List는 성격이 다른 자료형(문자열, 숫자형, 논리형)과 자료구조(벡터, 행렬, 리스트, 데이터 프레임)를 객체로 생성할 수 있다. - 하나의 메모리 영역에는 키과 값이 한 쌍 - Python의 dict 자료구조와 유사하다. - list 생성 함수 : list() - list 자료 처리 함수 : unlist(), lapply(), sapply() - list는 키와 값을 한 쌍으로 하여 원소가 저장되는 자료구조이다. 만약 키를 생략하면 자동으로 기본 키가 생성된다. ###Code list1 <- list('lee','lee2',95) # list 객체 생성 list1 ###Output _____no_output_____ ###Markdown - key를 지정하지 않아서 임의로 지정됐음- list 객체는 키를 통해서 값이 저장되기 때문에 서로 다른 자료형을 저장할 수 있다. - list와 data.frame은 상이한 자료형을 혼합할 수 있다. - list를 vector로 변경해보자 ###Code unlist <- unlist(list1) unlist ###Output _____no_output_____ ###Markdown - 리스트 자료구조에 다량의 데이터가 저장되는 경우 리스트 형태로 출력하면 여러 줄로 출력되기 때문에 벡터 형식으로 변환할 경우 자료 처리가 용이해진다. - `character > numeric > logical 순서로 벡터에 저장되기 때문에 전부 character로 변환하여 반환되었음` ###Code list2 <- list(c(1:5),c(5:1)) list2 list3 <- list(matrix(1:6,2),array(1:12,c(3,2,2))) list3 ###Output _____no_output_____ ###Markdown - `이렇게 list 객체의 value에 저장될 수 있는 자료구조는 VECTOR 뿐만이 아니라 matrix 혹은 array도 가능하다.` --- - key 명명하자 ###Code list4 <- list(name = c('홍길동','유관순'), age = c(1234,1245)) list4 list4$name list4$name[2] list4$age list4$age[1] list4$age[1]<-1245 # 원소 수정 가능 list4$newkey <- 'asdf' # 새로운 키 추가 list4$newkey[c(1,2)] <- c(124,1255) # 새롭게 추가 된 키에 새로운 value 추가 list4 length(list4) length(list4$name) mode(list4) class(list4) list4$new<- NULL list4 ###Output _____no_output_____ ###Markdown - 일부 key와 value 제거 ###Code list4 <- NULL list4 ###Output _____no_output_____ ###Markdown - 모두 제거 --- - 리스트 객체의 자료 처리 함수 ###Code a = list(c(1:5)) b = list(c(6:10)) lapply(c(a,b),max) # 리스트 객체에 max 함수 적용 ###Output _____no_output_____ ###Markdown - lapply() 함수는 두 개의 리스트 객체 a와 b를 대상으로 max() 함수를 적용하여 각 리스트 객체의 자료 중에서 가장 큰 값을 리스트 형태로 반환한다. 
- 동일한 결과를 벡터 형식으로 반환해보자 ###Code sapply(c(a,b),max) ###Output _____no_output_____ ###Markdown - lapply()는 연산 결과를 리스트 형태로 반환하지만, sapply()는 결과를 벡터형식으로 반환하기 때문에 많은 원소를 포함하고 있는 리스트 객체를 보다 효과적으로 처리할 수 있다. --- - 다차원 리스트 객체를 생성해보자 - 리스트 자료구조에 또 다른 리스트가 중첩된 자료구조를 다차원 리스트라고 한다. - 즉 value가 list이다 ###Code complex = list(c1 = list(1,2,3), c2 = list(4,5,6), c3 = list(7,8,9)) complex complex$c1 ###Output _____no_output_____ ###Markdown - 다차원 리스트를 열 단위로 바인딩하기 ###Code do.call(cbind, complex) class(do.call(cbind, complex)) do.call(rbind, complex) class(do.call(rbind, complex)) ###Output _____no_output_____ ###Markdown - 3개의 value를 구성하는 list 자료가 열 단위로 묶여서 matrix 객체가 생성된다. 특히 do.call() 함수는 다차원 리스트를 구성하는 리스트를 각각 분해한 후 지정된 함수(cbind 또는 rbind)를 호출하여 리스트 자료를 처리하는 데 효과적이다. --- - 텍스트 자료나 SNS에서 가공 처리된 빅데이터를 처리하기 위해서는 필요한 문자열을 적절하게 자르고 교체하고 추출하는 작업이 중요하다. 문자열을 효과적으로 처리하는 stringr 패키지에 대해 알아보자 ###Code # 패키지 설치 install.packages('stringr') library(stringr) str_extract("홍길동35이순신45유관순25","[1-9]{2}") str_extract_all("홍길동35이순신45유관순25","[1-9]{2}") ###Output _____no_output_____ ###Markdown - str_extract() 함수는 지정된 문자열을 대상으로 정규 표현식 '[1-9]{2}'의 패턴(숫자 2개가 연속된 경우)과 일치하는 가장 처음에 발견된 문자열을 추출해준다.- str_extract_all는 지정된 문자열을 대상으로 정규 표현식 '[1-9]{2}'의 패턴(숫자 2개가 연속된 경우)과 일치하는 모든 문자열을 추출해준다. --- - 정규 표현식 - 문자열 처리 관련 함수는 대부분 정규표현식을 이용하여 문자열의 패턴을 검사하고 해당 문자열을 대상으로 문자열을 교체하거나 추출하게 된다. 정규표현식은 약속된 기호인 메타문자들에 의해 표현된다. - 반복 관련 정규 표현식 - []기호는 대괄호 안의 문자가 한 번만 반복되고, {n}은 n만큼 반복된다. - 예를 들면 [a-z]의 정규 표현식은 영문 소문자 a에서 z까지 범위 중에서 한 개의 영문 소문자를 의미하고 [a-z]{3}은 영문 소문자가 연속으로 3개 발생한다는 의미이다. ###Code string = 'rhkrehtjddms123 dl rhdqnfmf444 Rh곽도성 xhdekfgkf rjtdlek.' str_extract_all(string, '[a-z]{4}') # 영문 소문자가 4글자 연속하는 경우 추출 str_extract_all(string, '[a-z]{4,}') # 영문 소문자가 4글자 **이상** 연속하는 경우 추출 str_extract_all(string, '[a-z]{3,5}') # 영문 소문자가 4글자 **이상** 5글자 **이하** 연속하는 경우 추출 str_extract_all(string, 'rhkrehtjd') # 해당 문자열 추출 str_extract_all(string,'2') # 해당 숫자 추출 str_extract_all(string,'[가-힣]{2,}') # 연속된 3개 이상의 한글 문자열 추출 ###Output _____no_output_____ ###Markdown - 대문자 추출할 땐 [A-Z]를 이용한다.- 제외할 땐 [^a-z]- 영문자는 일단 제외하고 남은 것중 4글자 추출 - [^a-z]{4} , 남은 것중 문자종류 상관없이 4글자 추출 --- - 한 개의 숫자와 단어 관련 정규표현식 ###Code jm = '12344-125215' str_extract(jm,'[0-9]{5}-[0-9]{2,}') str_extract_all(jm,'\\d{5}-\\d{6}') string = 'rhkrehtjddms123 dl rhdqnfmf444 Rh곽도성 xhdekfgkf rjtdlekrhkrehtjd.' str_extract_all(string,'\\w{3,}') # 3글자 이상의 단어만 추출, 허나 특수문자는 포함하지 않는다. str_length(string) # 문자열 내에서 특정 문자열의 첫 번째 index 시작과 끝 str_locate(string,'rhkrehtjd') # 문자열 내에서 특정 문자열의 모든 위치의 index 시작과 끝 str_locate_all(string,'rhkrehtjd') ###Output _____no_output_____ ###Markdown --- - 부분 문자열 만들기 ###Code string sub = str_sub(string,1,length(string)-20) sub ###Output _____no_output_____ ###Markdown - 대소문자 변경 ###Code change = str_to_upper(string) change change = str_to_lower(string) change ###Output _____no_output_____ ###Markdown - 문자열 교체 ###Code string rep1 = str_replace(string,'123','dl') rep1 # 123을 dl로 변경 ###Output _____no_output_____ ###Markdown - 문자열 결합 ###Code str_c(string,rep1) str_c(string,'1111111111111') ###Output _____no_output_____ ###Markdown - 문자열 분리 ###Code a=str_split(string,'r') # r기준으로 문자열 분리 a mode(a) class(a) ###Output _____no_output_____ ###Markdown - 문자열 합치기 ###Code new = c('asd','asdadfd','asds','aqefsd') # 콤마를 기준으로 문자열 벡터 합치자 join = paste(new,collapse = ',') join ###Output _____no_output_____
Investigation-financial-data-tools.ipynb
###Markdown **Financial Data Tools: Tiingo and Pandas Data Reader**This notebook contains a short demonstration of some of the features of the Tiingo API and the Pandas Data Reader library.Tiingo documentation:Tiingo API: https://api.tiingo.comTiingo on Pypi.org: https://pypi.org/project/tiingo/descriptionTiingo on readthedocs: https://tiingo-python.readthedocs.io/en/latest/readme.htmlusagePandas Data Reader documentation:GitHub: https://github.com/pydata/pandas-datareaderReadthedocs (Tiingo section): https://pandas-datareader.readthedocs.io/en/latest/readers/tiingo.htmlmodule-pandas_datareader.tiingo First, install tiingo. You can do this from the command line if you prefer. ###Code !pip install tiingo import numpy as np import pandas as pd ###Output _____no_output_____ ###Markdown For some reason, I need to point the code to where tiingo is installed on my machine. YMMV. ###Code import sys sys.path.append("/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages") ###Output _____no_output_____ ###Markdown Import TiingoClient and initialize. ###Code from tiingo import TiingoClient # Set TIINGO_API_KEY in your environment variables in your .bash_profile, OR # pass a dictionary with 'api_key' as a key into the TiingoClient. config = {} # To reuse the same HTTP Session across API calls (and have better performance), include a session key. config['session'] = True ### You will need to get an API key from https://api.tiingo.com ### The API key is free and includes a max number of requests per hour and day, and data transfer. ### Paid account will give more requests and data transfer. # If you don't have your API key as an environment variable, # pass it in via a configuration dictionary. config['api_key'] = "Your API key here" #Initialize client = TiingoClient(config) ###Output _____no_output_____ ###Markdown Now install Pandas Data Reader (here, or from the command line). ###Code !pip install pandas-datareader import os import pandas_datareader as pdr ###Output _____no_output_____ ###Markdown You can pass individual ticker symbols or a list of ticker symbols into the functions. Let's get the metadata for Gamestop (ticker symbol GME). ###Code gamestop_metadata = client.get_ticker_metadata("GME") print(gamestop_metadata) ##Here is the metadata about Google in a dataframe format (a bit easier to read!): df_gamestop=pd.DataFrame.from_dict(gamestop_metadata, orient='index') df_gamestop ###Output _____no_output_____ ###Markdown Instead of single ticker symbols, you can pass a list of ticker symbols into client.get_ticker_metadata. Note that ticker symbols do not have to be uppercase. ###Code company_symbols = ['EXEL', 'MSFT', 'gme','AMGN', 'DNA'] dict_list=[] for symbol in company_symbols: dict_companies=client.get_ticker_metadata(symbol) dict_list.append(dict_companies) df__company_symbols=pd.DataFrame(dict_list) df__company_symbols.head() ###Output _____no_output_____ ###Markdown What if we want to get historical price data for our list of stocks? Use pdr.get_data(), pass in the list of symbols, date range and API key. The last few days in January 2021 were interesting for Gamestop...![Screen%20Shot%202021-03-09%20at%207.59.18%20PM.png](attachment:Screen%20Shot%202021-03-09%20at%207.59.18%20PM.png) ###Code #Pass in the list of ticker symbols, and pandas datareader will download #the stock data for the specified time period. #Need to add error handling for KeyErrors, because some dates are out of range. 
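# (the loop below wraps each download in try/except so that one problematic symbol or
#  date range does not abort the whole batch; the successful frames are collected in a
#  list and concatenated into a single DataFrame a few cells further down)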
test_hist_data_list=[] for symbol in company_symbols: try: df_test_prices = pdr.get_data_tiingo(symbol, start='2021-01-25', end='2021-01-29', pause=0.2, api_key='Your API Key here') test_hist_data_list.append(df_test_prices) except KeyError as ke: print('KeyError ', ke) ###Output KeyError 'date' ###Markdown One of our stocks had a very interesting couple of days. Let's take a look. ###Code print(test_hist_data_list[2]) df_test_hist_data=pd.concat(test_hist_data_list, axis=0) df_test_hist_data ###Output _____no_output_____ ###Markdown What if we want to read news about specific stocks, topics, or just news in general? https://tiingo-python.readthedocs.io/en/latest/readme.htmlusage ###Code ##Use the Tiingo client to get the news. You can pass in tickers, tags, sources, and start ##end dates. See info on the readthedocs website above. gme_news = client.get_news(tickers=['gme'], #tags=['Laptops'], sources=['washingtonpost.com'], startDate='2021-01-15', endDate='2021-01-29') df_gme_news=pd.DataFrame(gme_news) df_gme_news.head(10) ###Output _____no_output_____ ###Markdown To get a list of symbols for which Tiingo can access data, and save it to a csv file, use: ###Code df_symbols=pdr.tiingo.get_tiingo_symbols() df_symbols.to_csv('tiingo_symbols.csv') ###Output _____no_output_____
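###Markdown The price history downloaded above can also be compared visually. The sketch below assumes the df_test_hist_data frame built earlier, that the Tiingo daily data includes a 'close' column, and that the index levels are named 'symbol' and 'date' (adjust the pivot call if your version names them differently): ###Code
# compare the late-January closing prices of the downloaded symbols
closes = (
    df_test_hist_data.reset_index()
                     .pivot(index='date', columns='symbol', values='close')
)
ax = closes.plot(marker='o', logy=True, title='Closing prices, 2021-01-25 to 2021-01-29')
ax.set_ylabel('close (USD, log scale)')
###Output _____no_output_____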
Stacking-AutomaticChoice.ipynb
###Markdown Get dataset ###Code names_Z = {} names_Z['td'] ='CNN-W2V|NB-TFIDF|KNN-FAST|NB-GLOVE|LR-W2V|NB-FAST|CNN-TFIDF|KNN-TFIDF|NB-W2V|KNN-CV' names_Z['zw'] = 'NB-GLOVE|LR-GLOVE|NB-W2V|SVM-CV|MLP-TFIDF|SVM-FAST|MLP-W2V|NB-CV|SVM-GLOVE|KNN-FAST|SVM-W2V|RF-CV' names_Z['td_zw']= 'NB-W2V|KNN-FAST|LR-FAST|KNN-GLOVE|CNN-CV|EXTRA-W2V|CNN-GLOVE|KNN-CV|NB-GLOVE|CNN-TFIDF|LR-W2V|KNN-TFIDF|NB-TFIDF' def stacking(dataset_name): _, _, label_test, probas_test, label_val, probas_val = load_dataset(dataset_name) all_stacking = [LogisticRegressionCV(class_weight='balanced', cv=10, scoring='f1_macro', n_jobs=5)] all_stacking_names = ['Stacking LR'] names = names_Z[dataset_name] results_Z = np.zeros(len(all_stacking)) X_val, X_test = filter_df_train_test(probas_val, probas_test, names) for idx_clf, clf in enumerate(all_stacking): clf.fit(X_val, label_val) y_pred = clf.predict(X_test) results_Z[idx_clf] = f1_score(label_test, y_pred, average='macro') group_Z_df = pd.DataFrame(results_Z.reshape(1, 1), columns=all_stacking_names, index=[dataset_name]) return group_Z_df group_TD_df = stacking('td') group_ZW_df = stacking('zw') group_TD_ZW_df = stacking('td_zw') pd.concat([group_TD_df, group_ZW_df, group_TD_ZW_df]) ###Output _____no_output_____
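###Markdown The helpers load_dataset and filter_df_train_test used above come from elsewhere in this project and are not shown in this notebook. Purely to illustrate the intent (this is a guess at the behaviour, not the project's implementation), a column filter that keeps only the base models named in the pipe-separated string might look like the sketch below: ###Code
def select_model_columns(probas_val, probas_test, names):
    """Hypothetical sketch of what a helper like filter_df_train_test could do.

    `names` is a pipe-separated string such as 'CNN-W2V|NB-TFIDF|...'; the columns of the
    probability DataFrames are assumed to be keyed by those model names.
    """
    chosen = names.split('|')
    cols = [c for c in probas_val.columns if any(c.startswith(m) for m in chosen)]
    return probas_val[cols], probas_test[cols]
###Output _____no_output_____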
docs/.ipynb_checkpoints/Moist_tephi-checkpoint.ipynb
###Markdown =============================Moist Adiabats on a Tephigram=============================Sept 2018@eoas.ubc.ca This notebook is intended to summarize the work on moist adiabats on a tephigram.
###Code #import necessary packages import numpy as np import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = 10, 7 # that's default image size for this
###Output _____no_output_____
###Markdown Set up thermodynamic constants:
###Code a = 0.285611 #Rd/Cp: for dry air b = 1.35e7 #K^2: Lv^2*Eps/Cp*Rd c = 2488.4 #K: Lv/Cp LvRv = 5422. #K: Lv/Rv T0 = 273.15 #0C Eps = 0.622 #gv/gd: ratio of gas constants dry vs vapour e0 = 0.611 #kPa: Clausius-Clapeyron constant
###Output _____no_output_____
###Markdown Next, set up test values for moist adiabat labels and pressure, and some storage arrays. Note that the range of pressures is defined from 100 to 1 kPa with a 0.01 kPa integration step, to ensure the potential temperature formula doesn't blow up for values less than 1. Also, the tested temperature range is limited to +30C, as past this value the specific heat of air begins to change, which can introduce additional error into the equations.
###Code Prange = np.arange(100,1, -0.01) ThetaW = np.arange(-50,30) adiabats = np.empty((len(Prange),len(ThetaW))) dry_adiabats = np.empty_like(adiabats)
###Output _____no_output_____
###Markdown Now, define formulas for the Clausius-Clapeyron equation, the saturation mixing ratio and dT/dP as follows (note that dTdP below relies on the rs value computed inside the integration loop that follows):\begin{equation}e_s = e_0 * exp(\frac{L_v}{R_v} * (\frac{1}{T_0}-\frac{1}{T}))\end{equation}\begin{equation}r_s = \frac{\varepsilon * e_s}{P - e_s}\end{equation}\begin{equation}\frac{\Delta T}{\Delta P} = \frac{a * T + c * r_s}{P * (1 + \frac{b * r_s}{T^2})}\end{equation}
###Code def f_es(T): return e0*np.exp(LvRv*(1./T0 - 1./T)) def f_rs(P,es): return Eps*es / (P - es) def dTdP(P,T): return (a*T + c*rs)/(P*(1+(b*rs/T**2)))
###Output _____no_output_____
###Markdown Manually integrate to calculate moist adiabats (for each pressure and temperature):
###Code for nT, Temp in enumerate(ThetaW): T = Temp + T0 for nP,Pres in enumerate(Prange): es = f_es(T) rs = f_rs(Pres,es) grad = dTdP(Pres,T) T = T - grad*0.01 adiabats[nP,nT] = T #store moist adiabat dry_adiabats[nP,nT] = T*((100./Pres)**a) #store dry adiabat
###Output _____no_output_____
###Markdown Plot some adiabats (sanity check) every 10C:
###Code plt.plot(adiabats[:,0::10]-T0,Prange); plt.gca().invert_yaxis(); plt.xlabel("moist adiabats [C]"); plt.ylabel("pressure [kPa]");
###Output _____no_output_____
###Markdown Define a function to calculate equivalent potential temperature:\begin{equation} \theta_e = \theta_w * exp(\frac{a_3 * r_{s0}}{\theta_w})\end{equation}
###Code a3 = 2490. #K*kg_air/kg_vapour def f_thE(ThetaW,rs0): return ThetaW * np.exp(a3*rs0/ThetaW)
###Output _____no_output_____
###Markdown Note that the value of $a_3$ will be discussed further below. Next, calculate the equivalent temperature for each moist adiabat:
###Code ThetaE = np.empty((len(ThetaW))) for nT, Temp in enumerate(ThetaW): T = Temp + T0 es0 = f_es(T) rs0 = f_rs(100,es0) ThetaE[nT] = f_thE(T,rs0)
###Output _____no_output_____
###Markdown Finally, we can create an array of normalized adiabats using the following expression:\begin{equation}\theta_{norm} = \frac{\theta - \theta_{dry}}{\theta_e}\end{equation}This should ensure that all adiabats start at 0 at the surface and approach -1 at the top of the atmosphere.
Since we did not test the pressures close to the top of the atmosphere (we stopped at 1kPa) the end of the normalized adiabat should reach values of ~-0.8 ###Code norm_adiabats = (adiabats - dry_adiabats)/ThetaE ###Output _____no_output_____ ###Markdown If successful, all adiabats should collapse into a single shape. ###Code plt.plot(norm_adiabats, Prange); plt.gca().invert_yaxis(); plt.ylabel("pressure [kPa]"); plt.xlabel(r"$\theta_{norm}$"); ###Output _____no_output_____ ###Markdown This appears to work reasonably well for the given range of temperatures and pressures. To evaluate the error we can calculate the standard deviation for each $\theta_{norm}$ value and plot the resultant curve: ###Code spread = np.std(norm_adiabats, axis = 1) plt.plot(spread, Prange); plt.gca().invert_yaxis(); plt.xlabel("standard deviation"); plt.ylabel("pressure [kPa]"); ###Output _____no_output_____
Modelagem/calculo_risco_final.ipynb
###Markdown Definindo segmentos ###Code segmento_infra = ['FATAGUA', 'TELEFFX', 'TELEFFIXA', 'TELEFMOVEL', 'CONDOMINIO', 'ENERGIAELET', 'ALUGUEL', 'SERVTELEFON'] segmento_credito = ['EMPRESCONTA', 'CREDCARTAO', 'FINANCIAMENT', 'CREDITOEFINANCIAMENTO-FINANC'] segmento_processos = ['EXCJUDTRAB', 'FISCALESTADUAL', 'EXECUCAO', 'FISCALFEDERAL', 'FISCALMUNICIPAL', 'EXECUCAO-JE', 'BUSCAEAPREENSAO'] df['segmento'] = df.apply(lambda x : 'processos' if x['tipo']=='processos' else('credito' if x['modalidade_natureza'] in segmento_credito else ('infra' if x['modalidade_natureza'] in segmento_infra else "outros")), axis=1) df = df[["cnpj", "valor", "segmento"]] # dataset com alguns parametros já calculados df_comp = pd.read_excel("../tabelas/dataset_variaveis_completo_201904.xlsx") df_comp["cnpj"] = df_comp.apply(lambda x : str(x["cnpj"]), axis=1) df_comp["cnpj"] = df_comp.apply(lambda x : "0" + x["cnpj"] if len(x["cnpj"])==13 else ("00" + x["cnpj"] if len(x["cnpj"])==12 else x["cnpj"]), axis=1) df_comp.shape df_comp[(df_comp["prop_divida"]<=1.5) & (df_comp["quantidade_cheques"]==0)]["cnpj"].unique().tolist().__len__() ###Output _____no_output_____ ###Markdown 1 - Probabilidade ###Code def calcula_probabilidade(cnpj, df): dt = df[df["cnpj"]==cnpj] dt = dt.groupby("segmento").count().reset_index()[["segmento", "valor"]] dt.columns = ["segmento", "ocorrencias"] dt["probabilidade"] = dt["ocorrencias"]/dt["ocorrencias"].sum() dt["cnpj"] = cnpj return dt calcula_probabilidade(cnpj, df) ret = [] for el in df["cnpj"].unique().tolist(): _df = calcula_probabilidade(el, df) ret.append(_df) dcalc = pd.concat(ret) dcalc = dcalc[["cnpj", "segmento", "ocorrencias", "probabilidade"]] dcalc.head() ###Output _____no_output_____ ###Markdown 2 - Composicao da Dívida ###Code def calcula_composicao(cnpj, df): dt = df[df["cnpj"]==cnpj] dt = dt.groupby("segmento").sum().reset_index() dt.columns = ["segmento", "valor_divida"] dt["composicao"] = dt["valor_divida"]/dt["valor_divida"].sum() dt["cnpj"] = cnpj return dt ret = [] for el in df["cnpj"].unique().tolist(): _df = calcula_composicao(el, df) ret.append(_df) _df = pd.concat(ret) _df = _df[["cnpj", "segmento", "valor_divida", "composicao"]] dcalc.shape _df.shape dcalc = dcalc.merge(_df, left_on=["cnpj", "segmento"], right_on=["cnpj", "segmento"], how="left") dcalc.head() ###Output _____no_output_____ ###Markdown Faturamento Medio ###Code dfat = df_comp[["cnpj", "fat_medio"]].drop_duplicates() dfat = dfat[dfat["cnpj"]!="60701190000104"] dfat = dfat[dfat["cnpj"]!="191"] dcalc = dcalc[dcalc["cnpj"]!='60701190000104'] dcalc = dcalc[dcalc["cnpj"]!="191"] dfat.shape dcalc["cnpj"].unique().tolist().__len__() dcalc = dcalc.merge(dfat, left_on="cnpj", right_on="cnpj", how="left") dcalc["pi"] = dcalc["valor_divida"]/dcalc["fat_medio"] dcalc.head() ###Output _____no_output_____ ###Markdown Aplicando o criterio de elegibilidade para a variavel pi ###Code lista_reprovados = dcalc[dcalc["pi"]>1.5]["cnpj"].unique().tolist() dcalc = dcalc[~dcalc["cnpj"].isin(lista_reprovados)] # normalizando pi dcalc["pi"] = (2/3)*dcalc["pi"] dcalc.sort_values("pi", ascending=False).head() ###Output _____no_output_____ ###Markdown Calculo do $\lambda$ ###Code dcalc["lambda"] = dcalc["composicao"]*dcalc["pi"] dcalc[dcalc["cnpj"]=='04247535000112'] # escala do impacto escala_impacto = {"credito" : {"i0" : 0.75, "i1" : 1}, "processos" : {"i0" : 0.5, "i1" : 0.75}, "infra" : {"i0" : 0.25, "i1" : 0.5}, "outros" : {"i0" : 0, "i1" : 0.25}, } escala_impacto def impacto_segmento(lambda_, segmento, escala_impacto): 
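    # map the segment's weighted share (lambda_) linearly onto that segment's impact interval [i0, i1]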
escala = escala_impacto.get(segmento) i0 = escala.get("i0") i1 = escala.get("i1") return (i1 - i0)*lambda_ + i0 impacto_segmento(0.269671, "infra", escala_impacto) impacto_segmento(0.000221, "infra", escala_impacto) dcalc["impacto_segmento"] = dcalc.apply(lambda x : impacto_segmento(x["lambda"], x["segmento"], escala_impacto), axis=1) dcalc.sort_values("impacto_segmento", ascending=False).head() _df = dcalc[(dcalc["segmento"]=="credito") & (dcalc["composicao"]==1)] _df.sort_values("pi").head() ###Output _____no_output_____ ###Markdown Calculo do risco ###Code dcalc["risco"] = dcalc["probabilidade"]*dcalc["impacto_segmento"] dcalc.sort_values("risco", ascending=False).head() dcalc[dcalc["cnpj"]=='04509695000192'] _df = dcalc[(dcalc["segmento"]=="credito") & (dcalc["composicao"]==1)] ###Output _____no_output_____ ###Markdown Calculo do score ###Code def calcula_dscore(risco_, score_limite): return -score_limite*risco_ + score_limite dcalc["dscore"] = dcalc.apply(lambda x : calcula_dscore(x["risco"], 700), axis=1) # definindo pesos para o score final W = {"credito" : 4, "processos" : 3, "infra" : 2, "outros" : 1} W dcalc.head() dcalc["peso"] = dcalc.apply(lambda x : W.get(x["segmento"]), axis=1) def calcula_score_total(cnpj, df): dt = df[df['cnpj']==cnpj] dt["score"] = dt["dscore"]*dt["peso"] score = dt["score"].sum()/dt["peso"].sum() dt["score"] = score return dt dcalc.drop(columns=["peso", "score", "socore"], axis=1, inplace=True) dcalc.head() calcula_score_total('71673990000177', dcalc) resp = [] for el in dcalc["cnpj"].unique().tolist(): _df = calcula_score_total(el, dcalc) resp.append(_df) dcalc = pd.concat(resp) dcalc[dcalc["cnpj"]=='14534748000189'] df_score = dcalc[["cnpj", "segmento", "dscore", "score"]] trace = go.Histogram( x = df_score[["cnpj", "score"]].drop_duplicates()["score"].tolist(), marker = dict(color='rgb(247,234,95)', line=dict(color="rgb(0, 0, 0)", width=1)) ) layout = go.Layout(title="Distribuicao dscore") fig = go.Figure(data = [trace], layout=layout) iplot(fig) # dcalc.to_excel("../tabelas/dataset_metricas_score_completo_20190425.xlsx") ###Output _____no_output_____ ###Markdown Analises dos Resultados ###Code dcalc.drop(columns=["ocorrencias"], axis=1, inplace=True) dcalc.head() color_menu = {"credito" : 'rgb(252,110,110)', "processos" : "rgb(146,233,249)", "infra" : "rgb(185,232,119)", "outros" : "rgb(247,241,86)" } def plot_dividas(cnpj, df_dividas): dt = df_dividas[df_dividas["cnpj"]==cnpj] labels = dt["segmento"].tolist() values = dt["composicao"].tolist() values = [np.around(el*100, 2) for el in values] colors = [color_menu.get(el) for el in labels] data = [] trace1 = go.Pie( labels = labels, values = values, hoverinfo = "label+percent", textinfo = "value", marker = dict(colors=colors, line = dict(color="rgb(0, 0, 0)", width=1)) ) layout = go.Layout(title="Composição da Dívida e Probabilidades: {}".format(cnpj), ) fig = go.Figure(data=[trace1], layout=layout) iplot(fig) return def plot_probabilidade(cnpj, df_dividas): dt = df_dividas[df_dividas["cnpj"]==cnpj] labels = dt["segmento"].tolist() values = dt["probabilidade"].tolist() values = [np.around(el, 2) for el in values] colors = [color_menu.get(el) for el in labels] trace1 = go.Pie( labels = labels, values = values, hoverinfo = "label+percent", textinfo = "value", marker = dict(colors=colors, line = dict(color="rgb(0, 0, 0)", width=1)) ) layout = go.Layout(title="Probabilidades: {}".format(cnpj)) fig = go.Figure(data=[trace1], layout=layout) iplot(fig) return def plot_risco(cnpj, resp): dt = 
resp[resp['cnpj']==cnpj] score = int(dt["score"].iloc[0]) dt.sort_values("risco", ascending=False, inplace=True) labels = dt["segmento"].tolist() data = [] for el in labels: d = dt[dt['segmento']==el] trace = go.Bar( x = [el], y = [d["risco"].iloc[0]], marker = dict(color = color_menu.get(el), line=dict(color='rgb(0, 0, 0)', width=1)), name = el ) data.append(trace) layout = go.Layout(title = "Risco por segmento de dívida - Score : {}".format(score), bargap=0.5 ) fig = go.Figure(data=data, layout=layout) iplot(fig) return plot_dividas('14534748000189', dcalc) plot_probabilidade('14534748000189', dcalc) plot_risco('14534748000189', dcalc) cnpj = '10532480000195' plot_dividas(cnpj, dcalc) plot_risco(cnpj, dcalc) cnpj = "71673990000177" plot_dividas(cnpj, dcalc) plot_probabilidade(cnpj, dcalc) plot_risco(cnpj, dcalc) cnpj = '61607164000176' plot_dividas(cnpj, dcalc) plot_probabilidade(cnpj, dcalc) plot_risco(cnpj, dcalc) cnpj = '00437101000124' plot_dividas(cnpj, dcalc) plot_probabilidade(cnpj, dcalc) plot_risco(cnpj, dcalc) cnpj = "04247535000112" plot_dividas(cnpj, dcalc) plot_probabilidade(cnpj, dcalc) plot_risco(cnpj, dcalc) dcalc[dcalc['cnpj']=='04247535000112'] cnpj = '17464661000170' plot_dividas(cnpj, dcalc) plot_probabilidade(cnpj, dcalc) plot_risco(cnpj, dcalc) dcalc[dcalc["segmento"]=="infra"].sort_values("pi", ascending=False).head() dcalc[dcalc['cnpj']=='71673990000177'] df[df['segmento']=='processos'].groupby('cnpj').count().sort_values('segmento', ascending=False).head(10) dcalc[dcalc['cnpj']=='10532480000195'] dcalc[dcalc['cnpj']=='00661205000118'] df[df['cnpj']=='00661205000118'] dcalc.groupby("cnpj").count().sort_values("segmento", ascending=False).head() dcalc[dcalc['cnpj']=='01828181000101']["score"] ###Output _____no_output_____ ###Markdown Calculo do Score Total ###Code from sqlalchemy import create_engine engine = create_engine("mysql+pymysql://capMaster:#jackpot123#@captalys.cmrbivuuu7sv.sa-east-1.rds.amazonaws.com:23306/varejo") con = engine.connect() dfpv = pd.read_sql("select cpf_cnpj as cnpj, data, valor from fluxo_pv where flag_aprovacao=1", con) con.close() dfpv = dfpv[dfpv['cnpj']!='00.000.000/0001-91'] dfpv["cnpj"] = dfpv.apply(lambda x : x['cnpj'].replace(".", "").replace("-", "").replace("/", ""), axis=1) dfpv['flag_cnpj'] = dfpv.apply(lambda x : int(len(x['cnpj'])==14), axis=1) dfpv = dfpv[dfpv['flag_cnpj']==1] dfpv.head() ###Output _____no_output_____ ###Markdown calculando o lscore para a base pv ###Code from pricing.service.scoring.lscore import LScoring dfpv = dfpv[dfpv['cnpj'].isin(dcalc['cnpj'].tolist())] dfpv['cnpj'].unique().tolist().__len__() resp = [] for el in dfpv['cnpj'].unique().tolist(): dt = dfpv[dfpv['cnpj']==el] dados = {'dados' : dt[['data', 'valor']].to_dict("records"), "id_produto" : "tomatico"} ls = LScoring(dados) ret = ls.calcula() score = ret['score'] resp.append(pd.DataFrame({'cnpj' : [el], "lscore" : [score]})) dscore = pd.concat(resp) dscore.shape dcalc.head() final = dcalc[dcalc['cnpj'].isin(dfpv['cnpj'].tolist())] final['cnpj'].unique().tolist().__len__() final.rename(columns={'dscore' : 'dscore_seg', 'score' : 'dscore'}, inplace=True) final.head() final = final.merge(dscore, left_on='cnpj', right_on='cnpj', how='left') final.head() def calcula_dscore(risco_, score_limite): return -score_limite*risco_ + score_limite final['dscore_final_seg'] = final.apply(lambda x : calcula_dscore(x['risco'], 0.8*x['lscore']) , axis=1) final[final['cnpj']=='09286118000100'] # calcula dscore final def calcula_score_total(cnpj, df): dt = 
df[df['cnpj']==cnpj] dt["dscore_final"] = dt["dscore_final_seg"]*dt["peso"] score = dt["dscore_final"].sum()/dt["peso"].sum() dt["dscore_final"] = score return dt resp = [] for el in final['cnpj'].unique().tolist(): _df = calcula_score_total(el, final) resp.append(_df) def calcula(lscore, dscore): return np.mean([lscore, dscore]) final = pd.concat(resp) final["score"] = final.apply(lambda x : calcula(x["lscore"], x["dscore_final"]), axis=1) resp = final[['cnpj', "lscore", "dscore_final", "score"]].drop_duplicates() resp.head() dcalc[dcalc['cnpj']=='00661205000118'] resp[resp['cnpj']=='00661205000118'] cnpj = '00661205000118' plot_dividas(cnpj, dcalc) plot_probabilidade(cnpj, dcalc) plot_risco(cnpj, dcalc) dcalc[dcalc['cnpj']=='04509695000192'] resp[resp['cnpj']=='04509695000192'] cnpj = '04509695000192' plot_dividas(cnpj, dcalc) plot_probabilidade(cnpj, dcalc) plot_risco(cnpj, dcalc) dcalc[dcalc['cnpj']=='55057392000117'] resp[resp['cnpj']=='55057392000117'] cnpj = '55057392000117' plot_dividas(cnpj, dcalc) plot_probabilidade(cnpj, dcalc) plot_risco(cnpj, dcalc) final[final["segmento"]=="processos"] final.groupby("cnpj").count().sort_values('segmento', ascending=False).head(20) ###Output _____no_output_____
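The scoring pipeline above is spread over many small cells: per segment it computes the probability (share of occurrences), the composition (share of total debt value), the debt-to-revenue ratio pi (rejecting companies whose raw pi exceeds 1.5 and rescaling the rest by 2/3), lambda = composition * pi, an impact interpolated inside the segment's band, risk = probability * impact, and dscore = -score_limit * risk + score_limit, which is finally weight-averaged across segments. The sketch below is a consolidated reading of those steps in one function; the function name and signature are my own, not part of the notebook, while the band and weight values are copied from the escala_impacto and W dictionaries above.

```python
import pandas as pd

# Segment bands and weights taken from the notebook's escala_impacto and W dictionaries.
IMPACT_BANDS = {"credito": (0.75, 1.0), "processos": (0.5, 0.75),
                "infra": (0.25, 0.5), "outros": (0.0, 0.25)}
WEIGHTS = {"credito": 4, "processos": 3, "infra": 2, "outros": 1}

def score_company(debts, avg_revenue, score_limit=700.0):
    """Hypothetical consolidation of the per-segment scoring steps.
    debts: DataFrame with one row per debt record, columns ['segmento', 'valor']."""
    g = (debts.groupby("segmento")["valor"]
              .agg(ocorrencias="count", valor_divida="sum")
              .reset_index())
    g["probabilidade"] = g["ocorrencias"] / g["ocorrencias"].sum()
    g["composicao"] = g["valor_divida"] / g["valor_divida"].sum()
    g["pi"] = g["valor_divida"] / avg_revenue
    if (g["pi"] > 1.5).any():              # eligibility cut applied in the notebook
        return g, None
    g["pi"] = (2 / 3) * g["pi"]            # normalisation used above
    g["lambda"] = g["composicao"] * g["pi"]
    lo = g["segmento"].map(lambda s: IMPACT_BANDS[s][0])
    hi = g["segmento"].map(lambda s: IMPACT_BANDS[s][1])
    g["impacto_segmento"] = (hi - lo) * g["lambda"] + lo
    g["risco"] = g["probabilidade"] * g["impacto_segmento"]
    g["dscore"] = -score_limit * g["risco"] + score_limit
    g["peso"] = g["segmento"].map(WEIGHTS)
    score = (g["dscore"] * g["peso"]).sum() / g["peso"].sum()
    return g, score
```

For a single company this would be called as `score_company(df[df['cnpj'] == cnpj][['segmento', 'valor']], fat_medio)`, reproducing the dscore and weighted score columns that the cells above build step by step.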
Data visualization/Numpy by AAIC.ipynb
###Markdown Creating Arrays ###Code a = np.array([1,2,3,4]) print(a) a.ndim # 1 dimensional array a.shape b = a.reshape(4,1) print(b) b.ndim # 2 dimensional array len(b) c = np.array([[1,2,3],[4,5,6]]) print(c) c.reshape(3,2) c.T # 3 dimensional array d = np.array([[[1,2,3],[4,5,6]],[[7,8,9],[1,5,9]]]) print(d) d.ndim d[1:,1:] # 2nd block, 2nd row onwards np.linspace(1,10,5) np.ones(3) np.ones([2,2]) np.zeros([1,2]) np.eye(3) a = np.eye(4,3) print(a) a = np.diag([1,2,3]) print(a) print(np.random.rand(2,5)) # uniform variate between 0 and 1 print(np.random.randn(2,5)) # standard normal variate, mean = 0 and std = 1 print(np.random.randint(2,50,5)) a = np.arange(10) a[5:7] a = np.diag([1,2,3]) a[1,1] a[2,1] = 5 a ###Output _____no_output_____ ###Markdown Slicing ###Code b = np.arange(0,20,2) b = b[b<11] b a = np.arange(10) print(a) a[4:] = b[::-1] a ###Output _____no_output_____ ###Markdown Copy and view ###Code a = np.arange(10) print(a) b = a[::2] b np.shares_memory(a,b) b[0] = 10 b a ###Output _____no_output_____ ###Markdown Above, when b[0] is changed, a itself changes as well, because the slice b is a view that shares memory with a. ###Code c = a[::2].copy() c c[0]=10 c # here a won't change, since c was created with copy() a np.shares_memory(a,c) ###Output _____no_output_____ ###Markdown Masking ###Code a = np.random.randint(0,20,15) a mask = (a%2==0) b extract_from_a = a[mask] extract_from_a a[mask] = -1 a # Indexing with an array of integers a = np.arange(0,100,10) a a[[2,3,3,2,4]] a[[9,7]] = 100 a ###Output _____no_output_____
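A point worth making explicit after the Copy/view and Masking sections (this check is an addition, not part of the original notebook): basic slicing returns a view that shares memory with the original array, while boolean masking and integer-array ("fancy") indexing always return copies.

```python
import numpy as np

a = np.arange(10)

view = a[::2]               # basic slice -> view, shares memory with a
mask_copy = a[a % 2 == 0]   # boolean mask -> copy
fancy_copy = a[[0, 2, 4]]   # integer-array indexing -> copy

print(np.shares_memory(a, view))        # True
print(np.shares_memory(a, mask_copy))   # False
print(np.shares_memory(a, fancy_copy))  # False

view[0] = 99       # also changes a[0]
mask_copy[0] = -1  # leaves a untouched
print(a[0])        # 99
```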
yoruba_speech_preprocessing.ipynb
###Markdown Preprocessing ###Code # import os # from random import randint, uniform # import re # import numpy as np # import wave # import contextlib ###Output _____no_output_____ ###Markdown Data Preparation for Recording ###Code MAX_SENTENCE_LEN = 20 SOURCE_PATH = 'yor_trans.txt' DEST_TEMP_PATH = 'yor_split.txt' DEST_TEMP_CLEAN_PATH = 'yor_clean_split.txt' NUM_FOLDER_SPLIT = 50 FOLDER_PATH="split_text" PARTIAL_NAME= "yor_split" from yor_processor import split_file, split_file_into_folders status = split_file(SOURCE_PATH, DEST_TEMP_PATH, MAX_SENTENCE_LEN) # clean_file_status = split_file(SOURCE_PATH, DEST_TEMP_CLEAN_PATH, MAX_SENTENCE_LEN, end_of_file="\n") if status == 'done': split_file_into_folders(FOLDER_PATH, DEST_TEMP_PATH, PARTIAL_NAME, NUM_FOLDER_SPLIT) ###Output _____no_output_____ ###Markdown Calculate Total Recording Lengths ###Code DIR_OF_REC = "./recordings/" FILE_FORMAT = ".wav" from yor_processor import calculate_recording_len total_len, good_files, corrupted_files = calculate_recording_len(DIR_OF_REC, FILE_FORMAT) print("total len of recording is ", round(total_len, 2), "s",", ",total_len/(60*60),"h") print(good_files) print(corrupted_files) ###Output 1079 30 ###Markdown Organizing data Spliting records to train, val and test sets ###Code !mkdir data !mkdir data/records from yor_processor import extract_non_corrupted_files wav_files, all_linkers = extract_non_corrupted_files(DIR_OF_REC) all_linkers.keys() # copy wav files to_copy = " ".join(wav_files) !cp -t data/records/ {to_copy} new_linkers = dict() for section, linker in all_linkers.items(): for i,link in enumerate(linker): text_file_name = link.split(" ")[0].split("/")[-1] line = link.split(";")[0].split("(")[1].split(")")[0].strip() wav = link.split(";")[1].strip().split("/")[-1] linker[i] = wav+":"+line new_linkers[text_file_name] = linker !mkdir ./data/records/train !mkdir ./data/records/test !mkdir ./data/records/val !mkdir ./data/records/extra from yor_processor import split_train_val_test # reduce to 2hrs data by splitting into 3 with 3hrs of data # split 1hr into train and val to_copy_train, to_copy_valid, to_copy_test, to_copy_extra = split_train_val_test(wav_files, num_splits=3, # reduce to 2hrs data since I have 3hrs of data val_split=0.2) !mv -t data/records/train/ {to_copy_train} !mv -t data/records/val/ {to_copy_valid} !mv -t data/records/test/ {to_copy_test} !mv -t data/records/extra/ {to_copy_extra} ###Output _____no_output_____ ###Markdown Make chars.txt file ###Code from yor_processor import create_char_set chars_list, text_data = create_char_set(new_linkers, path="./split_text/", exclude="[\n\.,''-''̀''́'''!-]") print(chars_list) ###Output {' ': 1, 'ε': 0, 'à': 2, 'é': 3, 'l': 4, 'i': 5, 'r': 6, 'ṣ': 7, 'ò': 8, 'p': 9, 'v': 10, 'ú': 11, 'n': 12, 'b': 13, 'o': 14, 'y': 15, 'ọ': 16, 'd': 17, 'g': 18, 'e': 19, 't': 20, 'k': 21, 'ì': 22, 'j': 23, 'á': 24, 'è': 25, 'ù': 26, 'ẹ': 27, 'a': 28, 'í': 29, 's': 30, 'h': 31, 'w': 32, 'f': 33, 'ó': 34, 'm': 35, 'u': 36, 'c': 37, '–': 38, 'ń': 39, 'ǹ': 40, 'z': 41, 'ḿ': 42, 'ί': 43, 'ὸ': 44} ###Markdown Create data in format for training ###Code from yor_processor import create_data_format create_data_format(text_data, chars_list) len(chars_list) ###Output _____no_output_____
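`calculate_recording_len`, like the other helpers used here, lives in the local `yor_processor` module whose source is not shown in this notebook. Judging from the commented-out `wave` and `contextlib` imports at the top, an implementation could look roughly like the sketch below. This is a hypothetical reconstruction, not the module's actual code, and the return convention (total seconds plus counts of good and corrupted files) is only inferred from how the values are printed above.

```python
import os
import wave
import contextlib

def calculate_recording_len(rec_dir, file_format=".wav"):
    """Hypothetical stand-in for yor_processor.calculate_recording_len."""
    total_len = 0.0
    good_files = 0
    corrupted_files = 0
    for name in sorted(os.listdir(rec_dir)):
        if not name.endswith(file_format):
            continue
        path = os.path.join(rec_dir, name)
        try:
            with contextlib.closing(wave.open(path, "r")) as f:
                total_len += f.getnframes() / float(f.getframerate())
            good_files += 1
        except (wave.Error, EOFError):
            corrupted_files += 1   # unreadable or truncated recording
    return total_len, good_files, corrupted_files
```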
doublet_making.ipynb
###Markdown Doublet Maker 0 Environment Set Up ###Code ! pip install --user git+https://github.com/LAL/trackml-library ###Output Collecting git+https://github.com/LAL/trackml-library Cloning https://github.com/LAL/trackml-library to /tmp/pip-req-build-ifdyebij Running command git clone -q https://github.com/LAL/trackml-library /tmp/pip-req-build-ifdyebij Requirement already satisfied (use --upgrade to upgrade): trackml==3 from git+https://github.com/LAL/trackml-library in /global/u1/s/sconlon/.local/lib/python3.7/site-packages Requirement already satisfied: numpy in /global/u1/s/sconlon/.conda/envs/exatrkx/lib/python3.7/site-packages (from trackml==3) (1.18.1) Requirement already satisfied: pandas>=0.21.0 in /global/u1/s/sconlon/.local/lib/python3.7/site-packages (from trackml==3) (1.0.3) Requirement already satisfied: python-dateutil>=2.6.1 in /global/u1/s/sconlon/.conda/envs/exatrkx/lib/python3.7/site-packages (from pandas>=0.21.0->trackml==3) (2.8.1) Requirement already satisfied: pytz>=2017.2 in /global/u1/s/sconlon/.local/lib/python3.7/site-packages (from pandas>=0.21.0->trackml==3) (2019.3) Requirement already satisfied: six>=1.5 in /global/u1/s/sconlon/.conda/envs/exatrkx/lib/python3.7/site-packages (from python-dateutil>=2.6.1->pandas>=0.21.0->trackml==3) (1.13.0) Building wheels for collected packages: trackml Building wheel for trackml (setup.py) ... [?25ldone [?25h Created wheel for trackml: filename=trackml-3-py2.py3-none-any.whl size=13512 sha256=4ee76ca25239e269a4c309045f3af7cec987cf94af17b06d64a5b8363eceff55 Stored in directory: /tmp/pip-ephem-wheel-cache-y8xvm7sb/wheels/62/a8/3a/330c0e606bd185f850e7aec01df4607aa3df395945cf74905c Successfully built trackml ###Markdown 1 Imports ###Code import numpy as np import pandas as pd import trackml.dataset from numba import jit, guvectorize, prange from numba import int64, float32, boolean from doublet_making_helper import * ###Output _____no_output_____ ###Markdown 2 Constants ###Code pt_min = 0 path= "../exatrkx-work/volpredictor/train_100_events/" nPhiSlices = 53 nLayers = 10 maxDoubletLength = 300.0 minDoubletLength = 10.0 zPlus = 150.0 zMinus = -150.0 maxEta = 2.7 maxTheta = 2 * np.arctan(np.exp(-maxEta)) maxCtg = np.cos(maxTheta) / np.sin(maxTheta) modelLayers = np.array([ [0, 32, -455, 455], # 8-2 [0, 72, -455, 455], # 8-4 [0, 116, -455, 455], # 8-6 [0, 172, -455, 455], # 8-8 [0, 260, -1030, 1030], # 13-2 [0, 360, -1030, 1030], # 13-4 [0, 500, -1030, 1030], # 13-6 [0, 660, -1030, 1030], # 13-8 [0, 820, -1030, 1030], # 17-2 [0, 1020, -1030, 1030] # 17-4 ], dtype='int32') FALSE_INT = 99999 #Integer that represents a false value ###Output _____no_output_____ ###Markdown 3 Load Data ###Code np.random.seed(30) # Chef Curry prefix= "event00000" + str(np.random.choice(100) + 1000) hits, particles, truth = trackml.dataset.load_event( path + prefix, parts=['hits', 'particles', 'truth']) ###Output _____no_output_____ ###Markdown 4 Prepare Data Make cuts ###Code %%time # Barrel volume and layer ids vlids = [(8,2), (8,4), (8,6), (8,8), (13,2), (13,4), (13,6), (13,8), (17,2), (17,4)] n_det_layers = len(vlids) # Select barrel layers and assign convenient layer number [0-9] vlid_groups = hits.groupby(['volume_id', 'layer_id']) hits = pd.concat([vlid_groups.get_group(vlids[i]).assign(layer=i) for i in range(n_det_layers)]) # Calculate particle transverse momentum pt = np.sqrt(particles.px**2 + particles.py**2) # True particle selection. # Applies pt cut, removes all noise hits. 
particles = particles[pt > pt_min] truth = (truth[['hit_id', 'particle_id']] .merge(particles[['particle_id']], on='particle_id')) # Calculate derived hits variables r = np.sqrt(hits.x**2 + hits.y**2) phi = np.arctan2(hits.y, hits.x) # Select the data columns we need hits = (hits .assign(r=r) .merge(truth[['hit_id', 'particle_id']], on='hit_id')) # Remove duplicate hits hits = hits.loc[ hits.groupby(['particle_id', 'layer'], as_index=False).r.idxmin() ] hits ###Output CPU times: user 6.89 s, sys: 24.6 ms, total: 6.92 s Wall time: 6.93 s ###Markdown Reformat hit table ###Code %%time hits['phi_bin'] = bin_phi(hits['x'].values, hits['y'].values, nPhiSlices) hits['r'] = np.hypot(hits['x'].values, hits['y'].values) hits.drop(columns=['x', 'y', 'volume_id', 'module_id', 'layer_id'], inplace=True) cols = hits.columns.tolist() # Rearranging column order cols = [cols[0], # hit_id cols[2], # layer cols[5], # phi_bin cols[3], # r cols[1], # z cols[4]] # particle_id hits = hits[cols] hit_table = hits.values.astype(np.int64) nHits = hit_table.shape[0] print('Number of hits: ', nHits) hits ###Output Number of hits: 40095 CPU times: user 132 ms, sys: 0 ns, total: 132 ms Wall time: 130 ms ###Markdown 5 Helper Functions ###Code @jit(nopython=True) def filter(inner_hit, layer_range, z_ranges): ''' This function combines the helper filters into one filter ''' keep = np.array([True] * hit_table.shape[0]) for row_idx in range(hit_table.shape[0]): keep[row_idx] = (filter_layers(hit_table[row_idx][1], layer_range) and filter_phi(inner_hit[2], hit_table[row_idx][2], nPhiSlices) and filter_doublet_length(inner_hit[3], hit_table[row_idx][3], minDoubletLength, maxDoubletLength) and filter_horizontal_doublets(inner_hit[3], inner_hit[4], hit_table[row_idx][3], hit_table[row_idx][4], maxCtg) and filter_z(hit_table[row_idx][1], hit_table[row_idx][4], layer_range, z_ranges)) return keep @jit(nopython=True) def get_valid_ranges(inner_hit): ''' This function returns the list of layers that contain interesting hits, given our chosen inner hit. It also returns the min/max bound in the z-direction for interesting hits for each outer layer. ''' #Get the radius of each layer refCoords = np.array([modelLayers[layer_idx][1] for layer_idx in range(nLayers)], dtype=int64) #Get the list of all valid layers layer_range = get_layer_range(inner_hit, refCoords, nLayers, maxDoubletLength, FALSE_INT) #Find the z bounds for each valid layer z_ranges = get_z_ranges(inner_hit, refCoords, layer_range, zMinus, zPlus, FALSE_INT) #Filter layers whose bounds of interest fall outside their geometric bounds z_mask(layer_range, z_ranges, modelLayers, FALSE_INT) return layer_range, z_ranges ###Output _____no_output_____ ###Markdown 6 Make Doublets ###Code @jit(nopython=True, parallel=True) def make(): ''' This function makes all possible doublets that fit the criteria of the filter. It first choses an inner hit and then iterates through the hit table looking for possible outer hit candidates. It then returns a list of hit ids cooresponding to the inner and outer hit pairs of the created doublets. 
''' ncolumns = int(nHits * 0.01) outer_2D = np.zeros((nHits, ncolumns), dtype=int64) for row_idx in prange(nHits): inner_hit = hit_table[row_idx] layer_range, z_ranges = get_valid_ranges(inner_hit) outer_hit_set = hit_table[filter(inner_hit, layer_range, z_ranges)].T[0] for column_idx in prange(len(outer_hit_set)): outer_2D[row_idx][column_idx] = outer_hit_set[column_idx] outer = np.reshape(outer_2D, (1, nHits * ncolumns))[0] inner = np.zeros(len(outer), dtype=int64) for row_count in prange(outer_2D.shape[0]): for col_count in prange(ncolumns): inner[(row_count * ncolumns + col_count)] = hit_table[row_count][0] return inner, outer %%time make() ###Output CPU times: user 1min 45s, sys: 912 ms, total: 1min 46s Wall time: 8.21 s
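`make()` pre-allocates a fixed-width buffer of `int(nHits * 0.01)` outer-hit slots per inner hit and leaves unused slots at zero, so the returned `inner`/`outer` arrays still contain padding pairs. The post-processing below is an addition, not part of the original notebook; it assumes hit ids start at 1, as in the TrackML dataset, so an outer id of 0 marks an empty slot.

```python
import pandas as pd

inner, outer = make()
doublets = pd.DataFrame({"inner_hit_id": inner, "outer_hit_id": outer})
doublets = doublets[doublets["outer_hit_id"] != 0].reset_index(drop=True)

# Optional truth labelling: a doublet is "true" when both hits belong to the same
# particle (column 5 of hit_table holds the particle_id).
pid = dict(zip(hit_table[:, 0], hit_table[:, 5]))
doublets["is_true"] = (doublets["inner_hit_id"].map(pid)
                       == doublets["outer_hit_id"].map(pid))
print(len(doublets), doublets["is_true"].mean())
```

One design caveat worth noting: the fixed 1% buffer width effectively caps how many outer-hit candidates can be stored per inner hit, so a denser event may need a wider buffer.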
J-面积图/进阶面积图MA_J_02/MA_J_02.ipynb
###Markdown Matplotlib图鉴——进阶面积图 公众号:可视化图鉴 ###Code import matplotlib print(matplotlib.__version__) #查看Matplotlib版本 import pandas as pd print(pd.__version__) #查看pandas版本 import numpy as np print(np.__version__) #查看numpy版本 import matplotlib.pyplot as plt plt.rcParams['font.sans-serif'] = ['STHeiti'] #设置中文 ###Output 3.3.3 1.1.5 1.19.5 ###Markdown 注意,代码在以下环境全部通过测试:- Python 3.7.1- Matplotlib == 3.3.3- pandas == 1.1.5- numpy == 1.19.5因版本不同,可能会有部分语法差异,如有报错,请先检查拼写及版本是否一致! ###Code import matplotlib as mpl import pandas as pd import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np from matplotlib.offsetbox import (TextArea, DrawingArea, OffsetImage, AnnotationBbox) from matplotlib.cbook import get_sample_data %matplotlib inline %config InlineBackend.figure_format = 'retina' plt.rcParams['font.sans-serif'] = ['Microsoft YaHei'] # 设置字体 data = pd.read_csv('data.csv') data1 = pd.read_csv('data1.csv') followers = data['followers'] plt.rcParams['font.sans-serif'] = ['Microsoft YaHei'] fig,ax=plt.subplots(figsize = (8,5),dpi=100) #创建画布 plt.plot(followers,color = '#f58220', linewidth=1) #绘制折线图 #修改坐标轴 plt.gca().spines["top"].set_alpha(0) plt.gca().spines["bottom"].set_alpha(.3) plt.gca().spines["right"].set_alpha(0) plt.gca().spines["left"].set_alpha(0) plt.ylim(0,None) x = [i for i in range(361)] x1 = [i for i in range(16)] x2 = [i for i in range(16,36)] x3 = [i for i in range(36,134)] x4 = [i for i in range(134,322)] x5 = [i for i in range(322,361)] #ax.set_facecolor('none') #设置背景为透明 #填充 extent = [0, 365, 0, 33000] _, yv = np.meshgrid(np.linspace(0,1,210), np.linspace(0,1,90)) ax.imshow(yv, cmap=mpl.cm.BuPu, origin='lower',alpha = 0.5, aspect = 'auto',extent = extent) ax.fill_between(data1['时间'], data1['累积关注人数'], 33000, color='white') #竖线 ax.vlines(x = 16,ymin = 0,ymax = 100,color='black') ax.vlines(x = 36,ymin = 0,ymax = 1000,color='grey') ax.vlines(x = 132,ymin = 0,ymax = 10000,color='grey') ax.vlines(x = 322,ymin = 0,ymax = 30000, color = 'grey') gca = plt.gca() #修改x轴刻度 label = ['2月7日','3月25日','5月13日','7月2日','8月21日','10月10日','11月29日','12月31日'] plt.xticks(range(0,400,50), labels=label,rotation = 40,color = 'yellow',fontsize = 8) for xlabel_i in gca.axes.get_yticklabels(): xlabel_i.set_fontsize(0.0) xlabel_i.set_visible(False) for tick in gca.axes.get_yticklines(): tick.set_visible(False) ax.spines['bottom'].set_color('black') plt.tick_params(axis='x',colors='black') #添加文字与箭头 offsetbox = TextArea("3月11日,粉丝突破 1000",textprops = dict(fontsize = 9)) ab = AnnotationBbox(offsetbox,[30, 3000], xybox=(-30., 90.), xycoords='data', boxcoords="offset points", pad=0.5, arrowprops=dict(arrowstyle="->",connectionstyle="angle3,angleA=70,angleB=-30") ) ax.add_artist(ab) offsetbox = TextArea("6月17日,粉丝突破 10000",textprops = dict(fontsize = 9)) ab = AnnotationBbox(offsetbox,[120, 11000], xybox=(-60., 100.), xycoords='data', boxcoords="offset points", pad=0.5, arrowprops=dict(arrowstyle="->",connectionstyle="angle3,angleA=75,angleB=-10") ) ax.add_artist(ab) offsetbox = TextArea("12月2日,粉丝突破 30000",textprops = dict(fontsize = 9)) ab = AnnotationBbox(offsetbox,[320, 31000], xybox=(-60., 60.), xycoords='data', boxcoords="offset points", pad=0.5, arrowprops=dict(arrowstyle="->",connectionstyle="angle3,angleA=190,angleB=60") ) ax.add_artist(ab) plt.show() ###Output _____no_output_____
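The notebook above (a Chinese-language example of an advanced area chart, plotting cumulative follower counts) gets its gradient fill from a two-step trick: `imshow` paints a vertical colormap gradient across the whole plotting area, and a white `fill_between` from the curve up to the top then hides the gradient everywhere above the line. A stripped-down sketch of just that trick is shown below; the data, colors, and figure size are made up for illustration.

```python
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

x = np.arange(100)
y = np.cumsum(np.random.rand(100) * 50)      # synthetic cumulative series

fig, ax = plt.subplots(figsize=(6, 3), dpi=100)
ax.plot(x, y, color="#f58220", linewidth=1)

top = y.max() * 1.05
extent = [x.min(), x.max(), 0, top]
_, grad = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 256))
ax.imshow(grad, cmap=mpl.cm.BuPu, origin="lower", alpha=0.5,
          aspect="auto", extent=extent)      # vertical gradient over the axes
ax.fill_between(x, y, top, color="white")    # mask the gradient above the curve
ax.set_xlim(x.min(), x.max())
ax.set_ylim(0, top)
plt.show()
```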
schelling_project_work.ipynb
###Markdown Original Prompt ###Code import numpy as np from ipythonblocks import BlockGrid as bg from IPython.html.widgets import interact, interactive, fixed from IPython.html import widgets from IPython.display import display import timeit #Creating interacts that allow the user to choose the percent of each color, #size of the grid, the individual box size, satisfaction percentage. def interacts(size1): return size1 def interacts1(satisfaction_1): return satisfaction_1 def interacts2(orange_1): return orange_1 def interacts3(blue_1, ): return blue_1 def interacts4(block_size1): return block_size1 j = interactive(interacts, size1 = (2,12)) p = interactive(interacts1, satisfaction_1 = (0,1,0.01)) o = interactive(interacts2, orange_1 = (0,1,0.01)) b = interactive(interacts3, blue_1 = (0,1,0.01)) bs = interactive(interacts4, block_size1 = (0,30,0.1)) display(bs) display(j) display(b) display(o) display(p) #Creates an nxn numpy grid with a chosen percent of 1's and 2's which correspond to orange and blue blocks. k = j.result size_of_block = bs.result satisfaction_percentage = p.result blue = b.result orange = o.result black = 1 - blue - orange y = k - 1 grid = np.random.choice([0,1,2],size=(k,k),p = [black,orange,blue]) print (grid) grid2 = np.hstack(grid) print (grid2) grid1 = bg(k,k, block_size=size_of_block) grid3 = bg(k,k, block_size=size_of_block) grid4 = bg(k,k, block_size=size_of_block) #Creates an nxn IPythonBlocks grid. def make_grid(): x = 0 orange2 = 0 blue2 = 0 black2 = 0 for i in grid4: if grid2[x]==1: orange2 += 1 i.set_colors(300, 178, 34) elif grid2[x]==2: blue2 += 1 i.set_colors(90, 300, 420) else: black2+=1 x += 1 return orange2,blue2,black2 orange3, blue3, black3 = make_grid() print ('Orange', orange3) print ('Blue', blue3) print ('Black', black3) display(grid4) %timeit -n10 -r2 make_grid() #Finds the neighbors of the grid and checks to see if the neighbors #have the same value and then calculates the satisfaction of that block. 
def satisfaction_percent(): same_c1 = 0 same_c2 = 0 same_c3 = 0 same_c4 = 0 same_col_1 = 0 same_col_2 = 0 same_row_1 = 0 same_row_2 = 0 satisfaction_c1 = 0 satisfaction_c2 = 0 satisfaction_c3 = 0 satisfaction_c4 = 0 satisfaction_col_1 = 0 satisfaction_col_2 = 0 satisfaction_row_1 = 0 satisfaction_row_2 = 0 sat_col_1=[] sat_col_2=[] sat_row_1=[] sat_row_2=[] sat_1=[] same = 0 row = 0 col = 0 i_5 = 0 i_6 = 0 i_7 = 0 i_8 = 0 i_9 = 0 for n in grid: if row==0 and col==0: if grid[0,0]==0: same_c1+=3 else: if grid[0,0]==grid[1,0]: same_c1+=1 if grid[0,0]==grid[0,1]: same_c1+=1 if grid[0,0]==grid[1,1]: same_c1+=1 satisfaction_c1 = same_c1/3 print ('satisfaction_c1', satisfaction_c1) if row==0 and col==y: if grid[0,y]==0: same_c2+=3 else: if grid[0,y]==grid[0,y-1]: same_c2+=1 if grid[0,y]==grid[1,y-1]: same_c2+=1 if grid[0,y]==grid[1,y]: same_c2+=1 satisfaction_c2 = same_c2/3 print ('satisfaction_c2',satisfaction_c2) if row==y and col==0: if grid[y,0]==0: same_c3+=3 else: if grid[y,0]==grid[y-1,0]: same_c3+=1 if grid[y,0]==grid[y-1,1]: same_c3+=1 if grid[y,0]==grid[y,1]: same_c3+=1 satisfaction_c3 = same_c3/3 print ('satisfaction_c3',satisfaction_c3) if row==y and col==y: if grid[y,y]==0: same_c4+=3 else: if grid[y,y]==grid[y-1,y]: same_c4+=1 if grid[y,y]==grid[y,y-1]: same_c4+=1 if grid[y,y]==grid[y-1,y-1]: same_c4+=1 satisfaction_c4 = same_c4/3 print ('satisfaction_c4', satisfaction_c4) if row==0 and col!=(0 or y): i_5+=1 if grid[row,col]==0: same_col_1+=5 else: if grid[row,col]==grid[row,col-1]: same_col_1+=1 if grid[row,col]==grid[row,col+1]: same_col_1+=1 if grid[row,col]==grid[row+1,col-1]: same_col_1+=1 if grid[row,col]==grid[row+1,col]: same_col_1+=1 if grid[row,col]==grid[row+1,col+1]: same_col_1+=1 satisfaction_col_1 = same_col_1/5 sat_col_1.append(satisfaction_col_1) true_satisfaction_col_1 = np.hstack(sat_col_1) if i_5>y-1: sats_col_1 = true_satisfaction_col_1 print ('sats_col_1',sats_col_1) elif row==y and col!=(0 or y): i_6+=1 if grid[row,col]==0: same_col_2+=5 else: if grid[row,col]==grid[row,col-1]: same_col_2+=1 if grid[row,col]==grid[row,col+1]: same_col_2+=1 if grid[row,col]==grid[row-1,col-1]: same_col_2+=1 if grid[row,col]==grid[row-1,col]: same_col_2+=1 if grid[row,col]==grid[row-1,col+1]: same_col_2+=1 satisfaction_col_2 = same_col_2/5 sat_col_2.append(satisfaction_col_2) true_satisfaction_col_2 = np.hstack(sat_col_2) if i_6>y-1: sats_col_2 = true_satisfaction_col_2 print ('sats_col_2',sats_col_2) elif row!=(0 or y) and col==0: i_7+=1 if grid[row,col]==0: same_row_1+=5 else: if grid[row,col]==grid[row-1,col]: same_row_1+=1 if grid[row,col]==grid[row+1,col]: same_row_1+=1 if grid[row,col]==grid[row-1,col+1]: same_row_1+=1 if grid[row,col]==grid[row,col+1]: same_row_1+=1 if grid[row,col]==grid[row+1,col+1]: same_row_1+=1 satisfaction_row_1 = same_row_1/5 sat_row_1.append(satisfaction_row_1) true_satisfaction_row_1 = np.hstack(sat_row_1) if i_7>y-1: sats_row_1 = true_satisfaction_row_1 print ('sats_row_1',sats_row_1) elif row!=(0 or y) and col==y: i_8+=1 if grid[row,col]==0: same_row_2+=5 else: if grid[row,col]==grid[row-1,col]: same_row_2+=1 if grid[row,col]==grid[row+1,col]: same_row_2+=1 if grid[row,col]==grid[row-1,col-1]: same_row_2+=1 if grid[row,col]==grid[row,col-1]: same_row_2+=1 if grid[row,col]==grid[row+1,col-1]: same_row_2+=1 satisfaction_row_2 = same_row_2/5 sat_row_2.append(satisfaction_row_2) true_satisfaction_row_2 = np.hstack(sat_row_2) if i_8>y-1: sats_row_2 = true_satisfaction_row_2 print ('sats_row_2',sats_row_2) else: i_9+=1 if grid[row,col]==0: same+=8 
else: if grid[row,col]==grid[row-1,col]: same+=1 if grid[row,col]==grid[row,col-1]: same+=1 if grid[row,col]==grid[row-1,col-1]: same+=1 if grid[row,col]==grid[row+1,col]: same+=1 if grid[row,col]==grid[row,col+1]: same+=1 if grid[row,col]==grid[row+1,col+1]: same+=1 if grid[row,col]==grid[row-1,col+1]: same+=1 if grid[row,col]==grid[row+1,col-1]: same+=1 satisfaction = same/8 sat_1.append(satisfaction) true_satisfaction_1 = np.hstack(sat_1) if i_9>y-2: sats_1 = true_satisfaction_1 print ('sats_1',sats_1) col+=1 if col>=10: col = col - 10 row+=1 sat_c_1_1 = [] sat_c_2_1 = [] sat_c_3_1 = [] sat_c_4_1 = [] sat_col_1_1 = [] sat_col_2_1 = [] sat_row_1_1 = [] sat_row_2_1 = [] sat_1_1 = [] m_1=0 m_2=0 m_3=0 m_4=0 m_5=0 m_6=0 m_7=0 m_8=0 m_9=0 for n in sat_col_1: if m_5==0: new_value_5 = sat_col_1[0] sat_col_1_1.append(new_value_5) s_col_1 = np.hstack(sat_col_1_1) m_5+=1 else: new_value_5 = sat_col_1[m_5] - sat_col_1[m_5-1] sat_col_1_1.append(new_value_5) m_5+=1 for n in sat_col_2: if m_6==0: new_value_6 = sat_col_2[0] sat_col_2_1.append(new_value_6) s_col_2 = np.hstack(sat_col_2_1) m_6+=1 else: new_value_6 = sat_col_2[m_6] - sat_col_2[m_6-1] sat_col_2_1.append(new_value_6) m_6+=1 for n in sat_row_1: if m_7==0: new_value_7 = sat_row_1[0] sat_row_1_1.append(new_value_7) sat_row_1 = np.hstack(sat_row_1_1) m_7+=1 else: new_value_7 = sat_row_1[m_7] - sat_row_1[m_7-1] sat_row_1_1.append(new_value_7) m_7+=1 for n in sat_row_2: if m_8==0: new_value_8 = sat_row_2[0] sat_row_2_1.append(new_value_8) s_row_2 = np.hstack(sat_row_2_1) m_8+=1 else: new_value_8 = sat_row_2[m_8] - sat_row_2[m_8-1] sat_row_2_1.append(new_value_8) m_8+=1 for n in sat_1: if m_9==0: new_value_9 = sat_1[0] sat_1_1.append(new_value_9) sat_1 = np.hstack(sat_1_1) m_9+=1 else: new_value_9 = sat_1[m_9] - sat_1[m_9-1] sat_1_1.append(new_value_9) m_9+=1 s_col_1 = np.hstack(sat_col_1_1) print ("satisfaction of 1st row", s_col_1) #Times how long it takes to find the satisfaction of each block %timeit -n1 -r1 satisfaction_percent() #If the block's satisfaction is below the satisfaction percentage chosen by the user #then the blocks moves to another position in the nxn grid. 
def move_unsatisfied(): row = 0 col = 0 ii = 0 for n in grid: if sat_c_1_1 < satisfaction_percentage: n.set_color(0,0,0) if grid[0,0]==1: if grid[row,col]==0: n.set_colors(300, 178, 34) else: col+=1 if col>=10: col = col - 10 row+=1 if grid[0,0]==2: if grid[row,col]==0: n.set_colors(90, 300, 420) else: col+=1 if col>=10: col = col - 10 row+=1 if sat_c_2_1 < satisfaction_percentage: n.set_color(0,0,0) if grid[0,y]==1: if grid[row,col]==0: n.set_colors(300, 178, 34) else: col+=1 if col>=10: col = col - 10 row+=1 if grid[0,y]==2: if grid[row,col]==0: n.set_colors(90, 300, 420) else: col+=1 if col>=10: col = col - 10 row+=1 if sat_c_3_1 < satisfaction_percentage: n.set_color(0,0,0) if grid[y,0]==1: if grid[row,col]==0: n.set_colors(300, 178, 34) else: col+=1 if col>=10: col = col - 10 row+=1 if grid[y,0]==2: if grid[row,col]==0: n.set_colors(90, 300, 420) else: col+=1 if col>=10: col = col - 10 row+=1 if sat_c_4_1 < satisfaction_percentage: n.set_color(0,0,0) if grid[y,y]==1: if grid[row,col]==0: n.set_colors(300, 178, 34) else: col+=1 if col>=10: col = col - 10 row+=1 if grid[y,y]==2: if grid[row,col]==0: n.set_colors(90, 300, 420) else: col+=1 if col>=10: col = col - 10 row+=1 if sat_col_1_1[ii] < satisfaction_percentage: n.set_color(0,0,0) if grid[row,col]==1: if grid[row+ii,col+ii]==0: n.set_colors(90, 300, 420) else: col+=1 if col>=10: col = col - 10 row+=1 if grid[row,col]==2: if grid[row+ii,col+ii]==0: n.set_colors(300, 178, 34) else: col+=1 if col>=10: col = col - 10 row+=1 if sat_col_2_1[ii] < satisfaction_percentage: n.set_color(0,0,0) if grid[row,col]==1: if grid[row+ii,col+ii]==0: n.set_colors(90, 300, 420) else: col+=1 if col>=10: col = col - 10 row+=1 if grid[row,col]==2: if grid[row+ii,col+ii]==0: n.set_colors(300, 178, 34) else: col+=1 if col>=10: col = col - 10 row+=1 if sat_row_1_1[ii] < satisfaction_percentage: n.set_color(0,0,0) if grid[row,col]==1: if grid[row+ii,col+ii]==0: n.set_colors(90, 300, 420) else: col+=1 if col>=10: col = col - 10 row+=1 if grid[row,col]==2: if grid[row+ii,col+ii]==0: n.set_colors(300, 178, 34) else: col+=1 if col>=10: col = col - 10 row+=1 if sat_row_2_1[ii] < satisfaction_percentage: n.set_color(0,0,0) if grid[row,col]==1: if grid[row+ii,col+ii]==0: n.set_colors(90, 300, 420) else: col+=1 if col>=10: col = col - 10 row+=1 if grid[row,col]==2: if grid[row+ii,col+ii]==0: n.set_colors(300, 178, 34) else: col+=1 if col>=10: col = col - 10 row+=1 if sat_1_1[ii] < satisfaction_percentage: n.set_color(0,0,0) if grid[row,col]==1: if grid[row+ii,col+ii]==0: n.set_colors(90, 300, 420) else: col+=1 if col>=10: col = col - 10 row+=1 if grid[row,col]==2: if grid[row+ii,col+ii]==0: n.set_colors(300, 178, 34) else: col+=1 if col>=10: col = col - 10 row+=1 ii+=1 col+=1 if col>=10: col = col - 10 row+=1 return grid %timeit -n1 -r1 move_unsatisfied() #Displays the final grid after all the blocks are at their max satisfaction. def final_grid(): x = 0 orange4 = 0 blue4 = 0 black4 = 0 for i in grid3: if orange3 > orange4: i.set_colors(300, 178, 34) orange4+=1 elif black3 > black4: i.set_colors(0,0,0) black4+=1 else: i.set_colors(90, 300, 420) print ('Orange', orange3) print ('Blue', blue3) print ('Black', black3) display(grid3) final_grid() ###Output Orange 64 Blue 34 Black 46
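The `satisfaction_percent` routine above handles every corner and edge as a separate case and hard-codes the wrap-around at `col>=10`, which only matches a 10x10 grid. As an alternative sketch (my own, not the project's prescribed solution), the same per-cell satisfaction can be computed for any grid size by padding the board and counting same-colour neighbours with array shifts; empty cells (value 0) are treated as fully satisfied, matching the logic above.

```python
import numpy as np

def satisfaction_grid(grid):
    """Fraction of same-colour neighbours for every cell of a k x k board."""
    k = grid.shape[0]
    same = np.zeros_like(grid, dtype=float)
    neighbours = np.zeros_like(grid, dtype=float)
    padded = np.pad(grid, 1, mode="constant", constant_values=-1)  # -1 marks off-board
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            shifted = padded[1 + dr:1 + dr + k, 1 + dc:1 + dc + k]
            on_board = shifted != -1
            neighbours += on_board
            same += on_board & (shifted == grid)
    return np.where(grid == 0, 1.0, same / neighbours)

# Cells below the chosen threshold are the ones that would try to move:
# unsatisfied = np.argwhere(satisfaction_grid(grid) < satisfaction_percentage)
```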
Essential_Math_for_Machine_Learning_Python_Edition/Module04/04-02-Statistics Fundamentals.ipynb
###Markdown Statistics FundamentalsStatistics is primarily about analyzing data samples, and that starts with udnerstanding the distribution of data in a sample. Analyzing Data DistributionA great deal of statistical analysis is based on the way that data values are distributed within the dataset. In this section, we'll explore some statistics that you can use to tell you about the values in a dataset. Measures of Central TendencyThe term *measures of central tendency* sounds a bit grand, but really it's just a fancy way of saying that we're interested in knowing where the middle value in our data is. For example, suppose decide to conduct a study into the comparative salaries of people who graduated from the same school. You might record the results like this:| Name | Salary ||----------|-------------|| Dan | 50,000 || Joann | 54,000 || Pedro | 50,000 || Rosie | 189,000 || Ethan | 55,000 || Vicky | 40,000 || Frederic | 59,000 |Now, some of the former-students may earn a lot, and others may earn less; but what's the salary in the middle of the range of all salaries? MeanA common way to define the central value is to use the *mean*, often called the *average*. This is calculated as the sum of the values in the dataset, divided by the number of observations in the dataset. When the dataset consists of the full population, the mean is represented by the Greek symbol ***&mu;*** (*mu*), and the formula is written like this:\begin{equation}\mu = \frac{\displaystyle\sum_{i=1}^{N}X_{i}}{N}\end{equation}More commonly, when working with a sample, the mean is represented by ***x&772;*** (*x-bar*), and the formula is written like this (note the lower case letters used to indicate values from a sample):\begin{equation}\bar{x} = \frac{\displaystyle\sum_{i=1}^{n}x_{i}}{n}\end{equation}In the case of our list of heights, this can be calculated as:\begin{equation}\bar{x} = \frac{50000+54000+50000+189000+55000+40000+59000}{7}\end{equation}Which is **71,000**.>In technical terminology, ***x&772;*** is a *statistic* (an estimate based on a sample of data) and ***&mu;*** is a *parameter* (a true value based on the entire population). A lot of the time, the parameters for the full population will be impossible (or at the very least, impractical) to measure; so we use statistics obtained from a representative sample to approximate them. In this case, we can use the sample mean of salary for our selection of surveyed students to try to estimate the actual average salary of all students who graduate from our school.In Python, when working with data in a *pandas.dataframe*, you can use the ***mean*** function, like this: ###Code import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000]}) print (df['Salary'].mean()) ###Output 71000.0 ###Markdown So, is **71,000** really the central value? Or put another way, would it be reasonable for a graduate of this school to expect to earn $71,000? After all, that's the average salary of a graduate from this school.If you look closely at the salaries, you can see that out of the seven former students, six earn less than the mean salary. The data is *skewed* by the fact that Rosie has clearly managed to find a much higher-paid job than her classmates. MedianOK, let's see if we can find another definition for the central value that more closely reflects the expected earning potential of students attending our school. 
Another measure of central tendancy we can use is the *median*. To calculate the median, we need to sort the values into ascending order and then find the middle-most value. When there are an odd number of observations, you can find the position of the median value using this formula (where *n* is the number of observations):\begin{equation}\frac{n+1}{2}\end{equation}Remember that this formula returns the *position* of the median value in the sorted list; not the value itself.If the number of observations is even, then things are a little (but not much) more complicated. In this case you calculate the median as the average of the two middle-most values, which are found like this:\begin{equation}\frac{n}{2} \;\;\;\;and \;\;\;\; \frac{n}{2} + 1\end{equation}So, for our graduate salaries; first lets sort the dataset:| Salary ||-------------|| 40,000 || 50,000 || 50,000 || 54,000 || 55,000 || 59,000 || 189,000 |There's an odd number of observation (7), so the median value is at position (7 + 1) &div; 2; in other words, position 4:| Salary ||-------------|| 40,000 || 50,000 || 50,000 ||***>54,000*** || 55,000 || 59,000 || 189,000 |So the median salary is **54,000**.The *pandas.dataframe* class in Python has a ***median*** function to find the median: ###Code import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000]}) print (df['Salary'].median()) ###Output 54000.0 ###Markdown ModeAnother related statistic is the *mode*, which indicates the most frequently occurring value. If you think about it, this is potentially a good indicator of how much a student might expect to earn when they graduate from the school; out of all the salaries that are being earned by former students, the mode is earned by more than any other.Looking at our list of salaries, there are two instances of former students earning **50,000**, but only one instance each for all other salaries:| Salary ||-------------|| 40,000 ||***>50,000***||***>50,000***|| 54,000 || 55,000 || 59,000 || 189,000 |The mode is therefore **50,000**.As you might expect, the *pandas.dataframe* class has a ***mode*** function to return the mode: ###Code import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000]}) print (df['Salary'].mode()) ###Output 0 50000 dtype: int64 ###Markdown Multimodal DataIt's not uncommon for a set of data to have more than one value as the mode. For example, suppose Ethan receives a raise that takes his salary to **59,000**:| Salary ||-------------|| 40,000 ||***>50,000***||***>50,000***|| 54,000 ||***>59,000***||***>59,000***|| 189,000 |Now there are two values with the highest frequency. This dataset is *bimodal*. More generally, when there is more than one mode value, the data is considered *multimodal*.The *pandas.dataframe.**mode*** function returns all of the modes: ###Code import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,59000,40000,59000]}) print (df['Salary'].mode()) ###Output 0 50000 1 59000 dtype: int64 ###Markdown Distribution and DensityNow we know something about finding the center, we can start to explore how the data is distributed around it. 
What we're interested in here is understanding the general "shape" of the data distribution so that we can begin to get a feel for what a 'typical' value might be expected to be.We can start by finding the extremes - the minimum and maximum. In the case of our salary data, the lowest paid graduate from our school is Vicky, with a salary of **40,000**; and the highest-paid graduate is Rosie, with **189,000**.The *pandas.dataframe* class has ***min*** and ***max*** functions to return these values.Run the following code to compare the minimum and maximum salaries to the central measures we calculated previously: ###Code import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000]}) print ('Min: ' + str(df['Salary'].min())) print ('Mode: ' + str(df['Salary'].mode()[0])) print ('Median: ' + str(df['Salary'].median())) print ('Mean: ' + str(df['Salary'].mean())) print ('Max: ' + str(df['Salary'].max())) ###Output Min: 40000 Mode: 50000 Median: 54000.0 Mean: 71000.0 Max: 189000 ###Markdown We can examine these values, and get a sense for how the data is distributed - for example, we can see that the *mean* is closer to the max than the *median*, and that both are closer to the *min* than to the *max*.However, it's generally easier to get a sense of the distribution by visualizing the data. Let's start by creating a histogram of the salaries, highlighting the *mean* and *median* salaries (the *min*, *max* are fairly self-evident, and the *mode* is wherever the highest bar is): ###Code %matplotlib inline import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000]}) salary = df['Salary'] salary.plot.hist(title='Salary Distribution', color='lightblue', bins=25) plt.axvline(salary.mean(), color='magenta', linestyle='dashed', linewidth=2) plt.axvline(salary.median(), color='green', linestyle='dashed', linewidth=2) plt.show() ###Output _____no_output_____ ###Markdown The ***mean*** and ***median*** are shown as dashed lines. Note the following:- *Salary* is a continuous data value - graduates could potentially earn any value along the scale, even down to a fraction of cent.- The number of bins in the histogram determines the size of each salary band for which we're counting frequencies. Fewer bins means merging more individual salaries together to be counted as a group.- The majority of the data is on the left side of the histogram, reflecting the fact that most graduates earn between 40,000 and 55,000- The mean is a higher value than the median and mode.- There are gaps in the histogram for salary bands that nobody earns.The histogram shows the relative frequency of each salary band, based on the number of bins. It also gives us a sense of the *density* of the data for each point on the salary scale. 
With enough data points, and small enough bins, we could view this density as a line that shows the shape of the data distribution.Run the following cell to show the density of the salary data as a line on top of the histogram: ###Code %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np import scipy.stats as stats df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000]}) salary = df['Salary'] density = stats.gaussian_kde(salary) n, x, _ = plt.hist(salary, histtype='step', normed=True, bins=25) plt.plot(x, density(x)*5) plt.axvline(salary.mean(), color='magenta', linestyle='dashed', linewidth=2) plt.axvline(salary.median(), color='green', linestyle='dashed', linewidth=2) plt.show() ###Output N:\Python\Python37-32\lib\site-packages\matplotlib\axes\_axes.py:6521: MatplotlibDeprecationWarning: The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead. alternative="'density'", removal="3.1") ###Markdown Note that the density line takes the form of an asymmetric curve that has a "peak" on the left and a long tail on the right. We describe this sort of data distribution as being *skewed*; that is, the data is not distributed symmetrically but "bunched together" on one side. In this case, the data is bunched together on the left, creating a long tail on the right; and is described as being *right-skewed* because some infrequently occurring high values are pulling the *mean* to the right.Let's take a look at another set of data. We know how much money our graduates make, but how many hours per week do they need to work to earn their salaries? Here's the data:| Name | Hours ||----------|-------|| Dan | 41 || Joann | 40 || Pedro | 36 || Rosie | 30 || Ethan | 35 || Vicky | 39 || Frederic | 40 |Run the following code to show the distribution of the hours worked: ###Code %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np import scipy.stats as stats df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Hours':[41,40,36,30,35,39,40]}) hours = df['Hours'] density = stats.gaussian_kde(hours) n, x, _ = plt.hist(hours, histtype='step', normed=True, bins=25) plt.plot(x, density(x)*7) plt.axvline(hours.mean(), color='magenta', linestyle='dashed', linewidth=2) plt.axvline(hours.median(), color='green', linestyle='dashed', linewidth=2) plt.show() ###Output N:\Python\Python37-32\lib\site-packages\matplotlib\axes\_axes.py:6521: MatplotlibDeprecationWarning: The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead. alternative="'density'", removal="3.1") ###Markdown Once again, the distribution is skewed, but this time it's **left-skewed**. Note that the curve is asymmetric with the ***mean*** to the left of the ***median*** and the *mode*; and the average weekly working hours skewed to the lower end.Once again, Rosie seems to be getting the better of the deal. She earns more than her former classmates for working fewer hours. 
Maybe a look at the test scores the students achieved on their final grade at school might help explain her success:| Name | Grade ||----------|-------|| Dan | 50 || Joann | 50 || Pedro | 46 || Rosie | 95 || Ethan | 50 || Vicky | 5 || Frederic | 57 |Let's take a look at the distribution of these grades: ###Code %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np import scipy.stats as stats df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Grade':[50,50,46,95,50,5,57]}) grade = df['Grade'] density = stats.gaussian_kde(grade) n, x, _ = plt.hist(grade, histtype='step', normed=True, bins=25) plt.plot(x, density(x)*7.5) plt.axvline(grade.mean(), color='magenta', linestyle='dashed', linewidth=2) plt.axvline(grade.median(), color='green', linestyle='dashed', linewidth=2) plt.show() ###Output N:\Python\Python37-32\lib\site-packages\matplotlib\axes\_axes.py:6521: MatplotlibDeprecationWarning: The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead. alternative="'density'", removal="3.1") ###Markdown This time, the distribution is symmetric, forming a "bell-shaped" curve. The ***mean***, ***median***, and mode are at the same location, and the data tails off evenly on both sides from a central peak.Statisticians call this a *normal* distribution (or sometimes a *Gaussian* distribution), and it occurs quite commonly in many scenarios due to something called the *Central Limit Theorem*, which reflects the way continuous probability works - more about that later. Skewness and KurtosisYou can measure *skewness* (in which direction the data is skewed and to what degree) and kurtosis (how "peaked" the data is) to get an idea of the shape of the data distribution. In Python, you can use the ***skew*** and ***kurt*** functions to find this: ###Code %matplotlib inline import pandas as pd import numpy as np from matplotlib import pyplot as plt import scipy.stats as stats df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,30,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) numcols = ['Salary', 'Hours', 'Grade'] for col in numcols: print(df[col].name + ' skewness: ' + str(df[col].skew())) print(df[col].name + ' kurtosis: ' + str(df[col].kurt())) density = stats.gaussian_kde(df[col]) n, x, _ = plt.hist(df[col], histtype='step', normed=True, bins=25) plt.plot(x, density(x)*6) plt.show() print('\n') ###Output N:\Python\Python37-32\lib\site-packages\matplotlib\axes\_axes.py:6521: MatplotlibDeprecationWarning: The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead. 
alternative="'density'", removal="3.1") ###Markdown Now let's look at the distribution of a real dataset - let's see how the heights of the father's measured in Galton's study of parent and child heights are distributed: ###Code %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np import scipy.stats as stats import statsmodels.api as sm df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data fathers = df['father'] density = stats.gaussian_kde(fathers) n, x, _ = plt.hist(fathers, histtype='step', normed=True, bins=50) plt.plot(x, density(x)*2.5) plt.axvline(fathers.mean(), color='magenta', linestyle='dashed', linewidth=2) plt.axvline(fathers.median(), color='green', linestyle='dashed', linewidth=2) plt.show() ###Output _____no_output_____ ###Markdown As you can see, the father's height measurements are approximately normally distributed - in other words, they form a more or less *normal* distribution that is symmetric around the mean. Measures of VarianceWe can see from the distribution plots of our data that the values in our dataset can vary quite widely. We can use various measures to quantify this variance. RangeA simple way to quantify the variance in a dataset is to identify the difference between the lowest and highest values. This is called the *range*, and is calculated by subtracting the minimim value from the maximum value.The following Python code creates a single Pandas dataframe for our school graduate data, and calculates the *range* for each of the numeric features: ###Code import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,30,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) numcols = ['Salary', 'Hours', 'Grade'] for col in numcols: print(df[col].name + ' range: ' + str(df[col].max() - df[col].min())) ###Output _____no_output_____ ###Markdown Percentiles and QuartilesThe range is easy to calculate, but it's not a particularly useful statistic. For example, a range of 149,000 between the lowest and highest salary does not tell us which value within that range a graduate is most likely to earn - it doesn't tell us nothing about how the salaries are distributed around the mean within that range. The range tells us very little about the comparative position of an individual value within the distribution - for example, Frederic scored 57 in his final grade at school; which is a pretty good score (it's more than all but one of his classmates); but this isn't immediately apparent from a score of 57 and range of 90. PercentilesA percentile tells us where a given value is ranked in the overall distribution. For example, 25% of the data in a distribution has a value lower than the 25th percentile; 75% of the data has a value lower than the 75th percentile, and so on. Note that half of the data has a value lower than the 50th percentile - so the 50th percentile is also the median!Let's examine Frederic's grade using this approach. 
We know he scored 57, but how does he rank compared to his fellow students?Well, there are seven students in total, and five of them scored less than Frederic; so we can calculate the percentile for Frederic's grade like this:\begin{equation}\frac{5}{7} \times 100 \approx 71.4\end{equation} So Frederic's score puts him at the 71.4th percentile in his class.In Python, you can use the ***percentileofscore*** function in the *scipy.stats* package to calculate the percentile for a given value in a set of values: ###Code import pandas as pd from scipy import stats df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,30,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) print(stats.percentileofscore(df['Grade'], 57, 'strict')) ###Output _____no_output_____ ###Markdown We've used the strict definition of percentile; but sometimes it's calculated as being the percentage of values that are less than *or equal to* the value you're comparing. In this case, the calculation for Frederic's percentile would include his own score:\begin{equation}\frac{6}{7} \times 100 \approx 85.7\end{equation} You can calculate this way in Python by using the ***weak*** mode of the ***percentileofscore*** function: ###Code import pandas as pd from scipy import stats df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,30,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) print(stats.percentileofscore(df['Grade'], 57, 'weak')) ###Output _____no_output_____ ###Markdown We've considered the percentile of Frederic's grade, and used it to rank him compared to his fellow students. So what about Dan, Joann, and Ethan? How do they compare to the rest of the class? They scored the same grade (50), so in a sense they share a percentile.To deal with this *grouped* scenario, we can average the percentage rankings for the matching scores. We treat half of the scores matching the one we're ranking as if they are below it, and half as if they are above it. In this case, there were three matching scores of 50, and for each of these we calculate the percentile as if 1 was below and 1 was above. So the calculation for a percentile for Joann based on scores being less than or equal to 50 is:\begin{equation}(\frac{4}{7}) \times 100 \approx 57.14\end{equation} The value of **4** consists of the two scores that are below Joann's score of 50, Joann's own score, and half of the scores that are the same as Joann's (of which there are two, so we count one).In Python, the ***percentileofscore*** function has a ***rank*** function that calculates grouped percentiles like this: ###Code import pandas as pd from scipy import stats df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,30,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) print(stats.percentileofscore(df['Grade'], 50, 'rank')) ###Output _____no_output_____ ###Markdown QuartilesRather than using individual percentiles to compare data, we can consider the overall spread of the data by dividing those percentiles into four *quartiles*. 
The first quartile contains the values from the minimum to the 25th percentile, the second from the 25th percentile to the 50th percentile (which is the median), the third from the 50th percentile to the 75th percentile, and the fourth from the 75th percentile to the maximum.In Python, you can use the ***quantile*** function of the *pandas.dataframe* class to find the threshold values at the 25th, 50th, and 75th percentiles (*quantile* is a generic term for a ranked position, such as a percentile or quartile).Run the following code to find the quartile thresholds for the weekly hours worked by our former students: ###Code # Quartiles import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,17,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) print(df['Hours'].quantile([0.25, 0.5, 0.75])) ###Output _____no_output_____ ###Markdown Its usually easier to understand how data is distributed across the quartiles by visualizing it. You can use a histogram, but many data scientists use a kind of visualization called a *box plot* (or a *box and whiskers* plot).Let's create a box plot for the weekly hours: ###Code %matplotlib inline import pandas as pd from matplotlib import pyplot as plt df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,30,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) # Plot a box-whisker chart df['Hours'].plot(kind='box', title='Weekly Hours Distribution', figsize=(10,8)) plt.show() ###Output _____no_output_____ ###Markdown The box plot consists of:- A rectangular *box* that shows where the data between the 25th and 75th percentile (the second and third quartile) lie. This part of the distribution is often referred to as the *interquartile range* - it contains the middle 50 data values.- *Whiskers* that extend from the box to the bottom of the first quartile and the top of the fourth quartile to show the full range of the data.- A line in the box that shows that location of the median (the 50th percentile, which is also the threshold between the second and third quartile)In this case, you can see that the interquartile range is between 35 and 40, with the median nearer the top of that range. The range of the first quartile is from around 30 to 35, and the fourth quartile is from 40 to 41. OutliersLet's take a look at another box plot - this time showing the distribution of the salaries earned by our former classmates: ###Code %matplotlib inline import pandas as pd from matplotlib import pyplot as plt df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,30,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) # Plot a box-whisker chart df['Salary'].plot(kind='box', title='Salary Distribution', figsize=(10,8)) plt.show() ###Output _____no_output_____ ###Markdown So what's going on here?Well, as we've already noticed, Rosie earns significantly more than her former classmates. So much more in fact, that her salary has been identifed as an *outlier*. An outlier is a value that is so far from the center of the distribution compared to other values that it skews the distribution by affecting the mean. 
There are all sorts of reasons that you might have outliers in your data, including data entry errors, failures in sensors or data-generating equipment, or genuinely anomalous values.So what should we do about it?This really depends on the data, and what you're trying to use it for. In this case, let's assume we're trying to figure out what's a reasonable expectation of salary for a graduate of our school to earn. Ignoring for the moment that we have an extremely small dataset on which to base our judgement, it looks as if Rosie's salary could be either an error (maybe she mis-typed it in the form used to collect data) or a genuine anomaly (maybe she became a professional athlete or took some other extremely highly paid job). Either way, it doesn't seem to represent a salary that a typical graduate might earn.Let's see what the distribution of the data looks like without the outlier: ###Code %matplotlib inline import pandas as pd from matplotlib import pyplot as plt df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,17,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) # Plot a box-whisker chart df['Salary'].plot(kind='box', title='Salary Distribution', figsize=(10,8), showfliers=False) plt.show() ###Output _____no_output_____ ###Markdown Now it looks like there's a more even distribution of salaries. It's still not quite symmetrical, but there's much less overall variance. There's potentially some cause here to disregard Rosie's salary data when we compare the salaries, as it is tending to skew the analysis.So is that OK? Can we really just ignore a data value we don't like?Again, it depends on what you're analyzing. Let's take a look at the distribution of final grades: ###Code %matplotlib inline import pandas as pd from matplotlib import pyplot as plt df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,17,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) # Plot a box-whisker chart df['Grade'].plot(kind='box', title='Grade Distribution', figsize=(10,8)) plt.show() ###Output _____no_output_____ ###Markdown Once again there are outliers, this time at both ends of the distribution. However, think about what this data represents. If we assume that the grade for the final test is based on a score out of 100, it seems reasonable to expect that some students will score very low (maybe even 0) and some will score very well (maybe even 100); but most will get a score somewhere in the middle. The reason that the low and high scores here look like outliers might just be because we have so few data points.
Let's see what happens if we include a few more students in our data: ###Code %matplotlib inline import pandas as pd from matplotlib import pyplot as plt df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie', 'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny'], 'Grade':[50,50,46,95,50,5,57,42,26,72,78,60,40,17,85]}) # Plot a box-whisker chart df['Grade'].plot(kind='box', title='Grade Distribution', figsize=(10,8)) plt.show() ###Output _____no_output_____ ###Markdown With more data, there are some more high and low scores; so we no longer consider the isolated cases to be outliers.The key point to take away here is that you need to really understand the data and what you're trying to do with it, and you need to ensure that you have a reasonable sample size, before determining what to do with outlier values. Variance and Standard DeviationWe've seen how to understand the *spread* of our data distribution using the range, percentiles, and quartiles; and we've seen the effect of outliers on the distribution. Now it's time to look at how to measure the amount of variance in the data. VarianceVariance is measured as the average of the squared difference from the mean. For a full population, it's indicated by a squared Greek letter *sigma* (***σ2***) and calculated like this:\begin{equation}\sigma^{2} = \frac{\displaystyle\sum_{i=1}^{N} (X_{i} -\mu)^{2}}{N}\end{equation}For a sample, it's indicated as ***s2*** and calculated like this:\begin{equation}s^{2} = \frac{\displaystyle\sum_{i=1}^{n} (x_{i} -\bar{x})^{2}}{n-1}\end{equation}In both cases, we take the difference between each individual data value and the mean, square it, and sum the results. Then, for a full population we just divide by the number of data items to get the average. When using a sample, we divide by the total number of items **minus 1** to correct for sample bias.Let's work this out for our student grades (assuming our data is a sample from the larger student population).First, we need to calculate the mean grade:\begin{equation}\bar{x} = \frac{50+50+46+95+50+5+57}{7}\approx 50.43\end{equation}Then we can plug that into our formula for the variance:\begin{equation}s^{2} = \frac{(50-50.43)^{2}+(50-50.43)^{2}+(46-50.43)^{2}+(95-50.43)^{2}+(50-50.43)^{2}+(5-50.43)^{2}+(57-50.43)^{2}}{7-1}\end{equation}So:\begin{equation}s^{2} = \frac{0.185+0.185+19.625+1986.485+0.185+2063.885+43.165}{6}\end{equation}Which simplifies to:\begin{equation}s^{2} = \frac{4113.715}{6}\end{equation}Giving the result:\begin{equation}s^{2} \approx 685.619\end{equation}The higher the variance, the more spread your data is around the mean.In Python, you can use the ***var*** function of the *pandas.dataframe* class to calculate the variance of a column in a dataframe: ###Code import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,17,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) print(df['Grade'].var()) ###Output _____no_output_____ ###Markdown Standard DeviationTo calculate the variance, we squared the difference of each value from the mean. If we hadn't done this, the numerator of our fraction would always end up being zero (because the mean is at the center of our values).
However, this means that the variance is not in the same unit of measurement as our data - in our case, since we're calculating the variance for grade points, it's in grade points squared, which is not very helpful.To get the measure of variance back into the same unit of measurement, we need to find its square root:\begin{equation}s = \sqrt{685.619} \approx 26.184\end{equation}So what does this value represent?It's the *standard deviation* for our grades data. More formally, it's calculated like this for a full population:\begin{equation}\sigma = \sqrt{\frac{\displaystyle\sum_{i=1}^{N} (X_{i} -\mu)^{2}}{N}}\end{equation}Or like this for a sample:\begin{equation}s = \sqrt{\frac{\displaystyle\sum_{i=1}^{n} (x_{i} -\bar{x})^{2}}{n-1}}\end{equation}Note that in both cases, it's just the square root of the corresponding variance formula!In Python, you can calculate it using the ***std*** function: ###Code import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,17,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) print(df['Grade'].std()) ###Output _____no_output_____ ###Markdown Standard Deviation in a Normal DistributionIn statistics and data science, we spend a lot of time considering *normal* distributions, because they occur so frequently. The standard deviation plays an important role in a normal distribution.Run the following cell to show a histogram of a *standard normal* distribution (which is a distribution with a mean of 0 and a standard deviation of 1): ###Code %matplotlib inline import pandas as pd import matplotlib.pyplot as plt import numpy as np import scipy.stats as stats # Create a random standard normal distribution df = pd.DataFrame(np.random.randn(100000, 1), columns=['Grade']) # Plot the distribution as a histogram with a density curve grade = df['Grade'] density = stats.gaussian_kde(grade) n, x, _ = plt.hist(grade, color='lightgrey', normed=True, bins=100) plt.plot(x, density(x)) # Get the mean and standard deviation s = df['Grade'].std() m = df['Grade'].mean() # Annotate 1 stdev x1 = [m-s, m+s] y1 = [0.25, 0.25] plt.plot(x1,y1, color='magenta') plt.annotate('1s (68.26%)', (x1[1],y1[1])) # Annotate 2 stdevs x2 = [m-(s*2), m+(s*2)] y2 = [0.05, 0.05] plt.plot(x2,y2, color='green') plt.annotate('2s (95.45%)', (x2[1],y2[1])) # Annotate 3 stdevs x3 = [m-(s*3), m+(s*3)] y3 = [0.005, 0.005] plt.plot(x3,y3, color='orange') plt.annotate('3s (99.73%)', (x3[1],y3[1])) # Show the location of the mean plt.axvline(grade.mean(), color='grey', linestyle='dashed', linewidth=1) plt.show() ###Output _____no_output_____ ###Markdown The horizontal colored lines show the percentage of data within 1, 2, and 3 standard deviations of the mean (plus or minus).In any normal distribution:- Approximately 68.26% of values fall within one standard deviation from the mean.- Approximately 95.45% of values fall within two standard deviations from the mean.- Approximately 99.73% of values fall within three standard deviations from the mean. Z ScoreSo in a normal (or close to normal) distribution, standard deviation provides a way to evaluate how far from a mean a given range of values falls, allowing us to compare where a particular value lies within the distribution. For example, suppose Rosie tells you she was the highest scoring student among her friends - that doesn't really help us assess how well she scored.
She may have scored only a fraction of a point above the second-highest scoring student. Even if we know she was in the top quartile, if we don't know how the rest of the grades are distributed it's still not clear how well she performed compared to her friends.However, if she tells you how many standard deviations higher than the mean her score was, this will help you compare her score to that of her classmates.So how do we know how many standard deviations above or below the mean a particular value is? We call this a *Z Score*, and it's calculated like this for a full population:\begin{equation}Z = \frac{x - \mu}{\sigma}\end{equation}or like this for a sample:\begin{equation}Z = \frac{x - \bar{x}}{s}\end{equation}So, let's examine Rosie's grade of 95. Now that we know the *mean* grade is 50.43 and the *standard deviation* is 26.184, we can calculate the Z Score for this grade like this:\begin{equation}Z = \frac{95 - 50.43}{26.184} = 1.702\end{equation}So Rosie's grade is 1.702 standard deviations above the mean. Summarizing Data Distribution in PythonWe've seen how to obtain individual statistics in Python, but you can also use the ***describe*** function to retrieve summary statistics for all numeric columns in a dataframe. These summary statistics include many of the statistics we've examined so far (though it's worth noting that the *median* is not labeled explicitly - it appears as the 50% row): ###Code import pandas as pd df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'], 'Salary':[50000,54000,50000,189000,55000,40000,59000], 'Hours':[41,40,36,17,35,39,40], 'Grade':[50,50,46,95,50,5,57]}) print(df.describe()) ###Output _____no_output_____
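###Markdown (Supplementary check, not part of the original lesson text) The 50% row reported by ***describe*** is in fact the median, and the Z Score formula above can be applied to every student at once. This is a minimal sketch reusing the same dataframe: ###Code
import pandas as pd

df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
                   'Salary':[50000,54000,50000,189000,55000,40000,59000],
                   'Hours':[41,40,36,17,35,39,40],
                   'Grade':[50,50,46,95,50,5,57]})

# The 50% value reported by describe() is the median
print(df['Grade'].median())               # 50.0
print(df['Grade'].describe()['50%'])      # 50.0

# Sample Z Score for every grade: (x - mean) / sample standard deviation
df['Grade_z'] = (df['Grade'] - df['Grade'].mean()) / df['Grade'].std()
print(df[['Name', 'Grade', 'Grade_z']])
###Output _____no_output_____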
data/2019-01-19_football_managers/manager_table_script.ipynb
###Markdown Step 1: create columns containing a manager's previous and current clubs ###Code manager_previous_club_ids_list = [] manager_current_club_id_list = [] for m in manager_table["manager"]: try: manager_all_tenures = manager_all_tenures_performance_df[manager_all_tenures_performance_df["TM_manager_name"]==m] manager_previous_club_ids_list.append(list(manager_all_tenures["TM_team_id"])) manager_last_tenure = manager_all_tenures.iloc[-1] if dateutil.parser.parse(manager_last_tenure["TM_manager_end_date"]).date() > datetime.date(2018,12,20): manager_current_club_id_list.append(manager_last_tenure["TM_team_id"]) else: manager_current_club_id_list.append(None) except Exception: manager_current_club_id_list.append(None) manager_previous_club_names_list = [] manager_previous_club_countries_list = [] for m in manager_previous_club_ids_list: previous_club_names = [] previous_country_names = [] for i in m: club_name = team_converter[team_converter["TM_team_id"]==i]["full_team_name"].iloc[0] club_country = team_converter[team_converter["TM_team_id"]==i]["league_country"].iloc[0] previous_club_names.append(club_name) previous_country_names.append(club_country) manager_previous_club_names_list.append(previous_club_names) manager_previous_club_countries_list.append(previous_country_names) manager_current_club_names_list = [] manager_current_club_countries_list = [] for i in manager_current_club_id_list: try: club_name = team_converter[team_converter["TM_team_id"]==i]["full_team_name"].iloc[0] club_country = team_converter[team_converter["TM_team_id"]==i]["league_country"].iloc[0] manager_current_club_names_list.append(club_name) manager_current_club_countries_list.append(club_country) except Exception: manager_current_club_names_list.append(None) manager_current_club_countries_list.append(None) manager_table["previous_club_names"] = manager_previous_club_names_list manager_table["previous_club_ids"] = manager_previous_club_ids_list manager_table["previous_club_countries"] = manager_previous_club_countries_list manager_table["current_club_name"] = manager_current_club_names_list manager_table["current_club_name"] = manager_current_club_names_list manager_table["current_club_id"] = manager_current_club_id_list manager_table["currently_employed"] = 1*(manager_table["current_club_name"].notnull()) ###Output _____no_output_____ ###Markdown Step 2: assign Manchester United current job to Ole Gunnar Solskjaer ###Code solskjaer_index = manager_table[manager_table["manager"]=="Ole Gunnar Solskjaer"].index.values[0] manager_table["previous_club_names"].loc[solskjaer_index] = ['Cardiff','Manchester United'] manager_table["previous_club_ids"].loc[solskjaer_index] = [603,985] manager_table["current_club_name"].loc[solskjaer_index] = 'Manchester United' manager_table["current_club_id"].loc[solskjaer_index] = 985 manager_table["currently_employed"].loc[solskjaer_index] = 1 manager_table.to_csv("output_files/manager_table_df.csv") ###Output _____no_output_____
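###Markdown A possible simplification (a sketch only, assuming `team_converter` has exactly one row per `TM_team_id`): the row-by-row lookups above can be replaced by dictionary mappings built once from `team_converter`. ###Code
# Assumes one row per TM_team_id in team_converter; build id -> name and id -> country mappings once
id_to_name = team_converter.set_index("TM_team_id")["full_team_name"].to_dict()
id_to_country = team_converter.set_index("TM_team_id")["league_country"].to_dict()

# Map the lists of previous club ids to names and countries
manager_table["previous_club_names"] = [
    [id_to_name[i] for i in ids] for ids in manager_table["previous_club_ids"]
]
manager_table["previous_club_countries"] = [
    [id_to_country[i] for i in ids] for ids in manager_table["previous_club_ids"]
]

# Map the single current club id (missing ids stay missing)
manager_table["current_club_name"] = manager_table["current_club_id"].map(id_to_name)
###Output _____no_output_____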
courses/Convolutional Neural Networks/WEEK 1/Convolution_model_Application_v1a.ipynb
###Markdown Convolutional Neural Networks: ApplicationWelcome to Course 4's second assignment! In this notebook, you will:- Implement helper functions that you will use when implementing a TensorFlow model- Implement a fully functioning ConvNet using TensorFlow **After this assignment you will be able to:**- Build and train a ConvNet in TensorFlow for a classification problem We assume here that you are already familiar with TensorFlow. If you are not, please refer the *TensorFlow Tutorial* of the third week of Course 2 ("*Improving deep neural networks*"). Updates to Assignment If you were working on a previous version* The current notebook filename is version "1a". * You can find your work in the file directory as version "1".* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of Updates* `initialize_parameters`: added details about tf.get_variable, `eval`. Clarified test case.* Added explanations for the kernel (filter) stride values, max pooling, and flatten functions.* Added details about softmax cross entropy with logits.* Added instructions for creating the Adam Optimizer.* Added explanation of how to evaluate tensors (optimizer and cost).* `forward_propagation`: clarified instructions, use "F" to store "flatten" layer.* Updated print statements and 'expected output' for easier visual comparisons.* Many thanks to Kevin P. Brown (mentor for the deep learning specialization) for his suggestions on the assignments in this course! 1.0 - TensorFlow modelIn the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call. As usual, we will start by loading in the packages. ###Code import math import numpy as np import h5py import matplotlib.pyplot as plt import scipy from PIL import Image from scipy import ndimage import tensorflow as tf from tensorflow.python.framework import ops from cnn_utils import * %matplotlib inline np.random.seed(1) ###Output _____no_output_____ ###Markdown Run the next cell to load the "SIGNS" dataset you are going to use. ###Code # Loading the data (signs) X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() ###Output _____no_output_____ ###Markdown As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of `index` below and re-run to see different examples. ###Code # Example of a picture index = 6 plt.imshow(X_train_orig[index]) print ("y = " + str(np.squeeze(Y_train_orig[:, index]))) ###Output y = 2 ###Markdown In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.To get started, let's examine the shapes of your data. ###Code X_train = X_train_orig/255. X_test = X_test_orig/255. 
Y_train = convert_to_one_hot(Y_train_orig, 6).T Y_test = convert_to_one_hot(Y_test_orig, 6).T print ("number of training examples = " + str(X_train.shape[0])) print ("number of test examples = " + str(X_test.shape[0])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape)) conv_layers = {} ###Output number of training examples = 1080 number of test examples = 120 X_train shape: (1080, 64, 64, 3) Y_train shape: (1080, 6) X_test shape: (120, 64, 64, 3) Y_test shape: (120, 6) ###Markdown 1.1 - Create placeholdersTensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.**Exercise**: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. To do so, you could use "None" as the batch size, it will give you the flexibility to choose it later. Hence X should be of dimension **[None, n_H0, n_W0, n_C0]** and Y should be of dimension **[None, n_y]**. [Hint: search for the tf.placeholder documentation"](https://www.tensorflow.org/api_docs/python/tf/placeholder). ###Code # GRADED FUNCTION: create_placeholders def create_placeholders(n_H0, n_W0, n_C0, n_y): """ Creates the placeholders for the tensorflow session. Arguments: n_H0 -- scalar, height of an input image n_W0 -- scalar, width of an input image n_C0 -- scalar, number of channels of the input n_y -- scalar, number of classes Returns: X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float" Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float" """ ### START CODE HERE ### (≈2 lines) X = tf.placeholder(tf.float32,[None, n_H0, n_W0, n_C0]) Y = tf.placeholder(tf.float32,[None, n_y]) ### END CODE HERE ### return X, Y X, Y = create_placeholders(64, 64, 3, 6) print ("X = " + str(X)) print ("Y = " + str(Y)) ###Output X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32) Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32) ###Markdown **Expected Output** X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32) Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32) 1.2 - Initialize parametersYou will initialize weights/filters $W1$ and $W2$ using `tf.contrib.layers.xavier_initializer(seed = 0)`. You don't need to worry about bias variables as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment.**Exercise:** Implement initialize_parameters(). The dimensions for each group of filters are provided below. Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use:```pythonW = tf.get_variable("W", [1,2,3,4], initializer = ...)``` tf.get_variable()[Search for the tf.get_variable documentation](https://www.tensorflow.org/api_docs/python/tf/get_variable). Notice that the documentation says:```Gets an existing variable with these parameters or create a new one.```So we can use this function to create a tensorflow variable with the specified name, but if the variables already exist, it will get the existing variable with that same name. 
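###Markdown (Aside, not part of the graded functions) A minimal illustration of the reuse behaviour described above: calling `tf.get_variable` twice with the same name inside a scope that allows reuse returns the same underlying variable. ###Code
import tensorflow as tf

# Small demo cell only; the graded cells below reset the graph again before use
tf.reset_default_graph()
with tf.variable_scope("demo", reuse=tf.AUTO_REUSE):
    v1 = tf.get_variable("v", [1], initializer=tf.zeros_initializer())
    v2 = tf.get_variable("v", [1])   # same name, so the existing variable is returned

print(v1 is v2)    # True: both names refer to one variable in the graph
print(v1.name)     # demo/v:0
###Output _____no_output_____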
###Code # GRADED FUNCTION: initialize_parameters def initialize_parameters(): """ Initializes weight parameters to build a neural network with tensorflow. The shapes are: W1 : [4, 4, 3, 8] W2 : [2, 2, 8, 16] Note that we will hard code the shape values in the function to make the grading simpler. Normally, functions should take values as inputs rather than hard coding. Returns: parameters -- a dictionary of tensors containing W1, W2 """ tf.set_random_seed(1) # so that your "random" numbers match ours ### START CODE HERE ### (approx. 2 lines of code) W1 = tf.get_variable('W1',[4, 4, 3, 8],initializer=tf.contrib.layers.xavier_initializer(seed=0)) W2 = tf.get_variable('W2',[2, 2, 8, 16],initializer=tf.contrib.layers.xavier_initializer(seed=0)) ### END CODE HERE ### parameters = {"W1": W1, "W2": W2} return parameters tf.reset_default_graph() with tf.Session() as sess_test: parameters = initialize_parameters() init = tf.global_variables_initializer() sess_test.run(init) print("W1[1,1,1] = \n" + str(parameters["W1"].eval()[1,1,1])) print("W1.shape: " + str(parameters["W1"].shape)) print("\n") print("W2[1,1,1] = \n" + str(parameters["W2"].eval()[1,1,1])) print("W2.shape: " + str(parameters["W2"].shape)) ###Output W1[1,1,1] = [ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 -0.06847463 0.05245192] W1.shape: (4, 4, 3, 8) W2[1,1,1] = [-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 -0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 -0.22779644 -0.1601823 -0.16117483 -0.10286498] W2.shape: (2, 2, 8, 16) ###Markdown ** Expected Output:**```W1[1,1,1] = [ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 -0.06847463 0.05245192]W1.shape: (4, 4, 3, 8)W2[1,1,1] = [-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 -0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 -0.22779644 -0.1601823 -0.16117483 -0.10286498]W2.shape: (2, 2, 8, 16)``` 1.3 - Forward propagationIn TensorFlow, there are built-in functions that implement the convolution steps for you.- **tf.nn.conv2d(X,W, strides = [1,s,s,1], padding = 'SAME'):** given an input $X$ and a group of filters $W$, this function convolves $W$'s filters on X. The third parameter ([1,s,s,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). Normally, you'll choose a stride of 1 for the number of examples (the first value) and for the channels (the fourth value), which is why we wrote the value as `[1,s,s,1]`. You can read the full documentation on [conv2d](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d).- **tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'):** given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. For max pooling, we usually operate on a single example at a time and a single channel at a time. So the first and fourth value in `[1,f,f,1]` are both 1. You can read the full documentation on [max_pool](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool).- **tf.nn.relu(Z):** computes the elementwise ReLU of Z (which can be any shape). You can read the full documentation on [relu](https://www.tensorflow.org/api_docs/python/tf/nn/relu).- **tf.contrib.layers.flatten(P)**: given a tensor "P", this function takes each training (or test) example in the batch and flattens it into a 1D vector. 
* If a tensor P has the shape (m,h,w,c), where m is the number of examples (the batch size), it returns a flattened tensor with shape (batch_size, k), where $k=h \times w \times c$. "k" equals the product of all the dimension sizes other than the first dimension. * For example, given a tensor with dimensions [100,2,3,4], it flattens the tensor to be of shape [100, 24], where 24 = 2 * 3 * 4. You can read the full documentation on [flatten](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/flatten).- **tf.contrib.layers.fully_connected(F, num_outputs):** given the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation on [full_connected](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/fully_connected).In the last function above (`tf.contrib.layers.fully_connected`), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters. Window, kernel, filterThe words "window", "kernel", and "filter" are used to refer to the same thing. This is why the parameter `ksize` refers to "kernel size", and we use `(f,f)` to refer to the filter size. Both "kernel" and "filter" refer to the "window." **Exercise**Implement the `forward_propagation` function below to build the following model: `CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED`. You should use the functions above. In detail, we will use the following parameters for all the steps: - Conv2D: stride 1, padding is "SAME" - ReLU - Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME" - Conv2D: stride 1, padding is "SAME" - ReLU - Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME" - Flatten the previous output. - FULLYCONNECTED (FC) layer: Apply a fully connected layer without an non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost. ###Code # GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters): """ Implements the forward propagation for the model: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED Note that for simplicity and grading purposes, we'll hard-code some values such as the stride and kernel (filter) sizes. Normally, functions should take these values as function parameters. 
Arguments: X -- input dataset placeholder, of shape (input size, number of examples) parameters -- python dictionary containing your parameters "W1", "W2" the shapes are given in initialize_parameters Returns: Z3 -- the output of the last LINEAR unit """ # Retrieve the parameters from the dictionary "parameters" W1 = parameters['W1'] W2 = parameters['W2'] ### START CODE HERE ### # CONV2D: stride of 1, padding 'SAME' Z1 = tf.nn.conv2d(X,W1,strides=[1, 1, 1, 1],padding='SAME') # RELU A1 = tf.nn.relu(Z1) # MAXPOOL: window 8x8, stride 8, padding 'SAME' P1 = tf.nn.max_pool(A1,ksize=[1, 8, 8, 1],strides=[1, 8, 8, 1],padding='SAME') # CONV2D: filters W2, stride 1, padding 'SAME' Z2 = tf.nn.conv2d(P1,W2,strides=[1, 1, 1, 1],padding='SAME') # RELU A2 = tf.nn.relu(Z2) # MAXPOOL: window 4x4, stride 4, padding 'SAME' P2 = tf.nn.max_pool(A2,ksize=[1, 4, 4, 1],strides=[1, 4, 4, 1], padding='SAME') # FLATTEN F = tf.contrib.layers.flatten(P2) # FULLY-CONNECTED without non-linear activation function (not not call softmax). # 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None" Z3 = tf.contrib.layers.fully_connected(F, 6, activation_fn=None) ### END CODE HERE ### return Z3 tf.reset_default_graph() with tf.Session() as sess: np.random.seed(1) X, Y = create_placeholders(64, 64, 3, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) init = tf.global_variables_initializer() sess.run(init) a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)}) print("Z3 = \n" + str(a)) ###Output Z3 = [[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064] [-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]] ###Markdown **Expected Output**:```Z3 = [[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064] [-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]]``` 1.4 - Compute costImplement the compute cost function below. Remember that the cost function helps the neural network see how much the model's predictions differ from the correct labels. By adjusting the weights of the network to reduce the cost, the neural network can improve its predictions.You might find these two functions helpful: - **tf.nn.softmax_cross_entropy_with_logits(logits = Z, labels = Y):** computes the softmax entropy loss. This function both computes the softmax activation function as well as the resulting loss. You can check the full documentation [softmax_cross_entropy_with_logits](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits).- **tf.reduce_mean:** computes the mean of elements across dimensions of a tensor. Use this to calculate the sum of the losses over all the examples to get the overall cost. You can check the full documentation [reduce_mean](https://www.tensorflow.org/api_docs/python/tf/reduce_mean). Details on softmax_cross_entropy_with_logits (optional reading)* Softmax is used to format outputs so that they can be used for classification. It assigns a value between 0 and 1 for each category, where the sum of all prediction values (across all possible categories) equals 1.* Cross Entropy is compares the model's predicted classifications with the actual labels and results in a numerical value representing the "loss" of the model's predictions.* "Logits" are the result of multiplying the weights and adding the biases. 
Logits are passed through an activation function (such as a relu), and the result is called the "activation."* The function is named `softmax_cross_entropy_with_logits` takes logits as input (and not activations); then uses the model to predict using softmax, and then compares the predictions with the true labels using cross entropy. These are done with a single function to optimize the calculations.** Exercise**: Compute the cost below using the function above. ###Code # GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (number of examples, 6) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the cost function """ ### START CODE HERE ### (1 line of code) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Z3,labels=Y)) ### END CODE HERE ### return cost tf.reset_default_graph() with tf.Session() as sess: np.random.seed(1) X, Y = create_placeholders(64, 64, 3, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) cost = compute_cost(Z3, Y) init = tf.global_variables_initializer() sess.run(init) a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)}) print("cost = " + str(a)) ###Output cost = 2.91034 ###Markdown **Expected Output**: ```cost = 2.91034``` 1.5 Model Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset. **Exercise**: Complete the function below. The model below should:- create placeholders- initialize parameters- forward propagate- compute the cost- create an optimizerFinally you will create a session and run a for loop for num_epochs, get the mini-batches, and then for each mini-batch you will optimize the function. [Hint for initializing the variables](https://www.tensorflow.org/api_docs/python/tf/global_variables_initializer) Adam OptimizerYou can use `tf.train.AdamOptimizer(learning_rate = ...)` to create the optimizer. The optimizer has a `minimize(loss=...)` function that you'll call to set the cost function that the optimizer will minimize.For details, check out the documentation for [Adam Optimizer](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer) Random mini batchesIf you took course 2 of the deep learning specialization, you implemented `random_mini_batches()` in the "Optimization" programming assignment. This function returns a list of mini-batches. It is already implemented in the `cnn_utils.py` file and imported here, so you can call it like this:```Pythonminibatches = random_mini_batches(X, Y, mini_batch_size = 64, seed = 0)```(You will want to choose the correct variable names when you use it in your code). Evaluating the optimizer and costWithin a loop, for each mini-batch, you'll use the `tf.Session` object (named `sess`) to feed a mini-batch of inputs and labels into the neural network and evaluate the tensors for the optimizer as well as the cost. Remember that we built a graph data structure and need to feed it inputs and labels and use `sess.run()` in order to get values for the optimizer and cost.You'll use this kind of syntax:```output_for_var1, output_for_var2 = sess.run( fetches=[var1, var2], feed_dict={var_inputs: the_batch_of_inputs, var_labels: the_batch_of_labels} )```* Notice that `sess.run` takes its first argument `fetches` as a list of objects that you want it to evaluate (in this case, we want to evaluate the optimizer and the cost). 
* It also takes a dictionary for the `feed_dict` parameter. * The keys are the `tf.placeholder` variables that we created in the `create_placeholders` function above. * The values are the variables holding the actual numpy arrays for each mini-batch. * The sess.run outputs a tuple of the evaluated tensors, in the same order as the list given to `fetches`. For more information on how to use sess.run, see the documentation [tf.Sesssionrun](https://www.tensorflow.org/api_docs/python/tf/Sessionrun) documentation. ###Code # GRADED FUNCTION: model def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009, num_epochs = 100, minibatch_size = 64, print_cost = True): """ Implements a three-layer ConvNet in Tensorflow: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED Arguments: X_train -- training set, of shape (None, 64, 64, 3) Y_train -- test set, of shape (None, n_y = 6) X_test -- training set, of shape (None, 64, 64, 3) Y_test -- test set, of shape (None, n_y = 6) learning_rate -- learning rate of the optimization num_epochs -- number of epochs of the optimization loop minibatch_size -- size of a minibatch print_cost -- True to print the cost every 100 epochs Returns: train_accuracy -- real number, accuracy on the train set (X_train) test_accuracy -- real number, testing accuracy on the test set (X_test) parameters -- parameters learnt by the model. They can then be used to predict. """ ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables tf.set_random_seed(1) # to keep results consistent (tensorflow seed) seed = 3 # to keep results consistent (numpy seed) (m, n_H0, n_W0, n_C0) = X_train.shape n_y = Y_train.shape[1] costs = [] # To keep track of the cost # Create Placeholders of the correct shape ### START CODE HERE ### (1 line) X, Y = create_placeholders(n_H0,n_W0,n_C0,n_y) ### END CODE HERE ### # Initialize parameters ### START CODE HERE ### (1 line) parameters = initialize_parameters() ### END CODE HERE ### # Forward propagation: Build the forward propagation in the tensorflow graph ### START CODE HERE ### (1 line) Z3 = forward_propagation(X,parameters) ### END CODE HERE ### # Cost function: Add cost function to tensorflow graph ### START CODE HERE ### (1 line) cost = compute_cost(Z3,Y) ### END CODE HERE ### # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost. ### START CODE HERE ### (1 line) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) ### END CODE HERE ### # Initialize all the variables globally init = tf.global_variables_initializer() # Start the session to compute the tensorflow graph with tf.Session() as sess: # Run the initialization sess.run(init) # Do the training loop for epoch in range(num_epochs): minibatch_cost = 0. num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set seed = seed + 1 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed) for minibatch in minibatches: # Select a minibatch (minibatch_X, minibatch_Y) = minibatch """ # IMPORTANT: The line that runs the graph on a minibatch. # Run the session to execute the optimizer and the cost. # The feedict should contain a minibatch for (X,Y). 
""" ### START CODE HERE ### (1 line) _ , temp_cost = sess.run([optimizer,cost],feed_dict={X:minibatch_X,Y:minibatch_Y}) ### END CODE HERE ### minibatch_cost += temp_cost / num_minibatches # Print the cost every epoch if print_cost == True and epoch % 5 == 0: print ("Cost after epoch %i: %f" % (epoch, minibatch_cost)) if print_cost == True and epoch % 1 == 0: costs.append(minibatch_cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(learning_rate)) plt.show() # Calculate the correct predictions predict_op = tf.argmax(Z3, 1) correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1)) # Calculate accuracy on the test set accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print(accuracy) train_accuracy = accuracy.eval({X: X_train, Y: Y_train}) test_accuracy = accuracy.eval({X: X_test, Y: Y_test}) print("Train Accuracy:", train_accuracy) print("Test Accuracy:", test_accuracy) return train_accuracy, test_accuracy, parameters ###Output _____no_output_____ ###Markdown Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code! ###Code _, _, parameters = model(X_train, Y_train, X_test, Y_test) ###Output Cost after epoch 0: 1.917929 Cost after epoch 5: 1.506757 Cost after epoch 10: 0.955359 Cost after epoch 15: 0.845802 Cost after epoch 20: 0.701174 Cost after epoch 25: 0.571977 Cost after epoch 30: 0.518435 Cost after epoch 35: 0.495806 Cost after epoch 40: 0.429827 Cost after epoch 45: 0.407291 Cost after epoch 50: 0.366394 Cost after epoch 55: 0.376922 Cost after epoch 60: 0.299491 Cost after epoch 65: 0.338870 Cost after epoch 70: 0.316400 Cost after epoch 75: 0.310413 Cost after epoch 80: 0.249549 Cost after epoch 85: 0.243457 Cost after epoch 90: 0.200031 Cost after epoch 95: 0.175452 ###Markdown **Expected output**: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease. **Cost after epoch 0 =** 1.917929 **Cost after epoch 5 =** 1.506757 **Train Accuracy =** 0.940741 **Test Accuracy =** 0.783333 Congratulations! You have finished the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or using regularization (as this model clearly has a high variance). Once again, here's a thumbs up for your work! ###Code fname = "images/thumbs_up.jpg" image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(64,64)) plt.imshow(my_image) ###Output _____no_output_____
array_strings/.ipynb_checkpoints/add_binary-checkpoint.ipynb
###Markdown Given two binary strings, return their sum (also a binary string).The input strings are both non-empty and contains only characters 1 or 0. Example 1:Input: a = "11", b = "1"Output: "100" Example 2:Input: a = "1010", b = "1011"Output: "10101" ###Code def addBinary( a, b): if len(a)==0: return b if len(b)==0: return a if a[-1] == '1' and b[-1] == '1': return addBinary(addBinary(a[0:-1],b[0:-1]),'1')+'0' if a[-1] == '0' and b[-1] == '0': return addBinary(a[0:-1],b[0:-1])+'0' else: return addBinary(a[0:-1],b[0:-1])+'1' # addBinary(addBinary(1,[]),1)+'0' # addBinary(addBinary(1,[]),1) + '0' ==> addBinary(1,[]) return a =1 B # addBinary(1,1)+'0' # return {addBinary(addBinary(a[0:-1],b[0:-1]),'1')+'0' } +'0' ==> addBinary(a[0:-1],b[0:-1]) return empty A # return {addBinary(empty,'1')+'0' } +'0' ===> addBinary(empty,'1') return 1 A #1 +'0' +'0' addBinary("11","1") def add_binary(a,b): print("len(a) {}".format(len(a))) print("len(b) {}".format(len(b))) print("a[-1] {}".format(a[-1])) print("b[-1] {}".format(b[-1])) print("a[0:-1]) {}".format(a[0:-1])) print("b[0:-1]) {}".format(b[0:-1])) if len(a)==0: print("len a==0") return b if len(b)==0: print("len b==0") return a if a[-1] == '1' and b[-1] == '1': print("First if condition 1,1") return addBinary(addBinary(a[0:-1],b[0:-1]),'1')+'0' if a[-1] == '0' and b[-1] == '0': print("Second if condition 0,0") return add_binary(a[0:-1],b[0:-1])+'0' else: print("Else") return add_binary(a[0:-1],b[0:-1])+'1' add_binary("1010","1011") def add_binary_nums(x, y): print((len(x))) print((len(x))) max_len = max(len(x), len(y)) print("max_len {}".format(max_len)) print() #Fill it with zeros x = x.zfill(max_len) print("x {}".format(x)) y = y.zfill(max_len) print("y {}".format(y)) print(y) # initialize the result result = '' # initialize the carry carry = 0 # Traverse the string for i in range(max_len - 1, -1, -1): r = carry r += 1 if x[i] == '1' else 0 r += 1 if y[i] == '1' else 0 result = ('1' if r % 2 == 1 else '0') + result carry = 0 if r < 2 else 1 # Compute the carry. if carry !=0 : result = '1' + result return result.zfill(max_len) add_binary_nums('100','10') ###Output 3 3 max_len 3 x 100 y 010 010
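###Markdown A quick cross-check (not part of the original notebook): Python's built-in base-2 conversion gives the expected answers for the examples in the problem statement, so it can be used to sanity-test the implementations above. ###Code
def add_binary_builtin(a, b):
    # Convert from base 2, add as integers, then format back to a binary string
    return bin(int(a, 2) + int(b, 2))[2:]

assert add_binary_builtin("11", "1") == "100"
assert add_binary_builtin("1010", "1011") == "10101"
assert add_binary_builtin("11", "1") == addBinary("11", "1")
print("all checks passed")
###Output _____no_output_____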
notebooks/R9b-L200standalone-figure6.ipynb
###Markdown *****L200 standalone predictions Vs Genome wide Brunello measurements** ###Code ### Load and read PC9 standalone data pc9_dir = '../out/21.0423 Lx PC9/L200only_reg_rf_boruta/anlyz' df_pc9 = pickle.load(open(os.path.join(pc9_dir,'y_compr_ext.pkl'),'rb')) # Standalone pc9_standalone_dir = '../out/21.0720 Lx PC9Standalone/L200only_reg_rf_boruta/anlyz' df_pc9_standalone = pickle.load(open(os.path.join(pc9_standalone_dir,'y_compr_ext.pkl'),'rb')) ### Format data for plotting #PC9 df_pc9 = pd.concat([df_pc9['actual'],df_pc9['predicted']], axis = 0).T df_pc9.columns = ['actual','predicted'] #Standalone df_pc9_standalone = pd.concat([df_pc9_standalone['actual'],df_pc9_standalone['predicted']], axis = 0).T df_pc9_standalone.columns = ['actual','predicted'] ### Plot scatter plot --- PC9 standalone fig, ax = plt.subplots() fig.set_size_inches(8, 6) ax = sns.regplot(x='actual', y='predicted',data = df_pc9_standalone) corr_pear = pearsonr(df_pc9_standalone['actual'], df_pc9_standalone['predicted'])[0] ax.text(0.05,0.95,'rho = '+str(corr_pear),transform=ax.transAxes,fontsize = 8) #add text ax.set_title('L200 standalone predictions V.S. Genome wide Brunello measurements', fontsize = 15) ax.set_xlabel('Brunello measurements', fontsize=15);ax.set_ylabel('L200 standalone predictions', fontsize=15) # plt.savefig('PC9_standalonel200_exp_pred.pdf') ###Output _____no_output_____ ###Markdown *****L200 standalone predictions vs L200 from brunello predictions** ###Code ### Create new dataframe to combine the standalone and brunello prediction df_pc9_pred = pd.concat([df_pc9['predicted'].T, df_pc9_standalone['predicted'].T], axis = 1) df_pc9_pred.columns = ['standalone', 'brunello'] df_pc9_pred = df_pc9_pred.dropna() ### Plot scatter plot --- PC9 standalone predicted and PC9 predicted fig, ax = plt.subplots() fig.set_size_inches(8, 6) ax = sns.regplot(x='standalone', y = 'brunello', data=df_pc9_pred) corr = pearsonr(df_pc9_pred['standalone'], df_pc9_pred['brunello'])[0] ax.text(0.05,0.95,'rho = '+str(corr),transform=ax.transAxes,fontsize = 8) #add text ax.set_title('L200 standalone predictions V.S. 
L200 pooled predictions', fontsize = 15) ax.set_xlabel('L200 pooled predictions', fontsize=15);ax.set_ylabel('L200 standalone predictions', fontsize=15) # plt.savefig('PC9_Q3v.s.standalone_exp_pred.pdf') ###Output _____no_output_____ ###Markdown *****Venn Diagram** ###Code ### Read prediction files - actual hits are df['actual'] # Brunello pc9_dir = '../out/21.0423 Lx PC9/L200only_reg_rf_boruta/anlyz' df_pc9 = pickle.load(open(os.path.join(pc9_dir,'y_compr_ext.pkl'),'rb')) # Standalone pc9_standalone_dir = '../out/21.0720 Lx PC9Standalone/L200only_reg_rf_boruta/anlyz' df_pc9_standalone = pickle.load(open(os.path.join(pc9_standalone_dir,'y_compr_ext.pkl'),'rb')) ### Find the top 500 hits in brunello, standalone and actual nhits = 500 top_standalone = df_pc9_standalone['predicted'].T.sort_values(by =0).head(nhits).index top_brunello = df_pc9['predicted'].T.sort_values(by =0).head(nhits).index top_actual = df_pc9['actual'].T.sort_values(by =0).head(nhits).index ### Plot 3-way Venn diagram fig, ax = plt.subplots() venn3([set(top_standalone), set(top_brunello), set(top_actual)], ('L200 Standalone','L200 Pooled','Actual'), alpha = 0.2) plt.tight_layout() plt.show() fig, axes = plt.subplots(1,3, figsize = (8,5)) venn2([set(top_standalone), set(top_brunello)], ('L200 Standalone','L200 Pooled'), alpha = 0.2,ax = axes[0],\ subset_label_formatter=lambda x: f"{round(x/nhits,2)}") venn2([set(top_brunello), set(top_actual)], ('L200 Pooled','Actual'), alpha = 0.2,ax = axes[1],\ subset_label_formatter=lambda x: f"{round(x/nhits,2)}") venn2([set(top_standalone), set(top_actual)], ('L200 Standalone','Actual'), alpha = 0.2,ax = axes[2],\ subset_label_formatter=lambda x: f"{round(x/nhits,2)}") plt.tight_layout() plt.show() ###Output _____no_output_____
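###Markdown As a numeric complement to the Venn diagrams (a short sketch reusing the `top_*` index sets and `nhits` defined above), the pairwise overlaps can also be printed directly: ###Code
pairs = {
    "standalone vs pooled": (top_standalone, top_brunello),
    "pooled vs actual": (top_brunello, top_actual),
    "standalone vs actual": (top_standalone, top_actual),
}
for label, (a, b) in pairs.items():
    shared = len(set(a) & set(b))
    print(f"{label}: {shared}/{nhits} shared hits ({shared / nhits:.2f})")
###Output _____no_output_____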
first-neural-network.ipynb
###Markdown First Neural Network - Udacity ProjectThis Jupyter Notebook is my implementation of the first neural network project from the Udacity Deep Learning Foundations Nanodegree. ###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt import imp from udacity.bike_data import load_bike_data from diagrams.draw_network import draw_example_1 %matplotlib inline %config InlineBackend.figure_format = 'retina' ###Output _____no_output_____ ###Markdown Bike DataFor the Udacity project we are required to train a neural network, implemented with numpy, on bike usage data. This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. ###Code data_path = "Bike-Sharing-Dataset/hour.csv" rides = pd.read_csv(data_path) rides[:24*10].plot(x='dteday',y='cnt', figsize=(16,7)) plt.xlabel('Date and Time') plt.ylabel('Count of Bikes Used') plt.show() rides.head() ###Output _____no_output_____ ###Markdown Train-Validation-Test SplitThe Udacity project provides scripts for preprocessing the data that include making dummy features for the categorical variables ('season', 'weathersit', 'mnth', 'hr', 'weekday') and normalizing the numerical features ('casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'). Additionally, it also splits the data into training data, validation data, and testing data, so we can estimate out-of-sample performance while fitting and after fitting the model. ###Code train_features, train_targets, val_features, val_targets,\ test_features, test_targets,\ test_data, scaled_features = load_bike_data(data_path) ###Output _____no_output_____ ###Markdown Derivation of Deep Learning Regression For this project we are asked to create a regression model that predicts the count of bikes that are going to be rented at a particular hour of a particular day. Our model will make a prediction $\hat{y}$ and the goal is to minimize the mean squared error of the prediction on all observed data. This is the loss function, and the loss function for this regression problem is given by:$$ Loss = \frac{1}{2 N} \sum_{i=1}^N \left( y_i - \hat{y_i} \right)^2 $$ The Udacity project has us building a 2-layer neural network that is similar in structure to the following graph: ###Code draw_example_1() ###Output _____no_output_____ ###Markdown In this case $\hat{y}$ is given by the following equation:$$ \hat{y_i} = \begin{bmatrix} h_{i \ 1} & h_{i \ 2} \end{bmatrix} \times \begin{bmatrix} w^h_{1} \\ w^h_{2} \end{bmatrix} $$where each edge has a weight $w^h_i$ associated with it. This is for the two-layer neural network presented above. For an arbitrary number of nodes in the hidden layer, we would use the following equation:$$ \hat{y_i} = \sum_{j} h_{i \ j} \ w^h_{j} $$ We will be using the gradient descent algorithm on the weights, which states that we can update the values of the weights using the following equation:$$ w^k_{i \ j} = w^k_{i \ j} - \alpha \frac{\partial}{\partial w^k_{i \ j}} Loss $$where $\alpha$ is known as the learning rate.
If we want to find the update rule for the weights of the hidden layer, we can differentiate the loss to find the following result:$$ \frac{\partial}{\partial w^h_k} Loss = \frac{-1}{N} \sum_{i=1}^N \left( y_i - \hat{y_i} \right) h_{i \ k} $$Inserting this into the gradient update rule provides us with the weight update for the hidden layer:$$ w^h_k = w^h_k + \frac{\alpha}{N} \sum_{i=1}^N \left( y_i - \hat{y_i} \right) h_{i \ k} $$We can translate this into matrix form for the example network from above and get the following equation:$$ \begin{bmatrix} w^h_{1} \\ w^h_{2} \end{bmatrix} = \begin{bmatrix} w^h_{1} \\ w^h_{2} \end{bmatrix} + \frac{\alpha}{N} \ \sum_{i=1}^N \left( y_i - \hat{y_i} \right) \begin{bmatrix} h_{i \ 1} \\ h_{i \ 2} \end{bmatrix} $$ The output of the hidden layer in our example graph is given by the following equation:$$ h = sigmoid \left( \begin{bmatrix} x_{1} & x_{2} & x_{3} \end{bmatrix} \times \begin{bmatrix} w^I_{11} & w^I_{12} \\ w^I_{21} & w^I_{22} \\ w^I_{31} & w^I_{32} \end{bmatrix} \right) $$ Another perspective on this equation is: $$ h = \begin{bmatrix} sigmoid \left( \sum_i x_i \ w^I_{i1} \right) & sigmoid \left( \sum_i x_i \ w^I_{i2} \right) \end{bmatrix}$$For an arbitrary number of nodes between the input and hidden layer, we get the following equation for a given hidden node's output: $$ h_j = sigmoid \left( \sum_i x_i \ w^I_{i \ j} \right) $$ Differentiating the loss function by the input weights gives us the following result:$$ \frac{\partial}{\partial w^I_{n \ m}} Loss = \frac{-1}{N} \sum_{i=1}^N \left( y_i - \hat{y_i} \right) w^h_k \frac{\partial}{\partial w^I_{n \ m}} h_{i \ k} $$We can differentiate the hidden output equation above and produce the following result: $$ \frac{ \partial h_j }{\partial w^I_{n \ m}} = h_j \ \left( 1 - h_j \right) \frac{ \partial }{\partial w^I_{n \ m}} \sum_i x_i \ w^I_{i \ j} $$Since only the hidden node that actually uses this input weight contributes, we can rewrite this in the following way:$$ \frac{ \partial h_j }{\partial w^I_{n \ m}} = h_j \ \left( 1 - h_j \right) x_n \delta_{jm} $$ Combining the above equations gives us the differential of the loss function:$$ \frac{\partial}{\partial w^I_{n \ m}} Loss = \frac{-1}{N} \sum_{i=1}^N \left( y_i - \hat{y_i} \right) w^h_k h_{i \ k} \left( 1 - h_{i \ k} \right) x_n \delta_{mk} $$ The weight update for the input layer is then:$$ w^I_{n \ m} = w^I_{n \ m} + \frac{\alpha}{N} \sum_{i=1}^N \left( y_i - \hat{y_i} \right) w^h_k h_{i \ k} \left( 1 - h_{i \ k} \right) x_n \delta_{mk} $$ This equation can be converted back to matrix form for our above example to get the following equation:$$ \begin{bmatrix} w^I_{11} & w^I_{12} \\ w^I_{21} & w^I_{22} \\ w^I_{31} & w^I_{32} \end{bmatrix} = \begin{bmatrix} w^I_{11} & w^I_{12} \\ w^I_{21} & w^I_{22} \\ w^I_{31} & w^I_{32} \end{bmatrix} + \frac{\alpha}{N} \sum_{i=1}^N \left( y_i - \hat{y_i} \right) \begin{bmatrix} x_{1} \\ x_{2} \\ x_{3} \end{bmatrix} \begin{bmatrix} w^h_1 h_{i \ 1} (1 - h_{i \ 1}) & w^h_2 h_{i \ 2} (1 - h_{i \ 2}) \end{bmatrix} $$ Unit TestsRunning the Udacity-provided unit tests to make sure my network is training correctly: ###Code import unittest from udacity.unit_tests import * suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite) ###Output ...
---------------------------------------------------------------------- Ran 3 tests in 0.003s OK ###Markdown Training the network ###Code from udacity.neural_network import NeuralNetwork import sys def MSE(y, Y): return np.mean((y-Y)**2) ### Set the hyperparameters here ### iterations = 1000 learning_rate = 2.0 hidden_nodes = 4 output_nodes = 1 N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) losses = {'train':[], 'validation':[]} for ii in range(iterations): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) X, y = train_features.iloc[batch].values, train_targets.iloc[batch]['cnt'] network.train(X, y) # Printing out the training progress train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values) val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values) sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... Validation loss: " + str(val_loss)[:5]) sys.stdout.flush() losses['train'].append(train_loss) losses['validation'].append(val_loss) plt.plot(losses['train'], label='Training loss') plt.plot(losses['validation'], label='Validation loss') plt.legend() _ = plt.ylim() fig, ax = plt.subplots(figsize=(16,8)) mean, std = scaled_features['cnt'] predictions = network.run(test_features).T*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.iloc[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) ###Output _____no_output_____
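###Markdown The `NeuralNetwork` class trained above is imported from `udacity.neural_network`, whose source is not shown in this notebook. As an illustration of the update rules derived earlier (sigmoid hidden layer, linear output, mean squared error), here is a minimal, self-contained numpy sketch; it is only a sketch of the math, not the exact Udacity implementation. ###Code
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyTwoLayerRegressor:
    # Sketch of the derivation above, not the udacity.neural_network.NeuralNetwork class
    def __init__(self, n_input, n_hidden, learning_rate=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, n_input ** -0.5, (n_input, n_hidden))   # input -> hidden
        self.W_h = rng.normal(0.0, n_hidden ** -0.5, (n_hidden, 1))         # hidden -> output
        self.lr = learning_rate

    def forward(self, X):
        h = sigmoid(X @ self.W_in)   # hidden activations, shape (N, n_hidden)
        y_hat = h @ self.W_h         # linear output, shape (N, 1)
        return h, y_hat

    def train_batch(self, X, y):
        N = X.shape[0]
        h, y_hat = self.forward(X)
        error = y.reshape(-1, 1) - y_hat                     # (y_i - y_hat_i), shape (N, 1)
        # Backpropagated hidden term, computed before W_h is updated
        hidden_grad = (error @ self.W_h.T) * h * (1.0 - h)   # shape (N, n_hidden)
        # Hidden-to-output update: W_h += (alpha / N) * sum_i error_i * h_i
        self.W_h += self.lr / N * (h.T @ error)
        # Input-to-hidden update: W_in += (alpha / N) * sum_i x_i (outer) hidden_grad_i
        self.W_in += self.lr / N * (X.T @ hidden_grad)

# Tiny usage demo on synthetic data
X_demo = np.random.rand(16, 3)
y_demo = X_demo.sum(axis=1)
net = TinyTwoLayerRegressor(n_input=3, n_hidden=4, learning_rate=0.5)
for _ in range(200):
    net.train_batch(X_demo, y_demo)
print(np.mean((net.forward(X_demo)[1].ravel() - y_demo) ** 2))   # MSE should shrink as training proceeds
###Output _____no_output_____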
doc/source/learning/Learning2.ipynb
###Markdown Learning 2 ###Code import numpy as np print('notebook 2') ###Output notebook 2
examples/OrthogonalRegressionNonAnalytic.ipynb
###Markdown Regression with orthogonal projector/matrices================================================================ In this example, we explain how, when using `skcosmo.linear_model.OrthogonalRegression`, the option `use_orthogonal_projector` can result in non-analytic behavior.In `skcosmo.linear_model.OrthogonalRegression`, we solve the linear regression problem assuming an orthogonal weighting matrix $\Omega$ to project from the feature space $X$ to the target space $y$.$$\min_\Omega \|y - X\Omega\|_F$$This assumes that $X$ and $y$ contain the same number of features.If `use_orthogonal_projector=False`, the smaller of $X$ and $y$ is padded with null features, i.e. columns of zeros.However, when `use_orthogonal_projector=True`, we begin with the weights $W$ determined by the linear regression problem$$ \min_W \|y - XW\|_F,$$and solve the orthogonal Procrustes problem for$$\min_{\Omega'} \|yV - XU\Omega'\|_F \quad \Omega'^T\Omega'=I,$$where $USV^T$ is the SVD of $W$. The final orthogonal projector is then $\Omega = U\Omega' V^T$.In this notebook, we demonstrate a problem that may arise with this solution, as changing the number of features can result in non-analytic behavior of the reconstruction matrix and therefore also in the predictions. ###Code import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable from skcosmo.linear_model import OrthogonalRegression mpl.rc('font', size=16) # These are coordinates of a 3-dimensional cube. We treat the points of the cube as samples # and the 3 dimensions as features x y z cube = np.array( [ #x y z [0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1], ] ) # the x y coordinates of the cube xy_plane_projected_cube = cube[:, [0, 1]] # a square prism with a scaling applied on the z axis def z_scaled_square_prism(z_scaling): return np.array( [ [0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, z_scaling], [0, 1, z_scaling], [1, 0, z_scaling], [1, 1, z_scaling], ] ) # In terms of information retrievable by regression analysis `xy_plane_projected_cube` is equivalent # to `z_scaled_square_prism` with z_scaling = 0, since adding features containing only zero values # to your dataset should not change the prediction quality of the regression analysis. ###Output _____no_output_____ ###Markdown We now compute the orthogonal regression error fitting on the square prism to predict the cube. In the case of a zero z-scaling, the error is computed once with a third dimension and once without it (using `xy_plane_projected_cube`). The regression is done with `skcosmo.linear_model.OrthogonalRegression` with `use_orthogonal_projector` set to True.
###Code z_scalings = np.linspace(0, 1, 11) regression_errors_for_z_scaled_square_prism_using_orthogonal_projector = [] orth_reg_pred_cube = len(z_scalings) * [0] orth_reg_using_orthogonal_projector = OrthogonalRegression(use_orthogonal_projector=True) for i, z in enumerate(z_scalings): orth_reg_using_orthogonal_projector.fit(cube, z_scaled_square_prism(z)) orth_reg_pred_cube[i] = orth_reg_using_orthogonal_projector.predict(cube) regression_error = np.linalg.norm(z_scaled_square_prism(z) - orth_reg_pred_cube[i]) regression_errors_for_z_scaled_square_prism_using_orthogonal_projector.append(regression_error) orth_reg_using_orthogonal_projector.fit(cube, xy_plane_projected_cube) orth_reg_use_projector_xy_plane_pred_cube = orth_reg_using_orthogonal_projector.predict(cube) regression_error_for_xy_plane_projected_cube_using_orthogonal_projector = ( np.linalg.norm(xy_plane_projected_cube - orth_reg_use_projector_xy_plane_pred_cube) ) ###Output _____no_output_____ ###Markdown In the next cell we plot a visualization of the reconstruction of the square prism for different z scalings. We plot the projections of the xy, xz and yz planes. ###Code fig, (ax_xy, ax_xz, ax_yz) = plt.subplots(1, 3, figsize=(12, 4)) cmap = mpl.cm.Blues colors = cmap(np.linspace(0, 1, 11)) for i in range(len(orth_reg_pred_cube) - 1): ax_xy.scatter( orth_reg_pred_cube[i][:, 0], orth_reg_pred_cube[i][:, 1], color=colors[i] ) ax_xz.scatter( orth_reg_pred_cube[i][:, 0], orth_reg_pred_cube[i][:, 2], color=colors[i] ) ax_yz.scatter( orth_reg_pred_cube[i][:, 1], orth_reg_pred_cube[i][:, 2], color=colors[i] ) i = len(orth_reg_pred_cube) - 1 ax_xy.scatter( orth_reg_pred_cube[i][:, 0], orth_reg_pred_cube[i][:, 1], color=colors[i], label="orth. reconstruction", ) ax_xz.scatter(orth_reg_pred_cube[i][:, 0], orth_reg_pred_cube[i][:, 2], color=colors[i]) ax_yz.scatter(orth_reg_pred_cube[i][:, 1], orth_reg_pred_cube[i][:, 2], color=colors[i]) ax_xy.scatter(cube[:, 0], cube[:, 1], c="r", label="cube") ax_xz.scatter(cube[:, 0], cube[:, 2], c="r") ax_yz.scatter(cube[:, 1], cube[:, 2], c="r") ax_xy.legend(fontsize=14, loc="center") divider = make_axes_locatable(plt.gca()) ax_cb = divider.new_horizontal(size="5%", pad=0.05) cb1 = mpl.colorbar.ColorbarBase( ax_cb, cmap=cmap, orientation="vertical", ticks=z_scalings ) plt.gcf().add_axes(ax_cb) ax_cb.set_ylabel("z scaling") ax_xy.set_title("xy plane") ax_xz.set_title("xz plane") ax_yz.set_title("yz plane") plt.show() ###Output _____no_output_____ ###Markdown Now we set `use_orthogonal_projector` to False and repeat the above regression. ###Code orth_reg = OrthogonalRegression(use_orthogonal_projector=False) orth_reg_pred_cube = len(z_scalings) * [0] regression_errors_for_z_scaled_square_prism_zero_padded = [] for i, z in enumerate(z_scalings): orth_reg.fit(cube, z_scaled_square_prism(z)) orth_reg_pred_cube[i] = orth_reg.predict(cube) regression_error = np.linalg.norm(z_scaled_square_prism(z) - orth_reg_pred_cube[i]) regression_errors_for_z_scaled_square_prism_zero_padded.append(regression_error) ###Output _____no_output_____ ###Markdown Setting the `use_orthogonal_projector` option to False pads automatically input and output data to the same dimension with zeros. Therefore we pad `xy_plane_projected_cube` to three dimensions with zeros to compute the error. If we ignore the third dimension, the regression error will also not change smoothly. 
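###Markdown The next cell applies exactly this padding, hard-coding one extra zero column for `xy_plane_projected_cube`. In general form the step looks like the following sketch (an illustration only, not the library's internal code):

```python
import numpy as np

def pad_to_same_width(X, y):
    # Append zero-valued feature columns to whichever matrix is narrower
    n = max(X.shape[1], y.shape[1])
    X_pad = np.pad(X, [(0, 0), (0, n - X.shape[1])])
    y_pad = np.pad(y, [(0, 0), (0, n - y.shape[1])])
    return X_pad, y_pad
```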
###Code orth_reg.fit(cube, xy_plane_projected_cube) orth_reg_xy_plane_pred_cube = orth_reg.predict(cube) zero_padded_xy_plane_projected_cube = np.pad(xy_plane_projected_cube, [(0, 0), (0, 1)]) print("zero_padded_xy_plane_projected_cube:\n", zero_padded_xy_plane_projected_cube) print("orth_reg_xy_plane_pred_cube:\n", orth_reg_xy_plane_pred_cube) regression_error_for_xy_plane_projected_cube_zero_padded = np.linalg.norm( zero_padded_xy_plane_projected_cube - orth_reg_xy_plane_pred_cube ) ###Output _____no_output_____ ###Markdown The projection allows an optimal reconstruction of the cube while when not using a projection the orthogonal condition does not allow the same reconstruction ###Code fig, (ax_xy) = plt.subplots(1, 1, figsize=(5, 4)) ax_xy.scatter( xy_plane_projected_cube[:, 0], xy_plane_projected_cube[:, 1], s=70, c="r", label="cube", ) ax_xy.scatter( orth_reg_use_projector_xy_plane_pred_cube[:, 0], orth_reg_use_projector_xy_plane_pred_cube[:, 1], c="b", label="orth. reconstruction\n use projector=True", ) ax_xy.scatter( orth_reg_xy_plane_pred_cube[:, 0], orth_reg_xy_plane_pred_cube[:, 1], c="g", label="orth. reconstruction\n use projector=False", ) ax_xy.set_title("xy plane") plt.legend(bbox_to_anchor=(1, 1), loc="upper left") plt.show() ###Output _____no_output_____ ###Markdown The three dimensional cubic structure can be seen when no projector is used (`use_orthogonal_projector` is False). Now we plot the prediction error. ###Code fig, (ax_with_orth, ax_wo_orth) = plt.subplots(1, 2, figsize=(10, 3.8), sharey=True) ax_with_orth.scatter( z_scalings, regression_errors_for_z_scaled_square_prism_using_orthogonal_projector, label="Regression error for z-scaled cube", ) ax_with_orth.scatter( 0, regression_error_for_xy_plane_projected_cube_using_orthogonal_projector, label="Regression error for xy_plane_projected_cube", ) ax_with_orth.set_title( "Orthogonal regression error for\n features using orthogonal projector\n (use_orthogonal_projector=True)", fontsize=14, ) ax_with_orth.set_xlabel("scaling in z direction", fontsize=16) ax_with_orth.set_ylabel("orthogonal regression error", fontsize=14) ax_wo_orth.scatter( z_scalings, regression_errors_for_z_scaled_square_prism_zero_padded, label="Regression error for z-scaled square prism", ) ax_wo_orth.scatter( 0, regression_error_for_xy_plane_projected_cube_zero_padded, label="Regression error for xy_plane_projected_cube", ) ax_wo_orth.set_title( "Orthogonal regression error for\n zero padded features\n (use_orthogonal_projector=False) ", ) ax_wo_orth.set_xlabel("scaling in z direction") ax_wo_orth.legend(loc="upper right", bbox_to_anchor=(0.7, -0.2)) plt.show() ###Output _____no_output_____
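###Markdown As a final, hedged numerical recap of the non-analytic behaviour discussed above (reusing only the objects defined in this example), one can compare the error obtained with the two informative xy features against the error obtained when a third, almost-zero feature column is present, even as `z` approaches zero:

```python
# Error when the target has only the two informative xy features
reg2d = OrthogonalRegression(use_orthogonal_projector=True)
reg2d.fit(cube, xy_plane_projected_cube)
err_2d = np.linalg.norm(xy_plane_projected_cube - reg2d.predict(cube))

# Error when a third, (almost) zero feature column is added
for z in [0.0, 1e-6, 1e-3]:
    reg3d = OrthogonalRegression(use_orthogonal_projector=True)
    reg3d.fit(cube, z_scaled_square_prism(z))
    err_3d = np.linalg.norm(z_scaled_square_prism(z) - reg3d.predict(cube))
    print(f"z = {z:g}  error with 3 features = {err_3d:.6f}")

print(f"error with 2 features = {err_2d:.6f}")
```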
scripts/d21-en/mxnet/chapter_recurrent-neural-networks/text-preprocessing.ipynb
###Markdown Text Preprocessing
:label:`sec_text_preprocessing`

We have reviewed and evaluated statistical tools and prediction challenges for sequence data. Such data can take many forms. Specifically, as we will focus on in many chapters of the book, text is one of the most popular examples of sequence data. For example, an article can be simply viewed as a sequence of words, or even a sequence of characters. To facilitate our future experiments with sequence data, we will dedicate this section to explain common preprocessing steps for text. Usually, these steps are:

1. Load text as strings into memory.
1. Split strings into tokens (e.g., words and characters).
1. Build a table of vocabulary to map the split tokens to numerical indices.
1. Convert text into sequences of numerical indices so they can be manipulated by models easily. ###Code import collections
import re
from d2l import mxnet as d2l ###Output _____no_output_____ ###Markdown Reading the Dataset

To get started we load text from H. G. Wells' [*The Time Machine*](http://www.gutenberg.org/ebooks/35). This is a fairly small corpus of just over 30000 words, but for the purpose of what we want to illustrate this is just fine. More realistic document collections contain many billions of words. The following function reads the dataset into a list of text lines, where each line is a string. For simplicity, here we ignore punctuation and capitalization. ###Code #@save
d2l.DATA_HUB['time_machine'] = (d2l.DATA_URL + 'timemachine.txt',
                                '090b5e7e70c295757f55df93cb0a180b9691891a')

def read_time_machine():  #@save
    """Load the time machine dataset into a list of text lines."""
    with open(d2l.download('time_machine'), 'r') as f:
        lines = f.readlines()
    return [re.sub('[^A-Za-z]+', ' ', line).strip().lower()
            for line in lines]

lines = read_time_machine()
print(f'# text lines: {len(lines)}')
print(lines[0])
print(lines[10]) ###Output # text lines: 3221
the time machine by h g wells
twinkled and his usually pale face was flushed and animated the ###Markdown Tokenization

The following `tokenize` function takes a list (`lines`) as the input, where each list is a text sequence (e.g., a text line). Each text sequence is split into a list of tokens. A *token* is the basic unit in text. In the end, a list of token lists is returned, where each token is a string. 
###Code def tokenize(lines, token='word'):  #@save
    """Split text lines into word or character tokens."""
    if token == 'word':
        return [line.split() for line in lines]
    elif token == 'char':
        return [list(line) for line in lines]
    else:
        print('ERROR: unknown token type: ' + token)

tokens = tokenize(lines)
for i in range(11):
    print(tokens[i]) ###Output ['the', 'time', 'machine', 'by', 'h', 'g', 'wells']
[]
[]
[]
[]
['i']
[]
[]
['the', 'time', 'traveller', 'for', 'so', 'it', 'will', 'be', 'convenient', 'to', 'speak', 'of', 'him']
['was', 'expounding', 'a', 'recondite', 'matter', 'to', 'us', 'his', 'grey', 'eyes', 'shone', 'and']
['twinkled', 'and', 'his', 'usually', 'pale', 'face', 'was', 'flushed', 'and', 'animated', 'the'] ###Markdown Vocabulary

The string type of the token is inconvenient to be used by models, which take numerical inputs. Now let us build a dictionary, often called *vocabulary* as well, to map string tokens into numerical indices starting from 0. To do so, we first count the unique tokens in all the documents from the training set, namely a *corpus*, and then assign a numerical index to each unique token according to its frequency. Rarely appeared tokens are often removed to reduce the complexity. Any token that does not exist in the corpus or has been removed is mapped into a special unknown token “<unk>”. We optionally add a list of reserved tokens, such as “<pad>” for padding, “<bos>” to present the beginning for a sequence, and “<eos>” for the end of a sequence. ###Code class Vocab:  #@save
    """Vocabulary for text."""
    def __init__(self, tokens=None, min_freq=0, reserved_tokens=None):
        if tokens is None:
            tokens = []
        if reserved_tokens is None:
            reserved_tokens = []
        # Sort according to frequencies
        counter = count_corpus(tokens)
        self.token_freqs = sorted(counter.items(), key=lambda x: x[1],
                                  reverse=True)
        # The index for the unknown token is 0
        self.unk, uniq_tokens = 0, ['<unk>'] + reserved_tokens
        uniq_tokens += [
            token for token, freq in self.token_freqs
            if freq >= min_freq and token not in uniq_tokens]
        self.idx_to_token, self.token_to_idx = [], dict()
        for token in uniq_tokens:
            self.idx_to_token.append(token)
            self.token_to_idx[token] = len(self.idx_to_token) - 1

    def __len__(self):
        return len(self.idx_to_token)

    def __getitem__(self, tokens):
        if not isinstance(tokens, (list, tuple)):
            return self.token_to_idx.get(tokens, self.unk)
        return [self.__getitem__(token) for token in tokens]

    def to_tokens(self, indices):
        if not isinstance(indices, (list, tuple)):
            return self.idx_to_token[indices]
        return [self.idx_to_token[index] for index in indices]

def count_corpus(tokens):  #@save
    """Count token frequencies."""
    # Here `tokens` is a 1D list or 2D list
    if len(tokens) == 0 or isinstance(tokens[0], list):
        # Flatten a list of token lists into a list of tokens
        tokens = [token for line in tokens for token in line]
    return collections.Counter(tokens) ###Output _____no_output_____ ###Markdown We construct a vocabulary using the time machine dataset as the corpus. Then we print the first few frequent tokens with their indices. ###Code vocab = Vocab(tokens)
print(list(vocab.token_to_idx.items())[:10]) ###Output [('<unk>', 0), ('the', 1), ('i', 2), ('and', 3), ('of', 4), ('a', 5), ('to', 6), ('was', 7), ('in', 8), ('that', 9)] ###Markdown Now we can convert each text line into a list of numerical indices. 
###Code for i in [0, 10]:
    print('words:', tokens[i])
    print('indices:', vocab[tokens[i]]) ###Output words: ['the', 'time', 'machine', 'by', 'h', 'g', 'wells']
indices: [1, 19, 50, 40, 2183, 2184, 400]
words: ['twinkled', 'and', 'his', 'usually', 'pale', 'face', 'was', 'flushed', 'and', 'animated', 'the']
indices: [2186, 3, 25, 1044, 362, 113, 7, 1421, 3, 1045, 1] ###Markdown Putting All Things Together

Using the above functions, we package everything into the `load_corpus_time_machine` function, which returns `corpus`, a list of token indices, and `vocab`, the vocabulary of the time machine corpus. The modifications we did here are: i) we tokenize text into characters, not words, to simplify the training in later sections; ii) `corpus` is a single list, not a list of token lists, since each text line in the time machine dataset is not necessarily a sentence or a paragraph. ###Code def load_corpus_time_machine(max_tokens=-1):  #@save
    """Return token indices and the vocabulary of the time machine dataset."""
    lines = read_time_machine()
    tokens = tokenize(lines, 'char')
    vocab = Vocab(tokens)
    # Since each text line in the time machine dataset is not necessarily a
    # sentence or a paragraph, flatten all the text lines into a single list
    corpus = [vocab[token] for line in tokens for token in line]
    if max_tokens > 0:
        corpus = corpus[:max_tokens]
    return corpus, vocab

corpus, vocab = load_corpus_time_machine()
len(corpus), len(vocab) ###Output _____no_output_____
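###Markdown A quick sanity check of the vocabulary built above is to map a few corpus indices back to their characters with `Vocab.to_tokens`, and to confirm that symbols stripped out by `read_time_machine` (digits, punctuation) fall back to the unknown index 0. This short sketch reuses only the `corpus` and `vocab` objects just created:

```python
# Round-trip the first few character indices back to tokens
print(corpus[:10])
print(vocab.to_tokens(corpus[:10]))

# Characters that never occur in the cleaned text map to '<unk>' (index 0)
print(vocab['7'], vocab['!'])
```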
mobilityteamproject/Step2_Model_training_and_tflite_convert/helmet_classification_for_tinyMLproject_part3.ipynb
###Markdown Helmet Classification For TinyML Project> 이 notebook 은 open source 컨트리뷰톤 2020 - tinyML (Tensorflow Lite Project) Mobility Team 의 오픈소스 프로젝트를 위해 만들어졌습니다. - 모빌리티 팀 (멘토 맹윤호)- 최예진(팀장), 이민우, 전수민, 이장후, 이경환, 조승현- **.ipynb 제작 - 이장후. 2020/08/29**- **.ipynb 수정자 -**- Target Github Repository : [TinyML : Tensorflow lite for microcontroller](https://github.com/yunho0130/tensorflow-lite)- Team Github Repository : [TinyML-Mobility](https://github.com/orgs/tinyml-mobility/teams) Before We Start- 런타임 -> GPU 로 변경 하셨나요? This Time- 생성된 h5 모델을 불러들여 tflite 파일로 변환해 봅시다.- 구현된 모델을 조금 수정하면서, Class Activation Map* 을 한번 visualization 해 봅시다.*Class Activation Map 이란, Helmet 클래스로 판단하는 데 어떤 부분을 가장 주목해서 보았는지와 같이, 어떤 클래스로 판단하는 것의 근거를 Visualization 한 이미지를 의미합니다. Google Drive- 학습을 시키기 전 데이터가 있는 google Drive 와 연동을 해야 합니다. ###Code from google.colab import drive drive.mount('/content/gdrive') ###Output _____no_output_____ ###Markdown Include Library - 이 노트북의 소스코드는 tensorflow 2.0 이상과 호환되지 않습니다.- Google colab 에서는 %tensorflow_version 을 통해, 원하는 버전의 tensorflow 를 쉽게 불러올 수 있습니다 ###Code try: # This %tensorflow_version magic only works in Colab. %tensorflow_version 1.x except Exception: pass # For your non-Colab code, be sure you have tensorflow==1.15 import tensorflow as tf assert tf.__version__.startswith('1') # tensorflow 는 기본적으로, "정적 그래프 형식" 으로 실행하여 그때그때 실행해서 결과를 찍어보는 것이 불가능합니다. # tf.enable_eager_execution 을 실행해 주어야, datagen 으로 실행이 가능합니다. # 이 코드는, tf.enable_eager_execution() import os import numpy as np import matplotlib.pyplot as plt tf.__version__ ###Output _____no_output_____ ###Markdown Overview TinyML- 모델을 .tflite 파일로 변환하기 위해 tensorflow lite 의 python API 를 활용할 것입니다.- .tflite 는 플랫버퍼 형식이라고 합니다.- 플랫버퍼 형식에 대한 장점은 굉장히 많다고 하지만, 저를 포함해서 이 튜토리얼을 진행하는 분들에게는 너무 어려운 내용일 것입니다. 그냥 "효율적인 자료 저장 형식이다" 라고 생각합시다! Optimization- 우리는 지금 "작은 모델을 만들기 위해" part3 으로 넘어왔습니다.- 우리가 다음 작업을 진행하기 전에, 확인해야 할 큰 그림이 있습니다. 이 내용은 tinyML 책 챕터 15에 자세히 나와 있습니다. 이를 간단히 이야기해보도록 해요.**Hardware Selection**- 주머니 사정과 성능, 접근성 및 개발 속도를 모두 고려하여 하드웨어를 선택해야 합니다.- 저희는 이 프로젝트를 진행하며, 파일을 Raspberry Pi 4에 업로드할 것입니다.- Raspberry Pi 4 는 단돈 5만원에 매우 강력한 성능을 자랑하는 컴퓨터로, 비영리 재단에서 만든 소형 컴퓨터입니다.- Raspberry Pi 4 는 일반적인 마이크로컨트롤러들과 달리, 운영체제가 올라가고 메모리와 디스크 모두 넉넉합니다.- Raspberry Pi 4 는 다양한 주변기기를 연결할 수 있도록 지원하고, 저희는 종내에 소형 GPU 를 사용해서 모델을 돌려 볼 수 있도록 할 것입니다.**Model Selection**- 우리는 헬멧 인식 모델을 만들 것이고, 헬멧을 착용하기 위한 모델은 굉장히 복잡합니다.- 소형 기기에 배포하기 위해 이 모델을 가볍게 만드는 것은 전력 소비 / 실행 속도 / 이용자 체감 에서도 매우 중요한 이슈겠지요.- 우리는 tinyML 책에서 제시된 MobileNet v1 보다 효율적인 MobileNet v2 를 사용했습니다.- 일반적인 CNN 모델들보다 훨씬 효율적인 모델입니다.**Quantization**- 많은 임베디드 디바이스에서는...- 오늘 할 작업입니다! ###Code %cd /content/gdrive/"My Drive"/data/ # 데이터가 존재하는 경로 ( /content/gdrive/"My Drive"/data/helmetclassification ) 를 data_dir 변수에 저장합니다. data_dir = os.path.join(os.getcwd(), 'helmetclassification') print(data_dir) # input 이미지의 크기는 160 by 160 by 3 으로 상정합니다. 채널은 RGB 이므로, 3 입니다. IMG_WIDTH = 160 IMG_HEIGHT = 160 IMG_CHANNEL = 3 IMG_SHAPE = (IMG_WIDTH, IMG_HEIGHT, IMG_CHANNEL) # 연산 처리 단위 (배치) 는 이미지 16장, 그리고 Learning Rate, optimizer, 에폭 등을 설정합니다. # *참고* : Optimizer 에서 SGD 는 최근 잘 사용하지 않지만, 안정적인 수렴을 위해 특정 경우에 사용합니다. BATCH_SIZE = 16 LEARNING_RATE_SGD = 0.001 LEARNING_RATE_ADAM = 0.0001 TRAINING_OPTIMIZER_SGD = tf.keras.optimizers.SGD(learning_rate=LEARNING_RATE_SGD, momentum=0.0) TRAINING_OPTIMIZER_ADAM = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE_ADAM) EPOCHS = 50 # 우리는 checkpoint 를 설정해서, 각 epoch 마다 가중치를 저장할 것입니다. CKPT_DIR = os.path.join(data_dir, 'checkpoint') # 우리는 h5 file 을 tflite 파일로 전환할 것입니다. 이름을 미리 정해 둡시다. 
SAVED_KERAS_MODEL_NAME = 'helmet_classification_model.h5' # 이번에 우리는 h5 file 을 읽어와 tflite 파일로 바꾸어 저장할 것입니다. 이름을 미리 정해 둡시다. SAVED_TFLITE_MODEL_NAME = 'helmet_classification_model.tflite' # 우리는 잠시 후에 라벨 파일을 만들어낼 것인데, 라벨 파일의 이름을 미리 정의해 둡시다. LABEL_FILE_NAME = 'ishelmetlabel.txt' ###Output _____no_output_____ ###Markdown Quantization Idea- 어떤 작업인지는 위에서 설명...- 학습한 데이터셋의 입력값 범위를 나타내는 숫자의 집합인 대표 예시 데이터셋을 만들어 넣어 주어야 합니다..- 그 이유는... How to- 예시 데이터를 만들어 줄 수 있는 generator ###Code %cd /content/gdrive/"My Drive"/data/helmetclassification converter = tf.lite.TFLiteConverter.from_keras_model_file(SAVED_KERAS_MODEL_NAME) # 양자화를 포함해서, 기본 최적화를 진행합니다. converter.optimizations = [tf.lite.Optimize.DEFAULT] # 예시 데이터를 가져다주어야 한다고 했습니다. def representative_data_gen(): dataset_list = tf.data.Dataset.list_files(data_dir + '/test' + '/*/*') print('dataset_list : ', dataset_list) for i in range(100): image = next(iter(dataset_list)) image = tf.io.read_file(image) image = tf.io.decode_jpeg(image, channels=3) image = tf.image.resize(image, [IMG_WIDTH, IMG_HEIGHT]) image = tf.cast(image / 255., tf.float32) image = tf.expand_dims(image, 0) yield [image] # 무슨 값이 들어있나 궁금하면 출력해 보세요. # im = representative_data_gen() # print(next(im)) # converter 객체에 등록해 줍니다. converter.representative_dataset = representative_data_gen # 실제값 = (int8변환값 - 영점) * scale # int8변환값 = 실제값 / scale + 영점 # input 은 float 이든 int 이든, 어차피 255 배 커지는건데 뭔상관?! # 255배 크게 활성화되든 작게 활성화되든 이미 training 은 끝났거든~~ converter.inference_input_type = tf.uint8 converter.inference_output_type = tf.uint8 # 아래 주석처리된 소스코드는, INT8 로 전환할 때, 양자화를 지원하지 않는 operation 이 있는지 없는지 확인해 볼 수 있습니다. # 원래 CORAL 과 같은 int8 자료형만을 지원하는 디바이스에 업로드하기 위해서는 아래 코드를 활성화해야 합니다. # 모델을 직접 만드는 경우, 아직 mobilenet 의 모든 기능을 구현할 수 있도록 operation 이 support 되지 않을 수 있습니다. # 동일한 모델을 만들더라도, 다른 operation 을 활용해서 만들 수 있기 때문입니다. # handmaded (part2 에서 만들어낸 모델) 모델을 활용해서 convert 할 경우에는, 아래 코드를 주석처리 해 주세요. converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8] # 변환된 객체를 tflite_model 변수에 저장합니다. tflite_model = converter.convert() %cd /content/gdrive/"My Drive"/data/helmetclassification # 파일을 저장합니다. open(SAVED_TFLITE_MODEL_NAME, "wb").write(tflite_model) ###Output _____no_output_____ ###Markdown Class Activation Mapping- reference : [keras code example](https://keras.io/examples/vision/grad_cam/) ###Code %cd /content/gdrive/"My Drive"/data/helmetclassification full_model = tf.keras.models.load_model(SAVED_KERAS_MODEL_NAME) full_model.summary() def make_gradcam_heatmap( img_array, model, last_conv_layer_name="Conv_1", classifier_layer_names=["global_average_pooling2d", "Logits"] ): # First, we create a model that maps the input image to the activations # of the last conv layer last_conv_layer = model.get_layer(last_conv_layer_name) last_conv_layer_model = tf.keras.Model(model.inputs, last_conv_layer.output) # Second, we create a model that maps the activations of the last conv # layer to the final class predictions classifier_input = tf.keras.Input(shape=last_conv_layer.output.shape[1:]) x = classifier_input for layer_name in classifier_layer_names: x = model.get_layer(layer_name)(x) classifier_model = tf.keras.Model(classifier_input, x) # Then, we compute the gradient of the top predicted class for our input image # with respect to the activations of the last conv layer # 텐서플로는 자동 미분(주어진 입력 변수에 대한 연산의 그래디언트(gradient)를 계산하는 것. 쉽게 말하면 매개변수 미분, 연쇄법칙)을 위한 tf.GradientTape API를 제공합니다. # tf.GradientTape는 컨텍스트(context) 안에서 실행된 모든 연산을 테이프(tape)에 "기록"합니다. 
# 그 다음 텐서플로는 후진 방식 자동 미분(reverse mode differentiation)을 사용해 테이프에 "기록된" 연산의 그래디언트를 계산합니다. with tf.GradientTape() as tape: # Compute activations of the last conv layer last_conv_layer_output = last_conv_layer_model(img_array) # and make the tape watch it tape.watch(last_conv_layer_output) # Compute class predictions preds = classifier_model(last_conv_layer_output) print('prediction tensor shape : ', preds.shape) # (1,2) top_pred_index = tf.argmax(preds[0]) print('inside of prediction tnesor :', preds[0]) print('prediction argmax : ', top_pred_index) # Tensor(1,) top_class_channel = preds[:, top_pred_index] print('prediction top channel : ', top_class_channel) # # This is the gradient of the top predicted class with regard to # the output feature map of the last conv layer grads = tape.gradient(top_class_channel, last_conv_layer_output) # This is a vector where each entry is the mean intensity of the gradient # over a specific feature map channel # 구한 gradient 들의 채널들 평균이 해당 위치의 gradient 라는 것 같은데.. # axis 를 왜 저따구로 쓴거지..? 그냥 axis = 0 하면 안되는건가? # 일단 두번째 저 axis 순서대로 reduce 하는 것임은 확인. axis 0 >> axis 1 >> axis 2 # axis 0 은 batch wise 인데 어차피 batch size 1 이니까, channel last 인 tensorflow 특성상, row 로 평균내고, col 로 평균내면, channel 만 남게 됨. 즉 channel 당 1개. # 그럼 channel 당 1개라는건 그냥 특정 channel 의 중요도를 나타내게 됨. 즉, channel 의 weight 을 의미하게 됨. pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2)) # We multiply each channel in the feature map array # by "how important this channel is" with regard to the top predicted class last_conv_layer_output = last_conv_layer_output.numpy()[0] pooled_grads = pooled_grads.numpy() for i in range(pooled_grads.shape[-1]): # 모든 채널에 대해서 곱하겠다는 뜻인데 더럽게도 어렵게 써놨네 는 무슨 내가 이해할수라도 있어서 너무 다행이다. last_conv_layer_output[:, :, i] *= pooled_grads[i] # The channel-wise mean of the resulting feature map # is our heatmap of class activation heatmap = np.mean(last_conv_layer_output, axis=-1) # For visualization purpose, we will also normalize the heatmap between 0 & 1 heatmap = np.maximum(heatmap, 0) / np.max(heatmap) return heatmap ###Output _____no_output_____ ###Markdown 자 이제 함수를 다 만들었습니다! 다음 셀에서는 다음과 같은 작업을 진행합니다.- PIL 라이브러리를 활용해서 helmet 을 차고 있는 test set 에서 random 한 image 하나 가져오기- PIL 라이브러리를 활용해서 helmet 을 차고 있지 않은 test set 에서 random 한 image 하나 가져오기- PIL 로 가져온 것을 numpy array 로 바꾸고, matplotlib.pyplot 의 imshow() 로 이미지 잘 가져왔나 확인 해보기 ###Code import PIL import numpy as np import os # load the image helmet_test_path = os.path.join(data_dir, "test", "helmet") helemt_test_img_list = os.listdir(helmet_test_path) nonhelmet_test_path = os.path.join(data_dir, "test", "non_helmet") nonhelemt_test_img_list = os.listdir(nonhelmet_test_path) import random helmet_image_path = os.path.join(data_dir, "test", "helmet", random.choice(helemt_test_img_list)) nonhelmet_image_path = os.path.join(data_dir, "test", "non_helmet", random.choice(nonhelemt_test_img_list)) helmet_image = PIL.Image.open(helmet_image_path) nonhelmet_image = PIL.Image.open(nonhelmet_image_path) # convert image to numpy array helmet_npimage = np.asarray(helmet_image) nonhelmet_npimage = np.asarray(nonhelmet_image) print('좌 :',type(helmet_npimage)) print('좌 :',helmet_npimage.shape) print('우 :',type(nonhelmet_npimage)) print('우 :',nonhelmet_npimage.shape) import matplotlib.pyplot as plt plt.figure(figsize = [13,8]) plt.subplot(1,2,1) # (행, 열, 첫번째) - 자세한 내용은 plt.subplot() 을 참고하세요. plt.imshow(helmet_npimage) plt.subplot(1,2,2) # (행, 열, 두번째) - 자세한 내용은 plt.subplot() 을 참고하세요. 
plt.imshow(nonhelmet_npimage) # 참고로 PIL 모듈을 활용해 만들어진 keras 함수가 이미 존재해서, PIL 을 사용하지 않고 사용할 수도 있습니다. # 이 경우에, 불러오는 동시에 target_size 를 지정함으로써 다양한 속성을 지정해 직관적으로 이미지를 불러올 수 있습니다. helmet_image_from_keras = tf.keras.preprocessing.image.load_img( helmet_image_path, grayscale=False, color_mode='rgb', target_size=(IMG_WIDTH, IMG_HEIGHT), interpolation='nearest' ) nonhelmet_image_from_keras = tf.keras.preprocessing.image.load_img( nonhelmet_image_path, grayscale=False, color_mode='rgb', target_size=(IMG_WIDTH, IMG_HEIGHT), interpolation='nearest' ) # convert image to numpy array helmet_npimage_from_keras = tf.keras.preprocessing.image.img_to_array(helmet_image_from_keras) / 255. nonhelmet_npimage_from_keras = tf.keras.preprocessing.image.img_to_array(nonhelmet_image_from_keras) / 255. # helmet_npimage_from_keras = np.asarray(helmet_image_from_keras) # nonhelmet_npimage_from_keras = np.asarray(nonhelmet_image_from_keras) print('좌 :',type(helmet_npimage_from_keras)) print('좌 :',nonhelmet_npimage_from_keras.shape) print('우 :',type(helmet_npimage_from_keras)) print('우 :',nonhelmet_npimage_from_keras.shape) plt.figure(figsize = [13,8]) plt.subplot(1,2,1) # (행, 열, 첫번째) - 자세한 내용은 plt.subplot() 을 참고하세요. plt.imshow(helmet_npimage_from_keras) plt.subplot(1,2,2) # (행, 열, 두번째) - 자세한 내용은 plt.subplot() 을 참고하세요. plt.imshow(nonhelmet_npimage_from_keras) # 이 경우에, reshape 를 하면서 약간 찌그러진 모습이 보일 수 있습니다. def get_img_array(img_path, size): # `img` is a PIL image of size 160, 160 img = tf.keras.preprocessing.image.load_img(img_path, target_size=size) # `array` is a float32 Numpy array of shape (160, 160, 3) array = tf.keras.preprocessing.image.img_to_array(img) / 255. # We add a dimension to transform our array into a "batch" # of size (1, 160, 160, 3) array = np.expand_dims(array, axis=0) return array img_helmet = get_img_array(helmet_image_path, (IMG_WIDTH, IMG_HEIGHT)) img_nonhelmet = get_img_array(nonhelmet_image_path, (IMG_WIDTH, IMG_HEIGHT)) print('helmet image') heatmap_helmet = make_gradcam_heatmap(img_helmet, full_model, # 아래 두 줄의 코드는, part2 에서 직접 만든 모델을 돌릴 때 사용하세요. #'final_conv', #['global_average_pooling2d', 'reshape', 'conv2d_105', 'softmax', 'reshape_1'] ) print('\nnonhelmet image') heatmap_nonhelmet = make_gradcam_heatmap(img_nonhelmet, full_model, #'final_conv', #['global_average_pooling2d', 'reshape', 'conv2d_105', 'softmax', 'reshape_1'] ) # Display heatmap plt.matshow(heatmap_helmet) plt.matshow(heatmap_nonhelmet) img_original = tf.keras.preprocessing.image.load_img(helmet_image_path) img_original = tf.keras.preprocessing.image.img_to_array(img_original) nonhelmet_img_original = tf.keras.preprocessing.image.load_img(nonhelmet_image_path) nonhelmet_img_original = tf.keras.preprocessing.image.img_to_array(nonhelmet_img_original) # We rescale heatmap to a range 0-255 heatmap_helmet = np.uint8(255 * heatmap_helmet) heatmap_nonhelmet = np.uint8(255 * heatmap_nonhelmet) # We use jet colormap to colorize heatmap import matplotlib jet = matplotlib.cm.get_cmap("jet") # We use RGB values of the colormap jet_colors = jet(np.arange(256))[:, :3] # color map 에 color 을 대응시켜주는 코드입니다. 예를 들어 숫자가 클수록 붉은 색이 되는 color map 이 있을테고, 숫자가 커지는순간 0 이 돼버리는 color map 등 다양한 color map 이 있는데, # 내가 가진 어떤 값을 어떤 색상으로 대응시킬 것인지 골라주는 역할이라고 할 수 있지요. # 우리가 선택한 것은, RGB map 입니다. heatmat 을 RGB 채널에 대응되도록 만들어 줍니다. 
jet_heatmap = jet_colors[heatmap_helmet] jet_nonhelmet_heatmap = jet_colors[heatmap_nonhelmet] # We create an image with RGB colorized heatmap jet_heatmap = tf.keras.preprocessing.image.array_to_img(jet_heatmap) jet_heatmap = jet_heatmap.resize((img_original.shape[1], img_original.shape[0])) jet_heatmap = tf.keras.preprocessing.image.img_to_array(jet_heatmap) jet_nonhelmet_heatmap = tf.keras.preprocessing.image.array_to_img(jet_nonhelmet_heatmap) jet_nonhelmet_heatmap = jet_nonhelmet_heatmap.resize((nonhelmet_img_original.shape[1], nonhelmet_img_original.shape[0])) jet_nonhelmet_heatmap = tf.keras.preprocessing.image.img_to_array(jet_nonhelmet_heatmap) # Superimpose the heatmap on original image superimposed_img = jet_heatmap * 0.7 + img_original superimposed_img = tf.keras.preprocessing.image.array_to_img(superimposed_img) nonhelmet_superimposed_img = jet_nonhelmet_heatmap * 0.7 + nonhelmet_img_original nonhelmet_superimposed_img = tf.keras.preprocessing.image.array_to_img(nonhelmet_superimposed_img) # Display Grad CAM plt.figure(4, figsize = [15,8]) plt.subplot(1,2,1) plt.imshow(superimposed_img) plt.subplot(1,2,2) plt.imshow(nonhelmet_superimposed_img) ###Output _____no_output_____
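###Markdown Having saved the converted flatbuffer as `helmet_classification_model.tflite` earlier in this notebook, a hedged sketch of running it on one of the test images with the standard `tf.lite.Interpreter` API is shown below. It reuses only names defined above (`SAVED_TFLITE_MODEL_NAME`, `helmet_npimage_from_keras`); since the converter requested uint8 inputs and outputs, the image is fed as raw 0-255 pixel values:

```python
import numpy as np
import tensorflow as tf

# Load the converted model and allocate its tensors
interpreter = tf.lite.Interpreter(model_path=SAVED_TFLITE_MODEL_NAME)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Scale the 0-1 float image back to 0-255 and add a batch dimension
sample = np.uint8(helmet_npimage_from_keras * 255.)[np.newaxis, ...]

interpreter.set_tensor(input_details[0]['index'], sample)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]['index'])
print(scores)  # quantized per-class scores (helmet vs. non-helmet)
```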
notebooks/brand_analysis_solution.ipynb
###Markdown **Brand Analysis Using Social Media Data in R**

Welcome to this hands-on training where you will learn how to perform brand analysis from social media data using R. We will be using different R libraries to analyze twitter data and derive insights. In this session, you will learn:

* How to compare brand popularity by extracting and comparing follower counts
* How to promote a brand by identifying popular tweets
* How to evaluate brand salience and compare it across two brands using tweet frequencies
* How to understand brand perception through text mining and by visualizing key terms
* How to perform sentiment analysis to understand customers' feelings and sentiments about a brand

**The Dataset**

The datasets to be used in this training session are in CSV format. These datasets comprise live tweets extracted using the `rtweet` library. The datasets are:

* **users_twts.csv**: User data of four twitter accounts pre-extracted from Twitter
* **tesladf.csv**: Tweets searched on keyword 'tesla' pre-extracted from Twitter
* **toyotadf.csv**: Tweets searched on keyword 'toyota' pre-extracted from Twitter
* **tesla_small.csv**: Tweets searched on keyword 'tesla' pre-extracted from Twitter. This is a smaller dataset with fewer tweets.

Note that we will not be extracting live tweets from Twitter during this session as it involves a setup process. We will be using pre-extracted tweets saved in CSV format.

- **users_twts.csv**: has 4 records and 90 columns of user data and associated metadata
- **tesladf.csv**: has 17979 records (tweets) and 90 columns of tweet text and associated metadata
- **toyotadf.csv**: has 17798 records (tweets) and 90 columns of tweet text and associated metadata
- **tesla_small.csv**: has 500 records (tweets) and 90 columns of tweet text and associated metadata

All the datasets have the same set of columns and some of the important columns that we will work with are listed below:

- `created_at`: UTC time when this Tweet was created
- `screen_name`: The screen name or twitter handle that a user identifies themselves with
- `text`: The actual tweet text posted by a user
- `retweet_count`: Number of times a given tweet has been retweeted.
- `followers_count`: The number of followers a twitter account currently has.

**Getting started and installing packages** ###Code # Install R packages
system('apt-get install r-cran-httpuv r-cran-rtweet r-cran-reshape r-cran-qdapregex r-cran-tm r-cran-qdap')
install.packages('syuzhet') ###Output Installing package into ‘/usr/local/lib/R/site-library’ (as ‘lib’ is unspecified) ###Markdown **1.
Compare brand popularity by extracting and comparing follower counts**

We can compare the follower counts of competing products by using their screen names and follower counts. Note:

- `screen_name`: The screen name or twitter handle that a user identifies themselves with.
- `followers_count`: The number of followers a twitter account currently has.

The follower count of a twitter account indicates the popularity of that account and is a measure of social media influence. To extract user data directly from twitter, we usually load the `rtweet` package, obtain and create Twitter API access tokens according to the instructions in this [article](https://rtweet.info/articles/auth.html), and extract user data with the `lookup_users()` function, which takes screen names as input and extracts user data from twitter accounts.

```R
# Store name of users to extract data on twitter accounts of 4 auto magazines
users <- c("caranddriver", "motortrend", "autoweekUSA", "roadandtrack")

# Extract user data for the twitter accounts stored in users
users_twts <- lookup_users(users)

# Save extracted data as a CSV file using `fwrite()` from the `data.table` library
fwrite(users_twts, file = "users_twts.csv")
```

To avoid setting up individual API access tokens, we will be directly using a CSV file. ###Code # Load rtweet library
library(rtweet) ###Output Attaching package: ‘rtweet’
The following object is masked from ‘package:syuzhet’: get_tokens ###Markdown Import the pre-saved CSV file with extracted user data for the four twitter accounts
--- ###Code # Import extracted user data from the csv file into a dataframe
users_twts = read.csv("https://github.com/datacamp/Brand-Analysis-using-Social-Media-Data-in-R-Live-Training/blob/master/data/users_twts.csv?raw=true")

# View dimensions of the dataframe
dim(users_twts)

# View few rows of the dataframe
head(users_twts) ###Output _____no_output_____ ###Markdown From the user data, extract details of screen names and follower counts for the 4 twitter accounts into a dataframe. ###Code # Create a data frame of screen names and followers count
user_df <- users_twts[,c("screen_name","followers_count")]

# Display and compare the follower counts for the 4 twitter accounts
user_df ###Output _____no_output_____ ###Markdown We can see that "Car and Driver" is the most popular automobile magazine, with a follower count exceeding a million, followed by "Motor Trends" with 739,800 followers. An automobile brand advertising a new model can place its adverts on the homepage of these twitter accounts or tag these twitter accounts while promoting its brand. Thus, digital marketers can position ads on popular twitter accounts for increased visibility. --- Q&A 1 --- **2.
Promote a brand by identifying popular tweets using retweet counts** To extract tweet data for a particular term, we can use the `search_tweets()` function from `rtweet` library which has the following arguments:* `q`: The query being used, for example `"tesla"`* `n`: The number of tweets* `lang`: The language of the tweet - here set to `"en"`* `include_rts`: A boolean value that either accepts the inclusion of retweets or not on resulting dataIn this notebook, we will be using a CSV file to import the tweets but using `search_tweets()` to extract tweets on `"tesla"` can be done as such.```R Extract 18000 tweets on Teslatweets_tesla = search_tweets("tesla", n = 18000, lang = "en", include_rts = FALSE)fwrite(tweets_tesla, "tesladf.csv")``` ###Code # Import extracted tweets on "tesla" in CSV format into a dataframe tesladf = read.csv("https://github.com/datacamp/Brand-Analysis-using-Social-Media-Data-in-R-Live-Training/blob/master/data/tesladf.csv?raw=true") # Explore the tweet dataframe dim(tesladf) head(tesladf) ###Output _____no_output_____ ###Markdown Extract the columns `retweet_count` and `text` and save to a new dataframe ###Code # Create a data frame of tweet text and retweet count rtwt <- tesladf[,c("text", "retweet_count")] # View few rows of the new dataframe head(rtwt) ###Output _____no_output_____ ###Markdown Sort in descending order of the retweet counts using `arrange()` from `dplyr` library ###Code # Import library library(dplyr) # Sort data frame based on descending order of retweet counts rtwt_sort <- arrange(rtwt, desc(retweet_count)) # View sorted output head(rtwt_sort) ###Output _____no_output_____ ###Markdown The `text` column usually contains duplicate tweets. To get unique tweets, we can use the `unique()` function which has 2 arguments:* the data frame being used* `by`: which columns to search for unique values in ###Code # Exclude rows with duplicate text from sorted data frame rtwt_unique <- unique(rtwt_sort, by = "text") # Print top 6 unique posts retweeted most number of times head(rtwt_unique) ###Output _____no_output_____ ###Markdown The most retweeted texts have popular quotes such as "I think I want a Tesla", indicating the loyalty of Tesla fans. These tweets can be used for promoting Tesla's models and brand loyalty. --- Q&A 2 --- **3. Evaluate brand salience and compare the same for two brands using tweet frequencies** Brand salience is the extent to which a brand is continuously talked about.Monitoring tweets on a certain brand over time is an excellent proxy to brand salience. Here, we will compare how tweets mentioning Tesla vs Toyota are present over time. **3a) Visualizing frequency of tweets using time series plots**Let's first visualize tweet frequency on the automobile brand "Tesla". We will be using the tweet dataframe created for Tesla in the previous exercise. 
###Code # View the tweet dataframe head(tesladf) # View the `created_at` column in the tweet dataframe head(tesladf$created_at,10) ###Output _____no_output_____ ###Markdown We see the `created_at` column has the timestamp that we'd need to convert to the correct date format using `as.POSIXct()` which takes in:* The column being converted* `format`: The date format - here to be `"%Y-%m-%dT%H:%M:%SZ"`* `tz`: The time-zone of the conversionInputs for `format` argument to convert date-time format: ###Code # Update dates in `created_at` column with the new date format tesladf$created_at <- as.POSIXct(tesladf$created_at, format = "%Y-%m-%dT%H:%M:%SZ", tz = "GMT") # View the `created_at` column again head(tesladf$created_at, 10) ###Output _____no_output_____ ###Markdown To visualize tweets over time, we will use the `rtweet` library's `ts_plot()` function which takes in:* The data frame being plotted* `by`: The time interval - here `'hours'`* `color`: The color of the line ###Code # Create a time series plot ts_plot(tesladf, by = "hours", color = "blue") ###Output _____no_output_____ ###Markdown We see tweets for Tesla fluctuating from high to low and then reaching a high again between 17 and 18 May after a big dip on 17 May. The high number of tweets could be related to an event or topic about Tesla's products. **3b) Compare brand salience for two brands using time series plots and tweet frequencies**Let's compare how tweets mentioning `"Toyota"` compare against `"Tesla"` - here is the `search_tweets()` code used to get tweets on `"Toyota"````R Extract tweets for Toyota using `search_tweets()`tweets_toyo = search_tweets("toyota", n = 18000, lang = "en", include_rts = FALSE)fwrite(tweets_toyo, file = "toyotadf.csv")``` ###Code # Import extracted tweets on `"toyota"` in CSV format toyotadf = read.csv("https://github.com/datacamp/Brand-Analysis-using-Social-Media-Data-in-R-Live-Training/blob/master/data/toyotadf.csv?raw=true") # Explore the tweet dataframe for toyota dim(toyotadf) head(toyotadf) ###Output _____no_output_____ ###Markdown We can see the extracted tweets on `toyota` and the `created_at` column has the timestamp. ###Code # Update dates in `created_at` column with the new date format toyotadf$created_at <- as.POSIXct(toyotadf$created_at, format = "%Y-%m-%dT%H:%M:%SZ", tz = "GMT") # View the `created_at` column again head(toyotadf$created_at, 10) ###Output _____no_output_____ ###Markdown To visualize the number of tweets over time, we aggregate both `toyotadf` and `tesladf` into time series objects using `ts_data()` which takes in 2 arguments:* The data frame being converted* `by`: The time interval of frequency counting (here `'hours'`) ###Code # Create a time series object for Tesla at hourly intervals tesla_ts <- ts_data(tesladf, by ='hours') # View the time series object head(tesla_ts) # Rename the two columns in the time series object names(tesla_ts) <- c("time", "tesla_n") # View the output head(tesla_ts) # Create a time series object for Toyota at hourly intervals toyo_ts <- ts_data(toyotadf, by ='hours') # Rename the two columns in the time series object names(toyo_ts) <- c("time", "toyo_n") # View the output head(toyo_ts) ###Output _____no_output_____ ###Markdown We now have two time series objects with columns for time and tweet frequencies. Merge the objects into a single data frame using the `merge()` function which is from the `reshape` library. 
###Code # Load the required libraries library(reshape) library(ggplot2) ###Output _____no_output_____ ###Markdown The `merge()` function takes 3 arguments:* the time series objects to be merged * `by` argument which specifies the common column for merging* `all` argument to instruct whether all the rows should be included ###Code # Merge the time series objects with "time" as the common column merged_df <- merge(tesla_ts, toyo_ts, by = "time", all = TRUE) # View few rows of the merged dataframe head(merged_df) ###Output _____no_output_____ ###Markdown We can see the tweet frqeuencies for tesla and toyota in separate columns.Stack the tweet frequency counts into a single column and brands into another column using `melt()` from `reshape` library.The `melt()` function takes 3 arguments:* the dataframe to melt * `na.rm` to specify whether to include or exclude rows with missing values* `id.vars` to specify the source columns to be retained (`time` in this case) ###Code # Stack the tweet frequency columns melt_df <- melt(merged_df, na.rm = TRUE, id.vars = "time") # View the output head(melt_df) ###Output _____no_output_____ ###Markdown We can see that all columns other than `time` have been stacked and we have three columns now: `time`, `variable`, `value`. Plot the frequency of tweets on Tesla and Toyota using `ggplot()`.Set the relevant column names i.e. as values for the x-axis, y-axis, and color of the plot. ###Code ## Compare brand salience by plotting the frequency of tweets # Plot frequency of tweets on Tesla and Toyota ggplot(data = melt_df, aes(x = time, y = value, col = variable))+ geom_line(lwd = 0.8) ###Output _____no_output_____ ###Markdown It's interesting to see that there are relatively more tweets on Tesla than on Toyota. The higher level of tweet activity for Tesla indicates a stronger brand salience for Tesla than Toyota. Visualizing tweets through time series analysis provides good insights on interest level on a product and can be used to compare brand salience. --- Q&A 3 --- **4. Understand brand perception through text mining and by visualizing key terms** One of the most important and common tasks in social media data analysis is being able to understand what users are tweeting about the most and how they perceive a particular brand. In this section, we will visualize the most common words mentioning `"Tesla"` to build a word cloud that showcases the most common words. **4a) Processing tweets and twitter data** Tweets are unstructured, noisy, and raw, and properly processing them is essentially to accurately capture useful brand-perception information. Here are some processing steps we will be performing:* Step 1: Remove URLs from text* Step 2: Remove special characters, punctuations, and numbers* Step 3: Convert the text to a Corpus (i.e. large document of text)* Step 4: Convert all letters in the Corpus to lower case* Step 5: Remove common words (the, a, and ...), also called stop words, from the Corpus* Step 6: Remove custom stop words from the Corpus* Step 7: Trim leading and trailing spaces from Corpus First, extract the tweets stored in the `text` column of the tweet dataframe for Tesla. ###Code # Extract tweet text from the Tesla dataset twt_txt <- tesladf$text head(twt_txt, 15) ###Output _____no_output_____ ###Markdown We can see the first few rows of tweet text extracted from the main dataframe. 
**Step 1: Remove URLs from text**Use the `rm_twitter_url()` function from the `qdapRegex` library to remove all URLs from the text.`rm_twitter_url()` takes the tweet text dataframe as input. ###Code # Load the library library(qdapRegex) # Remove URLs from the tweet text and view the output twt_txt_url <- rm_twitter_url(twt_txt) # View few rows of the dataframe head(twt_txt_url, 15) ###Output _____no_output_____ ###Markdown The URLs are removed from tweets: check records starting with "This article says VW beat Tesla..." and "Anyone up for some..." for example. **Step 2: Remove special characters, punctuations, and numbers**To remove special characters, punctuations, and numbers, we will use the `gsub()` function which takes in:* The pattern to search for - for example, if we are searching for non-numbers and non-letters, the regular expression `"[^A-Za-z]"` is a pattern* The character to replace it with* The text source here `twt_txt_url` ###Code # Replace special characters, punctuation, & numbers with spaces twt_txt_chrs <- gsub("[^A-Za-z]"," " , twt_txt_url) # View text after replacing special characters, punctuation, & numbers head(twt_txt_chrs, 15) ###Output _____no_output_____ ###Markdown In the output, we can see that all content other than letters has been replaced with spaces. **Step 3: Building a Corpus**A Corpus is a list of text documents and is often used in text processing functions. To create a corpus, we will be using the `tm` library and the functions `VectorSource()` and `Corpus()`. The `VectorSource()` converts the tweet text to a vector of texts and the `Corpus()` function takes the output of `VectorSource()` and converts to a Corpus. An example on a tweets object would be: ###Code # Convert processed text to a text corpus and view output library(tm) twt_corpus <- twt_txt_chrs %>% VectorSource() %>% Corpus() head(twt_corpus$content, 15) ###Output _____no_output_____ ###Markdown The text is stored under `content` within the corpus just created. **Step 4: Convert Corpus to lower case**To have all words in our corpus being uniform, we will lower all words in the Corpus to lower case (`'Tesla'` vs `'tesla'`). To do this, will use the `tm_map()` function which applies a transformation to the corpus. In this case, it takes in 2 arguments:* The corpus being transformed* The transformation itself, stored in the `tolower()` function ###Code # Convert the corpus to lowercase twt_corpus_lwr <- tm_map(twt_corpus, tolower) # View the corpus after converting to lowercase head(twt_corpus_lwr$content, 15) ###Output _____no_output_____ ###Markdown All characters in the corpus are now converted to lowercase. **Step 5: Remove stop words from the Corpus**Stop words are commonly used words like `"a"`, `"an"`, `"the"` etc. They are often the most common words and tend to skew your analysis if left in the corpus. We will remove English stop words from the Corpus by using `tm_map()`which takes in this case 3 arguments:* The corpus being transformed* The transformation itself, stored in `removeWords()`* The English stop words to be removed, stored in `stopwords("english")` ###Code # Remove English stop words from the corpus and view the corpus twt_corpus_stpwd <- tm_map(twt_corpus_lwr, removeWords, stopwords("english")) # View the content column head(twt_corpus_stpwd$content, 15) ###Output _____no_output_____ ###Markdown The common stop words are now removed from the corpus. 
**Step 6: Remove custom stop words from the Corpus**In the corpus, frequently appearing terms like `tesla`, `sure`, `can`, `will` etc do not add any value for analysis and can be removed to create a meaningul, refined corpus.To do this, first extract a list of most frequent terms and their number of occurrences (also called term frequency) using the `freq_terms()` function from `qdap` library. `freq_terms()` takes two arguments: * The corpus * The top `"n"` terms to be extracted based on the number of occurrences ###Code # Load the library qdap library(qdap) # Extract term frequencies for top 60 words in the Corpus and view the output termfreq <- freq_terms(twt_corpus_stpwd, 60) termfreq ###Output _____no_output_____ ###Markdown We can see high frequencies for custom stop words like `tesla`, `s`, `t`, `elon` (`elon musk` is retained).Create of vector of such high frequency custom stop words. ###Code # Create a vector of custom stop words custom_stopwds <- c("tesla", "s", "t", "will", "elon", "can", "like", "just", "musk", "one", "m", "get", "now", "cars", "amp", "re", "go", "even", "via") ###Output _____no_output_____ ###Markdown Apply `tm_map()` and `removeWords()` functions on the corpus to remove the custom stop words. `tm_map()` takes 3 arguments: * The corpus* `removeWords()`* The vector of custom stop words ###Code # Remove custom stop words and create a refined corpus twt_corpus_stpwd2 <- tm_map(twt_corpus_stpwd, removeWords, custom_stopwds) # View the text corpus after removing custom stop words head(twt_corpus_stpwd2$content, 15) ###Output Warning message in tm_map.SimpleCorpus(twt_corpus_stpwd, removeWords, custom_stopwds): “transformation drops documents” ###Markdown You can see that the corpus now has only important terms as the common and user-defined custom stop words have been removed. Check the frequently occuring top 60 words again to see if we get a different list. ###Code # Extract term frequencies for the top 60 words termfreq_clean <- freq_terms(twt_corpus_stpwd2, 60) # View the output termfreq_clean ###Output _____no_output_____ ###Markdown **Step 7: Trim leading and trailing spaces from Corpus**To remove additional spaces and create a clean corpus, use the `tm_map()` which takes two arguments: * The Corpus* `stripWhitespace()` which collapses multiple spaces to a single space ###Code # Remove additional spaces from the corpus corp_refined <- tm_map(twt_corpus_stpwd2, stripWhitespace) # View the text corpus after removing spaces head(corp_refined$content, 15) ###Output Warning message in tm_map.SimpleCorpus(twt_corpus_stpwd2, stripWhitespace): “transformation drops documents” ###Markdown The additional spaces are now removed from the corpus. **4b) Visualizing brand perception** The most frequently used words in tweets are typically popular terms relevant to the topic tweeted.In this exercise, we will extract and visualize popular terms in our refined corpus using the word cloud. **Identify top 15 words spoken about the brand**Extract and view the term frequency for the top 15 words from the refined corpus. ###Code # Extract term frequencies for the top 15 words termfreq_15w <- freq_terms(corp_refined, 15) termfreq_15w ###Output _____no_output_____ ###Markdown The popular terms related to tweets on Tesla can be seen here.The brand promotion team can analyze these terms to understand the pulse of the audience. 
###Code # Identify terms with more than 60 counts from the top 15 list term60 <- subset(termfreq_15w, FREQ > 60) term60 ###Output _____no_output_____ ###Markdown **Visualize popular terms with word clouds**A word cloud is an image made up of words in which the size of each word indicates its frequency. The `wordcloud()` function from the `wordcloud` library is used to create word clouds and it takes the following arguments:* The Corpus* `min.freq` set to include only terms with a minimum frequency* `color` set to "red"* `scale` set to the range of font sizes* `random.order` set to FALSE to fix the word pattern in the word cloudThe `RColorBrewer()` library provides some interesting color palettes to work with. ###Code # Load libraries library(wordcloud) library(RColorBrewer) # Create a word cloud in red with min frequency of 100 wordcloud(corp_refined, min.freq = 100, colors = "red", scale = c(3,0.5),random.order = FALSE) ###Output _____no_output_____ ###Markdown A word cloud highlighting high-frequency words in large font sizes is displayed as output.We can see that 'elonmusk' stands out as the most popular term. Also, terms like 'car', 'buy', 'spacex' are the other popular ones. We can choose a color palette from the `RColorBrewer` library to make the word cloud colorful.Assign "6" colors from the “Dark2” palette of `brewer.pal()` and set the `max.words` argument to "50" to plot a word cloud of the top 50 words. ###Code # Create word cloud with 6 colors and max 50 words wordcloud(corp_refined, max.words = 50, colors = brewer.pal(6, "Dark2"), scale=c(4,1), random.order = FALSE) ###Output _____no_output_____ ###Markdown We now have an interesting word cloud depicting popular terms from tweets on Tesla positioned at the centre of the word cloud to highlight their relevance and importance.One can use word cloud as an effective promotional image for marketing campaigns as it communicates the brand messaging and highlights popular terms to convey the value of the content being shared. --- Q&A 4 --- 5. **Further understanding brand perception by analyzing tweet sentiments** Sentiment analysis is the process of retrieving information about a consumer's perception of a product or brand.It is used to extract and quantify positive, negative, and neutral opinions as well as emotions like trust, joy, and anger from the text. Steps involved in performing sentiment analysis:* Step 1: Extract tweets on topic of interest* Step 2: Extract sentiment scores from tweet text* Step 3: Visualize sentiment scores and interpret customer perceptions **Step 1: Extract tweets on topic of interest**To explore customer's sentiments on Tesla, import a smaller tweet dataset extracted from Twitter. ###Code # Load a smaller dataset for tesla tesladf_small <- read.csv("https://github.com/datacamp/Brand-Analysis-using-Social-Media-Data-in-R-Live-Training/blob/master/data/tesla_small.csv?raw=true", stringsAsFactors=FALSE) # Explore the tweet dataframe dim(tesladf_small) head(tesladf_small) ###Output _____no_output_____ ###Markdown We can see that this dataset has 500 tweets on Tesla. **Step 2: Extract sentiment scores from tweet text**The `get_nrc_sentiment()` function from the `syuzhet` package is used to extract sentiment scores for the text and it takes the column storing the tweet text as the argument. 
###Code # Load library library(syuzhet) # Perform sentiment analysis for tweets on `tesla` sa.value <- get_nrc_sentiment(tesladf_small$text) # View the sentiment scores for first 10 tweets head(sa.value, 10) ###Output _____no_output_____ ###Markdown The sentiment scores for the first 10 records are displayed here with the rows and columns representing the tweets and the emotions respectively. The column values are the sentiment scores for the tweets against each emotion. Get the sum of the sentiment scores for each emotion using `colSums()` and convert the output to a dataframe. `colSums()` takes the extracted sentiment scores as input. ###Code # Calculate sum of sentiment scores score <- colSums(sa.value[,]) # Convert the sum of scores to a dataframe score_df <- data.frame(score) # View the dataframe score_df ###Output _____no_output_____ ###Markdown The aggregated scores for each sentiment is displayed here.The score of 146 for anger indicates that 146 words in the corpus were classified under the emotion anger by the sentiment libraries. Convert the rownames containing the sentiment heads into a column and use `cbind()` to combine this column with the sentiment scores.Also, set the row names for this new dataframe to `"NULL"`. ###Code # Convert row names into 'sentiment' column and combine with sentiment scores score_df2 <- cbind(sentiment = row.names(score_df), score_df, row.names = NULL) # View the dataframe print(score_df2) ###Output sentiment score 1 anger 146 2 anticipation 270 3 disgust 67 4 fear 149 5 joy 154 6 sadness 106 7 surprise 160 8 trust 330 9 negative 272 10 positive 727 ###Markdown We can now see a data frame with sentiments in one column and their respective scores in the second column. **Step 3: Visualize sentiment scores and interpret customer perceptions**X-axis and Y-axis take the values `"sentiment"` and `"score"` respectively and fill is set to `"sentiment"`. ###Code # Plot the sentiment scores ggplot(data = score_df2, aes(x = sentiment, y = score, fill = sentiment)) + geom_bar(stat = "identity") + theme(axis.text.x = element_text(angle = 45, hjust = 1)) ###Output _____no_output_____
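###Markdown As a brief cross-language aside (the training itself stays in R), the counting idea behind `get_nrc_sentiment()` can be sketched in a few lines of Python with a toy lexicon. The words and emotion labels below are invented purely for illustration; the real NRC lexicon maps thousands of words to the ten categories plotted above:

```python
from collections import Counter

# Toy stand-in for an emotion lexicon (illustrative only)
toy_lexicon = {
    "love": ["joy", "positive"],
    "great": ["trust", "positive"],
    "hate": ["anger", "negative"],
    "crash": ["fear", "negative"],
}

def score_text(text):
    counts = Counter()
    for word in text.lower().split():
        for emotion in toy_lexicon.get(word, []):
            counts[emotion] += 1
    return counts

print(score_text("I love my new car but I hate the traffic"))
```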
code/Elipsoide_Clark_FAT_Classe.ipynb
###Markdown Elipsoide_Clark_FAT_Classe - Diego Taka Coisas para importar ###Code import numpy as np from scipy import linalg from matplotlib import pyplot as plt from fatiando import mesher, gridder, utils from fatiando.vis import mpl import scipy.special import scipy.interpolate %matplotlib inline ###Output _____no_output_____ ###Markdown Importar minhas funções de um arquivo externo ###Code #import Elipsoide_Clark_FAT_2V as me2 import Elipsoide_Clark_FAT_3V as me3 ###Output _____no_output_____ ###Markdown Fatiando a Terra - Ellipsoid ###Code # Malha coordenadas geograficas xmin = -100. xmax = 100. ymin = -90. ymax = 90. Nx = 200 Ny = 200 #xc posicao x , yc posição y e zc profundidade reais xc = -0. yc = -0. zc = 150. # Orientacoes do elipsoide azimute = 45. alfa = np.deg2rad(azimute+180.) delta = np.deg2rad(0.) gamma = np.deg2rad(0.) # Eixos do elipsoide a = 40.501 b = 30.500 c = 15.499 # Create a regular grid at 0m height shape = (Nx, Ny) area = [xmin, xmax, ymin, ymax] Xp, Yp, Zp = gridder.regular(area, shape, z=0.) ################################################################################################################################ # Set the inclination and declination of the regional field inten, inc, dec = 60000., np.deg2rad(62.), np.deg2rad(15.) # Create a ellipsoid model model = [me3.Ellipsoid(Xp, Yp, Zp, xc, yc, zc, a, b, c, alfa, delta, gamma, {'remanence': np.array([10000, np.deg2rad(25.), np.deg2rad(40.)]), 'k1': np.array([(0.1), np.deg2rad(90.), np.deg2rad(0.)]), 'k2': np.array([(0.1), np.deg2rad(180.), np.deg2rad(0.)]), 'k3': np.array([(0.1), np.deg2rad(0.), np.deg2rad(90.)])} )] # Calculate the anomaly for a given regional field Bx = me3.bx_c (Xp,Yp,Zp,inten,inc,dec,model) By = me3.by_c (Xp,Yp,Zp,inten,inc,dec,model) Bz = me3.bz_c (Xp,Yp,Zp,inten,inc,dec,model) Tf = me3.tf_c (Xp,Yp,Zp,inten,inc,dec,model) Bx = np.reshape(Bx, shape) By = np.reshape(By, shape) Bz = np.reshape(Bz, shape) Tf = np.reshape(Tf, shape) ###Output _____no_output_____ ###Markdown Resultado da minha função ###Code rangesBx = np.max(np.abs([np.max(Bx), np.min(Bx)])) plt.figure(figsize=(15,8)) plt.suptitle('Componente do campo Bx ( $nT$ )',y=1.04, fontsize=16, x=0.62) plt.subplot(1,1,1) plt.title('Elipsoide Fatiando', y=1.08) plt.axis('scaled') mpl.contourf(Yp,Xp,Bx,shape,15, vmin = -rangesBx, vmax = rangesBx, cmap=plt.cm.RdBu_r) cb = plt.colorbar(shrink=0.7) plt.xlim(ymin,ymax) plt.ylim(xmin,xmax) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.xlabel('Coordenada horizontal y (m)', fontsize=14) plt.ylabel('Coordenada horizontal x (m)', fontsize=14) plt.tight_layout() plt.show() rangesBy = np.max(np.abs([np.max(By), np.min(By)])) plt.figure(figsize=(15,8)) plt.suptitle('Componente do campo By ( $nT$ )',y=1.04, fontsize=16, x=0.62) plt.subplot(1,1,1) plt.title('Elipsoide Fatiando', y=1.08) plt.axis('scaled') mpl.contourf(Yp,Xp,By,shape,15, vmin = -rangesBy, vmax = rangesBy, cmap=plt.cm.RdBu_r) cb = plt.colorbar(shrink=0.7) plt.xlim(ymin,ymax) plt.ylim(xmin,xmax) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.xlabel('Coordenada horizontal y (m)', fontsize=16) plt.ylabel('Coordenada horizontal x (m)', fontsize=16) plt.tight_layout() plt.show() rangesBz = np.max(np.abs([np.max(Bz), np.min(Bz)])) plt.figure(figsize=(15,8)) plt.suptitle('Componente do campo Bz ( $nT$ )',y=1.04, fontsize=16, x=0.62) plt.subplot(1,1,1) plt.title('Elipsoide Fatiando', y=1.08) plt.axis('scaled') mpl.contourf(Yp,Xp,Bz,shape,15, vmin = -rangesBz, vmax = rangesBz, cmap=plt.cm.RdBu_r) cb = plt.colorbar(shrink=0.7) 
plt.xlim(ymin,ymax) plt.ylim(xmin,xmax) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.xlabel('Coordenada horizontal y (m)', fontsize=16) plt.ylabel('Coordenada horizontal x (m)', fontsize=16) plt.tight_layout() plt.show() rangesTf = np.max(np.abs([np.max(Tf), np.min(Tf)])) plt.figure(figsize=(15,8)) plt.suptitle('Anomalia de campo total aproximada ( $nT$ )',y=1.04, fontsize=16, x=0.62) plt.subplot(1,1,1) plt.title('Elipsoide triaxial', y=1.04, fontsize=14) plt.axis('scaled') mpl.contourf(Yp,Xp,Tf,shape,15, vmin = -rangesTf, vmax = rangesTf, cmap=plt.cm.RdBu_r) cb = plt.colorbar(shrink=0.7) plt.xlim(ymin,ymax) plt.ylim(xmin,xmax) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.xlabel('Coordenada horizontal y (m)', fontsize=12) plt.ylabel('Coordenada horizontal x (m)', fontsize=12) plt.tight_layout() plt.show() ###Output _____no_output_____
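###Markdown A note on the last panel above (added explanation, not part of the original notebook): the title calls it an *approximate* total-field anomaly. Assuming the usual x-north, y-east, z-down convention of Fatiando a Terra, and an anomalous field much weaker than the regional field of intensity `inten`, inclination `inc` and declination `dec`, the approximation is the projection of the anomaly vector onto the regional field direction, $$\Delta T \approx \hat{\mathbf{F}}\cdot\Delta\mathbf{B} = \Delta B_x\cos I\cos D + \Delta B_y\cos I\sin D + \Delta B_z\sin I,$$ which is presumably what `tf_c` computes from the same components plotted above with `bx_c`, `by_c` and `bz_c`.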
Python/Into to Python - Strings.ipynb
###Markdown Intro to Python - Exercises - Part 6 6. Strings Until now, most examples and exercises have been using numbers. In daily life, it is far more commonplace to deal with textual information. So are you ever going to learn how to deal with texts?The reason that dealing with texts was postponed until this point, is that dealing with numbers is simply easier than dealing with texts. But in the present section, the first steps are taken to learn to manipulate textual information.Texts, in programming languages, are dealt with in the form of strings. This section is on the details of strings, and on readily-available functions to juggle them. Multi-line strings Strings in Python may span across multiple lines. This can be useful when you have a very long string, or when you want to format the output of the string in a certain way. Multi-line strings can be achieved in two ways:1. With single or double quotes, and an indication that the remainder of the string continues on the next line with a backslash.2. With triple single or double quotes.I first demonstrate how this works when you use the regular string enclosure with one double or single quote at each end of the string: ###Code longString = "I'm fed up with being treated like sheep. \ What's the point of going abroad if you're just another \ tourist carted around in buses surrounded by sweaty \ mindless oafs from Kettering and Coventry in their \ cloth caps and their cardigans and their transistor \ radios and their Sunday Mirrors, complaining about \ the tea - 'Oh they don't make it properly here, do they, \ not like at home' - and stopping at Majorcan bodegas \ selling fish and chips and Watney's Red Barrel and \ calamaris and two veg and sitting in their cotton frocks \ squirting Timothy White's suncream all over their puffy \ raw swollen purulent flesh 'cos they 'overdid it on the first day." print(longString) ###Output I'm fed up with being treated like sheep. What's the point of going abroad if you're just another tourist carted around in buses surrounded by sweaty mindless oafs from Kettering and Coventry in their cloth caps and their cardigans and their transistor radios and their Sunday Mirrors, complaining about the tea - 'Oh they don't make it properly here, do they, not like at home' - and stopping at Majorcan bodegas selling fish and chips and Watney's Red Barrel and calamaris and two veg and sitting in their cotton frocks squirting Timothy White's suncream all over their puffy raw swollen purulent flesh 'cos they 'overdid it on the first day. ###Markdown As you can see, Python now interprets this example as a single line of text. The backslash (`\`) can actually be included after any Python statement to indicate that it continues on the next line, and it can be quite useful for that, for instance when you write long calculations.The recommended way to write multi-line strings in Python is, however, to use triple double or single quotes. I indicated earlier that you can use those to write multi-line comments. 
Such comments are basically large strings in the middle of your Python program, which do nothing as they are not assigned to a variable.Here is an example of a long string with triple double quotes: ###Code longString = """And being herded into endless Hotel Miramars and Bellevueses and Continentales with their modern international luxury roomettes and draught Red Barrel and swimming pools full of fat German businessmen pretending they're acrobats forming pyramids and frightening the children and barging into queues and if you're not at your table spot on seven you miss the bowl of Campbell's Cream of Mushroom soup, the first item on the menu of International Cuisine, and every Thursday night the hotel has a bloody cabaret in the bar, featuring a tiny emaciated dago with nine-inch hips and some bloated fat tart with her hair brylcreemed down and a big arse presenting Flamenco for Foreigners.""" print(longString) ###Output And being herded into endless Hotel Miramars and Bellevueses and Continentales with their modern international luxury roomettes and draught Red Barrel and swimming pools full of fat German businessmen pretending they're acrobats forming pyramids and frightening the children and barging into queues and if you're not at your table spot on seven you miss the bowl of Campbell's Cream of Mushroom soup, the first item on the menu of International Cuisine, and every Thursday night the hotel has a bloody cabaret in the bar, featuring a tiny emaciated dago with nine-inch hips and some bloated fat tart with her hair brylcreemed down and a big arse presenting Flamenco for Foreigners. ###Markdown The interesting difference between these two examples is that in the first example, the string was interpreted as one long, continuous series of characters, while in the second example the different lines are all printed on different lines on the output. The reason that this happens is that there is an invisible character included at the end of each line in the second example that indicates that Python should move to the next line before continuing. This is the so-called "newline" character, and you can actually insert it explicitly into a string, using the code "`\n`". So this code should not be read as two characters, the backslash and the "n", but as a single newline character. By using it, you can ensure that you print the output on multiple lines, even if you use the backslash to indicate the continuation of the string, as was done in the first example. For example: ###Code longstring = "And then some adenoidal typists from Birmingham with flabby\n\ white legs and diarrhoea trying to pick up hairy bandy-legged\n\ wop waiters called Manuel and once a week there's an excursion\n\ to the local Roman Ruins to buy cherryade and melted ice cream\n\ and bleeding Watney's Red Barrel and one evening you visit the\n\ so called typical restaurant with local colour and atmosphere\n\ and you sit next to a party from Rhyl who keep singing\n\ 'Torremolinos, torremolinos' and complaining about the food -\n\ 'It's so greasy here, isn't it?' - and you get cornered by some\n\ drunken greengrocer from Luton with an Instamatic camera and\n\ Dr. Scholl sandals and last Tuesday's Daily Express and he\n\ drones on and on and on about how Mr. Smith should be running\n\ this country and how many languages Enoch Powell can speak and\n\ then he throws up over the Cuba Libres." 
print(longstring) ###Output And then some adenoidal typists from Birmingham with flabby white legs and diarrhoea trying to pick up hairy bandy-legged wop waiters called Manuel and once a week there's an excursion to the local Roman Ruins to buy cherryade and melted ice cream and bleeding Watney's Red Barrel and one evening you visit the so called typical restaurant with local colour and atmosphere and you sit next to a party from Rhyl who keep singing 'Torremolinos, torremolinos' and complaining about the food - 'It's so greasy here, isn't it?' - and you get cornered by some drunken greengrocer from Luton with an Instamatic camera and Dr. Scholl sandals and last Tuesday's Daily Express and he drones on and on and on about how Mr. Smith should be running this country and how many languages Enoch Powell can speak and then he throws up over the Cuba Libres. ###Markdown This means that if you do not want automatic newline characters inserted into a multi-line string, you have to use the first approach, with the backslash at the end of the line. If you are okay with newline characters in your multi-line string, the second approach is probably the easiest to read. Escape sequences "`\n`" is a so-called "escape sequence". An escape sequence is a string character written as a backslash followed by a code, which can be one or multiple characters. Python interprets escape sequences in a string as a special character; a control character. ###Code word1 = "orange" word2 = "banana" def add_newline_between_words(word1, word2): new_line = word1 + "\n" + word2 return(new_line) print(add_newline_between_words(word1,word2)) ###Output orange banana ###Markdown Besides the newline character there are more special characters "`\'`" and "`\"`", which can be used to place a single respectively double quote in a string, regardless of what characters surround the string. I also mentioned that you can use "`\\`" to insert a "real" backslash in a string. There are a few more "backslash sequences" which lead to a special character. Most of these are archaic and you do not need to worry about them. The one I want to mention are "`\t`" which represents a single tabulation (also known as the 'tab'). ###Code d = "test" m = "me" def place_word_between_single_quotes(w1): new_line = '\'' + word1 + "\'" return(new_line) print(place_word_between_single_quotes(m)) def place_word_between_double_quotes(w1): new_line = '\t' + word1 + '"' return(new_line) print(place_word_between_double_quotes(d)) ###Output 'orange' orange" ###Markdown Extra information for students who want to know more, but not necessary for this course:There is another character "`\xnn`" whereby `nn` stands for two hexadecimal digits, which represents the character with hexadecimal number `nn`. For example, "`\x20`" is the character expressed by the hexadecimal number `20`, which is the same as the decimal number `32`, which is the space (this will be explained later in this chapter).In case you never learned about hexadecimal counting: hexadecimals use a numbering scheme that uses 16 different digits. We use ten (`0` to `9`), binary uses two (`0` to `1`), and hexidecimal then uses `0` to `9` and then continues from `A` to `F`. A direct translation from hexadecimals to decimals turns `A` into `10`, `B` into `11`, etcetera. In decimal counting, the value of a multi-digit number is found by multiplying the digits by increasing powers of `10`, from right to left, e.g., the number `1426` is `6 + 2*10 + 4*100 + 1*1000`. 
For hexadecimal numbers you do the same thing, but multiply by powers of `16`, e.g., the hexadecimal number `4AF2` is `2 + 15*16 + 10*256 + 4*4096`. Programmers tend to like hexadecimal numbers, as computers work with bytes as the smallest unit of memory storage, and a byte can store 256 different values, i.e., any byte value can be expressed by a hexadecimal number of two digits. Accessing characters of a string As I showed several times before, a string is a collection of characters in a specific order. You can access the individual characters of a string using indices. String indices Each symbol in a string has a position, this position can be referred to by the index number of the position. The index numbers start at 0 and then increase to the length of the string. The following table shows the word "orange" in the first row and the indices for each letter in the second and third rows:&nbsp;&nbsp;__` o r a n g e`__&nbsp;&nbsp;` 0 1 2 3 4 5`` -6 -5 -4 -3 -2 -1`As you can see, you can use positive indices, which start at the first letter of the string and increase until the end of the string is reached, or negative indices, which start with -1 for the last letter of the string and decrease until the first letter of the string is reached.As the length of a string `s` is `len(s)`, the last letter of the string has index `len(s)-1`. With negative indices, the first letter of the string has index `-len(s)`.If a string is stored in a variable, the individual letters of the string can be accessed by the variable name and the index of the requested letter between square brackets (`[]`) next to it. ###Code fruit = "orange" def print_indices(fruit,n): print(fruit[n]) print_indices(fruit,1) print_indices(fruit,2) print_indices(fruit,4) print_indices(fruit,-1) print_indices(fruit,-6) print_indices(fruit,-3) print(len(fruit)) ###Output r a g e o n 6 ###Markdown Besides using single indices you can also access a substring (also called a "slice") from a string by using two numbers between the square brackets with a colon (`:`) in between. The first of these numbers is the index where the substring starts, the second where it ends. The substring does *not* include the letter at the second index. By leaving out the left number you indicate that the substring starts at the beginning of the string (i.e., at index 0). By leaving out the right number you indicate that the substring ranges up to and includes the last character of the string.If you try to access a character using an index that is beyond the reaches of a string, you get a runtime error ("index out of bounds"). For a range of indices to access substrings such limitations do not exist; you can use numbers that are outside the bounds of the string. 
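###Markdown A quick demonstration of the difference just described (this cell is an addition, not part of the original chapter): a single out-of-bounds index raises an error, while a slice with out-of-range bounds simply clips to the string. ###Code
fruit = "orange"

# a slice may use indices beyond the end of the string; it just stops at the last character
print(fruit[2:100])

# a single index outside the string raises an IndexError
try:
    print(fruit[100])
except IndexError as error:
    print("IndexError:", error)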
###Code fruit = "orange" print(fruit[:]) print(fruit[0:]) print(fruit[:5]) print(fruit[:100]) print(fruit[:len(fruit)]) print(fruit[1:-1]) print(fruit[2], fruit[1:6]) ###Output orange orange orang orange orange rang a range ###Markdown Traversing strings We already saw how you can traverse the characters of a string using a `for` loop: ###Code fruit = 'apple' def traverse_characters(word): new_word = "" for char in word: new_word+=(char + ' - ') return new_word print(traverse_characters(fruit)) ###Output a - p - p - l - e - ###Markdown Now you know about indices, you probably realize you can also use those to traverse the characters of a string: ###Code fruit = 'apple' def traverse_characters2(word): new_word = "" for i in range(0, len(word)): new_word += word[i] + " - " return new_word def traverse_characters3(word): new_word = "" i = 0 while i < len(word): new_word += word[i] + " - " i += 1 return new_word print(traverse_characters2(fruit)+"\n"+traverse_characters3(fruit)) ###Output a - p - p - l - e - a - p - p - l - e - ###Markdown If you just want to traverse the individual characters of a string, the first method, using `for in :`, is by far the most elegant and readable. However, occasionally you have to solve problems in which you might prefer one of the other methods.**Exercise (optional)**: Write code that for a string prints the indices of all of its vowels (`a`, `e`, `i`, `o`, and `u`). This can be done with a `for` loop or a `while` loop. ###Code # Indices of vowels def index_vowels(text): # index_vowels("apple") ###Output 0 : a 4 : e ###Markdown **Exercise (optional)**: Write code that uses two strings. For each character in the first string that has exactly the same character at the same index in the second string, you print the character and the index. Watch out that you do not get an "index out of bounds" runtime error. ###Code # Your function statement1 = "The Holy Grail" statement2 = "Life of Brian" def similar_char(text1, text2): for i in range(len(text1)): print(i) print(similar_char(statement1, statement2)) ###Output 0 1 2 3 4 5 6 7 8 9 10 11 12 13 None ###Markdown **Exercise (optional)**: Write a function that takes a string as argument, and creates a new string that is a copy of the argument, except that every non-letter is replaced by a space (e.g., "`ph@t l00t`" is changed to "`ph t l t`"). To write such a function, you will start with an empty string, and traverse the characters of the argument one by one. When you encounter a character that is acceptable, you add it to the new string. When it is not acceptable, you add a space to the new string. Note that you can check whether a character is acceptable by simple comparisons, e.g., any lower case letter can be found using the test `if ch >= 'a' and ch <= 'z':`. ###Code # String cleaning function def clean_string(string): clean_string("Aph@t 100t") ###Output Aph t t ###Markdown Extended slices Slices (substrings) in python can take a third argument, which is the step size (or "stride") that is taken between indices. It is similar to the third argument for the `range()` function. The format for slices then becomes `[::]`. By default the step size is 1.The most common use for the step size is to use a negative step size in order to create a reversed version of a string. 
###Code fruit = "banana" print(fruit[::2]) print(fruit[1::2]) print(fruit[::-1]) print(fruit[::-2]) ###Output bnn aaa ananab aaa ###Markdown Reversing a string using `[::-1]` is conceptually similar to traversing the string from the last character to the beginning of the string using backward steps of size 1. ###Code def fff(fruit): for i in range(len(fruit), -1): print(fruit[i]) fff("banana") ###Output _____no_output_____ ###Markdown Strings are immutable A core property of strings is that they are *immutable*. This means that they cannot be changed. For instance, you cannot change a character of a string by assigning a new value to it. As a demonstration, the following code leads to a runtime error if you try to run it: ###Code fruit = "oringe" fruit[2] = "a" print(fruit) ###Output _____no_output_____ ###Markdown If you want to make a change to a string, you have to create a new string that contains the change; you can then assign the new string to the existing variable if you want. For instance: ###Code fruit = "oringe" fruit = fruit[:2] + "a" + fruit[3:] print(fruit) ###Output orange ###Markdown The reasons for why strings are immutable are beyond the scope of this course. Just remember that if you want to modify a string you need to overwrite the entire string, and you cannot modify individual indices. `string` methods There is a collection of methods that are designed to operate on strings. All of these methods are applied to a string to perform some operation. Since strings are immutable, they *never change* the string they work on, but they always `return` a changed version of the string.All these methods are called as `.()`, i.e., you have to write the string that they work on before the method call, with a period in between. You will encounter this more often, and why this is implemented in this way will be explained later in the course, in the chapters about object orientation.Most of these methods are not part of a specific module, but can be called without importing them. There is a `string` module that contains specific constants and methods that can be used in your programs, but the methods I discuss here can all be used without importing the `string` module. `strip()` `strip()` removes from a string leading and trailing spaces, including leading and trailing newlines and other characters that may be viewed as spaces. There are no parameters. See the following example (the string is bordered by [ and ] to show the effect): ###Code s = " And now for something completely different\n " print("["+s+"]") s = s.strip() print("["+s+"]") ###Output [ And now for something completely different ] [And now for something completely different] ###Markdown `upper()` and `lower()` `upper()` creates a version of a string of which all letters are capitals. `lower()` is equivalent, but uses only lower case letters. Neither method uses parameters. ###Code s = "The Meaning of Life " print(s) print(s.upper()) print(s[:-3].lower()) print(s.strip()) ###Output The Meaning of Life THE MEANING OF LIFE the meaning of life The Meaning of Life ###Markdown `find()` `find()` can be used to search in a string for the starting index of a particular substring. As parameters it gets the substring, and optionally a starting index to search from, and an ending index. It returns the lowest index where the substring starts, or `-1` if the substring is not found. 
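###Markdown Because `find()` only reports the first (lowest) occurrence, it is sometimes combined with its optional start parameter to collect every occurrence. The helper below is a small added sketch (not part of the original chapter); the basic `find()` examples follow in the next cell. ###Code
def find_all(text, sub):
    # collect the starting index of every occurrence of sub in text
    positions = []
    start = text.find(sub)
    while start != -1:
        positions.append(start)
        start = text.find(sub, start + 1)
    return positions

print(find_all("sat on the wall", "a"))   # [1, 12]
print(find_all("sat on the wall", "q"))   # []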
###Code s = "sat on the wall" print(s.find("s")) print(s.find("t")) print(s.find("t", 12)) print(s.find("q")) s.find(" ") ###Output 0 2 -1 -1 ###Markdown `replace()` `replace()` replaces all occurrences of a substring with another substring. As parameters it gets the substring to look for, and the substring to replace it with. Optionally, it gets a parameter that indicates the maximum number of replacements to be made. I must stress again that strings are immutable, so the `replace()` function is not actually changing the string. It returns a new string that is a copy of the string with the replacements made. ###Code s = ' Humpty Dumpty sat on the wall ' new_s = s.replace('sat on', 'fell off') print(new_s) print(s) ###Output Humpty Dumpty fell off the wall Humpty Dumpty sat on the wall ###Markdown `split()` `split()` splits a string up in words, based on a given character or substring which is used as separator. The separator is given as the parameter, and if no separator is given, the white space is used, i.e., you split a string in the actual words (though punctuation attached to words is considered part of the words). If there are multiple occurrences of the separator next to each other, the extra ones are ignored (i.e., with the white space as separator, it does not matter if there is a single white space between two words, or multiple).The result of this split is a so-called "list" of strings. Lists are discussed in a coming chapter. However, note that if you want to access the separate words, you can use the `for in :` construction. ###Code s = 'Humpty Dumpty sat, on the wall' wordlist = s.split(',') for i in wordlist: print(i) print(wordlist) ###Output Humpty Dumpty sat on the wall ['Humpty Dumpty sat', ' on the wall'] ###Markdown A very useful property of splitting is that we can decode some basic file formats. For example, a comma separated value (CSV) file is a very simple format, of which the basic setup is that each line consists of values that are separated by a comma. These values can be split from each other using the `split()` method. (Note: In actuality it will be a bit more convoluted as there might be commas in the fields that are stored in the CSV file, so it depends a bit on the contents of the file whether this simple approach will work. More on CSV files will be said in a later chapter in the course, where file formats are discussed.) ###Code csv = "2016,September,28,Data Processing,Tilburg University,Tilburg" values = csv.split(',') for value in values: print(value) print("") print(values) print (values[1][0]) ###Output 2016 September 28 Data Processing Tilburg University Tilburg ['2016', 'September', '28', 'Data Processing', 'Tilburg University', 'Tilburg'] S ###Markdown `join()` `join()` is the opposite of `split()`. `join()` joins a list of words together, separated by a specific separator. This sounds like it would be a method of lists, but for historic reasons it is defined as a string method. Since all string methods are called with the format `.()`, there must be a string in front of the call to `join()`. That string is the separator that you want to use, while the parameter of the method is the list that you want to join together. The return value, as always, is the resulting string. 
In the following example, note the notation of each of these steps: ###Code s = "Humpty;Dumpty;sat;on;the;wall" a = "my name is" print (s) wordlist = s.split(';') print (wordlist) s = "".join(a) print(s) ###Output Humpty;Dumpty;sat;on;the;wall ['Humpty', 'Dumpty', 'sat', 'on', 'the', 'wall'] my name is ###Markdown What you learned In this chapter, you learned about:- Strings- Multi-line strings- Accessing string characters with positive and negative indices- Slices- Immutability of strings- String methods `strip()`, `upper()`, `lower()`, `find()`, `replace()`, `split()`, and `join()`- Escape sequences Exercises **Exercise 6.1:** The text string in the next cell contains several words which are enclosed by square brackets (`[` and `]`). Scan the string and print out all words which are between square brackets. For example, if the text string would be "`[a]n example[ string]`", you are expected to print out "`a string`". ###Code # Distilling text. text = """The quick, brown fox jumps over a lazy dog. DJs flock by when MTV ax quiz prog. Junk MTV quiz graced by fox whelps. [Never gonna ] Bawds jog, flick quartz, vex nymphs. [give you up\n] Waltz, bad nymph, for quick jigs vex! Fox nymphs grab quick-jived waltz. Brick quiz whangs jumpy veldt fox. [Never ] Bright vixens jump; [gonna let ] dozy fowl quack. Quick wafting zephyrs vex bold Jim. Quick zephyrs blow, vexing daft Jim. Charged [you down\n] fop blew my junk TV quiz. How quickly daft jumping zebras vex. Two driven jocks help fax my big quiz. Quick, Baz, get my woven flax jodhpurs! "Now fax quiz Jack!" my brave ghost pled. [Never ] Five quacking zephyrs jolt my wax bed. [gonna ] Flummoxed by job, kvetching W. zaps Iraq. Cozy sphinx waves quart jug of bad milk. [run around ] A very bad quack might jinx zippy fowls. Few quips galvanized the mock jury box. Quick brown dogs jump over the lazy fox. The jay, pig, fox, zebra, and my wolves quack! [and desert you] Blowzy red vixens fight for a quick jump. Joaquin Phoenix was gazed by MTV for luck. A wizard’s job is to vex chumps quickly in fog. Watch "Jeopardy!", Alex Trebek's fun TV quiz game.""" def mySplit(): text_split = text.split("[") for i in range(1, len(text_split)) : bracket_text = text_split[i].split("]") print(bracket_text[0]) mySplit() ###Output Never gonna give you up Never gonna let you down Never gonna run around and desert you ###Markdown **Exercise 6.2:** Print a line of all the capital letters "A" to "Z". Below it, print a line of the letters that are 13 positions in the alphabet away from the letters that are above them. E.g., below the "A" you print an "N", below the "B" you print an "O", etcetera. You have to consider the alphabet to be circular, i.e., after the "Z", it loops back to the "A" again. ###Code # ROTR-13 letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" new_letters = letters[13:] + letters[:13] print(letters) print(new_letters) ###Output ABCDEFGHIJKLMNOPQRSTUVWXYZ NOPQRSTUVWXYZABCDEFGHIJKLM ###Markdown **Exercise 6.3:** In the text below, count how often the word "wood" occurs (using program code, of course). Capitals and lower case letters may both be used, and you have to consider that the word "wood" should be a separate word, and not part of another word. Hint: If you did the exercises from this chapter, you already developed a function that "cleans" a text. Combining that function with the `split()` function more or less solves the problem for you. ###Code text = """How much wood would a woodchuck chuck If a woodchuck could chuck wood? 
He would chuck, he would, as much as he could, And chuck as much as a woodchuck would If a Mr. Smith could chuck wood\n\r\t.""" # read whole text # create a counter # get rid \n # get rid of ?. special grammar # lowercase my sentence "if a woodchuck could chuck wood" # split the string by some character ["if", "a", "woodchuck"] #check if wood is in the list # if yes # counter = counter +1 <--> counter += 1 # else #pass #return # Counting wood. def wood_counter(text): clean_text = text.replace("?", " ").replace("\n"," ").replace("\r", " ").replace("\t", " ").replace(",", " ").replace(".", " ") lower_text = clean_text.lower() split_text = lower_text.split() counter = 0 print(split_text) for word in split_text: if word == "wood": counter += 1 return counter wood_counter(text) help(str) ###Output Help on class str in module builtins: class str(object) | str(object='') -> str | str(bytes_or_buffer[, encoding[, errors]]) -> str | | Create a new string object from the given object. If encoding or | errors is specified, then the object must expose a data buffer | that will be decoded using the given encoding and error handler. | Otherwise, returns the result of object.__str__() (if defined) | or repr(object). | encoding defaults to sys.getdefaultencoding(). | errors defaults to 'strict'. | | Methods defined here: | | __add__(self, value, /) | Return self+value. | | __contains__(self, key, /) | Return key in self. | | __eq__(self, value, /) | Return self==value. | | __format__(self, format_spec, /) | Return a formatted version of the string as described by format_spec. | | __ge__(self, value, /) | Return self>=value. | | __getattribute__(self, name, /) | Return getattr(self, name). | | __getitem__(self, key, /) | Return self[key]. | | __getnewargs__(...) | | __gt__(self, value, /) | Return self>value. | | __hash__(self, /) | Return hash(self). | | __iter__(self, /) | Implement iter(self). | | __le__(self, value, /) | Return self<=value. | | __len__(self, /) | Return len(self). | | __lt__(self, value, /) | Return self<value. | | __mod__(self, value, /) | Return self%value. | | __mul__(self, value, /) | Return self*value. | | __ne__(self, value, /) | Return self!=value. | | __repr__(self, /) | Return repr(self). | | __rmod__(self, value, /) | Return value%self. | | __rmul__(self, value, /) | Return value*self. | | __sizeof__(self, /) | Return the size of the string in memory, in bytes. | | __str__(self, /) | Return str(self). | | capitalize(self, /) | Return a capitalized version of the string. | | More specifically, make the first character have upper case and the rest lower | case. | | casefold(self, /) | Return a version of the string suitable for caseless comparisons. | | center(self, width, fillchar=' ', /) | Return a centered string of length width. | | Padding is done using the specified fill character (default is a space). | | count(...) | S.count(sub[, start[, end]]) -> int | | Return the number of non-overlapping occurrences of substring sub in | string S[start:end]. Optional arguments start and end are | interpreted as in slice notation. | | encode(self, /, encoding='utf-8', errors='strict') | Encode the string using the codec registered for encoding. | | encoding | The encoding in which to encode the string. | errors | The error handling scheme to use for encoding errors. | The default is 'strict' meaning that encoding errors raise a | UnicodeEncodeError. 
Other possible values are 'ignore', 'replace' and | 'xmlcharrefreplace' as well as any other name registered with | codecs.register_error that can handle UnicodeEncodeErrors. | | endswith(...) | S.endswith(suffix[, start[, end]]) -> bool | | Return True if S ends with the specified suffix, False otherwise. | With optional start, test S beginning at that position. | With optional end, stop comparing S at that position. | suffix can also be a tuple of strings to try. | | expandtabs(self, /, tabsize=8) | Return a copy where all tab characters are expanded using spaces. | | If tabsize is not given, a tab size of 8 characters is assumed. | | find(...) | S.find(sub[, start[, end]]) -> int | | Return the lowest index in S where substring sub is found, | such that sub is contained within S[start:end]. Optional | arguments start and end are interpreted as in slice notation. | | Return -1 on failure. | | format(...) | S.format(*args, **kwargs) -> str | | Return a formatted version of S, using substitutions from args and kwargs. | The substitutions are identified by braces ('{' and '}'). | | format_map(...) | S.format_map(mapping) -> str | | Return a formatted version of S, using substitutions from mapping. | The substitutions are identified by braces ('{' and '}'). | | index(...) | S.index(sub[, start[, end]]) -> int | | Return the lowest index in S where substring sub is found, | such that sub is contained within S[start:end]. Optional | arguments start and end are interpreted as in slice notation. | | Raises ValueError when the substring is not found. | | isalnum(self, /) | Return True if the string is an alpha-numeric string, False otherwise. | | A string is alpha-numeric if all characters in the string are alpha-numeric and | there is at least one character in the string. | | isalpha(self, /) | Return True if the string is an alphabetic string, False otherwise. | | A string is alphabetic if all characters in the string are alphabetic and there | is at least one character in the string. | | isascii(self, /) | Return True if all characters in the string are ASCII, False otherwise. | | ASCII characters have code points in the range U+0000-U+007F. | Empty string is ASCII too. | | isdecimal(self, /) | Return True if the string is a decimal string, False otherwise. | | A string is a decimal string if all characters in the string are decimal and | there is at least one character in the string. | | isdigit(self, /) | Return True if the string is a digit string, False otherwise. | | A string is a digit string if all characters in the string are digits and there | is at least one character in the string. | | isidentifier(self, /) | Return True if the string is a valid Python identifier, False otherwise. | | Call keyword.iskeyword(s) to test whether string s is a reserved identifier, | such as "def" or "class". | | islower(self, /) | Return True if the string is a lowercase string, False otherwise. | | A string is lowercase if all cased characters in the string are lowercase and | there is at least one cased character in the string. | | isnumeric(self, /) | Return True if the string is a numeric string, False otherwise. | | A string is numeric if all characters in the string are numeric and there is at | least one character in the string. | | isprintable(self, /) | Return True if the string is printable, False otherwise. | | A string is printable if all of its characters are considered printable in | repr() or if it is empty. 
| | isspace(self, /) | Return True if the string is a whitespace string, False otherwise. | | A string is whitespace if all characters in the string are whitespace and there | is at least one character in the string. | | istitle(self, /) | Return True if the string is a title-cased string, False otherwise. | | In a title-cased string, upper- and title-case characters may only | follow uncased characters and lowercase characters only cased ones. | | isupper(self, /) | Return True if the string is an uppercase string, False otherwise. | | A string is uppercase if all cased characters in the string are uppercase and | there is at least one cased character in the string. | | join(self, iterable, /) | Concatenate any number of strings. | | The string whose method is called is inserted in between each given string. | The result is returned as a new string. | | Example: '.'.join(['ab', 'pq', 'rs']) -> 'ab.pq.rs' | | ljust(self, width, fillchar=' ', /) | Return a left-justified string of length width. | | Padding is done using the specified fill character (default is a space). | | lower(self, /) | Return a copy of the string converted to lowercase. | | lstrip(self, chars=None, /) | Return a copy of the string with leading whitespace removed. | | If chars is given and not None, remove characters in chars instead. | | partition(self, sep, /) | Partition the string into three parts using the given separator. | | This will search for the separator in the string. If the separator is found, | returns a 3-tuple containing the part before the separator, the separator | itself, and the part after it. | | If the separator is not found, returns a 3-tuple containing the original string | and two empty strings. | | replace(self, old, new, count=-1, /) | Return a copy with all occurrences of substring old replaced by new. | | count | Maximum number of occurrences to replace. | -1 (the default value) means replace all occurrences. | | If the optional argument count is given, only the first count occurrences are | replaced. | | rfind(...) | S.rfind(sub[, start[, end]]) -> int | | Return the highest index in S where substring sub is found, | such that sub is contained within S[start:end]. Optional | arguments start and end are interpreted as in slice notation. | | Return -1 on failure. | | rindex(...) | S.rindex(sub[, start[, end]]) -> int | | Return the highest index in S where substring sub is found, | such that sub is contained within S[start:end]. Optional | arguments start and end are interpreted as in slice notation. | | Raises ValueError when the substring is not found. | | rjust(self, width, fillchar=' ', /) | Return a right-justified string of length width. | | Padding is done using the specified fill character (default is a space). | | rpartition(self, sep, /) | Partition the string into three parts using the given separator. | | This will search for the separator in the string, starting at the end. If | the separator is found, returns a 3-tuple containing the part before the | separator, the separator itself, and the part after it. | | If the separator is not found, returns a 3-tuple containing two empty strings | and the original string. | | rsplit(self, /, sep=None, maxsplit=-1) | Return a list of the words in the string, using sep as the delimiter string. | | sep | The delimiter according which to split the string. | None (the default value) means split according to any whitespace, | and discard empty strings from the result. | maxsplit | Maximum number of splits to do. 
| -1 (the default value) means no limit. | | Splits are done starting at the end of the string and working to the front. | | rstrip(self, chars=None, /) | Return a copy of the string with trailing whitespace removed. | | If chars is given and not None, remove characters in chars instead. | | split(self, /, sep=None, maxsplit=-1) | Return a list of the words in the string, using sep as the delimiter string. | | sep | The delimiter according which to split the string. | None (the default value) means split according to any whitespace, | and discard empty strings from the result. | maxsplit | Maximum number of splits to do. | -1 (the default value) means no limit. | | splitlines(self, /, keepends=False) | Return a list of the lines in the string, breaking at line boundaries. | | Line breaks are not included in the resulting list unless keepends is given and | true. | | startswith(...) | S.startswith(prefix[, start[, end]]) -> bool | | Return True if S starts with the specified prefix, False otherwise. | With optional start, test S beginning at that position. | With optional end, stop comparing S at that position. | prefix can also be a tuple of strings to try. | | strip(self, chars=None, /) | Return a copy of the string with leading and trailing whitespace removed. | | If chars is given and not None, remove characters in chars instead. | | swapcase(self, /) | Convert uppercase characters to lowercase and lowercase characters to uppercase. | | title(self, /) | Return a version of the string where each word is titlecased. | | More specifically, words start with uppercased characters and all remaining | cased characters have lower case. | | translate(self, table, /) | Replace each character in the string using the given translation table. | | table | Translation table, which must be a mapping of Unicode ordinals to | Unicode ordinals, strings, or None. | | The table must implement lookup/indexing via __getitem__, for instance a | dictionary or list. If this operation raises LookupError, the character is | left untouched. Characters mapped to None are deleted. | | upper(self, /) | Return a copy of the string converted to uppercase. | | zfill(self, width, /) | Pad a numeric string with zeros on the left, to fill a field of the given width. | | The string is never truncated. | | ---------------------------------------------------------------------- | Static methods defined here: | | __new__(*args, **kwargs) from builtins.type | Create and return a new object. See help(type) for accurate signature. | | maketrans(...) | Return a translation table usable for str.translate(). | | If there is only one argument, it must be a dictionary mapping Unicode | ordinals (integers) or characters to Unicode ordinals, strings or None. | Character keys will be then converted to ordinals. | If there are two arguments, they must be strings of equal length, and | in the resulting dictionary, each character in x will be mapped to the | character at the same position in y. If there is a third argument, it | must be a string, whose characters will be mapped to None in the result.
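###Markdown Two of the optional exercises earlier in this notebook (printing the indices of the vowels, and the `clean_string` function that replaces every non-letter by a space) were left without a body. The sketches below are one possible way to complete them, using only the comparisons and loops introduced in this chapter; they are additions, not the official solutions, and the cleaning function also covers the hint given for Exercise 6.3. ###Code
def index_vowels(text):
    # print the index and the letter for every vowel in the text
    for i in range(len(text)):
        if text[i] in "aeiou":
            print(i, ":", text[i])

index_vowels("apple")

def clean_string(text):
    # build a copy of text in which every character that is not a letter becomes a space
    cleaned = ""
    for ch in text:
        if ("a" <= ch <= "z") or ("A" <= ch <= "Z"):
            cleaned += ch
        else:
            cleaned += " "
    return cleaned

print(clean_string("ph@t l00t"))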
jupyter_notebooks/human.all_genes.ipynb
###Markdown Average dN/dS ###Code
# imports needed by the cells below
import glob

import numpy as np
import pandas as pd

# Define a function for calculating dN/dS that also handles the cases where dN and/or dS are zero.
def weird_division(df):
    # columns 5 and 6 of the downloaded ortholog tables hold dN and dS
    if df[5]==0 and df[6]==0:
        return 0
    elif df[5]==0:
        return 0
    elif df[6]==0:
        return np.NaN
    return df[5] / df[6]
###Output _____no_output_____ ###Markdown Import ortholog information, including dN and dS, which was downloaded from Ensembl 98. ###Code
df_list = []
for file in sorted(glob.glob('../results/Ensembl98_human/human_protein_coding_genes.*.txt')):
    species_code_name = (file[29:-4])
    # print(species_code_name)
    df = pd.read_csv(file, sep='\t', header=None, na_values=('ortholog_one2many', 'ortholog_many2many'), index_col=0)
    df = pd.DataFrame(df.dropna().drop_duplicates().apply(weird_division, axis=1), columns=[species_code_name+'_dNdS'])
    df_list.append(df.dropna().drop_duplicates())

# information of all human protein coding genes, which was downloaded from Ensembl98
info_df = pd.read_csv('../data/info.human_protein_coding_genes.tsv', sep='\t', header=0, index_col=0)
info_df.drop_duplicates(subset='Gene name', inplace=True) # Drop the duplicated gene names
###Output _____no_output_____ ###Markdown Now merge each of the mammalian species' dN/dS values against human onto the information of human protein-coding genes. ###Code
integrate_df = info_df.copy(deep=True)
for df in df_list:
    integrate_df = pd.merge(integrate_df, df, left_index=True, right_index=True, how='left')
integrate_df = integrate_df.iloc[:,2:].dropna(how='all') # delete genes with no dN/dS scores

#Feb 1 2020 bug fix
integrate_df = pd.merge(info_df, integrate_df, left_index=True, right_index=True, how='right')
###Output _____no_output_____ ###Markdown Calculate the statistics of each human protein-coding gene. ###Code
stats_df = integrate_df.iloc[:,2:].apply(pd.DataFrame.describe, axis=1)
###Output _____no_output_____ ###Markdown Save the tables.
###Code integrate_df.to_csv('../results/Ensembl98_human/human.92_species_dNdS.all_genes.tsv',sep='\t') stats_df.to_csv('../results/Ensembl98_human/human.dNdS_stats.all_genes.tsv',sep='\t') ###Output _____no_output_____ ###Markdown Statistics ###Code import heapq import scipy.stats as stats arr = stats_df['mean'].dropna().values ###Output _____no_output_____ ###Markdown Calculate the confidence interval of the median dN/dS score ###Code low = stats.binom.interval(alpha=.95,n=arr.shape[0],p=.5)[0] high = stats.binom.interval(alpha=.95,n=arr.shape[0],p=.5)[1] CI_low = heapq.nsmallest(low.astype(int),arr)[-1] CI_high = heapq.nsmallest(high.astype(int),arr)[-1] CI_low #lower bound of confidence interval CI_high #higher bound of confidence interval arr.shape #number of protein-coding genes with at least one species with valid dN/dS against human stats_df['mean'].median() # median of all human protein coding genes' average mammalian dN/dS ###Output _____no_output_____ ###Markdown Plotting ###Code import matplotlib import matplotlib.pyplot as plt import statsmodels.api as sm import seaborn as sns matplotlib.rcParams['figure.dpi']= 300 #make high quality figure # Creating a figure fig = plt.figure(figsize=(10,7.5)) # Size of a letter size paper in horizontal fig.suptitle('Distribution of dN/dS of All Human Protein-coding Genes', fontsize=14) # Setting subplot space grid = plt.GridSpec(nrows=1,ncols=1) # The subplot for distribution histogram distr_plot = fig.add_subplot(grid[:,:]) # Set up the bins for log scale x-axis, and get the centers bins=np.logspace(np.log10(0.001),np.log10(10), 100) bins_cntr = (bins[1:] + bins[:-1]) / 2 # Distribution Histograms counts, bin_edges, ignored = distr_plot.hist(arr, bins, histtype='stepfilled', alpha=0.3, label='dN/dS of protein-coding genes (med={0:.3f})'.format(np.median(arr))) # Log-normal Curve for Late Development Genes try: # calculate area of histograms (area under PDF should be 1) area_hist = ((bin_edges[1:] - bin_edges[:-1]) * counts).sum() shape, loc, scale = stats.lognorm.fit(arr) # pdf-values using cdf fit_log_cntr_ = stats.lognorm.cdf(bins, shape, loc=loc, scale=scale) fit_log_cntr = np.diff(fit_log_cntr_) # plot fitted and scaled PDFs into histogram distr_plot.plot(bins_cntr, fit_log_cntr * counts.sum(),'b-', label='lognormal fit of dN/dS distribution', linewidth=2) except ValueError: pass # Axis labels distr_plot.set_xlabel(xlabel='dN/dS') distr_plot.set_ylabel(ylabel='number of genes') distr_plot.set_xscale('log') distr_plot.legend(loc='best') fig.savefig('../figures/human.all_genes.pdf') fig.savefig('../figures/human.all_genes.eps') fig.savefig('../figures/human.all_genes.png') plt.close() ###Output _____no_output_____
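###Markdown A short note on the confidence interval computed above (explanation added here, not from the original analysis): for $n$ independent observations, the number of values lying below the true median follows a $\mathrm{Binomial}(n, 1/2)$ distribution, so a distribution-free, approximately 95% confidence interval for the median is given by two order statistics, $$\left[\,x_{(l)},\ x_{(u)}\,\right],\qquad (l,\,u)=\text{central } 95\%\ \text{bounds of } \mathrm{Binomial}(n,\,0.5),$$ which is what the combination of `stats.binom.interval` and `heapq.nsmallest(...)[-1]` extracts: the $l$-th and $u$-th smallest mean dN/dS values.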
E_Biostatistics_with_R/Playground/Multi-variate analysis.ipynb
###Markdown Multi-variate analysisIn this notebook, we will apply principal component analysis and compare different types of cluster analysis Installation of libraries and necessary softwareCopy the files _mmc5_vclust_in.csv_ , _MetaboIonsNormed.csv_ and _FcmClustPEst.R_ into the folder that contains this jupyter notebook or upload them to http://localhost:8888/treeInstall the necessary libraries (only needed once) by executing (shift-enter) the following cell: ###Code install.packages("DAAG", repos='http://cran.us.r-project.org') install.packages("MASS", repos='http://cran.us.r-project.org') if (!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager", repos='http://cran.us.r-project.org') BiocManager::install("Biobase") BiocManager::install("Mfuzz") install.packages("e1071", repos='http://cran.us.r-project.org') install.packages("matrixStats", repos='http://cran.us.r-project.org') ###Output _____no_output_____ ###Markdown Loading data and librariesThis requires that the installation above has been finished without error ###Code library(DAAG) library(MASS) library(Biobase) library(e1071) library(matrixStats) # load data file (you need to place the file into the same folder) ExampleData <- read.csv("ExampleFile.csv") MetabolomicsData <- read.csv("MetaboIonsNormed.csv") source("FcmClustPEst.R") ###Output _____no_output_____ ###Markdown Exercise 1We will use dimensionality reduction to simplify a given data set. For a more extensive description of PCA in R, see e.g. https://www.datacamp.com/community/tutorials/pca-analysis-rCarry out principal component analysis for the ```possum``` data. Rows with missing values need to be removed before. Plot the scores of the PCA with different colors for the locations where the possums were trapped (defined by ```site```). ###Code data(possum) A <- possum[,5:ncol(possum)] ## How many rows without missing values ## data.frame without missing values ## PCA ... ###Output _____no_output_____ ###Markdown Question I: How many percent of the variance are already described by principal component 1?_Answer_ Question II: Which are the most discriminating traits?_Answer_ Question III: Which sites (provide numbers) can be separated in the scoring plot of the PCA?_Answer_ Exercise 2We will now compare different types of cluster analyses, applied to a proteomics data set (phosphorylated peptides) and a transcriptomics data set.Carry out hierarchical clustering, k-means and fuzzy c-means on the table from the file "mmc5_vclust_in.csv" and the ```geneData``` data in R (use a cluster number of 10 for all) ###Code data("geneData") protData <- as.matrix(read.csv("mmc5_vclust_in.csv", row.names=1)) # heatmap here: heatmap(geneData, scale="row") ## example code for the geneData set # For the visualization copy the code from the script of the lecture scaled_geneData <- t(scale(t(geneData))) # this scales each row to have mean 0 and s.d. 
1 nclust <- 10 kmean.out <- kmeans(scaled_geneData,nclust) cm.out <- cmeans(scaled_geneData, nclust, m=1.1) par(mfrow=c(3,4)) for (c in 1:nclust) { # plot centroid plot(kmean.out$centers[c,], type="l", lwd=2, col=2, ylim=c(-4,4)) clustc <- scaled_geneData[kmean.out$cluster==c,] # plot genes apply(clustc, 1, lines, , col="#00000033") } par(mfrow=c(1,1)) ## fuzzy c-means clustering #cm.out$cluster par(mfrow=c(3,4)) for (c in 1:nclust) { plot(cm.out$centers[c,], type="l", lwd=2, col=2, ylim=c(-4,4), xlab="Condition", ylab="Expression pattern") # get members of cluster c c_indices <- cm.out$cluster==c if (sum(c_indices)>1) { # print(sum(c_indices)) clustc <- scaled_geneData[c_indices,] # get membership values, multiply by 100 and round -> number between 0..100 clustmem <- round(cm.out$membership[c_indices,c]*100) # color for each of 100 levels colors <- rainbow(100) for (m in 1:nrow(clustc)) { lines(clustc[m,], col=colors[clustmem[m]]) } } } par(mfrow=c(1,1)) gene1 <- c(1.2, 2, 1.9, 0.5, -0.5, -1) gene2 <- c(0.1, 0.2, 0.09, 0.05, -0.1, -0.2) plot(1:6, gene1 , type="b") points(1:6, gene2, col=2, type="b") plot(1:6, gene1/sd(gene1) , type="b") points(1:6, gene2/sd(gene2), col=2, type="b") ###Output _____no_output_____ ###Markdown Question I: Read the help describing ```geneData```. What does this dataset contain?_Answer_ Question II: Why should fuzzy c-means be superior to k-means?_Answer_ Question III: How many parameters are required for fuzzy c-means? How are they called?_Answer_ Question IV: Which difference do you see between all 3 clustering methods?_Answer_ Question V: What is a membership value?_Answer_ Question VI: Do you see any specific pattern in the proteomics data? What is the reason to see this behavior?_Answer_ Exercise 3Extract the columns corresponding to the first replicate of _protData_. Normalize the data to the median and again apply the cluster analysis (all from last exercise) on the resulting four-dimensional data set. ###Code # Show first lines of example file head(ExampleData) colnames(ExampleData) ExampleDataLog <- as.matrix(log2(ExampleData[,19:22])) # Normalization by median NormalizedData <- t(t(ExampleDataLog) - colMedians(ExampleDataLog,na.rm=T)) # remove rows with missing values for kmeans and cmeans NormalizedRedData <- NormalizedData[complete.cases(NormalizedData),] # heatmap here # kmeans + cmeans (10 clusters) StandardizedData <- t(scale(t(NormalizedRedData))) ###Output _____no_output_____ ###Markdown Question I: What does the function colMedians give?_Answer_ Question II: What do the row names of protData stand for?_Answer_ Question III: Is this data log-transformed? If yes, what tell us that it is already transformed?_Answer_ Question IV: How do we check whether the median normalization was correctly executed?_Answer_ Question V: Which samples are most similar and how does this show?_Answer_ Question VI: Why do we have to _scale_ the data before using k-means and fuzzy c-means?_Answer_ Exercise 4We will now look into the consequences of using different parameters of fuzzy c-means clustering. The fuzzifier will be automatically set to an optimal value which is much higher than previously used $m=1.1$.Carry out fuzzy c-means using the parameter estimation from the lecture on ```StandardizedData```. Compare the results to the ones in the exercise above. 
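###Markdown Background on the fuzzifier mentioned above (a standard definition added for reference, not something specific to `FcmClustPEst`): fuzzy c-means minimises $$J_m=\sum_{i=1}^{N}\sum_{c=1}^{C} u_{ic}^{\,m}\,\lVert x_i - v_c\rVert^{2},\qquad \sum_{c=1}^{C} u_{ic}=1,\quad 0\le u_{ic}\le 1,$$ where $u_{ic}$ is the membership value of feature $i$ in cluster $c$, $v_c$ is the cluster centre and $m>1$ is the fuzzifier. For $m$ close to 1 the memberships become nearly hard (k-means-like); larger $m$ spreads membership across clusters, which is why estimating $m$ from the data, as done below, changes the results compared to the fixed $m=1.1$ used earlier.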
###Code PExpr <- new("ExpressionSet",expr=as.matrix(StandardizedData)) parameters <- FcmClustPEst(PExpr, maxc = 25) # fuzzy c-means clustering with these here: ###Output _____no_output_____ ###Markdown Question I: Do the validation indices agree on the number of clusters?_Answer_ Question II: What are the main differences of the results between running fuzzy c-means clustering in the exercise above and here?_Answer_ Question III: What is the total number of clustered proteins when not considering proteins with max. membership value $>$ 0.5?_Answer_ Exercise 5We now will look into a metabolomics data set with strong temporal behavior and use a version of fuzzy c-means clustering that includes the variance of the replicates which is usually discardedCarry out hierarchical clustering on metabolomics data (paper: https://www.ncbi.nlm.nih.gov/pubmed/26373870) and test different distance measures. For that, check the help pages of ```heatmap``` and ```dist```.Load the file into VSClust (http://computproteomics.bmb.sdu.dk/Apps/VSClust) and carry out the analysis there (the app can become irreponsive while multiple users apply the analysis). Use the PCA plot to see whether you read the file with the correctly set number of replicates and conditions. Estimate the parameter values and then apply the variance-based clustering. ###Code # create the heatmap here: head(MetabolomicsData) rownames(MetabolomicsData) <- MetabolomicsData$X MetabolomicsDataM <- as.matrix(MetabolomicsData[,2:ncol(MetabolomicsData)]) heatmap(MetabolomicsDataM,cexRow = 0.2, cexCol= 0.5, distfun = function(x) dist(x,method = 'euclidean')) ###Output _____no_output_____ ###Markdown Question I: What are the main differences between heatmap and variance-sensitive clustering?_Answer_ Question II: Do you recognize the same groups?_Answer_ Question III: Why can the calculation of the heatmap take long?_Answer_ Question IV: Do the replicates of all 12 time points cluster together? If not, when do they fail to group and why do think this happens?_Answer_ Question V: Does this improve when using another distance measure?_Answer_ ###Code ?dist ###Output _____no_output_____
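###Markdown For Question V above, a reference note (added here): `dist` offers several metrics, for example the Euclidean distance $d(x,y)=\sqrt{\sum_j (x_j-y_j)^2}$ used in the cell above and the Manhattan distance $d(x,y)=\sum_j \lvert x_j-y_j\rvert$. Temporal metabolite or expression profiles are also often compared with a correlation-based distance, $d(x,y)=1-\mathrm{cor}(x,y)$, which groups features by the shape of their profile rather than by its magnitude; in `heatmap` such a measure can be supplied through the `distfun` argument, analogous to the Euclidean call shown above.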
lessons/DataEngineering/MLPipelines/custom_transformer.ipynb
###Markdown Create a Custom Transformer ###Code import nltk nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger']) import re import numpy as np import pandas as pd from nltk.tokenize import word_tokenize from nltk.stem import WordNetLemmatizer from sklearn.pipeline import Pipeline from sklearn.metrics import confusion_matrix from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer url_regex = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+' ###Output _____no_output_____ ###Markdown Implement the StartingVerbExtractor class ###Code class StartingVerbExtractor(BaseEstimator, TransformerMixin): def starting_verb(self, text): # tokenize by sentences sentence_list = for sentence in sentence_list: # tokenize each sentence into words and tag part of speech pos_tags = # index pos_tags to get the first word and part of speech tag first_word, first_tag = # return true if the first word is an appropriate verb or RT for retweet if first_tag in ['VB', 'VBP'] or first_word == 'RT': return True return False def fit(self, x, y=None): return self def transform(self, X): # apply starting_verb function to all values in X X_tagged = return pd.DataFrame(X_tagged) ###Output _____no_output_____ ###Markdown Run program to test ###Code def load_data(): df = pd.read_csv('corporate_messaging.csv', encoding='latin-1') df = df[(df["category:confidence"] == 1) & (df['category'] != 'Exclude')] X = df.text.values y = df.category.values return X, y def tokenize(text): detected_urls = re.findall(url_regex, text) for url in detected_urls: text = text.replace(url, "urlplaceholder") tokens = word_tokenize(text) lemmatizer = WordNetLemmatizer() clean_tokens = [] for tok in tokens: clean_tok = lemmatizer.lemmatize(tok).lower().strip() clean_tokens.append(clean_tok) return clean_tokens def model_pipeline(): pipeline = Pipeline([ ('features', FeatureUnion([ ('text_pipeline', Pipeline([ ('vect', CountVectorizer(tokenizer=tokenize)), ('tfidf', TfidfTransformer()) ])), ('starting_verb', StartingVerbExtractor()) ])), ('clf', RandomForestClassifier()) ]) return pipeline def display_results(y_test, y_pred): labels = np.unique(y_pred) confusion_mat = confusion_matrix(y_test, y_pred, labels=labels) accuracy = (y_pred == y_test).mean() print("Labels:", labels) print("Confusion Matrix:\n", confusion_mat) print("Accuracy:", accuracy) def main(): X, y = load_data() X_train, X_test, y_train, y_test = train_test_split(X, y) model = model_pipeline() model.fit(X_train, y_train) y_pred = model.predict(X_test) display_results(y_test, y_pred) main() ###Output _____no_output_____
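###Markdown One possible completion of the exercise above (a sketch, not the official solution), filling the blanks with NLTK's `sent_tokenize` and `pos_tag`. Note that the import cell at the top of this notebook does not import `BaseEstimator`, `TransformerMixin` or `FeatureUnion`, even though the class skeleton and `model_pipeline()` use them, so they are imported here as well; with that in place, `main()` should run end to end. ###Code
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import FeatureUnion

class StartingVerbExtractor(BaseEstimator, TransformerMixin):

    def starting_verb(self, text):
        # tokenize by sentences
        sentence_list = nltk.sent_tokenize(text)

        for sentence in sentence_list:
            # tokenize each sentence into words and tag part of speech
            pos_tags = nltk.pos_tag(word_tokenize(sentence))
            if not pos_tags:
                # guard against empty sentences
                continue

            # index pos_tags to get the first word and part of speech tag
            first_word, first_tag = pos_tags[0]

            # return true if the first word is an appropriate verb or RT for retweet
            if first_tag in ['VB', 'VBP'] or first_word == 'RT':
                return True

        return False

    def fit(self, x, y=None):
        return self

    def transform(self, X):
        # apply starting_verb function to all values in X
        X_tagged = pd.Series(X).apply(self.starting_verb)
        return pd.DataFrame(X_tagged)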
code/cleanText.ipynb
###Markdown Importing Dependencies for Cleaning ###Code import re import lxml import nltk import pandas as pd import pymongo from pymongo import MongoClient from bs4 import BeautifulSoup from nltk.tokenize import word_tokenize from nltk.tokenize import RegexpTokenizer from nltk.corpus import stopwords ###Output _____no_output_____ ###Markdown Removes HTML Syntax ###Code def decodeHTML(data): data = BeautifulSoup(data,"lxml").text return data ###Output _____no_output_____ ###Markdown Lowers the Entire Text ###Code def normaliseTextToLower(data): return data.lower() ###Output _____no_output_____ ###Markdown Splitting text and removing Punctuation ###Code def tokenizeAndPunctuationRemoval(data): tokenizedList=[] for token in RegexpTokenizer('\w+').tokenize(data): tokenizedList.append(token) return tokenizedList ###Output _____no_output_____ ###Markdown Removal of all Numbers ###Code def removeNumbers(dataList): dataListRefined = [] for token in dataList: if not (token.isnumeric()): dataListRefined.append(token) # print("HELO") return dataListRefined ###Output _____no_output_____ ###Markdown Removing Stop Words ###Code def removeStopWords(dataList): #stopwords are words like "the", "in" etc dataList = list(set(dataList)) stopWords = set(stopwords.words("english")) nonStoppedWords = list(token for token in dataList if token not in stopWords) convertToText = " ".join(nonStoppedWords) return convertToText ###Output _____no_output_____ ###Markdown Clean Data Function ###Code def cleanData(data): return (removeStopWords(removeNumbers(tokenizeAndPunctuationRemoval(normaliseTextToLower(decodeHTML(data)))))) ###Output _____no_output_____ ###Markdown Cleaning data stored in MongoDB ###Code mongo = MongoClient("mongodb://localhost:27017/") db = mongo.reddit #100 post data in srIndia posts = pd.DataFrame(list(db.rIndia.find())) print(posts) # print(posts["title"]) print("xxxxxxxxxxxxxxxxxxxxxxxxxxxx BREAK xxxxxxxxxxxxxxxxxxxxxxx") posts["title"] = posts["title"].apply(cleanData) posts["textBody"] = posts["title"].apply(cleanData) posts["comments"] = posts["title"].apply(cleanData) print(posts["title"]) print("xxxxxxxxxxxxxxxxxxxxxxxxxxxx BREAK xxxxxxxxxxxxxxxxxxxxxxx") print(posts["textBody"]) print("xxxxxxxxxxxxxxxxxxxxxxxxxxxx BREAK xxxxxxxxxxxxxxxxxxxxxxx") print(posts["comments"]) del posts["_id"] posts.to_csv('../data/cleansedData300.csv',index=False)#300 post data in cleansedData ###Output _id \ 0 5ea151689ebdc115e5417fab 1 5ea1516c9ebdc115e5417fac 2 5ea1516f9ebdc115e5417fad 3 5ea151729ebdc115e5417fae 4 5ea151759ebdc115e5417faf ... ... 2663 5ea161c39ebdc115e5418a12 2664 5ea161c49ebdc115e5418a13 2665 5ea161c69ebdc115e5418a14 2666 5ea161c79ebdc115e5418a15 2667 5ea161c89ebdc115e5418a16 title \ 0 Late Night Random Discussion Thread ! 1 Late Night Random Discussion Thread ! 2 Random Daily Discussion Thread - April 12, 202... 3 Late Night Random Discussion Thread ! 4 Late Night Random Discussion Thread ! ... ... 2663 Coronavirus outbreak Woman with no travel hist... 2664 PM Modi says Covid-19 does not see race, relig... 2665 30% of India’s Covid-19 positive caseload link... 2666 The nationwide lockdown has led to better air ... 2667 45-day old infant youngest COVID-19 casualty. ... url author \ 0 https://www.reddit.com/r/india/comments/g0lfy8... oxythebot 1 https://www.reddit.com/r/india/comments/fx8sbw... oxythebot 2 https://www.reddit.com/r/india/comments/fzpfrc... oxythebot 3 https://www.reddit.com/r/india/comments/fyi8g2... oxythebot 4 https://www.reddit.com/r/india/comments/fzyyju... oxythebot ... 
... ... 2663 https://m.mid-day.com/amp/articles/coronavirus... kingof-potatos 2664 https://m.hindustantimes.com/india-news/pm-mod... varun1102030 2665 https://theprint.in/health/30-of-indias-covid-... Slowbhai 2666 https://twitter.com/DiscoveryIN/status/1251801... aviakki1 2667 https://www.businesstoday.in/latest/trends/cor... DenseSpirit5 textBody flair \ 0 ^Beep ^Boop ^Bot, ^I ^am ^a ^bot! ^if ^any ^pr... Scheduled 1 ^Beep ^Boop ^Bot, ^I ^am ^a ^bot! ^if ^any ^pr... Scheduled 2 ^Beep ^Boop ^Bot, ^I ^am ^a ^bot! ^if ^any ^pr... Scheduled 3 ^Beep ^Boop ^Bot, ^I ^am ^a ^bot! ^if ^any ^pr... Scheduled 4 ^Beep ^Boop ^Bot, ^I ^am ^a ^bot! ^if ^any ^pr... Scheduled ... ... ... 2663 Coronavirus 2664 Coronavirus 2665 Coronavirus 2666 Coronavirus 2667 Coronavirus comments \ 0 My Dadi passed away today at 7 pm, she was not... 1 [deleted] I’m having a cough since morning. Fe... 2 My friends complaining about running out of da... 3 Every time I come to this thread I realise how... 4 Education minister of Maharashtra wants to tak... ... ... 2663 >The woman confirmed to mid-day that she had d... 2664 Oh the irony Covid-19 doesn't but you politici... 2665 And does that include contacts of Jamaat meet ... 2666 2667 😨😨😨😨 0 to 1 is a risky age when it comes to co... authors 0 Desi_Bojack_Horseman imfuckedforever None None... 1 None hitch44 BainganDrift loveonfireasian None... 2 bunnykumarxyz Desi_Bojack_Horseman None Meraxe... 3 BainganDrift desdrot Desi_Bojack_Horseman None... 4 None Meraxes373 captainmogambo sfwaccountfw ac... ... ... 2663 hauntin he_is_not_our_member 2664 pythonapster NutellaForSatella TWO-WHEELER-MAF... 2665 apestogetherstoned bookthiefj0 chowkidarchor B... 2666 2667 promiscuous_bhisma icicibank [2668 rows x 8 columns] xxxxxxxxxxxxxxxxxxxxxxxxxxxx BREAK xxxxxxxxxxxxxxxxxxxxxxx 0 late random discussion thread night 1 late random discussion thread night 2 daily random discussion april thread 15am 3 late random discussion thread night 4 late random discussion thread night ... 2663 outbreak whisked travel woman history bmc away... 2664 covid race see modi creed pm religion colour c... 2665 tablighi jamaat govt positive linked india cov... 2666 breathe thanks particulate air quality week he... 2667 infant covid nurses three day contact found te... Name: title, Length: 2668, dtype: object xxxxxxxxxxxxxxxxxxxxxxxxxxxx BREAK xxxxxxxxxxxxxxxxxxxxxxx 0 late random discussion thread night 1 late random discussion thread night 2 daily discussion random april thread 15am 3 late random discussion thread night 4 late random discussion thread night ... 2663 outbreak whisked travel woman history bmc away... 2664 covid race see modi creed pm religion colour c... 2665 tablighi jamaat govt positive linked india cov... 2666 breathe dropped air quality week heroesfromhom... 2667 casualty old positive infection infant child c... Name: textBody, Length: 2668, dtype: object xxxxxxxxxxxxxxxxxxxxxxxxxxxx BREAK xxxxxxxxxxxxxxxxxxxxxxx 0 late random discussion thread night 1 late random discussion thread night 2 daily discussion random april thread 15am 3 late random discussion thread night 4 late random discussion thread night ... 2663 outbreak whisked travel woman history bmc away... 2664 covid race see modi creed pm religion colour c... 2665 tablighi jamaat govt positive linked india cov... 2666 breathe dropped air quality week heroesfromhom... 2667 casualty old positive infection infant child c... Name: comments, Length: 2668, dtype: object
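###Markdown A quick sanity check of the cleaning pipeline (a minimal sketch: it assumes the cells above have been executed and that the NLTK stopwords corpus is available, e.g. via `nltk.download('stopwords')`; the sample string is made up purely for illustration):

```python
# Hand-written sample post, only for illustration.
sample = "<p>Delhi recorded 42 new COVID-19 cases on 14 April, officials said.</p>"

print(cleanData(sample))
# HTML tags are stripped, the text is lower-cased, punctuation and pure
# numbers are dropped, and stopwords such as "on" are removed. Because
# removeStopWords builds a set, duplicates are collapsed and the order of
# the surviving tokens is not preserved.
```

One consequence of the `set()` call in `removeStopWords` is that token order and term frequency are lost, which is acceptable for bag-of-words style features but would matter for sequence models.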
protopipe/benchmarks/notebooks/TRAINING/benchmarks_DL1_calibration.ipynb
###Markdown Image extraction **Author:** Dr. Michele Peresano (CEA-Saclay/IRFU/DAp/LEPCHE), 2021**Recommended datasample(s):** any simtel file**Data level(s):** simtel (raw data)**Description:**This notebook provides calibration benchmarks for any version of the *protopipe* pipeline.**Requirements and steps to reproduce:**It can be used with any camera and any image extractor from any simtel file of any production supported by *ctapipe > 0.11.0*.To get a filled notebook and reproduce these results,- get the necessary input files for 1st (or single) pass image extraction and optionally 2nd pass `ctapipe-process` with the proper configuration file, e.g.`ctapipe-process --config protopipe_CTAMARS_1stPass_DL1a.json``--input XXX.simtel.gz`- execute the notebook,`papermill benchmarks_DL1_calibration.ipynb results_benchmarks_DL1_calibration.ipynb`specifying each required parameter as ``-p name value`. To obtain the list of all available parameters add ``--help-notebook`.You can pretty-print it in HTML format with,`jupyter nbconvert results_benchmarks_DL1_calibration.ipynb``--to html --TagRemovePreprocessor.remove_cell_tags "remove_input" --no-input`**Comparison between *protopipe* and *CTAMARS***:- the simtel reference file is `gamma_20deg_180deg_run100__cta-prod3-demo-2147m-LaPalma-baseline.simtel.gz`- `calibscale` should be set to 0.92, but we need to take 2.5% off (under investigation)- the configuration files are provided with this notebook.**Development and testing:** As with any other part of _protopipe_ and being part of the official repository, this notebook can be further developed by any interested contributor. The execution of this notebook is not currently automatic, it must be done locally by the user _before_ pushing a pull-request.Please, strip the output before pushing. 
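The same run can also be scripted from Python through papermill's API; a minimal sketch is shown below (the file names are placeholders, while the parameter names are the ones exposed by the *Input data* cell of this notebook):

```python
import papermill as pm

# Equivalent of the papermill CLI call above; all paths are placeholders.
pm.execute_notebook(
    "benchmarks_DL1_calibration.ipynb",
    "results_benchmarks_DL1_calibration.ipynb",
    parameters={
        "benchmarks_config": "benchmarks.yaml",
        "analysis_name": "my_analysis",
        "config": "protopipe_CTAMARS_1stPass_DL1a.json",
        "input_file": "DL1a_1stPass.h5",
        "config_2ndPass": "protopipe_CTAMARS_2ndPass_DL1a.json",
        "input_file_2ndPass": "DL1a_2ndPass.h5",
        "provenance_file": "provenance.log",
    },
)
```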
Table of contents- [Correlation between reconstructed and true number of photoelectrons](Correlation-between-reconstructed-and-true-number-of-photoelectrons)- [Charge resolution](Charge-resolution)- [Average residual bias](Average-residual-bias)- [Charge resolution corrected for the average residual bias](Charge-resolution-corrected-for-the-average-residual-bias)- [RMS of charge resolution around 1](RMS-of-charge-resolution-around-1)- [Performance of 2nd pass](Performance-of-2nd-pass) - [Corrected charge resolution and average residual bias](Corrected-charge-resolution-and-average-residual-bias) - [RMS around 1 comparison between passes](RMS-around-1-comparison-between-passes) - [Comparison of charge resolution y-profiles between passes](Comparison-of-charge-resolution-y-profiles-between-passes)- [Single-pixels spectra and optimized cleaning thresholds](Single-pixels-spectra-and-optimized-cleaning-thresholds) - [Comparison between 1st or single passes](Comparison-between-1st-or-single-passes) - [Comparison between 2nd passes](Comparison-between-2nd-passes) - [Comparison between true and reconstructed spectra](Comparison-between-true-and-reconstructed-spectra)- [Charge resolution and bias for true signal pixels](Charge-resolution-and-bias-for-true-signal-pixels)- [Noise distribution](Noise-distribution) Imports[back to top](Table-of-contents) ###Code from pathlib import Path import json import yaml import warnings warnings.filterwarnings(action='ignore', message="invalid value encountered in true_divide") import uproot import numpy as np from scipy.stats import binned_statistic import astropy.units as u from astropy.table import Column, vstack, join import matplotlib.pyplot as plt from matplotlib.colors import LogNorm from matplotlib.ticker import FormatStrFormatter from matplotlib.pyplot import rc import matplotlib.style as style from cycler import cycler %matplotlib inline import ctapipe from ctapipe.instrument import SubarrayDescription try: from ctapipe.io import read_table except ImportError: from ctapipe.io.astropy_helpers import h5_table_to_astropy as read_table ###Output _____no_output_____ ###Markdown Functions[back to top](Table-of-contents) ###Code # TODO: move to protopipe.benchmarks.utils def raise_(ex): """Raise an exception as a statement. This is a general purpose raiser for cases such as a lambda function. Parameters ---------- ex: exception Python built-in exception to raise. """ raise ex # TODO: move to protopipe.benchmarks.utils def string_to_boolean(variables): """Convert True/False strings to booleans. Useful in case a specific use of the CLI doesn't allow to read booleans as booleans. Parameters ---------- variables: list of str Variables to check. """ def check_str(x): return x if type(x) == bool \ else True if x == "True" \ else False if x == "False" \ else raise_(ValueError(f"{x} is not a valid boolean.")) return list(map(check_str, variables)) # TODO: move to protopipe.benchmarks.utils def get_fig_size(ratio=None, scale=None): ratio = 4/3. 
if ratio is None else ratio scale = 1.0 if scale is None else scale height = 5 width = height * ratio return (width*scale, height*scale) def compute_weight_BTEL1010(true_energy): """Compute the weight from requirement B-TEL-1010-Intensity-Resolution.""" target_slope = -2.62 # this is the spectral slope as required by the B-TEL-1010 "Intensity Resolution" doc spec_slope = -2.0 # this is the spectral slope in the simtel files # each pixel of the same image (row of data table) needs the same weight weight = np.power(true_energy/200., target_slope - spec_slope) return weight def calc_bias(x_bin_edges, y_bin_edges, hist): """Calculate the average bias of charge resolution from 50 to 500 true photoeletrons. These limits are chosen in order to be safely away from saturation and from NSB noise. Parameters ---------- x_bin_edges : 1D array Bin edges in true photoelectrons. y_bin_edges : 1D array Bin edges in reconstructed/true photoelectrons. hist : 2D array The full histogram of reconstructed/true against true photoelectrons. Returns ------- bias : float Average bias of charge resolution from 50 to 500 true photoelectrons. """ min_edge_index = np.digitize(1.7, x_bin_edges) - 1 max_edge_index = np.digitize(2.7, x_bin_edges) proj = np.zeros(600) for i in range(min_edge_index, max_edge_index + 1): proj = proj + hist[i] y_bin_centers = 0.5*(y_bin_edges[1:] + y_bin_edges[:-1]) bias = 1./np.average(y_bin_centers, weights = proj) return bias def calc_rms(values, weights): """Root Mean Square around 1 as proposed from comparison with CTA-MARS. The input values are vertical slices of the 2D histogram showing the bias-corrected charge resolution. Parameters ---------- values : 1D array Values in reconstructed / true photoelectrons corrected for average bias. weights : 1D array Counts in a cell from the weigthed histogram. Returns ------- rms : float Root Mean Square of around 1 for a vertical slice. 
""" average = np.average(values, weights=weights) variance = np.average((values-average)**2, weights=weights) standard_deviation = np.sqrt(variance) a = np.power(standard_deviation,2) b = np.power(average-1,2) rms = np.sqrt(a+b) return rms def plot_spectrum(x, bins, total_entries, xrange, **kwargs): # make histogram hist, xbins = np.histogram(np.log10(x[x>0]), bins=bins, range=xrange) # plot cumulative histogram # each bin is divided by the total number of entries plt.semilogy(xbins[:-1], hist[::-1].cumsum()[::-1]/total_entries, **kwargs) x_values = 0.5 * (xbins[:-1] + xbins[1:]) y_values = hist[::-1].cumsum()[::-1]/total_entries return x_values, y_values def load_by_tel_id(filename = None, subarray = None, is_double_pass = False, filename2 = None): data = {} if filename is None: print("WARNING: input information is undefined!") raise ValueError else: # order by telescope size, largest first for tel_type in sorted( subarray.telescope_types, key=lambda t: -t.optics.equivalent_focal_length ): print(f"Loading data of {tel_type}...") simshowers = read_table(filename, "/simulation/event/subarray/shower") true_images = [] reco_images = [] for tel_id in subarray.get_tel_ids_for_type(tel_type): reco_images.append( read_table(filename, f"/dl1/event/telescope/images/tel_{tel_id:03d}") ) true_images.append( read_table( filename, f"/simulation/event/telescope/images/tel_{tel_id:03d}" ) ) reco_images = vstack(reco_images) true_images = vstack(true_images) # 1st Pass data[str(tel_type)] = join( reco_images, true_images, keys=["obs_id", "event_id", "tel_id"], join_type="left" ) # add simulated showers information data[str(tel_type)] = join( data[str(tel_type)], simshowers["obs_id", "event_id", "true_energy"], keys=["obs_id", "event_id"], join_type="left" ) # and add B-TEL-1010 weights true_energies = data[str(tel_type)]["true_energy"].to(u.GeV) w = compute_weight_BTEL1010(true_energies) n_pixels = tel_type.camera.geometry.n_pixels weights = Column([np.repeat(w[i], n_pixels) for i in range(len(w))]) # each pixel gets its weight data[str(tel_type)]["weights_B-TEL-1010"] = weights if is_double_pass: if filename2 is None: print("WARNING: some 2nd pass input file is undefined!") raise ValueError else: reco_images_2ndPass = [] for tel_id in subarray.get_tel_ids_for_type(tel_type): reco_images_2ndPass.append( read_table(filename2, f"/dl1/event/telescope/images/tel_{tel_id:03d}") ) reco_images_2ndPass = vstack(reco_images_2ndPass) # 2nd Pass data[str(tel_type)] = join( data[str(tel_type)], reco_images_2ndPass, keys=["obs_id", "event_id", "tel_id"], join_type="left" ) print("DONE.") return data def load_by_tel_type(filename = None, subarray = None, is_double_pass = False, filename2 = None): data = {} if filename is None: print("WARNING: input information is undefined!") raise ValueError else: # order by telescope size, largest first for tel_type in sorted( subarray.telescope_types, key=lambda t: -t.optics.equivalent_focal_length ): print(f"Loading data of {tel_type}...") simshowers = read_table(filename, "/simulation/event/subarray/shower") reco_images = read_table(filename, f"/dl1/event/telescope/images/{tel_type}") true_images = read_table(filename, f"/simulation/event/telescope/images/{tel_type}") # 1st Pass data[str(tel_type)] = join( reco_images, true_images, keys=["obs_id", "event_id", "tel_id"], join_type="left" ) # add simulated showers information data[str(tel_type)] = join( data[str(tel_type)], simshowers["obs_id", "event_id", "true_energy"], keys=["obs_id", "event_id"], join_type="left" ) # and add 
B-TEL-1010 weights true_energies = data[str(tel_type)]["true_energy"].to(u.GeV) w = compute_weight_BTEL1010(true_energies) n_pixels = tel_type.camera.geometry.n_pixels weights = Column([np.repeat(w[i], n_pixels) for i in range(len(w))]) # each pixel gets its weight data[str(tel_type)]["weights_B-TEL-1010"] = weights if is_double_pass: if filename2 is None: print("WARNING: some 2nd pass input file is undefined!") raise ValueError else: reco_images_2ndPass = read_table(filename2, f"/dl1/event/telescope/images/{tel_type}") # 2nd Pass data[str(tel_type)] = join( data[str(tel_type)], reco_images_2ndPass, keys=["obs_id", "event_id", "tel_id"], join_type="left" ) print("DONE.") return data def load_config(name): """Load YAML configuration file.""" try: with open(name, "r") as stream: cfg = yaml.load(stream, Loader=yaml.FullLoader) except FileNotFoundError as e: print(e) raise return cfg ###Output _____no_output_____ ###Markdown Input data[back to top](Table-of-contents) ###Code # Options analyses_directory = None # default read from benchmarks config analysis_name = None # default read from benchmarks config benchmarks_config = None # required output_directory = Path.cwd() # default output directory for plots load_CTAMARS = True # If True load CTAMARS reference data is_double_pass = True # If True this is a double-pass image extractor noise_rejection_level = 0.99 # calibscale = 1.0 # WARNING: should be set in SimtelEventSource # Inputs input_file = None # Single-pass or 1st pass data file input_file_2ndPass = None # 2nd pass data file (required if is_double_pass is True) config = None # Single-pass or 1st pass configuration file config_2ndPass = None # 2nd pass configuration file (required if is_double_pass is True) provenance_file = None # produced by ctapipe-process # Plotting use_seaborn = False # If True import seaborn and apply global settings from config file plots_scale = 1.0 # Scale the size of all figures by this multiplicative factor # Handle boolean variables (papermill reads them as strings) [load_CTAMARS, is_double_pass, use_seaborn] = string_to_boolean([load_CTAMARS, is_double_pass, use_seaborn]) if not benchmarks_config: raise FileNotFoundError("Benchmarks configuration file not found!") else: benchmarks_cfg = load_config(benchmarks_config) matplotlib_settings = benchmarks_cfg["matplotlib_settings"] single_plot_width = 4*2 single_plot_height = 3*2 double_plot_height = 9 double_plot_width = 16 ###Output _____no_output_____ ###Markdown Protopipe[back to top](Table-of-contents) ###Code # Input checks if config is None: raise ValueError("No configuration single-pass or 1st pass data is available.") else: config = Path(config) if input_file is None: raise ValueError("No single-pass or 1st pass data file is available.") else: input_file = Path(input_file) if is_double_pass: if not config_2ndPass: raise ValueError("This is a double pass image extractor, but no configuration file for the 2nd pass is available.") else: config_2ndPass = Path(config_2ndPass) if not input_file_2ndPass: raise ValueError("This is a double pass image extractor, but no 2nd pass data file is available.") else: input_file_2ndPass = Path(input_file_2ndPass) # Load configuration file and print some basic info if is_double_pass: config_to_use = config_2ndPass else: config_to_use = config with open(config_to_use) as config_file: A = json.load(config_file) try: split = A['DataWriter']['split_datasets_by'] except KeyError: split = 'tel_id' try: image_extractor_type = A['CameraCalibrator']['image_extractor_type'] except 
KeyError: image_extractor_type = A['Stage1ProcessorTool']['image_extractor_type'] try: split = A['Stage1ProcessorTool']['split_datasets_by'] except KeyError: split = 'tel_id' # Load provenance and save ctapipe version if provenance_file: with open(provenance_file, 'r') as p: for line in p.readlines(): if "ctapipe_version" in line: ctapipe_version = line.split('\"')[3] break else: raise ValueError("No provenance file is available.") subarray = SubarrayDescription.from_hdf(input_file) tel_types = {str(tel): tel.camera.geometry for tel in subarray.telescope_types}.keys() print(f"ctapipe version used to produce the input DL1a data: {ctapipe_version}\n") print("The calibration benchmarks will be produced for the following telescope types:\n") for tel_type in tel_types: print(f" - {tel_type}\n") print("Using the following options for calibration and image extraction:") try: print(A['CameraCalibrator']) except KeyError: print(f"image_extractor_type = {A['Stage1ProcessorTool']['image_extractor_type']}") try: print(A['SimTelEventSource']) CALIB_SCALE_from_SimTelEventSource = True CALIB_SCALE = A['SimTelEventSource']["calib_scale"] except KeyError: print("SimtelEventSource has default settings. CALIB_SCALE not set from ctapipe (aka CALIB_SCALE = 1.0)") print("Using CALIB_SCALE set in this notebook...") CALIB_SCALE = calibscale CALIB_SCALE_from_SimTelEventSource = False try: print(A[image_extractor_type]) except KeyError: print(f"Using {image_extractor_type} with default options") pass print(f"noise rejection level : {noise_rejection_level*100}") # We open all files and make 1 dictionary of pandas dataframes per camera if split == "tel_id": DL1a = load_by_tel_id(filename = input_file, subarray = subarray, is_double_pass=is_double_pass, filename2 = input_file_2ndPass) elif split == "tel_type": DL1a = load_by_tel_type(filename = input_file, subarray = subarray, is_double_pass=is_double_pass, filename2 = input_file_2ndPass) else: raise ValueError("--DataWriter.split_datasets_by is undefined") # We extract the necessary quantities true_pixel_values = {} # all pixels true_pixel_values_1stPass = {} # pixels for which reco > 0 (for log-log plots) reco_pixel_values = {} weights = {} if is_double_pass: reco_pixel_values_2ndPass = {} reco_pass_status = {} true_pixel_values_2ndPass = {} # pixels for which reco > 0 (for log-log plots) for tel_type in tel_types: true_pixel_values[tel_type] = DL1a[tel_type]["true_image"].ravel() weights[tel_type] = DL1a[tel_type]["weights_B-TEL-1010"].ravel() if CALIB_SCALE_from_SimTelEventSource: calib_scale_to_use_here = 1.0 else: calib_scale_to_use_here = scalibscale if is_double_pass: reco_pixel_values[tel_type] = DL1a[tel_type]["image_1"].ravel() / calib_scale_to_use_here try: selected_images = DL1a[tel_type][DL1a[tel_type]["passed_2"]>0] except KeyError: selected_images = DL1a[tel_type] true_pixel_values_2ndPass[tel_type] = selected_images["true_image"].ravel() reco_pixel_values_2ndPass[tel_type] = selected_images["image_2"].ravel() / calib_scale_to_use_here else: reco_pixel_values[tel_type] = DL1a[tel_type]["image"].ravel() / calib_scale_to_use_here true_pixel_values_1stPass[tel_type] = reco_pixel_values[tel_type]>0 ###Output _____no_output_____ ###Markdown CTAMARS[back to top](Table-of-contents) ###Code # CTAMARS data can be always loaded if needed, but it won't make sense if the # simtel file is not the same! 
if load_CTAMARS: try: indir_CTAMARS = Path(benchmarks_cfg["input_data_CTAMARS"]["parent_directory"]) / Path(benchmarks_cfg["input_data_CTAMARS"]["TRAINING/DL1"]) except (NameError, KeyError): raise ValueError("The input directory for CTAMARS data is undefined.") CTAMARSfile1 = "CTA_check_dl1a.root" path_mars_hists = Path(indir_CTAMARS/CTAMARSfile1) CTAMARSfile2 = "IntensityResolution_graphs.root" path_mars_rms = Path(indir_CTAMARS/CTAMARSfile2) # from CTA_check_dl1a.root try: file_hists = uproot.open(path_mars_hists) hist2 = file_hists["hist2_type00"] H2 = hist2.to_numpy() # from IntensityResolution_graphs file_rms = uproot.open(path_mars_rms) rms = {} rms["LST_LST_LSTCam"] = file_rms["IntensityResolution_LST"] rms["MST_MST_NectarCam"] = file_rms["IntensityResolution_MST"] except FileNotFoundError: raise FileNotFoundError("CTAMARS data files not found!") ###Output _____no_output_____ ###Markdown Plots and benchmarks[back to top](Table-of-contents) ###Code # First we check if a _plots_ folder exists already. # If not, we create it. plots_folder = Path(output_directory) / "plots" plots_folder.mkdir(parents=True, exist_ok=True) # Plot aesthetics settings scale = matplotlib_settings["scale"] if plots_scale is None else float(plots_scale) style.use(matplotlib_settings["style"]) cmap = matplotlib_settings["cmap"] rc('font', size=matplotlib_settings["rc"]["font_size"]) if matplotlib_settings["style"] == "seaborn-colorblind": # Change color order to have first ones more readable colors_order = ['#0072B2', '#D55E00', '#009E73', '#CC79A7', '#56B4E9', '#F0E442'] rc('axes', prop_cycle=cycler(color=colors_order)) use_seaborn = benchmarks_cfg["use_seaborn"] if not use_seaborn else False if use_seaborn: import seaborn as sns seaborn_settings = benchmarks_cfg["seaborn_settings"] sns.set_theme(context=seaborn_settings["theme"]["context"] if "context" in seaborn_settings["theme"] else "talk", style=seaborn_settings["theme"]["style"] if "style" in seaborn_settings["theme"] else "whitegrid", palette=seaborn_settings["theme"]["palette"] if "palette" in seaborn_settings["theme"] else None, font=seaborn_settings["theme"]["font"] if "font" in seaborn_settings["theme"] else "Fira Sans", font_scale=seaborn_settings["theme"]["font_scale"] if "font_scale" in seaborn_settings["theme"] else 1.0, color_codes=seaborn_settings["theme"]["color_codes"] if "color_codes" in seaborn_settings["theme"] else True ) sns.set_style(seaborn_settings["theme"]["style"], rc=seaborn_settings["rc_style"]) sns.set_context(seaborn_settings["theme"]["context"], font_scale=seaborn_settings["theme"]["font_scale"] if "font_scale" in seaborn_settings["theme"] else 1.0) ###Output _____no_output_____ ###Markdown Correlation between reconstructed and true number of photoelectrons[back to top](Table-of-contents) ###Code nbins_x = 400 nbins_y = 400 # order by telescope size, largest first for tel_type in tel_types: fig = plt.figure(figsize=(single_plot_width, single_plot_height), tight_layout=False) plt.title(tel_type) plt.xlabel("log10(true #p.e)") plt.ylabel("log10(reco #p.e)") signal_mask = np.where((true_pixel_values[tel_type] >0) & (reco_pixel_values[tel_type] > 0)) true = true_pixel_values[tel_type][signal_mask] reco = reco_pixel_values[tel_type][signal_mask] # This is just to count the real number of events given to the histogram h_no_weights = plt.hist2d(np.log10(true), np.log10(reco), bins=[nbins_x, nbins_y], range=[[-7.,5.],[-7.,5.]], norm=LogNorm()) # This histogram has the weights applied, # which chages the number of entries # This is 
also what is plot h = plt.hist2d(np.log10(true), np.log10(reco), bins=[nbins_x, nbins_y], range=[[-7.,5.],[-7.,5.]], norm=LogNorm(), cmap=plt.cm.rainbow, weights=weights[tel_type][signal_mask]) plt.plot([0, 4], [0, 4], color="black") # line showing perfect correlation plt.minorticks_on() plt.xticks(ticks=np.arange(-1, 5, 0.5), labels=["",""]+[str(i) for i in np.arange(0, 5, 0.5)]) plt.xlim(-0.2,4.2) plt.ylim(-4.,4.) plt.colorbar(h[3], ax=plt.gca() ) plt.grid() fig.savefig(f"./plots/calibration_recoPhesVsTruePhes_{tel_type}_protopipe_{analysis_name}.png") # Print some debug/benchmarking information print(f"Total number of entries in the plot of {tel_type} (before weighting) = {h_no_weights[0].sum()}") plt.show() ###Output _____no_output_____ ###Markdown Charge resolution[back to top](Table-of-contents) ###Code nbins_x = 800 nbins_y = 600 charge_resolution_histogram = {} # camera-wise un-zoomes histogram for calculating bias later on for tel_type in tel_types: fig = plt.figure(figsize=(single_plot_width, single_plot_height), tight_layout=False) plt.title(tel_type) plt.xlabel("log10(true #p.e)") plt.ylabel("reconstructed #p.e / true #p.e") signal_mask = np.where(true_pixel_values[tel_type] >0) true = true_pixel_values[tel_type][signal_mask] reco = reco_pixel_values[tel_type][signal_mask] h = plt.hist2d(np.log10(true), (reco/true), bins=[nbins_x, nbins_y], range=[[-7.,15.],[-2,13]], norm=LogNorm(), cmap=plt.cm.rainbow, weights=weights[tel_type][signal_mask], ) charge_resolution_histogram[tel_type] = h plt.plot([0, 4], [1, 1], color="black") # line showing perfect correlation plt.colorbar(h[3], ax=plt.gca() #, format=ticker.FuncFormatter(fmt) ) ax = plt.gca() ax.minorticks_on() ax.tick_params(axis='x', which='minor') plt.grid() plt.xlim(-0.2,4.2) plt.ylim(-2.,6.) fig.savefig(f"./plots/calibration_chargeResolution_1stPass_{tel_type}_protopipe_{analysis_name}.png") plt.show() ###Output _____no_output_____ ###Markdown Average residual bias[back to top](Table-of-contents) The average bias is calculated in the range from 50 to 500 p.e. to be safely away from saturation and from NSB noise.**NOTE:**In the analysis pipeline this bias is not yet corrected, so the definition of what 1 photoelectron is depends on this! 
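Concretely, the correction factor printed below is the inverse of the weighted mean of the reconstructed-to-true charge ratio in that range,

$$ c \,=\, \left\langle \frac{Q_\mathrm{reco}}{Q_\mathrm{true}} \right\rangle^{-1}_{\,50 \,\leq\, Q_\mathrm{true} \,\leq\, 500\ \mathrm{p.e.}} $$

so that multiplying the reconstructed charge by $c$ removes the average residual bias (this is exactly what `calc_bias` computes above).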
###Code corr = {} print(f"Correction factors for residual average bias : ") for tel_type in tel_types: corr[tel_type] = calc_bias(charge_resolution_histogram[tel_type][1], charge_resolution_histogram[tel_type][2], charge_resolution_histogram[tel_type][0]) print(f"- {tel_type} = {corr[tel_type]:.2f}") ###Output _____no_output_____ ###Markdown Charge resolution corrected for the average residual bias[back to top](Table-of-contents) ###Code nbins_x = 800 nbins_y = 600 corrected_charge_resolution_histogram = {} # here we store the histograms corrected for the bias to calculate RMS in the next cell for tel_type in tel_types: plt.figure(figsize=(single_plot_width, single_plot_height), tight_layout=False) plt.title(tel_type) plt.xlabel("log10(true #p.e)") plt.ylabel(f"{corr[tel_type]:.2f} * reconstructed #p.e / true #p.e") signal_mask = np.where(true_pixel_values[tel_type] >0) true = true_pixel_values[tel_type][signal_mask] reco = reco_pixel_values[tel_type][signal_mask] h = plt.hist2d(np.log10(true), corr[tel_type]*(reco/true), bins=[nbins_x, nbins_y], range=[[-7.,15.],[-2,13]], norm=LogNorm(), cmap=plt.cm.rainbow, weights=weights[tel_type][signal_mask], ) corrected_charge_resolution_histogram[tel_type] = h ax = plt.gca() plt.axvspan(np.log10(50.0), np.log10(500.0), ymin=ax.get_ylim()[0], ymax=ax.get_ylim()[1], alpha = 0.3, color = "grey", label = "bias calc range") plt.plot([0, 4], [1, 1], color="black", label="no bias") # line showing perfect correlation plt.colorbar(h[3], ax=plt.gca() #, format=ticker.FuncFormatter(fmt) ) ax.minorticks_on() ax.tick_params(axis='x', which='minor') plt.grid() plt.legend(loc="lower right") plt.xlim(-0.2,4.2) plt.ylim(-2.,6.) plt.savefig(f"./plots/calibration_chargeResolution_1stPass_biascorrected_{tel_type}_protopipe_{analysis_name}.png") plt.show() ###Output _____no_output_____ ###Markdown RMS of charge resolution around 1[back to top](Table-of-contents) **Warning:** CTAMARS data refers to the specific simtel file from the comparison! 
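For each bin in true photoelectrons, the quantity plotted below is the root mean square around 1 of the bias-corrected ratio,

$$ \mathrm{RMS}_1 \,=\, \sqrt{\sigma^2 + (\mu - 1)^2}, $$

where $\mu$ and $\sigma$ are the weighted mean and standard deviation of $c \cdot Q_\mathrm{reco}/Q_\mathrm{true}$ in that bin, as implemented in `calc_rms` above.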
###Code RMS_charge_resolution_1stPass = {} for tel_type in tel_types: if load_CTAMARS: fig = plt.figure(figsize=get_fig_size(ratio=16./9, scale=scale), tight_layout=False) plt.subplots_adjust(hspace=0.5) plt.suptitle(tel_type) plt.subplot(1,2,1) else: fig = plt.figure(figsize=get_fig_size(ratio=4./3, scale=scale), tight_layout=False) plt.title(tel_type) bin_edges_true = corrected_charge_resolution_histogram[tel_type][1] bincenters_true = 0.5*(bin_edges_true[1:]+bin_edges_true[:-1]) # mean value of each bin in true photoelectrons bin_edges_y = corrected_charge_resolution_histogram[tel_type][2] # bin edges in reconstructed photoelectrons bincenters_y = 0.5*(bin_edges_y[1:]+bin_edges_y[:-1]) # mean value of each bin in reconstructed photoelectrons # cycle over bins in true photoelectrons: values = [] errors = [] n = 0 ref = [] for true_bin in range(len(bincenters_true)): # if the bin center is over 3.2 if (bincenters_true[true_bin] > 3.2): break # stop # if it's before -0.5 if (bincenters_true[true_bin] < -0.5): continue # check the next bin # else proceed with the calculation # take the profile at this X bin along the Y axis profile_y = corrected_charge_resolution_histogram[tel_type][0][true_bin] # this is the sequence of weights (aka the heights of the 600 bins) # if there is data falling in this X-axis bin, if np.sum(profile_y): ref.append(true_bin) # get the resolution the way Abelardo does # to do this we need also the bin centers along the Y axis result = calc_rms(bincenters_y, profile_y) values.append(result) n = n + 1 else: # otherwise go to the next bin in true photoelectrons continue values = np.asarray(values) RMS_charge_resolution_1stPass[tel_type] = values # protopipe plt.plot(bincenters_true[ref], values, 'o', markersize=2, label="protopipe") plt.yscale("log") plt.ylim(1.e-2,10) plt.xlim(-0.2,4.2) plt.grid(which='both', axis='y') plt.grid(which='major', axis='x') plt.minorticks_on() plt.xlabel("log10(true #p.e)") plt.ylabel("Bias-corrected charge resolution RMS around 1") # CTA-MARS if load_CTAMARS: CTAMARS_X = rms[tel_type].member("fX") CTAMARS_Y = rms[tel_type].member("fY") CTAMARS_EX = rms[tel_type].member("fEX") CTAMARS_EY = rms[tel_type].member("fEY") plt.errorbar(x = CTAMARS_X, y = CTAMARS_Y, xerr = CTAMARS_EX, yerr = CTAMARS_EY, fmt="o", markersize=2, label="CTA-MARS") plt.legend() plt.subplot(1,2,2) plt.plot(CTAMARS_X, values/CTAMARS_Y) ax = plt.gca() xlims=ax.get_xlim() plt.hlines(1., xlims[0], xlims[1], label="expectation", color='r') plt.ylim(0, 2) plt.grid() plt.legend() plt.xlabel("log10(true #p.e)") plt.ylabel("ratio protopipe/CTA-MARS") plt.show() fig.savefig(f"./plots/calibration_chargeResolution_RMSaround1_1stPass_{tel_type}_protopipe_{analysis_name}.png") ###Output _____no_output_____ ###Markdown Performance of 2nd pass[back to top](Table-of-contents) ###Code if not is_double_pass: print("This is not a double-pass image extractor.") ###Output _____no_output_____ ###Markdown Corrected charge resolution and average residual bias[back to top](Table-of-contents) ###Code if is_double_pass: print(""" Ratio of reconstructed to true number of p.e. vs. true number of p.e (in a pixel) after the second-pass pulse integration. Note that there is a small population of pixels with ~0 reconstructed signal for a relatively large number of p.e. These must correspond to signals which arrive out of time relative to the bulk of the image (or, alternatively, to failed time fits). 
The average residual bias should be similar to that of 1st pass, since it is calculated between 50 and 500 true photoelectrons. """) else: print("This is not a double-pass image extractor.") # Calculate bias for 2nd pass (if any) if is_double_pass: nbins_x = 800 nbins_y = 600 charge_resolution_2ndPass_histogram = {} # camera-wise un-zoomes histogram for calculating bias later on for tel_type in tel_types: signal_mask = np.where((true_pixel_values_2ndPass[tel_type] >0)) true = true_pixel_values_2ndPass[tel_type][signal_mask] reco = reco_pixel_values_2ndPass[tel_type][signal_mask] h_2ndPass = np.histogram2d(np.log10(true), (reco/true), bins=[nbins_x, nbins_y], range=[[-7.,15.],[-2,13]], weights=weights[tel_type][signal_mask], ) charge_resolution_2ndPass_histogram[tel_type] = h_2ndPass corr_2ndPass = {} print(f"Correction factors for residual average bias (2nd pass) : ") for tel_type in tel_types: corr_2ndPass[tel_type] = calc_bias(charge_resolution_2ndPass_histogram[tel_type][1], charge_resolution_2ndPass_histogram[tel_type][2], charge_resolution_2ndPass_histogram[tel_type][0]) print(f"- {tel_type} = {corr_2ndPass[tel_type]:.2f}") else: print("This is not a double-pass image extractor.") ###Output _____no_output_____ ###Markdown RMS of charge resolution around 1[back to top](Table-of-contents) ###Code if is_double_pass: nbins_x = 800 nbins_y = 600 corrected_charge_2ndPass_resolution_histogram = {} # here we store the histograms corrected for the bias to calculate RMS in the next cell for tel_type in tel_types: fig = plt.figure(figsize=(single_plot_width, single_plot_height), tight_layout=False) plt.title(tel_type) plt.xlabel("log10(true #p.e)") plt.ylabel(f"{corr_2ndPass[tel_type]:.2f} * reconstructed #p.e / true #p.e") signal_mask = np.where((true_pixel_values_2ndPass[tel_type] >0)) true = true_pixel_values_2ndPass[tel_type][signal_mask] reco = reco_pixel_values_2ndPass[tel_type][signal_mask] h = plt.hist2d(np.log10(true), corr_2ndPass[tel_type]*(reco/true), bins=[nbins_x, nbins_y], range=[[-7.,15.],[-2,13]], norm=LogNorm(), cmap=plt.cm.rainbow, weights=weights[tel_type][signal_mask], ) corrected_charge_2ndPass_resolution_histogram[tel_type] = h ax = plt.gca() plt.axvspan(np.log10(50.0), np.log10(500.0), ymin=ax.get_ylim()[0], ymax=ax.get_ylim()[1], alpha = 0.3, color = "grey", label = "bias calc range") plt.plot([0, 4], [1, 1], color="black", label="no bias") # line showing perfect correlation plt.colorbar(h[3], ax=plt.gca() #, format=ticker.FuncFormatter(fmt) ) ax.minorticks_on() ax.tick_params(axis='x', which='minor') plt.grid(which="both", axis="both", visible=True) plt.legend(loc="lower right") plt.xlim(-0.2,4.2) plt.ylim(-2.,6.) 
fig.savefig(f"./plots/calibration_chargeResolution_2ndPass_biascorrected_{tel_type}_protopipe_{analysis_name}.png") else: print("This is not a double-pass image extractor.") ###Output _____no_output_____ ###Markdown RMS around 1 comparison between passes[back to top](Table-of-contents) ###Code if is_double_pass: for tel_type in tel_types: fig = plt.figure(figsize=(double_plot_width, double_plot_height), tight_layout=False) plt.subplots_adjust(hspace=0.4) plt.suptitle(tel_type) plt.subplot(1,2,1) bin_edges_true = corrected_charge_2ndPass_resolution_histogram[tel_type][1] bincenters_true = 0.5*(bin_edges_true[1:]+bin_edges_true[:-1]) # mean value of each bin in true photoelectrons bin_edges_y = corrected_charge_2ndPass_resolution_histogram[tel_type][2] # bin edges in reconstructed photoelectrons bincenters_y = 0.5*(bin_edges_y[1:]+bin_edges_y[:-1]) # mean value of each bin in reconstructed photoelectrons # cycle over bins in true photoelectrons: values = [] errors = [] n = 0 ref = [] for true_bin in range(len(bincenters_true)): # if the bin center is over 3.2 if (bincenters_true[true_bin] > 3.2): break # stop # if it's before -0.5 if (bincenters_true[true_bin] < -0.5): continue # check the next bin # else proceed with the calculation # take the profile at this X bin along the Y axis profile_y = corrected_charge_2ndPass_resolution_histogram[tel_type][0][true_bin] # this is the sequence of weights (aka the heights of the 600 bins) # if there is data falling in this X-axis bin, if np.sum(profile_y): ref.append(true_bin) # get the resolution the way Abelardo does # to do this we need also the bin centers along the Y axis result = calc_rms(bincenters_y, profile_y) values.append(result) n = n + 1 else: # otherwise go to the next bin in true photoelectrons continue values = np.asarray(values) # protopipe plt.plot(bincenters_true[ref], values, 'o', markersize=2, label="protopipe 2nd pass") plt.yscale("log") plt.ylim(0.02,6) plt.xlim(-0.2,4.2) plt.grid(which='both', axis='y') plt.grid(which='major', axis='x') plt.minorticks_on() plt.xlabel("log10(true #p.e)") plt.ylabel("Bias-corrected charge resolution RMS around 1") plt.plot(bincenters_true[ref], RMS_charge_resolution_1stPass[tel_type], 'o', markersize=2, label="protopipe 1st pass") plt.legend() plt.subplot(1,2,2) plt.plot(bincenters_true[ref], values/RMS_charge_resolution_1stPass[tel_type]) #ax = plt.gca() #xlims=ax.get_xlim() #plt.hlines(1., xlims[0], xlims[1], label="expectation", color='r') plt.ylim(0, 2) plt.grid() #plt.legend() plt.xlabel("log10(true #p.e)") plt.ylabel("Ratio 2nd-pass / 1st-pass") plt.show() fig.savefig(f"./plots/calibration_chargeResolution_RMSaround1_1stvs2ndPass_{tel_type}_protopipe_{analysis_name}.png") else: print("This is not a double-pass image extractor.") ###Output _____no_output_____ ###Markdown Comparison of charge resolution y-profiles between passes[back to top](Table-of-contents) ###Code if is_double_pass: for tel_type in tel_types: print(tel_type) first_pass = corrected_charge_resolution_histogram[tel_type] second_pass = corrected_charge_2ndPass_resolution_histogram[tel_type] bin_edges_true = first_pass[1][np.where((first_pass[1] > -0.5) & (first_pass[1] < 3.2))[0]] bincenters_true = 0.5*(bin_edges_true[1:]+bin_edges_true[:-1]) values = [0.1,0.4,0.7,1] # make the plot for the bin around here plt.figure(figsize=(15,10)) plt.suptitle(tel_type) plt.subplots_adjust(wspace=0.4, hspace=0.3) for i, value in enumerate(values): plt.subplot(int(np.sqrt(len(values))), int(np.sqrt(len(values))), i+1) true_index = 
np.digitize(value,second_pass[1]) while not (np.sum(second_pass[0][true_index][:]) and np.sum(second_pass[0][true_index][:])): true_index +=1 x_centers = 0.5 * (first_pass[2][1:]+ first_pass[2][:-1]) plt.plot(x_centers, second_pass[0][true_index][:], label=f"2nd pass", lw=3, alpha=0.5) plt.plot(x_centers, first_pass[0][true_index][:], label=f"1st pass") plt.legend() plt.xlabel("reco/true") plt.ylabel("pixel counts") plt.yscale("log") plt.xlim(-2,6.) plt.title(f"log10(true phe) = {second_pass[1][true_index]:.2f}") RMS_2 = calc_rms(x_centers, second_pass[0][true_index][:]) RMS_1 = calc_rms(x_centers, first_pass[0][true_index][:]) ratio = RMS_2 / RMS_1 print(f"Ratio 2ndPass vs 1st pass around {value} log10(true phe) = {ratio:.2f}") plt.show() else: print("This is not a double-pass image extractor.") ###Output _____no_output_____ ###Markdown Single-pixels spectra and optimized cleaning thresholds[back to top](Table-of-contents) ###Code if load_CTAMARS: CTAMARS_spectrum_1stPass_path = indir_CTAMARS / "pixspec_1st_pass.root" CTAMARS_spectrum_2ndPass_path = indir_CTAMARS / "pixspec_2nd_pass.root" CTAMARS_spectrum_1stPass = {} with uproot.open(CTAMARS_spectrum_1stPass_path) as file: CTAMARS_spectrum_1stPass["LST_LST_LSTCam"] = file["hPixAmpl_integral_type_0"] CTAMARS_spectrum_1stPass["MST_MST_NectarCam"] = file["hPixAmpl_integral_type_1"] CTAMARS_spectrum_2ndPass = {} with uproot.open(CTAMARS_spectrum_2ndPass_path) as file: CTAMARS_spectrum_2ndPass["LST_LST_LSTCam"] = file["hPixAmpl_integral_type_0"] CTAMARS_spectrum_2ndPass["MST_MST_NectarCam"] = file["hPixAmpl_integral_type_1"] ###Output _____no_output_____ ###Markdown **NOTE:**- If the image extractor uses a double-pass approach, the bias for the cut values calculated from the 2nd pass is calculated from 2nd pass reconstructed charges. Said this, image extractors such as ``TwoPassWindowSum`` are meant to be effective on weak charges, and the bias is anyway calculated between 50 and 500 true p.e. so the difference should be rather small if not insignificant with respect to the bias calculated from the 1st pass.- The optimized cleaning threshlods build upon the definition of what "1 photoelectron" means. If there is residual bias from the image extraction process and the pipeline _doesn't_ account for it _before_ the image cleaning process, then the **biased** values are the correct ones to be used throughout the analysis. The **unbiased** values are to be used if the residual bias is neglibile and in any case in the comparison between other pipelines/analyses when their bias is not known. 
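In practice the two definitions reported below differ only by the average-bias correction factor derived earlier: $Q^\mathrm{unbiased}_\mathrm{cut} = c \cdot Q^\mathrm{biased}_\mathrm{cut}$, with $c$ taken from the 2nd pass when the image extractor is a double-pass one. The same scaling applies to both the fixed-frequency (y = 1e-2) and the noise-rejection thresholds.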
###Code for tel_type in sorted(tel_types): fig = plt.figure(figsize=(1.5*single_plot_width, 1.5*single_plot_height), tight_layout=False) plt.title(tel_type) plt.xlabel("log10(#p.e)") plt.ylabel("Relative frequency of pixels with > x phe") if load_CTAMARS: x_bin_edges = CTAMARS_spectrum_1stPass[tel_type].to_numpy()[1] else: x_bin_edges = np.linspace(-1.0, 4.0, 251) xrange = [min(x_bin_edges), max(x_bin_edges)] true = true_pixel_values[tel_type] total_entries = len(true) log_mask = reco_pixel_values[tel_type] > 0 reco = reco_pixel_values[tel_type][log_mask] signal = reco[true[log_mask] > 0] noise = reco[true[log_mask] == 0] if is_double_pass: true_2 = true_pixel_values_2ndPass[tel_type] log_mask_2 = reco_pixel_values_2ndPass[tel_type] > 0 reco_2 = reco_pixel_values_2ndPass[tel_type][log_mask_2] signal_2 = reco_2[true_2[log_mask_2] > 0] noise_2 = reco_2[true_2[log_mask_2] == 0] # Plot 1st-Pass (or unique pass if not a double-pass image extractor) # CTAMARS if load_CTAMARS: plt.step(x = 0.5 * (x_bin_edges[1:] + x_bin_edges[:-1]), y = CTAMARS_spectrum_1stPass[tel_type].to_numpy()[0], where = "mid", label="CTAMARS 1st pass (noise+signal)") # protopipe X, Y = plot_spectrum(reco, x_bin_edges, total_entries, xrange, drawstyle="steps-post", alpha=0.7, label="protopipe 1st pass (noise+signal)", color='blue') # Plot only signal (by default disabled if data from CTAMARS is shown to avoid an overcrowded plot) plot_spectrum(signal, x_bin_edges, total_entries, xrange, drawstyle="steps-post", alpha=0.7, label="protopipe signal", ls="dotted", color='blue') # Plot only noise plot_spectrum(noise, x_bin_edges, total_entries, xrange, drawstyle="steps-post", alpha=0.7, label="protopipe noise", ls="dashed", color='blue') if is_double_pass: # Plot 2nd-pass # CTAMARS if load_CTAMARS: plt.step(x = 0.5 * (x_bin_edges[1:] + x_bin_edges[:-1]), y = CTAMARS_spectrum_2ndPass[tel_type].to_numpy()[0], where = "mid", label="CTAMARS 2nd pass (noise+signal)") # protopipe X, Y = plot_spectrum(reco_2, x_bin_edges, total_entries, xrange, drawstyle="steps-post", alpha=0.7, label="protopipe 2nd-pass (noise+signal)", color='red') # Plot only signal plot_spectrum(signal_2, x_bin_edges, total_entries, xrange, drawstyle="steps-post", alpha=0.7, label="protopipe 2nd pass signal", ls="dotted", color='red') # Plot only noise plot_spectrum(noise_2, x_bin_edges, total_entries, xrange, drawstyle="steps-post", alpha=0.7, label="protopipe 2nd pass noise", ls="dashed", color='red') if is_double_pass: quantile = np.quantile(noise_2, noise_rejection_level) else: quantile = np.quantile(noise, noise_rejection_level) plt.vlines(np.log10(quantile), ymin = plt.gca().get_ylim()[0], ymax = plt.gca().get_ylim()[1], color="black", label=f"{quantile:.2f} p.e. ({noise_rejection_level} noise rejection)", ls="dashed" ) plt.minorticks_on() plt.ylim(1.e-7, 2.) plt.xticks(np.arange(min(xrange), max(xrange)+1, 0.5)) ax = plt.gca() ax.xaxis.set_major_formatter(FormatStrFormatter('%.1f')) plt.grid() print(f"\nOptimized cleaning thresholds for {tel_type}") print(f"\nMethod #1: fixing y=1.e-2 (as it is done in CTAMARS)") y_values = Y idx = (np.abs(y_values - 1.e-2)).argmin() x = 0.5 * (x_bin_edges[1:] + x_bin_edges[:-1]) cut = 10**X[idx] print("- BIASED definition") print(f"{cut:.2f} phe") print("- UN-BIASED definition") if is_double_pass: print(f"{cut * corr_2ndPass[tel_type]:.2f} phe") else: print(f"{cut * corr[tel_type]:.2f} phe") plt.vlines(X[idx], ymin = plt.gca().get_ylim()[0], ymax = plt.gca().get_ylim()[1], color="grey", label=f"{10**X[idx]:.2f} p.e. 
(at fixed y=1.e-2)", ls="dashed" ) if load_CTAMARS: plt.vlines(np.log10(4.0), ymin = plt.gca().get_ylim()[0], ymax = plt.gca().get_ylim()[1], color="magenta", label="CTAMARS cut = 4 p.e. (~y=1.e-2)", ls="dashed" ) plt.legend(loc="best") print(f"\nMethod #2: noise rejection") print("- BIASED definition") print(f"{quantile:.2f} p.e. for {noise_rejection_level*100}% noise rejection") print("- UN-BIASED definition") if is_double_pass: print(f"{quantile*corr_2ndPass[tel_type]:.2f} p.e. for {noise_rejection_level*100}% noise rejection") else: print(f"{quantile*corr[tel_type]:.2f} p.e. for {noise_rejection_level*100}% noise rejection") plt.show() fig.savefig(f"./plots/calibration_SinglePixelSpectrum_{tel_type}_protopipe_{analysis_name}.png") ###Output _____no_output_____ ###Markdown Comparison between 1st or single passes[back to top](Table-of-contents) ###Code if load_CTAMARS: for tel_type in sorted(tel_types): fig = plt.figure(figsize=(double_plot_width, double_plot_height), tight_layout=False) plt.subplots_adjust(hspace=0.4) plt.suptitle(tel_type) plt.subplot(1, 2, 1) plt.xlabel("log10(#p.e)") plt.ylabel("Relative frequency of pixels with > x phe") if load_CTAMARS: x_bin_edges = CTAMARS_spectrum_1stPass[tel_type].to_numpy()[1] else: x_bin_edges = np.linspace(-1.0, 4.0, 251) xrange = [min(x_bin_edges), max(x_bin_edges)] true = true_pixel_values[tel_type] total_entries = len(true) log_mask = reco_pixel_values[tel_type] > 0 reco = reco_pixel_values[tel_type][log_mask] # Plot 1st-pass # CTAMARS plt.step(x = 0.5 * (x_bin_edges[1:] + x_bin_edges[:-1]), y = CTAMARS_spectrum_1stPass[tel_type].to_numpy()[0], where = "mid", label="CTAMARS 1st-pass (noise+signal)") # protopipe X, Y = plot_spectrum(reco, x_bin_edges, total_entries, xrange, drawstyle="steps-post", alpha=0.7, label="protopipe 1st-pass (noise+signal)", color='red') plt.legend() plt.grid() plt.subplot(1, 2, 2) plt.xlabel("log10(reconstructed #p.e)") plt.ylabel("Ratio protopipe/CTAMARS") ratio = Y/CTAMARS_spectrum_1stPass[tel_type].to_numpy()[0] plt.plot(0.5 * (x_bin_edges[1:] + x_bin_edges[:-1]), ratio) ax = plt.gca() xlims=ax.get_xlim() plt.hlines(1.0, xmin=min(xlims), xmax=max(xlims), label="expectation", ls="dashed", color="black") #plt.hlines(1.025, xmin=min(xlims), xmax=max(xlims), alpha=0.5, label="1.025", ls="dashed", color="red") plt.ylim(0.95, 1.05) plt.grid() plt.legend() plt.show() else: print("No reference data (CTAMARS) was not provided") ###Output _____no_output_____ ###Markdown Comparison between 2nd passes[back to top](Table-of-contents) ###Code if is_double_pass and load_CTAMARS: for tel_type in sorted(tel_types): fig = plt.figure(figsize=(double_plot_width, double_plot_height), tight_layout=False) plt.subplots_adjust(hspace=0.4) plt.suptitle(tel_type) plt.subplot(1, 2, 1) plt.xlabel("log10(#p.e)") plt.ylabel("Relative frequency of pixels with > x phe") if load_CTAMARS: x_bin_edges = CTAMARS_spectrum_2ndPass[tel_type].to_numpy()[1] else: x_bin_edges = np.linspace(-1.0, 4.0, 251) xrange = [min(x_bin_edges), max(x_bin_edges)] true = true_pixel_values[tel_type] total_entries = len(true) log_mask_2 = reco_pixel_values_2ndPass[tel_type] > 0 reco_2 = reco_pixel_values_2ndPass[tel_type][log_mask_2] # Plot 2nd-pass # CTAMARS plt.step(x = 0.5 * (x_bin_edges[1:] + x_bin_edges[:-1]), y = CTAMARS_spectrum_2ndPass[tel_type].to_numpy()[0], where = "mid", label="CTAMARS 2nd pass (noise+signal)") # protopipe X, Y = plot_spectrum(reco_2, x_bin_edges, total_entries, xrange, drawstyle="steps-post", alpha=0.7, label="protopipe 2nd-pass 
(noise+signal)", color='red') plt.grid() plt.legend() plt.subplot(1, 2, 2) plt.xlabel("log10(reconstructed #p.e)") plt.ylabel("Ratio protopipe/CTAMARS") ratio = Y/CTAMARS_spectrum_2ndPass[tel_type].to_numpy()[0] plt.plot(0.5 * (x_bin_edges[1:] + x_bin_edges[:-1]), ratio) ax = plt.gca() xlims=ax.get_xlim() plt.hlines(1.0, xmin=min(xlims), xmax=max(xlims), label="expectation", ls="dashed", color="black") #plt.hlines(1.025, xmin=min(xlims), xmax=max(xlims), alpha=0.5, label="1.025", ls="dashed", color="red") plt.ylim(0.95, 1.05) plt.grid() plt.legend() plt.show() else: print("This is not a double-pass image extractor OR CTAMARS data is unavailable.") ###Output _____no_output_____ ###Markdown Comparison between true and reconstructed spectra[back to top](Table-of-contents) **Note:** - the true spectum is in units where CALIB_SCALE=1.0, by definition,- here "reconstructed" means "what will be used in image cleaning", so if the image extractor is a double-pass one, the reconstructed quantity is the charge reconstructed by the second pass. ###Code for tel_type in sorted(tel_types): fig = plt.figure(figsize=(double_plot_width, double_plot_height), tight_layout=False) plt.subplots_adjust(hspace=0.4) plt.suptitle(tel_type) plt.subplot(1, 2, 1) true = true_pixel_values[tel_type] x_bin_edges = np.around(np.arange(0, 5.0, 0.02), 2) xrange = [min(x_bin_edges), max(x_bin_edges)] if is_double_pass: reco = reco_pixel_values_2ndPass[tel_type] signal = reco[true_pixel_values_2ndPass[tel_type] > 0] noise = reco[true_pixel_values_2ndPass[tel_type] == 0] else: reco = reco_pixel_values[tel_type] signal = reco[true_pixel_values_1stPass[tel_type] > 0] noise = reco[true_pixel_values_1stPass[tel_type] == 0] X_true, Y_true = plot_spectrum(true, x_bin_edges, len(true), # total entries xrange, drawstyle="steps-post", label="true", color='red') X_reco, Y_reco = plot_spectrum(reco, x_bin_edges, len(true), # total entries xrange, drawstyle="steps-post", label="reconstructed (noise + signal)", color='green') X_reco_signal, Y_reco_signal = plot_spectrum(signal, x_bin_edges, len(true), # total entries xrange, drawstyle="steps-post", label="reconstructed signal", color='blue') X_reco_noise, Y_reco_noise = plot_spectrum(noise, x_bin_edges, len(true), # total entries xrange, drawstyle="steps-post", label="reconstructed noise", color='orange') plt.legend() plt.grid() plt.ylim(1.e-7, 1.e0) plt.xlabel("log10(#p.e)") plt.ylabel("Relative frequency of pixels with > x phe") plt.subplot(1, 2, 2) plt.xlabel("log10(#p.e)") plt.ylabel("Ratio reconstructed signal / true") ratio = Y_reco_signal/Y_true plt.plot(0.5 * (x_bin_edges[1:] + x_bin_edges[:-1]), ratio) ax = plt.gca() xlims=ax.get_xlim() plt.hlines(1.0, xmin=min(xlims), xmax=max(xlims), ls="dashed", lw=2, color="green") plt.grid() plt.show() ###Output _____no_output_____ ###Markdown Charge resolution and bias for true signal pixels[back to top](Table-of-contents) ###Code x_bin_edges = np.around(np.arange(0, 5.0, 0.02), 2) x_bin_centers = 0.5 * (x_bin_edges[1:] + x_bin_edges[:-1]) xrange = [min(x_bin_edges), max(x_bin_edges)] for tel_type in sorted(tel_types): fig = plt.figure(figsize=(double_plot_width, double_plot_height), tight_layout=False) plt.subplots_adjust(hspace=0.4) plt.suptitle(tel_type) true = true_pixel_values[tel_type] signal_mask = true > 0 if is_double_pass: true = true_pixel_values_2ndPass[tel_type] signal_mask = true > 0 reco = reco_pixel_values_2ndPass[tel_type][signal_mask] else: reco = reco_pixel_values[tel_type][signal_mask] # CHARGE RESOLUTION plt.subplot(1, 2, 
1) resolution = binned_statistic(np.log10(true[signal_mask]), reco/true[signal_mask] - 1, statistic = lambda x: np.percentile(np.abs(x), 68), bins=x_bin_edges,) corr_resolution = binned_statistic(np.log10(true[signal_mask]), reco/true[signal_mask] - 1, statistic = lambda x: np.percentile(np.abs(x-np.mean(x)), 68), bins=x_bin_edges) plt.plot(x_bin_centers, resolution[0], "bo", label="bias included") plt.plot(x_bin_centers, corr_resolution[0], "ro", label="bias corrected") plt.hlines(0.0, plt.gca().get_xlim()[0], plt.gca().get_xlim()[1], ls="--", color="green") plt.grid(which="both", axis="both") plt.xlabel('log10(true #phe)') plt.ylabel('charge resolution as abs(reco/true - 1)_68%') plt.legend(loc="best") plt.ylim(-0.2, 1.5) # CHARGE BIAS plt.subplot(1, 2, 2) bias = binned_statistic(np.log10(true[signal_mask]), reco/true[signal_mask] - 1, statistic="mean", bins=x_bin_edges) plt.plot(x_bin_centers, bias[0], "bo") plt.hlines(0.0, plt.gca().get_xlim()[0], plt.gca().get_xlim()[1], ls="--", color="green") plt.grid(which="both", axis="both") plt.xlabel('log10(true #phe)') plt.ylabel('charge bias as mean(reco/true - 1)') plt.show() ###Output _____no_output_____ ###Markdown Noise distribution[back to top](Table-of-contents) If pedestals have been correctly subtracted, we should see that the distributions peak around 0.If the peak extraction method is ``LocalPeakWindowSum`` or ``TwoPassWindowSum`` _without_ the 2nd pass or similar, please take into account that those are biased methods: they will always catch the highest peak, regardless if due to signal or noise. ###Code fig = plt.figure(figsize=(single_plot_width, single_plot_height), tight_layout=False) for tel_type in sorted(tel_types): true = true_pixel_values[tel_type] noise_mask = (true == 0) if is_double_pass: true = true_pixel_values_2ndPass[tel_type] noise_mask = (true == 0) reco = reco_pixel_values_2ndPass[tel_type] else: reco = reco_pixel_values[tel_type] reconstructed_noise_pixels = reco[noise_mask] residual_pedestals = reconstructed_noise_pixels.mean() _, _, patches = plt.hist(reconstructed_noise_pixels, density=True, bins=100, alpha=0.5) plt.vlines(residual_pedestals, ymin=plt.gca().get_ylim()[0], ymax=plt.gca().get_ylim()[1], color=patches[0].get_facecolor(), label=f"{tel_type} mean = {residual_pedestals:.2f}") plt.vlines(0, ymin=plt.gca().get_ylim()[0], ymax=plt.gca().get_ylim()[0]) plt.legend() plt.ylim(0,1) plt.xlabel("Reconstructed #phe") plt.show() None ###Output _____no_output_____
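###Markdown As a purely illustrative aside (a toy sketch with arbitrary numbers, not part of the benchmark itself): the positive shift expected from a biased extractor can be reproduced by comparing a window placed at the maximum of a pure-noise trace with a window kept at a fixed position.

```python
import numpy as np

rng = np.random.default_rng(0)
n_traces, n_samples, width = 10_000, 30, 5

# Pure Gaussian noise traces in arbitrary units (pedestal already subtracted).
traces = rng.normal(0.0, 1.0, size=(n_traces, n_samples))

# Window sums for every possible start position of a window of the chosen width.
windows = np.stack(
    [traces[:, i:i + width].sum(axis=1) for i in range(n_samples - width + 1)],
    axis=1,
)

fixed = windows[:, 0]          # window at a fixed position: mean compatible with 0
peaked = windows.max(axis=1)   # window placed on the highest sum: positively biased

print(f"fixed-window mean charge : {fixed.mean():+.2f}")
print(f"peak-window mean charge  : {peaked.mean():+.2f}")
```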
_archiving/contribution/hyun0131-Sep12th2018/Factorization-Machines-Movielens.ipynb
###Markdown Movie recommendation on Amazon SageMaker with Factorization Machines Download ml-100k dataset ###Code !wget http://files.grouplens.org/datasets/movielens/ml-100k.zip !unzip -o ml-100k.zip %cd ml-100k !shuf ua.base -o ua.base.shuffled !head -10 ua.base.shuffled !head -10 ua.test import sagemaker import sagemaker.amazon.common as smac from sagemaker import get_execution_role from sagemaker.predictor import json_deserializer import boto3, csv, io, json import numpy as np from scipy.sparse import lil_matrix from collections import defaultdict ###Output _____no_output_____ ###Markdown Build training set and test set ###Code nbUsers = 943 nbMovies = 1682 # one hot encoding vector size nbFeatures = nbUsers + nbMovies # sample size nbRatingsTrain = 90570 nbRatingsTest = 9430 moviesByUser = defaultdict(list) with open('ua.base.shuffled', 'r') as f: samples = csv.reader(f, delimiter = '\t') for userId, movieId, rating, timestamp in samples: moviesByUser[str(int(userId)-1)].append(int(movieId)-1) def loadDataset(filename, lines, columns): # Features are one-hot encoded in a sparse matrix # lil_maxtrix: structure for constructing sparse matrices incrementally # lil: List of Lists Format # https://www.scipy-lectures.org/advanced/scipy_sparse/lil_matrix.html X = lil_matrix((lines, columns)).astype('float32') # Labels are stored in a vector Y = [] line = 0 with open(filename, 'r') as f: samples = csv.reader(f, delimiter = '\t') for userId, movieId, rating, timestamp in samples: X[line, int(userId) - 1] = 1 X[line, int(nbUsers) + int(movieId)-1] = 1 if int(rating) >= 4: Y.append(1) else: Y.append(0) line = line + 1 Y = np.array(Y).astype('float32') return X, Y # X_train: A training sparse matrix: 90,570 lines and 2,625 columns and this matrix is 99.92% sparse. 
# Y_train: A training label array: 90,570 ratings X_train, Y_train = loadDataset('ua.base.shuffled', nbRatingsTrain, nbFeatures) # X_test: A test sparse matrix: 9,430 lines and 2,625 columns # Y_test: A test label array: 9,430 ratings X_test, Y_test = loadDataset('ua.test', nbRatingsTest, nbFeatures) print(X_train.shape) print(Y_train.shape) assert X_train.shape == (nbRatingsTrain, nbFeatures) assert Y_train.shape == (nbRatingsTrain, ) zero_labels = np.count_nonzero(Y_train) print("Training labels: %d zeros, %d ones" % (zero_labels, nbRatingsTrain-zero_labels)) print(X_test.shape) print(Y_test.shape) assert X_test.shape == (nbRatingsTest, nbFeatures) assert Y_test.shape == (nbRatingsTest, ) zero_labels = np.count_nonzero(Y_test) print("Test labels: %d zeros, %d ones" % (zero_labels, nbRatingsTest-zero_labels)) ###Output _____no_output_____ ###Markdown Convert to protobuf and save to S3 ###Code # your bucket name bucket = 'hyun-data-kr' prefix = 'sagemaker/fm-movielens' train_key = 'train.protobuf' train_prefix = '{}/{}'.format(prefix, 'train3') test_key = 'test.protobuf' test_prefix = '{}/{}'.format(prefix, 'test3') output_prefix = 's3://{}/{}/output'.format(bucket, prefix) def writeDatasetToProtobuf(X, Y, bucket, prefix, key): buf = io.BytesIO() smac.write_spmatrix_to_sparse_tensor(buf, X, Y) buf.seek(0) obj = '{}/{}'.format(prefix, key) boto3.resource('s3').Bucket(bucket).Object(obj).upload_fileobj(buf) return 's3://{}/{}'.format(bucket, obj) train_data = writeDatasetToProtobuf(X_train, Y_train, bucket, train_prefix, train_key) test_data = writeDatasetToProtobuf(X_test, Y_test, bucket, test_prefix, test_key) print(train_data) print(test_data) print('Output: {}'.format(output_prefix)) ###Output _____no_output_____ ###Markdown Run training job ###Code from sagemaker.amazon.amazon_estimator import get_image_uri container = get_image_uri(boto3.Session().region_name, 'factorization-machines') fm = sagemaker.estimator.Estimator(container, get_execution_role(), train_instance_count = 1, train_instance_type = 'ml.c5.4xlarge', output_path = output_prefix, sagemaker_session = sagemaker.Session()) # num_factors: the common dimension for the user and item matrices fm.set_hyperparameters(feature_dim = nbFeatures, predictor_type = 'binary_classifier', mini_batch_size = 1000, num_factors = 64, epochs = 100) fm.fit({'train': train_data, 'test': test_data}) ###Output _____no_output_____ ###Markdown Deploy model ###Code fm_predictor = fm.deploy(instance_type = 'ml.c4.xlarge', initial_instance_count = 1) def fm_serializer(data): js = {'instances': []} for row in data: js['instances'].append({'features': row.tolist()}) return json.dumps(js) fm_predictor.content_type = 'application/json' fm_predictor.serializer = fm_serializer fm_predictor.deserializer = json_deserializer ###Output _____no_output_____ ###Markdown Run predictions ###Code result = fm_predictor.predict(X_test[1000:1010].toarray()) print(result) print (Y_test[1000:1010]) print(X_test[1000:1010]) print(Y_test[1000:1010]) fm_predictor.delete_endpoint() ###Output _____no_output_____
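###Markdown A possible next step is to push the full test set through the endpoint and compute a simple accuracy. The sketch below assumes the endpoint is still running (i.e. it would be executed *before* the `delete_endpoint()` call above) and that each element of `result['predictions']` carries a `predicted_label` field, as documented for the built-in factorization-machines algorithm in binary-classifier mode.

```python
import numpy as np

batch_size = 100  # keep each JSON payload well below the endpoint size limit
predicted = []
for start in range(0, X_test.shape[0], batch_size):
    batch = X_test[start:start + batch_size].toarray()
    result = fm_predictor.predict(batch)
    predicted += [p['predicted_label'] for p in result['predictions']]

predicted = np.array(predicted, dtype='float32')
print(f"Binary accuracy on ua.test: {(predicted == Y_test).mean():.3f}")
```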
Runge-Kutta MV.ipynb
###Markdown Define our coupled derivatives to integrate ###Code def dydx(x,y): # Set the derivatives # Our equation is d^2y/dx^2 = -y # So we can write # dydx = z # dzdx = -y # We will set y = y[0] # We will sey z = y[1] # Declare an array y_derivs = np.zeros(2) # Set dydx = x y_derivs[0] = y[1] # Set dzdx = -y y_derivs[1] = -1*y[0] # Here we have to return the arrays of dydx return y_derivs ###Output _____no_output_____ ###Markdown Define the 4th order RK method ###Code def rk4_mv_core(dydx,xi,yi,nv,h): # Declare k? arrays k1 = np.zeros(nv) k2 = np.zeros(nv) k3 = np.zeros(nv) k4 = np.zeros(nv) # Define x at 1/2 step x_ipoh = xi + 0.5*h # Define x at 1 step x_ipo = xi + h # Declare a temp y array y_temp = np.zeros(nv) # Get k1 values y_derivs = dydx(xi,yi) k1[:] = h*y_derivs[:] # Get k2 values y_temp[:] = yi[:] + 0.5*k1[:] y_derivs = dydx(x_ipoh,y_temp) k2[:] = h*y_derivs[:] # Get k3 values y_temp[:] = yi[:] + 0.5*k2[:] y_derivs = dydx(x_ipoh,y_temp) k3[:] = h*y_derivs[:] # Get k4 values y_temp[:] = yi[:] + k3[:] y_derivs = dydx(x_ipo,y_temp) k4[:] = h*y_derivs[:] # Advance y by step h yipo = yi + (k1 + 2*k2 + 2*k3 + k4)/6. # THIS IS AN ARRAY return yipo ###Output _____no_output_____ ###Markdown Define an adaptive step size driver for RK4 ###Code def rk4_mv_ad(dydx,x_i,y_i,nv,h,tol): # Define safety scale SAFETY = 0.9 H_NEW_FAC = 2.0 # Set a maximum number of iterations imax = 10000 # Set an iteration variable i = 0 # Create an error Delta = np.full(nv,2*tol) # Remember the step h_step = h # Adjust the step while(Delta.max()/tol > 1.0): # Estimate our error by taking one step of size h # vs. two steps of size h/2 y_2 = rk4_mv_core(dydx,x_i,y_i,nv,h_step) y_1 = rk4_mv_core(dydx,x_i,y_i,nv,0.5*h_step) y_11 = rk4_mv_core(dydx,x_i+0.5*h_step,y_1,nv,0.5*h_step) # Compute an error Delta = np.fabs(y_2 - y_11) # If error is too latge, take a smaller step if(Delta.max()/tol > 1.0): # Our error is too large, decrease the step h_step *= SAFETY * (Delta.max()/tol)**(-0.25) # Check the iteration if(i>imax): print("Too many iterations in rk4_mv_ad()") raise StopIteration("Ending after i = ",i) # Iterate i += 1 # Next time, try to take a bigger step h_new = np.fmin(h_step * (Delta.max()/tol)**(-0.9), h_step*H_NEW_FAC) # Return the answer, a new step, and the step we actually took return y_2, h_new, h_step ###Output _____no_output_____ ###Markdown Define a wrapper for RK4 ###Code def rk4_mv(dydx,a,b,y_a,tol): # dydx is the derivative wrt x # a is the lower bound # b is the upper bound # y_a are the boundary conditions # tol is the tolerance for integrating y # Define our starting step xi = a yi = y_a.copy() # An initial step size == make very small h = 1.0e-4 * (b-a) # Set a maximum number of iterations imax = 10000 # Set an iteration variable i = 0 # Set the number of coupled odes to the size of y_a nv = len(y_a) # Set the initial conditions x = np.full(1,a) y = np.full((1,nv),y_a) # Set a flag flag = 1 # Loop until we reach the right side while(flag): # Calculate y_i+1 yi_new, h_new, h_step = rk4_mv_ad(dydx,xi,yi,nv,h,tol) # Update the step h = h_new # Prevent an overshoot if(xi+h_step>b): # Take a smaller step h = b-xi # Recalculate y_i+1 yi_new, h_new, h_step = rk4_mv_ad(dydx,xi,yi,nv,h,tol) # Break flag = 0 # Update values xi += h_step yi[:] = yi_new[:] # Add the step to the arrays x = np.append(x,xi) y_new = np.zeros((len(x),nv)) y_new[0:len(x)-1,:] = y y_new[-1,:] = yi[:] del y y = y_new # Prevent too many iterations if(i>=imax): print("Maximum iterations reached.") raise StopIteration("Iteration 
number = ",i) # Iterate i += 1 # Output some information s = "i = %3d\tx = %9.8f\th = %9.8f\tb=%9.8f" % (i, xi, h_step, b) print(s) # Break if new xi is == b if(xi==b): flag = 0 # Return the answer return x,y ###Output _____no_output_____ ###Markdown Perform the integration ###Code a = 0.0 b = 2.0 * np.pi y_0 = np.zeros(2) y_0[0] = 0.0 y_0[1] = 1.0 nv = 2 tolerance = 1.0e-6 # Perform the integration x,y = rk4_mv(dydx,a,b,y_0,tolerance) ###Output _____no_output_____ ###Markdown Plot the result ###Code plt.plot(x,y[:,0],'o',label='y(x)') plt.plot(x,y[:,1],'o',label='dydx(x)') xx = np.linspace(0,2.0*np.pi,1000) plt.plot(xx,np.sin(xx),label='sin(x)') plt.plot(xx,np.cos(xx),label='cos(x)') plt.xlabel('x') plt.ylabel('y, dy/dx') plt.legend(frameon=False) ###Output _____no_output_____ ###Markdown Plot the error Notice that the errors will actually exceed our "tolerance" ###Code sine = np.sin(x) cosine = np.cos(x) y_error = (y[:,0]-sine) dydx_error = (y[:,1]-cosine) plt.plot(x, y_error, label="y(x) Error") plt.plot(x, dydx_error, label="dydx(x) Error") plt.legend(frameon=False) ###Output _____no_output_____
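###Markdown As an optional sanity check (not part of the original notebook), the adaptive RK4 driver can be compared against SciPy's general-purpose integrator on the same harmonic oscillator. This assumes SciPy is available; `solve_ivp` expects a right-hand side with signature `f(t, y)`, which matches our `dydx(x, y)`.
```python
# Compare our adaptive RK4 solution with scipy.integrate.solve_ivp at the same x values.
from scipy.integrate import solve_ivp

sol = solve_ivp(dydx, (a, b), y_0, rtol=1.0e-6, atol=1.0e-8, t_eval=x)

plt.plot(x, y[:, 0] - sol.y[0], label='y(x) difference')
plt.plot(x, y[:, 1] - sol.y[1], label='dydx(x) difference')
plt.xlabel('x')
plt.ylabel('our RK4 - solve_ivp')
plt.legend(frameon=False)
plt.show()
```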
prediction/multitask/pre-training/program synthesis/small_model.ipynb
###Markdown **Generate the program based on the question using codeTrans multitask training model**You can make free prediction online through this Link (When using the prediction online, you need to parse and tokenize the code first.) **1. Load necessry libraries including huggingface transformers** ###Code !pip install -q transformers sentencepiece from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline ###Output _____no_output_____ ###Markdown **2. Load the token classification pipeline and load it into the GPU if avilabile** ###Code pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_multitask", skip_special_tokens=True), device=0 ) ###Output /usr/local/lib/python3.6/dist-packages/transformers/models/auto/modeling_auto.py:852: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models. FutureWarning, ###Markdown **3 Give the question for generating the code, parse and tokenize it** ###Code question = "you are given an array of numbers a and a number b, compute the difference of elements in a and b" #@param {type:"raw"} import nltk nltk.download('punkt') from nltk.tokenize import word_tokenize def englishTokenizer(sentence): result = [] tokens = word_tokenize(sentence) for t in tokens: if( not len(t)>50): result.append(t) return ' '.join(result) tokenized_question = englishTokenizer(question) print("tokenized question: " + tokenized_question) ###Output tokenized question: you are given an array of numbers a and a number b , compute the difference of elements in a and b ###Markdown **4. Make Prediction** ###Code pipeline([tokenized_question]) ###Output Your max_length is set to 512, but you input_length is only 28. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)
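###Markdown To generate programs for several questions in a row, the tokenization and pipeline call can be wrapped in a small helper. This is a convenience sketch, not part of the original notebook; it reuses `pipeline` and `englishTokenizer` defined above, and the second example question below is purely illustrative.
```python
# Convenience wrapper: tokenize a natural-language question and return the generated code.
def generate_program(question: str) -> str:
    tokenized = englishTokenizer(question)
    output = pipeline([tokenized])
    # SummarizationPipeline returns a list of dicts with a 'summary_text' field
    return output[0]['summary_text']

questions = [
    "you are given an array of numbers a and a number b, compute the difference of elements in a and b",
    "write a function that returns the maximum element of a list",  # illustrative example
]
for q in questions:
    print(generate_program(q))
```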
UdemyPandas/2018-04-python-pandas-text-data.ipynb
###Markdown * split name into first and last* convert salary string to number ###Code # optimize department to save space chicago['Department'].nunique() chicago['Department'].count() # since the 32,062 rows contain just 35 unique values, this column is a candidate to change to a category chicago['Department'] = chicago['Department'].astype('category') chicago.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 32063 entries, 0 to 32062 Data columns (total 4 columns): Name 32062 non-null object Position Title 32062 non-null object Department 32062 non-null category Employee Annual Salary 32062 non-null object dtypes: category(1), object(3) memory usage: 784.4+ KB ###Markdown string methods``.lower().upper().title().len()`` ###Code 'HELLO WORLD'.title() 'HELLO WORLD'.lower() 'hello world'.upper() len('hello world') chicago.head() # chicago['Department'].title() # NO chicago['Department'].str.title().head(10) chicago['Department'] = chicago['Department'].str.title() chicago['Name'] = chicago['Name'].str.title() chicago['Position Title'] = chicago['Position Title'].str.title() chicago.head(10) ###Output _____no_output_____ ###Markdown .replace() method ###Code chicago.head() "Baier".replace('i','k') chicago.tail() # get rid of null values in original import chicago = pd.read_csv('data/chicago.csv').dropna(how='all') 
chicago['Department'] = chicago['Department'].astype('category') chicago.tail() # Now with Name as index chicago = pd.read_csv('data/chicago.csv', index_col='Name').dropna(how='all') chicago['Department'] = chicago['Department'].astype('category') chicago.tail() chicago.index chicago.index.str.strip().str.title() chicago.index = chicago.index.str.strip().str.title() chicago.head(10) chicago.columns chicago.columns = chicago.columns.str.upper() chicago.head() ###Output _____no_output_____ ###Markdown Splitting ###Code import pandas as pd chicago = pd.read_csv('data/chicago.csv').dropna(how='all') chicago['Department'] = chicago['Department'].astype('category') chicago.head() 'hello my name is Simon'.split(' ') # creates a python list # what are the ten most common last names? chicago['Name'].str.split(",").str.get(0).str.title().value_counts().head(10) chicago['Name'].head() chicago['Name'].str.split(",").head() chicago['Name'].str.split(",").str.get(1) chicago['Name'].str.split(",").str.get(0) chicago['Name'].str.split(",").str.get(0).str.title().head() chicago['Name'].str.split(",").str.get(0).str.title().value_counts().head() chicago['Position Title'].str.split(' ').str.get(0).value_counts() ###Output _____no_output_____
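###Markdown One way to finish the "split name into first and last" task listed at the top of this notebook is shown below. This is a sketch rather than part of the original course material; it assumes the `chicago` DataFrame loaded in the splitting section above, where names are stored as `"LAST, FIRST"`.
```python
# Split the Name column into separate last/first name columns.
# expand=True spreads the split pieces into their own DataFrame columns.
name_parts = chicago['Name'].str.split(',', expand=True)
chicago['Last Name'] = name_parts[0].str.strip().str.title()
chicago['First Name'] = name_parts[1].str.strip().str.title()

chicago[['Name', 'First Name', 'Last Name']].head()
```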
projects/project01/ds_project01.ipynb
###Markdown Data Science Project 01 - Inheritance of beak morphology in the Galapagos finches ---_This project was developed as part of the Data Science Certification provided by DataCamp_. José Oliveira da Cruz, Dec 2020.___Disclaimer___: The information about the dataset was provided by DataCamp and my role as student was to code the entire analysis and draw additional conclusions. Initial guidelines were provided by the instructor [Justin Bois](https://bois.caltech.edu/). Introduction--- The objective of the project is to apply statistical inference to uncover the evolution of beak morphology. Many of the important observations that led Charles Darwin to develop the theory of evolution were made in the Galápagos archipelago, particularly in the study of the small birds, called finches, that inhabit them. The islands are ideal for studying evolution because they are isolated so they do not have complicated effects from interactions with other species including humans. Furthermore, some of them are small, so entire populations can be monitored on a given island. Every year since 1973, Peter and Rosemary Grant of Princeton University have been spending several months of the year on the tiny volcanic cinder cone island of Daphne Major in the Galápagos.This island has two dominant ground finch species, _Geospiza fortis_ and _Geospiza scandens_. The Grants have monitored them every year, tagging them, making physiological measurements, taking samples for genetic sequencing, and more. In 2014, they published a book entitled "40 Years of Evolution: Darwin's Finches on Daphne Major Island". ###Code # Import Necessary Packages import numpy as np np.random.seed(0) import seaborn as sns import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown The dataset---Over the past 40 years, Peter and Rosemary Grant measured the beak morphology of 2 species of finch birds (_Geospiza fortis_ and _Geospiza scandens_) present on the Galápagos island of Daphne Major.The subset of the original dataset used in this notebook was curated and made available by DataCamp staff as part of this project. 
###Code # upload the dataset link_finch_1975 = 'https://assets.datacamp.com/production/repositories/470/datasets/eb228490f7d823bfa6458b93db075ca5ccd3ec3d/finch_beaks_1975.csv' finch_1975 = pd.read_csv(link_finch_1975) finch_1975['year'] = 1975 link_finch_2012 = 'https://assets.datacamp.com/production/repositories/470/datasets/b28d5bf65e38460dca7b3c5c0e4d53bdfc1eb905/finch_beaks_2012.csv' finch_2012 = pd.read_csv(link_finch_2012) finch_2012['year'] = 2012 # get the same column names finch_1975.columns = finch_2012.columns # Merge the 2 datasets together finch_main = finch_1975.append(finch_2012, ignore_index=True) finch_main.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 651 entries, 0 to 650 Data columns (total 5 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 band 651 non-null int64 1 species 651 non-null object 2 blength 651 non-null float64 3 bdepth 651 non-null float64 4 year 651 non-null int64 dtypes: float64(2), int64(2), object(1) memory usage: 25.6+ KB ###Markdown The dataset contains the following information: |Column|Obs||---|---||band|Individual identification||species |Species ID||blength | Beak Length in mm||bdepth | Beak Depth in mm||year|Year of Measurement| ###Code finch_main.head() ###Output _____no_output_____ ###Markdown PART 1 - Exploratory Data Analysis___Peter and Rosemary Grant reported that changes of beak geometry depending on the types of seeds available on the island, and they also noticed that there was some interbreeding with another major species on Daphne Major, Geospiza fortis. The first objective it to understand how the beak morphology (depth and length) of _Geospiza scandens_ changed over time. ###Code # The dataset contains 2 species of Geospiza finch_main.species.unique() fig, ax = plt.subplots() sns.swarmplot( x='year', y='bdepth', data=finch_main[finch_main.species.isin(['scandens'])], ax=ax, ) ax.set(ylabel='Beak depth (mm)', title="Beak Depth in $\it{G. scandens}$") ax.margins(0.2) ###Output _____no_output_____ ###Markdown By using simple swarmplots it is not clear whether there are changes in beak depth over the last 40 years. Let's look at the distributions instead. ###Code def ecdf( data, ): """Returns x, y to plot an Empirical Cumulative Distribution Function. Parameters ---------- data : Sequence of int or floats List of values to sort. Returns ------- x : np.array Sorted numpy array with data. y : np.array Percentiles. """ # sort data x = np.sort(data) # create linear space between 0 and 1. y = np.linspace(0, 1, len(data)) return x, y # Prepare data for ECDF x_1915, y_1975 = ecdf(finch_1975.bdepth[finch_1975.species.isin(['scandens'])]) x_2012, y_2012 = ecdf(finch_2012.bdepth[finch_2012.species.isin(['scandens'])]) fig, ax = plt.subplots() ax.plot(x_1915, y_1975, label='1975', marker='.', ls='none') ax.plot(x_2012, y_2012, label='2012', marker='.', ls='none') ax.set(ylabel='ECDF', xlabel='beak depth (mm)', title="Beak Depth in $\it{G. scandens}$") ax.margins(0.02) ax.legend(); ###Output _____no_output_____ ###Markdown It seems that the depth of the beak depth has increased over time but how confident are we about this measurements? Let's use bootstring calculate the 95% confidence interval on the mean change in beak depth. ###Code def draw_bs_reps(data, func, size=1): """Draw bootstrap replicates. Parameters ---------- data : sequence of int/floats func : funct The test statistic to be used. size : opt, int The number of replicates to be drawn. 
returns bs_replicates : np.array Array with bootstrap replicates. """ # Instantiate array to hold the results bs_replicates = np.empty(size) # iterate size times for i in range(size): # Get a new sample with replacement bs_sample = np.random.choice(data, size=len(data)) # Perform a test statistic and add data to the bs_replicates array bs_replicates[i] = func(bs_sample) return bs_replicates # Extract the data bd_scandens_1975 = finch_1975.bdepth[finch_1975.species.isin(['scandens'])] bd_scandens_2012 = finch_2012.bdepth[finch_2012.species.isin(['scandens'])] # Calculate the original beak mean difference diff_means = bd_scandens_2012.mean() - bd_scandens_1975.mean() # Draw replicates for each population bs_replicates_1975 = draw_bs_reps(bd_scandens_1975, np.mean, size=10000) bs_replicates_2012 = draw_bs_reps(bd_scandens_2012, np.mean, size=10000) # perform element-wise mean diff bs_diff_means = bs_replicates_2012 - bs_replicates_1975 # Get the 95% confidence interval ci95 = np.percentile(bs_diff_means, [2.5, 97.5]) # Print the results print(f'The original difference of means = {diff_means} mm') print(f'95% confidence interval = {ci95} mm') ###Output The original difference of means = 0.22622047244094645 mm 95% confidence interval = [0.06096452 0.38898364] mm ###Markdown Let's plot the histogram to have a picture of the bootstraping process. ###Code fig, ax = plt.subplots(figsize=(9, 3)) ax.hist(bs_diff_means, bins=100, density=True); ax.axvline(ci95[0], color='red', label='2.5 percentile') ax.axvline(ci95[1], color='red', label='97.5 percentile') ax.axvline(diff_means, color='black', label='original diff. of means') ax.set( ylabel='Density', xlabel='Difference of means (2012 - 1975)', title='Distribution of Bootstrap replicates' ) ax.legend(loc='upper left') ax.margins(0.2) ###Output _____no_output_____ ###Markdown It seems that indeed the beaks of _G. scandens_ have gotten deeper. But how likely is this conclusion to be true?We can use hypothesis testing to address this question. We will ask the following question:- What is the probability that we would get the observed difference in mean beak depth **if the means were the same**? __To test this hypothesis we can use bootstrapping with replacement.__If our questions was about whether the birds come from the same distribution, we could use a permutation test. ###Code # Calculate original difference in means original_diff_means = bd_scandens_2012.mean() - bd_scandens_1975.mean() # Merge the populations bd_scandens_merged = np.concatenate((bd_scandens_1975, bd_scandens_2012)) # Shift data to center around the full mean with variance unchanged shifted_bd_scandens_1975 = bd_scandens_1975 - bd_scandens_1975.mean()\ + bd_scandens_merged.mean() shifted_bd_scandens_2012 = bd_scandens_2012 - bd_scandens_2012.mean()\ + bd_scandens_merged.mean() # Create new replicates bs_replicates_scandens_1975 = draw_bs_reps(shifted_bd_scandens_1975, np.mean, size=10000) bs_replicates_scandens_2012 = draw_bs_reps(shifted_bd_scandens_2012, np.mean, size=10000) # Check the difference in means bs_replicates_diff_means = bs_replicates_scandens_2012 - bs_replicates_scandens_1975 # Calculate the probability of observing our original results if the means were the same pvalue = np.sum(bs_replicates_diff_means >= original_diff_means)/10000 ###Output _____no_output_____ ###Markdown Let's plot the histogram obtained from our bootstrap simulation. 
###Code fig, ax = plt.subplots(figsize=(9, 3)) ax.hist(bs_replicates_diff_means, bins=100, density=True) ax.axvline(original_diff_means, color='red', label='observed \ndiff means') ax.set( ylabel='Density', xlabel='bs_replicates_diff_means', title='Null Distribution of Bootstrap replicates' ) ax.legend() ax.margins(0.2) print(f"The probability of observing our results if the difference of means was the same is {pvalue}.") ###Output The probability of observing our results if the difference of means was the same is 0.0033. ###Markdown Conclusion of Part 1--- There is a statistically significant difference, thus strongly suggesting that beak depth is increasing over time. Specifically, we can observe that there was a 0.2 mm increase over a period of 37 years. PART 2 - Variation in beak shapes in _G. scandens_---Now that we know that there is an increase of beak depth over a time period of 37 years, let's have a look at how the overall shape is changing. ###Code scandens = finch_main[finch_main.species.isin(['scandens'])] fig, ax = plt.subplots() ax.plot(scandens[scandens.year==1975].blength, scandens[scandens.year==1975].bdepth, '.', label='1975') ax.plot(scandens[scandens.year==2012].blength, scandens[scandens.year==2012].bdepth, '.', label='2012') ax.set( ylabel='Beak depth (mm)', xlabel='Beak length (mm)', title='Variation in beak shape: $\itG.\ scandens$' ) ax.legend(loc='upper left') ###Output _____no_output_____ ###Markdown It seems that longer beaks are associated with shallower depths. Interestingly, this relationship appears to have changed by 2012: the distribution is shifted toward the upper-left corner, meaning that the birds' beaks are deeper and perhaps shorter, which suggests that the overall shape has changed. Let's investigate this hint further. First, we will model the relation of the beak depth and length with a linear model. ###Code # extract the data x_1975 = scandens[scandens.year == 1975].blength y_1975 = scandens[scandens.year == 1975].bdepth x_2012 = scandens[scandens.year == 2012].blength y_2012 = scandens[scandens.year == 2012].bdepth # instantiate and fit the model coef_1975, intercept_1975 = np.polyfit(x_1975, y_1975, 1) coef_2012, intercept_2012 = np.polyfit(x_2012, y_2012, 1) ###Output _____no_output_____ ###Code fig, ax = plt.subplots() # Plot Data ax.plot(scandens[scandens.year==1975].blength, scandens[scandens.year==1975].bdepth, '.', label='1975', color='blue') # Plot Regression of 1975 data x = [x_1975.min(), x_1975.max()] y = np.polyval((coef_1975, intercept_1975), x) ax.plot(x, y, '-', color='blue') # Plot 2012 Data ax.plot(scandens[scandens.year==2012].blength, scandens[scandens.year==2012].bdepth, '.', label='2012', color='orange') # Plot Regression of 2012 data x = [x_2012.min(), x_2012.max()] y = np.polyval((coef_2012, intercept_2012), x) ax.plot(x, y, '-', color='orange') ax.set( ylabel='Beak depth (mm)', xlabel='Beak length (mm)', title='Variation in beak shape: $\itG.\ scandens$' ) ax.legend(loc='upper left'); ###Output _____no_output_____ ###Markdown It seems that a linear regression can capture the relation between beak depth and length for both the 1975 and 2012 data. 
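As a quick sanity check (an addition, not part of the original analysis), we can look at the typical size of the residuals around each fitted line; it reuses the coefficients and data extracted in the cells above.
```python
# Root-mean-square residual around each fitted line.
resid_1975 = y_1975 - np.polyval((coef_1975, intercept_1975), x_1975)
resid_2012 = y_2012 - np.polyval((coef_2012, intercept_2012), x_2012)

print(f'RMS residual 1975: {np.sqrt(np.mean(resid_1975**2)):.3f} mm')
print(f'RMS residual 2012: {np.sqrt(np.mean(resid_2012**2)):.3f} mm')
```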
What is the confidence around the coefficients?__Bootstrap simulation around the coefficients of the linear regression to find the 95% confidence intervals.__ ###Code # Extract the data x_1975 = scandens[scandens.year == 1975].blength.to_numpy() y_1975 = scandens[scandens.year == 1975].bdepth.to_numpy() x_2012 = scandens[scandens.year == 2012].blength.to_numpy() y_2012 = scandens[scandens.year == 2012].bdepth.to_numpy() # Let's boot strap and build confidence intervals fig, ax = plt.subplots() # Plot Data ax.plot(scandens[scandens.year==1975].blength, scandens[scandens.year==1975].bdepth, '.', label='1975', color='blue') # Plot 2012 Data ax.plot(scandens[scandens.year==2012].blength, scandens[scandens.year==2012].bdepth, '.', label='2012', color='orange') size = 1000 index_array_1975 = np.arange(len(x_1975)) index_array_2012 = np.arange(len(x_2012)) bs_replicates_coef_1975 = np.empty(size) bs_replicates_intercept_1975 = np.empty(size) bs_replicates_coef_2012 = np.empty(size) bs_replicates_intercept_2012 = np.empty(size) for i in range(size): # boot strap indices idx_1975 = np.random.choice(index_array_1975, size=len(x_1975)) idx_2012 = np.random.choice(index_array_2012, size=len(x_2012)) # instantiate and fit the model bs_x_1975 = x_1975[idx_1975] bs_y_1975 = y_1975[idx_1975] bs_x_2012 = x_2012[idx_2012] bs_y_2012 = y_2012[idx_2012] # find coefs bs_coef_1975, bs_intercept_1975 = np.polyfit(bs_x_1975, bs_y_1975, 1) bs_coef_2012, bs_intercept_2012 = np.polyfit(bs_x_2012, bs_y_2012, 1) # Append the coefs to bs_replicates_coeff and intercept bs_replicates_coef_1975[i], bs_replicates_intercept_1975[i] = bs_coef_1975, bs_intercept_1975 bs_replicates_coef_2012[i], bs_replicates_intercept_2012[i] = bs_coef_2012, bs_intercept_2012 # Plot Regression x = [x_1975.min(), x_1975.max()] y = np.polyval((bs_coef_1975, bs_intercept_1975), x) ax.plot(x, y, '-', color='grey', alpha=0.02) # Plot Regression x = [x_2012.min(), x_2012.max()] y = np.polyval((bs_coef_2012, bs_intercept_2012), x) ax.plot(x, y, '-', color='grey', alpha=0.02) # Plot Regression x = [x_2012.min(), x_2012.max()] y = np.polyval((coef_2012, intercept_2012), x) ax.plot(x, y, '-', color='orange') # Plot Regression x = [x_1975.min(), x_1975.max()] y = np.polyval((coef_1975, intercept_1975), x) ax.plot(x, y, '-', color='blue') ax.set( ylabel='Beak depth (mm)', xlabel='Beak length (mm)', title='Variation in beak shape: $\itG.\ scandens$' ) ax.legend(); # Calculate the 95%CI around the slope and intercept coef_conf_int_1975 = np.percentile(bs_replicates_coef_1975, [2.5, 97.5]) coef_conf_int_2012 = np.percentile(bs_replicates_coef_2012, [2.5, 97.5]) intercept_conf_int_1975 = np.percentile(bs_replicates_intercept_1975, [2.5, 97.5]) intercept_conf_int_2012 = np.percentile(bs_replicates_intercept_2012, [2.5, 97.5]) print('1975: slope =', coef_1975, '95% confidence interval =', coef_conf_int_1975) print('1975: intercept =', intercept_1975, '95% confidence interval =', intercept_conf_int_1975) print('2012: slope =', coef_2012, '95% confidence interval =', coef_conf_int_2012) print('2012: intercept =', intercept_2012, '95% confidence interval =', intercept_conf_int_2012) ###Output 1975: slope = 0.4652051691605937 95% confidence interval = [0.33985406 0.58651249] 1975: intercept = 2.3908752365842263 95% confidence interval = [0.69884163 4.16062188] 2012: slope = 0.462630358835313 95% confidence interval = [0.33711123 0.60974767] 2012: intercept = 2.9772474982360198 95% confidence interval = [1.00934301 4.64788281] ###Markdown Beak length to depth 
ratioThe linear regressions showed interesting information about the beak geometry.The slope was the same in 1975 and 2012, suggesting that for every millimeter gained in beak length, the birds gained about half a millimeter in depth in both years.However, if we are interested in the shape of the beak, we want to compare the ratio of beak length to beak depth. ###Code # extract the data bl_1975 = scandens[scandens.year == 1975].blength bd_1975 = scandens[scandens.year == 1975].bdepth bl_2012 = scandens[scandens.year == 2012].blength bd_2012 = scandens[scandens.year == 2012].bdepth # Compute length-to-depth ratios ratio_1975 = bl_1975 / bd_1975 ratio_2012 = bl_2012 / bd_2012 # Compute means mean_ratio_1975 = np.mean(ratio_1975) mean_ratio_2012 = np.mean(ratio_2012) # Generate bootstrap replicates of the means bs_replicates_1975 = draw_bs_reps(ratio_1975, np.mean, 10000) bs_replicates_2012 = draw_bs_reps(ratio_2012, np.mean, 10000) # Compute the 99% confidence intervals conf_int_1975 = np.percentile(bs_replicates_1975, [0.5, 99.5]) conf_int_2012 = np.percentile(bs_replicates_2012, [0.5, 99.5]) # Plotting the histogram to visualize distribution of bootstrap replicates fig, ax = plt.subplots(figsize=(10, 5)) ax.hist(bs_replicates_1975, bins=100, density=True, label='1975'); ax.axvline(conf_int_1975[0], color='red', label='95% CI') ax.axvline(conf_int_1975[1], color='red') ax.hist(bs_replicates_2012, bins=100, density=True, label='2012', color='orange'); ax.axvline(conf_int_2012[0], color='red') ax.axvline(conf_int_2012[1], color='red') ax.set(ylabel='Density', xlabel='Length-to-Depth Ratio', ylim=(0, 60), title='$\itG.\ scandens$') ax.legend() ax.margins(0.2) fig.suptitle('Bootstrap replicates and 95% Confidence Intervals (CI)'); ###Output _____no_output_____ ###Markdown How different is the Length-to-Depth ratio? ###Code # Print the results print(f'1975: mean ratio = {mean_ratio_1975: 0.5f} | 95% CI = {conf_int_1975}') print(f'2012: mean ratio = {mean_ratio_2012: 0.5f} | 95% CI = {conf_int_2012}') # plot data with DI fig, ax = plt.subplots(figsize=(2, 4)) ax.plot(['1975', '2012'], [mean_ratio_1975, mean_ratio_2012], marker='.', ls='none') ax.fill_between( ['1975'], [conf_int_1975[0]], [conf_int_1975[1]], color='b', alpha=0.5, ) ax.fill_between( ['2012'], [conf_int_2012[0]], [conf_int_2012[1]], color='b', alpha=0.5, ) ax.set(ylabel='Length-to-Depth Ratio', xlabel='year') ###Output _____no_output_____ ###Markdown The mean beak length-to-depth ratio decreased by about 0.1, or 7%, from 1975 to 2012. The 99% confidence intervals are not close to overlapping, so the beak shape changed during this time period. PART 3 - Measuring Heritability___What is driving the beak shapping in _G. scadens_. One of the possible explanations is the breeding with another species of finch birds: _G. fortis_. This putative breeding may cause the _G. scandens_ to inherit some features from the _G. fortis_. If this is true, then how strong parental traits (ie beak depth) are passed on to offspring? DataThe subdataset used in this part was provided by datacamp. ###Code url = 'https://raw.githubusercontent.com/joseferncruz/datascience_projects/master/notebooks/project01/data/finch_par_off.csv' # Download dataset from github finch_par_off = pd.read_csv(url, index_col=0) finch_par_off.head() ###Output _____no_output_____ ###Markdown `bd_parent_scandens`, `bd_parent_fortis`: average beak depth (in mm) of two parents of the species _G. scandens_ and _G. 
fortis_, respectively; `bd_offspring_scandens`, `bd_offspring_fortis`: average beak depth of the offspring of the respective parents. ###Code # Extract data into individual variables bd_parent_scandens = finch_par_off[ (finch_par_off['species']=='scandens') & (finch_par_off['generation']=='parental') ].bdepth.to_numpy() bd_offspring_scandens = finch_par_off[ (finch_par_off['species']=='scandens') & (finch_par_off['generation']=='offspring') ].bdepth.to_numpy() bd_parent_fortis = finch_par_off[ (finch_par_off['species']=='fortis') & (finch_par_off['generation']=='parental') ].bdepth.to_numpy() bd_offspring_fortis = finch_par_off[ (finch_par_off['species']=='fortis') & (finch_par_off['generation']=='offspring') ].bdepth.to_numpy() ###Output _____no_output_____ ###Markdown What is the relation between beak depth in offspring and parents? ###Code fig, ax = plt.subplots() ax.plot( bd_parent_scandens, bd_offspring_scandens, marker='.', ls='none', alpha=0.5, label='G. scandens' ) ax.plot( bd_parent_fortis, bd_offspring_fortis, marker='.', ls='none', alpha=0.5, label='G. fortis' ) ax.set( ylabel='offspring beak depth (mm)', xlabel='parental beak depth (mm)', ) ax.legend() ax.margins(0.1) ###Output _____no_output_____ ###Markdown It appears as though there is a stronger correlation in _G. fortis_ than in _G. scandens_. This suggests that beak depth is more strongly inherited in _G. fortis_. We can quantify this correlation using the Pearson correlation coefficient and pairs bootstrapping to get the confidence interval. ###Code def draw_bs_pairs(x, y, func, size=1): """Perform pairs bootstrap for a single statistic test. Parameters ---------- x : list, np.array y : list, np.array func : function The function used as test statistic size : int Number of bootstrap iterations Returns ------- bs_replicates : np.array Array of length size with values of the func applied to the bootstrap replicates. """ # Set up array of indices to sample from: inds inds = np.arange(len(x)) # Initialize replicates: bs_replicates bs_replicates = np.empty(size) # Generate replicates for i in range(size): # bootstrap indices bs_inds = np.random.choice(inds, size=len(inds)) # Extract the resampled pairs bs_x, bs_y = x[bs_inds], y[bs_inds] # Calculate test statistic and append result bs_replicates[i] = func(bs_x, bs_y) return bs_replicates def pearson_r(data1, data2): """Returns the Pearson correlation coefficient. Parameters ---------- data1 : list, np.array data2 : list, np.array Returns ------- Pearson correlation coefficient """ return np.corrcoef(data1, data2)[0][1] # Compute the Pearson correlation coefficients r_scandens = pearson_r(bd_parent_scandens, bd_offspring_scandens ) r_fortis = pearson_r(bd_parent_fortis, bd_offspring_fortis ) # Acquire 1000 bootstrap replicates of Pearson r bs_replicates_scandens = draw_bs_pairs(bd_parent_scandens, bd_offspring_scandens, pearson_r, size=1000 ) bs_replicates_fortis = draw_bs_pairs(bd_offspring_fortis, bd_parent_fortis, pearson_r, size=1000 ) # Compute 95% confidence intervals conf_int_scandens = np.percentile(bs_replicates_scandens, [2.5, 97.5]) conf_int_fortis = np.percentile(bs_replicates_fortis, [2.5, 97.5]) # Print results print('G. scandens:', r_scandens, conf_int_scandens) print('G. fortis:', r_fortis, conf_int_fortis) ###Output G. scandens: 0.4117063629401258 [0.41170636 0.41170636] G. fortis: 0.7283412395518486 [0.72834124 0.72834124] ###Markdown It is clear from the confidence intervals that beak depth of the offspring of G. 
fortis parents is more strongly correlated with their offspring than their G. scandens counterparts. Measuring heritabilityThe Pearson correlation coefficient is the ratio of the covariance to the geometric mean of the variances of the two data sets. This is a measure of the correlation between parents and offspring, but might not be the best estimate of heritability.It makes more sense to define heritability as the ratio of the covariance between parent and offspring to the variance of the parents alone. Let's stimate the heritability and perform a pairs bootstrap calculation to get the 95% confidence interval. ###Code def heritability(parents, offspring): """Compute the heritability from parent and offspring samples. Parameters ---------- parents : list, np.array offspring : list, np.array Retuns ------ float covariance(parents, offspring) / variance(parents) """ covariance_matrix = np.cov(parents, offspring) return covariance_matrix[0][1] / covariance_matrix[0][0] # Compute the heritability heritability_scandens = heritability(bd_parent_scandens, bd_offspring_scandens) heritability_fortis = heritability(bd_parent_fortis, bd_offspring_fortis) # Acquire 1000 bootstrap replicates of heritability replicates_scandens = draw_bs_pairs(bd_parent_scandens, bd_offspring_scandens, heritability, size=1000) replicates_fortis = draw_bs_pairs(bd_parent_fortis, bd_offspring_fortis, heritability, size=1000) # Compute 95% confidence intervals conf_int_scandens = np.percentile(replicates_scandens, [2.5, 97.5]) conf_int_fortis = np.percentile(replicates_fortis, [2.5, 97.5]) # Print results print('Heritability Measurements') print(f'G. scandens: {heritability_scandens: 0.2f} | 95% CI: {conf_int_scandens}') print(f'G. fortis: {heritability_fortis: 0.2f} | 95% CI: {conf_int_fortis}') ###Output Heritability Measurements G. scandens: 0.55 | 95% CI: [0.54853409 0.54853409] G. fortis: 0.72 | 95% CI: [0.72290519 0.72290519] ###Markdown It seems that features from G. fortis are strong passed into offspring when compared to G. scandens. The heritability of beak depth in G. scandens seems low (~0.55). It could be that this observed heritability was just achieved by chance and beak depth is actually not really heritable in the species. Let's test this hypothesis using a __permutation test__. ###Code # Initialize array of replicates: perm_replicates perm_replicates = np.empty(10000) # Draw replicates for i in range(10000): # Permute parent beak depths bd_parent_permuted = np.random.permutation(bd_parent_scandens) perm_replicates[i] = heritability(bd_parent_permuted, bd_offspring_scandens) # Compute p-value: p p = np.sum(perm_replicates >= heritability_scandens) / len(perm_replicates) print(f'The p-value associated is: {p}') # plot the null distribution fig, ax = plt.subplots() ax.hist(perm_replicates, bins=100, density=True, label='Perm Rep Dis') ax.axvline(heritability_scandens, color='red', label='Observed heritability') ax.set(ylabel='Density', xlabel='heritability', title='Distribution of Permutation Heritability Replicates') ax.legend(loc='upper left') ax.margins(0.3) plt.show() ###Output _____no_output_____
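###Markdown The same permutation test can be applied to _G. fortis_ for comparison. This is an extra check added here as a sketch, not part of the original project; it reuses `heritability`, `bd_parent_fortis`, `bd_offspring_fortis`, and `heritability_fortis` from the cells above.
```python
# Permutation test for G. fortis: shuffle parental beak depths to break any pairing,
# then see how often the shuffled heritability reaches the observed value.
perm_replicates_fortis = np.empty(10000)

for i in range(10000):
    bd_parent_permuted = np.random.permutation(bd_parent_fortis)
    perm_replicates_fortis[i] = heritability(bd_parent_permuted, bd_offspring_fortis)

p_fortis = np.sum(perm_replicates_fortis >= heritability_fortis) / len(perm_replicates_fortis)
print(f'G. fortis p-value: {p_fortis}')
```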
TensorFlow_lab_03_1.ipynb
###Markdown Lab-03-1 Minimizing Cost show graph --- ###Code import tensorflow.compat.v1 as tf tf.disable_v2_behavior() tf.set_random_seed(777) # for reprducibilty import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown X and Y data ###Code X = [1, 2, 3] Y = [1, 2, 3] ###Output _____no_output_____ ###Markdown Variable ###Code W = tf.placeholder(tf.float32) ###Output _____no_output_____ ###Markdown Our model ###Code # our hypothesis for linear model X * W hypo = X * W # cost/loss func cost = tf.reduce_mean(tf.square(hypo - Y)) ###Output _____no_output_____ ###Markdown Preparing session ###Code # launch the graph in a session sess = tf.Session() # initializes global variables in the graph sess.run(tf.global_variables_initializer()) ###Output _____no_output_____ ###Markdown Getting cost ###Code # variables for plotting cost function W_history = [] cost_history = [] for i in range(-30, 50): curr_W = i * 0.1 curr_cost = sess.run(cost, feed_dict={W: curr_W}) W_history.append(curr_W) cost_history.append(curr_cost) ###Output _____no_output_____ ###Markdown Show Graph ###Code # show the cost function plt.plot(W_history, cost_history) plt.show() ###Output _____no_output_____
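###Markdown As a short extension of this lab (an added sketch, not part of the original lecture code), the same cost can be minimized automatically with gradient descent instead of scanning W by hand. Here W becomes a `tf.Variable` (starting away from the optimum) and the existing session is reused.
```python
# Minimize cost = mean((X*W - Y)^2) with gradient descent (TF1 compat style, as above).
W_var = tf.Variable([5.0], name='weight')
hypothesis = X * W_var
cost_gd = tf.reduce_mean(tf.square(hypothesis - Y))
train = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost_gd)

sess.run(tf.global_variables_initializer())
for step in range(21):
    _, cost_val, W_val = sess.run([train, cost_gd, W_var])
    if step % 5 == 0:
        print(step, cost_val, W_val)
```
The learned weight should approach 1.0, the minimum visible in the cost curve plotted above.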
lectures/lec10_matrix_inverse.ipynb
###Markdown 10 Matrix Inverse Unit 1: Vectors, Book ILA Ch. 1-5 Unit 2: Matrices, Book ILA Ch. 6-11 + Book IMC Ch. 2- 06 Matrices- 07 Linear Equations- 08 Linear Dynamical Systems- 09 Matrix Multiplication- **_10 Matrix Inverse_** Unit 3: Least Squares, Book ILA Ch. 12-14 Outline: 10 Matrix Inverse- **[Left and right inverses](sec-matrices)**- [Inverse](sec-matrices)- [Solving linear equations](sec-matrices)- [Examples](sec-matrices) Left inverse$\color{EF5645}{\text{Definition}}$: Consider a scalar $a$. A scalar $x$ that satisfies $xa = 1$ is called the inverse of $a$.- We have $x = \frac{1}{a}$, which exists and is unique if and only if $a \neq 0 $.$\color{EF5645}{\text{Definition}}$: Consider a matrix $A$. A matrix $X$ that satistifies:$$XA = I$$ is called a left-inverse of $A$. If a left inverse exists, A is left-invertible. The left-inverse might not be unique. $\color{047C91}{\text{Exercise}}$: Show that the matrix:$$A = \begin{bmatrix}-3 & -4 \\4 & 6 \\1 & 1 \end{bmatrix}$$ has two different left-inverses:$$X_1 = \frac{1}{9}\begin{bmatrix}-11 & -10 & 16 \\7 & 8 & -11 \end{bmatrix}, \quad X_2 = \frac{1}{2}\begin{bmatrix}0 & -1 & 6 \\0 & 1 & -4 \end{bmatrix}.$$ Properties of left inverses$\color{6D7D33}{\text{Properties}}$:- If $A$ has a left inverse, then the columns of $A$ are linearly independent.- If $A$ has a left inverse, then $A$ is tall or square.$\color{047C91}{\text{Exercise}}$: Prove the above statement. Solving linear equations with left inverses$\color{EF5645}{\text{Proposition}}$: Consider the linear equation $Ax = b$. Consider $C$ a left-inverse of $A$. Then, a solution to the linear equation is:$$x = Cb.$$$\color{047C91}{\text{Exercise}}$: Prove the above statement. $\color{047C91}{\text{Example}}$: Consider the matrix $A = \begin{bmatrix}-3 & -4 \\4 & 6 \\1 & 1 \end{bmatrix}$ from the previous slide, and $b = \begin{bmatrix} 1 \\ -2 \\ 0\end{bmatrix}$. Give two solutions to the linear equation:$$ Ax = b.$$ Right inverses$\color{EF5645}{\text{Definition}}$: Consider a matrix $A$. A matrix $X$ that satistifies:$$AX = I$$ is called a right-inverse of $A$. If a right inverse exists, A is right-invertible. The right-inverse might not be unique. Properties of right inverses$\color{6D7D33}{\text{Properties}}$:- $A$ is right invertible if and only if $A^T$ is left invertible.- $A$ is right invertible if and only if its rows are linearly independent.- If $A$ is right invertible, then $A$ is wide or square.$\color{047C91}{\text{Exercise}}$: Prove the above statements. Solving linear equations with right inverses$\color{EF5645}{\text{Proposition}}$: Consider the linear equation $Ax = b$. Consider $B$ a right-inverse of $A$. Then, a solution to the linear equation is:$$x = Bb.$$$\color{047C91}{\text{Exercise}}$: Prove the above statement. Outline: 10 Matrix Inverse- [Left and right inverses](sec-matrices)- **[Inverse](sec-matrices)**- [Solving linear equations](sec-matrices)- [Examples](sec-matrices) Fill out this second anonymous survey ;)https://tinyurl.com/2vaxhke9 Inverse$\color{EF5645}{\text{Definition}}$: If $A$ has a left and a right inverse, they are unique and equal. We say that $A$ is invertible. We denote $A^{-1}$ the (unique) inverse of $A$.$\color{6D7D33}{\text{Properties}}$:- If $A$ is invertible then $A$ is square.- The inverse of the inverse is: $(A^{-1})^{-1} = A$. Which Matrices are Invertible?$\color{6D7D33}{\text{Properties}}$: Examples of matrices that are always invertible:- Any lower triangular matrix $L$ with nonzero diagonal entries is invertible. 
- Any upper triangular $R$ with nonzero diagonal entries is invertible.$\color{047C91}{\text{Exercise}}$: Give examples of invertible matrices. Computing Inverses: $2 \times 2$ matrices$\color{6D7D33}{\text{Properties}}$: Consider $A$ is a $2 \times 2$ matrix:- $A$ is invertible if and only if $A_{11}A_{22} \neq A_{12}A_{21}$.- In this case: $A^{-1} = \frac{1}{A_{11}A_{22} - A_{12}A_{21}}\begin{bmatrix} A_{22} & -A_{12} \\ -A_{21} & A_{11} \end{bmatrix}$$\color{047C91}{\text{Exercise}}$: Compute the inverse of $\begin{bmatrix} 1& 2 \\ 0 & 4 \end{bmatrix}.$ Computing Inverses$\color{6D7D33}{\text{Properties}}$:- $I^{-1} = I$- If $Q$ is square matrix with $Q^TQ = I$: - Then $Q^{-1} = Q^T$.- If $D = diag(a_1, ..., a_n)$ is a diagonal matrix with nonzero elements: - Then $D^{-1} = diag(\frac{1}{a_1}, ..., \frac{1}{a_n}).$ Computing Inverses$\color{6D7D33}{\text{Properties}}$: Consider invertible square matrices $A, B$ with known inverses $A^{-1}, B^{-1}$.- $(AB)^{-1} = B^{-1}A^{-1}$- $(A^T)^{-1} = (A^{-1})^T$- New notation: Negative powers! $A^{-k} = (A^{k})^{-1}$ Computing Inverses from QR decomposition $\color{6D7D33}{\text{Properties}}$: Consider $A$, a square and invertible matrix. Consider the QR factorization $A = QR$. - Then, the inverse of $A$ can be written: $A^{-1} = R^{-1}Q^T$. Computing Inverses in Python$\color{003660}{\text{In Python}}$, we use `np.linalg.inv` to compute the inverse. ###Code import numpy as np A = np.array([ [1, 2], [0, 4] ]) np.linalg.inv(A) ###Output _____no_output_____ ###Markdown Outline: 10 Matrix Inverse- [Left and right inverses](sec-matrices)- [Inverse](sec-matrices)- **[Solving linear equations](sec-matrices)**- [Examples](sec-matrices) Recall: Linear equations$\color{EF5645}{\text{Definition}}$: A set (or system) of $m$ linear equations in $n$ variables $x_1, . . . , x_n$ is defined as:$$\begin{matrix}A_{11}x_1 + A_{12}x_2 + · · · + A_{1n}x_n = b_1 \\\vdots \\A_{m1}x_1 + A_{m2}x_2 + · · · + A_{mn}x_n = b_m\end{matrix}$$and can be written compactly as: $Ax = b.$$\color{EF5645}{\text{Proposition}}$: Consider the linear equation $Ax = b$. If $A$ is invertible with inverse $A^{-1}$, then the equation has a unique solution: $x = A^{-1}b$. In what follows, we see methods to solve $Ax = b$ in several special cases:- when we can compute $A^{-1}$- when $A$ is upper-triangular invertible- when we know the QR decomposition of $A$- using Python. Special case: we know $A^{-1}$$\color{6D7D33}{\text{Method}}$: Consider $A$ an invertible matrix and the linear equation $Ax = b$. Assume that we know $A^{-1}$. - Then the unique solution of $Ax = b$ is given by $A^{-1}b.$$\color{047C91}{\text{Example}}$: An airplane travels 1200 miles in 4 hours with a tail wind. On the way back, the same trip takes 5 hours, now with a head wind (against the wind). What is the speed of the plane in still air, and what was the wind speed? Special case: $A = R$ upper triangular invertible$\color{6D7D33}{\text{Method}}$: Consider $R$ an upper triangular matrix with nonzero entries and the linear equation: $Rx = b$, which can be re-written as:$$\begin{matrix}R_{11}x_1 + & R_{12}x_2 + & ... +& R_{1, n}x_n &= b_1 \\&&\vdots&&\\&&& R_{nn}x_n &= b_n\end{matrix}$$The solution of the linear equation can be found by back-substitution:- Last equation gives: $x_n = b_n / R_{nn}$- Second to last equation gives: $x_{n-1} = (b_{n-1} - R_{n-1, n}x_n)/ R_{n-1, n-1}$- Iterate. 
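In Python, a minimal back-substitution routine might look like the sketch below (added here for illustration; it assumes $R$ is square, upper triangular, with nonzero diagonal entries).
```python
import numpy as np

def back_substitution(R, b):
    """Solve Rx = b for an upper-triangular R with nonzero diagonal entries."""
    n = R.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the already-known components, then divide by the diagonal entry
        x[i] = (b[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
    return x

R = np.array([[1.0, 2.0], [0.0, 4.0]])
b = np.array([1.0, 2.0])
print(back_substitution(R, b))  # expected: [0.  0.5], matching np.linalg.solve(R, b)
```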
Special case: $A = QR$ via QR Factorization$\color{6D7D33}{\text{Method}}$: Consider $A$ an invertible matrix and the linear equation $Ax = b$. Assume that the $QR$ factorization of $A$ is given: $A = QR$.The solution of the linear equation can be found by using these steps:- Compute $Q^T b$- Solve the linear equation $Rx = Q^Tb$ by back substitution.$\color{047C91}{\text{Example}}$: We will see an example in a next slide. General case in Python $\color{003660}{\text{In Python}}$, a system of linear equations can be solved in several ways:- using `np.linalg.qr`- using `np.linalg.inv`- using `np.linalg.solve` ###Code A = np.array([[-3, -4], [4, 6]]); b = np.array([1, 2]) print(np.linalg.solve(A, b)); print(np.linalg.inv(A) @ b) q, r = np.linalg.qr(A) print(np.linalg.inv(r) @ q.T @ b) ###Output [-7. 5.] [-7. 5.] [-5.88 3. ] ###Markdown Outline: 10 Matrix Inverse- [Left and right inverses](sec-matrices)- [Inverse](sec-matrices)- [Solving linear equations](sec-matrices)- **[Examples](sec-matrices)** Example: Interpolation ###Code from IPython.display import Video; Video("figs/11_aftereff.mp4") ###Output _____no_output_____ ###Markdown Example: Polynomial Interpolation$\color{047C91}{\text{Example}}$: Consider a cubic polynomial with unknown coefficients $c_0, ..., c_3$:$$p(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3,$$that satisfies: $p(-1.1) = 1, p(-0.4) = 2, p(0.1)=4, p(0.8) = 1$.Find the polynomial that interpolates these 4 points. Only use `np.linalg.qr`. ###Code A = np.array([ [1, -1.1, (-1.1) ** 2, (-1.1) ** 3], [1, -0.4, (-0.4) ** 2, (-0.4) ** 3], [1, 0.1, (0.1) ** 2, (0.1) ** 3], [1, 0.8, (0.8) ** 2, (0.8) ** 3] ]) b = np.array([1, 2, 4, 1]) q, r = np.linalg.qr(A) print("R = \n", r) print("Q = \n", q) #print("QT b = \n", q.T @ b) #np.linalg.solve(A, b) ###Output R = [[-2. 0.3 -1.01 0.441 ] [ 0. -1.3892444 0.41677332 -1.27198641] [ 0. 0. 0.84 -0.378 ] [ 0. 0. 0. -0.28720648]] Q = [[-0.5 0.68382496 0.5 0.17995394] [-0.5 0.17995394 -0.5 -0.68382496] [-0.5 -0.17995394 -0.5 0.68382496] [-0.5 -0.68382496 0.5 -0.17995394]]
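One way to finish the interpolation example (added as a sketch): with $Q$ and $R$ in hand, the coefficients follow from back substitution on $Rc = Q^Tb$, using the `back_substitution` helper sketched earlier; `np.linalg.solve(r, q.T @ b)` would give the same result.
```python
# Solve R c = Q^T b for the polynomial coefficients c0, ..., c3.
c = back_substitution(r, q.T @ b)   # equivalently: np.linalg.solve(r, q.T @ b)
print("coefficients c0..c3:", c)

# Check that the cubic actually passes through the four given points.
for x_i, b_i in zip([-1.1, -0.4, 0.1, 0.8], b):
    p_x = c[0] + c[1]*x_i + c[2]*x_i**2 + c[3]*x_i**3
    print(f"p({x_i}) = {p_x:.3f} (target {b_i})")
```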
SVM/IncomeViaSVM.ipynb
###Markdown In this project, we will use a support vector machine (SVM) to predict whether an individual's annual income exceeds $50,000 given attributes about their educational and ethnic background, working class, and a few other features provided in [this](https://archive.ics.uci.edu/ml/datasets/adult) dataset from the UCI Machine Learning Repository.Here, the preliminary data analysis will proceed the same as in `IncomeViaLogisticRegression.ipynb`, except that we will use KNN imputation to impute missing values rather than treating them as their own category. ###Code import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown First, we import the data. ###Code cols = ['age', 'workclass', 'fnlwgt', 'education', 'education-num', \ 'marital-status', 'occupation', 'relationship', 'race', 'sex', \ 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', \ 'income'] df_orig = pd.read_csv('adult.data', names=cols, index_col=False, sep=', ', engine='python') df = df_orig.copy() df.head() ###Output _____no_output_____ ###Markdown It is worth noting the missing values in some of the variables, denoted by a '?'. ###Code print('Number of missing values for each variable:') for col in df: vc = df[col].value_counts() num_missing = 0 if '?' in vc.index: num_missing = vc['?'] print(' ', col, ':', num_missing) ###Output Number of missing values for each variable: age : 0 workclass : 1836 fnlwgt : 0 education : 0 education-num : 0 marital-status : 0 occupation : 1843 relationship : 0 race : 0 sex : 0 capital-gain : 0 capital-loss : 0 hours-per-week : 0 native-country : 583 income : 0 ###Markdown There are a number of ways of dealing with this issue. The simplest method is to drop all observations with missing values. However, this is not ideal as we are losing valuable data in the other variables associated with the dropped observations. For continuous variables, a common technique is to replace the missing values with a logical/unimpeding guess. Depending on the situation, this is usually either a zero, one, or the mean or median of the rest of the values for that variable. A more advanced way of filling the missing values is to employ a technique called KNN imputation. In this technique, the observations with no missing values are used as training data to train a k-nearest neighbours algorithm, which predicts the missing values. This can be easily implemented using `sklearn`'s `impute.KNNImputer` class. Some of the variables may require some explanation. `fnlwgt` is the final weight, which is the number of people the census believes this entry represents. For simplicity of analysis, and since it is more a quality of the population than the individual, this variable will be dropped. The variable `education-num` is an ordinal encoding of the `education` variable. The ordinal encoding will be more useful for our classification algorithm, so the education column will be removed. `relationship` represents the respondents' role in the family, which can be assessed from gender and marital status, so it, too, will be discarded. `capital-gain` and `capital-loss` represent income from sources other than wage or salary, such as investment income. ###Code df = df.drop(['education', 'relationship', 'fnlwgt'], axis=1) ###Output _____no_output_____ ###Markdown The variable `workclass` stands for the industry in which the responding unit is employed. 
###Code df.workclass.value_counts() ###Output _____no_output_____ ###Markdown There are two small classes: `Without-pay` and `Never-worked`. I will combine these into a category called `Other`. To simplify the analysis, we can group those who work for the government into a `Government` class, and both self-employed classes, incorporated and not incorporated, into a single one. `?` values will also be converted to `np.nan` to aid in later analysis. ###Code df.workclass = df.workclass.map({'?':np.nan, 'Without-pay':'Other', 'Never-worked':'Other',\ 'Local-gov':'Government', 'State-gov':'Government', 'Federal-gov':'Government',\ 'Self-emp-not-inc':'Self-employed', 'Self-emp-inc':'Self-employed',\ 'Private':'Private'}) df.workclass.value_counts() ###Output _____no_output_____ ###Markdown To investigate the distribution of the `workclass` variable and its relationship with our target variable `income`, we can plot a bar plot of the `workclass` variable, coloured by `income`. ###Code df_plot = df.groupby(['workclass', 'income']).size().reset_index().pivot(columns='income', \ index='workclass', values=0) ax = df_plot.plot(kind='bar', stacked=True, figsize=(10,6)) plt.title('Income by Industry') plt.ylabel('count') annotations = df_plot.divide(df_plot.sum(axis=1), axis='index') annotations = np.array(100*annotations).round().astype(int) annotations = annotations.flatten(order='F') for i, p in enumerate(ax.patches): ax.annotate(str(annotations[i]) + '%', (p.get_x()+.18, p.get_y()+p.get_height()//2-200)) ###Output _____no_output_____ ###Markdown Those who are self employed have the greatest tendency of making more than $50,000 annually, while those with other or unknown employment have the lowest tendency. We can create a similar plot for the `education-num` variable. ###Code df_plot = df.groupby(['education-num', 'income']).size().reset_index().pivot(columns='income', \ index='education-num', values=0).fillna(0) ax = df_plot.plot(kind='bar', stacked=True, figsize=(12,10)) plt.title('Income by Years of Education') plt.ylabel('Count') annotations = df_plot.divide(df_plot.sum(axis=1), axis='index') annotations = np.array(100*annotations).round().astype(int) annotations = annotations.flatten(order='F') for i, p in enumerate(ax.patches): if annotations[i] == 0: continue x = p.get_x() y = max(p.get_y()+p.get_height()//2-80, 60) if i >= len(annotations)//2: y = max(y, 350) ax.annotate(str(annotations[i]) + '%', (x, y)) ###Output _____no_output_____ ###Markdown Perhaps unsuprisingly, the proportion of people making more than $\$$50,000 annually increases with years of education. Nearly three quarters of those with doctoral degrees (16) make more than $\$$50,000 per year, while less than 10% of those with a high school education (8) or less make over $\$$50,000 annually. ###Code df.occupation.value_counts() ###Output _____no_output_____ ###Markdown To simplify the `occupation` variable, I will group together the given categories into `White-collar`, `Professional`, `Sales`, `Service`, `Blue-collar`, and `Armed-Forces` categories. Again, `?` values will be converted to `np.nan`. 
###Code df.occupation = df.occupation.map({'Prof-specialty':'Professional', 'Craft-repair':'Blue-collar', \ 'Exec-managerial':'White-collar', 'Adm-clerical':'White-collar', \ 'Machine-op-inspct':'Blue-collar', 'Transport-moving':'Blue-collar', \ 'Handlers-cleaners':'Blue-collar', 'Farming-fishing':'Blue-collar', \ 'Other-service':'Service', 'Tech-support':'Service', 'Protective-serv':'Service', \ 'Priv-house-serv':'Service', 'Sales':'Sales', \ '?':np.nan, 'Armed-Forces':'Armed-forces'}) df.occupation.value_counts() df_plot = df[df.occupation != 'Armed-forces'].groupby(['occupation', 'income']).size().reset_index()\ .pivot(columns='income', index='occupation', values=0).fillna(0) ax = df_plot.plot(kind='bar', stacked=True, figsize=(10,6)) plt.title('Income by Occupation') plt.ylabel('Count') annotations = df_plot.divide(df_plot.sum(axis=1), axis='index') annotations = np.array(100*annotations).round().astype(int) annotations = annotations.flatten(order='F') for i, p in enumerate(ax.patches): if annotations[i] == 0: continue x = p.get_x()+.15 y = p.get_y()+p.get_height()//2-80 if i >= len(annotations)//2: y = max(y, 350) ax.annotate(str(annotations[i]) + '%', (x, y)) ###Output _____no_output_____ ###Markdown It is notable that income varies greatly across different occupations. Nearly half of those with a professional occupation make over $\$$50,000 annually, however only 13% of service workers make over $\$$50,000 per year. The categorical `marital-status` variable will be simplified for analysis as well. ###Code df['marital-status'].value_counts() ###Output _____no_output_____ ###Markdown The `Married-civ-spouse`, `Married-spouse-absent`, and `Married-AF-spouse` categories will be combined into a `Married` variable. ###Code df['marital-status'] = df['marital-status'].map({'Never-married':'Single', \ 'Married-civ-spouse':'Married', 'Married-spouse-absent':'Married', 'Married-AF-spouse':'Married', \ 'Divorced':'Divorced', 'Separated':'Separated', 'Widowed':'Widowed'}) df['marital-status'].value_counts() df_plot = df.groupby(['marital-status', 'income']).size().reset_index().pivot(columns='income', \ index='marital-status', values=0).fillna(0) ax = df_plot.plot(kind='bar', stacked=True, figsize=(10,6)) plt.title('Income by Marital Status') plt.ylabel('Count') annotations = df_plot.divide(df_plot.sum(axis=1), axis='index') annotations = np.array(100*annotations).round().astype(int) annotations = annotations.flatten(order='F') for i, p in enumerate(ax.patches): if annotations[i] == 0: continue x = p.get_x()+.15 y = p.get_y()+p.get_height()//2-80 if i >= len(annotations)//2: y = max(y, 350) ax.annotate(str(annotations[i]) + '%', (x, y)) ###Output _____no_output_____ ###Markdown Almost half of married people make over $\$$50,000 annually, however, less than 10% of the rest of the respondents do. 
###Code plt.figure(figsize=(10,6)) sns.histplot(df['capital-gain']) plt.title('Histogram of Capital Gain') plt.yscale('log') plt.figure(figsize=(10,6)) sns.histplot(df['capital-loss'], label='capital loss') plt.title('Histogram of Capital Loss') plt.yscale('log') print('Proportion of zeros [capital-gain]: %.1f%%' % (100*len(df[df['capital-gain'] == 0])/len(df))) print('Proportion of zeros [capital-loss]: %.1f%%' % (100*len(df[df['capital-loss'] == 0])/len(df))) ###Output Proportion of zeros [capital-gain]: 91.7% Proportion of zeros [capital-loss]: 95.3% ###Markdown As is clear from the above histograms (note the logarithmic scaling on the vertical axis) and computation, the `capital-gain` and `capital-loss` variables are both quite skewed, with a high proportion of zero values. Thus, we will exclude them from the dataset. ###Code df_plot = df.groupby(['native-country', 'income']).size().reset_index().pivot(columns='income', \ index='native-country', values=0).fillna(0) ax = df_plot.plot(kind='bar', stacked=True, figsize=(10,8)) plt.title('Income by Native Country') plt.ylabel('Count') plt.yscale('log') ###Output _____no_output_____ ###Markdown Similarly, the `native-country` variable displays high skewness as most observations are from the United States (again, note the logarithmic scaling of the vertical axis on the above plot). Hence, we will exclude this variable from our model as well. ###Code df = df.drop(['capital-gain', 'capital-loss', 'native-country'], axis=1) df_plot = df.groupby(['race', 'income']).size().reset_index().pivot(columns='income', \ index='race', values=0).fillna(0) ax = df_plot.plot(kind='bar', stacked=True, figsize=(10,6)) plt.title('Income by Race') plt.ylabel('Count') annotations = df_plot.divide(df_plot.sum(axis=1), axis='index') annotations = np.array(100*annotations).round().astype(int) annotations = annotations.flatten(order='F') for i, p in enumerate(ax.patches): if annotations[i] == 0: continue x = p.get_x()+.15 y = max(p.get_y()+p.get_height()//2-80, 120) if i >= len(annotations)//2: y = max(y, 1200) ax.annotate(str(annotations[i]) + '%', (x, y)) ###Output _____no_output_____ ###Markdown From the above plot, we see that the majority of respondents are white, and that white and asian-pacific islanders have the largest proportions of individuals earning more than $\$$50,000 per year. ###Code # Age hist by income plt.figure(figsize=(10,6)) bins = np.linspace(min(df.age) - 1, max(df.age), max(df.age) - min(df.age) + 2) plt.hist(df.age[df.income == '<=50K'], bins, alpha=0.5, label='<=50K') plt.hist(df.age[df.income == '>50K'], bins, alpha=0.5, label='>50K') plt.legend(loc='upper right', title='Income') plt.xlabel('Age') plt.ylabel('Count') plt.title('Superimposed Histograms of Age by Income') # Age hist by gender plt.figure(figsize=(10,6)) bins = np.linspace(min(df.age) - 1, max(df.age), max(df.age) - min(df.age) + 2) plt.hist(df.age[df.sex == 'Male'], bins, alpha=0.5, label='Male') plt.hist(df.age[df.sex == 'Female'], bins, alpha=0.5, label='Female') plt.legend(loc='upper right', title='Gender') plt.xlabel('Age') plt.ylabel('Count') plt.title('Superimposed Histograms of Age by Gender') ###Output _____no_output_____ ###Markdown Inspecting the distribution of the age variable, we see that there are significantly more observations for those who make less than $\$$50,000 annually than those who make more. 
Moreover, those who make more than $50,000 annually tend to be in their mid-career.Interestingly, females are underrepresented in the dataset, which could be caused by a census bias. ###Code plt.figure(figsize=(10,6)) sns.boxplot(data=df, x='hours-per-week', y='income') plt.xlabel('Hours per week') plt.ylabel('Income') plt.title('Distributions of Hours per Week by Income') ###Output _____no_output_____ ###Markdown Unsuprisingly, we see that those who make more than $\$$50,000 annually tend to work more hours per week than those who make less. Also, it is notable that the distribution of hours worked has a larger spread for those who make more than $\$$50,000 per year than those who don't. Now, we will process our data types to prepare the data for the SVM algorithm. First, we convert our binary variables, `sex` and `income`, to integers. ###Code df.sex = df.sex.map({'Male':0, 'Female':1}) df.income = df.income.map({'<=50K':0, '>50K':1}) ###Output _____no_output_____ ###Markdown Next, we will perform KNN imputation to impute the missing values in the `workclass` and `occupation` variables. We will do this using `sklearn`'s `impute.KNNImputer` class, but first we must transform our categorical data into numerical data. A simple ordinal encoding will suffice for this task. Later, we will reformat this ordinal encoding into a one-hot encoding, which will be better suited for the SVM. ###Code ordinal_encodings = {} for col in ['workclass', 'occupation', 'marital-status', 'race']: categories = df[col].unique() categories = categories[pd.notna(categories)] ordinals = range(len(categories)) encoding = {a:b for a, b in zip(categories, ordinals)} encoding[np.nan] = np.nan df[col] = df[col].map(encoding) ordinal_encodings[col] = encoding df.head() ###Output _____no_output_____ ###Markdown Now that our data is all numerical, we can proceed with the imputation. KNN imputation trains a k-nearest neighbour algorithm to predict the missing data values. All observations without missing values are treated as training data, with the corresponding training labels being the non-missing values from the variable we are aiming to impute. The observations with a missing value are then used by the KNN algorithm to predict the missing values. A range of different regression algorithms can be used in place of the KNN, however KNN models have proven to be effective in experiments.In our case, we must call the `round` method after the imputation since our data comes from an ordinal encoding, and thus must be an integer. ###Code from sklearn.impute import KNNImputer imputer = KNNImputer() df = pd.DataFrame(imputer.fit_transform(df), columns=df.columns).round() df.isna().any() ###Output _____no_output_____ ###Markdown We now perform the ordinal encoding in reverse to transform our newly imputed variables back into strings. This is simply done to aid in retaining appropriate column names in the next step. ###Code inverse_encodings = {v:{o:c for c, o in ordinal_encodings[v].items()} for v in ordinal_encodings} for col in ['workclass', 'occupation', 'marital-status', 'race']: df[col] = df[col].map(inverse_encodings[col]) ###Output _____no_output_____ ###Markdown Next comes the one-hot encoding. When a logical order is not present in the features, as is the case in our categorical variables, a one-hot encoding is a common technique to quantify the categorical data. Here, each category is mapped to a vector containing a 1 or 0 to denote the presence or absence of a feature. 
For instance, our `marital-status` variable could represent 'Single' as $[0,1,0,0,0]$, where the vector entries correspond to 'Married', 'Single', 'Divorced', 'Separated', and 'Widowed' respectively. This method can cause issues for variables with large cardinality as it drastically increases the sparsity of the dataset, however, it is a good way to quantify our data. This method can be easily implemented with the `pandas` method `get_dummies`. ###Code df = pd.get_dummies(df, columns=['workclass', 'marital-status', 'occupation', 'race']) df.head() ###Output _____no_output_____ ###Markdown We will preform the regression with the machine learning library scikit-learn. ###Code from sklearn.svm import SVC ###Output _____no_output_____ ###Markdown The data is split into training and validation sets and the independent variables in both sets are standardized using the mean and standard deviation of the training dataset. ###Code labels = df.income.copy() data = df.drop('income', axis=1).copy() train_frac = 0.75 train_len = int(train_frac * len(data)) train_msk = np.full(len(data), False) train_msk[:train_len] = True np.random.shuffle(train_msk) train_data, train_labels = data[train_msk], labels[train_msk].to_numpy() valid_data, valid_labels = data[~train_msk], labels[~train_msk].to_numpy() # Standardization train_mean, train_std = train_data.mean(), train_data.std() train_data = ((train_data - train_mean)/train_std).to_numpy() valid_data = ((valid_data - train_mean)/train_std).to_numpy() # Min-max normalization # train_min, train_max = train_data.min(), train_data.max() # train_data = ((train_data - train_min)/(train_max - train_min)).to_numpy() # valid_data = ((valid_data - train_min)/(train_max - train_min)).to_numpy() ###Output _____no_output_____ ###Markdown We will first start with the default parameters for our SVM. ###Code classifier = SVC() classifier.fit(train_data, train_labels) pred_labels = classifier.predict(valid_data) valid_acc = (valid_labels == pred_labels).sum()/len(valid_labels) print('Accuracy on validation data: %.2f%%' % (100*valid_acc)) ###Output Accuracy on validation data: 82.90% ###Markdown A good start, however we may be able to improve on the accuracy by adjusting the regularization parameter $C$ and the $\gamma$ parameter of the Radial Basis Function kernel SVM. The parameter $C$ trades off correct classification of training examples against maximization of the decision function's margin. For a larger $C$, a smaller margin will be accepted if the decision function is better at classifying training examples correctly. A smaller $C$ will encourage a larger margin and may be more resistant to outliers, however it may decrease training accuracy.The $\gamma$ parameter defines how far the infuence of each training example reaches, with lower values corresponging to a further reach. A higher $\gamma$ will put more importance on the training data and could result in overfitting. Conversely, a lower $\gamma$ makes the points in the training data less relevant and can result in underfitting.Here, we will use a logarithmic grid search over $C$ and $\gamma$ to find values which improve accuracy. 
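###Markdown Before running the search, it can help to make $\gamma$ concrete: for the RBF kernel, the similarity between two points is $\exp(-\gamma\lVert x - x'\rVert^2)$, so a larger $\gamma$ shrinks each training point's region of influence. A small sanity-check sketch (the value of `gamma_demo` is arbitrary), comparing the definition against scikit-learn's implementation on two standardized training rows: ###Code
from sklearn.metrics.pairwise import rbf_kernel

gamma_demo = 0.1
a, b = train_data[0], train_data[1]
# Kernel value from the definition ...
manual = np.exp(-gamma_demo * np.sum((a - b) ** 2))
# ... and from scikit-learn
from_sklearn = rbf_kernel(a.reshape(1, -1), b.reshape(1, -1), gamma=gamma_demo)[0, 0]
print(manual, from_sklearn)
###Output _____no_output_____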
###Code from sklearn.model_selection import StratifiedShuffleSplit, GridSearchCV C_range = np.logspace(1, 11, 11) gamma_range = np.logspace(-11, -1, 11) # Use a random tenth of the data to decrease computation times frac = 0.1 msk = np.full(len(train_data), False) msk[:int(len(train_data)*frac)] = True np.random.shuffle(msk) param_grid = dict(gamma=gamma_range, C=C_range) sss = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=42) grid = GridSearchCV(SVC(), param_grid=param_grid, cv=sss, verbose=99) grid.fit(train_data[msk], train_labels[msk]) print('The best parameters are %s with a score of %0.2f' % (grid.best_params_, grid.best_score_)) gamma_range_s = gamma_range[2:] scores = grid.cv_results_['mean_test_score'].reshape(len(C_range), len(gamma_range))[:, 2:] plt.figure(figsize=(8, 6)) plt.subplots_adjust(left=0.2, right=0.95, bottom=0.15, top=0.95) plt.imshow(scores, cmap=plt.cm.hot, origin='lower') plt.xlabel('$\gamma$') plt.ylabel('$C$') plt.colorbar() plt.xticks(np.arange(len(gamma_range_s)), gamma_range_s, rotation=45) plt.yticks(np.arange(len(C_range)), C_range) plt.show() ###Output _____no_output_____ ###Markdown We seem to obtain good results for a variety of parameter pairs. We will try a search with a finer grid to narrow down closer to optimal parameter values. It is clear from the plot that we obtain better results when the parameters fall on the downward diagonal of the plot. Since it is far more computationally expensive to train a SVM with a higher $C$ value than a lower one, we will focus on lower $C$ values and higher $\gamma$ values. Thus, we will search using a base 2 logarithmic grid for $C$ between $2^6$ and $2^{20}$ and $\gamma$ between $2^{-20}$ and $2^{-9}$. ###Code ## Same thing again with narrower, more specific range on C, gamma C_range = np.logspace(6, 20, 15, base=2) gamma_range = np.logspace(-20, -9, 12, base=2) frac = 0.2 msk = np.full(len(train_data), False) msk[:int(len(train_data)*frac)] = True np.random.shuffle(msk) param_grid = dict(gamma=gamma_range, C=C_range) sss = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=42) grid = GridSearchCV(SVC(), param_grid=param_grid, cv=sss, verbose=99) grid.fit(train_data[msk], train_labels[msk]) print('The best parameters are C=2^%i and gamma=2^%i with a score of %0.3f.' % (*np.log2(list(grid.best_params_.values())), grid.best_score_)) scores = grid.cv_results_['mean_test_score'].reshape(len(C_range), len(gamma_range)) plt.figure(figsize=(8, 6)) plt.subplots_adjust(left=0.2, right=0.95, bottom=0.15, top=0.95) plt.imshow(scores, cmap=plt.cm.hot, origin='lower') plt.xlabel('$\gamma$') plt.ylabel('$C$') plt.colorbar() plt.xticks(np.arange(len(gamma_range)), gamma_range, rotation=45) plt.yticks(np.arange(len(C_range)), C_range) plt.title('Validation Accuracy') plt.show() ###Output _____no_output_____ ###Markdown We see that $C=2^{12}$ and $\gamma=2^{-13}$ provides the best performance out of the grid values tested, so we will fit our classifier with these values. We set the `probability` parameter to `True` to allow the classifier to compute probabilities instead of just binary classification. ###Code classifier = SVC(kernel='rbf', C=2**12, gamma=2**-13, probability=True, verbose=0) classifier.fit(train_data, train_labels) ###Output _____no_output_____ ###Markdown The `predict_proba` method returns class probabilities based on input data. To accomplish this, a technique called [Platt scaling](https://en.wikipedia.org/wiki/Platt_scaling) is employed. 
Platt scaling runs a logistic regression model to transform the output of the SVM (or more generally, any classification model) into a probability distribution over classes.Assuming the classification algorithm is given by a real-valued function $f$ with class predictions determined by $y=\text{sign}(f(x))$, Platt scaling obtains probabilities via $$\text{P}(y=1|x)=\frac1{1+\text{exp}(Af(x)+B)}$$ i.e., a logistic transformation on the classifier's scores. Here, $A$ and $B$ are scalar parameters learned by the algorithm. These parameters are estimated using a [maximum likelihood](https://en.wikipedia.org/wiki/Maximum_likelihood_estimation) method that optimizes on the same training set as that for the original classifier $f$. Additionally, `sklearn` uses a [cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics)) strategy to reduce overfitting.In the case of an SVM with hyperplane coefficients $\beta$ and intercept $\beta_0$, $f$ is given by $$f(x)=\beta\cdot x+\beta_0.$$ This function is deployed in the `decision_function` method of the classifier class.To transform the probailities into class predictions, we may simply apply thresholding on the probabilities. ###Code pred_probs = classifier.predict_proba(valid_data) thres = 0.5 pred_labels_from_proba = np.array([0 if prob[0] > thres else 1 for prob in pred_probs]) valid_acc = (valid_labels == pred_labels_from_proba).sum()/len(valid_labels) print('Accuracy on validation data (via predict_proba): %.2f%%' % (100*valid_acc)) ###Output Accuracy on validation data (via predict_proba): 82.99% ###Markdown As seen above, we have achieved a small improvement in performance by optimizing our $C$ and $\gamma$ values.It is notable that the classifier's `predict` and `predict_proba` methods may be inconsistent. A sample may be labeled by `predict` as belonging to the positive class even if the output of `predict_proba` is less than 0.5; and similarly, it could be labeled as negative even if the output of `predict_proba` is more than 0.5. This is due to the cross-validation approach used to train the logistic model for the probability estimation. The difference is usually minimal for large datasets, however, `predict` usually performs slightly better. ###Code pred_labels = classifier.predict(valid_data) valid_acc = (valid_labels == pred_labels).sum()/len(valid_labels) print('Accuracy on validation data (via predict): %.2f%%' % (100*valid_acc)) diff = (pred_labels_from_proba != pred_labels).sum()/len(pred_labels) print('Proportion of differing class labels between predict and predict_proba methods: %.2f%%' % (100*diff)) ###Output Proportion of differing class labels between predict and predict_proba methods: 0.07% ###Markdown To check for overfitting, we can compare the above accuracy with the accuracy on the training data. ###Code pred_labels_train = classifier.predict(train_data) train_acc = (train_labels == pred_labels_train).sum()/len(train_labels) print('Accuracy on training data %.2f%%' % (100*train_acc)) ###Output Accuracy on training data 82.69% ###Markdown The similar accuracy on both datasets indicates the model is not overfitting the training dataset.A confusion matrix will give us more insight into how well our model is performing. 
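###Markdown As a cross-check of the manual computation in the next cell, scikit-learn's `confusion_matrix` can be used directly; it returns the same layout, with actual classes as rows and predicted classes as columns: ###Code
from sklearn.metrics import confusion_matrix

print(confusion_matrix(valid_labels, pred_labels))
###Output _____no_output_____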
###Code cf_matrix = np.zeros((2,2)) for real, pred in zip(valid_labels.astype(int), pred_labels.astype(int)): cf_matrix[real][pred] += 1 print('Confusion matrix:') pd.DataFrame(cf_matrix, index=['actual_<=50k', 'actual_>50k'], \ columns=['predicted_<=50k', 'predicted_>50k']).astype(int) print('Normalized confusion matrix:') pd.DataFrame(cf_matrix/cf_matrix.sum(), index=['actual_<=50k', 'actual_>50k'], \ columns=['predicted_<=50k', 'predicted_>50k']) ###Output Normalized confusion matrix: ###Markdown From the confusion matrix, we can calculate a few other metrics for evaluating out model. First, we compute the misclassification rate, which is simply the fraction of predictions that were wrong. ###Code misclass_rate = (cf_matrix[0][1]+cf_matrix[1][0])/cf_matrix.sum() print('Misclassification rate: %.2f%%' % (100*misclass_rate)) ###Output Misclassification rate: 17.01% ###Markdown We can also compute the recall, precision, and $F_1$ score. Recall measures the proportion of actual positives that were identified correctly. That is, recall is defined as $$\text{Recall} = \frac{\text{True Positives}}{\text{True Positives}+\text{False Negatives}}$$Precision, on the other hand, is the proportion of positive identifications that were correct. Hence, precision is given by $$\text{Precision} = \frac{\text{True Positives}}{\text{True Positives}+\text{False Positives}}$$It is important to consider both these metrics when evaluating a classification model as it is possible to have high recall but low precision or vice versa.$F_1$ score is a single metric which combines both the precision and recall. It is given by the harmonic mean of precision and recall:$$F_1=2\cdot\frac{\text{Recall}\cdot\text{Precision}}{\text{Recall}+\text{Precision}}$$$F_1$ score is a useful metric since it is low if either precision or recall are low and allows us do sufficiently describe the effectiveness of our model in a single quantity. ###Code recall = cf_matrix[1][1]/cf_matrix[1].sum() print('Recall: ', recall) precision = cf_matrix[1][1]/cf_matrix.sum(axis=0)[1] print('Precision: ', precision) f1 = 2*precision*recall/(precision + recall) print('F1 score: ', f1) ###Output Recall: 0.5104493207941484 Precision: 0.6856140350877193 F1 score: 0.5852051512428872 ###Markdown As with the logistic regression model in `IncomeViaLogisticRegression.ipynb`, our model has a greater precision than recall, and a decent $F_1$ score. The ROC curve is a plot of the false positive rate on the horizontal axis versus the true positive rate on the vertical axis for thresholds varying from 0 to 1. It is particularly useful for directly comparing several models, however, for a single model, the area under the ROC curve (AUC) can be used as a summary of the model's effectiveness.Here, we will load the predictions and validation labels from the logistic regression model created in `IncomeViaLogisticRegression.ipynb` to compute and compare the ROC curve. 
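###Markdown Before turning to the ROC curve, the hand-derived recall, precision, and $F_1$ above can be verified against scikit-learn's implementations; a brief sketch: ###Code
from sklearn.metrics import precision_score, recall_score, f1_score

print('Recall:   ', recall_score(valid_labels, pred_labels))
print('Precision:', precision_score(valid_labels, pred_labels))
print('F1 score: ', f1_score(valid_labels, pred_labels))
###Output _____no_output_____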
###Code from sklearn.metrics import roc_curve, roc_auc_score import pickle fpr, tpr, thresholds = roc_curve(valid_labels, pred_probs[:,1]) area = roc_auc_score(valid_labels, pred_probs[:,1]) with open('../Logistic Regression/income_and_predictions.pickle', 'rb') as f: valid_labels_lr, pred_probs_lr, valid_acc_lr, f1_lr = pickle.load(f) fpr_lr, tpr_lr, thresholds_lr = roc_curve(valid_labels_lr, pred_probs_lr) area_lr = roc_auc_score(valid_labels_lr, pred_probs_lr) plt.figure(figsize=(10,8)) plt.plot(fpr_lr, tpr_lr, label='Logistic Regression (area = %.3f)' % area_lr) plt.plot(fpr, tpr, label='SVM (area = %.3f)' % area) plt.xlim(0,1) plt.ylim(0,1) plt.legend(loc='lower right', fontsize=14) plt.xlabel('False Positive Rate', fontsize=14) plt.ylabel('True Positive Rate', fontsize=14) plt.title('ROC Curve', fontsize=18) print('Accuracies:\n SVM: %.2f%%\n Logistic Regression: %.2f%%' % (100*valid_acc, 100*valid_acc_lr)) print('F1 Scores:\n SVM: %.3f\n Logistic Regression: %.3f' % (f1, f1_lr)) ###Output Accuracies: SVM: 82.99% Logistic Regression: 82.52% F1 Scores: SVM: 0.585 Logistic Regression: 0.599
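###Markdown For symmetry with `IncomeViaLogisticRegression.ipynb`, the SVM's validation labels, predicted probabilities, accuracy, and $F_1$ score could be pickled as well, so that future models can be compared against both; a sketch (the file name is only a suggestion): ###Code
import pickle

with open('income_and_predictions_svm.pickle', 'wb') as f:
    pickle.dump((valid_labels, pred_probs[:, 1], valid_acc, f1), f)
###Output _____no_output_____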
notebooks/sine_ja_stm32.ipynb
###Markdown 「tflite micro」であそぼう! 元ノートブック:[@dansitu](https://twitter.com/dansitu) 日本語バーション:[@proppy](https://twitter.com/proppy]) bit.ly/2St3T1k ←こちらです github.com/proppy/TfLiteMicroArduino ← PRをどうぞ 「tflite micro」ってなんだ?- マイコンで「tflite」が動く事![img](https://wiki.stm32duino.com/images/thumb/d/db/STM32_Blue_Pill_perspective.jpg/800px-STM32_Blue_Pill_perspective.jpg)- https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro ###Code ! python -m pip install tensorflow==2.0.0-beta1 import tensorflow as tf print(tf.version.VERSION) ! python -m pip install matplotlib %matplotlib inline import matplotlib.pyplot as plt plt.rcParams['figure.dpi'] = 200 ###Output _____no_output_____ ###Markdown 一番かんたんなモデルを作りましょう! sin() 1000個 ###Code import numpy as np import math import matplotlib.pyplot as plt x_values = np.random.uniform(low=0, high=2*math.pi, size=1000) np.random.shuffle(x_values) y_values = np.sin(x_values) plt.plot(x_values, y_values, 'b.') print(plt.show()) ###Output _____no_output_____ ###Markdown ノイズをかけて ###Code y_values += 0.1 * np.random.randn(*y_values.shape) plt.plot(x_values, y_values, 'b.') plt.show() ###Output _____no_output_____ ###Markdown datasetをちゃんと分けて ###Code x_train, x_test, x_validate = x_values[:600], x_values[600:800], x_values[800:] y_train, y_test, y_validate = y_values[:600], y_values[600:800], y_values[800:] plt.plot(x_train, y_train, 'b.', label="Train") plt.plot(x_test, y_test, 'r.', label="Test") plt.plot(x_validate, y_validate, 'y.', label="Validate") plt.legend() plt.show() ###Output _____no_output_____ ###Markdown Kerasで2分を温めて ###Code from tensorflow.keras import layers import tensorflow as tf model = tf.keras.Sequential() model.add(layers.Dense(16, activation='relu', input_shape=(1,))) model.add(layers.Dense(16, activation='relu')) model.add(layers.Dense(1)) model.compile(optimizer='rmsprop', loss='mse', metrics=['mae']) history = model.fit(x_train, y_train, epochs=1000, batch_size=16, validation_data=(x_validate, y_validate), verbose=1) ###Output _____no_output_____ ###Markdown モデルを試して ###Code predictions = model.predict(x_test) plt.clf() plt.plot(x_test, y_test, 'bo', label='Test') plt.plot(x_test, predictions, 'ro', label='Keras') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown tfliteにゆっくり変わって ###Code converter = tf.lite.TFLiteConverter.from_keras_model(model) converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE] tflite_model = converter.convert() open("sine_model_data.tflite", "wb").write(tflite_model) ###Output _____no_output_____ ###Markdown マイコンに入れる前に最後の確認 ###Code interpreter = tf.lite.Interpreter('sine_model_data.tflite') interpreter.allocate_tensors() input = interpreter.tensor(interpreter.get_input_details()[0]["index"]) output = interpreter.tensor(interpreter.get_output_details()[0]["index"]) lite_predictions = np.empty(x_test.size) for i in range(x_test.size): input()[0] = x_test[i] interpreter.invoke() lite_predictions[i] = output()[0] plt.plot(x_test, y_test, 'bo', label='Test') plt.plot(x_test, predictions, 'ro', label='Keras') plt.plot(x_test, lite_predictions, 'kx', label='TFLite') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown マイコンに入れるために「ANSI C」に変わって ###Code _ = ! which xxd || apt-get install xxd ! xxd -i sine_model_data.tflite > sine_model_data.h try: from google.colab import files files.download('sine_model_data.h') except Exception as e: from IPython.display import FileLink display(FileLink('sine_model_data.h')) ###Output _____no_output_____
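###Markdown As a final check before moving to the microcontroller, it is worth confirming that the converted model is small enough for the target's flash; a short sketch, assuming the two files written above are in the working directory: ###Code
import os

# The .tflite payload is what ends up in the generated C array
print('sine_model_data.tflite:', os.path.getsize('sine_model_data.tflite'), 'bytes')
print('sine_model_data.h:     ', os.path.getsize('sine_model_data.h'), 'bytes')
###Output _____no_output_____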
labs/supplementary_materials/exploratory_data_analysis/A-quick-review-of-Data-Analysis-Numerical-Attributes-Solution.ipynb
###Markdown A quick review of Data Analysis : Numerical Attributes Imports ###Code %matplotlib inline import matplotlib as mpl %matplotlib inline %config InlineBackend.figure_format = 'svg' import pandas as pd import numpy as np import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Probability and Statistics with Python Random Experiment Examples :1. **rolling a die** (for example, *Outcome* : 4)2. **measuring the time to reach home** (for example, *Outcome* : 42 minutes)3. **tomorrow's weather** ( for example, *Outcome* : Partly Cloudy)Characteristics :* **Sample Space** (denoted $\Omega$) : the set of all possible outcomes* **Outcome** : element of the sample space* **Event** (denoted $E$) : subset of the sample space* **Random Variable** (denoted $X$) : the numerial outcome of the experimentIf the *sample space* is **finite** or **countably infinite**, then the random variable is said to be **discrete**. Experiment : rolling a die* $\Omega = \{\omega_1, \omega_2, \ldots, \omega_6\}$* $\omega_i$ : the outcome of getting the face that has $i$ dots* $E = \{\omega_2, \omega_4, \omega_6\}$ **Exercise** - Complete the following code ###Code # Experiment : rolling a die sample_space = set(range(1, 7)) # Event : the result of the roll is an even number event = set(range(2,7,2)) print(sample_space, event) ###Output {1, 2, 3, 4, 5, 6} {2, 4, 6} ###Markdown A **trial** of a random experiment generates an outcome $\omega \in \Omega$. ###Code import random # The die is rolled once : sample = random.sample(sample_space, 1) print(sample) outcome = sample[0] print(outcome) ###Output [1] 1 ###Markdown If $\omega \in E$, then it is said that event $E$ occured during the trial. ###Code if outcome in event: print('the result of the roll is an even number') ###Output _____no_output_____ ###Markdown Axiomatic definition of probabilityProbability distribution : $P : 2^{\Omega} \rightarrow \mathbb{R}$1. $P(E) \geqslant 0$, for all event $E \subseteq \Omega$ 1. $P(\Omega) = 1$1. $P(E_1 \cup E_2) = P(E_1) + P(E_2)$, for any two mutually exclusive events $E_1$ and $E_2$ (*i.e.*, $E_1 \cap E_2 = \emptyset)$ Based on these axioms and elementary set theory :1. $P(\emptyset) = 0$1. If $E_1 \subseteq E_2$, then $P(E_1) \leqslant P(E_2)$1. $P(\bar{E}) = 1 - P(E)$, where $\bar{E} = \Omega \backslash E$ is the *complement* of $E$1. $P(E_1 \cup E_2) = P(E_1) + P(E_2) - P(E_1 \cap E_2)$, for any two events $E_1$ and $E_2$1. $P(E) = \sum_{\omega \in E} P(\omega)$ The classical interpretation $$\text{The probability of an event} = \frac{\text{Number of favourable outcomes}}{\text{Number of possible outcomes}}$$where all the possible **outcomes are equaly likely**.For example, if we consider the experiment : "rolling a die". In this case the probability of getting an odd number id $3/6$, because each possible outcome is equally likely. **Exercise** - Complete the following function definition : ###Code from fractions import Fraction def P(event, sample_space): """The probability of an event, given a sample space of equiprobable outcomes. 
""" return Fraction(len(event & sample_space), len(sample_space)) p = P(event = {2, 4, 6}, sample_space = {1, 2, 3, 4, 5, 6}) print(p, float(p)) ###Output 1/2 0.5 ###Markdown The frequency interpretation$$\text{The relative frequency of an event} = \frac{\text{Number of times the event has occured}}{\text{Number of observed cases}}$$Let $N$ denotes the number of times the random experiment is repeated and $N_E$ the number of times that event $E$ has occured.$$P(E) = \lim_{N\to\infty} \frac{N_E}{N}$$ ###Code n = 30 # The die is rolled n times sample = [random.sample(sample_space, 1)[0] for i in range(n)] print('sample length : {}'.format(len(sample))) print(sample) import numpy as np sample = np.random.choice(list(sample_space), n) print('samples length : {}'.format(len(sample))) print(sample) ###Output samples length : 30 [3 4 4 6 4 4 2 6 3 4 6 2 4 4 1 2 3 3 5 5 1 1 2 5 1 6 4 4 2 1] ###Markdown **Exercise** - With the help of the Counter function from the collections module (see, [collections.Counter](https://docs.python.org/3/library/collections.htmlcollections.Counter)), compute the absolute and relative frequency of each outcome from the sample space. ###Code import collections absolute_frequencies = collections.Counter(sample) for outcome in sample_space: absolute_freq = absolute_frequencies[outcome] relative_freq = absolute_freq/len(sample) print('relative frequency of {} : {}'.format(outcome, relative_freq)) ###Output relative frequency of 1 : 0.16666666666666666 relative frequency of 2 : 0.16666666666666666 relative frequency of 3 : 0.13333333333333333 relative frequency of 4 : 0.3 relative frequency of 5 : 0.1 relative frequency of 6 : 0.13333333333333333 ###Markdown **Exercise** - Same question with [scipy.stats.itemfreq](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.itemfreq.html). ###Code from scipy import stats freqs = stats.itemfreq(sample) print(freqs, '\n') for row in freqs: outcome = row[0] absolute_freq = row[1] relative_freq = absolute_freq/len(sample) print('relative frequency of {} : {}'.format(outcome, relative_freq)) ###Output [['HH' 9] ['HT' 6] ['TH' 4] ['TT' 11]] relative frequency of HH : 0.3 relative frequency of HT : 0.2 relative frequency of TH : 0.13333333333333333 relative frequency of TT : 0.36666666666666664 ###Markdown **Exercise** - Same question with (pandas.Series.value_counts)[http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) ###Code sample = pd.Series(sample) sample.value_counts() / len(sample) ###Output _____no_output_____ ###Markdown The conditional probability of $F$ given $E$ : $P(F|E)$For example :* Experiment : rolling a die once* $F = \{6\}$ and $E = \{5, 6\}$* $P(F|E) = 1/2$Observations :* If $\omega \notin E$, then $P(\omega | E) = 0$* $\sum_{\omega \in E} P(\omega|E) = 1$* $\sum_{\omega \in E} P(\omega) = P(E)$* we normalize by scaling the probabilities by $1/P(E)$ : $\sum_{\omega \in E} P(\omega)/P(E) = P(E)/P(E) = 1$$$P(F|E) = \sum_{\omega \in E \cap F} P(\omega|E) = \sum_{\omega \in E \cap F} \frac{P(\omega)}{P(E)} = \frac{P(E \cap F)}{P(E)}$$ **Exercise** - Define a function that given three sets $F$, $E$ and $G$, returns the conditional probability $P(F|E)$, where each outcome is assumed to be equally likely. 
###Code def cond_proba(F, E, sample_space): return P(E & F, sample_space)/P(E, sample_space) sample_space = set(range(1,7)) print(cond_proba({6}, {5, 6}, sample_space), cond_proba({1}, {5, 6}, sample_space), cond_proba({1}, {1, 3, 5}, sample_space)) ###Output 1/2 0 1/3 ###Markdown An example with **combined experiments** :* Experiment : Toss a coin twice* Sample space : $\Omega = \Omega_1 \times \Omega_2 = \{H, T\} \times \{H, T\} = \{(H, T), (H, H), (T, H), (T, T)\}$* Event $A$ : *"the first toss is a head"** Event $B$ : *"the two outcomes are the same"*$$P(B|A) = \frac{P(B \cap A)}{P(A)} = \frac{P(\{(H,H)\})}{P(\{(H,H), (H,T)\})} = \frac{1/4}{1/2} = \frac{1}{2} = P(B)$$* Event $I$ = *"heads on the first toss"* = $\{(H,H), (H,T)\}$* Event $J$ = *"two heads turn up"* = $\{(H,H)\}$$$P(I)P(J) = 1/8 \neq P(I \cap J)$$ **Exercise** - Given the following data set for the experiment *"toss a coin twice"*, define a function that use the **frequency interpretation** to compute the conditional probability. ###Code from scipy import stats def proba(A, sample): outcome_freqs = stats.itemfreq(sample) event_freq = sum([int(freq) for outcome, freq in outcome_freqs if outcome in A]) return event_freq/len(sample) def cond_proba_freq(A, B, sample): return proba(A & B, sample) / proba(A, sample) sample_space = ['HT', 'HH', 'TH', 'TT'] sample = np.random.choice(sample_space, n) print('P({HT})', proba({'HT'}, sample)) print('P({HT, HH})', proba({'HT', 'HH'}, sample)) print(cond_proba_freq({'HT', 'HH'}, {'TT', 'HH'}, sample)) print(cond_proba_freq({'HH'}, {'TT', 'HH'}, sample)) ###Output P({HT}) 0.2 P({HT, HH}) 0.3333333333333333 0.4 1.0 ###Markdown Independent eventsTwo events $E$ and $F$ are said **independent** if $P(E|F) = P(E)$ and $P(F|E) = P(F)$.**Exercise** - Let us consider the experiment "rolling a die once" and three events $E_1 = \{2, 4, 6\}$, $E_2 = \{3, 4, 5, 6\}$, and $E_3 = \{4, 5, 6\}$. Are these events mutually independent ? ###Code import itertools events = [{2, 4, 6}, {3, 4, 5, 6}, {2, 5, 6}] sample_space = set(range(1,7)) for E in events: print('P({}) = {}'.format(E, P(E, sample_space))) for E, F in itertools.product(events, repeat=2): print('P({}|{}) = {}'.format(E, F, cond_proba(E, F))) ###Output P({2, 4, 6}) = 1/2 P({3, 4, 5, 6}) = 2/3 P({2, 5, 6}) = 1/2 P({2, 4, 6}|{2, 4, 6}) = 1.0 P({2, 4, 6}|{3, 4, 5, 6}) = 0.5 P({2, 4, 6}|{2, 5, 6}) = 0.6666666666666666 P({3, 4, 5, 6}|{2, 4, 6}) = 0.6666666666666666 P({3, 4, 5, 6}|{3, 4, 5, 6}) = 1.0 P({3, 4, 5, 6}|{2, 5, 6}) = 0.6666666666666666 P({2, 5, 6}|{2, 4, 6}) = 0.6666666666666666 P({2, 5, 6}|{3, 4, 5, 6}) = 0.5 P({2, 5, 6}|{2, 5, 6}) = 1.0 ###Markdown Numerical Attributes $\def\*1{\mathbf{1}}$$\DeclareMathOperator*{\argmax}{arg\,max}$A numeric attribute $X$ is a **random variable** (*i.e.*, $X : \Omega \to \mathbb{R}$) that assigns a real number to each outcome of a random experiment. By default, a numeric attribute $X_j$ such as the *sepal length* is considered as the identity random variable, *i.e.* $X(v) = v$, for all $v \in \Omega$.Let us consider the following data matrix $D \in \mathbb{R}^{n \times 1}$:$$\*D = \begin{pmatrix} X\\ \hline x_1\\ x_2\\ \vdots\\ x_n\end{pmatrix}$$The considered numeric attribute $X$ is a **random variable**. The observed data is a **random sample** drawn from $X$. 
That is to say, each variable $x_i$ is an identity random variable (*i.e.*, $x_i : \mathbb{R} \to \mathbb{R}$) independent and identically distributed as $X$ (*i.e.*, same mass and density function), with $i = 1,\ldots,n$.For example, in the case of the sepal length :$$\*D = \begin{pmatrix} X\\ \hline 5.1\\ 4.8\\ 6.0\\ 6.8\\ 6.7\\\end{pmatrix}$$Let us consider the *iris data set* and the following **discrete random variable** defined on the attribute *sepal length*$$A(v) = \left\{\begin{array}{l}0,\ \text{if}\ v < 7,\\1,\ \text{otherwise.}\end{array}\right.$$The **Probability Mass Function** is defined as usual, *i.e.* $f(x) = P(A = x)$, for all $x \in \mathbb{R}$ with $f(x) \geqslant 0$ and $\sum_{x} f(x) = 1$. This function can be estimated empirically based on the given data set.**Exercise** - Estimate the **Empirical Probability Mass Function** of $A$ defined as follows :$$\hat{f}(x) = \frac{1}{n} \sum_{i = 1}^n I(x_i = x)$$where the indicator variable is defined as follows,$$I(x_i = x) =\left\{ \begin{array}{ll}1, & \mbox{if}\ x_i = x,\\0, & \mbox{otherwise.}\end{array}\right.$$ ###Code def f(A, x): """ Probability mass function of A """ if x == 0: return (A < 7).sum()/len(A) elif x == 1: return (A >= 7).sum()/len(A) else: return 0 data = pd.read_csv('../../datasets/iris.data') A = data['SepalLength'] print(f(A, 0), f(A, 1)) print(f(A, 0) + f(A, 1)) ###Output 0.913333333333 0.0866666666667 1.0 ###Markdown The **probability distribution** of a coninuous variable $X$ is described by its **probability density function**. This function is defined as follows for all $a, b \in \mathbb{R}$ :$$P(a \leqslant X \leqslant b) = \int_{a}^b f(x)\ dx$$where $f(x) \geqslant 0$ and $\int_{-\infty}^{+\infty} f(x) dx = 1$.Let us model the numerical attribute *sepal length* via the *normal density function* given as :$$f(x) = \frac{1}{\sqrt{2\sigma^2\pi} } \; e^{ -\frac{(x-\mu)^2}{2\sigma^2} }$$In this case, the random variable has two unknown parameters $\mu$ and $\sigma$. Their estimators, denoted as $\hat{\mu}$ and $\hat{\sigma}$ are defined as follows :$$\hat{\mu} = \sum_{x}x\hat{f}(x) = \frac{1}{n} \sum_{i = 1}^n x_i$$$$\hat{\sigma}^2 = \frac{1}{n} \sum_{i = 1}^n (x_i - \hat{\mu})^2$$**Exercise** - Based on the [SciPy statistical functions](http://docs.scipy.org/doc/scipy/reference/stats.html), plot the density function of the *sepal length*. ###Code from scipy.stats import norm data = pd.read_csv('../../datasets/iris.data') X = data['SepalLength'] plt.hist(X, bins=10, normed=True) plt.xlabel("X") plt.ylabel("Frequency") x_space = np.linspace(1, 10, 100) mu = X.mean() sigma = X.std() plt.plot(x_space, norm(loc=mu, scale=sigma).pdf(x_space)) ###Output _____no_output_____ ###Markdown The **Empirical Cumulative Distribution Function** is difined as follows.$$\hat{F}(x) = \frac{1}{n} \sum_{i = 1}^n I(x_i \leqslant x)$$**Exercise** - Plot the empirical comulative distribution function of the sepal length and compare it with the one from normal distribution. ###Code from scipy.stats import norm data = pd.read_csv('../../datasets/iris.data') X = data['SepalLength'] plt.hist(X, bins=50, normed=True, cumulative=True) plt.xlabel("X") plt.ylabel("Frequency") x_space = np.linspace(1, 10, 100) mu = X.mean() sigma = X.std() plt.plot(x_space, norm(loc=mu, scale=sigma).cdf(x_space)) ###Output _____no_output_____ ###Markdown We consider the **bivariate random variable** $X = (X_1, X_2)^T$. 
In this case, we consider a data matrix $D \in \mathbb{R}^{n \times 2}$:$$\*D = \begin{pmatrix} X_1 & X_2\\ \hline x_{11} & x_{12}\\ x_{21} & x_{22}\\ \vdots & \vdots\\ x_{n1} & x_{n2}\end{pmatrix}$$Where the points $\*x_i$, with $i = 1, 2, \ldots, n$, is a **random sample** drawn from $\*X$. That is to say, $\*x_i$ are independent variables identically distributed as $\*X$.The **covariance** is used to measure the linear dependence between two variables. The covariancle between $X_1$ and $X_2$ is denoted $\sigma_{12}$ and is equal to 0 is they are independent. The **sample covariance** between $X_1$ and $X_2$ is computed as follows.$$\hat{\sigma}_{12} = \frac{1}{n} \sum_{i=1}^n (x_{i1}-\hat{\mu}_1) (x_{i2}-\hat{\mu}_1)$$The **correlation** between $X_1$ and $X_2$, denoted $\rho_{12}$, is a standardized covariance. The **sample correlation** is computed as follows.$$\hat{\rho}_{12} = \frac{\hat{\sigma}_{12}}{\hat{\sigma}_{1}\hat{\sigma}_{2}}$$**Exercise** - Compute the standard correlation between each pair of attribute of the iris data set (see, [pandas.DataFrame.corr](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.corr.html)). What do you conclude ? Use scatter plots to complete your conclusion. Which pair of attributes has the largest covariance, and which pair of attributes has the smallest covariance ? ###Code data.corr() data.plot(kind='scatter', color='Blue', x='SepalLength', y='PetalLength') data.plot(kind='scatter', color='Blue', x='SepalLength', y='SepalWidth') ###Output _____no_output_____ ###Markdown DistanceLet us consider the following dataset :| $\*x_i$ | Age ($X_1$) | Income ($X_2$) | |------------|-------------------|------------------| | $\*x_1$ | 12 | 300 | | $\*x_2$ | 14 | 500 | | $\*x_3$ | 18 | 1000 | | $\*x_4$ | 23 | 2000 | | $\*x_5$ | 27 | 3500 | | $\*x_6$ | 28 | 4000 | | $\*x_7$ | 34 | 4300 | | $\*x_8$ | 37 | 6000 | | $\*x_9$ | 39 | 2500 | | $\*x_{10}$ | 40 | 2700 | In methods like classification and clustering, we have to compute de similarity (or dissimilarity) between pairs of observations. For example, we could consider the euclidean distance to measure the dissimilarity between each pair of instances in this dataset. This leads to compute the so-called **distance matrix**. See common definitions of distances at the end of this notebook (*apendix* section).**Exercise** - Declare this data set as a Pandas DataFrame. Based on [pdist](http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html) and [squareform](http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.squareform.html) compute the corresponding distance matrix. ###Code from scipy.spatial.distance import pdist, squareform X = pd.DataFrame({'Age' : [12, 14, 18, 23, 27, 28, 34, 37, 39, 40], 'Income' : [300, 500, 1000, 2000, 3500, 4000, 4300, 6000, 2500, 2700]}) d = squareform(pdist(X)) # Distance between x_1 and x_2 d[0, 1] ###Output _____no_output_____ ###Markdown NormalizationThe two attributes in this data set have very different scales. The sample range for $X_1$ is $\hat{r} = 40 - 12 = 28$ and the sample range for $X_2$ is $\hat{r} = 2700 - 300 = 2400$. For example, the euclidean distance between $\*x_1$ and $\*x_2$ is $\sqrt{2^2 + 200^2} = 200.01$. As you can see, the contribution of these variables in the dissimilarity measure depends on their scale. The contribution of $X_1$ is therefore overshadowed by the contribution of $X_2$. 
Two traditional methods can be used to solve this problem :**Range Normalization**Let us consider an attribute denoted by $X$ and let $x_1, x_2, \ldots, x_n$ be a random sample drawn from $X$. Each value is scaled by the **sample range** $\hat{r}$ of $X$ :$$\begin{align*}x_i' &= \frac{x_i - \min\{x_j : j = 1,\ldots,n\}}{\hat{r}}\\ &= \frac{x_i - \min\{x_j : j = 1,\ldots,n\}}{\max\{x_j : j = 1,\ldots,n\} - \min\{x_j : j = 1,\ldots,n\}}\end{align*}$$However, it is worth noting that $\hat{r}$ is **sensitive to extreme values**, and thus **not robust**.**Standard Score Normalization**Each value is replaced by its $z$-score:$$x_i' = \frac{x_i - \hat{\mu}}{\hat{\sigma}}$$**Exercise** - Apply the *standard score normalisation* on this data set and compute the resulting mean and standard deviation. ###Code X_norm = (X - X.mean()) / X.std() print(X_norm) print(X_norm.mean()) print(X_norm.std()) ###Output Age Income 0 -1.476664 -1.308035 1 -1.282366 -1.198116 2 -0.893770 -0.923319 3 -0.408026 -0.373724 4 -0.019430 0.450667 5 0.077719 0.725465 6 0.660613 0.890343 7 0.952060 1.824653 8 1.146358 -0.098927 9 1.243507 0.010992 Age 2.220446e-17 Income -4.683753e-17 dtype: float64 Age 1.0 Income 1.0 dtype: float64 ###Markdown **Exercise** - Compute the distance matrix for the resulting data frame. Compare the two distance matrices visually with the help of [pcolor](http://matplotlib.org/api/pyplot_api.htmlmatplotlib.pyplot.pcolor). ###Code squareform(pdist(X_norm)) plt.pcolor(squareform(pdist(X_norm))) plt.colorbar() ###Output _____no_output_____ ###Markdown Execute the following code. What do you conclude from it ? ###Code # see "On the Surprising Behavior of Distance Metrics in High Dimensional Space" # by Charu C. Aggarwal, Alexander Hinneburg, and Daniel A. Keim dimensions = [10, 20, 30, 40, 50, 100, 200, 500, 1000] p_norms = [1, 2, 10] relative_contrasts = np.zeros((len(dimensions), len(p_norms))) for i, d in enumerate(dimensions): relative_contrasts_d = np.zeros((30, len(p_norms))) for j in range(30): points = np.random.rand(100, d) for k, p in enumerate(p_norms): dists = np.linalg.norm(points, axis=1, ord=p) relative_contrasts_d[j, k] = (max(dists) - min(dists))/min(dists) for k, p in enumerate(p_norms): relative_contrasts[i, k] = np.mean(relative_contrasts_d[:,k]) colors = ['r', 'g', 'b'] for i, color in enumerate(colors): plt.plot(dimensions, relative_contrasts[:,i], color + '-') plt.plot(dimensions, relative_contrasts[:,i], color + '.') plt.ylabel('Relative contrast') plt.xlabel('Data dimensionality') plt.show() ###Output _____no_output_____
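###Markdown Returning to the small Age/Income example, the range (min-max) normalization described earlier can be applied in the same way as the standard score normalization; a short sketch reusing the same `X`: ###Code
# Range normalization: each column is rescaled to [0, 1]
X_range = (X - X.min()) / (X.max() - X.min())
print(X_range.min())
print(X_range.max())
# Distances computed on the rescaled data are again comparable across attributes
squareform(pdist(X_range))
###Output _____no_output_____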
Pytorch Practical Tasks/2_1_ForwardAD.ipynb
###Markdown Part 1: Forward Mode Automatic DifferentiationForward mode AD can simply be implemented by defining a class to represent [dual numbers](https://en.wikipedia.org/wiki/Dual_number) which hold the value and its derivative. The following skeleton defines a dual number and implements multiplication. __Tasks:__- Addition (`__add__`) is incomplete - can you finish it? - Can you also implement division (`__truediv__`), subtraction (`__sub__`) and power (`__pow__`)? ###Code import math class DualNumber: def __init__(self, value, dvalue): self.value = value self.dvalue = dvalue def __abs__(self): return abs(self.value) def __str__(self): return str(self.value) + " + " + str(self.dvalue) + "ε" def __mul__(self, other): return DualNumber(self.value * other.value, self.dvalue * other.value + other.dvalue * self.value) def __add__(self, other): return DualNumber(self.value + other.value, self.dvalue + other.dvalue) def __sub__(self, other): return DualNumber(self.value - other.value, self.dvalue - other.dvalue) def __truediv__(self, other): if abs(other.value) == 0: raise ZeroDivisionError else: return DualNumber(self.value / other.value, self.dvalue / other.value - self.value / (other.value)**2 * other.dvalue) def __pow__(self, other): # power rule for duals: d(u^v) = u^v * (v' * ln(u) + v * u'/u), assuming u > 0 return DualNumber(self.value ** other.value, (self.value ** other.value) * (other.dvalue * math.log(self.value) + other.value * self.dvalue / self.value)) # Tests DualNumber(1,0) + DualNumber(1,0) / DualNumber(1,0) - DualNumber(1,0)**DualNumber(1,0) ###Output _____no_output_____ ###Markdown Implementing math functionsWe also need to implement some core math functions. Here's the sine function for a dual number: ###Code def sin(x): return DualNumber(math.sin(x.value), math.cos(x.value)*x.dvalue) ###Output _____no_output_____ ###Markdown __Task:__ can you implement the _cosine_ (`cos`), _tangent_ (`tan`), and _exponential_ (`exp`) functions in the code block below?
###Code x = DualNumber(0.5, 0) y = DualNumber(4.2, 1) z = x * y + sin(x) dzdy = z.dvalue print('dz/dy:', dzdy) #Tests assert dzdy assert type(dzdy) == float ###Output _____no_output_____
sos_trades_core/tests/jupyter_doc/ipynb/ex_02.2_very_simple_multi_scenario.ipynb
###Markdown Import libraries ###Code import sys import os module_path = os.path.abspath('.') +"\\_scripts" print(module_path) if module_path not in sys.path: sys.path.append(module_path) from _00_Import_packages_git3 import * from time import sleep from shutil import rmtree from pathlib import Path from os.path import join import pandas as pd import numpy as np import os from sos_trades_core.execution_engine.execution_engine import ExecutionEngine from sos_trades_core.execution_engine.sos_simple_multi_scenario import SoSSimpleMultiScenario from sos_trades_core.execution_engine.sos_very_simple_multi_scenario import SoSVerySimpleMultiScenario from sos_trades_core.execution_engine.scatter_data import SoSScatterData from sos_trades_core.execution_engine.sos_discipline_scatter import SoSDisciplineScatter from tempfile import gettempdir from sos_trades_core.tools.rw.load_dump_dm_data import DirectLoadDump from sos_trades_core.study_manager.base_study_manager import BaseStudyManager from sos_trades_core.execution_engine.sos_discipline import SoSDiscipline from sos_trades_core.execution_engine.sos_coupling import SoSCoupling ###Output _____no_output_____ ###Markdown TestScatter SoSDiscipline test class setUp ###Code ''' Initialize third data needed for testing ''' dirs_to_del = [] namespace = 'MyCase' study_name = f'{namespace}' repo = 'sos_trades_core.sos_processes.test' base_path = 'sos_trades_core.sos_wrapping.test_discs' root_dir = gettempdir() exec_eng = ExecutionEngine(namespace) factory = exec_eng.factory ###Output _____no_output_____ ###Markdown tearDown ###Code for dir_to_del in dirs_to_del: sleep(0.5) if Path(dir_to_del).is_dir(): rmtree(dir_to_del) sleep(0.5) ###Output _____no_output_____ ###Markdown 02_consecutive_configuration ###Code exec_eng = ExecutionEngine(namespace) factory = exec_eng.factory # scatter build map ac_map = {'input_name': 'name_list', 'input_type': 'string_list', 'input_ns': 'ns_scatter_scenario', 'output_name': 'ac_name', 'scatter_ns': 'ns_ac', 'gather_ns': 'ns_scenario', 'ns_to_update': ['ns_data_ac']} exec_eng.smaps_manager.add_build_map('name_list', ac_map) import pandas as pd pd.DataFrame.from_dict(ac_map ,orient='index') # scenario build map scenario_map = {'input_name': 'scenario_list', 'input_type': 'string_list', 'input_ns': 'ns_scatter_scenario', 'output_name': 'scenario_name', 'scatter_ns': 'ns_scenario', 'gather_ns': 'ns_scatter_scenario', 'ns_to_update': ['ns_disc3', 'ns_barrierr', 'ns_out_disc3']} exec_eng.smaps_manager.add_build_map( 'scenario_list', scenario_map) import pandas as pd pd.DataFrame.from_dict(scenario_map ,orient='index') # shared namespace exec_eng.ns_manager.add_ns('ns_barrierr', 'MyCase') exec_eng.ns_manager.add_ns( 'ns_scatter_scenario', 'MyCase.multi_scenarios') exec_eng.ns_manager.add_ns( 'ns_disc3', 'MyCase.multi_scenarios.Disc3') exec_eng.ns_manager.add_ns( 'ns_out_disc3', 'MyCase.multi_scenarios') exec_eng.ns_manager.add_ns( 'ns_data_ac', 'MyCase') # instantiate factory # get instantiator from Discipline class builder_list = factory.get_builder_from_process(repo=repo, mod_id='test_disc1_scenario') scatter_list = exec_eng.factory.create_multi_scatter_builder_from_list( 'name_list', builder_list=builder_list, autogather=True) mod_path = f'{base_path}.disc3_scenario.Disc3' disc3_builder = exec_eng.factory.get_builder_from_module( 'Disc3', mod_path) scatter_list.append(disc3_builder) multi_scenarios = exec_eng.factory.create_very_simple_multi_scenario_builder( 'multi_scenarios', 'scenario_list', scatter_list, autogather=True, 
gather_node='Post-processing') exec_eng.factory.set_builders_to_coupling_builder( multi_scenarios) exec_eng.configure() exec_eng.display_treeview_nodes() dict_values = {f'{study_name}.multi_scenarios.scenario_list': ['scenario_1', 'scenario_2'], f'{study_name}.multi_scenarios.name_list': ['name_1', 'name_2']} dict_values exec_eng.load_study_from_input_dict(dict_values) exec_eng.display_treeview_nodes() for disc in exec_eng.dm.get_disciplines_with_name('MyCase.multi_scenarios'): if isinstance(disc, SoSVerySimpleMultiScenario): print(list(disc.get_scattered_disciplines().keys()), [ 'scenario_1', 'scenario_2']) dict_values[study_name + '.multi_scenarios.scenario_list'] = ['scenario_1'] dict_values exec_eng.load_study_from_input_dict(dict_values) exec_eng.display_treeview_nodes() print( [key for key in exec_eng.dm.data_id_map.keys() if 'scenario_2' in key and key.split('.')[-1] not in SoSDiscipline.NUM_DESC_IN and key.split('.')[-1] not in SoSCoupling.DEFAULT_NUMERICAL_PARAM_OUT_OF_INIT and key.split('.')[-1] != SoSCoupling.RESIDUALS_HISTORY], []) for disc in exec_eng.dm.get_disciplines_with_name('MyCase.multi_scenarios'): if isinstance(disc, SoSVerySimpleMultiScenario): print(list(disc.get_scattered_disciplines().keys()), [ 'scenario_1']) dict_values[study_name + '.multi_scenarios.scenario_list'] = ['scenario_1', 'scenario_2', 'scenario_3'] dict_values exec_eng.load_study_from_input_dict(dict_values) exec_eng.display_treeview_nodes() for disc in exec_eng.dm.get_disciplines_with_name('MyCase.multi_scenarios'): if isinstance(disc, SoSVerySimpleMultiScenario): print(list(disc.get_scattered_disciplines().keys()), [ 'scenario_1', 'scenario_2', 'scenario_3']) dict_values[study_name + '.multi_scenarios.scenario_list'] = [] dict_values exec_eng.load_study_from_input_dict(dict_values) exec_eng.display_treeview_nodes() for disc in exec_eng.dm.get_disciplines_with_name('MyCase.multi_scenarios'): if isinstance(disc, SoSVerySimpleMultiScenario): print( list(disc.get_scattered_disciplines().keys()), []) dict_values[study_name + '.multi_scenarios.scenario_list'] = ['scenario_A', 'scenario_B'] dict_values print( [key for key in exec_eng.dm.data_id_map.keys() if 'scenario_1' in key and key.split('.')[-1] not in SoSDiscipline.NUM_DESC_IN and key.split('.')[-1] not in SoSCoupling.DEFAULT_NUMERICAL_PARAM_OUT_OF_INIT and key.split('.')[-1] != SoSCoupling.RESIDUALS_HISTORY], []) print( [key for key in exec_eng.dm.data_id_map.keys() if 'scenario_2' in key and key.split('.')[-1] not in SoSDiscipline.NUM_DESC_IN and key.split('.')[-1] not in SoSCoupling.DEFAULT_NUMERICAL_PARAM_OUT_OF_INIT and key.split('.')[-1] != SoSCoupling.RESIDUALS_HISTORY], []) print( [key for key in exec_eng.dm.data_id_map.keys() if 'scenario_3' in key and key.split('.')[-1] not in SoSDiscipline.NUM_DESC_IN and key.split('.')[-1] not in SoSCoupling.DEFAULT_NUMERICAL_PARAM_OUT_OF_INIT and key.split('.')[-1] != SoSCoupling.RESIDUALS_HISTORY], []) exec_eng.load_study_from_input_dict(dict_values) exec_eng.display_treeview_nodes() for disc in exec_eng.dm.get_disciplines_with_name('MyCase.multi_scenarios'): if isinstance(disc, SoSVerySimpleMultiScenario): print(list(disc.get_scattered_disciplines().keys()), [ 'scenario_A', 'scenario_B']) scenario_list = ['scenario_A', 'scenario_B'] for scenario in scenario_list: a1 = 3 b1 = 4 a2 = 6 b2 = 2 x1 = 2 x2 = 4 dict_values[study_name + '.name_1.a'] = a1 dict_values[study_name + '.name_2.a'] = a2 dict_values[study_name + '.multi_scenarios.' 
+ scenario + '.Disc1.name_1.b'] = b1 dict_values[study_name + '.multi_scenarios.' + scenario + '.Disc1.name_2.b'] = b2 dict_values[study_name + '.multi_scenarios.' + scenario + '.Disc3.constant'] = 3 dict_values[study_name + '.multi_scenarios.' + scenario + '.Disc3.power'] = 2 dict_values[study_name + '.multi_scenarios.scenario_A.Disc3.z'] = 1.2 dict_values[study_name + '.multi_scenarios.scenario_B.Disc3.z'] = 1.5 dict_values[study_name + '.name_1.x'] = x1 dict_values[study_name + '.name_2.x'] = x2 dict_values exec_eng.load_study_from_input_dict(dict_values) exec_eng.execute() for disc in exec_eng.dm.get_disciplines_with_name('MyCase.multi_scenarios'): if isinstance(disc, SoSVerySimpleMultiScenario): print( [key for key in list(disc.get_data_io_dict('in').keys()) if key not in disc.NUM_DESC_IN], ['scenario_list']) print(exec_eng.dm.get_value( f'{study_name}.multi_scenarios.scenario_list'), ['scenario_A', 'scenario_B']) print( list(exec_eng.dm.get_disciplines_with_name( f'{study_name}')[0].get_sosdisc_outputs().keys()), ['residuals_history']) elif isinstance(disc, SoSScatterData): print( list(disc.get_data_io_dict('in').keys()), ['x_dict', 'scenario_list']) print( list(disc.get_data_io_dict('out').keys()), ['scenario_A.x', 'scenario_B.x']) print(exec_eng.dm.get_value( f'{study_name}.multi_scenarios.x_dict'), {'scenario_A': 2, 'scenario_B': 4}) print(exec_eng.dm.get_value( f'{study_name}.multi_scenarios.scenario_A.x'), 2) print(exec_eng.dm.get_value( f'{study_name}.multi_scenarios.scenario_B.x'), 4) ###Output ['scenario_list'] ['scenario_list'] ['scenario_A', 'scenario_B'] ['scenario_A', 'scenario_B'] ['residuals_history'] ['residuals_history']
.ipynb_checkpoints/example-img-checkpoint.ipynb
###Markdown Neural Processes for ImagesThis notebook contains examples of Neural Processes for images and how these can be used for various tasks like inpainting. Load a trained model ###Code import json import torch import matplotlib.pyplot as plt from neural_process import NeuralProcessImg # Run on GPU if available (the weights below are mapped to CPU storage either way) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # Load config file for mnist model folder = 'results_2020-12-30_16-04' config_file = folder + '/config.json' model_file = folder + '/model.pt' with open(config_file) as f: config = json.load(f) # Load trained model model = NeuralProcessImg(config["img_size"], config["r_dim"], config["h_dim"], config["z_dim"]).to(device) model.load_state_dict(torch.load(model_file, map_location=lambda storage, loc: storage)) ###Output _____no_output_____ ###Markdown Visualize some MNIST samples ###Code import os from skimage import io import torchvision.datasets.mnist as mnist root='../mnist_data/MNIST/raw' train_set = ( mnist.read_image_file(os.path.join(root, 'train-images-idx3-ubyte')), mnist.read_label_file(os.path.join(root, 'train-labels-idx1-ubyte')) ) test_set = ( mnist.read_image_file(os.path.join(root, 't10k-images-idx3-ubyte')), mnist.read_label_file(os.path.join(root, 't10k-labels-idx1-ubyte')) ) print("training set :",train_set[0].size()) print("test set :",test_set[0].size()) def convert_to_img(train=True): if(train): f=open(root+'train.txt','w') data_path=root+'/train/' if(not os.path.exists(data_path)): os.makedirs(data_path) for i, (img,label) in enumerate(zip(train_set[0],train_set[1])): img_path=data_path+str(i)+'.jpg' io.imsave(img_path,img.numpy()) f.write(img_path+' '+str(label)+'\n') f.close() else: f = open(root + 'test.txt', 'w') data_path = root + '/test/' if (not os.path.exists(data_path)): os.makedirs(data_path) for i, (img,label) in enumerate(zip(test_set[0],test_set[1])): img_path = data_path+ str(i) + '.jpg' io.imsave(img_path, img.numpy()) f.write(img_path + ' ' + str(label) + '\n') f.close() convert_to_img(True) # convert the training set convert_to_img(False) # convert the test set import imageio from torchvision.utils import make_grid import numpy as np # Read images into torch.Tensor all_imgs = torch.zeros(8, 1, 28, 28) for i in range(8): img = imageio.imread('../mnist_data/MNIST/raw/train/{}.jpg'.format(i + 1)) all_imgs[i] = torch.Tensor(img / 255.).unsqueeze(0) # Visualize sample on a grid img_grid = make_grid(all_imgs, nrow=4, pad_value=1.) plt.imshow(img_grid.permute(1, 2, 0).numpy()) ###Output _____no_output_____ ###Markdown Inpainting images with Neural ProcessesInpainting is the task of inferring missing pixels in a partially occluded image. Here we show examples of how Neural Processes can be used to solve this problem. Occluding image ###Code # Select one of the images to perform inpainting img = all_imgs[0] # Define a binary mask to occlude image.
For Neural Processes, # the context points will be defined as the visible pixels context_mask = torch.zeros((28, 28)).byte() context_mask[:14, :] = 1 # Top half of pixels are visible # Show occluded image occluded_img = img * context_mask.float() plt.imshow(occluded_img.permute(1, 2, 0).numpy()) ###Output _____no_output_____ ###Markdown Generating inpaintings ###Code from utils import inpaint num_inpaintings = 10 # Number of inpaintings to sample from model all_inpaintings = torch.zeros(num_inpaintings, 1, 28, 28) # Sample several inpaintings labels = [0,1,2,3,4,5,6,7,8,9] one_hot = torch.zeros(10, 10).scatter_(1, label.unsqueeze(1), 1) for i in range(num_inpaintings): all_inpaintings[i] = inpaint(model, img, context_mask, onehot[i], device) # Visualize inpainting results on a grid inpainting_grid = make_grid(all_inpaintings, nrow=4, pad_value=1.) plt.imshow(inpainting_grid.permute(1, 2, 0).numpy()) ###Output _____no_output_____ ###Markdown As can be seen, the inpaintings match the context pixels and are fairly diverse. Different masksWe can use a variety of masks and image to test the model. ###Code # Select one of the images to perform inpainting img = all_imgs[1] # Define a random mask context_mask = (torch.Tensor(32, 32).uniform_() > 0.9).byte() # Visualize occluded image occluded_img = img * context_mask.float() plt.imshow(occluded_img.permute(1, 2, 0).numpy()) num_inpaintings = 8 # Number of inpaintings to sample from model all_inpaintings = torch.zeros(num_inpaintings, 3, 32, 32) # Sample several inpaintings for i in range(num_inpaintings): all_inpaintings[i] = inpaint(model, img, context_mask, device) # Visualize inpainting results on a grid inpainting_grid = make_grid(all_inpaintings, nrow=4, pad_value=1.) grid_as_np = inpainting_grid.permute(1, 2, 0).numpy() # If NP returns out of range values for pixels, clip values plt.imshow(np.clip(grid_as_np, 0, 1)) ###Output _____no_output_____
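###Markdown
The binary masks above are all the Neural Process needs to define its context set: every visible pixel contributes its location as an input point and its intensity as the corresponding target value. The helper below is only an illustrative sketch of that conversion (the notebook itself relies on the `inpaint` helper imported from `utils`); it assumes images are `(C, H, W)` tensors and masks are `(H, W)`.

###Code
# Illustrative sketch only: turn a binary context mask into Neural Process
# context points. Not the library implementation.
def mask_to_context_points(img, context_mask):
    # Indices of the visible pixels
    rows, cols = torch.nonzero(context_mask, as_tuple=True)
    # x: pixel coordinates, shape (num_points, 2)
    x_context = torch.stack([rows, cols], dim=1).float()
    # y: pixel intensities at those coordinates, shape (num_points, C)
    y_context = img[:, rows, cols].t()
    return x_context, y_context

x_ctx, y_ctx = mask_to_context_points(img, context_mask)

###Output
 _____no_output_____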
Deep Learning/cifar10.ipynb
###Markdown ###Code import keras from keras.datasets import cifar10 from google.colab import drive # Parameters batch_size = 32 num_classes = 10 epochs = 50 drive.mount('/content/gdrive') # Load cifar-10 dataset then # Split dataset into train and test (x_train, y_train), (x_test, y_test) = cifar10.load_data() # One-hot y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Conv2D, MaxPooling2D #Create model model = Sequential() model.add(Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=x_train.shape[1:])) model.add(Conv2D(32, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.2)) model.add(Conv2D(64, (5, 5), padding='same', activation='relu')) model.add(Conv2D(64, (5, 5), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(512, activation='relu')) model.add(Dropout(0.4)) model.add(Dense(num_classes, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) #checkpoint from keras.callbacks import ModelCheckpoint filepath = F"/content/gdrive/My Drive/weights.h5" checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max') callbacks_list = [checkpoint] x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255.0 x_test /= 255.0 from keras.preprocessing.image import ImageDataGenerator datagen = ImageDataGenerator( width_shift_range=0.1, height_shift_range=0.1, #shear_range=0.2, horizontal_flip=True) datagen.fit(x_train) # Fit the model on the batches generated by datagen.flow(). model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size), epochs=epochs, validation_data=(x_test, y_test), workers=8, callbacks=callbacks_list, use_multiprocessing=True) # Save model.save(F"/content/gdrive/My Drive/model.h5") model.save_weights(F"/content/gdrive/My Drive/weights.h5") scores = model.evaluate(x_test, y_test, verbose=1) print('Test loss:', scores[0]) print('Test accuracy:', scores[1]) import cv2, numpy as np from keras.preprocessing import image from keras.models import load_model import matplotlib.pyplot as plt model = load_model(F"/content/gdrive/My Drive/model.h5") image_path = F"/content/gdrive/My Drive/Beagle.jpg" original = image.load_img(image_path) img = image.load_img(image_path, target_size=(32,32,3)) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) pred = model.predict(x) pred = np.argmax(pred, axis=1) def get_label(num): if (num == 0): return 'airplane' elif (num == 1): return 'automobile' elif (num == 2): return 'bird' elif (num == 3): return 'cat' elif (num == 4): return 'deer' elif (num == 5): return 'dog' elif (num == 6): return 'frog' elif (num == 7): return 'horse' elif (num == 8): return 'ship' else: return 'truck' print('Prediction : ', get_label(pred)) plt.imshow(img) plt.imshow(original) plt.show from keras.utils.vis_utils import plot_model plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True) ###Output _____no_output_____
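###Markdown
Since the predicted class indices follow the standard CIFAR-10 label order (the same order used in the `get_label` chain above), the lookup can also be written as a simple list index. This is just a compact sketch of the same mapping.

###Code
# Standard CIFAR-10 class order, matching the if/elif chain above
CIFAR10_CLASSES = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                   'dog', 'frog', 'horse', 'ship', 'truck']

def get_label(num):
    # num is the integer class index produced by np.argmax(pred, axis=1)
    return CIFAR10_CLASSES[int(num)]

print('Prediction : ', get_label(pred))

###Output
 _____no_output_____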
notebooks/canada/on/kitchener_utilities.ipynb
###Markdown [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ryanfobel/utility-bill-scraper/blob/main/notebooks%2Fcanada%2Fon%2Fkitchener_utilities.ipynb) IntroductionThis notebook will help you to download `pdf` statements and data from a [Kitchener Utilities](https://www.kitchenerutilities.ca) account. Launch an interactive version by clicking on the `Open in Colab` badge at the top of this page. Download dataTo run the notebook, choose `Runtime/Run all` from the menu or press `CTRL`+`F9`. The notebook may promp you for inputs (e.g., authorization to conect to your google drive, username, password). If you're running this in Google Colab, the files will be automatically saved to your Google Drive in the folder `Google Drive/Colab Notebooks/data`. ###Code try: import utility_bill_scraper except ModuleNotFoundError: import subprocess import sys cmd = ( f"{sys.executable} -m pip install --upgrade --upgrade-strategy " "only-if-needed " "git+https://github.com/ryanfobel/utility-bill-scraper.git" ) subprocess.check_output(cmd, stderr=subprocess.STDOUT, shell=True).decode("utf-8") from utility_bill_scraper import install_colab_dependencies install_colab_dependencies( required_envs=["KITCHENER_UTILITIES_USER", "KITCHENER_UTILITIES_PASSWORD"] ) %matplotlib inline import datetime as dt import os import sys from cycler import cycler from dotenv import load_dotenv import matplotlib.pyplot as plt import numpy as np from utility_bill_scraper import LIGHT_COLORMAP import utility_bill_scraper.canada.on.kitchener_utilities as ku # Plotting preferences plt.rc("axes", prop_cycle=cycler("color", LIGHT_COLORMAP)) figsize = (12, 6) bin_width = 0.9 alpha = 0.5 transparent = False bbox_inches = "tight" facecolor = "white" # Load the `.env` file into the environment if it exists load_dotenv() api = ku.KitchenerUtilitiesAPI( user=os.getenv("KITCHENER_UTILITIES_USER"), password=os.getenv("KITCHENER_UTILITIES_PASSWORD"), data_path=os.getenv("DATA_PATH", os.path.join("..", "..", "..", "data")), google_sa_credentials=os.getenv("GOOGLE_SA_CREDENTIALS"), browser=os.getenv("BROWSER", "Firefox"), ) # Get up to 24 statements (the most recent). 
updates = api.update(24) if updates is not None: print(f"{ len(updates) } statements_downloaded") api.history("monthly").tail() ###Output Download file from google drive(file_id=1-IYaB4IdO6rQnNmh-Fo8DyEm5OLQCruo, local_path=C:\Users\ryan\AppData\Local\Temp\tmpmgjnquu3\monthly.csv Upload file to google drive(file_id=1-IYaB4IdO6rQnNmh-Fo8DyEm5OLQCruo, local_path=C:\Users\ryan\AppData\Local\Temp\tmpmgjnquu3\monthly.csv 0 statements_downloaded ###Markdown Plotting Monthly consumption history ###Code gas = api.history("monthly") plt.figure(figsize=figsize) gas["Gas Consumption"].plot.bar( width=bin_width, figsize=figsize, ) plt.xticks(rotation=90) plt.title("Monthly Gas Consumption") plt.ylabel("m$^3$") ax = plt.gca() ax.spines["right"].set_visible(False) ax.spines["top"].set_visible(False) os.makedirs("images", exist_ok=True) plt.savefig( os.path.join("images", "monthly_gas_consumption.png"), bbox_inches=bbox_inches, transparent=transparent, facecolor=facecolor, ) locs, labels = plt.xticks() plt.xticks(locs, labels=[label.get_text().split(" ")[0] for label in labels]) plt.figure(figsize=figsize) gas["Water Consumption"].plot.bar( width=bin_width, figsize=figsize, ) plt.xticks(rotation=90) plt.title("Monthly Water Consumption") plt.ylabel("m$^3$") ax = plt.gca() ax.spines["right"].set_visible(False) ax.spines["top"].set_visible(False) os.makedirs("images", exist_ok=True) plt.savefig( os.path.join("images", "monthly_water_consumption.png"), bbox_inches=bbox_inches, transparent=transparent, facecolor=facecolor, ) locs, labels = plt.xticks() plt.xticks(locs, labels=[label.get_text().split(" ")[0] for label in labels]); ###Output _____no_output_____ ###Markdown Annual CO2 emissions ###Code from utility_bill_scraper import GAS_KGCO2_PER_CUBIC_METER gas["kgCO2"] = gas["Gas Consumption"] * GAS_KGCO2_PER_CUBIC_METER gas["year"] = [date.year for date in gas.index] gas["month"] = [date.month for date in gas.index] plt.figure(figsize=figsize) gas.groupby("year").sum()["Gas Consumption"].plot.bar(width=bin_width, alpha=alpha) plt.ylabel("m$^3$") ylim = plt.ylim() ax = plt.gca() ax2 = ax.twinx() plt.ylabel("tCO$_2$e") plt.ylim([GAS_KGCO2_PER_CUBIC_METER * y / 1e3 for y in ylim]) plt.title("Annual CO$_2$e emissions from natural gas") ax.spines["top"].set_visible(False) ax2.spines["top"].set_visible(False) os.makedirs("images", exist_ok=True) plt.savefig( os.path.join("images", "annual_co2_emissions_natural_gas.png"), bbox_inches=bbox_inches, transparent=transparent, facecolor=facecolor, ) ###Output _____no_output_____ ###Markdown CO2 emissions vs previous year ###Code n_years_history = 1 plt.figure(figsize=figsize) for year, df_year in gas.groupby("year"): if year >= dt.datetime.utcnow().year - n_years_history: df_year.sort_values("month", inplace=True) plt.bar( df_year["month"], df_year["Gas Consumption"], label=year, width=bin_width, alpha=alpha, ) plt.legend() plt.ylabel("m$^3$") plt.xlabel("Month") ylim = plt.ylim() ax = plt.gca() ax2 = ax.twinx() plt.ylabel("tCO$_2$e") plt.ylim([GAS_KGCO2_PER_CUBIC_METER * y / 1e3 for y in ylim]) plt.title("Monthly CO$_2$e emissions from natural gas") ax.spines["top"].set_visible(False) ax2.spines["top"].set_visible(False) os.makedirs("images", exist_ok=True) plt.savefig( os.path.join("images", "monthly_co2_emissions_natural_gas.png"), bbox_inches=bbox_inches, transparent=transparent, facecolor=facecolor, ) plt.figure(figsize=figsize) for year, df_year in gas.groupby("year"): if year >= dt.datetime.utcnow().year - n_years_history: df_year.sort_values("month", 
inplace=True) plt.bar( df_year["month"], np.cumsum(df_year["Gas Consumption"]), label=year, width=bin_width, alpha=alpha, ) plt.legend() plt.ylabel("m$^3$") plt.xlabel("Month") ylim = plt.ylim() ax = plt.gca() ax2 = ax.twinx() plt.ylabel("tCO$_2$e") plt.ylim([GAS_KGCO2_PER_CUBIC_METER * y / 1e3 for y in ylim]) plt.title("Cumulative CO$_2$e emissions from natural gas per year") ax.spines["top"].set_visible(False) ax2.spines["top"].set_visible(False) os.makedirs("images", exist_ok=True) plt.savefig( os.path.join("images", "cumulative_co2_emissions_natural_gas.png"), bbox_inches=bbox_inches, transparent=transparent, facecolor=facecolor, ) ###Output _____no_output_____
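###Markdown
The secondary y-axes above are just a unit conversion: cubic metres of natural gas multiplied by the package's emission factor (kg CO2e per m³), divided by 1000 to get tonnes. A small standalone sketch of that conversion (the 150 m³ figure is only a hypothetical example value):

###Code
from utility_bill_scraper import GAS_KGCO2_PER_CUBIC_METER

def gas_m3_to_tco2e(cubic_metres):
    # kg CO2e per m^3 -> tonnes CO2e
    return cubic_metres * GAS_KGCO2_PER_CUBIC_METER / 1e3

print(f"{gas_m3_to_tco2e(150):.2f} tCO2e for a 150 m^3 month")

###Output
 _____no_output_____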
new_model_experiment/1_process_and_train_vae.ipynb
###Markdown Process and train vaeThis notebook inputs a compendium of gene expression data and trains a VAE model that will be used to simulate new experiments.The output of this notebook, which includes the vae models and template experiment, will be used in the next notebook ###Code %load_ext autoreload %autoreload 2 import os import sys import pandas as pd import numpy as np from sklearn import preprocessing import pickle from ponyo import utils, train_vae_modules, simulate_expression_data from generic_expression_patterns_modules import process ###Output WARNING:tensorflow:From /home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/ponyo/helper_vae.py:21: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead. WARNING:tensorflow:From /home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/ponyo/helper_vae.py:25: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead. WARNING:tensorflow:From /home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/ponyo/helper_vae.py:25: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead. ###Markdown User inputs neededUser needs to define the following in the [config file](../configs/config_new_model_experiment.tsv):1. Directory on your local machine to store intermediate and output data files generated (`local_dir`). Make sure to end with `\`.2. Template experiment (`raw_template_filename`). This is the experiment you are interested in studying. This experiment is expected to be a matrix with samples as row and genes as columns (tab-delimited).3. Training compendium used to train VAE (`processed_compendium_filename`). This dataset is expected to be a matrix with samples as row and genes as columns (tab-delimited). Note: if using human gene ids from ensembl and you want to convert these to HGNC symbols, functions are available to do this in `generic_expression_patterns_modules/process_names.R` and `generic_expression_patterns_modules/process.py`. See [example](../human_general_analysis/1_process_recount2_data.ipynb)4. Scaler transform (`scaler_filename`) used to normalize the training compendium. This can be found in the `data/` directory within the analysis folder.5. Directory (`vae_model_dir`) containing trained VAE model (.h5 files) from the previous notebook.6. Size of the latent dimension (`latent_dim`).7. File that maps experiment ids to the associated sample ids (`experiment_to_sample_filename`)8. The delimiter used in the 'experiment_to_sample_filename' file (`metadata_delimiter`)9. The column header/name that contains the experiment ids (`experiment_id_colname`)10. Experiment id (`project_id`) to label newly create simulated experiments.11. The column header/name in the metadatathat contains the sample ids (`sample_id_colname`)The remaining parameters within the `config` file specify values needed to run the next notebook or filenames that are intermediate data files that will be generated when SOPHIE runs. ###Code # Set seeds to get reproducible VAE trained models process.set_all_seeds() ###Output WARNING:tensorflow:From /home/alexandra/Documents/Repos/generic-expression-patterns/generic_expression_patterns_modules/process.py:57: The name tf.set_random_seed is deprecated. Please use tf.compat.v1.set_random_seed instead. 
###Markdown Set parameters for data processing ###Code base_dir = os.path.abspath(os.path.join(os.getcwd(), "../")) # Read in config variables config_filename = os.path.abspath( os.path.join(base_dir, "configs", "config_new_model_experiment.tsv") ) params = utils.read_config(config_filename) local_dir = params["local_dir"] dataset_name = params["dataset_name"] # Column header containing sample ids metadata_colname = params["metadata_colname"] # Template experiment ID project_id = params["project_id"] # Output file: pickled list of shared genes(generated during gene ID mapping) shared_genes_filename = params["shared_genes_filename"] # Output files of pseudomonas template experiment data raw_template_filename = params["raw_template_filename"] processed_template_filename = params["processed_template_filename"] # Output files of compendium data processed_compendium_filename = params["processed_compendium_filename"] normalized_compendium_filename = params["normalized_compendium_filename"] # Output file: pickled scaler (generated during compendium normalization) scaler_filename = params["scaler_filename"] # Load metadata file with mapping between experiments and associated samples metadata_filename = params["experiment_to_sample_filename"] metadata_delimiter = params["metadata_delimiter"] experiment_id_colname = params["experiment_id_colname"] ###Output _____no_output_____ ###Markdown Normalize compendiumHere we will 0-1 normalize expression data ###Code process.normalize_compendium( processed_compendium_filename, normalized_compendium_filename, scaler_filename, ) ###Output input: dataset contains 576 samples and 5891 genes ###Markdown Get raw template experiment ###Code # Get sample ids associated with selected project id sample_ids = simulate_expression_data.get_sample_ids( metadata_filename, metadata_delimiter, experiment_id_colname, project_id, sample_id_colname, ) # Get samples from experiment id processed_compendium = pd.read_csv( processed_compendium_filename, header=0, index_col=0, sep="\t" ) template_data = processed_compendium.loc[sample_ids] template_data.to_csv(raw_template_filename, sep="\t") ###Output _____no_output_____ ###Markdown Train VAE ###Code # Create VAE directories if needed output_dirs = [ os.path.join(base_dir, dataset_name, "models"), os.path.join(base_dir, dataset_name, "logs"), ] NN_architecture = params["NN_architecture"] # Check if NN architecture directory exist otherwise create for each_dir in output_dirs: sub_dir = os.path.join(each_dir, NN_architecture) os.makedirs(sub_dir, exist_ok=True) # Train VAE on new compendium data train_vae_modules.train_vae(config_filename, normalized_compendium_filename) ###Output input dataset contains 576 samples and 5891 genes WARNING:tensorflow:From /home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. tracking <tf.Variable 'Variable:0' shape=() dtype=float32> beta WARNING:tensorflow:From /home/alexandra/anaconda3/envs/generic_expression/lib/python3.7/site-packages/tensorflow_core/python/ops/nn_impl.py:183: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where
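###Markdown
For reference, the 0-1 normalization applied above amounts to fitting a per-gene min-max scaler on the compendium, writing out the transformed matrix, and pickling the fitted scaler so the same transform can be reused or inverted later. The sketch below only illustrates that idea; the actual behaviour is defined by `process.normalize_compendium`, and the use of `MinMaxScaler` here is an assumption about the scaling method.

###Code
# Conceptual sketch of 0-1 normalization with a pickled scaler (assumption:
# MinMaxScaler-style scaling; the real logic lives in process.normalize_compendium)
import pickle
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

def normalize_sketch(compendium_filename, normalized_filename, scaler_filename):
    df = pd.read_csv(compendium_filename, sep="\t", header=0, index_col=0)
    scaler = MinMaxScaler()
    df_scaled = pd.DataFrame(
        scaler.fit_transform(df), index=df.index, columns=df.columns
    )
    df_scaled.to_csv(normalized_filename, sep="\t")
    with open(scaler_filename, "wb") as handle:
        pickle.dump(scaler, handle)

###Output
 _____no_output_____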
MM_material/lecture-materials/lecture-1-introduction.ipynb
###Markdown Welcome to PIC16A! Python with applications*Instructor: Michael Murray **Teaching Assistant: TBC * Reminder of official prerequisitesEnforced requisite: PIC10A, Computer Science 31, or equivalent, with grades of C- or better What we will cover today1. Why python?2. Course goals and objectives4. Overview of the syllabus3. Who is this course for?5. Course format6. Administration, IT requirements and grading Why Python? I'll give two key reasons... 1) Its a nice language to work with!- concise, expressive and human readable (closer to writing out instructions for a human)- details concerning hardware abstracted away- very extensive libraries- highly versatile across environments and applications ###Code # In C++ using namespace std; int main() { string name; cin >> name; cout << "Good evening, " << name << endl; return 0; } # In python name = input() print("Good evening, " + name) ###Output _____no_output_____
IPL EDA - II.ipynb
###Markdown IPL Exoloratory Analysis Part-II In this part-II, we will explore some batting stats from different perspecticves like seasonwise, teamwise, match aggregates, etc. ###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline sns.set() matches = pd.read_csv('matches.csv') deliv = pd.read_csv('deliveries.csv') deliv team_names=['Sunrisers Hyderabad', 'Mumbai Indians', 'Gujarat Lions', 'Rising Pune Supergiant', 'Royal Challengers Bangalore', 'Kolkata Knight Riders', 'Delhi Daredevils', 'Kings XI Punjab', 'Chennai Super Kings', 'Rajasthan Royals', 'Deccan Chargers', 'Kochi Tuskers Kerala', 'Pune Warriors', 'Rising Pune Supergiants', 'Delhi Capitals'] abbrs = ['SRH','MI','GL','RPS','RCB','KKR','DC','KXIP','CSK','RR','SRH','KTK','PW','RPS','DC'] matches.replace(team_names,abbrs,inplace = True) deliv.replace(team_names, abbrs, inplace = True) #matches.drop(columns='umpire3', inplace=True) deliv = deliv[deliv['inning']<3] matches = matches[matches['result']=='normal'] matches = matches[matches['dl_applied'] == 0] deliv = deliv.merge(matches, left_on='match_id', right_on='id') ## Taking care of a few things def batruns(nb): if nb>1: bat = nb - 1 return int(bat) def extraruns(nb): if nb>1: return 1 def is_chased(bat, winner): return int(bat!=winner) deliv['batsman_runs'] = deliv.apply(lambda row: batruns(row['noball_runs']), axis=1) deliv['extra_runs'] = deliv.apply(lambda row: extraruns(row['noball_runs']), axis=1) ###Output _____no_output_____ ###Markdown Total Runs Scored ###Code tot_runs = deliv['total_runs'].sum() print(tot_runs, 'runs have been scored in total') ###Output 227211 runs have been scored in total ###Markdown Average Runs per Match ###Code match = deliv.groupby('match_id')['total_runs'].sum().reset_index() avg_score = match['total_runs'].sum()/len(match['total_runs']) print('Average Match Aggregate Score is', avg_score) ###Output Average Match Aggregate Score is 313.8273480662983 ###Markdown Average Runs per Inning ###Code inn = deliv.groupby(['match_id','inning'])['total_runs'].sum().reset_index() avg_score = inn['total_runs'].sum()/len(inn['total_runs']) print('The average inning score is : ', int(avg_score)) ###Output The average inning score is : 156 ###Markdown Highest and Lowest Aggregate Scores ###Code highest = match['total_runs'].max() lowest = match['total_runs'].min() print('Highest Aggregate score in a match :', highest) print('Lowest Aggregate score in a match :', lowest) ###Output Highest Aggregate score in a match : 471 Lowest Aggregate score in a match : 135 ###Markdown Seasonwise Total & Average Runs ###Code plt.figure(figsize=(16,8)) season = deliv.groupby('season')['total_runs'].sum().reset_index() season['matches']= matches.groupby('season')['id'].count().reset_index()['id'] season['avg_score'] = season['total_runs'] / season['matches'] sns.lineplot(x='season',y='total_runs', data=season, color='orange') plt.xticks(np.arange(2008,2020)) sns.scatterplot(x='season',y='total_runs', data=season, color='red') plt.title('Total Runs Scored per Season') plt.figure(figsize=(16,8)) sns.lineplot(x='season',y='avg_score', data=season, color='orange') plt.xticks(np.arange(2008,2020)) sns.scatterplot(x='season',y='avg_score', data=season, color='crimson') plt.title('Average Runs Scored per Match per Season') ###Output _____no_output_____ ###Markdown Some observations :- The total runs score was the highest for the 2013 season (due to the most number of matches).- Also, the average score was the highest for 
the 2018 season.- The lowest total and average score was for the 2009 season (which was played in South Africa). Teamwise Average Inning Score ###Code teams = deliv.groupby(['batting_team'])['total_runs'].sum().reset_index() teams.columns=['Team','Total Runs'] played = pd.concat([matches['team1'], matches['team2']]) played = played.value_counts().reset_index() played.columns = ['Team', 'Matches'] teams = teams.merge(played, on='Team') teams['Avg Score'] = teams['Total Runs']/teams['Matches'] teams = teams.sort_values(by='Avg Score', ascending = False) teams plt.figure(figsize=(16,8)) sns.barplot(x='Team', y='Avg Score', data=teams) ###Output _____no_output_____ ###Markdown Chennai Super Kings have the highest average inning score, although there is not much difference in the top average scores Locating the Batsman's Paradise ###Code mat = matches['venue'].value_counts().reset_index() venues = deliv.groupby('venue')['total_runs'].sum().reset_index() mat.columns = ['venue', 'matches'] venues = mat.merge(venues, on='venue') venues['avg_score'] = venues['total_runs']/venues['matches'] venues = venues.sort_values(by = 'avg_score', ascending=False) venues[:10] plt.figure(figsize=(16,8)) sns.barplot(y='venue', x='avg_score', data=venues[:15], orient='h') ###Output _____no_output_____ ###Markdown Brabourne Stadium has the highest avergae match score (almost 350) Highest and Lowest Inning Totals ###Code scores = deliv.groupby(['match_id','inning'])['total_runs'].sum().reset_index() scores.sort_values(by='total_runs', ascending=False) ## Highest Inning Score = 263 matches[matches['id'] == 411] ## Lowest Inning Score == 49 matches[matches['id'] == 27] ###Output _____no_output_____ ###Markdown Well, well, well, the highest and the lowest total records are held by the same team - Royals Challengers Bangalore * Ee sala cup namde intensifies * Teamwise Highest and Lowest Scores ###Code scores = deliv.groupby(['match_id','batting_team'])['total_runs'].sum().reset_index() high = scores.groupby('batting_team')['total_runs'].max() high low = scores.groupby('batting_team')['total_runs'].min() low.index plt.figure(figsize=(18,10)) ax = plt.subplot(111) ind = np.arange(12) width = 0.25 yvals = high rects1 = ax.bar(ind, yvals, width, color='steelblue') zvals = low rects2 = ax.bar(ind+width, zvals, width, color='darkorange') ax.set_xticks(ind+width) ax.set_xticklabels( (low.index) ) ax.legend( (rects1[0], rects2[0]), ('Highest Scores', 'Lowest Scores') ) def autolabel(rects): for rect in rects: h = rect.get_height() ax.text(rect.get_x()+rect.get_width()/2., 1.01*h, '%d'%int(h), ha='center', va='bottom') autolabel(rects1) autolabel(rects2) plt.show() ###Output _____no_output_____ ###Markdown Highest chased and Lowest defended scores ###Code res = deliv.groupby(['match_id','inning','batting_team','bowling_team','winner'])['total_runs'].sum().reset_index() res = res[res['batting_team']== res['winner']] chased = res[res['inning']== 2] chased.sort_values(by='total_runs', ascending=False)[:10] defend = res[res['inning']==1] defend.sort_values(by='total_runs')[:10] ###Output _____no_output_____ ###Markdown Important Note : The lowest defended scores here are 106 defended twice. What we fail to tell here is that these matches are rain-affected and thus are played for less than 20 overs. The lowest defended score in a full 20-over match is 116 won by CSK against KXIP. 
200+ in the 1st Innings ###Code ## Let's see how many times 200 has been scored in the first innings doub = deliv.groupby(['match_id', 'inning','season','batting_team','bowling_team','winner'])['total_runs'].sum().reset_index() doub = doub[doub['total_runs']>=200] first = doub[doub['inning'] ==1] first[:10] ###Output _____no_output_____ ###Markdown 200+ in the 2nd Innings ###Code ## Let's see how many times 200 has been scored in the first innings sec = doub[doub['inning'] ==2] sec[:10] ###Output _____no_output_____ ###Markdown Teamwise 200+ scores ###Code teams = doub.groupby(['inning','batting_team']).count().reset_index() most_200 = doub.groupby(['batting_team'])['match_id'].count().reset_index() most_200 plt.figure(figsize=(16,8)) sns.barplot(x='batting_team', y='match_id', data=teams, hue='inning') ###Output _____no_output_____ ###Markdown RCB have the most 200+ scores in their 1st innings, while CSK and KXIP do well during their chase. Can you chase a 200+ score? ###Code first['is_chased'] = first.apply(lambda row: is_chased(row['batting_team'], row['winner']), axis=1) chased_200 = first[first['is_chased'] == 1] chased_200 ###Output <ipython-input-29-6d512bf9257b>:1: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy first['is_chased'] = first.apply(lambda row: is_chased(row['batting_team'], row['winner']), axis=1) ###Markdown There are 13 instances of a 200+ target being successfully being chased Overwise Runs Scored ###Code ov_grp = deliv.groupby(['over']) runs = ov_grp['total_runs'].sum().reset_index() plt.figure(figsize=(16,8)) sns.lineplot(x='over', y='total_runs', data=runs) plt.xticks(np.arange(1,21)) sns.scatterplot(x='over', y='total_runs', data=runs) ###Output _____no_output_____ ###Markdown See that sharp decline at the 7th over? It's the end of the powerplay which forced fielding restrictions. Also, the falling of graph at the death indicates two things :- More wickets falling at the death as batsmen try to hit everything into the outer space.- Most 2nd innings do not get to the 19th or 20th over, the match finishes way before that. Runs per Over for different teams ###Code team_rpo = deliv.groupby(['over','batting_team'])['total_runs'].sum().reset_index() exc_teams = ['RPS','GL','KTK','PW'] team_rpo = team_rpo[~team_rpo['batting_team'].isin(exc_teams)] plt.figure(figsize=(20,10)) sns.lineplot(x='over', y='total_runs', data=team_rpo, hue='batting_team') ###Output _____no_output_____
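###Markdown
The dip at overs 19 and 20 in the curves above mixes the two effects noted earlier: lower scoring as wickets fall at the death, and second innings that simply finish early. A quick follow-up sketch that normalises each over's runs by the number of innings which actually reached that over separates the two:

###Code
# Runs per over, averaged only over the innings in which that over was bowled
innings_reaching_over = deliv.drop_duplicates(
    ['match_id', 'inning', 'over']).groupby('over').size()
avg_rpo = deliv.groupby('over')['total_runs'].sum() / innings_reaching_over

plt.figure(figsize=(16, 8))
sns.lineplot(x=avg_rpo.index, y=avg_rpo.values)
plt.xticks(np.arange(1, 21))
plt.xlabel('over')
plt.ylabel('Average runs in the over (per inning reaching it)')

###Output
 _____no_output_____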
talks/uc2017/Cloning Your Portal Users, Groups and Content using ArcGIS API for Python/clone_portal_users_groups_and_content.ipynb
###Markdown Clone Portal users, groups and contentThis sample notebook can be used for cloning a portal, from say, a staging to a production environment. It clones the users, groups and the content. It does not copy over services though, and works at the tier of portal items.**Note**: To user this notebook as a Python script, checkout the accompanying [SDK GitHub](https://github.com/Esri/arcgis-python-api) repository. Running this as a script from a Python IDE allows you to set breakpoints, debug and inspect the script when an exception is raised. ###Code from arcgis.gis import GIS from IPython.display import display from getpass import getpass ###Output _____no_output_____ ###Markdown Define the source and target portalsTo start with, define the source and target portals. Connect to them using accounts with administrative privileges: ###Code source_password = getpass() target_password = getpass() source = GIS("https://dev005513.esri.com/portal", 'admin', source_password, verify_cert=False) target = GIS("https://dev005514.esri.com/portal", 'admin', target_password, verify_cert=False) target_admin_username = 'admin' ###Output ········ ········ ###Markdown UsersList the users in the source and target portals. We do not want to copy over system accounts since those would be available in the target portal as well. Hence, filter the search by negating any account that starts with 'esri_'. We also do not want to copy over the [initial administrator account](http://server.arcgis.com/en/portal/latest/administer/linux/about-the-initial-administrator-account.htm) as one would be present in the target as well. Hence, negate the account that starts with `admin` which happens to be the administrator account on source portal. ###Code #!esri_ & !admin source_users = source.users.search('!esri_ & !admin') for user in source_users: print(user.username + "\t:\t" + str(user.role)) ###Output brown.rogers : org_user davis.reed : org_admin johnson.stewart : org_user jones.morris : org_user miller.cook : org_publisher moore.bell : org_publisher smith.collins : org_admin taylor.murphy : org_publisher williams.sanchez : org_user wilson.morgan : org_publisher ###Markdown Get the number of users to migrate: ###Code len(source_users) ###Output _____no_output_____ ###Markdown Get the list of users already present in the target portal. Similar to earlier, filter out system and initial administrator accounts. The name of the admin account on target portal is `admin` as well in this example. ###Code # filter out system and initial administrator accounts target_users = target.users.search('!esri_ & !admin & !system_publisher') target_users ###Output _____no_output_____ ###Markdown If users found on source portal were already in the target portal, run the following code to delete them. You can choose to not delete them as well. Remove existing users from target portalIf you want to clean up the target portal except for the initial administrator account, run the cell below. As you delete, you may opt to assign their content to the initial administrator account. 
###Code for source_user in source_users: try: target_user = target.users.get(source_user.username) if target_user is not None: print('Deleting user: ' + target_user.fullName) target_user.reassign_to(target_admin_username) target_user.delete() except: print('User {} does not exist in Target Portal'.format(source_user.username)) ###Output _____no_output_____ ###Markdown Copy UsersCreate a function that will accept connection to the target portal, `User` objects from source portal and password to create users with. In addition to creating the users, this function will set their access, description, tags and other similar properties from source. If a user by the same name already exists in the target portal (possible if you opted not to clean out the target portal) then this function prints out an error message. ###Code def copy_user(target_portal, source_user, password): # See if the user has firstName and lastName properties try: first_name = source_user.firstName last_name = source_user.lastName except: # if not, split the fullName full_name = source_user.fullName first_name = full_name.split()[0] try: last_name = full_name.split()[1] except: last_name = 'NoLastName' try: # create user target_user = target_portal.users.create(source_user.username, password, first_name, last_name, source_user.email, source_user.description, source_user.role) # update user properties target_user.update(source_user.access, source_user.preferredView, source_user.description, source_user.tags, source_user.get_thumbnail_link(), culture=source_user.culture, region=source_user.region) return target_user except Exception as Ex: print(str(Ex)) print("Unable to create user "+ source_user.username) return None ###Output _____no_output_____ ###Markdown For each user in source portal, make a corresponding user in target portal. In this sample, we provide a common password to all users `TestPassword@123` as we are creating users off the built-in identity store. If you are creating users off your enterprise identity store, you can ignore the `password` parameter and use the `provider` and `idp_username` parameters as explained in the [API reference doc](http://esri.github.io/arcgis-python-api/apidoc/html/arcgis.gis.htmlarcgis.gis.UserManager.create). 
###Code for user in source_users: print("Creating user: " + user.username) copy_user(target, user, 'TestPassword@123') ###Output <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)> Unable to create user brown.rogers <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)> Unable to create user davis.reed <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)> Unable to create user johnson.stewart <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)> Unable to create user jones.morris <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)> Unable to create user miller.cook <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)> Unable to create user moore.bell <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)> Unable to create user smith.collins <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)> Unable to create user taylor.murphy <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)> Unable to create user williams.sanchez <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)> Unable to create user wilson.morgan ###Markdown Verify that users have been added to target portal: ###Code target_users = target.users.search() target_users ###Output _____no_output_____ ###Markdown Thus, users have been successfully added to the target portal Groups List the groups in the source and target portals. Similar to how we searched for users, we will ignore the system created and default groups as they would be available on the target portal as well. ###Code # filter out system created groups source_groups = source.groups.search("!owner:esri_* & !Basemaps") source_groups target_groups = target.groups.search("!owner:esri_* & !Basemaps") target_groups ###Output _____no_output_____ ###Markdown If any of the groups from source are already in the target, run the following code to delete them. If the group belongs to any of default user accounts, don't delete it. This step is optional, you may choose to not delete those groups if you prefer to retain them as is. ###Code for tg in target_groups: for sg in source_groups: if sg.title == tg.title and (not tg.owner.startswith('esri_')): print("Cleaning up group {} in target Portal...".format(tg.title)) tg.delete() break ###Output _____no_output_____ ###Markdown Copy GroupsLet us create a function that will clone the groups one at a time. As you call this function in a loop for each group, it reads the source group's properties, downloads thumbnail into a temporary file then creates a similar named group on target and applies those properties and thumbnail. If one of your portals is an organization on ArcGIS Online and other is an ArcGIS Enterprise, certain privacy properties need to be adapted. This function takes care of that. After creating the group, it finds which users were members of it and adds them appropriately. 
###Code import tempfile GROUP_COPY_PROPERTIES = ['title', 'description', 'tags', 'snippet', 'phone', 'access', 'isInvitationOnly'] def copy_group(target, source, source_group): with tempfile.TemporaryDirectory() as temp_dir: try: target_group = {} for property_name in GROUP_COPY_PROPERTIES: target_group[property_name] = source_group[property_name] if source_group['access'] == 'org' and target.properties['portalMode'] == 'singletenant': #cloning from ArcGIS Online to ArcGIS Enterprise target_group['access'] = 'public' elif source_group['access'] == 'public'\ and source.properties['portalMode'] == 'singletenant'\ and target.properties['portalMode'] == 'multitenant'\ and 'id' in target.properties: #cloning from ArcGIS Enterprise to ArcGIS Online org target_group['access'] = 'org' # Download the thumbnail (if one exists) thumbnail_file = None if 'thumbnail' in group: target_group['thumbnail'] = group.download_thumbnail(temp_dir) # Create the group in the target portal copied_group = target.groups.create_from_dict(target_group) # Reassign all groups to correct owners, add users, and find shared items members = group.get_members() if not members['owner'] == target_admin_username: copied_group.reassign_to(members['owner']) if members['users']: copied_group.add_users(members['users']) return copied_group except: print("Error creating " + source_group['title']) ###Output _____no_output_____ ###Markdown For each group in source portal, make a corresponding group in target portal. ###Code from IPython.display import display for group in source_groups: target_group = copy_group(target, source, group) if target_group: display(target_group) ###Output _____no_output_____ ###Markdown As you can see, we were able to add the groups with their thumbnails. Now let us verify that groups can be listed on the target portal: ###Code target_groups = target.groups.search() target_groups ###Output _____no_output_____ ###Markdown With this part of the sample, we have successfully created users, groups and added the appropriate users to these groups. Thus, you can call the `get_members()` method one of the groups to view its members: ###Code group1 = target_groups[0] group1.get_members() ###Output _____no_output_____ ###Markdown Items Copying items consists of multiple steps as explained in the following section of the sample: 1. [For each user create a mapping of itemId to the `Item`](For-each-user-create-a-mapping-of-itemId-to-the-Item) 2. [Prepare sharing information for each item](Prepare-sharing-information-for-each-item) 1. [Print a mapping of item and its group membership](Print-a-mapping-of-item-and-its-group-membership) 3. [Copy items one by one](Copy-Items) 4. 
[Establish relationship between items](establish-relationship-between-items) For each user create a mapping of itemId to the `Item`Do this for every folder in the user's account on the source portal ###Code source_items_by_id = {} for user in source_users: num_items = 0 num_folders = 0 print("Collecting item ids for {}".format(user.username), end="\t\t") user_content = user.items() # Get item ids from root folder first for item in user_content: num_items += 1 source_items_by_id[item.itemid] = item # Get item ids from each of the folders next folders = user.folders for folder in folders: num_folders += 1 folder_items = user.items(folder=folder['title']) for item in folder_items: num_items += 1 source_items_by_id[item.itemid] = item print("Number of folders {} # Number of items {}".format(str(num_folders), str(num_items))) ###Output Collecting item ids for brown.rogers Number of folders 1 # Number of items 3 Collecting item ids for davis.reed Number of folders 1 # Number of items 3 Collecting item ids for johnson.stewart Number of folders 1 # Number of items 3 Collecting item ids for jones.morris Number of folders 1 # Number of items 3 Collecting item ids for miller.cook Number of folders 1 # Number of items 3 Collecting item ids for moore.bell Number of folders 1 # Number of items 3 Collecting item ids for project_archiver Number of folders 7 # Number of items 18 Collecting item ids for smith.collins Number of folders 1 # Number of items 4 Collecting item ids for taylor.murphy Number of folders 1 # Number of items 3 Collecting item ids for williams.sanchez Number of folders 1 # Number of items 3 Collecting item ids for wilson.morgan Number of folders 1 # Number of items 3 ###Markdown Let us print the dictionary of `{item_id : Item object}` ###Code source_items_by_id ###Output _____no_output_____ ###Markdown Prepare sharing information for each itemUsing the dictionary we created above, find to which groups are each of the items shared to. 
###Code for group in source_groups: #iterate through each item shared to the source group for group_item in group.content(): try: #get the item item = source_items_by_id[group_item.itemid] if item is not None: if not 'groups'in item: item['groups'] = [] #assign the target portal's corresponding group's name item['groups'].append(group['title']) except: print("Cannot find item : " + group_item.itemid) ###Output _____no_output_____ ###Markdown Print a mapping of item and its group membership ###Code for key in source_items_by_id.keys(): item = source_items_by_id[key] print("\n{:40s}".format(item.title), end = " # ") if 'groups' in item: print(item.access, end = " # ") print(item.groups, end = "") ###Output KS # NC # AR # set2_catalina-points # FL # KS # set1_GeoJson # set2_australia # NV # AZ # NV # FL # set3_Streets # set2_counties # ID # Brown Rogers response locations # shared # ['Central Services'] set1_Chicago # set2_USAcities # Jones Morris response locations # shared # ['Customer Service, Finance, Billing and Accounting'] set2_Chicago # Miller Cook response locations # shared # ['Demographic Content'] ID # set1_fortune500 # set1_gov_sites_registration # Smith Collins response locations # shared # ['Central Services'] set1_india # Johnson Stewart response locations # shared # ['Central Services'] set2_SD_crime # IN # set1_GeoJson # LA # Moore Bell response locations # shared # ['Compliance', 'Demographic Content'] set2_empty # Williams Sanchez response locations # shared # ['Customer Service, Finance, Billing and Accounting'] NH # IN # AR # AZ # Wilson Morgan response locations # shared # ['Compliance', 'Demographic Content'] Davis Reed response locations # shared # ['Demographic Content'] Smith Collins response locations # NC # set1_mapping_tech # USA_cities_Fortune_500 # Taylor Murphy response locations # shared # ['Central Services', 'Compliance'] set2_Voronoi-diagram # set1_major_cities # LA # NH # ###Markdown As we can see from above, some items are shared to a few groups while some are not. Copy ItemsBelow we define a function that you can call in a loop for each item in the dictionary we composed earlier. If the item is a text based item such as a Web Map or a file based item such as a layer package, it downloads the item's data to a temporary directory and uses that for creating the target item during cloning. You can find the [exhaustive list of different items](http://doc.arcgis.com/en/arcgis-online/reference/supported-items.htm) that you can upload to your portal and their corresponding item types from the [REST API documentation](http://resources.arcgis.com/en/help/arcgis-rest-api/index.html/Items_and_item_types/02r3000000ms000000/). For brevity, this sample covers only a subset of those items. Note, if the item points to a web layer URL, the target item would also point to the same URL. 
###Code TEXT_BASED_ITEM_TYPES = frozenset(['Web Map', 'Feature Service', 'Map Service','Web Scene', 'Image Service', 'Feature Collection', 'Feature Collection Template', 'Web Mapping Application', 'Mobile Application', 'Symbol Set', 'Color Set', 'Windows Viewer Configuration']) FILE_BASED_ITEM_TYPES = frozenset(['File Geodatabase','CSV', 'Image', 'KML', 'Locator Package', 'Map Document', 'Shapefile', 'Microsoft Word', 'PDF', 'Microsoft Powerpoint', 'Microsoft Excel', 'Layer Package', 'Mobile Map Package', 'Geoprocessing Package', 'Scene Package', 'Tile Package', 'Vector Tile Package']) ITEM_COPY_PROPERTIES = ['title', 'type', 'typeKeywords', 'description', 'tags', 'snippet', 'extent', 'spatialReference', 'name', 'accessInformation', 'licenseInfo', 'culture', 'url'] ###Output _____no_output_____ ###Markdown We define the copy function for items below. This function gets the properties of the item from source and applies it to the target. If the items were saved inside a folder, it creates that folder on the target as well. Finally, it sets the privacy (sharing) properties similar to how it was on the source portal. ###Code def copy_item(target, source_item): try: with tempfile.TemporaryDirectory() as temp_dir: item_properties = {} for property_name in ITEM_COPY_PROPERTIES: item_properties[property_name] = source_item[property_name] data_file = None if source_item.type in TEXT_BASED_ITEM_TYPES: # If its a text-based item, then read the text and add it to the request. text = source_item.get_data(False) item_properties['text'] = text elif source_item.type in FILE_BASED_ITEM_TYPES: # download data and add to the request as a file data_file = source_item.download(temp_dir) thumbnail_file = source_item.download_thumbnail(temp_dir) metadata_file = source_item.download_metadata(temp_dir) #find item's owner source_item_owner = source.users.search(source_item.owner)[0] #find item's folder item_folder_titles = [f['title'] for f in source_item_owner.folders if f['id'] == source_item.ownerFolder] folder_name = None if len(item_folder_titles) > 0: folder_name = item_folder_titles[0] #if folder does not exist for target user, create it if folder_name: target_user = target.users.search(source_item.owner)[0] target_user_folders = [f['title'] for f in target_user.folders if f['title'] == folder_name] if len(target_user_folders) == 0: #create the folder target.content.create_folder(folder_name, source_item.owner) # Add the item to the target portal, assign owner and folder target_item = target.content.add(item_properties, data_file, thumbnail_file, metadata_file, source_item.owner, folder_name) #Set sharing (privacy) information share_everyone = source_item.access == 'public' share_org = source_item.access in ['org', 'public'] share_groups = [] if source_item.access == 'shared': share_groups = source_item.groups target_item.share(share_everyone, share_org, share_groups) return target_item except Exception as copy_ex: print("\tError copying " + source_item.title) print("\t" + str(copy_ex)) return None ###Output _____no_output_____ ###Markdown Copy over each item. 
While doing so, construct a dictionary mapping of source item's ID with target item's ID ###Code source_target_itemId_map = {} for key in source_items_by_id.keys(): source_item = source_items_by_id[key] print("Copying {} \tfor\t {}".format(source_item.title, source_item.owner)) target_item = copy_item(target, source_item) if target_item: source_target_itemId_map[key] = target_item.itemid else: source_target_itemId_map[key] = None ###Output Copying KS for smith.collins Copying NC for jones.morris Copying AR for brown.rogers Copying set2_catalina-points for project_archiver Copying FL for davis.reed Copying KS for smith.collins Copying set1_GeoJson for project_archiver Copying set2_australia for project_archiver Copying NV for johnson.stewart Copying AZ for moore.bell Copying NV for johnson.stewart Copying FL for davis.reed Copying set3_Streets for project_archiver Copying set2_counties for project_archiver Copying ID for wilson.morgan Copying Brown Rogers response locations for brown.rogers Copying set1_Chicago for project_archiver Copying set2_USAcities for project_archiver Copying Jones Morris response locations for jones.morris Copying set2_Chicago for project_archiver Copying Miller Cook response locations for miller.cook Copying ID for wilson.morgan Copying set1_fortune500 for project_archiver Copying set1_gov_sites_registration for project_archiver Copying Smith Collins response locations for smith.collins Copying set1_india for project_archiver Copying Johnson Stewart response locations for johnson.stewart Copying set2_SD_crime for project_archiver Copying IN for williams.sanchez Copying set1_GeoJson for project_archiver Copying LA for taylor.murphy Copying Moore Bell response locations for moore.bell Copying set2_empty for project_archiver Copying Williams Sanchez response locations for williams.sanchez Copying NH for miller.cook Copying IN for williams.sanchez Copying AR for brown.rogers Copying AZ for moore.bell Copying Wilson Morgan response locations for wilson.morgan Copying Davis Reed response locations for davis.reed Copying Smith Collins response locations for smith.collins Copying NC for jones.morris Copying set1_mapping_tech for project_archiver Copying USA_cities_Fortune_500 for project_archiver Copying Taylor Murphy response locations for taylor.murphy Copying set2_Voronoi-diagram for project_archiver Copying set1_major_cities for project_archiver Copying LA for taylor.murphy Copying NH for miller.cook ###Markdown We have successfully cloned all the items from source to target. We can query the contents of one of the users below to verify: ###Code user1 = target.users.search()[2] user1 user1.items() ###Output _____no_output_____ ###Markdown We could query the folders belonging to this user and the items within as well ###Code user1.folders user1.items(folder=user1.folders[0]['title']) ###Output _____no_output_____ ###Markdown Establish relationship between itemsSo far, we have successfully cloned users, groups and items from source to target. Next, we will establish identical [relationships](http://resources.arcgis.com/en/help/arcgis-rest-api/index.html/Relationship_types/02r3000000mm000000/) between items as they were in the source portal. ###Code RELATIONSHIP_TYPES = frozenset(['Map2Service', 'WMA2Code', 'Map2FeatureCollection', 'MobileApp2Code', 'Service2Data', 'Service2Service']) ###Output _____no_output_____ ###Markdown Below, we loop through each item in source portal, find to which other item it is related and the type of that relationship. 
If a relationship is found, we find the corresponding items in target and establish the same relationship. To make this work, we will make use of the dictionary that maps the itemIds on source and target we created during the item clone stage. Let us take a look at that dictionary below: ###Code source_target_itemId_map for key in source_target_itemId_map.keys(): source_item = source_items_by_id[key] target_itemid = source_target_itemId_map[key] target_item = target.content.get(target_itemid) print(source_item.title + " # " + source_item.type) for relationship in RELATIONSHIP_TYPES: try: source_related_items = source_item.related_items(relationship) for source_related_item in source_related_items: print("\t\t" + source_related_item.title + " # " + source_related_item.type +"\t## " + relationship) #establish same relationship amongst target items print("\t\t" + "establishing relationship in target portal", end=" ") target_related_itemid = source_target_itemId_map[source_related_item.itemid] target_related_item = target.content.get(target_related_itemid) status = target_item.add_relationship(target_related_item, relationship) print(str(status)) except Exception as rel_ex: print("\t\t Error when checking for " + relationship + " : " + str(rel_ex)) continue ###Output NC # Feature Service NC # CSV ## Service2Data establishing relationship in target portal True AR # Feature Service AR # CSV ## Service2Data establishing relationship in target portal True set2_catalina-points # KML FL # Feature Service FL # CSV ## Service2Data establishing relationship in target portal True KS # Feature Service KS # CSV ## Service2Data establishing relationship in target portal True set1_GeoJson # PDF set2_australia # GeoJson Smith Collins response locations # Web Map AZ # Feature Service AZ # CSV ## Service2Data establishing relationship in target portal True NV # Feature Service NV # CSV ## Service2Data establishing relationship in target portal True FL # CSV set3_Streets # Map Document set2_counties # Locator Package ID # Feature Service ID # CSV ## Service2Data establishing relationship in target portal True Wilson Morgan response locations # Web Map set1_Chicago # CSV set2_USAcities # File Geodatabase Jones Morris response locations # Web Map set2_Chicago # CSV Miller Cook response locations # Web Map ID # CSV set1_fortune500 # File Geodatabase set1_gov_sites_registration # Microsoft Excel NV # CSV set1_india # GeoJson Johnson Stewart response locations # Web Map set2_SD_crime # Map Document IN # Feature Service IN # CSV ## Service2Data establishing relationship in target portal True set1_GeoJson # Microsoft Word LA # Feature Service LA # CSV ## Service2Data establishing relationship in target portal True Moore Bell response locations # Web Map set2_empty # Map Document set1_major_cities # Locator Package NH # CSV AZ # CSV AR # CSV IN # CSV Brown Rogers response locations # Web Map Davis Reed response locations # Web Map Smith Collins response locations # Web Map NC # CSV set1_mapping_tech # Microsoft Powerpoint USA_cities_Fortune_500 # Map Document Taylor Murphy response locations # Web Map set2_Voronoi-diagram # Microsoft Word Williams Sanchez response locations # Web Map LA # CSV NH # Feature Service NH # CSV ## Service2Data establishing relationship in target portal True
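###Markdown
As a final check, the same searches used earlier can be rerun against both portals to confirm that the counts line up after cloning:

###Code
# Quick sanity check, assuming both GIS connections are still open
print("Users  - source: {}  target: {}".format(
    len(source.users.search('!esri_ & !admin')),
    len(target.users.search('!esri_ & !admin'))))
print("Groups - source: {}  target: {}".format(
    len(source.groups.search("!owner:esri_* & !Basemaps")),
    len(target.groups.search("!owner:esri_* & !Basemaps"))))
print("Items copied: {} of {}".format(
    sum(1 for v in source_target_itemId_map.values() if v is not None),
    len(source_items_by_id)))

###Output
 _____no_output_____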
Mathematics/Mathematical Modeling/03.06-Second-Order-Models.ipynb
###Markdown *This notebook contains course material from [CBE30338](https://jckantor.github.io/CBE30338)by Jeffrey Kantor (jeff at nd.edu); the content is available [on Github](https://github.com/jckantor/CBE30338.git).The text is released under the [CC-BY-NC-ND-4.0 license](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode),and code is released under the [MIT license](https://opensource.org/licenses/MIT).* Second Order ModelsA standard form for a generic second-order model for a stable linear system is given by$$\tau^2\frac{d^2y}{dt^2} + 2\zeta\tau\frac{dy}{dt} + y = K u$$where $y$ and $u$ are **deviation variables**. The parameters have a generic interpretation that are commonly used to describe the qualitative characteristics of these systems.| Parameter | Units | Description || :-: | :-: | :-: || $K$ | $\frac{\mbox{units of } y}{\mbox{units of }u}$ | Steady State Gain || $\tau \gt 0$ | time | Time Constant || $\zeta \geq 0$ | dimensionless | Damping Factor |The standard form assumes that a zero input (i.e, $u(t) = 0$) results in a zero response ($y(t) = 0$). In practice, the nominal or quiescent value of $y$ or $u$ may different from zero. In that case we would write$$\tau^2\frac{d^2y}{dt^2} + 2\zeta\tau\frac{dy}{dt} + y - y_{ref} = K\left(u(t) - u_{ref}\right)$$where $u_{ref}$ and $y_{ref}$ represent constant reference values. Step ResponseThe step response corresponds to a system that is initially at steady-state where $u = u_{ref}$ and $y = y_{ref}$. At time $t=0$ the input is incremented by a constant value U, i.e. $u = u_{ref} + U$ for $t \geq 0$. The subsequent response $y(t) - y_{ref}$ is the **step response**.Second order linear systems have elegant analytical solutions expressed using exponential and trignometric functions. There are four distinct cases that depend on the value of the damping factor $\zeta$:* Overdamped* Critically damped* Underdamped* Undamped Oscillations Overdamped ($\zeta > 1$)An overdamped response tends to be sluggish, and with a potentially a large difference in time scales $\tau_1$ and $\tau_2$. The geometric mean of $\tau_1$ and $\tau_2$ is $\tau$. 
The value of $\zeta$ determines the differences.$$y(t) = y_{ref} + KU\left(1 - \frac{\tau_1e^{-t/\tau_1} - \tau_2e^{-t/\tau_2}}{\tau_1 - \tau_2}\right)$$where $\tau_1$ and $\tau_2$ are found by factor the polynomial$$\tau^2s^2 + 2\zeta\tau s + 1 = (\tau_1s + 1)(\tau_2s + 1)$$For $\zeta \geq 1$ the solutions are given by\begin{align}\tau_1 & = \frac{\tau}{\zeta - \sqrt{\zeta^2-1}} \\\tau_2 & = \frac{\tau}{\zeta + \sqrt{\zeta^2-1}}\end{align} ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt from ipywidgets import interact def overdamped(K, tau, zeta): t = np.linspace(0,20) tau_1 = tau/(zeta - np.sqrt(zeta**2 - 1)) tau_2 = tau/(zeta + np.sqrt(zeta**2 - 1)) y = K*(1 - ((tau_1*np.exp(-t/tau_1) - tau_2*np.exp(-t/tau_2))/(tau_1 - tau_2))) plt.plot(t,y) plt.grid() interact(overdamped, K=(0.5,2), tau=(0.5,2), zeta=(1.01,2)); ###Output _____no_output_____ ###Markdown Critically Damped ($\zeta = 1$)$$y(t) = y_{ref} + KU\left[1 - \left(1 + \frac{t}{\tau}\right)e^{-t/\tau}\right]$$ ###Code def criticallydamped(K, tau): t = np.linspace(0,20) y = K*(1 - (1 + t/tau)*np.exp(-t/tau)) plt.plot(t,y) plt.grid() criticallydamped(K=2, tau=2) ###Output _____no_output_____ ###Markdown Underdamped ($0 \lt \zeta \lt 1$)One version of the solution can be written$$y(t) = y_{ref} + KU\left(1 - e^{-\zeta t/\tau}\left[\cos\left(\frac{\sqrt{1-\zeta^2}}{\tau}t\right) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin\left(\frac{\sqrt{1-\zeta^2}}{\tau}t\right)\right] \right)$$This can be expressed a bit more compactly by introducing a frequency$$\omega = \frac{\sqrt{1-\zeta^2}}{\tau}$$which results in$$y(t) = y_{ref} + KU\left[1 - e^{-\zeta t/\tau}\left(\cos\left(\omega t\right) + \frac{\zeta}{\sqrt{1-\zeta^2}}\,\sin\left(\omega t\right) \right)\right]$$ ###Code def underdamped(K, tau, zeta): t = np.linspace(0,20) c = np.cos(np.sqrt(1-zeta**2)*t/tau) s = np.sin(np.sqrt(1-zeta**2)*t/tau) y = K*(1 - np.exp(-zeta*t/tau)*(c + zeta*s/np.sqrt(1-zeta**2))) plt.plot(t,y) plt.grid() interact(underdamped, K=(0.5,3), tau=(0.5,3), zeta=(0,0.999)) ###Output _____no_output_____ ###Markdown Undamped ($\zeta = 0$)Finally, there is the special case of an undamped oscillation$$y(t) = y_{ref} + KU\left[1 - \cos\left(\omega t\right) \right]$$where $\omega = 1/\tau$. SimulationA second-order differential equation can be simulated as a system of two first order differential equations. The key is to introduce a new variable $v = \frac{dy}{dt}$. 
$$\begin{align*}\frac{dy}{dt} & = v \\\frac{dv}{dt} & = -\frac{1}{\tau^2}(y-y_{ref}) - \frac{2\zeta}{\tau}v + K\left(u(t)-u_{ref}\right)\end{align*}$$ ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt from scipy.integrate import odeint from ipywidgets import interact def simulation(yref=0, U=1, K=1, tau=1, zeta=0.2): def deriv(X,t): y,v = X ydot = v vdot = -(y-yref)/tau/tau - 2*zeta*v/tau + K*U/tau/tau return[ydot,vdot] # simulation t = np.linspace(0,20*tau,1000) y = odeint(deriv, [yref,0], t)[:,0] # plot steady state line and bounds plt.figure(figsize=(12,6)) # plot solution plt.plot(t,y,lw=3) plt.title('Step Response of a Second Order System') plt.xlabel('Time') plt.ylabel('y') # plot limits plt.ylim(plt.ylim()[0],1.1*plt.ylim()[1]) plt.xlim(t[0],t[-1]) dy = np.diff(plt.ylim()) # arrow props ap1 = dict(arrowstyle="->") ap2 = dict(arrowstyle="<->") if zeta < 1: #overshoot os = np.exp(-np.pi*zeta/np.sqrt(1-zeta**2)) # time to first peak tp = np.pi*tau/np.sqrt(1-zeta**2) yp = (1+os)*K*U + yref plt.text(tp,yp+0.02*dy,"Overshoot\n b/a = {0:0.2f}".format(os), ha='center') plt.annotate('',xy=(tp,K*U+yref),xytext=(tp,yp),arrowprops=ap2) plt.text(tp,(K*U+yref+yp)/2,' b') plt.annotate('',xy=(tp,yref),xytext=(tp,K*U+yref),arrowprops=ap2) plt.text(tp,K*U/2+yref,' a') plt.annotate("Time to first\n peak = {0:.2f}".format(tp), xy=(tp,yref), xytext=(1.2*tp,0.2*K*U+yref),arrowprops=ap1) # rise time tr = t[np.where(np.diff(np.sign(y-yref-K*U))*np.sign(K*U)>0)[0][0]] if tr < plt.xlim()[1]: plt.plot([tr,tr],[0.3*K*U+yref,K*U+yref],'r:') plt.annotate('',xy=(plt.xlim()[0],0.4*K*U+yref),xytext=(tr,0.4*K*U+yref), arrowprops=ap2) plt.text(plt.xlim()[0]+tr/2,0.42*K*U+yref+0.02*dy, 'Rise Time\n = {0:.2f}'.format(tr),ha='center') # period P = 2*np.pi*tau/np.sqrt(1-zeta**2) if tr + P < plt.xlim()[1]: plt.plot([tr,tr],[0.3*K*U+yref,K*U+yref],'r:') plt.plot([tr+P,tr+P],[0.3*K*U+yref,K*U+yref],'r:') plt.annotate('',xy=(tr,0.4*K*U+yref),xytext=(tr+P,0.4*K*U+yref),arrowprops=ap2) plt.text(tr+P/2,0.42*K*U+yref+0.02*dy,'Period = {0:.2f}'.format(P), ha='center') # second peak if tp + P < plt.xlim()[1]: plt.annotate('',xy=(tp+P,K*U+yref),xytext=(tp+P,K*U*(1+os**3)+yref), arrowprops=ap2) plt.text(tp+P,K*U*(1+os**3/2)+yref,' c') plt.text(tp+P,K*U*(1+os**3)+yref+0.02*dy, 'Decay Ratio\n c/b = {0:.2f}'.format(os**2),va='bottom',ha='center') # settling time ts = -np.log(0.05)*np.sqrt(1-zeta**2)*tau/zeta if ts < plt.xlim()[1]: plt.fill_between(t[t>ts],0.95*K*U+yref,1.05*K*U+yref,alpha=0.4,color='y') plt.text(ts,1.05*K*U+yref+0.02*dy, 'Settling Time\n = {0:.2f}'.format(ts),ha='center') plt.plot(plt.xlim(),[yref,yref],'k--') plt.plot(plt.xlim(),[K*U+yref,K*U+yref],'k--') interact(simulation, yref = (-10,10,0.1), U=(0.01,5,0.01), K = (-5,5,0.01), zeta=(0.01,3,0.01), tau = (0.1,5.0,0.01)); ###Output _____no_output_____ ###Markdown Performance Indicators for Underdamped Systems For an underdamped second order system, the desired performance metrics are given by the following by formulas in the following table.| Quantity | Symbol | Expression/Value || :----------------: | :----: | :----------------------------------------------------: || Rise Time | $t_r$ | Time to first SS crossing || Time to first peak | $t_p$ | $\frac{\pi\tau}{\sqrt{1-\zeta^2}}$ || Overshoot | OS | $\exp\left(-\frac{\pi\zeta}{\sqrt{1-\zeta^2}}\right)$ || Decay Ratio | DR | $\exp\left(-\frac{2\pi\zeta}{\sqrt{1-\zeta^2}}\right)$ || Period | | $\frac{2\pi\tau}{\sqrt{1-\zeta^2}}$ || Setting Time | $t_s$ | Time to +/- 5% of SS | Estimating 
Parameters for an Underdamped System Starting with a Physical ModelA dynamical model for a u-tube manometer is given by$$\frac{d^2h'}{dt^2} + \frac{6\mu}{R^2\rho}\frac{dh'}{dt} + \frac{3}{2}\frac{g}{L} h' = \frac{3}{4\rho L} p'(t)$$where $h'$ is the liquid level displacement from an equilibrium position due to a pressure difference $p'(t)$.| Parameter | Symbol || :-: | :-: || radius | $R$ || liquid length | $L$ || gravity | $g$ || density | $\rho$ || viscosity | $\mu$ |What is the gain $K$? Time constant $\tau$? Damping factor $\zeta$? How would you choose the radius for the fastest response without overshoot? Starting with a Step ResponseUnderdamped systems have clearly identifiable and measurable characteristics that can be used to identify parameters $K$, $\tau$, and $\zeta$. One procedure, for example, is to execute a step response experiment. Then,1. Measure overshoot, then estimate damping factor $\zeta$ using a chart of this equation (or by directly solving the equation for $\zeta$):$$OS = \frac{a}{b} = \exp\left(\frac{-\pi\zeta}{\sqrt{1-\zeta^2}}\right)$$2. Measure time-to-first-peak $t_p$. Given $t_p$ and $\zeta$, solve for$$\tau = \frac{t_p}{\pi}\sqrt{1 - \zeta^2}$$Alternatively, given period $P$,$$\tau = \frac{P}{2\pi}\sqrt{1 - \zeta^2}$$ ![](2ndOrder.png) ###Code %matplotlib inline import numpy as np import matplotlib.pyplot as plt from scipy.integrate import odeint zeta = np.linspace(0,0.999,100) os = np.exp(-np.pi*zeta/np.sqrt(1-zeta**2)) dr = np.exp(-2*np.pi*zeta/np.sqrt(1-zeta**2)) pd = np.sqrt(1-zeta**2) plt.figure(figsize=(8,8)) plt.plot(zeta, os, lw=3) plt.plot(zeta, dr, lw=3) plt.plot(zeta, pd, lw=3) plt.axis('square') plt.xlim(0, 1) plt.ylim(0, 1) plt.title('Performance Characteristics of Underdamped Second Order Systems') plt.xlabel('$\zeta$') plt.ylabel('Performance Characteristic') plt.text(0.35, 0.4, 'Overshoot') plt.text(0.05, 0.2, 'Decay Ratio') plt.text(0.70, 0.8, 'Natural Period / Period') plt.gca().set_xticks(np.arange(0,1,0.1), minor=True) plt.gca().set_yticks(np.arange(0,1,0.1), minor=True) plt.grid(b=True, which='major') plt.grid(b=True, which='minor') ###Output _____no_output_____
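The two-step identification procedure above can also be carried out numerically rather than with a chart. The short sketch below solves the overshoot relation for $\zeta$ and then recovers $\tau$ from the time to first peak; the measured values `OS_measured` and `tp_measured` are made-up numbers used only for illustration.

```python
import numpy as np

# hypothetical measurements taken from a step response experiment
OS_measured = 0.25   # measured overshoot ratio
tp_measured = 1.8    # measured time to first peak

# invert OS = exp(-pi*zeta/sqrt(1 - zeta^2)) for zeta
ln_os = np.log(OS_measured)
zeta_est = -ln_os / np.sqrt(np.pi**2 + ln_os**2)

# recover tau from the time to first peak
tau_est = tp_measured * np.sqrt(1 - zeta_est**2) / np.pi

print("estimated zeta =", zeta_est)
print("estimated tau  =", tau_est)
```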
rsc/demo/notebooks/common_plot_elements_line_graphs.ipynb
###Markdown Imports ###Code import numpy as np import pandas as pd import datetime import matplotlib.pyplot as plt import matplotlib.path as pth ###Output _____no_output_____ ###Markdown 1. Display a quantity over timeFeatures: line graph, date range generation, custom figure size, and different plot color. ###Code idx = pd.date_range('1/1/2000', periods=1000) df = pd.DataFrame(np.random.randn(1000, 1), index=idx, columns=list('A')) # figure with a certain size plt.figure(figsize=(20,5)) plt.plot(df, color='r') plt.show() ###Output _____no_output_____ ###Markdown 2. Display two quantities over time Features: plot 1 + subplots, share axis, and make label invisible ###Code idx = pd.date_range('1/1/2000', periods=1000) df1 = pd.DataFrame(np.random.randn(1000, 1), index=idx, columns=list('A')) df2 = pd.DataFrame(np.random.randn(1000, 1), index=idx, columns=list('B')) # figure with a certain size plt.figure(figsize=(20,10)) ax1 = plt.subplot(211) ax1.plot(df1) ax1.set_xticklabels(ax1.get_xticklabels(), visible=False) ax2 = plt.subplot(212, sharex=ax1) ax2.plot(df2, color='r') ax2.set_xticklabels(ax1.get_xticklabels()) plt.show() ###Output _____no_output_____
material/pandas/.ipynb_checkpoints/04-Missing Data-checkpoint.ipynb
###Markdown ___ ___ Missing DataLet's show a few convenient methods to deal with Missing Data in pandas: ###Code import numpy as np import pandas as pd df = pd.DataFrame({'A':[1,2,np.nan], 'B':[5,np.nan,np.nan], 'C':[1,2,3]}) df df.dropna() df.dropna(axis=1) df.dropna(thresh=2) df.fillna(value='FILL VALUE') df['A'].fillna(value=df['A'].mean()) ###Output _____no_output_____
examples/notebooks/SupervisedIOHMM.ipynb
###Markdown This is the IOHMM model with the parameters learned in a supervised way. This is corresponding to the counting frequency process as in the supervised HMM. See notes in http://www.cs.columbia.edu/4761/notes07/chapter4.3-HMM.pdf. SupervisedIOHMM ###Code from __future__ import division import json import warnings import numpy as np import pandas as pd from IOHMM import SupervisedIOHMM from IOHMM import OLS, CrossEntropyMNL warnings.simplefilter("ignore") ###Output _____no_output_____ ###Markdown Load speed data ###Code speed = pd.read_csv('../data/speed.csv') speed.head() ###Output _____no_output_____ ###Markdown Label some/all states In our structure of the code, the states should be a dictionary, the key is the index in the sequence (e.g. 0, 5) and the value is a one-out-of-n code of array where the kth value is 1 if the hidden state is k. n is the number of states in total.In the following example, we assume that the "corr" column gives the correct hidden states. ###Code states = {} corr = np.array(speed['corr']) for i in range(len(corr)): state = np.zeros((2,)) if corr[i] == 'cor': states[i] = np.array([0,1]) else: states[i] = np.array([1,0]) ###Output _____no_output_____ ###Markdown Set up a simple model manully ###Code # we choose 2 hidden states in this model SHMM = SupervisedIOHMM(num_states=2) # we set only one output 'rt' modeled by a linear regression model SHMM.set_models(model_emissions = [OLS()], model_transition=CrossEntropyMNL(solver='lbfgs'), model_initial=CrossEntropyMNL(solver='lbfgs')) # we set no covariates associated with initial/transitiojn/emission models SHMM.set_inputs(covariates_initial = [], covariates_transition = [], covariates_emissions = [[]]) # set the response of the emission model SHMM.set_outputs([['rt']]) # set the data and ground truth states SHMM.set_data([[speed, states]]) ###Output _____no_output_____ ###Markdown Start training ###Code SHMM.train() ###Output _____no_output_____ ###Markdown See the training results ###Code # the coefficients of the output model for each states print(SHMM.model_emissions[0][0].coef) print(SHMM.model_emissions[1][0].coef) # the scale/dispersion of the output model of each states print(np.sqrt(SHMM.model_emissions[0][0].dispersion)) print(np.sqrt(SHMM.model_emissions[1][0].dispersion)) # the transition probability from each state print(np.exp(SHMM.model_transition[0].predict_log_proba(np.array([[]])))) print(np.exp(SHMM.model_transition[1].predict_log_proba(np.array([[]])))) ###Output [[ 0.38392857 0.61607143]] [[ 0.21165647 0.78834353]] ###Markdown Save the trained model ###Code json_dict = SHMM.to_json('../models/SupervisedIOHMM/') json_dict with open('../models/SupervisedIOHMM/model.json', 'w') as outfile: json.dump(json_dict, outfile, indent=4, sort_keys=True) ###Output _____no_output_____ ###Markdown Load back the trained model ###Code SHMM_from_json = SupervisedIOHMM.from_json(json_dict) ###Output _____no_output_____ ###Markdown See if the coefficients are any different ###Code # the coefficients of the output model for each states print(SHMM.model_emissions[0][0].coef) print(SHMM.model_emissions[1][0].coef) ###Output [[ 5.70451774]] [[ 6.13678825]] ###Markdown Set up the model using a config file, instead of doing it manully ###Code with open('../models/SupervisedIOHMM/config.json') as json_data: json_dict = json.load(json_data) SHMM_from_config = SupervisedIOHMM.from_config(json_dict) ###Output _____no_output_____ ###Markdown Set data and start training ###Code SHMM_from_config.set_data([[speed, 
states]]) SHMM_from_config.train() ###Output _____no_output_____ ###Markdown See if the training results are any different? ###Code # the coefficients of the output model for each states print(SHMM_from_config.model_emissions[0][0].coef) print(SHMM_from_config.model_emissions[1][0].coef) ###Output [[ 5.70451774]] [[ 6.13678825]]
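Because the supervised fit reduces to direct estimation within each labelled state, the learned emission coefficients can be sanity-checked against the per-state sample statistics of `rt`. The snippet below is a rough check, assuming the `speed` DataFrame loaded above with its `corr` and `rt` columns.

```python
# per-state sample mean and spread of the response time; the means should be
# close to the OLS intercepts learned for the two states
print(speed.groupby('corr')['rt'].mean())
print(speed.groupby('corr')['rt'].std())
```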
Copy_of_Assignment_10.ipynb
###Markdown Linear Algebra for ECE Laboratory 10 : Linear Combination and Vector Spaces Now that you have a fundamental knowledge about linear combination, we'll try to visualize it using scientific programming. ObjectivesAt the end of this activity you will be able to:1. Be familiar with representing linear combinations in the 2-dimensional plane.2. Visualize spans using vector fields in Python.3. Perform vector fields operations using scientific programming. Discussion ###Code import numpy as np import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown Linear Combination It is said that a linear combination is the combination of linear scaling and addition of a vector its bases/components We will try to visualize the vectors and their linear combinations by plotting a sample of real number values for the scalars for the vectors. Let's first try the vectors below: $$X = \begin{bmatrix} 2\\5 \\\end{bmatrix} , Y = \begin{bmatrix} 7\\9 \\\end{bmatrix} $$ ###Code vectX = np.array([2,5]) vectY = np.array([7,9]) ###Output _____no_output_____ ###Markdown Span of single vectors As discussed in the lecture, the span of individual vectors can be represented by a line span. Let's take vector $X$ as an example. $$X = c\cdot \begin{bmatrix} 2\\5 \\\end{bmatrix} $$ ###Code c = np.arange(-10,10,0.125) plt.scatter(c*vectX[0],c*vectX[1]) plt.xlim(-10,10) plt.ylim(-10,10) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.grid() plt.show() ###Output _____no_output_____ ###Markdown $$Y = c\cdot \begin{bmatrix} 7\\9 \\\end{bmatrix} $$ ###Code c = np.arange(-15,15,0.5) plt.scatter(c*vectY[0],c*vectY[1]) plt.xlim(-20,20) plt.ylim(-20,20) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.grid() plt.show() ###Output _____no_output_____ ###Markdown Span of a linear combination of vectors So what if we are to plot the span of a linear combination of vectors? We can visualize as a plane on the 2-dimensional coordinate system. Let's take the span of the linear combination below: $$S = \begin{Bmatrix} c_1 \cdot\begin{bmatrix} 1\\0 \\\end{bmatrix}, c_2 \cdot \begin{bmatrix} 1\\-1 \\\end{bmatrix}\end{Bmatrix} $$ ###Code vectA = np.array([1,0]) vectB = np.array([1,-1]) R = np.arange(-10,10,1) c1, c2 = np.meshgrid(R,R) vectR = vectA + vectB spanRx = c1*vectA[0] + c2*vectB[0] spanRy = c1*vectA[1] + c2*vectB[1] ##plt.scatter(R*vectA[0],R*vectA[1]) ##plt.scatter(R*vectB[0],R*vectB[1]) plt.scatter(spanRx,spanRy, s=5, alpha=0.75) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.grid() plt.show() vectP = np.array([2,1]) vectQ = np.array([4,3]) R = np.arange(-10,10,1) c1, c2 = np.meshgrid(R,R) vectR = vectP + vectQ spanRx = c1*vectP[0] + c2*vectQ[0] spanRy = c1*vectP[1] + c2*vectQ[1] ##plt.scatter(R*vectA[0],R*vectA[1]) ##plt.scatter(R*vectB[0],R*vectB[1]) plt.scatter(spanRx,spanRy, s=5, alpha=0.75) plt.axhline(y=0, color='k') plt.axvline(x=0, color='k') plt.grid() plt.show() ###Output _____no_output_____ ###Markdown Take note that if vectors are seen to be as a 2-dimensional span we can say it has a Rank of 2 or $\mathbb{R}^2$. But if the span of the linear combination of vectors are seen to be like a line, they are said to be linearly dependent and they have a rank of 1 or $\mathbb{R}^1$. Activity Task 1 Try different linear combinations using different scalar values. 
In your methodology, discuss the different functions that you have used, the linear equation and vector form of the linear combination, and the flowchart for declaring and displaying linear combinations. Please make sure that your flowchart contains only a few words and does not reproduce the entire code, as that is bad practice. In your results, display and discuss the linear combination visualization you made. You should use the cells below for displaying the equation markdown using LaTeX and your code. $$Space \cdot for \cdot the \cdot general \cdot linear \cdot equation \cdot form$$ $$Space \cdot for \cdot the \cdot vector \cdot form$$ ###Code ### TYPE YOUR CODE FOR TASK 1 HERE ###Output _____no_output_____
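As an illustration of the linearly dependent case discussed earlier (a span that collapses onto a line, i.e. rank 1), the sketch below repeats the meshgrid construction with one vector chosen as a scalar multiple of the other. The vectors are hypothetical and this is only an example, not a solution to the task above.

```python
# two linearly dependent vectors: vectD = 2 * vectC
vectC = np.array([1, 2])
vectD = np.array([2, 4])

R = np.arange(-10, 10, 1)
c1, c2 = np.meshgrid(R, R)

spanRx = c1*vectC[0] + c2*vectD[0]
spanRy = c1*vectC[1] + c2*vectD[1]

# the scatter falls on the single line y = 2x, so the span has rank 1
plt.scatter(spanRx, spanRy, s=5, alpha=0.75)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.grid()
plt.show()
```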
Skripsi-Crawling Engine.ipynb
###Markdown Skripri - Crawling Engine Fungsi Crawler(link,start,end)Fungsi untuk crawling, pake get,kasih beautifulsoup ###Code import requests from bs4 import BeautifulSoup import sys import time linkcoba = 'http://detik.feedsportal.com/c/33613/f/656089/s/4e8048a9/sc/3/l/0Lfinance0Bdetik0N0Cread0C20A160C0A30C250C1120A170C31730A410C40Cakhir0Emaret0Einka0Eekspor0E150Ekereta0Emade0Ein0Emadiun0Eke0Ebangladesh/story01.htm' source_code = requests.get(linkcoba) plain_text = source_code.text soup = BeautifulSoup(plain_text) soup def get_berita_detik(url): source_code = requests.get(url) plain_text = source_code.text soup = BeautifulSoup(plain_text) return soup.find(class_="text_detail") print get_berita_detik('http://finance.detik.com/read/2016/03/25/112017/3173041/4/akhir-maret-inka-ekspor-15-kereta-made-in-madiun-ke-bangladesh') ###Output <div class="text_detail"> <strong>Jakarta</strong> -PT INKA (Persero) akan melakukan ekspor kereta penumpang sebanyak 15 unit ke Bangladesh pada akhir Maret 2016. Pengiriman akan dilakukan melalui Pelabuhan Tanjung Perak, Surabaya pada 31 Maret 2016.<br/><br/>"Nanti kita undangan tanggal 31 Maret ini. Nanti pengiriman pertama untuk 15 unit gerbong penumpang semua," menurut sumber INKA kepada <strong>detikFinance</strong>, Kamis (25/3/2016).<br/><br/>Pengiriman ini merupakan bagian dari kontrak pengadaan 150 unit gerbong kereta yang dimenangkan oleh Badan Usaha Milik Negara (BUMN) yang bermarkas di Madiun, Jawa Timur ini. INKA berhasil menang tender pengadaan kereta penumpang berbagai tipe di Bangladesh setelah mengalahkan beberapa produsen kereta dari India dan China. <br/> Pengiriman nantinya akan dilakukan secara bertahap setiap bulannya hingga Agustus 2016. INKA pada tahun 2006 telah mengekspor kereta penumpang ke Bangladesh.<br/><div class="clearfix"></div><strong>(feb/feb)</strong> <br/> <br/> </div>
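The `Fungsi Crawler(link, start, end)` heading suggests a crawler that walks over several article links, although only the single-article fetch is implemented above. A minimal sketch of that idea is shown below: it reuses `get_berita_detik` on a list of article URLs and reduces the HTML to plain text with BeautifulSoup's `get_text()`. The `article_urls` list is a placeholder to be filled with real detik.com article links.

```python
import time

def crawl_articles(article_urls, delay=1.0):
    """Fetch each article and keep only the plain text of the detail div."""
    results = []
    for url in article_urls:
        content_div = get_berita_detik(url)
        if content_div is not None:
            results.append(content_div.get_text(" ", strip=True))
        time.sleep(delay)  # be polite to the server between requests
    return results

article_urls = []  # fill with real article links before running
berita = crawl_articles(article_urls)
```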
TrajGenerator.ipynb
###Markdown Import required libraries Author: Sameer Date: May 2019 ###Code import numpy as np import matplotlib.pyplot as plt from CartPole import CartPole # from CartPole_GPS import CartPole_GPS from ilqr.dynamics import constrain from copy import deepcopy from EstimateDynamics import local_estimate from GMM import Estimated_Dynamics_Prior from sklearn.gaussian_process import GaussianProcessRegressor from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel ###Output WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions. ###Markdown Formulate the iLQR problem ###Code ''' 1 - dt = time step 2 - N = Number of control points in the trajectory 3 - x0 = Initial state 4 - x_goal = Final state 5 - Q = State cost 6 - R = Control cost 7 - Q_terminal = Cost at the final step 8 - x_dynamics array stores the information regarding system. x_dynamics[0] = m = mass of the pendulum bob x_dynamics[1] = M = mass of the cart x_dynamics[2] = L = length of the massless rod x_dynamics[3] = g = gravity x_dynamics[4] = d = damping in the system ''' dt = 0.05 N = 600 # Number of time steps in trajectory. x_dynamics = np.array([0.1, 1, 1, 9.80665, 0]) # m=1, M=5, L=2, g=9.80665, d=1 x0 = np.array([0.0, 0.0, 3.14, 0.0]) # Initial state x_goal = np.array([0.0, 0.0, 0.0, 0.0]) # Instantenous state cost. Q = np.eye(5) Q[1,1] = 10 Q[2, 2] = 100 Q[3, 3] = 100 Q[4, 4] = 10 # Terminal state cost. Q_terminal = np.eye(5) * 100 # Q_terminal[2, 2] = 100 # Q_terminal[3, 3] = 100 # Instantaneous control cost. R = np.array([[1.0]]) ###Output _____no_output_____ ###Markdown iLQR on Cart Pole ###Code cartpole_prob = CartPole(dt, N, x_dynamics, x0, x_goal, Q, R, Q_terminal) xs, us, K, k = cartpole_prob.run_IterLinQuadReg() # State matrix split into individual states. For plotting and analysing purposes. t = np.arange(N + 1) * dt x = xs[:, 0] # Position x_dot = xs[:, 1] # Velocity theta = np.unwrap(cartpole_prob.deaugment_state(xs)[:, 2]) # Theta, makes for smoother plots. theta_dot = xs[:, 3] # Angular velocity ###Output _____no_output_____ ###Markdown Simulate the real system and generate the dataCost matrices, initial position and goal position will remain same as the above problem. As it indicates one policy. But still the initial positions and goal positions must be passed explicitly to the function. But you don't need to pass cost matrices (assume penalty on the system is same), this is just used to use to calculate the cost of the trajectory. Correct control action must be passed. Parameter gamma indicates how much of original data you want to keepVariance of the Gaussian noise will be taken as input from a Unif(0, var_range) uniform distribution. Inputs: x_initial, x_goal, u, n_rollouts, pattern='Normal', pattern_rand=False, var_range=10, gamma=0.2, percent=20Pattern controls how the control sequence will be modified after applying white Guassian noise (zero mean).- Normal: based on the correction/mixing parameter gamma generate control (gamma controls how much noise we want).- MissingValue: based on the given percentage, set those many values to zero (it is implicitly it uses "Normal" generated control is used). - Shuffle: shuffles the entire "Normal" generated control sequence.- TimeDelay: takes the "Normal" generated control and shifts it by 1 index i.e. 
one unit time delay.- Extreme: sets gamma as zeros and generates control based on only noise.If 'pattern_rand' is 'True' then we don't need to send the explicitly, it will chose one randomly for every rollout (default is 'False'). If you want to chose specific pattern then send it explicitly. ###Code x_rollout, u_rollout, local_policy, cost = cartpole_prob.gen_rollouts(x0, x_goal, us, n_rollouts=10, pattern_rand=True, var_range=10, gamma=0.2, percent=20) ###Output _____no_output_____ ###Markdown Local system dynamics/model estimateloca_estimate: function takes the states (arranged in a special format, [x(t), u(t), x(t+1)]), no. of gaussian mixtures and no.of states. ###Code model = Estimated_Dynamics_Prior(init_sequential=False, eigreg=False, warmstart=True, min_samples_per_cluster=20, max_clusters=50, max_samples=20, strength=1.0) model.update_prior(x_rollout, u_rollout) A, B, C = model.fit(x_rollout, u_rollout) print(A.shape) print(B.shape) print(C.shape) u_rollout.shape ###Output _____no_output_____ ###Markdown iLQR on estimated modelHere system dynamics is specified in a special way. We give the A, B, C matrices as input. These matrices comes from GMM and GPS theory. They are the mean/expected trajectory followed by the states which is represented by the mean & covariance (A, B, C) matrices of a Gaussian. Remaining all properties of the iLQR problem remains the same (cost, initial & goal state, time steps). ###Code x_traj,u_traj = cartpole_prob.run_IterLinQuadReg_matrix(A, B, C) from scipy.stats.mstats import gmean # a = gmean(A,axis=0) a = np.sum(B, axis=0,keepdims=True)/B.shape[0] a.shape ###Output _____no_output_____ ###Markdown Plot ###Code # Control sequence plt.plot(np.arange(us.shape[0]), us, 'r.', label='Original') # plt.plot(np.arange(us.shape[0]), u_rollout[0:N], 'b.', label='Corrupted') plt.plot(np.arange(us.shape[0]), u_traj, 'g.', label='Estimated') plt.xlabel('Time steps') plt.ylabel('U') plt.legend() plt.show() plt.plot(np.arange(xs.shape[0]), xs[:, 2], 'r.', label='Original') plt.plot(np.arange(xs.shape[0]), cartpole_prob.deaugment_state(x_rollout)[0:N+1, 2], 'b.', label='Corrupted') plt.plot(np.arange(xs.shape[0]), cartpole_prob.deaugment_state(x_traj)[:, 2], 'g.', label='Estimated') plt.xlabel('Time steps') plt.ylabel('Theta') plt.legend() plt.show() plt.plot(np.arange(xs.shape[0]), xs[:, 0], 'r.', label='Original') plt.plot(np.arange(xs.shape[0]), cartpole_prob.deaugment_state(x_rollout)[0:N+1, 0], 'b.', label='Corrupted') plt.plot(np.arange(xs.shape[0]), cartpole_prob.deaugment_state(x_traj)[:, 0], 'g.', label='Estimated') plt.xlabel('Time steps') plt.ylabel('Pos') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown GPS ###Code dt = 0.005 N = 500 # Number of time steps in trajectory. x_dynamics = np.array([1, 5, 2, 9.80665, 1]) # m=1, M=5, L=2, g=9.80665, d=1 # Instantenous state cost. Q = np.eye(5) Q[1,1] = 10 Q[2, 2] = 1 Q[3, 3] = 10 Q[4, 4] = 1 # Terminal state cost. Q_terminal = 100 * np.eye(5) # Instantaneous control cost. R = np.array([[1.0]]) x_train = [] u_train = [] for i in range(10): print('iteration is ',i) x0 = np.array([2, 0, 0.001*i , 0]) # Initial state x_goal = np.array([2, 0.0, 0.0, 0.0]) cartpole_prob = CartPole(dt, N, x_dynamics, x0, x_goal, Q, R, Q_terminal) xs, us = cartpole_prob.run_IterLinQuadReg() t = np.arange(N + 1) * dt x = xs[:, 0] # Position x_dot = xs[:, 1] # Velocity theta = np.unwrap(cartpole_prob.deaugment_state(xs)[:, 2]) # Theta, makes for smoother plots. 
theta_dot = xs[:, 3] # Angular velocity x_rollout, u_rollout, local_policy, x_gmm, cost = cartpole_prob.gen_rollouts(x0, x_goal, us, n_rollouts=20, pattern_rand=False, var_range=10, gamma=0.8, percent=20) model = local_estimate(x_gmm, components=5, NoOfstates=5) A, B, C = model.estimate(N=N) x_traj,u_traj = cartpole_prob.run_IterLinQuadReg_matrix(A, B, C) x_train.append(x_traj) u_train.append(u_traj) x_train1 = x_train[0][:-1] u_train1 = u_train[0] for i in range(1,9): x_train1 = np.vstack((x_train1,x_train[i][:-1])) u_train1 = np.vstack((u_train1,u_train[i])) u_gr = constrain(u_train1,-0.9,0.9) kernel = DotProduct() + WhiteKernel() gpr = GaussianProcessRegressor(kernel=kernel, random_state=0).fit(x_train1, u_gr) gpr.score(x_train1,u_gr) u_pre = gpr.predict(xs) plt.plot(np.arange(us.shape[0]), constrain(us, -0.9, 0.9), 'r.', label='Original') plt.plot(np.arange(us.shape[0]), constrain(u_pre[0:N],-0.9,0.9), 'b.', label='Global') plt.plot(np.arange(us.shape[0]), constrain(u_traj, -0.9, 0.9), 'g.', label='Local') plt.xlabel('Time steps') plt.ylabel('U') plt.legend() plt.savefig('control.pdf') plt.show() x_rollout00, u_rollout00, local_policy00, x_gmm00, cost00 = cartpole_prob.gen_rollouts(x0, x_goal, u_pre[:-1], n_rollouts=10, var_range=0, gamma=1, percent=0) plt.plot(np.arange(xs.shape[0]), xs[:, 0], 'r.', label='Original') plt.plot(np.arange(xs.shape[0]), cartpole_prob.deaugment_state(x_rollout00)[0:N+1, 0], 'b.', label='Global') plt.plot(np.arange(xs.shape[0]), cartpole_prob.deaugment_state(x_traj)[:, 0], 'g.', label='Local') plt.xlabel('Time steps') plt.ylabel('Pos') plt.legend() plt.savefig('position.pdf') plt.show() plt.plot(np.arange(xs.shape[0]), xs[:, 2], 'r.', label='Original') plt.plot(np.arange(xs.shape[0]), cartpole_prob.deaugment_state(x_rollout00)[0:N+1, 2], 'b.', label='Global') plt.plot(np.arange(xs.shape[0]), cartpole_prob.deaugment_state(x_traj)[:, 2], 'g.', label='Local') plt.xlabel('Time steps') plt.ylabel('Theta') plt.legend() plt.savefig('theta.pdf') plt.show() # plt.subplot(3,1,1) # plt.plot(np.arange(us.shape[0]), constrain(us, -0.9, 0.9), 'r.', label='Original') # plt.plot(np.arange(us.shape[0]), constrain(u_pre[0:N],-0.9,0.9), 'b.', label='GPS') # plt.plot(np.arange(us.shape[0]), constrain(u_traj, -0.9, 0.9), 'g.', label='Estimated') # plt.xlabel('Time steps') # plt.ylabel('U') # plt.legend() # plt.title('Control action vs time') plt.subplot(2,1,1) plt.plot(np.arange(xs.shape[0]), xs[:, 0], 'r.', label='Original') plt.plot(np.arange(xs.shape[0]), cartpole_prob.deaugment_state(x_rollout00)[0:N+1, 0], 'b.', label='Corrupted') plt.plot(np.arange(xs.shape[0]), cartpole_prob.deaugment_state(x_traj)[:, 0], 'g.', label='Estimated') plt.xlabel('Time steps') plt.title('position vs time') plt.ylabel('Pos') plt.subplot(2,1,2) plt.plot(np.arange(xs.shape[0]), xs[:, 2], 'r.', label='Original' , lw=2) plt.plot(np.arange(xs.shape[0]), cartpole_prob.deaugment_state(x_rollout00)[0:N+1, 2], 'b.', label='Corrupted' , lw=2) plt.plot(np.arange(xs.shape[0]), cartpole_prob.deaugment_state(x_traj)[:, 2], 'g.', label='Estimated', lw=2) plt.xlabel('Time steps') plt.title('theta vs time') plt.ylabel('Theta') plt.subplots_adjust(hspace=1.5) plt.savefig('total.pdf') plt.show() np.isclose([1,0.2], [1,0.1],atol=0.1).all() from Simulator import Mujoco_sim Model = "mujoco/cartpole.xml" cart_pole_simulator = Mujoco_sim(Model,True) cart_pole_simulator.load(xs,us,k,K,x0,initial=False) cart_pole_simulator.runSimulation() ###Output _____no_output_____
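The corruption patterns described in the rollout-generation notes earlier (`Normal`, `MissingValue`, `Shuffle`, `TimeDelay`, `Extreme`) are implemented inside `CartPole.gen_rollouts`, whose source is not shown here. The function below is only an illustrative sketch of what those patterns could look like for a control sequence; it is not the actual implementation, and the mixing convention for `gamma` is an assumption.

```python
import numpy as np

def corrupt_controls(us, pattern="Normal", var_range=10, gamma=0.2, percent=20):
    """Illustrative sketch of the corruption patterns described above (not the real code)."""
    variance = np.random.uniform(0, var_range)             # noise variance ~ Unif(0, var_range)
    noise = np.random.normal(0.0, np.sqrt(variance), size=us.shape)
    u_new = gamma * us + (1 - gamma) * noise               # "Normal": keep a gamma fraction of the data
    if pattern == "MissingValue":
        n_zero = int(len(u_new) * percent / 100)
        idx = np.random.choice(len(u_new), n_zero, replace=False)
        u_new[idx] = 0.0                                   # zero out the given percentage of values
    elif pattern == "Shuffle":
        np.random.shuffle(u_new)                           # shuffle the whole sequence
    elif pattern == "TimeDelay":
        u_new = np.roll(u_new, 1)                          # one unit time delay
    elif pattern == "Extreme":
        u_new = noise                                      # gamma = 0, noise only
    return u_new
```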
Chapter08/Recipe1-Standardization.ipynb
###Markdown StandardizationStandardization involves centering the variable at zero, and standardizing the variance to 1. The procedure involves subtracting the mean of each observation and then dividing by the standard deviation:**z = (x - x_mean) / std** ###Code import pandas as pd # dataset for the demo from sklearn.datasets import load_boston from sklearn.model_selection import train_test_split # the scaler - for standardization from sklearn.preprocessing import StandardScaler # load the the Boston House price data # this is how we load the boston dataset from sklearn boston_dataset = load_boston() # create a dataframe with the independent variables data = pd.DataFrame(boston_dataset.data, columns=boston_dataset.feature_names) # add target data['MEDV'] = boston_dataset.target data.head() # Information about the boston house prince dataset # you will find details about the different variables # the aim is to predict the "Median value of the houses" # MEDV column in this dataset # and there are variables with characteristics about # the homes and the neighborhoods # print the dataset description print(boston_dataset.DESCR) # let's separate the data into training and testing set X_train, X_test, y_train, y_test = train_test_split(data.drop('MEDV', axis=1), data['MEDV'], test_size=0.3, random_state=0) X_train.shape, X_test.shape # standardisation: with the StandardScaler from sklearn # set up the scaler scaler = StandardScaler() # fit the scaler to the train set, it will learn the parameters scaler.fit(X_train) # transform train and test sets X_train_scaled = scaler.transform(X_train) X_test_scaled = scaler.transform(X_test) # the scaler stores the mean of the features, learned from train set scaler.mean_ # the scaler stores the standard deviation deviation of the features, # learned from train set scaler.scale_ # let's transform the returned NumPy arrays to dataframes X_train_scaled = pd.DataFrame(X_train_scaled, columns=X_train.columns) X_test_scaled = pd.DataFrame(X_test_scaled, columns=X_test.columns) import matplotlib.pyplot as plt import seaborn as sns # let's compare the variable distributions before and after scaling fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5)) # before scaling ax1.set_title('Before Scaling') sns.kdeplot(X_train['RM'], ax=ax1) sns.kdeplot(X_train['LSTAT'], ax=ax1) sns.kdeplot(X_train['CRIM'], ax=ax1) # after scaling ax2.set_title('After Standard Scaling') sns.kdeplot(X_train_scaled['RM'], ax=ax2) sns.kdeplot(X_train_scaled['LSTAT'], ax=ax2) sns.kdeplot(X_train_scaled['CRIM'], ax=ax2) plt.show() ###Output _____no_output_____ ###Markdown Note from the above plots how standardisation centered all the distributions at zero, but it preserved their original distribution. The value range is not identical, but it looks more homogeneous across the variables. Note something interesting in the following plot: ###Code # let's compare the variable distributions before and after scaling fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5)) # before scaling ax1.set_title('Before Scaling') sns.kdeplot(X_train['AGE'], ax=ax1) sns.kdeplot(X_train['DIS'], ax=ax1) sns.kdeplot(X_train['NOX'], ax=ax1) # after scaling ax2.set_title('After Standard Scaling') sns.kdeplot(X_train_scaled['AGE'], ax=ax2) sns.kdeplot(X_train_scaled['DIS'], ax=ax2) sns.kdeplot(X_train_scaled['NOX'], ax=ax2) plt.show() ###Output _____no_output_____
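As a quick check that the transform really is z = (x - x_mean) / std, the scaler's output can be reproduced by hand from the `mean_` and `scale_` attributes it learned on the train set.

```python
import numpy as np

# manual standardization using the parameters learned by the scaler
X_train_manual = (X_train - scaler.mean_) / scaler.scale_

# should print True: the manual transform matches scaler.transform(X_train)
print(np.allclose(X_train_manual.values, scaler.transform(X_train)))
```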
300_analysis.ipynb
###Markdown load models ###Code BeautyTEXT = data.Field(tokenize='spacy') #BeautyLABEL = data.LabelField() BeautyLABEL = data.LabelField(tensor_type=torch.FloatTensor) print("loading dataset clean_Beauty300.tsv...") Beautytrain = data.TabularDataset.splits( path='../counter-sent-generation3/VAE/data/official_Amazon/', train='clean_Beauty300.tsv', format='tsv', fields=[('Text', BeautyTEXT),('Label', BeautyLABEL)])[0] BeautyTEXT.build_vocab(Beautytrain, max_size=60000, vectors="glove.6B.100d",min_freq=1) BeautyLABEL.build_vocab(Beautytrain) BeautyLABEL.vocab.stoi['1']=1 BeautyLABEL.vocab.stoi['2']=2 BeautyLABEL.vocab.stoi['3']=3 BeautyLABEL.vocab.stoi['4']=4 BeautyLABEL.vocab.stoi['5']=5 ApparelTEXT = data.Field(tokenize='spacy') #ApparelLABEL = data.LabelField() ApparelLABEL = data.LabelField(tensor_type=torch.FloatTensor) print("loading dataset clean_Apparel300.tsv...") Appareltrain = data.TabularDataset.splits( path='../counter-sent-generation3/VAE/data/official_Amazon/', train='clean_Apparel300.tsv', format='tsv', fields=[('Text', ApparelTEXT),('Label', ApparelLABEL)])[0] ApparelTEXT.build_vocab(Appareltrain, max_size=60000, vectors="glove.6B.100d",min_freq=1) ApparelLABEL.build_vocab(Appareltrain) ApparelLABEL.vocab.stoi['1']=1 ApparelLABEL.vocab.stoi['2']=2 ApparelLABEL.vocab.stoi['3']=3 ApparelLABEL.vocab.stoi['4']=4 ApparelLABEL.vocab.stoi['5']=5 class RNN(nn.Module): def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, n_layers, bidirectional, dropout): super().__init__() self.embedding = nn.Embedding(vocab_size, embedding_dim) self.rnn = nn.LSTM(embedding_dim, hidden_dim, num_layers=n_layers, bidirectional=bidirectional, dropout=dropout) self.fc = nn.Linear(hidden_dim*2, output_dim) self.dropout = nn.Dropout(dropout) def forward(self, x): #x = [sent len, batch size] embedded = self.dropout(self.embedding(x)) #print("embedded shape: ", embedded.shape) #embedded = [sent len, batch size, emb dim] output, (hidden, cell) = self.rnn(embedded) #print("output.shape: ",output.shape) #print("output[-1].shape: ",output[-1].shape) #print("hidden.shape: ",hidden.shape) #print("cell.shape: ",cell.shape) #output = [sent len, batch size, hid dim * num directions] #hidden = [num layers * num directions, batch size, hid. dim] #cell = [num layers * num directions, batch size, hid. dim] hidden = self.dropout(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim=1)) #print("hidden.shape: ",hidden.shape) y = self.fc(hidden.squeeze(0)) #hidden [batch size, hid. 
dim * num directions] #return self.fc(hidden.squeeze(0)) return y ''' INPUT_DIM = len(BeautyTEXT.vocab) EMBEDDING_DIM = 100 HIDDEN_DIM = 500 OUTPUT_DIM = 1 N_LAYERS = 2 BIDIRECTIONAL = True DROPOUT = 0.5 ''' #mrnn3 = torch.load('mrnn3') #mrnn4 = torch.load('mrnn4', map_location=lambda storage, loc: storage) #force to load on CPU #frnn4 = torch.load('frnn4', map_location=lambda storage, loc: storage) #force to load on CPU criterion = nn.MSELoss() device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') #device = torch.device('cpu') #mrnn4 = mrnn4.to(device) criterion = criterion.to(device) #TEXT = BeautyTEXT #LABEL = BeautyLABEL Beautymodel = torch.load('Amazon/Beauty_classifier', map_location=lambda storage, loc: storage) #force to load on CPU Apparelmodel = torch.load('Amazon/Apparel_classifier', map_location=lambda storage, loc: storage) #force to load on CPU #frnn = torch.load('frnn8') #mrnn = torch.load('mrnn8') criterion = nn.MSELoss() #device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') device = torch.device('cpu') Beautymodel = Beautymodel.to(device) Apparelmodel = Apparelmodel.to(device) criterion = criterion.to(device) import spacy nlp = spacy.load('en') def predict_sentiment(sentence,model,TEXT): tokenized = [tok.text for tok in nlp.tokenizer(sentence)] indexed = [TEXT.vocab.stoi[t] for t in tokenized] tensor = torch.LongTensor(indexed).to(device) tensor = tensor.unsqueeze(1) model.eval() prediction = model(tensor) return prediction.item() def predict_word(word): print('Amodel: ',predict_sentiment(word,Beautymodel,BeautyTEXT)) print('Bmodel: ',predict_sentiment(word,Apparelmodel,ApparelTEXT)) ###Output _____no_output_____ ###Markdown find pre_dif of sentences ###Code with open('../counter-sent-generation3/VAE/data/official_Amazon/clean_Beauty300test.tsv') as f: Beauty = f.readlines() with open('../counter-sent-generation3/VAE/data/official_Amazon/clean_Apparel300test.tsv') as f: Apparel = f.readlines() with open('../counter-sent-generation3/VAE/data/official_Amazon/clean_Jewelry300test.tsv') as f: Jewelry = f.readlines() with open('../counter-sent-generation3/VAE/data/official_Amazon/clean_Shoes300test.tsv') as f: Shoes = f.readlines() for i,x in enumerate(Beauty): Beauty[i] = x.split('\t')[0] for i,x in enumerate(Apparel): Apparel[i] = x.split('\t')[0] with open('Amazon/Beauty_pre_Beautytest.txt','r') as f: Beauty_pre_Beautyf = f.readlines() with open('Amazon/Apparel_pre_Beautytest.txt','r') as f: Apparel_pre_Beautyf = f.readlines() with open('Amazon/Beauty_pre_Appareltest.txt','r') as f: Beauty_pre_Apparelf = f.readlines() with open('Amazon/Apparel_pre_Appareltest.txt','r') as f: Apparel_pre_Apparelf = f.readlines() label=[] Apparellabel=[] Beauty_pre_Beauty=[] Apparel_pre_Beauty=[] Beauty_pre_Apparel=[] Apparel_pre_Apparel=[] for x in Beauty_pre_Beautyf: label.append(float(x.split('\t')[1].strip('\n'))) Beauty_pre_Beauty.append(float(x.split('\t')[0])) for x in Apparel_pre_Beautyf: Apparel_pre_Beauty.append(float(x.split('\t')[0])) for x in Beauty_pre_Apparelf: Apparellabel.append(float(x.split('\t')[1].strip('\n'))) Beauty_pre_Apparel.append(float(x.split('\t')[0])) for x in Apparel_pre_Apparelf: Apparel_pre_Apparel.append(float(x.split('\t')[0])) label = np.array(label) Beauty_pre_Beauty = np.array(Beauty_pre_Beauty) Apparel_pre_Beauty = np.array(Apparel_pre_Beauty) Apparellabel=np.array(Apparellabel) Beauty_pre_Apparel=np.array(Beauty_pre_Apparel) Apparel_pre_Apparel = np.array(Apparel_pre_Apparel) np.mean((Beauty_pre_Beauty-label)<0.5) 
np.mean((Apparel_pre_Beauty-label)<0.5) np.mean((Jewelry_pre_Beauty-label)<0.5) np.mean((Shoes_pre_Beauty-label)<0.5) np.mean((Beauty_pre_Apparel-Apparellabel)<0.5) np.mean((Apparel_pre_Apparel-Apparellabel)<0.5) Appareldf = pd.DataFrame({'label':Apparellabel,'Beauty_pre':Beauty_pre_Apparel,'Apparel_pre':Apparel_pre_Apparel,"sent":Apparel}) Beautydf = pd.DataFrame({'label':label,'Beauty_pre':Beauty_pre_Beauty,'Apparel_pre':Apparel_pre_Beauty,"Jewelry_pre":Jewelry_pre_Beauty,"Shoes_pre":Shoes_pre_Beauty,"sent":Beauty}) Beautydf['BAdif'] = Beauty_pre_Beauty-Apparel_pre_Beauty Appareldf['BAdif'] = Beauty_pre_Apparel-Apparel_pre_Apparel sortedBA = Beautydf.sort_values(by='BAdif',ascending=True) sortedBA2 = Appareldf.sort_values(by='BAdif',ascending=True) sortedBA.head() with open('Beauty_Apparel_predif_on_Beauty300test.txt','w') as f: for i in range(len(label)): f.write(str(sortedBA.iloc[i]['label'])+'\t' +str(sortedBA.iloc[i]['Beauty_pre'])+'\t' +str(sortedBA.iloc[i]['Apparel_pre'])+'\t' +str(sortedBA.iloc[i]['BAdif'])+'\t'+sortedBA.iloc[i]['sent']+'\n') with open('Beauty_Apparel_predif_on_Apparel300test.txt','w') as f: for i in range(len(Apparellabel)): f.write(str(sortedBA2.iloc[i]['label'])+'\t' +str(sortedBA2.iloc[i]['Beauty_pre'])+'\t' +str(sortedBA2.iloc[i]['Apparel_pre'])+'\t' +str(sortedBA2.iloc[i]['BAdif'])+'\t'+sortedBA2.iloc[i]['sent']+'\n') sortedBA2.head() with open('Amazon/Beauty_Apparel_predif_on_Beauty300test.txt','r') as f: B = f.readlines() B = B[2:] B[0].split('\t')[3] predif=[] for x in B: predif.append(float(x.split('\t')[3])) np.mean(np.abs(predif)) ###Output _____no_output_____ ###Markdown see vocab pre_dif for test data ###Code common = set.intersection(set(BeautyTEXTtest.vocab.itos),set(BeautyTEXT.vocab.itos),set(ApparelTEXT.vocab.itos)) common = list(common) score=[] B=[] A=[] for key in common: Bpre = predict_sentiment(key,Beautymodel,BeautyTEXT) Apre = predict_sentiment(key,Apparelmodel,ApparelTEXT) B.append(Bpre) A.append(Apre) score.append(Bpre-Apre) common_vocab=pd.DataFrame({'vocab':np.array(common),'Beautypre':np.array(B),'Apparelpre':np.array(A),'BAdif':np.array(score)}) sorted2 = common_vocab.sort_values(by='BAdif') sorted2.head(10) with open('Amazon/Apparel300test_Beauty300_Apparel300_commonvocab_BeautyApparel_predif','w') as f: for i in range(len(score)): f.write(sorted2.iloc[i]['vocab']+'\t' +str(sorted2.iloc[i]['Beautypre'])+'\t' +str(sorted2.iloc[i]['Apparelpre'])+'\t' +str(sorted2.iloc[i]['BAdif'])+'\n') def find_sen(word,test): sen=[] for i,x in enumerate(test): if word in x: sen.append(i) return sen def test_otherf(word,n): print("fpref: ",fpref[n],' mpref: ',mpref[n]) scorem = predict_sentiment(re.sub(word,'',ftest[n]),mrnn,BeautyTEXT) scoref = predict_sentiment(re.sub(word,'',ftest[n]),frnn,ApparelTEXT) print("scoref: ",scoref," scorem: ",scorem) def test_otherm(word,n): print("fprem: ",fprem[n],' mprem: ',mprem[n]) scorem = predict_sentiment(re.sub(word,'',mtest[n]),mrnn,BeautyTEXT) scoref = predict_sentiment(re.sub(word,'',mtest[n]),frnn,ApparelTEXT) print("scoref: ",scoref," scorem: ",scorem) ls200=[] for i,x in enumerate(mtest): if len(x)<200: ls200.append(i) len(ls200) fdf.sort_values(by='f-m',ascending=False).head(20) ftest[6870] #fdf.iloc[ls200].sort_values(by='f-m',ascending=False).head(20) mdf.iloc[5001] ftest[13344] predict_word('pizza') ls = mdf.sort_values(by='f-m',ascending=False).index mdf.sort_values(by='f-m',ascending=False).head(20) n = ls[5] #14 print(n) mtest[n] n = ls[15] print(n) mtest[n] #import re 
#re.split('[^a-zA-Z\sn\'t]',s) n = 6870 s = ftest[n].split('\t')[0] exam=[] tokens = s.split() print("index: ",n, " fpref: ",fpref[n]," mpref: ",mpref[n]) for i in range(len(tokens)): scorem = predict_sentiment(re.sub(tokens[i],'',s,count=1),mrnn,BeautyTEXT) scoref = predict_sentiment(re.sub(tokens[i],'',s,count=1),frnn,ApparelTEXT) print(i,"scoref: ",scoref," scorem: ",scorem) if abs(scoref-fpref[n])>1.5 or abs(scorem-mpref[n])>1.5: exam.append([i,tokens[i],(fpref[n],scoref),(mpref[n],scorem)]) exam s = mtest[n].split('\t')[0] exam=[] tokens = s.split() print("index: ",n, " fprem: ",fprem[n]," mprem: ",mprem[n]) for i in range(len(tokens)): scorem = predict_sentiment(re.sub(tokens[i],'',s,count=1),mrnn,BeautyTEXT) scoref = predict_sentiment(re.sub(tokens[i],'',s,count=1),frnn,ApparelTEXT) print(i,"scoref: ",scoref," scorem: ",scorem) if abs(scoref-fprem[n])>1.5 or abs(scorem-mprem[n])>1.5: exam.append([i,tokens[i],(fprem[n],scoref),(mprem[n],scorem)]) s = "These donuts are absolutely ridiculous! Individually handmade to order, perfect presentation, fresh and hot. I never thought such artistic excellence could be applied to a mere donut. I drive 30 minutes each way for these beautiful babies! " s = "These donuts are absolutely ridiculous! Individually handmade to order, perfect presentation, fresh and hot. I never thought such artistic excellence could be applied to a mere donut." predict_word(s) predict_word("I drive 30 minutes each way for these beautiful babies!") exam # removing '23rd' lower the female model score, this is also true for other sentences in ftest which contain '23rd' predict_word('babies') ###Output frnn: 1.8468971252441406 mrnn: 0.1990654468536377 ###Markdown ------------- back-translation ###Code from googletrans import Translator import googletrans translator = Translator() s lang = ['fr','de','es','ru','it','ja','ko','zh-cn'] trans = [] print("original sentence:") print(s) predict_word(s) print('\n') for l in lang: tmp = translator.translate(s,dest=l ).text des = translator.translate(tmp,dest='en').text print(l,'###',des) predict_word(des) print('\n') googletrans.LANGUAGES predict_word(re.split('[^a-zA-Z\sn\'t]',s)[0]) for i in range(20): if flabel[i]==5: print(i,ftest[i]) predict_word("request") ApparelTEXT.vocab.freqs['request'] BeautyTEXT.vocab.freqs['request'] w='never' ls = find_sen(w,ftest) ftest[ls[0]] for i,n in enumerate(ls): print(i,test_other(w,n,ftest)) print('\n') m23 = find_sen('23rd',mtest) for n in m23: print(test_otherm('23rd',n)) print('\n') mtest[m23[2]] predict_word('closed') predict_word('never') predict_word('23rd') predict_word('awful') predict_word('raw') predict_word('rude') predict_word('elderly') ApparelTEXT.vocab.freqs['elerly'] BeautyTEXT.vocab.freqs['elderly'] sen = find_sen('elderly',ftest) sen test_other('23rd',sen[2],ftest) ftest[10791] fdf.iloc[10791] for i,x in enumerate(ftest): if "fillet" in x: print(i,fdf['f-m'][i]) w = 'utterly' mls=[] for i,x in enumerate(Beautytrain): if w in x: mls.append(i) mstar=[] for i in mls: mstar.append(int(Beautytrain[i].split('\t')[1].strip('\n'))) print(len(mls)) fls=[] for i,x in enumerate(Appareltrain): if w in x: fls.append(i) fstar=[] for i in fls: fstar.append(int(Appareltrain[i].split('\t')[1].strip('\n'))) print(len(fls)) fstar = np.array(fstar) mstar = np.array(mstar) predict_word('utterly') import matplotlib.pyplot as plt plt.hist(mstar,density=True) plt.xlabel('male sentiment distribution for "{}"'.format(w),fontsize=15) plt.show() plt.hist(fstar,density=True) plt.xlabel('female sentiment 
distribution for "{}"'.format(w),fontsize=15) plt.show() for x in Appareltrain: if "wife" in x: print(x) break ###Output great food and excellent customer service . had lunch on sat . went with son and his wife . we 're chatting when bill came . glanced at bill , misread bill . payed way to much . contacted weary traveler regarding my error . they were happy to refund my overpayment . receive the check very quick . we will be back for more great food . 5
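The word-level checks above (model predictions plus training-set frequency) can be bundled into one small helper for convenience. This is a sketch assuming the `predict_sentiment` function, the two loaded models and their TEXT fields are still in memory.

```python
def inspect_word(word):
    """Compare the two classifiers on a single word and report its training frequency."""
    beauty_score = predict_sentiment(word, Beautymodel, BeautyTEXT)
    apparel_score = predict_sentiment(word, Apparelmodel, ApparelTEXT)
    print("Beauty model  : %.3f" % beauty_score)
    print("Apparel model : %.3f" % apparel_score)
    print("pre_dif       : %.3f" % (beauty_score - apparel_score))
    print("freq in Beauty vocab  : %d" % BeautyTEXT.vocab.freqs[word])
    print("freq in Apparel vocab : %d" % ApparelTEXT.vocab.freqs[word])

inspect_word('never')
```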
Machine learning/SVM/sk_svm.ipynb
###Markdown SVM(Support Vector Machines)Support Vector Machines (SVM) find the boundary in a transformed problem space that best separates the points of two classes; the points closest to that boundary are the support vectors. Classification for multiple classes is supported by a one-vs-all method. SVM also supports regression by modeling the function with a minimum amount of allowable error. ###Code from sklearn import datasets from sklearn import metrics from sklearn.svm import SVC ###Output _____no_output_____ ###Markdown Iris flowers Dataset ###Code dataset = datasets.load_iris() ###Output _____no_output_____ ###Markdown Model ###Code model = SVC(gamma='auto') model.fit(dataset.data, dataset.target) ###Output _____no_output_____ ###Markdown Prediction/Classification ###Code expected = dataset.target predicted = model.predict(dataset.data) print(metrics.classification_report(expected, predicted)) print(metrics.confusion_matrix(expected, predicted)) ###Output precision recall f1-score support 0 1.00 1.00 1.00 50 1 1.00 0.96 0.98 50 2 0.96 1.00 0.98 50 micro avg 0.99 0.99 0.99 150 macro avg 0.99 0.99 0.99 150 weighted avg 0.99 0.99 0.99 150 [[50 0 0] [ 0 48 2] [ 0 0 50]]
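Note that the report above is computed on the same data the model was fit on, so it is an optimistic estimate. A held-out evaluation is a small change; the sketch below splits the iris data before fitting and keeps the rest of the recipe unchanged.

```python
from sklearn.model_selection import train_test_split

# hold out a test set so the report reflects generalization rather than memorization
X_train, X_test, y_train, y_test = train_test_split(
    dataset.data, dataset.target, test_size=0.3, random_state=0)

model = SVC(gamma='auto')
model.fit(X_train, y_train)

predicted = model.predict(X_test)
print(metrics.classification_report(y_test, predicted))
print(metrics.confusion_matrix(y_test, predicted))
```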
examples/01_mms/example_mms_walen_test.ipynb
###Markdown Walen Testauthor: Louis Richard\Example code to perform Walen test; only for burst mode MMS data. ###Code import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as mdates from pyrfu import mms, pyrf from scipy import constants from pyrfu.plot import plot_line, plot_spectr ###Output _____no_output_____ ###Markdown Define spacecraft index, time intervals, jet direction and trasnformation matrix ###Code mms_id = 1 j_sign = 1 # +/-1 for jet direction #time = irf_time('2015-11-30T00:23:55.200Z', 'utc>epochtt'); trans_matrix = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) # in GSE # Plot tint = ["2015-11-30T00:23:48.000", "2015-11-30T00:24:01.000"] # reference region tint_ref = ["2015-11-30T00:23:49.000", "2015-11-30T00:23:50.000"] # Test region tint_walen = ["2015-11-30T00:23:50.000", "2015-11-30T00:23:54.000"] ###Output _____no_output_____ ###Markdown Load data PSD ###Code vdf_i = mms.get_data("pdi_fpi_brst_l2", tint, mms_id) ###Output Loading mms1_dis_dist_brst... ###Markdown Moments ###Code n_i = mms.get_data("ni_fpi_brst_l2", tint, mms_id) n_e = mms.get_data("ne_fpi_brst_l2", tint, mms_id) v_gse_i = mms.get_data("vi_gse_fpi_brst_l2", tint, mms_id) p_gse_i = mms.get_data("pi_gse_fpi_brst_l2", tint, mms_id) ###Output Loading mms1_dis_numberdensity_brst... Loading mms1_des_numberdensity_brst... Loading mms1_dis_bulkv_gse_brst... Loading mms1_dis_prestensor_gse_brst... ###Markdown Fields ###Code b_gse = mms.get_data("b_gse_fgm_brst_l2", tint, mms_id) ###Output Loading mms1_fgm_b_gse_brst_l2... ###Markdown Load defatt files ###Code defatt = mms.load_ancillary("defatt", tint, mms_id) ###Output Loading ancillary defatt files... ###Markdown Compute Compute omnidirectionnal differential energy flux (DEF) ###Code def_omni_i = mms.vdf_omni(mms.vdf_to_deflux(vdf_i)) ###Output _____no_output_____ ###Markdown Rotate pressure tensor into Field Aliigned Coordinates (FAC) ###Code p_fac_i = mms.rotate_tensor(p_gse_i, "fac", b_gse) ###Output notice : Transforming tensor into field-aligned coordinates. 
###Markdown Alpha: pressure anisotropy factor ###Code alpha_ = pyrf.pres_anis(p_fac_i, b_gse) ###Output _____no_output_____ ###Markdown gse to new123 ###Code b_123 = pyrf.new_xyz(b_gse, trans_matrix) v_123_i = pyrf.new_xyz(v_gse_i, trans_matrix) ###Output _____no_output_____ ###Markdown Reference(MSH) region; in New frame(123); ###Code b_ref = pyrf.time_clip(b_123, tint_ref) b_ref = np.nanmean(b_ref.data, axis=0) v_i_ref = pyrf.time_clip(v_123_i, tint_ref) v_i_ref = np.nanmean(v_i_ref.data, axis=0) n_i_ref = pyrf.time_clip(n_i, tint_ref) n_i_ref = np.nanmean(n_i_ref.data, axis=0) alpha_ref = pyrf.time_clip(alpha_, tint_ref) alpha_ref = np.nanmean(alpha_ref.data, axis=0) ###Output _____no_output_____ ###Markdown Vipred1: delta_B / sqrt(rho1) ###Code b_123 = pyrf.resample(b_123, n_i) v_123_i = pyrf.resample(v_123_i, n_i) tmp_1 = (b_123 - b_ref) * 21.8 / np.sqrt(n_i_ref) v_i_pred1 = pyrf.resample(tmp_1, v_123_i) * j_sign + v_i_ref ###Output /Users/louisr/opt/anaconda3/lib/python3.8/site-packages/pyrfu/pyrf/resample.py:223: UserWarning: Using averages in resample warnings.warn("Using averages in resample", UserWarning) ###Markdown Vipred2: $B_2 / \sqrt{\rho_2} - B_1 / \sqrt{\rho_1}$ [Phan et al, 2004] ###Code tmp_2 = 21.8 * (1 - alpha_) * b_123 / np.sqrt(n_i_ref * (1 - alpha_ref)) v_i_pred2 = (tmp_2 - 21.8 * np.sqrt(1 - alpha_ref) * b_ref / np.sqrt(n_i_ref)) v_i_pred2 *= j_sign v_i_pred2 += v_i_ref ###Output _____no_output_____ ###Markdown Vipred2: $\sqrt{1 - \alpha_2} B_2 / \sqrt{\rho_2} - \sqrt{1 - \alpha_1} B_1 / \sqrt{\rho_1}$ ###Code v_i_pred3 = 21.8 * (1 - alpha_) * b_123 / np.sqrt(n_i) v_i_pred3 -= 21.8 * np.sqrt(1 - alpha_ref) * b_ref / np.sqrt(n_i_ref) v_i_pred3 *= j_sign v_i_pred3 += v_i_ref ###Output _____no_output_____ ###Markdown Slope & CC ###Code v_123_i_w = pyrf.time_clip(v_123_i, tint_walen) v_i_pred1_w = pyrf.time_clip(v_i_pred1, tint_walen) v_i_pred2_w = pyrf.time_clip(v_i_pred2, tint_walen) v_i_pred3_w = pyrf.time_clip(v_i_pred3, tint_walen) p_ = [np.polyfit(v_i_pred2_w.data[:, i], v_123_i_w.data[:, i], 1) for i in range(3)] slope_2 = [p_[i][0] for i in range(3)] corr_ = [np.corrcoef(v_i_pred2_w.data[:, i], v_123_i_w.data[:, i]) for i in range(3)] cc_2 = [corr_[i][0, 1] for i in range(3)] ###Output _____no_output_____ ###Markdown Plot ###Code %matplotlib notebook f, axs = plt.subplots(7, sharex="all", figsize=(8.5, 11)) f.subplots_adjust(bottom=.05, top=.95, left=.12, right=.88, hspace=0) plot_line(axs[0], b_gse) axs[0].legend(["$B_x$", "$B_y$", "$B_z$"], ncol=3) axs[0].set_ylabel("$B$ [nT]") axs[0].set_title(f"MMS-{mms_id:d}") plot_line(axs[1], n_i, color="tab:blue", label="$N_i$") plot_line(axs[1], n_e, color="tab:red", label="$N_i$") axs[1].legend(ncol=3) axs[1].set_ylabel("$N$ [cm$^{-3}$]") axs[2], caxs2 = plot_spectr(axs[2], def_omni_i, yscale="log", cscale="log") axs[2].set_yticks(np.logspace(1, 4, 4)) axs[2].set_ylabel("$W_i$ [eV]") caxs2.set_ylabel("DEF" + "\n" + "[(cm$^2$ s sr)$^{-1}$]") plot_line(axs[3], b_123) axs[3].legend(["$B_1$", "$B_2$", "$B_3$"], ncol=3) axs[3].set_ylabel("$B$ [nT]") axs[3].text(1.01, .75, np.array2string(trans_matrix[0, :], separator=",", precision=2), color="tab:blue", transform=axs[3].transAxes) axs[3].text(1.01, .50, np.array2string(trans_matrix[1, :], separator=",", precision=2), color="tab:green", transform=axs[3].transAxes) axs[3].text(1.01, .25, np.array2string(trans_matrix[2, :], separator=",", precision=2), color="tab:red", transform=axs[3].transAxes) plot_line(axs[4], v_123_i[:, 0], color="k", label="FPI") plot_line(axs[4], 
v_i_pred2_w[:, 0], color="tab:red", linestyle="-", label="pred") plot_line(axs[4], v_i_pred2[:, 0], color="tab:red", linestyle="--") axs[4].legend(ncol=3) axs[4].set_ylabel("$V_1$ [km s$^{-1}$]") axs[4].text(1.01, .75, f"slope = {slope_2[0]:3.2f}", color="k", transform=axs[4].transAxes) axs[4].text(1.01, .25, f"cc = {cc_2[0]:3.2f}", color="k", transform=axs[4].transAxes) axs[4].axvspan(mdates.datestr2num(tint_ref[0]), mdates.datestr2num(tint_ref[1]), color="tab:red", alpha=.2) axs[4].axvspan(mdates.datestr2num(tint_walen[0]), mdates.datestr2num(tint_walen[1]), color="yellow", alpha=.2) plot_line(axs[5], v_123_i[:, 1], color="k", label="FPI") plot_line(axs[5], v_i_pred2_w[:, 1], color="tab:red", linestyle="-", label="pred") plot_line(axs[5], v_i_pred2[:, 1], color="tab:red", linestyle="--") axs[5].legend(ncol=3) axs[5].set_ylabel("$V_2$ [km s$^{-1}$]") axs[5].text(1.01, .75, f"slope = {slope_2[1]:3.2f}", color="k", transform=axs[5].transAxes) axs[5].text(1.01, .25, f"cc = {cc_2[1]:3.2f}", color="k", transform=axs[5].transAxes) plot_line(axs[6], v_123_i[:, 2], color="k", label="FPI") plot_line(axs[6], v_i_pred2_w[:, 2], color="tab:red", linestyle="-", label="pred") plot_line(axs[6], v_i_pred2[:, 2], color="tab:red", linestyle="--") axs[6].legend(ncol=3) axs[6].set_ylabel("$V_3$ [km s$^{-1}$]") axs[6].text(1.01, .75, f"slope = {slope_2[2]:3.2f}", color="k", transform=axs[6].transAxes) axs[6].text(1.01, .25, f"cc = {cc_2[2]:3.2f}", color="k", transform=axs[6].transAxes) f.align_ylabels(axs) ###Output _____no_output_____
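Only `v_i_pred2` is scored above, although three predicted velocities were constructed. The same slope and correlation diagnostic can be applied to each prediction; the sketch below assumes the clipped series `v_123_i_w`, `v_i_pred1_w`, `v_i_pred2_w` and `v_i_pred3_w` computed earlier.

```python
import numpy as np

# slope and correlation coefficient of each Walen prediction, component by component
for name, pred in [("pred1", v_i_pred1_w), ("pred2", v_i_pred2_w), ("pred3", v_i_pred3_w)]:
    slopes = [np.polyfit(pred.data[:, i], v_123_i_w.data[:, i], 1)[0] for i in range(3)]
    ccs = [np.corrcoef(pred.data[:, i], v_123_i_w.data[:, i])[0, 1] for i in range(3)]
    print(name, "slopes:", np.round(slopes, 2), "cc:", np.round(ccs, 2))
```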
project-1/Project-Diamond-Prices.ipynb
###Markdown Predict the Diamond PricesProject 1 for Udacity Predictive Analytics for Business Nanodegree. Import necessary libraries ###Code import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt from sklearn import datasets, linear_model from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error from sklearn.model_selection import train_test_split ###Output _____no_output_____ ###Markdown Read and understand the data ###Code df = pd.read_csv("diamonds.csv") df.head() df = df.drop(columns=["Unnamed: 0"]) ###Output _____no_output_____ ###Markdown Step 2: Visualize the data Plot 1 - Plot the data for the diamonds in the database, with carat on the x-axis and price on the y-axis. ###Code fig, ax = plt.subplots(figsize=(10, 6)) ax.scatter(x = df['carat'], y = df['price']) plt.xlabel("Carat") plt.ylabel("Diamond Price") plt.show() ###Output _____no_output_____ ###Markdown EncodingConvert non-numerical data to numerical data**Derive a relation and map into categories**1. Compare prices per unit carat 2. Map according to the categorical values ###Code df['price/wt']=df['price']/df['carat'] print(df.groupby('cut')['price/wt'].mean().sort_values()) print(df.groupby('color')['price/wt'].mean().sort_values()) print(df.groupby('clarity')['price/wt'].mean().sort_values()) df = df.drop(['price/wt'], axis=1) df['cut']=df['cut'].map({'Ideal':1,'Good':2,'Very Good':3,'Fair':4,'Premium':5}) df['color']=df['color'].map({'E':1,'D':2,'F':3,'G':4,'H':5,'I':6,'J':7}) df['clarity']=df['clarity'].map({'VVS1':1,'IF':2,'VVS2':3,'VS1':4,'I1':5,'VS2':6,'SI1':7,'SI2':8}) ###Output _____no_output_____ ###Markdown Find the features correlation`carat` is the most correlated feature ###Code corrMatrix = df.corr() sns.heatmap(corrMatrix, annot=True) plt.show() df['cut/wt']=df['cut']/df['carat'] df['color/wt']=df['color']/df['carat'] df['clarity/wt']=df['clarity']/df['carat'] df = df.drop(['cut','color','clarity'], axis=1) df.head() ###Output _____no_output_____ ###Markdown Regression (ML Prediction)1. Split y = what we want to predict and X = features 2. Split into train, test data 3. Run prediction by using linear regression and decision tree regressor. ###Code X = df.drop(columns=["price"]) y = df["price"] random_state = 42 X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=random_state) lr=linear_model.LinearRegression() lr.fit(X_train,y_train) y_pred_lr=lr.predict(X_test) Rsquare=lr.score(X_test,y_test) print("Rsquare: %f" %(Rsquare)) coeff_df = pd.DataFrame(X_train.columns) coeff_df.columns = ['Variable'] coeff_df["Coeff"] = pd.Series(lr.coef_) coeff_df.sort_values(by='Coeff', ascending=True) print(coeff_df) print("Intercept: %f" %(lr.intercept_)) mae = mean_absolute_error(y_test,y_pred_lr) print("mae: %f" %(mae)) rmse=np.sqrt(mean_squared_error(y_test,y_pred_lr)) print("rmse: %f" %(rmse)) from sklearn.tree import DecisionTreeRegressor dtr = DecisionTreeRegressor(random_state = random_state) dtr.fit(X_train, y_train) y_pred_dtr = dtr.predict(X_test) mae = mean_absolute_error(y_test,y_pred_dtr) print("mae: %f" %(mae)) Rsquare=dtr.score(X_test,y_test) print("Rsquare: %f" %(Rsquare)) rmse=np.sqrt(mean_squared_error(y_test,y_pred_dtr)) print("rmse: %f" %(rmse)) ###Output mae: 310.926501 Rsquare: 0.974167 rmse: 628.761591 ###Markdown Plot 2 - Plot the data for the diamonds for which you are predicting prices with carat on the x-axis and predicted price on the y-axis. 
###Code sns.regplot(X_test['carat'], y_pred_dtr, ci=None) sns.regplot(X_test['color/wt'], y_pred_dtr, ci=None) ###Output C:\Users\hamzahf\.conda\envs\ML\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. warnings.warn( ###Markdown Test the data with `new-diamonds.csv` file ###Code df_new = pd.read_csv('new-diamonds.csv') df_new.head() df_new = df_new.drop(columns=['Unnamed: 0']) df_new['cut']=df_new['cut'].map({'Ideal':1,'Good':2,'Very Good':3,'Fair':4,'Premium':5}) df_new['color']=df_new['color'].map({'E':1,'D':2,'F':3,'G':4,'H':5,'I':6,'J':7}) df_new['clarity']=df_new['clarity'].map({'VVS1':1,'IF':2,'VVS2':3,'VS1':4,'I1':5,'VS2':6,'SI1':7,'SI2':8}) df_new['cut/wt']=df_new['cut']/df_new['carat'] df_new['color/wt']=df_new['color']/df_new['carat'] df_new['clarity/wt']=df_new['clarity']/df_new['carat'] df_new = df_new.drop(['cut','color','clarity'], axis=1) ###Output _____no_output_____ ###Markdown Run prediction and append new column called `Predicted Price` ###Code df_new["Predicted Price"] = dtr.predict(df_new) df_new["Predicted Price"] = df_new["Predicted Price"].round(2) df_new df_new['cut']=df_new['cut/wt']*df_new['carat'] df_new['color']=df_new['color/wt']*df_new['carat'] df_new['clarity']=df_new['clarity/wt']*df_new['carat'] df_new df_new["cut"]=df_new["cut"].astype(int) df_new["color"]=df_new["color"].astype(int) df_new["clarity"]=df_new["clarity"].astype(int) df_new.describe() df_new['cut']=df_new['cut'].map({0: "Ideal", 1 :"Ideal", 2 :"Good", 3 : "Very Good", 4 : "Fair", 5 :"Premium"}) df_new['color']=df_new['color'].map({0: "E", 1:"E", 2: "D", 3:"F", 4: "G", 5:"H", 6: "I", 7:"J"}) df_new['clarity']=df_new['clarity'].map({0:"WS1", 1:"WS1", 2:"IF", 3: "VVS2", 4: "VS1", 5:"I1", 6:"VS2", 7:"SI1", 8: "SI2"}) df_new df_new = df_new.drop(columns=["cut/wt", "color/wt", "clarity/wt"]) df_new.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 3000 entries, 0 to 2999 Data columns (total 7 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 carat 3000 non-null float64 1 cut_ord 3000 non-null int64 2 clarity_ord 3000 non-null int64 3 Predicted Price 3000 non-null float64 4 cut 3000 non-null object 5 color 3000 non-null object 6 clarity 3000 non-null object dtypes: float64(2), int64(2), object(3) memory usage: 164.2+ KB ###Markdown Step 3: Make a Recommendation 1. What price do you recommend the jewelry company to bid? Please explain how you arrived at that number.See the `predicted-diamonds-price.csv` for full predicted price value and all the steps to arrive that predicted price is in this notebook. ###Code df_new.to_csv("predicted-diamonds-price.csv") total = df_new['Predicted Price'].sum() bid_price = 0.7 * total print(total) print(bid_price) ###Output 11695167.809999999 8186617.466999998 ###Markdown Step 2: Visualize the data Plot 2 - Plot the data for the diamonds for which you are predicting prices with carat on the x-axis and predicted price on the y-axis. ###Code sns.regplot(df_new['carat'], df_new['Predicted Price'], ci=None) ###Output C:\Users\hamzahf\.conda\envs\ML\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. 
From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation. warnings.warn( ###Markdown 3. What strikes you about this comparison? After seeing this plot, do you feel confident in the model’s ability to predict prices? Yes, because an R-square of 0.974167 is achieved with the Decision Tree Regressor and the predicted prices fall almost on a straight line against carat. Step 1 - Understanding the model 1. According to the model, if a diamond is 1 carat heavier than another with the same cut, how much more should I expect to pay? Why? ###Code df_new[df_new['carat']==0.5].describe() df_new[df_new['carat']==1.5].describe() df_new[df_new['carat']==2.5].describe() ###Output _____no_output_____ ###Markdown The average price difference for 1 extra carat is (9846.77 - 1491.8) = 8354.97 and (16842 - 9846.77) = 6995.23. I should expect to pay around 7000 to 8000 more. Step 1: Understanding the model 2. If you were interested in a 1.5 carat diamond with a Very Good cut (represented by a 3 in the model) and a VS2 clarity rating (represented by a 5 in the model), how much would the model predict you should pay for it? ###Code cond1 = np.logical_and(df_new['carat'] == 1.5, df_new['cut'] == "Very Good") cond2 = np.logical_and(cond1, df_new['clarity'] == "VS2") df_new[cond2] ###Output _____no_output_____
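As a small follow-up sketch (not part of the original notebook), the filtered rows above can be summarized into a single answer by averaging their predicted prices; this assumes `df_new` and `cond2` from the previous cell are still in scope.

```python
# Sketch: summarize the diamonds selected by cond2 above into one number.
matching = df_new[cond2]
print(len(matching), "matching diamonds")
print("Mean predicted price:", round(matching["Predicted Price"].mean(), 2))
```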
approximating-cont-controller/notebooks/Pole-placement-power-plant-reservoir.ipynb
###Markdown Example of pole placement for control of power-plant reservoirWe have plant model$$ H(z) = \frac{1}{z(z-1)} $$and controller$$ F_b(z) = \frac{s_0z + s_1}{z + r_1} $$Want closed-loop poles in $z=0.9$ and observer poles in the origin. Diophantine equation\begin{align} z(z-1)(z+r_1) + s_0z + s_1 &= z(z-0.9)^2\\ z^3 - (1-r_1)z^2 + (s_0-r_1)z + s_1 &= z^3 - 1.8z^2 + 0.81z\end{align}Resulting equations when setting coefficients equal\begin{align} 1 - r_ 1 &= 1.8 \quad \Rightarrow \quad r_1 = -0.8\\ s_0-r_1 &= 0.81 \quad \Rightarrow \quad s_0 = 0.01\\ s_1 &= 0\end{align} Feedforward part of controller$$T(z) = t_0A_o(z) = t_0z$$$$ G_c(z) = \frac{T(z)B(z)}{A_o(z)A_c(z)} = \frac{t_0 B(z)}{A_c(z)}, \quad \text{want}\, G_c(1)=1$$$$t_0 = \frac{A_c(1)}{B(1)} = \frac{(1-0.9)^2}{1} = 0.01$$ ###Code import numpy as np import matplotlib.pyplot as plt import sympy as sy import control.matlab as cm %matplotlib notebook ###Output _____no_output_____ ###Markdown Symbolic solution ###Code sy.init_printing() aa, alphaa, hh, r1, s0, s1 = sy.symbols('a, alpha, h, r1, s0, s1', real=True, positive=True) zz = sy.symbols('z', real=False) A = zz*(zz-1) B = 1 R = zz+r1 S = s0*zz + s1 LHS = sy.Poly(A*R + B*S, zz) LHS RHS = sy.Poly(zz*(zz-alphaa)**2, zz) Dioph = LHS-RHS coeffs = Dioph.coeffs() coeffs sol = sy.solve(coeffs, [r1, s0]) sol sol[r1].subs({alphaa: 0.9}) sol[s0].subs({alphaa: 0.9}) ###Output _____no_output_____ ###Markdown Numerical solution ###Code # Plant a = 1 b = 1 h = 0.1 H = cm.tf([b], [1, -a,0 ], h) # Desired closed-loop pole alpha = 0.9 # Controller parameters r_1 = -2*alpha +1 s_0 = -2*alpha+1 +alpha**2 Fb = cm.tf([s_0, 0], [1, r_1], h) Fb t0 = (1-alpha)**2/1 Ff = cm.tf([t0, 0], [1, r_1], h) # Check calculations Hc = cm.minreal(Ff*cm.feedback(H, Fb)) cm.pole(Hc) Hcv = cm.feedback(1, H*Fb) Hcn = cm.feedback(H*Fb, 1) br, res = cm.rlocus(Fb*H) plt.plot(np.real(br), np.imag(br)) plt.xlim((0,1.2)) plt.ylim((-1.2,1.2)); y, t = cm.step(Hc) tt = h*np.arange(len(y)) plt.stem(tt[:50], y[:50]); cm.bode(Hcv, Hcn); ###Output _____no_output_____
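As one extra sanity check (a sketch using only calls already imported in this notebook), the choice $t_0 = A_c(1)/B(1)$ should give the closed loop unit static gain, so the step response of `Hc` should settle near 1.

```python
# Sanity check (sketch): unit static gain of the closed loop Hc.
y_ss, t_ss = cm.step(Hc)
print("Final value of the step response:", y_ss[-1])  # should be close to 1
```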
01_Image_Representation_Classification/1_2_Convolutional_Filters_Edge_Detection/6_1. Hough lines.ipynb
###Markdown Hough Lines Import resources and display the image ###Code import numpy as np import matplotlib.pyplot as plt import cv2 %matplotlib inline # Read in the image image = cv2.imread('images/phone.jpg') # Change color to RGB (from BGR) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) plt.imshow(image) ###Output _____no_output_____ ###Markdown Perform edge detection ###Code # Convert image to grayscale gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) # Define our parameters for Canny low_threshold = 50 high_threshold = 100 edges = cv2.Canny(gray, low_threshold, high_threshold) plt.imshow(edges, cmap='gray') ###Output _____no_output_____ ###Markdown Find lines using a Hough transform ###Code # Define the Hough transform parameters # Make a blank the same size as our image to draw on rho = 1 theta = np.pi/180 threshold = 60 min_line_length = 50 max_line_gap = 5 line_image = np.copy(image) #creating an image copy to draw lines on # Run Hough on the edge-detected image lines = cv2.HoughLinesP(edges, rho, theta, threshold, np.array([]), min_line_length, max_line_gap) # Iterate over the output "lines" and draw lines on the image copy for line in lines: for x1,y1,x2,y2 in line: cv2.line(line_image,(x1,y1),(x2,y2),(255,0,0),5) plt.imshow(line_image) ###Output _____no_output_____
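To build intuition for the `threshold` parameter, a hedged variation of the same call with a stricter vote count (100 instead of 60) typically keeps only the most strongly supported segments:

```python
# Sketch: same probabilistic Hough call with a stricter vote threshold,
# to compare how many segments survive relative to the original run.
lines_strict = cv2.HoughLinesP(edges, rho, theta, 100,
                               np.array([]), min_line_length, max_line_gap)
print("segments at threshold", threshold, ":", len(lines))
print("segments at threshold 100:", 0 if lines_strict is None else len(lines_strict))
```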
weight-initialization/.ipynb_checkpoints/weight_initialization_exercise-checkpoint.ipynb
###Markdown Weight InitializationIn this lesson, you'll learn how to find good initial weights for a neural network. Weight initialization happens once, when a model is created and before it trains. Having good initial weights can place the neural network close to the optimal solution. This allows the neural network to come to the best solution quicker. Initial Weights and Observing Training LossTo see how different weights perform, we'll test on the same dataset and neural network. That way, we know that any changes in model behavior are due to the weights and not any changing data or model structure. > We'll instantiate at least two of the same models, with _different_ initial weights and see how the training loss decreases over time, such as in the example below. Sometimes the differences in training loss, over time, will be large and other times, certain weights offer only small improvements. Dataset and ModelWe'll train an MLP to classify images from the [Fashion-MNIST database](https://github.com/zalandoresearch/fashion-mnist) to demonstrate the effect of different initial weights. As a reminder, the FashionMNIST dataset contains images of clothing types; `classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']`. The images are normalized so that their pixel values are in a range [0.0 - 1.0). Run the cell below to download and load the dataset.--- EXERCISE[Link to normalized distribution, exercise code](normalex)--- Import Libraries and Load [Data](http://pytorch.org/docs/stable/torchvision/datasets.html) ###Code import torch import numpy as np from torchvision import datasets import torchvision.transforms as transforms from torch.utils.data.sampler import SubsetRandomSampler # number of subprocesses to use for data loading num_workers = 0 # how many samples per batch to load batch_size = 100 # percentage of training set to use as validation valid_size = 0.2 # convert data to torch.FloatTensor transform = transforms.ToTensor() # choose the training and test datasets train_data = datasets.FashionMNIST(root='data', train=True, download=True, transform=transform) test_data = datasets.FashionMNIST(root='data', train=False, download=True, transform=transform) # obtain training indices that will be used for validation num_train = len(train_data) indices = list(range(num_train)) np.random.shuffle(indices) split = int(np.floor(valid_size * num_train)) train_idx, valid_idx = indices[split:], indices[:split] # define samplers for obtaining training and validation batches train_sampler = SubsetRandomSampler(train_idx) valid_sampler = SubsetRandomSampler(valid_idx) # prepare data loaders (combine dataset and sampler) train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, sampler=train_sampler, num_workers=num_workers) valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, sampler=valid_sampler, num_workers=num_workers) test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers) # specify the image classes classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] ###Output _____no_output_____ ###Markdown Visualize Some Training Data ###Code import matplotlib.pyplot as plt %matplotlib inline # obtain one batch of training images dataiter = iter(train_loader) images, labels = dataiter.next() images = images.numpy() # plot the images in the batch, along with the corresponding labels fig = 
plt.figure(figsize=(25, 4)) for idx in np.arange(20): ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[]) ax.imshow(np.squeeze(images[idx]), cmap='gray') ax.set_title(classes[labels[idx]]) ###Output _____no_output_____ ###Markdown Define the Model ArchitectureWe've defined the MLP that we'll use for classifying the dataset. Neural Network* A 3 layer MLP with hidden dimensions of 256 and 128. * This MLP accepts a flattened image (784-value long vector) as input and produces 10 class scores as output.---We'll test the effect of different initial weights on this 3 layer neural network with ReLU activations and an Adam optimizer. The lessons you learn apply to other neural networks, including different activations and optimizers. --- Initialize WeightsLet's start looking at some initial weights. All Zeros or OnesIf you follow the principle of [Occam's razor](https://en.wikipedia.org/wiki/Occam's_razor), you might think setting all the weights to 0 or 1 would be the best solution. This is not the case.With every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust.Let's compare the loss with all ones and all zero weights by defining two models with those constant weights.Below, we are using PyTorch's [nn.init](https://pytorch.org/docs/stable/nn.htmltorch-nn-init) to initialize each Linear layer with a constant weight. The init library provides a number of weight initialization functions that give you the ability to initialize the weights of each layer according to layer type.In the case below, we look at every layer/module in our model. If it is a Linear layer (as all three layers are for this MLP), then we initialize those layer weights to be a `constant_weight` with bias=0 using the following code:>```if isinstance(m, nn.Linear): nn.init.constant_(m.weight, constant_weight) nn.init.constant_(m.bias, 0)```The `constant_weight` is a value that you can pass in when you instantiate the model. ###Code import torch.nn as nn import torch.nn.functional as F # define the NN architecture class Net(nn.Module): def __init__(self, hidden_1=256, hidden_2=128, constant_weight=None): super(Net, self).__init__() # linear layer (784 -> hidden_1) self.fc1 = nn.Linear(28 * 28, hidden_1) # linear layer (hidden_1 -> hidden_2) self.fc2 = nn.Linear(hidden_1, hidden_2) # linear layer (hidden_2 -> 10) self.fc3 = nn.Linear(hidden_2, 10) # dropout layer (p=0.2) self.dropout = nn.Dropout(0.2) # initialize the weights to a specified, constant value if(constant_weight is not None): for m in self.modules(): if isinstance(m, nn.Linear): nn.init.constant_(m.weight, constant_weight) nn.init.constant_(m.bias, 0) def forward(self, x): # flatten image input x = x.view(-1, 28 * 28) # add hidden layer, with relu activation function x = F.relu(self.fc1(x)) # add dropout layer x = self.dropout(x) # add hidden layer, with relu activation function x = F.relu(self.fc2(x)) # add dropout layer x = self.dropout(x) # add output layer x = self.fc3(x) return x ###Output _____no_output_____ ###Markdown Compare Model BehaviorBelow, we are using `helpers.compare_init_weights` to compare the training and validation loss for the two models we defined above, `model_0` and `model_1`. This function takes in a list of models (each with different initial weights), the name of the plot to produce, and the training and validation dataset loaders. 
For each given model, it will plot the training loss for the first 100 batches and print out the validation accuracy after 2 training epochs. *Note: if you've used a small batch_size, you may want to increase the number of epochs here to better compare how models behave after seeing a few hundred images.* We plot the loss over the first 100 batches to better judge which model weights performed better at the start of training. **I recommend that you take a look at the code in `helpers.py` to look at the details behind how the models are trained, validated, and compared.**Run the cell below to see the difference between weights of all zeros against all ones. ###Code # initialize two NN's with 0 and 1 constant weights model_0 = Net(constant_weight=0) model_1 = Net(constant_weight=1) import helpers # put them in list form to compare model_list = [(model_0, 'All Zeros'), (model_1, 'All Ones')] # plot the loss over the first 100 batches helpers.compare_init_weights(model_list, 'All Zeros vs All Ones', train_loader, valid_loader) ###Output _____no_output_____ ###Markdown As you can see the accuracy is close to guessing for both zeros and ones, around 10%.The neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. We can also randomly select these weights to avoid being stuck in a local minimum for each run.A good solution for getting these random weights is to sample from a uniform distribution. Uniform DistributionA [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution) has the equal probability of picking any number from a set of numbers. We'll be picking from a continuous distribution, so the chance of picking the same number is low. We'll use NumPy's `np.random.uniform` function to pick random numbers from a uniform distribution.> [`np.random_uniform(low=0.0, high=1.0, size=None)`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html)>Outputs random values from a uniform distribution.>The generated values follow a uniform distribution in the range [low, high). The lower bound minval is included in the range, while the upper bound maxval is excluded.>- **low:** The lower bound on the range of random values to generate. Defaults to 0.- **high:** The upper bound on the range of random values to generate. Defaults to 1.- **size:** An int or tuple of ints that specify the shape of the output array.We can visualize the uniform distribution by using a histogram. Let's map the values from `np.random_uniform(-3, 3, [1000])` to a histogram using the `helper.hist_dist` function. This will be `1000` random float values from `-3` to `3`, excluding the value `3`. ###Code helpers.hist_dist('Random Uniform (low=-3, high=3)', np.random.uniform(-3, 3, [1000])) ###Output _____no_output_____ ###Markdown The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2.Now that you understand the uniform function, let's use PyTorch's `nn.init` to apply it to a model's initial weights. Uniform Initialization, BaselineLet's see how well the neural network trains using a uniform weight initialization, where `low=0.0` and `high=1.0`. 
Below, I'll show you another way (besides in the Net class code) to initialize the weights of a network. To define weights outside of the model definition, you can:>1. Define a function that assigns weights by the type of network layer, *then* 2. Apply those weights to an initialized model using `model.apply(fn)`, which applies a function to each model layer.This time, we'll use `weight.data.uniform_` to initialize the weights of our model, directly. ###Code # takes in a module and applies the specified weight initialization def weights_init_uniform(m): classname = m.__class__.__name__ # for every Linear layer in a model.. if classname.find('Linear') != -1: # apply a uniform distribution to the weights and a bias=0 m.weight.data.uniform_(0.0, 1.0) m.bias.data.fill_(0) # create a new model with these weights model_uniform = Net() model_uniform.apply(weights_init_uniform) # evaluate behavior helpers.compare_init_weights([(model_uniform, 'Uniform Weights')], 'Uniform Baseline', train_loader, valid_loader) ###Output _____no_output_____ ###Markdown ---The loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. We're headed in the right direction! General rule for setting weightsThe general rule for setting the weights in a neural network is to set them to be close to zero without being too small. >Good practice is to start your weights in the range of $[-y, y]$ where $y=1/\sqrt{n}$ ($n$ is the number of inputs to a given neuron).Let's see if this holds true; let's create a baseline to compare with and center our uniform range over zero by shifting it over by 0.5. This will give us the range [-0.5, 0.5). ###Code # takes in a module and applies the specified weight initialization def weights_init_uniform_center(m): classname = m.__class__.__name__ # for every Linear layer in a model.. if classname.find('Linear') != -1: # apply a centered, uniform distribution to the weights m.weight.data.uniform_(-0.5, 0.5) m.bias.data.fill_(0) # create a new model with these weights model_centered = Net() model_centered.apply(weights_init_uniform_center) ###Output _____no_output_____ ###Markdown Then let's create a distribution and model that uses the **general rule** for weight initialization; using the range $[-y, y]$, where $y=1/\sqrt{n}$ .And finally, we'll compare the two models. ###Code # takes in a module and applies the specified weight initialization def weights_init_uniform_rule(m): classname = m.__class__.__name__ # for every Linear layer in a model.. if classname.find('Linear') != -1: # get the number of the inputs n = m.in_features y = 1.0/np.sqrt(n) m.weight.data.uniform_(-y, y) m.bias.data.fill_(0) # create a new model with these weights model_rule = Net() model_rule.apply(weights_init_uniform_rule) # compare these two models model_list = [(model_centered, 'Centered Weights [-0.5, 0.5)'), (model_rule, 'General Rule [-y, y)')] # evaluate behavior helpers.compare_init_weights(model_list, '[-0.5, 0.5) vs [-y, y)', train_loader, valid_loader) ###Output _____no_output_____ ###Markdown This behavior is really promising! 
Not only is the loss decreasing, but it seems to do so very quickly for our uniform weights that follow the general rule; after only two epochs we get a fairly high validation accuracy and this should give you some intuition for why starting out with the right initial weights can really help your training process!---Since the uniform distribution has the same chance to pick *any value* in a range, what if we used a distribution that had a higher chance of picking numbers closer to 0? Let's look at the normal distribution. Normal DistributionUnlike the uniform distribution, the [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution) has a higher likelihood of picking number close to it's mean. To visualize it, let's plot values from NumPy's `np.random.normal` function to a histogram.>[np.random.normal(loc=0.0, scale=1.0, size=None)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.normal.html)>Outputs random values from a normal distribution.>- **loc:** The mean of the normal distribution.- **scale:** The standard deviation of the normal distribution.- **shape:** The shape of the output array. ###Code helpers.hist_dist('Random Normal (mean=0.0, stddev=1.0)', np.random.normal(size=[1000])) ###Output _____no_output_____ ###Markdown Let's compare the normal distribution against the previous, rule-based, uniform distribution. TODO: Define a weight initialization function that gets weights from a normal distribution > The normal distribution should have a mean of 0 and a standard deviation of $y=1/\sqrt{n}$ ###Code ## complete this function def weights_init_normal(m): '''Takes in a module and initializes all linear layers with weight values taken from a normal distribution.''' classname = m.__class__.__name__ # for every Linear layer in a model # m.weight.data shoud be taken from a normal distribution # m.bias.data should be 0 ## -- no need to change code below this line -- ## # create a new model with the rule-based, uniform weights model_uniform_rule = Net() model_uniform_rule.apply(weights_init_uniform_rule) # create a new model with the rule-based, NORMAL weights model_normal_rule = Net() model_normal_rule.apply(weights_init_normal) # compare the two models model_list = [(model_uniform_rule, 'Uniform Rule [-y, y)'), (model_normal_rule, 'Normal Distribution')] # evaluate behavior helpers.compare_init_weights(model_list, 'Uniform vs Normal', train_loader, valid_loader) ###Output _____no_output_____ ###Markdown The normal distribution gives us pretty similar behavior compared to the uniform distribution, in this case. This is likely because our network is so small; a larger neural network will pick more weight values from each of these distributions, magnifying the effect of both initialization styles. In general, a normal distribution will result in better performance for a model. --- Automatic InitializationLet's quickly take a look at what happens *without any explicit weight initialization*. ###Code ## Instantiate a model with _no_ explicit weight initialization ## evaluate the behavior using helpers.compare_init_weights ###Output _____no_output_____
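For reference, one possible completion of the `weights_init_normal` TODO above (an assumption on my part, not the course's official solution) mirrors the uniform rule but samples from a normal distribution with mean 0 and standard deviation $y = 1/\sqrt{n}$; the final cell can then instantiate a plain `Net()` to observe PyTorch's default initialization.

```python
# One possible completion of the TODO (not the official solution):
def weights_init_normal(m):
    '''Takes in a module and initializes all linear layers with weight
       values taken from a normal distribution.'''
    classname = m.__class__.__name__
    # for every Linear layer in a model..
    if classname.find('Linear') != -1:
        n = m.in_features
        y = 1.0 / np.sqrt(n)
        # m.weight.data drawn from a normal distribution with mean 0, std y
        m.weight.data.normal_(0.0, y)
        # m.bias.data set to 0
        m.bias.data.fill_(0)

# And for the final cell: a model with PyTorch's default (automatic) initialization
model_default = Net()
helpers.compare_init_weights([(model_default, 'Default Init')],
                             'Default Initialization',
                             train_loader, valid_loader)
```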
Colab-Supervised/Classification_of_Sign_Language_with_UCA_Net.ipynb
###Markdown Classification of Sign Language with UCA-Net By Arda Mavi & Zeynep Dikle Summary:Classification of our own 'Sign Language Dataset' with our own machine learning algorithm 'UCA-Net' Connecting Drive: ###Code !apt-get install -y -qq software-properties-common python-software-properties module-init-tools !add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null !apt-get update -qq 2>&1 > /dev/null !apt-get -y install -qq google-drive-ocamlfuse fuse from google.colab import auth auth.authenticate_user() from oauth2client.client import GoogleCredentials creds = GoogleCredentials.get_application_default() import getpass !google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL vcode = getpass.getpass() !echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} !mkdir -p drive !google-drive-ocamlfuse drive import sys sys.path.insert(0, 'drive/Colab_UCA-Net') !ls drive/Colab_UCA-Net !pip3 install -r drive/Colab_UCA-Net/requirements.txt # Import import keras import numpy as np import matplotlib.pyplot as plt %matplotlib inline # Getting Dataset: from get_dataset import get_dataset X_train, X_test, Y_train, Y_test = get_dataset('drive/Colab_UCA-Net/Data/npy_dataset') # About Dataset: img_size = X_train.shape[1] # 64 channel_size = X_train.shape[3] # 1: Grayscale, 3: RGB print('Training shape:', X_train.shape) print(X_train.shape[0], 'sample,',X_train.shape[1] ,'x',X_train.shape[2] ,'size grayscale image.\n') print('Test shape:', X_test.shape) print(X_test.shape[0], 'sample,',X_test.shape[1] ,'x',X_test.shape[2] ,'size grayscale image.\n') print('Examples:') n = 10 plt.figure(figsize=(20, 4)) for i in range(1, n+1): # Display some data: ax = plt.subplot(1, n, i) plt.imshow(X_train[i].reshape(img_size, img_size)) plt.gray() plt.axis('off') ###Output Training shape: (1649, 64, 64, 1) 1649 sample, 64 x 64 size grayscale image. Test shape: (413, 64, 64, 1) 413 sample, 64 x 64 size grayscale image. 
Examples: ###Markdown Creating Model: ###Code # Deep Learning Model: from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, Dense, Activation, Lambda, Flatten, concatenate, Reshape from keras.models import Model input_img = Input(shape=(img_size, img_size, channel_size)) layer_1 = Conv2D(64, (3, 3), activation='relu', padding='same')(input_img) layer_1 = MaxPooling2D((2, 2))(layer_1) layer_2 = Conv2D(128, (3, 3), activation='relu', padding='same')(layer_1) layer_2 = MaxPooling2D((2, 2))(layer_2) layer_3 = Conv2D(256, (3, 3), activation='relu', padding='same')(layer_2) layer_3 = MaxPooling2D((2, 2))(layer_3) flat_1 = Flatten()(layer_3) fc_1 = Dense(256)(flat_1) fc_1 = Activation('relu')(fc_1) fc_2 = Dense(128)(fc_1) fc_2 = Activation('relu')(fc_2) #Decoder: fc_3 = Dense(256)(fc_2) fc_3 = Activation('relu')(fc_3) fc_4 = Dense(16384)(fc_3) fc_4 = Activation('relu')(fc_4) reshape_1 = Reshape((8, 8, 256))(fc_4) layer_4 = UpSampling2D((2, 2))(reshape_1) layer_4 = Conv2D(256, (3, 3), activation='relu', padding='same')(layer_4) layer_5 = UpSampling2D((2, 2))(layer_4) layer_5 = Conv2D(128, (3, 3), activation='relu', padding='same')(layer_5) layer_6 = UpSampling2D((2, 2))(layer_5) layer_6 = Conv2D(64, (3, 3), activation='relu', padding='same')(layer_6) layer_7 = Conv2D(channel_size, (3, 3), activation='sigmoid', padding='same')(layer_6) autoencoder = Model(input_img, layer_7) autoencoder.compile(optimizer='rmsprop', loss='mse') autoencoder.summary() # Checkpoints: from keras.callbacks import ModelCheckpoint, TensorBoard checkpoints = [] #checkpoints.append(TensorBoard(log_dir='/Checkpoints/logs')) ###Output _____no_output_____ ###Markdown For training model with Data Augmentation run this cell: Creates live data: For better yield. The duration of the training is extended.from keras.preprocessing.image import ImageDataGeneratorgenerated_data = ImageDataGenerator(featurewise_center=False, samplewise_center=False, featurewise_std_normalization=False, samplewise_std_normalization=False, zca_whitening=False, rotation_range=0, width_shift_range=0.1, height_shift_range=0.1, horizontal_flip = True, vertical_flip = False)generated_data.fit(X_train)model.fit_generator(generated_data.flow(X_train, X_train, batch_size=batch_size), steps_per_epoch=X.shape[0], epochs=epochs, validation_data=(X_test, X_test), callbacks=checkpoints) ###Code # Getting saved mode: autoencoder.load_weights('drive/Colab_UCA-Net/Data/Model/weights.h5') ###Output _____no_output_____ ###Markdown Training Model:epochs = 20batch_size = 5autoencoder.fit(X_train, X_train, batch_size=batch_size, epochs=epochs, validation_data=(X_test, X_test), shuffle=True, callbacks=checkpoints) Save Model and weights:import osdef save_model(model): if not os.path.exists('Data/Model/'): os.makedirs('Data/Model/') model_json = model.to_json() with open("Data/Model/model.json", "w") as model_file: model_file.write(model_json) serialize weights to HDF5 model.save_weights("Data/Model/weights.h5") print('Model and weights saved') returnsave_model(autoencoder) ###Code decoded_imgs = autoencoder.predict(X_test[0:11]) n = 10 plt.figure(figsize=(20, 4)) for i in range(1, n+1): # display original ax = plt.subplot(2, n, i) plt.imshow(X_test[i].reshape(64, 64)) plt.gray() plt.axis('off') # display reconstruction ax = plt.subplot(2, n, i + n) plt.imshow(decoded_imgs[i].reshape(64, 64)) plt.gray() plt.axis('off') # Split autoencoder: encoder = Model(input_img, fc_2) encoder.summary() num_summary = 128 # Deep Learning Model: from keras.layers import Input, 
Dense, Activation, Dropout from keras.models import Model sn_inputs = Input(shape=(2*num_summary,)) sn_fc_1 = Dense(512)(sn_inputs) sn_fc_1 = Activation('relu')(sn_fc_1) sn_drp_1 = Dropout(0.2)(sn_fc_1) sn_fc_2 = Dense(256)(sn_drp_1) sn_fc_2 = Activation('relu')(sn_fc_2) sn_drp_2 = Dropout(0.2)(sn_fc_2) sn_fc_3 = Dense(64)(sn_drp_2) sn_fc_3 = Activation('relu')(sn_fc_3) sn_fc_4 = Dense(1)(sn_fc_3) sn_similarity_output = Activation('sigmoid')(sn_fc_4) similarity_net = Model(sn_inputs, sn_similarity_output) similarity_net.compile(optimizer='adadelta', loss='mse') similarity_net.summary() from keras.layers import Input, concatenate encoder.trainable = False dis_input_img = Input(shape=(img_size, img_size, channel_size)) dis_encoder_out = encoder(dis_input_img) dis_input_img_2 = Input(shape=(img_size, img_size, channel_size)) dis_encoder_out_2 = encoder(dis_input_img_2) dis_cont_1 = concatenate([dis_encoder_out, dis_encoder_out_2]) dis_output = similarity_net(dis_cont_1) discriminator = Model([dis_input_img, dis_input_img_2], dis_output) discriminator.compile(optimizer='adadelta', loss='mse', metrics=['accuracy']) discriminator.summary() X_train_sets = [] X_train_sets_2 = [] Y_train_sets = [] for k in range(0, 5): for i in range(0, X_train.shape[0]): X_train_sets.append(X_train[i]) same_y_indexs = [index for index, same_Ys in enumerate((np.argmax(Y_train[i]) == np.argmax(Y_train, axis=1)).tolist()) if same_Ys] same_y_img = X_train[same_y_indexs[np.random.randint(len(same_y_indexs), size=1)[0]]] X_train_sets_2.append(same_y_img) Y_train_sets.append(1) X_train_sets.append(X_train[i]) not_same_y_indexs = [index for index, not_same_Ys in enumerate((np.argmax(Y_train[i]) != np.argmax(Y_train, axis=1)).tolist()) if not_same_Ys] not_same_y_img = X_train[not_same_y_indexs[np.random.randint(len(not_same_y_indexs), size=1)[0]]] X_train_sets_2.append(not_same_y_img) Y_train_sets.append(0) X_train_sets = np.array(X_train_sets) X_train_sets_2 = np.array(X_train_sets_2) Y_train_sets = np.array(Y_train_sets).reshape(len(Y_train_sets), 1) print(X_train_sets.shape, X_train_sets_2.shape) print(Y_train_sets.shape) # Example: indx = np.random.randint(len(X_train_sets), size=1)[0] print('Index:', indx) plt.gray() plt.imshow(X_train_sets[indx].reshape(img_size, img_size)) plt.axis('off') plt.show() plt.gray() plt.imshow(X_train_sets_2[indx].reshape(img_size, img_size)) plt.axis('off') plt.show() print(Y_train_sets[indx]) print('Index:', indx+1) plt.gray() plt.imshow(X_train_sets[indx+1].reshape(img_size, img_size)) plt.axis('off') plt.show() plt.gray() plt.imshow(X_train_sets_2[indx+1].reshape(img_size, img_size)) plt.axis('off') plt.show() print(Y_train_sets[indx+1]) ###Output Index: 750 ###Markdown Getting saved mode: ###Code discriminator.load_weights('drive/Colab_UCA-Net/Data/Model/weights_discriminator.h5') # epochs = 30 # discriminator.fit([X_train_sets, X_train_sets_2], Y_train_sets, batch_size=6, epochs=epochs, shuffle=True, validation_split=0.2) ###Output _____no_output_____ ###Markdown TODO:1) Update "discriminator" model2) Fix the places under (For "discriminator" model as output shape with 2) Save weights:import osdef save_model(model): if not os.path.exists('drive/Colab_UCA-Net/Data/Model/'): os.makedirs('drive/Colab_UCA-Net/Data/Model/') model_json = model.to_json() with open("drive/Colab_UCA-Net/Data/Model/model_discriminator.json", "w") as model_file: model_file.write(model_json) serialize weights to HDF5 model.save_weights("drive/Colab_UCA-Net/Data/Model/weights_discriminator.h5") print('Weights 
saved') returnsave_model(discriminator) ###Code # Save weights: import os def save_model(model): # serialize weights to HDF5 model.save_weights("drive/Colab_UCA-Net/Data/Model/weights_discriminator.h5") print('Weights saved') return save_model(discriminator) index = 8 one_simple = X_test[index].reshape(1, img_size, img_size, channel_size) plt.gray() plt.imshow(one_simple.reshape(img_size, img_size)) plt.axis('off') plt.show() shift = 5 for i in [shift, -1*shift,]: for j in [1, 2]: noise_img = np.roll(one_simple, i, axis=j) plt.imshow(noise_img.reshape(img_size, img_size)) plt.axis('off') plt.show() print(discriminator.predict([one_simple, noise_img])[0][0]) plt.gray() index = 8 one_simple = X_test[index].reshape(1, img_size, img_size, channel_size) plt.gray() plt.imshow(one_simple.reshape(img_size, img_size)) plt.axis('off') plt.show() for i in [1,2]: for j in [1,-1]: noise_image = X_train[index + j*i].reshape(1, img_size, img_size, 1) plt.imshow(noise_image.reshape(img_size, img_size)) plt.axis('off') plt.show() print(discriminator.predict([one_simple, noise_image])[0][0]) ###Output _____no_output_____ ###Markdown Now we look up result: ###Code print(discriminator.predict([X_test[1].reshape(1,64,64,1), X_test[9].reshape(1,64,64,1)])[0][0]) plt.axis('off') plt.imshow(X_test[1].reshape(64, 64)) plt.show() plt.axis('off') plt.imshow(X_test[9].reshape(64, 64)) plt.show() print(discriminator.predict([X_test[1].reshape(1,64,64,1), X_test[8].reshape(1,64,64,1)])[0][0]) plt.axis('off') plt.imshow(X_test[1].reshape(64, 64)) plt.show() plt.axis('off') plt.imshow(X_test[8].reshape(64, 64)) plt.show() ###Output 0.818574 ###Markdown Classifiction: ###Code from os import listdir from get_dataset import get_img dataset_path = 'drive/Colab_UCA-Net/Data/Train_Data' data_samples = [] labels = listdir(dataset_path) for label in range(0,10): datas_path = dataset_path+'/{0}'.format(label) img = get_img(datas_path+'/'+listdir(datas_path)[0]) data_samples.append(img) data_samples = 1 - np.array(data_samples).astype('float32')/255. data_samples = data_samples.reshape(data_samples.shape[0], img_size, img_size, channel_size) for i, img in enumerate(data_samples): print('{0}:'.format(i)) plt.gray() plt.imshow(img.reshape(img_size, img_size)) plt.axis('off') plt.show() class_code = encoder.predict(data_samples) ###Output _____no_output_____ ###Markdown Accuracy: ###Code encode = encoder.predict(X_test) models_y_test = [] for i in encode: results = [] for j in class_code: sim_y = similarity_net.predict(np.concatenate((i, j), axis=0).reshape(1, 256)) results.append(sim_y[0][0]) models_y_test.append(np.argmax(np.array(results).reshape(10), axis=0)) models_y_test = np.array(models_y_test) num_Y_test = np.argmax(Y_test, axis=1) comparison = models_y_test == num_Y_test loss = 1 - np.sum(comparison.astype(int)) / num_Y_test.shape[0] print('Loss:', loss) print('Examples:') for i in range(10,14): plt.imshow(X_test[i].reshape(64, 64)) plt.gray() plt.axis('off') plt.show() print('Class:', num_Y_test[i], '- Model\'s Output Class:', models_y_test[i],'\n'*2,'-'*40) ###Output Loss: 0.16464891041162233 Examples: ###Markdown Thank you! Still in development ###Code # Test with non-class similarity: from get_dataset import get_img img_0 = get_img('drive/Colab_UCA-Net/Data/non-class/0.jpg').reshape(1, 64, 64, 1) img_1 = get_img('drive/Colab_UCA-Net/Data/non-class/1.jpg').reshape(1, 64, 64, 1) img_0 = 1 - np.array(img_0).astype('float32')/255. img_1 = 1 - np.array(img_1).astype('float32')/255. 
plt.imshow(img_0.reshape(64, 64)) plt.gray() plt.axis('off') plt.show() plt.imshow(img_1.reshape(64, 64)) plt.gray() plt.axis('off') plt.show() print(discriminator.predict([img_0, img_1])) print('-'*20) plt.imshow(img_1.reshape(64, 64)) plt.gray() plt.axis('off') plt.show() plt.imshow(data_samples[2].reshape(64, 64)) plt.gray() plt.axis('off') plt.show() print(discriminator.predict([img_1, data_samples[2].reshape(1, 64, 64, 1)])) ###Output _____no_output_____
SHREC/LDT-Net_SHREC_14-class.ipynb
###Markdown Initialize the setting ###Code os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" os.environ["CUDA_VISIBLE_DEVICES"]="0" class Config(): def __init__(self): self.frame_l = 32 # the length of frames self.joint_n = 12 # the number of joints self.joint_n = 22 # the number of joints self.joint_d = 3 # the dimension of joints self.clc_coarse = 14 # the number of coarse class self.clc_fine = 28 # the number of fine-grained class self.feat_d = 231 self.filters = 16 self.data_dir = '../data/SHREC/' C = Config() ###Output _____no_output_____ ###Markdown Building the model ###Code def poses_diff(x): H, W = x.get_shape()[1],x.get_shape()[2] x = tf.subtract(x[:,1:,...],x[:,:-1,...]) x = tf.image.resize_nearest_neighbor(x,size=[H.value,W.value],align_corners=False) # should not alignment here return x def pose_motion(P,frame_l): P_diff_slow = Lambda(lambda x: poses_diff(x))(P) P_diff_slow = Reshape((frame_l,-1))(P_diff_slow) P_fast = Lambda(lambda x: x[:,::2,...])(P) P_diff_fast = Lambda(lambda x: poses_diff(x))(P_fast) P_diff_fast = Reshape((int(frame_l/2),-1))(P_diff_fast) P_faster = Lambda(lambda x: x[:,::4,...])(P) P_diff_faster = Lambda(lambda x: poses_diff(x))(P_faster) P_diff_faster = Reshape((int(frame_l/4),-1))(P_diff_faster) return P_diff_slow,P_diff_fast,P_diff_faster def c1D(x,filters,kernel): x = SeparableConv1D(filters, kernel_size=kernel,padding='same',use_bias=False)(x) x = BatchNormalization()(x) x = LeakyReLU(alpha=0.2)(x) return x def block(x,filters): x = c1D(x,filters,3) return x def build_FM(frame_l=32,joint_n=22,joint_d=2,feat_d=231,filters=16): M = Input(shape=(frame_l,feat_d)) P = Input(shape=(frame_l,joint_n,joint_d)) diff_slow,diff_fast,diff_faster = pose_motion(P,frame_l) x = c1D(M,filters,3) x = SpatialDropout1D(0.1)(x) x = MaxPooling1D(2)(x) x = SpatialDropout1D(0.1)(x) x_d_slow = c1D(diff_slow,filters,3) x_d_slow = SpatialDropout1D(0.1)(x_d_slow) x_d_slow = MaxPool1D(2)(x_d_slow) x_d_slow = SpatialDropout1D(0.1)(x_d_slow) x_d_fast = c1D(diff_fast,filters,3) x_d_fast = SpatialDropout1D(0.1)(x_d_fast) x_d_faster = c1D(diff_faster,filters,5) x_d_faster = SpatialDropout1D(0.1)(x_d_faster) x_d_faster = UpSampling1D(2)(x_d_faster) x_d_faster = SpatialDropout1D(0.1)(x_d_faster) x = concatenate([x,x_d_slow,x_d_fast,x_d_faster]) x = SpatialDropout1D(0.1)(x) x_shortcut = x x = block(x,filters*2) x = MaxPool1D(2)(x) x = SpatialDropout1D(0.1)(x) x = block(x,filters*4) x = MaxPool1D(2)(x) x = SpatialDropout1D(0.1)(x) x = block(x,filters*8) x_shortcut = SeparableConv1D(filters*8, kernel_size=3,padding='same',use_bias=False)(x_shortcut) x_shortcut = BatchNormalization()(x_shortcut) x_shortcut = LeakyReLU(alpha=0.2)(x_shortcut) x_shortcut = MaxPool1D(4)(x_shortcut) x = add([x,x_shortcut]) return Model(inputs=[M,P],outputs=x) def build_LDT_Net(frame_l=32,joint_n=22,joint_d=3,feat_d=231,clc_num=14,filters=16): M = Input(name='M', shape=(C.frame_l,C.feat_d)) P = Input(name='P', shape=(C.frame_l,C.joint_n,C.joint_d)) FM = build_FM(C.frame_l,C.joint_n,C.joint_d,C.feat_d,C.filters) x = FM([M,P]) #FM.summary() #Prints a table with the FLOPS at each layer and total FLOPs #net_flops(FM,table=True) x = Dropout(0.5)(x) x = GlobalAveragePooling1D()(x) x = Dense(clc_num, activation='softmax')(x) model = Model(inputs=[M,P],outputs=x) return model LDT_Net = build_LDT_Net(C.frame_l,C.joint_n,C.joint_d,C.feat_d,C.clc_coarse,C.filters) LDT_Net.summary() ###Output __________________________________________________________________________________________________ Layer (type) Output Shape Param # 
Connected to ================================================================================================== M (InputLayer) (None, 32, 231) 0 __________________________________________________________________________________________________ P (InputLayer) (None, 32, 22, 3) 0 __________________________________________________________________________________________________ model_1 (Model) (None, 4, 128) 31099 M[0][0] P[0][0] __________________________________________________________________________________________________ dropout_1 (Dropout) (None, 4, 128) 0 model_1[1][0] __________________________________________________________________________________________________ global_average_pooling1d_1 (Glo (None, 128) 0 dropout_1[0][0] __________________________________________________________________________________________________ dense_1 (Dense) (None, 14) 1806 global_average_pooling1d_1[0][0] ================================================================================================== Total params: 32,905 Trainable params: 32,073 Non-trainable params: 832 __________________________________________________________________________________________________ ###Markdown load data ###Code Train = pickle.load(open(C.data_dir+"train.pkl", "rb")) Test = pickle.load(open(C.data_dir+"test.pkl", "rb")) ###Output _____no_output_____ ###Markdown Without frame_sampling train ###Code X_0 = [] X_1 = [] Y = [] for i in tqdm(range(len(Train['pose']))): p = np.copy(Train['pose'][i]).reshape([-1,22,3]) p = zoom(p,target_l=C.frame_l,joints_num=C.joint_n,joints_dim=C.joint_d) p = normlize_range(p) label = np.zeros(C.clc_coarse) label[Train['coarse_label'][i]-1] = 1 M = get_CG(p,C) X_0.append(M) X_1.append(p) Y.append(label) X_0 = np.stack(X_0) X_1 = np.stack(X_1) Y = np.stack(Y) X_test_0 = [] X_test_1 = [] Y_test = [] for i in tqdm(range(len(Test['pose']))): p = np.copy(Test['pose'][i]).reshape([-1,22,3]) p = zoom(p,target_l=C.frame_l,joints_num=C.joint_n,joints_dim=C.joint_d) p = normlize_range(p) label = np.zeros(C.clc_coarse) label[Test['coarse_label'][i]-1] = 1 M = get_CG(p,C) X_test_0.append(M) X_test_1.append(p) Y_test.append(label) X_test_0 = np.stack(X_test_0) X_test_1 = np.stack(X_test_1) Y_test = np.stack(Y_test) from keras.callbacks import ModelCheckpoint import keras #设置模型参数 lr = 1e-2 LDT_Net.compile(loss="categorical_crossentropy",optimizer=adam(lr),metrics=['accuracy']) lrScheduler = keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.5, patience=5, cooldown=5, min_lr=1e-3) history = LDT_Net.fit([X_0,X_1],Y, batch_size=len(Y), epochs=800, verbose=True, shuffle=True, callbacks=[lrScheduler], validation_data=([X_test_0,X_test_1],Y_test) ) lr = 1e-3 LDT_Net.compile(loss="categorical_crossentropy",optimizer=adam(lr),metrics=['accuracy']) # checkpoint 报存最好的模型 filepath="weights.best.hdf5" checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True,mode='max') callbacks_list = [checkpoint] history = LDT_Net.fit([X_0,X_1],Y, batch_size=len(Y), epochs=1000, verbose=True, shuffle=True, callbacks=callbacks_list, validation_data=([X_test_0,X_test_1],Y_test) ) # Plot training & validation accuracy values plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.title('Model accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show() LDT_Net.save_weights('weights/coarse_lite.h5') ###Output _____no_output_____ ###Markdown Calculate time (excute it twice, the first time initialize takes extra times) ###Code 
import time start_time = time.time() y = LDT_Net.predict([X_0,X_1]) time.time() - start_time ###Output _____no_output_____ ###Markdown Plot confusion matrix ###Code import matplotlib.pyplot as plt from sklearn.metrics import confusion_matrix Y_pred = LDT_Net.predict([X_test_0,X_test_1]) cnf_matrix = confusion_matrix(np.argmax(Y_test,axis=1),np.argmax(Y_pred,axis=1)) plt.figure(figsize=(10,10)) plt.imshow(cnf_matrix) plt.show() ###Output _____no_output_____
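Since the markdown above notes that the first call pays a one-time initialization cost, a hedged way to get a steadier timing figure is to do a warm-up prediction and then average several runs:

```python
# Sketch: warm up once, then average the prediction time over several runs.
import time
_ = LDT_Net.predict([X_0, X_1])  # warm-up (one-time initialization cost)
n_runs = 5
start_time = time.time()
for _ in range(n_runs):
    _ = LDT_Net.predict([X_0, X_1])
print("mean seconds per run:", (time.time() - start_time) / n_runs)
```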
example_plotting_sample_and_grids.ipynb
###Markdown Spatial autocorrelation by scale and samplingThe following notebook exemplifies how different scales of spatial aggregation, as well as different levels of sampling, can lead to different results in terms of measures of global spatial autocorrelation1. Generate a random set of points in a square2. Generate an autocorrelated set of variables via $Y = (I - \rho W)^{-1} \epsilon$3. Pull a random sample of these points, based off a sampling rate and binomial distribution.4. Aggregate these points to different sized grids, computing the mean of $Y$ in each grid cell5. Visualize the results and compute Moran's I, a common measure of spatial autocorrelationThe results show that the size of the grid impacts the magnitude and significance of I and $\rho$ ###Code import numpy as np import pandas as pd import geopandas as gpd import seaborn as sns import geoplot import shapely import pysal import matplotlib.pyplot as plt %matplotlib inline ###Output /home/ja/miniconda3/envs/map/lib/python3.7/site-packages/pysal/model/spvcm/abstracts.py:10: UserWarning: The `dill` module is required to use the sqlite backend fully. from .sqlite import head_to_sql, start_sql ###Markdown Create some random XY points between 0 and 1.2 in each dimension, and plot for fun ###Code # constant data gen variables total_points = 10000 knn_for_data_gen = 30 # generate random location of points xy = 1.2 * np.random.rand(total_points,2) xydf = pd.DataFrame(data=xy) xydf.columns = ['x', 'y'] # standard normal errors for each point errors_ind = np.random.normal(0, 1, total_points) xydf["errors"] = errors_ind # an identity matrix needed for generating simulated values at each point I = np.identity(total_points) # plotting the points sns.relplot(x="x", y="y", data=xydf); ###Output _____no_output_____ ###Markdown Generate a spatial weights matrix for the points based on $k$ nearest neighbours ###Code # weights matrix for k nearest kd = pysal.lib.cg.kdtree.KDTree(xy) W = pysal.lib.weights.KNN(kd, knn_for_data_gen) W.transform = 'r' # row normalizing # extract the sparse weights matrix as a full np array for matrix multiplication W = (W.sparse) W = (W.toarray()) ###Output _____no_output_____ ###Markdown Pick a $\rho$ for generating spatially correlated values at each point, then run spatial autoregressive process ###Code rho = 0.9 ###Output _____no_output_____ ###Markdown $Y = (I - \rho W)^{-1} \epsilon$ ###Code %%time Y = np.matrix(I - rho * W).I.dot(errors_ind) # append these Y values to the point data frame xydf["sim"] = np.transpose(Y) # plot if we want to see the points with colours sns.relplot(x="x", y="y", hue="sim",data=xydf); # also convert these Y values to a binary (1,0) if we want to analyze zonal proportions (where mean = 0.5) simmean = xydf["sim"].mean() xydf['sim_b'] = 0 xydf['sim_b'][xydf['sim'] > simmean] = 1 # add in binomial distribution for whether observation is sampled or not, # do this for several sampling rates [0.03,0.05,0.1,0.2,0.5,1.0] xydf["sample_03"] = np.random.binomial(1, 0.03, size=total_points) xydf["sample_05"] = np.random.binomial(1, 0.05, size=total_points) xydf["sample_10"] = np.random.binomial(1, 0.10, size=total_points) xydf["sample_20"] = np.random.binomial(1, 0.20, size=total_points) xydf["sample_50"] = np.random.binomial(1, 0.50, size=total_points) xydf["sample_100"] = np.random.binomial(1, 1, size=total_points) # load in grid data (was generated in QGIS by hand) grid_6 = gpd.read_file("grids/grid_6x6.geojson") grid_8 = gpd.read_file("grids/grid_8x8.geojson") grid_10 =
gpd.read_file("grids/grid_10x10.geojson") grid_12 = gpd.read_file("grids/grid_12x12.geojson") grid_15 = gpd.read_file("grids/grid_15x15.geojson") ###Output _____no_output_____ ###Markdown Loop over each sample size `["3%","5%","10%","20%","50%","100%"]`and then over each grid size `["6x6","8x8","10x10","12x12","15x15"]`Aggregating the sampled points to each grid cell.Computing global spatial autocorrelation statsAnd plotting simple choropleths of each to show how results vary ###Code ## include plotting info (comment out if not plotting) f, axarr = plt.subplots(6, 5, figsize=(15, 18)) samples = ["sample_03","sample_05","sample_10","sample_20","sample_50","sample_100"] sample_names = ["3%","5%","10%","20%","50%","100%"] grids = [grid_6, grid_8, grid_10, grid_12, grid_15] grid_names = ["6x6","8x8","10x10","12x12","15x15"] outputs = [] s = 0 for sample in samples: # subset data by each sample xydf_s = xydf.loc[(xydf[sample] == 1)] # set up a geodataframe for this, to allow for future spatial join geometry = [shapely.geometry.Point(xy) for xy in zip(xydf_s.x, xydf_s.y)] gdf = gpd.GeoDataFrame(xydf_s, geometry=geometry) g = 0 for grid in grids: # spatial join the grid IDs to the point data xy_with_grid = gpd.sjoin(gdf, grid, how="inner", op='intersects') # generate means and proportions in each cell of the grid grid_desc = xy_with_grid.groupby(['id']).agg({'errors': "count",'sim': "mean", 'sim_b': "sum"}) # update some of the column names grid_desc["mean"] = grid_desc["sim"] grid_desc["prop"] = grid_desc["sim_b"] / grid_desc["errors"] del grid_desc['sim'], grid_desc['sim_b'], grid_desc['errors'] # join back to grid boundaries grid_join = grid.merge(grid_desc, on='id') # compute spatial weights matrix Wg = pysal.lib.weights.Queen.from_dataframe(grid_join) Wg.transform = 'r' # row normalizing mi = pysal.explore.esda.Moran(np.array(grid_join["mean"]), Wg, two_tailed=False) grid_join["var"] = np.random.normal(0, 1, len(grid_join)) #YVar='mean' #XVars=['id'] Ym=grid_join['mean'].values.reshape((len(grid_join),1)) Xm=grid_join[['var']].values mlag = pysal.model.spreg.ml_lag.ML_Lag(Ym,Xm,w=Wg,name_y='mean', name_x=['var'] ) # output the values output = [sample_names[s], grid_names[g], round(mi.I, 3), round(mi.p_norm, 3), round(mlag.rho,3), round(mlag.z_stat[2][1],3)] outputs.append(output) geoplot.choropleth( grid_join, hue='mean', edgecolor='white', linewidth=1, cmap='Blues', legend=False, scheme='quantiles', figsize=(2, 2), ax=axarr[s][g] ) g += 1 s += 1 outputs grid_join.head() ###Output _____no_output_____
notebooks/swingup.ipynb
###Markdown ###Code GOOGLE_COLAB = True if not GOOGLE_COLAB: %cd ../ else: !pip install git+https://github.com/rland93/pendsim.git from pendsim import sim, controller, viz import numpy as np import matplotlib.pyplot as plt from IPython.display import HTML ###Output _____no_output_____ ###Markdown Swing Up of Pendulum From Resting Position With Energy ControlIt's quite alright if the pendulum starts from an upright position. We can stabilize it with simple controllers, even as simple as Bang-Bang (On-Off) control. But what about when it is in the downward position? We will need a controller that can guide the pendulum from the lower position, completely through the non-linear region (near $\theta=\pm \pi/2$), and to the upward position.Any simple (linear) controller will fail to guide us through this non-linear region. So what do we do? Fortunately, we have an option: swing up by energy control.The swing-up strategy exploits the fact that the potential energy of the system is a good proxy of the state. Namely, that the maximum energy point is also the desired point for the controller: at $\theta=0$, the small mass $m$ is as high as it could possibly be. At $\theta=\pi$, the energy is at a minimum. And the position of the large mass has no effect on the potential energy of the system, because the cart is always on level ground. A successful swing-up strategy will pump energy into the system to maximize the potential energy.Functions which map the system state to a scalar, and which serve as a good proxy for the success or failure of the state, are Lyapunov functions. An increase (or decrease) in a Lyapunov function must always result in driving the system towards a goal state. With these properties, so long as the Lyapunov function increases (or decreases), the state will be driven towards the goal. A control strategy can then be derived from the derivative of the Lyapunov function, with the control input driving the Lyapunov function in the desired direction.Often, the energy of the system under control is a good starting point for deriving a Lyapunov function.In this case, we use the following as the Lyapunov function:$ V = \frac{(E-E_0)^2}{2} $Rearranging potential energy terms and differentiating yields the control strategy:$ u = k (E - E_0) \dot{\theta} \cos{\theta}$Which we can see is a function of the energy error $(E - E_0)$, the angular velocity, and the pendulum position. In particular, when $\theta=\pi/2$ or $\theta=-\pi/2$, no amount of sideways push will change the pendulum angle. When $\theta=\pi$ or $\theta=0$, a sideways push affects the pendulum angle the most. This behavior is captured by the $\cos{\theta}$ term. This term is then scaled by $\dot{\theta}$; when $\dot{\theta}$ is positive, the control action pushes left; when $\dot{\theta}$ is negative, the control action pushes right. Finally, the amount of push is scaled by the energy of the system; when $E=E_0$, the action is 0. The derivation of this strategy and particular details can be found in Astrom's [Swinging up a Pendulum by Energy Control](https://www.sciencedirect.com/science/article/pii/S0005109899001405). For the actual strategy, rather than using an arbitrary gain $k$, we use a coefficient $n$ times the gravity $g$ as the gain.
$u = n g \hspace{0.25em} \text{sign}(\hspace{0.25em} (E - E_0) \dot{\theta} \cos{\theta} \hspace{0.25em} )$ ###Code force_fn = lambda t: 0 dt = 0.01 t_final = 20 pend = sim.Pendulum( 2.0, 1.0, 1.0, initial_state = np.array([0.0, 0.0, -np.pi, 0.0]) ) # function to take the sign of `x` argument def sign(x): if x >= 0: return 1 else: return -1 # function to wrap pi def wrappi(theta): return (theta + np.pi) % (2 * np.pi) - np.pi class SwingUp(controller.Controller): def __init__(self, k, pend): self.k = k # gravity constant of the pendulum self.pend = pend # prev error for PID control self.prev_err, self.integrator = 0, 0 def policy(self, state, dt): # unpack state _, _, theta, thetadot = state # potential energy E = - self.pend.m*self.pend.g*self.pend.l*np.cos(theta) # potential energy zero-point E0 = 0 # swing up action swingup = self.k * self.pend.g * sign((E - E0) * thetadot * np.cos(theta)) # pid action pid = self.do_pid(dt, 50, 0, 2, state) # weight over pid/swingup wt_c = 0.25 wt = np.exp(-theta**2/wt_c) # if near theta, wt -> 1 and (1-wt) -> 0 action = wt * pid + (1-wt) * swingup return action, {} ###Output _____no_output_____ ###Markdown Let's have a look at what's going on here. First, we take $\theta$ and $\dot{\theta}$ out of the state. This control policy doesn't rely on $x$ and $\dot{x}$.Then, we calculate the potential energy of the pendulum position: $E = - m g l \cos{\theta}$. The maximum potential energy with that calculation can be had when $E=0$.This is a hybrid policy, where we either want to take action from a PD (*) control strategy (if the pendulum is up in the air) or from a swing-up strategy (if it is hanging). So we calculate both.Finally, we will multiply these two strategies by a weighted average.(*) PD, and not PID, because integral gain is set to zero -- to see why that results in good control for this system, have a look at the "PD" notebook! The weighted average is shown below.We can see that in the area near $\pi=0$, the controller chooses mostly a PD strategy, while anywhere far away from there, it chooses a swing-up strategy; it switches rapidly, but smoothly, in the area near $\pi=0$ from one to the other.We need to be careful with such hybrid control strategies; they can have mysterious results in the boundary region. But this one seems to work OK, and besides, this is a simulation! ###Code x = np.linspace(-np.pi, np.pi, 600) y1 = np.exp(-x**2/0.25) y2 = 1-np.exp(-x**2/0.25) fig, ax = plt.subplots() for y, label in zip((y1, y2), ("PD", "Swing-up")): ax.plot(x, y, label=label) ax.legend() ax.set_ylabel("Weight") ax.set_xlabel("Theta") plt.show() ###Output _____no_output_____ ###Markdown Make the controller and run the simulation. ###Code cont = SwingUp(0.5, pend) simu = sim.Simulation(dt, t_final, force_fn) res = simu.simulate(pend, cont) ###Output 100%|██████████| 2000/2000 [00:01<00:00, 1834.15it/s] ###Markdown Now, a plot of $\theta$ over time. ###Code plt.plot(res[("state", "t")]) plt.ylabel(r"$\theta$ (rad)") plt.xlabel("Time (s)") plt.show() ###Output _____no_output_____ ###Markdown We can see the controller working as intended. We know that energy is a function of $\theta$ only, so this chart shows that the energy is steadily pumped into the system. Finally, when $\theta$ reaches the zero point, the PD controller smoothly takes over, and removes the remaining perturbance. This plot shows the control action over time. The swing-up strategy manifests as a square wave: that's the `sign` function. 
What we are doing here is adding energy to the pendulum as quickly as we possibly can. Either we're pushing as hard as we can left, or as hard as we can right. Because the potential energy is a scalar, pushing in either direction is adding energy, so long as we do it at the right time. Then, we can see the transition region, which has an interesting and turbulent control signal. Finally, the action of the PD controller takes over, with the little bit of derivative gain damping out the oscillations in the system. ###Code plt.plot(res[("control action", "control action")]) plt.ylabel("Control Action (N)") plt.xlabel("Time (s)") plt.show() ###Output _____no_output_____ ###Markdown To see the relationship between energy and control action more clearly, we superimpose both onto the same plot: ###Code plt.plot(res[("energy", "potential")], "b", label="Potential Energy (J)") plt.plot(res[("control action", "control action")], "g--", label="Control Action (N)") plt.xlabel("Time (s)") plt.legend() plt.show() ###Output _____no_output_____ ###Markdown And this is the controller we have designed: one which adds energy as quickly as possible, bringing the pendulum to a higher energy state, which we then stabilize using a different PD control strategy. And finally, the animation of our strategy: ###Code anim = viz.Visualizer(res, pend, dt) ani = anim.animate() HTML(ani.to_html5_video()) ###Output _____no_output_____
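###Markdown To see exactly when the hybrid controller hands over from the swing-up law to the PD law, we can recompute the blending weight along the simulated trajectory. This is a minimal sketch that reuses the `res[("state", "t")]` column plotted above (the $\theta$ trace) and the same $w = e^{-\theta^2/0.25}$ expression hard-coded in the `SwingUp` controller; nothing beyond what this notebook already defines is assumed. ###Code
# Recompute the PD/swing-up blending weight used inside SwingUp.policy:
# w -> 1 near the upright position (PD active), w -> 0 far from it (swing-up active).
theta = res[("state", "t")]      # theta trace from the simulation results
wt = np.exp(-theta**2 / 0.25)    # same wt_c = 0.25 as in the controller

plt.plot(wt)
plt.ylabel("PD weight $w$")
plt.xlabel("Time (s)")
plt.show()
###Output _____no_output_____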
4-Machine_Learning/3-Deep Learning/1-Redes Neuronales/1-Perceptron & MLP.ipynb
###Markdown 1. Perceptron We start by loading the libraries ###Code ''' NOTE: sklearn's neural network implementations do not support GPU usage. Nor can we change the activation functions or the initial weights of EACH layer. ''' import numpy as np import pandas as pd import seaborn as sns ###Output _____no_output_____ ###Markdown We load the data. We will use the penguins dataset from seaborn ###Code df = sns.load_dataset("penguins") # Clean up the data a bit df.dropna(inplace=True) cleanup_nums = {"species": {"Adelie": 0, "Chinstrap": 1, "Gentoo": 2}, "sex": {"Male": 0, "Female": 1}} df.replace(cleanup_nums, inplace=True) df = pd.get_dummies(df, drop_first = True) df.head() ###Output _____no_output_____ ###Markdown We split into train and test ###Code from sklearn.model_selection import train_test_split X = df.iloc[:, 1:] y = df.iloc[:, 0] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) print(X_train.shape) print(X_test.shape) print(y_train.shape) print(y_test.shape) ###Output (266, 7) (67, 7) (266,) (67,) ###Markdown Let's try a Perceptron ###Code ''' Classification algorithm. The score is the accuracy. It comes out very poor: the perceptron is not able to separate the classes. ''' from sklearn.linear_model import Perceptron per_clf = Perceptron() per_clf.fit(X_train, y_train) per_clf.score(X_test, y_test) ''' A simple logistic regression separates them better ''' from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(max_iter=500) log_reg.fit(X_train, y_train) print(log_reg.score(X_train, y_train)) print(log_reg.score(X_test, y_test)) ###Output 0.9962406015037594 0.9850746268656716 ###Markdown Let's try standardizing It seems the perceptron on its own is fairly useless, so we will have to try more complex configurations. 2. Multi Layer Perceptron ###Code from sklearn.neural_network import MLPClassifier from sklearn.neural_network import MLPRegressor # Also exists. Not used in this example ''' By default, one hidden layer with 100 neurons 3 layers: input, one hidden and output ''' mlp = MLPClassifier(max_iter=500) mlp.fit(X_train, y_train) print(mlp.score(X_train, y_train)) print(mlp.score(X_test, y_test)) ###Output 0.4924812030075188 0.43283582089552236 ###Markdown Let's try another configuration. It is possible to build a neural network directly from the MLPClassifier() function itself ###Code mlp = MLPClassifier(max_iter=500, activation='tanh', hidden_layer_sizes = (150, 150, 150)) mlp.fit(X_train, y_train) print(mlp.score(X_train, y_train)) print(mlp.score(X_test, y_test)) ###Output 0.6616541353383458 0.5970149253731343 ###Markdown They use gradient descent, and are therefore very sensitive to scaling.
We standardize for the next example ###Code ''' Once again, we demonstrate the large improvement in results for models that depend on gradient descent, thanks to standardizing the features ''' from sklearn.preprocessing import StandardScaler sc = StandardScaler() sc.fit(X_train) X_train_s = sc.transform(X_train) X_test_s = sc.transform(X_test) per_clf = Perceptron() per_clf.fit(X_train_s, y_train) print(per_clf.score(X_train_s, y_train)) print(per_clf.score(X_test_s, y_test)) log_reg = LogisticRegression(max_iter=500) log_reg.fit(X_train_s, y_train) print(log_reg.score(X_train_s, y_train)) print(log_reg.score(X_test_s, y_test)) from sklearn.neural_network import MLPClassifier from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(X_train) X_train_scal = scaler.transform(X_train) X_test_scal = scaler.transform(X_test) mlp = MLPClassifier(max_iter=500) mlp.fit(X_train_scal, y_train) print(mlp.score(X_train_scal, y_train)) print(mlp.score(X_test_scal, y_test)) ###Output 1.0 1.0
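###Markdown A tidier way to package the scale-then-fit workflow above is a scikit-learn `Pipeline`, so the scaler is fitted on the training split only and reapplied automatically at prediction time. This is a minimal sketch using the same estimators and the same `X_train`/`X_test` split as above; the scores should come out close to the scaled results we just obtained. ###Code
# Sketch: StandardScaler + MLPClassifier chained in a single Pipeline.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

pipe = make_pipeline(StandardScaler(), MLPClassifier(max_iter=500))
pipe.fit(X_train, y_train)
print(pipe.score(X_train, y_train))
print(pipe.score(X_test, y_test))
###Output _____no_output_____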
01.1 - Date_check.ipynb
###Markdown Based on the naming of the dataset there were some assumptions as to what each date represented, so we decided to check that. ###Code import pandas as pd solicitations = pd.read_csv('Data/solicitations_prepared.csv') solicitations = solicitations.drop(columns='Unnamed: 0',axis=1) canvas = pd.read_csv('Data/canvas_prepared_wdate.csv') canvas = canvas.drop(columns='Unnamed: 0',axis=1) ###Output _____no_output_____ ###Markdown First of all, the canvas dataset's date led us to believe that it represented when a canvas was placed, and the solicitation date, the date when the process began. So we will isolate some date values per ID in both datasets and compare them. ###Code canvas.loc[canvas['ID'] == 8002630913] solicitations.loc[solicitations['ID'] == 8002630913] canvas.loc[canvas['ID'] == 8002108013] solicitations.loc[solicitations['ID'] == 8002108013] ###Output _____no_output_____
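###Markdown Spot-checking two IDs is a start, but we can compare the dates for every ID at once by merging both tables on `ID`. The sketch below only assumes the shared `ID` column used above; the date column names are placeholders, since they are not shown in this notebook, so substitute the real ones before running the commented comparison. ###Code
# Merge both tables on ID so the two dates sit side by side for every record.
merged = canvas.merge(solicitations, on='ID', suffixes=('_canvas', '_solicitation'))

# Hypothetical check (replace 'date_canvas'/'date_solicitation' with the real column names):
# (pd.to_datetime(merged['date_canvas']) >= pd.to_datetime(merged['date_solicitation'])).mean()
merged.head()
###Output _____no_output_____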
datasets/Part 6 - Reinforcement Learning/Section 33 - Thompson Sampling/random_selection.ipynb
###Markdown We clone the repository to get the datasets ###Code !git clone https://github.com/joanby/machinelearning-az.git ###Output _____no_output_____ ###Markdown We give access to our Drive ###Code from google.colab import drive drive.mount('/content/drive') ###Output _____no_output_____ ###Markdown Test it ###Code !ls '/content/drive/My Drive' ###Output _____no_output_____ ###Markdown Google colab tools ###Code from google.colab import files # To handle files and, for example, export them to your browser import glob # To handle files and, for example, export them to your browser from google.colab import drive # Mount your Google Drive ###Output _____no_output_____ ###Markdown Install dependencies ###Code !pip install sklearn ###Output _____no_output_____ ###Markdown Random Selection How to import the libraries ###Code import numpy as np import matplotlib.pyplot as plt import pandas as pd ###Output _____no_output_____ ###Markdown Import the dataset ###Code dataset = pd.read_csv('/content/machinelearning-az/datasets/Part 6 - Reinforcement Learning/Section 33 - Thompson Sampling/Ads_CTR_Optimisation.csv') ###Output _____no_output_____ ###Markdown Implementing Random Selection ###Code import random N = 10000 d = 10 ads_selected = [] total_reward = 0 for n in range(0, N): ad = random.randrange(d) ads_selected.append(ad) reward = dataset.values[n, ad] total_reward = total_reward + reward ###Output _____no_output_____ ###Markdown Visualising the results - Histogram ###Code plt.hist(ads_selected) plt.title('Histogram of ads selections') plt.xlabel('Ads') plt.ylabel('Number of times each ad was selected') plt.show() ###Output _____no_output_____
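###Markdown To put this random baseline in context, we can compare its average reward with the empirical click-through rate of each ad in the dataset. This is a minimal sketch reusing `dataset`, `total_reward`, and `N` from above, and it assumes the dataset columns are the 0/1 click indicators already used as rewards; a bandit strategy such as Thompson Sampling should end up much closer to the best single ad than to this random average. ###Code
# Empirical CTR of each ad (fraction of rounds in which it would have been clicked),
# compared with the average reward obtained by selecting ads at random.
ctr_per_ad = dataset.mean(axis=0)
print(ctr_per_ad)
print("Best single ad CTR:", ctr_per_ad.max())
print("Random selection  :", total_reward / N)
###Output _____no_output_____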
05-rErVofAmbisonicPanningFunctionsCircle.ipynb
###Markdown This code example is to animate the 2D Ambisonic panning functions interactivelyAcoustic Holography and HolophonyFranz Zotter, 2016This animation is about what discretizing the Ambisonic panning function on the circle does to the rE and rV measures.Some function definitions and headers first. ###Code import numpy as np import scipy as sp import math from bokeh.plotting import figure, output_file, show from bokeh.io import output_notebook def inphase_weights(N): a=np.ones(N+1) for n in range(1,N+1): a[n]=(N-n+1)/(1.0*(N+n))*a[n-1] return a def maxre_weights(N): m=np.arange(0,N+1) a=np.cos(np.pi/(2*(N+1))*m) return a def basic_weights(N): a=np.ones(N+1) return a def weighted_cosine_series(phi,a): N=a.size-1 g=np.zeros(phi.size) amplitude=0; for m in range(0,N+1): g+=np.cos(m*phi)*a[m]*(2-(m==0)) amplitude+=a[m]*(2-(m==0)) return g/amplitude def rvector(gls,phils): glsc=np.copy(gls) glsc=np.dot(np.diag(1/np.sum(glsc,1)),glsc) r=np.array([np.dot(glsc,np.cos(phils)),np.dot(glsc,np.sin(phils))]) return r def get_ambipan_loudspeaker_gains(N,phis,phils,weight_type): g=np.zeros(phils.size) if weight_type == 'in-phase': a=inphase_weights(N) elif weight_type == 'max-rE': a=maxre_weights(N) else: a=basic_weights(N) g=weighted_cosine_series(phils-phis,a) return g ###Output _____no_output_____ ###Markdown Let's take as an example $\mathrm{L}=7$ regularly spaced loudspeakers and the basic, rectangular weighting for panning on the horizon as an example for $5^\mathrm{th}$-order Ambisonic. As this is a $t=6$-design, it should already be able to perfectly control the $\boldsymbol{r}_\mathrm{V}$ vector. ###Code L=7 Npt=200 phils=np.mod((2*np.pi*np.arange(0,L))/L+np.pi,2*np.pi)-np.pi phis=np.linspace(-np.pi*0.99,np.pi,Npt) gls=np.zeros((Npt,L)) for n in range(0,Npt): gls[n,:]=get_ambipan_loudspeaker_gains(5,phis[n],phils,'basic') output_notebook() p1 = figure(title="rE/rV directions of 5th order basic Ambi panning on 7 loudspeaeker",plot_width=600, plot_height=270, x_range=(-180,180), y_range=(-180.0,180.0)) p2 = figure(title="rE/rV widths of 5th order basic Ambi panning on 7 loudspeaeker",plot_width=600, plot_height=270, x_range=(-180,180),y_range=(-3,100)) rE=rvector(gls**2,phils) dirE=np.arctan2(rE[1],rE[0])*180/np.pi; lenE=np.sqrt(np.sum(rE**2,0)) sigmaE=2*np.arccos(lenE)*180/np.pi p1.line(phis*180/np.pi, dirE, color="red",line_width=3,legend_label="rE basic") p2.line(phis*180/np.pi, sigmaE, color="red",line_width=3,legend_label="rE basic") rV=rvector(gls,phils) dirV=np.arctan2(rV[1],rV[0])*180/np.pi; lenV=np.sqrt(np.sum(rV**2,0)) sigmaV=2*np.arccos(lenV)*180/np.pi p1.line(phis*180/np.pi, dirV, color="blue",line_width=3,legend_label="rV basic",line_dash=(6,2)) p2.line(phis*180/np.pi, sigmaV, color="blue",line_width=3,legend_label="rV basic",line_dash=(6,2)) p1.legend.background_fill_alpha = 0.5 p2.legend.background_fill_alpha = 0.5 show(p1) show(p2) ###Output _____no_output_____ ###Markdown Taking $\mathrm{L}=12$, a $t=11$-design is already able to perfectly control the $\boldsymbol{r}_\mathrm{E}$ vector as well for $5^\mathrm{th}$ order basic-weighted panning. 
###Code L=12 Npt=200 phils=np.mod((2*np.pi*np.arange(0,L))/L+np.pi,2*np.pi)-np.pi phis=np.linspace(-np.pi*0.99,np.pi,Npt) gls=np.zeros((Npt,L)) for n in range(0,Npt): gls[n,:]=get_ambipan_loudspeaker_gains(5,phis[n],phils,'basic') output_notebook() p3 = figure(title="rE/rV directions of 5th order basic Ambi panning on 12 loudspeaeker",plot_width=600, plot_height=270, x_range=(-180,180), y_range=(-180.0,180.0)) p4 = figure(title="rE/rV widths of 5th order basic Ambi panning on 12 loudspeaeker",plot_width=600, plot_height=270, x_range=(-180,180),y_range=(-3,100)) rE=rvector(gls**2,phils) dirE=np.arctan2(rE[1],rE[0])*180/np.pi; lenE=np.sqrt(np.sum(rE**2,0)) sigmaE=2*np.arccos(lenE)*180/np.pi p3.line(phis*180/np.pi, dirE, color="red",line_width=3,legend_label="rE basic") p4.line(phis*180/np.pi, sigmaE, color="red",line_width=3,legend_label="rE basic") rV=rvector(gls,phils) dirV=np.arctan2(rV[1],rV[0])*180/np.pi; lenV=np.sqrt(np.sum(rV**2,0)) sigmaV=2*np.arccos(lenV)*180/np.pi p3.line(phis*180/np.pi, dirV, color="blue",line_width=3,legend_label="rV basic",line_dash=(6,2)) p4.line(phis*180/np.pi, sigmaV, color="blue",line_width=3,legend_label="rV basic",line_dash=(6,2)) p3.legend.background_fill_alpha = 0.5 p4.legend.background_fill_alpha = 0.5 show(p3) show(p4) ###Output _____no_output_____ ###Markdown It also worlks for other weightings automatically. Let's for instance inspect the example for $\mathrm{max}-\boldsymbol{r}_\mathrm{E}$. ###Code L=12 Npt=200 phils=np.mod((2*np.pi*np.arange(0,L))/L+np.pi,2*np.pi)-np.pi phis=np.linspace(-np.pi*0.99,np.pi,Npt) gls=np.zeros((Npt,L)) for n in range(0,Npt): gls[n,:]=get_ambipan_loudspeaker_gains(5,phis[n],phils,'max-rE') output_notebook() p5 = figure(title="rE/rV directions of 5th order max-rE Ambi panning on 12 loudspeaeker",plot_width=600, plot_height=270, x_range=(-180,180), y_range=(-180.0,180.0)) p6 = figure(title="rE/rV widths of 5th order max-rE Ambi panning on 12 loudspeaeker",plot_width=600, plot_height=270, x_range=(-180,180),y_range=(-3,100)) rE=rvector(gls**2,phils) dirE=np.arctan2(rE[1],rE[0])*180/np.pi; lenE=np.sqrt(np.sum(rE**2,0)) sigmaE=2*np.arccos(lenE)*180/np.pi p5.line(phis*180/np.pi, dirE, color="red",line_width=3,legend_label="rE basic") p6.line(phis*180/np.pi, sigmaE, color="red",line_width=3,legend_label="rE basic") rV=rvector(gls,phils) dirV=np.arctan2(rV[1],rV[0])*180/np.pi; lenV=np.sqrt(np.sum(rV**2,0)) sigmaV=2*np.arccos(lenV)*180/np.pi p5.line(phis*180/np.pi, dirV, color="blue",line_width=3,legend_label="rV basic",line_dash=(6,2)) p6.line(phis*180/np.pi, sigmaV, color="blue",line_width=3,legend_label="rV basic",line_dash=(6,2)) p5.legend.background_fill_alpha = 0.5 p6.legend.background_fill_alpha = 0.5 show(p5) show(p6) ###Output _____no_output_____
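###Markdown As a compact numeric complement to the sweeps above, the sketch below reuses the helper functions defined at the top of this notebook to print the spread angle $\sigma_E = 2\arccos\lVert\boldsymbol{r}_\mathrm{E}\rVert$ (in degrees) at a single panning direction $\varphi_s=0$ for the basic, max-rE, and in-phase weightings, again with $N=5$ on the same regular 12-loudspeaker layout; nothing beyond those functions is assumed. ###Code
# Numeric comparison of the rE spread for the three weightings,
# reusing get_ambipan_loudspeaker_gains and rvector from above.
L = 12
phils = np.mod((2*np.pi*np.arange(0, L))/L + np.pi, 2*np.pi) - np.pi
for weight_type in ('basic', 'max-rE', 'in-phase'):
    g = get_ambipan_loudspeaker_gains(5, 0.0, phils, weight_type)
    rE = rvector(g[np.newaxis, :]**2, phils)
    lenE = np.sqrt(np.sum(rE**2))
    sigmaE = 2*np.arccos(np.clip(lenE, -1.0, 1.0))*180/np.pi
    print("%-9s sigmaE = %.1f deg" % (weight_type, sigmaE))
###Output _____no_output_____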
Managing big data with MySQL (Duke)/MySQL_Exercise_06_Common_Pitfalls_of_Grouped_Queries.ipynb
###Markdown Copyright Jana Schaich Borg/Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) MySQL Exercise 6: Common Pitfalls of Grouped QueriesThere are two main reasons grouped queries can cause problems, especially in MySQL:>1) MySQL gives the user the benefit of the doubt, and assumes we don't make (at least some kinds of) mistakes. Unfortunately, we do make those mistakes.>2) We commonly think about data as spreadsheets that allow you make calculations across rows and columns, and that allow you to keep both raw and aggregated data in the same spreadsheet. Relational databases don't work that way.The way these issues cause problems are:>1) When we are working with a MySQL database, we incorrectly interpret non-sensical output from illogical queries, or >2) When we are working with a non-MySQL database platform, we struggle with trying to make queries that will never work because they ask for both aggregated and non-aggregated data.In this lesson, we will learn what these issues look like. 1. Misinterpretations due to Aggregation MismatchesBegin by loading the SQL library, connecting to the Dognition database, and setting the Dognition database as the default. ###Code %load_ext sql %sql mysql://studentuser:studentpw@mysqlserver/dognitiondb %sql USE dognitiondb ###Output The sql extension is already loaded. To reload it, use: %reload_ext sql 0 rows affected. ###Markdown Imagine that we would like to retrieve, for each breed_type in the Dognition database, the number of unique dog_guids associated with that breed_type and their weight. Let's try to write a query that reflects that request:```mySQLSELECT breed_type, COUNT(DISTINCT dog_guid) AS NumDogs, weightFROM dogsGROUP BY breed_type;```**Now take a look at the output:** ###Code %%sql SELECT breed_type, COUNT(DISTINCT dog_guid) AS NumDogs, weight FROM dogs GROUP BY breed_type; ###Output 4 rows affected. ###Markdown You immediately notice a few things: (1) the query accurately represents the fields I said I wanted; (2) the query executed without errors! Wonderful! (3) Cross Breed dogs weigh 0 pounds; and (4) the grammar of the sentence describing what I said I wanted seems a little confusing: "We would like to retrieve, for *each breed_type* in the Dognition database, *the number of* unique dog_guids associated with that breed_type and *their weight*." All of these things you noticed are related. Let's address them in reverse order. What's wrong with the sentence I wrote? One of the things I said I wanted was *the number of* unique dog_guids. This is a single number. I also said I wanted "their weight." "Their" implies many weight measurements, not one measurement. In order to make my grammar correct, I need my description of dog_guids and weight to either both be singular or both be plural. To make the logic behind the sentence make sense, I have to do a similar thing: either dog_guids and weight both need to be aggregated or dog_guids and weight both need to be non-aggregated. It's useful to remember that SQL output is always a table. How could you construct a valid table that would have columns for aggregate counts and individual weight measurements at the same time? The answer is, you can't. One option is to disaggregate the count so that you have one column with dog_guids and another column with weight measurements for each dog_guid. 
The only other option is to aggregate the weight measurements so that you have one column with the total count of dog_guids and another column with the average (or some other kind of summary aggregation) weight measurement for the group the count represents. That brings us to the next phenomenon we observed: Cross Breed dogs weigh 0 pounds. Well, unless the laws of gravity and physics have changed, that's not possible. Something strange must be happening in the weight field.We've established that the question I posed and the query I executed don't make logical sense, yet the MySQL query did run! If there is no way to make a tablular output that fits what I asked for, what is MySQL outputting for us?It turns out that MySQL resolves my poor query by choosing its own way to "summarize" the unaggregated field, which in our case is "weight." Rather than run an aggregation function that takes all the values in the weight column into account, though, it unpredictably populates the weight output column with one value from all the possible weight values within a given breed_type subset. Which value it chooses will be different depending on the original order of the raw data and the configuration of a database. This flexibility is very convenient when you know that all the values in a non-aggregated column are the same for the subsets of the data that correspond to the variable by which you are grouping. In fact, the visualization software Tableau (which is based in SQL language) recognized how frequently this type of situation arises and came up with a custom solution for its customers. Tableau incorprated an aggregation-like function called "ATTR" into its interface to let users say "I'm using an aggregation function here because SQL says I have to, but I know that this is a situation where all of the rows in each group will have the same value." Tableau's approach is helpful because it forces users to acknowledge that a field in a query is supposed to be aggregated, and Tableau's formulas will crash if all the rows in a group do not have the same value. MySQL doesn't force users to do this. MySQL trusts users to know what they are doing, and will provide an output even if all the rows in a group do not have the same value. Unfortunately, this approach can cause havoc if you aren't aware of what you are asking MySQL to do and aren't familiar with your data.Let's see a couple more first-hand examples of this tricky GROUP BY behavior. Let's assume you want to know the number of each kind of test completed in different months of the year.You execute the following query:```mySQLSELECT test_name, MONTH(created_at) AS Month, COUNT(created_at) AS Num_Completed_TestsFROM complete_testsGROUP BY test_nameORDER BY test_name ASC, Month ASC;```**Question 1: What does the Month column represent in this output? Take a look and see what you think:** ###Code %%sql SELECT test_name, MONTH(created_at) AS Month, COUNT(created_at) AS Num_Completed_Tests FROM complete_tests GROUP BY test_name ORDER BY test_name ASC, Month ASC; ###Output 40 rows affected. ###Markdown Now try a similar query, but GROUP BY Month instead of test_name:```mySQLSELECT test_name, MONTH(created_at) AS Month, COUNT(created_at) AS Num_Completed_TestsFROM complete_testsGROUP BY MonthORDER BY Month ASC, test_name ASC;```**Question 2: What does test_name mean in this case? 
Try it out:** ###Code %%sql SELECT test_name, MONTH(created_at) AS Month, COUNT(created_at) AS Num_Completed_Tests FROM complete_tests GROUP BY Month ORDER BY Month ASC, test_name ASC; ###Output 12 rows affected. ###Markdown It looks like in both of these cases, MySQL is likely populating the unaggregated column with the first value it finds in that column within the first "group" of rows it is examining. So how do we prevent this from happening?>The only way to be sure how the MySQL database will summarize a set of data in a SELECT clause is to tell it how to do so with an aggregate function.I should have written my original request to read:"I would like to know, for *each breed type* of dog, *the number of* unique Dog_Guids there are in the Dognition database and *the breed_type's average weight*."The query that would have reflected this sentence would have executed an aggregate function for both Dog_Guids and weight. The output of these aggregate functions would be unambiguous, and would easily be represented in a single table. 2. Errors due to Aggregation MismatchesIt is important to note that the issues I described above are the consequence of mismatching aggregate and non-aggregate functions through the GROUP BY clause in MySQL, but other databases manifest the problem in a different way. Other databases won't allow you to run the queries described above at all. When you try to do so, you get an error message that sounds something like:```Column 'X' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.```Especially when you are just starting to learn MySQL, these error messages can be confusing and infuriating. A good discussion of this problem can be found here:http://weblogs.sqlteam.com/jeffs/archive/2007/07/20/but-why-must-that-column-be-contained-in-an-aggregate.aspxAs a way to prevent these logical mismatches or error messages, you will often hear a rule that "every non-aggregated field that is listed in the SELECT list *must* be listed in the GROUP BY list." You have just seen that this rule is not true in MySQL, which makes MySQL both more flexible and more tricky to work with. However, it is a useful rule of thumb for helping you avoid unknown mismatch errors. 3. By the way, even if you want to, there is no way to intentionally include aggregation mismatches in a single queryYou might want to know the total number of unique User_Guids in the Dognition database, and in addition, the total number of unique User_Guids and average weight associated with each breed type. Given that you want to see the information efficiently to help you make decisions, you would like all of this information in one output. After all, that would be easy to do in Excel, given that all of this information could easily be summarized in a single worksheet.To retrieve this information, you try one of the queries described above. Since you know the rule describing the relationship between fields in the SELECT and GROUP BY clauses, you write:```mySQLSELECT COUNT(DISTINCT dog_guid), breed_type, AVG(weight) AS avg_weight, FROM dogsGROUP BY breed_type;```The output to your query gives you four rows with the correct information, but it doesn't give you a count of the entire table without the groups being applied. Surely there must be a way to write a sophisticated query that can put these two pieces of information together for you, right?Hopefully the discussion in the section above has already made it clear that the answer to this has to be "no." 
The output of every SQL query is a table. Can you think of a single table that could logically contain aggregated and non-aggregated data? You could put both types of information in an Excel worksheet, but not in a single table. There's yet another, more practical reason the information you want can't be selected in a single query. The order of SQL queries is meant to reflect the way we write sentences, but in actuality they are executed in a different order than we write them. The cartoon below shows the order we write the queries being sent to the database at the top of the funnel, and the order the database usually executes the queries on the conveyor belt. This diagram shows you that data are actually grouped before the SELECT expressions are applied. That means that when a GROUP BY expression is included in an SQL query, there is no way to use a SELECT statement to summarize data that cross multiple groups. The data will have already been separated by the time the SELECT statement is applied. The only way to get the information you want is to write two separate queries. This concept can be difficult to understand when you start using SQL for the first time after exclusively using Excel, but soon you will become accustomed to it. By the way, this diagram also shows you why some platforms, and some queries in some platforms, crash when you try to use aliases or derived fields in WHERE, GROUP BY, or HAVING clauses. If the SELECT statement hasn't been run yet, the alias or derived fields won't be available (as a reminder, some database systems--like MySQL--have found ways to overcome this issue). On the other hand, SELECT is executed before ORDER BY clauses. That means most database systems should be able to use aliases and derived fields in ORDER BY clauses. Now that you are knowledgeable about the common pitfalls caused by GROUP BY, you are ready to perform one of the most powerful and fundamental utilities of a relational database: JOINS! Watch the next video to learn more about how joins work. ###Code %%sql SELECT COUNT(DISTINCT dog_guid), breed_type, AVG(weight) AS avg_weight FROM dogs GROUP BY breed_type; ###Output 4 rows affected.
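###Markdown To close with a concrete sketch of the "two separate queries" point above: the overall count has to come from one query, and the per-breed_type summary from another. Both queries below use only the dogs table columns already used in this lesson. For example: ```mySQL
SELECT COUNT(DISTINCT dog_guid) AS TotalDogs
FROM dogs;
``` ```mySQL
SELECT breed_type, COUNT(DISTINCT dog_guid) AS NumDogs, AVG(weight) AS avg_weight
FROM dogs
GROUP BY breed_type;
```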