markdown | code | path | repo_name | license |
---|---|---|---|---|
Now we can compute the Global Field Power
We can track the emergence of spatial patterns compared to baseline
for each frequency band, with a bootstrapped confidence interval.
We see dominant responses in the Alpha and Beta bands. | # Helper function for plotting spread
def stat_fun(x):
"""Return sum of squares."""
return np.sum(x ** 2, axis=0)
# Plot
fig, axes = plt.subplots(4, 1, figsize=(10, 7), sharex=True, sharey=True)
colors = plt.get_cmap('winter_r')(np.linspace(0, 1, 4))
for ((freq_name, fmin, fmax), average), color, ax in zip(
frequency_map, colors, axes.ravel()[::-1]):
times = average.times * 1e3
gfp = np.sum(average.data ** 2, axis=0)
gfp = mne.baseline.rescale(gfp, times, baseline=(None, 0))
ax.plot(times, gfp, label=freq_name, color=color, linewidth=2.5)
ax.axhline(0, linestyle='--', color='grey', linewidth=2)
ci_low, ci_up = bootstrap_confidence_interval(average.data, random_state=0,
stat_fun=stat_fun)
ci_low = rescale(ci_low, average.times, baseline=(None, 0))
ci_up = rescale(ci_up, average.times, baseline=(None, 0))
ax.fill_between(times, gfp + ci_up, gfp - ci_low, color=color, alpha=0.3)
ax.grid(True)
ax.set_ylabel('GFP')
ax.annotate('%s (%d-%dHz)' % (freq_name, fmin, fmax),
xy=(0.95, 0.8),
horizontalalignment='right',
xycoords='axes fraction')
ax.set_xlim(-1000, 3000)
axes.ravel()[-1].set_xlabel('Time [ms]') | 0.20/_downloads/05c57a644672d33707fd1264df7f5617/plot_time_frequency_global_field_power.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
To enable these features, we first need to run enable_notebook to initialize
the required javascript. | from mdtraj.html import TrajectoryView, enable_notebook
enable_notebook() | examples/WebGL-Viewer.ipynb | daviddesancho/mdtraj | lgpl-2.1 |
The WebGL viewer engine is called iview, and is introduced in the following paper: Li, Hongjian, et al. "iview: an interactive WebGL visualizer for protein-ligand complex." BMC Bioinformatics 15.1 (2014): 56. | # Controls:
# - default mouse to rotate.
# - ctrl to translate
# - shift to zoom (or use wheel)
# - shift+ctrl to change the fog
# - double click to toggle full screen
widget = TrajectoryView(traj, secondaryStructure='ribbon')
widget | examples/WebGL-Viewer.ipynb | daviddesancho/mdtraj | lgpl-2.1 |
We can even animate through the trajectory simply by updating the widget's frame attribute | import time
for i in range(traj.n_frames):
widget.frame = i
time.sleep(0.1) | examples/WebGL-Viewer.ipynb | daviddesancho/mdtraj | lgpl-2.1 |
Geometric Brownian motion
Here is a function that produces standard Brownian motion using NumPy. This is also known as a Wiener Process. | def brownian(maxt, n):
"""Return one realization of a Brownian (Wiener) process with n steps and a max time of t."""
t = np.linspace(0.0,maxt,n)
h = t[1]-t[0]
Z = np.random.normal(0.0,1.0,n-1)
dW = np.sqrt(h)*Z
W = np.zeros(n)
W[1:] = dW.cumsum()
return t, W | assignments/assignment03/NumpyEx03.ipynb | ajhenrikson/phys202-2015-work | mit |
Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W. | t,W=brownian(1,1000)
assert isinstance(t, np.ndarray)
assert isinstance(W, np.ndarray)
assert t.dtype==np.dtype(float)
assert W.dtype==np.dtype(float)
assert len(t)==len(W)==1000 | assignments/assignment03/NumpyEx03.ipynb | ajhenrikson/phys202-2015-work | mit |
Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes. | plt.plot(t,W)
plt.xlabel("t")
plt.ylabel("W(t)")
assert True # this is for grading | assignments/assignment03/NumpyEx03.ipynb | ajhenrikson/phys202-2015-work | mit |
Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences. | dW=np.diff(W)
print(dW.mean())
print(dW.std())
assert len(dW)==len(W)-1
assert dW.dtype==np.dtype(float) | assignments/assignment03/NumpyEx03.ipynb | ajhenrikson/phys202-2015-work | mit |
Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation:
$$
X(t) = X_0 e^{((\mu - \sigma^2/2)t + \sigma W(t))}
$$
Use Numpy ufuncs and no loops in your function. | def geo_brownian(t, W, X0, mu, sigma):
"Return X(t) for geometric brownian motion with drift mu, volatility sigma."""
x=(X0)*np.exp((mu-(sigma**2)/2)*(t)+sigma*(W))
return x,t
assert True # leave this for grading | assignments/assignment03/NumpyEx03.ipynb | ajhenrikson/phys202-2015-work | mit |
Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above.
Visualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes. | x,t=geo_brownian(t,W, 1.0, .5, .3) #plotting with variables
plt.plot(t,x)
plt.xlabel("t")
plt.ylabel("X(t)")
assert True # leave this for grading | assignments/assignment03/NumpyEx03.ipynb | ajhenrikson/phys202-2015-work | mit |
Overdamped Langevin Equation
The overdamped Langevin equation is defined as
$$
dX_t=-\nabla U(x)dt+\sigma dB_t,
$$
for some potential $U$. $B_t$ represents Brownian motion and $\sigma$ controls the "strength" of the random variations.
In this example, we will work with a specific potential
$$
U(x)=(b-a/2)(x^2-1)^2+a/2\cdot (x+1).
$$
This is a double-well potential, as can be seen in the following plot. | a = -1;
b = 1;
def U(x,a=-1,b=1):
return (b-a/2)*(x**2-1)**2+a/2*(x+1)
x = np.linspace(-1.5,1.5)
pl.plot(x,U(x),color=pale_red,linewidth=5)
pl.title('The potential $U(x)$ with $a=-1$ and $b=1$',fontsize=20) | ipython_notebooks/langevin.ipynb | kgourgou/stochastic-simulations-class | mit |
Deterministic System
Let's write the equation down, given the specific potential. First, if $\sigma=0$, then the equation is an ODE
$$
\begin{align}
\frac{dX_t}{dt}&=4c(1-x^2)x-a/2,\\
c&=(b-a/2).
\end{align}
$$ | # Defining the derivative of the potential
def Uprime(x,t,a=-1,b=1):
return 4*(b-a/2.0)*x*(1-x**2)-a/2.0 | ipython_notebooks/langevin.ipynb | kgourgou/stochastic-simulations-class | mit |
By numerically solving $U'(x)=0$, we can find that there are three equilibrium points for the system. Approximately, those are
$$
\begin{align}
x_1&=-0.955393,\\
x_2&=-0.083924,\\
x_3&=1.03932.
\end{align}
$$ | from scipy.integrate import odeint # importing a solver
t = np.linspace(0,10,100)
xinit = np.array([2.0,1.0,-0.08,-0.9,-2])
with sns.cubehelix_palette(3):
for i in xrange(5):
sol = odeint(Uprime, xinit[i], t)
pl.plot(t,sol,alpha=0.8,linewidth=10)
pl.title('Five different solutions of the ODE system',fontsize=20)
pl.xlabel('t',fontsize=20) | ipython_notebooks/langevin.ipynb | kgourgou/stochastic-simulations-class | mit |
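As a quick check of the equilibrium values quoted above, one could locate the zeros of the drift (equivalently, of $U'$) with a bracketing root finder; a sketch using `scipy.optimize.brentq` and the `Uprime` defined above (the brackets are an assumption based on where the drift changes sign for $a=-1$, $b=1$):

```python
from scipy.optimize import brentq

drift = lambda x: Uprime(x, 0)  # Uprime ignores its time argument

# intervals where the drift changes sign, one per equilibrium
brackets = [(-1.5, -0.5), (-0.5, 0.0), (0.5, 1.5)]
equilibria = [brentq(drift, lo, hi) for lo, hi in brackets]
print(equilibria)  # approximately [-0.9554, -0.0839, 1.0393]
```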
As we can see, out of the three equilibrium solutions of the system, two are stable and the one in the middle is unstable. We will use this information for comparisons with the stochastic system.
Stochastic System
Let us now assume that $\sigma>0$. In that case, we have an SDE, which we can solve with the Euler-Maruyama method. The scheme is:
$$
X_{n+1}=X_{n}+f(X_n)\Delta t+\sigma \sqrt{\Delta t}\cdot z,
$$
where $z$ is a draw from the standard normal distribution. | def EM(xinit,sigma,T,Dt=0.1,a=-1,b=1):
'''
Returns the solution of the Langevin equation with
potential U.
Arguments
=========
xinit : real, initial condition.
sigma : real, standard deviation parameter, used in generating brownian motion.
Dt : real, stepsize of the Euler-Maruyama.
T : real, final time to reach.
'''
n = int(T/Dt) # number of steps to reach T
X = np.zeros(n)
z = sigma*randn(n)
X[0] = xinit # Initial condition
# EM method
for i in xrange(1,n):
        X[i] = X[i-1] + Dt*Uprime(X[i-1], 0, a, b) + np.sqrt(Dt)*z[i-1]  # 0 is a dummy time argument (Uprime ignores t)
return X
| ipython_notebooks/langevin.ipynb | kgourgou/stochastic-simulations-class | mit |
Now we can reproduce the picture from the deterministic case, but this time with the extra stochastic part. When $\sigma$ is small, we see a picture similar to the deterministic one, as the deterministic dynamics overpower the stochasticity. | with sns.cubehelix_palette(3):
for i in xrange(5):
path = EM(xinit[i],sigma=0.1,T=10)
pl.plot(t,path,alpha=0.7,linewidth=10)
pl.title('Trajectories of the Langevin SDE, $\sigma=0.1$',fontsize=20)
pl.xlabel('t',fontsize=20)
with sns.cubehelix_palette(5):
for i in xrange(5):
path = EM(xinit[i],sigma=0.4,T=10)
pl.plot(t,path,alpha=0.9,linewidth=5)
pl.title('Trajectories of the Langevin SDE, $\sigma=0.4$',fontsize=20)
pl.xlabel('t',fontsize=20) | ipython_notebooks/langevin.ipynb | kgourgou/stochastic-simulations-class | mit |
Changing the $\sigma$ from $0.1$ to $0.4$ provides random "kicks" that are hard enough for the solutions to jump from one equilibrium to the other. | with sns.cubehelix_palette(5):
for i in xrange(5):
path = EM(xinit[i],sigma=1,T=10)
pl.plot(t,path,alpha=0.7,linewidth=5)
pl.title('Trajectories of the Langevin SDE, $\sigma=1$',fontsize=20)
pl.xlabel('t',fontsize=20)
| ipython_notebooks/langevin.ipynb | kgourgou/stochastic-simulations-class | mit |
With $\sigma=1$, the trajectories can now move freely between the equilibrium points. We can still see some kind of attraction though to the area around them.
Let us attempt to set $\sigma$ to a larger number and see what happens. | with sns.cubehelix_palette(5):
for i in xrange(5):
path = EM(xinit[i],sigma=2.4,T=10)
pl.plot(t,path,linewidth=5)
pl.title('Trajectories of the Langevin SDE, $\sigma=2.4$',fontsize=20)
pl.xlabel('t',fontsize=20) | ipython_notebooks/langevin.ipynb | kgourgou/stochastic-simulations-class | mit |
Now the kicks are strong enough that the attractiveness (or repulsiveness) of the stationary points looks completely irrelevant. The dynamics are all about the stochastic part.
Changing the properties of the potential
We now fix $\sigma=1$ and look at the paths for $a\in [0,b]$. We will start from $X_0=1.3$. | b = 1
arange = np.linspace(0,b,3)
with sns.cubehelix_palette(3):
for aval in arange:
path = EM(1.3,sigma=1, T=10, a=aval)
pl.plot(t,path,linewidth=4)
pl.title('With $a=0,0.5,1$',fontsize=20)
pl.xlabel('$t$',fontsize=20)
b = 1
arange = np.linspace(0,b,3)
with sns.cubehelix_palette(3):
for aval in arange:
pl.plot(x,U(x,a=aval),linewidth=4)
pl.title('With $a=0,0.5,1$',fontsize=20)
pl.xlabel('$x$',fontsize=20) | ipython_notebooks/langevin.ipynb | kgourgou/stochastic-simulations-class | mit |
Above we have the potentials corresponding to the paths. To make things more concrete, here is the path superimposed on the potential. | def plotOnPoten(a,T=2):
x = np.linspace(-2,2)
    pl.plot(x,U(x,a=a),linewidth=1,color='black')
    path = EM(1.3,sigma=1, T=T, a=a)
    pl.plot(path,U(path,a=a),linewidth=4,alpha=0.7,color=pale_red)
pl.title('For $a='+str(a)+'$.',fontsize=20)
plotOnPoten(1,T=10) | ipython_notebooks/langevin.ipynb | kgourgou/stochastic-simulations-class | mit |
Now that we have created a toy example for playing around with the DataFrame, let's print it out in different ways.
Concept for Exercise: Data Interaction and Printing
When interacting with data, it is very important to look at different parts of the data (e.g. df.head()). Here we will show that you can print the modin.pandas DataFrame in the same ways you would with pandas. | # When working with non-string column labels it could happen that some backend logic would try to insert a column
# with a string name to the frame, so we do add_prefix()
df = df.add_prefix("col")
# Print the first 10 lines.
df.head(10)
df.count() | examples/tutorial/jupyter/execution/omnisci_on_native/local/exercise_1.ipynb | modin-project/modin | apache-2.0 |
XML example
for details about tree traversal and iterators, see https://docs.python.org/2.7/library/xml.etree.elementtree.html | document_tree = ET.parse( './data/mondial_database_less.xml' )
# print names of all countries
for child in document_tree.getroot():
print (child.find('name').text)
# print names of all countries and their cities
for element in document_tree.iterfind('country'):
print ('* ' + element.find('name').text + ':', end=''),
capitals_string = ''
for subelement in element.getiterator('city'):
capitals_string += subelement.find('name').text + ', '
print (capitals_string[:-2]) | Week_1/DATA_WRANGLING/WORKING_WITH_DATA_IN_FILES/data_wrangling_xml/data_wrangling_xml/.ipynb_checkpoints/sliderule_dsi_xml_exercise-checkpoint.ipynb | abhipr1/DATA_SCIENCE_INTENSIVE | apache-2.0 |
XML exercise
Using data in 'data/mondial_database.xml', the examples above, and referring to https://docs.python.org/2.7/library/xml.etree.elementtree.html, find
10 countries with the lowest infant mortality rates
10 cities with the largest population
10 ethnic groups with the largest overall populations (sum of best/latest estimates over all countries)
name and country of a) longest river, b) largest lake and c) airport at highest elevation | document = ET.parse( './data/mondial_database.xml' )
# print child and attributes
#for child in document.getroot():
# print (child.tag, child.attrib)
import pandas as pd
# Create a list of country and their Infant Mortality Rate
country_imr=[]
for country in document.getroot().findall('country'):
name = country.find('name').text
infant_mortality_rate = country.find('infant_mortality')
if infant_mortality_rate is not None:
infant_mortality_rate=infant_mortality_rate.text
else :
infant_mortality_rate = -1
country_imr.append((name, (float)(infant_mortality_rate))) | Week_1/DATA_WRANGLING/WORKING_WITH_DATA_IN_FILES/data_wrangling_xml/data_wrangling_xml/.ipynb_checkpoints/sliderule_dsi_xml_exercise-checkpoint.ipynb | abhipr1/DATA_SCIENCE_INTENSIVE | apache-2.0 |
10 countries with the lowest infant mortality rates | df = pd.DataFrame(country_imr, columns=['Country', 'Infant_Mortality_Rate'])
df_unknown_removed = df[df.Infant_Mortality_Rate != -1]
df_unknown_removed.set_index('Infant_Mortality_Rate').sort().head(10)
city_population=[]
for country in document.iterfind('country'):
for state in country.iterfind('province'):
for city in state.iterfind('city'):
try:
city_population.append((city.find('name').text, float(city.find('population').text)))
except:
next
for city in country.iterfind('city'):
try:
city_population.append((city.find('name').text, float(city.find('population').text)))
except:
next | Week_1/DATA_WRANGLING/WORKING_WITH_DATA_IN_FILES/data_wrangling_xml/data_wrangling_xml/.ipynb_checkpoints/sliderule_dsi_xml_exercise-checkpoint.ipynb | abhipr1/DATA_SCIENCE_INTENSIVE | apache-2.0 |
10 cities with the largest population | df = pd.DataFrame(city_population, columns=['City', 'Population'])
#df.info()
df.sort_index(by='Population', ascending=False).head(10)
ethnic_population={}
country_population={}
for country in document.iterfind('country'):
try:
country_population[country.find('name').text]= float(country.find('population').text)
except:
next
for state in country.iterfind('province' or 'state'):
try:
country_population[country.find('name').text] += float(state.find('population').text)
except:
next
for city in state.iterfind('city'):
try:
country_population[country.find('name').text] += float(city.find('population').text)
except:
next
for country in document.iterfind('country'):
for ethnicgroup in country.iterfind('ethnicgroup'):
try:
if ethnicgroup.text in ethnic_population:
ethnic_population[ethnicgroup.text] += country_population[country.find('name').text]*float(ethnicgroup.get('percentage'))/100
else:
ethnic_population[ethnicgroup.text] = country_population[country.find('name').text]*float(ethnicgroup.get('percentage'))/100
except:
next | Week_1/DATA_WRANGLING/WORKING_WITH_DATA_IN_FILES/data_wrangling_xml/data_wrangling_xml/.ipynb_checkpoints/sliderule_dsi_xml_exercise-checkpoint.ipynb | abhipr1/DATA_SCIENCE_INTENSIVE | apache-2.0 |
10 ethnic groups with the largest overall populations (sum of best/latest estimates over all countries) | pd.DataFrame(sorted(ethnic_population.items(), key=lambda x:x[1], reverse=True)[:10], columns=['Ethnic_Groups', 'Population'])
rivers_list=[]
rivers_df = pd.DataFrame()
for rivers in document.iterfind('river'):
try:
rivers_list.append({'name':rivers.find('name').text, 'length':rivers.find('length').text, 'country':rivers.find('located').attrib['country']})
except:
next
rivers_list | Week_1/DATA_WRANGLING/WORKING_WITH_DATA_IN_FILES/data_wrangling_xml/data_wrangling_xml/.ipynb_checkpoints/sliderule_dsi_xml_exercise-checkpoint.ipynb | abhipr1/DATA_SCIENCE_INTENSIVE | apache-2.0 |
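One possible way to finish part a) of the exercise from here (a sketch, assuming the `rivers_list` built above; note that `length` was stored as a string and has to be cast to a float before sorting):

```python
import pandas as pd

rivers_df = pd.DataFrame(rivers_list)
rivers_df['length'] = rivers_df['length'].astype(float)
# name and country of the longest river (newer pandas: sort_values)
print(rivers_df.sort_values('length', ascending=False).head(1))
```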
In the example below, the Composer class "decorates" the two following functions, meaning the Composer instances become the new proxies for the functions they swallowed. The original functions are still on tap, through __call__.
Furthermore, when two such Composer types are multiplied, their internal functions get composed together, into a new internalized function. | class Composer:
def __init__(self, f):
self.func = f
def __call__(self, s):
return self.func(s)
def __mul__(self, other):
def new(s):
return self(other(s))
return Composer(new)
@Composer
def F(x):
return x * x
@Composer
def G(x):
return x + 2 | About_Decorators.ipynb | 4dsolutions/Python5 | mit |
Below is a simple composition of functions. This is valid Python even if the Composer decorator is left out, i.e. function type objects would normally have no problem composing with one another in this way.
To compose F and G means going F(G(x)) for some x. | F(G(F(F(F(G(10)))))) | About_Decorators.ipynb | 4dsolutions/Python5 | mit |
Thanks to Composer, the "class decorator" (a decorator that happens to be a class), our F and G are actually Composer type objects, so they have this additional ability to compose into other Composer type objects. We don't need an argument until we call the final H. | H = F*G*F*F*F*G # the functions themselves may be multiplied
H(10) | About_Decorators.ipynb | 4dsolutions/Python5 | mit |
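A quick sanity check of the composition order defined in `__mul__` above: the left operand is applied last, so `(F*G)(x)` is `F(G(x))`, and `H` really is the same mapping as the nested call shown earlier.

```python
FG = F * G                                # Composer wrapping x -> F(G(x))
print(FG(3))                              # G(3) = 5, then F(5) = 25
print(H(10) == F(G(F(F(F(G(10)))))))      # True
```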
Contents
Overview of Numpy and Scipy
The Numpy library
Arrays vs Matrices
Axis
Basic functions
Input and Output
Tips
1. Overview of numpy and scipy
What is the difference between numpy and scipy?
In an ideal world, NumPy would contain nothing but the array data type and the most basic operations: indexing, sorting, reshaping, basic elementwise functions, et cetera. All numerical code would reside in SciPy. However, one of NumPy’s important goals is compatibility, so NumPy tries to retain all features supported by either of its predecessors. Thus NumPy contains some linear algebra functions, even though these more properly belong in SciPy. In any case, SciPy contains more fully-featured versions of the linear algebra modules, as well as many other numerical algorithms. If you are doing scientific computing with python, you should probably install both NumPy and SciPy. Most new features belong in SciPy rather than NumPy.
Link stackoverflow
Since Python is an open-source language, there are thousands of packages available, created by individuals or communities. They may live in a repository such as github or bitbucket, or be hosted in the official Python repository: pypi. Early on, when there was no official scientific computing library, several candidates proposed solutions:
* numpy: had an excellent representation of vectors, matrices and arrays, implemented in C and easily callable from Python
* scipy: proposed linking to existing high-performance scientific computing libraries written in C or Fortran, so they could be executed quickly from Python.
Both projects grew in complexity and scope, and instead of competing they decided to split the work and join forces to offer a scientific computing platform that could completely replace other programs.
numpy: Covers everything related to the data structures (dense and sparse arrays, matrices, special constructors, reading of regular data, etc.), but not the operations themselves. For historical and compatibility reasons it does contain some algorithms, but in practice it is more consistent to use the algorithms in scipy.
scipy: Covers the numerical implementation of a variety of scientific algorithms: linear algebra, statistics, ordinary differential equations, interpolation, integration, optimization, signal analysis, among others.
IMPORTANT REMARK:
numpy matrices and arrays must contain variables of a single data type: only integers, only floats, only complex numbers, only booleans or only strings. This uniformity of the data is what makes it possible to speed up the computations with low-level C implementations.
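A short illustration of that remark (a small sketch): when mixed types are passed in, NumPy silently promotes everything to a single common dtype.

```python
import numpy as np

print(np.array([1, 2, 3]).dtype)      # an integer dtype
print(np.array([1, 2, 3.5]).dtype)    # float64: the integers get promoted
print(np.array([1, 2, 'a']).dtype)    # a string dtype: everything becomes text
```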
2. The Numpy library
We will always import the numpy library as follows:
import numpy as np
All of numpy's functions and modules are then within reach, just 3 characters away:
np.array([1,4,9,16])
np.linspace(0.,1.,100)
Avoid at all costs using the following:
from numpy import * | import numpy as np
print np.version.version # Si alguna vez tienen problemas, verifiquen su version de numpy | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
Important
The IPython notebook is interactive and supports tab completion to offer suggestions or show help (not only for numpy, but for any Python code).
Try the following examples: | # Presionar tabulacción con el cursor despues de np.arr
np.arr
# Presionar Ctr-Enter para obtener la documentacion de la funcion np.array usando "?"
np.array?
# Presionar Ctr-Enter
%who
x = 10
%who | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
2. The Numpy library
2.1 Array vs Matrix
By default, the vast majority of numpy and scipy functions assume they will be passed an object of type array.
We will look at the differences between the array and matrix objects, but remember to use array whenever possible.
Matrix
A numpy matrix behaves exactly as we would expect from a matrix:
Pros:
Multiplication uses the * sign, as expected.
It feels natural if all we are doing is linear algebra.
Cons:
All matrices must be fully aligned in order to operate correctly.
Elementwise operations are harder to define/access.
They are exclusively 2D: a row vector or a column vector is still 2D. | # Operaciones con np.matrix
A = np.matrix([[1,2],[3,4]])
B = np.matrix([[1, 1],[0,1]], dtype=float)
x = np.matrix([[1],[2]])
print "A =\n", A
print "B =\n", B
print "x =\n", x
print "A+B =\n", A+B
print "A*B =\n", A*B
print "A*x =\n", A*x
print "A*A = A^2 =\n", A**2
print "x.T*A =\n", x.T * A | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
2.1 Array vs Matrix
Array
A numpy array is simply a multidimensional "container".
Pros:
It is multidimensional: 1D, 2D, 3D, ...
It is consistent: all operations are element-wise unless a specific function is used.
Cons:
Matrix multiplication uses the dot() function | # Operaciones con np.array
A = np.array([[1,2],[3,4]])
B = np.array([[1, 1],[0,1]], dtype=float)
x = np.array([1,2]) # No hay necesidad de definir como fila o columna!
print "A =\n", A
print "B =\n", B
print "x =\n", x
print "A+B =\n", A+B
print "AoB = (multiplicacion elementwise) \n", A*B
print "A*B = (multiplicacion matricial, v1) \n", np.dot(A,B)
print "A*B = (multiplicacion matricial, v2) \n", A.dot(B)
print "A*A = A^2 = (potencia matricial)\n", np.linalg.matrix_power(A,2)
print "AoA = (potencia elementwise)\n", A**2
print "A*x =\n", np.dot(A,x)
print "x.T*A =\n", np.dot(x,A) # No es necesario transponer. | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
Challenge 1: matrix vs array
Let
$$
A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1\end{pmatrix}
$$
and
$$
B = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1\end{pmatrix}
$$
Create the matrices using np.matrix and multiply them in the matrix sense. Print the result.
Create the matrices using np.array and multiply them in the matrix sense. Print the result. | # 1: Utilizando matrix
A = np.matrix([]) # FIX ME
B = np.matrix([]) # FIX ME
print "np.matrix, AxB=\n", #FIX ME
# 2: Utilizando arrays
A = np.array([]) # FIX ME
B = np.array([]) # FIX ME
print "np.matrix, AxB=\n", #FIX ME | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
2.2 Indexing and Slicing
Arrays are indexed in the "traditional" way.
For a one-dimensional array: there is a single index. It is neither a row nor a column!
For a two-dimensional array: the first component refers to the rows, the second to the columns. The notation therefore follows the traditional matrix convention.
For a three-dimensional array: the first component refers to the rows, the second to the columns, and the third to the next dimension.
<img src="images/anatomyarray.png" alt="" height="100px" align="left"/>
Regarding element indices, they start at zero, as in C. It is also possible to use negative indices, which by convention assign -1 to the last element, -2 to the second-to-last element, and so on.
For example, if a = [2,3,5,7,11,13,17,19], then a[0] is the value 2 and a[1] is the value 3, while a[-1] is the value 19 and a[-2] is the value 17.
In addition, Python has the "slicing notation":
* a[start:end] : items from index start up to end-1
* a[start:] : items from index start to the end of the array
* a[:end] : items from the beginning up to index end-1
* a[:] : all the items of the array (a new copy)
* a[start:end:step] : items from start up to but not including end, with step step | x = np.arange(9) # "Vector" con valores del 0 al 8
print "x = ", x
print "x[:] = ", x[:]
print "x[5:] = ", x[5:]
print "x[:8] = ", x[:8]
print "x[:-1] = ", x[:-1]
print "x[1:-1] = ", x[1:-1]
print "x[1:-1:2] = ", x[1:-1:2]
A = x.reshape(3,3) # Arreglo con valores del 0 al 8, en 3 filas y 3 columnas.
print "\n"
print "A = \n", A
print "primera fila de A\n", A[0,:]
print "ultima columna de A\n", A[:,-1]
print "submatriz de A\n", A[:2,:2] | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
Remark
Note that when taking slices (subsections) of an array we always obtain an array with smaller dimensions than the original.
This notation is extremely convenient, since it lets us manipulate the array without needing to know its size, and write numerical formulas compactly.
For example, implementing a numerical derivative is as simple as the following. | def f(x):
return 1 + x**2
x = np.array([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]) # O utilizar np.linspace!
y = f(x) # Tan facil como llamar f sobre x
dydx = ( y[1:] - y[:-1] ) / ( x[1:] - x[:-1] )
x_aux = 0.5*(x[1:] + x[:-1])
# To plot
fig = plt.figure(figsize=(12,8))
plt.plot(x, y, '-s', label="f")
plt.plot(x_aux, dydx, '-s', label="df/dx")
plt.legend(loc="upper left")
plt.show() | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
Challenge 2: Numerical differentiation
Implement the computation of the second derivative, which can be obtained by centered finite differences via
$$ \frac{d^2 f(x_i)}{dx^2} = \frac{1}{\Delta x^2} \Big( f(x_{i+1}) -2 f(x_{i}) + f(x_{i-1}) \Big)$$ | def g(x):
return 1 + x**2 + np.sin(x)
x = np.linspace(0,1,10)
y = g(x)
d2ydx2 = 0 * x # FIX ME
x_aux = 0*d2ydx2 # FIX ME
# To plot
fig = plt.figure(figsize=(12,8))
plt.plot(x, y, label="f")
plt.plot(x_aux, d2ydx2, label="d2f/dx2")
plt.legend(loc="upper left")
plt.show() | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
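One possible vectorized implementation of the centered second-difference formula above, reusing the slicing tricks from the first-derivative example (a sketch, not the official solution to the challenge):

```python
# assumes the uniformly spaced x and y = g(x) from the cell above
dx = x[1] - x[0]
d2ydx2 = (y[2:] - 2 * y[1:-1] + y[:-2]) / dx**2
x_aux = x[1:-1]  # the second derivative lives at the interior points
```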
2. The Numpy library
2.2 Basic functions
Some basic functions that are worth knowing are the following:
* shape: Returns the dimensions of the array. It is always a tuple.
* len: Returns the number of elements of the first dimension of the array. It is always an integer.
* ones: Creates an array with the given dimensions, initialized with ones. By default a 1D array.
* zeros: Creates an array with the given dimensions, initialized with zeros. By default a 1D array.
* eye: Creates an array with the given dimensions, initialized with 1 on the diagonal. By default a 2D array. | # arrays 1d
A = np.ones(3)
print "A = \n", A
print "A.shape =", A.shape
print "len(A) =", len(A)
B = np.zeros(3)
print "B = \n", B
print "B.shape =", B.shape
print "len(B) =", len(B)
C = np.eye(1,3)
print "C = \n", C
print "C.shape =", C.shape
print "len(C) =", len(C)
# Si queremos forzar la misma forma que A y B
C = np.eye(1,3).flatten() # o np.eye(1,3)[0,:]
print "C = \n", C
print "C.shape =", C.shape
print "len(C) =", len(C)
# square arrays
A = np.ones((3,3))
print "A = \n", A
print "A.shape =", A.shape
print "len(A) =", len(A)
B = np.zeros((3,3))
print "B = \n", B
print "B.shape =", B.shape
print "len(B) =", len(B)
C = np.eye(3) # Or np.eye(3,3)
print "C = \n", C
print "C.shape =", C.shape
print "len(C) =", len(C)
# fat 2d array
A = np.ones((2,5))
print "A = \n", A
print "A.shape =", A.shape
print "len(A) =", len(A)
B = np.zeros((2,5))
print "B = \n", B
print "B.shape =", B.shape
print "len(B) =", len(B)
C = np.eye(2,5)
print "C = \n", C
print "C.shape =", C.shape
print "len(C) =", len(C) | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
2. The Numpy library
2.2 Basic functions
Some basic functions that are worth knowing are the following:
* reshape: Converts an array to a new shape. The number of elements must stay the same.
* linspace: Returns an array with linearly spaced values.
* diag(x): If x is 1D, returns a 2D array with the values on the diagonal. If x is 2D, returns the values on the diagonal.
* sum: Sums the values of the array. Can be done globally or along an axis.
* mean: Computes the average of the values of the array. Can be done globally or along an axis.
* std: Computes the standard deviation of the values of the array. Can be done globally or along an axis. | x = np.linspace(0., 1., 6)
A = x.reshape(3,2)
print "x = \n", x
print "A = \n", A
print "np.diag(x) = \n", np.diag(x)
print "np.diag(B) = \n", np.diag(A)
print ""
print "A.sum() = ", A.sum()
print "A.sum(axis=0) = ", A.sum(axis=0)
print "A.sum(axis=1) = ", A.sum(axis=1)
print ""
print "A.mean() = ", A.mean()
print "A.mean(axis=0) = ", A.mean(axis=0)
print "A.mean(axis=1) = ", A.mean(axis=1)
print ""
print "A.std() = ", A.std()
print "A.std(axis=0) = ", A.std(axis=0)
print "A.std(axis=1) = ", A.std(axis=1) | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
Challenge 3
Complete the following code:
* You are given a square array A.
* Compute an array B as the element-wise multiplication of A with itself.
* Compute an array C as the matrix multiplication of A and B.
* Print the resulting matrix C.
* Compute the sum, mean and standard deviation of the values on the diagonal of C.
* Print the values computed above. | A = np.outer(np.arange(3),np.arange(3))
print A
# FIX ME
# FIX ME
# FIX ME
# FIX ME
# FIX ME | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
Challenge 4
Implement the trapezoidal integration rule | def mi_funcion(x):
f = 1 + x + x**3 + x**5 + np.sin(x)
return f
N = 5
x = np.linspace(-1,1,N)
y = mi_funcion(x)
# FIX ME
I = 0 # FIX ME
# FIX ME
print "Area bajo la curva: %.3f" %I
# Ilustración gráfica
x_aux = np.linspace(x.min(),x.max(),N**2)
fig = plt.figure(figsize=(12,8))
fig.gca().fill_between(x, 0, y, alpha=0.25)
plt.plot(x_aux, mi_funcion(x_aux), 'k')
plt.plot(x, y, 'r.-')
plt.show() | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
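As a cross-check for this challenge, NumPy already ships a composite trapezoidal rule, `np.trapz`, which a hand-rolled implementation can be compared against (a brief aside):

```python
I_ref = np.trapz(y, x)  # built-in composite trapezoidal rule
print("np.trapz reference: %.3f" % I_ref)
```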
2. The Numpy library
2.5 Input and Output
Numpy can read data into an array with the loadtxt function. There are several optional arguments, but the most important are:
* skiprows: allows skipping lines when reading.
* dtype: declares the data type of the resulting array | # Ejemplo de lectura de datos
data = np.loadtxt("data/cherry.txt")
print data.shape
print data
# Ejemplo de lectura de datos, saltandose 11 lineas y truncando a enteros
data_int = np.loadtxt("data/cherry.txt", skiprows=11).astype(int)
print data_int.shape
print data_int | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
2. The Numpy library
2.5 Input and Output
Numpy makes it easy to save data with the savetxt function: we must always provide the file name and the array to be saved.
There are several optional arguments, but the most important are:
* header: Line to write as a header for the data
* fmt: Format in which the data is saved (%d for integers, %.5f for floats with 5 decimals, %.3E for scientific notation with 3 decimals, etc.). | # Guardando el archivo con un header en español
encabezado = "Diametro Altura Volumen (Valores truncados a numeros enteros)"
np.savetxt("data/cherry_int.txt", data_int, fmt="%d", header=encabezado) | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
Let's check whether the file was written correctly. We switch from python to bash to use the terminal commands: | %%bash
cat data/cherry_int.txt | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
Challenge 5
Read the file data/cherry.txt
Rescale the matrix so that all units are in meters or cubic meters.
Save the matrix to a new file data/cherry_mks.txt, with an appropriate header and 2 decimal places of precision for the floats (but not in scientific notation). | # Leer datos
#FIX_ME#
# Convertir a mks
#FIX_ME#
# Guardar en nuevo archivo
#FIX_ME# | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
2. The Numpy library
2.6 Data selection
There are 2 ways of selecting data from an array A:
* Using data masks, which are arrays with the same dimensions as A but of boolean type. All elements where the mask is True are selected.
* Using an array of integer values. The values of the array indicate which elements should be kept.
2.6 Masks
Note that the returned array is always one-dimensional, since there is no guarantee that the original dimensions of the array can be preserved. | x = np.linspace(0,42,10)
print "x = ", x
print "x.shape = ", x.shape
print "\n"
mask_x_1 = x>10
print "mask_x_1 = ", mask_x_1
print "x[mask_x_1] = ", x[mask_x_1]
print "x[mask_x_1].shape = ", x[mask_x_1].shape
print "\n"
mask_x_2 = x > x.mean()
print "mask_x_2 = ", mask_x_2
print "x[mask_x_2] = ", x[mask_x_2]
print "x[mask_x_2].shape = ", x[mask_x_2].shape
A = np.linspace(10,20,12).reshape(3,4)
print "\n"
print "A = ", A
print "A.shape = ", A.shape
print "\n"
mask_A_1 = A>13
print "mask_A_1 = ", mask_A_1
print "A[mask_A_1] = ", A[mask_A_1]
print "A[mask_A_1].shape = ", A[mask_A_1].shape
print "\n"
mask_A_2 = A > 0.5*(A.min()+A.max())
print "mask_A_2 = ", mask_A_2
print "A[mask_A_2] = ", A[mask_A_2]
print "A[mask_A_2].shape = ", A[mask_A_2].shape
T = np.linspace(-100,100,24).reshape(2,3,4)
print "\n"
print "T = ", T
print "T.shape = ", T.shape
print "\n"
mask_T_1 = T>=0
print "mask_T_1 = ", mask_T_1
print "T[mask_T_1] = ", T[mask_T_1]
print "T[mask_T_1].shape = ", T[mask_T_1].shape
print "\n"
mask_T_2 = 1 - T + 2*T**2 < 0.1*T**3
print "mask_T_2 = ", mask_T_2
print "T[mask_T_2] = ", T[mask_T_2]
print "T[mask_T_2].shape = ", T[mask_T_2].shape | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
2.6 Indices
Note that indices may be repeated, so the resulting array can contain more elements than the original array.
For a 2d array, two arrays must be passed: the first for the rows and the second for the columns. | x = np.linspace(10,20,11)
print "x = ", x
print "x.shape = ", x.shape
print "\n"
ind_x_1 = np.array([1,2,3,5,7])
print "ind_x_1 = ", ind_x_1
print "x[ind_x_1] = ", x[ind_x_1]
print "x[ind_x_1].shape = ", x[ind_x_1].shape
print "\n"
ind_x_2 = np.array([0,0,1,2,3,4,5,6,7,-3,-2,-1,-1])
print "ind_x_2 = ", ind_x_2
print "x[ind_x_2] = ", x[ind_x_2]
print "x[ind_x_2].shape = ", x[ind_x_2].shape
A = np.linspace(-90,90,10).reshape(2,5)
print "A = ", A
print "A.shape = ", A.shape
print "\n"
ind_row_A_1 = np.array([0,0,0,1,1])
ind_col_A_1 = np.array([0,2,4,1,3])
print "ind_row_A_1 = ", ind_row_A_1
print "ind_col_A_1 = ", ind_col_A_1
print "A[ind_row_A_1,ind_col_A_1] = ", A[ind_row_A_1,ind_col_A_1]
print "A[ind_row_A_1,ind_col_A_1].shape = ", A[ind_row_A_1,ind_col_A_1].shape
print "\n"
ind_row_A_2 = 1
ind_col_A_2 = np.array([0,1,3])
print "ind_row_A_2 = ", ind_row_A_2
print "ind_col_A_2 = ", ind_col_A_2
print "A[ind_row_A_2,ind_col_A_2] = ", A[ind_row_A_2,ind_col_A_2]
print "A[ind_row_A_2,ind_col_A_2].shape = ", A[ind_row_A_2,ind_col_A_2].shape | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
Challenge 6
<img src="images/generador_eolico.jpg" alt="" width="280px" align="right"/>
The power of a wind turbine, for $k$ a constant related to the geometry and efficiency, $\rho$ the air density, $r$ the radius of the turbine in meters and $v$ the wind speed in meters per second, is given by:
$$ P = \begin{cases} k \, \rho \, r^2 \, v^3, & 3 \leq v \leq 25 \\ 0, & \text{otherwise}\end{cases}$$
Typically one takes $k=0.8$ and an air density of $\rho = 1.2$ [$kg/m^3$].
Compute the number of active turbines, the average power and the total power generated by the 11 turbines of the parque Eólico Canela 1 wind farm.
The turbine radii (in meters) and wind speeds (in kilometers per hour) are given below as arrays in the numerical code. | import numpy as np
k = 0.8
rho = 1.2 #
r_m = np.array([ 25., 25., 25., 25., 25., 25., 20., 20., 20., 20., 20.])
v_kmh = np.array([10.4, 12.6, 9.7, 7.2, 12.3, 10.8, 12.9, 13.0, 8.6, 12.6, 11.2]) # En kilometros por hora
P = 0
n_activos = 0
P_mean = 0.0
P_total = 0.0
print "Existen %d aerogeneradores activos del total de %d" %(n_activos, r_m.shape[0])
print "La potencia promedio de los aeorgeneradores es {0:.2f} ".format(P_mean)
print "La potencia promedio de los aeorgeneradores es " + str(P_total) | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | sebastiandres/mat281 | cc0-1.0 |
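A possible vectorized way to fill in the blanks above (a sketch, not the official solution; it assumes the wind speed must be converted from km/h to m/s before applying the 3-25 m/s operating range):

```python
v_ms = v_kmh / 3.6                                   # km/h -> m/s
P = k * rho * r_m**2 * v_ms**3                       # power of every turbine
P = np.where((v_ms >= 3) & (v_ms <= 25), P, 0.0)     # turbines outside the range produce nothing
n_activos = int(np.sum(P > 0))
P_mean = P.mean()
P_total = P.sum()
print(n_activos, round(P_mean, 2), round(P_total, 2))
```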
obtained from the site <a href="http://dados.gov.br">dados.gov.br</a>
Since our file is an .xml, we will use the xml.etree.ElementTree module to parse the contents of the file. We will abbreviate the name of this module as ET. | import xml.etree.ElementTree as ET | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
The ElementTree (ET) module
An XML file is a hierarchical data set, so the most natural way to represent this data is through a tree. For that, the ET module has two classes: the ElementTree class represents the whole XML document as a tree, and the Element class represents a node of this tree. All interactions with the complete document (for instance, reading and writing the file) are done through the ElementTree class; on the other hand, interactions with an isolated XML element and its subelements are done through the Element class.
The ET.parse method returns an ElementTree. | tree = ET.parse(arquivo) | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
To see the root element of the tree, we use | root = tree.getroot() | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
The root object, which is an Element, has the properties tag and attrib, the latter being a dictionary of its attributes. | root.tag
root.attrib | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
To access each of the child nodes of the root element, we iterate over these nodes (which are also Elements): | for child in root:
print(child.tag, child.attrib) | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
Selecting the data
Now that we have a better idea of the data to be processed, let's build a pandas DataFrame with what interests us. First, note that only the last node matters to us, since all the others make up the header of the XML file. So let's explore the valores node: | valoresIDEB = root.find('valores') | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
Note that we have one more layer of data: | valoresIDEB
valoresIDEB[0] | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
So we can, for example, explore the grandchild nodes of the tree: | for child in valoresIDEB:
for grandchild in child:
print(grandchild.tag, grandchild.attrib) | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
Let's now turn the data into a DataFrame. | data = []
for child in valoresIDEB:
data.append([float(child[0].text), child[1].text, child[2].text])
data | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
Since the <a href="http://pandas.pydata.org/">Pandas</a> library is all the rage ;) we will use it to process and store the data. But we will refer to the pandas library by a shorter name, pd. | import pandas as pd | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
To start, we create a DataFrame, that is, a table, with the data we already have. | tabelaInicial = pd.DataFrame(data, columns = ["Valor", "Municipio", "Ano"])
tabelaInicial | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
Note that this table has data from both 2007 and 2009. For simplicity, we will not use the 2007 data. | tabelaInicial = tabelaInicial.loc[0:19] | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
Obtaining IBGE codes for the municipalities
In tabelaInicial, the municipalities are not identified by name but by their IBGE code. To read the Excel file containing the table of Brazilian municipalities (updated in 2014) and their respective 7-digit codes - the codes include a check digit at the end - we use the xlrd module, which is not installed along with pandas by default (you must install it manually) if you want to run the command below. See <a href="https://pypi.python.org/pypi/xlrd">here</a>, for example. | dadosMunicipioIBGE = pd.read_excel("DTB_2014_Municipio.xls") | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
We can look at the kind of table we have using the pandas head method. | dadosMunicipioIBGE.head() | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
Since not all of the data interests us, we select only the columns "Nome_UF" (it may be useful to reference the state later), "Cod Municipio Completo" and "Nome_Município". | dadosMunicipioIBGE = dadosMunicipioIBGE[["Nome_UF", "Cod Municipio Completo", "Nome_Município"]] | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
Next, we need to select, from the full dadosMunicipioIBGE table, the data for the municipalities present in tabelaInicial, which contains the computed IDEB values. To do this, we extract from both DataFrames the columns corresponding to the municipality codes (remembering that in dadosMunicipioIBGE the codes contain a check digit that will not be used): | listaMunicipiosInicial = tabelaInicial["Municipio"]
listaMunicipios = dadosMunicipioIBGE["Cod Municipio Completo"].map(lambda x: str(x)[0:6]) | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
Note that above we used the map method to turn the numeric codes into strings and then drop the last (check) digit.
Now both listaMunicipiosInicial and listaMunicipios are pandas Series objects. To obtain the indices of the municipalities for which we have IDEB information, we first identify which codes are not present in listaMunicipiosInicial: | indicesMunicipios = listaMunicipios[~listaMunicipios.isin(listaMunicipiosInicial)] | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
And now we extract the corresponding rows from the dadosMunicipioIBGE table. | new = dadosMunicipioIBGE.drop(indicesMunicipios.index).reset_index(drop=True) | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
Finally, we create a new table (DataFrame) joining the names and the IDEB values computed in tabelaInicial. | dadosFinais = pd.concat([new, tabelaInicial], axis=1) | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
The final table is | dadosFinais | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
To finish: a plot
To use plots in notebooks, we must include in the notebook the command
% matplotlib inline
or
% matplotlib notebook
Since this is usually done in the first cell of the notebook, but in our case we would rather not sacrifice the readability of the document, we use an nbextension called init_cell so that this command is executed when the notebook is initialized (Details)
First, let's import the matplotlib library. | import matplotlib.pyplot as plt | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
Next, let's replace the indices of the dadosFinais table with the names of the listed municipalities, since we would like to plot the IDEB value per municipality. | dadosFinais.set_index(["Nome_Município"], inplace=True) | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
Finally, since we are interested in a plot of IDEB per municipality, we will only use the data in the "Valor" column of the dadosFinais table (note that the result of this operation is a Series) | dadosFinais["Valor"] | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
We are ready to make our plot. | dadosFinais["Valor"].plot(kind='barh')
plt.title("IDEB por Município (Dados de 2009)") | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
Comments on generating the documents and the script
To convert this notebook to a Python script, use the command
The removeextracode.tpl file has the following contents:
Initialization cell <a id='sobre_inicializacao'></a>
Through the "init_cell" nbextension, it is possible to change the initialization order of the notebook cells. If we look at the metadata of the cell below, we will see that it is marked to be executed before all other cells, which gives the desired result (this cell allows matplotlib plots to be rendered inside the notebook). | %matplotlib inline | exemplo/IDEB.ipynb | melissawm/lpwithnotebooks | gpl-3.0 |
Submodular Optimization & Influence Maximization
The content and examples in this documentation are built on top of the wonderful blog post at the following link. Blog: Influence Maximization in Python - Greedy vs CELF.
Influence Maximization (IM)
Influence Maximization (IM) is a field of network analysis with a lot of applications - from viral marketing to disease modeling and public health interventions. IM is the task of finding a small subset of nodes in a network such that the resulting "influence" propagating from that subset reaches the largest number of nodes in the network. "Influence" represents anything that can be passed across connected peers within a network, such as information, behavior, disease or product adoption. To make it even more concrete, IM can be used to answer the question:
If we can try to convince a subset of individuals to adopt a new product or innovation, and the goal is to trigger a large cascade of further adoptions, which set of individuals should we target?
Kempe et al. (2003) were the first to formalize IM as the following combinatorial optimization problem: Given a network with $n$ nodes and given a "spreading" or propagation process on that network, choose a "seed set" $S$ of size $k<n$ to maximize the number of nodes in the network that are ultimately influenced.
Solving this problem turns out to be extremely computationally burdensome. For example, in a relatively small network of 1,000 nodes, there are ${n\choose k} \approx 8$ trillion different possible candidate seed sets of size $k=5$, which is impossible to solve directly even on state-of-the-art high performance computing resources. Consequently, over the last 15 years, researchers have been actively trying to find approximate solutions to the problem that can be computed quickly. This notebook walks through:
How to implement two of the earliest and most fundamental approximation algorithms in Python - the Greedy and the CELF algorithms - and compare their performance.
We will also spend some time discussing the field of submodular optimization, as it turns out, the combinatorial optimization problem we described above is submodular.
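As a quick sanity check of that "8 trillion" figure (an aside, requires Python 3.8+ for math.comb):

```python
from math import comb

print(comb(1000, 5))  # 8,250,291,250,200 candidate seed sets of size 5
```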
Getting Started
We begin by loading a few modules. There are many popular network modeling packages, but we'll use the igraph package. Don't worry if you're not acquainted with the library, we will explain the syntax, and if you like, you can even swap it out with a different graph library that you prefer.
We'll first test these algorithms to see if they can produce the correct solution for a simple example for which we know the two nodes which are the most influential. Below we create a 10-node/20-edge directed igraph network object. This artificially created network is designed to ensure that nodes 0 and 1 are the most influential. We do this by creating 8 links outgoing from each of these nodes compared to only 1 outgoing links for the other 8 nodes. We also ensure nodes 0 and 1 are not neighbors so that having one in the seed set does not make the other redundant. | source = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 2, 3, 4, 5]
target = [2, 3, 4, 5, 6, 7, 8, 9, 2, 3, 4, 5, 6, 7, 8, 9, 6, 7, 8, 9]
# create a directed graph
graph = Graph(directed=True)
# add the nodes/vertices (the two are used interchangeably) and edges
# 1. the .add_vertices method adds the number of vertices
# to the graph and igraph uses integer vertex id starting from zero
# 2. to add edges, we call the .add_edges method, where edges
# are specified by a tuple of integers.
graph.add_vertices(10)
graph.add_edges(zip(source, target))
print('vertices count:', graph.vcount())
print('edges count:', graph.ecount())
# a graph api should allow us to retrieve the neighbors of a node
print('neighbors: ', graph.neighbors(2, mode='out'))
# or create an adjacency list of the graph,
# as we can see node 0 and 1 are the most influential
# as the two nodes are connected to a lot of other nodes
graph.get_adjlist() | networkx/max_influence/max_influence.ipynb | ethen8181/machine-learning | mit |
Spread Process - Independent Cascade (IC)
IM algorithms solve the optimization problem for a given spread or propagation process. We therefore first need to specify a function that simulates the spread from a given seed set across the network. We'll simulate the influence spread using the popular Independent Cascade (IC) model, although there are many others we could have chosen.
Independent Cascade starts by having an initial set of seed nodes, $A_0$, that start the diffusion process, and the process unfolds in discrete steps according to the following randomized rule:
When node $v$ first becomes active in step $t$, it is given a single chance to activate each currently inactive
neighbor $w$; this process succeeds with a probability $p_{v,w}$, a parameter of the system — independently of the history thus far. If $v$ succeeds, then $w$ will become active in step $t + 1$; but whether or not $v$ succeeds in this current step $t$, it cannot make any further attempts to activate $w$ in subsequent rounds. This process runs until no more activations are possible. Here, we assume that the nodes are progressive, meaning the node will only go from inactive to active, but not the other way around. | def compute_independent_cascade(graph, seed_nodes, prob, n_iters=1000):
total_spead = 0
# simulate the spread process over multiple runs
for i in range(n_iters):
np.random.seed(i)
active = seed_nodes[:]
new_active = seed_nodes[:]
# for each newly activated nodes, find its neighbors that becomes activated
while new_active:
activated_nodes = []
for node in new_active:
neighbors = graph.neighbors(node, mode='out')
success = np.random.uniform(0, 1, len(neighbors)) < prob
activated_nodes += list(np.extract(success, neighbors))
# ensure the newly activated nodes doesn't already exist
# in the final list of activated nodes before adding them
# to the final list
new_active = list(set(activated_nodes) - set(active))
active += new_active
total_spead += len(active)
return total_spead / n_iters
# assuming we start with 1 seed node
seed_nodes = [0]
compute_independent_cascade(graph, seed_nodes, prob=0.2) | networkx/max_influence/max_influence.ipynb | ethen8181/machine-learning | mit |
We calculate the expected spread of a given seed set by taking the average over a large number of Monte Carlo simulations. The outer loop in the function iterates over each of these simulations and calculates the spread for each iteration; at the end, the mean over all iterations is our unbiased estimate of the expected spread of the seed nodes we've provided. The actual number of simulations required is up for debate; through experimentation I found 1,000 to work well enough, whereas 100 was too low. On the other hand, the paper even sets the simulation number as high as 10,000.
Within each Monte Carlo iteration, we simulate the spread of influence throughout the network over time, where a different "time period" occurs within each of the while loop iterations, which checks whether any new nodes were activated in the previous time step. If no new nodes were activated (when new_active is an empty list and therefore evaluates to False) then the independent cascade process terminates, and the function moves onto the next simulation after recording the total spread for this simulation. The term total spread here refers to the number of nodes ultimately activated (some algorithms are framed in terms of the "additional spread", in which case we would subtract the size of the seed set, so the code would be amended to len(active) - len(seed_nodes)).
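To get a feel for how sensitive the estimate is to the number of simulations, one can simply rerun the function defined above with different `n_iters` values (a quick sketch reusing the `graph` and `compute_independent_cascade` from earlier):

```python
for n in (100, 1000, 10000):
    estimate = compute_independent_cascade(graph, [0], prob=0.2, n_iters=n)
    print('n_iters = %5d -> estimated spread = %.3f' % (n, estimate))
```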
Greedy Algorithm
With our spread function in hand, we can now turn to the IM algorithms themselves. We begin with the Greedy algorithm. The method is referred to as greedy as it adds the node that currently provides the best spread to our solution set without considering whether it is actually the optimal choice in the long run. To elaborate, the process is:
We start with an empty seed set/nodes.
For all the nodes that are not in the seed set/nodes, we find the node with the largest spread and add it to the seed set.
We repeat step 2 until $k$ seed nodes are found.
This algorithm only needs to calculate the spread of $\sum_{i=0}^k (n-i)\approx kn$ nodes, which is just 5,000 in the case of our 1,000-node, $k=5$ network (a lot less than 8 trillion!). Of course, this computational improvement comes at the cost of the resulting seed set only being an approximate solution to the IM problem because it only considers the incremental spread of the $k$ nodes individually rather than combined. Fortunately, this seemingly naive greedy algorithm is theoretically guaranteed to choose a seed set whose spread will be at least 63% of the spread of the optimal seed set. The proof of the guarantee relies heavily on the "submodular" property of spread functions, which will be explained in more detail in a later section.
The following greedy() function implements the algorithm. It produces the (approximately) optimal set of k seed nodes for the graph graph. Apart from returning the seed set, it also records the average spread of that seed set along with a list showing the cumulative time taken to complete each iteration; we will use this information to compare against a different algorithm, CELF, in a later section. | def greedy(graph, k, prob=0.2, n_iters=1000):
"""
Find k nodes with the largest spread (determined by IC) from a igraph graph
using the Greedy Algorithm.
"""
# we will be storing elapsed time and spreads along the way, in a setting where
# we only care about the final solution, we don't need to record these
# additional information
elapsed = []
spreads = []
solution = []
start_time = time.time()
for _ in range(k):
best_node = -1
best_spread = -np.inf
# loop over nodes that are not yet in our final solution
# to find biggest marginal gain
nodes = set(range(graph.vcount())) - set(solution)
for node in nodes:
spread = compute_independent_cascade(graph, solution + [node], prob, n_iters)
if spread > best_spread:
best_spread = spread
best_node = node
solution.append(best_node)
spreads.append(best_spread)
elapse = round(time.time() - start_time, 3)
elapsed.append(elapse)
return solution, spreads, elapsed
# the result tells us the greedy algorithm was able to find the two most
# influential nodes, node 0 and node 1
k = 2
prob = 0.2
n_iters = 1000
greedy_solution, greedy_spreads, greedy_elapsed = greedy(graph, k, prob, n_iters)
print('solution: ', greedy_solution)
print('spreads: ', greedy_spreads)
print('elapsed: ', greedy_elapsed) | networkx/max_influence/max_influence.ipynb | ethen8181/machine-learning | mit |
Submodular Optimization
Now that we have a brief understanding of the IM problem and have taken a first stab at solving it, let's take a step back and formally discuss submodular optimization. A function $f$ is said to be submodular if it satisfies the diminishing returns property. More formally, suppose we are given a ground set $V$ and a function $f:2^V \rightarrow \mathbb{R}$ (the function's domain is the power set $2^V$, since a subset can either contain or not contain each element of $V$). The submodular property is defined as:
\begin{align}
f(A \cup {i}) - f(A) \geq f(B \cup {i}) - f(B)
\end{align}
For any $A \subseteq B \subseteq V$ and $i \in V \setminus B$. Hence adding any element $i$ to $A$, which is a subset of $B$, yields at least as much value (or more) as adding $i$ to $B$. In other words, the marginal gain of adding $i$ to $A$ should be greater than or equal to the marginal gain of adding $i$ to $B$ if $A$ is a subset of $B$.
The next property is known as monotone. We say that a submodular function is monotone if for any $A \subseteq B
\subseteq V$, we have $f(A) \leq f(B)$. This means that adding more elements to a set cannot decrease its value.
For example: let $f(X)=\max(X)$. We have the ground set $X= {1,2,3,4,5}$, and we choose $A={1,2}$ and $B={1,2,5}$. Given this, we can see that $f(A)=2$ and $f(B)=5$, and the marginal gains of items 3 and 4 are:
\begin{align}
f(3 \, | \, A) = 1 \nonumber \\
f(3 \, | \, B) = 0 \nonumber \\
f(4 \, | \, A) = 2 \nonumber \\
f(4 \, | \, B) = 0
\end{align}
Here we use the shorthand $f(i \, | \, A)$, to denote $f(A \cup {i}) - f(A)$.
Note that $f(i \, | \, A) \ge f(i \, | \, B)$ for any choice of $i$ here, because $A \subseteq B$ and $f$ is submodular. To recap, submodular functions have the diminishing returns property, which says that adding an element to a larger set results in a smaller marginal increase in the value of $f$ (compared to adding the element to a smaller set). And monotonicity ensures that adding an additional element to the solution set does not decrease the function's value.
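As a quick illustration (an addition, not part of the original notebook), the toy example above can be checked numerically in a couple of lines of Python:

```python
# marginal gain of adding element i to set S under the set function f
def marginal_gain(f, S, i):
    return f(S | {i}) - f(S)

A, B = {1, 2}, {1, 2, 5}
for i in (3, 4):
    print(i, marginal_gain(max, A, i), marginal_gain(max, B, i))
# 3 -> gain 1 for A vs 0 for B; 4 -> gain 2 for A vs 0 for B
```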
Since the functions we're dealing with are monotone, the set with the maximum value is always the entire ground set $V$. What we're actually interested in is the setting where we impose a cardinality constraint - that is, finding the set of size at most $k$ that maximizes the utility. Formally:
\begin{align}
A^* = \underset{A: |A| \leq k}{\text{argmax}} \,\, f(A)
\end{align}
For instance, in our IM problem, we are interested in finding the subset of $k$ nodes that generates the largest influence. The greedy algorithm we showed above is one approach to solving this combinatorial problem.
Given a ground set $V$, if we're interested in populating a solution set of size $k$.
The algorithm starts with the empty set $A_0$
Then repeats the following step for $i = 0, ... , (k-1)$:
\begin{align}
A_{i+1} = A_{i} \cup { \underset{v \in V \setminus A_i}{\text{argmax}} \,\, f(A_i \cup {v}) }
\end{align}
From a theoretical standpoint, this procedure guarantees a solution whose score is at least $1 - 1/e \approx 0.63$ of the optimal set's score. | # if we check the solutions from the greedy algorithm we've
# implemented above, we can see that our solution is in fact
# submodular, as the marginal spreads we get are in diminishing order
np.diff(np.hstack([np.array([0]), greedy_spreads])) | networkx/max_influence/max_influence.ipynb | ethen8181/machine-learning | mit |
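To make the connection between the abstract procedure above and the code explicit, here is a small, generic sketch (an illustrative addition, not from the original notebook) of greedy maximization for an arbitrary monotone submodular set function f under a cardinality constraint; greedy() above is exactly this pattern with f replaced by the Monte Carlo spread estimate.

```python
def greedy_submodular_max(f, ground_set, k):
    """Greedily pick k elements maximizing a monotone submodular function f."""
    solution = set()
    for _ in range(k):
        # the element maximizing f(solution | {v}) also maximizes the
        # marginal gain, since f(solution) is constant across candidates
        best = max(ground_set - solution, key=lambda v: f(solution | {v}))
        solution.add(best)
    return solution

# example with the toy function from the previous section;
# ties are broken arbitrarily, so the output contains 5 plus one other element
print(greedy_submodular_max(max, {1, 2, 3, 4, 5}, 2))
```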
Cost Effective Lazy Forward (CELF) Algorithm
The CELF algorithm was developed by Leskovec et al. (2007); elsewhere it is referred to as the Lazy Greedy algorithm. Although the Greedy algorithm is much quicker than solving the full problem, it is still very slow when used on realistically sized networks. CELF was one of the first significant subsequent improvements.
CELF exploits the sub-modularity property of the spread function, which implies that the marginal spread of a given node in one iteration of the Greedy algorithm cannot be any larger than its marginal spread in the previous iteration. This helps us to choose the nodes for which we evaluate the spread function in a more sophisticated manner, rather than simply evaluating the spread for all nodes. More specifically, in the first round, we calculate the spread for all nodes (like Greedy) and store them in a list/heap, which is then sorted. Naturally, the top node is added to the seed set in the first iteration, and then removed from the list/heap. In the next iteration, only the spread for the top node is calculated. If, after resorting, that node remains at the top of the list/heap, then it must have the highest marginal gain of all nodes. Why? Because we know that if we calculated the marginal gain for all other nodes, they'd be lower than the value currently in the list (due to submodularity) and therefore the "top node" would remain on top. This process continues, finding the node that remains on top after calculating its marginal spread, and then adding it to the seed set. By avoiding calculating the spread for many nodes, CELF turns out to be much faster than Greedy, which we'll show below.
The celf() function below, which implements the algorithm, is split into two components. The first component, like the Greedy algorithm, iterates over each node in the graph and selects the node with the highest spread into the seed set. However, it also stores the spreads of each node for use in the second component.
The second component iterates to find the remaining $k-1$ seed nodes. Within each iteration, the algorithm evaluates the marginal spread of the top node. If, after resorting, the top node stays in place then that node is selected as the next seed node. If not, then the marginal spread of the new top node is evaluated and so on.
Like greedy(), the function returns the optimal seed set, the resulting spread and the time taken to compute each iteration. In addition, it also returns the list lookups, which keeps track of how many spread calculations were performed at each iteration. We didn't bother doing this for greedy() because we know the number of spread calculations in iteration $i$ is $N-i-1$. | import heapq
def celf(graph, k, prob, n_iters=1000):
"""
Find k nodes with the largest spread (determined by IC) from a igraph graph
using the Cost Effective Lazy Forward Algorithm, a.k.a Lazy Greedy Algorithm.
"""
start_time = time.time()
# find the first node with greedy algorithm:
# python's heap is a min-heap, thus
# we negate the spread to get the node
# with the maximum spread when popping from the heap
gains = []
for node in range(graph.vcount()):
spread = compute_independent_cascade(graph, [node], prob, n_iters)
heapq.heappush(gains, (-spread, node))
# we pop the heap to get the node with the best spread,
# and negate it again to recover the actual spread
spread, node = heapq.heappop(gains)
solution = [node]
spread = -spread
spreads = [spread]
# record the number of times the spread is computed
lookups = [graph.vcount()]
elapsed = [round(time.time() - start_time, 3)]
for _ in range(k - 1):
node_lookup = 0
matched = False
while not matched:
node_lookup += 1
# here we need to compute the marginal gain of adding the current node
# to the solution, instead of just the gain, i.e. we need to subtract
# the spread without adding the current node
_, current_node = heapq.heappop(gains)
spread_gain = compute_independent_cascade(
graph, solution + [current_node], prob, n_iters) - spread
# check if the previous top node stayed on the top after pushing
# the marginal gain to the heap
heapq.heappush(gains, (-spread_gain, current_node))
matched = gains[0][1] == current_node
# spread stores the cumulative spread; note the popped value is the negated
# marginal gain, so subtracting it adds the gain to the running total
spread_gain, node = heapq.heappop(gains)
spread -= spread_gain
solution.append(node)
spreads.append(spread)
lookups.append(node_lookup)
elapse = round(time.time() - start_time, 3)
elapsed.append(elapse)
return solution, spreads, elapsed, lookups
k = 2
prob = 0.2
n_iters = 1000
celf_solution, celf_spreads, celf_elapsed, celf_lookups = celf(graph, k, prob, n_iters)
print('solution: ', celf_solution)
print('spreads: ', celf_spreads)
print('elapsed: ', celf_elapsed)
print('lookups: ', celf_lookups) | networkx/max_influence/max_influence.ipynb | ethen8181/machine-learning | mit |
Larger Network
Now that we know both algorithms at least work correctly for a simple network for which we know the answer, we move on to a more generic graph to compare the performance and efficiency of each method. Any igraph network object will work, but for the purposes of this post we will use a random Erdos-Renyi graph with 100 nodes and 300 edges. The exact type of graph doesn't matter as the main points hold for any graph. Rather than explicitly defining the nodes and edges like we did above, here we make use of the .Erdos_Renyi() method to automatically create the graph. | np.random.seed(1234)
graph = Graph.Erdos_Renyi(n=100, m=300, directed=True) | networkx/max_influence/max_influence.ipynb | ethen8181/machine-learning | mit |
Given the graph, we again compare both optimizers with the same parameters. As for the n_iters parameter, it is not uncommon to see it set to a much higher number in the literature, such as 10,000, to get a more accurate estimate of the spread; we chose a lower number here so we don't have to wait as long for the results. | k = 10
prob = 0.1
n_iters = 1500
celf_solution, celf_spreads, celf_elapsed, celf_lookups = celf(graph, k, prob, n_iters)
greedy_solution, greedy_spreads, greedy_elapsed = greedy(graph, k, prob, n_iters)
# print resulting solution
print('celf output: ' + str(celf_solution))
print('greedy output: ' + str(greedy_solution)) | networkx/max_influence/max_influence.ipynb | ethen8181/machine-learning | mit |
Thankfully, both optimization methods yield the same solution set.
In the next few code chunks, we will use some of the information we stored while running the optimizations to perform a more thorough comparison. First, we plot the resulting expected spread from both optimization methods; we can see that both yield the same expected spread. | # change default style figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
lw = 4
fig = plt.figure(figsize=(9,6))
ax = fig.add_subplot(111)
ax.plot(range(1, len(greedy_spreads) + 1), greedy_spreads, label="Greedy", color="#FBB4AE", lw=lw)
ax.plot(range(1, len(celf_spreads) + 1), celf_spreads, label="CELF", color="#B3CDE3", lw=lw)
ax.legend(loc=2)
plt.ylabel('Expected Spread')
plt.title('Expected Spread')
plt.xlabel('Size of Seed Set')
plt.tick_params(bottom=False, left=False)
plt.show() | networkx/max_influence/max_influence.ipynb | ethen8181/machine-learning | mit |
We now compare the speed of each algorithm. The plot below shows that the computation time of Greedy is larger than CELF for all seed set sizes greater than 1 and the difference in computational times grows exponentially with the size of the seed set. This is because Greedy must compute the spread of $N-i-1$ nodes in iteration $i$ whereas CELF generally performs far fewer spread computations after the first iteration. | lw = 4
fig = plt.figure(figsize=(9,6))
ax = fig.add_subplot(111)
ax.plot(range(1, len(greedy_elapsed) + 1), greedy_elapsed, label="Greedy", color="#FBB4AE", lw=lw)
ax.plot(range(1, len(celf_elapsed) + 1), celf_elapsed, label="CELF", color="#B3CDE3", lw=lw)
ax.legend(loc=2)
plt.ylabel('Computation Time (Seconds)')
plt.xlabel('Size of Seed Set')
plt.title('Computation Time')
plt.tick_params(bottom=False, left=False)
plt.show() | networkx/max_influence/max_influence.ipynb | ethen8181/machine-learning | mit |
We can get some further insight into the superior computational efficiency of CELF by observing how many "node lookups" it had to perform during each of the 10 rounds. The list that records this information shows that the first round iterated over all 100 nodes of the network. This is identical to Greedy, which is why the graph above shows that the running time is equivalent for $k=1$. However, for subsequent iterations, there are far fewer spread computations because the marginal spread of a node in a previous iteration is a good indicator of its marginal spread in a future iteration. Note the relationship between the values below and the corresponding computation time presented in the graph above. There is a visible jump in the blue line for higher values of the "node lookups". This again solidifies the fact that while CELF produces an identical solution set to Greedy, it usually offers enormous speedups over the standard Greedy procedure. | celf_lookups
Let's generate some sample data. 100000 observations, 50 features, only 5 of which matter, 7 of which are redundant, split among 2 classes for classification. | X, y = make_classification(n_samples=100000,
n_features=50,
n_informative=5,
n_redundant=7,
n_classes=2,
random_state=2) | notebooks/Regularization and Model Tuning.ipynb | jwjohnson314/data-803 | mit |
When building linear models, it's a good idea to standardize all of your predictors (mean zero, variance 1). | figsize(12, 6)
plt.scatter(range(50), np.mean(X, axis=0));
scaler = StandardScaler()
X = scaler.fit_transform(X)
figsize(12, 6)
plt.scatter(range(50), np.mean(X, axis=0));
# nice normal looking predictors
figsize(18, 8)
ax = plt.subplot(441)
plt.hist(X[:, 0]);
ax = plt.subplot(442)
plt.hist(X[:, 1]);
ax = plt.subplot(443)
plt.hist(X[:, 2]);
ax = plt.subplot(444)
plt.hist(X[:, 3]);
# multicollinearity
correlations = np.corrcoef(X, rowvar=0)
corrpairs = {}
for i in range(50):
for j in range(i+1, 50, 1):
if correlations[i, j] > 0.25:
print(i, j, correlations[i,j])
corrpairs[(i,j)] = correlations[i,j]
# plot is slow - 1 min or more
figsize(12, 18)
plt.subplot(311)
plt.scatter(X[:, 16], X[:, 37])
plt.subplot(312)
plt.scatter(X[:, 16], X[:, 18])
plt.subplot(313)
plt.scatter(X[:, 26], X[:, 43]); | notebooks/Regularization and Model Tuning.ipynb | jwjohnson314/data-803 | mit |
Let's perform a train-test split for cross-validation. | Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0) | notebooks/Regularization and Model Tuning.ipynb | jwjohnson314/data-803 | mit |
Next let's build a model using the default parameters and look at several different measures of performance. | default_model = LogisticRegression(random_state=0).fit(Xtr,ytr) # instantiate and fit
pred = default_model.predict(Xte) # make predictions
print('Accuracy: %s\n' % default_model.score(Xte, yte))
print(classification_report(yte, pred))
print('Confusion Matrix:\n\n %s\n' % confusion_matrix(yte, pred)); | notebooks/Regularization and Model Tuning.ipynb | jwjohnson314/data-803 | mit |
In the sklearn implementation, this default model <i>is</i> a regularized model, using $\mathcal{l}2$ regularization with $C = 1$. That is, the cost function to be minimized is $$-\frac{1}{n}\sum_{i=1}^n[y_i\log(p_i) + (1-y_i)\log(1 - p_i)]+\frac{1}{C}\cdot\sum_{j=1}^m w_j^2.$$ Here, $y_i$ is the $i^{th}$ response (target), $p_i$ is the predicted probability of that target, and $w_j$ are the coefficients of the linear model. In a traditional statistical implementation, the second sum wouldn't be there, as it biases the model. This is the regularization.
There is no reason to believe that $C = 1$ is the ideal choice; it may be better to increase or decrease $C$. One way to search for better values is to do a grid search over a set of possible values for $C$, assessing the best choice using cross-validation. | cs = [10**(i+1) for i in range(2)] + [10**(-i) for i in range(5)] # create a list of C's
print(cs)
lm = LogisticRegression(random_state=0)
grid = GridSearchCV(estimator=lm,
param_grid=dict(C=cs),
scoring='accuracy',
verbose=1,
cv=5,
n_jobs=-1, # parallelize over all cores
refit=True) # instatiate the grid search (note model input)
grid.fit(Xtr, ytr) # fit
print("Best score: %s" % grid.best_score_)
print("Best choice of C: %s" % grid.best_estimator_.C)
# change the metric
grid_prec = GridSearchCV(estimator=lm,
param_grid=dict(C=cs),
scoring='precision',
verbose=1,
cv=5,
n_jobs=-1, # parallelize over all cores
refit=True) # instatiate the grid search (note model input)
grid_prec.fit(Xtr, ytr) # fit
print("Best score: %s" % grid_prec.best_score_)
print("Best choice of C: %s" % grid_prec.best_estimator_.C)
# change the metric
grid_auc = GridSearchCV(estimator=lm,
param_grid=dict(C=cs),
scoring='roc_auc',
verbose=1,
cv=5,
n_jobs=-1, # parallelize over all cores
refit=True) # instatiate the grid search (note model input)
grid_auc.fit(Xtr, ytr) # fit
print("Best score: %s" % grid_auc.best_score_)
print("Best choice of C: %s" % grid_auc.best_estimator_.C)
grid_preds = grid.predict(Xte)
print('Accuracy: %s\n' % accuracy_score(grid.predict(Xte), yte))
print(classification_report(yte, grid_preds))
print('Confusion Matrix:\n\n %s\n' % confusion_matrix(yte, grid_preds));
grid.best_estimator_.coef_
figsize(12, 6)
plt.scatter(range(grid.best_estimator_.coef_.shape[1]),
grid.best_estimator_.coef_)
plt.ylabel('value of coefficient')
plt.xlabel('predictor variable (index)'); | notebooks/Regularization and Model Tuning.ipynb | jwjohnson314/data-803 | mit |
Another way to do this is with the 'LogisticRegressionCV' function. This is a logistic regression function built with tuning $C$ via cross-validation in mind. This time, we'll set the penalty to $\mathcal{l}1$, we'll let python pick 10 possible $C$'s, we'll use all cores on my machine ('n_jobs=-1'), and we'll use the liblinear solver (which is the only one of the three possible choices that can optimize with the l1 penalty). The $\mathcal{l}1$-penalized cost is $$-\frac{1}{n}\sum_{i=1}^n[y_i\log(p_i) + (1-y_i)\log(1 - p_i)]+\frac{1}{C}\cdot\sum_{j=1}^m |w_j|.$$ This will take a minute or two to run. | cvmodel = LogisticRegressionCV(penalty='l1',
Cs=10,
n_jobs=-1,
verbose=1,
scoring='accuracy',
solver='liblinear') # liblinear only for l1 penalty
# takes about a minute
cv_fit = cvmodel.fit(Xtr,ytr)
cvmodel.C_
cvmodel.coef_ # now all very small, most effectively 0
plt.scatter(range(cvmodel.coef_.shape[1]), cvmodel.coef_[0])
plt.ylabel('value of coefficient')
plt.xlabel('predictor variable (index)')
plt.title('coefficients with l1 regularization');
cv_preds = cvmodel.predict(Xte)
print(accuracy_score(cv_preds, yte))
tuned_cv_scores = cross_val_score(cv_fit, X, y, scoring='accuracy',n_jobs=-1, verbose=2)
print(tuned_cv_scores)
print(np.mean(tuned_cv_scores))
default_cv_scores = cross_val_score(default_model.fit(Xtr, ytr), X, y, scoring='accuracy',n_jobs=-1, verbose=2)
print(default_cv_scores)
print(np.mean(default_cv_scores))
fig, ax = plt.subplots(1,2, sharey=True, figsize=(16, 6))
ax[0].scatter(range(grid.best_estimator_.coef_.shape[1]),
grid.best_estimator_.coef_)
ax[0].set_ylabel('value of coefficient')
ax[0].set_xlabel('predictor variable (index)')
ax[0].set_title('Coefficients with l2 Penalty')
ax[1].scatter(range(cvmodel.coef_.shape[1]), cvmodel.coef_[0])
ax[1].set_ylabel('value of coefficient')
ax[1].set_xlabel('predictor variable (index)')
ax[1].set_title('Coefficients with l1 Penalty');
trivial = np.isclose(cvmodel.coef_, np.zeros(shape=cvmodel.coef_.shape)).flatten()
nontrivial = []
for i in range(len(trivial)):
if not trivial[i]:
nontrivial.append(i)
nontrivial
final = LogisticRegression(C=cvmodel.C_[0], penalty='l1', solver='liblinear').fit(X[:, nontrivial], y)
# thanks StackOverflow!
# see http://stackoverflow.com/questions/36373266/change-in-running-behavior-of-sklearn-code-between-laptop-and-desktop/37259431
import warnings
warnings.filterwarnings("ignore")
final_cv_scores = cross_val_score(final, X[:, nontrivial], y, scoring='accuracy', n_jobs=-1)
print(final_cv_scores)
print(np.mean(final_cv_scores))
alt = cross_val_score(LogisticRegressionCV(penalty='l1', solver='liblinear', verbose=2, n_jobs=-1), X[:, nontrivial], y, scoring='accuracy')
print(alt)
print(np.mean(alt)) | notebooks/Regularization and Model Tuning.ipynb | jwjohnson314/data-803 | mit |
Series
A Series is an array-like object that holds an array of data and an associated array of labels called the index.
Its full documentation can be found at: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html#pandas.Series
Instantiating a Series | """ From values only """
obj = pd.Series([4, 7, -5, 3])
obj
obj.values
obj.index
""" A partir dos valores e dos índices """
obj2 = pd.Series([4, 7, -5, 3], index=['d','b','a','c'])
obj2
obj2.index
""" A partir de um dictionary """
sdata = {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000}
obj3 = pd.Series(sdata)
obj3
""" A partir de um dictionary e dos índices """
states = ['California', 'Ohio', 'Oregon', 'Texas']
obj4 = pd.Series(sdata, index=states)
obj4 | 2019/02-python-bibliotecas-manipulacao-dados/pandas_basico.ipynb | InsightLab/data-science-cookbook | mit |
Accessing elements of a Series | obj2['a']
obj2['d'] = 6
obj2['d']
obj2[['c','a','d']]
obj2[obj2 > 0] | 2019/02-python-bibliotecas-manipulacao-dados/pandas_basico.ipynb | InsightLab/data-science-cookbook | mit |
Some operations allowed on a Series | """ Multiplication by a scalar """
obj2 * 2
""" Operações de vetor do numpy """
import numpy as np
np.exp(obj2)
""" Funções que funcionam com dictionaries """
'b' in obj2
'e' in obj2
""" Funções para identificar dados faltando """
obj4.isnull()
obj4.notnull()
""" Operações aritméticas com alinhamento automático dos índices """
obj3 + obj4 | 2019/02-python-bibliotecas-manipulacao-dados/pandas_basico.ipynb | InsightLab/data-science-cookbook | mit |
DataFrame
A DataFrame represents a tabular data structure, similar to an Excel spreadsheet, containing an ordered collection of columns, each of which can hold a different value type. A DataFrame has both a row index and a column index and can be thought of as a dict of Series.
Its full documentation can be found at: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html
Instantiating a DataFrame | """ From a dictionary of arrays """
data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'], \
'year': [2000, 2001, 2002, 2001, 2002], \
'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}
frame = pd.DataFrame(data)
frame
""" A partir de um dictionary em uma ordem específica das colunas """
pd.DataFrame(data, columns=['year', 'state', 'pop'])
""" A partir de um dictionary e dos índices das colunas e/ou dos índices das linhas """
frame2 = pd.DataFrame(data, columns=['year', 'state', 'pop', 'debt'], index=['one', 'two', 'three', 'four', 'five'])
frame2
""" A partir de um dictionary de dictionaries aninhados """
pop = {'Nevada': {2001: 2.4, 2002: 2.9}, 'Ohio': {2000: 1.5, 2001: 1.7, 2002: 3.6}}
frame3 = pd.DataFrame(pop)
frame3 | 2019/02-python-bibliotecas-manipulacao-dados/pandas_basico.ipynb | InsightLab/data-science-cookbook | mit |
Note that these are not all of the possible ways to do it. For a more complete picture, see the following table of the possible inputs to the DataFrame constructor:
Type |Notes
-----|-----
2D ndarray | A matrix of data, passing optional row and column labels
dict of arrays, lists, or tuples | Each sequence becomes a column in the DataFrame. All sequences must be the same length.
NumPy structured/record array | Treated as the “dict of arrays” case
dict of Series | Each value becomes a column. Indexes from each Series are unioned together to form the result’s row index if no explicit index is passed.
dict of dicts | Each inner dict becomes a column. Keys are unioned to form the row index as in the “dict of Series” case.
list of dicts or Series | Each item becomes a row in the DataFrame. Union of dict keys or Series indexes become the DataFrame’s column labels
List of lists or tuples | Treated as the “2D ndarray” case
Another DataFrame | The DataFrame’s indexes are used unless different ones are passed
NumPy MaskedArray | Like the “2D ndarray” case except masked values become NA/missing in the DataFrame result
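To complement the table, here are two of the constructor inputs that aren't demonstrated in the code above (this snippet is an illustrative addition, not from the original notebook):

```python
import numpy as np
import pandas as pd

# from a 2D ndarray, passing optional row and column labels
pd.DataFrame(np.arange(6).reshape(2, 3),
             index=['r1', 'r2'], columns=['a', 'b', 'c'])

# from a list of dicts: keys are unioned to form the column labels,
# missing entries become NaN
pd.DataFrame([{'a': 1, 'b': 2}, {'a': 3, 'c': 4}])
```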
Manipulating rows and columns of a DataFrame | """ Accessing columns as in a Series or dictionary """
frame2['state']
""" Como colunas como um atributo """
frame2.year
""" Acessando linhas com o nome da linha """
frame2.ix['three']
""" Acessando linhas com o índice da linha """
frame2.ix[3]
""" Modificando uma coluna com um valor """
frame2['debt'] = 16.5
frame2
""" Modificando uma coluna com um vetor """
frame2['debt'] = np.arange(5.)
frame2
""" Modificando uma coluna com uma Series """
val = pd.Series([-1.2, -1.5, -1.7], index=['two', 'four', 'five'])
frame2['debt'] = val
frame2
""" Adicionando uma coluna que não existe """
frame2['eastern'] = frame2.state == 'Ohio'
frame2
""" Deletando uma coluna """
del frame2['eastern']
frame2.columns | 2019/02-python-bibliotecas-manipulacao-dados/pandas_basico.ipynb | InsightLab/data-science-cookbook | mit |
Reading data without a header
1,5,2,3,cat
2,7,8,5,dog
3,3,6,7,horse
2,2,8,3,duck
4,4,2,1,mouse | pd.read_csv('myCSV_02.csv',header=None) | Data_Analytics_in_Action/pandasIO.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
You can specify the header yourself | pd.read_csv('myCSV_02.csv',names=['white','red','blue','green','animal'])
To create a DataFrame with a hierarchical index structure, you can add the index_col option. The data file format:
colors,status,item1,item2,item3
black,up,3,4,6
black,down,2,6,7
white,up,5,5,5
white,down,3,3,2
red,up,2,2,2
red,down,1,1,4 | pd.read_csv('myCSV_03.csv',index_col=['colors','status']) | Data_Analytics_in_Action/pandasIO.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
Parsing TXT files with regexps
Specify sep with a regular expression in order to parse the data file.
Regex element | Meaning
--- | ---
. | Any character except newline
\d | Digit
\D | Non-digit
\s | Whitespace character
\S | Non-whitespace character
\n | Newline
\t | Tab
\uxxxx | Unicode character specified by the hexadecimal value xxxx
The data file is separated by a random mix of tabs and spaces
white red blue green
1 4 3 2
2 4 6 7 | pd.read_csv('myCSV_04.csv',sep='\s+') | Data_Analytics_in_Action/pandasIO.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
Reading data separated by letters
000end123aaa122
001end125aaa144 | pd.read_csv('myCSV_05.csv',sep='\D*',header=None,engine='python') | Data_Analytics_in_Action/pandasIO.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
Reading a text file while skipping unnecessary lines
```
log file
this file has been generate by automatic system
white,red,blue,green,animal
12-feb-2015:counting of animals inside the house
1,3,5,2,cat
2,4,8,5,dog
13-feb-2015:counting of animals inside the house
3,3,6,7,horse
2,2,8,3,duck
``` | pd.read_table('myCSV_06.csv',sep=',',skiprows=[0,1,3,6]) | Data_Analytics_in_Action/pandasIO.ipynb | gaufung/Data_Analytics_Learning_Note | mit |
Reading only part of a TXT file
If you only want to read part of a file, you can explicitly specify which rows to parse using the nrows and skiprows options: skiprows skips the specified rows from the start, and nrows limits how many rows are read (nrows=i). | pd.read_csv('myCSV_02.csv',skiprows=[2],nrows=3,header=None)